Why use Numpy?

%%capture captures the output of the cell's execution in the variable given as a parameter; it can then be printed. %timeit runs a statement several times and computes the average of its duration.
```python
%%capture timeit_output
%timeit l1 = range(1, 1000)
%timeit l2 = np.arange(1, 1000)

print(timeit_output)

x = np.array([[1, 2], [3, 4]])
print(np.sum(x))          # Compute sum of all elements; prints "10"
print(np.sum(x, axis=0))  # Compute sum of each column; prints "[4 6]"
print(np.sum(x, axis=1))  # Compute sum of each row; prints "[3 7]"

x * 2
x ** 2
```
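The last two expressions operate elementwise; a quick check of what they produce:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
print(x * 2)   # elementwise doubling: [[2 4] [6 8]]
print(x ** 2)  # elementwise square:   [[1  4] [9 16]]
```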
intro/sesion0.ipynb
dsevilla/bdge
mit
numpy has countless functions, so it is worth browsing its documentation: https://docs.scipy.org/doc/.

Matplotlib

Matplotlib makes it easy to generate plots. We will first see it connected only with Numpy, and later connected with Pandas.
```python
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
plt.subplot()

# Plot the points using matplotlib
plt.plot(x, y)
plt.show()

plt.subplot(211)
plt.plot(range(12))
plt.subplot(212, facecolor='y')
plt.plot(range(100))
plt.show()

# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)

# Plot the points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
plt.show()
```
Pandas

Tutorials: 1, 2, 3.

Pandas manages n-dimensional datasets in different ways and also connects with matplotlib to produce plots. Its main concepts are the DataFrame and the Series. The difference between them is that a Series holds just one series of values (a column or a row, depending on how you interpret it), while a DataFrame holds multidimensional structures by aggregating Series. Both have one (or several) "phantom columns" that serve as an index, accessible through d.index (whether d is a Series or a DataFrame). If no index is specified, a virtual one is added, numbering the rows from zero. Indices can also be hierarchical (for example, an index by month and, within it, one by day of the week).
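The hierarchical indices mentioned above (month, then day of the week) can be sketched like this; the values are made up for illustration:

```python
import pandas as pd

# A Series indexed hierarchically by (month, weekday)
idx = pd.MultiIndex.from_product([['Jan', 'Feb'], ['Mon', 'Tue']],
                                 names=['month', 'weekday'])
s = pd.Series([10, 20, 30, 40], index=idx)

print(s.loc['Jan'])           # the sub-series for January
print(s.loc[('Feb', 'Tue')])  # a single value: 40
```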
```python
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts
ts.describe()
ts = ts.cumsum()
ts.plot();

df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list('ABCD'))
df = df.cumsum()
df.plot();
```
You can also plot one column against another.
```python
df3 = pd.DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum()
df3['A'] = pd.Series(list(range(len(df3))))
df3.plot(x='A', y='B');
```
Missing values. If they are not set, they default to NaN (not a number).
```python
d = {'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
     'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
df
```
fillna() lets you replace missing values with a given value.
```python
df.fillna(0)
pd.DataFrame(d, index=['d', 'b', 'a'])
pd.DataFrame(d, index=['d', 'b', 'a'], columns=['two', 'three'])
```
Below is an example of using Pandas to read data and process it in a DataFrame. This first complete example loads, from the file swift-question-dates.txt.gz, the dates of the Stackoverflow questions that have the tag "swift". The read_csv function can read any CSV file and converts it into a DataFrame, a table structure that also stores the names and types of the columns, along with an index that identifies the rows. The list contains the date on which each question with the tag "swift" was asked. Since the data themselves are the dates, we make the date column also serve as the index.
```python
df = pd.read_csv('https://github.com/dsevilla/bdge/raw/master/intro/swift-question-dates.txt.gz',
                 header=None,
                 names=['date'],
                 compression='gzip',
                 parse_dates=['date'],
                 index_col='date')
df
```
From the timestamp, keep only the date (not the time, which we are not interested in).
```python
df.index = df.index.date
```
We add a column of all "1" values to specify that each question counts as 1.
```python
df['Count'] = 1
df
```
Pandas DataFrames also support aggregation operations such as groupby or sum. Finally, the plot() function displays the data in a chart.
```python
accum = df.groupby(df.index).sum()
accum

# The first 30 records with more than 20 questions per day.
accum = accum[accum.Count > 20][:30]
accum

accum[accum.Count > 30][:30].plot.bar()
```
Next we check against the Wikipedia page when the Swift language appeared:
```python
!pip install lxml

dfwiki = pd.read_html('https://en.wikipedia.org/wiki/Swift_(programming_language)',
                      attrs={'class': 'infobox vevent'})
dfwiki[0]

firstdate = dfwiki[0][1][4]
firstdate

from dateutil.parser import parse
dt = parse(firstdate.split(';')[0])
print(dt.date().isoformat())
print(accum.index[0].isoformat())
assert dt.date().isoformat() == accum.index[0].isoformat()
```
Next we show how to place locations on a map with the folium package, and how to access different positions of the DataFrame with iloc, loc, etc.
```python
# Load the municipalities and show them on the map
df = pd.read_csv('https://github.com/dsevilla/bdge/raw/master/intro/municipios-2017.csv.gz',
                 header=0, compression='gzip')
df.head()
df.iloc[0]
df.iloc[0].NOMBRE_ACTUAL
df.loc[:, 'NOMBRE_ACTUAL']
df.iloc[:, 0]
df.PROVINCIA
df[df.PROVINCIA == 'A Coruña']

mula = df[df.NOMBRE_ACTUAL == 'Mula'].iloc[0]
mula
(mula_lat, mula_lon) = (mula.LATITUD_ETRS89, mula.LONGITUD_ETRS89)
(mula_lat, mula_lon)
```
The folium package generates maps of locations. The following example centers a map on Mula and adds a marker with its name:
```python
!pip install folium
import folium

map = folium.Map(location=[mula_lat, mula_lon], zoom_start=10)
folium.Marker(location=[mula_lat, mula_lon],
              popup="{} ({} habitantes)".format(mula.NOMBRE_ACTUAL,
                                                mula.POBLACION_MUNI)).add_to(map)
map
```
Test your code to make sure that the class definition worked.
# code to make sure constructors and get methods all work
lesson15/Lesson15_individual.ipynb
nerdcommander/scientific_computing_2017
mit
To run this example, we need to first get the data from [1] and process it so we have dichotomous $y\in\{-1,1\}$ outputs and the matrix of predictors has been standardised. In addition, we also add a column of 1s corresponding to a constant term in the regression. If you are connected to the internet, by instantiating with x=None, Pints will fetch the data from the repo for you. If, instead, you have local copies of the x and y matrices, these can be supplied as arguments.
logpdf = pints.toy.GermanCreditLogPDF(download=True)
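Pints does this preprocessing for you; a minimal sketch of what it involves (not Pints' actual code, names are illustrative) might look like this:

```python
import numpy as np

# Recode {0,1} outputs to {-1,1}, standardise each predictor column,
# and prepend a column of 1s for the constant term.
def preprocess(X, y01):
    y = 2 * np.asarray(y01) - 1                # {0,1} -> {-1,1}
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)  # column-wise standardisation
    return np.hstack([np.ones((Xs.shape[0], 1)), Xs]), y

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 9.0]])
Xs, y = preprocess(X, [0, 1, 1])
print(Xs[:, 0])  # intercept column of ones
print(y)         # [-1  1  1]
```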
examples/toy/distribution-german-credit.ipynb
martinjrobins/hobo
bsd-3-clause
Let's look at the data: x is a matrix of predictors and y is a vector of credit recommendations for 1000 individuals. Specifically, let's look at the PCA scores and plot the first two dimensions against one another. Here, we see that the two groups overlap substantially, but that there is nevertheless some separation along the first PCA component.
```python
def pca(X):
    # Data matrix X, assumed 0-centered
    n, m = X.shape
    # Compute covariance matrix
    C = np.dot(X.T, X) / (n - 1)
    # Eigen decomposition
    eigen_vals, eigen_vecs = np.linalg.eig(C)
    # Project X onto PC space
    X_pca = np.dot(X, eigen_vecs)
    return X_pca

x, y = logpdf.data()
scores = pca(x)

# Colour individual points by whether or not credit was recommended
plt.scatter(scores[:, 0], scores[:, 1], c=y)
plt.xlabel('PCA 1')
plt.ylabel('PCA 2')
plt.show()
```
Now we run HMC to fit the parameters of the model.
```python
xs = [
    np.random.uniform(0, 1, size=(logpdf.n_parameters())),
    np.random.uniform(0, 1, size=(logpdf.n_parameters())),
    np.random.uniform(0, 1, size=(logpdf.n_parameters())),
]

mcmc = pints.MCMCController(logpdf, len(xs), xs, method=pints.HamiltonianMCMC)
mcmc.set_max_iterations(200)

# Set up modest logging
mcmc.set_log_to_screen(True)
mcmc.set_log_interval(10)

for sampler in mcmc.samplers():
    sampler.set_leapfrog_step_size(0.2)
    sampler.set_leapfrog_steps(10)

# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
```
HMC is quite efficient here at sampling from the posterior distribution.
```python
results = pints.MCMCSummary(chains=chains, time=mcmc.time())
print(results)
```
*2. Set all graphics from matplotlib to display inline
```python
import matplotlib.pyplot as plt
%matplotlib inline
```
07/Animal_Panda_Homework_7_Skinner.ipynb
barjacks/foundations-homework
mit
*3. Import pandas with the right name
```python
import pandas as pd

# For encodings, the command would look something like this:
# df = pd.read_csv("XXXXXXXXXXXXXXXXX.csv", encoding='mac_roman')
df = pd.read_csv("Animal_Data/07-hw-animals.csv")
```
*4. Display the names of the columns in the csv
df.columns
*5. Display the first 3 animals.
df.head(3)
*6. Sort the animals to see the 3 longest animals.
df.sort_values(by='length', ascending=False).head(3)
*7. What are the counts of the different values of the "animal" column? a.k.a. how many cats and how many dogs.
df['animal'].value_counts()
*8. Only select the dogs.
```python
# df['animal'] == 'dog' just tests whether each row is a dog or not, True or False
# is_dog = df['animal'] == 'dog'
# df[is_dog]
df[df['animal'] == 'dog']
```
*9. Display all of the animals that are greater than 40 cm.
```python
df[df['length'] > 40]
# del df['feet']
```
*10. 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.
```python
df['inches'] = df['length'] * 0.394
df.head()
```
*11. Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
```python
dogs = df[df['animal'] == 'dog']
cats = df[df['animal'] == 'cat']
```
*12. Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
```python
cats[cats['inches'] > 12]

# Using the full dataframe, combine both conditions with & -- note the
# parentheses, since & binds tighter than the comparisons:
df[(df['animal'] == 'cat') & (df['inches'] > 12)]
```
*13. What's the mean length of a cat?
df[df['animal'] == 'cat'].describe()
*14. What's the mean length of a dog?
df[df['animal'] == 'dog'].describe()
*15. Use groupby to accomplish both of the above tasks at once.
df.groupby(['animal'])['inches'].describe()
*16. Make a histogram of the length of dogs. I apologize that it is so boring.
df[df['animal'] == 'dog'].hist()
*17. Change your graphing style to be something else (anything else!)
```python
import matplotlib.pyplot as plt

plt.style.available
plt.style.use('ggplot')
dogs['inches'].hist()
```
*18. Make a horizontal bar graph of the length of the animals, with their name as the label (look at the billionaires notebook I put on Slack!)
```python
# df['length'].plot(kind='bar') would give a vertical bar chart without names;
# a horizontal bar chart labelled by name:
df.plot(kind='barh', x='name', y='length', legend=False)
```
*19. Make a sorted horizontal bar graph of the cats, with the larger cats on top.
```python
cats_sorted = cats.sort_values(by='length', ascending=True)
cats_sorted.plot(kind='barh', x='name', y='length', legend=False)

# or:
# df[df['animal'] == 'cat'].sort_values(by='length', ascending=True).plot(kind='barh', x='name', y='length')
```
Advanced Sounding Plot a sounding using MetPy with more advanced features. Beyond just plotting data, this uses calculations from metpy.calc to find the lifted condensation level (LCL) and the profile of a surface-based parcel. The area between the ambient profile and the parcel profile is colored as well.
```python
from datetime import datetime

import matplotlib.pyplot as plt

import metpy.calc as mpcalc
from metpy.io import get_upper_air_data
from metpy.io.upperair import UseSampleData
from metpy.plots import SkewT
from metpy.units import concatenate

with UseSampleData():  # Only needed to use our local sample data
    # Download and parse the data
    dataset = get_upper_air_data(datetime(1999, 5, 4, 0), 'OUN')

p = dataset.variables['pressure'][:]
T = dataset.variables['temperature'][:]
Td = dataset.variables['dewpoint'][:]
u = dataset.variables['u_wind'][:]
v = dataset.variables['v_wind'][:]
```
v0.4/_downloads/Advanced_Sounding.ipynb
metpy/MetPy
bsd-3-clause
Create a new figure. The dimensions here give a good aspect ratio
```python
fig = plt.figure(figsize=(9, 9))
skew = SkewT(fig, rotation=45)

# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
skew.ax.set_xlim(-40, 60)

# Calculate LCL height and plot as black dot
l = mpcalc.lcl(p[0], T[0], Td[0])
lcl_temp = mpcalc.dry_lapse(concatenate((p[0], l)), T[0])[-1].to('degC')
skew.plot(l, lcl_temp, 'ko', markerfacecolor='black')

# Calculate full parcel profile and add to plot as black line
prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')
skew.plot(p, prof, 'k', linewidth=2)

# Example of coloring area between profiles
greater = T >= prof
skew.ax.fill_betweenx(p, T, prof, where=greater, facecolor='blue', alpha=0.4)
skew.ax.fill_betweenx(p, T, prof, where=~greater, facecolor='red', alpha=0.4)

# An example of a slanted line at constant T -- in this case the 0 isotherm
l = skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)

# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()

# Show the plot
plt.show()
```
To print a value to the screen, we use the function print() e.g. print(1)
print("Hello World")
Lesson01_Variables/lesson01.ipynb
WomensCodingCircle/CodingCirclePython
mit
Jupyter notebooks will always print the value of the last line so you don't have to. You can suppress this with a semicolon ';'
```python
"Hello World"
"Hello World";
```
TRY IT: Predict and then print the type of 'Orca'.

Variables

A variable is a name that you give a value. You can then use this name anywhere you would use the value that the name refers to. It has some rules:
* It must only contain letters, numbers and/or the underscore character.
* However, it cannot start with a number.
* It can start with an underscore, but this usually means something special, so stick to letters for now.

To assign a value to a variable, you use the assignment operator, which is '=', e.g. my_name = 'Charlotte'
```python
WHALE = 'Orca'
number_of_whales = 10
weight_of_1_whale = 5003.2
```
Notice that when you ran that, nothing printed out. To print a variable, you use the same statement you would use to print the value. e.g. print(WHALE)
print(number_of_whales)
TRY IT: Assign the name of a sea creature to the variable sea_creature. Then print the value.

Recommendation: Name your variables with descriptive names. Naming a variable 'a' is easy to type but won't help you figure out what it is doing when you come back to your code six months later.

Operators and operands

Operators are special symbols that represent computations that the computer performs. We have already learned one operator: the assignment operator '='. Operands are the values the operator is applied to.

Basic math operators:
* + addition
* - subtraction
* * multiplication
* / division
* ** power (exponentiation)

To use these operators, put a value or variable on either side of them. You can even assign the new value to a variable or print it out. They work with both integers and floats.
```python
1 + 2

fish = 15
fish_left = fish - 3
print(fish_left)

print(3 * 2.1)
number_of_whales ** 2
print(5 / 2)
```
Hint: You can use a variable and assign it to the same variable name in the same statement.
```python
number_of_whales = 8
number_of_whales = number_of_whales + 2
print(number_of_whales)
```
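Python also offers augmented assignment as a shorthand for this read-modify-write pattern:

```python
number_of_whales = 8
number_of_whales += 2  # same as number_of_whales = number_of_whales + 2
print(number_of_whales)
```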
TRY IT: Find the result of 6^18.

Order of operations

You can combine many operators in a single Python statement. Python evaluates it the same way you were taught to in elementary school: PEMDAS, for Please Excuse My Dear Aunt Sally. Or: 1. Parentheses, 2. Exponents, 3. Multiplication, 4. Division, 5. Addition, 6. Subtraction. Left to right, with that precedence. It is good practice to always include parentheses to make your intention clear, even if order of operations is on your side.
```python
2 * 3 + 4 / 2
(2 * (3 + 4)) / 2
```
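Evaluating both expressions step by step shows the precedence at work:

```python
# Multiplication and division bind tighter than addition:
print(2 * 3 + 4 / 2)      # 6 + 2.0 -> 8.0
# Parentheses change the grouping:
print((2 * (3 + 4)) / 2)  # 14 / 2 -> 7.0
```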
Modulus operator The modulus operator is not one you were taught in school. It returns the remainder of integer division. It is useful in a few specific cases, but you could go months without using it.
5 % 2
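The remainder is handy for divisibility tests: a number n is divisible by d exactly when n % d == 0.

```python
print(10 % 5 == 0)  # True: 10 is divisible by 5
print(10 % 3 == 0)  # False: the remainder is 1
```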
TRY IT: Find if 12342872 is divisible by 3.

String operations

The + operator also works on strings. It is the concatenation operator, meaning it joins the two strings together.
```python
print('Hello ' + 'Coding Circle')
print("The " + WHALE + " lives in the sea.")
```
Hint: Be careful with spaces
print("My name is" + "Charlotte")
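The line above prints the words run together ("My name isCharlotte"); adding the space explicitly inside one of the strings fixes it:

```python
print("My name is " + "Charlotte")  # note the trailing space in the first string
```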
TRY IT: Print out Good morning to the sea creature you stored in the variable named sea_creature earlier.

Asking the user for input

To get input from the user we use the built-in function input() and assign it to a variable. NOTE: The result is always a string.

WARNING: If you leave an input box without ever putting input in, Jupyter won't be able to run any code, e.g. if you run a cell with input and then re-run that cell before submitting input. To fix this, hit the stop button in the menu.
```python
my_name = input()
print(my_name)
```
You can pass a string to the input() function to prompt the user for what you are looking for. e.g. input('How are you feeling?') Hint, add a new line character "\n" to the end of the prompt to make the user enter it on a new line.
```python
favorite_ocean_animal = input("What is your favorite sea creature?\n")
print("The " + favorite_ocean_animal + " is so cool!")
```
If you want the user to enter a number, you will have to convert the string. Here are the conversion commands. To convert a variable to an integer, use int -- e.g., int(variable_name) To convert a variable to a float, use float -- e.g., float(variable_name) To convert a variable to a string, use str -- e.g., str(variable_name)
```python
number_of_fish = input("How many fish do you want?\n")
number_of_fish_int = int(number_of_fish)
print(number_of_fish_int * 1.05)
```
TRY IT: Prompt the user for their favorite whale and store the value in a variable called favorite_whale.

Comments

Comments let you explain your program to someone who is reading your code. Do you know who that person is? It is almost always you in six months. Don't screw over future you. Comment your code. To make a comment, you use the # symbol. You can put a comment on its own line or at the end of a statement.
```python
# Calculate the price of fish that a user wants
number_of_fish = input("How many fish do you want?\n")  # Ask user for quantity of fish
number_of_fish_int = int(number_of_fish)  # input returns a string, so convert to integer
print(number_of_fish_int * 1.05)  # multiply by price of fish
```
Invoking Modules

Let's instantiate a Dense layer.
- Modules are actually objects in this API, so we provide constructor arguments when initializing the Module. In this case, we only have to provide the output features dimension.
model = nn.Dense(features=3)
docs/notebooks/linen_intro.ipynb
google/flax
apache-2.0
We need to initialize the Module variables; these include the parameters of the Module as well as any other state variables. We call the init method on the instantiated Module. If the Module __call__ method has args (self, *args, **kwargs), then we call init with (rngs, *args, **kwargs), so in this case just (rng, input):
```python
# Make RNG Keys and a fake input.
key1, key2 = random.split(random.PRNGKey(0), 2)
x = random.uniform(key1, (4, 4))

# Provide key and fake input to get initialized variables
init_variables = model.init(key2, x)
init_variables
```
We call the apply method on the instantiated Module. If the Module __call__ method has args (self, *args, **kwargs) then we call apply with (variables, *args, rngs=<RNGS>, mutable=<MUTABLEKINDS>, **kwargs) where - <RNGS> are the optional call time RNGs for things like dropout. For simple Modules this is just a single key, but if your module has multiple kinds of data, it's a dictionary of rng-keys per-kind, e.g. {'params': key0, 'dropout': key1} for a Module with dropout layers. - <MUTABLEKINDS> is an optional list of names of kinds that are expected to be mutated during the call. e.g. ['batch_stats'] for a layer updating batchnorm statistics. So in this case, just (variables, input):
y = model.apply(init_variables, x) y
Additional points:
- If you want to init or apply a Module using a method other than __call__, you need to provide the method= kwarg to init and apply to use it instead of the default __call__, e.g. method='encode', method='decode' to apply the encode/decode methods of an autoencoder.

Defining Basic Modules

Composing submodules

We support declaring modules in setup() that can still benefit from shape inference by using Lazy Initialization, which sets up variables the first time the Module is called.
```python
class ExplicitMLP(nn.Module):
    features: Sequence[int]

    def setup(self):
        # we automatically know what to do with lists, dicts of submodules
        self.layers = [nn.Dense(feat) for feat in self.features]
        # for single submodules, we would just write:
        # self.layer1 = nn.Dense(feat1)

    def __call__(self, inputs):
        x = inputs
        for i, lyr in enumerate(self.layers):
            x = lyr(x)
            if i != len(self.layers) - 1:
                x = nn.relu(x)
        return x

key1, key2 = random.split(random.PRNGKey(0), 2)
x = random.uniform(key1, (4, 4))

model = ExplicitMLP(features=[3, 4, 5])
init_variables = model.init(key2, x)
y = model.apply(init_variables, x)

print('initialized parameter shapes:\n', jax.tree_map(jnp.shape, unfreeze(init_variables)))
print('output:\n', y)
```
Here we show the equivalent compact form of the MLP that declares the submodules inline using the @compact decorator.
```python
class SimpleMLP(nn.Module):
    features: Sequence[int]

    @nn.compact
    def __call__(self, inputs):
        x = inputs
        for i, feat in enumerate(self.features):
            x = nn.Dense(feat, name=f'layers_{i}')(x)
            if i != len(self.features) - 1:
                x = nn.relu(x)
            # providing a name is optional though!
            # the default autonames would be "Dense_0", "Dense_1", ...
            # x = nn.Dense(feat)(x)
        return x

key1, key2 = random.split(random.PRNGKey(0), 2)
x = random.uniform(key1, (4, 4))

model = SimpleMLP(features=[3, 4, 5])
init_variables = model.init(key2, x)
y = model.apply(init_variables, x)

print('initialized parameter shapes:\n', jax.tree_map(jnp.shape, unfreeze(init_variables)))
print('output:\n', y)
```
Declaring and using variables

Flax uses lazy initialization, which allows declared variables to be initialized only at the first site of their use, using whatever shape information is available at the local call site for shape inference. Once a variable has been initialized, a reference to the data is kept for use in subsequent calls.

For declaring parameters that aren't mutated inside the model, but rather by gradient descent, we use the syntax: self.param(parameter_name, parameter_init_fn, *init_args) with arguments:
- parameter_name: just the name, a string
- parameter_init_fn: a function taking an RNG key and a variable number of other arguments, i.e. fn(rng, *args). Typically those in nn.initializers take an rng and a shape argument.
- the remaining arguments to feed to the init function when initializing.

Again, we'll demonstrate declaring things inline as we typically do using the @compact decorator.
```python
class SimpleDense(nn.Module):
    features: int
    kernel_init: Callable = nn.initializers.lecun_normal()
    bias_init: Callable = nn.initializers.zeros

    @nn.compact
    def __call__(self, inputs):
        kernel = self.param('kernel',
                            self.kernel_init,  # RNG passed implicitly.
                            (inputs.shape[-1], self.features))  # shape info.
        y = lax.dot_general(inputs, kernel,
                            (((inputs.ndim - 1,), (0,)), ((), ())),)
        bias = self.param('bias', self.bias_init, (self.features,))
        y = y + bias
        return y

key1, key2 = random.split(random.PRNGKey(0), 2)
x = random.uniform(key1, (4, 4))

model = SimpleDense(features=3)
init_variables = model.init(key2, x)
y = model.apply(init_variables, x)

print('initialized parameters:\n', init_variables)
print('output:\n', y)
```
We can also declare variables in setup, though in doing so you can't take advantage of shape inference and have to provide explicit shape information at initialization. The syntax is a little repetitive in this case right now, but we do enforce agreement of the assigned names.
```python
class ExplicitDense(nn.Module):
    features_in: int  # <-- explicit input shape
    features: int
    kernel_init: Callable = nn.initializers.lecun_normal()
    bias_init: Callable = nn.initializers.zeros

    def setup(self):
        self.kernel = self.param('kernel', self.kernel_init,
                                 (self.features_in, self.features))
        self.bias = self.param('bias', self.bias_init, (self.features,))

    def __call__(self, inputs):
        y = lax.dot_general(inputs, self.kernel,
                            (((inputs.ndim - 1,), (0,)), ((), ())),)
        y = y + self.bias
        return y

key1, key2 = random.split(random.PRNGKey(0), 2)
x = random.uniform(key1, (4, 4))

model = ExplicitDense(features_in=4, features=3)
init_variables = model.init(key2, x)
y = model.apply(init_variables, x)

print('initialized parameters:\n', init_variables)
print('output:\n', y)
```
General Variables

For declaring generally mutable variables that may be mutated inside the model we use the call: self.variable(variable_kind, variable_name, variable_init_fn, *init_args) with arguments:
- variable_kind: the "kind" of state this variable is, i.e. the name of the nested-dict collection that this will be stored in inside the top Module's variables. e.g. batch_stats for the moving statistics for a batch norm layer or cache for autoregressive cache data. Note that parameters also have a kind, but they're set to the default param kind.
- variable_name: just the name, a string
- variable_init_fn: a function taking a variable number of other arguments, i.e. fn(*args). Note that we don't assume the need for an RNG; if you do want an RNG, provide it via a self.make_rng(variable_kind) call in the provided arguments.
- the remaining arguments to feed to the init function when initializing.

⚠️ Unlike parameters, we expect these to be mutated, so self.variable returns not a constant, but a reference to the variable. To get the raw value, you'd write myvariable.value, and to set it, myvariable.value = new_value.
```python
class Counter(nn.Module):
    @nn.compact
    def __call__(self):
        # easy pattern to detect if we're initializing
        is_initialized = self.has_variable('counter', 'count')
        counter = self.variable('counter', 'count',
                                lambda: jnp.zeros((), jnp.int32))
        if is_initialized:
            counter.value += 1
        return counter.value

key1 = random.PRNGKey(0)

model = Counter()
init_variables = model.init(key1)
print('initialized variables:\n', init_variables)

y, mutated_variables = model.apply(init_variables, mutable=['counter'])
print('mutated variables:\n', mutated_variables)
print('output:\n', y)
```
Another Mutability and RNGs Example

Let's make an artificial, goofy example that mixes differentiable parameters, stochastic layers, and mutable variables:
```python
class Block(nn.Module):
    features: int
    training: bool

    @nn.compact
    def __call__(self, inputs):
        x = nn.Dense(self.features)(inputs)
        x = nn.Dropout(rate=0.5)(x, deterministic=not self.training)
        x = nn.BatchNorm(use_running_average=not self.training)(x)
        return x

key1, key2, key3, key4 = random.split(random.PRNGKey(0), 4)
x = random.uniform(key1, (3, 4, 4))

model = Block(features=3, training=True)
init_variables = model.init({'params': key2, 'dropout': key3}, x)
_, init_params = init_variables.pop('params')

# When calling `apply` with mutable kinds, returns a pair of output,
# mutated_variables.
y, mutated_variables = model.apply(
    init_variables, x, rngs={'dropout': key4}, mutable=['batch_stats'])

# Now we reassemble the full variables from the updates (in a real training
# loop, with the updated params from an optimizer).
updated_variables = freeze(dict(params=init_params, **mutated_variables))

print('updated variables:\n', updated_variables)
print('initialized variable shapes:\n', jax.tree_map(jnp.shape, init_variables))
print('output:\n', y)

# Let's run these model variables during "evaluation":
eval_model = Block(features=3, training=False)
y = eval_model.apply(updated_variables, x)  # Nothing mutable; single return value.
print('eval output:\n', y)
```
JAX transformations inside modules

JIT

It's not immediately clear what use this has, but you can compile specific submodules if there's a reason to. Known Gotcha: at the moment, the decorator changes the RNG stream slightly, so comparing jitted and unjitted initializations will look different.
class MLP(nn.Module): features: Sequence[int] @nn.compact def __call__(self, inputs): x = inputs for i, feat in enumerate(self.features): # JIT the Module (it's __call__ fn by default.) x = nn.jit(nn.Dense)(feat, name=f'layers_{i}')(x) if i != len(self.features) - 1: x = nn.relu(x) return x key1, key2 = random.split(random.PRNGKey(3), 2) x = random.uniform(key1, (4,4)) model = MLP(features=[3,4,5]) init_variables = model.init(key2, x) y = model.apply(init_variables, x) print('initialized parameter shapes:\n', jax.tree_map(jnp.shape, unfreeze(init_variables))) print('output:\n', y)
docs/notebooks/linen_intro.ipynb
google/flax
apache-2.0
Remat For memory-expensive computations, we can remat our method to recompute a Module's output during a backwards pass. Known Gotcha: at the moment, the decorator changes the RNG stream slightly, so comparing remat'd and undecorated initializations will look different.
class RematMLP(nn.Module): features: Sequence[int] # For all transforms, we can annotate a method, or wrap an existing # Module class. Here we annotate the method. @nn.remat @nn.compact def __call__(self, inputs): x = inputs for i, feat in enumerate(self.features): x = nn.Dense(feat, name=f'layers_{i}')(x) if i != len(self.features) - 1: x = nn.relu(x) return x key1, key2 = random.split(random.PRNGKey(3), 2) x = random.uniform(key1, (4,4)) model = RematMLP(features=[3,4,5]) init_variables = model.init(key2, x) y = model.apply(init_variables, x) print('initialized parameter shapes:\n', jax.tree_map(jnp.shape, unfreeze(init_variables))) print('output:\n', y)
docs/notebooks/linen_intro.ipynb
google/flax
apache-2.0
Vmap You can now vmap Modules inside. The transform has a lot of arguments; it takes the usual jax vmap args: - in_axes - an integer or None for each input argument - out_axes - an integer or None for each output argument - axis_size - the axis size if you need to give it explicitly In addition, we provide for each kind of variable its axis rules: - variable_in_axes - a dict from kinds to a single integer or None specifying the input axes to map - variable_out_axes - a dict from kinds to a single integer or None specifying the output axes to map - split_rngs - a dict from RNG kinds to a bool, specifying whether to split the rng along the axis. Below we show an example defining a batched, multiheaded attention module from a single-headed unbatched attention implementation.
class RawDotProductAttention(nn.Module): attn_dropout_rate: float = 0.1 train: bool = False @nn.compact def __call__(self, query, key, value, bias=None, dtype=jnp.float32): assert key.ndim == query.ndim assert key.ndim == value.ndim n = query.ndim attn_weights = lax.dot_general( query, key, (((n-1,), (n - 1,)), ((), ()))) if bias is not None: attn_weights += bias norm_dims = tuple(range(attn_weights.ndim // 2, attn_weights.ndim)) attn_weights = jax.nn.softmax(attn_weights, axis=norm_dims) attn_weights = nn.Dropout(self.attn_dropout_rate)(attn_weights, deterministic=not self.train) attn_weights = attn_weights.astype(dtype) contract_dims = ( tuple(range(n - 1, attn_weights.ndim)), tuple(range(0, n - 1))) y = lax.dot_general( attn_weights, value, (contract_dims, ((), ()))) return y class DotProductAttention(nn.Module): qkv_features: Optional[int] = None out_features: Optional[int] = None train: bool = False @nn.compact def __call__(self, inputs_q, inputs_kv, bias=None, dtype=jnp.float32): qkv_features = self.qkv_features or inputs_q.shape[-1] out_features = self.out_features or inputs_q.shape[-1] QKVDense = functools.partial( nn.Dense, features=qkv_features, use_bias=False, dtype=dtype) query = QKVDense(name='query')(inputs_q) key = QKVDense(name='key')(inputs_kv) value = QKVDense(name='value')(inputs_kv) y = RawDotProductAttention(train=self.train)( query, key, value, bias=bias, dtype=dtype) y = nn.Dense(features=out_features, dtype=dtype, name='out')(y) return y class MultiHeadDotProductAttention(nn.Module): qkv_features: Optional[int] = None out_features: Optional[int] = None batch_axes: Sequence[int] = (0,) num_heads: int = 1 broadcast_dropout: bool = False train: bool = False @nn.compact def __call__(self, inputs_q, inputs_kv, bias=None, dtype=jnp.float32): qkv_features = self.qkv_features or inputs_q.shape[-1] out_features = self.out_features or inputs_q.shape[-1] # Make multiheaded attention from single-headed dimension. 
Attn = nn.vmap(DotProductAttention, in_axes=(None, None, None), out_axes=2, axis_size=self.num_heads, variable_axes={'params': 0}, split_rngs={'params': True, 'dropout': not self.broadcast_dropout}) # Vmap across batch dimensions. for axis in reversed(sorted(self.batch_axes)): Attn = nn.vmap(Attn, in_axes=(axis, axis, axis), out_axes=axis, variable_axes={'params': None}, split_rngs={'params': False, 'dropout': False}) # Run the vmap'd class on inputs. y = Attn(qkv_features=qkv_features // self.num_heads, out_features=out_features, train=self.train, name='attention')(inputs_q, inputs_kv, bias) return y.mean(axis=-2) key1, key2, key3, key4 = random.split(random.PRNGKey(0), 4) x = random.uniform(key1, (3, 13, 64)) model = functools.partial( MultiHeadDotProductAttention, broadcast_dropout=False, num_heads=2, batch_axes=(0,)) init_variables = model(train=False).init({'params': key2}, x, x) print('initialized parameter shapes:\n', jax.tree_map(jnp.shape, unfreeze(init_variables))) y = model(train=True).apply(init_variables, x, x, rngs={'dropout': key4}) print('output:\n', y.shape)
docs/notebooks/linen_intro.ipynb
google/flax
apache-2.0
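Stripped of the vmap machinery, single-headed dot-product attention reduces to softmax(Q Kᵀ) V. A NumPy sketch of the core computation that `RawDotProductAttention` performs (no bias, no dropout, and no 1/sqrt(d) scaling, matching the code above):

```python
import numpy as np

def attention(query, key, value):
    # query: (seq_q, features); key, value: (seq_k, features)
    scores = query @ key.T                          # (seq_q, seq_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ value                          # weighted average of values

rng = np.random.default_rng(0)
q = rng.normal(size=(3, 4))
k = rng.normal(size=(5, 4))
v = rng.normal(size=(5, 4))
y = attention(q, k, v)  # shape (3, 4)
```

`MultiHeadDotProductAttention` then uses `nn.vmap` to replicate this computation across heads and batch axes without rewriting it.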
Scan Scan allows us to apply lax.scan to Modules, including their parameters and mutable variables. To use it we have to specify how we want each "kind" of variable to be transformed. For scanned variables we specify, similarly to vmap, via variable_in_axes and variable_out_axes: - nn.broadcast - broadcast the variable kind across the scan steps as a constant - <axis:int> - scan along axis, e.g. for unique parameters at each step OR we specify that the variable kind is to be treated like a "carry" by passing it to the variable_carry argument. Further, for scan'd variable kinds, we specify whether or not to split the rng at each step.
class SimpleScan(nn.Module): @nn.compact def __call__(self, xs): dummy_rng = random.PRNGKey(0) init_carry = nn.LSTMCell.initialize_carry(dummy_rng, xs.shape[:1], xs.shape[-1]) LSTM = nn.scan(nn.LSTMCell, in_axes=1, out_axes=1, variable_broadcast='params', split_rngs={'params': False}) return LSTM(name="lstm_cell")(init_carry, xs) key1, key2 = random.split(random.PRNGKey(0), 2) xs = random.uniform(key1, (1, 5, 2)) model = SimpleScan() init_variables = model.init(key2, xs) print('initialized parameter shapes:\n', jax.tree_map(jnp.shape, unfreeze(init_variables))) y = model.apply(init_variables, xs) print('output:\n', y)
docs/notebooks/linen_intro.ipynb
google/flax
apache-2.0
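The carry semantics that `nn.scan` inherits from `lax.scan` can be sketched in plain Python: a carry threaded through the steps, with the per-step outputs collected (a conceptual model, not the JAX implementation):

```python
def scan(f, init_carry, xs):
    """f maps (carry, x) -> (new_carry, y); returns final carry and all ys."""
    carry, ys = init_carry, []
    for x in xs:
        carry, y = f(carry, x)
        ys.append(y)
    return carry, ys

# A cumulative-sum "cell": the carry is the running total, the output echoes it.
final, outputs = scan(lambda c, x: (c + x, c + x), 0, [1, 2, 3, 4])
print(final, outputs)  # → 10 [1, 3, 6, 10]
```

In the LSTM example above, the carry is the `(c, h)` cell state, the scanned axis is time, and `variable_broadcast='params'` makes the cell reuse the same weights at every step.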
One of the simple things we can do is override the default CSS to customize our DataFrame output. This specific example is from Brandon Rhodes' talk at PyCon. For the purposes of the notebook, I'm defining CSS as a variable, but you could easily read it in from a file as well.
CSS = """ body { margin: 0; font-family: Helvetica; } table.dataframe { border-collapse: collapse; border: none; } table.dataframe tr { border: none; } table.dataframe td, table.dataframe th { margin: 0; border: 1px solid white; padding-left: 0.25em; padding-right: 0.25em; } table.dataframe th:not(:empty) { background-color: #fec; text-align: left; font-weight: normal; } table.dataframe tr:nth-child(2) th:empty { border-left: none; border-right: 1px dashed #888; } table.dataframe td { border: 2px solid #ccf; background-color: #f4f4ff; } """
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
Now add this CSS into the current notebook's HTML.
from IPython.core.display import HTML HTML('<style>{}</style>'.format(CSS)) SALES=pd.read_csv("../data/sample-sales-tax.csv", parse_dates=True) SALES.head()
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
You can see how the CSS is now applied to the DataFrame and how you could easily modify it to customize it to your liking. Jupyter notebooks do a good job of automatically displaying information, but sometimes you want to force data to display. Fortunately, IPython provides an option. This is especially useful if you want to display multiple DataFrames.
from IPython.display import display display(SALES.head(2)) display(SALES.tail(2)) display(SALES.describe())
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
Using pandas settings to control output Pandas has many different options to control how data is displayed. You can use max_rows to control how many rows are displayed.
pd.set_option("display.max_rows",4) SALES
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
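If you only want a display setting to apply temporarily, pandas also provides `pd.option_context`, a context manager that restores the previous value when the block exits:

```python
import pandas as pd

df = pd.DataFrame({"a": range(100)})

with pd.option_context("display.max_rows", 4):
    print(df)  # truncated to 4 rows inside the block
# outside the block, the previous max_rows setting is back in effect
```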
Depending on the data set, you may only want to display a smaller number of columns.
pd.set_option("display.max_columns",6) SALES
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
You can control how many decimal places of precision to display.
pd.set_option('precision',2) SALES pd.set_option('precision',7) SALES
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
You can also format floating point numbers using float_format
pd.set_option('float_format', '{:.2f}'.format) SALES
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
This does apply to all the data. In our example, applying dollar signs to everything would not be correct for this example.
pd.set_option('float_format', '${:.2f}'.format) SALES
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
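Because `float_format` applies globally, it is worth knowing that any of these settings can be undone with `pd.reset_option`:

```python
import pandas as pd

pd.set_option("display.float_format", "${:.2f}".format)
# ... display some DataFrames ...
pd.reset_option("display.float_format")  # back to the default (None)
# pd.reset_option("all") resets every option at once
```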
Third Party Plugins Quantopian has a useful plugin called qgrid - https://github.com/quantopian/qgrid Import it and install it.
import qgrid qgrid.nbinstall(overwrite=True)
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
Showing the data is straightforward.
qgrid.show_grid(SALES, remote_js=True)
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
The plugin is very similar to Excel's autofilter capability. It can be handy for quickly filtering and sorting your data. Improving your plots I have mentioned before that the default pandas plots don't look so great. Fortunately, there are style sheets in matplotlib which go a long way towards improving the visualization of your data. Here is a simple plot with the default values.
SALES.groupby('name')['quantity'].sum().plot(kind="bar")
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
We can use some of the matplotlib styles available to us to make this look better. http://matplotlib.org/users/style_sheets.html
plt.style.use('ggplot') SALES.groupby('name')['quantity'].sum().plot(kind="bar")
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
You can see all the styles available:
plt.style.available plt.style.use('bmh') SALES.groupby('name')['quantity'].sum().plot(kind="bar") plt.style.use('fivethirtyeight') SALES.groupby('name')['quantity'].sum().plot(kind="bar")
notebooks/Ipython-pandas-tips-and-tricks.ipynb
chris1610/pbpython
bsd-3-clause
Basic principles Automatic levels If data is set, the levels are chosen according to the min and max of the data:
dl = DataLevels(a, cmap=cm) dl.visualize(orientation='horizontal', add_values=True)
examples/DataLevels.ipynb
fmaussion/cleo
gpl-3.0
The default number of levels is 8, but it's up to you to choose something else:
dl = DataLevels(a, cmap=cm, nlevels=256) dl.visualize(orientation='horizontal', add_values=True)
examples/DataLevels.ipynb
fmaussion/cleo
gpl-3.0
Automatic levels with min and max value You can also specify the boundaries of the levels to choose:
dl = DataLevels(a, cmap=cm, nlevels=256, vmax=3) dl.visualize(orientation='horizontal', add_values=True)
examples/DataLevels.ipynb
fmaussion/cleo
gpl-3.0
You see that the colorbar has been extended. This behavior is forced by DataLevels and prevents unexpected clipping and the like. If you set new data, it will remember your choice of vmax:
dl.set_data(np.arange(5) / 4) dl.visualize(orientation='horizontal', add_values=True)
examples/DataLevels.ipynb
fmaussion/cleo
gpl-3.0
However vmin has changed, of course. The object remembers what has to be chosen automatically and what was previously set. User levels The same is true when the user specifies the levels:
dl = DataLevels(a, cmap=cm, levels=[0, 1, 2, 3]) dl.visualize(orientation='horizontal', add_values=True)
examples/DataLevels.ipynb
fmaussion/cleo
gpl-3.0
Note that the colors have been chosen to cover the full palette, which is much better than the default behavior (see https://github.com/matplotlib/matplotlib/issues/4850). Since the color choice is automated, when changing the data the colorbar also changes:
dl.set_data(np.arange(5) / 4) dl.visualize(orientation='horizontal', add_values=True)
examples/DataLevels.ipynb
fmaussion/cleo
gpl-3.0
Since the new data remains within the level range, there is no need to extend the colorbar. Maybe you'd like the two plots above to have the same color code. For this you should set the extend keyword:
dl.set_extend('both') dl.visualize(orientation='horizontal', add_values=True)
examples/DataLevels.ipynb
fmaussion/cleo
gpl-3.0
Cleo makes the choice to warn you if you do something wrong that hides information in the data:
dl = DataLevels(a, cmap=cm, levels=[0, 1, 2, 3], extend='neither') dl.visualize(orientation='horizontal', add_values=True)
examples/DataLevels.ipynb
fmaussion/cleo
gpl-3.0
Using DataLevels for your own visualisations The examples above made use of the utility function visualize(), which is just here to have a look at the data. Here's a "real world" example with a scatterplot:
# Make the data x = np.random.randn(1000) y = np.random.randn(1000) z = x**2 + y**2 # The datalevel step dl = DataLevels(z, cmap=cm, levels=np.arange(6)) # The actual plot fig, ax = plt.subplots(1) ax.scatter(x, y, color=dl.to_rgb(), s=64) cbar = dl.append_colorbar(ax, "right", size="5%", pad=0.5) # Note that the DataLevel class has to draw the colorbar
examples/DataLevels.ipynb
fmaussion/cleo
gpl-3.0
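Under the hood, mapping data values to discrete levels like this amounts to binning. A NumPy sketch of the level lookup (a conceptual model using `np.digitize`, not cleo's actual implementation):

```python
import numpy as np

levels = np.arange(6)  # level edges: 0, 1, 2, 3, 4, 5
z = np.array([0.2, 1.7, 3.5, 4.9, 7.0])

# np.digitize returns, for each value, the index of the bin it falls in;
# subtract 1 so values in [0, 1) map to bin 0
bin_idx = np.digitize(z, levels) - 1
# clamp out-of-range values into the last bin (the "extend" behavior)
bin_idx = np.clip(bin_idx, 0, len(levels) - 2)
print(bin_idx)  # → [0 1 3 4 4]
```

Each bin index is then mapped to a color sampled from the colormap, which is why user levels cover the full palette.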
This also works for images:
# Make the data img, xi, yi = np.histogram2d(x, y) # The datalevel step dl = DataLevels(img, cmap=cm, levels=np.arange(6)) # The actual plot fig, ax = plt.subplots(1) toplot = dl.to_rgb() ax.imshow(toplot, interpolation='none') cbar = dl.append_colorbar(ax, "right", size="5%", pad=0.5) # Note that the DataLevel class has to draw the colorbar
examples/DataLevels.ipynb
fmaussion/cleo
gpl-3.0
And with missing data:
cm.set_bad('green') img[1:5, 1:5] = np.NaN # The datalevel step dl = DataLevels(img, cmap=cm, levels=np.arange(6)) # The actual plot fig, ax = plt.subplots(1) toplot = dl.to_rgb() # note that the shape is preserved ax.imshow(toplot, interpolation='none') cbar = dl.append_colorbar(ax, "right", size="5%", pad=0.5) # Note that the DataLevel class has to draw the colorbar
examples/DataLevels.ipynb
fmaussion/cleo
gpl-3.0
1. Export edges from Retweets, Mentions, or Replies Run one of the three blocks of code below, depending on your purpose.
# 1. Export edges from Retweets retweets = tweets[tweets['is_retweet'] == 'Yes'] retweets['original_twitter'] = retweets['text'].str.extract('RT @([a-zA-Z0-9]\w{0,}):', expand=True) edges = retweets[['screen_name', 'original_twitter','created_at']] edges.columns = ['Source', 'Target', 'Strength'] # 2. Export edges from Mentions mentions = tweets[tweets['mentions'].notnull()] edges = pd.DataFrame(columns=('Source','Target','Strength')) for index, row in mentions.iterrows(): mention_list = row['mentions'].split(", ") for mention in mention_list: edges = edges.append(pd.DataFrame([[row['screen_name'], mention, row['created_at']]] , columns=('Source','Target','Strength')), ignore_index=True) # 3. Export edges from Replies replies = tweets[tweets['in_reply_to_screen_name'].notnull()] edges = replies[['screen_name', 'in_reply_to_screen_name','created_at']] edges.columns = ['Source', 'Target', 'Strength']
20170720-building-social-network-graphs-CSV.ipynb
gwu-libraries/notebooks
mit
2. Leave only the tweets whose strength level >= user specified level (directed)
strengthLevel = 3 # Network connection strength level: the number of times in total each of the tweeters responded to or mentioned the other. # If you have 1 as the level, then all tweeters who mentioned or replied to another at least once will be displayed. But if you have 5, only those who have mentioned or responded to a particular tweeter at least 5 times will be displayed, which means that only the strongest bonds are shown. edges2 = edges.groupby(['Source','Target'])['Strength'].count() edges2 = edges2.reset_index() edges2 = edges2[edges2['Strength'] >= strengthLevel]
20170720-building-social-network-graphs-CSV.ipynb
gwu-libraries/notebooks
mit
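The strength computation above is a plain groupby-count. On a toy edge list (hypothetical data with the same column names) it behaves like this:

```python
import pandas as pd

edges = pd.DataFrame({
    "Source":   ["a", "a", "a", "b", "b"],
    "Target":   ["b", "b", "c", "a", "a"],
    "Strength": ["t1", "t2", "t3", "t4", "t5"],  # one row per tweet
})

# Count how many times each (Source, Target) pair occurs.
counts = edges.groupby(["Source", "Target"])["Strength"].count().reset_index()
# Keep only pairs seen at least twice (strengthLevel = 2).
strong = counts[counts["Strength"] >= 2]
```

Here only the a→b and b→a edges survive, each with strength 2; the one-off a→c mention is dropped.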
3. Export nodes
# Export nodes from the edges and add node attributes for both Sources and Targets. users = tweets[['screen_name','followers_count','friends_count']] users = users.sort_values(['screen_name','followers_count'], ascending=[True, False]) users = users.drop_duplicates(['screen_name'], keep='first') ids = edges2['Source'].append(edges2['Target']).to_frame() ids['Label'] = ids ids.columns = ['screen_name', 'Label'] ids = ids.drop_duplicates(['screen_name'], keep='first') nodes = pd.merge(ids, users, on='screen_name', how='left') print(nodes.shape) print(edges2.shape)
20170720-building-social-network-graphs-CSV.ipynb
gwu-libraries/notebooks
mit
4. Export nodes and edges to csv files
# change column names for Kumu import (Run this when using Kumu) edges2.columns = ['From','To','Strength'] # Print nodes to check nodes.head(3) # Print edges to check edges2.head(3) # Export nodes and edges to csv files nodes.to_csv('nodes.csv', encoding='utf-8', index=False) edges2.to_csv('edges.csv', encoding='utf-8', index=False)
20170720-building-social-network-graphs-CSV.ipynb
gwu-libraries/notebooks
mit
The indexer i must be an integer array. However, the array being indexed can be of any type. With f = row[i]: shape(f) equals shape(i), and dtype(f) is dtype(row).
row = np.arange(0.,100,10) print('row:', row) i = np.array([[3,5,0,8],[4,2,7,1]]) f = row[i] print('i:', i) print('f=row[i]\n',f) print(id(i),id(row),id(f))
master/tutorial_ti_2.ipynb
robertoalotufo/ia898
mit
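The claims above, that shape(f) equals shape(i) while dtype(f) follows the indexed array, can be checked directly:

```python
import numpy as np

row = np.arange(0.0, 100, 10)                # float array being indexed
i = np.array([[3, 5, 0, 8], [4, 2, 7, 1]])   # integer index array
f = row[i]
# f takes its shape from the index array and its dtype from the indexed array
print(f.shape, i.shape, f.dtype, row.dtype)  # → (2, 4) (2, 4) float64 float64
```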
Now let's look at the two-dimensional case, appropriate for images and intensity transforms. Let f be an image of dimensions (2,3) with pixel values ranging from 0 to 2:
f = np.array([[0, 1, 2], [2, 0, 1]]) print('f=\n',f)
master/tutorial_ti_2.ipynb
robertoalotufo/ia898
mit
Now consider the intensity transform T, specified by a 3-element vector, where T[0] = 5, T[1] = 6 and T[2] = 7:
T = np.array([5, 6, 7]) print('T:', T) for i in np.arange(T.size): print('%d:%d'% (i,T[i]))
master/tutorial_ti_2.ipynb
robertoalotufo/ia898
mit
The intensity transform is applied by using the image f as an index into the transform T, as written in the mathematical equation:
g = T[f] print('g=T[f]= \n', g) print('g.shape:', g.shape)
master/tutorial_ti_2.ipynb
robertoalotufo/ia898
mit
Note that T[f] has the same dimensions as f; however, its pixels have passed through the mapping of table T. Use with images There are many useful functions that can be implemented with the mapping T: contrast enhancement, histogram equalization, thresholding, gray-level reduction, image negative, among several others. It is common to represent the intensity transformation table as a plot. Below, several transformation functions are computed:
T1 = np.arange(256).astype('uint8') # identity function T2 = ia.normalize(np.log(T1+1.)) # logarithmic - enhances dark regions T3 = 255 - T1 # negative T4 = ia.normalize(T1 > 128) # threshold at 128 T5 = ia.normalize(T1//30) # reduces the number of gray levels plt.plot(T1) plt.plot(T2) plt.plot(T3) plt.plot(T4) plt.plot(T5) plt.legend(['T1', 'T2', 'T3', 'T4','T5'], loc='right') plt.xlabel('input values') plt.ylabel('output values') plt.show()
master/tutorial_ti_2.ipynb
robertoalotufo/ia898
mit
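The same indexing trick applies any of these lookup tables in a single vectorized step. A minimal example with a negative transform like T3, on a tiny synthetic image:

```python
import numpy as np

T3 = (255 - np.arange(256)).astype("uint8")   # negative lookup table
f = np.array([[0, 128, 255]], dtype="uint8")  # tiny test "image"
g = T3[f]                                     # apply the LUT to every pixel at once
print(g)  # → [[255 127   0]]
```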
See these tables applied to the image "cameraman.tif": T1: Identity function
nb = ia.nbshow(2) f = mpimg.imread('../data/cameraman.tif') f1 = T1[f] nb.nbshow(f,'original') plt.plot(T1) plt.title('T1: identity') nb.nbshow(f1,'T1[f]') nb.nbshow()
master/tutorial_ti_2.ipynb
robertoalotufo/ia898
mit
T2: Logarithmic function
f2 = T2[f] nb.nbshow(f,'original') plt.plot(T2) plt.title('T2: logarithmic') nb.nbshow(f2,'T2[f]') nb.nbshow()
master/tutorial_ti_2.ipynb
robertoalotufo/ia898
mit