Get movie titles
movie_titles = pd.read_csv("Movie_Id_Titles")
movie_titles.head()
Machine Learning/Recommender Systems/Recommender Systems with Python.ipynb
luizhsda10/Data-Science-Projectcs
mit
Merged dataframes
df = pd.merge(df, movie_titles, on='item_id')
df.head()
Exploratory Data Analysis
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
%matplotlib inline
Create a ratings dataframe with average rating and number of ratings
df.groupby('title')['rating'].mean().sort_values(ascending=False).head()
df.groupby('title')['rating'].count().sort_values(ascending=False).head()
ratings = pd.DataFrame(df.groupby('title')['rating'].mean())
ratings.head()
Number of ratings column
ratings['num of ratings'] = pd.DataFrame(df.groupby('title')['rating'].count())
ratings.head()
Data Visualization: Histogram
plt.figure(figsize=(10,4))
ratings['num of ratings'].hist(bins=70)
plt.figure(figsize=(10,4))
ratings['rating'].hist(bins=70)
sns.jointplot(x='rating', y='num of ratings', data=ratings, alpha=0.5)
Recommending Similar Movies
moviemat = df.pivot_table(index='user_id', columns='title', values='rating')
moviemat.head()
Most rated movies
ratings.sort_values('num of ratings',ascending=False).head(10)
We choose two movies: Star Wars, a sci-fi movie, and Liar Liar, a comedy.
ratings.head()
Now let's grab the user ratings for those two movies:
starwars_user_ratings = moviemat['Star Wars (1977)']
liarliar_user_ratings = moviemat['Liar Liar (1997)']
starwars_user_ratings.head()
Use the corrwith() method to compute the pairwise correlation of each column of the ratings matrix with the target movie's ratings:
similar_to_starwars = moviemat.corrwith(starwars_user_ratings)
similar_to_liarliar = moviemat.corrwith(liarliar_user_ratings)
Clean the data by removing NaN values and using a DataFrame instead of a Series
corr_starwars = pd.DataFrame(similar_to_starwars, columns=['Correlation'])
corr_starwars.dropna(inplace=True)
corr_starwars.head()
corr_starwars.sort_values('Correlation', ascending=False).head(10)
Filter out movies that have fewer than 100 reviews (this threshold was chosen based on the histogram). This is needed to get more accurate results
corr_starwars = corr_starwars.join(ratings['num of ratings'])
corr_starwars.head()
Now sort the values
corr_starwars[corr_starwars['num of ratings']>100].sort_values('Correlation',ascending=False).head()
The same for the comedy Liar Liar:
corr_liarliar = pd.DataFrame(similar_to_liarliar, columns=['Correlation'])
corr_liarliar.dropna(inplace=True)
corr_liarliar = corr_liarliar.join(ratings['num of ratings'])
corr_liarliar[corr_liarliar['num of ratings']>100].sort_values('Correlation', ascending=False).head()
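The steps above can be consolidated into a reusable helper. This is a sketch: `recommend_similar` and its parameter names are ours, not part of the notebook, but it follows exactly the corrwith/dropna/join/filter pipeline shown above.

```python
import pandas as pd

def recommend_similar(moviemat, ratings, title, min_ratings=100, top_n=5):
    """Return the top_n movies whose user ratings correlate most with `title`,
    restricted to movies with at least `min_ratings` ratings."""
    target = moviemat[title]
    # Correlate every movie's ratings column with the target movie's ratings.
    corr = pd.DataFrame(moviemat.corrwith(target), columns=['Correlation'])
    corr.dropna(inplace=True)
    # Attach rating counts so we can filter out rarely rated movies.
    corr = corr.join(ratings['num of ratings'])
    popular = corr[corr['num of ratings'] >= min_ratings]
    return popular.sort_values('Correlation', ascending=False).head(top_n)
```

Note that the target movie itself always appears first (its self-correlation is 1), so in practice you may want to drop the first row.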
Note: The R code and the results in this notebook have been converted to markdown so that R is not required to build the documents. The R results in the notebook were computed using R 3.5.1 and lme4 1.1.

%load_ext rpy2.ipython

%R library(lme4)

array(['lme4', 'Matrix', 'tools', 'stats', 'graphics', 'grDevices', 'utils', 'datasets', 'methods', 'base'], dtype='<U9')

Comparing R lmer to statsmodels MixedLM

The statsmodels implementation of linear mixed models (MixedLM) closely follows the approach outlined in Lindstrom and Bates (JASA 1988). This is also the approach followed in the R package lme4. Other packages such as Stata, SAS, etc. should also be consistent with this approach, as the basic techniques in this area are mostly mature.

Here we show how linear mixed models can be fit using the MixedLM procedure in statsmodels. Results from R (lme4) are included for comparison.

Growth curves of pigs

These are longitudinal data from a factorial experiment. The outcome variable is the weight of each pig, and the only predictor variable we will use here is "time". First we fit a model that expresses the mean weight as a linear function of time, with a random intercept for each pig. The model is specified using formulas. Since the random effects structure is not specified, the default random effects structure (a random intercept for each group) is automatically used.
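In formula terms, this random-intercept model can be written as follows (our notation, standard for mixed models, not from the original notebook):

```latex
\mathrm{Weight}_{ij} = \beta_0 + \beta_1\,\mathrm{Time}_{ij} + b_i + \epsilon_{ij},
\qquad b_i \sim \mathcal{N}(0, \sigma_b^2), \quad \epsilon_{ij} \sim \mathcal{N}(0, \sigma^2),
```

where $i$ indexes pigs, $j$ indexes measurement occasions, and $b_i$ is the random intercept shared by all measurements of pig $i$.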
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = sm.datasets.get_rdataset("dietox", "geepack").data
md = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"])
mdf = md.fit(method=["lbfgs"])
print(mdf.summary())
v0.13.2/examples/notebooks/generated/mixed_lm_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Here is the same model fit using lmer in R:

%R print(summary(lmer("Weight ~ Time + (1 + Time | Pig)", data=dietox)))

```
Linear mixed model fit by REML ['lmerMod']
Formula: Weight ~ Time + (1 + Time | Pig)
   Data: dietox

REML criterion at convergence: 4434.1

Scaled residuals:
    Min      1Q  Median      3Q     Max
-6.4286 -0.5529 -0.0416  0.4841  3.5624

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 Pig      (Intercept) 19.493   4.415
          Time         0.416   0.645   0.10
 Residual              6.038   2.457
Number of obs: 861, groups:  Pig, 72

Fixed effects:
            Estimate Std. Error t value
(Intercept) 15.73865    0.55012   28.61
Time         6.93901    0.07982   86.93

Correlation of Fixed Effects:
     (Intr)
Time 0.006
```

The random intercept and random slope are only weakly correlated $(0.294 / \sqrt{19.493 \times 0.416} \approx 0.1)$. So next we fit a model in which the two random effects are constrained to be uncorrelated:
0.294 / (19.493 * 0.416) ** 0.5

md = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"], re_formula="~Time")
free = sm.regression.mixed_linear_model.MixedLMParams.from_components(
    np.ones(2), np.eye(2)
)
mdf = md.fit(free=free, method=["lbfgs"])
print(mdf.summary())
We can further explore the random effects structure by constructing plots of the profile likelihoods. We start with the random intercept, generating a plot of the profile likelihood from 0.1 units below to 0.1 units above the MLE. Since each optimization inside the profile likelihood generates a warning (due to the random slope variance being close to zero), we turn off the warnings here.
import warnings

with warnings.catch_warnings():
    warnings.filterwarnings("ignore")
    likev = mdf.profile_re(0, "re", dist_low=0.1, dist_high=0.1)
Here is a plot of the profile likelihood function. We multiply the log-likelihood difference by 2 to obtain the usual $\chi^2$ reference distribution with 1 degree of freedom.
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 8))
plt.plot(likev[:, 0], 2 * likev[:, 1])
plt.xlabel("Variance of random intercept", size=17)
plt.ylabel("-2 times profile log likelihood", size=17)
Here is the corresponding plot for the random slope. The profile likelihood plot shows that the MLE of the random slope variance parameter is a very small positive number, and that there is low uncertainty in this estimate.
from statsmodels.tools.sm_exceptions import ConvergenceWarning

re = mdf.cov_re.iloc[1, 1]
with warnings.catch_warnings():
    # Parameter is often on the boundary
    warnings.simplefilter("ignore", ConvergenceWarning)
    likev = mdf.profile_re(1, "re", dist_low=0.5 * re, dist_high=0.8 * re)

plt.figure(figsize=(10, 8))
plt.plot(likev[:, 0], 2 * likev[:, 1])
plt.xlabel("Variance of random slope", size=17)
lbl = plt.ylabel("-2 times profile log likelihood", size=17)
We create a small 2D grid where P is a tracer that we want to interpolate. In each grid cell, P has a random value between 0.1 and 1.1. We then set P[1,1] to 0, which for Parcels specifies that this is a land cell.
dims = [5, 4]
dx, dy = 1./dims[0], 1./dims[1]

dimensions = {'lat': np.linspace(0., 1., dims[0], dtype=np.float32),
              'lon': np.linspace(0., 1., dims[1], dtype=np.float32)}

data = {'U': np.zeros(dims, dtype=np.float32),
        'V': np.zeros(dims, dtype=np.float32),
        'P': np.random.rand(dims[0], dims[1])+0.1}
data['P'][1, 1] = 0.

fieldset = FieldSet.from_data(data, dimensions, mesh='flat')
parcels/examples/tutorial_interpolation.ipynb
OceanPARCELS/parcels
mit
We create a Particle class that can sample this field
class SampleParticle(JITParticle):
    p = Variable('p', dtype=np.float32)

def SampleP(particle, fieldset, time):
    particle.p = fieldset.P[time, particle.depth, particle.lat, particle.lon]
Now, we perform four different interpolations of P, which we can control by setting fieldset.P.interp_method. Note that this can always be done after the FieldSet creation. We store the results of each interpolation method in an entry in the dictionary pset.
pset = {}
for p_interp in ['linear', 'linear_invdist_land_tracer', 'nearest', 'cgrid_tracer']:
    fieldset.P.interp_method = p_interp  # setting the interpolation method for fieldset.P
    xv, yv = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
    pset[p_interp] = ParticleSet(fieldset, pclass=SampleParticle, lon=xv.flatten(), lat=yv.flatten())
    pset[p_interp].execute(SampleP, endtime=1, dt=1)
And then we can show each of the four interpolation methods, by plotting the interpolated values on the Particle locations (circles) on top of the Field values (background colors)
fig, ax = plt.subplots(1, 4, figsize=(18, 5))
for i, p in enumerate(pset.keys()):
    data = fieldset.P.data[0, :, :]
    data[1, 1] = np.nan
    x = np.linspace(-dx/2, 1+dx/2, dims[0]+1)
    y = np.linspace(-dy/2, 1+dy/2, dims[1]+1)
    if p == 'cgrid_tracer':
        for lat in fieldset.P.grid.lat:
            ax[i].axhline(lat, color='k', linestyle='--')
        for lon in fieldset.P.grid.lon:
            ax[i].axvline(lon, color='k', linestyle='--')
    ax[i].pcolormesh(y, x, data, vmin=0.1, vmax=1.1)
    ax[i].scatter(pset[p].lon, pset[p].lat, c=pset[p].p, edgecolors='k', s=50, vmin=0.1, vmax=1.1)
    xp, yp = np.meshgrid(fieldset.P.lon, fieldset.P.lat)
    ax[i].plot(xp, yp, 'kx')
    ax[i].set_title("Using interp_method='%s'" % p)
plt.show()
The white box here is the 'land' point where the tracer is set to zero, and the crosses are the locations of the grid points. As you see, the interpolated value is always equal to the field value if the particle is exactly on the grid point (circles on crosses). For interp_method='nearest', the particle values are the same for all particles in a grid cell. They are also the same for interp_method='cgrid_tracer', but the grid cells are then shifted. That is because in a C-grid, the tracer grid cell is on the top-right corner (black dashed lines in right-most panel). For interp_method='linear_invdist_land_tracer', the values are the same as for interp_method='linear' in grid cells that don't border the land point. For grid cells that do border the land cell, the linear_invdist_land_tracer interpolation method gives higher values, as also shown in the difference plot below
plt.scatter(pset['linear'].lon, pset['linear'].lat,
            c=pset['linear_invdist_land_tracer'].p-pset['linear'].p,
            edgecolors='k', s=50, cmap=cm.bwr, vmin=-0.25, vmax=0.25)
plt.colorbar()
plt.title("Difference between 'interp_method=linear' and 'interp_method=linear_invdist_land_tracer'")
plt.show()
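The inverse-distance weighting used near land can be illustrated outside of Parcels. This is a minimal sketch, not the Parcels implementation: `idw_sample` is a hypothetical helper that averages corner values weighted by inverse squared distance, skipping land corners (value 0), which is why values near land come out higher than plain linear interpolation.

```python
import numpy as np

def idw_sample(values, dists):
    """Inverse-distance-squared weighted average of corner values,
    skipping land corners (tracer value == 0)."""
    values = np.asarray(values, dtype=float)
    dists = np.asarray(dists, dtype=float)
    ocean = values != 0.0             # land corners carry zero tracer
    w = 1.0 / dists[ocean] ** 2       # weight by inverse squared distance
    return float(np.sum(w * values[ocean]) / np.sum(w))
```

Because the zero-valued land corner is excluded from the average, it no longer drags the interpolated value toward zero.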
So in summary, Parcels has four different interpolation schemes for tracers:

- interp_method='linear': compute linear interpolation
- interp_method='linear_invdist_land_tracer': compute linear interpolation, except near land (where the field value is zero); in that case an inverse-distance-weighted interpolation is computed, weighting by the squares of the distances
- interp_method='nearest': return the nearest field value
- interp_method='cgrid_tracer': return the nearest field value, assuming C-grid cells

Interpolation and sampling on time-varying Fields

Note that there is an important subtlety in sampling a time-evolving Field. As noted in this Issue, interpolation of a Field only gives the correct answer when that field is interpolated at time+particle.dt and the sampling Kernel is concatenated after the advection Kernel. Let's show how this works with a simple idealised Field P given by the equation
def calc_p(t, y, x):
    return 10*t + x + 0.2*y
Let's define a simple FieldSet with two timesteps, a 0.5 m/s zonal velocity and no meridional velocity.
dims = [2, 4, 5]
dimensions = {'lon': np.linspace(0., 1., dims[2], dtype=np.float32),
              'lat': np.linspace(0., 1., dims[1], dtype=np.float32),
              'time': np.arange(dims[0], dtype=np.float32)}

p = np.zeros(dims, dtype=np.float32)
for i, x in enumerate(dimensions['lon']):
    for j, y in enumerate(dimensions['lat']):
        for n, t in enumerate(dimensions['time']):
            p[n, j, i] = calc_p(t, y, x)

data = {'U': 0.5*np.ones(dims, dtype=np.float32),
        'V': np.zeros(dims, dtype=np.float32),
        'P': p}
fieldset = FieldSet.from_data(data, dimensions, mesh='flat')
Now create four particles and a Sampling class so we can sample the Field P
xv, yv = np.meshgrid(np.arange(0, 1, 0.5), np.arange(0, 1, 0.5))

class SampleParticle(JITParticle):
    p = Variable('p', dtype=np.float32)

pset = ParticleSet(fieldset, pclass=SampleParticle, lon=xv.flatten(), lat=yv.flatten())
The key now is that we need to create a sampling Kernel where the Field P is sampled at time+particle.dt and that we concatenate this kernel after the AdvectionRK4 Kernel
def SampleP(particle, fieldset, time):
    """Offset sampling by dt."""
    particle.p = fieldset.P[time+particle.dt, particle.depth, particle.lat, particle.lon]

kernels = AdvectionRK4 + pset.Kernel(SampleP)  # Note that the order of concatenation matters here!
We can now run these kernels on the ParticleSet
pfile = pset.ParticleFile("interpolation_offset.nc", outputdt=1)
pset.execute(kernels, endtime=1, dt=1, output_file=pfile)
pfile.close()
And we can check whether the Particle.p values indeed are consistent with the calc_p() values
for p in pset:
    assert np.isclose(p.p, calc_p(p.time, p.lat, p.lon))
And the same for the netcdf file (note that we need to convert time from nanoseconds to seconds)
ds = xr.open_dataset("interpolation_offset.nc").isel(obs=1)
for i in range(len(ds['p'])):
    assert np.isclose(ds['p'].values[i],
                      calc_p(float(ds['time'].values[i])/1e9,
                             ds['lat'].values[i], ds['lon'].values[i]))
First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts in a series of 28 by 28 images. The labels simply identify the letter presented in each image (and are limited to A-J, so, 10 classes). The training set and test set have about 500000 and 19000 image-label pairs, respectively. Even with these sizes, it should be possible to train models quickly on any machine. Note: This could take some time! You are about to download a ~1.7 GB file. Go get some coffee.
cache_file = fetch_notMNIST()
notebooks/1_notmnist.ipynb
astroNN/astroNN
mit
First, we'll print some information about the data:
with h5py.File(cache_file, 'r') as f:
    for name, group in f.items():
        print("{}:".format(name))
        for k, v in group.items():
            print("\t {} {}".format(k, v.shape))
Problem 1

Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Plot a 3 by 3 grid of sample images from the test set and set the title of each panel to the character name (use the labels). Hint: use matplotlib.pyplot.imshow()
with h5py.File(cache_file, 'r') as f:
    pass  # your code here
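One possible sketch for Problem 1, assuming (as the printout above suggests) that images are stored as an `(N, 28, 28)` array and labels as integers 0-9; the helper name `plot_sample_grid` is ours:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line in a notebook
import matplotlib.pyplot as plt

def plot_sample_grid(images, labels, n=3, seed=0):
    """Plot an n-by-n grid of random sample images, titled with their letter."""
    letters = "ABCDEFGHIJ"
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=n * n, replace=False)
    fig, axes = plt.subplots(n, n, figsize=(6, 6))
    for ax, i in zip(axes.flat, idx):
        ax.imshow(images[i], cmap="gray")
        ax.set_title(letters[int(labels[i])])  # map integer label to letter
        ax.axis("off")
    return fig
```

In the notebook you would call it as `plot_sample_grid(f['test']['images'][:], f['test']['labels'][:])` inside the `with` block.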
Problem 2

Now display the mean of all images from each class individually and again set the title of each panel to the corresponding character name.
with h5py.File(cache_file, 'r') as f:
    pass  # your code here
Problem 3

Next, we'll randomize the data. It's important to randomize both the train and test data sets. Verify that the data is still labeled correctly after randomization.
def randomize(data, labels):
    pass  # your code here

with h5py.File(cache_file, 'r') as f:
    train_dataset, train_labels = randomize(f['train']['images'][:], f['train']['labels'][:])
    test_dataset, test_labels = randomize(f['test']['images'][:], f['test']['labels'][:])
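One way to implement `randomize` (a sketch): apply a single shared permutation to both arrays so that image-label pairs stay aligned, which is exactly what the verification step should confirm.

```python
import numpy as np

def randomize(data, labels):
    """Shuffle data and labels with the same permutation, keeping pairs aligned."""
    permutation = np.random.permutation(labels.shape[0])
    return data[permutation], labels[permutation]
```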
Windowing -- Tour of Beam

Sometimes, we want to aggregate data, with transforms like GroupByKey or Combine, only at certain intervals, like hourly or daily, instead of processing the entire PCollection of data only once. We might want to emit a moving average as we're processing data. Maybe we want to analyze the user experience for a certain task in a web app; it would be nice to get the app events by sessions of activity. Or we could be running a streaming pipeline, and there is no end to the data, so how can we aggregate data?

Windows in Beam allow us to process only certain data intervals at a time. In this notebook, we go through different ways of windowing our pipeline. Let's begin by installing apache-beam.
# Install apache-beam with pip.
!pip install --quiet apache-beam
examples/notebooks/tour-of-beam/windowing.ipynb
axbaretto/beam
apache-2.0
First, let's define some helper functions to simplify the rest of the examples. We have a transform to help us analyze an element alongside its window information, and we have another transform to help us analyze how many elements landed into each window. We use a custom DoFn to access that information. You don't need to understand these, you just need to know they exist 🙂.
import apache_beam as beam

def human_readable_window(window) -> str:
    """Formats a window object into a human readable string."""
    if isinstance(window, beam.window.GlobalWindow):
        return str(window)
    return f'{window.start.to_utc_datetime()} - {window.end.to_utc_datetime()}'

class PrintElementInfo(beam.DoFn):
    """Prints an element with its Window information."""
    def process(self, element, timestamp=beam.DoFn.TimestampParam, window=beam.DoFn.WindowParam):
        print(f'[{human_readable_window(window)}] {timestamp.to_utc_datetime()} -- {element}')
        yield element

@beam.ptransform_fn
def PrintWindowInfo(pcollection):
    """Prints the Window information with how many elements landed in that window."""
    class PrintCountsInfo(beam.DoFn):
        def process(self, num_elements, window=beam.DoFn.WindowParam):
            print(f'>> Window [{human_readable_window(window)}] has {num_elements} elements')
            yield num_elements

    return (
        pcollection
        | 'Count elements per window' >> beam.combiners.Count.Globally().without_defaults()
        | 'Print counts info' >> beam.ParDo(PrintCountsInfo())
    )
Now let's create some data to use in the examples. Windows define data intervals based on time, so we need to tell Apache Beam a timestamp for each element. We define a PTransform for convenience, so we can attach the timestamps automatically. Apache Beam requires us to provide the timestamp as Unix time, which is a way to represent a date and time as the number of seconds since January 1st, 1970. For our data, let's analyze some events about the seasons and moon phases for the year 2021, which might be useful for a gardening project. To attach timestamps to each element, we can Map each element and return a TimestampedValue.
import time
from apache_beam.options.pipeline_options import PipelineOptions

def to_unix_time(time_str: str, time_format='%Y-%m-%d %H:%M:%S') -> int:
    """Converts a time string into Unix time."""
    time_tuple = time.strptime(time_str, time_format)
    return int(time.mktime(time_tuple))

@beam.ptransform_fn
@beam.typehints.with_input_types(beam.pvalue.PBegin)
@beam.typehints.with_output_types(beam.window.TimestampedValue)
def AstronomicalEvents(pipeline):
    return (
        pipeline
        | 'Create data' >> beam.Create([
            ('2021-03-20 03:37:00', 'March Equinox 2021'),
            ('2021-04-26 22:31:00', 'Super full moon'),
            ('2021-05-11 13:59:00', 'Micro new moon'),
            ('2021-05-26 06:13:00', 'Super full moon, total lunar eclipse'),
            ('2021-06-20 22:32:00', 'June Solstice 2021'),
            ('2021-08-22 07:01:00', 'Blue moon'),
            ('2021-09-22 14:21:00', 'September Equinox 2021'),
            ('2021-11-04 15:14:00', 'Super new moon'),
            ('2021-11-19 02:57:00', 'Micro full moon, partial lunar eclipse'),
            ('2021-12-04 01:43:00', 'Super new moon'),
            ('2021-12-18 10:35:00', 'Micro full moon'),
            ('2021-12-21 09:59:00', 'December Solstice 2021'),
        ])
        | 'With timestamps' >> beam.MapTuple(
            lambda timestamp, element:
                beam.window.TimestampedValue(element, to_unix_time(timestamp))
        )
    )

# Let's see what the data looks like.
beam_options = PipelineOptions(flags=[], type_check_additional='all')
with beam.Pipeline(options=beam_options) as pipeline:
    (
        pipeline
        | 'Astronomical events' >> AstronomicalEvents()
        | 'Print element' >> beam.Map(print)
    )
ℹ️ After running this, it looks like the timestamps disappeared! They're actually still implicitly part of the element, just like the windowing information. If we need to access it, we can do so via a custom DoFn. Aggregation transforms use each element's timestamp along with the windowing we specified to create windows of elements.

Global window

All pipelines use the GlobalWindow by default. This is a single window that covers the entire PCollection. In many cases, especially for batch pipelines, this is what we want since we want to analyze all the data that we have.

ℹ️ GlobalWindow is not very useful in a streaming pipeline unless you only need element-wise transforms. Aggregations, like GroupByKey and Combine, need to process the entire window, but a streaming pipeline has no end, so they would never finish.
import apache_beam as beam

# All elements fall into the GlobalWindow by default.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | 'Astronomical events' >> AstronomicalEvents()
        | 'Print element info' >> beam.ParDo(PrintElementInfo())
        | 'Print window info' >> PrintWindowInfo()
    )
Fixed time windows

If we want to analyze our data hourly, daily, or monthly, we might want to create evenly spaced intervals. FixedWindows allow us to create fixed-sized windows. We only need to specify the window size in seconds. In Python, we can use timedelta to do the conversion from minutes, hours, or days for us.

ℹ️ Some time deltas, like a month, cannot be so easily converted into seconds, since a month can have from 28 to 31 days. Sometimes using an estimate like 30 days in a month is enough.

We must use the WindowInto transform to apply the kind of window we want.
import apache_beam as beam
from datetime import timedelta

# Fixed-sized windows of approximately 3 months.
window_size = timedelta(days=3*30).total_seconds()  # in seconds
print(f'window_size: {window_size} seconds')

with beam.Pipeline() as pipeline:
    elements = (
        pipeline
        | 'Astronomical events' >> AstronomicalEvents()
        | 'Fixed windows' >> beam.WindowInto(beam.window.FixedWindows(window_size))
        | 'Print element info' >> beam.ParDo(PrintElementInfo())
        | 'Print window info' >> PrintWindowInfo()
    )
Sliding time windows

Maybe we want a fixed-sized window, but we don't want to wait until a window finishes before we can start the next one. We might want to calculate a moving average. For example, let's say we want to analyze our data for the last three months, but we want to have a monthly report. In other words, we want windows at a monthly frequency, but each window should cover the last three months. Sliding windows allow us to do just that. We need to specify the window size in seconds just like with FixedWindows. We also need to specify a window period in seconds, which is how often we want to emit each window.
import apache_beam as beam
from datetime import timedelta

# Sliding windows of approximately 3 months every month.
window_size = timedelta(days=3*30).total_seconds()  # in seconds
window_period = timedelta(days=30).total_seconds()  # in seconds
print(f'window_size: {window_size} seconds')
print(f'window_period: {window_period} seconds')

with beam.Pipeline() as pipeline:
    (
        pipeline
        | 'Astronomical events' >> AstronomicalEvents()
        | 'Sliding windows' >> beam.WindowInto(
            beam.window.SlidingWindows(window_size, window_period)
        )
        | 'Print element info' >> beam.ParDo(PrintElementInfo())
        | 'Print window info' >> PrintWindowInfo()
    )
A thing to note with SlidingWindows is that one element might be processed multiple times because it might overlap in more than one window. In our example, the "processing" is done by PrintElementInfo which simply prints the element with its window information. For windows of three months every month, each element is processed three times, one time per window. In many cases, if we're just doing simple element-wise operations, this isn't generally an issue. But for more resource-intensive transformations, it might be a good idea to perform those transformations before doing the windowing.
import apache_beam as beam
from datetime import timedelta

# Sliding windows of approximately 3 months every month.
window_size = timedelta(days=3*30).total_seconds()  # in seconds
window_period = timedelta(days=30).total_seconds()  # in seconds
print(f'window_size: {window_size} seconds')
print(f'window_period: {window_period} seconds')

with beam.Pipeline() as pipeline:
    (
        pipeline
        | 'Astronomical events' >> AstronomicalEvents()
        #------
        # ℹ️ Here we're processing / printing the data before windowing.
        | 'Print element info' >> beam.ParDo(PrintElementInfo())
        | 'Sliding windows' >> beam.WindowInto(
            beam.window.SlidingWindows(window_size, window_period)
        )
        #------
        | 'Print window info' >> PrintWindowInfo()
    )
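The "processed three times" behavior follows from simple arithmetic: a timestamp t falls into every sliding window whose start is a multiple of the period and satisfies start ≤ t < start + size. A plain-Python sketch (our illustration, not Beam's internal code, and assuming windows aligned to multiples of the period):

```python
def sliding_windows(t, size, period):
    """Return the [start, end) intervals of all sliding windows containing t."""
    # The last window that starts at or before t.
    last_start = (t // period) * period
    windows = []
    start = last_start
    while start > t - size:          # start <= t < start + size
        windows.append((start, start + size))
        start -= period
    return sorted(windows)
```

For size = 3 months and period = 1 month, every timestamp lands in size/period = 3 windows, which is why each element is printed three times above.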
Note that by doing the windowing after the processing, we only process / print the elements once, but the windowing afterwards is the same.

Session windows

Maybe we don't want regular windows, but instead want the windows to reflect periods where activity happened. Sessions allow us to create those kinds of windows. We now have to specify a gap size in seconds, which is the maximum number of seconds of inactivity before a session window closes. For example, suppose we specify a gap size of 30 days. The first event would open a new session window since there are no already opened windows. If the next event happens within 30 days, say 20 days after the previous event, the session window extends and covers that as well. If there are no new events for the next 30 days, the session window closes and is emitted.
import apache_beam as beam
from datetime import timedelta

# Sessions divided by approximately 1 month gaps.
gap_size = timedelta(days=30).total_seconds()  # in seconds
print(f'gap_size: {gap_size} seconds')

with beam.Pipeline() as pipeline:
    (
        pipeline
        | 'Astronomical events' >> AstronomicalEvents()
        | 'Session windows' >> beam.WindowInto(beam.window.Sessions(gap_size))
        | 'Print element info' >> beam.ParDo(PrintElementInfo())
        | 'Print window info' >> PrintWindowInfo()
    )
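The session semantics can be mimicked in plain Python, independent of Beam (a sketch of the idea, not Beam's merging-window implementation): sort the timestamps and start a new group whenever the gap since the previous event reaches the gap size.

```python
def sessionize(timestamps, gap_size):
    """Group timestamps into sessions separated by inactivity gaps >= gap_size."""
    sessions = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][-1] < gap_size:
            sessions[-1].append(t)   # within the gap: extend the current session
        else:
            sessions.append([t])     # gap too large: open a new session
    return sessions
```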
Load the DESI model:
fiducial_telescope = batoid.Optic.fromYaml("DESI.yaml")
notebook/DESI model details.ipynb
jmeyers314/batoid
bsd-2-clause
Corrector Internal Baffles

Set up YAML to preserve dictionary order and truncate distances (in meters) to 5 digits:
import collections

def dict_representer(dumper, data):
    return dumper.represent_dict(data.items())
yaml.Dumper.add_representer(collections.OrderedDict, dict_representer)

def float_representer(dumper, value):
    return dumper.represent_scalar(u'tag:yaml.org,2002:float', f'{value:.5f}')
yaml.Dumper.add_representer(float, float_representer)
Define the corrector internal baffle apertures, from DESI-4103-v1. These have been checked against DESI-4037-v6, with the extra baffle between ADC1 and ADC2 added:
# Baffle z-coordinates relative to FP in mm from DESI-4103-v1, checked
# against DESI-4037-v6 (and with extra ADC baffle added).
ZBAFFLE = np.array([
    2302.91, 2230.29, 1916.86, 1823.57, 1617.37, 1586.76, 1457.88,
    1349.45, 1314.68, 1232.06, 899.67, 862.08, 568.81, 483.84, 415.22])

# Baffle radii in mm from DESI-4103-v1, checked
# against DESI-4037-v6 (and with extra ADC baffle added).
RBAFFLE = np.array([
    558.80, 544.00, 447.75, 417.00, 376.00, 376.00, 378.00,
    378.00, 395.00, 403.00, 448.80, 453.70, 492.00, 501.00, 496.00])
Calculate batoid Baffle surfaces for the corrector. These are mechanically planar, but that would put their (planar) center inside a lens, breaking the sequential tracing model. We fix this by using spherical baffle surfaces that have the same apertures. This code was originally used to read a batoid model without baffles, but also works if the baffles are already added.
def baffles(nindent=10):
    indent = ' ' * nindent
    # Measure z from C1 front face in m.
    zbaffle = 1e-3 * (2425.007 - ZBAFFLE)
    # Convert r from mm to m.
    rbaffle = 1e-3 * RBAFFLE
    # By default, all baffles are planar.
    nbaffles = len(zbaffle)
    baffles = []
    for i in range(nbaffles):
        baffle = collections.OrderedDict()
        baffle['type'] = 'Baffle'
        baffle['name'] = f'B{i+1}'
        baffle['coordSys'] = {'z': float(zbaffle[i])}
        baffle['surface'] = {'type': 'Plane'}
        baffle['obscuration'] = {'type': 'ClearCircle', 'radius': float(rbaffle[i])}
        baffles.append(baffle)
    # Loop over corrector lenses.
    corrector = fiducial_telescope['DESI.Hexapod.Corrector']
    lenses = 'C1', 'C2', 'ADC1rotator.ADC1', 'ADC2rotator.ADC2', 'C3', 'C4'
    for lens in lenses:
        obj = corrector['Corrector.' + lens]
        assert isinstance(obj, batoid.optic.Lens)
        front, back = obj.items[0], obj.items[1]
        fTransform = batoid.CoordTransform(front.coordSys, corrector.coordSys)
        bTransform = batoid.CoordTransform(back.coordSys, corrector.coordSys)
        _, _, zfront = fTransform.applyForwardArray(0, 0, 0)
        _, _, zback = bTransform.applyForwardArray(0, 0, 0)
        # Find any baffles "inside" this lens.
        inside = (zbaffle >= zfront) & (zbaffle <= zback)
        if not any(inside):
            continue
        inside = np.where(inside)[0]
        for k in inside:
            baffle = baffles[k]
            r = rbaffle[k]
            # Calculate sag at (x,y)=(0,r) to avoid effect of ADC rotation about y.
            sagf, sagb = front.surface.sag(0, r), back.surface.sag(0, r)
            _, _, zf = fTransform.applyForwardArray(0, r, sagf)
            _, _, zb = bTransform.applyForwardArray(0, r, sagb)
            if zf > zbaffle[k]:
                print(f'{indent}# Move B{k+1} in front of {obj.name} and make spherical to keep model sequential.')
                assert isinstance(front.surface, batoid.Sphere)
                baffle['surface'] = {'type': 'Sphere', 'R': front.surface.R}
                baffle['coordSys']['z'] = float(zfront - (zf - zbaffle[k]))
            elif zbaffle[k] > zb:
                print(f'{indent}# Move B{k+1} behind {obj.name} and make spherical to keep model sequential.')
                assert isinstance(back.surface, batoid.Sphere)
                baffle['surface'] = {'type': 'Sphere', 'R': back.surface.R}
                baffle['coordSys']['z'] = float(zback + (zbaffle[k] - zb))
            else:
                print(f'Cannot find a solution for B{k+1} inside {obj.name}!')
    lines = yaml.dump(baffles)
    for line in lines.split('\n'):
        print(indent + line)

baffles()
Validate that the baffle edges in the final model have the correct apertures:
def validate_baffles():
    corrector = fiducial_telescope['DESI.Hexapod.Corrector']
    for i in range(len(ZBAFFLE)):
        baffle = corrector[f'Corrector.B{i+1}']
        # Calculate surface z at origin in corrector coordinate system.
        _, _, z = batoid.CoordTransform(
            baffle.coordSys, corrector.coordSys).applyForwardArray(0, 0, 0)
        # Calculate surface z at (r,0) in corrector coordinate system.
        sag = baffle.surface.sag(1e-3 * RBAFFLE[i], 0)
        z += sag
        # Measure from FP in mm.
        z = np.round(2425.007 - 1e3 * z, 2)
        assert z == ZBAFFLE[i], baffle.name

validate_baffles()
Corrector Cage and Spider

Calculate simplified vane coordinates using parameters from DESI-4110-v1:
def spider(dmin=1762, dmax=4940.3, ns_angle=77, widths=[28.5, 28.5, 60., 19.1],
           wart_r=958, wart_dth=6, wart_w=300):
    # Vane order is [NE, SE, SW, NW], with N along -y and E along +x.
    fig, ax = plt.subplots(figsize=(10, 10))
    ax.add_artist(plt.Circle((0, 0), 0.5 * dmax, color='yellow'))
    ax.add_artist(plt.Circle((0, 0), 0.5 * dmin, color='gray'))
    ax.set_xlim(-0.5 * dmax, 0.5 * dmax)
    ax.set_ylim(-0.5 * dmax, 0.5 * dmax)
    # Place outer vertices equally along the outer ring at NE, SE, SW, NW.
    xymax = 0.5 * dmax * np.array([[1, -1], [1, 1], [-1, 1], [-1, -1]]) / np.sqrt(2)
    # Calculate inner vertices so that the planes of the NE and NW vanes intersect
    # with an angle of ns_angle (same for the SE and SW planes).
    angle = np.deg2rad(ns_angle)
    x = xymax[1, 0]
    dx = xymax[1, 1] * np.tan(0.5 * angle)
    xymin = np.array([[x - dx, 0], [x - dx, 0], [-x + dx, 0], [-x + dx, 0]])
    for i in range(4):
        plt.plot([xymin[i, 0], xymax[i, 0]], [xymin[i, 1], xymax[i, 1]],
                 '-', lw=0.1 * widths[i])
    # Calculate batoid rectangle params for the vanes.
    xy0 = 0.5 * (xymin + xymax)
    heights = np.sqrt(np.sum((xymax - xymin) ** 2, axis=1))
    # Calculate wart rectangle coords.
    wart_h = 2 * (wart_r - 0.5 * dmin)
    wart_dth = np.deg2rad(wart_dth)
    wart_xy = 0.5 * dmin * np.array([-np.sin(wart_dth), np.cos(wart_dth)])
    plt.plot(*wart_xy, 'rx', ms=25)
    # Print batoid config.
    indent = ' ' * 10
    print(f'{indent}-\n{indent} type: ClearAnnulus')
    print(f'{indent} inner: {np.round(0.5e-3 * dmin, 5)}')
    print(f'{indent} outer: {np.round(0.5e-3 * dmax, 5)}')
    for i in range(4):
        print(f'{indent}-\n{indent} type: ObscRectangle')
        print(f'{indent} x: {np.round(1e-3 * xy0[i, 0], 5)}')
        print(f'{indent} y: {np.round(1e-3 * xy0[i, 1], 5)}')
        print(f'{indent} width: {np.round(1e-3 * widths[i], 5)}')
        print(f'{indent} height: {np.round(1e-3 * heights[i], 5)}')
        dx, dy = xymax[i] - xymin[i]
        angle = np.arctan2(-dx, dy)
        print(f'{indent} theta: {np.round(angle, 5)}')
    print(f'-\n type: ObscRectangle')
    print(f' x: {np.round(1e-3 * wart_xy[0], 5)}')
    print(f' y: {np.round(1e-3 * wart_xy[1], 5)}')
    print(f' width: {np.round(1e-3 * wart_w, 5)}')
    print(f' height: {np.round(1e-3 * wart_h, 5)}')
    print(f' theta: {np.round(wart_dth, 5)}')

spider()
Plot "User Aperture Data" from the ZEMAX "spider" surface 6, as a cross-check:
def plot_obs():
    wart1 = np.array([
        [-233.22959, 783.94254], [-249.32698, 937.09892], [49.02959, 968.45746],
        [65.126976, 815.30108], [-233.22959, 783.94254]])
    wart2 = np.array([
        [-233.22959, 783.94254], [-249.32698, 937.09892], [49.029593, 968.45746],
        [65.126976, 815.30108], [-233.22959, 783.94254]])
    vane1 = np.array([
        [363.96554, -8.8485008], [341.66121, 8.8931664], [1713.4345, 1733.4485],
        [1735.7388, 1715.7068], [363.96554, -8.8485008]])
    vane2 = np.array([
        [-1748.0649, 1705.9022], [-1701.1084, 1743.2531], [-329.33513, 18.697772],
        [-376.29162, -18.653106], [-1748.0649, 1705.9022]])
    vane3 = np.array([
        [-1717.1127, -1730.5227], [-1732.0605, -1718.6327], [-360.28728, 5.922682],
        [-345.33947, -5.9673476], [-1717.1127, -1730.5227]])
    vane4 = np.array([
        [341.66121, -8.8931664], [363.96554, 8.8485008], [1735.7388, -1715.7068],
        [1713.4345, -1733.4485], [341.66121, -8.8931664]])
    extra = np.array([
        [2470, 0], [2422.5396, -481.8731], [2281.9824, -945.22808],
        [2053.7299, -1372.2585], [1746.5537, -1746.5537], [1372.2585, -2053.7299],
        [945.22808, -2281.9824], [481.8731, -2422.5396], [3.0248776e-13, -2470],
        [-481.8731, -2422.5396], [-945.22808, -2281.9824], [-1372.2585, -2053.7299],
        [-1746.5537, -1746.5537], [-2053.7299, -1372.2585], [-2281.9824, -945.22808],
        [-2422.5396, -481.8731], [-2470, 2.9882133e-12], [-2422.5396, 481.8731],
        [-2281.9824, 945.22808], [-2053.7299, 1372.2585], [-1746.5537, 1746.5537],
        [-1372.2585, 2053.7299], [-945.22808, 2281.9824], [-481.8731, 2422.5396],
        [5.9764266e-12, 2470], [481.8731, 2422.5396], [945.22808, 2281.9824],
        [1372.2585, 2053.7299], [1746.5537, 1746.5537], [2053.7299, 1372.2585],
        [2281.9824, 945.22808], [2422.5396, 481.8731], [2470, -1.0364028e-11],
        [2724, 0], [2671.6591, -531.42604], [2516.6478, -1042.4297],
        [2264.9232, -1513.3733], [1926.1589, -1926.1589], [1513.3733, -2264.9232],
        [1042.4297, -2516.6478], [531.42604, -2671.6591], [3.3359379e-13, -2724],
        [-531.42604, -2671.6591], [-1042.4297, -2516.6478], [-1513.3733, -2264.9232],
        [-1926.1589, -1926.1589], [-2264.9232, -1513.3733], [-2516.6478, -1042.4297],
        [-2671.6591, -531.42604], [-2724, 3.2955032e-12], [-2671.6591, 531.42604],
        [-2516.6478, 1042.4297], [-2264.9232, 1513.3733], [-1926.1589, 1926.1589],
        [-1513.3733, 2264.9232], [-1042.4297, 2516.6478], [-531.42604, 2671.6591],
        [6.5910065e-12, 2724], [531.42604, 2671.6591], [1042.4297, 2516.6478],
        [1513.3733, 2264.9232], [1926.1589, 1926.1589], [2264.9232, 1513.3733],
        [2516.6478, 1042.4297], [2671.6591, 531.42604], [2724, -1.1429803e-11],
        [2470, 0]])
    plt.figure(figsize=(20, 20))
    plt.plot(*wart1.T)
    plt.plot(*wart2.T)
    plt.plot(*vane1.T)
    plt.plot(*vane2.T)
    plt.plot(*vane3.T)
    plt.plot(*vane4.T)
    plt.plot(*extra.T)
    w = 1762. / 2.
    plt.gca().add_artist(plt.Circle((0, 0), w, color='gray'))
    plt.gca().set_aspect(1.)

plot_obs()
CGRtools has a subpackage containers with data structure classes:

- MoleculeContainer - for molecular structures
- ReactionContainer - for chemical reactions
- CGRContainer - for Condensed Graphs of Reaction
- QueryContainer - queries for substructure search in molecules
- QueryCGRContainer - queries for substructure search in CGRs
from CGRtools.containers import * # import all containers
doc/tutorial/1_data_types_and_operations.ipynb
stsouko/CGRtools
lgpl-3.0
1.1. MoleculeContainer

Molecules are represented as undirected graphs. Molecules contain Atom objects and Bond objects. Atom objects are stored in a dictionary with a unique number for each atom as the key. Bond objects are stored as a sparse matrix with pairs of adjacent atoms as keys for rows and columns. Hereafter, an atom number is a unique integer used to enumerate atoms in a molecule. Please don't confuse it with the atomic number of the element in the Periodic Table, hereafter called the element number. Methods for molecule handling and the arguments of MoleculeContainer are described below.
m1.meta  # dictionary for molecule properties storage; for example, DTYPE/DATUM fields of an SDF file are read into this dictionary
m1  # MoleculeContainer supports depiction and graphic representation in Jupyter notebooks
m1.depict()  # depiction returns an SVG image as a string
with open('molecule.svg', 'w') as f:  # saving the image to an SVG file
    f.write(m1.depict())
m_copy = m1.copy()  # copy of the molecule
m_copy
len(m1)  # number of atoms in the molecule
# or
m1.atoms_count
m1.bonds_count  # number of bonds
m1.atoms_numbers  # list of atom numbers
Each structure has additional atom attributes: number of neighbors and hybridization. The following notation is used for atom hybridization; values are given as numbers (the symbols used in SMILES-like signatures are shown in parentheses):

- 1 (s) - all bonds of the atom are single, i.e. sp3 hybridization
- 2 (d) - the atom has one double bond and the others are single, i.e. sp2 hybridization
- 3 (t) - the atom has one triple or two double bonds and the others are single, i.e. sp hybridization
- 4 (a) - the atom is in an aromatic ring

The neighbors and hybridization atom attributes are required for substructure operations and structure standardization. See below.
# iterate over atoms using their numbers
list(m1.atoms())  # works the same as dict.items()
# iterate over bonds using adjacent atom numbers
list(m1.bonds())
# access an atom by number
m1.atom(1)
try:
    m1.atom(10)  # raises an error for absent atom numbers
except KeyError:
    print(format_exc())
# access a bond using adjacent atom numbers
m1.bond(1, 4)
try:
    m1.bond(1, 3)  # raises an error for an absent bond
except KeyError:
    print(format_exc())
Atom objects are data classes which store information about:

- element
- isotope
- charge
- radical state
- xy coordinates

Atoms also have methods for data integrity checks and include some internally used data.
a = m1.atom(1)  # access to information
a.atomic_symbol  # element symbol
a.charge  # formal charge
a.is_radical  # atom radical state
a.isotope  # atom isotope; the default isotope if not set (defaults are the same as used in InChI notation)
a.x  # coordinates
a.y
# or
a.xy
a.neighbors  # number of neighboring atoms; read-only
a.hybridization  # atom hybridization; read-only
try:
    a.hybridization = 2  # not assignable, read-only, thus an error is raised
except AttributeError:
    print(format_exc())
Atomic attributes are assignable. CGRtools performs integrity checks to verify changes made by the user.
a.charge = 1
m1
a.charge = 0
a.is_radical = True
m1
# bond objects are also data-like classes which store information about bond order
b = m1.bond(3, 4)
b.order
try:
    b.order = 1  # changing the order is not possible
except AttributeError:
    print(format_exc())
Bonds are read-only. To modify a bond, use the delete_bond method to break it and add_bond to create a new one.
m1.delete_bond(3, 4)
m1
The delete_atom method removes an atom from the molecule.
m1.delete_atom(3)
m1
m_copy  # copy unchanged!
Atom and bond objects can be converted into an integer representation that can be used to classify their types. An atom type is represented by a 21-bit code packed into a 32-bit integer:

- 9 bits are used for the isotope (511 possibilities; the highest known isotope is ~300)
- 7 bits stand for the element number (2 ** 7 - 1 == 127; currently 118 elements are present in the Periodic Table)
- 4 bits stand for the formal charge; charges from -4 to +4 are rescaled to the range 0-8
- 1 bit is used for the radical state
int(a)
# 61705 == 000001111 0001000 0100 1
# 000001111 == 15 isotope
# 0001000 == 8 Oxygen
# 0100 == 4 (4 - 4 = 0) uncharged
# 1 == 1 is radical
int(b)  # bonds are encoded by their order
a = m_copy.atom(1)
print(a.implicit_hydrogens)  # number of implicit hydrogens on atom 1
print(a.explicit_hydrogens)  # number of explicit hydrogens on atom 1
print(a.total_hydrogens)  # total number of hydrogens on atom 1
m1
m1.check_valence()  # returns a list of numbers of atoms with invalid valences
m4  # molecule with valence errors
m4.check_valence()
m3
m3.sssr  # Smallest Set of Smallest Rings algorithm for ring identification;
         # returns a tuple of tuples of atoms forming the smallest rings
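The bit layout above can be unpacked with a short decoder. This is an illustrative sketch only, not part of the CGRtools API; the layout (isotope << 12 | element << 5 | (charge + 4) << 1 | radical) is assumed from the bit diagram in the cell above.

```python
# Illustrative decoder for the 21-bit atom code described above (not a
# CGRtools API): isotope << 12 | element << 5 | (charge + 4) << 1 | radical.
def decode_atom(code):
    is_radical = bool(code & 0b1)
    charge = ((code >> 1) & 0b1111) - 4
    element = (code >> 5) & 0b1111111
    isotope = (code >> 12) & 0b111111111
    return isotope, element, charge, is_radical

print(decode_atom(61705))  # (15, 8, 0, True): 15O radical, uncharged
```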
Connected components. Sometimes a molecule has disconnected components (salts etc.). One can find them and split the molecule into separate components.
m2  # a salt represented as one graph
m2.connected_components  # tuple of tuples of atoms belonging to graph components
anion, cation = m2.split()  # split the molecule into components
anion  # graph of only one salt component
cation  # graph of only one salt component
Union of molecules. Sometimes it is more convenient to represent salts as an ion pair. Otherwise ambiguity could be introduced, for example in a salt-exchange reaction: Ag+ + NO3- + Na+ + Br- = Ag+ + Br- + Na+ + NO3-. The reactant and product sets are the same. In this case one can combine the anion-cation pair into a single graph. This is also a convenient way to represent other molecular mixtures.
salt = anion | cation
# or
salt = anion.union(cation)
salt  # this graph has disconnected components, but it is considered a single compound now
Substructures can be extracted from molecules.
sub = m3.substructure([4, 5, 6, 7, 8, 9])  # substructure with the passed atoms
sub
augmented_substructure is a substructure consisting of the given atoms and a given number of shells of neighboring atoms around them. The deep argument is the number of shells considered. It also returns a projection by default.
aug = m3.augmented_substructure([10], deep=2)  # atom 10 is nitrogen
aug
Atom ordering. This functionality is used for canonical numbering of atoms in molecules. The Morgan algorithm is used for atom ranking. The atoms_order property returns a dictionary with atom numbers as keys and their ranks according to canonicalization as values. Equal ranks mean that atoms are symmetric (they are mapped onto each other in automorphisms).
m5.atoms_order
Atom numbers can be changed with the remap method. This method is useful when the order of atoms in a molecule needs to be changed. The first argument of remap is a dictionary with existing atom numbers as keys and desired atom numbers as values. It is possible to change atom numbers for only part of the atoms. Atom numbers can be non-sequential but need to be unique. If the copy argument is set to True, a new object is created; otherwise the existing molecule is changed. The default is False.
m5
remapped = m5.remap({4: 2}, copy=True)
remapped
1.2. ReactionContainer

ReactionContainer objects have the following properties:

- reactants - list of reactant molecules
- reagents - list of reagent molecules
- products - list of product molecules
- meta - dictionary of reaction metadata (DTYPE/DATUM block in RDF)
r1  # depiction is supported
r1.meta
print(r1.reactants, r1.products)  # access to the lists of reactants and products
reactant1, reactant2, reactant3 = r1.reactants
product = r1.products[0]
Reactions also have standardize, kekule, thiele, implicify_hydrogens, explicify_hydrogens, etc. methods (see part 3). These methods are applied independently to every molecule in the reaction.

1.3. CGR

CGRContainer is similar to MoleculeContainer, except for some methods. The following methods are not supported for CGRContainer:

- standardization methods
- hydrogen count methods
- check_valence

CGRContainer also has some methods absent in MoleculeContainer:

- centers_list
- center_atoms
- center_bonds

CGRContainer is an undirected graph. Atoms and bonds in a CGR have two states: reactant and product.

Composing to CGR. As mentioned above, atoms in MoleculeContainer have unique numbers. These numbers are used as the atom-to-atom mapping in CGRtools upon CGR creation. Thus, the atom order for molecules in a reaction should correspond to the atom-to-atom mapping. A pair of molecules can be transformed into a CGR. Notice that the same atom numbers in reactants and products imply the same atoms. A reaction can also be composed into a CGR. Atom numbers of molecules in the reaction are used as the atom-to-atom mapping of reactants to products.
cgr1 = m7 ^ m8  # CGR from molecules
# or
cgr1 = m7.compose(m8)
print(cgr1)
cgr1
r1
cgr2 = ~r1  # CGR from a reaction
# or
cgr2 = r1.compose()
print(cgr2)  # the signature is printed out
cgr2.clean2d()
cgr2
a = cgr2.atom(2)  # atom access is the same as for MoleculeContainer
a.atomic_symbol  # element attribute
a.isotope  # isotope attribute
For CGRContainer the attributes charge, is_radical, neighbors and hybridization refer to the atom state in the reactant part of the reaction; the attributes p_charge, p_is_radical, p_neighbors and p_hybridization can be used to extract the atom state in the product part of the reaction.
a.charge  # charge of the atom in the reactant
a.p_charge  # charge of the atom in the product
a.p_is_radical  # radical state of the atom in the product
a.neighbors  # number of neighbors of the atom in the reactant
a.p_neighbors  # number of neighbors of the atom in the product
a.hybridization  # hybridization of the atom in the reactant; 1 means only single bonds are incident to the atom
a.p_hybridization  # hybridization of the atom in the product
b = cgr1.bond(4, 10)  # take a bond
Bonds have order and p_order attributes:

- If the order attribute value is None, the bond was formed.
- If p_order is None, the bond was broken.
- order and p_order can't both be None.
b.order  # bond order in the reactant
b.p_order is None  # bond order in the product is None
A CGR can be decomposed back into a reaction, i.e. reactants and products. Notice that a CGR can lose information in the case of unbalanced reactions (where some atoms of the reactants do not have counterparts in the products, and vice versa). Decomposing the CGR of an unbalanced reaction back into a reaction may lead to strange (and erroneous) structures.
reactant_part, product_part = ~cgr1  # CGR of an unbalanced reaction decomposed back into a reaction
# or
reactant_part, product_part = cgr1.decompose()
reactant_part  # reactants extracted; one can notice it is the initial molecule
product_part  # products extracted; originally benzene was the product
For decomposition of a CGRContainer back into a ReactionContainer, the ReactionContainer.from_cgr constructor method can be used.
decomposed = ReactionContainer.from_cgr(cgr2)
decomposed.clean2d()
decomposed
You can see that water, initially absent in the products, was restored. This is a side effect of CGR decomposition that can help with reaction balancing. However, balancing using CGR decomposition works correctly only if minor-part atoms are lost while multiplicity and formal charge are preserved. Electronic state balancing will be added in the next release.
r1 # compare with initial reaction
1.4 Queries

CGRtools supports special objects for queries. Queries are designed for substructure isomorphism. The user can set the number of neighbors and the hybridization explicitly (in molecules these are calculated and cannot be changed). Queries don't have a reset_query_marks method.
from CGRtools.containers import *

m10  # ether
carb = m10.substructure([5, 7, 2], as_query=True)  # extract the carboxyl fragment
print(carb)
carb
CGRs can also be transformed into queries. QueryCGRContainer is the analog of QueryContainer for CGRs and has the same API. QueryCGRContainer takes into account the state of atoms and bonds in the reactant and the product, including neighbors and hybridization.
cgr_q = cgr1.substructure(cgr1, as_query=True)  # transform CGRContainer into QueryCGRContainer
# or
cgr_q = QueryCGRContainer() | cgr1  # union of a query container with a CGR or molecule gives a QueryCGRContainer
print(cgr_q)  # print out the signature of the query
cgr_q
1.5. Molecules, CGRs, Reactions construction

CGRtools has an API for constructing objects from scratch. CGR and Molecule have add_atom and add_bond methods for adding atoms and bonds.
from CGRtools.containers import MoleculeContainer
from CGRtools.containers.bonds import Bond
from CGRtools.periodictable import Na

m = MoleculeContainer()  # new empty molecule
m.add_atom('C')  # add a carbon atom using the element symbol
m.add_atom(6)  # add a carbon atom using the element number; {'element': 6} is not valid, but {'element': 'O'} is also acceptable
m.add_atom('O', charge=-1)  # add a negatively charged oxygen atom; other atomic properties can be set similarly
# add_atom has a second argument for setting the atom number;
# if not set, the next integer after the biggest already created is used
m.add_atom(Na(23), 4, charge=1)  # isotopes require element object construction
m.add_bond(1, 2, 1)  # add a bond with order = 1 between atoms 1 and 2
m.add_bond(3, 2, Bond(1))  # another way to set the bond order
m.clean2d()  # experimental function to calculate atom coordinates; still has a number of flaws
m
Reactions can be constructed from molecules. Reactions are tuple-like objects; modification is impossible.
r = ReactionContainer(reactants=[m1], products=[m11])  # one-step way to construct a reaction
# or
r = ReactionContainer([m1], [m11])  # the first list of MoleculeContainers is interpreted as reactants, the second as products
r
r.fix_positions()  # fixes coordinates of molecules in the reaction without recalculating atom coordinates
r
QueryContainers can be constructed in the same way as MoleculeContainers. Unlike other containers, QueryContainers additionally support lists of atoms, neighbors and hybridizations.
q = QueryContainer()  # create an empty container
q.add_atom('N')  # add an N atom: any isotope, not radical, neutral charge,
                 # number of neighbors and hybridization are irrelevant
q.add_atom('C', neighbors=[2, 3], hybridization=2)  # add a carbon atom: any isotope, not radical, neutral charge,
                                                    # 2 or 3 explicit neighbors and sp2 hybridization
q.add_atom('O', neighbors=1)
q.add_bond(1, 2, 1)  # add a single bond between atoms 1 and 2
q.add_bond(2, 3, 2)  # add a double bond between atoms 2 and 3
# any amide group will fit this query
print(q)  # print out the signature (SMILES-like)
q.clean2d()
q
1.6. Extending CGRtools

You can easily customize CGRtools for your tasks. CGRtools is an OOP-oriented library with subclassing and inheritance support. As an example, we show how special marks for ligand donor centers can be added to atoms.
from CGRtools.periodictable import Core, C, O

class Marked(Core):
    __slots__ = '__mark'  # all new attributes should be slotted!

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.__mark = None  # set a default value for the added attribute

    @property
    def mark(self):  # new property
        return self.__mark

    @mark.setter
    def mark(self, mark):
        # do some checks and calculations
        self.__mark = mark

    def __repr__(self):
        if self.__isotope:
            return f'{self.__class__.__name__[6:]}({self.__isotope})'
        return f'{self.__class__.__name__[6:]}()'

    @property
    def atomic_symbol(self) -> str:
        return self.__class__.__name__[6:]

class MarkedC(Marked, C):
    pass

class MarkedO(Marked, O):
    pass

m = MoleculeContainer()  # create a molecule container using the custom atoms
m.add_atom(MarkedC())  # add custom atom C
m.add_atom(MarkedO())  # add custom atom O
m.add_bond(1, 2, 1)
m.atom(2).mark = 1  # set a mark on the atom
print(m)
m.clean2d()
m
m.atom(2).mark  # read the mark back
EvoFlow hello world: OneMax

This notebook provides a quick introduction to how EvoFlow works by showing how you can use it to solve the classic OneMax problem. At its core, <b>EvoFlow is a modern hardware-accelerated genetic algorithm framework that recasts genetic algorithm programming as a dataflow computation on tensors</b>. Conceptually it is very similar to what tf.keras does, so if you have experience with Keras or TensorFlow you will feel right at home. Under the hood, EvoFlow leverages TensorFlow or CuPy to provide hardware-accelerated operations. For more information about EvoFlow's design and architecture, see our paper.

<b>EvoFlow, while heavily tested, is considered experimental - use at your own risk. Issues should be reported on GitHub. For the rest: evoflow@google.com</b>

<table align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/google-research/evoflow/blob/master/notebooks/onemax.ipynb"><img src="https://storage.googleapis.com/evoflow/images/colab.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/google-research/evoflow/blob/master/notebooks/onemax.ipynb"><img src="https://storage.googleapis.com/evoflow/images/github.png" />View source on GitHub</a> </td> </table>

Setup
# installing the latest version of evoflow
try:
    import evoflow
except ImportError:
    !pip install -U evoflow

%load_ext autoreload
%autoreload 2

from evoflow.engine import EvoFlow
from evoflow.selection import SelectFittest
from evoflow.population import randint_population
from evoflow.fitness import Sum
from evoflow.ops import Input, RandomMutations1D, UniformCrossover1D
notebooks/onemax.ipynb
google-research/evoflow
apache-2.0
Population definition

In this example, we are going to represent the population as a 2D tensor where:

- the 1st dimension is the number of chromosomes (population)
- the 2nd dimension is the number of genes per chromosome
POPULATION_SIZE = 64  #@param {type: "slider", min: 16, max: 2048}
CHROMOSOME_SIZE = 32  #@param {type: "slider", min: 16, max: 2048}
SHAPE = (POPULATION_SIZE, CHROMOSOME_SIZE)
print(f"Population will have {SHAPE[0]} distinct chromosomes made of {SHAPE[1]} genes")
Our model needs an initial population to mutate as input. Here, we take the traditional approach of initializing this population at random, while ensuring each gene value is either 0 or 1 by setting max_value to 1. We will pass the population to the evolve() function very similarly to how you would feed your x_train data in deep learning.
population = randint_population(SHAPE, max_value=1)
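For intuition, a random 0/1 population can be sketched in plain NumPy. This is illustrative only and assumes nothing about how randint_population is implemented internally; it simply produces a tensor of the same shape and value range.

```python
import numpy as np

# Plain NumPy sketch of a random binary population (illustrative, not the
# EvoFlow randint_population implementation): 64 chromosomes of 32 genes.
rng = np.random.default_rng(42)
population = rng.integers(0, 2, size=(64, 32))  # gene values in {0, 1}
```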
Evolution model setup

Building an evolution model requires setting up the <i>evolution operations</i> and then compiling the model. Genetic operations are represented by ops that are very similar to tf.keras.layers. They are combined by creating a directed graph that interconnects them, again very similarly to the Keras functional API. In this example we will mutate our population using two very basic genetic algorithm operations: random mutation and uniform crossover. As our population is made of one-dimensional chromosomes, we will use the 1D variants of those ops: RandomMutations1D and UniformCrossover1D. You can experiment with a 2D or even a 3D population by changing the input shape above and using operation variants that match your chromosome shape. For example, you need to use UniformCrossover2D for a 2D population, and UniformCrossover3D for a 3D population.
inputs = Input(shape=(SHAPE))
x = RandomMutations1D(max_gene_value=1, min_gene_value=0)(inputs)
outputs = UniformCrossover1D()(x)
We instantiate our model by providing its inputs and outputs. Under the hood an EvoFlow model is represented as a directed graph that supports multiple inputs, multiple outputs, and arbitrary branching to tackle the most complex use cases. You can use summary() to check what your model looks like.
ef = EvoFlow(inputs, outputs)
ef.summary()
Model compilation

Before the model is ready for evolution, it needs two additional key components that are supplied to the compile function:

- How to assess how good a given chromosome is (fitness function)
- How to select chromosomes and renew the pool (selection function)

Fitness function

The fitness function is the algorithm's objective function, which the model tries to maximize. It is the most critical part of a genetic algorithm: it is where you express the constraints that a chromosome must satisfy to solve the problem. At each evolution step this function is used to compute how fit for the task each chromosome is. Fitness functions are very similar to loss functions in deep learning, except they don't need to be differentiable and can therefore perform arbitrary computation. Depending on the problem, you can decide to maximize the fitness value, minimize it, or make it converge to a fixed value. The cost of the fitness function's increased expressiveness and flexibility compared to a neural network loss is that we don't have gradients to help guide model convergence, so converging is more computationally expensive.

To make things efficient and fast, it is recommended to implement fitness functions in EvoFlow as tensor operations, but this is not required as long as the function returns a 1D tensor that contains the fitness value of each chromosome in the population. To solve the OneMax problem we want a fitness function that encourages chromosomes to have as many genes with a value of 1 as possible. In tensor representation this is easy to achieve by simply computing the sum of the chromosome and using that value as its fitness. To make the progress look nicer, we scale the fitness between 0 and 1 by supplying a max_sum_value equal to CHROMOSOME_SIZE, as the best case is a chromosome made only of 1s.
fitness_function = Sum(max_sum_value=CHROMOSOME_SIZE)
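Conceptually, this fitness amounts to a row-wise sum over the population tensor. A plain NumPy sketch, illustrative only and not the EvoFlow Sum op itself:

```python
import numpy as np

# Plain NumPy sketch of the OneMax fitness (not the EvoFlow Sum op):
# score each chromosome by its number of 1s, scaled by max_sum_value.
def onemax_fitness(population, max_sum_value):
    return population.sum(axis=1) / max_sum_value

pop = np.random.default_rng(0).integers(0, 2, size=(4, 8))
scores = onemax_fitness(pop, 8)  # one fitness value per chromosome
# An all-ones chromosome scores exactly 1.0:
print(onemax_fitness(np.ones((1, 8), dtype=int), 8))  # [1.]
```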
Modeling evolutionary selection

The evolutionary process takes the fitness values and decides which chromosomes to keep. A naive way to model selection is to keep the fittest individuals and carry them over to the next generation. This is usually referred to as an elitist selection strategy. For instance, we can keep the fittest individuals (the ones with the largest fitness values) as we create the next generation. Alternative functions with different selection-pressure properties exist. For example, roulette wheel selection has non-constant selection intensity depending on the population's fitness distribution, whereas tournament selection provides constant selection intensity regardless of the fitness distribution.
selection_strategy = SelectFittest()
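For contrast with SelectFittest, tournament selection can be sketched in a few lines of plain NumPy. This is illustrative only and is not an EvoFlow API; in practice you would use EvoFlow's own selection ops.

```python
import numpy as np

# Tournament selection sketch (not an EvoFlow API): each winner slot is
# filled by the fittest of `tournament_size` randomly drawn contestants.
def tournament_select(fitness, num_winners, tournament_size=3, seed=None):
    rng = np.random.default_rng(seed)
    contestants = rng.integers(0, len(fitness), size=(num_winners, tournament_size))
    best = np.argmax(fitness[contestants], axis=1)
    return contestants[np.arange(num_winners), best]

fitness = np.array([0.1, 0.9, 0.5, 0.3])
winners = tournament_select(fitness, num_winners=8, seed=0)
# `winners` is a vector of chromosome indices, biased toward high fitness.
```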
Compilation

Now that we have defined our fitness_function and our selection_strategy, the model is ready to be compiled with them as parameters.
ef.compile(selection_strategy, fitness_function)
notebooks/onemax.ipynb
google-research/evoflow
apache-2.0
Evolution

We are now going to evolve our initial random population over a number of generations. At each generation, the selection strategy keeps the best individuals and replaces the low-performing ones with the best ones of the previous generation. `generations=` controls the number of times the model is applied to the population to produce a new generation of mutated specimens; it is the equivalent of the number of epochs in deep learning. The harder the problem, the more generations you need.
GENERATIONS = 4 #@param {type: "slider", min: 1, max: 100}
notebooks/onemax.ipynb
google-research/evoflow
apache-2.0
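The overall loop that `evolve` runs can be sketched by hand. The following is a minimal hand-rolled version (plain NumPy, elitist selection plus random bit-flip mutation — an illustration of the idea, not EvoFlow's API):

```python
import numpy as np

rng = np.random.default_rng(0)

POP, GENES, GENERATIONS = 32, 16, 50
population = rng.integers(0, 2, size=(POP, GENES))

best_history = []
for _ in range(GENERATIONS):
    fitness = population.sum(axis=1) / GENES
    best_history.append(fitness.max())

    # Elitist selection: keep the fittest half unchanged...
    order = np.argsort(fitness)[::-1]
    elites = population[order[:POP // 2]]

    # ...and refill the pool with mutated copies of the elites.
    offspring = elites.copy()
    flips = rng.random(offspring.shape) < 0.05  # 5% per-gene flip rate
    offspring[flips] = 1 - offspring[flips]
    population = np.concatenate([elites, offspring])

# Because elites are carried over unchanged, the best fitness never decreases.
print(best_history[-1])
```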
As the model evolves, look at the value of fitness_function_max, which indicates the fitness of the best-performing chromosome. As you will see, it quickly reaches 1, which indicates that we found an optimal solution.
results = ef.evolve(population, generations=GENERATIONS)
notebooks/onemax.ipynb
google-research/evoflow
apache-2.0
Results

Let's check that the optimal solutions we found are chromosomes made entirely of 1s.

First, let's look at how quickly our model converged. Depending on your population size and chromosome size, convergence will be slower or faster. We encourage you to experiment with different values.

Note: in the graph below we use static=True to generate the graph as a .png so that it displays in Colab and on GitHub. However, when developing your own algorithms, we recommend the interactive plots that rely on altair, as they make for a nicer experience :)
results.plot_fitness(static=True) # note we use a static display
notebooks/onemax.ipynb
google-research/evoflow
apache-2.0
Next we can look at what the population looks like using a heatmap. As the model converged to the optimal solution, the top chromosomes are all made of 1s and form a uniform color band. If you evolve long enough, the whole heatmap becomes a solid color, as all the chromosomes will contain the optimal solution.
results.display_populations(top_k=100, rounding=0)
notebooks/onemax.ipynb
google-research/evoflow
apache-2.0
Finally, to convince ourselves that EvoFlow worked as intended, we can display the 10 best solutions and check that they are made of 1s.
results.top_k()
notebooks/onemax.ipynb
google-research/evoflow
apache-2.0
Data loading Load the training data:
import csv

import numpy as np

def load_train_data(filename):
    X = []
    y = []

    with open(filename) as fd:
        reader = csv.reader(fd, delimiter='\t')

        # ignore header row
        next(reader, None)

        for row in reader:
            X.append(row[1])
            y.append(row[0].split())

    return np.array(X), np.array(y)

X, y = load_train_data('data/train.tsv')
indeed.ipynb
matheusportela/indeed-ml-codesprint
mit
Show some input and output data:
print('Input:', X[0])
print()
print('Output:', y[0])
indeed.ipynb
matheusportela/indeed-ml-codesprint
mit
Preprocessing definition

Preprocessing steps are applied differently to input vectors and target vectors.

Input preprocessing

First, we need to transform the input text into a numerical representation. This is done by generating a vector where each position holds the number of occurrences of a given word in the data. For instance, given the text "hello. this is my first line. this is my next line. this is the final one", its count vector is [3, 2, 1], considering that the first position corresponds to "this", the second to "line", and the third to "final". The count vectorizer does not use a stop-word list, but it only considers words that appear at least 2 times in the training data and in at most 95% of the documents.

Next, we apply tf-idf to weight the words according to their importance: words that appear in too many documents carry less information and are weighted down relative to the others.

Output preprocessing

Usually, the output is given as a list of tags for each description, such as [['part-time-job', 'salary', 'supervising-job'], ['2-4-years-experience-needed', 'hourly-wage']]. However, some tags are mutually exclusive (only one of them can apply at a time), and we take that into account. For instance, no description can be both 'part-time-job' and 'full-time-job' at the same time. Therefore, the target vector is split into several vectors, one for each mutually exclusive set of tags, in a format such as:

```python
{
    'job': [['part-time-job'], ['full-time-job'], ['part-time-job']],
    'wage': [['salary'], [], []],
    'degree': [[], [], []],
    'experience': [[], [], []],
    'supervising': [[], [], ['supervising-job']]
}
```

With the split target vectors, we can train one model for each tag type. Each tag type's target labels are then encoded numerically, replacing each tag with an integer. For instance, [['part-time-job'], ['full-time-job'], [], ['part-time-job'], []] may be encoded as [1, 2, 0, 1, 0].
Define the input data preprocessor as bag-of-words plus tf-idf feature extraction:
- CountVectorizer: transforms text into a vector of occurrence counts for each word found in the training set (bag-of-words representation).
- TfidfTransformer: transforms the bag-of-words counts into tf-idf weights, down-weighting words that appear in many documents.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer

X_preprocessor = Pipeline([
    ('count', CountVectorizer(max_df=0.95, min_df=2, ngram_range=(1, 2))),
    ('tfidf', TfidfTransformer())
])
indeed.ipynb
matheusportela/indeed-ml-codesprint
mit
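The counting step described above can be reproduced with a tiny pure-Python sketch (a hand-rolled illustration of what CountVectorizer does, with a fixed example vocabulary):

```python
import re
from collections import Counter

def count_vector(text, vocabulary):
    # Tokenize on word characters and count occurrences of each vocabulary word.
    counts = Counter(re.findall(r'\w+', text.lower()))
    return [counts[word] for word in vocabulary]

text = 'hello. this is my first line. this is my next line. this is the final one'
print(count_vector(text, ['this', 'line', 'final']))  # → [3, 2, 1]
```

This matches the [3, 2, 1] count vector worked out in the text.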
Define one label encoder per mutually exclusive tag type. Each tag type's targets (including the empty "no tag of this type" case) are encoded as integers.
from sklearn.preprocessing import LabelEncoder

y_preprocessors = {
    'job': LabelEncoder(),
    'wage': LabelEncoder(),
    'degree': LabelEncoder(),
    'experience': LabelEncoder(),
    'supervising': LabelEncoder()
}
indeed.ipynb
matheusportela/indeed-ml-codesprint
mit
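The integer encoding can be sketched by hand (a simplified reimplementation; like sklearn's LabelEncoder, it assigns integers to the unique targets in sorted order — so the exact numbers depend on that ordering):

```python
def encode_labels(targets):
    # Map each unique target (here, a tuple of tags) to an integer,
    # assigned in sorted order of the unique values.
    classes = sorted({tuple(t) for t in targets})
    mapping = {c: i for i, c in enumerate(classes)}
    return [mapping[tuple(t)] for t in targets], classes

targets = [['part-time-job'], ['full-time-job'], [], ['part-time-job'], []]
encoded, classes = encode_labels(targets)
print(encoded)  # → [2, 1, 0, 2, 0]
print(classes)  # → [(), ('full-time-job',), ('part-time-job',)]
```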
Separate the target vector y into one vector for each mutually exclusive tag type:

```python
y = [['part-time-job', 'salary'], ['full-time-job'], ['part-time-job', 'supervising-job']]
split_y = split_exclusive_tags(y)
split_y
{
    'job': [['part-time-job'], ['full-time-job'], ['part-time-job']],
    'wage': [['salary'], [], []],
    'degree': [[], [], []],
    'experience': [[], [], []],
    'supervising': [[], [], ['supervising-job']]
}
```

This is a useful step when training one model for each exclusive tag type.
# Separate targets for mutually exclusive tags
def split_exclusive_tags(y):
    split_y = {
        'job': [],
        'wage': [],
        'degree': [],
        'experience': [],
        'supervising': []
    }

    for target in y:
        split_y['job'].append([t for t in target if t in ['part-time-job', 'full-time-job']])
        split_y['wage'].append([t for t in target if t in ['hourly-wage', 'salary']])
        split_y['degree'].append([t for t in target if t in ['associate-needed', 'bs-degree-needed', 'ms-or-phd-needed', 'licence-needed']])
        split_y['experience'].append([t for t in target if t in ['1-year-experience-needed', '2-4-years-experience-needed', '5-plus-years-experience-needed']])
        split_y['supervising'].append([t for t in target if t in ['supervising-job']])

    return split_y
indeed.ipynb
matheusportela/indeed-ml-codesprint
mit
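The same idea can be written more compactly with a table of tag groups (a condensed, self-contained sketch of the splitting step; the group names follow the notebook's, and only three groups are shown for brevity):

```python
TAG_GROUPS = {
    'job': {'part-time-job', 'full-time-job'},
    'wage': {'hourly-wage', 'salary'},
    'supervising': {'supervising-job'},
}

def split_exclusive(y, groups):
    # Build one target list per mutually exclusive tag group.
    return {name: [[t for t in target if t in tags] for target in y]
            for name, tags in groups.items()}

y = [['part-time-job', 'salary'], ['full-time-job'], ['part-time-job', 'supervising-job']]
split = split_exclusive(y, TAG_GROUPS)
print(split['job'])   # → [['part-time-job'], ['full-time-job'], ['part-time-job']]
print(split['wage'])  # → [['salary'], [], []]
```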
Classifier definition

Define each classifier as a linear SVM with a one-vs-rest strategy, one multiclass model per mutually exclusive tag type.
# F1 score: 0.511
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

models = {
    'job': OneVsRestClassifier(LinearSVC()),
    'wage': OneVsRestClassifier(LinearSVC()),
    'degree': OneVsRestClassifier(LinearSVC()),
    'experience': OneVsRestClassifier(LinearSVC()),
    'supervising': OneVsRestClassifier(LinearSVC())
}
indeed.ipynb
matheusportela/indeed-ml-codesprint
mit
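With integer-encoded targets, one-vs-rest reduces multiclass prediction to an argmax over the per-class decision scores. A minimal sketch of that decision rule (an illustration of the idea, not sklearn's internals — the score values here are made up):

```python
import numpy as np

def one_vs_rest_predict(scores):
    # scores[i, k]: decision value of the binary classifier for class k on sample i.
    # Predict the class whose binary classifier is most confident.
    return np.argmax(scores, axis=1)

scores = np.array([[ 0.9, -0.2, -1.1],
                   [-0.5,  0.1, -0.3]])
print(one_vs_rest_predict(scores))  # → [0, 1]
```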
Model usage

For each mutually exclusive tag type, we train one multiclass model capable of deciding which tag (possibly none) is appropriate for the given input. Initially, a single multilabel model was attempted, which would output multiple labels at once. However, given how huge the input space was in that setting, better results were achieved with multiclass models, one per mutually exclusive tag type. The final output is then the output of each tag type model aggregated into a single vector.
def fit_models(models, X_preprocessor, y_preprocessors, X, y):
    print('Fitting models')

    split_y = split_exclusive_tags(y)

    # The input preprocessor is shared by all models and fitted once
    X_processed = X_preprocessor.fit_transform(X)

    for tag_type, model in models.items():
        # Learn one label encoder and one model per mutually exclusive tag type
        y_processed = y_preprocessors[tag_type].fit_transform(split_y[tag_type])
        model.fit(X_processed, y_processed)
indeed.ipynb
matheusportela/indeed-ml-codesprint
mit
Predict the output by executing the model for each tag type:
def predict_models(models, X_preprocessor, y_preprocessors, X):
    print('Predicting with models')

    output = [[] for _ in X]

    # The input preprocessing is the same for every tag type
    X_processed = X_preprocessor.transform(X)

    for tag_type, model in models.items():
        # Predict with the model for the given type of tag
        model_output = model.predict(X_processed)
        tag_type_output = y_preprocessors[tag_type].inverse_transform(model_output)

        # Aggregate the outputs for all types of tags in the same array
        for i, out in enumerate(tag_type_output):
            if type(out) in [list, tuple]:
                output[i].extend(out)
            else:
                output[i].append(out)

    return output
indeed.ipynb
matheusportela/indeed-ml-codesprint
mit
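The aggregation step at the end of prediction can be isolated into a small sketch (a simplified stand-alone version using made-up per-type predictions, where an empty string stands for "no tag of this type"):

```python
def aggregate_predictions(per_type_tags, num_samples):
    # Combine the per-tag-type predictions into one tag list per sample.
    output = [[] for _ in range(num_samples)]
    for tags in per_type_tags.values():
        for i, tag in enumerate(tags):
            if tag:  # skip the empty "no tag of this type" prediction
                output[i].append(tag)
    return output

per_type = {
    'job': ['part-time-job', 'full-time-job', ''],
    'wage': ['salary', '', ''],
}
print(aggregate_predictions(per_type, 3))
# → [['part-time-job', 'salary'], ['full-time-job'], []]
```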