Tutorial 8 - Using A Custom Model Fit
=====================================
SAILS provides implementations of several algorithms for fitting autoregressive
models but it is straightforward to create a custom class which implements a
new model fit or uses one from another package.
This tutorial will outline how to create a custom model fit class using the
Vector Autoregression class from ``statsmodels.tsa``. We start by importing SAILS
and creating a simulated time series to model.
.. code-block:: python
import numpy as np
import sails
# Create a simulated signal
sample_rate = 100
siggen = sails.Baccala2001_fig2()
X = siggen.generate_signal(sample_rate=sample_rate, num_samples=1000)
We then fit a model using Ordinary Least Squares as implemented in SAILS.
.. code-block:: python
# Fit an autoregressive model with order 3
sails_model = sails.OLSLinearModel.fit_model(X, np.arange(4))
Before we continue, we should note that you can pass an existing ``sklearn``
estimator (for example, ``sklearn.linear_model.LinearRegression``) as the
``estimator`` parameter to the ``fit_model`` function of the ``OLSLinearModel``
class. If you do this, you should not fit the intercept in the model. For
instance:
.. code-block:: python
import sklearn.linear_model
# Fit an autoregressive model using SKLearn's LinearRegressor
estimator = sklearn.linear_model.LinearRegression(fit_intercept=False)
sails_model = sails.OLSLinearModel.fit_model(X, np.arange(4), estimator=estimator)
The above will give the same answers as the default method (which calculates
the parameters using the normal equations). You can, however, extend this
approach to use, for instance, ridge or lasso-regression using the relevant
classes (`sklearn.linear_model.Ridge` or `sklearn.linear_model.Lasso`).
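To make 'the normal equations' concrete, here is a minimal hand-rolled univariate AR fit in plain numpy (an illustration, not SAILS' actual code; the AR coefficients are arbitrary):

```python
import numpy as np

# Simulate a univariate AR(2) process: x[t] = 0.5 x[t-1] - 0.3 x[t-2] + noise
rng = np.random.default_rng(42)
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

# Build the design matrix of lagged values and solve the normal equations
order = 2
Y = x[order:]
Z = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
# beta = (Z'Z)^-1 Z'y  -- the closed-form OLS solution
beta = np.linalg.solve(Z.T @ Z, Z.T @ Y)
print(beta)  # estimates should land close to [0.5, -0.3]
```

Ridge or lasso variants simply replace this closed-form solve with the corresponding penalised estimator.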
To go beyond what is available using the previous options, we can create a new
model fit class based on ``sails.modelfit.AbstractLinearModel``. This is a base
class which contains a number of methods and properties to store and compute
information on a model. The ``AbstractLinearModel`` is not usable on its own
as the fit_model method is not implemented. When classes such as
``OLSLinearModel`` are defined in SAILS, they inherit from
``AbstractLinearModel`` to define the helper functions before a specific
``fit_model`` method is defined. We can do the same to define a custom model
fit class using an external package. We will first create a new class which
inherits from ``AbstractLinearModel`` and then define a ``fit_model`` method
which computes the model fit and stores the outputs in a standard form.
Here is our custom model fit class; each section is described in the comments in the code.
.. code-block:: python
# Define a new class inheriting from the SAILS base model
class TSALinearModel(sails.AbstractLinearModel):
# Define a fit_model method using the python @classmethod decorator
@classmethod
def fit_model(cls, data, delay_vect):
# Some sanity checking of the input matrix. We make sure that the input
# data is in 2d format, or 3d format with a single epoch.
# statsmodels.tsa doesn't currently support fitting multi-trial data
if data.ndim == 3:
if data.shape[2] > 1:
raise ValueError('This class is only implemented for single-trial data')
# Take first trial if we have 3d data
data = data[:,:,0]
# Create object - classmethods act as object constructors. cls points
# to TSALinearModel and ret is a specific instance of TSALinearModel
# though it is currently empty.
ret = cls()
# The rest of this method will populate ret with information based on
# our model fit
# Import the model fit function
from statsmodels.tsa.api import VAR
# Initialise and fit model - we use a simple VAR with default options.
# Note that we return the model and results to ret.tsa_model. This
# means that the statsmodels.tsa.api.VAR object will be stored and
# returned with ret. Later we can use this to access the statsmodels
# model and results through the SAILS object.
ret.tsa_model = VAR(data.T) # SAILS assumes channels first, TSA samples first
model_order = len(delay_vect) - 1 # delay_vect includes a leading zero
ret.tsa_results = ret.tsa_model.fit(model_order)
# The method must assign the following values for SAILS metrics to work properly
ret.maxorder = model_order
ret.delay_vect = delay_vect
ret.parameters = np.concatenate((-np.eye(data.shape[0])[:, :, None],
ret.tsa_results.coefs.transpose((1, 2, 0))), axis=2)
ret.data_cov = sails.find_cov(data.T, data.T)
ret.resid_cov = sails.find_cov(ret.tsa_results.resid.T, ret.tsa_results.resid.T)
# Return fitted model within an instance of a TSALinearModel
return ret
It is crucial that the ``fit_model`` method returns an instance of the class
itself. This instance must contain the following information; other functions
in SAILS assume that these are stored in a fitted model class with specific
formats and names.
- ``maxorder``: the model order of the fit
- ``delay_vect``: the vector of delays used in the model fit
- ``parameters``: the fitted autoregressive parameters of shape `[num_channels x num_channels x model_order + 1]`, including the leading identity
- ``data_cov``: the covariance matrix of the fitted data
- ``resid_cov``: the covariance matrix of the residuals of the fit
Other data can be added in as well (we store ``tsa_model`` and ``tsa_results``
in the example here) but these five must be defined within the returned class.
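As a quick sanity aid, a hypothetical helper (not part of SAILS) could verify that a fitted model object exposes all five required attributes:

```python
# Hypothetical convenience check (not part of SAILS): verify that a fitted
# model object exposes the five attributes the SAILS metrics code expects.
REQUIRED_ATTRS = ('maxorder', 'delay_vect', 'parameters', 'data_cov', 'resid_cov')

def check_fitted_model(model):
    """Return a list of required attribute names missing from `model`."""
    return [name for name in REQUIRED_ATTRS if not hasattr(model, name)]

class _Incomplete:
    maxorder = 3  # only one of the five attributes is set

missing = check_fitted_model(_Incomplete())
print(missing)  # the four attribute names that were never assigned
```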
We can now fit a model using our new class:
.. code-block:: python
tsa_model = TSALinearModel.fit_model(X, np.arange(4))
Finally, we compute connectivity metrics from each model fit and plot a comparison:
.. code-block:: python
freq_vect = np.linspace(0, sample_rate/2)
sails_metrics = sails.FourierMvarMetrics.initialise(sails_model, sample_rate, freq_vect)
tsa_metrics = sails.FourierMvarMetrics.initialise(tsa_model, sample_rate, freq_vect)
PDC = np.concatenate((sails_metrics.partial_directed_coherence, tsa_metrics.partial_directed_coherence), axis=3)
sails.plotting.plot_vector(PDC, freq_vect, line_labels=['SAILS', 'TSA'], diag=True, x_label='Frequency (Hz)')
We see that the partial directed coherence from the two models is nearly identical.
.. image:: tutorial8_1.png
Tutorial 5 - MVAR Connectivity Estimation
=========================================
In this tutorial, we will explore a range of connectivity estimators in a
simulated network.
We start by importing sails and defining some meta-parameters as we did in
previous tutorials.
.. code-block:: python
import numpy as np
import sails
sample_rate = 128
We will use an example network from Baccala & Sameshima 2001. This is defined
within the simulation module in SAILS. We can import a signal generator based
on figure 2 of Baccala & Sameshima 2001.
.. image:: tutorial5_extfig1.png
.. image:: tutorial5_extfig2.png
.. code-block:: python
siggen = sails.simulate.Baccala2001_fig2()
The `siggen` object contains the autoregressive parameters defining the network
and can create a model containing these parameters which we can use in further
analyses. Here we create a model containing the 'true' autoregressive
parameters and use it to compute the connectivity metrics.
.. code-block:: python
m = siggen.generate_model()
freq_vect = np.linspace(0, sample_rate/2, 36)
F = sails.FourierMvarMetrics.initialise(m, sample_rate, freq_vect)
``F`` is an object containing methods to compute a range of frequency-domain
connectivity metrics. Each metric is evaluated across the range of
frequencies defined in ``freq_vect`` and can be plotted using the
``sails.plotting.plot_vector`` function. Next we plot the Magnitude Squared
Coherence estimated from our simulation parameters.
.. code-block:: python
fig = sails.plotting.plot_vector(F.magnitude_squared_coherence, freq_vect, diag=True)
fig.show()
.. image:: tutorial5_1.png
This will generate a matrix of plots, each plot represents the coherence as a
function of frequency between the node specified in the column and row labels.
In this case, we find that all our nodes are strongly coherent with each
other. This is because coherence does not distinguish between direct and
indirect connections. For example, nodes 1 and 5 are only connected through
node 4, yet the coherence still shows a connection. The coherence is also
symmetric, that is, the connection from node 1->2 is the same as node 2->1.
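To see this symmetry concretely, here is a hand-rolled Welch-style coherence estimate on a toy two-channel system in plain numpy (independent of SAILS; the coupling values are arbitrary):

```python
import numpy as np

# Toy two-channel system: x drives y with a one-sample lag plus noise.
rng = np.random.default_rng(0)
n_seg, seg_len = 50, 256
Sxx = np.zeros(seg_len)
Syy = np.zeros(seg_len)
Sxy = np.zeros(seg_len, dtype=complex)
for _ in range(n_seg):
    x = rng.standard_normal(seg_len + 1)
    y = 0.8 * x[:-1] + 0.5 * rng.standard_normal(seg_len)
    x = x[1:]
    fx, fy = np.fft.fft(x), np.fft.fft(y)
    Sxx += np.abs(fx) ** 2
    Syy += np.abs(fy) ** 2
    Sxy += fx * np.conj(fy)

coh_xy = np.abs(Sxy) ** 2 / (Sxx * Syy)           # x -> y
coh_yx = np.abs(np.conj(Sxy)) ** 2 / (Syy * Sxx)  # y -> x
print(np.allclose(coh_xy, coh_yx))  # coherence cannot tell the direction
```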
Next we plot the Directed Transfer Function, a directed measure that is
able to show when connections are not symmetrical:
.. code-block:: python
fig = sails.plotting.plot_vector(F.directed_transfer_function, freq_vect, diag=True)
fig.show()
.. image:: tutorial5_2.png
The Directed Transfer Function shows far fewer connections than the Magnitude
Squared Coherence. We can now see that the connections between node 1 and the
rest of the nodes are asymmetrical. This means that node 1 is driving the
others. The interaction between nodes 4 and 5 is now also isolated. A remaining
issue is that the Directed Transfer Function is still sensitive to indirect
connections, as we can see by the power in the subplot between node 1 and 5.
The Partial Directed Coherence aims to address this problem.
.. code-block:: python
fig = sails.plotting.plot_vector(F.partial_directed_coherence, freq_vect, diag=True)
fig.show()
.. image:: tutorial5_3.png
The Partial Directed Coherence now shows only the direct connections within our
network. We retain our frequency resolution and the sensitivity to asymmetrical
connections. There are many other MVAR derived connectivity metrics available
within sails with different properties and sensitivities, these include:
* Coherency (:func:`sails.mvar_metrics.coherency`,
:attr:`sails.mvar_metrics.AbstractMVARMetrics.coherency`)
* Imaginary Coherence
(:attr:`sails.mvar_metrics.AbstractMVARMetrics.imaginary_coherence`)
* Phase Coherence
(:attr:`sails.mvar_metrics.AbstractMVARMetrics.phase_coherence`)
* Magnitude Squared Coherence
(:attr:`sails.mvar_metrics.AbstractMVARMetrics.magnitude_squared_coherence`)
* Partial Coherence (:func:`sails.mvar_metrics.partial_coherence`,
:attr:`sails.mvar_metrics.AbstractMVARMetrics.partial_coherence`)
* Directed Transfer Function (:func:`sails.mvar_metrics.directed_transfer_function`,
:attr:`sails.mvar_metrics.AbstractMVARMetrics.directed_transfer_function`)
* Full Frequency Directed Transfer Function
(:attr:`sails.mvar_metrics.AbstractMVARMetrics.ff_directed_transfer_function`)
* Direct Directed Transfer Function
(:attr:`sails.mvar_metrics.AbstractMVARMetrics.d_directed_transfer_function`)
* Partial Directed Coherence
(:func:`sails.mvar_metrics.partial_directed_coherence`,
:attr:`sails.mvar_metrics.AbstractMVARMetrics.partial_directed_coherence`)
* Isolated Effective Coherence
(:func:`sails.mvar_metrics.isolated_effective_coherence`,
:attr:`sails.mvar_metrics.AbstractMVARMetrics.isolated_effective_coherence`)
In the second part of this tutorial we will look at fitting an MVAR model and
estimating the Partial Directed Coherence from simulated data, rather than
from the 'true' model.
We can generate data from our simulated model using the
:meth:`sails.simulate.AbstractSigGen.generate_signal` method, specifying the
sample rate and the number of samples to generate:
.. code-block:: python
X = siggen.generate_signal(sample_rate=128, num_samples=640)
``X`` is a ``(nchannels x nsamples)`` array containing our simulated data. We can
plot ``X`` using matplotlib
.. code-block:: python
import matplotlib.pyplot as plt
plt.figure()
for ii in range(5):
plt.plot(X[ii, :] + (ii * 10))
plt.show()
.. image:: tutorial5_4.png
We now have a figure containing 5 time-series from our simulation. We can see
there is an oscillation by eye and that some of the time-series vary together
more than others.
We can fit a model to the simulated data and compute connectivity metrics as we
did in previous tutorials.
.. code-block:: python
delay_vect = np.arange(4)
m = sails.VieiraMorfLinearModel.fit_model(X, delay_vect)
F = sails.FourierMvarMetrics.initialise(m, sample_rate, freq_vect)
diag = m.compute_diagnostics(X)
We check that our model is fitting well by interrogating the diagnostics. Here
we see that we are explaining around 56% of the total variance in the signal
and that our model is stable (``diag.SI = .91``).
Let's compare the Partial Directed Coherence from our fitted model to the
Partial Directed Coherence from the 'true' model.
.. code-block:: python
m0 = siggen.generate_model() # This is our true model
F0 = sails.FourierMvarMetrics.initialise(m0, sample_rate, freq_vect)
pdc = np.concatenate((F0.partial_directed_coherence, F.partial_directed_coherence), axis=3)
fig = sails.plotting.plot_vector(pdc, freq_vect, diag=True, line_labels=['True', 'Fitted'])
fig.show()
.. image:: tutorial5_5.png
The resulting figure shows the nodes by nodes matrix of subplots containing the
PDC estimates. We can see that our model is doing a pretty good job
approximating the true pattern of connectivity. There may be some
false-positive connections which show power for the fitted model but not for
the true model.
Try re-running the simulation with a higher or lower number of samples in the
time series. You should see that the estimation starts to really break down
(lots of false positives and a distorted spectrum shape) when we have too few
samples (e.g. ``num_samples = 128``) and becomes nearly perfect when we have a
very long time-series (e.g. ``num_samples = 2048``).
Tutorial 1 - A pink noise system
================================
In this tutorial we will demonstrate how to set up a simple univariate
AR model which models a pink noise process. We will use the model
to demonstrate how to extract the transfer function using both a Fourier
and Modal estimator.
We start by importing the routines we will need from numpy, sails
and matplotlib. We set the ggplot style for all plots in the tutorials.
.. code-block:: python
import numpy as np
from sails import generate_pink_roots, root_plot, model_from_roots
import matplotlib.pyplot as plt
plt.style.use('ggplot')
We now define the meta-parameters of our system. We will arbitrarily
pretend that we are using a sampling rate of 100Hz. From this, we
can compute our Nyquist frequency, and a vector of frequencies at which
we will evaluate measures such as our Fourier estimator. The variable
freq_vect will come up often during our tutorials. In this case, we
will estimate 64 frequencies which are linearly spaced between 0Hz
and Nyquist. You can alter freq_vect and examine how this affects
the results of the later estimations.
.. code-block:: python
sample_rate = 100
nyq = sample_rate / 2.
freq_vect = np.linspace(0, nyq, 64)
As mentioned above, for our first system we will emulate a system which
behaves as a pink noise process. Whereas we would normally learn our
model from some data, for this example we will construct a model from
the polynomial form of the AR model. A function
(:func:`~sails.tutorial_utils.generate_pink_roots`) has been provided which
calculates a set of roots which will generate the appropriate roots:
.. code-block:: python
roots = generate_pink_roots(1)
Given the roots of the model, we can plot these in a Z-plane representation to
examine them. A function (:func:`~sails.plotting.root_plot`) is provided to
make this straightforward.
.. code-block:: python
ax = root_plot(roots)
ax.figure.show()
.. image:: tutorial1_1.png
The plot shows frequency increasing counterclockwise from the x-axis. Nyquist
frequency is on the negative x-axis. For real systems, the structure of the
pole plot will be mirrored across the x-axis.
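This mirror symmetry is just the fact that a polynomial with real coefficients has complex roots in conjugate pairs - a quick numpy check with an arbitrary real polynomial (the coefficients here are chosen purely for illustration):

```python
import numpy as np

# For any polynomial with real coefficients, complex roots occur in
# conjugate pairs - this is why pole plots of real systems are mirrored
# across the x-axis.
coeffs = [1.0, -1.2, 0.8, -0.3]   # arbitrary real-valued coefficients
roots = np.roots(coeffs)

# The multiset of roots equals the multiset of their conjugates, so
# sorting both gives identical arrays.
mirrored = np.sort_complex(np.conj(roots))
print(np.allclose(np.sort_complex(roots), mirrored))  # True
```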
As mentioned above, we would normally set up a model by learning the parameters
from data. During this exploratory tutorial, we can analytically create the
model from the roots. The :func:`~sails.tutorial_utils.model_from_roots`
function will perform this task for you.
.. code-block:: python
m = model_from_roots(roots)
No matter how we create our model, it will be a subclass of AbstractLinearModel.
All subclasses of :class:`~sails.modelfit.AbstractLinearModel` provide a number
of guarantees so that they can all be used with the various estimator routines
we discuss below. We will discuss fitting a model from data in a later
tutorial.
Now that we have our model, we can extract the transfer function (H) using two
different methods. The first method extracts the transfer function using
a Fourier transform of the parameters.
.. code-block:: python
from sails import FourierMvarMetrics
F = FourierMvarMetrics.initialise(m, sample_rate, freq_vect)
F_H = F.H
The second method uses a modal decomposition of the parameter matrix to break
the parameters apart into, hopefully, interpretable components. We first of
all create a :class:`~sails.mvar_metrics.ModalMvarMetrics` and then use the
:class:`~sails.modal.MvarModalDecomposition` object inside of this to extract
the transfer function for each mode. Each mode will consist of either a pair
of poles in the complex plane or a single real pole. The
:meth:`~sails.modal.MvarModalDecomposition.per_mode_transfer_function` will
take this into account for you.
.. code-block:: python
from sails import ModalMvarMetrics
M = ModalMvarMetrics.initialise(m, sample_rate, freq_vect)
M_H = M.modes.per_mode_transfer_function(sample_rate, freq_vect)
We can plot our modes in both forms to examine the relationship between them.
Firstly, we sum the modal transfer function across all modes to recover the
Fourier based transfer function (to within computational precision).
.. code-block:: python
fourier_H = np.abs(F_H).squeeze()
modal_H = np.abs(M_H.sum(axis=1)).squeeze()
# Check the two forms are equivalent
equiv = np.allclose(fourier_H, modal_H)
ssdiff = np.sum((fourier_H - modal_H)**2)
print('Fourier and Modal transfer functions equivalent: {0}'.format(equiv))
print('Sum-square residual difference: {0}'.format(ssdiff))
# Plot our fourier and modal spectra
f2 = plt.figure()
plt.plot(freq_vect, fourier_H, 'o');
plt.plot(freq_vect, modal_H);
plt.xlabel('Frequency (Hz)')
plt.ylabel('Frequency Response')
plt.legend(['Fourier H', 'Modal H'])
plt.savefig('tutorial1_2.png', dpi=300)
f2.show()
.. image:: tutorial1_2.png
Finally, we can see how each mode contributes to the overall transfer function
shape by plotting each mode separately. Each mode contributes a single
uni-modal resonance to the spectrum. In this case there are no clear peaks in
the spectrum, just the 1/f shape - as such, all the mode peaks sum together to
make a smooth spectrum. Note that we're plotting the modes on a log y-scale as
some modes make a very small contribution to the overall transfer function.
.. code-block:: python
fourier_H = np.abs(F_H).squeeze()
modal_H = np.abs(M_H).squeeze()
# Plot our fourier and modal spectra
f2 = plt.figure()
plt.semilogy(freq_vect, fourier_H, 'o');
plt.semilogy(freq_vect, modal_H);
plt.xlabel('Frequency (Hz)')
plt.ylabel('Frequency Response')
legend = ['Mode' + str(ii) for ii in range(modal_H.shape[1])]
plt.legend(['Fourier H'] + legend)
plt.savefig('tutorial1_3.png', dpi=300)
f2.show()
.. image:: tutorial1_3.png
Tutorial 11 - Morlet Wavelet Decomposition
=======================================================
In this tutorial, we will look at describing time-frequency dynamics in a
signal using a Morlet Wavelet decomposition.
For this tutorial, we will use the same MEG example data which we have used in previous
tutorials.
We start by importing our modules and finding and loading the example data.
.. code-block:: python
import os
from os.path import join
import h5py
import numpy as np
import matplotlib.pyplot as plt
import sails
plt.style.use('ggplot')
SAILS will automatically detect the example data if downloaded into your home
directory. If you've used a different location, you can specify this in an
environment variable named ``SAILS_EXAMPLE_DATA``.
.. code-block:: python
# Specify environment variable with example data location
# This is only necessary if you have not checked out the
# example data into $HOME/sails-example-data
#os.environ['SAILS_EXAMPLE_DATA'] = '/path/to/sails-example-data'
# Locate and validate example data directory
example_path = sails.find_example_path()
# Load data using h5py
data_path = os.path.join(sails.find_example_path(), 'meg_occipital_ve.hdf5')
X = h5py.File(data_path, 'r')['X'][:, 122500:130000, 0]
sample_rate = 250
time_vect = np.linspace(0, X.shape[1] // sample_rate, X.shape[1])
The Morlet Wavelet transform first defines a set of wavelet functions to use as
an adaptive basis set. These wavelets are simple burst-like oscillations
created according to a pre-defined set of parameters.
.. code-block:: python
# Wavelet frequencies in Hz
freqs = [10]
# Number of cycles within the oscillatory event
ncycles = 5
# Length of window in seconds
win_len = 10
# Compute wavelets
mlt = sails.wavelet.get_morlet_basis(freqs, ncycles, win_len, sample_rate, normalise=False)
``mlt`` is now a list of wavelet basis functions. In this case, we have a
single 10Hz basis. This wavelet is a complex-valued array (in the same way that
a Fourier transform returns a complex valued result). To visualise the wavelet,
we can plot the real and imaginary parts of the complex function.
.. code-block:: python
plt.figure()
plt.plot(mlt[0].real)
plt.plot(mlt[0].imag)
plt.legend(['Real', 'Imag'])
Note that the real and imaginary components are the same apart from a 90 degree
phase shift in the time-series. This shift allows the wavelet transform to
estimate the full amplitude envelope and phase of the underlying signal.
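A minimal sketch of this idea in plain numpy (a Gaussian-windowed complex exponential; SAILS' exact formulation may differ):

```python
import numpy as np

# A Morlet-style wavelet built by hand - a Gaussian-windowed complex
# exponential. This is a sketch; SAILS' exact formulation may differ.
sample_rate = 250
freq = 10.0
ncycles = 5
sigma_t = ncycles / (2 * np.pi * freq)   # Gaussian width in seconds
t = np.arange(-0.5, 0.5, 1 / sample_rate)
wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))

# The real part is a cosine and the imaginary part a sine - identical
# waveforms a quarter-cycle apart - so the complex magnitude recovers
# the smooth Gaussian envelope with the oscillation removed.
envelope = np.abs(wavelet)
print(np.allclose(envelope, np.exp(-t ** 2 / (2 * sigma_t ** 2))))  # True
```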
The ``ncycles`` parameter is critical for defining the time-frequency
resolution of the wavelet transform. A small value (typically less than 5) will
lead to high temporal resolution but low frequency resolution, whilst a large
value (typically greater than 7) will have a low temporal resolution and a high
frequency resolution. We will look into this in more detail later - for now, we
can see that this parameter simply changes the number of cycles of oscillation
present in our basis wavelets.
.. code-block:: python
# Compute wavelets
mlt3 = sails.wavelet.get_morlet_basis(freqs, 3, win_len, sample_rate, normalise=False)
mlt5 = sails.wavelet.get_morlet_basis(freqs, 5, win_len, sample_rate, normalise=False)
mlt7 = sails.wavelet.get_morlet_basis(freqs, 7, win_len, sample_rate, normalise=False)
plt.figure()
for idx, mlt in enumerate([mlt3, mlt5, mlt7]):
y = mlt[0].real
t = np.arange(len(y)) - len(y)/2 # zero-centre the wavelet
plt.plot(t, y + idx*2)
plt.legend(['3-cycles', '5-cycles', '7-cycles'])
We can see that the frequency of the oscillation in each wavelet is unchanged
whilst the number of cycles is modified by changing ``ncycles``.
We will often compute wavelets for a range of frequencies rather than just one.
Here we pass in an array of frequency values to compute wavelets for.
.. code-block:: python
freqs = [3, 6, 9, 12, 15]
# Compute wavelets
mlt = sails.wavelet.get_morlet_basis(freqs, ncycles, win_len, sample_rate, normalise=False)
plt.figure()
for ii in range(len(freqs)):
y = mlt[ii].real
t = np.arange(len(y)) - len(y)/2 # zero-centre the wavelet
plt.plot(t, y + ii*2)
plt.legend(freqs)
This time, we see that changing frequency keeps a consistent number of cycles
in each wavelet but modifies the oscillatory period.
To compute the wavelet transform itself, each wavelet basis function is
convolved across the dataset. In this instance (as the wavelet function is
symmetric and the input time-series are real values) this convolution is
similar to computing the correlation between the basis function and the
time-series at each point in time.
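As a hand-rolled illustration of this convolution (plain numpy, not ``sails.wavelet.morlet``), convolving a complex Morlet-style wavelet with a pure 10Hz sinusoid yields an essentially flat amplitude envelope away from the edges:

```python
import numpy as np

# Build a complex Morlet-style wavelet by hand (illustrative sketch only).
sample_rate = 250
freq = 10.0
t_w = np.arange(-0.5, 0.5, 1 / sample_rate)
sigma_t = 5 / (2 * np.pi * freq)
wavelet = np.exp(2j * np.pi * freq * t_w) * np.exp(-t_w ** 2 / (2 * sigma_t ** 2))
wavelet /= np.abs(wavelet).sum()   # simple amplitude normalisation

# Convolve with a pure 10Hz sinusoid and take the complex magnitude.
t = np.arange(0, 4, 1 / sample_rate)   # 4 seconds of signal
signal = np.sin(2 * np.pi * freq * t)
envelope = np.abs(np.convolve(signal, wavelet, mode='same'))

# Away from the boundary effects, the envelope is flat because the
# sinusoid's amplitude is constant.
mid = envelope[sample_rate:-sample_rate]
print(mid.std() / mid.mean() < 0.01)
```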
Let's compute the wavelet transform at 10Hz on our real data.
.. code-block:: python
freqs = [10]
cwt = sails.wavelet.morlet(X[0, :], freqs, sample_rate)
plt.figure()
plt.subplot(211)
plt.plot(X.T)
plt.subplot(212)
plt.plot(cwt.T)
We can see that the wavelet power tracks the amplitude of the oscillations
visible in the original time-series.
Finally, let's compute a full wavelet transform across a wider range of frequencies
.. code-block:: python
freqs = np.linspace(1, 20, 38)
cwt = sails.wavelet.morlet(X[0, :], freqs, sample_rate, normalise='tallon')
plt.figure()
plt.subplot(211)
plt.plot(time_vect, X.T)
plt.xlim(time_vect[0], time_vect[-1])
plt.subplot(212)
plt.pcolormesh(time_vect, freqs, cwt)
Tutorial 3 - Fitting real univariate data
=========================================
In the previous two tutorials we set up our system using the polynomial
representation. In most cases, we will want to learn the parameters
of our model from real data. In this section, we cover how this is
done.
We will be using some data from a single MEG Virtual Electrode reconstruction.
The data has been down-sampled from the original sampling rate for the purposes
of demonstration and ease of analysis.
We start with some imports and finding the location of our example data:
.. code-block:: python
from os.path import join
import numpy as np
import matplotlib.pyplot as plt
from sails import find_example_path, root_plot, model_from_roots
plt.style.use('ggplot')
ex_dir = find_example_path()
The original MEG data was sampled at 2034.51Hz and has been down-sampled by 24
times giving us a sampling rate of just under 85Hz. The data has also been
Z-scored. The data is stored in an HDF5 file in the example data directory
as ``meg_single.hdf5``; the data is stored as the dataset ``X``.
.. code-block:: python
import h5py
sample_rate = 2034.51 / 24
nyq = sample_rate / 2.
freq_vect = np.linspace(0, nyq, 64)
X = h5py.File(join(ex_dir, 'meg_single.hdf5'), 'r')['X'][...]
print(X.shape)
.. code-block:: console
(1, 30157, 1)
The form of the data for the fitting routines is ``(nsignals, nsamples, ntrials)``.
In the current case, we are performing a univariate analysis, so we only have one
signal. We have just over 30,000 data points and one trial.
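If your own recording is a flat 1-D array of samples, you can add the missing signal and trial axes with numpy indexing - a minimal sketch:

```python
import numpy as np

# A single univariate time-series as a flat 1-D array
x = np.random.randn(30157)

# Add the signal and trial axes to match (nsignals, nsamples, ntrials)
X = x[np.newaxis, :, np.newaxis]
print(X.shape)  # (1, 30157, 1)
```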
We are now in a position to set up our model. We will use the Vieira-Morf
algorithm to fit our model, and fit a model of order 19 using the ``delay_vect``
argument. We will discuss model order selection in a later tutorial.
.. code-block:: python
from sails import VieiraMorfLinearModel
delay_vect = np.arange(20)
m = VieiraMorfLinearModel.fit_model(X, delay_vect)
print(m.order)
.. code-block:: console
19
We can now compute some diagnostics in order to check that our
model looks sensible. We can start by looking at the R-squared
value:
.. code-block:: python
diag = m.compute_diagnostics(X)
print(diag.R_square)
.. code-block:: console
0.41148872414
We can see that we are explaining just over 40% of the variance with
our model which, given that we are modelling human MEG data collected
over roughly 6 minutes, is reasonable.
The diagnostics class also gives us access to various other measures:
.. code-block:: python
print(diag.AIC)
.. code-block:: console
-25563.4038876
.. code-block:: python
print(diag.BIC)
.. code-block:: console
-25396.883104
.. code-block:: python
print(diag.LL)
.. code-block:: console
-25603.4038876
.. code-block:: python
print(diag.DW)
.. code-block:: console
[ 2.00103186]
.. code-block:: python
print(diag.SI)
.. code-block:: console
0.976853801393
.. code-block:: python
print(diag.SR)
.. code-block:: console
0.0663411856574
In turn, these are the Akaike Information Criterion (AIC), Bayesian Information
Criterion (BIC), Log Likelihood (LL), Durbin-Watson coefficient (DW), Stability
Index (SI) and Stability Ratio (SR). It is also possible to access the Percent
Consistency (PC), although this is not computed by default due to it being
memory intensive - you can compute this using the
:func:`~sails.stats.percent_consistency` routine.
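As an aside, the Durbin-Watson statistic has a simple standard formula, DW = sum((e_t - e_{t-1})^2) / sum(e_t^2), which is approximately 2 when the residuals are uncorrelated. A hand-rolled sketch (SAILS' own implementation may differ in detail):

```python
import numpy as np

def durbin_watson(resid):
    """Standard Durbin-Watson statistic: ~2 for uncorrelated residuals."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# White noise residuals should give a value very close to 2
rng = np.random.default_rng(1)
white = rng.standard_normal(10000)
dw = durbin_watson(white)
print(dw)  # approximately 2
```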
As in our previous examples, we can extract our metrics and plot the
transfer functions using both the Fourier and Modal methods as
we have previously done:
.. code-block:: python
from sails import FourierMvarMetrics, ModalMvarMetrics
F = FourierMvarMetrics.initialise(m, sample_rate, freq_vect)
F_H = F.H
M = ModalMvarMetrics.initialise(m, sample_rate, freq_vect)
M_H = M.modes.per_mode_transfer_function(sample_rate, freq_vect)
# Plot our fourier and modal spectra
f2 = plt.figure()
plt.plot(freq_vect, np.abs(F_H).squeeze(), 'o');
plt.plot(freq_vect, np.abs(M_H).squeeze());
plt.xlabel('Frequency (Hz)')
plt.ylabel('Frequency Response')
f2.show()
.. image:: tutorial3_1.png
.. code-block:: python
f3 = plt.figure()
plt.semilogy(freq_vect, np.abs(F_H).squeeze(), 'o');
plt.semilogy(freq_vect, np.abs(M_H).squeeze());
plt.xlabel('Frequency (Hz)')
plt.ylabel('Frequency Response')
f3.show()
.. image:: tutorial3_2.png
In our previous examples, the model was defined by the structure
of the polynomial, and we could analytically write down the form
of the poles. In this example, we have learned the parameters from
data. We may want to look at the plot of the system roots:
.. code-block:: python
ax = M.modes.pole_plot()
ax.figure.show()
.. image:: tutorial3_3.png
As previously, we can also go on to extract the magnitude of the eigenvalues and
period of each of the modes:
.. code-block:: python
ev = np.abs(M.modes.evals)
idx = [i[0] for i in M.modes.mode_indices]
print(ev[idx])
.. code-block:: console
[[ 0.9768538 ]
[ 0.88790649]
[ 0.83392372]
[ 0.82257801]
[ 0.7869429 ]
[ 0.79447331]
[ 0.80151847]
[ 0.81061091]
[ 0.81709057]
[ 0.80730369]]
.. code-block:: python
print(M.modes.peak_frequency[idx])
.. code-block:: console
[[ 0. ]
[ 8.88121272]
[ 4.51372792]
[ 13.89286835]
[ 18.15521077]
[ 22.01471814]
[ 26.47871083]
[ 30.9324283 ]
[ 35.59229698]
[ 40.1740829 ]]
From this, we can see that the mode which primarily fits this data
is an alpha mode at 8.9Hz.
Tutorial 10 - Dynamic connectivity during a task
================================================
Here, we will look at using MVAR modelling to describe changes in connectivity
within a functional network as participants perform a simple button press task.
This is similar to the sliding window modelling in tutorial 6
We will analyse MEG source time-courses from four regions of the AAL atlas
(precentral gyrus and supplementary motor area from the left and right hemispheres)
during a self-paced finger tap task from 10 participants. Each trial lasts 20
seconds with 10 seconds of finger tapping at the start and 10 seconds post
movement time. Finger tapping was performed with the right hand. The MEG
data were recorded on a 4D NeuroImaging WHS-3600 system and the source
time-courses were generated from the data using an LCMV beamformer on data
which had been band-pass filtered between 1 and 80Hz.
First, let's import sails and load in the example data. If you haven't already
done so, please download the example data repository from
https://github.com/sails-dev/sails-example-data
We start by importing the modules we will require:
.. code-block:: python
import os
import h5py
import numpy as np
import matplotlib.pyplot as plt
import sails
SAILS will automatically detect the example data if downloaded into your home
directory. If you've used a different location, you can specify this in an
environment variable named ``SAILS_EXAMPLE_DATA``.
.. code-block:: python
# Specify environment variable with example data location
# This is only necessary if you have not checked out the
# example data into $HOME/sails-example-data
# os.environ['SAILS_EXAMPLE_DATA'] = '/path/to/sails-example-data'
# Locate and validate example data directory
example_path = sails.find_example_path()
# Load data using h5py
motor_data = h5py.File(os.path.join(example_path, 'fingertap_group_data.hdf5'), 'r')
The motor data is stored in hdf5 format and contains the data sample rate and
10 data arrays with the data for each participant. These can be accessed using
keys similar to a dictionary. Here, we print the keys from ``motor_data`` and
extract the sample rate. Note that ``motor_data['sample_rate']`` returns an
h5py dataset object, which we index with ``[...]`` to read its contents into a
numpy array.
.. code-block:: python
# Print contents of motor_data
print(list(motor_data.keys()))
# Extract sample_rate
sample_rate = motor_data['sample_rate'][...]
print('Data sample rate is {0}Hz'.format(sample_rate))
# Define node labels
labels = ['L Precentral', 'R Precentral', 'L SuppMotorArea', 'R SuppMotorArea']
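If you want to see the distinction between an h5py dataset and a numpy array in isolation, here is a self-contained sketch using an in-memory buffer; the ``sample_rate`` dataset below is a stand-in, not the example data:

```python
import io

import h5py
import numpy as np

# Build a throwaway in-memory HDF5 file with a single dataset
buf = io.BytesIO()
with h5py.File(buf, 'w') as f:
    f['sample_rate'] = np.array([100.0])

with h5py.File(buf, 'r') as f:
    dset = f['sample_rate']   # an h5py Dataset object, not an array
    arr = dset[...]           # [...] reads the whole dataset into a numpy array
    value = dset[0]           # [0] extracts a single element
    print(type(arr), arr, value)
```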
The fingertap data itself is in a 3d array of size `[nchannels x nsamples x
ntrials]`. Every participant has 4 channels and 3391 samples per trial, but a
slightly different number of trials (around 20-30 each).
.. code-block:: python
# Print shape of data array from the first participant
print(motor_data['subj0'][...].shape)
Before fitting our model we specify a time vector with the time in seconds of
each of our samples.
.. code-block:: python
# Specify a time vector
num_samples = motor_data['subj0'].shape[1]
time_vect = np.linspace(0, num_samples/sample_rate, num_samples)
Now we will fit our models. We first define the vector of delays to fit the
MVAR model on and a set of frequency values to estimate connectivity across. We
will compute three things for each participant: ``m`` is the LinearModel
containing the autoregressive parameters, ``d`` is a set of model diagnostics
for each window and ``f`` is a MvarMetrics instance which we can use to compute
power and connectivity values.
We compute ``m``, ``d`` and ``f`` for each participant in turn and store them
in a list. Please see tutorial 6 for more details on ``sliding_window_fit`` and
its options.
.. code-block:: python
# Define model delays, time vector and frequency vector
delay_vect = np.arange(15)
freq_vect = np.linspace(0, 48, 48)
# Initialise output lists
M = []
D = []
F = []
# Main loop over 10 subjects
for ii in range(10):
print('Processing subj {0}'.format(ii))
# Get subject data
x = motor_data['subj{}'.format(ii)][...]
# Fit sliding window model
sliding_window_length = int(sample_rate) # 1 second long windows
sliding_window_step = int(sample_rate / 8) # 125ms steps between windows
m, d = sails.sliding_window_fit(sails.VieiraMorfLinearModel, x, delay_vect,
sliding_window_length, sliding_window_step)
# Compute Fourier MVAR metrics from sliding window model
f = sails.FourierMvarMetrics.initialise(m, sample_rate, freq_vect)
# Append results into list
M.append(m) # MVAR Model
D.append(d) # Model Diagnostics
F.append(f) # Fourier Metrics
# Get time vector for centre of sliding windows (in seconds)
model_time_vect = time_vect[m.time_vect.astype(int)]
We can extract information across participants using list comprehensions. Here,
we extract the power spectral density from each participant and concatenate
them into a single array for visualisation.
.. code-block:: python
# Create a list of PSD arrays with a singleton dummy dimension on the end
# and concatenate into a single array
PSD = np.concatenate([x.PSD[..., np.newaxis] for x in F], axis=4)
# PSD is now [nnodes x nnodes x nfrequencies x ntimes x nparticipants]
print(PSD.shape)
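The stacking idiom itself can be checked with dummy arrays standing in for the per-participant metrics; the shapes below are illustrative only:

```python
import numpy as np

# Three hypothetical per-subject arrays: [nnodes x nnodes x nfreq x ntimes]
subject_psds = [np.random.rand(4, 4, 48, 10) for _ in range(3)]

# Add a singleton participant axis to each array and join along it
stacked = np.concatenate([p[..., np.newaxis] for p in subject_psds], axis=4)
print(stacked.shape)  # (4, 4, 48, 10, 3)
```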
Next we visualise the time-frequency power spectral density for each of the
four nodes. We perform a simple baseline correction by subtracting the average
of the last 2 seconds of data from the whole trial. The resulting PSD shows
the power relative to this pre-movement period. We annotate the plots with two
dashed lines, one at 10 seconds to show the end of the finger-tapping and one
at 18 seconds showing the start of the baseline period.
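The baseline-correction idiom used in the plotting loop below can be seen on a small dummy array; the ``keepdims`` form here is equivalent to the ``np.newaxis`` form in the tutorial code:

```python
import numpy as np

psd = np.arange(12.0).reshape(3, 4)                  # [nfreq x ntimes]
baseline = psd[:, -2:].mean(axis=1, keepdims=True)   # mean of last two windows
corrected = psd - baseline                           # broadcasts over time
print(corrected[0])  # [-2.5 -1.5 -0.5  0.5]
```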
.. code-block:: python
# Count the number of nodes and subjects
num_nodes = PSD.shape[0]
# Number of windows over which to calculate baseline estimate
baseline_windows = 11
# Create a new figure
plt.figure(figsize=(6, 12))
# Main plotting loop
for ii in range(num_nodes):
# Average PSD across participants
psd = PSD[ii, ii, :, :, :].mean(axis=2)
# Apply a simple baseline correction
psd = psd - psd[:, -baseline_windows:, np.newaxis].mean(axis=1)
# Find the max value for the colour scale
mx = np.abs(psd).max()
# Make new subplot and plot baseline corrected PSD
plt.subplot(num_nodes, 1, ii + 1)
plt.pcolormesh(model_time_vect, freq_vect, psd, cmap='RdBu_r', vmin=-mx, vmax=mx)
# Annotate subplot
cb = plt.colorbar()
cb.set_label('PSD')
# Place lines showing the period of finger tapping
plt.vlines([10, 18], freq_vect[0], freq_vect[-1], linestyles='dashed')
# Annotate windows
plt.text(5, 40, 'Tapping', horizontalalignment='center')
plt.text(14, 40, 'Rebound', horizontalalignment='center')
plt.text(18.75, 40, 'BL', horizontalalignment='center')
# Tidy up x-axis labelling
if ii == (num_nodes - 1):
plt.xlabel('Time (seconds)')
else:
plt.gca().set_xticklabels([])
# Y axis labelling and title
plt.ylabel('Frequency (Hz)')
plt.title(labels[ii])
plt.show()
Note that the Left Precentral gyrus has a strong increase in beta power after
movement has stopped. The left and right Supplemental Motor Areas have a weaker
rebound.
.. image:: tutorial10_1.png
It is always a good idea to inspect the model diagnostic values for an MVAR
analysis. We now extract the stability index, r-squared and residual covariances
for each participant using list comprehensions to extract data from ``D``.
We use the ``np.r_`` operator as a quick way to concatenate our lists into numpy arrays.
.. code-block:: python
# Get stability index
SI = np.r_[[d.SI for d in D]]
# Get R-square variance explained
R_square = np.r_[[d.R_square.mean(axis=1) for d in D]]
# Get the matrix norm of the residual covariance matrices - this is a
# convenient summary of the sum-squared values in the residual covariance
# matrices.
resid_norm = np.r_[[np.linalg.norm(d.resid_cov, axis=(0, 1)) for d in D]]
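As background, the stability index of a VAR model is conventionally the largest absolute eigenvalue of its companion matrix. A minimal numpy sketch is shown below; the function name and the ``[order, n, n]`` parameter layout are assumptions for illustration, not the SAILS API:

```python
import numpy as np

def stability_index(A):
    """Largest absolute eigenvalue of a VAR companion matrix.

    A : AR coefficient matrices, shape [order, n, n].
    Values below 1 indicate a stable model.
    """
    order, n, _ = A.shape
    companion = np.zeros((order * n, order * n))
    companion[:n, :] = np.concatenate(list(A), axis=1)  # top block row: A1 ... Ap
    companion[n:, :-n] = np.eye((order - 1) * n)        # shifted identity below
    return np.abs(np.linalg.eigvals(companion)).max()

# A stable univariate AR(1): x_t = 0.9 x_{t-1} + noise
print(stability_index(np.array([[[0.9]]])))  # 0.9
```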
A quick visualisation of these diagnostics shows that our models are stable for
all participants and all time windows (SI < 1). The models explain between 15
and 40% of variance and have relatively stable residual covariances across the
whole window.
.. code-block:: python
plt.figure()
plt.subplot(3, 1, 1)
plt.plot(model_time_vect, SI.T, 'grey')
plt.plot(model_time_vect, SI.mean(axis=0), 'k', linewidth=2)
plt.ylabel('Stability Index')
plt.gca().set_xticklabels([])
plt.subplot(3, 1, 2)
plt.plot(model_time_vect, R_square.T, 'grey')
plt.plot(model_time_vect, R_square.mean(axis=0), 'k', linewidth=2)
plt.ylabel('R-square')
plt.gca().set_xticklabels([])
plt.subplot(3, 1, 3)
plt.plot(model_time_vect, resid_norm.T, 'grey')
plt.plot(model_time_vect, resid_norm.mean(axis=0), 'k', linewidth=2)
plt.ylabel('Norm of\nresidual covariance')
plt.show()
.. image:: tutorial10_2.png
Now that we trust our models to capture reasonable task dynamics within each
brain region, and their diagnostics look good, we can look at the connectivity.
We first look at the cross-spectral densities across the network. These are the
off-diagonal elements of the ``PSD`` metric. We first extract the ``PSD`` using
the list comprehension method and concatenate them into a single array. After
that, we plot the average cross-spectral density between all pairs of nodes
using ``sails.plotting.plot_matrix``.
.. code-block:: python
# Create a list of PSD arrays with a singleton dummy dimension on the end
# and convert into an array
PSD = np.concatenate([f.PSD[..., np.newaxis] for f in F], axis=4)
# Visualise
fig = plt.figure(figsize=(12, 8))
sails.plotting.plot_matrix(PSD.mean(axis=4), model_time_vect, freq_vect,
title='Cross Spectral Density',
labels=labels, F=fig,
vlines=[10], cmap='hot_r', diag=False,
x_label='Time (secs)', y_label='Frequency (Hz)')
fig.show()
The cross spectral densities show a similar post-movement beta rebound pattern
to the within-node power spectral densities. Now we can also see that there is
shared spectral information in the left-precentral gyrus <-> left-supplemental
motor area and left-supplemental motor area <-> right-supplemental motor area
connections. There also appear to be strong cross-spectral densities below 10Hz
between all nodes.
.. image:: tutorial10_3.png
The Magnitude-Squared Coherence might be a better representation of these
connections. It expresses the cross-spectral density between two nodes as a
ratio of the power within each node.
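In the standard definition (with :math:`S_{ij}(f)` denoting the cross-spectral density between nodes :math:`i` and :math:`j`), the magnitude-squared coherence is

.. math::

    C_{ij}(f) = \frac{|S_{ij}(f)|^2}{S_{ii}(f)\,S_{jj}(f)}

which is bounded between 0 and 1.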
.. code-block:: python
# Extract the magnitude squared coherence using the list comprehension method
# and convert into a numpy array
MSC = np.concatenate([f.magnitude_squared_coherence[..., np.newaxis] for f in F], axis=4)
# Visualise
fig = plt.figure(figsize=(12, 8))
sails.plotting.plot_matrix(MSC.mean(axis=4), model_time_vect, freq_vect,
title='Magnitude Squared Coherence',
labels=labels, F=fig,
vlines=[10], cmap='hot_r', diag=False,
x_label='Time (secs)', y_label='Frequency (Hz)')
plt.show()
The normalisation emphasises the coherence within the beta rebound and strongly
reduces the apparent shared power below 10Hz. This suggests that the beta cross
spectral density is relatively large when compared to the power in each node at
that frequency, but the <10Hz cross spectra are very low power compared to the
within-node power.
.. image:: tutorial10_4.png
Next, we can explore whether this beta connectivity is symmetrical, i.e. whether
both nodes are equally influential on each other or if one node in the pair
might be 'driving' the other. We use the Directed Transfer Function to estimate
this and visualise in the same way.
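The DTF is conventionally defined from the model's frequency-domain transfer matrix :math:`H(f)`; one common normalisation of the influence of node :math:`j` on node :math:`i` is

.. math::

    \mathrm{DTF}_{ij}(f) = \frac{|H_{ij}(f)|^2}{\sum_{k} |H_{ik}(f)|^2}

so each row sums to 1 across source nodes at every frequency.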
.. code-block:: python
# Extract the directed transfer function using the list comprehension method
# and convert into a numpy array
DTF = np.concatenate([f.directed_transfer_function[..., np.newaxis] for f in F], axis=4)
# Visualise
fig = plt.figure(figsize=(12, 8))
sails.plotting.plot_matrix(DTF.mean(axis=4), model_time_vect, freq_vect,
title='Directed Transfer Function',
labels=labels, F=fig,
vlines=[10], cmap='hot_r', diag=False,
x_label='Time (secs)', y_label='Frequency (Hz)')
plt.show()
The DTF is an asymmetrical measure, so the upper and lower triangles of the DTF
plot are not the same. We see similar connections in the beta band again,
but the DTF additionally suggests that the Left Precentral Gyrus is driving the
Left Supplemental Motor Area, though there is some influence in the reciprocal
direction. Similarly, the Left Supplemental Motor Area appears to be influencing
the Right Supplemental Motor Area.
.. image:: tutorial10_5.png
Finally, we can emphasise the change in connectivity relative to baseline by
performing a simple baseline correction on the DTF values. Here, we subtract
the average DTF from the last two seconds of the epoch from each time-point.
Positive values then indicate a movement-evoked increase in connectivity in a
connection and negative values a movement-evoked decrease.
.. code-block:: python
# Number of windows over which to calculate baseline estimate
baseline_windows = 11
# Apply a simple baseline correction
bcDTF = DTF.mean(axis=4)
bcDTF = bcDTF - bcDTF[:, :, :, -baseline_windows:, np.newaxis].mean(axis=3)
# Plot baseline corrected DTF
fig = plt.figure(figsize=(12, 8))
sails.plotting.plot_matrix(bcDTF, model_time_vect, freq_vect,
title='baseline-corrected Directed Transfer Function',
labels=labels, F=fig,
vlines=[10, 18], cmap='RdBu_r', diag=False,
x_label='Time (secs)', y_label='Frequency (Hz)')
plt.show()
This baseline correction makes the change in directed functional connectivity
during the post-movement beta rebound much clearer. It also reveals that
the relationship between the two supplementary motor areas appears
to be driven by the left SMA. Given that this is a right-hand movement task,
this could potentially be interpreted as a form of inhibitory signal from
the left to the right hemisphere. Further data and analysis would be necessary
to fully establish the nature of such a signal.
.. image:: tutorial10_6.png
from saimll import SAIML
# There are included macros that apply cool effects, like making the passed text rainbow
SAIML.print("[^rainbow]Rainbow Text")
# There is also an included macro for displaying hyperlinks
SAIML.print("[~https://tired-fox.github.io/SAIMLDecor/teddecor.html]Documentation")
# There is currently also a macro for outputting a string's literal representation
# For example, if you have special escape characters and want to print their literals you can do
SAIML.print("[^repr]\x1b[0m")
# You can also define your own macros. These are functions that manipulate the next text block
# They must take a string as a parameter and return a string as an output. If a macro doesn't return a string it has no effect.
def hello_world(string: str) -> str:
return "Hello World"
# You must let SAIML know about the function before you can call it.
# You give the function a name, which you use to call it in a macro, then you pass the callable function.
SAIML.define("hw", hello_world)
SAIML.print("[^hw]Cat goes moo")
# Macros can be nested, but this is currently only really useful for styling your hyperlinks
# Colors can be passed in many formats... the one below shows rgb, which can be separated with both `,` and `;`
SAIML.print(
"[~https://github.com/Tired-Fox/SAIMLDecor]*SAIMLDecor[@F138,43,226]Github [@F220;20;60 @B255;255,255]page"
)
# Here is passing colors as hex and xterm color codes. Hex must have a `#` in front of it
SAIML.print("[@F #83a748]HEX[@F] and [@F206]XTERM")
# Colors can also be passed in with default build in terminal color codes.
# These include black, red, green, yellow, blue, magenta, cyan, and white
SAIML.print("[@B cyan]Predefined Color")
# The `@` is a color macro and the `F` or `B` immediately following it is whether to apply it to the foreground or background
# You can use a [@F] or [@B] to reset the foreground and background colors respectively
# You can also use [@] to reset both foreground and background
# SAIML also has markdown syntax for underline and bold. Bold = * and underline = _ , with each only using one character.
# When a character is used it toggles the bold or underline state.
SAIML.print("Normal, _Underlined_, *Bold*, Normal, *Bold, _Bold and Underline")
# Notice how the Bold, Underline, colors and other formatting don't need to be closed or to wrap the intended text.
# This is intentional as you specify when it should stop.
# If you want to reset everything, both color and style, then you can use `[]`.
SAIML.print("[@Fred]*I have a color and style[], and I don't")
# SAIML also has rich exceptions that are called when you don't close a macro, don't specify the macro type, or don't
# specify if a color is for the background or foreground
# Additionally, if you want to display the special characters `[`, `]`, `*`, and `_` you can use the `\` escape character.
# SAIML.print("\[@Fred] Is one way you can make the text red")
# Escaping `[` will also automatically escape the `]`.
SAIML.print(
"[~https://tired-fox.github.io/SAIMLDecor/teddecor.html]\[^rainbow]Documentation"
)
# If you have a long string that you want to escape you can use `SAIML.escape`
SAIML.print(
SAIML.escape(
"[] _This will be * a literal where_ you see __ the markdown * characters"
)
)
# This is a literal block to show what will be output next using `\`
SAIML.print(
"\[~https://tired-fox.github.io/SAIMLDecor/teddecor.html ^rainbow]Documentation"
)
# Now for the output
SAIML.print(
"[~https://tired-fox.github.io/SAIMLDecor/teddecor.html ^rainbow]Documentation"
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | /saj_distributions-0.1.tar.gz/saj_distributions-0.1/saj_distributions/Gaussiandistribution.py | 0.688364 | 0.853058 | Gaussiandistribution.py | pypi |
from colorama import Fore
class TableVetayenaKanchha(Exception):
"""
Error raised when the given table does not exist in the database!
"""
def __init__(self, table_name,db_name) -> None:
super().__init__(Fore.RED + f"Timle deko table '{table_name}' {db_name} vanne database ma nai vetayena ! Spelling bigryo ki herata ramro sanga !")
class ColumnNaiXainaKanchha(Exception):
"""
Error raised when a column that does not exist in the table is given!
"""
def __init__(self,table_name) -> None:
super().__init__(Fore.RED + f"Timle deko column Table '{table_name}' ma nai vetayena ! Spelling bigryo ki ramro sanga herata !")
class DatabaseConnectVayenaKanchha(Exception):
"""
Error raised when the database connection config is wrong!
"""
def __init__(self) -> None:
super().__init__(Fore.RED + f"Arrey !! Database ta ramrari connect gara paila !!")
class IdXainaKanchha(Exception):
"""
Error raised when no id is given while updating data!
"""
def __init__(self) -> None:
super().__init__(Fore.RED + f"Data fernalai euta unique ID chainxa . Unique Id diyerw feri try gara !!")
class SyntaxBigryoKanchha(Exception):
"""
Error raised when the query syntax is broken!
"""
def __init__(self) -> None:
super().__init__(Fore.RED + f"Query ko syntax bigrye xa . Yedi update garda aako vaye id bahek arko field ni chainxa !!")
class DateFormatMilenaKanchha(Exception):
"""
Error raised when the date format is wrong!
"""
def __init__(self) -> None:
super().__init__(Fore.RED + f"Miti ko format bigrexa . Date Format Yesto Prakar ko Xa Hai (2022-01-01) i.e (year/month/day) !!")
class NotNullMaDataVayenaKanchha(Exception):
"""
Error raised when a not-null column has no data!
"""
def __init__(self) -> None:
super().__init__(Fore.RED + f"Not Null vako column ma data vayena . Data halerw feri try gara !!")
class MaxLengthVayenaKanchha(Exception):
"""
Error raised when max_length is missing for a String column!
"""
def __init__(self) -> None:
super().__init__(Fore.RED + f"String ma max_length compulsary xa kanchha .max_length rakherw feri try garnu hola!!") | /sajilo_orm-0.0.6-py3-none-any.whl/sajilo_orm/exceptions.py | 0.586523 | 0.155142 | exceptions.py | pypi |
import jax
import jax.numpy as jnp
from flax import linen as nn
from typing import Callable, Optional
from .utils import ExpNormalSmearing
from .functional import get_x_minus_xt, get_x_minus_xt_norm, get_h_cat_ht
from functools import partial
def double_sigmoid(x):
return 2.0 * jax.nn.sigmoid(x)
class ContinuousFilterConvolutionWithConcatenation(nn.Module):
out_features : int
kernel_features : int = 50
activation : Callable = jax.nn.silu
def setup(self):
self.kernel = ExpNormalSmearing(num_rbf=self.kernel_features)
self.mlp_in = nn.Dense(self.kernel_features)
self.mlp_out = nn.Sequential(
[
nn.Dense(self.out_features),
self.activation,
nn.Dense(self.out_features),
]
)
def __call__(self, h, x):
h0 = h
h = self.mlp_in(h)
_x = self.kernel(x) * h
h = self.mlp_out(
jnp.concatenate(
[h0, _x, x],
axis=-1
)
)
return h
class SAKELayer(nn.Module):
out_features : int
hidden_features : int
activation : Callable = jax.nn.silu
n_heads : int = 4
update: bool=True
use_semantic_attention: bool = True
use_euclidean_attention: bool = True
use_spatial_attention: bool = True
cutoff: Callable = None
def setup(self):
self.edge_model = ContinuousFilterConvolutionWithConcatenation(self.hidden_features)
self.n_coefficients = self.n_heads * self.hidden_features
self.node_mlp = nn.Sequential(
[
# nn.LayerNorm(),
nn.Dense(self.hidden_features),
self.activation,
nn.Dense(self.out_features),
self.activation,
]
)
if self.update:
self.velocity_mlp = nn.Sequential(
[
nn.Dense(self.hidden_features),
self.activation,
nn.Dense(1, use_bias=False),
double_sigmoid,
],
)
self.semantic_attention_mlp = nn.Sequential(
[
nn.Dense(self.n_heads),
partial(nn.celu, alpha=2.0),
],
)
self.post_norm_mlp = nn.Sequential(
[
nn.Dense(self.hidden_features),
self.activation,
nn.Dense(self.hidden_features),
self.activation,
]
)
self.v_mixing = nn.Dense(1, use_bias=False)
self.x_mixing = nn.Sequential([nn.Dense(self.n_coefficients, use_bias=False), jnp.tanh])
log_gamma = -jnp.log(jnp.linspace(1.0, 5.0, self.n_heads))
if self.use_semantic_attention and self.use_euclidean_attention:
self.log_gamma = self.param(
"log_gamma",
nn.initializers.constant(log_gamma),
log_gamma.shape,
)
else:
self.log_gamma = jnp.ones(self.n_heads)
class DenseSAKELayer(SAKELayer):
def spatial_attention(self, h_e_mtx, x_minus_xt, x_minus_xt_norm, mask=None):
# (batch_size, n, n, n_coefficients)
# coefficients = self.coefficients_mlp(h_e_mtx)# .unsqueeze(-1)
coefficients = self.x_mixing(h_e_mtx)
# (batch_size, n, n, 3)
# x_minus_xt = x_minus_xt * euclidean_attention.mean(dim=-1, keepdim=True) / (x_minus_xt_norm + 1e-5)
x_minus_xt = x_minus_xt / (x_minus_xt_norm + 1e-5) # ** 2
# (batch_size, n, n, coefficients, 3)
combinations = jnp.expand_dims(x_minus_xt, -2) * jnp.expand_dims(coefficients, -1)
if mask is not None:
_mask = jnp.expand_dims(jnp.expand_dims(mask, -1), -1)
combinations = combinations * _mask
combinations_sum = combinations.sum(axis=-3) / (_mask.sum(axis=-3) + 1e-8)
else:
# (batch_size, n, n, coefficients)
combinations_sum = combinations.mean(axis=-3)
combinations_norm = (combinations_sum ** 2).sum(-1)# .pow(0.5)
h_combinations = self.post_norm_mlp(combinations_norm)
# h_combinations = self.norm(h_combinations)
return h_combinations, combinations
def aggregate(self, h_e_mtx, mask=None):
# h_e_mtx = self.mask_self(h_e_mtx)
if mask is not None:
h_e_mtx = h_e_mtx * jnp.expand_dims(mask, -1)
h_e = h_e_mtx.sum(axis=-2)
return h_e
def node_model(self, h, h_e, h_combinations):
out = jnp.concatenate([
h,
h_e,
h_combinations,
],
axis=-1)
out = self.node_mlp(out)
out = h + out
return out
def euclidean_attention(self, x_minus_xt_norm, mask=None):
# (batch_size, n, n, 1)
_x_minus_xt_norm = x_minus_xt_norm + 1e5 * jnp.expand_dims(jnp.eye(
x_minus_xt_norm.shape[-2],
x_minus_xt_norm.shape[-2],
), -1)
if mask is not None:
_x_minus_xt_norm = _x_minus_xt_norm + 1e5 * (1- jnp.expand_dims(mask, -1))
att = jax.nn.softmax(
-_x_minus_xt_norm * jnp.exp(self.log_gamma),
axis=-2,
)
return att
def semantic_attention(self, h_e_mtx, mask=None):
# (batch_size, n, n, n_heads)
att = self.semantic_attention_mlp(h_e_mtx)
# (batch_size, n, n, n_heads)
# att = att.view(*att.shape[:-1], self.n_heads)
att = att - 1e5 * jnp.expand_dims(jnp.eye(
att.shape[-2],
att.shape[-2],
), -1)
if mask is not None:
att = att - 1e5 * (1 - jnp.expand_dims(mask, -1))
att = jax.nn.softmax(att, axis=-2)
return att
def combined_attention(self, x_minus_xt_norm, h_e_mtx, mask=None):
semantic_attention = self.semantic_attention(h_e_mtx, mask=mask)
if self.cutoff is not None:
euclidean_attention = self.cutoff(x_minus_xt_norm)
else:
euclidean_attention = 1.0
combined_attention = euclidean_attention * semantic_attention
if mask is not None:
combined_attention = combined_attention - 1e5 * (1 - jnp.expand_dims(mask, -1))
# combined_attention = jax.nn.softmax(combined_attention, axis=-2)
combined_attention = combined_attention / combined_attention.sum(axis=-2, keepdims=True)
return euclidean_attention, semantic_attention, combined_attention
def velocity_model(self, v, h):
v = self.velocity_mlp(h) * v
return v
def __call__(
self,
h,
x,
v=None,
mask=None,
he=None,
):
x_minus_xt = get_x_minus_xt(x)
x_minus_xt_norm = get_x_minus_xt_norm(x_minus_xt=x_minus_xt)
h_cat_ht = get_h_cat_ht(h)
if he is not None:
h_cat_ht = jnp.concatenate([h_cat_ht, he], -1)
h_e_mtx = self.edge_model(h_cat_ht, x_minus_xt_norm)
euclidean_attention, semantic_attention, combined_attention = self.combined_attention(x_minus_xt_norm, h_e_mtx, mask=mask)
h_e_att = jnp.expand_dims(h_e_mtx, -1) * jnp.expand_dims(combined_attention, -2)
h_e_att = jnp.reshape(h_e_att, h_e_att.shape[:-2] + (-1, ))
h_combinations, delta_v = self.spatial_attention(h_e_att, x_minus_xt, x_minus_xt_norm, mask=mask)
if not self.use_spatial_attention:
h_combinations = jnp.zeros_like(h_combinations)
delta_v = jnp.zeros_like(delta_v)
# h_e_mtx = (h_e_mtx.unsqueeze(-1) * combined_attention.unsqueeze(-2)).flatten(-2, -1)
h_e = self.aggregate(h_e_att, mask=mask)
h = self.node_model(h, h_e, h_combinations)
if self.update:
if mask is not None:
delta_v = self.v_mixing(delta_v.swapaxes(-1, -2)).swapaxes(-1, -2).sum(axis=(-2, -3))
delta_v = delta_v / (mask.sum(-1, keepdims=True) + 1e-10)
else:
delta_v = self.v_mixing(delta_v.swapaxes(-1, -2)).swapaxes(-1, -2).mean(axis=(-2, -3))
if v is not None:
v = self.velocity_model(v, h)
else:
v = jnp.zeros_like(x)
v = delta_v + v
x = x + v
return h, x, v
def segment_mean(data: jnp.ndarray,
segment_ids: jnp.ndarray,
num_segments: Optional[int] = None,
indices_are_sorted: bool = False,
unique_indices: bool = False):
"""Returns mean for each segment.
Args:
data: the values which are averaged segment-wise.
segment_ids: indices for the segments.
num_segments: total number of segments.
indices_are_sorted: whether ``segment_ids`` is known to be sorted.
unique_indices: whether ``segment_ids`` is known to be free of duplicates.
"""
nominator = jax.ops.segment_sum(
data,
segment_ids,
num_segments,
indices_are_sorted=indices_are_sorted,
unique_indices=unique_indices)
denominator = jax.ops.segment_sum(
jnp.ones_like(data),
segment_ids,
num_segments,
indices_are_sorted=indices_are_sorted,
unique_indices=unique_indices)
return nominator / jnp.maximum(denominator,
jnp.ones(shape=[], dtype=denominator.dtype))
class SparseSAKELayer(SAKELayer):
def spatial_attention(self, h_e_mtx, x_minus_xt, x_minus_xt_norm, idxs):
# (batch_size, n, n, n_coefficients)
# coefficients = self.coefficients_mlp(h_e_mtx)# .unsqueeze(-1)
coefficients = self.x_mixing(h_e_mtx)
# (batch_size, n, n, 3)
# x_minus_xt = x_minus_xt * euclidean_attention.mean(dim=-1, keepdim=True) / (x_minus_xt_norm + 1e-5)
x_minus_xt = x_minus_xt / (x_minus_xt_norm + 1e-5) # ** 2
# (batch_size, n, n, coefficients, 3)
combinations = jnp.expand_dims(x_minus_xt, -2) * jnp.expand_dims(coefficients, -1)
# (batch_size, n, n, coefficients, 3)
# dense: combinations_sum = combinations.mean(axis=-3)
combinations_sum = segment_mean(
combinations.swapaxes(-3, -5),
idxs[..., -1],
).swapaxes(-2, -4)
combinations_norm = (combinations_sum ** 2).sum(-1)# .pow(0.5)
h_combinations = self.post_norm_mlp(combinations_norm)
# h_combinations = self.norm(h_combinations)
return h_combinations, combinations
def aggregate(self, h_e_mtx, mask=None):
# h_e_mtx = self.mask_self(h_e_mtx)
if mask is not None:
h_e_mtx = h_e_mtx * jnp.expand_dims(mask, -1)
h_e = h_e_mtx.sum(axis=-2)
return h_e
def node_model(self, h, h_e, h_combinations):
out = jnp.concatenate([
h,
h_e,
h_combinations,
],
axis=-1)
out = self.node_mlp(out)
out = h + out
return out
def euclidean_attention(self, x_minus_xt_norm, mask=None):
# (batch_size, n, n, 1)
_x_minus_xt_norm = x_minus_xt_norm + 1e5 * jnp.expand_dims(jnp.eye(
x_minus_xt_norm.shape[-2],
x_minus_xt_norm.shape[-2],
), -1)
if mask is not None:
_x_minus_xt_norm = _x_minus_xt_norm + 1e5 * (1- jnp.expand_dims(mask, -1))
att = jax.nn.softmax(
-_x_minus_xt_norm * jnp.exp(self.log_gamma),
axis=-2,
)
return att
def semantic_attention(self, h_e_mtx, mask=None):
# (batch_size, n, n, n_heads)
att = self.semantic_attention_mlp(h_e_mtx)
# (batch_size, n, n, n_heads)
# att = att.view(*att.shape[:-1], self.n_heads)
att = att - 1e5 * jnp.expand_dims(jnp.eye(
att.shape[-2],
att.shape[-2],
), -1)
if mask is not None:
att = att - 1e5 * (1 - jnp.expand_dims(mask, -1))
att = jax.nn.softmax(att, axis=-2)
return att
def combined_attention(self, x_minus_xt_norm, h_e_mtx, mask=None):
semantic_attention = self.semantic_attention(h_e_mtx, mask=mask)
if self.cutoff is not None:
euclidean_attention = self.cutoff(x_minus_xt_norm)
else:
euclidean_attention = 1.0
combined_attention = euclidean_attention * semantic_attention
if mask is not None:
combined_attention = combined_attention - 1e5 * (1 - jnp.expand_dims(mask, -1))
# combined_attention = jax.nn.softmax(combined_attention, axis=-2)
combined_attention = combined_attention / combined_attention.sum(axis=-2, keepdims=True)
return euclidean_attention, semantic_attention, combined_attention
def velocity_model(self, v, h):
v = self.velocity_mlp(h) * v
return v
def __call__(
self,
h,
x,
v=None,
mask=None,
he=None,
):
x_minus_xt = get_x_minus_xt(x)
x_minus_xt_norm = get_x_minus_xt_norm(x_minus_xt=x_minus_xt)
h_cat_ht = get_h_cat_ht(h)
if he is not None:
h_cat_ht = jnp.concatenate([h_cat_ht, he], -1)
h_e_mtx = self.edge_model(h_cat_ht, x_minus_xt_norm)
euclidean_attention, semantic_attention, combined_attention = self.combined_attention(x_minus_xt_norm, h_e_mtx, mask=mask)
h_e_att = jnp.expand_dims(h_e_mtx, -1) * jnp.expand_dims(combined_attention, -2)
h_e_att = jnp.reshape(h_e_att, h_e_att.shape[:-2] + (-1, ))
h_combinations, delta_v = self.spatial_attention(h_e_att, x_minus_xt, x_minus_xt_norm, mask=mask)
if not self.use_spatial_attention:
h_combinations = jnp.zeros_like(h_combinations)
delta_v = jnp.zeros_like(delta_v)
# h_e_mtx = (h_e_mtx.unsqueeze(-1) * combined_attention.unsqueeze(-2)).flatten(-2, -1)
h_e = self.aggregate(h_e_att, mask=mask)
h = self.node_model(h, h_e, h_combinations)
if self.update:
if mask is not None:
delta_v = self.v_mixing(delta_v.swapaxes(-1, -2)).swapaxes(-1, -2).sum(axis=(-2, -3))
delta_v = delta_v / (mask.sum(-1, keepdims=True) + 1e-10)
else:
delta_v = self.v_mixing(delta_v.swapaxes(-1, -2)).swapaxes(-1, -2).mean(axis=(-2, -3))
if v is not None:
v = self.velocity_model(v, h)
else:
v = jnp.zeros_like(x)
v = delta_v + v
x = x + v
return h, x, v
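The semantic and combined attention above suppress self-edges and padded nodes by subtracting a large constant before a softmax over the sender axis. A standalone numpy sketch of that masking pattern (function name hypothetical, not part of the library):

```python
import numpy as np

def masked_softmax_attention(att, mask=None):
    # suppress self-edges (i == j) with a large negative bias
    n = att.shape[-2]
    att = att - 1e5 * np.expand_dims(np.eye(n), -1)
    # suppress edges to padded nodes
    if mask is not None:
        att = att - 1e5 * (1 - np.expand_dims(mask, -1))
    # softmax over the sender axis (-2), numerically stabilised
    e = np.exp(att - att.max(axis=-2, keepdims=True))
    return e / e.sum(axis=-2, keepdims=True)

att = np.zeros((3, 3, 1))  # three nodes, one attention head
w = masked_softmax_attention(att)
```

With uniform logits, each node attends equally to the other two nodes and (almost) not at all to itself, and weights still sum to one along the sender axis.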
class EquivariantGraphConvolutionalLayer(nn.Module):
out_features : int
hidden_features : int
activation : Callable = jax.nn.silu
update : bool = False
sigmoid : bool = False
def setup(self):
self.node_mlp = nn.Sequential(
[
nn.Dense(self.hidden_features),
self.activation,
nn.Dense(self.out_features),
self.activation,
]
)
self.scaling_mlp = nn.Sequential(
[
nn.Dense(self.hidden_features),
self.activation,
nn.Dense(1, use_bias=False),
],
)
self.shifting_mlp = nn.Sequential(
[
nn.Dense(self.hidden_features),
self.activation,
nn.Dense(1, use_bias=False),
],
)
if self.sigmoid:
self.edge_model = nn.Sequential(
[
nn.Dense(1, use_bias=False),
jax.nn.sigmoid,
],
)
def aggregate(self, h_e_mtx, mask=None):
# h_e_mtx = self.mask_self(h_e_mtx)
if mask is not None:
h_e_mtx = h_e_mtx * jnp.expand_dims(mask, -1)
if self.sigmoid:
h_e_weights = self.edge_model(h_e_mtx)
h_e_mtx = h_e_weights * h_e_mtx
h_e = h_e_mtx.sum(axis=-2)
return h_e
def node_model(self, h, h_e):
out = jnp.concatenate([
h,
h_e,
],
axis=-1)
out = self.node_mlp(out)
out = h + out
return out
    def velocity_model(self, v, h):
        # NOTE: relies on self.velocity_mlp, which this layer never defines;
        # dead code: __call__ updates velocity via scale/shift instead
        v = self.velocity_mlp(h) * v
        return v
def __call__(
self,
h,
x,
v=None,
        mask=None,
        he=None,  # accepted for signature compatibility; unused by this layer
    ):
x_minus_xt = get_x_minus_xt(x)
x_minus_xt_norm = get_x_minus_xt_norm(x_minus_xt=x_minus_xt)
h_cat_ht = get_h_cat_ht(h)
h_e_mtx = jnp.concatenate([h_cat_ht, x_minus_xt_norm], axis=-1)
h_e = self.aggregate(h_e_mtx, mask=mask)
shift = self.shifting_mlp(h_e_mtx).sum(-2)
scale = self.scaling_mlp(h)
if self.update:
v = v * scale + shift
x = x + v
h = self.node_model(h, h_e)
return h, x, v
class EquivariantGraphConvolutionalLayerWithSmearing(nn.Module):
out_features : int
hidden_features : int
activation : Callable = jax.nn.silu
update : bool = False
sigmoid: bool = True
def setup(self):
self.edge_model = ContinuousFilterConvolutionWithConcatenation(self.hidden_features)
self.node_mlp = nn.Sequential(
[
nn.Dense(self.hidden_features),
self.activation,
nn.Dense(self.out_features),
self.activation,
]
)
self.scaling_mlp = nn.Sequential(
[
nn.Dense(self.hidden_features),
self.activation,
nn.Dense(1, use_bias=False),
],
)
self.shifting_mlp = nn.Sequential(
[
nn.Dense(self.hidden_features),
self.activation,
nn.Dense(1, use_bias=False),
],
)
if self.sigmoid:
self.edge_att = nn.Sequential(
[
nn.Dense(1, use_bias=False),
jax.nn.sigmoid,
],
)
def aggregate(self, h_e_mtx, mask=None):
# h_e_mtx = self.mask_self(h_e_mtx)
if mask is not None:
h_e_mtx = h_e_mtx * jnp.expand_dims(mask, -1)
if self.sigmoid:
h_e_weights = self.edge_att(h_e_mtx)
h_e_mtx = h_e_weights * h_e_mtx
h_e = h_e_mtx.sum(axis=-2)
return h_e
def node_model(self, h, h_e):
out = jnp.concatenate([
h,
h_e,
],
axis=-1)
out = self.node_mlp(out)
out = h + out
return out
    def velocity_model(self, v, h):
        # NOTE: relies on self.velocity_mlp, which this layer never defines;
        # dead code: __call__ updates velocity via scale/shift instead
        v = self.velocity_mlp(h) * v
        return v
def __call__(
self,
h,
x,
v=None,
        mask=None,
        he=None,  # accepted for signature compatibility; unused by this layer
    ):
x_minus_xt = get_x_minus_xt(x)
x_minus_xt_norm = get_x_minus_xt_norm(x_minus_xt=x_minus_xt)
h_cat_ht = get_h_cat_ht(h)
h_e_mtx = self.edge_model(h_cat_ht, x_minus_xt_norm)
h_e = self.aggregate(h_e_mtx, mask=mask)
shift = self.shifting_mlp(h_e_mtx).sum(-2)
scale = self.scaling_mlp(h)
if self.update:
v = v * scale + shift
x = x + v
h = self.node_model(h, h_e)
        return h, x, v
# ---- end of file: sake/layers.py ----
import jax
import jax.numpy as jnp
import numpy as onp
from flax import linen as nn
import math
def coloring(x, mean, std):
return std * x + mean
def cosine_cutoff(x, lower=0.0, upper=5.0):
cutoffs = 0.5 * (
jnp.cos(
math.pi
* (
2
* (x - lower)
/ (upper - lower)
+ 1.0
)
)
+ 1.0
)
    # zero out contributions outside the cutoff radius
    cutoffs = cutoffs * (x < upper) * (x > lower)
    return cutoffs
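`cosine_cutoff` is a raised-cosine window: zero at both cutoff bounds and one at the midpoint. A pure-Python sketch of the same formula (helper name hypothetical):

```python
import math

def cosine_window(x, lower=0.0, upper=5.0):
    # 0 at both cutoff bounds, rising smoothly to 1 at the midpoint
    if not (lower < x < upper):
        return 0.0
    return 0.5 * (math.cos(math.pi * (2.0 * (x - lower) / (upper - lower) + 1.0)) + 1.0)

vals = [cosine_window(x) for x in (0.0, 2.5, 5.0)]
```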
class ExpNormalSmearing(nn.Module):
cutoff_lower: float = 0.0
cutoff_upper: float = 5.0
    num_rbf: int = 50
def setup(self):
self.alpha = 5.0 / (self.cutoff_upper - self.cutoff_lower)
means, betas = self._initial_params()
self.out_features = self.num_rbf
self.means = self.param(
"means",
nn.initializers.constant(means),
means.shape,
)
self.betas = self.param(
"betas",
nn.initializers.constant(betas),
betas.shape,
)
def _initial_params(self):
# initialize means and betas according to the default values in PhysNet
# https://pubs.acs.org/doi/10.1021/acs.jctc.9b00181
start_value = jnp.exp(
-self.cutoff_upper + self.cutoff_lower
)
means = jnp.linspace(start_value, 1, self.num_rbf)
betas = jnp.array(
[(2 / self.num_rbf * (1 - start_value)) ** -2] * self.num_rbf
)
return means, betas
def __call__(self, dist):
return jnp.exp(
-self.betas
* (jnp.exp(self.alpha * (-dist + self.cutoff_lower)) - self.means) ** 2
)
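`ExpNormalSmearing` expands a distance into the PhysNet exp-normal radial basis; its default (untrained) means and betas can be reproduced in plain numpy (function name hypothetical):

```python
import numpy as np

def exp_normal_rbf(dist, num_rbf=50, lower=0.0, upper=5.0):
    # PhysNet-style exp-normal radial basis with the default (untrained) parameters
    alpha = 5.0 / (upper - lower)
    start = np.exp(-upper + lower)
    means = np.linspace(start, 1.0, num_rbf)
    betas = np.full(num_rbf, (2.0 / num_rbf * (1.0 - start)) ** -2)
    return np.exp(-betas * (np.exp(alpha * (lower - dist)) - means) ** 2)

feats = exp_normal_rbf(0.0)
```

At zero distance the exponential term equals one, so the last basis function (whose mean is one) peaks at exactly 1.0.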
@jax.jit
def mae(x, y):
return jnp.abs(x - y).mean()
@jax.jit
def mae_with_replacement(x, y, seed=0):
key = jax.random.PRNGKey(seed)
idxs = jax.random.choice(
key, x.shape[0], shape=(x.shape[0],), replace=True,
)
x = x[idxs]
y = y[idxs]
return mae(x, y)
def bootstrap_mae(x, y, n_samples=10, ci=0.95):
original = jnp.abs(x - y).mean().item()
results = []
for idx in range(n_samples):
result = mae_with_replacement(x, y, idx).item()
results.append(result)
low = onp.percentile(results, 100.0 * 0.5 * (1 - ci))
high = onp.percentile(results, (1 - ((1 - ci) * 0.5)) * 100.0)
    return original, low, high
# ---- end of file: sake/utils.py ----
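The bootstrap in `bootstrap_mae` resamples residuals with replacement and reads off percentile bounds. An equivalent numpy-only sketch (function name hypothetical; when predictions and targets differ by a constant, all bounds collapse onto that constant):

```python
import numpy as np

def bootstrap_mae_np(x, y, n_samples=200, ci=0.95, seed=0):
    # resample residuals with replacement and take percentile bounds
    rng = np.random.default_rng(seed)
    original = float(np.abs(x - y).mean())
    stats = []
    for _ in range(n_samples):
        idx = rng.integers(0, len(x), size=len(x))
        stats.append(float(np.abs(x[idx] - y[idx]).mean()))
    low = float(np.percentile(stats, 100.0 * 0.5 * (1.0 - ci)))
    high = float(np.percentile(stats, 100.0 * (1.0 - 0.5 * (1.0 - ci))))
    return original, low, high

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.5, 1.5, 2.5, 3.5])
orig, low, high = bootstrap_mae_np(x, y)
```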
import jax
import jax.numpy as jnp
from flax import linen as nn
from typing import Callable, Union, List
from .layers import (
DenseSAKELayer,
EquivariantGraphConvolutionalLayer,
EquivariantGraphConvolutionalLayerWithSmearing,
)
class DenseSAKEModel(nn.Module):
hidden_features: int
out_features: int
depth: int = 4
activation: Callable=nn.silu
update: Union[List[bool], bool]=True
use_semantic_attention: bool = True
use_euclidean_attention: bool = True
use_spatial_attention: bool = True
n_heads: int=4
cutoff: Callable=None
def setup(self):
self.embedding_in = nn.Dense(self.hidden_features)
self.embedding_out = nn.Sequential(
[
nn.Dense(self.hidden_features),
self.activation,
nn.Dense(self.out_features),
],
)
if isinstance(self.update, bool):
update = [self.update for _ in range(self.depth)]
else:
update = self.update
for idx in range(self.depth):
setattr(
self,
"d%s" % idx,
DenseSAKELayer(
hidden_features=self.hidden_features,
out_features=self.hidden_features,
update=update[idx],
use_semantic_attention=self.use_semantic_attention,
use_euclidean_attention=self.use_euclidean_attention,
use_spatial_attention=self.use_spatial_attention,
n_heads=self.n_heads,
cutoff=self.cutoff,
),
)
self.layers = [getattr(self, "d%s" % idx) for idx in range(self.depth)]
def __call__(self, h, x, v=None, mask=None, he=None):
h = self.embedding_in(h)
for layer in self.layers:
h, x, v = layer(h, x, v, mask=mask, he=he)
h = self.embedding_out(h)
return h, x, v
class EquivariantGraphNeuralNetwork(nn.Module):
hidden_features: int
out_features: int
depth: int = 4
activation: Callable=nn.silu
update: Union[List[bool], bool]=True
smear: bool = False
sigmoid: bool = False
def setup(self):
self.embedding_in = nn.Dense(self.hidden_features)
self.embedding_out = nn.Sequential(
[
nn.Dense(self.hidden_features),
self.activation,
nn.Dense(self.out_features),
],
)
        if self.smear:
            layer = EquivariantGraphConvolutionalLayerWithSmearing
        else:
            layer = EquivariantGraphConvolutionalLayer
        # broadcast a single bool to one flag per layer, as DenseSAKEModel does
        if isinstance(self.update, bool):
            update = [self.update for _ in range(self.depth)]
        else:
            update = self.update
        for idx in range(self.depth):
            setattr(
                self,
                "d%s" % idx,
                layer(
                    hidden_features=self.hidden_features,
                    out_features=self.hidden_features,
                    activation=self.activation,
                    update=update[idx],
                    sigmoid=self.sigmoid,
                ),
            )
self.layers = [getattr(self, "d%s" % idx) for idx in range(self.depth)]
def __call__(self, h, x, v=None, mask=None, he=None):
h = self.embedding_in(h)
if v is None:
v = jnp.zeros_like(x)
for layer in self.layers:
h, x, v = layer(h, x, v, mask=mask, he=he)
h = self.embedding_out(h)
        return h, x, v
# ---- end of file: sake/models.py ----
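`DenseSAKEModel` accepts either one `update` flag for the whole stack or one per layer; the normalisation it applies can be sketched on its own (helper name hypothetical):

```python
def normalize_update_flags(update, depth):
    # a single bool applies to every layer; a list must already match the depth
    if isinstance(update, bool):
        return [update] * depth
    if len(update) != depth:
        raise ValueError("need one update flag per layer")
    return list(update)

flags = normalize_update_flags(True, 4)
mixed = normalize_update_flags([True, False, True], 3)
```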
import io
import os
from shutil import copyfile
from sakee import addoninfo
from sakee.colors import Colors
from sakee.stub import KodiStub
class File(object): # NOSONAR
def __init__(self, path, flags='r'):
""" File class.
:param str path: The file or directory to open
:param str flags: The flags used to open the file
"""
if flags not in ['r', 'w']:
raise ValueError("flags should be 'r' or 'w'")
self._file = io.open(path, flags + 'b')
def close(self):
""" Close the file. """
self._file.close()
# noinspection PyShadowingBuiltins
def read(self, bytes=-1):
""" Read from the file.
:param int bytes: How many bytes to read
:return: The read bytes
:rtype: str
"""
# noinspection PyUnresolvedReferences
return self._file.read(bytes).decode('utf-8')
# noinspection PyPep8Naming
def readBytes(self, numbytes=-1): # NOSONAR
""" Read from the file.
:param int numbytes: How many bytes to read
:return: The read bytes
:rtype: bytes
"""
return self._file.read(numbytes)
# noinspection PyPep8Naming
def seek(self, seekBytes, iWhence=0): # NOSONAR
""" Seek to position in file.
:param int seekBytes: Position in the file
:param int iWhence: Where in a file to seek from (0=beginning, 1=current, 2=end position)
:return: The current position in the file
:rtype: int
"""
return self._file.seek(seekBytes, iWhence)
def size(self):
""" Get the file size.
:return: The file size
:rtype: int
"""
self._file.flush()
return os.fstat(self._file.fileno()).st_size
def tell(self):
""" Get the current position in the file.
:return: The current position in the file
:rtype: int
"""
return self._file.tell()
def write(self, buffer):
""" Write to the file.
:param str|byte buffer: Data to write to the file
:return: True if successful
:rtype bool
"""
if isinstance(buffer, bytes):
# noinspection PyTypeChecker
bytes_written = self._file.write(buffer)
else:
# noinspection PyTypeChecker
bytes_written = self._file.write(buffer.encode())
return bytes_written == len(buffer)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.close()
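The `File` wrapper opens everything in binary mode, decodes on `read`, and measures size via `os.fstat`; those semantics can be exercised directly with the standard library:

```python
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sample.txt")

# write bytes, as File.write does for str input
with io.open(path, "wb") as f:
    f.write("hello".encode())

# read back with a seek, as File.seek/File.read would
with io.open(path, "rb") as f:
    f.seek(1)
    rest = f.read().decode("utf-8")
    size = os.fstat(f.fileno()).st_size
```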
class Stat(object): # NOSONAR
def __init__(self, path):
""" Stat class.
:param str path: The file or directory to stat
"""
self._stat = os.stat(path)
def st_atime(self):
""" Returns the st_atime attribute.
:rtype: float
:return: The st_atime attribute
"""
return self._stat.st_atime
def st_ctime(self):
""" Returns the st_ctime attribute.
:rtype: float
:return: The st_ctime attribute
"""
return self._stat.st_ctime
def st_dev(self):
""" Returns the st_dev attribute.
:rtype: int
:return: The st_dev attribute
"""
return self._stat.st_dev
def st_gid(self):
""" Returns the st_gid attribute.
:rtype: int
:return: The st_gid attribute
"""
return self._stat.st_gid
def st_ino(self):
""" Returns the st_ino attribute.
:rtype: int
:return: The st_ino attribute
"""
return self._stat.st_ino
def st_mode(self):
""" Returns the st_mode attribute.
:rtype: int
:return: The st_mode attribute
"""
return self._stat.st_mode
def st_mtime(self):
""" Returns the st_mtime attribute.
:rtype: float
:return: The st_mtime attribute
"""
return self._stat.st_mtime
def st_nlink(self):
""" Returns the st_nlink attribute.
:rtype: int
:return: The st_nlink attribute
"""
return self._stat.st_nlink
def st_size(self):
""" Returns the st_size attribute.
:rtype: int
:return: The st_size attribute
"""
return self._stat.st_size
def st_uid(self):
""" Returns the st_uid attribute.
:rtype: int
:return: The st_uid attribute
"""
return self._stat.st_uid
def copy(source, destination):
""" Copy file to destination, returns true/false.
:param str source: The file to copy
:param str destination: The destination
:return: True if successful
:rtype bool
"""
return copyfile(source, destination) == destination
# noinspection PyPep8Naming
def rename(file, newFile): # NOSONAR
""" file, newFile
:param str file: File to rename
:param str newFile: New filename, including the full path.
:return: True if successed
:rtype: bool
"""
try:
os.rename(file, newFile)
return True
except:
return False
def delete(file):
""" Delete a file.
:param str file: File to delete
:return: True if successful
:rtype bool
"""
try:
os.remove(file)
return True
except OSError:
return False
def exists(path):
""" Check for a file or folder existence.
:param str path: File or folder (folder must end with slash or backslash)
:return: True if the file exists
:rtype bool
"""
return os.path.exists(path)
def listdir(path):
""" Lists content of a folder.
:param str path: Folder to get list from
:return: Directory content list (directories, files)
:rtype (list[str], list[str])
"""
files = []
dirs = []
if not exists(path):
return dirs, files
for filename in os.listdir(path):
fullname = os.path.join(path, filename)
if os.path.isfile(fullname):
files.append(filename)
if os.path.isdir(fullname):
dirs.append(filename)
return dirs, files
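The directory/file split performed by `listdir` is plain `os.listdir` plus `isdir`/`isfile` checks; a minimal standalone demonstration:

```python
import os
import tempfile

root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "sub"))
open(os.path.join(root, "a.txt"), "w").close()

# partition entries into directories and files, as listdir above does
dirs, files = [], []
for name in os.listdir(root):
    full = os.path.join(root, name)
    (dirs if os.path.isdir(full) else files).append(name)
```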
# noinspection PyPep8Naming
def makeLegalFilename(path): # NOSONAR
""" Returns a legal filename or path as a string.
:param str path: Filename or path to make legal.
:return: Legal filename or path as a string
:rtype str
"""
return os.path.normpath(path)
def mkdir(path):
    """ Create a directory.
    :param str path: Directory to create
    :return: True if successful
    :rtype bool
    """
    try:
        os.mkdir(path)
        return True
    except OSError:
        return False
def mkdirs(path):
""" Create a directory and all the directories along the path.
:param str path: Directory to create
:return: True if successful
:rtype bool
"""
if os.path.exists(path):
return True
try:
os.makedirs(path)
return os.path.exists(path)
except OSError:
return False
def rmdir(path):
    """ Remove a directory.
    :param str path: Directory to remove
    :return: True if successful
    :rtype bool
    """
    try:
        os.rmdir(path)
        return True
    except OSError:
        return False
# noinspection PyPep8Naming
def validatePath(path): # NOSONAR
""" Returns the validated path.
:param str path: Path to format
:return: The validated path
:rtype str
"""
return os.path.normpath(path)
# noinspection PyPep8Naming
def translatePath(path): # NOSONAR
""" Returns the translated path.
:param str path: Path to format
:return: Translated path
:rtype: str
See http://kodi.wiki/view/Special_protocol
E.g:
special://home/ is mapped to: kodi/
special://profile/ is mapped to: kodi/userdata
Or in portable:
special://home/ is mapped to: kodi/portable_data/
special://profile/ is mapped to: kodi/portable_data/userdata
"""
def get_return_path(base_path, name, *segments):
if not base_path:
raise ValueError("Missing __kodi_{}_path data".format(name))
new_path = os.path.join(base_path, *[i.replace("/", os.sep) for i in segments if i and i != ''])
return new_path
if path.startswith("special://profile/"):
return_path = get_return_path(__add_on_info.kodi_profile_path,
"profile",
path.replace("special://profile/", ""))
elif path.startswith("special://home/"):
return_path = get_return_path(__add_on_info.kodi_home_path,
"home",
path.replace("special://home/", ""))
elif path.startswith("special://xbmcbin/"):
return_path = get_return_path(__add_on_info.kodi_home_path,
"home",
"system",
path.replace("special://xbmcbin/", ""))
elif os.path.isabs(path):
return path
else:
raise ValueError("Invalid special path: %s" % (path,))
actual_path = os.path.abspath(return_path)
KodiStub.print_line("Mapped '{0}' -> '{1}'".format(path, actual_path), color=Colors.Blue)
return actual_path
__add_on_info = addoninfo.get_add_on_info_from_calling_script()
# ---- end of file: sakee/xbmcvfs.py ----
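`translatePath` maps Kodi `special://` prefixes onto the configured home and profile directories. A simplified standalone mapper covering the same three prefixes (function and argument names hypothetical):

```python
import os

def translate_special(path, home, profile):
    # only the prefixes handled by translatePath above
    prefixes = {
        "special://profile/": profile,
        "special://home/": home,
        "special://xbmcbin/": os.path.join(home, "system"),
    }
    for prefix, base in prefixes.items():
        if path.startswith(prefix):
            rest = path[len(prefix):].replace("/", os.sep)
            return os.path.abspath(os.path.join(base, rest))
    if os.path.isabs(path):
        return path
    raise ValueError("Invalid special path: %s" % (path,))

mapped = translate_special("special://home/addons/x", "/kodi", "/kodi/userdata")
expected = os.path.abspath(os.path.join("/kodi", "addons", "x"))
```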
from sakura.daemon.processing.operator import Operator
from sakura.daemon.processing.source import ComputedSource
from numpy.lib import recfunctions
from time import time
import numpy as np
class PlotOperator(Operator):
NAME = "Plot"
SHORT_DESC = "Displays a plot from a list of 2D points"
TAGS = [ "visualisation"]
def construct(self):
# inputs
self.input = self.register_input('Table with X and Y column',
on_change = self.update_iterator)
# parameters
self.input_column_param_x = self.register_parameter(
'NUMERIC_COLUMN_SELECTION', 'X (abscissa)', self.input,
on_change = self.update_iterator)
self.input_column_param_y = self.register_parameter(
'NUMERIC_COLUMN_SELECTION', 'Y (ordinate)', self.input,
on_change = self.update_iterator)
# additional tabs
self.register_tab('Plot', 'plot.html')
self.iterator = None
def update_iterator(self):
column_x = self.input_column_param_x.column
column_y = self.input_column_param_y.column
if column_x is None or column_y is None:
self.iterator = None
else:
source = self.input.source
source = source.select(column_x, column_y)
self.iterator = source.chunks()
    def handle_event(self, ev_type, time_credit):
        deadline = time() + time_credit
if not self.input.connected():
return { 'issue': 'NO DATA: Input is not connected.' }
if ev_type == 'get_data': #first time data is asked
self.update_iterator() # re-init iterator
big_chunk = None
for chunk in self.iterator:
if big_chunk is None:
big_chunk = chunk
else:
# concatenate
big_chunk = recfunctions.stack_arrays((big_chunk, chunk))
if time() > deadline:
return {'dp': big_chunk, 'done': False}
        return {'dp': big_chunk, 'done': True}
# ---- end of file: operators/plot/operator.py ----
from sakura.daemon.processing.operator import Operator
from sakura.daemon.processing.source import ComputedSource
from numpy.lib import recfunctions
from time import time
import numpy as np
class gps2d(Operator):
NAME = "GPS_2D"
SHORT_DESC = "Displays trajectories on a 2D map."
TAGS = [ "visualisation"]
def construct(self):
# inputs
self.input = self.register_input('Input gps table')
# parameters
self.input_column_param_id = self.register_parameter(
'STRING_COLUMN_SELECTION', 'Trajectory ids', self.input)
self.input_column_param_lon = self.register_parameter(
'NUMERIC_COLUMN_SELECTION', 'longitude', self.input)
self.input_column_param_lat = self.register_parameter(
'NUMERIC_COLUMN_SELECTION', 'latitude', self.input)
# additional tabs
self.register_tab('GPS_2D', 'gps2d.html')
self.iterator = None
def handle_event(self, ev_type):
if not self.input.connected():
return { 'issue': 'NO DATA: Input is not connected.' }
if 'get_data' in ev_type:
if ev_type == 'get_data_first':
#Init iterator
column_id = self.input_column_param_id.column
column_x = self.input_column_param_lon.column
column_y = self.input_column_param_lat.column
source = self.input.source
source = source.select(column_id, column_x, column_y)
self.iterator = source.chunks()
try:
chunk = next(self.iterator)
return {'db': chunk,
'max': np.max([(c[2], c[1]) for c in chunk], axis=0),
'min': np.min([(c[2], c[1]) for c in chunk], axis=0),
'end': False}
except StopIteration:
return {'db': None, 'max':None, 'min':None, 'end': True}
        return {'db': None, 'max': None, 'min': None, 'end': True}
# ---- end of file: operators/gps2d/operator.py ----
import numpy as np
import math, copy, random
def distance_2D(a, b):
return math.sqrt( (b[0]-a[0])**2 +
(b[1]-a[1])**2 )
def m_mult(a, b):
i, j = 0, 0
M = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
while i < 4:
while j < 4:
M[i*4 + j] = a[i*4]*b[j] + a[i*4+1]*b[j+4] + a[i*4+2]*b[j+8] + a[i*4+3]*b[j+12]
j += 1
i += 1
j = 0
return M
def m_rotation(vector, angle):
v = normalize(vector)
s = math.sin(angle)
c = math.cos(angle)
C = 1 - c
sx = s * v[0]
sy = s * v[1]
sz = s * v[2]
Cx = C * v[0]
Cy = C * v[1]
Cz = C * v[2]
Cxy = Cy * v[0]
Cyz = Cz * v[1]
Czx = Cx * v[2]
return np.array([v[0] * Cx + c, Cxy - sz, Czx + sy, 0.0,
Cxy + sz, v[1] * Cy + c, Cyz - sx, 0.0,
Czx - sy, Cyz + sx, v[2] * Cz + c, 0.0,
0.0, 0.0, 0.0, 1.0]).reshape((4,4))
def rotate(p, vector, angle, pivot = []):
M = m_rotation(vector, angle)
if len(pivot) == 0:
return M.dot(np.array([p[0], p[1], p[2], 1.0]))[:3]
else:
po = np.array([p[0], p[1], p[2], 1.0])
pi = np.array([pivot[0], pivot[1], pivot[2], 1.0])
return (M.dot( po - pi) +pi)[:3]
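`m_rotation` builds the Rodrigues rotation matrix about an arbitrary axis. The same 3x3 block can be written and checked in numpy (helper name hypothetical); rotating the x axis a quarter turn about z should land on the y axis:

```python
import math
import numpy as np

def rodrigues(axis, angle):
    # rotation about a unit axis; same matrix m_rotation builds (upper-left 3x3)
    x, y, z = axis / np.linalg.norm(axis)
    c, s = math.cos(angle), math.sin(angle)
    C = 1.0 - c
    return np.array([
        [x * x * C + c,     x * y * C - s * z, x * z * C + s * y],
        [y * x * C + s * z, y * y * C + c,     y * z * C - s * x],
        [z * x * C - s * y, z * y * C + s * x, z * z * C + c],
    ])

v = rodrigues(np.array([0.0, 0.0, 1.0]), math.pi / 2.0) @ np.array([1.0, 0.0, 0.0])
```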
def normalize(v):
n = norm(v)
if n != 0:
for i in range(len(v)):
v[i] /= n
return v
for i in range(len(v)):
v[i] = 0
return v
def vector(a, b):
v = []
for i in range(len(a)):
v.append(b[i] - a[i])
return v
def norm(v):
n = 0
for i in range(len(v)):
n += v[i]*v[i]
return math.sqrt(n)
def cross (u,v):
return [u[1]*v[2] - u[2]*v[1],
u[2]*v[0] - u[0]*v[2],
u[0]*v[1] - u[1]*v[0]]
def dot(u, v):
res = 0
for i in range(len(u)):
res += u[i]*v[i]
return res
def perspective(fov, aspect, near, far):
rfov = fov*math.pi/180.
d = math.tan(rfov/2.0)
mat = np.zeros((4,4))
mat[0][0] = 1/(d*aspect)
mat[1][1] = 1/d
mat[2][2] = (near + far)/(near - far)
mat[2][3] = 2*near*far/(near-far)
mat[3][2] = -1
return mat
def viewport(sx, sy, w, h, near, far):
mat = np.identity(4)
mat[0][0] = w/2.
mat[0][3] = sx + w/2.
mat[1][1] = h/2.
mat[1][3] = sy + h/2.
mat[2][2] = (far-near)/2.
mat[2][3] = (far+near)/2.
return mat
def orthographic(left, right, bottom, top, near, far):
mat = np.identity(4)
mat[0][0] = 2./(right-left)
mat[0][3] = -(right+left)/(right-left)
mat[1][1] = 2./(top-bottom)
mat[1][3] = -(top+bottom)/(top-bottom)
mat[2][2] = -2./(far-near)
mat[2][3] = -(far+near)/(far-near)
return mat
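The orthographic matrix maps the axis-aligned viewing box onto the unit NDC cube; a numpy check that the far top-right corner lands on (1, 1, 1) (helper name hypothetical):

```python
import numpy as np

def ortho_np(left, right, bottom, top, near, far):
    # same matrix as orthographic() above
    M = np.identity(4)
    M[0, 0] = 2.0 / (right - left)
    M[0, 3] = -(right + left) / (right - left)
    M[1, 1] = 2.0 / (top - bottom)
    M[1, 3] = -(top + bottom) / (top - bottom)
    M[2, 2] = -2.0 / (far - near)
    M[2, 3] = -(far + near) / (far - near)
    return M

M = ortho_np(-2.0, 2.0, -1.0, 1.0, 0.1, 10.0)
corner = M @ np.array([2.0, 1.0, -10.0, 1.0])  # far top-right corner
```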
def look_at(eye, center, up):
f = normalize([ center[0] - eye[0],
center[1] - eye[1],
center[2] - eye[2] ])
_up = normalize(up)
s = cross(f, _up)
u = cross(normalize(s), f)
R = [ s[0], u[0], -f[0], 0,
s[1], u[1], -f[1], 0,
s[2], u[2], -f[2], 0,
0, 0, 0, 1 ]
T = [ 1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
-eye[0],-eye[1],-eye[2], 1 ]
return np.array(m_mult(T, R)).reshape((4,4)).T
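`look_at` assembles a view matrix from forward/right/up vectors. An equivalent numpy construction, checked by the defining property that the eye point maps to the origin (helper name hypothetical):

```python
import numpy as np

def look_at_np(eye, center, up):
    # rows of the rotation are right, up, -forward; translation sends eye to the origin
    eye = np.asarray(eye, dtype=float)
    f = np.asarray(center, dtype=float) - eye
    f /= np.linalg.norm(f)
    s = np.cross(f, np.asarray(up, dtype=float))
    s /= np.linalg.norm(s)
    u = np.cross(s, f)
    M = np.eye(4)
    M[0, :3], M[1, :3], M[2, :3] = s, u, -f
    M[:3, 3] = -M[:3, :3] @ eye
    return M

M = look_at_np([0.0, 0.0, 5.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
p = M @ np.array([0.0, 0.0, 5.0, 1.0])  # the eye itself
```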
def distances_point_frame(p, mins, maxs):
''' Frame is defined by two corners
we suppose 'mins' is north-west '''
vx = [1,0,0]
vz = [0,0,1]
v_mins = p - mins[0:3]
v_maxs = p - maxs[0:3]
w = norm(cross(v_mins, vz))
e = norm(cross(v_maxs, vz))
n = norm(cross(v_mins, vx))
s = norm(cross(v_maxs, vx))
return w, e, n, s
def cylinder(pos, radius, height, tess, color):
a = 2*math.pi/tess
h = np.array([0, 0, 0, height])
vertices = np.empty([tess*12, 4])
normals = np.empty([tess*12, 3])
for i in range(tess):
#vertices
p1 = np.array([math.cos(i*a)*radius, 0, math.sin(i*a)*radius, 0]) + pos
p2 = np.array([math.cos((i+1)*a)*radius, 0, math.sin((i+1)*a)*radius, 0]) + pos
p3 = p2 + h
p4 = p1 + h
#normals
n1 = normalize(p1[[0,3,2]] - pos[[0,3,2]])
n2 = normalize(p2[[0,3,2]] - pos[[0,3,2]])
n3 = normalize(p3[[0,3,2]] - (pos[[0,3,2]] + h[[0,3,2]]))
n4 = normalize(p4[[0,3,2]] - (pos[[0,3,2]] + h[[0,3,2]]))
nh = np.array([0,1,0])
for v, n, j in zip([pos + h, p4, p3, p1, p2, p3, p1, p3, p4, pos, p1, p2],
[ nh, nh, nh, n1, n2, n3, n1, n3, n4, -nh, -nh, -nh],
range(12)):
vertices[12*i + j] = v
normals [12*i + j] = n
colors = np.full((len(vertices), 4), color)
return vertices, normals, colors
def spiral(mat, seed):
flat = [copy.copy(seed)]
final = [mat[seed[0]][seed[1]]]
w = len(mat)
h = len(mat[0])
N, S, W, E = (-1, 0), (1, 0), (0, -1), (0, 1) # directions
turn_right = {N: E, E: S, S: W, W: N} # old -> new direction
turn_back = {E: N, S: E, W: S, N: W}
dir = N
while len(flat) < w*h:
seed[0] += dir[0]
seed[1] += dir[1]
#outside of the matrix
if seed[0] >= w or seed[1] >= h or seed[0] < 0 or seed[1] < 0:
seed[0] -= dir[0]
seed[1] -= dir[1]
dir = turn_right[dir]
seed[0] += dir[0]
seed[1] += dir[1]
#still in the matrix, but already passed here
elif seed in flat:
seed[0] -= dir[0]
seed[1] -= dir[1]
dir = turn_back[dir]
#here is good
else:
flat.append(copy.copy(seed))
final.append(mat[seed[0]][seed[1]])
dir = turn_right[dir]
return final
def random_color():
return np.array([ random.randint(0, 255),
random.randint(0, 255),
random.randint(0, 255)])/255.
def id_to_color(id):
r = int(id/(255*255))
v = int((id - r*(255*255))/255)
b = (id - r*255*255 - v*255)
return [r/255.0, v/255.0, b/255.0, 1.0]
def color_to_id(color):
    # avoid mutating the caller's array, and round before truncating so that
    # 129.999... recovers 130
    scaled = [round(c * 255) for c in color[:3]]
    return int(scaled[2] + scaled[1] * 255 + scaled[0] * 255 * 255)
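`id_to_color`/`color_to_id` pack a picking id into base-255 colour channels. A standalone round-trip sketch (function names hypothetical; note the `round()` guard against float truncation when scaling back to channel values):

```python
def id_to_rgba(i):
    # pack an integer id into three base-255 colour channels
    r = i // (255 * 255)
    g = (i - r * 255 * 255) // 255
    b = i - r * 255 * 255 - g * 255
    return [r / 255.0, g / 255.0, b / 255.0, 1.0]

def rgba_to_id(color):
    # round before truncating, so 129.999... recovers 130
    c = [round(x * 255) for x in color[:3]]
    return c[2] + c[1] * 255 + c[0] * 255 * 255

ident = rgba_to_id(id_to_rgba(70000))
```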
def pt_in_frame(p, mins, maxs):
if p[0] >= mins[0] and \
p[0] <= maxs[0] and \
p[1] >= mins[1] and \
p[1] <= maxs[1]:
return True
return False
def proj_pt_on_line(p, a, b):
'''point: p; line defined by [a,b]'''
ap, ab = [], []
for i in range(len(p)):
ap.append(p[i]-a[i])
ab.append(b[i]-a[i])
d = dot(ap,ab)/dot(ab,ab)
res = []
for i in range(len(p)):
res.append(a[i] + d * ab[i])
    return res
# ---- end of file: SpaceTimeCube/stc/libs/geomaths.py ----
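`proj_pt_on_line` is the standard parametric projection of a point onto a line; a self-contained version with a worked check (function name hypothetical):

```python
def project_point_on_line(p, a, b):
    # same parametric projection as proj_pt_on_line above
    ap = [pi - ai for pi, ai in zip(p, a)]
    ab = [bi - ai for bi, ai in zip(b, a)]
    d = sum(x * y for x, y in zip(ap, ab)) / sum(x * x for x in ab)
    return [ai + d * v for ai, v in zip(a, ab)]

# (2, 2) projected onto the x axis segment from (0, 0) to (4, 0)
q = project_point_on_line([2.0, 2.0], [0.0, 0.0], [4.0, 0.0])
```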
import math, time
import numpy as np
from . import geomaths as gm
def intersection(p1,p2,v1,v2):
beta = (p1[1] + (p2[0]-p1[0])*v1[1]/v1[0] - p2[1])/(v2[1] - v2[0]*v1[1]/v1[0])
return [x+beta*y for x,y in zip(p1,v1)]
def cross(v1,v2):
return [v1[1]*v2[2]-v1[2]*v2[1],
v1[2]*v2[0]-v1[0]*v2[2],
v1[0]*v2[1]-v1[1]*v2[0]]
def dot(v1,v2):
return v1[0]*v2[0] + v1[1]*v2[1] + v1[2]*v2[2]
def normalize(v):
    n = np.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    if n == 0:
        return [0, 0, 0]
    return [v[0]/n, v[1]/n, v[2]/n]
def norme(v):
    return math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
def m_mult(a, b):
i, j = 0, 0
M = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
while i < 4:
while j < 4:
M[i*4 + j] = a[i*4]*b[j] + a[i*4+1]*b[j+4] + a[i*4+2]*b[j+8] + a[i*4+3]*b[j+12]
j += 1
i += 1
j = 0
return M
def m_rotation(vector, angle):
v = normalize(vector)
s = math.sin(angle)
c = math.cos(angle)
C = 1 - c
sx = s * v[0]
sy = s * v[1]
sz = s * v[2]
Cx = C * v[0]
Cy = C * v[1]
Cz = C * v[2]
Cxy = Cy * v[0]
Cyz = Cz * v[1]
Czx = Cx * v[2]
return np.array([v[0] * Cx + c, Cxy - sz, Czx + sy, 0.0,
Cxy + sz, v[1] * Cy + c, Cyz - sx, 0.0,
Czx - sy, Cyz + sx, v[2] * Cz + c, 0.0,
0.0, 0.0, 0.0, 1.0]).reshape((4,4))
def rotate(p, vector, angle, pivot = []):
M = m_rotation(vector, angle)
if len(pivot) == 0:
return M.dot(np.array([p[0], p[1], p[2], 1.0]))[:3]
else:
po = np.array([p[0], p[1], p[2], 1.0])
pi = np.array([pivot[0], pivot[1], pivot[2], 1.0])
return (M.dot( po - pi) +pi)[:3]
def projectPointOnPlane(p0,p,n):
""" This function gives the projection of p0 on the plan defined by a point ( p ) and a normal vector ( n )
We suppose that norme(n) == 1"""
#Normal of the triangle plan
pp0 = [x-y for x,y in zip(p0,p)]
pp0_norme = norme(pp0)
if pp0_norme == 0:
return p
    # Cosine of the angle between the normal and pp0
    pp0 = [x/pp0_norme for x in pp0]
    cosAlpha = dot(pp0,n)
    # Now project back along the normal onto the plane
return [x-(pp0_norme*cosAlpha)*y for x,y in zip(p0,n)]
class projector:
"""Parametres extrinseques et intrinseques d'un projecteur"""
def __init__(self, viewpoint= [0,0,0], position = [1000, 1000, 1000], width = 800, height = 600):
self.position = position
self.viewpoint = viewpoint
self.direction = None
self.up = [0, 1, 0]
self.near = .1
self.far = 1000
self.width = width
self.height = height
self.v_angle = 45*math.pi/180.0 #radians
#wiggling params
self.wiggle = False
self.wiggle_pivot = self.viewpoint
self.wiggle_position = self.position
self.wiggle_viewpoint = self.viewpoint
self.wiggle_speed = 2*math.pi # degrees per seconds
self.wiggle_arc = math.pi/200 # wiggle amplitude
self.wiggle_time = time.time()
self.wiggle_angle = 0
self.change_ratio(width/height)
self.compute_direction()
def change_ratio(self, new_ratio):
self.ratio = new_ratio
h = self.near*math.tan(self.v_angle/2.0)
w = h*self.ratio
self.left = -w
self.right = w
self.top = h
self.bottom = -h
def compute_direction(self):
self.direction = np.array(normalize([x-y for x, y in zip(self.viewpoint, self.position)]))
def print_params(self):
print("Projector params :")
print("\tNear : ",self.near)
print("\tFar : ",self.far)
print("\tPosition : ",self.position)
print("\tDirection : ",self.direction)
print("\tUp : ",self.up)
print("\tFrustum(l,r,t,b) : ",self.left, self.right, self.top, self.bottom)
def projection(self):
return self.perspective()
def project_on_screen(self, p):
p = np.dot(self.projection(),np.dot(self.modelview(),p))
return p[0]/p[3], p[1]/p[3]
def perspective(self):
'''
M= [2n/(r-l) 0 0 0]
[0 2n/(t-b) 0 0]
[0 0 -(f+n)/(f-n) -2(fn)/(f-n)]
[0 0 -1 0]
'''
m00 = 2*self.near/(self.right - self.left)
m11 = 2*self.near/(self.top - self.bottom)
m22 = -(self.far+self.near)/(self.far-self.near)
m23 = -2*(self.far*self.near)/(self.far-self.near)
return np.array([ m00, 0, 0, 0,
0, m11, 0, 0,
0, 0, m22, m23,
0, 0, -1, 0 ]).reshape((4,4))
def orthographic(self):
        '''
        M = [1  0  0         0           ]
            [0  1  0         0           ]
            [0  0  -2/(f-n)  -(f+n)/(f-n)]
            [0  0  0         1           ]
        (x and y are left unscaled; only depth is mapped to [-1, 1])
        '''
m00 = 1
m11 = 1
m22 = -2/(self.far-self.near)
m23 = -(self.far+self.near)/(self.far-self.near)
return np.array([ m00, 0, 0, 0,
0, m11, 0, 0,
0, 0, m22, m23,
0, 0, 0, 1 ]).reshape((4,4))
def modelview(self):
if not self.wiggle:
pos = self.position
vie = self.viewpoint
else:
pos = self.wiggle_position
vie = self.wiggle_viewpoint
f = normalize([ vie[0] - pos[0],
vie[1] - pos[1],
vie[2] - pos[2] ])
_up = normalize(self.up)
s = cross(f, _up)
u = cross(normalize(s), f)
R = [ s[0], u[0], -f[0], 0,
s[1], u[1], -f[1], 0,
s[2], u[2], -f[2], 0,
0, 0, 0, 1 ]
p = pos
T = [ 1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
-p[0],-p[1],-p[2], 1 ]
return np.array(m_mult(T, R)).reshape((4,4)).T
def compute_up(self):
x = normalize(cross(self.direction, [0,1,0]))
self.up = normalize(cross(x, self.direction))
def get_right(self):
return np.array(gm.normalize(gm.cross(self.direction, self.up)))
def h_rotation(self, angle):
        '''angle is in radians. In map navigation, horizontal rotation is around a vertical axis'''
self.position = np.array(gm.rotate(self.position, [0,1,0], angle, self.viewpoint))
self.compute_direction()
self.compute_up()
def v_rotation(self, angle):
        '''angle is in radians'''
position = np.array(gm.rotate(self.position, self.get_right(), angle, self.viewpoint))
v = gm.normalize(position - self.viewpoint)
if v[1] < 0.95 and v[1] > .001:
self.position = position
self.compute_direction()
self.compute_up()
def translate(self, dir):
'''dir should be given with only two value [dx, dy]'''
right = np.array(gm.normalize(gm.cross(self.direction, [0,1,0])))
front = dir[1]*np.array(gm.normalize(gm.cross([0,1,0], right)))
self.position += dir[0]*right + front
self.viewpoint += dir[0]*right + front
self.wiggle_pivot += dir[0]*right + front
def zoom(self, dir):
n_pos = self.position + np.array(self.direction)*dir
dist = gm.norm(self.viewpoint - n_pos)
d = dot(self.direction, normalize(self.viewpoint - n_pos))
if dist >= .2 and d > 0:
self.position = n_pos
def wiggle_next(self):
dt = self.wiggle_time - time.time()
self.wiggle_angle = (math.sin(dt*self.wiggle_speed))*self.wiggle_arc/2.0
self.wiggle_position = gm.rotate(self.position, self.up, self.wiggle_angle, self.wiggle_pivot)
        self.wiggle_viewpoint = gm.rotate(self.viewpoint, self.up, self.wiggle_angle, self.wiggle_pivot)
# ---- end of file: SpaceTimeCube/stc/libs/projector.py ----
import numpy as np
from .. import shader as sh
from .. import geomaths as gm
try:
from OpenGL.GL import *
from OpenGL.GL import shaders
except ImportError:
    print('ERROR in cube.py: PyOpenGL not installed properly.')
def wire_cube(mins, maxs):
    size = np.fabs(maxs - mins)
    sx, sy, sz = size[0], size[1], size[2]
    return np.array([
        # vertical edges
        mins + np.array([sx, 0, sz]),
        mins + np.array([sx, sy, sz]),
        mins + np.array([sx, 0, 0]),
        mins + np.array([sx, sy, 0]),
        mins,
        mins + np.array([0, sy, 0]),
        mins + np.array([0, 0, sz]),
        mins + np.array([0, sy, sz]),
        # bottom face
        mins + np.array([sx, 0, sz]),
        mins + np.array([sx, 0, 0]),
        mins + np.array([sx, 0, 0]),
        mins,
        mins,
        mins + np.array([0, 0, sz]),
        mins + np.array([0, 0, sz]),
        mins + np.array([sx, 0, sz]),
        # top face
        mins + np.array([sx, sy, sz]),
        mins + np.array([sx, sy, 0]),
        mins + np.array([sx, sy, 0]),
        mins + np.array([0, sy, 0]),
        mins + np.array([0, sy, 0]),
        mins + np.array([0, sy, sz]),
        mins + np.array([0, sy, sz]),
        mins + np.array([sx, sy, sz]),
    ])
class cube:
def __init__(self, mins=np.array([-.5,-.5,-.5]),
maxs=np.array([.5,.5,.5])):
self.height = maxs[1] - mins[1]
self.proj_corners_bottom = []
self.proj_corners_up = []
self.sh = sh.shader()
self.vertices = wire_cube(mins, maxs)
        self.colors = np.full((len(self.vertices), 3), .5)
self.sh.display = self.display
self.reset()
def reset(self):
self.current_edge = -1
def set_height(self, value):
if value > 1.0: self.height = 1.0
elif value <= 0.0: self.height = 0.00001
else: self.height = value
def generate_buffers_and_attributes(self):
self.vbo_vertices = glGenBuffers(1)
self.vbo_colors = glGenBuffers(1)
self.attr_vertices = sh.new_attribute_index()
self.attr_colors = sh.new_attribute_index()
def update_arrays(self):
sh.bind(self.vbo_vertices, self.vertices, self.attr_vertices, 3, GL_FLOAT)
sh.bind(self.vbo_colors, self.colors, self.attr_colors, 3, GL_FLOAT)
def display(self):
self.update_uniforms(self.sh)
glDrawArrays(GL_LINES, 0, len(self.vertices))
    def update_uniforms(self, sh):
pass
def create_shader(self, dir, glsl_version):
return sh.create( dir+'/cube.vert',
None,
dir+'/cube.frag',
[self.attr_vertices, self.attr_colors],
['in_vertex', 'in_color'],
glsl_version)
def closest_edge(self, threshold, m):
        '''from cube corner projections and mouse position,
        return the index of the closest cube edge'''
#m = [self.mouse[0], self.height - self.mouse[1]]
#vertical edges only for now
dist = float('inf')
index = -1
for i in range(len(self.proj_corners_bottom)):
a = self.proj_corners_bottom[i]
b = self.proj_corners_up[i]
d = gm.distance_2D(gm.proj_pt_on_line(m, a, b), m)
if d < dist:
index = i
dist = d
if dist < threshold:
return index
return -1
def project_corner(self, p, traj_size, projo, win_size):
if traj_size[0] > traj_size[1]:
p[2] *= traj_size[1]/traj_size[0]
else:
p[0] *= traj_size[0]/traj_size[1]
        p[1] = p[1]*self.height - self.height/2
pp = projo.project_on_screen([p[0], p[1], p[2], 1.0] )
return [ (pp[0]+1)/2*win_size[0], (pp[1]+1)/2*win_size[1]]
def compute_proj_corners(self,size, m, projo, win):
self.proj_corners_bottom = [
self.project_corner([.5, 0.,.5], size, projo, win),
self.project_corner([.5, 0.,-.5], size, projo, win),
self.project_corner([-.5, 0.,-.5], size, projo, win),
self.project_corner([-.5, 0.,.5], size, projo, win)
]
self.proj_corners_up = [
self.project_corner([.5, 1.,.5], size, projo, win),
self.project_corner([.5, 1.,-.5], size, projo, win),
self.project_corner([-.5, 1.,-.5], size, projo, win),
self.project_corner([-.5, 1.,.5], size, projo, win)
]
        # Are we on an edge?
        index = -1
        if len(self.proj_corners_bottom):
            index = self.closest_edge(5, m)
            if index != -1:
                self.colors[index*2] = [1, 1, 1]
                self.colors[index*2+1] = [1, 1, 1]
            if self.current_edge != -1 and index != self.current_edge:
                self.colors[self.current_edge*2] = [.5, .5, .5]
                self.colors[self.current_edge*2+1] = [.5, .5, .5]
            self.current_edge = index
def crop_mode(self, m):
a = self.proj_corners_bottom[self.current_edge]
b = self.proj_corners_up[self.current_edge]
dist_a = gm.distance_2D(m, a)
dist_b = gm.distance_2D(m, b)
if dist_a < dist_b:
return 'crop_down'
return 'crop_up'
def scale(self, delta):
a = self.proj_corners_bottom[self.current_edge]
b = self.proj_corners_up[self.current_edge]
dist = gm.distance_2D(a, b)
amount = 0
dot = 0
if dist > 0:
amount = gm.norm(delta)/gm.distance_2D(a, b)
dot = gm.dot( gm.normalize(gm.vector(a, b)),
gm.normalize(delta))
if dot <= -0.5:
self.set_height(self.height + amount*self.height)
elif dot >= 0.5:
            self.set_height(self.height - amount*self.height)
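`closest_edge()` above compares the mouse position against each projected vertical edge via `gm.proj_pt_on_line` and `gm.distance_2D`. A self-contained numpy sketch of that 2-D projection (the exact behaviour of the `gm` helpers is an assumption):

```python
import numpy as np

# Sketch of the 2-D math behind closest_edge(): project mouse point m
# onto the infinite line through edge endpoints a and b, then measure distance.
def proj_pt_on_line(p, a, b):
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)  # parametric position along the line
    return a + t * ab

m = [3.0, 4.0]                                      # mouse position
proj = proj_pt_on_line(m, [0.0, 0.0], [10.0, 0.0])  # horizontal edge on screen
dist = np.linalg.norm(np.asarray(m) - proj)
# proj == [3, 0]; dist == 4: the edge is picked only if 4 < threshold
```

The real method runs this per edge and keeps the index with the smallest distance, rejecting everything beyond the pixel threshold.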
from pony.orm import Required, Optional, Set, Json, \
composite_key as UNIQUE, PrimaryKey
from sakura.hub.mixins.dataflow import DataflowMixin
from sakura.hub.mixins.project import ProjectMixin
from sakura.hub.mixins.page import ProjectPageMixin
from sakura.hub.mixins.daemon import DaemonMixin
from sakura.hub.mixins.datastore import DatastoreMixin
from sakura.hub.mixins.database import DatabaseMixin
from sakura.hub.mixins.table import TableMixin
from sakura.hub.mixins.column import ColumnMixin
from sakura.hub.mixins.opclass import OpClassMixin
from sakura.hub.mixins.opinstance import OpInstanceMixin
from sakura.hub.mixins.link import LinkMixin
from sakura.hub.mixins.param import OpParamMixin
from sakura.hub.mixins.user import UserMixin
from sakura.hub.mixins.session import SessionMixin
from sakura.common.access import ACCESS_SCOPES
epoch = float  # timestamps are stored as POSIX epoch floats
def define_schema(db):
class User(db.Entity, UserMixin):
login = PrimaryKey(str)
email = Required(str, unique=True)
password_salt = Required(bytes)
password_hash = Required(bytes)
first_name = Optional(str)
last_name = Optional(str)
creation_date = Optional(epoch) # registration time/date
gender = Optional(str)
country = Optional(str)
institution = Optional(str)
occupation = Optional(str) # work profile related
work_domain = Optional(str) # research topic
privileges = Required(Json, default = [])
requested_privileges = Required(Json, default = [])
sessions = Set('Session')
class Session(db.Entity, SessionMixin):
id = PrimaryKey(int)
user = Optional(User)
timeout = Required(epoch)
class Dataflow(db.Entity, DataflowMixin):
access_scope = Required(int, default = ACCESS_SCOPES.private)
grants = Required(Json, default = {})
gui_data = Optional(str)
metadata = Optional(Json, default = {})
op_instances = Set('OpInstance')
class Daemon(db.Entity, DaemonMixin):
name = Required(str, unique=True)
datastores = Set('Datastore')
op_instances = Set('OpInstance')
class OpClass(db.Entity, OpClassMixin):
access_scope = Required(int, default = ACCESS_SCOPES.public)
grants = Required(Json, default = {})
repo = Required(Json)
code_subdir = Required(str)
metadata = Optional(Json, default = {})
op_instances = Set('OpInstance')
class OpInstance(db.Entity, OpInstanceMixin):
daemon = Optional(Daemon)
dataflow = Required(Dataflow)
revision = Optional(Json, default = {})
op_class = Required(OpClass)
gui_data = Optional(str, default = '')
uplinks = Set('Link')
downlinks = Set('Link')
params = Set('OpParam')
class Link(db.Entity, LinkMixin):
src_op = Required(OpInstance, reverse='downlinks')
src_out_id = Required(int)
dst_op = Required(OpInstance, reverse='uplinks')
dst_in_id = Required(int)
gui_data = Optional(str)
class OpParam(db.Entity, OpParamMixin):
op = Required(OpInstance, reverse='params')
param_id = Required(int)
value = Optional(Json)
PrimaryKey(op, param_id)
class Datastore(db.Entity, DatastoreMixin):
daemon = Required(Daemon, reverse='datastores')
host = Required(str)
driver_label = Required(str)
access_scope = Required(int, default = ACCESS_SCOPES.private)
grants = Required(Json, default = {})
metadata = Optional(Json, default = {})
databases = Set('Database')
class Database(db.Entity, DatabaseMixin):
datastore = Required(Datastore)
name = Required(str)
access_scope = Required(int, default = ACCESS_SCOPES.private)
grants = Required(Json, default = {})
metadata = Optional(Json, default = {})
tables = Set('DBTable')
UNIQUE(datastore, name)
class DBTable(db.Entity, TableMixin):
database = Required(Database)
name = Required(str)
columns = Set('DBColumn')
primary_key = Required(Json, default = [])
foreign_keys = Required(Json, default = [])
metadata = Optional(Json, default = {})
UNIQUE(database, name)
class DBColumn(db.Entity, ColumnMixin):
table = Required(DBTable)
col_id = Required(int)
col_name = Required(str)
col_type = Required(str)
daemon_tags = Required(Json, default = [])
user_tags = Required(Json, default = [])
PrimaryKey(table, col_id)
UNIQUE(table, col_name)
class Project(db.Entity, ProjectMixin):
access_scope = Required(int, default = ACCESS_SCOPES.private)
grants = Required(Json, default = {})
metadata = Optional(Json, default = {})
pages = Set('ProjectPage')
class ProjectPage(db.Entity, ProjectPageMixin):
project = Required(Project)
name = Required(str)
        content = Optional(str)
import re
GEOJSON_BBOX = '''{
"type": "Polygon",
"crs": {
"type": "name",
"properties": {
"name": "%(srid)s"
}
},
"coordinates": [[
[%(min_longitude)s, %(min_latitude)s],
[%(min_longitude)s, %(max_latitude)s],
[%(max_longitude)s, %(max_latitude)s],
[%(max_longitude)s, %(min_latitude)s],
[%(min_longitude)s, %(min_latitude)s]
]]
}'''
GEOJSON_BBOX = re.sub(r"\s+", "", GEOJSON_BBOX)
class GeoBoundingBox:
def __init__(self, min_latitude = None, max_latitude = None, min_longitude = None, max_longitude = None):
self.min_latitude = min_latitude
self.max_latitude = max_latitude
self.min_longitude = min_longitude
self.max_longitude = max_longitude
self.srid = 'EPSG:4326'
def as_dict(self):
return { 'min_latitude': self.min_latitude,
'max_latitude': self.max_latitude,
'min_longitude': self.min_longitude,
'max_longitude': self.max_longitude }
def as_tuple(self):
return (self.min_latitude, self.max_latitude, self.min_longitude, self.max_longitude)
def is_fully_defined(self):
return None not in self.as_tuple()
def is_blank(self):
return all(v is None for v in self.as_tuple())
def copy(self):
return GeoBoundingBox(*self.as_tuple())
def intersect(self, other):
merged = {}
for attr, val_self in self.as_dict().items():
val_other = getattr(other, attr)
if val_self is None:
merged[attr] = val_other
elif val_other is None:
merged[attr] = val_self
else:
                attr_prefix = attr[:3]
                if attr_prefix == 'min':
                    # we intersect => we keep the highest min
                    merged[attr] = max(val_self, val_other)
                else:
                    # we intersect => we keep the lowest max
                    merged[attr] = min(val_self, val_other)
return GeoBoundingBox(**merged)
def __contains__(self, val):
if self.is_blank():
return True
conds = ()
if self.min_latitude is not None:
cond = (val.Y > self.min_latitude)
conds += (cond,)
if self.max_latitude is not None:
cond = (val.Y < self.max_latitude)
conds += (cond,)
if self.min_longitude is not None:
cond = (val.X > self.min_longitude)
conds += (cond,)
if self.max_longitude is not None:
cond = (val.X < self.max_longitude)
conds += (cond,)
res = conds[0]
for cond in conds[1:]:
res = res & cond
return res
def as_geojson(self):
# format to a geojson polygon
return GEOJSON_BBOX % dict(
srid = self.srid,
**self.as_dict()
        )
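The merge rule in `GeoBoundingBox.intersect()` can be exercised in isolation. A minimal dict-based sketch of the same logic (not using the class itself):

```python
# Minimal sketch of the bound-merging rule in GeoBoundingBox.intersect():
# None means "unbounded"; intersecting keeps the highest min and the lowest max.
def merge_bound(a, b, keep):
    if a is None:
        return b
    if b is None:
        return a
    return keep(a, b)

box_a = {'min_latitude': 10.0, 'max_latitude': None}
box_b = {'min_latitude': 5.0, 'max_latitude': 50.0}

merged = {
    'min_latitude': merge_bound(box_a['min_latitude'], box_b['min_latitude'], max),
    'max_latitude': merge_bound(box_a['max_latitude'], box_b['max_latitude'], min),
}
# merged == {'min_latitude': 10.0, 'max_latitude': 50.0}
```

Treating `None` as "no constraint" is what lets partially-defined boxes intersect sensibly instead of failing on missing bounds.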
import requests
import json
import base64
import random
from urllib.parse import unquote
from Sakurajima.models import (
Anime,
RecommendationEntry,
Relation,
AniWatchEpisode,
Episode,
ChronicleEntry,
UserAnimeListEntry,
UserMedia,
UserOverview,
AniwatchStats,
Notification,
WatchListEntry,
Media,
)
from Sakurajima.models.user_models import Friend, FriendRequestIncoming, FriendRequestOutgoing
from Sakurajima.utils.episode_list import EpisodeList
from Sakurajima.utils.network import Network
from Sakurajima.errors import AniwatchError
class Sakurajima:
"""
    Sakurajima, at its core, is a wrapper around the aniwatch.me API. However,
it does include some additional functionality, namely the ability to download
    episodes of your favorite anime in your preferred quality.
    Sakurajima requires an aniwatch.me account: you must provide your username,
    your user ID and your auth token. For instructions on how to get
those, checkout `this
<https://github.com/veselysps/Sakurajima/wiki/How-to-get-username,-user-ID,-authorization-token.>`_ wiki.
"""
def __init__(
self, username=None, userId=None, authToken=None, proxies=None, endpoint="https://aniwatch.me/api/ajax/APIHandle"
):
        self.API_URL = endpoint
        self.userId = userId
        self.network = Network(username, userId, authToken, proxies, endpoint)
@classmethod
def using_proxy(
cls, proxy_file=None, username=None, userId=None, authToken=None, endpoint="https://aniwatch.me/api/ajax/APIHandle"
):
"""An alternate constructor that reads a file containing a list of proxies
and chooses a random proxy to use with Sakurajima.
:param username: The username of the user, defaults to None
:type username: str, optional
:param userId: The user ID of the user, defaults to None
:type userId: int, optional
:param authToken: The auth token of the user, defaults to None
:type authToken: str, optional
        :param proxy_file: The file containing the list of proxies, defaults to None
        :type proxy_file: str, optional
        :param endpoint: The API endpoint to use, defaults to "https://aniwatch.me/api/ajax/APIHandle"
        :type endpoint: str, optional
:return: A Sakurajima object configured to use a proxy.
:rtype: Sakurajima
"""
with open(proxy_file, "r") as proxy_file_handle:
proxies = proxy_file_handle.readlines()
proxy = random.choice(proxies).replace("\n", "")
        return cls(username, userId, authToken, {"https": proxy}, endpoint)
@classmethod
def from_cookie(cls, cookie_file):
"""An alternate constructor that reads a cookie file and automatically extracts
        the data necessary to initialize Sakurajima.
:param cookie_file: The file containing the cookie.
:type cookie_file: str
:rtype: :class:`Sakurajima`
"""
with open(cookie_file, "r") as cookie_file_handle:
cookie = json.loads(unquote(cookie_file_handle.read()))
return cls(cookie["username"], cookie["userid"], cookie["auth"])
def get_episode(self, episode_id, lang="en-US"):
"""Gets an AniWatchEpisode by its episode ID.
:param episode_id: The episode ID of the episode you want to get.
:type episode_id: int
:param lang: Language of the episode you want to get, defaults to "en-US"
(English Subbed)
:type lang: str, optional
        :return: An AniWatchEpisode object which has data like streams and languages.
:rtype: :class:`AniWatchEpisode`
"""
data = {
"controller": "Anime",
"action": "watchAnime",
"lang": lang,
"ep_id": episode_id,
"hoster": "",
}
return AniWatchEpisode(self.network.post(data), episode_id)
def get_episodes(self, anime_id: int):
"""Gets an EpisodeList object which contains all the
available episodes of a given anime.
:param anime_id: The ID of the anime whose episodes you want.
:type anime_id: int
:return: An EpisodeList object. An EpisodeList is very similar to a normal list,
you can access item on a specific index the same way you would do for
a normal list. Check out the EpisodeList documentation for further details.
:rtype: :class:`EpisodeList`
"""
data = {
"controller": "Anime",
"action": "getEpisodes",
"detail_id": str(anime_id),
}
return EpisodeList(
[
Episode(data_dict, self.network, self.API_URL, anime_id)
for data_dict in self.network.post(data, f"/anime/{anime_id}")["episodes"]
]
)
def get_anime(self, anime_id: int):
"""Gets an anime by its ID.
:param anime_id: ID of the anime you want to get.
:type anime_id: int
:return: An Anime object that has all the relevant details regarding
a single anime like title, description, score, number of episodes etc.
:rtype: Anime
"""
data = {"controller": "Anime", "action": "getAnime", "detail_id": str(anime_id)}
headers = {
"X-PATH": f"/anime/{anime_id}",
"REFERER": f"https://aniwatch.me/anime/{anime_id}"
}
        response = self.network.post(data, headers)
        if not response.get("success", True):
            error = response["error"]
            raise AniwatchError(error)
        else:
            return Anime(response["anime"], network=self.network, api_url=self.API_URL)
def get_recommendations(self, anime_id: int):
"""Gets a list of recommendations for an anime.
:param anime_id: The ID of the anime whose recommendation you want to get.
:type anime_id: int
        :return: A list of RecommendationEntry objects where each object represents a
single recommendation.
:rtype: list[RecommendationEntry]
"""
data = {
"controller": "Anime",
"action": "getRecommendations",
"detail_id": str(anime_id),
}
return [
RecommendationEntry(data_dict, self.network)
for data_dict in self.network.post(data, f"/anime/{anime_id}")["entries"]
]
def get_relation(self, relation_id: int):
"""Gets the relations of an anime by its relation ID.
:param relation_id: Relation ID of the anime whose relations you want to get.
:type relation_id: int
:return: A Relation object that contains all the details of a relation.
:rtype: Relation
"""
data = {
"controller": "Relation",
"action": "getRelation",
"relation_id": relation_id,
}
return Relation(self.network.post(data)["relation"])
def get_seasonal_anime(self, index="null", year="null"):
"""Gets current seasonal animes.
:param index: [description], defaults to "null"
:type index: str, optional
:param year: [description], defaults to "null"
:type year: str, optional
:return: A list of Anime objects.
:rtype: list[Anime]
"""
data = {
"controller": "Anime",
"action": "getSeasonalAnime",
"current_index": index,
"current_year": year,
}
return [Anime(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data, "/home")["entries"]]
def get_latest_releases(self):
"""Gets the latest anime releases. This includes currently airing
animes.
:return: List of Anime objects.
:rtype: list[Anime]
"""
data = {"controller": "Anime", "action": "getLatestReleases"}
return [Anime(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data, "/home")["entries"]]
def get_latest_uploads(self):
"""Gets latest uploads on "aniwatch.me". This includes animes that are not airing
currently.
:return: A list of Anime objects.
:rtype: list[Anime]
"""
data = {"controller": "Anime", "action": "getLatestUploads"}
return [Anime(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data, "/home")["entries"]]
def get_latest_anime(self):
"""Gets the latest animes on "aniwatch.me"
:return: A list of Anime objects.
:rtype: list[Anime]
"""
data = {"controller": "Anime", "action": "getLatestAnime"}
return [Anime(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data, "/home")["entries"]]
def get_random_anime(self):
"""Gets a random anime from the aniwatch.me library.
:return: An Anime object representing a random anime.
:rtype: Anime
"""
data = {"controller": "Anime", "action": "getRandomAnime"}
return Anime(self.network.post(data, "/random")["entries"][0], self.network, self.API_URL)
def get_airing_anime(self, randomize=False):
"""Gets currently airing anime arranged according to weekdays.
        :param randomize: Whether to shuffle the returned entries,
            defaults to False
:type randomize: bool, optional
:return: A dictionary with weekdays as keys and corresponding list of Anime
objects as values.
:rtype: dict
"""
data = {
"controller": "Anime",
"action": "getAiringAnime",
"randomize": randomize,
}
airing_anime_response = self.network.post(data, "/airing")["entries"]
airing_anime = {}
for day, animes in airing_anime_response.items():
airing_anime[day] = [Anime(anime_dict, self.network, self.API_URL) for anime_dict in animes]
return airing_anime
def get_popular_anime(self, page=1):
"""Gets all time popular anime.
:param page: Page number of the popularity chart that you want, defaults to 1
:type page: int, optional
:return: A list of Anime objects
:rtype: list[Anime]
"""
data = {"controller": "Anime", "action": "getPopularAnime", "page": page}
return [Anime(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data, "/top")["entries"]]
def get_popular_seasonal_anime(self, page=1):
"""Gets popular anime of the current season.
:param page: Page number of the popularity chart that you want, defaults to 1
:type page: int, optional
:return: A list of Anime objects.
:rtype: list[Anime]
"""
data = {"controller": "Anime", "action": "getPopularSeasonals", "page": page}
return [Anime(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data, "/seasonal")["entries"]]
def get_popular_upcoming_anime(self, page=1):
"""Gets popular anime that have not started airing yet.
:param page: Page number of the popularity chart that you want, defaults to 1
:type page: int, optional
:return: A list of Anime objects.
:rtype: list[Anime]
"""
data = {"controller": "Anime", "action": "getPopularUpcomings", "page": page}
return [Anime(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data, "/home")["entries"]]
def get_hot_anime(self, page=1):
# TODO inspect this to figure out a correct description.
data = {"controller": "Anime", "action": "getHotAnime", "page": page}
return [Anime(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data, "/home")["entries"]]
def get_best_rated_anime(self, page=1):
"""Gets the highest rated animes on "aniwatch.me".
:param page: Page number of the popularity chart that you want, defaults to 1
:type page: int, optional
        :return: A list of Anime objects.
:rtype: list[Anime]
"""
data = {"controller": "Anime", "action": "getBestRatedAnime", "page": page}
return [Anime(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data, "/home")["entries"]]
def add_recommendation(self, anime_id: int, recommended_anime_id: int):
"""Submit a recommendation for an anime.
:param anime_id: The ID of the anime where you want to submit the recommendation.
:type anime_id: int
:param recommended_anime_id: The ID of the anime that you want to recommend.
:type recommended_anime_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Anime",
"action": "addRecommendation",
"detail_id": str(anime_id),
"recommendation": str(recommended_anime_id),
}
return self.network.post(data)
def get_stats(self):
"""Gets the aniwatch.me site stats. This includes things like the total
number of streams of a given quality, total users, total number of anime, movie,
special entries in the aniwatch.me library.
:return: An AniwatchStats object which wraps all the relevant statistics.
:rtype: AniwatchStats
"""
data = {"controller": "XML", "action": "getStatsData"}
return AniwatchStats(self.network.post(data, "/stats"))
def get_user_overview(self, user_id):
"""Gets a brief user overview which includes stats like total hours watched,
total number of animes completed etc.
:param user_id: The id of the target user
:type user_id: int, str
:return: An UserOverview object that wraps all the relevant details
regarding the given user.
:rtype: UserOverview
"""
data = {
"controller": "Profile",
"action": "getOverview",
"profile_id": str(user_id),
}
return UserOverview(self.network.post(data, f"/profile/{user_id}")["overview"])
def get_user_chronicle(self, user_id, page=1):
"""Gets the user's chronicle. A chronicle tracks a user's watch history.
:param user_id: The id of the target user
:type user_id: int, str
:param page: The page number of the chronicle that you
want, defaults to 1
:type page: int, optional
:return: A list of ChronicleEntry objects, each object wraps all the
information related to an entry like episode number, date, anime ID.
:rtype: list[ChronicleEntry]
"""
data = {
"controller": "Profile",
"action": "getChronicle",
"profile_id": str(user_id),
"page": page,
}
return [
ChronicleEntry(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data, f"/profile/{user_id}")["chronicle"]
]
def get_user_anime_list(self):
"""Gets the user's aniwatch.me anime list. This list includes animes
that are marked by the user.
:return: A list of UserAnimeListEntry objects. An UserAnimeListEntry objects
contains information like the status, progress and total episodes etc.
:rtype: list[UserAnimeListEntry]
"""
data = {
"controller": "Profile",
"action": "getAnimelist",
"profile_id": str(self.userId),
}
return [
UserAnimeListEntry(data_dict, self.network)
            for data_dict in self.network.post(data, f"/profile/{self.userId}")["animelist"]
]
def get_user_media(self, page=1):
"""Gets the users favorite media.
:param page: The page number of the favorites page that you want, defaults to 1
:type page: int, optional
:return: A list of UserMedia objects. A UserMedia object has data like title and
media ID
:rtype: list[UserMedia]
"""
data = {
"controller": "Profile",
"action": "getMedia",
"profile_id": str(self.userId),
"page": page,
}
        return [UserMedia(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data, f"/profile/{self.userId}")["entries"]]
def send_image_to_discord(self, episode_id, base64_image, episode_time):
data = {
"controller": "Profile",
"action": "sendToDiscord",
"file": base64_image,
"episode_id": int(episode_id),
"time": episode_time,
"lang": "en-US",
}
return self.network.post(data)
def get_friends(self, page=1):
data = {"controller": "Profile", "action": "getFriends", "page": page}
resp = self.network.post(data)
return [Friend(self.network, x) for x in resp["friends"]]
def get_outgoing_requests(self, page=1):
data = {"controller": "Profile", "action": "getFriends", "page": page}
resp = self.network.post(data)
return [FriendRequestOutgoing(self.network, x) for x in resp["outgoing"]]
def get_friend_requests(self, page=1):
data = {"controller": "Profile", "action": "getFriends", "page": page}
resp = self.network.post(data)
return [FriendRequestIncoming(self.network, x) for x in resp["incoming"]]
def add_friend(self, friend_user_id):
data = {
"controller": "Profile",
"action": "addFriend",
"profile_id": friend_user_id,
}
self.network.post(data)
return self.get_outgoing_requests()[-1]
def remove_friend(self, friend_id):
data = {
"controller": "Profile",
"action": "removeFriend",
"friend_id": friend_id,
}
return self.network.post(data)
def withdraw_friend_request(self, friend_id):
data = {
"controller": "Profile",
"action": "withdrawRequest",
"friend_id": friend_id,
}
return self.network.post(data)
def accept_friend_request(self, friend_id):
data = {
"controller": "Profile",
"action": "acceptRequest",
"friend_id": friend_id,
}
return self.network.post(data)
def reject_friend_request(self, friend_id):
data = {
"controller": "Profile",
"action": "rejectRequest",
"friend_id": friend_id,
}
return self.network.post(data)
def get_user_settings(self):
data = {"controller": "Profile", "action": "getSettings"}
return self.network.post(data)
def get_notifications(self):
"""Gets the user's unread notifications.
        :return: A list of Notification objects. A Notification object contains
data like date, notification ID and the content etc.
:rtype: list[Notification]
"""
data = {"controller": "Profile", "action": "getNotifications"}
return [
Notification(data_dict, self.network)
for data_dict in self.network.post(data)["notifications"]
]
def mark_all_notifications_as_read(self):
"""Marks all notification as read.
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Profile",
"action": "markAllNotificationsAsRead",
"view": 0,
}
return self.network.post(data)
def delete_all_notifications(self):
"""Deletes all notifications.
:return: [description]
:rtype: [type]
"""
data = {"controller": "Profile", "action": "deleteAllNotifications", "view": 0}
return self.network.post(data)
def toggle_notification_seen(self, notification_id: int):
"""Toggles the mark as seen status of a notification.
:param notification_id: The ID of the notification whose status you want to
toggle.
:type notification_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Profile",
"action": "toggleNotificationSeen",
"id": notification_id,
}
return self.network.post(data)
def delete_notification(self, notification_id: int):
"""Deletes a specific notification.
:param notification_id: The ID of the notification you want to delete.
:type notification_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Profile",
"action": "deleteNotification",
"id": notification_id,
}
return self.network.post(data)
def get_anime_chronicle(self, anime_id: int, page=1):
"""
        Gets the user's anime-specific chronicle.
:param anime_id: The ID of the anime whose chronicle you want.
:type anime_id: int
:param page: The page number of the chronicle that you want, defaults to 1
:type page: int, optional
:return: A list of ChronicleEntry objects.
:rtype: list[ChronicleEntry]
"""
data = {
"controller": "Profile",
"action": "getChronicle",
"detail_id": str(anime_id),
"page": page,
}
return [
ChronicleEntry(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data)["chronicle"]
]
def remove_chronicle_entry(self, chronicle_id: int):
"""Removes a specific chronicle entry.
:param chronicle_id: The ID of the chronicle that you want to remove.
:type chronicle_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Profile",
"action": "removeChronicleEntry",
"chronicle_id": chronicle_id,
}
return self.network.post(data)
def get_discord_hash(self):
data = {"controller": "Profile", "action": "getDiscordHash"}
return self.network.post(data)
def renew_discord_hash(self):
data = {"controller": "Profile", "action": "renewDiscordHash"}
return self.network.post(data)
def remove_discord_verification(self):
data = {"controller": "Profile", "action": "removeDiscordVerification"}
return self.network.post(data)
def get_unread_notifications(self):
"""Gets a user's unread notifications.
:return: A list of Notification objects.
:rtype: list[Notification]
"""
data = {"controller": "Profile", "action": "getUnreadNotifications"}
return [
Notification(data_dict, self.network)
for data_dict in self.network.post(data)["notifications"]
]
def toggle_mark_as_watched(self, anime_id: int, episode_id: int):
"""Toggles the mark as watched status of a particular episode of an anime.
:param anime_id: The ID of the anime to which the episode belongs.
:type anime_id: int
:param episode_id: The ID of the episode that you want to toggle the status for.
:type episode_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Profile",
"action": "markAsWatched",
"detail_id": str(anime_id),
"episode_id": episode_id,
}
return self.network.post(data)
def mark_as_completed(self, anime_id: int):
"""Marks an anime as completed.
:param anime_id: The ID of the anime that you want to mark as "complete".
:type anime_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Profile",
"action": "markAsCompleted",
"detail_id": str(anime_id),
}
return self.network.post(data)
def mark_as_plan_to_watch(self, anime_id: int):
"""Marks an anime as plan to watch.
:param anime_id: The ID of the anime that you want to mark as "plan to watch".
:type anime_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Profile",
"action": "markAsPlannedToWatch",
"detail_id": str(anime_id),
}
return self.network.post(data)
def mark_as_on_hold(self, anime_id: int):
"""Marks an anime as "on hold"
:param anime_id: The ID of the anime that you want to mark as "on hold".
:type anime_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Profile",
"action": "markAsOnHold",
"detail_id": str(anime_id),
}
return self.network.post(data)
def mark_as_dropped(self, anime_id: int):
"""Marks an anime as "dropped"
:param anime_id: The ID of the anime that you want to mark as dropped.
:type anime_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Profile",
"action": "markAsDropped",
"detail_id": str(anime_id),
}
return self.network.post(data)
def mark_as_watching(self, anime_id: int):
"""Marks an anime as "watching"
:param anime_id: The ID of the anime that you want to mark as watching.
:type anime_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Profile",
"action": "markAsWatching",
"detail_id": str(anime_id),
}
return self.network.post(data)
def remove_from_list(self, anime_id: int):
"""Removes an anime from the user's anime list.
:param anime_id: The ID of the anime that you want to remove from the list.
:type anime_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Profile",
"action": "removeAnime",
"detail_id": str(anime_id),
}
return self.network.post(data)
def favorite_media(self, media_id: int):
"""Marks a media as favorite.
:param media_id: The ID of the media that you want to marks as favorite.
:type media_id: int
:return: [description]
:rtype: [type]
"""
data = {"controller": "Media", "action": "favMedia", "media_id": str(media_id)}
return self.network.post(data)
def rateAnime(self, anime_id: int, rating: int):
        """Sets the user's rating for a given anime. Rate an anime zero in order to
        remove the user's rating.
        :param anime_id: The ID of the anime that you want to rate.
        :type anime_id: int
        :param rating: The number of stars that you want to rate the given anime.
        :type rating: int
:return: [description]
:rtype: [type]
"""
# Rate 0 to remove rating
data = {
"controller": "Profile",
"action": "rateAnime",
"detail_id": str(anime_id),
"rating": rating,
}
return self.network.post(data)
    def get_reports(self):
        """Gets the reports that the user has submitted."""
        data = {"controller": "Profile", "action": "getReports"}
return self.network.post(data)
def report_missing_anime(self, anime_name: str):
"""Reports a missing anime.
:param anime_name: The name of the anime that you want to report.
:type anime_name: str
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Anime",
"action": "reportMissingAnime",
"anime_name": str(anime_name),
}
return self.network.post(data)
def report_missing_streams(self, anime_id: int):
"""Reports a missing stream.
        :param anime_id: The ID of the anime whose stream you want to report as missing.
:type anime_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Anime",
"action": "reportMissingStreams",
"detail_id": str(anime_id),
}
return self.network.post(data)
def get_watchlist(self, page=1):
"""Gets the user's current watchlist.
:param page: The page number of the watchlist that you want to get, defaults to 1
:type page: int, optional
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Anime",
"action": "getWatchlist",
"detail_id": 0,
"page": page,
}
return [
WatchListEntry(data_dict, self.network, self.API_URL) for data_dict in self.network.post(data)["entries"]
]
def login(self, username, password):
data = {
"username": username,
"password": base64.b64encode(bytes(password, "utf8")).decode("utf8"),
"code": "",
"controller": "Authentication",
"action": "doLogin",
}
return self.network.post(data)
def forgot_password(self, email):
data = {
"controller": "Authentication",
"action": "doForgotPW",
"email": base64.b64encode(bytes(email, "utf8")).decode("utf8"),
}
return self.network.post(data)
def search(self, query: str):
"""Searches the aniwatch.me library for the given anime title.
        :param query: The title of the anime that you want to search for. Aniwatch also
            stores synonyms and English names for animes, so those can be used too.
:type query: str
:return: A list of Anime objects.
:rtype: [type]
"""
data = {
"controller": "Search",
"action": "search",
"rOrder": False,
"order": "title",
"typed": str(query),
"genre": "[]",
"staff": "[]",
"tags": [],
"langs": [],
"anyGenre": False,
"anyStaff": False,
"anyTag": False,
"animelist": [2],
"types": [0],
"status": [0],
"yearRange": [1965, 2022],
"maxEpisodes": 0,
"hasRelation": False,
}
headers = {
"X-PATH": "/search",
            "REFERER": "https://aniwatch.me/search",
}
json = self.network.post(data, headers)
        if isinstance(json, dict) and json.get("success", True) != True:
error = json["error"]
raise AniwatchError(error)
else:
return [Anime(data_dict, self.network, self.API_URL) for data_dict in json]
def get_media(self, anime_id: int):
"""Gets an anime's media.
:param anime_id: The ID of the anime whose media you want.
:type anime_id: int
        :return: A Media object. A Media object has properties like openings, endings, OSTs, etc.,
            that each contain a list of MediaEntry objects representing the respective media.
:rtype: Media
"""
data = {"controller": "Media", "action": "getMedia", "detail_id": str(anime_id)}
        return Media(self.network.post(data), self.network, anime_id)

# Source file: Sakurajima/api.py
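The `login` and `forgot_password` methods above base64-encode credentials before posting them. A standalone sketch of that encoding (the function name is hypothetical; the payload keys mirror the ones used in `login`):

```python
import base64


def build_login_payload(username: str, password: str) -> dict:
    # Mirrors the payload built in login(): the password is base64-encoded
    # UTF-8, not hashed -- the API decodes it server-side.
    return {
        "username": username,
        "password": base64.b64encode(password.encode("utf8")).decode("utf8"),
        "code": "",
        "controller": "Authentication",
        "action": "doLogin",
    }


payload = build_login_payload("alice", "s3cret")
# The encoding is reversible, so the original password round-trips:
assert base64.b64decode(payload["password"]).decode("utf8") == "s3cret"
```

Note that base64 is an encoding, not encryption; the transport (HTTPS) is what actually protects the credentials.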
import datetime
import requests
import json
from m3u8 import M3U8
from Crypto.Cipher import AES
from Sakurajima.models.relation import Relation
from Sakurajima.models.recommendation import RecommendationEntry
from Sakurajima.models.chronicle import ChronicleEntry
from Sakurajima.models.media import Media
from Sakurajima.models.helper_models import Language, Stream
from Sakurajima.utils.episode_list import EpisodeList
from Sakurajima.utils.downloader import Downloader, MultiThreadDownloader
from Sakurajima.errors import AniwatchError
import subprocess
from time import sleep
from pathvalidate import sanitize_filename
from multiprocessing import Process
import os
import shutil
class Anime(object):
"""Wraps all the relevant data for an anime like anime_id
(called as detail_id by AniWatch backend), title, airing date etc.
Use the get_episodes method to get a list of available episodes"""
def __init__(self, data_dict: dict, network, api_url: str):
self.__network = network
self.__API_URL = api_url
self.data_dict = data_dict
self.anime_id = data_dict.get("detail_id", None)
self.airing_start = data_dict.get("airing_start", None)
self.airing_end = data_dict.get("airing_end", None)
self.start_index = data_dict.get("start_index", None)
self.end_index = data_dict.get("end_index", None)
self.airing_start_unknown = data_dict.get("airing_start_unknown", None)
self.airing_end_unknown = data_dict.get("airing_end_unknown", None)
self.relation_id = data_dict.get("relation_id", None)
self.genre = data_dict.get("genre", None)
self.staff = data_dict.get("staff", None)
self.tags = data_dict.get("tags", None)
self.title = data_dict.get("title", None)
self.description = data_dict.get("description", None)
self.cover = data_dict.get("cover", None)
self.episode_max = data_dict.get("episode_max", None)
self.type = data_dict.get("type", None)
        try:
            self.broadcast_start = datetime.datetime.utcfromtimestamp(data_dict.get("broadcast_start"))
        except Exception:
            self.broadcast_start = None
        try:
            self.launch_day = datetime.datetime.utcfromtimestamp(data_dict.get("launch_day"))
        except Exception:
            self.launch_day = None
self.status = data_dict.get("status", None)
self.synonyms = data_dict.get("synonyms", None)
self.broadcast_time = data_dict.get("broadcast_time", None)
self.launch_offset = data_dict.get("launch_offset", None)
self.has_nudity = data_dict.get("hasNudity", None)
self.cur_episodes = data_dict.get("cur_episodes", None)
self.is_on_anime_list = data_dict.get("isOnAnimeList", None)
self.planned_to_watch = data_dict.get("planned_to_watch", None)
self.completed = data_dict.get("completed", None)
self.watching = data_dict.get("watching", None)
self.progress = data_dict.get("progress", None)
self.dropped = data_dict.get("dropped", None)
self.on_hold = data_dict.get("on_hold", None)
self.rating = data_dict.get("rating", None)
self.members_counter = data_dict.get("member_counters", None)
self.members_counter_rank = data_dict.get("members_counter_rank", None)
self.score = data_dict.get("score", None)
self.score_count = data_dict.get("score_count", None)
self.score_rank = data_dict.get("score_rank", None)
self.__episodes = None
def __generate_default_headers(self):
return {
"X-PATH": f"/anime/{self.anime_id}",
"REFERER": f"https://aniwatch.me/anime/{self.anime_id}"
}
def get_episodes(self):
"""Gets a list of all available episodes of the anime.
        :return: An EpisodeList object. An EpisodeList is very similar to a
            normal list; you can access specific indexes using the "[]" syntax.
            In addition, an EpisodeList has a few convenience methods like
            ``get_episode_by_number(episode_number)``, which returns the episode with the provided
            episode number.
:rtype: EpisodeList
"""
if self.__episodes:
return self.__episodes
else:
data = {
"controller": "Anime",
"action": "getEpisodes",
"detail_id": str(self.anime_id),
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
if json.get("success", True) != True:
error = json["error"]
raise AniwatchError(error)
else:
self.__episodes = EpisodeList(
[
Episode(data_dict, self.__network, self.__API_URL, self.anime_id, self.title,)
for data_dict in json["episodes"]
]
)
return self.__episodes
def __repr__(self):
return f"<Anime: {self.title}>"
def get_relations(self):
"""Gets the relation of the anime.
:return: A Relation object that contains all the details of the relation.
:rtype: Relation
"""
data = {
"controller": "Relation",
"action": "getRelation",
"relation_id": self.relation_id,
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
if json.get("success", True) != True:
error = json["error"]
raise AniwatchError(error)
else:
return Relation(json["relation"])
def get_recommendations(self):
"""Gets the recommendations for the anime.
:return: A list of RecommendationEntry objects where each object represents
a single recommendation.
:rtype: list[RecommendationEntry]
"""
data = {
"controller": "Anime",
"action": "getRecommendations",
"detail_id": str(self.anime_id),
}
headers = self.__generate_default_headers()
        json = self.__network.post(data, headers)
if json.get("success", True) != True:
error = json["error"]
raise AniwatchError(error)
else:
return [
RecommendationEntry(data_dict, self.__network)
for data_dict in json["entries"]
]
def get_chronicle(self, page=1):
"""Gets the chronicle for the anime, a chronicle tracks the user's watch
history for a particular anime.
:param page: The page of the chronicle you want to get, defaults to 1
:type page: int, optional
:return: A list of ChronicleEntry objects where each object has details like date.
:rtype: list[ChronicleEntry]
"""
data = {
"controller": "Profile",
"action": "getChronicle",
"detail_id": str(self.anime_id),
"page": page,
}
headers = self.__generate_default_headers()
        json = self.__network.post(data, headers)
if json.get("success", True) != True:
error = json["error"]
raise AniwatchError(error)
else:
return [
ChronicleEntry(data_dict, self.__network, self.__API_URL)
for data_dict in json["chronicle"]
]
def mark_as_completed(self):
"""Marks the anime as "completed" on the user's aniwatch anime list.
:return: True if the operation is successful, False if an error occurs.
:rtype: bool
"""
data = {
"controller": "Profile",
"action": "markAsCompleted",
"detail_id": str(self.anime_id),
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
return json["success"]
def mark_as_plan_to_watch(self):
"""Marks the anime as "plan to watch" on the user's aniwatch anime list.
:return: True if the operation is successful, False if an error occurs.
:rtype: bool
"""
data = {
"controller": "Profile",
"action": "markAsPlannedToWatch",
"detail_id": str(self.anime_id),
}
        headers = self.__generate_default_headers()
        json = self.__network.post(data, headers)
        return json["success"]
def mark_as_on_hold(self):
"""Marks the anime as "on hold" on the user's aniwatch anime list.
:return: True if the operation is successful, False if an error occurs.
:rtype: bool
"""
data = {
"controller": "Profile",
"action": "markAsOnHold",
"detail_id": str(self.anime_id),
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
return json["success"]
def mark_as_dropped(self):
"""Marks the anime as "dropped" on the user's aniwatch anime list.
:return: True if the operation is successful, False if an error occurs.
:rtype: bool
"""
data = {
"controller": "Profile",
"action": "markAsDropped",
"detail_id": str(self.anime_id),
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
return json["success"]
def mark_as_watching(self):
"""Marks the anime as "watching" on the user's aniwatch anime list
:return: True if the operation is successful, False if an error occurs.
:rtype: bool
"""
data = {
"controller": "Profile",
"action": "markAsWatching",
"detail_id": str(self.anime_id),
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
return json["success"]
def remove_from_list(self):
"""Removes the anime from the user's aniwatch anime list.
:return: True if the operation is successful, False if an error occurs.
:rtype: bool
"""
data = {
"controller": "Profile",
"action": "removeAnime",
"detail_id": str(self.anime_id),
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
return json["success"]
def rate(self, rating: int):
"""Set the user's rating for the anime on aniwatch.
        :param rating: The rating you want to set; it should be between 1 and 10.
            Rate 0 to remove the user's rating for the anime.
:type rating: int
:return: True if the operation is successful, False if an error occurs.
:rtype: bool
"""
# Rate 0 to remove rating
data = {
"controller": "Profile",
"action": "rateAnime",
"detail_id": str(self.anime_id),
"rating": rating,
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
return json["success"]
def get_media(self):
"""Gets the anime's associated media from aniwatch.me
:return: A Media object that has attributes like ``opening``, ``osts``.
:rtype: Media
"""
data = {
"controller": "Media",
"action": "getMedia",
"detail_id": str(self.anime_id),
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
return Media(json, self.__network, self.anime_id,)
def get_complete_object(self):
"""Gets the current anime object but with complete attributes. Sometimes, the Anime
object that is returned by the API does not contain values for all the attributes
that the Anime object has. Use this method to get an object with the maximum amount of
        data. This method is almost never required but is useful for edge cases.
:return: An Anime object with values for as many attributes as possible.
:rtype: Anime
"""
data = {
"controller": "Anime",
"action": "getAnime",
"detail_id": str(self.anime_id),
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
if json.get("success", True) != True:
error = json["error"]
raise AniwatchError(error)
else:
data_dict = json["anime"]
return Anime(data_dict, self.__network, api_url=self.__API_URL,)
def add_recommendation(self, recommended_anime_id: int):
        """Adds the user's recommendation for the anime.
:param recommended_anime_id: The aniwatch anime ID of the anime you want to recommend.
:type recommended_anime_id: int
:return: [description]
:rtype: [type]
"""
data = {
"controller": "Anime",
"action": "addRecommendation",
"detail_id": str(self.anime_id),
"recommendation": str(recommended_anime_id),
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
return json
def get_dict(self):
"""Gets the JSON response in the form of a dictionary that was used to
initialize the object.
:return: A dictionary of the JSON response
:rtype: dict
"""
return self.data_dict
class Episode(object):
def __init__(self, data_dict, network, api_url, anime_id, anime_title=None):
self.anime_title = anime_title
"""The title of the anime that the episode belongs to."""
self.__network = network
self.anime_id = anime_id
"""The anime ID of the anime that the episode belongs to."""
self.__API_URL = api_url
self.number = data_dict.get("number", None)
"""The episode number of the episode."""
self.title = data_dict.get("title", None)
"""The title of the episode."""
self.description = data_dict.get("description", None)
"""The description of the episode."""
self.thumbnail = data_dict.get("thumbnail", None)
"""The URL to the thumbnail for the episode."""
self.added = datetime.datetime.utcfromtimestamp(data_dict.get("added", None))
"""The date when the episode was added."""
self.filler = data_dict.get("filler", None)
"""Is set to 1 if the episode is filler else 0"""
self.ep_id = data_dict.get("ep_id", None)
"""The ID of the episode"""
self.duration = data_dict.get("duration", None)
"""The duration of the episode"""
self.is_aired = data_dict.get("is_aired", None)
"""Is set to 1 if the episode has aired else 0."""
self.lang = data_dict.get("lang", None)
"""The language of the episode."""
self.watched = data_dict.get("watched", None)
"""Is set to 1 if the user has marked the episode as watched else 0"""
self.__aniwatch_episode = None
self.__m3u8 = None
    def __generate_default_headers(self):
        headers = {
            "REFERER": f"https://aniwatch.me/anime/{self.anime_id}/{self.number}",
            "X-PATH": f"/anime/{self.anime_id}/{self.ep_id}"
        }
        return headers
def get_aniwatch_episode(self, lang="en-US"):
"""Gets the AniWatchEpisode object associated with the episode.
An AniWatchEpisode has data regarding languages and streams available
for the current anime.
:param lang: Used only because the aniwatch API requires it, defaults to "en-US"
:type lang: str, optional
:return: An AniWatchEpisode object.
:rtype: AniWatchEpisode
"""
if self.__aniwatch_episode:
return self.__aniwatch_episode
else:
data = {
"controller": "Anime",
"action": "watchAnime",
"lang": lang,
"ep_id": self.ep_id,
"hoster": "",
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
self.__aniwatch_episode = AniWatchEpisode(json, self.ep_id)
return self.__aniwatch_episode
def get_m3u8(self, quality: str) -> M3U8:
"""Gets the episode's M3U8 data.
        :param quality: The quality whose M3U8 data you need. The available
            qualities are "ld" (360p), "sd" (480p), "hd" (720p) and "fullhd" (1080p).
:type quality: str
:return: A M3U8 object, the data can be accessed by calling the ``data`` property on the
object.
:rtype: M3U8
"""
if self.__m3u8:
return self.__m3u8
else:
try:
headers = self.__generate_default_headers()
self.toggle_mark_as_watched()
aniwatch_episode = self.get_aniwatch_episode()
uri = aniwatch_episode.stream.sources[quality] # The uri to the M3U8 file.
res = self.__network.get_with_user_session(uri, headers)
self.__m3u8 = M3U8(res.text)
return self.__m3u8
            except Exception:
return None
def download(
self,
quality: str,
file_name: str = None,
path: str = None,
multi_threading: bool = False,
max_threads: int = None,
use_ffmpeg: bool = True,
include_intro: bool = False,
delete_chunks: bool = True,
on_progress=None,
print_progress: bool = True,
):
"""Downloads the current episode in your selected quality.
        :param quality: The quality that you want to download. The available
            qualities are "ld" (360p), "sd" (480p), "hd" (720p) and "fullhd" (1080p).
            Note that all qualities may not be available for all episodes.
:type quality: str
:param file_name: The name of the downloaded file. If left to None, the file will be named
"[anime_name] - [episode_number].mp4". Macros are also supported,
"<anititle>" will be replaced by the anime name, <ep> will be replaced
            by the episode number and <eptitle> will be replaced by the episode's title.
            For example, let's say that the episode in question is the third episode of
            the anime called "Vinland Saga", and the title of the episode is "Troll". Suppose
            we pass the string ``"<anititle> - <ep> - <eptitle>"``; the resulting file will be
            named ``"Vinland Saga - 3 - Troll.mp4"``.
:type file_name: str, optional
        :param path: The path where you want the downloaded video saved, defaults to None. If left None,
            the current working directory, i.e. the directory of the script calling the method,
            is used as the path.
:type path: str, optional
:param multi_threading: Set this to true to enable multithreaded downloading, defaults to False.
Enabling this can offer significant performance benefits especially on faster
            connections. However, this comes with a trade-off: using multithreading negatively
            affects download resumability. Therefore it is recommended that this be set to False
            when using slower connections.
:type multi_threading: bool, optional
:param max_threads: Set the maximum number of threads that will be used at once when using multi
            threaded downloading, defaults to None. When None, the maximum number of feasible
            threads will be used, i.e. one thread per chunk.
:type max_threads: int, optional
:param use_ffmpeg: Enable/disable using FFMPEG to combine the downloaded chunks, defaults to True.
Requires FFMPEG. It is recommended to keep this enabled as not using FFMPEG can cause
            video playback issues on certain players. Using FFMPEG also results in noticeably smaller
files.
:type use_ffmpeg: bool, optional
:param include_intro: Set this to true to include the 5 second aniwatch intro, defaults to False.
It is recommended to skip the intro as this causes issues when combining the chunks
with FFMPEG.
:type include_intro: bool, optional
        :param delete_chunks: Set this to False to keep the downloaded .ts chunks after they have been
            combined into a single mp4 file. Defaults to True.
:type delete_chunks: bool, optional
        :param on_progress: Register a function that is called every time a new chunk is downloaded. The number
            of chunks done and the total number of chunks are passed as arguments to the function, in that
            exact order. Defaults to None.
:type on_progress: function, optional
:param print_progress: Print the number of chunks done and the total number of chunks to the console,
defaults to True.
:type print_progress: bool, optional
"""
m3u8 = self.get_m3u8(quality)
if file_name is None:
file_name = f"{self.anime_title[:128]}-{self.number}"
else:
file_name = (
file_name.replace("<ep>", str(self.number))
.replace("<eptitle>", self.title)
.replace("<anititle>", self.anime_title[:128])
)
file_name = sanitize_filename(file_name)
current_path = os.getcwd()
if path:
os.chdir(path)
if multi_threading:
dlr = MultiThreadDownloader(
self.__network, m3u8, file_name, self.ep_id, max_threads, use_ffmpeg, include_intro, delete_chunks,
)
else:
dlr = Downloader(self.__network, m3u8, file_name, self.ep_id, use_ffmpeg, include_intro, delete_chunks,)
dlr.download()
dlr.merge()
if delete_chunks:
dlr.remove_chunks()
os.chdir(current_path)
def get_available_qualities(self):
"""Gets a list of available qualities for the episode.
        :return: A list of available qualities. "ld", "sd", "hd" and "fullhd"
            refer to 360p, 480p, 720p and 1080p respectively.
:rtype: list[str]
"""
aniwatch_episode = self.get_aniwatch_episode()
return tuple(aniwatch_episode.stream.sources.keys())
def toggle_mark_as_watched(self):
"""Toggles the "mark as watched" status of the episode
        :return: True if the operation is successful, False if an error occurred.
:rtype: bool
"""
data = {
"controller": "Profile",
"action": "markAsWatched",
"detail_id": str(self.anime_id),
"episode_id": self.ep_id,
}
headers = self.__generate_default_headers()
json = self.__network.post(data, headers)
return json["success"]
def __repr__(self):
return f"<Episode {self.number}: {self.title}>"
class AniWatchEpisode(object):
def __init__(self, data_dict, episode_id):
self.episode_id = episode_id
"""The ID of the episode to which the object belongs."""
        self.languages = [Language(lang) for lang in (data_dict.get("lang") or [])]
"""List of Language objects available for the episode."""
self.stream = Stream(data_dict.get("stream", None))
"""The Stream object associated with the episode."""
def __repr__(self):
        return f"Episode ID: {self.episode_id}"

# Source file: Sakurajima/models/base_models.py
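The filename macros documented in `Episode.download` boil down to three string replacements plus a 128-character cap on the title. A standalone sketch of that substitution (the function name is hypothetical; sanitization via `pathvalidate` is left out to keep it dependency-free):

```python
def expand_filename_macros(template: str, anime_title: str, episode_number: int, episode_title: str) -> str:
    # Mirrors the macro handling in Episode.download: <ep> becomes the episode
    # number, <eptitle> the episode title, and <anititle> the anime title
    # capped at 128 characters (matching the slice used in the method).
    return (
        template.replace("<ep>", str(episode_number))
        .replace("<eptitle>", episode_title)
        .replace("<anititle>", anime_title[:128])
    )


name = expand_filename_macros("<anititle> - <ep> - <eptitle>", "Vinland Saga", 3, "Troll")
# Matches the docstring example: "Vinland Saga - 3 - Troll"
assert name == "Vinland Saga - 3 - Troll"
```

In the real method the result is then passed through `sanitize_filename` and given an `.mp4` extension.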
class AniwatchStats(object):
def __init__(self, data_dict):
self.total_streams = data_dict["hoster"][0]["count"]
"""The total number of streams on aniwatch.me"""
self.total_1080p_streams = data_dict["hoster"][0]["rows"][0]["count"]
"""The total number of 1080p streams on aniwatch.me"""
self.total_720p_streams = data_dict["hoster"][0]["rows"][1]["count"]
"""The total number of 720p streams on aniwatch.me"""
self.total_480p_streams = data_dict["hoster"][0]["rows"][2]["count"]
"""The total number of 480p streams on aniwatch.me"""
self.total_360p_streams = data_dict["hoster"][0]["rows"][3]["count"]
"""The total number of 360p streams on aniwatch.me"""
self.registered_users = data_dict["user"][0]["count"]
"""The total number of registered users on aniwatch.me"""
self.registered_users_graph_data = data_dict["user"][0]["rows"]
"""The graph data for the total number of registered users on aniwatch.me"""
self.new_registered_users = data_dict["user"][1]["count"]
"""The total number of users who have registered recently."""
self.new_registered_users_graph_data = data_dict["user"][1]["rows"]
"""The graph data for the recently joined users."""
self.total_shows = data_dict["entry"]["count"]
"""The total number of shows on aniwatch, this is the sum total of all the
animes, specials, movies and hentais."""
self.total_animes = data_dict["entry"]["rows"][0]["count"]
"""The total number of animes on aniwatch.me"""
self.total_specials = data_dict["entry"]["rows"][1]["count"]
"""The total number of specials on aniwatch.me"""
self.total_movies = data_dict["entry"]["rows"][2]["count"]
"""The total number of movies on aniwatch.me"""
self.total_hentais = data_dict["entry"]["rows"][3]["count"]
"""The total number of hentais on aniwatch.me"""
# I don't know what they mean by it, the api response has a "Unknown" column the same way it
# has the total anime and total movie column so I decided to include it.
self.total_unknowns = data_dict["entry"]["rows"][4]["count"]
"""The total number of shows that don't fall into any of the categories."""
def __repr__(self):
        return "<AniwatchStats>"

# Source file: Sakurajima/models/stats.py
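The constructor above indexes a deeply nested payload. A minimal fabricated payload illustrating the shape it expects (every count here is made up purely for illustration):

```python
# Stand-in for the API response that AniwatchStats consumes: "hoster" rows
# are per-quality stream counts, "user" entries are total/new registrations,
# and "entry" rows are per-category show counts.
sample = {
    "hoster": [
        {"count": 100, "rows": [{"count": 40}, {"count": 30}, {"count": 20}, {"count": 10}]}
    ],
    "user": [{"count": 5000, "rows": []}, {"count": 12, "rows": []}],
    "entry": {
        "count": 7,
        "rows": [{"count": 3}, {"count": 1}, {"count": 2}, {"count": 0}, {"count": 1}],
    },
}

# The same lookups the constructor performs:
total_streams = sample["hoster"][0]["count"]
total_1080p = sample["hoster"][0]["rows"][0]["count"]
total_animes = sample["entry"]["rows"][0]["count"]
assert (total_streams, total_1080p, total_animes) == (100, 40, 3)
```

Seeing the shape this way makes it clear why the constructor would raise `KeyError`/`IndexError` on a partial payload; it assumes every row is present.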
import json
from Sakurajima.models import base_models as bm
class RecommendationEntry(object):
def __init__(self, data_dict, network):
self.__network = network
self.title = data_dict.get("title", None)
"""The title of the recommeneded anime."""
self.episodes_max = data_dict.get("episodes_max", None)
"""The total number of episodes that the recommended anime has."""
self.type = data_dict.get("type", None)
"""The type of the recommended anime. For example, anime, special or movie."""
self.anime_id = data_dict.get("detail_id", None)
"""The ID of the recommended anime."""
self.cover = data_dict.get("cover", None)
"""The URL to the cover image of the recommended anime."""
self.airing_start = data_dict.get("airing_start", None)
"""The season when the recommended anime started airing."""
self.recommendations = data_dict.get("recommendations", None)
"""The number of users that have recommened this anime."""
self.d_status = data_dict.get("d_status", None)
"""The d_status of the anime? If you figure this out please let us know by opening an issue
on our `repo <https://github.com/veselysps/Sakurajima/issues>`_"""
self.has_special = data_dict.get("hasSpecial", None)
"""If the recommended anime has a special."""
self.progress = data_dict.get("progress", None)
"""The user's progress for the recommended anime."""
self.cur_episodes = data_dict.get("cur_episodes", None)
"""The number of currently aired episodes of the recommended anime."""
def __repr__(self):
return f"<RecommendationEntry: {self.title}>"
def get_anime(self):
"""Gets the Anime object of the anime recommended in the RecommendationEntry.
:rtype: `Anime <anime.html>`_
"""
        data = {
            "controller": "Anime",
            "action": "getAnime",
            "detail_id": str(self.anime_id),
        }
        # Only the network object is available on this class, so there is no
        # API URL to forward to the Anime constructor.
        json = self.__network.post(data)
        return bm.Anime(json["anime"], network=self.__network, api_url=None)

# Source file: Sakurajima/models/recommendation.py
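Many methods in these classes repeat the same response check: a dict whose `success` field is not `True` carries an `error` message that gets raised as `AniwatchError`. A standalone sketch of that pattern (the exception class here is a stub standing in for `Sakurajima.errors.AniwatchError`, and the helper name is hypothetical):

```python
class AniwatchError(Exception):
    """Stub standing in for Sakurajima.errors.AniwatchError."""


def raise_for_error(json_response):
    # The check used throughout the library: list responses pass through
    # unchanged, and dict responses with success != True raise.
    if isinstance(json_response, dict) and json_response.get("success", True) != True:
        raise AniwatchError(json_response["error"])
    return json_response


# Successful responses (lists, or dicts with success=True) pass through:
assert raise_for_error({"success": True, "entries": []}) == {"success": True, "entries": []}

# Failure responses raise with the server-provided message:
try:
    raise_for_error({"success": False, "error": "not found"})
except AniwatchError as exc:
    assert str(exc) == "not found"
```

Centralizing the check like this would remove the duplicated `if json.get("success", True) != True:` blocks from each method.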
import requests
import json
import datetime
class Notification(object):
def __init__(self, data_dict, network):
self.__network = network
self.data_dict = data_dict
self.id = data_dict.get("id", None)
"""The ID of the notification."""
self.type = data_dict.get("type", None)
"""The notification type, example if the notification is for a recently aired anime."""
self.mode = data_dict.get("mode", None)
self.content = data_dict.get("content", None)
"""The content or the body of the notification."""
try:
self.time = datetime.datetime.utcfromtimestamp(data_dict.get("time", None))
"""The time the notification was issued."""
except Exception:
self.time = data_dict.get("time", None)
self.seen = data_dict.get("seen", None)
"""The "seen" status of the notification."""
self.href_blank = data_dict.get("href_blank", None)
"""If the notification has an associated URL."""
self.href = data_dict.get("href", None)
        """The URL associated with the notification. If the notification is for a
        recently aired episode, this is the URL to that episode."""
def get_dict(self):
return self.data_dict
def toggle_seen(self):
"""Toggles the "seen" status of the notification.
        :return: True if the operation is successful, False if an error occurred.
:rtype: bool
"""
data = {
"controller": "Profile",
"action": "toggleNotificationSeen",
"id": self.id,
}
return self.__network.post(data)["success"]
def delete(self):
"""Deletes the notification from the user's account.
        :return: True if the operation is successful, False if an error occurred.
:rtype: bool
"""
data = {"controller": "Profile", "action": "deleteNotification", "id": self.id}
return self.__network.post(data)["success"]
def __repr__(self):
        return f"<Notification ID: {self.id}, Date: {self.time}>"

# Source file: Sakurajima/models/notification.py
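`Notification.__init__` above wraps the timestamp conversion in a try/except so that a missing or non-numeric `time` field falls back to the raw value. That defensive pattern, extracted into a standalone helper (the function name is hypothetical):

```python
import datetime


def parse_timestamp(value):
    # Mirrors the pattern in Notification.__init__: try to interpret the raw
    # value as a UNIX timestamp; on failure (None, strings, etc.) return the
    # value unchanged so callers can still inspect it.
    try:
        return datetime.datetime.utcfromtimestamp(value)
    except Exception:
        return value


assert parse_timestamp(0) == datetime.datetime(1970, 1, 1)
assert parse_timestamp(None) is None
```

Catching a narrower `(TypeError, ValueError, OSError)` instead of bare `Exception` would be stricter, but the broad catch matches the library's existing style.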
import requests
import json
from Sakurajima.models import base_models as bm
from Sakurajima.models.chronicle import ChronicleEntry
import datetime
class UserAnimeListEntry(object):
"""A UserAnimeListEntry represents a single show on a user's aniwatch.me
anime list.
"""
def __init__(self, data_dict, network):
self.__network = network
self.title = data_dict.get("title", None)
"""The title of the anime."""
self.episodes_max = data_dict.get("episodes_max", None)
"""The total number of episodes the anime has."""
self.type = data_dict.get("type", None)
"""The type of the show. For example, the type can anime, special or movie etc."""
self.cover = data_dict.get("cover", None)
"""The URL to the cover image of the anime."""
self.anime_id = data_dict.get("details_id", None)
"""The ID of the anime."""
self.progress = data_dict.get("progress", None)
"""The total number of episodes that the user has watched for this anime."""
self.airing_start = data_dict.get("airing_start", None)
"""The season the anime started airing."""
self.cur_episodes = data_dict.get("cur_episodes", None)
"""The total number of episodes that have already aired."""
if data_dict.get("completed", None) == 1:
self.status = "completed"
"""The watch status of the anime."""
elif data_dict.get("planned_to_watch", None) == 1:
self.status = "planned_to_watch"
elif data_dict.get("on_hold", None) == 1:
self.status = "on_hold"
elif data_dict.get("dropped", None) == 1:
self.status = "dropped"
def get_anime(self):
"""Gets the Anime object of the entry.
:rtype: Anime
"""
        data = {
            "controller": "Anime",
            "action": "getAnime",
            "detail_id": str(self.anime_id),
        }
        # Only the network object is available on this class, so there is no
        # API URL to forward to the Anime constructor.
        json = self.__network.post(data)
        return bm.Anime(json["anime"], network=self.__network, api_url=None)
def __repr__(self):
return f"<AnimeListEntry: {self.title}>"
class UserOverview(object):
"""The user's overview. This includes things like the total number of shows
watched by type, the user's title and the URL of the user's cover."""
def __init__(self, data_dict):
self.anime = UserOverviewWatchType(data_dict["anime"])
"""UserOverviewWatchType object for the animes."""
self.special = UserOverviewWatchType(data_dict["special"])
"""UserOverviewWatchType object for the specials."""
self.movie = UserOverviewWatchType(data_dict["movie"])
"""UserOverviewWatchType object for the movies."""
self.hentai = UserOverviewWatchType(data_dict["hentai"])
"""UserOverviewWatchType object for the hentais."""
self.stats = UserOverviewStats(data_dict["stats"])
"""The UserOverviewStats object that represents the user's watch stats.
This includes things like the total number hours watched etc."""
self.mine = data_dict.get("mine", None)
self.username = data_dict.get("username", None)
"""The user's aniwatch.me username."""
self.title = data_dict.get("title", None)
"""The user's title on aniwatch.me"""
self.admin = data_dict.get("admin", None)
"""If the user is an administrator."""
self.staff = data_dict.get("staff", None)
"""If the user is a staff."""
self.cover = data_dict.get("cover", None)
"""The URL to the user's profile cober image."""
self.friend = data_dict.get("friend", None)
"""The total number of friends that the user has."""
def __repr__(self):
return f"<UserOverView: {self.username}>"
class UserOverviewWatchType(object):
"""A generic class that holds data regarding the shows watched of a particular type."""
def __init__(self, data_dict):
self.total = data_dict.get("total", None)
"""The total number of shows watched that belong to a particular type."""
self.episodes = data_dict.get("episodes", None)
"""The total number of episodes watched that belong to a particuler type"""
self.icon = data_dict.get("icon", None)
class UserOverviewStats(object):
"""Holds data regarding the watch stats of a user."""
def __init__(self, data_dict):
self.total = data_dict.get("total", None)
"""The total number of shows that the user has watched, this includes
all categories like anime, movie annd special etc."""
self.total_episodes = data_dict.get("total_episodes", None)
"""The total number of episodes that the user has watched. Includes
episodes from all categories."""
self.watched_hours = data_dict.get("watched_hours", None)
"""The total watch time of a user in hours."""
self.watched_days = data_dict.get("watched_days", None)
"""The total watch time of a user in days."""
self.mean_score = data_dict.get("mean_score", None)
"""The mean score that the user has given to different shows"""
self.ratings = data_dict.get("ratings", None)
"""The total number of shows that the user has rated."""
class Friend(object):
"""Represents a user's friend on aniwatch.me"""
def __init__(self, network, data_dict):
self.__network = network
self.username = data_dict.get("username", None)
self.user_id = data_dict.get("userid", None)
self.cover_img = data_dict.get("cover", None)
try:
self.friends_since = datetime.datetime.utcfromtimestamp(data_dict.get("date", None))
except (TypeError, OSError, OverflowError):
self.friends_since = None
def __repr__(self):
return f"<Friend {self.username}>"
def unfriend(self):
"""Removes the friend from the user's profile.
:return: True if the operation is successful, False if an error occurred.
:rtype: bool
"""
data = {
"controller": "Profile",
"action": "removeFriend",
"friend_id": self.user_id,
}
return self.__network.post(data)["success"]
def get_overview(self):
"""Gets the friend's aniwatch.me overview.
:return: A UserOverview object that has data regarding things like total
watch time, mean score and total shows watched etc.
:rtype: :class:`UserOverview`
"""
data = {
"controller": "Profile",
"action": "getOverview",
"profile_id": self.user_id,
}
return UserOverview(self.__network.post(data)["overview"])
def get_chronicle(self, page=1):
"""Gets the friend's chronicle. A chronicle tracks a user's watch history.
:param page: The page of the chronicle that you want to get, defaults to 1
:type page: int, optional
:return: A list of ChronicleEntry objects.
:rtype: list[ChronicleEntry]
"""
data = {
"controller": "Profile",
"action": "getChronicle",
"profile_id": self.user_id,
"page": page,
}
return [
ChronicleEntry(data_dict, self.__network, self.__network.API_URL)
for data_dict in self.__network.post(data)["chronicle"]
]
class FriendRequestIncoming(object):
"""Represents a friend requests that the user has recieved.
"""
def __init__(self, network, data_dict):
self.__network = network
self.username = data_dict.get("username", None)
self.user_id = data_dict.get("userid", None)
self.cover_img = data_dict.get("cover", None)
self.date = data_dict.get("date", None)
def accept(self):
"""Accepts the friend request.
:return: True if the operation was successful, False if an error occurred.
:rtype: :class:`bool`
"""
data = {
"controller": "Profile",
"action": "acceptRequest",
"friend_id": self.user_id,
}
return self.__network.post(data)["success"]
def decline(self):
"""Declines the friend request.
:return: True if the operation was successful, False if an error occurred.
:rtype: :class:`bool`
"""
data = {
"controller": "Profile",
"action": "rejectRequest",
"friend_id": self.user_id,
}
return self.__network.post(data)["success"]
def __repr__(self):
return f"<FriendRequestIncoming: {self.username}>"
class FriendRequestOutgoing(object):
"""Represents a friend request that the user has sent.
"""
def __init__(self, network, data_dict):
self.__network = network
self.username = data_dict.get("username", None)
self.user_id = data_dict.get("userid", None)
self.cover_img = data_dict.get("cover", None)
self.date = data_dict.get("date", None)
def withdraw(self):
"""Withdraws the friend request.
:return: True if the operation was successful, False if an error occurred.
:rtype: :class:`bool`
"""
data = {
"controller": "Profile",
"action": "withdrawRequest",
"friend_id": self.user_id,
}
return self.__network.post(data)["success"]
def __repr__(self):
return f"<FriendRequestOutgoing: {self.username}>" | /sakurajima-0.3.1.tar.gz/sakurajima-0.3.1/Sakurajima/models/user_models.py | 0.801431 | 0.390883 | user_models.py | pypi |
import requests
import json
import datetime
class Media(object):
"""Contains media entries for categories like openings, endings and OSTs"""
def __init__(self, data_dict, network, anime_id):
self.__network = network
self.anime_id = anime_id
"""The ID of the anime to which the media belons."""
self.theme_songs = [
MediaEntry(data, self.__network) for data in data_dict.get("Theme Songs", [])
]
"""Theme songs for the anime."""
self.openings = [MediaEntry(data, self.__network) for data in data_dict.get("Openings", [])]
"""Opening songs for the anime."""
self.endings = [MediaEntry(data, self.__network) for data in data_dict.get("Endings", [])]
"""Ending songs for the anime."""
self.osts = [MediaEntry(data, self.__network) for data in data_dict.get("OSTs", [])]
"""The official sound tracks for the anime."""
def __repr__(self):
return f"<Media for Anime: {self.anime_id}>"
class MediaEntry(object):
"""Represents a single media entry and contains all the relevant data related
to the entry like media_id and favorite status."""
def __init__(self, data_dict, network):
self.__network = network
self.title = data_dict.get("media_title", None)
"""The title of the media."""
self.type = data_dict.get("media_type", None)
"""The type of the media, example opening, ending etc"""
self.value = data_dict.get("media_value", None)
self.favorites = data_dict.get("media_favorites", None)
"""The number of users who have favorited the media."""
self.is_favorited = data_dict.get("media_hasFaved", None)
"""If the user has favorited the media."""
self.id = data_dict.get("media_id", None)
"""The ID of the media."""
self.occurence = data_dict.get("media_occurence", None)
self.thumbnail = data_dict.get("media_thumb", None)
"""The url to the thumbnail of the media."""
def __repr__(self):
return f"<Media Entry {self.title}>"
def favorite_media(self):
"""Marks the media as favorite.
:return: True if the operation is successful, False if an error occurred.
:rtype: bool
"""
data = {"controller": "Media", "action": "favMedia", "media_id": str(self.id)}
return self.__network.post(data)["success"]
class UserMedia(object):
def __init__(self, data_dict, network):
self.__network = network
self.title = data_dict.get("title", None)
"""The title of the media."""
self.media_id = data_dict.get("media_id", None)
"""The ID of the media."""
try:
self.date = datetime.datetime.utcfromtimestamp(data_dict["date"])
"""The date the media was created."""
except (KeyError, TypeError, OSError, OverflowError):
self.date = None
def __repr__(self):
return f"UserMedia: {self.title}" | /sakurajima-0.3.1.tar.gz/sakurajima-0.3.1/Sakurajima/models/media.py | 0.71602 | 0.356755 | media.py | pypi |
from Sakurajima.models import base_models as bm
class EpisodeList(object):
"""An :class:`EpisodeList` is very similar to a normal list. You can do everything
with a :class:`EpisodeList` that you can with a normal list. The only difference is that
an EpisodeList has some convinience methods that make selecting a particular episode easier.
"""
def __init__(self, episode_list):
self.validate_list(episode_list)
self.__episode_list = episode_list
def validate_list(self, episode_list):
for episode in episode_list:
if isinstance(episode, bm.Episode):
continue
else:
raise ValueError(
"EpisodeList only take in lists that contain only Episode objects"
)
def get_episode_by_number(self, episode_number: int):
"""Returns the first :class:`Episode` object from the list whose ``number`` attribue matches the
``episode_number`` parameter.
:param episode_number: The episode number that you want to find in the list.
:type episode_number: int
:rtype: :class:`Episode`
"""
for episode in self.__episode_list:
if episode.number == episode_number:
return episode
return None
def get_episode_by_title(self, title: str):
"""Returns the first :class:`Episode` object from the list whose ``title`` attribue matches the
``title`` parameter.
:param title: The title of the episode that you want to find.
:type title: str
:rtype: :class:`Episode`
"""
for episode in self.__episode_list:
if episode.title == title:
return episode
return None
def last(self):
"""Returns the last :class:`Episode` object from the list.
:rtype: :class:`Episode`
"""
return self.__episode_list[-1]
def __getitem__(self, position):
if isinstance(position, int):
return self.__episode_list[position]
elif isinstance(position, slice):
return EpisodeList(self.__episode_list[position])
def __len__(self):
return len(self.__episode_list)
def __reversed__(self):
return self[::-1]
def __repr__(self):
return f"EpisodeList({self.__episode_list})" | /sakurajima-0.3.1.tar.gz/sakurajima-0.3.1/Sakurajima/utils/episode_list.py | 0.852107 | 0.375792 | episode_list.py | pypi |
import os
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad
from Sakurajima.utils.merger import ChunkMerger, FFmpegMerger, ChunkRemover
from threading import Thread, Lock
from progress.bar import IncrementalBar
from Sakurajima.utils.progress_tracker import ProgressTracker
from Sakurajima.utils.decrypter_provider import DecrypterProvider
from concurrent.futures import ThreadPoolExecutor
class Downloader(object):
"""
Facilitates downloading an episode from aniwatch.me using a single thread.
"""
def __init__(
self,
network,
m3u8,
file_name: str,
episode_id: int,
use_ffmpeg: bool = True,
include_intro: bool = False,
delete_chunks: bool = True,
on_progress=None,
):
"""
:param network: The Sakurajima :class:`Network` object that is used to make network requests.
:type network: :class:`Network`
:param m3u8: The M3U8 data of the episode that is to be downloaded.
:type m3u8: :class:`M3U8`
:param file_name: The name of the downloaded video file.
:type file_name: str
:param episode_id: The episode ID of the episode being downloaded.
This is only required to uniquely identify the progress
tracking data of the episode.
:type episode_id: int
:param use_ffmpeg: Whether to use ``ffmpeg`` to merge the downloaded chunks, defaults to True
:type use_ffmpeg: bool, optional
:param include_intro: Whether to include the 5 second aniwatch intro, defaults to False
:type include_intro: bool, optional
:param delete_chunks: Whether to delete the downloaded chunks after they have been
merged into a single file, defaults to True
:type delete_chunks: bool, optional
:param on_progress: Register a function that is called every time a chunk is downloaded; the function
is passed the chunk number of the downloaded chunk and the total number of chunks as
parameters, defaults to None
:type on_progress: ``function``, optional
"""
self.__network = network
self.m3u8 = m3u8
self.file_name = file_name
self.use_ffmpeg = use_ffmpeg
self.include_intro = include_intro
self.delete_chunks = delete_chunks
self.on_progress = on_progress
self.progress_tracker = ProgressTracker(episode_id)
def init_tracker(self):
self.progress_tracker.init_tracker(
{
"headers": self.__network.headers,
"cookies": self.__network.cookies,
"segments": self.m3u8.data["segments"],
"file_name": self.file_name,
"total_chunks": self.total_chunks,
}
)
def download(self):
"""Runs the downloader and starts downloading the video file.
"""
chunk_tuple_list = []
# Will hold a list of tuples of the form (chunk_number, chunk).
# The chunk_number in this list starts from 0.
for chunk_number, chunk in enumerate(self.m3u8.data["segments"]):
chunk_tuple_list.append((chunk_number, chunk))
if not self.include_intro:
# Filter out the intro chunks rather than removing items while
# iterating over the same list, which would skip elements.
chunk_tuple_list = [
chunk_tuple for chunk_tuple in chunk_tuple_list
if "img.aniwatch.me" not in chunk_tuple[1]["uri"]
]
self.total_chunks = len(chunk_tuple_list)
try:
os.makedirs("chunks")
except FileExistsError:
pass
self.progress_bar = IncrementalBar("Downloading", max=self.total_chunks)
self.init_tracker()
decrypter_provider = DecrypterProvider(self.__network, self.m3u8)
for chunk_number, chunk_tuple in enumerate(chunk_tuple_list):
# We need the chunk number here to name the files. Note that this is
# different from the chunk number that is inside the tuple.
file_name = f"chunks/{self.file_name}-{chunk_number}.chunk.ts"
ChunkDownloader(
self.__network,
chunk_tuple[1], # The segment data
file_name,
chunk_tuple[0], # The chunk number needed for decryption.
decrypter_provider
).download()
self.progress_bar.next()
self.progress_tracker.update_chunks_done(chunk_number)
if self.on_progress:
self.on_progress.__call__(chunk_number + 1, self.total_chunks)
self.progress_bar.finish()
def merge(self):
"""Merges the downloaded chunks into a single file.
"""
if self.use_ffmpeg:
FFmpegMerger(self.file_name, self.total_chunks).merge()
else:
ChunkMerger(self.file_name, self.total_chunks).merge()
def remove_chunks(self):
"""Deletes the downloaded chunks.
"""
ChunkRemover(self.file_name, self.total_chunks).remove()
class ChunkDownloader(object):
"""
The object that actually downloads a single chunk.
"""
def __init__(self, network, segment, file_name, chunk_number, decrypt_provider: DecrypterProvider):
"""
:param network: The Sakurajima :class:`Network` object that is used to make network requests.
:type network: :class:`Network`
:param segment: The segment data from the M3U8 file that is to be downloaded.
:type segment: :class:`dict`
:param file_name: The file name of the downloaded chunk.
:type file_name: :class:`str`
:param chunk_number: The chunk number of the chunk to be downloaded, required to generate
the AES decryption initialization vector.
:type chunk_number: int
"""
self.__network = network
self.segment = segment
self.file_name = file_name
self.chunk_number = chunk_number
self.decrypter_provider = decrypt_provider
def download(self):
"""Starts downloading the chunk.
"""
with open(self.file_name, "wb") as videofile:
res = self.__network.get(self.segment["uri"])
chunk = res.content
key_dict = self.segment.get("key", None)
if key_dict is not None:
decrypted_chunk = self.decrypt_chunk(chunk)
videofile.write(decrypted_chunk)
else:
videofile.write(chunk)
def decrypt_chunk(self, chunk):
decrypter = self.decrypter_provider.get_decrypter(self.chunk_number)
return decrypter.decrypt(chunk)
class MultiThreadDownloader(object):
"""
Facilitates downloading an episode from aniwatch.me using multiple threads.
"""
def __init__(
self,
network,
m3u8,
file_name: str,
episode_id: int,
max_threads: int = None,
use_ffmpeg: bool = True,
include_intro: bool = False,
delete_chunks: bool = True,
):
"""
:param network: The Sakurajima :class:`Network` object that is used to make network requests.
:type network: :class:`Network`
:param m3u8: The M3U8 data of the episode that is to be downloaded.
:type m3u8: :class:`M3U8`
:param file_name: The name of the downloaded video file.
:type file_name: str
:param episode_id: The episode ID of the episode being downloaded.
This is only required to uniquely identify the progress
tracking data of the episode.
:type episode_id: int
:param max_threads: The maximum number of threads that will be used for downloading, defaults to None;
if None, the maximum possible number of threads will be used.
:type max_threads: int, optional
:param use_ffmpeg: Whether to use ``ffmpeg`` to merge the downloaded chunks, defaults to True
:type use_ffmpeg: bool, optional
:param include_intro: Whether to include the 5 second aniwatch intro, defaults to False
:type include_intro: bool, optional
:param delete_chunks: Whether to delete the downloaded chunks after they have been
merged into a single file, defaults to True
:type delete_chunks: bool, optional
"""
self.__network = network
self.m3u8 = m3u8
self.file_name = file_name
self.use_ffmpeg = use_ffmpeg
self.max_threads = max_threads
self.include_intro = include_intro
self.delete_chunks = delete_chunks
self.threads = []
self.progress_tracker = ProgressTracker(episode_id)
self.__lock = Lock()
try:
os.makedirs("chunks")
except FileExistsError:
pass
def init_tracker(self):
self.progress_tracker.init_tracker(
{
"headers": self.__network.headers,
"cookies": self.__network.cookies,
"segments": self.m3u8.data["segments"],
"file_name": self.file_name,
"total_chunks": self.total_chunks,
}
)
def assign_segments(self, segment):
ChunkDownloader(
segment.network,
segment.segment,
segment.file_name,
segment.chunk_number,
segment.decrypter_provider
).download()
with self.__lock:
self.progress_tracker.update_chunks_done(segment.chunk_number)
self.progress_bar.next()
def download(self):
"""Runs the downloader and starts downloading the video file.
"""
decrypter_provider = DecrypterProvider(self.__network, self.m3u8)
chunk_tuple_list = []
# Will hold a list of tuples of the form (chunk_number, chunk).
# The chunk_number in this list starts from 0.
for chunk_number, chunk in enumerate(self.m3u8.data["segments"]):
chunk_tuple_list.append((chunk_number, chunk))
if not self.include_intro:
# Filter out the intro chunks rather than removing items while
# iterating over the same list, which would skip elements.
chunk_tuple_list = [
chunk_tuple for chunk_tuple in chunk_tuple_list
if "img.aniwatch.me" not in chunk_tuple[1]["uri"]
]
self.total_chunks = len(chunk_tuple_list)
self.progress_bar = IncrementalBar("Downloading", max=self.total_chunks)
self.init_tracker()
segment_wrapper_list = []
for chunk_number, chunk in enumerate(chunk_tuple_list):
file_name = f"chunks\/{self.file_name}-{chunk_number}.chunk.ts"
segment_wrapper = _SegmentWrapper(
self.__network,
chunk[1], # Segment data.
file_name,
chunk[0], # The chunk number needed for decryption.
decrypter_provider
)
segment_wrapper_list.append(segment_wrapper)
if self.max_threads is None:
# If the value for max threads is not provided, then it is set to
# the total number of chunks that are to be downloaded.
self.max_threads = self.total_chunks
self.executor = ThreadPoolExecutor(max_workers = self.max_threads)
with self.executor as exe:
futures = exe.map(self.assign_segments, segment_wrapper_list)
for future in futures:
# This loop serves to drain the generator and surface any exceptions.
pass
self.progress_bar.finish()
def merge(self):
"""Merges the downloaded chunks into a single file.
"""
if self.use_ffmpeg:
FFmpegMerger(self.file_name, self.total_chunks).merge()
else:
ChunkMerger(self.file_name, self.total_chunks).merge()
def remove_chunks(self):
"""Deletes the downloaded chunks.
"""
ChunkRemover(self.file_name, self.total_chunks).remove()
class _SegmentWrapper(object):
# As the name suggests, this is only a wrapper class, introduced in the
# hope that it will lead to more readable code.
def __init__(self, network, segment, file_name, chunk_number, decrypter_provider):
self.network = network
self.segment = segment
self.file_name = file_name
self.chunk_number = chunk_number
self.decrypter_provider = decrypter_provider
import shutil
import subprocess
import os
class ChunkMerger(object):
"""Merges the downloaded chunks by concatinating them into a single file.
"""
def __init__(self, file_name, total_chunks):
"""
:param file_name: The file name prefix of the chunks.
:type file_name: str
:param total_chunks: The total number of chunks. The merger assumes that the chunks are
in a ``chunks`` directory and are named according to a sequence
i.e. "{file_name}-{chunk_number}.chunk.ts"
:type total_chunks: int
"""
self.file_name = file_name
self.total_chunks = total_chunks
def merge(self):
"""Starts the merger and creates a single file ``.mp4`` file.
"""
with open(f"{self.file_name}.mp4", "wb") as merged_file:
for chunk_number in range(self.total_chunks):
with open(f"chunks/{self.file_name}-{chunk_number}.chunk.ts", "rb") as ts_file:
shutil.copyfileobj(ts_file, merged_file)
class FFmpegMerger(object):
"""Merges the downloaded chunks using ``ffmpeg``.
"""
def __init__(self, file_name, total_chunks):
"""
The parameters have the same meaning as in :class:`ChunkMerger`"""
self.file_name = file_name
self.total_chunks = total_chunks
def merge(self):
"""Starts the merger and creates a single file ``.mp4`` file.
"""
print("Merging chunks into mp4.")
concat = '"concat'
for x in range(0, self.total_chunks):
if x == 0:
concat += f":chunks\/{self.file_name}-{x}.chunk.ts"
else:
concat += f"|chunks\/{self.file_name}-{x}.chunk.ts"
concat += '"'
subprocess.run(f'ffmpeg -i {concat} -c copy "{self.file_name}.mp4"')
class ChunkRemover(object):
def __init__(self, file_name, total_chunks):
self.file_name = file_name
self.total_chunks = total_chunks
def remove(self):
for chunk_number in range(self.total_chunks):
try:
os.remove(f"chunks\/{self.file_name}-{chunk_number}.chunk.ts")
except FileNotFoundError:
pass
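The concatenation approach in ``ChunkMerger`` relies on opening the chunks in binary mode and streaming each one into the target with ``shutil.copyfileobj``. A small in-memory sketch of the same pattern, using ``io.BytesIO`` objects as stand-ins for the real ``.ts`` chunk files:

```python
import io
import shutil

# Two in-memory "chunks" standing in for .ts files on disk.
chunks = [io.BytesIO(b"part1-"), io.BytesIO(b"part2")]

merged = io.BytesIO()
for chunk in chunks:
    # copyfileobj streams in blocks, so large files never need
    # to fit in memory all at once.
    shutil.copyfileobj(chunk, merged)
```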
# %% auto 0
__all__ = ['WithChildrenMixin', 'Data', 'MappedData', 'map_data', 'render', 'FrontMatter', 'parse_arg', 'parse_attrs']
# %% ../nbs/00_core.ipynb 4
from typing import Any
from copy import deepcopy
from textwrap import indent
from collections import ChainMap
from typing import Callable, Optional
from jinja2 import Environment, BaseLoader, Template, StrictUndefined
from frontmatter.default_handlers import YAMLHandler
from frontmatter.util import u
from textwrap import dedent
# %% ../nbs/00_core.ipynb 9
class WithChildrenMixin:
"""
Adds `parent`/`children` functionality to a class.
"""
def __init__(self):
self.parent = None
self.children = []
def __len__(self):
return len(self.children)
def __contains__(self, element):
return element in self.children
def add_child(self, child: "Data"):
"""
Add a child element to the children list and set its parent to self.
"""
self.children.append(child)
child.set_parent(self)
return child
def set_parent(self, parent: "Data"):
"""
Set the parent element of self.
"""
self.parent = parent
def __iter__(self):
def iter_data(obj, level=0):
"""Simply yields parent and then children"""
yield obj, level
for child in obj.children:
yield from iter_data(child, level=level + 1)
return iter_data(self)
# %% ../nbs/00_core.ipynb 11
class Data(WithChildrenMixin):
"""
Data holder used during code generation. Logic is kept as separate functions.
"""
def __init__(
self,
name: str, # Name of this element
attrs: dict[str, Any] = None, # Attributes for this element
):
"""
Initialize Data object.
"""
self.name = name
if attrs is None:
attrs = {}
self._attrs = attrs
super().__init__()
@property
def attrs(self):
"""
Get the attributes for this element, merged with parent's attributes, if available.
"""
if self.parent:
return ChainMap(self._attrs, self.parent.attrs)
return ChainMap(self._attrs)
def clone(self):
"""
Create a deep copy of this Data object.
"""
return deepcopy(self)
def __eq__(self, a):
"""
Compare this Data object with another object for equality.
"""
print("==")
same_name = self.name == a.name
same_attrs = self.attrs == a.attrs
same_children = self.children == a.children
return same_name and same_attrs and same_children
def __str__(self):
"""
Get the string representation of this Data object.
"""
is_self_closing = not self.children
if self.children:
children = map(str, self.children)
children = "\n".join(children)
children = children.strip()
children = f"\n{children}\n"
children = indent(children, " ")
if self.attrs:
if is_self_closing:
return f"<{self.name} {dict(self.attrs)} />"
else:
return f"<{self.name} {dict(self.attrs)}>{children}</{self.name}>"
if is_self_closing:
return f"<{self.name} />"
else:
return f"<{self.name}>{children}</{self.name}>"
__repr__ = __str__
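The ``attrs`` property above uses ``collections.ChainMap`` so a child's attributes shadow its parent's without copying either dict. A standalone sketch of that lookup behavior:

```python
from collections import ChainMap

parent_attrs = {"color": "blue", "size": 2}
# The child's own mapping comes first, so its keys win on lookup.
child_attrs = ChainMap({"size": 5}, parent_attrs)

shadowed = child_attrs["size"]    # from the child's own dict
inherited = child_attrs["color"]  # falls through to the parent
```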
# %% ../nbs/00_core.ipynb 36
class MappedData(WithChildrenMixin):
"""Data structure used to return results from the `map_data` function"""
def __init__(self, value):
self.value = value
super().__init__()
# %% ../nbs/00_core.ipynb 37
def map_data(obj: Data, process: Callable, level=0) -> MappedData:
"""Maps over a `Data` inst returning `MappedData` instances"""
child_results = [map_data(c, process, level=level + 1) for c in obj.children]
value = process(obj, level)
data = MappedData(value)
for c in child_results:
data.add_child(c)
return data
# %% ../nbs/00_core.ipynb 44
def _get_env():
return Environment(loader=BaseLoader(), undefined=StrictUndefined)
# %% ../nbs/00_core.ipynb 46
def render(
template: str, # template in string form
filters: Optional[dict] = None, # jinja filters
**kwargs: Any,
) -> str:
if not filters:
filters = {}
env = _get_env()
env.filters.update(filters)
jinja: Template = env.from_string(template)
result: str = jinja.render(**kwargs)
return result
# %% ../nbs/00_core.ipynb 51
class FrontMatter:
def __init__(self, handler=None):
if handler is None:
handler = YAMLHandler()
self.handler = handler
def split(self, raw_content, *, encoding="utf-8"):
raw_content = u(raw_content, encoding).strip()
try:
fm, content = self.handler.split(raw_content)
except ValueError:
return None, raw_content
return fm, content
def parse(self, raw_frontmatter, *, metadata=None):
if metadata is None:
metadata = {}
try:
raw_frontmatter = self.handler.load(raw_frontmatter)
except Exception as e:
msg = dedent(
f"""
===
There is an error with the following yaml (front matter)
```
{raw_frontmatter}
```
===
"""
)
print(msg)
raise e
if isinstance(raw_frontmatter, dict):
metadata.update(raw_frontmatter)
return metadata
def get_content(self, template):
frontmatter, content = self.split(template)
return content.strip()
def get_raw_frontmatter(self, template):
resp = self.split(template)
frontmatter, content = resp
if frontmatter:
return frontmatter.strip()
# %% ../nbs/00_core.ipynb 53
import json
# %% ../nbs/00_core.ipynb 54
def parse_arg(arg):
try:
v = json.loads(arg)
except json.JSONDecodeError:
v = arg
return v
# %% ../nbs/00_core.ipynb 57
def parse_attrs(attrs):
for k, y in attrs.items():
attrs[k] = parse_arg(y)
return attrs
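``parse_arg`` leans on ``json.loads`` to coerce strings like ``"42"`` or ``"true"`` into Python values, falling back to the raw string when the input is not valid JSON. A self-contained copy of that logic:

```python
import json

def parse_arg(arg):
    # Valid JSON ("42", "true", "[1, 2]") becomes the corresponding
    # Python value; anything else stays a plain string.
    try:
        return json.loads(arg)
    except json.JSONDecodeError:
        return arg
```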
import time
from .Color import Color
from .Typer import Typer
import random
class ProgressBar:
def __init__(self, iteration: int, total: int, prefix = '', suffix = '', decimals = 1, length = 100, fill = '█', printEnd = "\r", ending = '\n') -> None:
"""
Call in a loop to create terminal progress bar
@params:
iteration - Required : current iteration (Int)
total - Required : total iterations (Int)
prefix - Optional : prefix string (Str)
suffix - Optional : suffix string (Str)
decimals - Optional : positive number of decimals in percent complete (Int)
length - Optional : character length of bar (Int)
fill - Optional : bar fill character (Str)
printEnd - Optional : end character (e.g. "\r", "\r\n") (Str)
"""
percent = ("{0:." + str(decimals) + "f}").format(100 * (iteration / float(total)))
filledLength = int(length * iteration // total)
color = [Color.WHITE, Color.RED, Color.GREEN, Color.BLUE, Color.CYAN, Color.MAGENTA, Color.YELLOW]
normal = ' ' * (length - filledLength)
bar = f'{color[random.randint(0, len(color) - 1)]}{fill * filledLength}{normal}'
Typer.Print(f'\r{prefix} |{bar}{Color.WHITE}| {Color.GREEN}{percent}% {Color.WHITE}Progress', Refresh=True) if printEnd == '\r' else Typer.Print(f'\r{prefix} |{bar}{Color.WHITE}| {Color.GREEN}{percent}% {Color.WHITE}Progress', Enter=True)
if iteration == total:
bar = f'{Color.GREEN}{fill * filledLength}{normal}'
Typer.Print(f'\r{prefix} |{bar}{Color.WHITE}| {Color.GREEN}{percent}% {Color.WHITE}{suffix}', Refresh=True) if ending == '\r' else Typer.Print(f'\r{prefix} |{bar}{Color.WHITE}| {Color.GREEN}{percent}% {Color.WHITE}{suffix}', Enter=True)
class ProgressWait:
def __init__(self, Random: bool, Start = 0, End = 60) -> None:
if Random is True: items = list(range(0, random.randint(Start, End)))
else: items = list(range(Start, End))
ProgressBar(0, len(items), prefix = 'Please wait:', suffix = 'Complete', length = 25, ending = '\r')
for i, item in enumerate(items):
ProgressBar(i + 1, len(items), prefix = 'Please wait:', suffix = 'Complete', length = 25, ending = '\r')
time.sleep(1)
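The bar's core arithmetic — the percentage label and the number of filled cells — is a two-line computation. A standalone sketch with assumed example values:

```python
def bar_state(iteration, total, length=25, decimals=1):
    # Same formulas ProgressBar uses for the percent label and fill width.
    percent = ("{0:." + str(decimals) + "f}").format(100 * (iteration / float(total)))
    filled = int(length * iteration // total)
    return percent, filled

# 10 of 40 iterations on a 25-cell bar.
percent, filled = bar_state(10, 40)
```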
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | /salRad_distributions-0.1.tar.gz/salRad_distributions-0.1/salRad_distributions/Gaussiandistribution.py | 0.688364 | 0.853058 | Gaussiandistribution.py | pypi |
import argparse
from pprint import pformat
__author__ = "salammzere3"
__copyright__ = "Copyright (c) 2023, salammzere3"
EMOTICONS = [":)", ":D", ":P", ":S", ":(", "=)", "=/", ":/", ":{", ";)"]
EMOJIS = [
"\U0001f600",
"\U0001f603",
"\U0001f604",
"\U0001f601",
"\U0001f605",
"\U0001f923",
"\U0001f602",
"\U0001f609",
"\U0001f60A",
"\U0001f61b",
]
MAX_STR_LEN = 70
def run_argparse():
"""User arguments"""
parser = argparse.ArgumentParser(
description="""
Obfuscate your python script by converting an input script to an output
script that functions the same (hopefully) but encodes the code as emoji
icons, currently emoticons or emojis. -- salammzere3, 2023"""
)
parser.add_argument("-i", "--input", required=True, help="input python script name")
parser.add_argument(
"-o", "--output", required=True, help="output python script name"
)
parser.add_argument(
"-e",
"--emoji",
dest="emoji",
action="store_true",
help="output emojis instead of emoticons",
)
parser.set_defaults(emoji=False)
return parser.parse_args()
def chunk_string(in_s, n):
"""Chunk string to max length of n"""
return "\n".join(
"{}\\".format(in_s[i : i + n]) for i in range(0, len(in_s), n)
).rstrip("\\")
def encode_string(in_s, alphabet):
"""Convert input string to encoded output string with the given alphabet"""
# Note: prior to CPython 3.6, output order may differ due to
# dicts not retaining insertion order
d1 = dict(enumerate(alphabet))
d2 = {v: k for k, v in d1.items()}
return (
'import salamemojify\nexec("".join(map(chr,[int("".join(str({}[i]) for i in x.split())) for x in\n'
'"{}"\n.split(" ")])))\n'.format(
pformat(d2),
chunk_string(
" ".join(" ".join(d1[int(i)] for i in str(ord(c))) for c in in_s),
MAX_STR_LEN,
),
)
)
def main(in_file, out_file, emoji):
"""Read input and write output file"""
if emoji:
alphabet = EMOJIS
else:
alphabet = EMOTICONS
with open(in_file) as in_f, open(out_file, "w") as out_f:
# Assumes it's ok to read the entire input file into memory
out_f.write(encode_string(in_f.read(), alphabet))
print("done {}".format(alphabet[0]))
if __name__ == "__main__":
args = run_argparse()
main(args.input, args.output, args.emoji)

| /salamemojify-1.0.0-py3-none-any.whl/salamemojify-1.0.0.data/scripts/salamemojify.py | 0.452294 | 0.226987 | salamemojify.py | pypi |
import functools
from heapq import nsmallest
from operator import itemgetter
from collections import defaultdict
try:
from collections import Counter
except ImportError:
class Counter(dict):
'Mapping where default values are zero'
def __missing__(self, key):
return 0
def twolvl_iterator(d):
for k, v in d.items():
for kk, vv in v.items():
yield k, kk, vv
def create_cache1lvl(lock_obj):
def cache1lvl(maxsize=100):
"""
modified version of http://code.activestate.com/recipes/498245/
"""
def decorating_function(user_function):
cache = {}
use_count = Counter()
lock = lock_obj()
@functools.wraps(user_function)
def wrapper(key, *args, **kwargs):
try:
result = cache[key]
except KeyError:
with lock:
if len(cache) == maxsize:
for k, _ in nsmallest(maxsize // 10 or 1,
use_count.items(),
key=itemgetter(1)):
del cache[k], use_count[k]
cache[key] = user_function(key, *args, **kwargs)
result = cache[key]
use_count[key] += 1
else:
with lock:
use_count[key] += 1
return result
def clear():
cache.clear()
use_count.clear()
def delete(key):
try:
del cache[key]
del use_count[key]
return True
except KeyError:
return False
wrapper.clear = clear
wrapper.cache = cache
wrapper.delete = delete
return wrapper
return decorating_function
return cache1lvl
def create_cache2lvl(lock_obj):
def cache2lvl(maxsize=100):
"""
modified version of http://code.activestate.com/recipes/498245/
"""
def decorating_function(user_function):
cache = {}
use_count = defaultdict(Counter)
lock = lock_obj()
@functools.wraps(user_function)
def wrapper(*args, **kwargs):
try:
result = cache[args[0]][args[1]]
except KeyError:
with lock:
if wrapper.cache_size == maxsize:
to_delete = maxsize // 10 or 1
for k1, k2, v in nsmallest(to_delete,
twolvl_iterator(
use_count),
key=itemgetter(2)):
del cache[k1][k2], use_count[k1][k2]
if not cache[k1]:
del cache[k1]
del use_count[k1]
wrapper.cache_size -= to_delete
result = user_function(*args, **kwargs)
try:
cache[args[0]][args[1]] = result
except KeyError:
cache[args[0]] = {args[1]: result}
use_count[args[0]][args[1]] += 1
wrapper.cache_size += 1
else:
use_count[args[0]][args[1]] += 1
return result
def clear():
cache.clear()
use_count.clear()
def delete(key, *args):
if args:
try:
del cache[key][args[0]]
del use_count[key][args[0]]
if not cache[key]:
del cache[key]
del use_count[key]
wrapper.cache_size -= 1
return True
except KeyError:
return False
else:
try:
wrapper.cache_size -= len(cache[key])
del cache[key]
del use_count[key]
return True
except KeyError:
return False
wrapper.clear = clear
wrapper.cache = cache
wrapper.delete = delete
wrapper.cache_size = 0
return wrapper
return decorating_function
return cache2lvl

| /salang_saara-0.4.2-py3-none-any.whl/CodernityDB/lfu_cache_with_lock.py | 0.466846 | 0.15084 | lfu_cache_with_lock.py | pypi |
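Both decorators above share the same eviction rule: when the cache is full, drop the least-frequently-used tenth of the keys (at least one), selected by hit count via `heapq.nsmallest`. A standalone sketch of just that selection step:

```python
from heapq import nsmallest
from operator import itemgetter
from collections import Counter

# Sketch of the LFU eviction selection used in the decorators above.
use_count = Counter({'a': 5, 'b': 1, 'c': 3, 'd': 2})
maxsize = 4

# evict ~10% of entries, but always at least one
victims = [k for k, _ in nsmallest(maxsize // 10 or 1,
                                   use_count.items(),
                                   key=itemgetter(1))]
print(victims)  # ['b'] -- the key with the lowest hit count
```

This is why the `/` vs `//` distinction matters in `cache2lvl`: under Python 3, true division would hand `nsmallest` a float and later corrupt the integer `cache_size` bookkeeping.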
import re
import tokenize
import token
import uuid
class IndexCreatorException(Exception):
def __init__(self, ex, line=None):
self.ex = ex
self.line = line
def __str__(self):
if self.line:
return repr(self.ex + "(in line: %d)" % self.line)
return repr(self.ex)
class IndexCreatorFunctionException(IndexCreatorException):
pass
class IndexCreatorValueException(IndexCreatorException):
pass
class Parser(object):
def __init__(self):
pass
def parse(self, data, name=None):
if not name:
self.name = "_" + uuid.uuid4().hex
else:
self.name = name
self.ind = 0
self.stage = 0
self.logic = ['and', 'or', 'in']
self.logic2 = ['&', '|']
self.allowed_props = {'TreeBasedIndex': ['type', 'name', 'key_format', 'node_capacity', 'pointer_format', 'meta_format'],
'HashIndex': ['type', 'name', 'key_format', 'hash_lim', 'entry_line_format'],
'MultiHashIndex': ['type', 'name', 'key_format', 'hash_lim', 'entry_line_format'],
'MultiTreeBasedIndex': ['type', 'name', 'key_format', 'node_capacity', 'pointer_format', 'meta_format']
}
self.funcs = {'md5': (['md5'], ['.digest()']),
'len': (['len'], []),
'str': (['str'], []),
'fix_r': (['self.fix_r'], []),
'prefix': (['self.prefix'], []),
'infix': (['self.infix'], []),
'suffix': (['self.suffix'], [])
}
self.handle_int_imports = {'infix': "from itertools import izip\n"}
self.funcs_with_body = {'fix_r':
(""" def fix_r(self,s,l):
e = len(s)
if e == l:
return s
elif e > l:
return s[:l]
else:
return s.rjust(l,'_')\n""", False),
'prefix':
(""" def prefix(self,s,m,l,f):
t = len(s)
if m < 1:
m = 1
o = set()
if t > l:
s = s[:l]
t = l
while m <= t:
o.add(s.rjust(f,'_'))
s = s[:-1]
t -= 1
return o\n""", False),
'suffix':
(""" def suffix(self,s,m,l,f):
t = len(s)
if m < 1:
m = 1
o = set()
if t > l:
s = s[t-l:]
t = len(s)
while m <= t:
o.add(s.rjust(f,'_'))
s = s[1:]
t -= 1
return o\n""", False),
'infix':
(""" def infix(self,s,m,l,f):
t = len(s)
o = set()
for x in xrange(m - 1, l):
t = (s, )
for y in xrange(0, x):
t += (s[y + 1:],)
o.update(set(''.join(x).rjust(f, '_').lower() for x in izip(*t)))
return o\n""", False)}
self.none = ['None', 'none', 'null']
self.props_assign = ['=', ':']
self.all_adj_num_comp = {token.NUMBER: (
token.NUMBER, token.NAME, '-', '('),
token.NAME: (token.NUMBER, token.NAME, '-', '('),
')': (token.NUMBER, token.NAME, '-', '(')
}
self.all_adj_num_op = {token.NUMBER: (token.NUMBER, token.NAME, '('),
token.NAME: (token.NUMBER, token.NAME, '('),
')': (token.NUMBER, token.NAME, '(')
}
self.allowed_adjacent = {
"<=": self.all_adj_num_comp,
">=": self.all_adj_num_comp,
">": self.all_adj_num_comp,
"<": self.all_adj_num_comp,
"==": {token.NUMBER: (token.NUMBER, token.NAME, '('),
token.NAME: (token.NUMBER, token.NAME, token.STRING, '('),
token.STRING: (token.NAME, token.STRING, '('),
')': (token.NUMBER, token.NAME, token.STRING, '('),
']': (token.NUMBER, token.NAME, token.STRING, '(')
},
"+": {token.NUMBER: (token.NUMBER, token.NAME, '('),
token.NAME: (token.NUMBER, token.NAME, token.STRING, '('),
token.STRING: (token.NAME, token.STRING, '('),
')': (token.NUMBER, token.NAME, token.STRING, '('),
']': (token.NUMBER, token.NAME, token.STRING, '(')
},
"-": {token.NUMBER: (token.NUMBER, token.NAME, '('),
token.NAME: (token.NUMBER, token.NAME, '('),
')': (token.NUMBER, token.NAME, '('),
'<': (token.NUMBER, token.NAME, '('),
'>': (token.NUMBER, token.NAME, '('),
'<=': (token.NUMBER, token.NAME, '('),
'>=': (token.NUMBER, token.NAME, '('),
'==': (token.NUMBER, token.NAME, '('),
']': (token.NUMBER, token.NAME, '(')
},
"*": self.all_adj_num_op,
"/": self.all_adj_num_op,
"%": self.all_adj_num_op,
",": {token.NUMBER: (token.NUMBER, token.NAME, token.STRING, '{', '[', '('),
token.NAME: (token.NUMBER, token.NAME, token.STRING, '(', '{', '['),
token.STRING: (token.NAME, token.STRING, token.NUMBER, '(', '{', '['),
')': (token.NUMBER, token.NAME, token.STRING, '(', '{', '['),
']': (token.NUMBER, token.NAME, token.STRING, '(', '{', '['),
'}': (token.NUMBER, token.NAME, token.STRING, '(', '{', '[')
}
}
def is_num(s):
m = re.search('[^0-9*()+\-\s/]+', s)
return not m
def is_string(s):
m = re.search('\s*(?P<a>[\'\"]+).*?(?P=a)\s*', s)
return m
data = re.split('make_key_value\:', data)
if len(data) < 2:
raise IndexCreatorFunctionException(
"Couldn't find a definition of make_key_value function!\n")
spl1 = re.split('make_key\:', data[0])
spl2 = re.split('make_key\:', data[1])
self.funcs_rev = False
if len(spl1) > 1:
data = [spl1[0]] + [data[1]] + [spl1[1]]
self.funcs_rev = True
elif len(spl2) > 1:
data = [data[0]] + spl2
else:
data.append("key")
if data[1] == re.search('\s*', data[1], re.S | re.M).group(0):
raise IndexCreatorFunctionException("Empty function body ",
len(re.split('\n', data[0])) + (len(re.split('\n', data[2])) if self.funcs_rev else 1) - 1)
if data[2] == re.search('\s*', data[2], re.S | re.M).group(0):
raise IndexCreatorFunctionException("Empty function body ",
len(re.split('\n', data[0])) + (1 if self.funcs_rev else len(re.split('\n', data[1]))) - 1)
if data[0] == re.search('\s*', data[0], re.S | re.M).group(0):
raise IndexCreatorValueException("You didn't set any properity or you set them not at the begining of the code\n")
data = [re.split(
'\n', data[0]), re.split('\n', data[1]), re.split('\n', data[2])]
self.cnt_lines = (len(data[0]), len(data[1]), len(data[2]))
ind = 0
self.predata = data
self.data = [[], [], []]
for i, v in enumerate(self.predata[0]):
for k, w in enumerate(self.predata[0][i]):
if self.predata[0][i][k] in self.props_assign:
if not is_num(self.predata[0][i][k + 1:]) and self.predata[0][i].strip()[:4] != 'type' and self.predata[0][i].strip()[:4] != 'name':
s = self.predata[0][i][k + 1:]
self.predata[0][i] = self.predata[0][i][:k + 1]
m = re.search('\s+', s.strip())
if not is_string(s) and not m:
s = "'" + s.strip() + "'"
self.predata[0][i] += s
break
for n, i in enumerate(self.predata):
for k in i:
k = k.strip()
if k:
self.data[ind].append(k)
self.check_enclosures(k, n)
ind += 1
return self.parse_ex()
def readline(self, stage):
def foo():
if len(self.data[stage]) <= self.ind:
self.ind = 0
return ""
else:
self.ind += 1
return self.data[stage][self.ind - 1]
return foo
def add(self, l, i):
def add_aux(*args):
# print args,self.ind
if len(l[i]) < self.ind:
l[i].append([])
l[i][self.ind - 1].append(args)
return add_aux
def parse_ex(self):
self.index_name = ""
self.index_type = ""
self.curLine = -1
self.con = -1
self.brackets = -1
self.curFunc = None
self.colons = 0
self.line_cons = ([], [], [])
self.pre_tokens = ([], [], [])
self.known_dicts_in_mkv = []
self.prop_name = True
self.prop_assign = False
self.is_one_arg_enough = False
self.funcs_stack = []
self.last_line = [-1, -1, -1]
self.props_set = []
self.custom_header = set()
self.tokens = []
self.tokens_head = ['# %s\n' % self.name, 'class %s(' % self.name, '):\n', ' def __init__(self, *args, **kwargs): ']
for i in xrange(3):
tokenize.tokenize(self.readline(i), self.add(self.pre_tokens, i))
# tokenize treats some keywords not in the right way, that's why we
# have to change some of them
for nk, k in enumerate(self.pre_tokens[i]):
for na, a in enumerate(k):
if a[0] == token.NAME and a[1] in self.logic:
self.pre_tokens[i][nk][
na] = (token.OP, a[1], a[2], a[3], a[4])
for i in self.pre_tokens[1]:
self.line_cons[1].append(self.check_colons(i, 1))
self.check_adjacents(i, 1)
if self.check_for_2nd_arg(i) == -1 and not self.is_one_arg_enough:
raise IndexCreatorValueException("No 2nd value to return (did u forget about ',None'?", self.cnt_line_nr(i[0][4], 1))
self.is_one_arg_enough = False
for i in self.pre_tokens[2]:
self.line_cons[2].append(self.check_colons(i, 2))
self.check_adjacents(i, 2)
for i in self.pre_tokens[0]:
self.handle_prop_line(i)
self.cur_brackets = 0
self.tokens += ['\n super(%s, self).__init__(*args, **kwargs)\n def make_key_value(self, data): ' % self.name]
for i in self.pre_tokens[1]:
for k in i:
self.handle_make_value(*k)
self.curLine = -1
self.con = -1
self.cur_brackets = 0
self.tokens += ['\n def make_key(self, key):']
for i in self.pre_tokens[2]:
for k in i:
self.handle_make_key(*k)
if self.index_type == "":
raise IndexCreatorValueException("Missing index type definition\n")
if self.index_name == "":
raise IndexCreatorValueException("Missing index name\n")
self.tokens_head[0] = "# " + self.index_name + "\n" + \
self.tokens_head[0]
for i in self.funcs_with_body:
if self.funcs_with_body[i][1]:
self.tokens_head.insert(4, self.funcs_with_body[i][0])
if None in self.custom_header:
self.custom_header.remove(None)
if self.custom_header:
s = ' custom_header = """'
for i in self.custom_header:
s += i
s += '"""\n'
self.tokens_head.insert(4, s)
if self.index_type in self.allowed_props:
for i in self.props_set:
if i not in self.allowed_props[self.index_type]:
raise IndexCreatorValueException("Properity %s is not allowed for index type: %s" % (i, self.index_type))
# print "".join(self.tokens_head)
# print "----------"
# print (" ".join(self.tokens))
return "".join(self.custom_header), "".join(self.tokens_head) + (" ".join(self.tokens))
# has to be run BEFORE tokenize
def check_enclosures(self, d, st):
encs = []
contr = {'(': ')', '{': '}', '[': ']', "'": "'", '"': '"'}
ends = [')', '}', ']', "'", '"']
for i in d:
if len(encs) > 0 and encs[-1] in ['"', "'"]:
if encs[-1] == i:
del encs[-1]
elif i in contr:
encs += [i]
elif i in ends:
if len(encs) < 1 or contr[encs[-1]] != i:
raise IndexCreatorValueException("Missing opening enclosure for \'%s\'" % i, self.cnt_line_nr(d, st))
del encs[-1]
if len(encs) > 0:
raise IndexCreatorValueException("Missing closing enclosure for \'%s\'" % encs[0], self.cnt_line_nr(d, st))
def check_adjacents(self, d, st):
def std_check(d, n):
if n == 0:
prev = -1
else:
prev = d[n - 1][1] if d[n - 1][0] == token.OP else d[n - 1][0]
cur = d[n][1] if d[n][0] == token.OP else d[n][0]
# there always is an endmarker at the end, but this is a precaution
if n + 2 > len(d):
nex = -1
else:
nex = d[n + 1][1] if d[n + 1][0] == token.OP else d[n + 1][0]
if prev not in self.allowed_adjacent[cur]:
raise IndexCreatorValueException("Wrong left value of the %s" % cur, self.cnt_line_nr(line, st))
# there is an assumption that the whole data always ends with a 0 marker; the idea probably needs a rewrite to allow more whitespace
# between tokens, so it will be handled anyway
elif nex not in self.allowed_adjacent[cur][prev]:
raise IndexCreatorValueException("Wrong right value of the %s" % cur, self.cnt_line_nr(line, st))
for n, (t, i, _, _, line) in enumerate(d):
if t == token.NAME or t == token.STRING:
if n + 1 < len(d) and d[n + 1][0] in [token.NAME, token.STRING]:
raise IndexCreatorValueException("Did you forget about an operator in between?", self.cnt_line_nr(line, st))
elif i in self.allowed_adjacent:
std_check(d, n)
def check_colons(self, d, st):
cnt = 0
br = 0
def check_ret_args_nr(a, s):
c_b_cnt = 0
s_b_cnt = 0
n_b_cnt = 0
comas_cnt = 0
for _, i, _, _, line in a:
if c_b_cnt == n_b_cnt == s_b_cnt == 0:
if i == ',':
comas_cnt += 1
if (s == 1 and comas_cnt > 1) or (s == 2 and comas_cnt > 0):
raise IndexCreatorFunctionException("Too much arguments to return", self.cnt_line_nr(line, st))
if s == 0 and comas_cnt > 0:
raise IndexCreatorValueException("A coma here doesn't make any sense", self.cnt_line_nr(line, st))
elif i == ':':
if s == 0:
raise IndexCreatorValueException("A colon here doesn't make any sense", self.cnt_line_nr(line, st))
raise IndexCreatorFunctionException("Two colons don't make any sense", self.cnt_line_nr(line, st))
if i == '{':
c_b_cnt += 1
elif i == '}':
c_b_cnt -= 1
elif i == '(':
n_b_cnt += 1
elif i == ')':
n_b_cnt -= 1
elif i == '[':
s_b_cnt += 1
elif i == ']':
s_b_cnt -= 1
def check_if_empty(a):
for i in a:
if i not in [token.NEWLINE, token.INDENT, token.ENDMARKER]:
return False
return True
if st == 0:
check_ret_args_nr(d, st)
return
for n, i in enumerate(d):
if i[1] == ':':
if br == 0:
if len(d) < n or check_if_empty(d[n + 1:]):
raise IndexCreatorValueException(
"Empty return value", self.cnt_line_nr(i[4], st))
elif len(d) >= n:
check_ret_args_nr(d[n + 1:], st)
return cnt
else:
cnt += 1
elif i[1] == '{':
br += 1
elif i[1] == '}':
br -= 1
check_ret_args_nr(d, st)
return -1
def check_for_2nd_arg(self, d):
c_b_cnt = 0 # curly brackets counter '{}'
s_b_cnt = 0 # square brackets counter '[]'
n_b_cnt = 0 # normal brackets counter '()'
def check_2nd_arg(d, ind):
d = d[ind[0]:]
for t, i, (n, r), _, line in d:
if i == '{' or i is None:
return 0
elif t == token.NAME:
self.known_dicts_in_mkv.append((i, (n, r)))
return 0
elif t == token.STRING or t == token.NUMBER:
raise IndexCreatorValueException("Second return value of make_key_value function has to be a dictionary!", self.cnt_line_nr(line, 1))
for ind in enumerate(d):
t, i, _, _, _ = ind[1]
if s_b_cnt == n_b_cnt == c_b_cnt == 0:
if i == ',':
return check_2nd_arg(d, ind)
elif (t == token.NAME and i not in self.funcs) or i == '{':
self.is_one_arg_enough = True
if i == '{':
c_b_cnt += 1
self.is_one_arg_enough = True
elif i == '}':
c_b_cnt -= 1
elif i == '(':
n_b_cnt += 1
elif i == ')':
n_b_cnt -= 1
elif i == '[':
s_b_cnt += 1
elif i == ']':
s_b_cnt -= 1
return -1
def cnt_line_nr(self, l, stage):
nr = -1
for n, i in enumerate(self.predata[stage]):
# print i,"|||",i.strip(),"|||",l
if l == i.strip():
nr = n
if nr == -1:
return -1
if stage == 0:
return nr + 1
elif stage == 1:
return nr + self.cnt_lines[0] + (self.cnt_lines[2] - 1 if self.funcs_rev else 0)
elif stage == 2:
return nr + self.cnt_lines[0] + (self.cnt_lines[1] - 1 if not self.funcs_rev else 0)
return -1
def handle_prop_line(self, d):
d_len = len(d)
if d[d_len - 1][0] == token.ENDMARKER:
d_len -= 1
if d_len < 3:
raise IndexCreatorValueException("Can't handle properity assingment ", self.cnt_line_nr(d[0][4], 0))
if not d[1][1] in self.props_assign:
raise IndexCreatorValueException(
"Did you forget : or =?", self.cnt_line_nr(d[0][4], 0))
if d[0][0] == token.NAME or d[0][0] == token.STRING:
if d[0][1] in self.props_set:
raise IndexCreatorValueException("Properity %s is set more than once" % d[0][1], self.cnt_line_nr(d[0][4], 0))
self.props_set += [d[0][1]]
if d[0][1] == "type" or d[0][1] == "name":
t, tk, _, _, line = d[2]
if d_len > 3:
raise IndexCreatorValueException(
"Wrong value to assign", self.cnt_line_nr(line, 0))
if t == token.STRING:
m = re.search('\s*(?P<a>[\'\"]+)(.*?)(?P=a)\s*', tk)
if m:
tk = m.groups()[1]
elif t != token.NAME:
raise IndexCreatorValueException(
"Wrong value to assign", self.cnt_line_nr(line, 0))
if d[0][1] == "type":
if d[2][1] == "TreeBasedIndex":
self.custom_header.add("from CodernityDB.tree_index import TreeBasedIndex\n")
elif d[2][1] == "MultiTreeBasedIndex":
self.custom_header.add("from CodernityDB.tree_index import MultiTreeBasedIndex\n")
elif d[2][1] == "MultiHashIndex":
self.custom_header.add("from CodernityDB.hash_index import MultiHashIndex\n")
self.tokens_head.insert(2, tk)
self.index_type = tk
else:
self.index_name = tk
return
else:
self.tokens += ['\n kwargs["' + d[0][1] + '"]']
else:
raise IndexCreatorValueException("Can't handle properity assingment ", self.cnt_line_nr(d[0][4], 0))
self.tokens += ['=']
self.check_adjacents(d[2:], 0)
self.check_colons(d[2:], 0)
for i in d[2:]:
self.tokens += [i[1]]
def generate_func(self, t, tk, pos_start, pos_end, line, hdata, stage):
if self.last_line[stage] != -1 and pos_start[0] > self.last_line[stage] and line != '':
raise IndexCreatorFunctionException("This line will never be executed!", self.cnt_line_nr(line, stage))
if t == 0:
return
if pos_start[1] == 0:
if self.line_cons[stage][pos_start[0] - 1] == -1:
self.tokens += ['\n return']
self.last_line[stage] = pos_start[0]
else:
self.tokens += ['\n if']
elif tk == ':' and self.line_cons[stage][pos_start[0] - 1] > -1:
if self.line_cons[stage][pos_start[0] - 1] == 0:
self.tokens += [':\n return']
return
self.line_cons[stage][pos_start[0] - 1] -= 1
if tk in self.logic2:
# print tk
if line[pos_start[1] - 1] != tk and line[pos_start[1] + 1] != tk:
self.tokens += [tk]
if line[pos_start[1] - 1] != tk and line[pos_start[1] + 1] == tk:
if tk == '&':
self.tokens += ['and']
else:
self.tokens += ['or']
return
if self.brackets != 0:
def search_through_known_dicts(a):
for i, (n, r) in self.known_dicts_in_mkv:
if i == tk and r > pos_start[1] and n == pos_start[0] and hdata == 'data':
return True
return False
if t == token.NAME and len(self.funcs_stack) > 0 and self.funcs_stack[-1][0] == 'md5' and search_through_known_dicts(tk):
raise IndexCreatorValueException("Second value returned by make_key_value for sure isn't a dictionary ", self.cnt_line_nr(line, 1))
if tk == ')':
self.cur_brackets -= 1
if len(self.funcs_stack) > 0 and self.cur_brackets == self.funcs_stack[-1][1]:
self.tokens += [tk]
self.tokens += self.funcs[self.funcs_stack[-1][0]][1]
del self.funcs_stack[-1]
return
if tk == '(':
self.cur_brackets += 1
if tk in self.none:
self.tokens += ['None']
return
if t == token.NAME and tk not in self.logic and tk != hdata:
if tk not in self.funcs:
self.tokens += [hdata + '["' + tk + '"]']
else:
self.tokens += self.funcs[tk][0]
if tk in self.funcs_with_body:
self.funcs_with_body[tk] = (
self.funcs_with_body[tk][0], True)
self.custom_header.add(self.handle_int_imports.get(tk))
self.funcs_stack += [(tk, self.cur_brackets)]
else:
self.tokens += [tk]
def handle_make_value(self, t, tk, pos_start, pos_end, line):
self.generate_func(t, tk, pos_start, pos_end, line, 'data', 1)
def handle_make_key(self, t, tk, pos_start, pos_end, line):
self.generate_func(t, tk, pos_start, pos_end, line, 'key', 2)

| /salang_saara-0.4.2-py3-none-any.whl/CodernityDB/indexcreator.py | 0.414899 | 0.166981 | indexcreator.py | pypi |
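The `Parser.parse` method above begins by slicing its input DSL on the `make_key_value:` and `make_key:` markers. A standalone sketch of that first step, against a hypothetical minimal index definition (properties first, then the two function bodies):

```python
import re

# Illustrative reproduction of the first splitting step in Parser.parse above.
# The index definition below is a made-up example of the expected DSL.
src = """name = MyIndex
type = HashIndex
key_format = 16s
make_key_value:
md5(a), None
make_key:
md5(key)
"""

props, rest = re.split(r'make_key_value\:', src)
mkv_body, mk_body = re.split(r'make_key\:', rest)
print(mkv_body.strip())  # md5(a), None
```

If `make_key:` appears before `make_key_value:` instead, the parser sets `funcs_rev` and reorders the pieces; if it is absent entirely, `make_key` defaults to returning the key unchanged (`data.append("key")`).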
from salary_stone.salary_extractor import Salary_Extractor
def recommend(skill_vec, data, model: Salary_Extractor, extracted_scol:str):
"""
The purpose of this method is to provide the recommended skills and predicted percentage increase in salary
that could be expected if that skill were to be included.
:param skill_vec: is the vector of skills that a person currently has.
:param data: is the pandas dataframe of the data that we are using to make the comparison.
:param model: is a salary extractor model that we will be using to make the recommendations.
:param extracted_scol: is a column consisting of a list of skills for the row in the dataframe.
:returns: a list of the skill names and a list of the expected return percentages.
"""
salarye = model
# make a list of the unique extracted skills
list_extracted_skills = []
for skill in data[extracted_scol]:
for i in skill:
list_extracted_skills.append(i)
unique_list_extracted_skills = list(set(list_extracted_skills))
#len(unique_list_extracted_skills)
# find skills and percentage of how much they will help
skill_list = []
percentage_prediction = []
for index in range(len(unique_list_extracted_skills)):
if index == 0:
prediction = salarye.extract_salary(' '.join(skill_vec))
# print(prediction)
if prediction[1] == 'inf':
base_mean = prediction[0]
else:
base_mean = (prediction[0] + prediction[1]) / 2
if unique_list_extracted_skills[index] not in skill_vec:
skill_vec.extend([unique_list_extracted_skills[index]])
# find the prediction from random forest model
prediction = salarye.extract_salary(' '.join(skill_vec))
# print(unique_list_extracted_skills[index])
# print(prediction)
try:
if prediction[1] == 'inf':
mean = prediction[0]
else:
mean = (prediction[0] + prediction[1]) / 2
percentage_prediction.append((mean - base_mean)/ base_mean)
skill_list.append(unique_list_extracted_skills[index])
except Exception:
continue
skill_vec = skill_vec[:-1]
percentage_prediction, skill_name = zip(*sorted(zip(percentage_prediction, skill_list), reverse = True))
return skill_name, percentage_prediction

| /salary_stone-0.4.0.tar.gz/salary_stone-0.4.0/salary_stone/skill_recommender.py | 0.532182 | 0.751648 | skill_recommender.py | pypi |
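Two steps of `recommend` above can be shown in isolation: flattening the extracted-skills column into a unique list, and computing the percentage uplift from a salary range. This sketch uses plain lists and made-up numbers in place of the dataframe column and the model's predictions:

```python
# Standalone sketch of the skill-deduplication and uplift steps above
# (hypothetical inputs; the real code reads a dataframe column and a model).
rows = [['python', 'sql'], ['sql', 'excel']]

# 1. flatten and deduplicate the extracted-skills column
unique_skills = sorted({s for row in rows for s in row})
print(unique_skills)  # ['excel', 'python', 'sql']

# 2. percentage uplift: midpoint of the new range vs. the base range
base_mean = (50_000 + 70_000) / 2   # base prediction midpoint
new_mean = (55_000 + 77_000) / 2    # midpoint with the candidate skill added
uplift = (new_mean - base_mean) / base_mean
print(round(uplift, 2))  # 0.1
```

The function then zips the uplifts with the skill names and sorts descending, so callers get the most salary-relevant skills first.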
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('saleboxdjango', '0021_auto_20210814_1825'),
]
operations = [
migrations.AddField(
model_name='attribute',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='attribute',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='attributeitem',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='attributeitem',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='contentpage',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='contentpage',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='contentpageitem',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='contentpageitem',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='country',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='country',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='countrystate',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='countrystate',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='discountgroup',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='discountgroup',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='discountruleset',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='discountruleset',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='member',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='member',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='membergroup',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='membergroup',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='product',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='product',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='productcategory',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='productcategory',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='productvariant',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='productvariant',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='productvariantimage',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='productvariantimage',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='translation',
name='created',
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name='translation',
name='last_update',
field=models.DateTimeField(blank=True, null=True),
),
]

| /salebox-django-0.0.243.tar.gz/salebox-django-0.0.243/saleboxdjango/migrations/0022_auto_20210814_1828.py | 0.729616 | 0.190159 | 0022_auto_20210814_1828.py | pypi |
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('saleboxdjango', '0013_analytic'),
]
operations = [
migrations.AddField(
model_name='analytic',
name='language',
field=models.CharField(blank=True, max_length=10, null=True),
),
migrations.AddField(
model_name='analytic',
name='screen_height',
field=models.IntegerField(blank=True, null=True),
),
migrations.AddField(
model_name='analytic',
name='screen_width',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='analytic',
name='ua_browser_family',
field=models.CharField(blank=True, max_length=32, null=True, verbose_name='Browser family'),
),
migrations.AlterField(
model_name='analytic',
name='ua_browser_version',
field=models.CharField(blank=True, max_length=20, null=True, verbose_name='Browser version'),
),
migrations.AlterField(
model_name='analytic',
name='ua_device_brand',
field=models.CharField(blank=True, max_length=20, null=True, verbose_name='Device brand'),
),
migrations.AlterField(
model_name='analytic',
name='ua_device_family',
field=models.CharField(blank=True, max_length=32, null=True, verbose_name='Device family'),
),
migrations.AlterField(
model_name='analytic',
name='ua_device_model',
field=models.CharField(blank=True, max_length=20, null=True, verbose_name='Device model'),
),
migrations.AlterField(
model_name='analytic',
name='ua_is_bot',
field=models.BooleanField(null=True, verbose_name='Is bot?'),
),
migrations.AlterField(
model_name='analytic',
name='ua_is_mobile',
field=models.BooleanField(null=True, verbose_name='Is mobile?'),
),
migrations.AlterField(
model_name='analytic',
name='ua_is_pc',
field=models.BooleanField(null=True, verbose_name='Is PC?'),
),
migrations.AlterField(
model_name='analytic',
name='ua_is_tablet',
field=models.BooleanField(null=True, verbose_name='Is tablet?'),
),
migrations.AlterField(
model_name='analytic',
name='ua_is_touch_capable',
field=models.BooleanField(null=True, verbose_name='Is touchscreen?'),
),
migrations.AlterField(
model_name='analytic',
name='ua_os_family',
field=models.CharField(blank=True, max_length=20, null=True, verbose_name='OS family'),
),
migrations.AlterField(
model_name='analytic',
name='ua_os_version',
field=models.CharField(blank=True, max_length=20, null=True, verbose_name='OS version'),
),
]

| /salebox-django-0.0.243.tar.gz/salebox-django-0.0.243/saleboxdjango/migrations/0014_auto_20191216_1334.py | 0.703753 | 0.159905 | 0014_auto_20191216_1334.py | pypi |
from django.conf import settings
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
import django.db.models.deletion
import mptt.fields
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Attribute',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('code', models.CharField(max_length=20)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
],
options={
'verbose_name': 'Product Attribute',
'ordering': ['code'],
},
),
migrations.CreateModel(
name='AttributeItem',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('value', models.CharField(max_length=100)),
('slug', models.CharField(blank=True, db_index=True, max_length=100)),
('created', models.DateTimeField(auto_now_add=True)),
('delete_dt', models.DateTimeField(blank=True, null=True)),
('last_update', models.DateTimeField(auto_now=True)),
('attribute', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.Attribute')),
],
options={
'verbose_name': 'Product Attribute Item',
'ordering': ['value'],
},
),
migrations.CreateModel(
name='BasketWishlist',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('session', models.CharField(blank=True, max_length=32, null=True)),
('basket_flag', models.BooleanField(default=True)),
('quantity', models.IntegerField(default=1)),
('weight', models.IntegerField(default=0)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('user', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='CallbackStore',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('ip_address', models.GenericIPAddressField()),
('method', models.CharField(max_length=7)),
('post', django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True)),
('created', models.DateTimeField(auto_now_add=True)),
],
),
migrations.CreateModel(
name='CheckoutStore',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('uuid', models.UUIDField(db_index=True)),
('visible_id', models.CharField(max_length=14, unique=True)),
('user', models.IntegerField(blank=True, null=True)),
('gateway_code', models.CharField(max_length=12)),
('status', models.IntegerField(choices=[(10, 'New: Pending send to gateway'), (20, 'Pending: Awaiting gateway response'), (30, 'Success: Pending POST to Salebox'), (31, 'Success: Successfully POSTed to Salebox'), (40, 'Rejected: Gateway rejected payment'), (50, 'Timeout: Gateway did not respond in an acceptable time period')])),
('data', django.contrib.postgres.fields.jsonb.JSONField()),
('created', models.DateTimeField(auto_now_add=True)),
('last_updated', models.DateTimeField(auto_now_add=True)),
],
),
migrations.CreateModel(
name='CheckoutStoreUpdate',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('status', models.IntegerField(choices=[(10, 'New: Pending send to gateway'), (20, 'Pending: Awaiting gateway response'), (30, 'Success: Pending POST to Salebox'), (31, 'Success: Successfully POSTed to Salebox'), (40, 'Rejected: Gateway rejected payment'), (50, 'Timeout: Gateway did not respond in an acceptable time period')])),
('data', django.contrib.postgres.fields.jsonb.JSONField()),
('created', models.DateTimeField(auto_now_add=True)),
('store', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.CheckoutStore')),
],
),
migrations.CreateModel(
name='Country',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('code_2', models.CharField(blank=True, max_length=2, null=True)),
('code_3', models.CharField(blank=True, max_length=3, null=True)),
('name', models.CharField(blank=True, max_length=50, null=True)),
('default', models.BooleanField(default=False)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
],
options={
'verbose_name_plural': 'Countries',
'ordering': ['name'],
},
),
migrations.CreateModel(
name='CountryState',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('code_2', models.CharField(blank=True, max_length=2, null=True)),
('name', models.CharField(max_length=50)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('country', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.Country')),
],
options={
'verbose_name_plural': 'Country States',
'ordering': ['name'],
},
),
migrations.CreateModel(
name='CountryStateTranslation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('language', models.CharField(max_length=7)),
('value', models.TextField(blank=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_updated', models.DateTimeField(auto_now=True)),
('state', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.CountryState')),
],
options={
'verbose_name': 'Country State Translations',
'ordering': ['language', 'state'],
},
),
migrations.CreateModel(
name='CountryTranslation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('language', models.CharField(max_length=7)),
('value', models.TextField(blank=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_updated', models.DateTimeField(auto_now=True)),
('country', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.Country')),
],
options={
'verbose_name': 'Country Translations',
'ordering': ['language', 'country'],
},
),
migrations.CreateModel(
name='DiscountGroup',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=25)),
('group_type', models.CharField(choices=[('S', 'Seasonal'), ('M', 'Manual')], default='M', max_length=1)),
('operational_flag', models.BooleanField(default=False)),
('active_flag', models.BooleanField(default=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
],
options={
'verbose_name': 'Discount Group',
},
),
migrations.CreateModel(
name='DiscountRuleset',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=25)),
('discount_type', models.CharField(choices=[('flat_percent', 'Flat Percentage')], default='flat_percent', max_length=12)),
('value', models.IntegerField(blank=True, null=True)),
('start_dt', models.DateTimeField(blank=True, null=True)),
('end_dt', models.DateTimeField(blank=True, null=True)),
('operational_flag', models.BooleanField(default=True)),
('active_flag', models.BooleanField(default=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('group', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.DiscountGroup')),
],
options={
'verbose_name': 'Discount Ruleset',
},
),
migrations.CreateModel(
name='Event',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('event', models.CharField(max_length=50)),
('transaction_guid', models.CharField(blank=True, max_length=20, null=True)),
('salebox_member_id', models.UUIDField(blank=True, default=uuid.uuid4, null=True)),
('processed_flag', models.BooleanField(default=False)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
],
options={
'ordering': ['-created'],
},
),
migrations.CreateModel(
name='LastUpdate',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('code', models.CharField(max_length=36)),
('value', models.FloatField(default=0.0)),
],
options={
'verbose_name': 'Last Update',
'ordering': ['code'],
},
),
migrations.CreateModel(
name='Member',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('guid', models.CharField(db_index=True, max_length=25)),
('salebox_member_id', models.UUIDField(blank=True, db_index=True, null=True)),
('gender', models.CharField(blank=True, choices=[('M', 'Male'), ('F', 'Female'), ('U', 'Unspecified')], max_length=1, null=True)),
('title', models.IntegerField(blank=True, choices=[(1, 'Mr'), (2, 'Mrs'), (3, 'Miss')], null=True)),
('name_first', models.CharField(blank=True, max_length=20, null=True)),
('name_last', models.CharField(blank=True, max_length=30, null=True)),
('date_of_birth', models.DateField(blank=True, null=True)),
('address_1', models.CharField(blank=True, max_length=255, null=True)),
('address_2', models.CharField(blank=True, max_length=255, null=True)),
('address_3', models.CharField(blank=True, max_length=255, null=True)),
('address_4', models.CharField(blank=True, max_length=255, null=True)),
('address_5', models.CharField(blank=True, max_length=255, null=True)),
('postcode', models.CharField(blank=True, max_length=12, null=True)),
('phone_1', models.CharField(blank=True, max_length=20, null=True)),
('phone_2', models.CharField(blank=True, max_length=20, null=True)),
('email', models.EmailField(blank=True, max_length=254, null=True)),
('id_card', models.CharField(blank=True, max_length=20, null=True)),
('status', models.CharField(choices=[('A', 'Active'), ('R', 'Resigned'), ('S', 'Suspended'), ('T', 'Terminated')], default='A', max_length=1)),
('active_flag', models.BooleanField(default=True)),
('join_date', models.DateField(blank=True, null=True)),
('string_1', models.CharField(blank=True, max_length=255, null=True)),
('string_2', models.CharField(blank=True, max_length=255, null=True)),
('string_3', models.CharField(blank=True, max_length=255, null=True)),
('string_4', models.CharField(blank=True, max_length=255, null=True)),
('string_5', models.CharField(blank=True, max_length=255, null=True)),
('string_6', models.CharField(blank=True, max_length=255, null=True)),
('string_7', models.CharField(blank=True, max_length=255, null=True)),
('string_8', models.CharField(blank=True, max_length=255, null=True)),
('string_9', models.CharField(blank=True, max_length=255, null=True)),
('string_10', models.CharField(blank=True, max_length=255, null=True)),
('string_11', models.CharField(blank=True, max_length=255, null=True)),
('string_12', models.CharField(blank=True, max_length=255, null=True)),
('string_13', models.CharField(blank=True, max_length=255, null=True)),
('string_14', models.CharField(blank=True, max_length=255, null=True)),
('boolean_1', models.BooleanField(default=False)),
('boolean_2', models.BooleanField(default=False)),
('boolean_3', models.BooleanField(default=False)),
('boolean_4', models.BooleanField(default=False)),
('boolean_5', models.BooleanField(default=False)),
('boolean_6', models.BooleanField(default=False)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('country', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.Country')),
('country_state', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.CountryState')),
],
options={
'verbose_name': 'Member',
'ordering': ['guid'],
},
),
migrations.CreateModel(
name='MemberGroup',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('flat_discount_percentage', models.FloatField(default=0)),
('can_be_parent', models.BooleanField(default=True)),
('default_group', models.BooleanField(default=False)),
('active_flag', models.BooleanField(default=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
],
options={
'verbose_name': 'Member Group',
'ordering': ['name'],
},
),
migrations.CreateModel(
name='Product',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=150)),
('string_1', models.CharField(blank=True, max_length=150, null=True)),
('string_2', models.CharField(blank=True, max_length=150, null=True)),
('string_3', models.CharField(blank=True, max_length=150, null=True)),
('string_4', models.CharField(blank=True, max_length=150, null=True)),
('sold_by', models.CharField(choices=[('item', 'item'), ('weight', 'weight')], default='item', max_length=6)),
('vat_applicable', models.BooleanField(default=True)),
('image', models.CharField(blank=True, max_length=70, null=True)),
('local_image', models.CharField(blank=True, max_length=25, null=True)),
('inventory_flag', models.BooleanField(default=True)),
('slug', models.CharField(blank=True, db_index=True, max_length=100, null=True)),
('bestseller_rank', models.IntegerField(default=0)),
('active_flag', models.BooleanField(default=True)),
('i18n', django.contrib.postgres.fields.jsonb.JSONField(default=dict)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('rating_score', models.IntegerField(default=0)),
('rating_vote_count', models.IntegerField(default=0)),
('attribute_1', models.ManyToManyField(blank=True, related_name='product_attr_1', to='saleboxdjango.AttributeItem')),
('attribute_10', models.ManyToManyField(blank=True, related_name='product_attr_10', to='saleboxdjango.AttributeItem')),
('attribute_2', models.ManyToManyField(blank=True, related_name='product_attr_2', to='saleboxdjango.AttributeItem')),
('attribute_3', models.ManyToManyField(blank=True, related_name='product_attr_3', to='saleboxdjango.AttributeItem')),
('attribute_4', models.ManyToManyField(blank=True, related_name='product_attr_4', to='saleboxdjango.AttributeItem')),
('attribute_5', models.ManyToManyField(blank=True, related_name='product_attr_5', to='saleboxdjango.AttributeItem')),
('attribute_6', models.ManyToManyField(blank=True, related_name='product_attr_6', to='saleboxdjango.AttributeItem')),
('attribute_7', models.ManyToManyField(blank=True, related_name='product_attr_7', to='saleboxdjango.AttributeItem')),
('attribute_8', models.ManyToManyField(blank=True, related_name='product_attr_8', to='saleboxdjango.AttributeItem')),
('attribute_9', models.ManyToManyField(blank=True, related_name='product_attr_9', to='saleboxdjango.AttributeItem')),
],
options={
'verbose_name': 'Product',
'ordering': ['name'],
},
),
migrations.CreateModel(
name='ProductCategory',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('short_name', models.CharField(max_length=30)),
('name', models.CharField(max_length=100)),
('image', models.CharField(blank=True, max_length=70, null=True)),
('local_image', models.CharField(blank=True, max_length=25, null=True)),
('seasonal_flag', models.BooleanField(default=False)),
('slug', models.CharField(blank=True, db_index=True, max_length=100, null=True)),
('slug_path', models.CharField(blank=True, db_index=True, max_length=255, null=True)),
('active_flag', models.BooleanField(default=True)),
('i18n', django.contrib.postgres.fields.jsonb.JSONField(default=dict)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('lft', models.PositiveIntegerField(db_index=True, editable=False)),
('rght', models.PositiveIntegerField(db_index=True, editable=False)),
('tree_id', models.PositiveIntegerField(db_index=True, editable=False)),
('level', models.PositiveIntegerField(db_index=True, editable=False)),
('parent', mptt.fields.TreeForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='children', to='saleboxdjango.ProductCategory')),
],
options={
'verbose_name': 'Product Category',
'verbose_name_plural': 'Product Categories',
'ordering': ['name'],
},
),
migrations.CreateModel(
name='ProductVariant',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, default='', max_length=150)),
('bo_name', models.CharField(blank=True, default='', max_length=200)),
('plu', models.CharField(blank=True, default='', max_length=25)),
('sku', models.CharField(blank=True, default='', max_length=25)),
('color', models.CharField(blank=True, default='', max_length=50)),
('size', models.CharField(blank=True, default='', max_length=20)),
('size_order', models.FloatField(default=0)),
('size_uom', models.CharField(blank=True, choices=[('', 'n/a'), ('g', 'g'), ('kg', 'kg'), ('ml', 'ml')], default='', max_length=2)),
('price', models.IntegerField(null=True)),
('sale_price', models.IntegerField(default=0)),
('sale_percent', models.IntegerField(default=0)),
('barcode', models.CharField(blank=True, default='', max_length=50)),
('available_to_order', models.BooleanField(default=True)),
('available_on_pos', models.BooleanField(default=True)),
('available_on_ecom', models.BooleanField(default=True)),
('shelf_expiry_type', models.CharField(default='manual', max_length=12)),
('shelf_life_days', models.IntegerField(blank=True, null=True)),
('slug', models.CharField(blank=True, db_index=True, max_length=150, null=True)),
('image', models.CharField(blank=True, max_length=70, null=True)),
('local_image', models.CharField(blank=True, max_length=20, null=True)),
('unique_string', models.CharField(blank=True, max_length=255)),
('shipping_weight', models.IntegerField(blank=True, null=True)),
('loyalty_points', models.FloatField(blank=True, null=True)),
('member_discount_applicable', models.BooleanField(default=True)),
('string_1', models.CharField(blank=True, max_length=150, null=True)),
('string_2', models.CharField(blank=True, max_length=150, null=True)),
('string_3', models.CharField(blank=True, max_length=150, null=True)),
('string_4', models.CharField(blank=True, max_length=150, null=True)),
('warehouse_location', models.CharField(blank=True, max_length=50, null=True)),
('int_1', models.IntegerField(blank=True, null=True)),
('int_2', models.IntegerField(blank=True, null=True)),
('int_3', models.IntegerField(blank=True, null=True)),
('int_4', models.IntegerField(blank=True, null=True)),
('date_1', models.DateField(blank=True, null=True)),
('date_2', models.DateField(blank=True, null=True)),
('active_flag', models.BooleanField(default=True)),
('ecommerce_description', models.TextField(blank=True, null=True)),
('ecommerce_low_stock_threshold', models.IntegerField(default=10)),
('bestseller_rank', models.IntegerField(default=0)),
('default_image', models.CharField(blank=True, max_length=35, null=True)),
('i18n', django.contrib.postgres.fields.jsonb.JSONField(default=dict)),
('stock_count', models.IntegerField(default=0)),
('stock_checked_out', models.IntegerField(default=0)),
('stock_total', models.IntegerField(default=0)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('rating_score', models.IntegerField(default=0)),
('rating_vote_count', models.IntegerField(default=0)),
('name_sorted', models.IntegerField(db_index=True, default=0)),
('attribute_1', models.ManyToManyField(blank=True, related_name='variant_attr_1', to='saleboxdjango.AttributeItem')),
('attribute_10', models.ManyToManyField(blank=True, related_name='variant_attr_10', to='saleboxdjango.AttributeItem')),
('attribute_2', models.ManyToManyField(blank=True, related_name='variant_attr_2', to='saleboxdjango.AttributeItem')),
('attribute_3', models.ManyToManyField(blank=True, related_name='variant_attr_3', to='saleboxdjango.AttributeItem')),
('attribute_4', models.ManyToManyField(blank=True, related_name='variant_attr_4', to='saleboxdjango.AttributeItem')),
('attribute_5', models.ManyToManyField(blank=True, related_name='variant_attr_5', to='saleboxdjango.AttributeItem')),
('attribute_6', models.ManyToManyField(blank=True, related_name='variant_attr_6', to='saleboxdjango.AttributeItem')),
('attribute_7', models.ManyToManyField(blank=True, related_name='variant_attr_7', to='saleboxdjango.AttributeItem')),
('attribute_8', models.ManyToManyField(blank=True, related_name='variant_attr_8', to='saleboxdjango.AttributeItem')),
('attribute_9', models.ManyToManyField(blank=True, related_name='variant_attr_9', to='saleboxdjango.AttributeItem')),
('product', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.Product')),
],
options={
'verbose_name': 'Product Variant',
'ordering': ['id'],
},
),
migrations.CreateModel(
name='ProductVariantImage',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('img', models.CharField(default='', max_length=100)),
('local_img', models.CharField(blank=True, max_length=25, null=True)),
('img_height', models.IntegerField(default=0)),
('img_width', models.IntegerField(default=0)),
('title', models.CharField(blank=True, max_length=150, null=True)),
('description', models.CharField(blank=True, max_length=255, null=True)),
('order', models.IntegerField(blank=True, null=True)),
('active_flag', models.BooleanField(default=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('variant', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.ProductVariant')),
],
options={
'ordering': ('order',),
},
),
migrations.CreateModel(
name='ProductVariantRating',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('rating', models.IntegerField(default=50)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
('variant', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.ProductVariant')),
],
options={
'verbose_name': 'Product Variant Rating',
},
),
migrations.CreateModel(
name='UserAddress',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('default', models.BooleanField(default=False)),
('address_group', models.CharField(default='default', max_length=10)),
('full_name', models.CharField(max_length=150)),
('address_1', models.CharField(max_length=150)),
('address_2', models.CharField(blank=True, max_length=150, null=True)),
('address_3', models.CharField(blank=True, max_length=150, null=True)),
('address_4', models.CharField(blank=True, max_length=150, null=True)),
('address_5', models.CharField(blank=True, max_length=150, null=True)),
('postcode', models.CharField(blank=True, max_length=12, null=True)),
('phone_1', models.CharField(blank=True, max_length=20, null=True)),
('phone_2', models.CharField(blank=True, max_length=20, null=True)),
('email', models.EmailField(blank=True, max_length=254, null=True)),
('string_1', models.CharField(blank=True, max_length=255, null=True)),
('string_2', models.CharField(blank=True, max_length=255, null=True)),
('tax_id', models.CharField(blank=True, max_length=36, null=True)),
('active_flag', models.BooleanField(default=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('country', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.Country')),
('country_state', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.CountryState')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name': 'User Address',
'ordering': ['-default', 'full_name', 'address_1'],
},
),
migrations.AddField(
model_name='product',
name='category',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.ProductCategory'),
),
migrations.AddField(
model_name='member',
name='group',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.MemberGroup'),
),
migrations.AddField(
model_name='member',
name='group_when_created',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='group_when_created', to='saleboxdjango.MemberGroup'),
),
migrations.AddField(
model_name='member',
name='parent',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.Member'),
),
migrations.AddField(
model_name='discountruleset',
name='product_variant',
field=models.ManyToManyField(blank=True, to='saleboxdjango.ProductVariant'),
),
migrations.AddField(
model_name='basketwishlist',
name='variant',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.ProductVariant'),
),
migrations.AddIndex(
model_name='productvariant',
index=models.Index(fields=['bestseller_rank', 'name_sorted'], name='saleboxdjan_bestsel_5ad3e6_idx'),
),
migrations.AddIndex(
model_name='productvariant',
index=models.Index(fields=['-bestseller_rank', 'name_sorted'], name='saleboxdjan_bestsel_e36854_idx'),
),
migrations.AddIndex(
model_name='productvariant',
index=models.Index(fields=['sale_price', 'name_sorted'], name='saleboxdjan_sale_pr_1681f7_idx'),
),
migrations.AddIndex(
model_name='productvariant',
index=models.Index(fields=['-sale_price', 'name_sorted'], name='saleboxdjan_sale_pr_90497c_idx'),
),
migrations.AddIndex(
model_name='productvariant',
index=models.Index(fields=['rating_score', 'name_sorted'], name='saleboxdjan_rating__c1dad4_idx'),
),
migrations.AddIndex(
model_name='productvariant',
index=models.Index(fields=['-rating_score', 'name_sorted'], name='saleboxdjan_rating__4d1500_idx'),
),
    ]
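The `CheckoutStore.status` choices above describe the payment lifecycle from a new order through gateway response to the final POST back to Salebox. A standalone sketch of the same code-to-label mapping — values copied verbatim from the migration — can be handy for logging outside the ORM:

```python
# Status codes copied from the CheckoutStore.status choices in the migration.
CHECKOUT_STATUS = {
    10: 'New: Pending send to gateway',
    20: 'Pending: Awaiting gateway response',
    30: 'Success: Pending POST to Salebox',
    31: 'Success: Successfully POSTed to Salebox',
    40: 'Rejected: Gateway rejected payment',
    50: 'Timeout: Gateway did not respond in an acceptable time period',
}


def checkout_status_label(code):
    """Return the human-readable label for a CheckoutStore status code."""
    return CHECKOUT_STATUS.get(code, 'Unknown')
```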
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):
dependencies = [
('saleboxdjango', '0019_auto_20210807_0644'),
]
operations = [
migrations.CreateModel(
name='ContentPage',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('page_type', models.CharField(max_length=30)),
('slug', models.SlugField()),
('combined_slug', models.CharField(max_length=255)),
('depth', models.IntegerField(default=0)),
('order', models.IntegerField(default=0)),
('is_locked', models.BooleanField(default=False)),
('is_active', models.BooleanField(default=False)),
('meta_data', models.JSONField(blank=True, default=dict)),
('parent_ids', models.CharField(blank=True, max_length=255, null=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('parent', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.contentpage')),
],
),
migrations.CreateModel(
name='KeyValueStore',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('key', models.CharField(db_index=True, max_length=32, unique=True)),
('value', django.contrib.postgres.fields.jsonb.JSONField(default=dict)),
('created', models.DateTimeField(auto_now_add=True)),
('last_updated', models.DateTimeField(auto_now=True)),
],
),
migrations.CreateModel(
name='SyncLog',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('status', models.CharField(choices=[('INFO', 'INFO'), ('WARNING', 'WARNING'), ('ERROR', 'ERROR')], default='INFO', max_length=7)),
('function_name', models.CharField(max_length=30)),
('message', models.CharField(max_length=255)),
('created', models.DateTimeField(auto_now_add=True)),
],
options={
'ordering': ('created',),
},
),
migrations.CreateModel(
name='SyncQueue',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('model_id', models.CharField(max_length=36)),
('model_name', models.CharField(max_length=50)),
('created', models.DateTimeField(auto_now_add=True)),
],
options={
'ordering': ('created',),
},
),
migrations.CreateModel(
name='ContentPageItem',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('item_type', models.CharField(max_length=30)),
('order', models.IntegerField(default=0)),
('is_locked', models.BooleanField(default=False)),
('is_active', models.BooleanField(default=True)),
('content', models.JSONField(blank=True, default=dict)),
('created', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
('page', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='saleboxdjango.contentpage')),
],
),
    ]
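`ContentPage` stores both a per-page `slug` and a derived `combined_slug` plus a `depth`. How those derived values are computed is not shown in this migration; a plausible sketch, assuming `combined_slug` joins ancestor slugs with `/` and `depth` counts ancestors from the root:

```python
def build_combined_slug(slugs):
    """Join a root-to-leaf list of slugs into a combined slug.

    Assumption: combined_slug is the '/'-joined path of ancestor slugs.
    """
    return "/".join(slugs)


def page_depth(slugs):
    """Depth of a page given its root-to-leaf slug list (root page = 0)."""
    return len(slugs) - 1
```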
from django.conf import settings
from django.utils.translation import get_language
from python2c2p.redirectapi import twoctwop_redirectapi
from saleboxdjango.lib.basket import SaleboxBasket
from saleboxdjango.views.checkout.gateway import SaleboxCheckoutGatewayView


class SaleboxProviders2C2PGatewayView(SaleboxCheckoutGatewayView):
gateway_code = "2c2p"

    def gateway(self, request, *args, **kwargs):
context = self.get_context_data()
store = self.sc.save_to_store(request.user, gateway_code=self.gateway_code)
# override start
data = self.sc.get_raw_data()
        # get the html to POST to the gateway
context["html"] = self.gateway_init(data, request, store)
# reset basket TODO - should the basket be reset on first callback?
basket = SaleboxBasket(request)
basket.reset_basket(request)
# render
return self.render_to_response(context)

    def gateway_init(self, data, request, store):
"""
Optionally override this method if there are different
payment parameters you wish to send
"""
tctp = twoctwop_redirectapi(
settings.TWOCTWOP["MERCHANT_ID"],
settings.TWOCTWOP["SECRET_KEY"],
settings.TWOCTWOP["GATEWAY_URL"],
)
tctp.set_value("payment_option", "A")
# total price
total_price = data["basket"]["sale_price"] + data["shipping_method"]["price"]
# default values
tctp.set_value("amount", total_price)
tctp.set_value("currency", "THB")
tctp.set_value("customer_email", request.user.email)
tctp.set_value("default_lang", get_language())
tctp.set_value("order_id", store.visible_id)
tctp.set_value("payment_description", settings.TWOCTWOP["PAYMENT_DESCRIPTION"])
tctp.set_value(
"result_url_1",
settings.TWOCTWOP["CALLBACK_URL_FRONTEND"],
)
tctp.set_value(
"result_url_2",
settings.TWOCTWOP["CALLBACK_URL_BACKEND"],
)
tctp.set_value("user_defined_1", store.uuid)
        return tctp.request()
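`gateway_init` above computes the amount sent to 2C2P as the basket's sale price plus the shipping price. That total can be pulled out as a standalone helper, assuming the `data` dict shape produced by `SaleboxCheckout.get_raw_data()` as used in the method:

```python
def order_total(data):
    """Total charged to the gateway: basket sale price plus shipping.

    Mirrors the total_price computation in gateway_init; the dict shape
    is assumed from how `data` is indexed there.
    """
    return data["basket"]["sale_price"] + data["shipping_method"]["price"]
```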
import json
from datetime import datetime
import requests
from bs4 import BeautifulSoup


def parse_report(url):
"""
Parse the report's HTML into a Report object.
Parameters
-------------
url: :class:`str`
The report URL
Raises
-------------
ValueError
Parsing failed (Invalid ID / Glitched report)
Returns
-------------
:class:`Report`
The Report object.
"""
request = requests.get(url).text
if request in ["Could not find any reports with that ID.", "No report file found."]:
raise ValueError("Report not found.")
soup = BeautifulSoup(request, "lxml")
data = {}
judgement = soup.find("div", id="splash")
if not judgement:
judgement = "Open"
elif judgement.text == "This report has been closed without judgement.":
judgement = "Closed"
elif judgement.text == "This report has been deemed guilty.":
judgement = "Guilty"
elif judgement.text == "This report has been deemed innocent.":
judgement = "Innocent"
data['judgement'] = judgement
data['id'] = soup.find("span", class_="reportId").text
data['user'] = soup.find("span", class_="reportedPlayer").text
data['reason'] = soup.find("span", class_="reportReason").text
data['ranked'] = bool("Ranked Game." in soup.find(
"span", class_="notice").text)
data['details'] = str(soup.find("span", class_='reportDescription')).split(
'<span class="reportDescription">')[1].split('</span>')[0].split("<br/>")
data['details'] = [] if data['details'] == [''] else data['details']
data['soup'] = soup
try:
    return Report(data)
except Exception as exc:
    raise ValueError("There was an error processing this report.") from exc
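The judgement detection in `parse_report` maps fixed splash strings to a status label. The same mapping can be expressed as a lookup table (an illustrative sketch; the strings are copied from the parser above):

```python
# Splash text -> judgement label, as matched in parse_report.
JUDGEMENTS = {
    "This report has been closed without judgement.": "Closed",
    "This report has been deemed guilty.": "Guilty",
    "This report has been deemed innocent.": "Innocent",
}

def judgement_label(splash_text):
    # Fall back to "Open" for a missing or unrecognised splash element.
    return JUDGEMENTS.get(splash_text, "Open")

print(judgement_label("This report has been deemed guilty."))
```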
def _get_player(name, players):
data = {}
data['all_players'] = players
players = json.loads(players)["players"]
# Fix bugged usernames
if "(" in name:
name = name.split("(")[0]
# Check if 'name' is a username
if name in [x["username"] for x in players]:
data['name'] = name
data['type'] = "username"
return Player(data)
# Check if 'name' is an IGN
for player in players:
if player["ign"] == name:
data["type"] = "ign"
data['name'] = player["ign"]
return Player(data)
# Otherwise, the report is bugged.
# This is extremely rare (occurs in about 0.3% of reports).
# Returns None to avoid an error.
return None
class Report:
"""
Represents a report parsed through parse_report(url).
Attributes
-------------
id: :class:`int`
The report ID.
reported: :class:`Player`
The reported player.
Can return None if the reported player was not provided in the report.
reason: :class:`str`
The reason for the report. e.g Gamethrowing, Cheating, Spamming.
details: [:class:`str`]
Reasons given by the reporters.
is_ranked: :class:`bool`
Returns True if the report is for a ranked game, otherwise False.
judgement: :class:`str`
The report's judgement. e.g Guilty, Innocent, Open, Closed.
content: [:class:`Event`]
The report's events. e.g messages, deaths, trials etc.
dt: :class:`int`
The time in seconds between the epoch and the time the report was submitted.
winner: :class:`str`
The faction that won the game. e.g "Mafia", "Town"
Returns "Stalemate" for draws.
"""
def __init__(self, data):
self.winner = None
soup = data['soup']
content = list(soup.find("div", id="reportContent").find_all("span"))
players = str(soup.find_all("script")).split(
'data =')[1].split("}]};")[0] + "}]}"
date = soup.find("span", class_="reportDate").text
if date[-8:-7] == " ":
date = date[:14] + "0" + date[14:]
date = int(datetime.strptime(
date, "%b. %d, %Y %I:%M %p").timestamp())
game_over = False
# Convert elements from Soup to string
content = [str(message) for message in content]
# Remove all messages before day 1
content = content[content.index(
'<span class="time day">Day 1</span>'):]
for message in content[::]:
# Remove wills and death notes
if '<span class="note"' in message:
content.remove(message)
# Remove messages from dead people
elif 'dead">' in message or 'dead" title="' in message:
content.remove(message)
# Remove "decided to execute" messages. We will instead use the player's death message.
elif '<span class="notice' in message and " decided to execute " in message:
content.remove(message)
# Remove "has died" messages. We will instead use "was attacked by" messages to find deaths.
elif '<span class="notice' in message and 'death"' in message:
if not ' has been lynched.</span>' in message:
content.remove(message)
# Remove "End of Report" messages.
elif '<span class="end">End of Report</span>' in message:
content.remove(message)
# Remove "vampires have bitten" messages. We will instead use "was converted to" messages to find vampire bites.
elif '<span class="Vampire vampire" title="">*Vampires have bit ' in message:
content.remove(message)
# Remove "has forged the will" notices
elif '<span class="notice"' in message and ' has forged the will.</span>' in message:
content.remove(message)
# Remove glitched messages
elif '<span class="" title=""></span>' in message or '<span class="seance" title=""></span>' in message:
content.remove(message)
elif '<span class="notice' in message and 'jail" title="' in message:
content.remove(message)
else:
try:
class_ = message.split(
'<span class="')[1].split('" title="">')[0].replace(" ", "")
txt = message.split('" title="">')[
1].split("</span>")[0].replace(" ", "")
if not class_ in ['notice', 'time day', 'time night', 'stage'] and class_ in txt:
content.remove(message)
continue
except (IndexError, ValueError):
pass
if '<span class="notice' in message and 'Witch control"' in message:
if not 'made' in message or not 'target' in message:
content.remove(message)
new_list = []
for message in content:
if '<span class="stage">GameOver</span>' in message:
game_over = True
continue
if 'class="notice"' in message:
# Find if the message contains the winner
if "has won.</span>" in message:
winner = message.split('">')[
1].split(" has won.</span>")[0]
self.winner = winner
break
if '">Stalemate.</span>' in message:
self.winner = "Stalemate"
break
if '>Draw.</span>' in message:
self.winner = "Draw"
break
# Break at the end of the report
if game_over:
self.winner = None
break
elif game_over or '<span class="end">End of Report</span>' in message:
self.winner = None
break
event_data = {}
event_data['msg'] = message
event_data['players'] = players
new_list.append(Event(event_data))
data['content'] = new_list
self.id = int(data["id"])
self.reported = _get_player(data['user'], str(soup.find_all(
"script")).split('data =')[1].split("}]};")[0] + "}]}")
self.reason = data["reason"]
self.details = data["details"]
self.is_ranked = data['ranked']
self.judgement = data['judgement']
self.content = data['content']
self.dt = date
def __repr__(self):
return str(self.id)
class Event:
"""
Represents an event that occurred (messages, deaths, etc.).
Attributes
-------------
day: :class:`int`
The day that has begun.
Returns None if the event was not a new day.
night: :class:`int`
The night that has begun.
Returns None if the event was not a new night.
message: :class:`str`
The message sent.
Returns None if the event was not a message or whisper.
is_jail: :class:`bool`
Returns True if the message was sent in jail, otherwise False.
Returns None if the event was not a message.
is_mafia: :class:`bool`
Returns True if the message was sent in mafia chat, otherwise False.
Returns None if the event was not a message.
author: :class:`Player`
The player who sent the message/whisper.
Returns None if the event was not a message or whisper.
type: :class:`str`
The type of event, e.g "Message", "Death", "Investigation", "Sheriff" etc.
killed: :class:`Player`
The player who was killed.
Returns None if the event was not a death.
killer: :class:`str`
The role/faction that killed the player. e.g "Mafia", "Veteran", "Jailor"
Returns "Guilt" when a vigilante shoots themselves.
Returns "Guarding" when a bodyguard dies protecting someone.
Returns "Lynch" if the player was lynched.
Returns "Heartbreak" if the player died from heartbreak.
Returns None if the event was not a death.
visitor: :class:`Player`
The player who visited.
Can also be a string representing the visitor's faction, e.g vampire bites will return "Vampire"
Returns None if the event was not an ability.
visited: :class:`Player`
The player who was visited.
Returns None if no player was visited.
recipient: :class:`Player`
The player who the whisper was sent to.
Returns None if the event was not a whisper.
leaver: :class:`Player`
The player who left the game.
Returns None if the event was not a player leaving the game.
voter: :class:`Player`
The player who submitted the vote.
Returns None if the event was not a vote.
verdict: :class:`str`
The verdict of the voter. (Abstain, Guilty, Innocent)
Returns None if the event was not a vote.
revived: :class:`Player`
The player who was revived.
Returns None if the event was not a revive.
witched: :class:`Player`
The player who was witched.
Returns None if the event was not a witch.
witch_target: :class:`Player`
The target who the player was witched into.
Returns None if the event was not a witch.
witcher: :class:`str`
The role that witched the player. Always returns "Witch", "CovenLeader" or "Necromancer"
Returns None if the event was not a witch.
amne: :class:`Player`
The amnesiac who remembered they were like a role.
Returns None if the event was not a remember.
remembered: :class:`str`
The role that the amnesiac remembered they were like.
Returns None if the event was not a remember.
converted: :class:`Player`
The player who was converted to the vampires.
Returns None if the event was not a conversion.
transported: [:class:`Player`]
The two players who were transported.
Returns None if the event was not a transport.
revealer: :class:`Player`
The mayor who revealed.
Returns None if the event was not a reveal.
"""
def __init__(self, data):
message = data['msg']
all_players = data['players']
self.day = None
self.night = None
self.message = None
self.is_jail = None
self.is_mafia = None
self.author = None
self.type = None
self.killed = None
self.killer = None
self.visitor = None
self.visited = None
self.recipient = None
self.leaver = None
self.voter = None
self.verdict = None
self.revived = None
self.witched = None
self.witch_target = None
self.witcher = None
self.amne = None
self.remembered = None
self.converted = None
self.transported = None
self.revealer = None
broken_roles = ["Potion Master"]
if '<span class="time night">Night' in message:
self.type = "Night"
self.night = int(message.split(
"</span>")[0].split('<span class="time night">Night ')[1])
elif '<span class="time day">Day' in message:
self.type = "Day"
self.day = int(message.split("</span>")
[0].split('<span class="time day">Day ')[1])
elif '<span class="stage">Defense</span>' in message:
self.type = "Defense"
elif '<span class="stage">Judgement</span>' in message:
self.type = "Judgement"
elif " investigated " in message and ('<span class="notice Investigator"' in message or ('<span class="notice ' in message and 'Investigator" title="' in message)):
self.type = "Investigation"
visitor = message.split(">")[1].split(" investigated ")[0]
try:
self.visitor = _get_player(visitor, all_players)
except ValueError:
visitor = visitor.replace(
" ", "") if visitor in broken_roles else visitor
self.visitor = _get_player(visitor, all_players)
self.visited = _get_player(message.split(
".</span>")[0].split(" investigated ")[1], all_players)
elif ('<span class="notice Sheriff"' in message or ('<span class="notice ' in message and 'Sheriff" title="' in message)) and ' checked ' in message:
self.type = "Sheriff"
visited = message.split(".</span>")[0].split(" checked ")[1]
visitor = message.split(">")[1].split(" checked ")[0]
self.visitor = _get_player(visitor, all_players)
self.visited = _get_player(visited, all_players)
elif 'whisper" title="' in message or 'whisper">' in message:
self.type = "Whisper"
author = message.split(' ">')[0].split(" ")[-2]
recipient = message.split(' ">')[0].split(" ")[-1]
try:
self.author = _get_player(author, all_players)
except ValueError:
author = message.split(' ">')[0].split(
" ")[-2] + " " + message.split(' ">')[0].split(" ")[-1]
self.author = _get_player(author, all_players)
try:
self.recipient = _get_player(recipient, all_players)
except ValueError:
try:
recipient = message.split(' ">')[0].split(
" ")[-2] + " " + message.split(' ">')[0].split(" ")[-1]
self.recipient = _get_player(
recipient, all_players)
except ValueError:
recipient = message.split(
'">')[1].split(" to ")[1].split(": ")[0]
self.recipient = _get_player(
recipient, all_players)
try:
self.message = message.split(
":", 1)[1].split("</span>")[0].strip()
except IndexError:
self.message = ""
elif '<span class="notice"' in message and "attacked by" in message:
self.type = "Death"
if " was attacked by" in message:
killed = message.split('">')[1].split(" was attacked by")[0]
self.killed = _get_player(killed, all_players)
else:
self.killed = _get_player(message.split(
'">')[1].split(" attacked by")[0], all_players)
self.killer = message.split(".</span>")[0].split(" ")[-1]
elif '<span class="notice"' in message and ' was ignited by an Arsonist.</span>' in message:
self.type = "Death"
self.killer = "Arsonist"
killed = message.split('">')[1].split(
" was ignited by an Arsonist.</span>")[0]
self.killed = _get_player(killed, all_players)
elif '<span class="notice"' in message and "visited a VampireHunter.</span>" in message:
self.type = "Death"
self.killer = "VampireHunter"
killed = message.split('">')[1].split(
" visited a VampireHunter.</span>")[0]
self.killed = _get_player(killed, all_players)
elif '<span class="notice"' in message and " was staked by a VampireHunter.</span>" in message:
self.type = "Death"
self.killer = "VampireHunter"
killed = message.split('">')[1].split(
" was staked by a VampireHunter.</span>")[0]
self.killed = _get_player(killed, all_players)
elif '<span class="notice"' in message and "died guarding someone.</span>" in message:
self.type = "Death"
self.killer = "Guarding"
killed = message.split('">')[1].split(
" died guarding someone.</span>")[0]
self.killed = _get_player(killed, all_players)
elif '<span class="notice"' in message and " died from guilt over shooting a Town member.</span>" in message:
self.type = "Death"
self.killer = "Guilt"
killed = message.split('">')[1].split(
" died from guilt over shooting a Town member.</span>")[0]
self.killed = _get_player(killed, all_players)
elif '<span class="notice"' in message and "visited a SerialKiller.</span>" in message:
self.type = "Death"
self.killer = "SerialKiller"
killed = message.split('">')[1].split(
" visited a SerialKiller.</span>")[0]
self.killed = _get_player(killed, all_players)
elif '<span class="notice' in message and ' has been lynched.</span>' in message:
self.type = "Death"
self.killer = "Lynch"
killed = message.split('">')[1].split(
" has been lynched.</span>")[0]
try:
self.killed = _get_player(killed, all_players)
except ValueError:
killed = killed.replace(
" ", "") if killed in broken_roles else killed
self.killed = _get_player(killed, all_players)
elif '<span class="notice"' in message and ' died from heartbreak.</span>' in message:
self.type = "Death"
self.killer = "Heartbreak"
killed = message.split('">')[1].split(
" died from heartbreak.</span>")[0]
self.killed = _get_player(killed, all_players)
elif '<span class="notice"' in message and "has left the game.</span>" in message:
self.type = "Quit"
player = message.split('">')[1].split(
" has left the game.</span>")[0]
self.leaver = _get_player(player, all_players)
elif '<span class="notice"' in message and ('voted guilty.</span>' in message or 'voted innocent.</span>' in message or 'abstained.</span>' in message):
self.type = "Vote"
verdict = message.split('.</span>')[0].split(" ")[-1]
voter = message.split('">')[1]
if verdict == "abstained":
    self.verdict = "Abstain"
    self.voter = _get_player(voter.split(" abstained.</span>")[0], all_players)
if verdict == "guilty":
    self.verdict = "Guilty"
    self.voter = _get_player(voter.split(" voted guilty.</span>")[0], all_players)
if verdict == "innocent":
    self.verdict = "Innocent"
    self.voter = _get_player(voter.split(" voted innocent.</span>")[0], all_players)
elif '<span class="notice' in message and 'has been resurrected.</span>' in message:
self.type = "Revive"
revived = message.split(
" has been resurrected.</span>")[0].split('">')[1]
self.revived = _get_player(revived, all_players)
elif '<span class="notice' in message and 'Witch control"' in message:
self.type = "Witch"
self.witcher = message.split('">')[1].split(" ")[0]
msg = message.split(f'">{self.witcher} made ')[
1].split(" target ")
try:
self.witched = _get_player(msg[0], all_players)
except ValueError:
try:
msg = message.split(f'">{self.witcher} made ')[
1].rsplit(" target ")
self.witched = _get_player(msg[0], all_players)
except ValueError:
msg[0] = msg[0].replace(
" ", "") if msg[0] in broken_roles else msg[0]
self.witched = _get_player(msg[0], all_players)
self.witch_target = _get_player(
msg[-1].split(".</span>")[0], all_players)
elif '<span class="notice"' in message and " has remembered they were " in message:
self.type = "Remember"
self.remembered = message.split(".</span>")[0].split(" ")[-1]
amne = message.split('">')[1].split(
" has remembered they were ")[0]
self.amne = _get_player(amne, all_players)
elif '<span class="notice Vampire convert"' in message:
self.type = "Conversion"
converted = message.split('">')[1].split(
" was converted from being ")[0]
self.converted = _get_player(converted, all_players)
elif '<span class="notice' in message and '">Transporter swapped ' in message:
self.type = "Transport"
msg = message.split('">Transporter swapped ')[1].split(" with ")
transported1 = _get_player(msg[0], all_players)
try:
transported2 = _get_player(
msg[1].split(".</span>")[0], all_players)
except ValueError:
msg = message.split('">Transporter swapped ')[
1].rsplit(" with ")
transported2 = _get_player(
msg[0].split(".</span>")[0], all_players)
self.transported = [transported1, transported2]
elif '<span class="notice"' in message and "has revealed themselves as the Mayor.</span>" in message:
self.type = "Reveal"
revealer = message.split('">')[1].split(
" has revealed themselves as the Mayor.</span>")[0]
self.revealer = _get_player(revealer, all_players)
else:
self.type = "Message"
self.is_mafia = bool('mafia">' in message)
self.is_jail = bool('jail">' in message)
try:
name = message.split('<span class="')[1].split(" ")[0]
except IndexError:
name = ""
try:
author = _get_player(name, all_players)
except ValueError:
name = message.split('<span class="')[1].split(
" ")[0] + " " + message.split('<span class="')[1].split(" ")[1]
author = _get_player(name, all_players)
try:
details = message.split(": ")[1]
details = details.split("</span>")[0]
except IndexError:
details = ""
self.author = author
self.message = details
def __repr__(self):
return self.type
class Player:
"""
Represents a player in the game.
Attributes
-------------
name: :class:`str`
The player's username
nick: :class:`str`
The player's in game name
slot: :class:`int`
The player's slot (between 1 and 15)
role: :class:`str`
The player's role, e.g Mafioso
faction: :class:`str`
The player's faction, e.g Mafia
alignment: :class:`str`
The player's alignment, e.g Mafia Killing
"""
def __init__(self, data):
name = data['name']
category = data['type']
all_players = data['all_players']
all_players = json.loads(all_players)["players"]
for player in all_players:
if player[category] == name:
info = player
break
role_info = _find_faction(info["role"])
self.name = info["username"]
self.role = info["role"]
self.slot = int(info["slot"])
self.nick = info["ign"]
self.faction = role_info["faction"]
self.alignment = role_info["alignment"]
def __repr__(self):
return self.nick
def _find_faction(role):
    """Return the faction and alignment info for a given role."""
    alignments = {
        "Town Protective": ("BodyGuard", "Doctor", "Crusader", "Trapper"),
        "Town Support": ("Escort", "Mayor", "Medium", "Retributionist", "Transporter"),
        "Town Investigative": ("Investigator", "Lookout", "Sheriff", "Spy", "Tracker", "Psychic"),
        "Town Killing": ("Jailor", "VampireHunter", "Veteran", "Vigilante"),
        "Mafia Support": ("Blackmailer", "Consigliere", "Consort", "Ambusher"),
        "Mafia Deception": ("Disguiser", "Forger", "Framer", "Janitor", "Hypnotist"),
        "Mafia Killing": ("Godfather", "Mafioso"),
        "Neutral Benign": ("Amnesiac", "Guardian Angel", "Survivor"),
        "Neutral Killing": ("Arsonist", "Juggernaut", "SerialKiller", "Werewolf"),
        "Neutral Evil": ("Executioner", "Jester", "Witch"),
        "Neutral Chaos": ("Pirate", "Plaguebearer", "Vampire"),
        "Coven": ("Coven Leader", "Potion Master", "HexMaster", "Necromancer", "Poisoner", "Medusa"),
    }
    for alignment, roles in alignments.items():
        if role in roles:
            # The faction is the first word of the alignment (e.g. "Town Killing" -> "Town").
            return {"faction": alignment.split(" ")[0], "alignment": alignment}
    raise ValueError("Unknown role: {}".format(role))
.. _gis:
Map transformations
===================
Most of the georeferencing machinery for gridded datasets is
handled by the :py:class:`~salem.Grid` class: its capacity to handle
gridded datasets in a painless manner was one of the primary
motivations to develop Salem.
Grids
-----
A point on earth can be defined unambiguously in two ways:
DATUM (lon, lat, datum)
longitudes and latitudes are angular coordinates of a point on an
ellipsoid (often called "datum")
PROJ (x, y, projection)
x (eastings) and y (northings) are cartesian coordinates of a point in a
map projection (the unit of x, y is usually meter)
Salem adds a third coordinate reference system (**crs**) to this list:
GRID (i, j, Grid)
on a structured grid, the (x, y) coordinates are distant of a
constant (dx, dy) step. The (x, y) coordinates are therefore equivalent
to a new reference frame (i, j) proportional to the projection's (x, y)
frame.
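The (i, j) to (x, y) relation described above is a simple affine mapping. A minimal sketch in plain Python (independent of Salem; the parameter names are illustrative):

```python
def grid_to_proj(i, j, x0, y0, dx, dy):
    """Convert grid indices (i, j) to projection coordinates (x, y)."""
    return x0 + i * dx, y0 + j * dy

def proj_to_grid(x, y, x0, y0, dx, dy):
    """Convert projection coordinates back to (fractional) grid indices."""
    return (x - x0) / dx, (y - y0) / dy

# A grid anchored at (0.5, 0.5) with a 1-unit spacing:
print(grid_to_proj(2, 1, 0.5, 0.5, 1, 1))
```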
Transformations between datums and projections are handled by several tools
in the Python ecosystem, for example `GDAL`_ or the more lightweight
`pyproj`_, which is the tool used by Salem internally [#]_.
The concept of Grid added by Salem is useful when transforming data between
two structured datasets, or from an unstructured dataset to a structured one.
.. _GDAL: https://pypi.python.org/pypi/GDAL/
.. _pyproj: https://jswhit.github.io/pyproj/
.. [#] Most datasets nowadays are defined in the WGS 84 datum, therefore the
concepts of datum and projection are often interchangeable:
(lon, lat) coordinates are equivalent to cartesian (x, y) coordinates
in the plate carree projection.
A :py:class:`~salem.Grid` is defined by a projection, a reference point in
this projection, a grid spacing and a number of grid points:
.. ipython:: python
import numpy as np
import salem
from salem import wgs84
grid = salem.Grid(nxny=(3, 2), dxdy=(1, 1), x0y0=(0.5, 0.5), proj=wgs84)
x, y = grid.xy_coordinates
x
y
The default is to define the grids according to the pixels center point:
.. ipython:: python
smap = salem.Map(grid)
smap.set_data(np.arange(6).reshape((2, 3)))
lon, lat = grid.ll_coordinates
smap.set_points(lon.flatten(), lat.flatten())
@savefig plot_example_grid.png width=80%
smap.visualize(addcbar=False)
But with the ``pixel_ref`` keyword you can use another convention. For Salem,
the two conventions are identical:
.. ipython:: python
grid_c = salem.Grid(nxny=(3, 2), dxdy=(1, 1), x0y0=(0, 0),
proj=wgs84, pixel_ref='corner')
assert grid_c == grid
While it's good to know how grids work, most of the time grids should be
inferred directly from the data files (see also: :ref:`xarray_acc.init`):
.. ipython:: python
ds = salem.open_xr_dataset(salem.get_demo_file('himalaya.tif'))
grid = ds.salem.grid
grid.proj.srs
grid.extent
Grids come with several convenience functions, for example for transforming
points onto the grid coordinates:
.. ipython:: python
grid.transform(85, 27, crs=salem.wgs84)
Or for reprojecting structured data as explained below.
Reprojecting data
-----------------
Interpolation
~~~~~~~~~~~~~
The standard way to reproject a gridded dataset into another one is to use the
:py:func:`~salem.DatasetAccessor.transform` method:
.. ipython:: python
:suppress:
plt.rcParams['figure.figsize'] = (7, 3)
f = plt.figure(figsize=(7, 3))
.. ipython:: python
dse = salem.open_xr_dataset(salem.get_demo_file('era_interim_tibet.nc'))
t2_era_reproj = ds.salem.transform(dse.t2m.isel(time=0))
@savefig plot_reproj_grid.png width=80%
t2_era_reproj.salem.quick_map();
This is the recommended way if the output grid (in this case, a high resolution
lon-lat grid) is of similar or finer resolution than the input grid (in this
case, reanalysis data at 0.75°). As of v0.2, three interpolation methods are
available in Salem: ``nearest`` (default), ``linear``, or ``spline``:
.. ipython:: python
t2_era_reproj = ds.salem.transform(dse.t2m.isel(time=0), interp='spline')
@savefig plot_reproj_grid_spline.png width=80%
t2_era_reproj.salem.quick_map();
Internally, Salem uses `pyproj <https://jswhit.github.io/pyproj/>`__ for the
coordinates transformation and scipy's interpolation methods for the
resampling. Note that reprojecting data can be computationally and
memory expensive: it is generally recommended to reproject your data at the
end of the processing chain if possible.
The :py:func:`~salem.DatasetAccessor.transform` method returns an object of
the same structure as the input. The only differences are the coordinates and
the grid, which are those of the arrival grid:
.. ipython:: python
dst = ds.salem.transform(dse)
dst
dst.salem.grid == ds.salem.grid
Aggregation
~~~~~~~~~~~
If you need to resample higher resolution data onto a coarser grid,
:py:func:`~salem.DatasetAccessor.lookup_transform` may be the way to go. This
method gets its name from the "lookup table" it uses internally to store
the information needed for the resampling: for each
grid point in the coarser dataset, the lookup table stores the coordinates
of the high-resolution grid located below.
The default resampling method is to average all these points:
.. ipython:: python
dse = dse.salem.subset(corners=((77, 23), (94.5, 32.5)))
dsl = dse.salem.lookup_transform(ds)
@savefig plot_lookup_grid.png width=80%
dsl.data.salem.quick_map(cmap='terrain');
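The lookup-table idea can be sketched in plain Python: bin each fine-grid point into its coarse cell, then aggregate (here with an average; this is an illustration only, not Salem's implementation):

```python
from collections import defaultdict

def lookup_aggregate(points, values, coarse_dx, agg=lambda v: sum(v) / len(v)):
    """Aggregate fine-grid (x, y) values onto coarse cells of size coarse_dx."""
    cells = defaultdict(list)
    for (x, y), v in zip(points, values):
        # Each fine point falls into exactly one coarse cell.
        cells[(int(x // coarse_dx), int(y // coarse_dx))].append(v)
    return {cell: agg(vals) for cell, vals in cells.items()}

fine = [(0.2, 0.2), (0.8, 0.4), (1.5, 0.5)]
result = lookup_aggregate(fine, [1.0, 3.0, 10.0], coarse_dx=1.0)
print(result)
```

Passing ``agg=len`` instead of the default mean mirrors the ``method=len`` example above.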
But any aggregation method is available, for example ``np.std``, or ``len`` if
you want to know the number of high resolution pixels found below a coarse
grid point:
.. ipython:: python
dsl = dse.salem.lookup_transform(ds, method=len)
@savefig plot_lookup_grid_std.png width=80%
dsl.data.salem.quick_map();
from .utils import graphql_request, graphql_multipart_request, override_dict, handle_errors, get_payload
class ETLDataLoader:
"""Abstraction around several graphQL queries used to load data into Saleor.
Notes
-----
This class requires a valid `auth_token` to be provided during
initialization. An `app` must first be created, for example using the Django CLI:
```bash
python manage.py create_app etl --permission account.manage_users \
--permission account.manage_staff \
--permission app.manage_apps \
--permission discount.manage_discounts \
--permission plugins.manage_plugins \
--permission giftcard.manage_gift_card \
--permission menu.manage_menus \
--permission order.manage_orders \
--permission page.manage_pages \
--permission product.manage_products \
--permission shipping.manage_shipping \
--permission site.manage_settings \
--permission site.manage_translations \
--permission webhook.manage_webhooks \
--permission checkout.manage_checkouts
```
Attributes
----------
headers : dict
the headers used to make graphQL queries.
endpoint_url : str
the graphQL endpoint url to query to.
Methods
-------
"""
def __init__(self, auth_token, endpoint_url="http://localhost:8000/graphql/"):
"""initialize the `DataLoader` with an auth_token and an url endpoint.
Parameters
----------
auth_token : str
token used to authenticate calls to the graphQL endpoint.
endpoint_url : str, optional
the graphQL endpoint to be used , by default "http://localhost:8000/graphql/"
"""
self.headers = {"Authorization": "Bearer {}".format(auth_token)}
self.endpoint_url = endpoint_url
def update_shop_settings(self, **kwargs):
"""update shop settings.
Parameters
----------
**kwargs : dict, optional
overrides the default values used to update the shop settings; refer to the
ShopSettingsInput graphQL type to know what can be overridden.
Raises
------
Exception
when shopErrors is not an empty list
"""
variables = {
"input": kwargs
}
query = """
mutation ShopSettingsUpdate($input: ShopSettingsInput!) {
shopSettingsUpdate(input: $input) {
shop {
headerText
description
includeTaxesInPrices
displayGrossPrices
chargeTaxesOnShipping
trackInventoryByDefault
defaultWeightUnit
automaticFulfillmentDigitalProducts
defaultDigitalMaxDownloads
defaultDigitalUrlValidDays
defaultMailSenderName
defaultMailSenderAddress
customerSetPasswordUrl
}
shopErrors {
field
message
code
}
}
}
"""
response = graphql_request(
query, variables, self.headers, self.endpoint_url)
errors = response["data"]["shopSettingsUpdate"]["shopErrors"]
handle_errors(errors)
return response["data"]["shopSettingsUpdate"]["shop"]
def update_shop_domain(self, **kwargs):
"""update shop domain.
Parameters
----------
**kwargs : dict, optional
overrides the default values used to update the shop domain; refer to the
SiteDomainInput graphQL type to know what can be overridden.
Raises
------
Exception
when shopErrors is not an empty list
"""
variables = {
"siteDomainInput": kwargs
}
query = """
mutation ShopDomainUpdate($siteDomainInput: SiteDomainInput!) {
shopDomainUpdate(input: $siteDomainInput) {
shop {
domain {
host
sslEnabled
url
}
}
shopErrors {
field
message
code
}
}
}
"""
response = graphql_request(
query, variables, self.headers, self.endpoint_url)
errors = response["data"]["shopDomainUpdate"]["shopErrors"]
handle_errors(errors)
        return response["data"]["shopDomainUpdate"]["shop"]["domain"]
def update_shop_address(self, **kwargs):
"""update shop address.
Parameters
----------
        **kwargs : dict, optional
            overrides the default values used to update the shop address; refer to
            the AddressInput graphQL type to know what can be overridden.
Raises
------
Exception
when shopErrors is not an empty list
"""
variables = {
"addressInput": kwargs
}
query = """
mutation ShopAddressUpdate($addressInput: AddressInput!) {
shopAddressUpdate(input: $addressInput) {
shop {
companyAddress {
id
firstName
lastName
companyName
streetAddress1
streetAddress2
city
cityArea
postalCode
country {
code
country
}
countryArea
phone
isDefaultShippingAddress
isDefaultBillingAddress
}
}
shopErrors {
field
message
code
}
}
}
"""
response = graphql_request(
query, variables, self.headers, self.endpoint_url)
errors = response["data"]["shopAddressUpdate"]["shopErrors"]
handle_errors(errors)
return response["data"]["shopAddressUpdate"]["shop"]["companyAddress"]
def create_warehouse(self, **kwargs):
"""create a warehouse.
Parameters
----------
        **kwargs : dict, optional
            overrides the default values used to create the warehouse; refer to
            the WarehouseCreateInput graphQL type to know what can be overridden.
Returns
-------
id : str
the id of the warehouse created
Raises
------
Exception
when warehouseErrors is not an empty list
"""
default_kwargs = {
"companyName": "The Fake Company",
"email": "fake@example.com",
"name": "fake warehouse",
"address": {
                "streetAddress1": "a fake street address",
"city": "Fake City",
"postalCode": "1024",
"country": "CH"
}
}
override_dict(default_kwargs, kwargs)
variables = {
"input": default_kwargs
}
query = """
mutation createWarehouse($input: WarehouseCreateInput!) {
createWarehouse(input: $input) {
warehouse {
id
}
warehouseErrors {
field
message
code
}
}
}
"""
response = graphql_request(
query, variables, self.headers, self.endpoint_url)
errors = response["data"]["createWarehouse"]["warehouseErrors"]
handle_errors(errors)
return response["data"]["createWarehouse"]["warehouse"]["id"]
def create_shipping_zone(self, **kwargs):
"""create a shippingZone.
Parameters
----------
        **kwargs : dict, optional
            overrides the default values used to create the shipping zone; refer to
            the ShippingZoneCreateInput graphQL type to know what can be
            overridden.
Returns
-------
id : str
the id of the shippingZone created.
Raises
------
Exception
when shippingErrors is not an empty list.
"""
default_kwargs = {
"name": "CH",
"countries": [
"CH"
],
"default": False,
}
override_dict(default_kwargs, kwargs)
variables = {
"input": default_kwargs
}
query = """
mutation createShippingZone($input: ShippingZoneCreateInput!) {
shippingZoneCreate(input: $input) {
shippingZone {
id
}
shippingErrors {
field
message
code
}
}
}
"""
response = graphql_request(
query, variables, self.headers, self.endpoint_url)
errors = response["data"]["shippingZoneCreate"]["shippingErrors"]
handle_errors(errors)
return response["data"]["shippingZoneCreate"]["shippingZone"]["id"]
def create_attribute(self, **kwargs):
"""create a product attribute.
Parameters
----------
        **kwargs : dict, optional
            overrides the default values used to create the attribute; refer to
            the AttributeCreateInput graphQL type to know what can be
            overridden.
Returns
-------
id : str
the id of the attribute created.
Raises
------
Exception
when productErrors is not an empty list.
"""
default_kwargs = {
"inputType": "DROPDOWN",
"name": "default"
}
override_dict(default_kwargs, kwargs)
variables = {
"input": default_kwargs
}
query = """
mutation createAttribute($input: AttributeCreateInput!) {
attributeCreate(input: $input) {
attribute {
id
}
productErrors {
field
message
code
}
}
}
"""
response = graphql_request(
query, variables, self.headers, self.endpoint_url)
errors = response["data"]["attributeCreate"]["productErrors"]
handle_errors(errors)
return response["data"]["attributeCreate"]["attribute"]["id"]
def create_attribute_value(self, attribute_id, **kwargs):
"""create a product attribute value.
Parameters
----------
attribute_id : str
id of the attribute on which to add the value.
        **kwargs : dict, optional
            overrides the default values used to create the attribute value; refer
            to the AttributeValueCreateInput graphQL type to know what can be
            overridden.
Returns
-------
id : str
the id of the attribute on which the value was created.
Raises
------
Exception
when productErrors is not an empty list.
"""
default_kwargs = {
"name": "default"
}
override_dict(default_kwargs, kwargs)
variables = {
"attribute": attribute_id,
"input": default_kwargs
}
query = """
mutation createAttributeValue($input: AttributeValueCreateInput!, $attribute: ID!) {
attributeValueCreate(input: $input, attribute: $attribute) {
attribute{
id
}
productErrors {
field
message
code
}
}
}
"""
response = graphql_request(
query, variables, self.headers, self.endpoint_url)
errors = response["data"]["attributeValueCreate"]["productErrors"]
handle_errors(errors)
return response["data"]["attributeValueCreate"]["attribute"]["id"]
def create_product_type(self, **kwargs):
"""create a product type.
Parameters
----------
        **kwargs : dict, optional
            overrides the default values used to create the product type; refer to
            the ProductTypeInput graphQL type to know what can be
            overridden.
Returns
-------
id : str
the id of the productType created.
Raises
------
Exception
when productErrors is not an empty list.
"""
default_kwargs = {
"name": "default",
"hasVariants": False,
"productAttributes": [],
"variantAttributes": [],
            "isDigital": False,
}
override_dict(default_kwargs, kwargs)
variables = {
"input": default_kwargs
}
query = """
mutation createProductType($input: ProductTypeInput!) {
productTypeCreate(input: $input) {
productType {
id
}
productErrors {
field
message
code
}
}
}
"""
response = graphql_request(
query, variables, self.headers, self.endpoint_url)
errors = response["data"]["productTypeCreate"]["productErrors"]
handle_errors(errors)
return response["data"]["productTypeCreate"]["productType"]["id"]
def create_category(self, **kwargs):
"""create a category.
Parameters
----------
        **kwargs : dict, optional
            overrides the default values used to create the category; refer to
            the CategoryInput graphQL type to know what can be
            overridden.
Returns
-------
        id : str
            the id of the category created.
Raises
------
Exception
when productErrors is not an empty list.
"""
default_kwargs = {
"name": "default"
}
override_dict(default_kwargs, kwargs)
variables = {
"input": default_kwargs
}
query = """
mutation createCategory($input: CategoryInput!) {
categoryCreate(input: $input) {
category {
id
}
productErrors {
field
message
code
}
}
}
"""
response = graphql_request(
query, variables, self.headers, self.endpoint_url)
errors = response["data"]["categoryCreate"]["productErrors"]
handle_errors(errors)
return response["data"]["categoryCreate"]["category"]["id"]
def create_product(self, product_type_id, **kwargs):
"""create a product.
Parameters
----------
product_type_id : str
product type id required to create the product.
        **kwargs : dict, optional
            overrides the default values used to create the product; refer to
            the ProductCreateInput graphQL type to know what can be
            overridden.
Returns
-------
id : str
the id of the product created.
Raises
------
Exception
when productErrors is not an empty list.
"""
default_kwargs = {
"name": "default",
"description": "default",
"productType": product_type_id,
"basePrice": 0.0,
"sku": "default"
}
override_dict(default_kwargs, kwargs)
variables = {
"input": default_kwargs
}
query = """
mutation createProduct($input: ProductCreateInput!) {
productCreate(input: $input) {
product {
id
}
productErrors {
field
message
code
}
}
}
"""
response = graphql_request(
query, variables, self.headers, self.endpoint_url)
errors = response["data"]["productCreate"]["productErrors"]
handle_errors(errors)
return response["data"]["productCreate"]["product"]["id"]
def create_product_variant(self, product_id, **kwargs):
"""create a product variant.
Parameters
----------
product_id : str
id for which the product variant will be created.
        **kwargs : dict, optional
            overrides the default values used to create the product variant; refer
            to the ProductVariantCreateInput graphQL type to know what can be
            overridden.
Returns
-------
id : str
the id of the product variant created.
Raises
------
Exception
when productErrors is not an empty list.
"""
default_kwargs = {
"product": product_id,
"sku": "0",
"attributes": []
}
override_dict(default_kwargs, kwargs)
variables = {
"input": default_kwargs
}
query = """
mutation createProductVariant($input: ProductVariantCreateInput!) {
productVariantCreate(input: $input) {
productVariant {
id
}
productErrors {
field
message
code
}
}
}
"""
response = graphql_request(
query, variables, self.headers, self.endpoint_url)
errors = response["data"]["productVariantCreate"]["productErrors"]
handle_errors(errors)
return response["data"]["productVariantCreate"]["productVariant"]["id"]
def create_product_image(self, product_id, file_path):
"""create a product image.
Parameters
----------
product_id : str
id for which the product image will be created.
file_path : str
path to the image to upload.
Returns
-------
id : str
the id of the product image created.
Raises
------
Exception
when productErrors is not an empty list.
"""
body = get_payload(product_id, file_path)
response = graphql_multipart_request(
body, self.headers, self.endpoint_url)
errors = response["data"]["productImageCreate"]["productErrors"]
handle_errors(errors)
return response["data"]["productImageCreate"]["image"]["id"]
def create_customer_account(self, **kwargs):
        """Create a customer account (as an admin).
        Parameters
        ----------
        **kwargs : dict, optional
            overrides the default values used to create the customer; refer to
            the UserCreateInput graphQL type to know what can be overridden.
        Returns
        -------
        id : str
            the id of the user created.
        Raises
        ------
        Exception
            when accountErrors is not an empty list.
        """
default_kwargs = {
"firstName": "default",
"lastName": "default",
"email": "default@default.com",
"isActive": False,
}
override_dict(default_kwargs, kwargs)
variables = {"input": default_kwargs}
query = """
        mutation customerCreate($input: UserCreateInput!) {
customerCreate(input: $input) {
user {
id
}
accountErrors {
field
message
code
}
}
}
"""
response = graphql_request(query, variables, self.headers, self.endpoint_url)
errors = response["data"]["customerCreate"]["accountErrors"]
handle_errors(errors)
return response["data"]["customerCreate"]["user"]["id"]
def update_private_meta(self, item_id, input_list):
        """Update the private metadata of an item.
        Parameters
        ----------
        item_id : str
            id of the item to update; the model needs to support private metadata.
        input_list : list of dict
            a list of MetadataInput key/value dicts to set as private metadata.
        Returns
        -------
        id : str
            the item id if the metadata was set, otherwise None.
        """
variables = {"id": item_id, "input": input_list}
query = """
mutation updatePrivateMetadata($id: ID!, $input: [MetadataInput!]!) {
updatePrivateMetadata(id: $id, input: $input) {
item {
privateMetadata {
key
value
}
}
metadataErrors {
field
message
code
}
}
}
"""
response = graphql_request(query, variables, self.headers, self.endpoint_url)
if (
len(response["data"]["updatePrivateMetadata"]["item"]["privateMetadata"])
> 0
):
return item_id
else:
            return None

# ===== end of saleor_gql_loader/data_loader.py (saleor-gql-loader 0.0.5, PyPI) =====
import requests
import json
from pathlib import Path
from requests_toolbelt import MultipartEncoder
from django.core.serializers.json import DjangoJSONEncoder
GQL_DEFAULT_ENDPOINT = "http://localhost:8000/graphql/"
def graphql_request(query, variables={}, headers={},
endpoint=GQL_DEFAULT_ENDPOINT):
"""Execute the graphQL `query` provided on the `endpoint`.
Parameters
----------
query : str
docstring representing a graphQL query.
variables : dict, optional
dictionary corresponding to the input(s) of the `query` must be
serializable by requests into a JSON object.
headers : dict, optional
headers added to the request (important for authentication).
endpoint : str, optional
the graphQL endpoint url that will be queried, default is
`GQL_DEFAULT_ENDPOINT`.
Returns
-------
response : dict
a dictionary corresponding to the parsed JSON graphQL response.
Raises
------
Exception
when `response.status_code` is not 200.
"""
response = requests.post(
endpoint,
headers=headers,
json={
'query': query,
'variables': variables
}
)
parsed_response = json.loads(response.text)
if response.status_code != 200:
raise Exception("{message}\n extensions: {extensions}".format(
**parsed_response["errors"][0]))
else:
return parsed_response
def graphql_multipart_request(body, headers, endpoint=GQL_DEFAULT_ENDPOINT):
"""Execute a multipart graphQL query with `body` provided on the `endpoint`.
Parameters
----------
    body : dict
        payload of the multipart graphQL query.
headers : dict, optional
headers added to the request (important for authentication).
endpoint : str, optional
the graphQL endpoint url that will be queried, default is
`GQL_DEFAULT_ENDPOINT`.
Returns
-------
response : dict
a dictionary corresponding to the parsed JSON graphQL response.
Raises
------
Exception
when `response.status_code` is not 200.
"""
bodyEncoder = MultipartEncoder(body)
base_headers = {
"Content-Type": bodyEncoder.content_type,
}
override_dict(base_headers, headers)
response = requests.post(endpoint, data=bodyEncoder, headers=base_headers, timeout=90)
parsed_response = json.loads(response.text)
if response.status_code != 200:
raise Exception("{message}\n extensions: {extensions}".format(
**parsed_response["errors"][0]))
else:
return parsed_response
def override_dict(a, overrides):
    """Override a dict with another one, **top-level keys only**.
    Notes
    -----
    This works only with non-nested dicts. If dictionaries are nested, the
    nested dict is replaced wholesale and needs to be completely overridden.
    The operation is performed in place.
Parameters
----------
a : dict
a dictionary to merge.
overrides : dict
another dictionary to merge.
"""
for key, val in overrides.items():
try:
            if isinstance(a[key], dict):
                print(
                    "**warning**: key '{}' contained a dict; make sure to override each value in the nested dict.".format(key))
except KeyError:
pass
a[key] = val
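`override_dict` only merges top-level keys, so a nested dict in `overrides` replaces the whole nested value rather than merging into it. A self-contained sketch of the same shallow-merge behaviour (the helper is re-declared here so the example runs on its own):

```python
def shallow_override(a, overrides):
    # Mirrors override_dict above: top-level keys only, performed in place.
    for key, val in overrides.items():
        a[key] = val

defaults = {"name": "default", "address": {"city": "Fake City", "country": "CH"}}
shallow_override(defaults, {"address": {"city": "Geneva"}})

# The nested dict is replaced wholesale, so "country" is gone.
print(defaults["address"])
```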
def handle_errors(errors):
"""Handle a list of errors as dict with keys message and field.
Parameters
----------
    errors : list
        a list of errors; each error must be a dict with at least the
        following keys: `field` and `message`.
Raises
------
Exception
when the list is not empty and display {field} : {message} errors.
"""
if len(errors) > 0:
txt_list = [
"{field} : {message}".format(**error) for error in errors]
raise Exception("\n".join(txt_list))
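When the error list is non-empty, each entry is rendered as `field : message` and the lines are joined into a single exception message. For example (the error dicts here are illustrative):

```python
errors = [
    {"field": "email", "message": "Invalid email address.", "code": "INVALID"},
    {"field": "name", "message": "This field is required.", "code": "REQUIRED"},
]

# The same formatting used by handle_errors above.
txt_list = ["{field} : {message}".format(**error) for error in errors]
combined = "\n".join(txt_list)
print(combined)
```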
def get_operations(product_id):
"""Get ProductImageCreate operations
Parameters
----------
product_id : str
id for which the product image will be created.
Returns
-------
    operations : dict
        a dict with `query` and `variables` keys.
"""
query = """
mutation ProductImageCreate($product: ID!, $image: Upload!, $alt: String) {
productImageCreate(input: {alt: $alt, image: $image, product: $product}) {
image{
id
}
productErrors {
field
message
}
}
}
"""
variables = {
"product": product_id,
"image": "0",
"alt": ''
}
return {"query": query, "variables": variables}
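The `"0"` placeholder in `variables` is resolved by the multipart `map` entry, which points the file part named `0` at `variables.image` (the GraphQL multipart request convention). A minimal sketch of those two payload pieces — the query string here is a stand-in:

```python
import json

# "operations" carries the query and variables; the file variable holds a
# placeholder ("0") that "map" resolves to the actual uploaded part.
operations = {"query": "mutation { ... }", "variables": {"image": "0", "alt": ""}}
payload = {
    "operations": json.dumps(operations),
    "map": json.dumps({"0": ["variables.image"]}),
}
print(payload["map"])
```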
def get_payload(product_id, file_path):
    """Build the multipart payload for a ProductImageCreate mutation.
    Parameters
    ----------
    product_id : str
        id for which the product image will be created.
    file_path : str
        path to the image to upload.
    Returns
    -------
    payload : dict
        a dict with the `operations`, `map` and file parts of the multipart
        request.
"""
return {
"operations": json.dumps(
get_operations(product_id), cls=DjangoJSONEncoder
),
"map": json.dumps({'0': ["variables.image"]}, cls=DjangoJSONEncoder),
"0": (Path(file_path).name, open(file_path, 'rb'), 'image/png')
    }

# ===== end of saleor_gql_loader/utils.py (saleor-gql-loader 0.0.5, PyPI) =====

<div align="center">
<h1>Saleor Commerce</h1>
</div>
<div align="center">
<strong>Customer-centric e-commerce on a modern stack</strong>
</div>
<div align="center">
A headless, GraphQL-first e-commerce platform delivering ultra-fast, dynamic, personalized shopping experiences. Beautiful online stores, anywhere, on any device.
</div>
<br>
<div align="center">
Join our active, engaged community: <br>
<a href="https://saleor.io/">Website</a>
<span> | </span>
<a href="https://medium.com/saleor">Blog</a>
<span> | </span>
<a href="https://twitter.com/getsaleor">Twitter</a>
<span> | </span>
<a href="https://gitter.im/mirumee/saleor">Gitter</a>
<span> | </span>
<a href="https://spectrum.chat/saleor">Spectrum</a>
</div>
<br>
<div align="center">
<a href="https://circleci.com/gh/mirumee/saleor">
<img src="https://circleci.com/gh/mirumee/saleor.svg?style=svg" alt="Build status" />
</a>
<a href="http://codecov.io/github/mirumee/saleor?branch=master">
<img src="http://codecov.io/github/mirumee/saleor/coverage.svg?branch=master" alt="Codecov" />
</a>
<a href="https://docs.saleor.io/">
<img src="https://img.shields.io/badge/docs-docs.saleor.io-brightgreen.svg" alt="Documentation" />
</a>
<a href="https://github.com/python/black">
<img src="https://img.shields.io/badge/code%20style-black-000000.svg" alt="Code style: black">
</a>
</div>
## Table of Contents
- [What makes Saleor special?](#what-makes-saleor-special)
- [Features](#features)
- [Installation](#installation)
- [Documentation](#documentation)
- [Demo](#demo)
- [Contributing](#contributing)
- [Translations](#translations)
- [Your feedback](#your-feedback)
- [License](#license)
## What makes Saleor special?
Saleor is a rapidly-growing open source e-commerce platform that has served high-volume companies in industries such as publishing and apparel since 2012. Based on Python and Django, the latest major update introduces a modular front end powered by a GraphQL API and written with React and TypeScript.
## Features
- **PWA**: End users can shop offline for better sales and shopping experiences
- **GraphQL API**: Access all data from any web or mobile client using the latest technology
- **Headless commerce**: Build mobile apps, customize storefronts and externalize processes
- **UX and UI**: Designed for a user experience that rivals even the top commercial platforms
- **Dashboard**: Administrators have total control of users, processes, and products
- **Orders**: A comprehensive system for orders, dispatch, and refunds
- **Cart**: Advanced payment and tax options, with full control over discounts and promotions
- **Payments**: Flexible API architecture allows integration of any payment method. It comes with Braintree support out of the box.
- **Geo-adaptive**: Automatic localized pricing. Over 20 local languages. Localized checkout experience by country.
- **SEO**: Packed with features that get stores to a wider audience
- **Cloud**: Optimized for deployments using Docker
- **Analytics**: Server-side Google Analytics to report e-commerce metrics without affecting privacy
Saleor is free and always will be.
Help us out… If you love free stuff and great software, give us a star! 🌟


## Installation
Saleor requires Python 3.8, Node.js 10.0+, PostgreSQL and OS-specific dependency tools.
[See the Saleor docs](https://docs.saleor.io/docs/getting-started/intro/) for step-by-step installation and deployment instructions.
Note:
The `master` branch is the development version of Saleor and it may be unstable. To use the latest stable version, download it from the [Releases](https://github.com/mirumee/saleor/releases/) page or switch to a release tag.
The current stable version is 2.10 and you should use this version for all three components:
- Saleor: https://github.com/mirumee/saleor/releases/tag/2.10.1
- Dashboard: https://github.com/mirumee/saleor-dashboard/releases/tag/2.10.0
- Storefront: https://github.com/mirumee/saleor-storefront/releases/tag/2.10.0
## Documentation
Saleor documentation is available here: [docs.saleor.io](https://docs.saleor.io)
To contribute, please see the [`mirumee/saleor-docs` repository](https://github.com/mirumee/saleor-docs/).
## Saleor Platform
The easiest way to run all components of Saleor (API, storefront and dashboard) together on your local machine is to use the [saleor-platform](https://github.com/mirumee/saleor-platform) project. Go to that repository for instructions on how to use it.
[View saleor-platform](https://github.com/mirumee/saleor-platform)
## Storefront
For PWA, single-page storefront go to the [saleor-storefront](https://github.com/mirumee/saleor-storefront) repository.
[View storefront demo](https://pwa.saleor.io/)
## Dashboard
For dashboard go to the [saleor-dashboard](https://github.com/mirumee/saleor-dashboard) repository.
[View dashboard demo](https://pwa.saleor.io/dashboard/)
## Demo
Want to see Saleor in action?
[View Storefront](https://pwa.saleor.io/) | [View Dashboard (admin area)](https://pwa.saleor.io/dashboard/)
Or launch the demo on a free Heroku instance.
[](https://heroku.com/deploy)
Login credentials: `admin@example.com`/`admin`
## Contributing
We love your contributions and do our best to provide you with mentorship and support. If you are looking for an issue to tackle, take a look at issues labeled [`Help Wanted`](https://github.com/mirumee/saleor/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22).
If nothing grabs your attention, check [our roadmap](https://github.com/mirumee/saleor/projects/6) or come up with your feature. Just drop us a line or [open an issue](https://github.com/mirumee/saleor/issues/new) and we’ll work out how to handle it.
Get more details in our [Contributing Guide](https://docs.getsaleor.com/docs/contributing/intro/).
## Legacy views
If you're interested in using the old version of Saleor, go to the [legacy-views](https://github.com/mirumee/legacy-views) repository. It contains the 2.9.0 release, which includes Django-based views and HTML templates of Storefront 1.0 and Dashboard 1.0. Note: this version of Saleor is no longer officially maintained.
## Your feedback
Do you use Saleor as an e-commerce platform?
Fill out this short survey and help us grow. It will take just a minute, but mean a lot!
[Take a survey](https://mirumee.typeform.com/to/sOIJbJ)
## License
Disclaimer: Everything you see here is open and free to use as long as you comply with the [license](https://github.com/mirumee/saleor/blob/master/LICENSE). There are no hidden charges. We promise to do our best to fix bugs and improve the code.
Some situations do call for extra code; we can cover exotic use cases or build you a custom e-commerce appliance.
#### Crafted with ❤️ by [Mirumee Software](http://mirumee.com)
hello@mirumee.com
<!-- ===== end of saleor-2.10.1/README.md (PyPI) ===== -->
import pandas as pd
# --------------------------------------------------------------------------
# Data pipeline
class SalesPipeline:
"""Backend pipeline for data in sales_analysis/data_pipeline/data
Parameters
----------
data : dict of pd.DataFrame
dict should contain sales data from
sales_analysis/data_pipeline/data. Must include 'commissions.csv',
'orders.csv', 'order_lines.csv', 'products.csv',
'product_promotions.csv' and 'promotions.csv'
Attributes
----------
merged_order_data : pd.DataFrame
Formatted order/orderlines data
"""
# ----------------------------------------------------------------------
# Constructors
def __init__(self, **data):
for k, v in data.items():
setattr(self, k.split('.')[0], v)
self.merged_order_data = self._merge_order_data()
# ----------------------------------------------------------------------
# formatting methods
def _format_orders(self, orders):
"date formatting method for orders.csv"
orders["created_at"] = pd.to_datetime(orders["created_at"]).dt.date
return orders
def _merge_order_data(self):
"Method to combine orders.csv and order_lines.csv"
orders = self._format_orders(self.orders)
order_lines = self.order_lines
return order_lines.merge(
orders,
how="left",
left_on="order_id",
right_on="id",
)
# ----------------------------------------------------------------------
# statistics/calculation methods
def _customer_count(self):
"Method to count the total number of unique customers per day"
orders = self._format_orders(self.orders)
customer_count = orders.groupby("created_at")["customer_id"].nunique()
customer_count.name = "customers"
return customer_count
def _total_discount(self):
"Method to calculate the total discount given per day"
merged = self.merged_order_data
merged["total_discount"] = (
merged["full_price_amount"] - merged["discounted_amount"])
total_discount = merged.groupby("created_at")["total_discount"].sum()
total_discount.name = "total_discount_amount"
return total_discount
def _item_count(self):
"Method to calculate the total number of items sold per day"
sales_quantity = (self.merged_order_data
.groupby("created_at")["quantity"].sum())
sales_quantity.name = "items"
return sales_quantity
def _mean_order_total(self):
"Method to calculate the average order amount per day"
average_daily_order = (self.merged_order_data
.groupby("created_at")['total_amount'].mean())
average_daily_order.name = "order_total_avg"
return average_daily_order
def _mean_discount_rate(self):
"Method to calculate the average discount rate offered per day"
average_discount_rate = (self.merged_order_data
.groupby("created_at")["discount_rate"].mean())
average_discount_rate.name = "discount_rate_avg"
return average_discount_rate
# ----------------------------------------------------------------------
# Public methods
def summary(self):
"""Summary sales statistics per day.
Returns
-------
summary: pd.DataFrame
Summary of the total number of customers, total discount offered,
total number of items sold, average order amount and the average
discount rate offered each day
"""
summary_df = pd.concat([
self._customer_count(),
self._total_discount(),
self._item_count(),
self._mean_order_total(),
self._mean_discount_rate(),
], axis=1)
summary_df.index = pd.to_datetime(summary_df.index)
        return summary_df

# ===== end of sales_analysis/data_pipeline/_pipeline.py (sales_analysis 0.4, PyPI) =====
from itertools import chain
from sklearn import base
import pandas as pd
import numpy as np
import os
import glob
import datetime
class ToWeeklySalesDataset(base.BaseEstimator, base.TransformerMixin):
def __init__(self, DATETYPE_COLUMNS_LIST, DTYPE_DICT, COLS_TO_KEEP, INPUT_FILE_PREFIX, DEBUT_DATE):
self.DATETYPE_COLUMNS_LIST = DATETYPE_COLUMNS_LIST
self.DTYPE_DICT = DTYPE_DICT
self.COLS_TO_KEEP = COLS_TO_KEEP
self.INPUT_FILE_PREFIX = INPUT_FILE_PREFIX
self.DEBUT_DATE = DEBUT_DATE
    def fit(self, X, y=None):
        # Nothing to learn here; kept for scikit-learn pipeline compatibility.
        return self
    def transform(self, X):
print('###########################')
print(' WEEKLY SALES RETRIEVAL ')
print('###########################\n')
# initiating returned object
dataset = X.copy()
dataset['SITE_CODE'] = dataset['SITE_VENTE'].astype(
str) + "_" + dataset['CODE_ARTICLE'].astype(str)
# remove lines with empty date
dataset = dataset[dataset.DATE_COMMANDE.notna()]
# remove lines with date prior to debut date
print('Removing lines with dates prior to debut date of SalesDataset object..')
dataset = dataset[dataset.DATE_COMMANDE >= self.DEBUT_DATE]
# sum per DATE and SITE_CODE
dataset = dataset.groupby(
['DATE_COMMANDE', 'SITE_CODE']).sum().reset_index()
        # Expanding the time series for the first SITE_CODE in order to have the entire date range (needed for the KFold algorithm afterwards)
idx = pd.date_range(self.DEBUT_DATE, max(dataset['DATE_COMMANDE']))
first_site_code = list(dataset.SITE_CODE.unique())[0]
tmp = dataset[dataset.SITE_CODE == first_site_code].drop(
['SITE_CODE'], axis=1).set_index('DATE_COMMANDE')
tmp = tmp.reindex(idx, fill_value=0).reset_index().rename(
columns={"index": "DATE_COMMANDE"})
tmp['SITE_CODE'] = first_site_code
dataset = pd.concat(
[dataset[dataset.SITE_CODE != first_site_code], tmp])
        # Converting dates to week indices (week 0 starts at DEBUT_DATE)
print('Converting date to week number...')
dataset.DATE_COMMANDE = dataset.DATE_COMMANDE.apply(
lambda dt_time: (dt_time.date() - datetime.datetime.strptime(self.DEBUT_DATE, "%m-%d-%Y").date()).days//7)
dataset = dataset.groupby(
['DATE_COMMANDE', 'SITE_CODE']).sum().reset_index()
print('Unstacking dataset...\n')
dataset = dataset.groupby(['DATE_COMMANDE', 'SITE_CODE']).sum().unstack(
fill_value=0).stack().reset_index()
print('###########################')
print('WEEKLY SALES RETRIEVAL DONE')
print('###########################')
        return dataset

# ===== end of sales_forecast_package/ToWeeklySalesDataset.py (sales_forecast_package 0.0.3, PyPI) =====
import mysql.connector as sql
import datetime
# Database connection settings, used below by BaseManager.set_connection
DB_SETTINGS = {
    "user": "root",
    "password": "example",
    "host": "db",
    "database": "booking",
}
# Define the BaseManager class
class BaseManager:
connection = None
@classmethod
    # Establishes a single database connection and keeps it for all methods (SELECT, INSERT, UPDATE, DELETE)
def set_connection(cls, database_settings):
connection = sql.connect(**database_settings)
connection.autocommit = True
cls.connection = connection
@classmethod
def _get_cursor(cls):
return cls.connection.cursor(buffered=True)
@classmethod
def _execute_query(cls, query, params=None):
cursor = cls._get_cursor()
cursor.execute(query, params)
def __init__(self, model_class):
self.model_class = model_class
def select(self, *field_names, limit=10):
# Build SELECT query
fields_format = ', '.join(field_names)
query = f"SELECT {fields_format} FROM {self.model_class.table_name}"
# Execute query
cursor = self._get_cursor()
cursor.execute(query)
# Retrieve Data
model_objects = list()
        result = cursor.fetchmany(size=limit)
for row_values in result:
keys, values = field_names, row_values
row_data = dict(zip(keys, values))
model_objects.append(self.model_class(**row_data))
return model_objects
def insert(self, rows: list):
field_names = rows[0].keys()
fields_format = ", ".join(field_names)
values_placeholder_format = ", ".join([f'({", ".join(["%s"] * len(field_names))})'] * len(rows))
query = f"INSERT INTO {self.model_class.table_name} ({fields_format}) " \
f"VALUES {values_placeholder_format}"
params = list()
for row in rows:
row_values = [row[field_name] for field_name in field_names]
params += row_values
self._execute_query(query, params)
def update(self, new_data: dict):
field_names = new_data.keys()
placeholder_format = ', '.join([f'{field_name} = %s' for field_name in field_names])
query = f"UPDATE {self.model_class.table_name} SET {placeholder_format}"
params = list(new_data.values())
self._execute_query(query, params)
def delete(self):
query = f"DELETE FROM {self.model_class.table_name}"
self._execute_query(query)
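`insert` builds one parameterised `(%s, …)` group per row and flattens the row values into a single parameter list. A standalone sketch of that string construction (no database required; the table and rows are made up):

```python
rows = [
    {"name": "a", "price": 1},
    {"name": "b", "price": 2},
]
field_names = list(rows[0].keys())

# One "(%s, %s)" group per row, exactly as BaseManager.insert does.
fields_format = ", ".join(field_names)
values_placeholder_format = ", ".join(
    [f'({", ".join(["%s"] * len(field_names))})'] * len(rows)
)
query = f"INSERT INTO sales ({fields_format}) VALUES {values_placeholder_format}"
params = [row[f] for row in rows for f in field_names]
print(query)
print(params)
```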
# MetaModel (intermediate piece): a metaclass that stores the database manager
# and lets every mapped class expose it through the `objects` property
class MetaModel(type):
manager_class = BaseManager
def _get_manager(cls):
return cls.manager_class(model_class=cls)
@property
def objects(cls):
return cls._get_manager()
# Define the BaseModel class
class BaseModel(metaclass=MetaModel):
table_name = ""
def __init__(self, **row_data):
for field_name, value in row_data.items():
setattr(self, field_name, value)
def __repr__(self):
attrs_format = ", ".join([f'{field}={value}' for field, value in self.__dict__.items()])
return f"<{self.__class__.__name__}: ({attrs_format})>"
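`MetaModel` exposes `objects` as a property on the metaclass, so accessing `SomeModel.objects` builds a manager bound to that model class. A database-free sketch of the same pattern, with a dummy manager standing in for `BaseManager`:

```python
class DummyManager:
    # Stands in for BaseManager; only records which model it serves.
    def __init__(self, model_class):
        self.model_class = model_class

class Meta(type):
    manager_class = DummyManager

    @property
    def objects(cls):
        # Evaluated on the class itself, not on instances.
        return cls.manager_class(model_class=cls)

class Model(metaclass=Meta):
    table_name = "demo"

manager = Model.objects
print(manager.model_class.table_name)
```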
# Classes that define the tables
class Sale(BaseModel):
manager_class = BaseManager
table_name = "sales"
# Connect to the database
BaseManager.set_connection(DB_SETTINGS)

# ===== end of sales_module/sales.py (sales_module 0.0.1, PyPI) =====
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
from xgboost import XGBRegressor
from functools import partial
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
DEFAULT_XGBR_SPACE = {
'n_estimators': hp.quniform('n_estimators', 10, 1000, 1),
'max_depth': hp.quniform('max_depth', 3, 18, 1),
'grow_policy': hp.choice('grow_policy', [0, 1]),
'learning_rate': hp.quniform('learning_rate', 0.025, 0.5, 0.025),
'booster': 'gbtree',
'tree_method': hp.choice('tree_method', ['exact', 'approx', 'hist']),
'gamma': hp.quniform('gamma', 0.5, 1, 0.05),
'min_child_weight': hp.quniform('min_child_weight', 1, 6, 1),
'subsample': hp.quniform('subsample', 0.5, 1, 0.05),
'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),
'colsample_bylevel': hp.quniform('colsample_bylevel', 0.5, 1, 0.05),
'colsample_bynode': hp.quniform('colsample_bynode', 0.5, 1, 0.05),
}
DEFAULT_RFR_SPACE = {
'n_estimators': hp.quniform('n_estimators', 10, 1000, 1),
'max_depth': hp.quniform('max_depth', 3, 20, 1),
'min_samples_split': hp.quniform('min_samples_split', 2, 100, 1),
'min_samples_leaf': hp.quniform('min_samples_leaf', 1, 20, 1),
'max_features': hp.quniform('max_features', 0.1, 0.9, 0.1),
'max_samples': hp.quniform('max_samples', 0.1, 0.9, 0.1)
}
def xgbr_score(params, **data):
model = XGBRegressor(n_estimators=int(params['n_estimators']),
max_depth=int(params['max_depth']),
learning_rate=params['learning_rate'],
booster=params['booster'],
tree_method=params['tree_method'],
gamma=params['gamma'],
min_child_weight=int(params['min_child_weight']),
subsample=params['subsample'],
colsample_bytree=params['colsample_bytree'],
colsample_bylevel=params['colsample_bylevel'],
colsample_bynode=params['colsample_bynode'],
random_state=1001
)
model.fit(data['t_x'], data['t_y'])
pred = model.predict(data['v_x'])
    # squared=False returns the root mean squared error (RMSE)
    rmse = mean_squared_error(data['v_y'], pred, squared=False)
    return {'loss': rmse, 'status': STATUS_OK, 'model': model}
def rfr_score(params, **data):
model = RandomForestRegressor(n_jobs=-1, random_state=1001,
n_estimators=int(params['n_estimators']),
max_depth=int(params['max_depth']),
min_samples_split=int(params['min_samples_split']),
min_samples_leaf=int(params['min_samples_leaf']),
max_features=params['max_features'] if params['max_features'] < 0.99 else 1,
max_samples=params['max_samples'] if params['max_samples'] < 0.99 else 1,
)
model.fit(data['t_x'], data['t_y'].ravel())
pred = model.predict(data['v_x'])
    # squared=False returns the root mean squared error (RMSE)
    rmse = mean_squared_error(data['v_y'], pred, squared=False)
    return {'loss': rmse, 'status': STATUS_OK, 'model': model}
def bayesian_tuning(space, model_type, max_evals=50, **data):
    if model_type == 'xgbr':
        score_func = xgbr_score
    elif model_type == 'rfr':
        score_func = rfr_score
    else:
        raise ValueError(f"Unknown model_type: {model_type!r}; expected 'xgbr' or 'rfr'")
    trials = Trials()
    best = fmin(fn=partial(score_func, **data),
                space=space,
                algo=tpe.suggest,
                max_evals=max_evals,
                trials=trials)
    return best, trials
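The objective passed to `fmin` above receives the train/validation splits through `functools.partial` rather than globals. A stdlib-only sketch of that binding (the `score` function and data are illustrative):

```python
from functools import partial

# fmin calls the objective with only `params`; the datasets are pre-bound as
# keyword arguments, as bayesian_tuning does with partial(score_func, **data).
def score(params, **data):
    return {'loss': params['x'] ** 2, 'n_rows': len(data['t_x'])}

objective = partial(score, t_x=[1, 2, 3])
print(objective({'x': 2.0}))  # -> {'loss': 4.0, 'n_rows': 3}
```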
import numpy as np
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
import category_encoders as ce
class FeatureScaler(BaseEstimator, TransformerMixin):
def __init__(self):
self._feat_scaler = StandardScaler()
self._num_cols = None
def fit(self, X, y=None):
self._num_cols = list(set(X.select_dtypes(include='float64').columns) - {'month_sin', 'month_cos'})
self._feat_scaler = self._feat_scaler.fit(X.loc[:, self._num_cols])
return self
def transform(self, X, y=None):
X.loc[:, self._num_cols] = self._feat_scaler.transform(X.loc[:, self._num_cols])
return X
class TargetEncoder(BaseEstimator, TransformerMixin):
def __init__(self, smoothing=1.0, min_samples_leaf=1):
self._encoder = ce.TargetEncoder(min_samples_leaf=min_samples_leaf, smoothing=smoothing)
self._cat_cols = ['shop_id', 'city_id', 'item_id', 'item_category_id', 'item_global_category_id']
def fit(self, X, y=None):
X.loc[:, self._cat_cols] = X.loc[:, self._cat_cols].astype(object)
self._encoder = self._encoder.fit(X.loc[:, self._cat_cols], y)
return self
def transform(self, X, y=None):
X.loc[:, self._cat_cols] = self._encoder.transform(X.loc[:, self._cat_cols])
return X
class TargetTransformer(BaseEstimator, TransformerMixin):
def __init__(self, no_log=False):
self._scaler = StandardScaler()
self._no_log = no_log
def fit(self, X, y=None):
if self._no_log:
self._scaler = self._scaler.fit(X)
else:
self._scaler = self._scaler.fit(np.log(X))
return self
def transform(self, X, y=None):
if not self._no_log:
X = np.log(X)
X = self._scaler.transform(X)
return X
def inverse_transform(self, X, y=None):
X = self._scaler.inverse_transform(X)
if not self._no_log:
X = np.exp(X)
return X
class OutlierRemover(BaseEstimator, TransformerMixin):
def __init__(self, target='item_cnt_month', prob=0.03):
self._prob = prob
self._threshold = None
self._target = target
def fit(self, X, y=None):
self._threshold = np.mean(y) / self._prob
return self
def fit_transform(self, X, y=None):
self._threshold = np.mean(y) / self._prob
X = X[y < self._threshold]
y = y[y < self._threshold]
return X, y
def transform(self, X, y=None):
X = X[y < self._threshold]
y = y[y < self._threshold]
return X, y
class Preprocessor:
def __init__(self, target_col, no_log_target=False, outlier_prob=0.03, keep_cat=False, **encoder_params):
self.target_transformer = TargetTransformer(no_log=no_log_target)
self._outlier_remover = OutlierRemover(prob=outlier_prob, target=target_col)
self._target_col = target_col
preprocessing_steps = [('scaler', FeatureScaler())]
if not keep_cat:
preprocessing_steps.append(('encoder', TargetEncoder(**encoder_params)))
self._feature_prep_pipeline = Pipeline(steps=preprocessing_steps)
def preprocess_data(self, train_data=None, val_data=None, test_data=None):
output = []
if train_data is not None:
train_x, train_y = train_data
train_x, train_y = self._outlier_remover.fit_transform(train_x, train_y)
train_y_transformed = self.target_transformer.fit_transform(train_y.values.reshape(-1, 1))
train_x = self._feature_prep_pipeline.fit_transform(train_x, train_y)
output.append(train_x)
output.append(pd.DataFrame(train_y_transformed))
output.append(pd.DataFrame(train_y))
if val_data is not None:
val_x, val_y = val_data
val_x, val_y = self._outlier_remover.transform(val_x, val_y)
val_y_transformed = self.target_transformer.transform(val_y.values.reshape(-1, 1))
assert (val_x.columns == train_x.columns).all()
assert type(val_x) == type(train_x)
val_x = self._feature_prep_pipeline.transform(val_x)
output.append(val_x)
output.append(pd.DataFrame(val_y_transformed))
output.append(pd.DataFrame(val_y))
if test_data is not None:
test_x = self._feature_prep_pipeline.transform(test_data)
output.append(test_x)
        return output
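`TargetTransformer` composes a log transform with standardization and inverts both in `inverse_transform`. A pure-NumPy sketch of that round trip, standing in for the `StandardScaler` it delegates to:

```python
import numpy as np

# Forward: log then standardize; inverse: de-standardize then exp.
y = np.array([[1.0], [10.0], [100.0]])
logged = np.log(y)
mean, std = logged.mean(), logged.std()
transformed = (logged - mean) / std
recovered = np.exp(transformed * std + mean)
print(np.allclose(recovered, y))  # -> True
```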
import time
from enum import Enum
from typing import List
from . import base as bulk_base
from .. import base
from ... import config, exceptions
from ...models import bulk as models
class OPERATION(Enum):
DELETE = 'delete'
INSERT = 'insert'
QUERY = 'query'
QUERY_ALL = 'queryall'
UPSERT = 'upsert'
UPDATE = 'update'
HARD_DELETE = 'hardDelete'
class JOB_STATE(Enum):
OPEN = 'Open'
CLOSED = 'Closed'
ABORTED = 'Aborted'
FAILED = 'Failed'
class BATCH_STATE(Enum):
QUEUED = 'Queued'
IN_PROGRESS = 'InProgress'
COMPLETED = 'Completed'
FAILED = 'Failed'
NOT_PROCESSED = 'Not Processed'
BATCH_STATE_DONE = [BATCH_STATE.COMPLETED, BATCH_STATE.FAILED]
class Client(bulk_base.Client, base.AsyncService):
def insert(self, object_name: str, entries: List[dict]) -> List[models.ResultRecord]:
return self._execute_operation(OPERATION.INSERT, object_name, entries)
def update(self, object_name: str, entries: List[dict]) -> List[models.ResultRecord]:
return self._execute_operation(OPERATION.UPDATE, object_name, entries)
def upsert(self, object_name: str, entries: List[dict], external_id_field_name: str = 'Id') -> List[models.ResultRecord]:
return self._execute_operation(OPERATION.UPSERT, object_name, entries, external_id_field_name)
def select(self, **kwargs):
raise NotImplementedError
def delete(self, object_name: str, ids: List[str]) -> List[models.ResultRecord]:
return self._execute_operation(OPERATION.DELETE, object_name, [{'Id': id} for id in ids])
def _execute_operation(self, operation: OPERATION, object_name: str, entries: List[dict], external_id_field_name: str = None) -> List[models.ResultRecord]:
job = Job.create(self.connection, operation, object_name, external_id_field_name)
job.upload(entries)
return job.wait()
class Job(base.AsyncService):
def __init__(self, connection, job_id):
super().__init__(connection, 'job/' + job_id)
self.job_id = job_id
self.batches = []
    def _set_state(self, new_state: JOB_STATE):
        return self._post(data={
            'state': new_state.value
        })
def upload(self, entries: List[dict]):
self.add_batch(entries)
self.close()
def add_batch(self, entries: List[dict]):
return self.batches.append(
Batch.create(self.connection, self.job_id, entries)
)
def close(self):
return self._set_state(JOB_STATE.CLOSED)
def get_result(self) -> List[models.ResultRecord]:
results = [batch.get_result() for batch in self.batches]
return [item for sublist in results for item in sublist]
def get_errors(self):
return [
batch.get_state_message()
for batch in self.batches
if batch.is_failed()
]
    def wait(self) -> List[models.ResultRecord]:
        # Poll until every batch reaches a terminal state (iterative, to avoid
        # unbounded recursion on long-running jobs)
        batch_states = [batch.get_state() for batch in self.batches]
        while any(state not in BATCH_STATE_DONE for state in batch_states):
            time.sleep(config.BULK_SLEEP_SECONDS)
            batch_states = [batch.get_state() for batch in self.batches]
        if BATCH_STATE.FAILED in batch_states:
            raise exceptions.BulkJobFailedError('One or more batches failed')
        return self.get_result()
@classmethod
def create(cls, connection, operation: OPERATION, object_name: str, external_id_field_name: str = None):
result = base.AsyncService(connection, 'job')._post(uri='', data={
'operation': operation.value,
'object': object_name,
'contentType': 'JSON',
'externalIdFieldName': external_id_field_name
})
return cls(connection, result['id'])
class Batch(base.AsyncService):
def __init__(self, connection, job_id, batch_id):
super().__init__(connection, f'job/{job_id}/batch/{batch_id}')
def get_info(self):
return self._get()
def get_state(self) -> BATCH_STATE:
return BATCH_STATE(self.get_info().get('state'))
    def get_state_message(self) -> str:
        return self.get_info().get('stateMessage')
def is_done(self):
return self.get_state() in BATCH_STATE_DONE
def is_failed(self):
return self.get_state() == BATCH_STATE.FAILED
def get_result(self) -> List[models.ResultRecord]:
reader = self._get('result')
return [
self._convert_result(x)
for x in reader
]
def _convert_result(self, row):
if row['success']:
return models.SuccessResultRecord(row['id'], row)
error = ', '.join([
x['message']
for x in row['errors']
            if x['message'] is not None
])
return models.FailResultRecord(row['id'], error, row)
def wait(self) -> List[models.ResultRecord]:
while not self.is_done():
time.sleep(config.BULK_SLEEP_SECONDS)
if self.get_state() == BATCH_STATE.FAILED:
raise exceptions.BulkJobFailedError(self.get_state_message())
return self.get_result()
@classmethod
def create(cls, connection, job_id, entries):
result = base.AsyncService(connection, f'job/{job_id}/batch')._post(data=entries)
        return cls(connection, job_id, result['id'])
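`Job.wait` polls every batch until all reach a terminal state. A stub-based sketch of that polling loop with no live API calls (`StubBatch` and `wait_for` are illustrative names, not part of the module):

```python
import time

# Terminal batch states, mirroring BATCH_STATE_DONE above.
DONE_STATES = {'Completed', 'Failed'}

class StubBatch:
    """Yields a scripted sequence of states, one per poll."""
    def __init__(self, states):
        self._states = iter(states)

    def get_state(self):
        return next(self._states)

def wait_for(batches, sleep_seconds=0.0):
    while True:
        states = [b.get_state() for b in batches]
        if all(s in DONE_STATES for s in states):
            return states
        time.sleep(sleep_seconds)

batches = [StubBatch(['InProgress', 'Completed']), StubBatch(['Completed', 'Completed'])]
print(wait_for(batches))  # -> ['Completed', 'Completed']
```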
# Salesforce Bulkipy
A Python library for the Salesforce Bulk API (that actually works)
## Changes over [salesforce-bulk](https://github.com/heroku/salesforce-bulk)
The [salesforce-bulk](https://github.com/heroku/salesforce-bulk) library was used to export 18k records to [Wingify](https://github.com/wingify)'s Salesforce system. Even though the library was super useful, it's broken, not maintained anymore and was a pain to work with while figuring out the bugs. [@bholagabbar](https://github.com/bholagabbar) decided to fix all the issues faced and release a new, usable library **salesforce-bulkipy**. This library has been tested successfully on our Salesforce Sandbox.
* Added support for [Two-Factor Authentication](https://developer.salesforce.com/docs/atlas.en-us.identityImplGuide.meta/identityImplGuide/security_require_two-factor_authentication.htm) by routing authentication via [simple-salesforce](https://github.com/simple-salesforce/simple-salesforce)
* Added support for [Salesforce Sandbox](https://test.salesforce.com)
* Added support for parsing unicode characters in CSV
* Fixed various other bugs
**salesforce-bulkipy** will be actively maintained, unlike salesforce-bulk
## Installation
**```sudo pip install salesforce-bulkipy```** (not yet available)
In case your setup fails, you may have a few essential tools missing. Try
`sudo apt-get install build-essential libssl-dev libffi-dev python-dev`
## Authentication
To access the Bulk API, you need to authenticate a user into Salesforce. There are 2 possible ways to achieve this. These methods work irrespective of whether your organisation has [Two-Factor Authentication](https://developer.salesforce.com/docs/atlas.en-us.identityImplGuide.meta/identityImplGuide/security_require_two-factor_authentication.htm) enabled or not, so that's a massive overhead taken care of.
##### The code samples shown read credentials from a [config.properties](https://docs.python.org/2/library/configparser.html) file. Feel free to adapt the input method to your setting
#### 1. username, password, [security_token](https://success.salesforce.com/answers?id=90630000000glADAAY)
```
from salesforce_bulkipy import SalesforceBulkipy
import ConfigParser
config = ConfigParser.RawConfigParser()
config.read('config.properties')
username = config.get('Section', 'username')
password = config.get('Section', 'password')
security_token = config.get('Section', 'security_token')
bulk = SalesforceBulkipy(username=username, password=password, security_token=security_token) #optional parameter: sandbox=True
# Authentication Successful!
```
#### 2. session_id, host
```
from salesforce_bulkipy import SalesforceBulkipy
import ConfigParser
config = ConfigParser.RawConfigParser()
config.read('config.properties')
session_id = config.get('Section', 'session_id')
host = config.get('Section', 'host')
bulk = SalesforceBulkipy(session_id=session_id, host=host) #optional parameter: sandbox=True
# Authentication Successful!
```
## Operations
The basic sequence for driving the Bulk API is:
1. Create a new job
2. Add one or more batches to the job
3. Wait for each batch to finish
4. Close the job
## Bulk Insert, Update, Delete
All Bulk upload operations work the same. You set the operation when you create the
job. Then you submit one or more documents that specify records with columns to
insert/update/delete. When deleting you should only submit the Id for each record.
For efficiency you should use the `post_bulk_batch` method to post each batch of
data. (Note that a batch can have a maximum 10,000 records and be 1GB in size.)
You pass a generator or iterator into this function and it will stream data via
POST to Salesforce. For help sending CSV formatted data you can use the
salesforce_bulk.CsvDictsAdapter class. It takes an iterator returning dictionaries
and returns an iterator which produces CSV data.
**Concurrency mode**: When creating the job, you can pass `concurrency=Serial` or `concurrency=Parallel` to set the
concurrency mode for the job.
## Bulk Insert Example
```
from salesforce_bulkipy import SalesforceBulkipy
from salesforce_bulkipy import CsvDictsAdapter
bulk = SalesforceBulkipy(username=username, password=password, security_token=security_token)
records_to_insert = [{}, {}]  # A list of custom object dicts
# Bulk Insert
job = bulk.create_insert_job("CustomObjectName", contentType='CSV')
csv_iter = CsvDictsAdapter(iter(records_to_insert))
batch = bulk.post_bulk_batch(job, csv_iter)
bulk.wait_for_batch(job, batch)
bulk.close_job(job)
```
## Bulk Query Example
```
from salesforce_bulkipy import SalesforceBulkipy
bulk = SalesforceBulkipy(username=username, password=password, security_token=security_token)
# Bulk Query
query = '' # SOQL Query
job = bulk.create_query_job("PushCrew_Account__c", contentType='CSV')
batch = bulk.query(job, query)
bulk.wait_for_batch(job, batch)
bulk.close_job(job)
# Result
results = bulk.get_batch_result_iter(job, batch, parse_csv=True)
```
## Credits and Contributions
This repository is a maintained fork of [heroku/salesforce-bulk](https://github.com/heroku/salesforce-bulk). The changes incorporated here are a result of a joint effort by [@lambacck](https://github.com/lambacck), [@Jeremydavisvt](https://github.com/Jeremydavisvt), [@alexhughson](https://github.com/alexhughson) and [@bholagabbar](https://github.com/bholagabbar). Thanks to [@heroku](https://github.com/heroku) for creating the original useful library.
Feel free to contribute by creating Issues and Pull Requests. We'll test and merge them.
import base64
import dateutil.parser
import pyarrow
from .constants import API_VERSION_V2
from .constants import QUERY_RESPONSE_KEY_DONE
from .constants import QUERY_RESPONSE_KEY_NEXT_BATCH_ID
from .constants import QUERY_RESPONSE_KEY_ARROW_STREAM
from .constants import QUERY_RESPONSE_KEY_METADATA
from .constants import QUERY_RESPONSE_KEY_METADATA_TYPE
from .constants import DATA_TYPE_TIMESTAMP
from .constants import ENCODING_ASCII
from .query_submitter import QuerySubmitter
class PandasUtils:
@staticmethod
def get_dataframe(connection, query):
"""
Executes the query and returns results as a Pandas dataframe
:param connection: SalesforceCDPConnection object
:param query: The query to be executed
:return: Query results as Pandas Dataframe
"""
arrow_stream_list = []
result = QuerySubmitter.execute(connection, query, API_VERSION_V2, True)
encoded_arrow_stream = result[QUERY_RESPONSE_KEY_ARROW_STREAM]
arrow_table = PandasUtils._get_pyarrow_table(encoded_arrow_stream)
PandasUtils._add_table_to_list(arrow_stream_list, arrow_table)
while result[QUERY_RESPONSE_KEY_DONE] is not True:
result = QuerySubmitter.get_next_batch(connection, result[QUERY_RESPONSE_KEY_NEXT_BATCH_ID], True)
encoded_arrow_stream = result[QUERY_RESPONSE_KEY_ARROW_STREAM]
arrow_table = PandasUtils._get_pyarrow_table(encoded_arrow_stream)
PandasUtils._add_table_to_list(arrow_stream_list, arrow_table)
if len(arrow_stream_list) > 0:
pandas_df = pyarrow.concat_tables(arrow_stream_list).to_pandas()
date_columns = PandasUtils._get_date_columns(result)
for date_column in date_columns:
pandas_df = pandas_df.apply(lambda row: PandasUtils._convert_to_date(row, date_column), axis=1)
return pandas_df
return None
@staticmethod
def _convert_to_date(row, column):
value = row[column]
if isinstance(value, str):
row[column] = dateutil.parser.parse(value)
else:
row[column] = None
return row
@staticmethod
def _get_date_columns(result):
metadata = result[QUERY_RESPONSE_KEY_METADATA]
date_columns = [x for x in metadata.keys() if PandasUtils._istimestamp(x, metadata)]
return date_columns
@staticmethod
def _istimestamp(key, metadata_list):
metadata_type = metadata_list[key][QUERY_RESPONSE_KEY_METADATA_TYPE]
if metadata_type is not None:
metadata_type = metadata_type.upper()
return metadata_type == DATA_TYPE_TIMESTAMP
@staticmethod
def _get_pyarrow_table(encoded_arrow_stream):
if encoded_arrow_stream is None:
return None
stream_bytes = encoded_arrow_stream.encode(ENCODING_ASCII)
decoded_bytes = base64.b64decode(stream_bytes)
return pyarrow.ipc.open_stream(decoded_bytes).read_all()
@staticmethod
def _add_table_to_list(arrow_stream_list, arrow_stream):
if arrow_stream is not None:
            arrow_stream_list.append(arrow_stream)
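The query API returns each Arrow IPC stream base64-encoded inside the JSON payload; `_get_pyarrow_table` reverses that before handing the bytes to `pyarrow`. A stdlib-only sketch of just the decode step (the payload bytes are illustrative):

```python
import base64

# Encode as the server would, then decode as _get_pyarrow_table does.
payload = base64.b64encode(b'arrow-ipc-bytes').decode('ascii')
decoded = base64.b64decode(payload.encode('ascii'))
print(decoded)  # -> b'arrow-ipc-bytes'
```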
import dateutil.parser
from .constants import QUERY_RESPONSE_KEY_DATA
from .constants import QUERY_RESPONSE_KEY_METADATA
from .constants import QUERY_RESPONSE_KEY_DONE
from .constants import QUERY_RESPONSE_KEY_NEXT_BATCH_ID
from .constants import DATA_TYPE_TIMESTAMP
from .constants import QUERY_RESPONSE_KEY_PLACE_IN_ORDER
from .parsed_query_result import QueryResult
class QueryResultParser:
@staticmethod
def parse_result(result):
"""
Parses the json response from queryV2 API
:param result: JSON response from queryV2 API
:return: ParsedQueryResult
"""
return QueryResultParser._parse_v2_result(result)
@staticmethod
def _parse_v2_result(result):
data = result[QUERY_RESPONSE_KEY_DATA]
metadata_dict = result[QUERY_RESPONSE_KEY_METADATA]
is_done = result[QUERY_RESPONSE_KEY_DONE]
next_batch_id = result.get(QUERY_RESPONSE_KEY_NEXT_BATCH_ID)
has_next = not is_done
sorted_metadata_items = QueryResultParser._sort_metadata_by_place_in_order(metadata_dict)
description = QueryResultParser._convert_metadata_list_to_description(sorted_metadata_items)
QueryResultParser._convert_timestamps(data, description)
return QueryResult(data, description, has_next, next_batch_id)
@staticmethod
def _convert_timestamps(data, description):
"""
TIMESTAMPS are coming as string in JSON. This function will update the string to datetime object
:param data: List of JSON results
:param description: Cursor description
:return: None
"""
for i in range(0, len(description)):
if description[i][1] == DATA_TYPE_TIMESTAMP:
for data_row in data:
if data_row[i] is not None and isinstance(data_row[i], str) and len(data_row[i]) > 0:
data_row[i] = dateutil.parser.parse(data_row[i])
@staticmethod
def _convert_metadata_item_to_description_item(metadata_item):
metadata_name = metadata_item[0]
type = metadata_item[1]['type']
return QueryResultParser._get_description_item(metadata_name, type)
@staticmethod
def _get_description_item(name, data_type):
"""
This will generate the description to be used with Cursor
:param name: Column Name
:param data_type: Column Type
:return: One tuple representing description for a column
"""
return (
name, # Column Name
data_type, # Column Type
None,
None,
None,
None,
None
)
@staticmethod
def _convert_metadata_list_to_description(sorted_metadata_list):
return [QueryResultParser._convert_metadata_item_to_description_item(metadata_item) for metadata_item in
sorted_metadata_list]
@staticmethod
def _sort_metadata_by_place_in_order(metadata_dict):
"""
This functions sorts the column metadata from queryV2 response based on the placeInOrder field.
:param metadata_dict: The metadata dict from JSON
:return: sorted column metadata as List
"""
metadataList = [metadataItem for metadataItem in metadata_dict.items()]
metadataList = sorted(metadataList, key=lambda item: item[1][QUERY_RESPONSE_KEY_PLACE_IN_ORDER])
        return metadataList
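`_sort_metadata_by_place_in_order` restores column order from the `placeInOrder` field, since JSON object keys carry no guaranteed ordering across producers. A sketch with illustrative metadata:

```python
# Columns arrive keyed by name; placeInOrder recovers their positional order.
metadata = {
    'name': {'type': 'VARCHAR', 'placeInOrder': 1},
    'id': {'type': 'VARCHAR', 'placeInOrder': 0},
}
ordered = sorted(metadata.items(), key=lambda item: item[1]['placeInOrder'])
print([column for column, _ in ordered])  # -> ['id', 'name']
```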
from datetime import date, time, datetime
from .exceptions import NotSupportedError, Error
from .query_result_parser import QueryResultParser
from .query_submitter import QuerySubmitter
class SalesforceCDPCursor:
"""
This class represents the cursor
"""
_TRANSLATION_TABLE = str.maketrans({"\\": r"\\",
"\n": r"\\n",
"\r": r"\\r",
"'": r"\'"})
def __init__(self, connection):
self.arraysize = 1
self.description = None
self.data = None
self.connection = connection
self.current_query = None
self.has_next = None
self.next_batch_id = None
self.closed = False
self.has_result = False
def execute(self, query, params=None):
"""
Executes the query and loads the first batch of results. Supports only read only queries
:param query: The input query.
:param params: The parameters to the query
:return: None
"""
self._check_cursor_closed()
self.current_query = self._resolve_query_with_params(query, params)
json_results = QuerySubmitter.execute(self.connection, self.current_query)
results = QueryResultParser.parse_result(json_results)
self.description = results.description
self.data = results.data
self.has_next = results.has_next
self.next_batch_id = results.next_batch_id
self.has_result = True
def fetchall(self):
"""
To be called after the execute function. This will return the entire results of the query executed.
:return: Returns the query results
"""
if not self.has_result:
raise Error('No results available to fetch')
while self.has_next is True:
self._check_cursor_closed()
json_results = QuerySubmitter.get_next_batch(self.connection, self.next_batch_id)
results = QueryResultParser.parse_result(json_results)
self.description = results.description
self.has_next = results.has_next
self.next_batch_id = results.next_batch_id
self.data = self.data + results.data
self._check_cursor_closed()
self.has_result = False
return self.data
def fetchone(self):
"""
To be called after the execute function. This will return next row from the query results.
:return: Returns the next row from query result. Returns None if no more data is present.
"""
if not self.has_result:
raise Error('No results available to fetch')
self._check_cursor_closed()
if self.data is not None and len(self.data) > 0:
next_row = self.data.pop(0)
return next_row
elif self.has_next is True:
json_results = QuerySubmitter.get_next_batch(self.connection, self.next_batch_id)
results = QueryResultParser.parse_result(json_results)
self.description = results.description
self.data = results.data
self.has_next = results.has_next
self.next_batch_id = results.next_batch_id
next_row = self.data.pop(0)
return next_row
else:
self.has_result = False
return None
def fetchmany(self, size=None):
"""
Returns size number of rows from result
:param size: If not provided cursor.arraysize will be used by default
:return: size number of rows from result
"""
if size is None:
size = self.arraysize
result = []
while size > 0:
next_row = self.fetchone()
size = size - 1
if next_row is None:
break
result.append(next_row)
return result
def close(self):
"""
Nothing to close
:return: None
"""
self.closed = True
def rollback(self):
"""
Nothing to rollback
:return: None
"""
pass
def commit(self):
"""
Nothing to commit
:return: None
"""
pass
def setinputsizes(self):
"""
Do nothing
:return:
"""
pass
def setoutputsize(self):
"""
Do nothing
:return:
"""
pass
def _check_cursor_closed(self):
if self.closed:
raise Error('Attempting operation while cursor is closed')
elif self.connection.closed:
raise Error('Attempting operation while connection is closed')
def _resolve_query_with_params(self, query, params):
params_count_from_query = query.count('?')
if params is None and params_count_from_query == 0:
return query
if self._is_iterable(params) and params_count_from_query == len(params):
for param in params:
query = self._replace_next_param(query, param)
elif params_count_from_query == 1 and params is not None:
query = self._replace_next_param(query, params)
else:
raise Exception('Parameter count not matching')
return query
def _replace_next_param(self, query, param):
if param is None:
query = query.replace('?', 'null', 1)
elif self._is_numeric(param):
query = query.replace('?', str(param), 1)
else:
if not isinstance(param, str):
param = str(param)
param = param.translate(self._TRANSLATION_TABLE)
query = query.replace('?', f"'{str(param)}'", 1)
return query
def _is_iterable(self, param):
try:
iter(param)
return True
except Exception:
pass
return False
def _is_numeric(self, param):
return isinstance(param, int) or \
isinstance(param, float)
def executemany(self, **kwargs):
raise NotSupportedError('executemany is not supported')
def Date(year, month, day):
"""Construct an object holding a date value."""
return date(year, month, day)
def Time(hour, minute=0, second=0, microsecond=0, tzinfo=None):
"""Construct an object holding a time value."""
return time(hour, minute, second, microsecond, tzinfo)
def Timestamp(year, month, day, hour=0, minute=0, second=0, microsecond=0,
tzinfo=None):
"""Construct an object holding a time stamp value."""
return datetime(year, month, day, hour, minute, second, microsecond,
tzinfo)
def DateFromTicks(ticks):
return date(*time.localtime(ticks)[:3])
def TimeFromTicks(ticks):
return time(*time.localtime(ticks)[3:6])
def TimestampFromTicks(ticks):
return datetime(*time.localtime(ticks)[:6])
class Binary(bytes):
"""Construct an object capable of holding a binary (long) string value."""
class DBAPITypeObject:
def __init__(self, *values):
self.values = [v.lower() for v in values]
def __eq__(self, other):
return other.lower() in self.values
STRING = DBAPITypeObject("VARCHAR", "CHAR", "VARBINARY", "JSON")
NUMBER = DBAPITypeObject(
"BOOLEAN", "TINYINT", "SMALLINT", "INTEGER", "BIGINT", "DOUBLE", "DECIMAL"
)
DATETIME = DBAPITypeObject(
"DATE",
"TIME",
"DATETIME",
"TIMESTAMP"
)
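`_resolve_query_with_params` substitutes `?` placeholders and escapes string parameters through `_TRANSLATION_TABLE`. A standalone sketch of that substitution for a single string parameter:

```python
# Same translation table as the cursor; escapes backslash, newline, CR, and quote.
TRANSLATION_TABLE = str.maketrans({"\\": r"\\", "\n": r"\\n", "\r": r"\\r", "'": r"\'"})

param = "O'Brien"
escaped = param.translate(TRANSLATION_TABLE)
query = "SELECT name FROM contacts WHERE name = ?".replace('?', f"'{escaped}'", 1)
print(query)  # -> SELECT name FROM contacts WHERE name = 'O\'Brien'
```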
<p align="center">
<br>
<img src="assets/logo.png" width="500"/>
<br>
<p>
<div align="center">
<a href="https://opensource.org/license/apache-2-0/">
<img alt="license" src="https://img.shields.io/badge/License-Apache%202.0-green.svg"/>
</a>
<a href="https://www.python.org/downloads/release/python-380/">
<img alt="python" src="https://img.shields.io/badge/python-3.8+-yellow.svg"/>
</a>
<a href="https://pypi.org/project/salesforce-codetf/">
<img alt="downloads" src="https://static.pepy.tech/badge/salesforce-codetf"/>
</a>
<a href="https://arxiv.org/pdf/2306.00029.pdf">Technical Report</a>,
<a href="https://opensource.salesforce.com/CodeTF/latest/index.html">Documentation</a>,
<a href="https://github.com/salesforce/CodeTF/tree/main/test_inference">Examples</a>,
# CodeTF - A One-stop Transformer Library for State-of-the-art Code LLM
</div>
## Table of Contents
- [Introduction](#introduction)
- [Installation](#installation-guide)
- [Getting Started](#getting-started)
- [Inferencing Pipeline](#inferencing-pipeline)
- [Model Zoo](#model-zoo)
- [Fine-Tuning Your Own Model](#fine-tuning-pipeline)
- [Evaluate On Well-Known Benchmarks](#evaluate-on-well-known-benchmarks)
- [Utilities to Manipulate Source Code Based on AST](#code-utilities)
- [AST Parser in Multiple Languages](#ast-parser-in-multiple-languages)
- [Extract Code Attributes](#extract-code-attributes)
- [Remove Comments](#remove-comments)
- [Ethical and Responsible Use](#ethical-and-responsible-use)
- [License](#license)
## Introduction
CodeTF is a one-stop Python transformer-based library for ***code large language models (Code LLMs)*** and ***code intelligence***, providing a seamless interface for training and inference on code intelligence tasks such as code summarization, translation, and code generation. It aims to facilitate easy integration of SOTA Code LLMs into real-world applications.
In addition to the core LLMs's features for code, CodeTF offers utilities for code manipulation across various languages, including easy extraction of code attributes. Using tree-sitter as its core AST parser, it enables parsing of attributes such as function names, comments, and variable names. Pre-built libraries for numerous languages are provided, eliminating the need for complicated parser setup. CodeTF thus ensures a user-friendly and accessible environment for code intelligence tasks.
The current version of the library offers:
- **Fast Model Serving**: We support an easy-to-use interface for rapid inferencing with **pre-quantized models** (int8, int16, float16). CodeTF handles all aspects of device management, so users do not have to worry about that aspect. If your model is large, we offer advanced features such as weight sharding across GPUs to serve the models more quickly.
- **Fine-Tuning Your Own Models**: We provide an API for quickly fine-tuning your own LLMs for code using SOTA techniques for **parameter-efficient fine-tuning** (HuggingFace PEFT) on distributed environments.
- **Supported Tasks**: nl2code, code summarization, code completion, code translation, code refinement, clone detection, defect prediction.
- **Datasets+**: We have preprocessed well-known benchmarks (**Human-Eval, MBPP, CodeXGLUE, APPS, etc.**) and offer an easy-to-load feature for these datasets.
- **Model Evaluator**: We provide an interface to evaluate models on well-known benchmarks (e.g., Human-Eval) using popular metrics (e.g., pass@k) with minimal effort (**~15 LOCs**).
- **Pretrained Models**: We supply pretrained checkpoints of state-of-the-art foundational language models of code (CodeBERT, CodeT5, CodeGen, CodeT5+, Incoder, StarCoder, etc.).
- **Fine-Tuned Models**: We furnish fine-tuned checkpoints for 8+ downstream tasks.
- **Utility to Manipulate Source Code**: We provide utilities to easily manipulate source code, such as user-friendly AST parsers (based on tree-sitter) in **15+ programming languages**, to extract important code features such as function names, identifiers, etc.
The following table shows the supported models with sizes and the tasks that the models support. This is a continuing effort and we are working on further growing the list.
| Model | Size | Tasks |
|--------------|-------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| CodeT5 | Base, Base-multi-sum, Base-translate-cs, Base-translate-java, Base-sum, Base-clone, Base-defect | Pretrained, NL to Code, Refine, Translation (CS to Java, Java to CS), Summarization (Python, Go, PHP, JavaScript, Java, Ruby), Clone detection, Defect prediction |
| CodeT5+      | Plus-instruct-16B, Plus-16B, Plus-6B, Plus-2B, Plus-770M-python, Plus-770M, Plus-220M | Pretrained, NL to Code, Refine, Defect prediction |
| CodeGen | Mono: 350M, 2B, 6B, 1B, 3.7B, 7B, 16B<br>Multi: 350M, 2B, 6B<br>NL: 350M, 2B | Pretrained |
| StarCoder | 15.5B | Pretrained |
| SantaCoder | 1.1B | Pretrained |
| GPT-NeoX | 20B | Pretrained |
| GPT-Neo | 1.3B | Pretrained |
| GPT-J | 6B | Pretrained |
| Incoder | 6B | Pretrained |
| CodeParrot | Small-python (110M), Small-multi(110M), 1.5B | Pretrained |
| CodeBERT | CodeBERT-base, UnixCoder-base, CodeBERTa-small | Pretrained |
## Installation Guide
1. (Optional) Creating conda environment
```bash
conda create -n codetf python=3.8
conda activate codetf
```
2. Install from [PyPI](https://pypi.org/project/salesforce-codetf/):
```bash
pip install salesforce-codetf==1.0.1.1
```
3. Alternatively, build CodeTF from source:
```bash
git clone https://github.com/salesforce/CodeTF.git
cd CodeTF
pip install -e .
```
Additionally, to make sure the quantization feature works well, also install these dependencies:
```bash
pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/peft.git
pip install -q -U git+https://github.com/huggingface/accelerate.git
```
For some models, such as [StarCoder](https://github.com/bigcode-project/starcoder), you must be logged in to Hugging Face. Please obtain a Hugging Face token and log in:
```bash
huggingface-cli login
```
## Getting Started
### Inferencing Pipeline
Getting started with CodeTF is simple and quick with our model loading pipeline function ``load_model_pipeline()``. Here's an example showing how to load a CodeT5+ model and perform inference on a code generation task:
```python
from codetf.models import load_model_pipeline
code_generation_model = load_model_pipeline(model_name="codet5", task="pretrained",
model_type="plus-770M-python", is_eval=True,
load_in_8bit=True, load_in_4bit=False, weight_sharding=False)
result = code_generation_model.predict(["def print_hello_world():"])
print(result)
```
There are a few notable arguments that need to be considered:
- ``model_name``: the name of the model; currently supports ``codet5`` and ``causal-lm``.
- ``model_type``: type of model for each model name, e.g. ``base``, ``codegen-350M-mono``, ``j-6B``, etc.
- ``load_in_8bit`` and ``load_in_4bit``: inherit the dynamic quantization feature from [Huggingface Quantization](https://huggingface.co/docs/transformers/main/main_classes/quantization).
- ``weight_sharding``: an advanced feature that leverages [HuggingFace Sharded Checkpoint](https://huggingface.co/docs/accelerate/v0.19.0/en/package_reference/big_modeling#accelerate.load_checkpoint_and_dispatch) to split a large model into smaller shards across different GPUs. Please consider using this if you are dealing with large models.
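Note that ``load_in_8bit`` and ``load_in_4bit`` are mutually exclusive; CodeTF raises a ``ValueError`` when both are set. A minimal sketch of that validation rule (illustrative only; the function name is ours, not the library's):

```python
def validate_quantization_flags(load_in_8bit: bool, load_in_4bit: bool) -> None:
    # At most one quantization mode may be requested at a time.
    if load_in_8bit and load_in_4bit:
        raise ValueError(
            "Only one of load_in_8bit or load_in_4bit can be True. Please choose one."
        )

validate_quantization_flags(load_in_8bit=True, load_in_4bit=False)  # fine
```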
### Model Zoo
You might want to view all of the supported models. To do this, you can print ``model_zoo``:
```python
from codetf.models import model_zoo
print(model_zoo)
# ============================================================================================================
# Architectures Types Tasks
# ============================================================================================================
# causallm codegen-350M-mono pretrained
# codegen-350M-multi pretrained
# codegen-350M-nl pretrained
# codegen-2B-mono pretrained
# codegen-2B-multi pretrained
# codegen-2B-nl pretrained
# codegen-6B-mono pretrained
# codegen-6B-nl pretrained
# codegen-6B-multi pretrained
# starcoder-15.5B pretrained
# gpt-neox-20B pretrained
# gpt-neo-1.3B pretrained
# gpt-j-6B pretrained
# incoder-6B pretrained
# codegen2-1B pretrained
# codegen2-3.7B pretrained
# codegen2-7B pretrained
# codegen2-16B pretrained
# codet5 base-multi-sum pretrained
# base nl2code
# base refine
# base translate_cs_java
# base translate_java_cs
# base sum_python
# base sum_go
# base sum_php
# base sum_javascript
# base sum_java
# base sum_ruby
# base clone
# base defect
# plus-instruct-16B pretrained
# plus-16B pretrained
# plus-6B pretrained
# plus-2B pretrained
# plus-770M-python pretrained
# plus-770M pretrained
# plus-220M pretrained
# bert codebert-base pretrained
# unixcoder-base pretrained
# codeberta-small pretrained
```
### Fine-Tuning Pipeline
Want to train a custom LLM for code? We've got you covered. The example below uses the ``CodeT5Seq2SeqTrainer``, together with our dataset utilities, to fine-tune a [CodeT5+ pretrained model](https://github.com/salesforce/CodeT5) on the CodeXGLUE dataset:
```python
from codetf.trainer.codet5_trainer import CodeT5Seq2SeqTrainer
from codetf.data_utility.codexglue_dataset import CodeXGLUEDataset
from codetf.models import load_model_pipeline
from codetf.performance.evaluation_metric import EvaluationMetric
from codetf.data_utility.base_dataset import CustomDataset
model_class = load_model_pipeline(model_name="codet5", task="pretrained",
model_type="plus-220M", is_eval=True)
dataset = CodeXGLUEDataset(tokenizer=model_class.get_tokenizer())
train, test, validation = dataset.load(subset="text-to-code")
train_dataset= CustomDataset(train[0], train[1])
test_dataset= CustomDataset(test[0], test[1])
val_dataset= CustomDataset(validation[0], validation[1])
evaluator = EvaluationMetric(metric="bleu", tokenizer=model_class.tokenizer)
# peft can be in ["lora", "prefixtuning"]
trainer = CodeT5Seq2SeqTrainer(train_dataset=train_dataset,
validation_dataset=val_dataset,
peft="lora",
pretrained_model_or_path=model_class.get_model(),
tokenizer=model_class.tokenizer)
trainer.train()
```
Compared to [this script from StarCoder](https://github.com/bigcode-project/starcoder/blob/main/finetune/finetune.py), which requires ~300 LOCs to fine-tune a model, we need only ~14 LOCs to do the same!
### Evaluate on Well-Known Benchmarks
Planning to reproduce the results of well-known benchmarks like ``Human-Eval``, but struggling to match the numbers reported in the original papers? Worried about the complicated evaluation process? Don't worry, we've got you covered with an intuitive, easy-to-use interface. Here's a sample snippet demonstrating how to evaluate on Human-Eval using pass@k (k=[1,10,100]) as the metric:
```python
import os
from codetf.models import load_model_pipeline
from codetf.data_utility.human_eval_dataset import HumanEvalDataset
from codetf.performance.model_evaluator import ModelEvaluator
from torch.utils.data import TensorDataset
os.environ["HF_ALLOW_CODE_EVAL"] = "1"
os.environ["TOKENIZERS_PARALLELISM"] = "true"
model_class = load_model_pipeline(model_name="causal-lm", task="pretrained",
model_type="codegen-350M-mono", is_eval=True,
load_in_8bit=True, weight_sharding=False)
dataset = HumanEvalDataset(tokenizer=model_class.get_tokenizer())
prompt_token_ids, prompt_attention_masks, references= dataset.load()
problems = TensorDataset(prompt_token_ids, prompt_attention_masks)
evaluator = ModelEvaluator(model_class)
avg_pass_at_k = evaluator.evaluate_pass_k(problems=problems, unit_tests=references)
print("Pass@k: ", avg_pass_at_k)
```
Compared to [this script from HuggingFace](https://github.com/huggingface/transformers/blob/main/examples/research_projects/codeparrot/scripts/human_eval.py), which requires ~230 LOCs to evaluate on pass@k, we need only ~14 LOCs to do the same!
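For intuition, pass@k is usually computed with the unbiased estimator popularized by the Codex paper: with n generated samples per problem, c of which pass the unit tests, pass@k = 1 - C(n-c, k) / C(n, k). A standard-library sketch of the formula (CodeTF itself delegates this computation to HuggingFace's ``code_eval`` metric):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn from n total (c of them correct) passes."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=200, c=50, k=1))  # 0.25
```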
### Loading Preprocessed Data
CodeTF provides the Dataset utility for several well-known datasets, such as CodeXGLUE, Human Eval, MBPP, and APPS. The following is an example of how to load the CodeXGLUE dataset:
```python
from codetf.data_utility.codexglue_dataset import CodeXGLUEDataset
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base", use_fast=True)
dataset = CodeXGLUEDataset(tokenizer=tokenizer)
train, test, validation = dataset.load(subset="text-to-code")
```
The ``train``, ``test``, and ``validation`` splits are returned as [PyTorch tensors](https://pytorch.org/docs/stable/tensors.html), giving users the flexibility to wrap them in higher-level abstractions for their own use cases.
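For example, a returned split can be wrapped in a standard PyTorch ``TensorDataset``/``DataLoader`` for batched training. A generic sketch with dummy tensors standing in for the real splits:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Dummy stand-ins for the (nl, code) tensor pairs returned by dataset.load()
nl_tensors = torch.randint(0, 32000, (8, 16))    # 8 examples, 16 token ids each
code_tensors = torch.randint(0, 32000, (8, 16))

train_dataset = TensorDataset(nl_tensors, code_tensors)
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True)

for input_ids, labels in train_loader:
    print(input_ids.shape)  # torch.Size([4, 16])
```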
### Code Utilities
In addition to providing utilities for LLMs, CodeTF also equips users with tools for effective source code manipulation. This is crucial in the code intelligence pipeline, where operations like parsing code into an Abstract Syntax Tree (AST) or extracting code attributes (such as function names or identifiers) are often required (CodeT5). These tasks can be challenging to execute, especially when setup and multi-language support is needed. Our code utility interface offers a streamlined solution, facilitating easy parsing and attribute extraction from code across 15+ languages.
#### AST Parser in Multiple Languages
CodeTF includes AST parsers compatible with numerous programming languages. Here's an example showcasing the parsing of Apex code into an AST:
```python
from codetf.code_utility.apex.apex_code_utility import ApexCodeUtility
apex_code_utility = ApexCodeUtility()
sample_code = """
public class SampleClass {
public Integer myNumber;
    /**
* This is a method that returns the value of myNumber.
* @return An integer value
*/
public Integer getMyNumber() {
// Return the current value of myNumber
return this.myNumber;
}
}
"""
ast = apex_code_utility.parse(sample_code)
# This will print the tree-sitter AST object
print(ast)
```
Then you can traverse the tree using the interface from [py-tree-sitter](https://github.com/tree-sitter/py-tree-sitter)
```python
root_node = ast.root_node
assert root_node.type == 'module'
assert root_node.start_point == (1, 0)
assert root_node.end_point == (3, 13)
```
There are also utilities for other languages, such as Java and Python, that can perform the same operations.
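Under the hood, these utilities walk the tree-sitter tree; the traversal itself is the standard depth-first pattern. A library-agnostic sketch over a toy node structure (this is not the tree-sitter API, just the idea):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    type: str
    children: list = field(default_factory=list)

def walk(node):
    """Depth-first, pre-order traversal yielding every node's type."""
    stack = [node]
    while stack:
        current = stack.pop()
        yield current.type
        stack.extend(reversed(current.children))  # preserve left-to-right order

tree = Node("class_declaration", [
    Node("identifier"),
    Node("method_declaration", [Node("identifier")]),
])
print(list(walk(tree)))  # ['class_declaration', 'identifier', 'method_declaration', 'identifier']
```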
#### Extract Code Attributes
CodeTF provides an interface to easily extract code attributes. The following sample extracts attributes from the Apex snippet above:
```python
code_attributes = apex_code_utility.get_code_attributes(sample_code)
print(code_attributes)
```
This will print:
```
{'class_names': ['AccountWithContacts'], 'method_names': ['getAccountsWithContacts'], 'comments': [], 'variable_names': ['acc', 'accounts', 'con', 'System', 'debug', 'Contacts', 'Id', 'Name', 'Account', 'Email', 'LastName']}
```
#### Remove Comments
There are other existing utilities, such as removing comments from code:
```python
new_code_snippet = apex_code_utility.remove_comments(sample_code)
print(new_code_snippet)
```
This will print:
```java
public class SampleClass {
public Integer myNumber;
public Integer getMyNumber() {
return this.myNumber;
}
}
```
Note that this is an ongoing effort; we will add features to extract more complex code attributes in the future. More examples can be found [here](https://github.com/salesforce/CodeTF/tree/main/test_code_utilities).
## More Examples
You can find more examples for each use case:
- [Fine-tuning](https://github.com/salesforce/CodeTF/tree/main/test_trainer)
- [Inferencing](https://github.com/salesforce/CodeTF/tree/main/test_inference)
- [Model Evaluate](https://github.com/salesforce/CodeTF/tree/main/test_evaluation)
- [Code Utility](https://github.com/salesforce/CodeTF/tree/main/test_code_utilities)
## Notes
- CodeTF is designed to complement and enhance the capabilities of [HuggingFace Transformers](https://huggingface.co/docs/transformers/index), rather than replace it. It serves as a specialized layer specifically tailored for code intelligence tasks, such as fine-tuning language models with code-specific features and evaluating on well-known code intelligence benchmarks. If users require more customization, they are encouraged to write their own training code from scratch.
- CodeTF leverages the powerful functionality provided by [Accelerate](https://github.com/huggingface/accelerate) for both inference and training. With Accelerate, users do not need to manually manage GPUs or CPU devices for most operations, allowing for a streamlined and efficient workflow.
## Ethical and Responsible Use
CodeTF, while powerful, does not guarantee infallible code intelligence capabilities. Users may encounter inaccuracies or biases, possibly leading to misinterpretations or undesired behaviors. Risks include the generation of insecure code, propagation of poor coding practices, or inadvertent revelation of sensitive data. We strongly advise users to examine the pretrained models and system before practical adoption. CodeTF facilitates effective code analysis, prediction, and debugging, promoting reproducible research and development. We encourage its responsible use for enhancing software quality and developer productivity.
However, misuse can lead to unethical outcomes such as unauthorized code manipulation, privacy breaches, or insecure coding practices. Users should familiarize themselves with guidelines for responsible AI before using CodeTF. Our commitment is to continually refine the library by identifying and mitigating potential biases and inappropriate behaviors. Users should review the models and system before practical implementation, and contribute towards refining the library to ensure ethical usage.
## Technical Report and Citing CodeTF
You can find more details in our [technical report](https://arxiv.org/abs/2306.00029).
If you're using CodeTF in your research or applications, please cite using this BibTeX:
```bibtex
@misc{nghi2023codetf,
  title={CodeTF: A Transformer-based Library for CodeLLM \& Code Intelligence},
  author={Nghi D. Q. Bui and Henry Le and Yue Wang and Akhilesh Deepak Gotmare and Junnan Li and Steven Hoi},
  year={2023},
  eprint={2306.00029},
  archivePrefix={arXiv},
  primaryClass={cs.SE}
}
```
## Contact us
If you have any questions, comments or suggestions, please do not hesitate to contact us at codetf@salesforce.com.
## License
[Apache License Version 2.0](LICENSE.txt)
import sys
from pathlib import Path
sys.path.append(str(Path(".").absolute().parent))
from transformers import AutoTokenizer
from codetf.models.base_model import BaseModel
from transformers import AutoModelForSeq2SeqLM, AutoConfig
from codetf.common.registry import registry
from accelerate import Accelerator
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import hf_hub_download
@registry.register_model("codet5")
class Seq2SeqModel(BaseModel):
MODEL_DICT = "configs/inference/codet5.yaml"
def __init__(self, model, model_config, tokenizer):
super().__init__()
self.model = model
self.tokenizer = tokenizer
self.max_source_length = model_config["max_source_length"]
self.max_prediction_length = model_config["max_prediction_length"]
self.beam_size = model_config["beam_size"]
@classmethod
def init_tokenizer(cls, model):
return AutoTokenizer.from_pretrained(model)
@classmethod
def load_model_from_config(model_class, model_config, load_in_8bit=False, load_in_4bit=False, weight_sharding=False):
checkpoint = model_config["huggingface_url"]
if load_in_8bit and load_in_4bit:
raise ValueError("Only one of load_in_8bit or load_in_4bit can be True. Please choose one.")
        # This "device" is for the CodeT5+ case and will be removed in the future
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if weight_sharding:
try:
                # Try to download the single-file weights first
                weights_location = hf_hub_download(checkpoint, "pytorch_model.bin")
except Exception:
try:
                    # If that fails, fall back to the sharded-checkpoint index file
                    weights_location = hf_hub_download(checkpoint, "pytorch_model.bin.index.json")
except Exception as e:
# If both fail, raise an error
raise Exception(f"Failed to download weights: {str(e)}")
config = AutoConfig.from_pretrained(checkpoint)
with init_empty_weights():
model = AutoModelForSeq2SeqLM.from_config(config)
model.tie_weights()
model = load_checkpoint_and_dispatch(
model, weights_location, model_config["device_map"],
no_split_module_classes=["GPTJBlock"]
)
else:
if load_in_8bit:
if model_config["device_map"]:
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
load_in_8bit=load_in_8bit,
low_cpu_mem_usage=True,
device_map="auto", trust_remote_code=model_config["trust_remote_code"])
else:
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
load_in_8bit=load_in_8bit,
low_cpu_mem_usage=True,
trust_remote_code=model_config["trust_remote_code"])
elif load_in_4bit:
if model_config["device_map"]:
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
load_in_4bit=load_in_4bit,
low_cpu_mem_usage=True,
device_map="auto", trust_remote_code=model_config["trust_remote_code"])
else:
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
load_in_4bit=load_in_4bit,
low_cpu_mem_usage=True,
trust_remote_code=model_config["trust_remote_code"])
            else:
                if model_config["device_map"]:
                    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
                                                        low_cpu_mem_usage=True,
                                                        device_map=model_config["device_map"], trust_remote_code=model_config["trust_remote_code"])
                else:
                    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
                                                        low_cpu_mem_usage=True,
                                                        trust_remote_code=model_config["trust_remote_code"]).to(device)
tokenizer = model_class.init_tokenizer(model_config.get("tokenizer_url"))
return model_class(
model=model,
model_config=model_config,
tokenizer=tokenizer
)
def forward(self, sources, max_length=512, beam_size=5):
encoding = self.tokenizer(sources, return_tensors='pt').to(self.model.device)
encoding['decoder_input_ids'] = encoding['input_ids'].clone()
generated_ids = self.model.generate(**encoding,
max_length=max_length,
num_beams=beam_size)
predictions = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
return predictions
def predict(self, sources, max_length=512, beam_size=5):
input_for_net = [' '.join(source.strip().split()).replace('\n', ' ') for source in sources]
output = self.forward(input_for_net, max_length, beam_size)
        return output
import sys
from pathlib import Path
sys.path.append(str(Path(".").absolute().parent))
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig
from codetf.models.base_model import BaseModel
from codetf.common.registry import registry
from collections import defaultdict
from tqdm import tqdm
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import hf_hub_download
@registry.register_model("causallm")
class CausalLMModel(BaseModel):
MODEL_DICT = "configs/inference/causal_lm.yaml"
def __init__(self, model, model_config, tokenizer):
super().__init__()
self.model = model
self.tokenizer = tokenizer
self.max_prediction_length = model_config["max_prediction_length"]
@classmethod
def init_tokenizer(cls, model):
tokenizer = AutoTokenizer.from_pretrained(model)
return tokenizer
@classmethod
def load_model_from_config(model_class, model_config, load_in_8bit=False, load_in_4bit=False, weight_sharding=False):
checkpoint = model_config["huggingface_url"]
if load_in_8bit and load_in_4bit:
raise ValueError("Only one of load_in_8bit or load_in_4bit can be True. Please choose one.")
if weight_sharding:
try:
                # Try to download the single-file weights first
                weights_location = hf_hub_download(checkpoint, "pytorch_model.bin")
except Exception:
try:
                    # If that fails, fall back to the sharded-checkpoint index file
                    weights_location = hf_hub_download(checkpoint, "pytorch_model.bin.index.json")
except Exception as e:
# If both fail, raise an error
raise Exception(f"Failed to download weights: {str(e)}")
config = AutoConfig.from_pretrained(checkpoint)
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config)
model.tie_weights()
model = load_checkpoint_and_dispatch(
model, weights_location, device_map="auto", no_split_module_classes=["GPTJBlock"]
)
else:
if load_in_8bit:
model = AutoModelForCausalLM.from_pretrained(checkpoint,
load_in_8bit=load_in_8bit,
low_cpu_mem_usage=True,
device_map="auto")
elif load_in_4bit:
model = AutoModelForCausalLM.from_pretrained(checkpoint,
load_in_4bit=load_in_4bit,
low_cpu_mem_usage=True,
device_map="auto")
else:
model = AutoModelForCausalLM.from_pretrained(checkpoint,
low_cpu_mem_usage=True,
device_map="auto")
tokenizer = model_class.init_tokenizer(model_config["tokenizer_url"])
return model_class(
model=model,
model_config=model_config,
tokenizer=tokenizer
)
def forward(self, sources, max_length=512):
encoding = self.tokenizer(sources, return_tensors='pt').to(self.device)
# input_ids = encoding.input_ids.to(self.device)
# attention_mask = encoding.attention_mask.to(self.device)
generated_ids = self.model.generate(**encoding,
max_length=max_length)
predictions = self.tokenizer.batch_decode(generated_ids, truncate_before_pattern=[r"\n\n^#", "^'''", "\n\n\n"])
return predictions
def predict(self, sources, max_length=512):
input_for_net = [' '.join(source.strip().split()).replace('\n', ' ') for source in sources]
        output = self.forward(input_for_net, max_length=max_length)
        return output
import sys
from pathlib import Path
sys.path.append(str(Path(".").absolute().parent))
from transformers import RobertaTokenizer, RobertaModel, RobertaConfig
from codetf.models.base_model import BaseModel
from codetf.common.registry import registry
from accelerate import Accelerator
from collections import defaultdict
from tqdm import tqdm
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import hf_hub_download
@registry.register_model("bert")
class BertModel(BaseModel):
MODEL_DICT = "configs/inference/bert.yaml"
def __init__(self, model, model_config, tokenizer):
super().__init__()
self.model = model
self.tokenizer = tokenizer
self.max_prediction_length = model_config["max_prediction_length"]
@classmethod
def init_tokenizer(cls, model):
tokenizer = RobertaTokenizer.from_pretrained(model)
return tokenizer
@classmethod
def load_model_from_config(model_class, model_config, load_in_8bit=False, load_in_4bit=False, weight_sharding=False):
checkpoint = model_config["huggingface_url"]
if load_in_8bit and load_in_4bit:
raise ValueError("Only one of load_in_8bit or load_in_4bit can be True. Please choose one.")
if weight_sharding:
try:
                # Try to download the single-file weights first
                weights_location = hf_hub_download(checkpoint, "pytorch_model.bin")
except Exception:
try:
                    # If that fails, fall back to the sharded-checkpoint index file
                    weights_location = hf_hub_download(checkpoint, "pytorch_model.bin.index.json")
except Exception as e:
# If both fail, raise an error
raise Exception(f"Failed to download weights: {str(e)}")
config = RobertaConfig.from_pretrained(checkpoint)
with init_empty_weights():
                model = RobertaModel(config)
model.tie_weights()
model = load_checkpoint_and_dispatch(
model, weights_location, device_map="auto", no_split_module_classes=["GPTJBlock"]
)
else:
if load_in_8bit:
model = RobertaModel.from_pretrained(checkpoint,
load_in_8bit=load_in_8bit,
device_map="auto")
elif load_in_4bit:
model = RobertaModel.from_pretrained(checkpoint,
load_in_4bit=load_in_4bit,
device_map="auto")
else:
model = RobertaModel.from_pretrained(checkpoint,
device_map="auto")
tokenizer = model_class.init_tokenizer(model_config["tokenizer_url"])
return model_class(
model=model,
model_config=model_config,
tokenizer=tokenizer
)
def forward(self, sources):
encoding = self.tokenizer(sources, return_tensors='pt')
input_ids = encoding.input_ids.to(self.device)
generated_ids = self.model(input_ids)
# predictions = self.tokenizer.batch_decode(generated_ids, truncate_before_pattern=[r"\n\n^#", "^'''", "\n\n\n"])
return generated_ids
def predict(self, sources):
input_for_net = [' '.join(source.strip().split()).replace('\n', ' ') for source in sources]
output = self.forward(input_for_net)
        return output
from codetf.data_utility.base_dataset import BaseDataset
from datasets import load_dataset
class CodeXGLUEDataset(BaseDataset):
def __init__(self, tokenizer, max_length=512):
super().__init__(tokenizer, max_length)
self.load_funcs = {
'text-to-code': self.load_codexglue_text_to_code_dataset,
'code-to-text': self.load_codexglue_code_to_text_dataset,
'java-to-csharp': self.load_codexglue_java_to_csharp_dataset,
'code-refinement': self.load_codexglue_code_refinement_dataset
}
def load(self, subset, *args, **kwargs):
if subset in self.load_funcs:
return self.load_funcs[subset](*args, **kwargs)
else:
raise ValueError(f'Invalid subset {subset}. Available subsets are: {list(self.load_funcs.keys())}')
def load_codexglue_text_to_code_dataset(self, *args, **kwargs):
dataset = self.dataset_config["codexglue_text_to_code"]
dataset = load_dataset(dataset)
train = dataset["train"]
train_nl_tensors, _ = self.process_data(train["nl"])
train_code_tensors, _ = self.process_data(train["code"])
test = dataset["test"]
test_nl_tensors, _ = self.process_data(test["nl"])
test_code_tensors, _ = self.process_data(test["code"])
validation = dataset["validation"]
validation_nl_tensors, _ = self.process_data(validation["nl"])
validation_code_tensors, _ = self.process_data(validation["code"])
return (train_nl_tensors, train_code_tensors), (test_nl_tensors, test_code_tensors), (validation_nl_tensors, validation_code_tensors)
def load_codexglue_code_to_text_dataset(self, config, *args, **kwargs):
dataset = self.dataset_config["codexglue_code_to_text"]
dataset = load_dataset(dataset, config)
train = dataset["train"]
train_code_tensors, _ = self.process_data(train["code"])
train_docstring_tensors, _ = self.process_data(train["docstring"])
test = dataset["test"]
test_code_tensors, _ = self.process_data(test["code"])
test_docstring_tensors, _ = self.process_data(test["docstring"])
validation = dataset["validation"]
validation_code_tensors, _ = self.process_data(validation["code"])
validation_docstring_tensors, _ = self.process_data(validation["docstring"])
return (train_code_tensors, train_docstring_tensors), (test_code_tensors, test_docstring_tensors), (validation_code_tensors, validation_docstring_tensors)
def load_codexglue_java_to_csharp_dataset(self, *args, **kwargs):
dataset = self.dataset_config["codexglue_java_to_csharp"]
dataset = load_dataset(dataset)
train = dataset["train"]
train_java_tensors, _ = self.process_data(train["java"])
train_csharp_tensors, _ = self.process_data(train["cs"])
test = dataset["test"]
test_java_tensors, _ = self.process_data(test["java"])
test_csharp_tensors, _ = self.process_data(test["cs"])
validation = dataset["validation"]
validation_java_tensors, _ = self.process_data(validation["java"])
validation_csharp_tensors, _ = self.process_data(validation["cs"])
return (train_java_tensors, train_csharp_tensors), (test_java_tensors, test_csharp_tensors), (validation_java_tensors, validation_csharp_tensors)
def load_codexglue_code_refinement_dataset(self, *args, **kwargs):
dataset = self.dataset_config["codexglue_code_refinement"]
dataset = load_dataset(dataset)
train = dataset["train"]
train_buggy_tensors, _ = self.process_data(train["buggy"])
train_fixed_tensors, _ = self.process_data(train["fixed"])
test = dataset["test"]
test_buggy_tensors, _ = self.process_data(test["buggy"])
test_fixed_tensors, _ = self.process_data(test["fixed"])
validation = dataset["validation"]
validation_buggy_tensors, _ = self.process_data(validation["buggy"])
validation_fixed_tensors, _ = self.process_data(validation["fixed"])
        return (train_buggy_tensors, train_fixed_tensors), (test_buggy_tensors, test_fixed_tensors), (validation_buggy_tensors, validation_fixed_tensors)
import sys
from pathlib import Path
sys.path.append(str(Path(".").absolute().parent))
from codetf.models import load_model_pipeline
from codetf.data_utility.util import EOF_STRINGS, EndOfFunctionCriteria, remove_last_block
from torch.utils.data.dataloader import DataLoader
from transformers import StoppingCriteriaList
import os
import torch
from accelerate import Accelerator
from collections import defaultdict
from tqdm import tqdm
from evaluate import load
import numpy as np
class ModelEvaluator:
def __init__(self, model_class, num_workers=5):
self.model_class = model_class
self.code_eval = load("code_eval")
self.accelerator = Accelerator()
def evaluate_pass_k(self, problems, unit_tests, batch_size=1, max_length=600,
top_p=0.95, k=[1,10,100],
num_return_sequences=200, sequences_per_chunk=10, num_workers=1):
# Load dataset
data_loader = DataLoader(problems, batch_size=batch_size)
data_loader = self.accelerator.prepare(data_loader)
# Initialize stopping criteria
gen_kwargs = {
"do_sample": True,
"top_p": top_p,
"stopping_criteria": StoppingCriteriaList([EndOfFunctionCriteria(0, EOF_STRINGS, self.model_class.get_tokenizer())]),
}
# Store generated tokens
gen_token_dict = defaultdict(list)
solutions = []
chunks = num_return_sequences // sequences_per_chunk
# Generate and evaluate solutions
dataloader_pbar = tqdm(enumerate(data_loader), total=len(data_loader))
for step, batch in dataloader_pbar:
prompt_ids, attention_masks = batch
solutions_per_chunk = []
for i in range(chunks):
with torch.no_grad():
gen_kwargs["stopping_criteria"][0].start_length = attention_masks[0].sum().item()
input_ids = prompt_ids[0, :attention_masks[0].sum().item()]
input_data = self.model_class.get_tokenizer().decode(input_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
batch_generated_ids = self.model_class.get_model().generate(
input_ids=input_ids.unsqueeze(0),
attention_mask=attention_masks[0, :attention_masks[0].sum().item()].unsqueeze(0),
max_length=max_length, num_return_sequences=sequences_per_chunk,
**gen_kwargs
)
batch_generated_ids = batch_generated_ids.cpu().numpy()
gen_codes = self.model_class.get_tokenizer().batch_decode(batch_generated_ids,
skip_special_tokens=True, clean_up_tokenization_spaces=True)
for item in gen_codes:
cleaned = remove_last_block(item)
solutions_per_chunk.append(cleaned)
solutions.append(solutions_per_chunk)
dataloader_pbar.set_description(f"Processing step {step+1}/{len(data_loader)}")
pass_at_k, _ = self.code_eval.compute(
references=unit_tests, predictions=solutions, k=k, num_workers=num_workers
)
        return pass_at_k

# End of file: /salesforce-codetf-1.0.2.2.tar.gz/salesforce-codetf-1.0.2.2/codetf/performance/model_evaluator.py
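The `code_eval.compute` call above reports pass@k using the unbiased estimator from the Codex paper. As a standalone sketch of that calculation (an illustration, not the `evaluate` library's internal code):

```python
from math import comb

def estimate_pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: 1 - C(n-c, k) / C(n, k).

    n: total generated samples per problem, c: samples that passed, k: budget.
    """
    if n - c < k:
        # Every size-k subset must contain at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With `num_return_sequences=200` samples per problem as configured above, pass@1, pass@10, and pass@100 can all be estimated from the same set of generations.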
import sacrebleu
from rouge_score import rouge_scorer
from nltk.translate.meteor_score import meteor_score
from sklearn.metrics import f1_score, precision_score, recall_score
from transformers import EvalPrediction
class EvaluationMetric:
def __init__(self, metric, tokenizer):
self.metric = metric
self.tokenizer = tokenizer
def compute_metrics(self, eval_pred: EvalPrediction):
predictions = self.tokenizer.batch_decode(eval_pred.predictions, skip_special_tokens=True)
references = self.tokenizer.batch_decode(eval_pred.label_ids, skip_special_tokens=True)
if self.metric == "bleu":
return {"bleu": sacrebleu.corpus_bleu(predictions, [references]).score}
elif self.metric == "f1":
return {"f1": self.compute_f1_score(predictions, references)}
elif self.metric == "precision":
return {"precision": self.compute_precision_score(predictions, references)}
elif self.metric == "recall":
return {"recall": self.compute_recall_score(predictions, references)}
elif self.metric == "rouge":
rouge_scores = self.compute_rouge(predictions, references)
return {f"rouge_{key}": value for key, value in rouge_scores.items()}
elif self.metric == "meteor":
return {"meteor": self.compute_meteor(predictions, references)}
else:
raise ValueError("Invalid metric specified")
    def compute_f1_score(self, hypotheses, references):
        # sklearn expects (y_true, y_pred); adapt this sample to your use case
        return f1_score(references, hypotheses, average='weighted')
    def compute_precision_score(self, hypotheses, references):
        # sklearn expects (y_true, y_pred); adapt this sample to your use case
        return precision_score(references, hypotheses, average='weighted')
    def compute_recall_score(self, hypotheses, references):
        # sklearn expects (y_true, y_pred); adapt this sample to your use case
        return recall_score(references, hypotheses, average='weighted')
def compute_rouge(self, hypotheses, references):
scorer = rouge_scorer.RougeScorer(['rouge1', 'rougeL'], use_stemmer=True)
scores = [scorer.score(ref, hyp) for ref, hyp in zip(references, hypotheses)]
rouge1 = sum([score['rouge1'].fmeasure for score in scores]) / len(scores)
rougeL = sum([score['rougeL'].fmeasure for score in scores]) / len(scores)
return {"rouge1": rouge1, "rougeL": rougeL}
    def compute_meteor(self, hypotheses, references):
        # Recent NLTK versions require pre-tokenized input for meteor_score
        scores = [meteor_score([ref.split()], hyp.split()) for ref, hyp in zip(references, hypotheses)]
        return sum(scores) / len(hypotheses)

# End of file: /salesforce-codetf-1.0.2.2.tar.gz/salesforce-codetf-1.0.2.2/codetf/performance/evaluation_metric.py
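The sklearn-based F1 above treats each decoded string as a single categorical label. For generated text, a token-overlap F1 (the style used in extractive QA evaluation) is often more informative; a minimal pure-Python sketch:

```python
from collections import Counter

def token_f1(hypothesis: str, reference: str) -> float:
    # Harmonic mean of token-level precision and recall, using
    # whitespace tokenization for simplicity.
    hyp, ref = hypothesis.split(), reference.split()
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(hyp)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

A corpus score is then just the mean of `token_f1` over all hypothesis/reference pairs.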
import json
import os
from urllib.parse import urlparse
from iopath.common.file_io import g_pathmgr
from codetf.common.registry import registry
def now():
from datetime import datetime
return datetime.now().strftime("%Y%m%d%H%M")[:-1]
def is_url(url_or_filename):
parsed = urlparse(url_or_filename)
return parsed.scheme in ("http", "https")
def get_cache_path(rel_path):
return os.path.expanduser(os.path.join(registry.get_path("cache_root"), rel_path))
def get_abs_path(rel_path):
return os.path.join(registry.get_path("library_root"), rel_path)
def load_json(filename):
with open(filename, "r") as f:
return json.load(f)
# ===== download utils =====
# Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
def makedir(dir_path):
"""
Create the directory if it does not exist.
"""
is_success = False
try:
if not g_pathmgr.exists(dir_path):
g_pathmgr.mkdirs(dir_path)
is_success = True
except BaseException:
print(f"Error creating directory: {dir_path}")
return is_success
import io
import os
import re
import urllib
import urllib.error
import urllib.request
from typing import Optional
from urllib.parse import urlparse
from torch.utils.model_zoo import tqdm
from torchvision.datasets.utils import (
check_integrity,
download_file_from_google_drive,
extract_archive,
)
def get_redirected_url(url: str):
"""
    Given a URL, returns the URL it redirects to, or the
    original URL if there is no redirection
    """
import requests
with requests.Session() as session:
with session.get(url, stream=True, allow_redirects=True) as response:
if response.history:
return response.url
else:
return url
def to_google_drive_download_url(view_url: str) -> str:
"""
Utility function to transform a view URL of google drive
to a download URL for google drive
Example input:
https://drive.google.com/file/d/137RyRjvTBkBiIfeYBNZBtViDHQ6_Ewsp/view
Example output:
https://drive.google.com/uc?export=download&id=137RyRjvTBkBiIfeYBNZBtViDHQ6_Ewsp
"""
splits = view_url.split("/")
assert splits[-1] == "view"
file_id = splits[-2]
return f"https://drive.google.com/uc?export=download&id={file_id}"
def download_google_drive_url(url: str, output_path: str, output_file_name: str):
"""
Download a file from google drive
    Downloading a URL from google drive requires confirmation when
    the size of the file is too big (google drive notifies that
    anti-virus checks cannot be performed on such files)
"""
import requests
with requests.Session() as session:
# First get the confirmation token and append it to the URL
with session.get(url, stream=True, allow_redirects=True) as response:
for k, v in response.cookies.items():
if k.startswith("download_warning"):
url = url + "&confirm=" + v
# Then download the content of the file
with session.get(url, stream=True, verify=True) as response:
makedir(output_path)
path = os.path.join(output_path, output_file_name)
total_size = int(response.headers.get("Content-length", 0))
with open(path, "wb") as file:
from tqdm import tqdm
with tqdm(total=total_size) as progress_bar:
for block in response.iter_content(
chunk_size=io.DEFAULT_BUFFER_SIZE
):
file.write(block)
progress_bar.update(len(block))
# The following methods are copied from torchvision, but we use g_pathmgr
# instead of `os` lib to support multiple distributed file systems.
def _get_google_drive_file_id(url: str) -> Optional[str]:
parts = urlparse(url)
if re.match(r"(drive|docs)[.]google[.]com", parts.netloc) is None:
return None
match = re.match(r"/file/d/(?P<id>[^/]*)", parts.path)
if match is None:
return None
return match.group("id")
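The Google Drive ID extraction above can be exercised with a standalone copy of the same logic (shown here purely for illustration):

```python
import re
from typing import Optional
from urllib.parse import urlparse

def google_drive_file_id(url: str) -> Optional[str]:
    # Standalone copy of _get_google_drive_file_id, for demonstration only.
    parts = urlparse(url)
    if re.match(r"(drive|docs)[.]google[.]com", parts.netloc) is None:
        return None
    match = re.match(r"/file/d/(?P<id>[^/]*)", parts.path)
    if match is None:
        return None
    return match.group("id")
```

Only the host and the `/file/d/<id>` path prefix matter; any trailing segments such as `/view` are ignored.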
def _urlretrieve(url: str, filename: str, chunk_size: int = 1024) -> None:
with open(filename, "wb") as fh:
with urllib.request.urlopen(
urllib.request.Request(url, headers={"User-Agent": "vissl"})
) as response:
with tqdm(total=response.length) as pbar:
for chunk in iter(lambda: response.read(chunk_size), ""):
if not chunk:
break
pbar.update(chunk_size)
fh.write(chunk)
def download_url(
url: str,
root: str,
filename: Optional[str] = None,
md5: Optional[str] = None,
) -> None:
"""Download a file from a url and place it in root.
Args:
url (str): URL to download file from
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under.
If None, use the basename of the URL.
md5 (str, optional): MD5 checksum of the download. If None, do not check
"""
root = os.path.expanduser(root)
if not filename:
filename = os.path.basename(url)
fpath = os.path.join(root, filename)
makedir(root)
# check if file is already present locally
if check_integrity(fpath, md5):
print("Using downloaded and verified file: " + fpath)
return
# expand redirect chain if needed
url = get_redirected_url(url)
# check if file is located on Google Drive
file_id = _get_google_drive_file_id(url)
if file_id is not None:
return download_file_from_google_drive(file_id, root, filename, md5)
# download the file
try:
print("Downloading " + url + " to " + fpath)
_urlretrieve(url, fpath)
except (urllib.error.URLError, IOError) as e: # type: ignore[attr-defined]
if url[:5] == "https":
url = url.replace("https:", "http:")
print(
"Failed download. Trying https -> http instead."
" Downloading " + url + " to " + fpath
)
_urlretrieve(url, fpath)
else:
raise e
# check integrity of downloaded file
if not check_integrity(fpath, md5):
raise RuntimeError("File not found or corrupted.")
def download_and_extract_archive(
url: str,
download_root: str,
extract_root: Optional[str] = None,
filename: Optional[str] = None,
md5: Optional[str] = None,
remove_finished: bool = False,
) -> None:
download_root = os.path.expanduser(download_root)
if extract_root is None:
extract_root = download_root
if not filename:
filename = os.path.basename(url)
download_url(url, download_root, filename, md5)
archive = os.path.join(download_root, filename)
print("Extracting {} to {}".format(archive, extract_root))
extract_archive(archive, extract_root, remove_finished)
# Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import json
import logging
import os
import pickle
import re
import shutil
import time
from urllib.parse import urlparse
import numpy as np
import pandas as pd
import yaml
from iopath.common.download import download
from iopath.common.file_io import file_lock, g_pathmgr
def cache_url(url: str, cache_dir: str) -> str:
"""
This implementation downloads the remote resource and caches it locally.
The resource will only be downloaded if not previously requested.
"""
parsed_url = urlparse(url)
dirname = os.path.join(cache_dir, os.path.dirname(parsed_url.path.lstrip("/")))
makedir(dirname)
filename = url.split("/")[-1]
cached = os.path.join(dirname, filename)
with file_lock(cached):
if not os.path.isfile(cached):
logging.info(f"Downloading {url} to {cached} ...")
cached = download(url, dirname, filename=filename)
logging.info(f"URL {url} cached in {cached}")
return cached
# TODO (prigoyal): convert this into RAII-style API
def create_file_symlink(file1, file2):
"""
    Create a symlink from file1 to file2.
    Useful during model checkpointing to symlink to the
    latest successful checkpoint.
"""
try:
if g_pathmgr.exists(file2):
g_pathmgr.rm(file2)
g_pathmgr.symlink(file1, file2)
except Exception as e:
logging.info(f"Could NOT create symlink. Error: {e}")
def save_file(data, filename, append_to_json=True, verbose=True):
"""
Common i/o utility to handle saving data to various file formats.
Supported:
.pkl, .pickle, .npy, .json
Specifically for .json, users have the option to either append (default)
or rewrite by passing in Boolean value to append_to_json.
"""
    if verbose:
        logging.info(f"Saving data to file: {filename}")
file_ext = os.path.splitext(filename)[1]
if file_ext in [".pkl", ".pickle"]:
with g_pathmgr.open(filename, "wb") as fopen:
pickle.dump(data, fopen, pickle.HIGHEST_PROTOCOL)
elif file_ext == ".npy":
with g_pathmgr.open(filename, "wb") as fopen:
np.save(fopen, data)
elif file_ext == ".json":
if append_to_json:
with g_pathmgr.open(filename, "a") as fopen:
fopen.write(json.dumps(data, sort_keys=True) + "\n")
fopen.flush()
else:
with g_pathmgr.open(filename, "w") as fopen:
fopen.write(json.dumps(data, sort_keys=True) + "\n")
fopen.flush()
elif file_ext == ".yaml":
with g_pathmgr.open(filename, "w") as fopen:
dump = yaml.dump(data)
fopen.write(dump)
fopen.flush()
else:
raise Exception(f"Saving {file_ext} is not supported yet")
    if verbose:
        logging.info(f"Saved data to file: {filename}")
def load_file(filename, mmap_mode=None, verbose=True, allow_pickle=False):
"""
Common i/o utility to handle loading data from various file formats.
Supported:
.pkl, .pickle, .npy, .json
For the npy files, we support reading the files in mmap_mode.
If the mmap_mode of reading is not successful, we load data without the
mmap_mode.
"""
    if verbose:
        logging.info(f"Loading data from file: {filename}")
file_ext = os.path.splitext(filename)[1]
if file_ext == ".txt":
with g_pathmgr.open(filename, "r") as fopen:
data = fopen.readlines()
elif file_ext in [".pkl", ".pickle"]:
with g_pathmgr.open(filename, "rb") as fopen:
data = pickle.load(fopen, encoding="latin1")
elif file_ext == ".npy":
if mmap_mode:
try:
with g_pathmgr.open(filename, "rb") as fopen:
data = np.load(
fopen,
allow_pickle=allow_pickle,
encoding="latin1",
mmap_mode=mmap_mode,
)
except ValueError as e:
                logging.info(
                    f"Could not mmap {filename}: {e}. Trying without g_pathmgr"
                )
data = np.load(
filename,
allow_pickle=allow_pickle,
encoding="latin1",
mmap_mode=mmap_mode,
)
logging.info("Successfully loaded without g_pathmgr")
except Exception:
logging.info("Could not mmap without g_pathmgr. Trying without mmap")
with g_pathmgr.open(filename, "rb") as fopen:
data = np.load(fopen, allow_pickle=allow_pickle, encoding="latin1")
else:
with g_pathmgr.open(filename, "rb") as fopen:
data = np.load(fopen, allow_pickle=allow_pickle, encoding="latin1")
elif file_ext == ".json":
with g_pathmgr.open(filename, "r") as fopen:
data = json.load(fopen)
elif file_ext == ".yaml":
with g_pathmgr.open(filename, "r") as fopen:
data = yaml.load(fopen, Loader=yaml.FullLoader)
elif file_ext == ".csv":
with g_pathmgr.open(filename, "r") as fopen:
data = pd.read_csv(fopen)
else:
raise Exception(f"Reading from {file_ext} is not supported yet")
return data
def abspath(resource_path: str):
"""
Make a path absolute, but take into account prefixes like
"http://" or "manifold://"
"""
regex = re.compile(r"^\w+://")
if regex.match(resource_path) is None:
return os.path.abspath(resource_path)
else:
return resource_path
def makedir(dir_path):
"""
Create the directory if it does not exist.
"""
is_success = False
try:
if not g_pathmgr.exists(dir_path):
g_pathmgr.mkdirs(dir_path)
is_success = True
except BaseException:
logging.info(f"Error creating directory: {dir_path}")
return is_success
def is_url(input_url):
"""
    Check if an input string is a URL. Looks for http(s):// and ignores case.
"""
is_url = re.match(r"^(?:http)s?://", input_url, re.IGNORECASE) is not None
return is_url
def cleanup_dir(dir):
"""
Utility for deleting a directory. Useful for cleaning the storage space
that contains various training artifacts like checkpoints, data etc.
"""
if os.path.exists(dir):
logging.info(f"Deleting directory: {dir}")
shutil.rmtree(dir)
logging.info(f"Deleted contents of directory: {dir}")
def get_file_size(filename):
"""
Given a file, get the size of file in MB
"""
size_in_mb = os.path.getsize(filename) / float(1024**2)
    return size_in_mb

# End of file: /salesforce-codetf-1.0.2.2.tar.gz/salesforce-codetf-1.0.2.2/codetf/common/utils.py
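The `save_file`/`load_file` pair above dispatches on file extension. The same pattern can be sketched with only the standard library (no `g_pathmgr`, YAML, or numpy support — a simplified illustration, not a replacement for the functions above):

```python
import json
import os
import pickle

def save_any(data, filename):
    # Dispatch on extension, mirroring save_file above.
    ext = os.path.splitext(filename)[1]
    if ext == ".json":
        with open(filename, "w") as f:
            json.dump(data, f, sort_keys=True)
    elif ext in (".pkl", ".pickle"):
        with open(filename, "wb") as f:
            pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
    else:
        raise ValueError(f"Saving {ext} is not supported")

def load_any(filename):
    # Dispatch on extension, mirroring load_file above.
    ext = os.path.splitext(filename)[1]
    if ext == ".json":
        with open(filename) as f:
            return json.load(f)
    if ext in (".pkl", ".pickle"):
        with open(filename, "rb") as f:
            return pickle.load(f)
    raise ValueError(f"Reading {ext} is not supported")
```

A round trip through either format returns the original data unchanged.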
from dataclasses import dataclass
from .data_api import DataAPI
__all__ = ["User", "Org", "Context"]
@dataclass(frozen=True, kw_only=True, slots=True)
class User:
"""
Information about the Salesforce user that invoked the function.
When deployed to a compute environment, the function runs as the Salesforce user with the
Cloud Integration User profile when making requests to the org, not as the actual Salesforce
user that invoked the function. See [Update Function Permissions](https://developer.salesforce.com/docs/platform/functions/guide/permissions.html)
for details.
When invoked locally, the function runs as the user who executed the CLI command, with their
credentials. See [Invoke Functions Locally](https://developer.salesforce.com/docs/platform/functions/guide/invoke-local.html).
""" # noqa: E501 pylint: disable=line-too-long
id: str
"""
The user's ID.
For example: `005JS000000H123`
"""
username: str
"""
The username of the user.
For example: `user@example.tld`
"""
on_behalf_of_user_id: str | None
"""
The ID of the user on whose behalf this user is operating.
For example: `005JS000000H456`
"""
@dataclass(frozen=True, kw_only=True, slots=True)
class Org:
"""Information about the Salesforce org and the user that invoked the function."""
id: str
"""
The Salesforce org ID.
For example: `00DJS0000000123ABC`
"""
base_url: str
"""
The URL of the current connection to the Salesforce org.
If [Salesforce Sites](https://help.salesforce.com/s/articleView?id=sf.sites_overview.htm&type=5)
is enabled in the org, then the URL follows their format. The URL could also include the
Salesforce instance, which can change if the org migrates to a new instance.
For example: `https://example-base-url.my.salesforce-sites.com`
"""
domain_url: str
"""
The canonical URL of the Salesforce org.
This URL never changes. Use this URL when making API calls to your org.
For example: `https://example-domain-url.my.salesforce.com`
"""
data_api: DataAPI
"""An initialized data API client instance for interacting with data in the org."""
user: User
"""The currently logged in user."""
@dataclass(frozen=True, kw_only=True, slots=True)
class Context:
"""Information about the Salesforce org that invoked the function."""
org: Org
"""Information about the Salesforce org and the user that invoked the function.""" | /salesforce_functions-0.6.0-py3-none-any.whl/salesforce_functions/context.py | 0.883085 | 0.609088 | context.py | pypi |
from dataclasses import dataclass
from datetime import datetime
from typing import Generic, TypeVar
__all__ = ["InvocationEvent"]
T = TypeVar("T")
@dataclass(frozen=True, kw_only=True, slots=True)
class InvocationEvent(Generic[T]):
"""
The metadata and data payload of the event that caused the function to be invoked.
The `InvocationEvent` type accepts a single generic parameter, which represents
the type of the input data payload present in the `data` field.
To improve IDE auto-completion and linting coverage, we recommend that you pass an
explicit type in the type definition that represents the data payload the function
expects to receive.
For example, if your function must accept JSON input like this:
```json
{
"fieldOne": "Hello World!",
"fieldTwo": 23
}
```
Then use these Python type annotations:
```python
EventPayloadType = dict[str, Any]
async def function(event: InvocationEvent[EventPayloadType], context: Context):
# ...
```
For more information, see the [Python typing documentation](https://docs.python.org/3/library/typing.html).
"""
id: str
"""
The unique identifier for this execution of the function.
For example: `00DJS0000000123ABC-d75b3b6ece5011dcabbed4-3c6f7179`
"""
type: str
"""
The type of this invocation event.
For example: `com.salesforce.function.invoke.sync`
"""
source: str
"""
A URI which identifies the context in which an event happened.
This URI often includes information such as the type of the event source, the
org publishing the event, or the process that produced the event.
For example: `urn:event:from:salesforce/JS/56.0/00DJS0000000123ABC/apex/ExampleClass:example_function():7`
"""
data: T
"""
The input data payload of the event.
The type of this field is determined from the generic type parameter passed
to `InvocationEvent`.
For example, `data` will be of type `dict[str, Any]` if the invocation event type is defined as:
```python
InvocationEvent[dict[str, Any]]
```
"""
time: datetime | None
"""
The timestamp of when the occurrence happened.
If the time of the occurrence can't be determined, then this attribute
may be set to some other time (such as the current time).
""" | /salesforce_functions-0.6.0-py3-none-any.whl/salesforce_functions/invocation_event.py | 0.933264 | 0.817829 | invocation_event.py | pypi |
from ._requests import (
CreateRecordRestApiRequest,
DeleteRecordRestApiRequest,
RestApiRequest,
UpdateRecordRestApiRequest,
)
from .record import Record
from .reference_id import ReferenceId
__all__ = ["UnitOfWork"]
class UnitOfWork:
"""
Represents a `UnitOfWork`.
A `UnitOfWork` encapsulates a set of one or more Salesforce operations that must be
performed as a single atomic operation. Single atomic operations reduce the number of
requests back to the org, and are more efficient when working with larger data volumes.
First, register the create, update, or delete operations that make up the `UnitOfWork`
using their corresponding methods, such as `register_create`. Then submit the `UnitOfWork`
with the `commit_unit_of_work` method of `DataAPI`.
For example:
```python
from salesforce_functions import Record, UnitOfWork
# Create a unit of work, against which multiple operations can be registered.
unit_of_work = UnitOfWork()
# Register a new Account for creation
account_reference_id = unit_of_work.register_create(
Record(
type="Account",
fields={
"Name": "Example Account",
},
)
)
# Register a new Contact for creation, that references the account above.
unit_of_work.register_create(
Record(
type="Contact",
fields={
"FirstName": "Joe",
"LastName": "Smith",
"AccountId": account_reference_id,
},
)
)
# Commit the unit of work, executing all of the operations registered above.
result = await context.org.data_api.commit_unit_of_work(unit_of_work)
```
"""
def __init__(self) -> None:
self._sub_requests: dict[ReferenceId, RestApiRequest[str]] = {}
self._next_reference_id = 0
def register_create(self, record: Record) -> ReferenceId:
"""
Register a record creation for the `UnitOfWork`.
Returns a `ReferenceId` that you can use to refer to the created record in subsequent operations in this
`UnitOfWork`.
For example:
```python
from salesforce_functions import Record, UnitOfWork
unit_of_work = UnitOfWork()
reference_id = unit_of_work.register_create(
Record(
type="Account",
fields={
"Name": "Example Account",
},
)
)
```
"""
return self._register(CreateRecordRestApiRequest(record))
def register_update(self, record: Record) -> ReferenceId:
"""
Register a record update for the `UnitOfWork`.
The given `Record` must contain an `Id` field.
Returns a `ReferenceId` that you can use to refer to the updated record in subsequent operations in this
`UnitOfWork`.
For example:
```python
from salesforce_functions import Record, UnitOfWork
unit_of_work = UnitOfWork()
reference_id = unit_of_work.register_update(
Record(
type="Account",
fields={
"Id": "001B000001Lp1FxIAJ",
"Name": "New Name",
},
)
)
```
"""
return self._register(UpdateRecordRestApiRequest(record))
def register_delete(self, object_type: str, record_id: str) -> ReferenceId:
"""
Register a deletion of an existing record of the given type and ID.
Returns a `ReferenceId` that you can use to refer to the deleted record in subsequent operations in this
`UnitOfWork`.
For example:
```python
from salesforce_functions import UnitOfWork
unit_of_work = UnitOfWork()
reference_id = unit_of_work.register_delete("Account", "001B000001Lp1FxIAJ")
```
"""
return self._register(DeleteRecordRestApiRequest(object_type, record_id))
def _register(self, request: RestApiRequest[str]) -> ReferenceId:
reference_id = ReferenceId(id="referenceId" + str(self._next_reference_id))
self._next_reference_id += 1
self._sub_requests[reference_id] = request
        return reference_id

# End of file: /salesforce_functions-0.6.0-py3-none-any.whl/salesforce_functions/data_api/unit_of_work.py
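The reference-id bookkeeping in `_register` above can be sketched in isolation (a toy stand-in, not the real class):

```python
class MiniUnitOfWork:
    # Minimal sketch of the sequential reference-id registry used above.
    def __init__(self):
        self._sub_requests = {}
        self._next = 0

    def register(self, request):
        # Each registered request gets a unique, ordered reference id.
        ref = f"referenceId{self._next}"
        self._next += 1
        self._sub_requests[ref] = request
        return ref
```

Each call returns a fresh id, so later operations in the same unit can refer back to earlier ones.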
from typing import Any, TypeVar
import aiohttp
import orjson
from aiohttp.payload import BytesPayload
from ..__version__ import __version__
from ._requests import (
CompositeGraphRestApiRequest,
CreateRecordRestApiRequest,
DeleteRecordRestApiRequest,
QueryNextRecordsRestApiRequest,
QueryRecordsRestApiRequest,
RestApiRequest,
UpdateRecordRestApiRequest,
)
from .exceptions import ClientError, UnexpectedRestApiResponsePayload
from .record import Record, RecordQueryResult
from .reference_id import ReferenceId
from .unit_of_work import UnitOfWork
__all__ = ["DataAPI"]
T = TypeVar("T")
class DataAPI:
"""
Data API client to interact with data in a Salesforce org.
We provide a preconfigured instance of this client at `context.org.data_api`
to make it easier for you to query, insert, and update records.
For example:
```python
async def function(event: InvocationEvent[Any], context: Context):
result = await context.org.data_api.query("SELECT Id, Name FROM Account")
return result.records
```
"""
def __init__(
self,
*,
org_domain_url: str,
api_version: str,
access_token: str,
session: aiohttp.ClientSession | None = None,
) -> None:
self._api_version = api_version
self._org_domain_url = org_domain_url
self._shared_session = session
self.access_token = access_token
async def query(self, soql: str) -> RecordQueryResult:
"""
Query for records using the given SOQL string.
For example:
```python
result = await context.org.data_api.query("SELECT Id, Name FROM Account")
for record in result.records:
# ...
```
If the returned `RecordQueryResult`'s `done` attribute is `False`, there are more
records to be returned. To retrieve these, use `DataAPI.query_more()`.
For more information, see the [Query REST API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_query.htm).
""" # noqa: E501 pylint: disable=line-too-long
return await self._execute(
QueryRecordsRestApiRequest(soql, self._download_file)
)
async def query_more(self, result: RecordQueryResult) -> RecordQueryResult:
"""
Query for more records, based on the given `RecordQueryResult`.
For example:
```python
result = await context.org.data_api.query("SELECT Id, Name FROM Account")
if not result.done:
query_more_result = await context.org.data_api.query_more(result)
```
For more information, see the [Query More Results REST API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_query_more_results.htm).
""" # noqa: E501 pylint: disable=line-too-long
if result.next_records_url is None:
return RecordQueryResult(
done=True,
total_size=result.total_size,
records=[],
next_records_url=None,
)
return await self._execute(
QueryNextRecordsRestApiRequest(result.next_records_url, self._download_file)
)
async def create(self, record: Record) -> str:
"""
Create a new record based on the given `Record` object.
Returns the ID of the new record.
For example:
```python
from salesforce_functions import Record
record_id = await context.org.data_api.create(
Record(
type="Account",
fields={
"Name": "Example Account",
},
)
)
```
"""
return await self._execute(CreateRecordRestApiRequest(record))
async def update(self, record: Record) -> str:
"""
Update an existing record based on the given `Record` object.
The given `Record` must contain an `Id` field. Returns the ID of the record that was updated.
For example:
```python
from salesforce_functions import Record
await context.org.data_api.update(
Record(
type="Account",
fields={
"Id": "001B000001Lp1FxIAJ",
"Name": "New Name",
},
)
)
```
"""
return await self._execute(UpdateRecordRestApiRequest(record))
async def delete(self, object_type: str, record_id: str) -> str:
"""
Delete an existing record of the given Salesforce object type and ID.
Returns the ID of the record that was deleted.
For example:
```python
await data_api.delete("Account", "001B000001Lp1FxIAJ")
```
"""
return await self._execute(DeleteRecordRestApiRequest(object_type, record_id))
async def commit_unit_of_work(
self, unit_of_work: UnitOfWork
) -> dict[ReferenceId, str]:
"""
Commit a `UnitOfWork`, which executes all operations registered with it.
If any of these operations fail, the whole unit is rolled back. To examine results for a
single operation, inspect the returned dict (which is keyed with `ReferenceId` objects
returned from the `register*` functions on `UnitOfWork`).
For example:
```python
from salesforce_functions import UnitOfWork
# Create a unit of work, against which multiple operations can be registered.
unit_of_work = UnitOfWork()
first_reference_id = unit_of_work.register_create(
# ...
)
second_reference_id = unit_of_work.register_create(
# ...
)
# Commit the unit of work, executing all of the operations registered above.
result = await context.org.data_api.commit_unit_of_work(unit_of_work)
# The result of each operation.
first_record_id = result[first_create_reference_id]
second_record_id = result[second_create_reference_id]
```
"""
return await self._execute(
CompositeGraphRestApiRequest(
self._api_version,
unit_of_work._sub_requests, # pyright: ignore [reportPrivateUsage] pylint:disable=protected-access
)
)
async def _execute(self, rest_api_request: RestApiRequest[T]) -> T:
url: str = rest_api_request.url(self._org_domain_url, self._api_version)
method: str = rest_api_request.http_method()
body = rest_api_request.request_body()
session = self._shared_session or _create_session()
try:
response = await session.request(
method,
url,
headers=self._default_headers(),
data=None if body is None else _json_serialize(body),
)
# Using orjson for faster JSON deserialization over the stdlib.
# This is not implemented using the `loads` argument to `Response.json` since:
            # - We don't want the content type validation, since some successful requests return 204
# (No Content) which will not have an `application/json`` content type header. However,
# these parse just fine as JSON helping to unify the interface to the REST request classes.
# - Orjson's performance/memory usage is better if it is passed bytes directly instead of `str`.
response_body = await response.read()
json_body = orjson.loads(response_body) if response_body else None
except aiohttp.ClientError as e:
# https://docs.aiohttp.org/en/stable/client_reference.html#client-exceptions
raise ClientError(
f"An error occurred while making the request: {e.__class__.__name__}: {e}"
) from e
except orjson.JSONDecodeError as e:
raise UnexpectedRestApiResponsePayload(
f"The server didn't respond with valid JSON: {e.__class__.__name__}: {e}"
) from e
finally:
if session != self._shared_session:
await session.close()
return await rest_api_request.process_response(response.status, json_body)
async def _download_file(self, url: str) -> bytes:
session = self._shared_session or _create_session()
try:
response = await session.request(
"GET", f"{self._org_domain_url}{url}", headers=self._default_headers()
)
return await response.read()
finally:
if session != self._shared_session:
await session.close()
def _default_headers(self) -> dict[str, str]:
return {
"Authorization": f"Bearer {self.access_token}",
"Sforce-Call-Options": f"client=sf-functions-python:{__version__}",
}
def _create_session() -> aiohttp.ClientSession:
# Disable cookie storage using `DummyCookieJar`, given that:
# - The same session will be used by multiple invocation events.
# - We don't need cookie support.
return aiohttp.ClientSession(cookie_jar=aiohttp.DummyCookieJar())
def _json_serialize(data: Any) -> BytesPayload:
"""
JSON serialize the provided data to bytes.
This is a replacement for aiohttp's default JSON implementation that uses `orjson` instead
of the Python stdlib's `json` module, since `orjson` is faster:
https://github.com/ijl/orjson#performance
We can't just implement this by passing `json_serialize` to `ClientSession`, due to:
https://github.com/aio-libs/aiohttp/issues/4482
So instead this is based on `payload.JsonPayload`:
https://github.com/aio-libs/aiohttp/blob/v3.8.3/aiohttp/payload.py#L386-L403
"""
return BytesPayload(
orjson.dumps(data), encoding="utf-8", content_type="application/json"
    )

# End of file: /salesforce_functions-0.6.0-py3-none-any.whl/salesforce_functions/data_api/__init__.py
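`_json_serialize` above wraps orjson's output in a `BytesPayload`. The byte-level shape of that output can be reproduced with the standard library (orjson also emits compact UTF-8 JSON, just considerably faster):

```python
import json

def json_serialize_bytes(data) -> bytes:
    # Stdlib analog of the orjson serialization above: compact separators,
    # UTF-8 encoded bytes.
    return json.dumps(data, separators=(",", ":")).encode("utf-8")
```

This is only an illustration of the format; the production code keeps orjson for its performance.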
from dataclasses import dataclass
# The order in `__all__` is the in which pdoc3 will display the classes in the docs.
__all__ = [
"DataApiError",
"SalesforceRestApiError",
"InnerSalesforceRestApiError",
"MissingFieldError",
"ClientError",
"UnexpectedRestApiResponsePayload",
]
class DataApiError(Exception):
"""Base class for Data API exceptions."""
@dataclass(frozen=True, kw_only=True, slots=True)
class SalesforceRestApiError(DataApiError):
"""Raised when the Salesforce REST API signalled error(s)."""
api_errors: list["InnerSalesforceRestApiError"]
"""A list of one or more errors returned from Salesforce REST API."""
def __str__(self) -> str:
errors_list = "\n---\n".join(str(error) for error in self.api_errors)
return (
f"Salesforce REST API reported the following error(s):\n---\n{errors_list}"
)
@dataclass(frozen=True, kw_only=True, slots=True)
class InnerSalesforceRestApiError:
"""An error returned from the Salesforce REST API."""
message: str
"""The description of this error."""
error_code: str
"""The error code for this error."""
fields: list[str]
"""
The field names where the error occurred.
This will be empty for errors that aren't related to a specific field.
"""
def __str__(self) -> str:
# The error message includes the field names, so `self.fields` is intentionally not
        # included, as the string representation is for human, not programmatic, consumption.
return f"{self.error_code} error:\n{self.message}"
class MissingFieldError(DataApiError):
"""Raised when the given `Record` must contain a field, but no such field was found."""
class ClientError(DataApiError):
"""Raised when the API request failed due to a connection error, timeout, or malformed HTTP response."""
class UnexpectedRestApiResponsePayload(DataApiError):
"""Raised when the Salesforce REST API returned an unexpected payload.""" | /salesforce_functions-0.6.0-py3-none-any.whl/salesforce_functions/data_api/exceptions.py | 0.926287 | 0.281603 | exceptions.py | pypi |
import logging
import structlog
def configure_logging() -> None:
"""
Configure structlog to output logs in logfmt format, using options recommended for best performance.
https://www.brandur.org/logfmt
https://www.structlog.org/en/stable/performance.html
"""
structlog.configure(
processors=[
# Adds any log attributes bound to the request context (such as `invocationId`).
structlog.contextvars.merge_contextvars,
# Adds the log event level as `level={info,warning,...}`.
structlog.processors.add_log_level,
# Override the default structlog message key name of `event`.
structlog.processors.EventRenamer("msg"),
# Pretty print any exceptions prior to the logfmt log line referencing the exception.
# The output is not in logfmt style, but makes the exception much easier to read than
# trying to newline escape it and output it under an attribute on the log line itself.
structlog.processors.ExceptionPrettyPrinter(),
structlog.processors.LogfmtRenderer(),
],
# Only output log level info and above.
wrapper_class=structlog.make_filtering_bound_logger(logging.INFO),
logger_factory=structlog.WriteLoggerFactory(),
cache_logger_on_first_use=True,
)
def get_logger() -> structlog.stdlib.BoundLogger:
"""
Create a logger instance that outputs logs in logfmt style.
    The logger's API matches the stdlib's `logging.Logger`, but the output
is in the structured `logfmt` logging style.
Example:
```python
from salesforce_functions import get_logger
logger = get_logger()
async def function(event: InvocationEvent[Any], context: Context):
logger.info("Info message")
logger.warning("Warning message")
logger.error("Error message")
logger.info("Info message with an additional structured log attribute", record_id=12345)
```
"""
# structlog's `get_logger()` returns a proxy that only instantiates the logger on first usage.
    # Calling `bind()` here ensures that this instantiation doesn't have to occur each time
# the function is invoked. `configure_logging()` must be called (by us) prior to `get_logger()`
# being used for the first time.
    return structlog.stdlib.get_logger().bind()
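To make the output format concrete, here is a rough stdlib-only sketch of the logfmt lines the pipeline above renders. The real `LogfmtRenderer` additionally quotes and escapes values containing spaces or equals signs; this sketch skips that:

```python
def logfmt_line(level: str, msg: str, **attrs) -> str:
    # Rough approximation of the rendered output for this pipeline:
    # level first, then the renamed "msg" key, then any bound attributes.
    parts = [f"level={level}", f'msg="{msg}"']
    parts.extend(f"{key}={value}" for key, value in attrs.items())
    return " ".join(parts)


print(logfmt_line("info", "Info message", record_id=12345))
# level=info msg="Info message" record_id=12345
```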
import importlib.util
import inspect
import sys
import traceback
import typing
from pathlib import Path
from typing import Any, Awaitable, Callable
from ..context import Context
from ..invocation_event import InvocationEvent
FUNCTION_MODULE_NAME = "main"
FUNCTION_NAME = "function"
Function = Callable[[InvocationEvent[Any], Context], Awaitable[Any]]
def load_function(project_path: Path) -> Function:
"""
Load and validate the function inside `main.py` in the specified directory.
Uses the approach documented here:
https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
"""
# Convert `project_path` to a normalised absolute path, so that:
# - it's clearer in any error messages where we were attempting to look for the function
# - we don't end up putting a relative path onto `sys.path`.
project_path = project_path.resolve()
module_filename = f"{FUNCTION_MODULE_NAME}.py"
module_path = project_path.joinpath(module_filename)
if not module_path.is_file():
raise LoadFunctionError(
f"Didn't find a {module_filename} file at {module_path}."
)
# `submodule_search_locations` is set to ensure relative imports work within the imported module.
spec = importlib.util.spec_from_file_location(
FUNCTION_MODULE_NAME,
module_path,
submodule_search_locations=[str(project_path)],
)
# These can only be None if our implementation is incorrect (eg: trying to load a file that
# doesn't have a .py extension, so has no known loader). The assertions are to prove this
# to the type checker.
assert spec is not None
assert spec.loader is not None
module = importlib.util.module_from_spec(spec)
sys.modules[FUNCTION_MODULE_NAME] = module
    # Allow the imported module to use absolute imports for modules within its own directory.
sys.path.insert(0, str(project_path))
try:
spec.loader.exec_module(module)
except Exception as e: # e.g.: SyntaxError, ImportError, NameError.
raise LoadFunctionError(
f"Couldn't import {module_filename}:\n\n{traceback.format_exc()}"
) from e
function = getattr(module, FUNCTION_NAME, None)
if function is None or not inspect.isfunction(function):
raise LoadFunctionError(
f"Didn't find a function named '{FUNCTION_NAME}' in {module_filename}."
)
if not inspect.iscoroutinefunction(function):
raise LoadFunctionError(
f"The function named '{FUNCTION_NAME}' in {module_filename} must be an async function."
f" Change the function definition from 'def {FUNCTION_NAME}' to 'async def {FUNCTION_NAME}'."
)
parameter_count = len(inspect.signature(function).parameters)
expected_parameter_count = len(typing.get_args(Function)[0])
if parameter_count != expected_parameter_count:
raise LoadFunctionError(
f"The function named '{FUNCTION_NAME}' in {module_filename} has the wrong number of"
f" parameters (expected {expected_parameter_count} but found {parameter_count})."
)
return function
class LoadFunctionError(Exception):
"""There was an error loading the function or it failed validation.""" | /salesforce_functions-0.6.0-py3-none-any.whl/salesforce_functions/_internal/function_loader.py | 0.549641 | 0.318671 | function_loader.py | pypi |
import re
import sys
from dataclasses import dataclass
from pathlib import Path
from typing import Any
if sys.version_info < (3, 11):
# `tomllib` was only added to the stdlib in Python 3.11, so for older Python
# versions we use the third party `tomli` package, which has an identical API.
import tomli as tomllib # pragma: no-cover-python-gte-311
else:
import tomllib # pragma: no-cover-python-lt-311
MINIMUM_SALESFORCE_API_MAJOR_VERSION = 53
@dataclass(frozen=True, kw_only=True, slots=True)
class Config:
"""The function's configuration."""
salesforce_api_version: str
"""The requested Salesforce REST API version (for example '56.0')."""
def load_config(project_path: Path) -> Config:
"""Load a function's configuration from its project.toml file."""
project_toml_path = project_path.joinpath("project.toml").resolve()
if not project_toml_path.is_file():
raise ConfigError(f"Didn't find a project.toml file at {project_toml_path}.")
try:
with project_toml_path.open(mode="rb") as file:
project_toml = tomllib.load(file)
except tomllib.TOMLDecodeError as e:
raise ConfigError(f"The project.toml file isn't valid TOML: {e}") from e
except Exception as e: # e.g.: OSError, UnicodeDecodeError.
raise ConfigError(
f"Couldn't read project.toml: {e.__class__.__name__}: {e}"
) from e
try:
salesforce_table: dict[str, Any] = project_toml["com"]["salesforce"]
salesforce_api_version = salesforce_table.get("salesforce-api-version")
except (AttributeError, KeyError, ValueError) as e:
raise ConfigError(
"The project.toml file is missing the required '[com.salesforce]' table."
) from e
if salesforce_api_version is None:
raise ConfigError(
"The project.toml file is missing the required 'com.salesforce.salesforce-api-version' key."
)
if not isinstance(salesforce_api_version, str):
raise ConfigError(
"The 'com.salesforce.salesforce-api-version' key in project.toml must be a string."
)
match = re.match(r"(?P<major_version>\d+)\.\d+$", salesforce_api_version)
if not match:
raise ConfigError(
f"'{salesforce_api_version}' isn't a valid Salesforce REST API version. Update the 'salesforce-api-version'"
" key in project.toml to a version that uses the form 'X.Y', such as '56.0'."
)
if int(match.group("major_version")) < MINIMUM_SALESFORCE_API_MAJOR_VERSION:
raise ConfigError(
f"Salesforce REST API version '{salesforce_api_version}' isn't supported. Update the"
f" 'salesforce-api-version' key in project.toml to '{MINIMUM_SALESFORCE_API_MAJOR_VERSION}.0' or later."
)
return Config(salesforce_api_version=salesforce_api_version)
class ConfigError(Exception):
"""There was an error loading the project config or it failed validation.""" | /salesforce_functions-0.6.0-py3-none-any.whl/salesforce_functions/_internal/config.py | 0.5564 | 0.155495 | config.py | pypi |
<p align="center">
<br>
<img src="docs/_static/logo_final.png" width="400"/>
<br>
</p>
<div align="center">
<a href="https://github.com/salesforce/LAVIS/releases"><img alt="Latest Release" src="https://img.shields.io/github/release/salesforce/LAVIS.svg" /></a>
<a href="https://opensource.salesforce.com/LAVIS/index.html">
<img alt="docs" src="https://github.com/salesforce/LAVIS/actions/workflows/docs.yaml/badge.svg"/>
<a href="https://opensource.org/licenses/BSD-3-Clause">
<img alt="license" src="https://img.shields.io/badge/License-BSD_3--Clause-blue.svg"/>
</a>
<a href="https://pepy.tech/project/salesforce-lavis">
<img alt="Downloads" src="https://pepy.tech/badge/salesforce-lavis">
</a>
</div>
<div align="center">
<a href="https://opensource.salesforce.com/LAVIS//latest/benchmark.html">Benchmark</a>,
<a href="https://arxiv.org/abs/2209.09019">Technical Report</a>,
<a href="https://opensource.salesforce.com/LAVIS//latest/index.html">Documentation</a>,
<a href="https://github.com/salesforce/LAVIS/tree/main/examples">Jupyter Notebook Examples</a>,
<a href="https://blog.salesforceairesearch.com/lavis-language-vision-library/">Blog</a>
</div>
# LAVIS - A Library for Language-Vision Intelligence
## What's New: 🎉
* [Model Release] Jan 2023, released implementation of **BLIP-2** <br>
[Paper](https://arxiv.org/abs/2301.12597), [Project Page](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [](https://huggingface.co/spaces/Salesforce/BLIP2), [](https://colab.research.google.com/github/salesforce/LAVIS/blob/main/examples/blip2_instructed_generation.ipynb)
> A generic and efficient pre-training strategy that easily leverages pretrained vision models and large language models (LLMs) for vision-language pretraining. BLIP-2 beats Flamingo on zero-shot VQAv2 (**65.0** vs **56.3**), establishing new state-of-the-art on zero-shot captioning (on NoCaps, **121.6** CIDEr score vs the previous best of **113.2**). In addition, equipped with powerful LLMs (e.g. OPT, FlanT5), BLIP-2 also unlocks new **zero-shot instructed vision-to-language generation** capabilities for various interesting applications!
* Jan 2023, LAVIS is now available on [PyPI](https://pypi.org/project/salesforce-lavis/) for installation!
* [Model Release] Dec 2022, released implementation of **Img2prompt-VQA** <br>
[Paper](https://arxiv.org/pdf/2212.10846.pdf), [Project Page](https://github.com/salesforce/LAVIS/tree/main/projects/img2prompt-vqa), [](https://colab.research.google.com/github/salesforce/LAVIS/blob/main/projects/img2prompt-vqa/img2prompt_vqa.ipynb)
> A plug-and-play module that enables off-the-shelf use of Large Language Models (LLMs) for visual question answering (VQA). Img2Prompt-VQA surpasses Flamingo on zero-shot VQA on VQAv2 (61.9 vs 56.3), while in contrast requiring no end-to-end training!
* [Model Release] Oct 2022, released implementation of **PNP-VQA** (**EMNLP Findings 2022**, _"Plug-and-Play VQA: Zero-shot VQA by Conjoining Large Pretrained Models with Zero Training"_, by Anthony T.M.H. et al), <br>
[Paper](https://arxiv.org/abs/2210.08773), [Project Page](https://github.com/salesforce/LAVIS/tree/main/projects/pnp-vqa), [](https://colab.research.google.com/github/salesforce/LAVIS/blob/main/projects/pnp-vqa/pnp_vqa.ipynb)
> A modular zero-shot VQA framework that requires no PLMs training, achieving SoTA zero-shot VQA performance.
## Table of Contents
- [Introduction](#introduction)
- [Installation](#installation)
- [Getting Started](#getting-started)
- [Model Zoo](#model-zoo)
- [Image Captioning](#image-captioning)
- [Visual question answering (VQA)](#visual-question-answering-vqa)
- [Unified Feature Extraction Interface](#unified-feature-extraction-interface)
- [Load Datasets](#load-datasets)
- [Jupyter Notebook Examples](#jupyter-notebook-examples)
- [Resources and Tools](#resources-and-tools)
- [Documentations](#documentations)
- [Ethical and Responsible Use](#ethical-and-responsible-use)
- [Technical Report and Citing LAVIS](#technical-report-and-citing-lavis)
- [License](#license)
## Introduction
LAVIS is a Python deep learning library for LAnguage-and-VISion intelligence research and applications. This library aims to provide engineers and researchers with a one-stop solution to rapidly develop models for their specific multimodal scenarios, and benchmark them across standard and customized datasets.
It features a unified interface design to access
- **10+** tasks
(retrieval, captioning, visual question answering, multimodal classification etc.);
- **20+** datasets (COCO, Flickr, Nocaps, Conceptual
Captions, SBU, etc.);
- **30+** pretrained weights of state-of-the-art foundation language-vision models and their task-specific adaptations, including [ALBEF](https://arxiv.org/pdf/2107.07651.pdf),
[BLIP](https://arxiv.org/pdf/2201.12086.pdf), [ALPRO](https://arxiv.org/pdf/2112.09583.pdf), [CLIP](https://arxiv.org/pdf/2103.00020.pdf).
<p align="center">
<br>
<img src="assets/demo-6.png"/>
<br>
</p>
Key features of LAVIS include:
- **Unified and Modular Interface**: makes it easy to leverage and repurpose existing modules (datasets, models, preprocessors), and to add new ones.
- **Easy Off-the-shelf Inference and Feature Extraction**: readily available pre-trained models let you take advantage of state-of-the-art multimodal understanding and generation capabilities on your own data.
- **Reproducible Model Zoo and Training Recipes**: easily replicate and extend state-of-the-art models on existing and new tasks.
- **Dataset Zoo and Automatic Downloading Tools**: it can be a hassle to prepare the many language-vision datasets. LAVIS provides automatic downloading scripts to help prepare a large variety of datasets and their annotations.
The following table shows the supported tasks, datasets and models in our library. This is a continuing effort and we are working on further growing the list.
| Tasks | Supported Models | Supported Datasets |
| :--------------------------------------: | :----------------------: | :----------------------------------------: |
| Image-text Pre-training | ALBEF, BLIP | COCO, VisualGenome, SBU, ConceptualCaptions |
| Image-text Retrieval | ALBEF, BLIP, CLIP | COCO, Flickr30k |
| Text-image Retrieval | ALBEF, BLIP, CLIP | COCO, Flickr30k |
| Visual Question Answering | ALBEF, BLIP | VQAv2, OKVQA, A-OKVQA |
| Image Captioning | BLIP | COCO, NoCaps |
| Image Classification | CLIP | ImageNet |
| Natural Language Visual Reasoning (NLVR) | ALBEF, BLIP | NLVR2 |
| Visual Entailment (VE) | ALBEF | SNLI-VE |
| Visual Dialogue | BLIP | VisDial |
| Video-text Retrieval | BLIP, ALPRO | MSRVTT, DiDeMo |
| Text-video Retrieval | BLIP, ALPRO | MSRVTT, DiDeMo |
| Video Question Answering (VideoQA) | BLIP, ALPRO | MSRVTT, MSVD |
| Video Dialogue | VGD-GPT | AVSD |
| Multimodal Feature Extraction | ALBEF, CLIP, BLIP, ALPRO | customized |
| Text-to-image Generation | [COMING SOON] | |
## Installation
1. (Optional) Create a conda environment
```bash
conda create -n lavis python=3.8
conda activate lavis
```
2. Install from [PyPI](https://pypi.org/project/salesforce-lavis/)
```bash
pip install salesforce-lavis
```
3. Or, for development, you may build from source
```bash
git clone https://github.com/salesforce/LAVIS.git
cd LAVIS
pip install -e .
```
## Getting Started
### Model Zoo
The model zoo summarizes the supported models in LAVIS. To view it:
```python
from lavis.models import model_zoo
print(model_zoo)
# ==================================================
# Architectures Types
# ==================================================
# albef_classification ve
# albef_feature_extractor base
# albef_nlvr nlvr
# albef_pretrain base
# albef_retrieval coco, flickr
# albef_vqa vqav2
# alpro_qa msrvtt, msvd
# alpro_retrieval msrvtt, didemo
# blip_caption base_coco, large_coco
# blip_classification base
# blip_feature_extractor base
# blip_nlvr nlvr
# blip_pretrain base
# blip_retrieval coco, flickr
# blip_vqa vqav2, okvqa, aokvqa
# clip_feature_extractor ViT-B-32, ViT-B-16, ViT-L-14, ViT-L-14-336, RN50
# clip ViT-B-32, ViT-B-16, ViT-L-14, ViT-L-14-336, RN50
# gpt_dialogue base
```
Let’s see how to use models in LAVIS to perform inference on example data. We first load a sample image from disk.
```python
import torch
from PIL import Image
# setup device to use
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# load sample image
raw_image = Image.open("docs/_static/merlion.png").convert("RGB")
```
This example image shows [Merlion park](https://en.wikipedia.org/wiki/Merlion) ([source](https://theculturetrip.com/asia/singapore/articles/what-exactly-is-singapores-merlion-anyway/)), a landmark in Singapore.
### Image Captioning
In this example, we use the BLIP model to generate a caption for the image. To make inference even easier, we also associate each
pre-trained model with its preprocessors (transforms), accessed via ``load_model_and_preprocess()``.
```python
import torch
from lavis.models import load_model_and_preprocess
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# loads BLIP caption base model, with finetuned checkpoints on MSCOCO captioning dataset.
# this also loads the associated image processors
model, vis_processors, _ = load_model_and_preprocess(name="blip_caption", model_type="base_coco", is_eval=True, device=device)
# preprocess the image
# vis_processors stores image transforms for "train" and "eval" (validation / testing / inference)
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
# generate caption
model.generate({"image": image})
# ['a large fountain spewing water into the air']
```
### Visual question answering (VQA)
The BLIP model is able to answer free-form questions about images in natural language.
To access the VQA model, simply replace the ``name`` and ``model_type`` arguments
passed to ``load_model_and_preprocess()``.
```python
from lavis.models import load_model_and_preprocess
model, vis_processors, txt_processors = load_model_and_preprocess(name="blip_vqa", model_type="vqav2", is_eval=True, device=device)
# ask a random question.
question = "Which city is this photo taken?"
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
question = txt_processors["eval"](question)
model.predict_answers(samples={"image": image, "text_input": question}, inference_method="generate")
# ['singapore']
```
### Unified Feature Extraction Interface
LAVIS provides a unified interface to extract features from each architecture.
To extract features, we load the feature extractor variants of each model.
The multimodal feature can be used for multimodal classification.
The low-dimensional unimodal features can be used to compute cross-modal similarity.
```python
from lavis.models import load_model_and_preprocess
model, vis_processors, txt_processors = load_model_and_preprocess(name="blip_feature_extractor", model_type="base", is_eval=True, device=device)
caption = "a large fountain spewing water into the air"
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
text_input = txt_processors["eval"](caption)
sample = {"image": image, "text_input": [text_input]}
features_multimodal = model.extract_features(sample)
print(features_multimodal.multimodal_embeds.shape)
# torch.Size([1, 12, 768]), use features_multimodal[:,0,:] for multimodal classification tasks
features_image = model.extract_features(sample, mode="image")
features_text = model.extract_features(sample, mode="text")
print(features_image.image_embeds.shape)
# torch.Size([1, 197, 768])
print(features_text.text_embeds.shape)
# torch.Size([1, 12, 768])
# low-dimensional projected features
print(features_image.image_embeds_proj.shape)
# torch.Size([1, 197, 256])
print(features_text.text_embeds_proj.shape)
# torch.Size([1, 12, 256])
similarity = features_image.image_embeds_proj[:,0,:] @ features_text.text_embeds_proj[:,0,:].t()
print(similarity)
# tensor([[0.2622]])
```
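The final similarity score is just a dot product between the `[CLS]` rows of the two projected feature tensors. In plain Python, with made-up 3-dimensional vectors standing in for the real 256-dimensional ones, the computation looks like this:

```python
def dot(u, v):
    # Equivalent to image_embeds_proj[:, 0, :] @ text_embeds_proj[:, 0, :].t()
    # for a single image-text pair.
    return sum(a * b for a, b in zip(u, v))

image_cls = [0.1, 0.5, 0.2]  # illustrative values only
text_cls = [0.3, 0.4, 0.1]
print(round(dot(image_cls, text_cls), 2))  # 0.25
```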
### Load Datasets
LAVIS inherently supports a wide variety of common language-vision datasets by providing [automatic download tools](https://opensource.salesforce.com/LAVIS//latest/benchmark) to help download and organize these datasets. After downloading, to load the datasets, use the following code:
```python
from lavis.datasets.builders import dataset_zoo
dataset_names = dataset_zoo.get_names()
print(dataset_names)
# ['aok_vqa', 'coco_caption', 'coco_retrieval', 'coco_vqa', 'conceptual_caption_12m',
# 'conceptual_caption_3m', 'didemo_retrieval', 'flickr30k', 'imagenet', 'laion2B_multi',
# 'msrvtt_caption', 'msrvtt_qa', 'msrvtt_retrieval', 'msvd_caption', 'msvd_qa', 'nlvr',
# 'nocaps', 'ok_vqa', 'sbu_caption', 'snli_ve', 'vatex_caption', 'vg_caption', 'vg_vqa']
```
After downloading the images, we can use ``load_dataset()`` to obtain the dataset.
```python
from lavis.datasets.builders import load_dataset
coco_dataset = load_dataset("coco_caption")
print(coco_dataset.keys())
# dict_keys(['train', 'val', 'test'])
print(len(coco_dataset["train"]))
# 566747
print(coco_dataset["train"][0])
# {'image': <PIL.Image.Image image mode=RGB size=640x480>,
# 'text_input': 'A woman wearing a net on her head cutting a cake. ',
# 'image_id': 0}
```
If you already host a local copy of the dataset, you can pass in the ``vis_path`` argument to change the default location to load images.
```python
coco_dataset = load_dataset("coco_caption", vis_path=YOUR_LOCAL_PATH)
```
## Jupyter Notebook Examples
See [examples](https://github.com/salesforce/LAVIS/tree/main/examples) for more inference examples, e.g. captioning, feature extraction, VQA, GradCam, zero-shot classification.
## Resources and Tools
- **Benchmarks**: see [Benchmark](https://opensource.salesforce.com/LAVIS//latest/benchmark) for instructions to evaluate and train supported models.
- **Dataset Download and Browsing**: see [Dataset Download](https://opensource.salesforce.com/LAVIS//latest/benchmark) for instructions and automatic tools to download common language-vision datasets.
- **GUI Demo**: to run the demo locally, run ```bash run_scripts/run_demo.sh``` and then follow the instructions in the prompt to view in the browser. A web demo is coming soon.
## Documentations
For more details and advanced usages, please refer to
[documentation](https://opensource.salesforce.com/LAVIS//latest/index.html#).
## Ethical and Responsible Use
We note that models in LAVIS provide no guarantees on their multimodal abilities; incorrect or biased predictions may be observed. In particular, the datasets and pretrained models utilized in LAVIS may contain socioeconomic biases which could result in misclassification and other unwanted behaviors such as offensive or inappropriate speech. We strongly recommend that users review the pre-trained models and overall system in LAVIS before practical adoption. We plan to improve the library by investigating and mitigating these potential biases and
inappropriate behaviors in the future.
## Technical Report and Citing LAVIS
You can find more details in our [technical report](https://arxiv.org/abs/2209.09019).
If you're using LAVIS in your research or applications, please cite using this BibTeX:
```bibtex
@misc{li2022lavis,
title={LAVIS: A Library for Language-Vision Intelligence},
author={Dongxu Li and Junnan Li and Hung Le and Guangsen Wang and Silvio Savarese and Steven C. H. Hoi},
year={2022},
eprint={2209.09019},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Contact us
If you have any questions, comments or suggestions, please do not hesitate to contact us at lavis@salesforce.com.
## License
[BSD 3-Clause License](LICENSE.txt)
from collections import OrderedDict
from itertools import repeat
import collections.abc
import math
import torch
import torch.nn.functional as F
from torch import nn
from fairscale.nn.checkpoint.checkpoint_activations import checkpoint_wrapper
from lavis.models.eva_vit import convert_weights_to_fp16
from lavis.common.dist_utils import download_cached_file
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1):
super().__init__()
# all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1
self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.relu1 = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.relu2 = nn.ReLU(inplace=True)
self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()
self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * self.expansion)
self.relu3 = nn.ReLU(inplace=True)
self.downsample = None
self.stride = stride
if stride > 1 or inplanes != planes * Bottleneck.expansion:
# downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1
self.downsample = nn.Sequential(OrderedDict([
("-1", nn.AvgPool2d(stride)),
("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)),
("1", nn.BatchNorm2d(planes * self.expansion))
]))
def forward(self, x: torch.Tensor):
identity = x
out = self.relu1(self.bn1(self.conv1(x)))
out = self.relu2(self.bn2(self.conv2(out)))
out = self.avgpool(out)
out = self.bn3(self.conv3(out))
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu3(out)
return out
class AttentionPool2d(nn.Module):
def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None):
super().__init__()
self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
self.k_proj = nn.Linear(embed_dim, embed_dim)
self.q_proj = nn.Linear(embed_dim, embed_dim)
self.v_proj = nn.Linear(embed_dim, embed_dim)
self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
self.num_heads = num_heads
def forward(self, x):
x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1) # NCHW -> (HW)NC
x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC
x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC
x, _ = F.multi_head_attention_forward(
query=x, key=x, value=x,
embed_dim_to_check=x.shape[-1],
num_heads=self.num_heads,
q_proj_weight=self.q_proj.weight,
k_proj_weight=self.k_proj.weight,
v_proj_weight=self.v_proj.weight,
in_proj_weight=None,
in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
bias_k=None,
bias_v=None,
add_zero_attn=False,
dropout_p=0,
out_proj_weight=self.c_proj.weight,
out_proj_bias=self.c_proj.bias,
use_separate_proj_weight=True,
training=self.training,
need_weights=False
)
return x[0]
class LayerNorm(nn.LayerNorm):
"""Subclass torch's LayerNorm to handle fp16."""
def forward(self, x: torch.Tensor):
orig_type = x.dtype
ret = super().forward(x.type(torch.float32))
return ret.type(orig_type)
class QuickGELU(nn.Module):
def forward(self, x: torch.Tensor):
return x * torch.sigmoid(1.702 * x)
class ResidualAttentionBlock(nn.Module):
def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None, use_grad_checkpointing=False):
super().__init__()
self.attn = nn.MultiheadAttention(d_model, n_head)
self.ln_1 = LayerNorm(d_model)
self.mlp = nn.Sequential(OrderedDict([
("c_fc", nn.Linear(d_model, d_model * 4)),
("gelu", QuickGELU()),
("c_proj", nn.Linear(d_model * 4, d_model))
]))
self.ln_2 = LayerNorm(d_model)
self.attn_mask = attn_mask
if use_grad_checkpointing:
self.attn = checkpoint_wrapper(self.attn)
self.mlp = checkpoint_wrapper(self.mlp)
def attention(self, x: torch.Tensor):
self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
def forward(self, x: torch.Tensor):
x = x + self.attention(self.ln_1(x))
x = x + self.mlp(self.ln_2(x))
return x
class Transformer(nn.Module):
def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None, use_grad_checkpointing=False):
super().__init__()
self.width = width
self.layers = layers
self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask, use_grad_checkpointing and i>12) for i in range(layers)])
def forward(self, x: torch.Tensor):
return self.resblocks(x)
class VisionTransformer(nn.Module):
def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, use_grad_checkpointing: bool):
super().__init__()
self.input_resolution = input_resolution
self.num_features = width
self.num_heads = heads
self.num_patches = (input_resolution // patch_size) ** 2
self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
scale = width ** -0.5
self.class_embedding = nn.Parameter(scale * torch.randn(width))
self.positional_embedding = nn.Parameter(scale * torch.randn(self.num_patches + 1, width))
self.ln_pre = LayerNorm(width)
self.transformer = Transformer(width, layers-1, heads, use_grad_checkpointing=use_grad_checkpointing)
# self.ln_final = LayerNorm(width)
def forward(self, x: torch.Tensor):
x = self.conv1(x) # shape = [*, width, grid, grid]
x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width]
x = x + self.positional_embedding.to(x.dtype)
x = self.ln_pre(x)
x = x.permute(1, 0, 2) # NLD -> LND
x = self.transformer(x)
x = x.permute(1, 0, 2) # LND -> NLD
# x = self.ln_final(x)
return x
# From PyTorch internals
def _ntuple(n):
def parse(x):
if isinstance(x, collections.abc.Iterable):
return x
return tuple(repeat(x, n))
return parse
to_2tuple = _ntuple(2)
def interpolate_pos_embed(model, state_dict, interpolation: str = 'bicubic', seq_dim=1):
# Rescale the grid of position embeddings when loading from state_dict
old_pos_embed = state_dict.get('positional_embedding', None)
grid_size = round((model.positional_embedding.shape[0] - 1) ** 0.5)
if old_pos_embed is None:
return
grid_size = to_2tuple(grid_size)
extra_tokens = 1 # FIXME detect different token configs (ie no class token, or more)
new_seq_len = grid_size[0] * grid_size[1] + extra_tokens
if new_seq_len == old_pos_embed.shape[0]:
return
if extra_tokens:
pos_emb_tok, pos_emb_img = old_pos_embed[:extra_tokens], old_pos_embed[extra_tokens:]
else:
pos_emb_tok, pos_emb_img = None, old_pos_embed
old_grid_size = to_2tuple(int(math.sqrt(len(pos_emb_img))))
    print('Resizing position embedding grid-size from %s to %s' % (old_grid_size, grid_size))
pos_emb_img = pos_emb_img.reshape(1, old_grid_size[0], old_grid_size[1], -1).permute(0, 3, 1, 2)
pos_emb_img = F.interpolate(
pos_emb_img,
size=grid_size,
mode=interpolation,
align_corners=True,
)
pos_emb_img = pos_emb_img.permute(0, 2, 3, 1).reshape(1, grid_size[0] * grid_size[1], -1)[0]
if pos_emb_tok is not None:
new_pos_embed = torch.cat([pos_emb_tok, pos_emb_img], dim=0)
else:
new_pos_embed = pos_emb_img
state_dict['positional_embedding'] = new_pos_embed
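The sequence-length arithmetic above assumes a square patch grid plus one class token; the relationship can be checked without torch (the sizes below are only illustrative):

```python
import math

def grid_side(seq_len, extra_tokens=1):
    # invert seq_len = side * side + extra_tokens for a square patch grid
    side = int(math.sqrt(seq_len - extra_tokens))
    assert side * side + extra_tokens == seq_len, "not a square grid"
    return side

print(grid_side(197))  # -> 14, e.g. a 224px image with 16px patches
```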
def create_clip_vit_L(img_size=224,use_checkpoint=False,precision="fp16"):
model = VisionTransformer(
input_resolution=img_size,
patch_size=14,
width=1024,
layers=22,
heads=16,
use_grad_checkpointing=use_checkpoint,
)
url = "https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/clip_vit_L.pth"
cached_file = download_cached_file(
url, check_hash=False, progress=True
)
state_dict = torch.load(cached_file, map_location="cpu")
interpolate_pos_embed(model,state_dict)
incompatible_keys = model.load_state_dict(state_dict, strict=False)
# print(incompatible_keys)
if precision == "fp16":
convert_weights_to_fp16(model)
return model
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from functools import partial
from timm.models.vision_transformer import _cfg, PatchEmbed
from timm.models.registry import register_model
from timm.models.layers import trunc_normal_, DropPath
from timm.models.helpers import named_apply, adapt_input_conv
from fairscale.nn.checkpoint.checkpoint_activations import checkpoint_wrapper
from lavis.models.base_model import BaseEncoder
class Mlp(nn.Module):
"""MLP as used in Vision Transformer, MLP-Mixer and related networks"""
def __init__(
self,
in_features,
hidden_features=None,
out_features=None,
act_layer=nn.GELU,
drop=0.0,
):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = nn.Linear(in_features, hidden_features)
self.act = act_layer()
self.fc2 = nn.Linear(hidden_features, out_features)
self.drop = nn.Dropout(drop)
def forward(self, x):
x = self.fc1(x)
x = self.act(x)
x = self.drop(x)
x = self.fc2(x)
x = self.drop(x)
return x
class Attention(nn.Module):
def __init__(
self,
dim,
num_heads=8,
qkv_bias=False,
qk_scale=None,
attn_drop=0.0,
proj_drop=0.0,
):
super().__init__()
self.num_heads = num_heads
head_dim = dim // num_heads
# NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
self.scale = qk_scale or head_dim**-0.5
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
self.attn_gradients = None
self.attention_map = None
def save_attn_gradients(self, attn_gradients):
self.attn_gradients = attn_gradients
def get_attn_gradients(self):
return self.attn_gradients
def save_attention_map(self, attention_map):
self.attention_map = attention_map
def get_attention_map(self):
return self.attention_map
def forward(self, x, register_hook=False):
B, N, C = x.shape
qkv = (
self.qkv(x)
.reshape(B, N, 3, self.num_heads, C // self.num_heads)
.permute(2, 0, 3, 1, 4)
)
q, k, v = (
qkv[0],
qkv[1],
qkv[2],
) # make torchscript happy (cannot use tensor as tuple)
attn = (q @ k.transpose(-2, -1)) * self.scale
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
if register_hook:
self.save_attention_map(attn)
attn.register_hook(self.save_attn_gradients)
x = (attn @ v).transpose(1, 2).reshape(B, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x
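The reshape/permute in `forward` above just splits the channel dimension across heads; the shape bookkeeping can be verified without torch (dimensions below are the usual ViT-B values, used purely for illustration):

```python
B, N, C, num_heads = 2, 197, 768, 12
head_dim = C // num_heads

qkv_out = (B, N, 3 * C)                   # output of the fused qkv linear
split = (B, N, 3, num_heads, head_dim)    # after reshape
per_tensor = (B, num_heads, N, head_dim)  # each of q, k, v after permute
attn = (B, num_heads, N, N)               # q @ k^T

# the reshape is lossless: channel count is preserved
assert 3 * num_heads * head_dim == qkv_out[-1]
```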
class Block(nn.Module):
def __init__(
self,
dim,
num_heads,
mlp_ratio=4.0,
qkv_bias=False,
qk_scale=None,
drop=0.0,
attn_drop=0.0,
drop_path=0.0,
act_layer=nn.GELU,
norm_layer=nn.LayerNorm,
use_grad_checkpointing=False,
):
super().__init__()
self.norm1 = norm_layer(dim)
self.attn = Attention(
dim,
num_heads=num_heads,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
attn_drop=attn_drop,
proj_drop=drop,
)
# NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
self.norm2 = norm_layer(dim)
mlp_hidden_dim = int(dim * mlp_ratio)
self.mlp = Mlp(
in_features=dim,
hidden_features=mlp_hidden_dim,
act_layer=act_layer,
drop=drop,
)
if use_grad_checkpointing:
self.attn = checkpoint_wrapper(self.attn)
self.mlp = checkpoint_wrapper(self.mlp)
def forward(self, x, register_hook=False):
x = x + self.drop_path(self.attn(self.norm1(x), register_hook=register_hook))
x = x + self.drop_path(self.mlp(self.norm2(x)))
return x
class VisionTransformer(nn.Module):
"""Vision Transformer
A PyTorch implementation of: `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` -
https://arxiv.org/abs/2010.11929
"""
def __init__(
self,
img_size=224,
patch_size=16,
in_chans=3,
num_classes=1000,
embed_dim=768,
depth=12,
num_heads=12,
mlp_ratio=4.0,
qkv_bias=True,
qk_scale=None,
representation_size=None,
drop_rate=0.0,
attn_drop_rate=0.0,
drop_path_rate=0.0,
norm_layer=None,
use_grad_checkpointing=False,
ckpt_layer=0,
):
"""
Args:
img_size (int, tuple): input image size
patch_size (int, tuple): patch size
in_chans (int): number of input channels
num_classes (int): number of classes for classification head
embed_dim (int): embedding dimension
depth (int): depth of transformer
num_heads (int): number of attention heads
mlp_ratio (int): ratio of mlp hidden dim to embedding dim
qkv_bias (bool): enable bias for qkv if True
qk_scale (float): override default qk scale of head_dim ** -0.5 if set
representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
drop_rate (float): dropout rate
attn_drop_rate (float): attention dropout rate
drop_path_rate (float): stochastic depth rate
norm_layer: (nn.Module): normalization layer
"""
super().__init__()
self.num_features = (
self.embed_dim
) = embed_dim # num_features for consistency with other models
norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
self.patch_embed = PatchEmbed(
img_size=img_size,
patch_size=patch_size,
in_chans=in_chans,
embed_dim=embed_dim,
)
num_patches = self.patch_embed.num_patches
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
self.pos_drop = nn.Dropout(p=drop_rate)
dpr = [
x.item() for x in torch.linspace(0, drop_path_rate, depth)
] # stochastic depth decay rule
self.blocks = nn.ModuleList(
[
Block(
dim=embed_dim,
num_heads=num_heads,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
drop=drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
norm_layer=norm_layer,
use_grad_checkpointing=(
use_grad_checkpointing and i >= depth - ckpt_layer
),
)
for i in range(depth)
]
)
self.norm = norm_layer(embed_dim)
trunc_normal_(self.pos_embed, std=0.02)
trunc_normal_(self.cls_token, std=0.02)
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=0.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
@torch.jit.ignore
def no_weight_decay(self):
return {"pos_embed", "cls_token"}
def forward(self, x, register_blk=-1):
B = x.shape[0]
x = self.patch_embed(x)
cls_tokens = self.cls_token.expand(
B, -1, -1
) # stole cls_tokens impl from Phil Wang, thanks
x = torch.cat((cls_tokens, x), dim=1)
x = x + self.pos_embed[:, : x.size(1), :]
x = self.pos_drop(x)
for i, blk in enumerate(self.blocks):
x = blk(x, register_blk == i)
x = self.norm(x)
return x
@torch.jit.ignore()
def load_pretrained(self, checkpoint_path, prefix=""):
_load_weights(self, checkpoint_path, prefix)
@torch.no_grad()
def _load_weights(model: VisionTransformer, checkpoint_path: str, prefix: str = ""):
"""Load weights from .npz checkpoints for official Google Brain Flax implementation"""
import numpy as np
def _n2p(w, t=True):
if w.ndim == 4 and w.shape[0] == w.shape[1] == w.shape[2] == 1:
w = w.flatten()
if t:
if w.ndim == 4:
w = w.transpose([3, 2, 0, 1])
elif w.ndim == 3:
w = w.transpose([2, 0, 1])
elif w.ndim == 2:
w = w.transpose([1, 0])
return torch.from_numpy(w)
w = np.load(checkpoint_path)
if not prefix and "opt/target/embedding/kernel" in w:
prefix = "opt/target/"
if hasattr(model.patch_embed, "backbone"):
# hybrid
backbone = model.patch_embed.backbone
stem_only = not hasattr(backbone, "stem")
stem = backbone if stem_only else backbone.stem
stem.conv.weight.copy_(
adapt_input_conv(
stem.conv.weight.shape[1], _n2p(w[f"{prefix}conv_root/kernel"])
)
)
stem.norm.weight.copy_(_n2p(w[f"{prefix}gn_root/scale"]))
stem.norm.bias.copy_(_n2p(w[f"{prefix}gn_root/bias"]))
if not stem_only:
for i, stage in enumerate(backbone.stages):
for j, block in enumerate(stage.blocks):
bp = f"{prefix}block{i + 1}/unit{j + 1}/"
for r in range(3):
getattr(block, f"conv{r + 1}").weight.copy_(
_n2p(w[f"{bp}conv{r + 1}/kernel"])
)
getattr(block, f"norm{r + 1}").weight.copy_(
_n2p(w[f"{bp}gn{r + 1}/scale"])
)
getattr(block, f"norm{r + 1}").bias.copy_(
_n2p(w[f"{bp}gn{r + 1}/bias"])
)
if block.downsample is not None:
block.downsample.conv.weight.copy_(
_n2p(w[f"{bp}conv_proj/kernel"])
)
block.downsample.norm.weight.copy_(
_n2p(w[f"{bp}gn_proj/scale"])
)
block.downsample.norm.bias.copy_(_n2p(w[f"{bp}gn_proj/bias"]))
embed_conv_w = _n2p(w[f"{prefix}embedding/kernel"])
else:
embed_conv_w = adapt_input_conv(
model.patch_embed.proj.weight.shape[1], _n2p(w[f"{prefix}embedding/kernel"])
)
model.patch_embed.proj.weight.copy_(embed_conv_w)
model.patch_embed.proj.bias.copy_(_n2p(w[f"{prefix}embedding/bias"]))
model.cls_token.copy_(_n2p(w[f"{prefix}cls"], t=False))
pos_embed_w = _n2p(w[f"{prefix}Transformer/posembed_input/pos_embedding"], t=False)
if pos_embed_w.shape != model.pos_embed.shape:
pos_embed_w = resize_pos_embed( # resize pos embedding when different size from pretrained weights
pos_embed_w,
model.pos_embed,
getattr(model, "num_tokens", 1),
model.patch_embed.grid_size,
)
model.pos_embed.copy_(pos_embed_w)
model.norm.weight.copy_(_n2p(w[f"{prefix}Transformer/encoder_norm/scale"]))
model.norm.bias.copy_(_n2p(w[f"{prefix}Transformer/encoder_norm/bias"]))
# if isinstance(model.head, nn.Linear) and model.head.bias.shape[0] == w[f'{prefix}head/bias'].shape[-1]:
# model.head.weight.copy_(_n2p(w[f'{prefix}head/kernel']))
# model.head.bias.copy_(_n2p(w[f'{prefix}head/bias']))
# if isinstance(getattr(model.pre_logits, 'fc', None), nn.Linear) and f'{prefix}pre_logits/bias' in w:
# model.pre_logits.fc.weight.copy_(_n2p(w[f'{prefix}pre_logits/kernel']))
# model.pre_logits.fc.bias.copy_(_n2p(w[f'{prefix}pre_logits/bias']))
for i, block in enumerate(model.blocks.children()):
block_prefix = f"{prefix}Transformer/encoderblock_{i}/"
mha_prefix = block_prefix + "MultiHeadDotProductAttention_1/"
block.norm1.weight.copy_(_n2p(w[f"{block_prefix}LayerNorm_0/scale"]))
block.norm1.bias.copy_(_n2p(w[f"{block_prefix}LayerNorm_0/bias"]))
block.attn.qkv.weight.copy_(
torch.cat(
[
_n2p(w[f"{mha_prefix}{n}/kernel"], t=False).flatten(1).T
for n in ("query", "key", "value")
]
)
)
block.attn.qkv.bias.copy_(
torch.cat(
[
_n2p(w[f"{mha_prefix}{n}/bias"], t=False).reshape(-1)
for n in ("query", "key", "value")
]
)
)
block.attn.proj.weight.copy_(_n2p(w[f"{mha_prefix}out/kernel"]).flatten(1))
block.attn.proj.bias.copy_(_n2p(w[f"{mha_prefix}out/bias"]))
for r in range(2):
getattr(block.mlp, f"fc{r + 1}").weight.copy_(
_n2p(w[f"{block_prefix}MlpBlock_3/Dense_{r}/kernel"])
)
getattr(block.mlp, f"fc{r + 1}").bias.copy_(
_n2p(w[f"{block_prefix}MlpBlock_3/Dense_{r}/bias"])
)
block.norm2.weight.copy_(_n2p(w[f"{block_prefix}LayerNorm_2/scale"]))
block.norm2.bias.copy_(_n2p(w[f"{block_prefix}LayerNorm_2/bias"]))
def resize_pos_embed(posemb, posemb_new, num_tokens=1, gs_new=()):
# Rescale the grid of position embeddings when loading from state_dict. Adapted from
# https://github.com/google-research/vision_transformer/blob/00883dd691c63a6830751563748663526e811cee/vit_jax/checkpoint.py#L224
print("Resized position embedding: %s to %s", posemb.shape, posemb_new.shape)
ntok_new = posemb_new.shape[1]
if num_tokens:
posemb_tok, posemb_grid = posemb[:, :num_tokens], posemb[0, num_tokens:]
ntok_new -= num_tokens
else:
posemb_tok, posemb_grid = posemb[:, :0], posemb[0]
gs_old = int(math.sqrt(len(posemb_grid)))
if not len(gs_new): # backwards compatibility
gs_new = [int(math.sqrt(ntok_new))] * 2
assert len(gs_new) >= 2
print("Position embedding grid-size from %s to %s", [gs_old, gs_old], gs_new)
posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)
posemb_grid = F.interpolate(
posemb_grid, size=gs_new, mode="bicubic", align_corners=False
)
posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_new[0] * gs_new[1], -1)
posemb = torch.cat([posemb_tok, posemb_grid], dim=1)
return posemb
def interpolate_pos_embed(pos_embed_checkpoint, visual_encoder):
# interpolate position embedding
embedding_size = pos_embed_checkpoint.shape[-1]
num_patches = visual_encoder.patch_embed.num_patches
num_extra_tokens = visual_encoder.pos_embed.shape[-2] - num_patches
# height (== width) for the checkpoint position embedding
orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5)
# height (== width) for the new position embedding
new_size = int(num_patches**0.5)
if orig_size != new_size:
# class_token and dist_token are kept unchanged
extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens]
# only the position tokens are interpolated
pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:]
pos_tokens = pos_tokens.reshape(
-1, orig_size, orig_size, embedding_size
).permute(0, 3, 1, 2)
pos_tokens = torch.nn.functional.interpolate(
pos_tokens, size=(new_size, new_size), mode="bicubic", align_corners=False
)
pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2)
new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1)
print(
"reshape position embedding from %d to %d" % (orig_size**2, new_size**2)
)
return new_pos_embed
else:
return pos_embed_checkpoint
class VisionTransformerEncoder(VisionTransformer, BaseEncoder):
@classmethod
def from_config(cls, cfg, from_pretrained=False):
vit_type = cfg.get("vit_type", "base")
image_size = cfg.get("image_size", 384)
ckpt_layer = cfg.get("vit_ckpt_layer", 0)
drop_path_rate = cfg.get("vit_drop_path_rate", 0)
norm_layer_eps = cfg.get("vit_layer_norm_epsilon", -1)
use_grad_checkpointing = cfg.get("vit_grad_ckpt", False)
if norm_layer_eps == -1:
norm_layer = None
else:
norm_layer = partial(nn.LayerNorm, eps=norm_layer_eps)
# norm_layer=partial(nn.LayerNorm, eps=1e-6),
assert vit_type in ["base", "large"], "vit parameter must be base or large"
if vit_type == "base":
vision_width = 768
visual_encoder = cls(
img_size=image_size,
patch_size=16,
embed_dim=vision_width,
depth=12,
num_heads=12,
use_grad_checkpointing=use_grad_checkpointing,
ckpt_layer=ckpt_layer,
drop_path_rate=drop_path_rate,
norm_layer=norm_layer,
)
if from_pretrained:
checkpoint = torch.hub.load_state_dict_from_url(
url="https://dl.fbaipublicfiles.com/deit/deit_base_patch16_224-b5f2ef4d.pth",
map_location="cpu",
check_hash=True,
)
state_dict = checkpoint["model"]
state_dict["pos_embed"] = interpolate_pos_embed(
state_dict["pos_embed"], visual_encoder
)
msg = visual_encoder.load_state_dict(state_dict, strict=False)
elif vit_type == "large":
vision_width = 1024
visual_encoder = cls(
img_size=image_size,
patch_size=16,
embed_dim=vision_width,
depth=24,
num_heads=16,
use_grad_checkpointing=use_grad_checkpointing,
ckpt_layer=ckpt_layer,
drop_path_rate=drop_path_rate or 0.1,  # default to 0.1 for ViT-large when unset
norm_layer=norm_layer,
)
if from_pretrained:
from timm.models.helpers import load_custom_pretrained
from timm.models.vision_transformer import default_cfgs
load_custom_pretrained(
visual_encoder, default_cfgs["vit_large_patch16_224_in21k"]
)
visual_encoder.vision_width = vision_width
return visual_encoder
def forward_features(self, x, register_blk=-1):
return super().forward(x, register_blk)
import math
from functools import partial
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.checkpoint as checkpoint
from timm.models.layers import drop_path, to_2tuple, trunc_normal_
from timm.models.registry import register_model
from lavis.common.dist_utils import download_cached_file
def _cfg(url='', **kwargs):
return {
'url': url,
'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,
'crop_pct': .9, 'interpolation': 'bicubic',
'mean': (0.5, 0.5, 0.5), 'std': (0.5, 0.5, 0.5),
**kwargs
}
class DropPath(nn.Module):
"""Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
"""
def __init__(self, drop_prob=None):
super(DropPath, self).__init__()
self.drop_prob = drop_prob
def forward(self, x):
return drop_path(x, self.drop_prob, self.training)
def extra_repr(self) -> str:
return 'p={}'.format(self.drop_prob)
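timm's `drop_path` keeps the residual branch with probability `1 - p` and rescales surviving samples by `1/(1 - p)` so the expected output is unchanged at train time; a scalar sketch of that behavior (not the timm implementation itself):

```python
import random

def drop_path_scalar(x, drop_prob, training, rng=None):
    # identity at eval time or when drop_prob == 0
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    rng = rng or random.Random(0)
    keep = 1.0 if rng.random() < keep_prob else 0.0
    # rescale survivors so E[output] == x
    return x * keep / keep_prob
```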
class Mlp(nn.Module):
def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = nn.Linear(in_features, hidden_features)
self.act = act_layer()
self.fc2 = nn.Linear(hidden_features, out_features)
self.drop = nn.Dropout(drop)
def forward(self, x):
x = self.fc1(x)
x = self.act(x)
# x = self.drop(x)
# dropout after fc1 is disabled to match the original BERT implementation
x = self.fc2(x)
x = self.drop(x)
return x
class Attention(nn.Module):
def __init__(
self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0.,
proj_drop=0., window_size=None, attn_head_dim=None):
super().__init__()
self.num_heads = num_heads
head_dim = dim // num_heads
if attn_head_dim is not None:
head_dim = attn_head_dim
all_head_dim = head_dim * self.num_heads
self.scale = qk_scale or head_dim ** -0.5
self.qkv = nn.Linear(dim, all_head_dim * 3, bias=False)
if qkv_bias:
self.q_bias = nn.Parameter(torch.zeros(all_head_dim))
self.v_bias = nn.Parameter(torch.zeros(all_head_dim))
else:
self.q_bias = None
self.v_bias = None
if window_size:
self.window_size = window_size
self.num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3
self.relative_position_bias_table = nn.Parameter(
torch.zeros(self.num_relative_distance, num_heads)) # 2*Wh-1 * 2*Ww-1, nH
# cls to token & token to cls & cls to cls
# get pair-wise relative position index for each token inside the window
coords_h = torch.arange(window_size[0])
coords_w = torch.arange(window_size[1])
coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0
relative_coords[:, :, 1] += window_size[1] - 1
relative_coords[:, :, 0] *= 2 * window_size[1] - 1
relative_position_index = \
torch.zeros(size=(window_size[0] * window_size[1] + 1, ) * 2, dtype=relative_coords.dtype)
relative_position_index[1:, 1:] = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
relative_position_index[0, 0:] = self.num_relative_distance - 3
relative_position_index[0:, 0] = self.num_relative_distance - 2
relative_position_index[0, 0] = self.num_relative_distance - 1
self.register_buffer("relative_position_index", relative_position_index)
else:
self.window_size = None
self.relative_position_bias_table = None
self.relative_position_index = None
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(all_head_dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
def forward(self, x, rel_pos_bias=None):
B, N, C = x.shape
qkv_bias = None
if self.q_bias is not None:
qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias))
# qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias)
qkv = qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
q = q * self.scale
attn = (q @ k.transpose(-2, -1))
if self.relative_position_bias_table is not None:
relative_position_bias = \
self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
self.window_size[0] * self.window_size[1] + 1,
self.window_size[0] * self.window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH
relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
attn = attn + relative_position_bias.unsqueeze(0)
if rel_pos_bias is not None:
attn = attn + rel_pos_bias
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B, N, -1)
x = self.proj(x)
x = self.proj_drop(x)
return x
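The `qkv_bias` concatenation in `forward` above is the BEiT-style trick: q and v get learnable biases while k's bias is frozen at zero, all packed into one vector so a single fused linear can be used. The layout in miniature (values are illustrative):

```python
all_head_dim = 4  # illustrative size
q_bias = [0.1] * all_head_dim
k_bias = [0.0] * all_head_dim  # torch.zeros_like(v_bias): never learned
v_bias = [0.2] * all_head_dim
qkv_bias = q_bias + k_bias + v_bias  # matches the torch.cat order (q, k, v)
```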
class Block(nn.Module):
def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
drop_path=0., init_values=None, act_layer=nn.GELU, norm_layer=nn.LayerNorm,
window_size=None, attn_head_dim=None):
super().__init__()
self.norm1 = norm_layer(dim)
self.attn = Attention(
dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
attn_drop=attn_drop, proj_drop=drop, window_size=window_size, attn_head_dim=attn_head_dim)
# NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
mlp_hidden_dim = int(dim * mlp_ratio)
self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
if init_values is not None and init_values > 0:
self.gamma_1 = nn.Parameter(init_values * torch.ones((dim)),requires_grad=True)
self.gamma_2 = nn.Parameter(init_values * torch.ones((dim)),requires_grad=True)
else:
self.gamma_1, self.gamma_2 = None, None
def forward(self, x, rel_pos_bias=None):
if self.gamma_1 is None:
x = x + self.drop_path(self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias))
x = x + self.drop_path(self.mlp(self.norm2(x)))
else:
x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias))
x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x)))
return x
class PatchEmbed(nn.Module):
""" Image to Patch Embedding
"""
def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
super().__init__()
img_size = to_2tuple(img_size)
patch_size = to_2tuple(patch_size)
num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
self.patch_shape = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
self.img_size = img_size
self.patch_size = patch_size
self.num_patches = num_patches
self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
def forward(self, x, **kwargs):
B, C, H, W = x.shape
# FIXME look at relaxing size constraints
assert H == self.img_size[0] and W == self.img_size[1], \
f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
x = self.proj(x).flatten(2).transpose(1, 2)
return x
class RelativePositionBias(nn.Module):
def __init__(self, window_size, num_heads):
super().__init__()
self.window_size = window_size
self.num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3
self.relative_position_bias_table = nn.Parameter(
torch.zeros(self.num_relative_distance, num_heads)) # 2*Wh-1 * 2*Ww-1, nH
# cls to token & token to cls & cls to cls
# get pair-wise relative position index for each token inside the window
coords_h = torch.arange(window_size[0])
coords_w = torch.arange(window_size[1])
coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0
relative_coords[:, :, 1] += window_size[1] - 1
relative_coords[:, :, 0] *= 2 * window_size[1] - 1
relative_position_index = \
torch.zeros(size=(window_size[0] * window_size[1] + 1,) * 2, dtype=relative_coords.dtype)
relative_position_index[1:, 1:] = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
relative_position_index[0, 0:] = self.num_relative_distance - 3
relative_position_index[0:, 0] = self.num_relative_distance - 2
relative_position_index[0, 0] = self.num_relative_distance - 1
self.register_buffer("relative_position_index", relative_position_index)
# trunc_normal_(self.relative_position_bias_table, std=.02)
def forward(self):
relative_position_bias = \
self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
self.window_size[0] * self.window_size[1] + 1,
self.window_size[0] * self.window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH
return relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
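The index table built in `__init__` maps every (query, key) token pair to a row of the bias table, with three extra rows for the class-token cases. The same construction in plain Python for a tiny window (indices only, no torch):

```python
def build_relative_position_index(Wh, Ww):
    coords = [(h, w) for h in range(Wh) for w in range(Ww)]
    n = len(coords)
    num_rel = (2 * Wh - 1) * (2 * Ww - 1) + 3
    idx = [[0] * (n + 1) for _ in range(n + 1)]
    for i, (h1, w1) in enumerate(coords):
        for j, (h2, w2) in enumerate(coords):
            dh = h1 - h2 + Wh - 1          # shift offsets to start from 0
            dw = w1 - w2 + Ww - 1
            idx[i + 1][j + 1] = dh * (2 * Ww - 1) + dw
    for t in range(n + 1):
        idx[0][t] = num_rel - 3            # cls -> token
        idx[t][0] = num_rel - 2            # token -> cls
    idx[0][0] = num_rel - 1                # cls -> cls
    return idx
```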
class VisionTransformer(nn.Module):
""" Vision Transformer with support for patch or hybrid CNN input stage
"""
def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
num_heads=12, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop_rate=0., attn_drop_rate=0.,
drop_path_rate=0., norm_layer=nn.LayerNorm, init_values=None,
use_abs_pos_emb=True, use_rel_pos_bias=False, use_shared_rel_pos_bias=False,
use_mean_pooling=True, init_scale=0.001, use_checkpoint=False):
super().__init__()
self.image_size = img_size
self.num_classes = num_classes
self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
self.patch_embed = PatchEmbed(
img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
num_patches = self.patch_embed.num_patches
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
if use_abs_pos_emb:
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
else:
self.pos_embed = None
self.pos_drop = nn.Dropout(p=drop_rate)
if use_shared_rel_pos_bias:
self.rel_pos_bias = RelativePositionBias(window_size=self.patch_embed.patch_shape, num_heads=num_heads)
else:
self.rel_pos_bias = None
self.use_checkpoint = use_checkpoint
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
self.use_rel_pos_bias = use_rel_pos_bias
self.blocks = nn.ModuleList([
Block(
dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer,
init_values=init_values, window_size=self.patch_embed.patch_shape if use_rel_pos_bias else None)
for i in range(depth)])
# self.norm = nn.Identity() if use_mean_pooling else norm_layer(embed_dim)
# self.fc_norm = norm_layer(embed_dim) if use_mean_pooling else None
# self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity()
if self.pos_embed is not None:
trunc_normal_(self.pos_embed, std=.02)
trunc_normal_(self.cls_token, std=.02)
# trunc_normal_(self.mask_token, std=.02)
# if isinstance(self.head, nn.Linear):
# trunc_normal_(self.head.weight, std=.02)
self.apply(self._init_weights)
self.fix_init_weight()
# if isinstance(self.head, nn.Linear):
# self.head.weight.data.mul_(init_scale)
# self.head.bias.data.mul_(init_scale)
def fix_init_weight(self):
def rescale(param, layer_id):
param.div_(math.sqrt(2.0 * layer_id))
for layer_id, layer in enumerate(self.blocks):
rescale(layer.attn.proj.weight.data, layer_id + 1)
rescale(layer.mlp.fc2.weight.data, layer_id + 1)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
def get_classifier(self):
return self.head
def reset_classifier(self, num_classes, global_pool=''):
self.num_classes = num_classes
self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
def forward_features(self, x):
x = self.patch_embed(x)
batch_size, seq_len, _ = x.size()
cls_tokens = self.cls_token.expand(batch_size, -1, -1) # stole cls_tokens impl from Phil Wang, thanks
x = torch.cat((cls_tokens, x), dim=1)
if self.pos_embed is not None:
x = x + self.pos_embed
x = self.pos_drop(x)
rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None
for blk in self.blocks:
if self.use_checkpoint:
x = checkpoint.checkpoint(blk, x, rel_pos_bias)
else:
x = blk(x, rel_pos_bias)
return x
# x = self.norm(x)
# if self.fc_norm is not None:
# t = x[:, 1:, :]
# return self.fc_norm(t.mean(1))
# else:
# return x[:, 0]
def forward(self, x):
x = self.forward_features(x)
# x = self.head(x)
return x
def get_intermediate_layers(self, x):
x = self.patch_embed(x)
batch_size, seq_len, _ = x.size()
cls_tokens = self.cls_token.expand(batch_size, -1, -1) # stole cls_tokens impl from Phil Wang, thanks
x = torch.cat((cls_tokens, x), dim=1)
if self.pos_embed is not None:
x = x + self.pos_embed
x = self.pos_drop(x)
features = []
rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None
for blk in self.blocks:
x = blk(x, rel_pos_bias)
features.append(x)
return features
def interpolate_pos_embed(model, checkpoint_model):
if 'pos_embed' in checkpoint_model:
pos_embed_checkpoint = checkpoint_model['pos_embed'].float()
embedding_size = pos_embed_checkpoint.shape[-1]
num_patches = model.patch_embed.num_patches
num_extra_tokens = model.pos_embed.shape[-2] - num_patches
# height (== width) for the checkpoint position embedding
orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5)
# height (== width) for the new position embedding
new_size = int(num_patches ** 0.5)
# class_token and dist_token are kept unchanged
if orig_size != new_size:
print("Position interpolate from %dx%d to %dx%d" % (orig_size, orig_size, new_size, new_size))
extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens]
# only the position tokens are interpolated
pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:]
pos_tokens = pos_tokens.reshape(-1, orig_size, orig_size, embedding_size).permute(0, 3, 1, 2)
pos_tokens = torch.nn.functional.interpolate(
pos_tokens, size=(new_size, new_size), mode='bicubic', align_corners=False)
pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2)
new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1)
checkpoint_model['pos_embed'] = new_pos_embed
def convert_weights_to_fp16(model: nn.Module):
"""Convert applicable model parameters to fp16"""
def _convert_weights_to_fp16(l):
if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)):
l.weight.data = l.weight.data.half()
if l.bias is not None:
l.bias.data = l.bias.data.half()
# if isinstance(l, (nn.MultiheadAttention, Attention)):
# for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]:
# tensor = getattr(l, attr)
# if tensor is not None:
# tensor.data = tensor.data.half()
model.apply(_convert_weights_to_fp16)
def create_eva_vit_g(img_size=224,drop_path_rate=0.4,use_checkpoint=False,precision="fp16"):
model = VisionTransformer(
img_size=img_size,
patch_size=14,
use_mean_pooling=False,
embed_dim=1408,
depth=39,
num_heads=1408//88,
mlp_ratio=4.3637,
qkv_bias=True,
drop_path_rate=drop_path_rate,
norm_layer=partial(nn.LayerNorm, eps=1e-6),
use_checkpoint=use_checkpoint,
)
url = "https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/eva_vit_g.pth"
cached_file = download_cached_file(
url, check_hash=False, progress=True
)
state_dict = torch.load(cached_file, map_location="cpu")
interpolate_pos_embed(model,state_dict)
incompatible_keys = model.load_state_dict(state_dict, strict=False)
# print(incompatible_keys)
if precision == "fp16":
# model.to("cuda")
convert_weights_to_fp16(model)
return model
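The EVA-g hyperparameters passed to `VisionTransformer` above are tied together by simple arithmetic; a quick sanity check using the values from the call:

```python
embed_dim, head_dim, mlp_ratio = 1408, 88, 4.3637
num_heads = embed_dim // head_dim
mlp_hidden_dim = int(embed_dim * mlp_ratio)  # computed as in Block.__init__

print(num_heads, mlp_hidden_dim)  # -> 16 6144
```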
import logging
import torch
from omegaconf import OmegaConf
from lavis.common.registry import registry
from lavis.models.base_model import BaseModel
from lavis.models.albef_models.albef_classification import AlbefClassification
from lavis.models.albef_models.albef_feature_extractor import AlbefFeatureExtractor
from lavis.models.albef_models.albef_nlvr import AlbefNLVR
from lavis.models.albef_models.albef_pretrain import AlbefPretrain
from lavis.models.albef_models.albef_retrieval import AlbefRetrieval
from lavis.models.albef_models.albef_vqa import AlbefVQA
from lavis.models.alpro_models.alpro_qa import AlproQA
from lavis.models.alpro_models.alpro_retrieval import AlproRetrieval
from lavis.models.blip_models.blip import BlipBase
from lavis.models.blip_models.blip_caption import BlipCaption
from lavis.models.blip_models.blip_classification import BlipClassification
from lavis.models.blip_models.blip_feature_extractor import BlipFeatureExtractor
from lavis.models.blip_models.blip_image_text_matching import BlipITM
from lavis.models.blip_models.blip_nlvr import BlipNLVR
from lavis.models.blip_models.blip_pretrain import BlipPretrain
from lavis.models.blip_models.blip_retrieval import BlipRetrieval
from lavis.models.blip_models.blip_vqa import BlipVQA
from lavis.models.blip2_models.blip2 import Blip2Base
from lavis.models.blip2_models.blip2_opt import Blip2OPT
from lavis.models.blip2_models.blip2_t5 import Blip2T5
from lavis.models.blip2_models.blip2_qformer import Blip2Qformer
from lavis.models.blip2_models.blip2_image_text_matching import Blip2ITM
from lavis.models.pnp_vqa_models.pnp_vqa import PNPVQA
from lavis.models.pnp_vqa_models.pnp_unifiedqav2_fid import PNPUnifiedQAv2FiD
from lavis.models.img2prompt_models.img2prompt_vqa import Img2PromptVQA
from lavis.models.med import XBertLMHeadDecoder
from lavis.models.vit import VisionTransformerEncoder
from lavis.models.clip_models.model import CLIP
from lavis.models.gpt_models.gpt_dialogue import GPTDialogue
from lavis.processors.base_processor import BaseProcessor
__all__ = [
"load_model",
"AlbefClassification",
"AlbefFeatureExtractor",
"AlbefNLVR",
"AlbefVQA",
"AlbefPretrain",
"AlbefRetrieval",
"AlproQA",
"AlproRetrieval",
"BaseModel",
"BlipBase",
"BlipFeatureExtractor",
"BlipCaption",
"BlipClassification",
"BlipITM",
"BlipNLVR",
"BlipPretrain",
"BlipRetrieval",
"BlipVQA",
"Blip2Qformer",
"Blip2Base",
"Blip2ITM",
"Blip2OPT",
"Blip2T5",
"PNPVQA",
"Img2PromptVQA",
"PNPUnifiedQAv2FiD",
"CLIP",
"VisionTransformerEncoder",
"XBertLMHeadDecoder",
"GPTDialogue",
]
def load_model(name, model_type, is_eval=False, device="cpu", checkpoint=None):
"""
Load supported models.
To list all available models and types in registry:
>>> from lavis.models import model_zoo
>>> print(model_zoo)
Args:
name (str): name of the model.
model_type (str): type of the model.
is_eval (bool): whether the model is in eval mode. Default: False.
device (str): device to use. Default: "cpu".
        checkpoint (str): path or URL to a checkpoint. Default: None.
            Note that the checkpoint is expected to have the same keys in its state_dict as the model.
Returns:
model (torch.nn.Module): model.
"""
model = registry.get_model_class(name).from_pretrained(model_type=model_type)
if checkpoint is not None:
model.load_checkpoint(checkpoint)
if is_eval:
model.eval()
if device == "cpu":
model = model.float()
return model.to(device)
def load_preprocess(config):
"""
Load preprocessor configs and construct preprocessors.
If no preprocessor is specified, return BaseProcessor, which does not do any preprocessing.
Args:
config (dict): preprocessor configs.
Returns:
vis_processors (dict): preprocessors for visual inputs.
txt_processors (dict): preprocessors for text inputs.
Key is "train" or "eval" for processors used in training and evaluation respectively.
"""
def _build_proc_from_cfg(cfg):
return (
registry.get_processor_class(cfg.name).from_config(cfg)
if cfg is not None
else BaseProcessor()
)
vis_processors = dict()
txt_processors = dict()
vis_proc_cfg = config.get("vis_processor")
txt_proc_cfg = config.get("text_processor")
if vis_proc_cfg is not None:
vis_train_cfg = vis_proc_cfg.get("train")
vis_eval_cfg = vis_proc_cfg.get("eval")
else:
vis_train_cfg = None
vis_eval_cfg = None
vis_processors["train"] = _build_proc_from_cfg(vis_train_cfg)
vis_processors["eval"] = _build_proc_from_cfg(vis_eval_cfg)
if txt_proc_cfg is not None:
txt_train_cfg = txt_proc_cfg.get("train")
txt_eval_cfg = txt_proc_cfg.get("eval")
else:
txt_train_cfg = None
txt_eval_cfg = None
txt_processors["train"] = _build_proc_from_cfg(txt_train_cfg)
txt_processors["eval"] = _build_proc_from_cfg(txt_eval_cfg)
return vis_processors, txt_processors
def load_model_and_preprocess(name, model_type, is_eval=False, device="cpu"):
"""
Load model and its related preprocessors.
List all available models and types in registry:
>>> from lavis.models import model_zoo
>>> print(model_zoo)
Args:
name (str): name of the model.
model_type (str): type of the model.
is_eval (bool): whether the model is in eval mode. Default: False.
device (str): device to use. Default: "cpu".
Returns:
model (torch.nn.Module): model.
vis_processors (dict): preprocessors for visual inputs.
txt_processors (dict): preprocessors for text inputs.
"""
model_cls = registry.get_model_class(name)
# load model
model = model_cls.from_pretrained(model_type=model_type)
if is_eval:
model.eval()
# load preprocess
cfg = OmegaConf.load(model_cls.default_config_path(model_type))
if cfg is not None:
preprocess_cfg = cfg.preprocess
vis_processors, txt_processors = load_preprocess(preprocess_cfg)
else:
vis_processors, txt_processors = None, None
logging.info(
f"""No default preprocess for model {name} ({model_type}).
This can happen if the model is not finetuned on downstream datasets,
or it is not intended for direct use without finetuning.
"""
)
if device == "cpu" or device == torch.device("cpu"):
model = model.float()
return model.to(device), vis_processors, txt_processors
class ModelZoo:
"""
A utility class to create string representation of available model architectures and types.
>>> from lavis.models import model_zoo
>>> # list all available models
>>> print(model_zoo)
>>> # show total number of models
>>> print(len(model_zoo))
"""
def __init__(self) -> None:
self.model_zoo = {
k: list(v.PRETRAINED_MODEL_CONFIG_DICT.keys())
for k, v in registry.mapping["model_name_mapping"].items()
}
def __str__(self) -> str:
return (
"=" * 50
+ "\n"
+ f"{'Architectures':<30} {'Types'}\n"
+ "=" * 50
+ "\n"
+ "\n".join(
[
f"{name:<30} {', '.join(types)}"
for name, types in self.model_zoo.items()
]
)
)
def __iter__(self):
return iter(self.model_zoo.items())
def __len__(self):
return sum([len(v) for v in self.model_zoo.values()])
model_zoo = ModelZoo()


# --- end of lavis/models/__init__.py ---
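To see what `ModelZoo.__str__` and `__len__` compute, here is a minimal sketch over a hypothetical two-entry mapping (the architecture names below are illustrative only, not the full registry):

```python
# Hypothetical mini zoo mirroring the registry mapping used by ModelZoo.
zoo = {
    "blip_retrieval": ["coco", "flickr"],
    "blip_vqa": ["vqav2", "okvqa", "aokvqa"],
}

# Same row formatting as ModelZoo.__str__: a left-aligned architecture
# column followed by a comma-separated list of model types.
table = "\n".join(f"{name:<30} {', '.join(types)}" for name, types in zoo.items())
total = sum(len(v) for v in zoo.values())  # analogue of len(model_zoo)

print(table)
print(total)  # 5
```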
from copy import deepcopy
import torch
import torch.nn.functional as F
from lavis.common.registry import registry
from lavis.models.albef_models import compute_sim_matrix
from lavis.models.base_model import (
MomentumDistilationMixin,
SharedQueueMixin,
all_gather_with_grad,
concat_all_gather,
)
from lavis.models.blip_models.blip import BlipBase
from lavis.models.blip_models.blip_outputs import (
BlipOutput,
BlipSimilarity,
BlipIntermediateOutput,
)
from lavis.models.med import XBertEncoder
from lavis.models.vit import VisionTransformerEncoder
from torch import nn
@registry.register_model("blip_retrieval")
class BlipRetrieval(BlipBase, MomentumDistilationMixin, SharedQueueMixin):
"""
BLIP retrieval model.
Supported model types:
- coco: fine-tuned BLIP base model on COCO dataset (Karpathy split).
- flickr: fine-tuned BLIP base model on Flickr30k dataset.
Usage:
>>> from lavis.models import load_model
>>> model = load_model("blip_retrieval", "coco")
>>> model = load_model("blip_retrieval", "flickr")
"""
PRETRAINED_MODEL_CONFIG_DICT = {
"coco": "configs/models/blip_retrieval_coco.yaml",
"flickr": "configs/models/blip_retrieval_flickr.yaml",
}
def __init__(
self,
image_encoder,
text_encoder,
queue_size,
alpha=0.4,
embed_dim=256,
momentum=0.995,
negative_all_rank=False,
max_txt_len=35,
):
""" """
super().__init__()
self.tokenizer = self.init_tokenizer()
self.visual_encoder = image_encoder
self.text_encoder = text_encoder
# creating projection layers for ITC
text_width = text_encoder.config.hidden_size
vision_width = image_encoder.vision_width
self.vision_proj = nn.Linear(vision_width, embed_dim)
self.text_proj = nn.Linear(text_width, embed_dim)
self.itm_head = nn.Linear(text_width, 2)
# create the momentum encoder
self.visual_encoder_m = deepcopy(self.visual_encoder)
self.text_encoder_m = deepcopy(self.text_encoder)
self.vision_proj_m = deepcopy(self.vision_proj)
self.text_proj_m = deepcopy(self.text_proj)
self.model_pairs = [
[self.visual_encoder, self.visual_encoder_m],
[self.text_encoder, self.text_encoder_m],
[self.vision_proj, self.vision_proj_m],
[self.text_proj, self.text_proj_m],
]
self.copy_params()
# create the queue
self.register_buffer("image_queue", torch.randn(embed_dim, queue_size))
self.register_buffer("text_queue", torch.randn(embed_dim, queue_size))
self.register_buffer("idx_queue", torch.full((1, queue_size), -100))
self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))
self.image_queue = nn.functional.normalize(self.image_queue, dim=0)
self.text_queue = nn.functional.normalize(self.text_queue, dim=0)
self.queue_size = queue_size
self.momentum = momentum
self.temp = nn.Parameter(0.07 * torch.ones([]))
self.alpha = alpha
self.max_txt_len = max_txt_len
self.negative_all_rank = negative_all_rank
def _rampup_factor(self, epoch, iters, num_iters_per_epoch):
return min(1, (epoch * num_iters_per_epoch + iters) / (2 * num_iters_per_epoch))
def forward(self, samples):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). The input images.
- text_input (list): A list of length batch_size, each element is a string of text/caption.
- image_id (torch.Tensor): A tensor of shape (batch_size, ). The image ids, used to identify same images in batch.
- epoch (int): The current epoch.
- iters (int): The current iteration.
- num_iters_per_epoch (int): The number of iterations per epoch.
Returns:
BlipOutput: A BlipOutput object. See ``lavis.models.blip_models.blip_outputs.BlipOutput`` for more details.
Examples:
>>> import torch
>>> from lavis.models import load_model
>>> model = load_model("blip_retrieval", "coco")
>>> images = torch.randn(4, 3, 384, 384)
>>> text_input = ["caption of image 1", "another caption of image 1", "caption of image 2", "caption of image 3"]
>>> image_id = torch.tensor([1, 1, 2, 3])
>>> samples = {"image": images, "text_input": text_input, "image_id": image_id, "epoch": 0, "iters": 0, "num_iters_per_epoch": 100}
>>> output = model(samples)
>>> output.keys()
odict_keys(['sims', 'intermediate_output', 'loss', 'loss_itc', 'loss_itm'])
"""
image = samples["image"]
caption = samples["text_input"]
idx = samples["image_id"]
alpha = self.alpha * self._rampup_factor(
epoch=samples["epoch"],
iters=samples["iters"],
num_iters_per_epoch=samples["num_iters_per_epoch"],
)
with torch.no_grad():
self.temp.clamp_(0.001, 0.5)
image_embeds = self.visual_encoder.forward_features(image)
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
image.device
)
image_feat = F.normalize(self.vision_proj(image_embeds[:, 0, :]), dim=-1)
text = self.tokenizer(
caption,
padding="max_length",
truncation=True,
max_length=self.max_txt_len,
return_tensors="pt",
).to(image.device)
text_output = self.text_encoder.forward_text(text)
text_embeds = text_output.last_hidden_state
text_feat = F.normalize(self.text_proj(text_embeds[:, 0, :]), dim=-1)
# Image-text Contrastive Learning
idx = idx.view(-1, 1)
idx_all = torch.cat([idx.t(), self.idx_queue.clone().detach()], dim=1)
pos_idx = torch.eq(idx, idx_all).float()
sim_targets = pos_idx / pos_idx.sum(1, keepdim=True)
# get momentum features
with torch.no_grad():
self._momentum_update()
image_embeds_m = self.visual_encoder_m(image)
image_feat_m = F.normalize(
self.vision_proj_m(image_embeds_m[:, 0, :]), dim=-1
)
image_feat_m_all = torch.cat(
[image_feat_m.t(), self.image_queue.clone().detach()], dim=1
)
text_output_m = self.text_encoder_m.forward_text(text)
text_embeds_m = text_output_m.last_hidden_state
text_feat_m = F.normalize(self.text_proj_m(text_embeds_m[:, 0, :]), dim=-1)
text_feat_m_all = torch.cat(
[text_feat_m.t(), self.text_queue.clone().detach()], dim=1
)
sim_i2t_m = image_feat_m @ text_feat_m_all / self.temp
sim_t2i_m = text_feat_m @ image_feat_m_all / self.temp
sim_i2t_targets = (
alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * sim_targets
)
sim_t2i_targets = (
alpha * F.softmax(sim_t2i_m, dim=1) + (1 - alpha) * sim_targets
)
sim_i2t = image_feat @ text_feat_m_all / self.temp
sim_t2i = text_feat @ image_feat_m_all / self.temp
loss_i2t = -torch.sum(
F.log_softmax(sim_i2t, dim=1) * sim_i2t_targets, dim=1
).mean()
loss_t2i = -torch.sum(
F.log_softmax(sim_t2i, dim=1) * sim_t2i_targets, dim=1
).mean()
loss_itc = (loss_i2t + loss_t2i) / 2
self._dequeue_and_enqueue(image_feat_m, text_feat_m, idx)
# Image-text Matching
encoder_input_ids = text.input_ids.clone()
encoder_input_ids[:, 0] = self.tokenizer.enc_token_id
        # forward the positive image-text pair
bs = image.size(0)
output_pos = self.text_encoder(
encoder_input_ids,
attention_mask=text.attention_mask,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
)
idxs = concat_all_gather(idx)
if self.negative_all_rank:
# compute sample similarity
with torch.no_grad():
mask = torch.eq(idx, idxs.t())
image_feat_world = concat_all_gather(image_feat)
text_feat_world = concat_all_gather(text_feat)
sim_i2t = image_feat @ text_feat_world.t() / self.temp
sim_t2i = text_feat @ image_feat_world.t() / self.temp
weights_i2t = F.softmax(sim_i2t, dim=1)
weights_i2t.masked_fill_(mask, 0)
weights_t2i = F.softmax(sim_t2i, dim=1)
weights_t2i.masked_fill_(mask, 0)
image_embeds_world = all_gather_with_grad(image_embeds)
# select a negative image (from all ranks) for each text
image_embeds_neg = []
for b in range(bs):
neg_idx = torch.multinomial(weights_t2i[b], 1).item()
image_embeds_neg.append(image_embeds_world[neg_idx])
image_embeds_neg = torch.stack(image_embeds_neg, dim=0)
# select a negative text (from all ranks) for each image
input_ids_world = concat_all_gather(encoder_input_ids)
att_mask_world = concat_all_gather(text.attention_mask)
text_ids_neg = []
text_atts_neg = []
for b in range(bs):
neg_idx = torch.multinomial(weights_i2t[b], 1).item()
text_ids_neg.append(input_ids_world[neg_idx])
text_atts_neg.append(att_mask_world[neg_idx])
else:
with torch.no_grad():
mask = torch.eq(idx, idx.t())
sim_i2t = image_feat @ text_feat.t() / self.temp
sim_t2i = text_feat @ image_feat.t() / self.temp
weights_i2t = F.softmax(sim_i2t, dim=1)
weights_i2t.masked_fill_(mask, 0)
weights_t2i = F.softmax(sim_t2i, dim=1)
weights_t2i.masked_fill_(mask, 0)
# select a negative image (from same rank) for each text
image_embeds_neg = []
for b in range(bs):
neg_idx = torch.multinomial(weights_t2i[b], 1).item()
image_embeds_neg.append(image_embeds[neg_idx])
image_embeds_neg = torch.stack(image_embeds_neg, dim=0)
# select a negative text (from same rank) for each image
text_ids_neg = []
text_atts_neg = []
for b in range(bs):
neg_idx = torch.multinomial(weights_i2t[b], 1).item()
text_ids_neg.append(encoder_input_ids[neg_idx])
text_atts_neg.append(text.attention_mask[neg_idx])
text_ids_neg = torch.stack(text_ids_neg, dim=0)
text_atts_neg = torch.stack(text_atts_neg, dim=0)
text_ids_all = torch.cat([encoder_input_ids, text_ids_neg], dim=0)
text_atts_all = torch.cat([text.attention_mask, text_atts_neg], dim=0)
image_embeds_all = torch.cat([image_embeds_neg, image_embeds], dim=0)
image_atts_all = torch.cat([image_atts, image_atts], dim=0)
output_neg = self.text_encoder(
text_ids_all,
attention_mask=text_atts_all,
encoder_hidden_states=image_embeds_all,
encoder_attention_mask=image_atts_all,
return_dict=True,
)
vl_embeddings = torch.cat(
[
output_pos.last_hidden_state[:, 0, :],
output_neg.last_hidden_state[:, 0, :],
],
dim=0,
)
itm_logits = self.itm_head(vl_embeddings)
itm_labels = torch.cat(
[torch.ones(bs, dtype=torch.long), torch.zeros(2 * bs, dtype=torch.long)],
dim=0,
).to(self.device)
loss_itm = F.cross_entropy(itm_logits, itm_labels)
return BlipOutput(
loss=loss_itc + loss_itm,
loss_itc=loss_itc,
loss_itm=loss_itm,
sims=BlipSimilarity(
sim_i2t=sim_i2t,
sim_t2i=sim_t2i,
sim_i2t_m=sim_i2t_m,
sim_t2i_m=sim_t2i_m,
sim_i2t_targets=sim_i2t_targets,
sim_t2i_targets=sim_t2i_targets,
),
intermediate_output=BlipIntermediateOutput(
image_embeds=image_embeds,
image_embeds_m=image_embeds_m,
text_embeds=text_embeds,
text_embeds_m=text_embeds_m,
encoder_output=output_pos,
encoder_output_neg=output_neg,
itm_logits=itm_logits,
itm_labels=itm_labels,
),
)
def reset_queue_ptr(self):
self.queue_ptr = torch.zeros(1, dtype=torch.long)
@classmethod
def from_config(cls, cfg=None):
# set from_pretrained=True to load weights for 'bert-base-uncased'
image_encoder = VisionTransformerEncoder.from_config(cfg)
text_encoder = XBertEncoder.from_config(cfg)
embed_dim = cfg.get("embed_dim", 256)
momentum = cfg.get("momentum", 0.995)
alpha = cfg.get("alpha", 0.4)
negative_all_rank = cfg.get("negative_all_rank", False)
queue_size = cfg.get("queue_size", 0)
max_txt_len = cfg.get("max_txt_len", 35)
model = cls(
image_encoder=image_encoder,
text_encoder=text_encoder,
queue_size=queue_size,
alpha=alpha,
embed_dim=embed_dim,
momentum=momentum,
negative_all_rank=negative_all_rank,
max_txt_len=max_txt_len,
)
model.load_checkpoint_from_config(cfg)
model.reset_queue_ptr()
return model
def compute_sim_matrix(self, data_loader, task_cfg):
"""
Compute similarity i2t, t2i matrix for the given data loader.
"""
k_test = task_cfg.k_test
        return compute_sim_matrix(model=self, data_loader=data_loader, k_test=k_test)


# --- end of lavis/models/blip_models/blip_retrieval.py ---
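The momentum encoders in `BlipRetrieval` are exponential moving averages of the online encoders (updated via `MomentumDistilationMixin._momentum_update`). A scalar sketch of that update rule, with made-up values, shows how the momentum copy slowly tracks the online parameter:

```python
# EMA update: param_m <- m * param_m + (1 - m) * param, applied each step.
momentum = 0.995   # same default as the model's `momentum` argument
param = 1.0        # online parameter (held fixed here for clarity)
param_m = 0.0      # momentum copy

for step in range(3):
    param_m = momentum * param_m + (1 - momentum) * param

# With a constant target, after k steps param_m = 1 - momentum**k.
print(param_m)  # ~0.0149
```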
import torch
import torch.nn.functional as F
from lavis.common.registry import registry
from lavis.models.blip_models.blip import BlipBase
from torch import nn
from lavis.models.med import XBertEncoder
from lavis.models.vit import VisionTransformerEncoder
@registry.register_model("blip_image_text_matching")
class BlipITM(BlipBase):
"""
BLIP Image-Text Matching (ITM) model.
Supported model types:
- base: fine-tuned BLIP retrieval weights on COCO dataset (Karpathy split).
- large: fine-tuned BLIP retrieval weights on COCO dataset (Karpathy split).
Usage:
>>> from lavis.models import load_model
>>> model = load_model("blip_image_text_matching", "base")
>>> model = load_model("blip_image_text_matching", "large")
"""
PRETRAINED_MODEL_CONFIG_DICT = {
"base": "configs/models/blip_itm_base.yaml",
"large": "configs/models/blip_itm_large.yaml",
}
def __init__(self, image_encoder, text_encoder, embed_dim=256, max_txt_len=35):
super().__init__()
self.tokenizer = self.init_tokenizer()
self.text_encoder = text_encoder
self.visual_encoder = image_encoder
self.max_txt_len = max_txt_len
# creating projection layers for ITC
text_width = text_encoder.config.hidden_size
vision_width = image_encoder.vision_width
self.vision_proj = nn.Linear(vision_width, embed_dim)
self.text_proj = nn.Linear(text_width, embed_dim)
self.itm_head = nn.Linear(text_width, 2)
def forward(self, samples, match_head="itm"):
image = samples["image"]
caption = samples["text_input"]
image_embeds = self.visual_encoder.forward_features(image)
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
image.device
)
text = self.tokenizer(
caption,
padding="longest",
truncation=True,
max_length=self.max_txt_len,
return_tensors="pt",
).to(image.device)
if match_head == "itm":
encoder_input_ids = text.input_ids.clone()
encoder_input_ids[:, 0] = self.tokenizer.enc_token_id # extra code
output = self.text_encoder(
encoder_input_ids,
attention_mask=text.attention_mask,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
)
itm_output = self.itm_head(output.last_hidden_state[:, 0, :])
return itm_output
elif match_head == "itc":
text_output = self.text_encoder(
text.input_ids,
attention_mask=text.attention_mask,
return_dict=True,
mode="text",
)
image_feat = F.normalize(self.vision_proj(image_embeds[:, 0, :]), dim=-1)
text_feat = F.normalize(
self.text_proj(text_output.last_hidden_state[:, 0, :]), dim=-1
)
sim = image_feat @ text_feat.t()
return sim
def itm_rank(self, image_embeds, image_atts, encoder_input_ids, match_head='itm'):
# breakpoint()
encoder_input_ids = encoder_input_ids.clone()
encoder_input_ids = encoder_input_ids[:, 3:]
text_attention_mask = (encoder_input_ids != self.tokenizer.pad_token_id).long()
if match_head == 'itm':
# encoder_input_ids = encoder_input_ids.clone()
encoder_input_ids[:, 0] = self.tokenizer.enc_token_id
output = self.text_encoder(encoder_input_ids,
attention_mask=text_attention_mask,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
)
# print(output.last_hidden_state.shape)
itm_output = self.itm_head(output.last_hidden_state[:, 0, :])
itm_output = F.softmax(itm_output, dim=1)[:,1]
return itm_output #, mask, token_length
elif match_head == 'itc':
encoder_input_ids[:, 0] = self.tokenizer.cls_token_id
text_output = self.text_encoder(encoder_input_ids, attention_mask=text_attention_mask,
return_dict=True, mode='text')
image_feat = F.normalize(self.vision_proj(image_embeds[:, 0, :]), dim=-1)
text_feat = F.normalize(self.text_proj(text_output.last_hidden_state[:, 0, :]), dim=-1)
sim = image_feat @ text_feat.t()
return sim
@classmethod
def from_config(cls, cfg=None):
image_encoder = VisionTransformerEncoder.from_config(cfg)
text_encoder = XBertEncoder.from_config(cfg)
embed_dim = cfg.get("embed_dim", 256)
max_txt_len = cfg.get("max_txt_len", 35)
model = cls(
image_encoder=image_encoder,
text_encoder=text_encoder,
embed_dim=embed_dim,
max_txt_len=max_txt_len,
)
model.load_checkpoint_from_config(cfg)
return model
def compute_gradcam(model, visual_input, text_input, tokenized_text, block_num=6):
model.text_encoder.base_model.base_model.encoder.layer[
block_num
].crossattention.self.save_attention = True
output = model({"image": visual_input, "text_input": text_input}, match_head="itm")
loss = output[:, 1].sum()
model.zero_grad()
loss.backward()
with torch.no_grad():
mask = tokenized_text.attention_mask.view(
tokenized_text.attention_mask.size(0), 1, -1, 1, 1
) # (bsz,1,token_len, 1,1)
token_length = tokenized_text.attention_mask.sum(dim=-1) - 2
token_length = token_length.cpu()
# grads and cams [bsz, num_head, seq_len, image_patch]
grads = model.text_encoder.base_model.base_model.encoder.layer[
block_num
].crossattention.self.get_attn_gradients()
cams = model.text_encoder.base_model.base_model.encoder.layer[
block_num
].crossattention.self.get_attention_map()
# assume using vit with 576 num image patch
cams = cams[:, :, :, 1:].reshape(visual_input.size(0), 12, -1, 24, 24) * mask
grads = (
grads[:, :, :, 1:].clamp(0).reshape(visual_input.size(0), 12, -1, 24, 24)
* mask
)
gradcams = cams * grads
gradcam_list = []
for ind in range(visual_input.size(0)):
token_length_ = token_length[ind]
gradcam = gradcams[ind].mean(0).cpu().detach()
# [enc token gradcam, average gradcam across token, gradcam for individual token]
gradcam = torch.cat(
(
gradcam[0:1, :],
gradcam[1 : token_length_ + 1, :].sum(dim=0, keepdim=True)
/ token_length_,
gradcam[1:, :],
)
)
gradcam_list.append(gradcam)
    return gradcam_list, output


# --- end of lavis/models/blip_models/blip_image_text_matching.py ---
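The core of `compute_gradcam` above is the elementwise product of cross-attention maps with their ReLU-clamped gradients, averaged over heads. A small sketch with toy tensors (shapes are made up: 2 heads, 3 text tokens, a 4x4 patch grid):

```python
import torch

# Toy shapes: batch 1, 2 heads, 3 text tokens, 4x4 image patch grid.
cams = torch.rand(1, 2, 3, 4, 4)              # attention maps (non-negative)
grads = torch.randn(1, 2, 3, 4, 4).clamp(0)   # keep only positive gradients

gradcams = cams * grads          # relevance per head / token / patch
gradcam = gradcams[0].mean(0)    # average over heads -> (3, 4, 4)
print(gradcam.shape)  # torch.Size([3, 4, 4])
```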
import torch
import torch.nn.functional as F
from lavis.common.registry import registry
from lavis.models.base_model import tile
from lavis.models.blip_models.blip import BlipBase
from lavis.models.blip_models.blip_outputs import (
BlipOutput,
BlipIntermediateOutput,
)
from lavis.models.med import XBertEncoder, XBertLMHeadDecoder
from lavis.models.vit import VisionTransformerEncoder
@registry.register_model("blip_vqa")
class BlipVQA(BlipBase):
"""
BLIP VQA models.
Supported model types:
- base: vqa model initialized with pre-trained BLIP base model on 115M image-text pairs after CapFilt; not fine-tuned.
- vqav2: fine-tuned BLIP base model on VQA v2.0 dataset.
Usage:
>>> from lavis.models import load_model
>>> model = load_model("blip_vqa", "vqav2")
>>> model = load_model("blip_vqa", "okvqa")
>>> model = load_model("blip_vqa", "aokvqa")
"""
PRETRAINED_MODEL_CONFIG_DICT = {
"vqav2": "configs/models/blip_vqav2.yaml",
"okvqa": "configs/models/blip_vqa_okvqa.yaml",
"aokvqa": "configs/models/blip_vqa_aokvqa.yaml",
}
def __init__(self, image_encoder, text_encoder, text_decoder, max_txt_len=35):
super().__init__()
self.tokenizer = self.init_tokenizer()
self.visual_encoder = image_encoder
self.text_encoder = text_encoder
self.text_decoder = text_decoder
self.max_txt_len = max_txt_len
def forward(self, samples):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). Default H=480, W=480.
- text_input (list): A list of strings, each string is a question
- answer (list): A list of strings, each string is an answer
- weight (torch.Tensor): A tensor used to weigh each answer in the loss computation.
The shape of the tensor is (sum(n_answers),)
- n_answers (torch.Tensor): A tensor shape (batch_size,) containing the number of answers
for each question in the batch.
Returns:
A BlipOutput object containing loss and intermediate outputs,
see :class:`lavis.models.blip_outputs.BlipOutput` for more details.
Examples:
```python
>>> import torch
>>> from lavis.models import load_model
>>> model = load_model("blip_vqa")
>>> samples = {
... "image": torch.rand(2, 3, 480, 480),
... "text_input": ["What is this?", "What is that?"],
... "answer": ["cat", "cat", "dog"],
... "weight": torch.tensor([1.0, 1.0, 1.0]),
... "n_answers": torch.tensor([2, 1]),
... }
>>> output = model(samples)
>>> output.keys()
odict_keys(['intermediate_output', 'loss'])
>>> output.intermediate_output.keys()
odict_keys(['image_embeds', 'encoder_output', 'decoder_output', 'decoder_labels'])
```
"""
encoder_output, image_embeds = self.forward_encoder(samples)
loss, decoder_output, decoder_targets = self.forward_decoder(
samples=samples, encoder_out=encoder_output
)
return BlipOutput(
loss=loss,
intermediate_output=BlipIntermediateOutput(
image_embeds=image_embeds,
encoder_output=encoder_output,
decoder_output=decoder_output,
decoder_labels=decoder_targets,
),
)
def forward_encoder(self, samples):
questions = samples["text_input"]
questions = self.tokenizer(
questions,
padding="longest",
truncation=True,
max_length=self.max_txt_len,
return_tensors="pt",
).to(self.device)
questions.input_ids[:, 0] = self.tokenizer.enc_token_id
samples.update({"tokenized_text": questions})
image_embeds = self.visual_encoder.forward_features(samples["image"])
encoder_output = self.text_encoder.forward_automask(
tokenized_text=samples["tokenized_text"], visual_embeds=image_embeds
)
return encoder_output, image_embeds
def forward_decoder(self, samples, encoder_out, **kwargs):
answers = self.tokenizer(
samples["answer"], padding="longest", return_tensors="pt"
).to(self.device)
answers.input_ids[:, 0] = self.tokenizer.bos_token_id
answer_targets = answers.input_ids.masked_fill(
answers.input_ids == self.tokenizer.pad_token_id, -100
)
question_states = []
question_atts = []
question = samples["tokenized_text"]
question_output = encoder_out
for b, n in enumerate(samples["n_answers"]):
question_states += [question_output.last_hidden_state[b]] * n
question_atts += [question.attention_mask[b]] * n
question_states = torch.stack(question_states, dim=0)
question_atts = torch.stack(question_atts, dim=0)
answer_output = self.text_decoder(
answers.input_ids,
attention_mask=answers.attention_mask,
encoder_hidden_states=question_states,
encoder_attention_mask=question_atts,
labels=answer_targets,
return_dict=True,
reduction="none",
)
loss = samples["weight"] * answer_output.loss
bsz = samples["image"].size(0)
loss = loss.sum() / bsz
return loss, answer_output, answer_targets
def predict_answers(
self,
samples,
num_beams=3,
inference_method="rank",
max_len=10,
min_len=1,
num_ans_candidates=128,
answer_list=None,
**kwargs
):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). Default H=480, W=480.
- text_input (str or [str]): String or a list of strings, each string is a question.
                The number of questions must be equal to the batch size. If a single string, it will first be converted to a list of length 1.
num_beams (int): Number of beams for beam search. 1 means no beam search.
inference_method (str): Inference method. One of "rank", "generate".
- If "rank", the model will return answers with the highest probability from the answer list.
- If "generate", the model will generate answers.
max_len (int): Maximum length of generated answers.
min_len (int): Minimum length of generated answers.
num_ans_candidates (int): Number of answer candidates, used to filter out answers with low probability.
answer_list (list): A list of strings, each string is an answer.
Returns:
List: A list of strings, each string is an answer.
Examples:
```python
>>> from PIL import Image
>>> from lavis.models import load_model_and_preprocess
>>> model, vis_processors, txt_processors = load_model_and_preprocess("blip_vqa", "vqav2")
>>> raw_image = Image.open("docs/data/merlion.png").convert("RGB")
>>> question = "Which city is this photo taken?"
>>> image = vis_processors["eval"](raw_image).unsqueeze(0)
>>> question = txt_processors["eval"](question)
>>> samples = {"image": image, "text_input": [question]}
>>> answers = model.predict_answers(samples)
>>> answers
['singapore']
>>> answer_list = ["Singapore", "London", "Palo Alto", "Tokyo"]
>>> answers = model.predict_answers(samples, answer_list=answer_list)
>>> answers
['Singapore']
```
"""
assert inference_method in [
"rank",
"generate",
], "Inference method must be one of 'rank' or 'generate', got {}.".format(
inference_method
)
if isinstance(samples["text_input"], str):
samples["text_input"] = [samples["text_input"]]
assert len(samples["text_input"]) == samples["image"].size(
0
), "The number of questions must be equal to the batch size."
if inference_method == "generate":
return self._generate_answers(
samples, num_beams=num_beams, max_length=max_len, min_length=min_len
)
elif inference_method == "rank":
assert answer_list is not None, "answer_list must be provided for ranking"
num_ans_candidates = min(num_ans_candidates, len(answer_list))
return self._rank_answers(
samples, answer_list=answer_list, num_ans_candidates=num_ans_candidates
)
def _generate_answers(self, samples, num_beams=3, max_length=10, min_length=1):
encoder_out, _ = self.forward_encoder(samples)
question_output = encoder_out
question_states = question_output.last_hidden_state.repeat_interleave(
num_beams, dim=0
)
question_atts = torch.ones(question_states.size()[:-1], dtype=torch.long).to(
self.device
)
model_kwargs = {
"encoder_hidden_states": question_states,
"encoder_attention_mask": question_atts,
}
bsz = samples["image"].size(0)
bos_ids = torch.full(
(bsz, 1), fill_value=self.tokenizer.bos_token_id, device=self.device
)
outputs = self.text_decoder.generate(
input_ids=bos_ids,
max_length=max_length,
min_length=min_length,
num_beams=num_beams,
eos_token_id=self.tokenizer.sep_token_id,
pad_token_id=self.tokenizer.pad_token_id,
**model_kwargs
)
# collect answers
answers = []
for output in outputs:
answer = self.tokenizer.decode(output, skip_special_tokens=True)
answers.append(answer)
return answers
def _rank_answers(self, samples, answer_list, num_ans_candidates):
"""
        Generate the first token of each answer using the decoder and select the
        ${num_ans_candidates} most probable ones. Then select the answers from the
        answer list that start with these probable tokens. Lastly, use the selected
        answers as ground-truth labels for decoding and compute the LM loss.
        Return the answers that minimize the loss.
"""
answer_candidates = self.tokenizer(
answer_list, padding="longest", return_tensors="pt"
).to(self.device)
answer_candidates.input_ids[:, 0] = self.tokenizer.bos_token_id
answer_ids = answer_candidates.input_ids
answer_atts = answer_candidates.attention_mask
question_output, _ = self.forward_encoder(samples)
question_states = question_output.last_hidden_state
tokenized_question = samples["tokenized_text"]
question_atts = tokenized_question.attention_mask
num_ques = question_states.size(0)
start_ids = answer_ids[0, 0].repeat(num_ques, 1) # bos token
start_output = self.text_decoder(
start_ids,
encoder_hidden_states=question_states,
encoder_attention_mask=question_atts,
return_dict=True,
reduction="none",
)
logits = start_output.logits[:, 0, :] # first token's logit
# topk_probs: top-k probability
# topk_ids: [num_question, k]
answer_first_token = answer_ids[:, 1]
prob_first_token = F.softmax(logits, dim=1).index_select(
dim=1, index=answer_first_token
)
topk_probs, topk_ids = prob_first_token.topk(num_ans_candidates, dim=1)
# answer input: [num_question*k, answer_len]
input_ids = []
input_atts = []
for b, topk_id in enumerate(topk_ids):
input_ids.append(answer_ids.index_select(dim=0, index=topk_id))
input_atts.append(answer_atts.index_select(dim=0, index=topk_id))
input_ids = torch.cat(input_ids, dim=0)
input_atts = torch.cat(input_atts, dim=0)
targets_ids = input_ids.masked_fill(
input_ids == self.tokenizer.pad_token_id, -100
)
# repeat encoder's output for top-k answers
question_states = tile(question_states, 0, num_ans_candidates)
question_atts = tile(question_atts, 0, num_ans_candidates)
output = self.text_decoder(
input_ids,
attention_mask=input_atts,
encoder_hidden_states=question_states,
encoder_attention_mask=question_atts,
labels=targets_ids,
return_dict=True,
reduction="none",
)
log_probs_sum = -output.loss
log_probs_sum = log_probs_sum.view(num_ques, num_ans_candidates)
max_topk_ids = log_probs_sum.argmax(dim=1)
max_ids = topk_ids[max_topk_ids >= 0, max_topk_ids]
answers = [answer_list[max_id] for max_id in max_ids]
return answers
@classmethod
def from_config(cls, cfg=None):
image_encoder = VisionTransformerEncoder.from_config(cfg)
# text encoder + multimodal encoder
text_encoder = XBertEncoder.from_config(cfg)
text_decoder = XBertLMHeadDecoder.from_config(cfg)
max_txt_len = cfg.get("max_txt_len", 35)
model = cls(
image_encoder=image_encoder,
text_encoder=text_encoder,
text_decoder=text_decoder,
max_txt_len=max_txt_len,
)
model.load_checkpoint_from_config(cfg)
        return model

# Source: lavis/models/blip_models/blip_vqa.py (salesforce-lavis 1.0.2, PyPI)
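The two-stage ranking strategy in `_rank_answers` above can be sketched in plain Python, independent of torch: score each candidate answer's first token, keep the top-k candidates, then return the one with the highest full-sequence log-probability (equivalently, the lowest LM loss). The function and variable names here are illustrative, not part of the library.

```python
def rank_answers(first_token_logprobs, answer_logprobs, k=2):
    """Toy analogue of BlipVQA._rank_answers.

    first_token_logprobs: {answer: log P(first token | question)}
    answer_logprobs:      {answer: total log P(answer | question)}
    """
    # Step 1: keep the k answers whose first token is most probable.
    topk = sorted(first_token_logprobs, key=first_token_logprobs.get, reverse=True)[:k]
    # Step 2: among those, return the answer with the highest full-sequence
    # log-probability (i.e., the smallest LM loss).
    return max(topk, key=lambda a: answer_logprobs[a])

first = {"singapore": -0.1, "london": -2.0, "tokyo": -1.5}
full = {"singapore": -1.2, "london": -3.0, "tokyo": -0.9}
print(rank_answers(first, full, k=2))  # 'tokyo'
```

Note that an answer can win overall despite not having the single most probable first token, which is why the first stage keeps k candidates rather than one.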
from dataclasses import dataclass
from typing import Optional
import torch
from transformers.modeling_outputs import (
ModelOutput,
BaseModelOutputWithPoolingAndCrossAttentions,
CausalLMOutputWithCrossAttentions,
)
@dataclass
class BlipSimilarity(ModelOutput):
sim_i2t: torch.FloatTensor = None
sim_t2i: torch.FloatTensor = None
sim_i2t_m: Optional[torch.FloatTensor] = None
sim_t2i_m: Optional[torch.FloatTensor] = None
sim_i2t_targets: Optional[torch.FloatTensor] = None
sim_t2i_targets: Optional[torch.FloatTensor] = None
@dataclass
class BlipIntermediateOutput(ModelOutput):
"""
Data class for intermediate outputs of BLIP models.
image_embeds (torch.FloatTensor): Image embeddings, shape (batch_size, num_patches, embed_dim).
text_embeds (torch.FloatTensor): Text embeddings, shape (batch_size, seq_len, embed_dim).
image_embeds_m (torch.FloatTensor): Image embeddings from momentum visual encoder, shape (batch_size, num_patches, embed_dim).
text_embeds_m (torch.FloatTensor): Text embeddings from momentum text encoder, shape (batch_size, seq_len, embed_dim).
encoder_output (BaseModelOutputWithPoolingAndCrossAttentions): output from the image-grounded text encoder.
encoder_output_neg (BaseModelOutputWithPoolingAndCrossAttentions): output from the image-grounded text encoder for negative pairs.
decoder_output (CausalLMOutputWithCrossAttentions): output from the image-grounded text decoder.
decoder_labels (torch.LongTensor): labels for the captioning loss.
itm_logits (torch.FloatTensor): logits for the image-text matching loss, shape (batch_size * 3, 2).
itm_labels (torch.LongTensor): labels for the image-text matching loss, shape (batch_size * 3,)
"""
# uni-modal features
image_embeds: torch.FloatTensor = None
text_embeds: Optional[torch.FloatTensor] = None
image_embeds_m: Optional[torch.FloatTensor] = None
text_embeds_m: Optional[torch.FloatTensor] = None
# intermediate outputs of multimodal encoder
encoder_output: Optional[BaseModelOutputWithPoolingAndCrossAttentions] = None
encoder_output_neg: Optional[BaseModelOutputWithPoolingAndCrossAttentions] = None
itm_logits: Optional[torch.FloatTensor] = None
itm_labels: Optional[torch.LongTensor] = None
# intermediate outputs of multimodal decoder
decoder_output: Optional[CausalLMOutputWithCrossAttentions] = None
decoder_labels: Optional[torch.LongTensor] = None
@dataclass
class BlipOutput(ModelOutput):
# some finetuned models (e.g. BlipVQA) do not compute similarity, thus optional.
sims: Optional[BlipSimilarity] = None
intermediate_output: BlipIntermediateOutput = None
loss: Optional[torch.FloatTensor] = None
loss_itc: Optional[torch.FloatTensor] = None
loss_itm: Optional[torch.FloatTensor] = None
loss_lm: Optional[torch.FloatTensor] = None
@dataclass
class BlipOutputWithLogits(BlipOutput):
logits: torch.FloatTensor = None
logits_m: torch.FloatTensor = None
@dataclass
class BlipOutputFeatures(ModelOutput):
"""
Data class of features from BlipFeatureExtractor.
Args:
image_embeds: (torch.FloatTensor) of shape (batch_size, num_patches+1, embed_dim), optional
image_features: (torch.FloatTensor) of shape (batch_size, num_patches+1, feature_dim), optional
text_embeds: (torch.FloatTensor) of shape (batch_size, sequence_length+1, embed_dim), optional
text_features: (torch.FloatTensor) of shape (batch_size, sequence_length+1, feature_dim), optional
The first embedding or feature is for the [CLS] token.
Features are obtained by projecting the corresponding embedding into a normalized low-dimensional space.
"""
image_embeds: Optional[torch.FloatTensor] = None
image_embeds_proj: Optional[torch.FloatTensor] = None
text_embeds: Optional[torch.FloatTensor] = None
text_embeds_proj: Optional[torch.FloatTensor] = None
    multimodal_embeds: Optional[torch.FloatTensor] = None

# Source: lavis/models/blip_models/blip_outputs.py (salesforce-lavis 1.0.2, PyPI)
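The containers above follow the HuggingFace `ModelOutput` pattern: dataclasses whose fields default to `None`, so each model variant populates only the outputs it actually computes (the comment on `BlipOutput.sims` notes that, e.g., BlipVQA skips similarity). A minimal stdlib-only analogue of the pattern:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Output:
    # Every field is optional; unset outputs simply stay None.
    loss: Optional[float] = None
    loss_itc: Optional[float] = None
    loss_itm: Optional[float] = None

# A fine-tuned head that does not compute ITM just leaves that field unset.
out = Output(loss=1.5, loss_itc=1.5)
print(out.loss_itm)  # None
```

Callers can then test fields for `None` instead of branching on the model type.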
from copy import deepcopy
import torch
import torch.nn.functional as F
from lavis.common.registry import registry
from lavis.models.base_model import MomentumDistilationMixin, SharedQueueMixin
from lavis.models.blip_models import tie_encoder_decoder_weights
from lavis.models.blip_models.blip import BlipBase
from lavis.models.blip_models.blip_outputs import (
BlipOutput,
BlipSimilarity,
BlipIntermediateOutput,
)
from lavis.models.med import XBertEncoder, XBertLMHeadDecoder
from lavis.models.vit import VisionTransformerEncoder
from torch import nn
@registry.register_model("blip_pretrain")
class BlipPretrain(BlipBase, SharedQueueMixin, MomentumDistilationMixin):
"""
BLIP pretrain model.
Supported model types:
- base: BLIP base model before pretraining.
"""
PRETRAINED_MODEL_CONFIG_DICT = {
"base": "configs/models/blip_pretrain_base.yaml",
# "large": "configs/models/blip_pretrain_large.yaml",
}
def __init__(
self,
image_encoder,
text_encoder,
text_decoder,
queue_size,
alpha=0.4,
embed_dim=256,
momentum=0.995,
tie_enc_dec_weights=True,
max_txt_len=30,
):
super().__init__()
self.tokenizer = self.init_tokenizer()
text_encoder.resize_token_embeddings(len(self.tokenizer))
text_decoder.resize_token_embeddings(len(self.tokenizer))
if tie_enc_dec_weights:
tie_encoder_decoder_weights(
encoder=text_encoder,
decoder=text_decoder.bert,
base_model_prefix="",
skip_key="/attention",
)
self.visual_encoder = image_encoder
self.text_encoder = text_encoder
self.text_decoder = text_decoder
# creating projection layers for ITC
text_width = text_encoder.config.hidden_size
vision_width = image_encoder.vision_width
self.vision_proj = nn.Linear(vision_width, embed_dim)
self.text_proj = nn.Linear(text_width, embed_dim)
self.itm_head = nn.Linear(text_width, 2)
# create the momentum encoder
self.visual_encoder_m = deepcopy(self.visual_encoder)
self.text_encoder_m = deepcopy(self.text_encoder)
self.vision_proj_m = deepcopy(self.vision_proj)
self.text_proj_m = deepcopy(self.text_proj)
self.model_pairs = [
[self.visual_encoder, self.visual_encoder_m],
[self.text_encoder, self.text_encoder_m],
[self.vision_proj, self.vision_proj_m],
[self.text_proj, self.text_proj_m],
]
self.copy_params()
# create the queue
self.register_buffer("image_queue", torch.randn(embed_dim, queue_size))
self.register_buffer("text_queue", torch.randn(embed_dim, queue_size))
self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))
self.image_queue = nn.functional.normalize(self.image_queue, dim=0)
self.text_queue = nn.functional.normalize(self.text_queue, dim=0)
self.queue_size = queue_size
self.momentum = momentum
self.temp = nn.Parameter(0.07 * torch.ones([]))
self.alpha = alpha
self.max_txt_len = max_txt_len
def _rampup_factor(self, epoch, iters, num_iters_per_epoch):
return min(1, (epoch * num_iters_per_epoch + iters) / (2 * num_iters_per_epoch))
def forward(self, samples):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). The input images. Default: H=224, W=224.
- text_input (list): A list of length batch_size, each element is a string of text/caption.
- epoch (int): The current epoch.
- iters (int): The current iteration.
- num_iters_per_epoch (int): The number of iterations per epoch.
Returns:
BlipOutput: A BlipOutput object containing loss and intermediate output. See ``lavis.models.blip_models.blip_outputs.BlipOutput`` for more details.
Examples:
>>> import torch
>>> from lavis.models import load_model
>>> model = load_model("blip_pretrain", "base")
>>> images = torch.randn(4, 3, 224, 224)
>>> text_input = ["caption of image 1", "another caption of image 1", "caption of image 2", "caption of image 3"]
>>> samples = {"image": images, "text_input": text_input, "epoch": 0, "iters": 0, "num_iters_per_epoch": 100}
>>> output = model(samples)
>>> output.keys()
odict_keys(['sims', 'intermediate_output', 'loss', 'loss_itc', 'loss_itm', 'loss_lm'])
>>> output.intermediate_output.keys()
odict_keys(['image_embeds', 'text_embeds', 'image_embeds_m', 'text_embeds_m', 'encoder_output', 'encoder_output_neg', 'itm_logits', 'itm_labels', 'decoder_output', 'decoder_labels'])
>>> output.intermediate_output.image_embeds.shape
>>> # shape: (batch_size, num_patches, embed_dim)
torch.Size([4, 197, 768])
>>> output.intermediate_output.text_embeds.shape
>>> # shape: (batch_size, max_txt_len, embed_dim)
torch.Size([4, 30, 768])
>>> output.intermediate_output.image_embeds_m.shape
>>> # shape: (batch_size, num_patches, embed_dim)
torch.Size([4, 197, 768])
>>> output.intermediate_output.text_embeds_m.shape
>>> # shape: (batch_size, max_txt_len, embed_dim)
torch.Size([4, 30, 768])
>>> output.intermediate_output.itm_logits.shape
>>> # shape: (batch_size * 3, 2)
torch.Size([12, 2])
>>> output.intermediate_output.itm_labels.shape
>>> # shape: (batch_size * 3,)
torch.Size([12])
>>> output.intermediate_output.encoder_output.last_hidden_state.shape
>>> # shape: (batch_size, max_txt_len, embed_dim)
torch.Size([4, 30, 768])
            >>> output.intermediate_output.encoder_output_neg.last_hidden_state.shape
>>> # shape: (batch_size, max_txt_len, embed_dim)
torch.Size([4, 30, 768])
>>> output.intermediate_output.decoder_output.logits.shape
>>> # shape: (batch_size, max_txt_len, vocab_size)
torch.Size([4, 30, 30524])
>>> output.intermediate_output.decoder_labels.shape
>>> # shape: (batch_size, max_txt_len)
torch.Size([4, 30])
"""
image = samples["image"]
caption = samples["text_input"]
alpha = self.alpha * self._rampup_factor(
epoch=samples["epoch"],
iters=samples["iters"],
num_iters_per_epoch=samples["num_iters_per_epoch"],
)
with torch.no_grad():
self.temp.clamp_(0.001, 0.5)
# image embeddings and features
image_embeds = self.visual_encoder.forward_features(image)
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
image.device
)
image_feat = F.normalize(self.vision_proj(image_embeds[:, 0, :]), dim=-1)
text = self.tokenizer(
caption,
padding="max_length",
truncation=True,
max_length=self.max_txt_len,
return_tensors="pt",
).to(image.device)
# text embeddings and features
text_output = self.text_encoder.forward_text(text)
text_embeds = text_output.last_hidden_state
text_feat = F.normalize(self.text_proj(text_embeds[:, 0, :]), dim=-1)
# get momentum features
with torch.no_grad():
self._momentum_update()
image_embeds_m = self.visual_encoder_m(image)
image_feat_m = F.normalize(
self.vision_proj_m(image_embeds_m[:, 0, :]), dim=-1
)
image_feat_all = torch.cat(
[image_feat_m.t(), self.image_queue.clone().detach()], dim=1
)
text_output_m = self.text_encoder_m.forward_text(text)
text_embeds_m = text_output_m.last_hidden_state
text_feat_m = F.normalize(self.text_proj_m(text_embeds_m[:, 0, :]), dim=-1)
text_feat_all = torch.cat(
[text_feat_m.t(), self.text_queue.clone().detach()], dim=1
)
sim_i2t_m = image_feat_m @ text_feat_all / self.temp
sim_t2i_m = text_feat_m @ image_feat_all / self.temp
sim_targets = torch.zeros(sim_i2t_m.size()).to(image.device)
sim_targets.fill_diagonal_(1)
sim_i2t_targets = (
alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * sim_targets
)
sim_t2i_targets = (
alpha * F.softmax(sim_t2i_m, dim=1) + (1 - alpha) * sim_targets
)
sim_i2t = image_feat @ text_feat_all / self.temp
sim_t2i = text_feat @ image_feat_all / self.temp
loss_i2t = -torch.sum(
F.log_softmax(sim_i2t, dim=1) * sim_i2t_targets, dim=1
).mean()
loss_t2i = -torch.sum(
F.log_softmax(sim_t2i, dim=1) * sim_t2i_targets, dim=1
).mean()
loss_itc = (loss_i2t + loss_t2i) / 2
self._dequeue_and_enqueue(image_feat_m, text_feat_m)
# Image-text Matching
encoder_input_ids = text.input_ids.clone()
encoder_input_ids[:, 0] = self.tokenizer.enc_token_id
        # forward the positive image-text pair
bs = image.size(0)
output_pos = self.text_encoder(
encoder_input_ids,
attention_mask=text.attention_mask,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
)
with torch.no_grad():
weights_t2i = F.softmax(sim_t2i[:, :bs], dim=1) + 1e-4
weights_t2i.fill_diagonal_(0)
weights_i2t = F.softmax(sim_i2t[:, :bs], dim=1) + 1e-4
weights_i2t.fill_diagonal_(0)
# select a negative image for each text
image_embeds_neg = []
for b in range(bs):
neg_idx = torch.multinomial(weights_t2i[b], 1).item()
image_embeds_neg.append(image_embeds[neg_idx])
image_embeds_neg = torch.stack(image_embeds_neg, dim=0)
# select a negative text for each image
text_ids_neg = []
text_atts_neg = []
for b in range(bs):
neg_idx = torch.multinomial(weights_i2t[b], 1).item()
text_ids_neg.append(encoder_input_ids[neg_idx])
text_atts_neg.append(text.attention_mask[neg_idx])
text_ids_neg = torch.stack(text_ids_neg, dim=0)
text_atts_neg = torch.stack(text_atts_neg, dim=0)
text_ids_all = torch.cat([encoder_input_ids, text_ids_neg], dim=0)
text_atts_all = torch.cat([text.attention_mask, text_atts_neg], dim=0)
image_embeds_all = torch.cat([image_embeds_neg, image_embeds], dim=0)
image_atts_all = torch.cat([image_atts, image_atts], dim=0)
output_neg = self.text_encoder(
text_ids_all,
attention_mask=text_atts_all,
encoder_hidden_states=image_embeds_all,
encoder_attention_mask=image_atts_all,
return_dict=True,
)
vl_embeddings = torch.cat(
[
output_pos.last_hidden_state[:, 0, :],
output_neg.last_hidden_state[:, 0, :],
],
dim=0,
)
itm_logits = self.itm_head(vl_embeddings)
itm_labels = torch.cat(
[torch.ones(bs, dtype=torch.long), torch.zeros(2 * bs, dtype=torch.long)],
dim=0,
).to(image.device)
loss_itm = F.cross_entropy(itm_logits, itm_labels)
# LM
decoder_input_ids = text.input_ids.clone()
decoder_input_ids[:, 0] = self.tokenizer.bos_token_id
decoder_targets = decoder_input_ids.masked_fill(
decoder_input_ids == self.tokenizer.pad_token_id, -100
)
decoder_output = self.text_decoder(
decoder_input_ids,
attention_mask=text.attention_mask,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
labels=decoder_targets,
return_dict=True,
)
loss_lm = decoder_output.loss
return BlipOutput(
loss=loss_itc + loss_itm + loss_lm,
loss_itc=loss_itc,
loss_itm=loss_itm,
loss_lm=loss_lm,
sims=BlipSimilarity(
sim_i2t=sim_i2t,
sim_t2i=sim_t2i,
sim_i2t_m=sim_i2t_m,
sim_t2i_m=sim_t2i_m,
sim_i2t_targets=sim_i2t_targets,
sim_t2i_targets=sim_t2i_targets,
),
intermediate_output=BlipIntermediateOutput(
image_embeds=image_embeds,
text_embeds=text_embeds,
image_embeds_m=image_embeds_m,
text_embeds_m=text_embeds_m,
encoder_output=output_pos,
encoder_output_neg=output_neg,
itm_logits=itm_logits,
itm_labels=itm_labels,
decoder_output=decoder_output,
decoder_labels=decoder_targets,
),
)
def reset_queue_ptr(self):
self.queue_ptr = torch.zeros(1, dtype=torch.long)
@classmethod
def from_config(cls, cfg=None):
# set from_pretrained=True to load weights for 'bert-base-uncased'
image_encoder = VisionTransformerEncoder.from_config(cfg, from_pretrained=True)
text_encoder = XBertEncoder.from_config(cfg, from_pretrained=True)
text_decoder = XBertLMHeadDecoder.from_config(cfg, from_pretrained=True)
embed_dim = cfg.get("embed_dim", 256)
momentum = cfg.get("momentum", 0.995)
alpha = cfg.get("alpha", 0.4)
max_txt_len = cfg.get("max_txt_len", 30)
queue_size = cfg.get("queue_size", 57600)
model = cls(
image_encoder=image_encoder,
text_encoder=text_encoder,
text_decoder=text_decoder,
embed_dim=embed_dim,
queue_size=queue_size,
momentum=momentum,
alpha=alpha,
tie_enc_dec_weights=True,
max_txt_len=max_txt_len,
)
        # [IMPORTANT] reset the queue pointer to 0.
        # Otherwise, when the last batch is pushed into the queue, the batch size
        # and the remaining queue length may be unequal.
model.reset_queue_ptr()
        return model

# Source: lavis/models/blip_models/blip_pretrain.py (salesforce-lavis 1.0.2, PyPI)
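`BlipPretrain._rampup_factor` above linearly ramps the momentum-distillation weight `alpha` from 0 to its full value over the first two epochs of iterations. A standalone copy of that logic, with the same formula:

```python
def rampup_factor(epoch, iters, num_iters_per_epoch):
    # Same logic as BlipPretrain._rampup_factor: a linear warm-up that
    # reaches 1.0 once two full epochs' worth of iterations have passed.
    return min(1.0, (epoch * num_iters_per_epoch + iters) / (2 * num_iters_per_epoch))

print(rampup_factor(0, 0, 100))  # 0.0  (start of training)
print(rampup_factor(1, 0, 100))  # 0.5  (after one epoch)
print(rampup_factor(3, 0, 100))  # 1.0  (clamped after two epochs)
```

The effective distillation weight at each step is `self.alpha * rampup_factor(...)`, so soft targets from the momentum encoders are phased in gradually.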
import torch
import torch.nn as nn
from lavis.common.registry import registry
from lavis.models.base_model import BaseModel
from torch.nn import CrossEntropyLoss, MSELoss
from transformers import GPT2LMHeadModel
from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions
@registry.register_model("gpt_dialogue")
class GPTDialogue(BaseModel, GPT2LMHeadModel):
PRETRAINED_MODEL_CONFIG_DICT = {"base": "configs/models/gpt_dialogue_base.yaml"}
def __init__(self, config, len_video_ft=4224):
super().__init__(config)
self.video_ff = nn.Linear(len_video_ft, config.n_embd)
self.video_ff_out = nn.Linear(config.n_embd, len_video_ft)
# Model parallel
self.model_parallel = False
self.device_map = None
# Initialize weights and apply final processing
self.post_init()
def forward(
self,
samples,
past_key_values=None,
position_ids=None,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
input_embs = self.transformer.wte(samples["input_ids"])
video_embs = self.video_ff(samples["video_fts"])
input_embs = torch.cat([video_embs, input_embs], dim=1)
transformer_outputs = self.transformer(
attention_mask=samples["attn_mask"],
token_type_ids=samples["token_type_ids"],
inputs_embeds=input_embs,
position_ids=position_ids,
head_mask=head_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
hidden_states = transformer_outputs[0]
lm_logits = self.lm_head(hidden_states)
loss = None
if samples["labels"] is not None:
# Shift so that tokens < n predict n
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = samples["labels"][..., 1:].contiguous()
# Flatten the tokens
loss_fct = CrossEntropyLoss(ignore_index=-1)
loss = loss_fct(
shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
)
if samples["video_fts"] is not None:
len_video_fts = samples["video_fts"].shape[1]
video_logits = self.video_ff_out(hidden_states[:, :len_video_fts, :])
# Shift so that tokens < n predict n
shift_logits = video_logits[..., :-1, :].contiguous()
shift_labels = samples["video_fts"][..., 1:, :].contiguous()
# Flatten the tokens
loss_fct = MSELoss(reduction="mean")
video_loss = loss_fct(shift_logits, shift_labels)
if loss is not None:
loss = loss + video_loss
else:
loss = video_loss
return CausalLMOutputWithCrossAttentions(
loss=loss,
logits=lm_logits,
past_key_values=transformer_outputs.past_key_values,
hidden_states=transformer_outputs.hidden_states,
attentions=transformer_outputs.attentions,
cross_attentions=transformer_outputs.cross_attentions,
)
@classmethod
def from_config(cls, cfg):
model = cls.__bases__[1].from_pretrained("gpt2")
model.resize_token_embeddings(cfg["len_tokenizer"])
        return model

# Source: lavis/models/gpt_models/gpt_dialogue.py (salesforce-lavis 1.0.2, PyPI)
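The "Shift so that tokens < n predict n" comments in `GPTDialogue.forward` implement standard next-token alignment: logits at positions 0..n-2 are scored against labels at positions 1..n-1. A list-based sketch of the same alignment:

```python
def shift_for_lm(tokens):
    # Positions [0..n-2] serve as inputs, positions [1..n-1] as targets,
    # mirroring `shift_logits` / `shift_labels` in GPTDialogue.forward.
    inputs = tokens[:-1]
    targets = tokens[1:]
    return inputs, targets

inp, tgt = shift_for_lm([5, 7, 9, 11])
print(inp)  # [5, 7, 9]
print(tgt)  # [7, 9, 11]
```

The same shift is applied twice in the model: once for the text cross-entropy loss and once for the video-feature MSE loss.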
import logging
import torch
from torch.cuda.amp import autocast as autocast
import torch.nn as nn
from lavis.common.registry import registry
from lavis.models.blip2_models.blip2 import Blip2Base, disabled_train
from lavis.models.blip2_models.modeling_opt import OPTForCausalLM, OPTConfig
from transformers import AutoTokenizer
@registry.register_model("blip2_opt")
class Blip2OPT(Blip2Base):
"""
BLIP2 OPT model.
Supported model types:
- pretrained_opt2.7b: pretrained model with OPT2.7b
- pretrained_opt6.7b: pretrained model with OPT6.7b
        - caption_coco_opt2.7b: finetuned image captioning model with OPT2.7b
        - caption_coco_opt6.7b: finetuned image captioning model with OPT6.7b
Usage:
>>> from lavis.models import load_model
>>> model = load_model("blip2_opt", "caption_coco_opt2.7b")
"""
PRETRAINED_MODEL_CONFIG_DICT = {
"pretrain_opt2.7b": "configs/models/blip2/blip2_pretrain_opt2.7b.yaml",
"pretrain_opt6.7b": "configs/models/blip2/blip2_pretrain_opt6.7b.yaml",
"caption_coco_opt2.7b": "configs/models/blip2/blip2_caption_opt2.7b.yaml",
"caption_coco_opt6.7b": "configs/models/blip2/blip2_caption_opt6.7b.yaml",
}
def __init__(
self,
vit_model="eva_clip_g",
img_size=224,
drop_path_rate=0,
use_grad_checkpoint=False,
vit_precision="fp16",
freeze_vit=True,
num_query_token=32,
opt_model="facebook/opt-2.7b",
prompt="",
max_txt_len=32,
):
super().__init__()
self.tokenizer = self.init_tokenizer()
self.visual_encoder, self.ln_vision = self.init_vision_encoder(
vit_model, img_size, drop_path_rate, use_grad_checkpoint, vit_precision
)
if freeze_vit:
for name, param in self.visual_encoder.named_parameters():
param.requires_grad = False
self.visual_encoder = self.visual_encoder.eval()
self.visual_encoder.train = disabled_train
logging.info("freeze vision encoder")
self.Qformer, self.query_tokens = self.init_Qformer(
num_query_token, self.visual_encoder.num_features
)
self.Qformer.cls = None
self.Qformer.bert.embeddings.word_embeddings = None
self.Qformer.bert.embeddings.position_embeddings = None
for layer in self.Qformer.bert.encoder.layer:
layer.output = None
layer.intermediate = None
self.opt_tokenizer = AutoTokenizer.from_pretrained(opt_model, use_fast=False)
self.opt_model = OPTForCausalLM.from_pretrained(
opt_model, torch_dtype=torch.float16
)
for name, param in self.opt_model.named_parameters():
param.requires_grad = False
self.eos_token_id = self.opt_tokenizer(
"\n", add_special_tokens=False
).input_ids[0]
self.opt_proj = nn.Linear(
self.Qformer.config.hidden_size, self.opt_model.config.hidden_size
)
self.max_txt_len = max_txt_len
self.prompt = prompt
prompt_tokens = self.opt_tokenizer(self.prompt, return_tensors="pt")
self.prompt_length = prompt_tokens.attention_mask.sum(1)
def forward(self, samples):
image = samples["image"]
with self.maybe_autocast():
image_embeds = self.ln_vision(self.visual_encoder(image))
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
image.device
)
query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1)
query_output = self.Qformer.bert(
query_embeds=query_tokens,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
)
inputs_opt = self.opt_proj(query_output.last_hidden_state)
atts_opt = torch.ones(inputs_opt.size()[:-1], dtype=torch.long).to(image.device)
self.opt_tokenizer.padding_side = "right"
text = [t + "\n" for t in samples["text_input"]]
opt_tokens = self.opt_tokenizer(
text,
return_tensors="pt",
padding="longest",
truncation=True,
max_length=self.max_txt_len,
).to(image.device)
targets = opt_tokens.input_ids.masked_fill(
opt_tokens.input_ids == self.opt_tokenizer.pad_token_id, -100
)
if self.prompt:
targets[:, : self.prompt_length] = -100 # do not apply loss to the prompt
empty_targets = (
torch.ones(atts_opt.size(), dtype=torch.long).to(image.device).fill_(-100)
)
targets = torch.cat([empty_targets, targets], dim=1)
inputs_embeds = self.opt_model.model.decoder.embed_tokens(opt_tokens.input_ids)
inputs_embeds = torch.cat([inputs_opt, inputs_embeds], dim=1)
attention_mask = torch.cat([atts_opt, opt_tokens.attention_mask], dim=1)
with self.maybe_autocast():
outputs = self.opt_model(
inputs_embeds=inputs_embeds,
attention_mask=attention_mask,
return_dict=True,
labels=targets,
)
loss = outputs.loss
return {"loss": loss}
@torch.no_grad()
def generate(
self,
samples,
use_nucleus_sampling=False,
num_beams=5,
max_length=30,
min_length=1,
top_p=0.9,
repetition_penalty=1.0,
length_penalty=1.0,
num_captions=1,
temperature=1,
):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W)
            use_nucleus_sampling (bool): Whether to use nucleus sampling. If False, use beam search.
num_beams (int): Number of beams for beam search. 1 means no beam search.
max_length (int): The maximum length of the sequence to be generated.
min_length (int): The minimum length of the sequence to be generated.
top_p (float): The cumulative probability for nucleus sampling.
repetition_penalty (float): The parameter for repetition penalty. 1.0 means no penalty.
num_captions (int): Number of captions to be generated for each image.
Returns:
captions (list): A list of strings of length batch_size * num_captions.
"""
image = samples["image"]
with self.maybe_autocast():
image_embeds = self.ln_vision(self.visual_encoder(image))
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
image.device
)
query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1)
query_output = self.Qformer.bert(
query_embeds=query_tokens,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
)
inputs_opt = self.opt_proj(query_output.last_hidden_state)
atts_opt = torch.ones(inputs_opt.size()[:-1], dtype=torch.long).to(
image.device
)
if "prompt" in samples.keys():
prompt = samples["prompt"]
else:
prompt = self.prompt
prompt = [prompt] * image.size(0)
opt_tokens = self.opt_tokenizer(prompt, return_tensors="pt").to(
image.device
)
input_ids = opt_tokens.input_ids
attention_mask = torch.cat([atts_opt, opt_tokens.attention_mask], dim=1)
if use_nucleus_sampling:
query_embeds = inputs_opt.repeat_interleave(num_captions, dim=0)
num_beams = 1
else:
query_embeds = inputs_opt.repeat_interleave(num_beams, dim=0)
outputs = self.opt_model.generate(
input_ids=input_ids,
query_embeds=query_embeds,
attention_mask=attention_mask,
do_sample=use_nucleus_sampling,
top_p=top_p,
temperature=temperature,
num_beams=num_beams,
max_new_tokens=max_length,
min_length=min_length,
eos_token_id=self.eos_token_id,
repetition_penalty=repetition_penalty,
length_penalty=length_penalty,
num_return_sequences=num_captions,
)
prompt_length = opt_tokens.input_ids.shape[1]
output_text = self.opt_tokenizer.batch_decode(
outputs[:, prompt_length:], skip_special_tokens=True
)
output_text = [text.strip() for text in output_text]
return output_text
@classmethod
def from_config(cls, cfg):
vit_model = cfg.get("vit_model", "eva_clip_g")
img_size = cfg.get("image_size")
num_query_token = cfg.get("num_query_token")
opt_model = cfg.get("opt_model")
drop_path_rate = cfg.get("drop_path_rate", 0)
use_grad_checkpoint = cfg.get("use_grad_checkpoint", False)
vit_precision = cfg.get("vit_precision", "fp16")
freeze_vit = cfg.get("freeze_vit", True)
prompt = cfg.get("prompt", "")
max_txt_len = cfg.get("max_txt_len", 32)
model = cls(
vit_model=vit_model,
img_size=img_size,
drop_path_rate=drop_path_rate,
use_grad_checkpoint=use_grad_checkpoint,
vit_precision=vit_precision,
freeze_vit=freeze_vit,
num_query_token=num_query_token,
opt_model=opt_model,
prompt=prompt,
max_txt_len=max_txt_len,
)
model.load_checkpoint_from_config(cfg)
        return model

# Source: lavis/models/blip2_models/blip2_opt.py (salesforce-lavis 1.0.2, PyPI)
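In `Blip2OPT.forward` above, the language-modeling targets are masked with -100 (the ignore index of the cross-entropy loss) at padding positions, at prompt positions, and at the slots occupied by the projected query embeddings, so loss is computed only on the caption tokens. A stdlib-only sketch of that target construction (names are illustrative):

```python
IGNORE = -100  # positions with this label are skipped by the LM loss

def build_targets(token_ids, pad_id, prompt_len, num_query_tokens):
    # Mirror Blip2OPT.forward: mask padding and the prompt, then prepend
    # ignored slots for the query embeddings, which carry no text labels.
    targets = [IGNORE if t == pad_id else t for t in token_ids]
    for i in range(min(prompt_len, len(targets))):
        targets[i] = IGNORE
    return [IGNORE] * num_query_tokens + targets

print(build_targets([101, 2023, 0, 0], pad_id=0, prompt_len=1, num_query_tokens=2))
# [-100, -100, -100, 2023, -100, -100]
```

Only the single non-masked position contributes to the loss here; in the model the analogous masking is done with `masked_fill` and tensor concatenation.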
import torch
import torch.nn.functional as F
from lavis.common.registry import registry
from lavis.models.blip2_models.blip2_qformer import Blip2Qformer
@registry.register_model("blip2_image_text_matching")
class Blip2ITM(Blip2Qformer):
"""
BLIP Image-Text Matching (ITM) model.
Supported model types:
- pretrained: pretrained model
        - coco: finetuned model on COCO
Usage:
>>> from lavis.models import load_model
>>> model = load_model("blip2_image_text_matching", "pretrained")
>>> model = load_model("blip2_image_text_matching", "coco")
"""
def __init__(
self,
vit_model="eva_clip_g",
img_size=224,
drop_path_rate=0,
use_grad_checkpoint=False,
vit_precision="fp16",
freeze_vit=True,
num_query_token=32,
cross_attention_freq=2,
embed_dim=256,
max_txt_len=32,
):
super().__init__(
vit_model=vit_model,
img_size=img_size,
drop_path_rate=drop_path_rate,
use_grad_checkpoint=use_grad_checkpoint,
vit_precision=vit_precision,
freeze_vit=freeze_vit,
num_query_token=num_query_token,
cross_attention_freq=cross_attention_freq,
embed_dim=embed_dim,
max_txt_len=max_txt_len,
)
def forward(self, samples, match_head="itm"):
image = samples["image"]
caption = samples["text_input"]
with self.maybe_autocast():
image_embeds = self.ln_vision(self.visual_encoder(image))
image_embeds = image_embeds.float()
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
image.device
)
text = self.tokenizer(
caption,
truncation=True,
max_length=self.max_txt_len,
return_tensors="pt",
).to(image.device)
if match_head == "itm":
query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1)
query_atts = torch.ones(query_tokens.size()[:-1], dtype=torch.long).to(
image.device
)
attention_mask = torch.cat([query_atts, text.attention_mask], dim=1)
output_itm = self.Qformer.bert(
text.input_ids,
query_embeds=query_tokens,
attention_mask=attention_mask,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
)
itm_embeddings = output_itm.last_hidden_state[:, : query_tokens.size(1), :]
itm_logit = self.itm_head(itm_embeddings)
itm_logit = itm_logit.mean(dim=1)
return itm_logit
elif match_head == "itc":
query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1)
query_output = self.Qformer.bert(
query_embeds=query_tokens,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
)
image_feats = F.normalize(
self.vision_proj(query_output.last_hidden_state), dim=-1
)
text_output = self.Qformer.bert(
text.input_ids,
attention_mask=text.attention_mask,
return_dict=True,
)
text_feat = F.normalize(
self.text_proj(text_output.last_hidden_state[:, 0, :]), dim=-1
)
sims = torch.bmm(image_feats, text_feat.unsqueeze(-1))
sim, _ = torch.max(sims, dim=1)
            return sim
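The `itc` branch above scores an image-text pair by comparing each learned query embedding against the text embedding and keeping the best match (`torch.bmm` followed by `torch.max` over the query dimension). A minimal pure-Python sketch of that reduction (illustrative only, not part of the LAVIS API):

```python
def itc_score(image_feats, text_feat):
    """Maximum dot product between per-query image features and one text
    feature, mirroring torch.bmm(image_feats, text_feat.unsqueeze(-1))
    followed by torch.max over the query dimension.

    image_feats: list of unit vectors (one per query token).
    text_feat: a single unit vector.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Any one of the query tokens is allowed to account for the match.
    return max(dot(q, text_feat) for q in image_feats)
```

With normalized inputs each dot product is a cosine similarity, so taking the per-query maximum lets any single query token explain the image-text match.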
import logging
import torch
import torch.nn as nn
from torch.cuda.amp import autocast as autocast
from transformers import T5TokenizerFast
from lavis.common.registry import registry
from lavis.models.blip2_models.blip2 import Blip2Base, disabled_train
from lavis.models.blip2_models.modeling_t5 import T5Config, T5ForConditionalGeneration
@registry.register_model("blip2_t5")
class Blip2T5(Blip2Base):
"""
BLIP2 T5 model.
Supported model types:
- pretrain_flant5xl: pretrained model with FlanT5-XL
        - pretrain_flant5xl_vitL: pretrained model with FlanT5-XL and ViT-L vision encoder
- pretrain_flant5xxl: pretrained model with FlanT5-XXL
        - caption_coco_flant5xl: finetuned image captioning model with FlanT5-XL
Usage:
>>> from lavis.models import load_model
>>> model = load_model("blip2_t5", "pretrain_flant5xl")
"""
PRETRAINED_MODEL_CONFIG_DICT = {
"pretrain_flant5xl": "configs/models/blip2/blip2_pretrain_flant5xl.yaml",
"pretrain_flant5xl_vitL": "configs/models/blip2/blip2_pretrain_flant5xl_vitL.yaml",
"pretrain_flant5xxl": "configs/models/blip2/blip2_pretrain_flant5xxl.yaml",
"caption_coco_flant5xl": "configs/models/blip2/blip2_caption_flant5xl.yaml",
}
def __init__(
self,
vit_model="eva_clip_g",
img_size=224,
drop_path_rate=0,
use_grad_checkpoint=False,
vit_precision="fp16",
freeze_vit=True,
num_query_token=32,
t5_model="google/flan-t5-xl",
prompt="",
max_txt_len=32,
apply_lemmatizer=False,
):
"""
apply_lemmatizer: when set to True, postprocess predict_answers() result with lemmas.
"""
super().__init__()
self.tokenizer = self.init_tokenizer()
self.visual_encoder, self.ln_vision = self.init_vision_encoder(
vit_model, img_size, drop_path_rate, use_grad_checkpoint, vit_precision
)
if freeze_vit:
for name, param in self.visual_encoder.named_parameters():
param.requires_grad = False
self.visual_encoder = self.visual_encoder.eval()
self.visual_encoder.train = disabled_train
logging.info("freeze vision encoder")
self.Qformer, self.query_tokens = self.init_Qformer(
num_query_token, self.visual_encoder.num_features
)
self.Qformer.cls = None
self.Qformer.bert.embeddings.word_embeddings = None
self.Qformer.bert.embeddings.position_embeddings = None
for layer in self.Qformer.bert.encoder.layer:
layer.output = None
layer.intermediate = None
self.t5_tokenizer = T5TokenizerFast.from_pretrained(t5_model)
t5_config = T5Config.from_pretrained(t5_model)
t5_config.dense_act_fn = "gelu"
self.t5_model = T5ForConditionalGeneration.from_pretrained(
t5_model, config=t5_config
)
for name, param in self.t5_model.named_parameters():
param.requires_grad = False
param.data = param.data.bfloat16()
self.t5_proj = nn.Linear(
self.Qformer.config.hidden_size, self.t5_model.config.hidden_size
)
self.max_txt_len = max_txt_len
self.prompt = prompt
self._apply_lemmatizer = apply_lemmatizer
self._lemmatizer = None
def forward(self, samples):
image = samples["image"]
with self.maybe_autocast():
image_embeds = self.ln_vision(self.visual_encoder(image))
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
image.device
)
query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1)
query_output = self.Qformer.bert(
query_embeds=query_tokens,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
)
inputs_t5 = self.t5_proj(query_output.last_hidden_state)
atts_t5 = torch.ones(inputs_t5.size()[:-1], dtype=torch.long).to(image.device)
with self.maybe_autocast(dtype=torch.bfloat16):
input_tokens = self.t5_tokenizer(
samples["text_input"],
padding="longest",
truncation=True,
max_length=self.max_txt_len,
return_tensors="pt",
).to(image.device)
output_tokens = self.t5_tokenizer(
samples["text_output"],
padding="longest",
truncation=True,
max_length=self.max_txt_len,
return_tensors="pt",
).to(image.device)
encoder_atts = torch.cat([atts_t5, input_tokens.attention_mask], dim=1)
targets = output_tokens.input_ids.masked_fill(
output_tokens.input_ids == self.t5_tokenizer.pad_token_id, -100
)
inputs_embeds = self.t5_model.encoder.embed_tokens(input_tokens.input_ids)
inputs_embeds = torch.cat([inputs_t5, inputs_embeds], dim=1)
outputs = self.t5_model(
inputs_embeds=inputs_embeds,
attention_mask=encoder_atts,
decoder_attention_mask=output_tokens.attention_mask,
return_dict=True,
labels=targets,
)
loss = outputs.loss
return {"loss": loss}
@torch.no_grad()
def generate(
self,
samples,
use_nucleus_sampling=False,
num_beams=5,
max_length=30,
min_length=1,
top_p=0.9,
repetition_penalty=1.0,
length_penalty=1.0,
num_captions=1,
temperature=1,
):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W)
            use_nucleus_sampling (bool): Whether to use nucleus sampling. If False, use beam search.
num_beams (int): Number of beams for beam search. 1 means no beam search.
max_length (int): The maximum length of the sequence to be generated.
min_length (int): The minimum length of the sequence to be generated.
top_p (float): The cumulative probability for nucleus sampling.
repetition_penalty (float): The parameter for repetition penalty. 1.0 means no penalty.
num_captions (int): Number of captions to be generated for each image.
Returns:
captions (list): A list of strings of length batch_size * num_captions.
"""
image = samples["image"]
with self.maybe_autocast():
image_embeds = self.ln_vision(self.visual_encoder(image))
image_embeds = image_embeds.float()
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
image.device
)
query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1)
query_output = self.Qformer.bert(
query_embeds=query_tokens,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
)
inputs_t5 = self.t5_proj(query_output.last_hidden_state)
atts_t5 = torch.ones(inputs_t5.size()[:-1], dtype=torch.long).to(image.device)
if "prompt" in samples.keys():
prompt = samples["prompt"]
else:
prompt = self.prompt
if isinstance(prompt, str):
prompt = [prompt] * image.size(0)
else:
assert len(prompt) == image.size(
0
), "The number of prompts must be equal to the batch size."
input_tokens = self.t5_tokenizer(
prompt, padding="longest", return_tensors="pt"
).to(image.device)
encoder_atts = torch.cat([atts_t5, input_tokens.attention_mask], dim=1)
with self.maybe_autocast(dtype=torch.bfloat16):
inputs_embeds = self.t5_model.encoder.embed_tokens(input_tokens.input_ids)
inputs_embeds = torch.cat([inputs_t5, inputs_embeds], dim=1)
outputs = self.t5_model.generate(
inputs_embeds=inputs_embeds,
attention_mask=encoder_atts,
do_sample=use_nucleus_sampling,
top_p=top_p,
temperature=temperature,
num_beams=num_beams,
max_new_tokens=max_length,
min_length=min_length,
repetition_penalty=repetition_penalty,
length_penalty=length_penalty,
num_return_sequences=num_captions,
)
output_text = self.t5_tokenizer.batch_decode(
outputs, skip_special_tokens=True
)
return output_text
def predict_answers(
self,
samples,
num_beams=5,
inference_method="generate",
max_len=10,
min_len=1,
num_ans_candidates=128,
answer_list=None,
prompt="",
length_penalty=-1,
**kwargs
):
image = samples["image"]
with self.maybe_autocast():
image_embeds = self.ln_vision(self.visual_encoder(image))
image_embeds = image_embeds.float()
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
image.device
)
query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1)
query_output = self.Qformer.bert(
query_embeds=query_tokens,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
)
inputs_t5 = self.t5_proj(query_output.last_hidden_state)
atts_t5 = torch.ones(inputs_t5.size()[:-1], dtype=torch.long).to(image.device)
if isinstance(samples["text_input"], str):
samples["text_input"] = [samples["text_input"]]
if prompt:
text_input = [prompt.format(question) for question in samples["text_input"]]
else:
text_input = samples["text_input"]
input_tokens = self.t5_tokenizer(
text_input, padding="longest", return_tensors="pt"
).to(image.device)
encoder_atts = torch.cat([atts_t5, input_tokens.attention_mask], dim=1)
with self.maybe_autocast(dtype=torch.bfloat16):
inputs_embeds = self.t5_model.encoder.embed_tokens(input_tokens.input_ids)
inputs_embeds = torch.cat([inputs_t5, inputs_embeds], dim=1)
outputs = self.t5_model.generate(
inputs_embeds=inputs_embeds,
attention_mask=encoder_atts,
do_sample=False,
num_beams=num_beams,
max_new_tokens=max_len,
min_length=min_len,
length_penalty=length_penalty,
)
output_text = self.t5_tokenizer.batch_decode(
outputs, skip_special_tokens=True
)
if self._apply_lemmatizer:
output_text = self._lemmatize(output_text)
return output_text
def _lemmatize(self, answers):
def apply(answer):
doc = self.lemmatizer(answer)
words = []
for token in doc:
if token.pos_ in ["NOUN", "VERB"]:
words.append(token.lemma_)
else:
words.append(token.text)
answer = " ".join(words)
return answer
return [apply(answer) for answer in answers]
@property
def lemmatizer(self):
if self._lemmatizer is None:
try:
import spacy
self._lemmatizer = spacy.load("en_core_web_sm")
except ImportError:
logging.error(
"""
Please install spacy and en_core_web_sm model to apply lemmatization.
python -m spacy download en_core_web_sm
OR
import spacy.cli
spacy.cli.download("en_core_web_sm")
"""
)
exit(1)
return self._lemmatizer
@classmethod
def from_config(cls, cfg):
vit_model = cfg.get("vit_model", "eva_clip_g")
img_size = cfg.get("image_size")
num_query_token = cfg.get("num_query_token")
t5_model = cfg.get("t5_model")
drop_path_rate = cfg.get("drop_path_rate", 0)
use_grad_checkpoint = cfg.get("use_grad_checkpoint", False)
vit_precision = cfg.get("vit_precision", "fp16")
freeze_vit = cfg.get("freeze_vit", True)
prompt = cfg.get("prompt", "")
max_txt_len = cfg.get("max_txt_len", 32)
apply_lemmatizer = cfg.get("apply_lemmatizer", False)
model = cls(
vit_model=vit_model,
img_size=img_size,
drop_path_rate=drop_path_rate,
use_grad_checkpoint=use_grad_checkpoint,
vit_precision=vit_precision,
freeze_vit=freeze_vit,
num_query_token=num_query_token,
t5_model=t5_model,
prompt=prompt,
max_txt_len=max_txt_len,
apply_lemmatizer=apply_lemmatizer,
)
model.load_checkpoint_from_config(cfg)
        return model
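In `Blip2T5.forward` above, padded target positions are replaced with -100 so the T5 cross-entropy loss ignores them. A small sketch of that masking rule, using plain lists rather than tensors (`-100` is the conventional ignore index; the helper name is illustrative):

```python
def mask_pad_labels(token_ids, pad_token_id, ignore_index=-100):
    # Equivalent of output_tokens.input_ids.masked_fill(ids == pad, -100):
    # positions holding padding contribute nothing to the loss.
    return [ignore_index if t == pad_token_id else t for t in token_ids]
```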
from copy import deepcopy
import numpy as np
import torch
import torch.nn.functional as F
from lavis.common.registry import registry
from lavis.common.utils import get_abs_path
from lavis.models.albef_models import AlbefBase
from lavis.models.albef_models.albef_outputs import (
AlbefIntermediateOutput,
AlbefOutput,
AlbefSimilarity,
)
from lavis.models.base_model import MomentumDistilationMixin, SharedQueueMixin
from lavis.models.med import BertForMaskedLM
from lavis.models.vit import VisionTransformerEncoder
from torch import nn
from transformers import BertConfig
@registry.register_model("albef_pretrain")
class AlbefPretrain(AlbefBase, MomentumDistilationMixin, SharedQueueMixin):
"""
ALBEF pretrain model.
Supported model types:
- base: ALBEF base model used for pretraining.
"""
PRETRAINED_MODEL_CONFIG_DICT = {
"base": "configs/models/albef_pretrain_base.yaml",
}
def __init__(
self,
image_encoder,
text_encoder,
queue_size,
embed_dim=256,
mlm_mask_prob=0.15,
temp=0.07,
momentum=0.995,
alpha=0.4,
max_txt_len=30,
):
super().__init__()
self.tokenizer = self.init_tokenizer()
self.visual_encoder = image_encoder
self.text_encoder = text_encoder
text_width = text_encoder.config.hidden_size
vision_width = image_encoder.vision_width
self.embed_dim = embed_dim
self.vision_proj = nn.Linear(vision_width, embed_dim)
self.text_proj = nn.Linear(text_width, embed_dim)
self.itm_head = nn.Linear(text_width, 2)
# create the momentum encoder
self.visual_encoder_m = deepcopy(self.visual_encoder)
self.text_encoder_m = deepcopy(self.text_encoder)
self.vision_proj_m = deepcopy(self.vision_proj)
self.text_proj_m = deepcopy(self.text_proj)
self.model_pairs = [
[self.visual_encoder, self.visual_encoder_m],
[self.text_encoder, self.text_encoder_m],
[self.vision_proj, self.vision_proj_m],
[self.text_proj, self.text_proj_m],
]
self.copy_params()
# create the queue
self.register_buffer("image_queue", torch.randn(embed_dim, queue_size))
self.register_buffer("text_queue", torch.randn(embed_dim, queue_size))
self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))
self.image_queue = nn.functional.normalize(self.image_queue, dim=0)
self.text_queue = nn.functional.normalize(self.text_queue, dim=0)
self.queue_size = queue_size
self.momentum = momentum
self.temp = nn.Parameter(temp * torch.ones([]))
self.alpha = alpha
self.max_txt_len = max_txt_len
self.mlm_probability = mlm_mask_prob
def _rampup_factor(self, epoch, iters, num_iters_per_epoch):
return min(1, (epoch * num_iters_per_epoch + iters) / (2 * num_iters_per_epoch))
def forward(self, samples):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). The input images. Default: H=224, W=224.
- text_input (list): A list of length batch_size, each element is a string of text/caption.
- epoch (int): The current epoch.
- iters (int): The current iteration.
- num_iters_per_epoch (int): The number of iterations per epoch.
Returns:
BlipOutput: A BlipOutput object containing loss and intermediate output. See ``lavis.models.blip_models.blip_outputs.BlipOutput`` for more details.
Examples:
>>> import torch
>>> from lavis.models import load_model
>>> model = load_model("albef_pretrain")
>>> images = torch.randn(4, 3, 224, 224)
>>> text_input = ["caption of image 1", "another caption of image 1", "caption of image 2", "caption of image 3"]
>>> samples = {"image": images, "text_input": text_input, "epoch": 0, "iters": 0, "num_iters_per_epoch": 100}
>>> output = model(samples)
>>> output.keys()
odict_keys(['sims', 'intermediate_output', 'loss', 'loss_itc', 'loss_itm', 'loss_mlm'])
"""
image = samples["image"]
caption = samples["text_input"]
alpha = self.alpha * self._rampup_factor(
epoch=samples["epoch"],
iters=samples["iters"],
num_iters_per_epoch=samples["num_iters_per_epoch"],
)
with torch.no_grad():
self.temp.clamp_(0.001, 0.5)
image_embeds = self.visual_encoder.forward_features(image)
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
self.device
)
text = self.tokenizer(
caption,
padding="max_length",
truncation=True,
max_length=self.max_txt_len,
return_tensors="pt",
).to(self.device)
image_feat = F.normalize(self.vision_proj(image_embeds[:, 0, :]), dim=-1)
text_output = self.text_encoder.bert(
text.input_ids,
attention_mask=text.attention_mask,
return_dict=True,
mode="text",
)
text_embeds = text_output.last_hidden_state
text_feat = F.normalize(self.text_proj(text_embeds[:, 0, :]), dim=-1)
# get momentum features
with torch.no_grad():
self._momentum_update()
image_embeds_m = self.visual_encoder_m(image)
image_feat_m = F.normalize(
self.vision_proj_m(image_embeds_m[:, 0, :]), dim=-1
)
image_feat_all = torch.cat(
[image_feat_m.t(), self.image_queue.clone().detach()], dim=1
)
text_output_m = self.text_encoder_m.bert(
text.input_ids,
attention_mask=text.attention_mask,
return_dict=True,
mode="text",
)
text_embeds_m = text_output_m.last_hidden_state
text_feat_m = F.normalize(self.text_proj_m(text_embeds_m[:, 0, :]), dim=-1)
text_feat_all = torch.cat(
[text_feat_m.t(), self.text_queue.clone().detach()], dim=1
)
sim_i2t_m = image_feat_m @ text_feat_all / self.temp
sim_t2i_m = text_feat_m @ image_feat_all / self.temp
sim_targets = torch.zeros(sim_i2t_m.size()).to(image.device)
sim_targets.fill_diagonal_(1)
sim_i2t_targets = (
alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * sim_targets
)
sim_t2i_targets = (
alpha * F.softmax(sim_t2i_m, dim=1) + (1 - alpha) * sim_targets
)
sim_i2t = image_feat @ text_feat_all / self.temp
sim_t2i = text_feat @ image_feat_all / self.temp
loss_i2t = -torch.sum(
F.log_softmax(sim_i2t, dim=1) * sim_i2t_targets, dim=1
).mean()
loss_t2i = -torch.sum(
F.log_softmax(sim_t2i, dim=1) * sim_t2i_targets, dim=1
).mean()
loss_itc = (loss_i2t + loss_t2i) / 2
self._dequeue_and_enqueue(image_feat_m, text_feat_m)
        # forward the positive image-text pair
encoder_output_pos = self.text_encoder.bert(
encoder_embeds=text_embeds,
attention_mask=text.attention_mask,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
mode="fusion",
)
with torch.no_grad():
bs = image.size(0)
weights_i2t = sim_i2t[:, :bs].clone()
weights_t2i = sim_t2i[:, :bs].clone()
weights_i2t.fill_diagonal_(-np.Inf)
weights_t2i.fill_diagonal_(-np.Inf)
weights_i2t = F.softmax(weights_i2t, dim=1)
weights_t2i = F.softmax(weights_t2i, dim=1)
# select a negative image for each text
image_embeds_neg = []
for b in range(bs):
neg_idx = torch.multinomial(weights_t2i[b], 1).item()
image_embeds_neg.append(image_embeds[neg_idx])
image_embeds_neg = torch.stack(image_embeds_neg, dim=0)
# select a negative text for each image
text_embeds_neg = []
text_atts_neg = []
for b in range(bs):
neg_idx = torch.multinomial(weights_i2t[b], 1).item()
text_embeds_neg.append(text_embeds[neg_idx])
text_atts_neg.append(text.attention_mask[neg_idx])
text_embeds_neg = torch.stack(text_embeds_neg, dim=0)
text_atts_neg = torch.stack(text_atts_neg, dim=0)
text_embeds_all = torch.cat([text_embeds, text_embeds_neg], dim=0)
text_atts_all = torch.cat([text.attention_mask, text_atts_neg], dim=0)
image_embeds_all = torch.cat([image_embeds_neg, image_embeds], dim=0)
image_atts_all = torch.cat([image_atts, image_atts], dim=0)
encoder_output_neg = self.text_encoder.bert(
encoder_embeds=text_embeds_all,
attention_mask=text_atts_all,
encoder_hidden_states=image_embeds_all,
encoder_attention_mask=image_atts_all,
return_dict=True,
mode="fusion",
)
vl_embeddings = torch.cat(
[
encoder_output_pos.last_hidden_state[:, 0, :],
encoder_output_neg.last_hidden_state[:, 0, :],
],
dim=0,
)
itm_logits = self.itm_head(vl_embeddings)
itm_labels = torch.cat(
[torch.ones(bs, dtype=torch.long), torch.zeros(2 * bs, dtype=torch.long)],
dim=0,
).to(self.device)
loss_itm = F.cross_entropy(itm_logits, itm_labels)
# MLM
input_ids = text.input_ids.clone()
labels = input_ids.clone()
probability_matrix = torch.full(labels.shape, self.mlm_probability)
input_ids, labels = self.mask(
input_ids,
self.text_encoder.config.vocab_size,
self.device,
targets=labels,
probability_matrix=probability_matrix,
)
with torch.no_grad():
logits_m = self.text_encoder_m(
input_ids,
attention_mask=text.attention_mask,
encoder_hidden_states=image_embeds_m,
encoder_attention_mask=image_atts,
return_dict=True,
return_logits=True,
)
mlm_output = self.text_encoder(
input_ids,
attention_mask=text.attention_mask,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
labels=labels,
soft_labels=F.softmax(logits_m, dim=-1),
alpha=alpha,
)
loss_mlm = mlm_output.loss
return AlbefOutput(
loss=loss_itc + loss_itm + loss_mlm,
loss_itc=loss_itc,
loss_itm=loss_itm,
loss_mlm=loss_mlm,
sims=AlbefSimilarity(
sim_i2t=sim_i2t,
sim_t2i=sim_t2i,
sim_i2t_m=sim_i2t_m,
sim_t2i_m=sim_t2i_m,
sim_i2t_targets=sim_i2t_targets,
sim_t2i_targets=sim_t2i_targets,
),
intermediate_output=AlbefIntermediateOutput(
image_embeds=image_embeds,
image_embeds_m=image_embeds_m,
text_embeds=text_embeds,
text_embeds_m=text_embeds_m,
encoder_output=encoder_output_pos,
encoder_output_neg=encoder_output_neg,
itm_logits=itm_logits,
itm_labels=itm_labels,
),
)
def mask(
self,
input_ids,
vocab_size,
device,
targets=None,
masked_indices=None,
probability_matrix=None,
):
"""
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
"""
if masked_indices is None:
masked_indices = torch.bernoulli(probability_matrix).bool()
masked_indices[input_ids == self.tokenizer.pad_token_id] = False
masked_indices[input_ids == self.tokenizer.cls_token_id] = False
if targets is not None:
targets[~masked_indices] = -100 # We only compute loss on masked tokens
# 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
indices_replaced = (
torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked_indices
)
input_ids[indices_replaced] = self.tokenizer.mask_token_id
# 10% of the time, we replace masked input tokens with random word
indices_random = (
torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool()
& masked_indices
& ~indices_replaced
)
random_words = torch.randint(vocab_size, input_ids.shape, dtype=torch.long).to(
device
)
input_ids[indices_random] = random_words[indices_random]
# The rest of the time (10% of the time) we keep the masked input tokens unchanged
if targets is not None:
return input_ids, targets
else:
return input_ids
@classmethod
def from_config(cls, cfg=None):
image_encoder = VisionTransformerEncoder.from_config(cfg, from_pretrained=True)
config_text_encoder = BertConfig.from_json_file(
get_abs_path(cfg["med_config_path"])
)
config_text_encoder.fusion_layer = 6
text_encoder = BertForMaskedLM.from_pretrained(
"bert-base-uncased", config=config_text_encoder
)
embed_dim = cfg.get("embed_dim", 256)
momentum = cfg.get("momentum", 0.995)
alpha = cfg.get("alpha", 0.4)
mlm_mask_prob = cfg.get("mlm_mask_prob", 0.15)
temp = cfg.get("temp", 0.07)
max_txt_len = cfg.get("max_txt_len", 30)
queue_size = cfg.get("queue_size", 65536)
model = cls(
image_encoder=image_encoder,
text_encoder=text_encoder,
queue_size=queue_size,
embed_dim=embed_dim,
mlm_mask_prob=mlm_mask_prob,
temp=temp,
momentum=momentum,
alpha=alpha,
max_txt_len=max_txt_len,
)
        return model
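Two pieces of the distillation logic above are easy to sanity-check in isolation: the ramp-up factor that grows `alpha` linearly over the first two epochs, and the soft targets that blend momentum-encoder probabilities with the one-hot labels. A pure-Python sketch mirroring `_rampup_factor` and the `sim_i2t_targets` computation (illustrative only):

```python
import math


def rampup_factor(epoch, iters, num_iters_per_epoch):
    # min(1, steps_so_far / (2 * steps_per_epoch)): reaches 1.0 after two epochs.
    return min(1, (epoch * num_iters_per_epoch + iters) / (2 * num_iters_per_epoch))


def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def soft_targets(momentum_sims, one_hot, alpha):
    # alpha * softmax(momentum similarities) + (1 - alpha) * hard targets.
    probs = softmax(momentum_sims)
    return [alpha * p + (1 - alpha) * t for p, t in zip(probs, one_hot)]
```

Because both mixed terms are probability distributions, the soft targets still sum to one, so the cross-entropy in `loss_i2t` stays well scaled.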
from copy import deepcopy
import torch
import torch.nn.functional as F
from lavis.common.registry import registry
from lavis.models.albef_models import AlbefBase, compute_sim_matrix
from lavis.models.albef_models.albef_outputs import (
AlbefIntermediateOutput,
AlbefOutput,
AlbefSimilarity,
)
from lavis.models.base_model import MomentumDistilationMixin, SharedQueueMixin
from lavis.models.med import XBertEncoder
from lavis.models.vit import VisionTransformerEncoder
from torch import nn
@registry.register_model("albef_retrieval")
class AlbefRetrieval(AlbefBase, MomentumDistilationMixin, SharedQueueMixin):
"""
ALBEF retrieval model.
Supported model types:
        - coco: fine-tuned ALBEF base model on COCO dataset (Karpathy split).
- flickr: fine-tuned ALBEF base model on Flickr30k dataset.
Usage:
>>> from lavis.models import load_model
>>> model = load_model("albef_retrieval", "coco")
>>> model = load_model("albef_retrieval", "flickr")
"""
PRETRAINED_MODEL_CONFIG_DICT = {
"coco": "configs/models/albef_retrieval_coco.yaml",
"flickr": "configs/models/albef_retrieval_flickr.yaml",
}
def __init__(
self,
image_encoder,
text_encoder,
queue_size,
embed_dim=256,
temp=0.07,
use_distill=True,
momentum=0.995,
alpha=0.4,
max_txt_len=30,
):
super().__init__()
self.tokenizer = self.init_tokenizer()
self.visual_encoder = image_encoder
self.text_encoder = text_encoder
text_width = text_encoder.config.hidden_size
vision_width = image_encoder.vision_width
self.vision_proj = nn.Linear(vision_width, embed_dim)
self.text_proj = nn.Linear(text_width, embed_dim)
self.itm_head = nn.Linear(text_width, 2)
# create the momentum encoder
self.visual_encoder_m = deepcopy(self.visual_encoder)
self.text_encoder_m = deepcopy(self.text_encoder)
self.vision_proj_m = deepcopy(self.vision_proj)
self.text_proj_m = deepcopy(self.text_proj)
self.model_pairs = [
[self.visual_encoder, self.visual_encoder_m],
[self.text_encoder, self.text_encoder_m],
[self.vision_proj, self.vision_proj_m],
[self.text_proj, self.text_proj_m],
]
self.copy_params()
# create the queue
self.register_buffer("image_queue", torch.randn(embed_dim, queue_size))
self.register_buffer("text_queue", torch.randn(embed_dim, queue_size))
self.register_buffer("idx_queue", torch.full((1, queue_size), -100))
self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))
self.image_queue = nn.functional.normalize(self.image_queue, dim=0)
self.text_queue = nn.functional.normalize(self.text_queue, dim=0)
self.queue_size = queue_size
self.momentum = momentum
self.temp = nn.Parameter(temp * torch.ones([]))
self.alpha = alpha
self.max_txt_len = max_txt_len
self.use_distill = use_distill
def _rampup_factor(self, epoch, iters, num_iters_per_epoch):
return min(1, (epoch * num_iters_per_epoch + iters) / (2 * num_iters_per_epoch))
def forward(self, samples):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). The input images.
- text_input (list): A list of length batch_size, each element is a string of text/caption.
- image_id (torch.Tensor): A tensor of shape (batch_size, ). The image ids, used to identify same images in batch.
- epoch (int): The current epoch.
- iters (int): The current iteration.
- num_iters_per_epoch (int): The number of iterations per epoch.
Returns:
BlipOutput: A BlipOutput object. See ``lavis.models.blip_models.blip_outputs.BlipOutput`` for more details.
Examples:
>>> import torch
>>> from lavis.models import load_model
>>> model = load_model("albef_retrieval", "coco")
>>> images = torch.randn(4, 3, 384, 384)
>>> text_input = ["caption of image 1", "another caption of image 1", "caption of image 2", "caption of image 3"]
>>> image_id = torch.tensor([1, 1, 2, 3])
>>> samples = {"image": images, "text_input": text_input, "image_id": image_id, "epoch": 0, "iters": 0, "num_iters_per_epoch": 100}
>>> output = model(samples)
>>> output.keys()
odict_keys(['sims', 'intermediate_output', 'loss', 'loss_itc', 'loss_itm'])
"""
image = samples["image"]
caption = samples["text_input"]
idx = samples["image_id"]
alpha = self.alpha * self._rampup_factor(
epoch=samples["epoch"],
iters=samples["iters"],
num_iters_per_epoch=samples["num_iters_per_epoch"],
)
with torch.no_grad():
self.temp.clamp_(0.001, 0.5)
image_embeds = self.visual_encoder.forward_features(image)
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
self.device
)
image_feat = F.normalize(self.vision_proj(image_embeds[:, 0, :]), dim=-1)
text = self.tokenizer(
caption,
padding="max_length",
truncation=True,
max_length=self.max_txt_len,
return_tensors="pt",
).to(self.device)
text_output = self.text_encoder.forward_text(text)
text_embeds = text_output.last_hidden_state
text_feat = F.normalize(self.text_proj(text_embeds[:, 0, :]), dim=-1)
idx = idx.view(-1, 1)
idx_all = torch.cat([idx.t(), self.idx_queue.clone().detach()], dim=1)
pos_idx = torch.eq(idx, idx_all).float()
sim_targets = pos_idx / pos_idx.sum(1, keepdim=True)
with torch.no_grad():
self._momentum_update()
image_embeds_m = self.visual_encoder_m(image)
image_feat_m = F.normalize(
self.vision_proj_m(image_embeds_m[:, 0, :]), dim=-1
)
image_feat_all = torch.cat(
[image_feat_m.t(), self.image_queue.clone().detach()], dim=1
)
text_output_m = self.text_encoder_m.forward_text(text)
text_embeds_m = text_output_m.last_hidden_state
text_feat_m = F.normalize(self.text_proj_m(text_embeds_m[:, 0, :]), dim=-1)
text_feat_all = torch.cat(
[text_feat_m.t(), self.text_queue.clone().detach()], dim=1
)
if self.use_distill:
sim_i2t_m = image_feat_m @ text_feat_all / self.temp
sim_t2i_m = text_feat_m @ image_feat_all / self.temp
sim_i2t_targets = (
alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * sim_targets
)
sim_t2i_targets = (
alpha * F.softmax(sim_t2i_m, dim=1) + (1 - alpha) * sim_targets
)
sim_i2t = image_feat @ text_feat_all / self.temp
sim_t2i = text_feat @ image_feat_all / self.temp
if self.use_distill:
loss_i2t = -torch.sum(
F.log_softmax(sim_i2t, dim=1) * sim_i2t_targets, dim=1
).mean()
loss_t2i = -torch.sum(
F.log_softmax(sim_t2i, dim=1) * sim_t2i_targets, dim=1
).mean()
else:
loss_i2t = -torch.sum(
F.log_softmax(sim_i2t, dim=1) * sim_targets, dim=1
).mean()
loss_t2i = -torch.sum(
F.log_softmax(sim_t2i, dim=1) * sim_targets, dim=1
).mean()
loss_itc = (loss_i2t + loss_t2i) / 2
self._dequeue_and_enqueue(image_feat_m, text_feat_m, idx)
encoder_output_pos = self.text_encoder(
encoder_embeds=text_embeds,
attention_mask=text.attention_mask,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
mode="fusion",
)
with torch.no_grad():
bs = image.size(0)
weights_i2t = F.softmax(sim_i2t[:, :bs] + 1e-4, dim=1)
weights_t2i = F.softmax(sim_t2i[:, :bs] + 1e-4, dim=1)
mask = torch.eq(idx, idx.T)
weights_i2t.masked_fill_(mask, 0)
weights_t2i.masked_fill_(mask, 0)
# select a negative image for each text
image_embeds_neg = []
for b in range(bs):
neg_idx = torch.multinomial(weights_t2i[b], 1).item()
image_embeds_neg.append(image_embeds[neg_idx])
image_embeds_neg = torch.stack(image_embeds_neg, dim=0)
# select a negative text for each image
text_embeds_neg = []
text_atts_neg = []
for b in range(bs):
neg_idx = torch.multinomial(weights_i2t[b], 1).item()
text_embeds_neg.append(text_embeds[neg_idx])
text_atts_neg.append(text.attention_mask[neg_idx])
text_embeds_neg = torch.stack(text_embeds_neg, dim=0)
text_atts_neg = torch.stack(text_atts_neg, dim=0)
text_embeds_all = torch.cat([text_embeds, text_embeds_neg], dim=0)
text_atts_all = torch.cat([text.attention_mask, text_atts_neg], dim=0)
image_embeds_all = torch.cat([image_embeds_neg, image_embeds], dim=0)
image_atts_all = torch.cat([image_atts, image_atts], dim=0)
encoder_output_neg = self.text_encoder(
encoder_embeds=text_embeds_all,
attention_mask=text_atts_all,
encoder_hidden_states=image_embeds_all,
encoder_attention_mask=image_atts_all,
return_dict=True,
mode="fusion",
)
vl_embeddings = torch.cat(
[
encoder_output_pos.last_hidden_state[:, 0, :],
encoder_output_neg.last_hidden_state[:, 0, :],
],
dim=0,
)
itm_logits = self.itm_head(vl_embeddings)
itm_labels = torch.cat(
[torch.ones(bs, dtype=torch.long), torch.zeros(2 * bs, dtype=torch.long)],
dim=0,
).to(self.device)
loss_itm = F.cross_entropy(itm_logits, itm_labels)
return AlbefOutput(
loss=loss_itc + loss_itm,
loss_itc=loss_itc,
loss_itm=loss_itm,
sims=AlbefSimilarity(
sim_i2t=sim_i2t,
sim_t2i=sim_t2i,
sim_i2t_m=sim_i2t_m,
sim_t2i_m=sim_t2i_m,
sim_i2t_targets=sim_i2t_targets,
sim_t2i_targets=sim_t2i_targets,
),
intermediate_output=AlbefIntermediateOutput(
image_embeds=image_embeds,
image_embeds_m=image_embeds_m,
text_embeds=text_embeds,
text_embeds_m=text_embeds_m,
encoder_output=encoder_output_pos,
encoder_output_neg=encoder_output_neg,
itm_logits=itm_logits,
itm_labels=itm_labels,
),
)
@classmethod
def from_config(cls, cfg=None):
image_encoder = VisionTransformerEncoder.from_config(cfg, from_pretrained=False)
text_encoder = XBertEncoder.from_config(cfg)
embed_dim = cfg.get("embed_dim", 256)
momentum = cfg.get("momentum", 0.995)
alpha = cfg.get("alpha", 0.4)
temp = cfg.get("temp", 0.07)
max_txt_len = cfg.get("max_txt_len", 30)
queue_size = cfg.get("queue_size", 0)
use_distill = cfg.get("use_distill", True)
model = cls(
image_encoder=image_encoder,
text_encoder=text_encoder,
queue_size=queue_size,
embed_dim=embed_dim,
temp=temp,
momentum=momentum,
alpha=alpha,
max_txt_len=max_txt_len,
use_distill=use_distill,
)
model.load_checkpoint_from_config(cfg)
return model
def compute_sim_matrix(self, data_loader, task_cfg):
"""
Compute similarity i2t, t2i matrix for the given data loader.
"""
k_test = task_cfg.k_test
        return compute_sim_matrix(model=self, data_loader=data_loader, k_test=k_test)
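The in-batch hard-negative mining above zeroes the weight of each true pair (via `masked_fill_` on matching `image_id`s) and then draws one negative per row with `torch.multinomial`. A pure-Python sketch of that per-row draw (illustrative only, not the library implementation):

```python
import random


def sample_hard_negative(weights, positive_idx):
    # Zero out the positive pair, then sample an index in proportion to the
    # remaining similarity weights, as torch.multinomial(weights, 1) does.
    w = list(weights)
    w[positive_idx] = 0.0
    total = sum(w)
    r = random.random() * total
    acc = 0.0
    for i, wi in enumerate(w):
        acc += wi
        if wi > 0 and r <= acc:
            return i
    # Floating-point fallback: return the last index carrying weight.
    return max(i for i, wi in enumerate(w) if wi > 0)
```

Sampling in proportion to similarity (rather than taking the argmax) keeps the negatives hard but varied across iterations.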
import warnings
import torch
import torch.nn.functional as F
from lavis.common.registry import registry
from lavis.common.utils import get_abs_path
from lavis.models.albef_models import AlbefBase
from lavis.models.albef_models.albef_outputs import AlbefOutputFeatures
from lavis.models.med import BertForMaskedLM
from lavis.models.vit import VisionTransformerEncoder
from torch import nn
from transformers import BertConfig
@registry.register_model("albef_feature_extractor")
class AlbefFeatureExtractor(AlbefBase):
PRETRAINED_MODEL_CONFIG_DICT = {
"base": "configs/models/albef_feature_extractor.yaml",
}
def __init__(self, image_encoder, text_encoder, embed_dim=256, max_txt_len=30):
super().__init__()
self.tokenizer = self.init_tokenizer()
self.visual_encoder = image_encoder
self.text_encoder = text_encoder
text_width = text_encoder.config.hidden_size
vision_width = image_encoder.vision_width
self.embed_dim = embed_dim
self.vision_proj = nn.Linear(vision_width, embed_dim)
self.text_proj = nn.Linear(text_width, embed_dim)
self.max_txt_len = max_txt_len
self.temp = nn.Parameter(0.07 * torch.ones([]))
@torch.no_grad()
def extract_features(self, samples, mode="multimodal"):
"""
Extract features for multimodal or unimodal samples.
Args:
samples (dict): A dictionary of samples, containing the following keys:
- image (torch.Tensor): A tensor of shape (B, C, H, W) containing the image.
Raw images should be preprocessed before being passed to feature extractor.
- text_input (list): A list of strings containing the text, length B.
mode (str): The mode of feature extraction. Can be either "multimodal", "text" or "image".
If "multimodal", return image features and multimodal features;
if "text", return text features;
if "image", return image features.
Default: "multimodal".
Returns:
An AlbefOutputFeatures object, see lavis/models/albef_models/albef_outputs.py for details.
Examples:
```python
>>> from PIL import Image
>>> from lavis.models import load_model_and_preprocess
>>> raw_image = Image.open("docs/data/merlion.png").convert("RGB")
>>> caption = "a large fountain spewing water into the air"
>>> model, vis_processors, txt_processors = load_model_and_preprocess("albef_feature_extractor", is_eval=True)
>>> image = vis_processors["eval"](raw_image).unsqueeze(0)
>>> text_input = txt_processors["eval"](caption)
>>> sample = {"image": image, "text_input": [text_input]}
>>> features_multimodal = model.extract_features(sample)
>>> features_multimodal.keys()
odict_keys(['image_embeds', 'multimodal_embeds'])
>>> features_multimodal.image_embeds.shape
torch.Size([1, 197, 768])
>>> features_multimodal.multimodal_embeds.shape
torch.Size([1, 12, 768])
>>> features_text = model.extract_features(sample, mode="text")
>>> features_text.keys()
odict_keys(['text_embeds', 'text_features'])
>>> features_text.text_embeds.shape
torch.Size([1, 12, 768])
>>> features_text.text_features.shape
torch.Size([1, 12, 256])
>>> features_image = model.extract_features(sample, mode="image")
>>> features_image.keys()
odict_keys(['image_embeds', 'image_features'])
>>> features_image.image_embeds.shape
torch.Size([1, 197, 768])
>>> features_image.image_features.shape
torch.Size([1, 197, 256])
```
"""
image = samples["image"]
caption = samples["text_input"]
if isinstance(mode, str):
mode = [mode]
for m in mode:
assert m in [
"multimodal",
"image",
"text",
], "mode must be one of [multimodal, image, text], but got {}".format(m)
        # initialize output
image_embeds, text_embeds, multimodal_embeds = None, None, None
image_features, text_features = None, None
if "image" in mode or "multimodal" in mode:
assert (
image is not None
), "image must be provided if mode is 'image' or 'multimodal'"
image_embeds = self.visual_encoder.forward_features(image)
image_features = F.normalize(self.vision_proj(image_embeds), dim=-1)
if "text" in mode or "multimodal" in mode:
assert (
caption is not None
), "text must be provided if mode is 'text' or 'multimodal'"
text = self.tokenizer(
caption,
padding=True,
return_tensors="pt",
).to(self.device)
text_output = self.text_encoder.bert(
text.input_ids,
attention_mask=text.attention_mask,
return_dict=True,
mode="text",
)
text_embeds = text_output.last_hidden_state
text_features = F.normalize(self.text_proj(text_embeds), dim=-1)
if "multimodal" in mode:
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(
self.device
)
            # forward the positive image-text pair
output = self.text_encoder.bert(
encoder_embeds=text_embeds,
attention_mask=text.attention_mask,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_atts,
return_dict=True,
mode="fusion",
)
multimodal_embeds = output.last_hidden_state
return AlbefOutputFeatures(
image_embeds=image_embeds,
image_embeds_proj=image_features,
text_embeds=text_embeds,
text_embeds_proj=text_features,
multimodal_embeds=multimodal_embeds,
)
@classmethod
def from_config(cls, cfg=None):
image_encoder = VisionTransformerEncoder.from_config(cfg, from_pretrained=True)
config_text_encoder = BertConfig.from_json_file(
get_abs_path(cfg["med_config_path"])
)
config_text_encoder.fusion_layer = 6
text_encoder = BertForMaskedLM.from_pretrained(
"bert-base-uncased", config=config_text_encoder
)
embed_dim = cfg.get("embed_dim", 256)
max_txt_len = cfg.get("max_txt_len", 30)
model = cls(
image_encoder=image_encoder,
text_encoder=text_encoder,
embed_dim=embed_dim,
max_txt_len=max_txt_len,
)
# load pre-trained weights
pretrain_path = cfg.get("pretrained", None)
if pretrain_path is not None:
msg = model.load_from_pretrained(
url_or_filename=pretrain_path, rename_text_keys=False
)
else:
warnings.warn("No pretrained weights are loaded.")
return model | /salesforce-lavis-1.0.2.tar.gz/salesforce-lavis-1.0.2/lavis/models/albef_models/albef_feature_extractor.py | 0.836053 | 0.477493 | albef_feature_extractor.py | pypi |
from dataclasses import dataclass
from typing import Optional
import torch
from transformers.modeling_outputs import (
BaseModelOutputWithPoolingAndCrossAttentions,
CausalLMOutputWithCrossAttentions,
ModelOutput,
)
@dataclass
class AlbefSimilarity(ModelOutput):
sim_i2t: torch.FloatTensor = None
sim_t2i: torch.FloatTensor = None
sim_i2t_m: Optional[torch.FloatTensor] = None
sim_t2i_m: Optional[torch.FloatTensor] = None
sim_i2t_targets: Optional[torch.FloatTensor] = None
sim_t2i_targets: Optional[torch.FloatTensor] = None
@dataclass
class AlbefIntermediateOutput(ModelOutput):
# uni-modal features
image_embeds: torch.FloatTensor = None
text_embeds: Optional[torch.FloatTensor] = None
image_embeds_m: Optional[torch.FloatTensor] = None
text_embeds_m: Optional[torch.FloatTensor] = None
# intermediate outputs of multimodal encoder
encoder_output: Optional[BaseModelOutputWithPoolingAndCrossAttentions] = None
encoder_output_m: Optional[BaseModelOutputWithPoolingAndCrossAttentions] = None
encoder_output_neg: Optional[BaseModelOutputWithPoolingAndCrossAttentions] = None
itm_logits: Optional[torch.FloatTensor] = None
itm_labels: Optional[torch.LongTensor] = None
# intermediate outputs of multimodal decoder
decoder_output: Optional[CausalLMOutputWithCrossAttentions] = None
decoder_labels: Optional[torch.LongTensor] = None
@dataclass
class AlbefOutput(ModelOutput):
# some finetuned models (e.g. BlipVQA) do not compute similarity, thus optional.
sims: Optional[AlbefSimilarity] = None
intermediate_output: AlbefIntermediateOutput = None
loss: Optional[torch.FloatTensor] = None
loss_itc: Optional[torch.FloatTensor] = None
loss_itm: Optional[torch.FloatTensor] = None
loss_mlm: Optional[torch.FloatTensor] = None
@dataclass
class AlbefOutputWithLogits(AlbefOutput):
logits: torch.FloatTensor = None
logits_m: torch.FloatTensor = None
@dataclass
class AlbefOutputFeatures(ModelOutput):
"""
Data class of features from AlbefFeatureExtractor.
Args:
image_embeds: `torch.FloatTensor` of shape `(batch_size, num_patches+1, embed_dim)`, `optional`
image_features: `torch.FloatTensor` of shape `(batch_size, num_patches+1, feature_dim)`, `optional`
text_embeds: `torch.FloatTensor` of shape `(batch_size, sequence_length+1, embed_dim)`, `optional`
text_features: `torch.FloatTensor` of shape `(batch_size, sequence_length+1, feature_dim)`, `optional`
The first embedding or feature is for the [CLS] token.
Features are obtained by projecting the corresponding embedding into a normalized low-dimensional space.
"""
image_embeds: Optional[torch.FloatTensor] = None
image_embeds_proj: Optional[torch.FloatTensor] = None
text_embeds: Optional[torch.FloatTensor] = None
text_embeds_proj: Optional[torch.FloatTensor] = None
    multimodal_embeds: Optional[torch.FloatTensor] = None

# ---- end of file: lavis/models/albef_models/albef_outputs.py ----
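# Illustrative sketch (not part of the original LAVIS source): the projected
# features described in AlbefOutputFeatures are obtained with a linear head
# followed by L2 normalization, i.e. F.normalize(..., dim=-1). The normalization
# step on a plain Python list:
import math

def _l2_normalize(vec, eps=1e-12):
    # divide each component by the vector's Euclidean norm
    norm = math.sqrt(sum(x * x for x in vec)) or eps
    return [x / norm for x in vec]

_unit = _l2_normalize([3.0, 4.0])  # [0.6, 0.8]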
import datetime
import logging
import os
import time
import lavis.common.dist_utils as dist_utils
import torch
import torch.distributed as dist
import torch.nn.functional as F
from lavis.common.dist_utils import download_cached_file
from lavis.common.logger import MetricLogger
from lavis.common.utils import is_url
from lavis.models.base_model import BaseModel
from lavis.models.vit import interpolate_pos_embed
from transformers import BertTokenizer
class AlbefBase(BaseModel):
@classmethod
def init_tokenizer(cls):
return BertTokenizer.from_pretrained("bert-base-uncased")
def load_from_pretrained(self, url_or_filename, rename_text_keys=True):
if is_url(url_or_filename):
cached_file = download_cached_file(
url_or_filename, check_hash=False, progress=True
)
checkpoint = torch.load(cached_file, map_location="cpu")
elif os.path.isfile(url_or_filename):
checkpoint = torch.load(url_or_filename, map_location="cpu")
else:
raise RuntimeError("checkpoint url or path is invalid")
if "model" in checkpoint:
state_dict = checkpoint["model"]
else:
state_dict = checkpoint
state_dict["visual_encoder.pos_embed"] = interpolate_pos_embed(
state_dict["visual_encoder.pos_embed"], self.visual_encoder
)
if (
"visual_encoder_m.pos_embed" in self.state_dict().keys()
and "visual_encoder_m.pos_embed" in state_dict
):
state_dict["visual_encoder_m.pos_embed"] = interpolate_pos_embed(
state_dict["visual_encoder_m.pos_embed"], self.visual_encoder_m
)
if rename_text_keys:
for key in list(state_dict.keys()):
if "bert" in key:
new_key = key.replace("bert.", "")
state_dict[new_key] = state_dict[key]
del state_dict[key]
for key in self.state_dict().keys():
if key in state_dict.keys():
if state_dict[key].shape != self.state_dict()[key].shape:
del state_dict[key]
msg = self.load_state_dict(state_dict, strict=False)
logging.info("Missing keys {}".format(msg.missing_keys))
logging.info("load checkpoint from %s" % url_or_filename)
return msg
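# Illustrative sketch (not part of the original LAVIS source): load_from_pretrained
# above renames checkpoint keys by stripping an inner "bert." prefix and drops
# tensors whose shape disagrees with the current model before load_state_dict.
# The same bookkeeping with plain dicts, where shape tuples stand in for tensors
# (the key names here are hypothetical):
def _clean_state_dict(pretrained_shapes, model_shapes):
    renamed = {k.replace("bert.", ""): v for k, v in pretrained_shapes.items()}
    # keep a key only if the model has no such key or the shapes match
    return {
        k: v for k, v in renamed.items()
        if k not in model_shapes or model_shapes[k] == v
    }

_cleaned = _clean_state_dict(
    {"text_encoder.bert.embeddings.w": (30522, 768), "head.w": (2, 768)},
    {"text_encoder.embeddings.w": (30522, 768), "head.w": (3, 768)},
)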
def compute_sim_matrix(model, data_loader, **kwargs):
k_test = kwargs.pop("k_test")
metric_logger = MetricLogger(delimiter=" ")
header = "Evaluation:"
logging.info("Computing features for evaluation...")
start_time = time.time()
texts = data_loader.dataset.text
num_text = len(texts)
text_bs = 256
text_ids = []
text_embeds = []
text_atts = []
for i in range(0, num_text, text_bs):
text = texts[i : min(num_text, i + text_bs)]
text_input = model.tokenizer(
text,
padding="max_length",
truncation=True,
max_length=35,
return_tensors="pt",
).to(model.device)
text_output = model.text_encoder.forward_text(text_input)
text_embed = F.normalize(
model.text_proj(text_output.last_hidden_state[:, 0, :])
)
text_embeds.append(text_embed)
text_ids.append(text_input.input_ids)
text_atts.append(text_input.attention_mask)
text_embeds = torch.cat(text_embeds, dim=0)
text_ids = torch.cat(text_ids, dim=0)
text_atts = torch.cat(text_atts, dim=0)
if hasattr(model.tokenizer, "enc_token_id"):
text_ids[:, 0] = model.tokenizer.enc_token_id
image_feats = []
image_embeds = []
for samples in data_loader:
image = samples["image"]
image = image.to(model.device)
image_feat = model.visual_encoder.forward_features(image)
image_embed = model.vision_proj(image_feat[:, 0, :])
image_embed = F.normalize(image_embed, dim=-1)
image_feats.append(image_feat.cpu())
image_embeds.append(image_embed)
image_feats = torch.cat(image_feats, dim=0)
image_embeds = torch.cat(image_embeds, dim=0)
sims_matrix = image_embeds @ text_embeds.t()
score_matrix_i2t = torch.full(
(len(data_loader.dataset.image), len(texts)), -100.0
).to(model.device)
num_tasks = dist_utils.get_world_size()
rank = dist_utils.get_rank()
step = sims_matrix.size(0) // num_tasks + 1
start = rank * step
end = min(sims_matrix.size(0), start + step)
for i, sims in enumerate(
metric_logger.log_every(sims_matrix[start:end], 50, header)
):
# topk_sim, topk_idx = sims.topk(k=config["k_test"], dim=0)
topk_sim, topk_idx = sims.topk(k=k_test, dim=0)
encoder_output = image_feats[start + i].repeat(k_test, 1, 1).to(model.device)
encoder_att = torch.ones(encoder_output.size()[:-1], dtype=torch.long).to(
model.device
)
output = model.text_encoder(
text_ids[topk_idx],
attention_mask=text_atts[topk_idx],
encoder_hidden_states=encoder_output,
encoder_attention_mask=encoder_att,
return_dict=True,
)
score = model.itm_head(output.last_hidden_state[:, 0, :])[:, 1]
score_matrix_i2t[start + i, topk_idx] = score + topk_sim
sims_matrix = sims_matrix.t()
score_matrix_t2i = torch.full(
(len(texts), len(data_loader.dataset.image)), -100.0
).to(model.device)
step = sims_matrix.size(0) // num_tasks + 1
start = rank * step
end = min(sims_matrix.size(0), start + step)
for i, sims in enumerate(
metric_logger.log_every(sims_matrix[start:end], 50, header)
):
topk_sim, topk_idx = sims.topk(k=k_test, dim=0)
encoder_output = image_feats[topk_idx.cpu()].to(model.device)
encoder_att = torch.ones(encoder_output.size()[:-1], dtype=torch.long).to(
model.device
)
output = model.text_encoder(
text_ids[start + i].repeat(k_test, 1),
attention_mask=text_atts[start + i].repeat(k_test, 1),
encoder_hidden_states=encoder_output,
encoder_attention_mask=encoder_att,
return_dict=True,
)
score = model.itm_head(output.last_hidden_state[:, 0, :])[:, 1]
score_matrix_t2i[start + i, topk_idx] = score + topk_sim
if dist_utils.is_dist_avail_and_initialized():
dist.barrier()
torch.distributed.all_reduce(
score_matrix_i2t, op=torch.distributed.ReduceOp.SUM
)
torch.distributed.all_reduce(
score_matrix_t2i, op=torch.distributed.ReduceOp.SUM
)
total_time = time.time() - start_time
total_time_str = str(datetime.timedelta(seconds=int(total_time)))
logging.info("Evaluation time {}".format(total_time_str))
    return score_matrix_i2t.cpu().numpy(), score_matrix_t2i.cpu().numpy()

# ---- end of file: lavis/models/albef_models/__init__.py ----
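# Illustrative sketch (not part of the original LAVIS source): compute_sim_matrix
# above first ranks all candidates by dot-product similarity, then rescores only
# the top-k with the cross-attention ITM head, leaving every other entry at the
# fill value -100. One row of that rerank in pure Python (itm_score is a
# stand-in callable for the ITM head):
def _rerank_row(sims, k, itm_score):
    ranked = sorted(range(len(sims)), key=lambda j: sims[j], reverse=True)[:k]
    scores = [-100.0] * len(sims)
    for j in ranked:
        scores[j] = itm_score(j) + sims[j]  # score + topk_sim, as above
    return scores

_row = _rerank_row([0.25, 0.875, 0.5, 0.125], k=2, itm_score=lambda j: 1.0)
# -> [-100.0, 1.875, 1.5, -100.0]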
import logging
import os
from copy import deepcopy
import torch
import torch.nn.functional as F
from lavis.common.registry import registry
from lavis.common.utils import get_abs_path, is_url
from lavis.models.albef_models import AlbefBase
from lavis.models.albef_models.albef_outputs import AlbefIntermediateOutput, AlbefOutput
from lavis.models.base_model import MomentumDistilationMixin, tile
from lavis.models.med import BertConfig, BertLMHeadModel, XBertEncoder
from lavis.models.vit import VisionTransformerEncoder, interpolate_pos_embed
from lavis.common.dist_utils import download_cached_file
@registry.register_model("albef_vqa")
class AlbefVQA(AlbefBase, MomentumDistilationMixin):
"""
ALBEF VQA models.
Supported model types:
- base: vqa model initialized with pre-trained ALBEF base model on 115M image-text pairs after CapFilt; not fine-tuned.
- vqav2: fine-tuned ALBEF base model on VQA v2.0 dataset.
Usage:
>>> from lavis.models import load_model
>>> model = load_model("albef_vqa", "vqav2")
"""
PRETRAINED_MODEL_CONFIG_DICT = {
"vqav2": "configs/models/albef_vqav2.yaml",
}
def __init__(
self,
image_encoder,
text_encoder,
text_decoder,
use_distill=True,
momentum=0.995,
alpha=0.4,
max_txt_len=35,
):
super().__init__()
self.tokenizer = self.init_tokenizer()
self.max_txt_len = max_txt_len
self.use_distill = use_distill
self.visual_encoder = image_encoder
self.text_encoder = text_encoder
self.text_decoder = text_decoder
if self.use_distill:
self.visual_encoder_m = deepcopy(self.visual_encoder)
self.text_encoder_m = deepcopy(self.text_encoder)
self.text_decoder_m = deepcopy(self.text_decoder)
self.momentum = momentum
self.alpha = alpha
self.model_pairs = [
[self.visual_encoder, self.visual_encoder_m],
[self.text_encoder, self.text_encoder_m],
[self.text_decoder, self.text_decoder_m],
]
self.copy_params()
def _rampup_factor(self, epoch, iters, num_iters_per_epoch):
return min(1, (epoch * num_iters_per_epoch + iters) / num_iters_per_epoch)
def forward(self, samples):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). Default H=480, W=480.
- text_input (list): A list of strings, each string is a question
- answer (list): A list of strings, each string is an answer
- weight (torch.Tensor): A tensor used to weigh each answer in the loss computation.
The shape of the tensor is (sum(n_answers),)
                - n_answers (torch.Tensor): A tensor of shape (batch_size,) containing the number of answers
for each question in the batch.
Returns:
An AlbefOutput object containing loss and intermediate outputs;
see lavis/models/albef_models/albef_outputs.py for more details.
Examples:
>>> import torch
>>> from lavis.models import load_model
>>> model = load_model("albef_vqa")
>>> samples = {
... "image": torch.rand(2, 3, 384, 384),
... "text_input": ["What is this?", "What is that?"],
... "answer": ["cat", "cat", "dog"],
... "weight": torch.tensor([1.0, 1.0, 1.0]),
... "n_answers": torch.tensor([2, 1]),
... "epoch": 0, "iters": 0, "num_iters_per_epoch": 1000,
... }
>>> output = model(samples)
>>> output.keys()
odict_keys(['intermediate_output', 'loss'])
"""
(
encoder_output,
encoder_output_m,
image_embeds,
image_embeds_m,
) = self.forward_encoder(samples)
loss, decoder_output, decoder_targets = self.forward_decoder(
samples, encoder_out=(encoder_output, encoder_output_m)
)
return AlbefOutput(
loss=loss,
intermediate_output=AlbefIntermediateOutput(
image_embeds=image_embeds,
image_embeds_m=image_embeds_m,
encoder_output=encoder_output,
encoder_output_m=encoder_output_m,
decoder_output=decoder_output,
decoder_labels=decoder_targets,
),
)
def forward_encoder(self, samples):
questions = samples["text_input"]
questions = self.tokenizer(
questions,
padding="longest",
truncation=True,
max_length=self.max_txt_len,
return_tensors="pt",
).to(self.device)
samples.update({"tokenized_text": questions})
image_embeds = self.visual_encoder.forward_features(samples["image"])
encoder_output = self.text_encoder.forward_automask(
tokenized_text=samples["tokenized_text"], visual_embeds=image_embeds
)
if self.use_distill:
self._momentum_update()
with torch.no_grad():
image_embeds_m = self.visual_encoder_m(samples["image"])
encoder_output_m = self.text_encoder_m.forward_automask(
tokenized_text=samples["tokenized_text"],
visual_embeds=image_embeds_m,
)
else:
encoder_output_m = None
image_embeds_m = None
return encoder_output, encoder_output_m, image_embeds, image_embeds_m
def forward_decoder(self, samples, encoder_out, **kwargs):
answers = self.tokenizer(
samples["answer"], padding="longest", return_tensors="pt"
).to(self.device)
answer_targets = answers.input_ids.masked_fill(
answers.input_ids == self.tokenizer.pad_token_id, -100
)
question_states = []
question_atts = []
question = samples["tokenized_text"]
question_output, question_output_m = encoder_out
for b, n in enumerate(samples["n_answers"]):
question_states += [question_output.last_hidden_state[b]] * n
question_atts += [question.attention_mask[b]] * n
question_states = torch.stack(question_states, dim=0)
question_atts = torch.stack(question_atts, dim=0)
if self.use_distill:
with torch.no_grad():
question_states_m = []
for b, n in enumerate(samples["n_answers"]):
question_states_m += [question_output_m.last_hidden_state[b]] * n
question_states_m = torch.stack(question_states_m, 0)
logits_m = self.text_decoder_m(
answers.input_ids,
attention_mask=answers.attention_mask,
encoder_hidden_states=question_states_m,
encoder_attention_mask=question_atts,
return_logits=True,
)
alpha = self.alpha * self._rampup_factor(
epoch=samples["epoch"],
iters=samples["iters"],
num_iters_per_epoch=samples["num_iters_per_epoch"],
)
answer_output = self.text_decoder(
answers.input_ids,
attention_mask=answers.attention_mask,
encoder_hidden_states=question_states,
encoder_attention_mask=question_atts,
labels=answer_targets,
soft_labels=F.softmax(logits_m, dim=-1),
alpha=alpha,
return_dict=True,
reduction="none",
)
loss = samples["weight"] * answer_output.loss
bsz = samples["image"].size(0)
loss = loss.sum() / bsz
return loss, answer_output, answer_targets
def predict_answers(self, samples, answer_list, num_ans_candidates=128, **kwargs):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). Default H=480, W=480.
- text_input (str or [str]): String or a list of strings, each string is a question.
                The number of questions must be equal to the batch size. If a single string is given, it is first converted to a list of length 1.
num_ans_candidates (int): Number of answer candidates, used to filter out answers with low probability.
answer_list (list): A list of strings, each string is an answer.
Returns:
List: A list of strings, each string is an answer.
Examples:
>>> from PIL import Image
>>> from lavis.models import load_model_and_preprocess
>>> model, vis_processors, txt_processors = load_model_and_preprocess("albef_vqa", "vqav2")
>>> raw_image = Image.open("docs/data/merlion.png").convert("RGB")
>>> question = "Which city is this photo taken?"
>>> image = vis_processors["eval"](raw_image).unsqueeze(0)
>>> question = txt_processors["eval"](question)
>>> samples = {"image": image, "text_input": [question]}
>>> answer_list = ["Singapore", "London", "Palo Alto", "Tokyo"]
>>> answers = model.predict_answers(samples, answer_list=answer_list)
>>> answers
['Singapore']
"""
if isinstance(samples["text_input"], str):
samples["text_input"] = [samples["text_input"]]
assert len(samples["text_input"]) == samples["image"].size(
0
), "The number of questions must be equal to the batch size."
num_ans_candidates = min(num_ans_candidates, len(answer_list))
return self.rank_answers(
samples, answer_list=answer_list, num_ans_candidates=num_ans_candidates
)
def rank_answers(self, samples, answer_list, num_ans_candidates):
"""
        Generate the first token of each answer using the decoder and keep the ${num_ans_candidates}
        most probable ones. Then select the answers from the answer list that start with these probable tokens.
        Lastly, use the selected answers as ground-truth labels for decoding and compute the LM loss.
        Return the answers that minimize the losses as the result.
"""
answer_candidates = self.tokenizer(
answer_list, padding="longest", return_tensors="pt"
).to(self.device)
# answer_candidates.input_ids[:, 0] = self.tokenizer.bos_token_id
answer_ids = answer_candidates.input_ids
answer_atts = answer_candidates.attention_mask
question_output, _, _, _ = self.forward_encoder(samples)
question_states = question_output.last_hidden_state
tokenized_question = samples["tokenized_text"]
question_atts = tokenized_question.attention_mask
num_ques = question_states.size(0)
start_ids = answer_ids[0, 0].repeat(num_ques, 1) # bos token
start_output = self.text_decoder(
start_ids,
encoder_hidden_states=question_states,
encoder_attention_mask=question_atts,
return_dict=True,
reduction="none",
)
logits = start_output.logits[:, 0, :] # first token's logit
# topk_probs: top-k probability
# topk_ids: [num_question, k]
answer_first_token = answer_ids[:, 1]
prob_first_token = F.softmax(logits, dim=1).index_select(
dim=1, index=answer_first_token
)
topk_probs, topk_ids = prob_first_token.topk(num_ans_candidates, dim=1)
# answer input: [num_question*k, answer_len]
input_ids = []
input_atts = []
for b, topk_id in enumerate(topk_ids):
input_ids.append(answer_ids.index_select(dim=0, index=topk_id))
input_atts.append(answer_atts.index_select(dim=0, index=topk_id))
input_ids = torch.cat(input_ids, dim=0)
input_atts = torch.cat(input_atts, dim=0)
targets_ids = input_ids.masked_fill(
input_ids == self.tokenizer.pad_token_id, -100
)
# repeat encoder's output for top-k answers
question_states = tile(question_states, 0, num_ans_candidates)
question_atts = tile(question_atts, 0, num_ans_candidates)
output = self.text_decoder(
input_ids,
attention_mask=input_atts,
encoder_hidden_states=question_states,
encoder_attention_mask=question_atts,
labels=targets_ids,
return_dict=True,
reduction="none",
)
log_probs_sum = -output.loss
log_probs_sum = log_probs_sum.view(num_ques, num_ans_candidates)
max_topk_ids = log_probs_sum.argmax(dim=1)
max_ids = topk_ids[max_topk_ids >= 0, max_topk_ids]
answers = [answer_list[max_id] for max_id in max_ids]
return answers
@classmethod
def from_config(cls, cfg=None):
image_encoder = VisionTransformerEncoder.from_config(cfg)
text_encoder = XBertEncoder.from_config(cfg)
config_decoder = BertConfig.from_json_file(get_abs_path(cfg["med_config_path"]))
config_decoder.fusion_layer = 0
config_decoder.num_hidden_layers = 6
text_decoder = BertLMHeadModel.from_pretrained(
"bert-base-uncased", config=config_decoder
)
alpha = cfg.get("alpha", 0.4)
momentum = cfg.get("momentum", 0.995)
use_distill = cfg.get("use_distill", True)
max_txt_len = cfg.get("max_txt_len", 25)
model = cls(
image_encoder=image_encoder,
text_encoder=text_encoder,
text_decoder=text_decoder,
use_distill=use_distill,
momentum=momentum,
alpha=alpha,
max_txt_len=max_txt_len,
)
# load pre-trained weights
model.load_checkpoint_from_config(cfg)
return model
def load_from_pretrained(self, url_or_filename):
if is_url(url_or_filename):
cached_file = download_cached_file(
url_or_filename, check_hash=False, progress=True
)
checkpoint = torch.load(cached_file, map_location="cpu")
elif os.path.isfile(url_or_filename):
checkpoint = torch.load(url_or_filename, map_location="cpu")
else:
raise RuntimeError("checkpoint url or path is invalid")
if "model" in checkpoint:
state_dict = checkpoint["model"]
else:
state_dict = checkpoint
        # reshape positional embedding to accommodate image resolution change
pos_embed_reshaped = interpolate_pos_embed(
state_dict["visual_encoder.pos_embed"], self.visual_encoder
)
state_dict["visual_encoder.pos_embed"] = pos_embed_reshaped
m_pos_embed_reshaped = interpolate_pos_embed(
state_dict["visual_encoder_m.pos_embed"], self.visual_encoder_m
)
state_dict["visual_encoder_m.pos_embed"] = m_pos_embed_reshaped
for key in list(state_dict.keys()):
if "bert" in key:
encoder_key = key.replace("bert.", "")
state_dict[encoder_key] = state_dict[key]
            # initialize text decoder as multimodal encoder (last 6 layers of model.text_encoder)
if "text_encoder" in key:
if "layer" in key:
encoder_keys = key.split(".")
layer_num = int(encoder_keys[4])
if layer_num < 6:
del state_dict[key]
continue
else:
decoder_layer_num = layer_num - 6
encoder_keys[4] = str(decoder_layer_num)
encoder_key = ".".join(encoder_keys)
else:
encoder_key = key
decoder_key = encoder_key.replace("text_encoder", "text_decoder")
state_dict[decoder_key] = state_dict[key]
del state_dict[key]
for key in self.state_dict().keys():
if key in state_dict.keys():
if state_dict[key].shape != self.state_dict()[key].shape:
del state_dict[key]
msg = self.load_state_dict(state_dict, strict=False)
logging.info("load checkpoint from %s" % url_or_filename)
logging.info(f"missing keys: {msg.missing_keys}")
        return msg

# ---- end of file: lavis/models/albef_models/albef_vqa.py ----
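# Illustrative sketch (not part of the original LAVIS source): rank_answers above
# selects in two stages -- first keep the num_ans_candidates answers whose first
# token is most probable, then return the survivor whose full decoding loss is
# lowest (i.e. whose summed log-probability is highest). The same selection
# logic on plain Python lists:
def _rank_answers_sketch(first_token_prob, full_loss, k):
    topk = sorted(range(len(first_token_prob)),
                  key=lambda i: first_token_prob[i], reverse=True)[:k]
    # among the k survivors, the smallest LM loss wins
    return min(topk, key=lambda i: full_loss[i])

_best = _rank_answers_sketch([0.1, 0.5, 0.3, 0.05], [2.0, 1.2, 0.7, 0.1], k=2)
# -> 2 (index 3 has the smallest loss but is filtered out in stage 1)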
import logging
import os
import torch
import torch.nn.functional as F
from lavis.common.dist_utils import download_cached_file
from lavis.common.utils import is_url
from lavis.models.base_model import BaseModel
from transformers import BertTokenizer
class AlproBase(BaseModel):
@classmethod
def init_tokenizer(cls):
return BertTokenizer.from_pretrained("bert-base-uncased")
def load_from_pretrained(self, url_or_filename, num_frames, num_patches):
if is_url(url_or_filename):
cached_file = download_cached_file(
url_or_filename, check_hash=False, progress=True
)
checkpoint = torch.load(cached_file, map_location="cpu")
elif os.path.isfile(url_or_filename):
checkpoint = torch.load(url_or_filename, map_location="cpu")
else:
raise RuntimeError("checkpoint url or path is invalid")
if "model" in checkpoint:
state_dict = checkpoint["model"]
else:
state_dict = checkpoint
for key in list(state_dict.keys()):
if "bert" in key:
new_key = key.replace("bert.", "")
state_dict[new_key] = state_dict[key]
del state_dict[key]
spatial_embed_key = "visual_encoder.model.pos_embed"
temporal_embed_key = "visual_encoder.model.time_embed"
## Resizing spatial embeddings in case they don't match
if num_patches + 1 != state_dict[spatial_embed_key].size(1):
state_dict[spatial_embed_key] = resize_spatial_embedding(
state_dict, spatial_embed_key, num_patches
)
else:
logging.info(
"The length of spatial position embedding matches. No need to resize."
)
## Resizing time embeddings in case they don't match
if temporal_embed_key in state_dict and num_frames != state_dict[
temporal_embed_key
].size(1):
state_dict[temporal_embed_key] = resize_temporal_embedding(
state_dict, temporal_embed_key, num_frames
)
else:
            logging.info(
                "No temporal position embedding found, or its length already matches. No need to resize."
            )
msg = self.load_state_dict(state_dict, strict=False)
logging.info("Missing keys {}".format(msg.missing_keys))
logging.info("load checkpoint from %s" % url_or_filename)
return msg
def resize_spatial_embedding(state_dict, key, num_patches):
logging.info(
f"Resizing spatial position embedding from {state_dict[key].size(1)} to {num_patches + 1}"
)
pos_embed = state_dict[key]
cls_pos_embed = pos_embed[0, 0, :].unsqueeze(0).unsqueeze(1)
other_pos_embed = pos_embed[0, 1:, :].unsqueeze(0).transpose(1, 2)
new_pos_embed = F.interpolate(other_pos_embed, size=(num_patches), mode="nearest")
new_pos_embed = new_pos_embed.transpose(1, 2)
new_pos_embed = torch.cat((cls_pos_embed, new_pos_embed), 1)
return new_pos_embed
def resize_temporal_embedding(state_dict, key, num_frames):
logging.info(
f"Resizing temporal position embedding from {state_dict[key].size(1)} to {num_frames}"
)
time_embed = state_dict[key].transpose(1, 2)
new_time_embed = F.interpolate(time_embed, size=(num_frames), mode="nearest")
    return new_time_embed.transpose(1, 2)

# ---- end of file: lavis/models/alpro_models/__init__.py ----
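# Illustrative sketch (not part of the original LAVIS source): both resize
# helpers above resample position embeddings with F.interpolate(..., mode="nearest").
# Nearest-neighbour resampling of a 1-D sequence in pure Python:
def _resize_nearest(values, new_len):
    old_len = len(values)
    # output index i maps to floor(i * old_len / new_len) in the input
    return [values[int(i * old_len / new_len)] for i in range(new_len)]

_stretched = _resize_nearest([10, 20, 30], 6)  # [10, 10, 20, 20, 30, 30]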
import logging
from functools import partial
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils
import torch.utils.checkpoint
from einops import rearrange
from fairscale.nn.checkpoint.checkpoint_activations import checkpoint_wrapper
from .helpers import load_pretrained, load_pretrained_imagenet, load_pretrained_kinetics
from .vit_utils import (
IMAGENET_DEFAULT_MEAN,
IMAGENET_DEFAULT_STD,
DropPath,
to_2tuple,
trunc_normal_,
)
def _cfg(url="", **kwargs):
return {
"url": url,
"num_classes": 1000,
"input_size": (3, 224, 224),
"pool_size": None,
"crop_pct": 0.9,
"interpolation": "bicubic",
"mean": IMAGENET_DEFAULT_MEAN,
"std": IMAGENET_DEFAULT_STD,
"first_conv": "patch_embed.proj",
"classifier": "head",
**kwargs,
}
default_cfgs = {
"vit_base_patch16_224": _cfg(
url="https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth",
mean=(0.5, 0.5, 0.5),
std=(0.5, 0.5, 0.5),
),
}
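# Illustrative sketch (not part of the original source): _cfg above builds a
# default config dict and lets keyword arguments override any field, because
# later entries in a dict literal win. The same pattern in miniature (the URL
# below is a hypothetical placeholder):
def _make_cfg(url="", **overrides):
    return {"url": url, "crop_pct": 0.9, "interpolation": "bicubic", **overrides}

_demo_cfg = _make_cfg(url="https://example.com/weights.pth", crop_pct=1.0)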
class Mlp(nn.Module):
def __init__(
self,
in_features,
hidden_features=None,
out_features=None,
act_layer=nn.GELU,
drop=0.0,
):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = nn.Linear(in_features, hidden_features)
self.act = act_layer()
self.fc2 = nn.Linear(hidden_features, out_features)
self.drop = nn.Dropout(drop)
def forward(self, x):
x = self.fc1(x)
x = self.act(x)
x = self.drop(x)
x = self.fc2(x)
x = self.drop(x)
return x
class Attention(nn.Module):
def __init__(
self,
dim,
num_heads=8,
qkv_bias=False,
qk_scale=None,
attn_drop=0.0,
proj_drop=0.0,
with_qkv=True,
):
super().__init__()
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim**-0.5
self.with_qkv = with_qkv
if self.with_qkv:
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
self.attn_drop = nn.Dropout(attn_drop)
def forward(self, x):
B, N, C = x.shape
if self.with_qkv:
qkv = (
self.qkv(x)
.reshape(B, N, 3, self.num_heads, C // self.num_heads)
.permute(2, 0, 3, 1, 4)
)
q, k, v = qkv[0], qkv[1], qkv[2]
else:
qkv = x.reshape(B, N, self.num_heads, C // self.num_heads).permute(
0, 2, 1, 3
)
q, k, v = qkv, qkv, qkv
attn = (q @ k.transpose(-2, -1)) * self.scale
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B, N, C)
if self.with_qkv:
x = self.proj(x)
x = self.proj_drop(x)
return x
class Block(nn.Module):
def __init__(
self,
dim,
num_heads,
layer_num,
mlp_ratio=4.0,
qkv_bias=False,
qk_scale=None,
drop=0.0,
attn_drop=0.0,
drop_path=0.1,
act_layer=nn.GELU,
norm_layer=nn.LayerNorm,
attention_type="divided_space_time",
use_grad_checkpointing=False,
):
super().__init__()
self.attention_type = attention_type
assert attention_type in [
"divided_space_time",
"space_only",
"joint_space_time",
]
self.norm1 = norm_layer(dim)
self.attn = Attention(
dim,
num_heads=num_heads,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
attn_drop=attn_drop,
proj_drop=drop,
)
# Temporal Attention Parameters
if self.attention_type == "divided_space_time":
self.temporal_norm1 = norm_layer(dim)
self.temporal_attn = Attention(
dim,
num_heads=num_heads,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
attn_drop=attn_drop,
proj_drop=drop,
)
self.temporal_fc = nn.Linear(dim, dim)
# drop path
self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
self.norm2 = norm_layer(dim)
mlp_hidden_dim = int(dim * mlp_ratio)
self.mlp = Mlp(
in_features=dim,
hidden_features=mlp_hidden_dim,
act_layer=act_layer,
drop=drop,
)
# [dxli]
self.layer_num = layer_num
self.use_grad_checkpointing = use_grad_checkpointing
if use_grad_checkpointing:
self.temporal_attn = checkpoint_wrapper(self.temporal_attn)
self.attn = checkpoint_wrapper(self.attn)
self.mlp = checkpoint_wrapper(self.mlp)
def forward(self, x, B, T, W):
num_spatial_tokens = (x.size(1) - 1) // T
H = num_spatial_tokens // W
if self.attention_type in ["space_only", "joint_space_time"]:
x = x + self.drop_path(self.attn(self.norm1(x)))
x = x + self.drop_path(self.mlp(self.norm2(x)))
return x
elif self.attention_type == "divided_space_time":
# Temporal
xt = x[:, 1:, :]
xt = rearrange(xt, "b (h w t) m -> (b h w) t m", b=B, h=H, w=W, t=T)
temporal_attn_out = self.temporal_attn(self.temporal_norm1(xt))
res_temporal = self.drop_path(temporal_attn_out)
res_temporal = rearrange(
res_temporal, "(b h w) t m -> b (h w t) m", b=B, h=H, w=W, t=T
)
res_temporal = self.temporal_fc(res_temporal)
xt = x[:, 1:, :] + res_temporal
# Spatial
init_cls_token = x[:, 0, :].unsqueeze(1)
cls_token = init_cls_token.repeat(1, T, 1)
cls_token = rearrange(cls_token, "b t m -> (b t) m", b=B, t=T).unsqueeze(1)
xs = xt
xs = rearrange(xs, "b (h w t) m -> (b t) (h w) m", b=B, h=H, w=W, t=T)
xs = torch.cat((cls_token, xs), 1)
spatial_attn_out = self.attn(self.norm1(xs))
res_spatial = self.drop_path(spatial_attn_out)
# Taking care of CLS token
cls_token = res_spatial[:, 0, :]
cls_token = rearrange(cls_token, "(b t) m -> b t m", b=B, t=T)
# averaging for every frame
cls_token = torch.mean(cls_token, 1, True)
res_spatial = res_spatial[:, 1:, :]
res_spatial = rearrange(
res_spatial, "(b t) (h w) m -> b (h w t) m", b=B, h=H, w=W, t=T
)
res = res_spatial
x = xt
# Mlp
x = torch.cat((init_cls_token, x), 1) + torch.cat((cls_token, res), 1)
x_res = x
x = self.norm2(x)
# x = x + self.drop_path(self.mlp(self.norm2(x)))
# MLP
mlp_out = self.mlp(x)
x = x_res + self.drop_path(mlp_out)
return x
class PatchEmbed(nn.Module):
"""Image to Patch Embedding"""
def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
super().__init__()
img_size = to_2tuple(img_size)
patch_size = to_2tuple(patch_size)
num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
self.img_size = img_size
self.patch_size = patch_size
self.num_patches = num_patches
self.proj = nn.Conv2d(
in_chans, embed_dim, kernel_size=patch_size, stride=patch_size
)
def forward(self, x):
B, C, T, H, W = x.shape
x = rearrange(x, "b c t h w -> (b t) c h w")
x = self.proj(x)
W = x.size(-1)
x = x.flatten(2).transpose(1, 2)
return x, T, W
class VisionTransformer(nn.Module):
"""Vision Transformere"""
def __init__(
self,
img_size=224,
patch_size=16,
in_chans=3,
num_classes=1000,
embed_dim=768,
depth=12,
num_heads=12,
mlp_ratio=4.0,
qkv_bias=False,
qk_scale=None,
drop_rate=0.0,
attn_drop_rate=0.0,
drop_path_rate=0.1,
hybrid_backbone=None,
norm_layer=nn.LayerNorm,
num_frames=8,
attention_type="divided_space_time",
dropout=0.0,
use_grad_checkpointing=False,
ckpt_layer=0,
):
super().__init__()
self.attention_type = attention_type
self.depth = depth
self.dropout = nn.Dropout(dropout)
self.num_classes = num_classes
# num_features for consistency with other models
self.num_features = self.embed_dim = embed_dim
self.patch_embed = PatchEmbed(
img_size=img_size,
patch_size=patch_size,
in_chans=in_chans,
embed_dim=embed_dim,
)
num_patches = self.patch_embed.num_patches
# Positional Embeddings
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
self.pos_drop = nn.Dropout(p=drop_rate)
if self.attention_type != "space_only":
self.time_embed = nn.Parameter(torch.zeros(1, num_frames, embed_dim))
self.time_drop = nn.Dropout(p=drop_rate)
# Attention Blocks
dpr = [
x.item() for x in torch.linspace(0, drop_path_rate, self.depth)
] # stochastic depth decay rule
self.blocks = nn.ModuleList(
[
Block(
layer_num=i,
use_grad_checkpointing=(
use_grad_checkpointing and i >= self.depth - ckpt_layer
),
dim=embed_dim,
num_heads=num_heads,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
drop=drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
norm_layer=norm_layer,
attention_type=self.attention_type,
)
for i in range(self.depth)
]
)
self.norm = norm_layer(embed_dim)
# Classifier head
self.head = (
nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity()
)
trunc_normal_(self.pos_embed, std=0.02)
trunc_normal_(self.cls_token, std=0.02)
self.apply(self._init_weights)
# initialization of temporal attention weights
if self.attention_type == "divided_space_time":
i = 0
for m in self.blocks.modules():
m_str = str(m)
if "Block" in m_str:
if i > 0:
nn.init.constant_(m.temporal_fc.weight, 0)
nn.init.constant_(m.temporal_fc.bias, 0)
i += 1
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=0.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
@torch.jit.ignore
def no_weight_decay(self):
return {"pos_embed", "cls_token", "time_embed"}
def get_classifier(self):
return self.head
def reset_classifier(self, num_classes, global_pool=""):
self.num_classes = num_classes
self.head = (
nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
)
def remove_classifier(self):
self.num_classes = 0
self.head = None
def forward_features(self, x):
B = x.shape[0]
x, T, W = self.patch_embed(x)
cls_tokens = self.cls_token.expand(x.size(0), -1, -1)
x = torch.cat((cls_tokens, x), dim=1)
# resizing the positional embeddings in case they don't match the input at inference
if x.size(1) != self.pos_embed.size(1):
pos_embed = self.pos_embed
cls_pos_embed = pos_embed[0, 0, :].unsqueeze(0).unsqueeze(1)
other_pos_embed = pos_embed[0, 1:, :].unsqueeze(0).transpose(1, 2)
P = int(other_pos_embed.size(2) ** 0.5)
H = x.size(1) // W
other_pos_embed = other_pos_embed.reshape(1, x.size(2), P, P)
new_pos_embed = F.interpolate(other_pos_embed, size=(H, W), mode="nearest")
new_pos_embed = new_pos_embed.flatten(2)
new_pos_embed = new_pos_embed.transpose(1, 2)
new_pos_embed = torch.cat((cls_pos_embed, new_pos_embed), 1)
x = x + new_pos_embed
else:
x = x + self.pos_embed
x = self.pos_drop(x)
# Time Embeddings
if self.attention_type != "space_only":
cls_tokens = x[:B, 0, :].unsqueeze(1)
x = x[:, 1:]
x = rearrange(x, "(b t) n m -> (b n) t m", b=B, t=T)
# Resizing time embeddings in case they don't match
if T != self.time_embed.size(1):
time_embed = self.time_embed.transpose(1, 2)
new_time_embed = F.interpolate(time_embed, size=(T), mode="nearest")
new_time_embed = new_time_embed.transpose(1, 2)
x = x + new_time_embed
else:
x = x + self.time_embed
x = self.time_drop(x)
x = rearrange(x, "(b n) t m -> b (n t) m", b=B, t=T)
x = torch.cat((cls_tokens, x), dim=1)
# Attention blocks
for blk in self.blocks:
x = blk(x, B, T, W)
# Predictions for space-only baseline
if self.attention_type == "space_only":
x = rearrange(x, "(b t) n m -> b t n m", b=B, t=T)
x = torch.mean(x, 1) # averaging predictions for every frame
x = self.norm(x)
return x
def forward(self, x):
x = self.forward_features(x)
x = self.head(x)
return x
def _conv_filter(state_dict, patch_size=16):
"""convert patch embedding weight from manual patchify + linear proj to conv"""
out_dict = {}
for k, v in state_dict.items():
if "patch_embed.proj.weight" in k:
if v.shape[-1] != patch_size:
patch_size = v.shape[-1]
v = v.reshape((v.shape[0], 3, patch_size, patch_size))
out_dict[k] = v
return out_dict
class vit_base_patch16_224(nn.Module):
def __init__(self, cfg, **kwargs):
super(vit_base_patch16_224, self).__init__()
self.pretrained = True
patch_size = 16
self.model = VisionTransformer(
img_size=cfg.DATA.TRAIN_CROP_SIZE,
num_classes=cfg.MODEL.NUM_CLASSES,
patch_size=patch_size,
embed_dim=768,
depth=12,
num_heads=12,
mlp_ratio=4,
qkv_bias=True,
norm_layer=partial(nn.LayerNorm, eps=1e-6),
drop_rate=0.0,
attn_drop_rate=0.0,
drop_path_rate=0.1,
num_frames=cfg.DATA.NUM_FRAMES,
attention_type=cfg.TIMESFORMER.ATTENTION_TYPE,
**kwargs,
)
self.attention_type = cfg.TIMESFORMER.ATTENTION_TYPE
self.model.default_cfg = default_cfgs["vit_base_patch16_224"]
self.num_patches = (cfg.DATA.TRAIN_CROP_SIZE // patch_size) * (
cfg.DATA.TRAIN_CROP_SIZE // patch_size
)
pretrained_model = cfg.TIMESFORMER.PRETRAINED_MODEL
if self.pretrained:
load_pretrained(
self.model,
num_classes=self.model.num_classes,
in_chans=kwargs.get("in_chans", 3),
filter_fn=_conv_filter,
img_size=cfg.DATA.TRAIN_CROP_SIZE,
num_patches=self.num_patches,
attention_type=self.attention_type,
pretrained_model=pretrained_model,
)
def forward(self, x):
x = self.model(x)
return x
class TimeSformer(nn.Module):
def __init__(
self,
image_size=224,
patch_size=16,
n_frms=8,
attn_drop_rate=0.0,
drop_path_rate=0.1,
drop_rate=0,
use_grad_ckpt=False,
ckpt_layer=0,
remove_classifier=True,
**kwargs,
):
super(TimeSformer, self).__init__()
self.img_size = image_size
self.patch_size = patch_size
self.num_frames = n_frms
self.attn_drop_rate = attn_drop_rate
self.drop_path_rate = drop_path_rate
self.drop_rate = drop_rate
self.use_grad_ckpt = use_grad_ckpt
self.ckpt_layer = ckpt_layer
self.attention_type = "divided_space_time"
logging.info(
f"Initializing TimeSformer with img_size={self.img_size}, patch_size={self.patch_size}, num_frames={self.num_frames}"
)
# will be ignored when loading official pretrained ckpt
self.num_classes = 400
self.model = VisionTransformer(
img_size=self.img_size,
num_classes=self.num_classes,
patch_size=self.patch_size,
embed_dim=768,
depth=12,
num_heads=12,
mlp_ratio=4,
qkv_bias=True,
norm_layer=partial(nn.LayerNorm, eps=1e-6),
drop_rate=self.drop_rate,
attn_drop_rate=self.attn_drop_rate,
drop_path_rate=self.drop_path_rate,
num_frames=self.num_frames,
attention_type=self.attention_type,
use_grad_checkpointing=self.use_grad_ckpt,
ckpt_layer=self.ckpt_layer,
**kwargs,
)
if remove_classifier:
self.model.remove_classifier()
self.model.default_cfg = default_cfgs[
"vit_base_patch" + str(self.patch_size) + "_224"
]
self.num_patches = (self.img_size // self.patch_size) * (
self.img_size // self.patch_size
)
def forward(self, x):
x = self.model(x)
return x
def forward_features(self, x):
# b, c, t, h, w = x.shape
x = self.model.forward_features(x)
## apply pooling
W = H = self.img_size // self.patch_size
T = self.num_frames
cls_tokens = x[:, 0, :].unsqueeze(1)
other_tokens = x[:, 1:, :]
x = rearrange(other_tokens, "b (h w t) m -> b t (h w) m", h=H, w=W, t=T)
x = torch.mean(x, dim=1)
x = torch.cat((cls_tokens, x), dim=1)
return x
def load_state_dict(self, pretrained_ckpt_path):
logging.info(
"Loading TimeSformer checkpoints from {}".format(pretrained_ckpt_path)
)
if pretrained_ckpt_path == "vit_base_patch16_224":
load_ckpt_func = load_pretrained_imagenet
else:
load_ckpt_func = load_pretrained_kinetics
load_ckpt_func(
self.model,
num_classes=self.model.num_classes,
in_chans=3,
filter_fn=_conv_filter,
img_size=self.img_size,
num_frames=self.num_frames,
num_patches=self.num_patches,
attention_type=self.attention_type,
pretrained_model=pretrained_ckpt_path,
)

# /salesforce-lavis-1.0.2.tar.gz/salesforce-lavis-1.0.2/lavis/models/timesformer/vit.py
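In `Block.forward` above, divided space-time attention reshapes the flattened `(batch, h*w*t, dim)` token grid so that temporal attention runs over one sequence of `T` tokens per spatial location, and spatial attention runs over one sequence of `H*W` tokens per frame. A minimal einops-free sketch of that bookkeeping, using plain `view`/`permute` with illustrative sizes:

```python
import torch

B, H, W, T, M = 2, 4, 4, 3, 8
x = torch.randn(B, H * W * T, M)  # patch tokens, cls token excluded

# temporal groups: "b (h w t) m -> (b h w) t m" -- T tokens per (b, h, w)
xt = x.view(B, H, W, T, M).reshape(B * H * W, T, M)

# spatial groups: "b (h w t) m -> (b t) (h w) m" -- H*W tokens per (b, t)
xs = x.view(B, H, W, T, M).permute(0, 3, 1, 2, 4).reshape(B * T, H * W, M)

# both are pure reshapes, so the original layout can be recovered exactly
x_back = xt.view(B, H, W, T, M).reshape(B, H * W * T, M)
```

Running attention on `xt` and `xs` instead of the full sequence is what makes the "divided" variant cheaper than joint space-time attention: two attentions over short sequences rather than one over `H*W*T` tokens.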
import hashlib
import os
import urllib.request
import warnings
from tqdm import tqdm
_RN50 = dict(
openai="https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-yfcc15m-455df137.pt",
cc12m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-cc12m-f000538c.pt",
)
_RN50_quickgelu = dict(
openai="https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-yfcc15m-455df137.pt",
cc12m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-cc12m-f000538c.pt",
)
_RN101 = dict(
openai="https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt",
yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn101-quickgelu-yfcc15m-3e04b30e.pt",
)
_RN101_quickgelu = dict(
openai="https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt",
yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn101-quickgelu-yfcc15m-3e04b30e.pt",
)
_RN50x4 = dict(
openai="https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt",
)
_RN50x16 = dict(
openai="https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt",
)
_RN50x64 = dict(
openai="https://openaipublic.azureedge.net/clip/models/be1cfb55d75a9666199fb2206c106743da0f6468c9d327f3e0d0a543a9919d9c/RN50x64.pt",
)
_VITB32 = dict(
openai="https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
laion400m_e31="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e31-d867053b.pt",
laion400m_e32="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e32-46683a32.pt",
laion400m_avg="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_avg-8a00ab3c.pt",
)
_VITB32_quickgelu = dict(
openai="https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
laion400m_e31="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e31-d867053b.pt",
laion400m_e32="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e32-46683a32.pt",
laion400m_avg="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_avg-8a00ab3c.pt",
)
_VITB16 = dict(
openai="https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt",
)
_VITL14 = dict(
openai="https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt",
)
_VITL14_336 = dict(
openai="https://openaipublic.azureedge.net/clip/models/3035c92b350959924f9f00213499208652fc7ea050643e8b385c2dac08641f02/ViT-L-14-336px.pt"
)
_PRETRAINED = {
"RN50": _RN50,
"RN50-quickgelu": _RN50_quickgelu,
"RN101": _RN101,
"RN101-quickgelu": _RN101_quickgelu,
"RN50x4": _RN50x4,
"RN50x16": _RN50x16,
"ViT-B-32": _VITB32,
"ViT-B-32-quickgelu": _VITB32_quickgelu,
"ViT-B-16": _VITB16,
"ViT-L-14": _VITL14,
"ViT-L-14-336": _VITL14_336,
}
def list_pretrained(as_str: bool = False):
"""returns list of pretrained models
Returns a tuple (model_name, pretrain_tag) by default or 'name:tag' if as_str == True
"""
return [
":".join([k, t]) if as_str else (k, t)
for k in _PRETRAINED.keys()
for t in _PRETRAINED[k].keys()
]
def list_pretrained_tag_models(tag: str):
"""return all models having the specified pretrain tag"""
models = []
for k in _PRETRAINED.keys():
if tag in _PRETRAINED[k]:
models.append(k)
return models
def list_pretrained_model_tags(model: str):
"""return all pretrain tags for the specified model architecture"""
tags = []
if model in _PRETRAINED:
tags.extend(_PRETRAINED[model].keys())
return tags
def get_pretrained_url(model: str, tag: str):
if model not in _PRETRAINED:
return ""
model_pretrained = _PRETRAINED[model]
tag = tag.lower()
if tag not in model_pretrained:
return ""
return model_pretrained[tag]
def download_pretrained(url: str, root: str = os.path.expanduser("~/.cache/clip")):
os.makedirs(root, exist_ok=True)
filename = os.path.basename(url)
if "openaipublic" in url:
expected_sha256 = url.split("/")[-2]
else:
expected_sha256 = ""
download_target = os.path.join(root, filename)
if os.path.exists(download_target) and not os.path.isfile(download_target):
raise RuntimeError(f"{download_target} exists and is not a regular file")
if os.path.isfile(download_target):
if expected_sha256:
if (
hashlib.sha256(open(download_target, "rb").read()).hexdigest()
== expected_sha256
):
return download_target
else:
warnings.warn(
f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file"
)
else:
return download_target
with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
with tqdm(
total=int(source.info().get("Content-Length")),
ncols=80,
unit="iB",
unit_scale=True,
) as loop:
while True:
buffer = source.read(8192)
if not buffer:
break
output.write(buffer)
loop.update(len(buffer))
if (
expected_sha256
and hashlib.sha256(open(download_target, "rb").read()).hexdigest()
!= expected_sha256
):
raise RuntimeError(
f"Model has been downloaded but the SHA256 checksum does not not match"
)
return download_target

# /salesforce-lavis-1.0.2.tar.gz/salesforce-lavis-1.0.2/lavis/models/clip_models/pretrained.py
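`download_pretrained` above verifies OpenAI checkpoints by comparing a SHA-256 digest embedded in the URL path against the bytes on disk, both before re-downloading and after the download completes. The comparison itself reduces to the following sketch (the payload here is made-up data, not a real checkpoint):

```python
import hashlib

payload = b"example checkpoint bytes"
expected_sha256 = hashlib.sha256(payload).hexdigest()

def checksum_ok(data: bytes, expected: str) -> bool:
    # Same check as in download_pretrained: hex digest equality.
    return hashlib.sha256(data).hexdigest() == expected

print(checksum_ok(payload, expected_sha256))            # True
print(checksum_ok(b"tampered bytes", expected_sha256))  # False
```

Deriving the expected digest from the URL (`url.split("/")[-2]`) works because OpenAI publishes checkpoints under paths of the form `.../<sha256>/<filename>.pt`.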
"""
Copied from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI.
"""
import gzip
import html
import os
from functools import lru_cache
from typing import Union, List
import ftfy
import regex as re
import torch
@lru_cache()
def default_bpe():
return os.path.join(
os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz"
)
@lru_cache()
def bytes_to_unicode():
"""
Returns list of utf-8 byte and a corresponding list of unicode strings.
The reversible bpe codes work on unicode strings.
This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
This is a significant percentage of your normal, say, 32K bpe vocab.
To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
This also avoids mapping to whitespace/control characters that the bpe code barfs on.
"""
bs = (
list(range(ord("!"), ord("~") + 1))
+ list(range(ord("¡"), ord("¬") + 1))
+ list(range(ord("®"), ord("ÿ") + 1))
)
cs = bs[:]
n = 0
for b in range(2**8):
if b not in bs:
bs.append(b)
cs.append(2**8 + n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
"""Return set of symbol pairs in a word.
Word is represented as tuple of symbols (symbols being variable-length strings).
"""
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
def basic_clean(text):
text = ftfy.fix_text(text)
text = html.unescape(html.unescape(text))
return text.strip()
def whitespace_clean(text):
text = re.sub(r"\s+", " ", text)
text = text.strip()
return text
class SimpleTokenizer(object):
def __init__(self, bpe_path: str = default_bpe(), special_tokens=None):
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
merges = gzip.open(bpe_path).read().decode("utf-8").split("\n")
merges = merges[1 : 49152 - 256 - 2 + 1]
merges = [tuple(merge.split()) for merge in merges]
vocab = list(bytes_to_unicode().values())
vocab = vocab + [v + "</w>" for v in vocab]
for merge in merges:
vocab.append("".join(merge))
if not special_tokens:
special_tokens = ["<start_of_text>", "<end_of_text>"]
else:
special_tokens = ["<start_of_text>", "<end_of_text>"] + special_tokens
vocab.extend(special_tokens)
self.encoder = dict(zip(vocab, range(len(vocab))))
self.decoder = {v: k for k, v in self.encoder.items()}
self.bpe_ranks = dict(zip(merges, range(len(merges))))
self.cache = {t: t for t in special_tokens}
special = "|".join(special_tokens)
self.pat = re.compile(
special + r"""|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
re.IGNORECASE,
)
self.vocab_size = len(self.encoder)
self.all_special_ids = [self.encoder[t] for t in special_tokens]
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token[:-1]) + (token[-1] + "</w>",)
pairs = get_pairs(word)
if not pairs:
return token + "</w>"
while True:
bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except ValueError:
new_word.extend(word[i:])
break
if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
new_word.append(first + second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = " ".join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
text = whitespace_clean(basic_clean(text)).lower()
for token in re.findall(self.pat, text):
token = "".join(self.byte_encoder[b] for b in token.encode("utf-8"))
bpe_tokens.extend(
self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ")
)
return bpe_tokens
def decode(self, tokens):
text = "".join([self.decoder[token] for token in tokens])
text = (
bytearray([self.byte_decoder[c] for c in text])
.decode("utf-8", errors="replace")
.replace("</w>", " ")
)
return text
_tokenizer = SimpleTokenizer()
def tokenize(
texts: Union[str, List[str]], context_length: int = 77
) -> torch.LongTensor:
"""
Returns the tokenized representation of given input string(s)
Parameters
----------
texts : Union[str, List[str]]
An input string or a list of input strings to tokenize
context_length : int
The context length to use; all CLIP models use 77 as the context length
Returns
-------
A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]
"""
if isinstance(texts, str):
texts = [texts]
sot_token = _tokenizer.encoder["<start_of_text>"]
eot_token = _tokenizer.encoder["<end_of_text>"]
all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
tokens = tokens[:context_length] # Truncate
result[i, : len(tokens)] = torch.tensor(tokens)
return result

# /salesforce-lavis-1.0.2.tar.gz/salesforce-lavis-1.0.2/lavis/models/clip_models/tokenizer.py
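The BPE loop in `SimpleTokenizer.bpe` above repeatedly merges the highest-ranked adjacent symbol pair returned by `get_pairs` until no mergeable pair remains. The pair-extraction step on its own, with the same logic as the file's `get_pairs`:

```python
def get_pairs(word):
    """Return the set of adjacent symbol pairs in a symbol tuple."""
    pairs = set()
    prev_char = word[0]
    for char in word[1:]:
        pairs.add((prev_char, char))
        prev_char = char
    return pairs

# A token is seeded as its characters, with "</w>" marking the word end,
# mirroring `tuple(token[:-1]) + (token[-1] + "</w>",)` in bpe().
word = ("l", "o", "w", "e", "r</w>")
print(get_pairs(word))
# e.g. {('l', 'o'), ('o', 'w'), ('w', 'e'), ('e', 'r</w>')}
```

Each merge fuses one of these pairs into a single symbol, so the candidate pair set shrinks toward the final subword segmentation.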
import logging
import torch
import torch.distributed.nn
from torch import distributed as dist, nn as nn
from torch.nn import functional as F
try:
import horovod.torch as hvd
except ImportError:
hvd = None
def gather_features(
image_features,
text_features,
local_loss=False,
gather_with_grad=False,
rank=0,
world_size=1,
use_horovod=False,
):
if use_horovod:
assert hvd is not None, "Please install horovod"
if gather_with_grad:
all_image_features = hvd.allgather(image_features)
all_text_features = hvd.allgather(text_features)
else:
with torch.no_grad():
all_image_features = hvd.allgather(image_features)
all_text_features = hvd.allgather(text_features)
if not local_loss:
# ensure grads for local rank when all_* features don't have a gradient
gathered_image_features = list(
all_image_features.chunk(world_size, dim=0)
)
gathered_text_features = list(
all_text_features.chunk(world_size, dim=0)
)
gathered_image_features[rank] = image_features
gathered_text_features[rank] = text_features
all_image_features = torch.cat(gathered_image_features, dim=0)
all_text_features = torch.cat(gathered_text_features, dim=0)
else:
# We gather tensors from all gpus
if gather_with_grad:
all_image_features = torch.cat(
torch.distributed.nn.all_gather(image_features), dim=0
)
all_text_features = torch.cat(
torch.distributed.nn.all_gather(text_features), dim=0
)
else:
gathered_image_features = [
torch.zeros_like(image_features) for _ in range(world_size)
]
gathered_text_features = [
torch.zeros_like(text_features) for _ in range(world_size)
]
dist.all_gather(gathered_image_features, image_features)
dist.all_gather(gathered_text_features, text_features)
if not local_loss:
# ensure grads for local rank when all_* features don't have a gradient
gathered_image_features[rank] = image_features
gathered_text_features[rank] = text_features
all_image_features = torch.cat(gathered_image_features, dim=0)
all_text_features = torch.cat(gathered_text_features, dim=0)
return all_image_features, all_text_features
class ClipLoss(nn.Module):
def __init__(
self,
local_loss=False,
gather_with_grad=False,
cache_labels=False,
rank=0,
world_size=1,
use_horovod=False,
):
super().__init__()
self.local_loss = local_loss
self.gather_with_grad = gather_with_grad
self.cache_labels = cache_labels
self.rank = rank
self.world_size = world_size
self.use_horovod = use_horovod
# cache state
self.prev_num_logits = 0
self.labels = {}
def forward(self, image_features, text_features, logit_scale):
device = image_features.device
if self.world_size > 1:
all_image_features, all_text_features = gather_features(
image_features,
text_features,
self.local_loss,
self.gather_with_grad,
self.rank,
self.world_size,
self.use_horovod,
)
if self.local_loss:
logits_per_image = logit_scale * image_features @ all_text_features.T
logits_per_text = logit_scale * text_features @ all_image_features.T
else:
logits_per_image = (
logit_scale * all_image_features @ all_text_features.T
)
logits_per_text = logits_per_image.T
else:
logits_per_image = logit_scale * image_features @ text_features.T
logits_per_text = logit_scale * text_features @ image_features.T
# calculate ground-truth labels and cache if enabled
num_logits = logits_per_image.shape[0]
if self.prev_num_logits != num_logits or device not in self.labels:
labels = torch.arange(num_logits, device=device, dtype=torch.long)
if self.world_size > 1 and self.local_loss:
labels = labels + num_logits * self.rank
if self.cache_labels:
self.labels[device] = labels
self.prev_num_logits = num_logits
else:
labels = self.labels[device]
total_loss = (
F.cross_entropy(logits_per_image, labels)
+ F.cross_entropy(logits_per_text, labels)
) / 2
return total_loss

# /salesforce-lavis-1.0.2.tar.gz/salesforce-lavis-1.0.2/lavis/models/clip_models/loss.py
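On a single process (`world_size == 1`), `ClipLoss` above reduces to symmetric cross-entropy over the scaled image-text similarity matrix, with the diagonal as ground truth. A minimal sketch of that branch with small random features:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d = 4, 16
image_features = F.normalize(torch.randn(n, d), dim=-1)
text_features = F.normalize(torch.randn(n, d), dim=-1)
logit_scale = torch.tensor(100.0)  # exp of the learned temperature

logits_per_image = logit_scale * image_features @ text_features.T
logits_per_text = logits_per_image.T
labels = torch.arange(n)  # the i-th image matches the i-th text

total_loss = (
    F.cross_entropy(logits_per_image, labels)
    + F.cross_entropy(logits_per_text, labels)
) / 2
```

The distributed branches only change where the negatives come from: features are gathered across ranks so each positive pair is contrasted against every other sample in the global batch.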
import torch
import torch.nn as nn
from lavis.common.registry import registry
from lavis.models.base_model import BaseModel
from lavis.common.utils import get_abs_path
from transformers import T5Config, T5Tokenizer, T5ForConditionalGeneration
@registry.register_model("pnp_unifiedqav2_fid")
class PNPUnifiedQAv2FiD(T5ForConditionalGeneration, BaseModel):
PRETRAINED_MODEL_CONFIG_DICT = {}
def __init__(self, config, model_path):
super().__init__(config)
self.tokenizer = T5Tokenizer.from_pretrained(model_path)
def forward(self, input_ids=None, attention_mask=None, **kwargs):
if input_ids is not None:
if input_ids.dim() == 3:
self.encoder.num_contexts = input_ids.size(1)
input_ids = input_ids.view(input_ids.size(0), -1)
if attention_mask is not None:
attention_mask = attention_mask.view(attention_mask.size(0), -1)
return super().forward(
input_ids=input_ids,
attention_mask=attention_mask,
**kwargs
)
def generate(self, input_ids, attention_mask, num_beams=1, min_length=0, max_length=20):
self.encoder.num_contexts = input_ids.size(1)
return super().generate(
input_ids=input_ids.view(input_ids.size(0), -1),
attention_mask=attention_mask.view(attention_mask.size(0), -1),
num_beams=num_beams,
min_length=min_length,
max_length=max_length
)
def load_unifiedqa(self, state_dict):
self.load_state_dict(state_dict)
self.encoder = T5EncoderWrapper(self.encoder)
@classmethod
def from_config(cls, cfg):
model_path = cfg.get('pretrained')
t5_config_path = get_abs_path(cfg.get("t5_config_path"))
t5_config = T5Config.from_json_file(t5_config_path)
model = cls(t5_config, model_path)
model.load_unifiedqa(T5ForConditionalGeneration.from_pretrained(model_path).state_dict())
return model
class T5EncoderWrapper(torch.nn.Module):
def __init__(self, encoder):
super().__init__()
self.encoder = encoder
self.block = self.encoder.block
self.parallelize = self.encoder.parallelize
self.main_input_name = encoder.main_input_name
def forward(self, input_ids=None, attention_mask=None, **kwargs):
bsz, total_length = input_ids.shape
context_length = total_length // self.num_contexts
input_ids = input_ids.view(bsz*self.num_contexts, context_length)
attention_mask = attention_mask.view(bsz*self.num_contexts, context_length)
outputs = self.encoder(input_ids, attention_mask, **kwargs)
outputs = (outputs[0].view(bsz, self.num_contexts*context_length, -1), ) + outputs[1:]
return outputs

# /salesforce-lavis-1.0.2.tar.gz/salesforce-lavis-1.0.2/lavis/models/pnp_vqa_models/pnp_unifiedqav2_fid.py
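The Fusion-in-Decoder wrapper above flattens the per-context dimension before calling the T5 encoder, splits it back inside `T5EncoderWrapper.forward` so each context is encoded independently, then re-merges the outputs so the decoder cross-attends over all contexts at once. The shape bookkeeping in isolation (sizes are illustrative):

```python
import torch

bsz, num_contexts, context_length, hidden = 2, 3, 5, 8

# (bsz, num_contexts, context_length) -> (bsz, num_contexts * context_length)
input_ids = torch.randint(0, 100, (bsz, num_contexts, context_length))
flat_ids = input_ids.view(bsz, -1)

# inside the wrapper: split back so the encoder sees one context per row
enc_in = flat_ids.view(bsz * num_contexts, context_length)

# encoder outputs are re-merged so decoder cross-attention spans all contexts
enc_out = torch.randn(bsz * num_contexts, context_length, hidden)
merged = enc_out.view(bsz, num_contexts * context_length, hidden)
```

This is why the wrapper must remember `num_contexts` as an attribute: `generate` only hands the flattened ids to the encoder, so the split factor has to be stashed beforehand.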
import torch
import torch.nn as nn
from itertools import chain
from lavis.common.registry import registry
from lavis.models.base_model import BaseModel
from torch.nn import CrossEntropyLoss, MSELoss
from transformers import T5ForConditionalGeneration
from lavis.models.pnp_vqa_models import prepare_qa_input
from lavis.models.blip_models.blip_image_text_matching import compute_gradcam
from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions
@registry.register_model("pnp_vqa")
class PNPVQA(BaseModel):
"""
PNPVQA model consists of three submodels for zero-shot VQA:
1. Image-question matching model
2. Image captioning model
3. Question answering model
Supported model types:
- base: BLIPITM, BLIPCaption, PNPUnifiedQAv2FiD (t5-base)
- large: BLIPITM, BLIPCaption, PNPUnifiedQAv2FiD (t5-large)
- 3b: BLIPITM, BLIPCaption, PNPUnifiedQAv2FiD (t5-3b)
Usage:
>>> from lavis.models import load_model
>>> model = load_model("pnp_vqa", "base", is_eval=True)
>>> model = load_model("pnp_vqa", "large", is_eval=True)
>>> model = load_model("pnp_vqa", "3b", is_eval=True)
"""
PRETRAINED_MODEL_CONFIG_DICT = {"base": "configs/models/pnp-vqa/pnp_vqa_base.yaml",
"large": "configs/models/pnp-vqa/pnp_vqa_large.yaml",
"3b": "configs/models/pnp-vqa/pnp_vqa_3b.yaml",
}
def __init__(self, image_question_matching_model, image_captioning_model,
question_answering_model, offload_model=False):
super().__init__()
self.image_question_matching_model = image_question_matching_model
self.image_captioning_model = image_captioning_model
self.question_answering_model = question_answering_model
self.offload_model = offload_model
def forward_itm(self, samples, block_num=7):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W)
- text_input (list): A list of strings of length batch_size
block_num (int): The index of cross-attention block for gradcam computation.
Returns:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W)
- text_input (list): A list of strings of length batch_size
- gradcams (torch.Tensor): A tensor of shape (batch_size, H*W)
"""
image = samples['image']
question = [text.strip('?') for text in samples['text_input']]
tokenized_text = self.image_question_matching_model.tokenizer(question, padding='longest', truncation=True,
return_tensors="pt").to(self.image_question_matching_model.device)
with torch.set_grad_enabled(True):
gradcams, _ = compute_gradcam(model=self.image_question_matching_model,
visual_input=image,
text_input=question,
tokenized_text=tokenized_text,
block_num=block_num)
gradcams = [gradcam_[1] for gradcam_ in gradcams]
samples['gradcams'] = torch.stack(gradcams).reshape(samples['image'].size(0), -1)
return samples
def forward_cap(
self,
samples,
cap_max_length=20,
cap_min_length=0,
top_p=1,
top_k=50,
repetition_penalty=1.0,
num_captions=100,
num_patches=20,
):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W)
- text_input (list): A list of strings of length batch_size
- gradcams (torch.Tensor): A tensor of shape (batch_size, H*W)
cap_max_length (int): The maximum length of the caption to be generated.
cap_min_length (int): The minimum length of the caption to be generated.
top_p (float): The cumulative probability for nucleus sampling.
            top_k (int): The number of highest-probability tokens kept for top-k sampling.
repetition_penalty (float): The parameter for repetition penalty. 1.0 means no penalty.
num_captions (int): Number of captions generated for each image.
num_patches (int): Number of patches sampled for each image.
Returns:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W)
- text_input (list): A list of strings of length batch_size
- gradcams (torch.Tensor): A tensor of shape (batch_size, H*W)
- captions (nested list): A nested list of strings of total length batch_size * num_captions
"""
encoder_out = self.image_captioning_model.forward_encoder(samples)
captions = [[] for _ in range(encoder_out.size(0))]
min_num_captions = 0
while min_num_captions < num_captions:
encoder_out_samples = []
for i in range(num_captions):
patch_id = torch.multinomial(samples['gradcams'].to(self.image_captioning_model.device),
num_patches).reshape(encoder_out.size(0), -1) + 1
patch_id = patch_id.sort(dim=1).values.unsqueeze(-1).expand(-1, -1, encoder_out.size(2))
encoder_out_sample = torch.gather(encoder_out, 1, patch_id)
encoder_out_samples.append(encoder_out_sample)
stacked = torch.stack(encoder_out_samples, dim=1)
image_embeds = torch.flatten(stacked, start_dim=0, end_dim=1) #(bsz*num_seq, num_patch, dim)
image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(self.image_captioning_model.device)
model_kwargs = {
"encoder_hidden_states": image_embeds,
"encoder_attention_mask": image_atts,
}
prompt = [self.image_captioning_model.prompt] * image_embeds.size(0)
prompt = self.image_captioning_model.tokenizer(prompt,
return_tensors="pt").to(self.image_captioning_model.device)
prompt.input_ids[:, 0] = self.image_captioning_model.tokenizer.bos_token_id
prompt.input_ids = prompt.input_ids[:, :-1]
decoder_out = self.image_captioning_model.text_decoder.generate(
input_ids=prompt.input_ids,
max_length=cap_max_length,
min_length=cap_min_length,
do_sample=True,
top_p=top_p,
top_k=top_k,
num_return_sequences=1,
eos_token_id=self.image_captioning_model.tokenizer.sep_token_id,
pad_token_id=self.image_captioning_model.tokenizer.pad_token_id,
repetition_penalty=repetition_penalty,
**model_kwargs)
outputs = self.image_captioning_model.tokenizer.batch_decode(decoder_out, skip_special_tokens=True)
for counter, output in enumerate(outputs):
ind = counter//num_captions
if len(captions[ind]) < num_captions:
caption = output[len(self.image_captioning_model.prompt):]
overlap_caption = [1 for caps in captions[ind] if caption in caps]
if len(overlap_caption) == 0:
captions[ind].append(caption)
min_num_captions = min([len(i) for i in captions])
samples['captions'] = captions
return samples
def forward_qa(
self,
samples,
num_beams=1,
max_len=20,
min_len=0,
internal_bsz_fid=1,
num_captions=100,
num_captions_fid=1,
):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W)
- text_input (list): A list of strings of length batch_size
- gradcams (torch.Tensor): A tensor of shape (batch_size, H*W)
- captions (nested list): A nested list of strings of total length batch_size * num_captions
                - question_captions (nested list): A nested list of question-and-caption strings (added to samples by prepare_qa_input)
num_beams (int): Number of beams for beam search. 1 means no beam search.
max_len (int): Maximum length of generated answers.
min_len (int): Minimum length of generated answers.
internal_bsz_fid (int): Internal batch size when using FiD decoding.
num_captions (int): Number of captions generated for each image.
num_captions_fid (int): Number of captions concatenated with a question during FiD decoding.
Returns:
List: A list of strings, each string is an answer.
"""
prepare_qa_input(samples, num_captions=num_captions, num_captions_fid=num_captions_fid)
pred_answers = []
question_captions = samples['question_captions']
question_captions_chunk = [question_captions[i:i + internal_bsz_fid]
for i in range(0, len(question_captions), internal_bsz_fid)]
question_captions_chunk = list(chain(*question_captions_chunk))
for question_caption in question_captions_chunk:
question_caption_input = self.question_answering_model.tokenizer(question_caption, padding='longest',
truncation=True, return_tensors="pt").to(self.question_answering_model.device)
question_caption_input.input_ids = question_caption_input.input_ids.reshape(
internal_bsz_fid, -1, question_caption_input.input_ids.size(1))
question_caption_input.attention_mask = question_caption_input.attention_mask.reshape(
internal_bsz_fid, -1, question_caption_input.attention_mask.size(1))
outputs = self.question_answering_model.generate(input_ids=question_caption_input.input_ids,
attention_mask=question_caption_input.attention_mask,
num_beams=num_beams,
min_length=min_len,
max_length=max_len,
)
for output in outputs:
pred_answer = self.question_answering_model.tokenizer.decode(output, skip_special_tokens=True)
pred_answers.append(pred_answer)
return pred_answers
def predict_answers(
self,
samples,
num_beams=1,
inference_method="generate",
max_len=20,
min_len=0,
internal_bsz_fid=1,
num_captions=50,
num_captions_fid=1,
cap_max_length=20,
cap_min_length=10,
top_k=50,
top_p=1,
repetition_penalty=1,
num_patches=50,
block_num=7,
):
"""
Args:
samples (dict): A dictionary containing the following keys:
- image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). Default H=480, W=480.
                - text_input (str or [str]): A string or a list of strings, each string being a question.
                  The number of questions must equal the batch size. A single string is first wrapped in a list of length 1.
num_beams (int): Number of beams for beam search. 1 means no beam search.
inference_method (str): Inference method. Must be "generate". The model will generate answers.
max_len (int): Maximum length of generated answers.
min_len (int): Minimum length of generated answers.
internal_bsz_fid (int): Internal batch size when using FiD decoding.
num_captions (int): Number of captions generated for each image.
num_captions_fid (int): Number of captions concatenated with a question during FiD decoding.
cap_max_length (int): The maximum length of the caption to be generated.
cap_min_length (int): The minimum length of the caption to be generated.
            top_k (int): The number of highest-probability tokens kept for top-k sampling.
top_p (float): The cumulative probability for nucleus sampling.
repetition_penalty (float): The parameter for repetition penalty. 1.0 means no penalty.
num_patches (int): Number of patches sampled for each image.
block_num (int): The index of cross-attention block for gradcam computation.
Returns:
List: A list of strings, each string is an answer.
gradcams (torch.Tensor): A tensor of shape (batch_size, H*W)
captions (nested list): A nested list of strings of total length batch_size * num_captions
"""
assert inference_method in [
"generate",
], "Inference method must be 'generate', got {}.".format(
inference_method
)
if isinstance(samples["text_input"], str):
samples["text_input"] = [samples["text_input"]]
assert len(samples["text_input"]) == samples["image"].size(
0
), "The number of questions must be equal to the batch size."
samples = self.forward_itm(samples, block_num=block_num)
samples = self.forward_cap(samples,
cap_max_length=cap_max_length,
cap_min_length=cap_min_length,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty,
num_captions=num_captions,
num_patches=num_patches)
if self.offload_model:
samples['image'] = samples['image'].to('cpu')
self.image_question_matching_model.to('cpu')
self.image_captioning_model.to('cpu')
torch.cuda.empty_cache()
pred_answers = self.forward_qa(samples,
num_beams=num_beams,
max_len=max_len,
min_len=min_len,
internal_bsz_fid=internal_bsz_fid,
num_captions=num_captions,
num_captions_fid=num_captions_fid)
if self.offload_model:
self.image_question_matching_model.to(self.question_answering_model.device)
self.image_captioning_model.to(self.question_answering_model.device)
return pred_answers, samples['captions'], samples['gradcams']
@classmethod
def from_config(cls, model_config):
itm_config = model_config.image_question_matching_model
cap_config = model_config.image_captioning_model
qa_config = model_config.question_answering_model
itm_cls = registry.get_model_class(itm_config.arch)
cap_cls = registry.get_model_class(cap_config.arch)
qa_cls = registry.get_model_class(qa_config.arch)
image_question_matching_model = itm_cls.from_config(itm_config)
image_captioning_model = cap_cls.from_config(cap_config)
question_answering_model = qa_cls.from_config(qa_config)
model = cls(image_question_matching_model=image_question_matching_model,
image_captioning_model=image_captioning_model,
question_answering_model=question_answering_model,
                    offload_model=(model_config.model_type == '3b'),
)
        return model
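`forward_cap` biases caption generation toward question-relevant image regions by drawing patch indices from a multinomial distribution over the gradcam map, then sorting them to preserve spatial order. Below is a hypothetical, torch-free sketch of that sampling step, with `random.choices` standing in for `torch.multinomial`; the function and variable names are illustrative, not part of LAVIS.

```python
import random

def sample_patches(gradcam, num_patches, seed=0):
    """Sample patch indices with probability proportional to gradcam weight,
    then sort them (mirroring patch_id.sort(dim=1) in forward_cap).
    The +1 offset skips the class-token position 0, as in the original."""
    rng = random.Random(seed)
    indices = rng.choices(range(len(gradcam)), weights=gradcam, k=num_patches)
    return sorted(i + 1 for i in indices)

gradcam = [0.05, 0.8, 0.05, 0.1]   # toy relevance map over 4 patches
print(sample_patches(gradcam, num_patches=3))
```

Because sampling is with replacement and weighted, highly relevant patches (index 1 here, with weight 0.8) tend to appear repeatedly, which is what concentrates the captioner on the question-relevant region.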
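In `forward_qa`, the per-example question+caption lists are grouped into chunks of size `internal_bsz_fid` and then immediately flattened with `itertools.chain`. At the list level this round-trips to the original sequence, so the chunk size only matters for how the tokenized tensors are reshaped afterwards. A small sketch with toy data and illustrative names:

```python
from itertools import chain

# Three examples, each pairing one question with two captions.
question_captions = [["q1 c1", "q1 c2"], ["q2 c1", "q2 c2"], ["q3 c1", "q3 c2"]]
internal_bsz_fid = 2

# Group into chunks of internal_bsz_fid, as forward_qa does...
chunks = [question_captions[i:i + internal_bsz_fid]
          for i in range(0, len(question_captions), internal_bsz_fid)]
# ...then flatten one level with chain, recovering the original list.
flat = list(chain(*chunks))

print(len(chunks), len(flat))
# -> 2 3
```

Since `list(chain(*chunks))` undoes the chunking exactly, the subsequent `for question_caption in question_captions_chunk:` loop in `forward_qa` iterates one example at a time regardless of `internal_bsz_fid`.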