| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
transformers
|
Huggingface AutoTokenizer cannot be referenced when importing Transformers
|
https://stackoverflow.com/questions/68481189/huggingface-autotokenizer-cannot-be-referenced-when-importing-transformers
|
<p>I am trying to import AutoTokenizer and AutoModelWithLMHead, but I am getting the following error:</p>
<p>ImportError: cannot import name 'AutoTokenizer' from partially initialized module 'transformers' (most likely due to a circular import)</p>
<p>First, I installed transformers (<code>pip install transformers</code>), then implemented the following code:</p>
<pre><code>from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelWithLMHead.from_pretrained("t5-base")
</code></pre>
|
<p>For anyone who comes across a circular-import error like this: it can be caused by the name of your own <code>.py</code> file. If your script (or another file on the import path) shares its name with the library, Python imports it instead of the installed package. Renaming my file solved the issue.</p>
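A quick way to check for this kind of shadowing (a standard-library-only sketch; the module names here are illustrative) is to ask Python which file an import would actually resolve to. If the reported path points at your own project directory rather than <code>site-packages</code>, a local file is shadowing the package:

```python
import importlib.util

def resolve_module_path(name):
    """Return the file path Python would import for `name`, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec and spec.origin else None

# Example with a standard-library module; for the question above, use "transformers".
print(resolve_module_path("json"))
```

If the printed path is your own script's directory instead of the installed library, rename your file.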
| 434
|
transformers
|
cannot import name 'TrainingArguments' from 'transformers'
|
https://stackoverflow.com/questions/70066746/cannot-import-name-trainingarguments-from-transformers
|
<p>I am trying to fine-tune a pretrained huggingface BERT model. I am importing the following</p>
<pre><code>from transformers import (AutoTokenizer, AutoConfig,
AutoModelForSequenceClassification, TrainingArguments, Trainer)
</code></pre>
<p>and get the following error:</p>
<pre><code>cannot import name 'TrainingArguments' from 'transformers'
</code></pre>
<p><code>Trainer</code> also fails to import.</p>
<p>I currently have tensorflow 2.2.0, pytorch 1.7.1, and transformers 2.1.1 installed</p>
|
<p>Find out the version of the transformers lib installed.</p>
<pre><code>import transformers
print(transformers.__version__)
</code></pre>
<p>Somehow <code>pip install</code> doesn't always pick up the latest version. I think you need v4.20.0 or greater; in any case, the 2.1.1 you have installed is far too old for these imports.</p>
<p>You can try the following to install the latest version:</p>
<pre><code>git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
</code></pre>
<p><a href="https://huggingface.co/docs/transformers/installation#editable-install" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/installation#editable-install</a></p>
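As a small standard-library sketch (the version strings are taken from the question and answer above), you can compare the installed version against a required minimum without extra dependencies:

```python
def version_tuple(v):
    """Parse a dotted version string like '4.20.1' into a comparable tuple.
    Naive: assumes purely numeric dot-separated components (no rc/dev suffixes)."""
    return tuple(int(part) for part in v.split(".")[:3])

# e.g. compare transformers.__version__ against a required minimum
required = version_tuple("4.20.0")
installed = version_tuple("2.1.1")   # the version from the question
print(installed >= required)         # False: too old for these imports
```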
| 435
|
transformers
|
Are applicative transformers really superfluous?
|
https://stackoverflow.com/questions/37761078/are-applicative-transformers-really-superfluous
|
<p>There is a lot of talk about <code>Applicative</code> <em>not</em> needing its own transformer class, like this:</p>
<pre><code>class AppTrans t where
liftA :: Applicative f => f a -> t f a
</code></pre>
<p>But I can define applicative transformers that don't seem to be compositions of applicatives! For example <em>sideeffectful streams</em>:</p>
<pre><code>data MStream f a = MStream (f (a, MStream f a))
</code></pre>
<p>Lifting just performs the side effect at every step:</p>
<pre><code>instance AppTrans MStream where
liftA action = MStream $ (,) <$> action <*> pure (liftA action)
</code></pre>
<p>And if <code>f</code> is an applicative, then <code>MStream f</code> is as well:</p>
<pre><code>instance Functor f => Functor (MStream f) where
fmap fun (MStream stream) = MStream $ (\(a, as) -> (fun a, fmap fun as)) <$> stream
instance Applicative f => Applicative (MStream f) where
pure = liftA . pure
MStream fstream <*> MStream astream = MStream
$ (\(f, fs) (a, as) -> (f a, fs <*> as)) <$> fstream <*> astream
</code></pre>
<p>I know that for any practical purposes, <code>f</code> should be a monad:</p>
<pre><code>joinS :: Monad m => MStream m a -> m [a]
joinS (MStream stream) = do
(a, as) <- stream
aslist <- joinS as
return $ a : aslist
</code></pre>
<p>But while there is a <code>Monad</code> instance for <code>MStream m</code>, it's inefficient. (Or even incorrect?) The <code>Applicative</code> instance is actually useful!</p>
<p>Now note that usual streams arise as special cases for the identity functor:</p>
<pre><code>import Data.Functor.Identity
type Stream a = MStream Identity a
</code></pre>
<p>But the composition of <code>Stream</code> and <code>f</code> is not <code>MStream f</code>! Rather, <code>Compose Stream f a</code> is isomorphic to <code>Stream (f a)</code>.</p>
<p><em>I'd like to know whether <code>MStream</code> is a composition of any two applicatives.</em></p>
<p>Edit:</p>
<p>I'd like to offer a category theoretic viewpoint. A transformer is a "nice" endofunctor <code>t</code> on the category <code>C</code> of applicative functors (i.e. lax monoidal functors with strength), together with a natural transformation <code>liftA</code> from the identity on <code>C</code> to <code>t</code>. The more general question is now what useful transformers exist that are not of the form "compose with <code>g</code>" (where <code>g</code> is an applicative). My claim is that <code>MStream</code> is one of them.</p>
|
<p>Great question! I believe there are two different parts of this question:</p>
<ol>
<li><strong>Composing</strong> existing applicatives or monads into more complex ones.</li>
<li><strong>Constructing all</strong> applicatives/monads from some given starting set.</li>
</ol>
<p>Ad 1.: <strong>Monad transformers are essential for combining monads.</strong> Monads <a href="https://stackoverflow.com/q/13034229/1333025">don't compose directly</a>. It seems that there needs to be an extra bit of information provided by monad transformers that tells how each monad can be composed with other monads (but it could be this information is already somehow present, see <a href="https://stackoverflow.com/q/24515876/1333025">Is there a monad that doesn't have a corresponding monad transformer?</a>).</p>
<p>On the other hand, <strong>applicatives compose directly</strong>, see <a href="https://hackage.haskell.org/package/transformers-0.5.1.0/docs/Data-Functor-Compose.html" rel="nofollow noreferrer">Data.Functor.Compose</a>. This is why we don't need applicative transformers for composition. They're also closed under <a href="https://hackage.haskell.org/package/transformers-0.5.1.0/docs/Data-Functor-Product.html" rel="nofollow noreferrer">product</a> (but not <a href="https://hackage.haskell.org/package/transformers-0.5.1.0/docs/Data-Functor-Sum.html" rel="nofollow noreferrer">coproduct</a>).</p>
<p>For example, having <a href="https://hackage.haskell.org/package/Stream-0.4.7.2/docs/Data-Stream.html" rel="nofollow noreferrer">infinite streams</a> <code>data Stream a = Cons a (Stream a)</code> and another applicative <code>g</code>, both <code>Stream (g a)</code> and <code>g (Stream a)</code> are applicatives.</p>
<p>But even though <code>Stream</code> is also a monad (<code>join</code> takes the diagonal of a 2-dimensional stream), its composition with another monad <code>m</code> won't be, neither <code>Stream (m a)</code> nor <code>m (Stream a)</code> will always be a monad.</p>
<p>Furthermore as we can see, they're both different from your <code>MStream g</code> (which is very close to <a href="https://wiki.haskell.org/ListT_done_right" rel="nofollow noreferrer"><code>ListT</code> done right</a>), therefore:</p>
<p>Ad 2.: <strong>Can all applicatives be constructed from some given set of primitives?</strong> Apparently not. One problem is constructing sum data types: If <code>f</code> and <code>g</code> are applicatives, <code>Either (f a) (g a)</code> won't be, as we don't know how to compose <code>Right h <*> Left x</code>.</p>
<p>Another construction primitive is taking a fixed point, as in your <code>MStream</code> example. Here we might attempt to generalize the construction by defining something like</p>
<pre><code>newtype Fix1 f a = Fix1 { unFix1 :: f (Fix1 f) a }
instance (Functor (f (Fix1 f))) => Functor (Fix1 f) where
fmap f (Fix1 a) = Fix1 (fmap f a)
instance (Applicative (f (Fix1 f))) => Applicative (Fix1 f) where
pure k = Fix1 (pure k)
(Fix1 f) <*> (Fix1 x) = Fix1 (f <*> x)
</code></pre>
<p>(which requires not-so-nice <code>UndecidableInstances</code>) and then</p>
<pre><code>data MStream' f g a = MStream (f (a, g a))
type MStream f = Fix1 (MStream' f)
</code></pre>
| 436
|
transformers
|
Spacy-Transformers: Access GPT-2?
|
https://stackoverflow.com/questions/68946827/spacy-transformers-access-gpt-2
|
<p>I'm using Spacy-Transformers to build some NLP models.</p>
<p>The <a href="https://spacy.io/universe/project/spacy-transformers#gatsby-noscript" rel="nofollow noreferrer">Spacy-Transformers docs</a> say:</p>
<blockquote>
<p><strong>spacy-transformers</strong></p>
<p><em>spaCy pipelines for pretrained BERT, XLNet and GPT-2</em></p>
</blockquote>
<p>The sample code on that page shows:</p>
<pre><code>import spacy
nlp = spacy.load("en_core_web_trf")
doc = nlp("Apple shares rose on the news. Apple pie is delicious.")
</code></pre>
<p>Based on what I've learned from <a href="https://www.youtube.com/watch?v=vyOgWhwUmec" rel="nofollow noreferrer">this video</a>, "en_core_web_trf" appears to be the <code>spacy.load()</code> package for using a BERT model. I've searched the <a href="https://spacy.io/universe/project/spacy-transformers#gatsby-noscript" rel="nofollow noreferrer">Spacy-Transformers docs</a> and haven't yet seen an equivalent package for GPT-2. Is there a specific package to pass to <code>spacy.load()</code> in order to use a GPT-2 model?</p>
|
<p>The <code>en_core_web_trf</code> uses a specific Transformers model, but you can specify arbitrary ones using the <code>TransformerModel</code> wrapper class from <code>spacy-transformers</code>. See <a href="https://spacy.io/api/architectures#TransformerModel" rel="nofollow noreferrer">the docs</a> for that. An example config:</p>
<pre><code>[model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "roberta-base" # this can be the name of any hub model
tokenizer_config = {"use_fast": true}
</code></pre>
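To address the GPT-2 part of the question directly: per the linked docs, the wrapper accepts any Hugging Face hub checkpoint name, so presumably (untested sketch) the config would just swap in a GPT-2 model name:

```ini
[model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "gpt2"  # or another GPT-2 checkpoint from the hub
tokenizer_config = {"use_fast": true}
```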
| 437
|
transformers
|
issue when importing BloomTokenizer from transformers in python
|
https://stackoverflow.com/questions/73107703/issue-when-importing-bloomtokenizer-from-transformers-in-python
|
<p>I am trying to import BloomTokenizer from transformers</p>
<pre><code>from transformers import BloomTokenizer
</code></pre>
<p>and I receive the following error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'BloomTokenizer' from 'transformers'
(/root/miniforge3/envs/pytorch/lib/python3.8/site-packages/transformers/__init__.py)
</code></pre>
<p>my version of transformers:</p>
<pre><code>transformers 4.20.1
</code></pre>
<p>what could I do to be able to import BloomTokenizer?</p>
|
<p>BLOOM has no slow tokenizer class. It only has a <a href="https://huggingface.co/docs/transformers/model_doc/bloom#transformers.BloomTokenizerFast" rel="nofollow noreferrer">fast tokenizer</a>. The official documentation is wrong at this point. Use the following instead:</p>
<pre class="lang-py prettyprint-override"><code> from transformers import BloomTokenizerFast
tokenizer = BloomTokenizerFast.from_pretrained("...")
</code></pre>
| 438
|
transformers
|
huggingface/transformers: cache directory
|
https://stackoverflow.com/questions/73594934/huggingface-transformers-cache-directory
|
<p>I'm trying to use huggingface transformers.
(Win 11, Python 3.9, jupyternotebook, virtual environment)</p>
<p>When I ran code:</p>
<pre><code>from transformers import pipeline
print(pipeline('sentiment-analysis')('I hate you'))
</code></pre>
<p>I got an error :</p>
<blockquote>
<p>FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\Users\user/.cache\huggingface'</p>
</blockquote>
<p>There's no directory named '.cache' in my user folder,
so I used cache_dir="./cache"
but I want to change the path of the directory permanently.</p>
<p>P.S.</p>
<pre><code>import os
os.environ['TRANSFORMERS_CACHE'] = './cache'
</code></pre>
<p>also didn't work.</p>
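One likely reason the <code>os.environ</code> attempt didn't work is ordering: the variable is read when <code>transformers</code> is first imported, so it must be set earlier in the process. For a permanent change, set it at the OS level (e.g. via Windows environment-variable settings). A minimal sketch of the in-process ordering, with an illustrative path:

```python
import os

# Must happen BEFORE the first `import transformers` anywhere in the process.
os.environ["TRANSFORMERS_CACHE"] = r"D:\hf-cache"   # illustrative path

# Only then:
# from transformers import pipeline
print(os.environ["TRANSFORMERS_CACHE"])
```

Newer transformers versions also honour <code>HF_HOME</code>; the same ordering caveat applies.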
| 439
|
|
transformers
|
TypeScript custom transformers with ts.createWatchProgram
|
https://stackoverflow.com/questions/62026189/typescript-custom-transformers-with-ts-createwatchprogram
|
<p>TypeScript has several high-level APIs to implement <strong>watch/compile,</strong> for example:</p>
<ul>
<li><a href="https://github.com/microsoft/TypeScript/blob/0b38a9a2b03d3c651822bc2a20d381545384f0f5/src/compiler/watchPublic.ts#L196" rel="nofollow noreferrer"><b>createWatchCompilerHost</b>( rootFiles, options, system, ... )</a></li>
<li><a href="https://github.com/microsoft/TypeScript/blob/177713ef45f7714c91af79246b8b5e2b6bd59128/src/compiler/tsbuildPublic.ts#L193" rel="nofollow noreferrer"><b>createSolutionBuilderWithWatchHost</b>( system, ... )</a></li>
<li><a href="https://github.com/microsoft/TypeScript/blob/177713ef45f7714c91af79246b8b5e2b6bd59128/src/compiler/tsbuildPublic.ts#L212" rel="nofollow noreferrer"><b>createSolutionBuilderWithWatch</b>( host, rootFiles, ... )</a></li>
</ul>
<p>Can any of them be used with <strong>custom transformers?</strong></p>
<p>A comment to <a href="https://github.com/microsoft/TypeScript/pull/31432" rel="nofollow noreferrer">solutionBuilder.<b>getNextInvalidatedProject</b>()</a> mentions ability to pass transformers, but it cannot be used with watchers.</p>
<p>Basically, I need via API to run TypeScript compiler in <code>--watch</code> mode, but passing in my custom transformers. Any clues?</p>
|
<p>A better approach, more aligned with the <a href="https://github.com/microsoft/TypeScript/pull/31432" rel="nofollow noreferrer">solutionBuilder.<strong>getNextInvalidatedProject()</strong></a> recommendation:</p>
<p>When you call <strong>solutionBuilder.getNextInvalidatedProject().emit(...)</strong> manually, you can pass in transformers. Invoking that API marks the build as complete, meaning it won't emit again in its usual non-customised way.</p>
<p>You would invoke it both before the initial <code>build()</code> and from the <code>WatchStatusReporter</code> callback.</p>
<p>That way you inject custom transformers, yet still retain the built-in watching logic. Here's the proof-of-concept script at work:</p>
<p><img src="https://gist.github.com/mihailik/11369fd2b5e0603a14bc5d883d47dd6c/raw/ecad1fd7c8e1c45d2a5c2392d8802c4fb69a9dd4/createSolutionBuilderWithWatch-slower.gif"></p>
<p>Here's full code, see also <a href="https://gist.github.com/mihailik/11369fd2b5e0603a14bc5d883d47dd6c" rel="nofollow noreferrer"><strong>Gist</strong></a> and <a href="https://repl.it/@OlegMihailik/buildjs-tscreateSolutionBuilderWithWatch#index.js" rel="nofollow noreferrer"><strong>Repl.it</strong></a>.</p>
<pre class="lang-js prettyprint-override"><code>// @ts-check
var ts = require('typescript');
var tsconfig_json = JSON.stringify({
compilerOptions: {
outFile: __filename + '.out.js',
allowJs: true,
checkJs: true,
target: 'es3'
},
files: [__filename]
}, null, 2);
var s = {
delete: 3
};
/** @type {import('typescript').System} */
var sysOverride = {};
for (var k in ts.sys) { sysOverride[k] = ts.sys[k]; }
sysOverride.readFile = function (file) {
if (ts.sys.resolvePath(file) === ts.sys.resolvePath(__dirname + '/tsconfig.json')) {
// console.log('readFile(', file, ') -> overridden tsconfig_json');
return tsconfig_json;
}
else {
var result = ts.sys.readFile(file);
// if (!/node_modules/.test(file))
// console.log('readFile(', file, ') -> ' + (typeof result === 'string' ? '"' + result.length + '"' : typeof result));
return result;
}
};
sysOverride.writeFile = function (file, content) {
console.log(' sys.writeFile(', file, ', [', content.length, '])');
ts.sys.writeFile(file, content);
};
var host = ts.createSolutionBuilderWithWatchHost(
sysOverride,
void 0,
reportDiag,
reportDiag,
reportWatch);
var buildStart = Date.now();
var solutionBuilder = ts.createSolutionBuilderWithWatch(
host,
[__dirname],
{ incremental: false }, {});
initiateFirstBuild();
function initiateFirstBuild() {
var firstBuild = solutionBuilder.getNextInvalidatedProject();
if (firstBuild) {
buildStart = Date.now();
startBuild(firstBuild);
}
solutionBuilder.build();
}
/**
* @param {import('typescript').InvalidatedProject<import('typescript').EmitAndSemanticDiagnosticsBuilderProgram>} proj
* @param {import('typescript').Diagnostic=} watchDiag
*/
function startBuild(proj, watchDiag) {
ts.sys.write(
'\x1b[93m ' + (ts.InvalidatedProjectKind[proj.kind] + ' ').slice(0, 10) + '\x1b[0m' +
(watchDiag ? '' : '\n'));
if (watchDiag) reportDiag(watchDiag);
buildStart = Date.now();
if (proj && proj.kind === ts.InvalidatedProjectKind.Build) {
progSource = proj;
proj.emit(
void 0,
void 0,
void 0,
void 0,
{ after: [transformInjectStatementNumbers] });
}
}
function completeBuild(watchDiag) {
ts.sys.write('\x1b[90m ' + (((Date.now() - buildStart) / 1000) + 's ').slice(0, 10) + '\x1b[0m');
if (watchDiag) reportDiag(watchDiag);
}
/** @type {import('typescript').FormatDiagnosticsHost} */
var diagHost;
/** @param {import('typescript').Diagnostic} diag */
function reportDiag(diag) {
if (!diagHost) {
diagHost = {
getCanonicalFileName: function (fileName) {
return ts.sys.resolvePath(fileName)
},
getCurrentDirectory: function () {
return ts.sys.getCurrentDirectory();
},
getNewLine: function () {
return ts.sys.newLine;
}
};
}
var output = ts.sys.writeOutputIsTTY && ts.sys.writeOutputIsTTY() ?
ts.formatDiagnosticsWithColorAndContext([diag], diagHost) :
ts.formatDiagnostic(diag, diagHost);
output = output.replace(/^[\r\n]+/, '').replace(/[\r\n]+$/, '');
ts.sys.write(output + '\n');
}
/** @param {import('typescript').Diagnostic} diag */
function reportWatch(diag) {
var proj = solutionBuilder.getNextInvalidatedProject();
if (proj && /** @type {*} */(proj).getProgram) {
progSource = /** @type {*} */(proj);
}
if (proj)
startBuild(proj, diag);
else
completeBuild(diag);
}
/** @type {{ getProgram(): import('typescript').Program }} */
var progSource;
/** @type {import('typescript').TypeChecker} */
var checker;
/** @param {import('typescript').TransformationContext} context */
function transformInjectStatementNumbers(context) {
checker = progSource.getProgram().getTypeChecker();
return transformFile;
function transformFile(sourceFile) {
console.log(' transforming(', sourceFile.fileName, ')...');
return ts.updateSourceFileNode(
sourceFile,
sourceFile.statements.map(decorateStatementWithComplexityAndType));
}
}
/**
* @param {import('typescript').Statement} statement
*/
function decorateStatementWithComplexityAndType(statement) {
var nodeCount = 0;
var type;
ts.forEachChild(statement, visitStatementChild);
return ts.addSyntheticLeadingComment(
statement, ts.SyntaxKind.SingleLineCommentTrivia,
' INJECTED >> complexity: ' + nodeCount +
(!type ? '' : ' : ' + checker.typeToString(type)));
/**
* @param {import('typescript').Node} child
*/
function visitStatementChild(child) {
nodeCount++;
if (!type) type = checker.getTypeAtLocation(child);
if (type.getFlags() === ts.TypeFlags.Any) type = null;
ts.forEachChild(child, visitStatementChild);
}
}
</code></pre>
| 440
|
transformers
|
Including multiple dataset transformers in custom transformer
|
https://stackoverflow.com/questions/77570948/including-multiple-dataset-transformers-in-custom-transformer
|
<p>Here is my custom transformer, meant to transform the subject dataframe of encoding and scaling:</p>
<pre><code>class DfGrooming(BaseEstimator, TransformerMixin):
def __init__(self):
self.encodable_columns = ['Education','EmploymentType','MaritalStatus', 'HasMortgage', 'HasDependents', 'LoanPurpose', 'HasCoSigner']
self.scalable_columns = ['Age', 'Income', 'LoanAmount', 'CreditScore', 'MonthsEmployed', 'InterestRate', 'LoanTerm']
self.encoder = LabelEncoder()
self.scaler = MinMaxScaler(feature_range=(0,5))
self.X_encoded = pd.DataFrame()
self.X_scaled = pd.DataFrame()
def fit(self, X, y=None):
self.encoder.fit(X[self.encodable_columns])
self.scaler.fit(X[self.scalable_columns])
return self
def transform(self, X, y=None):
self.X_encoded = self.encoder.transform(X[self.encodable_columns])
print(self.X_encoded.shape)
X.drop(columns=self.encodable_columns, axis=1, inplace=True)
X = pd.concat([X, self.X_encoded], axis=1)
print(X.shape)
self.X_scaled = X.filter(self.scalable_columns, axis=1)
self.X_scaled = pd.DataFrame(scaler.transform(self.X_scaled))
self.X_scaled.columns = self.scalable_columns
X[self.scalable_columns] = self.X_scaled[self.scalable_columns]
X.drop(['LoanID'], axis=1, inplace=True)
print(X.shape)
return X
</code></pre>
<p>But after running the pipeline:</p>
<pre><code>pipeline = Pipeline([('preparer', DfGrooming())])
t = pipeline.fit_transform(train_df)
t.head()
</code></pre>
<p>I get the following error:</p>
<pre><code>ValueError: bad input shape (178742, 7)
</code></pre>
<p>I would like to know what is actually happening and if I am missing anything in this implementation of transformers. Also please suggest better ways to implement this procedure. Thank you</p>
<p>I tried to include 2 transformers in one custom transformer.
I was expecting to combine 2 steps (and possibly 4 - removing and adding the encoded and scaled columns into the main dataframe)</p>
<p>I have a working algorithm where I semi-automatically transform the Validation and Test sets with functions, but wanted to try pipelines</p>
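The error most likely comes from <code>LabelEncoder</code>, which is designed for a single 1-D label column, not a 2-D block of feature columns (hence the complaint about shape <code>(178742, 7)</code>); note also that <code>transform()</code> calls a bare <code>scaler</code> rather than <code>self.scaler</code>, which would fail later. A hedged sketch of the usual alternative, using scikit-learn's <code>ColumnTransformer</code> with <code>OrdinalEncoder</code> (a subset of the question's columns, with made-up data, for illustration):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder, MinMaxScaler

# Subset of the question's columns, for illustration.
encodable = ["Education", "HasMortgage"]
scalable = ["Age", "Income"]

preparer = ColumnTransformer([
    ("enc", OrdinalEncoder(), encodable),            # per-column categorical codes
    ("scale", MinMaxScaler(feature_range=(0, 5)), scalable),
])

df = pd.DataFrame({
    "Education": ["BSc", "MSc", "BSc"],
    "HasMortgage": ["Yes", "No", "Yes"],
    "Age": [25, 40, 33],
    "Income": [30_000, 80_000, 55_000],
})
out = preparer.fit_transform(df)
print(out.shape)  # (3, 4): two encoded columns followed by two scaled columns
```

Because it is already fit/transform-compatible, the <code>ColumnTransformer</code> can sit directly as a pipeline step, replacing the hand-rolled class.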
| 441
|
|
transformers
|
What are Haskell's monad transformers in categorical terms?
|
https://stackoverflow.com/questions/6854303/what-are-haskells-monad-transformers-in-categorical-terms
|
<p>As a math student, the first thing I did when I learned about monads in Haskell was check that they really were monads in the sense I knew about. But then I learned about monad transformers and those don't quite seem to be something studied in category theory.</p>
<p>In particular I would expect them to be related to distributive laws but they seem to be genuinely different: a monad transformer is expected to apply to an arbitrary monad while a distributive law is an affair between a monad and a specific other monad.</p>
<p>Also, looking at the usual examples of monad transformers, while <code>MaybeT m</code> composes <code>m</code> with <code>Maybe</code>, <code>StateT m</code> is not a composition of <code>m</code> with <code>State</code> in either order.</p>
<p>So my question is what are monad transformer in categorical language?</p>
|
<p>Monad transformers aren't exceedingly mathematically pleasant. However, we can get nice (co)products from free monads, and, more generally, ideal monads: See Ghani and Uustalu's "Coproducts of Ideal Monads": <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.2698">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.2698</a></p>
| 442
|
transformers
|
Monad Transformers lift
|
https://stackoverflow.com/questions/39638546/monad-transformers-lift
|
<p>I was just looking into monad transformers in Real World Haskell.
The book said that to make something a monad transformer, you need to make it an instance of the MonadTrans type class.</p>
<p>So the book defined a new Transformer, the <code>MaybeT m a</code> transformer.</p>
<p>They defined the monadTrans type class for this new transformer:</p>
<pre><code>instance MonadTrans MaybeT where
lift m = MaybeT (Just `liftM` m)
</code></pre>
<p>Then they made an instance of MonadState for this transformer:</p>
<pre><code>instance (MonadState s m) => MonadState s (MaybeT m) where
get = lift get
put k = lift (put k)
</code></pre>
<p>From what I understand, the lift function takes the underlying monad and wraps it in the right constructor. However, I do not get the implementation of get or put in the MonadState type class, and I would like some help understanding what the lift is actually doing here. I have also heard that in the mtl package, because of how the type classes are defined, you can have a stack of monad transformers with WriterT, StateT etc. and still use functions like get, put, tell etc. without actually doing any lifting. I was wondering how this works; I strongly suspect it's to do with these type classes, but I am not sure.</p>
|
<blockquote>
<p>but you can use functions like get,put,tell etc without actually doing any lifting</p>
</blockquote>
<p>This is because those functions are actually defined on e.g. the <code>MonadState</code> typeclass, not on the <code>State</code> type.</p>
<pre><code>class Monad m => MonadState s m | m -> s where
get :: m s
put :: s -> m ()
</code></pre>
<p>Then, both <code>State</code> and <code>StateT</code> are made instances of that class, which is what makes using those functions possible<sup>1</sup>.</p>
<p>In your example instance, if we know that the inner monad of <code>MaybeT m</code> is (fulfills) <code>MonadState s</code>, we can treat the whole outer monad as <code>MonadState s</code>, provided we lift the operations directed at the inner monad so that they fit the outer one, which is done with <code>lift</code>.</p>
<p>In plain English, that would read something like "<em>If the MaybeT transformer wraps some monad <code>m</code> that is a stateful (<code>MonadState</code>) monad for type <code>s</code>, the resulting type is also a stateful monad for that type</em>".</p>
<hr>
<p><sup>1</sup>This is actually just one instance, because <code>State s a</code> is actually implemented as <code>StateT s Identity a</code>. <a href="https://hackage.haskell.org/package/mtl-2.2.1/docs/src/Control-Monad-State-Class.html#MonadState" rel="nofollow">Refer to the sources</a> for the implementation details.</p>
| 443
|
transformers
|
Rxjava how to generify transformers?
|
https://stackoverflow.com/questions/54901081/rxjava-how-to-generify-transformers
|
<p>I've found myself writing a lot of transformers to generify some stream operations such as retry, applying schedulers, etc. This led to a lot of code duplication, because each stream type (Single, Completable, etc.) has its own Transformer, so I had to implement 4 different transformers that do exactly the same thing. That also led to duplication in tests. Is there a way to generify transformers?</p>
| 444
|
|
transformers
|
RuntimeError: Numpy is not available (transformers)
|
https://stackoverflow.com/questions/78863932/runtimeerror-numpy-is-not-available-transformers
|
<p>I basically just want to use the transformers pipeline() to classify data, but independent of which model I try to use, it returns the same error, stating <strong>Numpy is not available</strong></p>
<p>Code I'm running:</p>
<pre><code>pipe = pipeline("text-classification", model="AdamLucek/roberta-llama3.1405B-twitter-sentiment")
sentiment_pipeline('Today is a great day!')
# other model i've tried:
sentiment_pipeline = pipeline(model="cardiffnlp/twitter-roberta-base-sentiment-latest", tokenizer="cardiffnlp/twitter-roberta-base-sentiment-latest")
sentiment_pipeline('Today is a great day!')
</code></pre>
<p>Error I receive:</p>
<pre><code>RuntimeError Traceback (most recent call last)
Cell In[49], line 1
----> 1 sentiment_pipeline('Today is a great day!')
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\pipelines\text_classification.py:156, in TextClassificationPipeline.__call__(self, inputs, **kwargs)
122 """
123 Classify the text(s) given as inputs.
124
(...)
153 If `top_k` is used, one such dictionary is returned per label.
154 """
155 inputs = (inputs,)
--> 156 result = super().__call__(*inputs, **kwargs)
157 # TODO try and retrieve it in a nicer way from _sanitize_parameters.
158 _legacy = "top_k" not in kwargs
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\pipelines\base.py:1257, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1249 return next(
1250 iter(
1251 self.get_iterator(
(...)
1254 )
1255 )
1256 else:
-> 1257 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\pipelines\base.py:1265, in Pipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
1263 model_inputs = self.preprocess(inputs, **preprocess_params)
1264 model_outputs = self.forward(model_inputs, **forward_params)
-> 1265 outputs = self.postprocess(model_outputs, **postprocess_params)
1266 return outputs
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\pipelines\text_classification.py:208, in TextClassificationPipeline.postprocess(self, model_outputs, function_to_apply, top_k, _legacy)
204 outputs = model_outputs["logits"][0]
206 if self.framework == "pt":
207 # To enable using fp16 and bf16
--> 208 outputs = outputs.float().numpy()
209 else:
210 outputs = outputs.numpy()
RuntimeError: Numpy is not available
</code></pre>
<p>I already tried simply un- and reinstalling transformers and numpy and for both the most recent versions are installed (and should be compatible).</p>
<p>Anyone has an idea on how to solve this?</p>
|
<p>This error typically means your installed PyTorch build was compiled against NumPy 1.x and cannot work with NumPy 2.x. Downgrading NumPy usually fixes it:</p>
<pre><code>pip install "numpy<2"
</code></pre>
<p>then restart the kernel.</p>
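As a tiny illustrative helper (not part of the original fix), you can check whether a NumPy version string satisfies the <code>"numpy&lt;2"</code> pin, e.g. against <code>numpy.__version__</code>:

```python
def satisfies_numpy_pin(version_string):
    """Check whether a NumPy version string satisfies the 'numpy<2' pin.
    Naive parse: looks only at the major component."""
    major = int(version_string.split(".")[0])
    return major < 2

print(satisfies_numpy_pin("1.26.4"))  # True: a NumPy 1.x build
print(satisfies_numpy_pin("2.0.1"))   # False: this is where the downgrade is needed
```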
| 445
|
transformers
|
issue when import ConditionalGeneration from transformers in python
|
https://stackoverflow.com/questions/74101343/issue-when-import-conditionalgeneration-from-transformers-in-python
|
<p>I am trying to import ConditionalGeneration from transformers in jupyter notebook</p>
<pre><code>from transformers import ConditionalGeneration
</code></pre>
<p>but encounter the following error. I have installed different versions of pytorch and transformers via different methods, but I can't solve it.</p>
<pre><code>---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_10284\1006457458.py in <module>
1 ## Tokenizer import
2
----> 3 from transformers import T5Tokenizer, ConditionalGeneration
ImportError: cannot import name 'ConditionalGeneration' from 'transformers' (C:\Users\lenovo\anaconda3\envs\latr\lib\site-packages\transformers\__init__.py)
</code></pre>
|
<p>There is simply no class named <code>ConditionalGeneration</code> in transformers.
You need to specify one of the concrete classes, for example <a href="https://huggingface.co/docs/transformers/v4.23.1/en/model_doc/bart#transformers.BartForConditionalGeneration" rel="nofollow noreferrer">BartForConditionalGeneration</a>, <a href="https://huggingface.co/docs/transformers/v4.23.1/en/model_doc/led#transformers.LEDForConditionalGeneration" rel="nofollow noreferrer">LEDForConditionalGeneration</a>, <a href="https://huggingface.co/docs/transformers/v4.23.1/en/model_doc/longt5#transformers.LongT5ForConditionalGeneration" rel="nofollow noreferrer">LongT5ForConditionalGeneration</a>, or any other encoder-decoder transformer from Hugging Face.</p>
| 446
|
transformers
|
Missing convert_trajectory_transformer_original_pytorch_checkpoint_to_pytorch.py (notebook/Jupyter/transformers)
|
https://stackoverflow.com/questions/77519523/missing-convert-trajectory-transformer-original-pytorch-checkpoint-to-pytorch-py
|
<p>I tried to install the transformers package in a notebook, but it failed:</p>
<p><code>!pip install transformers</code></p>
<p>I get this:</p>
<pre><code>FULLTRACE:
Collecting transformers
Obtaining dependency information for transformers from https://files.pythonhosted.org/packages/12/dd/f17b11a93a9ca27728e12512d167eb1281c151c4c6881d3ab59eb58f4127/transformers-4.35.2-py3-none-any.whl.metadata
Using cached transformers-4.35.2-py3-none-any.whl.metadata (123 kB)
Requirement already satisfied: filelock in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (3.13.1)
Requirement already satisfied: huggingface-hub<1.0,>=0.16.4 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (0.17.3)
Requirement already satisfied: numpy>=1.17 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (1.25.2)
Requirement already satisfied: packaging>=20.0 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (23.2)
Requirement already satisfied: pyyaml>=5.1 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (6.0.1)
Requirement already satisfied: regex!=2019.12.17 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (2023.10.3)
Requirement already satisfied: requests in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (2.31.0)
Requirement already satisfied: tokenizers<0.19,>=0.14 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (0.14.1)
Requirement already satisfied: safetensors>=0.3.1 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (0.4.0)
Requirement already satisfied: tqdm>=4.27 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (4.66.1)
Requirement already satisfied: fsspec in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from huggingface-hub<1.0,>=0.16.4->transformers) (2023.10.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from huggingface-hub<1.0,>=0.16.4->transformers) (4.5.0)
Requirement already satisfied: colorama in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from tqdm>=4.27->transformers) (0.4.6)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests->transformers) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests->transformers) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests->transformers) (2.0.3)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests->transformers) (2023.5.7)
Using cached transformers-4.35.2-py3-none-any.whl (7.9 MB)
Installing collected packages: transformers
</code></pre>
<pre><code>ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\\Users\\x\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\transformers\\models\\deprecated\\trajectory_transformer\\convert_trajectory_transformer_original_pytorch_checkpoint_to_pytorch.py'
HINT: This error might have occurred since this system does not have Windows Long Path support enabled. You can find information on how to enable this at https://pip.pypa.io/warnings/enable-long-paths
[notice] A new release of pip is available: 23.2.1 -> 23.3.1
[notice] To update, run: C:\Users\x\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\python.exe -m pip install --upgrade pip
</code></pre>
<p>If you need more details, I'm ready!
I need to write more because the website asks me to, but I don't think I need to flood you with more info.</p>
|
<p>I was trying to load transformers onto my local system and had the same error. (Windows 11 Home)</p>
<p>I opened Registry Editor, navigated to:</p>
<p>HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem</p>
<p>found the value:</p>
<p>LongPathsEnabled</p>
<p>then changed the value from 0 to 1 (with the base left as hexadecimal, which is the default).</p>
<p>How: Right click the value, select Modify, make the change to "Value Data" in the provided popup window, then click OK. Simple.</p>
<p>Now I can use long paths, which means that the transformers download is working properly. I verified that it works by finding the referenced folder, and seeing that the missing file was installed correctly, following this edit.</p>
<p>Hope this helps.</p>
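The same change can be made without clicking through Regedit by importing a small registry file containing the value described above (a sketch; back up the registry before importing):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"LongPathsEnabled"=dword:00000001
```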
| 447
|
transformers
|
How to import Transformers with Tensorflow
|
https://stackoverflow.com/questions/74914230/how-to-import-transformers-with-tensorflow
|
<p>After installing Transformers using</p>
<pre><code>pip install Transformers
</code></pre>
<p>I get version 4.25.1 , but when I try to import Transformer by</p>
<pre><code>from tensorflow.keras.layers import Transformer
# or
from tensorflow.keras.layers.experimental import Transformer
</code></pre>
<p>I get this error:</p>
<pre><code>ImportError: cannot import name 'Transformer' from 'tensorflow.keras.layers'
</code></pre>
<p>I am using <code>Tenserflow 2.10</code> and <code>python 3.7</code>.</p>
|
<p>Since you have installed the standalone Transformers package, you have to import it directly, as in</p>
<pre><code>import transformers
</code></pre>
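As a quick sanity check that the standalone package is what gets imported (this assumes transformers is installed in the active environment):

```python
import transformers

# The pip package exposes a top-level module named `transformers`;
# tensorflow.keras.layers has no `Transformer` class to import.
print(transformers.__version__)
```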
| 448
|
transformers
|
Laravel Dingo nested transformers
|
https://stackoverflow.com/questions/44900145/laravel-dingo-nested-transformers
|
<p>I'm trying to get one to many relationship objects with transformers. I want to get include metas but i only get just regular transform fields.</p>
<p>my transformer:</p>
<pre><code>class AssistantTransformer extends TransformerAbstract
{
protected $availableIncludes = [
'assistantmetas'
];
public function transform(User $user)
{
return [
'id' => (int) $user->id,
'firstname' => ucfirst($user->first_name),
'lastname' => ucfirst($user->last_name),
];
}
public function includeMetas(User $user)
{
$assistantmetas = $user->userMetas;
return $this->item($assistantmetas, new AssistantsMetaTransformer);
}
}
</code></pre>
|
<p>Use <code>defaultIncludes</code> instead of <code>availableIncludes</code>. With <code>availableIncludes</code> the include is only applied when the client requests it explicitly via <code>url?include=assistantmetas</code>; with <code>defaultIncludes</code> it is returned automatically.</p>
| 449
|
transformers
|
How to change huggingface transformers default cache directory?
|
https://stackoverflow.com/questions/63312859/how-to-change-huggingface-transformers-default-cache-directory
|
<p>The default cache directory lacks disk capacity, I need to change the configuration of the default cache directory. How can I do that?</p>
|
<p>You can specify the cache directory whenever you load a model with <a href="https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained" rel="noreferrer">.from_pretrained</a> by setting the parameter <code>cache_dir</code>. You can also set a default location by exporting an environment variable <a href="https://huggingface.co/docs/transformers/installation?highlight=transformers_cache#cache-setup" rel="noreferrer">HF_HOME</a> each time before you use the library (i.e. <strong>before</strong> importing it!).</p>
<p>Python example:</p>
<pre class="lang-py prettyprint-override"><code>import os
os.environ['HF_HOME'] = '/blabla/cache/'
</code></pre>
<p>bash example:</p>
<pre class="lang-bash prettyprint-override"><code>export HF_HOME=/blabla/cache/
</code></pre>
<p>windows example:</p>
<pre><code>set HF_HOME=E:\huggingface_cache
</code></pre>
<p>Google Colab example (export via <code>os</code> works fine but not the bash variant. An alternative are the magic commands):</p>
<pre class="lang-bash prettyprint-override"><code>%env HF_HOME=/blabla/cache/
</code></pre>
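For the per-call variant mentioned above, <code>cache_dir</code> can be passed directly to <code>from_pretrained</code> (a sketch; the model name is an example, a temporary directory stands in for a real path such as <code>/blabla/cache/</code>, and the first call needs network access):

```python
import tempfile

from transformers import AutoTokenizer

# Stand-in for a real path with more disk space, e.g. "/blabla/cache/".
cache = tempfile.mkdtemp()

# Files for this model are stored under `cache` instead of the default HF_HOME.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", cache_dir=cache)
```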
<p><strong>transformers <v4.0.0</strong></p>
<p>Use the variable <a href="https://huggingface.co/docs/transformers/installation?highlight=transformers_cache#cache-setup" rel="noreferrer">TRANSFORMERS_CACHE</a> instead of <a href="https://huggingface.co/docs/transformers/installation?highlight=transformers_cache#cache-setup" rel="noreferrer">HF_HOME</a>. You can also use it in v4.0.0 <= transformers <= v5.0.0 but starting from v4.36.0 you will see the following warning:</p>
<blockquote>
<p>FutureWarning: Using <code>TRANSFORMERS_CACHE</code> is deprecated and will be
removed in v5 of Transformers. Use <code>HF_HOME</code> instead.</p>
</blockquote>
| 450
|
transformers
|
Examples of Haskell Applicative Transformers
|
https://stackoverflow.com/questions/12587195/examples-of-haskell-applicative-transformers
|
<p>The wiki on www.haskell.org tells us the following about Applicative Transformers:</p>
<blockquote>
<p>So where are applicative transformers? The answer is, that we do not need special transformers for applicative functors since they can be combined in a generic way.
<a href="http://www.haskell.org/haskellwiki/Applicative_functor#Applicative_transfomers" rel="noreferrer">http://www.haskell.org/haskellwiki/Applicative_functor#Applicative_transfomers</a></p>
</blockquote>
<p>I tried the following in order to try to combine a bunch of applicative functors. But all I got was bunch of errors. Here is the code:</p>
<pre><code>import Control.Applicative
import System.IO
ex x y = (:) <$> x <*> y
test1 = ex "abc" ["pqr", "xyz"] -- only this works correctly as expected
test2 = ex "abc" [Just "pqr", Just "xyz"]
test3 = ex "abc" (Just "pqr")
test4 = ex (Just 'a') ["pqr", "xyz"]
test5 = ex (return ("abc"):: IO ()) [Just "pqr", Just "xyz"]
</code></pre>
<p>This produces a lot of type errors, which though I can partially understand, I couldn't resolve them at all.</p>
<p>The errors are given at the end.</p>
<p>So, how do I combine the Maybe Applicative and the List Applicative for example?</p>
<p>How do I combine the State Applicative and the List Applicative for example?
Are there any other examples, let's say, combining Maybe and List, Maybe and State and finally the dreadful of all the IO and State applicatives?</p>
<p>Thanks.</p>
<p>The GHCi error msgs follow.</p>
<pre><code>example.hs:6:19:
Couldn't match expected type `[Char]' with actual type `Maybe a0'
In the return type of a call of `Just'
In the expression: Just "pqr"
In the second argument of `ex', namely `[Just "pqr", Just "xyz"]'
example.hs:7:19:
Couldn't match expected type `[[Char]]' with actual type `Maybe a0'
In the return type of a call of `Just'
In the second argument of `ex', namely `(Just "pqr")'
In the expression: ex "abc" (Just "pqr")
example.hs:8:23:
Couldn't match expected type `Maybe' with actual type `[]'
In the second argument of `ex', namely `["pqr", "xyz"]'
In the expression: ex (Just 'a') ["pqr", "xyz"]
In an equation for `test4': test4 = ex (Just 'a') ["pqr", "xyz"]
example.hs:9:21:
Couldn't match expected type `()' with actual type `[Char]'
In the first argument of `return', namely `("abc")'
In the first argument of `ex', namely `(return ("abc") :: IO ())'
In the expression:
ex (return ("abc") :: IO ()) [Just "pqr", Just "xyz"]
Failed, modules loaded: none.
Prelude>
</code></pre>
|
<p>The wiki article says that <code>liftA2 (<*>)</code> can be used to compose applicative functors. It's easy to see how to use it from its type:</p>
<pre><code>o :: (Applicative f, Applicative f1) =>
f (f1 (a -> b)) -> f (f1 a) -> f (f1 b)
o = liftA2 (<*>)
</code></pre>
<p>So to if <code>f</code> is <code>Maybe</code> and <code>f1</code> is <code>[]</code> we get:</p>
<pre><code>> Just [(+1),(+6)] `o` Just [1, 6]
Just [2,7,7,12]
</code></pre>
<p>The other way around is:</p>
<pre><code>> [Just (+1),Just (+6)] `o` [Just 1, Just 6]
[Just 2,Just 7,Just 7,Just 12]
</code></pre>
<p>As @McCann said your ex function is equivalent to <code>liftA2 (:)</code>:</p>
<pre><code>test1 = liftA2 (:) "abc" ["pqr", "xyz"]
</code></pre>
<p>To use <code>(:)</code> with deeper applicative stack you need multiple applications of <code>liftA2</code>:</p>
<pre><code>*Main> (liftA2 . liftA2) (:) (Just "abc") (Just ["pqr", "xyz"])
Just ["apqr","axyz","bpqr","bxyz","cpqr","cxyz"]
</code></pre>
<p>However it only works when both operands are equally deep. So besides double <code>liftA2</code> you should use <code>pure</code> to fix the level:</p>
<pre><code>*Main> (liftA2 . liftA2) (:) (pure "abc") (Just ["pqr", "xyz"])
Just ["apqr","axyz","bpqr","bxyz","cpqr","cxyz"]
</code></pre>
| 451
|
transformers
|
Transformers: AutoModel from pretrained istantiation error
|
https://stackoverflow.com/questions/78566180/transformers-automodel-from-pretrained-istantiation-error
|
<p>I'm instantiating a CodeBert Model using AutoModel.fromPretrained.</p>
<pre><code>File "/public.hpc/codeBertConcat/./codeBertConcatEvaluation.py", line 278, in <module>
model = CodeBERTConcatenatedClass(num_classes=NUM_CLASSES).to(DEVICE)
File "/public.hpc/codeBertConcat/./codeBertConcatEvaluation.py", line 137, in __init__
self.codebert = AutoModel.from_pretrained('microsoft/codebert-base', cache_dir="./cache2")
File "/public.hpc/codeBertConcat/venv/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
return model_class.from_pretrained(
File "/public.hpc/codeBertConcat/venv/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2903, in from_pretrained
) = cls._load_pretrained_model(
File "/public.hpc/codeBertConcat/venv/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3061, in _load_pretrained_model
id_tensor = id_tensor_storage(tensor) if tensor.device != torch.device("meta") else id(tensor)
RuntimeError: Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, msnpu, xla, vulkan device type at start of device string: meta
</code></pre>
<p>Someone has a clue?
I'm using Transformers version 4.31.0 and PyTorch 1.8.1+cu111</p>
|
<p>First check whether your machine has a GPU and whether PyTorch is actually using it.</p>
<p>If that does not resolve the problem, upgrade PyTorch to the newest version (<code>pip install --upgrade torch</code>).</p>
<p>Your PyTorch version is 1.8.1; the current latest version is 2.3.0.
<a href="https://pytorch.org/get-started/locally/" rel="nofollow noreferrer">PyTorch homepage</a></p>
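A quick way to confirm which version is active in the environment (this assumes PyTorch is importable there):

```python
import torch

# PyTorch 1.8.1 does not recognize the "meta" device string that
# transformers 4.31 relies on when loading weights; newer releases do.
print(torch.__version__)
```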
| 452
|
transformers
|
How to use transformers from nodejs script
|
https://stackoverflow.com/questions/73549859/how-to-use-transformers-from-nodejs-script
|
<p>I would like to call transformers in my Python code from Node.js, but it fails without showing any error. Just importing transformers makes the script stop working; if I remove that import, the code runs smoothly.</p>
<p>NodeJs code</p>
<pre><code>const spawn = require("child_process").spawn;
const pythonProcess = spawn('python',["./SentAnalysis.py", JSON.stringify(reviewsArray)]);
pythonProcess.stdout.on('data', (data) => {
console.log(data.toString())
});
</code></pre>
<p>Python code</p>
<pre><code>from transformers import pipeline
import json
import sys
reviewsStr = sys.argv[1]
reviewsArray = reviewsStr.strip('][').split(', ')
sentiment_pipeline = pipeline("sentiment-analysis")
results = sentiment_pipeline(reviewsArray)
print(''.join(results))
sys.stdout.flush()
</code></pre>
|
<p>On this system the <code>python</code> command did not get the job done, but <code>python3</code> did: even though the environment behind <code>python</code> also had the transformers package, the script did not run in this instance. Replacing <code>"python"</code> with <code>"python3"</code> in the <code>spawn</code> call in the Node.js code worked like a charm.</p>
| 453
|
transformers
|
Use huggingface transformers without IPyWidgets
|
https://stackoverflow.com/questions/66644432/use-huggingface-transformers-without-ipywidgets
|
<p>I am trying to use the huggingface transformers library in a hosted Jupyter notebook platform called Deepnote. I want to download a model through the pipeline class but unfortunately deepnote does not support IPyWidgets. Is there a way to disable IPywidgets when using transformers? Specifically the below command.</p>
<pre><code>
classifier = pipeline("zero-shot-classification")
</code></pre>
<p>And the error I receive.</p>
<pre><code>ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
</code></pre>
<p>Note: Installing IPyWidgets is not an option</p>
|
<p>You have to disable transformers logging. Even though it is possible to use <a href="https://huggingface.co/transformers/main_classes/logging.html#transformers.logging.set_verbosity" rel="noreferrer">transformers.logging.set_verbosity</a> to change the log level, it's not possible to set it to <code>logging.NOTSET</code> which <a href="https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L1253" rel="noreferrer">is required</a> to skip using <code>IProgress</code> and <code>tqdm</code>. So we need to hack it like this:</p>
<pre class="lang-py prettyprint-override"><code>import transformers
import logging
transformers.logging.get_verbosity = lambda: logging.NOTSET
# transformers.logging.get_verbosity()
</code></pre>
<p>After that you should be able to use:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import pipeline
pipeline('sentiment-analysis')('we love you')
</code></pre>
<p>Check out <a href="https://deepnote.com/project/Huggingface-in-Deepnote-QW4KEXEqTVSsWByFuqll9A" rel="noreferrer">my Deepnote project</a> for details ;)</p>
| 454
|
transformers
|
Python transformers can't load
|
https://stackoverflow.com/questions/79130386/python-transformers-cant-load
|
<p>I have this code:</p>
<pre><code>from flask import Flask, request, jsonify
from flask_cors import CORS
from transformers import pipeline
app = Flask(__name__)
CORS(app)
model = pipeline("text-generation", model="MBZUAI-Paris/Atlas-Chat-9B")
@app.route('/generate', methods=['POST'])
def generate():
data = request.json
user_input = data.get("input", "")
response = model(user_input, max_length=1000)
return jsonify({"output": response[0]['generated_text']})
if __name__ == '__main__':
app.run(port=5000)
</code></pre>
<p>But program stops without output.</p>
<p>I have latest Transformers, Accelerate, SafeTensors and PyTorch. And I have Python 3.10.0.</p>
<p>My computer is Lenovo Yoga C-930 13IKB.</p>
<p>I tried this along with many other models, but only the smaller ones seem to work, like the Instruct models. Whenever I try a slightly larger model, it stops without producing any output.</p>
|
<p>It seems like your laptop does not have a dedicated GPU and has 16 GB of RAM, based on the specs for that model name. <em>I might be wrong here, but I am going to assume this configuration and write the answer.</em></p>
<p>The model you are trying to use is <strong>~19 GB</strong> which does not fit on your RAM. Using quantized models might help in this case but since <strong>there is no GPU that supports CUDA on your laptop</strong>, <code>bitsandbytes</code> quantization is not supported (CPU and other backends are currently in experimental stage).</p>
<p><strong>In case you have one use you can use this code:</strong></p>
<pre class="lang-py prettyprint-override"><code># pip install bitsandbytes accelerate
from flask import Flask, request, jsonify
from flask_cors import CORS
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
app = Flask(__name__)
CORS(app)
model_id = "MBZUAI-Paris/Atlas-Chat-9B"
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=quantization_config,
low_cpu_mem_usage=True,
)
@app.route('/generate', methods=['POST'])
def generate():
data = request.json
user_input = data.get("input", "")
messages = [
{"role": "user", "content": user_input},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True, add_generation_prompt=True)
outputs = model.generate(**input_ids, max_new_tokens=1000, temperature=0.0)
response = tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1]
return jsonify({"output": response})
if __name__ == '__main__':
app.run(port=5000)
</code></pre>
<p><strong>But if you don't there is still a way you can run this on your laptop's CPU using <code>llama.cpp</code>.</strong></p>
<p>First, install the llama.cpp Python bindings with:</p>
<pre class="lang-bash prettyprint-override"><code>pip install llama-cpp-python
</code></pre>
<p>Then download the quantized model by typing this in your terminal:</p>
<pre class="lang-bash prettyprint-override"><code>huggingface-cli download mradermacher/Atlas-Chat-9B-GGUF Atlas-Chat-9B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
</code></pre>
<p><strong>Note:</strong> Change the <code>--local-dir</code> parameter as needed.</p>
<p>Then use this code:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, request, jsonify
from flask_cors import CORS
from llama_cpp import Llama
app = Flask(__name__)
CORS(app)
model_path = "./Atlas-Chat-9B.Q4_K_M.gguf" # Download the model file first
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path=model_path,
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=2, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=0 # The number of layers to offload to GPU, if you have GPU acceleration available
)
@app.route('/generate', methods=['POST'])
def generate():
data = request.json
user_input = data.get("input", "")
# Simple inference example
output = llm(f"<start_of_turn>user\n{user_input}<end_of_turn>\n<start_of_turn>model\n", # Prompt
max_tokens=1000, # Generate up to 1000 tokens
stop=["<end_of_turn>"], # Example stop token
echo=False # Whether to echo the prompt
)
return jsonify({"output": output["choices"][0]["text"]})
if __name__ == '__main__':
app.run(port=5000)
</code></pre>
<p>The input pattern is modified according to the format used in <code>Atlas-Chat</code> model.</p>
<p>I have used a 4-bit gguf quantized model here which effectively brings the model size down to <strong>5.76 GB</strong>. <em><strong>This model will load on your PC comfortably while sacrificing the quality of the generations to some extent.</strong></em></p>
| 455
|
transformers
|
from transformers import AutoTokenizer, AutoModel
|
https://stackoverflow.com/questions/75991822/from-transformers-import-autotokenizer-automodel
|
<p>I have these updated package versions:
tqdm-4.65.0, transformers-4.27.4.</p>
<p>I am running this code:</p>
<pre><code>from transformers import AutoTokenizer, AutoModel
</code></pre>
<p>I am obtaining this error:</p>
<pre><code>ImportError: cannot import name 'ObjectWrapper' from 'tqdm.utils' (/Users/anitasancho/opt/anaconda3/lib/python3.7/site-packages/tqdm/utils.py)
</code></pre>
|
<p>I solved it from the terminal by creating a new virtual environment using <code>conda</code> with the following command:</p>
<pre><code>conda create --name mi_entorno python=3.7
</code></pre>
<p>Then, activate the virtual environment with the following command:</p>
<pre><code>conda activate mi_entorno
</code></pre>
<p>Then import again, and it worked!</p>
| 456
|
transformers
|
Installing `transformers` on HPC Cluster
|
https://stackoverflow.com/questions/62995253/installing-transformers-on-hpc-cluster
|
<p>I'm trying to install the transformers library on HPC. I do:</p>
<pre><code>git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e . --user
</code></pre>
<p>All three of these work as expected, with the last output being:</p>
<pre><code>Successfully installed dataclasses-0.7 numpy-1.19.0 tokenizers-0.8.1rc2 transformers
</code></pre>
<p>Then, I try <code>python -c "import transformers"</code> but I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/maths/btech/mt1170727/transformers/src/transformers/__init__.py", line 23, in <module>
from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig
File "/home/maths/btech/mt1170727/transformers/src/transformers/configuration_albert.py", line 18, in <module>
from .configuration_utils import PretrainedConfig
File "/home/maths/btech/mt1170727/transformers/src/transformers/configuration_utils.py", line 25, in <module>
from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url
File "/home/maths/btech/mt1170727/transformers/src/transformers/file_utils.py", line 37, in <module>
import torch
File "/home/soft/PYTHON/3.6.0/ucs4/gnu/447/lib/python3.6/site-packages/torch/__init__.py", line 125, in <module>
_load_global_deps()
File "/home/soft/PYTHON/3.6.0/ucs4/gnu/447/lib/python3.6/site-packages/torch/__init__.py", line 83, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/home/soft/PYTHON/3.6.0/ucs4/gnu/447/lib/python3.6/ctypes/__init__.py", line 344, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libnvToolsExt.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>I have done as was written in the <a href="https://huggingface.co/transformers/installation.html#installing-from-source" rel="nofollow noreferrer">documentation</a>, and can't see why I'm facing this error. Any help would be great. Thanks...</p>
| 457
|
|
transformers
|
Consistent error message when using transformers
|
https://stackoverflow.com/questions/78954987/consistent-error-message-when-using-transformers
|
<p>I have been getting this error message for days on different projects and still can't understand where it's from. It seems to happen only when I use TensorFlow and transformers together. I can't really find anything on this specific error, so help would be greatly appreciated. I'm pretty new to this stuff.</p>
<pre><code>from transformers import pipeline
I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-09-03 23:40:00.379521: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Traceback (most recent call last):
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\utils\import_utils.py", line 1603, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\importlib\__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\pipelines\__init__.py", line 26, in <module>
from ..image_processing_utils import BaseImageProcessor
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\image_processing_utils.py", line 21, in <module>
from .image_transforms import center_crop, normalize, rescale
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\image_transforms.py", line 49, in <module>
import tensorflow as tf
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\__init__.py", line 53, in <module>
from tensorflow._api.v2 import compat
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\_api\v2\compat\__init__.py", line 8, in <module>
from tensorflow._api.v2.compat import v1
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\_api\v2\compat\v1\__init__.py", line 30, in <module>
from tensorflow._api.v2.compat.v1 import compat
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\_api\v2\compat\v1\compat\__init__.py", line 8, in <module>
from tensorflow._api.v2.compat.v1.compat import v1
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\_api\v2\compat\v1\compat\v1\__init__.py", line 47, in <module>
from tensorflow._api.v2.compat.v1 import lite
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\_api\v2\compat\v1\lite\__init__.py", line 9, in <module>
from tensorflow._api.v2.compat.v1.lite import experimental
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\_api\v2\compat\v1\lite\experimental\__init__.py", line 8, in <module>
from tensorflow._api.v2.compat.v1.lite.experimental import authoring
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\_api\v2\compat\v1\lite\experimental\authoring\__init__.py", line 8, in <module>
from tensorflow.lite.python.authoring.authoring import compatible # line: 265
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\lite\python\authoring\authoring.py", line 43, in <module>
from tensorflow.lite.python import convert
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\lite\python\convert.py", line 151, in <module>
_deprecated_conversion_binary = _resource_loader.get_path_to_datafile(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\python\platform\resource_loader.py", line 118, in get_path_to_datafile
new_fpath = r.Rlocation(
^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'Rlocation'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\DMNSIONS\General_AI_Projects\AI_Agents\MegaAgent_Architecture\test.py", line 1, in <module>
from transformers import pipeline
File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\utils\import_utils.py", line 1593, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Preston Creed\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\utils\import_utils.py", line 1605, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
'NoneType' object has no attribute 'Rlocation'
</code></pre>
<p>I am trying to use TensorFlow and transformers pipelines in my multi-agent-system projects. I am very new to these libraries.</p>
<p>Dependencies:</p>
<pre><code>astunparse==1.6.3
certifi==2024.8.30
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
docx==0.2.4
filelock==3.15.4
flatbuffers==24.3.25
fsspec==2024.9.0
gast==0.6.0
google-pasta==0.2.0
grpcio==1.66.1
h5py==3.11.0
huggingface-hub==0.24.6
idna==3.8
joblib==1.4.2
keras==3.5.0
libclang==18.1.1
lxml==5.3.0
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
ml-dtypes==0.4.0
namex==0.0.8
nltk==3.9.1
numpy==1.26.4
opt-einsum==3.3.0
optree==0.12.1
packaging==24.1
pillow==10.4.0
protobuf==4.25.4
Pygments==2.18.0
PyYAML==6.0.2
regex==2024.7.24
requests==2.32.3
rich==13.8.0
safetensors==0.4.5
setuptools==74.1.2
six==1.16.0
tensorboard==2.17.1
tensorboard-data-server==0.7.2
tensorflow==2.17.0
tensorflow-intel==2.17.0
termcolor==2.4.0
tokenizers==0.19.1
tqdm==4.66.5
transformers==4.44.2
typing_extensions==4.12.2
urllib3==2.2.2
Werkzeug==3.0.4
wheel==0.44.0
wrapt==1.16.0
</code></pre>
| 458
|
|
transformers
|
Unable to import Hugging Face transformers
|
https://stackoverflow.com/questions/65383059/unable-to-import-hugging-face-transformers
|
<p>I have been using transformers fine up until today. However, when I imported the package today, I received this error message:</p>
<pre><code>In Transformers v4.0.0, the default path to cache downloaded models changed from '~/.cache/torch/transformers' to '~/.cache/huggingface/transformers'. Since you don't seem to have overridden and '~/.cache/torch/transformers' is a directory that exists, we're moving it to '~/.cache/huggingface/transformers' to avoid redownloading models you have already in the cache. You should only see this message once.
Error: Destination path '/home/user/.cache/huggingface/transformers/transformers' already exists
</code></pre>
<p>I have tried to install and uninstall the package but still unable to make it work.</p>
<p>Any suggestions to fix this would be really appreciated.</p>
|
<p>As it appears to be a cache file, simply move the /huggingface/ dir to /huggingface.bak</p>
<p>Then you should be able to re-run the script.</p>
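<p>A minimal sketch of that workaround in Python (it assumes the default cache path from the error message; adjust it if you have pointed <code>HF_HOME</code> or <code>TRANSFORMERS_CACHE</code> somewhere else):</p>

```python
import os
import shutil

# Default transformers cache location taken from the error message; adjust
# if you have pointed HF_HOME or TRANSFORMERS_CACHE somewhere else.
cache = os.path.expanduser("~/.cache/huggingface")
backup = cache + ".bak"

if os.path.isdir(cache):
    # Move the whole directory aside; transformers recreates it (and redoes
    # the one-time migration) on the next import.
    shutil.move(cache, backup)
    print(f"moved {cache} -> {backup}")
else:
    print(f"nothing to move: {cache} does not exist")
```

<p>If a <code>.bak</code> copy from an earlier attempt already exists, delete or rename it first; once the script runs cleanly you can remove the backup (or restore model files from it to avoid re-downloading).</p>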
| 459
|
transformers
|
Why are monad transformers different to stacking monads?
|
https://stackoverflow.com/questions/38710154/why-are-monad-transformers-different-to-stacking-monads
|
<p>In many cases, it isn't clear to me what is to be gained by combining two monads with a transformer rather than using two separate monads. Obviously, using two separate monads is a hassle and can involve do notation inside do notation, but are there cases where it just isn't expressive enough?</p>
<p>One case seems to be StateT on List: combining monads doesn't get you the right type, and if you do obtain the right type via a stack of monads like Bar (where Bar a = Reader r (List (Writer w (Identity a)))), it doesn't do the right thing.</p>
<p>But I'd like a more general and technical understanding of exactly what monad transformers are bringing to the table, when they are and aren't necessary, and why.</p>
<p>To make this question a little more focused:</p>
<ol>
<li>What is an actual example of a monad with no corresponding transformer (this would help illustrate what transformers can do that just stacking monads can't).</li>
<li>Are StateT and ContT the only transformers that give a type not equivalent to the composition of them with m, for an underlying monad m (regardless of which order they're composed)?</li>
</ol>
<p>(I'm not interested in particular implementation details as regards different choices of libraries, but rather the general (and probably Haskell independent) question of what monad transformers/morphisms are adding as an alternative to combining effects by stacking a bunch of monadic type constructors.)</p>
<p>(To give a little context, I'm a linguist who's doing a project to enrich Montague grammar - simply typed lambda calculus for composing word meanings into sentences - with a monad transformer stack. It would be really helpful to understand whether transformers are actually doing anything useful for me.)</p>
<p>Thanks,</p>
<p>Reuben</p>
|
<p>To answer you question about the difference between <code>Writer w (Maybe a)</code> vs <code>MaybeT (Writer w) a</code>, let's start by taking a look at the definitions:</p>
<pre><code>newtype WriterT w m a = WriterT { runWriterT :: m (a, w) }
type Writer w = WriterT w Identity
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }
</code></pre>
<p>Using <code>~~</code> to mean "structurally similar to" we have:</p>
<pre><code>Writer w (Maybe a) == WriterT w Identity (Maybe a)
~~ Identity (Maybe a, w)
~~ (Maybe a, w)
MaybeT (Writer w) a ~~ (Writer w) (Maybe a)
== Writer w (Maybe a)
... same derivation as above ...
~~ (Maybe a, w)
</code></pre>
<p>So in a sense you are correct -- structurally both <code>Writer w (Maybe a)</code> and <code>MaybeT (Writer w) a</code>
are the same - both are essentially just a pair of a Maybe value and a <code>w</code>.</p>
<p>The difference is how we treat them as monadic values.
The <code>return</code> and <code>>>=</code> class functions do very different things depending
on which monad they are part of.</p>
<p>Let's consider the pair <code>(Just 3, "" :: String)</code>. Using the association
we have derived above, here's how that pair would be expressed in both monads: </p>
<pre><code>three_W :: Writer String (Maybe Int)
three_W = return (Just 3)
three_M :: MaybeT (Writer String) Int
three_M = return 3
</code></pre>
<p>And here is how we would construct the pair <code>(Nothing, "")</code>:</p>
<pre><code>nutin_W :: Writer String (Maybe Int)
nutin_W = return Nothing
nutin_M :: MaybeT (Writer String) Int
nutin_M = MaybeT (return Nothing) -- could also use mzero
</code></pre>
<p>Now consider this function on pairs:</p>
<pre><code>add1 :: (Maybe Int, String) -> (Maybe Int, String)
add1 (Nothing, w) = (Nothing, w)
add1 (Just x, w) = (Just (x+1), w)
</code></pre>
<p>and let's see how we would implement it in the two different monads:</p>
<pre><code>add1_W :: Writer String (Maybe Int) -> Writer String (Maybe Int)
add1_W e = do x <- e
case x of
Nothing -> return Nothing
Just y -> return (Just (y+1))
add1_M :: MaybeT (Writer String) Int -> MaybeT (Writer String) Int
add1_M e = do x <- e; return (x+1)
-- also could use: fmap (+1) e
</code></pre>
<p>In general you'll see that the code in the MaybeT monad is more concise.</p>
<p>Moreover, semantically the two monads are very different...</p>
<p><code>MaybeT (Writer w) a</code> is a Writer-action which can fail, and the failure is
automatically handled for you. <code>Writer w (Maybe a)</code> is just a Writer
action which returns a Maybe. Nothing special happens if that Maybe value
turns out to be Nothing. This is exemplified in the <code>add1_W</code> function where
we had to perform a case analysis on <code>x</code>.</p>
<p>Another reason to prefer the <code>MaybeT</code> approach is that we can write code
which is generic over any monad stack. For instance, the function:</p>
<pre><code>square x = do tell ("computing the square of " ++ show x)
return (x*x)
</code></pre>
<p>can be used unchanged in any monad stack which has a Writer String, e.g.:</p>
<pre><code>WriterT String IO
ReaderT (WriterT String Maybe)
MaybeT (Writer String)
StateT (WriterT String (ReaderT Char IO))
...
</code></pre>
<p>But the return value of <code>square</code> does not type check against <code>Writer String (Maybe Int)</code> because <code>square</code> does not return a <code>Maybe</code>.</p>
<p>When you code in <code>Writer String (Maybe Int)</code>, your code explicitly reveals
the structure of the monad, making it less generic. This definition of <code>add1_W</code>:</p>
<pre><code>add1_W e = do x <- e
return $ do
y <- x
return $ y + 1
</code></pre>
<p>only works in a two-layer monad stack whereas a function like <code>square</code>
works in a much more general setting.</p>
| 460
|
transformers
|
Why use multi-headed attention in Transformers?
|
https://stackoverflow.com/questions/66244123/why-use-multi-headed-attention-in-transformers
|
<p>I am trying to understand why transformers use multiple attention heads. I found the following <a href="https://towardsdatascience.com/simple-explanation-of-transformers-in-nlp-da1adfc5d64f" rel="noreferrer">quote</a>:</p>
<blockquote>
<p>Instead of using a single attention function where the attention can
be dominated by the actual word itself, transformers use multiple
attention heads.</p>
</blockquote>
<p>What is meant by "the attention being dominated by the word itself" and how does the use of multiple heads address that?</p>
|
<p>Multi-headed attention was introduced due to the observation that different words relate to each other in different ways. For a given word, the other words in the sentence could moderate or negate its meaning, but they could also express relations like inheritance (is a kind of), possession (belongs to), etc.</p>
<p>I found <a href="https://www.youtube.com/watch?v=KmAISyVvE1Y&list=WL&index=44" rel="noreferrer">this online</a> lecture to be very helpful, which came up with this example:</p>
<p>"The restaurant was not too <strong>terrible</strong>."</p>
<p>Note that the meaning of the word 'terrible' is distorted by the two words 'too' and 'not' (too: moderation, not: inversion) and 'terrible' also relates to 'restaurant', as it expresses a property.</p>
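<p>To make this concrete, here is a minimal, self-contained numerical sketch (toy embeddings and random projections, not a real trained model; all sizes and values are made up for illustration). Because each head gets its own Q/K/V projections, each head computes a different attention pattern over the same tokens:</p>

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention for a single head.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights

# A toy 4-token "sentence" with made-up 4-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))

# Two heads, each with its own (here random, in reality learned) projections.
# Since the projections differ, the heads produce different attention
# patterns over the same tokens, so no single pattern (e.g. each word
# attending mostly to itself) has to dominate.
for h in range(2):
    Wq, Wk, Wv = (rng.standard_normal((4, 2)) for _ in range(3))
    out, w = attention(x @ Wq, x @ Wk, x @ Wv)
    print(f"head {h} attention weights:\n{w.round(2)}")
```

<p>Printing the two weight matrices shows two different token-to-token attention distributions, which is exactly the extra expressiveness multiple heads buy.</p>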
| 461
|
transformers
|
Equivalent to tokenizer() in Transformers 2.5.0?
|
https://stackoverflow.com/questions/73127139/equivalent-to-tokenizer-in-transformers-2-5-0
|
<p>I am trying to convert the following code to work with Transformers 2.5.0. As written, it works in version 4.18.0, but not 2.5.0.</p>
<pre><code># Converting pretrained BERT classification model to regression model
# i.e. extracting base model and swapping out heads
from transformers import BertTokenizer, BertModel, BertConfig, BertForMaskedLM, BertForSequenceClassification, AutoConfig, AutoModelForTokenClassification
import torch
import numpy as np
old_model = BertForSequenceClassification.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
model.bert = old_model.bert
# Ensure that model parameters are equivalent except for classifier head layer
for param_name in model.state_dict():
if 'classifier' not in param_name:
sub_param, full_param = model.state_dict()[param_name], old_model.state_dict()[param_name] # type: torch.Tensor, torch.Tensor
assert (sub_param.cpu().numpy() == full_param.cpu().numpy()).all(), param_name
tokenizer = BertTokenizer.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
output_value = np.array(logits)[0][0]
print(output_value)
</code></pre>
<p>tokenizer is not callable with transformers 2.5.0, resulting in the following:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-1-d83f0d613f4b> in <module>
19
20
---> 21 inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
22
23 with torch.no_grad():
TypeError: 'BertTokenizer' object is not callable
</code></pre>
<p>However, attempting to replace tokenizer() with tokenizer.tokenize() results in the following:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-2-1d431131eb87> in <module>
21
22 with torch.no_grad():
---> 23 logits = model(**inputs).logits
24
25 output_value = np.array(logits)[0][0]
TypeError: BertForSequenceClassification object argument after ** must be a mapping, not list
</code></pre>
<p>Any help would be greatly appreciated.</p>
<hr />
<h2>Solution</h2>
<p>Using tokenizer.encode_plus() as suggested by @cronoik:</p>
<pre><code>tokenized = tokenizer.encode_plus("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
logits = model(**tokenized)
output_value = np.array(logits)[0]
print(output_value)
</code></pre>
|
<p>Sadly their documentation for the old versions is broken, but you can use <code>encode_plus</code> as shown in the following (the oldest available documentation of <code>encode_plus</code> is from <a href="https://huggingface.co/transformers/v2.10.0/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode_plus" rel="nofollow noreferrer">2.10.0</a>):</p>
<pre class="lang-py prettyprint-override"><code>import torch
from transformers import BertTokenizer
t = BertTokenizer.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
tokenized = t.encode_plus("Hello, my dog is cute", return_tensors='pt')
print(tokenized)
</code></pre>
<p>Output:</p>
<pre><code>{'input_ids': tensor([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1]])}
</code></pre>
| 462
|
transformers
|
Unable to pip install -U sentence-transformers
|
https://stackoverflow.com/questions/61994001/unable-to-pip-install-u-sentence-transformers
|
<p>I am unable to do: <code>pip install -U sentence-transformers</code>. I get this message on Anaconda Prompt:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement torch>=1.0.1 (from sentence-transformers) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch>=1.0.1 (from sentence-transformers)
</code></pre>
<p>Can someone help?</p>
|
<p>I conda-installed PyTorch and then installed sentence-transformers by doing these steps: </p>
<ol>
<li><p>conda install pytorch torchvision cudatoolkit=10.0 -c pytorch</p></li>
<li><p>pip install -U sentence-transformers</p></li>
</ol>
<p>This worked.
Thanks</p>
| 463
|
transformers
|
Understanding monad transformers in Scala
|
https://stackoverflow.com/questions/44868593/understanding-monad-transformers-in-scala
|
<p>I'm trying to understand how to use Monad Transformers. I read the <a href="https://en.wikibooks.org/wiki/Haskell/Monad_transformers" rel="noreferrer">wiki article</a> about it and still have some questions.</p>
<p>We have an <code>IO</code> monad that needs to read input from the user. But the input is not always provided. So it's an <code>Option</code>. To simplify, we can define a monad transformer <code>OptionT</code> which "encapsulates" the actions of the <code>IO</code> monad. </p>
<p>In my particular case I have two monads of types <code>Future[Option[String]]</code> and <code>Future[List[Int]]</code>. It means to simplify it I need two different transformers <code>ListT[T]</code> and <code>OptionT[T]</code> for each monad type respectively in which I embed <code>Future</code> behavior... Right?</p>
|
<p>Right, the way monad transformers work is to help you work with an "inner" monad that is being "transformed" by an "outer" monad.</p>
<p>So <code>F[Option[A]]</code> can be turned into an <code>OptionT[F, A]</code> (where <code>F</code> is any monad), which is much easier to work with.</p>
<p>About <code>ListT</code>, it may not be so easy. For instance <code>cats</code> doesn't provide one, see <a href="https://github.com/typelevel/cats/blob/master/docs/src/main/tut/faq.md#listt" rel="noreferrer">their FAQ</a> for more info. As they suggest, you can use <code>Nested</code> as a replacement for the cases in which you don't need a <code>flatMap</code>, for example:</p>
<pre><code>import cats._
import cats.implicits._
import cats.data.Nested
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
val futList = Future(List(1, 2, 3))
Nested(futList).map(_ + 1).value // Future(List(2, 3, 4))
</code></pre>
<p>If you want another take on monad transformers, here's a short article I authored: <a href="https://blog.buildo.io/monad-transformers-for-the-working-programmer-aa7e981190e7" rel="noreferrer">https://blog.buildo.io/monad-transformers-for-the-working-programmer-aa7e981190e7</a></p>
| 464
|
transformers
|
Laravel 5.1, Dingo - Nested Transformers
|
https://stackoverflow.com/questions/31507673/laravel-5-1-dingo-nested-transformers
|
<p>Is there an elegant way to nest transformers for relationship use? I'm looking to build a REST interface that allows for collections to conditionally include relationship models. So far I've been marginally successful, but it seems to break down a bit when it comes to the transformers (I'll admit I'm a bit new to Laravel 5.1 and Dingo). I'm looking to keep this as DRY as possible, so that if relationships or attributes change in the future it's pretty easy to change.</p>
<p>For example, a simple scenario where a user may receive one or more messages (user hasMany received messages) I can do the following in the UserTransformer:</p>
<pre><code><?php
namespace App\Transformers;
use App\Models\User;
use League\Fractal;
class UserTransformer extends Fractal\TransformerAbstract
{
public function transform(User $user)
{
// Transform the basic model
$returnUser = [
'id' => (int) $user->id,
'email' => $user->email,
'role' => $user->role,
'status' => $user->status,
'links' => [
[
'rel' => 'self',
'uri' => '/users/'.$user->id
]
]
];
// Transform relationships, but only if they exist and are requested
if (isset($user->receivedMessages))
{
$returnUser['received_messages'] = [];
foreach ($user->receivedMessages as $msg)
{
$returnUser['received_messages'][] = [
'id' => $msg->id,
'read' => $msg->read,
'content' => $msg->content
];
}
}
return $returnUser;
}
}
</code></pre>
<p>In this case I'd like to nest / apply a MesagesTransformer to the related received messages for output formatting so that all REST output remains consistent across all relationships. Is this possible? Thanks!</p>
|
<p>I was able to find the answer to my question here: <a href="http://fractal.thephpleague.com/transformers/" rel="nofollow">http://fractal.thephpleague.com/transformers/</a>.</p>
| 465
|
transformers
|
Issue Installing transformers==4.18.0.dev0 from environment.yml File
|
https://stackoverflow.com/questions/77897628/issue-installing-transformers-4-18-0-dev0-from-environment-yml-file
|
<p>I created a virtual environment from the environment.yml file, and all packages were installed successfully except for the 'transformers' package. The requirement is to install 'transformers==4.18.0.dev0', but when I run the command:</p>
<blockquote>
<p><code>$pip install transformers==4.18.0.dev0</code></p>
</blockquote>
<blockquote>
<p><code>ERROR: Could not find a version that satisfies the requirement transformers==4.18.0.dev0 (from versions: none) ERROR: No matching distribution found for transformers==4.18.0.dev0 </code></p>
</blockquote>
<blockquote>
<p><code>$pip install transformers==4.18.0</code></p>
</blockquote>
<blockquote>
<p><code>ERROR: Package 'urllib3' requires a different Python: 3.7.7 not in '>=3.8' </code></p>
</blockquote>
<p>I'm hesitant to install transformers==4.18.0 because I'm worried it might not be compatible with other packages when using Python 3.8 or above. Is there a way to install the version that the previous developer used, transformers==4.18.0.dev0?</p>
|
<p>You appear to be using the wrong command. To install transformers 4.18.0, you should run <code>pip install transformers==4.18.0</code> and this version is compatible with python 3.6-3.9. See
<a href="https://pypi.org/project/transformers/4.18.0/" rel="nofollow noreferrer">https://pypi.org/project/transformers/4.18.0/</a></p>
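<p>As a side note (my assumption, not part of the answer above): <code>.dev0</code> versions are development snapshots that are never published to PyPI, so if you really need one you would have to install from the GitHub repository at the matching commit (pip supports this via <code>pip install git+https://github.com/huggingface/transformers@<ref></code>). Either way, you can verify what actually got installed with a quick stdlib check:</p>

```python
# Check which transformers version (if any) is installed in this environment.
from importlib.metadata import PackageNotFoundError, version

try:
    print("installed transformers version:", version("transformers"))
except PackageNotFoundError:
    print("transformers is not installed in this environment")
```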
| 466
|
transformers
|
Proper use of transformers vs interceptors
|
https://stackoverflow.com/questions/26387868/proper-use-of-transformers-vs-interceptors
|
<p>When POSTing to an endpoint in a service layer to update a user's profile, I need to strip certain values from the request payload (the profile with the desired modifications from the client) and re-attach them in the response payload (the updated profile from the server). I am currently performing behavior using Angular's <a href="https://docs.angularjs.org/api/ng/service/$http#transforming-requests-and-responses" rel="nofollow">request and response transformers</a>, like this:</p>
<pre><code>myService.updateProfile = function (profile) {
return $http({
method: 'POST',
withCredentials: true,
url: root + 'users/profile',
data: profile,
transformRequest : requestTransformer,
transformResponse : responseTransformer
});
};
// the map used during transformation below
var myMap = {
0: 'foo',
1: 'bar',
2: 'etc'
};
// prependTransform() and appendTransform() are similar to the example provided in Angular transformer docs here:
// https://docs.angularjs.org/api/ng/service/$http#overriding-the-default-transformations-per-request
var requestTransformer = httpTransformer.prependTransform($http.defaults.transformRequest, function(profileRequest) {
profileRequest.myKey = myMap.indexOf(profileRequest.myValue);
delete profileRequest.myValue;
return profileRequest;
});
var responseTransformer = httpTransformer.appendTransform($http.defaults.transformResponse, function(profileResponse) {
    profileResponse.myValue = myMap[profileResponse.myKey];
    delete profileResponse.myKey;
return profileResponse;
});
</code></pre>
<p>I prepend a transformer to the default request transformers and append a transformer to the default response transformers. My question is, is there a better way to do this? Perhaps using <a href="https://docs.angularjs.org/api/ng/service/$http#interceptors" rel="nofollow">interceptors, as documented here,</a> instead? If so, how?</p>
|
<p>I think your solution is fine but if you want an alternative, you can intercept specific requests like so. HTTP interceptors are mostly useful for handling global HTTP requests/responses (auth, error handling, etc.).</p>
<p>In any case, the "response" payload should be taken care of on the API/server side.</p>
<pre><code>$provide.factory('userProfileInterceptor', function() {
return {
request: function(config) {
if (config.url.indexOf('/users/profile') >=0){
if (config.params.myValue) delete config.params.myValue;
}
return config;
},
response: function(response) {
if (response.config.url.indexOf('/users/profile') >=0){
delete response.data.myKey;
}
return response;
}
};
});
$httpProvider.interceptors.push('userProfileInterceptor');
</code></pre>
| 467
|
transformers
|
Transformation under Transformers
|
https://stackoverflow.com/questions/16552316/transformation-under-transformers
|
<p>I'm having a bit of difficulty with monad transformers at the moment. I'm defining a few different non-deterministic relations which make use of transformers. Unfortunately, I'm having trouble understanding how to translate cleanly from one effectful model to another.</p>
<p>Suppose these relations are "foo" and "bar". Suppose that "foo" relates As and Bs to Cs; suppose "bar" relates Bs and Cs to Ds. We will define "bar" in terms of "foo". To make matters more interesting, the computation of these relations will fail in different ways. (Since the bar relation depends on the foo relation, its failure cases are a superset.) I therefore give the following type definitions:</p>
<pre><code>data FooFailure = FooFailure String
data BarFailure = BarSpecificFailure | BarFooFailure FooFailure
type FooM = ListT (EitherT FooFailure (Reader Context))
type BarM = ListT (EitherT BarFailure (Reader Context))
</code></pre>
<p>I would then expect to be able to write the relations with the following function signatures:</p>
<pre><code>foo :: A -> B -> FooM C
bar :: B -> C -> BarM D
</code></pre>
<p>My problem is that, when writing the definition for "bar", I need to be able to receive errors from the "foo" relation and properly represent them in "bar" space. So I'd be fine with a function of the form</p>
<pre><code>convert :: (e -> e') -> ListT (EitherT e (Reader Context)) a
                     -> ListT (EitherT e' (Reader Context)) a
</code></pre>
<p>I can even write that little beast by running the ListT, mapping on EitherT, and then reassembling the ListT (because it happens that m [a] can be converted to ListT m a). But this seems... messy.</p>
<p>There's a good reason I can't just run a transformer, do some stuff under it, and generically "put it back"; the transformer I ran might have effects and I can't magically undo them. But is there some way in which I can lift a function just far enough into a transformer stack to do some work for me so I don't have to write the <code>convert</code> function shown above?</p>
|
<p>I think convert is a good answer, and using <code>Control.Monad.Morph</code> and <code>Control.Monad.Trans.Either</code> it's (almost) really simple to write:</p>
<pre><code>convert :: (Monad m, Functor m, MFunctor t)
=> (e -> e')
-> t (EitherT e m) b -> t (EitherT e' m) b
convert f = hoist (bimapEitherT f id)
</code></pre>
<p>The slight problem is that <code>ListT</code> isn't an instance of <code>MFunctor</code>. I think this is the author boycotting <code>ListT</code> because it <a href="https://stackoverflow.com/questions/12617916/why-is-listt-monad-transformer-considered-buggy-what-monad-laws-it-breaks">doesn't follow the monad transformer laws</a>, though, since it's easy to write a type-checking instance:</p>
<pre><code>instance MFunctor ListT where hoist nat (ListT mas) = ListT (nat mas)
</code></pre>
<p>Anyway, generally take a look at <a href="http://hackage.haskell.org/package/mmorph" rel="nofollow noreferrer"><code>Control.Monad.Morph</code></a> for dealing with natural transformations on (parts of) transformer stacks. I'd say that fits the definition of lifting a function "just enough" into a stack.</p>
| 468
|
transformers
|
Type Variable Location in Transformers
|
https://stackoverflow.com/questions/49587122/type-variable-location-in-transformers
|
<p>Consider the <code>State</code> type - or at least a simplified version:</p>
<pre><code>newtype State s a = State { runState :: s -> (a, s) }
</code></pre>
<p>Now, let's say we want to derive the <code>StateT</code> monad transformer. <code>transformers</code> defines it as follows:</p>
<pre><code>newtype StateT s m a = StateT { runStateT :: s -> m (a, s) }
</code></pre>
<p>Here, the <code>m</code> has been placed on the right of the function arrow, but outside the tuple. However, if we didn't know the correct answer, we might instead put <code>m</code> somewhere else:</p>
<pre><code>newtype StateT s m a = StateT { runStateT :: m (s -> ( a, s)) }
newtype StateT s m a = StateT { runStateT :: s -> (m a, s) }
</code></pre>
<p>Obviously the version in <code>transformers</code> is correct, but why? More generally, how does one know where to put the type variable for the 'inner' monad when defining a monad transformer? Generalising even more, is there a similar rule for <a href="https://www.stackage.org/haddock/lts-11.2/comonad-5.0.3/Control-Comonad-Trans-Class.html" rel="nofollow noreferrer">comonad transformers</a>?</p>
|
<p>I think the difference can be easily understood when <code>m ~ IO</code>:</p>
<pre><code>s -> IO (a, s)
</code></pre>
<p>is the type of an action which can read the current state <code>s</code>, perform IO depending on that (e.g. printing the current state, reading a line from the user), and then produce both the new state <code>s</code>, and a return value <code>a</code>.</p>
<p>Instead:</p>
<pre><code>IO (s -> (a, s))
</code></pre>
<p>is the type of an action which immediately performs IO, without knowing the current state. After all the IO is over, it returns a pure function mapping the old state into a new state and a return value.</p>
<p>This is similar to the previous type, since the new state and return value can depend both on the previous state and the IO. However, the IO can not depend on the current state: e.g., printing the current state is disallowed.</p>
<p>Instead,</p>
<pre><code>s -> (IO a, s)
</code></pre>
<p>is the type of an action which reads the current state <code>s</code>, and then performs IO depending on that (e.g. printing the current state, reading a line from the user), and then produces a return value <code>a</code>. Depending on the current state, but not on the IO, a new state is produced. This type is effectively isomorphic to a pair of functions <code>(s -> IO a, s -> s)</code>.</p>
<p>Here, the IO can read a line from the user, and produce a return value <code>a</code> depending on that, but the new state can not depend on that line.</p>
<p>Since the first variant is more general, we want that as our state transformer.</p>
<p>I don't think there's a "general rule" for deciding where to put <code>m</code>: it depends on what we want to achieve.</p>
| 469
|
transformers
|
AttributeError: module transformers has no attribute TFGPTNeoForCausalLM
|
https://stackoverflow.com/questions/68604289/attributeerror-module-transformers-has-no-attribute-tfgptneoforcausallm
|
<p>I cloned this repository/documentation <a href="https://huggingface.co/EleutherAI/gpt-neo-125M" rel="nofollow noreferrer">https://huggingface.co/EleutherAI/gpt-neo-125M</a></p>
<p>I get the below error whether I run it on google collab or locally. I also installed transformers using this</p>
<pre><code>pip install git+https://github.com/huggingface/transformers
</code></pre>
<p>and made sure the configuration file is named as config.json</p>
<pre><code> 5 tokenizer = AutoTokenizer.from_pretrained("gpt-neo-125M/",from_tf=True)
----> 6 model = AutoModelForCausalLM.from_pretrained("gpt-neo-125M",from_tf=True)
7
8
3 frames
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getattr__(self, name)
AttributeError: module transformers has no attribute TFGPTNeoForCausalLM
</code></pre>
<p>Full code:</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M",from_tf=True)
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M",from_tf=True)
</code></pre>
<p>transformers-cli env results:</p>
<ul>
<li><code>transformers</code> version: 4.10.0.dev0</li>
<li>Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.29</li>
<li>Python version: 3.8.5</li>
<li>PyTorch version (GPU?): 1.9.0+cpu (False)</li>
<li>Tensorflow version (GPU?): 2.5.0 (False)</li>
<li>Flax version (CPU?/GPU?/TPU?): not installed (NA)</li>
<li>Jax version: not installed</li>
<li>JaxLib version: not installed</li>
<li>Using GPU in script?: </li>
<li>Using distributed or parallel set-up in script?: </li>
</ul>
<p>Both collab and locally have TensorFlow 2.5.0 version</p>
|
<p>My solution was to first edit the source code to remove the line that adds "TF" in front of the class name: the correct transformers class is GPTNeoForCausalLM, but somewhere in the source code a "TF" was being prepended to it.</p>
<p>Secondly, before cloning the repository you must run</p>
<pre><code>git lfs install
</code></pre>
<p>This link helped me install git lfs properly <a href="https://askubuntu.com/questions/799341/how-to-install-git-lfs-on-ubuntu-16-04">https://askubuntu.com/questions/799341/how-to-install-git-lfs-on-ubuntu-16-04</a></p>
| 470
|
transformers
|
Monad Stack Penetration Classes with Free/Operational Monad Transformers?
|
https://stackoverflow.com/questions/17936900/monad-stack-penetration-classes-with-free-operational-monad-transformers
|
<p>Can there be mtl-like mechanism for monad transformers created by FreeT / ProgramT ?</p>
<p>My understanding of the history is as follows. Once upon a time the monad transformer was invented. Then people started to stack monad transformers one on another, then found it annoying to insert <code>lift</code> everywhere. Then a couple of people invented monad classes, so that we can e.g. use <code>ask :: m r</code> in any monad <code>m</code> such that <code>MonadReader r m</code>. This was possible by making every monad class <em>penetrate</em> every monad transformer, like</p>
<blockquote>
<p><code>(Monoid w, MonadState s m) => MonadState s (WriterT w m)<br>
MonadWriter w m => MonadWriter w (StateT s m)</code></p>
</blockquote>
<p>you need such pair of instance declarations for every pair of monad transformers, so when there's <em>n</em> monad transformers there's <em>n</em>^2 costs. This was not a large problem, however, because people will mostly use predefined monads and rarely create their own. The story so far I understand, and also is detailed e.g. in the following Q&A:</p>
<p><a href="https://stackoverflow.com/questions/9054731/avoiding-lift-with-monad-transformers">Avoiding lift with Monad Transformers</a></p>
<p>Then my problem is with the new Free monads <a href="http://hackage.haskell.org/package/free" rel="nofollow noreferrer">http://hackage.haskell.org/package/free</a> and Operational monads <a href="http://hackage.haskell.org/package/operational" rel="nofollow noreferrer">http://hackage.haskell.org/package/operational</a> . They allow us to write our own DSL and use it as monads, just by defining the language as some algebraic <code>data</code> type (Operational doesn't even need <code>Functor</code> instances). Good news is that we can have monads and monad transformers for free; then how about monad classes? Bad news is that the assumption "we rarely define our own monad transformers" no longer holds.</p>
<p>As an attempt to understand this problem, I made two <code>ProgramT</code>s and made them penetrate each other;</p>
<p><a href="https://github.com/nushio3/practice/blob/master/operational/exe-src/test-05.hs" rel="nofollow noreferrer">https://github.com/nushio3/practice/blob/master/operational/exe-src/test-05.hs</a></p>
<p>The <code>operational</code> package does not support monad classes so I took another implementation <code>minioperational</code> and modified it to work as I need; <a href="https://github.com/nushio3/minioperational" rel="nofollow noreferrer">https://github.com/nushio3/minioperational</a></p>
<p>Still, I needed the specialized instance declaration</p>
<blockquote>
<p><code>instance (Monad m, Operational ILang m) => Operational ILang (ProgramT SLang m) where</code></p>
</blockquote>
<p>because the general declaration of the following form leads to undecidable instances.</p>
<blockquote>
<p><code>instance (Monad m, Operational f m) => Operational f (ProgramT g m) where</code></p>
</blockquote>
<p>My question is: how can we make it easier to let our Operational monads penetrate each other? Or is my wish to have penetration for any Operational monad ill-posed?</p>
<p>I'd also like to know the correct technical term for <em>penetration</em> :)</p>
|
<p>I tried a somewhat different approach, which gives at least a partial answer. Since stacking monads can sometimes be problematic, and we know all our monads are constructed from some data type, I tried instead to combine the data types.</p>
<p>I feel more comfortable with <code>MonadFree</code> so I used it, but I suppose a similar approach could be used for <code>Operational</code> as well.</p>
<p>Let's start with the definition of our data types:</p>
<pre><code>{-# LANGUAGE DeriveFunctor, FlexibleContexts,
FlexibleInstances, FunctionalDependencies #-}
import Control.Monad
import Control.Monad.Free
data SLang x = ReadStr (String -> x) | WriteStr String x
deriving Functor
data ILang x = ReadInt (Int -> x) | WriteInt Int x
deriving Functor
</code></pre>
<p>In order to combine two functors for use in a free monad, let's define their coproduct:</p>
<pre><code>data EitherF f g a = LeftF (f a) | RightF (g a)
deriving Functor
</code></pre>
<p>If we create a free monad over <code>EitherF f g</code>, we can call the commands from both of them. In order to make this process transparent, we can use an <a href="http://www.haskell.org/haskellwiki/Multi-parameter_type_class" rel="noreferrer">MPTC</a> to allow conversion from each of the functors into the target one:</p>
<pre><code>class Lift f g where
lift :: f a -> g a
instance Lift f f where
lift = id
instance Lift f (EitherF f g) where
lift = LeftF
instance Lift g (EitherF f g) where
lift = RightF
</code></pre>
<p>Now we can just call <code>lift</code> and convert either part into the coproduct.</p>
<p>With a helper function</p>
<pre><code>wrapLift :: (Functor g, Lift g f, MonadFree f m) => g a -> m a
wrapLift = wrap . lift . fmap return
</code></pre>
<p>we can finally create generic functions that allow us to call commands from anything we can lift into a functor:</p>
<pre><code>readStr :: (Lift SLang f, MonadFree f m) => m String
readStr = wrapLift $ ReadStr id
writeStr :: (Lift SLang f, MonadFree f m) => String -> m ()
writeStr x = wrapLift $ WriteStr x ()
readInt :: (Lift ILang f, MonadFree f m) => m Int
readInt = wrapLift $ ReadInt id
writeInt :: (Lift ILang f, MonadFree f m) => Int -> m ()
writeInt x = wrapLift $ WriteInt x ()
</code></pre>
<p>Then, the program can be expressed as</p>
<pre><code>myProgram :: (Lift ILang f, Lift SLang f, MonadFree f m) => m ()
myProgram = do
str <- readStr
writeStr "Length of that str is"
writeInt $ length str
n <- readInt
writeStr "you wanna have it n times; here we go:"
writeStr $ replicate n 'H'
</code></pre>
<p>without defining any further instances.</p>
<hr>
<p>While all the above works nicely, the problem is how to generically run such composed free monads. I don't know if it is even possible to have a fully generic, composable solution.</p>
<p>If we have just one base functor, we can run it as</p>
<pre><code>runSLang :: Free SLang x -> String -> (String, x)
runSLang = f
where
f (Pure x) s = (s, x)
f (Free (ReadStr g)) s = f (g s) s
f (Free (WriteStr s' x)) _ = f x s'
</code></pre>
<p>If we have two, we need to thread the state of both of them:</p>
<pre><code>runBoth :: Free (EitherF SLang ILang) a -> String -> Int -> ((String, Int), a)
runBoth = f
where
f (Pure x) s i = ((s, i), x)
f (Free (LeftF (ReadStr g))) s i = f (g s) s i
f (Free (LeftF (WriteStr s' x))) _ i = f x s' i
f (Free (RightF (ReadInt g))) s i = f (g i) s i
f (Free (RightF (WriteInt i' x))) s _ = f x s i'
</code></pre>
<p>I guess one possibility would be to express running the functors using <code>iter :: Functor f => (f a -> a) -> Free f a -> a</code> from <a href="http://hackage.haskell.org/packages/archive/free/3.4.2/doc/html/Control-Monad-Free.html#v:iter" rel="noreferrer">free</a> and then create a similar combining function</p>
<pre><code>iter2 :: (Functor f, Functor g)
=> (f a -> a) -> (g a -> a) -> Free (EitherF f g) a -> a
</code></pre>
<p>But I haven't had time to try it out.</p>
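<p>For what it's worth, a minimal sketch of such a combining function, written against the <code>EitherF</code> defined above and <code>iter</code> from the <code>free</code> package, could dispatch each layer of the coproduct to the matching algebra (untested):</p>
<pre><code>import Control.Monad.Free (Free, iter)

-- Sketch: run a free monad over the coproduct by dispatching each
-- layer to the appropriate algebra. Assumes the EitherF from above,
-- whose derived Functor instance makes iter applicable.
iter2 :: (Functor f, Functor g)
      => (f a -> a) -> (g a -> a) -> Free (EitherF f g) a -> a
iter2 f g = iter go
  where
    go (LeftF  fa) = f fa
    go (RightF ga) = g ga
</code></pre>
<p>Note that to recover something like <code>runBoth</code> this way, the carrier <code>a</code> would itself have to be a state-threading function, e.g. <code>String -> Int -> ((String, Int), b)</code>, since <code>iter</code> only accepts pure algebras <code>f a -> a</code>.</p>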
| 471
|
transformers
|
How to pass in transformers to ts-node?
|
https://stackoverflow.com/questions/57342857/how-to-pass-in-transformers-to-ts-node
|
<p>I'm trying to roll my own compiler for TypeScript because I need to use transformers.</p>
<p>We use ts-node to run some files (individual tests, etc.), and I also need the transformers to be passed to the ts-node compiler.</p>
<p>Here's my code</p>
<pre><code>const ts = require('typescript');
const tsNode = require('ts-node').register;
const keysTransformer = require( 'ts-transformer-keys/transformer');
const tsConfig = require( './tsconfig.json');
const compileProject = () => {
const { options, fileNames } = ts.parseJsonConfigFileContent(
tsConfig,
ts.sys,
__dirname
);
const program = ts.createProgram(fileNames, options);
const transformers = {
before: [keysTransformer(program)],
after: []
};
program.emit(undefined, undefined, undefined, false, transformers);
}
const compileAndRun = (files) => {
tsNode({ files, compilerOptions: tsConfig.compilerOptions, transformers: ["ts-transformer-keys/transformer"] });
files.forEach(file => {
require(file);
});
}
module.export = main = (args) => {
if(args.length >= 2) {
const fileNames = args.splice(2);
compileAndRun(fileNames);
} else {
compileProject();
}
}
main(process.argv);
</code></pre>
<p>Passing in the transformer to the TypeScript compiler (when compiling the entire project) works just fine by doing</p>
<pre><code>const transformers = {
before: [keysTransformer(program)],
after: []
};
</code></pre>
<p>However, I cannot seem to find sufficient documentation on how to do the same with ts-node.</p>
|
<p>The <code>transformers</code> option to <code>register()</code> is of the <code>CustomTransformers</code> type (not an array as you're passing):</p>
<pre><code> interface CustomTransformers {
/** Custom transformers to evaluate before built-in .js transformations. */
before?: (TransformerFactory<SourceFile> | CustomTransformerFactory)[];
/** Custom transformers to evaluate after built-in .js transformations. */
after?: (TransformerFactory<SourceFile> | CustomTransformerFactory)[];
/** Custom transformers to evaluate after built-in .d.ts transformations. */
afterDeclarations?: (TransformerFactory<Bundle | SourceFile> | CustomTransformerFactory)[];
}
</code></pre>
| 472
|
transformers
|
Grails :Column transformers ( like Hibernate )
|
https://stackoverflow.com/questions/20430940/grails-column-transformers-like-hibernate
|
<p>I want to add <strong>column transformers (read and write)</strong> <a href="https://docs.jboss.org/hibernate/core/3.6/reference/en-US/html/mapping.html#mapping-column-read-and-write" rel="nofollow">like these</a> to a Groovy domain class in a Grails application.</p>
|
<p>Depending on what you are trying to accomplish you could use Hibernate Custom Types which is explained in the Grails Documentation (<a href="http://grails.org/doc/latest/guide/GORM.html#customHibernateTypes" rel="nofollow">http://grails.org/doc/latest/guide/GORM.html#customHibernateTypes</a>). There is also a great example of it in practice in the jasypt (encryption) plugin by Ted Naleid (<a href="https://bitbucket.org/tednaleid/grails-jasypt/src" rel="nofollow">https://bitbucket.org/tednaleid/grails-jasypt/src</a>). In his plugin, he uses Hibernate custom types to encrypt and decrypt strings (and other data types) going into and out of the database. He delegates most of the work to the jasypt library, which can be found in many places, but this is one of them (<a href="http://grepcode.com/file/repo1.maven.org/maven2/org.jasypt/jasypt-hibernate3/1.9.0/org/jasypt/hibernate3/type/AbstractEncryptedAsStringType.java?av=f" rel="nofollow">http://grepcode.com/file/repo1.maven.org/maven2/org.jasypt/jasypt-hibernate3/1.9.0/org/jasypt/hibernate3/type/AbstractEncryptedAsStringType.java?av=f</a>)</p>
| 473
|
transformers
|
Confused about Haskell Monad Transformers
|
https://stackoverflow.com/questions/51864209/confused-about-haskell-monad-transformers
|
<p>I am confused about where <code>m</code> should be placed on the right-hand side of monad transformer definitions.</p>
<p>For example:</p>
<p><code>WriterT</code> is defined as</p>
<pre><code>newtype WriterT w m a = WriterT { runWriterT :: m (a, w) }
</code></pre>
<p>while <code>ReaderT</code> is defined as</p>
<pre><code>newtype ReaderT r m a = ReaderT { runReaderT :: r -> m a }
</code></pre>
<p>but NOT</p>
<pre><code>newtype ReaderT r m a = ReaderT { runReaderT :: m (r -> a) }
</code></pre>
|
<p>The placement of the monad <code>m</code> will depend on the function and operation of the monad transformer that's being applied to the underlying monad <code>m</code>, so it's determined by what functionality the reader and writer are supposed to be adding to the monad.</p>
<p>It helps to remember that <code>runReaderT</code> and <code>runWriterT</code> aren't really <em>doing</em> anything, despite their suggestive names. They're just unwrapping a newtype, and it's the things they wrap that are transforming the monad <code>m</code>.</p>
<p>What I mean by this is, given a monad <code>m</code>, you can add a reader to it by considering monadic actions of type:</p>
<pre><code>r -> m a
</code></pre>
<p>and you can add a writer to it by considering monadic actions of type:</p>
<pre><code>m (a, w)
</code></pre>
<p>and you can add a reader, writer, and state to it by considering monadic actions of type:</p>
<pre><code>r -> s -> m (a, s, w)
</code></pre>
<p>(That is, you don't need any of the transformer wrappers to do this, though they can make it more convenient, particularly since you can use existing operators like <code>>>=</code> and <code><*></code> instead of having to define your own.)</p>
<p>So, when you add a reader to a monad <code>m</code>, why don't you instead place the <code>m</code> at the beginning and consider monadic actions of the following type?</p>
<pre><code>m (r -> a)
</code></pre>
<p>You could, in fact, do this, but you'd quickly discover that this method of adding a reader doesn't actually add very much functionality to the monad <code>m</code>.</p>
<p>For example, suppose you're writing a function that should look up a key in a table of values, and you want to carry the table in a reader. Since the lookup can fail, you'd like to do this in the <code>Maybe</code> monad. So, you'd like to write something like:</p>
<pre><code>myLookup :: Key -> Maybe Value
myLookup key = ...
</code></pre>
<p>However, you want to enhance the <code>Maybe</code> monad with a reader that provides the table of keys and values. If we do this using the <code>m (r -> a)</code> pattern, we get:</p>
<pre><code>myLookup :: Key -> Maybe ([(Key,Value)] -> Value)
</code></pre>
<p>Now, let's try to implement it:</p>
<pre><code>myLookup k = Just (\tbl -> ...)
</code></pre>
<p>Already, we see a problem. We have to provide a <code>Just</code> (indicating that the lookup has succeeded) before we're allowed to write code to access the <code>\tbl</code>. That is, the monadic action (failure or success with return value) cannot depend on information in the <code>r</code> which should have been obvious from the signature <code>m (r -> a)</code>. Using the alternate <code>r -> m a</code> pattern is more powerful:</p>
<pre><code>type M a = [(Key, Value)] -> Maybe a
myLookup :: Key -> M Value
myLookup key tbl = Prelude.lookup key tbl
</code></pre>
<p>@Thomas_M_DuBuisson gave another example. If we're trying to read an input file, we might write:</p>
<pre><code>readInput :: FilePath -> IO DataToProcess
readInput fp = withFile fp ReadMode $ \h -> ...
</code></pre>
<p>It would be nice to carry around configuration information like file paths in a reader, so let's transform it using the pattern <code>m (r -> a)</code> to:</p>
<pre><code>data Config = Config { inputFile :: FilePath }
readConfig :: IO (Config -> DataToProcess)
readConfig = ...um...
</code></pre>
<p>and we're stuck because we can't write an IO action that depends on the configuration information. If we'd used the alternate pattern <code>r -> m a</code>, we'd be set:</p>
<pre><code>type M a = Config -> IO a
readConfig :: M DataToProcess
readConfig cfg = withFile (inputFile cfg) ReadMode $ ...
</code></pre>
<p>Another issue, raised by @cdk, is that this new "monadic" action type:</p>
<pre><code>m (r -> a)
</code></pre>
<p>isn't even a monad. It's weaker (just an applicative).</p>
<p>Note that adding a merely applicative reader to a monad could <em>still</em> be useful. It just needs to be used in computations where the computational structure does not depend on the information in <code>r</code>. (So, if the underlying monad is <code>Maybe</code> to allow a computation to signal an error, the values from <code>r</code> can be used in the computation but the determination of whether or not the computation succeeds must be independent of <code>r</code>.)</p>
<p>However, the <code>r -> m a</code> version is strictly more powerful and can be used as both a monadic and applicative reader.</p>
<p>Note that some monadic transformations are useful in multiple forms. For example, you can (but only sometimes, as @luqui pointed out in a comment) add a writer to an <code>m</code> monad in two ways:</p>
<pre><code>m (a, w) -- if m is a monad this is always a monad
(m a, w) -- this is a monad for some, but not all, monads m
</code></pre>
<p>If <code>m</code> is <code>IO</code>, then <code>IO (a,w)</code> is way more useful than <code>(IO a, w)</code> -- with the latter, the written <code>w</code> (e.g., an error log) can't depend on the result of executing the <code>IO</code> action! Also, again <code>(IO a, w)</code> isn't actually a monad; it's just an applicative.</p>
<p>On the other hand, if <code>m</code> is <code>Maybe</code>, then <code>(Maybe a, w)</code> writes something whether the computation succeeds or fails, while <code>Maybe (a, w)</code> loses all the log entries if it returns <code>Nothing</code>. Both forms are monads and can be useful in different situations, and they correspond to stacking the transformers in different orders:</p>
<pre><code>MaybeT (Writer w) -- acts like (Maybe a, w)
WriterT w Maybe -- acts like Maybe (a, w)
</code></pre>
<p>The same is <strong>not</strong> true for stacking <code>Maybe</code> and <code>Reader</code> in different orders. Both of these are isomorphic to the "good" reader <code>r -> Maybe a</code>:</p>
<pre><code>MaybeT (Reader r)
ReaderT r Maybe
</code></pre>
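<p>Concretely, a sketch of both unwrappings (using the <code>transformers</code> package) shows they land on the same "good" reader type:</p>
<pre><code>import Control.Monad.Trans.Maybe  (MaybeT, runMaybeT)
import Control.Monad.Trans.Reader (Reader, ReaderT, runReader, runReaderT)

-- Both stackings unwrap to r -> Maybe a.
fromMaybeTReader :: MaybeT (Reader r) a -> (r -> Maybe a)
fromMaybeTReader = runReader . runMaybeT

fromReaderTMaybe :: ReaderT r Maybe a -> (r -> Maybe a)
fromReaderTMaybe = runReaderT
</code></pre>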
| 474
|
transformers
|
SentenceTransformerEmbeddings without installing sentence transformers
|
https://stackoverflow.com/questions/78613151/sentencetransformerembeddings-without-installing-sentence-transformers
|
<p>I had my langchain project working on my Win10 laptop, but after moving it to a Debian GNU/Linux 12 server, I've learned that I can't install sentence-transformers (Python 3.11) due to problems with installing torch.</p>
<p>Is there some alternative to SentenceTransformerEmbeddings without sentence transformers?</p>
<p>I tried installing torch from <a href="https://download.pytorch.org/whl/torch_stable.html" rel="nofollow noreferrer">https://download.pytorch.org/whl/torch_stable.html</a> but it told me it doesn't have a matching version.</p>
| 475
|
|
transformers
|
ImportError: cannot import name 'BigBirdTokenizer' from 'transformers' (/usr/local/lib/python3.7/dist-packages/transformers/__init__.py)
|
https://stackoverflow.com/questions/69649310/importerror-cannot-import-name-bigbirdtokenizer-from-transformers-usr-loc
|
<p>In my environment (Colab) I need the following libraries:</p>
<pre><code>!pip install --quiet transformers==4.1.1
!pip install --quiet pytorch-lightning==1.1.3
#!pip install pytorch-lightning
!pip install --quiet tokenizers==0.9.4
!pip install --quiet sentencepiece==0.1.94
!pip install torchtext==0.8.0 torch==1.7.1 pytorch-lightning==1.1.3
</code></pre>
<p>Afterwards I import FARMReader and TransformersReader from the haystack library. Here is the code:</p>
<pre><code>!pip install grpcio-tools==1.34.1
!pip install git+https://github.com/deepset-ai/haystack.git
from haystack.reader.farm import FARMReader
from haystack.reader.transformers import TransformersReader
</code></pre>
<p>This gives me the error:</p>
<pre><code>ImportError: cannot import name 'BigBirdTokenizer' from 'transformers' (/usr/local/lib/python3.7/dist-packages/transformers/__init__.py)
</code></pre>
<p>I tried to reinstall transformers with another version, but this does not work:</p>
<pre><code>!pip install --quiet transformers==4.7.0
</code></pre>
|
<p>I couldn't reproduce the error with the current master branch of haystack although I executed the exact same steps as mentioned in the question.</p>
<p>If you are still facing this issue, I suggest to start with a fresh virtual environment and check that you are really installing in that environment from the current master branch via</p>
<pre><code>!pip install git+https://github.com/deepset-ai/haystack.git
</code></pre>
<p>or as an alternative install the latest release via</p>
<pre><code>!pip install farm-haystack
</code></pre>
| 476
|
transformers
|
fail to use transformers to load model locally
|
https://stackoverflow.com/questions/79169173/fail-to-use-transformers-to-load-model-locally
|
<p>I have used <code>snapshot_download</code> from <code>huggingface_hub</code> to download the whole model
into the local directory /home/marcus/Desktop/project/OCR_transformer_practices/models/moondream2
with the following code:</p>
<pre><code>from huggingface_hub import snapshot_download
# Specify the model ID and revision
model_id = "vikhyatk/moondream2"
revision = "2024-08-26"
# Specify the directory where you want to download the model
download_directory = "/home/marcus/Desktop/project/OCR_transformer_practices/models/moondream2" # Change this to your desired path
# Download the model files to the specified directory
local_model_path = snapshot_download(repo_id=model_id, revision=revision, local_dir=download_directory)
</code></pre>
<p>The model is saved correctly in the directory: <a href="https://i.sstatic.net/A9WSA48J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A9WSA48J.png" alt="enter image description here" /></a></p>
<p>When I load the model from the local directory with transformers, using the following code:</p>
<pre><code>from PIL import Image
from transformers import AutoTokenizer, AutoModelForCausalLM
from pathlib import Path
import os
# Get the parent directory
project_dir = Path(__file__).parent
model_folder_name = 'models/moondream2'
model_dir = str(project_dir/model_folder_name)
# Load the tokenizer and model using the correct model ID
# model_id = "vikhyatk/moondream2"
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained( pretrained_model_name_or_path=model_dir, use_safetensors=True, trust_remote_code=True,)
</code></pre>
<p>it raises the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/marcus/Desktop/project/OCR_transformer_practices/moondream_test.py", line 15, in <module>
model = AutoModelForCausalLM.from_pretrained( pretrained_model_name_or_path=model_dir, use_safetensors=True, trust_remote_code=True,)
File "/home/marcus/Desktop/project/OCR_transformer_practices/.venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 553, in from_pretrained
model_class = get_class_from_dynamic_module(
File "/home/marcus/Desktop/project/OCR_transformer_practices/.venv/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 552, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module, force_reload=force_download)
File "/home/marcus/Desktop/project/OCR_transformer_practices/.venv/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 237, in get_class_in_module
module_files: List[Path] = [module_file] + sorted(map(Path, get_relative_import_files(module_file)))
File "/home/marcus/Desktop/project/OCR_transformer_practices/.venv/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 128, in get_relative_import_files
new_imports.extend(get_relative_imports(f))
File "/home/marcus/Desktop/project/OCR_transformer_practices/.venv/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 97, in get_relative_imports
with open(module_file, "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/marcus/.cache/huggingface/modules/transformers_modules/moondream2/fourier_features.py'
</code></pre>
<p>How can I resolve it?</p>
| 477
|
|
transformers
|
How to use the HuggingFace transformers pipelines?
|
https://stackoverflow.com/questions/60209265/how-to-use-the-huggingface-transformers-pipelines
|
<p>I'm trying to do a simple text classification project with Transformers, I want to use the pipeline feature added in the V2.3, but there is little to no documentation.</p>
<pre><code>data = pd.read_csv("data.csv")
FLAUBERT_NAME = "flaubert-base-cased"
encoder = LabelEncoder()
target = encoder.fit_transform(data["category"])
y = target
X = data["text"]
model = FlaubertForSequenceClassification.from_pretrained(FLAUBERT_NAME)
tokenizer = FlaubertTokenizer.from_pretrained(FLAUBERT_NAME)
pipe = TextClassificationPipeline(model, tokenizer, device=-1) # device=-1 -> Use only CPU
print("Test #1: pipe('Bonjour le monde')=", pipe(['Bonjour le monde']))
</code></pre>
<hr>
<pre><code>Traceback (most recent call last):
File "C:/Users/PLHT09191/Documents/work/dev/Classif_Annonces/src/classif_annonce.py", line 33, in <module>
model = FlaubertForSequenceClassification.from_pretrained(FLAUBERT_NAME)
File "C:\Users\Myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\modeling_utils.py", line 463, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "C:\Users\Myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\modeling_flaubert.py", line 343, in __init__
super(FlaubertForSequenceClassification, self).__init__(config)
File "C:\Users\Myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\modeling_xlm.py", line 733, in __init__
self.transformer = XLMModel(config)
File "C:\Users\Myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\modeling_xlm.py", line 382, in __init__
self.ffns.append(TransformerFFN(self.dim, self.hidden_dim, self.dim, config=config))
File "C:\Users\Myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\transformers-2.4.1-py3.5.egg\transformers\modeling_xlm.py", line 203, in __init__
self.lin2 = nn.Linear(dim_hidden, out_dim)
File "C:\Users\Myself\Documents\work\dev\Classif_Annonces\venv\lib\site-packages\torch\nn\modules\linear.py", line 72, in __init__
self.weight = Parameter(torch.Tensor(out_features, in_features))
RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9437184 bytes. Buy new RAM!
Process finished with exit code 1
</code></pre>
<p>How can I use my pipeline with my <code>X</code> and <code>y</code> data?</p>
| 478
|
|
transformers
|
Unable to user pipeline module inside transformers library
|
https://stackoverflow.com/questions/77814471/unable-to-user-pipeline-module-inside-transformers-library
|
<p>I'm using Python 3.11 and the latest version of transformers, 4.36.2.</p>
<p>The issue is that when I import the pipeline module, I face the following error in Jupyter Notebook:</p>
<pre><code>RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
cannot import name 'formatargspec' from 'inspect' (/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py)
</code></pre>
<p>I've tried restarting the kernel, restarting my computer, and uninstalling transformers. I also found that transformers version 4.28.0 did not have this issue, but even so I'm unable to use it.</p>
<p>Here's the full error message:</p>
<pre><code>ImportError Traceback (most recent call last)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/utils/import_utils.py:1382, in _LazyModule._get_module(self, module_name)
1381 try:
-> 1382 return importlib.import_module("." + module_name, self.__name__)
1383 except Exception as e:
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1206, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1178, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1149, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:690, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:940, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/pipelines/__init__.py:28
27 from ..feature_extraction_utils import PreTrainedFeatureExtractor
---> 28 from ..image_processing_utils import BaseImageProcessor
29 from ..models.auto.configuration_auto import AutoConfig
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/image_processing_utils.py:28
27 from .feature_extraction_utils import BatchFeature as BaseBatchFeature
---> 28 from .image_transforms import center_crop, normalize, rescale
29 from .image_utils import ChannelDimension
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/image_transforms.py:47
46 if is_tf_available():
---> 47 import tensorflow as tf
49 if is_flax_available():
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/__init__.py:48
46 _tf2.enable()
---> 48 from tensorflow._api.v2 import __internal__
49 from tensorflow._api.v2 import __operators__
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/_api/v2/__internal__/__init__.py:8
6 import sys as _sys
----> 8 from tensorflow._api.v2.__internal__ import autograph
9 from tensorflow._api.v2.__internal__ import decorator
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/_api/v2/__internal__/autograph/__init__.py:8
6 import sys as _sys
----> 8 from tensorflow.python.autograph.core.ag_ctx import control_status_ctx # line: 34
9 from tensorflow.python.autograph.impl.api import tf_convert # line: 493
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/python/autograph/core/ag_ctx.py:21
19 import threading
---> 21 from tensorflow.python.autograph.utils import ag_logging
22 from tensorflow.python.util.tf_export import tf_export
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/python/autograph/utils/__init__.py:17
15 """Utility module that contains APIs usable in the generated code."""
---> 17 from tensorflow.python.autograph.utils.context_managers import control_dependency_on_returns
18 from tensorflow.python.autograph.utils.misc import alias_tensors
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/python/autograph/utils/context_managers.py:19
17 import contextlib
---> 19 from tensorflow.python.framework import ops
20 from tensorflow.python.ops import tensor_array_ops
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/python/framework/ops.py:45
44 from tensorflow.python.client import pywrap_tf_session
---> 45 from tensorflow.python.eager import context
46 from tensorflow.python.eager import core
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/python/eager/context.py:37
36 from tensorflow.python.eager import cancellation
---> 37 from tensorflow.python.eager import execute
38 from tensorflow.python.eager import executor
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/python/eager/execute.py:23
22 from tensorflow.python.framework import tensor_conversion_registry
---> 23 from tensorflow.python.framework import tensor_shape
24 from tensorflow.python.types import core as core_types
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/python/framework/tensor_shape.py:26
25 from tensorflow.python.platform import tf_logging as logging
---> 26 from tensorflow.python.saved_model import nested_structure_coder
27 from tensorflow.python.types import trace
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/python/saved_model/nested_structure_coder.py:38
37 from tensorflow.python.util import compat
---> 38 from tensorflow.python.util import nest
39 from tensorflow.python.util.compat import collections_abc
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/python/util/nest.py:93
16 """Functions that work with structures.
17
18 A structure is either:
(...)
90 API docstring: tensorflow.nest
91 """
---> 93 import wrapt as _wrapt
95 from tensorflow.python.util import _pywrap_nest
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/wrapt/__init__.py:10
4 from .wrappers import (ObjectProxy, CallableObjectProxy, FunctionWrapper,
5 BoundFunctionWrapper, WeakFunctionProxy, PartialCallableObjectProxy,
6 resolve_path, apply_patch, wrap_object, wrap_object_attribute,
7 function_wrapper, wrap_function_wrapper, patch_function_wrapper,
8 transient_function_wrapper)
---> 10 from .decorators import (adapter_factory, AdapterFactory, decorator,
11 synchronized)
13 from .importer import (register_post_import_hook, when_imported,
14 notify_module_loaded, discover_post_import_hooks)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/wrapt/decorators.py:34
33 from functools import partial
---> 34 from inspect import ismethod, isclass, formatargspec
35 from collections import namedtuple
ImportError: cannot import name 'formatargspec' from 'inspect' (/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py)
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Cell In[1], line 1
----> 1 from transformers import pipeline
3 print("Pipeline module is available!")
File <frozen importlib._bootstrap>:1231, in _handle_fromlist(module, fromlist, import_, recursive)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/utils/import_utils.py:1372, in _LazyModule.__getattr__(self, name)
1370 value = self._get_module(name)
1371 elif name in self._class_to_module.keys():
-> 1372 module = self._get_module(self._class_to_module[name])
1373 value = getattr(module, name)
1374 else:
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/utils/import_utils.py:1384, in _LazyModule._get_module(self, module_name)
1382 return importlib.import_module("." + module_name, self.__name__)
1383 except Exception as e:
-> 1384 raise RuntimeError(
1385 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1386 f" traceback):\n{e}"
1387 ) from e
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
cannot import name 'formatargspec' from 'inspect' (/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py)
</code></pre>
<p>Thanks for your help!</p>
|
<p>Can you try installing the latest version of <code>wrapt</code></p>
<pre><code>python -m pip install wrapt==1.16.0
</code></pre>
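<p>For context (not required for the fix): <code>inspect.formatargspec</code> was removed in Python 3.11, and older <code>wrapt</code> releases still import it, which is why the traceback ends there. A minimal stdlib-only sketch of the situation and of the modern replacement, <code>inspect.signature</code>:</p>

```python
import inspect
import sys

# formatargspec was removed in Python 3.11; old wrapt releases break on this.
has_formatargspec = hasattr(inspect, "formatargspec")
print(sys.version_info[:2], "formatargspec available:", has_formatargspec)

# Newer wrapt versions rely on inspect.signature instead:
def greet(name, punctuation="!"):
    return name + punctuation

sig = inspect.signature(greet)
print(sig)  # (name, punctuation='!')
```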
| 479
|
transformers
|
ImportError: cannot import name 'LLaMATokenizer' from 'transformers'
|
https://stackoverflow.com/questions/75907910/importerror-cannot-import-name-llamatokenizer-from-transformers
|
<p>I am not able to import <strong>LLaMATokenizer</strong></p>
<p>Any solution for this problem?</p>
<p>I am using the code of this repo.
<a href="https://github.com/zphang/transformers/tree/llama_push" rel="noreferrer">https://github.com/zphang/transformers/tree/llama_push</a>
and trying to load the models and tokenizer using</p>
<pre><code>tokenizer = transformers.LLaMATokenizer.from_pretrained("./weights/tokenizer/")
model = transformers.LLaMAForCausalLM.from_pretrained("./weights/llama-7b/")
</code></pre>
<p>which results in the following error:</p>
<blockquote>
<p>ImportError: cannot import name 'LLaMATokenizer' from 'transformers'</p>
</blockquote>
|
<p>To complement <a href="https://stackoverflow.com/users/6664872/cronoik">cronoik</a> answer (that is the correct answer):</p>
<p>If you still having problems with <code>from transformers import LlamaForCausalLM, LlamaTokenizer</code> try to install the package directly from github:</p>
<pre><code>pip install git+https://github.com/huggingface/transformers
</code></pre>
<p>also don't forget to change the Tokenizer config file from <em>LLaMATokenizer</em> to <em>LlamaTokenizer</em>.</p>
<p>source: <a href="https://github.com/huggingface/transformers/issues/22222" rel="noreferrer">https://github.com/huggingface/transformers/issues/22222</a></p>
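<p>The rename in the tokenizer config can also be scripted. A hedged stdlib-only sketch (the <code>tokenizer_class</code> field name follows the linked issue; verify it against your own <code>tokenizer_config.json</code> before running anything like this on real weights):</p>

```python
import json
import tempfile
from pathlib import Path

def patch_tokenizer_config(config_path):
    """Rename the legacy LLaMATokenizer class name to the LlamaTokenizer
    spelling that current transformers releases expect."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    if config.get("tokenizer_class") == "LLaMATokenizer":
        config["tokenizer_class"] = "LlamaTokenizer"
        path.write_text(json.dumps(config, indent=2))
    return config["tokenizer_class"]

# Demo on a throwaway file that mimics an old weights folder
with tempfile.TemporaryDirectory() as tmp:
    cfg = Path(tmp) / "tokenizer_config.json"
    cfg.write_text(json.dumps({"tokenizer_class": "LLaMATokenizer"}))
    patched = patch_tokenizer_config(cfg)
    print(patched)  # LlamaTokenizer
```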
| 480
|
transformers
|
Simple Transformers producing nothing?
|
https://stackoverflow.com/questions/71200243/simple-transformers-producing-nothing
|
<p>I have a simple transformers script looking like this.</p>
<pre><code>from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs
args = Seq2SeqArgs()
args.num_train_epoch=5
model = Seq2SeqModel(
"roberta",
"roberta-base",
"bert-base-cased",
)
import pandas as pd
df = pd.read_csv('english-french.csv')
df['input_text'] = df['english'].values
df['target_text'] =df['french'].values
model.train_model(df.head(1000))
print(model.eval_model(df.tail(10)))
</code></pre>
<p>The eval_loss is <code>{'eval_loss': 0.0001931049264385365}</code></p>
<p>However when I run my prediction script</p>
<pre><code>to_predict = ["They went to the public swimming pool."]
predictions=model.predict(to_predict)
</code></pre>
<p>I get this</p>
<pre><code>['']
</code></pre>
<p>The dataset I used is <a href="https://www.kaggle.com/faouzimohamed/englishfrench-fornmt?select=english-french.csv" rel="nofollow noreferrer">here</a></p>
<p>I'm very confused on the output. Any help or explanation why it returns nothing would be much appreciated.</p>
|
<p>Use this model instead.</p>
<pre><code>model = Seq2SeqModel(
encoder_decoder_type="marian",
encoder_decoder_name="Helsinki-NLP/opus-mt-en-mul",
args=args,
use_cuda=True,
)
</code></pre>
<p>roBERTa is not a good option for your task.</p>
<p>I have rewritten your code on <a href="https://colab.research.google.com/drive/1Ft6P-c-uQWzx5zYn1hikXRjvwGwRSIxI?usp=sharing" rel="nofollow noreferrer">this colab notebook</a></p>
<p><strong>Results</strong></p>
<pre><code># Input
to_predict = ["They went to the public swimming pool.", "she was driving the shiny black car."]
predictions = model.predict(to_predict)
print(predictions)
# Output
['Ils aient cher à la piscine publice.', 'elle conduit la véricine noir glancer.']
</code></pre>
| 481
|
transformers
|
How to use SuperGLUE with huggingface-transformers
|
https://stackoverflow.com/questions/61043681/how-to-use-superglue-with-huggingface-transformers
|
<p>I would like to use SuperGLUE tasks with huggingface-transformers. Looking at this page:</p>
<p><a href="https://github.com/huggingface/transformers/blob/master/examples/README.md" rel="nofollow noreferrer">https://github.com/huggingface/transformers/blob/master/examples/README.md</a></p>
<p>The only useful script is "run_glue.py". I'm searching for "run_superglue.py", which I suppose doesn't exist.</p>
<p>Has anyone tried to use SuperGLUE tasks with huggingface-transformers? Maybe by modifying "run_glue.py" to adapt it to SuperGLUE tasks? Thanks</p>
| 482
|
|
transformers
|
Disable default typescript transformers
|
https://stackoverflow.com/questions/55514236/disable-default-typescript-transformers
|
<p>I want to write a custom transpiler using the typescript transformer api that outputs typescript instead of javascript. To accomplish this I would need to disable the default transformers (typescript → ecma2017 → ecma2016 → ...).</p>
<p>Is this possible? I would prefer to use <code>tsc</code> directly, but if I have to use the compiler api manually that's fine too.</p>
|
<p>There's no <code>ts.ScriptTarget.TypeScript</code> so you'll need to use the compiler API.</p>
<p>Here's the basic idea (not tested, but should help you start on this):</p>
<pre><code>import * as ts from "typescript";
// setup
const printer = ts.createPrinter();
const sourceFiles: ts.SourceFile[] = ...;
const transformerFactory: ts.TransformerFactory<ts.SourceFile> = ...;
// transform the source files
const transformationResult = ts.transform(sourceFiles, [transformerFactory]);
// log the diagnostics if they exist
if (transformationResult.diagnostics) {
// output diagnostics (ts.formatDiagnosticsWithColorAndContext is nice to use)
}
// print the transformed ASTs and write the result out to files
// note: replace fs.writeFile with something that actually works
const fileWrites = transformationResult.transformed
    .map(file => fs.writeFile(file.fileName, printer.printFile(file)));
Promise.all(fileWrites)
.then(() => console.log("finished"))
.catch(err => console.error(err));
</code></pre>
| 483
|
transformers
|
Monad Transformers in Scala
|
https://stackoverflow.com/questions/40503324/monad-transformers-in-scala
|
<p>I have been trying simple Monad Transformers where I have for comprehensions involving <code>M[F[A]]</code> where <code>M</code> and <code>F</code> are monads. How can I make <code>M[F[A]]</code> and <code>M[S[A]]</code> work together in a for comp if <code>S</code> is a different monad? </p>
<p>For example:</p>
<pre><code>val a: Future[List[Int]] = ...
val b: Future[Option[Int]] = ...
</code></pre>
<p><code>a</code> requires a <code>ListT[Future, Int]</code> and <code>b</code> requires an <code>OptionT[Future, Int]</code> but these do not compose, do I need to use another transformer? Would this depend on the order I use them in the for comp? </p>
|
<p>Monad Transformers help you in composing two values of type <code>F[G[X]]</code>.</p>
<p>In other terms, monad transformers work with <code>F[G[X]]</code> because they leverage the fact that you know how to compose two <code>G[X]</code> if <code>Monad[G]</code> exists.</p>
<p>Now, in the case of <code>F[G[X]]</code> and <code>F[H[X]]</code>, even if you state that <code>G</code> and <code>H</code> have <code>Monad</code> instances, you still don't have a general way of composing them.</p>
<p>I'm afraid composing <code>F[G[X]]</code> and <code>F[H[X]]</code> has no general solution with monad transformers.</p>
| 484
|
transformers
|
Error while installing sentence-transformers
|
https://stackoverflow.com/questions/78001556/error-while-installing-sentence-transformers
|
<p>Get the following error while installing sentence-transformers on Windows 11 (using the latest version of Python and pip). Can someone please help with this? Checked many other similar posts, but none of those solutions work.</p>
<pre><code>C:\Users\abc\ai\llama\jupyterproj\stlit>py -m pip install sentence-transformers
Collecting sentence-transformers
Using cached sentence_transformers-2.3.1-py3-none-any.whl.metadata (11 kB)
Collecting transformers<5.0.0,>=4.32.0 (from sentence-transformers)
Using cached transformers-4.37.2-py3-none-any.whl.metadata (129 kB)
Requirement already satisfied: tqdm in c:\users\abc\appdata\local\programs\python\python312\lib\site-packages (from sentence-transformers) (4.66.2)
Requirement already satisfied: torch>=1.11.0 in c:\users\abc\appdata\local\programs\python\python312\lib\site-packages (from sentence-transformers) (2.2.0)
Requirement already satisfied: numpy in c:\users\abc\appdata\local\programs\python\python312\lib\site-packages (from sentence-transformers) (1.26.4)
Collecting scikit-learn (from sentence-transformers)
Using cached scikit_learn-1.4.0-1-cp312-cp312-win_amd64.whl.metadata (11 kB)
Collecting scipy (from sentence-transformers)
Using cached scipy-1.12.0-cp312-cp312-win_amd64.whl.metadata (60 kB)
Collecting nltk (from sentence-transformers)
Using cached nltk-3.8.1-py3-none-any.whl (1.5 MB)
Collecting sentencepiece (from sentence-transformers)
Using cached sentencepiece-0.1.99.tar.gz (2.6 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [17 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\abc\AppData\Local\Temp\pip-install-3n9shirh\sentencepiece_fc383392079e43b6a8c226f0484c0928\setup.py", line 126, in <module>
subprocess.check_call([
File "C:\Users\abc\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 408, in check_call
retcode = call(*popenargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\abc\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 389, in call
with Popen(*popenargs, **kwargs) as p:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\abc\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 1026, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\abc\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 1538, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified
[end of output]
</code></pre>
<pre><code>note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
|
<p>You need to use Python 3.11 (or any version in the 3.8-3.11 range) to install <code>sentence-transformers</code>.</p>
<p>It depends on PyTorch:</p>
<p><strong><a href="https://pytorch.org/get-started/locally/#windows-python" rel="nofollow noreferrer">https://pytorch.org/get-started/locally/#windows-python</a></strong></p>
<blockquote>
<p>Currently, PyTorch on Windows only supports Python 3.8-3.11; Python 2.x is not supported.</p>
</blockquote>
<p><strong><a href="https://pypi.org/project/sentence-transformers/" rel="nofollow noreferrer">https://pypi.org/project/sentence-transformers/</a></strong></p>
<blockquote>
<p>We recommend Python 3.8 or higher, PyTorch 1.11.0 or higher and transformers v4.32.0 or higher. The code does not work with Python 2.7</p>
</blockquote>
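<p>A quick stdlib check of whether the running interpreter falls in that range (the bounds mirror the quoted PyTorch note and may widen in newer releases):</p>

```python
import sys

def supported_for_pytorch(version_info):
    """True if the interpreter version is in PyTorch's 3.8-3.11 Windows range."""
    return (3, 8) <= tuple(version_info[:2]) <= (3, 11)

print(sys.version_info[:2], supported_for_pytorch(sys.version_info))
```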
| 485
|
transformers
|
Monad Transformers in C#
|
https://stackoverflow.com/questions/20353256/monad-transformers-in-c
|
<p>I am working on using monad transformers in C#.<br>
I would like to know if the following code I present, shows that I have understood this.<br>
I am fairly new to this so any feedback / comments are really welcome.<br>
This example is just for wrapping a maybe monad in a validation monad.</p>
<pre><code>using System;
using NUnit.Framework;
namespace Monads
{
public static class MaybeExtensions
{
public static IMaybe<T> ToMaybe<T>(this T value)
{
if (value == null)
return new None<T>();
return new Just<T>(value);
}
}
public interface IMaybe<T>
{
IMaybe<U> Select<U>(Func<T, U> f);
IMaybe<U> SelectMany<U>(Func<T, IMaybe<U>> f);
U Fold<U>(Func<U> error, Func<T, U> success);
}
public class Just<T> : IMaybe<T>
{
public Just(T value)
{
this.value = value;
}
public IMaybe<U> Select<U>(Func<T, U> f)
{
return f(value).ToMaybe();
}
public IMaybe<U> SelectMany<U>(Func<T, IMaybe<U>> f)
{
return f(value);
}
public U Fold<U>(Func<U> error, Func<T, U> success)
{
return success(value);
}
public IValidation<U, T> ToValidationT<U>()
{
return new ValidationMaybeT<U, T>(this, default(U));
}
private readonly T value;
}
public class None<T> : IMaybe<T>
{
public IMaybe<U> Select<U>(Func<T, U> f)
{
return new None<U>();
}
public IMaybe<U> SelectMany<U>(Func<T, IMaybe<U>> f)
{
return new None<U>();
}
public U Fold<U>(Func<U> error, Func<T, U> success)
{
return error();
}
public IValidation<U, T> ToValidationT<U>(U exceptionalValue)
{
return new ValidationMaybeT<U, T>(this, exceptionalValue);
}
}
public class Customer
{
public Customer(string name)
{
Name = name;
}
public string Name { get; set; }
}
public interface IValidation<T, U>
{
IValidation<T, V> Select<V>(Func<U, V> f);
IValidation<T, V> SelectMany<V>(Func<U, IValidation<T, V>> f);
}
public class ValidationError<T, U> : IValidation<T, U>
{
public ValidationError(T error)
{
Error = error;
}
public IValidation<T, V> Select<V>(Func<U, V> f)
{
return new ValidationError<T, V>(Error);
}
public IValidation<T, V> SelectMany<V>(Func<U, IValidation<T, V>> f)
{
return new ValidationError<T, V>(Error);
}
public T Error { get; private set; }
}
public class ValidationSuccess<T, U> : IValidation<T, U>
{
public ValidationSuccess(U value)
{
Result = value;
}
public IValidation<T, V> Select<V>(Func<U, V> f)
{
return new ValidationSuccess<T, V>(f(Result));
}
public IValidation<T, V> SelectMany<V>(Func<U, IValidation<T, V>> f)
{
return f(Result);
}
public U Result { get; private set; }
}
public class ValidationMaybeT<T, U> : IValidation<T, U>
{
public ValidationMaybeT(IMaybe<U> value, T error)
{
Value = value;
Error = error;
}
public IValidation<T, V> Select<V>(Func<U, V> f)
{
return Value.Fold<IValidation<T, V>>(() => new ValidationError<T, V>(Error), s => new ValidationSuccess<T, V>(f(s)));
}
ValidationError<T, V> SelectManyError<V>()
{
return new ValidationError<T, V>(Error);
}
public IValidation<T, V> SelectMany<V>(Func<U, IValidation<T, V>> f)
{
return Value.Fold(() => SelectManyError<V>(), s => f(s));
}
public IMaybe<U> Value { get; private set; }
public T Error { get; private set; }
}
public interface ICustomerRepository
{
IValidation<Exception, Customer> GetById(int id);
}
public class CustomerRepository : ICustomerRepository
{
public IValidation<Exception, Customer> GetById(int id)
{
if (id < 0)
return new None<Customer>().ToValidationT<Exception>(new Exception("Customer Id less than zero"));
return new Just<Customer>(new Customer("Structerre")).ToValidationT<Exception>();
}
}
public interface ICustomerService
{
void Delete(int id);
}
public class CustomerService : ICustomerService
{
public CustomerService(ICustomerRepository customerRepository)
{
this.customerRepository = customerRepository;
}
public void Delete(int id)
{
customerRepository.GetById(id)
.SelectMany(x => SendEmail(x).SelectMany(y => LogResult(y)));
}
public IValidation<Exception, Customer> LogResult(Customer c)
{
Console.WriteLine("Deleting: " + c.Name);
return new ValidationSuccess<Exception, Customer>(c);
//return new ValidationError<Exception, Customer>(new Exception("Unable write log"));
}
private IValidation<Exception, Customer> SendEmail(Customer c)
{
Console.WriteLine("Emailing: " + c.Name);
return new ValidationSuccess<Exception, Customer>(c);
}
ICustomerRepository customerRepository;
}
[TestFixture]
public class MonadTests
{
[Test]
public void Testing_With_Maybe_Monad()
{
new CustomerService(new CustomerRepository()).Delete(-1);
}
}
}
</code></pre>
<p>Another smaller sub question is if C# had higher kinded types could I just implement this class once (ValidationT) and it work for all other wrapped monads or is this incorrect?</p>
|
<p>Almost, is the quickest answer. Your <code>ValidationMaybeT</code> is storing the value of the <code>Maybe</code>, whereas a true monad transformer would have the behaviour of the <code>Maybe</code> and the <code>Validation</code> monad, and could modify the default behaviour of the wrapped monad if required. </p>
<p>This is a very manual way of doing it, which I wouldn't necessarily recommend, it gets very messy, very quickly. C#'s lack of higher-kinded polymorphism will trip you up at every opportunity.</p>
<p>The closest I managed (even then it's not a proper monad transformer system) is with my library: <a href="https://github.com/louthy/language-ext" rel="noreferrer">Language-Ext</a></p>
<p>There are 13 monads in the project (Option, Map, Lst, Either, Try, Reader, etc.), and I implement a standard set of functions for all of them:</p>
<pre><code>Sum
Count
Bind
Exists
Filter
Fold
ForAll
Iter
Map
Select
SelectMany
Where
Lift
</code></pre>
<p>These functions are the most useful in functional programming, and will pretty much allow you to do any operation needed.</p>
<p>So with all monads implementing these standard functions, they become a higher-kinded type. Not that the compiler knows this, they are all just part of the same 'set'.</p>
<p>Then I wrote a T4 template to generate transformer functions as extension methods (they have a <code>T</code> suffix), for every combination of monad and function in the 'higher-kinded type'.</p>
<p>So for example:</p>
<pre><code>var list = List(Some(1),None,Some(2),None,Some(3));
var total = list.SumT();
</code></pre>
<p>The code above results in <code>6</code>. The definition for <code>SumT</code> is:</p>
<pre><code>int SumT(Lst<Option<int>> self) =>
self.Map( s => s.Sum() ).Sum();
</code></pre>
<p><code>FilterT</code> for example will also work on the inner monad:</p>
<pre><code>var list = List(Some(1),None,Some(2),None,Some(3));
list = list.FilterT(x => x > 2);
</code></pre>
<p>So the extension method route is a very good one. Instead of creating a new type, use:</p>
<pre><code>IValidation<IMaybe<T>>
</code></pre>
<p>Then provide the <code>Maybe</code> extension methods for <code>IValidation<IMaybe<T>></code></p>
<p>You can either do what I did and auto-generate from a standard set, or write them manually. It then keeps your <code>Maybe</code> and <code>Validation</code> implementations clean and the bespoke transformer functionality separate.</p>
<p>If you're interested, this is the T4 template I used to generate the transformer methods (it's pretty ramshackle to be honest): <a href="https://github.com/louthy/language-ext/blob/master/LanguageExt.Core/HKT.tt" rel="noreferrer">LanguageExt.Core/HKT.tt</a></p>
<p>And this is the generated code: <a href="https://github.com/louthy/language-ext/blob/master/LanguageExt.Core/HKT.cs" rel="noreferrer">LanguageExt.Core/HKT.cs</a></p>
<p>Before I did the HKT stuff above I did a similar method to what you're attempting, I have a monad called <code>TryOption<T></code> which is a <code>Try</code> and an <code>Option</code>. But with the new HKT stuff I can now write <code>Try<Option<T>></code>. The original implementation is <a href="https://github.com/louthy/language-ext/blob/master/LanguageExt.Core/TryOption.cs" rel="noreferrer">here</a>:</p>
<p>Anyway, I hope that helps!</p>
| 486
|
transformers
|
Loading a safetensor file in transformers
|
https://stackoverflow.com/questions/76247802/loading-a-safetensor-file-in-transformers
|
<p>I have downloaded this <a href="https://huggingface.co/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g" rel="nofollow noreferrer">model</a> from huggingface. I am trying to load this model in transformers so I can do inferencing:</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("path_to/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g")
model = AutoModelForCausalLM.from_pretrained("path_to/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g")
</code></pre>
<p>But I get an error saying that it expects a .bin, .h5, or .ckpt file, whereas the folder above contains only .safetensors and .pt files.</p>
<p>How do I load the model?</p>
|
<p>you need to tell it to look for the safetensors</p>
<pre><code>AutoModelForCausalLM.from_pretrained(
<path>,
use_safetensors=True,
<rest_of_args>
)
</code></pre>
<p>This assumes, of course, that you have the safetensors weight map in the same folder:</p>
<pre><code>model.safetensors.index.json
</code></pre>
| 487
|
transformers
|
What is difference between those two monad transformers?
|
https://stackoverflow.com/questions/36671136/what-is-difference-between-those-two-monad-transformers
|
<p>I'm familiar with monads, e.g. <code>Reader</code>, <code>Error</code>, and <code>State</code>. Transformers, however, are very new to me, hence this question.</p>
<p>Intuitively, I can tell there is a difference between the following two monad transformers, but I can't quite pinpoint what it is...</p>
<pre><code>ReaderT Env (ErrorT String (StateT Integer Identity)) a
ReaderT Env (StateT Integer (ErrorT String Identity)) a
</code></pre>
<p>What makes these two monad transformers different?</p>
|
<p>To simplify, compare only the relevant part (which isn't trivially the same):</p>
<pre><code>MaybeT (StateT Integer Identity) a
StateT Integer (MaybeT Identity) a
</code></pre>
<p>We know that (ignoring the <code>newtype</code> abstractions)</p>
<pre><code>type MaybeT m a = m (Maybe a)
type StateT s m a = s -> m (a, s)
</code></pre>
<p>Hence, the two transformer stack come out to be</p>
<pre><code>MaybeT (Λb. Integer -> (b, Integer)) a
≡ Integer -> (Maybe a, Integer)
</code></pre>
<p>and</p>
<pre><code>StateT Integer (Λb. Maybe b) a
≡ Integer -> Maybe (a, Integer)
</code></pre>
<p>So, these aren't exactly the same, the difference being that the latter only yields the state-integer inside of the <code>Maybe</code>. This means, if the <code>MaybeT</code> is down in the stack then the computation must immediately terminate as soon as you get a <code>Nothing</code>, whereas if the <code>MaybeT</code> is used on top then the <code>State</code> can still keep on going.</p>
<p>This is even more drastic with <code>IO</code>: once you get an exception, you <em>can't</em> possibly continue – exceptions can only be caught in <code>IO</code> itself. This is one reason why there can be no <code>IOT</code> transformer.</p>
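<p>The two expansions can also be modeled in plain Python (an illustrative sketch only, with <code>None</code> playing the role of <code>Nothing</code>):</p>

```python
# Mirrors the expansion above:
#   MaybeT (State Integer) a  ~  Integer -> (Maybe a, Integer)
#   StateT Integer Maybe a    ~  Integer -> Maybe (a, Integer)

def maybet_on_top(n):
    """MaybeT over State: the value may be Nothing, but state always survives."""
    n = n + 1                           # the state update happens regardless
    value = n if n % 2 == 0 else None   # the Maybe layer may fail
    return (value, n)                   # Integer -> (Maybe a, Integer)

def maybet_at_bottom(n):
    """State over Maybe: a failure discards the state as well."""
    n = n + 1
    if n % 2 != 0:
        return None                     # Integer -> Maybe (a, Integer)
    return (n, n)

print(maybet_on_top(2))     # (None, 3): the computation failed, state survived
print(maybet_at_bottom(2))  # None: the failure wipes out the state too
```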
| 488
|
transformers
|
Chatbot using Huggingface Transformers
|
https://stackoverflow.com/questions/70055966/chatbot-using-huggingface-transformers
|
<p>I would like to use Huggingface Transformers to implement a chatbot. Currently, I have the code shown below. The transformer model already takes into account the history of past user input.</p>
<p>Is there something else (additional code) I have to take into account for building the chatbot?</p>
<p>Second, how can I modify my code to run with TensorFlow instead of PyTorch?</p>
<p>Later on, I also plan to fine-tune the model on other data. I also plan to test different models such as BlenderBot and GPT2. I think to test this different models it should be as easy as replacing the corresponding model in <code>AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")</code> and <code>AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")</code></p>
<pre><code>from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generate a response while limiting the total chat history to 1000 tokens
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
# pretty print last output tokens from bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
</code></pre>
|
<p>Here is an example of using the <code>DialoGPT</code> model with Tensorflow:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import TFAutoModelForCausalLM, AutoTokenizer, BlenderbotTokenizer, TFBlenderbotForConditionalGeneration
import tensorflow as tf
chat_bots = {
    'BlenderBot': [BlenderbotTokenizer.from_pretrained('facebook/blenderbot-400M-distill'), TFBlenderbotForConditionalGeneration.from_pretrained('facebook/blenderbot-400M-distill')],
'DialoGPT': [AutoTokenizer.from_pretrained("microsoft/DialoGPT-small"), TFAutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")],
}
key = 'DialoGPT'
tokenizer, model = chat_bots[key]
for step in range(5):
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='tf')
if step > 0:
bot_input_ids = tf.concat([chat_history_ids, new_user_input_ids], axis=-1)
else:
bot_input_ids = new_user_input_ids
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
print(key + ": {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
</code></pre>
<pre><code>>> User:How are you?
DialoGPT: I'm here
>> User:Why are you here
DialoGPT: I'm here
>> User:But why
DialoGPT: I'm here
>> User:Where is here
DialoGPT: Where is where?
>> User:Here
DialoGPT: Where is here?
</code></pre>
<p>If you want to compare different chatbots, you might want to adapt their decoder parameters, because they are not always identical. For example, using <code>BlenderBot</code> and a <code>max_length</code> of 50 you get this kind of response with the current code:</p>
<pre><code>>> User:How are you?
BlenderBot: ! I am am great! how how how are are are???
</code></pre>
<p>In general, you should ask yourself which special characters are important for a chatbot (depending on your domain) and which characters should / can be omitted?</p>
<p>You should also experiment with different decoding methods such as greedy search, beam search, random sampling, top-k sampling, and nucleus sampling and find out what works best for your use case. For more information on this topic check out this <a href="https://huggingface.co/blog/how-to-generate" rel="nofollow noreferrer">post</a></p>
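<p>Those decoding strategies can be illustrated on a toy next-token distribution, independent of any model (a pure-Python sketch; a real decoder works on logits over the full vocabulary):</p>

```python
import random

# Toy next-token distribution; a real model produces one of these per step.
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}

def greedy(dist):
    """Greedy search: always pick the single most probable token."""
    return max(dist, key=dist.get)

def top_k_sample(dist, k, rng):
    """Top-k sampling: keep the k most probable tokens, renormalize, draw one."""
    top = sorted(dist.items(), key=lambda kv: kv[1], reverse=True)[:k]
    tokens, weights = zip(*top)
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print(greedy(probs))                      # always "the"
print(top_k_sample(probs, k=2, rng=rng))  # drawn only from {"the", "a"}
```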
| 489
|
transformers
|
sklearn ColumnTransformer: Duplicate columns in transformers
|
https://stackoverflow.com/questions/63475704/sklearn-columntransformer-duplicate-columns-in-transformers
|
<p>I am looking for help building a data preprocessing pipeline using sklearn's ColumnTransformer functions.</p>
<p>Currently my pipeline looks something like this:</p>
<pre><code>from scipy.stats.mstats import winsorize
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
ColumnTransformer(remainder='passthrough',
transformers=[
('Winsorize', FunctionTransformer(winsorize,
kw_args={'axis': 0, 'inplace': False, 'limits': [0, 0.01]}), ['feat_1','feat_2']),
('num_impute', SimpleImputer(strategy='median'), ['feat_3', 'feat_4']),
])
</code></pre>
<p>Note that each transformer is provided a unique set of features.</p>
<p>The issue I am encountering is how to apply stacked transformations to the same features. For example,</p>
<pre><code>ColumnTransformer(remainder='passthrough',
transformers=[
('Winsorize', FunctionTransformer(winsorize,
kw_args={'axis': 0, 'inplace': False, 'limits': [0, 0.01]}), ['feat_1','feat_2']),
('num_impute', SimpleImputer(strategy='median'), ['feat_1', 'feat_2', 'feat_3']),
])
</code></pre>
<p>Note that feat_1 and feat_2 where provided for both transformers.</p>
<p>A pipeline like this will create duplicate columns for feat_1 and feat_2 (two columns based on Winsorize, and two columns based on num_impute).</p>
|
<p>From what I understand, you can use two <code>ColumnTransformer</code> and one <code>FeatureUnion</code> to achieve what you want. One <code>ColumnTransformer</code> will have <code>remainder='passthrough'</code> to keep all columns except the ones being transformed, and the other will have <code>remainder='drop'</code>. That could look like this:</p>
<pre><code>from sklearn.pipeline import FeatureUnion
ct1 = ColumnTransformer(
remainder='passthrough',
transformers=[(
'Winsorize', FunctionTransformer(
winsorize,
kw_args={'axis': 0, 'inplace': False, 'limits': [0, 0.01]}
),
['feat_1', 'feat_2']
)]
)
ct2 = ColumnTransformer(
remainder='drop',
transformers=[('num_impute', SimpleImputer(strategy='median'), ['feat_1', 'feat_2', 'feat_3'])],
)
union = FeatureUnion([('ct1', ct1), ('ct2', ct2)])
</code></pre>
| 490
|
transformers
|
Pyspark: save transformers
|
https://stackoverflow.com/questions/36136662/pyspark-save-transformers
|
<p>I am using some Pyspark transformers such as StringIndexer, StandardScaler, and more. I first apply them to the training set, and later I want to use the same transformation objects (the same fitted parameters of StringIndexerModel, StandardScalerModel) to transform the test set. Therefore, I am looking for a way to save those transformers to a file. However, I cannot find any related method on the transformers themselves, only on ml estimators such as LogisticRegression. Do you know any possible way to do that? Thanks.</p>
|
<p>I found an easy solution.</p>
<p>Save the indexer model to a file (on HDFS). </p>
<pre><code>writer = indexerModel._call_java("write")
writer.save("indexerModel")
</code></pre>
<p>Load the indexer model from a file (saved on HDFS). </p>
<pre><code>indexer = StringIndexerModel._new_java_obj("org.apache.spark.ml.feature.StringIndexerModel.load", "indexerModel")
indexerModel = StringIndexerModel(indexer)
</code></pre>
| 491
|
transformers
|
Higher Accuracy using SimpleTransformers vs Transformers Library with BERT
|
https://stackoverflow.com/questions/64595546/higher-accuracy-using-simpletransformers-vs-transformers-library-with-bert
|
<p>I am working on a project for text classification using BERT</p>
<p>I am getting ca. 90% accuracy by using simple transformers.
But I only get like 60% using my own training for-loop (not published here) or using the trainer module from the transformers library.
Both are done with the default parameters of simple transformers.</p>
<p>I am really struggling to understand why there is such a difference in performance</p>
<p>Dataset is from Kaggle: <a href="https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news" rel="nofollow noreferrer">https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news</a></p>
<p>Imports:</p>
<pre><code>from transformers import BertForSequenceClassification, AdamW, BertTokenizer, get_linear_schedule_with_warmup, Trainer, TrainingArguments
import torch
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler, TensorDataset
import pandas as pd
from pathlib import Path
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
import numpy as np
from torch.nn import functional as F
from collections import defaultdict
import random
from simpletransformers.classification import ClassificationModel
</code></pre>
<p>Data Pre-Processing:</p>
<pre><code>#loading phrase bank dataset
phrase_bank_dataset = "all-data.csv"
phrase_bank_dataset_file = Path(phrase_bank_dataset)
file_loaded = False
while not file_loaded:
if phrase_bank_dataset_file.exists():
phrase_bank_dataset = pd.read_csv(phrase_bank_dataset, encoding='latin-1')
phrase_bank_dataset = phrase_bank_dataset.values.tolist()
file_loaded = True
print("Dataset Loaded")
else:
print("File not Found")
#correcting the format of phrase bank dataset
phrase_dataset = pd.DataFrame(columns=["news", "sentiment"])
for ele in phrase_bank_dataset:
news = ele[1]
#converting sentiment text into numbers
sentiment = 0 if ele[0] == 'negative' else 1 if ele[0] == 'neutral' else 2
row = [news, sentiment]
phrase_dataset.loc[len(phrase_dataset)] = row
print(phrase_dataset)
</code></pre>
<p>Simple Transformers Code:</p>
<pre><code>model = ClassificationModel('bert', 'bert-base-cased', num_labels=3,use_cuda=True)
train,eva = train_test_split(labeled_dataset,test_size = 0.2)
train_df = pd.DataFrame({
'text': train['news'],
'label': train['sentiment']
})
eval_df = pd.DataFrame({
'text': eva['news'],
'label': eva['sentiment']
})
model.train_model(train_df)
result, model_outputs, wrong_predictions = model.eval_model(eval_df)
lst = []
for arr in model_outputs:
lst.append(np.argmax(arr))
true = eval_df['label'].tolist()
predicted = lst
sklearn.metrics.accuracy_score(true,predicted)
</code></pre>
<p>Transformers Trainer Code:</p>
<pre><code>tokenizer = BertTokenizer.from_pretrained('bert-base-cased', stride = 0.8)
model = BertForSequenceClassification.from_pretrained('bert-base-cased', num_labels = 3)
if torch.cuda.is_available():
print("\nUsing: ", torch.cuda.get_device_name(0))
device = torch.device('cuda')
else:
print("\nUsing: CPU")
device = torch.device('cpu')
model = model.to(device)
#custom dataset class
class NewsSentimentDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
#method for tokenizing dataset list
def tokenize_headlines(headlines, labels, tokenizer):
encodings = tokenizer.batch_encode_plus(
headlines,
add_special_tokens = True,
pad_to_max_length = True,
return_attention_mask = True
)
dataset = NewsSentimentDataset(encodings, labels)
return dataset
#splitting dataset into training and validation set
all_headlines = phrase_dataset['news'].tolist()
all_labels = phrase_dataset['sentiment'].tolist()
train_headlines, val_headlines, train_labels, val_labels = train_test_split(all_headlines, all_labels, test_size=.2)
val_dataset = tokenize_headlines(val_headlines, val_labels, tokenizer)
train_dataset = tokenize_headlines(train_headlines, val_labels, tokenizer)
#data loader
train_batch_size = 8
val_batch_size = 8
train_data_loader = DataLoader(train_dataset, batch_size = train_batch_size, sampler=RandomSampler(train_dataset))
val_data_loader = DataLoader(val_dataset, batch_size = val_batch_size, sampler=SequentialSampler(val_dataset))
#optimizer and scheduler
num_epochs = 1
num_steps = len(train_data_loader) * num_epochs
optimizer = AdamW(model.parameters(), lr=4e-5, eps=1e-8)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=30, num_training_steps=num_steps)
#training and evaluation with trainer moduel from huggingfaces
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='micro')
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_steps=0, # number of warmup steps for learning rate scheduler
weight_decay=0, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset , # evaluation dataset
compute_metrics=compute_metrics
)
trainer.train()
trainer.evaluate()
</code></pre>
| 492
|
|
transformers
|
Hugging-Face Transformers: Loading model from path error
|
https://stackoverflow.com/questions/62641972/hugging-face-transformers-loading-model-from-path-error
|
<p>I am pretty new to Hugging-Face transformers. I am facing the following issue when I try to load <strong>xlm-roberta-base</strong> model from a given path:</p>
<pre><code>>> tokenizer = AutoTokenizer.from_pretrained(model_path)
>> Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/anaconda3/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 182, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/user/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 309, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/home/user/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 458, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/user/anaconda3/lib/python3.7/site-packages/transformers/tokenization_roberta.py", line 98, in __init__
**kwargs,
File "/home/user/anaconda3/lib/python3.7/site-packages/transformers/tokenization_gpt2.py", line 133, in __init__
with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
</code></pre>
<p>However, if I load it by its name, there is no problem:</p>
<pre><code>>> tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
</code></pre>
<p>I would appreciate any help.</p>
|
<p>I assume you have created that directory as described in the <a href="https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig.save_pretrained" rel="nofollow noreferrer">documentation</a> with :</p>
<pre class="lang-py prettyprint-override"><code>tokenizer.save_pretrained('YOURPATH')
</code></pre>
<p>There is currently an <a href="https://github.com/huggingface/transformers/issues/4197" rel="nofollow noreferrer">issue</a> under investigation which only affects the AutoTokenizers but not the underlying tokenizers like (XLMRobertaTokenizer). For example the following should work:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import XLMRobertaTokenizer
tokenizer = XLMRobertaTokenizer.from_pretrained('YOURPATH')
</code></pre>
<p>To work with the AutoTokenizer you also need to save the config to load it offline:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import AutoTokenizer, AutoConfig
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
config = AutoConfig.from_pretrained('xlm-roberta-base')
tokenizer.save_pretrained('YOURPATH')
config.save_pretrained('YOURPATH')
tokenizer = AutoTokenizer.from_pretrained('YOURPATH')
</code></pre>
<p>I recommend to <strong>either</strong> use a different path for the tokenizers and the model <strong>or</strong> to keep the config.json of your model because some modifications you apply to your model will be stored in the config.json which is created during <code>model.save_pretrained()</code> and will be overwritten when you save the tokenizer as described above after your model (i.e. you won't be able to load your modified model with tokenizer config.json).</p>
| 493
|
transformers
|
Monad vs Monad transformers
|
https://stackoverflow.com/questions/45082178/monad-vs-monad-transformers
|
<p>"Monads allow the programmer to build up computations using sequential building blocks", so they allow us to combine computations. If this is the case, then why can the following code not be run? </p>
<pre><code>import Control.Monad.Trans.State
gt :: State String String
gt = do
name <- get
putStrLn "HI" -- Here is the source of problem!
put "T"
return ("hh..." ++ name ++ "...!")
main= do
print $ execState gt "W.."
print $ evalState gt "W.."
</code></pre>
<ul>
<li><p>Why cannot we put different functions in a monad (like the above example)?</p></li>
<li><p>Why do we need an additional layer, i.e. transformers to combine the monads? </p></li>
</ul>
|
<p>Monad transformers <em>are</em> the mechanisms for putting different functions in a monad.</p>
<p>A monad only knows how to combine computations that are within the abilities of that monad. You can't do I/O in a <code>State</code> monad, but you can in a <code>StateT s IO a</code> monad. However, you will need to use <a href="https://hackage.haskell.org/package/transformers-0.4.2.0/docs/Control-Monad-IO-Class.html" rel="noreferrer"><code>liftIO</code></a> on the computations that do I/O.</p>
<pre><code>import Control.Monad.Trans.State
import Control.Monad.IO.Class (liftIO)
gt :: StateT String IO String
gt = do
name <- get
liftIO $ putStrLn "HI"
put "T"
return ("hh..." ++ name ++ "...!")
main = do
print =<< execStateT gt "W.."
print =<< evalStateT gt "W.."
</code></pre>
| 494
|
transformers
|
Monad Transformers and lift function
|
https://stackoverflow.com/questions/17204573/monad-transformers-and-lift-function
|
<p>Why isn't it necessary to use lift for executing a function in an internal monad transformer environment, except for IO? I mean, if I have StateT over WriterT and WriterT over ReaderT, why can I do this?</p>
<pre><code>tell $ {- any code here for the Writer -}
foo <- asks {- This for the reader -}
and so on...
</code></pre>
<p>instead of</p>
<pre><code>lift $ tell $ {- code ... -}
...
</code></pre>
<p>Is there an special explanation or it is only the way the Monad Transformers were written? </p>
|
<p>It's because the Monad Transformer Library (MTL) recognizes that it's quite common for you to stack monads in just that way, so it doesn't define <code>tell</code> as just some function <code>(Monoid w) => w -> Writer w ()</code>. </p>
<p>Instead they have <a href="http://hackage.haskell.org/packages/archive/mtl/latest/doc/html/Control-Monad-Writer-Class.html#t%3aMonadWriter"><code>MonadWriter</code></a> which is defined as <em>a typeclass with tell as a function in it</em>. Then they define a ton of instances of <code>MonadWriter</code>: <code>ReaderT</code>, <code>IO</code>, <code>Writer</code> (duh) etc. And thus you avoid the annoying repetition of <code>lift</code>.</p>
<p>This is quite common; any monad transformer (in MTL) will have a <code>Control.Monad.***.Class</code> which has this sort of typeclass. </p>
| 495
|
transformers
|
Fine-tuning BERT on SequenceClassification using Transformers framework
|
https://stackoverflow.com/questions/64805769/fine-tuning-bert-on-sequenceclassification-using-transformers-framework
|
<p>I am currently fine-tuning a BERT model on a sequence classification task. To do this, I am using the transformers framework. This requires a Batch input in a Trainer: <a href="https://huggingface.co/transformers/_modules/transformers/trainer.html" rel="nofollow noreferrer">https://huggingface.co/transformers/_modules/transformers/trainer.html</a></p>
<p>The way fine-tuning works is described here: <a href="https://huggingface.co/transformers/custom_datasets.html" rel="nofollow noreferrer">https://huggingface.co/transformers/custom_datasets.html</a>
I think the Batch needs to look like I created it, but for some reason I keep getting errors. The picture shows a single item from the dataset.</p>
<p>If I add the labels as a tensor, the part of the model that converts labels to tensors gives an error. But when I add the labels as a list I get: Expected input batch_size (16) to match target batch_size (2016).
What is the correct way to give a Batch to the BERT model?</p>
<p><a href="https://i.sstatic.net/d8hop.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d8hop.png" alt="What my dataSet object looks like" /></a></p>
<p>Here is how I initialise the model:</p>
<pre><code>training_args = TrainingArguments(
output_dir='C:', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='C:', # directory for storing logs
logging_steps=10,
)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
data_collator = DataCollatorForTokenClassification(tokenizer)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
data_collator=data_collator, #
train_dataset=train_dataset, # training dataset
eval_dataset=test_dataset # evaluation dataset
)
trainer.train()
</code></pre>
| 496
|
|
transformers
|
Repo id error when using hugging face transformers
|
https://stackoverflow.com/questions/76900349/repo-id-error-when-using-hugging-face-transformers
|
<p>I keep getting this error when I try to use hugging face transformers library.</p>
<pre><code>huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'C:/Users/FZH91R/PycharmProjects/DudeWheresMyHealing/Pytorch/tf_model.h5'. Use `repo_type` argument if needed.
</code></pre>
<p>Here is my code:</p>
<pre><code> CODE_DIR = os.path.dirname(__file__)
ROOT_DIR = os.path.dirname(CODE_DIR)
MODEL_DIR = os.path.join(ROOT_DIR, "Pytorch")
CONFIG_DIR = os.path.join(ROOT_DIR, "Pytorch")
# Load the model to be used.
path_model = MODEL_DIR + "\tf_model.h5"
print(path_model)
path_config = CONFIG_DIR + "\config.json"
print(path_config)
# Load model from a local source.
tokenizer = AutoTokenizer.from_pretrained(path_model, local_files_only=True)
model = AutoModel.from_pretrained(path_model, config=path_config, local_files_only=True)
</code></pre>
<p>My versions:</p>
<p>transformers-4.31.0</p>
<p>How can I fix this error?</p>
| 497
|
|
transformers
|
Monad transformers explained in Javascript?
|
https://stackoverflow.com/questions/42783479/monad-transformers-explained-in-javascript
|
<p>I'm having a hard time understanding monad transformers, partly because most examples and explanations use Haskell.</p>
<p>Could anyone give an example of creating a transformer to merge a Future and an Either monad in Javascript and how it can be used.</p>
<p>If you can use the <code>ramda-fantasy</code> implementation of these monads it would be even better.</p>
|
<p><strong>Rules first</strong></p>
<p>First we have the <em>Natural Transformation Law</em></p>
<ul>
<li>Some functor <code>F</code> of <code>a</code>, mapped with function <code>f</code>, yields <code>F</code> of <code>b</code>, then naturally transformed, yields some functor <code>G</code> of <code>b</code>.</li>
<li>Some functor <code>F</code> of <code>a</code>, naturally transformed yields some functor <code>G</code> of <code>a</code>, then mapped with some function <code>f</code>, yields <code>G</code> of <code>b</code></li>
</ul>
<p>Choosing either path (map first, transform second, <strong>or</strong> transform first, map second) will lead to the same end result, <code>G</code> of <code>b</code>.</p>
<p><a href="https://i.sstatic.net/FdSWk.jpg" rel="noreferrer"><img src="https://i.sstatic.net/FdSWk.jpg" alt="natural transformation law"></a></p>
<pre><code>nt(x.map(f)) == nt(x).map(f)
</code></pre>
<hr>
<p><strong>Getting real</strong></p>
<p>Ok, now let's do a practical example. I'm gonna explain the code bit-by-bit and then I'll have a complete runnable example at the very end.</p>
<p>First we'll implement Either (using <code>Left</code> and <code>Right</code>)</p>
<pre><code>const Left = x => ({
map: f => Left(x),
fold: (f,_) => f(x)
})
const Right = x => ({
map: f => Right(f(x)),
fold: (_,f) => f(x),
})
</code></pre>
<p>Then we'll implement <code>Task</code></p>
<pre><code>const Task = fork => ({
fork,
// "chain" could be called "bind" or "flatMap", name doesn't matter
chain: f =>
Task((reject, resolve) =>
fork(reject,
x => f(x).fork(reject, resolve)))
})
Task.of = x => Task((reject, resolve) => resolve(x))
Task.rejected = x => Task((reject, resolve) => reject(x))
</code></pre>
<p>Now let's start defining some pieces of a theoretical program. We'll have a database of users where each user has a bff (best friend forever). We'll also define a simple <code>Db.find</code> function that returns a Task of looking up a user in our database. This is similar to any database library that returns a Promise.</p>
<pre><code>// fake database
const data = {
"1": {id: 1, name: 'bob', bff: 2},
"2": {id: 2, name: 'alice', bff: 1}
}
// fake db api
const Db = {
find: id =>
Task((reject, resolve) =>
resolve((id in data) ? Right(data[id]) : Left('not found')))
}
</code></pre>
<p>OK, so there's one little twist. Our <code>Db.find</code> function returns a <code>Task</code> of an <code>Either</code> (<code>Left</code> or <code>Right</code>). This is mostly for demonstration purposes, but also could be argued as a good practice. Ie, we might not consider user-not-found scenario an error, thus we don't want to <code>reject</code> the task – instead, we gracefully handle it later by <em>resolving</em> a <code>Left</code> of <code>'not found'</code>. We might use <code>reject</code> in the event of a different error, such as a failure to connect to the database or something.</p>
<hr>
<p><strong>Making goals</strong></p>
<p>The goal of our program is to take a given user id, and look up that user's bff.</p>
<p>We're ambitious, but naïve, so we first try something like this</p>
<pre><code>const main = id =>
Db.find(1) // Task(Right(User))
.map(either => // Right(User)
either.map(user => // User
Db.find(user.bff))) // Right(Task(Right(user)))
</code></pre>
<p>Yeck! a <code>Task(Right(Task(Right(User))))</code> ... this got out of hand very quickly. It will be a total nightmare working with that result...</p>
<hr>
<p><strong>Natural transformation</strong></p>
<p>Here comes our first natural transformation <code>eitherToTask</code>:</p>
<pre><code>const eitherToTask = e =>
e.fold(Task.rejected, Task.of)
// eitherToTask(Left(x)) == Task.rejected(x)
// eitherToTask(Right(x)) == Task.of(x)
</code></pre>
<p>Let's watch what happens when we <code>chain</code> this transformation on to our <code>Db.find</code> result</p>
<pre><code>const main = id =>
Db.find(id) // Task(Right(User))
.chain(<b>eitherToTask</b>) // <b>???</b>
...</code></pre>
<p>So what is <code>???</code>? Well <code>Task#chain</code> expects your function to return a <code>Task</code> and then it squishes the current Task, and the newly returned Task together. So in this case, we go:</p>
<pre><code>// Db.find // eitherToTask // chain
Task(Right(User)) -> Task(Task(User)) -> Task(User)
</code></pre>
<p>Wow. This is already a huge improvement because it's keeping our data much flatter as we move through the computation. Let's keep going ...</p>
<pre><code>const main = id =>
Db.find(id) // Task(Right(User))
.chain(eitherToTask) // <b>Task(User)</b>
<b>.chain(user => Db.find(user.bff))</b> // ???
...</code></pre>
<p>So what is <code>???</code> in this step? We know that <code>Db.find</code> returns <code>Task(Right(User)</code> but we're <code>chain</code>ing, so we know we'll squish at least two <code>Task</code>s together. That means we go:</p>
<pre><code>// Task of Db.find // chain
Task(Task(Right(User))) -> Task(Right(User))
</code></pre>
<p>And look at that, we have another <code>Task(Right(User))</code> which we already know how to flatten. <code>eitherToTask</code>!</p>
<pre><code>const main = id =>
Db.find(id) // Task(Right(User))
.chain(eitherToTask) // Task(User)
.chain(user => Db.find(user.bff)) // <b>Task(Right(User))</b>
<b>.chain(eitherToTask)</b> // Task(User) !!!</code></pre>
<p>Hot potatoes! Ok, so how would we work with this? Well <code>main</code> takes an <code>Int</code> and returns a <code>Task(User)</code>, so ...</p>
<pre><code>// main :: Int -> Task(User)
main(1).fork(console.error, console.log)
</code></pre>
<p>It's really that simple. If <code>Db.find</code> resolves a Right, it will be transformed to a <code>Task.of</code> (a resolved Task), meaning the result will go to <code>console.log</code> – otherwise, if <code>Db.find</code> resolves a Left, it will be transformed to a <code>Task.rejected</code> (a rejected Task), meaning the result will go to <code>console.error</code></p>
<hr>
<p><strong>Runnable code</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>// Either
const Left = x => ({
map: f => Left(x),
fold: (f,_) => f(x)
})
const Right = x => ({
map: f => Right(f(x)),
fold: (_,f) => f(x),
})
// Task
const Task = fork => ({
fork,
chain: f =>
Task((reject, resolve) =>
fork(reject,
x => f(x).fork(reject, resolve)))
})
Task.of = x => Task((reject, resolve) => resolve(x))
Task.rejected = x => Task((reject, resolve) => reject(x))
// natural transformation
const eitherToTask = e =>
e.fold(Task.rejected, Task.of)
// fake database
const data = {
"1": {id: 1, name: 'bob', bff: 2},
"2": {id: 2, name: 'alice', bff: 1}
}
// fake db api
const Db = {
find: id =>
Task((reject, resolve) =>
resolve((id in data) ? Right(data[id]) : Left('not found')))
}
// your program
const main = id =>
Db.find(id)
.chain(eitherToTask)
.chain(user => Db.find(user.bff))
.chain(eitherToTask)
// bob's bff
main(1).fork(console.error, console.log)
// alice's bff
main(2).fork(console.error, console.log)
// unknown user's bff
main(3).fork(console.error, console.log)</code></pre>
</div>
</div>
</p>
<hr>
<p><strong>Attribution</strong></p>
<p>I owe almost this entire answer to Brian Lonsdorf (<a href="https://twitter.com/drboolean" rel="noreferrer">@drboolean</a>). He has a fantastic series on Egghead called <a href="https://egghead.io/courses/professor-frisby-introduces-composable-functional-javascript" rel="noreferrer"><em>Professor Frisby Introduces Composable Functional JavaScript</em></a>. Quite coincidentally, the example in your question (transforming Future and Either) is the same example used in his videos and in this code in my answer here.</p>
<p>The two about natural transformations are</p>
<ol>
<li><a href="https://egghead.io/lessons/javascript-principled-type-conversions-with-natural-transformations" rel="noreferrer"><em>Principled type conversions with natural transformations</em></a></li>
<li><a href="https://egghead.io/lessons/javascript-applying-natural-transformations-in-everyday-work" rel="noreferrer"><em>Applying natural transformations in everyday work</em></a></li>
</ol>
<hr>
<p><strong>Alternate implementation of Task</strong></p>
<p><code>Task#chain</code> has a little bit of magic going on that's not immediately apparent</p>
<pre><code>task.chain(f) == task.map(f).join()
</code></pre>
<p>I mention this as a side note because it's not particularly important for considering the natural transformation of Either to Task above. <code>Task#chain</code> is enough for demonstrations, but if you really want to take it apart to see how everything is working, it might feel a bit unapproachable.</p>
<p>Below, I derive <code>chain</code> using <code>map</code> and <code>join</code>. I'll put a couple of type annotations below that should help</p>
<pre><code>const Task = fork => ({
fork,
// map :: Task a => (a -> b) -> Task b
map (f) {
return Task((reject, resolve) =>
fork(reject, x => resolve(f(x))))
},
// join :: Task (Task a) => () -> Task a
join () {
return Task((reject, resolve) =>
fork(reject,
task => task.fork(reject, resolve)))
},
// chain :: Task a => (a -> Task b) -> Task b
chain (f) {
return this.map(f).join()
}
})
// these stay the same
Task.of = x => Task((reject, resolve) => resolve(x))
Task.rejected = x => Task((reject, resolve) => reject(x))
</code></pre>
<p>You can replace the definition of the old Task with this new one in the example above and everything will still work the same ^_^</p>
<hr>
<p><strong>Going Native with <code>Promise</code></strong></p>
<p>ES6 ships with Promises which can function very similarly to the Task we've implemented. Of course there's heaps of difference, but for the point of this demonstration, using Promise instead of Task will result in code that almost looks identical to the original example</p>
<p>The primary differences are:</p>
<ul>
<li>Task expects your <code>fork</code> function parameters to be ordered as <code>(reject, resolve)</code> - Promise executor function parameters are ordered as <code>(resolve, reject)</code> (reverse order)</li>
<li>we call <code>promise.then</code> instead of <code>task.chain</code></li>
<li>Promises automatically squish nested Promises, so you don't have to worry about manually flattening a Promise of a Promise</li>
<li><code>Promise.rejected</code> and <code>Promise.resolve</code> cannot be called first class – the context of each needs to be bound to <code>Promise</code> – eg <code>x => Promise.resolve(x)</code> or <code>Promise.resolve.bind(Promise)</code> instead of <code>Promise.resolve</code> (same for <code>Promise.reject</code>)</li>
</ul>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>// Either
const Left = x => ({
map: f => Left(x),
fold: (f,_) => f(x)
})
const Right = x => ({
map: f => Right(f(x)),
fold: (_,f) => f(x),
})
// natural transformation
const eitherToPromise = e =>
e.fold(x => Promise.reject(x),
x => Promise.resolve(x))
// fake database
const data = {
"1": {id: 1, name: 'bob', bff: 2},
"2": {id: 2, name: 'alice', bff: 1}
}
// fake db api
const Db = {
find: id =>
new Promise((resolve, reject) =>
resolve((id in data) ? Right(data[id]) : Left('not found')))
}
// your program
const main = id =>
Db.find(id)
.then(eitherToPromise)
.then(user => Db.find(user.bff))
.then(eitherToPromise)
// bob's bff
main(1).then(console.log, console.error)
// alice's bff
main(2).then(console.log, console.error)
// unknown user's bff
main(3).then(console.log, console.error)</code></pre>
</div>
</div>
</p>
| 498
|
transformers
|
Cannot import "from transformers import Wav2Vec2Processor"
|
https://stackoverflow.com/questions/76753228/cannot-import-from-transformers-import-wav2vec2processor
|
<p>The libraries used<br />
python : 3.7.16<br />
transformers : 4.24.0<br />
tensorflow : 2.4.1</p>
<p>I am trying to convert a TensorFlow wav2vec model into TFLite in Colab.</p>
<p><code>from transformers import Wav2Vec2Processor</code>
This line gives an error: it cannot load the Wav2Vec2Processor.
<code>from transformers import TFWav2Vec2ForCTC</code> gives the same error.
The error message is as follows:</p>
<hr />
<pre><code>ModuleNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1075 try:
-> 1076 return importlib.import_module("." + module_name, self.__name__)
1077 except Exception as e:
19 frames
ModuleNotFoundError: No module named 'tokenizers.tokenizers'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.7/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1076 return importlib.import_module("." + module_name, self.__name__)
1077 except Exception as e:
-> 1078 raise RuntimeError(
1079 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1080 f" traceback):\n{e}"
RuntimeError: Failed to import transformers.models.wav2vec2 because of the following error (look up to see its traceback):
No module named 'tokenizers.tokenizers'
</code></pre>
<p>However,
<code>import transformers</code> doesn't produce any error,</p>
<p>but
<code>transformers.Wav2Vec2Processor</code> gives the same error as above.</p>
<p>I want to find out how to fix this error.</p>
|
<p>First, update transformers to the latest version:</p>
<pre><code>!pip install transformers --upgrade
</code></pre>
<p>Then check the dependencies. Your error, <code>No module named 'tokenizers.tokenizers'</code>, points to the <code>tokenizers</code> package:</p>
<pre><code>!pip install tokenizers
</code></pre>
<p>After you install the dependencies, restart the runtime and check that the import works:</p>
<pre><code>from transformers import Wav2Vec2Processor
</code></pre>
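<p>If the import still fails after restarting, it can help to confirm which versions actually ended up installed before retrying. A small stdlib-only check (the two package names are just the ones from the error above):</p>

```python
# Query installed package versions without importing the packages
# themselves (importlib.metadata is in the stdlib on Python 3.8+).
import importlib.metadata as md

versions = {}
for pkg in ("transformers", "tokenizers"):
    try:
        versions[pkg] = md.version(pkg)
    except md.PackageNotFoundError:
        versions[pkg] = None  # not installed at all

print(versions)
```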
| 499
|
pytorch
|
How do I check if PyTorch is using the GPU?
|
https://stackoverflow.com/questions/48152674/how-do-i-check-if-pytorch-is-using-the-gpu
|
<p>How do I check if PyTorch is using the GPU? The <code>nvidia-smi</code> command can detect GPU activity, but I want to check it directly from inside a Python script.</p>
|
<p>These functions should help:</p>
<pre><code>>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.device_count()
1
>>> torch.cuda.current_device()
0
>>> torch.cuda.device(0)
<torch.cuda.device at 0x7efce0b03be0>
>>> torch.cuda.get_device_name(0)
'GeForce GTX 950M'
</code></pre>
<p>This tells us:</p>
<ul>
<li>CUDA is available and can be used by one device.</li>
<li><code>Device 0</code> refers to the GPU <code>GeForce GTX 950M</code>, and it is currently chosen by PyTorch.</li>
</ul>
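<p>In practice this check is usually folded into a device-selection idiom, so the same script runs on both GPU and CPU machines. A minimal sketch:</p>

```python
import torch

# Pick the GPU when one is visible, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors (and models, via .to(device)) then follow that choice.
x = torch.ones(2, 3).to(device)
print(x.device)
```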
| 500
|
pytorch
|
How do I save a trained model in PyTorch?
|
https://stackoverflow.com/questions/42703500/how-do-i-save-a-trained-model-in-pytorch
|
<p>How do I save a trained model in PyTorch? I have read that:</p>
<ol>
<li><a href="https://github.com/torch/torch7/blob/master/doc/serialization.md#torchsavefilename-object--format-referenced" rel="noreferrer"><code>torch.save()</code></a>/<a href="https://github.com/torch/torch7/blob/master/doc/serialization.md#object-torchloadfilename--format-referenced" rel="noreferrer"><code>torch.load()</code></a> is for saving/loading a serializable object.</li>
<li><a href="http://pytorch.org/docs/nn.html#torch.nn.Module.state_dict" rel="noreferrer"><code>model.state_dict()</code></a>/<a href="http://pytorch.org/docs/nn.html#torch.nn.Module.load_state_dict" rel="noreferrer"><code>model.load_state_dict()</code></a> is for saving/loading model state.</li>
</ol>
|
<p>Found <a href="https://github.com/pytorch/pytorch/blob/761d6799beb3afa03657a71776412a2171ee7533/docs/source/notes/serialization.rst" rel="noreferrer">this page</a> on their github repo:</p>
<blockquote>
<h4>Recommended approach for saving a model</h4>
<p>There are two main approaches for serializing and restoring a model.</p>
<p>The first (recommended) saves and loads only the model parameters:</p>
<pre class="lang-py prettyprint-override"><code>torch.save(the_model.state_dict(), PATH)
</code></pre>
<p>Then later:</p>
<pre class="lang-py prettyprint-override"><code>the_model = TheModelClass(*args, **kwargs)
the_model.load_state_dict(torch.load(PATH))
</code></pre>
<hr />
<p>The second saves and loads the entire model:</p>
<pre class="lang-py prettyprint-override"><code>torch.save(the_model, PATH)
</code></pre>
<p>Then later:</p>
<pre><code>the_model = torch.load(PATH)
</code></pre>
<p>However in this case, the serialized data is bound to the specific classes and the exact directory structure used, so it can break in various ways when used in other projects, or after some serious refactors.</p>
</blockquote>
<hr />
<p>See also: <a href="https://pytorch.org/tutorials/beginner/basics/saveloadrun_tutorial.html#save-and-load-the-model" rel="noreferrer">Save and Load the Model</a> section from the official PyTorch tutorials.</p>
| 501
|
pytorch
|
How do I print the model summary in PyTorch?
|
https://stackoverflow.com/questions/42480111/how-do-i-print-the-model-summary-in-pytorch
|
<p>How do I print the summary of a model in PyTorch like what <code>model.summary()</code> does in Keras:</p>
<pre><code>Model Summary:
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 1, 15, 27) 0
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D) (None, 8, 15, 27) 872 input_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 8, 7, 27) 0 convolution2d_1[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten) (None, 1512) 0 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 1) 1513 flatten_1[0][0]
====================================================================================================
Total params: 2,385
Trainable params: 2,385
Non-trainable params: 0
</code></pre>
|
<p>While you will not get as detailed information about the model as in Keras' model.summary, simply printing the model will give you some idea about the different layers involved and their specifications.</p>
<p>For instance:</p>
<pre><code>from torchvision import models
model = models.vgg16()
print(model)
</code></pre>
<p>The output in this case would be something as follows:</p>
<pre><code>VGG (
(features): Sequential (
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU (inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU (inplace)
(4): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU (inplace)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU (inplace)
(9): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU (inplace)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU (inplace)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU (inplace)
(16): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU (inplace)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU (inplace)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU (inplace)
(23): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU (inplace)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU (inplace)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU (inplace)
(30): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
)
(classifier): Sequential (
(0): Dropout (p = 0.5)
(1): Linear (25088 -> 4096)
(2): ReLU (inplace)
(3): Dropout (p = 0.5)
(4): Linear (4096 -> 4096)
(5): ReLU (inplace)
(6): Linear (4096 -> 1000)
)
)
</code></pre>
<p>Now you could, as mentioned by <a href="https://stackoverflow.com/users/2704763/kashyap">Kashyap</a>, use the <code>state_dict</code> method to get the weights of the different layers. But this listing of the layers would perhaps provide more direction in creating a helper function to get that Keras-like model summary!</p>
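<p>As a minimal sketch of such a helper (an illustration, not code from the original answer), one can walk <code>named_modules()</code> and count the parameters registered on each submodule:</p>

```python
import torch.nn as nn

def summarize(model: nn.Module) -> int:
    """Print each submodule with its parameter count; return the total."""
    total = 0
    for name, module in model.named_modules():
        # recurse=False counts only parameters owned directly by this module,
        # so container modules are not double-counted.
        n = sum(p.numel() for p in module.parameters(recurse=False))
        total += n
        if n:
            print(f"{name:<12} {type(module).__name__:<10} {n:>8,}")
    print(f"{'Total':<23} {total:>8,}")
    return total

model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 1))
summarize(model)  # 55 + 6 = 61 parameters
```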
| 502
|
pytorch
|
What's the difference between `reshape()` and `view()` in PyTorch?
|
https://stackoverflow.com/questions/49643225/whats-the-difference-between-reshape-and-view-in-pytorch
|
<p>In numpy, we use <code>ndarray.reshape()</code> for reshaping an array.</p>
<p>I noticed that in PyTorch, people use <code>torch.view()</code> for the same purpose, but at the same time, there is also a <code>torch.reshape()</code> existing.</p>
<p>So I am wondering what the differences are between them and when I should use either of them?</p>
|
<p><code>torch.view</code> has existed for a long time. It will return a tensor with the new shape. The returned tensor will share the underlying data with the original tensor.
See the <a href="http://pytorch.org/docs/master/tensors.html?highlight=view#torch.Tensor.view" rel="noreferrer">documentation here</a>.</p>
<p>On the other hand, it seems that <code>torch.reshape</code> <a href="https://github.com/pytorch/pytorch/pull/5575" rel="noreferrer">has been introduced recently in version 0.4</a>. According to the <a href="http://pytorch.org/docs/master/torch.html#torch.reshape" rel="noreferrer">document</a>, this method will</p>
<blockquote>
<p>Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior.</p>
</blockquote>
<p>It means that <code>torch.reshape</code> may return a copy or a view of the original tensor. You can not count on that to return a view or a copy. According to the developer:</p>
<blockquote>
<p>if you need a copy use clone() if you need the same storage use view(). The semantics of reshape() are that it may or may not share the storage and you don't know beforehand.</p>
</blockquote>
<p>Another difference is that <code>reshape()</code> can operate on both contiguous and non-contiguous tensor while <code>view()</code> can only operate on contiguous tensor. Also see <a href="https://stackoverflow.com/a/26999092/6064933">here</a> about the meaning of <code>contiguous</code>.</p>
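<p>A minimal sketch of that last difference (assuming current PyTorch behavior): a transpose produces a non-contiguous view, on which <code>view()</code> raises while <code>reshape()</code> silently copies:</p>

```python
import torch

x = torch.arange(6).reshape(2, 3)
t = x.t()  # transpose: a non-contiguous view of the same storage
assert not t.is_contiguous()

try:
    t.view(6)  # view() requires contiguous memory
except RuntimeError as e:
    print("view() failed:", e)

flat = t.reshape(6)  # reshape() falls back to making a copy
print(flat)  # tensor([0, 3, 1, 4, 2, 5])
```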
| 503
|
pytorch
|
Pytorch tensor to numpy array
|
https://stackoverflow.com/questions/49768306/pytorch-tensor-to-numpy-array
|
<p>I have a pytorch <code>Tensor</code> of shape <code>[4, 3, 966, 1296]</code>. I want to convert it to <code>numpy</code> array using the following code:</p>
<pre><code>imgs = imgs.numpy()[:, ::-1, :, :]
</code></pre>
<p>How does that code work?</p>
|
<p>There are 4 dimensions of the tensor you want to convert.</p>
<pre><code>[:, ::-1, :, :]
</code></pre>
<p><code>:</code> means that the first dimension should be copied as it is and converted, same goes for the third and fourth dimension.</p>
<p><code>::-1</code> means that the second axis is reversed (here, the channel dimension).</p>
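<p>A small numpy sketch of that slice (an illustration, not from the original answer) — reversing the channel axis swaps, e.g., RGB to BGR:</p>

```python
import numpy as np

# Toy batch with the same layout as the question: (batch, channels, H, W)
imgs = np.arange(2 * 3 * 2 * 2).reshape(2, 3, 2, 2)

flipped = imgs[:, ::-1, :, :]  # reverse only the channel axis

# Channel 0 of the result is channel 2 of the original, and vice versa
assert (flipped[:, 0] == imgs[:, 2]).all()
assert (flipped[:, 2] == imgs[:, 0]).all()
```

<p>The conversion to numpy happens first because this negative-stride slicing is a numpy feature; on a PyTorch tensor one would use <code>torch.flip</code> instead.</p>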
| 504
|
pytorch
|
What does `view()` do in PyTorch?
|
https://stackoverflow.com/questions/42479902/what-does-view-do-in-pytorch
|
<p>What does <code>view()</code> do to the tensor <code>x</code>? What do negative values mean?</p>
<pre><code>x = x.view(-1, 16 * 5 * 5)
</code></pre>
|
<p><code>view()</code> reshapes the tensor without copying memory, similar to numpy's <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html" rel="noreferrer"><code>reshape()</code></a>.</p>
<p>Given a tensor <code>a</code> with 16 elements:</p>
<pre><code>import torch
a = torch.arange(1., 17.)  # torch.range is deprecated; arange excludes the endpoint
</code></pre>
<p>To reshape this tensor to make it a <code>4 x 4</code> tensor, use:</p>
<pre><code>a = a.view(4, 4)
</code></pre>
<p>Now <code>a</code> will be a <code>4 x 4</code> tensor. <em>Note that after the reshape the total number of elements need to remain the same. Reshaping the tensor <code>a</code> to a <code>3 x 5</code> tensor would not be appropriate.</em></p>
<h3>What is the meaning of parameter -1?</h3>
<p>If there is any situation that you don't know how many rows you want but are sure of the number of columns, then you can specify this with a -1. (<em>Note that you can extend this to tensors with more dimensions. Only one of the axis value can be -1</em>). This is a way of telling the library: "give me a tensor that has these many columns and you compute the appropriate number of rows that is necessary to make this happen".</p>
<p>This can be seen in <a href="https://stackoverflow.com/revisions/42479902/9">this model definition code</a>. After the line <code>x = self.pool(F.relu(self.conv2(x)))</code> in the forward function, you will have a 16 depth feature map. You have to flatten this to give it to the fully connected layer. So you tell PyTorch to reshape the tensor you obtained to have specific number of columns and tell it to decide the number of rows by itself.</p>
| 505
|
pytorch
|
Check the total number of parameters in a PyTorch model
|
https://stackoverflow.com/questions/49201236/check-the-total-number-of-parameters-in-a-pytorch-model
|
<p>How do I count the total number of parameters in a PyTorch model? Something similar to <code>model.count_params()</code> in Keras.</p>
|
<p>PyTorch doesn't have a function to calculate the total number of parameters as Keras does, but it's possible to sum the number of elements for every parameter group:</p>
<pre><code>pytorch_total_params = sum(p.numel() for p in model.parameters())
</code></pre>
<p>If you want to calculate only the <em>trainable</em> parameters:</p>
<pre><code>pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
</code></pre>
<hr />
<p><em>Answer inspired by <a href="https://discuss.pytorch.org/t/how-do-i-check-the-number-of-parameters-of-a-model/4325/9" rel="noreferrer">this answer</a> on PyTorch Forums</em>.</p>
| 506
|
pytorch
|
PyTorch preferred way to copy a tensor
|
https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor
|
<p>There seems to be several ways to create a copy of a tensor in PyTorch, including</p>
<pre><code>y = tensor.new_tensor(x) #a
y = x.clone().detach() #b
y = torch.empty_like(x).copy_(x) #c
y = torch.tensor(x) #d
</code></pre>
<p><code>b</code> is explicitly preferred over <code>a</code> and <code>d</code> according to a UserWarning I get if I execute either <code>a</code> or <code>d</code>. Why is it preferred? Performance? I'd argue it's less readable.</p>
<p>Any reasons for/against using <code>c</code>?</p>
|
<p><strong>TL;DR</strong></p>
<p>Use <code>.clone().detach()</code> (or preferably <code>.detach().clone()</code>)</p>
<blockquote>
<p>If you first detach the tensor and then clone it, the computation path is not copied, the other way around it is copied and then abandoned. Thus, <code>.detach().clone()</code> is very slightly more efficient.-- <a href="https://discuss.pytorch.org/t/difference-between-detach-clone-and-clone-detach/34173" rel="noreferrer">pytorch forums</a></p>
</blockquote>
<p>as it's slightly faster and explicit about what it does.</p>
<hr />
<p>Using <a href="https://github.com/nschloe/perfplot/" rel="noreferrer"><code>perfplot</code></a>, I plotted the timing of various methods to copy a pytorch tensor.</p>
<pre><code>y = tensor.new_tensor(x) # method a
y = x.clone().detach() # method b
y = torch.empty_like(x).copy_(x) # method c
y = torch.tensor(x) # method d
y = x.detach().clone() # method e
</code></pre>
<p>The x-axis is the dimension of tensor created, y-axis shows the time. The graph is in linear scale. As you can clearly see, the <code>tensor()</code> or <code>new_tensor()</code> takes more time compared to other three methods.</p>
<p><a href="https://i.sstatic.net/5QjuT.png" rel="noreferrer"><img src="https://i.sstatic.net/5QjuT.png" alt="enter image description here" /></a></p>
<p><em>Note:</em> In multiple runs, I noticed that out of b, c, e, any method can have the lowest time. The same is true for a and d. But methods b, c, e consistently have lower timings than a and d.</p>
<pre><code>import torch
import perfplot
perfplot.show(
setup=lambda n: torch.randn(n),
kernels=[
lambda a: a.new_tensor(a),
lambda a: a.clone().detach(),
lambda a: torch.empty_like(a).copy_(a),
lambda a: torch.tensor(a),
lambda a: a.detach().clone(),
],
labels=["new_tensor()", "clone().detach()", "empty_like().copy()", "tensor()", "detach().clone()"],
n_range=[2 ** k for k in range(15)],
xlabel="len(a)",
logx=False,
logy=False,
title='Timing comparison for copying a pytorch tensor',
)
</code></pre>
| 507
|
pytorch
|
Why do we need to call zero_grad() in PyTorch?
|
https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch
|
<p>Why does <a href="https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html" rel="noreferrer"><code>zero_grad()</code></a> need to be called during training?</p>
<pre><code>| zero_grad(self)
| Sets gradients of all model parameters to zero.
</code></pre>
|
<p>In <a href="https://github.com/pytorch/pytorch" rel="noreferrer"><code>PyTorch</code></a>, for every mini-batch during the <em>training</em> phase, we typically want to explicitly set the gradients to zero before starting to do backpropagation (i.e., updating the <em><strong>W</strong>eights</em> and <em><strong>b</strong>iases</em>) because PyTorch <em>accumulates the gradients</em> on subsequent backward passes. This accumulating behavior is convenient while training RNNs or when we want to compute the gradient of the loss summed over multiple <em>mini-batches</em>. So, the default action has been set to <a href="https://pytorch.org/docs/stable/_modules/torch/autograd.html" rel="noreferrer">accumulate (i.e. sum) the gradients</a> on every <code>loss.backward()</code> call.</p>
<p>Because of this, when you start your training loop, ideally you should <a href="https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html#torch.optim.Optimizer.zero_grad" rel="noreferrer"><code>zero out the gradients</code></a> so that you do the parameter update correctly. Otherwise, the gradient would be a combination of the old gradient, which you have already used to update your model parameters and the newly-computed gradient. It would therefore point in some other direction than the intended direction towards the <em>minimum</em> (or <em>maximum</em>, in case of maximization objectives).</p>
<p>Here is a simple example:</p>
<pre><code>import torch
from torch.autograd import Variable
import torch.optim as optim
def linear_model(x, W, b):
return torch.matmul(x, W) + b
data, targets = ...
W = Variable(torch.randn(4, 3), requires_grad=True)
b = Variable(torch.randn(3), requires_grad=True)
optimizer = optim.Adam([W, b])
for sample, target in zip(data, targets):
# clear out the gradients of all Variables
# in this optimizer (i.e. W, b)
optimizer.zero_grad()
output = linear_model(sample, W, b)
loss = (output - target) ** 2
loss.backward()
optimizer.step()
</code></pre>
<hr />
<p>Alternatively, if you're doing a <em>vanilla gradient descent</em>, then:</p>
<pre><code>W = Variable(torch.randn(4, 3), requires_grad=True)
b = Variable(torch.randn(3), requires_grad=True)
for sample, target in zip(data, targets):
# clear out the gradients of Variables
# (i.e. W, b)
W.grad.data.zero_()
b.grad.data.zero_()
output = linear_model(sample, W, b)
loss = (output - target) ** 2
loss.backward()
W -= learning_rate * W.grad.data
b -= learning_rate * b.grad.data
</code></pre>
<hr />
<p><strong>Note</strong>:</p>
<ul>
<li>The <em>accumulation</em> (i.e., <em>sum</em>) of gradients happens when <a href="https://pytorch.org/docs/stable/_modules/torch/autograd.html" rel="noreferrer"><code>.backward()</code> is called on the <code>loss</code> tensor</a>.</li>
<li>As of v1.7.0, Pytorch offers the option to reset the gradients to <code>None</code> <a href="https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer.zero_grad" rel="noreferrer"><code>optimizer.zero_grad(set_to_none=True)</code></a> instead of filling them with a tensor of zeroes. The docs claim that this setting reduces memory requirements and slightly improves performance, but might be error-prone if not handled carefully.</li>
</ul>
| 508
|
pytorch
|
L1/L2 regularization in PyTorch
|
https://stackoverflow.com/questions/42704283/l1-l2-regularization-in-pytorch
|
<p>How do I add L1/L2 regularization in PyTorch without manually computing it?</p>
|
<p>See the <a href="http://pytorch.org/docs/optim.html#torch.optim.Adagrad" rel="noreferrer">documentation</a>. Add a <code>weight_decay</code> parameter to the optimizer for L2 regularization.</p>
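<p>A minimal sketch of a training step (an illustration, not code from the original answer): L2 comes for free via <code>weight_decay</code>, while L1 can be added to the loss by hand:</p>

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)

# L2 regularization: built into the optimizer as weight decay
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

# L1 regularization: no optimizer switch, so add it to the loss manually
x, y = torch.randn(8, 10), torch.randn(8, 1)
l1_lambda = 1e-3

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())
loss.backward()
optimizer.step()
print(loss.item())
```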
| 509
|
pytorch
|
What does model.eval() do in pytorch?
|
https://stackoverflow.com/questions/60018578/what-does-model-eval-do-in-pytorch
|
<p>When should I use <code>.eval()</code>? I understand it is supposed to allow me to "evaluate my model". How do I turn it back off for training?</p>
<p>Example training <a href="https://github.com/natanielruiz/deep-head-pose/blob/master/code/train_hopenet.py" rel="noreferrer">code</a> using <code>.eval()</code>.</p>
|
<p><code>model.eval()</code> is a kind of switch for some specific layers/parts of the model that behave differently during training and inference (evaluation) time, for example Dropout layers and BatchNorm layers. You need to turn them off during model evaluation, and <code>.eval()</code> will do it for you. In addition, the common practice for evaluation/validation is using <code>torch.no_grad()</code> together with <code>model.eval()</code> to turn off gradient computation:</p>
<pre class="lang-py prettyprint-override"><code># evaluate model:
model.eval()
with torch.no_grad():
...
out_data = model(data)
...
</code></pre>
<p>BUT, don't forget to turn back to <code>training</code> mode after eval step:</p>
<pre class="lang-py prettyprint-override"><code># training step
...
model.train()
...
</code></pre>
| 510
|
pytorch
|
How to avoid "CUDA out of memory" in PyTorch
|
https://stackoverflow.com/questions/59129812/how-to-avoid-cuda-out-of-memory-in-pytorch
|
<p>I think it's a pretty common message for PyTorch users with low GPU memory:</p>
<pre><code>RuntimeError: CUDA out of memory. Tried to allocate X MiB (GPU X; X GiB total capacity; X GiB already allocated; X MiB free; X cached)
</code></pre>
<p>I tried to process an image by loading each layer to GPU and then loading it back:</p>
<pre class="lang-py prettyprint-override"><code>for m in self.children():
m.cuda()
x = m(x)
m.cpu()
torch.cuda.empty_cache()
</code></pre>
<p>But it doesn't seem to be very effective. I'm wondering is there any tips and tricks to train large deep learning models while using little GPU memory.</p>
|
<p>Although</p>
<pre><code>import torch
torch.cuda.empty_cache()
</code></pre>
<p>provides a good way to clear the occupied CUDA memory, and we can also manually clear variables that are no longer in use:</p>
<pre><code>import gc
del variables
gc.collect()
</code></pre>
<p>But the error might still appear after using these commands, because PyTorch doesn't actually clear the memory; it only clears the references to the memory occupied by the variables.
So reducing the batch_size after restarting the kernel and finding the optimum batch_size is the best possible option (although sometimes not a very feasible one).</p>
<p>Another way to get a deeper insight into the allocation of memory on the GPU is to use:</p>
<pre><code>torch.cuda.memory_summary(device=None, abbreviated=False)
</code></pre>
<p>wherein both the arguments are optional. This gives a readable summary of memory allocation and lets you figure out why CUDA is running out of memory, so you can restart the kernel to avoid the error from happening again (just like I did in my case).</p>
<p>Passing the data iteratively might help but changing the size of layers of your network or breaking them down would also prove effective (as sometimes the model also occupies a significant memory for example, while doing transfer learning).</p>
| 511
|
pytorch
|
What does "unsqueeze" do in Pytorch?
|
https://stackoverflow.com/questions/57237352/what-does-unsqueeze-do-in-pytorch
|
<p>The <a href="https://pytorch.org/docs/stable/generated/torch.unsqueeze.html" rel="noreferrer">PyTorch documentation</a> says:</p>
<blockquote>
<p>Returns a new tensor with a dimension of size one inserted at the specified position. [...]</p>
<pre><code>>>> x = torch.tensor([1, 2, 3, 4])
>>> torch.unsqueeze(x, 0)
tensor([[ 1, 2, 3, 4]])
>>> torch.unsqueeze(x, 1)
tensor([[ 1],
[ 2],
[ 3],
[ 4]])
</code></pre>
</blockquote>
|
<p>If you look at the shape of the array before and after, you see that before it was <code>(4,)</code> and after it is <code>(1, 4)</code> (when second parameter is <code>0</code>) and <code>(4, 1)</code> (when second parameter is <code>1</code>). So a <code>1</code> was inserted in the shape of the array at axis <code>0</code> or <code>1</code>, depending on the value of the second parameter.</p>
<p>That is opposite of <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.squeeze.html" rel="noreferrer"><code>np.squeeze()</code></a> (nomenclature borrowed from MATLAB) which removes axes of size <code>1</code> (singletons).</p>
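<p>A quick shape check (a small illustration, not from the original answer):</p>

```python
import torch

x = torch.tensor([1, 2, 3, 4])
assert x.shape == (4,)
assert x.unsqueeze(0).shape == (1, 4)  # new axis at position 0
assert x.unsqueeze(1).shape == (4, 1)  # new axis at position 1

# squeeze() undoes it by removing size-1 dimensions
assert x.unsqueeze(0).squeeze(0).shape == (4,)
print("all shapes as expected")
```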
| 512
|
pytorch
|
Pytorch, what are the gradient arguments
|
https://stackoverflow.com/questions/43451125/pytorch-what-are-the-gradient-arguments
|
<p>I am reading through the documentation of PyTorch and found an example where they write </p>
<pre><code>gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
print(x.grad)
</code></pre>
<p>where x was an initial variable, from which y was constructed (a 3-vector). The question is, what are the 0.1, 1.0 and 0.0001 arguments of the gradients tensor ? The documentation is not very clear on that.</p>
|
<blockquote>
<p>I can no longer find the original code on the PyTorch website.</p>
</blockquote>
<pre><code>gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
print(x.grad)
</code></pre>
<p>The problem with the code above is that no function is given for how the gradients are calculated. This means we don't know how many parameters (arguments) the function takes, or their dimensions.</p>
<p>To fully understand this I created an example close to the original:</p>
<blockquote>
<p>Example 1:</p>
</blockquote>
<pre><code>a = torch.tensor([1.0, 2.0, 3.0], requires_grad = True)
b = torch.tensor([3.0, 4.0, 5.0], requires_grad = True)
c = torch.tensor([6.0, 7.0, 8.0], requires_grad = True)
y=3*a + 2*b*b + torch.log(c)
gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients,retain_graph=True)
print(a.grad) # tensor([3.0000e-01, 3.0000e+00, 3.0000e-04])
print(b.grad) # tensor([1.2000e+00, 1.6000e+01, 2.0000e-03])
print(c.grad) # tensor([1.6667e-02, 1.4286e-01, 1.2500e-05])
</code></pre>
<p>I assumed our function is <code>y=3*a + 2*b*b + torch.log(c)</code> and the parameters are tensors with three elements inside.</p>
<p>You can think of <code>gradients = torch.FloatTensor([0.1, 1.0, 0.0001])</code> as the accumulator.</p>
<p>As you may have heard, the PyTorch autograd calculation is equivalent to a Jacobian product.</p>
<p><a href="https://i.sstatic.net/sDlmj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sDlmj.png" alt="Jacobian" /></a></p>
<p>In case you have a function, like we did:</p>
<pre><code>y=3*a + 2*b*b + torch.log(c)
</code></pre>
<p>The Jacobian would be <code>[3, 4*b, 1/c]</code>. However, this <a href="https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant" rel="nofollow noreferrer">Jacobian</a> is not how PyTorch calculates the gradients at a certain point.</p>
<p>PyTorch uses forward pass and <a href="https://arxiv.org/pdf/1502.05767.pdf" rel="nofollow noreferrer">backward mode automatic differentiation</a> (AD) in tandem.</p>
<p>There is no symbolic math involved and no numerical differentiation.</p>
<blockquote>
<p>Numerical differentiation would be to calculate <code>δy/δb</code>, for <code>b=1</code> and <code>b=1+ε</code> where ε is small.</p>
</blockquote>
<p>If you don't use gradients in <code>y.backward()</code>:</p>
<blockquote>
<p>Example 2</p>
</blockquote>
<pre><code>a = torch.tensor(0.1, requires_grad = True)
b = torch.tensor(1.0, requires_grad = True)
c = torch.tensor(0.1, requires_grad = True)
y=3*a + 2*b*b + torch.log(c)
y.backward()
print(a.grad) # tensor(3.)
print(b.grad) # tensor(4.)
print(c.grad) # tensor(10.)
</code></pre>
<p>You will simply get the result at a point, based on how you set your <code>a</code>, <code>b</code>, <code>c</code> tensors initially.</p>
<p>Be careful how you initialize your <code>a</code>, <code>b</code>, <code>c</code>:</p>
<blockquote>
<p>Example 3:</p>
</blockquote>
<pre><code>a = torch.empty(1, requires_grad = True, pin_memory=True)
b = torch.empty(1, requires_grad = True, pin_memory=True)
c = torch.empty(1, requires_grad = True, pin_memory=True)
y=3*a + 2*b*b + torch.log(c)
gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
print(a.grad) # tensor([3.3003])
print(b.grad) # tensor([0.])
print(c.grad) # tensor([inf])
</code></pre>
<p>If you use <code>torch.empty()</code> and don't use <code>pin_memory=True</code> you may have different results each time.</p>
<p>Also, note gradients are like accumulators so zero them when needed.</p>
<blockquote>
<p>Example 4:</p>
</blockquote>
<pre><code>a = torch.tensor(1.0, requires_grad = True)
b = torch.tensor(1.0, requires_grad = True)
c = torch.tensor(1.0, requires_grad = True)
y=3*a + 2*b*b + torch.log(c)
y.backward(retain_graph=True)
y.backward()
print(a.grad) # tensor(6.)
print(b.grad) # tensor(8.)
print(c.grad) # tensor(2.)
</code></pre>
<p>Lastly few tips on terms PyTorch uses:</p>
<p>PyTorch creates a <strong>dynamic computational graph</strong> when calculating the gradients in forward pass. This looks much like a tree.</p>
<p>So you will often hear the <em>leaves</em> of this tree are <strong>input tensors</strong> and the <em>root</em> is <strong>output tensor</strong>.</p>
<p>Gradients are calculated by tracing the graph from the root to the leaf and multiplying every gradient in the way using the <strong>chain rule</strong>. This multiplying occurs in the backward pass.</p>
<p>Some time back I created a <a href="https://programming-review.com/pytorch/ad" rel="nofollow noreferrer">PyTorch Automatic Differentiation tutorial</a> that you may find interesting; it explains all the tiny details about AD.</p>
| 513
|
pytorch
|
Data Augmentation in PyTorch
|
https://stackoverflow.com/questions/51677788/data-augmentation-in-pytorch
|
<p>I am a little bit confused about the data augmentation performed in PyTorch. Now, as far as I know, when we are performing data augmentation, we are KEEPING our original dataset, and then adding other versions of it (Flipping, Cropping...etc). But that doesn't seem like happening in PyTorch. As far as I understood from the references, when we use <code>data.transforms</code> in PyTorch, then it applies them one by one. So for example:</p>
<pre class="lang-python prettyprint-override"><code>data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
</code></pre>
<p>Here , for the training, we are first randomly cropping the image and resizing it to shape <code>(224,224)</code>. Then we are taking these <code>(224,224)</code> images and horizontally flipping them. Therefore, our dataset is now containing ONLY the horizontally flipped images, so our original images are lost in this case.</p>
<p>Am I right? Is this understanding correct? If not, then where do we tell PyTorch in this code above (taken from Official Documentation) to keep the original images and resize them to the expected shape <code>(224,224)</code>?</p>
|
<p>The <code>transforms</code> operations are applied to your original images at every batch generation. So your dataset is left unchanged, only the batch images are copied and transformed every iteration.</p>
<p>The confusion may come from the fact that often, like in your example, <code>transforms</code> are used both for data preparation (resizing/cropping to expected dimensions, normalizing values, etc.) and for data augmentation (randomizing the resizing/cropping, randomly flipping the images, etc.).</p>
<hr>
<p>What your <code>data_transforms['train']</code> does is:</p>
<ul>
<li>Randomly resize the provided image and randomly crop it to obtain a <code>(224, 224)</code> patch</li>
<li>Apply or not a random horizontal flip to this patch, with a 50/50 chance</li>
<li>Convert it to a <code>Tensor</code></li>
<li>Normalize the resulting <code>Tensor</code>, given the mean and deviation values you provided</li>
</ul>
<p>What your <code>data_transforms['val']</code> does is:</p>
<ul>
<li>Resize your image to <code>(256, 256)</code></li>
<li>Center crop the resized image to obtain a <code>(224, 224)</code> patch</li>
<li>Convert it to a <code>Tensor</code></li>
<li>Normalize the resulting <code>Tensor</code>, given the mean and deviation values you provided</li>
</ul>
<p>(i.e. the random resizing/cropping for the training data is replaced by a fixed operation for the validation one, to have reliable validation results)</p>
<hr>
<p>If you don't want your training images to be horizontally flipped with a 50/50 chance, just remove the <code>transforms.RandomHorizontalFlip()</code> line.</p>
<p>Similarly, if you want your images to always be center-cropped, replace <code>transforms.RandomResizedCrop</code> by <code>transforms.Resize</code> and <code>transforms.CenterCrop</code>, as done for <code>data_transforms['val']</code>.</p>
| 514
|
pytorch
|
How do I visualize a net in Pytorch?
|
https://stackoverflow.com/questions/52468956/how-do-i-visualize-a-net-in-pytorch
|
<pre><code>import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data
import torchvision.models as models
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torch.autograd import Variable
from torchvision.models.vgg import model_urls
from torchviz import make_dot
batch_size = 3
learning_rate =0.0002
epoch = 50
resnet = models.resnet50(pretrained=True)
print(resnet)
make_dot(resnet)
</code></pre>
<p>I want to visualize <code>resnet</code> from the pytorch models. How can I do it? I tried to use <code>torchviz</code> but it gives an error:</p>
<pre><code>'ResNet' object has no attribute 'grad_fn'
</code></pre>
|
<p><code>make_dot</code> expects a variable (i.e., a tensor with a <code>grad_fn</code>), not the model itself.<br />
Try:</p>
<pre><code>x = torch.zeros(1, 3, 224, 224, dtype=torch.float, requires_grad=False)
out = resnet(x)
make_dot(out) # plot graph of variable, not of a nn.Module
</code></pre>
| 515
|
pytorch
|
How do I initialize weights in PyTorch?
|
https://stackoverflow.com/questions/49433936/how-do-i-initialize-weights-in-pytorch
|
<p>How do I initialize weights and biases of a network (via e.g. He or Xavier initialization)?</p>
|
<h1>Single layer</h1>
<p>To initialize the weights of a single layer, use a function from <a href="https://pytorch.org/docs/master/nn.init.html" rel="noreferrer"><code>torch.nn.init</code></a>. For instance:</p>
<pre><code>conv1 = torch.nn.Conv2d(...)
torch.nn.init.xavier_uniform_(conv1.weight)
</code></pre>
<p>Alternatively, you can modify the parameters by writing to <code>conv1.weight.data</code> (which is a <a href="http://pytorch.org/docs/master/tensors.html#torch.Tensor" rel="noreferrer"><code>torch.Tensor</code></a>). Example:</p>
<pre><code>conv1.weight.data.fill_(0.01)
</code></pre>
<p>The same applies for biases:</p>
<pre><code>conv1.bias.data.fill_(0.01)
</code></pre>
<h2><code>nn.Sequential</code> or custom <code>nn.Module</code></h2>
<p>Pass an initialization function to <a href="http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply" rel="noreferrer"><code>torch.nn.Module.apply</code></a>. It will initialize the weights in the entire <code>nn.Module</code> recursively.</p>
<blockquote>
<p><strong>apply(<em>fn</em>):</strong> Applies <code>fn</code> recursively to every submodule (as returned by <code>.children()</code>) as well as self. Typical use includes initializing the parameters of a model (see also torch-nn-init).</p>
</blockquote>
<p>Example:</p>
<pre><code>def init_weights(m):
if isinstance(m, nn.Linear):
        torch.nn.init.xavier_uniform_(m.weight)
m.bias.data.fill_(0.01)
net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
net.apply(init_weights)
</code></pre>
| 516
|
pytorch
|
How to do gradient clipping in pytorch?
|
https://stackoverflow.com/questions/54716377/how-to-do-gradient-clipping-in-pytorch
|
<p>What is the correct way to perform gradient clipping in pytorch?</p>
<p>I have an exploding gradients problem.</p>
|
<p><a href="https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_norm_.html#torch.nn.utils.clip_grad_norm_" rel="noreferrer"><code>clip_grad_norm</code></a> (which is actually deprecated in favor of <code>clip_grad_norm_</code> following the more consistent syntax of a trailing <code>_</code> when in-place modification is performed) clips the norm of the <em>overall</em> gradient by concatenating all parameters passed to the function, as can be seen from <a href="https://pytorch.org/docs/stable/_modules/torch/nn/utils/clip_grad.html" rel="noreferrer">the documentation</a>:</p>
<blockquote>
<p>The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place.</p>
</blockquote>
<p>From your example it looks like that you want <a href="https://pytorch.org/docs/stable/_modules/torch/nn/utils/clip_grad.html" rel="noreferrer"><code>clip_grad_value_</code></a> instead which has a similar syntax and also modifies the gradients in-place:</p>
<pre><code>clip_grad_value_(model.parameters(), clip_value)
</code></pre>
<p>Another option is to register a <a href="https://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.html#forward-and-backward-function-hooks" rel="noreferrer">backward hook</a>. This takes the current gradient as an input and may return a tensor which will be used in-place of the previous gradient, i.e. modifying it. This hook is called each time after a gradient has been computed, i.e. there's no need for manually clipping once the hook has been registered:</p>
<pre><code>for p in model.parameters():
p.register_hook(lambda grad: torch.clamp(grad, -clip_value, clip_value))
</code></pre>
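<p>A minimal sketch of where either call sits in a training step — always after <code>backward()</code> and before <code>step()</code>, since both modify the already-computed gradients in place:</p>
<pre><code>import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
clip_value = 0.5

x, y = torch.randn(8, 4), torch.randn(8, 2)
loss = nn.functional.mse_loss(model(x), y)

optimizer.zero_grad()
loss.backward()
# clip after backward(), before step(): gradients are modified in place
torch.nn.utils.clip_grad_value_(model.parameters(), clip_value)
optimizer.step()

# every gradient element now lies in [-clip_value, clip_value]
print(all(p.grad.abs().max().item() <= clip_value for p in model.parameters()))  # True
</code></pre>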
| 517
|
pytorch
|
What does .contiguous() do in PyTorch?
|
https://stackoverflow.com/questions/48915810/what-does-contiguous-do-in-pytorch
|
<p>What does <code>x.contiguous()</code> do for a tensor <code>x</code>?</p>
|
<p>There are a few operations on Tensors in PyTorch that do not change the contents of a tensor, but change the way the data is organized. These operations include:</p>
<blockquote>
<p><code>narrow()</code>, <code>view()</code>, <code>expand()</code> and <code>transpose()</code></p>
</blockquote>
<p><em>For example:</em> when you call <code>transpose()</code>, PyTorch doesn't generate a new tensor with a new layout, it just modifies meta information in the Tensor object so that the offset and stride describe the desired new shape. In this example, the transposed tensor and original tensor share the same memory:</p>
<pre class="lang-py prettyprint-override"><code>x = torch.randn(3,2)
y = torch.transpose(x, 0, 1)
x[0, 0] = 42
print(y[0,0])
# prints 42
</code></pre>
<p>This is where the concept of <em>contiguous</em> comes in. In the example above, <code>x</code> is contiguous but <code>y</code> is not because its memory layout is different to that of a tensor of same shape made from scratch. Note that the word <em>"contiguous"</em> is a bit misleading because it's not that the content of the tensor is spread out around disconnected blocks of memory. Here bytes are still allocated in one block of memory but the order of the elements is different!</p>
<p>When you call <code>contiguous()</code>, it actually makes a copy of the tensor such that the order of its elements in memory is the same as if it had been created from scratch with the same data.</p>
<p>Normally you don't need to worry about this. You're generally safe to assume everything will work, and wait until you get a <code>RuntimeError: input is not contiguous</code> where PyTorch expects a contiguous tensor to add a call to <code>contiguous()</code>.</p>
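<p>You can check the layout yourself with <code>is_contiguous()</code> — a small sketch tying the pieces above together:</p>
<pre><code>import torch

x = torch.arange(6.).reshape(3, 2)
y = x.t()                       # transpose: same storage, different strides
print(y.is_contiguous())        # False

z = y.contiguous()              # copies the data into a fresh, row-major layout
print(z.is_contiguous())        # True

# z now has its own memory, so writing to x affects y but not z
x[0, 0] = 42.0
print(y[0, 0].item())           # 42.0
print(z[0, 0].item())           # 0.0
</code></pre>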
| 518
|
pytorch
|
Convert PyTorch tensor to python list
|
https://stackoverflow.com/questions/53903373/convert-pytorch-tensor-to-python-list
|
<p>How do I convert a PyTorch <code>Tensor</code> into a python <code>list</code>?</p>
<p>I want to convert a tensor of size <code>[1, 2048, 1, 1]</code> into a list of 2048 elements. My tensor has floating point values. Is there a solution which also works with other data types such as int?</p>
|
<p>Use <a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor.tolist" rel="noreferrer"><code>Tensor.tolist()</code></a> e.g:</p>
<blockquote>
<pre><code>>>> import torch
>>> a = torch.randn(2, 2)
>>> a.tolist()
[[0.012766935862600803, 0.5415473580360413],
[-0.08909505605697632, 0.7729271650314331]]
>>> a[0,0].tolist()
0.012766935862600803
</code></pre>
</blockquote>
<p>To remove all dimensions of size <code>1</code>, use <code>a.squeeze().tolist()</code>.</p>
<p>Alternatively, if all but one dimension are of size <code>1</code> (or you wish to get a list of every element of the tensor) you may use <a href="https://pytorch.org/docs/stable/torch.html#torch.flatten" rel="noreferrer"><code>a.flatten().tolist()</code></a>.</p>
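<p>Applied to the tensor from the question (a quick sketch — this also covers the integer-dtype case, where elements come back as Python <code>int</code>s):</p>
<pre><code>import torch

t = torch.rand(1, 2048, 1, 1)
lst = t.flatten().tolist()        # plain Python list of 2048 floats
print(len(lst))                   # 2048
print(type(lst[0]))               # <class 'float'>

ti = torch.arange(5)              # int64 tensor
print(ti.tolist())                # [0, 1, 2, 3, 4] — plain Python ints
</code></pre>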
| 519
|
pytorch
|
What does model.train() do in PyTorch?
|
https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch
|
<p>Does it call <code>forward()</code> in <code>nn.Module</code>? I thought that when we call the model, the <code>forward</code> method is used.
Why do we need to specify <code>train()</code>?</p>
|
<p><code>model.train()</code> tells your model that you are training the model. This helps inform layers such as Dropout and BatchNorm, which are designed to behave differently during training and evaluation. For instance, in training mode, BatchNorm updates a moving average on each new batch; whereas, for evaluation mode, these updates are frozen.</p>
<p>More details:
<code>model.train()</code> sets the mode to train
(see <a href="https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module.train" rel="noreferrer">source code</a>). You can call either <code>model.eval()</code> or <code>model.train(mode=False)</code> to indicate that you are testing.
It is somewhat intuitive to expect the <code>train</code> function to train the model, but it does not do that. It just sets the mode.</p>
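<p>A small sketch of the mode switch in action, using Dropout (which behaves very differently in the two modes):</p>
<pre><code>import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1000)

drop.train()                          # training mode: ~half the elements zeroed, rest scaled by 2
train_out = drop(x)
drop.eval()                           # eval mode: dropout is a no-op
eval_out = drop(x)

print((train_out == 0).any().item())  # True
print(torch.equal(eval_out, x))       # True
</code></pre>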
| 520
|
pytorch
|
Why `torch.cuda.is_available()` returns False even after installing pytorch with cuda?
|
https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with
|
<p>On a Windows 10 PC with an NVidia GeForce 820M
I installed CUDA 9.2 and cudnn 7.1 successfully,
and then installed PyTorch using the instructions at pytorch.org:</p>
<pre><code>pip install torch==1.4.0+cu92 torchvision==0.5.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
</code></pre>
<p>But I get:</p>
<pre><code>>>> import torch
>>> torch.cuda.is_available()
False
</code></pre>
|
<p>Your graphics card does not support CUDA 9.0.</p>
<p>Since I've seen a lot of questions that refer to issues like this I'm writing a broad answer on how to check if your system is compatible with CUDA, specifically targeted at using PyTorch with CUDA support. Various circumstance-dependent options for resolving issues are described in the last section of this answer.</p>
<hr />
<p>The system requirements to use PyTorch with CUDA are as follows:</p>
<ul>
<li>Your graphics card must support the required version of CUDA</li>
<li>Your graphics card <strong>driver</strong> must support the required version of CUDA</li>
<li>The PyTorch binaries must be built with support for the compute capability of your graphics card</li>
</ul>
<p><em>Note</em>: If you install pre-built binaries (using either pip or conda) then you do <strong>not</strong> need to install the CUDA toolkit or runtime on your system before installing PyTorch with CUDA support. This is because PyTorch, unless compiled from source, is always delivered with a copy of the CUDA library.</p>
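<p>Once installed, you can confirm from Python which CUDA version your binaries were built against and whether a GPU is actually visible at runtime (a small diagnostic sketch):</p>
<pre><code>import torch

print(torch.__version__)          # e.g. 1.4.0+cu92
print(torch.version.cuda)         # CUDA version the wheel was compiled against (None on CPU-only builds)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
</code></pre>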
<hr />
<h1>1. How to check if your GPU/graphics card supports a particular CUDA version</h1>
<p>First, identify the model of your graphics card.</p>
<p>Before moving forward ensure that you've got an NVIDIA graphics card. <strong>AMD and Intel graphics cards do not support CUDA</strong>.</p>
<p>NVIDIA doesn't do a great job of providing CUDA compatibility information in a single location. The best resource is probably <a href="https://en.wikipedia.org/wiki/CUDA#GPUs_supported" rel="noreferrer">this section on the CUDA Wikipedia page</a>. To determine which versions of CUDA are supported</p>
<ol>
<li>Locate your graphics card model in the big table and take note of the compute capability version. For example, the GeForce 820M compute capability is 2.1.</li>
<li>In the bullet list preceding the table check to see if the required CUDA version is supported by the compute capability of your graphics card. For example, CUDA 9.2 is not supported for compute capability 2.1.</li>
</ol>
<p>If your card doesn't support the required CUDA version then see the options in section 4 of this answer.</p>
<p><em>Note</em>: Compute capability refers to the computational features supported by your graphics card. Newer versions of the CUDA library rely on newer hardware features, which is why we need to determine the compute capability in order to determine the supported versions of CUDA.</p>
<hr />
<h1>2. How to check if your GPU/graphics driver supports a particular CUDA version</h1>
<p>The graphics driver is the software that allows your operating system to communicate with your graphics card. Since CUDA relies on low-level communication with the graphics card, you need to have an up-to-date driver in order to use the latest versions of CUDA.</p>
<p>First, make sure you have an NVIDIA graphics driver installed on your system. You can acquire the newest driver for your system from <a href="https://www.nvidia.com/Download/index.aspx?lang=en-us" rel="noreferrer">NVIDIA's website</a>.</p>
<p>If you've installed the latest driver version then your graphics driver <em>probably</em> supports every CUDA version compatible with your graphics card (see section 1). To verify, you can check Table 2 in the <a href="https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html" rel="noreferrer">CUDA release notes</a>. <a href="https://github.com/pytorch/pytorch/issues/4546#issuecomment-356392237" rel="noreferrer">In rare cases</a> I've heard of the latest recommended graphics drivers not supporting the latest CUDA releases. You should be able to get around this by installing the CUDA toolkit for the required CUDA version and selecting the option to install compatible drivers, though this usually isn't required.</p>
<p>If you can't, or don't want to upgrade the graphics driver then you can check to see if your current driver supports the specific CUDA version as follows:</p>
<h3>On Windows</h3>
<ol>
<li>Determine your current graphics driver version (Source <a href="https://www.nvidia.com/en-gb/drivers/drivers-faq/" rel="noreferrer">https://www.nvidia.com/en-gb/drivers/drivers-faq/</a>)</li>
</ol>
<blockquote>
<p>Right-click on your desktop and select NVIDIA Control Panel. From the
NVIDIA Control Panel menu, select Help > System Information. The
driver version is listed at the top of the Details window. For more
advanced users, you can also get the driver version number from the
Windows Device Manager. Right-click on your graphics device under
display adapters and then select Properties. Select the Driver tab and
read the Driver version. The last 5 digits are the NVIDIA driver
version number.</p>
</blockquote>
<ol start="2">
<li>Visit the <a href="https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html" rel="noreferrer">CUDA release notes</a> and scroll down to Table 2. Use this table to verify your graphics driver is new enough to support the required version of CUDA.</li>
</ol>
<h3>On Linux/OS X</h3>
<p>Run the following command in a terminal window</p>
<pre><code>nvidia-smi
</code></pre>
<p>This should result in something like the following</p>
<pre class="lang-none prettyprint-override"><code>Sat Apr 4 15:31:57 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 435.21 Driver Version: 435.21 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 206... Off | 00000000:01:00.0 On | N/A |
| 0% 35C P8 16W / 175W | 502MiB / 7974MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1138 G /usr/lib/xorg/Xorg 300MiB |
| 0 2550 G /usr/bin/compiz 189MiB |
| 0 5735 G /usr/lib/firefox/firefox 5MiB |
| 0 7073 G /usr/lib/firefox/firefox 5MiB |
+-----------------------------------------------------------------------------+
</code></pre>
<p><code>Driver Version: ###.##</code> is your graphic driver version. In the example above the driver version is <code>435.21</code>.</p>
<p><code>CUDA Version: ##.#</code> is the latest version of CUDA supported by your graphics driver. In the example above the graphics driver supports CUDA 10.1 <em>as well as all compatible CUDA versions before 10.1</em>.</p>
<p><em>Note</em>: The <code>CUDA Version</code> displayed in this table does <strong>not</strong> indicate that the CUDA toolkit or runtime are actually installed on your system. This just indicates the latest version of CUDA your graphics driver is compatible with.</p>
<p>To be extra sure that your driver supports the desired CUDA version you can visit Table 2 on the <a href="https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html" rel="noreferrer">CUDA release notes</a> page.</p>
<hr />
<h1>3. How to check if a particular version of PyTorch is compatible with your GPU/graphics card compute capability</h1>
<p>Even if your graphics card supports the required version of CUDA then it's possible that the pre-compiled PyTorch binaries were not compiled with support for your compute capability. For example, in PyTorch 0.3.1 <a href="https://github.com/pytorch/pytorch/releases/tag/v0.3.1" rel="noreferrer">support for compute capability <= 5.0 was dropped</a>.</p>
<p><strong>First, verify that your graphics card and driver both support the required CUDA version (see Sections 1 and 2 above)</strong>, the information in this section assumes that this is the case.</p>
<p>The easiest way to check if PyTorch supports your compute capability is to install the desired version of PyTorch with CUDA support and run the following from a python interpreter</p>
<pre><code>>>> import torch
>>> torch.zeros(1).cuda()
</code></pre>
<p>If you get an error message that reads</p>
<pre class="lang-none prettyprint-override"><code>Found GPU0 XXXXX which is of cuda capability #.#.
PyTorch no longer supports this GPU because it is too old.
</code></pre>
<p>then that means PyTorch was not compiled with support for your compute capability. If this runs without issue then you should be good to go.</p>
<p><em>Update</em> If you're installing an old version of PyTorch on a system with a newer GPU then it's possible that the old PyTorch release wasn't compiled with support for your compute capability. Assuming your GPU supports the version of CUDA used by PyTorch, then you should be able to rebuild PyTorch from source with the desired CUDA version or upgrade to a more recent version of PyTorch that was compiled with support for the newer compute capabilities.</p>
<hr />
<h1>4. Conclusion</h1>
<p>If your graphics card and driver support the required version of CUDA (section 1 and 2) but the PyTorch binaries don't support your compute capability (section 3) then your options are</p>
<ul>
<li>Compile PyTorch from source with support for your compute capability (see <a href="https://pytorch.org/get-started/locally/#linux-from-source" rel="noreferrer">here</a>)</li>
<li>Install PyTorch without CUDA support (CPU-only)</li>
<li>Install an older version of the PyTorch binaries that support your compute capability (not recommended as PyTorch 0.3.1 is very outdated at this point). AFAIK compute capability older than 3.X has never been supported in the pre-built binaries</li>
<li>Upgrade your graphics card</li>
</ul>
<p>If your graphics card doesn't support the required version of CUDA (section 1) then your options are</p>
<ul>
<li>Install PyTorch without CUDA support (CPU-only)</li>
<li>Install an older version of PyTorch that supports a CUDA version supported by your graphics card (still may require compiling from source if the binaries don't support your compute capability)</li>
<li>Upgrade your graphics card</li>
</ul>
| 521
|
pytorch
|
How to multiply matrices in PyTorch?
|
https://stackoverflow.com/questions/44524901/how-to-multiply-matrices-in-pytorch
|
<p>With numpy, I can do a simple matrix multiplication like this:</p>
<pre><code>a = numpy.ones((3, 2))
b = numpy.ones((2, 1))
result = a.dot(b)
</code></pre>
<p>However, this does not work with PyTorch:</p>
<pre><code>a = torch.ones((3, 2))
b = torch.ones((2, 1))
result = torch.dot(a, b)
</code></pre>
<p>This code throws the following error:</p>
<blockquote>
<p>RuntimeError: 1D tensors expected, but got 2D and 2D tensors</p>
</blockquote>
<p>How do I perform matrix multiplication in PyTorch?</p>
|
<p>Use <a href="https://pytorch.org/docs/stable/generated/torch.mm.html" rel="noreferrer"><code>torch.mm</code></a>:</p>
<pre><code>torch.mm(a, b)
</code></pre>
<p><code>torch.dot()</code> behaves differently to <code>np.dot()</code>. There's been some discussion about what would be desirable <a href="https://github.com/pytorch/pytorch/issues/138" rel="noreferrer">here</a>. Specifically, <code>torch.dot()</code> treats both <code>a</code> and <code>b</code> as 1D vectors (irrespective of their original shape) and computes their inner product. The error is thrown because this behaviour makes your <code>a</code> a vector of length 6 and your <code>b</code> a vector of length 2; hence their inner product can't be computed. For matrix multiplication in PyTorch, use <code>torch.mm()</code>. Numpy's <code>np.dot()</code> in contrast is more flexible; it computes the inner product for 1D arrays and performs matrix multiplication for 2D arrays.</p>
<p><a href="https://pytorch.org/docs/master/torch.html#torch.matmul" rel="noreferrer"><code>torch.matmul</code></a> performs matrix multiplications if both arguments are <code>2D</code> and computes their dot product if both arguments are <code>1D</code>. For inputs of such dimensions, its behaviour is the same as <code>np.dot</code>. It also lets you do broadcasting or <code>matrix x matrix</code>, <code>matrix x vector</code> and <code>vector x vector</code> operations in batches.</p>
<pre><code># 1D inputs, same as torch.dot
a = torch.rand(n)
b = torch.rand(n)
torch.matmul(a, b) # torch.Size([])
# 2D inputs, same as torch.mm
a = torch.rand(m, k)
b = torch.rand(k, j)
torch.matmul(a, b) # torch.Size([m, j])
</code></pre>
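<p>A sketch of the broadcasting behaviour of <code>torch.matmul</code>, plus the fix for the original example from the question:</p>
<pre><code>import torch

# broadcasting: a batch of 10 (3 x 4) matrices times a single (4 x 5) matrix
a = torch.rand(10, 3, 4)
b = torch.rand(4, 5)
out = torch.matmul(a, b)
print(out.shape)        # torch.Size([10, 3, 5])

# the original example, using torch.mm for a plain 2D matrix product
result = torch.mm(torch.ones(3, 2), torch.ones(2, 1))
print(result.shape)     # torch.Size([3, 1])
</code></pre>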
| 522
|
pytorch
|
PyTorch reshape tensor dimension
|
https://stackoverflow.com/questions/43328632/pytorch-reshape-tensor-dimension
|
<p>I want to reshape a vector of shape <code>(5,)</code> into a matrix of shape <code>(1, 5)</code>.</p>
<p>With numpy, I can do:</p>
<pre><code>>>> import numpy as np
>>> a = np.array([1, 2, 3, 4, 5])
>>> a.shape
(5,)
>>> a = np.reshape(a, (1, 5))
>>> a.shape
(1, 5)
>>> a
array([[1, 2, 3, 4, 5]])
</code></pre>
<p>But how do I do this with PyTorch?</p>
|
<p>Use <a href="https://pytorch.org/docs/stable/generated/torch.unsqueeze.html#torch.unsqueeze" rel="noreferrer"><code>torch.unsqueeze(input, dim, out=None)</code></a>:</p>
<pre><code>>>> import torch
>>> a = torch.Tensor([1, 2, 3, 4, 5])
>>> a
1
2
3
4
5
[torch.FloatTensor of size 5]
>>> a = a.unsqueeze(0)
>>> a
1 2 3 4 5
[torch.FloatTensor of size 1x5]
</code></pre>
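<p>More recent PyTorch versions also offer numpy-style alternatives that give the same result — a sketch:</p>
<pre><code>import torch

a = torch.tensor([1., 2., 3., 4., 5.])

# equivalent ways to get shape (1, 5); view/reshape mirror numpy's reshape
b = a.unsqueeze(0)
c = a.view(1, 5)
d = a.reshape(1, -1)    # -1 infers the remaining dimension

print(b.shape, c.shape, d.shape)  # all torch.Size([1, 5])
</code></pre>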
| 523
|
pytorch
|
Pytorch Operation to detect NaNs
|
https://stackoverflow.com/questions/48158017/pytorch-operation-to-detect-nans
|
<p>Is there a Pytorch-internal procedure to detect <code>NaN</code>s in Tensors? Tensorflow has the <code>tf.is_nan</code> and the <code>tf.check_numerics</code> operations ... Does Pytorch have something similar, somewhere? I could not find something like this in the docs...</p>
<p>I am looking specifically for a Pytorch internal routine, since I would like this to happen on the GPU as well as on the CPU. This excludes numpy-based solutions (like <code>np.isnan(sometensor.numpy()).any()</code>) ...</p>
|
<p>You can always leverage the fact that <code>nan != nan</code>:</p>
<pre><code>>>> x = torch.tensor([1, 2, np.nan])
tensor([ 1.,  2., nan])
>>> x != x
tensor([ 0, 0, 1], dtype=torch.uint8)
</code></pre>
<p>With pytorch 0.4 there is also <a href="https://pytorch.org/docs/stable/torch.html?highlight=isnan#torch.isnan" rel="noreferrer"><code>torch.isnan</code></a>:</p>
<pre><code>>>> torch.isnan(x)
tensor([ 0, 0, 1], dtype=torch.uint8)
</code></pre>
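<p>In recent PyTorch versions <code>torch.isnan</code> returns a boolean mask, which composes nicely with <code>.any()</code> for detection and with indexing for filtering — a sketch:</p>
<pre><code>import torch

x = torch.tensor([1.0, 2.0, float("nan")])
mask = torch.isnan(x)

print(mask.any().item())   # True — at least one NaN present
print(x[~mask])            # tensor([1., 2.]) — NaNs dropped
</code></pre>
<p>Both operations run on GPU tensors as well, so no round-trip through numpy is needed.</p>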
| 524
|
pytorch
|
Pytorch vs. Keras: Pytorch model overfits heavily
|
https://stackoverflow.com/questions/50079735/pytorch-vs-keras-pytorch-model-overfits-heavily
|
<p>For several days now, I'm trying to replicate my keras training results with pytorch. Whatever I do, the pytorch model will overfit far earlier and more strongly to the validation set than in keras. For pytorch I use the same XCeption code from <a href="https://github.com/Cadene/pretrained-models.pytorch" rel="noreferrer">https://github.com/Cadene/pretrained-models.pytorch</a>.</p>
<p>The dataloading, the augmentation, the validation, the training schedule etc. are equivalent. Am I missing something obvious? There must be a general problem somewhere. I tried thousands of different module constellations, but nothing seems to come even close to the keras training. Can somebody help?</p>
<p>Keras model: val accuracy > 90%</p>
<pre><code># base model
base_model = applications.Xception(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))
# top model
x = base_model.output
x = GlobalMaxPooling2D()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(4, activation='softmax')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# Compile model
from keras import optimizers
adam = optimizers.Adam(lr=0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=adam, metrics=['accuracy'])
# LROnPlateau etc. with equivalent settings as pytorch
</code></pre>
<p>Pytorch model: val accuracy ~81%</p>
<pre><code>from xception import xception
import torch.nn.functional as F
# modified from https://github.com/Cadene/pretrained-models.pytorch
class XCeption(nn.Module):
def __init__(self, num_classes):
super(XCeption, self).__init__()
original_model = xception(pretrained="imagenet")
self.features=nn.Sequential(*list(original_model.children())[:-1])
self.last_linear = nn.Sequential(
nn.Linear(original_model.last_linear.in_features, 512),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(512, num_classes)
)
def logits(self, features):
x = F.relu(features)
x = F.adaptive_max_pool2d(x, (1, 1))
x = x.view(x.size(0), -1)
x = self.last_linear(x)
return x
def forward(self, input):
x = self.features(input)
x = self.logits(x)
return x
device = torch.device("cuda")
model=XCeption(len(class_names))
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
model = nn.DataParallel(model)
model.to(device)
criterion = nn.CrossEntropyLoss(size_average=False)
optimizer = optim.Adam(model.parameters(), lr=0.0001)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.2, patience=5, cooldown=5)
</code></pre>
<p>Thank you very much!</p>
<p>Update:
Settings:</p>
<pre><code>criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.2, patience=5, cooldown=5)
model = train_model(model, train_loader, val_loader,
criterion, optimizer, scheduler,
batch_size, trainmult=8, valmult=10,
num_epochs=200, epochs_top=0)
</code></pre>
<p>Cleaned training function:</p>
<pre><code>def train_model(model, train_loader, val_loader, criterion, optimizer, scheduler, batch_size, trainmult=1, valmult=1, num_epochs=None, epochs_top=0):
for epoch in range(num_epochs):
for phase in ['train', 'val']:
running_loss = 0.0
running_acc = 0
total = 0
# Iterate over data.
if phase=="train":
model.train(True) # Set model to training mode
for i in range(trainmult):
for data in train_loader:
# get the inputs
inputs, labels = data
inputs, labels = inputs.to(torch.device("cuda")), labels.to(torch.device("cuda"))
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs = model(inputs) # notinception
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
loss.backward()
optimizer.step()
# statistics
total += labels.size(0)
running_loss += loss.item()*labels.size(0)
running_acc += torch.sum(preds == labels)
train_loss=(running_loss/total)
train_acc=(running_acc.double()/total)
else:
model.train(False) # Set model to evaluate mode
with torch.no_grad():
for i in range(valmult):
for data in val_loader:
# get the inputs
inputs, labels = data
inputs, labels = inputs.to(torch.device("cuda")), labels.to(torch.device("cuda"))
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels.data)
# statistics
total += labels.size(0)
running_loss += loss.item()*labels.size(0)
running_acc += torch.sum(preds == labels)
val_loss=(running_loss/total)
val_acc=(running_acc.double()/total)
scheduler.step(val_loss)
return model
</code></pre>
|
<p>It may be because of the type of weight initialization you are using;
otherwise this should not happen.
Try using the same initializer in both models.</p>
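<p>For context: Keras <code>Dense</code> layers default to <code>glorot_uniform</code> (Xavier) initialization with zero biases, while PyTorch's <code>nn.Linear</code> uses a different default. A hedged sketch of matching the Keras defaults for a custom head like the one in the question:</p>
<pre><code>import torch.nn as nn

def keras_like_init(m):
    # mimic Keras Dense defaults: glorot_uniform weights, zero biases
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

head = nn.Sequential(
    nn.Linear(2048, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, 4),
)
head.apply(keras_like_init)
print((head[0].bias == 0).all().item())  # True
</code></pre>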
| 525
|
pytorch
|
How to tell PyTorch to not use the GPU?
|
https://stackoverflow.com/questions/53266350/how-to-tell-pytorch-to-not-use-the-gpu
|
<p>I want to do some timing comparisons between CPU & GPU as well as some profiling and would like to know if there's a way to tell <a href="/questions/tagged/pytorch" class="post-tag" title="show questions tagged 'pytorch'" rel="tag">pytorch</a> to not use the GPU and instead use the CPU only? I realize I could install another CPU-only <a href="/questions/tagged/pytorch" class="post-tag" title="show questions tagged 'pytorch'" rel="tag">pytorch</a>, but hoping there's an easier way.</p>
|
<p>Before running your code, run this shell command to tell torch that there are no GPUs:</p>
<pre><code>export CUDA_VISIBLE_DEVICES=""
</code></pre>
<hr />
<p>This will tell it to use only one GPU (the one with id 0) and so on:</p>
<pre><code>export CUDA_VISIBLE_DEVICES="0"
</code></pre>
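<p>Alternatively, from inside Python you can pin everything to the CPU explicitly, regardless of which GPUs are visible — a sketch of the usual device-agnostic pattern:</p>
<pre><code>import torch

device = torch.device("cpu")             # force CPU, even if CUDA is available
x = torch.randn(3, 3, device=device)
# model.to(device) would likewise move a model's parameters to the CPU
print(x.device)                          # cpu
</code></pre>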
| 526
|
pytorch
|
Why is PyTorch called PyTorch?
|
https://stackoverflow.com/questions/51530778/why-is-pytorch-called-pytorch
|
<p>I have been looking into deep learning frameworks lately and have been wondering about the origin of the name of PyTorch.</p>
<p>With Keras, their <a href="https://keras.io/" rel="noreferrer">home page</a> nicely explains the name's origin, and with something like TensorFlow, the reasoning behind the name seems rather clear. For PyTorch, however, I cannot seem to come across why it is so named.</p>
<p>Of course, I understand the "Py-" prefix and also know that PyTorch is a successor in some sense of Torch. But I am still wondering: what is the original idea behind the "-Torch" part? Is it known what the origin of the name is?</p>
|
<p>Here a short answer, formed as another question:</p>
<h3>Torch, SMORCH ???</h3>
<p>PyTorch developed from Torch7. A precursor to the original Torch was a library called <a href="http://bengio.abracadoudou.com/SVMTorch.html" rel="noreferrer">SVM-Torch</a>, which was developed around 2001. The SVM stands for Support Vector Machines.</p>
<p>SVM-Torch is a decomposition algorithm similar to <a href="http://svmlight.joachims.org/" rel="noreferrer">SVM-Light</a>, but adapted to regression problems, according to <a href="http://www.ai.mit.edu/projects/jmlr/papers/volume1/collobert01a/collobert01a.ps.gz" rel="noreferrer">this paper</a>.</p>
<p>Also around this time, G.W.Flake described the sequential minimal optimization algorithm (SMO), which could be used to train SVMs on sparse data sets, and this was incorporated into NODElib.</p>
<p>Interestingly, this was called <a href="https://web.archive.org/web/20030319153242/http://www.neci.nj.nec.com/homepages/flake/smorch.ps" rel="noreferrer">the SMORCH algorithm</a>.</p>
<p>You can find out more about SMORCH in the <a href="https://github.com/gwf/NODElib/blob/master/include/nodelib/svm.h" rel="noreferrer">NODElib docs</a></p>
<blockquote>
<p>Optimization of the SVMs is:</p>
<ul>
<li>performed by a variation of John Platt's sequential minimal</li>
<li>optimization (SMO) algorithm. This version of SMO is generalized</li>
<li>for regression, uses kernel caching, and incorporates several</li>
<li>heuristics; for these reasons, we refer to the optimization</li>
<li>algorithm as SMORCH.</li>
</ul>
</blockquote>
<p>So <strong>SMORCH</strong> =</p>
<p><strong>S</strong>equential<br />
<strong>M</strong>inimal<br />
<strong>O</strong>ptimization<br />
<strong>R</strong>egression<br />
<strong>C</strong>aching<br />
<strong>H</strong>euristics</p>
<p>I can't answer definitively, but my thinking is "Torch" is a riff or evolution of "Light" from SVM-Light combined with a large helping of SMORCHiness. You'd need to check in with the authors of SVMTorch and SVM-Light to confirm that this is indeed what "sparked" the name. It is reasonable to assume that the "TO" of Torch stands for some other optimization, rather than SMO, such as <strong>T</strong>ensor <strong>O</strong>ptimization, but I haven't found any direct reference... yet.</p>
| 527
|
pytorch
|
pytorch - connection between loss.backward() and optimizer.step()
|
https://stackoverflow.com/questions/53975717/pytorch-connection-between-loss-backward-and-optimizer-step
|
<p>Where is an explicit connection between the <code>optimizer</code> and the <code>loss</code>?</p>
<p>How does the optimizer know where to get the gradients of the loss without a call like <code>optimizer.step(loss)</code>?</p>
<p>-More context-</p>
<p>When I minimize the loss, I didn't have to pass the gradients to the optimizer.</p>
<pre><code>loss.backward() # Back Propagation
optimizer.step() # Gradient Descent
</code></pre>
|
<p>Without delving too deep into the internals of pytorch, I can offer a simplistic answer:</p>
<p>Recall that when initializing <code>optimizer</code> you explicitly tell it what parameters (tensors) of the model it should be updating. The gradients are "stored" by the tensors themselves (they have <a href="https://pytorch.org/docs/master/autograd.html#torch.Tensor.grad" rel="noreferrer"><code>grad</code></a> and <a href="https://pytorch.org/docs/master/autograd.html#torch.Tensor.requires_grad" rel="noreferrer"><code>requires_grad</code></a> attributes) once you call <code>backward()</code> on the loss. After computing the gradients for all tensors in the model, calling <code>optimizer.step()</code> makes the optimizer iterate over all parameters (tensors) it is supposed to update and use their internally stored <code>grad</code> to update their values.</p>
<p>More info on computational graphs and the additional "grad" information stored in pytorch tensors can be found in <a href="https://stackoverflow.com/a/63869655/1714410">this answer</a>.</p>
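<p>A minimal sketch of this mechanism, using a bare tensor instead of a full model (values are illustrative):</p>

```python
import torch

# a single learnable parameter, registered with the optimizer at construction
w = torch.tensor([1.0, 2.0], requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

loss = (w ** 2).sum()   # scalar loss = w1^2 + w2^2
loss.backward()         # fills w.grad with d(loss)/dw = 2*w

print(w.grad)           # tensor([2., 4.])
optimizer.step()        # reads w.grad internally: w <- w - lr * w.grad
print(w)                # tensor([0.8000, 1.6000], requires_grad=True)
```

<p>Note that <code>loss</code> never touches the optimizer directly; the link runs entirely through the parameter tensors that both objects reference.</p>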
<p>Referencing the parameters by the optimizer can sometimes cause trouble, e.g., when the model is moved to GPU <em>after</em> initializing the optimizer.
Make sure you are done setting up your model <em>before</em> constructing the optimizer. See <a href="https://stackoverflow.com/a/66096687/1714410">this answer</a> for more details.</p>
| 528
|
pytorch
|
Understanding PyTorch einsum
|
https://stackoverflow.com/questions/55894693/understanding-pytorch-einsum
|
<p>I'm familiar with how <a href="https://en.wikipedia.org/wiki/Einstein_notation" rel="noreferrer"><strong><code>einsum</code></strong></a> works in NumPy. A similar functionality is also offered by PyTorch: <a href="https://pytorch.org/docs/stable/torch.html#torch.einsum" rel="noreferrer"><strong>torch.einsum()</strong></a>. What are the similarities and differences, either in terms of functionality or performance? The information available at PyTorch documentation is rather scanty and doesn't provide any insights regarding this.</p>
|
<p>Since the description of einsum is skimpy in torch documentation, I decided to write this post to document, compare and contrast how <a href="https://pytorch.org/docs/stable/_modules/torch/functional.html#einsum" rel="noreferrer"><code>torch.einsum()</code></a> behaves when compared to <a href="https://numpy.org/devdocs/reference/generated/numpy.einsum.html" rel="noreferrer"><code>numpy.einsum()</code></a>.</p>
<p><strong>Differences:</strong> </p>
<ul>
<li><p>NumPy allows both lowercase and uppercase letters <code>[a-zA-Z]</code> for the "<em>subscript string</em>" whereas PyTorch allows only the lowercase letters <code>[a-z]</code>.</p></li>
<li><p>NumPy accepts nd-arrays, plain Python lists (or tuples), lists of lists (or tuples of tuples, lists of tuples, tuples of lists) or even PyTorch tensors as <em>operands</em> (i.e. inputs). This is because the <em>operands</em> only have to be <em>array_like</em>, not strictly NumPy nd-arrays. On the contrary, PyTorch expects the <em>operands</em> (i.e. inputs) to be strictly PyTorch tensors. It will throw a <code>TypeError</code> if you pass either plain Python lists/tuples (or their combinations) or NumPy nd-arrays.</p></li>
<li><p>NumPy supports a lot of keyword arguments (e.g. <code>optimize</code>) in addition to the operands, while PyTorch doesn't offer such flexibility yet.</p></li>
</ul>
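<p>The second difference is easy to check directly (a sketch; the exact exception type may vary between versions, so both are caught here):</p>

```python
import numpy as np
import torch

# NumPy happily takes plain Python lists as operands (array_like is enough)
print(np.einsum('i, i ->', [1, 2, 3], [4, 5, 6]))   # 32

# PyTorch insists on tensors; plain lists are rejected
try:
    torch.einsum('i, i ->', [1, 2, 3], [4, 5, 6])
except (TypeError, RuntimeError) as e:
    print('rejected:', type(e).__name__)

print(torch.einsum('i, i ->', torch.tensor([1, 2, 3]), torch.tensor([4, 5, 6])))
```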
<p>Here are the implementations of some examples both in PyTorch and NumPy:</p>
<pre><code># input tensors to work with
In [16]: vec
Out[16]: tensor([0, 1, 2, 3])
In [17]: aten
Out[17]:
tensor([[11, 12, 13, 14],
[21, 22, 23, 24],
[31, 32, 33, 34],
[41, 42, 43, 44]])
In [18]: bten
Out[18]:
tensor([[1, 1, 1, 1],
[2, 2, 2, 2],
[3, 3, 3, 3],
[4, 4, 4, 4]])
</code></pre>
<hr>
<p><strong>1) Matrix multiplication</strong><br>
PyTorch: <code>torch.matmul(aten, bten)</code> ; <code>aten.mm(bten)</code><br>
NumPy : <code>np.einsum("ij, jk -> ik", arr1, arr2)</code> </p>
<pre><code>In [19]: torch.einsum('ij, jk -> ik', aten, bten)
Out[19]:
tensor([[130, 130, 130, 130],
[230, 230, 230, 230],
[330, 330, 330, 330],
[430, 430, 430, 430]])
</code></pre>
<p><strong>2) Extract elements along the main-diagonal</strong><br>
PyTorch: <code>torch.diag(aten)</code><br>
NumPy : <code>np.einsum("ii -> i", arr)</code> </p>
<pre><code>In [28]: torch.einsum('ii -> i', aten)
Out[28]: tensor([11, 22, 33, 44])
</code></pre>
<p><strong>3) Hadamard product (i.e. element-wise product of two tensors)</strong><br>
PyTorch: <code>aten * bten</code><br>
NumPy : <code>np.einsum("ij, ij -> ij", arr1, arr2)</code> </p>
<pre><code>In [34]: torch.einsum('ij, ij -> ij', aten, bten)
Out[34]:
tensor([[ 11, 12, 13, 14],
[ 42, 44, 46, 48],
[ 93, 96, 99, 102],
[164, 168, 172, 176]])
</code></pre>
<p><strong>4) Element-wise squaring</strong><br>
PyTorch: <code>aten ** 2</code><br>
NumPy : <code>np.einsum("ij, ij -> ij", arr, arr)</code> </p>
<pre><code>In [37]: torch.einsum('ij, ij -> ij', aten, aten)
Out[37]:
tensor([[ 121, 144, 169, 196],
[ 441, 484, 529, 576],
[ 961, 1024, 1089, 1156],
[1681, 1764, 1849, 1936]])
</code></pre>
<p><strong><em>General</em></strong>: Element-wise <code>nth</code> power can be implemented by repeating the subscript string and tensor <code>n</code> times.
For e.g., computing element-wise 4th power of a tensor can be done using:</p>
<pre><code># NumPy: np.einsum('ij, ij, ij, ij -> ij', arr, arr, arr, arr)
In [38]: torch.einsum('ij, ij, ij, ij -> ij', aten, aten, aten, aten)
Out[38]:
tensor([[ 14641, 20736, 28561, 38416],
[ 194481, 234256, 279841, 331776],
[ 923521, 1048576, 1185921, 1336336],
[2825761, 3111696, 3418801, 3748096]])
</code></pre>
<p><strong>5) Trace (i.e. sum of main-diagonal elements)</strong><br>
PyTorch: <code>torch.trace(aten)</code><br>
NumPy einsum: <code>np.einsum("ii -> ", arr)</code> </p>
<pre><code>In [44]: torch.einsum('ii -> ', aten)
Out[44]: tensor(110)
</code></pre>
<p><strong>6) Matrix transpose</strong><br>
PyTorch: <code>torch.transpose(aten, 1, 0)</code><br>
NumPy einsum: <code>np.einsum("ij -> ji", arr)</code> </p>
<pre><code>In [58]: torch.einsum('ij -> ji', aten)
Out[58]:
tensor([[11, 21, 31, 41],
[12, 22, 32, 42],
[13, 23, 33, 43],
[14, 24, 34, 44]])
</code></pre>
<p><strong>7) Outer Product (of vectors)</strong><br>
PyTorch: <code>torch.ger(vec, vec)</code><br>
NumPy einsum: <code>np.einsum("i, j -> ij", vec, vec)</code> </p>
<pre><code>In [73]: torch.einsum('i, j -> ij', vec, vec)
Out[73]:
tensor([[0, 0, 0, 0],
[0, 1, 2, 3],
[0, 2, 4, 6],
[0, 3, 6, 9]])
</code></pre>
<p><strong>8) Inner Product (of vectors)</strong><br>
PyTorch: <code>torch.dot(vec1, vec2)</code><br>
NumPy einsum: <code>np.einsum("i, i -> ", vec1, vec2)</code> </p>
<pre><code>In [76]: torch.einsum('i, i -> ', vec, vec)
Out[76]: tensor(14)
</code></pre>
<p><strong>9) Sum along axis 0</strong><br>
PyTorch: <code>torch.sum(aten, 0)</code><br>
NumPy einsum: <code>np.einsum("ij -> j", arr)</code> </p>
<pre><code>In [85]: torch.einsum('ij -> j', aten)
Out[85]: tensor([104, 108, 112, 116])
</code></pre>
<p><strong>10) Sum along axis 1</strong><br>
PyTorch: <code>torch.sum(aten, 1)</code><br>
NumPy einsum: <code>np.einsum("ij -> i", arr)</code> </p>
<pre><code>In [86]: torch.einsum('ij -> i', aten)
Out[86]: tensor([ 50, 90, 130, 170])
</code></pre>
<p><strong>11) Batch Matrix Multiplication</strong><br>
PyTorch: <code>torch.bmm(batch_tensor_1, batch_tensor_2)</code><br>
NumPy : <code>np.einsum("bij, bjk -> bik", batch_tensor_1, batch_tensor_2)</code> </p>
<pre><code># input batch tensors to work with
In [13]: batch_tensor_1 = torch.arange(2 * 4 * 3).reshape(2, 4, 3)
In [14]: batch_tensor_2 = torch.arange(2 * 3 * 4).reshape(2, 3, 4)
In [15]: torch.bmm(batch_tensor_1, batch_tensor_2)
Out[15]:
tensor([[[ 20, 23, 26, 29],
[ 56, 68, 80, 92],
[ 92, 113, 134, 155],
[ 128, 158, 188, 218]],
[[ 632, 671, 710, 749],
[ 776, 824, 872, 920],
[ 920, 977, 1034, 1091],
[1064, 1130, 1196, 1262]]])
# sanity check with the shapes
In [16]: torch.bmm(batch_tensor_1, batch_tensor_2).shape
Out[16]: torch.Size([2, 4, 4])
# batch matrix multiply using einsum
In [17]: torch.einsum("bij, bjk -> bik", batch_tensor_1, batch_tensor_2)
Out[17]:
tensor([[[ 20, 23, 26, 29],
[ 56, 68, 80, 92],
[ 92, 113, 134, 155],
[ 128, 158, 188, 218]],
[[ 632, 671, 710, 749],
[ 776, 824, 872, 920],
[ 920, 977, 1034, 1091],
[1064, 1130, 1196, 1262]]])
# sanity check with the shapes
In [18]: torch.einsum("bij, bjk -> bik", batch_tensor_1, batch_tensor_2).shape
</code></pre>
<p><strong>12) Sum along axis 2</strong><br>
PyTorch: <code>torch.sum(batch_ten, 2)</code><br>
NumPy einsum: <code>np.einsum("ijk -> ij", arr3D)</code> </p>
<pre><code>In [99]: torch.einsum("ijk -> ij", batch_ten)
Out[99]:
tensor([[ 50, 90, 130, 170],
[ 4, 8, 12, 16]])
</code></pre>
<p><strong>13) Sum all the elements in an nD tensor</strong><br>
PyTorch: <code>torch.sum(batch_ten)</code><br>
NumPy einsum: <code>np.einsum("ijk -> ", arr3D)</code> </p>
<pre><code>In [101]: torch.einsum("ijk -> ", batch_ten)
Out[101]: tensor(480)
</code></pre>
<p><strong>14) Sum over multiple axes (i.e. marginalization)</strong><br>
PyTorch: <code>torch.sum(arr, dim=(dim0, dim1, dim2, dim3, dim4, dim6, dim7))</code><br>
NumPy: <code>np.einsum("ijklmnop -> n", nDarr)</code> </p>
<pre><code># 8D tensor
In [103]: nDten = torch.randn((3,5,4,6,8,2,7,9))
In [104]: nDten.shape
Out[104]: torch.Size([3, 5, 4, 6, 8, 2, 7, 9])
# marginalize out dimension 5 (i.e. "n" here)
In [111]: esum = torch.einsum("ijklmnop -> n", nDten)
In [112]: esum
Out[112]: tensor([ 98.6921, -206.0575])
# marginalize out axis 5 (i.e. sum over rest of the axes)
In [113]: tsum = torch.sum(nDten, dim=(0, 1, 2, 3, 4, 6, 7))
In [115]: torch.allclose(tsum, esum)
Out[115]: True
</code></pre>
<p><strong>15) Double Dot Products / <a href="https://en.wikipedia.org/wiki/Frobenius_inner_product" rel="noreferrer">Frobenius inner product</a> (same as: torch.sum(hadamard-product) cf. 3)</strong><br>
PyTorch: <code>torch.sum(aten * bten)</code><br>
NumPy : <code>np.einsum("ij, ij -> ", arr1, arr2)</code> </p>
<pre><code>In [120]: torch.einsum("ij, ij -> ", aten, bten)
Out[120]: tensor(1300)
</code></pre>
| 529
|
pytorch
|
How do I convert a Pandas dataframe to a PyTorch tensor?
|
https://stackoverflow.com/questions/50307707/how-do-i-convert-a-pandas-dataframe-to-a-pytorch-tensor
|
<p>How do I train a simple neural network with PyTorch on a pandas dataframe <code>df</code>?</p>
<p>The column <code>df["Target"]</code> is the target (e.g. labels) of the network. This doesn't work:</p>
<pre><code>import pandas as pd
import torch.utils.data as data_utils
target = pd.DataFrame(df['Target'])
train = data_utils.TensorDataset(df, target)
train_loader = data_utils.DataLoader(train, batch_size=10, shuffle=True)
</code></pre>
|
<p>I'm referring to the question in the title as you haven't really specified anything else in the text, so just converting the DataFrame into a PyTorch tensor. </p>
<p>Without information about your data, I'm just taking float values as example targets here.</p>
<p><strong>Convert Pandas dataframe to PyTorch tensor?</strong></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import torch
import random
# creating dummy targets (float values)
targets_data = [random.random() for i in range(10)]
# creating DataFrame from targets_data
targets_df = pd.DataFrame(data=targets_data)
targets_df.columns = ['targets']
# creating tensor from targets_df
torch_tensor = torch.tensor(targets_df['targets'].values)
# printing out result
print(torch_tensor)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>tensor([ 0.5827, 0.5881, 0.1543, 0.6815, 0.9400, 0.8683, 0.4289,
0.5940, 0.6438, 0.7514], dtype=torch.float64)
</code></pre>
<p><em>Tested with Pytorch 0.4.0.</em></p>
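<p>Since the question body actually feeds the whole DataFrame into <code>TensorDataset</code>, here is one way to make that part work (a sketch with made-up column names; <code>TensorDataset</code> wants tensors, not DataFrames):</p>

```python
import pandas as pd
import torch
import torch.utils.data as data_utils

# toy DataFrame with hypothetical feature columns and the 'Target' column from the question
df = pd.DataFrame({'A': [1.0, 2.0, 3.0, 4.0],
                   'B': [10.0, 20.0, 30.0, 40.0],
                   'Target': [0.0, 1.0, 0.0, 1.0]})

# go through the underlying numpy arrays to build tensors
features = torch.tensor(df.drop(columns=['Target']).values, dtype=torch.float32)
target = torch.tensor(df['Target'].values, dtype=torch.float32)

train = data_utils.TensorDataset(features, target)
train_loader = data_utils.DataLoader(train, batch_size=2, shuffle=True)

for xb, yb in train_loader:
    print(xb.shape, yb.shape)  # torch.Size([2, 2]) torch.Size([2])
```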
<p>I hope this helps, if you have any further questions - just ask. :)</p>
| 530
|
pytorch
|
Differences in SciKit Learn, Keras, or Pytorch
|
https://stackoverflow.com/questions/54527439/differences-in-scikit-learn-keras-or-pytorch
|
<p>Are these libraries fairly interchangeable?</p>
<p>Looking here, <a href="https://stackshare.io/stackups/keras-vs-pytorch-vs-scikit-learn" rel="noreferrer">https://stackshare.io/stackups/keras-vs-pytorch-vs-scikit-learn</a>, it seems the major difference is the underlying framework (at least for PyTorch).</p>
|
<p>Yes, there is a major difference.</p>
<p>SciKit Learn is a general machine learning library, built on top of NumPy. It features a lot of machine learning algorithms such as support vector machines and random forests, as well as a lot of utilities for general pre- and postprocessing of data. It is not a neural network framework.</p>
<p>PyTorch is a deep learning framework, consisting of</p>
<ol>
<li>A vectorized math library similar to NumPy, but with GPU support and a lot of neural network related operations (such as softmax or various kinds of activations)</li>
<li>Autograd - an algorithm which can automatically calculate gradients of your functions, defined in terms of the basic operations</li>
<li>Gradient-based optimization routines for large scale optimization, dedicated to neural network optimization</li>
<li>Neural-network related utility functions</li>
</ol>
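<p>Item 2 (autograd) is the defining feature here; a minimal sketch of what it does:</p>

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x    # y = x^2 + 2x
y.backward()          # autograd computes dy/dx = 2x + 2 from the recorded graph
print(x.grad)         # tensor(8.)
```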
<p>Keras is a higher-level deep learning framework, which abstracts many details away, making code simpler and more concise than in PyTorch or TensorFlow, at the cost of limited hackability. It abstracts away the computation backend, which can be TensorFlow, Theano or CNTK. It does not support a PyTorch backend, but that's not something unfathomable - you can consider it a simplified and streamlined subset of the above.</p>
<p>In short, if you are going with "classic", non-neural algorithms, neither PyTorch nor Keras will be useful for you. If you're doing deep learning, scikit-learn may still be useful for its utility part; aside from it you will need the actual deep learning framework, where you can choose between Keras and PyTorch but you're unlikely to use both at the same time. This is very subjective, but in my view, if you're working on a novel algorithm, you're more likely to go with PyTorch (or TensorFlow or some other lower-level framework) for flexibility. If you're adapting a known and tested algorithm to a new problem setting, you may want to go with Keras for its greater simplicity and lower entry level.</p>
| 531
|
pytorch
|
How can l uninstall PyTorch with Anaconda?
|
https://stackoverflow.com/questions/43664444/how-can-l-uninstall-pytorch-with-anaconda
|
<p>I installed PyTorch with:</p>
<pre><code>conda install pytorch torchvision cuda80 -c soumith
</code></pre>
<p>How do I uninstall and remove all PyTorch dependencies?</p>
|
<p>From the <a href="https://conda.io/docs/commands/conda-uninstall.html" rel="noreferrer">anaconda docs</a>, you can uninstall with <code>conda uninstall</code></p>
<p>Try</p>
<pre><code>conda uninstall pytorch torchvision cuda80 -c soumith
</code></pre>
<p>Alternatively, the <a href="https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md" rel="noreferrer">pytorch docs</a> suggest </p>
<pre><code>conda uninstall pytorch
pip uninstall torch
pip uninstall torch # run this command twice
</code></pre>
| 532
|
pytorch
|
PyTorch memory model: "torch.from_numpy()" vs "torch.Tensor()"
|
https://stackoverflow.com/questions/48482787/pytorch-memory-model-torch-from-numpy-vs-torch-tensor
|
<p>I'm trying to have an in-depth understanding of how PyTorch Tensor memory model works.</p>
<pre><code># input numpy array
In [91]: arr = np.arange(10, dtype=np.float32).reshape(5, 2)
# input tensors in two different ways
In [92]: t1, t2 = torch.Tensor(arr), torch.from_numpy(arr)
# their types
In [93]: type(arr), type(t1), type(t2)
Out[93]: (numpy.ndarray, torch.FloatTensor, torch.FloatTensor)
# ndarray
In [94]: arr
Out[94]:
array([[ 0., 1.],
[ 2., 3.],
[ 4., 5.],
[ 6., 7.],
[ 8., 9.]], dtype=float32)
</code></pre>
<hr>
<p>I know that PyTorch tensors <em>share the memory buffer</em> of NumPy ndarrays. Thus, changing one will be reflected in the other. So, here I'm slicing and updating some values in the Tensor <code>t2</code></p>
<pre><code>In [98]: t2[:, 1] = 23.0
</code></pre>
<p>And as expected, it's updated in <code>t2</code> and <code>arr</code> since they share the same memory buffer.</p>
<pre><code>In [99]: t2
Out[99]:
0 23
2 23
4 23
6 23
8 23
[torch.FloatTensor of size 5x2]
In [101]: arr
Out[101]:
array([[ 0., 23.],
[ 2., 23.],
[ 4., 23.],
[ 6., 23.],
[ 8., 23.]], dtype=float32)
</code></pre>
<p>But, <strong><code>t1</code> is also updated</strong>. Remember that <code>t1</code> was constructed using <code>torch.Tensor()</code> whereas <code>t2</code> was constructed using <code>torch.from_numpy()</code></p>
<pre><code>In [100]: t1
Out[100]:
0 23
2 23
4 23
6 23
8 23
[torch.FloatTensor of size 5x2]
</code></pre>
<p>So, no matter whether we use <a href="http://pytorch.org/docs/master/torch.html#torch.from_numpy" rel="noreferrer"><code>torch.from_numpy()</code></a> or <a href="http://pytorch.org/docs/master/tensors.html#torch-tensor" rel="noreferrer"><code>torch.Tensor()</code></a> to construct a tensor from an ndarray, <strong>all</strong> such tensors and ndarrays share the same memory buffer.</p>
<p>Based on this understanding, my question is why does a dedicated function <a href="http://pytorch.org/docs/master/torch.html#torch.from_numpy" rel="noreferrer"><code>torch.from_numpy()</code></a> exists when simply <a href="http://pytorch.org/docs/master/tensors.html#torch-tensor" rel="noreferrer"><code>torch.Tensor()</code></a> can do the job?</p>
<p>I looked at the PyTorch documentation but it doesn't mention anything about this? Any ideas/suggestions?</p>
|
<p><code>from_numpy()</code> automatically inherits the input array's <code>dtype</code>. On the other hand, <code>torch.Tensor</code> is an alias for <code>torch.FloatTensor</code>.</p>
<p>Therefore, if you pass an <code>int64</code> array to <code>torch.Tensor</code>, the output tensor is a float tensor and they wouldn't share the storage. <code>torch.from_numpy</code> gives you <code>torch.LongTensor</code> as expected.</p>
<pre><code>a = np.arange(10)
ft = torch.Tensor(a) # same as torch.FloatTensor
it = torch.from_numpy(a)
a.dtype # == dtype('int64')
ft.dtype # == torch.float32
it.dtype # == torch.int64
</code></pre>
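<p>A quick way to see the storage consequence of this dtype difference (sketch):</p>

```python
import numpy as np
import torch

a = np.arange(5)          # int64 ndarray
ft = torch.Tensor(a)      # cast to float32 -> new storage (a copy)
it = torch.from_numpy(a)  # keeps int64 -> shares a's buffer

a[0] = 100                # mutate the ndarray in place
print(ft[0])              # tensor(0.)   -- ft owns a copy, unchanged
print(it[0])              # tensor(100)  -- it sees the change
```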
| 533
|