spaCy
Import spaCy error
https://stackoverflow.com/questions/46887236/import-spacy-error
<p>When I try to import <a href="https://spacy.io/" rel="nofollow noreferrer">spaCy</a> (for the first time), I get the following error:</p> <pre><code>&gt;&gt;&gt;import spacy Traceback (most recent call last): File "&lt;pyshell#0&gt;", line 1, in &lt;module&gt; import spacy File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/spacy/__init__.py", line 10, in &lt;module&gt; from . import en, de, zh, es, it, hu, fr, pt, nl, sv, fi, bn, he, nb, ja File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/spacy/en/__init__.py", line 4, in &lt;module&gt; from ..language import Language File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/spacy/language.py", line 14, in &lt;module&gt; from .pipeline import DependencyParser, EntityRecognizer File "spacy/pipeline.pyx", line 1, in init spacy.pipeline (spacy/pipeline.cpp:16536) # coding: utf8 File ".env/lib/python2.7/site-packages/thinc/extra/search.pxd", line 72, in init spacy.syntax.beam_parser (spacy/syntax/beam_parser.cpp:20037) ValueError: thinc.extra.search.MaxViolation has the wrong size, try recompiling </code></pre> <p>which recommends that I should recompile. What should I do to fix this error?</p>
<p>This is likely a version conflict between <code>thinc</code> and <code>spacy</code>; it was fixed in thinc v6.9.0.</p> <p>I would suggest you uninstall thinc with <code>sudo pip uninstall thinc</code> (or <code>sudo pip3 uninstall thinc</code>) and reinstall thinc version 6.9.0 from here: <a href="https://pypi.python.org/pypi/thinc" rel="nofollow noreferrer">https://pypi.python.org/pypi/thinc</a></p>
934
spaCy
SpaCy can&#39;t find table(s) lexeme_norm for language &#39;en&#39; in spacy-lookups-data
https://stackoverflow.com/questions/66367447/spacy-cant-find-tables-lexeme-norm-for-language-en-in-spacy-lookups-data
<p>I am trying to train a text categorization pipe in SpaCy:</p> <pre class="lang-py prettyprint-override"><code>import spacy nlp = spacy.load(&quot;en_core_web_sm&quot;) nlp.add_pipe(&quot;textcat&quot;, last=True) other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'textcat'] with nlp.disable_pipes(*other_pipes): optimizer = nlp.begin_training() # training logic </code></pre> <p>However, every time I call <code>nlp.begin_training()</code>, I get the error</p> <pre><code>ValueError: [E955] Can't find table(s) lexeme_norm for language 'en' in spacy-lookups-data. Make sure you have the package installed or provide your own lookup tables if no default lookups are available for your language. </code></pre> <p>Running <code>python3 -m spacy validate</code> returns</p> <pre><code>✔ Loaded compatibility table ================= Installed pipeline packages (spaCy v3.0.3) ================= ℹ spaCy installation: /xxx/xxx/xxx/env/lib/python3.8/site-packages/spacy NAME SPACY VERSION en_core_web_lg &gt;=3.0.0,&lt;3.1.0 3.0.0 ✔ en_core_web_sm &gt;=3.0.0,&lt;3.1.0 3.0.0 ✔ </code></pre> <p>Furthermore, I have tried installing <code>spacy-lookups-data</code> without success.</p> <p>How can I resolve this error?</p>
<p>You can't call <code>nlp.begin_training()</code> on a pretrained model. If you want to train a new model from scratch, use <code>nlp = spacy.blank('en')</code> instead of <code>nlp = spacy.load(&quot;en_core_web_sm&quot;)</code>.</p> <p>If, however, you want to continue training an existing model, call <code>optimizer = nlp.create_optimizer()</code> instead of <code>begin_training()</code>.</p>
935
spaCy
Failed building wheel for spacy
https://stackoverflow.com/questions/43370851/failed-building-wheel-for-spacy
<p>I'm trying to install spaCy by running <code>pip install spacy</code> for Python version <strong>3.6.1</strong>, but I keep getting errors like the ones below. How can I get rid of this issue? Previously I had a "cl.exe not found" error; after that I added the Visual Studio path where cl.exe exists to my environment variables.</p> <pre><code> Failed building wheel for spacy Running setup.py clean for spacy Running setup.py bdist_wheel for murmurhash ... error Complete output from command c:\users\sh00428701\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\SH0042~1\\AppData\\Local\\Temp\\pip-build-joi6voav\\murmurhash\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\SH0042~1\AppData\Local\Temp\tmpa6tzdkovpip-wheel- --python-tag cp36: running bdist_wheel running build running build_py ---------------------------------------- Failed building wheel for murmurhash Running setup.py clean for murmurhash Running setup.py bdist_wheel for cymem ... error Complete output from command c:\users\sh00428701\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\SH0042~1\\AppData\\Local\\Temp\\pip-build-joi6voav\\cymem\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\SH0042~1\AppData\Local\Temp\tmpz7p6hkiwpip-wheel- --python-tag cp36: ---------------------------------------- Failed building wheel for cymem Running setup.py clean for cymem Running setup.py bdist_wheel for preshed ... 
error Complete output from command c:\users\sh00428701\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\SH0042~1\\AppData\\Local\\Temp\\pip-build-joi6voav\\preshed\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\SH0042~1\AppData\Local\Temp\tmpwppgmyp9pip-wheel- --python-tag cp36: ---------------------------------------- Failed building wheel for preshed Running setup.py clean for preshed Running setup.py bdist_wheel for thinc ... error ---------------------------------------- Failed building wheel for thinc Running setup.py clean for thinc Running setup.py bdist_wheel for ujson ... error ---------------------------------------- Failed building wheel for ujson Running setup.py clean for ujson Running setup.py bdist_wheel for cytoolz ... error ---------------------------------------- Failed building wheel for cytoolz Running setup.py clean for cytoolz Failed to build spacy murmurhash cymem preshed thinc ujson cytoolz Installing collected packages: murmurhash, cymem, preshed, wrapt, tqdm, toolz, cytoolz, plac, pyreadline, dill, termcolor, pathlib, thinc, ujson, regex, spacy Running setup.py install for murmurhash ... 
error C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\sh00428701\appdata\local\programs\python\python36\include -IC:\Users\SH0042~1\AppData\Local\Temp\pip-build-joi6voav\murmurhash\murmurhash\include -Ic:\users\sh00428701\appdata\local\programs\python\python36\include -Ic:\users\sh00428701\appdata\local\programs\python\python36\include /EHsc /Tpmurmurhash/mrmr.cpp /Fobuild\temp.win-amd64-3.6\Release\murmurhash/mrmr.obj /Ox /EHsc mrmr.cpp c1xx: fatal error C1083: Cannot open source file: 'murmurhash/mrmr.cpp': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\bin\\cl.exe' failed with exit status 2 ---------------------------------------- Command "c:\users\sh00428701\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\SH0042~1\\AppData\\Local\\Temp\\pip-build-joi6voav\\murmurhash\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\SH0042~1\AppData\Local\Temp\pip-_j1cxej1-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\SH0042~1\AppData\Local\Temp\pip-build-joi6voav\murmurhash\ </code></pre>
<p>A couple of thoughts:</p> <ul> <li><s>Grab the various wheel files you need from <a href="http://www.lfd.uci.edu/%7Egohlke/pythonlibs/#spacy" rel="nofollow noreferrer">http://www.lfd.uci.edu/~gohlke/pythonlibs/#spacy</a> and install them with <code>pip install x.whl y.whl etc.</code></s></li> <li>Upgrade your C++ build tools to the 2017 version and try pip again</li> </ul> <p><strong>Update 2024</strong> (Gohlke's amazing resource having been retired):</p> <ul> <li>Find the appropriate version of Visual Studio Build Tools here: <a href="https://wiki.python.org/moin/WindowsCompilers" rel="nofollow noreferrer">https://wiki.python.org/moin/WindowsCompilers</a> (probably 2022)</li> <li>Download, install, and launch <code>Build Tools for VS 2022</code> <ul> <li>Currently, go to <a href="https://visualstudio.microsoft.com/downloads/" rel="nofollow noreferrer">https://visualstudio.microsoft.com/downloads/</a> and look under the <code>Tools for Visual Studio</code> dropdown.</li> </ul> </li> <li>Install the <code>Desktop development with C++</code> option</li> <li>Ensure that the optional tools include (at least) <code>MSVC v14...</code>, <code>Windows 10/11 SDK</code>, and <code>CMake...</code>.</li> </ul>
936
spaCy
Similarity in Spacy
https://stackoverflow.com/questions/53453559/similarity-in-spacy
<p>I am trying to understand how similarity in Spacy works. I tried using Melania Trump's <a href="http://time.com/4412008/republican-convention-melania-trump-2/" rel="noreferrer">speech</a> and Michelle Obama's <a href="https://www.npr.org/templates/story/story.php?storyId=93963863" rel="noreferrer">speech</a> to see how similar they were. </p> <p>This is my code. </p> <pre><code>import spacy nlp = spacy.load('en_core_web_lg') file1 = open("melania.txt").read().decode('ascii', 'ignore') file2 = open("michelle.txt").read().decode('ascii', 'ignore') doc1 = nlp(unicode(file1)) doc2 = nlp(unicode(file2)) print doc1.similarity(doc2) </code></pre> <p>I get the similarity score as 0.9951584208511974. This similarity score looks very high to me. Is this correct? Am I doing something wrong?</p>
<p>By default spaCy calculates cosine similarity. <a href="https://spacy.io/usage/vectors-similarity" rel="noreferrer">Similarity</a> is determined by comparing word vectors or word embeddings, multi-dimensional meaning representations of a word.</p> <p>Internally it <a href="https://github.com/explosion/spaCy/blob/master/spacy/tokens/token.pyx" rel="noreferrer">returns</a> <code>numpy.dot(self.vector, other.vector) / (self_norm * other_norm)</code>:</p> <pre><code>import spacy import numpy as np nlp = spacy.load('en_core_web_lg') text1 = 'How can I end violence?' text2 = 'What should I do to be a peaceful?' doc1 = nlp(text1) doc2 = nlp(text2) print("spaCy :", doc1.similarity(doc2)) print(np.dot(doc1.vector, doc2.vector) / (np.linalg.norm(doc1.vector) * np.linalg.norm(doc2.vector))) </code></pre> <p>Output:</p> <pre><code>spaCy : 0.916553147896471 0.9165532 </code></pre> <p>The vectors come from spaCy's <code>.vector</code> property; the <a href="https://spacy.io/models/en" rel="noreferrer">documentation</a> says spaCy's English models are trained with GloVe vectors.</p>
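The cosine formula spaCy uses can be checked without spaCy at all; here is a stdlib-only sketch of the same computation. It also hints at why the two long speeches score so high: `Doc.similarity` compares *averaged* word vectors, and the averages of two long documents tend to be very close.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity: dot(u, v) / (||u|| * ||v||), the same formula
    spaCy's Doc.similarity applies to document vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0  # degenerate case: an all-zero vector
    return dot / (norm_u * norm_v)

# Parallel vectors score ~1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```

Comparing token-level or sentence-level vectors instead of whole-document averages usually gives more discriminative scores.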
937
spaCy
Latest version of spacy-lefff incompatible with latest version of spacy
https://stackoverflow.com/questions/55309497/latest-version-of-spacy-lefff-incompatible-with-latest-version-of-spacy
<p>Installing spacy-lefff v0.3.3 makes it impossible to run a Python program with the spaCy module (v2.1.1).</p> <p>The root problem seems to be the modules thinc and cymem. I tried to uninstall thinc and cymem, but I don't know which versions of thinc and cymem to reinstall afterwards.</p> <p>There is no indication of the problem at <a href="https://github.com/sammous/spacy-lefff" rel="nofollow noreferrer">https://github.com/sammous/spacy-lefff</a></p> <p>There is an indication that "spacy-lefff requires spacy &lt;= v2.0.12".</p> <p>I tried to downgrade to spacy v2.0.12, but my code is still not working because of the thinc and cymem modules.</p> <p>Operating system: Win 10. Python version: 3.7-2. spaCy version used: 2.1.1. Lefff version used: 0.3.3.</p> <p>Thanks</p>
938
spaCy
Spacy-Transformers: Access GPT-2?
https://stackoverflow.com/questions/68946827/spacy-transformers-access-gpt-2
<p>I'm using Spacy-Transformers to build some NLP models.</p> <p>The <a href="https://spacy.io/universe/project/spacy-transformers#gatsby-noscript" rel="nofollow noreferrer">Spacy-Transformers docs</a> say:</p> <blockquote> <p><strong>spacy-transformers</strong></p> <p><em>spaCy pipelines for pretrained BERT, XLNet and GPT-2</em></p> </blockquote> <p>The sample code on that page shows:</p> <pre><code>import spacy nlp = spacy.load(&quot;en_core_web_trf&quot;) doc = nlp(&quot;Apple shares rose on the news. Apple pie is delicious.&quot;) </code></pre> <p>Based on what I've learned from <a href="https://www.youtube.com/watch?v=vyOgWhwUmec" rel="nofollow noreferrer">this video</a>,&quot;en_core_web_trf&quot; appears to be the <code>spacy.load()</code> package to use a BERT model. I've searched the <a href="https://spacy.io/universe/project/spacy-transformers#gatsby-noscript" rel="nofollow noreferrer">Spacy-Transformers docs</a> and haven't yet seen an equivalent package, to access GPT-2. Is there a specific <code>spacy.load()</code> package, to load in order to use a GPT-2 model?</p>
<p>The <code>en_core_web_trf</code> uses a specific Transformers model, but you can specify arbitrary ones using the <code>TransformerModel</code> wrapper class from <code>spacy-transformers</code>. See <a href="https://spacy.io/api/architectures#TransformerModel" rel="nofollow noreferrer">the docs</a> for that. An example config:</p> <pre><code>[model] @architectures = &quot;spacy-transformers.TransformerModel.v1&quot; name = &quot;roberta-base&quot; # this can be the name of any hub model tokenizer_config = {&quot;use_fast&quot;: true} </code></pre>
939
spaCy
ModuleNotFoundError: No module named &#39;spacy&#39; even though spacy and python are in the same path
https://stackoverflow.com/questions/69716018/modulenotfounderror-no-module-named-spacy-even-though-spacy-and-python-are-in
<p>I am following the spaCy installation guidelines inside my AWS SageMaker notebook</p> <pre><code>pip install -U pip setuptools wheel pip install -U spacy python -m spacy download en_core_web_sm </code></pre> <p>When I do <code>import spacy</code> I get the error</p> <pre><code>ModuleNotFoundError: No module named 'spacy' </code></pre> <p>I made sure my python and spacy paths are the same.</p> <p>What am I missing? <a href="https://i.sstatic.net/pDbOF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pDbOF.png" alt="enter image description here" /></a></p>
<p>If you are installing spacy from inside the jupyter notebook, use the <code>%pip</code> syntax. That will use the pip associated with the kernel in use.</p> <pre><code>%pip install spacy </code></pre> <p>If installing from the command line, use</p> <pre><code>python -m pip install spacy </code></pre> <p>(Replace <code>python</code> with the path to the Python used in the notebook kernel.)</p>
940
spaCy
Using Arabert model with SpaCy
https://stackoverflow.com/questions/74062240/using-arabert-model-with-spacy
<p>SpaCy doesn't support the Arabic language, but Can I use SpaCy with the pretrained Arabert model?</p> <p>Is it possible to modify this code so it can accept bert-large-arabertv02 instead of en_core_web_lg?</p> <pre><code>!python -m spacy download en_core_web_lg import spacy nlp = spacy.load(&quot;en_core_web_lg&quot;) </code></pre> <p>Here How we can call AraBertV.02</p> <pre><code>from arabert.preprocess import ArabertPreprocessor from transformers import AutoTokenizer, AutoModelForMaskedLM model_name=&quot;aubmindlab/bert-large-arabertv02&quot; arabert_prep = ArabertPreprocessor(model_name=model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForMaskedLM.from_pretrained(model_name) </code></pre>
<p>spaCy actually does support Arabic, though only at an alpha level, which basically just means tokenization support (see <a href="https://spacy.io/usage/models#languages" rel="noreferrer">here</a>). That's enough for loading external models or training your own, though, so in this case you should be able to load this like any HuggingFace model - see <a href="https://github.com/explosion/spaCy/discussions/10768" rel="noreferrer">this FAQ</a>.</p> <p>In this case this would look like:</p> <pre><code>import spacy nlp = spacy.blank(&quot;ar&quot;) # empty Arabic pipeline # create the config with the name of your model # values omitted will get default values config = { &quot;model&quot;: { &quot;@architectures&quot;: &quot;spacy-transformers.TransformerModel.v3&quot;, &quot;name&quot;: &quot;aubmindlab/bert-large-arabertv02&quot; } } nlp.add_pipe(&quot;transformer&quot;, config=config) nlp.initialize() # XXX don't forget this step! doc = nlp(&quot;فريك الذرة لذيذة&quot;) print(doc._.trf_data) # all the Transformer output is stored here </code></pre> <p>I don't speak Arabic, so I can't check the output thoroughly, but that code ran and produced an embedding for me.</p>
941
spaCy
Collocations with spaCy
https://stackoverflow.com/questions/39258476/collocations-with-spacy
<p>I've been using NLTK for finding collocations, or n-grams, and have recently discovered the spaCy module for NLP. I've only just begun familiarizing myself with it and have, thus far, seen little mention for supported collocation functions.</p> <p>Can spaCy be used to find collocations directly?</p> <p>I have read through the <a href="https://spacy.io/docs/" rel="noreferrer">documentation</a>, but haven't seen mention.</p>
<p>Collocation detection can also be based on dependency parsing, but spaCy does not have built-in support for it. You can use spaCy as part of an approach, but not directly.</p> <p>You might also consider gensim's phrase detection: <a href="https://radimrehurek.com/gensim/models/phrases.html" rel="noreferrer">https://radimrehurek.com/gensim/models/phrases.html</a></p> <p>I hope that helps.</p>
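Since spaCy has no collocation finder of its own, the classic approach (the same idea behind NLTK's PMI-based bigram measures and gensim's `Phrases`) can be sketched in a few stdlib lines over any token stream, for example tokens produced by spaCy's tokenizer:

```python
import math
from collections import Counter

def pmi_bigrams(tokens, min_count=2):
    """Score adjacent word pairs by pointwise mutual information:
    PMI(a, b) = log2( P(a, b) / (P(a) * P(b)) ).
    Pairs that co-occur more often than chance score higher."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (a, b), count in bigrams.items():
        if count < min_count:
            continue  # skip rare pairs, which inflate PMI
        p_ab = count / (n - 1)
        p_a, p_b = unigrams[a] / n, unigrams[b] / n
        scores[(a, b)] = math.log2(p_ab / (p_a * p_b))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

tokens = "new york is big and new york is busy".split()
print(pmi_bigrams(tokens))
```

With spaCy you would feed in `[t.text for t in doc]` (or lemmas), and you could pre-filter by `t.pos_` to keep only, say, adjective-noun pairs.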
942
spaCy
spaCy and Docker: can&#39;t &quot;dockerize&quot; Flask app that uses spaCy modules
https://stackoverflow.com/questions/67394064/spacy-and-docker-cant-dockerize-flask-app-that-uses-spacy-modules
<p>I'm trying to install SpaCy on my docker image but it always fails. First it was in my requirements.txt file, but it failed right away. Later on I ran a separate RUN instruction for pip to install it in isolation, but it also failed.</p> <p>Here's my Dockerfile content:</p> <pre><code>FROM python:3.6-alpine WORKDIR /sentenceSimilarity ADD . /sentenceSimilarity RUN pip install -r requirements.txt RUN pip install -U pip setuptools wheel RUN pip install -U spacy RUN python -m spacy download en_core_web_sm CMD [&quot;python&quot;, &quot;app.py&quot;] </code></pre> <p>I ended up deleting everything from my requirements.txt file except Flask, and the build still fails on the line where spaCy is installed; the only difference now is that it takes a long time to fail. See screenshot:</p> <p><a href="https://i.sstatic.net/1gr4U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1gr4U.png" alt="enter image description here" /></a></p> <p>Observing a bit, I think pip iterates over versions from newest to oldest to check which might suit, but in the end none of them gets installed.</p> <p>I've seen others with similar issues with SpaCy, but no apparent solution.</p> <p>Can someone suggest an approach I could use to fix this? Thanks in advance.</p>
<p>The <a href="https://github.com/explosion/spaCy/issues/6158" rel="nofollow noreferrer">spacy installation extremely slow in docker</a> Github issue explains the problem with the Alpine Python docker (I see you have <code>FROM python:3.6-alpine</code> in your dockerfile):</p> <blockquote> <p>If you're using an Alpine Linux container, you should probably not: it's a bad choice for Python because none of the PyPi wheels will work, so you'll have to rebuild everything. If you're determined to use Alpine Linux you should host yourself a wheelhouse so that you don't have to rebuild the wheels all the time.</p> </blockquote> <p>So, you need to use another image, e.g. a slim docker image, as <a href="https://github.com/explosion/spaCy/issues/6158#issuecomment-810677465" rel="nofollow noreferrer">recommended by @mkkeffeler</a>.</p>
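A minimal sketch of the question's Dockerfile moved to a slim base image. The app layout and package names are taken from the question; this is an untested sketch, not a verified build:

```dockerfile
# python:3.6-slim is Debian-based, so PyPI's prebuilt manylinux wheels
# for spaCy and its dependencies install directly; Alpine's musl libc
# forces pip to compile everything from source instead.
FROM python:3.6-slim
WORKDIR /sentenceSimilarity
COPY . /sentenceSimilarity
RUN pip install -U pip setuptools wheel \
 && pip install -r requirements.txt \
 && pip install -U spacy \
 && python -m spacy download en_core_web_sm
CMD ["python", "app.py"]
```

Chaining the pip steps into one RUN also keeps the image to a single layer for the install, which reduces image size.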
943
spaCy
Spacy Verb highlight?
https://stackoverflow.com/questions/51739273/spacy-verb-highlight
<p>Spacy's displaCy renders entity highlights in HTML:</p> <pre><code>import spacy from spacy import displacy nlp = spacy.load('en') doc1 = nlp(u'This is a google sentence.') doc2 = nlp(u'This is another sentence.') html = displacy.render([doc1, doc2], style='ent', page=True) </code></pre> <p>How can I highlight all verbs in a given text?</p> <pre><code>from __future__ import unicode_literals import spacy,en_core_web_sm import textacy nlp = en_core_web_sm.load() sentence = 'The cat jumped quickly over the wall.' doc = textacy.Doc(sentence, lang='en_core_web_sm') for token in doc: if (token.pos_ == 'VERB'): print(token.text) </code></pre> <p>Here the output <strong>jumped</strong> should be highlighted in green. How?</p> <p>Something similar to this:</p> <p><a href="http://www.expresso-app.org/" rel="nofollow noreferrer">http://www.expresso-app.org/</a></p>
<p>You can use the displaCy ent interface to highlight custom entities by setting <code>manual=True</code> on either <code>render()</code> or <code>serve()</code>. Here is a simple example:</p> <pre><code>sentence = [{'text': 'The cat jumped quickly over the wall.', 'ents': [{'start': 8, 'end':14, 'label': 'VERB'}], 'title': None}] displacy.render(sentence, style='ent', manual=True) </code></pre> <p><a href="https://i.sstatic.net/JZSL3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JZSL3.png" alt="enter image description here"></a></p> <p>Additionally, to obtain the data in the required format, you could do a dependency parse and use <a href="https://spacy.io/api/phrasematcher" rel="nofollow noreferrer">PhraseMatcher</a> over it to get the <code>start</code> and <code>end</code> values.</p>
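The `start`/`end` values the manual ent format expects are character offsets into the text. Given (word, POS) pairs, e.g. from spaCy's tagger, they can be computed with plain string arithmetic; here is a stdlib-only sketch (the POS tags below are hand-written stand-ins for tagger output):

```python
def verb_ents(text, tagged):
    """Build displaCy-style 'ent' dicts for every verb.
    `tagged` is a list of (word, pos) pairs in sentence order; each
    word's offset is found with str.find, resuming after the previous
    token so repeated words are handled correctly."""
    ents, cursor = [], 0
    for word, pos in tagged:
        start = text.find(word, cursor)
        cursor = start + len(word)
        if pos == "VERB":
            ents.append({"start": start, "end": cursor, "label": "VERB"})
    return ents

text = "The cat jumped quickly over the wall."
tagged = [("The", "DET"), ("cat", "NOUN"), ("jumped", "VERB"),
          ("quickly", "ADV"), ("over", "ADP"), ("the", "DET"),
          ("wall", "NOUN"), (".", "PUNCT")]
print(verb_ents(text, tagged))  # [{'start': 8, 'end': 14, 'label': 'VERB'}]
```

The result plugs straight into `displacy.render([{'text': text, 'ents': ents, 'title': None}], style='ent', manual=True)`. With a real pipeline you can skip the offset arithmetic entirely, since each spaCy token already carries `token.idx`.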
944
spaCy
Failed to load spaCy model &#39;en_core_web_sm&#39;
https://stackoverflow.com/questions/63683431/failed-to-load-spacy-model-en-core-web-sm
<p>Being a complete Python novice, I am trying to learn some spaCy basics. The problem is, I cannot load any models.</p> <p>I am using Microsoft Visual Studio Community (MVSC), Windows 7, Python 3.7.5, pip 19.2.3. spaCy has been installed via pip in the Python 3.7 (64-bit) environment.</p> <p>(1) when I run in MVSC:</p> <pre class="lang-py prettyprint-override"><code>import spacy nlp = spacy.load(&quot;en_core_web_sm&quot;) </code></pre> <p>I am getting: <code>“OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.”</code></p> <p>(2) When I run the same code in IDLE Python 3.7 (64-bit) I get the same error as in (1). Within the modules list, I can see spaCy is there. Via “help(SPACY)” it lists “PACKAGE CONTENTS”; there is no word like “model”; FILE is “c:\program files (x86)\microsoft visual studio\shared\python37_64\lib\site-packages\spacy_<em>init</em>_.py”</p> <p>(3) When I run in CMD with Admin privileges <code>python -m spacy download en_core_web_sm</code> I am getting: <code>“No module named spacy”</code>. When listing modules via python&gt;help()&gt;modules, I can see the list of available modules, but there is no module spaCy.</p> <p>(4) For further trials in MVSC, I created a new 64-bit Python environment, successfully installed spaCy via “install spacy (2.3.2)” =&gt; ran the above code, and got the same errors as above.</p>
<p>Have you checked the project's GitHub repo? There is a closed issue that seems closely related (apart from the spaCy version you're using):</p> <p><a href="https://github.com/explosion/spaCy/issues/4577" rel="noreferrer">https://github.com/explosion/spaCy/issues/4577</a></p> <blockquote> <p>When you run <code>python -m spacy download en_core_web_sm</code>, it will pretty much execute the same thing (pip install [link]), with pip running in a subprocess. The download also takes care of finding you the right version of the model and outputting helpful messages.</p> <p>However, if the pip executed with python3 -m pip install isn't the same as pip3 install, it may execute the wrong pip and install the model in a different environment. In most environments, this is not a problem – but if it is, installing the models directly via the link is a fine solution.</p> </blockquote>
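Both symptoms in the question ("No module named spacy" in CMD, missing model in the IDE) point at pip installing into a different interpreter than the one running the code. A quick stdlib check makes the mismatch visible:

```python
import sys
import sysconfig

# The interpreter actually executing this code: compare it with what
# `where python` (Windows) or `which python` resolves in your shell.
print(sys.executable)

# Where pip for *this* interpreter installs packages; spaCy and any
# downloaded model package must end up under a directory on sys.path.
print(sysconfig.get_paths()["purelib"])

# To install into exactly this interpreter, invoke pip through it:
#   <path printed above> -m pip install spacy
#   <path printed above> -m spacy download en_core_web_sm
```

If the printed interpreter path differs from the one your CMD session uses, that explains why the model "disappears": it was installed into the other environment.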
945
spaCy
Remove Spacy downloaded model
https://stackoverflow.com/questions/50344228/remove-spacy-downloaded-model
<p>After downloading and linking a spacy model (en large) by:</p> <pre><code>python -m spacy download en_core_web_lg </code></pre> <p>which is around 850 Mb of data.</p> <p>How can it find and delete the data (downloaded model) on my mac to free some space?</p> <pre><code>Spacy: 2.0.18 Python: 3.6.9 en_core_web_lg: 2.0.0 </code></pre>
<p>The <code>download</code> command installs the model via <code>pip</code>, placing the package in your <code>site-packages</code>. So the data should be inside your Python's <code>site-packages</code> directory; check somewhere like <code>/usr/lib/python3.6/site-packages/en_core_web_lg</code>.</p>
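Rather than guessing the site-packages path, you can ask Python where a package actually lives (demonstrated here with a stdlib package, since the model may already be gone; substitute `"en_core_web_lg"`):

```python
import importlib.util
import os

def package_location(name):
    """Return the directory a package was installed into, or None
    if the package cannot be found by the current interpreter."""
    spec = importlib.util.find_spec(name)
    if spec is None or not spec.submodule_search_locations:
        return None
    return list(spec.submodule_search_locations)[0]

# For the model: package_location("en_core_web_lg")
path = package_location("json")  # stdlib stand-in for the demo
print(path, os.path.isdir(path))
```

Since the model was installed via pip, the cleanest removal is `pip uninstall en_core_web_lg`, which deletes the package from whatever directory this function reports.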
946
spaCy
Spacy MemoryError
https://stackoverflow.com/questions/54014422/spacy-memoryerror
<p>I managed to install spacy but when trying to use nlp then I am getting a MemoryError for some weird reason.</p> <p>The code I wrote is as follows:</p> <pre><code>import spacy import re from nltk.corpus import gutenberg def clean_text(astring): #replace newlines with space newstring=re.sub("\n"," ",astring) #remove title and chapter headings newstring=re.sub("\[[^\]]*\]"," ",newstring) newstring=re.sub("VOLUME \S+"," ",newstring) newstring=re.sub("CHAPTER \S+"," ",newstring) newstring=re.sub("\s\s+"," ",newstring) return newstring.lstrip().rstrip() nlp=spacy.load('en') alice=clean_text(gutenberg.raw('carroll-alice.txt')) nlp_alice=list(nlp(alice).sents) </code></pre> <p>The error I am getting is as follows</p> <p><a href="https://i.sstatic.net/kyGNV.png" rel="nofollow noreferrer">The error message</a></p> <p>Although when my code is something like this then it works:</p> <pre><code>import spacy nlp=spacy.load('en') alice=nlp("hello Hello") </code></pre> <p>If anybody could point out what I am doing wrong I would be very grateful</p>
<p>I'm guessing you truly are running out of memory. I couldn't find an exact number, but I'm sure Carroll's Alice's Adventures in Wonderland has tens of thousands of sentences. This equates to tens of thousands of <code>Span</code> elements from Spacy. Without modification, <code>nlp()</code> determines everything from POS to dependencies for the string passed to it. Moreover, the <code>sents</code> property returns an iterator which should be taken advantage of, as opposed to immediately expanding it into a list.</p> <p>Basically, you're attempting a computation that is very likely running into a memory constraint. How much memory does your machine support? In the comments Joe suggested watching your machine's memory usage; I second this. My recommendations: check whether you are actually running out of memory, or limit the functionality of <code>nlp()</code>, or consider doing your work with the iterator functionality:</p> <pre><code>for sentence in nlp(alice).sents: pass </code></pre>
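The "use the iterator" advice generalizes: instead of `list(nlp(alice).sents)`, consume sentences lazily, optionally in bounded batches. The batching pattern itself is plain stdlib; below, a dummy generator stands in for `nlp(alice).sents` (which is also a lazy iterator):

```python
from itertools import islice

def batched(iterable, size):
    """Yield lists of at most `size` items without materializing the
    whole stream: only one batch is in memory at a time."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

def fake_sents():
    # stand-in for nlp(alice).sents
    for i in range(10):
        yield f"sentence {i}"

for batch in batched(fake_sents(), 4):
    print(len(batch))  # 4, then 4, then 2
```

In the real code you would process each batch (write results to disk, update counts) and let it be garbage-collected before the next one, keeping peak memory proportional to the batch size rather than the book length.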
947
spaCy
Can older spaCy models be ported to future spaCy versions?
https://stackoverflow.com/questions/79583668/can-older-spacy-models-be-ported-to-future-spacy-versions
<p>The latest spaCy versions have better performance and compatibility for GPU acceleration on Apple devices, but I have an existing project that depends on spaCy 3.1.4 and some of the specific behavior of the 3.1.0 models (web lg, web trf).</p> <p>Would it be possible to port the old models from source to work with newer versions of spaCy (e.g. 3.5) so I could get the same behavior and results, or do I need to use the model that comes with the respective spaCy version?</p> <p>(This would be on a mac M1, and the advantage is that newer versions of spaCy started supporting Metal Performance Shaders.)</p>
948
spaCy
spaCy Alternatives in Java
https://stackoverflow.com/questions/41196081/spacy-alternatives-in-java
<p>I currently use spaCy to traverse the dependency tree, and generate entities. </p> <pre><code>nlp = get_spacy_model(detect_lang(unicode_text)) doc = nlp(unicode_text) entities = set() for sentence in doc.sents: # traverse tree picking up entities for token in sentence.subtree: ## pick entitites using some pre-defined rules entities.discard('') return entities </code></pre> <p>Are there any good Java alternatives for spaCy? </p> <p>I am looking for libs which generate the Dependency Tree as is done by spaCy.</p> <p>EDIT:</p> <p>I looked into Stanford Parser. However, it generated the following parse tree:</p> <pre><code> ROOT | NP _______________|_________ | NP | _________|___ | | PP | | ________|___ NP NP | NP ____|__________ | | _______|____ DT JJ JJ NN NNS IN DT JJ NN | | | | | | | | | the quick brown fox jumps over the lazy dog </code></pre> <p>However, I am looking for a tree structure like spaCy does:</p> <pre><code> jumps_VBZ __________________________|___________________ | | | | | over_IN | | | | | | | | | | | dog_NN | | | | | _______|_______ The_DT quick_JJ brown_JJ fox_NN ._. the_DT lazy_JJ </code></pre>
<p>You're looking for the <a href="http://nlp.stanford.edu/software/nndep.shtml" rel="nofollow noreferrer">Stanford Dependency Parser</a>. Like most of the Stanford tools, it is also bundled with <a href="http://stanfordnlp.github.io/CoreNLP/" rel="nofollow noreferrer">Stanford CoreNLP</a> under the <code>depparse</code> annotator. Other parsers include the <a href="http://www.maltparser.org/" rel="nofollow noreferrer">Malt parser</a> (a feature-based shift-reduce parser) and Ryan McDonald's <a href="http://www.seas.upenn.edu/~strctlrn/MSTParser/MSTParser.html" rel="nofollow noreferrer">MST parser</a> (an accurate but slower maximum spanning tree parser).</p>
949
spaCy
Python: Spacy and memory consumption
https://stackoverflow.com/questions/55841087/python-spacy-and-memory-consumption
<h2>1 - THE PROBLEM</h2> <p>I'm using spaCy in Python to lemmatize text documents. There are 500,000 documents with sizes up to 20 MB of clean text.</p> <p>The problem is the following: spaCy's memory consumption grows over time until the whole memory is used.</p> <h2>2 - BACKGROUND</h2> <p>My hardware configuration: CPU: Intel i7-8700K 3.7 GHz (12 cores); Memory: 16 GB; SSD: 1 TB; the GPU is onboard but is not used for this task.</p> <p>I'm using "multiprocessing" to split the task among several processes (workers). Each worker receives a list of documents to process. The main process performs monitoring of the child processes. I initiate spaCy in each child process once and use this one spaCy instance to handle the whole list of documents in the worker.</p> <p>Memory tracing says the following:</p> <blockquote> <p>[ Memory trace - Top 10 ]</p> <p>/opt/develop/virtualenv/lib/python3.6/site-packages/thinc/neural/mem.py:68: size=45.1 MiB, count=99, average=467 KiB</p> <p>/opt/develop/virtualenv/lib/python3.6/posixpath.py:149: size=40.3 MiB, count=694225, average=61 B</p> <p>:487: size=9550 KiB, count=77746, average=126 B</p> <p>/opt/develop/virtualenv/lib/python3.6/site-packages/dawg_python/wrapper.py:33: size=7901 KiB, count=6, average=1317 KiB</p> <p>/opt/develop/virtualenv/lib/python3.6/site-packages/spacy/lang/en/lemmatizer/_nouns.py:7114: size=5273 KiB, count=57494, average=94 B</p> <p>prepare_docs04.py:372: size=4189 KiB, count=1, average=4189 KiB</p> <p>/opt/develop/virtualenv/lib/python3.6/site-packages/dawg_python/wrapper.py:93: size=3949 KiB, count=5, average=790 KiB</p> <p>/usr/lib/python3.6/json/decoder.py:355: size=1837 KiB, count=20456, average=92 B</p> <p>/opt/develop/virtualenv/lib/python3.6/site-packages/spacy/lang/en/lemmatizer/_adjectives.py:2828: size=1704 KiB, count=20976, average=83 B</p> <p>prepare_docs04.py:373: size=1633 KiB, count=1, average=1633 KiB</p> </blockquote> <h2>3 - EXPECTATIONS</h2> <p>I have seen a good recommendation to build a
separated server-client solution [here]<a href="https://stackoverflow.com/questions/43554124/is-possible-to-keep-spacy-in-memory-to-reduce-the-load-time">Is possible to keep spacy in memory to reduce the load time?</a></p> <p>Is it possible to keep memory consumption under control using "multiprocessing" approach?</p> <h2>4 - THE CODE</h2> <p>Here is a simplified version of my code:</p> <pre class="lang-py prettyprint-override"><code>import os, subprocess, spacy, sys, tracemalloc from multiprocessing import Pipe, Process, Lock from time import sleep # START: memory trace tracemalloc.start() # Load spacy spacyMorph = spacy.load("en_core_web_sm") # # Get word's lemma # def getLemma(word): global spacyMorph lemmaOutput = spacyMorph(str(word)) return lemmaOutput # # Worker's logic # def workerNormalize(lock, conn, params): documentCount = 1 for filenameRaw in params[1]: documentTotal = len(params[1]) documentID = int(os.path.basename(filenameRaw).split('.')[0]) # Send to the main process the worker's current progress if not lock is None: lock.acquire() try: statusMessage = "WORKING:{:d},{:d},".format(documentID, documentCount) conn.send(statusMessage) documentCount += 1 finally: lock.release() else: print(statusMessage) # ---------------- # Some code is excluded for clarity sake # I've got a "wordList" from file "filenameRaw" # ---------------- wordCount = 1 wordTotalCount = len(wordList) for word in wordList: lemma = getLemma(word) wordCount += 1 # ---------------- # Then I collect all lemmas and save it to another text file # ---------------- # Here I'm trying to reduce memory usage del wordList del word gc.collect() if __name__ == '__main__': lock = Lock() processList = [] # ---------------- # Some code is excluded for clarity sake # Here I'm getting full list of files "fileTotalList" which I need to lemmatize # ---------------- while cursorEnd &lt; (docTotalCount + stepSize): fileList = fileTotalList[cursorStart:cursorEnd] # ---------------- # Create workers and 
populate it with list of files to process # ---------------- processData = {} processData['total'] = len(fileList) # worker total progress processData['count'] = 0 # worker documents done count processData['currentDocID'] = 0 # current document ID the worker is working on processData['comment'] = '' # additional comment (optional) processData['con_parent'], processData['con_child'] = Pipe(duplex=False) processName = 'worker ' + str(count) + " at " + str(cursorStart) processData['handler'] = Process(target=workerNormalize, name=processName, args=(lock, processData['con_child'], [processName, fileList])) processList.append(processData) processData['handler'].start() cursorStart = cursorEnd cursorEnd += stepSize count += 1 # ---------------- # Run the monitor to look after the workers # ---------------- while True: runningCount = 0 #Worker communication format: #STATUS:COMMENTS #STATUS: #- WORKING - worker is working #- CLOSED - worker has finished his job and closed pipe-connection #COMMENTS: #- for WORKING status: #DOCID,COUNT,COMMENTS #DOCID - current document ID the worker is working on #COUNT - count of done documents #COMMENTS - additional comments (optional) # ---------------- # Run through the list of workers ... # ---------------- for i, process in enumerate(processList): if process['handler'].is_alive(): runningCount += 1 # ---------------- # .. and check if there is somethng in the PIPE # ---------------- if process['con_parent'].poll(): try: message = process['con_parent'].recv() status = message.split(':')[0] comment = message.split(':')[1] # ---------------- # Some code is excluded for clarity sake # Update worker's information and progress in "processList" # ---------------- except EOFError: print("EOF----") # ---------------- # Some code is excluded for clarity sake # Here I draw some progress lines per workers # ---------------- else: # worker has finished his job. Close the connection. 
process['con_parent'].close() # Whait for some time and monitor again sleep(PARAM['MONITOR_REFRESH_FREQUENCY']) print("================") print("**** DONE ! ****") print("================") # ---------------- # Here I'm measuring memory usage to find the most "gluttonous" part of the code # ---------------- snapshot = tracemalloc.take_snapshot() top_stats = snapshot.statistics('lineno') print("[ Memory trace - Top 10 ]") for stat in top_stats[:10]: print(stat) ''' </code></pre>
<h1>Memory leaks with spacy</h1> <p>Memory problems when processing large amounts of data seem to be a known issue, see some relevant github issues:</p> <ul> <li><a href="https://github.com/explosion/spaCy/issues/3623" rel="noreferrer">https://github.com/explosion/spaCy/issues/3623</a></li> <li><a href="https://github.com/explosion/spaCy/issues/3556" rel="noreferrer">https://github.com/explosion/spaCy/issues/3556</a></li> </ul> <p>Unfortunately, it doesn't look like there's a good solution yet.</p> <h1>Lemmatization</h1> <p>Looking at your particular lemmatization task, I think your example code is a bit too over-simplified, because you're running the full spacy pipeline on single words and then not doing anything with the results (not even inspecting the lemma?), so it's hard to tell what you actually want to do.</p> <p>I'll assume you just want to lemmatize, so in general, you want to disable the parts of the pipeline that you're not using as much as possible (especially parsing if you're only lemmatizing, see <a href="https://spacy.io/usage/processing-pipelines#disabling" rel="noreferrer">https://spacy.io/usage/processing-pipelines#disabling</a>) and use <code>nlp.pipe</code> to process documents in batches. Spacy can't handle really long documents if you're using the parser or entity recognition, so you'll need to break up your texts somehow (or for just lemmatization/tagging you can just increase <code>nlp.max_length</code> as much as you need). 
</p> <p>Breaking documents into individual words as in your example kind of defeats the purpose of most of spacy's analysis (you often can't meaningfully tag or parse single words), plus it's going to be very slow to call spacy this way.</p> <h2>Lookup lemmatization</h2> <p>If you just need lemmas for common words out of context (where the tagger isn't going to provide any useful information), you can see if the lookup lemmatizer is good enough for your task and skip the rest of the processing:</p> <pre><code>from spacy.lemmatizer import Lemmatizer from spacy.lang.en import LOOKUP lemmatizer = Lemmatizer(lookup=LOOKUP) print(lemmatizer(u"ducks", ''), lemmatizer(u"ducking", '')) </code></pre> <p>Output:</p> <blockquote> <p>['duck'] ['duck']</p> </blockquote> <p>It is just a static lookup table, so it won't do well on unknown words or capitalization for words like "wugs" or "DUCKS", so you'll have to see if it works well enough for your texts, but it would be much, much faster without memory leaks. (You could also just use the table yourself without spacy, it's here: <a href="https://github.com/michmech/lemmatization-lists" rel="noreferrer">https://github.com/michmech/lemmatization-lists</a>.)</p> <h2>Better lemmatization</h2> <p>Otherwise, use something more like this to process texts in batches:</p> <pre><code>nlp = spacy.load('en', disable=['parser', 'ner']) # if needed: nlp.max_length = MAX_DOC_LEN_IN_CHAR for doc in nlp.pipe(texts): for token in doc: print(token.lemma_) </code></pre> <p>If you process one long text (or use <code>nlp.pipe()</code> for lots of shorter texts) instead of processing individual words, you should be able to tag/lemmatize (many) thousands of words per second in one thread.</p>
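The last parenthetical above - using the lemmatization table directly, without spaCy - can be sketched in a few lines of plain Python. This is a hedged illustration: the sample table is invented, and the `lemma<TAB>form` line format is an assumption based on the linked lemmatization-lists files (check the actual files before relying on it).

```python
# Minimal sketch of a standalone lookup lemmatizer (no spaCy needed).
# NOTE: sample_table is made-up data; real tables come from files like
# those in michmech/lemmatization-lists, assumed here to be "lemma<TAB>form".
sample_table = "duck\tducks\nduck\tducking\nbe\twas\nbe\twere"

def load_lookup(table_text):
    lookup = {}
    for line in table_text.splitlines():
        lemma, form = line.split("\t")
        lookup[form] = lemma
    return lookup

def lemmatize(word, lookup):
    # Static tables can't handle unknown words ("wugs"), but a lowercase
    # fallback at least covers capitalization variants like "DUCKS".
    return lookup.get(word) or lookup.get(word.lower()) or word

lookup = load_lookup(sample_table)
print(lemmatize("ducks", lookup))   # duck
print(lemmatize("DUCKS", lookup))   # duck (lowercase fallback)
print(lemmatize("wugs", lookup))    # wugs (unknown word passes through)
```

Because it is just a dict, this avoids loading any pipeline at all, which sidesteps the memory growth entirely for the common-word case.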
950
spaCy
output of spacy convert command not compatible with spacy train command
https://stackoverflow.com/questions/55615335/output-of-spacy-convert-command-not-compatible-with-spacy-train-command
<p>I wanted to convert conllu format for Hindi to json using spacy convert and it is generating jsonl format. When I pass on the same .jsonl file to spacy train I am getting an error. If I pass a json file to the train command, it is working. Shouldn't the output of spacy convert be compatible with spacy train? </p> <p>The error message is as follows:</p> <p><img src="https://i.sstatic.net/z3973.png" alt="Keyerror 1 for spacy train"></p> <p>I am using : Spacy version - 2.1.3 Python 3.6.8 ubuntu 18.04</p>
<p>I had the same problem when outputting convertions as <em>default jsonl format</em>. Fixed by converting with file-type : json attribute e.g.</p> <pre><code>python -m spacy convert lang_train.conllu output-folder --file-type json </code></pre>
951
spaCy
Can not install SPACY successfully
https://stackoverflow.com/questions/76869787/can-not-install-spacy-successfully
<p>I tried to install spacy though conda with the command:</p> <pre><code> conda install -c conda-forge spacy </code></pre> <p>The package was installed seemingly uneventfully. But when I tried to import spacy I got an exception. It seems the module pydantic is not cooperating well. I am a bit surprised that during the installation of spacy by conda no error message appeared.</p> <pre><code> File &quot;C:\Users\User\anaconda3\Lib\site-packages\pydantic\_migration.py&quot;, line 295, in wrapper raise PydanticImportError(f'`{import_path}` has been removed in V2.') pydantic.errors.PydanticImportError: `pydantic.errors:ConfigError` has been removed in V2 </code></pre> <p>In any case what should I do to be able to import and run spacy?</p>
<p>In my case I directly installed <code>spacy</code> from PyPI, instead of <code>conda install</code> and it works fine.</p> <pre><code>python -m pip install spacy </code></pre> <p>Kindly make sure you are inside the conda virtual env.</p>
952
spaCy
Tweek spacy spans
https://stackoverflow.com/questions/71481327/tweek-spacy-spans
<p>I am using spaCy in an NLP project.</p> <p>I have texts in which passages like this appear:</p> <pre><code> text='The car comprises 4 brakes 4.1, 4.2, 4.3 and 4.4 in fig. 5, all include an ESP system. This is shown in Fig. 6. Fig. 5 shows how the motors 56 and 57 are blocked. Besides the doors (44, 45) are painted blue.' </code></pre> <p>I would like to treat &quot;4.1, 4.2, 4.3 and 4.4&quot; as one entity, in order to extract the preceding noun phrase.</p> <p>Oftentimes spaCy divides that chunk into different tokens.</p> <p>Assume I have regular expressions for these spans.</p> <p>What is the way to go about defining a span?</p> <p>Code so far:</p> <pre><code>nlp = spacy.load('/home/jovyan/shared/public/spacy/en_core_web_sm-3.2.0') text='The car comprises 4 brakes 4.1, 4.2, 4.3 and 4.4 in fig. 5, all include an ESP system. This is shown in Fig. 6. Fig. 5 shows how the motors 56 and 57 are blocked. Besides the doors (44, 45) are painted blue.' doc = nlp(text) print([token.text for token in doc]) </code></pre> <p>How do I define spans based on regex?</p>
<p>There is a <a href="https://spacy.io/usage/rule-based-matching" rel="nofollow noreferrer">chapter</a> in the spaCy docs dedicated to rule-based matching. You can use spaCy to match spans based on regex-like rules, and you can also extend the pipeline with your rules to, for example, recognize named entities based on them.</p> <p>From docs:</p> <blockquote> <p>Compared to using regular expressions on raw text, spaCy’s rule-based matcher engines and components not only let you find the words and phrases you’re looking for – they also give you access to the tokens within the document and their relationships. This means you can easily access and analyze the surrounding tokens, merge spans into single tokens or add entries to the named entities in doc.ents</p> </blockquote> <p>As you can see in the following example taken from the documentation, it is very easy to define the rules using spacy's <a href="https://spacy.io/api/matcher" rel="nofollow noreferrer">Matcher</a> class and iterate over the results.</p> <pre><code>import spacy from spacy.matcher import Matcher nlp = spacy.load(&quot;en_core_web_sm&quot;) matcher = Matcher(nlp.vocab) # Add match ID &quot;HelloWorld&quot; with no callback and one pattern pattern = [{&quot;LOWER&quot;: &quot;hello&quot;}, {&quot;IS_PUNCT&quot;: True}, {&quot;LOWER&quot;: &quot;world&quot;}] matcher.add(&quot;HelloWorld&quot;, [pattern]) doc = nlp(&quot;Hello, world! 
Hello world!&quot;) matches = matcher(doc) for match_id, start, end in matches: string_id = nlp.vocab.strings[match_id] # Get string representation span = doc[start:end] # The matched span print(match_id, string_id, start, end, span.text) </code></pre> <p>On the other hand, you can also use the <a href="https://spacy.io/api/entityruler" rel="nofollow noreferrer">EntityRuler</a> class if you want to extend the spacy pipeline and recognize named entities based on regular expression-like rules.</p> <p>I modified your code to show you more or less how it would look. Surely you have to work the rules a bit to recognize exactly the numbers with the format that interests you.</p> <p>As you can see now, instead of iterating over the text tokens, I iterate over the list of entities recognized by the pipeline and keep only those that have the name <code>2_DIGIT</code>, which is the one that interests me.</p> <pre class="lang-py prettyprint-override"><code>import spacy from spacy.matcher import Matcher nlp = spacy.load('en_core_web_lg') text='The car comprises 4 brakes 4.1, 4.2, 4.3 and 4.4 in fig. 5, all include an ESP system. This is shown in Fig. 6. Fig. 5 shows how the motors 56 and 57 are blocked. Besides the doors (44, 45) are painted blue.' # Add EntityRuler to pipeline ruler = nlp.add_pipe(&quot;entity_ruler&quot;, before=&quot;ner&quot;, config={&quot;validate&quot;: True}) patterns = [{&quot;label&quot;: &quot;2_DIGIT&quot;, &quot;pattern&quot;: [{&quot;IS_DIGIT&quot;: True}, {&quot;IS_PUNCT&quot;: True}, {&quot;IS_DIGIT&quot;: True}]}] ruler.add_patterns(patterns) # Process the text after the ruler has been added to the pipeline doc = nlp(text) # Print 2-Digit Ents print([(ent.label_, text[ent.start_char:ent.end_char]) for ent in doc.ents if ent.label_ == &quot;2_DIGIT&quot;]) </code></pre> <p>I'm sorry I can't give you 100% working code that does what you want, but I think this is a good starting point to get what you're looking for.</p>
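Since the question mentions already having regular expressions for these spans, the character-level side can also be prototyped without spaCy at all. The sketch below is a hedged illustration: the enumeration pattern is an assumption about the reference-numeral format (dotted numbers joined by commas and "and"), not anything from spaCy's API. With spaCy available, each resulting `(start, end)` pair could be turned into a token-aligned `Span` via `doc.char_span(start, end)`.

```python
import re

# Hypothetical pattern for enumerations of dotted reference numerals such as
# "4.1, 4.2, 4.3 and 4.4" (an assumption about the format; adjust as needed).
pattern = re.compile(r"\d+\.\d+(?:\s*,\s*\d+\.\d+)*(?:\s+and\s+\d+\.\d+)?")

text = ("The car comprises 4 brakes 4.1, 4.2, 4.3 and 4.4 in fig. 5, "
        "all include an ESP system.")

# Collect each enumeration as one character range.
matches = [(m.start(), m.end(), m.group()) for m in pattern.finditer(text)]
for start, end, span_text in matches:
    print(start, end, span_text)
# With a spaCy Doc, doc.char_span(start, end) would map each character range
# onto a token-aligned Span (it returns None if the boundaries don't align).
```

This keeps the whole enumeration as a single span, after which the preceding noun phrase can be looked up relative to the span's start.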
953
spaCy
Installing spacy returns &#39;set_default_tensor_type&#39; error
https://stackoverflow.com/questions/70102594/installing-spacy-returns-set-default-tensor-type-error
<p>In a Jupyter Notebook, using Python 3.9.9</p> <p>I went to <a href="https://spacy.io/usage" rel="nofollow noreferrer">https://spacy.io/usage</a> and followed the instructions for installing Spacy</p> <p>• MacOS/OSX</p> <p>• conda</p> <p>• virtual env</p> <p>• English</p> <p>• efficiency</p> <pre class="lang-py prettyprint-override"><code>!python -m venv .env </code></pre> <p>my virtual environment's name is firstEnv</p> <pre class="lang-py prettyprint-override"><code>!source .env/bin/activate </code></pre> <pre class="lang-py prettyprint-override"><code>!conda install -c conda-forge spacy </code></pre> <pre class="lang-py prettyprint-override"><code>python -m spacy download en_core_web_sm </code></pre> <p>I then import spacy</p> <pre class="lang-py prettyprint-override"><code>import spacy </code></pre> <p>spacy version is 3.2.0</p> <p>Instantiate the class</p> <pre class="lang-py prettyprint-override"><code>nlp = spacy.load(&quot;en_core_web_sm&quot;) </code></pre> <p>Returns error</p> <pre class="lang-py prettyprint-override"><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /var/folders/dp/5k1wgpbj6d72lnbdgvwv16l40000gn/T/ipykernel_41554/3909579629.py in &lt;module&gt; ----&gt; 1 nlp = spacy.load(&quot;en_core_web_sm&quot;) /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/spacy/__init__.py in load(name, vocab, disable, exclude, config) 49 RETURNS (Language): The loaded nlp object. 
50 &quot;&quot;&quot; ---&gt; 51 return util.load_model( 52 name, vocab=vocab, disable=disable, exclude=exclude, config=config 53 ) /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/spacy/util.py in load_model(name, vocab, disable, exclude, config) 418 return get_lang_class(name.replace(&quot;blank:&quot;, &quot;&quot;))() 419 if is_package(name): # installed as package --&gt; 420 return load_model_from_package(name, **kwargs) # type: ignore[arg-type] 421 if Path(name).exists(): # path to model data directory 422 return load_model_from_path(Path(name), **kwargs) # type: ignore[arg-type] /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/spacy/util.py in load_model_from_package(name, vocab, disable, exclude, config) 451 &quot;&quot;&quot; 452 cls = importlib.import_module(name) --&gt; 453 return cls.load(vocab=vocab, disable=disable, exclude=exclude, config=config) # type: ignore[attr-defined] 454 455 /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/en_core_web_sm/__init__.py in load(**overrides) 8 9 def load(**overrides): ---&gt; 10 return load_model_from_init_py(__file__, **overrides) /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/spacy/util.py in load_model_from_init_py(init_file, vocab, disable, exclude, config) 613 if not model_path.exists(): 614 raise IOError(Errors.E052.format(path=data_path)) --&gt; 615 return load_model_from_path( 616 data_path, 617 vocab=vocab, /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/spacy/util.py in load_model_from_path(model_path, meta, vocab, disable, exclude, config) 486 overrides = dict_to_dot(config) 487 config = load_config(config_path, overrides=overrides) --&gt; 488 nlp = load_model_from_config(config, vocab=vocab, disable=disable, exclude=exclude) 489 return nlp.from_disk(model_path, exclude=exclude, overrides=overrides) 490 /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/spacy/util.py in load_model_from_config(config, vocab, disable, exclude, auto_fill, validate) 523 # 
registry, including custom subclasses provided via entry points 524 lang_cls = get_lang_class(nlp_config[&quot;lang&quot;]) --&gt; 525 nlp = lang_cls.from_config( 526 config, 527 vocab=vocab, /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/spacy/language.py in from_config(cls, config, vocab, disable, exclude, meta, auto_fill, validate) 1753 # then we would load them twice at runtime: once when we make from config, 1754 # and then again when we load from disk. -&gt; 1755 nlp = lang_cls(vocab=vocab, create_tokenizer=create_tokenizer, meta=meta) 1756 if after_creation is not None: 1757 nlp = after_creation(nlp) /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/spacy/language.py in __init__(self, vocab, max_length, meta, create_tokenizer, batch_size, **kwargs) 174 if vocab is True: 175 vectors_name = meta.get(&quot;vectors&quot;, {}).get(&quot;name&quot;) --&gt; 176 vocab = create_vocab(self.lang, self.Defaults, vectors_name=vectors_name) 177 else: 178 if (self.lang and vocab.lang) and (self.lang != vocab.lang): /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/spacy/vocab.pyx in spacy.vocab.create_vocab() /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/spacy/vocab.pyx in spacy.vocab.Vocab.__init__() /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/spacy/vectors.pyx in spacy.vectors.Vectors.__init__() /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/thinc/backends/__init__.py in get_current_ops() 124 &quot;&quot;&quot;Get the current backend object.&quot;&quot;&quot; 125 if context_ops.get() is None: --&gt; 126 require_cpu() 127 return cast(Ops, context_ops.get()) 128 /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/thinc/util.py in require_cpu() 164 165 ops = get_ops(&quot;cpu&quot;) --&gt; 166 set_current_ops(ops) 167 set_torch_tensor_type_for_ops(ops) 168 /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/thinc/backends/__init__.py in set_current_ops(ops) 132 context_ops.set(ops) 133 
_get_thread_state().ops = ops --&gt; 134 set_torch_tensor_type_for_ops(ops) 135 136 /opt/anaconda3/envs/firstEnv/lib/python3.9/site-packages/thinc/util.py in set_torch_tensor_type_for_ops(ops) 487 torch.set_default_tensor_type(&quot;torch.cuda.FloatTensor&quot;) 488 else: --&gt; 489 torch.set_default_tensor_type(&quot;torch.FloatTensor&quot;) 490 except ImportError: 491 pass AttributeError: module 'torch' has no attribute 'set_default_tensor_type' ​``` </code></pre>
<p>I had to update PyTorch.</p> <p>In your terminal, type:</p> <pre><code>pip install --upgrade torch torchvision </code></pre>
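If the upgrade doesn't resolve it, it helps to confirm which versions are actually active in the environment, since the underlying problem is a mismatch between what thinc expects and what torch provides. A small sketch (it runs whether or not each package is installed; no specific version numbers are pinned here):

```python
import importlib.metadata as md  # Python 3.8+

# Report the installed versions of the packages involved in this error.
versions = {}
for pkg in ("torch", "thinc", "spacy"):
    try:
        versions[pkg] = md.version(pkg)
    except md.PackageNotFoundError:
        versions[pkg] = None  # package not installed in this environment

print(versions)
```

Comparing these against each other makes it easier to spot whether the environment picked up an old torch alongside a newer thinc/spaCy.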
954
spaCy
How to get spaCy NER probability
https://stackoverflow.com/questions/46934523/how-to-get-spacy-ner-probability
<p>I want to combine spaCy's NER engine with a separate NER engine (a BoW model). I'm currently comparing outputs from the two engines, trying to figure out what the optimal combination of the two would be. Both perform decently, but quite often spaCy finds entities that the BoW engine misses, and vice versa. What I would like is to access a probability score (or something similar) from spaCy whenever it finds an entity that is not found by the BoW engine. Can I get spaCy to print out its own probability score for a given entity it has found? As in, "Hi, I'm spaCy. I've found this token (or combination of tokens) that I'm X% certain is an entity of type BLAH." I want to know that number X every time spaCy finds an entity. I imagine there must be such a number somewhere internally in spaCy's NER engine, plus a threshold value below which the possible entity is not flagged as an entity, and I'd like to know how to get my hands on that number. Thanks in advance.</p>
<p>Actually, there is an <a href="https://github.com/explosion/spaCy/issues/881" rel="noreferrer">issue</a> for that.</p> <p>The author of the library, suggests there (among others) the following solution:</p> <blockquote> <ol> <li>Beam search with global objective. This is the standard solution: use a global objective, so that the parser model is trained to prefer parses that are better overall. Keep N different candidates, and output the best one. This can be used to support confidence by looking at the alternate analyses in the beam. If an entity occurs in every analysis, the NER is more confident it's correct.</li> </ol> </blockquote> <p>Code:</p> <pre class="lang-py prettyprint-override"><code>import spacy import sys from collections import defaultdict nlp = spacy.load('en') text = u'Will Japan join the European Union? If yes, we should \ move to United States. Fasten your belts, America we are coming' with nlp.disable_pipes('ner'): doc = nlp(text) threshold = 0.2 (beams, somethingelse) = nlp.entity.beam_parse([ doc ], beam_width = 16, beam_density = 0.0001) entity_scores = defaultdict(float) for beam in beams: for score, ents in nlp.entity.moves.get_beam_parses(beam): for start, end, label in ents: entity_scores[(start, end, label)] += score print ('Entities and scores (detected with beam search)') for key in entity_scores: start, end, label = key score = entity_scores[key] if ( score &gt; threshold): print ('Label: {}, Text: {}, Score: {}'.format(label, doc[start:end], score)) </code></pre> <p>Sample output: </p> <blockquote> <p>Entities and scores (detected with beam search) </p> <p>Label: GPE, Text: Japan, Score: 0.9999999999999997 </p> <p>Label: GPE, Text: America, Score: 0.9991664575947963 </p> </blockquote> <p><strong>Important note:</strong> The outputs you will get here are probably different from the outputs you would get using the Standard NER and not the beam search alternative. 
However, the beam search alternative provides you a metric of confidence that as I understand from your question is useful for your case.</p> <p>Outputs with Standard NER for this example:</p> <blockquote> <p>Label: GPE, Text: Japan</p> <p>Label: ORG, Text: the European Union </p> <p>Label: GPE, Text: United States </p> <p>Label: GPE, Text: America</p> </blockquote>
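The aggregation step in the snippet above is worth isolating: an entity's confidence is simply the summed probability of the beam parses that contain it. The sketch below reproduces that arithmetic with made-up stand-ins for what `nlp.entity.moves.get_beam_parses(beam)` would yield, so it runs without spaCy.

```python
from collections import defaultdict

# Made-up beam: each candidate parse has a probability and a list of
# (start_token, end_token, label) entities it proposes.
beam_parses = [
    (0.6, [(0, 1, "GPE"), (5, 7, "ORG")]),
    (0.3, [(0, 1, "GPE")]),
    (0.1, []),
]

# Accumulate probability mass per distinct entity across all parses.
entity_scores = defaultdict(float)
for score, ents in beam_parses:
    for start, end, label in ents:
        entity_scores[(start, end, label)] += score

# (0, 1, "GPE") appears in parses worth 0.6 + 0.3 = 0.9 of the beam mass,
# while (5, 7, "ORG") only reaches 0.6 and falls below a 0.7 threshold.
threshold = 0.7
confident = {k: v for k, v in entity_scores.items() if v > threshold}
print(confident)
```

This is exactly why an entity occurring in every analysis ends up with a score near 1.0: the parse probabilities it participates in sum over (nearly) the whole beam.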
955
spaCy
A checklist for Spacy optimization?
https://stackoverflow.com/questions/74181750/a-checklist-for-spacy-optimization
<p>I have been trying to understand how to systematically make Spacy run as fast as possible for a long time and I would like this post to become a wiki-style public post if possible.</p> <p>Here is what I currently know, with subsidiary questions on each point:</p> <p><strong>1. Spacy will run faster on faster hardware. For example, try a computer with more CPU cores, or more RAM/primary memory.</strong></p> <p>What I do not know:</p> <ul> <li><em>What specific aspects of the execution of Spacy - especially the main one of instantiating the</em> <code>Doc</code> <em>object - depend more on CPU vs. RAM and why?</em></li> <li><em>Is the instantiation of a</em> <code>Doc</code> <em>object a sequence of arithmetical calculations (the compiled binary of the neural networks), so the more CPU cores, the more calculations can be done at once, therefore faster? Does that mean increasing RAM would not make this process faster?</em></li> <li><em>Are there any other aspects of CPUs or GPUs to watch out for, other than cores, that would make one chip better than another, for Spacy? Someone mentioned &quot;hyper threading&quot;.</em></li> <li><em>Is there any standard mathematical estimate of time per pipeline component, such as parser, relative to input string length? Like Parser, seconds = number of characters in input? / number of CPU cores</em></li> </ul> <p><strong>2. You can make Spacy run faster by removing <a href="https://spacy.io/usage/spacy-101#pipelines" rel="noreferrer">components</a> you don't need, for example by</strong> <code>nlp = spacy.load(&quot;en_core_web_sm&quot;, disable=['tagger', 'ner', 'lemmatizer', 'textcat'])</code></p> <ul> <li><em>Just loading the Spacy module itself with <code>import spacy</code> is slightly slow. If you haven't even loaded the language model yet, what are the most significant things being loaded here, apart from just adding functions to the namespace? 
Is it possible to only load a part of the module you need?</em></li> </ul> <p><strong>3. You can make Spacy faster by using certain options that simply make it run faster.</strong></p> <ul> <li><em>I have read about multiprocessing with</em> <code>nlp.pipe</code>, <code>n_process</code>, <code>batch_size</code> and <code>joblib</code><em>, but that's for multiple documents and I'm only doing a single document right now.</em></li> </ul> <p><strong>4. You can make Spacy faster by minimising the number of times it has to perform the same operations.</strong></p> <ul> <li><p><em>You can keep Spacy alive on a server and pass processing commands to it when you need to</em></p> </li> <li><p><em>You can serialize a</em> <code>Doc</code> <em>to reload it later, and you can further exclude attributes you don't need with</em> <code>doc.to_bytes(exclude=[&quot;tensor&quot;])</code> or <code>doc.to_array([LOWER, POS, ENT_TYPE, IS_ALPHA])</code></p> </li> </ul> <p><strong>5. Anything else?</strong></p>
<h1>Checklist</h1> <p>The following checklist is focused on runtime performance optimization and not training (i.e. when one utilises existing <code>config.cfg</code> files loaded with the convenience wrapper <code>spacy.load()</code>, instead of training their own models and creating a new <code>config.cfg</code> file), however, most of the points still apply. This list is not comprehensive: the spaCy library is extensive and there are many ways to build pipelines and carry out tasks. Thus, including all cases here is impractical, regardless, this list intends to be a handy reference and starting point.</p> <h2>Summary</h2> <ol> <li>If more powerful hardware is available, use it.</li> <li>Use (optimally) small models/pipelines.</li> <li>Use your GPU if possible.</li> <li>Process large texts as a stream and buffer them in batches.</li> <li>Use multiprocessing (if appropriate).</li> <li>Use only necessary pipeline components.</li> <li>Save and load progress to avoid re-computation.</li> </ol> <h2>1. <strong>If more powerful hardware is available, use it.</strong></h2> <p><strong>CPU</strong>. Most of spaCy's work at runtime is going to be using CPU instructions to allocate memory, assign values to memory and perform computations, which, in terms of speed, will be CPU bound not RAM, hence, <em>performance is predominantly dependent on the CPU</em>. So, opting for a better CPU as opposed to more RAM is the smarter choice in most situations. As a general rule, newer CPUs with higher frequencies, more cores/threads, more cache etc. will realise faster spaCy processing times. However, simply comparing these numbers between different CPU architectures is not useful. Instead look at benchmarks like <a href="https://cpu.userbenchmark.com/" rel="noreferrer">cpu.userbenchmark.com</a> (e.g. <a href="https://cpu.userbenchmark.com/Compare/Intel-Core-i5-12600K-vs-AMD-Ryzen-9-5900X/4120vs4087" rel="noreferrer">i5-12600k vs. 
Ryzen 9 5900X</a>) and compare the single-core and multi-core performance of prospective CPUs to find those that will likely offer better performance. See Footnote (1) on hyperthreading &amp; core/thread counts.</p> <p><strong>RAM</strong>. The practical consideration for RAM is the size: larger texts require more memory capacity, speed and latency is less important. If you have limited RAM capacity, disable <code>NER</code> and <code>parser</code> when creating your <code>Doc</code> for large input text (e.g. <code>doc = nlp(&quot;My really long text&quot;, disable = ['ner', 'parser'])</code>). If you require these parts of the pipeline, you'll only be able to process approximately <code>100,000 * available_RAM_in_GB</code> characters at a time, if you don't, you'll be able to process more than this. Note that the default spaCy input text limit is 1,000,000 characters, however this can be changed by setting <code>nlp.max_length = your_desired_length</code>.</p> <p><strong>GPU</strong>. If you opt to use a GPU, processing times can be improved for certain aspects of the pipeline which make use of GPU-based computations. See the section below on <em>making use of your GPU</em>. The same general rule as with CPUs applies here too: generally, newer GPUs with higher frequencies, more memory, larger memory bus widths, bigger bandwidth etc. will realise faster spaCy processing times.</p> <p><strong>Overclocking</strong>. If you're experienced with overclocking and have the correct hardware to be able to do it (adequate power supply, cooling, motherboard chipset), then another effective way to gain extra performance without changing hardware is to overclock your CPU/GPU.</p> <h2>2. Use (optimally) small models/pipelines.</h2> <p>When computation resources are limited, and/or accuracy is less of a concern (e.g. when experimenting or testing ideas), load spaCy pipelines that are efficiency focused (i.e. those with smaller models). 
For example:</p> <pre class="lang-py prettyprint-override"><code># Load a &quot;smaller&quot; pipeline for faster processing nlp = spacy.load(&quot;en_core_web_sm&quot;) # Load a &quot;larger&quot; pipeline for more accuracy nlp = spacy.load(&quot;en_core_web_trf&quot;) </code></pre> <p>As a <a href="https://spacy.io/usage/facts-figures#benchmarks-speed" rel="noreferrer">concrete example</a> of the differences, on the same system, the smaller <code>en_core_web_lg</code> pipeline is able to process 10,014 words per second, whereas the <code>en_core_web_trf</code> pipeline only processes 684. Remember that there is often a trade-off between speed and accuracy.</p> <h2>3. Use your GPU if possible.</h2> <p>Due to the nature of neural network-based models, their computations can be efficiently solved using a GPU, leading to boosts in processing times. <a href="https://spacy.io/usage/facts-figures#benchmarks-speed" rel="noreferrer">For instance</a>, the <code>en_core_web_lg</code> pipeline can process 10,014 vs. 14,954 words per second when using a CPU vs. a GPU.</p> <p>spaCy can be installed for a CUDA compatible GPU (i.e. Nvidia GPUs) by calling <code>pip install -U spacy[cuda]</code> in the command prompt. Once a GPU-enabled spaCy installation is present, one can call <code>spacy.prefer_gpu()</code> or <code>spacy.require_gpu()</code> somewhere in your program before any pipelines have been loaded. Note that <code>require_gpu()</code> will raise an error if no GPU is available. For example:</p> <pre class="lang-py prettyprint-override"><code>spacy.prefer_gpu() # Or use spacy.require_gpu() nlp = spacy.load(&quot;en_core_web_sm&quot;) </code></pre> <h2>4. Process large texts as a stream and buffer them in batches.</h2> <p>When processing large volumes of text, the statistical models are usually more efficient if you let them work on batches of texts (default is 1000), and process the texts as a stream using <code>nlp.pipe()</code>. 
For example:</p> <pre class="lang-py prettyprint-override"><code>texts = [&quot;One document.&quot;, &quot;...&quot;, &quot;Lots of documents&quot;] nlp = spacy.load(&quot;en_core_web_sm&quot;) docs = list(nlp.pipe(texts, batch_size=1000)) </code></pre> <h2>5. Use multiprocessing (if appropriate).</h2> <p>To make use of multiple CPU cores, spaCy includes built-in support for multiprocessing with <code>nlp.pipe()</code> using the <code>n_process</code> option. For example,</p> <pre class="lang-py prettyprint-override"><code>texts = [&quot;One document.&quot;, &quot;...&quot;, &quot;Lots of documents&quot;] nlp = spacy.load(&quot;en_core_web_sm&quot;) docs = list(nlp.pipe(texts, n_process=4)) </code></pre> <p><em>Note</em> that each process requires its own memory. This means that every time a new process is <code>spawned</code> (the default start method), model data has to be copied into memory for every individual process (hence, the larger the model, the more overhead to spawn a process). Therefore, it is recommended that, if you are just doing small tasks, you increase the batch size and use fewer processes. For example,</p> <pre class="lang-py prettyprint-override"><code>texts = [&quot;One document.&quot;, &quot;...&quot;, &quot;Lots of documents&quot;] nlp = spacy.load(&quot;en_core_web_sm&quot;) docs = list(nlp.pipe(texts, n_process=2, batch_size=2000)) # default batch_size = 1000 </code></pre> <p>Finally, multiprocessing is generally not recommended on GPUs because GPU memory is limited.</p> <h2>6. Use only necessary pipeline components.</h2> <p>Generating predictions from models in the pipeline that you don't require unnecessarily degrades performance. One can prevent this by either <em>disabling</em> or <em>excluding</em> specific components, either when loading a pipeline (i.e. with <code>spacy.load()</code>) or during processing (i.e.
with <code>nlp.pipe()</code>).</p> <p>If you have limited memory, <code>exclude</code> the components you don't need, for example:</p> <pre class="lang-py prettyprint-override"><code># Load the pipeline without the entity recognizer nlp = spacy.load(&quot;en_core_web_sm&quot;, exclude=[&quot;ner&quot;]) </code></pre> <p>If you might need a particular component later in your program, but still want to improve processing speed for tasks that don't require those components in the interim, use <code>disable</code>, for example:</p> <pre class="lang-py prettyprint-override"><code># Load the tagger but don't enable it nlp = spacy.load(&quot;en_core_web_sm&quot;, disable=[&quot;tagger&quot;]) # ... perform some tasks with the pipeline that don't require the tagger # Eventually enable the tagger nlp.enable_pipe(&quot;tagger&quot;) </code></pre> <p><em>Note</em> that the <code>lemmatizer</code> depends on <code>tagger</code>+<code>attribute_ruler</code> or <code>morphologizer</code> for a number of languages. If you disable any of these components, you’ll see lemmatizer warnings unless the lemmatizer is also disabled.</p> <h2>7. Save and load progress to avoid re-computation.</h2> <p>If one has been modifying the pipeline or vocabulary, made updates to model components, processed documents etc., there is merit in saving one's progress to reload at a later date. This requires one to translate the contents/structure of an object into a format that can be saved -- a process known as <code>serialization</code>.</p> <h3>Serializing the pipeline</h3> <pre class="lang-py prettyprint-override"><code>nlp = spacy.load(&quot;en_core_web_sm&quot;) # ... 
some changes to pipeline # Save serialized pipeline nlp.to_disk(&quot;./en_my_pipeline&quot;) # Load serialized pipeline nlp.from_disk(&quot;./en_my_pipeline&quot;) </code></pre> <h3>Serializing multiple <code>Doc</code> objects</h3> <p>The <code>DocBin</code> class provides an easy method for serializing/deserializing multiple <code>Doc</code> objects, which is also more efficient than calling <code>Doc.to_bytes()</code> on every <code>Doc</code> object. For example:</p> <pre class="lang-py prettyprint-override"><code>from spacy.tokens import DocBin texts = [&quot;One document.&quot;, &quot;...&quot;, &quot;Lots of documents&quot;] nlp = spacy.load(&quot;en_core_web_sm&quot;) docs = list(nlp.pipe(texts)) doc_bin = DocBin(docs=docs) # Save the serialized DocBin to a file doc_bin.to_disk(&quot;./data.spacy&quot;) # Load a serialized DocBin from a file doc_bin = DocBin().from_disk(&quot;./data.spacy&quot;) </code></pre> <h3>Footnotes</h3> <p>(1) &quot;Hyper-threading&quot; is a term trademarked by Intel used to refer to their proprietary Simultaneous Multi-Threading (SMT) implementation that improves parallelisation of computations (i.e. doing multiple tasks at once). AMD has SMT as well; it just doesn't have a fancy name. In short, processors with 2-way SMT (SMT-2) allow an Operating System (OS) to treat each physical core on the processor as two cores (referred to as &quot;virtual cores&quot;). Processors with SMT will perform better on tasks that can make use of these multiple &quot;cores&quot;, sometimes referred to as &quot;threads&quot; (e.g. the Ryzen 5600X is a 6 core/12 thread processor (i.e. 6 physical cores, but with SMT-2, it has 12 &quot;virtual cores&quot; or &quot;threads&quot;)).
Note that Intel has recently released a CPU architecture with e-cores, which are cores that don't have hyper-threading, despite other cores on the processor (namely, p-cores) having it. Hence you will see chips like the i5-12600K that have 10 cores but 16 threads rather than 20: only the 6 p-cores have hyper-threading, while the 4 e-cores do not, giving 16 threads in total.</p>
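As a rough illustration of the RAM rule of thumb from the hardware section above, here is a back-of-the-envelope sketch; the 100,000-characters-per-GB constant is the approximation quoted above, not a spaCy API, and the helper name is illustrative:

```python
# Back-of-the-envelope sizing of spaCy input length against available RAM,
# using the approximate 100,000 characters per GB rule of thumb.
SPACY_DEFAULT_MAX_LENGTH = 1_000_000  # spaCy's default nlp.max_length

def max_chars_full_pipeline(available_ram_gb: float) -> int:
    # Rough character budget when "ner" and "parser" are enabled.
    return int(100_000 * available_ram_gb)

for ram_gb in (4, 8, 16):
    budget = max_chars_full_pipeline(ram_gb)
    within_default = budget <= SPACY_DEFAULT_MAX_LENGTH
    print(f"{ram_gb} GB RAM -> ~{budget:,} chars (within default limit: {within_default})")
```

Only raise `nlp.max_length` beyond the default when an estimate like this suggests your machine can actually hold the resulting `Doc`.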
956
spaCy
Don&#39;t know how to uninstall unwanted Spacy installation, model
https://stackoverflow.com/questions/53052868/dont-know-how-to-uninstall-unwanted-spacy-installation-model
<p>I have limited disk memory and want to know how to uninstall/remove files for spacy 2.xx under python 2.7 (I use python3 and think I've got spacy installed correctly for it). Ditto for the default model in my python3 install. Here's my terminal session:</p> <pre><code>gw-mac-pro:~ gr$ pip install -U spacy Collecting spacy Downloading https://files.pythonhosted.org/packages/f8/db/490f0552f0f39eb3b75119 ... Linking successful /usr/local/lib/python2.7/site-packages/en_core_web_sm --&gt; /usr/local/lib/python2.7/site-packages/spacy/data/en You can now load the model via spacy.load('en') gw-mac-pro:~ gr$ python -m spacy download en </code></pre> <p>Is there an uninstall script (I couldn't find one)? If not, can I just remove these directories? Do I need to remove files elsewhere?</p> <p>New to spacy, obviously. Thanks for your help!</p> <p>ADDED: I should have mentioned I'm on Mac OS (excuse me, macOS). I just found a command to show info on my spacy installation. Here's what it returns for python 2.7</p> <pre><code>gw-mac-pro:~ gr$ python -m spacy info Info about spaCy Python version 2.7.15 Platform Darwin-15.6.0-x86_64-i386-64bit spaCy version 2.0.16 Location /usr/local/lib/python2.7/site-packages/spacy Models en </code></pre>
<p><em>spaCy</em> installs models as packages (via <code>pip</code>). That means you can uninstall them via <code>pip</code> as well:</p> <pre><code>pip list </code></pre> <p>This shows you all the installed packages, including the <em>spaCy</em> models.</p> <blockquote> <p>…</p> <p>en-core-web-sm 2.0.0</p> <p>…</p> </blockquote> <pre><code>pip uninstall en-core-web-sm </code></pre> <p>This will remove the <code>en</code> model.</p> <blockquote> <p>…</p> <p>Successfully uninstalled en-core-web-sm-2.0.0</p> </blockquote>
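Since the models are ordinary pip packages, you can also enumerate likely model packages programmatically instead of scanning `pip list` by eye. A heuristic sketch (the prefix list is an assumption; models for other languages use other prefixes, and the function name is mine):

```python
from importlib import metadata

def installed_spacy_models(prefixes=("en-core", "en_core", "de-core", "de_core")):
    """Heuristically list installed packages that look like spaCy models.

    spaCy models are normal pip packages with names like en-core-web-sm,
    so we simply scan installed distributions for matching name prefixes.
    """
    lowered = tuple(p.lower() for p in prefixes)
    found = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name.startswith(lowered):
            found.append(name)
    return sorted(found)

print(installed_spacy_models())
```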
957
spaCy
Spacy cannot find model
https://stackoverflow.com/questions/66048470/spacy-cannot-find-model
<p>There are tickets or bugs across the internet on this and because there's Spacy 2.3 and 3.0 I can't tell which is relevant. I'm running 2.3 because moving to 3.0 is like moving to dojo 2.0. All that youtube watching is down the drain. Anyway, the following works:</p> <pre><code>spacy.load(r'C:\Users\martingale.henrietta\AppData\Local\Continuum\anaconda3\envs\spacy\Lib\site-packages\en_core_web_sm\en_core_web_sm-2.3.1') </code></pre> <p>or</p> <pre><code>nlp = en_core_web_sm.load() </code></pre> <p>But the line that's in every tutorial is -</p> <pre><code>spacy.load('en_core_web_sm') </code></pre> <p>or even</p> <pre><code>spacy.load('en') </code></pre> <p>if you're really courageous, never works for me. My path includes:</p> <pre><code>['', 'C:\\Users\\martingale.henrietta\\AppData\\Local\\Continuum\\anaconda3\\envs\\spacy\\python36.zip', 'C:\\Users\\martingale.henrietta\\AppData\\Local\\Continuum\\anaconda3\\envs\\spacy\\DLLs', 'C:\\Users\\martingale.henrietta\\AppData\\Local\\Continuum\\anaconda3\\envs\\spacy\\lib', 'C:\\Users\\martingale.henrietta\\AppData\\Local\\Continuum\\anaconda3\\envs\\spacy', 'C:\\Users\\martingale.henrietta\\AppData\\Local\\Continuum\\anaconda3\\envs\\spacy\\lib\\site-packages', 'C:\\Users\\martingale.henrietta\\AppData\\Local\\Continuum\\anaconda3\\envs\\spacy\\lib\\site-packages\\win32', 'C:\\Users\\martingale.henrietta\\AppData\\Local\\Continuum\\anaconda3\\envs\\spacy\\lib\\site-packages\\win32\\lib', 'C:\\Users\\martingale.henrietta\\AppData\\Local\\Continuum\\anaconda3\\envs\\spacy\\lib\\site-packages\\Pythonwin', 'C:\\Users\\martingale.henrietta\\AppData\\Local\\Continuum\\anaconda3\\envs\\spacy\\lib\\site-packages\\IPython\\extensions', 'C:\\Users\\martingale.henrietta\\.ipython', 'C:\\Users\\martingale.henrietta\\OneDrive - 247 Customer Pvt. Ltd\\workspace\\247', 'C:\\Users\\martingale.henrietta\\OneDrive - 247 Customer Pvt. Ltd\\workspace\\247'] </code></pre> <p>And, of course, this is in a Jupyter notebook. 
How do I get spacy to do <code>spacy.load('en')</code> in 2.3?</p>
<blockquote> <p>How do I get spacy to do <code>spacy.load('en')</code> in 2.3?</p> </blockquote> <p>You can't. From <a href="https://spacy.io/usage/models#download:%7E:text=installation.-,IMPORTANT%20NOTE%20FOR%20V3.0" rel="nofollow noreferrer">the docs</a>:</p> <blockquote> <p>Note that as of spaCy v3.0, shortcut links like <code>en</code> that create (potentially brittle) symlinks in your spaCy installation are deprecated. To download and load an installed pipeline package, use its full name:</p> </blockquote> <p>The shortcuts were based on symlinks, and they always had issues with Windows, or with installing multiple models for the same language, and other things, so they took them out. Except for the argument being longer there's no difference.</p> <p>This should work just fine:</p> <pre><code>spacy.load('en_core_web_sm') </code></pre> <p>If it doesn't work, make sure you actually downloaded the model and post your error.</p>
958
spaCy
SSL Certificate error while installing Spacy
https://stackoverflow.com/questions/68205676/ssl-certificate-error-while-installing-spacy
<p>I am facing the below error while installing Spacy.</p> <pre><code>requests.exceptions.SSLError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /explosion/spacy-models/master/compatibility.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)'))) </code></pre> <p><strong>Command i'm running :</strong> python -m spacy download en_core_web_md</p>
<p>I fixed this using the following set of commands:</p> <pre><code>pip install pip-system-certs python -m spacy download it_core_news_sm </code></pre> <p>Ref: <a href="https://github.com/explosion/spaCy/discussions/5649" rel="noreferrer">https://github.com/explosion/spaCy/discussions/5649</a></p>
959
spaCy
Spacy CLI tools not functioning
https://stackoverflow.com/questions/69082919/spacy-cli-tools-not-functioning
<p>As the title says, I am unable to use the spacy CLI tools. I get a <code>No module named spacy</code> error when I try to run <code>python3 -m spacy download en_core_web_trf</code>.</p> <p>I was able to install the sm, md and lg models a few days ago, but now it doesn't seem to be working. I have spacy v3 installed:</p> <pre><code>import spacy spacy.__version__ </code></pre> <p>gives <code>'3.1.2'</code>.</p> <p>Running <code>python3 -m spacy download en_core_web_trf</code> gives the error. How can I resolve this?</p>
960
spaCy
Spacy download on Apple M1 chip
https://stackoverflow.com/questions/70658085/spacy-download-on-apple-m1-chip
<p>I'm on a MacBook Pro with an M1 Pro chip, and I've built a Docker image to install TensorFlow, spaCy, etc. My Dockerfile:</p> <pre><code>FROM --platform=linux/x86_64 python:3.8 ENV PYTHONUNBUFFERED 1 RUN apt-get update &amp;&amp; apt-get -y upgrade RUN pip install --upgrade pip RUN pip install TensorFlow==2.3.4 RUN pip install rasa==2.8.0 RUN pip install spacy RUN python -m spacy download pt_core_news_lg </code></pre> <p>I've added the platform tag (linux/x86_64) on line 1 so that the TensorFlow and spacy packages install correctly. But when the spacy download command runs, this error occurs:</p> <pre><code> =&gt; ERROR [13/13] RUN python -m spacy download pt_core_news_lg 2.3s ------ &gt; [13/13] RUN python -m spacy download pt_core_news_lg: #17 2.290 2022-01-10 19:36:31.774553: F tensorflow/core/lib/monitoring/sampler.cc:42] Check failed: bucket_limits_[i] &gt; bucket_limits_[i - 1] (0 vs. 10) #17 2.290 qemu: uncaught target signal 6 (Aborted) - core dumped #17 2.300 Aborted ------ executor failed running [/bin/sh -c python -m spacy download pt_core_news_lg]: exit code: 134 </code></pre> <p>Any idea?</p>
961
spaCy
processing grammar using spaCY
https://stackoverflow.com/questions/48356424/processing-grammar-using-spacy
<p>I'm going to check for grammar in the sentences I give as input. If <strong>spaCy</strong> identifies <code>PRP</code> and <code>MD</code> and <code>NN</code> in a sentence then it will give me a text output</p> <blockquote> <p>there is a prp, md and nn in the sentence</p> </blockquote> <p>The problem is: how can I tell <strong>spaCy</strong> to check for <code>PRP</code>, <code>MD</code> and <code>NN</code> and then give me the desired text output?</p> <p>This is the code so far which is able to identify grammar of a text input:</p> <pre><code>import spacy sent=input("insert sentence: \n\n") nlp=spacy.load('en') doc=nlp(sent) for token in doc: print(token.text, token.tag_, token.dep_) </code></pre>
<p>If I understand correctly:</p> <pre><code>In [34]: chk_set = set(['PRP','MD','NN']) In [35]: chk_set.issubset(t.tag_ for t in nlp("I will go to the mall")) Out[35]: True In [36]: chk_set.issubset(t.tag_ for t in nlp("I will go")) Out[36]: False </code></pre> <p><strong>UPDATE:</strong></p> <blockquote> <p>how can I read the word marked as NN and print it out?</p> </blockquote> <pre><code>In [53]: [t.text for t in nlp("I will go to the mall") if t.tag_ in ['NN']] Out[53]: ['mall'] </code></pre>
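The subset check can also be wrapped in a small helper that operates on any sequence of tags, so the logic is testable without loading spaCy; the function names and the message wording below are illustrative, not part of the answer:

```python
def has_all_tags(tags, required=("PRP", "MD", "NN")):
    """True if every required POS tag occurs in `tags`
    (e.g. the sequence produced by [t.tag_ for t in nlp(sent)])."""
    return set(required).issubset(tags)

def describe(tags):
    if has_all_tags(tags):
        return "there is a prp, md and nn in the sentence"
    return "not all of prp, md and nn are present"

# Tags roughly as spaCy would produce for "I will go to the mall" / "I will go"
print(describe(["PRP", "MD", "VB", "IN", "DT", "NN"]))
print(describe(["PRP", "MD", "VB"]))
```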
962
spaCy
Applying Spacy Parser to Pandas DataFrame w/ Multiprocessing
https://stackoverflow.com/questions/44395656/applying-spacy-parser-to-pandas-dataframe-w-multiprocessing
<p>Say I have a dataset, like</p> <pre><code>iris = pd.DataFrame(sns.load_dataset('iris')) </code></pre> <p>I can use <code>Spacy</code> and <code>.apply</code> to parse a string column into tokens (my real dataset has >1 word/token per entry of course)</p> <pre><code>import spacy # (I have version 1.8.2) nlp = spacy.load('en') iris['species_parsed'] = iris['species'].apply(nlp) </code></pre> <p>result:</p> <pre><code> sepal_length ... species species_parsed 0 1.4 ... setosa (setosa) 1 1.4 ... setosa (setosa) 2 1.3 ... setosa (setosa) </code></pre> <p>I can also use this convenient multiprocessing function (<a href="http://www.racketracer.com/2016/07/06/pandas-in-parallel/" rel="noreferrer">thanks to this blogpost</a>) to do most arbitrary apply functions on a dataframe in parallel:</p> <pre><code>from multiprocessing import Pool, cpu_count def parallelize_dataframe(df, func, num_partitions): df_split = np.array_split(df, num_partitions) pool = Pool(num_partitions) df = pd.concat(pool.map(func, df_split)) pool.close() pool.join() return df </code></pre> <p>for example:</p> <pre><code>def my_func(df): df['length_of_word'] = df['species'].apply(lambda x: len(x)) return df num_cores = cpu_count() iris = parallelize_dataframe(iris, my_func, num_cores) </code></pre> <p>result:</p> <pre><code> sepal_length species length_of_word 0 5.1 setosa 6 1 4.9 setosa 6 2 4.7 setosa 6 </code></pre> <p>...But for some reason, I can't apply the Spacy parser to a dataframe using multiprocessing this way. </p> <pre><code>def add_parsed(df): df['species_parsed'] = df['species'].apply(nlp) return df iris = parallelize_dataframe(iris, add_parsed, num_cores) </code></pre> <p>result:</p> <pre><code> sepal_length species length_of_word species_parsed 0 5.1 setosa 6 () 1 4.9 setosa 6 () 2 4.7 setosa 6 () </code></pre> <p>Is there some other way to do this? 
I'm loving Spacy for NLP, but I have a lot of text data, so I'd like to parallelize some processing functions; however, I ran into this issue.</p>
<p>Spacy is highly optimised and does the multiprocessing for you. As a result, I think your best bet is to take the data out of the Dataframe and pass it to the Spacy pipeline as a list rather than trying to use <code>.apply</code> directly.</p> <p>You then need to collate the results of the parse and put them back into the Dataframe.</p> <p>So, in your example, you could use something like:</p> <pre><code>tokens = [] lemma = [] pos = [] for doc in nlp.pipe(df['species'].astype('unicode').values, batch_size=50, n_threads=3): if doc.is_parsed: tokens.append([n.text for n in doc]) lemma.append([n.lemma_ for n in doc]) pos.append([n.pos_ for n in doc]) else: # We want to make sure that the lists of parsed results have the # same number of entries as the original Dataframe, so add some blanks in case the parse fails tokens.append(None) lemma.append(None) pos.append(None) df['species_tokens'] = tokens df['species_lemma'] = lemma df['species_pos'] = pos </code></pre> <p>This approach will work fine on small datasets, but it eats up your memory, so it's not great if you want to process huge amounts of text.</p>
963
spaCy
How to fix spaCy en_model incompatible with current spaCy version (2.3.2)?
https://stackoverflow.com/questions/64035821/how-to-fix-spacy-en-model-incompatible-with-current-spacy-version-2-3-2
<p>When I am running my NER model I am getting:</p> <pre><code>UserWarning: [W031] Model 'en_model' (0.0.0) requires spaCy v2.2 and is incompatible with the current spaCy version (2.3.2) </code></pre> <p>Please advise how can I fix it?</p> <p>Python 3.7.9, spaCy 2.3.2, Ubuntu 18.04.</p>
<p>Solved by downgrading spaCy to 2.2.4.</p> <pre><code>pip3 install spacy==2.2.4 </code></pre>
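The W031 warning boils down to a major.minor mismatch between the spaCy version the model was built for and the installed library. A toy sketch of that check (real spaCy compares full version specifiers; this simplification and the function name are mine):

```python
def compatible(model_requires: str, installed: str) -> bool:
    """Toy check: do the major.minor components match?
    e.g. a model built for spaCy v2.2 vs. an installed spaCy 2.3.2."""
    return model_requires.split(".")[:2] == installed.split(".")[:2]

print(compatible("2.2", "2.3.2"))  # the W031 situation
print(compatible("2.2", "2.2.4"))  # after downgrading to 2.2.4
```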
964
spaCy
How to install spacy?
https://stackoverflow.com/questions/79260748/how-to-install-spacy
<p>I am trying to install the spacy library using 'pip install -U spacy' in the command prompt (run as admin) on Windows 11, but it shows an error I don't understand. I am using Python 3.13.0, gcc 13.2.0 and make 4.4.1. What could be the problem? Or is there any other way to install spacy?</p> <pre class="lang-none prettyprint-override"><code>C:\&gt;pip install -U spacy Collecting spacy Using cached spacy-3.8.2.tar.gz (1.3 MB) Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─&gt; [113 lines of output] Ignoring numpy: markers 'python_version &lt; &quot;3.9&quot;' don't match your environment Collecting setuptools Using cached setuptools-75.6.0-py3-none-any.whl.metadata (6.7 kB) Collecting cython&lt;3.0,&gt;=0.25 Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB) Collecting cymem&lt;2.1.0,&gt;=2.0.2 Using cached cymem-2.0.10-cp313-cp313-win_amd64.whl.metadata (8.6 kB) Collecting preshed&lt;3.1.0,&gt;=3.0.2 Using cached preshed-3.0.9.tar.gz (14 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting murmurhash&lt;1.1.0,&gt;=0.28.0 Using cached murmurhash-1.0.11-cp313-cp313-win_amd64.whl.metadata (2.0 kB) Collecting thinc&lt;8.4.0,&gt;=8.3.0 Using cached thinc-8.3.2.tar.gz (193 kB) Installing build dependencies: started Installing build dependencies: still running... Installing build dependencies: finished with status 'error' error: subprocess-exited-with-error pip subprocess to install build dependencies did not run successfully.
exit code: 1 [74 lines of output] Ignoring numpy: markers 'python_version &lt; &quot;3.9&quot;' don't match your environment Collecting setuptools Using cached setuptools-75.6.0-py3-none-any.whl.metadata (6.7 kB) Collecting cython&lt;3.0,&gt;=0.25 Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB) Collecting murmurhash&lt;1.1.0,&gt;=1.0.2 Using cached murmurhash-1.0.11-cp313-cp313-win_amd64.whl.metadata (2.0 kB) Collecting cymem&lt;2.1.0,&gt;=2.0.2 Using cached cymem-2.0.10-cp313-cp313-win_amd64.whl.metadata (8.6 kB) Collecting preshed&lt;3.1.0,&gt;=3.0.2 Using cached preshed-3.0.9.tar.gz (14 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting blis&lt;1.1.0,&gt;=1.0.0 Using cached blis-1.0.1.tar.gz (3.6 MB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting numpy&lt;2.1.0,&gt;=2.0.0 Using cached numpy-2.0.2.tar.gz (18.9 MB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Installing backend dependencies: started Installing backend dependencies: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'error' error: subprocess-exited-with-error Preparing metadata (pyproject.toml) did not run successfully. 
exit code: 1 [22 lines of output] + C:\Users\rohan\AppData\Local\Programs\Python\Python313\python.exe C:\Users\rohan\AppData\Local\Temp\pip-install-s6zj7q4q\numpy_fe36df85b8944a7fb67f6135b78a4bde\vendored-meson\meson\meson.py setup C:\Users\rohan\AppData\Local\Temp\pip-install-s6zj7q4q\numpy_fe36df85b8944a7fb67f6135b78a4bde C:\Users\rohan\AppData\Local\Temp\pip-install-s6zj7q4q\numpy_fe36df85b8944a7fb67f6135b78a4bde\.mesonpy-c4lb8p4h -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\rohan\AppData\Local\Temp\pip-install-s6zj7q4q\numpy_fe36df85b8944a7fb67f6135b78a4bde\.mesonpy-c4lb8p4h\meson-python-native-file.ini The Meson build system Version: 1.4.99 Source dir: C:\Users\rohan\AppData\Local\Temp\pip-install-s6zj7q4q\numpy_fe36df85b8944a7fb67f6135b78a4bde Build dir: C:\Users\rohan\AppData\Local\Temp\pip-install-s6zj7q4q\numpy_fe36df85b8944a7fb67f6135b78a4bde\.mesonpy-c4lb8p4h Build type: native build Project name: NumPy Project version: 2.0.2 C compiler for the host machine: gcc (gcc 13.2.0 &quot;gcc (GCC) 13.2.0&quot;) C linker for the host machine: gcc ld.bfd 2.41 C++ compiler for the host machine: c++ (gcc 6.3.0 &quot;c++ (MinGW.org GCC-6.3.0-1) 6.3.0&quot;) C++ linker for the host machine: c++ ld.bfd 2.28 Cython compiler for the host machine: cython (cython 3.0.11) Host machine cpu family: x86 Host machine cpu: x86 Program python found: YES (C:\Users\rohan\AppData\Local\Programs\Python\Python313\python.exe) Need python for x86, but found x86_64 Run-time dependency python found: NO (tried sysconfig) ..\meson.build:41:12: ERROR: Python dependency not found A full log can be found at C:\Users\rohan\AppData\Local\Temp\pip-install-s6zj7q4q\numpy_fe36df85b8944a7fb67f6135b78a4bde\.mesonpy-c4lb8p4h\meson-logs\meson-log.txt [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Encountered error while generating package metadata. See above for output. 
note: This is an issue with the package mentioned above, not pip. hint: See above for details. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error pip subprocess to install build dependencies did not run successfully. exit code: 1 See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre>
<p>I raised a git issue a couple of months back:</p> <p><strong><a href="https://github.com/explosion/spaCy/issues/13658#issuecomment-2532080140" rel="nofollow noreferrer">https://github.com/explosion/spaCy/issues/13658#issuecomment-2532080140</a></strong></p> <p><em>They are updating the build system across their packages to use the latest version of cibuildwheel.</em></p> <p><code>python 3.13</code> has not been fully added to their build system yet.</p> <p>The team is working very hard to make it available on <code>python 3.13</code>.</p> <p><strong>Please use <code>python 3.12</code> for spacy for now.</strong></p>
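Until wheels for newer interpreters are published, a small guard in your own setup code can make the constraint explicit. The version bounds below are assumptions based on the situation described in this answer and will change over time:

```python
import sys

# Assumed supported interpreter range for spaCy at the time of this answer;
# adjust as new wheels are released.
MIN_SUPPORTED = (3, 7)   # inclusive (an assumption)
MAX_SUPPORTED = (3, 13)  # exclusive: 3.13 wheels not yet available

def spacy_supported(version=sys.version_info):
    return MIN_SUPPORTED <= tuple(version[:2]) < MAX_SUPPORTED

if not spacy_supported():
    print("spaCy wheels may not exist for this Python yet; use Python 3.12 for now.")
```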
965
spaCy
trouble to install spaCy 3.0 on GoogleColab
https://stackoverflow.com/questions/68494390/trouble-to-install-spacy-3-0-on-googlecolab
<h2 id="background-hnaq">Background</h2> <p>I'm working in the following local environment and need to use spaCy 3.0 on Google Colab.</p> <p>However, it doesn't seem possible to install spaCy 3.0 or later, perhaps because of the Python version.</p> <pre><code>spaCy version 3.0.5 Python version 3.9.1 </code></pre> <h2 id="error-6jcp">Error</h2> <p>How can I use spaCy 3.0 or later on Google Colab, or is it impossible for now?</p> <pre><code>Requirement already satisfied: spacy in /usr/local/lib/python3.7/dist-packages (2.2.4) ERROR: Could not find a version that satisfies the requirement 3.0.5 (from versions: none) ERROR: No matching distribution found for 3.0.5 </code></pre> <h2 id="environment-b02a">Environment</h2> <p>Google Colab</p> <pre><code>!pip install spacy !python -m spacy info ============================== Info about spaCy ============================== spaCy version 2.2.4 Python version 3.7.11 </code></pre>
<p>You can use <code>pip</code> to install spaCy, and specify version 3.1.1</p> <pre><code>!pip install spacy==3.1.1 </code></pre> <p>Then you can check the result.</p> <pre><code>import spacy print(spacy.__version__) # 3.1.1 </code></pre>
966
spaCy
spacy installation error (running cythonize failed)
https://stackoverflow.com/questions/41972099/spacy-installation-error-running-cythonize-failed
<p>So I was compiling spacy from source. I have cython v23.4 installed. When I run <code>pip install -e .</code> in the <code>spaCy</code> directory, I get this:</p> <pre><code>Obtaining file:///home/nitish/spaCy Complete output from command python setup.py egg_info: Error compiling Cython file: ------------------------------------------------------------ ... from cymem.cymem cimport Pool from thinc.typedefs cimport weight_t ^ ------------------------------------------------------------ syntax/transition_system.pxd:2:0: 'thinc/typedefs.pxd' not found Processing gold.pyx Traceback (most recent call last): File "/home/nitish/spaCy/bin/cythonize.py", line 156, in &lt;module&gt; run(args.root) File "/home/nitish/spaCy/bin/cythonize.py", line 147, in run process(base, filename, db) File "/home/nitish/spaCy/bin/cythonize.py", line 113, in process preserve_cwd(base, process_pyx, root + '.pyx', root + '.cpp') File "/home/nitish/spaCy/bin/cythonize.py", line 78, in preserve_cwd func(*args) File "/home/nitish/spaCy/bin/cythonize.py", line 60, in process_pyx raise Exception('Cython failed') Exception: Cython failed Cythonizing sources Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "/home/nitish/spaCy/setup.py", line 265, in &lt;module&gt; setup_package() File "/home/nitish/spaCy/setup.py", line 216, in setup_package generate_cython(root, 'spacy') File "/home/nitish/spaCy/setup.py", line 148, in generate_cython raise RuntimeError('Running cythonize failed') RuntimeError: Running cythonize failed </code></pre> <p>How can I correct this?</p>
<p>As you are installing from the local repository, you should install the requirements first:</p> <pre><code>pip install -r requirements.txt </code></pre> <p>from the spaCy repository.</p> <p>Do you have a specific reason for using the <code>-e</code> option? If not, I recommend using <code>--user</code> instead.</p>
967
spaCy
Install SpaCy in a Jupyter Notebook
https://stackoverflow.com/questions/64268889/install-spacy-in-a-jupyter-notebook
<p>I try to install SpaCy for lemmatization, but it won't work...</p> <p>First I install spacy:</p> <pre><code>pip install -U spacy </code></pre> <p>Which leads to these results:</p> <pre><code>Requirement already satisfied, skipping upgrade: murmurhash&lt;1.1.0,&gt;=0.28.0 in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (1.0.2) Requirement already satisfied, skipping upgrade: srsly&lt;1.1.0,&gt;=1.0.2 in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (1.0.2) Requirement already satisfied, skipping upgrade: thinc==7.4.1 in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (7.4.1) Requirement already satisfied, skipping upgrade: cymem&lt;2.1.0,&gt;=2.0.2 in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (2.0.3) Requirement already satisfied, skipping upgrade: wasabi&lt;1.1.0,&gt;=0.4.0 in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (0.8.0) Requirement already satisfied, skipping upgrade: catalogue&lt;1.1.0,&gt;=0.0.7 in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (1.0.0) Requirement already satisfied, skipping upgrade: requests&lt;3.0.0,&gt;=2.13.0 in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (2.23.0) Requirement already satisfied, skipping upgrade: tqdm&lt;5.0.0,&gt;=4.38.0 in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (4.50.2) Requirement already satisfied, skipping upgrade: blis&lt;0.5.0,&gt;=0.4.0 in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (0.4.1) Requirement already satisfied, skipping upgrade: numpy&gt;=1.15.0 in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (1.18.1) Requirement already satisfied, skipping upgrade: setuptools in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (46.1.3.post20200330) Requirement already satisfied, skipping upgrade: plac&lt;1.2.0,&gt;=0.9.6 in
c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (1.1.3) Requirement already satisfied, skipping upgrade: preshed&lt;3.1.0,&gt;=3.0.2 in c:\users\danis\.conda\envs\python36\lib\site-packages (from spacy) (3.0.2) Requirement already satisfied, skipping upgrade: importlib-metadata&gt;=0.20; python_version &lt; &quot;3.8&quot; in c:\users\danis\.conda\envs\python36\lib\site-packages (from catalogue&lt;1.1.0,&gt;=0.0.7-&gt;spacy) (2.0.0) Requirement already satisfied, skipping upgrade: idna&lt;3,&gt;=2.5 in c:\users\danis\.conda\envs\python36\lib\site-packages (from requests&lt;3.0.0,&gt;=2.13.0-&gt;spacy) (2.9) Requirement already satisfied, skipping upgrade: certifi&gt;=2017.4.17 in c:\users\danis\.conda\envs\python36\lib\site-packages (from requests&lt;3.0.0,&gt;=2.13.0-&gt;spacy) (2020.6.20) Requirement already satisfied, skipping upgrade: chardet&lt;4,&gt;=3.0.2 in c:\users\danis\.conda\envs\python36\lib\site-packages (from requests&lt;3.0.0,&gt;=2.13.0-&gt;spacy) (3.0.4) Requirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,&lt;1.26,&gt;=1.21.1 in c:\users\danis\.conda\envs\python36\lib\site-packages (from requests&lt;3.0.0,&gt;=2.13.0-&gt;spacy) (1.25.8) Requirement already satisfied, skipping upgrade: zipp&gt;=0.5 in c:\users\danis\.conda\envs\python36\lib\site-packages (from importlib-metadata&gt;=0.20; python_version &lt; &quot;3.8&quot;-&gt;catalogue&lt;1.1.0,&gt;=0.0.7-&gt;spacy) (2.2.0) </code></pre> <p>Then I try to import spacy:</p> <pre><code>import spacy </code></pre> <p>which works fine, no error occur. 
And then I want to load a data package:</p> <pre><code>nlp = spacy.load('de_core_news_sm') </code></pre> <p>Here the error occurs:</p> <pre><code>---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
&lt;ipython-input-10-0fb7762ce64d&gt; in &lt;module&gt;
----&gt; 1 nlp = spacy.load('de_core_news_sm')

~\.conda\envs\python36\lib\site-packages\spacy\__init__.py in load(name, **overrides)
     28     if depr_path not in (True, False, None):
     29         warnings.warn(Warnings.W001.format(path=depr_path), DeprecationWarning)
---&gt; 30     return util.load_model(name, **overrides)
     31
     32

~\.conda\envs\python36\lib\site-packages\spacy\util.py in load_model(name, **overrides)
    173     elif hasattr(name, &quot;exists&quot;):  # Path or Path-like to model data
    174         return load_model_from_path(name, **overrides)
--&gt; 175     raise IOError(Errors.E050.format(name=name))
    176
    177

OSError: [E050] Can't find model 'de_core_news_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory. </code></pre>
<p>In order to load the model you need to download it first if you are doing it on your local machine (not on Google Colab). So after</p> <pre><code>pip install -U spacy </code></pre> <p>you need to download it using</p> <pre><code>python -m spacy download de_core_news_sm </code></pre> <p>Then,</p> <pre><code>nlp = spacy.load('de_core_news_sm') </code></pre> <hr /> <h3>Google Colaboratory</h3> <p>In case of trying it in <strong>Google Colab</strong>,</p> <pre><code>pip install -U spacy

import spacy.cli
spacy.cli.download(&quot;de_core_news_sm&quot;)
</code></pre> <hr /> <p>You can also create a virtual environment and use spaCy inside it after activating it.</p> <pre><code>python3 -m venv &lt;name_of_virtualenv&gt;

# to activate
source /path/to/&lt;name_of_virtualenv&gt;/bin/activate

# then use the above commands
pip3 install -U spacy
python3 -m spacy download de_core_news_sm
</code></pre>
968
spaCy
SpaCy lemmatizer removes capitalization
https://stackoverflow.com/questions/63693463/spacy-lemmatizer-removes-capitalization
<p>I would like to lemmatize some textual data in Hungarian language and encountered a strange feature in <code>spaCy</code>. The <code>token.lemma_</code> function works well in terms of lemmatization, however, it returns some of the sentences without first letter capitalization. This is quite annoying, as my next function, <code>unnest_stences</code> (R) requires first capital letters in order to identify and break the text down into individual sentences. </p> <p>First I thought the problem was that I used the latest version of spaCy since I had gotten a warning that</p> <blockquote> <p>UserWarning: [W031] Model 'hu_core_ud_lg' (0.3.1) requires spaCy v2.1 and is incompatible with the current spaCy version (2.3.2). This may lead to unexpected results or runtime errors. To resolve this, download a newer compatible model or retrain your custom model with the current spaCy version.</p> </blockquote> <p>So I went ahead and installed spacy 2.1, but the problem still persists. </p> <p>The source of my data are some email messages I cannot share here, but here is a small, artificial example:</p> <pre><code># pip install -U spacy==2.1 # takes 9 mins # pip install hu_core_ud_lg # takes 50 mins import spacy from spacy.lemmatizer import Lemmatizer import hu_core_ud_lg import pandas as pd nlp = hu_core_ud_lg.load() a = &quot;Tisztelt levélíró!&quot; b = &quot;Köszönettel vettük megkeresését.&quot; df = pd.DataFrame({'text':[a, b]}) output_lemma = [] for i in df.text: mondat = &quot;&quot; doc = nlp(i) for token in doc: mondat = mondat + &quot; &quot; + token.lemma_ output_lemma.append(mondat) output_lemma </code></pre> <p>which yields</p> <pre><code>[' tisztelt levélíró !', ' köszönet vesz megkeresés .'] </code></pre> <p>but I would expect</p> <pre><code>[' Tisztelt levélíró !', ' Köszönet vesz megkeresés .'] </code></pre> <p>When I pass my original data to the function, it returns some sentences with upercase first letters, others with lowercase letters. 
For some strange reason I couldn't reproduce that pattern above, but I guess the main point is visible. The function does not work as expected.</p> <p>Any ideas how I could fix this?</p> <p>I'm using Jupyter Notebook, Python 2.7, Win 7 and a Toshiba laptop (Portégé Z830-10R i3-2367M).</p>
<p>Lowercasing is the expected behavior of spaCy's lemmatizer for non-proper-noun tokens.</p> <p>One workaround is to check if each token is titlecased, and convert to original casing after lemmatizing (only applies to the first character).</p> <pre><code>import spacy nlp = spacy.load('en_core_web_sm') text = 'This is a test sentence.' doc = nlp(text) newtext = ' '.join([tok.lemma_.title() if tok.is_title else tok.lemma_ for tok in doc]) print(newtext) # This be a test sentence . </code></pre>
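The title-case restoration in the answer above can be sketched without spaCy: Python's `str.istitle()`/`str.title()` stand in for `token.is_title` and the lemma handling, and the Hungarian token/lemma pairs are taken from the question's expected output.

```python
def restore_title_case(tokens, lemmas):
    # if the original token was title-cased, title-case its (lowercased) lemma too
    return [lemma.title() if token.istitle() else lemma
            for token, lemma in zip(tokens, lemmas)]

# token/lemma pairs from the question's second example sentence
tokens = ["Köszönettel", "vettük", "megkeresését", "."]
lemmas = ["köszönet", "vesz", "megkeresés", "."]
print(restore_title_case(tokens, lemmas))
# ['Köszönet', 'vesz', 'megkeresés', '.']
```

This only restores the capitalization of the first character of title-cased tokens, which is exactly what a sentence-boundary detector like `unnest_stences` needs.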
969
spaCy
What versions of spaCy suport en_vectors_web_lg?
https://stackoverflow.com/questions/67361527/what-versions-of-spacy-suport-en-vectors-web-lg
<p>I am trying to download en_vectors_web_lg, but keep getting the below error:</p> <pre><code>ERROR: Could not install requirement en-vectors-web-lg==3.0.0 from https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-3.0.0/en_vectors_web_lg-3.0.0-py3-none-any.whl#egg=en_vectors_web_lg==3.0.0 because of HTTP error 404 Client Error: Not Found for url: https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-3.0.0/en_vectors_web_lg-3.0.0-py3-none-any.whl for URL https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-3.0.0/en_vectors_web_lg-3.0.0-py3-none-any.whl#egg=en_vectors_web_lg==3.0.0 </code></pre> <p>Is spacy still supporting en_vectors_web_lg?</p> <p>I also just updated my spacy to the latest version</p>
<p>The naming conventions changed in v3 and the equivalent model is <code>en_core_web_lg</code>. It includes vectors and you can install it like this:</p> <pre><code>spacy download en_core_web_lg </code></pre> <p>I would not recommend downgrading to use the old vectors model unless you need to run old code.</p> <p>If you are concerned about accuracy and have a decent GPU the transformers model, <code>en_core_web_trf</code>, is also worth considering, though it doesn't include word vectors.</p>
970
spaCy
Evaluation in a Spacy NER model
https://stackoverflow.com/questions/44827930/evaluation-in-a-spacy-ner-model
<p>I am trying to evaluate a trained NER Model created using <a href="https://spacy.io/docs/usage/training-ner" rel="noreferrer">spacy lib</a>. Normally for these kind of problems you can use f1 score (a ratio between precision and recall). I could not find in the documentation an accuracy function for a trained NER model. </p> <p>I am not sure if it's correct but I am trying to do it with the following way(example) and using <code>f1_score</code> from <code>sklearn</code>:</p> <pre><code>from sklearn.metrics import f1_score import spacy from spacy.gold import GoldParse nlp = spacy.load("en") #load NER model test_text = "my name is John" # text to test accuracy doc_to_test = nlp(test_text) # transform the text to spacy doc format # we create a golden doc where we know the tagged entity for the text to be tested doc_gold_text= nlp.make_doc(test_text) entity_offsets_of_gold_text = [(11, 15,"PERSON")] gold = GoldParse(doc_gold_text, entities=entity_offsets_of_gold_text) # bring the data in a format acceptable for sklearn f1 function y_true = ["PERSON" if "PERSON" in x else 'O' for x in gold.ner] y_predicted = [x.ent_type_ if x.ent_type_ !='' else 'O' for x in doc_to_test] f1_score(y_true, y_predicted, average='macro')`[1] &gt; 1.0 </code></pre> <p>Any thoughts are or insights are useful. </p>
<p>You can find different metrics including F-score, recall and precision in <a href="https://github.com/explosion/spaCy/blob/master/spacy/scorer.py" rel="noreferrer">spaCy/scorer.py</a>.</p> <p>This example shows how you can use it:</p> <pre><code>import spacy
from spacy.gold import GoldParse
from spacy.scorer import Scorer

def evaluate(ner_model, examples):
    scorer = Scorer()
    for input_, annot in examples:
        doc_gold_text = ner_model.make_doc(input_)
        gold = GoldParse(doc_gold_text, entities=annot)
        pred_value = ner_model(input_)
        scorer.score(pred_value, gold)
    return scorer.scores

# example run
examples = [
    ('Who is Shaka Khan?', [(7, 17, 'PERSON')]),
    ('I like London and Berlin.', [(7, 13, 'LOC'), (18, 24, 'LOC')])
]
ner_model = spacy.load(ner_model_path)  # for spaCy's pretrained use 'en_core_web_sm'
results = evaluate(ner_model, examples)
</code></pre> <p>The <code>scorer.scores</code> returns multiple scores. When running the example, the result looks like this: (Note the low scores occurring because the examples classify London and Berlin as 'LOC' while the model classifies them as 'GPE'.
You can figure this out by looking at the <code>ents_per_type</code>.)</p> <pre><code>{'uas': 0.0, 'las': 0.0, 'las_per_type': {'attr': {'p': 0.0, 'r': 0.0, 'f': 0.0}, 'root': {'p': 0.0, 'r': 0.0, 'f': 0.0}, 'compound': {'p': 0.0, 'r': 0.0, 'f': 0.0}, 'nsubj': {'p': 0.0, 'r': 0.0, 'f': 0.0}, 'dobj': {'p': 0.0, 'r': 0.0, 'f': 0.0}, 'cc': {'p': 0.0, 'r': 0.0, 'f': 0.0}, 'conj': {'p': 0.0, 'r': 0.0, 'f': 0.0}}, 'ents_p': 33.33333333333333, 'ents_r': 33.33333333333333, 'ents_f': 33.33333333333333, 'ents_per_type': {'PERSON': {'p': 100.0, 'r': 100.0, 'f': 100.0}, 'LOC': {'p': 0.0, 'r': 0.0, 'f': 0.0}, 'GPE': {'p': 0.0, 'r': 0.0, 'f': 0.0}}, 'tags_acc': 0.0, 'token_acc': 100.0, 'textcat_score': 0.0, 'textcats_per_cat': {}} </code></pre> <p>The example is taken from a <a href="https://github.com/explosion/spaCy/blob/master/examples/training/train_ner_standalone.py#L159" rel="noreferrer">spaCy example on github</a> (link does not work anymore). It was last tested with spacy 2.2.4.</p>
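The entity-level precision/recall/F-score above can be reproduced by hand: treat the gold and predicted (start, end, label) spans as sets and count exact matches. This is a simplified standalone sketch, not spaCy's actual `Scorer`:

```python
def ents_prf(gold_spans, pred_spans):
    # exact-match precision/recall/F1 over (start, end, label) entity spans
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return 100 * p, 100 * r, 100 * f

# gold spans from the examples vs. what the model predicts (GPE instead of LOC)
gold = [(7, 17, 'PERSON'), (7, 13, 'LOC'), (18, 24, 'LOC')]
pred = [(7, 17, 'PERSON'), (7, 13, 'GPE'), (18, 24, 'GPE')]
print(ents_prf(gold, pred))  # each value is 100/3 ≈ 33.33, matching ents_p/r/f above
```

Only the PERSON span matches exactly, so 1 of 3 predictions and 1 of 3 gold entities are correct, giving the 33.33% scores reported in the answer.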
971
spaCy
Spacy incorrectly identifying pronouns
https://stackoverflow.com/questions/75141938/spacy-incorrectly-identifying-pronouns
<p>When I try this code using Spacy, I get the desired result:</p> <pre><code>import spacy nlp = spacy.load(&quot;en_core_web_sm&quot;) # example 1 test = &quot;All my stuff is at to MyBOQ&quot; doc = nlp(test) for word in doc: if word.pos_ == 'PRON': print(word.text) </code></pre> <p>The output shows <code>All</code> and <code>my</code>. However, if I add a question mark:</p> <pre><code>test = &quot;All my stuff is at to MyBOQ?&quot; doc = nlp(test) for word in doc: if word.pos_ == 'PRON': print(word.text) </code></pre> <p>now it also identifies <code>MyBOQ</code> as a pronoun. It should be classified as an organization name (<code>word.pos_ == 'ORG'</code>) instead.</p> <p>How do I tell Spacy not to classify MyBOQ as a pronoun? Should I just remove all punctuation before checking for pronouns?</p>
<p>When running your code on my machine (Windows 11 64-bit, Python 3.10.9, spaCy 3.4.4), spaCy produces the following results for the text with and without the question mark:</p> <pre class="lang-none prettyprint-override"><code>                              en_core_web_sm  en_core_web_md  en_core_web_trf
All my stuff is at to MyBOQ?  All, my         my              my
All my stuff is at to MyBOQ   All, my         my              my
</code></pre> <p>In this example, the word &quot;All&quot; is not a pronoun but rather a determiner, so only the <code>en_core_web_md</code> and <code>en_core_web_trf</code> pipelines are producing technically correct results. If you're running an old version of spaCy I'd suggest updating the package. Alternatively, if spaCy is up-to-date, try restarting your IDE/computer to see if it stops producing erroneous results; there should be no need to remove punctuation before checking for pronouns.</p> <p>Finally, <a href="https://en.wikipedia.org/wiki/Part_of_speech#Types_of_Part_of_Speech" rel="nofollow noreferrer">Part of Speech</a> (PoS) tags do not include organisation names (<code>ORG</code>). I think you're mixing <a href="https://spacy.io/usage/spacy-101#annotations-ner" rel="nofollow noreferrer">Named Entity</a> tags with PoS tags. &quot;MyBOQ&quot; should be PoS tagged as a proper noun (<code>PROPN</code>) which the <code>en_core_web_md</code> and <code>en_core_web_trf</code> pipelines identify correctly, whereas <code>en_core_web_sm</code> pipeline does not (instead tagging it as a basic <code>NOUN</code>).</p>
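The PRON filter itself can be illustrated on plain (token, tag) pairs; the tags below are the ones the `en_core_web_md`/`en_core_web_trf` pipelines are reported to assign in the answer above (the AUX/ADP/PUNCT tags for the remaining tokens are assumed for illustration):

```python
# (token, coarse PoS tag) pairs for "All my stuff is at to MyBOQ?"
tagged = [("All", "DET"), ("my", "PRON"), ("stuff", "NOUN"), ("is", "AUX"),
          ("at", "ADP"), ("to", "ADP"), ("MyBOQ", "PROPN"), ("?", "PUNCT")]

# the same filter the question applies to token.pos_
pronouns = [word for word, pos in tagged if pos == "PRON"]
print(pronouns)  # ['my']
```

With "MyBOQ" correctly tagged `PROPN`, the filter returns only the genuine pronoun and no punctuation stripping is needed.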
972
spaCy
Spacy SpanGroup
https://stackoverflow.com/questions/68537075/spacy-spangroup
<p>I am trying to use spaCy's SpanGroup class, but the following commands both give a ModuleNotFoundError:</p> <p><code>from spacy.tokens.span_group import SpanGroup</code> and <code>from spacy.tokens import SpanGroup</code></p> <p>I tried looking at the spaCy documentation, but their command wasn't working. Does someone know the correct command to import SpanGroup?</p>
<p>The code in <a href="https://spacy.io/api/spangroup#_title" rel="nofollow noreferrer">the docs</a> works for me.</p> <pre><code>from spacy.tokens import SpanGroup </code></pre> <p>Does that give you an error?</p>
973
spaCy
Spacy Permission Error 13
https://stackoverflow.com/questions/48165486/spacy-permission-error-13
<p>I am getting Permission error 13 when trying to save a trained model in spacy. I have tried changing the directory as well. I am trying to reproduce this example given <a href="https://spacy.io/usage/training#example-new-entity-typetrain_new_entity_type.py" rel="nofollow noreferrer">here</a>, to train custom entities in spacy`s named entity recognizer.</p> <pre><code>import random TRAIN_DATA = [ ("Uber blew through $1 million a week", {'entities': [(0, 4, 'ORG')]}), ("Google rebrands its business apps", {'entities': [(0, 6, "ORG")]}) ] nlp = spacy.blank('en') optimizer = nlp.begin_training() for i in range(20): random.shuffle(TRAIN_DATA) for text, annotations in TRAIN_DATA: nlp.update([text], [annotations], sgd=optimizer) nlp.to_disk('/model') </code></pre> <p>Here is the error I am getting</p> <pre><code>PermissionError Traceback (most recent call last) &lt;ipython-input-5-115363841730&gt; in &lt;module&gt;() 14 for text, annotations in TRAIN_DATA: 15 nlp.update([text], [annotations], sgd=optimizer) ---&gt; 16 nlp.to_disk('/model') ~/anaconda2/envs/py35/lib/python3.5/site-packages/spacy/language.py in to_disk(self, path, disable) 596 serializers[name] = lambda p, proc=proc: proc.to_disk(p, vocab=False) 597 serializers['vocab'] = lambda p: self.vocab.to_disk(p) --&gt; 598 util.to_disk(path, serializers, {p: False for p in disable}) 599 600 def from_disk(self, path, disable=tuple()): ~/anaconda2/envs/py35/lib/python3.5/site-packages/spacy/util.py in to_disk(path, writers, exclude) 508 path = ensure_path(path) 509 if not path.exists(): --&gt; 510 path.mkdir() 511 for key, writer in writers.items(): 512 if key not in exclude: ~/anaconda2/envs/py35/lib/python3.5/pathlib.py in mkdir(self, mode, parents, exist_ok) 1214 self._raise_closed() 1215 try: -&gt; 1216 self._accessor.mkdir(self, mode) 1217 except FileNotFoundError: 1218 if not parents or self.parent == self: ~/anaconda2/envs/py35/lib/python3.5/pathlib.py in wrapped(pathobj, *args) 369 @functools.wraps(strfunc) 
370 def wrapped(pathobj, *args): --&gt; 371 return strfunc(str(pathobj), *args) 372 return staticmethod(wrapped) 373 PermissionError: [Errno 13] Permission denied: '/model' </code></pre>
<p>I think the problem is that the path you use, <code>/model</code>, is treated as an absolute path, so either a <code>/model</code> directory writable by the user must already exist, or you can try a relative path such as <code>./model</code>.</p>
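The absolute-vs-relative distinction can be checked with `pathlib` before calling `to_disk` (using `PurePosixPath` here so the behaviour is the same on any OS; the paths are just examples):

```python
from pathlib import PurePosixPath

# a leading slash means "from the filesystem root", which usually
# requires elevated permissions to create
print(PurePosixPath("/model").is_absolute())   # True
print(PurePosixPath("./model").is_absolute())  # False
```

A relative path like `./model` is resolved against the current working directory, where the user normally has write permission, which is why it avoids the `PermissionError`.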
974
spaCy
Spacy 3.0.1 Accuracy prediction
https://stackoverflow.com/questions/66637485/spacy-3-0-1-accuracy-prediction
<p>How to test accuracy of a spacy pretrained model in version 3.0.1. I want to see my output how accurate my tested model is predicted.This the code below for spacy version 2 but it doesn't work in spacy version 3.can somone tell me the code on spacy version 3.</p> <pre><code> from spacy.gold import GoldParse from spacy.scorer import Scorer def evaluate(nlp, examples, ent='PERSON'): scorer = Scorer() for input_, annot in examples: text_entities = [] for entity in annot.get('entities'): if ent in entity: text_entities.append(entity) doc_gold_text = nlp.make_doc(input_) gold = GoldParse(doc_gold_text, entities=text_entities) pred_value = nlp(input_) scorer.score(pred_value, gold) return scorer.scores examples = [ (&quot;Trump says he's answered Mueller's Russia inquiry questions \u2013 live&quot;,{&quot;entities&quot;:[[0,5,&quot;PERSON&quot;],[25,32,&quot;PERSON&quot;],[35,41,&quot;GPE&quot;]]}), (&quot;Alexander Zverev reaches ATP Finals semis then reminds Lendl who is boss&quot;,{&quot;entities&quot;:[[0,16,&quot;PERSON&quot;],[55,60,&quot;PERSON&quot;]]}), (&quot;Britain's worst landlord to take nine years to pay off string of fines&quot;,{&quot;entities&quot;:[[0,7,&quot;GPE&quot;]]}), (&quot;Tom Watson: people's vote more likely given weakness of May's position&quot;,{&quot;entities&quot;:[[0,10,&quot;PERSON&quot;],[56,59,&quot;PERSON&quot;]]}), ] nlp = spacy.load('en_core_web_sm') results = evaluate(nlp, examples) print(results) </code></pre>
<p>Personally, I used this method and I hope it will help you in your work. In your case, I think:</p> <pre><code>import spacy
from spacy.training import Example

# get test data
test_data = [
    (&quot;Trump says he's answered Mueller's Russia inquiry questions \u2013 live&quot;, {&quot;entities&quot;: [[0, 5, &quot;PERSON&quot;], [25, 32, &quot;PERSON&quot;], [35, 41, &quot;GPE&quot;]]}),
    (&quot;Alexander Zverev reaches ATP Finals semis then reminds Lendl who is boss&quot;, {&quot;entities&quot;: [[0, 16, &quot;PERSON&quot;], [55, 60, &quot;PERSON&quot;]]}),
    (&quot;Britain's worst landlord to take nine years to pay off string of fines&quot;, {&quot;entities&quot;: [[0, 7, &quot;GPE&quot;]]}),
    (&quot;Tom Watson: people's vote more likely given weakness of May's position&quot;, {&quot;entities&quot;: [[0, 10, &quot;PERSON&quot;], [56, 59, &quot;PERSON&quot;]]}),
]

# format the test data in order to adapt it to the new version 3 of spaCy
# get nlp object
nlp = spacy.load('en_core_web_sm')
new_test_data = []
for text, annots in test_data:
    new_test_data.append(Example.from_dict(nlp.make_doc(text), annots))
# end of formatting test data

# begin evaluation, using the evaluate() method
scores_model = nlp.evaluate(new_test_data)

# print the scores that you want
# precision_model = scores_model[&quot;ents_p&quot;]
# recall_model = scores_model[&quot;ents_r&quot;]
# f_score_model = scores_model[&quot;ents_f&quot;]
# scores_entities = scores_model[&quot;ents_per_type&quot;]
</code></pre>
975
spaCy
Custom sentence segmentation using Spacy
https://stackoverflow.com/questions/52205475/custom-sentence-segmentation-using-spacy
<p>I am new to Spacy and NLP. I'm facing the below issue while doing sentence segmentation using Spacy.</p> <p>The text I am trying to tokenise into sentences contains numbered lists (with space between numbering and actual text), like below.</p> <pre class="lang-py prettyprint-override"><code>import spacy

nlp = spacy.load('en_core_web_sm')
text = &quot;This is first sentence.\nNext is numbered list.\n1. Hello World!\n2. Hello World2!\n3. Hello World!&quot;
text_sentences = nlp(text)
for sentence in text_sentences.sents:
    print(sentence.text)
</code></pre> <p>Output (1.,2.,3. are considered as separate lines) is:</p> <pre class="lang-sh prettyprint-override"><code>This is first sentence.
Next is numbered list.
1.
Hello World!
2.
Hello World2!
3.
Hello World!
</code></pre> <p>But if there is no space between numbering and actual text, then sentence tokenisation is fine. Like below:</p> <pre class="lang-py prettyprint-override"><code>import spacy

nlp = spacy.load('en_core_web_sm')
text = &quot;This is first sentence.\nNext is numbered list.\n1.Hello World!\n2.Hello World2!\n3.Hello World!&quot;
text_sentences = nlp(text)
for sentence in text_sentences.sents:
    print(sentence.text)
</code></pre> <p>Output(desired) is:</p> <pre class="lang-sh prettyprint-override"><code>This is first sentence.
Next is numbered list.
1.Hello World!
2.Hello World2!
3.Hello World!
</code></pre> <p>Please suggest whether we can customise sentence detector to do this.</p>
<p>When you use a pretrained model with spaCy, the sentences get split based on the training data that was provided during the training procedure of the model.</p> <p>Of course, there are cases like yours, where somebody may want to use custom sentence segmentation logic. This is possible by adding a component to the spaCy pipeline.</p> <p>For your case, you can add a rule that prevents sentence splitting when there is a {number}. pattern. </p> <p>A workaround for your problem:</p> <pre class="lang-py prettyprint-override"><code>import spacy
import re

nlp = spacy.load('en')
boundary = re.compile('^[0-9]$')

def custom_seg(doc):
    prev = doc[0].text
    length = len(doc)
    for index, token in enumerate(doc):
        if (token.text == '.' and boundary.match(prev) and index != (length - 1)):
            doc[index + 1].sent_start = False
        prev = token.text
    return doc

nlp.add_pipe(custom_seg, before='parser')
text = u'This is first sentence.\nNext is numbered list.\n1. Hello World!\n2. Hello World2!\n3. Hello World!'
doc = nlp(text)
for sentence in doc.sents:
    print(sentence.text)
</code></pre> <p>Hope it helps!</p>
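The boundary check at the heart of `custom_seg` can be tested on its own; this sketch also extends the regex to multi-digit numbers, which the single-digit `'^[0-9]$'` in the answer would miss:

```python
import re

boundary = re.compile(r'^[0-9]+$')

def keeps_sentence_joined(prev_token, token):
    # True when the "." follows a bare number, i.e. a "1." style list marker,
    # so the next token should NOT start a new sentence
    return token == '.' and bool(boundary.match(prev_token))

print(keeps_sentence_joined("1", "."))     # True
print(keeps_sentence_joined("12", "."))    # True
print(keeps_sentence_joined("list", "."))  # False -> normal sentence boundary
```

The full pipeline component then only has to set `sent_start = False` on the token following each `.` for which this check is true.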
976
spaCy
Training spaCy - NameError
https://stackoverflow.com/questions/68048737/training-spacy-nameerror
<p>I need to train a spaCy model to improve the accuracy to identify products. I'm struggling with training my spacy model. I have the following code:</p> <pre><code>TRAIN_DATA = [('..., {'entities': [(36,55,'PRODUCT')]})] nlp = spacy.load(&quot;en_core_web_lg&quot;) ner = nlp.get_pipe(&quot;ner&quot;) optimizer = nlp.create_optimizer() other_pipes = [pipe for pipe in nlp.pipe_names if pipe != &quot;ner&quot;] with nlp.disable_pipes(*other_pipes): # only train NER for itn in range(50): random.shuffle(TRAIN_DATA) losses = {} for text, annotations in TRAIN_DATA: doc = nlp.make_doc(text) example = Example.from_dict(doc, annotations) nlp.update([example], drop=0.25, sgd=optimizer, losses=losses) </code></pre> <p>but it's failing due to:</p> <pre><code>NameError Traceback (most recent call last) &lt;ipython-input-4-903f2be7114f&gt; in &lt;module&gt; 15 for text, annotations in TRAIN_DATA: 16 doc = nlp.make_doc(text) ---&gt; 17 example = Example.from_dict(doc, annotations) 18 nlp.update([example], drop=0.25, sgd=optimizer, losses=losses) 19 print(losses) NameError: name 'Example' is not defined </code></pre> <p>How do I need to define <code>Example</code>?</p>
<p>Thanks for the hint: I had missed the import <code>from spacy.training import Example</code> when moving the code from Jupyter to Visual Studio Code for deployment.</p>
977
spaCy
SpaCy: how to load Google news word2vec vectors?
https://stackoverflow.com/questions/42094180/spacy-how-to-load-google-news-word2vec-vectors
<p>I've tried several methods of loading the google news word2vec vectors (<a href="https://code.google.com/archive/p/word2vec/" rel="noreferrer">https://code.google.com/archive/p/word2vec/</a>):</p> <pre><code>en_nlp = spacy.load('en',vector=False) en_nlp.vocab.load_vectors_from_bin_loc('GoogleNews-vectors-negative300.bin') </code></pre> <p>The above gives:</p> <pre><code>MemoryError: Error assigning 18446744072820359357 bytes </code></pre> <p>I've also tried with the .gz packed vectors; or by loading and saving them with gensim to a new format:</p> <pre><code>from gensim.models.word2vec import Word2Vec model = Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True) model.save_word2vec_format('googlenews2.txt') </code></pre> <p>This file then contains the words and their word vectors on each line. I tried to load them with:</p> <pre><code>en_nlp.vocab.load_vectors('googlenews2.txt') </code></pre> <p>but it returns "0".</p> <p>What is the correct way to do this?</p> <p><strong>Update:</strong></p> <p>I can load my own created file into spacy. I use a test.txt file with "string 0.0 0.0 ...." on each line. Then zip this txt with .bzip2 to test.txt.bz2. Then I create a spacy compatible binary file:</p> <pre><code>spacy.vocab.write_binary_vectors('test.txt.bz2', 'test.bin') </code></pre> <p>That I can load into spacy:</p> <pre><code>nlp.vocab.load_vectors_from_bin_loc('test.bin') </code></pre> <p>This works! However, when I do the same process for the googlenews2.txt, I get the following error:</p> <pre><code>lib/python3.6/site-packages/spacy/cfile.pyx in spacy.cfile.CFile.read_into (spacy/cfile.cpp:1279)() OSError: </code></pre>
<p>For spacy 1.x, load Google news vectors into gensim and convert to a new format (each line in .txt contains a single vector: string, vec):</p> <pre><code>from gensim.models.word2vec import Word2Vec
from gensim.models import KeyedVectors

model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
model.wv.save_word2vec_format('googlenews.txt')
</code></pre> <p>Remove the first line of the .txt:</p> <pre><code>tail -n +2 googlenews.txt &gt; googlenews.new &amp;&amp; mv -f googlenews.new googlenews.txt
</code></pre> <p>Compress the txt as .bz2:</p> <pre><code>bzip2 googlenews.txt
</code></pre> <p>Create a SpaCy compatible binary file:</p> <pre><code>spacy.vocab.write_binary_vectors('googlenews.txt.bz2', 'googlenews.bin')
</code></pre> <p>Move the googlenews.bin to /lib/python/site-packages/spacy/data/en_google-1.0.0/vocab/googlenews.bin of your python environment.</p> <p>Then load the wordvectors:</p> <pre><code>import spacy

nlp = spacy.load('en', vectors='en_google')
</code></pre> <p>or load them later:</p> <pre><code>nlp.vocab.load_vectors_from_bin_loc('googlenews.bin')
</code></pre>
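Each line of the intermediate `googlenews.txt` (after the header line is removed) has the form `word v1 v2 … v300`. A small parser sketch shows the format; the 3-dimensional line below is made up for illustration, the real file has 300 values per line:

```python
def parse_vector_line(line):
    # "word v1 v2 ... vN" -> (word, [v1, ..., vN])
    word, *values = line.split()
    return word, [float(v) for v in values]

word, vec = parse_vector_line("king 0.125 -0.532 0.941")
print(word, len(vec))  # king 3
```

The header line that `tail -n +2` strips is the `vocab_size dim` pair that the word2vec text format puts first, which is why it must not be fed to the vector loader.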
978
spaCy
&quot;python&quot; unable to import spacy and download en_core_web_sm
https://stackoverflow.com/questions/76319917/python-unable-to-import-spacy-and-download-en-core-web-sm
<p><strong>What I want to achieve:</strong></p> <p>Import spacy and use it.</p> <p><em><strong>What I've tried:</strong></em></p> <p>When I try to import spacy on python I get <strong>ImportError: cannot import name util</strong> error (detail on error1)</p> <p>Spacy is sucessfully installed to my device.</p> <p><a href="https://github.com/explosion/spaCy/issues/2370" rel="nofollow noreferrer">https://github.com/explosion/spaCy/issues/2370</a> Following article I operated <strong>pip uninstall en_core_web_sm</strong> then I got <strong>WARNING: Skipping en_core_web_sm as it is not installed.</strong></p> <p>operate <strong>python -m spacy download en_core_web_sm</strong> give me <strong>TypeError: issubclass() arg 1 must be a class</strong> error (detail in error2)</p> <h2><strong>Error1:</strong></h2> <pre><code>ImportError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_11524/513823458.py in &lt;module&gt; 1 import sys ----&gt; 2 import spacy ~\AppData\Roaming\Python\Python39\site-packages\spacy\__init__.py in &lt;module&gt; 12 from thinc.api import Config 13 ---&gt; 14 from . 
import pipeline # noqa: F401 15 from .cli.info import info # noqa: F401 16 from .glossary import explain # noqa: F401 ~\AppData\Roaming\Python\Python39\site-packages\spacy\pipeline\__init__.py in &lt;module&gt; ----&gt; 1 from .attributeruler import AttributeRuler 2 from .dep_parser import DependencyParser 3 from .edit_tree_lemmatizer import EditTreeLemmatizer 4 from .entity_linker import EntityLinker 5 from .ner import EntityRecognizer ~\AppData\Roaming\Python\Python39\site-packages\spacy\pipeline\attributeruler.py in &lt;module&gt; 4 from pathlib import Path 5 ----&gt; 6 from .pipe import Pipe 7 from ..errors import Errors 8 from ..training import Example ~\AppData\Roaming\Python\Python39\site-packages\spacy\pipeline\pipe.pyx in init spacy.pipeline.pipe() ~\AppData\Roaming\Python\Python39\site-packages\spacy\vocab.pyx in init spacy.vocab() ~\AppData\Roaming\Python\Python39\site-packages\spacy\tokens\__init__.py in &lt;module&gt; ----&gt; 1 from .doc import Doc 2 from .token import Token 3 from .span import Span 4 from .span_group import SpanGroup 5 from ._serialize import DocBin ~\AppData\Roaming\Python\Python39\site-packages\spacy\tokens\doc.pyx in init spacy.tokens.doc() ImportError: cannot import name util </code></pre> <p><strong>Error2:</strong></p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\akira\anaconda3\lib\runpy.py&quot;, line 188, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File &quot;C:\Users\akira\anaconda3\lib\runpy.py&quot;, line 147, in _get_module_details return _get_module_details(pkg_main_name, error) File &quot;C:\Users\akira\anaconda3\lib\runpy.py&quot;, line 111, in _get_module_details __import__(pkg_name) File &quot;C:\Users\akira\AppData\Roaming\Python\Python39\site-packages\spacy\__init__.py&quot;, line 14, in &lt;module&gt; from . 
import pipeline # noqa: F401 File &quot;C:\Users\akira\AppData\Roaming\Python\Python39\site-packages\spacy\pipeline\__init__.py&quot;, line 1, in &lt;module&gt; from .attributeruler import AttributeRuler File &quot;C:\Users\akira\AppData\Roaming\Python\Python39\site-packages\spacy\pipeline\attributeruler.py&quot;, line 6, in &lt;module&gt; from .pipe import Pipe File &quot;spacy\pipeline\pipe.pyx&quot;, line 1, in init spacy.pipeline.pipe File &quot;spacy\vocab.pyx&quot;, line 1, in init spacy.vocab File &quot;C:\Users\akira\AppData\Roaming\Python\Python39\site-packages\spacy\tokens\__init__.py&quot;, line 1, in &lt;module&gt; from .doc import Doc File &quot;spacy\tokens\doc.pyx&quot;, line 36, in init spacy.tokens.doc File &quot;C:\Users\akira\AppData\Roaming\Python\Python39\site-packages\spacy\schemas.py&quot;, line 222, in &lt;module&gt; class TokenPattern(BaseModel): File &quot;pydantic\main.py&quot;, line 205, in pydantic.main.ModelMetaclass.__new__ File &quot;pydantic\fields.py&quot;, line 491, in pydantic.fields.ModelField.infer File &quot;pydantic\fields.py&quot;, line 421, in pydantic.fields.ModelField.__init__ File &quot;pydantic\fields.py&quot;, line 537, in pydantic.fields.ModelField.prepare File &quot;pydantic\fields.py&quot;, line 634, in pydantic.fields.ModelField._type_analysis File &quot;pydantic\fields.py&quot;, line 641, in pydantic.fields.ModelField._type_analysis File &quot;C:\Users\akira\anaconda3\lib\typing.py&quot;, line 847, in __subclasscheck__ return issubclass(cls, self.__origin__) TypeError: issubclass() arg 1 must be a class </code></pre>
<p>This has been reported. See the suggested workaround: <a href="https://github.com/explosion/spaCy/issues/12659" rel="nofollow noreferrer">https://github.com/explosion/spaCy/issues/12659</a>.</p>
979
spaCy
Instaling SpaCy for Anaconda
https://stackoverflow.com/questions/61259713/instaling-spacy-for-anaconda
<p>Hi guys I'm having a problem with installing SpaCy on Windows/Anaconda. I was trying:</p> <p><strong>conda install -c conda-forge spacy</strong></p> <p>And I'm getting an error:</p> <pre class="lang-none prettyprint-override"><code>Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: - **Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -&gt; Available versions** </code></pre> <p>Does it mean that my anaconda version is not compatible with the SpaCy? Do you guys know how to fix this?</p> <p>Thanks! </p>
980
spaCy
Use spacy on pretokenized text
https://stackoverflow.com/questions/59115914/use-spacy-on-pretokenized-text
<p>I want to use spacy for processing an already pre-tokenized text. Passing a list of tokens to spacy does not work.</p> <pre class="lang-py prettyprint-override"><code>import spacy

nlp = spacy.load("en_core_web_sm")
nlp(["This", "is", "a", "sentence"])
</code></pre> <p>This gives a TypeError (which makes sense): <code>TypeError: Argument 'string' has incorrect type (expected str, got list)</code></p> <p>I could replace the tokenizer with a custom one, but I feel like that would overcomplicate things and is not the preferred way.</p> <p>Thank you for your help :D</p>
<p>You can use this method:</p> <pre class="lang-py prettyprint-override"><code>tokens = ["This", "is", "a", "sentence"] sentence = nlp.tokenizer.tokens_from_list(tokens) print(sentence) </code></pre> <pre><code>This is a sentence </code></pre> <p>Note that <code>tokens_from_list</code> was removed in spaCy v3; there you construct the doc directly with <code>Doc(nlp.vocab, words=tokens)</code>.</p>
981
spaCy
Blank lemmatization using spacy
https://stackoverflow.com/questions/70147866/blank-lemmatization-using-spacy
<p>How do I use lemmatization in spaCy? I tried this code, but the output is blank. My spaCy version is 3.2.0.</p> <pre><code>from spacy.lang.id import Indonesian nlp = Indonesian() def tokenizer(text): return [token.lemma_.lower() for token in nlp(text) if not token.is_stop and not token.is_punct] docs = [ 'Saya pikir ayah saya terlihat seperti Matt Damon.', 'Tapi apakah rezim mempelajari sesuatu?', 'Lalu dia mulai berjualan mariyuana.', 'Apakah Anda ingin menjadi penerbit, seseorang yang memberi lisensi teknologi?', ] for text in docs: print(tokenizer(text)) </code></pre>
<p>You need to configure the <a href="https://spacy.io/usage/linguistic-features#lemmatizer-lookup" rel="nofollow noreferrer">lookup lemmatizer</a>. If you install <code>spacy-lookups-data</code> then you can do that like this:</p> <pre><code>nlp = Indonesian() nlp.add_pipe(&quot;lemmatizer&quot;, config={&quot;mode&quot;: &quot;lookup&quot;}) nlp.initialize() </code></pre> <p>Then you'll get this output:</p> <pre><code>['pikir', 'ayah', 'matt', 'damon'] ['rezim', 'ajar'] ['jual', 'mariyuana'] ['terbit', 'lisensi', 'teknologi'] </code></pre>
982
spaCy
KeyError in spaCy GoldParse
https://stackoverflow.com/questions/47807189/keyerror-in-spacy-goldparse
<p>The following code fragment (modified from spaCy sample code) generates a KeyError that I just can't figure out:</p> <pre><code>import en_core_web_sm from spacy.gold import GoldParse nlp = en_core_web_sm.load() nlp.entity.add_label('ACCT') TRAIN_DATA = [ ("Exxon opened a new processing facility", { "entities": [(0, 5, "ACCT")] }), ("another example sentence", { "entities": [] }), ("Shell is an oil company, and so is Chevron.", { "entities": [(0, 5, "ACCT"), (35, 42, "ACCT")] }), ("Texaco?", { "entities": [(0, 6, "ACCT")] }) ] # Add new words to vocab for raw_text, _ in TRAIN_DATA: doc = nlp.make_doc(raw_text) for word in doc: _ = nlp.vocab[word.orth] loss = 0. for raw_text, entity_offsets in TRAIN_DATA: doc = nlp.make_doc(raw_text) gold = GoldParse(doc, entities=entity_offsets) loss += nlp.entity.update(doc, gold, drop=0.9) </code></pre> <p>The error is:</p> <pre><code>KeyError Traceback (most recent call last) &lt;ipython-input-27-bbf3e1dc4d39&gt; in &lt;module&gt;() 33 for raw_text, entity_offsets in TRAIN_DATA: 34 doc = nlp.make_doc(raw_text) ---&gt; 35 gold = GoldParse(doc, entities=entity_offsets) 36 loss += nlp.entity.update(doc, gold, drop=0.9) 37 gold.pyx in spacy.gold.GoldParse.__init__() KeyError: 0 </code></pre> <p>I'm seeing this error with spaCy 2.0.3 as well as spaCy 1.9. </p> <p>When I run similar code in a Flask app, I get additional trace information that suggests the actual line that is failing is <code>elif not isinstance(entities[0], basestring):</code> in the <code>gold.pyx</code> file.</p> <p>Can anyone help explain what's happening?</p>
<p>I don't know how the spaCy <a href="https://github.com/explosion/spaCy/blob/v1.9.0/examples/training/train_new_entity_type.py" rel="nofollow noreferrer">sample code</a> ever worked, but the <code>GoldParse</code> method wants <code>entities</code> to be a <code>list</code>, not a <code>dict</code>. Changing the line to:</p> <pre><code>gold = GoldParse(doc, entities=entity_offsets.get('entities')) </code></pre> <p>fixed the problem.</p>
983
spaCy
spacy python package no longer runs
https://stackoverflow.com/questions/75978880/spacy-python-package-no-longer-runs
<p>Running python 3.11.3 on macos, Intel.</p> <p>I had spacy working fine. I then decided to try adding gpu support with: <code>pip install -U 'spacy[cuda113]'</code> but started getting errors.</p> <p>I uninstalled with <code>pip uninstall 'spacy[cuda113]'</code> and then reinstalled spacy with just <code>pip install spacy</code>.</p> <p>However, I'm still getting the same errors when running a simple script with just</p> <p><code>import spacy</code> in it:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/steve/workshop/python/blah.py&quot;, line 4, in &lt;module&gt; import spacy File &quot;/usr/local/lib/python3.11/site-packages/spacy/__init__.py&quot;, line 14, in &lt;module&gt; from . import pipeline # noqa: F401 ^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/spacy/pipeline/__init__.py&quot;, line 1, in &lt;module&gt; from .attributeruler import AttributeRuler File &quot;/usr/local/lib/python3.11/site-packages/spacy/pipeline/attributeruler.py&quot;, line 6, in &lt;module&gt; from .pipe import Pipe File &quot;spacy/pipeline/pipe.pyx&quot;, line 1, in init spacy.pipeline.pipe File &quot;spacy/vocab.pyx&quot;, line 1, in init spacy.vocab File &quot;/usr/local/lib/python3.11/site-packages/spacy/tokens/__init__.py&quot;, line 1, in &lt;module&gt; from .doc import Doc File &quot;spacy/tokens/doc.pyx&quot;, line 36, in init spacy.tokens.doc File &quot;/usr/local/lib/python3.11/site-packages/spacy/schemas.py&quot;, line 158, in &lt;module&gt; class TokenPatternString(BaseModel): File &quot;/usr/local/lib/python3.11/site-packages/pydantic/main.py&quot;, line 369, in __new__ cls.__signature__ = ClassAttribute('__signature__', generate_model_signature(cls.__init__, fields, config)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/pydantic/utils.py&quot;, line 231, in generate_model_signature merged_params[param_name] = Parameter( ^^^^^^^^^^ File 
&quot;/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py&quot;, line 2722, in __init__ raise ValueError('{!r} is not a valid parameter name'.format(name)) ValueError: 'in' is not a valid parameter name </code></pre>
<p>I'm not quite sure how/why these commands fixed the issue, but they did:</p> <pre><code>pip uninstall spacy pip uninstall spacy_legacy pip uninstall spacy_alignments pip uninstall spacy_loggers pip uninstall pydantic pip install 'pydantic&lt;1.8' pip install spacy pip uninstall pydantic pip install pydantic pip install spacy-alignments pip install spacy-transformers pip uninstall spacy pip install spacy </code></pre>
984
spaCy
spacy Arabic word2vector
https://stackoverflow.com/questions/75565318/spacy-arabic-word2vector
<p>I'm trying to build Arabic word vectors with spaCy, so I tried the code from this GitHub notebook: <a href="https://github.com/bakrianoo/aravec/blob/master/aravec-with-spacy.ipynb" rel="nofollow noreferrer">https://github.com/bakrianoo/aravec/blob/master/aravec-with-spacy.ipynb</a></p> <p>When I use this code</p> <pre><code>!python -m spacy init-model ar spacy.aravec.model --vectors-loc ./spacyModel/aravec.txt.gz </code></pre> <p><strong>this error appears:</strong></p> <pre><code>Error: No such command 'init-model'. </code></pre> <p>So I tried to fix it by changing <code>init-model</code> to <code>init</code> and adding <code>vectors</code>, but this error appears:</p> <pre><code>!python -m spacy init vectors ar ./spacyModel/spacy.aravec.model --vectors-loc ./spacyModel/aravec.txt.gz </code></pre> <p><strong>error</strong>:</p> <pre><code>Usage: python -m spacy init vectors [OPTIONS] LANG VECTORS_LOC OUTPUT_DIR Try 'python -m spacy init vectors --help' for help. Error: No such option: --vectors-loc </code></pre> <p>How can I fix it?</p>
<p>Can you try this solution:</p> <pre><code>!python -m spacy init vectors ar ./spacyModel/spacy.aravec.model ./spacyModel/aravec.txt.gz </code></pre> <p>According to this information (<a href="https://github.com/explosion/spaCy/discussions/7509" rel="nofollow noreferrer">https://github.com/explosion/spaCy/discussions/7509</a>) and this (<a href="https://stackoverflow.com/questions/50466643/in-spacy-how-to-use-your-own-word2vec-model-created-in-gensim">In spacy, how to use your own word2vec model created in gensim?</a>), the latest version of the command has changed to the following format, without <code>--vectors-loc</code>:</p> <pre><code>python -m spacy init vectors [OPTIONS] LANG VECTORS_LOC OUTPUT_DIR </code></pre>
985
spaCy
Issue with install Spacy
https://stackoverflow.com/questions/50837733/issue-with-install-spacy
<p>Running the command from the instructions:</p> <pre><code>pip install -U spacy </code></pre> <p>It fails while building wheels for the collected packages: spacy, murmurhash, cymem, preshed, thinc, ujson, regex, cytoolz.</p> <p>This is the error shown in the terminal:</p> <pre><code> Running setup.py bdist_wheel for spacy ... error Complete output from command /usr/local/opt/python/bin/python3.6 -u -c "import setuptools, tokenize;__file__='/private/var/folders/9t/0qgpwgy906z_sww71dnkgkr80000gn/T/pip-install-9ygkdvbw/spacy/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /private/var/folders/9t/0qgpwgy906z_sww71dnkgkr80000gn/T/pip-wheel-swjw2gll --python-tag cp36: running bdist_wheel running build running build_py creating build creating build/lib.macosx-10.13-x86_64-3.6 creating build/lib.macosx-10.13-x86_64-3.6/spacy copying spacy/lemmatizer.py -&gt; build/lib.macosx-10.13-x86_64-3.6/spacy copying spacy/compat.py -&gt; build/lib.macosx-10.13-x86_64-3.6/spacy ... ... 
copying spacy/tests/tokenizer/sun.txt -&gt; build/lib.macosx-10.13-x86_64-3.6/spacy/tests/tokenizer running build_ext building 'spacy.parts_of_speech' extension creating build/temp.macosx-10.13-x86_64-3.6 creating build/temp.macosx-10.13-x86_64-3.6/spacy clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes /usr/local/opt/openssl/include -I/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/include/python3.6m -I/private/var/folders/9t/0qgpwgy906z_sww71dnkgkr80000gn/T/pip-install-9ygkdvbw/spacy/include -I/usr/local/include -I/usr/local/opt/openssl/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/include/python3.6m -c spacy/parts_of_speech.cpp -o build/temp.macosx-10.13-x86_64-3.6/spacy/parts_of_speech.o -O2 -Wno-strict-prototypes -Wno-unused-function clang: warning: /usr/local/opt/openssl/include: 'linker' input unused [-Wunused-command-line-argument] clang++ -bundle -undefined dynamic_lookup /usr/local/opt/openssl/lib /usr/local/opt/openssl/include build/temp.macosx-10.13-x86_64-3.6/spacy/parts_of_speech.o -L/usr/local/lib -L/usr/local/opt/openssl/lib -L/usr/local/opt/sqlite/lib -o build/lib.macosx-10.13-x86_64-3.6/spacy/parts_of_speech.cpython-36m-darwin.so -Wl,-rpath,@loader_path/../spacy/platform/darwin/lib ld: can't map file, errno=22 file '/usr/local/opt/openssl/include' for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) error: command 'clang++' failed with exit status 1 ---------------------------------------- Failed building wheel for spacy Running setup.py clean for spacy Running setup.py bdist_wheel for murmurhash ... 
error Complete output from command /usr/local/opt/python/bin/python3.6 -u -c "import setuptools, tokenize;__file__='/private/var/folders/9t/0qgpwgy906z_sww71dnkgkr80000gn/T/pip-install-9ygkdvbw/murmurhash/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /private/var/folders/9t/0qgpwgy906z_sww71dnkgkr80000gn/T/pip-wheel-9ytvwkua --python-tag cp36: running bdist_wheel running build running build_py creating build creating build/lib.macosx-10.13-x86_64-3.6 creating build/lib.macosx-10.13-x86_64-3.6/murmurhash copying murmurhash/__init__.py -&gt; build/lib.macosx-10.13-x86_64-3.6/murmurhash copying murmurhash/about.py -&gt; build/lib.macosx-10.13-x86_64-3.6/murmurhash creating build/lib.macosx-10.13-x86_64-3.6/murmurhash/tests copying murmurhash/tests/__init__.py -&gt; build/lib.macosx-10.13-x86_64-3.6/murmurhash/tests copying murmurhash/tests/test_import.py -&gt; build/lib.macosx-10.13-x86_64-3.6/murmurhash/tests copying murmurhash/mrmr.pyx -&gt; build/lib.macosx-10.13-x86_64-3.6/murmurhash copying murmurhash/__init__.pxd -&gt; build/lib.macosx-10.13-x86_64-3.6/murmurhash copying murmurhash/mrmr.pxd -&gt; build/lib.macosx-10.13-x86_64-3.6/murmurhash creating build/lib.macosx-10.13-x86_64-3.6/murmurhash/include creating build/lib.macosx-10.13-x86_64-3.6/murmurhash/include/murmurhash copying murmurhash/include/murmurhash/MurmurHash2.h -&gt; build/lib.macosx-10.13-x86_64-3.6/murmurhash/include/murmurhash copying murmurhash/include/murmurhash/MurmurHash3.h -&gt; build/lib.macosx-10.13-x86_64-3.6/murmurhash/include/murmurhash running build_ext building 'murmurhash.mrmr' extension creating build/temp.macosx-10.13-x86_64-3.6 creating build/temp.macosx-10.13-x86_64-3.6/murmurhash clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes /usr/local/opt/openssl/include 
-I/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/include/python3.6m -I/private/var/folders/9t/0qgpwgy906z_sww71dnkgkr80000gn/T/pip-install-9ygkdvbw/murmurhash/murmurhash/include -I/usr/local/include -I/usr/local/opt/openssl/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/include/python3.6m -c murmurhash/mrmr.cpp -o build/temp.macosx-10.13-x86_64-3.6/murmurhash/mrmr.o -O3 -Wno-strict-prototypes -Wno-unused-function clang: warning: /usr/local/opt/openssl/include: 'linker' input unused [-Wunused-command-line-argument] clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes /usr/local/opt/openssl/include -I/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/include/python3.6m -I/private/var/folders/9t/0qgpwgy906z_sww71dnkgkr80000gn/T/pip-install-9ygkdvbw/murmurhash/murmurhash/include -I/usr/local/include -I/usr/local/opt/openssl/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/include/python3.6m -c murmurhash/MurmurHash2.cpp -o build/temp.macosx-10.13-x86_64-3.6/murmurhash/MurmurHash2.o -O3 -Wno-strict-prototypes -Wno-unused-function clang: warning: /usr/local/opt/openssl/include: 'linker' input unused [-Wunused-command-line-argument] clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes /usr/local/opt/openssl/include -I/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/include/python3.6m -I/private/var/folders/9t/0qgpwgy906z_sww71dnkgkr80000gn/T/pip-install-9ygkdvbw/murmurhash/murmurhash/include -I/usr/local/include -I/usr/local/opt/openssl/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/include/python3.6m -c murmurhash/MurmurHash3.cpp -o 
build/temp.macosx-10.13-x86_64-3.6/murmurhash/MurmurHash3.o -O3 -Wno-strict-prototypes -Wno-unused-function clang: warning: /usr/local/opt/openssl/include: 'linker' input unused [-Wunused-command-line-argument] clang++ -bundle -undefined dynamic_lookup /usr/local/opt/openssl/lib /usr/local/opt/openssl/include build/temp.macosx-10.13-x86_64-3.6/murmurhash/mrmr.o build/temp.macosx-10.13-x86_64-3.6/murmurhash/MurmurHash2.o build/temp.macosx-10.13-x86_64-3.6/murmurhash/MurmurHash3.o -L/usr/local/lib -L/usr/local/opt/openssl/lib -L/usr/local/opt/sqlite/lib -o build/lib.macosx-10.13-x86_64-3.6/murmurhash/mrmr.cpython-36m-darwin.so ld: can't map file, errno=22 file '/usr/local/opt/openssl/lib' for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) error: command 'clang++' failed with exit status 1 ---------------------------------------- Failed building wheel for murmurhash Running setup.py clean for murmurhash </code></pre> <p>Similar error for the rest of the libraries.</p> <p>It probably has something to do with finding python header files. And I checked the location using this command : </p> <pre><code>python -c "import sysconfig; print(sysconfig.get_config_var('INCLUDEPY'))" </code></pre> <blockquote> <p>/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/include/python3.6m</p> </blockquote> <p>It has all the header files.</p> <p>This answer (<a href="https://stackoverflow.com/questions/43370851/failed-building-wheel-for-spacy">Failed building wheel for spacy</a>) suggests installing the wheel individually. How do I do that on Mac OS X?</p>
<p>The best way to get spaCy working is to download its model manually and link it. The command below downloads the model and links it to your Python libraries:</p> <pre><code>pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz </code></pre> <p>This link explains more: <a href="https://spacy.io/usage/models#usage-link" rel="nofollow noreferrer">https://spacy.io/usage/models#usage-link</a></p>
986
spaCy
Error while executing &quot;python -m spacy download en_core_web_sm&quot; in spacy python
https://stackoverflow.com/questions/76006334/error-while-executing-python-m-spacy-download-en-core-web-sm-in-spacy-python
<p>I am trying to load &quot;en_core_web_sm&quot; in spacy but getting the following error</p> <p><em>Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))</em></p> <pre><code>import spacy from spacy import displacy from spacy import tokenizer nlp = spacy.load('en_core_web_sm') </code></pre> <p>Thanks in advance!</p>
<p>Downloading the <code>.whl</code> file from here: <a href="https://github.com/explosion/spacy-models/releases/tag/en_core_web_sm-3.1.0" rel="nofollow noreferrer">spacy</a></p> <p>and running the command</p> <pre><code>pip install en_core_web_sm-3.1.0-py3-none-any.whl </code></pre> <p>did the trick.</p>
987
spaCy
spaCy and scikit-learn vectorizer
https://stackoverflow.com/questions/45196312/spacy-and-scikit-learn-vectorizer
<p>I wrote a lemma tokenizer using spaCy for scikit-learn based on their <a href="http://scikit-learn.org/stable/modules/feature_extraction.html" rel="noreferrer">example</a>, it works OK standalone:</p> <pre><code>import spacy from sklearn.feature_extraction.text import TfidfVectorizer class LemmaTokenizer(object): def __init__(self): self.spacynlp = spacy.load('en') def __call__(self, doc): nlpdoc = self.spacynlp(doc) nlpdoc = [token.lemma_ for token in nlpdoc if (len(token.lemma_) &gt; 1) or (token.lemma_.isalnum()) ] return nlpdoc vect = TfidfVectorizer(tokenizer=LemmaTokenizer()) vect.fit(['Apples and oranges are tasty.']) print(vect.vocabulary_) ### prints {'apple': 1, 'and': 0, 'tasty': 4, 'be': 2, 'orange': 3} </code></pre> <p>However, using it in <code>GridSearchCV</code> gives errors, a self contained example is below:</p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.svm import SVC from sklearn.multiclass import OneVsRestClassifier from sklearn.pipeline import Pipeline from sklearn.grid_search import GridSearchCV wordvect = TfidfVectorizer(analyzer='word', strip_accents='ascii', tokenizer=LemmaTokenizer()) classifier = OneVsRestClassifier(SVC(kernel='linear')) pipeline = Pipeline([('vect', wordvect), ('classifier', classifier)]) parameters = {'vect__min_df': [1, 2], 'vect__max_df': [0.7, 0.8], 'classifier__estimator__C': [0.1, 1, 10]} gs_clf = GridSearchCV(pipeline, parameters, n_jobs=7, verbose=1) from sklearn.datasets import fetch_20newsgroups categories = ['comp.graphics', 'rec.sport.baseball'] newsgroups = fetch_20newsgroups(remove=('headers', 'footers', 'quotes'), shuffle=True, categories=categories) X = newsgroups.data y = newsgroups.target gs_clf = gs_clf.fit(X, y) ### AttributeError: 'spacy.tokenizer.Tokenizer' object has no attribute '_prefix_re' </code></pre> <p>The error does not appear when I load spacy outside of constructor of the tokenizer, then the <code>GridSearchCV</code> runs:</p> 
<pre><code>spacynlp = spacy.load('en') class LemmaTokenizer(object): def __call__(self, doc): nlpdoc = spacynlp(doc) nlpdoc = [token.lemma_ for token in nlpdoc if (len(token.lemma_) &gt; 1) or (token.lemma_.isalnum()) ] return nlpdoc </code></pre> <p>But this means that every of my <code>n_jobs</code> from the <code>GridSearchCV</code> will access and call the same spacynlp object, it is shared among these jobs, which leaves the questions:</p> <ol> <li>Is the spacynlp object from <code>spacy.load('en')</code> safe to be used by multiple jobs in GridSearchCV?</li> <li>Is this the correct way to implement calls to spacy inside a tokenizer for scikit-learn?</li> </ol>
<p>You are wasting time by running spaCy for each parameter setting in the grid. The memory overhead is also significant. You should run all data through spaCy once and save it to disk, then use a simplified vectoriser that reads in pre-lemmatised data. Look at the <code>tokenizer</code>, <code>analyzer</code> and <code>preprocessor</code> parameters of <code>TfidfVectorizer</code>. There are plenty of examples on Stack Overflow that show how to build a custom vectoriser.</p>
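The "run spaCy once, cache to disk" step can be sketched as below — `cache_lemmas` is a hypothetical helper (not part of scikit-learn or spaCy), and the lemmatiser is passed in as a callable so the expensive spaCy call only happens on the first run:

```python
import pickle
from pathlib import Path

def cache_lemmas(texts, lemmatize, cache_path):
    """Lemmatise `texts` once with `lemmatize` (any str -> list-of-str
    callable, e.g. the LemmaTokenizer above) and cache the result on disk."""
    cache_path = Path(cache_path)
    if cache_path.exists():
        with cache_path.open("rb") as f:
            return pickle.load(f)  # reuse the expensive first pass
    lemmatized = [lemmatize(text) for text in texts]
    with cache_path.open("wb") as f:
        pickle.dump(lemmatized, f)
    return lemmatized
```

The cached token lists can then be fed to `TfidfVectorizer(analyzer=lambda tokens: tokens)`, so the grid search never re-tokenises anything.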
988
spaCy
Python - cannot install spaCy
https://stackoverflow.com/questions/51501274/python-cannot-install-spacy
<p>I am trying to install spaCy on a Mac (OSX El Capitain). I am running python 2.7, have pip 18.0 and setuptools 40.0.</p> <p>Whenever I run 'sudo pip install spacy', I get the following:</p> <pre><code>Installing collected packages: numpy, murmurhash, cymem, preshed, msgpack, msgpack-numpy, toolz, cytoolz, wrapt, plac, tqdm, six, dill, pathlib, thinc, ujson, regex, idna, urllib3, chardet, certifi, requests, spacy Running setup.py install for murmurhash ... done Running setup.py install for preshed ... done Running setup.py install for msgpack ... done Running setup.py install for toolz ... done Running setup.py install for cytoolz ... done Running setup.py install for wrapt ... done Could not install packages due to an EnvironmentError: [Errno 1] Operation not permitted: '/System/Library/Frameworks/Python.framework/Versions/2.7/man' </code></pre> <p>So, there must be something in the plac package that it doesn't like. The file in the above path doesn't exist.</p> <p>Has anyone else found a solution to this?</p>
989
spaCy
Build vocab using spacy
https://stackoverflow.com/questions/66838185/build-vocab-using-spacy
<p>I'm using spacy tokenizer to tokenize my data, and then build vocab.</p> <p>This is my code:</p> <pre><code>import spacy nlp = spacy.load(&quot;en_core_web_sm&quot;) def build_vocab(docs, max_vocab=10000, min_freq=3): stoi = {'&lt;PAD&gt;':0, '&lt;UNK&gt;':1} itos = {0:'&lt;PAD&gt;', 1:'&lt;UNK&gt;'} word_freq = {} idx = 2 for sentence in docs: for word in [i.text.lower() for i in nlp(sentence)]: if word not in word_freq: word_freq[word] = 1 else: word_freq[word] += 1 if word_freq[word] == min_freq: if len(stoi) &lt; max_vocab: stoi[word] = idx itos[idx] = word idx += 1 return stoi, itos </code></pre> <p>But it takes hours to complete since I have more than 800000 sentences.</p> <p>Is there a faster and better way to achieve this? Thanks.</p> <p>update: tried to remove min_freq:</p> <pre><code>def build_vocab(docs, max_vocab=10000): stoi = {'&lt;PAD&gt;':0, '&lt;UNK&gt;':1} itos = {0:'&lt;PAD&gt;', 1:'&lt;UNK&gt;'} idx = 2 for sentence in docs: for word in [i.text.lower() for i in nlp(sentence)]: if word not in stoi: if len(stoi) &lt; max_vocab: stoi[word] = idx itos[idx] = word idx += 1 return stoi, itos </code></pre> <p>still takes a long time, does spacy have a function to build vocab like in torchtext (.build_vocab).</p>
<p>There are a couple of things you can do to make this faster.</p> <pre><code>import spacy from collections import Counter def build_vocab(texts, max_vocab=10000, min_freq=3): nlp = spacy.blank(&quot;en&quot;) # just the tokenizer wc = Counter() for doc in nlp.pipe(texts): for word in doc: wc[word.lower_] += 1 word2id = {} id2word = {} for word, count in wc.most_common(): if count &lt; min_freq: break if len(word2id) &gt;= max_vocab: break wid = len(word2id) word2id[word] = wid id2word[wid] = word return word2id, id2word </code></pre> <p>Explanation:</p> <ol> <li>If you only use the tokenizer you can use <code>spacy.blank</code></li> <li><code>nlp.pipe</code> is fast for lots of text (less important, maybe irrelevant with blank model though)</li> <li><code>Counter</code> is optimized for this kind of counting task</li> </ol> <p>Another thing is that the way you are building your vocab in your initial example, you will take the first N words that have enough tokens, not the top N words, which is probably wrong.</p> <p>Another thing is that if you're using spaCy you shouldn't build your vocab this way - spaCy has its own built-in vocab class that handles converting tokens to IDs. I guess you might need this mapping for a downstream task or something but look at the <a href="https://spacy.io/api/vocab" rel="nofollow noreferrer">vocab docs</a> to see if you can use that instead.</p>
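The Counter logic above can be exercised without spaCy by swapping in a whitespace tokenizer — `build_vocab_from_tokens` is a hypothetical tokenizer-agnostic variant of the same function:

```python
from collections import Counter

def build_vocab_from_tokens(token_lists, max_vocab=10000, min_freq=3):
    """Build word<->id maps from pre-tokenized sentences, keeping the
    most frequent words first (the `most_common` ordering noted above)."""
    wc = Counter(tok for toks in token_lists for tok in toks)
    word2id, id2word = {}, {}
    for word, count in wc.most_common():
        if count < min_freq or len(word2id) >= max_vocab:
            break
        wid = len(word2id)
        word2id[word] = wid
        id2word[wid] = word
    return word2id, id2word

corpus = ["a a a b b c", "a b d"]
# 'a' occurs 4 times, 'b' 3 times; 'c' and 'd' fall below min_freq=2
w2i, i2w = build_vocab_from_tokens((s.split() for s in corpus), min_freq=2)
```

Because `most_common` returns words sorted by count, the loop can break as soon as a word falls under `min_freq`, which is what makes this the top-N vocabulary rather than the first N frequent-enough words.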
990
spaCy
spacy split sentences with abbreviations
https://stackoverflow.com/questions/53968330/spacy-split-sentences-with-abbreviations
<p>spaCy splits a sentence incorrectly when there are dots for abbreviations.</p> <pre><code>import spacy tool = spacy.load('en') x = tool('It starts at 9:00 a.m. Eastern Standard Time.') list(x.sents) </code></pre> <p>produces two sentences instead of one. How do I do this correctly?</p>
<p>If you are using the standard English models, <code>en_core_web_sm</code> or <code>en_core_web_md</code> or <code>en_core_web_lg</code>, the most common abbreviations like that one should be already handled:</p> <pre><code>&gt;&gt;&gt; import spacy &gt;&gt;&gt; nlp = spacy.load('en_core_web_sm') &gt;&gt;&gt; doc = nlp('It starts at 9:00 a.m. Eastern Standard Time.') &gt;&gt;&gt; list(doc.sents) [It starts at 9:00 a.m. Eastern Standard Time.] </code></pre> <p>However, if you have an abbreviation that is not recognized by the model you are using, you can use <a href="https://spacy.io/usage/linguistic-features#special-cases" rel="noreferrer"><code>add_special_case</code></a> to handle it properly. For example in the following case, <code>Pres.</code> is not recognized as an abbreviation, therefore two sentences are returned instead of just one:</p> <pre><code>&gt;&gt;&gt; import spacy &gt;&gt;&gt; nlp = spacy.load('en_core_web_sm') &gt;&gt;&gt; doc = nlp('I want to talk to Pres. Michael') &gt;&gt;&gt; list(doc.sents) [I want to talk to Pres., Michael] </code></pre> <p>You would have to load your own library of special cases in order to inform the model that this is an abbreviation, and not the end of a sentence. The verbatim text of the token (<a href="https://spacy.io/usage/rule-based-matching#adding-patterns-attributes" rel="noreferrer"><code>ORTH</code></a>) can be whatever you want, and may also include the dot.</p> <pre><code>&gt;&gt;&gt; from spacy.attrs import ORTH, LEMMA &gt;&gt;&gt; nlp.tokenizer.add_special_case('Pres.', [{ORTH: 'Pres', LEMMA: 'president'}]) &gt;&gt;&gt; doc = nlp('I want to talk to Pres. Michael') &gt;&gt;&gt; list(doc.sents) [I want to talk to Pres Michael] </code></pre>
991
spaCy
How to extract tag attributes using Spacy
https://stackoverflow.com/questions/53755559/how-to-extract-tag-attributes-using-spacy
<p>I tried to get the morphological attributes of the verb using Spacy like below:</p> <pre><code>import spacy from spacy.lang.it.examples import sentences nlp = spacy.load('it_core_news_sm') doc = nlp('Ti è piaciuto il film?') token = doc[2] nlp.vocab.morphology.tag_map[token.tag_] </code></pre> <p>output was:</p> <blockquote> <p>{'pos': 'VERB'}</p> </blockquote> <p>But I want to extract </p> <blockquote> <p>V__Mood=Cnd|Number=Plur|Person=1|Tense=Pres|VerbForm=Fin": {POS: VERB}</p> </blockquote> <p>Is it possible to extract the mood, tense,number,person information as specified in the tag-map <a href="https://github.com/explosion/spacy/blob/master/spacy/lang/it/tag_map.py" rel="nofollow noreferrer">https://github.com/explosion/spacy/blob/master/spacy/lang/it/tag_map.py</a> like above using Spacy?</p>
<p>The <code>nlp.vocab.morphology.tag_map</code> maps from the detailed tag to the dict with simpler tag, so you just need to skip that step and inspect the tag directly:</p> <pre><code>import spacy nlp = spacy.load('it') doc = nlp('Ti è piaciuto il film?') print(doc[2].tag_) </code></pre> <p>should return</p> <blockquote> <p>VA__Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin</p> </blockquote> <p>(with spacy 2.0.11, it_core_news_sm-2.0.0)</p>
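On spaCy v2 the individual features can then be pulled out of that string with plain parsing — `parse_morph_tag` is a hypothetical helper that just splits on the `__` and `|` layout shown above (spaCy v3 later exposed the same data directly as `token.morph`):

```python
def parse_morph_tag(tag: str) -> dict:
    """Split a detailed tag such as
    'VA__Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin'
    into a {feature: value} dict; a bare POS tag yields an empty dict."""
    _, _, feats = tag.partition("__")
    return dict(f.split("=", 1) for f in feats.split("|") if "=" in f)

tag = "VA__Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin"
features = parse_morph_tag(tag)
# features["Mood"] == "Ind", features["Tense"] == "Pres"
```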
992
spaCy
Problem initializing experimental spacy coref
https://stackoverflow.com/questions/78732626/problem-initializing-experimental-spacy-coref
<p>I'm using the documentation here (<a href="https://spacy.io/api/coref" rel="nofollow noreferrer">https://spacy.io/api/coref</a>) to work on a coreference resolution problem. I run into problems immediately when trying to add the pipe:</p> <pre><code>nlp = spacy.load(&quot;en_core_web_md&quot;) coref = nlp.add_pipe(&quot;experimental_coref&quot;) </code></pre> <pre><code>RegistryError: [E893] Could not find function 'spacy-experimental.Coref.v1' in function registry 'architectures'. If you're using a custom function, make sure the code is available. If the function is provided by a third-party package, e.g. spacy-transformers, make sure the package is installed in your environment. ... </code></pre> <p>My spaCy version:</p> <pre><code>spaCy version 3.6.0 Location C:\Users\&lt;user&gt;\AppData\Roaming\Python\Python310\site-packages\spacy Platform Windows-10-10.0.22631-SP0 Python version 3.10.12 Pipelines en_core_web_lg (3.4.0), en_core_web_md (3.6.0), en_core_web_sm (3.6.0) </code></pre> <p>My spacy-experimental version is 0.6.4.</p> <p>All the documentation on the different pages just imports and uses <code>nlp.add_pipe(&quot;experimental_coref&quot;)</code> with no problems, and I cannot figure out what my problem is.</p>
993
spaCy
SpaCy permission denied
https://stackoverflow.com/questions/53200628/spacy-permission-denied
<p>I sorta have two issues, and I only mention both because I think they might be related.</p> <p><strong>Problem 1:</strong></p> <p>I believe I have two versions of Python3 downloaded (one through Anaconda) because when I run</p> <pre><code>$ pip3 --version </code></pre> <p>I get</p> <pre><code>pip 9.0.1 from /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (python 3.6) </code></pre> <p>and when I run</p> <pre><code>$ pip --version </code></pre> <p>I get</p> <pre><code>pip 10.0.1 from /Users/skylerilenstine/anaconda3/lib/python3.7/site-packages/pip (python 3.7) </code></pre> <p>(I know this is probably a roundabout way of figuring this out lol).</p> <p><strong>Problem 2:</strong></p> <p>Something went wrong with SpaCy.</p> <p>Just yesterday, when I ran </p> <pre><code>python3 name_of_my_program.py </code></pre> <p>from terminal, it worked fine. But today I get the error</p> <pre><code>ModuleNotFoundError: No module named 'spacy' </code></pre> <p>When I run</p> <pre><code>which `spacy` </code></pre> <p>in terminal, I get</p> <pre><code>/Library/Frameworks/Python.framework/Versions/3.6/bin/spacy: Permission denied </code></pre> <p>It makes sense that it's under this path, as the other one was the anaconda version, and the code was never working in anaconda. But now it's not working in terminal either. I'm not sure what "Permission denied" means or how to fix it.</p> <p>Could this have happened because I downloaded a new version of python 3 through anaconda, or is it unrelated? Should I redownload SpaCy?</p>
<h1>To solve problem 2 (Windows):</h1> <ol> <li>Log into Windows as an administrator.</li> <li>Right-click on the file or folder you want to change permissions for.</li> <li>Select "Properties."</li> <li>Click the "Security" tab.</li> <li>Click the "Edit" button.</li> <li>Click the "Add" button to add a new user or group to the list.</li> <li>Select the user that you want to change permissions for.</li> <li>Check the boxes for the permissions you want to add for that user or group.</li> </ol> <h1>To solve problem 2 (Mac):</h1> <p>To apply permissions to all items in a folder or a disk:</p> <ol> <li>Select a folder or a disk, then choose File &gt; Get Info.</li> <li>Click the lock icon to unlock it, then enter an administrator name and password.</li> <li>Click the Action pop-up menu, then choose “Apply to enclosed items.”</li> </ol> <hr> <h1>Problem 1:</h1> <p>Yes, you seemingly have a "standard" Python install and an Anaconda install. Depending on your needs you may wish to uninstall one of them. Personally I would get rid of Anaconda; it is way too much. If you are new to Python, just stick with the basics... a text editor and a terminal!</p>
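On the Mac side the same repair can also be done programmatically. A sketch only: the real target would be the `/Library/.../bin/spacy` path from the error message (which may require an administrator account), so the demo below applies `restore_user_rx` — a hypothetical helper equivalent to `chmod u+rx` — to a throwaway file instead:

```python
import os
import stat
import tempfile

def restore_user_rx(path):
    """Add read+execute bits for the owner, like `chmod u+rx PATH`."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IRUSR | stat.S_IXUSR)

# Demonstrate on a temporary file rather than the real interpreter path.
fd, demo = tempfile.mkstemp()
os.close(fd)
os.chmod(demo, 0)          # simulate the 'Permission denied' state
restore_user_rx(demo)
print(oct(os.stat(demo).st_mode & 0o777))  # prints 0o500
```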
994
spaCy
ConnectTimeout Error while downloading spacy models
https://stackoverflow.com/questions/76789756/connecttimeout-error-while-downloading-spacy-models
<p>I started using spacy (3.6.0) recently. I understand that I have to download pre-trained models every time I re-start my kernel. It worked well a couple of times with</p> <p><code>!python -m spacy download en_core_web_lg</code></p> <p>Then it started giving me errors. A little bit of research helped me find an alternative:</p> <p><code>spacy.cli.download(model = 'en_core_web_lg')</code></p> <p>Even this is not working now.</p> <p><code>!python -m spacy download en_core_web_lg --direct --sdist</code> is giving the following error:</p> <blockquote> <p>ERROR: HTTP error 404 while getting <a href="https://github.com/explosion/spacy-models/releases/download/-en_core_web_lg/-en_core_web_lg.tar.gz" rel="nofollow noreferrer">https://github.com/explosion/spacy-models/releases/download/-en_core_web_lg/-en_core_web_lg.tar.gz</a> ERROR: Could not install requirement <a href="https://github.com/explosion/spacy-models/releases/download/-en_core_web_lg/-en_core_web_lg.tar.gz" rel="nofollow noreferrer">https://github.com/explosion/spacy-models/releases/download/-en_core_web_lg/-en_core_web_lg.tar.gz</a> because of HTTP error 404 Client Error: Not Found for url: https://github.com/explosion/spacy-models/releases/download/-en_core_web_lg/-en_core_web_lg.tar.gz</p> </blockquote> <p>When I run the alternative <code>spacy.cli.download(model = 'en_core_web_lg')</code>, I am getting the following message:</p> <blockquote> <p>ConnectTimeout: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /explosion/spacy-models/master/compatibility.json (Caused by ConnectTimeoutError(&lt;urllib3.connection.HTTPSConnection object at 0x000001B06FFB8850&gt;, 'Connection to raw.githubusercontent.com timed out. (connect timeout=None)'))</p> </blockquote> <p>Am I doing something fundamentally wrong, or is it really an internet connection issue like it is suggested <a href="https://stackoverflow.com/questions/76544393/unable-to-download-en-core-web-trf-for-spacy">here</a>? My internet connection seems fine but for this.</p>
995
spaCy
spacy sentence tokenization error on Hebrew
https://stackoverflow.com/questions/48572541/spacy-sentence-tokenization-error-on-hebrew
<p>Trying to use spacy sentence tokenization for Hebrew.</p> <pre><code>import spacy nlp = spacy.load('he') doc = nlp(text) sents = list(doc.sents) </code></pre> <p>I get:</p> <pre><code> Warning: no model found for 'he' Only loading the 'he' tokenizer. Traceback (most recent call last): ... sents = list(doc.sents) File "spacy/tokens/doc.pyx", line 438, in __get__ (spacy/tokens/doc.cpp:9707) raise ValueError( ValueError: Sentence boundary detection requires the dependency parse, which requires data to be installed. For more info, see the documentation: https://spacy.io/docs/usage </code></pre> <p>what to do?</p>
<p>spaCy's <a href="https://spacy.io/usage/models#alpha-support" rel="noreferrer">Hebrew coverage</a> is currently quite minimal: it only has <em>word</em> tokenization for Hebrew, which roughly splits on white space with some extra rules and exceptions. The <em>sentence</em> tokenization/boundary detection that you want requires a more sophisticated grammatical parsing of the sentence in order to determine where one sentence ends and another begins. These models require a large amount of labeled training data, so they are available for a smaller number of languages than have tokenization (<a href="https://spacy.io/usage/models#languages" rel="noreferrer">here</a>'s the list). </p> <p>The initial message is telling you that it can do tokenization, which doesn't require a model, and then the error you're getting is the result of not having a model to split sentences, do NER or POS, etc.</p> <p>You might look at <a href="https://github.com/iddoberger/awesome-hebrew-nlp" rel="noreferrer">this list</a> for other resources for Hebrew NLP. If you find enough labeled data in the right format and you're feeling ambitious, you could train your own Hebrew spaCy model using the overview described <a href="https://spacy.io/usage/training" rel="noreferrer">here</a>.</p>
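Until a Hebrew model with a parser exists, one workaround is to split sentences without spaCy at all. The sketch below is not spaCy's dependency-based boundary detection, just a naive regex split on terminal punctuation (which Hebrew shares with English); treat it as a stopgap for rough sentence splitting only:

```python
import re

def naive_sentences(text):
    """Crude sentence split on '.', '!' or '?' followed by whitespace.

    No substitute for a trained parser: abbreviations, ellipses and
    quoted punctuation will all fool it, but it needs no model data.
    """
    parts = re.split(r'(?<=[.!?])\s+', text.strip())
    return [p for p in parts if p]

print(naive_sentences(u"שלום עולם. מה שלומך? הכל טוב."))
```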
996
spaCy
SpaCy: Person entities missing
https://stackoverflow.com/questions/49532259/spacy-person-entities-missing
<p>I have some issues with spaCy missing Person entities. Below is an example when it seems like SpaCy is going blind when the name is preceded with certain titles. When you remove <code>Labour MP</code> (commented line) it recognises the name otherwise returns an empty list.</p> <pre><code>import spacy from spacy.en import English nlp_toolkit = English() nlp = spacy.load('en') text = u"In a recent tweet, Labour MP Luciana Berger sought clarification..." #text = u"In a recent tweet, Luciana Berger sought clarification..." all_tags = nlp(text) person_list=[] for ent in all_tags.ents: if ent.label_=="PERSON": person_list.append(str(ent)) print person_list </code></pre> <p>Now, I'm using 1.8.2 version of SpaCy. But before I upgrade I would really like to know if this issue was fixed in the version 2 as it seems the upgrade is not that straight forward. Could anyone test it or suggest a workaround? Thanks!</p> <p>UPDATE:</p> <p>It gets even more interesting. If you change the name to a different more common name, it works.</p> <pre><code>text = u"In a recent tweet, Labour MP James Mill sought clarification..." out: ['James Mill'] </code></pre> <p>So it is to do with the name and not <code>MP</code> before it... Anyway - if anyone can check if it works properly with v2 I'll appreciate it!</p>
<p>I tried your code with spaCy version 2.0.7 and it does give "Luciana Berger" as an entity.</p> <p>I get the list <code>['Luciana Berger']</code> for the sentence <code>"In a recent tweet, Labour MP Luciana Berger sought clarification..."</code></p> <p>For the other sentence, it likewise gives "James Mill".</p> <p>If you want to try the newer version, install it in a virtual environment; that way you can experiment with both versions. You can take help of this guide on <a href="http://www.pythonforbeginners.com/basics/how-to-use-python-virtualenv" rel="nofollow noreferrer">how to use a virtual environment</a>. In fact I would recommend using a virtual env, as switching between versions is quite a lengthy process, so better to try before you switch.</p> <p>Also, just for reference, spaCy's named entity recognition relies on trained models (see the <a href="https://spacy.io/usage/training" rel="nofollow noreferrer">training docs</a>), so it can happen that not every name will be covered by NER.</p>
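To make the virtual-environment suggestion concrete, here is a sketch of the workflow (the version pin is illustrative, and the `pip install` step needs network access):

```shell
# Create an isolated environment so the system-wide spaCy 1.8.2 stays untouched
python3 -m venv spacy2-env

# Activate it; everything installed from here on goes inside spacy2-env only
. spacy2-env/bin/activate

# Install the v2 release to test against (illustrative pin)
pip install 'spacy==2.0.7'

# ...run the NER script from the question here...

# Drop back to the system interpreter; the original install is unchanged
deactivate
```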
997
spaCy
Get position of word in sentence with spacy
https://stackoverflow.com/questions/46049612/get-position-of-word-in-sentence-with-spacy
<p>I'm aware of the basic spacy workflow for getting various attributes from a document, however I can't find a built in function to return the position (start/end) of a word which is part of a sentence.</p> <p>Would anyone know if this is possible with Spacy?</p>
<p>These are available as attributes of the tokens in the sentences. <a href="https://spacy.io/docs/api/token#attributes" rel="noreferrer">Doc</a> says:</p> <blockquote> <p>idx int The character offset of the token within the parent document.</p> <p>i int The index of the token within the parent document.</p> </blockquote> <pre><code>&gt;&gt;&gt; import spacy &gt;&gt;&gt; nlp = spacy.load('en') &gt;&gt;&gt; parsed_sentence = nlp(u'This is my sentence') &gt;&gt;&gt; [(token.text,token.i) for token in parsed_sentence] [(u'This', 0), (u'is', 1), (u'my', 2), (u'sentence', 3)] &gt;&gt;&gt; [(token.text,token.idx) for token in parsed_sentence] [(u'This', 0), (u'is', 5), (u'my', 8), (u'sentence', 11)] </code></pre>
998
spaCy
Spacy Pipeline?
https://stackoverflow.com/questions/38986235/spacy-pipeline
<p>So lately I've been playing around with a WikiDump. I preprocessed it and trained it on Word2Vec + Gensim</p> <p>Does anyone know if there is only one script within Spacy that would generate tokenization, sentence recognition, part of speech tagging, lemmatization, dependency parsing, and named entity recognition all at once</p> <p>I have not been able to find clear documentation Thank you </p>
<p>Spacy gives you all of that with just using <code>en_nlp = spacy.load('en'); doc=en_nlp(sentence)</code>. The <a href="https://spacy.io/docs/#getting-started" rel="noreferrer">documentation</a> gives you details about how to access each of the elements.</p> <p>An example is given below:</p> <pre><code>In [1]: import spacy ...: en_nlp = spacy.load('en') In [2]: en_doc = en_nlp(u'Hello, world. Here are two sentences.') </code></pre> <p>Sentences can be obtained by using <code>doc.sents</code>:</p> <pre><code>In [4]: list(en_doc.sents) Out[4]: [Hello, world., Here are two sentences.] </code></pre> <p>Noun chunks are given by <code>doc.noun_chunks</code>:</p> <pre><code>In [6]: list(en_doc.noun_chunks) Out[6]: [two sentences] </code></pre> <p><a href="https://spacy.io/docs/#examples-entities" rel="noreferrer">Named entity</a> is given by <code>doc.ents</code>:</p> <pre><code>In [11]: [(ent, ent.label_) for ent in en_doc.ents] Out[11]: [(two, u'CARDINAL')] </code></pre> <p>Tokenization: You can iterate over the doc to get tokens. <code>token.orth_</code> gives str of the token.</p> <pre><code>In [12]: [tok.orth_ for tok in en_doc] Out[12]: [u'Hello', u',', u'world', u'.', u'Here', u'are', u'two', u'sentences', u'.'] </code></pre> <p>POS is given by <code>token.tag_</code>:</p> <pre><code>In [13]: [tok.tag_ for tok in en_doc] Out[13]: [u'UH', u',', u'NN', u'.', u'RB', u'VBP', u'CD', u'NNS', u'.'] </code></pre> <p>Lemmatization:</p> <pre><code>In [15]: [tok.lemma_ for tok in en_doc] Out[15]: [u'hello', u',', u'world', u'.', u'here', u'be', u'two', u'sentence', u'.'] </code></pre> <p>Dependency parsing. You can traverse the parse tree by using <code>token.dep_</code> <code>token.rights</code> or <code>token.lefts</code>. 
You can write a function to print dependencies:</p> <pre><code>In [19]: for token in en_doc: ...: print(token.orth_, token.dep_, token.head.orth_, [t.orth_ for t in token.lefts], [t.orth_ for t in token.rights]) ...: (u'Hello', u'ROOT', u'Hello', [], [u',', u'world', u'.']) (u',', u'punct', u'Hello', [], []) (u'world', u'npadvmod', u'Hello', [], []) ... </code></pre> <p>For more details please consult the spacy documentation.</p>
999
matplotlib
How do I change the size of figures drawn with Matplotlib?
https://stackoverflow.com/questions/332289/how-do-i-change-the-size-of-figures-drawn-with-matplotlib
<p>How do I change the size of figure drawn with Matplotlib?</p>
<p><a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.figure.html" rel="noreferrer"><code>figure</code></a> tells you the call signature:</p> <pre><code>from matplotlib.pyplot import figure figure(figsize=(8, 6), dpi=80) </code></pre> <p><code>figure(figsize=(1,1))</code> would create an inch-by-inch image, which would be 80-by-80 pixels unless you also give a different dpi argument.</p>
1,000
matplotlib
Save plot to image file instead of displaying it
https://stackoverflow.com/questions/9622163/save-plot-to-image-file-instead-of-displaying-it
<p>This displays the figure in a GUI:</p> <pre><code>import matplotlib.pyplot as plt plt.plot([1, 2, 3], [1, 4, 9]) plt.show() </code></pre> <p>But how do I instead save the figure to a file (e.g. foo.png)?</p>
<p>When using <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.savefig.html" rel="noreferrer"><code>matplotlib.pyplot.savefig</code></a>, the file format can be specified by the extension:</p> <pre><code>from matplotlib import pyplot as plt plt.savefig('foo.png') plt.savefig('foo.pdf') </code></pre> <p>That gives a rasterized or vectorized output respectively. In addition, there is sometimes undesirable whitespace around the image, which can be removed with:</p> <pre><code>plt.savefig('foo.png', bbox_inches='tight') </code></pre> <p>Note that if showing the plot, <code>plt.show()</code> should follow <code>plt.savefig()</code>; otherwise, the file image will be blank.</p>
1,001
matplotlib
How to put the legend outside the plot
https://stackoverflow.com/questions/4700614/how-to-put-the-legend-outside-the-plot
<p>I have a series of 20 plots (not subplots) to be made in a single figure. I want the legend to be outside of the box. At the same time, I do not want to change the axes, as the size of the figure gets reduced.</p> <ol> <li>I want to keep the legend box outside the plot area (I want the legend to be outside at the right side of the plot area).</li> <li>Is there a way to reduce the font size of the text inside the legend box, so that the size of the legend box will be small?</li> </ol>
<ul> <li>You can make the legend text smaller by specifying <code>set_size</code> of <code>FontProperties</code>.</li> <li>Resources: <ul> <li><a href="https://matplotlib.org/tutorials/intermediate/legend_guide.html#legend-guide" rel="noreferrer">Legend guide</a></li> <li><a href="https://matplotlib.org/api/legend_api.html" rel="noreferrer"><code>matplotlib.legend</code></a></li> <li><a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html#matplotlib.pyplot.legend" rel="noreferrer"><code>matplotlib.pyplot.legend</code></a></li> <li><a href="https://matplotlib.org/3.3.1/api/font_manager_api.html" rel="noreferrer"><code>matplotlib.font_manager</code></a> <ul> <li><a href="https://matplotlib.org/3.3.1/api/font_manager_api.html#matplotlib.font_manager.FontProperties.set_size" rel="noreferrer"><code>set_size(self, size)</code></a></li> <li>Valid font size are <em>xx-small</em>, <em>x-small</em>, <em>small</em>, <em>medium</em>, <em>large</em>, <em>x-large</em>, <em>xx-large</em>, <em>larger</em>, <em>smaller</em>, and <em>None</em>.</li> </ul> </li> <li><a href="https://realpython.com/python-matplotlib-guide/" rel="noreferrer">Real Python: Python Plotting With Matplotlib (Guide)</a></li> </ul> </li> </ul> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt from matplotlib.font_manager import FontProperties fontP = FontProperties() fontP.set_size('xx-small') p1, = plt.plot([1, 2, 3], label='Line 1') p2, = plt.plot([3, 2, 1], label='Line 2') plt.legend(handles=[p1, p2], title='title', bbox_to_anchor=(1.05, 1), loc='upper left', prop=fontP) </code></pre> <p><a href="https://i.sstatic.net/OMgiC.png" rel="noreferrer"><img src="https://i.sstatic.net/OMgiC.png" alt="Enter image description here" /></a></p> <ul> <li><code>fontsize='xx-small'</code> also works, without importing <code>FontProperties</code>.</li> </ul> <pre class="lang-py prettyprint-override"><code>plt.legend(handles=[p1, p2], title='title', bbox_to_anchor=(1.05, 1), 
loc='upper left', fontsize='xx-small') </code></pre>
1,002
matplotlib
How to change the font size on a matplotlib plot
https://stackoverflow.com/questions/3899980/how-to-change-the-font-size-on-a-matplotlib-plot
<p>How does one change the font size for all elements (ticks, labels, title) on a matplotlib plot?</p> <p>I know how to change the tick label sizes, this is done with:</p> <pre><code>import matplotlib matplotlib.rc('xtick', labelsize=20) matplotlib.rc('ytick', labelsize=20) </code></pre> <p>But how does one change the rest?</p>
<p>From the <a href="https://matplotlib.org/stable/api/matplotlib_configuration_api.html#matplotlib.rc" rel="noreferrer">matplotlib documentation</a>,</p> <pre><code>font = {'family' : 'normal', 'weight' : 'bold', 'size' : 22} matplotlib.rc('font', **font) </code></pre> <p>This sets the font of all items to the font specified by the kwargs object, <code>font</code>.</p> <p>Alternatively, you could also use the <code>rcParams</code> <code>update</code> method as suggested in <a href="https://stackoverflow.com/a/6441839/605121">this answer</a>:</p> <pre><code>matplotlib.rcParams.update({'font.size': 22}) </code></pre> <p>or</p> <pre><code>import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 22}) </code></pre> <p>You can find a full list of available properties on the <a href="https://matplotlib.org/stable/tutorials/introductory/customizing.html" rel="noreferrer">Customizing matplotlib page</a>.</p>
1,003
matplotlib
Purpose of &quot;%matplotlib inline&quot;
https://stackoverflow.com/questions/43027980/purpose-of-matplotlib-inline
<p>What exactly is the use of <code>%matplotlib inline</code>?</p>
<p><code>%matplotlib</code> is a <a href="http://ipython.readthedocs.io/en/stable/interactive/tutorial.html#magics-explained" rel="noreferrer"><em>magic function</em></a> in IPython. I'll quote the relevant documentation here for you to read for convenience:</p> <blockquote> <p>IPython has a set of predefined ‘magic functions’ that you can call with a command line style syntax. There are two kinds of magics, line-oriented and cell-oriented. Line magics are prefixed with the % character and work much like OS command-line calls: they get as an argument the rest of the line, where arguments are passed without parentheses or quotes. Lines magics can return results and can be used in the right hand side of an assignment. Cell magics are prefixed with a double %%, and they are functions that get as an argument not only the rest of the line, but also the lines below it in a separate argument.</p> </blockquote> <p><code>%matplotlib inline</code> <a href="http://ipython.readthedocs.io/en/stable/interactive/plotting.html" rel="noreferrer">sets the backend of matplotlib to the 'inline' backend</a>:</p> <blockquote> <p>With this backend, the output of plotting commands is displayed inline within frontends like the Jupyter notebook, directly below the code cell that produced it. The resulting plots will then also be stored in the notebook document.</p> </blockquote> <p>When using the 'inline' backend, your matplotlib graphs will be included in your notebook, next to the code. 
It may be worth also reading <em><a href="https://stackoverflow.com/questions/19410042/how-to-make-ipython-notebook-matplotlib-plot-inline">How to make IPython notebook matplotlib plot inline</a></em> for reference on how to use it in your code.</p> <p>If you want interactivity as well, you can use the <a href="http://matplotlib.org/users/whats_new.html#the-nbagg-backend" rel="noreferrer">nbagg backend</a> with <code>%matplotlib notebook</code> (in IPython 3.x), as described <a href="https://stackoverflow.com/questions/19410042/how-to-make-ipython-notebook-matplotlib-plot-inline">here</a>.</p>
1,004
matplotlib
How to make IPython notebook matplotlib plot inline
https://stackoverflow.com/questions/19410042/how-to-make-ipython-notebook-matplotlib-plot-inline
<p>I am trying to use IPython notebook on MacOS X with Python 2.7.2 and IPython 1.1.0.</p> <p>I cannot get matplotlib graphics to show up inline.</p> <pre><code>import matplotlib import numpy as np import matplotlib.pyplot as plt %matplotlib inline </code></pre> <p>I have also tried <code>%pylab inline</code> and the ipython command line arguments <code>--pylab=inline</code> but this makes no difference.</p> <pre><code>x = np.linspace(0, 3*np.pi, 500) plt.plot(x, np.sin(x**2)) plt.title('A simple chirp') plt.show() </code></pre> <p>Instead of inline graphics, I get this:</p> <pre><code>&lt;matplotlib.figure.Figure at 0x110b9c450&gt; </code></pre> <p>And <code>matplotlib.get_backend()</code> shows that I have the <code>'module://IPython.kernel.zmq.pylab.backend_inline'</code> backend.</p>
<p>I used <code>%matplotlib inline</code> in the first cell of the notebook and it works. I think you should try:</p> <pre><code>%matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt </code></pre> <p>You can also always start all your IPython kernels in inline mode by default by setting the following config options in your config files:</p> <pre><code>c.IPKernelApp.matplotlib=&lt;CaselessStrEnum&gt; Default: None Choices: ['auto', 'gtk', 'gtk3', 'inline', 'nbagg', 'notebook', 'osx', 'qt', 'qt4', 'qt5', 'tk', 'wx'] Configure matplotlib for interactive use with the default matplotlib backend. </code></pre>
1,005
matplotlib
When to use cla(), clf() or close() for clearing a plot
https://stackoverflow.com/questions/8213522/when-to-use-cla-clf-or-close-for-clearing-a-plot
<p>Matplotlib offers these functions:</p> <pre class="lang-py prettyprint-override"><code>cla() # Clear axis clf() # Clear figure close() # Close a figure window </code></pre> <p>When should I use each function and what exactly does it do?</p>
<p>They all do different things, since matplotlib uses a hierarchical order in which a figure window contains a figure which may consist of many axes. Additionally, there are functions from the pyplot interface and there are methods on the <code>Figure</code> class. I will discuss both cases below.</p> <h2>pyplot interface</h2> <p><code>pyplot</code> is a module that collects a couple of functions that allow matplotlib to be used in a functional manner. I here assume that <code>pyplot</code> has been imported as <code>import matplotlib.pyplot as plt</code>. In this case, there are three different commands that remove stuff:</p> <p>See <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.html" rel="noreferrer"><code>matplotlib.pyplot</code></a> Functions:</p> <ul> <li><strong><a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.cla.html#matplotlib.pyplot.cla" rel="noreferrer"><code>plt.cla()</code></a> clears an axis</strong>, i.e. the currently active axis in the current figure. It leaves the other axes untouched.</li> <li><strong><a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.clf.html#matplotlib.pyplot.clf" rel="noreferrer"><code>plt.clf()</code></a> clears the entire current figure with all its axes</strong>, but leaves the window opened, such that it may be reused for other plots.</li> <li><strong><a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.close.html#matplotlib.pyplot.close" rel="noreferrer"><code>plt.close()</code></a> closes a window</strong>, which will be the current window, if not specified otherwise.</li> </ul> <p>Which functions suits you best depends thus on your use-case.</p> <p>The <code>close()</code> function furthermore allows one to specify which window should be closed. 
The argument can either be a number or name given to a window when it was created using <code>figure(number_or_name)</code> or it can be a figure instance <code>fig</code> obtained, i.e., using <code>fig = figure()</code>. If no argument is given to <code>close()</code>, the currently active window will be closed. Furthermore, there is the syntax <code>close('all')</code>, which closes all figures.</p> <h2>methods of the Figure class</h2> <p>Additionally, the <code>Figure</code> class provides methods for clearing figures. I'll assume in the following that <code>fig</code> is an instance of a <code>Figure</code>:</p> <p><a href="https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure.clf" rel="noreferrer"><code>fig.clf()</code> clears the entire figure</a>. This call is equivalent to <code>plt.clf()</code> only if <code>fig</code> is the current figure.</p> <p><code>fig.clear()</code> is a synonym for <code>fig.clf()</code>.</p> <p>Note that even <code>del fig</code> will not close the associated figure window. As far as I know the only way to close a figure window is using <code>plt.close(fig)</code> as described above.</p>
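A small script illustrating the hierarchy described above; it assumes matplotlib is installed and uses the non-interactive Agg backend so that no window is involved:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: nothing is displayed
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot([1, 2, 3])
ax2.plot([3, 2, 1])

ax1.cla()  # clears only ax1; ax2 keeps its line
print(len(ax1.lines), len(ax2.lines))  # 0 1

fig.clf()  # clears the whole figure: both axes are removed
print(len(fig.axes))  # 0

plt.close(fig)  # finally releases the figure window/object itself
```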
1,006
matplotlib
How do I set the figure title and axes labels font size?
https://stackoverflow.com/questions/12444716/how-do-i-set-the-figure-title-and-axes-labels-font-size
<p>I am creating a figure in Matplotlib like this:</p> <pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt fig = plt.figure() plt.plot(data) fig.suptitle('test title') plt.xlabel('xlabel') plt.ylabel('ylabel') fig.savefig('test.jpg') </code></pre> <p>I want to specify font sizes for the figure title and the axis labels. I need all three to be different font sizes, so setting a global font size (<code>mpl.rcParams['font.size']=x</code>) is not what I want. How do I set font sizes for the figure title and the axis labels individually?</p>
<p>Functions dealing with text like <code>label</code>, <code>title</code>, etc. accept the same parameters as <a href="http://matplotlib.org/users/text_props.html" rel="noreferrer"><code>matplotlib.text.Text</code></a>. For the font size you can use <code>size/fontsize</code>:</p> <pre><code>from matplotlib import pyplot as plt fig = plt.figure() plt.plot(data) fig.suptitle('test title', fontsize=20) plt.xlabel('xlabel', fontsize=18) plt.ylabel('ylabel', fontsize=16) fig.savefig('test.jpg') </code></pre> <p>For globally setting <code>title</code> and <code>label</code> sizes, <a href="http://matplotlib.org/users/customizing.html" rel="noreferrer"><code>mpl.rcParams</code></a> contains <code>axes.titlesize</code> and <code>axes.labelsize</code>. (From the page):</p> <pre><code>axes.titlesize : large # fontsize of the axes title axes.labelsize : medium # fontsize of the x any y labels </code></pre> <p>(As far as I can see, there is no way to set <code>x</code> and <code>y</code> label sizes separately.)</p> <p>And I see that <code>axes.titlesize</code> does not affect <code>suptitle</code>. I guess you need to set that manually.</p>
1,007
matplotlib
Changing the tick frequency on the x or y axis
https://stackoverflow.com/questions/12608788/changing-the-tick-frequency-on-the-x-or-y-axis
<p>I am trying to fix how python plots my data. Say:</p> <pre class="lang-py prettyprint-override"><code>x = [0, 5, 9, 10, 15] y = [0, 1, 2, 3, 4] matplotlib.pyplot.plot(x, y) matplotlib.pyplot.show() </code></pre> <p>The x axis' ticks are plotted in intervals of 5. Is there a way to make it show intervals of 1?</p>
<p>You could explicitly set where you want to tick marks with <code>plt.xticks</code>:</p> <pre><code>plt.xticks(np.arange(min(x), max(x)+1, 1.0)) </code></pre> <hr> <p>For example,</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x = [0,5,9,10,15] y = [0,1,2,3,4] plt.plot(x,y) plt.xticks(np.arange(min(x), max(x)+1, 1.0)) plt.show() </code></pre> <hr> <p>(<code>np.arange</code> was used rather than Python's <code>range</code> function just in case <code>min(x)</code> and <code>max(x)</code> are floats instead of ints.) </p> <hr> <p>The <code>plt.plot</code> (or <code>ax.plot</code>) function will automatically set default <code>x</code> and <code>y</code> limits. If you wish to keep those limits, and just change the stepsize of the tick marks, then you could use <code>ax.get_xlim()</code> to discover what limits Matplotlib has already set. </p> <pre><code>start, end = ax.get_xlim() ax.xaxis.set_ticks(np.arange(start, end, stepsize)) </code></pre> <p>The default tick formatter should do a decent job rounding the tick values to a sensible number of significant digits. However, if you wish to have more control over the format, you can define your own formatter. For example,</p> <pre><code>ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%0.1f')) </code></pre> <p>Here's a runnable example:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import matplotlib.ticker as ticker x = [0,5,9,10,15] y = [0,1,2,3,4] fig, ax = plt.subplots() ax.plot(x,y) start, end = ax.get_xlim() ax.xaxis.set_ticks(np.arange(start, end, 0.712123)) ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%0.1f')) plt.show() </code></pre>
1,008
matplotlib
How do I change the figure size with subplots?
https://stackoverflow.com/questions/14770735/how-do-i-change-the-figure-size-with-subplots
<p>How do I increase the figure size for <a href="https://matplotlib.org/2.0.2/examples/pylab_examples/subplots_demo.html" rel="noreferrer">this figure</a>?</p> <p>This does nothing:</p> <pre class="lang-py prettyprint-override"><code>f.figsize(15, 15) </code></pre> <p>Example code from the link:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import numpy as np # Simple data to display in various forms x = np.linspace(0, 2 * np.pi, 400) y = np.sin(x ** 2) plt.close('all') # Just a figure and one subplot f, ax = plt.subplots() ax.plot(x, y) ax.set_title('Simple plot') # Two subplots, the axes array is 1-d f, axarr = plt.subplots(2, sharex=True) axarr[0].plot(x, y) axarr[0].set_title('Sharing X axis') axarr[1].scatter(x, y) # Two subplots, unpack the axes array immediately f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) ax1.plot(x, y) ax1.set_title('Sharing Y axis') ax2.scatter(x, y) # Three subplots sharing both x/y axes f, (ax1, ax2, ax3) = plt.subplots(3, sharex=True, sharey=True) ax1.plot(x, y) ax1.set_title('Sharing both axes') ax2.scatter(x, y) ax3.scatter(x, 2 * y ** 2 - 1, color='r') # Fine-tune figure; make subplots close to each other and hide x ticks for # all but bottom plot. 
f.subplots_adjust(hspace=0) plt.setp([a.get_xticklabels() for a in f.axes[:-1]], visible=False) # row and column sharing f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='col', sharey='row') ax1.plot(x, y) ax1.set_title('Sharing x per column, y per row') ax2.scatter(x, y) ax3.scatter(x, 2 * y ** 2 - 1, color='r') ax4.plot(x, 2 * y ** 2 - 1, color='r') # Four axes, returned as a 2-d array f, axarr = plt.subplots(2, 2) axarr[0, 0].plot(x, y) axarr[0, 0].set_title('Axis [0,0]') axarr[0, 1].scatter(x, y) axarr[0, 1].set_title('Axis [0,1]') axarr[1, 0].plot(x, y ** 2) axarr[1, 0].set_title('Axis [1,0]') axarr[1, 1].scatter(x, y ** 2) axarr[1, 1].set_title('Axis [1,1]') # Fine-tune figure; hide x ticks for top plots and y ticks for right plots plt.setp([a.get_xticklabels() for a in axarr[0, :]], visible=False) plt.setp([a.get_yticklabels() for a in axarr[:, 1]], visible=False) # Four polar axes f, axarr = plt.subplots(2, 2, subplot_kw=dict(projection='polar')) axarr[0, 0].plot(x, y) axarr[0, 0].set_title('Axis [0,0]') axarr[0, 1].scatter(x, y) axarr[0, 1].set_title('Axis [0,1]') axarr[1, 0].plot(x, y ** 2) axarr[1, 0].set_title('Axis [1,0]') axarr[1, 1].scatter(x, y ** 2) axarr[1, 1].set_title('Axis [1,1]') # Fine-tune figure; make subplots farther from each other. f.subplots_adjust(hspace=0.3) plt.show() </code></pre>
<p>Use <a href="https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure.set_figwidth" rel="noreferrer"><code>.set_figwidth</code></a> and <a href="https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure.set_figheight" rel="noreferrer"><code>.set_figheight</code></a> on the <a href="https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure" rel="noreferrer"><code>matplotlib.figure.Figure</code></a> object returned by <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplots.html" rel="noreferrer"><code>plt.subplots()</code></a>, or set both with <a href="https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure.set_size_inches" rel="noreferrer"><code>f.set_size_inches(w, h)</code></a>.</p> <pre><code>f.set_figheight(15) f.set_figwidth(15) </code></pre> <p><strong>Note:</strong> Unlike <code>set_size_inches()</code>, where the measurement unit is explicitly mentioned in the function's name, this is not the case for <code>set_figwidth()</code> and <code>set_figheight()</code>, which also use inches. This information is provided by the documentation of the function.</p> <p>Alternatively, when using <code>.subplots()</code> to create a new figure, specify <code>figsize=</code>:</p> <pre><code>f, axs = plt.subplots(2, 2, figsize=(15, 15)) </code></pre> <p><code>.subplots</code> accepts <code>**fig_kw</code>, which are passed to <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.figure.html#matplotlib.pyplot.figure" rel="noreferrer"><code>pyplot.figure</code></a>, and is where <code>figsize</code> can be found.</p> <p>Setting the figure's size may trigger the <code>ValueError</code> exception:</p> <pre><code>Image size of 240000x180000 pixels is too large. 
It must be less than 2^16 in each direction </code></pre> <p>This is a common problem when using the <code>set_fig*()</code> functions under the mistaken assumption that they work in pixels rather than inches (obviously 240000*180000 inches is far too large).</p>
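<p>The limit in that error applies to rendered pixels, which are the figure size in inches multiplied by the figure's <code>dpi</code>. A minimal sketch of that relationship (the <code>Agg</code> backend is forced here only so the snippet runs headlessly):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-GUI backend, so no window is needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
fig.set_figwidth(15)   # both setters take inches, not pixels
fig.set_figheight(10)

# The rendered image is inches * dpi in each direction, and it is this
# pixel count that must stay below 2**16.
width_px = fig.get_figwidth() * fig.dpi
height_px = fig.get_figheight() * fig.dpi
```

<p>So an error citing 240000x180000 pixels usually means inches were passed where pixels were intended; dividing the desired pixel size by <code>fig.dpi</code> gives the value to pass instead.</p>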
1,009
matplotlib
pyplot scatter plot marker size
https://stackoverflow.com/questions/14827650/pyplot-scatter-plot-marker-size
<p>In the pyplot document for scatter plot:</p> <pre class="lang-py prettyprint-override"><code>matplotlib.pyplot.scatter(x, y, s=20, c='b', marker='o', cmap=None, norm=None, vmin=None, vmax=None, alpha=None, linewidths=None, faceted=True, verts=None, hold=None, **kwargs) </code></pre> <p>The marker size</p> <blockquote> <p>s: size in points^2. It is a scalar or an array of the same length as x and y.</p> </blockquote> <p>What kind of unit is <code>points^2</code>? What does it mean? Does <code>s=100</code> mean <code>10 pixel x 10 pixel</code>?</p> <p>Basically I'm trying to make scatter plots with different marker sizes, and I want to figure out what does the <code>s</code> number mean.</p>
<p>This can be a somewhat confusing way of defining the size, but you are basically specifying the <em>area</em> of the marker. This means that to double the width (or height) of the marker, you need to increase <code>s</code> by a factor of 4 [because A = W*H, so (2W)*(2H) = 4A].</p> <p>There is a reason, however, that the size of markers is defined in this way. Because area scales as the square of width, doubling the width actually appears to increase the size by more than a factor of 2 (in fact it increases it by a factor of 4). To see this, consider the following two examples and the output they produce.</p> <pre class="lang-py prettyprint-override"><code># doubling the width of markers x = [0,2,4,6,8,10] y = [0]*len(x) s = [20*4**n for n in range(len(x))] plt.scatter(x,y,s=s) plt.show() </code></pre> <p>gives</p> <p><img src="https://i.sstatic.net/m8xcU.png" alt="enter image description here" /></p> <p>Notice how the size increases very quickly. If instead we have</p> <pre class="lang-py prettyprint-override"><code># doubling the area of markers x = [0,2,4,6,8,10] y = [0]*len(x) s = [20*2**n for n in range(len(x))] plt.scatter(x,y,s=s) plt.show() </code></pre> <p>gives</p> <p><img src="https://i.sstatic.net/Znaw8.png" alt="enter image description here" /></p> <p>Now the apparent size of the markers increases roughly linearly in an intuitive fashion.</p> <p>As for the exact meaning of what a 'point' is, it is fairly arbitrary for plotting purposes; you can just scale all of your sizes by a constant until they look reasonable.</p> <p><strong>Edit:</strong> (In response to comment from @Emma)</p> <p>It's probably confusing wording on my part. The question asked about doubling the width of a circle, so in the first picture each circle (moving left to right) has double the width of the previous one, which makes the area an exponential with base 4.
Similarly, in the second example, each circle has <em>area</em> double the last one, which gives an exponential with base 2.</p> <p>However, it is in the second example (where we are scaling area) that doubling the area appears to make the circle twice as big to the eye. Thus, if we want a circle to appear a factor of <code>n</code> bigger, we would increase the area by a factor of <code>n</code>, not the radius, so the apparent size scales linearly with the area.</p> <p><strong>Edit</strong> to visualize the comment by @TomaszGandor:</p> <p>This is what it looks like for different functions of the marker size:</p> <p><a href="https://i.sstatic.net/3H1BQ.png" rel="noreferrer"><img src="https://i.sstatic.net/3H1BQ.png" alt="Exponential, Square, or Linear size" /></a></p> <pre class="lang-py prettyprint-override"><code>x = [0,2,4,6,8,10,12,14,16,18] s_exp = [20*2**n for n in range(len(x))] s_square = [20*n**2 for n in range(len(x))] s_linear = [20*n for n in range(len(x))] plt.scatter(x,[1]*len(x),s=s_exp, label='$s=2^n$', lw=1) plt.scatter(x,[0]*len(x),s=s_square, label='$s=n^2$') plt.scatter(x,[-1]*len(x),s=s_linear, label='$s=n$') plt.ylim(-1.5,1.5) plt.legend(loc='center left', bbox_to_anchor=(1.1, 0.5), labelspacing=3) plt.show() </code></pre>
1,010
matplotlib
How to remove axis, legends, and white padding
https://stackoverflow.com/questions/9295026/how-to-remove-axis-legends-and-white-padding
<p>I would like to apply colormap to an image, and write the resulting image, without using axes, labels, titles, or anything automatically added by matplotlib. Here is what I did:</p> <pre><code>def make_image(inputname,outputname): data = mpimg.imread(inputname)[:,:,0] fig = plt.imshow(data) fig.set_cmap('hot') fig.axes.get_xaxis().set_visible(False) fig.axes.get_yaxis().set_visible(False) plt.savefig(outputname) </code></pre> <p>It successfully removes the axis of the figure, but the figure saved, presents a white padding, and a frame around the actual image.</p> <p>How can I remove them (at least the white padding)?</p>
<p>The <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.axis.html" rel="noreferrer"><code>axis('off')</code></a> method resolves one of the problems more succinctly than separately changing each axis and border. It still leaves the white space around the border however. Adding <code>bbox_inches='tight'</code> to the <code>savefig</code> command almost gets you there; you can see in the example below that the white space left is much smaller, but still present.</p> <p>Newer versions of matplotlib may require <code>bbox_inches=0</code> instead of the string <code>'tight'</code> (via @episodeyang and @kadrach)</p> <pre><code>from numpy import random import matplotlib.pyplot as plt data = random.random((5,5)) img = plt.imshow(data, interpolation='nearest') img.set_cmap('hot') plt.axis('off') plt.savefig(&quot;test.png&quot;, bbox_inches='tight') </code></pre> <p><img src="https://i.sstatic.net/4ZMuW.png" alt="enter image description here" /></p>
1,011
matplotlib
How to set the axis limits in Matplotlib?
https://stackoverflow.com/questions/3777861/how-to-set-the-axis-limits-in-matplotlib
<p>I need help with setting the limits of y-axis on matplotlib. Here is the code that I tried, unsuccessfully.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt plt.figure(1, figsize = (8.5,11)) plt.suptitle('plot title') ax = [] aPlot = plt.subplot(321, axisbg = 'w', title = &quot;Year 1&quot;) ax.append(aPlot) plt.plot(paramValues,plotDataPrice[0], color = '#340B8C', marker = 'o', ms = 5, mfc = '#EB1717') plt.xticks(paramValues) plt.ylabel('Average Price') plt.xlabel('Mark-up') plt.grid(True) plt.ylim((25,250)) </code></pre> <p>With the data I have for this plot, I get y-axis limits of 20 and 200. However, I want the limits to be 20 and 250.</p>
<p>Get current axis via <code>plt.gca()</code>, and then set its limits:</p> <pre><code>ax = plt.gca() ax.set_xlim([xmin, xmax]) ax.set_ylim([ymin, ymax]) </code></pre>
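<p>A minimal, self-contained sketch of that pattern (the data here is invented purely for illustration):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the snippet runs anywhere
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [50, 100, 150])  # hypothetical price data

ax = plt.gca()
ax.set_ylim([25, 250])  # force the y-axis to span 25..250
ax.set_xlim([0, 4])
```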
1,012
matplotlib
How to adjust padding with cutoff or overlapping labels
https://stackoverflow.com/questions/6774086/how-to-adjust-padding-with-cutoff-or-overlapping-labels
<h2>Updated MRE with subplots</h2> <ul> <li>I'm not sure of the usefulness of the original question and MRE. The margin padding seems to be properly adjusted for large x and y labels.</li> <li>The issue is reproducible with subplots.</li> <li>Using <code>matplotlib 3.4.2</code></li> </ul> <pre class="lang-py prettyprint-override"><code>fig, axes = plt.subplots(ncols=2, nrows=2, figsize=(8, 6)) axes = axes.flatten() for ax in axes: ax.set_ylabel(r'$\ln\left(\frac{x_a-x_b}{x_a-x_c}\right)$') ax.set_xlabel(r'$\ln\left(\frac{x_a-x_d}{x_a-x_e}\right)$') plt.show() </code></pre> <p><a href="https://i.sstatic.net/RZ0QA.png" rel="noreferrer"><img src="https://i.sstatic.net/RZ0QA.png" alt="enter image description here" /></a></p> <h2>Original</h2> <p>I am plotting a dataset using <code>matplotlib</code> where I have an xlabel that is quite &quot;tall&quot; (it's a formula rendered in TeX that contains a fraction and is therefore has the height equivalent of a couple of lines of text).</p> <p>In any case, the bottom of the formula is always cut off when I draw the figures. Changing figure size doesn't seem to help this, and I haven't been able to figure out how to shift the x-axis &quot;up&quot; to make room for the xlabel. 
Something like that would be a reasonable temporary solution, but what would be nice would be to have a way to make matplotlib recognize automatically that the label is cut off and resize accordingly.</p> <p>Here's an example of what I mean:</p> <pre><code>import matplotlib.pyplot as plt plt.figure() plt.ylabel(r'$\ln\left(\frac{x_a-x_b}{x_a-x_c}\right)$') plt.xlabel(r'$\ln\left(\frac{x_a-x_d}{x_a-x_e}\right)$', fontsize=50) plt.title('Example with matplotlib 3.4.2\nMRE no longer an issue') plt.show() </code></pre> <p><a href="https://i.sstatic.net/pJ9qx.png" rel="noreferrer"><img src="https://i.sstatic.net/pJ9qx.png" alt="enter image description here" /></a></p> <p>The entire ylabel is visible, however, the xlabel is cut off at the bottom.</p> <p>In the case this is a machine-specific problem, I am running this on OSX 10.6.8 with matplotlib 1.0.0</p>
<p>Use:</p> <pre><code>import matplotlib.pyplot as plt plt.gcf().subplots_adjust(bottom=0.15) # alternate option without .gcf plt.subplots_adjust(bottom=0.15) </code></pre> <p>to make room for the label, where <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.gcf.html" rel="noreferrer"><code>plt.gcf()</code></a> means <strong>get the current figure</strong>. <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.gca.html" rel="noreferrer"><code>plt.gca()</code></a>, which gets the current <code>Axes</code>, can also be used.</p> <p>Edit:</p> <p>Since I gave the answer, <code>matplotlib</code> has added the <strong><a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.tight_layout.html#matplotlib.pyplot.tight_layout" rel="noreferrer"><code>plt.tight_layout()</code></a></strong> function.</p> <p><a href="https://matplotlib.org/stable/tutorials/intermediate/tight_layout_guide.html" rel="noreferrer">See matplotlib Tutorials: Tight Layout Guide</a></p> <p>So I suggest using it:</p> <pre><code>fig, axes = plt.subplots(ncols=2, nrows=2, figsize=(8, 6)) axes = axes.flatten() for ax in axes: ax.set_ylabel(r'$\ln\left(\frac{x_a-x_b}{x_a-x_c}\right)$') ax.set_xlabel(r'$\ln\left(\frac{x_a-x_d}{x_a-x_e}\right)$') plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.sstatic.net/uTnEP.png" rel="noreferrer"><img src="https://i.sstatic.net/uTnEP.png" alt="enter image description here" /></a></p>
1,013
matplotlib
Rotate axis tick labels
https://stackoverflow.com/questions/10998621/rotate-axis-tick-labels
<p>I can't figure out how to rotate the text on the X Axis. It's a time stamp, so as the number of samples increases, they get closer and closer until they overlap. I'd like to rotate the text 90 degrees so that as the samples get closer together, they aren't overlapping.</p> <p>Below is what I have; it works fine with the exception that I can't figure out how to rotate the X axis text.</p> <pre><code>import sys import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt import datetime font = {'family' : 'normal', 'weight' : 'bold', 'size' : 8} matplotlib.rc('font', **font) values = open('stats.csv', 'r').readlines() time = [datetime.datetime.fromtimestamp(float(i.split(',')[0].strip())) for i in values[1:]] delay = [float(i.split(',')[1].strip()) for i in values[1:]] plt.plot(time, delay) plt.grid(b='on') plt.savefig('test.png') </code></pre>
<p>This works for me:</p> <pre><code>plt.xticks(rotation=90) </code></pre>
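<p>For the object-oriented interface, the rotation can also be set when assigning the tick labels; a small sketch (the label strings are made up for illustration):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [5, 7, 4])
ax.set_xticks([1, 2, 3])
ax.set_xticklabels(["2021-01-01", "2021-01-02", "2021-01-03"],
                   rotation=90)  # rotate each label 90 degrees
```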
1,014
matplotlib
Hiding axis text in matplotlib plots
https://stackoverflow.com/questions/2176424/hiding-axis-text-in-matplotlib-plots
<p>I'm trying to plot a figure without tickmarks or numbers on either of the axes (I use axes in the traditional sense, not the matplotlib nomenclature!). An issue I have come across is where matplotlib adjusts the x(y)ticklabels by subtracting a value N, then adds N at the end of the axis.</p> <p>This may be vague, but the following simplified example highlights the issue, with '6.18' being the offending value of N:</p> <pre><code>import matplotlib.pyplot as plt import random from numpy import arange prefix = 6.18 rx = [prefix+(0.001*random.random()) for i in arange(100)] ry = [prefix+(0.001*random.random()) for i in arange(100)] plt.plot(rx,ry,'ko') frame1 = plt.gca() for xlabel_i in frame1.axes.get_xticklabels(): xlabel_i.set_visible(False) xlabel_i.set_fontsize(0.0) for xlabel_i in frame1.axes.get_yticklabels(): xlabel_i.set_fontsize(0.0) xlabel_i.set_visible(False) for tick in frame1.axes.get_xticklines(): tick.set_visible(False) for tick in frame1.axes.get_yticklines(): tick.set_visible(False) plt.show() </code></pre> <p>The three things I would like to know are:</p> <ol> <li><p>How to turn off this behaviour in the first place (although in most cases it is useful, it is not always!) I have looked through <code>matplotlib.axis.XAxis</code> and cannot find anything appropriate</p></li> <li><p>How can I make N disappear (i.e. <code>X.set_visible(False)</code>)</p></li> <li><p>Is there a better way to do the above anyway? My final plot would be 4x4 subplots in a figure, if that is relevant.</p></li> </ol>
<p>Instead of hiding each element, you can hide the whole axis:</p> <pre><code>frame1.axes.get_xaxis().set_visible(False) frame1.axes.get_yaxis().set_visible(False) </code></pre> <p>Or, you can set the ticks to an empty list:</p> <pre><code>frame1.axes.get_xaxis().set_ticks([]) frame1.axes.get_yaxis().set_ticks([]) </code></pre> <p>In this second option, you can still use <code>plt.xlabel()</code> and <code>plt.ylabel()</code> to add labels to the axes.</p>
1,015
matplotlib
Why do many examples use `fig, ax = plt.subplots()`
https://stackoverflow.com/questions/34162443/why-do-many-examples-use-fig-ax-plt-subplots
<p>I'm learning to use <code>matplotlib</code> by studying examples, and a lot of examples seem to include a line like the following before creating a single plot...</p> <pre><code>fig, ax = plt.subplots() </code></pre> <p>Here are some examples...</p> <ul> <li><a href="https://stackoverflow.com/questions/11244514/modify-tick-label-text">Modify tick label text</a></li> <li><a href="http://matplotlib.org/examples/pylab_examples/boxplot_demo2.html" rel="noreferrer">http://matplotlib.org/examples/pylab_examples/boxplot_demo2.html</a></li> </ul> <p>I see this function used a lot, even though the example is only attempting to create a single chart. Is there some other advantage? The official demo for <code>subplots()</code> also uses <code>f, ax = subplots</code> when creating a single chart, and it only ever references ax after that. This is the code they use.</p> <pre><code># Just a figure and one subplot f, ax = plt.subplots() ax.plot(x, y) ax.set_title('Simple plot') </code></pre>
<p><code>plt.subplots()</code> is a function that returns a tuple containing a figure and axes object(s). Thus when using <code>fig, ax = plt.subplots()</code> you unpack this tuple into the variables <code>fig</code> and <code>ax</code>. Having <code>fig</code> is useful if you want to change figure-level attributes or save the figure as an image file later (e.g. with <code>fig.savefig('yourfilename.png')</code>). You certainly don't have to use the returned figure object but many people do use it later so it's common to see. Also, all axes objects (the objects that have plotting methods), have a parent figure object anyway, thus:</p> <pre><code>fig, ax = plt.subplots() </code></pre> <p>is more concise than this:</p> <pre><code>fig = plt.figure() ax = fig.add_subplot(111) </code></pre>
1,016
matplotlib
Plot logarithmic axes
https://stackoverflow.com/questions/773814/plot-logarithmic-axes
<p>I want to plot a graph with one logarithmic axis using matplotlib.</p> <p>Sample program:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt a = [pow(10, i) for i in range(10)] # exponential fig = plt.figure() ax = fig.add_subplot(2, 1, 1) line, = ax.plot(a, color='blue', lw=2) plt.show() </code></pre>
<p>You can use the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.set_yscale.html" rel="noreferrer"><code>Axes.set_yscale</code></a> method. That allows you to change the scale after the <code>Axes</code> object is created. That would also allow you to build a control to let the user pick the scale if you needed to.</p> <p>The relevant line to add is:</p> <pre><code>ax.set_yscale('log') </code></pre> <p>You can use <code>'linear'</code> to switch back to a linear scale. Here's what your code would look like:</p> <pre><code>import matplotlib.pyplot as plt a = [pow(10, i) for i in range(10)] fig = plt.figure() ax = fig.add_subplot(2, 1, 1) line, = ax.plot(a, color='blue', lw=2) ax.set_yscale('log') plt.show() </code></pre> <p><a href="https://i.sstatic.net/CmQwl.png" rel="noreferrer"><img src="https://i.sstatic.net/CmQwl.png" alt="result chart" /></a></p>
1,017
matplotlib
How to change legend fontsize with matplotlib.pyplot
https://stackoverflow.com/questions/7125009/how-to-change-legend-fontsize-with-matplotlib-pyplot
<p>Simple question here: I'm trying to get the size of my legend using <code>matplotlib.pyplot</code> to be smaller (i.e., the text to be smaller). The code I'm using goes something like this:</p> <pre><code>plot.figure() plot.scatter(k, sum_cf, color='black', label='Sum of Cause Fractions') plot.scatter(k, data[:, 0], color='b', label='Dis 1: cf = .6, var = .2') plot.scatter(k, data[:, 1], color='r', label='Dis 2: cf = .2, var = .1') plot.scatter(k, data[:, 2], color='g', label='Dis 3: cf = .1, var = .01') plot.legend(loc=2) </code></pre>
<p>You can set an individual font size for the legend by adjusting the <code>prop</code> keyword. </p> <pre><code>plot.legend(loc=2, prop={'size': 6}) </code></pre> <p>This takes a dictionary of keywords corresponding to <code>matplotlib.font_manager.FontProperties</code> properties. See the <a href="http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.legend" rel="noreferrer">documentation for legend</a>:</p> <blockquote> <p>Keyword arguments:</p> <pre><code>prop: [ None | FontProperties | dict ] A matplotlib.font_manager.FontProperties instance. If prop is a dictionary, a new instance will be created with prop. If None, use rc settings. </code></pre> </blockquote> <p>It is also possible, as of version <a href="http://matplotlib.org/api/legend_api.html#matplotlib.legend.Legend" rel="noreferrer">1.2.1</a>, to use the keyword <code>fontsize</code>.</p>
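<p>A minimal sketch showing both spellings side by side (the data and label are invented for illustration):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

plt.scatter([1, 2, 3], [4, 5, 6], color='black',
            label='Sum of Cause Fractions')  # hypothetical series

# Older, always-available form:
#   leg = plt.legend(loc=2, prop={'size': 6})
# Newer keyword (matplotlib >= 1.2.1):
leg = plt.legend(loc=2, fontsize=6)
```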
1,018
matplotlib
What does the argument mean in fig.add_subplot(111)?
https://stackoverflow.com/questions/3584805/what-does-the-argument-mean-in-fig-add-subplot111
<p>Sometimes I come across code such as this:</p> <pre><code>import matplotlib.pyplot as plt x = [1, 2, 3, 4, 5] y = [1, 4, 9, 16, 25] fig = plt.figure() fig.add_subplot(111) plt.scatter(x, y) plt.show() </code></pre> <p>Which produces:</p> <p><img src="https://i.sstatic.net/yCOG3.png" alt="Example plot produced by the included code"></p> <p>I've been reading the documentation like crazy but I can't find an explanation for the <code>111</code>. sometimes I see a <code>212</code>.</p> <p>What does the argument of <code>fig.add_subplot()</code> mean?</p>
<p>These are subplot grid parameters encoded as a single integer. For example, "111" means "1x1 grid, first subplot" and "234" means "2x3 grid, 4th subplot".</p> <p>Alternative form for <code>add_subplot(111)</code> is <code>add_subplot(1, 1, 1)</code>.</p>
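<p>The two spellings produce the same axes; a quick sketch verifying that both forms land in the same grid cell:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig1 = plt.figure()
ax_short = fig1.add_subplot(234)     # 2x3 grid, 4th subplot

fig2 = plt.figure()
ax_long = fig2.add_subplot(2, 3, 4)  # same cell, arguments written out

# Both axes occupy the same rectangle (in figure-relative coordinates).
same_cell = ax_short.get_position().bounds == ax_long.get_position().bounds
```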
1,019
matplotlib
&quot;UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.&quot; when plotting figure with pyplot on Pycharm
https://stackoverflow.com/questions/56656777/userwarning-matplotlib-is-currently-using-agg-which-is-a-non-gui-backend-so
<p>I am trying to plot a simple graph using pyplot, e.g.:</p> <pre><code>import matplotlib.pyplot as plt plt.plot([1,2,3],[5,7,4]) plt.show() </code></pre> <p>but the figure does not appear and I get the following message:</p> <pre><code>UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure. </code></pre> <p>I found and tried some advice to re-configure the &quot;backend&quot; mentioned in that warning, like so:</p> <pre><code>import matplotlib matplotlib.use('TkAgg') import matplotlib.pyplot as plt </code></pre> <p>but this gives me an error message:</p> <pre><code>ModuleNotFoundError: No module named 'tkinter' </code></pre> <p>I assumed that I had to install this module separately, but <code>pip install tkinter</code> does not work:</p> <pre><code>Collecting tkinter Could not find a version that satisfies the requirement tkinter (from versions: ) No matching distribution found for tkinter </code></pre> <p>How can I make Matplotlib display the graph?</p> <hr /> <p><sub><strong>See also</strong>: <a href="https://stackoverflow.com/questions/76105218">Why does tkinter (or turtle) seem to be missing or broken? Shouldn&#39;t it be part of the standard library?</a> . This question is not a duplicate, because the answers discuss other backends besides the Tkinter one.</sub></p> <p><sub>Also see <a href="https://stackoverflow.com/questions/37604289">_tkinter.TclError: no display name and no $DISPLAY environment variable</a> for issues with attempts to use Matplotlib remotely.</sub></p>
<h3>Solution 1: install the GUI backend <code>tk</code></h3> <p>I found a solution to my problem (thanks to the help of <a href="https://stackoverflow.com/users/4124317/importanceofbeingernest">ImportanceOfBeingErnest</a>).</p> <p>All I had to do was to install <code>tkinter</code> through the Linux bash terminal using the following command:</p> <pre><code>sudo apt-get install python3-tk </code></pre> <p>instead of installing it with <code>pip</code> or directly in the virtual environment in Pycharm.</p> <h3>Solution 2: install any of the <code>matplotlib</code> supported GUI backends</h3> <ul> <li>solution 1 works fine because you get a GUI backend... in this case the <code>TkAgg</code></li> <li>however you can also fix the issue by installing any of the matplotlib GUI backends like <code>Qt5Agg</code>, <code>GTKAgg</code>, <code>Qt4Agg</code>, etc. <ul> <li>for example <code>pip install pyqt5</code> will fix the issue also</li> </ul> </li> </ul> <p>NOTE:</p> <ul> <li>usually this error appears when you <code>pip install matplotlib</code> and try to display a plot in a GUI window without having a Python module for GUI display.</li> <li>The authors of <code>matplotlib</code> made the PyPI package not depend on any GUI backend because some people <strong>need</strong> <code>matplotlib</code> without any GUI backend.</li> </ul>
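<p>Whichever backend you install, you can check (or force) the one matplotlib picked; a small sketch:</p>

```python
import matplotlib

# Selecting a backend should happen before pyplot is imported.
# "Agg" is the GUI-less fallback; "TkAgg" needs python3-tk, and
# the Qt backends need a Qt binding such as PyQt5.
matplotlib.use("Agg")

import matplotlib.pyplot as plt

active = matplotlib.get_backend()
```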
1,020
matplotlib
Reverse colormap in matplotlib
https://stackoverflow.com/questions/3279560/reverse-colormap-in-matplotlib
<p>I would like to know how to simply reverse the color order of a given colormap in order to use it with plot_surface.</p>
<p>The standard colormaps also all have reversed versions. They have the same names with <code>_r</code> tacked on to the end. (<a href="https://matplotlib.org/stable/gallery/color/colormap_reference.html#reversed-colormaps" rel="noreferrer">Documentation here.</a>)</p>
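<p>A minimal sketch of the <code>_r</code> convention, shown here with <code>viridis</code> (any named colormap works the same way, including in <code>plot_surface(..., cmap='hot_r')</code>):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

data = np.arange(100).reshape(10, 10)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(data, cmap="viridis")    # low values dark, high values bright
ax2.imshow(data, cmap="viridis_r")  # same colormap, order reversed
```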
1,021
matplotlib
Improve subplot size/spacing with many subplots
https://stackoverflow.com/questions/6541123/improve-subplot-size-spacing-with-many-subplots
<p>I need to generate a whole bunch of vertically-stacked plots in matplotlib. The result will be saved using <code>savefig</code> and viewed on a webpage, so I don't care how tall the final image is, as long as the subplots are spaced so they don't overlap.</p> <p>No matter how big I allow the figure to be, the subplots always seem to overlap.</p> <p>My code currently looks like</p> <pre><code>import matplotlib.pyplot as plt import my_other_module titles, x_lists, y_lists = my_other_module.get_data() fig = plt.figure(figsize=(10,60)) for i, y_list in enumerate(y_lists): plt.subplot(len(titles), 1, i) plt.xlabel(&quot;Some X label&quot;) plt.ylabel(&quot;Some Y label&quot;) plt.title(titles[i]) plt.plot(x_lists[i],y_list) fig.savefig('out.png', dpi=100) </code></pre>
<p>Please review <a href="https://matplotlib.org/stable/users/explain/axes/tight_layout_guide.html" rel="noreferrer">matplotlib: Tight Layout guide</a> and try using <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.tight_layout.html" rel="noreferrer"><code>matplotlib.pyplot.tight_layout</code></a>, or <a href="https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure.tight_layout" rel="noreferrer"><code>matplotlib.figure.Figure.tight_layout</code></a></p> <p>As a quick example:</p> <pre><code>import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=4, ncols=4, figsize=(8, 8)) fig.tight_layout() # Or equivalently, &quot;plt.tight_layout()&quot; plt.show() </code></pre> <hr /> <p>Without Tight Layout</p> <p><a href="https://i.sstatic.net/U7agc.png" rel="noreferrer"><img src="https://i.sstatic.net/U7agc.png" alt="enter image description here" /></a></p> <hr /> <p>With Tight Layout</p> <p><a href="https://i.sstatic.net/G4NNT.png" rel="noreferrer"><img src="https://i.sstatic.net/G4NNT.png" alt="enter image description here" /></a></p>
1,022
matplotlib
How to change the figure size of a seaborn axes or figure level plot
https://stackoverflow.com/questions/31594549/how-to-change-the-figure-size-of-a-seaborn-axes-or-figure-level-plot
<p>How do I change the size of my image so it's suitable for printing?</p> <p>For example, I'd like to use an A4 paper, whose dimensions are 11.7 inches by 8.27 inches in landscape orientation.</p>
<p>You need to create the matplotlib Figure and Axes objects ahead of time, specifying how big the figure is:</p> <pre><code>from matplotlib import pyplot import seaborn import mylib a4_dims = (11.7, 8.27) df = mylib.load_data() fig, ax = pyplot.subplots(figsize=a4_dims) seaborn.violinplot(ax=ax, data=df, **violin_options) </code></pre>
1,023
matplotlib
Adding a matplotlib legend
https://stackoverflow.com/questions/19125722/adding-a-matplotlib-legend
<p>How can one create a legend for a line graph in Matplotlib's PyPlot without creating any extra variables?</p> <p>Please consider the graphing script below:</p> <pre class="lang-py prettyprint-override"><code>if __name__ == '__main__': PyPlot.plot(length, bubble, 'b-', length, ins, 'r-', length, merge_r, 'g+', length, merge_i, 'p-', ) PyPlot.title(&quot;Combined Statistics&quot;) PyPlot.xlabel(&quot;Length of list (number)&quot;) PyPlot.ylabel(&quot;Time taken (seconds)&quot;) PyPlot.show() </code></pre> <p>As you can see, this is a very basic use of matplotlib's PyPlot. This generates the following graph:</p> <p><img src="https://i.sstatic.net/sRcuYm.png" alt="Graph" /></p> <p>However, it is unclear which line is which. Thus, I need a legend; however, taking a look at the following example below (<a href="http://matplotlib.org/users/legend_guide.html" rel="noreferrer">from the official site</a>):</p> <pre><code>ax = subplot(1,1,1) p1, = ax.plot([1,2,3], label=&quot;line 1&quot;) p2, = ax.plot([3,2,1], label=&quot;line 2&quot;) p3, = ax.plot([2,3,1], label=&quot;line 3&quot;) handles, labels = ax.get_legend_handles_labels() # reverse the order ax.legend(handles[::-1], labels[::-1]) # or sort them by labels import operator hl = sorted(zip(handles, labels), key=operator.itemgetter(1)) handles2, labels2 = zip(*hl) ax.legend(handles2, labels2) </code></pre> <p>You will see that I need to create an extra variable <code>ax</code>. How can I add a legend to my graph <em>without</em> having to create this extra variable and retaining the simplicity of my current script?</p>
<p>Add a <code>label=</code> to each of your <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot" rel="noreferrer"><code>plot()</code></a> calls, and then call <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.legend" rel="noreferrer"><code>legend(loc='upper left')</code></a>.</p> <p>Consider this sample (tested with Python 3.8.0):</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 20, 1000) y1 = np.sin(x) y2 = np.cos(x) plt.plot(x, y1, "-b", label="sine") plt.plot(x, y2, "-r", label="cosine") plt.legend(loc="upper left") plt.ylim(-1.5, 2.0) plt.show() </code></pre> <p><a href="https://i.sstatic.net/GyMXH.png" rel="noreferrer"><img src="https://i.sstatic.net/GyMXH.png" alt="enter image description here"></a> Slightly modified from this tutorial: <a href="http://jakevdp.github.io/mpl_tutorial/tutorial_pages/tut1.html" rel="noreferrer">http://jakevdp.github.io/mpl_tutorial/tutorial_pages/tut1.html</a></p>
1,024
matplotlib
How to change plot background color?
https://stackoverflow.com/questions/14088687/how-to-change-plot-background-color
<p>I am making a scatter plot in matplotlib and need to change the background of the actual plot to black. I know how to change the face color of the plot using:</p> <pre><code>fig = plt.figure() fig.patch.set_facecolor('xkcd:mint green') </code></pre> <p><a href="https://i.sstatic.net/UN6YB.png" rel="noreferrer"><img src="https://i.sstatic.net/UN6YB.png" alt="enter image description here"></a></p> <p>My issue is that this changes the color of the space around the plot. How to I change the actual background color of the plot?</p>
<p>Use the <strong><code>set_facecolor(color)</code> method of the <code>axes</code> object</strong>, which you've created one of the following ways:</p> <ul> <li><p>You created a figure and axis/es together</p> <pre><code>fig, ax = plt.subplots(nrows=1, ncols=1) </code></pre></li> <li><p>You created a figure, then axis/es later</p> <pre><code>fig = plt.figure() ax = fig.add_subplot(1, 1, 1) # nrows, ncols, index </code></pre></li> <li><p>You used the stateful API (if you're doing anything more than a few lines, and <em>especially</em> if you have multiple plots, the object-oriented methods above make life easier because you can refer to specific figures, plot on certain axes, and customize either)</p> <pre><code>plt.plot(...) ax = plt.gca() </code></pre></li> </ul> <p>Then you can use <code>set_facecolor</code>:</p> <pre><code>ax.set_facecolor('xkcd:salmon') ax.set_facecolor((1.0, 0.47, 0.42)) </code></pre> <p><a href="https://i.sstatic.net/2wFc6.png" rel="noreferrer"><img src="https://i.sstatic.net/2wFc6.png" alt="example plot with pink background on the axes"></a></p> <p>As a refresher for what colors can be:</p> <blockquote> <h2><a href="https://matplotlib.org/api/colors_api.html?highlight=color#module-matplotlib.colors" rel="noreferrer">matplotlib.colors</a></h2> <p>Matplotlib recognizes the following formats to specify a color:</p> <ul> <li>an RGB or RGBA tuple of float values in <code>[0, 1]</code> (e.g., <code>(0.1, 0.2, 0.5)</code> or <code>(0.1, 0.2, 0.5, 0.3)</code>);</li> <li>a hex RGB or RGBA string (e.g., <code>'#0F0F0F'</code> or <code>'#0F0F0F0F'</code>);</li> <li>a string representation of a float value in <code>[0, 1]</code> inclusive for gray level (e.g., <code>'0.5'</code>);</li> <li>one of <code>{'b', 'g', 'r', 'c', 'm', 'y', 'k', 'w'}</code>;</li> <li>a X11/CSS4 color name;</li> <li>a name from the <a href="https://xkcd.com/color/rgb/" rel="noreferrer">xkcd color survey</a>; prefixed with <code>'xkcd:'</code> (e.g., <code>'xkcd:sky 
blue'</code>);</li> <li>one of <code>{'tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple', 'tab:brown', 'tab:pink', 'tab:gray', 'tab:olive', 'tab:cyan'}</code> which are the Tableau Colors from the ‘T10’ categorical palette (which is the default color cycle);</li> <li>a “CN” color spec, i.e. 'C' followed by a single digit, which is an index into the default property cycle (<code>matplotlib.rcParams['axes.prop_cycle']</code>); the indexing occurs at artist creation time and defaults to black if the cycle does not include color.</li> </ul> <p>All string specifications of color, other than “CN”, are case-insensitive.</p> </blockquote>
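<p>A short sketch cycling through a few of the accepted color formats from the list above (each call simply overrides the previous one, so the last format wins):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()

ax.set_facecolor((1.0, 0.47, 0.42))  # RGB tuple of floats in [0, 1]
ax.set_facecolor("#0F0F0F")          # hex RGB string
ax.set_facecolor("0.5")              # gray level as a string
ax.set_facecolor("xkcd:salmon")      # xkcd color-survey name
```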
1,025
matplotlib
How to remove xticks from a plot
https://stackoverflow.com/questions/12998430/how-to-remove-xticks-from-a-plot
<p>I have a semilogx plot and I would like to remove the xticks. I tried:</p> <pre><code>plt.gca().set_xticks([]) plt.xticks([]) ax.set_xticks([]) </code></pre> <p>The grid disappears (ok), but small ticks (at the place of the main ticks) remain. How to remove them?</p>
<p>The <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.tick_params.html" rel="noreferrer"><code>plt.tick_params</code></a> method is very useful for stuff like this. This code turns off major and minor ticks and removes the labels from the x-axis.</p> <p>Note that there is also <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.tick_params.html#matplotlib.axes.Axes.tick_params" rel="noreferrer"><code>ax.tick_params</code></a> for <code>matplotlib.axes.Axes</code> objects.</p> <pre><code>from matplotlib import pyplot as plt plt.plot(range(10)) plt.tick_params( axis='x', # changes apply to the x-axis which='both', # both major and minor ticks are affected bottom=False, # ticks along the bottom edge are off top=False, # ticks along the top edge are off labelbottom=False) # labels along the bottom edge are off plt.show() plt.savefig('plot') plt.clf() </code></pre> <p><img src="https://i.sstatic.net/0emBv.png" alt="enter image description here" /></p>
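The same keywords work on an Axes object via `ax.tick_params`, which is handy when you hold a reference to a specific axes. A minimal sketch (the Agg backend is forced so it runs headless, and the output filename is arbitrary):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so no display is required
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(10))

# Same keywords as plt.tick_params, called on the Axes object directly
ax.tick_params(
    axis='x',         # changes apply to the x-axis
    which='both',     # both major and minor ticks are affected
    bottom=False,     # ticks along the bottom edge are off
    top=False,        # ticks along the top edge are off
    labelbottom=False)  # labels along the bottom edge are off

fig.savefig('plot.png')
```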
1,026
matplotlib
Installation Issue with matplotlib Python
https://stackoverflow.com/questions/21784641/installation-issue-with-matplotlib-python
<p>I have an issue after installing the <strong>matplotlib</strong> package: I am unable to <strong>import matplotlib.pyplot as plt</strong>. Any suggestion will be greatly appreciated. </p> <pre><code>&gt;&gt;&gt; import matplotlib.pyplot as plt Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "//anaconda/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-macosx-10.5-x86_64.egg/matplotlib/pyplot.py", line 98, in &lt;module&gt; _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup() File "//anaconda/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-macosx-10.5-x86_64.egg/matplotlib/backends/__init__.py", line 28, in pylab_setup globals(),locals(),[backend_name],0) File "//anaconda/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-macosx-10.5-x86_64.egg/matplotlib/backends/backend_macosx.py", line 21, in &lt;module&gt; from matplotlib.backends import _macosx **RuntimeError**: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. </code></pre>
<p><strong>Problem Cause</strong></p> <p>On macOS, matplotlib's default rendering backend is <code>macosx</code>, which draws through the Cocoa API. That backend only works when Python is installed as a framework, which is exactly what the RuntimeError is complaining about. Backends such as <code>Qt4Agg</code>, <code>GTKAgg</code>, or <code>TkAgg</code> do not have this requirement, but none of them is the default on macOS.</p> <p><strong>Solution</strong></p> <ul> <li>Assuming you installed matplotlib with pip, there is a directory in your home folder called <code>~/.matplotlib</code>. </li> <li>Create a file <code>~/.matplotlib/matplotlibrc</code> there and add the following line: <code>backend: TkAgg</code></li> </ul> <p>From this <a href="http://matplotlib.org/examples/index.html" rel="noreferrer">link</a> you can try the different example plots. </p>
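If editing <code>matplotlibrc</code> isn't convenient, the backend can also be selected per-script, as long as it happens before pyplot is imported. A minimal sketch (it selects the non-interactive Agg backend so it runs anywhere; substitute TkAgg for interactive use):

```python
import matplotlib
# Must be called before importing matplotlib.pyplot;
# 'TkAgg' is the interactive choice, 'Agg' works headless.
matplotlib.use('Agg')
import matplotlib.pyplot as plt

print(matplotlib.get_backend())
```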
1,027
matplotlib
How to draw vertical lines on a given plot
https://stackoverflow.com/questions/24988448/how-to-draw-vertical-lines-on-a-given-plot
<p>Given a plot of a signal in time representation, how can I draw lines marking the corresponding time index?</p> <p>Specifically, given a signal plot with a time index ranging from 0 to 2.6 (seconds), I want to draw vertical red lines indicating the corresponding time index for the list <code>[0.22058956, 0.33088437, 2.20589566]</code>. How can I do it?</p>
<p>The standard way to add vertical lines that will cover your entire plot window without you having to specify their actual height is <code>plt.axvline</code>.</p> <pre><code>import matplotlib.pyplot as plt plt.axvline(x=0.22058956) plt.axvline(x=0.33088437) plt.axvline(x=2.20589566) </code></pre> <p>OR</p> <pre><code>xcoords = [0.22058956, 0.33088437, 2.20589566] for xc in xcoords: plt.axvline(x=xc) </code></pre> <p>You can use many of the keywords available for other plot commands (e.g. <code>color</code>, <code>linestyle</code>, <code>linewidth</code> ...). You can pass in the keyword arguments <code>ymin</code> and <code>ymax</code>, given in axes coordinates (e.g. <code>ymin=0.25</code>, <code>ymax=0.75</code> will cover the middle half of the plot). There are corresponding functions for horizontal lines (<code>axhline</code>) and rectangles (<code>axvspan</code>). </p>
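Putting the keyword arguments together, here is a hedged sketch for the question's time indices (the plotted line is just a stand-in for the real signal, and the styling values are arbitrary):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 2.6], [0, 1])  # stand-in for the signal from the question

# Red dashed markers at the given time indices, covering the middle
# half of the plot (ymin/ymax are in axes coordinates)
for xc in [0.22058956, 0.33088437, 2.20589566]:
    ax.axvline(x=xc, color='r', linestyle='--', ymin=0.25, ymax=0.75)
```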
1,028
matplotlib
How to add a title to each subplot
https://stackoverflow.com/questions/25239933/how-to-add-a-title-to-each-subplot
<p>I have one figure which contains many subplots.</p> <pre><code>fig = plt.figure(num=None, figsize=(26, 12), dpi=80, facecolor='w', edgecolor='k') fig.canvas.set_window_title('Window Title') # Returns the Axes instance ax = fig.add_subplot(311) ax2 = fig.add_subplot(312) ax3 = fig.add_subplot(313) </code></pre> <p>How do I add titles to the subplots?</p> <p><code>fig.suptitle</code> adds a title to all graphs and although <code>ax.set_title()</code> exists, the latter does not add any title to my subplots. </p> <p>Thank you for your help.</p> <p>Edit: Corrected typo about <code>set_title()</code>. Thanks Rutger Kassies </p>
<p><code>ax.title.set_text('My Plot Title')</code> seems to work too.</p> <pre><code>fig = plt.figure() ax1 = fig.add_subplot(221) ax2 = fig.add_subplot(222) ax3 = fig.add_subplot(223) ax4 = fig.add_subplot(224) ax1.title.set_text('First Plot') ax2.title.set_text('Second Plot') ax3.title.set_text('Third Plot') ax4.title.set_text('Fourth Plot') plt.show() </code></pre> <p><a href="https://i.sstatic.net/dUp6p.png" rel="noreferrer"><img src="https://i.sstatic.net/dUp6p.png" alt="matplotlib add titles on subplots" /></a></p>
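With the newer `plt.subplots` API the same result takes one loop over the axes array; a minimal sketch (the titles are placeholders):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so no display is needed
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=2, ncols=2)

# ax.set_title is equivalent to ax.title.set_text for this purpose
for ax, title in zip(axes.flat, ['First Plot', 'Second Plot',
                                 'Third Plot', 'Fourth Plot']):
    ax.set_title(title)
```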
1,029
matplotlib
How to change tick label font size
https://stackoverflow.com/questions/6390393/how-to-change-tick-label-font-size
<p>In a matplotlib figure, how can I make the font size for the tick labels using <code>ax1.set_xticklabels()</code> smaller?</p> <p>Further, how can one rotate it from horizontal to vertical?</p>
<p>Please note that newer versions of MPL have a shortcut for this task. An example is shown in the other answer to this question: <a href="https://stackoverflow.com/a/11386056/42346">https://stackoverflow.com/a/11386056/42346</a></p> <p>The code below is for illustrative purposes and may not necessarily be optimized.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np def xticklabels_example(): fig = plt.figure() x = np.arange(20) y1 = np.cos(x) y2 = (x**2) y3 = (x**3) yn = (y1,y2,y3) COLORS = ('b','g','k') for i,y in enumerate(yn): ax = fig.add_subplot(len(yn),1,i+1) ax.plot(x, y, ls='solid', color=COLORS[i]) if i != len(yn) - 1: # all but last ax.set_xticklabels( () ) else: for tick in ax.xaxis.get_major_ticks(): tick.label.set_fontsize(14) # specify integer or one of preset strings, e.g. #tick.label.set_fontsize('x-small') tick.label.set_rotation('vertical') fig.suptitle('Matplotlib xticklabels Example') plt.show() if __name__ == '__main__': xticklabels_example() </code></pre> <p><img src="https://i.sstatic.net/lRp5U.png" alt="enter image description here"></p>
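For reference, the newer shortcut alluded to above amounts to a single `tick_params` call; a sketch (the size and rotation values are arbitrary, and `labelrotation` assumes a reasonably recent matplotlib):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so no display is needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(20))

# Shrink and rotate all x tick labels in one call
ax.tick_params(axis='x', labelsize=8, labelrotation=90)
```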
1,030
matplotlib
Remove the legend on a matplotlib figure
https://stackoverflow.com/questions/5735208/remove-the-legend-on-a-matplotlib-figure
<p>To add a legend to a matplotlib plot, one simply runs <code>legend()</code>.</p> <p>How to <strong>remove</strong> a legend from a plot?</p> <p>(The closest I came to this is to run <code>legend([])</code> in order to empty the legend from data. But that leaves an ugly white rectangle in the upper right corner.)</p>
<p>As of <a href="https://github.com/cimarronm/matplotlib/releases/tag/v1.4.0rc4" rel="noreferrer">matplotlib <code>v1.4.0rc4</code></a>, a <code>remove</code> method has been added to the legend object.</p> <p>Usage:</p> <pre class="lang-py prettyprint-override"><code>ax.get_legend().remove() </code></pre> <p>or</p> <pre class="lang-py prettyprint-override"><code>legend = ax.legend(...) ... legend.remove() </code></pre> <p>See <a href="https://github.com/matplotlib/matplotlib/pull/2798" rel="noreferrer">here</a> for the commit where this was introduced.</p>
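A minimal end-to-end sketch (the label and data are arbitrary; the Agg backend is forced so it runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so no display is needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], label='data')

legend = ax.legend()
legend.remove()  # removes the legend box entirely; no empty rectangle

print(ax.get_legend())  # None
```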
1,031
matplotlib
How to invert the x or y axis
https://stackoverflow.com/questions/2051744/how-to-invert-the-x-or-y-axis
<p>I have a scatter plot graph with a bunch of random x, y coordinates. Currently the Y-Axis starts at 0 and goes up to the max value. I would like the Y-Axis to start at the max value and go up to 0.</p> <pre><code>points = [(10,5), (5,11), (24,13), (7,8)] x_arr = [] y_arr = [] for x,y in points: x_arr.append(x) y_arr.append(y) plt.scatter(x_arr,y_arr) </code></pre>
<p>There is a new API that makes this even simpler.</p> <pre><code>plt.gca().invert_xaxis() </code></pre> <p>and/or</p> <pre><code>plt.gca().invert_yaxis() </code></pre>
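Applied to the question's scatter data, a minimal sketch using the object-oriented equivalents on the Axes (the Agg backend is forced so it runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so no display is needed
import matplotlib.pyplot as plt

points = [(10, 5), (5, 11), (24, 13), (7, 8)]
x_arr = [x for x, y in points]
y_arr = [y for x, y in points]

fig, ax = plt.subplots()
ax.scatter(x_arr, y_arr)
ax.invert_yaxis()  # y-axis now runs from the max value down

print(ax.yaxis_inverted())  # True
```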
1,032
matplotlib
How can I convert an RGB image into grayscale in Python?
https://stackoverflow.com/questions/12201577/how-can-i-convert-an-rgb-image-into-grayscale-in-python
<p>I'm trying to use <code>matplotlib</code> to read in an RGB image and convert it to grayscale.</p> <p>In matlab I use this:</p> <pre><code>img = rgb2gray(imread('image.png')); </code></pre> <p>In the <a href="http://matplotlib.sourceforge.net/users/image_tutorial.html" rel="noreferrer">matplotlib tutorial</a> they don't cover it. They just read in the image</p> <pre><code>import matplotlib.image as mpimg img = mpimg.imread('image.png') </code></pre> <p>and then they slice the array, but that's not the same thing as converting RGB to grayscale from what I understand.</p> <pre><code>lum_img = img[:,:,0] </code></pre> <p>I find it hard to believe that numpy or matplotlib doesn't have a built-in function to convert from rgb to gray. Isn't this a common operation in image processing?</p> <p>I wrote a very simple function that works with the image imported using <code>imread</code> in 5 minutes. It's horribly inefficient, but that's why I was hoping for a professional implementation built-in. </p> <p>Sebastian has improved my function, but I'm still hoping to find the built-in one.</p> <p>matlab's (NTSC/PAL) implementation:</p> <pre><code>import numpy as np def rgb2gray(rgb): r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2] gray = 0.2989 * r + 0.5870 * g + 0.1140 * b return gray </code></pre>
<p>How about doing it with <a href="https://pillow.readthedocs.io/en/latest/" rel="noreferrer">Pillow</a>:</p> <pre><code>from PIL import Image img = Image.open('image.png').convert('L') img.save('greyscale.png') </code></pre> <hr /> <p>If an alpha (transparency) channel is present in the input image and should be preserved, use mode <code>LA</code>:</p> <pre><code>img = Image.open('image.png').convert('LA') </code></pre> <hr /> <p>Using matplotlib and <a href="https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.convert" rel="noreferrer">the formula</a></p> <pre><code>Y' = 0.2989 R + 0.5870 G + 0.1140 B </code></pre> <p>you could do:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg def rgb2gray(rgb): return np.dot(rgb[...,:3], [0.2989, 0.5870, 0.1140]) img = mpimg.imread('image.png') gray = rgb2gray(img) plt.imshow(gray, cmap=plt.get_cmap('gray'), vmin=0, vmax=1) plt.show() </code></pre>
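As a quick sanity check of the weighted-sum formula, the function should collapse the channel axis and map a white pixel to roughly 1.0 (the weights sum to 0.9999). A sketch with a toy image:

```python
import numpy as np

def rgb2gray(rgb):
    # Weighted luma sum over the last (channel) axis; an alpha
    # channel, if present, is ignored by the [..., :3] slice.
    return np.dot(rgb[..., :3], [0.2989, 0.5870, 0.1140])

img = np.ones((4, 4, 3))  # toy all-white RGB image
gray = rgb2gray(img)

print(gray.shape)  # (4, 4)
```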
1,033