| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
matplotlib
|
Secondary axis with twinx(): how to add to legend
|
https://stackoverflow.com/questions/5484922/secondary-axis-with-twinx-how-to-add-to-legend
|
<p>I have a plot with two y-axes, using <code>twinx()</code>. I also give labels to the lines, and want to show them with <code>legend()</code>, but I only succeed to get the labels of one axis in the legend:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rc
rc('mathtext', default='regular')
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(time, Swdown, '-', label = 'Swdown')
ax.plot(time, Rn, '-', label = 'Rn')
ax2 = ax.twinx()
ax2.plot(time, temp, '-r', label = 'temp')
ax.legend(loc=0)
ax.grid()
ax.set_xlabel("Time (h)")
ax.set_ylabel(r"Radiation ($MJ\,m^{-2}\,d^{-1}$)")
ax2.set_ylabel(r"Temperature ($^\circ$C)")
ax2.set_ylim(0, 35)
ax.set_ylim(-20,100)
plt.show()
</code></pre>
<p>So I only get the labels of the first axis in the legend, and not the label 'temp' of the second axis. How could I add this third label to the legend?</p>
<p><img src="https://i.sstatic.net/MdCYW.png" alt="enter image description here"></p>
|
<p>You can easily add a second legend by adding the line:</p>
<pre><code>ax2.legend(loc=0)
</code></pre>
<p>You'll get this:</p>
<p><img src="https://i.sstatic.net/DLZkF.png" alt="enter image description here"></p>
<p>But if you want all labels on one legend then you should do something like this:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rc
rc('mathtext', default='regular')
time = np.arange(10)
temp = np.random.random(10)*30
Swdown = np.random.random(10)*100-10
Rn = np.random.random(10)*100-10
fig = plt.figure()
ax = fig.add_subplot(111)
lns1 = ax.plot(time, Swdown, '-', label = 'Swdown')
lns2 = ax.plot(time, Rn, '-', label = 'Rn')
ax2 = ax.twinx()
lns3 = ax2.plot(time, temp, '-r', label = 'temp')
# added these three lines
lns = lns1+lns2+lns3
labs = [l.get_label() for l in lns]
ax.legend(lns, labs, loc=0)
ax.grid()
ax.set_xlabel("Time (h)")
ax.set_ylabel(r"Radiation ($MJ\,m^{-2}\,d^{-1}$)")
ax2.set_ylabel(r"Temperature ($^\circ$C)")
ax2.set_ylim(0, 35)
ax.set_ylim(-20,100)
plt.show()
</code></pre>
<p>Which will give you this:</p>
<p><img src="https://i.sstatic.net/Z8pg4.png" alt="enter image description here"></p>
| 1,034
|
matplotlib
|
How to set a single, main title above all the subplots
|
https://stackoverflow.com/questions/7066121/how-to-set-a-single-main-title-above-all-the-subplots
|
<p>I am using <code>pyplot</code>. I have 4 subplots. How to set a single, main title above all the subplots? <code>title()</code> sets it above the last subplot.</p>
|
<p>Use <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.suptitle.html?highlight=suptitle#matplotlib.pyplot.suptitle" rel="noreferrer"><code>pyplot.suptitle</code></a> or <a href="https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html?highlight=suptitle#matplotlib.figure.Figure.suptitle" rel="noreferrer"><code>Figure.suptitle</code></a>:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
fig=plt.figure()
data=np.arange(900).reshape((30,30))
for i in range(1,5):
ax=fig.add_subplot(2,2,i)
ax.imshow(data)
fig.suptitle('Main title') # or plt.suptitle('Main title')
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/Kq15V.png" alt="enter image description here"></p>
| 1,035
|
matplotlib
|
Plot two histograms on single chart
|
https://stackoverflow.com/questions/6871201/plot-two-histograms-on-single-chart
|
<p>I created a histogram plot using data from a file and no problem. Now I wanted to superpose data from another file in the same histogram, so I do something like this</p>
<pre><code>n,bins,patchs = ax.hist(mydata1,100)
n,bins,patchs = ax.hist(mydata2,100)
</code></pre>
<p>but the problem is that for each interval, only the bar with the highest value appears, and the other is hidden. I wonder how could I plot both histograms at the same time with different colors.</p>
|
<p>Here you have a working example:</p>
<pre><code>import random
import numpy
from matplotlib import pyplot
x = [random.gauss(3,1) for _ in range(400)]
y = [random.gauss(4,2) for _ in range(400)]
bins = numpy.linspace(-10, 10, 100)
pyplot.hist(x, bins, alpha=0.5, label='x')
pyplot.hist(y, bins, alpha=0.5, label='y')
pyplot.legend(loc='upper right')
pyplot.show()
</code></pre>
<p><img src="https://i.sstatic.net/acUlv.png" alt="enter image description here"></p>
| 1,036
|
matplotlib
|
How to make inline plots in Jupyter Notebook larger?
|
https://stackoverflow.com/questions/36367986/how-to-make-inline-plots-in-jupyter-notebook-larger
|
<p>I have made my plots inline on my Ipython Notebook with "<code>%matplotlib inline</code>."</p>
<p>Now, the plot appears. However, it is very small. Is there a way to make it appear larger using either notebook settings or plot settings?</p>
<p><a href="https://i.sstatic.net/TiQum.png"><img src="https://i.sstatic.net/TiQum.png" alt="enter image description here"></a></p>
|
<p>Yes, play with <code>figsize</code> and <code>dpi</code> like so (before you call your subplot):</p>
<pre><code>fig=plt.figure(figsize=(12,8), dpi= 100, facecolor='w', edgecolor='k')
</code></pre>
<p>As @tacaswell and @Hagne pointed out, you can also change the defaults if it's not a one-off:</p>
<pre><code>plt.rcParams['figure.figsize'] = [12, 8]
plt.rcParams['figure.dpi'] = 100 # 200 e.g. is really fine, but slower
</code></pre>
| 1,037
|
matplotlib
|
Set markers for individual points on a line
|
https://stackoverflow.com/questions/8409095/set-markers-for-individual-points-on-a-line
|
<p>I have used Matplotlib to plot lines on a figure. Now I would now like to set the style, specifically the marker, for individual points on the line. How do I do this?</p>
<p>To clarify my question, I want to be able to set the style for individual markers on a line, not every marker on said line.</p>
|
<p>Specify the keyword args <code>linestyle</code> and/or <code>marker</code> in your call to <code>plot</code>.</p>
<p>For example, using a dashed line and blue circle markers:</p>
<pre><code>plt.plot(range(10), linestyle='--', marker='o', color='b', label='line with marker')
plt.legend()
</code></pre>
<p>A shortcut call for the same thing:</p>
<pre><code>plt.plot(range(10), '--bo', label='line with marker')
plt.legend()
</code></pre>
<p><a href="https://i.sstatic.net/iRLpX.png" rel="noreferrer"><img src="https://i.sstatic.net/iRLpX.png" alt="enter image description here" /></a></p>
<p>Here is a list of the possible line and marker styles:</p>
<pre><code>================ ===============================
character description
================ ===============================
- solid line style
-- dashed line style
-. dash-dot line style
: dotted line style
. point marker
, pixel marker
o circle marker
v triangle_down marker
^ triangle_up marker
< triangle_left marker
> triangle_right marker
1 tri_down marker
2 tri_up marker
3 tri_left marker
4 tri_right marker
s square marker
p pentagon marker
* star marker
h hexagon1 marker
H hexagon2 marker
+ plus marker
x x marker
D diamond marker
d thin_diamond marker
| vline marker
_ hline marker
================ ===============================
</code></pre>
<hr />
<p><em>edit:</em> with an example of marking an arbitrary subset of points, as requested in the comments:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
xs = np.linspace(-np.pi, np.pi, 30)
ys = np.sin(xs)
markers_on = [12, 17, 18, 19]
plt.plot(xs, ys, '-gD', markevery=markers_on, label='line with select markers')
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/xG4iV.png" rel="noreferrer"><img src="https://i.sstatic.net/xG4iV.png" alt="enter image description here" /></a></p>
<p>This last example using the <code>markevery</code> kwarg has been possible since matplotlib 1.4, due to the merge of <a href="https://github.com/matplotlib/matplotlib/pull/2662" rel="noreferrer">this feature branch</a>. If you are stuck on an older version of matplotlib, you can still achieve the result by overlaying a scatterplot on the line plot. See the <a href="https://stackoverflow.com/posts/8409110/revisions">edit history</a> for more details.</p>
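<p>For older matplotlib versions, the overlay approach mentioned above can be sketched as follows (a sketch, not the answer's original code; the data and marker indices are reused from the <code>markevery</code> example, and the Agg backend is chosen only so the snippet runs headless):</p>

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so the example runs without a display
import matplotlib.pyplot as plt

xs = np.linspace(-np.pi, np.pi, 30)
ys = np.sin(xs)
markers_on = [12, 17, 18, 19]

fig, ax = plt.subplots()
ax.plot(xs, ys, '-g')                       # the line itself
ax.scatter(xs[markers_on], ys[markers_on],  # markers drawn on top of the line
           marker='D', color='g', zorder=3)
fig.savefig('line_with_select_markers.png')
```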
| 1,038
|
matplotlib
|
Scatter plot with different text at each data point
|
https://stackoverflow.com/questions/14432557/scatter-plot-with-different-text-at-each-data-point
|
<p>I am trying to make a scatter plot and annotate data points with different numbers from a list.
So, for example, I want to plot <code>y</code> vs <code>x</code> and annotate with corresponding numbers from <code>n</code>.</p>
<pre><code>y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
x = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]
ax = fig.add_subplot(111)
ax1.scatter(z, y, fmt='o')
</code></pre>
<p>Any ideas?</p>
|
<p>I'm not aware of any plotting method which takes arrays or lists but you could use <code>annotate()</code> while iterating over the values in <code>n</code>.</p>
<pre><code>import matplotlib.pyplot as plt
x = [0.15, 0.3, 0.45, 0.6, 0.75]
y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
n = [58, 651, 393, 203, 123]
fig, ax = plt.subplots()
ax.scatter(x, y)
for i, txt in enumerate(n):
ax.annotate(txt, (x[i], y[i]))
</code></pre>
<p>There are a lot of formatting options for <code>annotate()</code>, see the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.annotate.html" rel="noreferrer">matplotlib website:</a></p>
<p><img src="https://i.sstatic.net/6g4Et.png" alt="enter image description here" /></p>
| 1,039
|
matplotlib
|
Named colors in matplotlib
|
https://stackoverflow.com/questions/22408237/named-colors-in-matplotlib
|
<p>What named colors are available in matplotlib for use in plots? I can find a list on the matplotlib documentation that claims that these are the only names:</p>
<pre><code>b: blue
g: green
r: red
c: cyan
m: magenta
y: yellow
k: black
w: white
</code></pre>
<p>However, I've found that these colors can also be used, at least in this context:</p>
<pre><code>scatter(X,Y, color='red')
scatter(X,Y, color='orange')
scatter(X,Y, color='darkgreen')
</code></pre>
<p>but these are not on the above list. Does anyone know an exhaustive list of the named colors that are available?</p>
|
<p>I constantly forget the names of the colors I want to use and keep coming back to this question =)</p>
<p>The previous answers are great, but I find it a bit difficult to get an overview of the available colors from the posted image. I prefer the colors to be grouped with similar colors, so I slightly tweaked the <a href="http://matplotlib.org/examples/color/named_colors.html" rel="noreferrer">matplotlib answer</a> that was mentioned in a comment above to get a color list sorted in columns. The order is not identical to how I would sort by eye, but I think it gives a good overview.</p>
<p><em>I updated the image and code to reflect that 'rebeccapurple' has been added and the three sage colors have been moved under the 'xkcd:' prefix since I posted this answer originally.</em></p>
<p><a href="https://i.sstatic.net/lFZum.png" rel="noreferrer"><img src="https://i.sstatic.net/lFZum.png" alt="enter image description here"></a></p>
<p>I really didn't change much from the matplotlib example, but here is the code for completeness.</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib import colors as mcolors
colors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS)
# Sort colors by hue, saturation, value and name.
by_hsv = sorted((tuple(mcolors.rgb_to_hsv(mcolors.to_rgba(color)[:3])), name)
for name, color in colors.items())
sorted_names = [name for hsv, name in by_hsv]
n = len(sorted_names)
ncols = 4
nrows = n // ncols
fig, ax = plt.subplots(figsize=(12, 10))
# Get height and width
X, Y = fig.get_dpi() * fig.get_size_inches()
h = Y / (nrows + 1)
w = X / ncols
for i, name in enumerate(sorted_names):
row = i % nrows
col = i // nrows
y = Y - (row * h) - h
xi_line = w * (col + 0.05)
xf_line = w * (col + 0.25)
xi_text = w * (col + 0.3)
ax.text(xi_text, y, name, fontsize=(h * 0.8),
horizontalalignment='left',
verticalalignment='center')
ax.hlines(y + h * 0.1, xi_line, xf_line,
color=colors[name], linewidth=(h * 0.8))
ax.set_xlim(0, X)
ax.set_ylim(0, Y)
ax.set_axis_off()
fig.subplots_adjust(left=0, right=1,
top=1, bottom=0,
hspace=0, wspace=0)
plt.show()
</code></pre>
<hr>
<h2>Additional named colors</h2>
<p><em>Updated 2017-10-25. I merged my previous updates into this section.</em></p>
<h3>xkcd</h3>
<p>If you would like to use additional named colors when plotting with matplotlib, you can use the <a href="http://xkcd.com/color/rgb/" rel="noreferrer">xkcd crowdsourced color names</a>, via the 'xkcd:' prefix:</p>
<pre><code>plt.plot([1,2], lw=4, c='xkcd:baby poop green')
</code></pre>
<p>Now you have access to a plethora of named colors!</p>
<p><a href="https://i.sstatic.net/nCk6u.jpg" rel="noreferrer"><img src="https://i.sstatic.net/nCk6u.jpg" alt="enter image description here"></a></p>
<h3>Tableau</h3>
<p>The default Tableau colors are available in matplotlib via the 'tab:' prefix:</p>
<pre><code>plt.plot([1,2], lw=4, c='tab:green')
</code></pre>
<p>There are ten distinct colors:</p>
<p><a href="https://i.sstatic.net/K6Q8n.png" rel="noreferrer"><img src="https://i.sstatic.net/K6Q8n.png" alt="enter image description here"></a></p>
<h3>HTML</h3>
<p>You can also plot colors by their <a href="https://www.computerhope.com/tips/tip143.htm" rel="noreferrer">HTML hex code</a>:</p>
<pre><code>plt.plot([1,2], lw=4, c='#8f9805')
</code></pre>
<p>This is more similar to specifying an RGB tuple rather than a named color (apart from the fact that the hex code is passed as a string), and I will not include an image of the 16 million colors you can choose from...</p>
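<p>To make the hex/RGB-tuple equivalence concrete, matplotlib's <code>to_rgb</code> helper converts between the two forms (a small sketch; the particular color is just the one used above):</p>

```python
import matplotlib.colors as mcolors

# the hex string '#8f9805' and its equivalent RGB tuple of 0-1 floats
rgb = mcolors.to_rgb('#8f9805')
# plt.plot([1, 2], lw=4, c=rgb) would draw the same color as c='#8f9805'
```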
<hr>
<p>For more details, please refer to <a href="https://matplotlib.org/users/colors.html" rel="noreferrer">the matplotlib colors documentation</a> and the source file specifying the available colors, <a href="https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/_color_data.py" rel="noreferrer"><code>_color_data.py</code></a>.</p>
<hr>
| 1,040
|
matplotlib
|
Reduce left and right margins in matplotlib plot
|
https://stackoverflow.com/questions/4042192/reduce-left-and-right-margins-in-matplotlib-plot
|
<p>I'm struggling to deal with my plot margins in matplotlib. I've used the code below to produce my chart:</p>
<pre><code>plt.imshow(g)
c = plt.colorbar()
c.set_label("Number of Slabs")
plt.savefig("OutputToUse.png")
</code></pre>
<p>However, I get an output figure with lots of white space on either side of the plot. I've searched google and read the matplotlib documentation, but I can't seem to find how to reduce this.</p>
|
<p>One way to automatically do this is the <code>bbox_inches='tight'</code> kwarg to <a href="https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure.savefig" rel="noreferrer"><code>plt.savefig</code></a>.</p>
<p>E.g.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
data = np.arange(3000).reshape((100,30))
plt.imshow(data)
plt.savefig('test.png', bbox_inches='tight')
</code></pre>
<p>Another way is to use <a href="https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure.tight_layout" rel="noreferrer"><code>fig.tight_layout()</code></a></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
xs = np.linspace(0, 1, 20); ys = np.sin(xs)
fig = plt.figure()
axes = fig.add_subplot(1,1,1)
axes.plot(xs, ys)
# This should be called after all axes have been added
fig.tight_layout()
fig.savefig('test.png')
</code></pre>
| 1,041
|
matplotlib
|
Seaborn plots not showing up
|
https://stackoverflow.com/questions/26597116/seaborn-plots-not-showing-up
|
<p>I'm sure I'm forgetting something very simple, but I cannot get certain plots to work with Seaborn. </p>
<p>If I do:</p>
<pre><code>import seaborn as sns
</code></pre>
<p>Then any plots that I create as usual with matplotlib get the Seaborn styling (with the grey grid in the background).</p>
<p>However, if I try to do one of the examples, such as:</p>
<pre><code>In [1]: import seaborn as sns
In [2]: sns.set()
In [3]: df = sns.load_dataset('iris')
In [4]: sns.pairplot(df, hue='species', size=2.5)
Out[4]: <seaborn.axisgrid.PairGrid at 0x3e59150>
</code></pre>
<p>The pairplot function returns a PairGrid object, but the plot doesn't show up. </p>
<p>I'm a little confused because matplotlib seems to be functioning properly, and the Seaborn styles are applied to other matplotlib plots, but the Seaborn functions don't seem to do anything. Does anybody have any idea what might be the problem?</p>
|
<p>Plots created using seaborn need to be displayed like ordinary matplotlib plots.
This can be done using the</p>
<pre><code>plt.show()
</code></pre>
<p>function from matplotlib.</p>
<p>Originally I posted the solution to use the already imported matplotlib object from seaborn (<code>sns.plt.show()</code>), however this is considered bad practice. Therefore, simply import the <code>matplotlib.pyplot</code> module directly and show your plots with</p>
<pre><code>import matplotlib.pyplot as plt
plt.show()
</code></pre>
<p>If the IPython notebook is used the inline backend can be invoked to remove the necessity of calling show after each plot. The respective magic is</p>
<pre><code>%matplotlib inline
</code></pre>
| 1,042
|
matplotlib
|
Generating a PNG with matplotlib when DISPLAY is undefined
|
https://stackoverflow.com/questions/2801882/generating-a-png-with-matplotlib-when-display-is-undefined
|
<p>I am trying to use networkx with Python. When I run this program it get this error. Is there anything missing?</p>
<pre><code>#!/usr/bin/env python
import networkx as nx
import matplotlib
import matplotlib.pyplot
import matplotlib.pyplot as plt
G=nx.Graph()
G.add_node(1)
G.add_nodes_from([2,3,4,5,6,7,8,9,10])
#nx.draw_graphviz(G)
#nx_write_dot(G, 'node.png')
nx.draw(G)
plt.savefig("/var/www/node.png")
Traceback (most recent call last):
File "graph.py", line 13, in <module>
nx.draw(G)
File "/usr/lib/pymodules/python2.5/networkx/drawing/nx_pylab.py", line 124, in draw
cf=pylab.gcf()
File "/usr/lib/pymodules/python2.5/matplotlib/pyplot.py", line 276, in gcf
return figure()
File "/usr/lib/pymodules/python2.5/matplotlib/pyplot.py", line 254, in figure
**kwargs)
File "/usr/lib/pymodules/python2.5/matplotlib/backends/backend_tkagg.py", line 90, in new_figure_manager
window = Tk.Tk()
File "/usr/lib/python2.5/lib-tk/Tkinter.py", line 1650, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
</code></pre>
<hr>
<p>I get a different error now:</p>
<pre><code>#!/usr/bin/env python
import networkx as nx
import matplotlib
import matplotlib.pyplot
import matplotlib.pyplot as plt
matplotlib.use('Agg')
G=nx.Graph()
G.add_node(1)
G.add_nodes_from([2,3,4,5,6,7,8,9,10])
#nx.draw_graphviz(G)
#nx_write_dot(G, 'node.png')
nx.draw(G)
plt.savefig("/var/www/node.png")
</code></pre>
<hr>
<pre><code>/usr/lib/pymodules/python2.5/matplotlib/__init__.py:835: UserWarning: This call to matplotlib.use() has no effect
because the the backend has already been chosen;
matplotlib.use() must be called *before* pylab, matplotlib.pyplot,
or matplotlib.backends is imported for the first time.
if warn: warnings.warn(_use_error_msg)
Traceback (most recent call last):
File "graph.py", line 15, in <module>
nx.draw(G)
File "/usr/lib/python2.5/site-packages/networkx-1.2.dev-py2.5.egg/networkx/drawing/nx_pylab.py", line 124, in draw
cf=pylab.gcf()
File "/usr/lib/pymodules/python2.5/matplotlib/pyplot.py", line 276, in gcf
return figure()
File "/usr/lib/pymodules/python2.5/matplotlib/pyplot.py", line 254, in figure
**kwargs)
File "/usr/lib/pymodules/python2.5/matplotlib/backends/backend_tkagg.py", line 90, in new_figure_manager
window = Tk.Tk()
File "/usr/lib/python2.5/lib-tk/Tkinter.py", line 1650, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
</code></pre>
|
<p>The main problem is that (on your system) matplotlib defaults to a backend that requires an X display. I just had the same problem on one of my servers. The solution for me was to add the following code in a place that gets read <em>before</em> any other pylab/matplotlib/<strong>pyplot</strong> import:</p>
<pre><code>import matplotlib
# Force matplotlib to not use any Xwindows backend.
matplotlib.use('Agg')
</code></pre>
<p>The alternative is to <a href="https://matplotlib.org/stable/tutorials/introductory/customizing.html#the-matplotlibrc-file" rel="nofollow noreferrer">set it</a> in your <code>.matplotlibrc</code></p>
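<p>For reference, the corresponding <code>matplotlibrc</code> entry is a single line (shown as a sketch; the active config file's path is reported by <code>matplotlib.matplotlib_fname()</code>):</p>

```ini
backend : Agg
```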
| 1,043
|
matplotlib
|
Common xlabel/ylabel for matplotlib subplots
|
https://stackoverflow.com/questions/16150819/common-xlabel-ylabel-for-matplotlib-subplots
|
<p>I have the following plot:</p>
<pre><code>fig,ax = plt.subplots(5,2,sharex=True,sharey=True,figsize=fig_size)
</code></pre>
<p>and now I would like to give this plot common x-axis labels and y-axis labels. With "common", I mean that there should be one big x-axis label below the whole grid of subplots, and one big y-axis label to the right. I can't find anything about this in the documentation for <code>plt.subplots</code>, and my googlings suggest that I need to make a big <code>plt.subplot(111)</code> to start with - but how do I then put my 5*2 subplots into that using <code>plt.subplots</code>?</p>
|
<p>This looks like what you actually want. It applies the same approach of <a href="https://stackoverflow.com/a/6981055/3753826">this answer</a> to your specific case:</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots(nrows=3, ncols=3, sharex=True, sharey=True, figsize=(6, 6))
fig.text(0.5, 0.04, 'common X', ha='center')
fig.text(0.04, 0.5, 'common Y', va='center', rotation='vertical')
</code></pre>
<p><img src="https://i.sstatic.net/IrJNO.png" alt="Multiple plots with common axes label" /></p>
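<p>On matplotlib 3.4 and newer there are also dedicated methods, <code>Figure.supxlabel</code> and <code>Figure.supylabel</code>, which place the common labels without hand-tuned coordinates (a sketch; the Agg backend is used only so it runs headless):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the example runs without a display
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=3, ncols=3, sharex=True, sharey=True, figsize=(6, 6))
xlab = fig.supxlabel('common X')  # one x label below the whole grid
ylab = fig.supylabel('common Y')  # one y label left of the whole grid
fig.savefig('common_labels.png')
```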
| 1,044
|
matplotlib
|
How to remove frame from a figure
|
https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-a-figure
|
<p>To remove frame in figure, I write</p>
<pre><code>frameon=False
</code></pre>
<p>works perfect with <code>pyplot.figure</code>, but with <code>matplotlib.Figure</code> it only removes the gray background, the frame stays. Also, I only want the lines to show, and all the rest of figure be transparent.</p>
<p>with pyplot I can do what I want, I want to do it with matplotlib for some long reason I'd rather not mention to extend my question.</p>
|
<p>First off, if you're using <code>savefig</code>, be aware that it will override the figure's background color when saving unless you specify otherwise (e.g. <code>fig.savefig('blah.png', transparent=True)</code>).</p>
<p>However, to remove the axes' and figure's background on-screen, you'll need to set both <code>ax.patch</code> and <code>fig.patch</code> to be invisible. </p>
<p>E.g.</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(range(10))
for item in [fig, ax]:
item.patch.set_visible(False)
with open('test.png', 'w') as outfile:
fig.canvas.print_png(outfile)
</code></pre>
<p><img src="https://i.sstatic.net/JwBju.png" alt="enter image description here"></p>
<p>(Of course, you can't tell the difference on SO's white background, but everything is transparent...)</p>
<p>If you don't want to show anything other than the line, turn the axis off as well using <code>ax.axis('off')</code>:</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(range(10))
fig.patch.set_visible(False)
ax.axis('off')
with open('test.png', 'w') as outfile:
fig.canvas.print_png(outfile)
</code></pre>
<p><img src="https://i.sstatic.net/qJaRf.png" alt="enter image description here"></p>
<p>In that case, though, you may want to make the axes take up the full figure. If you manually specify the location of the axes, you can tell it to take up the full figure (alternately, you can use <code>subplots_adjust</code>, but this is simpler for the case of a single axes). </p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure(frameon=False)
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')
ax.plot(range(10))
with open('test.png', 'w') as outfile:
fig.canvas.print_png(outfile)
</code></pre>
<p><img src="https://i.sstatic.net/gMrsE.png" alt="enter image description here"></p>
| 1,045
|
matplotlib
|
ImportError: No module named matplotlib.pyplot
|
https://stackoverflow.com/questions/18176591/importerror-no-module-named-matplotlib-pyplot
|
<p>I am currently practicing matplotlib. This is the first example I practice.</p>
<pre><code>#!/usr/bin/python
import matplotlib.pyplot as plt
radius = [1.0, 2.0, 3.0, 4.0]
area = [3.14159, 12.56636, 28.27431, 50.26544]
plt.plot(radius, area)
plt.show()
</code></pre>
<p>When I run this script with <code>python ./plot_test.py</code>, it shows the plot correctly. However, when I run it by itself, <code>./plot_test.py</code>, it throws the following:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "./plot_test.py", line 3, in <module>
import matplotlib.pyplot as plt
ImportError: No module named matplotlib.pyplot
</code></pre>
<p>Does python look for matplotlib in different locations?</p>
<p>The environment is:</p>
<ul>
<li>Mac OS X 10.8.4 64bit</li>
<li>built-in python 2.7</li>
</ul>
<p>numpy, scipy, matplotlib is installed with:</p>
<pre><code>sudo port install py27-numpy py27-scipy py27-matplotlib \
py27-ipython +notebook py27-pandas py27-sympy py27-nose
</code></pre>
|
<p>You have two pythons installed on your machine, one is the standard python that comes with Mac OSX and the second is the one you installed with ports (this is the one that has <code>matplotlib</code> installed in its library, the one that comes with macosx does not).</p>
<pre><code>/usr/bin/python
</code></pre>
<p>Is the standard mac python and since it doesn't have <code>matplotlib</code> you should always start your script with the one installed with ports.</p>
<p>If <code>python your_script.py</code> works then change the <code>#!</code> to:</p>
<pre><code>#!/usr/bin/env python
</code></pre>
<p>Or put the full path to the python interpreter that has the <code>matplotlib</code> installed in its library.</p>
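<p>A quick way to confirm which of the two interpreters is actually running a script is to print it from inside Python:</p>

```python
import sys

# the absolute path of the interpreter executing this script
print(sys.executable)
# the first few directories searched when importing modules such as matplotlib
print(sys.path[:3])
```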
| 1,046
|
matplotlib
|
Matplotlib different size subplots
|
https://stackoverflow.com/questions/10388462/matplotlib-different-size-subplots
|
<p>I need to add two subplots to a figure. One subplot needs to be about three times as wide as the second (same height). I accomplished this using <code>GridSpec</code> and the <code>colspan</code> argument but I would like to do this using <code>figure</code> so I can save to PDF. I can adjust the first figure using the <code>figsize</code> argument in the constructor, but how do I change the size of the second plot?</p>
|
<ul>
<li>As of <code>matplotlib 3.6.0</code>, <code>width_ratios</code> and <code>height_ratios</code> can now be passed directly as keyword arguments to <a href="https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure.subplots" rel="noreferrer"><code>plt.subplots</code></a> and <a href="https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure.subplot_mosaic" rel="noreferrer"><code>subplot_mosaic</code></a>, as per <a href="https://matplotlib.org/stable/users/prev_whats_new/whats_new_3.6.0.html#subplots-subplot-mosaic-accept-height-ratios-and-width-ratios-arguments" rel="noreferrer">What's new in Matplotlib 3.6.0 (Sep 15, 2022)</a>.</li>
</ul>
<p><code>f, (a0, a1) = plt.subplots(1, 2, width_ratios=[3, 1])</code></p>
<p><code>f, (a0, a1, a2) = plt.subplots(3, 1, height_ratios=[1, 1, 3])</code></p>
<hr />
<ul>
<li>Another way is to use the <code>subplots</code> function and pass the width ratio with <code>gridspec_kw</code>
<ul>
<li><a href="https://matplotlib.org/stable/tutorials/intermediate/gridspec.html" rel="noreferrer">matplotlib Tutorial: Customizing Figure Layouts Using GridSpec and Other Functions</a></li>
<li><a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.gridspec.GridSpec.html#matplotlib.gridspec.GridSpec" rel="noreferrer"><code>matplotlib.gridspec.GridSpec</code></a> has available <code>gridspect_kw</code> options</li>
</ul>
</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
# generate some data
x = np.arange(0, 10, 0.2)
y = np.sin(x)
# plot it
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]})
a0.plot(x, y)
a1.plot(y, x)
f.tight_layout()
f.savefig('grid_figure.pdf')
</code></pre>
<p><a href="https://i.sstatic.net/aBJVa.png" rel="noreferrer"><img src="https://i.sstatic.net/aBJVa.png" alt="enter image description here" /></a></p>
<ul>
<li>Because the question is canonical, here is an example with vertical subplots.</li>
</ul>
<pre class="lang-py prettyprint-override"><code># plot it
f, (a0, a1, a2) = plt.subplots(3, 1, gridspec_kw={'height_ratios': [1, 1, 3]})
a0.plot(x, y)
a1.plot(x, y)
a2.plot(x, y)
f.tight_layout()
</code></pre>
<p><a href="https://i.sstatic.net/a2djk.png" rel="noreferrer"><img src="https://i.sstatic.net/a2djk.png" alt="enter image description here" /></a></p>
| 1,047
|
matplotlib
|
Plot a horizontal line on a given plot
|
https://stackoverflow.com/questions/33382619/plot-a-horizontal-line-on-a-given-plot
|
<p>How do I add a horizontal line to an existing plot?</p>
|
<p>You are correct, I think the <code>[0,len(xs)]</code> is throwing you off. You'll want to reuse the original x-axis variable <code>xs</code> and plot that with another numpy array of the same length that has your variable in it.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import UnivariateSpline

annual = np.arange(1, 21, 1)
l = np.array(value_list)  # value_list: your list of 20 values
spl = UnivariateSpline(annual, l)
xs = np.linspace(1, 21, 200)
plt.plot(xs, spl(xs), 'b')

# horizontal line at y = 40
horiz_line_data = np.array([40 for i in range(len(xs))])
plt.plot(xs, horiz_line_data, 'r--')
# original problematic call: plt.plot([0, len(xs)], [40, 40], 'r--', lw=2)
plt.ylim([0, 200])
plt.show()
</code></pre>
<p>Hopefully that fixes the problem!</p>
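<p>As an aside, matplotlib also provides <code>axhline</code>, which draws a horizontal line spanning the whole axis without constructing a data array (a minimal sketch; the Agg backend is used only so it runs headless):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the example runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(10))
line = ax.axhline(y=4, color='r', linestyle='--')  # spans the full x-range
fig.savefig('with_horizontal_line.png')
```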
| 1,048
|
matplotlib
|
Is there a way to detach matplotlib plots so that the computation can continue?
|
https://stackoverflow.com/questions/458209/is-there-a-way-to-detach-matplotlib-plots-so-that-the-computation-can-continue
|
<p>After these instructions in the Python interpreter one gets a window with a plot:</p>
<pre><code>from matplotlib.pyplot import *
plot([1,2,3])
show()
# other code
</code></pre>
<p>Unfortunately, I don't know how to continue to interactively explore the figure created by <code>show()</code> while the program does further calculations.</p>
<p>Is it possible at all? Sometimes calculations are long and it would help if they would proceed during examination of intermediate results.</p>
|
<p>Use <code>matplotlib</code>'s calls that won't block:</p>
<p>Using <code>draw()</code>:</p>
<pre><code>from matplotlib.pyplot import plot, draw, show
plot([1,2,3])
draw()
print('continue computation')
# at the end call show to ensure window won't close.
show()
</code></pre>
<p>Using interactive mode:</p>
<pre><code>from matplotlib.pyplot import plot, ion, show
ion() # enables interactive mode
plot([1,2,3])  # result shows immediately (implicit draw())
print('continue computation')
# at the end call show to ensure window won't close.
show()
</code></pre>
| 1,049
|
matplotlib
|
tight_layout() doesn't take into account figure suptitle
|
https://stackoverflow.com/questions/8248467/tight-layout-doesnt-take-into-account-figure-suptitle
|
<p>If I add a subtitle to my matplotlib figure it gets overlaid by the subplot's titles. Does anybody know how to easily take care of that? I tried the <code>tight_layout()</code> function, but it only makes things worse.</p>
<p>Example:</p>
<pre class="lang-python prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
f = np.random.random(100)
g = np.random.random(100)
fig = plt.figure()
fig.suptitle('Long Suptitle', fontsize=24)
plt.subplot(121)
plt.plot(f)
plt.title('Very Long Title 1', fontsize=20)
plt.subplot(122)
plt.plot(g)
plt.title('Very Long Title 2', fontsize=20)
plt.tight_layout()
plt.show()
</code></pre>
|
<p>You can adjust the subplot geometry in the very <code>tight_layout</code> call as follows:</p>
<pre><code>fig.tight_layout(rect=[0, 0.03, 1, 0.95])
</code></pre>
<p>As it's stated in the documentation (<a href="https://matplotlib.org/stable/users/explain/axes/tight_layout_guide.html" rel="noreferrer">https://matplotlib.org/stable/users/explain/axes/tight_layout_guide.html</a>):</p>
<blockquote>
<p><code>tight_layout()</code> only considers ticklabels, axis labels, and titles. Thus, other artists may be clipped and also may overlap.</p>
</blockquote>
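<p>Applied to the code from the question, the whole thing becomes the sketch below (the exact <code>rect</code> values may need tuning for your font sizes):</p>

```python
import numpy as np
import matplotlib.pyplot as plt

f = np.random.random(100)
g = np.random.random(100)
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.suptitle('Long Suptitle', fontsize=24)
ax1.plot(f)
ax1.set_title('Very Long Title 1', fontsize=20)
ax2.plot(g)
ax2.set_title('Very Long Title 2', fontsize=20)
# keep the top 5% of the figure free for the suptitle
fig.tight_layout(rect=[0, 0.03, 1, 0.95])
```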
| 1,050
|
matplotlib
|
How to have one colorbar for all subplots
|
https://stackoverflow.com/questions/13784201/how-to-have-one-colorbar-for-all-subplots
|
<p>I've spent entirely too long researching how to get two subplots to share the same y-axis with a single colorbar shared between the two in Matplotlib. </p>
<p>What was happening was that when I called the <code>colorbar()</code> function in either <code>subplot1</code> or <code>subplot2</code>, it would autoscale the plot such that the colorbar plus the plot would fit inside the 'subplot' bounding box, causing the two side-by-side plots to be two very different sizes.</p>
<p>To get around this, I tried to create a third subplot which I then hacked to render no plot with just a colorbar present.
The only problem is, now the heights and widths of the two plots are uneven, and I can't figure out how to make it look okay.</p>
<p>Here is my code:</p>
<pre><code>from __future__ import division
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import patches
from matplotlib.ticker import NullFormatter
# SIS Functions
TE = 1 # Einstein radius
g1 = lambda x,y: (TE/2) * (y**2-x**2)/((x**2+y**2)**(3/2))
g2 = lambda x,y: -1*TE*x*y / ((x**2+y**2)**(3/2))
kappa = lambda x,y: TE / (2*np.sqrt(x**2+y**2))
coords = np.linspace(-2,2,400)
X,Y = np.meshgrid(coords,coords)
g1out = g1(X,Y)
g2out = g2(X,Y)
kappaout = kappa(X,Y)
for i in range(len(coords)):
for j in range(len(coords)):
if np.sqrt(coords[i]**2+coords[j]**2) <= TE:
g1out[i][j]=0
g2out[i][j]=0
fig = plt.figure()
fig.subplots_adjust(wspace=0,hspace=0)
# subplot number 1
ax1 = fig.add_subplot(1,2,1,aspect='equal',xlim=[-2,2],ylim=[-2,2])
plt.title(r"$\gamma_{1}$",fontsize="18")
plt.xlabel(r"x ($\theta_{E}$)",fontsize="15")
plt.ylabel(r"y ($\theta_{E}$)",rotation='horizontal',fontsize="15")
plt.xticks([-2.0,-1.5,-1.0,-0.5,0,0.5,1.0,1.5])
plt.xticks([-2.0,-1.5,-1.0,-0.5,0,0.5,1.0,1.5])
plt.imshow(g1out,extent=(-2,2,-2,2))
plt.axhline(y=0,linewidth=2,color='k',linestyle="--")
plt.axvline(x=0,linewidth=2,color='k',linestyle="--")
e1 = patches.Ellipse((0,0),2,2,color='white')
ax1.add_patch(e1)
# subplot number 2
ax2 = fig.add_subplot(1,2,2,sharey=ax1,xlim=[-2,2],ylim=[-2,2])
plt.title(r"$\gamma_{2}$",fontsize="18")
plt.xlabel(r"x ($\theta_{E}$)",fontsize="15")
ax2.yaxis.set_major_formatter( NullFormatter() )
plt.axhline(y=0,linewidth=2,color='k',linestyle="--")
plt.axvline(x=0,linewidth=2,color='k',linestyle="--")
plt.imshow(g2out,extent=(-2,2,-2,2))
e2 = patches.Ellipse((0,0),2,2,color='white')
ax2.add_patch(e2)
# subplot for colorbar
ax3 = fig.add_subplot(1,1,1)
ax3.axis('off')
cbar = plt.colorbar(ax=ax2)
plt.show()
</code></pre>
|
<p>Just place the colorbar in its own axis and use <code>subplots_adjust</code> to make room for it.</p>
<p>As a quick example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=2, ncols=2)
for ax in axes.flat:
im = ax.imshow(np.random.random((10,10)), vmin=0, vmax=1)
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/vsugg.png" alt="enter image description here"></p>
<p>Note that the color range will be set by the last image plotted (that gave rise to <code>im</code>) even if the range of values is set by <code>vmin</code> and <code>vmax</code>. If another plot has, for example, a higher max value, points with higher values than the max of <code>im</code> will show in uniform color.</p>
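<p>If the subplots genuinely have different value ranges, a shared <code>Normalize</code> avoids that pitfall, and passing all the axes to <code>fig.colorbar</code> lets matplotlib steal the space automatically instead of hand-placing an axis. A sketch:</p>

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors

# four arrays with deliberately different value ranges
data = [np.random.random((10, 10)) * (i + 1) for i in range(4)]
# one Normalize shared by every image, so the single colorbar is truthful
norm = colors.Normalize(vmin=min(d.min() for d in data),
                        vmax=max(d.max() for d in data))
fig, axes = plt.subplots(nrows=2, ncols=2)
for ax, d in zip(axes.flat, data):
    im = ax.imshow(d, norm=norm)
fig.colorbar(im, ax=axes.ravel().tolist())
```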
| 1,051
|
matplotlib
|
plot a circle with Matplotlib.pyplot
|
https://stackoverflow.com/questions/9215658/plot-a-circle-with-matplotlib-pyplot
|
<p>surprisingly I didn't find a straight-forward description on how to draw a circle with matplotlib.pyplot (please no pylab) taking as input center (x,y) and radius r. I tried some variants of this:</p>
<pre><code>import matplotlib.pyplot as plt
circle=plt.Circle((0,0),2)
# here must be something like circle.plot() or not?
plt.show()
</code></pre>
<p>... but still didn't get it working. </p>
|
<p>You need to add it to an axes. A <a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.patches.Circle.html" rel="noreferrer"><code>Circle</code></a> is a subclass of an <a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.patches.Patch.html" rel="noreferrer"><code>Patch</code></a>, and an <code>axes</code> has an <a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.add_patch.html" rel="noreferrer"><code>add_patch</code></a> method. (You can also use <a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.add_artist.html" rel="noreferrer"><code>add_artist</code></a> but it's not recommended.)</p>
<p>Here's an example of doing this:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
circle1 = plt.Circle((0, 0), 0.2, color='r')
circle2 = plt.Circle((0.5, 0.5), 0.2, color='blue')
circle3 = plt.Circle((1, 1), 0.2, color='g', clip_on=False)
fig, ax = plt.subplots() # note we must use plt.subplots, not plt.subplot
# (or if you have an existing figure)
# fig = plt.gcf()
# ax = fig.gca()
ax.add_patch(circle1)
ax.add_patch(circle2)
ax.add_patch(circle3)
fig.savefig('plotcircles.png')
</code></pre>
<p>This results in the following figure:</p>
<p><img src="https://i.sstatic.net/6Wq0M.png" alt="" /></p>
<p>The first circle is at the origin, but by default <code>clip_on</code> is <code>True</code>, so the circle is clipped whenever it extends beyond the <code>axes</code>. The third (green) circle shows what happens when you don't clip the <code>Artist</code>. It extends beyond the axes (but not beyond the figure, i.e. the figure size is <em>not</em> automatically adjusted to plot all of your artists).</p>
<p>The units for x, y and radius correspond to data units by default. In this case, I didn't plot anything on my axes (<code>fig.gca()</code> returns the current axes), and since the limits have never been set, they default to an x and y range from 0 to 1.</p>
<p>Here's a continuation of the example, showing how units matter:</p>
<pre class="lang-py prettyprint-override"><code>circle1 = plt.Circle((0, 0), 2, color='r')
# now make a circle with no fill, which is good for highlighting key results
circle2 = plt.Circle((5, 5), 0.5, color='b', fill=False)
circle3 = plt.Circle((10, 10), 2, color='g', clip_on=False)
ax = plt.gca()
ax.cla() # clear things for fresh plot
# change default range so that new circles will work
ax.set_xlim((0, 10))
ax.set_ylim((0, 10))
# some data
ax.plot(range(11), 'o', color='black')
# key data point that we are encircling
ax.plot((5), (5), 'o', color='y')
ax.add_patch(circle1)
ax.add_patch(circle2)
ax.add_patch(circle3)
fig.savefig('plotcircles2.png')
</code></pre>
<p>which results in:</p>
<p><img src="https://i.sstatic.net/DAssu.png" alt="" /></p>
<p>You can see how I set the fill of the 2nd circle to <code>False</code>, which is useful for encircling key results (like my yellow data point).</p>
| 1,052
|
matplotlib
|
Adding a y-axis label to secondary y-axis in matplotlib
|
https://stackoverflow.com/questions/14762181/adding-a-y-axis-label-to-secondary-y-axis-in-matplotlib
|
<p>I can add a y label to the left y-axis using <code>plt.ylabel</code>, but how can I add it to the secondary y-axis?</p>
<pre><code>table = sql.read_frame(query,connection)
table[0].plot(color=colors[0],ylim=(0,100))
table[1].plot(secondary_y=True,color=colors[1])
plt.ylabel('$')
</code></pre>
|
<p>The best way is to interact with the <code>axes</code> object directly</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 10, 0.1)
y1 = 0.05 * x**2
y2 = -1 *y1
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(x, y1, 'g-')
ax2.plot(x, y2, 'b-')
ax1.set_xlabel('X data')
ax1.set_ylabel('Y1 data', color='g')
ax2.set_ylabel('Y2 data', color='b')
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/K7727.png" alt="example graph"></p>
| 1,053
|
matplotlib
|
Savefig outputs blank image
|
https://stackoverflow.com/questions/9012487/savefig-outputs-blank-image
|
<p>I am trying to save plots I make using matplotlib; however, the images are saving blank.</p>
<p>Here is my code:</p>
<pre><code>plt.subplot(121)
plt.imshow(dataStack, cmap=mpl.cm.bone)
plt.subplot(122)
y = copy.deepcopy(tumorStack)
y = np.ma.masked_where(y == 0, y)
plt.imshow(dataStack, cmap=mpl.cm.bone)
plt.imshow(y, cmap=mpl.cm.jet_r, interpolation='nearest')
if T0 is not None:
plt.subplot(123)
plt.imshow(T0, cmap=mpl.cm.bone)
#plt.subplot(124)
#Autozoom
#else:
#plt.subplot(124)
#Autozoom
plt.show()
plt.draw()
plt.savefig('tessstttyyy.png', dpi=100)
</code></pre>
<p>And tessstttyyy.png is blank (also tried with .jpg)</p>
|
<p>First, what happens when <code>T0 is not None</code>? I would test that, then I would adjust the values I pass to <code>plt.subplot()</code>; maybe try values 131, 132, and 133, or values that depend on whether or not <code>T0</code> exists.</p>
<p>Second, after <code>plt.show()</code> is called, a new figure is created. To deal with this, you can</p>
<ol>
<li><p>Call <code>plt.savefig('tessstttyyy.png', dpi=100)</code> before you call <code>plt.show()</code></p></li>
<li><p>Save the figure before you <code>show()</code> by calling <code>plt.gcf()</code> for "get current figure", then you can call <code>savefig()</code> on this <code>Figure</code> object at any time.</p></li>
</ol>
<p>For example: </p>
<pre><code>fig1 = plt.gcf()
plt.show()
plt.draw()
fig1.savefig('tessstttyyy.png', dpi=100)
</code></pre>
<p>In your code, 'tessstttyyy.png' is blank because it is saving the new figure, to which nothing has been plotted.</p>
| 1,054
|
matplotlib
|
Removing white space around a saved image
|
https://stackoverflow.com/questions/11837979/removing-white-space-around-a-saved-image
|
<p>I need to take an image and save it after some process. The figure looks fine when I display it, but after saving the figure, I got some white space around the saved image. I have tried the <code>'tight'</code> option for <code>savefig</code> method, did not work either. The code:</p>
<pre><code>import matplotlib.image as mpimg
import matplotlib.pyplot as plt
fig = plt.figure(1)
img = mpimg.imread("image.jpg")
plt.imshow(img)
ax = fig.add_subplot(1, 1, 1)
extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
plt.savefig('1.png', bbox_inches=extent)
plt.axis('off')
plt.show()
</code></pre>
<p>I am trying to draw a basic graph by using NetworkX on a figure and save it. I realized that without a graph it works, but when added a graph I get white space around the saved image;</p>
<pre><code>import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import networkx as nx
G = nx.Graph()
G.add_node(1)
G.add_node(2)
G.add_node(3)
G.add_edge(1, 3)
G.add_edge(1, 2)
pos = {1:[100, 120], 2:[200, 300], 3:[50, 75]}
fig = plt.figure(1)
img = mpimg.imread("image.jpg")
plt.imshow(img)
ax = fig.add_subplot(1, 1, 1)
nx.draw(G, pos=pos)
extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
plt.savefig('1.png', bbox_inches=extent)
plt.axis('off')
plt.show()
</code></pre>
|
<p>I cannot claim I know exactly why or how my “solution” works, but this is what I had to do when I wanted to plot the outline of a couple of aerofoil sections — without white margins — to a PDF file.
(Note that I used matplotlib inside an IPython notebook, with the -pylab flag.)</p>
<pre><code>plt.gca().set_axis_off()
plt.subplots_adjust(top = 1, bottom = 0, right = 1, left = 0,
hspace = 0, wspace = 0)
plt.margins(0,0)
plt.gca().xaxis.set_major_locator(plt.NullLocator())
plt.gca().yaxis.set_major_locator(plt.NullLocator())
plt.savefig("filename.pdf", bbox_inches = 'tight',
pad_inches = 0)
</code></pre>
<p>I have tried to deactivate different parts of this, but this always lead to a white margin somewhere. You may even have modify this to keep fat lines near the limits of the figure from being shaved by the lack of margins.</p>
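<p>If the figure contains nothing but a single image, it may be simpler to skip the figure machinery entirely: <code>plt.imsave</code> writes the raw array as pixels, with no axes or margins at all (the filename below is arbitrary):</p>

```python
import numpy as np
import matplotlib.pyplot as plt

img = np.random.random((32, 32))
# writes exactly the array's pixels: no figure, axes, or margins at all
plt.imsave('no_margins.png', img, cmap='gray')
```

<p>This only works for plain images, of course; anything involving overlaid artists still needs a figure.</p>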
| 1,055
|
matplotlib
|
Moving matplotlib legend outside of the axis makes it cutoff by the figure box
|
https://stackoverflow.com/questions/10101700/moving-matplotlib-legend-outside-of-the-axis-makes-it-cutoff-by-the-figure-box
|
<p>I'm familiar with the following questions:</p>
<p><a href="https://stackoverflow.com/questions/8971834/matplotlib-savefig-with-a-legend-outside-the-plot">Matplotlib savefig with a legend outside the plot</a></p>
<p><a href="https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot">How to put the legend out of the plot</a></p>
<p>It seems that the answers in these questions have the luxury of being able to fiddle with the exact shrinking of the axis so that the legend fits. </p>
<p>Shrinking the axes, however, is not an ideal solution because it makes the data smaller, making it actually more difficult to interpret; particularly when it's complex and there are lots of things going on ... hence needing a large legend.</p>
<p>The example of a complex legend in the documentation demonstrates the need for this because the legend in their plot actually completely obscures multiple data points.</p>
<p><a href="http://matplotlib.sourceforge.net/users/legend_guide.html#legend-of-complex-plots" rel="noreferrer">http://matplotlib.sourceforge.net/users/legend_guide.html#legend-of-complex-plots</a> </p>
<p><strong>What I would like to be able to do is dynamically expand the size of the figure box to accommodate the expanding figure legend.</strong></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-2*np.pi, 2*np.pi, 0.1)
fig = plt.figure(1)
ax = fig.add_subplot(111)
ax.plot(x, np.sin(x), label='Sine')
ax.plot(x, np.cos(x), label='Cosine')
ax.plot(x, np.arctan(x), label='Inverse tan')
lgd = ax.legend(loc=9, bbox_to_anchor=(0.5,0))
ax.grid('on')
</code></pre>
<p>Notice how the final label 'Inverse tan' is actually outside the figure box (and looks badly cutoff - not publication quality!)
<img src="https://i.sstatic.net/0XtO2.png" alt="enter image description here"></p>
<p>Finally, I've been told that this is normal behaviour in R and LaTeX, so I'm a little confused why this is so difficult in python... Is there a historical reason? Is Matlab equally poor on this matter?</p>
<p>I have the (only slightly) longer version of this code on pastebin <a href="http://pastebin.com/grVjc007" rel="noreferrer">http://pastebin.com/grVjc007</a></p>
|
<p><em>[EDIT - 25th Feb 2025]
My day job is no longer Python, so I'm not following the recent matplotlib developments. Please read all the newer answers here as there look to be some excellent modern suggestions compared to this solution from the ancient history of 2012.</em></p>
<p>Sorry EMS, but I actually just got another response from the matplotlib mailing list (thanks go out to Benjamin Root).</p>
<p>The code I am looking for is adjusting the savefig call to:</p>
<pre><code>fig.savefig('samplefigure', bbox_extra_artists=(lgd,), bbox_inches='tight')
#Note that the bbox_extra_artists must be an iterable
</code></pre>
<p>This is apparently similar to calling tight_layout, but instead you allow savefig to consider extra artists in the calculation. This did in fact resize the figure box as desired.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
plt.gcf().clear()
x = np.arange(-2*np.pi, 2*np.pi, 0.1)
fig = plt.figure(1)
ax = fig.add_subplot(111)
ax.plot(x, np.sin(x), label='Sine')
ax.plot(x, np.cos(x), label='Cosine')
ax.plot(x, np.arctan(x), label='Inverse tan')
handles, labels = ax.get_legend_handles_labels()
lgd = ax.legend(handles, labels, loc='upper center', bbox_to_anchor=(0.5,-0.1))
text = ax.text(-0.2,1.05, "Arbitrary text", transform=ax.transAxes)
ax.set_title("Trigonometry")
ax.grid('on')
fig.savefig('samplefigure', bbox_extra_artists=(lgd,text), bbox_inches='tight')
</code></pre>
<p>This produces:</p>
<p><img src="https://i.sstatic.net/Zs4IQ.png" alt="" /></p>
<p>[edit] The intent of this question was to completely avoid the use of arbitrary coordinate placements of arbitrary text as was the traditional solution to these problems. Despite this, numerous edits recently have insisted on putting these in, often in ways that led to the code raising an error. I have now fixed the issues and tidied the arbitrary text to show how these are also considered within the bbox_extra_artists algorithm.</p>
<p>[edit]
Some of the comments below note that since 2019, the command has been simplified.
<em>plt.savefig('x.png', bbox_inches='tight') was sufficient. Thanks for sharing. – mateuszb Jun 27, 2019</em></p>
| 1,056
|
matplotlib
|
How do I plot in real-time in a while loop?
|
https://stackoverflow.com/questions/11874767/how-do-i-plot-in-real-time-in-a-while-loop
|
<p>I am trying to plot some data from a camera in real time using OpenCV. However, the real-time plotting (using matplotlib) doesn't seem to be working.</p>
<p>I've isolated the problem into this simple example:</p>
<pre><code>fig = plt.figure()
plt.axis([0, 1000, 0, 1])
i = 0
x = list()
y = list()
while i < 1000:
temp_y = np.random.random()
x.append(i)
y.append(temp_y)
plt.scatter(i, temp_y)
i += 1
plt.show()
</code></pre>
<p>I would expect this example to plot 1000 points individually. What actually happens is that the window pops up with the first point showing (ok with that), then waits for the loop to finish before it populates the rest of the graph.</p>
<p>Any thoughts why I am not seeing points populated one at a time?</p>
|
<p>Here's the working version of the code in question (requires at least Matplotlib 1.1.0, released 2011-11-14):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
plt.axis([0, 10, 0, 1])
for i in range(10):
y = np.random.random()
plt.scatter(i, y)
plt.pause(0.05)
plt.show()
</code></pre>
<p>Note the call to <code>plt.pause(0.05)</code>, which both draws the new data and runs the GUI's event loop (allowing for mouse interaction).</p>
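<p>If the loop runs for a long time, creating a new scatter artist for every point eventually gets slow; a common variant (same <code>plt.pause</code> mechanism) is to create one line up front and update its data in place. A sketch:</p>

```python
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(0, 10)
ax.set_ylim(0, 1)
line, = ax.plot([], [], 'o-')
xs, ys = [], []
for i in range(10):
    xs.append(i)
    ys.append(np.random.random())
    line.set_data(xs, ys)  # update the existing artist instead of adding new ones
    plt.pause(0.05)        # draws and runs the GUI event loop
plt.show()
```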
| 1,057
|
matplotlib
|
How to draw a rectangle on image
|
https://stackoverflow.com/questions/37435369/how-to-draw-a-rectangle-on-image
|
<p>How to draw a rectangle on an image, like this:
<a href="https://i.sstatic.net/KWG46.jpg" rel="noreferrer"><img src="https://i.sstatic.net/KWG46.jpg" alt="enter image description here" /></a></p>
<pre><code>import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
im = np.array(Image.open('dog.png'), dtype=np.uint8)
plt.imshow(im)
</code></pre>
<p>To make it clear, I meant to draw a rectangle on top of the image for visualization, not to change the image data.</p>
<p>So using <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.patches.Patch.html" rel="noreferrer">matplotlib.patches.Patch</a> would be the best option.</p>
|
<p>You can add a <a href="https://matplotlib.org/api/_as_gen/matplotlib.patches.Rectangle.html#matplotlib.patches.Rectangle" rel="noreferrer"><code>Rectangle</code></a> patch to the matplotlib Axes.</p>
<p>For example (using the image from the tutorial <a href="https://matplotlib.org/stable/tutorials/introductory/images.html#sphx-glr-tutorials-introductory-images-py" rel="noreferrer">here</a>):</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image
im = Image.open('stinkbug.png')
# Create figure and axes
fig, ax = plt.subplots()
# Display the image
ax.imshow(im)
# Create a Rectangle patch
rect = patches.Rectangle((50, 100), 40, 30, linewidth=1, edgecolor='r', facecolor='none')
# Add the patch to the Axes
ax.add_patch(rect)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/4MJtp.png" rel="noreferrer"><img src="https://i.sstatic.net/4MJtp.png" alt="enter image description here" /></a></p>
| 1,058
|
matplotlib
|
Plot correlation matrix using pandas
|
https://stackoverflow.com/questions/29432629/plot-correlation-matrix-using-pandas
|
<p>I have a data set with huge number of features, so analysing the correlation matrix has become very difficult. I want to plot a correlation matrix which we get using <code>dataframe.corr()</code> function from pandas library. Is there any built-in function provided by the pandas library to plot this matrix?</p>
|
<p>You can use <a href="http://matplotlib.org/examples/pylab_examples/matshow.html" rel="noreferrer"><code>pyplot.matshow()</code></a> from <code>matplotlib</code>:</p>
<pre><code>import matplotlib.pyplot as plt
plt.matshow(dataframe.corr())
plt.show()
</code></pre>
<hr />
<p>Edit:</p>
<p>In the comments was a request for how to change the axis tick labels. Here's a deluxe version that is drawn on a bigger figure size, has axis labels to match the dataframe, and a colorbar legend to interpret the color scale.</p>
<p>I'm including how to adjust the size and rotation of the labels, and I'm using a figure ratio that makes the colorbar and the main figure come out the same height.</p>
<hr />
<p>EDIT 2:
As the df.corr() method ignores non-numerical columns, <code>.select_dtypes(['number'])</code> should be used when defining the x and y labels to avoid an unwanted shift of the labels (included in the code below).</p>
<pre><code>f = plt.figure(figsize=(19, 15))
plt.matshow(df.corr(), fignum=f.number)
plt.xticks(range(df.select_dtypes(['number']).shape[1]), df.select_dtypes(['number']).columns, fontsize=14, rotation=45)
plt.yticks(range(df.select_dtypes(['number']).shape[1]), df.select_dtypes(['number']).columns, fontsize=14)
cb = plt.colorbar()
cb.ax.tick_params(labelsize=14)
plt.title('Correlation Matrix', fontsize=16);
</code></pre>
<p><a href="https://i.sstatic.net/XfvsR.png" rel="noreferrer"><img src="https://i.sstatic.net/XfvsR.png" alt="correlation plot example" /></a></p>
| 1,059
|
matplotlib
|
matplotlib Legend Markers Only Once
|
https://stackoverflow.com/questions/6146778/matplotlib-legend-markers-only-once
|
<p>I often plot a point on a matplotlib plot with:</p>
<pre><code>x = 10
y = 100
plot(x, y, "k*", label="Global Optimum")
legend()
</code></pre>
<p>However, this causes the legend to put a star in the legend twice, such that it looks like:</p>
<pre><code>* * Global Optimum
</code></pre>
<p>when I really want it to look like:</p>
<pre><code> * Global Optimum
</code></pre>
<p>How do I do this?</p>
|
<p>This should work:</p>
<pre><code>legend(numpoints=1)
</code></pre>
<p>BTW, if you add the line</p>
<pre><code>legend.numpoints : 1 # the number of points in the legend line
</code></pre>
<p>to your matplotlibrc file, then this will be the new default.</p>
<p>[See also scatterpoints, depending on your plot.]</p>
<p>API: <a href="http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.legend" rel="noreferrer">Link to API docs</a></p>
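<p>For completeness, the example from the question with the fix applied:</p>

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(10, 100, 'k*', label='Global Optimum')
leg = ax.legend(numpoints=1)  # one marker per legend entry
```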
| 1,060
|
matplotlib
|
How can I remove the top and right axis?
|
https://stackoverflow.com/questions/925024/how-can-i-remove-the-top-and-right-axis
|
<p>Instead of the default "boxed" axis style I want to have only the left and bottom axis, i.e.:</p>
<pre><code>+------+ |
| | |
| | ---> |
| | |
+------+ +-------
</code></pre>
<p>This should be easy, but I can't find the necessary options in the docs.</p>
|
<p>This is the suggested Matplotlib 3 solution from the official website <a href="http://matplotlib.org/examples/ticks_and_spines/spines_demo.html" rel="noreferrer">HERE</a>:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x)
ax = plt.subplot(111)
ax.plot(x, y)
# Hide the right and top spines
ax.spines[['right', 'top']].set_visible(False)
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/3dwiV.png" alt="enter image description here" /></p>
| 1,061
|
matplotlib
|
Display image as grayscale
|
https://stackoverflow.com/questions/3823752/display-image-as-grayscale
|
<p>I'm trying to display a grayscale image using <code>matplotlib.pyplot.imshow()</code>. My problem is that the grayscale image is displayed as a colormap. I need it to be grayscale because I want to draw on top of the image with color.</p>
<p>I read in the image and convert to grayscale using PIL's <code>Image.open().convert("L")</code></p>
<pre><code>image = Image.open(file).convert("L")
</code></pre>
<p>Then I convert the image to a matrix so that I can easily do some image processing using</p>
<pre><code>matrix = scipy.misc.fromimage(image, 0)
</code></pre>
<p>However, when I do</p>
<pre><code>figure()
matplotlib.pyplot.imshow(matrix)
show()
</code></pre>
<p>it displays the image using a colormap (i.e. it's not grayscale).</p>
<p>What am I doing wrong here?</p>
|
<p>The following code will load an image from a file <code>image.png</code> and will display it as grayscale.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
fname = 'image.png'
image = Image.open(fname).convert("L")
arr = np.asarray(image)
plt.imshow(arr, cmap='gray', vmin=0, vmax=255)
plt.show()
</code></pre>
<p>If you want to display the inverse grayscale, switch the cmap to <code>cmap='gray_r'</code>.</p>
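<p>Since the goal in the question was to draw in colour on top of the image: <code>cmap='gray'</code> only affects that one <code>imshow</code> call, so artists plotted afterwards keep their own colours. A sketch using random stand-in data in place of the loaded image:</p>

```python
import numpy as np
import matplotlib.pyplot as plt

# stand-in for the array loaded from the file
arr = (np.random.random((64, 64)) * 255).astype(np.uint8)
fig, ax = plt.subplots()
ax.imshow(arr, cmap='gray', vmin=0, vmax=255)
ax.plot([10, 50], [10, 50], color='red', lw=2)  # coloured overlay keeps its colour
```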
| 1,062
|
matplotlib
|
Getting individual colors from a color map in matplotlib
|
https://stackoverflow.com/questions/25408393/getting-individual-colors-from-a-color-map-in-matplotlib
|
<p>If you have a <a href="https://matplotlib.org/stable/tutorials/colors/colormaps.html" rel="noreferrer">Colormap</a> <code>cmap</code>, for example:</p>
<pre><code>cmap = matplotlib.cm.get_cmap('Spectral')
</code></pre>
<p>How can you get a particular colour out of it between 0 and 1, where 0 is the first colour in the map and 1 is the last colour in the map?</p>
<p>Ideally, I would be able to get the middle colour in the map by doing:</p>
<pre><code>>>> do_some_magic(cmap, 0.5) # Return an RGBA tuple
(0.1, 0.2, 0.3, 1.0)
</code></pre>
|
<p>You can do this with the code below, and the code in your question was actually very close to what you needed, all you have to do is call the <code>cmap</code> object you have.</p>
<pre><code>import matplotlib
cmap = matplotlib.cm.get_cmap('Spectral')
rgba = cmap(0.5)
print(rgba) # (0.99807766255210428, 0.99923106502084169, 0.74602077638401709, 1.0)
</code></pre>
<p>For values outside of the range [0.0, 1.0] it will return the under and over colour (respectively). This, by default, is the minimum and maximum colour within the range (so 0.0 and 1.0). This default can be changed with <code>cmap.set_under()</code> and <code>cmap.set_over()</code>. </p>
<p>For "special" numbers such as <code>np.nan</code> and <code>np.inf</code> the default is to use the 0.0 value, this can be changed using <code>cmap.set_bad()</code> similarly to under and over as above.</p>
<p>Finally it may be necessary for you to normalize your data such that it conforms to the range <code>[0.0, 1.0]</code>. This can be done using <a href="http://matplotlib.org/api/colors_api.html#matplotlib.colors.Normalize" rel="noreferrer"><code>matplotlib.colors.Normalize</code></a> simply as shown in the small example below where the arguments <code>vmin</code> and <code>vmax</code> describe what numbers should be mapped to 0.0 and 1.0 respectively.</p>
<pre><code>import matplotlib
norm = matplotlib.colors.Normalize(vmin=10.0, vmax=20.0)
print(norm(15.0)) # 0.5
</code></pre>
<p>A logarithmic normaliser (<a href="http://matplotlib.org/api/colors_api.html#matplotlib.colors.LogNorm" rel="noreferrer">matplotlib.colors.LogNorm</a>) is also available for data ranges with a large range of values.</p>
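<p>Putting the two steps together (note that <code>matplotlib.cm.get_cmap</code> has been deprecated in newer matplotlib releases; on Matplotlib 3.5+ the colormap registry is the current spelling):</p>

```python
import matplotlib
from matplotlib.colors import Normalize

cmap = matplotlib.colormaps['Spectral']  # registry lookup, Matplotlib >= 3.5
norm = Normalize(vmin=10.0, vmax=20.0)
rgba = cmap(norm(15.0))  # colour for the data value 15.0 (normalises to 0.5)
```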
<p><em>(Thanks to both <a href="https://stackoverflow.com/users/325565/joe-kington">Joe Kington</a> and <a href="https://stackoverflow.com/users/380231/tcaswell">tcaswell</a> for suggestions on how to improve the answer.)</em></p>
| 1,063
|
matplotlib
|
Modify tick label text
|
https://stackoverflow.com/questions/11244514/modify-tick-label-text
|
<p>I want to make some modifications to a few selected tick labels in a plot.</p>
<p>For example, if I do:</p>
<pre><code>label = axes.yaxis.get_major_ticks()[2].label
label.set_fontsize(size)
label.set_rotation('vertical')
</code></pre>
<p>the font size and the orientation of the tick label is changed.</p>
<p>However, if try:</p>
<pre><code>label.set_text('Foo')
</code></pre>
<p>the tick label is <em>not</em> modified. Also if I do:</p>
<pre><code>print label.get_text()
</code></pre>
<p>nothing is printed.</p>
<p>Here's some more strangeness. When I tried this:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
axes = plt.figure().add_subplot(111)
t = np.arange(0.0, 2.0, 0.01)
s = np.sin(2*np.pi*t)
axes.plot(t, s)
for ticklabel in axes.get_xticklabels():
print(ticklabel.get_text())
</code></pre>
<p>Only empty strings are printed, but the plot contains ticks labeled as '0.0', '0.5', '1.0', '1.5', and '2.0'.</p>
<p><a href="https://i.sstatic.net/YNEBZ.png" rel="noreferrer"><img src="https://i.sstatic.net/YNEBZ.png" alt="enter image description here" /></a></p>
|
<p>Caveat: Unless the ticklabels are already set to a string (as is usually the case in e.g. a boxplot), this will not work with any version of matplotlib newer than <code>1.1.0</code>. If you're working from the current github master, this won't work. I'm not sure what the problem is yet... It may be an unintended change, or it may not be...</p>
<p>Normally, you'd do something along these lines:</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
# We need to draw the canvas, otherwise the labels won't be positioned and
# won't have values yet.
fig.canvas.draw()
labels = [item.get_text() for item in ax.get_xticklabels()]
labels[1] = 'Testing'
ax.set_xticklabels(labels)
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/5m900.png" alt="enter image description here"></p>
<p>To understand the reason why you need to jump through so many hoops, you need to understand a bit more about how matplotlib is structured.</p>
<p>Matplotlib deliberately avoids doing "static" positioning of ticks, etc, unless it's explicitly told to. The assumption is that you'll want to interact with the plot, and so the bounds of the plot, ticks, ticklabels, etc will be dynamically changing.</p>
<p>Therefore, you can't just set the text of a given tick label. By default, it's re-set by the axis's Locator and Formatter every time the plot is drawn.</p>
<p>However, if the Locators and Formatters are set to be static (<code>FixedLocator</code> and <code>FixedFormatter</code>, respectively), then the tick labels stay the same.</p>
<p>This is what <code>set_*ticklabels</code> or <code>ax.*axis.set_ticklabels</code> does. </p>
<p>Hopefully that makes it slighly more clear as to why changing an individual tick label is a bit convoluted.</p>
<p>Often, what you actually want to do is just annotate a certain position. In that case, look into <code>annotate</code>, instead. </p>
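<p>The closing suggestion can be sketched as a minimal example that annotates a position instead of editing a tick label (the text and coordinates here are made up):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

t = np.arange(0.0, 2.0, 0.01)
s = np.sin(2 * np.pi * t)

fig, ax = plt.subplots()
ax.plot(t, s)
# Annotate a position on the curve instead of renaming a tick label
ax.annotate('Foo', xy=(1.0, 0.0), xytext=(1.2, 0.5),
            arrowprops=dict(arrowstyle='->'))
fig.savefig('annotated.png')
```

Unlike a tick label, the annotation is a static artist, so it is not re-set by the Locator/Formatter machinery on every draw.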
| 1,064
|
matplotlib
|
warning about too many open figures
|
https://stackoverflow.com/questions/21884271/warning-about-too-many-open-figures
|
<p>In a script where I create many figures with <code>fig, ax = plt.subplots(...)</code>, I get the warning <em>RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (<code>matplotlib.pyplot.figure</code>) are retained until explicitly closed and may consume too much memory.</em> </p>
<p>However, I don't understand <em>why</em> I get this warning, because after saving the figure with <code>fig.savefig(...)</code>, I delete it with <code>fig.clear(); del fig</code>. At no point in my code do I have more than one figure open at a time. Still, I get the warning about too many open figures. What does that mean / how can I avoid getting the warning?</p>
|
<p>Use <code>.clf</code> or <code>.cla</code> on your figure object instead of creating a <em>new</em> figure. From <a href="https://stackoverflow.com/a/8228808/249341">@DavidZwicker</a></p>
<p>Assuming you have imported <code>pyplot</code> as</p>
<pre><code>import matplotlib.pyplot as plt
</code></pre>
<p><a href="http://matplotlib.org/1.3.0/api/pyplot_api.html#matplotlib.pyplot.cla" rel="noreferrer"><code>plt.cla()</code> clears an axis</a>, i.e. the currently active axis in the current figure. It leaves the other axes untouched.</p>
<p><a href="http://matplotlib.org/1.3.0/api/pyplot_api.html#matplotlib.pyplot.clf" rel="noreferrer"><code>plt.clf()</code> clears the entire current figure</a> with all its axes, but leaves the window opened, such that it may be reused for other plots.</p>
<p><a href="http://matplotlib.org/1.3.0/api/pyplot_api.html#matplotlib.pyplot.close" rel="noreferrer"><code>plt.close()</code> closes a window</a>, which will be the current window, if not specified otherwise. <code>plt.close('all')</code> will close all open figures.</p>
<p>The reason that <code>del fig</code> does not work is that the <code>pyplot</code> state-machine keeps a reference to the figure around (as it must if it is going to know what the 'current figure' is). This means that even if you delete <em>your</em> ref to the figure, there is at least one live ref, hence it will never be garbage collected.</p>
<p>Since I'm polling on the collective wisdom here for this answer, @JoeKington mentions in the comments that <a href="http://matplotlib.org/1.3.0/api/pyplot_api.html#matplotlib.pyplot.close" rel="noreferrer"><code>plt.close(fig)</code></a> will remove a specific figure instance from the pylab state machine (<a href="https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/_pylab_helpers.py" rel="noreferrer">plt._pylab_helpers.Gcf</a>) and allow it to be garbage collected. </p>
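<p>Putting the advice together, a loop that saves many figures without triggering the warning might look like this (the file names and temporary directory are illustrative):</p>

```python
import os
import tempfile

import matplotlib
matplotlib.use('Agg')  # headless backend; no window is ever opened
import matplotlib.pyplot as plt

outdir = tempfile.mkdtemp()
for i in range(25):  # more than the 20-figure warning threshold
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, i])
    fig.savefig(os.path.join(outdir, 'figure_%02d.png' % i))
    plt.close(fig)  # drop pyplot's reference so the figure can be garbage collected

print(len(plt.get_fignums()))  # no figures remain registered with pyplot
```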
| 1,065
|
matplotlib
|
How to put individual tags for a matplotlib scatter plot?
|
https://stackoverflow.com/questions/5147112/how-to-put-individual-tags-for-a-matplotlib-scatter-plot
|
<p>I am trying to do a scatter plot in matplotlib and I couldn't find a way to add tags to the points. For example:</p>
<pre><code>scatter1=plt.scatter(data1["x"], data1["y"], marker="o",
c="blue",
facecolors="white",
edgecolors="blue")
</code></pre>
<p>I want for the points in "y" to have labels as "point 1", "point 2", etc. I couldn't figure it out.</p>
|
<p>Perhaps use <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.annotate">plt.annotate</a>:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
N = 10
data = np.random.random((N, 4))
labels = ['point{0}'.format(i) for i in range(N)]
plt.subplots_adjust(bottom = 0.1)
plt.scatter(
data[:, 0], data[:, 1], marker='o', c=data[:, 2], s=data[:, 3] * 1500,
cmap=plt.get_cmap('Spectral'))
for label, x, y in zip(labels, data[:, 0], data[:, 1]):
plt.annotate(
label,
xy=(x, y), xytext=(-20, 20),
textcoords='offset points', ha='right', va='bottom',
bbox=dict(boxstyle='round,pad=0.5', fc='yellow', alpha=0.5),
arrowprops=dict(arrowstyle = '->', connectionstyle='arc3,rad=0'))
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/b5uhP.png" alt="enter image description here"></p>
| 1,066
|
matplotlib
|
Specifying and saving a figure with exact size in pixels
|
https://stackoverflow.com/questions/13714454/specifying-and-saving-a-figure-with-exact-size-in-pixels
|
<p>Say I have an image of size 3841 x 7195 pixels. I would like to save the contents of the figure to disk, resulting in an image of the <strong>exact size</strong> I specify in pixels.</p>
<p>No axis, no titles. Just the image. I don't personally care about DPI, as I only want to specify the size the image takes on disk, <strong>in pixels</strong>.</p>
<p>I have read <a href="https://stackoverflow.com/search?q=figure%20size%20matplotlib">other</a> <a href="https://stackoverflow.com/questions/8775622/exact-figure-size-in-matplotlib-with-title-axis-labels">threads</a>, and they all seem to do conversions to inches and then specify the dimensions of the figure in inches and adjust dpi's in some way. I would like to avoid dealing with the potential loss of accuracy that could result from pixel-to-inches conversions.</p>
<p>I have tried with:</p>
<pre><code>w = 7195
h = 3841
fig = plt.figure(frameon=False)
fig.set_size_inches(w,h)
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(im_np, aspect='normal')
fig.savefig(some_path, dpi=1)
</code></pre>
<p>with no luck (Python complains that width and height must each be below 32768 (?))</p>
<p>From everything I have seen, <code>matplotlib</code> requires the figure size to be specified in <code>inches</code> and <code>dpi</code>, but I am only interested in <strong>the pixels</strong> the figure takes on disk. How can I do this?</p>
<p>To clarify: I am looking for a way to do this with <code>matplotlib</code>, and not with other image-saving libraries.</p>
|
<p>Matplotlib doesn't work with pixels directly, but rather physical sizes and DPI. If you want to display a figure with a certain pixel size, you need to know the DPI of your monitor. For example <a href="http://www.infobyip.com/detectmonitordpi.php" rel="noreferrer">this link</a> will detect that for you.</p>
<p>If you have an image of 3841x7195 pixels it is unlikely that your monitor will be that large, so you won't be able to show a figure of that size (matplotlib requires the figure to fit in the screen; if you ask for a size too large it will shrink to the screen size). Let's imagine you want an 800x800 pixel image just for an example. Here's how to show an 800x800 pixel image in my monitor (<code>my_dpi=96</code>):</p>
<pre><code>plt.figure(figsize=(800/my_dpi, 800/my_dpi), dpi=my_dpi)
</code></pre>
<p>So you basically just divide the dimensions in pixels by your DPI.</p>
<p>If you want to save a figure of a specific size, then it is a different matter. Screen DPIs are not so important anymore (unless you ask for a figure that won't fit in the screen). Using the same example of the 800x800 pixel figure, we can save it in different resolutions using the <code>dpi</code> keyword of <code>savefig</code>. To save it in the same resolution as the screen just use the same dpi:</p>
<pre><code>plt.savefig('my_fig.png', dpi=my_dpi)
</code></pre>
<p>To save it as an 8000x8000 pixel image, use a dpi 10 times larger:</p>
<pre><code>plt.savefig('my_fig.png', dpi=my_dpi * 10)
</code></pre>
<p>Note that the setting of the DPI is not supported by all backends. Here, the PNG backend is used, but the pdf and ps backends will implement the size differently. Changing the DPI and sizes will also affect things like font size. A larger DPI will keep the same relative sizes of fonts and elements, but if you want smaller fonts for a larger figure you need to increase the physical size instead of the DPI.</p>
<p>Getting back to your example, if you want to save a image with 3841 x 7195 pixels, you could do the following:</p>
<pre><code>plt.figure(figsize=(3.841, 7.195), dpi=100)
( your code ...)
plt.savefig('myfig.png', dpi=1000)
</code></pre>
<p>Note that I used the figure dpi of 100 to fit in most screens, but saved with <code>dpi=1000</code> to achieve the required resolution. In my system this produces a png with 3840x7190 pixels -- it seems that the DPI saved is always 0.02 pixels/inch smaller than the selected value, which will have a (small) effect on large image sizes. Some more discussion of this <a href="https://stackoverflow.com/questions/8775622/exact-figure-size-in-matplotlib-with-title-axis-labels">here</a>.</p>
| 1,067
|
matplotlib
|
How do I make a single legend for many subplots?
|
https://stackoverflow.com/questions/9834452/how-do-i-make-a-single-legend-for-many-subplots
|
<p>I am plotting the same type of information, but for different countries, with multiple subplots with Matplotlib. That is, I have nine plots on a 3x3 grid, all with the same four lines (of course, different values per line).</p>
<p>However, I have not figured out how to put a single legend (since all nine subplots have the same lines) on the figure just once.</p>
<p>How do I do that?</p>
|
<p>There is also a nice function <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.get_legend_handles_labels.html" rel="nofollow noreferrer"><code>get_legend_handles_labels()</code></a> you can call on the last axis (if you iterate over them) that would collect everything you need from <code>label=</code> arguments:</p>
<pre><code>handles, labels = ax.get_legend_handles_labels()
fig.legend(handles, labels, loc='upper center')
</code></pre>
<p>If the <code>pyplot</code> interface is being used instead of the <code>Axes</code> interface, use:</p>
<pre><code>handles, labels = plt.gca().get_legend_handles_labels()
</code></pre>
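<p>A minimal end-to-end sketch for the 3x3 case described in the question (the data and labels are made up):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 1, 10)
fig, axes = plt.subplots(3, 3, sharex=True, sharey=True)
for ax in axes.flat:
    ax.plot(x, x, label='linear')
    ax.plot(x, x ** 2, label='quadratic')

# All nine subplots contain the same lines, so the handles
# from any single axes are enough for one figure-level legend.
handles, labels = axes[-1, -1].get_legend_handles_labels()
fig.legend(handles, labels, loc='upper center', ncol=2)
fig.savefig('grid.png')
```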
<p>To remove legends from subplots, see <a href="https://stackoverflow.com/q/5735208/7758804">Remove the legend on a matplotlib figure</a>.</p>
<p>To merge <code>twinx</code> legends, see <a href="https://stackoverflow.com/q/5484922/7758804">Secondary axis with twinx(): how to add to legend</a>.</p>
| 1,068
|
matplotlib
|
Plot yerr/xerr as shaded region rather than error bars
|
https://stackoverflow.com/questions/12957582/plot-yerr-xerr-as-shaded-region-rather-than-error-bars
|
<p>In matplotlib, how do I plot error as a shaded region rather than error bars?</p>
<p>For example:</p>
<p><a href="https://i.sstatic.net/skJ5O.png" rel="noreferrer"><img src="https://i.sstatic.net/skJ5O.png" alt="enter image description here"></a></p>
<p>rather than</p>
<p><a href="https://i.sstatic.net/CV5i6.gif" rel="noreferrer"><img src="https://i.sstatic.net/CV5i6.gif" alt="enter image description here"></a></p>
|
<p>Ignoring the smooth interpolation between points in your example graph (that would require doing some manual interpolation, or just have a higher resolution of your data), you can use <a href="https://matplotlib.org/3.1.3/api/_as_gen/matplotlib.pyplot.fill_between.html" rel="noreferrer"><code>pyplot.fill_between()</code></a>:</p>
<pre><code>from matplotlib import pyplot as plt
import numpy as np
x = np.linspace(0, 30, 30)
y = np.sin(x/6*np.pi)
error = np.random.normal(0.1, 0.02, size=y.shape)
y += np.random.normal(0, 0.1, size=y.shape)
plt.plot(x, y, 'k-')
plt.fill_between(x, y-error, y+error)
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/105nT.png" alt="enter image description here"></p>
<p>See also the <a href="http://matplotlib.org/examples/pylab_examples/fill_between_demo.html" rel="noreferrer">matplotlib examples</a>.</p>
| 1,069
|
matplotlib
|
Rotate label text in seaborn
|
https://stackoverflow.com/questions/26540035/rotate-label-text-in-seaborn
|
<p>I have a simple factorplot</p>
<pre><code>import seaborn as sns
g = sns.factorplot("name", "miss_ratio", "policy", dodge=.2,
linestyles=["none", "none", "none", "none"], data=df[df["level"] == 2])
</code></pre>
<p><img src="https://i.sstatic.net/gg7aD.png" alt="enter image description here"></p>
<p>The problem is that the x labels all run together, making them unreadable. How do you rotate the text so that the labels are readable?</p>
|
<p>You can rotate tick labels with the <code>tick_params</code> method on matplotlib <code>Axes</code> objects. To provide a specific example:</p>
<pre><code>ax.tick_params(axis='x', rotation=90)
</code></pre>
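<p>Since seaborn draws onto ordinary matplotlib <code>Axes</code>, the same call works on any matplotlib plot. A minimal sketch using plain matplotlib rather than seaborn (the category names are made up):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(['first category', 'second category', 'third category'], [3, 5, 2])
# A seaborn plot exposes the same kind of Axes object,
# so this exact call applies there as well.
ax.tick_params(axis='x', rotation=90)
fig.savefig('rotated.png')
```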
| 1,070
|
matplotlib
|
Date ticks and rotation
|
https://stackoverflow.com/questions/11264521/date-ticks-and-rotation
|
<p>I am having an issue trying to get my date ticks rotated in matplotlib. A small sample program is below. If I try to rotate the ticks at the end, the ticks do not get rotated. If I try to rotate the ticks as shown under the comment 'crashes', then matplotlib crashes. </p>
<p>This only happens if the x-values are dates. If I replace the variable <code>dates</code> with the variable <code>t</code> in the call to <code>avail_plot</code>, the <code>xticks(rotation=70)</code> call works just fine inside <code>avail_plot</code>. </p>
<p>Any ideas? </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
def avail_plot(ax, x, y, label, lcolor):
ax.plot(x,y,'b')
ax.set_ylabel(label, rotation='horizontal', color=lcolor)
ax.get_yaxis().set_ticks([])
#crashes
#plt.xticks(rotation=70)
ax2 = ax.twinx()
ax2.plot(x, [1 for a in y], 'b')
ax2.get_yaxis().set_ticks([])
ax2.set_ylabel('testing')
f, axs = plt.subplots(2, sharex=True, sharey=True)
t = np.arange(0.01, 5, 1)
s1 = np.exp(t)
start = dt.datetime.now()
dates=[]
for val in t:
next_val = start + dt.timedelta(0,val)
dates.append(next_val)
start = next_val
avail_plot(axs[0], dates, s1, 'testing', 'green')
avail_plot(axs[1], dates, s1, 'testing2', 'red')
plt.subplots_adjust(hspace=0, bottom=0.3)
plt.yticks([0.5,],("",""))
#doesn't crash, but does not rotate the xticks
#plt.xticks(rotation=70)
plt.show()
</code></pre>
|
<p>If you prefer a non-object-oriented approach, move <code>plt.xticks(rotation=70)</code> to right <em>before</em> the two <code>avail_plot</code> calls, e.g.
<pre><code>plt.xticks(rotation=70)
avail_plot(axs[0], dates, s1, 'testing', 'green')
avail_plot(axs[1], dates, s1, 'testing2', 'red')
</code></pre>
<p>This sets the rotation property before setting up the labels. Since you have two axes here, <code>plt.xticks</code> gets confused after you've made the two plots. At the point when <code>plt.xticks</code> doesn't do anything, <code>plt.gca()</code> does <em>not</em> give you the axes you want to modify, and so <code>plt.xticks</code>, which acts on the current axes, is not going to work.</p>
<p>For an object-oriented approach not using <code>plt.xticks</code>, you can use</p>
<pre><code>plt.setp( axs[1].xaxis.get_majorticklabels(), rotation=70 )
</code></pre>
<p><em>after</em> the two <code>avail_plot</code> calls. This sets the rotation on the correct axes specifically.</p>
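<p>As a further alternative not mentioned in the original answer, <code>Figure.autofmt_xdate</code> rotates and right-aligns the date labels of the bottom shared axis in one call. A minimal sketch (the dates and values are made up):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt
import datetime as dt

dates = [dt.datetime(2020, 1, 1) + dt.timedelta(hours=i) for i in range(5)]
values = [1, 3, 2, 5, 4]

fig, axs = plt.subplots(2, sharex=True)
for ax in axs:
    ax.plot(dates, values)
# Rotates and right-aligns the labels of the bottom shared axis
# and hides the redundant labels on the upper axis.
fig.autofmt_xdate(rotation=70)
fig.savefig('dates.png')
```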
| 1,071
|
matplotlib
|
How to plot a high resolution graph
|
https://stackoverflow.com/questions/39870642/how-to-plot-a-high-resolution-graph
|
<p>I've used matplotlib for plotting some experimental results (discussed it in here: <a href="https://stackoverflow.com/questions/39676294/looping-over-files-and-plotting-python/" title="Looping over files and plotting (Python)">Looping over files and plotting</a>. However, saving the picture by clicking right to the image gives very bad quality / low resolution images.</p>
<pre><code>from glob import glob
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
# loop over all files in the current directory ending with .txt
for fname in glob("./*.txt"):
# read file, skip header (1 line) and unpack into 3 variables
WL, ABS, T = np.genfromtxt(fname, skip_header=1, unpack=True)
# first plot
plt.plot(WL, T, label='BN', color='blue')
plt.xlabel('Wavelength (nm)')
plt.xlim(200,1000)
plt.ylim(0,100)
plt.ylabel('Transmittance, %')
mpl.rcParams.update({'font.size': 14})
#plt.legend(loc='lower center')
plt.title('')
plt.show()
plt.clf()
# second plot
plt.plot(WL, ABS, label='BN', color='red')
plt.xlabel('Wavelength (nm)')
plt.xlim(200,1000)
plt.ylabel('Absorbance, A')
mpl.rcParams.update({'font.size': 14})
#plt.legend()
plt.title('')
plt.show()
plt.clf()
</code></pre>
<p>Example graph of what I'm looking for: <a href="https://i.sstatic.net/CNSoO.png" rel="noreferrer">example graph</a></p>
|
<p>You can use <a href="https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure.savefig" rel="nofollow noreferrer"><code>savefig()</code></a> to export to an image file:</p>
<pre><code>plt.savefig('filename.png')
</code></pre>
<p>In addition, you can specify the <code>dpi</code> argument to some scalar value (default is 100). For example:</p>
<pre><code>plt.savefig('filename.png', dpi=300)
</code></pre>
| 1,072
|
matplotlib
|
How do I draw a grid onto a plot in Python?
|
https://stackoverflow.com/questions/8209568/how-do-i-draw-a-grid-onto-a-plot-in-python
|
<p>I just finished writing code to make a plot using <a href="https://en.wikipedia.org/wiki/Matplotlib#Comparison_with_MATLAB" rel="noreferrer">pylab</a> in Python and now I would like to superimpose a grid of 10x10 onto the scatter plot. How do I do that?</p>
<p>My current code is the following:</p>
<pre class="lang-py prettyprint-override"><code>x = numpy.arange(0, 1, 0.05)
y = numpy.power(x, 2)
fig = plt.figure()
ax = fig.gca()
ax.set_xticks(numpy.arange(0, 1, 0.1))
ax.set_yticks(numpy.arange(0, 1., 0.1))
plt.scatter(x, y)
plt.show()
</code></pre>
<p>And its output is:</p>
<p><a href="https://i.sstatic.net/gGOBRm.png" rel="noreferrer"><img src="https://i.sstatic.net/gGOBRm.png" alt="Without grid" /></a></p>
<p>What I would like is the following output:</p>
<p><a href="https://i.sstatic.net/GGscmm.png" rel="noreferrer"><img src="https://i.sstatic.net/GGscmm.png" alt="With grid" /></a></p>
|
<p>You want to use <code>pyplot.grid</code>:</p>
<pre><code>x = numpy.arange(0, 1, 0.05)
y = numpy.power(x, 2)
fig = plt.figure()
ax = fig.gca()
ax.set_xticks(numpy.arange(0, 1, 0.1))
ax.set_yticks(numpy.arange(0, 1., 0.1))
plt.scatter(x, y)
plt.grid()
plt.show()
</code></pre>
<p><code>ax.xaxis.grid</code> and <code>ax.yaxis.grid</code> can control grid lines properties.</p>
<p><img src="https://i.sstatic.net/7in5d.png" alt="Enter image description here"></p>
| 1,073
|
matplotlib
|
How to set the subplot axis range
|
https://stackoverflow.com/questions/2849286/how-to-set-the-subplot-axis-range
|
<p>How can I set the y-axis range of the second subplot to e.g. [0,1000]?
The FFT plot of my data (a column in a text file) results in an (inf.?) spike so that the actual data is not visible.</p>
<pre class="lang-py prettyprint-override"><code>pylab.ylim([0,1000])
</code></pre>
<p>has no effect, unfortunately. This is the whole script:</p>
<pre class="lang-py prettyprint-override"><code># based on http://www.swharden.com/blog/2009-01-21-signal-filtering-with-python/
import numpy, scipy, pylab, random
xs = []
rawsignal = []
with open("test.dat", 'r') as f:
for line in f:
if line[0] != '#' and len(line) > 0:
xs.append( int( line.split()[0] ) )
rawsignal.append( int( line.split()[1] ) )
h, w = 3, 1
pylab.figure(figsize=(12,9))
pylab.subplots_adjust(hspace=.7)
pylab.subplot(h,w,1)
pylab.title("Signal")
pylab.plot(xs,rawsignal)
pylab.subplot(h,w,2)
pylab.title("FFT")
fft = scipy.fft(rawsignal)
#~ pylab.axis([None,None,0,1000])
pylab.ylim([0,1000])
pylab.plot(abs(fft))
pylab.savefig("SIG.png",dpi=200)
pylab.show()
</code></pre>
<p>Other improvements are also appreciated!</p>
|
<p>You have <a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.ylim.html" rel="noreferrer"><code>pylab.ylim</code></a>:</p>
<pre><code>pylab.ylim([0,1000])
</code></pre>
<p>Note: The command has to be executed after the plot!</p>
<p><strong>Update 2021</strong><br />
Since <a href="https://matplotlib.org/stable/api/index.html?highlight=pylab#module-pylab" rel="noreferrer">the use of pylab is now strongly discouraged by matplotlib</a>, you should instead use pyplot:</p>
<pre><code>from matplotlib import pyplot as plt
plt.ylim(0, 100)
#corresponding function for the x-axis
plt.xlim(1, 1000)
</code></pre>
| 1,074
|
matplotlib
|
_tkinter.TclError: no display name and no $DISPLAY environment variable
|
https://stackoverflow.com/questions/37604289/tkinter-tclerror-no-display-name-and-no-display-environment-variable
|
<p>I am running a simple python script in the server:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.random.randn(60)
y = np.random.randn(60)
plt.scatter(x, y, s=20)
out_png = 'path/to/store/out_file.png'
plt.savefig(out_png, dpi=150)
</code></pre>
<p>I try to use the command <code>python example.py</code> in this server which has matplotlib 1.5.1 installed it fails with the error:</p>
<pre><code>Traceback (most recent call last):
File "example.py", line 7, in <module>
plt.scatter(x, y, s=20)
File "/home/USER/.virtualenvs/nnet/lib/python2.7/site-packages/matplotlib/pyplot.py", line 3241, in scatter
ax = gca()
File "/home/USER/.virtualenvs/nnet/lib/python2.7/site-packages/matplotlib/pyplot.py", line 928, in gca
return gcf().gca(**kwargs)
File "/home/USER/.virtualenvs/nnet/lib/python2.7/site-packages/matplotlib/pyplot.py", line 578, in gcf
return figure()
File "/home/USER/.virtualenvs/nnet/lib/python2.7/site-packages/matplotlib/pyplot.py", line 527, in figure
**kwargs)
File "/home/USER/.virtualenvs/nnet/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 84, in new_figure_manager
return new_figure_manager_given_figure(num, figure)
File "/home/USER/.virtualenvs/nnet/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 92, in new_figure_manager_given_figure
window = Tk.Tk()
File "/usr/local/lib/python2.7/lib-tk/Tkinter.py", line 1810, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
</code></pre>
<p>What is happening here?</p>
|
<p>Matplotlib chooses an Xwindows backend by default, so you need to tell matplotlib not to use Xwindows.</p>
<p>Add this code to the start of your script (<strong>before importing pyplot</strong>) and try again:</p>
<pre><code>import matplotlib
matplotlib.use('Agg')
</code></pre>
<p>Or add to <code>.config/matplotlib/matplotlibrc</code> line <a href="https://matplotlib.org/faq/howto_faq.html#generate-images-without-having-a-window-appear" rel="noreferrer"><code>backend: Agg</code></a> to use non-interactive backend.</p>
<pre class="lang-sh prettyprint-override"><code>echo "backend: Agg" > ~/.config/matplotlib/matplotlibrc
</code></pre>
<p>Or when connect to server use <code>ssh -X remoteMachine</code> command to use Xwindows.</p>
<p>Also you may try to export display: <code>export DISPLAY=mymachine.com:0.0</code>.</p>
<p>For more info: <a href="https://matplotlib.org/faq/howto_faq.html#matplotlib-in-a-web-application-server" rel="noreferrer">https://matplotlib.org/faq/howto_faq.html#matplotlib-in-a-web-application-server</a></p>
| 1,075
|
matplotlib
|
Plotting in a non-blocking way with Matplotlib
|
https://stackoverflow.com/questions/28269157/plotting-in-a-non-blocking-way-with-matplotlib
|
<p>I am having problems trying to make matplotlib plot a function without blocking execution.</p>
<p>I have tried using <code>show(block=False)</code> as some people suggest, but all I get is a frozen window. If I simply call <code>show()</code>, the result is plotted properly but execution is blocked until the window is closed. From other threads I've read, I suspect that whether <code>show(block=False)</code> works or not depends on the backend. Is this correct? My backend is Qt4Agg. Could you have a look at my code and tell me if you see something wrong? Here is my code.</p>
<pre><code>from math import *
from matplotlib import pyplot as plt
print(plt.get_backend())
def main():
x = range(-50, 51, 1)
for pow in range(1,5): # plot x^1, x^2, ..., x^4
y = [Xi**pow for Xi in x]
print(y)
plt.plot(x, y)
plt.draw()
#plt.show() #this plots correctly, but blocks execution.
plt.show(block=False) #this creates an empty frozen window.
_ = raw_input("Press [enter] to continue.")
if __name__ == '__main__':
main()
</code></pre>
<p>PS. I forgot to say that I would like to update the existing window every time I plot something, instead of creating a new one.</p>
|
<p>I spent a long time looking for solutions, and found <a href="https://stackoverflow.com/questions/11874767/real-time-plotting-in-while-loop-with-matplotlib">this answer</a>.</p>
<p>It looks like, in order to get what you (and I) want, you need the combination of <code>plt.ion()</code>, <code>plt.show()</code> (not with <code>block=False</code>) and, most importantly, <code>plt.pause(.001)</code> (or whatever time you want). The <a href="http://nullege.com/codes/search/matplotlib.pyplot.pause" rel="noreferrer">pause</a> is needed because the GUI events happen while the main code is sleeping, including drawing. It's possible that this is implemented by picking up time from a sleeping thread, so maybe IDEs mess with that; I don't know.</p>
<p>Here's an implementation that works for me on python 3.5: </p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
def main():
plt.axis([-50,50,0,10000])
plt.ion()
plt.show()
x = np.arange(-50, 51)
for pow in range(1,5): # plot x^1, x^2, ..., x^4
y = [Xi**pow for Xi in x]
plt.plot(x, y)
plt.draw()
plt.pause(0.001)
input("Press [enter] to continue.")
if __name__ == '__main__':
main()
</code></pre>
| 1,076
|
matplotlib
|
Generating matplotlib graphs without a running X server
|
https://stackoverflow.com/questions/4931376/generating-matplotlib-graphs-without-a-running-x-server
|
<p>Matplotlib seems to require the $DISPLAY environment variable which means a running X server.<br>Some web hosting services do not allow a running X server session.<br>Is there a way to generate graphs using matplotlib without a running X server?</p>
<pre><code>[username@hostname ~]$ python2.6
Python 2.6.5 (r265:79063, Nov 23 2010, 02:02:03)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/username/lib/python2.6/matplotlib-1.0.1-py2.6-linux-i686.egg/matplotlib/pyplot.py", line 270, in figure
**kwargs)
File "/home/username/lib/python2.6/matplotlib-1.0.1-py2.6-linux-i686.egg/matplotlib/backends/backend_tkagg.py", line 80, in new_figure_manager
window = Tk.Tk()
File "/usr/local/lib/python2.6/lib-tk/Tkinter.py", line 1643, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
>>>
</code></pre>
|
<p>@Neil's answer is one (perfectly valid!) way of doing it, but you can also <a href="http://matplotlib.sourceforge.net/faq/howto_faq.html#matplotlib-in-a-web-application-server" rel="noreferrer">simply call <code>matplotlib.use('Agg')</code> <em>before</em> importing <code>matplotlib.pyplot</code></a>, and then continue as normal. </p>
<p>E.g.</p>
<pre><code>import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(10))
fig.savefig('temp.png')
</code></pre>
<p>You don't have to use the Agg backend, as well. The <a href="http://matplotlib.org/faq/usage_faq.html#what-is-a-backend" rel="noreferrer">pdf, ps, svg, agg, cairo, and gdk backends</a> can all be used without an X-server. However, only the Agg backend will be built by default (I think?), so there's a good chance that the other backends may not be enabled on your particular install.</p>
<p>Alternately, you can just set the backend parameter in your <a href="http://matplotlib.org/users/customizing.html#the-matplotlibrc-file" rel="noreferrer"><code>.matplotlibrc</code></a> file to automatically have <code>matplotlib.pyplot</code> use the given renderer.</p>
| 1,077
|
matplotlib
|
Setting different color for each series in scatter plot
|
https://stackoverflow.com/questions/12236566/setting-different-color-for-each-series-in-scatter-plot
|
<p>Suppose I have three data sets:</p>
<pre><code>X = [1,2,3,4]
Y1 = [4,8,12,16]
Y2 = [1,4,9,16]
</code></pre>
<p>I can scatter plot this:</p>
<pre><code>from matplotlib import pyplot as plt
plt.scatter(X,Y1,color='red')
plt.scatter(X,Y2,color='blue')
plt.show()
</code></pre>
<p>How can I do this with 10 sets? </p>
<p>I searched for this and couldn't find any reference to what I'm asking.</p>
<p><strong>Edit: clarifying (hopefully) my question</strong> </p>
<p>If I call scatter multiple times, I can only set the same color on each scatter. Also, I know I can set a color array manually, but I'm sure there is a better way to do this.
My question is then, "How can I automatically scatter-plot my several data sets, each with a different color?"</p>
<p>If that helps, I can easily assign a unique number to each data set. </p>
|
<p>I don't know what you mean by 'manually'. You can choose a colourmap and make a colour array easily enough:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
x = np.arange(10)
ys = [i+x+(i*x)**2 for i in range(10)]
colors = cm.rainbow(np.linspace(0, 1, len(ys)))
for y, c in zip(ys, colors):
plt.scatter(x, y, color=c)
</code></pre>
<p><a href="https://i.sstatic.net/XwKlk.png" rel="noreferrer"><img src="https://i.sstatic.net/XwKlk.png" alt="Matplotlib graph with different colors" /></a></p>
<p>Or you can make your own colour cycler using <code>itertools.cycle</code> and specifying the colours you want to loop over, using <code>next</code> to get the one you want. For example, with 3 colours:</p>
<pre><code>import itertools
colors = itertools.cycle(["r", "b", "g"])
for y in ys:
plt.scatter(x, y, color=next(colors))
</code></pre>
<p><a href="https://i.sstatic.net/z6UAN.png" rel="noreferrer"><img src="https://i.sstatic.net/z6UAN.png" alt="Matplotlib graph with only 3 colors" /></a></p>
<p>Come to think of it, maybe it's cleaner not to use <code>zip</code> with the first one either:</p>
<pre><code>colors = iter(cm.rainbow(np.linspace(0, 1, len(ys))))
for y in ys:
plt.scatter(x, y, color=next(colors))
</code></pre>
| 1,078
|
matplotlib
|
Matplotlib transparent line plots
|
https://stackoverflow.com/questions/4320021/matplotlib-transparent-line-plots
|
<p>I am plotting two similar trajectories in matplotlib and I'd like to plot each of the lines with partial transparency so that the red (plotted second) doesn't obscure the blue.</p>
<p><img src="https://i.sstatic.net/O3V1B.png" alt="alt text"></p>
<p><strong>EDIT</strong>: Here's the image with transparent lines.</p>
<p><img src="https://i.sstatic.net/D3GaU.png" alt="alt text"></p>
|
<p>Plain and simple:</p>
<pre><code>plt.plot(x, y, 'r-', alpha=0.7)
</code></pre>
<p>(I know I add nothing new, but the straightforward answer should be visible).</p>
| 1,079
|
matplotlib
|
Add x and y labels to a pandas plot
|
https://stackoverflow.com/questions/21487329/add-x-and-y-labels-to-a-pandas-plot
|
<p>Suppose I have the following code that plots something very simple using pandas:</p>
<pre><code>import pandas as pd
values = [[1, 2], [2, 5]]
df2 = pd.DataFrame(values, columns=['Type A', 'Type B'],
index=['Index 1', 'Index 2'])
df2.plot(lw=2, colormap='jet', marker='.', markersize=10,
title='Video streaming dropout by category')
</code></pre>
<p><img src="https://i.sstatic.net/LIkH3.png" alt="Output"></p>
<p>How do I easily set x and y-labels while preserving my ability to use specific colormaps? I noticed that the <code>plot()</code> wrapper for pandas DataFrames doesn't take any parameters specific for that.</p>
|
<p>In Pandas <em>version 1.1.0</em> you can use the parameters <code>xlabel</code> and <code>ylabel</code> in the method <code>plot</code>:</p>
<pre><code>df.plot(xlabel='X Label', ylabel='Y Label', title='Plot Title')
</code></pre>
<p><a href="https://i.sstatic.net/zPRPM.png" rel="noreferrer"><img src="https://i.sstatic.net/zPRPM.png" alt="enter image description here" /></a></p>
| 1,080
|
matplotlib
|
How to get different colored lines for different plots in a single figure
|
https://stackoverflow.com/questions/4805048/how-to-get-different-colored-lines-for-different-plots-in-a-single-figure
|
<p>I am using <code>matplotlib</code> to create the plots. I have to identify each plot with a different color which should be automatically generated by Python.</p>
<p>Can you please give me a method to put different colors for different plots in the same figure? </p>
|
<p>Matplotlib does this by default.</p>
<p>E.g.:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.arange(10)
plt.plot(x, x)
plt.plot(x, 2 * x)
plt.plot(x, 3 * x)
plt.plot(x, 4 * x)
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/TGKN1m.png" alt="Basic plot demonstrating color cycling"></p>
<p>And, as you may already know, you can easily add a legend:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.arange(10)
plt.plot(x, x)
plt.plot(x, 2 * x)
plt.plot(x, 3 * x)
plt.plot(x, 4 * x)
plt.legend(['y = x', 'y = 2x', 'y = 3x', 'y = 4x'], loc='upper left')
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/7hPUnm.png" alt="Basic plot with legend"></p>
<p>If you want to control the colors that will be cycled through:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.arange(10)
plt.gca().set_prop_cycle(color=['red', 'green', 'blue', 'yellow'])  # set_color_cycle() in matplotlib < 1.5
plt.plot(x, x)
plt.plot(x, 2 * x)
plt.plot(x, 3 * x)
plt.plot(x, 4 * x)
plt.legend(['y = x', 'y = 2x', 'y = 3x', 'y = 4x'], loc='upper left')
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/7Cq4Sm.png" alt="Plot showing control over default color cycling"></p>
<p>If you're unfamiliar with matplotlib, <a href="http://matplotlib.sourceforge.net/users/pyplot_tutorial.html#" rel="noreferrer">the tutorial is a good place to start</a>.</p>
<p><strong>Edit:</strong></p>
<p>First off, if you have a lot (>5) of things you want to plot on one figure, either: </p>
<ol>
<li>Put them on different plots (consider using a few subplots on one figure), or </li>
<li>Use something other than color (i.e. marker styles or line thickness) to distinguish between them. </li>
</ol>
<p>Otherwise, you're going to wind up with a <em>very</em> messy plot! Be nice to whoever is going to read whatever you're doing and don't try to cram 15 different things onto one figure!! </p>
<p>Beyond that, many people are colorblind to varying degrees, and distinguishing between numerous subtly different colors is difficult for more people than you may realize. </p>
<p>That having been said, if you really want to put 20 lines on one axis with 20 relatively distinct colors, here's one way to do it:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
num_plots = 20
# Have a look at the colormaps here and decide which one you'd like:
# http://matplotlib.org/1.2.1/examples/pylab_examples/show_colormaps.html
colormap = plt.cm.gist_ncar
plt.gca().set_prop_cycle(plt.cycler('color', colormap(np.linspace(0, 1, num_plots))))
# Plot several different functions...
x = np.arange(10)
labels = []
for i in range(1, num_plots + 1):
plt.plot(x, i * x + 5 * i)
labels.append(r'$y = %ix + %i$' % (i, 5*i))
# I'm basically just demonstrating several different legend options here...
plt.legend(labels, ncol=4, loc='upper center',
bbox_to_anchor=[0.5, 1.1],
columnspacing=1.0, labelspacing=0.0,
handletextpad=0.0, handlelength=1.5,
fancybox=True, shadow=True)
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/cdG3jm.png" alt="Unique colors for 20 lines based on a given colormap"></p>
| 1,081
|
matplotlib
|
Plotting a 2D heatmap
|
https://stackoverflow.com/questions/33282368/plotting-a-2d-heatmap
|
<p>Using Matplotlib, I want to plot a 2D heat map. My data is an n-by-n Numpy array, each with a value between 0 and 1. So for the (i, j) element of this array, I want to plot a square at the (i, j) coordinate in my heat map, whose color is proportional to the element's value in the array.</p>
<p>How can I do this?</p>
|
<p>The <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html" rel="noreferrer"><code>imshow()</code></a> function with parameters <code>interpolation='nearest'</code> and <code>cmap='hot'</code> should do what you want.</p>
<p>Please review the <code>interpolation</code> parameter details, and see <a href="https://matplotlib.org/stable/gallery/images_contours_and_fields/interpolation_methods.html" rel="noreferrer">Interpolations for imshow</a> and <a href="https://matplotlib.org/stable/gallery/images_contours_and_fields/image_antialiasing.html" rel="noreferrer">Image antialiasing</a>.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
a = np.random.random((16, 16))
plt.imshow(a, cmap='hot', interpolation='nearest')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/O8e3x.png" rel="noreferrer"><img src="https://i.sstatic.net/O8e3x.png" alt="A sample color map produced by the example code" /></a></p>
| 1,082
|
matplotlib
|
Calling pylab.savefig without display in ipython
|
https://stackoverflow.com/questions/15713279/calling-pylab-savefig-without-display-in-ipython
|
<p>I need to create a figure in a file without displaying it within the IPython notebook. I am not clear on the interaction between <code>IPython</code> and <code>matplotlib.pylab</code> in this regard. But when I call <code>pylab.savefig("test.png")</code>, the current figure gets displayed in addition to being saved in <code>test.png</code>. When automating the creation of a large set of plot files, this is often undesirable; the same applies when an intermediate file for external processing by another app is desired.</p>
<p>Not sure if this is a <code>matplotlib</code> or <code>IPython</code> notebook question.</p>
|
<p>This is a matplotlib question, and you can get around this by using a backend that doesn't display to the user, e.g. 'Agg':</p>
<pre><code>import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
plt.plot([1,2,3])
plt.savefig('/tmp/test.png')
</code></pre>
<p><strong>EDIT:</strong> If you don't want to lose the ability to display plots, turn off <a href="http://matplotlib.org/faq/usage_faq.html#what-is-interactive-mode">Interactive Mode</a>, and only call <code>plt.show()</code> when you are ready to display the plots:</p>
<pre><code>import matplotlib.pyplot as plt
# Turn interactive plotting off
plt.ioff()
# Create a new figure, plot into it, then close it so it never gets displayed
fig = plt.figure()
plt.plot([1,2,3])
plt.savefig('/tmp/test0.png')
plt.close(fig)
# Create a new figure, plot into it, then don't close it so it does get displayed
plt.figure()
plt.plot([1,3,2])
plt.savefig('/tmp/test1.png')
# Display all "open" (non-closed) figures
plt.show()
</code></pre>
| 1,083
|
matplotlib
|
Plt.show shows full graph but savefig is cropping the image
|
https://stackoverflow.com/questions/37427362/plt-show-shows-full-graph-but-savefig-is-cropping-the-image
|
<p>My code is successfully saving images to file, but it is cropping important details from the right-hand side. <a href="https://stackoverflow.com/questions/6774086/why-is-my-xlabel-cut-off-in-my-matplotlib-plot">Answers</a> exist for fixing this problem when it arises for <code>plt.show</code>, but it is the <code>savefig</code> command that is incorrectly producing the graph in this example. How can this be fixed? </p>
<p>The relevant sample of my code:</p>
<pre><code>import glob
import os
for file in glob.glob("*.oax"):
try:
spc_file = open(file, 'r').read()
newName = file[6:8] + '-' + file[4:6] + '-' + file[0:4] + ' ' + file[8:12] + ' UTC (Observed) - No Sea Breeze Day'
plt.title(newName, fontsize=12, loc='left')
plt.savefig('X:/' + newName + '.png')
plt.show()
except Exception:
pass
</code></pre>
<p>And the images (top is <code>plt.show</code> and bottom is file produced from <code>savefig</code>:</p>
<p><a href="https://i.sstatic.net/JlZj1.png" rel="noreferrer"><img src="https://i.sstatic.net/JlZj1.png" alt="Image when shown with plt.show"></a>
<a href="https://i.sstatic.net/eYmWY.png" rel="noreferrer"><img src="https://i.sstatic.net/eYmWY.png" alt="Image when saved to file"></a></p>
<hr>
|
<p>You may try </p>
<pre><code>plt.savefig('X:/' + newName + '.png', bbox_inches='tight')
</code></pre>
<p>Or you may define figure size like</p>
<pre><code>fig = plt.figure(figsize=(9, 11))
...
plt.savefig(filename, bbox_inches = 'tight')
</code></pre>
| 1,084
|
matplotlib
|
Get default line color cycle
|
https://stackoverflow.com/questions/42086276/get-default-line-color-cycle
|
<p>I noticed when you plot that the first line is blue, then orange, then green, and so on.</p>
<p>Is there some way to access this list of colors? I've seen a million posts on how to change the color cycle or access the iterator, but not on how to just get the list of colors that matplotlib cycles through by default.</p>
|
<p>In matplotlib versions >= 1.5, you can print the <code>rcParam</code> called <code>axes.prop_cycle</code>:</p>
<pre><code>print(plt.rcParams['axes.prop_cycle'].by_key()['color'])
# [u'#1f77b4', u'#ff7f0e', u'#2ca02c', u'#d62728', u'#9467bd', u'#8c564b', u'#e377c2', u'#7f7f7f', u'#bcbd22', u'#17becf']
</code></pre>
<p>Or equivalently, in <code>python2</code>:</p>
<pre><code>print plt.rcParams['axes.prop_cycle'].by_key()['color']
</code></pre>
<p>In versions < 1.5, this was called <code>color_cycle</code>:</p>
<pre><code>print plt.rcParams['axes.color_cycle']
# [u'b', u'g', u'r', u'c', u'm', u'y', u'k']
</code></pre>
<p>Note that the default color cycle changed in version 2.0.0 <a href="http://matplotlib.org/users/dflt_style_changes.html#colors-in-default-property-cycle" rel="noreferrer">http://matplotlib.org/users/dflt_style_changes.html#colors-in-default-property-cycle</a></p>
| 1,085
|
matplotlib
|
Aligning rotated xticklabels with their respective xticks
|
https://stackoverflow.com/questions/14852821/aligning-rotated-xticklabels-with-their-respective-xticks
|
<p>Check the x axis of the figure below. How can I move the labels a bit to the left so that they align with their respective ticks?</p>
<p>I'm rotating the labels using:</p>
<pre><code>ax.set_xticks(xlabels_positions)
ax.set_xticklabels(xlabels, rotation=45)
</code></pre>
<p>But, as you can see, the rotation is centered on the middle of the text labels. Which makes it look like they are shifted to the right.</p>
<p>I've tried using this instead:</p>
<pre><code>ax.set_xticklabels(xlabels, rotation=45, rotation_mode="anchor")
</code></pre>
<p>... but it doesn't do what I wished for. And <code>"anchor"</code> seems to be the only value allowed for the <code>rotation_mode</code> parameter.</p>
<p><img src="https://i.sstatic.net/LB7Vx.png" alt="Example"></p>
|
<p>You can set the horizontal alignment of ticklabels, see the example below. If you imagine a rectangular box around the rotated label, which side of the rectangle do you want to be aligned with the tickpoint?</p>
<p>Given your description, you want: ha='right'</p>
<pre><code>n=5
x = np.arange(n)
y = np.sin(np.linspace(-3,3,n))
xlabels = ['Ticklabel %i' % i for i in range(n)]
fig, axs = plt.subplots(1,3, figsize=(12,3))
ha = ['right', 'center', 'left']
for n, ax in enumerate(axs):
ax.plot(x,y, 'o-')
ax.set_title(ha[n])
ax.set_xticks(x)
ax.set_xticklabels(xlabels, rotation=40, ha=ha[n])
</code></pre>
<p><img src="https://i.sstatic.net/vvqth.png" alt="enter image description here"></p>
| 1,086
|
matplotlib
|
matplotlib error - no module named tkinter
|
https://stackoverflow.com/questions/36327134/matplotlib-error-no-module-named-tkinter
|
<p>I tried to use the matplotlib package via the PyCharm IDE on Windows 10.
When I run this code:</p>
<pre><code>from matplotlib import pyplot
</code></pre>
<p>I get the following error:</p>
<pre><code>ImportError: No module named 'tkinter'
</code></pre>
<p>I know that in python 2.x it was called Tkinter, but that is not the problem - I just installed a brand new python 3.5.1.</p>
<p>EDIT: in addition, I also tried to import 'tkinter' and 'Tkinter' - neither of these worked (both returned the error message I mentioned).</p>
|
<h3>For Linux</h3>
<p>Debian based distros:</p>
<pre><code>sudo apt-get install python3-tk
</code></pre>
<p>RPM based distros:</p>
<pre><code>sudo yum install python3-tkinter
</code></pre>
<h3>For windows:</h3>
<p>For Windows, I think the problem is that you didn't install the complete Python package, since Tkinter should be shipped with Python out of the box. See: <a href="http://www.tkdocs.com/tutorial/install.html" rel="noreferrer">http://www.tkdocs.com/tutorial/install.html</a>. Good Python distributions for Windows are available from Anaconda or ActiveState.</p>
<h3>Test the python module</h3>
<pre><code>python -c "import tkinter"
</code></pre>
<p>p.s. I suggest installing <a href="https://ipython.org/" rel="noreferrer">ipython</a>, which provides powerful shell and necessary packages as well.</p>
| 1,087
|
matplotlib
|
What is the difference between drawing plots using plot, axes or figure in matplotlib?
|
https://stackoverflow.com/questions/37970424/what-is-the-difference-between-drawing-plots-using-plot-axes-or-figure-in-matpl
|
<p>I'm kind of confused what is going at the backend when I draw plots in matplotlib, tbh, I'm not clear with the hierarchy of plot, axes and figure. I read the documentation and it was helpful but I'm still confused...</p>
<p>The below code draws the same plot in three different ways - </p>
<pre><code>#creating the arrays for testing
x = np.arange(1, 100)
y = np.sqrt(x)
#1st way
plt.plot(x, y)
#2nd way
ax = plt.subplot()
ax.plot(x, y)
#3rd way
figure = plt.figure()
new_plot = figure.add_subplot(111)
new_plot.plot(x, y)
</code></pre>
<p>Now my question is -</p>
<ol>
<li><p>What is the difference between all the three, I mean what is going under the hood when any of the 3 methods are called? </p></li>
<li><p>Which method should be used when and what are the pros and cons of using any on those? </p></li>
</ol>
|
<p><strong>Method 1</strong></p>
<pre><code>plt.plot(x, y)
</code></pre>
<p>This lets you plot just one figure with (x,y) coordinates. If you just want a single plot, this is the simplest way.</p>
<p><strong>Method 2</strong></p>
<pre><code>ax = plt.subplot()
ax.plot(x, y)
</code></pre>
<p>This lets you plot one or several figure(s) in the same window. As you write it, you will plot just one figure, but you can make something like this:</p>
<pre><code>fig1, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
</code></pre>
<p>This plots 4 subplots, named ax1, ax2, ax3 and ax4, all in the same window; the window is simply divided into 4 parts in this example.</p>
<p><strong>Method 3</strong></p>
<pre><code>fig = plt.figure()
new_plot = fig.add_subplot(111)
new_plot.plot(x, y)
</code></pre>
<p>I haven't used this form myself, but it is the fully explicit object-oriented approach; you can find it in the documentation.</p>
<p><strong>Example:</strong> </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Method 1 #
x = np.random.rand(10)
y = np.random.rand(10)
figure1 = plt.plot(x,y)
# Method 2 #
x1 = np.random.rand(10)
x2 = np.random.rand(10)
x3 = np.random.rand(10)
x4 = np.random.rand(10)
y1 = np.random.rand(10)
y2 = np.random.rand(10)
y3 = np.random.rand(10)
y4 = np.random.rand(10)
figure2, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
ax1.plot(x1,y1)
ax2.plot(x2,y2)
ax3.plot(x3,y3)
ax4.plot(x4,y4)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/zCAEL.png" rel="noreferrer"><img src="https://i.sstatic.net/zCAEL.png" alt="enter image description here"></a>
<a href="https://i.sstatic.net/LNRsa.png" rel="noreferrer"><img src="https://i.sstatic.net/LNRsa.png" alt="enter image description here"></a></p>
<p><strong>Other example:</strong></p>
<p><a href="https://i.sstatic.net/ArvNL.jpg" rel="noreferrer"><img src="https://i.sstatic.net/ArvNL.jpg" alt="enter image description here"></a></p>
| 1,088
|
matplotlib
|
vertical & horizontal lines in matplotlib
|
https://stackoverflow.com/questions/16930328/vertical-horizontal-lines-in-matplotlib
|
<p>I do not quite understand why I am unable to create horizontal and vertical lines at specified limits. I would like to bound the data by this box. However, the sides do not seem to comply with my instructions. Why is this? </p>
<pre><code># CREATING A BOUNDING BOX
# BOTTOM HORIZONTAL
plt.axhline(y=.4, xmin=0.25, xmax=0.402, linewidth=2, color = 'k')
# RIGHT VERTICAL
plt.axvline(x=0.402, ymin=0.4, ymax = 0.615, linewidth=2, color='k')
# LEFT VERTICAL
plt.axvline(x=0.1, ymin=0.58, ymax = 0.79, linewidth=2, color='k')
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/k47lM.png" alt="enter image description here"></p>
|
<p>The pyplot functions you are calling, <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.axhline"><code>axhline()</code></a> and <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.axvline"><code>axvline()</code></a>, draw lines that span a portion of the axis range <em>regardless of data coordinates</em>: their <code>xmin</code>/<code>xmax</code> (and <code>ymin</code>/<code>ymax</code>) parameters are fractions of the axis length, where 0.0 is the axis minimum and 1.0 is the axis maximum.</p>
<p>Instead, use <code>plt.plot((x1, x2), (y1, y2), 'k-')</code> to draw a line from the point (x1, y1) to the point (x2, y2) in color k. See <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot"><code>pyplot.plot</code></a>.</p>
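<p>A minimal sketch of a bounding box drawn with <code>plot()</code> entirely in data coordinates (the corner values here are illustrative, not the question's):</p>

```python
import matplotlib.pyplot as plt

# Each call draws one black ('k-') line from (x1, y1) to (x2, y2),
# all in data coordinates
plt.plot((1, 3), (1, 1), 'k-')  # bottom horizontal
plt.plot((1, 3), (2, 2), 'k-')  # top horizontal
plt.plot((1, 1), (1, 2), 'k-')  # left vertical
plt.plot((3, 3), (1, 2), 'k-')  # right vertical
plt.show()
```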
| 1,089
|
matplotlib
|
Remove or adapt border of frame of legend using matplotlib
|
https://stackoverflow.com/questions/25540259/remove-or-adapt-border-of-frame-of-legend-using-matplotlib
|
<p>When plotting a plot using matplotlib:</p>
<ol>
<li>How to remove the box of the legend? </li>
<li>How to change the color of the border of the legend box?</li>
<li>How to remove only the border of the box of the legend?</li>
</ol>
|
<p>When plotting a plot using matplotlib:</p>
<p>How to remove the box of the legend?</p>
<pre><code>plt.legend(frameon=False)
</code></pre>
<p>How to change the color of the border of the legend box?</p>
<pre><code>leg = plt.legend()
leg.get_frame().set_edgecolor('b')
</code></pre>
<p>How to remove only the border of the box of the legend?</p>
<pre><code>leg = plt.legend()
leg.get_frame().set_linewidth(0.0)
</code></pre>
<p>For the <code>matplotlib</code> object oriented approach:</p>
<pre><code>axes.legend(frameon=False)
leg = axes.legend()
leg.get_frame().set_edgecolor('b')
leg.get_frame().set_linewidth(0.0)
</code></pre>
| 1,090
|
matplotlib
|
How to set xlim and ylim for a subplot
|
https://stackoverflow.com/questions/15858192/how-to-set-xlim-and-ylim-for-a-subplot
|
<p>I would like to limit the X and Y axis in matplotlib for a specific subplot.
The subplot figure itself doesn't have any axis property. I want for example to change only the limits for the second plot:</p>
<pre><code>import matplotlib.pyplot as plt
fig=plt.subplot(131)
plt.scatter([1,2],[3,4])
fig=plt.subplot(132)
plt.scatter([10,20],[30,40])
fig=plt.subplot(133)
plt.scatter([15,23],[35,43])
plt.show()
</code></pre>
|
<p>You should use the OO interface to matplotlib, rather than the state machine interface. Almost all of the <code>plt.*</code> functions are thin wrappers that basically do <code>gca().*</code>.</p>
<p><a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.subplot" rel="noreferrer"><code>plt.subplot</code></a> returns an <a href="http://matplotlib.org/api/axes_api.html" rel="noreferrer"><code>axes</code></a> object. Once you have a reference to the axes object you can plot directly to it, change its limits, etc.</p>
<pre><code>import matplotlib.pyplot as plt
ax1 = plt.subplot(131)
ax1.scatter([1, 2], [3, 4])
ax1.set_xlim([0, 5])
ax1.set_ylim([0, 5])
ax2 = plt.subplot(132)
ax2.scatter([1, 2],[3, 4])
ax2.set_xlim([0, 5])
ax2.set_ylim([0, 5])
</code></pre>
<p>and so on for as many axes as you want.</p>
<p>or better, wrap it all up in a loop:</p>
<pre><code>import matplotlib.pyplot as plt
DATA_x = ([1, 2],
[2, 3],
[3, 4])
DATA_y = DATA_x[::-1]
XLIMS = [[0, 10]] * 3
YLIMS = [[0, 10]] * 3
for j, (x, y, xlim, ylim) in enumerate(zip(DATA_x, DATA_y, XLIMS, YLIMS)):
ax = plt.subplot(1, 3, j + 1)
ax.scatter(x, y)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
</code></pre>
| 1,091
|
matplotlib
|
How to plot normal distribution
|
https://stackoverflow.com/questions/10138085/how-to-plot-normal-distribution
|
<p>Given a mean and a variance is there a simple function call which will plot a normal distribution?</p>
|
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import math
mu = 0
variance = 1
sigma = math.sqrt(variance)
x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)
plt.plot(x, stats.norm.pdf(x, mu, sigma))
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/IvOTE.png" alt="gass distro, mean is 0 variance 1"></p>
| 1,092
|
matplotlib
|
How to update a plot in matplotlib
|
https://stackoverflow.com/questions/4098131/how-to-update-a-plot-in-matplotlib
|
<p>I'm having issues with redrawing the figure here. I allow the user to specify the units in the time scale (x-axis) and then I recalculate and call this function <code>plots()</code>. I want the plot to simply update, not append another plot to the figure.</p>
<pre><code>def plots():
global vlgaBuffSorted
cntr()
result = collections.defaultdict(list)
for d in vlgaBuffSorted:
result[d['event']].append(d)
result_list = result.values()
f = Figure()
graph1 = f.add_subplot(211)
graph2 = f.add_subplot(212,sharex=graph1)
for item in result_list:
tL = []
vgsL = []
vdsL = []
isubL = []
for dict in item:
tL.append(dict['time'])
vgsL.append(dict['vgs'])
vdsL.append(dict['vds'])
isubL.append(dict['isub'])
graph1.plot(tL,vdsL,'bo',label='a')
graph1.plot(tL,vgsL,'rp',label='b')
graph2.plot(tL,isubL,'b-',label='c')
plotCanvas = FigureCanvasTkAgg(f, pltFrame)
toolbar = NavigationToolbar2TkAgg(plotCanvas, pltFrame)
toolbar.pack(side=BOTTOM)
plotCanvas.get_tk_widget().pack(side=TOP)
</code></pre>
|
<p>You essentially have two options:</p>
<ol>
<li><p>Do exactly what you're currently doing, but call <code>graph1.clear()</code> and <code>graph2.clear()</code> before replotting the data. This is the slowest, but simplest and most robust, option.</p></li>
<li><p>Instead of replotting, you can just update the data of the plot objects. You'll need to make some changes in your code, but this should be much, much faster than replotting things every time. However, the shape of the data that you're plotting can't change, and if the range of your data is changing, you'll need to manually reset the x and y axis limits.</p></li>
</ol>
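<p>A minimal sketch of the first option — clear the axes and replot on each update:</p>

```python
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 100)

for phase in np.linspace(0, np.pi, 5):
    ax.clear()                     # remove the previous artists
    ax.plot(x, np.sin(x + phase))  # replot everything from scratch
    fig.canvas.draw()
```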
<p>To give an example of the second option:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 6*np.pi, 100)
y = np.sin(x)
# You probably won't need this if you're embedding things in a tkinter plot...
plt.ion()
fig = plt.figure()
ax = fig.add_subplot(111)
line1, = ax.plot(x, y, 'r-') # Returns a tuple of line objects, thus the comma
for phase in np.linspace(0, 10*np.pi, 500):
line1.set_ydata(np.sin(x + phase))
fig.canvas.draw()
fig.canvas.flush_events()
</code></pre>
| 1,093
|
matplotlib
|
How to set common axes labels for subplots
|
https://stackoverflow.com/questions/6963035/how-to-set-common-axes-labels-for-subplots
|
<p>I have the following plot:</p>
<pre><code>import matplotlib.pyplot as plt
fig2 = plt.figure()
ax3 = fig2.add_subplot(2,1,1)
ax4 = fig2.add_subplot(2,1,2)
ax4.loglog(x1, y1)
ax3.loglog(x2, y2)
ax3.set_ylabel('hello')
</code></pre>
<p>I want to create axes labels and titles that span on both subplots. For example, since both plots have identical axes, I only need one set of <code>xlabel</code> and <code>ylabel</code>. I do want different titles for each subplot though.</p>
<p>How can I achieve this ?</p>
|
<p>You can create a big subplot that covers the two subplots and then set the common labels.</p>
<pre class="lang-python prettyprint-override"><code>import random
import matplotlib.pyplot as plt
x = range(1, 101)
y1 = [random.randint(1, 100) for _ in range(len(x))]
y2 = [random.randint(1, 100) for _ in range(len(x))]
fig = plt.figure()
ax = fig.add_subplot(111) # The big subplot
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
# Turn off axis lines and ticks of the big subplot
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_color('none')
ax.spines['left'].set_color('none')
ax.spines['right'].set_color('none')
ax.tick_params(labelcolor='w', top=False, bottom=False, left=False, right=False)
ax1.loglog(x, y1)
ax2.loglog(x, y2)
# Set common labels
ax.set_xlabel('common xlabel')
ax.set_ylabel('common ylabel')
ax1.set_title('ax1 title')
ax2.set_title('ax2 title')
plt.savefig('common_labels.png', dpi=300)
</code></pre>
<p><img src="https://i.sstatic.net/EHhFk.png" alt="common_labels.png" /></p>
<p>Another way is using fig.text() to set the locations of the common labels directly.</p>
<pre class="lang-python prettyprint-override"><code>import random
import matplotlib.pyplot as plt
x = range(1, 101)
y1 = [random.randint(1, 100) for _ in range(len(x))]
y2 = [random.randint(1, 100) for _ in range(len(x))]
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
ax1.loglog(x, y1)
ax2.loglog(x, y2)
# Set common labels
fig.text(0.5, 0.04, 'common xlabel', ha='center', va='center')
fig.text(0.06, 0.5, 'common ylabel', ha='center', va='center', rotation='vertical')
ax1.set_title('ax1 title')
ax2.set_title('ax2 title')
plt.savefig('common_labels_text.png', dpi=300)
</code></pre>
<p><img src="https://i.sstatic.net/J6pVr.png" alt="common_labels_text.png" /></p>
| 1,094
|
matplotlib
|
How to remove gaps between subplots
|
https://stackoverflow.com/questions/20057260/how-to-remove-gaps-between-subplots
|
<p>The code below produces gaps between the subplots. How do I remove the gaps between the subplots and make the image a tight grid?</p>
<p><a href="https://i.sstatic.net/uBn4j.png" rel="noreferrer"><img src="https://i.sstatic.net/uBn4j.png" alt="enter image description here"></a></p>
<pre><code>import matplotlib.pyplot as plt
for i in range(16):
i = i + 1
ax1 = plt.subplot(4, 4, i)
plt.axis('on')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_aspect('equal')
plt.subplots_adjust(wspace=None, hspace=None)
plt.show()
</code></pre>
|
<p>You can use <a href="http://matplotlib.org/api/gridspec_api.html" rel="noreferrer">gridspec</a> to control the spacing between axes. There's more <a href="http://matplotlib.org/users/gridspec.html" rel="noreferrer">information</a> here. </p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
plt.figure(figsize = (4,4))
gs1 = gridspec.GridSpec(4, 4)
gs1.update(wspace=0.025, hspace=0.05) # set the spacing between axes.
for i in range(16):
# i = i + 1 # grid spec indexes from 0
ax1 = plt.subplot(gs1[i])
plt.axis('on')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_aspect('equal')
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/DKpb3.png" alt="axes very close together"></p>
| 1,095
|
matplotlib
|
How to convert a NumPy array to PIL image applying matplotlib colormap
|
https://stackoverflow.com/questions/10965417/how-to-convert-a-numpy-array-to-pil-image-applying-matplotlib-colormap
|
<p>I want to take a NumPy 2D array which represents a grayscale image, and convert it to an RGB PIL image while applying some of the matplotlib colormaps.</p>
<p>I can get a reasonable PNG output by using the <code>pyplot.figure.figimage</code> command:</p>
<pre><code>dpi = 100.0
w, h = myarray.shape[1]/dpi, myarray.shape[0]/dpi
fig = plt.figure(figsize=(w,h), dpi=dpi)
fig.figimage(sub, cmap=cm.gist_earth)
plt.savefig('out.png')
</code></pre>
<p>Although I could adapt this to get what I want (probably using StringIO do get the PIL image), I wonder if there is not a simpler way to do that, since it seems to be a very natural problem of image visualization. Let's say, something like this:</p>
<pre><code>colored_PIL_image = magic_function(array, cmap)
</code></pre>
|
<p>Quite a busy one-liner, but here it is:</p>
<ol>
<li>First ensure your NumPy array, <code>myarray</code>, is normalised with the max value at <code>1.0</code>.</li>
<li>Apply the colormap directly to <code>myarray</code>.</li>
<li>Rescale to the <code>0-255</code> range.</li>
<li>Convert to integers, using <code>np.uint8()</code>.</li>
<li>Use <code>Image.fromarray()</code>.</li>
</ol>
<p>And you're done:</p>
<pre><code>from PIL import Image
from matplotlib import cm
im = Image.fromarray(np.uint8(cm.gist_earth(myarray)*255))
</code></pre>
<p>with <code>plt.savefig()</code>:</p>
<p><img src="https://i.sstatic.net/PrTEI.png" alt="Enter image description here"></p>
<p>with <code>im.save()</code>:</p>
<p><img src="https://i.sstatic.net/NRa20.png" alt="Enter image description here"></p>
| 1,096
|
matplotlib
|
Set Matplotlib colorbar size to match graph
|
https://stackoverflow.com/questions/18195758/set-matplotlib-colorbar-size-to-match-graph
|
<p>I cannot get the <code>colorbar</code> on <code>imshow</code> graphs like this one to be the same height as the graph, short of using Photoshop after the fact. How do I get the heights to match?
<img src="https://i.sstatic.net/EXLiY.png" alt="Example of the colorbar size mismatch" /></p>
|
<p>You can do this easily with a matplotlib <a href="http://matplotlib.org/mpl_toolkits/axes_grid/users/overview.html#axesdivider" rel="noreferrer">AxisDivider</a>.</p>
<p>The example from the linked page also works without using subplots:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
plt.figure()
ax = plt.gca()
im = ax.imshow(np.arange(100).reshape((10,10)))
# create an axes on the right side of ax. The width of cax will be 5%
# of ax and the padding between cax and ax will be fixed at 0.05 inch.
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im, cax=cax)
</code></pre>
<p><a href="https://i.sstatic.net/wgFtO.png" rel="noreferrer"><img src="https://i.sstatic.net/wgFtO.png" alt="enter image description here" /></a></p>
| 1,097
|
matplotlib
|
How do I tell matplotlib that I am done with a plot?
|
https://stackoverflow.com/questions/741877/how-do-i-tell-matplotlib-that-i-am-done-with-a-plot
|
<p>The following code plots to two <a href="http://en.wikipedia.org/wiki/PostScript" rel="noreferrer">PostScript</a> (.ps) files, but the second one contains both lines.</p>
<pre><code>import matplotlib
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
plt.subplot(111)
x = [1,10]
y = [30, 1000]
plt.loglog(x, y, basex=10, basey=10, ls="-")
plt.savefig("first.ps")
plt.subplot(111)
x = [10,100]
y = [10, 10000]
plt.loglog(x, y, basex=10, basey=10, ls="-")
plt.savefig("second.ps")
</code></pre>
<p>How can I tell matplotlib to start afresh for the second plot?</p>
|
<p>You can use <code>figure</code> to create a new figure for the second plot, or call <code>close</code> after saving the first plot.</p>
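<p>A minimal sketch of the <code>close</code> approach (hedged: PNG output here instead of the original .ps files, with the question's data):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted plotting
import matplotlib.pyplot as plt

# First plot
plt.loglog([1, 10], [30, 1000], ls="-")
plt.savefig("first.png")
plt.close()  # start afresh: discard the current figure

# Second plot now contains only its own line
plt.loglog([10, 100], [10, 10000], ls="-")
plt.savefig("second.png")
plt.close()
```
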
| 1,098
|
matplotlib
|
How to set 'auto' for upper limit, but keep a fixed lower limit
|
https://stackoverflow.com/questions/11744990/how-to-set-auto-for-upper-limit-but-keep-a-fixed-lower-limit
|
<p>I want to set the upper limit of the y-axis to 'auto', but I want to keep the lower limit of the y-axis to always be zero. I tried 'auto' and 'autorange', but those don't seem to work.</p>
<p>Here is my code:</p>
<pre><code>import matplotlib.pyplot as plt
def plot(results_plt,title,filename):
############################
# Plot results
# mirror result table such that each parameter forms an own data array
plt.cla()
#print results_plt
XY_results = []
XY_results = zip( *results_plt)
plt.plot(XY_results[0], XY_results[2], marker = ".")
plt.title('%s' % (title) )
plt.xlabel('Input Voltage [V]')
plt.ylabel('Input Current [mA]')
plt.grid(True)
plt.xlim(3.0, 4.2) #***I want to keep these values fixed***
plt.ylim([0, 80]) #****CHANGE**** I want to change '80' to auto, but still keep 0 as the lower limit
plt.savefig(path+filename+'.png')
</code></pre>
|
<p>You can pass just <code>left</code> or <code>right</code> to <code>set_xlim</code>:</p>
<pre><code>plt.gca().set_xlim(left=0)
</code></pre>
<p>For the y axis, use <code>bottom</code> or <code>top</code>:</p>
<pre><code>plt.gca().set_ylim(bottom=0)
</code></pre>
<p>Important note: "you must use the functions AFTER you have plotted the data. If you don't do this, it will use the default 0 for left/bottom and 1 for top/right." - <a href="https://stackoverflow.com/a/66805331/229792">Luc's answer.</a></p>
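<p>A short sketch (made-up data) showing the effect: after plotting, fixing only <code>bottom</code> leaves the upper limit at its autoscaled value:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([3.0, 3.6, 4.2], [5, 42, 63], marker=".")
ax.set_xlim(3.0, 4.2)   # fixed x-range, as in the question
ax.set_ylim(bottom=0)   # lower limit pinned to 0, upper limit stays automatic
print(ax.get_ylim()[0])  # 0.0
```
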
| 1,099
|
fine-tuning
|
Code-first vs Model/Database-first
|
https://stackoverflow.com/questions/5446316/code-first-vs-model-database-first
|
<p><strong>What are the pros & cons of using Entity Framework 4.1 Code-first over Model/Database-first with EDMX diagram?</strong></p>
<p>I'm trying to fully understand all the approaches to building data access layer using EF 4.1. I'm using Repository pattern and <code>IoC</code>.</p>
<p>I know I can use code-first approach: define my entities and context by hand and use <code>ModelBuilder</code> to fine-tune the schema.</p>
<p>I can also create an <code>EDMX</code> diagram and choose a code generation step that uses T4 templates to generate the same <code>POCO</code> classes. </p>
<p>In both cases I end up with <code>POCO</code> object which are <code>ORM</code> agnostic and context that derives from <code>DbContext</code>.</p>
<p>Database-first seems to be most appealing since I can design database in Enterprise Manager, quickly synch the model and fine-tune it using the designer. </p>
<p>So what is the difference between those two approaches? Is it just about the preference VS2010 vs Enterprise Manager?</p>
|
<p>I think the differences are:</p>
<p><strong>Code first</strong></p>
<ul>
<li>Very popular because hardcore programmers don't like any kind of designers and defining mapping in EDMX xml is too complex.</li>
<li>Full control over the code (no autogenerated code which is hard to modify).</li>
<li>General expectation is that you do not bother with DB. DB is just a storage with no logic. EF will handle creation and you don't want to know how it does the job.</li>
<li>Manual changes to database will be most probably lost because your code defines the database.</li>
</ul>
<p><strong>Database first</strong></p>
<ul>
<li>Very popular if you have DB designed by DBAs, developed separately or if you have existing DB. </li>
<li>You will let EF create entities for you and after modification of mapping you will generate POCO entities.</li>
<li>If you want additional features in POCO entities you must either T4 modify template or use partial classes.</li>
<li>Manual changes to the database are possible because the database defines your domain model. You can always update model from database (this feature works quite well).</li>
<li>I often use this together with VS Database projects (Premium and Ultimate editions only).</li>
</ul>
<p><strong>Model first</strong></p>
<ul>
<li>IMHO popular if you are designer fan (= you don't like writing code or SQL).</li>
<li>You will "draw" your model and let workflow generate your database script and T4 template generate your POCO entities. You will lose part of the control on both your entities and database but for small easy projects you will be very productive.</li>
<li>If you want additional features in POCO entities you must either T4 modify template or use partial classes.</li>
<li>Manual changes to database will be most probably lost because your model defines the database. This works better if you have Database generation power pack installed. It will allow you updating database schema (instead of recreating) or updating database projects in VS.</li>
</ul>
<p>I expect that in the case of EF 4.1 there are several other differences between Code First and Model/Database First. The Fluent API used in Code First doesn't offer all the features of EDMX. I expect that features like stored procedure mapping, query views, defining views, etc. work when using Model/Database First and <code>DbContext</code> (I haven't tried it yet) but they don't in Code First.</p>
| 1,100
|
fine-tuning
|
LMM Fine Tuning - Supervised Fine Tuning Trainer (SFTTrainer) vs transformers Trainer
|
https://stackoverflow.com/questions/76461859/lmm-fine-tuning-supervised-fine-tuning-trainer-sfttrainer-vs-transformers-tr
|
<p>When should one opt for the Supervised Fine Tuning Trainer (SFTTrainer) instead of the regular Transformers Trainer when it comes to instruction fine-tuning for Language Models (LLMs)? From what I gather, the regular Transformers Trainer typically refers to unsupervised fine-tuning, often utilized for tasks such as Input-Output schema formatting after conducting supervised fine-tuning. There seem to be various examples of fine-tuning tasks with similar characteristics, but with some employing the SFTTrainer and others using the regular Trainer. Which factors should be considered in choosing between the two approaches?</p>
<p>I am looking to fine-tune an LLM for generating JSON-to-JSON transformations (matching texts in JSON) using the huggingface and trl libraries.</p>
|
<p>I would suggest going with the SFT trainer: it gives faster training times and comparable outputs, with more efficient memory usage and a simpler interface than the regular Trainer.</p>
| 1,101
|
fine-tuning
|
Fine Tuning Blenderbot
|
https://stackoverflow.com/questions/72774975/fine-tuning-blenderbot
|
<p>I have been trying to fine-tune a conversational model from HuggingFace: Blenderbot. I have tried the conventional method given on the official Hugging Face website, which asks us to do it using the trainer.train() method. I also tried it using the .compile() method. I have tried fine-tuning using PyTorch as well as TensorFlow on my dataset. Both methods seem to fail and give an error saying that there is no method called compile or train for the Blenderbot model.
I have also looked everywhere online to check how Blenderbot could be fine-tuned on my custom data and nowhere does it mention properly that runs without throwing an error. I have gone through Youtube tutorials, blogs, and StackOverflow posts but none answer this question. Hoping someone would respond here and help me out. I am open to using other HuggingFace Conversational Models as well for fine-tuning.</p>
<p>Thank you! :)</p>
|
<p>Here is a link I am using to fine-tune the blenderbot model.</p>
<p>Fine-tuning methods: <a href="https://huggingface.co/docs/transformers/training" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/training</a></p>
<p>Blenderbot: <a href="https://huggingface.co/docs/transformers/model_doc/blenderbot" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/model_doc/blenderbot</a></p>
<pre><code>from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname)
#FOR TRAINING:
trainer = Trainer(
model=model,
args=training_args,
train_dataset=small_train_dataset,
eval_dataset=small_eval_dataset,
compute_metrics=compute_metrics,
)
trainer.train()
#OR
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=tf.metrics.SparseCategoricalAccuracy(),
)
model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3)
</code></pre>
<p>None of these work! :(</p>
| 1,102
|
fine-tuning
|
Fine tuning vs Retraining
|
https://stackoverflow.com/questions/45134834/fine-tuning-vs-retraining
|
<p>So I am learning how to use Tensorflow to fine tune the Inception-v3 model for a custom dataset.</p>
<p>I found two tutorials related to this. One was about "<a href="https://www.tensorflow.org/tutorials/image_retraining" rel="noreferrer">How to Retrain Inception's Final Layer for New Categories</a>" and the other was "
<a href="https://github.com/tensorflow/models/blob/master/inception/README.md#how-to-fine-tune-a-pre-trained-model-on-a-new-task" rel="noreferrer">Train your own image classifier with Inception in TensorFlow with Fine tuning</a>
".</p>
<p>I did the first retraining tutorial on a virtual machine and it took only 2-3 hours to complete. And for the same flowers dataset, I am doing the second fine tuning tutorial on a GPU and it took around one whole day to perform the training.</p>
<p>What is the difference between retraining and fine tuning?</p>
<p>I was under the impression that both involved using a pre-trained Inception v3 model, removing the old top layer and training a new one on the flower photos.
But my understanding may be wrong. </p>
|
<p>Usually in the ML literature we call fine tuning the process of:</p>
<ol>
<li>Keep a trained model. Model = feature extractor layers + classification layers</li>
<li>Remove the classification layers</li>
<li>Attach new classification layer</li>
<li>Retrain the whole model end-to-end.</li>
</ol>
<p>This allows you to start from a good configuration of the feature extractor layers' weights and thus reach an optimal value in a short time.</p>
<p>You can think of fine tuning as a way to start a new training run with a very good initialization method for your weights (although you still have to initialize your new classification layers).</p>
<p>When, instead, we talk about retraining a model, we usually refer to the process of:</p>
<ol>
<li>Keep a model architecture</li>
<li>Change the last classification layer in order to produce the amount of classes you want to classify</li>
<li>Train the model end to end.</li>
</ol>
<p>In this case you don't start from a good starting point as above, but instead you start from a random point in the solution space.</p>
<p>This means that you have to train the model for a longer time because the initial solution is not as good as the initial solution that a pretrained model gives you.</p>
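<p>The difference in starting points can be illustrated with a toy sketch (weights as plain NumPy arrays; all names here are made up for illustration):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# A "trained model": feature extractor weights + classification weights
pretrained = {"features": rng.normal(size=(4, 4)),
              "classifier": rng.normal(size=(4, 10))}

def fine_tune_init(model, n_new_classes):
    """Fine tuning: keep the trained feature weights, re-initialize only the classifier."""
    return {"features": model["features"].copy(),
            "classifier": rng.normal(size=(model["features"].shape[1], n_new_classes))}

def retrain_init(model, n_new_classes):
    """Retraining: keep only the architecture; every weight starts from random."""
    return {"features": rng.normal(size=model["features"].shape),
            "classifier": rng.normal(size=(model["features"].shape[1], n_new_classes))}

ft = fine_tune_init(pretrained, 5)
print(np.array_equal(ft["features"], pretrained["features"]))  # True
```
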
| 1,103
|
fine-tuning
|
Fine-tuning Glove Embeddings
|
https://stackoverflow.com/questions/50909726/fine-tuning-glove-embeddings
|
<p>Has anyone tried to fine-tune <strong>Glove embeddings</strong> on a domain-specific corpus?<br>
<strong>Fine-tuning word2vec</strong> embeddings has proven very efficient for me in a various NLP tasks, but I am wondering whether generating a cooccurrence matrix on my domain-specific corpus, and training glove embeddings (initialized with pre-trained embeddings) on that corpus would generate similar improvements.</p>
|
<p>I myself am trying to do the exact same thing. You can try <a href="https://github.com/ashutoshsingh0223/mittens" rel="nofollow noreferrer">mittens</a>.</p>
<p>They have successfully built a framework for it. Christopher D. Manning (co-author of GloVe) is associated with it.</p>
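<p>For context, mittens takes a co-occurrence matrix plus the pre-trained vectors as input. A minimal sketch (hypothetical helper, plain NumPy) of building a symmetric windowed co-occurrence matrix:</p>

```python
from collections import Counter
import numpy as np

def cooccurrence_matrix(tokens, vocab, window=2):
    """Symmetric co-occurrence counts within a fixed window."""
    index = {w: i for i, w in enumerate(vocab)}
    counts = Counter()
    for i, w in enumerate(tokens):
        if w not in index:
            continue
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i and tokens[j] in index:
                counts[(index[w], index[tokens[j]])] += 1
    mat = np.zeros((len(vocab), len(vocab)))
    for (a, b), c in counts.items():
        mat[a, b] = c
    return mat

tokens = "the cat sat on the mat".split()
vocab = ["the", "cat", "sat", "on", "mat"]
M = cooccurrence_matrix(tokens, vocab)
print(M.shape)  # (5, 5)
```
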
| 1,104
|
fine-tuning
|
BERT fine tuning
|
https://stackoverflow.com/questions/60418179/bert-fine-tuning
|
<p>I'm trying to create my model for question answering based on BERT and can't understand what the meaning of fine tuning is. Do I understand it right that it is like adaptation to a specific domain? And if I want to use it with Wikipedia corpora, do I just need to integrate the unchanged pre-trained model into my network?</p>
|
<p>Fine tuning is adapting (refining) the pre-trained BERT model to two things:</p>
<ol>
<li>Domain</li>
<li>Task (e.g. classification, entity extraction, etc.).</li>
</ol>
<p>You can use pre-trained models as-is at first and if the performance is sufficient, fine tuning for your use case may not be needed.</p>
| 1,105
|
fine-tuning
|
Fine Tuning of GoogLeNet Model
|
https://stackoverflow.com/questions/36841158/fine-tuning-of-googlenet-model
|
<p>I trained a GoogLeNet model from scratch, but it didn't give me promising results.<br>
As an alternative, I would like to fine-tune the GoogLeNet model on my dataset. Does anyone know what steps I should follow? </p>
|
<p>Assuming you are trying to do image classification. These should be the steps for finetuning a model:</p>
<h3>1. Classification layer</h3>
<p>The original <a href="https://github.com/BVLC/caffe/blob/master/models/bvlc_googlenet/train_val.prototxt" rel="noreferrer">classification layer <code>"loss3/classifier"</code></a> outputs predictions for 1000 classes (its <code>num_output</code> is set to 1000). You'll need to replace it with a new layer with an appropriate <code>num_output</code>. Replacing the classification layer:</p>
<ol>
<li>Change layer's name (so that when you read the original weights from caffemodel file there will be no conflict with the weights of this layer).</li>
<li>Change <code>num_output</code> to the right number of output classes you are trying to predict.</li>
<li>Note that you need to change ALL classification layers. Usually there is only one, but GoogLeNet happens to have three: <a href="https://github.com/BVLC/caffe/blob/master/models/bvlc_googlenet/train_val.prototxt#L904" rel="noreferrer"><code>"loss1/classifier"</code></a>, <a href="https://github.com/BVLC/caffe/blob/master/models/bvlc_googlenet/train_val.prototxt#L1667" rel="noreferrer"><code>"loss2/classifier"</code></a> and <a href="https://github.com/BVLC/caffe/blob/master/models/bvlc_googlenet/train_val.prototxt" rel="noreferrer"><code>"loss3/classifier"</code></a>.</li>
</ol>
<h3>2. Data</h3>
<p>You need to make a new training dataset with the new labels you want to fine tune to. See, for example, <a href="https://stackoverflow.com/a/31431716/1714410">this post</a> on how to make an lmdb dataset.</p>
<h3>3. How extensive a finetuning you want?</h3>
<p>When finetuning a model, you can train ALL the model's weights or choose to fix some weights (usually filters of the lower/deeper layers) and train only the weights of the top-most layers. This choice is up to you and it usually depends on the amount of training data available (the more examples you have, the more weights you can afford to finetune).<br />
Each layer (that holds trainable parameters) has <code>param { lr_mult: XX }</code>. This coefficient determines how susceptible these weights are to SGD updates. Setting <code>param { lr_mult: 0 }</code> means you FIX the weights of this layer and they will not be changed during the training process.<br />
Edit your <code>train_val.prototxt</code> accordingly.</p>
<h3>4. Run caffe</h3>
<p>Run <code>caffe train</code> but supply it with caffemodel weights as an initial weights:</p>
<pre><code>~$ $CAFFE_ROOT/build/tools/caffe train -solver /path/to/solver.ptototxt -weights /path/to/orig_googlenet_weights.caffemodel
</code></pre>
| 1,106
|
fine-tuning
|
Fine tuning vbscript
|
https://stackoverflow.com/questions/45689296/fine-tuning-vbscript
|
<p>I am writing a VBScript to pass back a date/time value (in particular, before 2:00 AM it should return the previous day's value). Is there any way to streamline this, instead of writing the values to another batch file and using batch 1 to call the VBScript and then batch 2 (created by the VBScript)? Thanks a lot.</p>
<pre><code>dim dateMonth, dateDay, dateYear, dateYY, dateMMM, MM, pDateDay
'Check Time
if hour(now) < 2 then 'Before 2AM, count as last working day
dateMonth = Month(dateadd("d",-1,now))
dateDay = Day(dateadd("d",-1,now))
dateYear = Year(dateadd("d",-1,now))
dateYY = right(year(dateadd("d",-1,now)),2)
TimeHH = Hour(now)
TimeMM = Minute(now)
else
dateMonth = Month(now)
dateDay = Day(now)
dateYear = Year(now)
dateYY = right(year(now),2)
TimeHH = Hour(now)
TimeMM = Minute(now)
end if
MM = Array("","Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec")
dateMMM = mm(dateMonth)
if dateMonth < 10 then
dateMonth = "0" & dateMonth
end if
If dateDay < 10 then
dateDay = "0" & dateDay
End if
If TimeHH < 10 then
TimeHH = "0" & TimeHH
End if
If TimeMM < 10 then
TimeMM = "0" & TimeMM
End if
Set objFSO=CreateObject("Scripting.FileSystemObject")
' Create Log file
Dim oFSO, oTxtFile, curDir
Set oFSO = CreateObject("Scripting.FileSystemObject")
curDir = oFSO.GetAbsolutePathName(".")
strFile = "\datetime.bat"
If oFSO.FileExists(curDir & strFile) then
oFSO.DeleteFile curDir & strFile
end if
strValue = "SET Date_MMDD=" & dateMonth & dateDay
strValue = strValue & vbcrlf & "SET Date_MM=" & dateMonth
strValue = strValue & vbcrlf & "SET Date_MMM=" & dateMMM
strValue = strValue & vbcrlf & "SET Date_DD=" & dateDay
strValue = strValue & vbcrlf & "SET Date_HHMM=" & TimeHH & TimeMM
strValue = strValue & vbcrlf & "SET Time_HH=" & TimeHH
strValue = strValue & vbcrlf & "SET Time_MM=" & TimeMM
Set oTxtFile = oFSO.CreateTextFile(curDir & strFile)
oTxtFile.writeline(strValue)
wscript.echo strValue
set oTxtFile = nothing
set oFSO = nothing
</code></pre>
|
<p>I'm not sure if you want to run a batch script directly from VBScript, but in case that option is available, you don't need to generate a file at all - you can pass in the date and other info using command-line parameters.</p>
<p>In the example below, I simplified your date code and then passed some fields into a batch file, which will echo them back to VBScript to show in a message box. You can adapt for your needs.</p>
<p>test.vbs:</p>
<pre><code>Option Explicit
Dim runDate : runDate = Now()
If Hour(runDate) < 2 Then runDate = DateAdd("d", -1, runDate)
Dim runYear : runYear = Year(runDate)
Dim runMonth : runMonth = MonthName(Month(runDate), True)
Dim runDay : runDay = Day(runDate)
' Etc...
Dim parameters : parameters = Join(Array(runYear, runMonth, runDay), " ")
' See https://stackoverflow.com/a/45284140/534406 for the below code
Const WshRunning = 0
Const WshFinished = 1
Const WshFailed = 2
Dim shell : Set shell = CreateObject("WScript.Shell")
Dim exec : Set exec = shell.Exec("test.bat " & parameters)
While exec.Status = WshRunning
WScript.Sleep 50
Wend
Dim output
If exec.Status = WshFailed Then
output = exec.StdErr.ReadAll
Else
output = exec.StdOut.ReadAll
End If
WScript.Echo output
</code></pre>
<p>test.bat</p>
<pre><code>@echo off
set Year=%1
set Month=%2
set Day=%3
echo Year %Year% Month %Month% Day %Day%
</code></pre>
<p>Output (for today, after 2am):</p>
<pre><code>Year 2017 Month Aug Day 15
</code></pre>
| 1,107
|
fine-tuning
|
Fine-Tuning InceptionV3
|
https://stackoverflow.com/questions/62210883/fine-tuning-inceptionv3
|
<p>I want to fine-tune Inception V3 to recognize the UC Merced Land Use Dataset.
It contains 21 classes, with 100 images per class.
I have manually split the dataset into 5 folds; for each fold and each class I have 60 images for training, 20 for validation and 20 for testing.
Example:
In the first fold, for each class, the images between 0 and 59 are for training, the images between 60 and 79 are for validation, etc.
In the second fold, the images between 0 and 19 are for testing, the images between 80 and 99 are for validation, etc.
I applied cross validation, so at the end I will test the net on all images in the dataset.</p>
<p>With this fine-tuning I have reached 93%; 97% is the goal.</p>
<pre><code># imports
import os
import random
import numpy as np
from matplotlib import pyplot as plt
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense, GlobalAveragePooling2D
from keras import backend as K
from keras import applications
from keras import utils
from keras import optimizers
from sklearn.model_selection import KFold
directory_train=["./UCMerced_LandUse2/Images/Fold1/Training","./UCMerced_LandUse2/Images/Fold2/Training","./UCMerced_LandUse2/Images/Fold3/Training","./UCMerced_LandUse2/Images/Fold4/Training","./UCMerced_LandUse2/Images/Fold5/Training"]
directory_validation=["./UCMerced_LandUse2/Images/Fold1/Validation","./UCMerced_LandUse2/Images/Fold2/Validation","./UCMerced_LandUse2/Images/Fold3/Validation","./UCMerced_LandUse2/Images/Fold4/Validation","./UCMerced_LandUse2/Images/Fold5/Validation"]
directory_test=["./UCMerced_LandUse2/Images/Fold1/Test","./UCMerced_LandUse2/Images/Fold2/Test","./UCMerced_LandUse2/Images/Fold3/Test","./UCMerced_LandUse2/Images/Fold4/Test","./UCMerced_LandUse2/Images/Fold5/Test"]
</code></pre>
<pre><code>img_width, img_height = 256, 256
num_samples=2100
batch_size = 10
Datagen = ImageDataGenerator(rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
Gen = ImageDataGenerator(rescale=1./255)
train_generator=[]
valid_generator=[]
test_generator=[]
print("Creazione Train Generator")
for i in range(5):
train_generator.append(Datagen.flow_from_directory(
directory_train[i],
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='categorical'
))
print(train_generator[0].n//batch_size)
print("Creazione Validation Generator")
for i in range(5):
valid_generator.append(Gen.flow_from_directory(
directory_validation[i],
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='categorical'
))
print(valid_generator[i].n//batch_size)
print("Creazione Test Generator")
for i in range(5):
test_generator.append(Gen.flow_from_directory(
directory_test[i],
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='categorical'
))
</code></pre>
<pre><code># Inception V3 with pre-trained weights
base_model = applications.InceptionV3(weights='imagenet', include_top=False,input_shape=(256,256,3),classes=21)
base_model.trainable = True
num_epochs = 100
history=[]
risultati=[]
for i in range(5):
model=Sequential()
model.add(base_model)
model.add(GlobalAveragePooling2D())
model.add(Dense(1024,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1024,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(100,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(21,activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.SGD(1e-4,momentum=0.9),
metrics=['accuracy'])
print(i)
model.fit_generator(train_generator[i],steps_per_epoch=train_generator[i].n//batch_size,epochs=num_epochs,validation_data=valid_generator[i],validation_steps=valid_generator[i].n//batch_size,shuffle=True)
model.save('sasaprova.h5')
print(i)
scores=model.evaluate_generator(test_generator[i])
print(scores[1])
risultati.append(scores[1]*100)
print(np.mean(risultati))
</code></pre>
<p>Any suggestions?</p>
|
<p>There are a few things that may improve the classification accuracy:</p>
<ol>
<li><p>Use EfficientNet with noisy_student weights. There are fewer parameters to train, and it gives better accuracy due to its scalable architecture.</p></li>
<li><p>You can use test-time augmentation. In your test data generator, do a simple horizontal flip, a vertical flip (if the data looks realistic) and affine transformations. This generates multiple views of the data and helps the model average out the most probable class.</p></li>
<li><p>Your data augmentation could be more exhaustive. Check out the imgaug library. Plus, there are random-erasing, cut-out and mix-up strategies that have proved to be useful.</p></li>
<li><p>Try label smoothing. It can also help your classifier give more probability to the correct class.</p></li>
<li><p>Try learning rate warmup. </p></li>
</ol>
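<p>The label-smoothing idea from point 4 can be sketched in plain NumPy (framework-independent; in Keras you would typically pass <code>label_smoothing</code> to the loss instead):</p>

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Distribute eps of the probability mass uniformly over all classes."""
    n_classes = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / n_classes

y = np.eye(21)[[0, 5]]      # two one-hot labels over 21 classes
y_smooth = smooth_labels(y)
print(y_smooth[0, 0])       # correct class keeps most of the probability mass
```
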
| 1,108
|
fine-tuning
|
How to fine tuning again of a bert fined tuned model
|
https://stackoverflow.com/questions/67318099/how-to-fine-tuning-again-of-a-bert-fined-tuned-model
|
<p>I fine-tuned a BERT model for text classification using ktrain.
Now I want to fine-tune this model again on another text classification dataset. How can I do that?</p>
|
<p>See <a href="https://github.com/amaiya/ktrain/blob/master/FAQ.md#how-do-i-resume-training-from-a-saved-checkpoint" rel="nofollow noreferrer">this FAQ entry</a> on resuming training/fine-tuning.</p>
| 1,109
|
fine-tuning
|
Fine Tuning Stable Diffusion
|
https://stackoverflow.com/questions/73916442/fine-tuning-stable-diffusion
|
<p>I'm trying to use the fine-tuning method for Stable Diffusion to generate AI art. This is the Google Colab link if required: <a href="https://colab.research.google.com/drive/1yGiI2TYkFMuETm4Rh5bh3-k6yY1C38w0?usp=sharing#scrollTo=60jVYSk0BGC8&uniqifier=3" rel="nofollow noreferrer">https://colab.research.google.com/drive/1yGiI2TYkFMuETm4Rh5bh3-k6yY1C38w0?usp=sharing#scrollTo=60jVYSk0BGC8&uniqifier=3</a></p>
<pre><code> #@title Setup and check the images you have just added
import requests
import glob
from io import BytesIO
def download_image(url):
try:
response = requests.get(url)
except:
return None
return image.open(BytesIO(response.content)).convert("RGB")
images = list(filter(None,[download_image(url) for url in urls]))
save_path = "./my_concept"
if not os.path.exists(save_path):
os.mkdir(save_path)
[image.save(f"{save_path}/{i}.jpeg") for i, image in enumerate(images)]
image_grid(images, 1, len(images))
</code></pre>
<p>returns error</p>
<pre><code>NameError Traceback (most recent call last)
<ipython-input-49-adadff211ef8> in <module>
11 return image.open(BytesIO(response.content)).convert("RGB")
12
---> 13 images = list(filter(None,[download_image(url) for url in urls]))
14 save_path = "./my_concept"
15 if not os.path.exists(save_path):
1 frames
<ipython-input-49-adadff211ef8> in download_image(url)
9 except:
10 return None
---> 11 return image.open(BytesIO(response.content)).convert("RGB")
12
13 images = list(filter(None,[download_image(url) for url in urls]))
NameError: name 'image' is not defined
</code></pre>
|
<p>There are a few issues with the code that need to be fixed. First, the download_image function uses the image.open method to open the image, but it should use the Image.open method instead. This is a typo that needs to be corrected.</p>
<p>Second, the image_grid function is not defined in the code, so calling it will result in an error. This function is used to display the images that were downloaded, but it is not provided in the code. To fix this issue, you would need to define the image_grid function or use a different method to display the images.</p>
<p>Third, the code uses the .jpeg extension for the image files, but the Image.save method uses the format specified by the format parameter to determine the file format. If you want to save the images in JPEG format, you would need to specify the format="JPEG" parameter when calling Image.save.</p>
<p>To fix these issues and improve the code, you can make the following changes:</p>
<pre><code>import os
import requests
from io import BytesIO
from PIL import Image

def download_image(url):
    try:
        response = requests.get(url)
    except:
        return None
    return Image.open(BytesIO(response.content)).convert("RGB")

images = list(filter(None, [download_image(url) for url in urls]))
save_path = "./my_concept"
if not os.path.exists(save_path):
    os.mkdir(save_path)

# Use a different method to display the images, such as matplotlib
# image_grid(images, 1, len(images))

# Save the images in JPEG format
for i, image in enumerate(images):
    image.save(f"{save_path}/{i}.jpeg", format="JPEG")
</code></pre>
| 1,110
|
fine-tuning
|
SpaCy fine-tuning GPU
|
https://stackoverflow.com/questions/78364403/spacy-fine-tuning-gpu
|
<p>I train a text classifier with spaCy and classy classification, but the model does not use the GPU during training, so fine-tuning takes very long.</p>
<p>GPU info</p>
<pre><code>$ nvidia-smi
Mon Apr 22 09:41:13 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.06 Driver Version: 545.29.06 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce GTX 1650 Ti Off | 00000000:01:00.0 Off | N/A |
| N/A 62C P8 1W / 50W | 6MiB / 4096MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 2154 G /usr/bin/gnome-shell 1MiB |
+---------------------------------------------------------------------------------------+
</code></pre>
<p>The GPU is available for cupy</p>
<pre><code>import cupy as cp
x = cp.array([1, 2, 3])
print(x)
[1 2 3]
</code></pre>
<p>The GPU is available for torch</p>
<pre><code>import torch
print(torch.__version__)
print(torch.version.cuda)
import tensorflow as tf
print(tf.__version__)
print(tf.test.gpu_device_name())
12.1
2.15.0
/device:GPU:0
</code></pre>
<p>I have tried this.</p>
<pre><code>spacy.require_gpu()
</code></pre>
<p>and this</p>
<pre><code>spacy.prefer_gpu(0)
</code></pre>
<pre><code>nlp.add_pipe(
'classy_classification',
config={
'data' : train_samples,
'model' : 'sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2',
'device' : 'gpu',
'verbose' : True
}
)
</code></pre>
| 1,111
|
|
fine-tuning
|
VGG16 fine tuning
|
https://stackoverflow.com/questions/58939250/vgg16-fine-tuning
|
<p>I'm trying to fine-tune VGG16.
But sometimes I get a validation accuracy that is constant: sometimes it is fixed at 0.0 and sometimes at 1.0, and the same happens with the test accuracy.
It has also happened that the training accuracy is constant.</p>
<p>Those are some examples:</p>
<p>Adam, bs: 64, lr: 0.001</p>
<pre><code>train_acc = [0.45828044, 0.4580425, 0.45812184, 0.45820114, 0.45820114, 0.45812184, 0.45820114, 0.45820114, 0.45820114, 0.4580425, 0.45820114, 0.45820114, 0.45812184, 0.45828044, 0.45820114, 0.45828044, 0.45812184, 0.45820114, 0.45812184, 0.45828044, 0.45820114, 0.45820114, 0.45812184, 0.45812184, 0.45820114, 0.45812184, 0.45828044, 0.45820114, 0.45828044, 0.45812184, 0.45820114, 0.45820114, 0.45812184, 0.45820114, 0.45820114, 0.45820114, 0.45828044, 0.45812184, 0.45828044, 0.4580425, 0.4580425, 0.45820114, 0.45820114, 0.45820114, 0.45828044, 0.45820114, 0.45812184, 0.45820114, 0.45820114, 0.45820114]
valid_acc = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
train_loss = [8.31718591143032, 8.35966631966799, 8.358442889857413, 8.357219463677575, 8.357219470939055, 8.358442853550015, 8.357219473359548, 8.357219434631658, 8.357219487882508, 8.359666328139717, 8.357219499984973, 8.357219495143987, 8.35844288017544, 8.355996039918232, 8.357219415267712, 8.355996025395273, 8.358442889857413, 8.357219521769412, 8.358442892277907, 8.355996052020698, 8.35721946609807, 8.357219415267712, 8.35844288017544, 8.358442885016427, 8.357219463677575, 8.358442882595934, 8.355996003610834, 8.357219458836589, 8.355996064123163, 8.357520040521766, 8.357219487882508, 8.357219480621028, 8.358442897118893, 8.357219495143987, 8.357219446734124, 8.35721945157511, 8.355996056861684, 8.358442911641852, 8.355996047179712, 8.359666311196264, 8.359666286991333, 8.35721946609807, 8.357219458836589, 8.35721944431363, 8.355996035077245, 8.357219453995603, 8.358442909221358, 8.357219439472644, 8.357219429790671, 8.357219461257083]
valid_loss = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
test_loss = 0.0
test_acc = 1.0
</code></pre>
<p>RMSprop, bs: 64, lr: 0.001</p>
<pre><code>train_acc = [0.5421161, 0.54179883, 0.54179883, 0.54171956, 0.54171956, 0.5419575, 0.54187816, 0.54179883, 0.54187816, 0.5419575, 0.5419575]
valid_acc = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
train_loss = [6.990036433118249, 7.025707591003573, 7.025707559537161, 7.026923776278036, 7.02692376054483, 7.023275266444017, 7.024491474713166, 7.025707566798641, 7.024491443246754, 7.023275273705497, 7.0232752761259905]
valid_loss = [15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457]
test_loss = 15.33323860168457
test_acc = 0.0
</code></pre>
<p>SDG, bs: 64, lr: 0.01, momentum: 0.2</p>
<pre><code>train_acc = [0.5406091, 0.5419575, 0.54187816, 0.54179883, 0.54187816, 0.54187816, 0.54187816, 0.54187816, 0.54179883, 0.54171956, 0.54179883]
valid_acc = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
train_loss = [6.990036433118249, 7.025707591003573, 7.025707559537161, 7.026923776278036, 7.02692376054483, 7.023275266444017, 7.024491474713166, 7.025707566798641, 7.024491443246754, 7.023275273705497, 7.0232752761259905]
valid_loss = [15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457, 15.33323860168457]
test_loss = 15.33323860168457
test_acc = 0.0
</code></pre>
<p>SDG, bs: 64, lr: 0.01, momentum: 0.4</p>
<pre><code>train_acc = [0.45740798, 0.45828044, 0.45820114, 0.45828044, 0.45820114, 0.4580425, 0.45820114, 0.45820114, 0.45820114, 0.45820114, 0.45820114]
valid_acc = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
train_loss = [8.329831461313413, 8.355996044759218, 8.357219475780042, 8.355996035077245, 8.357219502405467, 8.35966631603725, 8.357219461257083, 8.357219461257083, 8.357219456416097, 8.357219441893138, 8.357219478200534]
valid_loss = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
test_loss = 0.0
test_acc = 1.0
</code></pre>
<p>For the fine tuning I've used the following top layers:</p>
<pre><code>model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
</code></pre>
<p>Do you have any idea why this happens?</p>
<p>Anyway, I'm still trying to train the network, but often the training accuracy increases while the validation accuracy behaves very chaotically, varying a lot from one epoch to another. Do you have any suggestions, please?</p>
|
<p>Training accuracy increasing while validation accuracy fluctuates is expected: the model is trying to "memorize" the training set, which is exactly why we keep a validation set, to catch overfitting.</p>
<p>Also, judging from the results, your model seems to be learning very slowly. Try tuning the hyperparameters.</p>
<p>One thing I have noticed (but cannot confirm): if you use transfer learning with a learning rate that is too large, it may destroy all the hard work of the pretrained model (here, VGG). I found this learning rate scheduler in a Google notebook; try using it:</p>
<pre class="lang-py prettyprint-override"><code>start_lr = 0.00001
min_lr = 0.00001
max_lr = 0.00005 * tpu_strategy.num_replicas_in_sync
rampup_epochs = 5
sustain_epochs = 0
exp_decay = .8

def lrfn(epoch):
    if epoch < rampup_epochs:
        return (max_lr - start_lr) / rampup_epochs * epoch + start_lr
    elif epoch < rampup_epochs + sustain_epochs:
        return max_lr
    else:
        return (max_lr - min_lr) * exp_decay**(epoch - rampup_epochs - sustain_epochs) + min_lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(lambda epoch: lrfn(epoch), verbose=True)

...
model.fit(..., callbacks=[lr_callback])
</code></pre>
<p>The idea is to set a low learning rate at the first epoch, then increase it and slowly decrease it.</p>
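<p>As a self-contained sketch of that schedule (assuming, for illustration, a single replica so that <code>tpu_strategy.num_replicas_in_sync == 1</code>), the learning rate ramps up linearly for 5 epochs and then decays exponentially toward <code>min_lr</code>:</p>

```python
# Stand-alone version of the ramp-up/decay schedule above,
# assuming a single replica (num_replicas_in_sync == 1).
start_lr = 0.00001
min_lr = 0.00001
max_lr = 0.00005  # would be multiplied by num_replicas_in_sync on a TPU pod
rampup_epochs = 5
sustain_epochs = 0
exp_decay = 0.8

def lrfn(epoch):
    if epoch < rampup_epochs:
        # linear warm-up from start_lr to max_lr
        return (max_lr - start_lr) / rampup_epochs * epoch + start_lr
    elif epoch < rampup_epochs + sustain_epochs:
        return max_lr
    else:
        # exponential decay toward min_lr
        return (max_lr - min_lr) * exp_decay ** (epoch - rampup_epochs - sustain_epochs) + min_lr

schedule = [round(lrfn(e), 7) for e in range(10)]
```

<p>Printing <code>schedule</code> shows the warm-up clearly, which makes it easy to sanity-check the callback before a long training run.</p>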
| 1,112
|
fine-tuning
|
Why 8bit quantized fine tuning of a model occupy more memory than just original model fine tuning?
|
https://stackoverflow.com/questions/78869197/why-8bit-quantized-fine-tuning-of-a-model-occupy-more-memory-than-just-original
|
<p>I am trying to fine-tune a Mistral model for 5 epochs. It shows that it will take 72 hours with 8-bit quantized fine-tuning but 48 hours with plain (unquantized) fine-tuning. The memory footprint is also higher for 8-bit quantized fine-tuning. Below is the code where I load the model for 8-bit quantization.</p>
<pre><code>bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
    llm_int8_has_fp16_weight=False,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
</code></pre>
<p>Can someone please explain why this is happening? Am I making a mistake?</p>
| 1,113
|
|
fine-tuning
|
Fine-tuning BERT sentence transformer model
|
https://stackoverflow.com/questions/69562624/fine-tuning-bert-sentence-transformer-model
|
<p>I am using a pre-trained BERT sentence transformer model, as described here <a href="https://www.sbert.net/docs/training/overview.html" rel="noreferrer">https://www.sbert.net/docs/training/overview.html</a> , to get embeddings for sentences.</p>
<p>I want to fine-tune these pre-trained embeddings, and I am following the instructions in the tutorial i have linked above. According to the tutorial, you fine-tune the pre-trained model by feeding it sentence pairs and a label score that indicates the similarity score between two sentences in a pair. I understand this fine-tuning happens using the architecture shown in the image below:</p>
<p><a href="https://i.sstatic.net/JPA53.png" rel="noreferrer"><img src="https://i.sstatic.net/JPA53.png" alt="enter image description here" /></a></p>
<p>Each sentence in a pair is encoded first using the BERT model, and then the "pooling" layer aggregates (usually by taking the average) the word embeddings produced by Bert layer to produce a single embedding for each sentence. The cosine similarity of the two sentence embeddings is computed in the final step and compared against the label score.</p>
<p>My question here is - which parameters are being optimized when fine-tuning the model using the given architecture? Is it fine-tuning only the parameters of the <em>last layer</em> in BERT model? This is not clear to me by looking at the code example shown in the tutorial for fine-tuning the model.</p>
|
<p>That actually depends on your requirements.
If you have a lot of computational resources and you want the best possible sentence representation, then you should fine-tune all the layers (which is what was done for the original Sentence-BERT model).</p>
<p>But if you are, say, a student with a limited compute budget and an almost-as-good sentence representation is enough, you can train only the non-BERT layers (the pooling/projection head).</p>
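<p>To make the "train only the non-BERT layers" option concrete, here is a hedged, framework-agnostic sketch. The <code>FakeParam</code> class and the parameter names are stand-ins: with PyTorch or sentence-transformers you would iterate <code>model.named_parameters()</code> and toggle <code>requires_grad</code> on the real tensors instead.</p>

```python
# Sketch: freeze every parameter belonging to the BERT encoder by name,
# leaving the pooling/projection head trainable. FakeParam stands in
# for a framework tensor with a requires_grad flag.
class FakeParam:
    def __init__(self):
        self.requires_grad = True

def freeze_bert_layers(named_params, freeze_prefix="bert."):
    """Disable gradients for every parameter whose name starts with
    freeze_prefix; the optimizer then only updates the remaining ones."""
    for name, param in named_params:
        if name.startswith(freeze_prefix):
            param.requires_grad = False

# Hypothetical parameter names, for illustration only.
params = {
    "bert.encoder.layer.0.attention.weight": FakeParam(),
    "bert.encoder.layer.11.output.weight": FakeParam(),
    "pooling.dense.weight": FakeParam(),  # non-BERT head stays trainable
}
freeze_bert_layers(params.items())
```

<p>In real code the optimizer would then be built from only the parameters that still have <code>requires_grad=True</code>.</p>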
| 1,114
|
fine-tuning
|
RRD fine tuning
|
https://stackoverflow.com/questions/19658887/rrd-fine-tuning
|
<p>I used RRD a few months back for a large application, where I was running around 5k RRD updates from my application, resulting in huge I/O on my box.</p>
<p>I tried many things to improve the performance, but the I/O and the corresponding load eventually forced me to move to flat files.</p>
<p>Are there any guidelines for using RRD at a level that requires around 10k RRD updates/minute?</p>
<p>Is there a fine-tuning guide for RRD?</p>
<p>P.S. I did this exercise on a Linux box.</p>
<p>Thanks,
Jain</p>
|
<p>You must a) use a recent rrdtool (1.4.8) and b) make sure there is sufficient RAM on the box so that the hot blocks of all RRDs can be cached. If you do testing, you should see performance drop drastically as soon as you go over the cache limit. 10k updates/minute should be no problem at all.</p>
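<p>A back-of-the-envelope sketch of the RAM requirement (the page count per RRD is an illustrative assumption, not an rrdtool internal): each update only touches a couple of "hot" pages per file (header plus the current row), so the working set is far smaller than the files themselves.</p>

```python
# Estimate the RAM needed so the hot blocks of every RRD stay in the
# page cache. Numbers are illustrative assumptions.
PAGE_SIZE = 4096  # typical Linux page size, bytes

def hot_cache_bytes(n_rrds, hot_pages_per_rrd=2):
    """RAM needed to keep the actively-written pages of each RRD cached."""
    return n_rrds * hot_pages_per_rrd * PAGE_SIZE

# 10k RRDs updated per minute -> roughly 80 MB of hot pages
mb_needed = hot_cache_bytes(10_000) / (1024 * 1024)
```

<p>Under these assumptions even 10k RRDs need well under 100 MB of cache, which is why performance is fine until the working set exceeds available RAM.</p>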
| 1,115
|
fine-tuning
|
Inception V3 fine tuning
|
https://stackoverflow.com/questions/48085257/inception-v3-fine-tuning
|
<p>I am not from a CS background, and I am trying to create a classifier that I feed images containing a disease and images without the disease. I was trying to do fine-tuning with Inception V3 for this. Unfortunately, all the fine-tuning examples are done for VGG-16, and the tutorials stop at saying Inception V3 is trained similarly. I am using Keras with the TensorFlow backend. Everyone tells me to truncate the final softmax layer of Inception, add two layers, and do the fine-tuning. I do not know how to add a layer in Inception. Also, I am going to store my data in 2 folders; this is creating a headache for me as well, since some tutorials load the CIFAR database while others use directories, and I'm uncomfortable with this too. Can anyone provide me with some inputs?</p>
<p><strong>train.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import glob
import argparse
import matplotlib

matplotlib.use('agg')
import matplotlib.pyplot as plt

from keras import backend as K
from keras import __version__
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.models import Model
from keras.layers import Dense, AveragePooling2D, GlobalAveragePooling2D, Input, Flatten, Dropout
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import SGD

IM_WIDTH, IM_HEIGHT = 299, 299  # fixed size for InceptionV3
NB_EPOCHS = 3
BAT_SIZE = 32
FC_SIZE = 1024
# NB_IV3_LAYERS_TO_FREEZE = 172


def get_nb_files(directory):
    """Get number of files by searching directory recursively"""
    if not os.path.exists(directory):
        return 0
    cnt = 0
    for r, dirs, files in os.walk(directory):
        for dr in dirs:
            cnt += len(glob.glob(os.path.join(r, dr + "/*")))
    return cnt


def setup_to_transfer_learn(model, base_model):
    """Freeze all layers and compile the model"""
    for layer in base_model.layers:
        layer.trainable = False
    model.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])


def add_new_last_layer(base_model, nb_classes):
    """Add last layer to the convnet

    Args:
        base_model: keras model excluding top
        nb_classes: # of classes
    Returns:
        new keras model with last layer
    """
    x = base_model.output
    x = AveragePooling2D((8, 8), border_mode='valid', name='avg_pool')(x)
    x = Dropout(0.5)(x)
    x = Flatten()(x)
    predictions = Dense(2, activation='softmax')(x)
    model = Model(input=base_model.input, output=predictions)
    return model


"""
def setup_to_finetune(model):
    Freeze the bottom NB_IV3_LAYERS and retrain the remaining top layers.

    note: NB_IV3_LAYERS corresponds to the top 2 inception blocks in the inceptionv3 arch

    Args:
        model: keras model

    for layer in model.layers[:NB_IV3_LAYERS_TO_FREEZE]:
        layer.trainable = False
    for layer in model.layers[NB_IV3_LAYERS_TO_FREEZE:]:
        layer.trainable = True
    model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
"""


def train(args):
    """Use transfer learning and fine-tuning to train a network on a new dataset"""
    train_img = 'training_set/'
    validation_img = 'test_set/'
    nb_epoch = int(args.nb_epoch)
    nb_train_samples = get_nb_files(train_img)
    nb_classes = len(glob.glob(train_img + "/*"))

    # data prep
    train_datagen = ImageDataGenerator(
        rotation_range=40,
        width_shift_range=0.2,
        height_shift_range=0.2,
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        fill_mode='nearest')
    validation_datagen = ImageDataGenerator(
        rotation_range=40,
        width_shift_range=0.2,
        height_shift_range=0.2,
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        fill_mode='nearest')

    train_generator = train_datagen.flow_from_directory(
        train_img,
        target_size=(299, 299),
        batch_size=32,
        class_mode='categorical')
    validation_generator = validation_datagen.flow_from_directory(
        validation_img,
        target_size=(299, 299),
        batch_size=32,
        class_mode='categorical')

    if K.image_dim_ordering() == 'th':
        input_tensor = Input(shape=(3, 299, 299))
    else:
        input_tensor = Input(shape=(299, 299, 3))

    # setup model
    base_model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=False,
                             input_shape=(IM_HEIGHT, IM_WIDTH, 3))  # include_top=False excludes final FC layer
    model = add_new_last_layer(base_model, nb_classes)

    # transfer learning
    setup_to_transfer_learn(model, base_model)

    history_tl = model.fit_generator(train_generator,
                                     samples_per_epoch=320,
                                     nb_epoch=nb_epoch,
                                     validation_data=validation_generator,
                                     nb_val_samples=64)
    model.save(args.output_model_file)
    if args.plot:
        plot_training(history_tl)


def plot_training(history):
    acc = history.history['acc']
    val_acc = history.history['val_acc']
    loss = history.history['loss']
    val_loss = history.history['val_loss']
    epochs = range(len(acc))

    plt.plot(epochs, acc, 'r.')
    plt.plot(epochs, val_acc, 'r')
    plt.title('Training and validation accuracy')
    plt.savefig('accuracy.png')

    plt.figure()
    plt.plot(epochs, loss, 'r.')
    plt.plot(epochs, val_loss, 'r-')
    plt.title('Training and validation loss')
    plt.savefig('loss.png')


if __name__ == "__main__":
    a = argparse.ArgumentParser()
    a.add_argument("--nb_epoch", default=NB_EPOCHS)
    a.add_argument("--batch_size", default=BAT_SIZE)
    a.add_argument("--plot", action="store_true")
    a.add_argument("--output_model_file", default="inceptionv3-ft.model")

    args = a.parse_args()
    train(args)
</code></pre>
<p><strong>predictions.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import sys
import argparse
import numpy as np
from PIL import Image
import requests
from io import BytesIO
import matplotlib

matplotlib.use('agg')
import matplotlib.pyplot as plt

from keras.preprocessing import image
from keras.models import load_model
from keras.applications.inception_v3 import preprocess_input

target_size = (299, 299)  # fixed size for InceptionV3 architecture


def predict(model, img, target_size):
    """Run model prediction on image

    Args:
        model: keras model
        img: PIL format image
        target_size: (w,h) tuple
    Returns:
        list of predicted labels and their probabilities
    """
    if img.size != target_size:
        img = img.resize(target_size)

    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    preds = model.predict(x)
    return preds[0]


def plot_preds(image, preds):
    """Displays image and the top-n predicted probabilities in a bar graph

    Args:
        image: PIL image
        preds: list of predicted labels and their probabilities
    """
    plt.figure()
    labels = (" NO DR", "DR")
    plt.barh([0, 1], preds, alpha=0.5)
    plt.yticks([0, 1], labels)
    plt.xlabel('Probability')
    plt.xlim(0, 1.01)
    plt.tight_layout()
    plt.savefig('out.png')


if __name__ == "__main__":
    a = argparse.ArgumentParser()
    a.add_argument("--image", help="path to image")
    a.add_argument("--image_url", help="url to image")
    a.add_argument("--model")
    args = a.parse_args()

    if args.image is None and args.image_url is None:
        a.print_help()
        sys.exit(1)

    model = load_model(args.model)
    if args.image is not None:
        img = Image.open(args.image)
        preds = predict(model, img, target_size)
        plot_preds(img, preds)

    if args.image_url is not None:
        response = requests.get(args.image_url)
        img = Image.open(BytesIO(response.content))
        preds = predict(model, img, target_size)
        plot_preds(img, preds)
</code></pre>
<p>Finally I will pass an image via arguments and get a result in the form of a PNG file.</p>
|
<p>You seem to have multiple unrelated questions, but pretty much all of them are already answered in stackoverflow. I'll try to compile some information to give you some direction:</p>
<blockquote>
<p>i feed images containing disease and images without disease [...] Everyone tells me to truncate the final softmax layer of inception and add two layers and do the fine tuning</p>
</blockquote>
<p>I believe it's a little cleaner to load the model without its "top" dense layers (<code>softmax</code> included), and re-add the top layers yourself:</p>
<pre class="lang-py prettyprint-override"><code># This will load inception without its top dense layers (there's only 2).
model = InceptionV3(..., weights='imagenet', include_top=False)
x = model.output
# Re-add the layers here, with new weights.
x = GlobalAveragePooling2D(name='avg_pool')(x)
x = Dense(2, activation='softmax', name='predictions')(x)
model = Model(inputs=model.inputs, outputs=x)
</code></pre>
<p>Notice that you should NOT use both <code>GlobalAveragePooling2D</code> and <code>Flatten</code> together, as you are doing in your train script.</p>
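<p>A small shape sketch of why combining them is redundant (plain Python, no Keras; the 8×8×2048 feature-map size is InceptionV3's usual final output for 299×299 inputs):</p>

```python
# GlobalAveragePooling2D collapses (h, w, c) feature maps to a length-c
# vector, so a Flatten placed after it has nothing left to flatten.
def gap_output_shape(h, w, c):
    # average over the h*w spatial positions -> one value per channel
    return (c,)

def flatten_output_shape(shape):
    # multiply all remaining dimensions into one
    n = 1
    for d in shape:
        n *= d
    return (n,)

gap = gap_output_shape(8, 8, 2048)       # (2048,)
after_flatten = flatten_output_shape(gap)  # identical: (2048,)
```

<p>Flatten alone (without GAP) would instead produce an 8·8·2048 = 131072-dimensional vector, which is why you pick one or the other, not both.</p>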
<p>In case you use <code>VGG16</code>, the architecture is a little different:</p>
<pre class="lang-py prettyprint-override"><code>model = VGG16(..., weights='imagenet', include_top=False)
x = model.output
x = Flatten(name='flatten')(x)
x = Dense(4096, activation='relu', name='fc1')(x)
x = Dense(4096, activation='relu', name='fc2')(x)
x = Dense(2, activation='softmax', name='predictions')(x)
</code></pre>
<p><em>Note: you might want to change these <code>4096</code>. They seem a little high for only 2 classes.</em></p>
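<p>To see why, a quick parameter count for the dense head (assuming, hypothetically, a 512-dimensional flattened feature vector feeding it; the real VGG16 flatten output is larger, which only strengthens the point):</p>

```python
# Hypothetical parameter count for the dense head above, assuming the
# flattened convolutional features have 512 dimensions (illustration only).
def dense_params(n_in, n_out):
    return n_in * n_out + n_out  # weights + biases

head = (dense_params(512, 4096)      # fc1
        + dense_params(4096, 4096)   # fc2
        + dense_params(4096, 2))     # predictions

small_head = (dense_params(512, 256)
              + dense_params(256, 2))
```

<p>The 4096-wide head carries tens of millions of parameters, over a hundred times more than a 256-wide head, for a 2-class problem.</p>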
<blockquote>
<p>also I am going to store my data in 2 folders this is also creating a headache for me as some tutorials load cifar database while others use directories and I'm uncomfortable with this too. </p>
</blockquote>
<p>The <code>cifar</code> dataset in keras is a toy example — a debugging start to make sure everything else is running smoothly. This is why it can be loaded directly into main memory.<br>
Real datasets need to be stored on disk.<br>
If they are contained in sub-folders named after their labels, such as this:</p>
<pre><code>train/
|-label_a/
|-label_b/
...
|-label_z/
valid/
|-label_a/
|-label_b/
...
|-label_z/
</code></pre>
<p>Then there's a helper for you that can automatically load these images and associate them with their correct labels:</p>
<pre class="lang-py prettyprint-override"><code>from keras.preprocessing.image import ImageDataGenerator
from keras.applications.inception_v3 import preprocess_input
# or from keras.applications.vgg16 import preprocess_input
train_dir = '/datasets/problem/train/'
valid_dir = '/datasets/problem/valid/'
g = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest',
    preprocessing_function=preprocess_input)

train = g.flow_from_directory(train_dir,
                              target_size=(256, 256),
                              batch_size=32,
                              shuffle=True)

valid = g.flow_from_directory(valid_dir,
                              target_size=(256, 256),
                              batch_size=32,
                              shuffle=True)
</code></pre>
<p><em>Note: this seems to be the case for your train script.</em></p>
<p>If your dataset is not aranged such as this, then you need to implement a <a href="https://keras.io/utils/#sequence" rel="nofollow noreferrer"><code>Sequence</code></a> that's capable of loading the data for you and associate it with the appropriate labels.</p>
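<p>The label-from-subfolder convention above can also be sketched without Keras at all. This hedged, stdlib-only loader shows the mechanics that <code>flow_from_directory</code> (or a custom <code>Sequence</code>) automates — the directory layout mirrors the <code>train/label_a/</code> tree shown earlier, and file names are illustrative:</p>

```python
import os

def index_directory(root):
    """Map each file path under root/<label>/ to an integer class index,
    mimicking what flow_from_directory infers from the folder names."""
    samples = []
    # sorted() makes the label -> index assignment deterministic
    labels = sorted(
        d for d in os.listdir(root)
        if os.path.isdir(os.path.join(root, d))
    )
    label_to_idx = {name: i for i, name in enumerate(labels)}
    for name in labels:
        folder = os.path.join(root, name)
        for fname in sorted(os.listdir(folder)):
            samples.append((os.path.join(folder, fname), label_to_idx[name]))
    return samples, label_to_idx
```

<p>A real <code>Sequence</code> would then slice <code>samples</code> into batches, load and decode the images, and return <code>(x, y)</code> arrays per batch.</p>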
| 1,116
|
fine-tuning
|
Tensorflow fine tuning tutorial without Bazel
|
https://stackoverflow.com/questions/45071647/tensorflow-fine-tuning-tutorial-without-bazel
|
<p>I am using the Google Research tutorial for fine tuning the Inception model. </p>
<p><a href="https://github.com/tensorflow/models/tree/master/inception/README.md#how-to-fine-tune-a-pre-trained-model-on-a-new-task" rel="nofollow noreferrer">The tutorial can be found here</a></p>
<p>The tutorial uses Bazel.</p>
<p>I have access to my institution's GPU that doesn't have Bazel installed on it.</p>
<p>Is there a way I can complete this fine tuning tutorial without using Bazel?</p>
|
<p>Yes, you can. Just check
<a href="https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/?utm_campaign=chrome_series_machinelearning_063016&utm_source=gdev&utm_medium=yt-desc#4." rel="nofollow noreferrer">this codelab</a>, section 5.</p>
| 1,117
|
fine-tuning
|
OpenAI GPT-3 API: Fine tune a fine tuned model?
|
https://stackoverflow.com/questions/72758187/openai-gpt-3-api-fine-tune-a-fine-tuned-model
|
<p>The OpenAI documentation for the <code>model</code> attribute in the fine-tune API states a bit confusingly:</p>
<blockquote>
<p><strong>model</strong></p>
<p>The name of the base model to fine-tune. You can select one of "ada", "babbage", "curie", "davinci", or a fine-tuned model created after 2022-04-21.</p>
</blockquote>
<p>My question: is it better to fine-tune a base model or a fine-tuned model?</p>
<p>I created a fine-tune model from <code>ada</code> with file <code>mydata1K.jsonl</code>:</p>
<pre><code>ada + mydata1K.jsonl --> ada:ft-acme-inc-2022-06-25
</code></pre>
<p>Now I have a bigger file of samples <code>mydata2K.jsonl</code> that I want to use to improve the fine-tuned model.
In this second round of fine-tuning, is it better to fine-tune <code>ada</code> again or to fine-tune my fine-tuned model <code>ada:ft-acme-inc-2022-06-25</code>? I'm assuming this is possible because my fine tuned model is created after 2022-04-21.</p>
<pre><code>ada + mydata2K.jsonl --> better-model
</code></pre>
<p>or</p>
<pre><code>ada:ft-acme-inc-2022-06-25 + mydata2K.jsonl --> even-better-model?
</code></pre>
|
<p><strong>UPDATE</strong></p>
<p>It looks like fine-tuning a fine-tuned model is not supported anymore, as stated in the official <a href="https://platform.openai.com/docs/guides/fine-tuning/can-i-continue-fine-tuning-a-model-that-has-already-been-fine-tuned" rel="nofollow noreferrer">OpenAI documentation</a>:</p>
<blockquote>
<h4>Can I continue fine-tuning a model that has already been fine-tuned?</h4>
<p>No, we do not currently support continuing the fine-tuning process
once a job has finished. We plan to support this in the near future.</p>
</blockquote>
<hr />
<p>As stated in the official <a href="https://platform.openai.com/docs/guides/fine-tuning/continue-fine-tuning-from-a-fine-tuned-model" rel="nofollow noreferrer">OpenAI documentation</a>:</p>
<blockquote>
<p>If you have already fine-tuned a model for your task and now have
additional training data that you would like to incorporate, you can
continue fine-tuning from the model. <strong>This creates a model that has
learned from all of the training data without having to re-train from
scratch.</strong></p>
<p>To do this, pass in the fine-tuned model name when creating a new
fine-tuning job (e.g., <code>-m curie:ft-<org>-<date></code>). Other training
parameters do not have to be changed, however if your new training
data is much smaller than your previous training data, you may find it
useful to reduce <code>learning_rate_multiplier</code> by a factor of 2 to 4.</p>
</blockquote>
<h3>Which option to choose?</h3>
<p>You're asking about two options:</p>
<ul>
<li>Option 1: <code>ada + bigger-training-dataset.jsonl</code></li>
<li>Option 2: <code>ada:ft-acme-inc-2022-06-25 + additional-training-dataset.jsonl</code></li>
</ul>
<p>The documentation says nothing about which option is better <em>in terms of which would yield better results</em>.</p>
<p>However...</p>
<h3>Choose Option 2</h3>
<p>Why?</p>
<blockquote>
<p>When training a fine-tuned model, the total tokens used will be billed
according to our <a href="https://openai.com/api/pricing/#quotas" rel="nofollow noreferrer">training rates</a>.</p>
</blockquote>
<p>If you choose Option 1, you'll pay for some tokens in your training dataset twice. First when doing fine-tuning with initial training dataset, second when doing fine-tuning with bigger training dataset (i.e., <code>bigger-training-dataset.jsonl</code> = <code>initial-training-dataset.jsonl</code> + <code>additional-training-dataset.jsonl</code>).</p>
<p><strong>It's better to continue fine-tuning from a fine-tuned model because you'll pay only for tokens in your additional training dataset.</strong></p>
<p>Read more about <a href="https://openai.com/api/pricing/#faq-fine-tuning-pricing-calculation" rel="nofollow noreferrer">fine-tuning pricing calculation</a>.</p>
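<p>To see why Option 2 is cheaper, here is a hedged back-of-the-envelope calculation. The per-token price and dataset sizes below are made-up illustration values, not OpenAI's actual rates:</p>

```python
# Illustrative numbers only: a hypothetical training rate and token counts.
PRICE_PER_1K = 0.0004          # $/1k training tokens (assumed)
initial_tokens = 1_000_000     # tokens in mydata1K.jsonl (assumed)
additional_tokens = 1_000_000  # new tokens added in mydata2K.jsonl (assumed)

def cost(tokens):
    return tokens / 1000 * PRICE_PER_1K

# Option 1: fine-tune the base model on initial + additional data,
# paying for the initial tokens a second time.
option1 = cost(initial_tokens + additional_tokens)

# Option 2: continue from the fine-tuned model, paying only for new tokens.
option2 = cost(additional_tokens)
```

<p>Under these assumptions Option 1 costs exactly twice Option 2; in general the savings equal the cost of re-training on the tokens you already paid for once.</p>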
| 1,118
|
fine-tuning
|
Keras VGG16 fine tuning
|
https://stackoverflow.com/questions/43386463/keras-vgg16-fine-tuning
|
<p>There is an example of VGG16 fine-tuning on <a href="https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html" rel="noreferrer">keras blog</a>, but I can't reproduce it. </p>
<p>More precisely, here is code used to init VGG16 without top layer and to freeze all blocks except the topmost:</p>
<pre><code>WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'
weights_path = get_file('vgg16_weights.h5', WEIGHTS_PATH_NO_TOP)
model = Sequential()
model.add(InputLayer(input_shape=(150, 150, 3)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block5_maxpool'))
model.load_weights(weights_path)
for layer in model.layers:
    layer.trainable = False

for layer in model.layers[-4:]:
    layer.trainable = True
    print("Layer '%s' is trainable" % layer.name)
</code></pre>
<p>Next, creating a top model with single hidden layer:</p>
<pre><code>top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
top_model.load_weights('top_model.h5')
</code></pre>
<p>Note that it was previously trained on bottleneck features like it is described in the blog post. Next, add this top model to the base model and compile:</p>
<pre><code>model.add(top_model)
model.compile(loss='binary_crossentropy',
              optimizer=SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])
</code></pre>
<p>And eventually, fit on cats/dogs data:</p>
<pre><code>batch_size = 16

train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_gen = train_datagen.flow_from_directory(
    TRAIN_DIR,
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')

valid_gen = test_datagen.flow_from_directory(
    VALID_DIR,
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_gen,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=nb_epoch,
    validation_data=valid_gen,
    validation_steps=nb_valid_samples // batch_size)
</code></pre>
<p>But here is an error I am getting when trying to fit:</p>
<blockquote>
<p>ValueError: Error when checking model target: expected block5_maxpool to have 4 > dimensions, but got array with shape (16, 1)</p>
</blockquote>
<p>Therefore, it seems that something is wrong with the last pooling layer in base model. Or probably I've done something wrong trying to connect base model with the top one. </p>
<p>Does anybody have similar issue? Or maybe there is a better way to build such "concatenated" models? I am using <code>keras==2.0.0</code> with <code>theano</code> backend.</p>
<blockquote>
<p><strong>Note</strong>: I was using examples from gist and <code>applications.VGG16</code> utility, but has issues trying to concatenate models, I am not too familiar with <code>keras</code> functional API. So this solution I provide here is the most "successful" one, i.e. it fails only on fitting stage. </p>
</blockquote>
<hr>
<h3>Update #1</h3>
<p>Ok, here is a small explanation about what I am trying to do. First of all, I am generating bottleneck features from VGG16 as follows:</p>
<pre><code>def save_bottleneck_features():
    datagen = ImageDataGenerator(rescale=1./255)
    model = applications.VGG16(include_top=False, weights='imagenet')

    generator = datagen.flow_from_directory(
        TRAIN_DIR,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    print("Predicting train samples..")
    bottleneck_features_train = model.predict_generator(generator, nb_train_samples)
    np.save(open('bottleneck_features_train.npy', 'w'), bottleneck_features_train)

    generator = datagen.flow_from_directory(
        VALID_DIR,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    print("Predicting valid samples..")
    bottleneck_features_valid = model.predict_generator(generator, nb_valid_samples)
    np.save(open('bottleneck_features_valid.npy', 'w'), bottleneck_features_valid)
</code></pre>
<p>Then, I create a top model and train it on these features as follows:</p>
<pre><code>def train_top_model():
    train_data = np.load(open('bottleneck_features_train.npy'))
    train_labels = np.array([0]*(nb_train_samples / 2) +
                            [1]*(nb_train_samples / 2))
    valid_data = np.load(open('bottleneck_features_valid.npy'))
    valid_labels = np.array([0]*(nb_valid_samples / 2) +
                            [1]*(nb_valid_samples / 2))

    model = Sequential()
    model.add(Flatten(input_shape=train_data.shape[1:]))
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))

    model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(train_data, train_labels,
              nb_epoch=nb_epoch,
              batch_size=batch_size,
              validation_data=(valid_data, valid_labels),
              verbose=1)
    model.save_weights('top_model.h5')
</code></pre>
<p>So basically, there are two trained models, <code>base_model</code> with ImageNet weights and <code>top_model</code> with weights generated from bottleneck features. And I wonder how to concatenate them? Is it possible or I am doing something wrong? Because as I can see, the response from @thomas-pinetz supposes that the top model <em>is not trained separately and right away appended to the model</em>. Not sure if I am clear, here is a quote from the blog:</p>
<blockquote>
<p>In order to perform fine-tuning, all layers should start with properly trained weights: for instance you should not slap a randomly initialized fully-connected network on top of a pre-trained convolutional base. This is because the large gradient updates triggered by the randomly initialized weights would wreck the learned weights in the convolutional base. In our case this is why we first train the top-level classifier, and only then start fine-tuning convolutional weights alongside it.</p>
</blockquote>
|
<p>I think the VGG weights do not fit your model, and the error stems from this. In any case, there is a much better way to do this, using the network itself as described in (<a href="https://keras.io/applications/#vgg16" rel="nofollow noreferrer">https://keras.io/applications/#vgg16</a>).</p>
<p>You can just use:</p>
<pre><code>base_model = keras.applications.vgg16.VGG16(include_top=False, weights='imagenet', input_tensor=None, input_shape=None)
</code></pre>
<p>to instantiate a vgg net that is pre-trained. Then you can freeze the layers and use the model class to instantiate your own model like this:</p>
<pre><code>x = base_model.output
x = Flatten()(x)
x = Dense(your_classes, activation='softmax')(x) #minor edit
new_model = Model(input=base_model.input, output=x)
</code></pre>
<p>To combine the bottom and the top network you can use the following code snippet. The following functions are used (Input Layer (<a href="https://keras.io/getting-started/functional-api-guide/" rel="nofollow noreferrer">https://keras.io/getting-started/functional-api-guide/</a>) / load_model (<a href="https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model" rel="nofollow noreferrer">https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model</a>) and the functional API of keras):</p>
<pre><code>final_input = Input(shape=(3, 224, 224))
base_model = vgg...
top_model = load_model(weights_file)
x = base_model(final_input)
result = top_model(x)
final_model = Model(inputs=final_input, outputs=result)
</code></pre>
| 1,119
|
fine-tuning
|
How does Fine-tuning Word Embeddings work?
|
https://stackoverflow.com/questions/40345607/how-does-fine-tuning-word-embeddings-work
|
<p>I've been reading some NLP-with-deep-learning papers and found that fine-tuning seems to be a simple yet confusing concept. The same question has been asked <a href="https://stackoverflow.com/questions/40098450/hows-the-input-word2vec-get-fine-tuned-when-training-cnn/40098823#40098823">here</a>, but it's still not quite clear. </p>
<p>Fine-tuning pre-trained word embeddings to task-specific word embeddings as mentioned in papers like <em>Y. Kim, “Convolutional Neural Networks for Sentence Classification,”</em> and <em>K. S. Tai, R. Socher, and C. D. Manning, “Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks,”</em> had only a brief mention without getting into any details. </p>
<p>My question is: </p>
<p>Word embeddings generated using word2vec or GloVe as pretrained word vectors are used as input features <code>(X)</code> for downstream tasks like parsing or sentiment analysis, meaning those input vectors are plugged into a new neural network model for some specific task, and while training this new model we somehow get updated, task-specific word embeddings.</p>
<p>But as far as I know, during training what back-propagation does is update the weights <code>(W)</code> of the model; it does not change the input features <code>(X)</code>. So how exactly do the original word embeddings get fine-tuned, and where do these fine-tuned vectors come from?</p>
|
<p>Yes, if you feed the embedding vector as your input, you can't fine-tune the embeddings (at least easily). However, all the frameworks provide some sort of an <code>EmbeddingLayer</code> that takes as input an integer that is the class ordinal of the word/character/other input token, and performs an embedding lookup. Such an embedding layer is very similar to a fully connected layer that is fed a one-hot encoded class, but is way more efficient, as it only needs to fetch/change one row from the matrix on both the forward and backward passes. More importantly, it allows the weights of the embedding to be learned.</p>
<p>So the classic way would be to feed the actual classes to the network instead of embeddings, and prepend the entire network with an embedding layer, which is initialized with word2vec / GloVe and which continues learning the weights. It might also be reasonable to freeze them for several iterations at the beginning, until the rest of the network starts doing something reasonable with them, before you start fine-tuning them.</p>
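To see why the lookup is so cheap and what "learning the weights" actually touches, here is a tiny numpy sketch with purely illustrative numbers (a real embedding matrix would come from word2vec / GloVe):

```python
import numpy as np

# Toy "pretrained" embedding matrix: 5-word vocab, 3-dim vectors
E = np.arange(15, dtype=float).reshape(5, 3)
E_before = E.copy()

# An embedding layer is a row lookup -- the same result as a fully
# connected layer fed a one-hot encoded class, but far cheaper.
token_id = 2
one_hot = np.zeros(5)
one_hot[token_id] = 1.0
assert np.allclose(one_hot @ E, E[token_id])

# "Fine-tuning": a gradient arriving from the rest of the network
# updates only the looked-up row; all other rows stay untouched.
grad, lr = np.array([0.1, -0.2, 0.3]), 1.0
E[token_id] -= lr * grad

changed = [i for i in range(5) if not np.allclose(E[i], E_before[i])]
print(changed)  # [2] -- only the trained word's vector moved
```

This is exactly why words absent from the fine-tuning corpus keep their original vectors: their rows are never looked up, so they never receive a gradient.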
| 1,120
|
fine-tuning
|
Fine tuning resnet18 for cifar10
|
https://stackoverflow.com/questions/73799136/fine-tuning-resnet18-for-cifar10
|
<p>I just want to fine-tune ResNet18 on the CIFAR-10 dataset, so I just want to change the last linear layer from 1000 outputs to 10.
I tried to use the <code>children</code> function to get the previous layers:</p>
<pre class="lang-py prettyprint-override"><code>ResModel = resnet18(weights=ResNet18_Weights)
model = nn.Sequential(
    *list(ResModel.children())[:-1],
    nn.Linear(512, 10)
)
</code></pre>
<p>So it raised the error
<code>RuntimeError: mat1 and mat2 shapes cannot be multiplied (32768x1 and 512x10)</code>.
I then tried <code>ResModel.fc = nn.Linear(512, 10)</code> instead, and it works fine.
So why?</p>
|
<p>The difference between stacking all layers into a single <code>nn.Sequential</code> and overriding only the last layer is the <code>forward</code> function:<br />
Your <code>ResModel</code> is of type <code>torchvision.models.ResNet</code>, while your <code>model</code> is a simple <code>nn.Sequential</code>. The <code>forward</code> pass of <code>ResNet</code> has an additional <code>flatten</code> operation before the last linear layer -- you do not have this operation in your <code>nn.Sequential</code> <code>model</code>.</p>
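So the fix for the <code>nn.Sequential</code> approach would be to insert an explicit <code>nn.Flatten()</code> before the new head, e.g. <code>nn.Sequential(*list(ResModel.children())[:-1], nn.Flatten(), nn.Linear(512, 10))</code>. A minimal shape-only sketch (random tensor, no pretrained weights) of why this resolves the error:

```python
import torch
import torch.nn as nn

# After ResNet18's adaptive average pool the tensor has shape
# (N, 512, 1, 1); nn.Linear expects (N, 512) -- hence the mismatch.
x = torch.randn(4, 512, 1, 1)

tail = nn.Sequential(
    nn.Flatten(),        # replicates the flatten done in ResNet.forward
    nn.Linear(512, 10),  # new 10-class head for CIFAR-10
)
out = tail(x)
print(out.shape)  # torch.Size([4, 10])
```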
| 1,121
|
fine-tuning
|
Wor2vec fine-tuning
|
https://stackoverflow.com/questions/56166089/wor2vec-fine-tuning
|
<p>I need to fine-tune my word2vec model. I have two datasets, <code>data1</code> and <code>data2</code>.</p>
<p>What I did so far is:</p>
<pre><code>model = gensim.models.Word2Vec(
    data1,
    size=size_v,
    window=size_w,
    min_count=min_c,
    workers=work)
model.train(data1, total_examples=len(data1), epochs=epochs)
model.train(data2, total_examples=len(data2), epochs=epochs)
</code></pre>
<p>Is this correct? Do I need to store learned weights somewhere?</p>
<p>I checked <a href="https://stackoverflow.com/questions/46244286/fine-tuning-pre-trained-word2vec-google-news/55751018#55751018">this answer</a> and <a href="https://www.kaggle.com/kfujikawa/word2vec-fine-tuning" rel="nofollow noreferrer">this one</a> but I couldn’t understand how it’s done.</p>
<p>Can someone explain to me the steps to follow?</p>
|
<p>Note you <strong>don't</strong> need to call <code>train()</code> with <code>data1</code> if you already provided <code>data1</code> at the time of model instantiation. The model will have already done its own internal <code>build_vocab()</code> and <code>train()</code> on the supplied corpus, using the default number of <code>epochs</code> (5) if you haven't specified one in the instantiation. </p>
<p>"Fine-tuning" is not a simple process with reliable steps assured to improve the model. It's very error-prone. </p>
<p>In particular, if words in <code>data2</code> aren't already known to the model, they'll be ignored. (There's an option to call <code>build_vocab()</code> with the parameter <code>update=True</code> to expand the known vocabulary, but such words aren't really on full equal footing with earlier words.)</p>
<p>If <code>data2</code> includes some words, but not others, only those in <code>data2</code> get updated via the additional training – which may essentially pull those words <em>out</em> of comparable alignment from other words that only appeared in <code>data1</code>. (Only the words trained together, in an interleaved shared training session, will go through the "push-pull" that in the end leaves them in useful arrangements.)</p>
<p>The safest course for incremental training would be to shuffle <code>data1</code> and <code>data2</code> together, and do the continued training on all the data: so that all words get new interleaved training together.</p>
| 1,122
|
fine-tuning
|
Fine tuning BART to generate Summary
|
https://stackoverflow.com/questions/61863504/fine-tuning-bart-to-generate-summary
|
<p>I am trying to fine tune the <a href="https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration" rel="nofollow noreferrer">BART model</a> to generate news headlines. </p>
<p>I am taking the dataset from <a href="https://www.kaggle.com/sunnysai12345/news-summary" rel="nofollow noreferrer">Kaggle News Summary</a></p>
<p>Fine tuning <a href="https://colab.research.google.com/drive/1H2u5lUIr4HIiZu5pPKfxgTxhb1zMhF8C?usp=sharing" rel="nofollow noreferrer">Colab notebook</a></p>
<p>However, in the validation section when trying to generate the text from encoded tokens, i am running into error</p>
<pre><code>---------------------------------------------------------------------------
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-22-059727516ce4> in <module>()
1 for epoch in range(1):
----> 2 predictions, actuals = validate(epoch)
3 writer(predictions, actuals)
4 print('Output Files generated for review')
4 frames
<ipython-input-21-6738e7a3f1f9> in validate(epoch)
10 mask = data['source_mask'].to(device, dtype = torch.long)
11
---> 12 generate_ids = model.generate(input_ids = ids,attention_mask = mask, num_beams=4,repetition_penalty=2.5,length_penalty=2.0,early_stopping=True)
13 preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generate_ids]
14 target = [tokenizer.decode(t, skip_special_tokens=True, clean_up_tokenization_spaces=True)for t in y]
/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
13 def decorate_context(*args, **kwargs):
14 with self:
---> 15 return func(*args, **kwargs)
16 return decorate_context
17
/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, **model_specific_kwargs)
914
915 # We cannot generate if the model does not have a LM head
--> 916 if self.get_output_embeddings() is None:
917 raise AttributeError(
918 "You tried to generate sequences with a model that does not have a LM Head."
/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py in get_output_embeddings(self)
1021
1022 def get_output_embeddings(self):
-> 1023 return _make_linear_from_emb(self.model.shared) # make it on the fly
1024
1025
/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py in _make_linear_from_emb(emb)
148 vocab_size, emb_size = emb.weight.shape
149 lin_layer = nn.Linear(vocab_size, emb_size, bias=False)
--> 150 lin_layer.weight.data = emb.weight.data
151 return lin_layer
152
RuntimeError: Attempted to call `variable.set_data(tensor)`, but `variable` and `tensor` have incompatible tensor type.
</code></pre>
<p>It will be great if i could get some guidance from the group. Thanks! </p>
| 1,123
|
|
fine-tuning
|
how to do fine-tuning with resnet50 model?
|
https://stackoverflow.com/questions/46693776/how-to-do-fine-tuning-with-resnet50-model
|
<p>I have seen many examples on the Internet about how to fine-tune VGG16 and InceptionV3. For example, some people will set the first 25 layers to be frozen when fine-tuning VGG16. For InceptionV3, the first 172 layers will be frozen. But how about ResNet? When we do fine-tuning, we will freeze some layers of the base model, as follows:</p>
<pre><code>from keras.applications.resnet50 import ResNet50
base_model = ResNet50(include_top=False, weights="imagenet", input_shape=(input_dim, input_dim, channels))
..............
for layer in base_model.layers[:frozen_layers]:
layer.trainable = False
</code></pre>
<p>So how should I set the frozen_layers? Actually I do not know how many layers I should set to be frozen when I do fine-tuning with VGG16, VGG19, ResNet50, InceptionV3, etc. Can anyone give me suggestions on how to fine-tune these models? Especially, how many layers do people freeze when they fine-tune these models?</p>
|
<p>That's curious.... the VGG16 model has a total of 23 layers... (<a href="https://github.com/fchollet/keras/blob/master/keras/applications/vgg16.py" rel="nofollow noreferrer">https://github.com/fchollet/keras/blob/master/keras/applications/vgg16.py</a>)</p>
<hr>
<p>All these models have a similar structure:</p>
<ul>
<li>A series of convolutional layers </li>
<li>Followed by a few dense layers</li>
</ul>
<p>These few dense layers are what keras calls <code>top</code>. (As in the <code>include_top</code> parameter). </p>
<p>Usually, this fine-tuning happens only in the last dense layers. You let the convolutional layers (which understand images and locate features) do their job unchanged, and create your own top part adapted to your own classes. </p>
<p>People often create their own top part because they don't have exactly the same classes the original model was trained on. So they adapt the final part, and train only the final part. </p>
<p>So, you create a model with <code>include_top=False</code>, then you freeze it entirely.<br>
Now you add your own dense layers and leave these trainable. </p>
<p>This is the most usual adaptation of these models. </p>
<p>For other kinds of fine tuning, there probably aren't clear rules. </p>
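A small sketch of this usual adaptation; the convolutional base here is a tiny stand-in so nothing has to be downloaded, but with a real model you would use e.g. <code>base_model = ResNet50(include_top=False, weights='imagenet')</code>:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy stand-in for a pretrained convolutional base (include_top=False)
base_model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])
base_model.trainable = False  # freeze the whole base

# New "top": dense layers adapted to your own classes, left trainable
model = keras.Sequential([
    base_model,
    layers.Dense(16, activation="relu"),
    layers.Dense(5, activation="softmax"),
])

# One forward pass to build the new layers, then inspect what trains
out = model(np.zeros((1, 32, 32, 3), dtype="float32"))
print(len(model.trainable_weights))  # only the two Dense layers' kernels/biases
```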
| 1,124
|
fine-tuning
|
Fine tuning weights in DBN
|
https://stackoverflow.com/questions/38182578/fine-tuning-weights-in-dbn
|
<p>In a Deep Belief Network, I have pretrained the net using CD-1. I have the weights and biases stored. Now can I run a supervised mlp code with dropout and initialise the weights as those obtained from pre training. Will it be equivalent to a DBN implemented with dropout fine tuning?</p>
|
<blockquote>
<p>dropout fine tuning on DBN</p>
</blockquote>
<p>means </p>
<blockquote>
<p>run a supervised mlp code with dropout and initialise the weights as those obtained from pre training</p>
</blockquote>
<p>So yes, they are equivalent.</p>
| 1,125
|
fine-tuning
|
Tensorflow: Fine-Tuning the VGG19 Model [Python]
|
https://stackoverflow.com/questions/74175610/tensorflow-fine-tuning-the-vgg19-model-python
|
<p>I have the VGG19 model.</p>
<p>Can <code>fine_tune_at = 100</code> be used?</p>
<p>Is it correct?</p>
<p>I don't understand the fine-tuning method.</p>
<p>I'm a newbie in both deep learning and TensorFlow.</p>
<p>Can someone please explain it to me?</p>
<p>Thank you.</p>
<pre><code>vgg_model.trainable = True
</code></pre>
<p>fine tuning the vgg19 model unfreeze top layers/freeze</p>
<pre><code>print("Number of layers in the base model: ", len(vgg_model.layers))
fine_tune_at = 100
for layer in vgg_model.layers[:fine_tune_at]:
    layer.trainable = False
Number of layers in the base model: 22
</code></pre>
<p>compiling the model</p>
<pre><code>model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
</code></pre>
<p>summary detail model</p>
<pre><code>model.summary()
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
block3_conv4 (Conv2D) (None, 56, 56, 256) 590080
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
block4_conv4 (Conv2D) (None, 28, 28, 512) 2359808
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv4 (Conv2D) (None, 14, 14, 512) 2359808
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
global_average_pooling2d (G (None, 512) 0
lobalAveragePooling2D)
dense (Dense) (None, 128) 65664
dense_1 (Dense) (None, 1) 129
=================================================================
Total params: 20,090,177
Trainable params: 65,793
Non-trainable params: 20,024,384
</code></pre>
<p>num trainable_variables</p>
<pre><code>len(model.trainable_variables)
4
</code></pre>
| 1,126
|
|
fine-tuning
|
PHP REGEX fine-tuning capturing
|
https://stackoverflow.com/questions/49572379/php-regex-fine-tuning-capturing
|
<p>I have to process text that comes from student essays (texts can be VERY large).</p>
<p>I need a PHP preg_match for dates inside those strings, which may come in this form:</p>
<pre><code>...blah blah blah (1994) blah blah blah ...
...blah blah blah (nov-1994) blah blah blah ...
...blah blah blah (november-1994) blah blah blah ...
...blah blah blah (1994-nov) blah blah blah ...
...blah blah blah (1994-november) blah blah blah ...
</code></pre>
<p>The dates in the strings may come with '( )' or with '[ ]'</p>
<p>I have done it this way:</p>
<pre><code>if (preg_match('/\w{0,8}-?(19|20)\d{2}-?\w{0,8}/', $string, $s)) {
# code
}
</code></pre>
<p><strong>which works and does its job, but it's capturing some unrelated strings like</strong></p>
<pre><code>... blah blah blah (SKU_1956) blah blah blah ...
... blah blah blah [INFERNO2000] blah blah blah ...
... blah blah blah [like-2000-me] blah blah blah ...
</code></pre>
<p>I don't seem to be able to do it, so I need help fine-tuning this regexp to only capture if:</p>
<ul>
<li>start with either ( [</li>
<li><em>may</em> be a single word and if it exists, MUST end in -</li>
<li>MUST BE a year in the lap 19xx-20xx</li>
<li><em>may</em> be a single word and if it exists, MUST start with -</li>
<li>end with either ) ]</li>
</ul>
<p>The word is limited to 8 chars because of the longest month (like december)</p>
<p>There is a huge amount of non-related strings captured; that's why I want to fine-tune it.</p>
|
<p>You can use the RegEx <a href="https://regex101.com/r/qUwg5Z/3/" rel="nofollow noreferrer"><code>[(\[](([a-zA-Z]{1,8}-)?(19|20)\d{2}|(19|20)\d{2}-[a-zA-Z]{1,8})[)\]]</code></a></p>
<ul>
<li><p><code>[(\[] ... [)\]]</code> matches anything inside <code>()</code> or <code>[]</code></p></li>
<li><p><code>([a-zA-Z]{1,8}-)?(19|20)\d{2}</code> matches <code>month-YEAR</code> with the month being optional</p>
<ul>
<li><p><code>([a-zA-Z]{1,8}-)?</code> matches an alphabetical char between <code>1</code> and <code>8</code> times, and a <code>-</code></p></li>
<li><p><code>(19|20)\d{2}</code> matches <code>19..</code> or <code>20..</code></p></li>
</ul></li>
<li><p><code>(19|20)\d{2}-[a-zA-Z]{1,8})</code> matches <code>YEAR-month</code></p></li>
</ul>
<p><a href="https://regex101.com/r/qUwg5Z/3/" rel="nofollow noreferrer"><strong>Demo.</strong></a></p>
| 1,127
|
fine-tuning
|
Fine tuning of Bert word embeddings
|
https://stackoverflow.com/questions/64145666/fine-tuning-of-bert-word-embeddings
|
<p>I would like to load a pre-trained BERT model and fine-tune it, particularly the word embeddings of the model, using a custom dataset.
The task is to use the word embeddings of chosen words for further analysis.
It is important to mention that the dataset consists of tweets and there are no labels.
Therefore, I used the BertForMaskedLM model.</p>
<p>Is it OK for this task to use the input ids (the tokenized tweets) as the labels?
I have no labels. There are just tweets in randomized order.</p>
<p>From this point, I present the code I wrote:</p>
<p>First, I cleaned the dataset from emojis, non-ASCII characters, etc as described in the following link (2.3 Section):
<a href="https://www.kaggle.com/jaskaransingh/bert-fine-tuning-with-pytorch" rel="nofollow noreferrer">https://www.kaggle.com/jaskaransingh/bert-fine-tuning-with-pytorch</a></p>
<p>Second, the code of the fine tuning process:</p>
<pre><code>import torch
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.to(device)
model.train()
lr = 1e-2
optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False)
max_len = 82
chunk_size = 20
epochs = 20
for epoch in range(epochs):
    epoch_losses = []
    for j, batch in enumerate(pd.read_csv(path + file_name, chunksize=chunk_size)):
        tweets = batch['content_cleaned'].tolist()
        encoded_dict = tokenizer.batch_encode_plus(
            tweets,                        # Sentence to encode.
            add_special_tokens = True,     # Add '[CLS]' and '[SEP]'
            max_length = max_len,          # Pad & truncate all sentences.
            pad_to_max_length = True,
            truncation=True,
            return_attention_mask = True,  # Construct attn. masks.
            return_tensors = 'pt',         # Return pytorch tensors.
        )
        input_ids = encoded_dict['input_ids'].to(device)

        # Is it correct? or should I train it in another way?
        loss, _ = model(input_ids, labels=input_ids)
        loss_score = loss.item()

        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained(path + "Fine_Tuned_BertForMaskedLM")
</code></pre>
<p>The loss starts at 50 and decreases to 2.3.</p>
|
<p>Since the objective of the masked language model is to predict the masked token, the label and the inputs are the same. So, whatever you have written is correct.</p>
<p>However, I would like to add to the concept of comparing word embeddings. BERT is not a word embeddings model; it is contextual, in the sense that the same word can have different embeddings in different contexts. Example: the word 'talk' will have different embeddings in the sentences "I want to talk" and "I will attend a talk". So there is no single embedding vector for each word (which makes BERT different from word2vec or fastText). Masked language modeling (MLM) on a pre-trained BERT is usually performed when you have a small new corpus and want your BERT model to adapt to it. However, I am not sure about the performance gain that you would get by using MLM and then fine-tuning on a specific task, compared to directly fine-tuning the pre-trained model with a task-specific corpus on a downstream task.</p>
| 1,128
|
fine-tuning
|
gpt3 fine tuning with openai not learning
|
https://stackoverflow.com/questions/73467393/gpt3-fine-tuning-with-openai-not-learning
|
<p>For my fine tuning jsonl files, I wanted a model that could predict the gender of the speaker given a statement. For instance, the prompt: "i went to buy a skirt today" has completion as "female".</p>
<p>I created several examples and gave them to GPT-3 to fine-tune. I then fed the sentence "i went to pick my wife up from the shops" to the resulting model. I expected to get a gender as a response, but I got a whole story about picking up my wife from the shops.</p>
<p>It's as if gpt-3 didn't learn anything from my fine tuning at all.</p>
<p>I have a few questions:</p>
<ol>
<li><p>Is fine tuning equivalent to writing a few examples in openai playground and getting gpt-3 to guess what comes next?</p>
</li>
<li><p>After fine tuning, do you only pay for the tokens in the prompt/completion of subsequent runs? So If I spend $100 training a model on a million examples, I will then only have to pay for the individual prompt/completion of subsequent calls?</p>
</li>
<li><p>The chat bot for instance, come with a context sentence before the back and forth exchange of 2 chat participants. Something like "this is a conversation between a rude man named John and a young girl named Sarah". How can i incorporate such context into fine tuning structure of {"prompt":"...","completion":..."}?</p>
</li>
</ol>
|
<ol>
<li><p>Open AI Fine Tuning is a process of using a pre-trained model on a new dataset to improve the performance of the model on the new dataset. It is really important to have a specific prompt you are working with so the fine-tuning model knows exactly what you are training for.</p>
</li>
<li><p>Exactly, the benefit of fine tuning is that you won't have to pay for prompting the standard model with information each time. And, it improved performance for your specific usecase of course.</p>
</li>
<li><p>For that example, you would train it in an entire long conversation between rude john and a young girl named sarah. But, keep in mind which 'person' you are wanting the ai to be.</p>
</li>
</ol>
<p>Rude John 'AI': Lorem Ipsum
Sarah: Lorem Ipsum
<em>Train long conversation</em></p>
<p>But you would also want to train Rude John 'AI' to talk to others as well.</p>
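For the third point, one common approach is simply to bake the scene-setting context into every prompt of the <code>{"prompt": ..., "completion": ...}</code> pairs. A hypothetical record (names and wording are illustrative, not from the question's data):

```python
import json

# Hypothetical training example: the context sentence is prepended to
# the prompt, and the completion is the line the AI should produce.
record = {
    "prompt": (
        "This is a conversation between a rude man named John "
        "and a young girl named Sarah.\n"
        "Sarah: Hi John, how are you today?\n"
        "John:"
    ),
    "completion": " What do you want? I'm busy.\n",
}

# One JSON object per line is the JSONL format the fine-tuning API expects
line = json.dumps(record)
print(line[:40])
```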
| 1,129
|
fine-tuning
|
Organize data for transformer fine-tuning
|
https://stackoverflow.com/questions/70957390/organize-data-for-transformer-fine-tuning
|
<p>I have a corpus of synonyms and non-synonyms. These are stored in a list of python dictionaries like <code>{"sentence1": <string>, "sentence2": <string>, "label": <1.0 or 0.0> }</code>. Note that these words (or sentences) do not have to be single tokens in the tokenizer.</p>
<p>I want to fine-tune a BERT-based model to take both sentences like: <code>[[CLS], <sentence1_token1>, ..., <sentence1_tokenN>, [SEP], <sentence2_token1>, ..., <sentence2_tokenM>, [SEP]]</code> and predict the "label" (a measurement between 0.0 and 1.0).</p>
<p><strong>What is the best approach to organized this data to facilitate the fine-tuning of the huggingface transformer?</strong></p>
|
<p>You can use the Tokenizer <code>__call__</code> method to join both sentences when encoding them.</p>
<p>In case you're using the PyTorch implementation, here is an example:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from transformers import AutoTokenizer
sentences1 = ... # List containing all sentences 1
sentences2 = ... # List containing all sentences 2
labels = ... # List containing all labels (0 or 1)
TOKENIZER_NAME = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME)
encodings = tokenizer(
    sentences1,
    sentences2,
    return_tensors="pt"
)
labels = torch.tensor(labels)
</code></pre>
<p>Then you can create your custom Dataset to use it on training:</p>
<pre class="lang-py prettyprint-override"><code>class CustomRealDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: value[idx] for key, value in self.encodings.items()}
        item["labels"] = self.labels[idx]
        return item

    def __len__(self):
        return len(self.labels)
</code></pre>
| 1,130
|
fine-tuning
|
How to Fine-tuning a Pretrained Network in Tensorflow?
|
https://stackoverflow.com/questions/34984065/how-to-fine-tuning-a-pretrained-network-in-tensorflow
|
<p>Can anyone give an example of how to fine tune a pretrained imagenet network with new data and different classes similar to this:</p>
<p><a href="https://github.com/BVLC/caffe/blob/master/examples/03-fine-tuning.ipynb" rel="noreferrer">Fine-tuning a Pretrained Network for Style Recognition</a></p>
|
<p>This <a href="https://www.tensorflow.org/tutorials/image_retraining" rel="nofollow noreferrer">TensorFlow tutorial</a> describes how to retrain a image classifier for new data and new classes.</p>
| 1,131
|
fine-tuning
|
Fine-tuning distilbert takes hours
|
https://stackoverflow.com/questions/74856703/fine-tuning-distilbert-takes-hours
|
<p>I am fine-tuning the DistilBERT pretrained model for sentiment analysis (multilabel with 6 labels) using the Hugging Face emotion dataset. I am new to this, but 1 epoch (250 steps) takes around 2 hours to train in a Google Colab notebook; is this normal? The training dataset has 16,000 tweets, which of course affects the performance, but isn't this too long? What is the reason behind this?</p>
<p>Also after 3 epochs, the accuracy started to drop. What could be the reason for this?</p>
|
<p>Are you using a GPU? If not, it's normal that it would take this much time. Also, I wouldn't be bothered by the accuracy if you're not able to run it for more epochs.</p>
| 1,132
|
fine-tuning
|
Random Forest Fine-tuning stuck in running
|
https://stackoverflow.com/questions/66945540/random-forest-fine-tuning-stuck-in-running
|
<pre><code>rf_classifier = RandomForestClassifier(class_weight = "balanced",
                                       random_state=7)

param_grid = {'n_estimators': [50, 75, 100, 125, 150, 175],
              'min_samples_split': [2, 4, 6, 8, 10],
              'min_samples_leaf': [1, 2, 3, 4],
              'max_depth': [5, 10, 15, 20, 25]}

grid_obj = GridSearchCV(rf_classifier,
                        iid=True,
                        return_train_score=True,
                        param_grid=param_grid,
                        scoring='roc_auc',
                        cv=10)
grid_fit = grid_obj.fit(X_train, y_train)
rf_opt = grid_fit.best_estimator_
print('='*20)
print("best params: " + str(grid_obj.best_estimator_))
print("best params: " + str(grid_obj.best_params_))
print('best score:', grid_obj.best_score_)
print('='*20)
</code></pre>
<p>Fine-tuning the random forest algorithm's hyper-parameters by cross-validation against the AUC score. The above code is just stuck loading in my jupyter notebook. Any ideas why?</p>
| 1,133
|