QuestionId          int64          74.8M .. 79.8M
UserId              int64          56 .. 29.4M
QuestionTitle       stringlengths  15 .. 150
QuestionBody        stringlengths  40 .. 40.3k
Tags                stringlengths  8 .. 101
CreationDate        stringdate     2022-12-10 09:42:47 .. 2025-11-01 19:08:18
AnswerCount         int64          0 .. 44
UserExpertiseLevel  int64          301 .. 888k
UserDisplayName     stringlengths  3 .. 30
76,285,683
21,107,707
Rotating cube through constant degree intervals is speeding up
<p>I am coding a rotating cube using rotation matrices and generating plots of the cube at equally spaced intervals along its rotation. However, I notice that the cube appears to speed up at times during the gif. Here is the code that generates the cube:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from math import sin, cos, radians as rad
import imageio
import os


class Point:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

    def rotated(self, yaw, pitch, roll):
        x, y, z = self.x, self.y, self.z
        a, b, c = yaw, pitch, roll
        sa, sb, sc = sin(a), sin(b), sin(c)
        ca, cb, cc = cos(a), cos(b), cos(c)
        newx = (x*ca*cb + y*(ca*sb*sc-sa*cc) + z*(ca*sb*cc+sa*sc))
        newy = (x*sa*cb + y*(sa*sb*sc+ca*cc) + z*(sa*sb*cc-ca*sc))
        newz = (x*-sb + y*cb*sc + z*cb*cc)
        return Point(newx, newy, newz)


def make_gif(fps):
    # creates images array and depth counter
    images, i = [], 1
    # adds images to the images array
    while True:
        try:
            images.append(imageio.imread(f&quot;imgs/{i}.png&quot;))
            i += 1
        except:
            break
    # saves the gif
    imageio.mimsave(&quot;plot.gif&quot;, images, fps=fps)
    # deletes all the images
    for i in range(1, len(images)+1):
        os.remove(f&quot;imgs/{i}.png&quot;)


def rotating_gif(frames, fps, pitch, roll, yaw):
    ax = plt.axes(projection=&quot;3d&quot;)
    points = [Point(x, y, z)
              for x in range(-5, 6)
              for y in range(-5, 6)
              for z in range(-5, 6)]
    i = 1
    for p, r, y in zip(np.linspace(0, pitch, frames),
                       np.linspace(0, roll, frames),
                       np.linspace(0, yaw, frames)):
        rotated = [point.rotated(p, r, y) for point in points]
        x_vals = [point.x for point in rotated]
        y_vals = [point.y for point in rotated]
        z_vals = [point.z for point in rotated]
        ax.scatter3D(x_vals, y_vals, z_vals)
        ax.set_xlim(-7, 7)
        ax.set_ylim(-7, 7)
        ax.set_zlim(-7, 7)
        plt.grid(False)
        plt.axis('off')
        plt.savefig(f&quot;imgs/{i}.png&quot;)
        plt.cla()
        i += 1
    make_gif(fps)


def main():
    rotating_gif(240, 60, rad(-720), rad(720), rad(720))


if __name__ == &quot;__main__&quot;:
    main()
</code></pre> <p>For anyone who wants to reproduce it, here are my packages and versions:</p> <pre><code>matplotlib==3.7.1
numpy==1.24.3
imageio==2.9.0
</code></pre> <p>Before running, you need to make a directory called <code>imgs</code>.</p>
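One likely explanation for the apparent speed-up (stated here as a hypothesis, not a confirmed diagnosis of the question's gif): linearly interpolating three Euler angles at once does not produce a constant angular velocity, because the combined rotation's per-frame step depends on the current angles. A standalone sketch, reusing the same yaw-pitch-roll matrix the question builds element-wise, measures the actual rotation angle between consecutive frames:

```python
import numpy as np

def rot(yaw, pitch, roll):
    # Same yaw-pitch-roll rotation matrix the question computes per point.
    a, b, c = yaw, pitch, roll
    sa, sb, sc = np.sin(a), np.sin(b), np.sin(c)
    ca, cb, cc = np.cos(a), np.cos(b), np.cos(c)
    return np.array([
        [ca*cb, ca*sb*sc - sa*cc, ca*sb*cc + sa*sc],
        [sa*cb, sa*sb*sc + ca*cc, sa*sb*cc - ca*sc],
        [-sb,   cb*sc,            cb*cc],
    ])

# Linearly interpolated Euler angles, as in rotating_gif(240, ...).
frames = 240
angles = np.linspace(0, [np.deg2rad(-720), np.deg2rad(720), np.deg2rad(720)],
                     frames)

# Rotation angle actually traversed between consecutive frames,
# recovered from the relative rotation via the trace identity.
steps = []
for (a0, b0, c0), (a1, b1, c1) in zip(angles, angles[1:]):
    rel = rot(a1, b1, c1) @ rot(a0, b0, c0).T
    steps.append(np.arccos(np.clip((np.trace(rel) - 1) / 2, -1, 1)))

print(min(steps), max(steps))  # the per-frame step angle is not constant
```

If the per-frame step should be constant, one alternative is to rotate about a single fixed axis by a fixed angle each frame instead of interpolating three Euler angles simultaneously.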
<python><python-3.x>
2023-05-19 02:42:48
0
801
vs07
76,285,582
5,336,013
Pandas: Drop rows at beginning of groups if a column equals certain string value ('Sell' or 'Buy')
<p>To clarify, the 'group' in the title is not a result of pd.groupby. Instead, I mean rows that share the same values in certain columns: in my case, <strong>account</strong> and <strong>symbol</strong>.</p> <p>I am trying to calculate profits &amp; losses by account and position from trade data on a first-in, first-out (FIFO) basis. As a result, when the cumulative share quantity drops below zero (that is, when the most recent sell quantity is larger than all the prior buy quantities combined), I need to reset it to 0. The same applies when a group's trade data begins with sell records.</p> <p>I am trying to design a cumulative sum which resets to 0 to help with the process. What I have is:</p> <pre><code>def cumsum_with_reset(group):
    cumulative_sum = 0
    group['reset_cumsum'] = 0
    for index, row in group.iterrows():
        cumulative_sum += row['Modified_Quantity']
        if cumulative_sum &lt; 0:
            cumulative_sum = 0
        group.loc[index, 'reset_cumsum'] = cumulative_sum
    return group
</code></pre> <p>This function can return 0 if a group, namely the rows with the same account and symbol, begins with sell records. However, the problem is that <code>iterrows</code> is so inefficient that it takes forever for large amounts of data, so I want to create a new function. But I am stuck at the very first step: how do I remove the <strong>sell</strong> rows in each group that come before the <strong>buy</strong> rows?</p> <p>Using some sample data:</p> <pre><code>pd.DataFrame(data = [['2022-01-01', 'foo', 'AMZN', 'buy', 10, 22],
                     ['2022-01-02', 'foo', 'AMZN', 'sell', 15, 24],
                     ['2022-01-03', 'cat', 'FB', 'sell', 5, 12],
                     ['2022-01-04', 'cat', 'FB', 'buy', 17, 15],
                     ['2022-01-05', 'cat', 'FB', 'sell', 15, 13],
                     ['2022-01-06', 'bar', 'AAPL', 'buy', 10, 10],
                     ['2022-01-07', 'bar', 'AAPL', 'buy', 5, 12],
                     ['2022-01-08', 'bar', 'AAPL', 'sell', 8, 12],
                     ['2022-01-09', 'bar', 'AAPL', 'sell', 12, 14],
                     ['2022-01-10', 'dog', 'GOOG', 'sell', 20, 13],
                     ['2022-01-11', 'dog', 'GOOG', 'buy', 15, 13],
                     ['2022-01-12', 'dog', 'GOOG', 'buy', 5, 13],
                     ['2022-01-13', 'dog', 'GOOG', 'sell', 7, 14]],
             columns = ['Date', 'account', 'symbol', 'Action', 'Quantity', 'Price'])
</code></pre> <p>which looks like this:</p> <p><a href="https://i.sstatic.net/OrCKh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OrCKh.png" alt="sample" /></a></p> <p>There are 4 groups in this dataset:</p> <p><a href="https://i.sstatic.net/XhN3z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XhN3z.png" alt="grouped" /></a></p> <p>The 2nd and 4th groups start with sell records (rows 2 and 9). How can I use Pandas to get rid of such records until each group starts with a buy record?</p>
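One vectorized way to drop the leading sell rows (a minimal sketch on a subset of the sample data, not the question's full P&L pipeline): within each (account, symbol) group, `(Action == 'buy').cummax()` is False until the first buy row and True from that row onward, so it can be used directly as a boolean mask.

```python
import pandas as pd

df = pd.DataFrame(
    [['2022-01-03', 'cat', 'FB',   'sell',  5, 12],
     ['2022-01-04', 'cat', 'FB',   'buy',  17, 15],
     ['2022-01-05', 'cat', 'FB',   'sell', 15, 13],
     ['2022-01-10', 'dog', 'GOOG', 'sell', 20, 13],
     ['2022-01-11', 'dog', 'GOOG', 'buy',  15, 13]],
    columns=['Date', 'account', 'symbol', 'Action', 'Quantity', 'Price'])

# Per group, a running maximum of the "is this a buy?" flag: it stays 0
# until the first buy row, then becomes (and stays) 1.
keep = (df['Action'].eq('buy').astype(int)
          .groupby([df['account'], df['symbol']])
          .cummax()
          .astype(bool))
trimmed = df[keep]
```

Only the sell rows before each group's first buy are removed; sells after a buy are kept.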
<python><pandas><finance><fifo>
2023-05-19 02:11:26
1
1,127
Bowen Liu
76,285,542
11,162,983
How to calculate sum and average bars for every histogram bin
<p>I would like to plot stacked bar charts. The first 3 bars are red, black and blue, as shown below.</p> <p>I want to add a fourth bar that is the 'sum' of the values of the red, black and blue bars.</p> <p>(In another case I want to add a fourth bar that is the 'average' of the red, black and blue bars.)</p> <p>For example, this is my code:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

np.random.seed(19680801)

n_bins = 20
x = np.random.randn(1000, 3)

fig, ax0 = plt.subplots(nrows=1, ncols=1)

colors = ['red', 'black', 'blue']
ax0.hist(x, n_bins, density=True, histtype='bar', color=colors, label=colors)
ax0.legend(prop={'size': 10})
ax0.set_title('bars with legend')

fig.tight_layout()
plt.show()
</code></pre> <p>This figure is without the 4th bar. <a href="https://i.sstatic.net/cZMxW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cZMxW.png" alt="enter image description here" /></a></p> <p>Any suggestions for how I can show the 4th bar in this stacked bar chart?</p>
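One possible approach (a sketch, not the only way): compute the per-bin counts yourself with `np.histogram` on a shared set of bin edges, derive the sum (or mean) series, and draw all four series with `ax.bar`, offsetting each series within its bin.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt

np.random.seed(19680801)
n_bins = 20
x = np.random.randn(1000, 3)

# One shared set of bin edges for all three series.
edges = np.histogram_bin_edges(x, bins=n_bins)
counts = np.array([np.histogram(x[:, i], bins=edges, density=True)[0]
                   for i in range(3)])
total = counts.sum(axis=0)        # the 'sum' bar
# average = counts.mean(axis=0)   # or the 'average' bar

fig, ax = plt.subplots()
width = np.diff(edges) / 5        # 4 bars plus a small gap per bin
colors = ['red', 'black', 'blue', 'green']
labels = ['red', 'black', 'blue', 'sum']
for k, series in enumerate([*counts, total]):
    ax.bar(edges[:-1] + k * width, series, width=width,
           align='edge', color=colors[k], label=labels[k])
ax.legend(prop={'size': 10})
```

Note that summing `density=True` histograms gives three times a density; use raw counts (`density=False`) if the fourth bar should itself be a density.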
<python><numpy><matplotlib><histogram><grouped-bar-chart>
2023-05-19 02:00:55
1
987
Redhwan
76,285,436
16,589,029
How to filter out each entry in a column, if each entry is a list, PANDAS
<p>I have a column called &quot;content&quot;; every entry in this column is a list. I want to filter and modify every list in the column, so that every element of a list is dropped if it does not contain the word <code>car</code>. How can I do that with pandas?</p> <p>I tried the following; however, it does not actually filter anything when I save the data to the file. I am new to pandas, so sometimes I get confused about how it works.</p> <pre class="lang-py prettyprint-override"><code>output['content'] = output['content'].apply(lambda x: 'car' in x)
</code></pre> <p>Each entry is a list; I am aiming to filter that list so that it only includes the sentences/strings that contain the word <code>car</code>.</p>
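The attempt above maps each list to a single boolean (`'car' in x` tests list membership); to filter *inside* each list, the lambda needs to return a filtered list. A minimal sketch with made-up data:

```python
import pandas as pd

output = pd.DataFrame({'content': [
    ['red car', 'blue sky', 'car park'],
    ['no match here'],
]})

# Keep only the strings inside each list that contain 'car'.
output['content'] = output['content'].apply(
    lambda lst: [s for s in lst if 'car' in s])
```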
<python><pandas><dataframe>
2023-05-19 01:21:48
2
766
Ghazi
76,285,293
9,352,077
scatterplot skips major ticks with log scale even with manual spacing
<p>I'm plotting points on a log-log scatterplot with <code>matplotlib</code>. The <code>x</code> coordinate of these points rises twice as fast (exponentially speaking) as the <code>y</code> coordinate, which means that the <code>x</code> axis is twice as densely packed when plotted on a square plot.</p> <p>Here's the code and the graph it produces:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import matplotlib.ticker as tkr

fig, ax = plt.subplots(figsize=(10,8))
ax: plt.Axes

# Log scale
ax.set_xscale(&quot;log&quot;)  # Needed for a log scatterplot. https://stackoverflow.com/a/52573929/9352077
ax.set_yscale(&quot;log&quot;)
# ax.xaxis.set_major_locator(tkr.LogLocator(base=10))
# ax.xaxis.set_major_formatter(tkr.LogFormatterSciNotation())
# ax.yaxis.set_major_locator(tkr.LogLocator(base=10))
# ax.yaxis.set_major_formatter(tkr.LogFormatterSciNotation())

# Plot points
t = range(1, 10_000_000, 1000)
x = [e**2 for e in t]
y = t
ax.scatter(x, y, marker=&quot;.&quot;, linewidths=0.05)

# Grid
ax.set_axisbelow(True)
ax.grid(True)

fig.savefig(&quot;./test.pdf&quot;, bbox_inches='tight')
</code></pre> <p><a href="https://i.sstatic.net/NM32b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NM32b.png" alt="missing major ticks" /></a></p> <p>By default, matplotlib seems to want a square grid, and hence skips every other power of 10 on the <code>x</code> axis. Supposedly, the 4 lines I have commented out -- a combination of a locator and a formatter -- are the way to customise the spacing of the major ticks. See e.g. <a href="https://stackoverflow.com/a/27258626/9352077">this post</a>. <strong>Yet, uncommenting them produces the same exact figure, with missing major ticks.</strong></p> <p>How else, if not with <code>LogLocator(base=10)</code>, can I change the spacing between the ticks on the log scale? Here's roughly what it should look like (I've highlighted the changes in red, but they should of course be gray):</p> <p><a href="https://i.sstatic.net/yletQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yletQ.png" alt="no more missing major ticks on log scale" /></a></p>
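A likely reason the commented-out lines change nothing: `LogLocator`'s `numticks` parameter caps how many major ticks it will emit, and when the axis spans more decades than that cap, the locator skips decades; `LogLocator(base=10)` alone keeps the default cap. Raising `numticks` past the number of decades on the axis forces a tick at every power of 10. A condensed sketch of this fix:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
import matplotlib.ticker as tkr

fig, ax = plt.subplots(figsize=(10, 8))
ax.set_xscale("log")
ax.set_yscale("log")

t = range(1, 10_000_000, 1000)
ax.scatter([e**2 for e in t], list(t), marker=".", linewidths=0.05)

# numticks caps how many majors LogLocator emits before it starts
# skipping decades; make it larger than the decade count of the axis.
ax.xaxis.set_major_locator(tkr.LogLocator(base=10, numticks=99))
ax.xaxis.set_major_formatter(tkr.LogFormatterSciNotation())

fig.canvas.draw()  # realize the ticks
```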
<python><matplotlib><plot><logarithm><xticks>
2023-05-19 00:26:41
1
415
Mew
76,285,142
3,821,009
Compare polars list to python list
<p><strong>Update:</strong> Support for direct list comparison was added to Polars.</p> <pre class="lang-py prettyprint-override"><code>df.filter(pl.concat_list(pl.all()) == [2, 5, 8])
</code></pre> <hr /> <p>Say I have this:</p> <pre><code>df = polars.DataFrame(dict(
    j=[1,2,3],
    k=[4,5,6],
    l=[7,8,9],
))

shape: (3, 3)
β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”
β”‚ j   ┆ k   ┆ l   β”‚
β”‚ --- ┆ --- ┆ --- β”‚
β”‚ i64 ┆ i64 ┆ i64 β”‚
β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║
β”‚ 1   ┆ 4   ┆ 7   β”‚
β”‚ 2   ┆ 5   ┆ 8   β”‚
β”‚ 3   ┆ 6   ┆ 9   β”‚
β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
</code></pre> <p>I can filter for a particular row doing it one column at a time, i.e.:</p> <pre><code>df = df.filter(
    (polars.col('j') == 2) &amp;
    (polars.col('k') == 5) &amp;
    (polars.col('l') == 8)
)

shape: (1, 3)
β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”
β”‚ j   ┆ k   ┆ l   β”‚
β”‚ --- ┆ --- ┆ --- β”‚
β”‚ i64 ┆ i64 ┆ i64 β”‚
β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║
β”‚ 2   ┆ 5   ┆ 8   β”‚
β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
</code></pre> <p>I'd like to compare to the list instead though (so I can avoid listing each column and to accommodate variable column <code>DataFrame</code>s), e.g. something like:</p> <pre><code>df = df.filter(
    polars.concat_list(polars.all()) == [2, 5, 8]
)
...
exceptions.ArrowErrorException: NotYetImplemented(&quot;Casting from Int64 to LargeList(Field { name: \&quot;item\&quot;, data_type: Int64, is_nullable: true, metadata: {} }) not supported&quot;)
</code></pre> <p>Any ideas why the above is throwing the exception?</p> <p>I can build the expression manually:</p> <pre><code>df = df.filter(
    functools.reduce(lambda a, e: a &amp; e,
                     (polars.col(c) == v for c, v in zip(df.columns, [2, 5, 8])))
)
</code></pre> <p>but I was hoping there's a way to compare lists directly - e.g. as if I had this <code>DataFrame</code> originally:</p> <pre><code>df = polars.DataFrame(dict(j=[
    [1,4,7],
    [2,5,8],
    [3,6,9],
]))

shape: (3, 1)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ j         β”‚
β”‚ ---       β”‚
β”‚ list[i64] β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•β•‘
β”‚ [1, 4, 7] β”‚
β”‚ [2, 5, 8] β”‚
β”‚ [3, 6, 9] β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
</code></pre> <p>and wanted to find the row which matches <code>[2, 5, 8]</code>. Any hints?</p>
<python><python-polars>
2023-05-18 23:43:22
3
4,641
levant pied
76,285,127
13,891,832
How to format the timeseries axis of a matplotlib plot like a pandas plot
<p>I've created the following graph with <code>pd.Series([1]*month_limits.size, month_limits).plot(ax=ax)</code>.<br /> How do I recreate the same x tick labels with just matplotlib? I mean with the years and months not overwriting each other: <a href="https://i.sstatic.net/KvgGt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KvgGt.png" alt="enter image description here" /></a></p> <p>The best I've got is showing years and months at the same time, but the year labels overwrite the month labels:</p> <pre class="lang-py prettyprint-override"><code>import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import pandas as pd

month_limits = pd.date_range('2021-08', '2023-01', freq=&quot;1m&quot;,
                             normalize=True) + datetime.timedelta(days=1)

fig, ax = plt.subplots()  # added here; `ax` was used below without being created
plt.plot(month_limits, [1]*month_limits.size)

l1 = mdates.YearLocator()
l2 = mdates.MonthLocator(interval=1)
ax.xaxis.set_major_locator(l1)
ax.xaxis.set_minor_locator(l2)
ax.xaxis.set_major_formatter(mdates.ConciseDateFormatter(l1))
ax.xaxis.set_minor_formatter(mdates.ConciseDateFormatter(l2))
</code></pre> <p><a href="https://i.sstatic.net/qrWin.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qrWin.png" alt="enter image description here" /></a></p>
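One way to avoid the overlap (a sketch of a common pattern, not necessarily a pixel-for-pixel match of the pandas labels): use a single major locator/formatter pair, letting `ConciseDateFormatter` place the year context on its own line rather than fighting a separate minor formatter for the same space.

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import pandas as pd

# 'MS' (month start) used here instead of the question's freq="1m" + 1 day.
month_limits = pd.date_range('2021-08', '2023-01', freq='MS')

fig, ax = plt.subplots()
ax.plot(month_limits, [1] * month_limits.size)

# One locator drives one formatter; ConciseDateFormatter abbreviates the
# months and shows the year only where it changes, so nothing overlaps.
loc = mdates.MonthLocator(interval=1)
ax.xaxis.set_major_locator(loc)
ax.xaxis.set_major_formatter(mdates.ConciseDateFormatter(loc))
fig.canvas.draw()  # realize the tick labels
```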
<python><matplotlib><time-series><axis><xticks>
2023-05-18 23:39:35
1
917
bottledmind
76,285,106
1,860,222
How to get the underlying data of a row from a PyQt Table?
<p>I'm working on a python application that uses a QTableView to display some information. When the user double-clicks a row of the table, I want to extract the model data for that row and do something with it. The problem is, the data method for QAbstractTableModel returns the display string, not the data that was used to create it. So if I have a cell with an integer, I want to extract the actual integer, but the data() method returns a string. In this simple case I could simply convert the string back to an int, but if I do something more complicated that may not be an option.</p> <p>My solution so far has been to store a copy of the data list used to populate the model and look up the data in that using the index from the double-click signal. It works, but I feel like there should be a cleaner way of doing this. Am I just overthinking this? Is there a better way of extracting data from a Qt model?</p> <p>For reference, my model class looks like this:</p> <pre><code>from PyQt5 import QtCore
from PyQt5.QtCore import Qt

from ResourceCard import ResourceCard


class ResourceCardModel(QtCore.QAbstractTableModel):
    '''
    classdocs
    '''
    _headers = [&quot;Level&quot;, &quot;Color&quot;, &quot;Points&quot;, &quot;Cost&quot;]

    def __init__(self, cards: list[ResourceCard]):
        '''
        Constructor
        '''
        super(ResourceCardModel, self).__init__()
        self._data = cards

    def data(self, index, role):
        if role == Qt.DisplayRole or role == Qt.EditRole:
            # Look up the key by header index.
            column = index.column()
            row = index.row()
            item: ResourceCard = self._data[row]
            match column:
                case 0:
                    return str(item.level)
                case 1:
                    return str(item.suit.name)
                case 2:
                    return str(item.points)
                case 3:
                    return str(item.cost)
                case _:
                    return &quot;&quot;

    def rowCount(self, index):
        # The length of the outer list.
        return len(self._data)

    def columnCount(self, index):
        # The length of our headers.
        return len(self._headers)

    def headerData(self, section, orientation, role):
        # section is the index of the column/row.
        if role == Qt.DisplayRole:
            if orientation == Qt.Horizontal:
                return str(self._headers[section])
            if orientation == Qt.Vertical:
                return &quot;&quot;
</code></pre> <p>In my main class I have some code like:</p> <pre><code>def __init__(self, *args, obj=None, **kwargs):
    '''
    Constructor
    '''
    QtWidgets.QMainWindow.__init__(self)
    Ui_Widget.__init__(self)
    self.setupUi(self)
    self.players = []
    self.gameActions = GameActions([&quot;p1&quot;, &quot;p2&quot;, &quot;p3&quot;, &quot;p4&quot;])
    self.players = self.gameActions.getPlayersList()
    self.cards: dict[int, list] = self.gameActions.game.availableResources
    self.lvOneModel = ResourceCardModel(self.cards.get(1))
    self.lvOneCardsTable.setModel(self.lvOneModel)
    self.lvOneCardsTable.doubleClicked.connect(self.availableCardDoubleClicked)

def availableCardDoubleClicked(self, item: QModelIndex):
    msgDialog = QtWidgets.QMessageBox(self)
    itemData = self.cards.get(1)[item.row()]
    msgDialog.setText(str(itemData))
    msgDialog.show()
</code></pre>
<python><model-view-controller><pyqt><qtableview>
2023-05-18 23:34:14
0
1,797
pbuchheit
76,285,079
1,530,967
How to type dynamically created classes so mypy can lint them properly
<p>I'm looking to refactor the function task decorator of a dataflow engine I contribute to called <a href="https://pydra.readthedocs.io" rel="nofollow noreferrer">Pydra</a> so that the argument types can be linted with mypy. The code captures the arguments to be passed to the function at workflow construction time and stores them in a dynamically created <code>Inputs</code> class (designed using python-attrs), in order to delay the execution of the function until runtime of the workflow.</p> <p>The following code works fine, but mypy doesn't know the types of the attributes in the dynamically generated classes. Is there a way to specify them dynamically too?</p> <pre class="lang-py prettyprint-override"><code>import typing as ty
import attrs
import inspect
from functools import wraps


@attrs.define(kw_only=True, slots=False)
class FunctionTask:
    name: str
    inputs: ty.Type[ty.Any]  # Use typing.Type hint for inputs attribute

    def __init__(self, name: str, **kwargs):
        self.name = name
        self.inputs = type(self).Inputs(**kwargs)


def task(function: ty.Callable) -&gt; ty.Type[ty.Any]:
    sig = inspect.signature(function)
    inputs_dct = {
        p.name: attrs.field(default=p.default)
        for p in sig.parameters.values()
    }
    inputs_dct[&quot;__annotations__&quot;] = {
        p.name: p.annotation for p in sig.parameters.values()
    }

    @wraps(function, updated=())
    @attrs.define(kw_only=True, slots=True, init=False)
    class Task(FunctionTask):
        func = staticmethod(function)
        Inputs = attrs.define(type(&quot;Inputs&quot;, (), inputs_dct))  # type: ty.Any
        inputs: Inputs = attrs.field()

        def __call__(self):
            return self.func(
                **{
                    f.name: getattr(self.inputs, f.name)
                    for f in attrs.fields(self.Inputs)
                }
            )

    return Task
</code></pre> <p>which you can use with</p> <pre class="lang-py prettyprint-override"><code>@task
def myfunc(x: int, y: int) -&gt; int:
    return x + y

# Would be great if `x` and `y` arguments could be type-checked as ints
mytask = myfunc(name=&quot;mytask&quot;, x=1, y=2)

# Would like mypy to know that mytask has an inputs attribute and that it has an int
# attribute 'x', so the linter picks up the incorrect assignment below
mytask.inputs.x = &quot;bad-value&quot;
mytask()
</code></pre> <p>I would like mypy to know that mytask has an inputs attribute and that it has an int attribute 'x', so the linter picks up the incorrect assignment of &quot;bad-value&quot;. If possible, it would be great if the keyword args to the <code>myfunc.__init__</code> are also type-checked.</p> <p>Is this possible? Any tips on things to try?</p> <p>EDIT: To try to make what I'm trying to do a bit clearer, here is an example of what one of the dynamically generated classes would look like if it was written statically:</p> <pre class="lang-py prettyprint-override"><code>@attrs.define
class StaticTask:
    @attrs.define
    class Inputs:
        x: int
        y: int

    name: str
    inputs: Inputs = attrs.field()

    @staticmethod
    def func(x: int, y: int) -&gt; int:
        return x + y

    def __init__(self, name: str, x: int, y: int):
        self.name = name
        self.inputs.x = x
        self.inputs.y = y

    def __call__(self):
        return self.func(x=self.inputs.x, y=self.inputs.y)
</code></pre> <p>In this case</p> <pre class="lang-py prettyprint-override"><code>mytask2 = StaticTask(name=&quot;mytask&quot;, x=1, y=2)
mytask2.inputs.x = &quot;bad-value&quot;
</code></pre> <p>the final line gets flagged by mypy as setting a string to an int field. This is what I would like my dynamically created classes to replicate.</p>
<python><python-typing><mypy><python-attrs>
2023-05-18 23:26:15
2
594
Tom Close
76,285,036
18,150,609
Python, Selenium: Take a Full-Page Screenshot as .pdf Without Page Breaks, Regardless of Page Dimensions
<p>Currently, I see it is possible to create screenshots with Selenium. However, they are always <code>.png</code> files. How can I take the same style of screenshot but as a <code>.pdf</code>?</p> <p>Required style: no margins; same dimensions as the current page (like a full-page screenshot).<br /> Printing the page doesn't accomplish this because of all the formatting that comes with printing.</p> <p>How I currently get a screenshot:</p> <pre><code>from selenium import webdriver

# Function to find page size
S = lambda X: driver.execute_script('return document.body.parentNode.scroll'+X)

driver = webdriver.Firefox(options=options)
driver.get('https://www.google.com')

# Screen
height = S('Height')
width = S('Width')
driver.set_window_size(width, height)
driver.get_screenshot_as_file(PNG_SAVEAS)
driver.close()
</code></pre>
<python><selenium-webdriver><geckodriver>
2023-05-18 23:10:56
2
364
MrChadMWood
76,285,010
219,355
lxml xpath on new element yields empty list
<p>Say I am composing an XML document based on an incomplete XML document that lives in a file. I go over each element, trying to find it or else creating it.</p> <pre><code>if len(list(root.xpath(COMSTACK_XPATH, namespaces=NS))) == 0:
    create_com_stack_subelement(root.xpath(CDDROOT_XPATH, namespaces=NS).pop())
</code></pre> <p>If the element gets created, I'd like to repeat a similar operation, this time searching for the newly created element and in turn adding subelements to it.</p> <p>The problem is that any xpath pointing to the new element won't work unless I write the tree to a file and parse it back.</p> <p>Is this expected? I am using lxml==4.9.2.</p> <p>Thanks</p>
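This is not generally expected behaviour; one common cause (offered here as a hypothesis, since the element-creation code isn't shown) is creating the new element with a plain tag while searching with a namespace-qualified XPath. An element created without the `{uri}` prefix is a different name from `{uri}ComStack` and will never match. The stdlib `ElementTree` illustrates the rule, which applies to lxml's `xpath()` the same way (the namespace map and tag names below are invented for the example):

```python
import xml.etree.ElementTree as ET

NS = {'c': 'http://example.com/cdd'}  # hypothetical namespace map

root = ET.fromstring(
    '<Root xmlns="http://example.com/cdd"><CddRoot/></Root>')
parent = root.find('c:CddRoot', NS)

# Created without the namespace, the element is invisible to a
# namespaced search.
ET.SubElement(parent, 'ComStack')
assert root.find('c:CddRoot/c:ComStack', NS) is None

# Created with the fully qualified {uri}tag, it is found immediately:
# no write-to-file / re-parse round trip needed.
ET.SubElement(parent, '{http://example.com/cdd}ComStack')
found = root.find('c:CddRoot/c:ComStack', NS)
```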
<python><lxml>
2023-05-18 23:03:29
1
695
debuti
76,284,971
19,369,393
Can python list sort key function take index instead of an element?
<p>Take a look at this code:</p> <pre><code>points = [...]
properties = [compute_point_property(point) for point in points]
points.sort(?)
</code></pre> <p>Here I have a list of points, and I compute a property of each point. Then I need to sort the points by their corresponding property value. The problem is that <code>compute_point_property</code> is computationally expensive, so I don't want to call it more than needed. I also need to use the <code>properties</code> of the points afterwards. That means I cannot just use</p> <pre><code>points.sort(key=compute_point_property)
</code></pre> <p>because in this case the result of <code>compute_point_property</code> is not stored in a variable for me to use later.</p> <p>In my case, the elements (points) are hashable, so I can put them in a dict and use it like this:</p> <pre><code>points = [...]
properties = [compute_point_property(point) for point in points]
points_properties = dict(zip(points, properties))
points.sort(key=lambda point: points_properties[point])
</code></pre> <p>It would work, but it seems wrong and unnecessary. And it works only for hashable list elements. I want to learn how to do it with any list element type, whether it is hashable or not.</p> <p>Another idea is to put both a point and its corresponding property into a tuple, and have a list of these tuples. Then sort the list of tuples by the second tuple element (the point property). Then extract the points from the list of tuples.</p> <pre><code>points = [...]
properties = [compute_point_property(point) for point in points]
points_properties = list(zip(points, properties))
points_properties.sort(key=lambda x: x[1])
points = [x[0] for x in points_properties]
</code></pre> <p>This also works, but it creates a list of tuples and extracts the points from the list of tuples, so it does more than is actually needed.</p> <p>It would be great if the <code>list.sort</code> key function could take an element index instead of the element value as an argument. In that case, it would be possible to do it like this:</p> <pre><code>points = [...]
properties = [compute_point_property(point) for point in points]
points.sort(key=lambda index: properties[index])
</code></pre> <p>What would be the best way to sort the list while storing the computed properties? Thanks in advance!</p>
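The "sort by index" idea can be had directly by sorting a range of index positions instead of the elements themselves; no hashing and no tuple wrapping is needed, and the same order can be applied to both lists. A minimal sketch with stand-in data:

```python
# Stand-in for the expensive computation: points and their precomputed
# properties, index-aligned.
points = [(3, 'c'), (1, 'a'), (2, 'b')]
properties = [30, 10, 20]

# Sort the index positions by the cached property values...
order = sorted(range(len(points)), key=properties.__getitem__)

# ...then apply that permutation to both lists.
points = [points[i] for i in order]
properties = [properties[i] for i in order]
```

This works for unhashable elements and keeps `properties` aligned with the sorted `points`.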
<python><list><sorting>
2023-05-18 22:53:34
1
365
g00dds
76,284,950
461,048
Reshape a numpy array so the columns wrap below the original rows
<p>Consider the following scenario:</p> <pre><code>import numpy as np

a = np.array([[1,1,1,3,3,3],
              [2,2,2,4,4,4]])
np.reshape(a, (4, 3))
</code></pre> <p>Output:</p> <pre><code>array([[1, 1, 1],
       [3, 3, 3],
       [2, 2, 2],
       [4, 4, 4]])
</code></pre> <p>Desired output:</p> <pre><code>array([[1, 1, 1],
       [2, 2, 2],
       [3, 3, 3],
       [4, 4, 4]])
</code></pre> <p>How can I reshape the array so the rows stay paired together and the overflowing columns wrap below the existing rows? <a href="https://i.sstatic.net/Qj8p0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qj8p0.png" alt="enter image description here" /></a></p>
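One way to get the desired ordering (a sketch of the reshape-transpose-reshape idiom): split each row into blocks, swap the block axis with the row axis so the original rows stay adjacent, then flatten back to 2-D.

```python
import numpy as np

a = np.array([[1, 1, 1, 3, 3, 3],
              [2, 2, 2, 4, 4, 4]])

# Split each of the 2 rows into 2 blocks of 3 columns, swap the block
# and row axes, then collapse back to (4, 3). In general:
# a.reshape(r, k, c).transpose(1, 0, 2).reshape(r * k, c)
out = a.reshape(2, 2, 3).transpose(1, 0, 2).reshape(4, 3)
```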
<python><numpy><reshape>
2023-05-18 22:48:26
2
3,838
imbrizi
76,284,869
1,282,160
Pytest: how to do snapshot tests for Pandas DataFrame while failed
<p>I have two data frames:</p> <ul> <li><code>df1</code>: it originated from a csv file and has been cleaned by dropping unwanted data.</li> <li><code>df2</code>: a validated data frame from previous successful results. We save it to a file and load it back when testing.</li> </ul> <p>Now we write a test:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

def test_data_cleaning_process():
    df1 = pd.read_csv('df1.csv')
    df2 = pd.read_csv('df2.csv')

    # execute target function
    df1 = cleanup(df1)

    # compare dfs
    assert df1.equals(df2)
</code></pre> <p>It is fine while the tests pass. When a test fails, the unmatched data is hard to find in the console output.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>def test_for_failure():
    import pandas as pd

    data1 = {
        'A': [1,2,3,4],
        'B': [1,2,3,4],
        'C': [1,2,3,4],
        'D': [1,2,3,4],
        'E': [1,2,3,4],
        'F': [1,2,3,4],
        'G': [1,2,3,4],
    }
    data2 = {
        'A': [1,2,3,4],
        'B': [1,2,3,9],  # We change 4 to 9 for failure
        'C': [1,2,3,4],
        'D': [1,2,3,4],
        'E': [1,2,3,4],
        'F': [1,2,3,4],
        'G': [1,2,3,4],
    }
    df1 = pd.DataFrame(data1)
    df2 = pd.DataFrame(data2)
    assert df1.equals(df2)
</code></pre> <p>The console output for pytest:</p> <pre><code>...
    df1 = pd.DataFrame(data1)
    df2 = pd.DataFrame(data2)
&gt;   assert df1.equals(df2)
E   assert False
E    +  where False = &lt;bound method NDFrame.equals of    A  B  C  D  E  F  G\n0  1  1  1  1  1  1  1\n1  2  2  2  2  2  2  2\n2  3  3  3  3  3  3  3\n3  4  4  4  4  4  4  4&gt;(   A  B  C  D  E  F  G\n0  1  1  1  1  1  1  1\n1  2  2  2  2  2  2  2\n2  3  3  3  3  3  3  3\n3  4  9  4  4  4  4  4)
E    +    where &lt;bound method NDFrame.equals of    A  B  C  D  E  F  G\n0  1  1  1  1  1  1  1\n1  2  2  2  2  2  2  2\n2  3  3  3  3  3  3  3\n3  4  4  4  4  4  4  4&gt; =    A  B  C  D  E  F  G\n0  1  1  1  1  1  1  1\n1  2  2  2  2  2  2  2\n2  3  3  3  3  3  3  3\n3  4  4  4  4  4  4  4.equals
</code></pre> <p>The dataframes:</p> <pre><code># df1
   A  B  C  D  E  F  G
0  1  1  1  1  1  1  1
1  2  2  2  2  2  2  2
2  3  3  3  3  3  3  3
3  4  4  4  4  4  4  4

# df2
   A  B  C  D  E  F  G
0  1  1  1  1  1  1  1
1  2  2  2  2  2  2  2
2  3  3  3  3  3  3  3
3  4  9  4  4  4  4  4
</code></pre> <p>Only one row is unmatched, but pytest prints out unhelpful info for failed cases when the data frame is big.</p> <h2>Questions</h2> <ol> <li>How can I print only the unmatched rows (row index 3 in this case) on failure? Or generate a diff file for review.</li> <li>Is there any tool that lets me overwrite the snapshot file with the current data even when the test fails?</li> <li>What is the best practice for testing the cleaning process to ensure refactoring won't change the results?</li> </ol>
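For question 1, one option is pandas' own test helper: replacing the bare `assert df1.equals(df2)` with `pandas.testing.assert_frame_equal` produces an error that names the offending column and shows only the differing values. A minimal sketch on the example data:

```python
import pandas as pd
from pandas.testing import assert_frame_equal

df1 = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [1, 2, 3, 4]})
df2 = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [1, 2, 3, 9]})

# Unlike `assert df1.equals(df2)`, this raises an AssertionError whose
# message identifies the differing column and rows.
try:
    assert_frame_equal(df1, df2)
except AssertionError as err:
    message = str(err)
```

In a pytest test you would simply call `assert_frame_equal(df1, df2)` and let the raised error become the failure report.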
<python><pandas><pytest><snapshot>
2023-05-18 22:29:53
1
3,407
Xaree Lee
76,284,748
19,410,411
In which format should I send the date data?
<p>So I have a <strong>Django</strong> + <strong>Flutter</strong> setup where I run an app that gets data from a REST API server on my PC.</p> <p>I created a serializer for creating users in <strong>Django</strong>; here is the code:</p> <pre class="lang-py prettyprint-override"><code>class CreateUserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ['first_name', 'last_name', 'date_of_birth', 'email',
                  'password', 'gender', 'date_of_contamination', 'cluster_id_id']
</code></pre> <p>And this is my user model:</p> <pre class="lang-py prettyprint-override"><code>class User(models.Model):
    user_id = models.AutoField(primary_key=True)
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)
    date_of_birth = models.DateField()
    email = models.EmailField(max_length=254, unique=True)
    password = models.CharField(max_length=50)
    cronic_disease_1 = models.CharField(max_length=50, null=True)
    cronic_disease_2 = models.CharField(max_length=50, null=True)
    cronic_disease_3 = models.CharField(max_length=50, null=True)
    cronic_disease_4 = models.CharField(max_length=50, null=True)
    cronic_disease_5 = models.CharField(max_length=50, null=True)
    gender = models.CharField(max_length=50)
    latitude = models.FloatField(null=True)
    longitude = models.FloatField(null=True)
    cluster_id = models.ForeignKey('Cluster', on_delete=models.CASCADE)
    if_transmit = models.BooleanField(null=True)
    date_of_contamination = models.DateField(null=True)
    recommandation = models.FloatField(null=True)
    online = models.BooleanField(null=True)

    class Meta:
        constraints = [
            models.CheckConstraint(
                check=models.Q(date_of_birth__lte=datetime.date.today()),
                name='no_future_date_of_birth')
        ]
</code></pre> <p>After creating a POST request API URL that works (I tested it with <strong>Postman</strong>), I tried to create a POST request to send data from <strong>Flutter</strong>; here is my post request method:</p> <pre class="lang-dart prettyprint-override"><code>static Future&lt;bool&gt; postRegisterUser({
  required String email,
  required String password,
  required String firstName,
  required String lastName,
  required DateTime dateOfBirth,
  required Gender gender,
  DateTime? dateOfContamination,
}) async {
  Map&lt;String, dynamic&gt; data = {
    'first_name': firstName,
    'last_name': lastName,
    'date_of_birth': dateOfBirth,
    'email': email,
    'password': password,
    'gender': gender.toStringValue().capitalize,
    'date_of_contamination': dateOfContamination,
    'cluster_id': 0,
  };
  final response = await Dio().post(postCreateUserUrl, data: data);
  return response.statusCode == 200 ? true : false;
}
</code></pre> <p>Now when I try to send a POST query I'm getting errors like this one:</p> <blockquote> <p>[ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: DioError [bad response]: The request returned an invalid status code of 400.</p> </blockquote> <p>And this one:</p> <blockquote> <p>E/flutter (18370): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: DioError [unknown]: null E/flutter (18370): Error: Converting object to an encodable object failed: Instance of 'DateTime'</p> </blockquote> <p>For a test, I printed the <code>data</code> variable before sending it; here it is:</p> <pre class="lang-py prettyprint-override"><code>{first_name: fkqsjmdfk, last_name: qfjsmkdfjm, date_of_birth: 2002-09-14,
 email: fqsfjkkmqsj, password: dskjfqmskj, gender: Homme,
 date_of_contamination: 2023-05-18, cluster_id: 0}
</code></pre> <p>And for more clarity, here is the datatype of each field contained inside the <code>data</code> variable before sending it:</p> <pre class="lang-dart prettyprint-override"><code>email: fqsfjkkmqsj                                  String
password: dskjfqmskj                                String
first name: fkqsjmdfk                               String
last name: qfjsmkdfjm                               String
date of birth: 2002-09-14 00:00:00.000              DateTime
gender: Homme                                       String
date of contamination: 2023-05-18 22:59:35.456108   DateTime
</code></pre>
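The "Converting object to an encodable object failed: Instance of 'DateTime'" error suggests the raw `DateTime` objects cannot be JSON-encoded; Django REST Framework's `DateField` expects an ISO `YYYY-MM-DD` string. A sketch of the string the server side would accept, shown here in Python (the Dart-side equivalent would be something along the lines of `dateOfBirth.toIso8601String().split('T').first`, offered as a hint rather than tested code):

```python
import datetime
import json

# What the Dart side holds: a full datetime, not a date string.
date_of_birth = datetime.datetime(2002, 9, 14)

# DateField expects ISO 'YYYY-MM-DD'; convert before building the payload
# so the whole map is JSON-encodable.
payload = {'date_of_birth': date_of_birth.date().isoformat()}
encoded = json.dumps(payload)
```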
<python><django><flutter><dart>
2023-05-18 22:01:07
1
525
Mikelenjilo
76,284,724
1,864,162
Iterating over Dataframes - revisited?
<p>I have a dataframe of about 500.000 rows. Contains long, lat, altitude, datetime (and a LOT more data) of aircraft radar data.</p> <p>To calculate 'propensity' (a dimensionless number based on closest point of approach [cpa] theory) I do the following:</p> <ol> <li>take a chunk of rows within a certain time window (currently 3 seconds, holds about 20-40 rows of data). This is simply based on selecting a group of rows based on 'datetime'. Then calling a function that operates on this chunk.</li> </ol> <pre><code> timeslice_start = row[1]['df']['datetime'].iloc[0] timeslice_end = timeslice_start + timeslice # while the end of the timeslice has not reached the end of the dataframe: while timeslice_end &lt;= row[1]['df']['datetime'].iloc[-1]: # Take a group of datapoints that are in the timeslice this_group = row[1]['df'][(row[1]['df']['datetime'] &gt;= timeslice_start) &amp; (row[1]['df']['datetime'] &lt;= timeslice_end)] do_checks3D(this_group, row[1]['limits'][0], row[1]['limits'][1], row[1]['limits'][2]) </code></pre> <p>This is a sliding window technique. Recently I discovered that pandas actually has a function for this with pandas.rolling(), but have not tried that yet as I do not expect a big performance boost.</p> <ol start="2"> <li>This group of data is then passed to a function to calculate cpa of all aircraft points in that chunk of data by iterating over the rows using a 'for' loop with a nested 'for' loop as follows:</li> </ol> <pre><code>for i in range(0, len(dframe)): # extracting parameters for vector1 of aircraft 1 # and compare it with all the other datapoints in the frame for j in range(i+1,len(dframe)): # do more extraction for vector2 of aircraft 2 # and call the cpa function cpa(vector1, vector2) </code></pre> <p>Effectively, this gives me the same as itertools.combinations().
However, using combinations() is much slower than brute force iterating like I do now.</p> <ol start="3"> <li>Slide window forward (currently 1 sec) and call the function again. Obviously this will give many overlaps which need to be weeded out later.</li> </ol> <p>Calculating all cpas for a day's worth of datapoints, will cost me around 30 minutes on a normal PC. I am looking for ways to significantly speed up the code. Would you have any suggestions? Is pandas.rolling() faster than my sliding window? Is there a better Pythonic way? Is there a faster way than my two nested 'for' loops?</p> <p>Any suggestions very welcome!</p>
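For what it's worth, here is a sketch of the direction I suspect the answer lies in: the pairwise comparison inside each window can be vectorised with NumPy broadcasting instead of the nested loops. The column names and the distance computation below are placeholders (not my real <code>cpa()</code> function), just to show the shape of the idea:

```python
import numpy as np

# Hypothetical stand-in for one time window of ~30 aircraft positions.
rng = np.random.default_rng(0)
pts = rng.random((30, 3))  # columns: long, lat, altitude (placeholder units)

# All pairwise difference vectors at once via broadcasting: shape (30, 30, 3).
diff = pts[:, None, :] - pts[None, :, :]
dist = np.linalg.norm(diff, axis=-1)

# Keep only the i < j pairs -- the same pairs the nested loops produce.
i, j = np.triu_indices(len(pts), k=1)
pair_dists = dist[i, j]
print(pair_dists.shape)  # one value per unordered pair: 30*29/2 = 435
```

The real CPA formula would replace the norm, but the key point is that all pair extraction and arithmetic happens in C inside NumPy rather than in a Python double loop.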
<python><pandas><combinations><nested-loops>
2023-05-18 21:56:22
1
303
Job BrΓΌggen
76,284,578
19,369,393
Is there a standard way to create a "view" of a list in python with an element excluded?
<p>Let's say I have a huge list and a function that expects a list as an argument. This function does not modify the list in any way. I want to pass this list to the function, but with an element excluded. Removing the element would require me to copy the entire list without the element I want to exclude, which is obviously inefficient, and also unnecessary, since the function only views the elements of the list.</p> <p>One way is to create a custom view class that implements <code>__len__</code>, <code>__getitem__</code>, <code>__iter__</code> and other collections' methods and &quot;excludes&quot; the element from the list, by returning <code>len-1</code> in <code>__len__</code> and returning <code>n+1</code>th element instead of <code>n</code>th if <code>n</code> is greater or equal to the &quot;excluded&quot; element index.</p> <p>My question, is there any standard way to do that? Is there any such view class in the python's standard library? I will also consider external libraries if there are any. Just don't want to reinvent the wheel.</p> <p>Thanks in advance!</p>
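To make the iteration-only case concrete, here is the kind of copy-free traversal I am imagining, built from <code>itertools</code> (this only helps if the function merely iterates; a custom <code>Sequence</code> view like the one described above would still be needed if the function calls <code>len()</code> or indexes):

```python
from itertools import chain, islice

def excluding(seq, skip_index):
    """Lazily iterate over seq with the element at skip_index left out.

    No copy is made: the two islice views walk the original sequence.
    """
    return chain(islice(seq, skip_index), islice(seq, skip_index + 1, None))

data = list(range(10))
print(list(excluding(data, 3)))  # [0, 1, 2, 4, 5, 6, 7, 8, 9]
```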
<python><list>
2023-05-18 21:22:18
0
365
g00dds
76,284,496
13,877,993
Enabling API Key Auth in FastAPI for all endpoints except / and a healthcheck endpoint?
<p>I want to implement basic api key auth for all endpoints of my FastAPI app except for <code>/</code> and <code>/health</code>.</p> <p>I was going to do it using middleware, e.g.</p> <pre><code>@app.middleware(&quot;http&quot;) async def validate_api_key(request, call_next): if is_production(): api_key = request.headers.get(&quot;X-API-KEY&quot;) if api_key != settings.API_KEY: raise HTTPException( status_code=401, detail=&quot;Unauthorized&quot; ) response = await call_next(request) return response </code></pre> <p>However, this applies it to all endpoints including / and /health.</p> <p>The next thing I read was that I could use SubApps, but that doesn't really make sense to me to have / and /health in one app, and then all my other endpoints in another app. Because it doesn't seem like you can mount them on the same prefix.</p> <p>The other thing was to add it per endpoint using dependencies, e.g.</p> <pre><code> @app.get(&quot;/protected&quot;, dependencies=[Depends(api_key_auth)]) </code></pre> <p>Taken from <a href="https://testdriven.io/tips/6840e037-4b8f-4354-a9af-6863fb1c69eb/" rel="noreferrer">https://testdriven.io/tips/6840e037-4b8f-4354-a9af-6863fb1c69eb/</a> But that seems cumbersome to have to include it for every endpoint function that I write.</p> <p>Is there a better way to just do simple api key auth for most endpoints?</p>
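One variation I am considering is keeping the middleware but exempting the public paths inside it. The sketch below pulls the path check into a plain function (testable without the framework); the commented wiring assumes the <code>settings</code> / <code>is_production</code> helpers from my code, and returns a response directly because, as far as I can tell, an <code>HTTPException</code> raised inside an <code>http</code> middleware is not routed through FastAPI's exception handlers:

```python
# Paths that skip API-key validation (the two public endpoints).
EXEMPT_PATHS = {"/", "/health"}

def needs_api_key(path: str) -> bool:
    """Return True when a request path must carry X-API-KEY."""
    return path not in EXEMPT_PATHS

print(needs_api_key("/health"), needs_api_key("/users"))  # False True

# Hypothetical wiring inside the existing middleware:
#
# @app.middleware("http")
# async def validate_api_key(request, call_next):
#     if is_production() and needs_api_key(request.url.path):
#         if request.headers.get("X-API-KEY") != settings.API_KEY:
#             return JSONResponse({"detail": "Unauthorized"}, status_code=401)
#     return await call_next(request)
```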
<python><authentication><fastapi>
2023-05-18 21:07:18
1
1,610
fooiey
76,284,412
15,295,149
How can I stream a response from LangChain's OpenAI using Flask API?
<p>I am using a Python Flask app for chat over data. In the console I am getting a streamable response directly from the OpenAI since I can enable streaming with a flag <code>streaming=True</code>.</p> <p>The problem is that I can't &quot;forward&quot; or &quot;show&quot; the stream in my API call.</p> <p>Code for the processing OpenAI and chain is:</p> <pre><code>def askQuestion(self, collection_id, question): collection_name = &quot;collection-&quot; + str(collection_id) self.llm = ChatOpenAI(model_name=self.model_name, temperature=self.temperature, openai_api_key=os.environ.get('OPENAI_API_KEY'), streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])) self.memory = ConversationBufferMemory(memory_key=&quot;chat_history&quot;, return_messages=True, output_key='answer') chroma_Vectorstore = Chroma(collection_name=collection_name, embedding_function=self.embeddingsOpenAi, client=self.chroma_client) self.chain = ConversationalRetrievalChain.from_llm(self.llm, chroma_Vectorstore.as_retriever(similarity_search_with_score=True), return_source_documents=True,verbose=VERBOSE, memory=self.memory) result = self.chain({&quot;question&quot;: question}) res_dict = { &quot;answer&quot;: result[&quot;answer&quot;], } res_dict[&quot;source_documents&quot;] = [] for source in result[&quot;source_documents&quot;]: res_dict[&quot;source_documents&quot;].append({ &quot;page_content&quot;: source.page_content, &quot;metadata&quot;: source.metadata }) return res_dict </code></pre> <p>and the API route code:</p> <pre><code>@app.route(&quot;/collection/&lt;int:collection_id&gt;/ask_question&quot;, methods=[&quot;POST&quot;]) def ask_question(collection_id): question = request.form[&quot;question&quot;] # response_generator = document_thread.askQuestion(collection_id, question) # return jsonify(response_generator) def stream(question): completion = document_thread.askQuestion(collection_id, question) for line in completion['answer']: yield line
return app.response_class(stream_with_context(stream(question))) </code></pre> <p>I am testing my endpoint with curl, passing the <code>-N</code> flag so that I should get a streamed response if it is possible.</p> <p>When I make the API call, the endpoint first waits while it processes the data (I can see the streamed answer in my VS Code terminal), and when it finishes I get everything displayed in one go.</p>
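My current understanding of why this happens: <code>self.chain({...})</code> only returns after the whole completion is finished, so the generator iterates over an already-complete string. To stream to the HTTP client, the tokens would have to reach the response generator while the chain is still running, typically via a queue fed by a callback handler running the chain in a worker thread. Here is a framework-free sketch of that producer/consumer shape (the <code>fake_llm</code> stands in for the chain call, and the handler name mirrors LangChain's <code>on_llm_new_token</code> callback, which is an assumption about the version in use):

```python
import queue
import threading

token_queue: "queue.Queue[str | None]" = queue.Queue()

def on_llm_new_token(token: str) -> None:
    # In LangChain this would live on a custom callback handler class.
    token_queue.put(token)

def fake_llm() -> None:
    # Stand-in for self.chain({"question": ...}) emitting tokens as it runs.
    for tok in ["Hel", "lo", "!"]:
        on_llm_new_token(tok)
    token_queue.put(None)  # sentinel: generation finished

def stream():
    threading.Thread(target=fake_llm, daemon=True).start()
    while (tok := token_queue.get()) is not None:
        yield tok  # Flask would send each chunk as it arrives

print("".join(stream()))  # Hello!
```

In the real route, `stream()` would be wrapped in `stream_with_context` exactly as above, but fed by the queue instead of the finished answer.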
<python><flask><openai-api><langchain>
2023-05-18 20:53:03
2
746
devZ
76,284,376
7,437,143
How to format dash datatable cell using HTML code in cell string?
<h2>MWE</h2> <p>Suppose one has:</p> <pre class="lang-py prettyprint-override"><code>from dash import Dash, dash_table import pandas as pd from collections import OrderedDict data = OrderedDict( [ (&quot;Name&quot;, [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]), (&quot;Region&quot;, [ &quot;Toronto&quot;, &quot;&lt;b&gt; First &lt;/b&gt;&quot;, '&lt;b style=&quot;display:inline&quot;&gt;Second&lt;/b&gt;',]), ] ) df = pd.DataFrame(data) app = Dash(__name__) app.layout = dash_table.DataTable( data=df.to_dict('records'), columns=[{'id': c, 'name': c} for c in df.columns] ) if __name__ == '__main__': app.run_server(debug=True) </code></pre> <p>Which outputs: <a href="https://i.sstatic.net/wy2CR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wy2CR.png" alt="enter image description here" /></a></p> <p>(Note at the second column, the third and fourth row are HTML formatted, but this is not picked up by Dash).</p> <h2>Question</h2> <p>How can one ensure the html content of these cell content strings, is parsed by Dash in the table?</p> <h2>Note</h2> <p>The <a href="https://dash.plotly.com/datatable/conditional-formatting" rel="nofollow noreferrer">examples</a> given in the Dash documentation for conditional formatting write separate, non-pythonic/elaborate/api-like code to format specific cells. However, for the intended use case, it would be less complicated, if I can store the conditional data in the string, at the time of creating the cell content, instead of creating extra data tables per cell with the relevant formatting of the cell.</p>
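One direction I have found hints of: DataTable does not parse HTML in plain cells, but columns can be rendered as Markdown, and a <code>markdown_options={"html": True}</code> argument on the table reportedly lets raw HTML in cell strings through (with the usual XSS caveat if cells are user-supplied). A sketch of the column spec, kept framework-free so only the data structures are shown:

```python
# Hypothetical column definitions for dash_table.DataTable: marking a column
# with the "markdown" presentation lets its cell strings carry formatting.
column_names = ["Name", "Region"]
columns = [{"id": c, "name": c, "presentation": "markdown"} for c in column_names]
print(columns[1])  # {'id': 'Region', 'name': 'Region', 'presentation': 'markdown'}

# The table itself would then be built as (assuming the df from the MWE):
# dash_table.DataTable(
#     data=df.to_dict("records"),
#     columns=columns,
#     markdown_options={"html": True},  # allow the <b> tags inside cells
# )
```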
<python><html><plotly-dash>
2023-05-18 20:46:50
1
2,887
a.t.
76,284,294
3,853,504
Why is seaborn objects breaking on basic data?
<p>I am a native R user but need to use Python in a new job. Seaborn objects seems to match up with <code>ggplot</code> much more closely (I know about <code>plotnine</code>; let's ignore it for now).</p> <p>Unfortunately, it is not working and I cannot find help on the error. The code below is as basic as it gets, and the example was created using trivially modified code taken directly from <a href="https://seaborn.pydata.org/tutorial/objects_interface.html" rel="nofollow noreferrer">the seaborn objects tutorial</a>.</p> <p>In my current position we are using a unified Python environment, so my ability to tweak/change/test is severely limited, if not unavailable.</p> <pre class="lang-py prettyprint-override"><code>import seaborn as sns import seaborn.objects as so import matplotlib.pyplot as plt import numpy as np import pandas as pd print(&quot;Seaborn version:&quot;, sns.__version__) x_min = 0 x_max = 50 x_len = 25 x = np.linspace(x_min, x_max, x_len) y = x ** (np.random.rand(x_len)) temp_d = pd.DataFrame( {&quot;x&quot;: x, &quot;y&quot;: y} ) print(temp_d.head()) plt.scatter(x, y) so.Plot( data = temp_d, x = &quot;x&quot;, y = &quot;y&quot; ).add(so.Dots()) </code></pre> <p>I get a &quot;TypeError: data type 'boolean' not understood&quot;:</p> <pre><code> 834 # Process the scale spec for coordinate variables and transform their data ...
---&gt; 97 boolean_vector = vector.dtype in boolean_dtypes 98 else: 99 boolean_vector = bool(np.isin(vector, [0, 1, np.nan]).all()) TypeError: data type 'boolean' not understood </code></pre> <p>I have attached a picture:</p> <p><a href="https://i.sstatic.net/XBvOL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XBvOL.png" alt="enter image description here" /></a></p> <p>Full anonymized trace:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) /data/anaconda3/shared/xxxxxxxxxx3.8/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj) 343 method = get_real_method(obj, self.print_method) 344 if method is not None: --&gt; 345 return method() 346 return None 347 else: /data/anaconda3/shared/xxxxxxxxxx3.8/lib/python3.8/site-packages/seaborn/_core/plot.py in _repr_png_(self) 277 def _repr_png_(self) -&gt; tuple[bytes, dict[str, float]]: 278 --&gt; 279 return self.plot()._repr_png_() 280 281 # TODO _repr_svg_? 
/data/anaconda3/shared/xxxxxxxxxx3.8/lib/python3.8/site-packages/seaborn/_core/plot.py in plot(self, pyplot) 819 &quot;&quot;&quot; 820 with theme_context(self._theme_with_defaults()): --&gt; 821 return self._plot(pyplot) 822 823 def _plot(self, pyplot: bool = False) -&gt; Plotter: /data/anaconda3/shared/xxxxxxxxxx3.8/lib/python3.8/site-packages/seaborn/_core/plot.py in _plot(self, pyplot) 834 # Process the scale spec for coordinate variables and transform their data 835 coord_vars = [v for v in self._variables if re.match(r&quot;^x|y&quot;, v)] --&gt; 836 plotter._setup_scales(self, common, layers, coord_vars) 837 838 # Apply statistical transform(s) /data/anaconda3/shared/xxxxxxxxxx3.8/lib/python3.8/site-packages/seaborn/_core/plot.py in _setup_scales(self, p, common, layers, variables) 1217 1218 prop = PROPERTIES[prop_key] -&gt; 1219 scale = self._get_scale(p, scale_key, prop, var_df[var]) 1220 1221 if scale_key not in p._variables: /data/anaconda3/shared/xxxxxxxxxx3.8/lib/python3.8/site-packages/seaborn/_core/plot.py in _get_scale(self, spec, var, prop, values) 1143 scale = prop.infer_scale(arg, values) 1144 else: -&gt; 1145 scale = prop.default_scale(values) 1146 1147 return scale /data/anaconda3/shared/xxxxxxxxxx3.8/lib/python3.8/site-packages/seaborn/_core/properties.py in default_scale(self, data) 65 &quot;&quot;&quot;Given data, initialize appropriate scale class.&quot;&quot;&quot; 66 ---&gt; 67 var_type = variable_type(data, boolean_type=&quot;boolean&quot;, strict_boolean=True) 68 if var_type == &quot;numeric&quot;: 69 return Continuous() /data/anaconda3/shared/xxxxxxxxxx/lib/python3.8/site-packages/seaborn/_core/rules.py in variable_type(vector, boolean_type, strict_boolean) 95 else: 96 boolean_dtypes = [&quot;bool&quot;, &quot;boolean&quot;] ---&gt; 97 boolean_vector = vector.dtype in boolean_dtypes 98 else: 99 boolean_vector = bool(np.isin(vector, [0, 1, np.nan]).all()) TypeError: data type 'boolean' not understood </code></pre> <p>UPDATE: These are 
the package versions I am currently using. I will likely ask our environment administrator to update them.</p> <pre><code>Matplotlib version: 3.4.3 Numpy version: 1.20.3 Pandas version: 1.3.3 Statsmodels version: 0.12.2 </code></pre>
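From staring at the trace, the failing line `vector.dtype in boolean_dtypes` asks NumPy to compare a dtype against the string `'boolean'` (a pandas extension dtype name plain NumPy does not know), and NumPy 1.20 appears to raise `TypeError` on that comparison where newer NumPy/pandas tolerate it, which is presumably why updating the packages is the fix. A small guard illustrating the comparison (a sketch of the failure mode, not seaborn's actual code):

```python
import numpy as np

def is_boolean_dtype(dtype) -> bool:
    """Mimic seaborn's check, tolerating NumPy versions where comparing a
    dtype to the unknown string 'boolean' raises TypeError."""
    try:
        return dtype in ("bool", "boolean")
    except TypeError:
        # Older NumPy: fall back to comparing the dtype's own name.
        return np.dtype(dtype).name == "bool"

print(is_boolean_dtype(np.dtype("bool")), is_boolean_dtype(np.dtype("float64")))
```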
<python><seaborn><seaborn-objects>
2023-05-18 20:33:24
0
935
jpm_phd
76,284,275
4,422,095
Vanishing data in PySpark: How to get it to stop vanishing?
<p>I'm having a problem with my PySpark script. My task is basically</p> <ol> <li>Import data into PySpark from mySQL database.</li> <li>Do some transformations</li> <li>Write the transformed data back to the MySQL database</li> </ol> <p>I can't show you the full code but I can show you an outline of what it looks like basically.</p> <pre class="lang-py prettyprint-override"><code># load the SparkSession configs = get_configs_for_spark() spark = get_spark_session(configs) # open SSH tunnel with VM with SSHTunnelForwarder(**get_ssh_tunnel_args(configs)) as tunnel: # grab unprocessed data df = get_raw_data(configs, tunnel, spark) # transform the data df = transform_my_data_1(configs, df) # Get the number of rows in the dataframe num_rows = df.count() # Determine the number of subsets (each with 100 rows) subset_size = 100 num_partitions = int(np.ceil(num_rows / subset_size)) # Add a partition_id column to your DataFrame df = df.repartition(num_partitions) # Specify the desired number of partitions # Add a partition_id column to your DataFrame df = df.withColumn(&quot;partition_id&quot;, spark_partition_id()) # Group by partition_id and count the occurrences partition_counts = df.groupBy(&quot;partition_id&quot;).count() # create the Azure client to store data to blob storage container_client = create_azure_client(configs) # Iterate over the subsets for i in range(num_partitions): subset_df = df.filter(df.partition_id == i) # Run transform_my_data_2 on the subset dataframe subset_df = transform_my_data_2(configs, container_client, subset_df) # Write the subset dataframe to MySQL DB write_data_to_mysql_db(configs, tunnel, subset_df.drop(&quot;partition_id&quot;)) # print the size of the data frame and expected size # Print progress when partitions have been executed expected_df_count = partition_counts.filter(partition_counts.partition_id == i).select(&quot;count&quot;).first()[0] print(f&quot;{i+1} / {num_partitions} partitions processed. 
|| subset df Size: Expected({expected_df_count}) Actual({subset_df.count()}) &quot;) # end spark session spark.stop() </code></pre> <p>My function for writing the data to the MySQL db is</p> <pre class="lang-py prettyprint-override"><code>def write_data_to_mysql_db(configs, tunnel, df): database = configs[&quot;MySQL&quot;][&quot;database&quot;] username = configs[&quot;MySQL&quot;][&quot;username&quot;] password = configs[&quot;MySQL&quot;][&quot;password&quot;] driver = configs[&quot;MySQL&quot;][&quot;driver&quot;] table = configs[&quot;MySQL&quot;][&quot;table&quot;] # Define the JDBC URL for the MySQL database url = f&quot;jdbc:mysql://localhost:{tunnel.local_bind_port}/{database}&quot; # Define the database properties for authentication properties = { &quot;user&quot;: username, &quot;password&quot;: password, &quot;driver&quot;: driver } df.write.jdbc(url=url, table=table, mode=&quot;append&quot;, properties=properties) </code></pre> <p>My function for getting the information from the MySQL db is</p> <pre class="lang-py prettyprint-override"><code> def get_raw_data(configs, tunnel, spark_session): &quot;&quot;&quot; gets raw data &quot;&quot;&quot; database = configs[&quot;MySQL&quot;][&quot;database&quot;] username = configs[&quot;MySQL&quot;][&quot;username&quot;] password = configs[&quot;MySQL&quot;][&quot;password&quot;] # Define the MySQL JDBC URL url = f'jdbc:mysql://localhost:{tunnel.local_bind_port}/{database}' # Create a dataframe by reading the data from the MySQL database df = spark_session.read.jdbc(url=url, table=transription_table, properties={&quot;user&quot;: username, &quot;password&quot;: password, &quot;characterSetResults&quot;: &quot;utf8mb4&quot;}) # Execute custom SQL query to get data my_query = f&quot;&quot;&quot;QUERY REDACTED&quot;&quot;&quot; df = spark_session.read.jdbc(url=url, table=f&quot;({my_query}) as temp&quot;, properties={&quot;user&quot;: username, &quot;password&quot;: password}) return df </code></pre> <p>I checked 
the data after doing</p> <pre class="lang-py prettyprint-override"><code># grab unprocessed data df = get_raw_data(configs, tunnel, spark) # transform the data df = transform_my_data_1(configs, df) </code></pre> <p>and the data looked exactly as I would have expected. However, when I run the full script, many of the rows that should be written back somehow vanish. Initially, I had my data frame organized into several partitions each with approximately the same size.</p> <p>On the small set of data I tried, I initially started off with</p> <pre><code>+------------+-----+ |partition_id|count| +------------+-----+ | 0| 90| | 1| 90| | 2| 90| | 3| 90| | 4| 90| | 5| 89| | 6| 89| | 7| 90| | 8| 90| +------------+-----+ </code></pre> <p>But then, after doing a few write operations, suddenly the counts started changing, implying there are fewer rows in my data frame and data is being lost.</p> <pre><code>+------------+-----+ |partition_id|count| +------------+-----+ | 0| 87| | 1| 87| | 2| 87| | 3| 87| | 4| 87| | 5| 86| | 6| 86| | 7| 87| | 8| 87| +------------+-----+ </code></pre> <p>When I checked the MySQL database when the script finished, indeed some of the data was not present.</p> <p>The fact that some of the data vanishes is very strange. It shouldn't be doing this because data frames are immutable in Spark.</p> <p>To test the problem, I've tried various debugging strategies.</p> <h2>Debugging 1: Trying code on fake data</h2> <p>One that seemed highly effective was that, rather than using the data from the MySQL database, I generated a fake data set and then ran my code on that. This time, when I checked the amount of data in each partition, it remained consistently at</p> <pre><code>+------------+-----+ |partition_id|count| +------------+-----+ | 0| 100| | 1| 100| | 2| 100| | 3| 100| | 4| 100| | 5| 100| | 6| 100| | 7| 100| | 8| 100| | 9| 100| +------------+-----+ </code></pre> <p>So the point was, there was no observed loss of rows. 
This worked perfectly and allowed me to run my code just as I expected.</p> <p>But obviously using fake data isn’t what I want. I need my code to work on the actual data.</p> <h2>Debugging 2: Using df.persist()</h2> <p>I also tried using df.persist(). This seems to resolve the problem but it makes the computation so slow that it's not feasible to run it.</p> <pre><code># Iterate over the subsets for i in range(num_partitions): if not df.is_cached(): df.persist() subset_df = df.filter(df.partition_id == i) # Run transform_my_data_2 on the subset dataframe subset_df = transform_my_data_2(configs, container_client, subset_df) # Write the subset dataframe to MySQL DB write_data_to_mysql_db(configs, tunnel, subset_df.drop(&quot;partition_id&quot;)) subset_df.unpersist() # print the size of the data frame and expected size # Print progress when partitions have been executed expected_df_count = partition_counts.filter(partition_counts.partition_id == i).select(&quot;count&quot;).first()[0] print(f&quot;{i+1} / {num_partitions} partitions processed. || subset df Size: Expected({expected_df_count}) Actual({subset_df.count()}) &quot;) df.unpersist() </code></pre> <p>This seemed to solve the issue insofar as the code would run and no data was lost, but it runs so slow that it isn’t feasible to scale.</p> <h2>Questions</h2> <p>I don't understand what this tells me about what's causing my problem.</p> <ol> <li>Why do my scripts work when I use data generated by my script instead of imported from the MySQL database?</li> <li>Why does using df.persist() help solve this vanishing data problem but make it so slow that it's not feasible to run except on tiny data frames (e.g. 1000 rows)?</li> <li>How can I run my code on the real data without having these rows vanish?</li> </ol>
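My current working theory, which I would like confirmed: <code>repartition()</code> without columns distributes rows round-robin, Spark recomputes the lineage (including the JDBC read) on every action, and the row order coming back from the database is not guaranteed, so <code>spark_partition_id()</code> can assign a given row to a different partition on each recomputation. That would explain all three observations: my generated data is deterministic, <code>persist()</code> freezes one materialisation, and real JDBC data loses/duplicates rows across the repeated <code>filter</code>/<code>count</code>/write actions. The fix would be to derive the bucket deterministically from stable key columns instead; in PySpark something like <code>F.crc32(F.concat_ws("|", F.col("id"), F.col("datetime"))) % num_partitions</code> (untested here). The idea in plain Python, showing that hashing a stable key is reproducible across "recomputations":

```python
import zlib

NUM_PARTITIONS = 9

def bucket_of(row_key: str) -> int:
    """Deterministic bucket from a stable key: the same row always lands in
    the same bucket, no matter how many times the pipeline is re-evaluated."""
    return zlib.crc32(row_key.encode()) % NUM_PARTITIONS

keys = [f"flight-{i}|2023-05-18T12:00:{i % 60:02d}" for i in range(500)]
first = [bucket_of(k) for k in keys]
second = [bucket_of(k) for k in keys]  # a "recomputation" gives identical ids
print(first == second)  # True
```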
<python><mysql><apache-spark><pyspark>
2023-05-18 20:29:59
1
2,244
Stan Shunpike
76,284,273
3,858,193
Sagemaker Sklearn Transformation Jobs not able to process parquet file
<p>I have a trained model built using the sklearn container (<code>framework_version=&quot;0.23-1&quot;</code>). While running a batch transform job using a parquet file I am getting the below error:</p> <pre><code> File &quot;transform_job.py&quot;, line 47, in &lt;module&gt; content_type='application/x-parquet') return run_func(*args, **kwargs) File &quot;/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.7/site-packages/sagemaker/transformer.py&quot;, line 296, in transform batch_data_capture_config, transform_job.start(TransformInput, TransformOutput, TransformResources, **kwargs) if _payload_size_within_limit(buffer + element, size): TypeError: can only concatenate str (not &quot;bytes&quot;) to str </code></pre> <p>Code:</p> <pre><code>final_model = SKLearnModel( model_data=latest_model_data , role=exec_role , entry_point='entrypoint.py' , framework_version=&quot;0.23-1&quot;, dependencies=[&quot;constants.py&quot;], ) css_transformer = final_model.transformer(instance_count=inference_instance_count , instance_type=inference_instance_type , output_path=output_path , accept='application/x-parquet' , max_payload=2 , max_concurrent_transforms=1 , strategy = 'MultiRecord' ) css_transformer.transform(data=s3_data_partition_prefix, data_type='S3Prefix', content_type='application/x-parquet') </code></pre>
<python><machine-learning><amazon-sagemaker>
2023-05-18 20:29:42
1
1,558
user3858193
76,284,095
6,912,830
SIGUSR1 signal mysteriously fails with `tar`, kills process instead of reporting progress
<p>I'm executing a <code>tar</code> process with the <code>subprocess</code> module and I discovered the ability to use signals to <a href="https://www.gnu.org/software/tar/manual/html_section/verbose.html" rel="nofollow noreferrer">get progress information out of it</a> (sent to stderr).</p> <pre class="lang-bash prettyprint-override"><code>$ tar -xpf archive.tar --totals=SIGUSR1 ./blah $ pkill -SIGUSR1 tar # separate terminal, session etc. </code></pre> <p>Unfortunately, I am unable to replicate this sequence <em>successfully</em> in Python.</p> <pre class="lang-py prettyprint-override"><code>import os import subprocess import signal import time import sys # Define the command to execute command = [&quot;tar&quot;, &quot;-xpf&quot;, sys.argv[2], &quot;-C&quot;, sys.argv[1], &quot;--totals=SIGUSR1&quot;] # Start the subprocess print(' '.join(command)) process = subprocess.Popen(command, preexec_fn=os.setsid, stderr=subprocess.PIPE) try: while True: # Ping the subprocess with SIGUSR1 signal # NOTWORK: process.send_signal(signal.SIGUSR1) # NOTWORK: os.killpg(os.getpgid(process.pid), signal.SIGUSR1) subprocess.Popen([&quot;kill&quot;, &quot;-SIGUSR1&quot;, str(process.pid)]) print(process.stderr.readline().decode(&quot;utf-8&quot;).strip()) # print(process.stdout.readline().decode(&quot;utf-8&quot;).strip()) # Wait for a specified interval time.sleep(1.9) # Adjust the interval as needed except KeyboardInterrupt: # Handle Ctrl+C to gracefully terminate the script process.terminate() # Wait for the subprocess to complete process.wait() </code></pre> <p>The only style that works is opening a <code>kill</code> process with <code>Popen</code> like a cave man.</p> <p>I noticed that after using <code>os.kill</code> or <code>Popen.send_signal</code>, the <code>.poll()</code> method returns -10. Seems like it dies after receiving the signal for some reason?</p>
<python><subprocess><signals><tar>
2023-05-18 20:01:11
1
759
Xevion
76,284,094
979,242
How to parse a string to access a python dict
<p>This is my python dict:</p> <pre><code>my_dict={ &quot;key&quot;:{ &quot;outbound_email_detail&quot;:{ &quot;from&quot;:&quot;value&quot; } } } </code></pre> <p>Here is my input String :</p> <pre><code>input=&quot;key#outbound_email_detail#from&quot; </code></pre> <p>How could I parse my input to access python dict. I tried with the split but not sure how to assemble it back ?</p> <p>Expected Output is :</p> <pre><code>print(my_dict[&quot;key&quot;][&quot;outbound_email_detail&quot;][&quot;from&quot;]) </code></pre>
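For context, the "assemble it back" step I am missing would fold the split keys over the dict one at a time, e.g. with <code>functools.reduce</code> (I renamed <code>input</code> to <code>path</code> here to avoid shadowing the builtin):

```python
from functools import reduce

my_dict = {"key": {"outbound_email_detail": {"from": "value"}}}
path = "key#outbound_email_detail#from"

# Walk the nested dict one key at a time:
# equivalent to my_dict["key"]["outbound_email_detail"]["from"]
result = reduce(lambda d, k: d[k], path.split("#"), my_dict)
print(result)  # value
```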
<python>
2023-05-18 20:01:05
2
505
PiaklA
76,284,080
4,709,300
Is it possible to create a Substate type in Python that removes the first section of a Literal type?
<p>I have a Python <code>Literal</code> type that includes various period separated strings. I would like to also have a <code>Literal</code> type that includes the same strings but without the first period-delimited section. I could manually do this, but I don't want to do that every time I need to remove a section.</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal # Period-delimited string literals ActiveState = Literal['active.green', 'active.yellow', 'active.red'] State = ActiveState | OtherState | AnotherState # I'd like to do this with multiple # Remove the first period-delimited section for when we're only looking at stuff after that def handle_active_state(substate: Literal['green', 'yellow', 'red']) -&gt; None: ... </code></pre> <p>I've tried using <code>typing.get_args</code> to iterate over and modify the values in <code>ActiveState</code>, but I was unable to then find a way to use those values in a new <code>Literal</code> type. I tried using <code>types.GenericAlias</code> to recreate the <code>Literal</code>, but it thought the result was a value and not a type or type alias when I tried to use it.</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal, get_args from types import GenericAlias ActiveState = Literal['active.green', 'active.yellow', 'active.red'] ActiveSubstate = GenericAlias(Literal, tuple('.'.join(state.split('.')[1:]) for state in get_args(ActiveState))) # While this does generate Literal['green', 'yellow', 'red'], it's not usable def handle_active_state(substate: ActiveSubstate) -&gt; None: ... # pylance: Illegal type annotation: variable not allowed unless it is a type alias # Also encountered similar issue with mypy </code></pre> <p>My goal is to be able to make some sort of reusable type that does this automatically. For example, something like <code>Substate[ActiveState]</code> would be ideal. Is this possible in Python? 
If not, why is it not possible?</p> <h2>Edit:</h2> <p>This is for static type analysis. This kind of static type analysis does work in some other languages too. <a href="https://www.typescriptlang.org/play?#code/C4TwDgpgBAggxsAlgNwgZWAQ2NAvFAckwRQgDoBzAJwggDsCoAfQ4pVMkCAG24HsA7oxZESHGgBMCAbgBQoSFDQBXAEYBnLDgA8AFQB8UfLqgQAHjjoT1UAAYASAN6I6AMwhUoAMURVNAXzInF3dPGFccKn9bKAB%20WAiPKAAuKF05BWh4dnQ1TWw8JTytCG1s0gwC-TlZOD46TSgKPj4JVPLUFQ0So0JqWgY5OobgKFVMNtgxXO6C3oJVbmUIGVkgA" rel="nofollow noreferrer">Example of this working in TypeScript:</a></p> <pre class="lang-js prettyprint-override"><code>type ActiveState = 'active.green' | 'active.yellow' | 'active.red'; type Substate&lt;T&gt; = T extends `${infer First}.${infer After}` ? After : T; type ActiveSubstate = Substate&lt;ActiveState&gt;; const good: ActiveSubstate = 'green'; const bad: ActiveSubstate = 'blue'; // As expected: Type '&quot;blue&quot;' is not assignable to type 'ActiveSubstate'. </code></pre>
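For completeness, here is the closest fallback I have found: since Python's typing has no equivalent of TypeScript's conditional/template-literal types, the stripped values can only be built at runtime, and static checkers will not narrow through the construction (hence the plain <code>str</code> annotation):

```python
from typing import Literal, get_args

ActiveState = Literal["active.green", "active.yellow", "active.red"]

# Runtime-only: build the stripped values for validation. Static checkers
# see `substate: str` below; they cannot follow this construction.
ACTIVE_SUBSTATES = tuple(s.split(".", 1)[1] for s in get_args(ActiveState))

def handle_active_state(substate: str) -> None:
    if substate not in ACTIVE_SUBSTATES:
        raise ValueError(f"unknown substate: {substate!r}")

print(ACTIVE_SUBSTATES)  # ('green', 'yellow', 'red')
```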
<python><types><mypy><pylance>
2023-05-18 19:59:04
0
5,105
andria_girl
76,284,035
14,244,437
Two backends, one DB - what's the best way to integrate data with Django?
<p>Currently I'm working on a project that will add functionality to a legacy application.</p> <p>For a number of reasons that are completely out of my control, this functionality will reside in a different backend.</p> <p>Not only that, each backend is actually in a different programming language (legacy is ruby, using Rails, new is Python, using Django), and different architectures (legacy is a monolithic app, django will provide a Rest API for a React frontend).</p> <p>So far, so good. I've already solved how to integrate user sessions, and mapped each table as a django model using the lifesaver command <code>python manage.py inspectdb &gt; legacy_models.py</code>.</p> <p>Now, I've got a bigger issue on my hands. Some of the old tables are not normalized, or there're missing fields which I'll need for the new application. I can't really mess with the old tables and create new fields, alter relationships or anything like that, since this legacy app is maintained by a separate corporation and we can't really depend on changes from their side, nor can we guarantee that changing the old tables won't mess with anything in this application.</p> <p>Therefore, I took another route and decided to create tables on Django that'll extend the old tables with the info I'll need.</p> <p>To make things easier to visualize, just imagine a table like <code>Employee</code>, but this table does not have a field <code>productivity</code> (that will be calculated in the new application). 
So I'd create a new table such as <code>EmployeeDetails</code> that will contain a OneToOne relationship with employee + any new field I need.</p> <p>The issue here is:</p> <p>I'd need to guarantee that any new Employee instance (created in the legacy app) has an EmployeeDetails.</p> <p>I've thought of two ways to do that:</p> <ul> <li><p>Using a Celery Beat Task and a separate DB table to keep track of the last Employee synced, in order to verify if new employees were created.</p> </li> <li><p>Create a Postgres Trigger.</p> </li> </ul> <p>I'm tempted to use the first solution, since it's not possible for it to break anything in the legacy application (I mean, who knows what might happen in the legacy application during this employee trigger...)</p> <p>I'm seriously not confident about this approach though. This is not how I like to design my backends, but the more I look in the legacy app the more my brain screams <code>DITCH THE LEGACY, REDO THIS IN DJANGO THE CORRECT WAY</code>... but well, I'm not allowed to do that...</p> <p>Any thoughts about this?</p>
<python><ruby-on-rails><django><ruby><integration>
2023-05-18 19:51:23
1
481
andrepz
76,284,030
8,942,319
format function name as string and then call it? fn_call = f"static_{dynamic}" then fn_call()
<p>I have a series of functions that do some data cleaning.</p> <pre><code>def clean_field_1(field_1): return field_1 </code></pre> <p>for example.</p> <p>When looping through a dict of fields, field_1 may not be there. But if it is I'd like to call <code>clean_field_1()</code>.</p> <pre><code>for field in dict_object: fn_call = f&quot;clean_{field}&quot; fn_call(field) </code></pre> <p>That's to illustrate the goal. In this case that final line errors because the str object is not callable.</p> <p>Or it's possible that my approach is totally off base and I need a different one. Thanks for any pointers.</p>
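For reference, the usual pattern for the goal described above is to look the function object up by name instead of calling the string — either through an explicit registry dict or `globals()`. A minimal sketch (the field and cleaner names are illustrative):

```python
def clean_field_1(value):
    return value.strip()

# Explicit registry; globals().get(f"clean_{field}") works too, but a dict
# limits the lookup to functions you actually intend to expose.
CLEANERS = {"field_1": clean_field_1}

def clean_record(record):
    cleaned = {}
    for field, value in record.items():
        cleaner = CLEANERS.get(field)           # None when no cleaner exists
        cleaned[field] = cleaner(value) if cleaner is not None else value
    return cleaned
```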
<python><python-3.x><string><function>
2023-05-18 19:50:26
0
913
sam
76,283,978
9,386,819
Why doesn't seaborn barplot retain the order of a groupby index?
<p>I'm trying to create a seaborn barplot of a groupby object. Here is an example:</p> <pre><code>data = {'ID':[1, 2, 3, 1, 2, 3], 'value':[10, 20, 30, 100, 200, 300], 'other': ['a', 'b', 'c', 'd', 'e','f'] } data = pd.DataFrame(data) </code></pre> <p>I want to use <code>groupby()</code> to group the data by <code>ID</code>, calculate the mean <code>value</code>, and sort in descending order by <code>value</code>:</p> <pre><code>grouped = data.groupby('ID').mean() grouped = grouped.sort_values(by='value', ascending=False) grouped </code></pre> <p><a href="https://i.sstatic.net/9elwt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9elwt.png" alt="groupby result" /></a></p> <p>Now I want to plot this using seaborn's barplot:</p> <pre><code>sns.barplot(x=grouped.index, y=grouped['value']) </code></pre> <p>This is the result:</p> <p><a href="https://i.sstatic.net/Ddl7p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ddl7p.png" alt="bar plot" /></a></p> <p>The plot retains the original order and ignores the fact that I reassigned <code>grouped</code> to change the order.</p> <p>Why does this happen? I've tried so many different approaches, and it always yields the same results.</p>
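seaborn's `barplot` sorts the categories itself unless told otherwise; passing `order=` is the documented way to impose your own ordering. A sketch of the example above with the sorted order preserved (an off-screen backend is selected so it runs anywhere):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen; not needed in a notebook
import seaborn as sns

data = pd.DataFrame({"ID": [1, 2, 3, 1, 2, 3],
                     "value": [10, 20, 30, 100, 200, 300]})
grouped = data.groupby("ID")["value"].mean().sort_values(ascending=False)

# order= tells barplot which category order to draw,
# instead of letting it re-sort the categories itself
ax = sns.barplot(x=grouped.index, y=grouped.values, order=grouped.index)
```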
<python><pandas><group-by><seaborn><bar-chart>
2023-05-18 19:41:52
2
414
NaiveBae
76,283,974
4,268,602
Using scpi commands on python with mac Ventura - is this possible?
<p>I am trying to use easy_scpi.</p> <p>However, its documentation says that it requires a backend, and I am not able to find a backend compatible with macOS Ventura 13.3.1.</p> <p>Is it not possible to use this library on Ventura 13.3.1 for this reason? Is there a workaround or an alternative backend to NI-VISA?</p>

<python><pyvisa>
2023-05-18 19:40:42
1
4,156
Daniel Paczuski Bak
76,283,963
5,446,972
How Can I Make Kate Properly Indent Snippets with Multiple Lines
<p>I enjoy using Kate snippets for quick Python scripts, but I'm having a hard time getting them to properly inherit or detect indentation levels after the first line. Consider the following snippet:</p> <pre class="lang-py prettyprint-override"><code>print(&quot;line1&quot;) print(&quot;line2&quot;) </code></pre> <p>I want to insert this in 3 places in my code, at the global level, at the definition level, and at the class level. When I insert, it looks like this:</p> <pre class="lang-py prettyprint-override"><code>print(&quot;line1&quot;) print(&quot;line2&quot;) def test(): print(&quot;line1&quot;) print(&quot;line2&quot;)#&lt;- not what I intend return class MyClass(): def mydef(): print(&quot;line1&quot;) print(&quot;line2&quot;)#&lt;- not what I intend return return </code></pre> <p>Each time, the snippet's second line is inserted at the 0th indentation level.</p> <p>If I create a new snippet with static indentation like:</p> <pre class="lang-py prettyprint-override"><code>print(&quot;line3&quot;) print(&quot;line4&quot;) </code></pre> <p>the <code>print(&quot;line3&quot;)</code> obeys the current indentation level, but <code>print(&quot;line4&quot;)</code> always reverts to the 1st indentation level verbatim. This causes problems for deeper indentation. See:</p> <pre class="lang-py prettyprint-override"><code>print(&quot;line3&quot;) print(&quot;line4&quot;)#&lt;- not what I intend def test(): print(&quot;line3&quot;) print(&quot;line4&quot;)#&lt;- correct only by luck class MyClass(): def mydef(): print(&quot;line3&quot;) print(&quot;line4&quot;)#&lt;- not what I intend return return </code></pre> <p>I want subsequent lines in my code to automatically detect the current indentation level, but I can't figure out how. It seems like there should be some method in the JavaScript backend, but I can't find anything which is available to the snippets user without modifying a <code>.js</code> file, something which is not extensible between computers. 
E.g., I should be able to create a snippet that looks like:</p> <pre><code>class MySnippetClass(): ${cur_indent()} def my_snippet_def(${arg1}): ${cur_indent()} return ${cur_indent()} return </code></pre> <p>How do I fix this and make large, automagically indented snippets?</p>
<python><indentation><code-snippets><auto-indent><kate>
2023-05-18 19:39:01
0
490
WesH
76,283,962
13,630,719
How to make multiple APIs in parallel using Python Django?
<p>Currently, I have 2 API endpoints. The first API endpoint takes in a file as input and performs some calculations that take up to 30 minutes. The second API endpoint is a healthcheck that returns the status of the first API. Possible healthcheck values are 15% complete, 25% complete, 50% complete, 75% complete, 90% complete and 100% complete. Using Python, I want to write code that calls the first API and then every 10 seconds calls the second API and returns the percentage complete. Then I would like to end the healthcheck calls once 100% has been reached. This is all code used in a Django project, where I would like the UI to get updated. Without the healthcheck, the code for that constant update looks something like: <code>return render(request, 'processing.html', context)</code>.</p> <p>The code that I was thinking I could do this with would be something like the one below. That being said, I'm not sure if I'm handling the parallel asynchronous tasks correctly. I was wondering whether multiprocessing or threading would be a good approach here, but I'm not sure of the exact syntax.</p> <pre><code>import requests import time async def first_api(body, api_url): async with aiohttp.ClientSession() as session: async with session.post(api_url, data=body) as response: return await response.json() async def health_api(body, api_url): async with aiohttp.ClientSession() as session: async with session.post(api_url, data=body) as response: health = response.json().get('health') return health def process(): await first_api() while True: health = call_second_api() # I want to render the updated page if the new health is different than before but that would break the loop so I'm unsure of how to proceed. return render(request, 'processing.html', context) if health == 100: break time.sleep(10) </code></pre> <p>There is a final consideration that I haven't incorporated yet. 
How exactly can I do all of the parallel API calls while continuously rendering the UI with the accurate health value? I'm a bit lost on how to proceed here. In particular, I would like assistance with 1. The proper way how to use asynchronous calls to make these parallel API calls and 2. The proper way to continuously update the webpage with these dynamically changing health values.</p> <p>Any assistance would be much appreciated!</p>
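As a shape for the polling part, here is a minimal asyncio sketch: start the long job as a task, then poll a health value until it reports 100%. Both coroutines are fakes standing in for the real aiohttp calls to the two endpoints, and the sleep intervals are shrunk so the sketch runs instantly. In Django, the browser-facing updates would come from the page polling a status view (or from WebSockets via Channels) rather than from re-rendering inside a server-side loop:

```python
import asyncio

async def long_job(progress):
    """Stands in for the first API; publishes progress as it works."""
    for pct in (15, 25, 50, 75, 90, 100):
        await asyncio.sleep(0.01)
        progress.append(pct)

async def poll_health(progress, interval=0.005):
    """Stands in for calling the health endpoint every `interval` seconds."""
    seen = []
    while True:
        health = progress[-1] if progress else 0
        if not seen or health != seen[-1]:
            seen.append(health)   # a UI push / status write would go here
        if health == 100:
            return seen
        await asyncio.sleep(interval)

async def process():
    progress = []
    job = asyncio.create_task(long_job(progress))  # runs concurrently
    seen = await poll_health(progress)             # poll until 100%
    await job
    return seen
```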
<python><django><multithreading><asynchronous>
2023-05-18 19:38:56
1
1,342
ENV
76,283,892
2,301,970
How to add an information display button to the interactive plot toolbar
<p>The matplotlib plot toolbar has some support for <a href="https://matplotlib.org/stable/gallery/user_interfaces/toolmanager_sgskip.html" rel="nofollow noreferrer">customization</a>. This example is provided on the official documentation:</p> <pre><code>import matplotlib.pyplot as plt from matplotlib.backend_tools import ToolBase, ToolToggleBase plt.rcParams['toolbar'] = 'toolmanager' class ListTools(ToolBase): &quot;&quot;&quot;List all the tools controlled by the `ToolManager`.&quot;&quot;&quot; default_keymap = 'm' # keyboard shortcut description = 'List Tools' def trigger(self, *args, **kwargs): print('_' * 80) fmt_tool = &quot;{:12} {:45} {}&quot;.format print(fmt_tool('Name (id)', 'Tool description', 'Keymap')) print('-' * 80) tools = self.toolmanager.tools for name in sorted(tools): if not tools[name].description: continue keys = ', '.join(sorted(self.toolmanager.get_tool_keymap(name))) print(fmt_tool(name, tools[name].description, keys)) print('_' * 80) fmt_active_toggle = &quot;{0!s:12} {1!s:45}&quot;.format print(&quot;Active Toggle tools&quot;) print(fmt_active_toggle(&quot;Group&quot;, &quot;Active&quot;)) print('-' * 80) for group, active in self.toolmanager.active_toggle.items(): print(fmt_active_toggle(group, active)) class GroupHideTool(ToolToggleBase): &quot;&quot;&quot;Show lines with a given gid.&quot;&quot;&quot; default_keymap = 'S' description = 'Show by gid' default_toggled = True def __init__(self, *args, gid, **kwargs): self.gid = gid super().__init__(*args, **kwargs) def enable(self, *args): self.set_lines_visibility(True) def disable(self, *args): self.set_lines_visibility(False) def set_lines_visibility(self, state): for ax in self.figure.get_axes(): for line in ax.get_lines(): if line.get_gid() == self.gid: line.set_visible(state) self.figure.canvas.draw() fig = plt.figure() plt.plot([1, 2, 3], gid='mygroup') plt.plot([2, 3, 4], gid='unknown') plt.plot([3, 2, 1], gid='mygroup') # Add the custom tools that we created 
fig.canvas.manager.toolmanager.add_tool('List', ListTools) fig.canvas.manager.toolmanager.add_tool('Show', GroupHideTool, gid='mygroup') # Add an existing tool to new group `foo`. # It can be added as many times as we want fig.canvas.manager.toolbar.add_tool('zoom', 'foo') # Remove the forward button fig.canvas.manager.toolmanager.remove_tool('forward') # To add a custom tool to the toolbar at specific location inside # the navigation group fig.canvas.manager.toolbar.add_tool('Show', 'navigation', 1) plt.show() </code></pre> <p>Which opens this plot where you can hide/show some data:</p> <p><a href="https://i.sstatic.net/vFMqN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vFMqN.png" alt="![enter image description here" /></a></p> <p><strong>How can I add such a button to display some text (regarding the plot data) on a new window?</strong></p>
<python><matplotlib><user-interface><toolbar>
2023-05-18 19:29:57
1
693
Delosari
76,283,757
6,828,329
Manipulating image/gif files and their .bin or .dat representations, and then converting them back to image/gif representation
<p>I'm trying to play around with Python and image/gif files and their .bin or .dat representations. For instance, I want to take a PNG file, convert it to its .bin or .dat file (its binary version), and then do something to manipulate the image, such as turning it from greyscale to black-and-white, or taking a black-and-white image and inverting all of the 0s and 1s so that the black parts become white and the white parts become black, and then convert it back into an actual image/gif. For instance, doing the black-to-white flipping for this PNG image:</p> <p><a href="https://i.sstatic.net/sC7JH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sC7JH.png" alt="enter image description here" /></a></p> <p>This is kind of like a hands-on educational exercise to better understand the relation between images and binary.</p> <p>I want to do this using the Python standard library functions, similar to what would be available in C. I don't want to use external libraries like numpy or opencv.</p> <p>I can't seem to find any sources that explain this in a straight-forward way. How can I do this?</p> <hr /> <p>This is what I've tried so far:</p> <pre><code>import base64 with open(&quot;984269812_1.png&quot;, &quot;rb&quot;) as f: png_encoded = base64.b64encode(f.read()) encoded_b2 = &quot;&quot;.join([format(n, '08b') for n in png_encoded]) print(encoded_b2) flippedBinary = ''.join('1' if x == '0' else '0' for x in encoded_b2) print(flippedBinary) # first split into 8-bit chunks bit_strings = [flippedBinary[i:i + 8] for i in range(0, len(flippedBinary), 8)] # then convert to integers byte_list = [int(b, 2) for b in bit_strings] with open('byte.dat', 'wb') as f: f.write(bytearray(byte_list)) # convert to bytearray before writing </code></pre> <p>This just inverts the bits for the previous image and produces a .dat file. I haven't been able to convert this back to a PNG image, and I'm not sure if I'm on the right track here.</p>
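One note on the approach above: the base64 detour isn't needed — a file can be read and manipulated as raw bytes directly — and for PNG specifically the pixel data is DEFLATE-compressed, so flipping raw file bytes corrupts the file rather than inverting the image; a true black/white flip needs the pixels decoded first. A stdlib-only sketch of the raw-byte inversion itself:

```python
def invert_bytes(data: bytes) -> bytes:
    """Flip every bit: XOR with 0xFF inverts all 8 bits of each byte."""
    return bytes(b ^ 0xFF for b in data)

# typical use:
# with open("input.bin", "rb") as f:
#     flipped = invert_bytes(f.read())
# with open("output.bin", "wb") as f:
#     f.write(flipped)
```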
<python><image><png><gif><bin>
2023-05-18 19:05:14
1
2,406
The Pointer
76,283,754
8,365,573
I need help to split parquet file up to a maximum of 1000 lines with boto3
<p>I have a function to upload a parquet file in s3 using boto3. I need to split the files with a maximum of 1000 lines and upload them.</p> <pre><code>for i, file_data in enumerate(data): file_path = f'{path}{i:06}.{file_format.name}' save_to_s3(bucket, file_path, file_data, content_type=file_format.value, **kwargs) def save_to_s3(bucket: str | Bucket, path: str, data: (bytes | str), tags: dict[str, str] = None, content_type: str = &quot;&quot;, metadata: dict[str, str] = None) -&gt; None: bucket = Session(region_name=config.aws_region).resource('s3').Bucket(bucket) metadata = {**(metadata or {}), **{'created_by': config.spark_job_name}} data_hash = _get_content_hash(data) tags = _encode_object_tags(tags or {}) bucket.put_object( Key=path, Body=data, ContentMD5=data_hash, Metadata=metadata, Tagging=tags, ContentType=content_type, ) </code></pre>
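Since Parquet is a binary columnar format, the file bytes can't be split by lines; the rows have to be chunked first, with each batch then serialized (e.g. with pyarrow) and uploaded. The chunking step itself is simple — a sketch, assuming `rows` is a list-like of records:

```python
def chunk(rows, size=1000):
    """Split rows into consecutive batches of at most `size` items."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]
```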
<python><amazon-web-services><amazon-s3><boto3>
2023-05-18 19:04:47
1
522
Leticia Fatima
76,283,682
228,177
PYTHONPATH from launch.json not visible in Python script (VS Code)
<p>I would like to import methods from a directory one level above my current working directory. One way to do this is to manually append the directory to <code>sys.path</code>. I found a better solution for VSCode <a href="https://k0nze.dev/posts/python-relative-imports-vscode/" rel="nofollow noreferrer">on this post</a>. The solution described there involves creating a <code>launch.json</code> file with a configuration line that looks like <code>&quot;env&quot;: {&quot;PYTHONPATH&quot;: &quot;/tmp/a:/tmp/b&quot;}</code>. This method is <a href="https://code.visualstudio.com/docs/editor/debugging#_launch-configurations" rel="nofollow noreferrer">documented here</a>. Note that defining the environment variable in <code>.env</code> file works, but I would like to use the VSCode variable <code>$workspaceFolder</code>, which is not possible in <code>.env</code>.</p> <p>Using <code>launch.json</code>, I am not able to access the modified <code>PYTHONPATH</code>. In both interactive mode and Run mode, the path doesn't reflect the desired changes.</p> <p>Following is my directory structure:</p> <pre><code>project/ - .vscode - launch.json - conf/ - file1.py - src/ - file2.py - main.py - example.py </code></pre> <p>The file <code>launch.json</code> looks like this:</p> <pre><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python: main&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;module&quot;: &quot;main&quot;, &quot;justMyCode&quot;: true, &quot;env&quot;: {&quot;PYTHONPATH&quot;: &quot;${workspaceFolder}/conf/&quot;} }, { &quot;name&quot;: &quot;Python: example&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;module&quot;: &quot;example&quot;, &quot;justMyCode&quot;: true, &quot;env&quot;: {&quot;PYTHONPATH&quot;: &quot;${workspaceFolder}/conf/&quot;} }, ] } </code></pre> <p>When I run <code>main.py</code> or 
<code>example.py</code>, I don't see the <code>conf</code> directory in <code>sys.path</code>. What's the correct way to do this?</p> <p>Example to print <code>sys.path</code>:</p> <pre><code>import sys print(sys.path) </code></pre>
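Worth noting: `env` in `launch.json` only applies to runs started through that debug configuration ("Run and Debug"); the interactive window and plain terminal runs don't read it. A stdlib fallback that works however the file is launched is to compute the root relative to the file itself:

```python
import os
import sys

def add_project_root(start_path):
    """Insert the directory one level above start_path onto sys.path."""
    root = os.path.abspath(os.path.join(os.path.dirname(start_path), ".."))
    if root not in sys.path:
        sys.path.insert(0, root)   # prepend so it wins over site-packages
    return root

# typical use at the top of src/file2.py:
# add_project_root(__file__)
```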
<python><visual-studio-code><import><environment-variables><pythonpath>
2023-05-18 18:52:05
3
1,320
vpk
76,283,397
3,324,136
How can I use the 'afade' parameter in ffmpeg-python without getting an 'option not found' error?
<p>I am using the ffmpeg library with Python to add subtitles and a video as an overlay to a background still image. The code works fine without the <code>afade</code> parameter. However, when I add the <code>afade</code> code in, I get the error below.</p> <p><strong>Error</strong></p> <pre><code>Unrecognized option 'af fade=t=out:st=10:d=2'. Error splitting the argument list: Option not found </code></pre> <p><strong>Code</strong></p> <pre><code> try: subprocess.check_call([ 'ffmpeg', '-i', source_file, '-i', padding_image_file, '-filter_complex', '[0:v]scale=iw/'+ video_scale + ':ih/' + video_scale + ',setsar=1[main];' '[1:v]scale=720:1280[pad];' '[pad][main]overlay=' + video_overlay_x + ':'+ video_overlay_y + ',subtitles=' + subtitles_file + ':force_style=\'Alignment=6,OutlineColour=&amp;H100000000,BorderStyle=3,Outline=1,Shadow=0,Fontsize=8,MarginL=2,MarginV=20\'', '-c:a', 'copy', '-af &quot;afade=t=out:st=14.2:d=0.8&quot;', '-crf', '20', '-preset', 'fast', '-movflags', '+faststart', '-y', output_file ]) </code></pre> <p>I have tried passing <code>-af</code> in multiple different ways but am still getting the &quot;option not found&quot; error.</p>
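For the record, the failing element bundles the flag, its value, and literal quotes into a single argv entry, so ffmpeg sees one unknown option named `-af "afade=..."`. With `subprocess` there is no shell doing word-splitting: each flag and each value must be its own list element, with no extra quotes. A sketch of building just that fragment (note also that `-c:a copy` cannot be combined with an audio filter — the audio has to be re-encoded for `afade` to apply):

```python
def afade_args(start, duration):
    """Build the argv fragment for an audio fade-out (no embedded quotes)."""
    return ["-af", f"afade=t=out:st={start}:d={duration}"]
```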
<python><ffmpeg-python>
2023-05-18 18:05:22
0
417
user3324136
76,283,363
8,869,570
How to make groupby get rid of the column you're groupbying
<p>Is there an innate way to drop the column you're grouping by when using the <code>groupby</code> operation?</p> <p>By that, I mean, suppose you have a dataframe <code>df</code> with columns <code>dt</code> and <code>values</code>, and you run <code>groups = df.groupby(&quot;dt&quot;)</code>.</p> <p>When iterating over <code>groups</code></p> <pre><code>for start_dt, sub_df in groups: </code></pre> <p>you will see that sub_df still contains <code>dt</code>. Is there a natural way to drop this column with <code>groupby</code>?</p>
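As far as I know, no `groupby` flag drops the key column from the sub-frames, but grouping a *selection* of the other columns by the key Series has the same effect — the key never enters the sub-frames at all. A sketch:

```python
import pandas as pd

df = pd.DataFrame({"dt": [1, 1, 2], "values": [10, 20, 30]})

# Group only the non-key columns, using the key column as an external key;
# each sub_df then has no 'dt' column to drop.
groups = {start_dt: sub_df
          for start_dt, sub_df in df[["values"]].groupby(df["dt"])}
```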
<python><pandas>
2023-05-18 17:59:15
1
2,328
24n8
76,283,296
851,699
Tkinter - memory leak arising from "Frame.pack"
<p>I have a Tkinter application (running on macOS, M1 laptop) that allows you to select a video or livestream and play it in an adjacent frame.</p> <p>When it's a livestream, there is no need for a &quot;video progress&quot; slider, so I hide it.</p> <p>Here is the critical code:</p> <pre><code>class DetectionVideoPlayer: ... def __init__(self): self._progress_status_bar = tk.Frame(...) def _reset_ui_widgets(self, is_live: bool = False): self._progress_status_bar.pack_forget() if not is_live: self._progress_status_bar.pack(fill=tk.X, expand=True, side=tk.BOTTOM) </code></pre> <p>Now, when transitioning from a livestream (<code>is_live=True</code>) to a video (<code>is_live=False</code>), my entire application freezes and I get this rapid memory leak which (if I don't kill the process) will actually freeze the computer. Commenting out <code>self._progress_status_bar.pack(...)</code> gets rid of the leak. The line <code>self._progress_status_bar.pack(...)</code> is actually executed fairly quickly and the interpreter passes it; the application freezes thereafter (I presume on the next draw cycle).</p> <p>I'm trying to understand why this is happening. Does anyone here understand why calling <code>Frame.pack</code> can freeze the application, and why it can cause a (very rapid, like 1GB every 5s or so) memory leak?</p>
<python><tkinter><memory-leaks>
2023-05-18 17:50:57
1
13,753
Peter
76,283,254
8,869,570
Inheriting from an abstract class and defining an abstract method to be an existing child class's method
<p>Suppose you have an ABC <code>base</code> with a method <code>compute</code>, and 2 child classes <code>child1, child2</code> with <code>compute</code> defined. <code>child1, child2</code> only inherit from <code>base</code>.</p> <p>Then suppose you have another class <code>related</code> that inherits from <code>base</code> as well as some other classes not mentioned here.</p> <p><code>related</code> needs to define <code>compute</code> in this case, but its definition is either the same as <code>child1</code>'s or <code>child2</code>'s depending on a flag that's passed into <code>related</code>'s constructor.</p> <p>So here's an example of how I currently do it.</p> <pre><code>class related(base, some other classes...): def __init__(self, toggle_flag, some other args): self.toggle_flag = toggle_flag if toggle_flag: return child1.__init__(self, some other args) else: return child2.__init__(self, some other args) def compute(self): if self.toggle_flag: return child1.compute(self) else: return child2.compute(self) </code></pre> <p>I don't have much experience writing OOP in Python as I mostly have a C++ background, but this design here looks a bit strange to me. Is there another way to achieve the same functionality?</p>
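For comparison, the same behavior can be had without branching in every method by choosing a strategy object once and delegating to it. This sketch deliberately drops the ABC and multiple-inheritance details of the question to show just the delegation shape:

```python
class Child1:
    def compute(self):
        return "child1"

class Child2:
    def compute(self):
        return "child2"

class Related:
    def __init__(self, toggle_flag):
        # Pick the implementation once; every method then just delegates,
        # so no per-call flag checks are needed.
        self._impl = Child1() if toggle_flag else Child2()

    def compute(self):
        return self._impl.compute()
```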
<python><inheritance>
2023-05-18 17:44:52
2
2,328
24n8
76,283,207
8,079,611
Altair - Plot Number of Stars/Rating (starting from bar or point graph)
<p>I am plotting a bar graph using Altair. However, instead of having the bar, I was hoping to have the number of stars (i.e. the symbol). In my case, this data is the rating (from 1 star up to 5 stars) of, for example, a movie. Checking their documentation, I see a similar idea being used <a href="https://altair-viz.github.io/gallery/isotype.html" rel="nofollow noreferrer">here</a>; however, it does seem a pretty tricky path for simply plotting a rating.</p> <p>I am plotting my graph like this (please note that you can change from a bar to a point graph simply by replacing <code>mark_bar</code> with <code>mark_point</code>):</p> <pre><code>chart = ( alt.Chart(df) .mark_bar() .encode( x=alt.X( &quot;fund:O&quot;, axis=alt.Axis(title=None, labels=False, ticks=False, labelOverlap=&quot;greedy&quot;), ), y=alt.Y(&quot;value:Q&quot;, axis=alt.Axis(title=&quot;Value&quot;, grid=True, tickMinStep=1)), color=alt.Color( &quot;fund:N&quot;, legend=alt.Legend(orient=&quot;top&quot;, labelLimit=700, direction=&quot;vertical&quot;), ), shape=alt.Shape(&quot;value:N&quot;), tooltip=[&quot;fund:N&quot;, &quot;value&quot;, &quot;bucket&quot;, &quot;as of&quot;], column=alt.Column( &quot;bucket:N&quot;, header=alt.Header(title=None, labelOrient=&quot;bottom&quot;, labelAngle=x_angle), spacing=50, ), ) .configure_view(stroke=&quot;transparent&quot;) .interactive() ) </code></pre> <p>Which will produce this graph: <a href="https://i.sstatic.net/edyDC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/edyDC.png" alt="enter image description here" /></a></p> <p>I was hoping to replace each bar with a number of stars (i.e. 4, 4, 2, and 2 stars).</p>
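One lightweight alternative to the isotype gallery approach is to derive a text column of repeated star characters and render it with `mark_text` instead of bars. The data-side step is plain pandas; the Altair call is only sketched in comments here:

```python
import pandas as pd

df = pd.DataFrame({"fund": ["A", "B", "C", "D"], "value": [4, 4, 2, 2]})
# one text label per fund: "★" repeated `value` times
df["stars"] = df["value"].map(lambda n: "★" * int(n))

# then, roughly (untested sketch):
# alt.Chart(df).mark_text(size=20).encode(x="fund:O", text="stars:N")
```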
<python><bar-chart><altair>
2023-05-18 17:39:21
1
592
FFLS
76,283,041
9,265,735
gspread_formatting not changing cell type to percentage in google doc
<p>I'm able to run <code>format_cell_range()</code> successfully to change a cell from normal to bold and back again, but I'm unable to change it to percent.</p> <pre class="lang-py prettyprint-override"><code>from gspread_formatting import format_cell_range, CellFormat, NumberFormat worksheet = &lt;worksheet successfully auth-d and connected&gt; fmt = CellFormat(numberFormat=NumberFormat(type='PERCENT', pattern=&quot;0.00%&quot;)) # define the format as percentage format_cell_range(worksheet, 'A1', fmt) </code></pre> <p>I have in the cell A1 a value of 0.1, and I want it to read 10.00%</p> <p>What am I doing wrong here?</p> <p>I've read the docs here:</p> <ul> <li><a href="https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/cells" rel="nofollow noreferrer">https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/cells</a></li> <li><a href="https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/cells#cellformat" rel="nofollow noreferrer">https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/cells#cellformat</a></li> <li><a href="https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/cells#NumberFormat" rel="nofollow noreferrer">https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/cells#NumberFormat</a></li> </ul> <p>And I'm pretty sure I've got it set up correctly, when I make the call there's a return with no replies.</p> <p>If I do:</p> <pre class="lang-py prettyprint-override"><code> default_format = CellFormat(textFormat={'bold': False}) format_cell_range(worksheet, 'A1', default_format) </code></pre> <p>or with <code>textFormat={'bold': False}</code> I'm able to switch it between bold and not bold. 
So it's connected, but I must be missing something in the cell format for the percentage.</p> <p>I've even tried it an entirely different way just to see if it would update, and again, <code>textFormat</code> updates but not <code>numberFormat</code></p> <pre class="lang-py prettyprint-override"><code>spreadsheet = client.open_by_key(spreadsheet_id) requests = [ { 'repeatCell': { 'range': { 'sheetId': worksheet.id, 'startRowIndex': 1, 'endRowIndex': 3, 'startColumnIndex': 1, 'endColumnIndex': 3 }, 'cell': { 'userEnteredFormat': { 'numberFormat': { 'type': 'PERCENT', 'pattern': '0.00%' }, 'textFormat': { 'bold': True }, }, }, 'fields': 'userEnteredFormat.numberFormat,userEnteredFormat.textFormat' } } ] body = { 'requests': requests } spreadsheet.batch_update(body) </code></pre> <p>I'm super stumped</p>
<python><google-sheets><google-api-python-client><gspread>
2023-05-18 17:13:30
1
612
glitchwizard
76,282,918
504,773
mkdocstrings not finding module
<p>I've hit a bit of a wall with the mkdocstrings plugin as it does not seem to load modules as I would have expected.</p> <p>My folder structure is as such:</p> <pre><code>docs/ api.md src/ module/ somemodule.py </code></pre> <p>The api.md file looks like this:</p> <pre><code># API Documentation ::: module.somemodule.MyClass handler: python </code></pre> <p>The error I get is this:</p> <pre><code>% mkdocs build 1 ↡ ✹ INFO - Cleaning site directory INFO - Building documentation to directory: /site ERROR - mkdocstrings: No module named 'module' ERROR - Error reading page 'api.md': ERROR - Could not collect 'module.somemodule.MyClass' </code></pre> <p><strong>What I have found in testing:</strong></p> <ul> <li>mkdocs runs correctly, generating the expected documentation when excluding the mkdocstrings plugin.</li> <li>The mkdocstrings plugin works only if I move the module to the repo root.</li> </ul> <p>Is there some kind of config setting I need to set for the plugin in mkdocs.yml to get it to recognize the module in the src folder? All the examples I have come across seem to be either outdated or not working.</p>
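For what it's worth, the mkdocstrings Python handler has a `paths` option for exactly this layout — pointing it at `src` lets it import `module` without moving anything. A sketch of the relevant `mkdocs.yml` fragment:

```yaml
plugins:
  - mkdocstrings:
      handlers:
        python:
          paths: [src]   # search src/ for importable packages
```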
<python><mkdocs><mkdocstrings>
2023-05-18 16:57:54
4
1,655
renderbox
76,282,914
5,817,109
Dynamically Create the Expression of a List Comprehension
<p>I have a bunch of AWS Lambda functions in Python that retrieve data from an API and write it to CSV. After I retrieve the data, I extract the relevant fields I want using list comprehensions, something like:</p> <pre><code>records = requests.get(url,headers,params) values = [(x['id'],x['name'],x['address']) for x in records] </code></pre> <p>This works well. However, I have other types of records (from API calls to different URLs) that I retrieve in this way:</p> <pre><code>values = [(x['orders'][0]['status'],x['assignee']['id']) for x in records] </code></pre> <p>Is there any way I can build those list comprehensions dynamically? I would like to be able to use one script that takes in a URL and some other input information, then pass in an &quot;expression&quot;, and create a list comprehension that extracts the information I want from the data that the API returns.</p> <p>I was not able to find a way to use parameters in the list comprehension.</p>
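One way to make that dynamic is to describe each output field as a path of keys/indices and walk it with `functools.reduce` — the paths then become plain data you can pass into the script alongside the URL. A sketch:

```python
from functools import reduce

def extract(record, path):
    """Walk a sequence of keys/indices into a nested structure."""
    return reduce(lambda obj, key: obj[key], path, record)

def project(records, paths):
    """Apply several paths to every record, like the list comprehensions do."""
    return [tuple(extract(r, p) for p in paths) for r in records]
```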
<python>
2023-05-18 16:56:53
3
305
Moneer81
76,282,611
10,866,873
Tkinter: complex reassignment of variables from Event
<p>This is a complicated question based on the solution for cloning widgets from <a href="https://stackoverflow.com/questions/46505982/is-there-a-way-to-clone-a-tkinter-widget/69538200#69538200">this answer</a>.</p> <p>Basis:</p> <p>When cloning widgets using this method, any functions that rely on references to those widgets are not transferred to the new widgets. e.g.:</p> <pre class="lang-py prettyprint-override"><code>from tkinter import * from so_clone import clone_widget def test(event, widget): print(widget.get()) def clone(event, widget, root): t = Toplevel(root) c = clone_widget(widget, t) c.pack(side=TOP) root = Tk() f = Frame(root) f.pack(side=TOP, fill=BOTH, expand=1) e = Entry(f) e.pack(side=TOP) Button(f, text=&quot;print&quot;, command=lambda e=Event(), w=e:test(e, w)).pack(side=BOTTOM) Button(root, text=&quot;clone&quot;, command=lambda e=Event(), c=f, r=root:clone(e, c, r)).pack(side=BOTTOM) root.mainloop() </code></pre> <p>When cloning the widget using this method, the cloned print button still references the original Entry, as it is a perfect duplication.</p> <p>What I am looking to do is reassign the <code>e</code> variable from the original Entry to the cloned Entry. To achieve this I have created 3 new classes for creating the cloned window and holding the paths.</p> <pre class="lang-py prettyprint-override"><code>class Tree: def __init__(self): self.widgets = {} def add(self, w): self.widgets[str(w)] = w return w def get(self, path): for w, p in self.widgets.items(): if p == path: return w return None class Popout(Toplevel): def __init__(self, root, **kwargs): self.root = root self.tree = kwargs.pop('tree', Tree()) ## take the input tree or make a new one. super(Popout, self).__init__(root) ## creates toplevel window self.bind('&lt;Configure&gt;', self._add_widget) ## called when configured = when widget added def _add_widget(self, event): if event.widget: self.tree.add(event.widget) class Poppable(Canvas): # ... Same as Popout, just as a canvas that's in the root frame. def _popout(self): p = Popout(self.root, tree=self.tree) c = clone_widgets(self, p) ## here is where I need to recalculate the tree, I think c.pack(side=TOP) </code></pre> <p>There are a few additional things that need to happen for this to work correctly.</p> <ul> <li>First the paths must be calculated relative to the base cloned widget.</li> <li>When cloning, the paths must be compared to the tree and the widgets reassigned.</li> <li>After cloning, all variables associated with the original widgets should be updated to the new widgets.</li> </ul> <p>This is as far as I have gotten with the solution.</p> <p>The process would be:</p> <ol> <li>Create a Poppable Canvas.</li> <li>Add widgets to the Poppable Canvas.</li> <li>Widgets are added to the Tree, which calculates their relative path from the Poppable Canvas.</li> <li>Assign variable <code>e</code> to the Entry inside the Poppable Canvas.</li> <li>Pop the canvas, which creates a Popout, passing along the tree.</li> <li>The Popout evaluates the tree while cloning the widgets to also get their relative path from the Popout, and when matching, reassigns the widget in the tree.</li> <li>Somehow figure out that <code>e</code> has the Entry assigned and reassign <code>e</code> to the new Entry.</li> </ol> <p>Although this is my current working theory, it is not definitive, and other methods may be possible. I am looking for a solution to the problem as a whole; this example is just the first solution I could reasonably conceive to work.</p>
<python><tkinter>
2023-05-18 16:14:12
0
426
Scott Paterson
76,282,553
3,878,377
Panda says my Dataframe column type is object however analysis shows that it is float
<p>I have a pandas DataFrame such as this:</p> <pre><code>col value 0 1.28 1 4 2 9.34 3 13 4 15 5 23 6 35 </code></pre> <p>When I do <code>df.info()</code> I get that value is <code>object</code>; however, when I test this as follows:</p> <pre><code>A = df['value'].reset_index().applymap(lambda x: isinstance(x, (float))) A[A['value']==False].shape[0] </code></pre> <p>I get zero rows. Also, some models throw an error because of the data type, so what I would do is force the data type to float:</p> <pre><code>A['value'] = A['value'].astype('float') </code></pre> <p>Can someone explain how to investigate why the data type is not float, and what is wrong with my code for detecting whether the data type is float or not?</p>
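The likely explanation: an `object` column stores arbitrary Python objects, and every element can be a genuine `float` while the column dtype stays `object` — so an `isinstance` check finds nothing wrong. Checking element types and coercing the dtype are two different things, as this sketch shows:

```python
import pandas as pd

# an object-dtype column whose elements are all real Python floats
s = pd.Series([1.28, 4.0, 9.34], dtype="object")
all_floats = all(isinstance(x, float) for x in s)  # every element IS a float
converted = pd.to_numeric(s)                       # ...now dtype is float64
```

`s.map(type).value_counts()` is a quick way to see what is really inside an object column before deciding how to coerce it.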
<python><pandas><dataframe><types>
2023-05-18 16:05:33
0
1,013
user59419
76,282,520
1,441,592
KeyError when filter pandas dataframe by column with particular key:value pair
<p>My df looks like the following</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>col1</th> <th>col_x</th> </tr> </thead> <tbody> <tr> <td>...</td> <td>{&quot;key_x&quot;:&quot;value1&quot;}</td> </tr> <tr> <td>...</td> <td>None</td> </tr> <tr> <td>...</td> <td>{&quot;key_x&quot;:&quot;value2&quot;}</td> </tr> </tbody> </table> </div> <p>How to select all items with <code>{&quot;key_x&quot;:&quot;value1&quot;}</code> in <code>col_x</code> ?</p> <p><code>col_x</code> values could be <code>dict</code> or <code>None</code>.</p> <p>What I've tried:</p> <pre><code>df.loc[df['col_x']['key_x'] == 'value1'] # KeyError: 'key_x' </code></pre> <pre><code>df[~df['col_x'].isnull()].loc[df['col_x']['key_x'] == 'value1'] # KeyError: 'key_x' </code></pre> <pre><code>df_ok = df[[isinstance(x, dict) and ('key_x' in x.keys()) for x in df.col_x]] df_ok.loc[df_ok['col_x']['key_x'] == 'value1'] # KeyError: 'key_x' </code></pre> <p>(last one syntax according <a href="https://stackoverflow.com/questions/64267909/filtering-pandas-dataframe-with-column-of-dictionaries-using-a-particular-key">this answer</a>)</p>
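For what it's worth, a sketch of why the KeyError happens (toy data, not the real frame): `df['col_x']['key_x']` asks the Series for a *row labelled* `'key_x'`; it never reaches into the dict cells. Reaching into each cell needs an element-wise apply:

```python
import pandas as pd

df = pd.DataFrame({"col_x": [{"key_x": "value1"}, None, {"key_x": "value2"}]})

# Element-wise test: dict.get avoids a KeyError on dicts missing the key,
# and the isinstance check filters out the None cells.
mask = df["col_x"].apply(lambda v: isinstance(v, dict) and v.get("key_x") == "value1")
result = df[mask]
```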
<python><pandas><dataframe><dictionary><pandas-loc>
2023-05-18 16:01:17
2
3,341
Paul Serikov
76,282,454
253,924
Pandas DataFrame groupby and aggregate counting on condition
<p>I have started playing with Data Analysis and all the related tools: Pandas, Numpy, Jupyter etc...</p> <p>The task I am working on is simple, and I could do easily with regular python. However I am more interested in exploring Pandas, and I am looking therefore for a Pandas solution.</p> <p>I have this simple Pandas DataFrame. The timestamp column is just a Unix timestamp, but to make things more readable I have just put a more comfortable number:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th style="text-align: left;">timestamp</th> <th style="text-align: left;">success</th> </tr> </thead> <tbody> <tr> <td>1</td> <td style="text-align: left;">9999</td> <td style="text-align: left;">True</td> </tr> <tr> <td>2</td> <td style="text-align: left;">1111</td> <td style="text-align: left;">True</td> </tr> <tr> <td>3</td> <td style="text-align: left;">9999</td> <td style="text-align: left;">False</td> </tr> <tr> <td>4</td> <td style="text-align: left;">1111</td> <td style="text-align: left;">True</td> </tr> <tr> <td>5</td> <td style="text-align: left;">9999</td> <td style="text-align: left;">True</td> </tr> <tr> <td>6</td> <td style="text-align: left;">1111</td> <td style="text-align: left;">True</td> </tr> </tbody> </table> </div> <p>I want to group by timestamp, but I want another aggregate column which is the result of the success column: if True count as one, if False count as 0.</p> <p>I hope the table below clarify what I am trying to achieve. Basically 1111 has three True therefore the sum is 3. 
9999 has two True and one False, therefore the sum is 2.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">timestamp</th> <th style="text-align: left;">success</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1111</td> <td style="text-align: left;">3</td> </tr> <tr> <td style="text-align: left;">9999</td> <td style="text-align: left;">2</td> </tr> </tbody> </table> </div>
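Since Python booleans sum as 1 and 0, the conditional count above is just a groupby-sum; a sketch with the sample data:

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 3, 4, 5, 6],
    "timestamp": [9999, 1111, 9999, 1111, 9999, 1111],
    "success": [True, True, False, True, True, True],
})

# bool sums as int: True -> 1, False -> 0
out = df.groupby("timestamp", as_index=False)["success"].sum()
# timestamp 1111 -> 3, timestamp 9999 -> 2
```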
<python><pandas><dataframe>
2023-05-18 15:52:53
1
9,857
Leonardo
76,282,434
1,011,253
writing tasks/operators without using them in DAG
<p>We are running Airflow in production and we have existing DAG that looks like:</p> <pre class="lang-py prettyprint-override"><code>import os from datetime import datetime from airflow import DAG from airflow.operators.empty import EmptyOperator with DAG( &quot;partial&quot;, description=&quot;A simple partial DAG&quot;, start_date=datetime(2023, 1, 1), ) as dag: t1 = EmptyOperator( task_id=&quot;task_1&quot;, ) t2 = EmptyOperator( task_id=&quot;task_2&quot;, ) t3 = EmptyOperator( task_id=&quot;task_3&quot;, ) t1 &gt;&gt; t2 &gt;&gt; t3 </code></pre> <p>Now we want to add operators/tasks <code>t4</code> and <code>t5</code> gradually.</p> <p>We would like to add those two operator's declarations/definitions (for a code review and merging it to our <code>main</code> branch) without actually &quot;connect&quot; them to the DAG:</p> <pre class="lang-py prettyprint-override"><code>import os from datetime import datetime from airflow import DAG from airflow.operators.empty import EmptyOperator with DAG( &quot;partial&quot;, description=&quot;A simple partial DAG&quot;, start_date=datetime(2023, 1, 1), ) as dag: t1 = EmptyOperator( task_id=&quot;task_1&quot;, ) t2 = EmptyOperator( task_id=&quot;task_2&quot;, ) t3 = EmptyOperator( task_id=&quot;task_3&quot;, ) t4 = EmptyOperator( task_id=&quot;task_4&quot;, ) t5 = EmptyOperator( task_id=&quot;task_5&quot;, ) t1 &gt;&gt; t2 &gt;&gt; t3 </code></pre> <p>The outcome of the snippet above is:</p> <p><a href="https://i.sstatic.net/fL0Q8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fL0Q8.png" alt="enter image description here" /></a></p> <p>Which is not what we want (because those two new operators will run without any dependencies), Is there a way add those new operators and keep them out of the DAG?</p>
<python><airflow>
2023-05-18 15:50:00
1
11,527
ItayB
76,282,382
5,306,861
Equivalent of Python scipy.signal.correlate in C# or C++
<p>I have code in Python that searches for matching a small audio file within a large audio file. For this purpose it uses the <code>scipy.signal.correlate</code> function, I searched for an equivalent function in C# or C++, but I did not find it.</p> <p>I can translate all the code to C# except for the <code>scipy.signal.correlate</code> function.</p> <p>How can you implement a similar function in C# or C++?</p> <pre><code>import datetime from scipy import signal import numpy as np import librosa def seconds2str(seconds): return str(datetime.timedelta(seconds=seconds)) def find_templ(src_path, templ_path): samples_src, sample_rate = librosa.load(src_path, sr=None) samples_templ, sample_rate_templ = librosa.load(templ_path, sr=sample_rate) src_duration = librosa.get_duration(y=samples_src, sr=sample_rate) templ_duration = librosa.get_duration(y=samples_templ, sr=sample_rate_templ) if templ_duration &gt; src_duration: print(&quot;warning: You are looking for a clip within an audio clip shorter than it!!!&quot;) print(f&quot;src Duration: {seconds2str(src_duration)}&quot;) print(f&quot;templ Duration: {seconds2str(templ_duration)}&quot;) correlate = signal.correlate(samples_src[:sample_rate * int(src_duration)], samples_templ, mode='valid', method='fft') start = np.round(np.argmax(correlate) / sample_rate, 2) end = start + templ_duration print(f&quot;The moment the search stops in seconds: {seconds2str(src_duration)}&quot;) print(f&quot;Start: {seconds2str(start)}&quot;) print(f&quot;End: {seconds2str(end)}&quot;) full_audio = &quot;Track08.mp3&quot; small_audio = &quot;adv.wav&quot; find_templ(full_audio, small_audio) </code></pre>
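As a porting aid, 'valid'-mode cross-correlation is just a sliding dot product; this Python sketch mirrors what a straightforward C# or C++ loop would compute (an FFT-based method, as `method='fft'` uses, is only a faster way to get the same numbers):

```python
import numpy as np

def correlate_valid(src, templ):
    # Direct O(n*m) valid-mode cross-correlation: each output sample is
    # the dot product of the template with one window of the source.
    n, m = len(src), len(templ)
    out = np.empty(n - m + 1)
    for i in range(n - m + 1):
        out[i] = np.dot(src[i:i + m], templ)
    return out

src = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
templ = np.array([1.0, 0.5, 0.25])
assert np.allclose(correlate_valid(src, templ), np.correlate(src, templ, mode="valid"))
```

For template-matching on real audio, the inner loop ports line-for-line; for speed at the sizes in the question, an FFT library on the C# or C++ side would replace the loop.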
<python><c#><c++><fft><correlation>
2023-05-18 15:43:01
0
1,839
codeDom
76,282,378
3,259,258
Python RSA Signing not agreeing with Ruby or Bash
<p>I have 3 different scripts that should be basically identical logic wise, yet when run the RSA sig generated by python does not agree with the RSA sigs of the other two implementations. What am I doing differently in Python?</p> <p>Ruby</p> <pre><code>#!/usr/bin/ruby # ruby 3.0.3p157 (2021-11-24 revision 3fb7d2cadc) [x86_64-linux] require 'openssl' require 'base64' string_to_sign = &quot;&quot;&quot;Method:GET Hashed Path:rakSQTQa55Ls8KWcrShhane6uFY= X-Ops-Content-Hash:O8Fciq4+QSTdQJA18y6i/Wg178k= X-Ops-Timestamp:2023-05-18T15:26:59Z X-Ops-UserId:test-user&quot;&quot;&quot; puts &quot;############################################################&quot; puts &quot;Ruby&quot; puts &quot;\nBase64 Encoding&quot; puts Base64.encode64(string_to_sign) # Adds newline every 60 bytes? puts &quot;\nsha1 Hash&quot; puts ::Base64.encode64(OpenSSL::Digest::SHA1.digest(string_to_sign)).chomp puts &quot;\nsign w/ RSA priv key&quot; puts Base64.encode64(OpenSSL::PKey::RSA.new(File.read(&quot;test-key.pem&quot;)).private_encrypt(string_to_sign)) </code></pre> <p>Bash</p> <pre><code>#!/bin/bash string_to_sign=&quot;Method:GET Hashed Path:rakSQTQa55Ls8KWcrShhane6uFY= X-Ops-Content-Hash:O8Fciq4+QSTdQJA18y6i/Wg178k= X-Ops-Timestamp:2023-05-18T15:26:59Z X-Ops-UserId:test-user&quot; echo -e &quot;\n############################################################&quot; echo -e &quot;Bash&quot; echo -e &quot;\nBase64 Encoding&quot; echo -n &quot;$string_to_sign&quot; | base64 echo -e &quot;\nSHA1 Hash&quot; echo -n &quot;$string_to_sign&quot; | openssl dgst -binary -sha1 | base64 echo -e &quot;\nSign w/ RSA priv key&quot; echo -n &quot;$string_to_sign&quot; | openssl rsautl -sign -inkey test-key.pem | base64 echo -e &quot;&quot; </code></pre> <p>Python</p> <pre><code>#!/usr/bin/python from Crypto.PublicKey import RSA from Crypto.Hash import SHA1 from Crypto.Signature import pkcs1_15 import base64 string_to_sign = &quot;&quot;&quot;Method:GET Hashed Path:rakSQTQa55Ls8KWcrShhane6uFY= 
X-Ops-Content-Hash:O8Fciq4+QSTdQJA18y6i/Wg178k= X-Ops-Timestamp:2023-05-18T15:26:59Z X-Ops-UserId:test-user&quot;&quot;&quot; print(&quot;############################################################&quot;) print(&quot;Python&quot;) print(&quot;\nBase64 Encoding&quot;) print(base64.encodebytes(string_to_sign.encode()).decode().rstrip()) print(&quot;\nSHA1 Hash&quot;) sha1_hash = SHA1.new(string_to_sign.encode()) print(base64.encodebytes(sha1_hash.digest()).decode().rstrip()) print(&quot;\nSign w/ RSA priv key&quot;) key = RSA.importKey(open(&quot;test-key.pem&quot;).read()) h = SHA1.new(string_to_sign.encode()) signature = pkcs1_15.new(key).sign(h) print(base64.encodebytes(signature).decode()) </code></pre> <p>For reference my test-key.pem (it was generated specifically for this question and thus is a throw away key).</p> <pre><code>-----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEApEWOeapXklrB1dV/SAm2cPLgZ2qwJfJ+POWr2FFa7m7+JdLy exDkxyqtbXw857/iiw+56g1/VbSxEEvauJincWGJPn6le47v/M+Cbojqv4ULw4Fh 2/4191EAjwdzJSlmXxDxbxO76qFDZnUyXstAUeyaJs549xz+H1u81tm2zU0a8cyO SeSnHxFfEe1kMosxkC3m0aHa3pA+sxpS8vqTBFUK87u20o2WpPuq/IgxIdnxd4tJ NNTsbKRMvw83PFpH6W5sDsWo1e8jDz92WXghOQeSqvm2uZEx5L4UAKKOwnOWNraF GMQRRnbflow6e/HjwZJoiRjFIEFkYlJJ1vUwTQIDAQABAoIBAH5WbGwP4QfTOw5d A2YA6kpV0NZYjB6zL/lf3dkhQKDtxhKK+ShC5uByZy00Bpdp0S6IKsDiHpNow2C4 JgAgj264x9fDiTvMw6+YXETskjY3ecOjpwKNsS2DI73cyebDv1LP8g8uizC5U9/h tJqJEO+w2yGLXCcZKiwt3r8Sc+/R4lGIv9UsHIo6fzcHSykhPfUnlMSo1pBQ9HYB zcF8ErGSS+UenfsSiFMxueU3iL++u1SDUl+ObVgh1FV1Ax0psse9gPp/vwgSWbKR jljiqmFeGtpH5aCTCjXTdjG83GHzZ9The/SIw0DyfH+obMrzsnUMG7tzVIGPMuL8 8+j5gxUCgYEA+twgS8L6o3Bgc8cvsrCDcoaVVvhaGNZahzGZjxCE+zt6943mHgZ0 3oFBYoOfLS2QjLW3VbMx6czJ1Ow+9gPWGRPN01uUaiCVP+pRKNN4ixlYgY4RFuTj 9NAeIMzMY/zeRDvFwRxO2IvKQwI3Nix+hupo2eVmA7NDHSWfuSi5ce8CgYEAp6M8 dndFWPkuMSL1jU1eVWqWi8D5UiJrlNdp+1DniSNr+C/oSD/+Ot72RDDhwKe+JpP4 iUiTEpmuc1ZUOJ2um9BO4DP347UvyHRGUssS1bAlKgIvFi5CAWRgvv0cAjdpRpTU VCe/iNE8nbTMQCj9TtQh4PaQF1f/eDarY5LlTYMCgYEA0eE/iANeTUWk/NjGmFrD 
7xqYcYYhYyxb20ZtMlvg1o0CKYHn6HEAcHR17uUuVM8NZBxYgfQFq5Vxu5nYZ134 T0zZZJ73Qf92v13cfyrGbKJNAT+KHrxr2BQTUN/nlTQoBbB4mEOF1/jExWFiLgn1 5gzSopMh0bC2Uvl6c6CV3rMCgYEAgKUXQD41XJsUpKaUU9R8wQXj8+mqKyq47mcF MNScajRhpft1wQRC4AC8cgYlKIhRtx80yn2ER/Dh3CbyyOPQ3EfWT93xrLAdtDHu yZiHoq7jRkKYyefDxXe3ermYZecKBh0ueEpshN01LD1TxSTvhy/ps87jMtbX+PPT QL249GsCgYAi4344fB5tsrIpdWmmQ0AO/0LokBKm7oJEYm67lGMd1V6CRGmw4xl6 6Bmu15egsxKAmNtSAs7X7MI3gjS2wmx/3z2d3NLTz1chU4/7tzhPYTZsKAkXye9v CO7cEtAR5Va+ayzmxs6BQW14rnqDrX/0jri4nyF0yxEvDPnQsh3Hpg== -----END RSA PRIVATE KEY----- </code></pre> <p>and here is the results I am getting:</p> <pre><code>ruby signme.rb; bash signme.sh; python signme.py; ############################################################ Ruby Base64 Encoding TWV0aG9kOkdFVApIYXNoZWQgUGF0aDpyYWtTUVRRYTU1THM4S1djclNoaGFu ZTZ1Rlk9ClgtT3BzLUNvbnRlbnQtSGFzaDpPOEZjaXE0K1FTVGRRSkExOHk2 aS9XZzE3OGs9ClgtT3BzLVRpbWVzdGFtcDoyMDIzLTA1LTE4VDE1OjI2OjU5 WgpYLU9wcy1Vc2VySWQ6dGVzdC11c2Vy sha1 Hash lJMTFTq6/XPOwQUMKbGZdE3c5fE= sign w/ RSA priv key jxQHbrJlRBw5cfbpdyWvNVhfWOZtV1t0wdYPBKqGuIil5tXtzmOaXr1l21Cm P+SdSY9+yc0eeuC1ylIoFaDf3+W61+zImW9b6zS9GnssNqogPpC1ERhXfNkv 5t2vRcg5Df3EQG5XfQb/X/7VMkoshLOGjul4If2X3jbNK4ZaIehi8A1Ie7Xm LSllRBgOYxHDoztsGNlZz7mQQjNPCcO/nh7BPLG/2HcLZCsFOTBn8LD1EYyN tewFD3qIoo1aWzAMvqqBXr3AHhXlh1gqq9P/adQcWYf/uGZvGs6W2juRys9y 39I4hitDk2EUW10lx3+6SZq4oqt4G5TEZP9XQYs1kw== ############################################################ Bash Base64 Encoding TWV0aG9kOkdFVApIYXNoZWQgUGF0aDpyYWtTUVRRYTU1THM4S1djclNoaGFuZTZ1Rlk9ClgtT3Bz LUNvbnRlbnQtSGFzaDpPOEZjaXE0K1FTVGRRSkExOHk2aS9XZzE3OGs9ClgtT3BzLVRpbWVzdGFt cDoyMDIzLTA1LTE4VDE1OjI2OjU5WgpYLU9wcy1Vc2VySWQ6dGVzdC11c2Vy SHA1 Hash lJMTFTq6/XPOwQUMKbGZdE3c5fE= Sign w/ RSA priv key jxQHbrJlRBw5cfbpdyWvNVhfWOZtV1t0wdYPBKqGuIil5tXtzmOaXr1l21CmP+SdSY9+yc0eeuC1 ylIoFaDf3+W61+zImW9b6zS9GnssNqogPpC1ERhXfNkv5t2vRcg5Df3EQG5XfQb/X/7VMkoshLOG jul4If2X3jbNK4ZaIehi8A1Ie7XmLSllRBgOYxHDoztsGNlZz7mQQjNPCcO/nh7BPLG/2HcLZCsF 
OTBn8LD1EYyNtewFD3qIoo1aWzAMvqqBXr3AHhXlh1gqq9P/adQcWYf/uGZvGs6W2juRys9y39I4 hitDk2EUW10lx3+6SZq4oqt4G5TEZP9XQYs1kw== ############################################################ Python Base64 Encoding TWV0aG9kOkdFVApIYXNoZWQgUGF0aDpyYWtTUVRRYTU1THM4S1djclNoaGFuZTZ1Rlk9ClgtT3Bz LUNvbnRlbnQtSGFzaDpPOEZjaXE0K1FTVGRRSkExOHk2aS9XZzE3OGs9ClgtT3BzLVRpbWVzdGFt cDoyMDIzLTA1LTE4VDE1OjI2OjU5WgpYLU9wcy1Vc2VySWQ6dGVzdC11c2Vy SHA1 Hash lJMTFTq6/XPOwQUMKbGZdE3c5fE= Sign w/ RSA priv key VSFe3su8dejyKPdLSBQU9Z0MTL7/TGrt2pA97sMqfCy2rnWtuCA28KPYA77w6vGdlTrzpWjq6fpU ndJw3USwMcjzd5h/2XLYlpZjOP9/GYIbdayOZ3YtNl7b9GfFMcaKB+Lao5pPfuE8YMjvCDYGqpaY uD4qUz6+zEQ8W9GlTyGDNN+QRCYDtoimK/nU0vmyLTgtx1gf9mmub1ZmfBPdhjZpF2XHJljANit7 0i+bfnE44IMYlWGdd43yPc5fxwsywNgEnkt38lQ8hiWP6KDcfgz6SN/ax9qRgpEnTF21QxMt/yfr BLngJKdcW6TVv96uTXoU0K7VIkoFzq2PFu56ug== </code></pre>
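For context, a pure-stdlib sketch (illustrative values only, not the real key or message) of why the two results can never match: `private_encrypt` / `openssl rsautl -sign` applies PKCS#1 v1.5 type-01 padding to the *raw message*, while PyCryptodome's `pkcs1_15` pads an ASN.1 `DigestInfo` header plus the *SHA-1 digest*. Different pre-images go into the modular exponentiation, so different signatures come out:

```python
import hashlib

def pkcs1_v15_type1_pad(payload: bytes, k: int) -> bytes:
    # EMSA-PKCS1-v1_5 style block: 00 01 FF..FF 00 || payload, k bytes total
    assert len(payload) <= k - 11
    return b"\x00\x01" + b"\xff" * (k - 3 - len(payload)) + b"\x00" + payload

msg = b"Method:GET\n...X-Ops-UserId:test-user"  # stand-in for the real string
k = 256  # modulus size in bytes for a 2048-bit key

# Ruby private_encrypt / openssl rsautl -sign: the raw message is padded.
block_raw = pkcs1_v15_type1_pad(msg, k)

# PyCryptodome pkcs1_15.sign: DigestInfo(SHA-1) || SHA-1(msg) is padded.
DIGESTINFO_SHA1 = bytes.fromhex("3021300906052b0e03021a05000414")
block_hashed = pkcs1_v15_type1_pad(DIGESTINFO_SHA1 + hashlib.sha1(msg).digest(), k)

assert block_raw != block_hashed  # different inputs to RSA -> different signatures
```

To reproduce the Ruby/Bash output in Python, one would have to pad the raw string as above and perform the private-key exponentiation directly; `pkcs1_15` always inserts the hash step and cannot be told to skip it.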
<python><openssl><rsa>
2023-05-18 15:42:29
1
823
CaffeineAddiction
76,282,370
5,924,264
How to return the value from 2 dicts when the key is guaranteed to exist in only one of the dicts?
<p>Suppose you have 2 dicts <code>d1, d2</code> and you want to query a key <code>k</code>. <code>k</code> is guaranteed to exist in exactly one of the dicts. Is there a way to return the value associated with <code>k</code> from <code>d1, d2</code> without using if statements?</p> <p>Currently I use if statements:</p> <pre><code>if k in d1.keys(): return d1[k] return d2[k] </code></pre>
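A few if-free sketches of the lookup, given the guarantee that the key lives in exactly one dict:

```python
from collections import ChainMap

d1 = {"a": 1}
d2 = {"b": 2}

# Option 1: a merge view; lookup falls through to the first dict holding the key.
merged = ChainMap(d1, d2)
assert merged["a"] == 1 and merged["b"] == 2

# Option 2: a one-off merged dict (copies both, fine for small dicts).
assert {**d1, **d2}["a"] == 1

# Option 3: get() with a fallback; d2.get is evaluated eagerly but is
# harmless since it just returns None when the key is absent.
assert d1.get("b", d2.get("b")) == 2
```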
<python><dictionary>
2023-05-18 15:41:31
2
2,502
roulette01
76,282,068
7,545,840
Loop over a spark df and add new columns
<p>I have some data like below:</p> <pre><code> df = spark.createDataFrame( [ (&quot;one two&quot;, &quot;foo foo&quot;), (&quot;one two&quot;, &quot;bar bar&quot;), ], [&quot;label1&quot;, &quot;label2&quot;] ) df.show() </code></pre> <p>and I'm using <code>regexp_replace()</code> to amend the string in the given column.</p> <p>What I would like to do is loop over all columns in the df and apply the regex. I have some code below, but it only keeps the regex applied to the last column. I think this is because each result is not being assigned back to the dataframe:</p> <pre><code>names = df.schema.names for name in names: new_df = df.withColumn(name, regexp_replace(name, &quot; &quot;, &quot;_&quot;)) display(new_df) </code></pre> <p><a href="https://i.sstatic.net/5cwlB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5cwlB.png" alt="enter image description here" /></a></p> <p>The data should look like this:</p> <p><a href="https://i.sstatic.net/seGbe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/seGbe.png" alt="enter image description here" /></a></p>
<python><apache-spark><pyspark><apache-spark-sql>
2023-05-18 15:04:28
2
493
Mrmoleje
76,282,066
4,019,495
Pydantic: is there a way to specify predefined validation behavior on a field of a Model?
<p>I would like the behavior of</p> <pre><code>class TestModel(BaseModel): field1: StrictStr # we want to validate that this has length 5 @validator('field1', always=True) def check_field1_length_5(cls, field1): if len(field1) != 5: raise ValueError(f'{field1=} does not have length 5') return field1 </code></pre> <p>but writing something like</p> <pre><code>class TestModel(BaseModel): field1: Length5StrictStr </code></pre> <p>The point being that I want both of</p> <ul> <li>don't want to nest the field inside another Model</li> <li>don't want to write the validator each time I use that validation logic</li> </ul> <p>Is this possible in pydantic?</p>
<python><pydantic>
2023-05-18 15:04:04
1
835
extremeaxe5
76,282,003
1,194,864
Binary image classifier in pytorch progress bar and way to check if the training was valid
<p>I would like to build and train a binary classifier in <code>PyTorch</code> that reads images from the path process them and trains a classifier using their labels. My images can be found in the following folder:</p> <pre><code>-data - class_1_folder - class_2_folder </code></pre> <p>Hence, to read them in tensors I am doing the following:</p> <pre><code>PATH = &quot;data/&quot; transform = transforms.Compose([transforms.Resize(256), transforms.RandomCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224,0.225])]) dataset = datasets.ImageFolder(PATH, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=356, shuffle=True) images, labels = next(iter(dataloader)) </code></pre> <p>This code actually reads the images and performs some necessary transformations-preprocess. Next step to create a model and perform the training:</p> <pre><code>acc = model_train(images, labels, images, labels) </code></pre> <p>With <code>model_train</code> to be:</p> <pre><code>import pdb import torch from torchvision import datasets, transforms, models from matplotlib import pyplot as plt from torchvision.transforms.functional import to_pil_image import torch.nn as nn import numpy as np import torch.optim as optim import tqdm import copy from time import sleep def my_model(): device = &quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot; model = models.resnet18(pretrained=False) num_features = model.fc.in_features model.fc = nn.Linear(num_features, 1) # Binary classifier with 2 output classes # Move the model to the device model = model.to(device) return model def model_train(X_train, y_train, X_val, y_val): model = my_model() dtype = torch.FloatTensor loss_fn = nn.CrossEntropyLoss().type(dtype) # binary cross entropy optimizer = optim.Adam(model.parameters(), lr=0.0001) n_epochs = 20 # number of epochs to run batch_size = 10 # size of each batch batch_start = torch.arange(0, len(X_train), batch_size) # Hold 
the best model best_acc = - np.inf # init to negative infinity best_weights = None for epoch in range(n_epochs): model.train() with tqdm.tqdm(batch_start, unit=&quot;batch&quot;, mininterval=0, disable=True) as bar: bar.set_description(f&quot;Epoch {epoch}&quot;) for start in bar: # take a batch bar.set_description(f&quot;Epoch {epoch}&quot;) X_batch = X_train[start:start+batch_size] y_batch = y_train[start:start+batch_size] # forward pass y_pred = model(X_batch) y_pred = torch.max(y_pred, 1)[0] loss = loss_fn(y_pred, y_batch.float()) # backward pass print(loss) optimizer.zero_grad() loss.backward() # update weights optimizer.step() # print progress acc = (y_pred.round() == y_batch).float().mean() bar.set_postfix( loss=float(loss), acc=float(acc) ) bar.set_postfix(loss=loss.item(), accuracy=100. * acc) sleep(0.1) # evaluate accuracy at end of each epoch model.eval() y_pred = model(X_val) acc = (y_pred.round() == y_val).float().mean() acc = float(acc) if acc &gt; best_acc: best_acc = acc best_weights = copy.deepcopy(model.state_dict()) # restore model and return best accuracy torch.save(model.state_dict(), &quot;model/my_model.pth&quot;) model.load_state_dict(best_weights) return best_acc </code></pre> <p>I am trying to understand how I can correctly portray the <code>progress bar</code> during training and second, how can I <code>validate that the training process took place correctly</code>. For the latter, I have noticed a weird behavior. For <code>class zero</code> I am getting always zero loss while for class one it's between range <code>13-24</code>. 
It seems to be incorrect, however, I am sure how to dive deeper!</p> <pre><code> tensor(-0., grad_fn=&lt;DivBackward1&gt;) tensor([-0.0986, -0.0806, -0.0161, 0.0287, -0.0279, 0.0083, -0.0526, -0.1393, -0.2082, -0.0141], grad_fn=&lt;MaxBackward0&gt;) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) torch.float32 torch.int64 tensor(-0., grad_fn=&lt;DivBackward1&gt;) tensor([-0.1779, 0.0936, -0.0341, -0.1531, -0.1222, -0.1169, -0.0160, -0.0674, 0.1230, -0.1181], grad_fn=&lt;MaxBackward0&gt;) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) torch.float32 torch.int64 tensor(-0., grad_fn=&lt;DivBackward1&gt;) tensor([-0.0438, -0.1269, -0.1624, -0.0976, -0.0132, -0.1944, -0.0034, -0.0454, -0.1559, 0.0657], grad_fn=&lt;MaxBackward0&gt;) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) torch.float32 torch.int64 tensor(-0., grad_fn=&lt;DivBackward1&gt;) tensor([-0.1655, 0.0222, -0.0801, -0.1390, -0.0905, -0.1472, -0.0395, -0.0180, -0.1492, 0.0914], grad_fn=&lt;MaxBackward0&gt;) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) torch.float32 torch.int64 tensor(-0., grad_fn=&lt;DivBackward1&gt;) tensor([-0.7035, -0.1989, 0.0921, -0.1082, -0.2588, -0.3557, 0.3093, 0.0909, 0.1603, 0.1838], grad_fn=&lt;MaxBackward0&gt;) tensor([0, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(20.4545, grad_fn=&lt;DivBackward1&gt;) tensor([-0.4783, -0.1027, -0.0357, 0.0882, -0.2955, -0.0968, 0.3323, -0.0472, 0.1017, -0.2186], grad_fn=&lt;MaxBackward0&gt;) tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(23.2550, grad_fn=&lt;DivBackward1&gt;) tensor([ 0.1554, -0.2664, 0.1419, 0.0203, 0.0895, -0.0085, -0.2867, -0.1957, -0.1315, -0.2340], grad_fn=&lt;MaxBackward0&gt;) tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(23.1584, grad_fn=&lt;DivBackward1&gt;) tensor([-0.0406, -0.2144, 0.1997, 0.2196, -0.3464, 0.1311, -0.0743, -0.2440, -0.1751, -0.2371], grad_fn=&lt;MaxBackward0&gt;) tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(23.2112, 
grad_fn=&lt;DivBackward1&gt;) tensor([-0.0080, -0.1138, -0.1035, 0.0697, -0.1745, -0.1438, -0.2360, -0.1308, 0.0146, 0.1209], grad_fn=&lt;MaxBackward0&gt;) tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(23.0853, grad_fn=&lt;DivBackward1&gt;) tensor([-0.1235, 0.0081, -0.1073, -0.1036, -0.2037, -0.1204, -0.0570, -0.1146, 0.0849, 0.0798], grad_fn=&lt;MaxBackward0&gt;) tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(23.0666, grad_fn=&lt;DivBackward1&gt;) tensor([-0.0660, -0.0832, -0.0414, -0.0334, -0.0123, -0.0203, -0.0549, -0.0747, -0.0779, -0.1629], grad_fn=&lt;MaxBackward0&gt;) </code></pre> <p>What can be wrong in this case?</p>
<python><pytorch>
2023-05-18 14:56:24
2
5,452
Jose Ramon
76,281,933
2,697,895
How to check the drive power state?
<p>I'm trying to read the drive power state (active/standby/sleep) using Python under Linux/Raspberry Pi.</p> <p>I found <a href="https://stackoverflow.com/a/65995054/2697895">here this answer</a> and it says that I can do it using <code>udisks2 dbus</code>:</p> <pre><code>you can get around this by invoking a dbus method on the system bus: Service: org.freedesktop.UDisks2 Object Path: /org/freedesktop/UDisks2/drives/&lt;ID of the hard drive&gt; Method: org.freedesktop.UDisks2.Drive.Ata.PmGetState </code></pre> <p>But I don't know how to implement it...</p> <p>I managed to write this code that is executed without errors, but I don't know how to continue it... Can you help me ?</p> <pre><code>from pydbus import SystemBus def get_drive_power_state(drive_path): bus = SystemBus() udisks = bus.get(&quot;.UDisks2&quot;) drive_obj = bus.get(&quot;org.freedesktop.UDisks2&quot;, drive_path) return None drive_path = &quot;/org/freedesktop/UDisks2/drives/WDC_WD20NMVW_11EDZS3_WD_WXV1EA57RT6E&quot; power_state = get_drive_power_state(drive_path) print(f&quot;Drive Power State: {power_state}&quot;) </code></pre>
<python><linux><raspberry-pi><dbus><hard-drive>
2023-05-18 14:47:05
1
3,182
Marus Gradinaru
76,281,639
4,002,204
Python - When saving text to a file, it adds redundant characters
<p>On my Linux server, I have Python code that tracks an SSH connection, copies every character the user enters or the system generates, and saves it in a file.</p> <p>This is the function that is called to store new text:</p> <pre><code>async def writeToFile(filename, data): try: path = f'{RECORDINGS_PATH}/(unknown)' f = open(path, &quot;a&quot;) f.write(data) f.close() except Exception as e: logging.exception(f'error writing to recordings file') </code></pre> <p>The problem is that, along with the expected characters, it also saves redundant characters in the file, prefixed with [</p> <p>example: <code>[4;48H[4;48H[4;48Hs</code></p> <p>When printing the file with the Linux cat command, the output is as expected, but when opening the file in a text editor, the redundant characters also show.</p> <p>The problem is that our clients view the files with a text editor.</p> <p>Is there a way to save the file so the client will see the expected text?</p>
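The `[4;48H` fragments are ANSI/VT100 control sequences (the byte before `[` is the invisible ESC, 0x1b); `cat` lets the terminal interpret them, while a text editor shows them literally. One sketch is to strip them before writing:

```python
import re

# CSI sequences: ESC [ parameters intermediate-bytes final-byte
# (covers cursor moves like ESC[4;48H, colors like ESC[31m, etc.)
ANSI_CSI = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]")

def strip_ansi(data: str) -> str:
    return ANSI_CSI.sub("", data)

raw = "\x1b[4;48H\x1b[4;48H\x1b[4;48Hs"
assert strip_ansi(raw) == "s"
```

Note this discards the cursor-movement information; if the recording relies on cursor positioning to be readable, a faithful replay would need a terminal-emulator library (pyte is one option, an assumption about available tooling) rather than a plain strip.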
<python><linux><ssh>
2023-05-18 14:10:09
1
1,106
Zag Gol
76,281,636
7,887,590
How can I use Pygame to display the URL of a request captured by mitmproxy?
<p>I am using mitmproxy for debugging purposes and I want to display the captured request URL in almost real-time through a GUI framework. The following is my add-on script:</p> <pre class="lang-py prettyprint-override"><code>import threading import pygame from mitmproxy import http # initialize pygame pygame.init() screen = pygame.display.set_mode((500, 200)) pygame.display.set_caption(&quot;mitmproxy and pygame Example&quot;) # shared url_queue url_queue = [] def request(flow: http.HTTPFlow): print(flow.request.url) url_queue.append(flow.request.url) def display_url(): screen.fill((255, 255, 255)) font = pygame.font.Font(None, 24) # display url for i, url in enumerate(url_queue): text = font.render(url, True, (0, 0, 0)) screen.blit(text, (20, 20 + i * 30)) pygame.display.flip() def main(): running = True clock = pygame.time.Clock() while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False display_url() clock.tick(30) pygame.quit() threading.Thread(target=main).start() </code></pre> <p>After executing <code>mitmdump.exe -s addon_tmp.py</code>, mitmdump console output as expected, but the pygame window became unresponsive and failed to display any content.</p> <p>How can I achieve the purpose?</p>
<python><pygame><mitmproxy>
2023-05-18 14:09:10
0
470
mingchau
76,281,633
4,865,723
Migrate a QKeySequence from PyQt5 to PyQt6
<p>I'm in the middle of migrating a quite simple GUI application from <code>PyQt5</code> to <code>PyQt6</code>. I'm stuck on the following problem about combining Keys and Modifiers.</p> <p>The original <code>PyQt5</code> code looked like this:</p> <pre><code>self.button.setShortcuts(QKeySequence(Qt.CTRL + Qt.Key_S)) </code></pre> <p>The &quot;location&quot; of <code>CTRL</code> and <code>Key_S</code> has moved, and the call now needs to look like this:</p> <pre><code>self.button.setShortcuts(QKeySequence(Qt.Modifier.CTRL + Qt.Key.Key_S)) </code></pre> <p>But now I have the problem that the <code>+</code> doesn't work here anymore.</p> <pre><code> self.button.setShortcuts(QKeySequence(Qt.Modifier.CTRL + Qt.Key.Key_S)) ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~ TypeError: unsupported operand type(s) for +: 'Modifier' and 'Key' </code></pre> <p>It is not easy for me to read the PyQt documentation because there is no Python-specific documentation: it is just autogenerated &quot;docu&quot; derived from the C++ version, which is hard to &quot;transfer&quot; to Python use cases. I don't know where to look.</p>
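For what it's worth, the fix usually reported for this migration is to combine with the bitwise-or operator instead of `+`, e.g. `QKeySequence(Qt.Modifier.CTRL | Qt.Key.Key_S)` (unverified here, since this sketch does not run PyQt6). The reason `+` broke is that in PyQt6 these members became real Python enums, which do not define arithmetic; the stdlib shows the same behavior:

```python
from enum import Enum, IntFlag

class Modifier(IntFlag):          # analogue of Qt.Modifier (hypothetical value)
    CTRL = 0x04000000

class Key(Enum):                  # plain Enum: no arithmetic defined
    Key_S = 0x53

try:
    Modifier.CTRL + Key.Key_S     # mirrors the PyQt6 TypeError
except TypeError:
    pass

combined = Modifier.CTRL | Key.Key_S.value   # int-backed OR still works
assert int(combined) == 0x04000053
```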
<python><pyqt5><migration><pyqt6>
2023-05-18 14:09:05
1
12,450
buhtz
76,281,431
197,726
Testing AsyncIO MongoDB function (Asyncio Motor Driver)
<p>I am pretty new to AsyncIO, and looking for a bit of help. I have a basic class that runs an aggregation pipeline, which works perfectly in production:</p> <pre class="lang-py prettyprint-override"><code>class BasePipeline2: client: motor.AsyncIOMotorClient db: motor.AsyncIOMotorDatabase collection: motor.AsyncIOMotorCollection def __init__(self, uri: str, database: str, collection: str): self.client = get_client(uri) self.db = self.client[database] self.collection = self.db[collection] async def run(self): &quot;&quot;&quot;Runs the pipeline and returns the results as a list&quot;&quot;&quot; result = await self.collection.aggregate({&quot;$match&quot;: {&quot;foo&quot;: &quot;bar&quot;}}).to_list(length=None) return result </code></pre> <p>I have introduced the following test:</p> <pre class="lang-py prettyprint-override"><code>@pytest.mark.asyncio async def test_run2(self): results = [{&quot;_id&quot;: 1, &quot;name&quot;: &quot;Alice&quot;, &quot;age&quot;: 25}, {&quot;_id&quot;: 2, &quot;name&quot;: &quot;Bob&quot;, &quot;age&quot;: 30}] cursor_mock = mock.AsyncMock() cursor_mock.to_list = mock.AsyncMock(return_value=results) collection_mock = mock.MagicMock() collection_mock.aggregate = mock.AsyncMock(return_value=cursor_mock) pipeline = BasePipeline2(&quot;mongodb://localhost:27017/&quot;, &quot;test_db&quot;, &quot;test_collection&quot;) with mock.patch.object(pipeline, &quot;collection&quot;, collection_mock): pipeline_result = await pipeline.run() assert pipeline_result == results </code></pre> <p>I am trying to mock the <code>to_list</code> method to return the test results. Yet no matter what I try I get an error:</p> <pre><code>AttributeError: 'coroutine' object has no attribute 'to_list' </code></pre> <p>I can not seem to find a solution, and I feel that the the issue is somewhere in the way I set up my mocks.</p>
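If I read Motor's API correctly, `aggregate()` is a plain method that returns a cursor synchronously; only `to_list()` is awaited. Mocking `aggregate` with `AsyncMock` makes it return a coroutine, so `.to_list` is looked up on the coroutine object, which produces exactly this AttributeError. A self-contained sketch (no Motor required) of the corrected mock wiring:

```python
import asyncio
from unittest import mock

results = [{"_id": 1, "name": "Alice"}, {"_id": 2, "name": "Bob"}]

cursor_mock = mock.MagicMock()
cursor_mock.to_list = mock.AsyncMock(return_value=results)

collection_mock = mock.MagicMock()          # MagicMock, not AsyncMock
collection_mock.aggregate.return_value = cursor_mock

async def run():
    # Same shape as the pipeline: aggregate(...) is sync, to_list is awaited.
    return await collection_mock.aggregate({"$match": {}}).to_list(length=None)

assert asyncio.run(run()) == results
```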
<python><mongodb><testing><pytest><motor-asyncio>
2023-05-18 13:43:43
1
1,482
garyj
76,281,424
5,924,264
How to query from a dataframe using caching?
<p>I have a dataframe <code>df</code>. This dataframe consists of columns <code>id</code>, <code>time</code> and <code>quantity</code>. I have to make repeated queries to this <code>df</code> based on <code>time</code>, and the query should return the <code>id-quantity</code> pairs corresponding to the query time.</p> <p><code>df</code> is pre-ordered by <code>id</code>, and within each <code>id</code> group, it's sorted based on <code>time</code> (though the order itself probably won't help much with dataframe querying).</p> <p>The queries are in somewhat of an alternating order, and there's repeated queries, e.g., we can query in this order: <code>time = 10, 0, 10, 0, 10, 0, 20, 10, 20, 10, 20, 10, 30, 20, 30, 20, 30, 20, ....</code> It's in increasing alternating order, and note that ALL times in <code>df</code> will eventually be queried.</p> <p>Currently, my implementation doesn't involve any caching, and it just simply does:</p> <pre><code>def query(df, time): return df[df.time == time][[&quot;id&quot;, &quot;quantity&quot;]] </code></pre> <p>But I think this may be inefficient. I'm not very well versed with Python and pandas, but my understanding is <code>df[df.time == time]</code> can be an expensive operation, especially when <code>df</code> is large, which in my case it is, around 100k rows.</p> <p>How can I do this more efficiently?</p> <p>One idea I have is, we can use a <code>dict</code>, where the keys are <code>df.time().unique()</code>, and the values are the rows corresponding to that time, but I am not sure if this is the most efficient approach here.</p>
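The dict-of-groups idea is sound and is essentially a one-liner with groupby: one pass builds the cache, after which each repeated query is an O(1) lookup instead of an O(n) boolean scan. A sketch with toy data:

```python
import pandas as pd

df = pd.DataFrame({
    "id":       [1, 2, 3, 4],
    "time":     [10, 10, 20, 20],
    "quantity": [5, 6, 7, 8],
})

# One pass over df; keys are df["time"].unique(), values are the sub-frames.
cache = {t: g[["id", "quantity"]] for t, g in df.groupby("time")}

def query(time):
    return cache[time]          # O(1) per repeated query
```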
<python><pandas><dataframe>
2023-05-18 13:42:43
1
2,502
roulette01
76,281,299
12,435,792
convert dataframe to dictionary with multiple values
<p>I have a dataframe with values as:</p> <pre><code> Agent Name Team Available for production Product Type Air/Non-Air PCC key 0 Anjani Aggarwal APAC True Fresh AIR 0R0B 0R0B_AIR_Fresh 1 Anjani Aggarwal APAC True Fresh AIR 4H9B 4H9B_AIR_Fresh 2 Anjani Aggarwal APAC True Fresh AIR 56HB 56HB_AIR_Fresh 3 Anjani Aggarwal APAC True Fresh LAND 0R0B 0R0B_LAND_Fresh 4 Anjani Aggarwal APAC True Fresh LAND B9YJ B9YJ_LAND_Fresh 5 Anjani Aggarwal APAC True Fresh LAND G6MJ G6MJ_LAND_Fresh 6 Rohit Gusain APAC True Fresh AIR 0R0B 0R0B_AIR_Fresh 7 Rohit Gusain APAC True Fresh AIR 4H9B 4H9B_AIR_Fresh 8 Rohit Gusain APAC True Fresh AIR 56HB 56HB_AIR_Fresh 9 Rohit Gusain APAC True Fresh LAND 0R0B 0R0B_LAND_Fresh 10 Rohit Gusain APAC True Fresh LAND B9YJ B9YJ_LAND_Fresh 11 Rohit Gusain APAC True Fresh LAND G6MJ G6MJ_LAND_Fresh 12 Sakshi Malhotra APAC True Fresh AIR 0R0B 0R0B_AIR_Fresh 13 Sakshi Malhotra APAC True Fresh AIR 4H9B 4H9B_AIR_Fresh 14 Sakshi Malhotra APAC True Fresh AIR 56HB 56HB_AIR_Fresh 15 Sakshi Malhotra APAC True Fresh LAND 0R0B 0R0B_LAND_Fresh </code></pre> <p>I need to convert this dataframe into a dictionary like:</p> <pre><code>dict={'Anjani Aggarwal':['0R0B_AIR_Fresh','4H9B_AIR_Fresh','56HB_AIR_Fresh','0R0B_LAND_Fresh','B9YJ_LAND_Fresh','G6MJ_LAND_Fresh'], 'Rohit Gusain':['0R0B_AIR_Fresh','4H9B_AIR_Fresh','56HB_AIR_Fresh','0R0B_LAND_Fresh','B9YJ_LAND_Fresh','G6MJ_LAND_Fresh'] } </code></pre> <p>How can I do so?</p>
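A hedged sketch of the usual one-liner, shown on a trimmed-down version of the frame above (pandas assumed available): group by `Agent Name` and collect the `key` column into lists.

```python
import pandas as pd

# Two agents, two keys each -- a small slice of the frame in the question.
df = pd.DataFrame({
    "Agent Name": ["Anjani Aggarwal", "Anjani Aggarwal",
                   "Rohit Gusain", "Rohit Gusain"],
    "key": ["0R0B_AIR_Fresh", "4H9B_AIR_Fresh",
            "0R0B_AIR_Fresh", "4H9B_AIR_Fresh"],
})

# One list of `key` values per agent, preserving row order within each group.
result = df.groupby("Agent Name")["key"].apply(list).to_dict()
```

The other columns (Team, Product Type, ...) are simply ignored by selecting only `key` before aggregating.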
<python><dataframe><dictionary>
2023-05-18 13:26:39
1
331
Soumya Pandey
76,281,217
559,827
How to monitor the total size of output produced by a subprocess in real time?
<p>The code below is a toy example of the actual situation I am dealing with<sup>1</sup>. (Warning: this code will loop forever.)</p> <pre><code>import subprocess import uuid class CountingWriter: def __init__(self, filepath): self.file = open(filepath, mode='wb') self.counter = 0 def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.file.close() def __getattr__(self, attr): return getattr(self.file, attr) def write(self, data): written = self.file.write(data) self.counter += written return written with CountingWriter('myoutput') as writer: with subprocess.Popen(['/bin/gzip', '--stdout'], stdin=subprocess.PIPE, stdout=writer) as gzipper: while writer.counter &lt; 10000: gzipper.stdin.write(str(uuid.uuid4()).encode()) gzipper.stdin.flush() writer.flush() # writer.counter remains unchanged gzipper.stdin.close() </code></pre> <p>In English, I start a subprocess, called <code>gzipper</code>, which receives input through its <code>stdin</code>, and writes compressed output to a <code>CountingWriter</code> object. The code features a <code>while</code>-loop, depending on the value of <code>writer.counter</code>, that at each iteration, feeds some random content to <code>gzipper</code>.</p> <p><em>This code does not work!</em></p> <p>More specifically, <code>writer.counter</code> never gets updated, so execution never leaves the <code>while</code>-loop.</p> <p>This example is certainly artificial, but it captures the problem I would like to solve: how to terminate the feeding of data into <code>gzipper</code> once it has written a certain number of bytes.</p> <p><strong>Q:</strong> How must I change the code above to get this to work?</p> <hr /> <p>FWIW, I thought that the problem had to do with buffering, hence all the calls to <code>*.flush()</code> in the code. They have no noticeable effect, though. 
Incidentally, I cannot call <code>gzipper.stdout.flush()</code> because <code>gzipper.stdout</code> is <em>not</em> a <code>CountingWriter</code> object (as I had expected), but rather it is <code>None</code>, surprisingly enough.</p> <hr /> <p><sup><sup>1</sup> In particular, I am using a <code>/bin/gzip --stdout</code> subprocess only for the sake of this example, because it is a more readily available alternative to the compression program that I am actually working with. If I really wanted to <code>gzip</code>-compress my output, I would use Python's standard <code>gzip</code> module.</sup></p>
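The counter most likely stays at zero because `Popen(..., stdout=writer)` only asks the object for an OS-level file descriptor (`fileno()`, which `__getattr__` delegates to the wrapped file), so the child process writes straight to that fd and the Python-level `write()` is never called. The usual workaround is to take the child's stdout as a pipe and copy the bytes yourself, counting as you go. A minimal sketch; the pass-through child command is a portable stand-in for `/bin/gzip --stdout` (an assumption made so the sketch runs anywhere):

```python
import io
import subprocess
import sys
import threading
import uuid

# Portable stand-in for `gzip --stdout`: a child that copies stdin to stdout
# unchanged (-u keeps the child's stdout unbuffered). Any filter slots in here.
PASSTHROUGH = [sys.executable, "-u", "-c",
               "import sys, shutil; shutil.copyfileobj(sys.stdin.buffer, sys.stdout.buffer)"]

def feed_until(cmd, chunk_source, limit, out):
    """Feed chunks to `cmd` until the bytes copied from its stdout reach `limit`."""
    written = 0
    lock = threading.Lock()
    with subprocess.Popen(cmd, stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE) as proc:

        def pump():
            # Copy the child's stdout ourselves: every byte now passes through
            # Python code, so it can actually be counted.
            nonlocal written
            for block in iter(lambda: proc.stdout.read(4096), b""):
                out.write(block)
                with lock:
                    written += len(block)

        reader = threading.Thread(target=pump)
        reader.start()
        while True:
            with lock:
                if written >= limit:
                    break
            proc.stdin.write(next(chunk_source))
            proc.stdin.flush()
        proc.stdin.close()   # EOF lets the child finish and the reader drain
        reader.join()
    return written

chunks = iter(lambda: str(uuid.uuid4()).encode(), None)
buf = io.BytesIO()  # stands in for the output file
total = feed_until(PASSTHROUGH, chunks, 10000, buf)
```

With a real compressor in `cmd`, output arrives in bursts (the compressor buffers internally), so `total` overshoots `limit` by up to one burst rather than stopping exactly at it.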
<python><subprocess><ipc><io-buffering>
2023-05-18 13:16:37
1
35,691
kjo
76,280,682
13,589,242
simpy Resource attributes
<p>Is there a way we can add attributes to simpy Resource objects, and use their values after simulation for further analysis?</p> <p>For example, in a call centre, I have two agents. I'll declare them as <code>agents = simpy.Resource(env, capacity=2)</code>. I'll call them with <code>with agents.request() as req:</code>.</p> <p>What I want to do is give an <code>id</code> to each agent and know which agent attended which call, and also how long each of them was idle, how long they were on call, etc.</p> <p>Any resource or idea about doing this would be appreciated.</p>
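A common answer to this (hedged sketch): simpy's `Resource` hands out anonymous capacity slots, so when per-agent identity and statistics are needed, people typically use a `simpy.Store` holding distinguishable agent objects, with `yield store.get()` / `store.put(agent)` in place of `Resource.request()`. The bookkeeping side is plain Python and can be sketched without simpy installed; the dispatch loop below is a toy stand-in for the simulation processes:

```python
import itertools

class Agent:
    """A distinguishable worker carrying its own usage statistics."""
    _ids = itertools.count(1)

    def __init__(self):
        self.id = next(self._ids)
        self.calls_handled = 0
        self.busy_time = 0.0

    def handle(self, call_id, duration):
        # In a simpy process this is where you would `yield env.timeout(duration)`
        # and record env.now-based timestamps; here we just tally.
        self.calls_handled += 1
        self.busy_time += duration

# With simpy: store = simpy.Store(env, capacity=2); for a in pool: store.put(a)
pool = [Agent(), Agent()]

# Toy dispatch: take the first free agent, record the call, put it back.
for call_id, duration in [("c1", 3.0), ("c2", 5.0), ("c3", 2.0)]:
    agent = pool.pop(0)          # simpy: agent = yield store.get()
    agent.handle(call_id, duration)
    pool.append(agent)           # simpy: store.put(agent)

stats = {a.id: (a.calls_handled, a.busy_time) for a in pool}
```

After the simulation, iterating the agent objects gives per-agent idle time, call counts, and so on, since each process knows exactly which agent it held.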
<python><simulation><simpy>
2023-05-18 12:04:19
1
572
mufassir
76,280,610
6,357,916
Accessing browser IP address in django
<p>I want to log the IP address from where a particular URL is accessed. I am implementing this functionality in Django middleware. I tried obtaining the IP address using the method given in <a href="https://stackoverflow.com/a/4581997/6357916">this answer</a>:</p> <pre><code>def get_client_ip(request): x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR') if x_forwarded_for: ip = x_forwarded_for.split(',')[0] else: ip = request.META.get('REMOTE_ADDR') return ip </code></pre> <p>I debugged this method and found that <code>request.META.get('HTTP_X_FORWARDED_FOR')</code> was <code>None</code>, and <code>request.META.get('REMOTE_ADDR')</code> was <code>10.0.0.2</code>. However, I don't understand where this IP <code>10.0.0.2</code> came from. I am running Django inside a docker container on my laptop and accessing the website inside the browser on the same laptop using the address <code>http://127.0.0.1:9001/</code>. So I thought ideally the address should be at least <code>127.0.0.1</code>. The IP address allotted to my laptop by the router is <code>192.168.0.100</code>. The website will be used inside a LAN, so I feel it would be more apt to log this IP address. But I checked all keys inside the <code>request.META</code> dictionary and found none containing the IP address <code>192.168.0.100</code>. Also, the following two keys inside <code>request.META</code> contain the local IP:</p> <pre><code>'HTTP_ORIGIN':'http://127.0.0.1:9001' 'HTTP_REFERER':'http://127.0.0.1:9001/' </code></pre> <p><strong>Q1.</strong> Where was the IP <code>10.0.0.2</code> obtained from?<br /> <strong>Q2.</strong> Which <code>request.META</code> key should I access to log the IP address from which the website is accessed?</p>
<python><django><django-rest-framework>
2023-05-18 11:56:11
1
3,029
MsA
76,280,539
813,946
Getting column names from a SELECT
<p>I wanted to get the names of the created columns from an SQL SELECT query. (My final goal is to create the INSERT ... SELECT from a complex SELECT query.)</p> <p>I started with the <a href="https://pypi.org/project/sqlparse/" rel="nofollow noreferrer">sqlparse</a> package, but I don't mind using another one. So far I have this code:</p> <pre class="lang-py prettyprint-override"><code>import sqlparse def get_new_col_list(sql: str) -&gt; str: parsed = sqlparse.parse(sql) id_lists = filter(lambda x: isinstance(x, sqlparse.sql.IdentifierList), parsed[0].tokens) names = [] for id_list in id_lists: sub_names = [] for i in id_list.get_identifiers(): if not isinstance(i, sqlparse.sql.Identifier): # this print is here for testing print(&quot;no id ----&gt;&quot;, i, i.value, i.flatten, sep=&quot;\n&quot;) continue sub_names.append(i.get_name()) names.extend(sub_names) return names </code></pre> <p>but it can't find the <code>yy</code> in this query:</p> <pre class="lang-sql prettyprint-override"><code>select p.x xx , 'SOMETHING' yy from polls p ; </code></pre> <p>I expect this as a return value:</p> <pre><code>[&quot;xx&quot;, &quot;yy&quot;] </code></pre> <p>How can I fix that code, or what other option do I have to get the list of columns?</p> <p>UPDATE:</p> <p>The code works well for the example above, but not for this one. Can somebody here give a solution that works for this case too?</p> <pre class="lang-sql prettyprint-override"><code>select a aa , p pp , cast(case when x like 'TT%' then 'TT' when x like 'PP%' then 'PP' else 'TT' end as xtype) as colname from table </code></pre>
<python><sql><parsing>
2023-05-18 11:46:20
1
1,982
Arpad Horvath -- Π‘Π»Π°Π²Π° Π£ΠΊΡ€Π°Ρ—Π½Ρ–
76,280,511
1,477,418
Pyspark group rows with sequential numbers ( with duplicates)
<p>I have data which has a sequential time slot of customers arrivals</p> <pre class="lang-py prettyprint-override"><code>df = spark.createDataFrame( [(0, 'A'), (1, 'B'), (1, 'C'), (5, 'D'), (8, 'A'), (9, 'F'), (20, 'T'), (20, 'S'), (21, 'C')], ['time_slot', 'customer']) </code></pre> <pre><code>+--------+--------+ |time_slot|customer| +--------+--------+ | 0| A| | 1| B| | 1| C| | 5| D| | 8| A| | 9| F| | 20| T| | 20| S| | 21| C| +--------+--------+ </code></pre> <p>I need to group customers by the sequential time slots (including duplicates), so that I can get:</p> <pre><code>+--------------------+---------------------------------------------+ | grouped_slots| grouped_customers| +--------------------+---------------------------------------------+ | [0, 1]| ['A', 'B', 'C']| | [5]| ['D']| | [8, 9]| ['A', 'F']| | [20, 21]| ['T', 'S', 'C']| +--------------------+---------------------------------------------+ </code></pre> <p>I have tried to use the lag function to look at the prev record but not sure how to group based on that</p> <pre class="lang-py prettyprint-override"><code>window = W.orderBy(&quot;time_slot&quot;) df = df.withColumn(&quot;prev_time_slot&quot;, F.lag(F.col('time_slot'), 1).over(window)) </code></pre> <pre><code>+---------+--------+--------------+ |time_slot|customer|prev_time_slot| +---------+--------+--------------+ | 0| A| null| | 1| B| 0| | 1| C| 1| | 5| D| 1| | 8| A| 5| | 9| F| 8| | 20| T| 9| | 20| S| 20| | 21| C| 20| +---------+--------+--------------+ </code></pre>
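The usual Spark recipe for this (stated here as a sketch; it is not run against a Spark session) is gaps-and-islands: from the `lag` column already computed above, build a 0/1 "new group" flag where `time_slot - prev_time_slot > 1`, take a running `sum` of that flag over the window as a group id, then `groupBy` the id with `collect_set`/`collect_list`. The grouping logic itself can be demonstrated in plain Python:

```python
rows = [(0, 'A'), (1, 'B'), (1, 'C'), (5, 'D'), (8, 'A'),
        (9, 'F'), (20, 'T'), (20, 'S'), (21, 'C')]

def islands(rows):
    """Group rows into runs of consecutive (or equal) time slots."""
    groups, slots, customers = [], [], []
    prev = None
    # Stable sort by slot only, so ties keep their original order.
    for slot, customer in sorted(rows, key=lambda r: r[0]):
        if prev is not None and slot - prev > 1:
            # Gap of more than 1 -> close the current island (Spark:
            # the lag-based flag is 1 here, bumping the running sum).
            groups.append((slots, customers))
            slots, customers = [], []
        if not slots or slots[-1] != slot:
            slots.append(slot)          # de-duplicate repeated slots
        customers.append(customer)
        prev = slot
    if slots:
        groups.append((slots, customers))
    return groups

out = islands(rows)
```

In PySpark the same shape falls out of `F.sum(flag).over(window)` as the group key followed by `F.collect_list`, but the column names and exact aggregation above are an illustration, not tested Spark code.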
<python><apache-spark><pyspark>
2023-05-18 11:42:59
1
1,880
Islam Elbanna
76,280,474
2,575,155
how to remove double quotes from null in dataframe
<p>I'm trying to replace blanks with null, convert the dataframe, and write it as a parquet file using awswrangler. This parquet is loaded into a Snowflake table as JSON. In the Snowflake table, null is coming through with double quotes (&quot;null&quot;) instead of the JSON null (without double quotes). It is probably something simple I'm missing; any help will be much appreciated.</p> <p><strong>Expected JSON output null without double quotes</strong></p> <pre><code>{'name': 'Jane', 'age': '25', 'city': null} </code></pre> <p><strong>Code tried</strong></p> <pre><code>import json import pandas as pd import numpy as np df = pd.DataFrame({ &quot;name&quot;: [&quot;John&quot;, &quot;Jane&quot;, &quot;Raj&quot;, &quot;Mike&quot;], &quot;age&quot;: [30, 25, &quot;&quot;, 35], &quot;city&quot;: [&quot;New York&quot;, &quot;&quot;, &quot;&quot;, &quot;Chicago&quot;] }) metadata = { &quot;eventVersion&quot;: &quot;3&quot;, &quot;producedBy&quot;: &quot;Service&quot;, } json_list = [] column_names = df.columns.tolist() for _, row in df.iterrows(): payload_dict = row.to_dict() for key, value in payload_dict.items(): if value == '': payload_dict[key] = 'null' elif isinstance(value, float): payload_dict[key] = '{:.2f}'.format(value).rstrip('0').rstrip('.') else: payload_dict[key] = str(value) output_dict = {} output_dict.update(metadata) output_dict['payload'] = payload_dict json_list.append(output_dict) df_final = pd.DataFrame(json_list) for _, row in df_final.iterrows(): print(row) output_file_name = output_path+'parquet/'+fn.replace('csv','parquet') awswrangler.s3.to_parquet(df_final, f's3://{input_bucket}/{output_file_name}', dataset=False, index=False) </code></pre>
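The quoted null almost certainly comes from the line that sets the Python *string* `'null'`: a string serializes as the JSON string `"null"`, while Python `None` serializes as a bare JSON `null`. A stdlib sketch of the difference (the same substitution should carry through awswrangler/parquet into Snowflake, though that half is not tested here):

```python
import json

row = {"name": "Jane", "age": "25", "city": ""}

# Mapping blanks to the *string* 'null' produces a quoted "null" in JSON;
# mapping them to Python None produces a real JSON null.
as_string = {k: ("null" if v == "" else v) for k, v in row.items()}
as_none   = {k: (None   if v == "" else v) for k, v in row.items()}

quoted   = json.dumps(as_string)
unquoted = json.dumps(as_none)
```

So in the loop above, `payload_dict[key] = 'null'` would become `payload_dict[key] = None` (and the later `str(value)` branch must not be allowed to turn that `None` back into the string `'None'`).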
<python>
2023-05-18 11:39:01
1
736
marjun
76,280,434
10,033,921
python docker image main.py not found
<p>I am trying to create a Python image and run the container as a pod, following the steps below.</p> <p>The Python file main.py is as follows:</p> <pre><code>from flask import Flask app = Flask(__name__) @app.route(&quot;/&quot;) def hello(): return &quot;Hello from Python!&quot; if __name__ == &quot;__main__&quot;: app.run(host='0.0.0.0') </code></pre> <p>The requirements.txt contains the following:</p> <pre><code>Flask </code></pre> <p>I ran the application locally in VS Code and it worked fine.</p> <p>The Dockerfile is as follows:</p> <pre><code>FROM python:3.7 RUN mkdir /app WORKDIR /app ADD . /app/ RUN pip install -r requirements.txt EXPOSE 5000 CMD [&quot;python&quot;, &quot;/app/main.py&quot;] </code></pre> <p>I built the image in AWS using a CodeBuild project and published it to ECR. I ran the command below to create the pod:</p> <pre><code>k run hello-pod –-image=&lt;image uri&gt;:v1 </code></pre> <p>After creating the pod, I see its status as β€œCrashLoopBackOff”.</p> <p>When I run <code>kubectl logs hello-pod</code>, I see the message below:</p> <pre><code>Python: can’t open file β€˜main.py’ : [Errno 2] No such file or directory </code></pre>
<python><kubernetes>
2023-05-18 11:33:53
2
776
Sudhir Jangam
76,280,394
2,987,552
dynamically constructing client side content from server in flask
<p>I am trying to setup client side content from server in a flask app. Here is my code</p> <pre><code># Importing flask module in the project is mandatory # An object of Flask class is our WSGI application. from flask import Flask, send_file, Response, render_template, redirect, url_for, request import random import time import wave import io import os # Flask constructor takes the name of # current module (__name__) as argument. app = Flask(__name__) # The route() function of the Flask class is a decorator, # which tells the application which URL should call # the associated function. # β€˜/’ URL is bound with index() function. @app.route('/') def paadas(): #Approach 2 def generate(files): with wave.open(files[0], 'rb') as f: params = f.getparams() frames = f.readframes(f.getnframes()) for file in files[1:]: with wave.open(file, 'rb') as f: frames += f.readframes(f.getnframes()) buffer = io.BytesIO() with wave.open(buffer, 'wb') as f: f.setparams(params) f.writeframes(frames) buffer.seek(0) return buffer.read() files = [] number = random.randint(1,10) files.append(&quot;./numbers/&quot; + str(number) + &quot;.wav&quot;) times = random.randint(1,10) files.append(&quot;./times/&quot; + str(times) + &quot;.wav&quot;) data = dict( file=(generate(files), &quot;padaa.wav&quot;), ) app.post(url_for('static', filename='padaa.wav'), content_type='multipart/form-data', data=data) print ('posted data on client\n') return render_template(&quot;index.html&quot;) @app.route(&quot;/recording&quot;, methods=['POST', 'GET']) def index(): if request.method == &quot;POST&quot;: f = open('./file.wav', 'wb') print(request) f.write(request.data) f.close() if os.path.isfile('./file.wav'): print(&quot;./file.wav exists&quot;) return render_template('index.html', request=&quot;POST&quot;) else: return render_template(&quot;index.html&quot;) # main driver function if __name__ == '__main__': # run() method of Flask class runs the application # on the local development server. 
app.run() </code></pre> <p>However it fails while doing app.post with the error of</p> <pre><code>AssertionError: The setup method 'post' can no longer be called on the application. It has already handled its first request, any changes will not be applied consistently. Make sure all imports, decorators, functions, etc. needed to set up the application are done before running it. </code></pre> <p>Here is the complete stack:</p> <pre><code>Traceback (most recent call last): File &quot;C:\ML\Tables\app\webapp\venv\lib\site-packages\flask\app.py&quot;, line 2213, in __call__ return self.wsgi_app(environ, start_response) File &quot;C:\ML\Tables\app\webapp\venv\lib\site-packages\flask\app.py&quot;, line 2193, in wsgi_app response = self.handle_exception(e) File &quot;C:\ML\Tables\app\webapp\venv\lib\site-packages\flask\app.py&quot;, line 2190, in wsgi_app response = self.full_dispatch_request() File &quot;C:\ML\Tables\app\webapp\venv\lib\site-packages\flask\app.py&quot;, line 1486, in full_dispatch_request rv = self.handle_user_exception(e) File &quot;C:\ML\Tables\app\webapp\venv\lib\site-packages\flask\app.py&quot;, line 1484, in full_dispatch_request rv = self.dispatch_request() File &quot;C:\ML\Tables\app\webapp\venv\lib\site-packages\flask\app.py&quot;, line 1469, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File &quot;C:\ML\Tables\app\PaadasMLFlaskApp\PaadasML.py&quot;, line 47, in paadas app.post(url_for('static', filename='padaa.wav'), content_type='multipart/form-data', data=data) File &quot;C:\ML\Tables\app\webapp\venv\lib\site-packages\flask\scaffold.py&quot;, line 50, in wrapper_func self._check_setup_finished(f_name) File &quot;C:\ML\Tables\app\webapp\venv\lib\site-packages\flask\app.py&quot;, line 511, in _check_setup_finished raise AssertionError( </code></pre> <p>What am I missing? How can I do this? 
You can get my complete working app from <a href="https://github.com/sameermahajan/PaadasMLFlaskApp" rel="nofollow noreferrer">https://github.com/sameermahajan/PaadasMLFlaskApp</a> if you want to try it out. Just uncomment the app.post... statement in the Python file to reproduce the error.</p> <p>Basically, when I return from paadas(), I want to play the audio like it is done by</p> <pre><code>return Response(generate(files), mimetype='audio/wav') </code></pre> <p>but I want the focus on index.html so that I can continue my interaction.</p>
<javascript><python><html><flask><client-server>
2023-05-18 11:28:16
1
598
Sameer Mahajan
76,280,371
15,239,717
How Can I add Extra Login Field to Django App and use session to identify User
<p>I am working on a Django real estate management website where I want users to choose their category (Landlord, Agent, Prospect) upon registration and login. I have done this for registration, but for login I am unable to add the extra field and log in successfully. Django form code:</p> <pre><code>USER_TYPE_CHOICES = ( ('landlord', 'Landlord'), ('agent', 'Agent'), ('prospect', 'Prospect'), ) class UserLoginForm(forms.Form): user_type = forms.ChoiceField(choices=USER_TYPE_CHOICES, widget=forms.Select(attrs={'class': 'form-control left-label'})) </code></pre> <p>Django login view code:</p> <pre><code>class MyLoginView(LoginView): template_name = 'account/login.html' authentication_form = UserLoginForm def get_form(self, form_class=None): form = super().get_form(form_class) form.fields['user_type'] = forms.ChoiceField( label='User Type', choices=[('landlord', 'Landlord'), ('agent', 'Agent'), ('prospect', 'Prospect')], widget=forms.Select(attrs={'class': 'form-control left-label'}) ) return form def form_valid(self, form): user = form.get_user() user_type = form.cleaned_data['user_type'] # Check if the selected user type matches the user's actual user type if user_type == 'landlord' and not user.profile.landlord: messages.error(self.request, 'Invalid user type for this user.') return self.form_invalid(form) elif user_type == 'agent' and not hasattr(user, 'agent'): messages.error(self.request, 'Invalid user type for this user.') return self.form_invalid(form) elif user_type == 'prospect' and not hasattr(user, 'prospect'): messages.error(self.request, 'Invalid user type for this user.') return self.form_invalid(form) login(self.request, user) self.request.session['user_type'] = user_type return super().form_valid(form) </code></pre> <p>Any time I try loading the login page, I get this error: <strong>__init__() got an unexpected keyword argument 'request'</strong>. How can I fix this?</p>
<python><django>
2023-05-18 11:24:50
2
323
apollos
76,280,112
1,194,864
Read images from folder to tensors in torch and run a binary classifier
<p>I would like to load images from a local directory in Torch in order to train a binary classifier. My directory looks as follows:</p> <pre><code>-data - class_1_folder - class_2_folder </code></pre> <p>My folders <code>class_1</code> and <code>class_2</code> contain the .jpg images for each class. My images have various sizes (mostly rectangular shapes though). I am using the following code to read my files:</p> <pre><code>import os import pdb import torch from torchvision import datasets, transforms import matplotlib.pyplot as plt from torchvision.transforms.functional import to_pil_image PATH = &quot;data/&quot; transform = transforms.Compose([transforms.Resize(256), transforms.RandomCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224,0.225])]) dataset = datasets.ImageFolder(PATH, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=356, shuffle=True) images, labels = next(iter(dataloader)) tensor_image = images[1] img = to_pil_image(inv_normalize(tensor_image)) plt.imshow(img) plt.show() </code></pre> <p>When I check these results with <code>imshow</code>, it seems that <code>imshow</code> portrays a <code>grid</code> with the image repeated nine times (<code>3X3</code>). How do I avoid that? Is there also a way to easily revert the transformations before the <code>imshow</code>?</p> <p>I am trying to do something to <code>invert</code> the <code>normalization</code> like the following:</p> <pre><code>inv_normalize = transforms.Normalize( mean=[-0.485/0.229, -0.456/0.224, -0.406/0.255], std=[1/0.229, 1/0.224, 1/0.255] ) </code></pre> <p>However, the results are still a bit weird! Isn't that the correct <code>inverse</code> transformation?</p>
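One thing worth double-checking in the posted `inv_normalize`: it uses `0.255` in two places where the forward transform's std is `0.225`, so the third channel comes back slightly off. The inverse of `v = (x - mean) / std` as a single Normalize step is `mean' = -mean/std, std' = 1/std`, since `(v - (-mean/std)) / (1/std) == v*std + mean`. A quick numpy check of that algebra (numpy assumed available; this does not touch the 3x3 grid question):

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std  = np.array([0.229, 0.224, 0.225])   # note: 0.225, not 0.255

x = np.array([0.2, 0.5, 0.8])            # one fake pixel, per channel
normalized = (x - mean) / std            # what transforms.Normalize(mean, std) does

# The inverse written as a Normalize step: subtract mean'=-mean/std,
# divide by std'=1/std -- which is algebraically v*std + mean.
recovered = (normalized - (-mean / std)) / (1.0 / std)
```

So the posted `inv_normalize` is conceptually right; fixing `0.255` to `0.225` in both the mean and std entries should make the colors come back exactly.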
<python><pytorch><classification>
2023-05-18 10:56:12
1
5,452
Jose Ramon
76,280,108
6,357,916
Accessing logged in user name inside django middleware
<p>I am trying to access <code>request.user.username</code> inside the following Django middleware:</p> <pre><code>class MyMiddleware(): def __init__(self, get_response): self.get_response = get_response def __call__(self, request): response = self.get_response(request) user_id = request.user.username # ... </code></pre> <p>I ensured that <code>MyMiddleware</code> appears after <code>SessionMiddleware</code> and <code>AuthenticationMiddleware</code> in <code>settings.py</code>:</p> <pre><code>MIDDLEWARE = ( 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', # ... `my_app.my_py_file.MyMiddleware` ) </code></pre> <p>But still, <code>request.user.username</code> turns out to be an empty string.</p> <p><strong>Q1.</strong> Why is that?</p> <p>I also tried adding <code>@authentication_classes([TokenAuthentication,])</code>:</p> <pre><code>from rest_framework.authentication import TokenAuthentication # &lt;-- from rest_framework.decorators import authentication_classes # &lt;-- @authentication_classes([TokenAuthentication,]) # &lt;-- class MyMiddleware(): def __init__(self, get_response): self.get_response = get_response def __call__(self, request): response = self.get_response(request) user_id = request.user.username # ... </code></pre> <p>But it did not help.</p> <p>I also tried making the class an <code>APIView</code> (though it looks stupid, as middleware is not a REST API):</p> <pre><code>from rest_framework.authentication import TokenAuthentication # &lt;-- from rest_framework.views import APIView # &lt;-- class MyMiddleware(APIView): # &lt;-- authentication_classes = [TokenAuthentication] # &lt;-- def __init__(self, get_response): self.get_response = get_response def __call__(self, request): response = self.get_response(request) user_id = request.user.username # ... </code></pre> <p>Still, it did not work.</p> <p>Then I came across <a href="https://stackoverflow.com/a/57142064/6357916">this answer</a>.
It says:</p> <blockquote> <p>If you use simple_jwt, it's not possible to directly use request.user in a middleware because authentication mechanism works in view function. So you should manually authenticate inside the middleware and then you can use request.user to get whatever data from the user model.</p> </blockquote> <p>It makes use of <code>rest_framework_simplejwt</code> which does not seem to be the part of django framework and needs to be installed explicitly.</p> <p>The only way I am able to access <code>username</code> in my middleware is using <a href="https://stackoverflow.com/a/26246463/6357916">this answer</a>. I modified my middleware as follows:</p> <pre><code>from re import sub from rest_framework.authtoken.models import Token class MyMiddleware(object): def __init__(self, get_response): self.get_response = get_response def __call__(self, request): response = self.get_response(request) user_id = request.user.username if not user_id: user_id = self.getUsername(request) # ... def getUsername(self, request): header_token = request.META.get('HTTP_AUTHORIZATION', None) if header_token is not None: try: token = sub('Token ', '', header_token) token_obj = Token.objects.get(key = token) except Token.DoesNotExist: pass return token_obj.user.username </code></pre> <p><strong>Q2.</strong> Is this the only and correct way to access username inside middleware without installing third party libraries?</p> <p><strong>Update</strong></p> <p>Below is my <code>CustomUser</code> class and possibly related method <code>create_custom_user()</code>:</p> <pre><code>class CustomUser(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) # there are some application specific fields @receiver(post_save, sender=User) @debugger_queries def create_custom_user(sender, **kwargs): instance = kwargs['instance'] if kwargs['created']: # create CustomUser.objects.get_or_create(user=instance) </code></pre>
<python><django><django-rest-framework><django-views><django-middleware>
2023-05-18 10:56:04
1
3,029
MsA
76,279,973
11,471,439
Problems with interactive kernel in vscode conda environment
<p>I am having problems with using interactive jupyter kernels within vscode when in a conda environment.</p> <p>I create a new conda environment, activate it and install jupyter. Then try Ctrl+Shift+p and &quot;Jupyter: Create interactive window&quot; in command palette. Initially everything works fine. However, later on in a different session I try to activate the environment and open an interactive window. Now I get</p> <p><code>Failed to start the kernel: unhandled error.</code></p> <p>Looking in the log the error is</p> <pre><code>Failed to run command: ['&lt;path_to_local&gt;/.local/share/r-miniconda/envs/ner/bin/python', '-m', 'ipykernel_launcher', '-f', '&lt;path_to_local&gt;/.local/share/jupyter/runtime/kernel-20f53f7c-664b-471c-9e29-26e105fefd28.json'] PATH='&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to 
r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin:/opt/code-server/lib/vscode/bin/remote-cli:/&lt;path_to_local&gt;/.local/bin:/&lt;bin_path&gt;/bin:/usr/local/texlive/2022/bin/x86_64-linux:/&lt;path_to_local&gt;/.local/share/r-miniconda/envs/ner/bin:/&lt;path_to_local&gt;/.local/share/r-miniconda/condabin:/&lt;path_to_local&gt;/.local/bin:/&lt;bin_path&gt;/bin:/usr/local/texlive/2022/bin/x86_64-linux:/opt/rh/devtoolset-7/root/usr/bin:/usr/lib64/ccache:/opt/python/3.9.5/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' with kwargs: {'stdin': -1, 'stdout': None, 'stderr': None, 'cwd': '&lt;cwd&gt;', 'start_new_session': True} </code></pre> <pre><code>[Errno 7] Argument list too long: '/&lt;path to r-miniconda&gt;/r-miniconda/envs/ner/bin/python' </code></pre> <p>(I replaced some of the explicit paths with stand-ins in &lt; &gt; brackets)</p> <p>Does anyone know what this means and how I can prevent this from happening? Is it because of too many repeated entries of the miniconda path in the PATH environment variable? If so why has that happened and how do I stop it?</p>
<python><visual-studio-code><jupyter><conda>
2023-05-18 10:39:56
1
324
goblinshark
76,279,929
4,473,615
Image to PDF using pdfkit
<p>I am facing issues while adding an image to a PDF.</p> <pre><code>result = &quot;&quot;&quot; &lt;h1&gt;Image&lt;/h1&gt; &lt;img src=&quot;static/images/image.png&quot;&gt; &quot;&quot;&quot; pdfkit.from_string(result, &quot;static/images/foo.pdf&quot;) </code></pre> <p>Error:</p> <pre><code>raise IOError('wkhtmltopdf reported an error:\n' + stderr) OSError: wkhtmltopdf reported an error: Exit with code 1 due to network error: ProtocolUnknownError </code></pre> <p>How can I resolve this?</p>
<python><pdfkit>
2023-05-18 10:34:47
1
5,241
Jim Macaulay
76,279,889
8,930,395
How to use Background Tasks inside a function that is called by a FastAPI endpoint?
<p>I have the below FastAPI endpoint:</p> <pre><code>@app.post(&quot;/abcd&quot;, response_model=abcdResponseModel) async def getServerDetails(IncomingData: serverModel) -&gt; abcdResponseModel: &quot;&quot;&quot; Get Server Details &quot;&quot;&quot; result = ServerDetails(IncomingData.dict()) return result </code></pre> <p>The above calls the function below:</p> <pre><code>def ServerDetails(IncomingData: dict) -&gt; Tuple[list, int]: background_tasks = BackgroundTasks() background_tasks.add(notification_func, arg1, arg2) response = some operation..... return response, 500 </code></pre> <p>Is this the way to do this, or do I need to create a <code>BackgroundTasks()</code> object inside the <code>getServerDetails</code> endpoint and pass it to the <code>ServerDetails</code> function?</p>
<python><fastapi><background-task><starlette>
2023-05-18 10:29:31
1
4,606
LOrD_ARaGOrN
76,279,830
5,070,879
Snowflake Python Worksheet - main handler with additional arguments
<p>The goal is to develop and deploy Snowpark code inside a Python Worksheet that can take user input.</p> <p>If we try to provide additional user-defined arguments, we get:</p> <pre><code>import snowflake.snowpark as snowpark def main(session: snowpark.Session, param): df = session.table('snowflake_sample_data.tpch_sf10.lineitem').limit(param) return df </code></pre> <blockquote> <p><strong>Handler has more arguments than expected.</strong> Function signature must have exactly one argument:</p> <pre><code> def main(session: snowpark.Session): </code></pre> </blockquote> <p>If we try to deploy the code to a stored procedure:</p> <p><a href="https://i.sstatic.net/ff9wx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ff9wx.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/vofqI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vofqI.png" alt="enter image description here" /></a></p> <blockquote> <p>Stored procedure my_proc could not be created. failed running query: <strong>Python function is defined with 2 arguments (including session), but stored procedure definition contains 0 arguments. Python function arguments are expected to be session and stored procedure defined arguments in function MY_PROC with handler main</strong></p> </blockquote>
<python><snowflake-cloud-data-platform>
2023-05-18 10:21:13
1
180,782
Lukasz Szozda
76,279,739
2,682,273
_NoDefault.no_default in pandas library
<p>I have recently seen this in the pandas documentation. Can someone explain this, and how can I access the value?</p> <p><a href="https://i.sstatic.net/MLcBU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MLcBU.png" alt="enter image description here" /></a></p>
<python><pandas>
2023-05-18 10:08:28
1
1,624
TPArrow
76,279,329
6,014,330
Checking if a pinned package in requirements.txt is the latest version or not
<p>Consider you have the following <code>requirements.txt</code>:</p> <pre><code>pydantic==1.10.7 mypy==1.2.0 pylint&gt;=2.17.3 </code></pre> <p>When installing these requirements, pydantic and mypy will install the versions they are pinned to, and pylint will install <code>2.17.4</code> instead of <code>2.17.3</code> because of the operator.</p> <p>Now there is a newer version of mypy available (<code>1.3.0</code>). How can I programmatically find this out, having only the <code>requirements.txt</code>, and only for pinned packages?</p> <p>Things I've tried:</p> <ul> <li><code>pip list --outdated</code> <ul> <li>This is basically what I want, but ONLY for the packages in <code>requirements.txt</code>, not for all installed packages</li> </ul> </li> <li>Scripting a parser to consume the <code>requirements.txt</code> and resolve the version <ul> <li>This is just a headache. Digging around in <code>pip._internals</code> to find how to parse a requirements file, get the canonical name of the pinned package, resolve the versions, support <code>--extra-index-url</code>, support private repositories, etc. is a lot of work for something that should be relatively simple to do</li> </ul> </li> <li><a href="https://github.com/sesh/piprot" rel="nofollow noreferrer">https://github.com/sesh/piprot</a> <ul> <li>This basically does what I want, but it is out of date and has been deprecated in favor of <code>pip list --outdated</code></li> </ul> </li> </ul> <p>Expected output from the above <code>requirements.txt</code> would be something like:</p> <pre><code>pydantic==1.10.7 Up to date mypy==1.2.0 Out of date. New version is: 1.3.0 pylint&gt;=2.17.3 Will install version 2.17.4 and is Up to date </code></pre> <p>Although I don't really care about the output so much as HOW I resolve the information.</p>
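Not part of the question, but a rough sketch of one way to tackle it: parse out only the `==`-pinned names yourself (a few lines of regex, no `pip._internal`), then filter `pip list --outdated --format=json` down to those names. The helper names and report format below are invented for illustration:

```python
import json
import re
import subprocess

PIN_RE = re.compile(r"^\s*([A-Za-z0-9._-]+)\s*==\s*(\S+)")

def pinned_packages(requirements_text):
    """Return {canonical_name: version} for '=='-pinned lines only."""
    pins = {}
    for line in requirements_text.splitlines():
        m = PIN_RE.match(line)
        if m:
            # pip canonicalizes names to lowercase with dashes
            pins[m.group(1).lower().replace("_", "-")] = m.group(2)
    return pins

def report_outdated(pins):
    # 'pip list --outdated --format=json' covers ALL installed packages;
    # here it is filtered down to just the pinned names
    out = subprocess.run(
        ["pip", "list", "--outdated", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    outdated = {p["name"].lower(): p["latest_version"] for p in json.loads(out)}
    for name, version in pins.items():
        if name in outdated:
            print(f"{name}=={version} Out of date. New version is: {outdated[name]}")
        else:
            print(f"{name}=={version} Up to date")
```

Non-`==` specifiers like `pylint>=2.17.3` simply fall out of `pinned_packages`, matching the "only for pinned packages" requirement; resolving what they *would* install still needs a real resolver.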
<python><pip>
2023-05-18 09:12:19
1
2,570
Keegan Cowle
76,279,304
1,191,545
Configure Pylance to stop prefixing project directory on import auto-complete
<p>When working in a Git repository where my Python/Django source is in a subfolder <code>{$workspace}/app</code>, as seen below.</p> <ul> <li>project/ <ul> <li><em>.vscode/</em></li> <li><em>.git/</em></li> <li><strong>app/</strong> -- the app source code (not a Python module)</li> <li><em>docs/</em></li> <li><em>.gitignore</em></li> <li><em>LICENSE</em></li> </ul> </li> </ul> <p>The problem is that VSCode adds an incorrect <code>app.</code> prefix when auto-generating import statements. For example, it will generate <code>app.core</code> when the actual project module is <code>core</code>. Also, there isn't an <code>__init__.py</code> in the <code>/app</code> directory.</p> <p>I have read that I can open a workspace in the <code>app/</code> directory, but the project repository is one directory above the app source and I sometimes need to make changes at the project root level.</p> <p>I have tried the following items in <code>settings.json</code>, which don't seem correct when all I want to do is configure the primary auto-complete and analysis paths.</p> <pre class="lang-json prettyprint-override"><code>{ &quot;python.analysis.extraPaths&quot;: [ &quot;/app&quot;, ], &quot;python.autoComplete.extraPaths&quot;: [ &quot;/app&quot;, ], { </code></pre> <p>How can I get VSCode to stop treating the project <code>/app</code> directory as a Python module?</p>
<python><visual-studio-code><pylance>
2023-05-18 09:09:12
1
1,932
Brylie Christopher Oxley
76,279,225
13,328,553
Unit tests for FastAPI - how to use breakpoints?
<p>I have a code base that is using FastAPI. The unit tests use the <code>TestClient</code> as recommended in the <a href="https://fastapi.tiangolo.com/tutorial/testing/" rel="noreferrer">FastAPI docs</a>.</p> <p>When debugging with VS Code, I add breakpoints to the code of my routes. However the debugger does not stop at the breakpoints.</p> <p>Example here:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI from fastapi.testclient import TestClient app = FastAPI() @app.get(&quot;/&quot;) async def read_main(): x = 1 + 1 # Breakpoint added here - Does not work return {&quot;msg&quot;: x} client = TestClient(app) def test_read_main(): response = client.get(&quot;/&quot;) # Breakpoint added here - Does work assert response.status_code == 200 assert response.json() == {&quot;msg&quot;: &quot;Hello World&quot;} </code></pre> <p>How can I get FastAPI to stop on breakpoints for unit tests? The docs mention <a href="https://fastapi.tiangolo.com/tutorial/debugging/" rel="noreferrer">debugging</a>, but this is only for when the uvicorn server is running, and not with the <code>TestClient</code>.</p>
<python><visual-studio-code><pytest><fastapi>
2023-05-18 08:58:16
2
464
SoftwareThings
76,278,747
11,725,056
Out of 4 methods for Document Question Answering in LangChain, which one is the fastest and why (ignoring the LLM model used)?
<p><strong>NOTE:</strong>: My <em>reference document</em> data changes periodically so if I use Embedding Vector space method, I have to Modify the embedding, say once a day</p> <p>I want to know these factors so that I can design my system to compensate my <em>reference document data</em> generation latency with creating embedding beforehand using Cron Jobs</p> <p>There are 4 methods in <code>LangChain</code> using which we can retrieve the QA over Documents. More or less they are wrappers over one another.</p> <ol> <li><code>load_qa_chain</code> uses Dynamic Document each time it's called</li> <li><code>RetrievalQA</code> get it from the Embedding space of document</li> <li><code>VectorstoreIndexCreator</code> is the wrapper of <code>2.</code></li> <li><code>ConversationalRetrievalChain</code> uses Embedding Space and it has a memory and chat history too.</li> </ol> <p>What is the difference between <code>1,3,4</code> in terms of speed?</p> <ol> <li><p>If I use embedding DB, would it be faster than <code>load_qa_chain</code> assuming that making Embedding of the document beforehand (like in <code>2,3</code>)helps or is it same because the time taken for a 50 words Prompt is same as Time taken for a 2000 words (Document Text + Prompt in <code>load_qa_chain</code>) ?</p> </li> <li><p>Will the speed be affected If I use <code>ConversationalRetrievalChain</code> with or without <code>memory</code> and <code>chat_history</code>?</p> </li> </ol>
<python><openai-api><gpt-3><langchain>
2023-05-18 07:45:43
2
4,292
Deshwal
76,278,737
6,828,329
Searching the current working directory and retrieving all of the files in a list in ascending order of the number at the end of the file name
<p>I have the following list of files in my current working directory:</p> <pre><code>Java_question_1.txt Java_question_2.txt Java_question_3.txt </code></pre> <p>and so on ...</p> <p>I want to search the current working directory and retrieve all of these file objects / names in a list in ascending order of the number at the end of the file name.</p> <p>Also, I want to perform a check to ensure that they are actually .txt (or possibly some other file extension, like .bin or .dat. So, if we have</p> <pre><code>Java_question_1.txt Java_question_3.txt Java_question_3.dat Java_question_4.txt </code></pre> <p>and so on, it would only get the .txt files (or the .dat files, if I so choose).</p> <p>I was thinking of something like this:</p> <pre><code>if (language == &quot;Java&quot;): path = os.getcwd() print(&quot;path is: &quot; + path) files = [] for j in range(1, NUMBER_OF_JAVA_QUESTIONS_AVAILABLE + 1): for i in os.listdir(path): if os.path.isfile(os.path.join(path, i)) and 'Java_question_' in i: files.append(i) </code></pre> <p>How can I do this?</p>
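For reference, a small sketch of one way to do this without nested loops: filter the directory listing with a regex and sort numerically on the captured number. The function name and the `notes.txt` distractor in the test are invented for the example:

```python
import re

def numbered_files(prefix, ext, names):
    """Filter names like '<prefix>_<n>.<ext>' and sort by the trailing number."""
    pattern = re.compile(rf"^{re.escape(prefix)}_(\d+)\.{re.escape(ext)}$")
    # keep (number, name) pairs so the sort is numeric, not lexicographic
    matched = [(int(m.group(1)), n) for n in names if (m := pattern.match(n))]
    return [n for _, n in sorted(matched)]
```

Calling `numbered_files("Java_question", "txt", os.listdir(os.getcwd()))` would give the `.txt` files in ascending numeric order (so `_10` sorts after `_9`, unlike a plain string sort); swap `"txt"` for `"dat"` to select the other extension.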
<python>
2023-05-18 07:44:43
1
2,406
The Pointer
76,278,620
9,500,955
Create arrays of integers with consecutive numbers in PySpark
<p>I have a dataframe like this one:</p> <pre><code>+---+-----+------------+ | ID| pos| value| +---+-----+------------+ | A1| 0| ABC| | A1| 1| BCD| | A1| 2| CDF| | A1| 5| ABC| | A1| 8| FGR| | A1| 9| EFD| | A2| 0| L_1| | A2| 1| L_2| | A2| 3| STU| +---+-----+------------+ </code></pre> <p>The expected output is:</p> <pre><code>+---+-------------+-----------------------------+ | ID| arr(pos)| arr(value)| +---+-------------+-----------------------------+ | A1| [0, 1, 2]| ['ABC', 'BCD', 'CDF']| | A1| [5]| ['ABC']| | A1| [8, 9]| ['FGR', 'EFD']| | A2| [0, 1]| ['L_1', 'L_2']| | A2| [3]| ['STU']| +---+-----+-------+-----------------------------+ </code></pre> <p>The meaning of this output is to <code>groupby</code> with the <code>ID</code> column and also <code>collect_list</code> in both the <code>pos</code> and <code>value</code> columns. Importantly, for an <code>ID</code>, the list of <code>pos</code> has to be consecutive. If the previous and next <code>pos</code> of an ID is not consecutive like the <code>ID=A1</code> with <code>pos=5</code>, then it will be an array with a single value.</p> <p>For an <code>ID</code>, the value of <code>pos</code> is not duplicated.</p> <p>I tried using the <code>lag</code> column to create the previous and next index but I don't know how to transform from that.</p> <pre class="lang-py prettyprint-override"><code>window = W.partitionBy(&quot;ID&quot;).orderBy(&quot;pos&quot;) df = ( df .withColumn(&quot;prev_pos&quot;, F.lag(F.col('pos'), 1).over(window)) .withColumn(&quot;next_pos&quot;, F.lag(F.col('pos'), -1).over(window)) ) </code></pre> <p>I have this output with this code but I don't know how to identify the consecutive pattern.</p> <pre><code>+---+-----+------------+---------+---------+ | ID| pos| value| prev_pos| next_pos| +---+-----+------------+---------+---------+ | A1| 0| ABC| null| 1| | A1| 1| BCD| 0| 2| | A1| 2| CDF| 1| 5| | A1| 5| ABC| 2| 8| | A1| 8| FGR| 5| 9| | A1| 9| EFD| 8| null| | A2| 0| L_1| null| 1| | A2| 1| L_2| 
0| 3| | A2| 3| STU| 1| null| +---+-----+------------+---------+---------+ </code></pre> <p>Another approach I tried is:</p> <pre><code>df = ( df .groupby(&quot;ID&quot;) .agg( F.sort_array(F.collect_list(F.struct(&quot;pos&quot;, &quot;value&quot;))) .alias(&quot;pos_list&quot;) ) .withColumn(&quot;arr(pos)&quot;, F.col(&quot;pos_list.pos&quot;)) .withColumn(&quot;arr(value)&quot;, F.col(&quot;pos_list.value&quot;)) .drop(&quot;pos_list&quot;) ) </code></pre> <p>Then I will have this output but I don't know how to split the array when the number is not consecutive:</p> <pre><code>+---+--------------------+---------------------------------------------+ | ID| arr(pos)| arr(value)| +---+--------------------+---------------------------------------------+ | A1| [0, 1, 2, 5, 8, 9]| ['ABC', 'BCD', 'CDF', 'ABC', 'FGR', 'EFD']| | A2| [0, 1, 3]| ['L_1', 'L_2', 'STU']| +---+--------------------+---------------------------------------------+ </code></pre> <p>Code for creating the input:</p> <pre class="lang-py prettyprint-override"><code>from pyspark.sql import functions as F, Window as W df = spark.createDataFrame( [('A1', 0, 'ABC'), ('A1', 1, 'BCD'), ('A1', 2, 'CDF'), ('A1', 5, 'ABC'), ('A1', 8, 'FGR'), ('A1', 9, 'EFD'), ('A2', 0, 'L_1'), ('A2', 1, 'L_2'), ('A2', 2, 'STU')], ['ID', 'pos', 'value']) </code></pre>
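The standard trick for "consecutive runs" is to subtract a running row index from `pos`: the difference is constant within a run, so it can serve as an extra grouping key. In PySpark that key would be something like `F.col('pos') - F.row_number().over(window)` followed by a `groupBy` on `ID` plus that key; the sketch below demonstrates the same idea in plain Python so the trick is easy to verify:

```python
from itertools import groupby

def split_consecutive(pairs):
    """Split (pos, value) pairs, already sorted by pos, into runs of
    consecutive positions. Key trick: pos - row_index is constant
    within a consecutive run."""
    runs = []
    for _, grp in groupby(enumerate(pairs), key=lambda t: t[1][0] - t[0]):
        chunk = [pv for _, pv in grp]
        runs.append(([p for p, _ in chunk], [v for _, v in chunk]))
    return runs
```

Applied per `ID`, this yields exactly the expected `arr(pos)` / `arr(value)` rows; in Spark the per-`ID` window ordering by `pos` plays the role of the pre-sorted input assumed here.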
<python><pyspark>
2023-05-18 07:24:53
1
1,974
huy
76,278,583
5,686,015
Querying snowflake table with sqlalchemy adds redundant database name prefix to table names
<p>I'm using SQLAlchemy to query tables in a Snowflake DB. The SQL queries are constructed through code. When it runs something like <code>conn.execute('SELECT COUNT(*) FROM dbname.purchase')</code> it throws the following error:</p> <pre><code>ProgrammingError: (snowflake.connector.errors.ProgrammingError) 002003 (02000): SQL compilation error: Schema 'DBNAME.DBNAME' does not exist or not authorized. </code></pre> <p>But if I run <code>conn.execute('SELECT COUNT(*) FROM purchase')</code> it works just fine. My question is, is this behaviour of prefixing the db name configurable, because I'd rather not alter the query construction logic. The code is supposed to support quite a few other dialects, and Snowflake is the only one where this is happening.</p>
<python><sqlalchemy><snowflake-cloud-data-platform>
2023-05-18 07:19:21
1
1,874
Judy T Raj
76,278,578
8,236,050
selenium webdriver connection problems (not connected to DevTools)
<p>I am running a Selenium webdriver program that needs a lot of time to complete. My problem is that, for some reason, whenever I run this script as a headless browser, I end up having issues. For example, if I use Firefox driver I get this after a short time:</p> <pre><code>urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=58161): Max retries exceeded with url: /session/1aa9d459-8770-41ec-a25b-a1a9be8f7d2b/elements (Caused by NewConnectionError('&lt;urllib3.connection.HTTPConnection object at 0x1817268d0&gt;: Failed to establish a new connection: [Errno 61] Connection refused')) </code></pre> <p>If I use Chrome's webdriver, the script does run much longer, but I always end up encountering this error:</p> <pre><code>disconnected: Unable to receive message from renderer (failed to check if window was closed: disconnected: not connected to DevTools) </code></pre> <p>As for the second error, I have seen this can happen due to having too many elements to load in a page, which is not the case, as the URLs where the driver stops are pages that in previous iterations have been loaded without any issue. I have also read this may happen because of not having interaction with the page for a long amount of time, but that is not the case either, as I am interacting with the page continuously (by typing on fields or clicking on elements).</p> <p>I have tried to catch the <code>WebDriverException</code> and restart the driver by doing the following:</p> <pre><code>print(&quot; ........ Restarting webdriver ...... 
&quot;) current_url = driver.current_url driver.quit() options = webdriver.ChromeOptions() options.add_argument('--headless') options.add_argument('--auto-show-cursor') options.add_argument('--no-sandbox') options.add_argument('--disable-dev-shm-usage') driver = webdriver.Chrome('chromedriver',options=options) driver.get(current_url) </code></pre> <p>Nevertheless, even if I do catch the exception, this does not prevent the script from failing and finishing the execution abruptly. Is there any way in which I can reconnect when something like this happens so that my script continues with the execution? I am using python, by the way.</p>
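Not from the question, but one pattern that helps here: wrap the whole job in a restart loop that rebuilds the driver after a failure, rather than trying to revive the dead session inside an exception handler. A driver-agnostic sketch; the `make_driver` and `task` callables are placeholders for the asker's Selenium setup (capturing `driver.current_url` and re-`get`-ing it would live inside them):

```python
import time

def run_with_restarts(make_driver, task, max_restarts=3, delay_secs=5):
    """Run task(driver); on failure, rebuild the driver and try again.
    make_driver() returns a fresh driver; task(driver) does the real work."""
    driver = make_driver()
    for attempt in range(max_restarts + 1):
        try:
            return task(driver)
        except Exception:
            if attempt == max_restarts:
                raise
            try:
                driver.quit()
            except Exception:
                pass  # the old session may already be gone
            time.sleep(delay_secs)
            driver = make_driver()
```

The key difference from the snippet above is that the restart happens at a level that owns the loop, so execution genuinely resumes instead of ending after the except block.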
<python><selenium-webdriver><selenium-chromedriver><undetected-chromedriver>
2023-05-18 07:18:49
0
513
pepito
76,278,577
6,394,055
404 Error: "The empty path didn’t match any of these"
<p>I am very new in Django &amp; trying to run my first project. But I am stuck with this error:</p> <blockquote> <p>The empty path didn’t match any of these.</p> </blockquote> <p>In my app urls I have the following code:</p> <pre><code>from django.urls import path from . import views urlpatterns=[ path('members/',views.members,name='members'), ] </code></pre> <p>And in my project urls I have the following code:</p> <pre><code>from django.contrib import admin from django.urls import include, path urlpatterns = [ path('', include('members.urls')), path('admin/', admin.site.urls), ] </code></pre> <p>I read a number of answers to this question, but the suggestions did not work for me. Seeking help to dig into the error.</p> <p>Here is the full error traceback:</p> <pre><code>Page not found (404) Request Method: GET Request URL: http://127.0.0.1:8000/ Using the URLconf defined in my_tennis_club.urls, Django tried these URL patterns, in this order: members/ [name='members'] admin/ The empty path didn’t match any of these. You’re seeing this error because you have DEBUG = True in your Django settings file. Change that to False, and Django will display a standard 404 page. </code></pre>
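The 404 in the traceback is expected: the project URLconf only maps <code>members/</code> and <code>admin/</code>, so the empty path <code>/</code> has no route. A minimal sketch of one fix, routing the empty path in the app URLconf as well (the view names are taken from the question; the route name <code>index</code> is an assumption):

```python
# members/urls.py -- sketch; reuses the members view for the root path
from django.urls import path
from . import views

urlpatterns = [
    path('', views.members, name='index'),     # now http://127.0.0.1:8000/ resolves
    path('members/', views.members, name='members'),
]
```

Alternatively, simply visiting http://127.0.0.1:8000/members/ works with the original URLconfs unchanged.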
<python><django><django-views><django-urls>
2023-05-18 07:18:47
2
709
ssajid.ru
76,278,575
3,257,218
Mock a dependency class or class method per test case, but not globally, in FastAPI
<p>I have my router method in FastAPI which depends on some JWT authentication mechanism. I want to mock my authentication to bypass the auth to test my API easily. To mock, I am using <a href="https://pypi.org/project/pytest-fastapi-deps/" rel="nofollow noreferrer">pytest-fastapi-deps 0.2.3</a> . Here is my router</p> <pre><code># object_manager.routers.object_get_save_router.py @router.post(&quot;/upload&quot;, status_code=201) async def upload_object(request: Request, response: Response, jwt_payload: dict = Depends(CheckPermission(permission_list=['can_upload_object']))): body = await request.form() res, objects = &lt;Some service method&gt; return JSONResponse(res) </code></pre> <p>Here is my CheckPermission class</p> <pre><code># auth.auth_handler.py oauth2_scheme = OAuth2PasswordBearer(tokenUrl=&quot;token&quot;) async def validate_jwt_token(token: str = Depends(oauth2_scheme)): try: &lt;Some valdation&gt; return payload except jwt.exceptions.DecodeError as error: raise error class CheckPermission: def __init__(self, permission_list: List[str]): self.permission_list = permission_list self.jwt_payload = {} async def __call__(self, jwt_payload: dict = Depends(validate_jwt_token)): &lt;Some additional valdation&gt; return jwt_payload </code></pre> <p>Here is my test class</p> <pre><code># tests.routers.test_base_file_upload_method.py from fastapi.testclient import TestClient from auth.auth_handler import validate_jwt_token, CheckPermission from main import app test_client = TestClient(app) async def validate_jwt_token_overriden(token: str = ''): return { 'userId': 'testuser' } class CheckPermissionOverriden: async def __call__(self, jwt_payload: dict): return { 'userId': 'testuser' } class TestFileUploadAPI: def test_no_input_error(self, fastapi_dep): with fastapi_dep(app).override({ validate_jwt_token: validate_jwt_token_overriden, CheckPermission.__call__: CheckPermissionOverriden.__call__ }): response = test_client.post(url='/upload', headers={ 
&quot;Authorization&quot;: &quot;Bearer some-token&quot; }) assert response.status_code == 400 </code></pre> <p>My <code>validate_jwt_token</code> method gets overridden but I can not override <code>CheckPermission.__call__</code> method or the whole <code>CheckPermission</code> class. I have tried <a href="https://stackoverflow.com/a/72204000/3257218">this answer</a> but could not make it work. How can I solve this problem?</p>
<python><pytest><fastapi>
2023-05-18 07:18:07
1
337
Arif
76,278,545
438,755
python loop a list of class objects
<p>I have class:</p> <pre><code>class Services: def __init__(self,a1,b1): self.a1 = a1 self.b1 = b1 def do_something(self): print('hello') obj1 = Services('aaa','bbb') obj2 = Services('aaa1','bbb1') objlist =['obj1','obj2'] for i in objlist: i.do_something() </code></pre> <p>Is is possible to loop objects and call class methods for multiple objects? Thanks</p>
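Yes, and the catch in the snippet above is that `objlist` holds the *strings* `'obj1'` and `'obj2'`, not the objects, so `i.do_something()` fails with `AttributeError`. A corrected sketch (with `do_something` returning a value so the result is checkable):

```python
class Services:
    def __init__(self, a1, b1):
        self.a1 = a1
        self.b1 = b1

    def do_something(self):
        return f"hello from {self.a1}"

obj1 = Services('aaa', 'bbb')
obj2 = Services('aaa1', 'bbb1')

# Store the objects themselves, not the names 'obj1'/'obj2'
objlist = [obj1, obj2]
results = [obj.do_something() for obj in objlist]
```

Any iterable of instances works the same way; the loop variable is just a reference to each object, so method calls on it behave exactly as on `obj1` or `obj2` directly.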
<python>
2023-05-18 07:14:20
1
745
miojamo
76,278,439
6,293,886
refined selection of log-level of modules in Hydra
<p>I'm trying to debug a <code>Hydra</code> application; setting <code>hydra.verbose=true</code> will set the logging level of all modules to <code>Debug</code>.<br /> Is there a way to get rid of the debug messages from some of the modules that produce lengthy and non-informative Debug logs?<br /> I can imagine a syntax like this: <code>hydra.verbose=~[NAME1,NAME2]</code></p>
<python><logging><fb-hydra>
2023-05-18 06:55:46
1
1,386
itamar kanter
76,278,377
3,878,377
How to add a new dataframe column with an increasing integer for every group in a dataframe
<p>Assume I have the following dataframe:</p> <pre><code>date group value 2022-11-01. 1 4 2022-11-02. 1 12 2022-11-03. 1 14 2022-11-04. 1 25 2021-11-01. 2 9 2021-11-02. 2 7 2019-10-01. 3 40 2022-10-02. 3 14 </code></pre> <p>I want to create a new column that is an increasing integer based on date for each group. For example, this is the desired output:</p> <pre><code> date group value new_col 2022-11-01. 1 4. 0 2022-11-02. 1 12. 1 2022-11-03. 1 14. 2 2022-11-04. 1 25. 3 2021-11-01. 2 9. 0 2021-11-02. 2 7. 1 2019-10-01. 3 40. 0 2022-10-02. 3 14. 1 </code></pre> <p>You see new_col is like <code>np.arange(0, len(df['date'])+1)</code>; however, I want to do it per group, and it seems no variation of groupby works for me.</p> <p>I have tried:</p> <pre><code>df.groupby('group')['date'].apply(lambda x: np.arange(0, len(x)+1)) </code></pre> <p>However this is not even close to what I want. I would appreciate it if someone could explain how to do this correctly.</p>
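One way (not from the question) is `groupby(...).cumcount()`, which produces exactly this 0, 1, 2, ... numbering within each group:

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["2022-11-01", "2022-11-02", "2022-11-03", "2022-11-04",
             "2021-11-01", "2021-11-02", "2019-10-01", "2022-10-02"],
    "group": [1, 1, 1, 1, 2, 2, 3, 3],
    "value": [4, 12, 14, 25, 9, 7, 40, 14],
})

# cumcount() numbers rows 0, 1, 2, ... within each group, in row order;
# sorting by date first guarantees the numbering follows the dates
df = df.sort_values(["group", "date"])
df["new_col"] = df.groupby("group").cumcount()
```

Unlike the `apply`/`np.arange` attempt, `cumcount()` returns a Series aligned to the original rows, so it can be assigned straight into a new column.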
<python><pandas><group-by>
2023-05-18 06:46:06
1
1,013
user59419
76,278,237
855,059
PyMongo alternatives for Google App Engine
<p>I wrote a website that accesses MongoDB Atlas using PyMongo, and now I am trying to move it onto Google App Engine (GAE).</p> <p>Since GAE allows only single-threaded processes and PyMongo is a multi-threaded implementation, I failed to move it naively.</p> <p>Are there any good alternative packages to PyMongo that use a single-threaded implementation?</p>
<python><mongodb><google-app-engine>
2023-05-18 06:22:46
1
724
Yuji
76,278,086
264,632
Is it good practice to add optional arguments to AWS lambda function?
<p>For example, for a Python AWS Lambda, the documented signature is:</p> <p><code>def handler(event, context):</code></p> <p>But the following is valid Python syntax:</p> <p><code>def handler(event, context, *args, **kwargs):</code></p> <p>I've tested this and the lambda does not crash. (BTW, default arguments are also valid.)</p> <p>This would allow, for example, decorating the handler function and then introducing extra arguments to the function. Think about a dependency injection scenario, for example.</p> <p>Is 'altering' the default signature considered bad practice in any way, and if so, in which way?</p>
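Functionally this works because AWS only ever calls `handler(event, context)`; any extra parameters have to be filled in by your own wrapper, never by the runtime. A toy sketch of the dependency-injection idea (the `inject` decorator and the `db` dependency are invented for illustration):

```python
import functools

def inject(**deps):
    """Decorator that supplies extra keyword dependencies to a handler
    declared as (event, context, **kwargs). Purely illustrative."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(event, context):
            # Lambda still invokes wrapper(event, context); the extras
            # are bound here, inside our own code, not by the runtime.
            return func(event, context, **deps)
        return wrapper
    return decorator

@inject(db=lambda: "fake-db-client")
def handler(event, context, *args, **kwargs):
    db = kwargs["db"]()  # resolve the injected dependency
    return {"event": event, "db": db}
```

Because the exported symbol keeps the documented two-argument shape, tooling that introspects the handler still sees the expected signature, which is the main compatibility concern with widening it.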
<python><amazon-web-services><aws-lambda><dependency-injection><optional-parameters>
2023-05-18 05:54:30
1
11,947
Javier Novoa C.
76,277,938
10,508,542
csv.reader escape not working as expected
<p>So I have this line in my CSV file</p> <pre><code>zeroconf.net,TXT,rockview\032entrance._http._tcp.ietf98,&quot;\&quot;path=/\&quot;&quot;,2023-05-14T01:41:55.954Z,&quot;&quot; </code></pre> <p>and I want to parse it to this</p> <pre><code>['zeroconf.net', 'TXT', 'rockview\032garage._http._tcp.ietf98', '&quot;path=/&quot;', '2023-05-14T01:41:55.954Z', ''] </code></pre> <p>With this csv.reader config, the <code>&quot;\&quot;path=/\&quot;&quot;</code> field was parsed well, but <code>rockview\032garage._http._tcp.ietf98</code> was converted to <code>rockview032garage._http._tcp.ietf98</code>, which is not what I want</p> <pre><code>csv.reader(f, doublequote=False, escapechar=&quot;\\&quot;, quotechar='&quot;', quoting=csv.QUOTE_MINIMAL) ['zeroconf.net', 'TXT', 'rockview032garage._http._tcp.ietf98', '&quot;path=/&quot;', '2023-05-14T01:41:55.954Z', ''] </code></pre>
<python><python-3.x><csv>
2023-05-18 05:20:57
0
301
Thong Nguyen
76,277,517
14,109,040
Group pandas dataframe and flag corresponding rows where all values from a list exist in a column
<p>I have a dataframe with the following structure:</p> <pre><code>Group1 Group2 Label G1 A1 AA G1 A1 BB G1 A1 CC G1 A2 AA G1 A2 CC G2 A1 BB G2 A1 DD G2 A2 AA G2 A2 CC G2 A2 DD G2 A2 BB l1 = ['AA','BB','CC','DD'] </code></pre> <p>I want to group the dataframe based on the <code>Group1</code> and <code>Group2</code> columns and check if the 'Label' column is equal(ordering may be different) to a list of values (<code>l1</code>), and flag those group.</p> <p>Expected output: The group <code>Group1=G2</code>, <code>Group2=A2</code> has all values from <code>l1</code> in the <code>Label</code> column. Therefore the rows corresponding to the group are flagged.</p> <pre><code>Group1 Group2 Label Flag G1 A1 AA 0 G1 A1 BB 0 G1 A1 CC 0 G1 A2 AA 0 G1 A2 CC 0 G2 A1 BB 0 G2 A1 DD 0 G2 A2 AA 1 G2 A2 CC 1 G2 A2 DD 1 G2 A2 BB 1 </code></pre> <p>I haven't been able to make much progress:</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'Group1': [ 'G1','G1', 'G1','G1','G1', 'G2','G2', 'G2','G2','G2','G2'], 'Group2': ['A1','A1','A1','A2','A2', 'A1','A1','A2','A2','A2','A2'], 'Label': ['AA','BB','CC','AA','CC','BB', 'DD','AA','CC','DD','BB']}) df.groupby(['Group1','Group2']) </code></pre> <p>A link to a solution or a function/method I can use to achieve this is appreciated</p>
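One way (not from the question) is a `groupby(...).transform(...)` that compares each group's label set against `set(l1)` and broadcasts the boolean back onto every row of the group:

```python
import pandas as pd

df = pd.DataFrame({
    'Group1': ['G1','G1','G1','G1','G1','G2','G2','G2','G2','G2','G2'],
    'Group2': ['A1','A1','A1','A2','A2','A1','A1','A2','A2','A2','A2'],
    'Label':  ['AA','BB','CC','AA','CC','BB','DD','AA','CC','DD','BB']})

l1 = ['AA', 'BB', 'CC', 'DD']
target = set(l1)  # set comparison ignores ordering, as required

# transform() broadcasts the per-group scalar result back to every row
flag = df.groupby(['Group1', 'Group2'])['Label'].transform(
    lambda s: set(s) == target)
df['Flag'] = flag.astype(int)
```

Using a set also means duplicates within a group would not matter; if groups must contain *exactly* the list (no extras, no repeats), compare `sorted(s) == sorted(l1)` instead.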
<python><pandas><dataframe>
2023-05-18 03:21:16
3
712
z star
76,277,473
1,942,868
Update database partially by patch, django rest framework
<p>I have my <code>CustomUser</code> model which extend <code>AbstractUser</code></p> <pre><code>class CustomUser(AbstractUser): detail = models.JSONField(default=dict) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) pass </code></pre> <p>then now I want to update only <code>detail</code> column.</p> <p>in javascript through patch request like this below.</p> <pre><code> var formData = new FormData(); var status = { fileObj:this.fileObj.id } console.log(&quot;syncdata to user table&quot;,status); console.log(&quot;syncdata for user:&quot;,request); formData.append(&quot;detail&quot;,JSON.stringify(status)); axios.patch( `/api/customusers/3/`,formData,{ } ).then(function (response) { console.log(&quot;syncdata customuser is saved:&quot;,response); }) .catch(function (response) { console.log(&quot;syncdata customuser failed:&quot;,response); }); </code></pre> <p>this through <code>&lt;QueryDict: {'detail': ['{&quot;fileObj&quot;:19}']}&gt;</code> as <code>request.data</code></p> <p>in views.py</p> <pre><code>class CustomUserViewSet(viewsets.ModelViewSet): serializer_class = s.CustomUserSerializer queryset = m.CustomUser.objects.all() def update(self,request,*args,**kwargs): print(&quot;custom user update&quot;) print(request.data) // &lt;QueryDict: {'detail': ['{&quot;fileObj&quot;:19}']}&gt; instance = self.get_object() serializer = self.get_serializer(instance,data = request.data) if serializer.is_valid(): self.perform_update(serializer) return Response(serializer.data) else: print(serializer.errors) // check error. </code></pre> <p>However <code>serializer</code> returns the error,</p> <pre><code>{'username': [ErrorDetail(string='This field is required.', code='required')]} </code></pre> <p>What I want to do is just update the <code>detail</code> field only, but it require the <code>username</code>.</p> <p>Where am I wrong?</p>
<javascript><python><django>
2023-05-18 03:06:50
1
12,599
whitebear
76,277,328
6,824,121
Can't install tesserocr using pip on buster docker
<p>Here is my Dockerfile:</p> <pre><code>FROM python:3.11-buster AS app WORKDIR /srv/app ENV PYTHONPATH /srv/app RUN apt update -y &amp;&amp; \ apt install -y libegl1 libgl1 libxkbcommon-x11-0 dbus tesseract-ocr liblept5 leptonica-progs libleptonica-dev libtesseract-dev RUN pip install --upgrade pip </code></pre> <p>When I connect to my container and I launch: <code>pip install tesserocr</code></p> <p>I got error:</p> <pre><code>$ pip install tesserocr Collecting tesserocr Downloading tesserocr-2.6.0.tar.gz (58 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.6/58.6 kB 929.0 kB/s eta 0:00:00 Preparing metadata (setup.py) ... done Building wheels for collected packages: tesserocr Building wheel for tesserocr (setup.py) ... error error: subprocess-exited-with-error Γ— python setup.py bdist_wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; [32 lines of output] Supporting tesseract v4.0.0 Tesseract major version 4 Configs from pkg-config: {'library_dirs': [], 'include_dirs': ['/usr/include', '/usr/include'], 'libraries': ['tesseract', 'lept'], 'compile_time_env': {'TESSERACT_MAJOR_VERSION': 4, 'TESSERACT_VERSION': 67108864}} /usr/local/lib/python3.11/site-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer. 
warnings.warn( running bdist_wheel running build running build_ext Detected compiler: unix building 'tesserocr' extension creating build creating build/temp.linux-x86_64-cpython-311 gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/usr/include -I/usr/include -I/usr/local/include/python3.11 -c tesserocr.cpp -o build/temp.linux-x86_64-cpython-311/tesserocr.o -std=c++11 -DUSE_STD_NAMESPACE tesserocr.cpp: In function β€˜tesseract::TessResultRenderer* __pyx_f_9tesserocr_13PyTessBaseAPI__get_renderer(__pyx_obj_9tesserocr_PyTessBaseAPI*, __pyx_t_9tesseract_cchar_t*)’: tesserocr.cpp:22546:14: error: β€˜TessAltoRenderer’ is not a member of β€˜tesseract’ tesseract::TessAltoRenderer *__pyx_t_6; ^~~~~~~~~~~~~~~~ tesserocr.cpp:22546:14: note: suggested alternative: β€˜TessOsdRenderer’ tesseract::TessAltoRenderer *__pyx_t_6; ^~~~~~~~~~~~~~~~ TessOsdRenderer tesserocr.cpp:22546:32: error: β€˜__pyx_t_6’ was not declared in this scope tesseract::TessAltoRenderer *__pyx_t_6; ^~~~~~~~~ tesserocr.cpp:22546:32: note: suggested alternative: β€˜__pyx_t_5’ tesseract::TessAltoRenderer *__pyx_t_6; ^~~~~~~~~ __pyx_t_5 tesserocr.cpp:22645:23: error: expected type-specifier __pyx_t_6 = new tesseract::TessAltoRenderer(__pyx_v_outputbase); ^~~~~~~~~ error: command '/usr/bin/gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for tesserocr Running setup.py clean for tesserocr Failed to build tesserocr ERROR: Could not build wheels for tesserocr, which is required to install pyproject.toml-based projects </code></pre> <p>How to solve this ?</p>
<python><docker><pip><tesseract><python-tesseract>
2023-05-18 02:18:31
1
1,736
Lenny4
76,277,286
1,631,414
How does Kafka store message offsets on a local computer?
<p>How does Kafka store messages on a local server or laptop?</p> <p>I'm new to Kafka and just playing around with the tech for now but I'm curious to the answer because I started by looking at the Kafka Quickstart page to just start a Kafka service on my laptop. I sent several test messages and from some code, saw my topic had 100 messages.</p> <p>Somewhere along the way, I read I can change where to store the logs in <code>config/server.properties</code> under a setting <code>log.dirs</code>. The default is set to <code>/tmp/kafka-logs</code> so again, just to experiment, I shut off my Kafka service, recursive copied (cp -r) the dir to <code>/home/username/kafka-logs</code> and restarted the service.</p> <p>To my surprise, the offset was reset to 0. Shut off the Kafka server, went back to reset the <code>log.dirs</code> setting to be <code>/tmp/kafka-logs</code>, etc. etc. and when I checked, the offset was 100.</p> <p>I had performed a recursive copy of the contents of the /tmp/kafka-logs dir so why did my laptop think the offset was 0? Why did I &quot;lose&quot; my messages? How does Zookeeper play into this and how do I check the contents of Zookeeper if it does?</p> <p>I got some code from <a href="https://kafka-python.readthedocs.io/en/master/usage.html" rel="nofollow noreferrer">https://kafka-python.readthedocs.io/en/master/usage.html</a></p> <p>This is how I knew the offset had changed when I recursively copied my initial directory.</p> <pre><code>from kafka import KafkaProducer from kafka.errors import KafkaError producer = KafkaProducer(bootstrap_servers=['localhost:9092']) future = producer.send('my-topic', b'raw_bytes') # Successful result returns assigned partition and offset print ('topic is after this') print (record_metadata.topic) print ('partition is after this') print (record_metadata.partition) print ('offset is after this') print (record_metadata.offset) </code></pre>
<python><apache-kafka><apache-zookeeper><kafka-python>
2023-05-18 02:07:22
1
6,100
Classified
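A side note on the log-directory move described in the question above: the broker reads the location from one line of config/server.properties. A sketch of that fragment (the path is the one used in the question, and the meta.properties detail should be checked against your Kafka version's docs):

```properties
# Comma-separated list of directories where the broker keeps partition logs.
# Each log dir also holds a meta.properties file (cluster/broker identity),
# so move the directory wholesale, with the broker stopped, before editing this.
log.dirs=/home/username/kafka-logs
```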
76,277,200
8,354,181
Better way to index through list of dictionaries
<p>I have this awful awful code where I am trying to parse through a list of dictionaries which contain a list of dictionaries.</p> <p>I need to output a list of dictionaries with the id and name if the id matches a certain id. Something like this:</p> <pre><code>[{&quot;id&quot;: 123}, {'name': 'bob'}]
</code></pre> <pre><code>list1 = [{&quot;id&quot;: 123, &quot;job&quot;: &quot;nurse&quot;, &quot;who&quot;:[{&quot;user&quot;: &quot;bob_user&quot;, &quot;name&quot;: &quot;bob&quot;}]},
         {&quot;id&quot;: 456, &quot;job&quot;: &quot;teacher&quot;, &quot;who&quot;:[{&quot;user&quot;: &quot;jim_user&quot;, &quot;name&quot;: &quot;jim&quot;}]}]
list2 = []
keys = [&quot;id&quot;, &quot;who&quot;]

for items in list1:
    if items['id'] == 123:
        for key in keys:
            if key == &quot;who&quot;:
                for values in items[key]:
                    for count in values:
                        if count == &quot;name&quot;:
                            list2.append({key : values[count]})
            else:
                list2.append({key : items[key]})
print(list2)
</code></pre> <p>The code currently works but looks like a monkey wrote it! How can I make this more efficient?</p>
<python>
2023-05-18 01:38:54
3
367
shxpark
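For the question above, the nested loops can collapse into a short helper; a sketch assuming the target id and the desired output shape are exactly as shown in the question (the function name `extract` is made up for illustration):

```python
list1 = [
    {"id": 123, "job": "nurse", "who": [{"user": "bob_user", "name": "bob"}]},
    {"id": 456, "job": "teacher", "who": [{"user": "jim_user", "name": "jim"}]},
]

def extract(records, target_id):
    # Collect {"id": ...} plus one {"name": ...} dict per "who" entry
    # for the record whose id matches.
    out = []
    for rec in records:
        if rec["id"] == target_id:
            out.append({"id": rec["id"]})
            # "who" is a list of dicts; pull each entry's name directly
            # instead of looping over every key.
            out.extend({"name": w["name"]} for w in rec["who"])
    return out

print(extract(list1, 123))  # [{'id': 123}, {'name': 'bob'}]
```

Indexing the keys you want directly (`rec["who"]`, `w["name"]`) removes the inner `for count in values` scan entirely.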
76,276,576
504,877
Python decorator parameter scope
<p>I've implemented a retry decorator with some parameters:</p> <pre><code>import functools
import time

def retry(max_tries: int = 3, delay_secs: float = 1, backoff: float = 1.5):
    print(&quot;level 1:&quot;, max_tries, delay_secs, backoff)

    def decorator(func):
        print(&quot;level 2:&quot;, max_tries, delay_secs, backoff)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal delay_secs  ## UnboundLocalError if this line is removed
            print(&quot;level 3:&quot;, max_tries, delay_secs, backoff)
            for attempt in range(max_tries):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    exc = e  # keep a reference; 'e' is cleared when the except block exits
                    print(f&quot;attempt {attempt} Exception: {e} Sleeping {delay_secs}&quot;)
                    time.sleep(delay_secs)
                    delay_secs *= backoff
            print(&quot;exceeded maximum tries&quot;)
            raise exc
        return wrapper
    return decorator

@retry(max_tries=4, delay_secs=1, backoff=1.25)
def something():
    raise Exception(&quot;foo&quot;)

something()
</code></pre> <p>If I remove this line, I get an UnboundLocalError:</p> <pre><code>nonlocal delay_secs
</code></pre> <p>But that only happens for delay_secs, NOT for max_tries or backoff! I tried reordering/renaming the params and still that delay parameter is the problematic one.</p> <p>Can you help me understand why that parameter is out of scope within the <strong>wrapper</strong> function but the other 2 parameters are just fine?</p> <p><strong>Python</strong>: 3.9.2</p> <p><strong>OS</strong>: Debian 11 Linux</p>
<python><variables><scope><decorator>
2023-05-17 22:28:36
1
765
Tyn
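The scoping behaviour the question above describes reproduces without any decorator machinery: Python decides a function's locals at compile time, so `delay_secs *= backoff` (an assignment) makes `delay_secs` local to the whole of `wrapper`, while `max_tries` and `backoff` are only ever read and so resolve through the closure. A minimal sketch (`outer`, `broken`, and `fixed` are made-up names for illustration):

```python
def outer():
    read_only = 1
    rebound = 1

    def broken():
        _ = read_only   # fine: only read, so resolved in the enclosing scope
        rebound *= 2    # this assignment makes 'rebound' local to broken(),
                        # so the read on this same line fails at runtime

    def fixed():
        nonlocal rebound  # re-bind 'rebound' to outer()'s variable
        rebound *= 2

    try:
        broken()
    except UnboundLocalError:
        print("broken() raised UnboundLocalError, as in the question")

    fixed()
    return rebound

print(outer())  # 2
```

This is why renaming or reordering the parameters never helps: the trigger is the augmented assignment, not the name or position.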