QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,956,697 | 4,736,261 | Efficiently append CSV files python | <p>I am trying to append > 1000 csv files (all with the same format) in Python. The files are anywhere between 1 KB and 30 GB each, about 200 GB in total. I want to combine these files into a single dataframe. Below is what I am doing, which is very, very slow:</p>
<pre><code>folder_path = 'Some path here'
csv_files = [file for file in os.listdir(folder_path) if file.endswith('.csv')]
combined_data = pd.DataFrame()
for e, csv_file in enumerate(csv_files):
    print(f'Processing {e+1} out of {len(csv_files)}: {csv_file}')
    combined_data = pd.concat([combined_data, pd.read_csv(os.path.join(folder_path, csv_file), dtype={'stringbvariablename': str})])
</code></pre>
<p>One of the variables is a string; everything else is numeric. RAM is not an issue since I am using a cluster.</p>
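<p>Growing <code>combined_data</code> with <code>pd.concat</code> inside the loop is quadratic: every iteration copies everything accumulated so far. A common fix, sketched here under the question's assumptions (the <code>stringbvariablename</code> column name is the asker's placeholder), is to collect the per-file frames in a list and concatenate once:</p>

```python
import os
import pandas as pd

def combine_csvs(folder_path):
    # Read one DataFrame per file, then concatenate a single time:
    # each file's rows are copied once instead of re-copying the
    # ever-growing accumulated result on every iteration.
    csv_files = [f for f in os.listdir(folder_path) if f.endswith('.csv')]
    frames = [pd.read_csv(os.path.join(folder_path, f),
                          dtype={'stringbvariablename': str})
              for f in csv_files]
    return pd.concat(frames, ignore_index=True)
```

<p>With ~1000 files this turns repeated O(n&#178;) copying into a single concatenation pass.</p>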
| <python><csv><append> | 2023-08-22 20:34:08 | 1 | 1,144 | phdstudent |
76,956,654 | 251,276 | Can this recursive function be turned into an iterative function with similar performance? | <p>I am writing a function in python using numba to label objects in a 2D or 3D array, meaning all orthogonally connected cells with the same value in the input array will be given a unique label from 1 to N in the output array, where N is the number of orthogonally connected groups. It is very similar to functions such as <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.label.html#scipy.ndimage.label" rel="nofollow noreferrer"><code>scipy.ndimage.label</code></a> and similar functions in libraries such as scikit-image, but those functions label all orthogonally connected non-zero groups of cells, so it would merge connected groups with different values, which I don't want. For example, given this input:</p>
<pre><code>[0 0 7 7 0 0
0 0 7 0 0 0
0 0 0 0 0 7
0 6 6 0 0 7
0 0 4 4 0 0]
</code></pre>
<p>The scipy function would return</p>
<pre><code>[0 0 1 1 0 0
0 0 1 0 0 0
0 0 0 0 0 3
0 2 2 0 0 3
0 0 2 2 0 0]
</code></pre>
<p>Notice that the 6s and 4s were merged into the label <code>2</code>. I want them to be labeled as separate groups, e.g.:</p>
<pre><code>[0 0 1 1 0 0
0 0 1 0 0 0
0 0 0 0 0 4
0 2 2 0 0 4
0 0 3 3 0 0]
</code></pre>
<p>I <a href="https://stackoverflow.com/questions/72974571/find-groups-of-adjacent-cells-in-a-numpy-ndarray-with-the-same-value">asked this about a year ago</a> and have been using the solution in the accepted answer, however I am working on optimizing the runtime of my code and am revisiting this problem.</p>
<p>For the data size I generally work with, the linked solution takes about 1m30s to run. I wrote the following recursive algorithm which takes about 30s running as regular python and with numba's JIT runs in 1-2s (side note, I hate that adjacent function, any tips to make it less messy while still numba-compatible would be appreciated):</p>
<pre><code>@numba.njit
def adjacent(idx, shape):
    coords = []
    if len(shape) > 2:
        if idx[0] < shape[0] - 1:
            coords.append((idx[0] + 1, idx[1], idx[2]))
        if idx[0] > 0:
            coords.append((idx[0] - 1, idx[1], idx[2]))
        if idx[1] < shape[1] - 1:
            coords.append((idx[0], idx[1] + 1, idx[2]))
        if idx[1] > 0:
            coords.append((idx[0], idx[1] - 1, idx[2]))
        if idx[2] < shape[2] - 1:
            coords.append((idx[0], idx[1], idx[2] + 1))
        if idx[2] > 0:
            coords.append((idx[0], idx[1], idx[2] - 1))
    else:
        if idx[0] < shape[0] - 1:
            coords.append((idx[0] + 1, idx[1]))
        if idx[0] > 0:
            coords.append((idx[0] - 1, idx[1]))
        if idx[1] < shape[1] - 1:
            coords.append((idx[0], idx[1] + 1))
        if idx[1] > 0:
            coords.append((idx[0], idx[1] - 1))
    return coords

@numba.njit
def apply_label(labels, decoded_image, current_label, idx):
    labels[idx] = current_label
    for aidx in adjacent(idx, labels.shape):
        if decoded_image[aidx] == decoded_image[idx] and labels[aidx] == 0:
            apply_label(labels, decoded_image, current_label, aidx)

@numba.njit
def label_image(decoded_image):
    labels = np.zeros_like(decoded_image, dtype=np.uint32)
    current_label = 0
    for idx in zip(*np.where(decoded_image >= 0)):
        if labels[idx] == 0:
            current_label += 1
            apply_label(labels, decoded_image, current_label, idx)
    return labels, current_label
</code></pre>
<p>This worked for some data, but crashed on other data and I found the issue is that when there are very large objects to label, the recursion limit is reached. I tried to rewrite <code>label_image</code> to not use recursion, but it now takes ~10s with numba. Still a huge improvement from where I started, but it seems like it should be possible to get the same performance as the recursive version. Here is my iterative version:</p>
<pre><code>@numba.njit
def label_image(decoded_image):
    labels = np.zeros_like(decoded_image, dtype=np.uint32)
    current_label = 0
    for idx in zip(*np.where(decoded_image >= 0)):
        if labels[idx] == 0:
            current_label += 1
            idxs = [idx]
            while idxs:
                cidx = idxs.pop()
                if labels[cidx] == 0:
                    labels[cidx] = current_label
                    for aidx in adjacent(cidx, labels.shape):
                        if labels[aidx] == 0 and decoded_image[aidx] == decoded_image[idx]:
                            idxs.append(aidx)
    return labels, current_label
</code></pre>
<p>Is there a way I can improve this?</p>
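<p>For reference, the stack-based flood fill above can be exercised without numba on the question's 2-D example. This plain-Python sketch seeds only from non-zero cells (slightly different from the <code>&gt;= 0</code> condition in the question, so the background stays labelled 0):</p>

```python
import numpy as np

def label_iterative(image):
    # Stack-based flood fill: pop a cell, label it, push same-valued
    # orthogonal neighbours. Seeds come only from non-zero cells so the
    # background keeps label 0.
    labels = np.zeros_like(image, dtype=np.uint32)
    current_label = 0
    for idx in zip(*np.where(image > 0)):
        if labels[idx] == 0:
            current_label += 1
            stack = [idx]
            while stack:
                r, c = stack.pop()
                if labels[r, c] == 0 and image[r, c] == image[idx]:
                    labels[r, c] = current_label
                    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                        if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]:
                            stack.append((nr, nc))
    return labels, current_label
```

<p>On the 5&#215;6 example this yields four groups, with the 6s and 4s kept separate as desired.</p>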
| <python><numpy><recursion><numba> | 2023-08-22 20:23:05 | 2 | 10,920 | Colin |
76,956,625 | 1,895,611 | opencv ellipse start/end angles are 'int'? | <p>I need to draw an arc (part of a circle) but I need the angles to be precise (float). But when I run the code below, I find 2 'bugs':</p>
<p>First, when the start/end angle is less than .5 it draws the 'arc' not at 0 degrees but at the CENTER OF THE ELLIPSE. Then, as it reaches 1 degree, 2 degrees, 3 degrees, it draws an extra 'chunk' of the arc. Why am I not able to draw, for example, startAngle <code>0</code> and endAngle <code>2.7</code>? Why will it only draw integer angles? <a href="https://docs.opencv.org/4.x/d6/d6e/group__imgproc__draw.html#ga28b2267d35786f5f890ca167236cbc69" rel="nofollow noreferrer">The parameters in C++ seem to be 'double'</a> and I'm using floats.</p>
<pre><code>import numpy as np
import cv2
image = np.zeros((720,1280,3), np.uint8)
height, width = image.shape[0:2]
# Ellipse parameters
radius = 640
center = (0, height //2)
axes = (radius, radius)
angle = 0.
startAngle = .0001
endAngle = -.01
thickness = 200
for i in range(400):
    cv2.ellipse(image, center, axes, angle, startAngle, endAngle*float(i), (255,0,0), thickness)
    cv2.imshow('image', image)
    if cv2.waitKey(20) & 0xFF == 27:
        break
cv2.destroyAllWindows()
</code></pre>
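<p>OpenCV's drawing routines appear to approximate ellipse arcs at whole-degree resolution, which would match the integer-only chunks observed. One possible workaround (a sketch, not the library's documented API for this) is to sample the arc yourself at float angles and draw the points with <code>cv2.polylines</code>; only the NumPy point generation is executed here, the drawing call is left as a comment:</p>

```python
import numpy as np

def arc_points(center, radius, start_deg, end_deg, n=100):
    # Sample the arc at n float angles; cv2.ellipse would snap these to
    # whole degrees, this keeps the fractional precision.
    angles = np.deg2rad(np.linspace(start_deg, end_deg, n))
    xs = center[0] + radius * np.cos(angles)
    ys = center[1] + radius * np.sin(angles)
    return np.column_stack([xs, ys]).round().astype(np.int32)

# Same geometry as the question: center (0, 360), radius 640.
pts = arc_points((0, 360), 640, 0.0, 2.7)
# cv2.polylines(image, [pts], isClosed=False, color=(255, 0, 0), thickness=200)
```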
| <python><opencv><drawing> | 2023-08-22 20:17:30 | 1 | 8,256 | AwokeKnowing |
76,956,534 | 687,827 | Make center widget in pyqt5 layout larger | <p>I have the following simple app setup. Presently the top half of the screen has 3 widgets representing where I will fill in my data later. The yellow section needs to be larger than the red and purple sections. How do I achieve this and how can I maintain the scale when resizing the window? I would like the red and purple sections to take up about 30% each and the yellow portion to take up the center 40%.</p>
<p><a href="https://i.sstatic.net/xa1Hg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xa1Hg.png" alt="My App Layout" /></a></p>
<pre><code>from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
import sys
class Color(QWidget):
    def __init__(self, color):
        super(Color, self).__init__()
        self.setAutoFillBackground(True)
        self.color = color
        palette = self.palette()
        palette.setColor(QPalette.Window, QColor(self.color))
        self.setPalette(palette)

class MainWindow(QMainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()
        self.setWindowTitle("My App")
        self.setMinimumWidth(600)
        self.setMinimumHeight(400)

        main_div = QVBoxLayout()
        main_div.setContentsMargins(0, 0, 0, 0)
        main_div.setSpacing(0)

        layout2 = QHBoxLayout()
        layout2.addWidget(Color('red'))
        layout2.addWidget(Color('yellow'))
        layout2.addWidget(Color('purple'))

        main_div.addLayout(layout2)
        main_div.addWidget(Color('blue'))

        self.widget = QWidget()
        self.widget.setLayout(main_div)
        self.setCentralWidget(self.widget)

    def resizeEvent(self, event):
        pass
app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()
</code></pre>
| <python><pyqt5> | 2023-08-22 19:59:04 | 1 | 484 | Scott Rowley |
76,956,139 | 107,294 | Why did Python `argparse` stop documenting nargs=REMAINDER? | <p>In the documentation for Python's <code>argparse</code> module, the <a href="https://docs.python.org/3.8/library/argparse.html#nargs" rel="noreferrer">3.8 documentation</a> states that <code>nargs</code> may be set to:</p>
<blockquote>
<p><code>argparse.REMAINDER</code>. All the remaining command-line arguments are gathered into a list. This is commonly useful for command line utilities that dispatch to other command line utilities.</p>
</blockquote>
<p>This has been removed from the <a href="https://docs.python.org/3.9/library/argparse.html#nargs" rel="noreferrer">3.9 documentation</a>, though there is no mention of it being deprecated, nor any good reason to do so that I can see given that it provides useful functionality not apparently provided by other means.¹ Its existence is still mentioned in passing <a href="https://docs.python.org/3.9/library/argparse.html#intermixed-parsing" rel="noreferrer">elsewhere in the page</a>:</p>
<blockquote>
<p>These [intermixed] parsers do not support all the argparse features, and will raise exceptions if unsupported features are used. In particular, subparsers, argparse.REMAINDER, and mutually exclusive groups that include both optionals and positionals are not supported.</p>
</blockquote>
<p>But even this is removed in the 3.10 documentation. Yet the feature persists even in Python 3.11.4, the latest released version.</p>
<p>So why has it been removed from the documentation?</p>
<hr />
<p>I ask this because the answer seems likely to bear directly on several other related questions I have about programming argument parsers in Python. (The particular situations where I was, am, and may continue to use <code>nargs=REMAINDER</code> are large enough that I believe that they should be posted as separate questions, if necessary.) The considerations include:</p>
<ul>
<li>Is the API broken in some way for my purposes, and does this imply that my code that uses it also broken?</li>
<li>Should I look for a replacement for this API?</li>
<li>Should I continue to use this API in new code? After all, it hasn't been deprecated.</li>
<li>Should I be converting existing code that uses this API to use something else?</li>
</ul>
<p>(Note too that the answers to questions like this will not only depend on the particular context in which <code>nargs=REMAINDER</code> is used, but may also be considered matters of opinion, which is another reason to leave them beyond the scope of this question.)</p>
<hr />
<p>¹ <code>nargs=REMAINDER</code> is different from <code>nargs='*'</code>: using <code>REMAINDER</code> means that argparse will not attempt to parse options (starting with <code>-</code>) from that point on. Thus, with <code>REMAINDER</code>, <code>mycmd -q run bash -c exit</code> will not attempt to parse the <code>-c</code> as an option to <code>mycmd</code>, but instead the line will be treated as <code>mycmd -q run -- bash -c exit</code> is with <code>'*'</code>.</p>
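<p>The footnote's distinction is easy to verify with a minimal parser (option and argument names here just mirror the <code>mycmd</code> example):</p>

```python
import argparse

parser = argparse.ArgumentParser(prog='mycmd')
parser.add_argument('-q', action='store_true')
parser.add_argument('command')
parser.add_argument('rest', nargs=argparse.REMAINDER)

# With REMAINDER, everything after the first positional is left untouched,
# so '-c' is NOT parsed as an option of mycmd; with nargs='*' it would be
# rejected as an unrecognized option unless preceded by '--'.
args = parser.parse_args(['-q', 'run', 'bash', '-c', 'exit'])
```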
| <python><argparse> | 2023-08-22 18:51:03 | 2 | 27,842 | cjs |
76,956,097 | 8,337,391 | Pandas Aggregate concatenate other columns | <p>This is the data set that I am using:</p>
<pre><code>import pandas as pd

data = [['2608 W SYLVESTER ST', 'PASCO', 'WA', 4304],
        ['61 W MESQUITE BLVD', 'MESQUITE', 'NV', 115000],
        ['287 NW 3RD AVE', 'ESTACADA', 'OR', 1000],
        ['287 NW 3RD AVE', 'ESTACADA', 'OR', 2000],
        ['287 NW 3RD AVE', 'ESTACADA', 'OR', 7000]]
df = pd.DataFrame(data, columns=['site_address', 'site_city', 'site_state', 'price'])
</code></pre>
<p>The display of the dataframe:</p>
<pre><code> site_address site_city site_state price
0 2608 W SYLVESTER ST PASCO WA 4304
1 61 W MESQUITE BLVD MESQUITE NV 115000
2 287 NW 3RD AVE ESTACADA OR 1000
3 287 NW 3RD AVE ESTACADA OR 2000
4 287 NW 3RD AVE ESTACADA OR 7000
</code></pre>
<p>Need output as the following JSON structure:</p>
<pre><code>{
  "sites": [
    {
      "location": "2608 W SYLVESTER ST, PASCO WA",
      "value": 4304
    },
    {
      "location": "61 W MESQUITE BLVD, MESQUITE NV",
      "value": 115000
    },
    {
      "location": "287 NW 3RD AVE, ESTACADA OR",
      "value": 10000
    }
  ]
}
</code></pre>
<p>Trying to use pandas, groupby & agg functionality:</p>
<pre><code>df_grp = df.groupby('site_address', as_index=False).agg(**{
'location': ** NEED HELP HERE **,
'value': ('price', 'sum')
}).get(['location', 'value']).reset_index(drop=True)
result = json.loads(df_grp.to_json(orient='records'))
print(result)
</code></pre>
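<p>One possible way to fill the gap (a sketch, not the only option): build the <code>location</code> string before grouping, so the named aggregation only needs <code>'first'</code> and <code>'sum'</code>:</p>

```python
import pandas as pd

df = pd.DataFrame(
    [['2608 W SYLVESTER ST', 'PASCO', 'WA', 4304],
     ['61 W MESQUITE BLVD', 'MESQUITE', 'NV', 115000],
     ['287 NW 3RD AVE', 'ESTACADA', 'OR', 1000],
     ['287 NW 3RD AVE', 'ESTACADA', 'OR', 2000],
     ['287 NW 3RD AVE', 'ESTACADA', 'OR', 7000]],
    columns=['site_address', 'site_city', 'site_state', 'price'])

# The location string is constant within each address group, so 'first'
# suffices once it has been built as a regular column.
df['location'] = df['site_address'] + ', ' + df['site_city'] + ' ' + df['site_state']
df_grp = df.groupby('site_address', as_index=False).agg(
    location=('location', 'first'),
    value=('price', 'sum'))
result = {'sites': df_grp[['location', 'value']].to_dict(orient='records')}
```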
| <python><pandas><aggregate> | 2023-08-22 18:43:09 | 4 | 433 | iPaul |
76,956,089 | 17,914,734 | Selenium with Webdrivermanager : AttributeError: 'str' object has no attribute 'capabilities' | <p>I have the following code</p>
<pre><code>from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
driver = webdriver.Chrome(ChromeDriverManager().install())
</code></pre>
<p>And get the following error :</p>
<p>During handling of the above exception, another exception occurred:</p>
<pre><code>Traceback (most recent call last):
File ~\untitled2.py:11 in <module>
driver = webdriver.Chrome(ChromeDriverManager().install())
File ~\anaconda3\envs\TFE\lib\site-packages\selenium\webdriver\chrome\webdriver.py:45 in __init__
super().__init__(
File ~\anaconda3\envs\TFE\lib\site-packages\selenium\webdriver\chromium\webdriver.py:51 in __init__
self.service.path = DriverFinder.get_path(self.service, options)
File ~\anaconda3\envs\TFE\lib\site-packages\selenium\webdriver\common\driver_finder.py:40 in get_path
msg = f"Unable to obtain driver for {options.capabilities['browserName']} using Selenium Manager."
AttributeError: 'str' object has no attribute 'capabilities'
</code></pre>
<p>Does anyone know where it could be from?</p>
<p>Thanks</p>
| <python><selenium-webdriver> | 2023-08-22 18:42:13 | 2 | 309 | zorals |
76,955,999 | 1,767,754 | Opencv remap returns squeezed image rather than slightly offset | <p>I have a "distort.png" image in which the "R,G" channels hold motion vectors (the original position offset by 10 pixels). I'm using <code>remap</code> to transform the image into the new location with the following snippet, but I get a weirdly squeezed image instead of a translated one. Any thoughts?</p>
<pre><code>import cv2
import numpy as np

distort = cv2.imread(r"D:\distort.png").astype(np.float32)
src = cv2.imread(r"D:\taj.png")
map_x = distort[:, :, 1].copy()
map_y = distort[:, :, 2].copy()
remapped_image = cv2.remap(src, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imshow('Original Image', remapped_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>However the image seems to be distorted:</p>
<p><a href="https://i.sstatic.net/h7slK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h7slK.png" alt="enter image description here" /></a></p>
<p>and supposed to look like this (translated by 10pixels in x and y):</p>
<p><a href="https://i.sstatic.net/JoWor.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JoWor.png" alt="enter image description here" /></a></p>
<p>Here is the source image "taj.png"</p>
<p><a href="https://i.sstatic.net/Ffy4f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ffy4f.png" alt="enter image description here" /></a></p>
<p>And the source distort (motion vector) file</p>
<p><a href="https://i.sstatic.net/rHfo1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rHfo1.png" alt="enter image description here" /></a></p>
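<p>Note that <code>cv2.remap</code> expects <code>map_x</code>/<code>map_y</code> to contain absolute source coordinates, not offsets, so feeding raw motion-vector channels (or 0-255 colour values) would produce exactly this kind of squeeze. Below is a NumPy-only sketch of turning a constant offset into absolute maps; the actual <code>cv2.remap</code> call is left as a comment, and the 10-pixel offset is scaled down for the toy grid:</p>

```python
import numpy as np

h, w = 5, 5
src = np.arange(h * w, dtype=np.float32).reshape(h, w)

# Absolute sampling grid: dst[y, x] = src[map_y[y, x], map_x[y, x]].
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                     np.arange(h, dtype=np.float32))
offset = 2  # stand-in for the 10-pixel motion vectors
map_x = xs + offset
map_y = ys + offset
# remapped = cv2.remap(src, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Nearest-neighbour emulation of the same lookup, just for illustration:
yi = np.clip(map_y.astype(int), 0, h - 1)
xi = np.clip(map_x.astype(int), 0, w - 1)
remapped = src[yi, xi]
```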
| <python><numpy><opencv><scipy><opticalflow> | 2023-08-22 18:29:44 | 0 | 25,504 | user1767754 |
76,955,832 | 11,427,765 | Creating dict groupby from two dataframes | <p>I want to achieve the following when Back is equal to 1.</p>
<p>The logic is the following:
8582 is the main node (level 1) and has 4 sub nodes (8584, 8593, 8585, 8586), and each of those sub nodes has sub nodes of its own.
The idea is to group by into a JSON structure or tree, to be able to visualize the structure and the sum of amounts. I have the following code, but it's not exactly what I want to get:</p>
<pre><code>import pandas as pd
from collections import defaultdict
import json
data = {
'Nod': [8582, 8586, 8585, 8593, 8584, 8590, 8583, 8597, 8587, 8674, 8589, 8588],
'Levels': [1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3],
'Parents': [None, 8582, 8582, 8582, 8582, 8586, 8593, 8584, 8585, 8586, 8586, 8585],
'Names': ['Xhelp', 'Hd', 'Ejjd', 'Mmmm', 'Awe', 'Urj', 'Bdh', 'Ddj', 'Lsk', 'Bws', 'Jsk', 'Pqq'],
'ID': [90, 89, 92, 85, 37, 28, 19, 34, 11, 83, 433, 37]
}
df1 = pd.DataFrame(data)
data2 = {
'ID': [10, 90, 89, 92, 85, 37, 28, 19, 34, 11, 83, 433, 433, 19],
'Amounts': [1288, 998, 7338, 9337, 784, 3884, 399, 8559, 5146, 9348, 111, 8445, 40, 90],
'Back': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2]
}
df2 = pd.DataFrame(data2)
def nested_dict():
    return defaultdict(nested_dict)

tree = nested_dict()

for _, row in df1.iterrows():
    node = {
        "Levels": row["Levels"],
        "Names": row["Names"],
        "ID": row["ID"],
        "Amounts": 0,
    }
    parent_id = row["Parents"]
    if parent_id is None:
        tree[row["Nod"]] = node
    else:
        parent_node = tree[parent_id]
        parent_node["children"][row["Nod"]] = node

for _, row in df2.iterrows():
    node_id = row["ID"]
    amount = row["Amounts"]
    if node_id in tree:
        tree[node_id]["Amounts"] += amount

json_tree = json.dumps(tree, indent=2)
print(json_tree)
</code></pre>
<p>New data :</p>
<pre><code>import pandas as pd
from io import StringIO
data = """Nod Levels Parents Amounts
8616 1 NaN 0
8636 5 8648 0
8637 5 8635 0
8631 4 8630 0
8605 5 8609 8888882
8606 5 8609 339494
8609 4 8615 0
8613 6 8620 0
8614 6 8636 0
8615 3 8642 0
8618 6 8620 49832
8619 6 8636 11122
8620 5 8648 0
8621 4 8615 0
8622 5 8621 237837
8623 5 8621 0
8624 4 8615 0
8625 5 8624 87328732
8634 4 8627 0
8639 5 8634 0
8648 4 8627 0
8630 3 8642 0
8632 4 8615 0
8627 3 8642 0
8629 5 8609 -8378383
8633 5 8632 0
8635 4 8627 0
8638 5 8634 -93198318
8638 5 8634 32323
8642 2 8616 0"""
# Create a DataFrame from the provided data
df1 = pd.read_csv(StringIO(data), sep=' ')
</code></pre>
<p>Thanks</p>
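<p>For the grouping step itself, a pure-Python sketch (hypothetical helper names, tiny stand-in data) that attaches children under their parents and sums amounts bottom-up may be easier to reason about than mixing <code>defaultdict</code>s with plain dicts:</p>

```python
def build_tree(rows, amounts):
    # rows: list of (nod, level, parent, name, id_); amounts: {id_: amount}.
    nodes = {nod: {'Levels': lvl, 'Names': name, 'ID': id_,
                   'Amounts': amounts.get(id_, 0), 'children': {}}
             for nod, lvl, parent, name, id_ in rows}
    roots = {}
    for nod, lvl, parent, name, id_ in rows:
        if parent is None:
            roots[nod] = nodes[nod]
        else:
            nodes[parent]['children'][nod] = nodes[nod]
    return roots

def total(node):
    # Bottom-up sum: the node's own amount plus all of its descendants.
    return node['Amounts'] + sum(total(c) for c in node['children'].values())

# Stand-in slice of the question's data.
rows = [(8582, 1, None, 'Xhelp', 90), (8586, 2, 8582, 'Hd', 89),
        (8590, 3, 8586, 'Urj', 28)]
amounts = {90: 998, 89: 7338, 28: 399}
tree = build_tree(rows, amounts)
```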
| <python><pandas><tree><pivot-table> | 2023-08-22 18:01:56 | 2 | 387 | Gogo78 |
76,955,631 | 5,029,589 | Python code runs fine in pycharm but gives module error while running in terminal | <p>I have my Flask project structure as below, which I have deployed to EC2.</p>
<pre><code>myproject
--src
-- cache
--- cache.py
-- storeservice
--- upload.py
-- esService
--- es.py
-- ui
--- html
--- static
-- main
--- controller.py
</code></pre>
<p>In Controller I have the import</p>
<pre><code>from storeservice.upload import Upload
</code></pre>
<p>This works fine in PyCharm, but when I run it from the terminal it gives an error.</p>
<p>The command I use is <code>python3 controller.py</code></p>
<pre><code>from storeservice.upload import Upload
ModuleNotFoundError: No module named 'storeservice'
</code></pre>
<p>Is there a way to execute it without making any code change ?</p>
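<p>PyCharm adds the project's source roots (here <code>src</code>) to <code>sys.path</code>; a bare <code>python3 controller.py</code> does not, so sibling packages like <code>storeservice</code> are invisible. Without touching the code you can run <code>python3 -m main.controller</code> from inside <code>src</code> (if the folders are packages), or set <code>PYTHONPATH=/path/to/myproject/src</code>. The mechanism can be sketched with a throwaway package (hypothetical, built in a temp dir):</p>

```python
import os
import sys
import tempfile

# Simulate an "src" folder containing a storeservice package, then make it
# importable the same way PYTHONPATH (or PyCharm's source roots) would.
src = tempfile.mkdtemp()
pkg = os.path.join(src, 'storeservice')
os.makedirs(pkg)
with open(os.path.join(pkg, '__init__.py'), 'w') as fh:
    fh.write('')
with open(os.path.join(pkg, 'upload.py'), 'w') as fh:
    fh.write('class Upload:\n    pass\n')

sys.path.insert(0, src)  # equivalent to PYTHONPATH=src
from storeservice.upload import Upload
```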
| <python><pycharm> | 2023-08-22 17:30:52 | 1 | 2,174 | arpit joshi |
76,955,621 | 22,407,544 | pywintypes.com_error: (-2147221005, 'Invalid class string', None, None) | <p>Why do I keep getting
<code>pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)</code>?</p>
<p>Here is my code. I am trying to use docx2pdf to convert docx files to pdf.</p>
<pre><code>import win32com.client
#from win32 import _win32sysloader
#from win32 import win32api
#from docx2pdf import convert
def translator():
    print('reconverting')
    convert("C:\\Users\\John\\Downloads\\btab083 (1).docx", "C:\\Users\\John\\Downloads\\btab083 (1).pdf")
    print('success')

translator()
</code></pre>
<p>I'm using Python 3.11.4 on Windows 10.</p>
<p>Here is the entire error message:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\win32com\client\dynamic.py", line 84, in _GetGoodDispatch
IDispatch = pythoncom.connect(IDispatch)
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Scripts\translatorTest.py", line 55, in <module>
translator()
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Scripts\translatorTest.py", line 53, in translator
convert("C:\\Users\\John\\Downloads\\btab083 (1).docx")
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\docx2pdf\__init__.py", line 106, in convert
return windows(paths, keep_active)
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\docx2pdf\__init__.py", line 19, in windows
word = win32com.client.Dispatch("Word.Application")
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\win32com\client\__init__.py", line 118, in Dispatch
dispatch, userName = dynamic._GetGoodDispatchAndUserName(dispatch, userName, clsctx)
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\win32com\client\dynamic.py", line 104, in _GetGoodDispatchAndUserName
return (_GetGoodDispatch(IDispatch, clsctx), userName)
File "C:\Users\John\AppData\Local\Programs\Python\Python311\Lib\site-packages\win32com\client\dynamic.py", line 86, in _GetGoodDispatch
IDispatch = pythoncom.CoCreateInstance(
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
</code></pre>
<p>As you can see, I tried different import statements (commented out above) as solutions I found online. No matter what order I tried them in, I kept getting No module named 'pywintypes' or No module named '_win32sysloader'.</p>
<p>I installed pywin32 using <code>pip install pywin32</code>. I also used <code>python pywin32_postinstall.py -install</code>, <code>pip install win32</code>, <code>python Scripts/pywin32_postinstall.py -install</code>, and <code>python -m pip install --upgrade pywin32</code>.</p>
<p>I also tried pip install pywintypes.</p>
| <python><pdf><ms-word><docx><pywin32> | 2023-08-22 17:28:26 | 0 | 359 | tthheemmaannii |
76,955,579 | 4,466,962 | Discrepancy between opencv warpPerspective and kornia warp_perspective | <p>I am moving some code from cv2 to kornia to use in torch, and seeing a discrepancy between the different warp perspective functions.</p>
<p>As an example, I am using the following image:</p>
<p><a href="https://i.sstatic.net/CY6su.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CY6su.png" alt="Initial image" /></a></p>
<p>Using the following code, made sure to keep the flags and params similar:</p>
<pre><code>import cv2
import torch
import kornia
import numpy as np
from matplotlib import pyplot as plt
image = cv2.imread("grid.png", cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image,(200,200))
image = image.astype(np.float32) / 255
angle = 45
center = (image.shape[1] / 2, image.shape[0] / 2)
rot_mat_cv2 = cv2.getRotationMatrix2D(center, angle, 1)
rot_mat_cv2 = np.vstack([rot_mat_cv2, [0, 0, 1]])
warped_cv2 = cv2.warpPerspective(image, rot_mat_cv2, (image.shape[1], image.shape[0]), flags= cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT, borderValue=0)
image_tensor = torch.tensor(image, dtype=torch.float32).unsqueeze(0).unsqueeze(0)
rot_mat_tensor = torch.tensor(rot_mat_cv2, dtype=torch.float32).unsqueeze(0)
warped_kornia = kornia.geometry.transform.warp_perspective(image_tensor,
rot_mat_tensor,
(image.shape[1], image.shape[0]),
mode="bilinear",
align_corners=True,
padding_mode='zeros',
)
warped_kornia_np = (warped_kornia.squeeze(0).squeeze(0).numpy())
difference = np.abs(warped_cv2 - warped_kornia_np)
</code></pre>
<p>This is a plot of the result:</p>
<p><a href="https://i.sstatic.net/BjPbW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BjPbW.png" alt="result" /></a></p>
<p>I'm not sure exactly what the algorithm behind the <code>align_corners</code> parameter is, but when setting it to <code>False</code> the diff is larger:</p>
<p><a href="https://i.sstatic.net/GsxKt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GsxKt.png" alt="align corners false" /></a></p>
<p>I am not sure if this is the expected result, and if it is not, is there anything I should take into account?</p>
| <python><numpy><opencv><torch><kornia> | 2023-08-22 17:22:01 | 0 | 2,567 | Dinari |
76,955,427 | 6,546,694 | lists vs python arrays. specifically the insert method | <p>I am trying to solve the <a href="https://leetcode.com/problems/count-of-smaller-numbers-after-self/description/" rel="nofollow noreferrer">following problem</a> in leetcode.</p>
<blockquote>
<p>Given an integer array nums, return an integer array counts where counts[i] is the number of smaller elements to the right of nums[i]</p>
</blockquote>
<p>The following solution rocks (1300 ms):</p>
<pre><code>import bisect
import array
class Solution:
    def countSmaller(self, nums: List[int]) -> List[int]:
        ns = array.array('h')
        # ns = list()
        ans = [0] * len(nums)
        for i in range(len(nums) - 1, -1, -1):
            x = nums[i]
            l = bisect.bisect_left(ns, x)
            ans[i] = l
            ns.insert(l, x)
        return ans
</code></pre>
<p>The following solution sucks (5000ms):</p>
<pre><code>import bisect
import array
class Solution:
    def countSmaller(self, nums: List[int]) -> List[int]:
        # ns = array.array('h')
        ns = list()
        ans = [0] * len(nums)
        for i in range(len(nums) - 1, -1, -1):
            x = nums[i]
            l = bisect.bisect_left(ns, x)
            ans[i] = l
            ns.insert(l, x)
        return ans
</code></pre>
<p>The only difference is the usage of lists (sucks) vs python arrays (rocks)</p>
<p>If the operation I am doing repeatedly is <code>.insert</code>, then I don't think any of the benefits arising from the fact that Python arrays are static would come into play. The array would have to be reallocated every time I increase its size by inserting a value (correct?). Lists, at least, over-allocate dynamically and should at the very least match the performance of arrays.</p>
<p>questions:</p>
<ol>
<li>The algorithm is O(n^2) in both cases, right? I am running an insert for every element in <code>nums</code></li>
<li>Is the performance gain only because of fast indexing in arrays because they have dtype specified? Or, is there any other reason?</li>
<li>Please also help me correct any wrong understanding I might have asserted in the question</li>
</ol>
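<p>On question 1: both versions perform one <code>insert</code> per element, and inserting into a contiguous sequence is O(n), so the overall algorithm is O(n&#178;) for the list and the array alike. The <code>array.array</code> version likely wins on constants because shifting packed 2-byte integers touches far less memory than shifting a list's 8-byte pointers. The bisect-based counting itself can be sanity-checked against brute force:</p>

```python
import array
import bisect

def count_smaller(nums):
    # For each position (scanning right to left), bisect_left into the
    # sorted suffix gives the count of strictly smaller elements to the right.
    ns = array.array('h')
    ans = [0] * len(nums)
    for i in range(len(nums) - 1, -1, -1):
        pos = bisect.bisect_left(ns, nums[i])
        ans[i] = pos
        ns.insert(pos, nums[i])
    return ans

def count_smaller_brute(nums):
    # O(n^2) reference implementation for verification.
    return [sum(x < nums[i] for x in nums[i + 1:]) for i in range(len(nums))]
```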
| <python><arrays><list><algorithm><time-complexity> | 2023-08-22 16:59:41 | 1 | 5,871 | figs_and_nuts |
76,955,312 | 8,587,712 | Drop duplicates in MultiIndexed pandas DataFrame by second index | <p>I have a MultiIndexed DataFrame:</p>
<pre><code>import numpy as np
import pandas as pd

arrays = [
np.array(["foo", "foo", "foo", "bar", "bar", "bar", "bar", "bar"]),
np.array(["one", "two", "three", "four", "one", "two", "five", "six"]),
]
df = pd.DataFrame(np.random.randn(8, 2), index=arrays)
</code></pre>
<p>which gives</p>
<pre><code> 0 1
foo one 0.320046 -0.423325
two -1.621356 -0.487854
three 0.364116 0.385614
bar four 0.436584 -0.042120
one 0.404953 0.410876
two 1.695568 -1.475434
five -0.696704 1.226981
six 1.640631 0.518874
</code></pre>
<p>I would like to drop the rows where the second index is duplicated, and keep the last instance (i.e., drop the rows with <code>("foo", "one")</code> and <code>("foo", "two")</code> in this example). I have attempted using <code>.drop_duplicates()</code>, but this will only drop rows with all elements identical. I've attempted to impose a <code>subset</code> argument in <code>.drop_duplicates()</code>, but have not been successful.</p>
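<p>A sketch of one way to do this: build a boolean mask with <code>Index.duplicated</code> on the second level (<code>keep='last'</code> retains the final occurrence) and index with its negation:</p>

```python
import numpy as np
import pandas as pd

arrays = [
    np.array(['foo', 'foo', 'foo', 'bar', 'bar', 'bar', 'bar', 'bar']),
    np.array(['one', 'two', 'three', 'four', 'one', 'two', 'five', 'six']),
]
df = pd.DataFrame(np.random.randn(8, 2), index=arrays)

# Mask rows whose second-level label has already appeared, keeping the
# last occurrence of each label.
mask = df.index.get_level_values(1).duplicated(keep='last')
deduped = df[~mask]
```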
| <python><pandas><dataframe> | 2023-08-22 16:41:17 | 1 | 313 | Nikko Cleri |
76,955,249 | 524,307 | Loading a regex string with escape characters from YAML | <p>I have seen some answers like this <a href="https://stackoverflow.com/questions/10771163/python-interpreting-a-regex-from-a-yaml-config-file">Python interpreting a Regex</a> but I am unable to use it to load a regex from a yaml file like this:</p>
<p>reg_ents.yaml</p>
<pre><code>regex_sample: '\s*HEADER\d+\n'
</code></pre>
<p>When I attempt to load it from the code below:</p>
<pre><code>import re
import yaml

with open("reg_ents.yaml", 'r', encoding="utf-8") as yaml_instr:
    new_yaml = yaml.safe_load(yaml_instr)

nregex = re.compile(new_yaml['regex_sample'], flags=re.I)
print(nregex.findall("\nHEADER1\n"))
</code></pre>
<p>it returns an empty set <code>[]</code>. If you inspect <code>new_yaml['regex_sample']</code> you get this string:</p>
<pre><code>\\s\*HEADER\\d\+\\n
</code></pre>
<p>I tried also using the yaml safe_loader modification:</p>
<pre><code>import re
import yaml

yaml.SafeLoader.add_constructor(u'tag:yaml.org,2002:python/regexp', lambda l, n: re.compile(l.construct_scalar(n), flags=re.I))

with open("reg_ents.yaml", 'r', encoding="utf-8") as yaml_instr:
    new_yaml = yaml.safe_load(yaml_instr)

nregex = new_yaml['regex_sample']
print(nregex.findall("\nHEADER1\n"))
</code></pre>
<p>but the output is still <code>[]</code> and the re.Pattern value still contains the double slashes. Any ideas? I have confirmed that if you use a standard approach (embedding within a .py file), it will work:</p>
<pre><code>import re
test_regex = re.compile(r"\s*HEADER\d+\n",flags=re.I)
test_regex.findall("\nHEADER1\n")
</code></pre>
<p>This will output <code>['\nHEADER1\n']</code></p>
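<p>Note that doubled backslashes are just Python's <code>repr</code> of single literal backslashes, and such a string compiles to the same pattern as the raw literal; if <code>findall</code> still returns <code>[]</code>, the loaded value most likely differs from the intended pattern (e.g. the extra <code>\*</code> and <code>\+</code> escapes shown above), not because of the doubling itself. A quick equivalence check, no YAML involved:</p>

```python
import re

# What yaml.safe_load returns for the YAML scalar '\s*HEADER\d+\n':
# single literal backslashes, written here with doubled backslashes.
yaml_style = '\\s*HEADER\\d+\\n'
raw_style = r'\s*HEADER\d+\n'

# Both literals denote the exact same characters, and compile identically.
pattern = re.compile(yaml_style, flags=re.I)
matches = pattern.findall('\nHEADER1\n')
```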
| <python><regex><yaml> | 2023-08-22 16:32:47 | 1 | 804 | Jon |
76,955,185 | 1,145,744 | TensorFlow saved_model.pb cannot be loaded correctly | <p>I have downloaded a <a href="https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_m/classification/2?tf-hub-format=compressed" rel="nofollow noreferrer">model</a> from Hub. The archive contains TensorFlow 2 saved model and if decompressed shows a file named <code>saved_model.pb</code> and a <code>variables</code> folder that inside has 2 files <code>variables.data-00000-of-00001</code> and <code>variables.index</code>.</p>
<p>It seems this model cannot be passed to <code>tf.keras.models.load_model()</code>. I have tried</p>
<pre><code>my_model=tf.saved_model.load('extraction_path')
</code></pre>
<p>but the resulting <code>my_model</code> object doesn't seem to be a normal model object ready to be used with something like <code>predictions = my_model.predict(image)</code>, and the <a href="https://www.tensorflow.org/api_docs/python/tf/saved_model/load" rel="nofollow noreferrer">documentation section</a> about the function isn't clear.</p>
<p>What is the proper procedure that should be used with this model format?</p>
| <python><tensorflow><tensorflow2.0><tensorflow-hub> | 2023-08-22 16:20:16 | 1 | 12,435 | AndreaF |
76,955,024 | 2,811,074 | change the variables in the __init__ method of inherited classes | <p>I am trying to change the default value of <code>seed</code>, which is set inside the <code>__init__</code> method, after I have built an instance of a class that inherits from other classes. I have the <a href="https://github.com/deepmind/lab2d/blob/7fd7348da7e025a01e38878aa1bf17a3ce5836e0/dmlab2d/__init__.py" rel="nofollow noreferrer">"Environment" class</a> inside "<code>lab2d/dmlab2d/__init__.py</code>". When I print this attribute dictionary of my class with <code>print(f"dmlab2d {env._env.__class__.__bases__[0].__bases__[0].__dict__}")</code>, the output is</p>
<pre><code>dmlab2d {'__module__': 'dmlab2d', '__doc__': 'Environment class for DeepMind Lab2D.\n\n This environment extends the `dm_env` interface with additional methods.\n For details, see https://github.com/deepmind/dm_env\n ', '__init__': <function Environment.__init__ at 0x78d207cfd5a0>, 'reset': <function Environment.reset at 0x78d207cfd510>, '_read_action': <function Environment._read_action at 0x78d207cfd480>, 'step': <function Environment.step at 0x78d207cfd120>, 'observation': <function Environment.observation at 0x78d207cfd360>, 'observation_spec': <function Environment.observation_spec at 0x78d207cfd2d0>, '_make_observation_spec': <function Environment._make_observation_spec at 0x78d207cfd240>, '_make_action_spec': <function Environment._make_action_spec at 0x78d207cfd090>, 'action_spec': <function Environment.action_spec at 0x78d207cfc310>, 'events': <function Environment.events at 0x78d207cfc280>, 'list_property': <function Environment.list_property at 0x78d207cfc1f0>, 'write_property': <function Environment.write_property at 0x78d207cfc0d0>, 'read_property': <function Environment.read_property at 0x78d207cfc040>, '__abstractmethods__': frozenset(), '_abc_impl': <_abc._abc_data object at 0x78d207b66400>}
</code></pre>
<p>Can someone suggest how I can change the <code>seed</code> value of the <code>Environment</code> class in this entangled situation of inherited class attributes?</p>
<p>Here is my code</p>
<pre><code>import dmlab2d
import gymnasium as gym
from matplotlib import pyplot as plt
from gymnasium import spaces
from meltingpot import substrate
from ml_collections import config_dict
import numpy as np
from ray.rllib.env import multi_agent_env


class MeltingPotEnv(multi_agent_env.MultiAgentEnv):
  """An adapter between the Melting Pot substrates and RLLib MultiAgentEnv."""

  def __init__(self, env: dmlab2d.Environment, max_cycles: int = MAX_CYCLES):
    """Initializes the instance.

    Args:
      env: dmlab2d environment to wrap. Will be closed when this wrapper closes.
    """
    self._env = env
    self._num_players = len(self._env.observation_spec())
    self._ordered_agent_ids = [
        PLAYER_STR_FORMAT.format(index=index)
        for index in range(self._num_players)
    ]
    # RLLib requires environments to have the following member variables:
    # observation_space, action_space, and _agent_ids
    self._agent_ids = set(self._ordered_agent_ids)
    # RLLib expects a dictionary of agent_id to observation or action,
    # Melting Pot uses a tuple, so we convert
    self.observation_space = self._convert_spaces_tuple_to_dict(
        spec_to_space(self._env.observation_spec()),
        remove_world_observations=True)
    self.action_space = self._convert_spaces_tuple_to_dict(
        spec_to_space(self._env.action_spec()))
    self.max_cycles = max_cycles
    self.num_cycles = 0
    super().__init__()

  def reset(self, *args, **kwargs):
    """See base class."""
    timestep = self._env.reset()
    self.num_cycles = 0
    return timestep_to_observations(timestep), {}

  def step(self, action_dict):
    """See base class."""
    actions = [action_dict[agent_id] for agent_id in self._ordered_agent_ids]
    timestep = self._env.step(actions)
    rewards = {
        agent_id: timestep.reward[index]
        for index, agent_id in enumerate(self._ordered_agent_ids)
    }
    self.num_cycles += 1
    termination = timestep.last()
    done = {'__all__': termination}
    truncation = self.num_cycles >= self.max_cycles
    truncations = {agent_id: truncation for agent_id in self._ordered_agent_ids}
    info = {}
    observations = timestep_to_observations(timestep)
    return observations, rewards, done, truncations, info

  def get_dmlab2d_env(self):
    """Returns the underlying DM Lab2D environment."""
    return self._env

  # Metadata is required by the gym `Env` class that we are extending, to show
  # which modes the `render` method supports.
  metadata = {'render.modes': ['rgb_array']}

  def render(self) -> np.ndarray:
    """Render the environment.

    This allows you to set `record_env` in your training config, to record
    videos of gameplay.

    Returns:
      np.ndarray: This returns a numpy.ndarray with shape (x, y, 3),
        representing RGB values for an x-by-y pixel image, suitable for turning
        into a video.
    """
    observation = self._env.observation()
    world_rgb = observation[0]['WORLD.RGB']
    # RGB mode is used for recording videos
    return world_rgb

  def _convert_spaces_tuple_to_dict(
      self,
      input_tuple: spaces.Tuple,
      remove_world_observations: bool = False) -> spaces.Dict:
    """Returns spaces tuple converted to a dictionary.

    Args:
      input_tuple: tuple to convert.
      remove_world_observations: If True will remove non-player observations.
    """
    return spaces.Dict({
        agent_id: (remove_world_observations_from_space(input_tuple[i])
                   if remove_world_observations else input_tuple[i])
        for i, agent_id in enumerate(self._ordered_agent_ids)
    })


env = substrate.build(env_config['substrate'], roles=env_config['roles'])
env = MeltingPotEnv(env)
</code></pre>
| <python><class><inheritance><multiple-inheritance> | 2023-08-22 15:55:38 | 1 | 4,378 | Dalek |
76,954,976 | 1,575,381 | Torch MPS on Intel Mac: get name of device? | <p>In Python, on a machine that has an NVIDIA GPU, one can confirm the device name of the GPU using:</p>
<p><code>torch.cuda.get_device_name()</code></p>
<p>Per the docs: <a href="https://pytorch.org/docs/stable/generated/torch.cuda.get_device_name.html#torch.cuda.get_device_name" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.cuda.get_device_name.html#torch.cuda.get_device_name</a></p>
<p>How do I confirm the mps device name for an Intel-based Mac, which has an AMD GPU? I don't see any mention of that in the docs.</p>
| <python><macos><machine-learning><pytorch><metal-performance-shaders> | 2023-08-22 15:48:25 | 1 | 431 | someguyinafloppyhat |
76,954,970 | 1,927,834 | Python pyserial threading feature behavior | <p>I am trying to get the python pyserial module threading feature to start a parallel thread that reads serial console messages and returns them. While that thread runs, I would like to do other things in the main thread. I created the following code snippet to try this out.</p>
<pre><code>import serial
import serial.threaded
import time
import threading

PORT = 'loop://'

def monitor_bmc_console():
    class TestLines(serial.threaded.LineReader):
        def __init__(self):
            super(TestLines, self).__init__()
            self.received_lines = []

        def handle_line(self, data):
            self.received_lines.append(data)

    ser = serial.serial_for_url(PORT, baudrate=115200, timeout=1)
    with serial.threaded.ReaderThread(ser, TestLines) as protocol:
        while True:
            protocol.write_line('hello')
            time.sleep(2)
            print(protocol.received_lines)

x = threading.Thread(target = monitor_bmc_console(), daemon=True)
x.start()

while True:
    print('inside main thread')
    time.sleep(1)
</code></pre>
<p>In the output, I expect to see a stream of <code>inside main thread</code> printed in parallel with a continuous stream of <code>hello</code>. But this is what I see:</p>
<pre><code>['hello']
['hello', 'hello']
['hello', 'hello', 'hello']
['hello', 'hello', 'hello', 'hello']
</code></pre>
<p>The message <code>inside main thread</code> DOES NOT get printed.
Can someone help me determine what I am doing wrong?</p>
| <python><multithreading><pyserial> | 2023-08-22 15:46:49 | 0 | 361 | utpal |
76,954,805 | 10,522,495 | Multi-Threading Implementation | <p>I am trying to implement multi-threading for data scraping.</p>
<p>DATA_TABLE is an Oracle SQL table containing serial_number, Link_URL and Link_Status columns.</p>
<p>Serial_number: To track the number of entries in the table.</p>
<p>Link_URL : URL to be scraped.</p>
<p>Link_Status: 0 = not scraped & 1 = scraped.</p>
<p><strong>Code</strong>:</p>
<pre><code>import os
import boto3
import cx_Oracle
import logging
import configparser
from trafilatura import fetch_url, extract
from concurrent.futures import ThreadPoolExecutor

# Load configuration from the config.ini file
config = configparser.ConfigParser()
config.read('config.ini')

oracle_username = config.get('database', 'username')
oracle_password = config.get('database', 'password')
oracle_host = config.get('database', 'host')
oracle_port = config.get('database', 'port')
oracle_service_name = config.get('database', 'service_name')

# Create a connection pool to ensure each thread has its own separate connection
pool = cx_Oracle.SessionPool(
    user=oracle_username,
    password=oracle_password,
    dsn=f"{oracle_host}:{oracle_port}/{oracle_service_name}",
    min=2,
    max=4,
    increment=1,
    threaded=True
)

s3_client = boto3.client('s3')
bucket_name = config.get('S3', 's3_bucket_name')
folder_path = config.get('S3', 's3_folder_path_test')

def fetch_url_and_process(link_url, serial_number):
    try:
        connection = pool.acquire()
        with connection.cursor() as cursor:
            # Scrape the text from the URL using trafilatura
            downloaded = fetch_url(link_url)
            extracted_text = extract(downloaded)
            print('******************************************')
            print(extracted_text)
            print('******************************************')
            if extracted_text is None:
                extracted_text = 'delete'

            # Save the extracted text to a text file
            local_file_name = f'{serial_number}.txt'
            print('--------------------------->', local_file_name)
            with open(local_file_name, 'w', encoding='utf-8') as file:
                file.write(extracted_text)

            # Upload the text file to S3 bucket
            s3_client.upload_file(local_file_name, bucket_name, f'{folder_path}{local_file_name}')
            print("Text file uploaded to S3")

            # Delete the local text file
            os.remove(local_file_name)

            # Update Link_Status to 1
            update_query = "UPDATE DATA_TABLE SET Link_Status = 1 WHERE Link_URL = :url"
            cursor.execute(update_query, {'url': link_url})
            connection.commit()
    except cx_Oracle.Error as e:
        logging.error(f"Error while inserting data into Oracle: {e}")
    finally:
        pool.release(connection)

if __name__ == "__main__":
    try:
        connection = pool.acquire()
        with connection.cursor() as cursor:
            # Fetch all Link_URLs where Link_Status = 0
            query = "SELECT Link_URL, Serial_Number FROM DATA_TABLE WHERE Link_Status = 0"
            cursor.execute(query)
            rows = cursor.fetchall()
            if rows:
                links_to_process = [(row[0], row[1]) for row in rows]
                with ThreadPoolExecutor(max_workers=4) as executor:
                    executor.map(fetch_url_and_process, links_to_process)
    except cx_Oracle.Error as e:
        logging.error(f"Error while fetching data from Oracle: {e}")
    finally:
        pool.release(connection)
</code></pre>
<p>Issue: When the above is run, the console just prints '<strong>Killed</strong>' after 10-15 minutes and execution is terminated. Nothing else is printed to the console or the log file.
Is this the correct way to implement multi-threading for the above task?</p>
| <python><multithreading><threadpool><python-multithreading><threadpoolexecutor> | 2023-08-22 15:25:55 | 0 | 401 | Vinay Sharma |
76,954,761 | 840,736 | Python filter noise data out of dataset | <p>Full disclosure, I am extremely new to python (am a C++/C# dev).</p>
<p>I have a list of tuples, each containing one of 2 message IDs, a time, and a value. The list is sorted by time ascending. In the real data, the time is an integer representing seconds since the epoch (I can change the representation if that adds clarity). I'd like to pare down the list so that, for every Message ID 2 entry, I keep the most recent Message ID 1 entry that appears earlier in the list.</p>
<p>I have the data converted using pandas, if that aids in the ability to filter down.</p>
<pre class="lang-py prettyprint-override"><code>data_set = []
data_set.append((1, '12:14', 2.0))
data_set.append((1, '12:15', 42.0))
data_set.append((1, '12:16', 4.0))
data_set.append((2, '12:16', 2.0))
data_set.append((2, '12:17', 2.0))
data_set.append((2, '12:17', 3.0))
data_set.append((1, '12:17', 3.0))
data_set.append((2, '12:18', 18.0))
data_set.append((1, '12:19', 6.0))
</code></pre>
<p>After paring down, my list would look like this:</p>
<pre class="lang-none prettyprint-override"><code>1, 12:16, 4.0
2, 12:16, 2.0
2, 12:17, 2.0
2, 12:17, 3.0
1, 12:17, 3.0
2, 12:18, 18.0
</code></pre>
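<p>To pin down the rule I'm describing, here is a naive pure-Python loop that produces exactly the pared-down list above — this is the logic I'd like to express idiomatically in pandas:</p>

```python
def pare_down(rows):
    """Keep every message-2 row, preceded by the most recent
    message-1 row that appeared earlier in the list."""
    result = []
    pending_id1 = None  # latest message-1 row not yet emitted
    for row in rows:
        if row[0] == 1:
            pending_id1 = row
        else:
            if pending_id1 is not None:
                result.append(pending_id1)
                pending_id1 = None
            result.append(row)
    return result
```

<p>Calling <code>pare_down(data_set)</code> on the list above returns the six rows shown.</p>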
| <python><pandas> | 2023-08-22 15:19:37 | 1 | 2,215 | Jason |
76,954,709 | 8,176,763 | polars equivalent of postgres row_number over partition by | <p>I would like to implement in polars the same functionality as the row_number window function present in many DB flavors. I have tried this and it seems to work, but it looks like overkill. Is there a better way to implement it?</p>
<pre><code>import polars as pl

df = pl.DataFrame(
    {"foo": [2, 1, 3, 7, 6, 32], "tech": ['python', 'python', 'java', 'rust', 'c', 'rust'], "service": ["a", "a", "a", "b", "c", "b"]}
)

q = (
    df
    .with_columns(pl.lit(1).alias('ones'))
    .select([pl.all().exclude('ones'),
             pl.col('ones').cumsum().over(['tech', 'service']).flatten().alias('rn')])
)

print(q)
shape: (6, 4)
┌─────┬────────┬─────────┬─────┐
│ foo ┆ tech ┆ service ┆ rn │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ str ┆ str ┆ i32 │
╞═════╪════════╪═════════╪═════╡
│ 2 ┆ python ┆ a ┆ 1 │
│ 1 ┆ python ┆ a ┆ 2 │
│ 3 ┆ java ┆ a ┆ 1 │
│ 7 ┆ rust ┆ b ┆ 1 │
│ 6 ┆ c ┆ c ┆ 1 │
│ 32 ┆ rust ┆ b ┆ 2 │
└─────┴────────┴─────────┴─────┘
</code></pre>
| <python><python-polars> | 2023-08-22 15:13:10 | 0 | 2,459 | moth |
76,954,628 | 4,594,924 | Adding custom widget inside QTableWidget using pyside6 | <p>I am trying to add my custom widgets into a QTableWidget. They do get added, but they are not displayed properly. Below is my code snippet. I tried removing the layout spacing in the widget code, but there seems to be some space between the cell boundary and the QWidget that I could not reduce to zero. Please shed some light on how to fix this.</p>
<p>Main UI</p>
<pre><code>custom_widget = PositionAddWidgets1()
self.tableWidget_position_sell.setCellWidget(row, 6, custom_widget)
</code></pre>
<p>Custom Widget class</p>
<pre><code>from PySide6.QtWidgets import QWidget, QLineEdit, QPushButton, QHBoxLayout, QSizePolicy, QVBoxLayout

class PositionAddWidgets1(QWidget):
    def __init__(self, parent=None):
        super().__init__(parent)

        self.line_edit = QLineEdit(self)
        self.button1 = QPushButton("Add", self)
        self.button2 = QPushButton("1800", self)
        self.button3 = QPushButton("900", self)
        self.button4 = QPushButton("200", self)

        layout = QHBoxLayout()
        layout.addWidget(self.line_edit)
        layout.addWidget(self.button1)
        layout.addWidget(self.button2)
        layout.addWidget(self.button3)
        layout.addWidget(self.button4)
        self.setLayout(layout)
</code></pre>
<p>The result I got is
<a href="https://i.sstatic.net/Pec3K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pec3K.png" alt="enter image description here" /></a></p>
<p>The expected result is like below
<a href="https://i.sstatic.net/YXcL1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YXcL1.png" alt="enter image description here" /></a></p>
| <python><qt><pyside6> | 2023-08-22 15:04:18 | 1 | 798 | Simbu |
76,954,545 | 17,835,656 | How can I create resizable widgets in PyQt5? | <p>I am working on a project using PyQt5 and I want to make sure that the widgets are resizable by the user.</p>
<p>What do I mean?</p>
<p>This picture explains the meaning:</p>
<p><a href="https://i.sstatic.net/38oV0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/38oV0.png" alt="enter image description here" /></a></p>
<p>I want the user to be able to resize each widget, for example make the left one bigger than the right one, using the mouse.</p>
<p>I am sorry that I cannot write code that gets close to the result, because I am not close at all, but here is some code to try things on:</p>
<pre class="lang-py prettyprint-override"><code>
from PyQt5 import QtWidgets
import sys
app = QtWidgets.QApplication(sys.argv)
window = QtWidgets.QWidget()
window.resize(800,500)
window_layout = QtWidgets.QGridLayout()
first_widget = QtWidgets.QGroupBox()
first_widget.setStyleSheet("background-color:rgb(0,150,0)")
second_widget = QtWidgets.QGroupBox()
second_widget.setStyleSheet("background-color:rgb(150,0,0)")
window_layout.addWidget(first_widget,0,0)
window_layout.addWidget(second_widget,0,1)
window.setLayout(window_layout)
window.show()
app.exec()
</code></pre>
<p>thanks</p>
| <python><pyqt><pyqt5><window-resize> | 2023-08-22 14:55:47 | 1 | 721 | Mohammed almalki |
76,954,493 | 3,247,006 | How to get the contents of the cache properly in Django? | <p>I set 4 cache values with <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#local-memory-caching" rel="nofollow noreferrer">LocMemCache</a> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>from django.core.cache import cache
cache.set("first_name", "John")
cache.set("last_name", "Smith", version=2)
cache.set("age", 36, version=3)
cache.set("gender", "Male")
</code></pre>
<p>Then, I tried to see the contents of the cache with <code>cache._cache</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>from django.core.cache import cache
print(cache._cache) # Here
</code></pre>
<p>But, the key's values contain <code>b'\x80\x05\x95\x08\...</code> as shown below:</p>
<pre class="lang-none prettyprint-override"><code>OrderedDict([(':1:gender', b'\x80\x05\x95\x08\x00\x00\x00\x00\x00\x00\x00\x8c\x04Male\x94.'), (':3:age', b'\x80\x05K$.'), (':2:last_name', b'\x80\x05\x95\t\x00\x00\x00\x00\x00\x00\x00\x8c\x05Smith\x94.'), (':1:first_name', b'\x80\x05\x95\x08\x00\x00\x00\x00\x00\x00\x00\x8c\x04John\x94.')])
</code></pre>
<p>Actually, the result which I expected is as shown below:</p>
<pre class="lang-none prettyprint-override"><code>OrderedDict([(':1:gender', 'Male'), (':3:age', 36), (':2:last_name', 'Smith'), (':1:first_name', 'John')])
</code></pre>
<p>And, I tried to see the contents of the cache with <code>cache._cache.items()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>from django.core.cache import cache
print(cache._cache.items()) # Here
</code></pre>
<p>But again, the key's values contain <code>b'\x80\x05\x95\x08\...</code> as shown below:</p>
<pre class="lang-none prettyprint-override"><code>odict_items([(':1:gender', b'\x80\x05\x95\x08\x00\x00\x00\x00\x00\x00\x00\x8c\x04Male\x94.'), (':3:age', b'\x80\x05K$.'), (':2:last_name', b'\x80\x05\x95\t\x00\x00\x00\x00\x00\x00\x00\x8c\x05Smith\x94.'), (':1:first_name', b'\x80\x05\x95\x08\x00\x00\x00\x00\x00\x00\x00\x8c\x04John\x94.')])
</code></pre>
<p>Actually, the result which I expected is as shown below:</p>
<pre class="lang-none prettyprint-override"><code>odict_items([(':1:gender', 'Male'), (':3:age', 36), (':2:last_name', 'Smith'), (':1:first_name', 'John')])
</code></pre>
<p>Lastly, I tried to see the contents of the cache with <code>locmem._caches</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>from django.core.cache.backends import locmem
print(locmem._caches) # Here
</code></pre>
<p>But again, the key's values contain <code>b'\x80\x05\x95\x08\...</code> as shown below:</p>
<pre class="lang-none prettyprint-override"><code>{'': OrderedDict([(':1:gender', b'\x80\x05\x95\x08\x00\x00\x00\x00\x00\x00\x00\x8c\x04Male\x94.'), (':3:age', b'\x80\x05K$.'), (':2:last_name', b'\x80\x05\x95\t\x00\x00\x00\x00\x00\x00\x00\x8c\x05Smith\x94.'), (':1:first_name', b'\x80\x05\x95\x08\x00\x00\x00\x00\x00\x00\x00\x8c\x04John\x94.')])}
</code></pre>
<p>Actually, the result which I expected is as shown below:</p>
<pre class="lang-none prettyprint-override"><code>{'': OrderedDict([(':1:gender', 'Male'), (':3:age', 36), (':2:last_name', 'Smith'), (':1:first_name', 'John')])}
</code></pre>
<p>My questions:</p>
<ol>
<li>How can I get the contents of the cache properly in Django?</li>
<li>What is <code>b'\x80\x05\x95\x08\...</code>?</li>
</ol>
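<p>For what it's worth, the bytes look like pickle payloads; on a copy of the stored dict, unpickling each value does give readable output, though I don't know whether poking at the internals like this is the proper approach:</p>

```python
import pickle
from collections import OrderedDict

# a stand-in for cache._cache: LocMemCache appears to store pickled values
stored = OrderedDict([
    (":1:gender", pickle.dumps("Male")),
    (":3:age", pickle.dumps(36)),
])

# unpickle each value to get the human-readable contents back
readable = OrderedDict((k, pickle.loads(v)) for k, v in stored.items())
print(readable)
# OrderedDict([(':1:gender', 'Male'), (':3:age', 36)])
```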
| <python><django><caching><django-cache><django-caching> | 2023-08-22 14:49:45 | 1 | 42,516 | Super Kai - Kazuya Ito |
76,954,427 | 2,954,547 | Fix spacing for right-side y-axis labels with 270 rotation | <p>When I move the y-axis label to the right side of the axes and rotate it by 270˚, the label is not properly spaced relative to the axis border:</p>
<pre class="lang-py prettyprint-override"><code>ax.yaxis.set_label_position("right")
ax.set_ylabel("latitude", rotation=270)
</code></pre>
<p>I can fix it visually by setting <code>ax.set_ylabel("latitude", rotation=270, labelpad=12.0)</code>, but my experience with Matplotlib is that, in recent versions at least, there's usually a better way to do things than manually setting spacings.</p>
<p>Is there some missing option to request correct spacing? Without setting <code>rotation=</code>, the spacing is perfect, so maybe it has to do with re-centering the text relative to the axis of rotation.</p>
<p><a href="https://i.sstatic.net/UEV5Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UEV5Y.png" alt="Bad spacing between y-axis label and border" /></a></p>
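<p>For reference, a minimal script that reproduces the spacing issue (using the Agg backend so it runs headless; nothing else is styled):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the repro
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])

# the two lines from the question
ax.yaxis.set_label_position("right")
ax.set_ylabel("latitude", rotation=270)

fig.savefig("label_spacing_repro.png")
```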
| <python><matplotlib> | 2023-08-22 14:42:36 | 1 | 14,083 | shadowtalker |
76,954,394 | 11,956,484 | Replace all values in df1 with nan if they are less than or equal to the value in the corresponding column of df2 | <p>I would like to replace all values in df1 with nan if they are less than or equal to the value in the corresponding column of df2.</p>
<p>For example</p>
<p>df1:</p>
<pre><code> | A | B | C |
-------------------------------
0| -0.1 | -0.5 | 3.0 |
1| -0.002 | 0.0037 | -0.06 |
2| 0.25 | 0.0048 | 0.06 |
3| 0.27 | -0.01 | 0.0055 |
</code></pre>
<p>df2:</p>
<pre><code> | A | B | C |
-----------------------------------
LessThan| 0.26 | 0.0037 | 0.0055 |
</code></pre>
<p>output:</p>
<pre><code> | A | B | C |
-------------------------
0| nan | nan | 3.0 |
1| nan | nan | nan |
2| nan | 0.0048 | 0.06 |
3| 0.27 | nan | nan |
</code></pre>
<p>In this example, 0.25 in df1 column A was replaced with nan because it was less than or equal to the value in df2 column A. And so on. In my actual data there are 15 columns I need to check.</p>
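<p>To spell out the rule without pandas, this is the loop I am effectively trying to vectorize (with <code>None</code> standing in for <code>nan</code>):</p>

```python
data = {
    "A": [-0.1, -0.002, 0.25, 0.27],
    "B": [-0.5, 0.0037, 0.0048, -0.01],
    "C": [3.0, -0.06, 0.06, 0.0055],
}
limits = {"A": 0.26, "B": 0.0037, "C": 0.0055}

# blank out any value at or below its column's limit
masked = {
    col: [None if value <= limits[col] else value for value in values]
    for col, values in data.items()
}
print(masked["A"])  # [None, None, None, 0.27]
```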
<p>I tried using apply</p>
<pre><code>df = df.apply(lambda x: np.nan if x<=df2.loc["LessThan"] else x)
</code></pre>
<p>and adding df2 onto df1 and then using apply</p>
<pre><code>df = pd.concat([df1, df2], axis=0)
df = df.apply(lambda x: np.nan if x<=df.loc["LessThan"] else x)
</code></pre>
| <python><pandas> | 2023-08-22 14:38:40 | 1 | 716 | Gingerhaze |
76,954,182 | 3,234,994 | How to add a member into an Azure AD Group using Python/Azure Databricks? | <p>I'm adding a bunch of Object IDs for multiple service principals as members into an Azure AD Group. Doing this using Azure PowerShell is very straightforward and easy. I just run the below loop as a PowerShell script:</p>
<pre><code>foreach ($user_object_id in $service_principals)
{
    Add-AzADGroupMember -TargetGroupObjectId "$Base_AD_group_ObjectId" -MemberObjectId "$user_object_id"
}
</code></pre>
<p>But we're making changes to our infra, so now I need to do the same thing using Python instead. I use Azure Databricks for our Python scripts, and I'm unable to figure out how to do this in Python/Databricks. (I can't simply call the PowerShell script from inside a Python script in this case. I believe that's not possible with Azure Databricks.)</p>
<p>Any help, please?</p>
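<p>From what I've read so far, the Microsoft Graph REST API should be callable from plain Python, so something like the sketch below might work — I haven't verified it, the endpoint/payload shape is my assumption from the Graph docs, and token acquisition is omitted:</p>

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_add_member_request(group_id: str, member_object_id: str, token: str):
    """Build the POST request that adds one directory object to a group."""
    url = f"{GRAPH}/groups/{group_id}/members/$ref"
    body = json.dumps(
        {"@odata.id": f"{GRAPH}/directoryObjects/{member_object_id}"}
    ).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# hypothetical loop, mirroring the PowerShell version:
# for user_object_id in service_principals:
#     urllib.request.urlopen(
#         build_add_member_request(base_group_id, user_object_id, access_token))
```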
| <python><azure><powershell> | 2023-08-22 14:13:24 | 1 | 2,583 | LearneR |
76,954,102 | 5,449,244 | Python Flask and Optional Routing - Code Design | <p>I am interested in understanding a few use cases of the following code:</p>
<pre><code>@app.route('/items', defaults={'category': 'all'})
@app.route('/items/<category>')
def get_items(category='all'):
    return f"Items in category: {category}"
</code></pre>
<p>I understand why this could be used to provide fallback behavior for a specific route, but are there any other use cases?</p>
<p>PS: Apologies in advance if this is not the right stack site for this. Please redirect me if necessary.</p>
| <python><flask><routes><url-routing> | 2023-08-22 14:02:49 | 0 | 883 | onlyf |
76,954,088 | 2,115,332 | How to convert multichannel audio to stereo in python? | <p>I have 5.1 or 7.1 audio in a WAV file. What is the correct way to convert this multichannel audio to stereo using Python?</p>
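<p>The naive approach I can think of is averaging the channels into two groups, but I suspect a proper 5.1/7.1 downmix needs specific per-channel weights — hence the question. For concreteness, the naive sketch:</p>

```python
import numpy as np

def naive_stereo(audio: np.ndarray) -> np.ndarray:
    """Naively average even-indexed channels into left and odd-indexed
    channels into right; audio has shape (num_samples, num_channels).
    (This ignores the real 5.1 channel layout and weighting.)"""
    left = audio[:, 0::2].mean(axis=1)
    right = audio[:, 1::2].mean(axis=1)
    return np.stack([left, right], axis=1)

block = np.ones((4, 6))  # a fake block of 5.1 samples
print(naive_stereo(block).shape)  # (4, 2)
```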
| <python><audio><librosa> | 2023-08-22 14:02:06 | 1 | 3,732 | ZFTurbo |
76,954,014 | 2,298,766 | Wireshark showed/captured http get image packet but socket won't receive it | <p>I have a simple webserver and 1 simple index.html webpage with only 1 image element in it to send to the web client.</p>
<p>I use the Chrome web browser running on an Android phone to get that webpage from the web server.</p>
<p>I have a problem.</p>
<p>The web server's code can get the first request message sent from the web client, but not the second one.</p>
<p>When the web client sends the first request, the web server's code can print the request. This means the recv() method works: it receives the client's first request, and the web server can send the first response back with the index.html bytes. The image bytes are not included, because the server will send them only after the web client sends a second request, as I explain below.</p>
<p>But when the web client sends the second request to get the image of the webpage (as it should, in order to get the complete page), the web server's code blocks on the recv() call forever without receiving that second request. On the other hand, Wireshark captured the second request, which means the web client did send it and it arrived at the web server machine. So why can't recv() get the second request? Why can recv() only get the first request from the web client?</p>
<p>Why does that happen, and what is wrong?</p>
<p>Thank you.</p>
<p>This is my webserver code, very simple:</p>
<pre><code>import socket
import sys

HOST = '192.168.43.157'
PORT = 80

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('# Socket created')

# Create socket on port
try:
    s.bind((HOST, PORT))
except socket.error as msg:
    print('# Bind failed. ')
    sys.exit()

print('# Socket bind complete')

# Start listening on socket
s.listen(10)
print('# Socket now listening')

# Wait for client
conn, addr = s.accept()
print('# Connected to ' + addr[0] + ':' + str(addr[1]))

response = open(r"C:\Users\Totz Tech\Videos\response.txt", 'rb').read()

# Receive data from client
while True:
    print(addr[0])
    data = conn.recv(5048)
    line = data.decode('UTF-8')  # convert to string (Python 3 only)

    # Parse the line to get the web page request
    line_member = line.split("\r\n")  # remove newline character
    line0member = line_member[0]
    line01member = line0member.split(" ")

    try:
        line01member[1][1:] == ""
    except:
        print("There is an error, bro")
    else:
        if line01member[1][1:] == "":
            token = r"C:\Users\Totz Tech\Videos\mypage.html"
            print("This is the token: " + token)
        else:
            token = line01member[1][1:]
            print("This is the token: " + token)

    f1 = open(token, 'rb')
    f1r = f1.read()
    conn.send(response + f1r)
    print("Go UP")

s.close()
</code></pre>
<p>This is what the response.txt file contains:</p>
<pre><code>HTTP/1.1 200 OK
</code></pre>
<p>This is what the mywebpage.html file contains:</p>
<pre><code><!DOCTYPE html>
<html>
<body>
<h1>The a download attribute</h1>
<p>Click on the image to download it:<p>
<a href="w3logo.jpg" download><img src="w3logo.jpg"></img></a>
<p>Notice that the filename of the downloaded file will be saved as "w3logo.jpg" instead of "myw3schoolsimage.jpg".</p>
<p><b>Note:</b> The download attribute is not supported in IE or Edge (prior version 18), or in Safari (prior version 10.1).</p>
</body>
</html>
</code></pre>
<p><a href="https://i.sstatic.net/ZXBsY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZXBsY.png" alt="This is Wireshark image of the second web client request:" /></a></p>
| <python><sockets><wireshark><blocking> | 2023-08-22 13:53:35 | 0 | 1,191 | The Mr. Totardo |
76,953,999 | 507,793 | Percent formatting byte string as utf-8 | <p>I'm using the <a href="https://docs.python.org/3.9/library/logging.html" rel="nofollow noreferrer">python logging</a> library with python 3.9.</p>
<p>I want to avoid the expense of doing a <code>decode</code> operation as most times I won't have debug level logging enabled.</p>
<pre class="lang-py prettyprint-override"><code>stdout, _ = process.communicate()
logging.debug("stdout: %s", stdout.decode('utf-8'))
</code></pre>
<p>One solution to avoid the <code>decode</code> call when logging is disabled is to explicitly check <code>isEnabledFor</code>; however, this adds a lot of noise to the code that I would like to avoid.</p>
<pre class="lang-py prettyprint-override"><code>if logging.getLogger().isEnabledFor(logging.DEBUG):
    logging.debug("stdout: %s", stdout.decode('utf-8'))
</code></pre>
<p>I'm hoping there's a string formatting option that will defer the <code>decode</code> operation and be easy to read.</p>
<pre class="lang-py prettyprint-override"><code>logging.debug("stdout: %(bs.utf-8)", stdout)
</code></pre>
<p>Does such an operation or formatter exist?</p>
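<p>For context, the closest I've come up with is a small wrapper whose <code>__str__</code> does the decode, so the work only happens if the record is actually formatted — but I'd prefer a built-in formatting option if one exists:</p>

```python
class LazyDecode:
    """Defers bytes.decode() until the log record is actually formatted."""

    def __init__(self, data: bytes, encoding: str = "utf-8"):
        self.data = data
        self.encoding = encoding

    def __str__(self) -> str:
        return self.data.decode(self.encoding)

# usage sketch:
# logging.debug("stdout: %s", LazyDecode(stdout))
```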
| <python><encoding><formatting><python-logging> | 2023-08-22 13:51:59 | 0 | 25,953 | Matthew |
76,953,679 | 18,494,333 | Way to Perform Tilde-Expansion on Every Path Evaluated by Python OS Module? | <p>I'm running Linux, and would like for my project to replace the tilde character, <code>'~'</code>, at the beginning of a path with my home directory, in the same way the terminal does. I've found a method of the <code>os</code> module that achieves this, known as <code>os.path.expanduser()</code>.</p>
<pre><code>from os.path import expanduser
# '/home/user/Music/Playlist 04'
expanduser('~/Music/Playlist 04')
</code></pre>
<p>My problem is that my project is going to incorporate many, many different packages that are completely separate from one another, meaning if I want to implement this functionality then I have to call the <code>expanduser()</code> method on every single path I take in for every single package.</p>
<p>Is there some kind of function I can call in my main file, the one that will run the main loop and make use of all of my other packages, that will somehow make it a rule that all file paths evaluated by the <code>'os'</code> module be subjected to the <code>expanduser()</code> method? Is there a way of doing this other than the <code>expanduser()</code> method? Would I have to hack <code>os</code>?</p>
<p>My two main reasons for wanting to implement this functionality are as follows.</p>
<ul>
<li>I don't want to have to import and then call the same method in all of my different packages just to avoid hardcoding my user directory. It's just replacing one bad practice with another.</li>
<li>Suppose one day I need to access a directory whose name literally begins with '~'? Would I need to go back in and change it? I don't want to code my own specific preference into everything I do.</li>
</ul>
<p>Thanks in advance.</p>
| <python><python-os><tilde-expansion> | 2023-08-22 13:11:13 | 1 | 387 | MillerTime |
76,953,485 | 1,422,096 | Where is the bottleneck in these 10 requests per second with Python Bottle + Javascript fetch? | <p>I am sending 10 HTTP requests per second between a Python Bottle server and a browser client with JS:</p>
<pre><code>import bottle, time

app = bottle.Bottle()

@bottle.route('/')
def index():
    return """<script>
    var i = 0;
    setInterval(() => {
        i += 1;
        let i2 = i;
        console.log("sending request", i2);
        fetch("/data")
            .then((r) => r.text())
            .then((arr) => {
                console.log("finished processing", i2);
            });
    }, 100);
    </script>"""

@bottle.route('/data')
def data():
    return "abcd"

bottle.run(port=80)
</code></pre>
<p>The result is rather poor:</p>
<pre><code>sending request 1
sending request 2
sending request 3
sending request 4
finished processing 1
sending request 5
sending request 6
sending request 7
finished processing 2
sending request 8
sending request 9
sending request 10
finished processing 3
sending request 11
sending request 12
</code></pre>
<p><strong>Why does it fail to process 10 requests per second successfully (on an average i5 computer): is there a known bottleneck in my code?</strong></p>
<p>Where are the 100 ms lost per request, that prevent the program to keep a normal pace like:</p>
<pre><code>sending request 1
finished processing 1
sending request 2
finished processing 2
sending request 3
finished processing 3
</code></pre>
<p>?</p>
<p>Notes:</p>
<ul>
<li><p>Tested with Flask instead of Bottle and the problem is similar</p>
</li>
<li><p>Is there a simple way to get this working:</p>
<ul>
<li><p>without having to either <strong>monkey patch</strong> the Python stdlib (with <code>from gevent import monkey; monkey.patch_all()</code>),</p>
</li>
<li><p>and without using a much more complex setup with Gunicorn or similar (not easy at all on Windows)?</p>
</li>
</ul>
<p>?</p>
</li>
</ul>
| <javascript><python><performance><flask><bottle> | 2023-08-22 12:47:55 | 3 | 47,388 | Basj |
76,953,409 | 10,089,181 | How to perform filtering using the query command from plydata when the comparison is against a variable with a fixed value | <p>In plydata, how do we filter rows when the comparison is made against a variable whose value is already defined?</p>
<pre><code>lower_age_limit = 10
upper_age_limit = 40
titanic_data >> query('age > {lower_age_limit} & age < {upper_age_limit}')
</code></pre>
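<p>For comparison, in plain pandas I believe the <code>@</code> prefix is how <code>query</code> refers to a variable in the surrounding scope, and I'm looking for plydata's equivalent:</p>

```python
import pandas as pd

titanic_data = pd.DataFrame({"age": [5, 22, 35, 60]})
lower_age_limit = 10
upper_age_limit = 40

# '@name' makes pandas query look the name up in the enclosing scope
subset = titanic_data.query("age > @lower_age_limit & age < @upper_age_limit")
print(subset["age"].tolist())  # [22, 35]
```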
| <python> | 2023-08-22 12:38:56 | 1 | 404 | Ransingh Satyajit Ray |
76,953,227 | 610,569 | How to use locally saved UniTE MUP model in Unbabel-Comet model for Machine Translation Evaluation? | <p>From <a href="https://huggingface.co/Unbabel/unite-mup" rel="nofollow noreferrer">https://huggingface.co/Unbabel/unite-mup</a>, there's a model that comes from the <a href="https://aclanthology.org/2022.acl-long.558/" rel="nofollow noreferrer">UniTE: Unified Translation Evaluation</a> paper. The usage is documented as follows:</p>
<pre><code>from comet import download_model, load_from_checkpoint
model_path = download_model("Unbabel/unite-mup")
model = load_from_checkpoint(model_path)
data = [
{
"src": "这是个句子。",
"mt": "This is a sentence.",
"ref": "It is a sentence."
},
{
"src": "这是另一个句子。",
"mt": "This is another sentence.",
"ref": "It is another sentence."
}
]
model_output = model.predict(data, batch_size=8, gpus=1)
</code></pre>
<p>Similar to <a href="https://stackoverflow.com/questions/75879866/how-to-load-unbabel-comet-model-without-nested-wrapper-initialization">How to load Unbabel Comet model without nested wrapper initialization?</a>, there's a <code>load_from_checkpoint</code> wrapper around the model and the actual class object that makes use of the model. Also, there's no clear instruction of how to use a locally saved <code>Unbabel/unite-mup</code> model.</p>
<p><strong>Is there some way to use a locally saved UniTE MUP model in Unbabel-Comet for Machine Translation Evaluation?</strong></p>
| <python><pytorch-lightning><machine-translation><huggingface-hub><unbabel-comet> | 2023-08-22 12:14:43 | 1 | 123,325 | alvas |
76,953,105 | 1,470,127 | Unable to write parquet with DATE as logical type for a column from pandas | <p>I am trying to write a parquet file which contains one date column having logical type in parquet as DATE and physical type as INT32. I am writing the parquet file using pandas and using fastparquet as the engine since I need to stream the data from database and append to the same parquet file. Here is my code</p>
<pre><code>import os
import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy import text
sql = """SELECT TO_VARCHAR(CURRENT_DATE, 'YYYYMMDD') AS "REPORT_DATE" FROM DUMMY"""
def create_stream_enabled_connection(username, password, host, port):
conn_str = f"mydb://{username}:{password}@{host}:{port}"
engine = create_engine(conn_str, connect_args={'encrypt': 'True', 'sslValidateCertificate':'False'})
connection = engine.connect().execution_options(stream_results=True)
return connection
connection = create_stream_enabled_connection(username, password, host, port)
ROWS_IN_CHUNK = 500000
# Stream chunks from database
for dataframe_chunk in pd.read_sql(text(sql), connection, chunksize=ROWS_IN_CHUNK):
if os.stat(local_path).st_size == 0: # If file is empty
# write parquet file
dataframe_chunk.to_parquet(local_path, index=False, engine='fastparquet')
else:
# write parquet file
dataframe_chunk.to_parquet(local_path, index=False, engine='fastparquet', append=True)
</code></pre>
<p><strong>Problem:</strong></p>
<p>I am unable to get the logical type to be DATE and physical type to be INT32 in the output parquet from pandas <code>to_parquet</code> function using <code>fastparquet</code> as engine.</p>
<p><strong>A few things that I have tried:</strong></p>
<ul>
<li>If I take the REPORT_DATE coming from database as string and put it to parquet, then the logical type is STRING. Doesn't work for me.</li>
<li>If I take the REPORT_DATE coming from database as string and convert it to datetime using the following code, then the dtype is <code>datetime64[ns]</code> and logical type in parquet is <code>Timestamp(isAdjustedToUTC=false, timeUnit=nanoseconds, is_from_converted_type=false, force_set_converted_type=false)</code> and physical type is <code>INT64</code>. Doesn't work for me.</li>
</ul>
<p><code>dataframe_chunk['report_date'] = pd.to_datetime(dataframe_chunk['report_date'], format='%Y%m%d')</code></p>
<p>I need the parquet logical type to be <code>DATE</code> and physical type to be <code>INT32</code> since this parquet will be loaded directly to bigquery and the REPORT_DATE column will go to a <code>DATE</code> type column in bigquery. See the bigquery documentation here: <a href="https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-parquet#type_conversions" rel="nofollow noreferrer">https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-parquet#type_conversions</a></p>
<p>If I try to store STRING or DATETIME column in parquet and load it in bigquery, bigquery fails by saying that it expected the column with another type.</p>
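One approach worth trying (an assumption — fastparquet's date handling may differ, and the sample frame here is hypothetical) is converting the column to plain Python <code>datetime.date</code> objects, which the pyarrow engine maps to the DATE logical type over INT32. The conversion step:

```python
import datetime
import pandas as pd

# hypothetical sample chunk with the YYYYMMDD strings coming from the database
chunk = pd.DataFrame({"REPORT_DATE": ["20230822", "20230823"]})
chunk["REPORT_DATE"] = pd.to_datetime(chunk["REPORT_DATE"], format="%Y%m%d").dt.date
# values are now plain datetime.date objects (column dtype is object)
print(type(chunk["REPORT_DATE"].iloc[0]))
```

Writing this with <code>chunk.to_parquet(path, engine='pyarrow')</code> should then emit DATE/INT32 — though pyarrow's `to_parquet` lacks fastparquet's `append=True`, so appending chunks may require keeping a `pyarrow.parquet.ParquetWriter` open instead; both points are assumptions to verify against your versions.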
| <python><pandas><google-bigquery><parquet><fastparquet> | 2023-08-22 12:00:38 | 2 | 4,059 | Behroz Sikander |
76,952,996 | 7,836,530 | Trying to avoid g++ installed in my image | <p>I have the following dockerfile</p>
<pre><code>FROM python:3.8-slim as builder
RUN apt-get update && apt-get install -y g++
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt
FROM python:3.8-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
EXPOSE 5004
CMD ["python", "server.py"]
</code></pre>
<p>One of my libraries (I need the PyStan library, with no possibility of considering alternatives) needs a C++ compiler to be installed at build time (not at run time), so I install <code>g++</code>, which causes the image size to increase to 1.4GB.</p>
<p>I tried looking for other alternatives such as musl-gcc or llvm-g++ but none of them seem to be available under my python image.</p>
<p>Is there any workaround to reduce the image weight without needing to install g++?</p>
| <python><docker><dockerfile> | 2023-08-22 11:47:44 | 0 | 2,283 | JamesHudson81 |
76,952,934 | 1,344,855 | Test isolation in async pytest fixtures for SQLAlchemy | <p>Running a pytest suite with asyncio and SQLAlchemy, I can't figure out how to achieve proper test isolation.</p>
<p>My pytest conftest.py is set up as follows:</p>
<pre class="lang-py prettyprint-override"><code># conftest.py
DATABASE_URL = 'postgresql+asyncpg://user:pass@host:5432/db'
async_engine = create_async_engine(DATABASE_URL, echo=True)
AsyncSessionLocal = async_sessionmaker(
bind=async_engine,
class_=AsyncSession,
autocommit=False,
autoflush=False,
expire_on_commit=False,
)
@pytest.fixture(scope='session')
def event_loop(request) -> asyncio.AbstractEventLoop:
# making the event loop accessible in the session
loop = asyncio.get_event_loop_policy().new_event_loop()
yield loop
loop.close()
@pytest.fixture(scope='session', autouse=True)
async def create_test_database():
# setup and teardown of database
async with async_engine.begin() as conn:
await conn.run_sync(Base.metadata.create_all)
yield
async with async_engine.begin() as conn:
await conn.run_sync(Base.metadata.drop_all)
@pytest.fixture(scope='function')
async def db() -> AsyncSession:
# yield a database session used by test functions
async with AsyncSessionLocal() as session:
yield session
</code></pre>
<p>If I run my ~200 tests like this they fail due to non-existent test isolation ( <code>test_function2(db)</code> accesses something that <code>test_function1(db)</code> has set up previously, so the assertion fails).</p>
<h2>Option 1: Set up and teardown in each function:</h2>
<p>Change the fixture scope of <code>create_test_database</code> to run in every function:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(scope='function', autouse=True)
async def create_test_database():
</code></pre>
<p>This leads to very long test runs, seemingly exponential.</p>
<h2>Option 2: Truncate</h2>
<p>Adding this fixture:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(scope="function", autouse=True)
async def truncate_tables():
async with async_engine.begin() as conn:
for table in reversed(Base.metadata.sorted_tables):
await conn.execute(table.delete())
yield
</code></pre>
<p>... truncates the tables after each test function. This runs fine but it smells like a hack. Is there a better way?</p>
<p>My deps:</p>
<pre><code>aiosqlite==0.19.0
alembic==1.11.2
anyio==3.7.1
asyncpg==0.27.0
fastapi==0.97.0
pytest==7.4.0
pytest-asyncio==0.21.1
SQLAlchemy==2.0.19
</code></pre>
<p>Is there a better option to achieve test isolation via pytest fixtures in SQLAlchemy using Async?</p>
| <python><postgresql><sqlalchemy><pytest><python-asyncio> | 2023-08-22 11:40:26 | 2 | 2,436 | dh762 |
76,952,744 | 130,208 | Python packaging with Flit: specifying dependencies: nox, Flit, and pyproject.toml | <p>I am using <a href="https://pypi.org/project/nox/" rel="nofollow noreferrer">Nox</a> to automate various Python packaging tasks. I am using <a href="https://flit.pypa.io/en/stable/" rel="nofollow noreferrer">Flit</a> to build a dist package and <em>pyproject.toml</em> is the metadata file.</p>
<p>My <em>pyproject.toml</em> looks as follows:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["flit_core >=3,<4", "ofjustpy-engine"]
build-backend = "flit_core.buildapi"
[tool.flit.metadata]
name = "ofjustpy"
module = "ofjuspty" # Replace with your actual package name
version = "1.0.0" # Replace with your desired version number
author = "Kabira K."
author-email = "kabira@monallabs.in"
description = "Full stack webdev framework in pure Python"
[tool.flit.metadata.dependencies]
python = ">=3.6"
ofjustpy_engine = "*"
[project]
name = "ofjustpy"
maintainers = [
{ name="Kabira K", email= "kabira@monallabs.in"}
]
dynamic = ["version", "description"]
readme= "README.md"
license = { file="LICENSE" }
dependencies = [
'ofjustpy-engine'
]
[project.urls]
Home = "http://webworks.monallabs.in/ofjustpy/ofjustpy"
Documentation = "https://github.com/monallabs-org/ofjustpy/ofjustpy"
Source = "https://github.com/monallabs-org/ofjustpy/"
[tool.pytest.ini_options]
addopts = [
"--import-mode=importlib"
]
</code></pre>
<p>The Nox file for running tests and creating badges looks as follows:</p>
<pre class="lang-py prettyprint-override"><code>@nox.session(python="3.11")
def badges(session):
# Install pytest, pytest-cov, flake8 and genbadge
session.install("pytest", "pytest-cov", "flake8", "flake8-html", "ofjustpy-engine", "genbadge[all]")
with tempfile.NamedTemporaryFile(delete=False) as fp:
print("THE FILE PATH NAME = ", fp.name)
session.run(
"pipenv",
"run",
"pip",
"freeze",
external=True,
stdout=fp,
)
session.env["PIPENV_VERBOSITY"] = "-1"
# Install dependencies from the temporary requirements.txt
session.install("-r", fp.name)
# Install your package in editable mode
session.run("pip", "install", "-e", ".")
# Run pytest to generate the junit and html reports
session.run(
"pytest",
"--cov",
"--junitxml=reports/junit/junit.xml",
"--cov-report=xml:reports/coverage/coverage.xml",
"--cov-report=html:reports/coverage/coverage.html",
)
# Run flake8 to generate the HTML report and statistics
session.run(
"flake8",
"src/addict_tracking_changes",
"--exit-zero",
"--format=pylint",
# "--htmldir=./reports/flake8",
"--statistics",
"--output-file=reports/flake8/flake8stats.txt",
)
# Generate a badge for tests
session.run(
"genbadge", "tests", "-i", "reports/junit/junit.xml", "-o", "badge_tests.svg"
)
# Generate a badge for coverage
session.run(
"genbadge",
"coverage",
"-i",
"reports/coverage/coverage.xml",
"-o",
"badge_coverage.svg",
)
# Generate a badge for flake8 based on the statistics text report
session.run(
"genbadge",
"flake8",
"-i",
"reports/flake8/flake8stats.txt",
"-o",
"badge_flake8.svg",
)
</code></pre>
<p>When I run <code>nox -rs badges</code>, it fails with an error:</p>
<pre class="lang-none prettyprint-override"><code>Preparing editable metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing editable metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
Traceback (most recent call last):
File "/home/kabiraatmonallabs/to_githubcodes/ofjustpy-new/.nox/badges/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/home/kabiraatmonallabs/to_githubcodes/ofjustpy-new/.nox/badges/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/kabiraatmonallabs/to_githubcodes/ofjustpy-new/.nox/badges/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 181, in prepare_metadata_for_build_editable
return hook(metadata_directory, config_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-js3x92v0/overlay/lib/python3.11/site-packages/flit_core/buildapi.py", line 49, in prepare_metadata_for_build_wheel
metadata = make_metadata(module, ini_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-js3x92v0/overlay/lib/python3.11/site-packages/flit_core/common.py", line 425, in make_metadata
md_dict.update(get_info_from_module(module, ini_info.dynamic_metadata))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-js3x92v0/overlay/lib/python3.11/site-packages/flit_core/common.py", line 222, in get_info_from_module
docstring, version = get_docstring_and_version_via_import(target)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-js3x92v0/overlay/lib/python3.11/site-packages/flit_core/common.py", line 195, in get_docstring_and_version_via_import
spec.loader.exec_module(m)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/kabiraatmonallabs/to_githubcodes/ofjustpy-new/src/ofjustpy/__init__.py", line 2, in <module>
import ofjustpy_engine as jp
ModuleNotFoundError: No module named 'ofjustpy_engine'
[end of output]
</code></pre>
<p>I am just not sure where to add the <em>ofjustpy_engine</em> dependency, so that it gets picked up when Nox invokes Flit.</p>
| <python><pip><pipenv><nox><flit> | 2023-08-22 11:17:48 | 0 | 2,065 | Kabira K |
76,952,714 | 736,662 | How to increase a static timestamp value for each call | <p>In my Locust script (Python) I declare these two values at the start of the script:</p>
<pre><code>from_date = "2023-07-01T08:00:00.000Z"
to_date = "2023-07-01T08:15:00.000Z"
</code></pre>
<p>Later in the script I use the values like this:</p>
<pre><code>class SaveValuesUser(HttpUser):
host = server_name
# wait_time = between(10, 15)
@task
def save_list_values(self):
data_list = []
for i, (ts_id, ts_value) in enumerate(TS_IDs.items()):
if i > 31:
break
data = get_data(ts_id, from_date, to_date, ts_value)
data_list.append(data)
json_data = json.dumps(data_list, indent=2)
self.save_values(json_data)
def save_values(self, json_data):
print(type(json_data))
print(json_data)
# Make the PUT request with authentication:
response = self.client.put("/api/SaveValues", data=json_data, headers=headers)
# Check the response:
if response.status_code == 200:
print("SaveValues successful!")
print("Response:", response.json())
else:
print("SaveValues failed.")
print("Response:", response.text)
</code></pre>
<p>Is there a way to have allways increasing (and unique) timestamp values for each increment in the for loop?</p>
<p>Can I, for example, use the loop iterator to increase the timestamps, e.g. to</p>
<pre><code> from_date = "2023-07-02T08:00:00.000Z"
to_date = "2023-07-02T08:15:00.000Z"
</code></pre>
<p>here increased by one day, though that could become a problem once the index passes the number of days in the month.</p>
<p>I need valid timestamps that are unique and strictly increasing for each call.</p>
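One way (a sketch — <code>next_window</code> and the module-level counter are names invented here) is to derive each window from a fixed base timestamp plus an ever-increasing offset; <code>timedelta</code> arithmetic handles month and day rollover automatically, so the index passing 30 is not a problem:

```python
import itertools
from datetime import datetime, timedelta

_counter = itertools.count()  # one shared counter for the whole test run

def next_window(base="2023-07-01T08:00:00.000Z", step_minutes=15):
    """Return a unique, strictly increasing (from_date, to_date) pair."""
    i = next(_counter)
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
    start = datetime.strptime(base, fmt) + timedelta(minutes=step_minutes * i)
    end = start + timedelta(minutes=step_minutes)
    to_iso = lambda d: d.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return to_iso(start), to_iso(end)

print(next_window())  # ('2023-07-01T08:00:00.000Z', '2023-07-01T08:15:00.000Z')
print(next_window())  # ('2023-07-01T08:15:00.000Z', '2023-07-01T08:30:00.000Z')
```

Calling <code>from_date, to_date = next_window()</code> inside the task would then give each request its own non-overlapping 15-minute window; note that with many concurrent Locust users a shared counter like this is only safe within one process.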
| <python><locust> | 2023-08-22 11:14:53 | 0 | 1,003 | Magnus Jensen |
76,952,695 | 12,730,406 | How to make stacked bar chart with annotations | <p>I have the following dataframe:</p>
<pre><code>df = pd.DataFrame(data = {'id' : ['hbo', 'hbo', 'hbo', 'fox','fox','fox', 'NBC', 'NBC', 'NBC'],
'device': ['laptop', 'tv', 'cellphone','laptop', 'tv', 'cellphone','laptop', 'tv', 'cellphone'],
'count': [100,200,300, 400, 300, 200, 900, 1000, 100],
'date': ['2022-04-30', '2022-05-30','2022-06-30', '2022-07-30', '2022-08-30','2022-09-30',
'2022-10-30', '2022-11-30','2022-12-30']})
</code></pre>
<p>I am trying to code and make a stacked bar chart, with the following:</p>
<ol>
<li>Bar Chart is stacked by count, so like a normal stacked bar chart</li>
<li>Annotation on the stacked bar chart areas showing the % (percentage)</li>
<li>x-axis is to show the date</li>
<li>y-axis to show count</li>
<li>color is defined by device - e.g. tv = 'red'</li>
</ol>
<p>The stacked bar chart is over time (as opposed to a line chart).</p>
<p>So far I have the following code, but am stuck on how to pivot the data and create the output using matplotlib:</p>
<pre><code>fig, ax = plt.subplots(1,1)
ax = df.plot.bar(stacked=True, figsize=(10, 6), ylabel='count', xlabel='dates', title='count of viewing type over time', ax = ax)
plt.legend(title='Category', bbox_to_anchor=(1.05, 1), loc='upper left')
plt.xlabel('Date', fontsize = 14)
plt.ylabel('Count of viewing type', fontsize = 14)
plt.title('plot test', fontsize = 20)
plt.xticks(rotation = 45)
plt.show();
</code></pre>
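The missing step (a sketch — the percentage layout is an assumption about the desired annotation) is pivoting to one column per device before plotting; the row-wise percentages can then drive the annotations:

```python
import pandas as pd

df = pd.DataFrame({
    "id": ["hbo"] * 3 + ["fox"] * 3 + ["NBC"] * 3,
    "device": ["laptop", "tv", "cellphone"] * 3,
    "count": [100, 200, 300, 400, 300, 200, 900, 1000, 100],
    "date": ["2022-04-30", "2022-05-30", "2022-06-30", "2022-07-30", "2022-08-30",
             "2022-09-30", "2022-10-30", "2022-11-30", "2022-12-30"],
})

# one row per date, one column per device -- what a stacked bar plot expects
pivoted = df.pivot_table(index="date", columns="device",
                         values="count", aggfunc="sum").fillna(0)
# share of each device per date, for the annotations
pct = pivoted.div(pivoted.sum(axis=1), axis=0) * 100
print(pct.loc["2022-04-30", "laptop"])  # 100.0 (only one device per date in this sample)
```

Plotting would then be <code>ax = pivoted.plot.bar(stacked=True)</code>, and with matplotlib 3.4+ the segments can be annotated via <code>ax.bar_label(container, ...)</code> for each entry in <code>ax.containers</code>, feeding it the matching <code>pct</code> values; recent pandas versions also accept a dict for <code>color</code> keyed by column (e.g. <code>{'tv': 'red'}</code>) — both details are worth checking against your versions.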
| <python><matplotlib><stacked-bar-chart><plot-annotations> | 2023-08-22 11:12:26 | 1 | 1,121 | Beans On Toast |
76,952,528 | 14,667,788 | Repeat a row based on column values in pandas | <p>I have this df in pandas:</p>
<pre class="lang-py prettyprint-override"><code>
df = { "id" : ["apple", "potato","lemon"],"som" : [4, 2, 7] , "value" : [1, 2, 4]}
df = pd.DataFrame(df)
</code></pre>
<p>I would like to repeat each row based on <code>value</code>. Desired output is:</p>
<pre class="lang-py prettyprint-override"><code>results_df = {"id" : ["apple", "potato", "potato", "lemon", "lemon", "lemon", "lemon"] , "som" : [4, 2, 2, 7, 7, 7, 7]}
results_df = pd.DataFrame(results_df)
</code></pre>
<p>How can I do this please?</p>
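A vectorized way to do this (a sketch) is <code>Index.repeat</code>, which repeats each row label <code>value</code>-many times, after which <code>.loc</code> duplicates the rows:

```python
import pandas as pd

df = pd.DataFrame({"id": ["apple", "potato", "lemon"],
                   "som": [4, 2, 7],
                   "value": [1, 2, 4]})

# repeat each row index label `value` times, then drop the helper column
out = (df.loc[df.index.repeat(df["value"])]
         .drop(columns="value")
         .reset_index(drop=True))
print(out["id"].tolist())
# ['apple', 'potato', 'potato', 'lemon', 'lemon', 'lemon', 'lemon']
```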
| <python><pandas> | 2023-08-22 10:47:50 | 1 | 1,265 | vojtam |
76,952,434 | 8,669,616 | Logging Python script code to a file based on flow of execution | <p>I have three different python files <code>a.py</code>, <code>b.py</code> and <code>c.py</code>. These three scripts are interconnected such that functions of <code>a.py</code> are imported in <code>b.py</code> and functions of <code>c.py</code> are imported in <code>b.py</code>.</p>
<p><code>b.py</code> is the main file. Running <code>b.py</code> will import functions from other two files. I want to write the python code that is being executed to a file such that the code to be saved in the file based on order of execution from <code>a.py</code>, <code>b.py</code> & <code>c.py</code>.</p>
<p>Can you please suggest an efficient way?</p>
<p>The goal is a single file containing the flow of code execution across the three Python scripts.</p>
<p>sample content in final file:</p>
<pre><code>test() # coming from a.py
test1() # coming from c.py
def test3():
print("test2")
test3()
</code></pre>
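One stdlib option (a sketch — it records (filename, line number) pairs in execution order rather than source text, which you would then resolve with <code>linecache</code>) is <code>sys.settrace</code>, which fires for every executed line regardless of which module it lives in, so the log naturally follows the a.py → b.py → c.py execution order:

```python
import sys

executed = []  # (filename, line_number) pairs in execution order

def tracer(frame, event, arg):
    if event == "line":
        executed.append((frame.f_code.co_filename, frame.f_lineno))
    return tracer  # keep tracing inside this frame

def helper():        # stands in for a function imported from a.py or c.py
    return "test"

def main():          # stands in for b.py's top-level flow
    value = helper()
    return value

sys.settrace(tracer)
result = main()
sys.settrace(None)

print(result, len(executed))
```

To produce the final file, each recorded pair can be turned into source via <code>linecache.getline(filename, lineno)</code>, filtered to just your three scripts' filenames — that resolution step is left out of this sketch.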
| <python><logging> | 2023-08-22 10:35:31 | 2 | 553 | Avinash |
76,952,203 | 14,667,788 | How to convert pandas column into rows given a id? | <p>I have the following pandas df:</p>
<pre class="lang-py prettyprint-override"><code>df = { "id" : [1, 1, 1, 2, 2, 5], "name" : ["apple", "potato","lemon", "apple","potato","apple"]}
df = pd.DataFrame(df)
</code></pre>
<p>I would like to spread the <code>name</code> column into separate columns based on its order of appearance within each <code>id</code>. Desired output is:</p>
<pre class="lang-py prettyprint-override"><code>new_df = {1 : ["apple", "apple", "apple" ], 2: ["potato", "potato", ""], 3: ["lemon", "", ""]}
new_df = pd.DataFrame(new_df)
</code></pre>
<p>How can I do it please?</p>
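One way (a sketch) is to rank the names within each <code>id</code> using <code>cumcount</code> and then <code>pivot</code>; ids with fewer names get NaN, which <code>fillna("")</code> turns into the empty strings of the desired output:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 1, 2, 2, 5],
                   "name": ["apple", "potato", "lemon", "apple", "potato", "apple"]})

# rank = 1-based position of each name within its id group
wide = (df.assign(rank=df.groupby("id").cumcount() + 1)
          .pivot(index="id", columns="rank", values="name")
          .fillna(""))
print(wide[1].tolist())      # ['apple', 'apple', 'apple']
print(wide[2].tolist())      # ['potato', 'potato', '']
```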
| <python><pandas> | 2023-08-22 10:05:29 | 0 | 1,265 | vojtam |
76,952,016 | 1,422,096 | Make a request every 100 ms with setInterval, but the request can sometimes take more than 100 ms | <p>I need to update an image in a <code><canvas></code> 10 times per second, with data coming from an HTTP request:</p>
<pre><code>var i = 0;
setInterval(() => {
i += 1;
console.log("sending request", i);
fetch("/data") // this can sometimes take more than 100 ms
.then((r) => r.arrayBuffer())
.then((arr) => {
// update canvas with putImageData...
console.log("finished processing", i);
});
}, 100);
</code></pre>
<p>It works, but since the request sometimes takes more than 100ms, it gives results like:</p>
<pre><code>sending request 1
sending request 2
sending request 3
finished processing 3
sending request 4
sending request 5
finished processing 5
sending request 6
sending request 7
sending request 8
</code></pre>
<p>Question:</p>
<p><strong>Are the not-yet-finished requests queued for later (which would make everything even slower!), or are they just dropped?</strong> By the way, how can I modify my example to test which of the two it is? (I haven't been able to build an MCVE confirming either behaviour.)</p>
<p>Since it's video, I need to drop frames if the server takes too long to respond, how would you do it?</p>
<p>PS: example Python server code that takes 150 ms to respond:</p>
<pre><code>import bottle, time
app = bottle.Bottle()
@bottle.route('/')
def index():
return open("test.html", "r").read()
@bottle.route('/data')
def data():
time.sleep(0.150)
return b"0000"
bottle.run(port=80)
</code></pre>
| <javascript><python><video-streaming><setinterval> | 2023-08-22 09:41:47 | 2 | 47,388 | Basj |
76,951,922 | 15,452,168 | Extracting accurate product color and background color from an image using python | <p>I'm trying to extract the product color and background color from images where a model is wearing the product (e.g., a dress). My current approach involves masking out the skin and hair colors and then using KMeans clustering to find the dominant colors. However, the results aren't always accurate, especially when the product color is similar to the background or the model's skin/hair.</p>
<p>Here's the code I've been using:</p>
<pre><code>import cv2
import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
def mask_human_colors(image):
# Convert the image to HSV
hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
# Define a range for skin colors in HSV
lower_skin = np.array([0, 20, 70], dtype=np.uint8)
upper_skin = np.array([20, 255, 255], dtype=np.uint8)
skin_mask = cv2.inRange(hsv, lower_skin, upper_skin)
# Define a range for common hair colors in HSV (This is a rough estimate and might need adjustments)
lower_hair = np.array([0, 0, 0], dtype=np.uint8)
upper_hair = np.array([180, 255, 60], dtype=np.uint8)
hair_mask = cv2.inRange(hsv, lower_hair, upper_hair)
# Combine the skin and hair masks
human_mask = cv2.bitwise_or(skin_mask, hair_mask)
# Mask the image to remove human colors
masked_image = cv2.bitwise_and(image, image, mask=~human_mask)
return masked_image
def extract_top_colors(image, k=2):
pixels = image.reshape(-1, 3)
pixels = pixels[np.any(pixels != [0, 0, 0], axis=1)] # Removing black pixels
kmeans = KMeans(n_clusters=k, n_init=10)
kmeans.fit(pixels)
sorted_labels = np.argsort(np.bincount(kmeans.labels_))[::-1]
colors = [kmeans.cluster_centers_[label].astype(int).tolist() for label in sorted_labels]
return colors
# Usage:
image_path = '/content/white_dress.JPG'
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Mask human colors
masked_image = mask_human_colors(image)
# Extract top colors
dominant_colors = extract_top_colors(masked_image, k=2)
print(f"1st Dominant Color (RGB): {dominant_colors[0]}")
print(f"2nd Dominant Color (RGB): {dominant_colors[1]}")
fig, ax = plt.subplots(1, 3, figsize=(20, 5))
ax[0].imshow(image)
ax[0].set_title("Original Image")
ax[1].imshow(masked_image)
ax[1].set_title("Image after Masking Human Colors")
ax[2].imshow([dominant_colors])
ax[2].set_title("Top 2 Dominant Colors")
plt.tight_layout()
plt.show()
</code></pre>
<p>Here's an example image I'm working with: <a href="https://i.sstatic.net/jYiqN.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jYiqN.jpg" alt="enter image description here" /></a></p>
<p>In some cases, the extracted product color is a lighter shade than the actual product or it picks the background color.</p>
<p>The current solution's result:</p>
<p><a href="https://i.sstatic.net/2tyc0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2tyc0.png" alt="enter image description here" /></a></p>
<p>I'm looking for suggestions to improve the accuracy of color extraction or alternative techniques that might be more effective.</p>
<p>Thank you</p>
| <python><image><opencv><image-processing><computer-vision> | 2023-08-22 09:27:38 | 0 | 570 | sdave |
76,951,809 | 6,758,862 | TS video copied to MP4, missing first 3 frames when programmatically read (ffmpeg bug) | <p>Running:</p>
<pre><code>ffmpeg -i test.ts -fflags +genpts -c copy -y test.mp4
</code></pre>
<p>for <a href="https://mega.nz/file/JANmRTgR#6wtVXNIHoB9MGhCOBuv1oKIdl4o_adiWcI73R6ggusc" rel="nofollow noreferrer">this</a> test.ts, which has 30 frames, readable by opencv, I end up with 28 frames, out of which 27 are readable by opencv. More specifically:</p>
<pre><code>ffprobe -v error -select_streams v:0 -count_packets -show_entries stream=nb_read_packets -of csv=p=0 tmp.ts
</code></pre>
<p>returns 30.</p>
<pre><code>ffprobe -v error -select_streams v:0 -count_packets -show_entries stream=nb_read_packets -of csv=p=0 tmp.mp4
</code></pre>
<p>returns 28.</p>
<p>Using OpenCV in that manner</p>
<pre><code>cap = cv2.VideoCapture(tmp_path)
readMat = []
while cap.isOpened():
ret, frame = cap.read()
if not ret:
break
readMat.append(frame)
</code></pre>
<p>I get 30 frames for the ts file, but only 27 for the mp4.</p>
<p>Could someone explain these discrepancies? I get no error during the transformation from ts to mp4:</p>
<pre><code>ffmpeg version N-111746-gd53acf452f Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 11.3.0 (GCC)
configuration: --ld=g++ --bindir=/bin --extra-libs='-lpthread -lm' --pkg-config-flags=--static --enable-static --enable-gpl --enable-libaom --enable-libass --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libsvtav1 --enable-libdav1d --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --enable-cuda-nvcc --enable-cuvid --enable-nvenc --enable-libnpp
libavutil 58. 16.101 / 58. 16.101
libavcodec 60. 23.100 / 60. 23.100
libavformat 60. 10.100 / 60. 10.100
libavdevice 60. 2.101 / 60. 2.101
libavfilter 9. 10.100 / 9. 10.100
libswscale 7. 3.100 / 7. 3.100
libswresample 4. 11.100 / 4. 11.100
libpostproc 57. 2.100 / 57. 2.100
[mpegts @ 0x4237240] DTS discontinuity in stream 0: packet 5 with DTS 306003, packet 6 with DTS 396001
Input #0, mpegts, from 'tmp.ts':
Duration: 00:00:21.33, start: 3.400000, bitrate: 15 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 300x300, 1 fps, 3 tbr, 90k tbn
Output #0, mp4, to 'test.mp4':
Metadata:
encoder : Lavf60.10.100
Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 300x300, q=2-31, 1 fps, 3 tbr, 90k tbn
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[out#0/mp4 @ 0x423e280] video:25kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 4.192123%
frame= 30 fps=0.0 q=-1.0 Lsize= 26kB time=00:00:21.00 bitrate= 10.3kbits/s speed=1e+04x
</code></pre>
<h2>Additional information</h2>
<p>The video I am processing originates from a continuous stitching operation of single-frame ts videos, produced by this class's <code>update</code> method:</p>
<pre class="lang-py prettyprint-override"><code>import logging
import os
import shutil
import subprocess
from tempfile import NamedTemporaryFile
from typing import Optional

import cv2
import numpy as np

LOGGER = logging.getLogger(__name__)

class VideoUpdater:
def __init__(
self, video_path: str, framerate: int, timePerFrame: Optional[int] = None
):
"""
Video updater takes in a video path, and updates it using a supplied frame, based on a given framerate.
Args:
video_path: str: Specify the path to the video file
framerate: int: Set the frame rate of the video
"""
if not video_path.endswith(".mp4"):
LOGGER.warning(
f"File type {os.path.splitext(video_path)[1]} not supported for streaming, switching to ts"
)
video_path = os.path.splitext(video_path)[0] + ".mp4"
self._ps = None
self.env = {
}
self.ffmpeg = "/usr/bin/ffmpeg "
self.video_path = video_path
self.ts_path = video_path.replace(".mp4", ".ts")
self.tfile = None
self.framerate = framerate
self._video = None
self.last_frame = None
self.curr_frame = None
def update(self, frame: np.ndarray):
if len(frame.shape) == 2:
frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
else:
frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
self.writeFrame(frame)
def writeFrame(self, frame: np.ndarray):
"""
The writeFrame function takes a frame and writes it to the video file.
Args:
frame: np.ndarray: Write the frame to a temporary file
"""
tImLFrame = NamedTemporaryFile(suffix=".png")
tVidLFrame = NamedTemporaryFile(suffix=".ts")
cv2.imwrite(tImLFrame.name, frame)
ps = subprocess.Popen(
self.ffmpeg
+ rf"-loop 1 -r {self.framerate} -i {tImLFrame.name} -t {self.framerate} -vcodec libx264 -pix_fmt yuv420p -y {tVidLFrame.name}",
env=self.env,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
ps.communicate()
if os.path.isfile(self.ts_path):
# this does not work to watch, as timestamps are not updated
ps = subprocess.Popen(
self.ffmpeg
+ rf'-i "concat:{self.ts_path}|{tVidLFrame.name}" -c copy -y {self.ts_path.replace(".ts", ".bak.ts")}',
env=self.env,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
ps.communicate()
shutil.move(self.ts_path.replace(".ts", ".bak.ts"), self.ts_path)
else:
shutil.copyfile(tVidLFrame.name, self.ts_path)
# fixing timestamps, we dont have to wait for this operation
ps = subprocess.Popen(
self.ffmpeg
+ rf"-i {self.ts_path} -fflags +genpts -c copy -y {self.video_path}",
env=self.env,
shell=True,
# stdout=subprocess.PIPE,
# stderr=subprocess.PIPE,
)
tImLFrame.close()
tVidLFrame.close()
</code></pre>
| <python><opencv><video><ffmpeg> | 2023-08-22 09:14:25 | 2 | 723 | Vasilis Lemonidis |
76,951,687 | 6,535,324 | Make python class "vars-able" | <p>I have a python class:</p>
<pre class="lang-py prettyprint-override"><code>class myclass:
a_a = "a---a"
b_b = "b---b"
</code></pre>
<p>I would like to be able to use <code>vars(myclass)</code> to get a dictionary that looks like this:</p>
<pre><code>{"a_a" : "a---a", "b_b" : "b---b"}
</code></pre>
<p>It should do so dynamically, so that if I add something to <code>myclass</code> I don't have to update the information twice.</p>
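<code>vars(myclass)</code> already returns a mapping (a <code>mappingproxy</code>) containing the class attributes, but alongside dunders like <code>__module__</code> and <code>__dict__</code>. A small helper (the name is invented here) can filter those out dynamically, so new attributes are picked up automatically:

```python
def public_class_vars(cls):
    """Plain-data class attributes, skipping dunders and callables."""
    return {k: v for k, v in vars(cls).items()
            if not k.startswith("__") and not callable(v)}

class myclass:
    a_a = "a---a"
    b_b = "b---b"

print(public_class_vars(myclass))  # {'a_a': 'a---a', 'b_b': 'b---b'}
```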
| <python> | 2023-08-22 09:00:08 | 1 | 2,544 | safex |
76,951,556 | 13,706,389 | Plotly Dash change primary color of all components | <p>This question seems too simple, but I can't find a proper solution.
I have a Dash app which is styled with Bootstrap like this:</p>
<pre class="lang-py prettyprint-override"><code>from dash import Dash
import dash_bootstrap_components as dbc
app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
app.layout = dbc.Container([dbc.Button("Test"),
dbc.RadioItems(["Test1", "Test2"], "Test1")])
if __name__ == '__main__':
app.run_server(debug=True)
</code></pre>
<p>Which gives
<a href="https://i.sstatic.net/jxnoT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jxnoT.png" alt="enter image description here" /></a></p>
<p>But I'd like to change the primary blue color to something else.
I'm sure there must be a way to change this <strong>for all possible components at once</strong>, but I can't find how;
so far I've only found ways to override the CSS of each component individually.
<a href="https://stackoverflow.com/questions/70438022/dash-bootstrap-components-replace-primary-color-across-components">This question</a> comes somewhat close but doesn't really solve the issue and has the additional constraint that no additional CSS file can be used.</p>
| <python><css><plotly-dash> | 2023-08-22 08:44:14 | 1 | 684 | debsim |
76,951,340 | 14,282,714 | Check which pages are encrypted of pdf doc using pypdf | <p>I would like to know which pages of a PDF document are encrypted. I want this because some PDF documents are created by merging one document that was encrypted with one that was not, which means some pages were never encrypted. Here I'm trying to create reproducible code with <code>doc_not_encrypted.pdf</code> and <code>doc_is_encrypted.pdf</code>:</p>
<pre><code>import pypdf
File = open("doc_not_encrypted.pdf", "rb")
reader = pypdf.PdfReader(File)
if reader.is_encrypted:
print("This document is encrypted!")
else:
print("This document is not encrypted!")
This document is not encrypted!
import pypdf
File = open("doc_is_encrypted.pdf", "rb")
reader = pypdf.PdfReader(File)
if reader.is_encrypted:
print("This document is encrypted!")
else:
print("This document is not encrypted!")
This document is encrypted!
</code></pre>
<p>Now assume the documents are merged:</p>
<pre><code>import pypdf
File = open("doc_merged.pdf", "rb")
reader = pypdf.PdfReader(File)
if reader.is_encrypted:
print("This document is encrypted!")
else:
print("This document is not encrypted!")
This document is encrypted!
</code></pre>
<p>Of course it is encrypted, but I was wondering if anyone knows if it is possible to determine the pages that are actually encrypted using <code>pypdf</code>?</p>
<p>Here you can get the <a href="https://www.dropbox.com/sh/eb4u7dc8re27y1y/AAD8bpCW9chZytP_JBCkLLAra?dl=0" rel="nofollow noreferrer">files</a> to make the problem reproducible.</p>
| <python><pdf><encryption><pypdf> | 2023-08-22 08:15:16 | 1 | 42,724 | Quinten |
76,951,264 | 2,351,983 | How to perform a Pandas groupby() multiplying values in the grouped column by values in another column | <p>I have the following DataFrame:</p>
<pre><code> A B count_of_elmt_A
0 x 10.0
1 x k 3.0
2 y l NaN
3 z j 4.0
</code></pre>
<p>I want to group the data by <code>A</code> and annotate the count of each element in <code>A</code> multiplied by <code>count_of_elmt_A</code>. Invalid rows in <code>count_of_elmt_A</code> should default to 1. So the result should be:</p>
<pre><code>
A
x 13 # 10 + 3, because row 0 has 10 for count_of_elmt_A, row 1 has 3.
y 1 # because row 2 has a NaN count_of_elmt_A, which defaults to 1.
z 4 # because row 3 has 4 in count_of_elmt_A.
</code></pre>
<p>Here is what I tried, but it doesn't work as desired:</p>
<pre><code>import pandas as pd
data = {"A":["x","x","y","z"], "B": ["", "k", "l", "j"], "count_of_elmt_A": [10, 3, None, 4]}
df = pd.DataFrame(data)
df2 = df.groupby("A")["A"].count()
df2 = df2.mul(df["count_of_elmt_A"].fillna(1))
</code></pre>
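<p>For reference, here is a plain pandas sketch of the computation I'm after (each row contributing its own count, with NaN defaulting to 1, summed per group), though I don't know whether this is the idiomatic way:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "A": ["x", "x", "y", "z"],
    "count_of_elmt_A": [10, 3, None, 4],
})
# Default the invalid counts to 1, then sum within each group of A.
result = df["count_of_elmt_A"].fillna(1).groupby(df["A"]).sum()
```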
| <python><pandas><dataframe> | 2023-08-22 08:05:10 | 1 | 356 | luanpo1234 |
76,951,139 | 7,978,903 | Get correct NTFS filenames with unicode characters in Linux | <p>I have an NTFS partition mounted in Linux. Due to the encoding difference, all Unicode characters in filenames are broken. E.g. <code>新建文件夹</code> (New folder in Chinese) is shown as <code>\320½\250\316ļ\376\274\320/</code>.</p>
<p>I cannot change the mount options, so I want to write a Python script to recover the correct filenames.
Here is what I tried (still using the <code>新建文件夹</code> example):</p>
<pre><code>>>> import os
>>> os.listdir('.')[-1]
'\udcd0½\udca8\udcceļ\udcfe\udcbc\udcd0'
>>> os.listdir(b'.')[-1]
b'\xd0\xc2\xbd\xa8\xce\xc4\xbc\xfe\xbc\xd0'
>>> os.listdir(b'.')[-1].decode('utf-16')
'싐ꢽ쓎ﺼ킼'
</code></pre>
<p>The output is still broken text. I also tried <code>utf_16_be</code> and <code>utf_16_le</code> but they don't work. What should be the correct way to get correct NTFS filenames with unicode characters in Linux? Any comments or feedback are appreciated. Thanks!</p>
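<p>One thing I noticed: the raw bytes returned by <code>os.listdir(b'.')</code> look like GBK (the legacy Chinese Windows codepage) rather than UTF-16, and decoding them as GBK does recover the name. I'm not sure this is the right general fix, since other filenames might use a different codepage:</p>

```python
# Raw filename bytes as returned by os.listdir(b'.') on the NTFS mount.
raw = b'\xd0\xc2\xbd\xa8\xce\xc4\xbc\xfe\xbc\xd0'
# GBK is the legacy Chinese-Windows codepage; UTF-16 turns these bytes into
# mojibake, but GBK yields the expected name.
name = raw.decode('gbk')
```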
| <python><linux><string><encoding><ntfs> | 2023-08-22 07:49:03 | 0 | 1,694 | Qin Heyang |
76,951,100 | 2,386,113 | Exception has occurred: AttributeError: __exit__ | <p>I am trying to use <strong><a href="https://github.com/thijor/DeepRF/blob/main/data_synthetic.py" rel="nofollow noreferrer">DeepRF/data_synthetic.py</a></strong> script. Below is my code to use the <code>Noise</code> class defined in the <a href="https://github.com/thijor/DeepRF/blob/main/data_synthetic.py" rel="nofollow noreferrer">data_synthetic.py</a> script:</p>
<p><strong>MWE:</strong></p>
<pre><code>import numpy as np
import sys
import os
# DeepRF module
deeprf_module_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../DeepRF-main'))
sys.path.append(deeprf_module_path)
from data_synthetic import *
import data_synthetic as deeprf_data_synthetic
# testing DeepRF
TR = 1.2 # seconds
low_frequency_noise = deeprf_data_synthetic.LowFrequency(0.9, TR )
physiological_noise = deeprf_data_synthetic.Physiological(0.9, TR)
system_noise = deeprf_data_synthetic.System(0.9, np.random.RandomState)
task_noise = deeprf_data_synthetic.Task(0.9, np.random.RandomState)
temporal_noise = deeprf_data_synthetic.Temporal(0.9, np.random.RandomState)
test_t = np.arange(0, 300, 1)
test_signal = np.ones_like(test_t)
# Create a Noise instance with specified noise components and amplitudes
random_generator_t = np.random.RandomState(64556) # used to generate random parameters
random_generator_x = np.random.RandomState(23355) # used to generate random signal/noise
random_generator_y = np.random.RandomState(1258566) # used to generate predictions
noise = deeprf_data_synthetic.Noise(random_generator_y.rand(5), low_frequency_noise, physiological_noise, system_noise, task_noise, temporal_noise)
# Create a synthetic fMRI time series (replace this with your actual data)
fmri_data = np.random.normal(0, 1, size=300)
# Apply the noise to the fMRI data using the __call__ method
noisy_fmri_data = noise(fmri_data)
</code></pre>
<p><strong>PROBLEM:</strong> The execution of the last line of my code <code>noisy_fmri_data = noise(fmri_data)</code> leads to the execution of line-144 of the <a href="https://github.com/thijor/DeepRF/blob/main/data_synthetic.py" rel="nofollow noreferrer">data_synthetic.py</a> script, which is throwing an exception (screenshot below):</p>
<blockquote>
<p>Exception has occurred: AttributeError: <code>__exit__</code></p>
</blockquote>
<p><a href="https://i.sstatic.net/ntTbg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ntTbg.png" alt="enter image description here" /></a></p>
| <python> | 2023-08-22 07:44:10 | 1 | 5,777 | skm |
76,950,969 | 11,348,853 | Error response from daemon: failed to create task for container | <p>I have a Django app that also uses Redis, Celery and Flower.</p>
<p>Everything is working on my local machine.</p>
<p>But when I try to dockerize it, the Redis and Django containers start, while Celery and Flower fail to start.</p>
<p>It is giving me this error while starting celery:</p>
<p><code>Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "celery -A core worker -P eventlet --autoscale=10,1 -l INFO": executable file not found in $PATH: unknown</code></p>
<p><code>Dockerfile</code></p>
<pre><code># For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:bullseye
EXPOSE 8000
RUN apt update && apt upgrade -y
# && apk install cron iputils-ping sudo nano -y
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
RUN rm requirements.txt
WORKDIR /app
COPY ./src /app
RUN mkdir "log"
# Set the environment variables
ENV PYTHONUNBUFFERED=1
ENV DJANGO_SETTINGS_MODULE=core.settings
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
# RUN echo 'appuser ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/appuser
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
ENTRYPOINT ["sh", "entrypoint.sh"]
CMD ['gunicorn', 'core.wsgi:application', '--bind', '0.0.0.0:8000']
</code></pre>
<p><code>entrypoint.sh</code></p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/sh
echo "Apply database migrations"
python manage.py migrate
# Start server
echo "Starting server"
# run the container CMD
exec "$@"
</code></pre>
<p><code>docker-compose.yml</code></p>
<pre class="lang-yaml prettyprint-override"><code>version: "3.8"
services:
mailgrass_backend:
container_name: mailgrass_backend
restart: unless-stopped
ports:
- "8000:8000"
volumes:
- npm:/app
- ./.env:/app/.env:ro
networks:
- npm
env_file: .env
depends_on:
- redis
build:
context: .
dockerfile: ./Dockerfile
redis:
container_name: redis
image: redis:7.0-alpine
restart: unless-stopped
env_file: .env
ports:
- "6379:6379"
command:
- 'redis-server'
networks:
- npm
celery:
container_name: celery
restart: unless-stopped
env_file: .env
volumes:
- npm:/app
- ./.env:/app/.env:ro
build:
context: .
dockerfile: ./Dockerfile
networks:
- npm
depends_on:
- redis
entrypoint:
- "celery -A core worker -P eventlet --autoscale=10,1 -l INFO"
flower:
container_name: flower
restart: unless-stopped
ports:
- "5555:5555"
env_file: .env
volumes:
- npm:/app
- ./.env:/app/.env:ro
build:
context: .
dockerfile: ./Dockerfile
networks:
- npm
depends_on:
- redis
- celery
entrypoint:
- "celery -b redis://redis:6379 flower"
volumes:
npm:
postgres:
networks:
npm:
</code></pre>
<p><code>requirements.txt</code></p>
<pre><code>amqp==5.1.1
asgiref==3.7.2
attrs==23.1.0
billiard==4.1.0
black==23.7.0
celery==5.3.1
certifi==2023.7.22
cffi==1.15.1
charset-normalizer==3.2.0
click==8.1.6
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.3.0
cron-descriptor==1.4.0
cryptography==41.0.3
defusedxml==0.7.1
dj-crontab==0.8.0
dj-rest-auth==4.0.1
Django==4.2.4
django-allauth==0.54.0
django-annoying==0.10.6
django-celery-beat==2.5.0
django-cleanup==8.0.0
django-cors-headers==4.2.0
django-debug-toolbar==4.2.0
django-filter==23.2
django-phonenumber-field==7.1.0
django-timezone-field==5.1
djangorestframework==3.14.0
djangorestframework-simplejwt==5.2.2
dnspython==2.4.1
drf-spectacular==0.26.4
email-validator==2.0.0.post2
eventlet==0.33.3
flower==2.0.1
greenlet==2.0.2
humanize==4.7.0
idna==3.4
inflection==0.5.1
isort==5.12.0
jsonschema==4.18.6
jsonschema-specifications==2023.7.1
kombu==5.3.1
mailchecker==5.0.9
Markdown==3.4.4
mypy-extensions==1.0.0
oauthlib==3.2.2
packaging==23.1
pathspec==0.11.2
phonenumberslite==8.13.18
Pillow==10.0.0
platformdirs==3.10.0
prometheus-client==0.17.1
prompt-toolkit==3.0.39
pycparser==2.21
PyJWT==2.8.0
pyotp==2.9.0
python-crontab==3.0.0
python-dateutil==2.8.2
python-dotenv==1.0.0
python3-openid==3.2.0
pytz==2023.3
PyYAML==6.0.1
redis==4.6.0
referencing==0.30.2
requests==2.31.0
requests-oauthlib==1.3.1
rpds-py==0.9.2
six==1.16.0
sqlparse==0.4.4
tornado==6.3.2
tzdata==2023.3
uritemplate==4.1.1
urllib3==2.0.4
vine==5.0.0
wcwidth==0.2.6
whitenoise==6.5.0
gunicorn
</code></pre>
<p>What am I doing wrong here?</p>
| <python><django><docker><docker-compose><django-celery> | 2023-08-22 07:24:14 | 2 | 2,387 | Nahidujjaman Hridoy |
76,950,928 | 1,229,208 | Validating JUST the path of a URL in Python | <p>What's the best way to validate that a string is a valid URL <strong>path</strong> in Python?</p>
<p>I want to avoid maintaining a complex regex if possible.</p>
<p>I have this regex so far but I am looking for alternatives because it does not cover all cases for <strong>just the path</strong> segment:</p>
<pre class="lang-py prettyprint-override"><code>from re import compile

def check_valid_path(v):
pattern = r"^\/[-a-z\d_\/]*$"
if not compile(pattern).match(v):
raise ValueError(f"Invalid path: {v}. Must match {pattern}")
</code></pre>
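<p>For comparison, here is a regex-free sketch I'm experimenting with (the helper name <code>is_valid_path</code> is my own), based on <code>urllib.parse</code>: a standalone path should parse without producing a scheme, netloc, query or fragment, and percent-quoting it with the RFC 3986 path characters marked safe should leave it unchanged. Note it does not verify that existing percent-escapes are well-formed:</p>

```python
from urllib.parse import quote, urlsplit

# RFC 3986 pchar set: unreserved + sub-delims + ":" + "@", plus "/" between
# segments and "%" so existing percent-escapes pass through unchanged.
_SAFE = "/%:@!$&'()*+,;=~.-_"

def is_valid_path(v: str) -> bool:
    parts = urlsplit(v)
    return (
        v.startswith("/")
        and not parts.scheme
        and not parts.netloc
        and not parts.query
        and not parts.fragment
        and quote(parts.path, safe=_SAFE) == parts.path
    )
```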
| <python> | 2023-08-22 07:18:37 | 2 | 596 | martin8768 |
76,950,730 | 5,733,709 | CMAKE LLAMA CPP Binding PIP Installation giving error | <p>I am trying to install llama-cpp-python on a Mac using Metal, following the guide linked below. However, it fails with the error shown in the screenshot. Could someone help?</p>
<p><a href="https://python.langchain.com/docs/integrations/llms/llamacpp" rel="nofollow noreferrer">https://python.langchain.com/docs/integrations/llms/llamacpp</a></p>
<p><a href="https://i.sstatic.net/OUpJ4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OUpJ4.png" alt="Pycharm Python Console Error" /></a></p>
| <python><cmake><pip><langchain><llamacpp> | 2023-08-22 06:48:23 | 0 | 786 | Jason |
76,950,498 | 10,089,181 | How to replace missing values in numeric and categorical columns using the plydata package in Python? | <p>I have a dataframe with missing values in various columns. I want to replace the categorical missing values in column beta with "missing", and the missing values in the numeric column z with the mean of that column. I want to use the plydata package in Python.</p>
<pre><code>df1 = pd.DataFrame({
'alpha': list('a abbb'),
'beta': list('bab uq'),
'theta': list('cdec e'),
'x': [1, 2, np.nan, 4, 5, 6],
'y': [6, 5, 4, 3, 2, np.nan],
'z': [7, np.nan, 11, 8, 10, 12]
})
df1 >> mutate(newcol = if_else('x==np.nan', '"missing"', 'x'))
</code></pre>
<p>The above code is not working, so it would be great if the code could be corrected in line with plydata.</p>
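<p>For reference, I can get the result I want in plain pandas as below; what I'm missing is the plydata equivalent:</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({
    'beta': list('bab uq'),
    'z': [7, np.nan, 11, 8, 10, 12],
})
# Categorical column: replace blanks with the literal string "missing".
df1['beta'] = df1['beta'].replace(' ', 'missing')
# Numeric column: replace NaN with the column mean.
df1['z'] = df1['z'].fillna(df1['z'].mean())
```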
| <python> | 2023-08-22 06:10:32 | 1 | 404 | Ransingh Satyajit Ray |
76,950,336 | 3,509,106 | How do I check whether a date input includes days in Python? | <p>I'm working on a block of Python code that is meant to test inputs to determine whether they're numeric, timestamps, free text, etc. To detect dates, it uses the dateutil parser, then checks if the parse succeeded or an exception was thrown.</p>
<p>However, the dateutil parser is too forgiving and will turn all manner of values into date objects, such as time ranges, eg "12:00-16:00", being converted into timestamps on the current day, eg "2023-08-22T12:00-16:00" (which isn't even a valid timezone offset).</p>
<p>We'd like to only treat inputs as dates if they actually have a day-month-year component, not if they're just hours and minutes - but we still want to accept various date formats, yyyy-MM-ddThh:mm:ss or dd/MM/yyyy or whatever the input uses. Is there another library better suited to this, or some way to make dateutil stricter?</p>
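<p>One workaround I'm considering (I'm not sure how robust it is) exploits dateutil's <code>default</code> argument: any field the input doesn't supply is filled in from the default, so parsing twice with two different default dates reveals whether a real year/month/day was present. It also rejects partial dates such as "May 2023", which matches what we want:</p>

```python
from datetime import datetime

from dateutil import parser

def has_date_component(text: str) -> bool:
    # Fields missing from the input are copied from `default`, so if the
    # date part changes between two parses, the input had no real date.
    d1 = parser.parse(text, default=datetime(2001, 1, 1))
    d2 = parser.parse(text, default=datetime(2002, 2, 2))
    return (d1.year, d1.month, d1.day) == (d2.year, d2.month, d2.day)
```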
| <python><parsing><python-dateutil> | 2023-08-22 05:33:52 | 2 | 1,081 | ThrawnCA |
76,950,273 | 457,850 | Converting Python notebook base64 image rendering to JavaScript? | <p>I have the below Python code which renders an image from a SageMaker API endpoint in a Jupyter Notebook. This works fine.</p>
<p><a href="https://i.sstatic.net/Z9Wqf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z9Wqf.png" alt="python code" /></a></p>
<p>I am converting this to a web app, and I'm struggling to decode the image correctly in javascript.</p>
<p>My web app gets the image data response from Ruby like this:</p>
<pre><code> response_json = JSON.parse(response.body.string)
response_json["images"].first
</code></pre>
<p>And this shows up in javascript like so:</p>
<p><a href="https://i.sstatic.net/tE2UL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tE2UL.png" alt="javascript console debug" /></a></p>
<p>I try to render the image like this:</p>
<pre><code><img src={`data:image/jpeg;base64,${apiResponse['data']}`} />
</code></pre>
<p>Which produces a broken image in the browser:</p>
<p><a href="https://i.sstatic.net/ZnRWl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZnRWl.png" alt="broken image preview" /></a></p>
<p>I have tried these options so far:</p>
<pre><code><img src={`data:image/jpeg;base64,${apiResponse['data']}`} />
<img src={`data:image/png;base64,${apiResponse['data']}`} />
<img src={`data:image/png;base64,${btoa(apiResponse['data'])}`} />
</code></pre>
<p>Can someone please help me figure out what is going on in that Python <code>base64.decodebytes</code> and <code>np.reshape</code>/<code>np.frombuffer</code> code? And how to implement the same thing in javascript so I can render the image output?</p>
<p>Thank you</p>
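<p>Here is my rough understanding of what those Python lines do, with made-up data and shapes: <code>base64.decodebytes</code> recovers <em>raw pixel bytes</em>, and <code>np.frombuffer</code> plus <code>reshape</code> turn them back into an array. If the endpoint really returns raw pixels rather than JPEG/PNG container bytes, that would explain the broken <code>&lt;img&gt;</code>, since a data URI needs an encoded image file:</p>

```python
import base64

import numpy as np

# Pretend payload: base64 text whose decoded form is RAW pixel bytes
# (here a fake 2x2 RGB image), not an encoded JPEG/PNG file.
pixels = np.arange(12, dtype=np.uint8)
payload = base64.encodebytes(pixels.tobytes()).decode()

# What the notebook (apparently) does with the API response:
decoded = base64.decodebytes(payload.encode())
image = np.frombuffer(decoded, dtype=np.uint8).reshape(2, 2, 3)
```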
| <javascript><python><numpy><amazon-sagemaker> | 2023-08-22 05:16:01 | 0 | 4,945 | dtbaker |
76,950,006 | 1,492,229 | DataFrame order gets messed up when copying data to another DataFrame in Python | <p>I have a dataframe df_reps that is read from a csv:</p>
<pre><code>df_reps = pd.read_csv("c:\text.txt")
</code></pre>
<p>Here is what df_reps looks like:</p>
<pre><code>df_reps
Out[70]:
RepID Txt
0 4 BLUE MAN SNACK BLUE YELLOW LONDON MAN LAPTOP A...
1 13 ALPHA MAN GREEN GREEN GREEN MAN STREET KITKAT ...
2 15 PLASTIC LONDON PARIS BLUE MAN TSHIRT FALCON C...
3 16 STREET SNACK WOMAN LAPTOP LAPTOP GREEN BLUE MA...
4 18 JACK TSHIRT NANCY KITKAT MAN KITKAT YELLOW CAT...
... ...
122296 331787 MOVIE DVD WOMAN BLUE BLUE MCDONALDS HONDA KITK...
122297 331788 LAMP PIZZA APPLE MAZDA HONDA LIVE MAN BLUE FAL...
122298 331789 TREE MOVIE ALPHA DELTA PALESTINE BALL BLUE TAB...
122299 331790 TREE MAZDA NANCY RONALDO MAN KITKAT YELLOW FLO...
122300 331791 JACK JACK NANCY NANCY NANCY BALL YELLOW NANCY ...
[122301 rows x 2 columns]
</code></pre>
<p>The labels df_results are read from another csv file:</p>
<pre><code>df_results = pd.read_csv("c:\labels.txt")
</code></pre>
<p>Here is what it looks like:</p>
<pre><code>df_results
Out[71]:
RepID ICU
0 4 0
1 13 1
2 15 1
3 16 1
4 18 0
... ...
122296 331787 0
122297 331788 0
122298 331789 0
122299 331790 1
122300 331791 1
[122301 rows x 2 columns]
</code></pre>
<p>Because the dataframe is imbalanced, I used RandomUnderSampler, and now things start to get messed up a little:</p>
<pre><code>rus = RandomUnderSampler(random_state=42)
df_reps, df_results = rus.fit_resample(df_reps, df_results["ICU"])
</code></pre>
<p>Due to the undersampling, df_reps and df_results are no longer sorted.</p>
<p>here they are</p>
<pre><code> RepID Txt
33266 90164 CAT GREEN HONDA HONDA KITKAT KITKAT HONDA NANC...
97472 264062 JACK FALCON KITKAT BALL BALL BLUE JACK FALCON ...
57080 154571 LAMP LAPTOP HONDA LONDON KITKAT HONDA KITKAT C...
64382 174131 KITKAT GREEN MAN MAN KITKAT KITKAT GREEN YELLO...
50312 135638 BALL HONDA FALCON DELTA DELTA FALCON KITKAT CA...
... ...
122291 331782 RONALDO JACK DELTA VOLVO KITKAT LAMP LAMP HOND...
122292 331783 BALL UNDER KITKAT TREE GREEN ECHO JACK JACK IN...
122293 331784 VOLVO VOLVO CANADA LONDON KITKAT KITKAT RONALD...
122299 331790 TREE MAZDA NANCY RONALDO MAN KITKAT YELLOW FLO...
122300 331791 JACK JACK NANCY NANCY NANCY BALL YELLOW NANCY ...
[87148 rows x 2 columns]
</code></pre>
<p>You can see that RepID is in random order now.</p>
<p>and same to the df_results</p>
<pre><code>df_results
Out[75]:
33266 0
97472 0
57080 0
64382 0
50312 0
..
122291 1
122292 1
122293 1
122299 1
122300 1
Name: ICD10, Length: 87148, dtype: int64
</code></pre>
<p>The indexes of the two dataframes seem to match.</p>
<p>Now this part is going to make a big mess of the order.</p>
<p>I have to apply bag-of-words (BOW) to the dataframe:</p>
<pre><code>def BOW(df):
CountVec = CountVectorizer() # to use only bigrams ngram_range=(2,2)
Count_data = CountVec.fit_transform(df)
Count_data = Count_data.astype(np.uint8)
cv_dataframe=pd.DataFrame(Count_data.toarray(),columns=CountVec.get_feature_names_out())
return cv_dataframe.astype(np.uint8)
dfMethod = BOW(df_reps["Txt"])
</code></pre>
<p>here is dfMethod</p>
<pre><code>dfMethod
Out[76]:
BLUE MAN HONDA KITKAT
0 9 3 0 4
1 48 3 11 8
2 15 6 16 8
3 30 6 11 12
4 42 6 10 8
... ... ... ...
87143 12 3 3 0
87144 9 3 4 8
87145 12 3 4 4
87146 24 18 11 4
87147 18 15 6 16
[87148 rows x 100 columns]
</code></pre>
<p>The index in dfMethod matches neither the index in df_reps nor the one in df_results.</p>
<p><strong>Now what I want is to have the RepID and ICU columns in dfMethod.</strong></p>
<p>Here is what I did:</p>
<pre><code>dfMethod["RepID"] = df_reps["RepID"].astype(int)
dfMethod["ICU"] = df_results.astype(int)
</code></pre>
<p>But... the result did not come out in the same order:</p>
<pre><code>dfMethod
Out[85]:
BLUE MAN HONDA KITKAT RepID ICD10
0 9 3 0 4 4.0 0.0
1 48 3 11 8 13.0 1.0
2 15 6 16 8 15.0 1.0
3 30 6 11 12 16.0 1.0
4 42 6 10 8 18.0 0.0
... ... ... ... ... ...
87143 12 3 3 0 236157.0 0.0
87144 9 3 4 8 236159.0 0.0
87145 12 3 4 4 236160.0 0.0
87146 24 18 11 4 236161.0 0.0
87147 18 15 6 16 236166.0 0.0
[87148 rows x 102 columns]
</code></pre>
<p>You can see that RepID is no longer sorted ascending, while it was before.</p>
<p>Now I am not sure whether each RepID is attached to its correct row of BOW features.</p>
<p>Can anyone help me link RepID to the right BOW features?</p>
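<p>Here is a toy sketch of what I think is happening with index alignment, and the positional assignment that seems to fix it (I'm not sure it is safe in general):</p>

```python
import pandas as pd

# After undersampling, df_reps keeps its original shuffled index, while the
# BOW matrix comes back with a fresh 0..n-1 RangeIndex.
df_reps = pd.DataFrame({'RepID': [90164, 264062, 154571]},
                       index=[33266, 97472, 57080])
bow = pd.DataFrame({'BLUE': [9, 48, 15]})  # index 0, 1, 2

# Assigning a Series aligns on index labels -> no labels match -> all NaN.
bow['bad'] = df_reps['RepID']
# Assigning the underlying array copies row-by-row (positionally) instead.
bow['RepID'] = df_reps['RepID'].to_numpy()
```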
| <python><pandas><dataframe> | 2023-08-22 03:57:19 | 1 | 8,150 | asmgx |
76,949,961 | 10,853,071 | Timeseries extraction based on specific data | <p>I have a huge dataframe containing profitability information for millions of clients. For every client, there are several rows containing the profitability of each month over the last few years (example df below).</p>
<pre><code>import pandas as pd
from datetime import date
data = pd.DataFrame({
'client' : ['john', 'john','john','john','john','john','john','john','john','john','rambo','rambo','rambo','rambo','rambo','rambo','rambo','rambo','rambo','rambo'],
'profit' : [100, 200,200,150,300,0,50,1513,51,5465,54645,54654,4875,7875,5132,202,512,10,242,2131],
'date' : [date(2022,1,1),date(2022,2,1),date(2022,3,1),date(2022,4,1),date(2022,5,1),date(2022,6,1),date(2022,7,1),date(2022,8,1),date(2022,9,1),date(2022,10,1),date(2023,1,1),date(2023,2,1),date(2023,3,1),date(2023,4,1),date(2023,5,1),date(2023,6,1),date(2023,7,1),date(2023,8,1),date(2023,9,1),date(2023,10,1)],
'new_produt_start_date' : [date(2022,6,1),date(2022,6,1),date(2022,6,1),date(2022,6,1),date(2022,6,1),date(2022,6,1),date(2022,6,1),date(2022,6,1),date(2022,6,1),date(2022,6,1),date(2023,4,1),date(2023,4,1),date(2023,4,1),date(2023,4,1),date(2023,4,1),date(2023,4,1),date(2023,4,1),date(2023,4,1),date(2023,4,1),date(2023,4,1)]})
</code></pre>
<p><a href="https://i.sstatic.net/50AOB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/50AOB.png" alt="Example output" /></a></p>
<p>For every client there is a field that records when that client started to use a target product.</p>
<p>What I want is to analyse whether starting to use the new product affected the client's profitability (looking at the client's profitability from some months before the new product up to some months after it).</p>
<p>I am really struggling with how to reshape this information so I could see something like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">client</th>
<th style="text-align: center;">start_date</th>
<th style="text-align: right;">proft_2months_before</th>
<th style="text-align: right;">proft_1months_before</th>
<th style="text-align: right;">proft_1months_after</th>
<th style="text-align: right;">proft_2months_after</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">John</td>
<td style="text-align: center;">2023/01</td>
<td style="text-align: right;">100</td>
<td style="text-align: right;">120</td>
<td style="text-align: right;">500</td>
<td style="text-align: right;">400</td>
</tr>
</tbody>
</table>
</div> | <python><pandas> | 2023-08-22 03:44:43 | 1 | 457 | FábioRB |
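<p>Here is a rough sketch of the reshaping step I have in mind (toy data, reusing the column names from my example, including the <code>new_produt_start_date</code> spelling), though I don't know whether pivoting on a month offset is the right approach:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'client': ['john'] * 4,
    'profit': [100, 120, 500, 400],
    'date': pd.to_datetime(['2022-04-01', '2022-05-01',
                            '2022-06-01', '2022-07-01']),
    'new_produt_start_date': pd.to_datetime(['2022-05-01'] * 4),
})
# Whole months between each observation and the product start
# (negative = before, 0 = start month, positive = after).
df['offset'] = ((df['date'].dt.year - df['new_produt_start_date'].dt.year) * 12
                + (df['date'].dt.month - df['new_produt_start_date'].dt.month))
# One row per client, one column per month offset.
wide = df.pivot_table(index='client', columns='offset', values='profit')
```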
76,949,955 | 187,141 | How to remove all tags from Liquid template? | <p>I have this <a href="https://shopify.github.io/liquid/" rel="nofollow noreferrer">Liquid</a> template:</p>
<pre><code>Hello, {% foo %} world!
</code></pre>
<p>I would like to remove the <code>foo</code> tag from here and all other tags too, to obtain a clean document, without any <code>Liquid</code> markup:</p>
<pre><code>Hello, world!
</code></pre>
<p>Is it possible to do this by using existing <code>Liquid</code> tools?</p>
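<p>For reference, a crude Python regex sketch handles this example, but it strips only the markup itself: it won't drop the inner text of control blocks like <code>{% if %}…{% endif %}</code>, nor cope with a literal <code>%}</code> inside a quoted string, which is why I'm hoping for an existing Liquid tool:</p>

```python
import re

def strip_liquid(text: str) -> str:
    # Remove {% tag %} and {{ output }} markup, then collapse the doubled
    # spaces the removal leaves behind.
    text = re.sub(r'\{%.*?%\}|\{\{.*?\}\}', '', text)
    return re.sub(r'  +', ' ', text)
```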
| <python><ruby-on-rails><liquid> | 2023-08-22 03:42:35 | 1 | 105,792 | yegor256 |
76,949,876 | 2,251,058 | Json String with non ascii looks okay but invalid in jsonlint | <p><em><strong>Please read the update: SO modifies the string and removes a certain character from the JSON, which causes the problem. I have shared a pastebin link below with the original string.</strong></em> <a href="https://pastebin.com/tkJLFNGS" rel="nofollow noreferrer">https://pastebin.com/tkJLFNGS</a></p>
<p>I am getting a JSON response from a page; it contains non-ASCII characters and some HTML content inside.</p>
<p>The string, when inspected manually, looks okay.</p>
<p>It shows as invalid in jsonlint and some other sites, while a few sites say it is valid. Possibly this is due to the non-ASCII characters.</p>
<p>I couldn't parse it using json.loads.</p>
<p>I tried suggestions for handling non-ASCII characters, but they haven't worked.</p>
<p>Is the response incorrect, or am I failing to parse it correctly?</p>
<pre><code>import json
json_string = """{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [{
"name": "Hur mycket kostar ett flyg från LHR till JFK?",
"@type": "Question",
"acceptedAnswer": {
"@type": "Answer",
"text": "<p>Lägsta angivna pris för ett flyg tur och retur mellan Heathrow och John F. Kennedy Intl. är 5 525 kr. Du måste dock agera kvickt. Denna siffra är baserad på biljettpriserna under den senaste veckan.</p>"
}
}, {
"name": "Måste jag betala en avbokningsavgift om jag avbokar mitt flyg från Heathrow till John F. Kennedy Intl.?",
"@type": "Question",
"acceptedAnswer": {
"@type": "Answer",
"text": "<p>Många flyg kan avbokas online, men huruvida du måste betala en avgift kan variera mellan flygbolagen. Om du behöver avboka ditt flyg kan du följa instruktionerna i vår <a href=\"https://www.expedia.se/service/?langid=1053\">kundtjänst</a>.</p>"
}
}, {
"name": "Hur gör man för att hitta billiga flyg med en flexibel ombokningspolicy från Heathrow till John F. Kennedy Intl.?",
"@type": "Question",
"acceptedAnswer": {
"@type": "Answer",
"text": "<p>Många flygbolag ger dig möjligheten att boka om ditt flyg utan ombokningsavgift. Du betalar endast mellanskillnaden mellan priset på ditt ursprungliga flyg och ditt nya flyg. När du söker på flyg från Heathrow till John F. Kennedy Intl. kan du välja filtret \"Inga ändringsavgifter\" för att se flyg utan avgifter.</p>"
}
}, {
"name": "Hur många timmar tar flyget från London (LHR-Heathrow) till New York, New York (JFK-John F. Kennedy Intl.)?",
"@type": "Question",
"acceptedAnswer": {
"@type": "Answer",
"text": "<p>Det finns gott om bra sätt att få den 7 timmar och 55 minuter långa resan mellan LHR och JFK att gå fortare. Du kan till exempel passa på att vara produktiv och sortera dina foton eller din e-post. En kraftlur får också resan att kännas kortare.</p>"
}
}, {
"name": "Hur långt är det från Heathrow till John F. Kennedy Intl.?",
"@type": "Question",
"acceptedAnswer": {
"@type": "Answer",
"text": "<p>Avståndet är cirka 5 500 kilometer. Resekudde? Japp! Tandborste? Jajamensan! Du bör packa några förnödenheter till denna långa resa från Hounslow till Jamaica.</p>"
}
}, {
"name": "Vilka flygbolag har direktflyg från London (LHR-Heathrow) till New York, New York (JFK-John F. Kennedy Intl.)?",
"@type": "Question",
"acceptedAnswer": {
"@type": "Answer",
"text": "<p>Med så många bra flygbolag att välja bland lär du hitta det perfekta flyget från Heathrow (LHR) till John F. Kennedy Intl. (JFK) på bara ett par klick. Nedan följer en lista över populära flygbolag som trafikerar denna sträcka:</p><p><ul><li>British Airways – (BA) med 212 flyg per månad.</li><li>Virgin Atlantic – (VS) med 152 flyg per månad.</li><li>American Airlines – (AA) med 121 flyg per månad.</li><li>Delta Air Lines – (DL) med 60 flyg per månad.</li></ul></p>"
}
}, {
"name": "Är flygbiljetter från Heathrow till John F. Kennedy Intl. billigare när man köper dem i sista minuten?",
"@type": "Question",
"acceptedAnswer": {
"@type": "Answer",
"text": "<p>Ibland kan man ha tur när man bokar en flygbiljett från Heathrow (LHR) till John F. Kennedy Intl. (JFK) i sista minuten – men ibland inte. Ibland går det att hitta toppenerbjudanden, men det finns också en risk att du helt missar chansen att köpa en biljett. Om du föredrar att göra dina planer i god tid, jämför Expedia biljetterbjudanden upp till ett år i förväg. Men kom ihåg att inte alla flygbolag publicerar sina flygpriser så långt i förväg. Vi rekommenderar att du kontrollerar priserna ofta eftersom de hela tiden uppdateras.</p>"
}
}, {
"name": "Går det att flyga från London (LHR-Heathrow) till New York, New York (JFK-John F. Kennedy Intl.) just nu?",
"@type": "Question",
"acceptedAnswer": {
"@type": "Answer",
"text": "<p>Vill du läsa mer om potentiella reseriktlinjer eller karantänkrav på New York, New York (JFK-John F. Kennedy Intl.) kan du gå till vår sida med <a href=\"https://www.expedia.se/lp/b/travel-advisor\">reseråd under covid-19</a>. Gå igenom all information innan du slutför bokningen, så att du inte råkar fastna någonstans när du reser.</p>"
}
}]
}"""
parsed_json = json.loads(json_string)
</code></pre>
<p>It throws this error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 16 column 197 (char 867)
</code></pre>
<p>As suggested by @Michael Butscher, when I try to use a raw string with the unformatted text, it still throws an error.</p>
<pre><code>json_string = r'''{ "@context": "https://schema.org", "@type": "FAQPage", "mainEntity": [{"name": "Hur mycket kostar ett flyg från LHR till JFK?","@type": "Question","acceptedAnswer": { "@type": "Answer", "text": "<p>Lägsta angivna pris för ett flyg tur och retur mellan Heathrow och John F. Kennedy Intl. är 5 525 kr. Du måste dock agera kvickt. Denna siffra är baserad på biljettpriserna under den senaste veckan.</p>"}}, {"name": "Måste jag betala en avbokningsavgift om jag avbokar mitt flyg från Heathrow till John F. Kennedy Intl.?","@type": "Question","acceptedAnswer": { "@type": "Answer", "text": "<p>Många flyg kan avbokas online, men huruvida du måste betala en avgift kan variera mellan flygbolagen. Om du behöver avboka ditt flyg kan du följa instruktionerna i vår <a href=\"https://www.expedia.se/service/?langid=1053\">kundtjänst</a>.</p>"}}, {"name": "Hur gör man för att hitta billiga flyg med en flexibel ombokningspolicy från Heathrow till John F. Kennedy Intl.?","@type": "Question","acceptedAnswer": { "@type": "Answer", "text": "<p>Många flygbolag ger dig möjligheten att boka om ditt flyg utan ombokningsavgift. Du betalar endast mellanskillnaden mellan priset på ditt ursprungliga flyg och ditt nya flyg. När du söker på flyg från Heathrow till John F. Kennedy Intl. kan du välja filtret \"Inga ändringsavgifter\" för att se flyg utan avgifter.</p>"}}, {"name": "Hur många timmar tar flyget från London (LHR-Heathrow) till New York, New York (JFK-John F. Kennedy Intl.)?","@type": "Question","acceptedAnswer": { "@type": "Answer", "text": "<p>Det finns gott om bra sätt att få den 7 timmar och 55 minuter långa resan mellan LHR och JFK att gå fortare. Du kan till exempel passa på att vara produktiv och sortera dina foton eller din e-post. En kraftlur får också resan att kännas kortare.</p>"}}, {"name": "Hur långt är det från Heathrow till John F. 
Kennedy Intl.?","@type": "Question","acceptedAnswer": { "@type": "Answer", "text": "<p>Avståndet är cirka 5 500 kilometer. Resekudde? Japp! Tandborste? Jajamensan! Du bör packa några förnödenheter till denna långa resa från Hounslow till Jamaica.</p>"}}, {"name": "Vilka flygbolag har direktflyg från London (LHR-Heathrow) till New York, New York (JFK-John F. Kennedy Intl.)?","@type": "Question","acceptedAnswer": { "@type": "Answer", "text": "<p>Med så många bra flygbolag att välja bland lär du hitta det perfekta flyget från Heathrow (LHR) till John F. Kennedy Intl. (JFK) på bara ett par klick. Nedan följer en lista över populära flygbolag som trafikerar denna sträcka:</p><p><ul><li>British Airways – (BA) med 212 flyg per månad.</li><li>Virgin Atlantic – (VS) med 152 flyg per månad.</li><li>American Airlines – (AA) med 121 flyg per månad.</li><li>Delta Air Lines – (DL) med 60 flyg per månad.</li></ul></p>"}}, {"name": "Är flygbiljetter från Heathrow till John F. Kennedy Intl. billigare när man köper dem i sista minuten?","@type": "Question","acceptedAnswer": { "@type": "Answer", "text": "<p>Ibland kan man ha tur när man bokar en flygbiljett från Heathrow (LHR) till John F. Kennedy Intl. (JFK) i sista minuten – men ibland inte. Ibland går det att hitta toppenerbjudanden, men det finns också en risk att du helt missar chansen att köpa en biljett. Om du föredrar att göra dina planer i god tid, jämför Expedia biljetterbjudanden upp till ett år i förväg. Men kom ihåg att inte alla flygbolag publicerar sina flygpriser så långt i förväg. Vi rekommenderar att du kontrollerar priserna ofta eftersom de hela tiden uppdateras.</p>"}}, {"name": "Går det att flyga från London (LHR-Heathrow) till New York, New York (JFK-John F. Kennedy Intl.) just nu?","@type": "Question","acceptedAnswer": { "@type": "Answer", "text": "<p>Vill du läsa mer om potentiella reseriktlinjer eller karantänkrav på New York, New York (JFK-John F. Kennedy Intl.) 
kan du gå till vår sida med <a href=\"https://www.expedia.se/lp/b/travel-advisor\">reseråd under covid-19</a>. Gå igenom all information innan du slutför bokningen, så att du inte råkar fastna någonstans när du reser.</p>"}}] }'''
json_parsed = json.loads(json_string)
</code></pre>
<p>Error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(json_parsed)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(json_parsed, idx=_w(json_parsed, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(json_parsed, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 1919 (char 1918)
</code></pre>
<p>Any help is appreciated</p>
<p><strong>Update :</strong> It seems Stack Overflow is formatting the string and showing something else. Please see the pastebin link below</p>
<p>Diff checker shows a difference in some tab character. I believe a raw (unescaped) tab character inside the string is what Python's JSON parser rejects.</p>
<p>The Stack Overflow version works but the original does not.</p>
<p><strong>This is the original unformatted string shared via pastebin</strong></p>
<p><a href="https://pastebin.com/tkJLFNGS" rel="nofollow noreferrer">https://pastebin.com/tkJLFNGS</a></p>
<p><a href="https://i.sstatic.net/EXhf7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EXhf7.jpg" alt="enter image description here" /></a></p>
<p>Trying to remove non-ascii characters using this hasn't worked either
<a href="https://stackoverflow.com/a/35492167/2251058">https://stackoverflow.com/a/35492167/2251058</a></p>
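The error message "Invalid control character" suggests the pastebin version of the payload contains a literal control character (such as a raw tab) inside a JSON string, which the decoder rejects by default. A minimal sketch reproducing and working around this with the standard library's `strict=False` option; the one-line payload here is hypothetical and just stands in for the real string:

```python
import json

# A string containing a raw (unescaped) tab inside a JSON string value --
# the \t below becomes a real tab character at the Python level.
json_string = '{"text": "value with a raw\ttab inside"}'

try:
    json.loads(json_string)
except json.JSONDecodeError as e:
    print("strict parse failed:", e.msg)

# strict=False tells the decoder to allow control characters inside strings
parsed = json.loads(json_string, strict=False)
print(parsed["text"])
```

Alternatively you can sanitize the payload first (e.g. replace raw tabs with `\\t`), but `strict=False` avoids touching the data.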
| <python><json><python-3.x> | 2023-08-22 03:14:09 | 2 | 3,287 | Akshay Hazari |
76,949,809 | 13,102,905 | how to upload files from bytes into gcp with python3 | <p>I need to write code that lists files from an S3 bucket and uploads them to a GCP bucket</p>
<pre><code>def download_from_s3():
    files = s3_client.list_objects_v2(
        Bucket=bucket_name,
        Prefix=folder
    )
    output = []
    for content in files.get('Contents', []):
        object_key = content['Key']
        report_type = parse_type(object_key)
        if report_type == "":
            continue
        file = s3_client.get_object(
            Bucket=bucket_name,
            Key=object_key
        )
        object_body = file['Body'].read()
        output.append({
            "Key": os.path.basename(object_key),
            "Type": parse_type(object_key),
            "File": object_body,
        })
    return output
</code></pre>
<p>I found example code from GCP, but it uploads from a file path, while I already have an array of bytes and not files on disk. Is it possible to upload the bytes directly, without passing a file path?</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>def upload_blob(bucket_name, source_file, destination_blob_name):
    """Uploads a file to the bucket."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # The path to your file to upload
    # source_file_name = "local/path/to/file"
    # The ID of your GCS object
    # destination_blob_name = "storage-object-name"
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    # Optional: set a generation-match precondition to avoid potential race conditions
    # and data corruptions. The request to upload is aborted if the object's
    # generation number does not match your precondition. For a destination
    # object that does not yet exist, set the if_generation_match precondition to 0.
    # If the destination object already exists in your bucket, set instead a
    # generation-match precondition using its generation number.
    generation_match_precondition = 0
    # NOTE: the original GCP sample calls upload_from_filename here, which
    # expects a local file *path*, not raw bytes.
    blob.upload_from_filename(source_file, if_generation_match=generation_match_precondition)
    print(
        f"File {source_file} uploaded to {destination_blob_name}."
    )

def send_from_s3_to_gcp(s3_files):
    for content in s3_files:
        upload_blob(
            bucket_name=gcp_bucket_name,
            source_file=content["File"],
            destination_blob_name=content["Type"]+content["Key"],
        )

s3_client = build_session(role_arn, role_arn_name).client('s3')
output = download_from_s3()
send_from_s3_to_gcp(output)</code></pre>
</div>
</div>
</p>
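Yes: google-cloud-storage can upload in-memory bytes without writing them to disk, because `Blob.upload_from_string` accepts `bytes` as well as `str`. A minimal sketch; the bucket is passed in as a parameter so the logic can be exercised without credentials, and all names are illustrative:

```python
def upload_bytes(bucket, data, destination_blob_name,
                 content_type="application/octet-stream"):
    """Upload raw bytes to a GCS bucket.

    `bucket` is assumed to be a google.cloud.storage Bucket, e.g.:
        from google.cloud import storage
        bucket = storage.Client().bucket(gcp_bucket_name)

    Despite its name, Blob.upload_from_string accepts bytes as well as str,
    so the S3 object body can be passed through directly.
    """
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_string(data, content_type=content_type)
    return destination_blob_name


def send_bytes_from_s3_to_gcp(bucket, s3_files):
    # s3_files is the list of dicts produced by download_from_s3()
    for content in s3_files:
        upload_bytes(bucket, content["File"], content["Type"] + content["Key"])
```

If the objects are large, `Blob.upload_from_file` with a `io.BytesIO` wrapper is another option that avoids holding a second copy of the payload.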
| <python> | 2023-08-22 02:55:48 | 1 | 1,652 | Ming |
76,949,600 | 1,445,660 | Update existing nested ORM objects with new values from JSON | <p>I have table <code>A</code> and <code>B</code>. <code>B</code> has an <code>a_id</code> foreign key to <code>A</code>.
I do this in python:</p>
<pre><code>a = create_A(some_json)
b = create_B(some_json)
b.a = a # (I have a sqlalchemy relationship)
existing_a = session.query(A).filter_by(name=a.name).first()
if existing_a:
    a.id = existing_a.id
    session.merge(a)
</code></pre>
<p>I get a json from which I create A and B, then if A already exists in the db, I want to update it.
The problem is that 'b.a' is pointing to a, but b.a_id is None. I have the id of 'a' but I wanted to avoid doing 'b.a_id = a.id' because I have a more complex structure with lists of objects and more nested objects.
If I manually set b.a_id to the correct value, I still get an error because <code>session.commit()</code> tries to update a null value and not the value I set, I don't know why.</p>
<p>reproducible code:</p>
<pre><code>@dataclass
class A(Base):
    __tablename__ = "a"
    a_id = Column(INTEGER, primary_key=True,
                  server_default=Identity(always=True, start=1, increment=1, minvalue=1,
                                          maxvalue=2147483647, cycle=False, cache=1),
                  autoincrement=True)
    name: str = Column(TEXT, nullable=True)
    bs = relationship('B', back_populates='a')


@dataclass
class B(Base):
    __tablename__ = "b"
    b_id = Column(INTEGER, primary_key=True,
                  server_default=Identity(always=True, start=1, increment=1, minvalue=1,
                                          maxvalue=2147483647, cycle=False, cache=1),
                  autoincrement=True)
    a_id: int = Column(INTEGER, ForeignKey('a.a_id'), nullable=False)
    name: str = Column(TEXT, nullable=True)
    a = relationship('A', back_populates='bs')


Session = sessionmaker(bind=engine_sync)
session = Session()

a = A()
a.name = 'new name a'
b = B()
b.name = 'new name b'
b.a = a

existing_a = session.query(A).filter_by(name='name a').first()
a.a_id = existing_a.a_id
# a.bs[0].a_id = a.a_id - this doesn't help
session.merge(a)
session.commit()
</code></pre>
<p>In the db these rows exist:
A:
a_id: 1, name: 'name a'
B:
b_id: 1, a_id: 1, name: 'name b'</p>
| <python><python-3.x><postgresql><sqlalchemy> | 2023-08-22 01:38:25 | 1 | 1,396 | Rony Tesler |
76,949,497 | 116 | Why does flask look for my python file in $HOME? | <p>When I run a flask program in debug mode, it tries to reload my program file from <code>$HOME</code>, which of course fails.</p>
<ul>
<li>Why is is trying to read from <code>$HOME</code>?</li>
<li>How can I stop this behavior, and make it reload from the directory where it resides?</li>
</ul>
<pre><code>/tmp% cat test_flask.py
import flask
app = flask.Flask(__name__)

@app.route('/foo', methods=['GET'])
def foo():
    pass

if __name__ == "__main__":
    # app.run(debug=False) -- everything works as expected when debug is False.
    app.run(debug=True)
/tmp% python3 test_flask.py
 * Serving Flask app "test_flask" (lazy loading)
[...]
/opt/bin/python3: can't open file '/Users/mark/test_flask.py': [Errno 2] No such file or directory
</code></pre>
<p>Here are details on my python:</p>
<pre><code>/tmp% which python3
/Users/mark/opt/anaconda3/bin/python3
/tmp% file $(which python3)
/Users/mark/opt/anaconda3/bin/python3: Mach-O 64-bit executable x86_64
</code></pre>
| <python><flask> | 2023-08-22 00:58:55 | 1 | 305,996 | Mark Harrison |
76,949,442 | 16,737,868 | Feeling defeated: APIError(code=-1111): Precision is over the maximum defined for this asset | <p>I am following the latest specifications on <code>step_size</code> and <code>tick_size</code> according to binance’s exchange info endpoint for BTCUSDT:</p>
<pre><code>"BTCUSDT": {
    "step_size": "0.00001",
    "tick_size": "0.01"
},
</code></pre>
<p>And even doing everything according to the documentation, I am still getting the following error:</p>
<pre><code>APIError(code=-1111): Precision is over the maximum defined for this asset.
</code></pre>
<p>I really am out of options. I’ve been creating trading bots with Binance API since 2020 and I have never found a clear solution to this problem. Only temporary fixes that for whatever reason end up failing, eventually, even if the exchange info data has not changed.</p>
<p>These are the functions I currently use to truncate price and quantity before creating the binance order (which do work for many other pairs like VETUSDT):</p>
<pre><code>def __get_trimmed_quantity(self, quantity):
    self.logger.log("untrimmed qty: ", quantity)
    quantity_dec = Decimal(str(quantity))
    step_size_dec = Decimal(str(self.step_size))
    trimmed_quantity_dec = quantity_dec.quantize(step_size_dec)
    trimmed_quantity = float(trimmed_quantity_dec)
    return trimmed_quantity

def __get_trimmed_price(self, price):
    self.logger.log("untrimmed price: ", price)
    price_dec = Decimal(str(price))
    tick_size_dec = Decimal(str(self.tick_size))
    trimmed_price_dec = price_dec.quantize(tick_size_dec)
    trimmed_price = float(trimmed_price_dec)
    return trimmed_price
</code></pre>
<p>And here you can see the logs that lead up to the error. You can verify that the order has the proper truncated price and quantity.</p>
<pre><code>untrimmed qty: 0.28760622
untrimmed price: 26193.02
Order to create (binance): {'side': 'SELL', 'positionSide': 'SHORT', 'type': 'LIMIT', 'timeInForce': 'GTC', 'quantity': 0.28761, 'price': 26193.02, 'newClientOrderId': '35021e7298b34c838f0b64e291ead9e1'}
__create_order_to_open_position: Binance order creation failed: APIError(code=-1111): Precision is over the maximum defined for this asset.
</code></pre>
<p>All I want is the order to go through. I’ll do whatever you tell me. Give me a function I can throw a quantity and price at and it will spit out the proper truncated values. I don’t care how it’s done. I just want it to work. Please.</p>
<p>Can anybody spot what I’m doing wrong?</p>
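Two things are worth noting. First, the logged order carries `positionSide`, which suggests a USDT-M futures order; futures precision filters for BTCUSDT are stricter than the spot `step_size` of 0.00001 (quantity precision has historically been 3 decimals there), so pulling the filters from the futures `exchangeInfo` endpoint rather than the spot one may resolve the -1111 error; treat that as an assumption to verify against your account. Second, `Decimal.quantize` defaults to ROUND_HALF_EVEN, which can round a quantity *up* (0.28760622 became 0.28761 in your log); a floor-style truncation is safer. A sketch of a step-size-driven truncation helper:

```python
from decimal import Decimal, ROUND_DOWN

def truncate_to_step(value, step):
    """Floor `value` to an exact multiple of `step` (both str/float).

    ROUND_DOWN guarantees we never round *up* off the step grid, which
    Decimal's default ROUND_HALF_EVEN rounding can do.
    """
    value_dec = Decimal(str(value))
    step_dec = Decimal(str(step))
    # Divide out the step, floor, then multiply back -- this also handles
    # step sizes like 0.5 that are not plain powers of ten.
    steps = (value_dec / step_dec).to_integral_value(rounding=ROUND_DOWN)
    return float(steps * step_dec)

print(truncate_to_step(0.28760622, "0.00001"))  # 0.2876
print(truncate_to_step(0.28760622, "0.001"))    # 0.287  (futures-style 3 decimals)
print(truncate_to_step(26193.029, "0.01"))      # 26193.02
```

Feeding the futures-precision step into this helper would turn your 0.28761 into 0.287, which is the shape of quantity the futures endpoint accepts.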
| <python><binance> | 2023-08-22 00:37:17 | 1 | 611 | Juan Chaher |
76,949,364 | 378,622 | SymPy: Getting special solutions in two different vectors | <p>I want to use SymPy solve for special solutions to Ax = 0:
<a href="https://i.sstatic.net/eGyaR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eGyaR.png" alt="enter image description here" /></a></p>
<p>Using this code:</p>
<pre><code>import numpy as np
import sympy as sp

Vec = sp.Matrix(np.matrix([[1, 5, 7, 9],
                           [0, 4, 1, 7],
                           [2, -2, 11, -3]]))
print(sp.linsolve((Vec, sp.Matrix([[0], [0], [0], [0]]))))
</code></pre>
<p>I get the correct output: <code>{(-23*tau0/4 - tau1/4, -tau0/4 - 7*tau1/4, tau0, tau1)}</code></p>
<p>However, I'd like the two special solutions (corresponding to multiples of tau0 and tau1) in two different vectors, like:</p>
<p><a href="https://i.sstatic.net/P1Hqd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P1Hqd.png" alt="enter image description here" /></a></p>
<p>Is there a way to get these two separate special solution vectors for each tau, rather than having them all in one?</p>
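SymPy can produce the special solutions directly as separate column vectors: `Matrix.nullspace()` returns one basis vector per free variable, which is exactly the decomposition by tau0 and tau1. A short sketch with the same matrix:

```python
import sympy as sp

Vec = sp.Matrix([[1, 5, 7, 9],
                 [0, 4, 1, 7],
                 [2, -2, 11, -3]])

# One column vector per free variable of Ax = 0
special_solutions = Vec.nullspace()
for v in special_solutions:
    sp.pprint(v.T)
```

Each returned vector has a 1 in one free-variable position and 0 in the others, matching the tau0/tau1 split from `linsolve`.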
| <python><sympy><linear-algebra> | 2023-08-22 00:08:52 | 3 | 26,851 | Ben G |
76,949,358 | 1,275,942 | PyQt crashes when QtItemDelegate subclass has two or more instances | <p>I'm implementing a delegate for a QTableView where two columns should be dropdowns where the user selects a value from an enum.</p>
<p>Below is an example:</p>
<pre class="lang-py prettyprint-override"><code>from PyQt4 import QtGui, QtCore
import enum

Color = enum.Enum("Color", ("RED", "BLUE"))
Shape = enum.Enum("Shape", ("TRIANGLE", "CIRCLE"))

class EnumComboBoxDelegate(QtGui.QItemDelegate):
    def __init__(self, enum_cls, parent=None):
        super(EnumComboBoxDelegate, self).__init__(parent)
        self.enum_cls = enum_cls
        self.enum_objects = list(enum_cls)
        self.enum_names = [enum_obj.name for enum_obj in self.enum_objects]
        self.enum_values = [enum_obj.value for enum_obj in self.enum_objects]

    def createEditor(self, widget, option, index):
        editor = QtGui.QComboBox(widget)
        for user_friendly_name in self.enum_names:
            editor.addItem(user_friendly_name)
        return editor

    def setEditorData(self, editor, index):
        combobox_index, is_int = index.model().data(index, QtCore.Qt.EditRole).toInt()
        if is_int:
            editor.setCurrentIndex(combobox_index)
        else:
            editor.setCurrentIndex(0)

    def setModelData(self, editor, model, index):
        combobox_index = editor.currentIndex()
        if not combobox_index:
            combobox_index = 0
        model.setData(index, combobox_index, QtCore.Qt.EditRole)

    def updateEditorGeometry(self, editor, option, index):
        editor.setGeometry(option.rect)

class CentralWidget(QtGui.QWidget):
    def __init__(self, *args, **kwargs):
        super(CentralWidget, self).__init__(*args, **kwargs)
        main_layout = QtGui.QVBoxLayout()
        table_view = QtGui.QTableView()
        table_view.setSelectionBehavior(QtGui.QAbstractItemView.SelectRows)
        table_view.setSelectionMode(QtGui.QAbstractItemView.SingleSelection)
        table_model = QtGui.QStandardItemModel(2, 2, None)
        table_view.setModel(table_model)
        color_combo_delegate = EnumComboBoxDelegate(Color)
        table_view.setItemDelegateForColumn(0, color_combo_delegate)
        shape_combo_delegate = EnumComboBoxDelegate(Shape)
        table_view.setItemDelegateForColumn(1, shape_combo_delegate)
        main_layout.addWidget(table_view)
        self.setLayout(main_layout)

def run_self_contained_widget():
    import sys
    app = QtGui.QApplication(sys.argv)
    main_window = QtGui.QMainWindow()
    main_window.setCentralWidget(CentralWidget())
    main_window.show()
    sys.exit(app.exec_())

if __name__ == "__main__":
    run_self_contained_widget()
</code></pre>
<p>Now, if run, the code crashes on startup without traceback:</p>
<p><code>Process finished with exit code -1073741819 (0xC0000005)</code></p>
<p>If I change the delegates so they both use the same instance of <code>EnumComboBoxDelegate</code>, I get no errors:</p>
<pre><code>color_combo_delegate = EnumComboBoxDelegate(Color)
table_view.setItemDelegateForColumn(0, color_combo_delegate)
shape_combo_delegate = EnumComboBoxDelegate(Shape)
table_view.setItemDelegateForColumn(1, color_combo_delegate)
</code></pre>
<p>What is causing this and how do I fix it? This code is using PyQt4 and Python 2.7, but it seems to be the same in Python 3.x</p>
| <python><qt><pyqt><lifetime><qitemdelegate> | 2023-08-22 00:08:09 | 1 | 899 | Kaia |
76,949,178 | 1,285,061 | How to add arrays inside a numpy array without flattening | <pre><code>>>> import numpy as np
>>> a = np.array([112,123,134,145])
>>> b = np.array([212,223,234])
>>> c = np.array([312,323])
>>> d = np.array([412])
>>> arr = np.hstack([a,b,c,d])
>>> arr
array([112, 123, 134, 145, 212, 223, 234, 312, 323, 412])
>>> arr = np.array([a,b,c,d])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (4,) + inhomogeneous part.
>>>
</code></pre>
<p>Expected array -</p>
<pre><code>array([[112,123,134,145], [212,223,234], [312,323], [412]],
[[512,523,534,545], [612,623,634], [712,723], [812]], ...)
</code></pre>
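Recent NumPy versions (1.24+) refuse to guess at ragged nesting, which is what the `inhomogeneous shape` error is saying. Requesting `dtype=object` explicitly keeps each sub-array intact instead of flattening. A sketch with the arrays from the question:

```python
import numpy as np

a = np.array([112, 123, 134, 145])
b = np.array([212, 223, 234])
c = np.array([312, 323])
d = np.array([412])

# Build a 1-D object array whose elements are the sub-arrays.
# Assigning into a pre-allocated empty array avoids any ambiguity
# about how np.array should interpret the nesting.
arr = np.empty(4, dtype=object)
arr[:] = [a, b, c, d]

print(arr)
print([len(x) for x in arr])  # [4, 3, 2, 1]
```

`np.array([a, b, c, d], dtype=object)` also works here, but note that object arrays lose vectorized arithmetic; for genuinely ragged numeric data, a library like Awkward Array may be a better fit.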
| <python><numpy><multidimensional-array> | 2023-08-21 23:05:15 | 3 | 3,201 | Majoris |
76,949,152 | 9,008,300 | Why does PyTorch's Linear layer store the weight in shape (out, in) and transpose it in the forward pass? | <p>PyTorch's <code>Linear</code> layer stores the <code>weight</code> attribute in the shape <code>(out_features, in_features)</code>. In the forward pass, the following operation is performed:</p>
<pre class="lang-py prettyprint-override"><code>def forward(self, input: Tensor) -> Tensor:
    return F.linear(input, self.weight, self.bias)
</code></pre>
<p>which is equivalent to:</p>
<pre class="lang-py prettyprint-override"><code>def forward(self, input: Tensor) -> Tensor:
    return torch.addmm(self.bias, input, self.weight.t())
</code></pre>
<p>This is strange. Why isn't the <code>weight</code> attribute stored in the shape of <code>(in_features, out_features)</code>, eliminating the need for a transposition in the forward and backward pass?</p>
<p>I performed a benchmark to compare both storage methods (forward and backward pass over 10,000 iterations) and found that a pre-transposed weight was around 10µs faster per iteration. The benchmark used a tensor with 100 features and 128 rows of random data. To ensure fairness, both <code>Linear</code> layers were configured with the same parameters, including the weight and bias, and were applied to identical random data.</p>
| <python><pytorch><tensor> | 2023-08-21 22:59:00 | 0 | 422 | Chris |
76,949,069 | 9,576,988 | How can I return an object that supports dict and dot-notation from SQLite queries in Python? | <p>I have the following code for a sqlite3 query. I want to be able to use index, dict, and dot-notation to access row attributes. Using the <code>sqlite3.Row</code> row_factory allows index and dict access, but not dot-notation.</p>
<pre class="lang-py prettyprint-override"><code>import sqlite3

conn = sqlite3.connect(':memory:')
conn.row_factory = sqlite3.Row
cur = conn.cursor()
cur.execute("CREATE TABLE movie(title, year, score)")
cur.execute("""
    INSERT INTO movie VALUES
        ('Monty Python and the Holy Grail', 1975, 1.2),
        ('And Now for Something Completely Different', 1971, 2.3)
""")
conn.commit()

results = conn.execute("select * from movie;")
for r in results:
    print(r[0], r['title'], r.title)
    # ---> `r.title` AttributeError: 'sqlite3.Row' object has no attribute 'title'
</code></pre>
<p>There is a dot-notation row_factory that exists, but I can't figure out how to merge the two. This doesn't support dict notation,</p>
<pre class="lang-py prettyprint-override"><code>from collections import namedtuple

def namedtuple_factory(cursor, row):
    """Returns sqlite rows as named tuples."""
    fields = [col[0] for col in cursor.description]
    Row = namedtuple("Row", fields)
    return Row(*row)
</code></pre>
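One way to merge the two approaches is to extend the namedtuple factory with string-key lookup: namedtuples already give index and dot access, so adding a `__getitem__` that handles strings covers the dict style too. A sketch (building the class per row is wasteful; in practice you would cache it keyed on `cursor.description`):

```python
import sqlite3
from collections import namedtuple

def attr_dict_factory(cursor, row):
    """Rows support r[0], r['title'] and r.title."""
    fields = [col[0] for col in cursor.description]
    base = namedtuple("Row", fields)

    class Row(base):
        def __getitem__(self, key):
            if isinstance(key, str):
                return getattr(self, key)
            return super().__getitem__(key)

    return Row(*row)

conn = sqlite3.connect(':memory:')
conn.row_factory = attr_dict_factory
conn.execute("CREATE TABLE movie(title, year, score)")
conn.execute("INSERT INTO movie VALUES ('Monty Python and the Holy Grail', 1975, 1.2)")

r = conn.execute("select * from movie").fetchone()
print(r[0], r['title'], r.title)
```

Note that column names must be valid Python identifiers for namedtuple; `namedtuple(..., rename=True)` can paper over names like `count(*)` if needed.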
| <python><sqlite><attributes><notation> | 2023-08-21 22:31:59 | 1 | 594 | scrollout |
76,948,951 | 9,771,455 | Infer Schema Fails in Databricks Notebook | <p>I have written a spark structured stream in Databricks. The first bit of code is to check if a delta table exists for my entity. If it does not then the delta table is created. Here, I wanted to use the infer schema option to get the schema for the delta table.</p>
<pre class="lang-py prettyprint-override"><code># Check if the Delta table exists, if not create it
if not DeltaTable.isDeltaTable(spark, sink_path):
    # Read the Parquet data and infer the schema
    parquet_data = spark.read.option("inferSchema", "true").parquet(source_path)
    # Create a Delta table with the inferred schema
    #### does not create transaction log
    parquet_data.write.format("delta").mode("overwrite").save(sink_path)
    print('delta table created')
</code></pre>
<p>I am getting this error. The source file has around 50 records. Why is the schema inference not working?</p>
<p>batch stream failed for advertiser: An error occurred while calling o671.load.
: com.databricks.sql.cloudfiles.errors.CloudFilesException: Cannot infer schema when the input path <code>dbfs:/mnt/raw/Entity1/fileA.parquet</code> is empty. Please try to start the stream when there are files in the input path, or specify the schema.</p>
| <python><databricks><spark-streaming><azure-databricks><spark-structured-streaming> | 2023-08-21 22:01:04 | 1 | 369 | Shoaib Maroof |
76,948,909 | 8,076,158 | What is the easiest way to tell the typing system that my function arguments come from a list of strings? | <p>I have a type like the following, with an extra string argument <code>all</code>. I was hoping something like this would work.</p>
<p>Note: MY_LIST is a list of strings.</p>
<pre><code>list[Literal[MY_LIST]] | Literal["all"]
</code></pre>
<p>Error:</p>
<pre><code> File "/home/Me/.pyenv/versions/3.11.1/lib/python3.11/typing.py", line 1672, in __hash__
return hash(frozenset(_value_and_type_iter(self.__args__)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: unhashable type: 'list'
</code></pre>
<p>What is the correct command?</p>
| <python> | 2023-08-21 21:52:33 | 0 | 1,063 | GlaceCelery |
76,948,903 | 1,029,902 | Can't import .dta file with either Python or R | <p>I am trying to read a Stata .dta file into either Python or R so that I can work with it, but it gives me a version error in both. I was wondering how I could resolve this.
Here is my code:</p>
<pre><code>import pandas as pd
data = pd.read_stata('file.dta')
</code></pre>
<p>Here is the error it is giving me</p>
<pre><code>Traceback (most recent call last):
File "/mnt/c/Users/t/projects/cg/dta_csv.py", line 11, in <module>
data = pd.io.stata.read_stata('file.dta')
File "/home/t/.local/lib/python3.10/site-packages/pandas/io/stata.py", line 2090, in read_stata
return reader.read()
File "/home/t/.local/lib/python3.10/site-packages/pandas/io/stata.py", line 1702, in read
self._ensure_open()
File "/home/t/.local/lib/python3.10/site-packages/pandas/io/stata.py", line 1176, in _ensure_open
self._open_file()
File "/home/t/.local/lib/python3.10/site-packages/pandas/io/stata.py", line 1206, in _open_file
self._read_header()
File "/home/t/.local/lib/python3.10/site-packages/pandas/io/stata.py", line 1288, in _read_header
self._read_old_header(first_char)
File "/home/t/.local/lib/python3.10/site-packages/pandas/io/stata.py", line 1467, in _read_old_header
raise ValueError(_version_error.format(version=self._format_version))
ValueError: Version of given Stata file is 70. pandas supports importing versions 105, 108, 111 (Stata 7SE), 113 (Stata 8/9), 114 (Stata 10/11), 115 (Stata 12), 117 (Stata 13), 118 (Stata 14/15/16),and 119 (Stata 15/16, over 32,767 variables).
</code></pre>
<p>I have also tried using R & RStudio with the <code>haven</code> and <code>foreign</code> libraries, with no luck either:</p>
<pre><code>> library(foreign)
> df <- read.dta("file.dta")
Error in read.dta("file.dta") : not a Stata version 5-12 .dta file
> library(haven)
> df <- read.dta("file.dta")
Error in read.dta("file.dta") : not a Stata version 5-12 .dta file
</code></pre>
<p>Any suggestion for how I could possibly resolve this?</p>
<p><a href="https://drive.google.com/file/d/1w2gnxAfNdehquChhQyfWAw1g5jm-ARFm/view?usp=drive_link" rel="nofollow noreferrer">Here is a link to the file</a>:</p>
| <python><r><data-analysis><stata><file-conversion> | 2023-08-21 21:51:13 | 1 | 557 | Tendekai Muchenje |
76,948,899 | 22,407,544 | Why do I keep getting 'No module named 'pywintypes'' whenever I try to import win32com.client or pywin32? | <p>Here is my code. I am trying to use docx2pdf to convert pdf files to docx.</p>
<pre><code>import win32com.client
#from win32 import _win32sysloader
#from win32 import win32api
#from docx2pdf import convert

def translator():
    print('reconverting')
    convert("C:\\Users\\John\\Downloads\\btab083 (1).docx", "C:\\Users\\John\\Downloads\\btab083 (1).pdf")
    print('success')

translator()
</code></pre>
<p>I'm using Python 3.11.4</p>
<p>As you can see, I tried different import statements (commented out above) as solutions I found online. No matter what order I tried them in, I kept getting <code>No module named 'pywintypes'</code> or <code>No module named '_win32sysloader'</code>.</p>
<p>I installed pywin32 using '<code>pip install pywin32</code>. I also used <code>python pywin32_postinstall.py -install</code>, <code>pip install win32</code>, <code>python Scripts/pywin32_postinstall.py -install</code> and <code>python -m pip install --upgrade pywin32</code>.</p>
<p>I also tried pip install pywintypes</p>
| <python><python-3.x><pdf><ms-word><docx> | 2023-08-21 21:48:58 | 0 | 359 | tthheemmaannii |
76,948,781 | 880,874 | How can I turn a dataframe into a string and still keep all the rows? | <p>I have the following little script that gets about 100 rows of data from a database, puts it into a Data Frame, and then turns the Data Frame into a string:</p>
<pre><code>import sqlalchemy as sa
from sqlalchemy import create_engine
from sqlalchemy.engine import URL
from sqlalchemy import text
import pandas as pd
import Inspector as spec

connection_string = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=data_governor;DATABASE=ProcessingInspector;UID=xyz;PWD=123"
connection_url = URL.create(
    "mssql+pyodbc", query={"odbc_connect": connection_string})
engine = sa.create_engine(connection_url)

# get data from database
qry = text("EXECUTE dbo.InspectionServiceData")
with engine.connect() as con:
    rs = con.execute(qry)
df = pd.read_sql_query(qry, engine)

if __name__ == '__main__':
    # turn the DF into a string
    query_result = df.to_string(header=True, index=False)
    # get all the rows
    rows = [[field.strip() for field in line.split(' ') if field] for line in query_result.splitlines()[2:-1]]
    # send rows to 3rd party script
    for inspection in spec.process_data(rows):
        # testing
        print(inspection)
</code></pre>
<p>I am using a 3rd party Python script that transforms each row of the data and inspects it , inputs each row into a template file, and then emails me a report.</p>
<p>So in order to send the rows, I turned my Data Frame into a string and then pass in the collection of rows into their Python script.</p>
<p>I am getting the error below, and after contacting the 3rd party tech support, I found out that it's expecting multiple rows, but I am only sending one giant one.</p>
<p>I think it has to do with turning the data frame into a string. I think that converts all the rows into one big giant row.</p>
<p>Is there a way I can convert my data frame to a string, but still keep all the rows?</p>
<p>Thanks!</p>
<pre><code> # Error from system: Expected multiple rows, only found one.
</code></pre>
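Rather than round-tripping through `to_string` and re-splitting on spaces (which both collapses the row structure and breaks any value that itself contains a space), you can get one list per DataFrame row directly. A small sketch with toy data standing in for the stored-procedure result:

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2],
    "name": ["widget a", "widget b"],   # embedded spaces survive intact
})

# One inner list per DataFrame row, with every value rendered as a string
rows = df.astype(str).values.tolist()
print(rows)  # [['1', 'widget a'], ['2', 'widget b']]
```

`rows` is then a list of row-lists, which matches what the third-party `process_data` apparently expects; drop the `astype(str)` if it accepts the original dtypes.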
| <python><python-3.x><pandas> | 2023-08-21 21:24:55 | 1 | 7,206 | SkyeBoniwell |
76,948,673 | 5,304,058 | how to map header with dataframe when there is no data for the header in pandas | <p>I have a dataframe which is loaded from a file without headers. I define the header in my code, map it to the data in the dataframe, and then load the dataframe into a SQL table.
However, the data we receive does not necessarily always have all the columns.</p>
<pre><code>for example,
header = ["filedate", "code", "errorID", "filename","errortype"]
df
20230821 1002 111 filename1 type1
20230821 1003 114 filename2 type1
20230821 1002 444 filename2 type2
df
20230821 111 filename1 type1
20230821 114 filename2 type1
20230821 444 filename2 type2
</code></pre>
<p>Here, first column is date, second column is code, third column is errorID, fourth is filename and fifth column is errortype.</p>
<p>In second df, second column "code" is missing this has to be blank .</p>
<p>currently my code :</p>
<pre><code>final = {}
for i, index in zip(header, df):
    final.update({i: df[index].tolist()})
df_final = pd.DataFrame(final)
</code></pre>
<p>This code works fine if all the columns are in my dataframe, but if the dataframe is missing any of the columns that are in the header, my code doesn't work.
I have to map all the columns defined in my header to my SQL table; any column missing from the dataframe should be blank or null. Columns are always in order in the dataframe.</p>
<pre><code>Final_df:
df_final
filedate code errorID filename errortype
20230821 1002 111 filename1 type1
20230821 1003 114 filename2 type1
20230821 1002 444 filename2 type2
df_final
filedate code errorID filename errortype
20230821 111 filename1 type1
20230821 114 filename2 type1
20230821 444 filename2 type2
</code></pre>
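Once you know which header columns a given file actually contains, `DataFrame.reindex(columns=...)` maps the frame onto the full header and inserts NaN (which loads as NULL in SQL) for the missing ones. Note the caveat: position alone cannot tell you *which* column is absent, so the sketch below assumes you can determine the present columns per file (e.g. from a file-type mapping); the sample data is illustrative:

```python
import pandas as pd

header = ["filedate", "code", "errorID", "filename", "errortype"]

# Hypothetical file where the 'code' column is absent
df = pd.DataFrame([
    ["20230821", 111, "filename1", "type1"],
    ["20230821", 114, "filename2", "type1"],
])

# Assumed to be known for this file: everything except 'code'
present = ["filedate", "errorID", "filename", "errortype"]
df.columns = present

# reindex maps onto the full header, filling missing columns with NaN
df_final = df.reindex(columns=header)
print(df_final)
```

This replaces the manual dict-building loop entirely; `df_final` always has all five header columns in order, ready for the SQL load.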
| <python><pandas> | 2023-08-21 21:01:11 | 0 | 578 | unicorn |
76,948,533 | 9,028,015 | PikePdf returning incorrect page count, how to get correct page count? | <p>I am using pikepdf to open a pdf file and get the page count for that file. My code is roughly:</p>
<pre><code>import pikepdf

with open(file_uri, "rb") as input_pdf:
    input_reader = pikepdf.Pdf.open(input_pdf)
    page_count: int = len(input_reader.pages)
</code></pre>
<p>Very rarely a pdf file will give the incorrect number of pages. For example, I have a file that when opened on my laptop in a PDF viewer is 39 pages. <code>pikepdf</code> gives a page count of <code>47</code> pages. The metadata when I click <code>Get Info</code> in the file browser states 39 pages.</p>
<p>Unfortunately, I cannot share the file as it has protected information.</p>
<p>How would I make sure I get the correct page count from pikepdf? I've used other Python PDF libraries (e.g. pypdf) in the past, but they sometimes have errors opening PDF files.</p>
| <python><pdf><pikepdf> | 2023-08-21 20:28:15 | 0 | 1,944 | Mason Caiby |
76,948,488 | 11,642,691 | PyOpenGL, Hidden-line removal and glut | <p>I am using an approach for hidden line removal in a PyOpenGL program where I draw a figure twice, once as a wireframe and then again as filled polygons. It is working fine for my own figures but not for the basic glutSolidCube/glutWireCube so I am curious if there is a flaw in my code that the use of glut figures is exposing. Maybe there is just something squirrelly in the glut figures but I am guessing those are pretty well used and debugged...</p>
<p>here is my draw code (working app follows), followed by screen shot of result:</p>
<pre><code>def render_scene():
    glEnableClientState(GL_VERTEX_ARRAY)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLineWidth(2)
    glMatrixMode(GL_MODELVIEW)
    glPushMatrix()
    glRotate(rot[0], 1, 0, 0)
    glRotate(rot[1], 0, 1, 0)
    #
    # =============
    # pass 1: lines
    # =============
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE)
    glColor(0, 0, 0)
    # my cube
    glVertexPointer(3, GL_FLOAT, 0, cube_vertices)
    glDrawElements(GL_QUADS, 24, GL_UNSIGNED_INT, cube_indices)
    # glut cube
    glPushMatrix()
    glTranslate(-1.1, 0, 0)
    glutWireCube(1)
    glPopMatrix()
    #
    # ================
    # pass 2: filled polygons
    # ================
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
    glEnable(GL_POLYGON_OFFSET_FILL)
    glPolygonOffset(2, 2)
    glColor(1, 1, 1)
    # my cube
    glDrawElements(GL_QUADS, 24, GL_UNSIGNED_INT, cube_indices)
    # glut cube
    glPushMatrix()
    glTranslate(-1.1, 0, 0)
    glutSolidCube(1)
    glPopMatrix()
    glPopMatrix()
    pg.display.flip()
    return
</code></pre>
<p>Thanks in advance!</p>
<p>running code:</p>
<pre><code>import pygame as pg
from pygame.locals import *
from numpy import array, float32, uint32
from OpenGL.GLUT import *
from OpenGL.GL import *
from OpenGL.GLU import *

FPS = 60
WIDTH = 800
HEIGHT = 600
clock = pg.time.Clock()
rot = [10, 10]

cube_vertices = array([
    -0.5, -0.5, -0.5,
     0.5, -0.5, -0.5,
     0.5,  0.5, -0.5,
    -0.5,  0.5, -0.5,
    -0.5, -0.5,  0.5,
     0.5, -0.5,  0.5,
     0.5,  0.5,  0.5,
    -0.5,  0.5,  0.5], dtype=float32)

cube_indices = array([
    0, 1, 2, 3,  # front
    4, 5, 1, 0,  # top
    3, 2, 6, 7,  # bottom
    5, 4, 7, 6,  # back
    1, 5, 6, 2,  # right
    4, 0, 3, 7], dtype=uint32)

def setup_rc():
    glEnable(GL_DEPTH_TEST)
    glEnable(GL_CULL_FACE)
    glCullFace(GL_FRONT)
    glClearColor(1, 1, 1, 0)
    pg.event.post(pg.event.Event(VIDEORESIZE, {'size': (WIDTH, HEIGHT)}))
    return

def on_video_resize(event):
    w = event.size[0]
    h = event.size[1]
    if h == 0: h = 1
    aspect_ratio = w/h
    glViewport(0, 0, w, h)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    if w <= h:
        glOrtho(-1.5, 1.5, -1.5*h/w, 1.5*h/w, -10.0, 10.0)
    else:
        glOrtho(-1.5*w/h, 1.5*w/h, -1.5, 1.5, -10.0, 10.0)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    return True

def render_scene():
    glEnableClientState(GL_VERTEX_ARRAY)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLineWidth(2)
    glMatrixMode(GL_MODELVIEW)
    glPushMatrix()
    glRotate(rot[0], 1, 0, 0)
    glRotate(rot[1], 0, 1, 0)
    #
    # =============
    # pass 1: lines
    # =============
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE)
    glColor(0, 0, 0)
    # my cube
    glVertexPointer(3, GL_FLOAT, 0, cube_vertices)
    glDrawElements(GL_QUADS, 24, GL_UNSIGNED_INT, cube_indices)
    # glut cube
    glPushMatrix()
    glTranslate(-1.1, 0, 0)
    glutWireCube(1)
    glPopMatrix()
    #
    # ================
    # pass 2: filled polygons
    # ================
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
    glEnable(GL_POLYGON_OFFSET_FILL)
    glPolygonOffset(2, 2)
    glColor(1, 1, 1)
    # my cube
    glDrawElements(GL_QUADS, 24, GL_UNSIGNED_INT, cube_indices)
    # glut cube
    glPushMatrix()
    glTranslate(-1.1, 0, 0)
    glutSolidCube(1)
    glPopMatrix()
    glPopMatrix()
    pg.display.flip()
    return

def update():
    clock.tick(FPS)
    return

def on_keydown(event):
    key = event.key
    if key == K_LEFT:
        rot[1] -= 2
    elif key == K_RIGHT:
        rot[1] += 2
    if key == K_UP:
        rot[0] -= 2
    elif key == K_DOWN:
        rot[0] += 2
    elif key == K_ESCAPE:
        pg.event.post(pg.event.Event(QUIT))
    return True

def main():
    pg.init()
    pg.display.set_mode((WIDTH, HEIGHT), DOUBLEBUF|OPENGL|RESIZABLE)
    pg.key.set_repeat(200, 100)
    pg.display.set_caption("Use arrow keys to rotate scene")
    glutInit()
    setup_rc()
    add_event_handler(VIDEORESIZE, on_video_resize)
    add_event_handler(KEYDOWN, on_keydown)
    while do_events():
        render_scene()
        update()
    pg.quit()
    return

# =======================================
# my simplified pygame event handling,
# normally in a separate module but wanted
# to make this code standalone
# =======================================
#
event_map = {pg.QUIT: (lambda e: False)}

def add_event_handler(key, func):
    event_map[key] = func
    return

def try_to_apply(e):
    try:
        return event_map[e.type](e)
    except KeyError:
        return True

def do_events():
    return all(try_to_apply(e) for e in pg.event.get())

# ========================================

main()
</code></pre>
<p>Screen shot (glut-based figure on left, my figure on right):
<a href="https://i.sstatic.net/O1sni.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O1sni.png" alt="enter image description here" /></a></p>
| <python><opengl><pygame><glut><pyopengl> | 2023-08-21 20:19:09 | 1 | 305 | PaulM |
76,948,672 | 332,936 | When running a VNC server, how can I distinguish between local input and remote mouse click events? | <p>I have a computer with a touchscreen where users navigate an interface presented by a Python app. This computer runs a VNC server (x11vnc on Debian). Administrators sometimes assist the users to navigate the interface remotely via VNC (RealVNC Viewer on MacOS), clicking their mouse on the appropriate interface components.</p>
<p>Our touchscreen GUI uses PsychoPy* to draw graphical elements and collect input data (where users are touching the screen, whether it is inside or outside of relevant hit boxes), but have no way to distinguish between user input (local) and administrator assistance input (remote).</p>
<p>Is it possible to distinguish between local touches and remote VNC mouse click events?</p>
<p>If so, what approaches are available to the Python app that is presenting the GUI?</p>
<p><br />
<em>* A scientific research library popular in psychology and behavioral science, which I believe uses Pygame and Pyglet.</em></p>
| <python><touch><remote-desktop><vnc><psychopy> | 2023-08-21 20:08:13 | 0 | 4,595 | Stew |
76,948,346 | 1,429,402 | Test if python class method is being called by a specific built-in function | <p>Given a python object with custom <code>__add__</code> and <code>__radd__</code> methods, is it possible to detect if they are being called within the scope of a <code>sum</code> built-in function?</p>
<p>naïve example:</p>
<pre><code>import inspect
import shlex
class Test:
def __init__(self, val):
self.val = val
def __add__(self, other):
curframe = inspect.currentframe()
context = inspect.getouterframes(curframe)[1].code_context[0]
is_summed = 'sum' in list(shlex.shlex(context))
print('__add__ called from sum(): ', is_summed)
try:
return Test(self.val + other.val)
except:
return Test(self.val + other)
def __radd__(self, other):
curframe = inspect.currentframe()
context = inspect.getouterframes(curframe)[1].code_context[0]
is_summed = 'sum' in list(shlex.shlex(context))
print('__radd__ called from sum(): ', is_summed)
if other == 0:
return self
else:
return self.__add__(other)
if __name__ == '__main__':
a = Test(5)
b = Test(10)
print('example 1 - plain add: a + b')
a + b
print('\nexample 2 - using sum: sum([a, b])')
sum([a, b])
print('\nexample 3 - (broken) using sum and an add:sum([a, b]) + 5')
sum([a, b]) + 5
</code></pre>
<p>Returns:</p>
<pre><code>example 1 - plain add: a + b
__add__ called from sum(): False
example 2 - using sum: sum([a, b])
__radd__ called from sum(): True
__add__ called from sum(): True
example 3 - (broken) using sum and an add:sum([a, b]) + 5
__radd__ called from sum(): True
__add__ called from sum(): True
__add__ called from sum(): True # desired result here is False because called outside of "sum"
</code></pre>
<p>In this example I naïvely return <code>True</code> if I detect the term <code>sum</code> in the current execution context. This is clearly not enough, as it isn't aware of scope, as the 3rd example shows.</p>
<p><strong>QUESTION</strong></p>
<p>Is there a way for a class method to know precisely if it is called within the context of a specific built-in function?</p>
| <python> | 2023-08-21 19:52:20 | 0 | 5,983 | Fnord |
76,948,271 | 2,781,105 | Consolidate Pandas columns based on common value in the column header | <p>I have a dataframe consisting of events and dates which, for the purposes of illustration, looks as follows:</p>
<pre><code>df1 = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03', '2022-01','2022-02','2022-03','2022-01','2022-03'],
'2001_52': [10,0,0,0,0,0,0,0],
'2002_52': [0,30,0,0,0,0,0,0],
'2003_52': [0,0,50,0,0,0,0,0],
'2001_23': [0,0,0,20,0,0,0,0],
'2002_23': [0,0,0,0,15,0,0,0],
'dis20': [0,0,0,0,0,25,0,0],
'dis30': [0,0,0,0,0,0,15,0],
'dis40': [0,0,0,0,0,0,0,75],})
</code></pre>
<p><a href="https://i.sstatic.net/rN1JD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rN1JD.png" alt="enter image description here" /></a></p>
<p>I want to parse each column header and combine the common columns into one, such that the output is as follows:</p>
<pre><code>df2 = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03', '2022-01','2022-02','2022-03','2022-01','2022-03'],
'wk_52': [10,30,50,0,0,0,0,0],
'wk_23': [0,0,0,20,15,0,0,0],
'dis': [0,0,0,0,0,25,15,75],})
</code></pre>
<p><a href="https://i.sstatic.net/uB97I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uB97I.png" alt="enter image description here" /></a></p>
<p>My attempts have yielded the following, which is evidently not quite the desired result, as the first two columns are missing values, whereas the last column appears to be as expected.</p>
<pre><code># Create empty dict
merge_data = {}
# loop through columns to parse name
for col in df1.columns:
if col.startswith('2') and '_' in col:
key = col.split('_')[-1]
if key.isdigit():
if key not in merge_data:
merge_data[key] = df1[col]
else:
merge_data[key] +=df1[col]
# merge columns and apply values to empty dict
merge_data_out = {f"wk_{key}": values for key, values in merge_data.items()}
# procedure for combining 'dis...' columns
merge_data_out['dis'] = df1.filter(like='dis').sum(axis=1)
merge_data_out = pd.DataFrame(merge_data_out)
merge_data_out
</code></pre>
<p><a href="https://i.sstatic.net/XsAQ6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XsAQ6.png" alt="enter image description here" /></a></p>
<p>I would appreciate some guidance here as to how the desired result can be achieved.</p>
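One possible approach (a sketch, assuming every non-date column either ends in <code>_&lt;week&gt;</code> or starts with <code>dis</code>, as in the example): build the column groups first, then sum each group in a single vectorised step, which avoids the incremental-dict bookkeeping that dropped values above.

```python
import pandas as pd

def consolidate(df1):
    out = df1[["yyyyww"]].copy()
    groups = {}
    for col in df1.columns.drop("yyyyww"):
        # '2001_52' -> 'wk_52'; 'dis20' -> 'dis'
        key = ("wk_" + col.split("_")[1]) if "_" in col else "dis"
        groups.setdefault(key, []).append(col)
    for name, cols in groups.items():
        # sum the grouped columns row-wise in one vectorised call
        out[name] = df1[cols].sum(axis=1)
    return out
```

With the `df1` from the question, this yields the `wk_52`, `wk_23` and `dis` columns shown in `df2`.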
| <python><python-3.x><pandas><dataframe> | 2023-08-21 19:39:06 | 4 | 889 | jimiclapton |
76,948,203 | 4,459,346 | Weird PySide2 QPushButton background colour when checked | <p>I can use a stylesheet to set the background colour of a QPushButton when it is clicked or checked. The checked colours, however, are not as I've set them; they include some raster pattern.</p>
<p>The code I use is:</p>
<pre><code>import sys
from PySide2.QtWidgets import (
QMainWindow, QApplication, QPushButton,
)
from PySide2.QtCore import (
QSize,
)
class Window(QMainWindow):
def __init__(self, parent=None):
super().__init__(parent)
self.setMinimumSize(QSize(180, 110))
self.setWindowTitle('QPushButton Example')
button0 = QPushButton('Button 0', self)
button0.setCheckable(True)
button0.setStyleSheet(
"""QPushButton { background-color: lightgrey; }"""
"""QPushButton::checked { background-color: blue; }"""
"""QPushButton::pressed { background-color: blue; }"""
)
button0.resize(100, 32)
button0.move(50, 10)
button1 = QPushButton('Button 1', self)
button1.setCheckable(True)
button1.setStyleSheet(
"""QPushButton { background-color: lightgrey; }"""
"""QPushButton::checked { background-color: red; }"""
"""QPushButton::pressed { background-color: red; }"""
)
button1.resize(100, 32)
button1.move(50, 60)
def main():
app = QApplication(sys.argv)
window = Window()
window.show()
app.exec_()
if __name__ == "__main__":
main()
</code></pre>
<p>As long as the buttons are clicked with the mouse, I get the correct colour:</p>
<p><a href="https://i.sstatic.net/yVem7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yVem7.png" alt="clicked colour" /></a></p>
<p>When I push the button (when I 'check' the button), then I get the weird colours:</p>
<p><a href="https://i.sstatic.net/tBwew.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tBwew.png" alt="checked colour" /></a></p>
<p>How can I get the actual colour that I've set for 'checked' instead of this raster pattern?</p>
| <python><pyqt5><pyside2> | 2023-08-21 19:26:54 | 1 | 2,020 | NZD |
76,948,027 | 446,137 | Loading int16 data with NULLs from snowflake into a pandas DataFrame | <p>I am looking for an automated way to fetch data that doesn't fit into numpy datatypes (e.g. nullable ints) from a snowflake db (using the snowflake connector).</p>
<p>E.g. I have a query that loads a column of nullable ints:</p>
<pre><code>query = "SELECT MyIntColumn FROM MyTable"
</code></pre>
<p>where <code>MyIntColumn</code> is of type int16 with NULLs.</p>
<p>If I run</p>
<pre><code>con = snowflake.connector.connect()
cursor = con.cursor()
cursor.execute(query)
</code></pre>
<p>I have the option of fetching the data via</p>
<pre><code>df = cursor.fetch_pandas_all()
</code></pre>
<p>which returns a <code>pd.DataFrame</code> with <code>MyIntColumn</code> as <code>dtype=float</code>, which is not ideal.</p>
<p>I can instead get it via</p>
<pre><code>arrow_data = cursor.fetch_arrow_all()
</code></pre>
<p>But now I get an arrow table. If I run</p>
<pre><code>df = arrow_data.to_pandas()
</code></pre>
<p>even though I am on pandas 2.0.2, I still get a <code>float</code> column.</p>
<p>Is there an automated way that will allow me to preserve the arrow backend types when fetching this data?</p>
| <python><pandas><snowflake-cloud-data-platform><pyarrow> | 2023-08-21 18:53:55 | 0 | 1,790 | Hans |
76,947,815 | 5,986,907 | How does pytest create fixtures, or, how to get my yield? | <p>I have a generator function</p>
<pre><code>def foo():
resource = setup()
yield resource
tidy(resource)
</code></pre>
<p>I'm using this as a fixture</p>
<pre><code>@pytest.fixture
def foofix():
yield from foo()
</code></pre>
<p>This works fine.</p>
<p>I want to test <code>foo</code></p>
<pre><code>def test_foo():
res_gen = foo()
res = next(res_gen)
assert res.is_working
</code></pre>
<p>but because I tidy up <code>resource</code> immediately after I <code>yield</code> it, it's no longer available to assert that it's working. How then does pytest use <code>resource</code> before it's tidied up, and how can I do the same?</p>
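pytest only advances the fixture generator a second time after the test body finishes, which is why the resource is still alive during the test. The same sequencing can be reproduced manually (a sketch with stand-in <code>setup()</code>/<code>tidy()</code>, since the real ones aren't shown):

```python
class Resource:
    def __init__(self):
        self.is_working = True

def setup():
    return Resource()

def tidy(resource):
    resource.is_working = False

def foo():
    resource = setup()
    yield resource
    tidy(resource)

def test_foo():
    res_gen = foo()
    res = next(res_gen)        # runs setup, receives the resource
    try:
        assert res.is_working  # resource is still alive here
    finally:
        next(res_gen, None)    # resume past the yield: runs tidy()
    assert not res.is_working  # teardown has now happened
```

Note that `res_gen.close()` would instead raise `GeneratorExit` at the `yield` and skip `tidy()` unless the cleanup sits in a `finally` block, so resuming with `next(res_gen, None)` mirrors pytest's behaviour more closely.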
| <python><pytest><generator><pytest-fixtures> | 2023-08-21 18:15:44 | 1 | 8,082 | joel |
76,947,804 | 3,437,281 | Python virtual environment apparently not working as intended in production | <p>I have a python script that is running without a hitch in my development environment (PyCharm with venv) but as soon as I move it to a server, where it is meant to run as a service, it fails.</p>
<p>Excerpt from the code:</p>
<pre><code>import configparser
config_file = "config.ini"
config = configparser.ConfigParser()
config.read(config_file)
mqtt_broker = config["mqtt_broker"]["address"]
</code></pre>
<p>The config.ini looks like this:</p>
<pre><code>[mqtt_broker]
address = 10.0.0.10:1883
# maximum wait in milliseconds before raising an error
maximum_wait_ms = 3000
</code></pre>
<p>When I run it on the server (with user mqtt), I get:</p>
<pre><code>$ sudo runuser -l mqtt -c "/etc/mqtt_kafka_bridge/venv/bin/python3.9 /etc/mqtt_kafka_bridge/venv/main.py --serve-in-foreground -log.level=debug"
Traceback (most recent call last):
File "/etc/mqtt_kafka_bridge/venv/main.py", line 19, in <module>
mqtt_broker = config["mqtt_broker"]["address"]
File "/usr/lib/python3.9/configparser.py", line 963, in __getitem__
raise KeyError(key)
KeyError: 'mqtt_broker'
</code></pre>
<p>Note: The whole virtual environment has been copied from the development environment onto the server. Moreover:</p>
<pre><code>eric@ubuntu-server:~$ /etc/mqtt_kafka_bridge/venv/bin/python3 -m site
sys.path = [
'/home/eric',
'/usr/lib/python39.zip',
'/usr/lib/python3.9',
'/usr/lib/python3.9/lib-dynload',
'/etc/mqtt_kafka_bridge/venv/lib/python3.9/site-packages',
]
USER_BASE: '/home/eric/.local' (exists)
USER_SITE: '/home/eric/.local/lib/python3.9/site-packages' (doesn't exist)
ENABLE_USER_SITE: False
</code></pre>
<pre><code>eric@ubuntu-server:~$ python3 -m site
sys.path = [
'/home/eric',
'/usr/lib/python38.zip',
'/usr/lib/python3.8',
'/usr/lib/python3.8/lib-dynload',
'/home/eric/.local/lib/python3.8/site-packages',
'/usr/local/lib/python3.8/dist-packages',
'/usr/lib/python3/dist-packages',
]
USER_BASE: '/home/eric/.local' (exists)
USER_SITE: '/home/eric/.local/lib/python3.8/site-packages' (exists)
ENABLE_USER_SITE: True
</code></pre>
<p>The server is running Ubuntu 20.04 LTS, which does not include Python3.9. This was installed later.</p>
<p>What am I not seeing? Thanks.</p>
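A likely culprit, rather than the venv itself: <code>configparser.read()</code> silently ignores files it cannot open, and the relative <code>"config.ini"</code> is resolved against the service's current working directory (e.g. <code>/home/mqtt</code>), not the script's directory. A sketch that anchors the path to the script and fails loudly (the helper name is illustrative):

```python
import configparser
import os

def load_broker_address(config_dir=None):
    # Default to the directory containing this script, not the CWD.
    if config_dir is None:
        config_dir = os.path.dirname(os.path.abspath(__file__))
    config_file = os.path.join(config_dir, "config.ini")
    config = configparser.ConfigParser()
    if not config.read(config_file):  # read() returns the list of files it parsed
        raise FileNotFoundError(f"could not read {config_file}")
    return config["mqtt_broker"]["address"]
```

Checking the return value of `read()` turns the misleading `KeyError: 'mqtt_broker'` into an immediate, self-explanatory error.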
| <python><python-venv><configparser> | 2023-08-21 18:13:03 | 0 | 901 | ElToro1966 |
76,947,787 | 10,308,255 | How to calculate percentage change within a group and across a group? | <p>I have seen multiple posts asking this question but have been unable to get the to work with my dataset.</p>
<p>I have a dataframe containing summary information (output from a <code>.groupby</code> and <code>describe()</code> operation).</p>
<pre><code>data = [
[123456, "2017", 8.0, 150.235],
[123456, "2018", 8.0, 202.5],
[123456, "2019", 7.0, 168.526],
[123456, "2020", 6.0, 175.559],
[123456, "2021", 8.0, 206.667],
[789101, "2017", 8.0, 228.9],
[789101, "2018", 5.0, 208]
]
df = pd.DataFrame(
data,
columns=[
"ID",
"year",
"count",
"mean",
],
)
df
</code></pre>
<p><a href="https://i.sstatic.net/FmzK6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FmzK6.png" alt="enter image description here" /></a></p>
<p>I am trying to do 2 things:</p>
<ol>
<li>Calculate the percentage change within <code>ID</code> from <code>year</code> to <code>year</code>.</li>
<li>Calculate the percentage change across <code>ID</code>, i.e. from the first <code>year</code> to the latest <code>year</code>: what is the overall percentage change?</li>
</ol>
<p>Things I've tried:</p>
<ol>
<li><code>df["pct_ch"] = df.groupby(["ID", "year"])["mean"].apply(pd.Series.pct_change) + 1</code> as suggested <a href="https://stackoverflow.com/questions/543223">here</a>; unfortunately, this returns a column of all <code>NaN</code>.</li>
<li><code>df["pct_change"] = df.groupby(["ID", "year"])["mean"].pct_change(-1)</code> as suggested <a href="https://stackoverflow.com/questions/69828840/how-to-find-percent-change-by-row-within-groups-python-pandas">here</a>, which also returns a column of all <code>NaN</code> values.</li>
</ol>
<p>I am curious what I am doing wrong (why am I getting all <code>NaN</code>?) and how I can calculate these percentage-change columns.</p>
<p>Ideal output would look something like this:
<a href="https://i.sstatic.net/Rp4tM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rp4tM.png" alt="enter image description here" /></a></p>
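The all-<code>NaN</code> column comes from grouping by both <code>ID</code> and <code>year</code>: every group then contains a single row, so <code>pct_change</code> has nothing to compare against. Grouping by <code>ID</code> alone covers part 1, and a first-to-last aggregate covers part 2 (a sketch, assuming rows are already sorted by year within each <code>ID</code>, as in the example):

```python
import pandas as pd

df = pd.DataFrame(
    [[123456, "2017", 8.0, 150.235],
     [123456, "2018", 8.0, 202.5],
     [123456, "2019", 7.0, 168.526],
     [123456, "2020", 6.0, 175.559],
     [123456, "2021", 8.0, 206.667],
     [789101, "2017", 8.0, 228.9],
     [789101, "2018", 5.0, 208]],
    columns=["ID", "year", "count", "mean"],
)

# 1. Year-over-year change within each ID (first row of each ID is NaN):
df["pct_change"] = df.groupby("ID")["mean"].pct_change()

# 2. Overall change per ID, first year to latest year:
overall = df.groupby("ID")["mean"].agg(lambda s: s.iloc[-1] / s.iloc[0] - 1)
```

Multiply by 100 if a percentage rather than a fraction is wanted.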
| <python><pandas><dataframe><percentage> | 2023-08-21 18:10:51 | 1 | 781 | user |
76,947,564 | 785,404 | How can I quickly sum over a Pandas groupby object while handling NaNs? | <p>I have a <code>DataFrame</code> with <code>key</code> and <code>value</code> columns. <code>value</code> is sometimes NA:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
'key': np.random.randint(0, 1_000_000, 100_000_000),
'value': np.random.randint(0, 1_000, 100_000_000).astype(float),
})
df.loc[df.value == 0, 'value'] = np.nan
</code></pre>
<p>I want to group by <code>key</code> and sum over the <code>value</code> column. If any <code>value</code> is NA for a <code>key</code>, I want the sum to be NA.</p>
<p>The code in <a href="https://stackoverflow.com/a/42771217/785404">this answer</a> takes 35.7 seconds on my machine:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby('key')['value'].apply(np.array).apply(np.sum)
</code></pre>
<p>This is a lot slower than what is theoretically possible. The built-in Pandas <code>SeriesGroupBy.sum</code> takes 6.31 seconds on my machine:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby('key')['value'].sum()
</code></pre>
<p>but it doesn't support NA handling (see <a href="https://github.com/pandas-dev/pandas/issues/15675" rel="nofollow noreferrer">this GitHub issue</a>).</p>
<p>What code can I write to get comparable performance to the built-in operator while still handling NaNs?</p>
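One fast approach (a sketch): let the built-in group sum skip NaNs as usual, then overwrite any group that contained a NaN. Both steps are vectorised groupby operations, so the cost stays close to the plain <code>.sum()</code>.

```python
import numpy as np
import pandas as pd

def group_sum_strict(df, key="key", value="value"):
    sums = df.groupby(key)[value].sum()               # fast built-in sum, skips NaN
    has_na = df[value].isna().groupby(df[key]).any()  # groups containing any NaN
    return sums.mask(has_na)                          # those groups become NaN
```

Both results are indexed by `key`, so `mask` aligns them automatically.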
| <python><pandas><performance><group-by> | 2023-08-21 17:38:10 | 1 | 2,085 | Kerrick Staley |
76,947,468 | 5,561,472 | How to find number of decimal digits for currency in python? | <p>I need to find a number of decimal digits for currency (e.g. 2 for USD, 0 for yen).</p>
<p>I can do it based on locale:</p>
<pre class="lang-py prettyprint-override"><code>import locale
locale.setlocale(locale.LC_ALL, 'en_us')
print(locale.localeconv()['frac_digits']) # 2
locale.setlocale(locale.LC_ALL, 'ja')
print(locale.localeconv()['frac_digits']) # 0
</code></pre>
<p>Is it possible to get the same information given a currency code rather than a locale?</p>
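The mapping is defined by ISO 4217, not by locale. The <code>babel</code> package exposes it directly as <code>babel.numbers.get_currency_precision("JPY")</code>; if a dependency is unwanted, a small lookup table works. A sketch (only a subset of the standard is shown here; the listed codes are illustrative, consult ISO 4217 for the full table):

```python
# Subset of the ISO 4217 minor-unit table; most currencies use 2 decimal digits.
CURRENCY_FRAC_DIGITS = {
    "JPY": 0, "KRW": 0, "VND": 0,           # zero-decimal currencies
    "BHD": 3, "KWD": 3, "OMR": 3, "TND": 3, # three-decimal currencies
}

def frac_digits(currency_code: str) -> int:
    # ISO 4217 default for currencies not listed above is 2.
    return CURRENCY_FRAC_DIGITS.get(currency_code.upper(), 2)
```

This keys on the currency code alone, so no locale needs to be set.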
| <python> | 2023-08-21 17:20:46 | 1 | 6,639 | Andrey |
76,947,236 | 11,427,765 | Groupby puzzle using python | <p>I've the following dataframe :</p>
<pre><code>import pandas as pd
# Create a list of data
data = [
["8582", 1, None],
["8586", 2, "8582"],
["8585", 2, "8582"],
["8593", 2, "8582"],
["8584", 2, "8582"],
["8590", 3, "8586"],
["8583", 3, "8593"],
["8597", 3, "8584"],
["8587", 3, "8585"],
["8674", 3, "8586"],
["8589", 3, "8586"],
["8588", 3, "8585"],
]
# Create a DataFrame
df = pd.DataFrame(data, columns=["Nod", "Levels", "Parents"])
</code></pre>
<p><a href="https://i.sstatic.net/Twg9z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Twg9z.png" alt="enter image description here" /></a></p>
<p>I want to achieve the following structure:
<a href="https://i.sstatic.net/l3XQk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l3XQk.png" alt="enter image description here" /></a></p>
<p>Does anyone know how to achieve this using Python? Thank you.</p>
| <python><dataframe> | 2023-08-21 16:42:02 | 1 | 387 | Gogo78 |
76,947,233 | 5,722,359 | How to make the ttk.Treeview widget not remember the last item that was mouse-clicked or selected? | <p>I notice that after an item in the <code>ttk.Treeview</code> has been selected, even after removing its selection, the <code>ttk.Treeview</code> widget will still remember the last item that was mouse-clicked.</p>
<pre><code>import tkinter as tk
from tkinter import ttk
class App:
def __init__(self):
self.root = tk.Tk()
self.tree = ttk.Treeview(self.root)
self.tree.pack(side="top", fill="both")
self.tree.bind("<Leave>", self.remove_selection_appearance)
for i in range(20):
self.tree.insert("", "end", text="Item %s" % i)
self.root.mainloop()
def remove_selection_appearance(self, event):
selected_items = event.widget.selection()
print(f"{selected_items=}")
event.widget.selection_remove(selected_items)
# event.widget.selection_toggle(selected_items)
# event.widget.selection_set("")
if __name__ == "__main__":
app = App()
</code></pre>
<p>Above is a sample code to illustrate this behavior. For example:</p>
<ol>
<li>When the mouse pointer enters and then leaves the widget, it will print <code>selected_items=()</code>.</li>
<li>Next, if <code>Item 3</code> is clicked on and the mouse pointer then moves out of the widget, the script will print <code>selected_items=('I004',)</code>.</li>
<li>Next, by pressing the <code>Shift</code> key and clicking on <code>Item 7</code> and then moving the mouse pointer out of the widget, the script will print <code>selected_items=('I004', 'I005', 'I006', 'I007', 'I008')</code>.</li>
<li>Finally, by pressing the <code>Shift</code> key and clicking on <code>Item 0</code> and then moving the mouse pointer out of the widget, the script will print <code>selected_items=('I001', 'I002', 'I003', 'I004')</code>.</li>
</ol>
<p>The latter two selections show that group selection started from and ended at <code>'I004'</code>, i.e. <code>Item 3</code>, respectively, despite the <code>.selection_remove()</code> method of the <code>ttk.Treeview</code> widget being used to remove the selections. Also, I had assumed that after removing any selections, the last item that was mouse clicked would similarly be removed/forgotten.</p>
<p>Is there a way to cause the <code>ttk.Treeview</code> widget not to remember the last item that was mouse-clicked? Or is this behaviour baked into the widget?</p>
| <python><tkinter><treeview> | 2023-08-21 16:41:26 | 1 | 8,499 | Sun Bear |
76,946,980 | 10,308,565 | Psycopg error when using register_type() with Newrelic agent | <p>When using Psycopg's <code>register_type()</code> function while having Newrelic's agent initialized, I get the following error:</p>
<pre><code>TypeError: argument 2 must be a connection, cursor or None
</code></pre>
<p>because the cursor is being wrapped at runtime with the <code>newrelic.hooks.database_psycopg2.CursorWrapper</code> class.</p>
| <python><psycopg2><newrelic> | 2023-08-21 16:01:41 | 1 | 394 | Alexander |
76,946,835 | 16,527,170 | Extract word and number from HTML file content using Python | <p>HTML content:</p>
<pre><code> <tr>
<td bgcolor="#FFFFFF" style="padding-left:0pt;padding-Right:0.75pt;padding-Top:0.75pt;padding-Bottom:0pt;width:38.46%;" valign="top">
<p style="margin-bottom:0pt;margin-top:0pt;margin-left:13.7pt;;text-indent:0pt;;color:#000000;font-size:9pt;font-family:Times New Roman;font-weight:normal;font-style:normal;text-transform:none;font-variant: normal;">Basic earnings (loss) per share</p></td>
<td bgcolor="#FFFFFF" style="padding-left:0pt;padding-Right:0.75pt;padding-Top:0.75pt;padding-Bottom:0pt;width:1.08%;" valign="bottom">
<p style="text-align:right;margin-bottom:0pt;margin-top:0pt;margin-left:0pt;;text-indent:0pt;;color:#000000;font-size:9pt;font-family:Times New Roman;font-weight:normal;font-style:normal;text-transform:none;font-variant: normal;"> </p></td>
<td bgcolor="#FFFFFF" style="padding-left:0pt;padding-Right:0.75pt;padding-Top:0.75pt;padding-Bottom:0pt;width:1%;white-space:nowrap;" valign="bottom">
<p style="margin-bottom:0pt;margin-top:0pt;margin-left:0pt;;text-indent:0pt;;color:#000000;font-size:9pt;font-family:Times New Roman;font-weight:normal;font-style:normal;text-transform:none;font-variant: normal;"> </p></td>
<td bgcolor="#FFFFFF" style="padding-left:0pt;padding-Right:0.75pt;padding-Top:0.75pt;padding-Bottom:0pt;width:9.22%;white-space:nowrap;" valign="bottom">
<p style="text-align:right;margin-bottom:0pt;margin-top:0pt;margin-left:0pt;;text-indent:0pt;;color:#000000;font-size:9pt;font-family:Times New Roman;font-weight:normal;font-style:normal;text-transform:none;font-variant: normal;">0.08</p></td>
<td bgcolor="#FFFFFF" style="padding-left:0pt;padding-Right:0.75pt;padding-Top:0.75pt;padding-Bottom:0pt;width:1%;white-space:nowrap;" valign="bottom">
<p style="margin-bottom:0pt;margin-top:0pt;margin-left:0pt;;text-indent:0pt;;color:#000000;font-size:9pt;font-family:Times New Roman;font-weight:normal;font-style:normal;text-transform:none;font-variant: normal;"> </p></td>
<td bgcolor="#FFFFFF" style="padding-left:0pt;padding-Right:0.75pt;padding-Top:0.75pt;padding-Bottom:0pt;width:1.08%;" valign="bottom">
<p style="text-align:right;margin-bottom:0pt;margin-top:0pt;margin-left:0pt;;text-indent:0pt;;color:#000000;font-size:9pt;font-family:Times New Roman;font-weight:normal;font-style:normal;text-transform:none;font-variant: normal;"> </p></td>
<td bgcolor="#FFFFFF" style="padding-left:0pt;padding-Right:0.75pt;padding-Top:0.75pt;padding-Bottom:0pt;width:1%;white-space:nowrap;" valign="bottom">
<p style="margin-bottom:0pt;margin-top:0pt;margin-left:0pt;;text-indent:0pt;;color:#000000;font-size:9pt;font-family:Times New Roman;font-weight:normal;font-style:normal;text-transform:none;font-variant: normal;"> </p></td>
<td bgcolor="#FFFFFF" style="padding-left:0pt;padding-Right:0.75pt;padding-Top:0.75pt;padding-Bottom:0pt;width:9.22%;white-space:nowrap;" valign="bottom">
<p style="text-align:right;margin-bottom:0pt;margin-top:0pt;margin-left:0pt;;text-indent:0pt;;color:#000000;font-size:9pt;font-family:Times New Roman;font-weight:normal;font-style:normal;text-transform:none;font-variant: normal;">0.65</p></td>
<td bgcolor="#FFFFFF" style="padding-left:0pt;padding-Right:0.75pt;padding-Top:0.75pt;padding-Bottom:0pt;width:1%;white-space:nowrap;" valign="bottom">
<p style="margin-bottom:0pt;margin-top:0pt;margin-left:0pt;;text-indent:0pt;;color:#000000;font-size:9pt;font-family:Times New Roman;font-weight:normal;font-style:normal;text-transform:none;font-variant: normal;"> </p></td>
<td bgcolor="#FFFFFF" style="padding-left:0pt;padding-Right:0.75pt;padding-Top:0.75pt;padding-Bottom:0pt;width:1.08%;" valign="bottom">
</code></pre>
<p>I want to extract the number that comes after the line containing the word <code>Basic</code>.
The output should be 0.08.</p>
<p>My code (below) is able to extract the line containing the word "Basic", but not the value "0.08":</p>
<pre><code>import re
# Use regular expression to find the line containing "Basic"
pattern = re.compile(r".*Basic.*", re.IGNORECASE)
matches = re.findall(pattern, html_content)
if matches:
print("Found line with 'Basic':", matches[0])
else:
print("No line containing 'Basic' found.")
</code></pre>
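A pattern anchored on tag boundaries avoids matching the digits inside the style attributes: after "Basic", lazily skip to the first element whose text is a bare number. A sketch (the trimmed <code>html_content</code> stands in for the table row above; for anything more involved, parsing with BeautifulSoup and walking to the next numeric cell is more robust):

```python
import re

html_content = """
<td style="width:38.46%"><p>Basic earnings (loss) per share</p></td>
<td><p>&#160;</p></td>
<td><p>0.08</p></td>
<td><p>0.65</p></td>
"""  # trimmed stand-in for the table row in the question

# After "Basic", find the first tag text that is just a number:
# '>' then optional whitespace, digits (with optional decimal part), then '<'.
match = re.search(r"Basic.*?>\s*(\d+(?:\.\d+)?)\s*<", html_content,
                  re.IGNORECASE | re.DOTALL)
value = float(match.group(1)) if match else None
```

The `>`/`<` anchors mean numbers inside attributes (like `width:38.46%`) are never candidates, because those digits are not directly preceded by a closing `>`.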
| <python><beautifulsoup> | 2023-08-21 15:42:01 | 1 | 1,077 | Divyank |
76,946,748 | 10,198,627 | Flask server session loses data | <p>I use the <code>flask-session</code> package to save multiple python objects during a session on the server. The objects are not JSON-serializable without a lot of additional effort, so using the filesystem store seemed like a good idea compared to client-side sessions.</p>
<p>Reworking the backend to not save as much data in a session is currently not feasible.</p>
<p>Please find below the flask code:</p>
<pre><code>from flask_session import Session
from flask import Flask, session
app = Flask(__name__)
app.config['SESSION_TYPE'] = "filesystem"
app.config['SECRET_KEY'] = "super secret key"
app.config['SESSION_COOKIE_SAMESITE'] = "None"
app.config['SESSION_COOKIE_SECURE'] = True
app.config['SESSION_PERMANENT'] = False
server_session = Session()
server_session.init_app(app)
@app.route('/write', methods=['POST'])
def write():
key = request.form['key']
# For simplicity. In the real application, this is a complex Python object instantiated by the backend
value = request.form['value']
session[key] = value
print(session) # The key was successfully added
return ""
@app.route('/read', methods=['POST'])
def read():
print(session)
key = request.form['key']
value = session[key] # Key error
print(value)
return ""
</code></pre>
<p>Those endpoints are called via JavaScript <code>fetch</code> calls, also concurrently. However, I can guarantee that there are no parallel writes to the same key in the session dict, and reading only ever happens after a write has returned successfully.</p>
<p>I have multiple parallel requests to <code>/write</code>, each posting a different key in the payload. During <code>/read</code>, however, I get a <code>KeyError</code> on most keys. Sometimes 1 out of 10 keys is present, sometimes 2.</p>
<p>What can I do to keep a consistent session across multiple concurrent requests (if running the requests serially is not an option)?</p>
| <python><flask><session><concurrency> | 2023-08-21 15:31:11 | 0 | 442 | a.ilchinger |
76,946,695 | 8,992,901 | How to create a SnowFlake query with parameters having single quotes? | <p>I'm generating a SnowFlake query that looks like this:</p>
<pre><code>name_list = ("O'Reley", "de'Medici")
query = f'''
SELECT *
FROM "mydb"."myschema"."mytable"
WHERE NAME IN {name_list}
'''
</code></pre>
<p>How should I generate my <code>name_list</code> so that my SF query is</p>
<pre class="lang-sql prettyprint-override"><code>SELECT *
FROM "mydb"."myschema"."mytable"
WHERE NAME IN ('O''Reley', 'de''Medici')
</code></pre>
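The standard SQL escape is to double each embedded single quote. A small helper keeps the f-string readable (a sketch; if the names come from user input, prefer the connector's parameter binding, e.g. an <code>IN</code> clause built from placeholders, over string interpolation):

```python
def sql_string_list(names):
    # "O'Reley" -> 'O''Reley'; wrap each item and join into an IN (...) list.
    return "(" + ", ".join("'" + n.replace("'", "''") + "'" for n in names) + ")"

name_list = ("O'Reley", "de'Medici")
query = f'''
SELECT *
FROM "mydb"."myschema"."mytable"
WHERE NAME IN {sql_string_list(name_list)}
'''
```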
| <python><string><snowflake-cloud-data-platform> | 2023-08-21 15:23:16 | 1 | 394 | Mr369 |
76,946,635 | 2,472,451 | Robotframework Return SQL Query as CSV (not dict) | <p>Version: Robot Framework 3.2.2 (Python 3.11.4 on darwin)</p>
<p>I'm using the DatabaseLibrary with psycopg2 to query a simple table in PostgreSQL:</p>
<p><code>${psql_results}= Query SELECT * FROM myTable</code></p>
<p>The problem is that this library returns the SQL results as dict, not simple comma-separated values. In most cases that is fine; although cumbersome and requiring additional code, I can use things like json.dumps() or DataFrame.from_dict() type conversions to get the CSV format, which is what I need.</p>
<p>Depending on the complexity of the data (eg:- Decimal, etc), the above methods hit a brick wall.</p>
<p>There are probably some ways to circumvent those blocks and use the above methods, but I was just wondering, is there any way to execute an SQL Query from RobotFramework for PostgreSQL and just simply get comma separated column values?</p>
<p>I'm getting this:</p>
<pre><code>[
{first:"sponge", last:"bob", skill:"line cook"}
{first:"Squid", last:"Ward", skill:"Clarinet"
]
</code></pre>
<p>Whereas what I want is:</p>
<pre><code>first,last,skill
sponge,bob,"line cook"
squid,ward,clarinet
</code></pre>
<p>Maybe there's a different library or something I can use that I'm not aware of?</p>
<p>Thanks!</p>
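Since the <code>Query</code> keyword already hands back the rows, one option is a small custom keyword library that serialises them with the stdlib <code>csv</code> module, which also handles quoting of fields containing commas or quotes. A sketch, assuming the rows arrive as the dicts shown above:

```python
import csv
import io

def rows_to_csv(rows):
    """Turn a list of dict rows into one CSV string (header + data lines)."""
    if not rows:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Exposed as a keyword, this could be called from Robot Framework as, e.g., `${csv}=    Rows To Csv    ${psql_results}` (the keyword name here is illustrative).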
| <python><postgresql><csv><robotframework> | 2023-08-21 15:14:05 | 0 | 635 | luci5r |
76,946,618 | 1,422,096 | How to call a shell command from pythonw (with no explorer.exe running) without a terminal window flashing? | <p>I am running a Python program on Windows with:</p>
<ul>
<li>no <code>explorer.exe</code> running, no desktop, no start menu, typical for embedded computers. If you want to try, you can emulate this by killing <code>explorer.exe</code> in the Task manager ; note that I do it differently but out of topic here</li>
<li><code>pythonw.exe</code> instead of <code>python.exe</code> to avoid a to have a <code>cmd.exe</code> terminal window</li>
</ul>
<p>If you run <code>pythonw test.py</code> with</p>
<pre><code>import os
os.system("shutdown /r /t 3600")
</code></pre>
<p>you will see a terminal window flash for less than 1 second.</p>
<p>Same problem with <code>subprocess.call(...)</code>.</p>
<p>Same problem with <code>subprocess.run(...)</code> like in <a href="https://stackoverflow.com/questions/67342897/is-there-a-command-in-python-for-instant-shutdown-on-windows-10/67342923#67342923">Is there a command in Python for instant shutdown on Windows 10?</a>.</p>
<p>(Note: if explorer.exe is killed, you can call the Task Manager with <kbd>CTRL+SHIFT+ESCAPE</kbd> and <code>Run</code> command to launch <code>pythonw test.py</code>)</p>
<p><strong>Question: How to call Windows tools like <code>shutdown</code> with 1) no explorer.exe 2) pythonw.exe, without seeing a cmd.exe window flashing?</strong></p>
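On Windows, the flash comes from the console that <code>os.system</code> (or a default <code>subprocess</code> call) allocates for the child process. <code>subprocess</code> can suppress it with the <code>CREATE_NO_WINDOW</code> process-creation flag. A sketch (the flag is Windows-only, so it is applied conditionally to keep the snippet runnable elsewhere):

```python
import subprocess
import sys

def run_hidden(cmd):
    kwargs = {}
    if sys.platform == "win32":
        # The child process gets no console window at all.
        kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
    return subprocess.run(cmd, capture_output=True, **kwargs)

# e.g. run_hidden(["shutdown", "/r", "/t", "3600"])
```

On Pythons older than 3.7, the equivalent is passing a `STARTUPINFO` with `STARTF_USESHOWWINDOW` and `wShowWindow = SW_HIDE`.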
| <python><windows><cmd><embedded><pythonw> | 2023-08-21 15:11:16 | 1 | 47,388 | Basj |
76,946,601 | 15,218,250 | how to reverse queryset without getting error: AttributeError: 'reversed' object has no attribute 'values' | <p>I want to reverse the order of the following queryset, but I get the error <code>AttributeError: 'reversed' object has no attribute 'values'</code>. I believe this is because of the <code>reversed_instances.values()</code>. Without changing <code>reversed_instances.values()</code>, how can I reverse the order of instances? Thank you, and please leave a comment at the bottom.</p>
<pre><code>def view(request, pk):
instances = Calls.objects.filter(var=var).order_by('-date')[:3]
reversed_instances = reversed(instances )
return JsonResponse({"calls":list(reversed_instances.values())})
</code></pre>
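<code>reversed()</code> returns a plain iterator, which has no <code>.values()</code> method; that attribute lives on the queryset. Calling <code>.values()</code> on the queryset first and reversing the materialised rows keeps the same JSON shape, i.e. in the view: <code>return JsonResponse({"calls": list(reversed(instances.values()))})</code>. The core pattern with plain data (a sketch; the row contents are illustrative):

```python
# Stand-in for Calls.objects.filter(var=var).order_by('-date')[:3].values(),
# i.e. newest-first rows from the database:
rows = [
    {"id": 3, "date": "2023-03-01"},
    {"id": 2, "date": "2023-02-01"},
    {"id": 1, "date": "2023-01-01"},
]

# Materialise first, then reverse; a reversed() iterator has no .values().
calls = list(reversed(rows))
```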
| <python><python-3.x><django><django-models><django-views> | 2023-08-21 15:08:41 | 1 | 613 | coderDcoder |