| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string, nullable) |
|---|---|---|---|---|---|---|---|---|
75,668,665
| 522,209
|
Run a hook in one plugin out of many, in Pluggy
|
<p>I'm using <a href="https://github.com/pytest-dev/pluggy" rel="nofollow noreferrer">Pluggy</a> and I would like to run a hook in one of the plugins.</p>
<p>Having this code:</p>
<pre><code>import pluggy
import myspec
import impl1
import impl2
pm = pluggy.PluginManager("my-project-name")
pm.add_hookspecs(myspec)
pm.register(impl1, name="my_impl1")
pm.register(impl2, name="my_impl2")
</code></pre>
<p>How can I run a specific hook in "impl1" only?</p>
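One possible approach (a sketch against pluggy's documented API; the spec, impls, and the hook name `myhook` are stand-ins for `myspec`/`impl1`/`impl2`): `PluginManager.subset_hook_caller()` returns a hook caller with the listed plugins removed, so calling it runs the hook only in the remaining plugin.

```python
import pluggy

hookspec = pluggy.HookspecMarker("my-project-name")
hookimpl = pluggy.HookimplMarker("my-project-name")

class MySpec:
    @hookspec
    def myhook(self, arg):
        """Example hook spec (hypothetical name)."""

class Impl1:
    @hookimpl
    def myhook(self, arg):
        return f"impl1:{arg}"

class Impl2:
    @hookimpl
    def myhook(self, arg):
        return f"impl2:{arg}"

pm = pluggy.PluginManager("my-project-name")
pm.add_hookspecs(MySpec)
pm.register(Impl1(), name="my_impl1")
pm.register(Impl2(), name="my_impl2")

# Build a caller with my_impl2 removed, so only my_impl1 runs:
only_impl1 = pm.subset_hook_caller(
    "myhook", remove_plugins=[pm.get_plugin("my_impl2")]
)
print(only_impl1(arg="x"))  # → ['impl1:x']
```

`pm.hook.myhook(...)` still calls every registered plugin; the subset caller is a separate object you can keep around.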
|
<python><plugins>
|
2023-03-08 01:25:15
| 0
| 896
|
Helgi Borg
|
75,668,574
| 11,116,696
|
How to run parallel processing in Pyomo?
|
<p><strong>Background</strong></p>
<p>I am trying to solve a MILP problem using pyomo with the 'cbc' solver. The optimisation problem runs over time-series data and is very large (3005 rows, 3011 columns (2010 with objective) and 15998 elements).</p>
<p><strong>What I am attempting to do</strong></p>
<p>I've read that 'cbc' supports parallel processing. I've tried to implement it, but without luck. I've got the latest 'cbc' binaries.</p>
<p><strong>What I've tried</strong></p>
<pre><code># Solve the model
opt = SolverFactory('cbc')
opt.options['threads'] = 16
opt.options['parallel'] = True
instance = model.create_instance("scenario1.dat")
results = opt.solve(instance, tee=True)
instance.solutions.load_from(results)
</code></pre>
<p>However, CPU usage remains flat and low, and the log shows that the solver does not even recognise the <code>options['parallel']</code> setting.</p>
<pre><code>command line - C:\cbc-win64\cbc.exe -sec 100 -timeMode elapsed -threads 16 -parallel True -printingOptions all -import C:\Users\USER\AppData\Local\Temp\tmpfpy0lgal.pyomo.lp -stat=1 -solve -solu C:\Users\USER\AppData\Local\Temp\tmpfpy0lgal.pyomo.soln (default strategy 1)
seconds was changed from 1e+100 to 100
Option for timeMode changed from cpu to elapsed
threads was changed from 0 to 16
No match for parallel - ? for list of commands
No match for True - ? for list of commands
</code></pre>
<p><strong>Help Requested</strong></p>
<ul>
<li>Does anyone know what I am doing wrong?</li>
<li>Could it be that I am misunderstanding what Pyomo multiprocessing does (i.e. I am trying to solve one big complex problem, not run multiple independent solves in parallel)?</li>
</ul>
<p>Does anyone have any advice?</p>
|
<python><optimization><parallel-processing><pyomo>
|
2023-03-08 01:05:28
| 1
| 601
|
Bobby Heyer
|
75,668,544
| 178,315
|
Curl fails with SSL errors 56 and 35 when talking to an HTTPS Python web server
|
<p>I have setup my own HTTPS server using Python:</p>
<pre><code>from http.server import HTTPServer, BaseHTTPRequestHandler
import ssl
class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.end_headers()
self.wfile.write(b'Hello, secure world!\n')
httpd = HTTPServer(('127.0.0.1', 4443), SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(
certfile="/etc/letsencrypt/live/.../fullchain.pem",
keyfile="/etc/letsencrypt/live/.../privkey.pem")
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
</code></pre>
<p>For security purposes, my server runs in a VirtualBox VM with port forwarding from 443 to 4443. When I query the HTTPS server via curl locally on the VM, I get a response, but there is also an error:</p>
<pre><code>$ curl -k 'https://localhost:4443'
Hello, secure world!
curl: (56) OpenSSL SSL_read: error:0A000126:SSL routines::unexpected eof while reading, errno 0
</code></pre>
<p>However, when I try to query it from the host, I only get an error:</p>
<pre><code>$ curl -k 'https://127.0.0.1'
curl: (35) Unknown SSL protocol error in connection to 127.0.0.1:443
</code></pre>
<p>As you can see, in both calls I disable cert verification because I query my server by IP address and not the actual domain. The certs are for my personal domain and are signed by Let's Encrypt. When I tried using the actual domain on the host, I got the same error (35).</p>
<p>Why do I get errors in the curl call in the guest VM and why is it different when calling from the host?</p>
|
<python><ssl><curl>
|
2023-03-08 00:58:08
| 1
| 6,134
|
Sergiy Belozorov
|
75,668,478
| 751,231
|
Yapf wanting a linebreak and tabbing when it shouldn't
|
<p>I'm having an issue with yapf behaving differently on my local machine and on a Jenkins pipeline build.</p>
<p>This is my line:</p>
<pre><code>groups = thing.func(
*[iter(people)] * constants.MAX_PEOPLE_PER_CREATE)
</code></pre>
<p>yapf keeps flagging this as wrong (and failing, which fails the whole build), insisting the correct form is:</p>
<pre><code>groups = thing.func(*[iter(people)] *
constants.MAX_PEOPLE_PER_CREATE)
</code></pre>
<p>I'm assuming it's some kind of version mismatch in Docker, but I can't figure out where it might be. I've confirmed all my Python libraries are the same. Even if I force the code to look like what yapf wants, it then complains that you can't have a line break after an operator.</p>
|
<python><yapf>
|
2023-03-08 00:38:57
| 0
| 603
|
Rev Tyler
|
75,668,436
| 19,113,780
|
Django docker: docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed
|
<p>I want to deploy my Django project with Docker and uWSGI on Windows 11, but I get the following error:</p>
<pre><code>docker build . -t djangohw
docker run -it --rm djangohw -p 8080:80
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-p": executable file not found in $PATH: unknown.
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM python:3.9
ENV DEPLOY=1
WORKDIR /opt/tmp
COPY . .
RUN pip install -r requirements.txt
EXPOSE 80
CMD ["sh", "start.sh"]
</code></pre>
<p><strong>start.sh</strong></p>
<pre><code>python3 manage.py makemigrations board
python3 manage.py migrate
uwsgi --module=mysite.wsgi:application \
--env DJANGO_SETTINGS_MODULE=mysite.settings \
--master \
--http=0.0.0.0:0 \
--processes=5 \
--harakiri=20 \
--max-requests=5000 \
--vacuum
</code></pre>
<p><strong>requirements.txt</strong></p>
<pre><code>django==4.1.3
django-cors-headers
pytest
pytest-django
coverage
uwsgi
</code></pre>
|
<python><django><docker>
|
2023-03-08 00:29:17
| 1
| 317
|
zhaozk
|
75,668,412
| 3,453,768
|
Regex search string with partial matches allowed
|
<p>Suppose I have a search pattern like <code>^FOO.B.A.R</code>, and I want to check whether a string matches the full search pattern, <em>but</em>, if the string is shorter than the search pattern, it's allowed to match only part of it.</p>
<p>That is:</p>
<ul>
<li>If the string is 1 character long, it must match <code>^F</code></li>
<li>If the string is 2 characters long, it must match <code>^FO</code></li>
<li>If the string is 3 characters long, it must match <code>^FOO</code></li>
<li>If the string is 4 characters long, it must match <code>^FOO.</code></li>
<li>...</li>
<li>If the string is 9 or more characters long, it must match <code>^FOO.B.A.R</code></li>
</ul>
<p>Is there a way to specify this in the regex search pattern itself, or do I need to detect the length of the string and build the pattern accordingly?</p>
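One stdlib-only approach (a sketch, not the only way) is to expand the pattern into nested optional groups so any prefix of the atom sequence matches; the innermost atom gets a trailing `.*` so strings longer than the pattern only need to match its full length.

```python
import re

def prefix_regex(atoms):
    """Build a pattern matching any string consistent with a prefix of the
    given sequence of single-character regex atoms."""
    # innermost atom: once the whole pattern is consumed, anything may follow
    pattern = atoms[-1] + ".*"
    for atom in reversed(atoms[:-1]):
        pattern = f"{atom}(?:{pattern})?"   # each following atom is optional
    return "^(?:" + pattern + ")?$"

pat = prefix_regex(list("FOO.B.A.R"))
print(re.match(pat, "FOOXB") is not None)   # True: matches ^FOO.B
print(re.match(pat, "FX") is not None)      # False: 2 chars must match ^FO
```

The trailing `$` inside the outer group forces the whole string to be consumed, which is what makes a short string fail unless every character lines up with the pattern's prefix.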
|
<python><regex>
|
2023-03-08 00:24:34
| 1
| 2,397
|
LarrySnyder610
|
75,668,312
| 2,774,885
|
How does PySerial manage multiple processes/clients accessing the same device at the same time? (context management, maybe?)
|
<p>I've got a couple of IoT-type toys (a power meter, specifically) that provide an RS485 interface for configuration and monitoring. With a basic USB&lt;-&gt;RS485 bridge I'm able to communicate with the device, read the output data and have it respond to basic commands... all well and good.</p>
<p>The script is a super-basic use of <a href="https://pypi.org/project/pyserial/" rel="nofollow noreferrer">pySerial</a>. The <code>get_values_str</code> is just the series of bytes that "asks" the power meter for a set of readings, and <code>readline()</code> then grabs that data.</p>
<pre><code>import serial
from time import sleep
with serial.Serial('/dev/ttyUSB1', baudrate=115200, timeout=1) as serialHandle:
get_values_str = b':R50=1,2,1,\n'
for n in range(100):
serialHandle.write(get_values_str)
print(serialHandle.readline())
sleep(10)
</code></pre>
<p>I was (pleasantly?) surprised today when I fired up a <strong>second</strong> instance of the script by mistake, and both of them proceeded to return valid data at the same time. I was under the impression that access to the serial port/device was an exclusive thing (like a file handle?) and that I wouldn't be allowed to have multiple open contexts at the same time.</p>
<p>In a slightly more general sense, I'm looking to understand what the best practice might be for allowing multiple clients/processes to access this info... it <strong>seems</strong> like constantly opening and closing a context manager is the wrong thing to do -- maybe it's not expensive but it sure seems like a bad idea. If the ability to access the USB device from multiple places is a normal/supported thing then maybe it's <strong>not</strong> a big deal to have a bunch of context managers open at the same time, but this is the part where we're getting into an area that I don't have much knowledge on.</p>
<p>I guess ideally I'd like to just have an "always available" function that returns one line of data, but it's totally unclear to me what combination of context managers and/or generators and/or locking issues I need to think about here.</p>
|
<python><pyserial><contextmanager>
|
2023-03-08 00:02:44
| 1
| 1,028
|
ljwobker
|
75,668,267
| 2,487,330
|
Filter by multiple items in lists?
|
<p>Given a DataFrame with a list column and a list of items not in the data frame:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
"sets": [
[1, 2, 3],
[4],
[9, 10],
[2, 12],
[6, 6, 1],
[2, 0, 1],
[1, 1, 4],
[2, 7, 2],
]
})
items = [1, 2]
</code></pre>
<p>Is there an efficient way to filter the table to only have rows where the list column value contains a) one of the items in the list and b) all of the items in the list?</p>
<p>Expected result for <strong>ALL</strong>:</p>
<pre><code>shape: (2, 1)
┌───────────┐
│ sets │
│ --- │
│ list[i64] │
╞═══════════╡
│ [1, 2, 3] │
│ [2, 0, 1] │
└───────────┘
</code></pre>
<p>Expected result for <strong>ANY</strong>:</p>
<pre><code>shape: (6, 1)
┌───────────┐
│ sets │
│ --- │
│ list[i64] │
╞═══════════╡
│ [1, 2, 3] │
│ [2, 12] │
│ [6, 6, 1] │
│ [2, 0, 1] │
│ [1, 1, 4] │
│ [2, 7, 2] │
└───────────┘
</code></pre>
|
<python><python-polars>
|
2023-03-07 23:54:38
| 3
| 645
|
Brian
|
75,668,255
| 18,032,104
|
How to reduce Pycharm memory usage?
|
<p>I recently started working on something that needs a lot of memory while using PyCharm. How can I reduce the memory usage to about 550MB? It is currently 1198MB. (I'm using <strong>Windows 10</strong>, <strong>PyCharm Community Edition 2022.2.4</strong>)</p>
|
<python><pycharm>
|
2023-03-07 23:51:26
| 1
| 314
|
CPP_is_no_STANDARD
|
75,668,153
| 1,867,985
|
matplotlib: 3 channel binary RGB image only shows black
|
<p>I have three 2D datasets that I'm trying to plot as an RGB image using <code>matplotlib</code>. I thought this would be quite straightforward but I'm finding it very frustrating.</p>
<p>Here's the bit of code I'm currently using:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.axes_rgb import RGBAxes
b = simg["B"].astype(bool).astype(int)
g = simg["G"].astype(bool).astype(int)
r = simg["R"].astype(bool).astype(int)
fig = plt.figure()
ax = RGBAxes(fig, [0.1, 0.1, 0.8, 0.8], pad=0.0)
ax.imshow_rgb(r, g, b, interpolation="none")
</code></pre>
<p>where <code>simg</code> is the dictionary with my single channel images in it, stored as <code>(n, m)</code> arrays with <code>dtype = uint16</code>. The original data is "almost binary", so I'm using the <code>astype</code>s to threshold it. For reference, each channel contains about 5-10% 1s after thresholding. This is working as intended so I don't think it's the problem, but I tried removing the <code>astype</code>s anyway and it doesn't fix the issue.</p>
<p>Here's the result of the above snippet:</p>
<p><a href="https://i.sstatic.net/dnX9b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dnX9b.png" alt="nothing but black" /></a></p>
<p>In that screenshot I've zoomed way in, and the area I marqueed off should be a green pixel. This data is present in the image, as rolling the mouse over this pixel shows that it indeed has a value of <code>[0, 1, 0]</code> in the tkinter window.</p>
<p>I thought this might have something to do with the <code>RGBAxes</code>, so I tried a much simpler method of displaying this data:</p>
<pre><code>rgb = np.dstack((r, g, b))
plt.imshow(rgb, interpolation="none")
</code></pre>
<p>This creates an <code>(n, m, 3)</code> array, which the <code>imshow</code> <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html" rel="nofollow noreferrer">documentation</a> says will be interpreted as an RGB image, but this also just plots a completely black image. I also tried all of the different available interpolation methods, and tried changing the plotting backend (currently <code>tkinter</code>, but I also tried <code>Qt5</code>).</p>
<p>The data plots just fine if I only plot one channel. For example:</p>
<pre><code>plt.imshow(b, cmap="binary_r", interpolation="none")
</code></pre>
<p>results in the image:</p>
<p><a href="https://i.sstatic.net/dxZqM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dxZqM.png" alt="working binary blue channel" /></a></p>
<p>which is exactly what I expect to see in that channel. So something about trying to plot the three channels at once is not working the way I expect it to.</p>
<p>I'd appreciate any guidance anyone can give here, and I'm happy to provide any additional information I can that might help. If I can't get this working I'll probably just try rolling my own solution using <code>matplotlib.cm.ScalarMappable</code> and manually combining the RGB channels, but if I go that route I'll also need to re-implement the features of <code>RGBAxes</code> (i.e. the separate, smaller R, G, and B plots beside the combined RGB image) which would be kind of tedious.</p>
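A likely culprit is dtype scaling rather than the plotting route: for integer arrays `imshow` interprets values on a 0–255 scale, so 0/1 data renders as near-black even though the values are "there" on mouse-over. A sketch (with random stand-in channels, since `simg` isn't available) that scales to 255; boolean arrays or floats in 0–1 work equally well:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # headless backend for this sketch
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# stand-ins for the thresholded channels: ~10% ones, int dtype like the post
r, g, b = ((rng.random((64, 64)) < 0.1).astype(int) for _ in range(3))

# int RGB arrays are read on a 0..255 scale, so a value of 1 is nearly black;
# scale to uint8 0/255 (or cast to bool, or to float in 0..1) before stacking
rgb = (np.dstack((r, g, b)) * 255).astype(np.uint8)

fig, ax = plt.subplots()
ax.imshow(rgb, interpolation="none")
```

The same scaling should make `RGBAxes.imshow_rgb` light up too, since it feeds the three channels through the same normalization rules.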
|
<python><matplotlib><visualization><imshow>
|
2023-03-07 23:32:48
| 1
| 325
|
realityChemist
|
75,668,091
| 6,379,197
|
Using pandas to access a text file
|
<p>I am using pandas to access a file whose content is as follows:</p>
<pre><code>English Word anger anticipation disgust fear joy negative positive sadness surprise trust Bengali Word
aback 0 0 0 0 0 0 0 0 0 0 বিস্মিত
abacus 0 0 0 0 0 0 0 0 0 1 অ্যাবাকাস
</code></pre>
<p>I am trying to read this file in Google colab using the following code.</p>
<pre><code>import pandas as pd
cols = ['English Word','anger ','anticipation ','disgust ','fear ','joy ','negative ','positive ','sadness ','surprise ','trust ','Bengali Word']
data=pd.read_csv('/content/gdrive/MyDrive/Bengali-NRC-EmoLex.txt', sep=' ', header=None, names=cols)
data
</code></pre>
<p>But my output is not correct. Can you please tell me how I can read the above text file in Python?</p>
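The NRC EmoLex files are tab-delimited, so `sep=' '` both splits the two-word column names ("English Word", "Bengali Word") apart and misaligns the data. A sketch with an inline sample standing in for the Drive file, letting pandas take the header from the first row:

```python
import io
import pandas as pd

# Inline sample standing in for Bengali-NRC-EmoLex.txt (tab-delimited)
sample = (
    "English Word\tanger\tanticipation\tdisgust\tfear\tjoy\tnegative\t"
    "positive\tsadness\tsurprise\ttrust\tBengali Word\n"
    "aback\t0\t0\t0\t0\t0\t0\t0\t0\t0\t0\tবিস্মিত\n"
    "abacus\t0\t0\t0\t0\t0\t0\t0\t0\t0\t1\tঅ্যাবাকাস\n"
)

data = pd.read_csv(io.StringIO(sample), sep="\t")  # header comes from row 0
print(data.shape)   # (2, 12)
```

For the real file the same idea is `pd.read_csv('/content/gdrive/MyDrive/Bengali-NRC-EmoLex.txt', sep='\t')`, with no `header=None`/`names=` overrides needed.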
|
<python><pandas><google-colaboratory><text-files>
|
2023-03-07 23:20:20
| 1
| 2,230
|
Sultan Ahmed
|
75,668,067
| 17,696,880
|
How to iterate over a list of lists and update one of the sublists with content from previous iterations under certain conditions?
|
<pre class="lang-py prettyprint-override"><code>prev_names_elements = [] # keep track of names from previous iterations
for idx, i in enumerate(["1", "2", "3", "4"]):
if (i == "1"): i_list = [['Luciana', 'María', 'Lucy'], ['jugando'], [], ['2023_-_03_-_07(19:00 pm_---_23:59 pm)']]
elif (i == "2"): i_list = [['ella NO DATA'], ['terminar el trabajo'], ['hacia el parque'], []]
elif (i == "3"): i_list = [['María', 'Melina Saez'], ['ordenaron', 'viajaron'], ['la casa'], ['2023_-_04_-_17(19:00 pm_---_23:59 pm)']]
elif (i == "4"): i_list = [['ellas NO DATA'], ['salieron de la casa juntos'], ['hacia el parque'], []]
if (idx > 0):
if 'ella NO DATA' in i_list[0]:
if 'ellas NO DATA' in i_list[0]:
# update prev_names list with names from current i_list
prev_names += [n for subl in i_list for n in subl if isinstance(subl, list)]
print(i_list)
print("\n")
</code></pre>
<p><code>['ella NO DATA']</code> would take the last name of the first sublist of the list from the previous iteration of the <code>for</code> loop, in this case it would become <code>['Lucy']</code> , since it is the last of the elements of the first sublist of the list of lists from the previous iteration.</p>
<p><code>['ellas NO DATA']</code> would take all the names from all the first sublists of all the previous iterations, without repeating any names. Collecting them gives <code>['Luciana', 'María', 'Lucy', 'Lucy', 'María', 'Melina Saez']</code>, and since repetitions should be avoided, it would end up as <code>['Luciana', 'Lucy', 'María', 'Melina Saez']</code></p>
<p>In this way, when printing the <code>i_list</code> in each of the iterations of the <code>for</code> loop, you would be obtaining these 4 lists as output in the console:</p>
<pre><code>[['Luciana', 'María', 'Lucy'], ['jugando'], [], ['2023_-_03_-_07(19:00 pm_---_23:59 pm)']]
[['Lucy'], ['terminar el trabajo'], ['hacia el parque'], []]
[['María', 'Melina Saez'], ['ordenaron', 'viajaron'], ['la casa'], ['2023_-_04_-_17(19:00 pm_---_23:59 pm)']]
[['Luciana', 'Lucy', 'María', 'Melina Saez'], ['salieron de la casa juntos'], ['hacia el parque'], []]
</code></pre>
<p>How should I set up this verification and replacement of the list elements, following the structure of this test loop, to obtain this result?</p>
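A non-recursive sketch that produces exactly the four lists above: keep a running list of names, swap out the two placeholder values as each `i_list` is seen, then add that iteration's (possibly replaced) names to the running list. Note that `sorted()` happens to reproduce the expected deduplicated order here, since that order is alphabetical.

```python
data = [
    [['Luciana', 'María', 'Lucy'], ['jugando'], [], ['2023_-_03_-_07(19:00 pm_---_23:59 pm)']],
    [['ella NO DATA'], ['terminar el trabajo'], ['hacia el parque'], []],
    [['María', 'Melina Saez'], ['ordenaron', 'viajaron'], ['la casa'], ['2023_-_04_-_17(19:00 pm_---_23:59 pm)']],
    [['ellas NO DATA'], ['salieron de la casa juntos'], ['hacia el parque'], []],
]

prev_names = []                                # names from previous iterations
for i_list in data:
    if i_list[0] == ['ella NO DATA']:
        i_list[0] = [prev_names[-1]]           # last name seen so far
    elif i_list[0] == ['ellas NO DATA']:
        i_list[0] = sorted(set(prev_names))    # all names, deduplicated
    prev_names += i_list[0]
    print(i_list)
```

Hard-coding the four lists just replaces the `if/elif` chain over `["1", "2", "3", "4"]` from the question; the replacement logic is the part that carries over.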
|
<python><python-3.x><string><list><for-loop>
|
2023-03-07 23:17:02
| 1
| 875
|
Matt095
|
75,668,066
| 2,392,192
|
Yield in parent function in Python?
|
<p>I have a music library in Python that involves a system of clocks. Currently, they are implemented with different threads that use Events and Locks, etc. to keep everything synchronized. But the truth is, I want it to be effectively single-threaded.</p>
<p>The current syntax for the clocks is that you can fork functions like this:</p>
<pre class="lang-py prettyprint-override"><code>def some_musical_process():
play_a_note()
wait(2)
play_another_note()
wait(1.5)
...etc...
fork(some_musical_process)
</code></pre>
<p>When a function is forked, it is set up on its own thread. The wait function causes the execution of the process to stop and wait for a signal to wake up 2 beats later. The problem is, since each of these functions is a different thread, it's a bit of a nightmare to stop and start them at the right times.</p>
<p>It has always struck me that this is more suited to using generator functions; something along these lines:</p>
<pre><code>import random
import time
def yielding_routine():
while True:
print("hello")
yield random.choice([0.5, 1.0])
# Scheduler
yr = yielding_routine()
delay = next(yr)
while True:
time.sleep(delay)
delay = next(yr)
</code></pre>
<p>The problem is, I don't want to change the API. I want to keep that the user calls "wait", instead of yield, partly for backwards-compatibility, and also partly because there are several specialized wait functions like "wait_for_children_to_finish()" and stuff like that. What I really want is something like this:</p>
<pre class="lang-py prettyprint-override"><code>def yielding_routine():
while True:
print("hello")
wait(random.choice([0.5, 1.0]))
def wait(dur):
(((somehow cause the parent to yield dur)))
</code></pre>
<p>Where the <code>wait</code> function somehow yields in the parent generator function. Is there any way Python can do this? Even something hacky with "eval" or something? Effectively what I want is a macro, and I'm not sure if there are good ways to get that kind of functionality in Python.</p>
<p>Or is there an alternate approach that I haven't considered?</p>
<p>Many thanks!</p>
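Plain generators cannot make a callee yield in its caller, but one stdlib-only compromise that keeps a `wait(...)` call at each site is to make `wait` itself a generator and delegate with `yield from` (a sketch; it does change call sites to `yield from wait(...)`, and third-party greenlets would avoid even that):

```python
import random

def wait(dur):
    # wait() is a generator: "yield from wait(d)" suspends the calling
    # routine and hands d to the scheduler, keeping a wait()-style API.
    yield dur

def yielding_routine():
    for _ in range(3):
        print("hello")
        yield from wait(random.choice([0.5, 1.0]))

# Scheduler sketch: collect the delays instead of calling time.sleep()
delays = list(yielding_routine())
print(delays)
```

Specialized waits like `wait_for_children_to_finish()` can be generators too, yielding whatever sentinel the scheduler understands, and composite waits can `yield from` each other.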
|
<python><macros><yield>
|
2023-03-07 23:16:55
| 2
| 563
|
MarcTheSpark
|
75,667,980
| 1,072,830
|
Connecting to SharePoint using Azure app authentication and Python
|
<p>I created a self signed certificate and uploaded it to an Azure app registration, similar to what I've done with a dozen or so other apps. I'm attempting to connect to my SharePoint site with Python's office365 library, which is not something I have a lot of experience with and it shows.</p>
<pre><code>from office365.runtime.auth.client_credential import ClientCredential
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file_system_object_type import FileSystemObjectType
import os
def create_client_with_private_key():
cert_credentials = {
'tenant': tenant,
'client_id': appid,
'thumbprint': thumb,
'cert_path': path
}
return ClientContext(siteurl).with_client_certificate(**cert_credentials)
siteurl = 'https://mytenant.sharepoint.com/sites/mysite'
appid = '<AppID guid>'
tenant = '<TenantID>'
thumb = '<certificate thumbprint>'
path = "d:\\azure cert\\mycertificate.pem"
ctx = create_client_with_private_key()
current_web = ctx.web.get().execute_query()
print("{0}".format(current_web.url))
</code></pre>
<p>The script dies at the second-to-last line (<code>current_web = ctx.web.get().execute_query()</code>) with this error:</p>
<blockquote>
<p>Exception has occurred: ValueError
('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [_OpenSSLErrorWithText(code=75497580, lib=9, reason=108, reason_text=b'error:0480006C:PEM routines::no start line')])
ValueError: Could not deserialize key data. The data may be in an incorrect format or it may be encrypted with an unsupported algorithm.</p>
<p>During handling of the above exception, another exception occurred:</p>
<p>File "C:\scripts\sp-test.py", line 50, in
current_web = ctx.web.get().execute_query()
ValueError: ('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [_OpenSSLErrorWithText(code=75497580, lib=9, reason=108, reason_text=b'error:0480006C:PEM routines::no start line')])</p>
</blockquote>
<p>I've read that I need to make sure I have the Python cryptography package installed, which I do. I'm guessing the format of the certificate is wrong, but I'm not sure what format I should have it in.</p>
<p>I used the following PowerShell script to create the certificate</p>
<pre><code>$validMonths =12;
$notAfter = (Get-Date).AddMonths($validMonths) # Valid for 12 months
$thumb = (New-SelfSignedCertificate -DnsName "mycertificate" -CertStoreLocation "cert:\LocalMachine\My" -KeyExportPolicy Exportable -Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" -NotAfter $notAfter).Thumbprint
$pwd = ConvertTo-SecureString -String $certPassword -Force -AsPlainText
Export-PfxCertificate -cert "cert:\localmachine\my\$thumb" -FilePath $certPath -Password $pwd
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate($certPath, $pwd)
$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
New-AzureADApplicationKeyCredential -ObjectId $objectID -Type AsymmetricX509Cert -Usage Verify -Value $keyValue -EndDate ($notAfter.AddHours(-2))
</code></pre>
<p>That creates a PFX file. I tried that first, and it didn't work, so I then exported the certificate as a PEM:</p>
<pre><code>Export-Certificate -Type CERT -Cert "cert:\localmachine\my\$thumb" -FilePath "$baseCertPath\$($certName).der";
certutil -encode "$baseCertPath\$($certName).der" "$baseCertPath\$($certName).pem";
</code></pre>
<p>What am I doing wrong?</p>
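The "no start line" error usually means the PEM handed to `with_client_certificate` contains no private key: `certutil -encode` on a `.der` exports only the certificate. A PEM that works for this kind of client-certificate auth needs the `-----BEGIN PRIVATE KEY-----` block (typically followed by the certificate). A self-contained sketch with the `cryptography` package showing what such a combined PEM looks like (self-signed here purely for illustration; in practice you would extract the key from the PFX instead, e.g. with openssl):

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "mycertificate")])
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# A usable client PEM carries the private key plus the certificate:
pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
) + cert.public_bytes(serialization.Encoding.PEM)
print(pem.decode()[:30])
```

The key insight is just the file's contents: if your `mycertificate.pem` has only a `BEGIN CERTIFICATE` block, the library has nothing to sign the token request with, hence the deserialization failure.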
|
<python><office365><sharepoint-online>
|
2023-03-07 22:59:39
| 1
| 6,622
|
Robbert
|
75,667,964
| 10,976,654
|
Matplotlib how to overlay probability density function onto baseline plot
|
<p>I want to overlay a probability distribution on a baseline graph. An MRE is below, and then the desired outcome is shown below by just copying/rotating/squeezing the plots in PowerPoint as an example.</p>
<pre><code># Import python libraries
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# Plot example PDF
x1 = np.logspace(start=np.log(0.1), stop=np.log(3), num=200, base=np.e)
ln_mu, ln_sigma = 1.1, 0.3
y1 = np.exp(stats.norm.pdf(x1, ln_mu, ln_sigma))
fig, ax = plt.subplots()
ax.fill_between(x1, min(y1), y1, alpha=0.5)
ax.set(xscale="log", xlim=[min(x1), max(x1)])
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/cP7oY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cP7oY.png" alt="enter image description here" /></a></p>
<pre><code># Plot example baseline curve
x2 = np.arange(0, 1.05, 0.05)
y2 = 2*np.sqrt(np.pi*x2)
fig, ax = plt.subplots()
ax.plot(x2,y2)
ax.set(yscale="log", ylim=[min(x1), max(x1)])
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/IbsYg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IbsYg.png" alt="enter image description here" /></a></p>
<p><strong>Here is what I am trying to produce using matplotlib.</strong> (The axes on the pdf should be turned off, I just didn't crop them out for the purposes of illustrating this example.) Let's assume that the right (straight) edge of the PDF should be at x=0.1125.</p>
<p><a href="https://i.sstatic.net/jCDRi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jCDRi.png" alt="enter image description here" /></a></p>
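A sketch of one way to do this in pure matplotlib: evaluate the PDF over the y-range and draw it sideways with `fill_betweenx`, so x plays the role of density. The width scale is an assumption for illustration; the x=0.1125 edge comes from the question.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # headless backend for this sketch
import matplotlib.pyplot as plt
from scipy import stats

# Baseline curve from the question
x2 = np.arange(0, 1.05, 0.05)
y2 = 2 * np.sqrt(np.pi * x2)

# The PDF from the question, evaluated over the y-axis range this time
y1 = np.logspace(start=np.log(0.1), stop=np.log(3), num=200, base=np.e)
pdf = np.exp(stats.norm.pdf(y1, 1.1, 0.3))

fig, ax = plt.subplots()
ax.plot(x2, y2)
ax.set(yscale="log", ylim=[y1.min(), y1.max()])

# Rotated, squeezed lobe: its straight right edge sits at x_edge
x_edge = 0.1125                       # right (straight) edge of the PDF
scale = 0.08 / pdf.max()              # hypothetical width in data units
ax.fill_betweenx(y1, x_edge - pdf * scale, x_edge, alpha=0.5)
```

This keeps everything in one axes, so no separate inset or axis-hiding is needed; an inset axes with `transAxes` positioning would be the alternative if the lobe should not live in data coordinates.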
|
<python><matplotlib>
|
2023-03-07 22:56:04
| 1
| 3,476
|
a11
|
75,667,862
| 1,506,145
|
Save and load a Grayscale WebP image in PIL
|
<p>I'm trying to save and load a grayscale image in PIL:</p>
<pre><code>import numpy as np
from PIL import Image, ImageOps
o = np.random.rand(100,100)*255
o = o.astype('uint8')
im = Image.fromarray((o))
im.save('test.webp', lossless=True)
im = Image.open(r"test.webp")
a = np.asarray(im)
a.shape
</code></pre>
<p><code>a.shape</code> outputs <code>(100, 100, 3)</code>, so apparently it is saved as an RGB image. This doesn't happen if I save the image as PNG; that loads as <code>(100, 100)</code> single-channel. What should I do?</p>
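WebP has no grayscale storage mode, so Pillow round-trips the image through RGB. A common workaround (a sketch, assuming your Pillow build has WebP support) is to convert back to mode `L` on load; with `lossless=True` and three identical channels the conversion recovers the original values exactly:

```python
import numpy as np
from PIL import Image

o = (np.random.rand(100, 100) * 255).astype("uint8")
Image.fromarray(o).save("test.webp", lossless=True)

# WebP comes back as RGB with three identical channels;
# convert("L") collapses them back to one channel.
im = Image.open("test.webp").convert("L")
a = np.asarray(im)
print(a.shape)   # (100, 100)
```

For equal channels the luma formula reduces to the original value (0.299v + 0.587v + 0.114v = v), which is why the round trip is exact for a lossless file.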
|
<python><python-imaging-library>
|
2023-03-07 22:38:14
| 0
| 5,316
|
user1506145
|
75,667,853
| 21,113,865
|
Can you preemptively provide password input to a python program?
|
<p>I am running python on a linux machine and have a simple program that will prompt the user for input multiple times.</p>
<p>i.e.</p>
<pre><code>import getpass
try:
p1 = getpass.getpass()
except Exception as error:
print('ERROR', error)
try:
p2 = getpass.getpass()
except Exception as error:
print('ERROR', error)
</code></pre>
<p>Is there a way to pre-provide input to my python program so that it can run all at once?</p>
<p>Something like:</p>
<pre><code># echo "pass1 pass2" > python myprog
# echo "pass1 pass2" | python myprog
</code></pre>
<p>Of course, neither of these examples work, but just to illustrate what I would like to try to do.</p>
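The reason the pipe has no effect is that `getpass` deliberately reads from the controlling terminal (`/dev/tty`), not stdin. A hedged workaround (my own `read_secret` helper, not part of your program) is to fall back to plain stdin when it is not a tty, which makes piped input work while keeping hidden prompts for interactive use:

```python
import sys
import getpass

def read_secret(prompt="Password: "):
    # getpass reads from /dev/tty, so piped input never reaches it;
    # read a line from stdin instead when stdin is not a terminal.
    if sys.stdin.isatty():
        return getpass.getpass(prompt)
    return sys.stdin.readline().rstrip("\n")
```

With this in place, `printf 'pass1\npass2\n' | python myprog.py` feeds both prompts (one password per line). Alternatives that keep `getpass` untouched include `expect` scripts or pseudo-tty tools, since they present a real terminal to the program.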
|
<python><passwords>
|
2023-03-07 22:37:15
| 0
| 319
|
user21113865
|
75,667,657
| 6,734,243
|
How to dynamically change indentation in a multiline string?
|
<p>I'm using docstrings to document my code with Sphinx. I recently started using <a href="https://beta.ruff.rs/docs/" rel="nofollow noreferrer">ruff</a> to lint my code, and it applies D212 by default instead of D213. The only difference is that, under D212, the first line of the docstring (the short description) must start on the first line.</p>
<p>It breaks one of the decorators that I'm using (<a href="https://deprecated.readthedocs.io/en/latest/" rel="nofollow noreferrer">deprecated</a>), as it cannot handle both styles. The decorator handles the docstring as follows:</p>
<pre class="lang-py prettyprint-override"><code>docstring = textwrap.dedent(wrapped.__doc__ or "")
# do stuff with docstring
</code></pre>
<p>How can I update the docstring parsing to handle a docstring starting on a newline (D213) and one starting on the first line (D212) the same way?</p>
<pre class="lang-py prettyprint-override"><code># respecting D213
def func_a():
    """
    one liner

    Args:
        toto
    """

# respecting D212
def func_b():
    """one liner

    Args:
        toto
    """
</code></pre>
<p>I tried to manipulate <code>func_b.__doc__</code> in combination with <code>dedent</code> with no success.</p>
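One stdlib option (a sketch; it changes the decorator's parsing rather than the docstrings) is `inspect.cleandoc`, which strips leading blank lines and removes the common margin whether or not the summary starts on the first line, so D212- and D213-style docstrings normalize identically:

```python
import inspect

# D213 style: summary on the line after the quotes
doc_d213 = """
    one liner

    Args:
        toto
    """

# D212 style: summary flush with the opening quotes
doc_d212 = """one liner

    Args:
        toto
    """

# cleandoc() lstrips the first line, dedents the rest, and drops
# leading/trailing blank lines -- both styles converge:
print(inspect.cleandoc(doc_d212))
```

Unlike `textwrap.dedent`, which finds no common indent when the first line starts at column 0, `cleandoc` handles the first line specially, which is exactly the D212/D213 difference.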
|
<python><docstring>
|
2023-03-07 22:09:00
| 2
| 2,670
|
Pierrick Rambaud
|
75,667,613
| 1,133,224
|
Switched from WSGI to ASGI for Django Channels and now CircleCI throws "corrupted double-linked list" even though tests pass
|
<p>I've been working on a project which requires WebSockets. The platform is built with Django and was running the WSGI server <code>gunicorn</code>. We decided to implement WebSockets using Django Channels.</p>
<p>I set everything up, including switching from <code>gunicorn</code> to the ASGI server <code>daphne</code>. Everything works great in the local development environment. Deployment to AWS works and everything runs fine on dev/staging. <code>pytest</code> works and all tests pass locally. On CircleCI all the tests pass, but at the end of the "test" step we get the following and CircleCI shows a failed status:</p>
<pre><code>================== 955 passed, 2 skipped in 216.09s (0:03:36) ==================
corrupted double-linked list
/bin/bash: line 2: 278 Aborted (core dumped) poetry run coverage run -m pytest $TESTFILES -vv --junitxml htmlcov/junit.xml
Exited with code exit status 134
CircleCI received exit code 134
</code></pre>
<p>There are no other errors, warnings, or unexpected output. I cannot replicate the issue outside of CircleCI.</p>
<p>I tried adding the <code>@pytest.mark.asyncio</code> decorator to the one async test we have and still got the above. Even when I totally remove said test CircleCI still throws the same. Google has not been helpful.</p>
<p><strong>Edit:</strong>
This same thing has also happened a couple of times during the "migrate" step of our CircleCI workflow. That step just runs <code>poetry run python manage.py migrate</code> so this is not exclusive to pytest.</p>
|
<python><django><pytest><circleci><django-channels>
|
2023-03-07 22:03:46
| 1
| 905
|
TWGerard
|
75,667,596
| 3,460,486
|
Python Django: Inline edit and sub-total automatic
|
<p>I have a simple application for forecasting hours in projects for team members. The 'view' mode:
<a href="https://i.sstatic.net/r0Tta.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r0Tta.png" alt="enter image description here" /></a></p>
<p>For the "edit" mode I have this interface:
<a href="https://i.sstatic.net/avyIj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/avyIj.png" alt="enter image description here" /></a></p>
<p>Instead of a new view/template for "edit", I would like to have inline-edit functionality. Is that feasible with Python Django only, as I don't want to use jQuery or JavaScript?</p>
<p>Similarly, can the sub-total be calculated automatically on the page just using Python Django?</p>
|
<python><django><forms><inline-editing>
|
2023-03-07 22:02:23
| 0
| 493
|
user3460486
|
75,667,535
| 18,192,997
|
Create product in printful using API - Python
|
<p>I am trying to create a product using the Printful API and Python. I was able to create a product (a basic hoodie) on their website, and I wanted to try to create the same hoodie using Python.</p>
<p>Here is the documentation I am using: <a href="https://developers.printful.com/docs/#operation/createSyncProduct" rel="nofollow noreferrer">https://developers.printful.com/docs/#operation/createSyncProduct</a></p>
<p>I keep getting this error, which is very generic:</p>
<pre><code>{
"code": 400,
"result": "This API endpoint applies only to Printful stores based on the Manual Order / API platform. Find out more at Printful API docs.",
"error": {
"reason": "BadRequest",
"message": "This API endpoint applies only to Printful stores based on the Manual Order / API platform. Find out more at Printful API docs."
}
}
</code></pre>
<p>here is the code I am trying for testing:</p>
<pre><code>import requests
import json
my_key = "my_token"
headers = {
"Authorization" : "Bearer "+my_key
}
data = {
"id": 302079884,
"external_id": "8136758395169",
"name": "Unisex Hoodie",
# "variants": 6,
"thumbnail_url": "https://cdn.shopify.com/s/files/1/0713/4275/2033/products/unisex-premium-hoodie-black-front-64079acdc1ff0_grande.jpg?v=1678219998",
"is_ignored": False
}
data = json.dumps(data)
products = requests.post("https://api.printful.com/store/products",headers=headers,data=data)
print(json.dumps(products.json(),indent=4))
</code></pre>
<p>I am not sure how to test this code because the response is very generic.</p>
<p>I was able to test my token to make sure it works by trying to get the product I made on the website:</p>
<pre><code>import requests
import json
my_key = "my_token"
headers = {
"Authorization" : "Bearer "+my_key
}
products = requests.get("https://api.printful.com/sync/products",headers=headers)
print(json.dumps(products.json(),indent=4))
</code></pre>
<p>and I got the response:</p>
<pre><code>{
"code": 200,
"result": [
{
"id": 302079884,
"external_id": "8136758395169",
"name": "Unisex Hoodie",
"variants": 6,
"synced": 6,
"thumbnail_url": "https://cdn.shopify.com/s/files/1/0713/4275/2033/products/unisex-premium-hoodie-black-front-64079acdc1ff0_grande.jpg?v=1678219998",
"is_ignored": false
}
],
"extra": [],
"paging": {
"total": 1,
"offset": 0,
"limit": 20
}
}
</code></pre>
<p>So I know I am hitting the API correctly. If anyone has any suggestions on how to create a basic product using the Printful API, that would be awesome.</p>
<p><strong>EDIT</strong>: I also tried what @Driftr95 mentioned regarding wrapping the data in a <code>"sync_product"</code> object, but that still didn't work. Here is the new data variable now:</p>
<pre><code>data = {
"sync_product": {
"name": "API product Bella",
"thumbnail": "https://imagemagick.org/image/wizard.jpg"
},
"sync_variants": [
{
"retail_price": "21.00",
"variant_id": 44666144522529,
"files": [
{
"url": "https://imagemagick.org/image/wizard.jpg"
},
{
"type": "back",
"url": "https://imagemagick.org/image/wizard.jpg"
}
]
}
]
}
</code></pre>
<p>but I still get:</p>
<pre><code>{
"code": 400,
"result": "This API endpoint applies only to Printful stores based on the Manual Order / API platform. Find out more at Printful API docs.",
"error": {
"reason": "BadRequest",
"message": "This API endpoint applies only to Printful stores based on the Manual Order / API platform. Find out more at Printful API docs."
}
}
</code></pre>
<p>That response is so vague it is very hard to tell what is wrong with the post data I am trying to send.</p>
|
<python><json><python-requests>
|
2023-03-07 21:55:12
| 1
| 537
|
PythonKiddieScripterX
|
75,667,527
| 6,165,007
|
Numpy vectorization and algorithmic complexity
|
<p>I have read many times about <em>vectorized</em> code in <code>numpy</code>. I know for a fact that a python for loop can be ~100x slower than an equivalent <code>numpy</code> operation. However, I thought that the power of <code>numpy</code> <em>vectorization</em> was beyond a mere translation of a for loop from Python to C/Fortran code.</p>
<p>I have read here and there about SIMD (Single Instruction, Multiple Data), BLAS (Basic Linear Algebra Subprograms) and other low level stuff, without a clear understanding of what is going on under the hood, and what I thought was that those libraries, thanks to parallelization at the CPU level, were able to perform the operations so that they scale in a sublinear fashion.</p>
<p>Maybe an example will help clarify this. Let's say we wish to compute the product of two matrices, and I want to check how increasing the number of rows of the first matrix will affect the elapsed time (this operation has a lot to do with machine learning, if we think that the number of rows is the <em>batch size</em>, the left matrix is the data and the right matrix contains parameters of the model). Well, my naïve understanding was that, in this case, the total elapsed time will scale (up to some limit) sub linearly, so that, in principle, and as long as everything fits into the RAM, I expected that increasing the batch size was always a good idea.</p>
<p>I've made some benchmarks and the situation is not what I expected. It looks like growth is linear, and given that the number of operations is a linear function on the number of rows, it looks like the C/Fortran code that runs under the hood is merely doing for loops.</p>
<p>This is the code:</p>
<pre class="lang-py prettyprint-override"><code>k = 128
N = 100
time_info = {}
for size in list(range(100, 5000, 200)):
A, B = np.random.random((size, k)), np.random.random((k, k))
t = time()
for i in range(N):
np.dot(A, B)
delta = time() - t
time_info[size] = delta / N
x = np.array(list(time_info.keys()))
y = np.array(list(time_info.values())) * 1000
# Plotting the Graph
plt.plot(x, y)
plt.title("Elapsed time vs number of rows")
plt.xlabel("Number of rows of left matrix")
plt.ylabel("Time in milliseconds")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/msGyM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/msGyM.png" alt="Benchmark" /></a></p>
<p>It looks like the trend is quite linear. By the way, I've checked <code>np.show_config()</code> and it shows that I have <code>openblas</code>.</p>
<p>So my questions are:</p>
<ul>
<li>What is exactly the meaning of <em>vectorization</em>?</li>
<li>Is the benchmark reasonable, and is it what you would have expected?</li>
<li>If so, is there any noticeable optimization thanks to <em>low level</em> stuff like SIMD that should have an impact with <em>vectorized</em> operations? Or, maybe, it will only have an impact when you go from very small vectors/matrices to medium size vectors/matrices?</li>
</ul>
<p>This last option could make sense if the CPU is only not fully occupied when the size is very small. For example, and correct me if I am saying something stupid, if we had an architecture capable of performing 8 mathematical operations in parallel, then you would expect that multiplying a (1,) vector by a (1,) vector would be as fast as multiplying an (8,) vector by an (8,) vector. So, for very small sizes, the gain will be huge (8x gain). But if you have vectors of thousands of elements, then this effect will be negligible and the time will scale linearly, because you will always have the CPU fully occupied. Does this naïve explanation make sense?</p>
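<p>As a sanity check on the linear trend: the raw operation count for this product is itself linear in the number of rows, so constant throughput already predicts a straight line (a back-of-the-envelope sketch, not a benchmark):</p>

```python
# FLOP count for an (n, k) @ (k, k) product: each of the n*k output
# entries needs k multiplies and k adds -> 2 * n * k * k operations.
def matmul_flops(n_rows: int, k: int) -> int:
    return 2 * n_rows * k * k

k = 128
# Doubling the row count exactly doubles the work, so at a fixed
# throughput (GFLOP/s) the elapsed time should double as well.
print(matmul_flops(100, k))  # 3276800
print(matmul_flops(200, k))  # 6553600
```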
|
<python><numpy><machine-learning><blas>
|
2023-03-07 21:54:20
| 1
| 339
|
Jaime Arboleda Castilla
|
75,667,349
| 7,479,376
|
pydantic schema to postgres db
|
<p>I am trying to insert a pydantic schema (as JSON) into a Postgres database using SQLAlchemy. As you can see below, I have defined a <code>JSONB</code> field to host the schema.</p>
<pre><code>from sqlalchemy import Column, Integer, String, JSON
from sqlalchemy.orm import declarative_base
from pydantic import BaseModel, Field
from sqlalchemy.dialects.postgresql import JSONB
from enum import Enum
from sqlalchemy import create_engine
from sqlalchemy.orm import Session
import jsonref
Base = declarative_base()


class Values(Enum):
    one = 'one'
    two = 'two'


class Mdl(BaseModel):
    par: Enum = Field(
        title='trial',
        description='trial descr'
    )


class SqlTest(Base):
    __tablename__ = 'sql_test'
    id = Column(Integer, primary_key=True)
    param = Column(JSONB)


if __name__ == "__main__":
    engine = create_engine('postgresql+psycopg2://user:password@localhost/mydb')
    Base.metadata.create_all(engine)
    row = SqlTest(id=1, param=Mdl.schema())
    row2 = SqlTest(id=2, param=jsonref.loads(Mdl.schema_json()))
    with Session(engine) as session:
        session.add(row)
        session.add(row2)
        session.commit()
</code></pre>
<p>However, while inserting the schema extracted via <code>Mdl.schema()</code> works as expected, extracting it via <code>jsonref.loads(Mdl.schema_json())</code> produces the error</p>
<pre><code>sqlalchemy.exc.StatementError: (builtins.TypeError) Object of type dict is not JSON serializable
</code></pre>
<p>How do I fix this error? Apparently, the <code>param</code> values of both <code>row</code> and <code>row2</code> are <code>dict</code> types.</p>
|
<python><sqlalchemy><pydantic>
|
2023-03-07 21:31:24
| 1
| 3,353
|
Galuoises
|
75,667,320
| 220,997
|
Connecting Python SQLAlchemy to Redshift Connector with SAML Okta
|
<p>Looking for a way to write some python code to connect to Redshift using my okta MFA credentials. I'm able to connect using login/pw but need to use Okta SAML 2FA. Using sqlalchemy, so I can load into a Pandas dataframe and do some analysis.</p>
<p>There's a redshift connector driver I'm trying to use.
<a href="https://docs.aws.amazon.com/redshift/latest/mgmt/python-connect-examples.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/redshift/latest/mgmt/python-connect-examples.html</a></p>
|
<python><sqlalchemy><amazon-redshift>
|
2023-03-07 21:27:33
| 1
| 6,282
|
Gabe
|
75,667,301
| 150,684
|
python click custom shell completion for command
|
<p>I have a click application which is running just fine currently.</p>
<p>We want to introduce a new command called <code>api</code> which will dynamically reach into our python sdk and extract sdk methods that can be invoked to interact with our platform. There are 100s of sdk methods and it doesn't make sense to expose them all via the CLI application explicitly but we want to at least provide "power users" the option to use the sdk via the CLI in a more dynamic way.</p>
<p>I have this working in the general sense via <code>progname api METHOD</code>, which would be invoked as <code>progname api users.list</code>. This will call the <code>list</code> method on the <code>users</code> object within our sdk to list all our users.</p>
<p>I have successfully built a custom <code>shell_complete</code> method for the <code>METHOD</code> argument which dynamically reaches into the sdk to pull out the options available based on the <code>incomplete</code> which has been provided.</p>
<p>Next step is to allow shell completion for a list of dynamic options, which is sourced from the sdk method being invoked. The <code>api</code> command has a set of click managed options and then we need to dynamically grab a set of options from the sdk.</p>
<p>I have decorated the command with the following.</p>
<pre class="lang-py prettyprint-override"><code>@click.command(
context_settings=dict(
ignore_unknown_options=True,
allow_extra_args=True
)
)
</code></pre>
<p>This allows me to provide a dynamic set of options and then my business logic can parse out what is required via the context object and invoke the correct sdk method with the correct method parameters.</p>
<p>This would present itself like <code>progname api users.create --username <name> --email <email> --phone <phone></code>. And also as <code>progname api users.delete --user-id <user-id></code>. As can be seen, both the <code>METHOD</code> and options are dynamic.</p>
<p>So far, this is all working as expected...the final piece is the dynamic options for shell completion.</p>
<p>How can I build a custom shell completion method and associate it with the <code>api</code> command? I would prefer to maintain the existing shell completion for the command (to get the click managed options...things like output format, auth details, etc) and just append my dynamically sourced options?</p>
<p>I am not seeing a <code>shell_complete</code> setting for commands, just parameters.</p>
<p>I am open to monkey patching some section of the click codebase if that ends up being a viable option. Or perhaps there is a "special" click option I can provide that encapsulates all non explicitly provided options?</p>
|
<python><python-click>
|
2023-03-07 21:25:49
| 1
| 2,642
|
thomas
|
75,667,251
| 11,594,202
|
Get bytes of downloaded file in playwright (example in python)
|
<p>I am using playwright in python to automate some tasks. At the end, I want to download a file and save it to my cloud storage.</p>
<p>For this, I need to get the bytes of the downloaded file, so I cant post/put these bytes to my cloud api.</p>
<p>I used the very straightforward download method in playwright, as such:</p>
<pre><code>with page.expect_download() as download_info:
page.get_by_role("button", name="Download PDF").click()
download = download_info.value
</code></pre>
<p>I can easily save the file, but can't find anything about getting the byte stream. I would expect something like this to be possible:</p>
<pre><code>download_in_bytes = download.tobytes()
</code></pre>
<p>But there is no such method available in playwright.</p>
<p>Does somebody know a way? I can probably save the file first, and then open it again to get the bytes, but I'd rather do it without saving the file in between.</p>
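<p>For clarity, the save-then-reopen workaround I'd like to avoid boils down to this round trip, sketched here with a stand-in temporary file in place of the real download (<code>download.save_as(path)</code> is the Playwright call that would produce the file):</p>

```python
import tempfile
from pathlib import Path

# Stand-in for the real download: in the actual script this file
# would be written by download.save_as(tmp_path) instead.
payload = b"%PDF-1.4 fake pdf bytes"

with tempfile.TemporaryDirectory() as tmpdir:
    tmp_path = Path(tmpdir) / "report.pdf"
    tmp_path.write_bytes(payload)       # download.save_as(tmp_path) in practice
    file_bytes = tmp_path.read_bytes()  # the bytes I actually want to upload

print(file_bytes == payload)  # True
```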
|
<python><playwright><webautomation>
|
2023-03-07 21:17:54
| 2
| 920
|
Jeroen Vermunt
|
75,667,232
| 1,330,719
|
Defining VSCode's python interpreter in a monorepo using poetry's in-project virtualenvs
|
<p>I have a folder I open in VSCode that is a monorepo. One of the folders inside it is a python service.</p>
<p>This python service uses poetry with <code>in-project</code> virtualenvs configured. This means that I want my interpreter path to be <code>./python-service/.venv</code>.</p>
<p>Unfortunately, VSCode does not automatically detect the appropriate interpreter when a file inside that folder is opened (most likely because I'm opening the entire monorepo, not the specific service folder).</p>
<p>I know I can manually set an interpreter, but I would prefer if anyone else opening the folder doesn't also have to manually define the interpreter location.</p>
<p>It seems VSCode's <code>settings.json</code> file does support <code>python.venvPath</code> and <code>python.venvFolders</code>, but these are only configurable in a user's settings, not a workspace's settings.</p>
<p>What is the best way to tell VSCode which interpreter to pick?</p>
|
<python><visual-studio-code>
|
2023-03-07 21:15:18
| 2
| 1,269
|
rbhalla
|
75,667,225
| 243,031
|
How to get latest version of not installed package using pip?
|
<p>I want to know, for a package which is not installed in my virtual environment, what is the latest version available.</p>
<p>For example, if I had to install <code>requests</code> library, I would want to know the version before installation.</p>
<p>I considered <code>pip search</code>, but that command has been disabled (the PyPI XML-RPC search API it relied on was shut down).</p>
<p>Is there any other way to get the <code>release version</code> using some command before installing a Python package ?</p>
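<p>For reference, the workaround I know of is querying the PyPI JSON API (e.g. <code>https://pypi.org/pypi/requests/json</code>) and reading <code>info.version</code> from the response, though I was hoping for a pip-native command. A sketch of the parsing step, using a hard-coded sample response in place of the real HTTP call:</p>

```python
import json

# Sample of the PyPI JSON API response shape; a real lookup would
# fetch https://pypi.org/pypi/<package>/json with urllib or requests.
sample_response = json.dumps({"info": {"name": "requests", "version": "2.31.0"}})

data = json.loads(sample_response)
latest = data["info"]["version"]
print(latest)  # 2.31.0
```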
|
<python><pip>
|
2023-03-07 21:14:30
| 3
| 21,411
|
NPatel
|
75,667,087
| 14,697,000
|
Why is my python dictionary updating when it is not supposed to (conditions are not being met)?
|
<p>I am trying to update a dictionary with data I am obtaining from sorting methods. I am obtaining the time it takes for a certain algorithm to finish, and I am also counting how many swaps and how many comparisons the algorithm makes. After my sorting function returns the values, I want to update a dictionary, and I am doing this using a bunch of nested loops.</p>
<p>This dictionary that I want to update is a dictionary that contains nested lists and other dictionaries. This is what my dictionary looks like:</p>
<pre><code>time_and_data_dictionary={"Time":12,"Data Comparisons":12,"Data Swaps":12}
selection_sort_information={"selection sort":time_and_data_dictionary}
bubble_sort_information={"bubble sort":time_and_data_dictionary}
insertion_sort_information={"insertion sort":time_and_data_dictionary}
array_of_sorting_algorithms=[selection_sort_information,bubble_sort_information,insertion_sort_information]
dictionary={"Ascending_Sorted_250": array_of_sorting_algorithms, "Ascending_Sorted_500": array_of_sorting_algorithms, "Ascending_Sorted_1000": array_of_sorting_algorithms, "Ascending_Sorted_10000": array_of_sorting_algorithms, "Descending_Sorted_250": array_of_sorting_algorithms, "Descending_Sorted_500": array_of_sorting_algorithms, "Descending_Sorted_1000": array_of_sorting_algorithms, "Descending_Sorted_10000": array_of_sorting_algorithms, "Random_Sorted_250": array_of_sorting_algorithms, "Random_Sorted_500": array_of_sorting_algorithms, "Random_Sorted_1000": array_of_sorting_algorithms, "Random_Sorted_10000": array_of_sorting_algorithms}
</code></pre>
<p>The calls to the functions looks something like this:</p>
<pre><code>start = time()
tuple_selection_sort_rd250 = selection_sort_array(unordered_data_250)
end=time()
td_selection_sort_rd250=end-start
start = time()
tuple_bubble_sort_rd250 = bubble_sort_array(unordered_data_250)
end=time()
td_bubble_sort_rd250=end-start
start = time()
tuple_insertion_sort_rd250 = insertion_sort_array(unordered_data_250)
end=time()
td_insertion_sort_rd250=end-start
</code></pre>
<p>And the dictionary update procedure looks something like this:</p>
<pre><code>print(dictionary,"beginning")
for i,j in dictionary.items():
for x,val in enumerate(j):
for k,v in val.items():
for y,z in v.items():
if i=="Random_Sorted_250":
if x==0 :
if y==t:
dictionary[i][x][k][y]=td_selection_sort_rd250
if y==dc:
dictionary[i][x][k][y]=tuple_selection_sort_rd250[0]
if y==ds:
dictionary[i][x][k][y]=tuple_selection_sort_rd250[1]
if x==1 :
print("is",k,x)
print(i, x, k, t, dc, ds, type(i), type(x), type(k), type(t), type(dc), type(ds))
if y==t:
dictionary[i][x][k][y] = td_bubble_sort_rd250
if y==dc:
dictionary[i][x][k][y] = tuple_bubble_sort_rd250[0]
if y==ds:
dictionary[i][x][k][y] = tuple_bubble_sort_rd250[1]
if x==2 :
print("going",k,x)
print(i, x, k, t, dc, ds, type(i), type(x), type(k), type(t), type(dc), type(ds))
if y==t:
dictionary[i][x][k][y] = td_insertion_sort_rd250
if y==dc:
dictionary[i][x][k][y] = tuple_insertion_sort_rd250[0]
if y==ds:
dictionary[i][x][k][y] = tuple_insertion_sort_rd250[1]
print(dictionary, x)
print(dictionary,"$$$$$$$$$$$$$$")
print(len(list(dictionary)),"count")
</code></pre>
<p>My output is showing me the same value for all my sorting algorithms, i.e., the selection sort, the bubble sort and the insertion sort. When debugging I can see that all the values are the same across the different keys, i.e., all the times are the same, including the ones that I haven't included in my if conditions. Is there something I am missing regarding dictionaries and how updating them works? Is it my for loops and the fact that I am using the <code>.values()</code> method?</p>
<p>The output looks something like:</p>
<pre><code>{'Ascending_Sorted_250': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}], 'Ascending_Sorted_500': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}], 'Ascending_Sorted_1000': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}], 'Ascending_Sorted_10000': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}], 'Descending_Sorted_250': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}], 'Descending_Sorted_500': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}], 'Descending_Sorted_1000': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 
'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}], 'Descending_Sorted_10000': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}], 'Random_Sorted_250': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}], 'Random_Sorted_500': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}], 'Random_Sorted_1000': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}], 'Random_Sorted_10000': [{'selection sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'bubble sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}, {'insertion sort': {'Time': 0.015619993209838867, 'Data Comparisons': 0, 'Data Swaps': 0}}]} $$$$$$$$$$$$$$
12 count
</code></pre>
<p>Someone please explain. Please ignore the in-line debugging statements.</p>
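<p>For what it's worth, I boiled the setup down to a minimal reproduction: the same inner dictionary object stored under several keys, where updating it through one key changes what I see through every other key (which looks like what is happening above):</p>

```python
inner = {"Time": 0}
# Both keys point at the SAME dict object, not independent copies.
d = {"bubble sort": inner, "insertion sort": inner}

d["bubble sort"]["Time"] = 42
# The "other" entry changed too, because it is the same object.
print(d["insertion sort"]["Time"])  # 42
```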
|
<python><list><dictionary><for-loop><nested-loops>
|
2023-03-07 20:56:10
| 0
| 460
|
How why e
|
75,667,042
| 5,072,010
|
PuLP optimization running (essentially) infinitely despite never improving the answer?
|
<p>I have a python script which solves the following optimization:</p>
<pre><code>from pulp import *
import matplotlib.pyplot as plt
rates = {'h': {'level_pay_rate': 0.1425, 'on_demand': 0.2}, 'x': {'level_pay_rate': 0.0129, 'on_demand': 0.0162}, 'ak': {'level_pay_rate': 0.3642, 'on_demand': 0.448}, 'ao': {'level_pay_rate': 0.003, 'on_demand': 0.0042}, 'an': {'level_pay_rate': 0.1821, 'on_demand': 0.224}, 'y': {'level_pay_rate': 0.0165, 'on_demand': 0.023}, 'e': {'level_pay_rate': 3.342, 'on_demand': 4.608}, 'j': {'level_pay_rate': 3.901, 'on_demand': 4.512}, 'b': {'level_pay_rate': 0.493, 'on_demand': 0.68}, 'i': {'level_pay_rate': 0.3265, 'on_demand': 0.384}, 'l': {'level_pay_rate': 0.071, 'on_demand': 0.096}, 'n': {'level_pay_rate': 0.325, 'on_demand': 0.376}, 'u': {'level_pay_rate': 0.365, 'on_demand': 0.504}, 'p': {'level_pay_rate': 0.083, 'on_demand': 0.113}, 'z': {'level_pay_rate': 0.1322, 'on_demand': 0.1856}, 'q': {'level_pay_rate': 0.695, 'on_demand': 0.952}, 'ai': {'level_pay_rate': 0.1936, 'on_demand': 0.24}, 'f': {'level_pay_rate': 0.313, 'on_demand': 0.432}, 'aa': {'level_pay_rate': 0.2399, 'on_demand': 0.3328}, 'ac': {'level_pay_rate': 0.06, 'on_demand': 0.0832}, 'ag': {'level_pay_rate': 0.015, 'on_demand': 0.0208}, 'v': {'level_pay_rate': 5.808, 'on_demand': 8.016}, 'r': {'level_pay_rate': 1.389, 'on_demand': 1.904}, 'am': {'level_pay_rate': 0.032, 'on_demand': 0.0372}, 'ae': {'level_pay_rate': 0.0484, 'on_demand': 0.06}, 's': {'level_pay_rate': 0.32506, 'on_demand': 0.376}, 'ad': {'level_pay_rate': 0.03, 'on_demand': 0.0416}, 'a': {'level_pay_rate': 0.246, 'on_demand': 0.34}, 'w': {'level_pay_rate': 0.0083, 'on_demand': 0.0116}, 'ah': {'level_pay_rate': 0.12, 'on_demand': 0.1664}, 'g': {'level_pay_rate': 0.078, 'on_demand': 0.108}, 'd': {'level_pay_rate': 0.123, 'on_demand': 0.17}, 'aj': {'level_pay_rate': 0.217, 'on_demand': 0.3008}, 'al': {'level_pay_rate': 0.0542, 'on_demand': 0.0752}, 'm': {'level_pay_rate': 0.141, 'on_demand': 0.192}, 'o': {'level_pay_rate': 0.62, 'on_demand': 0.712}, 'af': {'level_pay_rate': 0.0037, 'on_demand': 0.0052}, 'c': {'level_pay_rate': 1.108, 
'on_demand': 1.53}, 'k': {'level_pay_rate': 1.3, 'on_demand': 1.504}, 'ab': {'level_pay_rate': 0.3871, 'on_demand': 0.48}, 't': {'level_pay_rate': 2.403, 'on_demand': 3.06}}
hourly_demand = {'5': {'h': 0.748056, 'x': 3.0, 'ak': 2.0, 'ao': 1.0, 'an': 2.0, 'y': 5.0, 'e': 2.0, 'j': 1.0, 'b': 38.97125391, 'i': 9.0, 'l': 2.0, 'n': 12.0, 'u': 2.0, 'p': 3.0, 'z': 1.0, 'q': 3.0, 'ai': 35.18181818, 'f': 5.0, 'aa': 2.0, 'ac': 1.0, 'ag': 3.0, 'v': 1.0, 'r': 1.0, 'am': 1.0, 'ae': 2.0, 's': 6.683077626, 'ad': 3.0, 'a': 2.0, 'w': 2.0, 'ah': 1.0, 'g': 24.0, 'd': 23.0, 'aj': 1.0, 'al': 1.0, 'm': 73.0, 'o': 2.0, 'af': 2.0, 'c': 16.0, 'k': 1.0, 'ab': 18.0, 't': 0}, '6': {'ac': 1.0, 'a': 2.0, 'l': 2.0, 'aj': 1.0, 'b': 38.88245591, 'i': 7.0, 'al': 1.0, 'q': 3.0, 'f': 5.0, 'p': 3.0, 'c': 16.0, 'k': 1.0, 'm': 73.0, 'aa': 2.0, 'z': 1.0, 'ab': 9.0, 'ak': 2.0, 'x': 3.0, 'ai': 5.395093035, 'w': 2.0, 'n': 12.0, 'af': 2.0, 's': 1.0, 'd': 23.0, 'r': 1.0, 'u': 2.0, 'an': 2.0, 'j': 1.0, 'am': 1.0, 'y': 5.0, 'g': 24.0, 'o': 2.0, 'e': 2.0, 'ao': 1.0, 'ad': 3.0, 'v': 1.0, 'ah': 2.0, 'ag': 3.0, 'h': 0, 'ae': 0, 't': 0}, '7': {'e': 2.0, 'b': 29.04976177, 'p': 3.0, 'af': 2.0, 'ac': 1.0, 'w': 2.0, 'k': 1.0, 'f': 5.0, 'z': 1.0, 'y': 5.0, 'd': 23.0, 'i': 7.0, 'n': 12.0, 'l': 2.0, 'ai': 1.181818182, 'ag': 3.0, 'a': 2.0, 'v': 1.184444, 'r': 1.0, 'c': 16.0, 'j': 1.0, 'g': 24.0, 'm': 73.0, 'o': 2.0, 'aa': 2.0, 'ah': 2.0, 'u': 2.0, 'ak': 2.0, 's': 1.0, 'ab': 7.071698244, 'aj': 1.0, 'an': 2.0, 'ao': 1.0, 'q': 3.0, 'ad': 3.0, 'am': 1.0, 'x': 3.0, 'al': 1.0, 'h': 0, 'ae': 0, 't': 0}, '8': {'aa': 2.0, 'ab': 6.155000661, 'b': 7.0, 'm': 41.67691468, 'n': 12.0, 'ah': 2.0, 'al': 1.0, 'q': 3.0, 'ak': 2.0, 'o': 2.0, 'ag': 3.0, 'v': 1.0, 'k': 1.0, 'd': 23.0, 'ac': 1.0, 'g': 12.0, 'am': 1.0, 'u': 2.0, 'af': 2.0, 'e': 2.0, 'ad': 3.0, 'f': 3.0, 'c': 1.0, 'aj': 1.0, 'an': 2.0, 'y': 5.0, 'z': 1.0, 'a': 2.0, 'r': 1.0, 'ao': 1.0, 's': 1.0, 'p': 3.0, 'i': 7.0, 'j': 1.0, 't': 16.956666, 'h': 0, 'x': 0, 'l': 0, 'ai': 0, 'ae': 0, 'w': 0}, '9': {'g': 24.0, 'l': 2.0, 'p': 3.0, 't': 0.398611, 'e': 2.0, 'b': 40.98876661, 'r': 1.0, 'ac': 1.0, 'ai': 1.181818182, 'q': 3.0, 'x': 3.0, 'o': 2.0, 'ak': 2.0, 
'am': 1.0, 'j': 1.0, 'af': 2.0, 'v': 1.0, 'ag': 3.0, 'an': 2.0, 'ab': 5.09321567, 'aa': 2.0, 'u': 2.0, 's': 1.0, 'm': 73.0, 'c': 16.0, 'i': 7.0, 'k': 1.0, 'y': 5.0, 'aj': 1.0, 'd': 23.0, 'a': 2.0, 'ah': 2.0, 'ao': 1.0, 'w': 2.0, 'ad': 3.0, 'n': 12.0, 'al': 1.0, 'f': 5.0, 'z': 1.0, 'h': 0, 'ae': 0}}
I = rates.keys() # items
T = hourly_demand.keys() # time periods
IT = {(i, t) for i in I for t in T} # item-time tuples
print(IT)
# Define the model
model = LpProblem("level_pay_model", LpMinimize)
# Define decision variables
level_pay_amt = LpVariable("level_pay", lowBound=0)
level_pay = LpVariable.dicts('level_pay', IT, lowBound = 0, cat='Integer') # the quantity of i in time t to buy with level-pay
# Define objective function: the aggregate cost of servicing all demands by either level pay or on-demand
# a helper function for the hourly cost
def hourly_cost(t):
    # the level pay plus the total of items not covered by level pay * on-demand cost
    return level_pay_amt + lpSum((hourly_demand[t][i] - level_pay[i, t]) * rates[i]['on_demand'] for i in I)
model += lpSum(hourly_cost(t) for t in T)
print(model)
# alternate construct with out the function...
#model += lpSum(level_pay_amt + lpSum((hourly_demand[t][i] - level_pay[i, t]) * rates[i]['on_demand'] for i in I) for t in T)
# constraints
# 1. don't bust the level-pay dollars
for t in T:
    model += lpSum(level_pay[i, t] * rates[i]['level_pay_rate'] for i in I) <= level_pay_amt
# 2. limit the level-pay to the demand.... or else get funky negative results.
for i, t in IT:
    model += lpSum(level_pay[i, t]) <= hourly_demand[t][i]
solution = model.solve()
print(f'level pay amt: {level_pay_amt.varValue}')
for t in T:
    print(value(hourly_cost(t)))

for i, t in sorted(IT):
    print("level pay:", i, t, level_pay[i, t].varValue)
    print()
# Visualize the hourly costs and the optimized commitment rate
cost_vec = [value(hourly_cost(t)) for t in sorted(T)]
plt.plot(sorted(T), cost_vec, label='Hourly Cost')
plt.axhline(y=value(level_pay_amt), color='r', linestyle='-', label='Level Pay Amt.')
plt.xlabel('Hour')
plt.ylabel('$/hr')
plt.legend()
plt.title(f'Costs by Hour [Total Cost: ${value(model.objective) : 0.2f}]')
plt.show()
</code></pre>
<p>If I reduce the size of the input data (rates and hourly_demand), the optimization model finds the solution quite quickly (nearly instantly). However, right around the size of the present input data, the optimization model/solver (Cbc0010I) takes <em>ages</em> to figure it out. So long in fact that I need to kill the script once it gets past 1M nodes because I'm afraid of such a large number.</p>
<p>Additionally, it seems to have already found the right answer pretty early on, but continues to iterate regardless (see an arbitrary snippet of the stack trace below):</p>
<pre><code>Cbc0010I After 930000 nodes, 119185 on tree, -143.4139 best solution, best possible -143.43226 (200.59 seconds)
Cbc0010I After 931000 nodes, 119094 on tree, -143.4139 best solution, best possible -143.43226 (200.79 seconds)
Cbc0010I After 932000 nodes, 119430 on tree, -143.4139 best solution, best possible -143.43226 (201.02 seconds)
Cbc0010I After 933000 nodes, 119365 on tree, -143.4139 best solution, best possible -143.43226 (201.24 seconds)
Cbc0010I After 934000 nodes, 119356 on tree, -143.4139 best solution, best possible -143.43226 (201.44 seconds)
Cbc0010I After 935000 nodes, 119286 on tree, -143.4139 best solution, best possible -143.43226 (201.63 seconds)
Cbc0010I After 936000 nodes, 119234 on tree, -143.4139 best solution, best possible -143.43226 (201.83 seconds)
Cbc0010I After 937000 nodes, 119182 on tree, -143.4139 best solution, best possible -143.43226 (202.01 seconds)
Cbc0010I After 938000 nodes, 119096 on tree, -143.4139 best solution, best possible -143.43226 (202.24 seconds)
Cbc0010I After 939000 nodes, 119455 on tree, -143.4139 best solution, best possible -143.43226 (202.48 seconds)
</code></pre>
<p>The best solution hardly changes (if at all), but the model continues to iterate. Is there any way I can stop it from iterating once it's 'good enough' (perhaps by specifying some sufficient decimal precision?), give it a node limit, or fix whatever is causing it to continue to iterate?</p>
|
<python><optimization><pulp>
|
2023-03-07 20:50:16
| 2
| 1,459
|
Runeaway3
|
75,666,879
| 2,226,029
|
List executables that are in $PATH without knowing their locations
|
<p>Is there a way to (glob-like) list executables that match a certain pattern without knowing their actual locations? For example, let's say I have multiple versions of GCC installed: <code>gcc-10</code>, <code>gcc-11</code> and <code>gcc-12</code>, then I need the pattern <code>gcc-*</code> to yield something like <code>['gcc-10', 'gcc-11', 'gcc-12']</code>.</p>
<p>The behavior I'm after is equivalent to writing <code>gcc-</code> and then hitting tab in a Unix shell.</p>
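<p>A minimal sketch of this lookup (assuming a POSIX-style <code>$PATH</code>): scan each directory on the path, filter names with <code>fnmatch</code>, and keep only regular files with the executable bit set:</p>

```python
import os
import fnmatch

def which_all(pattern):
    """Return sorted names of executables on $PATH matching a glob pattern."""
    matches = set()
    for d in os.environ.get("PATH", "").split(os.pathsep):
        try:
            names = os.listdir(d)
        except OSError:
            continue  # skip missing or unreadable PATH entries
        for name in fnmatch.filter(names, pattern):
            p = os.path.join(d, name)
            # keep only regular files that are executable, like tab completion does
            if os.path.isfile(p) and os.access(p, os.X_OK):
                matches.add(name)
    return sorted(matches)

print(which_all("gcc-*"))  # e.g. ['gcc-10', 'gcc-11', 'gcc-12']
```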
|
<python>
|
2023-03-07 20:29:03
| 2
| 2,043
|
thomas_f
|
75,666,795
| 4,261,613
|
Multiply Numpy n x 2 array by n x 1 array
|
<p>Assuming I have a Numpy n x 2 array <code>y</code>: <code>array([[1, 1], [2, 3], [1, 4], ...])</code> and a Numpy n x 1 array <code>x</code>: <code>array([2, 4, 5, ...])</code>, how can I efficiently obtain the following n x 2 array: <code>array([[2, 2], [8, 12], [5, 20], ...])</code>, where each row of <code>y</code> is multiplied by the corresponding value from array <code>x</code>?</p>
<p>I can do it with a loop but am looking for more performant approaches.</p>
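<p>Broadcasting handles this without an explicit loop: reshape <code>x</code> into a column vector so its single value per row is applied across both columns of <code>y</code>. A minimal sketch:</p>

```python
import numpy as np

y = np.array([[1, 1], [2, 3], [1, 4]])
x = np.array([2, 4, 5])

# x[:, None] has shape (n, 1), so it broadcasts across the columns of y.
result = y * x[:, None]
print(result)  # [[ 2  2] [ 8 12] [ 5 20]]
```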
|
<python><arrays><numpy><multiplication>
|
2023-03-07 20:19:59
| 1
| 3,746
|
Mikhail
|
75,666,714
| 4,392,566
|
gspread list_spreadsheet_files not getting files in subfolders given a folder_id
|
<p>This used to work in getting all Google Sheets in all sub-folders given a top folder ID:</p>
<pre><code>import gspread
scope = ['https://spreadsheets.google.com/feeds',
'https://www.googleapis.com/auth/drive']
credentials = ServiceAccountCredentials.from_json_keyfile_name(
'creds_path/creds.json', scope)
gc = gspread.authorize(credentials)
folderID = 'a_folder_id_here'
existing_sheets = gc.list_spreadsheet_files(folder_id = folderID)
</code></pre>
<p>This now returns an empty list where before it returned all of the Google Sheet names in that folder. The folder in question only has sub-folders, which each have Google Sheets inside of them.</p>
<p>Using:</p>
<pre><code>gspread 5.7.2
Python 3.10
</code></pre>
|
<python><directory><google-drive-api><gspread>
|
2023-03-07 20:08:13
| 1
| 3,733
|
Dance Party
|
75,666,604
| 3,943,868
|
What does this double dot slicing mean in numpy?
|
<pre><code>my_array[:, 0::2] = np.sin(A)
my_array[:, 1::2] = np.cos(B)
</code></pre>
<p>If my_array is a two dimension array, what does this code do? In particular, what does :: do?</p>
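<p><code>0::2</code> is ordinary <code>start:stop:step</code> slicing with the <code>stop</code> omitted: "from index 0 to the end, every 2nd element". So these lines write the sine values into the even-indexed columns and the cosine values into the odd-indexed ones. A small illustration:</p>

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

print(a[:, 0::2])  # every row, columns 0 and 2 (even-indexed columns)
print(a[:, 1::2])  # every row, columns 1 and 3 (odd-indexed columns)
```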
|
<python><numpy>
|
2023-03-07 19:53:38
| 0
| 7,909
|
marlon
|
75,666,497
| 2,328,273
|
st_ino from os.stat in Python gets unexpectedly altered if output to a file
|
<p>I have tried to research this to see if it is expected behavior or not but I haven't found anything. Maybe I'm not using the right search terms. I use os.stat in Python and capture file attributes, but I have noticed some strange behavior with st_ino. I am using Python 3.10 in Linux. I've noticed that when I output st_ino to a file, the value somehow gets changed. Here is an example:</p>
<pre><code>import os
import xlsxwriter
directory = "/mnt/user/other"
workbook = xlsxwriter.Workbook('os.walk.file_attributes.xlsx')
worksheet = workbook.add_worksheet()
headers = ['File Name', 'Size (bytes)', 'Creation Time', 'Last Modified Time', 'Last Access Time', 'Inode Links', 'Inode Number str', 'Inode Number']
for i, header in enumerate(headers):
worksheet.write(0, i, header)
row = 1
for root, dirs, files in os.walk(directory):
for file in files:
filepath = os.path.join(root, file)
statinfo = os.stat(filepath)
worksheet.write(row, 0, file)
worksheet.write(row, 1, statinfo.st_size)
worksheet.write(row, 2, statinfo.st_ctime)
worksheet.write(row, 3, statinfo.st_mtime)
worksheet.write(row, 4, statinfo.st_atime)
worksheet.write(row, 5, statinfo.st_nlink)
worksheet.write(row, 6, str(statinfo.st_ino))
worksheet.write(row, 7, statinfo.st_ino)
row += 1
workbook.close()
print("File attributes saved to os.walk.file_attributes.xlsx")
</code></pre>
<p>If you run this code and look at the last column in the xlsx file it creates, the inode numbers are all wrong. I had many repeats of inode numbers for files of different sizes that aren't hard links. That should not be the case. The column before that, however, I first converted to a string, and that one seems to be correct. When I <code>print</code> to screen, both <code>statinfo.st_ino</code> and <code>str(statinfo.st_ino)</code> are identical, as they should be. For some reason it gets changed when output to a file unless it is stringified. I first noticed this because I was using <code>shelve</code> to save time on testing and getting inconsistent results when I would load shelved data. That's when I tried the code above to see what the issue was. I couldn't find any mention of this unexpected behavior in the Python docs. Is it simply a matter of needing to stringify an int before writing it to a file? I know that is the case when using <code>write</code>, but there it errors out and explicitly tells you so, and I figured that was inherent to the <code>write</code> method and not necessarily to <code>int</code>s. Has anyone else come across this or an explanation as to why this happens?</p>
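<p>A likely explanation (an assumption about the cause, not a confirmed diagnosis of this exact setup): spreadsheet number cells are stored as IEEE-754 doubles, which cannot represent every integer above 2**53, and inode numbers on large filesystems can exceed that. Strings are stored verbatim, which would be why the stringified column survives intact:</p>

```python
ino = 2**53 + 1  # a value a large inode number can plausibly reach

# Converting to a double silently rounds to the nearest representable integer.
assert float(ino) != ino
print(float(ino))  # 9007199254740992.0 -- off by one

# The string representation keeps full precision.
assert str(ino) == "9007199254740993"
```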
|
<python><integer>
|
2023-03-07 19:40:24
| 1
| 1,010
|
user2328273
|
75,666,449
| 16,629,677
|
DataLoader default_collate found "NoneType"
|
<p>I have a dataset directory that has hundreds of subdirectories, each subdirectory's name is a UUID. Inside each subdirectory, there are four files: an image (<code>png</code>), a <code>html</code>, a <code>json</code>, and a <code>txt</code> file.</p>
<p>The image, html, and txt form the sample, and the json contains the corresponding label.</p>
<p>Here's the <code>__getitem__()</code> function of the <code>Dataset</code> subclass that I defined:</p>
<pre class="lang-py prettyprint-override"><code>def __getitem__(self, idx):
ID = self.list_IDs[idx]
label = self.labels[ID]
# load the sample
html = open(os.path.join(DATA_PATH, ID, 'html_dirty.html'),
encoding='utf-8').read()
url = open(os.path.join(DATA_PATH, ID, 'url.txt'),
encoding='utf-8').read()
sample = {'html': html, 'url': url}
if self.load_img:
sample['img'] = cv2.imread(os.path.join(DATA_PATH, ID, 'ss.png'))
return sample, label
</code></pre>
<p>But when I run:</p>
<pre class="lang-py prettyprint-override"><code>x = CustomDataset(partitions['train'], labels) # partitions['train'] is just a list of UUIDs
train_generator = DataLoader(x, batch_size=32)
for i, batch in enumerate(train_generator):
print(i)
</code></pre>
<p>It errors out. Here's the full stack trace:</p>
<pre class="lang-py prettyprint-override"><code> Traceback (most recent call last):
File "C:\Users\LENOVO\Desktop\Work\ml-project-1\data_loader.py", line 67, in <module>
for i, batch in train_generator:
File "C:\Users\LENOVO\Desktop\Work\ml-project-1\ml_proj_env\lib\site-packages\torch\utils\data\dataloader.py", line 628, in __next__
data = self._next_data()
File "C:\Users\LENOVO\Desktop\Work\ml-project-1\ml_proj_env\lib\site-packages\torch\utils\data\dataloader.py", line 671, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "C:\Users\LENOVO\Desktop\Work\ml-project-1\ml_proj_env\lib\site-packages\torch\utils\data\_utils\fetch.py", line 61, in fetch
return self.collate_fn(data)
File "C:\Users\LENOVO\Desktop\Work\ml-project-1\ml_proj_env\lib\site-packages\torch\utils\data\_utils\collate.py", line 271, in default_collate
return collate(batch, collate_fn_map=default_collate_fn_map)
File "C:\Users\LENOVO\Desktop\Work\ml-project-1\ml_proj_env\lib\site-packages\torch\utils\data\_utils\collate.py", line 147, in collate
return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]
File "C:\Users\LENOVO\Desktop\Work\ml-project-1\ml_proj_env\lib\site-packages\torch\utils\data\_utils\collate.py", line 147, in <listcomp>
return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]
File "C:\Users\LENOVO\Desktop\Work\ml-project-1\ml_proj_env\lib\site-packages\torch\utils\data\_utils\collate.py", line 132, in collate
return {key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem}
File "C:\Users\LENOVO\Desktop\Work\ml-project-1\ml_proj_env\lib\site-packages\torch\utils\data\_utils\collate.py", line 132, in <dictcomp>
return {key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem}
File "C:\Users\LENOVO\Desktop\Work\ml-project-1\ml_proj_env\lib\site-packages\torch\utils\data\_utils\collate.py", line 155, in collate
raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'NoneType'>
</code></pre>
<p>Which is weird because:</p>
<pre class="lang-py prettyprint-override"><code>any([x.__getitem__(i) == None for i in range(32)])
</code></pre>
<p>Returns <code>False</code>.</p>
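<p>One likely culprit (an assumption, not a confirmed diagnosis): <code>cv2.imread</code> returns <code>None</code> instead of raising when it cannot read a file, so a missing or unreadable <code>ss.png</code> puts a <code>None</code> inside the sample dict. The check above compares whole <code>(sample, label)</code> tuples against <code>None</code>, which is always <code>False</code> even when a value nested inside the sample is <code>None</code>:</p>

```python
# A sample as __getitem__ would return it when cv2.imread fails silently:
sample = {'html': '<p>ok</p>', 'url': 'https://example.com', 'img': None}

print((sample, 0) == None)    # False -- the batch element itself is not None
print(sample['img'] is None)  # True -- but a value nested inside it is
```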
|
<python><pytorch><pytorch-dataloader>
|
2023-03-07 19:35:24
| 1
| 343
|
Tharsalys
|
75,666,445
| 15,724,084
|
python selenium does not go to right page with scraper
|
<p>I have a scraper in Python which, with the help of Selenium, goes to the URL <code>www.apartments.com</code>. I then enter (input) a location and press the search button using Selenium's <code>.click()</code> method, but it only shows the default page, which is <code>Chicago, IL</code>.</p>
<p>here is code;</p>
<pre><code>from selenium.webdriver.chrome.options import Options
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.common.keys import Keys
chrome_options = Options()
chrome_options.headless = False
chrome_options.add_argument("start-maximized")
# options.add_experimental_option("detach", True)
chrome_options.add_argument("--no-sandbox")
chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
chrome_options.add_experimental_option('excludeSwitches', ['enable-logging'])
chrome_options.add_experimental_option('useAutomationExtension', False)
chrome_options.add_argument('--disable-blink-features=AutomationControlled')
driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=chrome_options)
driver.get('https://www.apartments.com/')
driver.implicitly_wait(10)
var_inp='Santa Monica, CA'
#search for rentals
#driver.execute_script("arguments[0].setAttribute('value',arguments[1])",element, value)
#WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "search-box-input"))).send_keys(var_inp+Keys.RETURN)
def foo():
element=WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "quickSearchLookup")))
driver.execute_script("arguments[0].setAttribute('value','Santa Monica, CA')", element)
#element.send_keys(var_inp)
button=WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//a[@class='go']")))
button.click()
foo()
##find it
##//div[contains(@class,"StyledPropertyCardDataArea")]/ul
#driver.execute_script("arguments[0].setAttribute('value',arguments[1])",driver.find_element(By.CSS_SELECTOR, "input#search-box-input"), 'New York, NY')
#driver.implicitly_wait(30)
prices=WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.XPATH, "//p[@class='property-pricing']")))
beds=WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.XPATH, "//p[@class='property-beds']")))
specials=WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.XPATH, "//p[@class='property-specials']")))
lst_beds=[]
for bed in beds:
lst_beds.append(bed.text)
lst_prices=[]
for price in prices:
lst_prices.append(price.text)
lst_specials=[]
for special in specials:
lst_specials.append(special.text)
print(lst_specials,lst_prices,lst_beds)
driver.quit()
</code></pre>
|
<python><python-3.x><selenium-webdriver><web-scraping>
|
2023-03-07 19:34:36
| 1
| 741
|
xlmaster
|
75,666,408
| 523,612
|
How can I collect the results of a repeated calculation in a list, dictionary etc. (or make a copy of a list with each element modified)?
|
<p><sub>There are a great many existing Q&A on Stack Overflow on this general theme, but they are all either poor quality (typically, implied from a beginner's debugging problem) or miss the mark in some other way (generally by being insufficiently general). There are at least two extremely common ways to get the naive code wrong, and beginners would benefit more from a canonical about looping than from having their questions closed as typos or a canonical about what printing entails. So this is my attempt to put all the related information in the same place.</sub></p>
<p>Suppose I have some simple code that does a calculation with a value <code>x</code> and assigns it to <code>y</code>:</p>
<pre><code>y = x + 1
# Or it could be in a function:
def calc_y(an_x):
return an_x + 1
</code></pre>
<p>Now I want to repeat the calculation for many possible values of <code>x</code>. I know that I can use a <code>for</code> loop if I already have a list (or other sequence) of values to use:</p>
<pre><code>xs = [1, 3, 5]
for x in xs:
y = x + 1
</code></pre>
<p>Or I can use a <code>while</code> loop if there is some other logic to calculate the sequence of <code>x</code> values:</p>
<pre><code>def next_collatz(value):
if value % 2 == 0:
return value // 2
else:
return 3 * value + 1
def collatz_from_19():
x = 19
while x != 1:
x = next_collatz(x)
</code></pre>
<p>The question is: <strong>how can I collect these values and use them after the loop</strong>? I tried <code>print</code>ing the value inside the loop, but it doesn't give me anything useful:</p>
<pre><code>xs = [1, 3, 5]
for x in xs:
print(x + 1)
</code></pre>
<p>The results show up on the screen, but I can't find any way to use them in the next part of the code. So I think I should try to store the values in a container, like a list or a dictionary. But when I try that:</p>
<pre><code>xs = [1, 3, 5]
for x in xs:
ys = []
y = x + 1
ys.append(y)
</code></pre>
<p>or</p>
<pre><code>xs = [1, 3, 5]
for x in xs:
ys = {}
y = x + 1
ys[x] = y
</code></pre>
<p>After either of these attempts, <code>ys</code> only contains the last result.</p>
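<p>The fix for both attempts is the same: create the container once, before the loop, so that each iteration appends to (or inserts into) the same object instead of replacing it with a fresh empty one. A minimal sketch:</p>

```python
xs = [1, 3, 5]

ys = []          # create the container ONCE, before the loop
for x in xs:
    ys.append(x + 1)

print(ys)  # [2, 4, 6]

# The same collection pattern, written as a list comprehension:
ys = [x + 1 for x in xs]
print(ys)  # [2, 4, 6]
```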
|
<python><iteration><list-comprehension>
|
2023-03-07 19:30:09
| 3
| 61,352
|
Karl Knechtel
|
75,666,211
| 5,900,093
|
Stop logging parameters for django background tasks?
|
<p>I'm using <a href="https://django-background-tasks.readthedocs.io/en/latest/" rel="nofollow noreferrer">Django Background Tasks</a> in my app for a task that requires user authentication. To create the task, I use a command like:</p>
<pre><code>my_cool_task(pks, request.POST.get('username'), request.POST.get('password'))
</code></pre>
<p>I just realized that the <code>username</code> and <code>password</code> parameters are getting stored in the Django Admin tables for the tasks, which in this case creates a security issue.</p>
<p>Is there a way to not store these parameters? Or is there a better way to authenticate a process that will take longer than it takes for a server timeout error?</p>
|
<python><django>
|
2023-03-07 19:08:06
| 1
| 2,068
|
Evan
|
75,666,062
| 6,658,422
|
Discrete date values for x-axis in seaborn.objects plot
|
<p>I am trying to prepare a bar plot using <code>seaborn.objects</code> with time series data where the x-axis ticks and labels are only on the dates that really appear in the data.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import seaborn.objects as so
df1 = pd.DataFrame({'date': pd.to_datetime(['2022-01-01', '2022-02-01']), 'val': [10,20,]})
so.Plot(df1, x='date', y='val').add(so.Bar())
</code></pre>
<p>The result is the following graph with a tick mark at 2022-01-15. Going to three entries in the dataframe solves the issue, but how would I do it in the presented case?</p>
<p>Adding <code>.scale(x=so.Nominal())</code> or <code>.scale(x=so.Temporal())</code> does not help.</p>
<p>As a bonus, how would I format the x-axis ticks as "Jan 2022", "Feb 2022" etc.?</p>
<p><a href="https://i.sstatic.net/HWRbs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HWRbs.png" alt="enter image description here" /></a></p>
|
<python><pandas><seaborn><seaborn-objects>
|
2023-03-07 18:51:07
| 1
| 2,350
|
divingTobi
|
75,665,559
| 10,200,497
|
Finding first occurrence of even numbers
|
<p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame(
{
'a': [20, 21, 333, 55, 444, 1000, 900, 44,100, 200, 100],
'b': [2, 2, 2, 4, 4, 4, 4, 3, 2, 2, 6]
}
)
</code></pre>
<p>And this is the output that I want:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>a</th>
<th>b</th>
<th>c</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>20</td>
<td>2</td>
<td>x</td>
</tr>
<tr>
<td>1</td>
<td>21</td>
<td>2</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>333</td>
<td>2</td>
<td>NaN</td>
</tr>
<tr>
<td>3</td>
<td>55</td>
<td>4</td>
<td>x</td>
</tr>
<tr>
<td>4</td>
<td>444</td>
<td>4</td>
<td>NaN</td>
</tr>
<tr>
<td>5</td>
<td>1000</td>
<td>4</td>
<td>NaN</td>
</tr>
<tr>
<td>6</td>
<td>900</td>
<td>4</td>
<td>NaN</td>
</tr>
<tr>
<td>7</td>
<td>44</td>
<td>3</td>
<td>NaN</td>
</tr>
<tr>
<td>8</td>
<td>100</td>
<td>2</td>
<td>x</td>
</tr>
<tr>
<td>9</td>
<td>200</td>
<td>2</td>
<td>NaN</td>
</tr>
<tr>
<td>10</td>
<td>100</td>
<td>6</td>
<td>x</td>
</tr>
</tbody>
</table>
</div>
<p>I want to create column <code>c</code> which marks the first occurrence of an even number. It does not matter whether the even number is repeated consecutively or not. First occurrence is what I want.</p>
<p>For example, the first row is marked because it starts the first streak of 2s in column <code>b</code>. That streak then ends, which is why the first 4 is also marked.</p>
<p>I tried this code:</p>
<pre><code>def finding_first_even_number(df):
mask = (df.b % 2 == 0)
df.loc[mask.cumsum().eq(1) & mask, 'c'] = 'x'
return df
df = df.groupby('b').apply(finding_first_even_number)
</code></pre>
<p>But it does not give me the output that I want.</p>
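<p>One way to get this output (a sketch, not the only approach): mark rows where <code>b</code> is even <em>and</em> differs from the row directly above it, so each new streak of an even value gets exactly one mark:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {
        'a': [20, 21, 333, 55, 444, 1000, 900, 44, 100, 200, 100],
        'b': [2, 2, 2, 4, 4, 4, 4, 3, 2, 2, 6],
    }
)

# Even value AND start of a new streak (value differs from the previous row).
mask = df['b'].mod(2).eq(0) & df['b'].ne(df['b'].shift())
df.loc[mask, 'c'] = 'x'   # unmarked rows stay NaN
print(df)
```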
|
<python><pandas>
|
2023-03-07 17:50:57
| 2
| 2,679
|
AmirX
|
75,665,549
| 1,506,850
|
how to iteratively extend pd.dataframe avoiding list overallocation?
|
<p>I often do</p>
<pre><code>df_collection = []
for iteration in iterations:
df_this_iteration = computational_step(*many_other_iteration_specific_params...)
df_collection.append(df_this_iteration)
df_collection = pd.concat(df_collection,axis=0)
</code></pre>
<p>When <code>iterations</code> and/or <code>df_this_iteration</code> become large, the <code>df_collection.append</code> operation becomes very memory inefficient due to the underlying list over-allocation.
How can I improve or even vectorize this code to reduce the memory footprint?</p>
<p>I read in High Performance Python
by Micha Gorelick and Ian Ozsvald</p>
<blockquote>
<p>When a list of size N is first appended to, Python must create a new
list that is big enough to hold the original N items in addition to
the extra one that is being appended. However, instead of allocating
N+1 items, M items are actually allocated, where M > N, in order to
provide extra headroom for future appends. Then, the data from the old
list is copied to the new list and the old list is destroyed. The
philosophy is that one append is probably the beginning of many
appends, and by requesting extra space we can reduce the number of
times this allocation must happen and thus the total number of memory
copies that are necessary. This is quite important since memory copies
can be quite expensive, especially when list sizes start growing.</p>
</blockquote>
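<p>One point worth checking before optimizing (a sketch, not a profile of the real workload): the quoted over-allocation applies to the Python list itself, which stores only pointers, so it is typically negligible next to the DataFrames it references. The real memory peak usually comes from all per-iteration frames staying alive until <code>pd.concat</code> copies them into the final frame:</p>

```python
import sys
import pandas as pd

frames = []
for i in range(1000):
    frames.append(pd.DataFrame({'x': range(100)}))

# The list holds 1000 pointers; its over-allocated capacity is a tiny
# fraction of the memory held by the DataFrames themselves.
list_bytes = sys.getsizeof(frames)
frame_bytes = sum(f.memory_usage(deep=True).sum() for f in frames)
print(list_bytes, frame_bytes)

result = pd.concat(frames, axis=0)
```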
|
<python><pandas><memory>
|
2023-03-07 17:49:52
| 1
| 5,397
|
00__00__00
|
75,665,464
| 272,023
|
How do I write an S3 object to SharedMemory using boto3?
|
<p>How do I write the contents of an S3 object into <a href="https://docs.python.org/3/library/multiprocessing.shared_memory.html#multiprocessing.shared_memory.SharedMemory" rel="nofollow noreferrer">SharedMemory</a>?</p>
<pre><code>MB=100
mem = SharedMemory(create=True, size=MB*2**20)
response = s3_client.get_object(Bucket='my_bucket', Key='path/to/obj')
mem.buf[:] = response['Body'].read()
</code></pre>
<p>However, I then get an error:</p>
<pre><code>memoryview assignment: lvalue and rvalue have different structure
</code></pre>
<p>Printing the memoryview shape gives this:</p>
<pre><code>(105906176,)
</code></pre>
<p>When I then try this:</p>
<pre><code>mem.buf[0] = response['Body'].read()
</code></pre>
<p>I get a different error:</p>
<pre><code>memoryview: invalid type for format 'B'
</code></pre>
<p>How can I write the contents of an S3 file into SharedMemory? I don't want to write to disk.</p>
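<p>The <code>lvalue and rvalue have different structure</code> error comes from assigning a payload of one length into a memoryview slice of another: <code>mem.buf[:]</code> spans the full 100 MB buffer, while the S3 body is shorter. Slicing the destination to the payload's length fixes it. A minimal sketch (a 1 MB segment and a stand-in byte string where the real code would call <code>response['Body'].read()</code>):</p>

```python
from multiprocessing.shared_memory import SharedMemory

mem = SharedMemory(create=True, size=2**20)  # 1 MB for the sketch

# In the real code this would be: data = response['Body'].read()
data = b"example S3 payload"

# Slice the destination to the payload's length; `mem.buf[:] = ...`
# requires an exact length match, which is what raised the error.
mem.buf[:len(data)] = data
recovered = bytes(mem.buf[:len(data)])

mem.close()
mem.unlink()
```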
|
<python><amazon-web-services><amazon-s3><boto3>
|
2023-03-07 17:40:28
| 2
| 12,131
|
John
|
75,665,392
| 14,912,118
|
Getting Bad Magic Number while Decrypting the file
|
<p>Here I am encrypting a file using the Python code below, and I get the encrypted file as expected.</p>
<pre><code>from Crypto.Cipher import AES
import base64
def pad(s):
return s + b"\0" * (AES.block_size - len(s) % AES.block_size)
def encrypt_file(input_path, output_path, key):
with open(input_path, 'rb') as f:
plaintext = f.read()
plaintext = pad(plaintext)
cipher = AES.new(key, AES.MODE_CBC)
ciphertext = cipher.encrypt(plaintext)
with open(output_path, 'wb') as outfile:
outfile.write(base64.b64encode(ciphertext))
key = b'0123456789abcdef0123456789abcdef'
input_path = 'path/item_properties_part2/sample.csv'
output_path = 'path/enc/sample.csv'
encrypt_file(input_path, output_path, key)
</code></pre>
<p>But when I try to decrypt it using OpenSSL (OpenSSL 3.0.8 7 - version) with the command below, I get a <code>bad magic number</code> error.</p>
<p>I am using below command</p>
<pre><code>openssl aes-256-cbc -d -in sample_final.csv -out decsample_final.csv -k 0123456789abcdef0123456789abcdef
</code></pre>
<p>My requirement is encrypting the file using python and decrypting the encrypted file using openssl.</p>
<p>Please help to solve this error. It will be a great help.</p>
|
<python><encryption><openssl><aes>
|
2023-03-07 17:33:59
| 1
| 427
|
Sharma
|
75,665,344
| 3,486,773
|
How to make every other row a column using row below as the value in Pandas?
|
<p>I have a pandas dataframe that looks like this:</p>
<pre><code> df = pd.DataFrame.from_dict({'type': {4: 'Second Product',
5: 'table',
6: 'First Product',
7: 'chair',
8: 'Second Product',
9: 'desk',
10: 'First Product',
11: 'chair'},
'id': {4: 'cust1',
5: 'cust1',
6: 'cust1',
7: 'cust1',
8: 'cust2',
9: 'cust2',
10: 'cust2',
11: 'cust2'}})
</code></pre>
<p>But I need the 'type' column broken out into column, value. So the column names would be 'Second Product', and 'First Product', but the values would be the row below those. Like this:</p>
<pre><code>df = pd.DataFrame.from_dict({'cust': {4:'cust1', 5:'cust2' },'Second Product': {4: 'table',
5: 'desk'},
'First Product': {4: 'chair',
5: 'chair'}})
</code></pre>
<p>the other catch is, there could be more than just first and second products, and I want to get all the columns and fill blanks or nans where they don't exist. So if there is one customer who has a 'Third Product' I need that to be a column and where others don't have a third product value, fill that as blank or nans.</p>
<p>I have tried transposing, stacking, unstacking and setting indexes etc... and I'm just stuck as to how to go about this.</p>
<p>Edit: I am not worried about the index being reset, so it doesn't need to match my example exactly.</p>
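<p>One possible approach (assuming every label row ends with the word 'Product', which holds for the sample data but is an assumption about the full dataset): treat the label rows as a column key, forward-fill that key onto the value rows below, then pivot. Any additional product label (e.g. a 'Third Product') simply becomes another column, and customers without it get NaN:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'type': ['Second Product', 'table', 'First Product', 'chair',
             'Second Product', 'desk', 'First Product', 'chair'],
    'id':   ['cust1'] * 4 + ['cust2'] * 4,
})

# Label rows (assumption: labels end with 'Product') become the column key;
# ffill pushes each label down onto the value row beneath it.
labels = df['type'].where(df['type'].str.endswith('Product'))
out = (df.assign(col=labels.ffill())
         .loc[labels.isna()]                      # keep only the value rows
         .pivot(index='id', columns='col', values='type')
         .reset_index())
print(out)
```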
|
<python><pandas><dataframe>
|
2023-03-07 17:29:04
| 3
| 1,278
|
user3486773
|
75,665,264
| 832,490
|
Cannot access member "safe_search_detection" for type "ImageAnnotatorClient"
|
<p>I have this line:</p>
<pre><code>from google.cloud.vision import ImageAnnotatorClient
vision = ImageAnnotatorClient()
vision.safe_search_detection(image).safe_search_annotation.adult
</code></pre>
<p>After enabling type checking on Pylance I'm getting the error:</p>
<blockquote>
<p>Cannot access member "safe_search_detection" for type "ImageAnnotatorClient". Member "safe_search_detection" is unknown</p>
</blockquote>
<p>How can I fix it?</p>
<p>I'm using the latest version of google-cloud-vision</p>
|
<python><google-cloud-vision>
|
2023-03-07 17:20:52
| 1
| 1,009
|
Rodrigo
|
75,665,076
| 12,131,472
|
How to use pd.json_normalize to retrieve the data I need
|
<p>I have this JSON list in Python:</p>
<pre class="lang-json prettyprint-override"><code>[{'id': 'TC2-FFA',
'shortCode': 'TC2-FFA',
'dataSet': {'datumPrecision': 2,
'id': 'TC2_37',
'shortCode': 'TC2_37',
'shortDescription': 'Clean Continent to US Atlantic coast',
'displayGroup': 'BCTI',
'datumUnit': 'Worldscale',
'data': [{'value': 156.11, 'date': '2023-03-06'}],
'apiIdentifier': 'RDSX9KRCHQI9TVGID5O7XQGBP1KKBZ0F'},
'datumUnit': 'WS',
'datumPrecision': 3,
'projectionStartOn': '2005-01-04T00:00:00',
'projectionEndOn': '2023-03-06T00:00:00',
'apiIdentifier': 'RPSBTGHKN64SV91SV9R3492RCH33D2OH'},
{'id': 'TC2$-FFA',
'shortCode': 'TC2$-FFA',
'dataSet': {'datumPrecision': 2,
'id': 'TC2_37',
'shortCode': 'TC2_37',
'shortDescription': 'Clean Continent to US Atlantic coast',
'displayGroup': 'BCTI',
'datumUnit': 'Worldscale',
'data': [{'value': 156.11, 'date': '2023-03-06'}],
'apiIdentifier': 'RDSX9KRCHQI9TVGID5O7XQGBP1KKBZ0F'},
'datumUnit': '$/mt',
'datumPrecision': 3,
'projectionStartOn': '2010-05-10T00:00:00',
'projectionEndOn': '2023-03-06T00:00:00',
'apiIdentifier': 'RPSH1H9454DYUE7G8CLHVLFPJZ3BVM77'}]
</code></pre>
<p>How could I use <code>pandas.json_normalize</code> to only retrieve the data <code>shortCode</code> (or <code>id</code>) and <code>data</code> under <code>dataSet</code> (<code>dataSet</code>--<code>data</code>--<code>value</code> and <code>date</code>)?</p>
<p>This is the desired dataframe:</p>
<pre><code> shortCode data.value data.date
0 TC2-FFA 156.11 2023-03-06
1 TC2$-FFA 156.11 2023-03-06
</code></pre>
<p>I tried</p>
<pre><code>pd.json_normalize(lst_object, record_path=['dataSet', ['shortCode', ['data', 'value'], ['data', 'date']])
</code></pre>
<p>but it failed</p>
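<p>A sketch of a <code>record_path</code>/<code>meta</code> combination that produces the desired frame (shown on a trimmed copy of the JSON with only the relevant keys): <code>record_path</code> walks into the nested <code>dataSet.data</code> list, <code>meta</code> carries the top-level <code>shortCode</code> alongside each record, and <code>record_prefix</code> restores the <code>data.</code> column names:</p>

```python
import pandas as pd

lst = [
    {'id': 'TC2-FFA', 'shortCode': 'TC2-FFA',
     'dataSet': {'data': [{'value': 156.11, 'date': '2023-03-06'}]}},
    {'id': 'TC2$-FFA', 'shortCode': 'TC2$-FFA',
     'dataSet': {'data': [{'value': 156.11, 'date': '2023-03-06'}]}},
]

df = pd.json_normalize(
    lst,
    record_path=['dataSet', 'data'],  # descend into dataSet -> data list
    meta=['shortCode'],               # keep the top-level shortCode per record
    record_prefix='data.',            # name the record columns data.value / data.date
)
df = df[['shortCode', 'data.value', 'data.date']]
print(df)
```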
|
<python><json><pandas><json-normalize>
|
2023-03-07 17:04:44
| 2
| 447
|
neutralname
|
75,665,021
| 12,474,157
|
Airflow v2.3.4: Make all tasks in a DAG run at the same time
|
<p>How can I define the parameters for the Airflow KubernetesPodOperator to make all tasks in a DAG run at the same time?</p>
<p>In the image below you can see that some tasks are grey ("scheduled"); I want them all to run at the same time (green), and I also want to make it NOT possible to run the same task more than once at a time.</p>
<h3>SO</h3>
<pre><code>task1_today & task1_yesterday: Cannot run together
task1_today, task2_today, ...taskN_today: Should be running ALL together
</code></pre>
<p><a href="https://i.sstatic.net/unRN2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/unRN2.png" alt="enter image description here" /></a></p>
<p>This is how my DAGs are defined</p>
<h3>Arguments</h3>
<pre><code>default_args = {
"owner": "airflow",
"depends_on_past": False,
"email_on_failure": True,
"email": ["intelligence@profinda.com"],
"retries": 2,
"retry_delay": timedelta(hours=6),
"email_on_retry": False,
"image_pull_policy": "Always",
"max_active_tasks": len(LIST_OF_TASKS),
}
</code></pre>
<h3>Kubernetes pod</h3>
<pre><code>KubernetesPodOperator(
namespace="airflow",
service_account_name="airflow",
image=DAG_IMAGE,
image_pull_secrets=[k8s.V1LocalObjectReference("docker-registry")],
container_resources=compute_resources,
env_vars={
"EXECUTION_DATE": "{{ execution_date }}",
},
cmds=["python3", "launcher.py", "-n", spider_name, "-r", "43000"],
is_delete_operator_pod=True,
in_cluster=True,
name=f"Crawler-{normalised_name}",
task_id=f"hydra-crawler-{normalised_name}",
get_logs=True,
max_active_tis_per_dag=1, # Previously task_concurrency before Airflow 2.2
)
</code></pre>
|
<python><airflow><airflow-2.x><airflow-taskflow>
|
2023-03-07 16:58:43
| 1
| 1,720
|
The Dan
|
75,664,836
| 955,273
|
Crash when attempting to return a pybind11::array_t from c++ to python
|
<p>I have created the following example library:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <vector>
#include <cstdint>
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
namespace py = pybind11;
struct Foo
{
using UintArr = py::array_t<std::uint64_t, py::array::c_style | py::array::forcecast>;
UintArr get() const
{
std::vector<std::uint64_t> v = { 0, 1, 2, 3, 4, 5 };
return UintArr(v.size(), v.data());
}
};
PYBIND11_MODULE(foo, m)
{
py::class_<Foo>(m, "Foo")
.def(py::init<>())
.def("get",
[](const Foo& t)
{
py::gil_scoped_release release;
return t.get();
});
}
</code></pre>
<p>In python, when I attempt to call <code>get</code> it crashes</p>
<pre class="lang-py prettyprint-override"><code>import foo
f = foo.Foo()
f.get() <--- crashes here
</code></pre>
<p>If I look at the core file with gdb I can see it's crashing on an import of <code>numpy.core.multiarray</code></p>
<pre class="lang-py prettyprint-override"><code>#0 ... in PyImport_Import ()
#1 ... in PyImport_ImportModule ()
#2 ... in pybind11::module_::import (name=0x7f369e4c65de "numpy.core.multiarray") at /usr/include/pybind11/pybind11.h:1195
#3 ... in pybind11::detail::npy_api::lookup () at /usr/include/pybind11/numpy.h:264
#4 ... in pybind11::detail::npy_api::get () at /usr/include/pybind11/numpy.h:193
#5 ... in pybind11::detail::npy_format_descriptor<unsigned long, void>::dtype () at /usr/include/pybind11/numpy.h:1285
#6 ... in pybind11::dtype::of<unsigned long> () at /usr/include/pybind11/numpy.h:584
#7 ... in pybind11::array::array<unsigned long> (this=0x7ffed48a2548, shape=..., strides=..., ptr=0x560d3412b5a0, base=...) at /usr/include/pybind11/numpy.h:763
#8 ... in pybind11::array_t<unsigned long, 17>::array_t (this=0x7ffed48a2548, count=6, ptr=0x560d3412b5a0, base=...) at /usr/include/pybind11/numpy.h:1070
#9 ... in Foo::get (this=0x560d341285b0) at /home/steve.lorimer/src/python/example/pybind.cpp:15
</code></pre>
<p>I have confirmed that <code>numpy.core.multiarray</code> exists and that I am able to import it in python.</p>
<pre class="lang-py prettyprint-override"><code>import numpy.core.multiarray
numpy.core.multiarray.__file__
'/usr/local/lib/python3.10/dist-packages/numpy/core/multiarray.py'
</code></pre>
<p>What is happening here, and how can I return my numpy array to python from C++?</p>
|
<python><c++><numpy><pybind11>
|
2023-03-07 16:43:03
| 1
| 28,956
|
Steve Lorimer
|
75,664,740
| 12,814,680
|
lists comparing with nested values
|
<p>I have a list of objects that, among other attributes, have an attribute named "id":</p>
<pre><code>
listA = [{"id":"5","age":"44"},{"id":"8","age":"34"},{"id":"15","age":"84"}]
</code></pre>
<p>I have a list of strings such as :</p>
<pre><code>
listB = ["9","1","7","5","10","15","20"]
</code></pre>
<p>I wish to extract into a third list, all object ids from listA that have an id value in listB</p>
<p>the expected result would be :</p>
<pre><code>
listResult = ["5","15"]
</code></pre>
<p>How can I do this without using loops?</p>
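<p>A comprehension with a set for O(1) membership tests avoids writing an explicit <code>for</code> loop (though, of course, the comprehension still iterates internally):</p>

```python
listA = [{"id": "5", "age": "44"}, {"id": "8", "age": "34"}, {"id": "15", "age": "84"}]
listB = ["9", "1", "7", "5", "10", "15", "20"]

wanted = set(listB)  # set membership is O(1) vs O(n) for a list
listResult = [d["id"] for d in listA if d["id"] in wanted]
print(listResult)  # ['5', '15']
```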
|
<python>
|
2023-03-07 16:34:53
| 4
| 499
|
JK2018
|
75,664,725
| 364,966
|
Installing PIP Package connectorx on Docker?
|
<p>I'm developing a Lambda using Docker. I had this in requirements.txt:</p>
<blockquote>
<p>connectorx</p>
</blockquote>
<p>I ran <code>docker-compose build</code> and got:</p>
<blockquote>
<p>Exception: connectorx is not installed. Please run <code>pip install connectorx>=0.3.1</code>.</p>
</blockquote>
<p>I updated the line in requirements.txt to:</p>
<blockquote>
<p>connectorx>=0.3.1</p>
</blockquote>
<p>Ran <code>docker-compose build</code> again and got:</p>
<blockquote>
<p>No matching distribution found for connectorx>=0.3.1</p>
</blockquote>
<p>What am I missing?</p>
|
<python><docker><docker-build>
|
2023-03-07 16:33:25
| 0
| 5,154
|
VikR
|
75,664,720
| 14,608,529
|
How to generate heartbeat sounds in Python given beats per minute?
|
<p>I'm looking to pass in an integer beats per minute (bpm) variable value (e.g. 62) into a function in Python that outputs the accompanying heartbeat noises (i.e. 62 heart beat sounds in a minute).</p>
<p>I can't find any libraries that can help with this and Google searching leads me to only find the reverse (i.e. calculating bpm given heartbeat audio file).</p>
<p>How would I go about achieving this?</p>
<p>I was thinking of downloading an MP3 heartbeat sound and somehow manipulating that based on the number of bpm, but not sure this will work.</p>
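<p>Whatever library ends up playing the sound, the timing reduces to one interval of 60/bpm seconds between beats. A minimal sketch of just that scheduling logic (the <code>play_beat</code> callback is hypothetical — it could wrap, say, an MP3 player call):</p>

```python
def beat_schedule(bpm, duration_s=60.0):
    """Start times in seconds for each beat: `bpm` beats per 60 s window."""
    n = int(duration_s * bpm / 60.0)   # number of beats that fit in the window
    interval = 60.0 / bpm              # seconds between beat onsets
    return [i * interval for i in range(n)]

print(len(beat_schedule(62)))  # 62 beats in one minute

# a real player would loop over the schedule, e.g.:
#   for _ in beat_schedule(bpm):
#       play_beat()               # hypothetical sound-playing callback
#       time.sleep(60.0 / bpm)
```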
|
<python><python-3.x><file><audio><libraries>
|
2023-03-07 16:33:11
| 1
| 792
|
Ricardo Francois
|
75,664,665
| 7,776,781
|
Typing a function dynamically using a subset of a ParamSpec
|
<p>I have a decorator that should test certain things before the function it decorates is called, and raise an exception if the checks fail. The decorator itself takes callables as input, and these callables will accept a subset of the parameters of the function that it decorates.</p>
<p>For example, if I have a function that takes in <code>name</code> and <code>age</code>, the decorator could accept a function that takes a single parameter <code>age</code> and checks that it's over 18. The problem I am having is with typing this properly, since my inputs to my decorator should be callables that take a subset of the parameters of the function that the decorator is attached to.</p>
<p>I tried with this:</p>
<pre><code>import functools
from collections.abc import Callable
from typing import ParamSpec, TypeVar
P = ParamSpec("P")
R = TypeVar("R")
def decorator_with_input(
*inputs: Callable[P, bool],
) -> Callable[[Callable[P, R]], Callable[P, R]]:
def decorator(func: Callable[P, R]) -> Callable[P, R]:
@functools.wraps(wrapped=func)
def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
for input_ in inputs:
if not input_(*args, **kwargs):
raise ValueError
return func(*args, **kwargs)
return wrapper
return decorator
def func1(a: int, *_, **__) -> bool:
return bool(a)
def func2(b: str, *_, **__) -> bool:
return bool(b)
@decorator_with_input(func1, func2)
def func3(a: int, b: str) -> tuple[int, str]:
return a, b
func3(a=1, b="b")
</code></pre>
<p>Which gives me a bunch of errors from mypy:</p>
<p>t.py:34: error: Argument 1 has incompatible type "Callable[[int, str], Tuple[int, str]]"; expected "Callable[[VarArg(), KwArg()], Tuple[int, str]]" [arg-type]</p>
<p>t.py:34: error: Argument 1 to "decorator_with_input" has incompatible type "Callable[[int, VarArg(Any), KwArg(Any)], bool]"; expected "Callable[[VarArg(), KwArg()], bool]" [arg-type]</p>
<p>t.py:34: error: Argument 2 to "decorator_with_input" has incompatible type "Callable[[str, VarArg(Any), KwArg(Any)], bool]"; expected "Callable[[VarArg(), KwArg()], bool]" [arg-type]</p>
<p>t.py:39: error: Argument "a" to "func3" has incompatible type "int"; expected [arg-type]</p>
<p>t.py:39: error: Argument "b" to "func3" has incompatible type "str"; expected [arg-type]</p>
<p>My question essentially comes down to: how can I type this? It feels like <code>*inputs</code> should be callables taking a subset of <code>P</code>, but from what I can tell I can't make another <code>ParamSpec</code> that is bound to <code>P</code>, since mypy then says:</p>
<p>error: Only the first argument to ParamSpec has defined semantics [misc]</p>
|
<python><python-typing>
|
2023-03-07 16:27:50
| 0
| 619
|
Fredrik Nilsson
|
75,664,548
| 8,231,763
|
how to solve scipy importerror
|
<p>I have run into a problem while importing Scipy.</p>
<p>My OS is windows 10 enterprise. I have installed the latest Anaconda package (2022-10 64bit) in Jan of 2023. Everything was fine. And today I found I have trouble to run some code I created a few weeks ago. The problem is with SciPy. For example, with</p>
<pre><code>from scipy import optimize
</code></pre>
<p>I got error</p>
<blockquote>
<p>ImportError: DLL load failed while importing _interpolative: The
specified procedure could not be found.</p>
</blockquote>
<p>It happens to other Scipy modules such as <code>scipy.integrate</code></p>
<p>From what I can remember, the only change I made recently was to install <code>pywin32</code>. Not sure if this is relevant. Also not sure if my company IT did any update to my system.</p>
<p>I have uninstalled Anaconda and reinstalled it twice. It did not help... Can anyone here help me out? Thank you very much.</p>
<p>P.S. following the suggestions from the replies, I tried</p>
<pre><code>print(sys.executable)
print(sys.version)
print(os.getcwd())
print(getattr(os, "uname", lambda: None)())
print(sys.path)
</code></pre>
<p>And below is the results I got</p>
<blockquote>
<p>C:\Users\span\Anaconda3\python.exe
3.9.13 (main, Aug 25 2022, 23:51:50) [MSC v.1916 64 bit (AMD64)]
C:\Users\span
None
['C:\Users\span', 'C:\Users\span\Anaconda3\python39.zip', 'C:\Users\span\Anaconda3\DLLs', 'C:\Users\span\Anaconda3\lib', 'C:\Users\span\Anaconda3', '', 'C:\Users\span\AppData\Roaming\Python\Python39\site-packages', 'C:\Users\span\Anaconda3\lib\site-packages', 'C:\Users\span\Anaconda3\lib\site-packages\win32', 'C:\Users\span\Anaconda3\lib\site-packages\win32\lib', 'C:\Users\span\Anaconda3\lib\site-packages\Pythonwin', 'C:\Users\span\Anaconda3\lib\site-packages\IPython\extensions', 'C:\Users\span\.ipython']</p>
</blockquote>
|
<python><scipy><anaconda><pywin32>
|
2023-03-07 16:18:33
| 0
| 325
|
Shu Pan
|
75,664,530
| 6,751,456
|
is average of total and the sum of average of group distribution the same
|
<p>I have a query that calculates the total sessions and total number of crashed sessions.</p>
<pre><code>SELECT COUNT(*) count,
SUM(cast((CAST((session.platform = 1 AND session.iscrashed) OR
(session.platform = 2 AND session.iscrashed) AS int)) as int)) total_crashed
FROM session
WHERE appid = '4ef36d'
AND (CAST(session.uploadedon AS timestamp) >= timestamp '2023-02-07 00:00:00' AND
CAST(session.uploadedon AS timestamp) <= timestamp '2023-03-07 23:59:59')
</code></pre>
<p>The total count here is <code>7896</code> and total crashed is <code>774</code>.
So the average is <code>774/7896</code> * <code>100</code> = <code>9.80%</code></p>
<p>Now, grouping by month:</p>
<pre><code>SELECT
FORMAT_DATETIME(session.uploadedon,'MMM yyyy') session_uploaded_month,
count(1) as count,
SUM(cast((CAST((session.platform = 1 AND session.iscrashed) OR
(session.platform = 2 AND session.iscrashed) AS int)) as int)) session_is_crashed
FROM hive.prod_views.session
WHERE appid = '4ef36d'
AND (CAST(session.uploadedon AS timestamp) >= timestamp '2023-02-07 00:00:00' AND
CAST(session.uploadedon AS timestamp) <= timestamp '2023-03-07 23:59:59')
GROUP BY FORMAT_DATETIME(session.uploadedon,'MMM yyyy')
</code></pre>
<p>The output is:</p>
<pre><code>+----------------------+-----+------------------+
|session_uploaded_month|count|session_is_crashed|
+----------------------+-----+------------------+
|Feb 2023 |7227 |774 |
|Mar 2023 |669 |0 |
+----------------------+-----+------------------+
</code></pre>
<p>This gives the total avg percentage of <code>774/7227 * 100</code> = <code>10.70%</code> for <code>Feb</code> only.</p>
<p>So my question is: can the avg. percentage for a single group be greater than the overall avg. percentage?</p>
<p>Here total avg. is less than month avg.</p>
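<p>Yes — the overall percentage is a count-weighted average of the per-group percentages, so it always lies between the smallest and largest group rate, and any single group (here Feb) can sit above it. A quick numeric check with the figures from the question:</p>

```python
# (crashed, total) per month, taken from the grouped query output
groups = {"Feb 2023": (774, 7227), "Mar 2023": (0, 669)}

rates = {m: crashed / total for m, (crashed, total) in groups.items()}
overall = sum(c for c, _ in groups.values()) / sum(t for _, t in groups.values())

print(f"overall {overall:.2%}, Feb {rates['Feb 2023']:.2%}")  # overall 9.80%, Feb 10.71%
```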
|
<python><sql><average><aggregation><weighted-average>
|
2023-03-07 16:17:13
| 0
| 4,161
|
Azima
|
75,664,524
| 10,271,487
|
Pandas MultiIndex updating with derived values
|
<p>I am trying to update a MultiIndex frame with derived data.</p>
<p>My multiframe is a time series where 'Vehicle_ID' and 'Frame_ID' are the levels of index and I iterate through each Vehicle_ID in order and compute exponential weighted avgs to clean the data and try to merge the additional columns to the original MultiIndex dataframe.</p>
<p>Example Code:</p>
<pre><code>v_ids = trajec.index.get_level_values('Vehicle_ID').unique().values
for id in v_ids:
ewm_x = trajec.loc[(id,), 'Local_X'].ewm(span=T_pos/dt).mean()
ewm_y = trajec.loc[(id,), 'Local_Y'].ewm(span=T_pos_x/dt).mean()
smooth = pd.DataFrame({'Vehicle_ID': id, 'Frame_ID': ewm_y.index.values, 'ewm_y': ewm_y, 'ewm_x': ewm_x}).set_index(['Vehicle_ID', 'Frame_ID'])
trajec.join(smooth)
</code></pre>
<p>And this works outside of the loop, to join the values to the trajec dataframe. But when implemented in the loop seems to overwrite on each loop.</p>
<pre><code> Local_X, Local_Y, v_Length, v_Width, v_Class, v_Vel, v_Acc, Lane_ID, Preceding, Following, Space_Headway, Time_Headway
Vehicle_ID Frame_ID
1 12 16.884 48.213 14.3 6.4 2 12.50 0.0 2 0 0 0.00 0.00
13 16.938 49.463 14.3 6.4 2 12.50 0.0 2 0 0 0.00 0.00
14 16.991 50.712 14.3 6.4 2 12.50 0.0 2 0 0 0.00 0.00
15 17.045 51.963 14.3 6.4 2 12.50 0.0 2 0 0 0.00 0.00
16 17.098 53.213 14.3 6.4 2 12.50 0.0 2 0 0 0.00 0.00
... ... ... ... ... ... ... ... ... ... ... ... ... ...
2911 8588 53.693 1520.312 14.9 5.9 2 31.26 0.0 5 2910 2915 78.19 2.50
8589 53.719 1523.437 14.9 5.9 2 31.26 0.0 5 2910 2915 78.26 2.50
8590 53.746 1526.564 14.9 5.9 2 31.26 0.0 5 2910 2915 78.41 2.51
8591 53.772 1529.689 14.9 5.9 2 31.26 0.0 5 2910 2915 78.61 2.51
8592 53.799 1532.830 14.9 5.9 2 30.70 5.9 5 2910 2915 78.81 2.57
</code></pre>
<p>dataframe exerpt.</p>
|
<python><pandas>
|
2023-03-07 16:16:40
| 1
| 309
|
evan
|
75,664,522
| 3,581,217
|
Mask xarray Dataset on one dimension
|
<p>I have an xarray dataset with variables with dimensions <code>(time, x, y)</code> or <code>(x, y)</code>. In addition, I have an array with bools the same size as the <code>time</code> dimension, which I would like to use to filter my dataset. I can't get that working.</p>
<p>Simple example that does work (all variables have the same dimensions):</p>
<pre><code>import xarray as xr
import numpy as np
# Create dummy dataset:
data_vars = {
'var1': (['time', 'y', 'x'], np.arange(81).reshape(9,3,3))}
coords = {'time': (['time'], np.arange(9))}
ds = xr.Dataset(data_vars=data_vars, coords=coords)
# Define mask. Simple in this case, more complex patterns in real-life problem:
mask = (ds.time.values > 2) & (ds.time.values < 6)
# Mask dataset.
ds_masked = ds.where(mask[: ,np.newaxis, np.newaxis])
ds_final = ds_masked.dropna(dim='time')
</code></pre>
<p>I'm not sure if this is the best way of applying a mask, but at least it works. But if I add a variable with different dims:</p>
<pre><code>data_vars = {
'var1': (['time', 'y', 'x'], np.arange(81).reshape(9,3,3)),
'var2': (['y', 'x'], np.arange(9).reshape(3,3))}
</code></pre>
<p>and leave the rest of the code unchanged, I can no longer filter my dataset, because my mask has more dimensions than <code>var2</code>. Any idea how to easily do this? I would like to keep <code>var2</code> in the filtered dataset.</p>
|
<python><python-xarray>
|
2023-03-07 16:16:34
| 1
| 10,354
|
Bart
|
75,664,519
| 4,934,150
|
Malicious Macro Detected when trying to run Python-Script from Excel-VBA
|
<p>I want to run a simple PythonScript using the below VBA:</p>
<pre><code>Sub RunPythonScrip()
Set objShell = VBA.CreateObject("Wscript.shell")
PythonExePath = """C:\Users\user1\AppData\Local\Programs\Python\Python311\python.exe"""
PythonScriptPath = """C:\Users\user1\PycharmProjects\webscraping\main.py"""
objShell.Run PythonExePath & PythonScriptPath
End Sub
</code></pre>
<p>However, when I run this VBA I get the following error message:</p>
<p><a href="https://i.sstatic.net/6qZTP.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6qZTP.jpg" alt="enter image description here" /></a></p>
<hr />
<p>What settings do I need to change so the macro is not considered malicious?</p>
|
<python><vba>
|
2023-03-07 16:15:58
| 1
| 5,543
|
Michi
|
75,664,404
| 14,045,537
|
How to extract hex color codes from a colormap
|
<p>From the below <code>branca</code> colormap</p>
<pre><code>import branca
color_map = branca.colormap.linear.PuRd_09.scale(0, 250)
colormap = color_map.to_step(index=[0, 10, 20, 50, 70, 90, 120, 200])
</code></pre>
<p><a href="https://i.sstatic.net/BK80u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BK80u.png" alt="enter image description here" /></a></p>
<p>How can I extract hex colours for all the steps(index) from the above <code>Branca</code> colormap?</p>
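<p>A branca <code>ColorMap</code> can be sampled at each index value; recent branca versions expose <code>rgba_floats_tuple(x)</code> (an assumption worth checking against your installed version), and the resulting float tuple then only needs converting to hex. A sketch of the conversion:</p>

```python
def to_hex(rgba):
    """Convert an RGB(A) tuple of 0-1 floats to a '#rrggbb' hex string."""
    r, g, b = (int(round(c * 255)) for c in rgba[:3])
    return f"#{r:02x}{g:02x}{b:02x}"

print(to_hex((1.0, 0.0, 0.5)))  # #ff0080

# assuming the `colormap` from the question:
#   hex_colors = [to_hex(colormap.rgba_floats_tuple(i)) for i in colormap.index]
```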
|
<python><matplotlib><visualization><colormap>
|
2023-03-07 16:06:18
| 2
| 3,025
|
Ailurophile
|
75,664,343
| 10,866,873
|
Tkinter child widgets don't trigger <Leave> on parent
|
<p>When the mouse moves from a parent widget into a child widget and the parent has a <code><Leave></code> binding the event is never triggered due to the mouse not actually leaving the bounds of the parent. It is also the same for <code><Enter></code> when the mouse outside the child widget yet remains in the parent.</p>
<p>Is there another way to trigger an event when the mouse moves to and from a child widget?</p>
<p>In the code below the bottom box should always match the colour of the box the mouse is over, when moving from green to purple the <code><Enter></code> event is triggered but the <code><Leave></code> event is never triggered from the green box, similar when moving from purple back to green the <code><Leave></code> event is triggered on purple (defaulting the box white) but the <code><Enter></code> event is never triggered on the green box.</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import *
class tracer:
def __init__(self):
self.widgets = {}
self.active = None
self.marker = None
def add_w(self, c, w):
self.widgets[c] = w
def add_marker(self, w):
self.marker = w
def set_active(self, event, c):
print(f"setting {c} active")
if c in self.widgets:
self.active = self.widgets[c]
self.marker.configure(bg=c)
print(f"{self.active} is now active")
def set_inactive(self, event, c):
print(f"setting {c} inactive")
if c in self.widgets and self.active == self.widgets[c]:
self.active = None
self.marker.configure(bg="white")
print(f"Nothing should be active {self.active}")
t = tracer()
root = Tk()
root.geometry("400x400+100+100")
box1 = Frame(root, bg="red")
box1.pack(fill=BOTH,side=TOP,expand=1)
box2 = Frame(root, bg="green")
box2.pack(fill=BOTH,side=TOP,expand=1)
sub_box2 = Frame(box2, bg="purple")
sub_box2.pack(fill=BOTH, expand=1, padx=30, pady=30)
box3 = Frame(root, bg="white", borderwidth=2)
box3.pack(fill=BOTH,side=TOP,expand=1)
t.add_w("red", box1)
t.add_w("green", box2)
t.add_w("purple", sub_box2)
t.add_marker(box3)
box1.bind("<Enter>", lambda e=Event(), c="red", w=box1:t.set_active(e, c))
box2.bind("<Enter>", lambda e=Event(), c="green", w=box2:t.set_active(e, c))
sub_box2.bind("<Enter>", lambda e=Event(), c="purple", w=box2:t.set_active(e, c))
box1.bind("<Leave>", lambda e=Event(), c="red":t.set_inactive(e, c))
box2.bind("<Leave>", lambda e=Event(), c="green":t.set_inactive(e, c))
sub_box2.bind("<Leave>", lambda e=Event(), c="purple":t.set_inactive(e, c))
root.mainloop()
</code></pre>
<p>I need this to determine when the mouse is over a scrollable child widget of a scrollable parent widget as the mousewheel isn't handled by Tkinter natively.</p>
|
<python><tkinter>
|
2023-03-07 16:00:46
| 2
| 426
|
Scott Paterson
|
75,664,331
| 12,596,824
|
Using assign operator on already updated dataframe through method chaining
|
<p>I have a dataframe like so:</p>
<pre><code>pid name kids_count
10 tom 2
21 bill 0
22 peter NaT
81 jen 4
20 jerry 1
</code></pre>
<p>I do the following method chaining to create an ID column but it takes two lines:</p>
<pre><code>person_data = (person_data
.dropna()
.reset_index(drop = True)
)
person_data = (person_data
.assign(PKID = person_data.index + 1
))
</code></pre>
<p>How can I method chain to create an ID column in one line of code? I essentially want to use the assign operator but pass the data frame thats already dropped NAs..</p>
<p>Pseudocode would look something like this, where the * is some code that takes in the person_data dataframe as dropping nulls:</p>
<pre><code>person_data = (person_data
.dropna()
.reset_index(drop = True)
.assign(PKID = *)
)
</code></pre>
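<p><code>assign</code> accepts callables, and a callable receives the frame as it exists at that point in the chain — which is exactly the "already dropped NAs" frame wanted here. A sketch on data shaped like the question's:</p>

```python
import pandas as pd

person_data = pd.DataFrame({
    "pid": [10, 21, 22, 81, 20],
    "name": ["tom", "bill", "peter", "jen", "jerry"],
    "kids_count": [2, 0, None, 4, 1],
})

person_data = (person_data
    .dropna()
    .reset_index(drop=True)
    .assign(PKID=lambda df: df.index + 1)  # df is the post-dropna frame
)
print(person_data["PKID"].tolist())  # [1, 2, 3, 4]
```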
|
<python><pandas>
|
2023-03-07 15:59:53
| 2
| 1,937
|
Eisen
|
75,664,275
| 7,593,853
|
What hook language should I use for local pre-commit hooks written in Python?
|
<p>I’m working on a <a href="https://pre-commit.com/#repository-local-hooks" rel="nofollow noreferrer">local pre-commit hook</a> written in Python. At first, I saw <a href="https://pre-commit.com/#python" rel="nofollow noreferrer">this quote from the pre-commit documentation</a>:</p>
<blockquote>
<h3>python</h3>
<p>The hook repository must be installable via <code>pip install .</code> (usually by either <code>setup.py</code> or <code>pyproject.toml</code>).</p>
</blockquote>
<p>I made the repo with the local hook installable using <code>pip install .</code>, but later found out that <a href="https://github.com/pre-commit/pre-commit/issues/1926" rel="nofollow noreferrer">that isn’t supposed to work with local hooks</a>.</p>
<p>More recently, I created a bare-bones <a href="https://pre-commit.com/#adding-pre-commit-plugins-to-your-project" rel="nofollow noreferrer"><code>.pre-commit-config.yaml</code></a> that looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>repos:
-
repo: local
hooks:
-
id: &id1 test1
name: *id1
language: python
entry: python -c "print('Hello, world!')"
always_run: true
-
id: &id2 test2
name: *id2
language: system
entry: python -c "print('Hello, world!')"
always_run: true
</code></pre>
<p>Both seem to do the same thing.</p>
<p>Here’s my question: for local hooks, is there any difference between using <a href="https://pre-commit.com/#python" rel="nofollow noreferrer">the <code>python</code> <code>language</code></a> and <a href="https://pre-commit.com/#system" rel="nofollow noreferrer">the <code>system</code> <code>language</code></a>? Should I prefer one over the other when writing hooks in Python? I would think that <code>system</code> hooks would rely on the user’s <code>PATH</code> (and that <code>python</code> hooks might not), but the <code>python</code> command isn’t on my <code>PATH</code> normally, and the hook still works (I start a <a href="https://nixos.org/manual/nix/unstable/command-ref/nix-shell.html" rel="nofollow noreferrer"><code>nix-shell</code></a> when I want to run <code>python</code>).</p>
|
<python><pre-commit.com>
|
2023-03-07 15:55:00
| 1
| 646
|
Ginger Jesus
|
75,664,227
| 20,443,528
|
How to implement SQL ORDER BY like functionality if the first parameter is a function call in python?
|
<p>I have a list of objects like this</p>
<pre><code>time_slots = [
<TimeSlot: number: 1, capacity: 4, advance: 10, from: 02:00:00, till: 08:00:00>,
<TimeSlot: number: 2, capacity: 3, advance: 17, from: 01:00:00, till: 04:00:00>,
<TimeSlot: number: 3, capacity: 3, advance: 17, from: 01:00:00, till: 04:00:00>,
<TimeSlot: number: 4, capacity: 1, advance: 17, from: 03:00:00, till: 08:00:00>,
<TimeSlot: number: 5, capacity: 4, advance: 17, from: 02:00:00, till: 07:00:00>,
<TimeSlot: number: 6, capacity: 3, advance: 17, from: 02:00:00, till: 09:00:00>,
<TimeSlot: number: 7, capacity: 2, advance: 17, from: 03:00:00, till: 08:00:00>
]
</code></pre>
<p>I want to sort the above objects on the basis of duration (till - from) and capacity.</p>
<p>Basically, I want to implement this-</p>
<pre><code>ORDER BY duration(till - from), capacity;
</code></pre>
<p>I can access the properties of every object by using something like-</p>
<pre><code>time_slots[i].number
</code></pre>
<p>I sorted the list by duration like this-</p>
<pre><code>from datetime import date, datetime
def bubbleSort(time_slots):
swapped = False
# Looping from size of array from last index[-1] to index [0]
for n in range(len(time_slots)-1, 0, -1):
for i in range(n):
if (datetime.combine(date.today(), time_slots[i].available_till) - datetime.combine(date.today(), time_slots[i].available_from)) > (datetime.combine(date.today(), time_slots[i + 1].available_till) - datetime.combine(date.today(), time_slots[i + 1].available_from)):
swapped = True
# swapping data if the element is less than next element in the array
time_slots[i], time_slots[i + 1] = time_slots[i + 1], time_slots[i]
if not swapped:
# exiting the function if we didn't make a single swap
# meaning that the array is already sorted.
return
bubbleSort(time_slots)
</code></pre>
<p>I do not know how to do the second part and make a list which implements <code>ORDER BY bubbleSort(time_slots), capacity;</code> logic.</p>
<p>I have to implement this logic in python. Can someone please help me with this?</p>
<p>Side note: This problem is actually the real problem. The earlier problem I gave was a simplified one and those answers which I received will not work.</p>
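<p>One way to sketch the <code>ORDER BY duration, capacity</code> intent is a tuple sort key passed to <code>list.sort</code>, which sorts by duration first and breaks ties on capacity (Python's sort is stable); the <code>TimeSlot</code> stand-in class below is only there to make the example self-contained:</p>

```python
from datetime import date, datetime, time

class TimeSlot:
    # minimal stand-in for the real objects in the question
    def __init__(self, number, capacity, available_from, available_till):
        self.number = number
        self.capacity = capacity
        self.available_from = available_from
        self.available_till = available_till

def duration(slot):
    # same combine() trick as in the question, yielding a timedelta
    return (datetime.combine(date.today(), slot.available_till)
            - datetime.combine(date.today(), slot.available_from))

time_slots = [
    TimeSlot(4, 1, time(3), time(8)),  # 5 h, capacity 1
    TimeSlot(5, 4, time(2), time(7)),  # 5 h, capacity 4
    TimeSlot(2, 3, time(1), time(4)),  # 3 h, capacity 3
]
time_slots.sort(key=lambda s: (duration(s), s.capacity))
print([s.number for s in time_slots])  # [2, 4, 5]
```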
|
<python><sorting><datetime>
|
2023-03-07 15:51:02
| 2
| 331
|
Anshul Gupta
|
75,664,220
| 13,986,997
|
Is there a way to do vlookup using python?
|
<p>Let's say I have two dataframes df1 and df2 and I need to do a vlookup on name and output the names which match.</p>
<pre><code>import pandas as pd
import numpy as np
df1 = pd.DataFrame({
'name': ['A', 'B', 'C', 'D'],
'val1': [5, 6, 7, 8],
'val2': [1, 2, 3, 4],
})
df2 = pd.DataFrame({
'name': ['B', 'D', 'E', 'F'],
'abc': [15, 16, 17, 18],
'def': [11, 21, 31, 41],
})
Expected Output:
name val1 val2 matched_name
A 5 1 NaN
B 6 2 B
C 7 3 NaN
D 8 4 D
</code></pre>
<p>I thought this could be done by:</p>
<pre><code>df1['matched_name'] = df1['name'].map(df2['name'])
</code></pre>
<p>But I'm getting all NaN's in matched column. Is there a way to do this?</p>
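<p><code>.map</code> looks each value up in the <em>index</em> of the Series it is given; <code>df2['name']</code> is indexed 0..3, so nothing matches and every lookup yields NaN. Keeping the name only where it occurs in df2 can be sketched with <code>isin</code>/<code>where</code>:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"name": ["A", "B", "C", "D"],
                    "val1": [5, 6, 7, 8], "val2": [1, 2, 3, 4]})
df2 = pd.DataFrame({"name": ["B", "D", "E", "F"],
                    "abc": [15, 16, 17, 18], "def": [11, 21, 31, 41]})

# keep the name where it appears in df2, NaN elsewhere
df1["matched_name"] = df1["name"].where(df1["name"].isin(df2["name"]))
print(df1["matched_name"].tolist())  # [nan, 'B', nan, 'D']
```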
|
<python><pandas>
|
2023-03-07 15:50:06
| 2
| 413
|
Akilesh
|
75,664,217
| 12,065,403
|
How to update custom package version?
|
<p>I am trying to create a python package that I can reuse in other projects. I did follow this tutorial: <a href="https://towardsdatascience.com/create-your-custom-python-package-that-you-can-pip-install-from-your-git-repository-f90465867893" rel="nofollow noreferrer">Create Your Custom, private Python Package That You Can PIP Install From Your Git Repository</a>.</p>
<p>It works but when I update the code of my package and then try to update the package on a project that uses this package, it does not update.</p>
<p>So I have a git repo with my package at <a href="https://github.com/vincent2303/myCustomPackage" rel="nofollow noreferrer">https://github.com/vincent2303/myCustomPackage</a> with following files:</p>
<ul>
<li><code>myCustomPackage/functions.py</code> with the following function:</li>
</ul>
<pre><code>def say_foo():
print('Foo')
</code></pre>
<ul>
<li>A <code>setup.py</code> file with:</li>
</ul>
<pre><code>import setuptools
setuptools.setup(
name='myCustomPackage',
version='0.0.1',
author='Vincent2303',
description='Testing installation of Package',
long_description_content_type="text/markdown",
license='MIT',
packages=['myCustomPackage'],
install_requires=[],
)
</code></pre>
<p>Then I created a test project, with:</p>
<ul>
<li>A <code>main.py</code> file:</li>
</ul>
<pre><code>from myCustomPackage import functions
functions.say_foo()
</code></pre>
<ul>
<li>A Pipfile (I did <code>pipenv shell</code> then <code>pipenv install git+https://github.com/vincent2303/myCustomPackage.git#egg=myCustomPackage</code>):</li>
</ul>
<pre><code>[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
mycustompackage = {git = "https://github.com/vincent2303/myCustomPackage.git"}
[requires]
python_version = "3.8"
</code></pre>
<p>At this point it works well, I can use <code>say_foo</code>.</p>
<p>Then I added a new function <code>say_bar</code> on myCustomPackage (in <code>myCustomPackage/functions.py</code>). I did increment version from <code>0.0.1</code> to <code>0.0.2</code> in <code>setup.py</code>. I did commit and push.</p>
<p>In my project that uses myCustomPackage I did run <code>pipenv install</code>, I expect pip to check version, detect that there is a new version and update it but it does not. I can't use <code>say_bar</code> (I get error: <code>AttributeError: module 'myCustomPackage.functions' has no attribute 'say_bar'</code>)</p>
<p>I tried as well to re run <code>pipenv install git+https://github.com/vincent2303/myCustomPackage.git#egg=myCustomPackage</code>. It does not update.</p>
<p><strong>How can I update the version of <code>myCustomPackage</code> in projects that use this package?</strong></p>
|
<python><git><pipenv><python-packaging>
|
2023-03-07 15:50:01
| 1
| 1,288
|
Vince M
|
75,664,215
| 7,987,987
|
Sum together list of F expressions
|
<p>Is there a way to specify (in an annotation or aggregation) that a sequence of <code>F</code> expressions should be summed together without manually typing out <code>F("first_prop") + F("second_prop") + ...</code>?</p>
<p>I want something similar to how python's <code>sum()</code> function allows you to pass an iterable and get the sum of the values in the iterable i.e. <code>sum([1,2,3])</code> returns <code>6</code>.</p>
<p>Concretely, I want something that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>class Tree(TimeStampedModel):
leaf_count = models.IntegerField()
branch_count = models.IntegerField()
Tree.objects.create(leaf_count=60, branch_count=8)
Tree.objects.create(leaf_count=30, branch_count=3)
# now I want to annotate a combined count using my imaginary IterableSum aggregator
combined_sums = list(
Tree.objects.all().annotate(
combined_count=IterableSum(fields=[F("leaf_count"), F("branch_count")])
).values_list("combined_count", flat=True)
)
combined_sums # [68, 33]
</code></pre>
<p>How can I achieve this?</p>
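<p>Since <code>F</code> expressions support <code>+</code>, <code>functools.reduce</code> with <code>operator.add</code> folds any iterable of them into one combined expression — the helper name below is made up for this sketch, and the same function works on plain numbers, which is what the demo exercises:</p>

```python
import operator
from functools import reduce

def iterable_sum(exprs):
    """Fold any objects supporting `+` (ints, Django F() expressions, ...)
    into a single summed expression."""
    return reduce(operator.add, exprs)

print(iterable_sum([1, 2, 3]))  # 6

# in Django this would look something like:
#   Tree.objects.annotate(
#       combined_count=iterable_sum([F("leaf_count"), F("branch_count")]))
```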
|
<python><django><django-models><django-aggregation>
|
2023-03-07 15:49:52
| 1
| 936
|
Uche Ozoemena
|
75,664,160
| 1,107,474
|
How to pass the index of the iterable when using multiprocessing pool
|
<p>I would like to call a function <code>task()</code> in parallel <code>N</code> times. The function accepts two arguments: an array, and an index at which to write the return result into the array:</p>
<pre><code>def task(arr, index):
arr[index] = "some result to return"
</code></pre>
<p>To be explicit, the reason for the array is so I can process all the parallel tasks once they have completed. I presume this is ok?</p>
<p>I have created a multiprocessing pool and it calls <code>task()</code>:</p>
<pre><code>def main():
N = 10
arr = np.empty(N)
pool = Pool(os.cpu_count())
pool.map(task, arr)
pool.close()
# Process results in arr
</code></pre>
<p>However, the problem is because <code>map()</code> is already iterable, how do I explicitly pass in the index? Each call to <code>task()</code> should pass in 0, 1, 2.... N.</p>
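<p>A caveat first: a plain numpy array is not shared between worker processes, so writes inside <code>task()</code> would be lost anyway. The usual pattern is to pass the indices as the iterable and let <code>map</code> collect the return values, which come back in input order — a sketch:</p>

```python
import os
from multiprocessing import Pool

def task(index):
    # placeholder work; the result is returned instead of being
    # written into a shared array
    return index * index

def run(n=10):
    with Pool(min(4, os.cpu_count() or 1)) as pool:
        return pool.map(task, range(n))  # results arrive in index order

if __name__ == "__main__":
    print(run(5))  # [0, 1, 4, 9, 16]
```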
|
<python>
|
2023-03-07 15:45:16
| 1
| 17,534
|
intrigued_66
|
75,664,091
| 15,613,309
|
How to determine if a tkinter Toplevel widget exists?
|
<p>I have a need to destroy a <code>Toplevel</code> widget if it already exists before creating it. I've researched this, and every response I can find here and elsewhere suggests using <code>winfo_exists</code>; however, the following code demonstrates that this doesn't work. It fails with the error: object has no attribute 'toplevel'.</p>
<p>I can use a brute force try:/except: to destroy the widget, but surely there's a better way.</p>
<p>BTW: I am on Windows 10 running Python 3.11.1</p>
<pre><code># does_widger_exists.py
import tkinter as tk
class TL_GUI:
def __init__(self):
self.root = tk.Tk()
self.root.geometry('300x100')
self.root.title('Tkinter Test')
button0 = tk.Button(self.root,text='Create Toplevel',command=self.makeToplevel).pack(pady=20)
self.root.mainloop()
def makeToplevel(self):
if tk.Toplevel.winfo_exists(self.toplevel):
self.toplevel.destroy()
self.toplevel = tk.Toplevel(self.root)
self.toplevel.geometry('500x100')
self.toplevel.title('I am a TopLevel')
text0 = tk.Label(self.toplevel,text='Hello World',height=1,width=25,borderwidth=5).pack()
if __name__ == '__main__':
TL_GUI()
</code></pre>
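<p>For what it's worth, the AttributeError comes from reading <code>self.toplevel</code> before it has ever been assigned, not from <code>winfo_exists</code> itself, so a <code>getattr</code> default (or initialising <code>self.toplevel = None</code> in <code>__init__</code>) avoids the try/except. A sketch of the guard — a fake window object stands in for <code>tk.Toplevel</code> so the logic can be shown without a display:</p>

```python
def destroy_if_exists(owner, attr="toplevel"):
    """Destroy owner.<attr> only if it was ever created and still exists."""
    win = getattr(owner, attr, None)  # None when the attribute was never set
    if win is not None and win.winfo_exists():
        win.destroy()
        return True
    return False

class FakeWindow:
    # stand-in exposing the two methods the guard touches
    def __init__(self):
        self.alive = True
    def winfo_exists(self):
        return self.alive
    def destroy(self):
        self.alive = False

class Gui:
    pass

g = Gui()
print(destroy_if_exists(g))  # False -- attribute never set
g.toplevel = FakeWindow()
print(destroy_if_exists(g))  # True -- found and destroyed
```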
|
<python><tkinter>
|
2023-03-07 15:40:18
| 1
| 501
|
Pragmatic_Lee
|
75,664,060
| 980,151
|
Why does Mypy report a subset of JSON as invalid?
|
<p>I have a base class that returns a JSON type. I then have subclasses that return more specific types that, I think, should be valid JSON types, but Mypy reports an error:</p>
<pre><code>error: Return type "List[Dict[str, JSON]]" of "return_json" incompatible with return type "JSON" in supertype "JsonReturner" [override]
</code></pre>
<p>Have I misunderstood something or am I exploring the limits of the type-checking implementation?</p>
<p>Full example:</p>
<pre><code>from typing import TypeAlias
JSON: TypeAlias = dict[str, "JSON"] | list["JSON"] | str | int | float | bool | None
class JsonReturner:
def return_json(self) -> JSON:
raise NotImplementedError("abstract base class")
class ListJsonReturner(JsonReturner):
def return_json(self) -> list[JSON]:
return []
class DictJsonReturner(JsonReturner):
def return_json(self) -> dict[str, JSON]:
return {}
class ListDictJsonReturner(JsonReturner):
def return_json(self) -> list[dict[str, JSON]]:
return []
class DictListJsonReturner(JsonReturner):
def return_json(self) -> dict[str, list[JSON]]:
return {}
</code></pre>
<pre><code>$ mypy jsontypes.py
jsontypes.py:27: error: Return type "List[Dict[str, JSON]]" of "return_json" incompatible with return type "JSON" in supertype "JsonReturner" [override]
jsontypes.py:33: error: Return type "Dict[str, List[JSON]]" of "return_json" incompatible with return type "JSON" in supertype "JsonReturner" [override]
</code></pre>
|
<python><mypy><python-typing>
|
2023-03-07 15:37:31
| 1
| 2,569
|
wrgrs
|
75,664,029
| 6,240,756
|
PowerShell: use variable in command
|
<p>I'm facing a basic and stupid problem on Windows PowerShell. I'm almost sure it has already been answered somewhere but I can't find something working for me.</p>
<p>I simply would like to use a variable inside a command in PowerShell:</p>
<pre><code>$VENV_DIR='C:\venv\'
python -m venv $VENV_DIR
$VENV_DIR\Scripts\python.exe --version
</code></pre>
<p>I expect to see <code>Python 3.10.8</code> as result.</p>
<p>But I have this error:</p>
<pre><code> PS C:\> $VENV_DIR\Scripts\python.exe --version
At line:1 char:10
+ $VENV_DIR\Scripts\python.exe --version
+ ~~~~~~~~~~~~~~~~~~~
Unexpected token '\Scripts\python.exe' in expression or statement.
+ CategoryInfo : ParserError: (:) [], ParentContainsErr
orRecordException
+ FullyQualifiedErrorId : UnexpectedToken
</code></pre>
<p>I tried a lot of different combinations, but none of them work</p>
<pre><code> 97 $VENV_DIR\Scripts\python.exe --version
98 `$VENV_DIR\Scripts\python.exe --version
99 $VENV_DIR\\Scripts\python.exe --version
100 $VENV_DIR\Scripts\python.exe --version
101 "$VENV_DIR"\Scripts\python.exe --version
102 "$VENV_DIR\Scripts\python.exe" --version
103 "$VENV_DIR\Scripts\python.exe --version"
104 ("$VENV_DIR\Scripts\python.exe --version")
105 $("$VENV_DIR\Scripts\python.exe --version")
106 -$("$VENV_DIR\Scripts\python.exe --version")
107 -$("$VENV_DIR")\Scripts\python.exe --version
108 -$($VENV_DIR)\Scripts\python.exe --version
109 $VENV_DIR\Scripts\python.exe --version
110 (echo $VENV_DIR)\Scripts\python.exe --version
111 echo $VENV_DIR\Scripts\python.exe --version
112 $(echo $VENV_DIR)\Scripts\python.exe --version
113 -$(echo $VENV_DIR)\Scripts\python.exe --version
114 -$("echo $VENV_DIR")\Scripts\python.exe --version
115 echo $VENV_DIR\Scripts\python.exe --version
</code></pre>
<p>Could you please help? Thanks</p>
|
<python><powershell>
|
2023-03-07 15:34:57
| 1
| 2,005
|
iAmoric
|
75,663,995
| 5,387,991
|
Polars: Addressing "The predicate passed to 'LazyFrame.filter' expanded to multiple expressions"
|
<p>From Polars 0.16.11 the following filter statement raises an exception with the following error message:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({'lag1':[0,1,None],'lag2':[0,None,2]})
df.filter(pl.col('^lag.*$').is_not_null())
ComputeError: The predicate passed to 'LazyFrame.filter' expanded to multiple expressions:
col("lag1").is_not_null().any(),
col("lag2").is_not_null().any(),
This is ambiguous. Try to combine the predicates with the 'all' or `any' expression.
</code></pre>
<p>How can I address it to filter rows where any of the lag columns are <code>null</code>?</p>
|
<python><python-polars>
|
2023-03-07 15:31:47
| 3
| 934
|
braaannigan
|
75,663,969
| 7,012,917
|
Compute percentage change by increasing window size up to period
|
<p>Say I have this series</p>
<pre><code>s = pd.Series([90, 91, 85, 95])
</code></pre>
<p>If I compute the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pct_change.html" rel="nofollow noreferrer">percentage change</a> with a period of 2 I get</p>
<pre><code>s.pct_change(periods=2)
0 NaN
1 NaN
2 -0.055556
3 0.043956
dtype: float64
</code></pre>
<p>But say I have a couple of years of data and I want to compute the year over year (YoY) rolling returns, with <code>pct_change(365)</code>. Then I would lose a year's worth of data (in the sense that I have 365 NaN values before I have a valid observation).</p>
<p><strong>I would instead like to compromise by starting as if I had a period of 1, 2, then 3 and so on up until I reach the specified</strong> <code>periods</code>.</p>
<p>In other words, I would like an output of the likes of</p>
<pre><code>0 NaN
1 0.011111
2 -0.055556
3 0.043956
dtype: float64
</code></pre>
<p>or</p>
<pre><code>0 0
1 0.011111
2 -0.055556
3 0.043956
dtype: float64
</code></pre>
<p>Is this possible? I know that <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rolling.html" rel="nofollow noreferrer">pandas.DataFrame.rolling</a> has basically this functionality with the argument <code>min_periods</code>, but I couldn't find something similar for <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pct_change.html" rel="nofollow noreferrer">pandas.DataFrame.pct_change</a>.</p>
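<p>To illustrate the intent, a sketch of a hand-rolled "ramp-up" variant (the name <code>pct_change_ramp</code> is made up; this is not a pandas built-in). The first element comes out as 0, matching the second desired output above:</p>

```python
import numpy as np
import pandas as pd

def pct_change_ramp(s: pd.Series, periods: int) -> pd.Series:
    # the effective lag grows 0, 1, 2, ... until it reaches `periods`,
    # mirroring the min_periods behaviour of rolling()
    idx = np.arange(len(s))
    lag = np.minimum(idx, periods)
    base = s.to_numpy()[idx - lag]
    return pd.Series(s.to_numpy() / base - 1, index=s.index)

s = pd.Series([90, 91, 85, 95])
out = pct_change_ramp(s, periods=2)
```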
|
<python><pandas><dataframe><percentage>
|
2023-03-07 15:29:31
| 1
| 1,080
|
Nermin
|
75,663,813
| 8,580,652
|
Python regular expression: search from right to left by delimiter, then search from right to left in the left part of the delimiter
|
<p>Example:</p>
<pre><code>aaa_bbb_ /xyz=uvw,ccc=height:18y,weight:1xb,ddd=19d
</code></pre>
<p>The end goal is to parse it into a dictionary:</p>
<pre><code>{'aaa_bbb_ /xyz':'uvw','ccc':'height:18y,weight:1xb','ddd':19d}
</code></pre>
<p>The rule is:</p>
<blockquote>
<p>search for "=" from the right, split by "=".
To the left of the "=" sign, search for "," from right to left again, content between "," and "=" is the key: 'ddd', and content to the right of "=" is the value: '19d'.</p>
</blockquote>
<p>After this is done, repeat the step in the remainder of the string</p>
<pre><code>aaa_bbb_/xyz=uvw,ccc=height:18y,weight:1xb
</code></pre>
<p>The string contains at least one key:value pair. The character <code>,</code>, as well as almost all other special characters, can appear in the value, as the example suggests.</p>
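<p>Since the rule is purely "scan from the right for delimiters", plain string slicing may be simpler than a regex; a sketch following the two-step rule described above:</p>

```python
def parse_pairs(s: str) -> dict:
    # peel off the right-most key=value pair each round: the value is
    # everything after the last '=', the key is everything between the
    # last ',' preceding that '=' and the '=' itself
    result = {}
    while '=' in s:
        head, _, value = s.rpartition('=')
        cut = head.rfind(',')
        result[head[cut + 1:]] = value
        s = head[:cut] if cut != -1 else ''
    return result

pairs = parse_pairs('aaa_bbb_ /xyz=uvw,ccc=height:18y,weight:1xb,ddd=19d')
```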
|
<python><regex>
|
2023-03-07 15:16:03
| 1
| 398
|
John
|
75,663,721
| 5,568,409
|
How to label the kernel density estimate in histplot
|
<p>I am plotting a histogram using <code>seaborn</code>, along with a KDE curve and a Gaussian fit, but the instruction <code>label = "KDE fit"</code> in <code>sns.histplot</code> is inappropriately displayed in color, as it seems to refer to the whole histogram... Is there a way to specifically label the KDE curve so as to appear in the <code>legend</code> box as a solid green line (just as the Gaussian fit appears as a dashed red line)?</p>
<p>The full code I used is below:</p>
<pre><code>import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
import scipy.stats as stats
from scipy.stats import norm
# Generating data
np.random.seed(63123)
data = np.random.normal(loc = 600, scale = 30, size = 20)
# Parameters for histogram plotting
min_val = data.min()
max_val = data.max()
val_width = max_val - min_val
n_bins = 7
bin_width = val_width/n_bins
list_xticks_raw = np.arange(min_val - bin_width/2, max_val + bin_width/2, bin_width).tolist()
list_xticks_round = [round(x) for x in list_xticks_raw]
# Histogram and Gaussian fit plotting
fig = plt.figure(figsize = (4,4))
h = sns.histplot(data = None, x = data , bins = n_bins, binrange=(min_val, max_val), discrete = False, shrink = 1.0,
stat = "density", element = "bars", color = "green", kde = True, label = "KDE fit")
plt.xlim(min_val - bin_width/2, max_val + bin_width/2) # Define x-axis limits
plt.xticks(list_xticks_round)
mu, sigma = stats.norm.fit(data)
sorted_data = np.sort(data)
gaussian_fit = stats.norm.pdf(sorted_data, mu, sigma)
plt.plot(sorted_data, gaussian_fit, linestyle = "--", color = "red", label = "Gaussian fit")
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/x7Dzb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x7Dzb.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><seaborn><histogram><kernel-density>
|
2023-03-07 15:07:55
| 1
| 1,216
|
Andrew
|
75,663,717
| 9,162,193
|
CVAT REST API to upload files
|
<p>I am able to create a Task in an existing project in CVAT using the below code, but I am unable to upload files, even if I try to reference this link here: <a href="https://github.com/opencv/cvat/issues/4704" rel="nofollow noreferrer">https://github.com/opencv/cvat/issues/4704</a></p>
<p>Any advice would be greatly appreciated please!</p>
<pre class="lang-py prettyprint-override"><code>import requests
# to create project (this works)
link = 'http://xx.x.xxx.xx:8080/api/tasks'
d = {
"name": "ABC",
"project_id": 3
}
Header = {
"X-Organization":"ABC_labelling"
}
r = requests.post(link, data=d, headers=Header, auth=('username','pw'))
r.json
</code></pre>
<pre><code># to upload file (doesn't work)
task_id = 8
link = 'http://xx.x.xxx.xx:8080/api/tasks/{task_id}/data/'
files = [
'/Users/username/Documents/images/abc001.tif',
'/Users/username/Documents/images/abc002.tif'
]
Header = {
"X-Organization":"ABC_labelling",
"Upload-Start":"true",
"Upload-Finish":"true"
}
d = {
"image-quality": 70,
"server_files": files
}
r = requests.post(link, data=d, headers=Header, auth=('username','pw'))
</code></pre>
|
<python><rest><cvat>
|
2023-03-07 15:07:22
| 1
| 1,339
|
yl_low
|
75,663,683
| 20,443,528
|
How to implement SQL ORDER BY-like functionality in Python?
|
<p>I have a list of objects like this</p>
<pre><code>time_slots = ['<TimeSlot: capacity: 1, number: 5>',
'<TimeSlot: Room: capacity: 3, number: 2>',
'<TimeSlot: capacity: 4, number: 1>',
'<TimeSlot: capacity: 4, number: 6>',
'<TimeSlot: capacity: 4, number: 1>',
'<TimeSlot: capacity: 4, number: 3>']
</code></pre>
<p>I want to sort the above object on the basis of number and capacity.</p>
<p>Basically, I want to implement this-</p>
<pre><code>ORDER BY number, capacity;
</code></pre>
<p>I can access the properties of every object by using something like-</p>
<pre><code>time_slots[i].number
</code></pre>
<p>I have to implement this logic in python. Can someone please help me with this?</p>
<p>Edit: This is a simplified version of the actual problem. Please see the actual problem
<a href="https://stackoverflow.com/q/75664227/20443528">stackoverflow.com/q/75664227/20443528</a></p>
|
<python><sorting>
|
2023-03-07 15:04:26
| 2
| 331
|
Anshul Gupta
|
75,663,680
| 8,512,262
|
Can I force the tkinter.Text widget to wrap lines on the "space" character as well as words?
|
<p>Below is an example tkinter app with a <code>Text</code> field which wraps to the next line on long words, as expected. The issue, however, is that <em>spaces</em> don't wrap to the next line in the same way. I am able to enter as many spaces as I like prior to the start of a new word and a line wrap won't occur. Once I begin typing any other printable characters (as well as tabs and newlines), the text wraps as expected.</p>
<p>How can I configure my <code>Text</code> widget to <em>also</em> wrap on whitespace at the end of the line? Is there a better method than, say, parsing out the length of the line in question (with <code>Text.count()</code>, perhaps) and forcing a newline after <code>width</code> characters?</p>
<pre><code>import tkinter as tk
root = tk.Tk()
text = tk.Text(root, wrap='word', width=40, height=10)
text.pack(expand=True, fill='both')
if __name__ == '__main__':
root.mainloop()
</code></pre>
|
<python><tkinter><text><word-wrap>
|
2023-03-07 15:04:10
| 1
| 7,190
|
JRiggles
|
75,663,669
| 2,302,244
|
Change the content of a ttkbootstrap ScrolledFrame
|
<p>How do I replace the content of a <code>ttkbootstrap ScrolledFrame</code>?</p>
<p>I have built the scrolled frame with this snippet:</p>
<pre class="lang-py prettyprint-override"><code> for ndx, t in enumerate(sorted(tw, key=lambda x: x.created_at, reverse=True)):
print(t.created_at)
card = make_tweet_card(t, tweet_detail_scroller)
card.grid(pady=5, row=ndx, column=0, sticky="W")
</code></pre>
<p>Based on a button click I need to empty the <code>ScrolledFrame</code> and replace the content with different content.</p>
<p>How can I achieve this?</p>
|
<python><tkinter><ttkbootstrap>
|
2023-03-07 15:03:27
| 2
| 935
|
user2302244
|
75,663,384
| 12,193,952
|
How to log Python code memory consumption?
|
<h3>Question</h3>
<p>Hi, I am running a <strong>Docker</strong> container with a <strong>Python</strong> application inside. The code performs some computing tasks and I would like to monitor its memory consumption using logs (<em>so I can see how different parts of the calculations perform</em>). I do not need any charts or continuous monitoring - I am okay with the inaccuracy of this approach.</p>
<p><strong>How should I do it</strong> without losing performance?</p>
<p>Using external (AWS) tools to monitor used memory is not suitable, because I often debug using logs and thus it's very difficult to match logs with performance charts. Also the resolution is too small.</p>
<h3>Setup</h3>
<ul>
<li>using <code>python:3.10</code> as base docker image</li>
<li>using <code>Python 3.10</code></li>
<li>running in AWS ECS Fargate (but results are similar while testing on local)</li>
<li>running the calculation method using <code>asyncio</code></li>
</ul>
<p>I have read some articles about <code>tracemalloc</code>, but it says it degrades the performance a lot (around <code>30 %</code>). <a href="https://medium.com/survata-engineering-blog/monitoring-memory-usage-of-a-running-python-program-49f027e3d1ba" rel="nofollow noreferrer">The article</a>.</p>
<h3>Tried methods</h3>
<p>I have tried the following method, however it shows the same memory usage every time called. So I doubt it works the desired way.</p>
<p>Using <code>resource</code></p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import resource
# Local imports
from utils import logger
def get_usage():
usage = round(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1000 / 1000, 4)
logger.info(f"Current memory usage is: {usage} MB")
return usage
# Do calculation - EXAMPLE
asyncio.run(
some_method_to_do_calculations()
)
</code></pre>
<p>Logs from Cloudwatch
<a href="https://i.sstatic.net/THKz6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/THKz6.png" alt="memory-usage-in-time" /></a></p>
<p>Using <code>psutil</code> (in testing)</p>
<pre class="lang-py prettyprint-override"><code>import psutil
# Local imports
from utils import logger
def get_usage():
total = round(psutil.virtual_memory().total / 1000 / 1000, 4)
used = round(psutil.virtual_memory().used / 1000 / 1000, 4)
pct = round(used / total * 100, 1)
logger.info(f"Current memory usage is: {used} / {total} MB ({pct} %)")
return True
</code></pre>
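<p>One thing worth noting: <code>ru_maxrss</code> is a peak (high-water mark), so it never decreases, and <code>psutil.virtual_memory()</code> reports host-wide numbers rather than this process's. A sketch that logs the current RSS of the process itself instead:</p>

```python
import os
import psutil

def get_usage() -> float:
    # RSS of THIS process right now — not the host-wide total and not a
    # monotone peak like resource.getrusage(...).ru_maxrss
    rss_mb = round(psutil.Process(os.getpid()).memory_info().rss / 1e6, 4)
    print(f"Current memory usage is: {rss_mb} MB")
    return rss_mb
```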
|
<python><docker><memory>
|
2023-03-07 14:41:02
| 2
| 873
|
FN_
|
75,663,294
| 20,959,773
|
Find and click disabled element in Selenium Python
|
<h2>I'm working on automating a website and I'm encountering a problem with finding a single button.</h2>
<p>The issue is that the website has made the button I want to find and click disabled by default, meaning Selenium cannot find it by the XPath selector. I have concluded that the button has an event listener associated with it for mouse-over, meaning that the button becomes enabled and available to interact with <strong>only</strong> if the mouse is over that element.</p>
<p>I have a few solutions but I don't fully like neither of them.</p>
<h4>Here is what works:</h4>
<p><strong><em>1. Mouse over that element by relative known position of bigger and encapsulating div element, and then clicking it.</em></strong></p>
<p><strong><em>2. If you open inspect element, for some odd reason, the element becomes interactable and the html looks normal. But this probably is a security feature from the website, that will easily detect that I'm automating if I use this solution of opening inspect element.</em></strong></p>
<p><strong><em>3. Using image recognition API for selenium to capture the image of the button and locate it. This is my least favorite, unnecessary complex work for this problem.</em></strong></p>
<p>The interesting thing about #2 is that when you try to inspect the html code, the button and everything else appear normal and start to function, which I find quite funny in a way.</p>
<p>I won't show any html code; hopefully I have made everything clear in words.</p>
<p>Any suggestions or ideas to smartly fix this issue will be appreciated.</p>
<p>Thank you!</p>
|
<javascript><python><html><selenium-webdriver>
|
2023-03-07 14:31:24
| 1
| 347
|
RifloSnake
|
75,663,283
| 12,596,824
|
Duplicating every row in dataframe N times where N is random
|
<p>I have a data set like so:</p>
<p><strong>Input:</strong></p>
<pre><code>id name
1 tim
2 jim
3 john
4 bill
</code></pre>
<p>I want to duplicate each row in my data set randomly anywhere from 0 - 5 times.</p>
<p>So my final data set might look something like this:</p>
<p><strong>Output:</strong></p>
<pre><code>id name
1 tim
1 tim
2 jim
3 john
3 john
3 john
3 john
4 bill
4 bill
4 bill
</code></pre>
<p>How can I do this in Python with pandas?</p>
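<p>A sketch of one approach: draw a random repeat count per row and pass it to <code>Index.repeat</code> (the seed and exact counts below are illustrative, not from the sample output):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3, 4], 'name': ['tim', 'jim', 'john', 'bill']})

rng = np.random.default_rng(0)
reps = rng.integers(0, 6, size=len(df))          # 0..5 copies per row
# repeat each index label reps[i] times, then take those rows
out = df.loc[df.index.repeat(reps)].reset_index(drop=True)
```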
|
<python><pandas>
|
2023-03-07 14:30:11
| 1
| 1,937
|
Eisen
|
75,663,162
| 2,135,504
|
How to execute code only when executed as a VSCode Jupyter code cell?
|
<p>I often develop notebooks using <a href="https://code.visualstudio.com/docs/python/jupyter-support-py#_jupyter-code-cells" rel="nofollow noreferrer">VSCode jupyter code cells</a> like the one below and execute them via <code>python script.py</code> in tmux. The problem is that currently I don't have a function for <code>executed_as_jupyter_code_cell</code> and some code is executed unnecessarily. Can I check this somehow?</p>
<p>A workaround is to simply set a custom env variable, but maybe there's a direct solution?</p>
<pre class="lang-py prettyprint-override"><code>#%% load
def load(x):
...
if executed_as_jupyter_code_cell():
y=load(x)
#%% transform
def transform(y):
...
if executed_as_jupyter_code_cell():
z=transform(y)
#%% pipe
from concurrent.futures import ProcessPoolExecutor
def pipe(x):
y=load(x)
z=transform(y)
save(z)
with ProcessPoolExecutor() as exec:
exec.map(pipe, some_long_iterable)
</code></pre>
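<p>One direct heuristic (an assumption about how VSCode works, not an official API): VSCode executes <code>#%%</code> cells through a Jupyter kernel, so <code>ipykernel</code> is loaded there but never under a plain <code>python script.py</code> run:</p>

```python
import sys

def executed_as_jupyter_code_cell() -> bool:
    # ipykernel is only present when the code runs inside a Jupyter
    # kernel (which is how VSCode executes #%% cells)
    return "ipykernel" in sys.modules
```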
|
<python><visual-studio-code><jupyter-notebook>
|
2023-03-07 14:18:23
| 1
| 2,749
|
gebbissimo
|
75,663,109
| 10,755,448
|
Launch a program in vscode only if a certain condition is satisfied, e.g. the filename matches. (using launch.json)
|
<p>I am trying to write a <code>launch.json</code> config to run a python program called <code>myProg</code>. By design, <code>myProg</code> needs to be run from a folder containing a file named <code>myInput.py</code>; My workflow is that I open <code>myInput.py</code>, put some breakpoints there, and launch <code>myProg</code> through the following launch configuration:</p>
<pre class="lang-json prettyprint-override"><code>{
"name": "my launcher",
"type": "python",
"request": "launch",
"program": "/home/user/.local/bin/myProg",
"args": [
"install",
"--someOption=myOption",
"--someOtherOption=myOtherOption"
],
"cwd": "${fileDirname}",
"console": "integratedTerminal",
"justMyCode": false
}
</code></pre>
<p>This config works as long as I have <code>myInput.py</code> (or any other file from the folder containing <code>myInput.py</code>) open and in focus in the editor when I click "start debugging", because I have set <code>"cwd"</code> to be <code>"${fileDirname}"</code>, i.e. the folder containing the current file.</p>
<p>This is what I want to achieve: I want to get an error when any file other than <code>myInput.py</code> is in focus when I "start debugging".</p>
<p>These are the parts of the behavior that I want to change:</p>
<p>1- Currently when a file from a different folder is selected, the code runs until it reaches the part when it needs to load <code>myInput.py</code>, then it raises an exception and fails. What I want in this case is "not running at all", instead giving me an error, message or similar.</p>
<p>2- Currently when a file from the same folder as <code>myInput.py</code> is selected, the code runs fine. I would prefer it to "not run" and instead give me an error/message that the correct file is not selected.</p>
<p>Is there a way to achieve what I want (item 1 is more important)? In short, I want to have a condition based on which it is decided whether to run the program or not.</p>
<p>Assume that I cannot change "myProg" or how it works.</p>
|
<python><visual-studio-code>
|
2023-03-07 14:14:14
| 3
| 507
|
KMot
|
75,663,011
| 1,367,097
|
How to include both ends of a pandas date_range()
|
<p>From a pair of dates, I would like to create a list of dates at monthly frequency, including the months of both dates indicated.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from datetime import datetime

# Option 1
pd.date_range(datetime(2022, 1, 13), datetime(2022, 4, 5), freq='M', inclusive='both')
# Option 2
pd.date_range("2022-01-13", "2022-04-05", freq='M', inclusive='both')
</code></pre>
<p>both return the list: <code>DatetimeIndex(['2022-01-31', '2022-02-28', '2022-03-31'], dtype='datetime64[ns]', freq='M')</code>. However, I am expecting the outcome with a list of dates (4 long) with one date for each month: <code>[january, february, march, april]</code></p>
<p>If now we run:</p>
<pre class="lang-py prettyprint-override"><code>pd.date_range("2022-01-13", "2022-04-05", freq='M', inclusive='right')
</code></pre>
<p>we still obtain the same result as before. It looks like <code>inclusive</code> has no effect on the outcome.</p>
<p>Pandas version: 1.5.3</p>
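<p>As a side note on the observed behaviour: with <code>freq='M'</code> the generated points are month <em>ends</em>, so April's end (2022-04-30) falls outside the interval and is dropped regardless of <code>inclusive</code>. A sketch that instead covers every month the endpoints touch:</p>

```python
import pandas as pd

# periods span whole months, so both partial end months are included
months = pd.period_range("2022-01-13", "2022-04-05", freq="M")
month_starts = months.to_timestamp()   # one date per month, Jan..Apr
```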
|
<python><pandas><datetime>
|
2023-03-07 14:05:08
| 2
| 2,042
|
Simon
|
75,662,912
| 6,444,472
|
Using `functools.partial` with variable-length arguments in python
|
<p>I have the following function with using *args, and a lambda function:</p>
<pre class="lang-py prettyprint-override"><code>def draw_figures(picture: Picture, *figures: Figure):
# draws the figures on the picture
...
figures = [figure1, figure2, figure3]
draw_fun = lambda new_picture: draw_figures(new_picture, *figures)
</code></pre>
<p>As you can see, this code produces a <code>draw_fun</code> that I can call later, giving it a picture to draw the figures I already selected on it. I was wondering if I could do a similar thing with partial functions:</p>
<pre class="lang-py prettyprint-override"><code>import functools
draw_fun = functools.partial(draw_figures, *figures)
draw_fun(other_picture)
</code></pre>
<p>Unfortunately, the latter does not work, because then <code>draw_figures</code> will take the first figure as the picture. This is coherent with the python documentation, since in <code>partial</code>, positional arguments provided to the new function are appended to the ones provided when defining the partial (check below). Is there a way of achieving what I am trying to do using partial?</p>
<pre class="lang-py prettyprint-override"><code># from python doc
def partial(func, /, *args, **keywords):
def newfunc(*fargs, **fkeywords):
newkeywords = {**keywords, **fkeywords}
return func(*args, *fargs, **newkeywords)
newfunc.func = func
newfunc.args = args
newfunc.keywords = keywords
return newfunc
</code></pre>
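<p>Since <code>partial</code> can only bind from the left, one option is a small right-binding variant (the name <code>rpartial</code> is made up here, and <code>draw_figures</code> below is a stand-in for the real function):</p>

```python
def rpartial(func, *bound):
    # like functools.partial, but the pre-bound arguments are appended
    # AFTER the call-time ones, so the picture stays in front
    def wrapper(*args, **kwargs):
        return func(*args, *bound, **kwargs)
    return wrapper

def draw_figures(picture, *figures):     # stand-in for the real one
    return picture, figures

draw_fun = rpartial(draw_figures, "fig1", "fig2", "fig3")
```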
|
<python><lambda><functools>
|
2023-03-07 13:56:35
| 1
| 327
|
ledermauss
|
75,662,904
| 726,730
|
python-shout *** buffer overflow detected ***: terminated
|
<p>script:</p>
<pre class="lang-py prettyprint-override"><code>import shout
s = shout.Shout()
print("Using libshout version %s" % shout.version())
s.audio_info = {shout.SHOUT_AI_BITRATE:'128', shout.SHOUT_AI_SAMPLERATE:'44800', shout.SHOUT_AI_CHANNELS:'2'}
s.name = 'Test radio connection'
s.url = 'http://localhost:8000/test.ogg'
s.mount = 'test.ogg'
s.port = 8000
s.user = 'username'
s.password = 'password'
s.genre = 'Other'
s.description = 'Test description'
s.host = 'localhost'
s.format = 'ogg'
s.open()
s.get_connected()
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>Using libshout version 2.4.6
*** buffer overflow detected ***: terminated
</code></pre>
<p>OS: Windows 11
Msys2 Mingw64</p>
<p>The error is happens when <code>s.open()</code> is run.</p>
|
<python><buffer-overflow><icecast>
|
2023-03-07 13:56:12
| 0
| 2,427
|
Chris P
|
75,662,883
| 2,749,397
|
2D array, check which rows are equal to a 1D array
|
<p>I have an array, <code>N</code> rows and <code>2</code> columns,</p>
<pre><code>arr = np.arange(200, dtype=int).reshape(-1,2)
</code></pre>
<p>and you know that <code>arr[50]</code> is <code>[100, 101]</code>, but I don't know that, so I write</p>
<pre><code>guess = np.array((100, 101), dtype=int)
arr == guess
</code></pre>
<p>expecting a 1D boolean array with <code>N</code> elements, but instead I get a 2D boolean array, <code>N</code> rows and <code>2</code> columns.</p>
<p>Is it possible to have a 1D boolean array, that singles out the rows of my array that are equal, member by member, to the guess array? no loops, by the way.</p>
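<p>For reference, broadcasting is what produces the 2D result described above; collapsing the column axis yields the 1D mask:</p>

```python
import numpy as np

arr = np.arange(200, dtype=int).reshape(-1, 2)
guess = np.array((100, 101), dtype=int)

# the element-wise comparison broadcasts guess against every row;
# all(axis=1) folds the two per-column booleans into one per row
mask = (arr == guess).all(axis=1)
```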
|
<python><arrays><numpy>
|
2023-03-07 13:54:53
| 1
| 25,436
|
gboffi
|
75,662,882
| 6,281,366
|
Python: getting the list of wifi interfaces in linux
|
<p>What's the simplest way to get the list of current wifi interfaces?
I guess I can just run a shell command and extract it from there, but I wonder if there's a simpler way.</p>
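<p>One shell-free sketch, assuming a Linux host where sysfs is mounted: a wireless interface exposes a <code>wireless</code> subdirectory under <code>/sys/class/net/&lt;iface&gt;</code>:</p>

```python
import os

def wifi_interfaces() -> list:
    # on Linux, /sys/class/net/<iface>/wireless exists only for Wi-Fi
    base = "/sys/class/net"
    if not os.path.isdir(base):
        return []                  # not Linux (or sysfs unavailable)
    return [iface for iface in os.listdir(base)
            if os.path.isdir(os.path.join(base, iface, "wireless"))]
```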
|
<python><linux>
|
2023-03-07 13:54:53
| 3
| 827
|
tamirg
|
75,662,696
| 145,400
|
How to infer type (for mypy & IDE) from a marshmallow schema?
|
<p>I have not asked a Python question in years! Exciting. This is largely an ecosystem question. Consider this snippet:</p>
<pre class="lang-py prettyprint-override"><code>try:
result = schema.load(data)
except marshmallow.ValidationError as exc:
sys.exit(f'schema validation failed: {exc}')
</code></pre>
<p>with <code>schema</code> being a <code>marshmallow.Schema</code>.</p>
<p>From <code>mypy</code>'s point of view, <code>result</code> is of type <code>Any</code>.</p>
<p>What options do I have so that <code>result</code> gets typed based on <code>schema</code>?</p>
<p>I have searched the web quite a bit, and my conclusion for now is that there is no such magic in the marshmallow ecosystem. Seemingly, there are solutions for the inverse (derive a marshmallow schema from e.g. a dataclass). But somehow I also suspect I am just missing something really obvious. But if there really is no solution for marshmallow: is the answer maybe to use e.g. pydantic to define the schema (and then get some beautiful magic for automatically inferred types that mypy can use)?</p>
<p>To maybe show an inspiration. In the JavaScript/TypeScript ecosystem there is yup, and here we have the derive-type-from-schema in the <a href="https://github.com/jquense/yup#getting-started" rel="nofollow noreferrer">quickstart</a> docs: <code>type User = InferType<typeof userSchema>;</code></p>
|
<python><types><mypy><pydantic><marshmallow>
|
2023-03-07 13:37:33
| 1
| 36,314
|
Dr. Jan-Philip Gehrcke
|
75,662,649
| 14,697,000
|
My python dictionary is not updating properly
|
<p>I am working on a project where I want to capture some data from different types of sorting algorithms and I planned on doing this by saving all the data into a dictionary and then converting that dictionary into a pandas data frame and eventually exporting it as a csv file.</p>
<p>The problem now is that I am updating my dictionary inside 4 for loops, but the updates seem to be overwritten somewhere in my code. At first I thought it was the <em><strong>global</strong></em> keyword that I was using to keep track of the data comparison and data swap counts, but I am not sure. I tried my best to look for code that interferes with my global variables but I can't find anything. Can you please help?</p>
<p>My code looks something like this:</p>
<pre><code>
def merge_sort(array):
    #merge sort only does data assignments, no swaps, and creates new arrays, so we can exclude the data swap count
if len(array) <= 1:
return
mid = len(array) // 2
left = array[:mid].copy()
right = array[mid:].copy()
merge_sort(left)
merge_sort(right)
x,y=merge(array, left, right)
global comparison_counter,swap_counter
comparison_counter+=x
swap_counter+=y
return comparison_counter,swap_counter
def merge(array, lefthalf, righthalf):
i=0
j=0
k=0
global data_comparison_count, data_swap_count
while i < len(lefthalf) and j < len(righthalf):
#underneath is comparison
if lefthalf[i] < righthalf[j]:#data comparison
array[k]=lefthalf[i]
i=i+1
else:
array[k]=righthalf[j]
j=j+1
data_comparison_count+=1
k=k+1
while i < len(lefthalf):
array[k]=lefthalf[i]
i=i+1
k=k+1
while j < len(righthalf):
array[k]=righthalf[j]
j=j+1
k=k+1
return data_comparison_count,data_swap_count
def partition(array, start_index, end_index):
global data_comparison_count, data_swap_count
pivot_value = array[start_index]
left_mark = start_index + 1
right_mark = end_index
while True:
while left_mark <= right_mark and array[left_mark] <= pivot_value:#comparison
data_comparison_count+=1
left_mark = left_mark + 1
while array[right_mark] >= pivot_value and right_mark >= left_mark:#comparison
data_comparison_count+=1
right_mark = right_mark - 1
if right_mark < left_mark:
break
else:
#data_swap=1
temp = array[left_mark]
array[left_mark] = array[right_mark]
array[right_mark] = temp
data_swap_count += 1
#data_swap
data_swap_count+=1
array[start_index] = array[right_mark]
array[right_mark] = pivot_value
return right_mark,data_comparison_count,data_swap_count
def quick_sort(array):
temp1,temp2=quick_sort_helper(array, 0, len(array) - 1)
return temp1,temp2
def quick_sort_helper(array, start_index, end_index):
global comparison_counter, swap_counter
if start_index < end_index:
split_point,x,y = partition(array, start_index, end_index)
comparison_counter+=x
swap_counter+=y
quick_sort_helper(array, start_index, split_point - 1)
quick_sort_helper(array, split_point + 1, end_index)
return comparison_counter,swap_counter
</code></pre>
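<p>As a side note on the symptom described above (every entry ending up with the same numbers): that is exactly what shared dictionary references produce, independent of any <code>global</code> usage. A minimal sketch, with illustrative names:</p>

```python
time_and_data = {"Time": 12}

# both wrappers store a reference to the SAME inner dict, not a copy,
# so writing through one key is visible through the other
selection_info = {"selection sort": time_and_data}
bubble_info = {"bubble sort": time_and_data}

selection_info["selection sort"]["Time"] = 99
```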
<pre><code>
time_and_data_dictionary={"Time":12,"Data Comparisons":12,"Data Swaps":12}
selection_sort_information={"selection sort":time_and_data_dictionary}
bubble_sort_information={"bubble sort":time_and_data_dictionary}
insertion_sort_information={"insertion sort":time_and_data_dictionary}
shell_sort_information={"shell sort":time_and_data_dictionary}
quick_sort_information={"quick sort":time_and_data_dictionary}
merge_sort_information={"merge sort":time_and_data_dictionary}
array_of_sorting_algorithms=[selection_sort_information,bubble_sort_information,insertion_sort_information,shell_sort_information,quick_sort_information,merge_sort_information]
dictionary={"Ascending_Sorted_250":array_of_sorting_algorithms,"Ascending_Sorted_500":array_of_sorting_algorithms,"Ascending_Sorted_1000": array_of_sorting_algorithms,"Ascending_Sorted_10000":array_of_sorting_algorithms,"Descending_Sorted_250":array_of_sorting_algorithms,"Descending_Sorted_500":array_of_sorting_algorithms," Descending_Sorted_1000": array_of_sorting_algorithms,"Descending_Sorted_10000":array_of_sorting_algorithms,"Random_Sorted_250":array_of_sorting_algorithms,"Random_Sorted_500":array_of_sorting_algorithms," Random_Sorted_1000":array_of_sorting_algorithms,"Random_Sorted_10000":array_of_sorting_algorithms}
t="Time"
dc="Data Comparisons"
ds="Data Swaps"
start = time()
tuple_selection_sort_ad250 = selection_sort_array(ascending_data_250)
end = time()
td_selection_sort_ad250 = end - start
start = time()
tuple_bubble_sort_ad250 = bubble_sort_array(ascending_data_250)
end = time()
td_bubble_sort_ad250 = end - start
start = time()
tuple_insertion_sort_ad250 = insertion_sort_array(ascending_data_250)
end = time()
td_insertion_sort_ad250 = end - start
start = time()
tuple_shell_sort_ad250 = shell_sort_array(ascending_data_250)
end = time()
td_shell_sort_ad250 = end - start
data_comparison_count = 0
data_swap_count = 0
comparison_counter = 0
swap_counter = 0
start = time()
tuple_merge_sort_ad250 = merge_sort(ascending_data_250)
end = time()
td_merge_sort_ad250 = end - start
data_comparison_count = 0
data_swap_count = 0
comparison_counter = 0
swap_counter = 0
start = time()
tuple_quick_sort_ad250 = quick_sort(ascending_data_250)
end = time()
td_quick_sort_ad250 = end - start
start = time()
tuple_selection_sort_ad500 = selection_sort_array(ascending_data_500)
end = time()
td_selection_sort_ad500 = end - start
start = time()
tuple_bubble_sort_ad500 = bubble_sort_array(ascending_data_500)
end = time()
td_bubble_sort_ad500 = end - start
start = time()
tuple_insertion_sort_ad500 = insertion_sort_array(ascending_data_500)
end = time()
td_insertion_sort_ad500 = end - start
start = time()
tuple_shell_sort_ad500 = shell_sort_array(ascending_data_500)
end = time()
td_shell_sort_ad500 = end - start
data_comparison_count = 0
data_swap_count = 0
comparison_counter = 0
swap_counter = 0
start = time()
tuple_merge_sort_ad500 = merge_sort(ascending_data_500)
end = time()
td_merge_sort_ad500 = end - start
data_comparison_count = 0
data_swap_count = 0
comparison_counter = 0
swap_counter = 0
start = time()
tuple_quick_sort_ad500 = quick_sort(ascending_data_500)
end = time()
td_quick_sort_ad500 = end - start
start = time()
tuple_selection_sort_dd250 = selection_sort_array(descending_data_250)
end = time()
td_selection_sort_dd250 = end - start
start = time()
tuple_bubble_sort_dd250 = bubble_sort_array(descending_data_250)
end = time()
td_bubble_sort_dd250 = end - start
start = time()
tuple_insertion_sort_dd250 = insertion_sort_array(descending_data_250)
end = time()
td_insertion_sort_dd250 = end - start
start = time()
tuple_shell_sort_dd250 = shell_sort_array(descending_data_250)
end = time()
td_shell_sort_dd250 = end - start
for i,j in dictionary.items():
for x,val in enumerate(j):
for k,v in val.items():
for y,z in v.items():
if i=="Ascending_Sorted_250":
if x==0:
dictionary[i][x][k][t]=td_selection_sort_ad250
print(td_selection_sort_ad250,"!!!!!!!!!!!!!!!!!!!!!!")
dictionary[i][x][k][dc]=tuple_selection_sort_ad250[0]
print(tuple_selection_sort_ad250[0], "???????????????????????")
dictionary[i][x][k][ds]=tuple_selection_sort_ad250[1]
print(tuple_selection_sort_ad250[1], "////////////////////")
if x==1:
print("is",k,x)
print(i, x, k, t, dc, ds, type(i), type(x), type(k), type(t), type(dc), type(ds))
dictionary[i][x][k][t] = td_bubble_sort_ad250
print(td_bubble_sort_ad250,"!!!!!!!!!!!!!!!!!!!!!!")
dictionary[i][x][k][dc] = tuple_bubble_sort_ad250[0]
dictionary[i][x][k][ds] = tuple_bubble_sort_ad250[1]
if x==2:
print("going",k,x)
print(i, x, k, t, dc, ds, type(i), type(x), type(k), type(t), type(dc), type(ds))
dictionary[i][x][k][t] = td_insertion_sort_ad250
dictionary[i][x][k][dc] = tuple_insertion_sort_ad250[0]
dictionary[i][x][k][ds] = tuple_insertion_sort_ad250[1]
if x==3:
print("on",k,x)
print(i, x, k, t, dc, ds, type(i), type(x), type(k), type(t), type(dc), type(ds))
dictionary[i][x][k][t] = td_shell_sort_ad250
dictionary[i][x][k][dc] = tuple_shell_sort_ad250[0]
dictionary[i][x][k][ds] = tuple_shell_sort_ad250[1]
if x==4:
print("here",k,x)
print(i, x, k, t, dc, ds, type(i), type(x), type(k), type(t), type(dc), type(ds))
dictionary[i][x][k][t] = td_merge_sort_ad250
dictionary[i][x][k][dc] = tuple_merge_sort_ad250[0]
dictionary[i][x][k][ds] = tuple_merge_sort_ad250[1]
if x==5:
print("now",k,x)
print(i, x, k, t, dc, ds, type(i), type(x), type(k), type(t), type(dc), type(ds))
dictionary[i][x][k][t] = td_quick_sort_ad250
dictionary[i][x][k][dc] = tuple_quick_sort_ad250[0]
dictionary[i][x][k][ds] = tuple_quick_sort_ad250[1]
if i=="Ascending_Sorted_500":
if x == 0:
dictionary[i][x][k][t] = td_selection_sort_ad500
print(td_selection_sort_ad250,"!!!!!!!!!!!!!!!!!!!!!!")
dictionary[i][x][k][dc] = tuple_selection_sort_ad500[0]
print(tuple_selection_sort_ad250[0], "???????????????????????")
dictionary[i][x][k][ds] = tuple_selection_sort_ad500[1]
print(tuple_selection_sort_ad250[1], "////////////////////")
if x == 1:
dictionary[i][x][k][t] = td_bubble_sort_ad500
print(td_bubble_sort_ad250,"!!!!!!!!!!!!!!!!!!!!!!")
dictionary[i][x][k][dc] = tuple_bubble_sort_ad500[0]
print(tuple_bubble_sort_ad250[0], "???????????????????????")
dictionary[i][x][k][ds] = tuple_bubble_sort_ad500[1]
print(tuple_bubble_sort_ad250[1], "////////////////////")
if x == 2:
dictionary[i][x][k][t] = td_insertion_sort_ad500
print(td_insertion_sort_ad250,"!!!!!!!!!!!!!!!!!!!!!!")
dictionary[i][x][k][dc] = tuple_insertion_sort_ad500[0]
print(tuple_insertion_sort_ad250[0], "???????????????????????")
dictionary[i][x][k][ds] = tuple_insertion_sort_ad500[1]
print(tuple_insertion_sort_ad250[1], "////////////////////")
if x == 3:
dictionary[i][x][k][t] = td_shell_sort_ad500
print(td_shell_sort_ad250,"!!!!!!!!!!!!!!!!!!!!!!")
dictionary[i][x][k][dc] = tuple_shell_sort_ad500[0]
print(tuple_shell_sort_ad250[0], "???????????????????????")
dictionary[i][x][k][ds] = tuple_shell_sort_ad500[1]
print(tuple_shell_sort_ad250[1], "////////////////////")
if x == 4:
dictionary[i][x][k][t] = td_merge_sort_ad500
print(td_merge_sort_ad250,"!!!!!!!!!!!!!!!!!!!!!!")
dictionary[i][x][k][dc] = tuple_merge_sort_ad500[0]
print(tuple_merge_sort_ad250[0], "???????????????????????")
dictionary[i][x][k][ds] = tuple_merge_sort_ad500[1]
print(tuple_merge_sort_ad250[1], "////////////////////")
if x == 5:
dictionary[i][x][k][t] = td_quick_sort_ad500
print(td_quick_sort_ad250,"!!!!!!!!!!!!!!!!!!!!!!")
dictionary[i][x][k][dc] = tuple_quick_sort_ad500[0]
print(tuple_quick_sort_ad250[0], "???????????????????????")
dictionary[i][x][k][ds] = tuple_quick_sort_ad500[1]
print(tuple_quick_sort_ad250[1], "////////////////////")
print(dictionary,"$$$$$$$$$$$$$$")
</code></pre>
<p>I tried initializing the variables every time I called the merge_sort and quick_sort functions, since I thought the problem was the <strong>globalization</strong> of my variables; I thought this would fix it, but it did not. I also tried debugging with various print statements, but the debug output and the final contents of my dictionary were very different.
Here is a snippet of what my console (output) looks like:</p>
<p><a href="https://i.sstatic.net/bRpS9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bRpS9.png" alt="Console(output)" /></a></p>
<p><a href="https://i.sstatic.net/aEIrh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aEIrh.png" alt="Console(output)2nd slide" /></a></p>
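<p>A very common reason every algorithm's slot ends up holding the last-written values — and why the prints disagree with the final dictionary — is that the nested dicts inside <code>dictionary</code> are all the same object (e.g. built once and reused, or created with <code>dict.fromkeys</code>). The code that builds <code>dictionary</code> isn't shown, so this is an assumption; a minimal sketch of the pitfall (key names hypothetical):</p>

```python
def make_counters():
    # Factory: each call returns a brand-new dict (key names hypothetical).
    return {"Time": 0, "Comparisons": 0, "Swaps": 0}

shared = make_counters()
bad = [{"Selection": shared}, {"Bubble": shared}]  # both slots alias `shared`
bad[0]["Selection"]["Time"] = 1.5
print(bad[1]["Bubble"]["Time"])   # 1.5 -- the "other" algorithm changed too

good = [{"Selection": make_counters()}, {"Bubble": make_counters()}]
good[0]["Selection"]["Time"] = 1.5
print(good[1]["Bubble"]["Time"])  # 0 -- independent dicts stay independent
```

<p>If this is the cause, building each inner dict with a fresh literal or a factory function fixes it without touching the update loops.</p>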
|
<python><sorting><for-loop><global-variables><nested-loops>
|
2023-03-07 13:33:25
| 1
| 460
|
How why e
|
75,662,625
| 8,665,962
|
Understanding the subtle difference in calculating percentile
|
<p>When calculating the percentile using <code>numpy</code>, I see some authors use:</p>
<pre><code>Q1, Q3 = np.percentile(X, [25, 75])
</code></pre>
<p>which is clear to me. However, I also see others use:</p>
<pre><code>loss = np.percentile(X, 4)
</code></pre>
<p>I presume 4 implies dividing the 100 into 4 percentiles, but how is the loss calculated here (i.e., in the second case)?</p>
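<p>For reference, the second argument of <code>np.percentile</code> is always the percentile rank itself (0–100), never a bin count — so <code>np.percentile(X, 4)</code> returns the 4th percentile, i.e. the value below which roughly 4% of the data falls (a common choice when estimating a tail loss). A quick sketch:</p>

```python
import numpy as np

X = np.arange(101)  # values 0..100, so the q-th percentile is exactly q

Q1, Q3 = np.percentile(X, [25, 75])  # 25th and 75th percentiles
loss = np.percentile(X, 4)           # the 4th percentile -- NOT 4 bins

print(Q1, Q3, loss)  # 25.0 75.0 4.0
```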
|
<python><numpy><percentile>
|
2023-03-07 13:31:17
| 1
| 574
|
Dave
|
75,662,623
| 12,946,401
|
Understanding how to output multiclass segmentation using UNet in Pytorch
|
<p>So I have been learning about UNets and I managed to get the binary classification UNet model to work using some <a href="https://github.com/nikhilroxtomar/Retina-Blood-Vessel-Segmentation-in-PyTorch" rel="nofollow noreferrer">github examples</a>. However, I am now trying to figure out how this translates to multiclass segmentation problems. In the binary case, my input image was 512x512 with 3 channels for RGB, the masks were 512x512x1 and the output of the UNet was a 512x512 image with 1 channel representing the binary segmentation.
But with a multiclass problem, my masks are still 512x512 images but now have 3 channels for RGB, where different objects in the mask are labeled with 5 different solid colors. So in total, the mask image shows that there are 5 classes. Do we now take this RGB mask and split out the 5 individual colors to create 5 channels, where each channel represents a class? That way the mask would be of size 512x512x5 while the input image is still 512x512x3?</p>
<p>In this case, I had changed the output channels from 1 to a 5 in my final UNet layer so it becomes <code>self.outputs = nn.Conv2d(64, 5, kernel_size=1, padding=0)</code>. And then I was thinking of just using the following loss function and keeping everything from the binary case the same:</p>
<pre><code>class DiceBCELoss(nn.Module):
def __init__(self, weight=None, size_average=True):
super(DiceBCELoss, self).__init__()
def forward(self, inputs, targets, smooth=1):
#comment out if your model contains a sigmoid or equivalent activation layer
inputs = torch.sigmoid(inputs)
#flatten label and prediction tensors
inputs = inputs.view(-1)
targets = targets.view(-1)
intersection = (inputs * targets).sum()
dice_loss = 1 - (2.*intersection + smooth)/(inputs.sum() + targets.sum() + smooth)
BCE = F.binary_cross_entropy(inputs, targets, reduction='mean')
Dice_BCE = BCE + dice_loss
return Dice_BCE
</code></pre>
<p>The other alternative I was thinking was to convert the 512x512x3 mask to grayscale where there are now 5 different shades representing the 5 classes. I then look at the output of the UNet and for each pixel, I compare its value with the value of the corresponding mask pixel using the above loss function.</p>
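<p>A common alternative to 5-channel one-hot masks plus sigmoid/BCE is to convert the color mask to a single-channel class-index map and train the 5-channel logits with <code>CrossEntropyLoss</code> — a sketch of the shapes involved, not your exact pipeline (sizes shrunk to 8x8 for brevity; assume 512x512 in practice):</p>

```python
import torch
import torch.nn as nn

# The UNet's final layer outputs C=5 raw logit channels; the target is a
# (N, H, W) tensor of integer class indices 0..4 -- built by mapping each
# of the 5 mask colors to an index -- rather than an RGB image.
logits = torch.randn(2, 5, 8, 8)          # (N, C, H, W) model output
target = torch.randint(0, 5, (2, 8, 8))   # (N, H, W) class indices

loss = nn.CrossEntropyLoss()(logits, target)  # softmax applied internally
pred = logits.argmax(dim=1)                   # (N, H, W) predicted class map
print(pred.shape)
```

<p>With this setup no sigmoid is applied to the outputs, and the grayscale-comparison idea becomes unnecessary, since class identity is carried by the channel index rather than by pixel intensity.</p>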
|
<python><pytorch><conv-neural-network><unet-neural-network>
|
2023-03-07 13:31:11
| 1
| 939
|
Jeff Boker
|
75,662,358
| 14,093,555
|
create 2 dimension array in C, and every cell with 512 bit size
|
<p>I want to create a 2D array with dimensions 2 by M, where every cell in the array holds 512 bits. I was thinking of creating something like this:</p>
<pre class="lang-c prettyprint-override"><code>#define M 1024
typedef struct {
unsigned char public_key[2][M];
unsigned char private_key[2][M];
} key_pair;
void keygen(key_pair *kp){
    srand(time(NULL));
//
}
</code></pre>
<p>But as I understand it, every cell in this array holds far fewer bits than I need because it is of type char, and I want 512 bits per cell; I'm not sure of the simplest way to solve this.</p>
<p>I want the public_key and private_key variables to hold a random 512-bit value in every cell. char is probably not a good option, but I can't work out the right way to do it; I think I'm a little confused.</p>
<p>I have the code in Python, where it is very simple to do, and I want to do the same in C.</p>
<pre class="lang-python prettyprint-override"><code>import secrets
import numpy as np

M = 1024

def keygen():
    SecretKey = np.zeros((2, M), dtype=object)
    PublicKey = np.zeros((2, M), dtype=object)
    for i in range(0, M):
        # token_bytes needs an int: use 512 // 8 (= 64 bytes), not 512 / 8
        SecretKey[0][i] = secrets.token_bytes(512 // 8)
        SecretKey[1][i] = secrets.token_bytes(512 // 8)
        PublicKey[0][i] = secrets.token_bytes(512 // 8)
        PublicKey[1][i] = secrets.token_bytes(512 // 8)
    key = [SecretKey, PublicKey]
    return key
</code></pre>
<p>In Python it works perfectly.</p>
|
<python><c>
|
2023-03-07 13:05:18
| 1
| 757
|
hch
|
75,662,333
| 12,965,658
|
Type cast data types using pandas
|
<p>I have a file in which every column is read as a string. I want to type cast the data.</p>
<pre><code>for s3_file in keys:
analytics_df = pd.read_csv(s3_file, encoding="utf8")
analytics_df = analytics_df.juice.astype(float).fillna(0.0)
print(analytics_df.dtypes)
</code></pre>
<p>I am getting these errors:</p>
<pre><code>ValueError: could not convert string to float: 'None'
</code></pre>
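<p>The literal string <code>'None'</code> can't be parsed by <code>astype(float)</code>; <code>pd.to_numeric(..., errors='coerce')</code> turns unparseable strings into NaN, which <code>fillna</code> can then replace. A sketch with an inline frame standing in for one of the S3 CSVs (the column name <code>juice</code> is taken from the snippet):</p>

```python
import pandas as pd

# Stand-in for one CSV read from S3; everything arrives as strings,
# including the literal string 'None'.
analytics_df = pd.DataFrame({"juice": ["1.5", "None", "2.0"]})

# coerce: unparseable strings become NaN instead of raising ValueError
analytics_df["juice"] = (
    pd.to_numeric(analytics_df["juice"], errors="coerce").fillna(0.0)
)
print(analytics_df["juice"].tolist())  # [1.5, 0.0, 2.0]
```

<p>Note that the original loop also assigns the converted Series back over the whole DataFrame; assigning to <code>analytics_df["juice"]</code> keeps the other columns intact.</p>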
|
<python><python-3.x><pandas><dataframe>
|
2023-03-07 13:03:14
| 0
| 909
|
Avenger
|
75,662,319
| 15,724,084
|
python selenium add value to attribute named value
|
<p>Using the Python Selenium framework, I want to set an input tag's <code>value</code> attribute to, let's say, <code>New York, NY</code>. I guess I need something like <code>driver.execute_script("arguments[0].setAttribute('value',arguments[1])",element, value)</code>, but I don't know how to use it. Any kind of support will be appreciated.</p>
<pre><code><input type="text" role="combobox" aria-owns="react-autowhatever-1" aria-expanded="false" autocomplete="off" aria-autocomplete="list" aria-controls="react-autowhatever-1" class="StyledFormControl-c11n-8-82-0__sc-18qgis1-0 jxPUpE Input-c11n-8-82-0__sc-4ry0fw-0 qODeK react-autosuggest__input" placeholder="Enter an address, neighborhood, city, or ZIP code" aria-label="Search: Suggestions appear below" id="search-box-input" value="">
</code></pre>
|
<python><python-3.x><selenium-webdriver><web-scraping><css-selectors>
|
2023-03-07 13:01:40
| 1
| 741
|
xlmaster
|
75,662,229
| 7,379,587
|
Connect to postgresql database without a password
|
<p>I have no problem connecting to a PostgreSQL database locally on a server without a password on the commandline i.e., <code>$ psql -d dname</code> takes me straight into the database <code>dname</code> with no errors and no prompts for passwords etc. However, when I try to connect in Python (still locally on the server) then it says:</p>
<pre><code>In [7]: import psycopg2
In [9]: conn = psycopg2.connect("host=localhost dbname=dname port=5432 user=uname")
---------------------------------------------------------------------------
OperationalError Traceback (most recent call last)
Cell In[9], line 1
----> 1 conn = psycopg2.connect("host=localhost dbname=dname port=5432 user=uname")
File /opt/miniconda3/envs/general/lib/python3.10/site-packages/psycopg2/__init__.py:122, in connect(dsn, connection_factory, cursor_factory, **kwargs)
119 kwasync['async_'] = kwargs.pop('async_')
121 dsn = _ext.make_dsn(dsn, **kwargs)
--> 122 conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
123 if cursor_factory is not None:
124 conn.cursor_factory = cursor_factory
OperationalError: fe_sendauth: no password supplied
</code></pre>
<p>There is no password so I can't add one. I have done a lot of searching and the only solution that I have come across is to use the string <code>postgresql://localhost:5432/</code> to connect. However, I can't find a way to connect to DBs using Psycopg2 using strings like these.</p>
<p>It seemed like it was easier to change to a Python library that uses connection strings like this so I then tried SQL Alchemy but got the same problem of not having a password.</p>
<pre><code>In [10]: import sqlalchemy
In [11]: db = sqlalchemy.create_engine('postgresql://localhost:5432/dname')
In [12]: conn = db.connect()
---------------------------------------------------------------------------
OperationalError Traceback (most recent call last)
... (LONG ERROR)
...
File /opt/miniconda3/envs/general/lib/python3.10/site-packages/psycopg2/__init__.py:122, in connect(dsn, connection_factory, cursor_factory, **kwargs)
119 kwasync['async_'] = kwargs.pop('async_')
121 dsn = _ext.make_dsn(dsn, **kwargs)
--> 122 conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
123 if cursor_factory is not None:
124 conn.cursor_factory = cursor_factory
OperationalError: (psycopg2.OperationalError) fe_sendauth: no password supplied
(Background on this error at: https://sqlalche.me/e/14/e3q8)
</code></pre>
<p>Following the suggested link gives me</p>
<blockquote>
<p>OperationalError</p>
<p>Exception raised for errors that are related to the database’s operation and not >necessarily under the control of the programmer, e.g. an unexpected disconnect occurs, the >data source name is not found, a transaction could not be processed, a memory allocation >error occurred during processing, etc.</p>
<p>This error is a DBAPI Error and originates from the database driver (DBAPI), not >SQLAlchemy itself.</p>
<p>The OperationalError is the most common (but not the only) error class used by drivers in >the context of the database connection being dropped, or not being able to connect to the >database. For tips on how to deal with this, see the section Dealing with Disconnects.</p>
</blockquote>
<p>Which I don't fully understand but makes me wonder if there is something strange going on under the hood which is resulting in a misleading error and maybe the password isn't the problem.</p>
<p>Anyone have any ideas what is going on here?</p>
<p>OS: Ubuntu 20
pg_hba.conf file: is empty</p>
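<p>Since <code>psql -d dname</code> succeeds with no password, it is almost certainly authenticating over the local Unix socket (peer or trust auth), while <code>host=localhost</code> forces a TCP connection, which follows different <code>pg_hba.conf</code> rules. Omitting <code>host</code> makes psycopg2 use the socket as well. A sketch — the socket directory shown is a common Debian/Ubuntu default, so it's an assumption (check <code>unix_socket_directories</code>):</p>

```python
from sqlalchemy.engine.url import make_url

# psycopg2: simply leave host= out, so libpq falls back to the local
# Unix socket -- the same path psql takes:
#   import psycopg2
#   conn = psycopg2.connect(dbname="dname", user="uname")

# SQLAlchemy: empty host section, socket directory passed as a query arg.
url = make_url("postgresql+psycopg2://uname@/dname?host=/var/run/postgresql")
print(url.database, url.query["host"])  # dname /var/run/postgresql
```

<p><code>sqlalchemy.create_engine(url)</code> would then connect the same way, with no password required.</p>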
|
<python><postgresql><sqlalchemy><psycopg2>
|
2023-03-07 12:53:18
| 0
| 899
|
ojunk
|
75,662,227
| 14,494,483
|
how to jitter the scatter plot on px.imshow heatmap in python plotly
|
<p>I have a risk matrix made using plotly's heatmap, with scatter points overlaid to show each individual project. My question: if I have 2 projects in the same box, how can I jitter the points so they both fit in the box without overlapping each other? For example, I have <code>project1</code> in the box of impact medium, likelihood low. If I have a <code>project2</code> that's in the same box, how can both fit in the same box without overlapping?</p>
<p><a href="https://i.sstatic.net/yDngu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yDngu.png" alt="enter image description here" /></a></p>
<pre><code>import plotly.express as px
import plotly.graph_objects as go
fig = px.imshow([[3, 5, 5],
[1, 3, 5],
[1, 1, 3]],
color_continuous_scale='Reds',
labels=dict(x="Likelihood", y="Impact"),
x=['Low', 'Medium', 'High'],
y=['High', 'Medium', 'Low']
)
fig.add_trace(go.Scatter(x=["Low"], y=["Medium"],
name="project1",
marker=dict(color='black', size=16)))
fig.show()
</code></pre>
<h2>Expected Outcome</h2>
<p><a href="https://i.sstatic.net/zcSXh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zcSXh.png" alt="enter image description here" /></a></p>
|
<python><plotly><heatmap>
|
2023-03-07 12:53:10
| 1
| 474
|
Subaru Spirit
|
75,662,122
| 3,189,219
|
Proper pg8000 + SQLAlchemy Setup for Google Cloud Function
|
<p>my apologies in advance as I see there are a bunch of answers that are close to what I need, but no cigar - I'm a C# developer mostly, but am stuck on what should be basic boilerplate setup.</p>
<p>I need a small PostGRE SQL client for relatively basic queries (basic set or non-complex selects), but I want to re-use the connection in the proper manner, with my current working setup throwing a lot of errors in regards to max connections, and potentially even burning through those connections and no longer working until I reset the function - here is my Database class (with all logging, comments and other code stripped out, and one tab level removed for easier reading):</p>
<pre><code>def __get_client(self) -> sqlalchemy.engine.base.Engine:
connector = Connector()
def getconn() -> pg8000.dbapi.Connection:
conn: pg8000.dbapi.Connection = connector.connect(
            {project:region:instance},
"pg8000",
user={user},
password={password},
db={db},
ip_type=IPTypes.PUBLIC,
)
return conn
pool = sqlalchemy.create_engine(
"postgresql+pg8000://",
creator=getconn,
pool_size=5,
max_overflow=2,
pool_timeout=30,
pool_recycle=1800,
)
return pool
def get_value(self, statement) -> str:
try:
with self.__get_client().connect() as connection:
result = connection.execute(statement).fetchall()
except Exception as e:
{Logging and error handling goes here}
return result
</code></pre>
<p>As I mentioned, this works, I get my results, but this is used for high frequency operations, with over 120 calls to the function via PubSub at a time, and this seems to cause the connections to run out - I had the exact error message prepped but I lost it and am so tired currently I've fallen asleep multiple times just attempting to write this sentence, but the issue is just that the connections start failing, it starts saying the max has been used and simple backoff isn't enough.</p>
<p>In order to deal with that issue, I added the following:</p>
<pre><code>def __init__(self, ...):
...
self._engine = None <== Added to constructor
...
...
def __connect(self): <== Added this method
self._log.debug("Starting")
connection = None
@contextlib.contextmanager
def connect():
nonlocal connection
if connection is None:
if self._engine is None:
self._engine = self.__get_engine()
connection = self._engine.connect()
with connection:
with connection.begin():
yield connection
else:
yield connection
return connect
...
def get_values(self, statement, *, retry: int = 0) -> str:
try:
with self.__connect() as connection: <== Use new method
</code></pre>
<p>And now the issue is much more simple, <code>TypeError: 'function' object does not support the context manager protocol</code> - obviously related to the <code>@contextlib.contextmanager</code> being on the inner function instead of the outer, but before I go ahead and tackle this next bit - does anyone have or have a link to an example of how properly to set up a basic pg8000 + SQLAlchemy connection for ongoing re-use? Even better if it's the latest version as my code was working with 1.X.</p>
<p>Again I apologise if this seems similar to existing, but I did look and they don't quite fit.</p>
|
<python><postgresql><sqlalchemy><google-cloud-functions><pg8000>
|
2023-03-07 12:43:04
| 0
| 369
|
Xsjado
|
75,661,970
| 15,751,564
|
Reshaping torch tensor
|
<p>I have a torch tensor of shape <code>(1,3,8)</code>. I want to increase the first dimension to <code>n</code>, resulting in a final tensor of shape <code>(n,3,8)</code>, padding the new slices with zeros. Here is what I worked on:</p>
<pre><code>n = 5
a = torch.randn(1,3,8) # Random (1,3,8) tensor
b = torch.cat((a,torch.zeros_like(a)))
for i in range(n-2):
b = torch.cat((b,torch.zeros_like(a)))
print(b.shape) # (5,3,8)
</code></pre>
<p>This works, but is there a better and more elegant solution?</p>
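<p>A loop-free version of the padding above: concatenate once with a zeros block of shape <code>(n-1, 3, 8)</code>, or use <code>torch.nn.functional.pad</code>:</p>

```python
import torch
import torch.nn.functional as F

n = 5
a = torch.randn(1, 3, 8)

# One cat call: a plus an (n-1, 3, 8) block of zeros along dim 0.
# new_zeros keeps a's dtype and device automatically.
b = torch.cat([a, a.new_zeros(n - 1, 3, 8)], dim=0)
print(b.shape)  # torch.Size([5, 3, 8])

# Same result with F.pad: the pad pairs run from the last dim backwards,
# so the final (0, n - 1) pair pads dim 0 at the end.
c = F.pad(a, (0, 0, 0, 0, 0, n - 1))
print(torch.equal(b, c))  # True
```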
|
<python><python-3.x><pytorch>
|
2023-03-07 12:27:03
| 2
| 1,398
|
darth baba
|
75,661,851
| 11,197,301
|
add a row to an empty array in python
|
<p>I would like to append an array to an empty array in numpy.
Basically I would like to do the following:</p>
<pre><code>AA = np.array([])
for i in range(0,3):
#
BB = np.random.rand(3)
CC = np.vstack((AA, BB))
</code></pre>
<p>However, I get the following error:</p>
<pre><code>all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 0 and the array at index 1 has size 3
</code></pre>
<p>I would like to avoid introducing an if clause. An idea could be to use "np.zeros(3)" to set up AA and then remove the first row.</p>
<p>What do you think? I don't like the second option either.</p>
<p>Thanks</p>
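<p>The error arises because <code>np.array([])</code> has shape <code>(0,)</code> while <code>vstack</code> needs matching column counts, so start from shape <code>(0, 3)</code>; note also that the loop stacks into a throwaway <code>CC</code> instead of growing <code>AA</code>. A sketch of both that and the usually-preferred list-then-stack pattern:</p>

```python
import numpy as np

# Option 1: give the empty array the right number of columns up front.
AA = np.empty((0, 3))
for i in range(3):
    BB = np.random.rand(3)
    AA = np.vstack((AA, BB))   # grow AA itself, not a separate CC
print(AA.shape)  # (3, 3)

# Option 2 (preferred, avoids repeated reallocation): collect rows in a
# list and stack once at the end.
rows = [np.random.rand(3) for _ in range(3)]
AA2 = np.vstack(rows)
print(AA2.shape)  # (3, 3)
```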
|
<python><arrays><is-empty><vstack>
|
2023-03-07 12:15:09
| 2
| 623
|
diedro
|