QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
75,390,823
| 881,150
|
Iterating through Azure ItemPaged object
|
<p>I am calling the <code>list</code> operation to retrieve the metadata values of a blob storage.
My code looks like:</p>
<pre><code>blob_service_list = storage_client.blob_services.list('rg-exercise1', 'sa36730')
for items in blob_service_list:
print((items.as_dict()))
</code></pre>
<p>What's happening in this case is that the returned output only contains the items which had a corresponding Azure object:</p>
<pre><code>{'id': '/subscriptions/0601ba03-2e68-461a-a239-98cxxxxxx/resourceGroups/rg-exercise1/providers/Microsoft.Storage/storageAccounts/sa36730/blobServices/default', 'name': 'default', 'type': 'Microsoft.Storage/storageAccounts/blobServices', 'sku': {'name': 'Standard_LRS', 'tier': 'Standard'}, 'cors': {'cors_rules': [{'allowed_origins': ['www.xyz.com'], 'allowed_methods': ['GET'], 'max_age_in_seconds': 0, 'exposed_headers': [''], 'allowed_headers': ['']}]}, 'delete_retention_policy': {'enabled': False}}
</code></pre>
<p>Where-as, If I do a simple print of items, the output is much larger:</p>
<pre><code>{'additional_properties': {}, 'id': '/subscriptions/0601ba03-2e68-461a-a239-98c1xxxxx/resourceGroups/rg-exercise1/providers/Microsoft.Storage/storageAccounts/sa36730/blobServices/default', 'name': 'default', 'type': 'Microsoft.Storage/storageAccounts/blobServices', 'sku': <azure.mgmt.storage.v2021_06_01.models._models_py3.Sku object at 0x7ff2f8f1a520>, 'cors': <azure.mgmt.storage.v2021_06_01.models._models_py3.CorsRules object at 0x7ff2f8f1a640>, 'default_service_version': None, 'delete_retention_policy': <azure.mgmt.storage.v2021_06_01.models._models_py3.DeleteRetentionPolicy object at 0x7ff2f8f1a6d0>, 'is_versioning_enabled': None, 'automatic_snapshot_policy_enabled': None, 'change_feed': None, 'restore_policy': None, 'container_delete_retention_policy': None, 'last_access_time_tracking_policy': None}
</code></pre>
<p>Any value which is <code>None</code> has been removed from my example code. How can I extend my example code to include the None fields and have the final output as a list?</p>
|
<python><python-3.x><azure>
|
2023-02-08 19:28:24
| 1
| 1,114
|
abhinav singh
|
75,390,668
| 5,257,286
|
Conditions in panda dataframe when multiplying with multiple columns
|
<p>I originally had the following that works:</p>
<pre><code>df = df.replace({np.nan: 0})
df[new_cols] = df[cols].multiply(df["abc"], axis="index")
</code></pre>
<p>But I need a more exact condition. So if any of the values in a column in [cols] are NULL, then the column in [new_cols] is also NULL, otherwise the following is performed: <code>df[cols]*df["abc"]</code> to get <code>df[new_cols]</code></p>
<p>Any ideas?</p>
|
<python><pandas><null><multiplication>
|
2023-02-08 19:13:36
| 0
| 1,192
|
pymat
|
75,390,666
| 8,578,337
|
How to resolve openssl not found error while installing new Python version with Pyenv on M1 Mac OS Ventura 13.1?
|
<p>The system is M1 processor on MacOS Ventura 13.1. While installing a new version with <code>pyenv</code>, it throws following error regarding <code>openssl</code> not found on the system. <code>openssl</code> is already installed with the version number <code>LibreSSL 3.3.6</code></p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'readline'
WARNING: The Python readline extension was not compiled. Missing the GNU readline lib?
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/divyesh.parmar@postman.com/.pyenv/versions/3.10.6/lib/python3.10/ssl.py", line 99, in <module>
import _ssl # if we can't import it, let the error propagate
ModuleNotFoundError: No module named '_ssl'
ERROR: The Python ssl extension was not compiled. Missing the OpenSSL lib?
</code></pre>
<p>I've mostly tried approaches mentioned on this thread, but none of that seems to lead to anywhere. How to resolve this?</p>
|
<python><macos><terminal><openssl><apple-m1>
|
2023-02-08 19:13:32
| 2
| 421
|
Divyesh Parmar
|
75,390,544
| 14,033,226
|
Passing partial=True down to nested serializer in DRF
|
<p>I have two serializers organised like this:</p>
<pre><code>class OuterSerializer():
inner_obj = InnerSerializer(many=True, required=False)
other fields ......
</code></pre>
<pre><code>class InnerSerializer():
field_1 = CharField()
field_2 = CharField()
</code></pre>
<p>Now my use case is to partial update the outer serializer's model. How I'm doing that is:</p>
<pre><code> def partial_update(self, request, *args, **kwargs):
serializer = OuterSerializer(data=request.data, context={'request': self.request}, partial=True)
serializer.is_valid(raise_exception=True)
data = serializer.data
outerobj = self.service_layer.update(kwargs['pk'], data, request.user)
response_serializer = OpportunitySerializer(instance=outerobj, context={'request': self.request})
return Response(response_serializer.data, HTTPStatus.OK)
</code></pre>
<p>The issue is this partial flag does not get passed down to the InnerSerializer.
For example if my request body looks like below, I want it to work:</p>
<pre><code>{"inner_obj":
{
"field_1" : "abc"
}
}
</code></pre>
<p>Currently I get a 400 error for this saying the field is required.</p>
<p>What I've tried :</p>
<ol>
<li>Setting the partial variable within the OuterSerializer in the init method by modifying it as such</li>
</ol>
<pre><code> def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# We pass the "current serializer" context to the "nested one"
self.fields['inner_obj'].context.update(self.context)
self.fields['inner_obj'].partial = kwargs.get('partial')
</code></pre>
<p>However this doesn't travel down.</p>
|
<python><django><django-rest-framework><django-queryset><django-serializer>
|
2023-02-08 19:01:34
| 1
| 312
|
njari
|
75,390,522
| 9,318,372
|
Type Hint `Callable[[int, ...], None]` using `ParamSpec`?
|
<p>Similar to <a href="https://stackoverflow.com/questions/66961423/python-type-hint-callable-with-one-known-positional-type-and-then-args-and-kw">Python type hint Callable with one known positional type and then *args and **kwargs</a>, I want to type hint a <code>Callable</code> for which is known:</p>
<ol>
<li>It must have at least 1 positional input.</li>
<li>The first positional input must be <code>int</code>.</li>
<li>It must return <code>None</code>.</li>
</ol>
<p>Apart from that, <strong>any</strong> signature is valid. I tried to do the following, but it doesn't work. So, is it possible to do it in python 3.10/3.11 at all?</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeAlias, ParamSpec, Concatenate, Callable
P = ParamSpec("P")
intfun: TypeAlias = Callable[Concatenate[int, P], None]
def foo(i: int) -> None:
pass
a: intfun = foo # ✘ Incompatible types in assignment
# expression has type "Callable[[int], None]",
# variable has type "Callable[[int, VarArg(Any), KwArg(Any)], None]")
</code></pre>
<p><a href="https://mypy-play.net/?mypy=latest&python=3.11&gist=f4c26907bfc0ae0118b90c1fa5a79fe8" rel="nofollow noreferrer">https://mypy-play.net/?mypy=latest&python=3.11&gist=f4c26907bfc0ae0118b90c1fa5a79fe8</a></p>
<p>I am using <code>mypy==1.0.0</code>.</p>
<p><strong>Context:</strong> I want to type hint a <code>dict</code> hat holds key-value pairs where the value could be any <code>Callable</code> satisfying properties 1,2,3.</p>
|
<python><python-typing><mypy><python-3.11>
|
2023-02-08 18:59:40
| 1
| 1,721
|
Hyperplane
|
75,390,402
| 1,255,042
|
Unexpected Doctest result for an output including a backslash
|
<p>This doctest fails with an output shown at the bottom. Why does Pytest change the doctest to <code>strip_special_chars( "c:\abcDEF1 23-_@.sql")</code> with a single backslash leading to the unintended expected result? How do I change its behavior?</p>
<pre><code>def strip_special_chars(input_text, re_string='[^a-zA-Z0-9_\s\.\-:\\\\]'):
"""
>>> strip_special_chars( "c:\\abcDEF1 23-_@.sql")
'c:\\abcDEF1 23-_.sql'
"""
regex = re.compile(re_string)
return regex.sub('', input_text)
</code></pre>
<pre><code>_____________________________________________ [doctest] helpers.strip_special_chars _____________________________________________
1484
1485 >>> strip_special_chars( "c:\abcDEF1 23-_@.sql")
Expected:
'c:\abcDEF1 23-_.sql'
Got:
'c:bcDEF1 23-_.sql'
</code></pre>
|
<python><doctest>
|
2023-02-08 18:46:07
| 1
| 3,370
|
Jason O.
|
75,390,017
| 1,468,810
|
Filtering a pandas dataframe based presence of substrings in column
|
<p>Not sure if this is a 'filtering with pandas' question or one of text analysis, however:</p>
<p>Given a df,</p>
<pre><code>d = {
"item": ["a", "b", "c", "d"],
"report": [
"john rode the subway through new york",
"sally says she no longer wanted any fish, but",
"was not submitted",
"the doctor proceeded to call washington and new york",
],
}
df = pd.DataFrame(data=d)
df
</code></pre>
<p>Resulting in</p>
<pre><code>item, report
a, "john rode the subway through new york"
b, "sally says she no longer wanted any fish, but"
c, "was not submitted"
d, "the doctor proceeded to call washington and new york"
</code></pre>
<p>And a list of terms to match:</p>
<pre><code>terms = ["new york", "fish"]
</code></pre>
<p>How would you reduce the the df to have the following rows, based on whether a substring in <code>terms</code> is found in column <code>report</code> and so that <code>item</code> is preserved?</p>
<pre><code>item, report
a, "john rode the subway through new york"
b, "sally says she no longer wanted any fish, but"
d, "the doctor proceeded to call washington and new york"
</code></pre>
|
<python><pandas><list><dataframe>
|
2023-02-08 18:08:25
| 4
| 721
|
Benjamin
|
75,390,001
| 8,512,262
|
How can I impose server priority on a UDP client receiving from multiple servers on the same port
|
<p>I have a client application that is set to receive data from a given UDP port, and two servers (let's call them "primary" and "secondary") that are broadcasting data over that port.</p>
<p>I've set up a UDP receiver thread that uses a lossy queue to update my frontend. Lossy is okay here because the data are just status info strings, e.g. 'on'/'off', that I'm picking up periodically.</p>
<p>My desired behavior is as follows:</p>
<ul>
<li>If the primary server is active and broadcasting, the client will accept data from the primary server <em>only</em> (regardless of data coming in from the secondary server)</li>
<li>If the primary server stops broadcasting, the client will accept data from the secondary server</li>
<li>If the primary server resumes broadcasting, <em>don't</em> cede to the primary <em>unless</em> the secondary server goes down (to prevent bouncing back and forth in the event that the primary sever is going in and out of failure)</li>
<li>If neither server is broadcasting, raise a flag</li>
</ul>
<p>Currently the problem is that if both servers are broadcasting (which they will be most of the time), my client happily receives data from both and bounces back and forth between the two. I understand <em>why</em> this is happening, but I'm unsure how to stop it / work around it.</p>
<p>How can I structure my client to disregard data coming in from the secondary server as long as it's also getting data from the primary server?</p>
<p><em>NB</em> - I'm using threads and queues here to keep my UDP operations from blocking my GUI</p>
<pre><code># EXAMPLE CLIENT APP
import queue
import socket as skt
import tkinter as tk
from threading import Event, Thread
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title('UDP Client Test')
# set up window close handler
self.protocol('WM_DELETE_WINDOW', self.on_close)
# display the received value
self.data_label_var = tk.StringVar(self, 'No Data')
self.data_label = ttk.Label(self, textvariable=self.data_label_var)
self.data_label.pack()
# server IP addresses (example)
self.primary = '10.0.42.1'
self.secondary = '10.0.42.2'
self.port = 5555
self.timeout = 2.0
self.client_socket = self.get_client_socket(self.port, self.timeout)
self.dq_loop = None # placeholder for dequeue loop 'after' ID
self.receiver_queue = Queue(maxsize=1)
self.stop_event = Event()
self.receiver_thread = Thread(
name='status_receiver',
target=self.receiver_worker,
args=(
self.client_socket,
(self.primary, self.secondary),
self.receiver_queue,
self.stop_event
)
)
def get_client_socket(self, port: int, timeout: float) -> skt.socket:
"""Set up a UDP socket bound to the given port"""
client_socket = skt.socket(skt.AF_INET, skt.SOCK_DGRAM)
client_socket.settimeout(timeout)
client_socket.bind('', port) # accept traffic on this port from any IP address
return client_socket
@staticmethod
def receiver_worker(
socket: skt.socket,
addresses: tuple[str, str],
queue: queue.Queue,
stop_event: Event,
) -> None:
"""Thread worker that receives data over UDP and puts it in a lossy queue"""
primary, secondary = addresses # server IPs
while not stop_event.is_set(): # loop until application exit...
try:
data, server = socket.recvfrom(1024)
# here's where I'm having trouble - if traffic is coming in from both servers,
# there's a good chance my frontend will just pick up data from both alternately
# (and yes, I know these conditions do the same thing...for now)
if server[0] == primary:
queue.put_nowait((data, server))
elif server[0] == secondary:
queue.put_nowait((data, server))
else: # inbound traffic on the correct port, but from some other server
print('disredard...')
except queue.Full:
print('Queue full...') # not a problem, just here in case...
except skt.timeout:
print('Timeout...') # TODO
def receiver_dequeue(self) -> None:
"""Periodically fetch data from the worker queue and update the UI"""
try:
data, server = self.receiver_queue.get_nowait()
except queue.Empty:
pass # nothing to do
else: # update the label
self.data_label_var.set(data.decode())
finally: # continue updating 10x / second
self.dq_loop = self.after(100, self.receiver_dequeue)
def on_close(self) -> None:
"""Perform cleanup tasks on application exit"""
if self.dq_loop:
self.after_cancel(self.dq_loop)
self.stop_event.set() # stop the receiver thread loop
self.receiver_thread.join()
self.client_socket.close()
self.quit()
if __name__ == '__main__':
app = App()
app.mainloop()
</code></pre>
<p>My actual application is only slightly more complex than this, but the basic operation is the same: get data from UDP, use data to update UI...rinse and repeat</p>
<p>I suspect the changes need to be made to my <code>receiver_worker</code> method, but I'm not sure where to go from here. Any help is very much welcome and appreciated! And thanks for taking the time to read this long question!</p>
<p><em>Addendum: FWIW I did some reading about <a href="https://docs.python.org/3/library/selectors.html#module-selectors" rel="nofollow noreferrer">Selectors</a> but I'm not sure how to go about implementing them in my case - if anybody can point me to a relevant example, that would be amazing</em></p>
|
<python><sockets><tkinter><udp>
|
2023-02-08 18:07:00
| 1
| 7,190
|
JRiggles
|
75,389,906
| 4,411,666
|
asyncio.gather doesn't execute my task in same time
|
<p>I am using asyncio.gather to run many query to an API. My main goal is to execute them all without waiting one finish for start another one.</p>
<pre><code>async def main():
order_book_coroutines = [asyncio.ensure_future(get_order_book_list()) for exchange in exchange_list]
results = await asyncio.gather(*order_book_coroutines)
async def get_order_book_list():
print('***1***')
sleep(10)
try:
#doing API query
except Exception as e:
pass
print('***2***')
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>My main problem here is the ouput :</p>
<pre><code>***1***
***2***
***1***
***2***
***1***
***2***
</code></pre>
<p>I was waiting something like :</p>
<pre><code>***1***
***1***
***1***
***2***
***2***
***2***
</code></pre>
<p>There is a problem with my code ? or i miss understood asyncio.gather utility ?</p>
|
<python><python-asyncio>
|
2023-02-08 17:58:14
| 1
| 1,073
|
Valentin Garreau
|
75,389,896
| 14,790,056
|
groupby and only keep rows if the value from a column appears on a different column
|
<p>I have exchange data. A transaction initiator sends USD and will receive Euro in return. I want to make sure that each transaction contains the correct information about the initiator. The way to ensure that is that the one who is sending money to the exchange always appear in <code>to</code> as well within the same transaction.</p>
<pre><code>transaction from to currency
1 A exchange USD
1 exchange A Euro
1 B C Euro
2 C exchange USD
2 B D Euro
2 A G Euro
3 F exchange USD
3 D A Euro
3 B F Euro
4 R exchange USD
4 A D Euro
4 B Q Euro
</code></pre>
<p>I want to filter out the meaningful rows of transactions.</p>
<p>Desired df</p>
<pre><code>transaction from to currency
1 A exchange USD
1 exchange A Euro
3 F exchange USD
3 B F Euro
</code></pre>
<p>Here, for each transaction, the initiator is <code>A</code>, <code>C</code>, <code>F</code>, and <code>R</code>. But for <code>C</code>, <code>R</code>, there is no record of incoming transactions. So I want to exclude these transactions.</p>
|
<python><pandas><dataframe>
|
2023-02-08 17:56:45
| 1
| 654
|
Olive
|
75,389,801
| 18,476,381
|
Python groupby/convert join table to triple nested dictionary
|
<p>From a SQL stored procedure that performs a join on 3 tables I get the data below.</p>
<pre><code> data = [
{"so_number": "ABC", "po_status": "OPEN", "item_id": 0, "part_number": "XTZ", "ticket_id": 10, "ticket_month": "JUNE"},
{"so_number": "ABC", "po_status": "OPEN", "item_id": 0, "part_number": "XTZ", "ticket_id": 11, "ticket_month": "JUNE"},
{"so_number": "ABC", "po_status": "OPEN", "item_id": 1, "part_number": "XTY", "ticket_id": 12, "ticket_month": "JUNE"},
{"so_number": "DEF", "po_status": "OPEN", "item_id": 3, "part_number": "XTU", "ticket_id": 13, "ticket_month": "JUNE"},
{"so_number": "DEF", "po_status": "OPEN", "item_id": 3, "part_number": "XTU", "ticket_id": 14, "ticket_month": "JUNE"},
{"so_number": "DEF", "po_status": "OPEN", "item_id": 3, "part_number": "XTU", "ticket_id": 15, "ticket_month": "JUNE"}]
</code></pre>
<p>I would like to group the data on so_number and item_id to return a list of dicts like below.</p>
<pre><code>[
{
"so_number ": "ABC",
"po_status": "OPEN",
"line_items": [
{
"item_id": 0,
"part_number": "XTZ",
"tickets": [
{
"ticket_id": 10,
"ticket_month": "JUNE"
},
{
"ticket_id": 11,
"ticket_month": "JUNE"
}
]
},
{
"item_id": 1,
"part_number": "XTY",
"tickets": [
{
"ticket_id": 12,
"ticket_month": "JUNE"
}
]
}
]
},
{
"so_number ": "DEF",
"po_status": "OPEN",
"line_items": [
{
"item_id": 3,
"part_number": "XTU"
"tickets": [
{
"ticket_id": 13,
"ticket_month": "JUNE"
},
{
"ticket_id": 14,
"ticket_month": "JUNE"
},
{
"ticket_id": 15,
"ticket_month": "JUNE"
}
]
}
]
}
]
</code></pre>
<p>I wanted to know if there was an efficient way of doing this. I am open to using pandas as well.</p>
<p>I thought about accessing the 3 sql tables through a loop and creating this list of dicts but it will probably not be best practice or efficient.</p>
|
<python><pandas><nested>
|
2023-02-08 17:47:35
| 1
| 609
|
Masterstack8080
|
75,389,759
| 3,620,605
|
Find blas calls happening under the hood when running inference
|
<p>I have a trained model saved with <code>tf.saved_model.save</code>; loaded back with <code>tf.saved_model.load</code> . Now I'd like to know what blas routines are called when I run inference on this model. I'm curious about what's going on under the hood; I would like to run the inference manually i.e. go through it step by step on my own and see if I can get the same results. When I say inference, I'm referring to the predict method <code>my_model.predict(new_input)</code></p>
|
<python><tensorflow><tensorflow2.0><blas>
|
2023-02-08 17:43:21
| 0
| 1,158
|
Effective_cellist
|
75,389,737
| 243,796
|
Replacing elements of a Python dictionary using regex
|
<p>I have been trying to replace integer components of a dictionary with string values given in another dictionary. However, I am getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 11, in <module>
File "/usr/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 14 (char 13)
</code></pre>
<p>The code has been given below. Not sure why I am getting an error.</p>
<pre><code>import re
from json import loads, dumps
movable = {"movable": [0, 3, 6, 9], "fixed": [1, 4, 7, 10], "mixed": [2, 5, 8, 11]}
int_mapping = {0: "Ar", 1: "Ta", 2: "Ge", 3: "Ca", 4: "Le", 5: "Vi", 6: "Li", 7: "Sc", 8: "Sa", 9: "Ca", 10: "Aq", 11: "Pi"}
movable = dumps(movable)
for key in int_mapping.keys():
movable = re.sub('(?<![0-9])' + str(key) + '(?![0-9])', int_mapping[key], movable)
movable = loads(movable)
</code></pre>
<p>I understand that this code can easily be written in a different way to get the desired output. However, I am interested to understand what I am doing wrong.</p>
|
<python><regex><dictionary>
|
2023-02-08 17:41:21
| 2
| 2,270
|
Sumit
|
75,389,678
| 3,725,599
|
Call function on pandas df with lagged values calculated in the previous row/loop
|
<p>I am calling a function rowise on a pandas data frame using lagged values (for <code>Q</code> and <code>S</code>) that were calculated for the previous row. The first row already has values for <code>Q</code> and <code>S</code> so it starts on the second row. It works fine in a for loop but the df I'm ultimately applying it too has over 3000 rows so I need something faster.</p>
<p>I've contemplated <code>df.shift(-1)</code>, <code>rolling.apply()</code> and vectorising but nothing I've tried works.</p>
<pre><code>import time
import pandas as pd
import math
def myfunc(Eo, P, Smax, Sprev, Qprev):
print("i = ", i)
print("Qprev = ", Qprev)
S = Sprev + Eo * math.exp(-1 * Sprev/Smax) - P + Qprev
Q = P + S
print("Q = ", Q)
return S, Q
data = {'peti': {0: 0.1960418075323104, 1: 0.5796640515327454, 2: 0.737823486328125, 3: 0.222676545381546, 4: 0.8804306983947754}, 'tas': {0: 281.0088195800781, 1: 277.112060546875, 2: 273.7044372558594, 3: 277.48309326171875, 4: 279.4878845214844}, 'precip': {0: 0.0, 1: 0.0, 2: 1.5046296539367177e-05, 3: 0.0002500000118743, 4: 4.6296295295178425e-06}, 'year': {0: 2008, 1: 2008, 2: 2008, 3: 2008, 4: 2008}, 'row_id': {0: 0, 1: 1, 2: 2, 3: 3, 4: 4}, 'S': {0: 90.9, 1: "nan", 2: "nan", 3: "nan", 4: "nan"}, 'Q': {0: 0.0, 1: "nan", 2: "nan", 3: "nan", 4: "nan"}}
df = pd.DataFrame.from_dict(data)
smaxval = 100
start_time = time.time()
for i in df.index[1:len(df)]: #' start on second row
df.loc[i,["S","Q"]] = myfunc(
df.peti[i],
df.precip[i],
smax_val,
df.S[i-1],
df.Q[i-1])
print("--- %s seconds ---" % (time.time() - start_time))
</code></pre>
|
<python><pandas><dataframe><rolling-computation>
|
2023-02-08 17:36:02
| 1
| 475
|
Josh J
|
75,389,651
| 8,285,736
|
How to make a Python function run concurrently or sequentially depending on the number of function calls and function arguments?
|
<p>I have the following sample function:</p>
<pre><code>def set_value(thing, set_quantity):
set_function(thing, set_quantity) #sets "thing" to the desired quantity
read_value = read_function(thing) #reads quantity of "thing"
if read_value != set_quantity:
raise Exception('Values are not equal')
</code></pre>
<p>I then have some test cases (they're Robot Framework test cases but I don't think that really matters for the question I have) that look something like this:</p>
<pre><code>#test case 1
set_value('Bag_A', 5)
#Test case 2
set_value('Bag_B', 3)
#Test case 3
set_value('Bag_A', 2)
set_value('Bag_B', 8)
set_value('Purse_A',4)
</code></pre>
<p>This is the desired behavior I want:</p>
<p>For Test Case 1, <code>set_value()</code> is called once and is executed right after the function call.</p>
<p>Test Case 2 is similar to #1 and its behavior should be the same.</p>
<p>For Test Case 3, we have 3 items: Bag_A, Bag_B, and Purse_A. Bag_A and Bag_B each call <code>set_value()</code>, but I want them to run concurrently. This is because they're both "Bags" (regardless of the _A or _B assignation), so I want the function to recognize that they are both "Bags" and should be run concurrently. Once those are executed and finished, then I want the Purse_A function call to <code>set_value()</code> to be executed (since "Purse" is not in the same category as "Bag").</p>
<p>The object type ("bag", "purse", etc) can be anything, so I'm not expecting a particular value from a small amount of pre-defined possibilities.
How would I go about doing this? By the way, I cannot change anything in the way the Test Cases are written.</p>
<p>I tried looking into the asyncio module since you're able to run things concurrently using tasks, but the things is I don't understand how to get <code>set_function()</code> to know how many tasks there will be since it all depends on any future functions calls which depends on each test case.</p>
|
<python><concurrency><python-asyncio>
|
2023-02-08 17:33:22
| 1
| 643
|
ATP
|
75,389,630
| 6,727,914
|
Why does the Knapsack Problem DP using tabular method discretize the capacity?
|
<p>I am trying to learn different aspect of DP and the first thing that pops on google search for "Knapsack 0-1" is this</p>
<p><a href="https://www.geeksforgeeks.org/python-program-for-dynamic-programming-set-10-0-1-knapsack-problem/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/python-program-for-dynamic-programming-set-10-0-1-knapsack-problem/</a></p>
<pre><code># A Dynamic Programming based Python
# Program for 0-1 Knapsack problem
# Returns the maximum value that can
# be put in a knapsack of capacity W
def knapSack(W, wt, val, n):
K = [[0 for x in range(W + 1)] for x in range(n + 1)]
# Build table K[][] in bottom up manner
for i in range(n + 1):
for w in range(W + 1):
if i == 0 or w == 0:
K[i][w] = 0
elif wt[i-1] <= w:
K[i][w] = max(val[i-1] + K[i-1][w-wt[i-1]], K[i-1][w])
else:
K[i][w] = K[i-1][w]
return K[n][W]
# Driver program to test above function
val = [60, 100, 120]
wt = [10, 20, 30]
W = 50
n = len(val)
print(knapSack(W, wt, val, n))
# This code is contributed by Bhavya Jain
</code></pre>
<p>I stopped at the first line. Why does the code create a matrix of n by W discretizing W with one unit? Why one was chosen? What if W is very big (example W=10000000000) ? Wouldn't the matrix overflow the memory ? What if the capacity W is a decimal number for example 50.6?</p>
<p>Strangely, the first video tutorial that pops on youtube shows the same approach <a href="https://www.youtube.com/watch?v=nLmhmB6NzcM" rel="nofollow noreferrer">https://www.youtube.com/watch?v=nLmhmB6NzcM</a></p>
|
<python><algorithm><optimization><dynamic-programming>
|
2023-02-08 17:32:04
| 0
| 21,427
|
TSR
|
75,389,463
| 8,869,570
|
Using attr.ib() for class member variables
|
<p>I'm not well versed with Python (mostly use C++) and the <code>attr</code> module. I'm looking through my codebase at work, and I often see class attributes created as follows:</p>
<pre><code>import attr
class my_class:
first_member_var = attr.ib()
second_member_var = attr.ib()
def func():
self.third_member_var = non_class_func()
</code></pre>
<p>In this class <code>third_member_var</code> was not set to <code>attr.ib()</code> like the first two. I'm not sure if the original developer just forgot to do this, or if there's some functional difference by doing this or not?</p>
|
<python><attr>
|
2023-02-08 17:20:32
| 0
| 2,328
|
24n8
|
75,389,357
| 9,357,484
|
Feature importance with logistic regression with feature names
|
<p>I want to find the feature-importance using logistic regression.</p>
<p>My model trained with the code block</p>
<pre><code>LR = LogisticRegression(multi_class='multinomial', random_state=1, max_iter=1000)
LR.fit(X_train, y_train)
R_y_pred = LR.predict(X_test)
target_names = ['No', 'Yes']
print(classification_report(y_test, R_y_pred, target_names=target_names))
</code></pre>
<p>To find out the feature importance I wrote the code block</p>
<pre><code>LR_importance = LR.coef_[0]
# summarize feature importance
for i,v in enumerate(LR_importance):
print('Feature: %0d, Score: %.5f' % (i,v))
</code></pre>
<p>I tried the code block <a href="https://machinelearningmastery.com/calculate-feature-importance-with-python/" rel="nofollow noreferrer">using this link</a>. The result looked like</p>
<pre><code>Feature: 1, Score: 0.00545
Feature: 2, Score: 0.00294
Feature: 3, Score: 0.00289
Feature: 4, Score: 0.52992
Feature: 5, Score: 0.42046
Feature: 6, Score: 0.02663
Feature: 7, Score: 0.00304
Feature: 8, Score: 0.00304
Feature: 9, Score: 0.00283
</code></pre>
<p>However in place of Feature1, Feature 2, Feature3,.... I want the features name of the dataset. How could I do that?</p>
<p>If the dataset contains the features name sepal_length, sepal_width, petal_length, petal_width, species, I would like the feature importance result like</p>
<pre><code>sepal_length: 0.5
sepal_width: 0.2
petal_length: 0.3
petal_width: 0.1
</code></pre>
<p>something like that where the features names of the dataset included in the result.</p>
<p>Thank you</p>
|
<python><pandas><logistic-regression><feature-selection>
|
2023-02-08 17:12:50
| 0
| 3,446
|
Encipher
|
75,389,254
| 12,919,727
|
Numpy: Function to take arrays a and b and return array c with elements 0:b[0] with value a[0], values b[0]:b[1] with value a[1], and so on
|
<p>Say I have two arrays:</p>
<pre><code>a = np.asarray([0,1,2])
b = np.asarray([3,7,10])
</code></pre>
<p>Is there a fast way to create:</p>
<pre><code>c = np.asarray([0,0,0,1,1,1,1,2,2,2])
# index 3 7 10
</code></pre>
<p>This can be done using a for loop but I wonder if there is a fast internal numpy function that achieves the same thing.</p>
|
<python><arrays><numpy>
|
2023-02-08 17:05:10
| 1
| 491
|
McM
|
75,389,211
| 4,079,144
|
How to get the if statement right for boto3
|
<p>I created a dynamodb table with two records in there. One is the ARN of the rds and one is date field name last_reported. Now i am trying to run lambda function which should fetch that record. If there is data in the table it should execute the value of the table / item, if there is no data in the table, it should exit with a message. I get the following error</p>
<pre><code> {
"errorMessage": "'Item'",
"errorType": "KeyError",
"requestId": "",
"stackTrace": [
" File \"/var/lang/lib/python3.9/importlib/__init__.py\", line 127, in import_module\n
return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n",
" File \"/var/task/lambda_function.py\", line 339, in <module>\n lambda_handler(1,1)\n",
" File \"/var/task/lambda_function.py\", line 263, in lambda_handler\n last_reported = int(read_db['Item']['last_reported']['N'])\n"
</code></pre>
<p>]
}</p>
<pre><code> dynamo_table_name = 'my_table'
key = 'arn'
dynamo = boto3.client('dynamodb', region_name=source_region, config=Config(connect_timeout=5, read_timeout=60, retries={'max_attempts': 10}))
response = dynamo.query(
TableName=dynamo_table_name,
KeyConditionExpression="arn = :arn",
ExpressionAttributeValues={":arn": {"S":arn}}
)
if response == True
read_db = dynamo.get_item(TableName=dynamo_table_name, Key={'arn':{'S':arn}})
last_reported = int(read_db['Item']['last_reported']['N'])
print (last_reported)
</code></pre>
|
<python><amazon-dynamodb><boto3>
|
2023-02-08 17:01:59
| 1
| 559
|
srk786
|
75,389,208
| 2,725,742
|
Hiding Empty ttk Notebook Tabs with Grid vs Pack
|
<p>At <a href="https://github.com/muhammeteminturgut/ttkScrollableNotebook" rel="nofollow noreferrer">https://github.com/muhammeteminturgut/ttkScrollableNotebook</a> there is a lovely demo of effectively adding scrolling tabs at the top of a notebook. But there is actually a notebook elements, one on top of the other.</p>
<pre><code>if self.useGrid:
self.rowconfigure(0, weight=1)
self.columnconfigure(0, weight=1)
leftArrow.grid(row=0, column=0)
rightArrow.grid(row=0, column=1)
slideFrame.grid(row=0, column=0, sticky=NE)
self.notebookContent.grid(row=1, column=0, sticky="nsew")
else:
leftArrow.pack(side=LEFT)
rightArrow.pack(side=RIGHT)
slideFrame.place(relx=1.0, x=0, y=1, anchor=NE)
self.notebookContent.pack(fill="both", expand=True)
</code></pre>
<p>I am trying to convert it to use grid instead of pack, but when using grid the empty tabs of the bottom notebook are displayed. Somehow they are hidden when using pack. What is the difference here? How can grid reproduce pack's functionality here?</p>
|
<python><tkinter><ttk>
|
2023-02-08 17:01:40
| 0
| 448
|
fm_user8
|
75,389,166
| 7,259,176
|
How to match an empty dictionary?
|
<p>Python supports <a href="https://peps.python.org/pep-0636/" rel="nofollow noreferrer">Structural Pattern Matching</a> since version <code>3.10</code>.
I came to notice that matching an empty <code>dict</code> doesn't work by simply matching <code>{}</code> as it does for <code>list</code>s.
According to my naive approach, non-empty <code>dict</code>s are also matched (Python 3.10.4):</p>
<pre class="lang-py prettyprint-override"><code>def match_empty(m):
match m:
case []:
print("empty list")
case {}:
print("empty dict")
case _:
print("not empty")
</code></pre>
<pre class="lang-py prettyprint-override"><code>match_empty([]) # empty list
match_empty([1, 2]) # not empty
match_empty({}) # empty dict
match_empty({'a': 1}) # empty dict
</code></pre>
<p>Matching the constructors even breaks the empty list matching:</p>
<pre class="lang-py prettyprint-override"><code>def match_empty(m):
match m:
case list():
print("empty list")
case dict():
print("empty dict")
case _:
print("not empty")
</code></pre>
<pre class="lang-py prettyprint-override"><code>match_empty([]) # empty list
match_empty([1, 2]) # empty list
match_empty({}) # empty dict
match_empty({'a': 1}) # empty dict
</code></pre>
<p>Here is a solution, that works as I expect:</p>
<pre class="lang-py prettyprint-override"><code>def match_empty(m):
match m:
case []:
print("empty list")
case d:
if isinstance(d, dict) and len(d) == 0:
print("empty dict")
return
print("not empty")
</code></pre>
<pre class="lang-py prettyprint-override"><code>match_empty([]) # empty list
match_empty([1, 2]) # not empty
match_empty({}) # empty dict
match_empty({'a': 1}) # not empty
</code></pre>
<p>Now my questions are:</p>
<ul>
<li>Why do my first 2 approaches not work (as expected)?</li>
<li>Is there a way to use structural pattern matching to match only an empty <code>dict</code> (without checking the <code>dict</code> length explicitly)?</li>
</ul>
|
<python><dictionary><python-3.10><structural-pattern-matching>
|
2023-02-08 16:57:37
| 3
| 2,182
|
upe
|
75,389,050
| 3,521,180
|
How to add dummy record to the existing column in pyspark?
|
<p>I have a data frame where I want to add one dummy record each. So, to do that, I read a dataframe from a parquet file, and created a list out of them., and then used python dict(zip()) to add them. Below is the code snippet.</p>
<pre><code>prem_df = read_parquet_file(folder_path, logger)
row_list = prem_df.select(col("cat")).collect()
y = [o[0] for o in row_list]
t = y.append("ABC")
row_list1 = prem_df.select(col("Val")).collect()
x = [o[0] for o in row_list1]
p = x.append("23.54")
dict(zip(t, p))
</code></pre>
<p>But not sure how would I create a dataframe out of it again, as I need to merge it back to the DF <code>prem_df</code>.</p>
<p>Basically, I want to add ABC at the end of the <code>"cat"</code> column, and <code>"23.54"</code> at the end of the <code>"Val"</code> column in such a way that if I filter on <code>"cat" == "ABC</code>, I should get the <code>"Val"</code> as <code>23.54</code>.</p>
<pre><code>df.filter("cat" == "ABC).select(col("cat", "val")
</code></pre>
<p>Note: the parquet file has 43 columns in total.
Please suggest.. Thank you</p>
|
<python><pyspark>
|
2023-02-08 16:47:36
| 1
| 1,150
|
user3521180
|
75,388,887
| 20,959,773
|
Regex works in regex101, but not in python code
|
<p>I have this text:</p>
<pre><code>a href="#" class="s-navigation--item js-gps-track js-products-menu" aria-controls="products-popover" data-controller="s-popover" data-action="s-popover#toggle" data-s-popover-placement="bottom" data-s-popover-toggle-class="is-selected" data-gps-track="top_nav.products.click({location:2, destination:1})" data-ga="[&quot;top navigation&quot;,&quot;products menu click&quot;,null,null,null]" aria-expanded="false"
</code></pre>
<p>With this regex:</p>
<pre><code>attr_regex = '(?:\w+[-\.]*)+(?:=+[\'\"][\w\d\s:;,$@#!\[\]^&?%*\/+(){}.=-]*[\'\"])*'
</code></pre>
<p>I want to separate this text into the individual words or variables there are, like this:
<a href="https://i.sstatic.net/gJAGs.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gJAGs.jpg" alt="enter image description here" /></a></p>
<p>But instead, in python code the output gets like this (in a list):</p>
<pre><code>['a', 'aria-controls="products-popover"', 'aria-expanded="false"', 'class="s-navigation--item js-gps-track js-products-menu"', 'data-action="s-popover#toggle"', 'data-controller="s-popover"', 'data-ga', 'top', 'navigation', 'products', 'menu', 'click', 'null', 'null', 'null', 'data-gps-track="top_nav.products.click({location:2, destination:1})"', 'data-s-popover-placement="bottom"', 'data-s-popover-toggle-class="is-selected"', 'href="#"']
</code></pre>
<p>As you can see there are some words which are not supposed to come out like that, because they are inside the value of the variable.</p>
<p>Python code:</p>
<pre><code>elements = re.findall(attr_regex, str(text))
print(elements)
</code></pre>
<p>Using raw string doesn't fix the problem!</p>
<p>How can I fix this problem, and better, how can I make this regex work successfully in every text possible?</p>
|
<python><html><regex>
|
2023-02-08 16:33:42
| 0
| 347
|
RifloSnake
|
75,388,867
| 649,749
|
Keras model with multiple categorical outputs of different shapes
|
<p>I have a keras model here which looks as follows:
<a href="https://i.sstatic.net/GyEbY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GyEbY.png" alt="Keras Model" /></a></p>
<p>As you can see, an intent (four classes) is predicted and each word of the sentence is tagged (choce of 10 classes). I'm now struggeling with the model.fit and the y_train data preparation. If I shape it as follows, all works, but it doesn't feel correct as the left output will have the same shape as the right output.</p>
<pre><code>x = np.array(df_ic.message)
y = np.zeros((df_ic.message.size,2,85))
</code></pre>
<p>Can anyone help/suggest how to best shape the train data, i.e. y?</p>
<p>Thanks a lot,
Martin</p>
|
<python><tensorflow><keras><artificial-intelligence>
|
2023-02-08 16:32:17
| 1
| 506
|
Martin Horvath
|
75,388,644
| 17,160,160
|
Fill NA values over varied data frame column slices in Pandas
|
<p>I have a Pandas data frame similar to the following:</p>
<pre><code>pd.DataFrame({
'End' : ['2022-03','2022-05','2022-06'],
'2022-01' : [1,2,np.nan],
'2022-02' : [np.nan,3,4],
'2022-03' : [np.nan,1,3],
'2022-04' : [np.nan,np.nan,2],
'2022-05' : [np.nan,np.nan,np.nan],
'2022-06' : [np.nan,np.nan,np.nan]
})
</code></pre>
<p>I would like to fill the NaN values in each row such that all columns up to that listed in <code>end</code> are replaced with 0 while those after remain as NaN</p>
<p>The desired output would be:</p>
<pre><code>pd.DataFrame({
'End' : ['2022-03','2022-05','2022-06'],
'2022-01' : [1,2,0],
'2022-02' : [0,3,4],
'2022-03' : [0,1,3],
'2022-04' : [np.nan,0,2],
'2022-05' : [np.nan,0,0],
'2022-06' : [np.nan,np.nan,0]
})
</code></pre>
|
<python><pandas>
|
2023-02-08 16:14:16
| 3
| 609
|
r0bt
|
75,388,487
| 5,695,336
|
Use a single ClientSession instead of creating one for every request slowed down my HTTP calls
|
<p>My python program makes HTTP requests to several different sites once every few hours. At first, I didn't know the recommended way to use <code>aiohttp</code> is to create only one <code>ClientSession</code> and use it for every request in the program's lifetime. So I create a new <code>ClientSession</code> for every call. The time between request and response was 0.3 to 0.5 seconds.</p>
<p>After learned that I should just use one <code>ClientSession</code>, it is supposed to be faster. I modified my code. But then, the time between request and response now is 0.5 to 1.5 seconds. I see > 1 seconds response time all the time, which never happened before.</p>
<p>Why is the recommended way slower?</p>
<p>I really don't want to change it back, because it is cleaner now, and I did other adjustments (which I am sure doesn't affect the response time) in the same commit. Is there any way I can use one shared <code>ClientSession</code> and make it fast like before?</p>
<p>Here are the code examples:</p>
<p>Before:</p>
<pre><code>async def my_func1():
async with aiohttp.ClientSession() as session:
async with session.post(...) as resp:
# process response
async def my_func2():
async with aiohttp.ClientSession() as session:
async with session.get(...) as resp:
# process response
await asyncio.gather(my_func1(), my_func2())
</code></pre>
<p>After:</p>
<pre><code>async def my_func1(session: ClientSession):
async with session.post(...) as resp:
# process response
async def my_func2(session: ClientSession):
async with session.get(...) as resp:
# process response
async with aiohttp.ClientSession() as session:
await asyncio.gather(my_func1(session), my_func2(session))
</code></pre>
|
<python><aiohttp>
|
2023-02-08 16:01:41
| 1
| 2,017
|
Jeffrey Chen
|
75,388,277
| 12,924,562
|
Python Mysql Queary not returning results
|
<p>I have a DB and when I query a table I get 67 results. The SQL is:</p>
<pre><code>SELECT lower(UserName) from Participants ;
</code></pre>
<p>I try to connect to the DB, and I get no connection errors.</p>
<pre><code>def db_connect ():
try:
cnx = mysql.connector.connect(user=db_user, password=db_password,host=db_host,database=db_name)
return cnx
except mysql.connector.Error as err:
if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print("Something is wrong with your user name or password")
elif err.errno == errorcode.ER_BAD_DB_ERROR:
print("Database does not exist")
else:
print(err)
def main():
cnx = db_connect()
cursor = cnx.cursor()
query = ("SELECT lower(UserName) from Participants ;")
cursor.execute(query)
print(cursor.rowcount)
</code></pre>
<p>It prints out -1 for rowcount. The connection to the DB appears to be working, the SQL is a simple query...</p>
|
<python><mysql><sql><mysql-connector>
|
2023-02-08 15:46:43
| 1
| 386
|
Rick Dearman
|
75,388,263
| 9,262,339
|
FastAPI: url_for in Jinja2 template does not work with https
|
<p>Everything was running fine until I switched the application to use https.
All the links that the <code>url_for</code> function generates in the templates now look like this <a href="https://ibb.co/N3cJ9V4" rel="nofollow noreferrer">https://ibb.co/N3cJ9V4</a></p>
<p><strong>Problem:</strong></p>
<pre><code>Mixed Content: The page at 'https://team-mate.app/' was loaded over HTTPS, but requested an insecure stylesheet 'http://team-mate.app/static/css/materialize.min.css'. This request has been blocked; the content must be served over HTTPS.
</code></pre>
<p>I've seen similar problems on stackoverflow and tried every possible option, but nothing worked.</p>
<p>I tried the advice I found and ran <code>Uvicorn</code> through <code>--proxy-headers</code></p>
<p><strong>Dockerfile</strong>
....</p>
<pre><code>CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--proxy-headers"]
</code></pre>
<p><strong>docker-compose.yaml</strong></p>
<p>....</p>
<pre><code>command: sh -c "alembic upgrade head && uvicorn main:app --host 0.0.0.0 --port 8000 --reload --proxy-headers"
</code></pre>
<p>But nothing has changed. The problem remains.
Maybe I misunderstood the advice or the nginx config needs to be edited somehow.</p>
<p>The second way I used</p>
<pre><code>from fastapi.middleware.httpsredirect import HTTPSRedirectMiddleware
app = FastAPI()
app.add_middleware(HTTPSRedirectMiddleware)
</code></pre>
<p>But got cyclic redirect <code>HTTP/1.0" 307 Temporary Redirect</code> for all my urls`</p>
<p>Also tried <a href="https://stackoverflow.com/questions/70521784/fastapi-links-created-by-url-for-in-jinja2-template-use-http-instead-of-https">this</a> No effect</p>
<p>My current configs</p>
<p><strong>nginx-config</strong></p>
<pre><code>server {
listen 80;
server_name team-mate.app;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name team-mate.app;
server_tokens off;
location /static/ {
gzip on;
gzip_buffers 8 256k;
root /app;
}
ssl_certificate /etc/letsencrypt/live/team-mate.app/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/team-mate.app/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://web:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /favicon.ico {
access_log off;
log_not_found off;
}
}
</code></pre>
<p><strong>docker-compose.yaml</strong></p>
<pre><code>version: '3.9'
services:
web:
env_file: .env
build: .
command: sh -c "alembic upgrade head && uvicorn main:app --host 0.0.0.0 --port 8000 --reload"
volumes:
- .:/app
ports:
- 8000:8000
depends_on:
- db
- redis
db:
image: postgres:11
volumes:
- postgres_data:/var/lib/postgresql/data
expose:
- 5432
environment:
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASS}
- POSTGRES_DB=${DB_NAME}
redis:
image: redis:6-alpine
volumes:
- redis_data:/data
nginx:
image: nginx:latest
ports:
- "80:80"
- "443:443"
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
volumes:
- ./nginx_config.conf:/etc/nginx/conf.d/default.conf
- ./data/nginx:/etc/nginx/conf.d
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
depends_on:
- web
certbot:
image: certbot/certbot
volumes:
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
volumes:
postgres_data:
redis_data:
</code></pre>
<p><strong>main.py</strong></p>
<pre><code>app = FastAPI()
app.mount("/static", StaticFiles(directory="static"), name="static")
templates = Jinja2Templates(directory="templates")
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>@router.get('/', response_class=HTMLResponse)
async def main_page(request: Request,
user: User = Depends(UserService.get_authenticated_user_id)
):
return templates.TemplateResponse('base.html',
context={
'request': request,
'user': user,
}
)
</code></pre>
<p><strong>html</strong></p>
<pre><code> <link type="text/css" href="{{ url_for('static', path='/css/materialize.min.css') }}" rel="stylesheet">
<link type="text/css" href="{{ url_for('static', path='/css/custom.css') }}" rel="stylesheet">
</code></pre>
<p><strong>p.s.</strong></p>
<p>I followed MatsLindh's advice and added the <code>proxy_set_header X-Forwarded-Proto $scheme</code> parameter to the nginx configuration</p>
<p>And added <code>--forwarded-allow-ips="*"</code> to docker-compose file as</p>
<pre><code>command: sh -c "alembic upgrade head && uvicorn main:app --host 0.0.0.0 --port 8000 --reload --proxy-headers --forwarded-allow-ips="*""
</code></pre>
<p>And it made a bit of a difference - the debugger stopped writing <code>Mixed-content</code> but returns <code>404</code> error for static files</p>
<p><a href="https://i.sstatic.net/fpONl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fpONl.png" alt="404" /></a></p>
<p><strong>upd2</strong></p>
<p>In the end, after many attempts to solve the problem, I managed to convert the application to work with https. Now the function <code>url_for</code> e.g. <code>href="{{ url_for('user_profile', pk=member.id) }}"</code> is correctly executed and redirects to the correct URL. Also, the warning in the browser that the connection is not secure is gone.</p>
<p>The trick was to use <code>--proxy-headers --forwarded-allow-ips</code> inside docker-compose</p>
<pre><code>command: sh -c "alembic upgrade head && uvicorn main:app --host 0.0.0.0 --port 8000 --reload --proxy-headers --forwarded-allow-ips="*""
</code></pre>
<p>and X-Forwarded-Proto in nginx config</p>
<pre><code>proxy_set_header X-Forwarded-Proto $scheme;
</code></pre>
<p>There is one problem left to solve. I still get 404 for the static folder. Therefore, the styles don't work.</p>
<p>My current nginx-config</p>
<pre><code>server {
listen 80;
server_name team-mate.app;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name team-mate.app;
server_tokens off;
location /static/ {
gzip on;
gzip_buffers 8 256k;
alias /app/static/;
}
ssl_certificate /etc/letsencrypt/live/team-mate.app/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/team-mate.app/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://web:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /favicon.ico {
access_log off;
log_not_found off;
}
}
</code></pre>
<p>and docker-compose.yaml</p>
<pre><code>version: '3.9'
services:
web:
env_file: .env
build: .
command: sh -c "alembic upgrade head && uvicorn main:app --host 0.0.0.0 --port 8000 --reload --proxy-headers --forwarded-allow-ips="*""
volumes:
- .:/app
ports:
- 8000:8000
depends_on:
- db
- redis
db:
image: postgres:11
volumes:
- postgres_data:/var/lib/postgresql/data
expose:
- 5432
environment:
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASS}
- POSTGRES_DB=${DB_NAME}
redis:
image: redis:6-alpine
volumes:
- redis_data:/data
nginx:
image: nginx:latest
ports:
- "80:80"
- "443:443"
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
volumes:
- ./nginx_config.conf:/etc/nginx/conf.d/default.conf
- ./data/nginx:/etc/nginx/conf.d
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
depends_on:
- web
certbot:
image: certbot/certbot
volumes:
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
volumes:
postgres_data:
redis_data:
</code></pre>
<p>if I explicitly specify the path to the statics in the templates</p>
<pre><code><link rel="stylesheet" href="https://team-mate.app/static/css/custom.css'">
</code></pre>
<p>it still returns a 404.</p>
<p>Static file mount</p>
<pre><code>app = FastAPI()
app.mount("/static", StaticFiles(directory="static"), name="static")
</code></pre>
<p>Project structure</p>
<pre><code>wplay
├── api
│ ├── endpoints
│ │ └── __pycache__
│ ├── __pycache__
│ └── services
│ └── __pycache__
├── data
│ ├── certbot
│ │ ├── conf
│ │ │ └── renewal-hooks
│ │ │ ├── deploy
│ │ │ ├── post
│ │ │ └── pre
│ │ └── www
│ └── nginx
├── endpoints
│ └── __pycache__
├── helpers
│ └── __pycache__
├── migrations
│ ├── __pycache__
│ └── versions
│ └── __pycache__
├── models
│ └── __pycache__
├── __pycache__
├── schemas
│ └── __pycache__
├── services
│ └── __pycache__
├── sessions
│ ├── core
│ │ └── __pycache__
│ └── __pycache__
├── static
│ ├── css
│ ├── fonts
│ │ └── roboto
│ ├── img
│ └── js
└── templates
</code></pre>
<p><strong>UPD</strong></p>
<p>File response works correct</p>
<pre><code>@router.get("/download")
async def download_file():
file_path = "static/img/6517.svg"
return FileResponse(file_path, media_type="image/svg+xml")
</code></pre>
<p><strong>UPD 20.02.2023</strong></p>
<p>I left the files: <code>nginx-config</code>, <code>docker-compose</code>, <code>Dockerfile</code> unchanged and ran a simple application</p>
<p><strong>main.py</strong></p>
<pre><code>from fastapi import FastAPI, Request
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from db import database
app = FastAPI()
app.mount("/static", StaticFiles(directory="static"), name="static")
templates = Jinja2Templates(directory="templates")
@app.get("/")
async def home(request: Request):
return templates.TemplateResponse('base.html',
context={'request': request,
}
)
</code></pre>
<p><strong>base.html</strong></p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>ToDo App</title>
<link type="text/css" href="{{ url_for('static', path='/css/styles.css') }}" rel="stylesheet">
</head>
<body>
<main>
<h1>ToDo App</h1>
<br>
{% block content %}
{% endblock content %}
</main>
</body>
</html>
</code></pre>
<p>And got the same result 404 for
<a href="https://team-mate.app/static/css/styles.css" rel="nofollow noreferrer">https://team-mate.app/static/css/styles.css</a></p>
|
<python><nginx><https><fastapi><starlette>
|
2023-02-08 15:45:53
| 0
| 3,322
|
Jekson
|
75,388,232
| 9,475,509
|
Install Python package inside a Jupyter Notebook kernel
|
<p>Inside a Jupyter Notebook I have managed to install a Python kernel with</p>
<pre><code>!python -m ipykernel install --user --name other-env --display-name "Python (other-env)"
</code></pre>
<p>as informed <a href="https://ipython.readthedocs.io/en/stable/install/kernel_install.html" rel="nofollow noreferrer">here</a> and it is available with other kernels on the menu <strong>Kernel</strong> → <strong>Change kernel</strong> and</p>
<pre><code>!jupyter kernelspec list
</code></pre>
<p>will also show them</p>
<pre><code>Available kernels:
avenv C:\Users\Full Name\AppData\Roaming\jupyter\kernels\avenv
chatterbot C:\Users\Full Name\AppData\Roaming\jupyter\kernels\chatterbot
othervenv C:\Users\Full Name\AppData\Roaming\jupyter\kernels\othervenv
python3 C:\Users\Full Name\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\share\jupyter\kernels\python3
</code></pre>
<p>Then I try to install Python a package using</p>
<pre><code>%pip install a_package
</code></pre>
<p>as given <a href="https://stackoverflow.com/a/56190436/9475509">here</a> that said</p>
<blockquote>
<p>with % (instead of !) it will install <code>a_package</code> into the current kernel (rather than into the instance of Python that launched the notebook).</p>
</blockquote>
<p>But what I have got that it installs <code>a_package</code> to all kernels, or <code>%pip list</code> will list the same installed packages in all kernels.</p>
<p>Is there a way to install Python package only to an active Jupyter Notebook kernel?</p>
|
<python><pip><jupyter-notebook><python-venv><jupyter-kernel>
|
2023-02-08 15:43:33
| 3
| 789
|
dudung
|
75,388,146
| 12,829,151
|
Using sample_weight param with XGBoost through a pipeline
|
<p>I want to use the <code>sample_weight</code> parameter with XGBClassifier from the <code>xgboost</code> package.</p>
<p>The problem happen when I want to use it inside a <code>pipeline</code> from <code>sklearn.pipeline</code>.</p>
<pre><code>from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier
clf = XGBClassifier(**params)
steps = [ ('scaler', MinMaxScaler() ), ('classifier', clf ) ]
pipeline = Pipeline( steps )
</code></pre>
<p>When I run <code>pipeline.fit(x, y, sample_weight=sample_weight)</code> where <code>sample_weight</code> is just a dictionary with int representing weights, I have the following error:</p>
<blockquote>
<p>ValueError: Pipeline.fit does not accept the sample_weight parameter.</p>
</blockquote>
<p>How can I solve this problem? Is there a workaround? I have seen that an <a href="https://github.com/scikit-learn/scikit-learn/issues/18159" rel="nofollow noreferrer">issue</a> already exists.</p>
|
<python><machine-learning><scikit-learn><pipeline><xgboost>
|
2023-02-08 15:36:59
| 1
| 1,885
|
Will
|
75,388,099
| 8,971,938
|
Split tensorflow BatchDataset for LSTM with multiple inputs
|
<p>I construct a LSTM model with two inputs: one for categorical variables, one for numerical variables:</p>
<pre><code>model = Model(inputs = [cat_input, num_input], outputs = x, name = "LSTM")
</code></pre>
<p>The input data for the LSTM is generated by means of <code>tensorflow.keras.utils.timeseries_dataset_from_array()</code>:</p>
<pre><code>input_dataset = timeseries_dataset_from_array(
df[["cat", "num1", "num2"]], df["target"], sequence_length=n_timesteps, sequence_stride=1, batch_size=20
)
</code></pre>
<p>When I directly feed <code>input_dataset</code> into the model, I get the following error: "ValueError: Layer "LSTM" expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, 3) dtype=int64>]", because the model expects two inputs and not one.</p>
<p>I can achieve this (a bit ugly) like so:</p>
<pre><code>input_dataset2 = input_dataset.map(lambda x, y: ((x[:,:,0:1], x[:,:,1:3]), y))
model.fit(
input_dataset2, steps_per_epoch=20, epochs=50, verbose=0, shuffle=True
) # this now works
</code></pre>
<p>My question: The solution I found is not very elegant. Is this kind of split also possible with <code>tf.split()</code> or another function?</p>
<p>EDIT: When I try the following:</p>
<pre><code>input_dataset.map(lambda x, y: ((split(value=x, num_or_size_splits=[1, 2], axis = -1)), y))
</code></pre>
<p>I get this error: "ValueError: Value [<tf.Tensor 'split:0' shape=(None, None, 1) dtype=int64>, <tf.Tensor 'split:1' shape=(None, None, 2) dtype=int64>] is not convertible to a tensor with dtype <dtype: 'int64'> and shape (2, None, None, None)."</p>
|
<python><tensorflow><keras><time-series><lstm>
|
2023-02-08 15:32:35
| 1
| 597
|
Requin
|
75,388,072
| 5,181,219
|
Given two numpy arrays, how to split one into an array of lists based on the second
|
<p>I have two numpy arrays: one containing arbitrary values, and one containing integers larger than 1. The sum of the integers is equal to the length of the first array. Sample:</p>
<pre class="lang-py prettyprint-override"><code>values = np.array(["a", "b", "c", "d", "e", "f", "g", "h"])
lengths = np.array([1, 3, 2, 2])
len(values) == sum(lengths) # True
</code></pre>
<p>I would like to split the first array according to the lengths of the second array, and end up with something like:</p>
<pre class="lang-py prettyprint-override"><code>output = np.array([["a"], ["b", "c", "d"], ["e", "f"], ["g", "h"]], dtype=object)
</code></pre>
<p>It's easy to iterate over the array with a Python loop, but it's also slow when both lists are very large (hundreds of millions of elements). Is there a way to do this operation using native numpy operations, which presumably should be must faster?</p>
|
<python><arrays><numpy><performance>
|
2023-02-08 15:30:48
| 1
| 1,092
|
Ted
|
75,388,064
| 10,178,162
|
predict_proba version issue
|
<p>I want to get probability of each class in the following code,</p>
<pre><code># example making new probability predictions for a classification problem
from keras.models import Sequential
from keras.layers import Dense
from sklearn.datasets import make_blobs
from sklearn.preprocessing import MinMaxScaler
# generate 2d classification dataset
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
scalar = MinMaxScaler()
scalar.fit(X)
X = scalar.transform(X)
# define and fit the final model
model = Sequential()
model.add(Dense(4, input_shape=(2,), activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(X, y, epochs=500, verbose=0)
# new instances where we do not know the answer
Xnew, _ = make_blobs(n_samples=3, centers=2, n_features=2, random_state=1)
Xnew = scalar.transform(Xnew)
# make a prediction
ynew = model.predict_proba(Xnew)
# show the inputs and predicted outputs
for i in range(len(Xnew)):
print("X=%s, Predicted=%s" % (Xnew[i], ynew[i]))
</code></pre>
<p>However, I am getting the following error,</p>
<p><code>AttributeError: 'Sequential' object has no attribute 'predict_proba'</code></p>
<p>I was running this before so I assume this is a version issue. Any help is appreciated!</p>
|
<python><tensorflow><keras>
|
2023-02-08 15:30:02
| 1
| 399
|
Phoenix
|
75,388,032
| 11,006,089
|
How to Load the Earnings Calendar data from TradingView link and into Dataframe
|
<p>I want to load the Earnings Calendar data from TradingView link and load into Dataframe.</p>
<pre><code>Link: https://in.tradingview.com/markets/stocks-india/earnings/
Filter-1: Data for "This Week"
</code></pre>
<p><a href="https://i.sstatic.net/eMAzl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eMAzl.png" alt="enter image description here" /></a></p>
<p>I am not able to select the Tab "This Week". Any help ?</p>
<p>Answer is Closed so posting here:</p>
<pre><code>import time
from selenium import webdriver
from selenium.webdriver.common.by import By
import pandas as pd
pd.set_option('display.max_rows', 50000)
pd.set_option('display.max_columns', 100)
pd.set_option('display.width', 10000)
driver = webdriver.Chrome()
driver.get("https://in.tradingview.com/markets/stocks-india/earnings/")
driver.find_element(By.XPATH, "//div[.='This Week']").click()
time.sleep(5)
visible_columns = driver.find_elements(By.CSS_SELECTOR, 'div.tv-screener__content-pane thead th:not([class*=i-hidden])')
data_field = [c.get_attribute('data-field') for c in visible_columns]
header = [c.text.split('\n')[0] for c in visible_columns]
rows = driver.find_elements(By.XPATH, "//div[@class='tv-screener__content-pane']//tbody/tr")
columns = []
for field in data_field:
column = driver.find_elements(By.XPATH, f"//div[@class='tv-screener__content-pane']//tbody/tr/td[@data-field-key='{field}']")
columns.append([col.text.replace('\n',' - ') for col in column])
df = pd.DataFrame(dict(zip(header, columns)))
print(df)
driver.quit()
</code></pre>
|
<python><python-3.x><pandas><selenium-webdriver><web-scraping>
|
2023-02-08 15:28:07
| 1
| 465
|
Rohit
|
75,387,992
| 10,112,162
|
How to activate virtual environment withing npm script in package.json
|
<p>How do i activate a virtual environment within a npm script?</p>
<pre><code> "scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"dev": "virtualenv ../Scripts/venv && nodemon server.js"
},
</code></pre>
<p>I tried this but it wont work since virtualenv is not recognized as an internal or external command.</p>
<p>I have also tried to use source, and also just use the activate.bat file inside the environemnt.</p>
<p>FYI i am using windows.</p>
|
<python><node.js><virtualenv>
|
2023-02-08 15:25:20
| 1
| 365
|
Elias Knudsen
|
75,387,979
| 11,574,636
|
Artifactory aql search error: "Props Authentication Token not found" 403
|
<p>I have a pipeline running in GitLab. I want it to do an aql search for Artifactory to collect information about my images.</p>
<p>For this I send a POST request to</p>
<pre><code>https://{url}/artifactory/api/search/aql
</code></pre>
<p>with my aql request in the body and this header:</p>
<pre><code>headers = {'X-JFrog-Art-Api': token,
'content-type': "text/plain"}
</code></pre>
<p>When I run my function in IntelliJ it works without problems, but as soon as I do it with an Api key from a service account with all permissions I get this 403 error:</p>
<pre><code>b'{\n "errors" : [ {\n "status" : 403,\n "message" : "Props Authentication Token not found"} ]\n}'
</code></pre>
|
<python><gitlab-ci><artifactory>
|
2023-02-08 15:24:28
| 0
| 326
|
Fabian
|
75,387,921
| 5,353,753
|
Remove non numeric rows from dataframe
|
<p>I have a dataframe of patients and their gene expressions. I has this format:</p>
<pre><code>Patient_ID | gene1 | gene2 | ... | gene10000
p1 0.142 0.233 ... bla
p2 0.243 0.243 ... -0.364
...
p4000 1.423 bla ... -1.222
</code></pre>
<p>As you see, that dataframe contains noise, with cells that are values other then a float value.</p>
<p>I want to remove every row that has a any column with non numeric values.</p>
<p>I've managed to do this using <code>apply</code> and <code>pd.to_numeric</code> like this:</p>
<pre><code>cols = df.columns[1:]
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
df = df.dropna()
</code></pre>
<p>The problem is that it's taking for ever to run, and I need a better and more efficient way of achieving this</p>
<p><strong>EDIT</strong>: To reproduce something like my data:</p>
<pre><code>arr = np.random.random_sample((3000,10000))
df = pd.DataFrame(arr, columns=['gene' + str(i) for i in range(10000)])
df = pd.concat([pd.DataFrame(['p' + str(i) for i in range(10000)], columns=['Patient_ID']),df],axis = 1)
df['gene0'][2] = 'bla'
df['gene9998'][4] = 'bla'
</code></pre>
|
<python><pandas><dataframe><performance>
|
2023-02-08 15:20:22
| 1
| 40,569
|
sagi
|
75,387,904
| 6,734,243
|
how to exclude "tests" folder from the wheel of a pyproject.toml managed lib?
|
<p>I try my best to move from a <code>setup.py</code> managed lib to a pure <code>pyproject.toml</code> one.
I have the following folder structure:</p>
<pre><code>tests
└── <files>
docs
└── <files>
sepal_ui
└── <files>
pyproject.toml
</code></pre>
<p>and in my <code>pyproject.toml</code> the following setup for file and packages discovery:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=61.2", "wheel"]
[tool.setuptools]
include-package-data = false
[tool.setuptools.packages.find]
include = ["sepal_ui*"]
exclude = ["docs*", "tests*"]
</code></pre>
<p>and in the produce wheel, I get the following:</p>
<pre><code>tests
└── <files>
docs
└── <files>
sepal_ui
└── <files>
sepal_ui.egg-info
└── top-level.txt
</code></pre>
<p>looking at the top-level.txt, I see that only sepal_ui is included so my question is simple why do the extra "docs" and "tests" folder are still included even if they are not used? how to get rid of them ?</p>
<p>PS: I'm aware of the MANIFEST.in solution that I will accept if it's really the only one but I found it redundant to specify in 2 files.</p>
|
<python><setuptools><pyproject.toml>
|
2023-02-08 15:19:03
| 4
| 2,670
|
Pierrick Rambaud
|
75,387,825
| 4,495,790
|
How to replace column values to most frequent in groups in Pandas?
|
<p>I have the following Pandas DF like this:</p>
<pre><code>ID category
-----------
1 A
1 A
1 B
1 A
2 A
2 A
2 B
2 B
3 B
3 B
3 C
3 A
</code></pre>
<p>Now I would like to get a version of it where <code>category</code> column values are updated to the most frequent per <code>ID</code>. So the desired output is:</p>
<pre><code>ID category
-----------
1 A
1 A
1 A
1 A
2 B
2 B
2 B
2 B
3 B
3 B
3 B
3 B
</code></pre>
<p>(In case an equal number of discrete elements happen to occure in <code>category</code> per <code>ID</code>, any of the values is acceptable.) What is the respective Pandas expression to use?</p>
|
<python><pandas>
|
2023-02-08 15:13:39
| 0
| 459
|
Fredrik
|
75,387,809
| 11,692,124
|
Python sockets cant connect to the server
|
<p>I cant connect to the server with client over the internet both on different windows machines. here is the server's side code:</p>
<pre><code>import socket
def getPublicIP():
import requests
response = requests.get("https://api.ipify.org")
return response.text
serverAddressPublic = getPublicIP()
print('serverAddressPublic:',serverAddressPublic)
serverAddressPrivate = socket.gethostbyname(socket.gethostname())
serverAddressPrivate = "0.0.0.0"#also tried this
print('serverAddressPrivate:',serverAddressPrivate)
serverSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serverAddressPrivateAndPort = (serverAddressPrivate, 10001)
serverSocket.bind(serverAddressPrivateAndPort)
serverSocket.listen()
print(f"[LISTENING] server is listening on {serverAddressPublic}")
print(f"[LISTENING] server is listening on {serverAddressPrivate}")
clientSocket, clientAddress = serverSocket.accept()
print(f'connected to {(clientSocket, clientAddress)}')
</code></pre>
<p>for private IP for server I tried both <code>socket.gethostbyname(socket.gethostname())</code> and <code>0.0.0.0</code> <a href="https://stackoverflow.com/a/28776405/11692124">from</a>.</p>
<p>client's side code:</p>
<pre><code>import socket
clientSocket=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serverAddress='publicIPofServer'#uuu
print('before connected')#uuu
clientSocket.connect((serverAddress,10001))
print('after connected')#uuu
</code></pre>
<p>but it gives time out in the client: <code>TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond</code>.</p>
<p>so I tried <code>telnet publicIPofServer 10001</code> and <code>telnet privateIPofServer 10001</code> in cmd in server and both gave similar answer <code>Connecting To 0.0.0.0...Could not open connection to the host, on port 10001: Connect failed</code></p>
<p>then I tried <code>netsh firewall set portopening TCP 10001 serverclientpythoncode</code> in cmd. even though I got <code>Command executed successfully</code>. but after <code>telnet 0.0.0.0 10001</code> got <code>Connecting To 0.0.0.0...Could not open connection to the host, on port 10001: Connect failed.</code></p>
<h2>update</h2>
<h3>1</h3>
<p>with <code>netsh firewall set portopening TCP 10001 serverclientpythoncode</code> in 2 other computer, except my personal computer, I was able to make the connection. by on my pc, I couldnt do that. even by making new <code>inboundRule</code> in <code>Windows Firewall with Advanced Security</code>. so how to open ports on my pc? note <code>netstat -an | find "LISTENING"</code> shows that its listening on <code>0.0.0.0:10001</code>.</p>
<h3>2</h3>
<p>even with turning off the firewall completely I couldnt connect to my pc either through the client code or even pinging it!!! so I thought maybe the function is returning the IP wrong, but then I checked it from 5 other sites and it was right!!!1</p>
|
<python><sockets><server><client>
|
2023-02-08 15:12:42
| 0
| 1,011
|
Farhang Amaji
|
75,387,776
| 3,672,883
|
unrecognized option attr-defined in mypy
|
<p>Hello I am using oracledb.NUMBER, etc... and when I execute mypy I got the following error:</p>
<pre><code>Module has not attribute "NUMBER"
</code></pre>
<p>I set into mypy.ini the following options in order to skip that error, but didn't work.</p>
<pre><code>[mypy]
disable-error-code = attr-defined
[mypy]
disable-error-code = "attr-defined"
[mypy-oracle.*]
disable-error-code = "attr-defined"
</code></pre>
<p>how can I ignore it?</p>
<p>thanks</p>
|
<python><mypy>
|
2023-02-08 15:10:24
| 0
| 5,342
|
Tlaloc-ES
|
75,387,765
| 10,007,302
|
Trying to rearrange multiple columns in a dataframe based on ranking row values
|
<p>I'm working on a matching company names and I have a dataframe that returns output in the format below.</p>
<p>The table has an original name and for each original name, there could be N number of matches. For each match, there are 3 columns, match_name_0, score_0, match_index_0 and so on up to match_name_N.</p>
<p>I'm trying to figure out a way to return a new dataframe that sorts the columns after the original_name by the highest match scores. Essentially, if match_score_2 was the highest then match_score_0 followed by match_score_1 the columns would be</p>
<p>original_score, match_name_2, match_score_2, match_index_2, match_name_0, match_score_0, match_index_0, match_name_2, match_score_2, match_index_2,</p>
<p>In the event of a tie, the leftmost match should be ranked higher. I should note that sometimes they will be in the correct order but 30-40% of the times, they are not.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>index</th>
<th>original_name</th>
<th>match_name_0</th>
<th>score_0</th>
<th>match_index_0</th>
<th>match_name_1</th>
<th>score_1</th>
<th>match_index_1</th>
<th>match_name_2</th>
<th>score_2</th>
<th>match_index_2</th>
<th>match_name_3</th>
<th>score_3</th>
<th>match_index_3</th>
<th>match_name_4</th>
<th>score_4</th>
<th>match_index_4</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>aberdeen asset management plc</td>
<td>aberdeen asset management sa</td>
<td>100</td>
<td>2114</td>
<td>aberdeen asset management plc esop</td>
<td>100</td>
<td>2128</td>
<td>aberdeen asset management inc</td>
<td>100</td>
<td>2123</td>
<td>aberdeen asset management spain</td>
<td>71.18779356</td>
<td>2132</td>
<td>aberdeen asset management ireland</td>
<td>69.50514818</td>
<td>2125</td>
</tr>
<tr>
<td>2</td>
<td>agi partners llc</td>
<td>agi partners llc</td>
<td>100</td>
<td>5274</td>
<td>agi partners llc</td>
<td>100</td>
<td>5273</td>
<td>agr partners llc</td>
<td>57.51100704</td>
<td>5378</td>
<td>aci partners llc</td>
<td>53.45090217</td>
<td>3097</td>
<td>avi partners llc</td>
<td>53.45090217</td>
<td>17630</td>
</tr>
<tr>
<td>3</td>
<td>alberta investment management corporation</td>
<td>alberta investment management corporation</td>
<td>100</td>
<td>6754</td>
<td>alberta investment management corporation pension arm</td>
<td>100</td>
<td>6755</td>
<td>anchor investment management corporation</td>
<td>17.50748486</td>
<td>10682</td>
<td>cbc investment management corporation</td>
<td>11.79760839</td>
<td>36951</td>
<td>harvest investment management corporation</td>
<td>31.70316571</td>
<td>85547</td>
</tr>
</tbody>
</table></div>
|
<python><pandas><dataframe>
|
2023-02-08 15:09:42
| 1
| 1,281
|
novawaly
|
75,387,685
| 12,430,026
|
Files not being included by hatchling when specified in pyproject.toml
|
<p>I am trying to package my tool with <code>Hatch</code> and want to include some extra files found in <code>/docs</code> in the below directory tree:</p>
<pre><code>this_project
│ .gitattributes
│ .gitignore
│ LICENSE
│ MANIFEST.in
│ pyproject.toml
│ README.md
│
├───docs
│ default.primers
│
└───ribdif
__init__.py
__main__.py
</code></pre>
<p>I am installing the tool with <code>pip install git+https://github.com/Rob-murphys/ribdif.git</code> but am only getting the expected file inside <code>ribdif</code> despite specifying in the <code>pyproject.toml</code> per <a href="https://hatch.pypa.io/latest/config/build/#file-selection" rel="nofollow noreferrer">https://hatch.pypa.io/latest/config/build/#file-selection</a>:</p>
<pre><code>[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "ribdif"
version = "1.1.2"
authors = [
{ name="Robert Murphy", email="Robert.murphy@bio.ku.dk" },
]
description = "A program to analyse and correct for the usefulness of amplicon sequences"
readme = "README.md"
requires-python = ">=3.11"
classifiers = [
"Programming Language :: Python :: 3.11",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
]
[tool.hatch.build]
include = [
"ribdif/*.py",
"/docs",
]
[project.scripts]
ribdif = "ribdif.__main__:main"
</code></pre>
|
<python><pip><hatch>
|
2023-02-08 15:03:50
| 1
| 1,577
|
Lamma
|
75,387,495
| 10,357,604
|
How to solve the Import Error of cv2 module
|
<p>I know that there are many questions and answers regarding this.
But I couldn't solve it.
I get</p>
<blockquote>
<p>ImportError: DLL load failed while importing cv2: The specified module could not be found.</p>
</blockquote>
<p>I have
Windows 11,
Python 3.9.12</p>
<p>opencv-python and opencv-contrib-python 4.6.0.66</p>
<p>PyCharm IDE</p>
<p>I tried</p>
<ul>
<li>installing Visual build tools</li>
</ul>
|
<python><python-3.x><opencv>
|
2023-02-08 14:49:55
| 1
| 1,355
|
thestruggleisreal
|
75,387,407
| 7,376,511
|
classmethod with different overloaded signature between instance and base class
|
<p>I am trying to write a class with an additional constructing method that accepts extra values. These extra values are expensive to compute, and are saved at the end of the program, so <code>.initialize()</code> effectively serves as an injection to avoid recomputing them again at subsequent runs of the program.</p>
<pre><code>class TestClass:
init_value: str
secondary_value: int
@overload
@classmethod
def initialize(cls: type["TestClass"], init_value: str, **kwargs) -> "TestClass":
...
@overload
@classmethod
def initialize(cls: "TestClass", **kwargs) -> "TestClass":
# The erased type of self "TestClass" is not a supertype of its class "Type[TestClass]
...
@classmethod
def initialize(cls: type["TestClass"] | "TestClass", init_value: str | None = None, **kwargs) -> "TestClass":
if isinstance(cls, type):
instance = cls(init_value=init_value)
# Argument "init_value" to "TestClass" has incompatible type "Optional[str]"; expected "str"
else:
instance = cls
for extra_key, extra_value in kwargs.items():
setattr(instance, extra_key, extra_value)
return instance
def __init__(self, init_value: str) -> None:
self.init_value = init_value
instance1 = TestClass.initialize(init_value="test", secondary_value="test2")
instance2 = TestClass(init_value="test").initialize(secondary_value="test2")
# Missing positional argument "init_value" in call to "initialize" of "TestClass"
instance1.init_value
instance2.init_value
instance1.secondary_value
instance2.secondary_value
</code></pre>
<p>How can I make the above work so that <code>TestClass(init_value).initialize()</code> does not require init_value passed to <code>.initialize()</code> because it's already been declared in <code>__init__</code>, while <code>TestClass.initialize()</code> does?</p>
<p>In short, how can I define a classmethod with different typing depending on whether it's called on an instance or a class?</p>
<p>These extra values cannot be declared in <code>__init__</code>, because of complex internals of the class that would be too long to repeat here.</p>
|
<python><python-typing><mypy><class-method>
|
2023-02-08 14:42:34
| 2
| 797
|
Some Guy
|
75,387,344
| 5,152,424
|
use the result of YOLOv8 for pyzbar
|
<p>I want to pass the result from the YOLOv8 to the decode function so that the barcodes are read from it.</p>
<p>My program code is:</p>
<pre><code>model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture(0)
while True:
ret, frame = cap.read()
results = model.predict(source=frame, show=True, conf=0.70, stream=True, device=0)
decode(results.numpy())
if cv2.waitKey(10) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>When I do this, I get the following error message:</p>
<pre><code>AttributeError: 'generator' object has no attribute 'numpy'
</code></pre>
<p>Additionally I want to preprocess the frame with kraken.binarization.nlbin() is this possible, if so how?</p>
|
<python><opencv><yolo>
|
2023-02-08 14:37:25
| 1
| 533
|
AS400
|
75,387,254
| 6,729,591
|
Python's dynaconf does not merge multi-file settings correctly
|
<p>I am using <a href="https://www.dynaconf.com/configuration/" rel="nofollow noreferrer">dynaconf</a> to manage my programs' settings. But i often need to switch between debugging and regular run profile.</p>
<p>In <code>default.yaml</code> I have:</p>
<pre class="lang-yaml prettyprint-override"><code>SETTINGS:
RUN_WITH_SEED: false
SEED: 42
COLORS: ['red', 'white']
</code></pre>
<p>And in my <code>debug.yaml</code> I have only:</p>
<pre class="lang-yaml prettyprint-override"><code>SETTINGS:
RUN_WITH_SEED: true
</code></pre>
<p>However, when I load the debug after the default it also removes the untouched settings. I really want to avoid writing all settings again because most are unchanged and there are quite a few.</p>
<p>This is how I load them:</p>
<pre class="lang-py prettyprint-override"><code>
config_files = ['config/default.yaml', 'config/.secrets.yaml', 'config/debug.yaml']
config = Dynaconf(
envvar_prefix="DYNACONF",
settings_files=config_files,
)
print(config.SETTINGS.COLORS)
</code></pre>
<p>Yields:</p>
<pre><code>"'DynaBox' object has no attribute 'colors'"
</code></pre>
|
<python><python-3.x><configuration>
|
2023-02-08 14:30:03
| 1
| 1,404
|
Dr. Prof. Patrick
|
75,386,804
| 16,389,095
|
Kivy MD DropDownItem: how to set the item of a dropdownitem into another dropdownitem
|
<p>I'm trying to developed a simple GUI in Kivy MD / Python. Originally, I modified the <a href="https://github.com/kivymd/KivyMD/wiki/Components-DropDownItem" rel="nofollow noreferrer">example code</a>:</p>
<pre><code>from kivy.lang import Builder
from kivy.metrics import dp
from kivymd.uix.list import OneLineIconListItem
from kivymd.app import MDApp
from kivymd.uix.menu import MDDropdownMenu
from kivymd.uix.dropdownitem import MDDropDownItem
from kivymd.uix.boxlayout import MDBoxLayout
KV = '''
MDScreen
MDDropDownItem:
id: drop_item_1
pos_hint: {'center_x': .5, 'center_y': .8}
text: 'FREQUENCY_1'
on_release: app.menu_sampling_rate_1.open()
MDDropDownItem:
id: drop_item_2
pos_hint: {'center_x': .5, 'center_y': .4}
text: 'FREQUENCY_2'
on_release: app.menu_sampling_rate_2.open()
'''
class MainApp(MDApp):
sampling_rate = ['300 Hz', '200 Hz', '100 Hz']
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.screen = Builder.load_string(KV)
self.menu_sampling_rate_1, self.sampling_rate_items_1 = self.Create_DropDown_Widget(self.screen.ids.drop_item_1, self.sampling_rate)
self.menu_sampling_rate_2, self.sampling_rate_items_2 = self.Create_DropDown_Widget(self.screen.ids.drop_item_2, self.sampling_rate)
def Create_DropDown_Widget(self, drop_down_item, item_list):
items_collection = [
{
"viewclass": "OneLineListItem",
"text": item_list[i],
"height": dp(56),
"on_release": lambda x = item_list[i]: self.Set_DropDown_Item(drop_down_item, menu, x),
} for i in range(len(item_list))
]
menu = MDDropdownMenu(caller=drop_down_item, items=items_collection, width_mult=2)
menu.bind()
return menu, items_collection
def Set_DropDown_Item(self, dropDownItem, dropDownMenu, textItem):
dropDownItem.set_item(textItem)
dropDownMenu.dismiss()
def build(self):
return self.screen
if __name__ == '__main__':
MainApp().run()
</code></pre>
<p>I tried to slightly modify it using a class <em>View</em> in which all methods and properties related to the interface are included.</p>
<pre><code>from kivy.lang import Builder
from kivy.metrics import dp
from kivymd.uix.list import OneLineIconListItem
from kivymd.app import MDApp
from kivymd.uix.menu import MDDropdownMenu
from kivymd.uix.dropdownitem import MDDropDownItem
from kivymd.uix.boxlayout import MDBoxLayout
KV = '''
<View>:
orientation: vertical
MDDropDownItem:
id: drop_item_1
pos_hint: {'center_x': .5, 'center_y': .8}
text: 'FREQUENCY_1'
on_release: root.menu_sampling_rate_1.open()
MDDropDownItem:
id: drop_item_2
pos_hint: {'center_x': .5, 'center_y': .4}
text: 'FREQUENCY_2'
on_release: root.menu_sampling_rate_2.open()
'''
class View(MDBoxLayout):
sampling_rate = ['300 Hz', '200 Hz', '100 Hz']
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.menu_sampling_rate_1, self.sampling_rate_items_1 = self.Create_DropDown_Widget(self.ids.drop_item_1, self.sampling_rate)
self.menu_sampling_rate_2, self.sampling_rate_items_2 = self.Create_DropDown_Widget(self.ids.drop_item_2, self.sampling_rate)
def Create_DropDown_Widget(self, drop_down_item, item_list):
items_collection = [
{
"viewclass": "OneLineListItem",
"text": item_list[i],
"height": dp(56),
"on_release": lambda x = item_list[i]: self.Set_DropDown_Item(drop_down_item, menu, x),
} for i in range(len(item_list))
]
menu = MDDropdownMenu(caller=drop_down_item, items=items_collection, width_mult=2)
menu.bind()
return menu, items_collection
def Set_DropDown_Item(self, dropDownItem, dropDownMenu, textItem):
dropDownItem.set_item(textItem)
dropDownMenu.dismiss()
class MainApp(MDApp):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.view = View()
def build(self):
return self.view
if __name__ == '__main__':
MainApp().run()
</code></pre>
<p>My questions are:</p>
<ol>
<li>In this second version, with the <em>View</em> class, why I get the <em>AttributeError: 'super' object has no attribute '<strong>getattr</strong>'</em>?</li>
<li>How to set the item of a dropdownitem equal to the current item of the second dropdownitem and viceversa? In this way when the user selects a new item into a dropdownitem, this new selection appears also into the other dropdownitem.So the two dropdownitem show the same current item.</li>
<li>How to set the width of a dropdownitem equal to dp(80)? The approach based on <em>size_hint_x</em> and <em>width</em> seems to not work.</li>
<li>Is there a way to enable/disable a dropdownitem? The property <em>active</em> seems to not work.</li>
</ol>
<p>Thank you in advance for any suggestions.</p>
|
<python><drop-down-menu><kivy><kivy-language><kivymd>
|
2023-02-08 13:55:01
| 1
| 421
|
eljamba
|
75,386,721
| 283,538
|
Sagemaker HyperparameterTuner and fixed hyper parameters (StaticHyperParameters)
|
<p>I used to use this type of hyper parameter (optimisation) specification:</p>
<pre><code> "OutputDataConfig": {"S3OutputPath": output_path},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m4.xlarge", "VolumeSizeInGB": 3},
"RoleArn": role_arn,
"StaticHyperParameters": {
"objective": "reg:squarederror"
},
"StoppingCondition": {"MaxRuntimeInSeconds": 10000}
</code></pre>
<p>TBH I do not even know if this is an old way of doing things or a different SDK - very confusing Sagemaker sometimes. Anyway, I want to use <a href="https://sagemaker.readthedocs.io/en/stable/api/training/tuner.html" rel="nofollow noreferrer">this SDK/API</a> instead - more precisely the HyperparameterTuner. How would I specify StaticHyperParameters (e.g. "objective":"quantile")? Simply by not giving this hyperparameter a range and hard coding it? Thanks!</p>
|
<python><amazon-sagemaker><amazon-sagemaker-studio>
|
2023-02-08 13:46:43
| 1
| 17,568
|
cs0815
|
75,386,643
| 315,168
|
multiprocessing.Manager() hangs Popen.communicate() on Python
|
<p>The use of <code>multiprocessing.Manager</code> prevents clean termination of Python child process using <code>subprocess.Process.Popen.terminate()</code> and <code>subprocess.Process.Popen.kill()</code>.</p>
<p>This seems to be because <code>Manager</code> creates a child process behind the scenes for communicating, but this process does not know how to clean itself up when the parent is terminated.</p>
<p>What is the easiest way to use <code>multiprocessing.Manager</code> so that it does not prevent a process shutdown by a signal?</p>
<p>A demostration:</p>
<pre class="lang-py prettyprint-override"><code>"""Multiprocess manager hang test."""
import multiprocessing
import subprocess
import sys
import time
def launch_and_read_process():
proc = subprocess.Popen(
[
"python",
sys.argv[0],
"run_unkillable"
],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
# Give time for the process to run and print()
time.sleep(3)
status = proc.poll()
print("poll() is", status)
print("Terminating")
assert proc.returncode is None
proc.terminate()
exit_code = proc.wait()
print("Got exit code", exit_code)
stdout, stderr = proc.communicate()
print("Got output", stdout.decode("utf-8"))
def run_unkillable():
# Disable manager creation to make the code run correctly
manager = multiprocessing.Manager()
d = manager.dict()
d["foo"] = "bar"
print("This is an example output", flush=True)
time.sleep(999)
def main():
mode = sys.argv[1]
print("Doing subrouting", mode)
func = globals().get(mode)
func()
if __name__ == "__main__":
main()
</code></pre>
<p>Run as <code>python test-script.py launch_and_read_process</code>.</p>
<p>Good output (no multiprocessing.Manager):</p>
<pre><code>
Doing subrouting launch_and_read_process
poll() is None
Terminating
Got exit code -15
Got output Doing subrouting run_unkillable
This is an example output
</code></pre>
<p>Output when <code>subprocess.Popen.communicate</code> hangs because use of <code>Manager</code>:</p>
<pre><code> Doing subrouting launch_and_read_process
poll() is None
Terminating
Got exit code -15
</code></pre>
|
<python><python-multiprocessing>
|
2023-02-08 13:40:29
| 1
| 84,872
|
Mikko Ohtamaa
|
75,386,552
| 4,267,439
|
python logging in AWS Fargate, datetime duplicated
|
<p>I'm trying to use python logging module in AWS Fargate. The same application should work also locally, so I'd like to use a custom logger for local use but to keep intact cloudwatch logs.
This is what I'm doing:</p>
<pre><code>if logging.getLogger().hasHandlers():
log = logging.getLogger()
log.setLevel(logging.INFO)
else:
from logging.handlers import RotatingFileHandler
log = logging.getLogger('sm')
log.root.setLevel(logging.INFO)
...
</code></pre>
<p>But I get this in cloudwatch:</p>
<pre><code>2023-02-08T13:06:27.317+01:00 08/02/2023 12:06 - sm - INFO - Starting
</code></pre>
<p>And this locally:</p>
<pre><code>08/02/2023 12:06 - sm - INFO - Starting
</code></pre>
<p>I thought Fargate was already defining a logger, but apparently the following has no effect:</p>
<pre><code>logging.getLogger().hasHandlers()
</code></pre>
<p>Ideally this should be the desired log in cloudwatch:</p>
<pre><code>2023-02-08T13:06:27.317+01:00 sm - INFO - Starting
</code></pre>
|
<python><amazon-ecs><aws-fargate><amazon-cloudwatchlogs><python-logging>
|
2023-02-08 13:33:04
| 2
| 2,825
|
rok
|
75,386,541
| 13,609,298
|
Panel ordered logit in Python
|
<p>I am looking to run a panel ordered logit model on Python. I am aware that this question has already been asked <a href="https://stackoverflow.com/questions/28035216/ordered-logit-in-python">here</a>, however, this is not for panel data. What I am looking for is something equivalent to the "<a href="https://cran.r-project.org/web/packages/pglm/pglm.pdf" rel="nofollow noreferrer">pglm</a>" R package. Is there a library for this on Python?</p>
|
<python><logistic-regression><glm><panel-data>
|
2023-02-08 13:32:26
| 0
| 311
|
Carl
|
75,386,477
| 4,296,426
|
Enable web access or interactive shell for PipelineJob tasks for Vertex AI
|
<p>I am trying to debug a <code>PipelineJob</code> that I launch on Vertex AI. Is there a way to <a href="https://cloud.google.com/vertex-ai/docs/reference/rest/v1/CustomJobSpec" rel="nofollow noreferrer">enable web access</a> on the components like you can when you launch Custom Jobs? This way, I could ssh into the running task and do a bunch of debugging.</p>
<p><strong>Here is a simplified version of my pipeline code:</strong></p>
<pre><code>import kfp.v2.dsl as dsl
from google.cloud import aiplatform
from kfp.v2 import compiler
from kfp.v2.dsl import (
component,
Input,
Output,
Dataset,
Metrics,
Model,
Artifact,
graph_component
)
from copy import copy
from kfp.v2.google.client import AIPlatformClient
from typing import Optional, Dict, Union, List
@component(
packages_to_install=['google-cloud-aiplatform']
)
def hello_world():
import time
print("Hello world")
time.sleep(300)
@dsl.pipeline(
name = "dataprep"
)
def train_model_pipeline(style: int):
# Set Up Training and Test Data
hello_op = hello_world()
</code></pre>
<p>I expected to be able to set <code>enable_web_access(True)</code> on the task, but that doesn't seem like an option because it's part of the CustomJob spec and not the PipelineTask.</p>
|
<python><google-cloud-vertex-ai><kfp>
|
2023-02-08 13:27:32
| 1
| 1,682
|
Optimus
|
75,386,359
| 12,416,164
|
Athena AWS python client get query size
|
<p>Is there any way to get the query size using the <code>query_execution_id</code> without downloading the csv file?. Using the <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/athena.html" rel="nofollow noreferrer">boto3</a> client.</p>
|
<python><amazon-web-services><amazon-athena>
|
2023-02-08 13:16:01
| 0
| 682
|
Andrex
|
75,386,342
| 4,225,972
|
django channels with redis in WSL2
|
<p>I have a redis installation running inside the windows subsystem for linux. It is working finde, but I cannot connect to it from django-channels. In my WSL I started redis and when using a normal terminal and python in Windows I can do for example:</p>
<pre><code>import redis
c = redis.Redis("localhost", 6379, 0)
c.keys("hello world")
</code></pre>
<p>which leads inside of WSL2 to:</p>
<pre><code>1675861647.991521 [0 [::1]:32934] "KEYS" "hello world"
</code></pre>
<p>But when I am trying to do the same thing with the functions from the <code>channels 4</code> <a href="https://channels.readthedocs.io/en/stable/tutorial/part_2.html#enable-a-channel-layer" rel="nofollow noreferrer">tutorial</a> I get stuck:</p>
<pre><code>$ python3 manage.py shell
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
import channels.layers
channel_layer = channels.layers.get_channel_layer()
from asgiref.sync import async_to_sync
async_to_sync(channel_layer.send)('test_channel', {'type': 'hello'})
async_to_sync(channel_layer.receive)('test_channel')
</code></pre>
<p>the last call results in the following error:</p>
<pre><code>Task exception was never retrieved
future: <Task finished name='Task-5' coro=<Connection.disconnect() done, defined at ...\venv\lib\site-packages\redis\asyncio\connection.py:723> exception=RuntimeError('Event loop is closed')>
Traceback (most recent call last):
File ...\venv\lib\site-packages\redis\asyncio\connection.py", line 732, in disconnect
self._writer.close() # type: ignore[union-attr]
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2544.0_x64__qbz5n2kfra8p0\lib\asyncio\streams.py", line 337, in close
return self._transport.close()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2544.0_x64__qbz5n2kfra8p0\lib\asyncio\selector_events.py", line 706, in close
self._loop.call_soon(self._call_connection_lost, None)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2544.0_x64__qbz5n2kfra8p0\lib\asyncio\base_events.py", line 753, in call_soon
self._check_closed()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2544.0_x64__qbz5n2kfra8p0\lib\asyncio\base_events.py", line 515, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
{'type': 'hello'}
</code></pre>
<p>I configured my <code>channels</code> in settings.py:</p>
<pre><code>ASGI_APPLICATION = "sst4.asgi.application"
CHANNEL_LAYERS = {
"default": {
"BACKEND": "channels_redis.core.RedisChannelLayer",
"CONFIG": {
"hosts": [("127.0.0.1", 6379)],
},
},
}
</code></pre>
|
<python><django><django-channels>
|
2023-02-08 13:14:24
| 1
| 1,400
|
xtlc
|
75,386,327
| 3,459,293
|
Transform rows categories to column while preserving rest of the data frame python
|
<p>I have data frame as below</p>
<pre><code> Time Groups Entity GC Seg Category Year Quarter IndicatorName Value
0 2021-06-01 KRO CO P_GA None Model_Q2_2021 2021 2 yhat 568759.481223
1 2021-07-01 KRO CO P_GA None Model_Q2_2021 2021 3 yhat 586003.965652
2 2021-08-01 KRO CO P_GA None Model_Q2_2021 2021 3 yhat 583703.420655
3 2021-09-01 KRO CO P_GA None Model_Q2_2021 2021 3 y 608601.857510
4 2021-10-01 KRO CO P_GA None Model_Q2_2021 2021 4 y 628928.602344
</code></pre>
<p>I want to <code>IndicatorName</code> categories to make them columns in such a way that the corresponding value to them in addtion to the rest of the columns</p>
<p>I tried <code>pivot</code>, and <code>melt</code> but nothing gave me desired results.</p>
<p>The closed I have gone was with this</p>
<pre><code>grouper = df.groupby('IndicatorName')
out = pd.concat([pd.Series(v['Value'].tolist(), name=k) for k, v in grouper], axis=1)
y yhat
0 8626.88 5.687595e+05
1 8215.30 5.860040e+05
2 8601.53 5.837034e+05
3 8145.16 6.086019e+05
4 9376.81 6.289286e+05
... ... ...
744 NaN 5.402358e+06
745 NaN 5.796123e+06
746 NaN 5.218829e+06
747 NaN 5.451504e+06
</code></pre>
<p>But I want to have all columns preserved and additional columns <code>yhat</code> and <code>y</code></p>
<p>Any help/suggestion would be much appreciated.</p>
<p>Thanks in advance!</p>
|
<python><pandas><data-wrangling>
|
2023-02-08 13:13:34
| 2
| 340
|
user3459293
|
75,386,255
| 784,433
|
using jax.vmap to vectorize along with broadcasting
|
<p>Consider the following toy example:</p>
<pre><code>x = np.arange(3)
# np.sum(np.sin(x - x[:, np.newaxis]), axis=1)
cfun = lambda x: np.sum(np.sin(x - x[:, np.newaxis]), axis=1)
cfuns = jax.vmap(cfun)
# for a 2d x:
x = np.arange(6).reshape(3,2)
cfuns(x)
</code></pre>
<p>where <code>x-x[:,None]</code> is the broadcasting part and give a 3x3 array.
I want cfuns to be vectorized over each row of x.</p>
<pre class="lang-bash prettyprint-override"><code>The numpy.ndarray conversion method __array__() was called on the JAX Tracer object Traced<ShapedArray(int64[2,2])>with<BatchTrace(level=1/0)> with
val = Array([[[ 0, 1],
[-1, 0]],
[[ 0, 1],
</code></pre>
|
<python><jax>
|
2023-02-08 13:06:52
| 1
| 1,237
|
Abolfazl
|
75,386,166
| 17,897,456
|
Why does `discord.utils.sleep_until` sleeps forever?
|
<p>I have defined a periodic task in a Cog that does something every 60 seconds. I want that task to start at the very beginning of a minute, so I added a <code>before_loop</code> decorator to it.</p>
<pre class="lang-py prettyprint-override"><code>import discord
from discord.ext import commands
from discord.ext import tasks
import datetime
class Fanfare(commands.Cog):
def __init__(self, bot):
self.bot = bot
self.fanfare.start()
@tasks.loop(seconds=60)
async def fanfare(self):
print('do something')
@fanfare.before_loop
async def fanfare_before_loop(self):
await self.bot.wait_until_ready()
# Compute the next minute
now = datetime.datetime.now()
next_minute = now.replace(second=0, microsecond=0) + datetime.timedelta(minutes=1)
# Troubleshooting
print(now)
print(next_minute)
# Await until next minute
await discord.utils.sleep_until(next_minute)
print('sleep done')
def setup(bot):
bot.add_cog(Fanfare(bot))
</code></pre>
<p>Here is the result of the troubleshooting prints :</p>
<pre><code>2023-02-08 13:47:56.568279
2023-02-08 13:48:00
</code></pre>
<p>Which seems normal to me.</p>
<p>However, the <code>'sleep done'</code> never appears, even after waiting. It seems like discord is sleeping forever</p>
|
<python><datetime><discord.py><sleep>
|
2023-02-08 12:58:57
| 0
| 710
|
Mateo Vial
|
75,386,165
| 5,777,827
|
Building one file eel python aplication with pyinstaller
|
<p>After building distributable binary with PyInstaller</p>
<pre><code>python.exe -m eel --onefile c:\Users\Darek\PycharmProjects\eelTest\web\ c:\Users\Darek\PycharmProjects\eelTest\eelTest.py
</code></pre>
<p>When I try run app I got this error</p>
<pre><code>Failed to extract C:\Users\Darek\PycharmProjects\eelTest\web\main.html: failed to open target file!
fopen: Invalid argument
</code></pre>
<p><a href="https://i.sstatic.net/5eOTQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5eOTQ.jpg" alt="eel test" /></a></p>
<p>How to fix it?</p>
|
<python><pyinstaller><eel>
|
2023-02-08 12:58:53
| 0
| 375
|
ssnake
|
75,386,117
| 7,800,760
|
Python: json.dumps with ensure_ascii=False and encoding('utf-8') seems to convert string to bytes
|
<p>I am generating a Python dictionary as follows:</p>
<pre><code>placedict = {
"id": geonames.geonames_id,
"info": json.dumps(jsoninfo),
}
</code></pre>
<p>where id is a string and info a valid and readable JSON string:</p>
<pre><code>'{"geonamesurl": "http://geonames.org/310859/kahramanmara\\u015f.html", "searchstring": "Kahramanmara\\u015f", "place": "Kahramanmara\\u015f", "confidence": 1, "typecode": "PPLA", "toponym": "Kahramanmara\\u015f", "geoid": 310859, "continent": "AS", "country": "Turkey", "state": "Kahramanmara\\u015f", "region": "Kahramanmara\\u015f", "lat": "37.5847", "long": "36.92641", "population": 376045, "bbox": {"northeast": [37.66426194452945, 37.02690583904019], "southwest": [37.50514805547055, 36.825904160959816]}, "timezone": "Europe/Istanbul", "wikipedia": "en.wikipedia.org/wiki/Kahramanmara%C5%9F", "hyerlist": ["part-of: Earth GeoID: 6295630 GeoCode: AREA", "part-of: Asia GeoID: 6255147 GeoCode: CONT", "part-of: Turkey GeoID: 298795 GeoCode: PCLI", "part-of: Kahramanmara\\u015f GeoID: 310858 GeoCode: ADM1", "part-of: Kahramanmara\\u015f GeoID: 310859 GeoCode: PPLA"], "childlist": ["Aksu", "Barbaros", "Egemenlik"]}'
</code></pre>
<p>but as you can see while the jsoninfo variable holds valid utf-8 chars, the placedict['info'] chars are not utf-8 encoded but rather escaped.
I therefore tried to change the json.dumps line to:</p>
<pre><code>placedict = {
"id": geonames.geonames_id,
"info": json.dumps(jsoninfo).encode("utf-8"),
}
</code></pre>
<p>or even</p>
<pre><code>placedict = {
"id": geonames.geonames_id,
"info": json.dumps(jsoninfo, ensure_ascii=False).encode("utf-8"),
}
</code></pre>
<p>hoping this would encode the JSON as desired, but I see that after either of these modifications, the 'info" member of the dictionary returns as b'.........' and therefore find a binary string in MongoDB.</p>
<p>I want to store the dictionary with an utf-8 encoded readable JSON string in MongoDB.</p>
<p>Where am I making a mistake?</p>
|
<python><json><mongodb><utf-8>
|
2023-02-08 12:54:37
| 1
| 1,231
|
Robert Alexander
|
75,386,096
| 10,982,755
|
What is the best way to build a Document Management System in python using GCS or AWS S3?
|
<p>I'm building a tool where users in a particular account/tenant can upload images/videos (CREATE/DELETE) and also create/delete folders to organize those images. These images/videos can later be dragged and dropped onto a page. This page will be accessible to everyone in that account. So I have thought of 2 architecture flows but both seem to have trade-offs.</p>
<ol>
<li><p>I thought I can generate signed url for each of the resource available in the document management system and for each resource that is used in the page. This method works if there are less number of images used in a page. What if the user has 30-40 images in a page, the client has to request signed URLs for each of those resource everytime a user loads the page. This increases latency while rendering the page on the client side.</p>
</li>
<li><p>Another architecture is to put all of the uploaded resource in a public bucket (explicitly stating the user that all uploaded resource will be public). The obvious tradeoff is security.</p>
</li>
</ol>
<p>Is there a way where I can securely allow users to have numerous resources? Something like instead of generating a signedURL for the blob itself, would it be possible to generate a signedURL for a path? Example: instead of generating a signed url for <code>/folder1/folder2/blob.png</code> would I be able to generate a signedURL for <code>/folder1/folder2</code> so that the client can request for all the blobs within the folder2 without multiple requests to the server?</p>
<p>What I want to achieve is minimal latency without compromising security.</p>
|
<python><amazon-s3><architecture><google-cloud-storage><system-design>
|
2023-02-08 12:53:19
| 1
| 617
|
Vaibhav
|
75,385,976
| 12,814,680
|
Search values in dataframe by condition on another column
|
<p>I need to get the value in column A for the closest value in column B for each multiple of 'trigger'</p>
<p>for instance, in the dataframe below :</p>
<pre><code>import random
trigger = 100
info2 = {'A': [0]*100,'B': [0]*100}
dfA = pd.DataFrame(info2)
for i in range(1, len(dfA)):
dfA.loc[i,'B'] = i*3.78
dfA.loc[i,'A'] = i*10
dfA
</code></pre>
<p><a href="https://i.sstatic.net/LPP2v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LPP2v.png" alt="enter image description here" /></a></p>
<p>Since the closest value to trigger<em>1 would be 98.28 from row n°26
The closest value to trigger</em>2 would be 200.34 from row n°53
The closest value to trigger*3 would be 298.62 from row n°79</p>
<p>The expected result would be :
result = [260,530,790]</p>
|
<python><pandas><dataframe>
|
2023-02-08 12:43:52
| 4
| 499
|
JK2018
|
75,385,850
| 2,558,671
|
Sort list of dictionaries based on the order given by another list
|
<p>There are a lot of <a href="https://www.google.com/search?q=python%20sort%20list%20of%20dicts%20based%20on%20another%20list%20site:stackoverflow.com" rel="nofollow noreferrer">similar questions</a> on Stack Overflow but not exactly this one.</p>
<p>I need to sort a list of dictionaries based on the values of another list but (unlike all the other questions I found) the second list just gives the order, is not an element of the dictionary.</p>
<p>Let's say I have these lists</p>
<pre><code>a = [{"a": 5}, {"b": 5}, {"j": {}}, {123: "z"}]
b = [8, 4, 4, 3]
</code></pre>
<p>Where <code>b</code> does not contain values of the dictionaries in the list, but gives the order (ascending) to use to sort <code>a</code>, therefore I want the output to be:</p>
<pre><code>[{123: "z"}, {"b": 5}, {"j": {}}, {"a": 5}]
</code></pre>
<p>I tried <code>sorted(zip(b, a)</code> but this gives an error probably because when it finds a tie it tries to sort on the second list</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[497], line 1
----> 1 sorted(zip(b, a))
TypeError: '<' not supported between instances of 'dict' and 'dict'
</code></pre>
<p>In case of ties it's fine to leave the original order</p>
|
<python><sorting>
|
2023-02-08 12:33:22
| 1
| 1,503
|
Marco
|
75,385,554
| 4,358,785
|
Python set ONNX runtime to return tensor instead of numpy array
|
<p>In python I'm loading my predefined model (super-gradients, yolox-s):</p>
<pre><code>onnx_session = onnxrt.InferenceSession("yolox_s_640_640.onnx")
</code></pre>
<p>Then I load some data and run it:</p>
<pre><code>dataset = MyCostumeDataset(args.path, 'val')
val_dataloader = DataLoader(dataset, batch_size=args.bsize)
for inputs in val_dataloader:
onnx_inputs = {onnx_session.get_inputs()[0].name: inputs}
# inputs.shape: torch.Size([4, 3, 640, 640]), i.e., this is a Tensor
raw_predictions = onnx_session.run(None, onnx_inputs)
# this returns a list of numpy arrays:
# type(raw_predictions[0])
# <class 'numpy.ndarray'>
# raw_predictions[0].shape
# (4, 8400, 85)
</code></pre>
<p>So far it is working as it should, <em>except</em> I'd like it to return, by default, a list of Tensors (torch.Tensor) instead of numpy array. I'm new to both ONNX and PyTorch, and I'm feeling like this is something basic that I'm missing here.</p>
<p>How can I get onnx_session to return a list of torch.Tensor, instead of numpy arrays? This will same some overhead in the conversion. Thanks!</p>
|
<python><pytorch><onnx><onnxruntime>
|
2023-02-08 12:03:54
| 1
| 971
|
Ruslan
|
75,385,512
| 872,616
|
How to use a Python generator function in tkinter?
|
<p>Generator functions promise to make some code easier to write. But I don't always understand how to use them.</p>
<p>Let's say I have a Fibonacci generator function <code>fib()</code>, and I want a <code>tkinter</code> application that displays the first result. When I click on a button "Next" it displays the second number, and so on. How can I structure the app to do that?</p>
<p>I probably need to run the generator in a thread. But how do I connect it back to the GUI?</p>
|
<python><tkinter>
|
2023-02-08 12:00:34
| 2
| 5,507
|
Andreas Haferburg
|
75,385,497
| 14,625,546
|
InvalidArgumentError: slice index 1 of dimension 2 out of bounds when training GRU RNN
|
<p>I'm a newbie in the world of recurrent neural networks and I am trying to follow a <a href="https://www.tensorflow.org/tutorials/structured_data/time_series" rel="nofollow noreferrer">TensorFlow tutorial</a>. This tutorial makes forecasting on weather time series data, and trains a LSTM model for it also. I want to make another RNN of type GRU for it to compare its performance to the LSTM model, but when I try to plot it with the <a href="https://www.tensorflow.org/tutorials/structured_data/time_series#3_plot" rel="nofollow noreferrer">plot method of WindowGenerator class</a> defined in this tutorial I get the following error:</p>
<blockquote>
<p>InvalidArgumentError: slice index 1 of dimension 2 out of bounds.
[Op:StridedSlice] name: strided_slice/</p>
</blockquote>
<p>What's the problem and how can I fix it?</p>
<p>The code I wrote:</p>
<pre><code>wide_window.plot(gru_model)
</code></pre>
<p>The GRU model:</p>
<pre><code>gru_model = tf.keras.models.Sequential([
tf.keras.layers.GRU(32, return_sequences=True),
tf.keras.layers.Dense(units=1)
])
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', gru_model(wide_window.example[0]).shape)
</code></pre>
<blockquote>
<p>Input shape: (32, 24, 19)
Output shape: (32, 24, 1)</p>
</blockquote>
<p>After this I train and evaluate it.</p>
<p>And the wide_window definition:</p>
<pre><code>wide_window = WindowGenerator(input_width=24, label_width=24, shift=1, label_columns=['T (degC)'])
</code></pre>
|
<python><tensorflow><keras><recurrent-neural-network>
|
2023-02-08 11:59:24
| 0
| 321
|
Zoltán Orosz
|
75,385,262
| 1,017,373
|
How to add a new column in pandas Dataframe if the string or object value of column 1 is repeated in three consecutive rows
|
<p>Say, I have a dataframe like this,</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'ID': ['p1305', 'p1305', 'p1305', 'p1307', 'p1307', 'p1307', 'p1301', 'p1301', 'p1301', 'p1340', 'p1340', 'p1340','P569','P987','P569']})
</code></pre>
<p>I need to add a column y if the values in ID are the same for three consecutive rows, then add yes in column y. Otherwise, add no.</p>
<p>Here is what I have tried,</p>
<pre><code># create a rolling window of size 3
rolling = df['ID'].rolling(3)
# apply a custom function to the rolling window to check if all values are the same
df['y'] = rolling.apply(lambda x: 'Yes' if all(x == x[0]) else 'No')
</code></pre>
<p>However, the above code is throwing the following error,</p>
<pre><code>DataError: No numeric types to aggregate
</code></pre>
<p>The final desired output would be:</p>
<pre><code> ID y
0 p1305 Yes
1 p1305 Yes
2 p1305 Yes
3 p1307 Yes
4 p1307 Yes
5 p1307 Yes
6 p1301 Yes
7 p1301 Yes
8 p1301 Yes
9 p1340 Yes
10 P1340 Yes
11 P1340 Yes
</code></pre>
<p>Any suggestions or help are much appreciated!
Thanks</p>
|
<python><pandas><dataframe>
|
2023-02-08 11:39:45
| 1
| 2,100
|
ARJ
|
75,385,179
| 15,239,717
|
Django Create Model Instance from an Object
|
<p>I am working on a Django Daily Saving App where Staff User can create Customer Account, and from the list of the Customer Accounts there is link for Deposit where staff user can add customer deposit.
The issue is that after getting the customer id to the customer deposit view, I want to create get the customer details from the ID and create his deposit but anytime I try it I see: <strong>Cannot assign "<django.db.models.fields.related_descriptors.ReverseOneToOneDescriptor object at 0x00000129FD8F4910>": "Deposit.customer" must be a "Profile" instance</strong>
See below my Models:</p>
<pre><code>class Profile(models.Model):
customer = models.OneToOneField(User, on_delete=models.CASCADE, null = True)
surname = models.CharField(max_length=20, null=True)
othernames = models.CharField(max_length=40, null=True)
gender = models.CharField(max_length=6, choices=GENDER, blank=True, null=True)
address = models.CharField(max_length=200, null=True)
phone = models.CharField(max_length=11, null=True)
image = models.ImageField(default='avatar.jpg', blank=False, null=False, upload_to ='profile_images',
)
def __str__(self):
return f'{self.customer.username}-Profile'
class Account(models.Model):
customer = models.OneToOneField(User, on_delete=models.CASCADE, null=True)
account_number = models.CharField(max_length=10, null=True)
date = models.DateTimeField(auto_now_add=True, null=True)
def __str__(self):
return f' {self.customer} - Account No: {self.account_number}'
class Deposit(models.Model):
customer = models.ForeignKey(Profile, on_delete=models.CASCADE, null=True)
acct = models.CharField(max_length=6, null=True)
staff = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
deposit_amount = models.PositiveIntegerField(null=True)
date = models.DateTimeField(auto_now_add=True)
def get_absolute_url(self):
return reverse('create_account', args=[self.id])
def __str__(self):
return f'{self.customer} Deposited {self.deposit_amount} by {self.staff.username}'
</code></pre>
<p>Here are my views:</p>
<pre><code>def create_account(request):
if searchForm.is_valid():
#Value of search form
value = searchForm.cleaned_data['value']
#Filter Customer by Surname, Othernames , Account Number using Q Objects
user_filter = Q(customer__exact = value) | Q(account_number__exact = value)
#Apply the Customer Object Filter
list_customers = Account.objects.filter(user_filter)
else:
list_customers = Account.objects.all()
context = {
'customers':customers,
}
return render(request, 'dashboard/customers.html', context)
def customer_deposit(request, id):
try:
#Check the Customer ID in DB
customer = Account.objects.get(id=id)
#Customer Account
acct = Account.account_number
profile = Profile.objects.get(customer=customer.customer.id)
profile = User.profile
except Account.DoesNotExist:
messages.error(request, 'Customer Does Not Exist')
return redirect('create-customer')
else:
if request.method == 'POST':
#Deposit Form
form = CustomerDepositForm(request.POST or None)
if form.is_valid():
#Get Deposit Details for form variable
amount = form.cleaned_data['deposit_amount']
#Set Minimum Deposit
minimum_deposit = 100
#Check if Customer Deposit is Less than the Minimum Deposit
if amount < minimum_deposit:
messages.error(request, f'N{amount} is less than the Minimum Deposit of N{minimum_deposit}')
else:
#Add Customer Deposit
credit_acct = Deposit.objects.create(customer=profile, acct=acct, staff=user, deposit_amount=amount)
#Save the Customer Deposit
credit_acct.save()
context.update( {
'amount':amount,
'count_accounts':count_accounts,
})
messages.success(request, f'N{amount} Deposited to Account {acct} Successfully.')
return render(request, 'dashboard/deposit_slip.html', context)
else:
form = CustomerDepositForm()
context.update( {
'customer':customer,
})
return render(request, 'dashboard/deposit.html', context)
</code></pre>
<p>Here is my Template code the use of context.</p>
<pre><code>{% for customer in customers %}
<tr>
<td>{{ forloop.counter }}</td>
<td>{{ customer.account_number }}</td>
{% if customer.customer.profile.surname == None %}
<td> <a class="btn btn-danger" href=" {% url 'update-customer' customer.customer.id %} ">Update Customer Details.</a> </td>
{% else %}
<td>{{ customer.customer.profile.surname }} {{ customer.customer.profile.othernames }}</td>
<td>{{ customer.customer.profile.phone }}</td>
<td><a class="btn btn-success btn-sm" href="{% url 'account-statement' customer.customer.id %}">Statement</a></td>
<td><a class="btn btn-danger btn-sm" href="{% url 'dashboard-witdrawal' customer.id %}">Withdraw</a></td>
<th scope="row"><a class="btn btn-success btn-sm" href="{% url 'create-deposit' customer.id %}">Deposit</a></th>
{% endif %}
</tr>
{% endfor %}
</code></pre>
<p>Please ignore any missing form and help on how to make a Profile instance from the Customer Account ID as shown above. thanks</p>
|
<python><django>
|
2023-02-08 11:32:15
| 1
| 323
|
apollos
|
75,384,989
| 1,283,776
|
Can I avoid getting pylint errors from not having __init__.py without disabling rules?
|
<p>This is my project</p>
<pre class="lang-none prettyprint-override"><code>root
├── main.py
└── utils
└── tool.py
</code></pre>
<p>This is my main</p>
<pre><code>from utils.tool import some_func
</code></pre>
<p>It works but I'm getting <code>Pylint(E0611:no-name-in-module)</code>. I'm pretty new to Python so I don't want to break any rules. But is there any way to get rid of this message without disabling the rule or breaking commonly accepted conventions?</p>
<p>I'm asking because I dislike the idea of spraying <code>__init__.py</code> files into all of my project folders.</p>
|
<python><python-import><pylint>
|
2023-02-08 11:16:50
| 0
| 22,194
|
user1283776
|
75,384,958
| 5,305,512
|
Prophet fit RuntimeError: Error during optimization
|
<p>This is how my dataframe looks like:</p>
<p><a href="https://i.sstatic.net/TP28d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TP28d.png" alt="enter image description here" /></a></p>
<p>Then when I try to fit a prophet model, I get this error:</p>
<pre><code>>>> from prophet import Prophet
>>> m = Prophet()
>>> m.fit(df)
16:35:16 - cmdstanpy - INFO - Chain [1] start processing
16:35:16 - cmdstanpy - INFO - Chain [1] done processing
16:35:16 - cmdstanpy - ERROR - Chain [1] error: terminated by signal 6 Unknown error: -6
Optimization terminated abnormally. Falling back to Newton.
16:36:22 - cmdstanpy - INFO - Chain [1] start processing
16:36:22 - cmdstanpy - INFO - Chain [1] done processing
16:36:22 - cmdstanpy - ERROR - Chain [1] error: terminated by signal 6 Unknown error: -6
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
File ~/miniforge3/lib/python3.10/site-packages/prophet/models.py:96, in CmdStanPyBackend.fit(self, stan_init, stan_data, **kwargs)
95 try:
---> 96 self.stan_fit = self.model.optimize(**args)
97 except RuntimeError as e:
98 # Fall back on Newton
File ~/miniforge3/lib/python3.10/site-packages/cmdstanpy/model.py:738, in CmdStanModel.optimize(self, data, seed, inits, output_dir, sig_figs, save_profile, algorithm, init_alpha, tol_obj, tol_rel_obj, tol_grad, tol_rel_grad, tol_param, history_size, iter, save_iterations, require_converged, show_console, refresh, time_fmt, timeout)
737 else:
--> 738 raise RuntimeError(msg)
739 mle = CmdStanMLE(runset)
RuntimeError: Error during optimization! Command '/Users/Admin/miniforge3/lib/python3.10/site-packages/prophet/stan_model/prophet_model.bin random seed=56334 data file=/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/tmpcyyh7d7t/2qlqak12.json init=/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/tmpcyyh7d7t/z3_79y1x.json output file=/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/tmpcyyh7d7t/prophet_modelufn_f82n/prophet_model-20230208163516.csv method=optimize algorithm=lbfgs iter=10000' failed: console log output:
dyld[37030]: Library not loaded: @rpath/libtbb.dylib
Referenced from: <AC271190-0BD7-38FF-AFC9-F18DFE088087> /Users/Admin/miniforge3/lib/python3.10/site-packages/prophet/stan_model/prophet_model.bin
Reason: tried: '/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS@rpath/libtbb.dylib' (no such file), '/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/usr/local/lib/libtbb.dylib' (no such file), '/usr/lib/libtbb.dylib' (no such file, not in dyld cache)
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
Input In [7], in <cell line: 3>()
1 m = Prophet()
----> 2 m.fit(df)
File ~/miniforge3/lib/python3.10/site-packages/prophet/forecaster.py:1181, in Prophet.fit(self, df, **kwargs)
1179 self.params = self.stan_backend.sampling(stan_init, dat, self.mcmc_samples, **kwargs)
1180 else:
-> 1181 self.params = self.stan_backend.fit(stan_init, dat, **kwargs)
1183 self.stan_fit = self.stan_backend.stan_fit
1184 # If no changepoints were requested, replace delta with 0s
File ~/miniforge3/lib/python3.10/site-packages/prophet/models.py:103, in CmdStanPyBackend.fit(self, stan_init, stan_data, **kwargs)
101 logger.warning('Optimization terminated abnormally. Falling back to Newton.')
102 args['algorithm'] = 'Newton'
--> 103 self.stan_fit = self.model.optimize(**args)
104 params = self.stan_to_dict_numpy(
105 self.stan_fit.column_names, self.stan_fit.optimized_params_np)
106 for par in params:
File ~/miniforge3/lib/python3.10/site-packages/cmdstanpy/model.py:738, in CmdStanModel.optimize(self, data, seed, inits, output_dir, sig_figs, save_profile, algorithm, init_alpha, tol_obj, tol_rel_obj, tol_grad, tol_rel_grad, tol_param, history_size, iter, save_iterations, require_converged, show_console, refresh, time_fmt, timeout)
736 get_logger().warning(msg)
737 else:
--> 738 raise RuntimeError(msg)
739 mle = CmdStanMLE(runset)
740 return mle
RuntimeError: Error during optimization! Command '/Users/Admin/miniforge3/lib/python3.10/site-packages/prophet/stan_model/prophet_model.bin random seed=73289 data file=/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/tmpcyyh7d7t/6qjofygo.json init=/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/tmpcyyh7d7t/3wkgsh__.json output file=/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/tmpcyyh7d7t/prophet_modelpxzqvzzp/prophet_model-20230208163622.csv method=optimize algorithm=newton iter=10000' failed: console log output:
dyld[37096]: Library not loaded: @rpath/libtbb.dylib
Referenced from: <AC271190-0BD7-38FF-AFC9-F18DFE088087> /Users/Admin/miniforge3/lib/python3.10/site-packages/prophet/stan_model/prophet_model.bin
Reason: tried: '/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS@rpath/libtbb.dylib' (no such file), '/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/private/var/folders/8f/qf0d4l3j2mn5_vw839wgcj1w0000gn/T/pip-install-woutubot/prophet_98a976c6cbfd4e95b7cb41d7d690c7eb/build/lib.macosx-11.0-arm64-cpython-310/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/usr/local/lib/libtbb.dylib' (no such file), '/usr/lib/libtbb.dylib' (no such file, not in dyld cache)
</code></pre>
<p>Any idea what the problem is? I am on an M1 Mac, if that matters.</p>
|
<python><time-series><facebook-prophet>
|
2023-02-08 11:13:27
| 1
| 3,764
|
Kristada673
|
75,384,957
| 3,381,215
|
Current Python version (3.9.7) is not allowed by the project (^3.11)
|
<p>We have a poetry project with a pyproject.toml file like this:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "daisy"
version = "0.0.2"
description = ""
authors = [""]
[tool.poetry.dependencies]
python = "^3.9"
pandas = "^1.5.2"
DateTime = "^4.9"
names = "^0.3.0"
uuid = "^1.30"
pyyaml = "^6.0"
psycopg2-binary = "^2.9.5"
sqlalchemy = "^2.0.1"
pytest = "^7.2.0"
[tool.poetry.dev-dependencies]
jupyterlab = "^3.5.2"
line_profiler = "^4.0.2"
matplotlib = "^3.6.2"
seaborn = "^0.12.1"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>When I change the file to use Python 3.11 and run <code>poetry update</code> we get the following error:</p>
<pre class="lang-bash prettyprint-override"><code>Current Python version (3.9.7) is not allowed by the project (^3.11).
Please change python executable via the "env use" command.
</code></pre>
<p>I only have one env:</p>
<pre class="lang-bash prettyprint-override"><code>> poetry env list
daisy-Z0c0FuMJ-py3.9 (Activated)
</code></pre>
<p>Strangely this issue does not occur on my Macbook, only on our Linux machine.</p>
|
<python><python-poetry>
|
2023-02-08 11:13:14
| 2
| 1,199
|
Freek
|
75,384,939
| 939,451
|
Loading custom MapBox GL JS tile source generated with Python tornado async RequestHandler
|
<p>I created custom tile source provider with python running tornado web server. On the server I generate on the fly the specific tile and output it as a "image/png" header format. To prevent blocking I did all the server stuff in async mode. Now I see that I get some errors on the server: "Task was destroyed but it is pending!".</p>
<p>To figure out if there is a Python/tornado problem I did test by running a lot of tile requests from the Python code directly (by using the same url as mapbox source). All of the results returned HTTP response result 200 OK, so the python code runs as it should.</p>
<p>In the end I figure out that maybe the mapbox js tile source is not properly define or it has a timeout that is too quick and cancels request before it was successfully finished.</p>
<p>This is the javascript code for adding raster tile source:</p>
<pre><code>mapbox_obj.on('load', function(){
mapbox_obj.addSource('rad-data', {
"type": "raster",
"tiles": ["/tile/{x}/{y}/{z}.png"],
"tileSize": 512
});
mapbox_obj.addLayer({
"id": "rad-data-layer",
"type": "raster",
"source": "rad-data",
"minzoom": 0,
"maxzoom": 20,
'layout': {
'visibility': 'visible'
}
});
});
</code></pre>
<p>The Python code generates the tile on the fly and returns it back as a byte string:</p>
<pre><code>TILE_SIZE = 512
img = Image.new('RGBA', size=(TILE_SIZE, TILE_SIZE), color=(0, 0, 0, 0))
img_b = io.BytesIO()
img.save(img_b, "PNG")
tile = img_b.seek(0)
for line in tile:
self.write(line)
self.set_header("Content-type", "image/png")
</code></pre>
<p>Most of the tiles are usually rendered properly, but some of them sometimes are not and the error is as I mentioned: "Task was destroyed but it is pending!".</p>
<blockquote>
<p>It looks like all the calls for tiles are called at the same time and
maybe that could be the issue for the tornado RequestHandler that
handles the async mysql query and the generation of the tile images.</p>
</blockquote>
<p>Thank you for your help,
Toni</p>
|
<python><tornado><mapbox-gl-js><requesthandler>
|
2023-02-08 11:12:02
| 0
| 361
|
toni
|
75,384,751
| 353,337
|
Use --config-setting in tox
|
<p>In <a href="https://github.com/tox-dev/tox" rel="nofollow noreferrer">tox</a>, I'd like to build my package with a certain <a href="https://github.com/pypa/build" rel="nofollow noreferrer">pypa-build</a> <a href="https://pypa-build.readthedocs.io/en/stable/#python--m-build---config-setting" rel="nofollow noreferrer"><code>--config-setting</code></a>. Any idea how to specify it?</p>
|
<python><tox>
|
2023-02-08 10:55:31
| 0
| 59,565
|
Nico Schlömer
|
75,384,689
| 6,535,324
|
How apply optimization flag for run but not debug for default configurations?
|
<p>I would like to modify the default run (not debug) behavior in Pycharm to include the <code>-O</code> flag, as indicated in this <a href="https://stackoverflow.com/a/75384534/6251742">answer</a>.</p>
<p>The information in this <a href="https://stackoverflow.com/questions/36211994/pycharm-add-o-flag-to-configuration">answer</a> on creating a run configuration is helpful, but I have many files with <code>if __name__ == "__main__"</code> blocks that I need to run locally, and I would like Pycharm to automatically execute them with the <code>-O</code> option during a "run". The debug behavior should remain unchanged and the <code>-O</code> flag shouldn't be set.</p>
|
<python><pycharm>
|
2023-02-08 10:49:18
| 1
| 2,544
|
safex
|
75,384,665
| 10,568,883
|
How to remove dotted box around focused item in QTableView in PyQt5
|
<p>I am writing a simple PyQt5 app, which uses <code>QTableView</code> widget. What I want to do is to remove the dotted box around the focused item in <code>QTableView</code>. For minimal working example I use this code:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, main_window):
main_window.setObjectName("main_window")
main_window.resize(640, 480)
self.main_container = QtWidgets.QWidget(main_window)
self.main_container.setObjectName("main_container")
self.verticalLayout = QtWidgets.QVBoxLayout(self.main_container)
self.verticalLayout.setObjectName("verticalLayout")
self.model = QtGui.QStandardItemModel(0, 2, main_window)
for col1, col2 in (("foo", "bar"), ("fizz", "buzz")):
it_col1 = QtGui.QStandardItem(col1)
it_col2 = QtGui.QStandardItem(col2)
self.model.appendRow([it_col1, it_col2])
self.view = QtWidgets.QTableView(
self.main_container, showGrid=False, selectionBehavior=QtWidgets.QAbstractItemView.SelectRows
)
self.view.setModel(self.model)
self.verticalLayout.addWidget(self.view)
main_window.setCentralWidget(self.main_container)
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
<p>It produces the following result (red arrow shows the box, which I want to remove):</p>
<p><a href="https://i.sstatic.net/VMmr3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VMmr3.png" alt="Undesired result" /></a></p>
<p>Different topics on SO and Qt forums suggest using changing my code as:</p>
<pre class="lang-py prettyprint-override"><code># Previous code
self.view.setModel(self.model)
self.view.setStyleSheet(" QTableView::item:focus { outline: none } ") # also 0px was suggested
# OR
self.view.setModel(self.model)
self.view.setStyleSheet(" QTableView::item:focus { border: none } ") # also 0px was suggested
# OR
self.view.setModel(self.model)
self.view.setFocusPolicy(QtCore.Qt.FocusPolicy.NoFocus)
# Further code
</code></pre>
<p>But none of these approaches worked for me:</p>
<ul>
<li><code>outline: none</code> approach changed literally nothing;</li>
<li><code>border: none</code> not only keeps box, but also fills item with white;</li>
<li><code>setFocusPolicy</code> works the best visually, but makes cells uneditable and generally unresponsive to keyboard events (which is expected, since they now can't be focused and accept events)</li>
</ul>
<p>So my question is: is this possible to somehow remove or, at least, customize this box? If this matters, I'm using <code>PyQt5==5.15.4</code>, <code>PyQt5-Qt5==5.15.2</code> on Windows machine.</p>
|
<python><qt><pyqt><pyqt5><qt5>
|
2023-02-08 10:48:01
| 0
| 499
|
Евгений Крамаров
|
75,384,590
| 18,321,042
|
Why can't my Gitlab pipeline find python packages installed in Dockerfile?
|
<p>My file structure is as follows:</p>
<ul>
<li><code>Dockerfile</code></li>
<li><code>.gitlab-ci.yml</code></li>
</ul>
<p>Here is my <code>Dockerfile</code>:</p>
<pre><code>FROM python:3
RUN apt-get update && apt-get install make
RUN apt-get install -y python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install pygdbmi
RUN pip3 install pyyaml
RUN pip3 install Path
</code></pre>
<p>And here is my <code>.gitlab-ci.yml</code> file:</p>
<pre><code>
test-job:
stage: test
image: runners:test-harness
script:
- cd test-harness
# - pip3 install pygdbmi
# - pip3 install pyyaml
- python3 main.py
artifacts:
untracked: false
when: on_success
expire_in: "30 days"
paths:
- test-harness/script.log
</code></pre>
<p>For some reason the <code>pip3 install</code> in the Dockerfile doesn't seem to be working as I get the error:</p>
<pre><code>python3 main.py
Traceback (most recent call last):
File "/builds/username/test-harness/main.py", line 6, in <module>
from pygdbmi.gdbcontroller import GdbController
ModuleNotFoundError: No module named 'pygdbmi'
</code></pre>
<p>When I uncomment the two commented lines in <code>.gitlab-ci.yml</code>:</p>
<pre><code># - pip3 install pygdbmi
# - pip3 install pyyaml
</code></pre>
<p>It works fine but ideally, I want those 2 packages to be installed in the <code>Dockerfile</code> not the <code>.gitlab-ci.yml</code> pipeline stage</p>
<p>I've tried changing the <code>WORKDIR</code> as well as <code>USER</code> and it doesn't seem to have any effect.</p>
<p>Any ideas/solutions?</p>
|
<python><docker><gitlab>
|
2023-02-08 10:41:18
| 0
| 575
|
Liam
|
75,384,531
| 2,751,394
|
How can I check that my application is shutdown with QFTest?
|
<p>I check a Java application with <strong>QFTest</strong>. I need to prove that the HMI is stopped at Shutdown.
In <strong>QFTest</strong>, I created a <strong>Jython</strong> procédure which try to send a socket to the HMI, if it can't, then it means that the HMI is stopped and then the test is OK. here is the <strong>jython</strong> script:</p>
<pre><code>import threading
import time
rc.setLocal("returnValue", False)
for i in range(50):
time.sleep(0.5)
try:
# here we try to send a socket to HMI
rc.toSUT("client", vars)
except:
# here there was an exception trying to send the socket, the HMI is shutdown: test OK."
rc.setLocal("returnValue", True)
break
</code></pre>
<p>It seems that the <strong>QFTest</strong> <strong>javaagent</strong> used to connect my Java program to <strong>QFTest</strong>, prevents my application to be fully killed. Have you an idea to prove that my HMI is killed in a <strong>QFTest</strong> procedure ?</p>
|
<python><java><jython><javaagents><qf-test>
|
2023-02-08 10:37:01
| 1
| 601
|
Skartt
|
75,384,503
| 12,913,047
|
Arranging data for Heatmap Visualization with Seaborn
|
<p>I am trying to create a heatmap, with the y Axis with the Criteria Column, as seen below, and the x, with all the other columns (Robustness, Stability, etc.) However, most of the tutorials I come across just use 1 column, i.e., the common flights example with prices and years. I basically just want there to be 1 color if it is a 1, and another color if it is a 0.</p>
<p>Any assistance would be great, thanks!</p>
<p>df head snippet</p>
<pre class="lang-none prettyprint-override"><code>Criteria,Robustness,Stability,Flexibility,Resourcefulness,Coordination Capacitiy,Redundancy,Diversity,Foresight Capacity,Independence,Connectivity & Interdependence,Collaboration Capacity,Agility,Adaptability,Self-Organization,Creativity & Innovation,Efficiency,Equity
Ecosystem Monitoring & Protection,0,1,1,0,1,0,0,1,1,0,1,0,1,1,0,1,0
Using local and native material and species ,1,0,1,0,1,0,1,1,0,1,0,1,0,1,0,0,1
Erosion protection,0,1,0,1,0,1,0,1,0,1,1,0,1,0,1,0,1
Protection of wetlands and watersheds,1,0,1,0,1,0,1,1,0,1,0,1,0,1,0,1,1
Availability and accessibility of resources,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,1,1
Reduction of environmental impacts,0,1,0,1,0,1,0,1,0,1,0,1,0,0,1,1,1
Quality of resources,1,0,1,1,1,0,1,0,0,1,0,1,0,1,0,1,0
Biodiversity and wildlife conservation,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,0,1
"Material and resource management (production, consumption, conservation, recycling etc)",1,1,0,1,1,0,1,1,1,0,1,1,0,1,1,0,1
</code></pre>
|
<python><numpy><seaborn>
|
2023-02-08 10:34:36
| 0
| 506
|
JamesArthur
|
75,384,464
| 6,535,324
|
Unexpected PyCharm run vs debug behavior for __debug__
|
<p>I have the following python code:</p>
<pre class="lang-py prettyprint-override"><code>def main():
if __debug__:
print("debug mode")
else:
print("non debug")
if __name__ == '__main__':
main()
</code></pre>
<p>No matter whether I run the file or debug it, it always prints "debug mode". this is not what I would have expected. My debug block is computationally costly, so I Would prefer to only run it on my development machine if I am in debug mode in pycharm (and never in prod).</p>
|
<python><pycharm>
|
2023-02-08 10:31:50
| 1
| 2,544
|
safex
|
75,384,141
| 1,901,071
|
Python Polars find the length of a string in a dataframe
|
<p>I am trying to count the number of letters in a string in Polars.
I could probably just use an apply method and get the <code>len(Name)</code>.
However, I was wondering if there is a polars specific method?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"start_date": ["2020-01-02", "2020-01-03", "2020-01-04", "2020-01-05"],
"Name": ["John", "Joe", "James", "Jörg"]
})
</code></pre>
<p>In Pandas I can use <code>.str.len()</code></p>
<pre><code>>>> df.to_pandas()["Name"].str.len()
0 4
1 3
2 5
3 4
Name: Name, dtype: int64
</code></pre>
<p>But that does not exist in Polars:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(pl.col("Name").str.len())
# AttributeError: 'ExprStringNameSpace' object has no attribute 'len'
</code></pre>
|
<python><string><dataframe><python-polars>
|
2023-02-08 10:04:19
| 1
| 2,946
|
John Smith
|
75,384,124
| 9,099,959
|
How to initialize named tuple in python enum
|
<p>I have a Enum which is used as a column datatype in SqlALchemy. But I need more properties of the Enum to make it accessible for the code's other functionality.</p>
<p>This is what I have created till now:</p>
<pre><code>class ServerHealth(Enum):
"""Status of Server health."""
HealthStatus = namedtuple("HealthStatus", ["name", "low_bound", "high_bound"])
High = HealthStatus(name="High", low_bound=5, high_bound=24)
Fair = HealthStatus(name="Fair", low_bound=3, high_bound=5)
Low = HealthStatus(name="Low", low_bound=0, high_bound=3)
@DynamicClassAttribute
def value(self):
return self.name
</code></pre>
<p>This is used somewhere in sqlalchemy model as well, so I need to keep it the way it doesn't break-</p>
<pre><code>class Server(Base):
server_health = Column(ENUM(ServerHealth), nullable=True)
</code></pre>
<p>I need the "name" of namedtuple as the columns value in DB. As per my understanding the value attribute is used by sqlalchemy, so I have overwritten them and the value works as expected i.e.</p>
<pre><code>>>>ServerHealth.High.value
"High"
</code></pre>
<p>but when I try to access low_bound and high_bound it breaks</p>
<pre><code>>>>ServerHealth.High.low_bound
AttributeError Traceback (most recent call last)
Input In [77], in <cell line: 1>()
----> 1 ServerHealth.High.low_bound
AttributeError: 'ServerHealth' object has no attribute 'low_bound'
</code></pre>
<p>Now what function should I overwrite in order to get that working without breaking db initialization.</p>
|
<python><python-3.x><sqlalchemy><enums>
|
2023-02-08 10:03:17
| 1
| 1,406
|
Akash Kumar
|
75,384,112
| 19,369,310
|
Retrieve last row of data with conditions
|
<p>I have the following large dataset recording the result of a math competition among students in descending order of date: So for example, student 1 comes third in Race 1 while student 3 won Race 2, etc.</p>
<pre><code>Race_ID Date Student_ID Rank
21 1/1/2023 1 3
21 1/1/2023 2 2
21 1/1/2023 3 1
21 1/1/2023 4 4
25 11/9/2022 1 2
25 11/9/2022 2 3
25 11/9/2022 3 1
3 17/4/2022 5 4
3 17/4/2022 2 1
3 17/4/2022 3 2
3 17/4/2022 4 3
14 1/3/2022 1 1
14 1/3/2022 2 2
85 1/1/2021 1 2
85 1/1/2021 2 3
85 1/1/2021 3 1
</code></pre>
<p>And I want to create a new column called <code>Last_win</code> which returns the <code>Race_ID</code> of the last time that student won (i.e. rank number 1). So the outcome should look like</p>
<pre><code>Race_ID Date Student_ID Rank Last_win
21 1/1/2023 1 3 14
21 1/1/2023 2 2 3
21 1/1/2023 3 1 25
21 1/1/2023 4 4 NaN
25 11/9/2022 1 2 14
25 11/9/2022 2 3 3
25 11/9/2022 3 1 85
3 17/4/2022 5 4 NaN
3 17/4/2022 2 1 NaN
3 17/4/2022 3 2 85
3 17/4/2022 4 3 NaN
14 1/3/2022 1 1 NaN
14 1/3/2022 2 2 NaN
85 1/1/2021 1 2 NaN
85 1/1/2021 2 3 NaN
85 1/1/2021 3 1 NaN
</code></pre>
<p>Thank you so much inadvacne.</p>
|
<python><python-3.x><pandas><dataframe>
|
2023-02-08 10:02:31
| 2
| 449
|
Apook
|
75,383,917
| 7,800,760
|
How to use Poetry to install MongoDB database
|
<p>I am writing my first python script which would use the pymongo library to persist data into a MongoDB NOSQL database. This script is running within a Poetry managed virtual environment and therefore pyproject.toml declares the following dependencies:</p>
<pre><code>[tool.poetry.dependencies]
python = "^3.10"
sparqlwrapper = "^2.0.0"
rdflib = "^6.2.0"
pandas = "^1.5.3"
requests = "^2.28.2"
geocoder = "^1.38.1"
geonames-rdf = "^0.2.3"
lxml = "^4.9.2"
pytz = "^2022.7.1"
pymongo = "^4.3.3"
</code></pre>
<p>Is it possible to use Poetry to also ensure the MongoDB server itself is installed as well? Of course I could install MongoDB on my Mac via homebrew manually. Best practice?</p>
|
<python><mongodb><pymongo><python-poetry>
|
2023-02-08 09:47:26
| 1
| 1,231
|
Robert Alexander
|
75,383,908
| 12,814,680
|
dataframe apply lambda function that requires value from row n+1
|
<p>I have a dataframe and geopy to calculate distances between two geo coordinates as follows :</p>
<pre><code>import geopy.distance
distCalcExample = geopy.distance.geodesic((49.18443, -0.36098), (49.184335, -0.361185)).m
r = {'poly':[(49.419453, 0.232884),(49.41956, 0.23269),(49.41956, 0.23261),(49.41953, 0.23255),(49.41946, 0.23247)]}
df=pd.DataFrame(r)
df['dist']=0
df
</code></pre>
<p><a href="https://i.sstatic.net/eWX2s.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eWX2s.png" alt="enter image description here" /></a></p>
<p>I need to calculate the distance between coordinates of rows n and n+1.
I was thinking of using geopy as in distCalcExample, along with apply and a lambda function.
But i have not managed to achieve it. What would be the simplest way to make it?</p>
|
<python><pandas><dataframe>
|
2023-02-08 09:46:35
| 1
| 499
|
JK2018
|
75,383,879
| 1,432,980
|
get certain keys from the dictionary and spread remaining into another dictionary
|
<p>I have a dictionary that looks like this</p>
<pre><code>{
"name": "id",
"data_type": "int",
"min_value": "10",
"max_value": "110"
}
</code></pre>
<p>I want to convert it into a tuple where the first two parameters are values of the first two keys, while the rest is the dictionary</p>
<pre><code>(id, int, {min_value: 10, max_value: 110})
</code></pre>
<p>When I do like this</p>
<pre><code>for item in input:
name = item['name']
del item['name']
data_type = item['data_type']
del item['data_type']
tup = (name, data_type, {**item})
print(tup) # ('id', 'int', {'min_value': 10, 'max_value': 110})
</code></pre>
<p>It works fine, but I wonder if there is a better way to do that?</p>
|
<python><dictionary>
|
2023-02-08 09:44:43
| 1
| 13,485
|
lapots
|
75,383,858
| 2,164,904
|
Regex to replace between second occurance of symbol A and symbol B
|
<p>I have an example string to match:</p>
<p><code>s = 'https://john:ABCDE@api.example.com'</code></p>
<p>I am trying to replace the string <code>ABCDE</code> between the 2nd colon and the first occurrance of <code>@</code>. So my desired output is:</p>
<p><code>s_out = 'https://john:REPLACED@api.example.com'</code></p>
<p>My current code is:</p>
<pre><code>import re
s_out = re.sub(r":*(.+)@api.example.com", 'REPLACED', s)
</code></pre>
<p>But i am unable to replace this currently.</p>
|
<python><regex><python-re>
|
2023-02-08 09:42:41
| 1
| 1,385
|
John Tan
|
75,383,742
| 31,130
|
What is the type of a Logger function like Logger.info?
|
<p>How do I type annotate a Logger function like Logger.info? Using reveal_type returns this:</p>
<pre><code>Revealed type is "def (msg: builtins.object, *args: builtins.object, *, exc_info: Union[None, builtins.bool, Tuple[Type[builtins.BaseException], builtins.BaseException, Union[types.TracebackType, None]], Tuple[None, None, None], builtins.BaseException] =, stack_info: builtins.bool =, stacklevel: builtins.int =, extra: Union[typing.Mapping[builtins.str, builtins.object], None] =)"
</code></pre>
<p>There must be a more concise type for this, how do I go about finding it?</p>
|
<python><type-hinting><mypy>
|
2023-02-08 09:34:31
| 1
| 2,263
|
nagul
|
75,383,700
| 14,494,483
|
Streamlit dynamic UI to generate dynamic input widgets depending on value from a different input widget
|
<p>I want to open this post as I can't find anything on the official documentation from streamlit or any resources that mentioned how to do this. After some trial and error I have figured out a way, and will post the answer below. This is a function that in R shiny is called dynamic UI, here's the question.</p>
<p>How to generate dynamic input widgets depending on the value from a different input widget? For example see below picture, the numbers of <code>text_input</code> called Product Code <code>i</code> depends on the value from the <code>number_input</code> called <code>Number of Products</code>. So if there are x number of products, there will be x number of <code>text_input</code> generated dynamically. Moreover, the value inside the generated <code>text_input</code> can be extracted as well.</p>
<p><a href="https://i.sstatic.net/WkSli.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WkSli.png" alt="enter image description here" /></a></p>
|
<python><streamlit><variable-variables><dynamic-ui>
|
2023-02-08 09:31:37
| 2
| 474
|
Subaru Spirit
|
75,383,587
| 2,119,336
|
how to manage scrollbar with not visible items in PySimpleGUI
|
<p>i have developed a PySimpleGUI application and i must show a list of items depending from the number of rows readed from a file.
I know that is not possible to create components dinamically in PySimpleGUI, so i've defined a max number of components and set they as not visible until the file is readed.</p>
<p>After the reading, i set the desired number of rows as visible, but the scrollbar of the Column container remains unusable.</p>
<p><em>Initial situation:</em>
<a href="https://i.sstatic.net/kgxM4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kgxM4.png" alt="enter image description here" /></a></p>
<p><em>After the reading:</em>
<a href="https://i.sstatic.net/3UDIv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3UDIv.png" alt="enter image description here" /></a></p>
<p>I also try to leave an item of the row always visibile, and in this case scrollbar works well:</p>
<p><em>Initial situation with index always visible:</em>
<a href="https://i.sstatic.net/1rYQI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1rYQI.png" alt="enter image description here" /></a></p>
<p><em>After the reading with index always visible:</em>
<a href="https://i.sstatic.net/1ToZ4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1ToZ4.png" alt="enter image description here" /></a></p>
<p><em>Code:</em></p>
<pre><code>def getLayoutFramePars():
sub_layout = []
for node_index in range(30):
line = addParLine(node_index)
sub_layout.append(line)
layout = [
[
sg.Col(
layout=sub_layout,
background_color=Blue4,
expand_x=True,
expand_y=True,
)
]
]
return layout
def addParLine(index):
text_index = str(index)
if index < 10:
text_index = "0" + text_index
KEY_LABEL_INDEX = PREFIX_PAR_LABEL_INDEX + text_index
KEY_LABEL_NAME = PREFIX_PAR_LABEL_NAME + text_index
KEY_LABEL_VALUE = PREFIX_PAR_LABEL_VALUE + text_index
KEY_BUTTON_READ = PREFIX_PAR_BUTTON_READ_VALUE + text_index
KEY_BUTTON_SEND = PREFIX_PAR_BUTTON_SEND_VALUE + text_index
layout = [
sg.Text(
key=KEY_LABEL_INDEX,
text=text_index,
size=LABEL_SIZE_05,
justification=ALIGN_CENTER,
visible=False,
pad=5,
),
sg.Text(
'',
key=KEY_LABEL_NAME,
size=LABEL_SIZE_20,
relief=sg.RELIEF_SUNKEN,
justification=ALIGN_CENTER,
visible=False,
pad=5,
),
sg.InputText(
'',
key=KEY_LABEL_VALUE,
size=LABEL_SIZE_30,
justification=ALIGN_LEFT,
text_color=Blue2,
background_color=LightGrey,
enable_events=True,
visible=False,
pad=5,
),
sg.Button(
key=KEY_BUTTON_READ,
button_text='Leggi',
size=BUTTON_SIZE_14,
button_color=BUTTON_COLOR_BLUE,
visible=False,
pad=5,
),
sg.Button(
key=KEY_BUTTON_SEND,
button_text='Invia',
size=BUTTON_SIZE_14,
button_color=BUTTON_COLOR_BLUE,
visible=False,
pad=5,
),
]
return layout
def initLayoutBody():
layout = [
[
sg.Text(
text="##",
size=LABEL_SIZE_05,
text_color=DarkOrange,
justification=ALIGN_CENTER,
pad=5,
),
sg.Text(
text="Nome",
size=LABEL_SIZE_20,
text_color=DarkOrange,
justification=ALIGN_CENTER,
pad=5,
),
sg.Text(
text="Valore",
size=LABEL_SIZE_30,
text_color=DarkOrange,
justification=ALIGN_CENTER,
pad=5,
),
],
[
sg.Column(
key=KEY_FRAME_PARS,
size=(750, 300),
layout=getLayoutFramePars(),
element_justification=ALIGN_LEFT,
scrollable=True,
vertical_scroll_only=True,
background_color=DarkGreen,
),
]
]
return layout
</code></pre>
<p>Could someone help me to understand what i do wrong and how to solve this problem?</p>
<p>Thanks</p>
|
<python><layout><scrollbar><pysimplegui>
|
2023-02-08 09:22:10
| 1
| 778
|
Tirrel
|
75,383,559
| 10,967,961
|
Colors problem in plotting figure with matplotlib in python
|
<p>I am trying to plot a bipartite graph to highlight the differences between two rankings. I am doing so by connecting the city in the left list to the same city on the right list with a colored arrow. The color should be proportional to the difference in rankings.</p>
<p>Here is a MWE:</p>
<pre><code>import matplotlib.pyplot as plt
# Sample data
cities = ['City A', 'City B', 'City C', 'City D', 'City E',
'City F', 'City G', 'City H', 'City I', 'City J']
genepy_rank = [3, 1, 4, 2, 5, 8, 7, 10, 9, 6]
fitness_rank = [7, 9, 2, 5, 4, 6, 3, 1, 8, 10]
# Calculate the difference in ranking
diff_rank = [genepy - fitness for genepy, fitness in zip(genepy_rank, fitness_rank)]
# Plot the graph
fig, ax = plt.subplots()
for i, city in enumerate(cities):
x = [genepy_rank[i], fitness_rank[i]]
y = [i, i]
color = diff_rank[i]
ax.plot(x, y, color=color, marker='o')
ax.annotate(city, (x[0], y[0]), xytext=(-20, 20),
textcoords='offset points', ha='right', va='bottom',
bbox=dict(boxstyle='round,pad=0.5', fc='yellow', alpha=0.5),
arrowprops=dict(arrowstyle = '->', connectionstyle='arc3,rad=0'))
ax.set_xlim(0, 11)
ax.set_ylim(-1, 11)
ax.set_yticks([i for i in range(len(cities))])
ax.set_yticklabels(cities)
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.tick_right()
ax.yaxis.set_label_position("right")
plt.show()
</code></pre>
<p>The problem is that this exits with an error: ValueError: -4 is not a valid value for color which I understand. Is there a way to determine a color grid and assign a color to the arrows based on diff_rank?</p>
<p>Thank you</p>
|
<python><matplotlib><networkx>
|
2023-02-08 09:20:17
| 1
| 653
|
Lusian
|
75,383,459
| 12,814,680
|
Set values in dataframe A by iterating from values on dataframe B
|
<p>Dataframe A is similar to this :</p>
<pre><code>info2 = {'speed': [None]*80}
dfA = pd.DataFrame(info2)
dfA
</code></pre>
<p><a href="https://i.sstatic.net/qZAwr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qZAwr.png" alt="enter image description here" /></a></p>
<p>Dataframe B is similar to this :</p>
<pre><code>info={"IndexSpeed":[7,16,44,56,80],"speed":[25,50,25,50,90]}
dfB = pd.DataFrame(info)
dfB
</code></pre>
<p><a href="https://i.sstatic.net/VHlr1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VHlr1.png" alt="enter image description here" /></a></p>
<p>I need to set the values in dfA['speed'] by using the values in dfB.
For instance, for each row in dfA of index <=7, speed should be set at 25.
for each row of index between 8 and 16, speed should be set at 50. and so on untill all 80 rows are set.</p>
<p>What would be the optimal way to do this?</p>
|
<python><pandas><dataframe>
|
2023-02-08 09:11:03
| 2
| 499
|
JK2018
|
75,383,260
| 20,740,043
|
Export dataset haveing more than 1048576 rows, in multiple sheets of single excel file, in Python
|
<p>I have a dataset given as such:</p>
<pre><code>import numpy as np
import pandas as pd
## Create an array
data = np.arange(7152123)
print(data)
## Dataframe
df = pd.DataFrame(data)
print("\n an df = \n", df)
## Load the data in excel sheet
df.to_excel('df.xlsx', index=False,header=False)
</code></pre>
<p>I get an error:</p>
<blockquote>
<p>f"This sheet is too large! Your sheet size is: {num_rows}, {num_cols}
" ValueError: This sheet is too large! Your sheet size is: 7152123, 1
Max sheet size is: 1048576, 16384</p>
</blockquote>
<p>The error comes because a single sheet of excel allows a maximum of 1048576 rows.</p>
<p>However, the dataset that I need to export has 7152123 rows</p>
<p>Can somebody please let me know how do I export such a huge dataset in multiple sheets of single excel file in Python?</p>
|
<python><python-3.x><excel><pandas><dataframe>
|
2023-02-08 08:49:30
| 3
| 439
|
NN_Developer
|
75,383,244
| 9,418,052
|
Navigation Timeout Exceeded: 30000 ms exceeded while using Nbconvert
|
<p>I am using the following command : <strong>jupyter nbconvert --to webpdf --allow-chromium-download PNM.ipynb</strong>, to convert Jupiter notebook to a pdf file.</p>
<p>However, there is a timeout error. According to the documentation: "The timeout option can also be set to None or -1 to remove any restriction on execution time.". But, I am unaware as to how to use it.</p>
|
<python><nbconvert>
|
2023-02-08 08:48:06
| 1
| 404
|
Ankit Tripathi
|
75,383,045
| 2,919,052
|
Emit python list in Signal to Qml ui
|
<p>I am trying to communicate a python script to a qml ui using signal/slot and, while it works for some types (str) it seems to not be working when I try emitting a list:</p>
<p><strong>Python:</strong></p>
<pre><code>from PySide6.QtCore import QObject, Signal, Slot
from PySide6.QtGui import QGuiApplication
from PySide6.QtQml import QQmlApplicationEngine
import time, sys
class PythonSignalEmitter(QObject):
getListTest = Signal(list)
@Slot()
def callList(self):
print("HELLO from python ")
test = ["a", "b", "c"]
self.getListTest.emit(test)
if __name__ == '__main__':
app = QGuiApplication([])
engine = QQmlApplicationEngine()
signal_emitter = PythonSignalEmitter()
engine.rootContext().setContextProperty("signalEmitter", signal_emitter)
engine.load("main.qml")
sys.exit(app.exec())
</code></pre>
<p><strong>Fragment of the main.qml file:</strong></p>
<pre><code>Connections {
target: signalEmitter
function onSignal(res) {
console.log(`------------`);
console.log(`qml list is ${res}`);
console.log(`------------`);
}
}
</code></pre>
<p>The output of this just gives:</p>
<pre><code>HELLO from python
</code></pre>
<p>So the app runs with no problem, and after a click on a specified component, the slot is called, but the log in the QLM side is not even printed, seems like the signal is not even emitted.</p>
|
<python><qml><pyside6>
|
2023-02-08 08:28:38
| 1
| 5,778
|
codeKiller
|
75,383,010
| 1,619,432
|
Python D-Bus: Subscribe to signal and read property with dasbus
|
<p>How to monitor and read Ubuntu's "Night Light" status via D-Bus using Python with <a href="https://dasbus.readthedocs.io/en/latest/" rel="nofollow noreferrer">dasbus</a>? I can't figure out the API docs on how to read a property or subscribe to a signal.<br />
Likely candidates:</p>
<ul>
<li><a href="https://dasbus.readthedocs.io/en/latest/api/dasbus.client.property.html#dasbus.client.property.PropertyProxy.get" rel="nofollow noreferrer"><code>dasbus.client.property.get()</code></a></li>
<li><a href="https://dasbus.readthedocs.io/en/latest/api/dasbus.client.handler.html?highlight=subscribe#dasbus.client.handler.GLibClient.subscribe_signal" rel="nofollow noreferrer"><code>GLibClient.subscribe()</code></a></li>
</ul>
<p>The following is adapted from the basic examples and prints the interfaces and properties/signals of the object:</p>
<pre><code>#!/usr/bin/env python3
from dasbus.connection import SessionMessageBus
bus = SessionMessageBus()
# dasbus.client.proxy.ObjectProxy
proxy = bus.get_proxy(
"org.gnome.SettingsDaemon.Color", # bus name
"/org/gnome/SettingsDaemon/Color", # object path
)
print(proxy.Introspect())
# read and print properties "NightLightActive" and "Temperature" from interface "org.gnome.SettingsDaemon.Color" in (callback) function
# subscribe to signal "PropertiesChanged" in interface "org.freedesktop.DBus.Properties" / register callback function
</code></pre>
<hr>
Resources
<ul>
<li><a href="https://pypi.org/project/dbus-python/" rel="nofollow noreferrer">https://pypi.org/project/dbus-python/</a></li>
<li><a href="https://stackoverflow.com/questions/58639602/what-is-recommended-to-use-pydbus-or-dbus-python-and-what-are-the-differences">What is recommended to use pydbus or dbus-python and what are the differences?</a></li>
<li><a href="https://wiki.python.org/moin/DbusExamples" rel="nofollow noreferrer">https://wiki.python.org/moin/DbusExamples</a></li>
<li><a href="https://stackoverflow.com/questions/54113389/migration-from-dbus-to-gdbus-in-python-3">Migration from dbus to GDbus in Python 3</a></li>
</ul>
|
<python><linux><dbus>
|
2023-02-08 08:25:25
| 1
| 6,500
|
handle
|
75,382,794
| 10,062,025
|
Why is request returning must provide query string when scraped?
|
<p>I am trying to scrape <a href="https://www.sayurbox.com/category/vegetables-1-a0d03d59?selectedCategoryType=ops&touch_point=screen_CATEGORY_sembako-1-e6a33b51&section_source=shop_list_slider_navigation_category_vegetables-1-a0d03d59" rel="nofollow noreferrer">https://www.sayurbox.com/category/vegetables-1-a0d03d59?selectedCategoryType=ops&touch_point=screen_CATEGORY_sembako-1-e6a33b51&section_source=shop_list_slider_navigation_category_vegetables-1-a0d03d59</a></p>
<p>Here's my current code:</p>
<pre><code>dcID="RGVsaXZlcnlDb25maWc6VGh1cnNkYXksIDA5IEZlYnJ1YXJ5IDIwMjN8SkswMXxTRDI5fGZhbHNl"
slugcat="vegetables-1-a0d03d59"
url="https://www.sayurbox.com/graphql/v1?deduplicate=1"
payload={"operationName":"getCartItemCount",
"variables":{"deliveryConfigId":DCId},
"query":"query getCartItemCount($deliveryConfigId: ID!) {\n cart(deliveryConfigId: $deliveryConfigId) {\n id\n count\n __typename\n }\n}"},{"operationName":"getProducts",
"variables":{"deliveryConfigId":DCId,
"sortBy":"related_product",
"isInstantDelivery":False,
"slug":slugcat,
"first":12,
"abTestFeatures":[]},
"query":"query getProducts($deliveryConfigId: ID!, $sortBy: CatalogueSortType!, $slug: String!, $after: String, $first: Int, $isInstantDelivery: Boolean, $abTestFeatures: [String!]) {\n productsByCategoryOrSubcategoryAndDeliveryConfig(\n deliveryConfigId: $deliveryConfigId\n sortBy: $sortBy\n slug: $slug\n after: $after\n first: $first\n isInstantDelivery: $isInstantDelivery\n abTestFeatures: $abTestFeatures\n ) {\n edges {\n node {\n ...ProductInfoFragment\n __typename\n }\n __typename\n }\n pageInfo {\n hasNextPage\n endCursor\n __typename\n }\n productBuilder\n __typename\n }\n}\n\nfragment ProductInfoFragment on Product {\n id\n uuid\n deliveryConfigId\n displayName\n priceRanges\n priceMin\n priceMax\n actualPriceMin\n actualPriceMax\n slug\n label\n isInstant\n isInstantOnly\n nextDayAvailability\n heroImage\n promo\n discount\n isDiscount\n variantType\n imageIds\n isStockAvailable\n defaultVariantSkuCode\n quantitySoldFormatted\n promotion {\n quota\n isShown\n campaignId\n __typename\n }\n productVariants {\n productVariant {\n id\n skuCode\n variantName\n maxQty\n isDiscount\n stockAvailable\n promotion {\n quota\n campaignId\n isShown\n __typename\n }\n __typename\n }\n pageInfo {\n hasPreviousPage\n hasNextPage\n __typename\n }\n __typename\n }\n __typename\n}"}
response=requests.get(url,headers=headers,json=payload)
response.json()
</code></pre>
<p>The response returns</p>
<pre><code>[{'errors': [{'message': 'Must provide query string.',
'extensions': {'timestamp': 1675842901472}}]},
{'errors': [{'message': 'Must provide query string.',
'extensions': {'timestamp': 1675842901472}}]}]
</code></pre>
<p>I am not sure where I went wrong, as I've copied the payload and headers exactly. Can someone help?</p>
|
<python><web-scraping><python-requests>
|
2023-02-08 08:00:10
| 2
| 333
|
Hal
|
75,382,768
| 1,367,705
|
Python with -c flag, input from user and if/else inside - shows syntax error
|
<p>I need a simple one-liner in Python: ask user for choice and then print a message depending on what user chose. Here's my attempt:</p>
<p><code>python3 -c "ans=input('Y/N?'); if ans == 'Y': print('YES') else: print('NO');"</code></p>
<p>And errors of course:</p>
<pre><code> File "<string>", line 1
ans=input('Y/N?'); if ans == 'Y': print('YES') else: print('NO');
^^
SyntaxError: invalid syntax
</code></pre>
<p>Is it possible to do this in one-liner? It must be one-liner, I can't use a script here. Thanks.</p>
|
<python>
|
2023-02-08 07:57:09
| 3
| 2,620
|
mazix
|
75,382,639
| 12,478,660
|
Join and get queryset accordring to many to many field
|
<p>I have MyUser model with ForeignKey and ManyToMany related fields city and skills:</p>
<pre><code>class MyUser(AbstractBaseUser):
email = models.EmailField()
skills = models.ManyToManyField('jobs.Skill')
class Skill(models.Model):
name = models.CharField()
</code></pre>
<p>suppose this my table data in database:</p>
<pre><code>{'email': 'some@email.com', 'skills': ['Python', 'Java']},
{'email': 'another@email.com', 'skills': ['JavaScript', 'C#', 'Python']}
</code></pre>
<pre><code>>>> MyUser.objects.all().count()
</code></pre>
<p>output is 2 but I want:</p>
<pre><code>MyUser.objects. ..........
</code></pre>
<p>answer to 5 my like following data:</p>
<pre><code>{'email': 'some@email.com', 'city': 'London', 'skills': 'Python'},
{'email': 'some@email.com', 'city': 'London', 'skills': 'Java'},
{'email': 'another@email.com', 'city': 'Berlin', 'skills': 'JavaScript'},
{'email': 'another@email.com', 'city': 'Berlin', 'skills': 'C#'},
{'email': 'another@email.com', 'city': 'Berlin', 'skills': 'Python'},
</code></pre>
|
<python><django><django-models><django-views><django-queryset>
|
2023-02-08 07:41:27
| 2
| 618
|
ShiBil PK
|
75,382,340
| 19,950,360
|
python pandas read_excel error "Value must be either numerical or a string containing a wild card"
|
<p>I dont know why this error occurs.</p>
<pre><code>pd.read_excel('data/A.xlsx', usecols=["B", "C"])
</code></pre>
<p>Then I get this error:</p>
<pre><code>"Value must be either numerical or a string containing a wild card"
</code></pre>
<p>So i change my code use nrows all data</p>
<pre><code>pd.read_excel('data/A.xlsx', usecols=["B","C"], nrows=172033)
</code></pre>
<p>Then there is no error and a dataframe is created.</p>
<p>my excel file has 172034 rows, 1st is column name.</p>
|
<python><excel><pandas>
|
2023-02-08 07:02:00
| 4
| 315
|
lima
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.