| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,185,737
| 11,277,108
|
"OSError: [Errno 48] Address already in use" when checking IP address in a loop
|
<p>I'm using the <a href="https://pypi.org/project/WhatIsMyIP/" rel="nofollow noreferrer"><code>whatismyip</code></a> package to check my IP address before I call an IP sensitive API. The code runs in a loop and I'm finding after a few iterations I get the following error:</p>
<pre><code>OSError: [Errno 48] Address already in use
</code></pre>
<p>From some searching I understand this is because <code>whatismyip</code> is trying to use its designated port before it's been freed up.</p>
<p>I've tried creating an MRE to replicate the error:</p>
<pre><code>import time

import whatismyip

EXPECTED_IP = "123.123.123.123"

while True:
    current_ip_address = whatismyip.whatismyipv4()
    if current_ip_address != EXPECTED_IP:
        raise Exception
    time.sleep(0.1)  # simulate accessing IP sensitive API for info A

    current_ip_address = whatismyip.whatismyipv4()
    if current_ip_address != EXPECTED_IP:
        raise Exception
    time.sleep(0.1)  # simulate accessing IP sensitive API for info B

    current_ip_address = whatismyip.whatismyipv4()
    if current_ip_address != EXPECTED_IP:
        raise Exception
    time.sleep(0.1)  # simulate accessing IP sensitive API for info C
</code></pre>
<p>However, annoyingly it only spits out:</p>
<pre><code>ResourceWarning: unclosed <socket.socket fd=73, family=AddressFamily.AF_INET, type=SocketKind.SOCK_DGRAM, proto=0, laddr=('0.0.0.0', 54320)>
</code></pre>
<p>This looks related to my error but doesn't stop the code.</p>
<p>In the real code I've tried catching the <code>OSError</code> and retrying <code>whatismyip.whatismyipv4()</code> but this just generates the same error.</p>
<p>The error occurs on the last line of this block of code that I've extracted from the package:</p>
<pre><code>SOURCE_IP = '0.0.0.0'
SOURCE_PORT = 54320
sockObj = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sockObj.settimeout(2)
sockObj.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sockObj.bind((SOURCE_IP, SOURCE_PORT))
</code></pre>
<p>In searching for solutions, most answers, e.g. <a href="https://stackoverflow.com/a/39557155/11277108">this one</a>, seem to recommend finding the process that uses the port and killing it. However, these don't provide any code I could include in my script. In addition, I came across <a href="https://stackoverflow.com/a/23224980/11277108">this post</a>, which says you can't get the OS to close a socket. It then recommends the same call that's already shown above:</p>
<pre><code>sockObj.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
</code></pre>
<p>I realise this isn't a perfect post because I can't replicate the error in an MRE but would anyone have any ideas how to solve this?</p>
<p>I'm using Python 3.10.8 on macOS 13.2.1.</p>
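Not a confirmed fix, but one workaround worth sketching: if the port is only transiently busy, wrap the call in a retry with exponential backoff. This is my own sketch, not from the package; `get_ip` stands in for `whatismyip.whatismyipv4`, and errno 48 is `EADDRINUSE` on macOS.

```python
import time

def with_retries(get_ip, attempts=5, delay=0.5):
    """Retry a port-binding call, backing off while the port frees up."""
    for attempt in range(attempts):
        try:
            return get_ip()
        except OSError as e:
            if e.errno != 48:  # only retry "Address already in use"
                raise
            time.sleep(delay * (2 ** attempt))
    raise OSError(48, "Address still in use after retries")
```

Usage would be `current_ip_address = with_retries(whatismyip.whatismyipv4)` in place of the direct call.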
|
<python><sockets>
|
2023-05-05 20:28:16
| 1
| 1,121
|
Jossy
|
76,185,575
| 6,534,818
|
Transformers: How to override subclass
|
<p>I need to override a subclass but I am not sure how to do it:</p>
<pre><code>RuntimeError: Passing `optimizers` is not allowed if Fairscale, Deepspeed or PyTorch FSDP is enabled.
You should subclass `Trainer` and override the `create_optimizer_and_scheduler` method.
</code></pre>
<p>Here is a limited example making use of deepspeed:</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments, DataCollatorWithPadding
from transformers.optimization import Adafactor, AdafactorSchedule
import torch

# model
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,
    torch_dtype=torch.float16,
)

# adafactor
optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
lr_scheduler = AdafactorSchedule(optimizer)

# training args
training_args = TrainingArguments(
    output_dir='./xd',
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32*2,
    learning_rate=1e-4,
    warmup_steps=500,
    tf32=True,
    bf16=False,
    fp16=True,
    dataloader_num_workers=16,
    gradient_accumulation_steps=1,
    evaluation_strategy='steps',
    eval_steps=2500,
    save_steps=1000,
    deepspeed=r'ds_config_zero3.json',
    disable_tqdm=False,
    weight_decay=0.01,
    logging_dir='./xd_logs',
    logging_steps=1000,
)

# init trainer
trainer = Trainer(
    model=model,
    args=training_args,
    optimizers=(optimizer, lr_scheduler),  # want to use this
)
</code></pre>
<p>It wants me to override this:</p>
<pre><code>    def create_optimizer_and_scheduler(self, num_training_steps: int):
        """
        Setup the optimizer and the learning rate scheduler.

        We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the
        Trainer's init through `optimizers`, or subclass and override this method (or `create_optimizer` and/or
        `create_scheduler`) in a subclass.
        """
        self.create_optimizer()
        if IS_SAGEMAKER_MP_POST_1_10 and smp.state.cfg.fp16:
            # If smp &gt;= 1.10 and fp16 is enabled, we unwrap the optimizer
            optimizer = self.optimizer.optimizer
        else:
            optimizer = self.optimizer
        self.create_scheduler(num_training_steps=num_training_steps, optimizer=optimizer)
</code></pre>
<p>But I don't know what that would look like, considering I want to pass this into the <code>Trainer</code>:</p>
<pre><code> optimizers = (optimizer, lr_scheduler), # want to use this
</code></pre>
|
<python><huggingface-transformers>
|
2023-05-05 19:59:31
| 0
| 1,859
|
John Stud
|
76,185,570
| 1,814,881
|
Efficiently performing elementwise operation using coordinates on scipy sparse array
|
<p>I'm trying to figure out how to efficiently do the following operation on a scipy sparse array (csc format):</p>
<p>Elementwise pseudocode:</p>
<pre><code>try:
    r = M[i,j] / (V[i] + V[j])
    if r.isfinite():
        M[i,j] = r
    else:
        # Leave at old value
        pass  # e.g. 1e200/1e-200 -&gt; inf
except ZeroDivisionError:
    # Leave at old value
    pass  # e.g. 1e200/0 -&gt; ZeroDivisionError
</code></pre>
<p>Note that this leaves zero elements as 0, and so does not increase the density of the input matrix <code>M</code>.</p>
<p>There's a straightforward approach for a dense matrix:</p>
<pre class="lang-py prettyprint-override"><code># V is a ndarray of shape (N), all values finite and >= 0.0
# M is a ndarray of shape (N,N), all values finite
rows_plus_cols = V[:, np.newaxis] + V[np.newaxis, :]
M = np.divide(M, rows_plus_cols, out=M, where=(rows_plus_cols != 0))
</code></pre>
<p>(Note that this example does not handle underflow properly in all cases.)</p>
<p>Unfortunately, this approach doesn't appear to work for a sparse matrix, because the intermediate calculation of <code>rows_plus_cols</code> results in a dense matrix of size <code>(N,N)</code>.</p>
<p>A typical input M might be 100k by 100k, with ~1M nonzero entries. Decidedly in the regime where materializing a full dense matrix is ill-advised. For testing purposes, the following is 'close enough' to typical inputs:</p>
<pre class="lang-py prettyprint-override"><code>n = 100000
d = 10
# a.k.a. "mostly nonzero, but not always"
V = np.maximum((np.random.normal(size=n)+2), 0)
M = scipy.sparse.csc_array((np.random.normal(size=n*d), np.random.randint(n, size=(2, n*d))), shape=(n,n))
M.sum_duplicates()
M.prune()
M.sort_indices()
</code></pre>
<p>Minor problem: scipy sparse arrays don't appear to allow elementwise division with a <code>where</code> clause. Can be worked around, but that's not the main issue. I know that you can access the underlying datapoints of a scipy csc array with <code>M.data</code>; I do not see how to do an elementwise calculation <em>that is a function of the coordinates</em> without materializing a dense matrix.</p>
<p>Suggestions?</p>
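For what it's worth, a sketch of the coordinate-based idea being asked about (my own names, smaller dimensions for illustration): converting to COO format exposes the `row` and `col` index arrays of the nonzero entries, so the denominator can be computed only at those coordinates and never materialized densely.

```python
import numpy as np
import scipy.sparse

rng = np.random.default_rng(0)
n, d = 1000, 10
V = np.maximum(rng.normal(size=n) + 2, 0)
M = scipy.sparse.csc_array(
    (rng.normal(size=n * d), rng.integers(n, size=(2, n * d))), shape=(n, n)
)

coo = M.tocoo()                  # exposes .row, .col, .data
denom = V[coo.row] + V[coo.col]  # computed only at nonzero coordinates
with np.errstate(divide="ignore", invalid="ignore"):
    r = coo.data / denom
ok = np.isfinite(r)              # inf/nan (incl. division by zero) keep old value
coo.data[ok] = r[ok]
M2 = coo.tocsc()
```

The intermediate arrays are all of length `nnz`, so memory stays proportional to the number of stored entries rather than `n**2`.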
|
<python><scipy><sparse-matrix>
|
2023-05-05 19:58:21
| 1
| 1,459
|
TLW
|
76,185,423
| 3,507,584
|
Applying pandas styler to format dataframe header in latex
|
<p>I am trying to get the <code>latex</code> file for the following table/dataframe:</p>
<pre><code>df = pd.DataFrame(np.random.rand(10,2)*5,
                  index=pd.date_range(start="2021-01-01", periods=10),
                  columns=["Tokyo", "Beijing"])
df.index.names = ['date']
df.reset_index(inplace=True)
</code></pre>
<p>For styling the headers, I try to use <code>apply_index</code> as shown in the <a href="https://pandas.pydata.org/docs/dev/reference/api/pandas.io.formats.style.Styler.apply_index.html" rel="nofollow noreferrer">documentation</a> as below:</p>
<pre><code>def header_custom(v):
    return f"background-color: blue; color:black; font-weight:bold;"

styler = df.style.apply_index(header_custom, axis="columns")

with open('temp.tex','w') as file:
    file.write(styler.to_latex(hrules=True, convert_css=True, column_format="p{30mm}|m|m|M|",
                               multirow_align="c", multicol_align="c", clines='all;data'))
</code></pre>
<p>The code raises <code>AttributeError: 'Series' object has no attribute 'columns'</code>, which I don't understand. It is not related to the <code>to_latex()</code> arguments, since removing all of them still gives the same error. Does anyone know how to format the headers using the styler?</p>
|
<python><pandas><pandas-styles>
|
2023-05-05 19:36:35
| 3
| 3,689
|
User981636
|
76,185,248
| 512,480
|
Invalid exception in PIL image library?
|
<p>I'm using the version of PIL from the "pillow" package:</p>
<pre><code>from PIL import Image
</code></pre>
<p>When running on Linux (but nowhere else) I sometimes get these very strange errors:</p>
<pre><code>Exception ignored in: <function Image.__del__ at 0x7efefc7adf80>
Traceback (most recent call last):
File "/nix/store/gvv61xmxpdcnm3y7n1xvb0i3h89y7xy8-python3-3.7.9/lib/python3.7/tkinter/__init__.py", line 3508, in __del__
TypeError: catching classes that do not inherit from BaseException is not allowed
</code></pre>
<p>Are there any measures to be taken short of reporting this as a bug? The traceback gives no clue as to what in my code might give rise to it.</p>
|
<python><exception><python-imaging-library>
|
2023-05-05 19:07:56
| 1
| 1,624
|
Joymaker
|
76,185,208
| 13,653,794
|
How to structure golang api that uses function from python file
|
<p>I am writing an api that will be sending messages over websockets.</p>
<p>I have a python monitoring function that is used to monitor and format the data which will be sent over the websocket. If this monitoring function was written in golang, I'd run the function on a separate goroutine and each time new data was fetched by the function I would send it through a channel and then send it over the websocket connection.</p>
<p>How, if at all, is it possible to achieve a similar type of behaviour if the monitoring function is a Python function? I'd rather not use gRPC, as speed is important for my use case.</p>
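For reference, a sketch of one common pattern (my own suggestion, not from the post): run the Python monitor as a subprocess from Go and stream newline-delimited JSON over its stdout; a goroutine can then scan lines from the pipe and forward them into a channel. The Python side only needs to flush after each line so the consumer sees samples as they are produced:

```python
import json
import sys
import time

def emit(values, out=sys.stdout):
    """Write one JSON object per line, flushing each line immediately
    so the consuming process (e.g. a Go bufio.Scanner) gets it at once."""
    for v in values:
        out.write(json.dumps({"ts": time.time(), "value": v}) + "\n")
        out.flush()
```

On the Go side this pairs with `exec.Command`, `cmd.StdoutPipe()`, and a `bufio.Scanner` feeding a channel.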
|
<python><go><ctypes>
|
2023-05-05 19:00:07
| 1
| 374
|
Fergus Johnson
|
76,184,947
| 2,687,317
|
How to plot binned datetimes from pd.cut
|
<p>So I have data like this:</p>
<pre><code>LFrame Date_Time DoW run_time az el distance pass_ID SV_ID Direction PFD_Jy
0 3114360965 2023-03-29 17:25:20 Wednesday 62720.0 349.254117 12.199639 2.171043e+06 2023_03_29_154 154 SB 8.505332
1 3114360977 2023-03-29 17:25:21 Wednesday 62721.0 349.216316 12.294878 2.164688e+06 2023_03_29_154 154 SB 1085.548185
2 3114360988 2023-03-29 17:25:22 Wednesday 62722.0 349.178240 12.390492 2.158335e+06 2023_03_29_154 154 SB 515.828602
3 3114360999 2023-03-29 17:25:23 Wednesday 62723.0 349.139888 12.486484 2.151987e+06 2023_03_29_154 154 SB 344.120530
4 3114361010 2023-03-29 17:25:24 Wednesday 62724.0 349.101256 12.582857 2.145641e+06 2023_03_29_154 154 SB 37.207705
</code></pre>
<p>...</p>
<p>I bin it up since it's at 1 sec and I don't want to plot it that densely:</p>
<pre><code>binned = SV_pfd_data.groupby(pd.cut(SV_pfd_data.Date_Time, SV_pfd_data.shape[0]//5), as_index=True).mean() # ~1 min bins
binned = binned.reset_index()
</code></pre>
<p>which results in this data:</p>
<pre><code>Date_Time LFrame run_time az el distance SV_ID PFD_Jy
0 (2023-03-29 17:25:19.408999936, 2023-03-29 17:... 3.114361e+09 62722.5 349.158693 12.438994 2.155165e+06 154.0 448.731927
1 (2023-03-29 17:25:25.008474624, 2023-03-29 17:... 3.114361e+09 62728.0 348.943580 12.972617 2.120295e+06 154.0 213.259464
2 (2023-03-29 17:25:30.016949248, 2023-03-29 17:... 3.114361e+09 62733.0 348.740192 13.468287 2.088688e+06 154.0 556.595627
3 (2023-03-29 17:25:35.025423616, 2023-03-29 17:... 3.114361e+09 62738.0 348.529055 13.974280 2.057173e+06 154.0 872.418091
</code></pre>
<p><em>Note that the binned date/time has resolution better than microseconds.</em> Not sure why.</p>
<p>However, when I plot:</p>
<pre><code>timeOnly = matdates.DateFormatter('%H:%M:%S')
fig, ax = plt.subplots(figsize=(10,5))
ax.plot_date(binned.Date_Time, binned.PFD_Jy,
label=r"$\nu$=1611.1")
ax.set_ylabel("PFD [Jy]")
ax.set_xlabel("Date-time")
ax.set_xticklabels(SV_pfd_data.Date_Time, rotation = 65, fontsize=10)
ax.xaxis.set_major_formatter(timeOnly)
</code></pre>
<p>I get an error:</p>
<pre><code>OverflowError: int too big to convert
</code></pre>
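A hedged sketch (with made-up sample data) of one likely way around this: `pd.cut` on a datetime column produces `Interval` objects, which matplotlib cannot convert to dates; taking each interval's midpoint yields plain Timestamps that plot normally.

```python
import numpy as np
import pandas as pd

# illustrative 1-second data standing in for SV_pfd_data
times = pd.date_range("2023-03-29 17:25:20", periods=20, freq="s")
df = pd.DataFrame({"Date_Time": times, "PFD_Jy": np.random.rand(20)})

binned = df.groupby(pd.cut(df.Date_Time, df.shape[0] // 5)).mean(numeric_only=True)
binned = binned.reset_index()

# Interval -> midpoint Timestamp, which matplotlib's date handling accepts
binned["bin_mid"] = binned["Date_Time"].apply(lambda iv: iv.mid)
```

Plotting `binned.bin_mid` against `binned.PFD_Jy` then avoids passing Interval objects to the axis.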
|
<python><pandas><matplotlib>
|
2023-05-05 18:15:09
| 2
| 533
|
earnric
|
76,184,853
| 659,503
|
Python3 won't run - no module name encodings
|
<p>When I try to run a python command I get a warning about missing module 'encodings'. So I set the PYTHONHOME variable as described in other SO answers, but it still can't find 'encodings'</p>
<pre><code>[root@fedora ~]# python help
Python path configuration:
PYTHONHOME = '/usr/lib64/python3.9/venv/bin/python'
PYTHONPATH = (not set)
program name = 'python'
isolated = 0
environment = 1
user site = 1
import site = 1
sys._base_executable = '/bin/python'
sys.base_prefix = '/usr/lib64/python3.9/venv/bin/python'
sys.base_exec_prefix = '/usr/lib64/python3.9/venv/bin/python'
sys.platlibdir = 'lib64'
sys.executable = '/bin/python'
sys.prefix = '/usr/lib64/python3.9/venv/bin/python'
sys.exec_prefix = '/usr/lib64/python3.9/venv/bin/python'
sys.path = [
'/usr/lib64/python3.9/venv/bin/python/lib64/python39.zip',
'/usr/lib64/python3.9/venv/bin/python/lib64/python3.9',
'/usr/lib64/python3.9/venv/bin/python/lib64/python3.9/lib-dynload',
]
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Current thread 0x00007f241f6dd740 (most recent call first):
<no Python frame>
[root@fedora ~]#
</code></pre>
<p>In case it matters, I recently installed a cross-compilation environment (for Raspberry Pi) on this x86_64 Fedora 34 host, but I don't think that should matter. I tried to reinstall Python with <code>dnf reinstall python</code>, but now Python doesn't run, and neither do dnf and other utilities.</p>
|
<python><installation>
|
2023-05-05 18:00:56
| 1
| 4,736
|
TSG
|
76,184,843
| 1,888,440
|
RuntimeError: await wasn't used with future when using twisted, pytest_twisted plugin, and asyncio reactor
|
<p>I'm running into the <code>RuntimeError: await wasn't used with future</code> in a simple pytest. I have the <code>pytest_twisted</code> plugin enabled with the argument <code>--reactor asyncio</code>. I can see that twisted is using asyncio and all my twisted tests run fine. However, this code gives me the above error.</p>
<pre><code>async def _sleep():
    await asyncio.sleep(1.0)

@defer.inlineCallbacks
def test_sleep():
    yield defer.ensureDeferred(_sleep())
</code></pre>
<p>It's just a simple test case to see if I can mix asyncio and twisted code together. The full stack trace is as follows:</p>
<pre><code>Traceback (most recent call last):
File "test/test_simple.py", line 23, in test_sleep
yield defer.ensureDeferred(_sleep())
File "/usr/local/lib/python3.10/site-packages/twisted/internet/defer.py", line 1697, in _inlineCallbacks
result = context.run(gen.send, result)
File "test/test_cli.py", line 18, in _sleep
await asyncio.sleep(1.0)
File "/usr/local/lib/python3.10/asyncio/tasks.py", line 605, in sleep
return await future
RuntimeError: await wasn't used with future
</code></pre>
<p>Anything obvious jump out?</p>
<pre><code>Twisted: 22.10.0
Python: 3.10.11
pytest: 7.3.1
pytest_twisted: 1.14.0
</code></pre>
|
<python><python-asyncio><twisted>
|
2023-05-05 18:00:27
| 2
| 4,994
|
robert_difalco
|
76,184,613
| 475,710
|
Dynamically add test methods to python unittest in setup method
|
<p>I wish to add dynamic tests to a Python unittest class during setup. Is there any way to get this working?</p>
<p>I know that this works <a href="https://stackoverflow.com/questions/32899/how-do-you-generate-dynamic-parameterized-unit-tests-in-python">based on the answers on this page</a>:</p>
<pre><code>def generate_test(a, b):
    def test(self):
        self.assertEqual(a, b)
    return test

def add_test_methods(test_class):
    test_list = [[1, 1, '1'], [5, 5, '2'], [0, 0, '3']]
    for case in test_list:
        test = generate_test(case[0], case[1])
        setattr(test_class, "test_%s" % case[2], test)

class TestScenario(unittest.TestCase):
    def setUp(self):
        print("setup")

add_test_methods(TestScenario)

if __name__ == '__main__':
    unittest.main(verbosity=1)
</code></pre>
<p>But this doesn't:</p>
<pre><code>class TestScenario(unittest.TestCase):
    def setUp(self):
        add_test_methods(TestScenario)
</code></pre>
<p>It is unable to find any tests:</p>
<pre><code>Process finished with exit code 5
Empty suite
Empty suite
</code></pre>
<p>Any idea why this doesn't work and how could I get it working?</p>
<p>Thanks.</p>
<p>UPDATE:</p>
<p>I tried to invoke <code>add_test_methods</code> from inside <code>TestScenario</code> in this fashion, but that doesn't work either, as it cannot resolve the <code>TestScenario</code> class and throws this error:
"ERROR: not found: TestScenario"</p>
<pre><code>class TestScenario(unittest.TestCase):
    add_test_methods(TestScenario)

    def setUp(self):
        pass
</code></pre>
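A sketch of why the first variant works and the others don't: the test loader collects `test_*` method names when the suite is built, which happens before `setUp` ever runs (and inside the class body the name `TestScenario` is not yet bound). Attaching the methods at import time, right after the class statement, is early enough:

```python
import unittest

def generate_test(a, b):
    def test(self):
        self.assertEqual(a, b)
    return test

class TestScenario(unittest.TestCase):
    pass

# attach at import time: the class object exists, and the loader
# has not yet scanned it for test_* names
for a, b, name in [(1, 1, '1'), (5, 5, '2'), (0, 0, '3')]:
    setattr(TestScenario, "test_%s" % name, generate_test(a, b))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestScenario)
```

Running the suite then discovers all three generated tests.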
|
<python><dynamic><python-unittest>
|
2023-05-05 17:22:32
| 1
| 1,011
|
Sennin
|
76,184,540
| 791,793
|
Get all documents from ChromaDb using Python and langchain
|
<p>I'm using langchain to process a whole bunch of documents which are in a Mongo database.</p>
<p>I can load all documents fine into the chromadb vector storage using langchain. Nothing fancy being done here. This is my code:</p>
<pre class="lang-python prettyprint-override"><code>from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
from langchain.vectorstores import Chroma
db = Chroma.from_documents(docs, embeddings, persist_directory='db')
db.persist()
</code></pre>
<p>Now, after storing the data, I want to get a list of all the documents and embeddings WITH id's.</p>
<p>This is so I can store them back into MongoDb.</p>
<p>I also want to put them through Bertopic to get the topic categories.</p>
<p>How do I get all documents I've just stored in the Chroma database? I want the documents, and all the metadata.</p>
|
<python><langchain><py-langchain><chromadb>
|
2023-05-05 17:09:44
| 5
| 721
|
user791793
|
76,184,438
| 1,443,702
|
Capture flask request when entering and leaving the application
|
<p>I'm struggling a bit with writing decorators to capture and modify requests within my flask application once the requests enters the application context and leaves the application context.</p>
<p>My function in a flask application looks like below. I want the control of requests at two points to add custom headers and params:</p>
<ol>
<li>before the request goes inside <code>get_foo</code> function and,</li>
<li>just before <code>requests.get('http://third-party-service/v1/api')</code> so I can attach any custom header or query param just before this request leaves the application.</li>
</ol>
<pre><code>@app.route('/my/v1/api')
def get_foo():
    ...
    ...
    r = requests.get('http://third-party-service/v1/api')
    ...
    ...
    return 'success!'
</code></pre>
<p>The former part I'm able to do using <code>@app.before_request</code> decorator where I get control of the request once it reaches the <code>get_foo</code> function.</p>
<pre><code>@app.before_request
def before_request_callback():
    method = request.method
    path = request.path
    print(method, path)
</code></pre>
<p>The latter part I'm not able to do. If I use the <code>@app.after_request</code> decorator, I get control once the request has been processed and I have already got the response from <code>http://third-party-service</code>.</p>
<p>I debugged the Sentry SDK, and it is able to take control of requests once they leave the application. I tried to follow how it is implemented in the GitHub repo (<a href="https://github.com/getsentry/sentry-python" rel="nofollow noreferrer">https://github.com/getsentry/sentry-python</a>) but wasn't able to reproduce it, hence posting the question here.</p>
|
<python><flask><python-requests><request><distributed-tracing>
|
2023-05-05 16:55:39
| 6
| 4,726
|
xan
|
76,184,432
| 2,601,293
|
Add type hints to a python package similar to how TypeScript can use a .d.ts file
|
<p>I'm using a library that is a little old and doesn't have Python type hints.</p>
<p>Since this isn't my library, I can't simply add type hints to it. With TypeScript, there is a concept of using a <strong>.d.ts</strong> file that goes alongside the <strong>.js</strong> file. This provides typing information without modifying the original code. Is there some way in Python that this could be implemented?</p>
<p>So far the only thing I've come up with is to extend the classes without type hints and make a super call to them. Unless I'm mistaken, this would require wrapping every single function/class in the original code to function, instead of just not having a type hint for a missing one in the wrapper class.</p>
<pre><code>class The_Class_I_Want_To_Use:
    def foo(self, foo, bar):
        ...

class My_TypeHint_Wrapper(The_Class_I_Want_To_Use):
    ...
    def foo(self, foo: str, bar: str) -&gt; bool:
        super().foo(foo, bar)
</code></pre>
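Python's counterpart to a `.d.ts` file is a `.pyi` stub file (see PEP 561): a file with the same module name containing only signatures, which type checkers read instead of (or alongside) the untyped source, with no wrapping or runtime changes. A sketch using the class from the question (the file and package names are illustrative; the ellipsis bodies are the stub convention):

```python
# the_library.pyi -- placed next to the untyped module, or shipped in a
# separate "the_library-stubs" distribution per PEP 561
class The_Class_I_Want_To_Use:
    def foo(self, foo: str, bar: str) -> bool: ...
```

At runtime the original untyped module is used unchanged; only the checker (mypy, pyright, etc.) consults the stub.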
|
<python><typescript><type-hinting>
|
2023-05-05 16:54:38
| 1
| 3,876
|
J'e
|
76,184,257
| 1,914,781
|
annotate on subplot with add_annotation
|
<p>The top subplot works fine, but in the bottom one the arrow start/end style is not the same as in the top subplot!</p>
<pre><code>import re
import pandas as pd
import plotly.graph_objects as go
from plotly.subplots import make_subplots

def plot_line(df,pngname):
    fontsize = 10
    title = "demo"
    xlabel = "KeyPoint"
    ylabel = "Duration(secs)"
    xname = df.columns[0]
    colnames = df.columns[1:]
    n = len(colnames)
    fig = make_subplots(
        rows=n, cols=1,
        shared_xaxes=True,
        vertical_spacing = 0.02,
    )
    xaxis,yaxis = get_xyaxis()
    for i,yname in enumerate(colnames):
        trace1 = go.Scatter(
            x=df[xname],
            y=df[yname],
            text=df[yname],
            textposition='top center',
            mode='lines+text',
            marker=dict(
                size=1,
                line=dict(width=0,color='DarkSlateGrey')),
            name=yname)
        fig.add_trace(
            trace1,
            row=i+1,
            col=1
        )
        fig.update_xaxes(xaxis)
        fig.update_yaxes(yaxis)
        add_anns(fig,df,xname,yname,i+1,1)

    xpading=.05
    fig.update_layout(
        margin=dict(l=20,t=40,r=10,b=40),
        plot_bgcolor='#ffffff',#'rgb(12,163,135)',
        paper_bgcolor='#ffffff',
        title=title,
        title_x=0.5,
        showlegend=True,
        legend=dict(x=.02,y=1.05),
        barmode='group',
        bargap=0.05,
        bargroupgap=0.0,
        font=dict(
            family="Courier New, monospace",
            size=fontsize,
            color="black"
        ),
    )
    fig.show()
    return

def get_xyaxis():
    xaxis=dict(
        title_standoff=1,
        tickangle=-15,
        showline=True,
        linecolor='black',
        color='black',
        linewidth=.5,
        ticks='outside',
        showgrid=True,
        gridcolor='grey',
        gridwidth=.5,
        griddash='solid',#'dot',
    )
    yaxis=dict(
        title_standoff=1,
        showline=True,
        linecolor='black',
        color='black',
        linewidth=.5,
        showgrid=True,
        gridcolor='grey',
        gridwidth=.5,
        griddash='solid',#'dot',
        zeroline=True,
        zerolinecolor='grey',
        zerolinewidth=.5,
        showticklabels=True,
    )
    return [xaxis,yaxis]

def add_anns(fig,df,xname,yname,i,j):
    prev = df.loc[0]
    for idx, row in df.iterrows():
        dy = row[yname] - prev[yname]
        x0 = row[xname]
        y0 = row[yname]
        x1 = row[xname]
        y1 = prev[yname]
        if abs(dy) &gt; 0:
            ans = add_vline(fig,x0,y0,y1,i,j,"%.1f"%(dy))
        prev = row
    return

def add_vline(fig,x0,y0,y1,row,col,text=None):
    dw = 10  # pixels
    if text == None:
        text = "%.1f"%(y1-y0)
    anns = []
    xref='x'
    yref='y'
    fig.add_annotation(#vertical1
        x=x0,y=y0,ax=x0,ay=y1,
        xref=xref,yref=yref,axref=xref,ayref=yref,
        showarrow=True,text='',
        arrowhead=2,arrowside='start+end',arrowsize=2,arrowwidth=.5,arrowcolor='black',
        row=row,col=col,
    )
    fig.add_annotation(# start
        x=x0,y=y0,ax=-dw,ay=y0,
        xref=xref,yref=yref,axref='pixel',ayref=yref,
        showarrow=True,text='',arrowwidth=.5,arrowcolor='black',
        row=row,col=col,
    )
    fig.add_annotation(
        x=x0,y=y0,ax=dw,ay=y0,
        xref=xref,yref=yref,axref='pixel',ayref=yref,
        showarrow=True,text='',arrowwidth=.5,arrowcolor='black',
        row=row,col=col,
    )
    fig.add_annotation( # end
        x=x0,y=y1,ax=-dw,ay=y1,
        xref=xref,yref=yref,axref='pixel',ayref=yref,
        showarrow=True,text='',arrowwidth=.5,arrowcolor='black',
        row=row,col=col,
    )
    fig.add_annotation(
        x=x0,y=y1,ax=dw,ay=y1,
        xref=xref,yref=yref,axref='pixel',ayref=yref,
        showarrow=True,text='',arrowwidth=.5,arrowcolor='black',
        row=row,col=col,
    )
    fig.add_annotation(# text label
        x=x0, y=(y0+y1)/2,
        xref=xref,yref=yref,
        text=text,textangle=0,font=dict(color='black',size=14),
        bgcolor='white',
        showarrow=False,arrowhead=1,arrowwidth=2,
        row=row,col=col,
    )
    return

def main():
    data = [
        ['AAA',1,2],
        ['BBB',10,12],
        ['CCC',5,6],
        ['DDD',8,9],
    ]
    df = pd.DataFrame(data,columns=['name','v1','v2'])
    plot_line(df,"./demo.png")
    return

main()
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/XKCoL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XKCoL.png" alt="enter image description here" /></a></p>
|
<python><plotly><scatter-plot>
|
2023-05-05 16:29:09
| 1
| 9,011
|
lucky1928
|
76,184,252
| 3,667,089
|
How to avoid creating a newline when using if-else in f-string expression
|
<p>Please see the below minimal example,</p>
<pre><code>printbbb = True
print(f"""AAA
{'''BBB''' if printbbb else ''}
CCC""")
</code></pre>
<p>This prints</p>
<pre><code>AAA
BBB
CCC
</code></pre>
<p>as desired, however, if I set <code>printbbb</code> to <code>False</code>, it prints</p>
<pre><code>AAA
CCC
</code></pre>
<p>How can I change the code so that it will print</p>
<pre><code>AAA
CCC
</code></pre>
<p>when <code>printbbb</code> is set to <code>False</code>?</p>
|
<python><python-3.x><f-string>
|
2023-05-05 16:28:44
| 2
| 3,388
|
user3667089
|
76,184,234
| 5,437,090
|
pandas dataframe map using lambda function with multiple input arguments | AttributeError: 'DataFrame' object has no attribute 'map'
|
<p><strong>Given</strong>:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(data={'user_ip': ["u1", "u2", "u3", "u4", "u5"],
                        'a': [1, np.nan, 8, 2, 0],
                        'b': [2, 5, 1, np.nan, 0],
                        'c': [3, 0, np.nan, 0, 7],
                        'd': [0, 2, 1, 2, 9],
                        },
                  )

  user_ip    a    b    c  d
0      u1  1.0  2.0  3.0  0
1      u2  NaN  5.0  0.0  2
2      u3  8.0  1.0  NaN  1
3      u4  2.0  NaN  0.0  2
4      u5  0.0  0.0  7.0  9
</code></pre>
<p><strong>Goal</strong>:</p>
<p>I'd like to loop through each row to get a new column using my custom defined function with input arguments (including <code>DataFrame</code> and its column) as follows:</p>
<pre><code>def fcn(df, col, x, y):
    return x*df[col] + y

df["new_col_apply"] = df.apply(lambda inp_df: fcn(inp_df, col="b", x=2, y=10), axis=1)
</code></pre>
<p>My solution works fine, but the <code>apply()</code> method seems quite slow for my original <code>dataframe</code> containing more than <code>900K</code> rows.</p>
<p>I am aware of <code>map()</code>, but <code>DataFrame</code> doesn't have a <code>map()</code> transformation, and I specifically need to pass my <code>DataFrame</code> and its column (<code>col</code>) as input to my function <code>fcn</code>. The following snippet:</p>
<pre><code>df["new_col_map"] = df.map(lambda inp_df: fcn(inp_df, col="b", x=2, y=10), na_action="ignore")
</code></pre>
<p>ends up in an <code>AttributeError</code> as below:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-14-dfe11c4bff87> in <cell line: 1>()
----> 1 df["new_col_map"] = df.map(lambda inp_df: fcn(inp_df, col="b", x=2, y=10), na_action="ignore")
2 df
/usr/local/lib/python3.10/dist-packages/pandas/core/generic.py in __getattr__(self, name)
5900 ):
5901 return self[name]
-> 5902 return object.__getattribute__(self, name)
5903
5904 def __setattr__(self, name: str, value) -> None:
AttributeError: 'DataFrame' object has no attribute 'map'
</code></pre>
<p>Is there any better and faster alternative than <code>apply()</code> transformation to loop through large pandas <code>DataFrame</code> with custom defined functions with several arguments?</p>
<p>Cheers,</p>
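For a row-wise formula like this one, a sketch of the usual alternative: since `fcn` is built only from arithmetic operators, the same function can be applied to the whole column at once, which pandas broadcasts as one vectorized operation instead of 900K per-row Python calls.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'user_ip': ["u1", "u2", "u3", "u4", "u5"],
                   'b': [2, 5, 1, np.nan, 0]})

def fcn(df, col, x, y):
    return x * df[col] + y

# pass the whole DataFrame once, not row by row with apply(axis=1)
df["new_col_vec"] = fcn(df, col="b", x=2, y=10)
```

NaNs propagate through the arithmetic automatically, matching the `apply` result.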
|
<python><pandas><error-handling>
|
2023-05-05 16:26:42
| 1
| 1,621
|
farid
|
76,184,100
| 3,358,927
|
How to send at-commands to tcp port and retrieve response
|
<p>I am writing Python code to receive information from TCP port 20001. According to the documentation, there are some AT commands that can be called. One of them is <code>+OK</code>. Also, the AT command must be terminated with the byte <code>0x0D</code>.</p>
<p><a href="https://i.sstatic.net/I8Mzs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I8Mzs.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/D0hIf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D0hIf.png" alt="enter image description here" /></a></p>
<p>I followed the example <a href="https://medium.com/@dillan.teagle.va/sending-binary-commands-to-devices-in-python-60d64dc02c44" rel="nofollow noreferrer">here</a> and wrote the code below to send the command and receive the response</p>
<pre><code>TCP_IP = '127.0.0.1'
TCP_PORT = 20001
BUFFER_SIZE = 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((TCP_IP, TCP_PORT))
print('connected')
# won't work without encoding
packet = bytearray('x1b' + '+OK' + 'x1d', encoding='ascii')
s.send(packet)
s.shutdown(socket.SHUT_WR)
data = s.recv(BUFFER_SIZE)
print(data)
</code></pre>
<p>It connects successfully but I am not getting any data back. I highly doubt I got the <code>packet</code> part right. The example didn't have the encoding part, but it won't work without it (<code>TypeError: string argument without an encoding</code>). And do I need <code>AT</code> in front of <code>+OK</code>? I've never worked with AT commands before and there aren't many examples out there.</p>
<h3>EDIT</h3>
<p>The returned connection information looks like this:
<a href="https://i.sstatic.net/fVdD5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fVdD5.png" alt="enter image description here" /></a></p>
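As a point of comparison, a small sketch of building the command as raw bytes: with bytes literals there is no encoding step at all, and the `0x0D` terminator (carriage return) can be appended explicitly. Note that the strings `'x1b'` and `'x1d'` in the code above are literal three-character strings, not escape sequences, which may be part of the problem. (Whether an `AT` prefix is required depends on the device's documentation; I can't confirm that part.)

```python
# '+OK' terminated by the 0x0D byte the protocol asks for
packet = b'+OK' + bytes([0x0D])
```

This `packet` can be passed to `s.send()` directly, with no `bytearray(..., encoding=...)` call needed.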
|
<python><sockets><tcp><byte>
|
2023-05-05 16:08:44
| 1
| 5,049
|
ddd
|
76,183,984
| 2,475,612
|
python mypy: sorting a list on nullable key - where items with null fields are removed
|
<p>I am on a project using mypy 1.2.0. I am linting the following code:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import date, timedelta
from typing import Optional

from pydantic import BaseModel

class ScheduleItem(BaseModel):
    start_date: Optional[date] = None

items = [
    ScheduleItem(start_date=None),
    ScheduleItem(start_date=date.today()),
    ScheduleItem(start_date=date.today() - timedelta(days=3)),
]

sorted_items = sorted(
    [item for item in items if item.start_date],
    key=lambda item: item.start_date,
)
</code></pre>
<p>I get the following lint errors:</p>
<pre><code>error: Argument "key" to "sorted" has incompatible type "Callable[[ScheduleItem], Optional[date]]"; expected "Callable[[ScheduleItem], Union[SupportsDunderLT[Any], SupportsDunderGT[Any]]]" [arg-type]
error: Incompatible return value type (got "Optional[date]", expected "Union[SupportsDunderLT[Any], SupportsDunderGT[Any]]") [return-value]
</code></pre>
<p>However, I am passing in a list to <code>sorted</code> which enforces the values that are being sorted on are not <code>None</code>. How do I communicate this to mypy?</p>
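A sketch of one common workaround: mypy does not propagate the comprehension's filter into the lambda's inferred type, so give the key function an explicit signature that narrows the `Optional` itself (the `assert` is for the type checker; the filter guarantees it never fires). A plain class stands in for the pydantic model here to keep the sketch self-contained.

```python
from datetime import date, timedelta
from typing import Optional

class ScheduleItem:
    def __init__(self, start_date: Optional[date]) -> None:
        self.start_date = start_date

def sort_key(item: ScheduleItem) -> date:
    assert item.start_date is not None  # narrows Optional[date] to date
    return item.start_date

items = [
    ScheduleItem(start_date=None),
    ScheduleItem(start_date=date.today()),
    ScheduleItem(start_date=date.today() - timedelta(days=3)),
]

sorted_items = sorted(
    (item for item in items if item.start_date is not None),
    key=sort_key,
)
```

An alternative with the same effect is `typing.cast(date, item.start_date)` inside the lambda.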
|
<python><mypy>
|
2023-05-05 15:54:25
| 2
| 1,614
|
i_trope
|
76,183,964
| 226,081
|
pip install with target directory but skip already installed system packages
|
<p>The <code>pip install --target</code> option lets you install Python packages into a specific directory, which can then be added to the PYTHONPATH (basically creating something similar to a virtualenv, but without the Python interpreter and libs).</p>
<p>However, I have a use-case where I'd like to install packages into the target directory, but skip the install of a transitive dependency if a compatible version is already satisfied on the PYTHONPATH.</p>
<p>For example, since the <code>requests</code> package depends on <code>urllib3</code> and if a requests-compatible version of <code>urllib3</code> is already installed on the PYTHONPATH of my current Python interpreter / pip executable, I'm hoping to have way to install <code>requests</code> to the "target" directory, but skip the install of <code>urllib3</code> (since it is already present).</p>
<p>I understand that it makes sense for the default behavior of <code>pip install --target</code> to install full copies to the target directory, but am wondering if there's something similar to virtualenv's <code>--system-site-packages</code> flag to include system site packages when evaluating what to install in the target dir.</p>
<p>How can I pip install to a target directory but not install copies of packages installed at the system level? Thanks for reading; any advice is much appreciated.</p>
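<p>For illustration, a hedged sketch of the desired behaviour (a hypothetical helper, not an existing pip flag): check which requirements are already satisfied on the current interpreter before deciding what to pass to <code>pip install --target</code>:</p>

```python
# Hypothetical pre-filter: report which distributions are not already
# visible in the current environment's metadata. The surviving names
# would then be installed with `pip install --target <dir>`.
from importlib import metadata

def missing_distributions(names):
    absent = []
    for name in names:
        try:
            metadata.version(name)  # raises if no dist metadata is found
        except metadata.PackageNotFoundError:
            absent.append(name)
    return absent

print(missing_distributions(["definitely-not-installed-xyz"]))
# ['definitely-not-installed-xyz']
```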
|
<python><pip><virtualenv><python-packaging><python-venv>
|
2023-05-05 15:52:41
| 0
| 10,861
|
Joe Jasinski
|
76,183,891
| 14,775,478
|
Is saving keras models with tf >=2.12.0 not backward compatible?
|
<p>I am struggling with keras' way of saving models in <code>v2.12.0</code>. It does not seem to be backward compatible. Am I getting something wrong, or must we refactor code to migrate to <code>v2.12.0</code>?</p>
<p>Here is our old way of saving models (<code>v2.7.0</code>): pass a valid <code>.h5</code> file path (!) to Keras, and voilà, a <code>.h5</code> file of that name appears on disk:</p>
<pre><code>model = tf.keras.models.Sequential(...)
model.compile(...)
model.save('my_model.h5')
</code></pre>
<p>That code now blows up with <code>v2.12.0</code>:</p>
<pre><code>tensorflow.python.framework.errors_impl.FailedPreconditionError: /tmp/tmpt118zg26 is not a directory
</code></pre>
<p>I understand that <code>v2.12.0</code> offers 2 options:</p>
<ul>
<li>The new way of saving into <a href="https://www.tensorflow.org/api_docs/python/tf/saved_model/save" rel="nofollow noreferrer">folders</a> is enabled via <code>tf.saved_model.save()</code></li>
<li>The <a href="https://www.tensorflow.org/api_docs/python/tf/keras/saving/save_model" rel="nofollow noreferrer">old way</a> of saving to files would still be supported by <code>tf.keras.saving.save_model</code>. Apparently, <code>model.save()</code> would serve as an alias for <code>tf.keras.saving.save_model()</code></li>
</ul>
<p>The <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.12.0" rel="nofollow noreferrer">release notes</a> also state that</p>
<blockquote>
<p>Moved all saving-related utilities to a new namespace, <code>keras.saving</code>, for example</p>
<p>The previous API locations (in <code>keras.utils</code> and <code>keras.models</code>) will be available indefinitely, but we recommend you update your code to point to the new API locations.</p>
</blockquote>
<p>So, then, why does the above code fail all of a sudden?</p>
<ul>
<li><p>Can I still use the "existing approach" to save .h5 directly? If so, how? Is this not supposed to be backward compatible? Do I need to rewrite the code anyway, now (if I want to stick to file writing, do I need to make one-line changes to <code>tf.keras.saving.save_model()</code>)?</p>
</li>
<li><p>Or, must we save into folders, instead of files? Then what's the migration path? All models into one folder, or would they overwrite one another? Do we need to nest folders, then, and also reinvent our way of syncing these models elsewhere (e.g., having to <code>aws s3 sync 'folder'</code> instead of <code>boto3</code> call <code>client.upload_file()</code> to shift the models off to aws)?</p>
</li>
</ul>
|
<python><tensorflow><keras>
|
2023-05-05 15:43:32
| 2
| 1,690
|
KingOtto
|
76,183,853
| 1,825,360
|
Pandas dataframe to Html ValueError: Styles supplied as string must follow CSS rule formats
|
<p>I'm trying to write a pandas data frame to HTML (pandas 1.5.3) and I'm getting the following error. The code, however, runs fine in pandas 0.25.1.</p>
<p>This is a small dataframe which I uploaded in <a href="https://pastebin.com/MGnwCvMk" rel="nofollow noreferrer">pastebin</a></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
url_raw = "https://pastebin.com/raw/MGnwCvMk"
re_df = pd.read_csv(url_raw, sep="\t", na_filter=False)
columns_with_color_dictionary = {
"Seq": "#FFFFFF",
"Cov": "#FFFFFF",
"CTer": "#FFFFFF",
"NTer": "#FFFFFF",
"B13": "#fc9863",
"F0": "#fdca93",
"F1": "#fdbb85",
"F3": "#fdbd86",
"F2": "#fdbb85",
"Y73": "#fda671",
"Y59": "#f06749",
"Y58": "#840000",
"Y45": "#fdd6a3",
"Y37": "#feeacc",
"Y26": "#fdb67f",
"Y24": "#fdaf79",
"Y13": "#fca36e",
}
def f_color(dat, c='red'):
    return [f'background-color: {c}' if v == '1' else '#FFFFFF' for v in dat]

style = re_df.style
style = style.set_properties(width="25px")
for column, color in columns_with_color_dictionary.items():
    style = style.apply(f_color, axis=0, subset=column, c=color)

with open("MyTest.html", 'w') as fh:
    fh.write(style.render())
</code></pre>
<p>I've tried style.to_html(), but I'm getting the following error code (PD version 0.25.1 works fine):</p>
<pre><code>C:\Users\Admin\AppData\Local\Temp\ipykernel_10844\4235473097.py:31: FutureWarning: this method is deprecated in favour of `Styler.to_html()`
fh.write(style.render())
Traceback (most recent call last):
File C:\Anaconda3\lib\site-packages\pandas\io\formats\style_render.py:1873 in maybe_convert_css_to_tuples
return [
File C:\Anaconda3\lib\site-packages\pandas\io\formats\style_render.py:1874 in <listcomp>
(x.split(":")[0].strip(), x.split(":")[1].strip())
IndexError: list index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
Cell In[76], line 31
fh.write(style.render())
File C:\Anaconda3\lib\site-packages\pandas\io\formats\style.py:457 in render
return self._render_html(sparse_index, sparse_columns, **kwargs)
File C:\Anaconda3\lib\site-packages\pandas\io\formats\style_render.py:206 in _render_html
d = self._render(sparse_index, sparse_columns, max_rows, max_cols, "&nbsp;")
File C:\Anaconda3\lib\site-packages\pandas\io\formats\style_render.py:163 in _render
self._compute()
File C:\Anaconda3\lib\site-packages\pandas\io\formats\style_render.py:258 in _compute
r = func(self)(*args, **kwargs)
File C:\Anaconda3\lib\site-packages\pandas\io\formats\style.py:1762 in _apply
self._update_ctx(result)
File C:\Anaconda3\lib\site-packages\pandas\io\formats\style.py:1585 in _update_ctx
css_list = maybe_convert_css_to_tuples(c)
File C:\Anaconda3\lib\site-packages\pandas\io\formats\style_render.py:1879 in maybe_convert_css_to_tuples
raise ValueError(
ValueError: Styles supplied as string must follow CSS rule formats, for example 'attr: val;'. '#FFFFFF' was given.
</code></pre>
|
<python><pandas><valueerror>
|
2023-05-05 15:38:23
| 1
| 469
|
The August
|
76,183,798
| 1,821,692
|
What is the purpose of creating a Python class inherited from `abc.ABC` but without `abstractmethod`?
|
<p>I've read TorchServe's default handlers' sources and found that the <a href="https://github.com/pytorch/serve/blob/86d440041b663961c71a6262fe648111d85b27d8/ts/torch_handler/base_handler.py#L62" rel="nofollow noreferrer"><code>BaseHandler</code></a> is inherited from <code>abc.ABC</code> and doesn't have any abstract method. The <a href="https://github.com/pytorch/serve/blob/86d440041b663961c71a6262fe648111d85b27d8/ts/torch_handler/vision_handler.py#L15" rel="nofollow noreferrer"><code>VisionHandler</code></a> is the same.</p>
<p>What could be the reason and when I should use <code>abc.ABC</code> without <code>@abstractmethod</code>?</p>
<p>PS: I found that one of the possible answers is to provide a common interface. In that case I think I could still use a simple class without inheriting from <code>ABC</code>; nothing forces you to implement methods that lack the <code>abstractmethod</code> decorator.</p>
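<p>A minimal sketch of that point — inheriting from <code>abc.ABC</code> with no <code>@abstractmethod</code> enforces nothing at instantiation time, just like a plain class:</p>

```python
import abc

class PlainBase:
    def handle(self, data):
        raise NotImplementedError

class AbcBase(abc.ABC):
    def handle(self, data):
        raise NotImplementedError

# Both are instantiable: ABCMeta only blocks instantiation while some
# @abstractmethod remains unimplemented, and there are none here.
p = PlainBase()
a = AbcBase()
print(isinstance(a, abc.ABC))  # True
```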
|
<python><abc><abstract-methods><torchserve>
|
2023-05-05 15:29:57
| 1
| 3,047
|
feeeper
|
76,183,675
| 13,039,962
|
differences between markers of scatter and mlines legend
|
<p>With this code I'm plotting values on a map:</p>
<pre><code>conditions = [(maxcwd1['CWD'] == 0),
(maxcwd1['CWD'] > 0) & (maxcwd1['CWD'] < 3),
(maxcwd1['CWD'] >= 3) & (maxcwd1['CWD'] < 5),
(maxcwd1['CWD'] >= 5) & (maxcwd1['CWD'] < 7),
(maxcwd1['CWD'] >= 7) & (maxcwd1['CWD'] < 10),
(maxcwd1['CWD'] >= 10) & (maxcwd1['CWD'] < 15),
(maxcwd1['CWD'] >= 15) & (maxcwd1['CWD'] < 20),
(maxcwd1['CWD'] >= 20) & (maxcwd1['CWD'] <= 31)
]
sizes = [100,200,300,400,500,600,700,800]
maxcwd1['size'] = np.select(conditions, sizes)
#______________________________________________________________________________
fig = plt.figure('map', figsize=(20,25), dpi=300)
ax = fig.add_axes([0.05, 0.05, 0.90, 0.85], projection=ccrs.PlateCarree())
l1 = NaturalEarthFeature(category='cultural', name='admin_0_countries', scale='50m', facecolor='none')
ax.add_feature(cfeature.OCEAN.with_scale('50m'), facecolor='deepskyblue')
ax.add_feature(cfeature.LAND.with_scale('50m'))
ax.add_feature(cfeature.COASTLINE.with_scale('50m'))
ax.add_feature(COUNTIES, edgecolor='black',linewidth=0.5,facecolor='#BDBBC4',zorder=2)
ax.add_feature(COUNTIES_P, edgecolor='black',linewidth=0.5,facecolor='#E1E1E1',zorder=2)
for index, row in maxcwd1.iterrows():
    ax.scatter(row['Longitud'], row['Latitud'], s=row['size'], color='black',
               marker='o', edgecolors="black", linewidth=1, zorder=4,
               transform=ccrs.PlateCarree())
#And the legend:
m6 = mlines.Line2D([], [], color='black', marker='o', linestyle='None',markeredgecolor='black',
markersize=5, label='0')
m7 = mlines.Line2D([], [], color='black', marker='o', linestyle='None',markeredgecolor='black',
markersize=10, label='[1-3>')
m8 = mlines.Line2D([], [], color='black', marker='o', linestyle='None',markeredgecolor='black',
markersize=15, label='[3-5>')
m9 = mlines.Line2D([], [], color='black', marker='o', linestyle='None',markeredgecolor='black',
markersize=20, label='[5-7>')
m10 = mlines.Line2D([], [], color='black', marker='o', linestyle='None',markeredgecolor='black',
markersize=25, label='[7-10>')
m11 = mlines.Line2D([], [], color='black', marker='o', linestyle='None',markeredgecolor='black',
markersize=30, label='[10-15>')
m12 = mlines.Line2D([], [], color='black', marker='o', linestyle='None',markeredgecolor='black',
markersize=35, label='[15-20>')
m13 = mlines.Line2D([], [], color='black', marker='o', linestyle='None',markeredgecolor='black',
markersize=40, label='[20-31]')
leg=ax.legend(handles=[m6, m7,m8,m9,m10,m11,m12,m13],title='CWD',title_fontsize=20,
fontsize=22, bbox_to_anchor=(0.04, 0.22, 0.1, 0.35),
borderpad=0.45, labelspacing=0.75,handlelength=0.4, facecolor='white',edgecolor='black',borderaxespad=0.004)
</code></pre>
<p>The problem is that when I try to use the same marker sizes in the legend as in <code>ax.scatter</code>, they are not the same size in the legend as on the map; they come out exaggeratedly bigger.</p>
<p>How can I solve this?</p>
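<p>One possibly relevant detail (an assumption stated for comparison, not a confirmed diagnosis): <code>ax.scatter</code> interprets <code>s</code> as a marker <em>area</em> in points², while <code>Line2D</code>'s <code>markersize</code> is a <em>diameter</em> in points, so matching the two would mean <code>markersize = sqrt(s)</code>:</p>

```python
import math

# Hypothetical mapping from the scatter sizes above to the Line2D
# markersize that draws a marker of the same diameter.
scatter_sizes = [100, 200, 300, 400, 500, 600, 700, 800]
legend_sizes = [math.sqrt(s) for s in scatter_sizes]
print([round(m, 1) for m in legend_sizes])
# [10.0, 14.1, 17.3, 20.0, 22.4, 24.5, 26.5, 28.3]
```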
|
<python><pandas><matplotlib>
|
2023-05-05 15:16:57
| 0
| 523
|
Javier
|
76,183,636
| 12,292,254
|
Default value for nested dataclasses
|
<p>I got the following dataclasses in Python:</p>
<pre><code>@dataclass
class B(baseClass):
    d: float
    e: int


@dataclass
class A(baseClass):
    b: B
    c: str
</code></pre>
<p>I have this configured so that the baseClass allows to get the variables of each class from a config.json file. This file contains the following nested dictionary.</p>
<pre><code>{
    "a": {
        "b": {
            "d": a_float_value,
            "e": a_int_value
        },
        "c": a_string_value
    }
}
</code></pre>
<p>But the key "b" can be an empty dict.</p>
<pre><code>{
    "a": {
        "b": {},
        "c": a_string_value
    }
}
</code></pre>
<p>But how do I integrate this into my dataclasses?
I tried</p>
<pre><code>@dataclass
class B(baseClass):
    d: float
    e: int


@dataclass
class A(baseClass):
    b: Optional[B] = {}
    c: str
</code></pre>
<p>and</p>
<pre><code>@dataclass
class B(baseClass):
    d: float
    e: int


@dataclass
class A(baseClass):
    b: B = field(default_factory=dict)
    c: str
</code></pre>
<p>But this doesn't seem to work.</p>
|
<python><nested><python-dataclasses>
|
2023-05-05 15:11:58
| 1
| 460
|
Steven01123581321
|
76,183,587
| 2,664,910
|
Can you monitor multiple urlpatterns with Django's runserver autoreload
|
<p>In the project I'm working on, I'd like to run two apps on one runserver for development purposes (they run standalone in production), to utilize, among other things, the autoreload option.</p>
<p>What I'd like to do is have a WSGI application and an ASGI application side by side, by implementing Django Channels next to the existing app.
For that, I feel like I basically need two ROOT_URLCONFs,
as one would apply to the WSGI application and the other to the ASGI application.
Is there a way to make runserver pay attention to multiple ROOT_URLCONFs (or urlpatterns, whatever you want to call them)?
And does this approach make sense in the first place?
From what I've seen on the web, it's not such an uncommon setup,
but I haven't seen anyone placing it under one runserver.</p>
<p>For ease of development it would ideally be one server,
as there's already overhead on startup of the project.</p>
|
<python><django><django-rest-framework><django-channels>
|
2023-05-05 15:05:32
| 0
| 429
|
800c25d6-cd74-11ed-afa1-0242ac
|
76,183,504
| 127,320
|
Long creation time with conda env create -f environment.yml
|
<p>I have the following <code>environment.yml</code> file. It is taking 1.5 hours to create this environment. How to improve (or debug) the creation time?</p>
<pre><code>name: test_syn_spark_3_3_1
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.10
  - pandas=1.5
  - pip=23.0
  - pyarrow=11.0.0
  - pyspark=3.3.1
  - setuptools=65.0
  - pip:
      - azure-common==1.1.28
      - azure-core==1.26.1
      - azure-datalake-store==0.0.51
      - azure-identity==1.7.0
      - azure-mgmt-core==1.3.2
      - azure-mgmt-resource==21.2.1
      - azure-mgmt-storage==20.1.0
      - azure-storage-blob==12.16.0
      - azure-mgmt-authorization==2.0.0
      - azure-mgmt-keyvault==10.1.0
      - azure-storage-file-datalake==12.11.0
      - check-wheel-contents==0.4.0
      - pyarrowfs-adlgen2==0.2.4
      - wheel-filename==1.4.1
</code></pre>
|
<python><anaconda><conda><miniconda>
|
2023-05-05 14:55:13
| 1
| 80,467
|
Aravind Yarram
|
76,183,497
| 2,664,910
|
How to setup urlpatterns for ASGI Django Channels http type protocol?
|
<p>I am pretty sure I am missing something from the documentation,
but I am unable to find a definitive answer anywhere.</p>
<p>I have a Django application, and what I'd like to do is use Django Channels for websocket purposes, however I am also interested in async http that Django Channels can provide for me.</p>
<p>Looking all over the internet and also in the source code of Django Channels,
the only kinds of examples I was able to find match the ones from the <a href="https://channels.readthedocs.io/en/stable/topics/routing.html" rel="nofollow noreferrer">documentation</a></p>
<p>where "http" property in ProtocolTypeRouter is set to <code>get_asgi_application</code>,
and only websockets have their own urlpatterns.</p>
<pre><code>application = ProtocolTypeRouter({
    # Django's ASGI application to handle traditional HTTP requests
    "http": django_asgi_app,

    # WebSocket chat handler
    "websocket": AllowedHostsOriginValidator(
        AuthMiddlewareStack(
            URLRouter([
                path("chat/admin/", AdminChatConsumer.as_asgi()),
                path("chat/", PublicChatConsumer.as_asgi()),
            ])
        )
    ),
})
</code></pre>
<p>I can't understand how to set up urlpatterns that http property will route to.</p>
|
<python><django><django-channels><asgi>
|
2023-05-05 14:54:06
| 1
| 429
|
800c25d6-cd74-11ed-afa1-0242ac
|
76,183,463
| 507,770
|
Cannot Set Figure DPI
|
<p>I am trying to set the figure DPI. The docs say that it <em>should</em> default to 100, but it is defaulting to 200. I am trying to force it to 100 in two different ways. No dice.</p>
<p>Here is my python script in its entirety.</p>
<pre><code>import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 100
fig = plt.figure(figsize=(1, 1), dpi=100)
print(fig.dpi) # prints 200
</code></pre>
<p>Why is the figure's DPI 200?</p>
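<p>A first diagnostic sketch (assuming, hypothetically, that a backend or a <code>matplotlibrc</code> file is overriding the value — e.g. IPython's retina rendering effectively doubles DPI):</p>

```python
import matplotlib
import matplotlib.pyplot as plt

# Where might 200 come from? Inspect the active backend, the rcParams
# value actually in effect, and which matplotlibrc file is being read.
print(matplotlib.get_backend())
print(matplotlib.rcParams["figure.dpi"])
print(matplotlib.matplotlib_fname())
```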
|
<python><matplotlib>
|
2023-05-05 14:50:48
| 0
| 18,592
|
bbrame
|
76,183,446
| 1,833,326
|
Duplicates even though there are no duplicates
|
<p>I have a data frame as a result of multiple joins. When I check, it tells me that I have a duplicate, even though that is impossible from my perspective. Here is an abstract example:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
import pyspark.sql.functions as f
from pyspark.sql.functions import lit
# Create a Spark session
spark = SparkSession.builder.appName("CreateDataFrame").getOrCreate()
# User input for number of rows
n_a = 10
n_a_c = 5
n_a_c_d = 3
n_a_c_e = 4
# Define the schema for the DataFrame
schema_a = StructType([StructField("id1", StringType(), True)])
schema_a_b = StructType(
[
StructField("id1", StringType(), True),
StructField("id2", StringType(), True),
StructField("extra", StringType(), True),
]
)
schema_a_c = StructType(
[
StructField("id1", StringType(), True),
StructField("id3", StringType(), True),
]
)
schema_a_c_d = StructType(
[
StructField("id3", StringType(), True),
StructField("id4", StringType(), True),
]
)
schema_a_c_e = StructType(
[
StructField("id3", StringType(), True),
StructField("id5", StringType(), True),
]
)
# Create a list of rows with increasing integer values for "id1" and a constant value of "1" for "id2"
rows_a = [(str(i),) for i in range(1, n_a + 1)]
rows_a_integers = [str(i) for i in range(1, n_a + 1)]
rows_a_b = [(str(i), str(1), "A") for i in range(1, n_a + 1)]
def get_2d_list(ids_part_1: list, n_new_ids: int):
    rows = [
        [
            (str(i), str(i) + "_" + str(j))
            for i in ids_part_1
            for j in range(1, n_new_ids + 1)
        ]
    ]
    return [item for sublist in rows for item in sublist]
rows_a_c = get_2d_list(ids_part_1=rows_a_integers, n_new_ids=n_a_c)
rows_a_c_d = get_2d_list(ids_part_1=[i[1] for i in rows_a_c], n_new_ids=n_a_c_d)
rows_a_c_e = get_2d_list(ids_part_1=[i[1] for i in rows_a_c], n_new_ids=n_a_c_e)
# Create the DataFrame
df_a = spark.createDataFrame(rows_a, schema_a)
df_a_b = spark.createDataFrame(rows_a_b, schema_a_b)
df_a_c = spark.createDataFrame(rows_a_c, schema_a_c)
df_a_c_d = spark.createDataFrame(rows_a_c_d, schema_a_c_d)
df_a_c_e = spark.createDataFrame(rows_a_c_e, schema_a_c_e)
# Join everything
df_join = (
df_a.join(df_a_b, on="id1")
.join(df_a_c, on="id1")
.join(df_a_c_d, on="id3")
.join(df_a_c_e, on="id3")
)
# Nested structure
# show
df_nested = df_join.withColumn("id3", f.struct(f.col("id3"))).orderBy("id3")
for i, index in enumerate([(5, 3), (4, 3), (3, None)]):
    remaining_columns = list(set(df_nested.columns).difference(set([f"id{index[0]}"])))
    df_nested = (
        df_nested.groupby(*remaining_columns)
        .agg(f.collect_list(f.col(f"id{index[0]}")).alias(f"id{index[0]}_tmp"))
        .drop(f"id{index[0]}")
        .withColumnRenamed(
            f"id{index[0]}_tmp",
            f"id{index[0]}",
        )
    )
    if index[1]:
        df_nested = df_nested.withColumn(
            f"id{index[1]}",
            f.struct(
                f.col(f"id{index[1]}.*"),
                f.col(f"id{index[0]}"),
            ).alias(f"id{index[1]}"),
        ).drop(f"id{index[0]}")
</code></pre>
<p>I check for duplicates based on <code>id3</code>, which should be unique across the entire data frame on the second level:</p>
<pre><code># Investigate for duplicates
df_test = df_nested.select("id2", "extra", f.explode(f.col("id3")["id3"]).alias("id3"))
df_test.groupby("id3").count().filter(f.col("count") > 1).show()
</code></pre>
<p>Which tells me that <code>ID3 == 8_3</code> exists twice:</p>
<pre><code>+---+-----+
|id3|count|
+---+-----+
|8_3| 2|
+---+-----+
</code></pre>
<p>However, <code>ID3</code> is clearly unique in the data frame, which can be shown by the following (<code>ID4</code> and <code>ID5</code> are on the next level):</p>
<pre><code>df_join.groupby("id3", "id4", "id5").count().filter(f.col("count") > 1).show()
</code></pre>
<p>leading to</p>
<pre><code>+---+---+---+-----+
|id3|id4|id5|count|
+---+---+---+-----+
+---+---+---+-----+
</code></pre>
<p>If it helps, I use Databricks Runtime 11.3 LTS (includes Apache Spark 3.3.0, Scala 2.12).</p>
|
<python><apache-spark><pyspark><apache-spark-sql><databricks>
|
2023-05-05 14:48:49
| 1
| 1,018
|
Lazloo Xp
|
76,183,438
| 20,212,187
|
Sklearn pipeline with LDA and KNN
|
<p>I try to use the <code>LinearDiscriminantAnalysis</code> (LDA) class from sklearn as the preprocessing part of my modeling to reduce the dimensionality of my data, and afterwards apply a KNN classifier. I know that a good practice is to use a pipeline to bring together the preprocessing and modeling parts.</p>
<p>I also use the <code>cross_validate</code> method to avoid overfitting via cross validation. But when I build my pipeline and pass it to the <code>cross_validate</code> method, it seems that only LDA is used to classify my data, since LDA can be used as a classifier too.</p>
<p>I don't understand why; it's as if, since LDA can predict the class, it just uses it without the KNN or something like that. I may be using the <code>Pipeline</code> class wrong.</p>
<p>Below you can find the code with the pipeline (LDA + KNN) and a version with just LDA; the results are exactly the same. Note that when I transform (reduce) the data beforehand and use the reduced data in a <code>cross_validate</code> call with KNN, my results are way better.</p>
<pre><code># Define the pipeline to use LDA as preprocessing part
pipeline2 = Pipeline([
('lda', lda),
('knn', knn)
])
# Use stratified cross validation on pipeline (LDA and KNN) classifier
result_test = pd.DataFrame(cross_validate(
pipeline2,
X_train_reduced,
y_train,
return_train_score=True,
cv=3,
scoring=['accuracy']
))
# Get mean train and test accuracy
print(f"Mean train accuracy: {result_test['train_accuracy'].mean():.3f}")
print(f"Mean validation accuracy: {result_test['test_accuracy'].mean():.3f}")
</code></pre>
<p>Mean train accuracy: 1.000<br />
Mean validation accuracy: 0.429</p>
<pre><code># Define the pipeline to use LDA as preprocessing part
pipeline2 = Pipeline([
('lda', lda),
#('knn', knn) THE KNN IS COMMENT IN THIS CASE!!
])
# Use stratified cross validation on pipeline (LDA and KNN) classifier
result_test = pd.DataFrame(cross_validate(
pipeline2,
X_train_reduced,
y_train,
return_train_score=True,
cv=3,
scoring=['accuracy']
))
# Get mean train and test accuracy
print(f"Mean train accuracy: {result_test['train_accuracy'].mean():.3f}")
print(f"Mean validation accuracy: {result_test['test_accuracy'].mean():.3f}")
</code></pre>
<p>Mean train accuracy: 1.000<br />
Mean validation accuracy: 0.429</p>
<p>Note that the data used is quite complex; it is from MRI images, and it has already been reduced using PCA to filter noise in the images.</p>
<p>Thank you for your help!</p>
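<p>For reference, a self-contained version of the intended pipeline on toy data (hypothetical stand-ins for the <code>lda</code>/<code>knn</code> objects above): as an intermediate step LDA acts as a transformer, and KNN, being the final estimator, does the predicting:</p>

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

X, y = load_iris(return_X_y=True)

pipe = Pipeline([
    ("lda", LinearDiscriminantAnalysis(n_components=2)),  # fit_transform here
    ("knn", KNeighborsClassifier(n_neighbors=5)),         # final estimator
])

res = cross_validate(pipe, X, y, cv=3, scoring=["accuracy"], return_train_score=True)
print(res["test_accuracy"].mean() > 0.8)  # True
```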
|
<python><scikit-learn><pipeline><knn>
|
2023-05-05 14:47:38
| 1
| 543
|
Adrien Riaux
|
76,183,422
| 1,114,105
|
How to share a string using Python 3's multiprocessing.Value?
|
<p>I am trying to share a string between two processes using Value, but it does not work.</p>
<pre><code>from multiprocessing import Value, Process
import time
import ctypes
def say_hi(v):
    with v.get_lock():
        print(f"Saying hi with {v.value=:}")
        v.value = "borat"


if __name__ == "__main__":
    v = Value(ctypes.c_wchar_p, "boris")
    p1 = Process(target=say_hi, args=(v,))
    p2 = Process(target=say_hi, args=(v,))
    p1.start()
    p1.join()
    p2.start()
    p2.join()
</code></pre>
<p>The first process is supposed to set the string to "borat" but the output is:</p>
<p><code>Saying hi with v.value=boris</code></p>
<p><code>Saying hi with v.value=</code></p>
|
<python><python-multiprocessing><ctypes>
|
2023-05-05 14:45:56
| 2
| 1,189
|
user1114
|
76,183,393
| 17,353,489
|
How to view-cast / reinterpret-cast in pythran / numpy?
|
<p>I'm trying to do <code>numpy</code> view-casting (which I believe in <code>C/C++</code> land would be called reinterpret-casting) in pythran:</p>
<p>The following silly made-up example takes an array of unsigned 8-byte integers, reinterprets them as twice as many unsigned 4-byte integers, slices off the first and last (this also does not touch the actual data; it only changes the "base pointer") and reinterprets as unsigned 8-byte again, the total effect being a frame shift. (We'll worry about endianness some other time.)</p>
<pre><code>import numpy as np
A = np.arange(5,dtype="u8")
a = A.view("u4")
B = a[1:9].view("u8")
A
# array([0, 1, 2, 3, 4], dtype=uint64)
B
# array([ 4294967296, 8589934592, 12884901888, 17179869184], dtype=uint64)
np.shares_memory(A,B)
# True
</code></pre>
<p>I cannot have pythran translate this directly because it doesn't know the <code>.view</code> attribute.</p>
<p>Is there a way to reinterpret cast arrays in pythran?</p>
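<p>For comparison (a possible workaround to check — I cannot confirm pythran supports it): the same reinterpretation can be written without <code>.view</code> via the buffer protocol:</p>

```python
import numpy as np

A = np.arange(5, dtype="u8")

# Equivalent of A.view("u4")[1:9].view("u8"), spelled with frombuffer;
# both arrays still share A's memory (little-endian assumed, as above).
a4 = np.frombuffer(A, dtype="u4")
B = np.frombuffer(a4[1:9], dtype="u8")

print(B.tolist())              # [4294967296, 8589934592, 12884901888, 17179869184]
print(np.shares_memory(A, B))  # True
```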
|
<python><numpy><reinterpret-cast><pythran>
|
2023-05-05 14:41:41
| 1
| 533
|
Albert.Lang
|
76,183,337
| 5,016,028
|
Select features by correlation with label in pandas dataframe
|
<p>I have a dataframe of the form:</p>
<pre><code>df = pd.DataFrame({ 'A' : [1,2,3],
'B' : [4,5,6],
'label' : [1.0, 0.0, 1.0]
})
</code></pre>
<p>And I first select only the features that have a correlation above a threshold to the <code>'label'</code> column.</p>
<pre><code>cor = df.corr()
cor_target = abs(cor["label"])
relevant_features = cor_target[cor_target>0.05]
</code></pre>
<p>How can I use the <code>relevant_features</code> object to filter out a new dataframe, say <code>df2</code> from <code>df</code>, that only has those features in it?</p>
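<p>For illustration, here is what <code>relevant_features</code> holds at this point — a plain <code>Series</code> whose <em>index</em> is the surviving column names (note that with this toy data both <code>A</code> and <code>B</code> happen to be uncorrelated with <code>label</code>, so only <code>label</code> itself survives the threshold):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3],
                   'B': [4, 5, 6],
                   'label': [1.0, 0.0, 1.0]})

cor_target = abs(df.corr()["label"])
relevant_features = cor_target[cor_target > 0.05]

# The index carries the column names that passed the threshold; that is
# what a column selection would need.
print(list(relevant_features.index))  # ['label']
```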
|
<python><python-3.x><pandas><dataframe><list>
|
2023-05-05 14:35:47
| 2
| 4,373
|
Qubix
|
76,183,316
| 7,985,055
|
How does one serialize a firefox profile in selenium?
|
<p>All,</p>
<p>I am trying to set up a Firefox profile to run some Selenium tests in a Docker container, selenium/standalone-firefox (<a href="https://hub.docker.com/r/selenium/standalone-firefox/" rel="nofollow noreferrer">https://hub.docker.com/r/selenium/standalone-firefox/</a>). However, I am not able to integrate the capabilities, profile, and options into the remote web driver.</p>
<p>I keep getting a <code>TypeError</code> when I try to include the profile:</p>
<pre><code> ^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1008.0_x64__qbz5n2kfra8p0\Lib\json\__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1008.0_x64__qbz5n2kfra8p0\Lib\json\encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1008.0_x64__qbz5n2kfra8p0\Lib\json\encoder.py", line 258, in iterencode
return _iterencode(o, 0)
^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1008.0_x64__qbz5n2kfra8p0\Lib\json\encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type FirefoxProfile is not JSON serializable
</code></pre>
<p>This is what I have so far...</p>
<p>firefox.py</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from time import sleep
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
capabilities = webdriver.DesiredCapabilities.FIREFOX.copy()
capabilities['acceptInsecureCerts'] = True
capabilities['marionette'] = True
profile = webdriver.FirefoxProfile()
profile.set_preference("accept_untrusted_certs", True)
profile.set_preference("browser.download.panel.shown", False)
profile.set_preference("browser.helperApps.neverAsk.openFile","text/csv,application/vnd.ms-excel")
profile.set_preference("browser.helperApps.neverAsk.saveToDisk", "application/msword, application/csv, application/ris, text/csv, image/png, application/pdf, text/html, text/plain, application/zip, application/x-zip, application/x-zip-compressed, application/download, application/octet-stream")
profile.set_preference("browser.download.manager.showWhenStarting", False)
profile.set_preference("browser.download.manager.alertOnEXEOpen", False)
profile.set_preference("browser.download.manager.focusWhenStarting", False)
profile.set_preference("browser.download.folderList", 2)
profile.set_preference("browser.download.useDownloadDir", True)
profile.set_preference("browser.helperApps.alwaysAsk.force", False)
profile.set_preference("browser.download.manager.alertOnEXEOpen", False)
profile.set_preference("browser.download.manager.closeWhenDone", True)
profile.set_preference("browser.download.manager.showAlertOnComplete", False)
profile.set_preference("browser.download.manager.useWindow", False)
profile.set_preference("services.sync.prefs.sync.browser.download.manager.showWhenStarting", False)
profile.set_preference("pdfjs.disabled", True)
profile.set_preference("javascript.enabled", False)
profile.update_preferences()
options=Options()
options.set_preference('profile', profile)
options.set_preference('network.proxy.type', 1)
options.set_preference('network.proxy.socks', '127.0.0.1')
options.set_preference('network.proxy.socks_port', 9050)
options.set_preference('network.proxy.socks_remote_dns', False)
driver = webdriver.Remote(
command_executor="http://localhost:4444",
desired_capabilities=capabilities,
options=options
)
driver.get("https://www.google.com")
search_input_box = driver.find_element(By.NAME, "q")
search_input_box.send_keys("selenium webdriver" + Keys.ENTER)
sleep(60)
driver.quit()
</code></pre>
<p>If I remove the profile or options settings, it works fine, but I would like to be able to play around with the settings if I could. Could someone point out what I am missing?</p>
|
<python><selenium-webdriver><firefox>
|
2023-05-05 14:33:51
| 2
| 525
|
Mr. E
|
76,183,278
| 9,885,747
|
How to locally develop EventHub Triggered Functions in Python (programming model v2)?
|
<p>I would like to learn to develop Azure Functions locally using Visual Studio Code. While there are <a href="https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-python?pivots=python-mode-decorators" rel="nofollow noreferrer">numerous examples</a> and <a href="https://www.youtube.com/watch?v=KARieaWBxuk&t=1200s" rel="nofollow noreferrer">demos</a> available for using an HTTP trigger, I'm struggling to find much information on creating a minimal working example for <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-hubs-trigger?tabs=python-v2%2Cin-process%2Cfunctionsv2%2Cextensionv5&pivots=programming-language-python" rel="nofollow noreferrer">event-triggered functions</a>. I've even come across some <a href="https://stackoverflow.com/questions/63846854/is-it-possible-to-execute-event-hub-triggered-azure-functions-locally-without-us">disagreements</a> regarding whether it's possible to develop locally without connecting to an actual Event Hub Service.</p>
<p>I have a few questions for the community:</p>
<ol>
<li>Is it feasible to develop event-triggered functions locally (and with a reasonable effort)?</li>
<li>If anyone has successfully done this, could you please provide an example? I've gone through several posts, but I'm having trouble putting everything together. I saw a mention of "Thunder Client", but I'm unfamiliar with it. Could someone explain if it's an option and how it works?</li>
<li>What should the host.json and local.settings.json files look like?</li>
</ol>
<p>I would like to start with the sample test code provided by Microsoft. Here is the code:</p>
<pre><code>import azure.functions as func
import logging
app = func.FunctionApp()
@app.function_name(name="EventHubTrigger")
@app.event_hub_message_trigger(arg_name="myhub",
event_hub_name="<EVENT_HUB_NAME>",
connection="<CONNECTION_SETTING>")
def test_function(myhub: func.EventHubEvent):
logging.info('Python EventHub trigger processed an event: %s',
myhub.get_body().decode('utf-8'))
</code></pre>
<p>I appreciate any guidance or assistance you can provide. Thank you!</p>
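<p>For reference, here is my current attempt at a minimal <code>local.settings.json</code> — the setting name <code>EventHubConnection</code> and all values below are placeholders I made up, and the <code>connection=</code> argument of the decorator would then refer to that setting by name:</p>

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsFeatureFlags": "EnableWorkerIndexing",
    "EventHubConnection": "Endpoint=sb://<NAMESPACE>.servicebus.windows.net/;SharedAccessKeyName=<KEY_NAME>;SharedAccessKey=<KEY>"
  }
}
```

<p>My understanding is that <code>UseDevelopmentStorage=true</code> assumes a local storage emulator (Azurite) is running — please correct me if this file is wrong.</p>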
|
<python><azure><azure-functions><event-handling><emulation>
|
2023-05-05 14:29:48
| 4
| 1,685
|
DataBach
|
76,182,959
| 1,509,372
|
Full outer join in Django ORM
|
<p>I have a set of models like this:</p>
<pre class="lang-py prettyprint-override"><code>class Expense(models.Model):
debitors = models.ManyToManyField(Person, through='Debitor')
total_amount = models.DecimalField(max_digits=5, decimal_places=2)
class Debitor(models.Model):
class Meta:
unique_together = [['person', 'expense']]
person = models.ForeignKey(Person, on_delete=models.CASCADE)
expense = models.ForeignKey(Expense, on_delete=models.CASCADE)
amount = models.IntegerField()
class Person(models.Model):
name = models.CharField(max_length=25)
</code></pre>
<p>i.e. I have a many-to-many relationship between <code>Expense</code>s and <code>Person</code>s <em>through</em> <code>Debitor</code>s.</p>
<p>For a given <code>Expense</code>, I'd like to query all <code>Debitor</code>s/<code>Person</code>s, <strong>but including <code>Person</code>s for which a <code>Debitor</code> record does not exist</strong>. In that case, I want a row with a certain <code>person_id</code>, but with the <code>Debitor</code> and <code>Expense</code> columns <code>null</code>. In other words, a FULL OUTER JOIN between <code>Expense</code>, <code>Debitor</code>, and <code>Person</code>.</p>
<p>I have not found how to accomplish this with Django's ORM. I have figured out a raw query that does what I want (which is a bit more convoluted because SQLite does not natively support FULL OUTER JOINs):</p>
<pre class="lang-sql prettyprint-override"><code>SELECT debitor_id as id, amount, src_person.id as person, expense_id as expense
FROM src_person
LEFT JOIN (SELECT src_debitor.id as debitor_id, src_debitor.amount as amount, src_expense.id as expense_id, src_debitor.person_id as person_id
FROM src_debitor
INNER JOIN src_expense ON src_debitor.expense_id = src_expense.id and src_expense.id = x) as sol
ON src_person.id = sol.person_id;
</code></pre>
<p>For example, for <code>expense_id=6</code> this returns:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">id</th>
<th style="text-align: left;">amount</th>
<th style="text-align: left;">person</th>
<th style="text-align: left;">expense</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">6</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">6</td>
</tr>
<tr>
<td style="text-align: left;">null</td>
<td style="text-align: left;">null</td>
<td style="text-align: left;">3</td>
<td style="text-align: left;">null</td>
</tr>
<tr>
<td style="text-align: left;">null</td>
<td style="text-align: left;">null</td>
<td style="text-align: left;">4</td>
<td style="text-align: left;">null</td>
</tr>
<tr>
<td style="text-align: left;">null</td>
<td style="text-align: left;">null</td>
<td style="text-align: left;">5</td>
<td style="text-align: left;">null</td>
</tr>
<tr>
<td style="text-align: left;">null</td>
<td style="text-align: left;">null</td>
<td style="text-align: left;">6</td>
<td style="text-align: left;">null</td>
</tr>
</tbody>
</table>
</div>
<p>However, when executing this query using <code>Debitor.objects.raw()</code>, I get the following error:</p>
<pre><code>ValueError: Cannot assign "1": "Debitor.person" must be a "Person" instance.
</code></pre>
<p>I presume it is not possible to map raw queries to model instances using <code>Debitor.objects.raw()</code> when they contain relations to other models? Otherwise I'm not sure what I'm doing wrong...</p>
<p>Is there another/better way to do this? I'm still hoping there is some way to do this within Django's ORM, but I have not found it so far.</p>
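<p>To make sure the SQL itself does what I want independently of Django, I reproduced the LEFT JOIN emulation with plain <code>sqlite3</code> (table and column names simplified from my models):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE expense (id INTEGER PRIMARY KEY, total_amount REAL);
CREATE TABLE debitor (id INTEGER PRIMARY KEY, person_id INTEGER,
                      expense_id INTEGER, amount INTEGER);
INSERT INTO person  VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO expense VALUES (6, 10.0);
INSERT INTO debitor VALUES (4, 1, 6, 1), (5, 2, 6, 1);
""")

# Every person, plus their debitor row for expense 6 where it exists,
# NULLs otherwise -- i.e. the FULL OUTER JOIN emulated via LEFT JOIN
rows = con.execute("""
SELECT sol.debitor_id, sol.amount, person.id, sol.expense_id
FROM person
LEFT JOIN (SELECT debitor.id AS debitor_id, debitor.amount AS amount,
                  expense.id AS expense_id, debitor.person_id AS person_id
           FROM debitor
           JOIN expense ON debitor.expense_id = expense.id AND expense.id = 6
          ) AS sol ON person.id = sol.person_id
ORDER BY person.id
""").fetchall()
print(rows)  # [(4, 1, 1, 6), (5, 1, 2, 6), (None, None, 3, None)]
```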
|
<python><django><orm>
|
2023-05-05 13:55:20
| 3
| 758
|
Compizfox
|
76,182,881
| 12,193,952
|
How to operate a Kubernetes cluster on Digitalocean from Python?
|
<h2>the question</h2>
<p>I am running a Kubernetes cluster on DigitalOcean and I want to connect to it from a Python app in order to retrieve information about running services. I am able to set up the connection from my local machine (where a <code>kubeconfig</code> created using <code>doctl</code> is present), but how do I create a <code>kubeconfig</code> when the app is deployed (using <code>Docker</code>)?</p>
<h2>current state</h2>
<p>So far the connection I use looks like this. The issue is with <code>load_kube_config</code>, because in my container there is no <code>kubeconfig</code>.</p>
<pre class="lang-py prettyprint-override"><code>import kubernetes as k8s
from kubernetes.client.rest import ApiException
def init_k8s():
"""
Init connection to k8s cluster
:return:
"""
# Load the Kubernetes configuration
k8s.config.load_kube_config()
# Configure the Digital Ocean API access token
configuration = k8s.client.Configuration()
configuration.host = 'https://api.digitalocean.com'
# configuration.verify_ssl = True
# configuration.debug = False
configuration.api_key = {'access-token': DO_API_SECRET}
</code></pre>
<h2>possible options</h2>
<p>Please note, I am using Digitalocean Applications for build and run. This limits some options, as seen below.</p>
<h3>💡 create kubeconfig during <code>docker build</code></h3>
<p>One idea, is to create <code>kubeconfig</code> during build of the container, running</p>
<pre><code>doctl auth init -t the-secret-token
doctl kubernetes cluster kubeconfig save 1c08efb9-xxx-yyy
</code></pre>
<p>The only issue might be exposing certain variables during the build.</p>
<h3>❌ map existing kubeconfig to the container</h3>
<p>One possible option would be to somehow map the existing <code>kubeconfig</code> into the container at runtime. Unfortunately I have no way to do this <em>(it's deployed over DigitalOcean Apps, not Kubernetes or so)</em></p>
<h3>❌ copy existing kubeconfig during build</h3>
<p>Also not possible due to point above - I am building the app in DO Apps and there is no option to copy the <code>kubeconfig</code>.</p>
<h3>❌ create base docker image</h3>
<p>Creating a new Docker image containing the <code>kubeconfig</code> and using it as the base image during the app build is one possible approach. However, baking my secrets (<code>kubeconfig</code>) into an image that needs to live in a public Docker registry is not an option.</p>
<h3>✅ create kubeconfig during app runtime - THIS WORKS</h3>
<p>The most optimal approach, so far, is to create a <code>kubeconfig</code> file during app runtime using <code>doctl</code> API (or lib like this). I am currently trying to figure out this way.</p>
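<p>As a sketch of that runtime approach (every value below is a placeholder; in reality the server URL, CA data and token would come from the DigitalOcean API): since a kubeconfig is YAML and JSON is valid YAML, the file can even be written with the standard library alone and exposed through <code>KUBECONFIG</code>:</p>

```python
import json
import os
import tempfile

def write_kubeconfig(server: str, ca_data: str, token: str,
                     name: str = "do-cluster") -> str:
    # Minimal kubeconfig structure; kubectl and client libraries parse
    # YAML, and JSON is a valid subset of YAML
    cfg = {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{"name": name,
                      "cluster": {"server": server,
                                  "certificate-authority-data": ca_data}}],
        "users": [{"name": name, "user": {"token": token}}],
        "contexts": [{"name": name,
                      "context": {"cluster": name, "user": name}}],
        "current-context": name,
    }
    path = os.path.join(tempfile.mkdtemp(), "kubeconfig.json")
    with open(path, "w") as f:
        json.dump(cfg, f)
    os.environ["KUBECONFIG"] = path  # k8s.config.load_kube_config() reads this
    return path

path = write_kubeconfig("https://placeholder.k8s.ondigitalocean.com",
                        "base64-ca-placeholder", "token-placeholder")
print(path)
```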
<p>I am going to keep digging and trying more options. I will be glad for any help 🙏</p>
|
<python><kubernetes><digital-ocean><kubeconfig>
|
2023-05-05 13:45:58
| 1
| 873
|
FN_
|
76,182,735
| 3,182,021
|
Why does this gradient descent algorithm not work for all values of the sine function?
|
<p>I have this gradient descent code that converges well for Boolean functions but not for the sine function, and I do not know what the problem is. I tried changing the activation function of the hidden layer or the output layer from sigmoid to hyperbolic tangent, but it still does not compute good values for some values of sine.</p>
<p>Here is the Python code:</p>
<pre><code># L'algorithme de rétro-propagation du gradient dans un
# réseau de neurones avec 1 couche cachée.
# modifications par D. Mattei
from random import seed, uniform
seed(1789) # si vous voulez avoir les mêmes tirages aléatoires à chaque exécution du fichier !
from math import exp, pow,pi , sin, tanh
from MatrixNumPy import MatrixNumPy
from time import time
import sys
# sigmoïde
def sig(x):
try:
s = 1/(1+ exp(-x))
except OverflowError as e:
# Somehow no exception is caught here...
#print('OverflowError...')
#print("x=",x)
#sys.exit(1)
s = 0
except Exception as e:
print(e)
return s
class ReseauRetroPropagation():
def __init__(self,ne=2,nc=3,ns=1,nbiter=3,eta=1):
'''Construit un réseau de neurones avec une couche cachée. Il y a ne entrées (+ biais),
nc neurones dans la couche cachée (+ biais) et ns neurones en sortie.'''
print(ne,'entrées(+1),',nc,'neurones cachés(+1) et',ns,'en sortie.')
# le réseau calcule sur 7 vecteurs et 2 matrices
self.z_i = ne * [0] # les entrées concrètes seront fournies avec la méthode accepte
# ne+1 in the matrix size because we add one column of bias in the matrix for each hidden neuron of the hidden layer "c"
self.mat_ij = MatrixNumPy(lambda j,i: uniform(-1,1),nc,ne+1) # self.mat_ij[j][i] == poids i->j
self.z_j = nc * [0] # valeurs z_j des neurones cachés
self.grad_j = nc * [0] # gradients locaux des neurones cachés
# nc+1 in the matrix size because we add one column of bias in the matrix for each neuron of the output layer "k"
self.mat_jk = MatrixNumPy(lambda k,j: uniform(-1,1),ns,nc+1) # self.mat_jk[k][j] == poids j->k
self.z_k = ns * [0] # valeurs z_k des neurones de sortie
self.grad_k = ns * [0] # gradients locaux des neurones de sortie
self.nbiter = nbiter
self.eta = eta # "learning rate"
self.error = 0
# fusionne accept et propage
# z_* sans le coef. 1 constant
def accepte_et_propage(self,Lentrees): # on entre des entrées et on les propage
if len(Lentrees) != len(self.z_i):
raise ValueError("Mauvais nombre d'entrées !")
self.z_i = Lentrees # on ne touche pas au biais
# propagation des entrées vers la sortie
# calcul des stimuli reçus par la couche cachée à-partir des entrées
# note: I just reference the variables for code readability (hide all the self keyword)
mat_ij = self.mat_ij
z_i = self.z_i
# create a list with 1 in front
z_i_1 = [1] + z_i
z̃_j = mat_ij * z_i_1 # z̃_i = matrix * iterable (list here)
# calcul des réponses des neurones cachés
z_j = list(map(sig,z̃_j))
#z_j = list(map(tanh,z̃_j))
# calcul des stimuli reçus par la couche de sortie
mat_jk = self.mat_jk
# create a list with 1 in front
z_j_1 = [1] + z_j
z̃_k = mat_jk * z_j_1 # matrix * iterable (list here)
# calcul des réponses de la couche de sortie
z_k = list(map(sig,z̃_k))
#z_k = list(map(tanh,z̃_k))
# update the variable when necessary
self.z_j = z_j
self.z_k = z_k
#print("accepte_et_propage : self.z_k ="); print(self.z_k)
#return self.z_k # et retour des sorties
def apprentissage(self,Lexemples): # apprentissage des poids par une liste d'exemples
nbiter = self.nbiter
ip = 0 # numéro de l'exemple courant
# TODO: take in account the error as stop point
for it in range(nbiter): # le nombre d'itérations est fixé !
error = 0.0 # l'erreur totale pour cet exemple
(entrees,sorties_attendues) = Lexemples[ip] # un nouvel exemple à apprendre
# PROPAGATION VERS L'AVANT
self.accepte_et_propage(entrees) # sorties obtenues sur l'exemple courant, self.z_k et z_j sont mis à jour
# RETRO_PROPAGATION VERS L'ARRIERE, EN DEUX TEMPS
# note: I just reference the variables for code readability (hide all the self keyword)
z_k = self.z_k # read-only variable
grad_k = self.grad_k
ns = len(z_k)
# TEMPS 1. calcul des gradients locaux sur la couche k de sortie (les erreurs commises)
for k in range(ns):
grad_k[k] = sorties_attendues[k] - z_k[k] # gradient sur un neurone de sortie (erreur locale)
error += pow(grad_k[k],2) # l'erreur quadratique totale
error *= 0.5
#print(it)
#print(error)
if it == nbiter-1 : self.error = error # mémorisation de l'erreur totale à la dernière itération
# modification des poids j->k
mat_jk = self.mat_jk # read/write data
z_i = self.z_i
z_j = self.z_j
nc = len(z_j)
#eta = self.eta
eta = ((0.0001 - 1.0) / nbiter) * it + 1.0
#print(eta)
# (test fait: modifier la matrice apres le calcul du gradient de la couche j , conclusion: ne change pas la convergence de l'algo)
self.modification_des_poids(mat_jk,eta,z_j,z_k,grad_k)
#print(mat_jk)
# for k in range(ns): # line
# for j in range(nc): # column , parcours les colonnes de la ligne sauf le bias
# mat_jk[k][j+1] -= - eta * z_j[j] * z_k[k] * (1 - z_k[k]) * grad_k[k]
# # and update the bias
# mat_jk[k][0] -= - eta * 1.0 * z_k[k] * (1 - z_k[k]) * grad_k[k]
# Réponse à la question "b4" : T_{jk} = z_k * (1-z_k) * w_{jk}
# TEMPS 2. calcul des gradients locaux sur la couche j cachée (rétro-propagation), sauf pour le bias constant
grad_j = self.grad_j
for j in range(nc):
# must match the hidden activation function !
grad_j[j] = sum(z_k[k] * (1 - z_k[k]) * mat_jk[k,j+1] * grad_k[k] for k in range(ns))
#grad_j[j] = sum((1 - tanh(z_k[k])**2) * mat_jk[k,j+1] * grad_k[k] for k in range(ns))
#print(grad_j)
# modification des poids i->j
mat_ij = self.mat_ij
self.modification_des_poids(mat_ij,eta,z_i,z_j,grad_j)
# for j in range(nc): # line
# for i in range(ne): # column , parcours les colonnes de la ligne sauf le bias
# mat_ij[j][i+1] -= -eta * z_i[i] * z_j[j] * (1 - z_j[j]) * grad_j[j]
# # and update the bias
# mat_ij[j][0] -= -eta * 1.0 * z_j[j] * (1 - z_j[j]) * grad_j[j]
# et l'on passe à l'exemple suivant
ip = (ip + 1) % len(Lexemples) # parcours des exemples en ordre circulaire
def modification_des_poids(self,M_i_o,eta,z_input,z_output,grad_i_o):
# the length of output and input layer with coeff. used for bias update
(len_layer_output, len_layer_input_plus1forBias) = M_i_o.dim()
len_layer_input = len_layer_input_plus1forBias - 1
for j in range(len_layer_output): # line
for i in range(len_layer_input): # column , parcours les colonnes de la ligne sauf le bias
M_i_o[j,i+1] -= -eta * z_input[i] * z_output[j] * (1 - z_output[j]) * grad_i_o[j]
# and update the bias
M_i_o[j,0] -= -eta * 1.0 * z_output[j] * (1 - z_output[j]) * grad_i_o[j]
def dump(self,n,msg): # dump du réseau en entrant dans l'itération numéro n
print('---------- DUMP',msg,'itération numéro',n)
print('mat_ij :') ; print(self.mat_ij)
print('z_j :',self.z_j)
print('grad_j :',self.grad_j)
print('mat_jk :') ; print(self.mat_jk)
print('z_k :',self.z_k)
print('grad_k :',self.grad_k)
print()
def test(self,Lexemples):
print('Test des exemples :')
for (entree,sortie_attendue) in Lexemples:
self.accepte_et_propage(entree)
print(entree,'-->',self.z_k,': on attendait',sortie_attendue)
if __name__ == '__main__':
print('################## NOT ##################')
r1 = ReseauRetroPropagation(1,2,1,nbiter=10000,eta=0.5)
Lexemples1 = [[[1],[0]],[[0],[1]]]
START = time() ; r1.apprentissage(Lexemples1) ; END = time()
r1.test(Lexemples1)
print('APPRENTISSAGE sur {} itérations, time = {:.2f}s'.format(r1.nbiter,END-START))
print()
print('################## XOR ##################')
r2 = ReseauRetroPropagation(2,3,1,nbiter=50000,eta=0.1) # 2 entrées (+ bias), 3 neurones cachés (+ bias), 1 neurone en sortie
Lexemples2 = [[[1,0],[1]], [[0,0],[0]], [[0,1],[1]], [[1,1],[0]]]
START = time() ; r2.apprentissage(Lexemples2) ; END = time()
print('APPRENTISSAGE sur {} itérations, time = {:.2f}s'.format(r2.nbiter,END-START))
r2.test(Lexemples2)
print("Error=") ; print(r2.error)
#print("r2.mat_ij=",r2.mat_ij)
#print("r2.mat_jk=",r2.mat_jk)
print('################## SINUS ##################')
r3 = ReseauRetroPropagation(1,50,1,nbiter=50000,eta=0.01) # 1 entrée (+ bias), 50 neurones cachés (+ bias), 1 neurone en sortie
Llearning = [ [[x],[sin(x)]] for x in [ uniform(-pi,pi) for n in range(1000)] ]
Ltest = [ [[x],[sin(x)]] for x in [ uniform(-pi/2,pi/2) for n in range(10)] ]
START = time() ; r3.apprentissage(Llearning) ; END = time()
print('APPRENTISSAGE sur {} itérations, time = {:.2f}s'.format(r3.nbiter,END-START))
r3.test(Ltest)
print("Error=") ; print(r3.error)
</code></pre>
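<p>One sanity check, independent of the network code: with a sigmoid on the output layer, the network output is confined to (0, 1), so it can never match the negative half of sin(x) on [-π, π]. This does not explain the tanh attempts, but it rules out the sigmoid output layer for negative targets:</p>

```python
from math import exp, sin, pi

def sig(x):
    return 1 / (1 + exp(-x))

# The sigmoid stays strictly inside (0, 1) for any finite input...
samples = [x / 100 for x in range(-2000, 2001)]
assert all(0 < sig(x) < 1 for x in samples)

# ...while half of the sine targets on [-pi, pi] are negative,
# hence unreachable by a sigmoid output neuron
negatives = [sin(x) for x in (-pi / 2, -pi / 4, -1.0)]
print(negatives)
```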
<p>And the matrix class code:</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
MatrixNumPy.py
The class MatrixNumPy.
Derived from:
exo_mat2.py
The MatrixNumPy class: algebra of matrices of arbitrary shape, with numpy
"""
# D. Mattei
from multimethod import multimethod
from typing import Union,Callable
from collections.abc import Iterable
import numpy
class MatError(Exception): # juste pour la lisibilité des exceptions
pass
Numeric = Union[float, int]
class MatrixNumPy:
'''Construct an object MatrixNumPy.'''
# >>> m1=MatrixNumPy(2,3)
@multimethod
def __init__(self,n : Numeric,p : Numeric): # lines, columns
'''Construit un objet matrice de type MatrixNumPy, d'attributs le format self.dim
et le tableau architecturé en liste de listes de même longueur. Exemples :
m = MatrixNumPy([[1,3],[-2,4],[0,-1]]) à 3 lignes et 2 colonnes
m = MatrixNumPy(lambda i,j: i+j,3,5) à 3 lignes et 5 colonnes'''
if __debug__:
print("# MatrixNumPy constructor MatrixNumPy (Numeric,Numeric) #")
self.__init__(lambda i,j: 0,n,p) # return a Zero matrix
@multimethod
def __init__(self,f : Callable,n : Numeric,p : Numeric):
if __debug__:
print("# MatrixNumPy constructor MatrixNumPy (function,Numeric,Numeric) #")
self.A = numpy.array([[f(i,j) for j in range(p)] for i in range(n)])
@multimethod
def __init__(self,Af : list): # la liste qui contient les éléments de matrice
if __debug__:
print("# MatrixNumPy constructor MatrixNumPy,list #")
if any(map(lambda x:type(x) != list,Af)) :
raise MatError('MatrixNumPy : on attend une liste de listes !')
p = len(Af[0])
if any(map(lambda x:len(x)!=p,Af)) :
raise MatError('MatrixNumPy : on attend une liste de listes de même longueur !')
self.A = numpy.array(Af) # l'array qui contient les éléments de matrice
@multimethod
def __init__(self,Arr : numpy.ndarray):
if __debug__:
print("# MatrixNumPy constructor MatrixNumPy,numpy.ndarray #")
self.A = Arr
def dim(self):
'''Retourne le format de la matrice courante.'''
return self.A.shape
# m1=MatrixNumPy(lambda i,j : i+j, 5,2)
# # MatrixNumPy constructor MatrixNumPy (function,Numeric,Numeric) #
# m1
# 0.00 1.00
# 1.00 2.00
# 2.00 3.00
# 3.00 4.00
# 4.00 5.00
# MatrixNumPy @ 0x105ae03d0
# print(m1)
# 0.00 1.00
# 1.00 2.00
# 2.00 3.00
# 3.00 4.00
# 4.00 5.00
def __repr__(self):
'''Retourne une chaine formatée avec colonnes alignées représentant
la matrice m.'''
return self.__str__() + '\nMatrixNumPy @ {} \n'.format(hex(id(self)))
# >>> print(m)
def __str__(self):
'''Retourne une chaine formatée avec colonnes alignées représentant
la matrice m.'''
return self.A.__str__()
def __getitem__(self,i): # pour pouvoir écrire m[i] pour la ligne i
return self.A[i] # et m[i][j] pour l'élément en ligne i et colonne j
def __setitem__(self, i, data):
self.A[i] = data
def lig(self,i): # m.lig(i) <==> m[i]
'''Retourne la ligne i >= 0 de la matrice sous forme de liste plate.'''
return self.A[i].tolist()
def col(self,j):
'''Retourne la colonne j >= 0 de la matrice sous forme de liste plate.'''
(n,_) = self.dim()
return [self.A[i][j] for i in range(n)]
def __add__(self,m2):
'''Retourne la somme de la matrice courante et d'une matrice m2
de même format.'''
(n,p) = self.dim()
if m2.dim() != (n,p):
raise MatError('mat_sum : Mauvais formats de matrices !')
A = self.A ; A2 = m2.A
AplusA2 = numpy.add(A,A2)
return MatrixNumPy(AplusA2)
def __sub__(self,m2):
'''Retourne la différence entre la matrice courante et une matrice
m2 de même format.'''
return MatrixNumPy(numpy.subtract(self.A,m2.A))
def mul(self,k):
'''Retourne le produit externe du nombre k par la matrice m.'''
(n,p) = self.dim()
return MatrixNumPy(lambda i,j : k*self.A[i][j],n,p)
# R : multiplicand
# matrix multiplication by number
@multimethod
def __rmul__(self, m : Numeric): # self is at RIGHT of multiplication operand : m * self
'''Retourne le produit externe du nombre par la matrice'''
if __debug__:
print("MatrixNumPy.py : __rmul__(MatrixNumPy,Numeric)")
return self.mul(m)
def app(self,v): # v = [a,b,c,d]
'''Retourne l'application de la matrice self au vecteur v vu comme une liste
plate. Le résultat est aussi une liste plate.'''
# transformation de la liste v en matrice uni-colonne
mv = MatrixNumPy(list(map(lambda x:[x],v))) # mv = [[a],[b],[c],[d]]
# l'application n'est autre qu'un produit de matrices
res = self * mv # objet de type MatrixNumPy car produit de 2 matrices
res = res.A # objet de type Array
# et on ré-aplatit la liste
return list(map(lambda A:A[0],res))
# R : multiplicand
# m1=MatrixNumPy(lambda i,j : i+j, 5,2)
# # MatrixNumPy constructor MatrixNumPy (function,Numeric,Numeric) #
# m1*(-2,-3.5)
# MatrixNumPy.py : __mul__(MatrixNumPy,Iterable)
# # MatrixNumPy constructor MatrixNumPy,list #
# MatrixNumPy.py : __mul__(MatrixNumPy,MatrixNumPy)
# # MatrixNumPy constructor MatrixNumPy (function,Numeric,Numeric) #
# [-3.5, -9.0, -14.5, -20.0, -25.5]
@multimethod
def __mul__(self, R : Iterable): # self is at LEFT of multiplication operand : self * R = MatrixNumPy * R, R is at Right
if __debug__:
print("MatrixNumPy.py : __mul__(MatrixNumPy,Iterable)")
return self.app(R)
# R : multiplicand
# matrix multiplication
# m2=MatrixNumPy([[-2],[-3.5]])
# m1*m2
# >>> m2
# [[-2. ]
# [-3.5]]
# MatrixNumPy @ 0x7f48a430ee10
# >>> m1*m2
# MatrixNumPy.py : __mul__(MatrixNumPy,MatrixNumPy)
# # MatrixNumPy constructor MatrixNumPy,numpy.ndarray #
# [[ -3.5]
# [ -9. ]
# [-14.5]
# [-20. ]
# [-25.5]]
#MatrixNumPy @ 0x7f48a4362590
@multimethod
def __mul__(self, m2 : object): # self is at LEFT of multiplication operand : self * m2 = MatrixNumPy * m2 = MatrixNumPy * MatrixNumPy, m2 is at Right of operator
if __debug__:
print("MatrixNumPy.py : __mul__(MatrixNumPy,MatrixNumPy)")
(n1,p1) = self.dim()
(n2,p2) = m2.dim()
if p1 != n2 : raise MatError('Produit de matrices impossible !')
# le produit aura pour format (n1,p2)
#return MatrixNumPy(numpy.matmul(self.A,m2.A))
return MatrixNumPy(self.A @ m2.A)
# m1.A @ m2.A
# array([[ -3.5],
# [ -9. ],
# [-14.5],
# [-20. ],
# [-25.5]])
</code></pre>
|
<python><deep-learning><gradient-descent>
|
2023-05-05 13:28:44
| 1
| 419
|
Damien Mattei
|
76,182,705
| 10,232,932
|
Use a virtual environment from a git repository in Visual Studio Code
|
<p>I created a virtual environment (venv) called <em><strong>.MF39</strong></em> on my local machine in Windows and pushed it into a git repository; let's call the repository "mastermind".</p>
<p>When I now go to a different machine and run the following command in the terminal of Visual Studio Code:</p>
<pre><code>git pull
</code></pre>
<p>and afterwards the virtual environment is also pulled and shows up in the explorer:
<a href="https://i.sstatic.net/CWprn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CWprn.png" alt="enter image description here" /></a></p>
<p>How can I now use/activate that environment on a different machine?</p>
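<p>For context, this is what I tried so far (a sketch — the activation script path differs per OS, and I am not sure a venv created on Windows is even usable after pulling it on another machine, since venvs hard-code absolute paths from the machine that created them):</p>

```shell
# Fallback that always works: recreate the environment locally
python3 -m venv .MF39            # "python" instead of "python3" on Windows
. .MF39/bin/activate             # Windows: .MF39\Scripts\activate
python -m pip --version          # sanity check: pip now comes from .MF39
```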
|
<python><git><visual-studio-code>
|
2023-05-05 13:25:51
| 2
| 6,338
|
PV8
|
76,182,623
| 4,172,765
|
Upload file using Python-requests
|
<p>I have a problem writing Python code to call a REST API that uploads files.
This is the request in Postman, where it works:<br />
<a href="https://i.sstatic.net/c2IKF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c2IKF.png" alt="enter image description here" /></a></p>
<p>But when I run the Python code below, the response is <code><Response [400]></code>.<br />
Here is my code:</p>
<pre class="lang-python prettyprint-override"><code>import requests
headers = {
'X-Mobsf-Api-Key': '587b10a3f848a4e16ca8f6f0703dcfeaad2d7056f7b394534da0a6ad430f4739'
}
files = {
'file': open("C://Users//ASUS//anaconda3//phd_implement//productivity_categor//1clickVPN-1.1.3.apk", 'rb')
}
response = requests.post('http://193.206.183.20:8000/api/v1/upload', headers=headers, files=files)
print(response)
</code></pre>
<p><a href="https://i.sstatic.net/uuGSx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uuGSx.png" alt="enter image description here" /></a></p>
<p>Note: I even tested on Linux; it shows the same error.</p>
<p><a href="https://i.sstatic.net/hP0E8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hP0E8.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/HKCZU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HKCZU.png" alt="enter image description here" /></a></p>
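<p>To compare against what Postman sends, the request can also be inspected locally before sending it. This is only a sketch (the key and file content are placeholders); passing an explicit filename/content-type tuple is something I wanted to rule out as the difference:</p>

```python
import io

import requests

# Build the multipart upload without sending it, to inspect what
# requests would actually put on the wire
files = {"file": ("1clickVPN-1.1.3.apk", io.BytesIO(b"fake apk bytes"),
                  "application/octet-stream")}
req = requests.Request(
    "POST",
    "http://193.206.183.20:8000/api/v1/upload",
    headers={"X-Mobsf-Api-Key": "placeholder-key"},
    files=files,
)
prepared = req.prepare()
print(prepared.headers["Content-Type"])  # multipart/form-data; boundary=...
```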
|
<python><python-requests>
|
2023-05-05 13:15:50
| 1
| 917
|
ThanhLam112358
|
76,182,588
| 15,632,586
|
What should I do to save files in the right directory in Flask?
|
<p>I am creating a website in Flask. Ideally, after pasting my report and adding the report name, the website would save the newly generated report in <code>static/{report_name}/input.txt</code> and a Graphviz file in <code>static/{report_name}/graph.gv</code>.</p>
<p>Here is my current js and HTML code that I am using:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>async function summarize() {
// Get the text area element
const text = document.getElementById("paragraph").value;
const reportname = document.getElementById("reportname").value;
// Get the summarize button element
const summarizeButton = document.getElementById("summarize");
// Add an event listener to the summarize button
summarizeButton.addEventListener("click", async() => {
// Get the text from the text area
const formData = new FormData();
// Add the text to the FormData object
formData.append("text", text);
formData.append("reportname", reportname);
// Save the user's input in a file named "input.txt" in the report folder
fetch("../save_input", {
method: "POST",
body: JSON.stringify({
text,
reportname
}),
});
// Save the selected graph in a file named "graph.gv" in the report folder
const graph = document.getElementById("selGraph").value;
fetch("../save_graph", {
method: "POST",
body: graph,
headers: {
"Content-Type": "text/plain"
},
});
const graphText = await fetch(graph).then(response => response.text());
fetch(`../static/reports/${reportname}/graph.gv`, {
method: "POST",
body: graphText,
headers: {
"Content-Type": "text/plain"
},
});
// Reload the page to display the Step2 content
document.getElementById("Step2").click();
});
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><p>Please add your report into this box: </p>
<br />
<textarea id="paragraph" , aria-labelledby="TextforReport" />
</textarea>
<p>Please add the report name into this box: </p>
<br />
<textarea id="reportname" ,aria-labelledby="Textforname" />
</textarea>
<br /></code></pre>
</div>
</div>
</p>
<p>And here are my code in Flask for the server:</p>
<pre><code>from flask import Flask, render_template, request, jsonify
import subprocess
import os
@app.route('/save_input', methods=['POST'])
def save_input():
data = request.get_json()
paragraph = data.get('text')
reportname = data.get('reportname')
# Create a directory for the report if it doesn't already exist
if not os.path.exists(f'static/reports/{reportname}'):
os.makedirs(f'static/reports/{reportname}')
# Save the input to a file named "input.txt" in the report folder
with open(f'static/reports/{reportname}/input.txt', 'w') as file:
file.write(paragraph)
return 'OK'
@app.route('/save_graph', methods=['POST'])
def save_graph():
graph = request.data.decode('utf-8')
reportname = request.form.get('reportname')
# Create a directory for the report if it doesn't already exist
if not os.path.exists(f'static/reports/{reportname}'):
os.makedirs(f'static/reports/{reportname}')
# Save the graph to a file named "graph.gv" in the report folder
with open(f'static/reports/{reportname}/graph.gv', 'w') as file:
file.write(graph)
return 'OK'
</code></pre>
<p>However, when I run the code in Flask, the directory with the report name is not created, and Flask returned this error for me (the report name is Emotet in this case, so the directory should be <code>static/reports/Emotet</code>):</p>
<p><a href="https://i.sstatic.net/JKKg2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JKKg2.png" alt="enter image description here" /></a></p>
<p>So what did I do wrong in this case, and what should I do to save both of my files in the right directory?</p>
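<p>Unrelated to the error above, one small thing I noticed about my own server code: the <code>exists()</code>/<code>makedirs()</code> pair can be collapsed into a single call with <code>exist_ok=True</code>, which also avoids a race between two concurrent requests (sketch using a temporary directory and the example name "Emotet"):</p>

```python
import os
import tempfile

base = tempfile.mkdtemp()
reportname = "Emotet"  # example report name from above
report_dir = os.path.join(base, "static", "reports", reportname)

# Replaces: if not os.path.exists(...): os.makedirs(...)
os.makedirs(report_dir, exist_ok=True)
os.makedirs(report_dir, exist_ok=True)  # second call is a harmless no-op
print(os.path.isdir(report_dir))  # True
```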
|
<javascript><python><flask>
|
2023-05-05 13:11:03
| 0
| 451
|
Hoang Cuong Nguyen
|
76,182,587
| 1,436,800
|
Error installing Syft 0.2.5 in Colab: subprocess exited with error
|
<p>I am trying to install Syft version 0.2.5 on Google Colab using the command <code>!pip install syft==0.2.5</code>. However, I am running into an error while installing the scipy dependency: the subprocess that installs the build dependencies did not run successfully and exited with code 1.
I am encountering the following error:</p>
<pre><code>Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: pip in /usr/local/lib/python3.10/dist-packages (23.0.1)
Collecting pip
Downloading pip-23.1.2-py3-none-any.whl (2.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 21.1 MB/s eta 0:00:00
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 23.0.1
Uninstalling pip-23.0.1:
Successfully uninstalled pip-23.0.1
Successfully installed pip-23.1.2
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting syft==0.2.5
Using cached syft-0.2.5-py3-none-any.whl (369 kB)
Collecting flask-socketio~=4.2.1 (from syft==0.2.5)
Using cached Flask_SocketIO-4.2.1-py2.py3-none-any.whl (16 kB)
Collecting Flask~=1.1.1 (from syft==0.2.5)
Using cached Flask-1.1.4-py2.py3-none-any.whl (94 kB)
Collecting lz4~=3.0.2 (from syft==0.2.5)
Using cached lz4-3.0.2.tar.gz (152 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: msgpack~=1.0.0 in /usr/local/lib/python3.10/dist-packages (from syft==0.2.5) (1.0.5)
Collecting numpy~=1.18.1 (from syft==0.2.5)
Using cached numpy-1.18.5.zip (5.4 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting phe~=1.4.0 (from syft==0.2.5)
Using cached phe-1.4.0.tar.gz (35 kB)
Preparing metadata (setup.py) ... done
Collecting Pillow~=6.2.2 (from syft==0.2.5)
Using cached Pillow-6.2.2.tar.gz (37.8 MB)
Preparing metadata (setup.py) ... done
Collecting requests~=2.22.0 (from syft==0.2.5)
Downloading requests-2.22.0-py2.py3-none-any.whl (57 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.0/58.0 kB 2.4 MB/s eta 0:00:00
Collecting scipy~=1.4.1 (from syft==0.2.5)
Using cached scipy-1.4.1.tar.gz (24.6 MB)
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>I would appreciate any help in resolving this issue.</p>
|
<python><scipy><google-colaboratory><pysyft>
|
2023-05-05 13:11:01
| 0
| 315
|
Waleed Farrukh
|
76,182,394
| 14,271,847
|
How to build a JWT RS256 Algorithm using Python?
|
<p>How do I fix the error "module 'jwt' has no attribute 'encode'" when trying to build a base64-encoded JWT for the assertion parameter of an API?</p>
<p>I'm trying to create a base64-encoded JWT to use as the "assertion" parameter in an API call, but I'm getting an error saying that the <code>encode</code> method of the <code>jwt</code> module does not exist.</p>
<p>I installed using:</p>
<pre><code>pip3 install jwt
</code></pre>
<p>Full code</p>
<pre><code>import jwt
private_key = open("C:\\file\\privateKey.pem", "r").read()
payload = {
"iss": "iss",
"aud": "aud",
"scope": "*",
"iat": 1683141898,
"exp": 1683228298
}
signed = jwt.encode(payload, private_key, algorithm='RS256')
print(signed)
</code></pre>
<p>Error:</p>
<pre><code> signed = jwt.encode(payload, private_key, algorithm='RS256')
^^^^^^^^^^
AttributeError: module 'jwt' has no attribute 'encode'
</code></pre>
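<p>A likely explanation: the PyPI package named <code>jwt</code> is a different, unrelated project; the <code>jwt.encode()</code> API comes from PyJWT (<code>pip uninstall jwt</code>, then <code>pip install PyJWT</code>). For intuition about what <code>encode</code> produces, here is a minimal stdlib-only sketch of JWT construction using HS256 (RS256 signing itself needs an RSA library such as <code>cryptography</code>, which PyJWT uses under the hood, so this is illustration rather than a replacement):</p>

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def hs256_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "."
                     + b64url(json.dumps(payload, separators=(",", ":")).encode()))
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

token = hs256_jwt({"iss": "iss", "aud": "aud"}, b"demo-secret")
print(token.count("."))  # a compact JWT always has exactly two dots
```

<p>With PyJWT installed (and <code>cryptography</code> for RS256), the original <code>jwt.encode(payload, private_key, algorithm='RS256')</code> call should work as written.</p>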
|
<python><encoding><jwt><rs256>
|
2023-05-05 12:50:52
| 1
| 429
|
sysOut
|
76,182,290
| 16,367,397
|
I installed the module, but it still raises ModuleNotFoundError
|
<p>I installed Python using brew and installed the package with <code>pip install moderngl</code>. As you can see in the screenshot, it is there, but importing it still gives an error that the module is not installed.
<a href="https://i.sstatic.net/ain5J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ain5J.png" alt="enter image description here" /></a></p>
|
<python><pip>
|
2023-05-05 12:38:30
| 1
| 348
|
steind.VY
|
76,182,348
| 1,581,090
|
How to create a small python code to get a list of participants of a teams call?
|
<p>Using Python, and having an Azure applicationID/objectID/tenantID/clientID and clientSecret, I want to access a Teams meeting using e.g. <code>requests</code> to get the list of participants of an ongoing Teams meeting. There seems to be a lot of confusion between existing and non-existing modules like <code>msgraph</code>, <code>msgraph-sdk</code> and <code>msgraph-sdk-python</code>. None of them seem to work, or they work differently.</p>
<p>I appreciate a small code python snippet that actually works, and that I can use to get the list of participants of an ongoing Teams call.</p>
<p>I had a code like the following which does not work:</p>
<pre><code>from microsoftgraph.client import Client
client = Client(client_id, client_secret, account_type='common')
# Make a GET request to obtain the list of participants
call_id = '123 456 789'
response = client.get(f'/communications/calls/{call_id}/participants', headers={'Authorization': f'Bearer {access_token}'})
participants = response.json()
</code></pre>
<p>Error:</p>
<blockquote>
<p>AttributeError: 'Client' object has no attribute 'get'</p>
</blockquote>
<p>I also found <a href="https://developer.microsoft.com/en-us/graph/quick-start" rel="nofollow noreferrer">this quick start guide</a> in which I unfortunately have to request access, and I will not know if someone ever will reply to my request.</p>
|
<python><azure><microsoft-teams>
|
2023-05-05 12:38:17
| 2
| 45,023
|
Alex
|
76,182,193
| 4,045,275
|
How to create a list or array of days falling on the 15th of each month
|
<h2>What I am trying to do</h2>
<p>I am trying to create a list or array of dates all falling on a specific day of the month, and then changing the first item. E.g. something like:</p>
<pre><code>20-Jan-2023,15-Feb-2023,15-Mar-2023, etc.
</code></pre>
<p>I am sure this is documented somewhere, but I find the quality of pandas documentation, at least when it comes to dates, absolutely atrocious and I have already banged my head against the wall for a while, so please be gentle if this has already been answered elsewhere :)</p>
<h2>What I have tried but doesn't work (pandas.date_range and bdate_range)</h2>
<p>I have tried with <code>pandas.date_range</code> and <code>bdate_range</code>, but I have 2 problems:</p>
<ol>
<li>I cannot change any item in the output generated below</li>
<li>If there is a way to use these functions to generate a list of dates every 15th day of a month, I have not found it. The docs for <code>date_range</code> <a href="https://pandas.pydata.org/docs/reference/api/pandas.date_range.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.date_range.html</a> don't mention what the valid options for <code>freq</code> are (see why I find the docs atrocious?). There is a list here <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases</a> but what I need doesn't seem to be there</li>
</ol>
<p>What I have tried is:</p>
<pre><code>import pandas as pd
x = pd.bdate_range(start='15-Jan-2023', periods=60, freq='MS')
x[0] = pd.to_datetime('15-Jan-2023') # doesn't work
TypeError: Index does not support mutable operations
</code></pre>
<h2>What works (but is convoluted, there must be a better way)</h2>
<p>The code below works, but it seems convoluted and clunky; I hope there is a better way.
I create a dataframe with a column of months to add (zero to 60), then apply DateOffset to each row. Also, this is not vectorised and likely to be slow (not in this toy example, but maybe on large datasets).</p>
<pre><code>import pandas as pd
import numpy as np
day_0 = pd.to_datetime('15-Jan-2023')
df = pd.DataFrame()
df['months to add'] = np.arange(0,60)
df['dates'] = df.apply(lambda x: day_0 + pd.DateOffset(months = x['months to add'] ), axis=1)
df.loc[0,'dates'] = pd.to_datetime('20-Jan-2023')
df = df.drop('months to add', axis=1)
</code></pre>
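<p>One simpler route (a sketch, assuming pandas is available): <code>date_range</code> accepts a <code>DateOffset</code> as <code>freq</code>, so anchoring the start on the 15th and stepping by one month yields the series directly, and converting the immutable <code>DatetimeIndex</code> to a plain list makes the first element replaceable:</p>

```python
import pandas as pd

# A DateOffset of one month as `freq` keeps the day-of-month anchored at 15
dates = list(pd.date_range(start="2023-01-15", periods=60,
                           freq=pd.DateOffset(months=1)))
dates[0] = pd.Timestamp("2023-01-20")  # a plain list is mutable, unlike an Index
print(dates[:2])
```

<p>If a pandas object is still needed afterwards, <code>pd.Series(dates)</code> or <code>pd.DatetimeIndex(dates)</code> rebuilds one from the edited list.</p>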
|
<python><pandas><date><datetime>
|
2023-05-05 12:28:21
| 1
| 9,100
|
Pythonista anonymous
|
76,182,022
| 9,827,719
|
Docker install WeasyPrint on Google Cloud Run for Python gives "Unable to locate package python-lxml"
|
<p>I am trying to install WeasyPrint on Google Cloud Run using a Dockerfile. This gives the error <code>Unable to locate package python-lxml</code>. What can I do to install WeasyPrint?</p>
<p><strong>Dockerfile</strong></p>
<pre><code># Specify Python
FROM python:latest
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Open port
EXPOSE 8080
# Add Python script
RUN mkdir /app
WORKDIR /app
COPY . .
# Install Weasyprint
RUN apt-get install -y python-lxml libcairo2 libpango1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
RUN pip install WeasyPrint
# Install dependencies
RUN pip install -r requirements.txt
# Set Pythons path
ENV PYTHONPATH /app
# Run script
CMD [ "python", "./main.py" ]
</code></pre>
<p><strong>Build output</strong></p>
<pre><code>starting build "e52c6efd-6ef4-47dd-b545-60bd7b90c529"
FETCHSOURCE
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
Initialized empty Git repository in /workspace/.git/
From https://github.com/a/pdfs
* branch afbc93509425d5b21407cae07c1400bdbfae2cdb -> FETCH_HEAD
HEAD is now at afbc935 Autogenerate PDFs
BUILD
Starting Step #0 - "Build"
Step #0 - "Build": Already have image (with digest): gcr.io/cloud-builders/docker
Step #0 - "Build": Sending build context to Docker daemon 1.132MB
Step #0 - "Build": Step 1/12 : FROM python:latest
Step #0 - "Build": latest: Pulling from library/python
Step #0 - "Build": Digest: sha256:30f9c5b85d6a9866dd6307d24f4688174f7237bc3293b9293d590b1e59c68fc7
Step #0 - "Build": Status: Downloaded newer image for python:latest
Step #0 - "Build": ---> 815c8c75dfc0
Step #0 - "Build": Step 2/12 : ENV PYTHONDONTWRITEBYTECODE 1
Step #0 - "Build": ---> Running in 9996805d690a
Step #0 - "Build": Removing intermediate container 9996805d690a
Step #0 - "Build": ---> 4234084722a2
Step #0 - "Build": Step 3/12 : ENV PYTHONUNBUFFERED 1
Step #0 - "Build": ---> Running in 102f817b7e22
Step #0 - "Build": Removing intermediate container 102f817b7e22
Step #0 - "Build": ---> 7399700a9710
Step #0 - "Build": Step 4/12 : EXPOSE 8080
Step #0 - "Build": ---> Running in 2cc1758ba13e
Step #0 - "Build": Removing intermediate container 2cc1758ba13e
Step #0 - "Build": ---> b0f8f7fb316e
Step #0 - "Build": Step 5/12 : RUN mkdir /app
Step #0 - "Build": ---> Running in 22f75153299b
Step #0 - "Build": Removing intermediate container 22f75153299b
Step #0 - "Build": ---> 4bfd53553c9c
Step #0 - "Build": Step 6/12 : WORKDIR /app
Step #0 - "Build": ---> Running in 721718a67160
Step #0 - "Build": Removing intermediate container 721718a67160
Step #0 - "Build": ---> e4fa70d738e0
Step #0 - "Build": Step 7/12 : COPY . .
Step #0 - "Build": ---> dfa52ca88166
Step #0 - "Build": Step 8/12 : RUN apt-get install -y python-lxml libcairo2 libpango1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
Step #0 - "Build": ---> Running in 2ef5ddbba608
Step #0 - "Build": Reading package lists...
Step #0 - "Build": Building dependency tree...
Step #0 - "Build": Reading state information...
Step #0 - "Build": Package libgdk-pixbuf2.0-0 is not available, but is referred to by another package.
Step #0 - "Build": This may mean that the package is missing, has been obsoleted, or
Step #0 - "Build": is only available from another source
Step #0 - "Build": However the following packages replace it:
Step #0 - "Build": libgdk-pixbuf-2.0-0
Step #0 - "Build":
Step #0 - "Build": [91mE: Unable to locate package python-lxml
Step #0 - "Build": E: Unable to locate package libpango1.0-0
Step #0 - "Build": E: Couldn't find any package by glob 'libpango1.0-0'
Step #0 - "Build": E[0m[91m: Couldn't find any package by regex 'libpango1.0-0'
Step #0 - "Build": E: Package 'libgdk-pixbuf2.0-0' has no installation candidate
Step #0 - "Build": The command '/bin/sh -c apt-get install -y python-lxml libcairo2 libpango1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info' returned a non-zero code: 100
Finished Step #0 - "Build"
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 100
Step #0 - "Build": [0m
</code></pre>
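<p>Two likely issues in the failing layer, reading the log: <code>apt-get update</code> is never run (so the package index in the image may be empty or stale), and several Debian package names have changed, as the log itself hints (<code>python-lxml</code> → <code>python3-lxml</code>, <code>libpango1.0-0</code> → <code>libpango-1.0-0</code>, <code>libgdk-pixbuf2.0-0</code> → <code>libgdk-pixbuf-2.0-0</code>). A sketch of a corrected step, assuming the <code>python:latest</code> base is current Debian (verify the exact names against your base image; <code>libpangocairo-1.0-0</code> is commonly needed by WeasyPrint as well):</p>

```dockerfile
# Refresh the package index in the same layer and use the current package names
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3-lxml libcairo2 libpango-1.0-0 libpangocairo-1.0-0 \
        libgdk-pixbuf-2.0-0 libffi-dev shared-mime-info \
    && rm -rf /var/lib/apt/lists/*
```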
|
<python><docker><weasyprint>
|
2023-05-05 12:05:27
| 0
| 1,400
|
Europa
|
76,181,779
| 1,308,967
|
Cache Django REST Framework HTTP Streaming Response?
|
<p>I am trying to cache Django REST Framework HTTP streaming responses.</p>
<p>My thinking is a Response sub-class can write the chunks into a temporary file as it streams, and on closing after streaming the final chunk, run a callable that copies the file into cache.</p>
<pre class="lang-py prettyprint-override"><code>from django.http import StreamingHttpResponse
class CachedStreamingHttpResponse(StreamingHttpResponse):
def __init__(self, streaming_content=(), *args, **kwargs):
self._post_render_callbacks = []
self._buffer = None
self.buffered = False
super().__init__(streaming_content, *args, **kwargs)
def _set_streaming_content(self, value):
self._buffer = TemporaryFile()
super()._set_streaming_content(value)
def post_render(self):
self._buffer.seek(0)
self.buffered = self._buffer
retval = self
for post_callback in self._post_render_callbacks:
newretval = post_callback(retval)
if newretval is not None:
retval = newretval
def buffer(self, b):
self._buffer.write(b)
return b
@staticmethod
def closing_iterator_wrapper(iterable, close):
try:
yield from iterable
finally:
close()
@property
def streaming_content(self):
buffered = map(self.buffer, super().streaming_content)
return self.closing_iterator_wrapper(buffered, self.post_render)
@streaming_content.setter
def streaming_content(self, value):
self._set_streaming_content(value)
def add_post_render_callback(self, callback):
"""A list of callables to be run after the final chunk is returned. Used to copy the response to cache."""
if self.buffered:
callback(self)
else:
self._post_render_callbacks.append(callback)
</code></pre>
<p>I plan to have my cache framework pass a callable into the response, which then calls it from a content_stream <code>finally</code> block to copy the temporary file into S3.</p>
<p>However with the above code I see <em>two</em> streams - one compressed, one not, and the response cannot be returned from cache.</p>
<p>I have modified this question to spare the reader from reading about syntax errors, but one was interesting. Because I overrode the <code>streaming_content</code> getter, I had to re-declare the setter (identically to how it was declared in the super-class).</p>
<p>Footnote: Caching streams is almost always wrong. But these responses are generated by complex queries and DRF serializers and viewsets, and we stream so our many users on very poor connections see data arriving more quickly. Given a stream locks resources on server and client for the duration, this might use more resources than not streaming; it might push some memory consumption to the database rather than webserver as records are cursored. The responses are up to a few megabytes, usually less, and these will be cached on our S3 cache tier. Redis would be too expensive.</p>
|
<python><django><caching><httpresponse>
|
2023-05-05 11:38:08
| 1
| 6,522
|
Chris
|
76,181,776
| 13,040,314
|
how to disable pep8 in pycharm for a code block
|
<p>I want to disable PEP 8 inspection for a block of code, for example an array containing long text; I do not want it to complain about lines being too long. How can I disable the PEP 8 inspection only for that array?</p>
|
<python><pycharm><pep8><pep>
|
2023-05-05 11:37:45
| 0
| 325
|
StaticName
|
76,181,711
| 5,016,028
|
Remove all dataframe columns that have a sum lower than given value
|
<p>I have a dataframe <code>df</code> where some columns have very small elements. I want to remove these columns based on the sum of the elements. If the sum is below 1.0, then I want to drop the column. Now I am trying this:</p>
<pre><code>df = df.loc[:, (df.sum() < 1.0).any(axis=0)]
</code></pre>
<p>But it throws an error that just says "True". Any idea how to make it work?</p>
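<p>The culprit is <code>.any(axis=0)</code>: it collapses the per-column boolean Series into a single scalar <code>True</code>, which <code>.loc</code> then treats as a (bogus) column key. The boolean Series itself is already the column mask <code>.loc</code> needs. A minimal sketch with made-up data:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [0.1, 0.2], "b": [2.0, 3.0], "c": [0.3, 0.3]})
# df.sum() is a Series indexed by column name; comparing it gives a boolean
# mask per column, which .loc uses directly to keep columns summing >= 1.0
df = df.loc[:, df.sum() >= 1.0]
print(list(df.columns))  # ['b']
```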
|
<python><pandas><dataframe>
|
2023-05-05 11:30:50
| 2
| 4,373
|
Qubix
|
76,181,453
| 3,561,433
|
Hyperparameter tuning for Custom Gym Env in Stable Baselines3 using RL Zoo
|
<p>I have created a Gym environment and am able to train it via PPO from Stable Baselines3. However, I am not getting desired results. The agent seems to be stuck at the local optimum rather than the global one. However, even reaching this stage required a lot of computing and I am unsure if I have proper parameter settings. I would like to perform hyperparameter tuning for this custom env and I am following this example - <a href="https://rl-baselines3-zoo.readthedocs.io/en/master/guide/tuning.html#hyperparameters-search-space" rel="nofollow noreferrer">https://rl-baselines3-zoo.readthedocs.io/en/master/guide/tuning.html#hyperparameters-search-space</a></p>
<p>I am unable to follow the document.</p>
<ol>
<li><p>It says that they use Optuna for optimization. However, nowhere is it mentioned that it must be installed first. Do we need to do that?</p>
</li>
<li><p>I ran my code using the same structure</p>
<pre><code>python train.py --algo ppo --env MountainCar-v0 -n 50000 -optimize --n-trials 1000 --n-jobs 2 --sampler tpe --pruner median
</code></pre>
</li>
</ol>
<p>My environment</p>
<pre><code>python mobile_robot_scripts/train_youbot_camera.py --algo ppo --env youbotCamGymEnv -n 10000 --n-trials 1000 --n-jobs 2 --sampler tpe --pruner median
</code></pre>
<p>Now, there are a couple of things that I think are wrong. Unlike the default MountainCar-v0 example, I have not registered my env with gym, because nowhere is it mentioned that registering is required or has any advantage beyond the convenience of <code>gym.make()</code>. I do not have Optuna installed. The code still runs, but I don't think it does any hyperparameter tuning; it just continues with normal training.</p>
<p>I would like to know how should I proceed further to actually perform hyperparameter tuning. As you can see, I am pretty much using the defaults. Here is my code for reference:-</p>
<pre><code>#add parent dir to find package. Only needed for source code build, pip install doesn't need it.
import os, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
os.sys.path.insert(0, parentdir)
from youbotCamGymEnv import youbotCamGymEnv
import datetime
from stable_baselines3 import ppo
from stable_baselines3.common.env_checker import check_env
def main():
env = youbotCamGymEnv(renders=False, isDiscrete=False)
# It will check your custom environment and output additional warnings if needed
check_env(env)
model = ppo.PPO("CnnPolicy", env, verbose=1)
model.learn(total_timesteps = 50000)
print("############Training completed################")
model.save(os.path.join(currentdir,"youbot_camera_trajectory"))
# del model
# env = youbotCamGymEnv(renders=True, isDiscrete=False)
model = ppo.PPO.load(os.path.join(currentdir,"youbot_camera_trajectory"))
# obs = env.reset()
# for i in range(1000):
# action, _states = model.predict(obs, deterministic=True)
# obs, reward, done, info = env.step(action)
# # print("reward is ", reward)
# env.render(mode='human')
# if done:
# obs = env.reset()
# env.close()
if __name__ == '__main__':
main()
</code></pre>
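<p>Worth noting (an observation from the code above, plus an assumption about the tooling): <code>train_youbot_camera.py</code> never parses the <code>--algo/--n-trials/--sampler</code> arguments at all — <code>main()</code> just calls <code>model.learn()</code> — which would explain why it "continues with normal training". Tuning has to go through the zoo's own <code>train.py</code>, whose optimization path imports Optuna at runtime, so Optuna must be installed separately, and the custom env generally needs to be registered so the zoo can <code>gym.make()</code> it by id:</p>

```shell
# Assumption: hyperparameter tuning is run via rl-baselines3-zoo's train.py,
# which requires Optuna for --optimize runs (not pulled in by stable-baselines3)
pip install optuna
```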
|
<python><reinforcement-learning><openai-gym><hyperparameters><stable-baselines>
|
2023-05-05 10:57:33
| 1
| 522
|
Manish
|
76,181,450
| 572,575
|
Matplotlib shows error "The number of FixedLocator locations, usually from a call to set_ticks, does not match the number of labels"
|
<p>I want to plot a graph and show tick labels in Matplotlib. I have several classes to plot on the same axes.</p>
<pre><code> import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
x = [1,1, # plot 2 value at X1
2,2,2,2, # plot 4 value at X2
3,3,3,3,3,
4,4,4,4,4,
5,5,5,5,5,]
y = [0.04,0.005, # value for plot at X1
0.3, 0.03,0.02,0.06, # value for plot at X2
0.2,0.4,0.07,0.5, 0.009,
0.1, 0.008, 0.3, 0.007,0.005,
0.01,0.001,0.0004,0.0006,0.002]
classes = ['A',#0
'B',#1
'C',#2
'D',#3
'E',#4
'F',#5
'G',#6
'H',#7
'I',#8
'J',#9
'K',#10
'L',#11
'M']#12
values = [8,12, # Class to plot at X1
1,5,6,8, # Class to plot at X2
0,2,3,4,5,
3,2,7, 11,12,
1,4,10,11,12]
my_xticks = ["X1","X2","X3","X4",
"X5","X6","X7","X8",
"X9","X10","X11","X12",
"X13","X14","X15"]
plt.xticks(x,my_xticks)
scatter = plt.scatter(x, y, c=values)
plt.legend(handles=scatter.legend_elements()[0], labels=classes)
</code></pre>
<p>When I run this code it shows an error like the one below. There is no error if I remove my_xticks. How do I fix it?</p>
<pre><code>ValueError: The number of FixedLocator locations (21), usually from a call to set_ticks, does not match the number of labels (15).
</code></pre>
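<p>The error is a counting mismatch: <code>plt.xticks(x, my_xticks)</code> passes 21 tick positions (one per data point, duplicates included) against 15 labels. The positions and labels must be the same length, with one position per distinct x value. A sketch of the fix (using the Agg backend here only so it runs headless):</p>

```python
import matplotlib
matplotlib.use("Agg")  # off-screen rendering; assumption: no GUI needed here
import matplotlib.pyplot as plt

x = [1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5]
y = [0.04, 0.005, 0.3, 0.03, 0.02, 0.06, 0.2, 0.4, 0.07, 0.5, 0.009,
     0.1, 0.008, 0.3, 0.007, 0.005, 0.01, 0.001, 0.0004, 0.0006, 0.002]

positions = sorted(set(x))             # one tick per distinct x value: [1..5]
labels = [f"X{p}" for p in positions]  # exactly as many labels as positions
plt.scatter(x, y)
plt.xticks(positions, labels)          # len(positions) == len(labels)
```

<p>If all 15 X1..X15 ticks are genuinely wanted, pass 15 positions too, e.g. <code>plt.xticks(range(1, 16), my_xticks)</code>.</p>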
|
<python><matplotlib>
|
2023-05-05 10:57:09
| 0
| 1,049
|
user572575
|
76,181,426
| 241,605
|
Custom rendering of nested field in django-tables2
|
<p>I have a table defined like this, where I want to apply custom rendering for a field:</p>
<pre><code>import django_tables2 as tables
class SetTable(tables.Table):
class Meta:
model = Set
fields = (
"series.title",
)
def render_series_title(self, value):
return 'x'
</code></pre>
<p>For simple fields on the object itself (i.e. without a period) this works. But when there is a nested relationship (in this case, the <code>Set</code> model has a foreign key relationship to <code>Series</code>, and I want to display the <code>Title</code> field from the series), the custom rendering is not triggered. I have tried <code>render_title</code>, <code>render_series_title</code>, <code>render_series__title</code>, etc., and none of these gets called.</p>
<p>The table is specified as the <code>table_class</code> in a <code>django_filters</code> view, and is displaying the data fine, except that it is using the default rendering and bypassing the custom rendering function.</p>
<p>What am I missing?</p>
|
<python><django><django-tables2>
|
2023-05-05 10:53:20
| 1
| 20,791
|
Matthew Strawbridge
|
76,181,408
| 5,672,673
|
Error concatenating layers in the generator in a GAN
|
<p>I am making a GAN (in deep learning) in TensorFlow Keras. The generator is a UNET architecture: we downsample the input image into a smaller, deeper tensor, then upsample to restore the size. Here is the code for the generator:</p>
<pre><code>OUTPUT_CHANNELS = 3
def downsample(filters,size,apply_batchnorm = True):
initializer = tf.random_normal_initializer(0.0,0.02)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2D(filters,
size,
strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False)
)
if apply_batchnorm:
result.add(tf.keras.layers.BatchNormalization())
result.add(tf.keras.layers.LeakyReLU())
return result
def upsample(filters,size,apply_dropout = False):
initializer = tf.random_normal_initializer(0.0,0.02)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2DTranspose(
filters,
size,
strides=2,
padding='same',
kernel_initializer=initializer,
use_bias = False
)
)
result.add(tf.keras.layers.BatchNormalization())
if apply_dropout:
result.add(tf.keras.layers.Dropout(0.5))
result.add(tf.keras.layers.LeakyReLU())
return result
def Generator():
inputs = tf.keras.layers.Input(shape=[256,256,1])
down_stack = [ #Output Size
downsample(32,4,False),
downsample(64,4,True) #128
]
up_stack = [ #Output&Channel Size
upsample(32,4), #128 128
upsample(32,4)
]
initializer = tf.random_normal_initializer(0.0,0.2)
last = tf.keras.layers.Conv2DTranspose(
OUTPUT_CHANNELS,
4,
2,
padding = 'same',
kernel_initializer = initializer,
activation = 'tanh'
)
x = inputs
skips = []
for down in down_stack:
x = down(x)
skips.append(x)
skips = reversed(skips[:-1])
for up,skip in zip(up_stack,skips):
x = up(x)
x = tf.keras.layers.Concatenate()([x,skip])
x = last(x)
return tf.keras.Model(inputs=inputs,outputs=x)
generator = Generator()
tf.keras.utils.plot_model(generator,show_shapes=True,dpi=64)
</code></pre>
<p>Here is the model; you can see that at concatenate_6 (the first concatenation) the dimensions are equal:</p>
<p><a href="https://i.sstatic.net/4njFJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4njFJ.png" alt="enter image description here" /></a></p>
<p>When I run it for an input of a gray image (256x256x1) <code> generator(example_input, training=True)</code>, it says the following error:</p>
<pre><code>ValueError: Exception encountered when calling layer 'concatenate_6' (type Concatenate).
Dimension 2 in both shapes must be equal, but are 2 and 1. Shapes are [32,128,2] and [32,128,1]. for '{{node model_7/concatenate_7/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](model_7/sequential_29/leaky_re_lu_30/LeakyRelu, model_7/sequential_27/leaky_re_lu_28/LeakyRelu, model_7/concatenate_7/concat/axis)' with input shapes: [32,128,2,32], [32,128,1,32], [] and with computed input tensors: input[2] = <3>.
Call arguments received by layer 'concatenate_4' (type Concatenate):
• inputs=['tf.Tensor(shape=(32, 128, 2, 32), dtype=float32)', 'tf.Tensor(shape=(32, 128, 1, 32), dtype=float32)']
</code></pre>
<p>I made the down and up layers symmetric, so how is it possible that there is a discrepancy? Please help!</p>
|
<python><tensorflow><keras><deep-learning><generative-adversarial-network>
|
2023-05-05 10:51:20
| 0
| 1,177
|
Linh Chi Nguyen
|
76,181,266
| 10,685,529
|
Type hints with Python ^3.10 and Pylance for VSCode
|
<p>I am trying to use the new type hinting features that came with Python 3.10. I use VSCode with the Pylance extension.</p>
<p>For instance, I have a method like this in a class:</p>
<pre class="lang-py prettyprint-override"><code>def execute(
self, query: str, return_type: str | None = None
) -> pd.DataFrame | list[Any] | None:
...
</code></pre>
<p>Then I get the following seen in the screenshot below:</p>
<p><a href="https://i.sstatic.net/El7Kk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/El7Kk.png" alt="enter image description here" /></a></p>
<p>So, my question is: Is Pylance not yet ready for Python 3.10 when there could be multiple return types or am I doing something wrong?</p>
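<p>For context, a hedged note on where the <code>X | Y</code> syntax is and isn't valid: it parses at runtime only on Python ≥ 3.10, but the same spelling works on 3.7–3.9 when annotation evaluation is deferred, and Pylance understands both spellings. A minimal self-contained sketch (the body is a toy stand-in, not the original method):</p>

```python
from __future__ import annotations  # defers evaluation so `X | Y` parses on 3.7+

from typing import Any

def execute(query: str, return_type: str | None = None) -> list[Any] | None:
    # Toy body for illustration: split a non-empty query into tokens
    return query.split() if query else None

print(execute("select 1"))  # ['select', '1']
```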
|
<python><python-typing><pylance>
|
2023-05-05 10:34:26
| 1
| 1,353
|
Lewi Uberg
|
76,181,251
| 12,466,687
|
How to apply value_counts() to multiple columns in polars python?
|
<p>I am trying to apply <code>value_counts()</code> to multiple columns, but getting an error.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
┌──────────────┬─────────────┐
│ sub-category ┆ category │
│ --- ┆ --- │
│ str ┆ str │
╞══════════════╪═════════════╡
│ tv ┆ electronics │
│ mobile ┆ mobile │
│ tv ┆ electronics │
│ wm ┆ electronics │
│ micro ┆ kitchen │
│ wm ┆ electronics │
└──────────────┴─────────────┘
""")
</code></pre>
<p>If I convert it to <code>Pandas</code>, I can use <code>apply</code>:</p>
<pre class="lang-py prettyprint-override"><code>pl.from_pandas(
df.to_pandas().apply(lambda x: x.value_counts()).reset_index()
)
</code></pre>
<pre><code>shape: (6, 3)
┌─────────────┬──────────────┬──────────┐
│ index ┆ sub-category ┆ category │
│ --- ┆ --- ┆ --- │
│ str ┆ f64 ┆ f64 │
╞═════════════╪══════════════╪══════════╡
│ electronics ┆ null ┆ 4.0 │
│ kitchen ┆ null ┆ 1.0 │
│ micro ┆ 1.0 ┆ null │
│ mobile ┆ 1.0 ┆ 1.0 │
│ tv ┆ 2.0 ┆ null │
│ wm ┆ 2.0 ┆ null │
└─────────────┴──────────────┴──────────┘
</code></pre>
<p>How do I get the same result in Polars?</p>
|
<python><dataframe><apply><python-polars><unpivot>
|
2023-05-05 10:31:34
| 4
| 2,357
|
ViSa
|
76,181,200
| 14,548,431
|
Python send e-mail async with trio
|
<p>I want to send e-mails in an asynchronous way by using the package trio. I found the package aiosmtplib, but this is only for asyncio.</p>
<p>Is there any package which I can use for it, or has anyone an idea of how to implement this with trio?</p>
<p>Update:
I found the package <a href="https://pypi.org/project/mailers/" rel="nofollow noreferrer">mailers</a>, which says it uses anyio, so trio can be used. But under the hood it uses aiosmtplib, too.</p>
|
<python><email><asynchronous><smtp><python-trio>
|
2023-05-05 10:24:01
| 0
| 653
|
Phil997
|
76,181,169
| 3,729,714
|
Python - filedialog.askdirectory won't recognize initialdir of local drive
|
<p>I'm trying to get the second filedialog call in my script to open to my local C drive, but it will only open the first call's initialdir, which is a UNC path. (Both are stored in different variables).</p>
<p>1: Is the following syntax correct, and is there anything I need to do to force the code to use the second call's initialdir?</p>
<pre><code>filedialog.askdirectory(initialdir=r"C:\\Users\\User1\\Desktop",title='Please select a directory')
</code></pre>
<p>2: Also, can you have a variable in the initialdir, like this?</p>
<pre><code>filedialog.askdirectory(initialdir=r"C:\\Users\\User1\\"+variable+"\UserFolder1",title='Please select a directory')
</code></pre>
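<p>A hedged side note on the path strings themselves, which is one thing worth ruling out: combining a raw-string prefix <code>r"..."</code> with doubled backslashes produces literal double backslashes in the path, and native Windows dialogs commonly fall back to the last-used directory when <code>initialdir</code> is not a valid path. And yes, <code>initialdir</code> is an ordinary string, so composing it from a variable is fine:</p>

```python
import ntpath  # Windows path semantics regardless of host OS (for the demo)

# r"..." already keeps backslashes literal, so don't double them as well:
a = r"C:\Users\User1\Desktop"
b = "C:\\Users\\User1\\Desktop"   # escaped spelling of the same path
print(a == b)  # True

# initialdir is just a string, so building it from a variable works:
user = "User1"  # hypothetical variable
initial = ntpath.join(r"C:\Users", user, "UserFolder1")
print(initial)  # C:\Users\User1\UserFolder1
```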
|
<python><python-3.x>
|
2023-05-05 10:20:49
| 0
| 1,735
|
JM1
|
76,180,846
| 1,401,202
|
Send notification from cron action in Odoo 16
|
<p>I have a recurring CRON action in my app. When fired it updates some records in the database. I want to inform the creator of the record that it has been updated by the CRON task. I've tried many variations of the code below but no message is sent. There is no error thrown.</p>
<pre><code>def _send_discuss_message(self, plan):
recipient_id = plan.create_uid.id
channel = self.env['mail.channel'].channel_get(
[recipient_id])
user_id = self.env.user.id
message = ("Your plan %s has been validated") % (plan.name)
channel_id = self.env['mail.channel'].browse(channel["id"])
channel_id.message_notify(
author_id=user_id,
partner_ids=[recipient_id],
subject="Test",
body=(message),
message_type='comment',
subtype_xmlid="mail.mt_comment",
notify_by_email=False
)
</code></pre>
|
<python><odoo><odoo-16>
|
2023-05-05 09:40:12
| 1
| 8,078
|
kpg
|
76,180,798
| 11,758,843
|
Jenkins Job Failing for urllib3: ValueError: Timeout value connect was <object object at 0x7efe5adb9aa0>, but it must be an int, float or None
|
<p>As of May 4, 2023, at 16:00, I started seeing one of our Jenkins job failing with the following error:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 822, in get_info
return json.loads(self.jenkins_open(
File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 560, in jenkins_open
return self.jenkins_request(req, add_crumb, resolve_auth).text
File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 576, in jenkins_request
self.maybe_add_crumb(req)
File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 373, in maybe_add_crumb
response = self.jenkins_open(requests.Request(
File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 560, in jenkins_open
return self.jenkins_request(req, add_crumb, resolve_auth).text
File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 579, in jenkins_request
self._request(req))
File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 553, in _request
return self._session.send(r, **_settings)
File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/requests/adapters.py", line 483, in send
timeout = TimeoutSauce(connect=timeout, read=timeout)
File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/urllib3/util/timeout.py", line 119, in __init__
self._connect = self._validate_timeout(connect, "connect")
File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/urllib3/util/timeout.py", line 156, in _validate_timeout
raise ValueError(
ValueError: Timeout value connect was <object object at 0x7efe5adb9aa0>, but it must be an int, float or None.
</code></pre>
<p>As nothing had changed on my side in my configuration, it looks like an upstream issue.</p>
<p>I was using <code>requests</code> Python library in my job and <code>requests</code> uses <code>urllib3</code>.</p>
<p>How can we fix this?</p>
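<p>A hedged reading of the traceback: the sentinel <code><object object at ...></code> looks like <code>socket._GLOBAL_DEFAULT_TIMEOUT</code>, which python-jenkins passes through as its default timeout; urllib3 1.x tolerated that sentinel, while urllib3 2.x (first released in late April 2023) validates timeouts strictly. Since nothing changed on your side, an unpinned transitive upgrade to urllib3 2.x is the likely trigger, and a common workaround is pinning it below 2 until requests/python-jenkins are upgraded:</p>

```shell
# e.g. in requirements.txt / constraints, or directly:
pip install 'urllib3<2'
```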
|
<python><jenkins><python-requests><jenkins-pipeline><urllib3>
|
2023-05-05 09:35:31
| 2
| 5,920
|
Abdullah Khawer
|
76,180,541
| 14,949,601
|
In Python multiprocessing, why is the `qsize()` of a queue not 0 while `get_nowait()` raises Empty?
|
<p>I wrote a Python program where I use the <code>multiprocessing</code> library to communicate between a parent process and a child process.</p>
<p><a href="https://i.sstatic.net/oxMJ9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oxMJ9.png" alt="enter image description here" /></a></p>
<p>In this code and the running results, the parent process forks only one child process, which waits for data from the parent. <code>batch_computing_inputs</code> is a multiprocessing Queue object through which the parent process can pass data to the child process.</p>
<p>When I execute <code>qsize()</code> in the child process, it reports that the size of the queue is 1, which means the queue has an item I could pick out. The item is an image of about 1 MB to 2 MB in base64 format.</p>
<p>But when I execute the <code>get_nowait()</code> method, it raises an exception in the <code>try/except</code> block.</p>
<pre><code>Traceback (most recent call last):
File "/home/cuichengyu/github/concurrent_batch_computing/server_app.py", line 45, in batch_computing
cur_batch_computing_input = batch_computing_inputs.get_nowait()
File "/home/cuichengyu/anaconda3/lib/python3.7/multiprocessing/queues.py", line 126, in get_nowait
return self.get(False)
File "/home/cuichengyu/anaconda3/lib/python3.7/multiprocessing/queues.py", line 107, in get
raise Empty
_queue.Empty
child process - 1683277465.2352884 finish get inputs data.
</code></pre>
<p>So why is the size of the queue 1 when I cannot get data from it?</p>
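<p>A likely explanation, paraphrasing the multiprocessing documentation: <code>qsize()</code> is only approximate, and items put on a <code>multiprocessing.Queue</code> are flushed to the underlying pipe by a background feeder thread, so the queue can already count an item before its bytes are fully readable on the consumer side — especially plausible with 1–2 MB payloads. The usual pattern is a blocking <code>get</code> with a timeout rather than pairing <code>qsize()</code> with <code>get_nowait()</code>:</p>

```python
import multiprocessing as mp
import queue  # mp.Queue.get raises queue.Empty on timeout

def consume(q, timeout=5):
    """Block briefly instead of trusting qsize(): large items need time to
    travel through the queue's feeder thread and pipe."""
    try:
        return q.get(timeout=timeout)
    except queue.Empty:
        return None

if __name__ == "__main__":
    q = mp.Queue()
    q.put("payload")
    print(consume(q))  # payload
```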
|
<python><multiprocessing><queue>
|
2023-05-05 09:05:49
| 0
| 329
|
dongrixinyu
|
76,180,497
| 353,337
|
Read XML prolog
|
<p>I have an XML file</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<?foo class="abc"?>
<document>
...
</document>
</code></pre>
<p>from which I'd like to read the <code>foo</code> prolog using Python.</p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET
tree = ET.parse("bar.xml")
# ??
</code></pre>
<p>Any hints?</p>
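<p>For what it's worth, here is a sketch of one possible approach (an assumption on my part, not necessarily the canonical way): on Python 3.8+, <code>iterparse</code> can report processing instructions via the <code>"pi"</code> event, and the PI node stores target and data together in its <code>.text</code>:</p>

```python
import xml.etree.ElementTree as ET

def read_prologue_pis(path):
    """Collect (target, data) for every processing instruction in the file."""
    pis = []
    for _event, node in ET.iterparse(path, events=("pi",)):
        # the PI node stores "target data" in .text, e.g. 'foo class="abc"'
        target, _, data = (node.text or "").partition(" ")
        pis.append((target, data))
    return pis
```

<p>Note that the XML declaration itself is not reported as a PI, so only <code>&lt;?foo ...?&gt;</code> should show up.</p>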
|
<python><xml>
|
2023-05-05 09:00:08
| 2
| 59,565
|
Nico Schlömer
|
76,180,141
| 11,696,358
|
Select the maximum number of rows so that the sum of the columns is balanced
|
<p>Suppose I have a table with the following columns and many more rows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Id</th>
<th>n_positive_class1</th>
<th>n_positive_class2</th>
<th>n_positive_class3</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0</td>
<td>10</td>
<td>4000</td>
</tr>
<tr>
<td>2</td>
<td>122</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>4</td>
<td>5234</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>I'd like to select the maximum number of rows (by Id) so that the sum of the 3 columns for the chosen rows is as balanced as possible (perfect balance is impossible and the tolerance on the balance is probably a parameter).</p>
<p>Is there an already available function to do so?
Otherwise, could you help me build such a function with the typical Python libraries (pandas, numpy, scipy, etc.)?</p>
<p>My use case is balancing a dataset that doesn't fit in memory for training a machine learning model.</p>
|
<python><pandas><numpy><imbalanced-data>
|
2023-05-05 08:16:59
| 0
| 478
|
user11696358
|
76,180,106
| 5,404,765
|
How to store name value pair in Django db?
|
<p>I want to create a student record database using the Django DB. The class will look something like the one below:</p>
<pre><code>class StudentRecord(models.Model):
studentname = models.CharField()
    # marks = {'Science': 98, 'Maths': 99, 'Computer': 95}
</code></pre>
<p>I want to add a 'marks' field, which is like a dictionary or a list of tuples. What is the best way to achieve this in Django?
I am new to Django, and I could not find the relevant answer in the docs: <a href="https://docs.djangoproject.com/en/4.2/topics/db/models/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/4.2/topics/db/models/</a></p>
|
<python><django><web-applications>
|
2023-05-05 08:11:45
| 3
| 311
|
Darshan Bhat
|
76,179,953
| 11,082,866
|
Pycairo error while deploying django app on AWS
|
<p>My app deployed on AWS Elastic Beanstalk was working fine up until one hour ago, and now it is giving a 502 Bad Gateway error. When I tried to redeploy the application, it gave me the following error:</p>
<pre><code>2023/05/05 07:41:44.917340 [ERROR] An error occurred during execution of command [self-startup] - [InstallDependency]. Stop running the command. Error: fail to install dependencies with requirements.txt file with error Command /bin/sh -c /var/app/venv/staging-LQM1lest/bin/pip install -r requirements.txt failed with error exit status 1. Stderr: error: subprocess-exited-with-error
× Building wheel for pycairo (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [15 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-38
creating build/lib.linux-x86_64-cpython-38/cairo
copying cairo/__init__.py -> build/lib.linux-x86_64-cpython-38/cairo
copying cairo/__init__.pyi -> build/lib.linux-x86_64-cpython-38/cairo
copying cairo/py.typed -> build/lib.linux-x86_64-cpython-38/cairo
running build_ext
Package cairo was not found in the pkg-config search path.
Perhaps you should add the directory containing `cairo.pc'
to the PKG_CONFIG_PATH environment variable
No package 'cairo' found
Command '['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pycairo
ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects
</code></pre>
<p>I don't even have pycairo in my requirements.txt. Is there any dependency using this in the background? If so, how can I find it and resolve this?</p>
<p>I added cairo in .ebextensions as well, but it didn't help.</p>
<p>requirements.txt</p>
<pre><code>asgiref==3.2.7
Django==3.0.5
django-cors-headers==3.2.1
djangorestframework==3.11.0
djangorestframework-simplejwt==4.4.0
PyJWT==1.7.1
pytz==2020.1
sqlparse==0.3.1
djangorestframework-jwt
django-storages
boto3==1.11.4
zplgrf
reportlab
pandas==1.4.2
python-barcode
pretty_html_table
folium
googlemaps
pyexcel
pyexcel_xls
pyexcel-xlsx
openpyxl
gunicorn
apscheduler
django-smtp-ssl
xlsxwriter
django-simple-search
gmplot
awscrt
awsiotsdk
psycopg2-binary==2.8.6
</code></pre>
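<p>Not an answer to the Beanstalk failure itself, but a sketch of how one might hunt for the culprit locally (assuming the same environment can be reproduced): scan installed distribution metadata for anything that declares pycairo as a requirement. <code>who_requires</code> is a hypothetical helper name, and the match is a rough string comparison:</p>

```python
from importlib.metadata import distributions

def who_requires(package_name):
    """Hypothetical helper: return installed distributions whose metadata
    lists `package_name` as a requirement (rough string match)."""
    needle = package_name.lower()
    dependents = []
    for dist in distributions():
        for req in (dist.requires or []):
            # requirement lines look like: pycairo>=1.20 ; extra == "cairo"
            if req.lower().partition(";")[0].strip().startswith(needle):
                dependents.append(dist.metadata["Name"] or "<unknown>")
    return sorted(set(dependents))

print(who_requires("pycairo"))
```

<p>Running this in a virtualenv built from the same requirements.txt should reveal which package (for example, a plotting dependency of one of the Excel/reporting libraries) pulls pycairo in.</p>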
|
<python><django><amazon-elastic-beanstalk>
|
2023-05-05 07:52:21
| 1
| 2,506
|
Rahul Sharma
|
76,179,643
| 13,957,731
|
Pymysql reset cursor like mysql.connector
|
<p>I have a piece of code that I am migrating from the mysql.connector driver to pymysql.
mysql.connector has <code>cursor.reset(free=True)</code>; does pymysql have equivalent functionality?</p>
|
<python><mysql><pymysql>
|
2023-05-05 07:12:31
| 0
| 325
|
GurbaniX
|
76,179,430
| 1,976,597
|
Perform some action when a Python module is imported
|
<p>I want to run a function only if a particular package has been imported, or is later imported.</p>
<p>In pseudo code, this is what I want to do:</p>
<pre class="lang-py prettyprint-override"><code>
if 'some_package' in sys.modules: # The easy part
do_things_with_some_package()
else:
# The pseudo code
somehow_register_callback_for_when_module_imported(
module_name="some_package",
callback=do_things_with_some_package,
)
</code></pre>
<p>My code is in a library, and <em>might</em> be used (by another user) in the same project as <code>some_package</code> or might not. When it is, my library might be imported before or after <code>some_package</code>.</p>
<p>My code <em>should not</em> call <code>do_things_with_some_package</code> (because it is slow) unless code consuming my code imports <code>some_package</code> at some point.</p>
<p>Is this possible with Python?</p>
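<p>One way this might be done is with an import hook (a sketch with my own naming, assuming a single one-shot callback is enough): a <code>MetaPathFinder</code> that wraps the real loader, so the callback fires <em>after</em> the module has finished executing. The already-imported case is still handled by the <code>sys.modules</code> check from the pseudo code above.</p>

```python
import importlib.abc
import importlib.util
import sys

class PostImportHook(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """One-shot hook: run `callback(module)` right after `module_name`
    is first imported."""

    def __init__(self, module_name, callback):
        self.module_name = module_name
        self.callback = callback

    def find_spec(self, fullname, path=None, target=None):
        if fullname != self.module_name:
            return None
        # Step aside so the *real* spec for the module can be located.
        sys.meta_path.remove(self)
        try:
            spec = importlib.util.find_spec(fullname)
        finally:
            sys.meta_path.insert(0, self)
        if spec is None:
            return None
        self._real_loader = spec.loader
        spec.loader = self  # interpose ourselves as the loader
        return spec

    def create_module(self, spec):
        return self._real_loader.create_module(spec)

    def exec_module(self, module):
        self._real_loader.exec_module(module)
        sys.meta_path.remove(self)  # one-shot: unregister ourselves
        self.callback(module)
```

<p>Registering would then be <code>sys.meta_path.insert(0, PostImportHook("some_package", do_things_with_some_package))</code>, after first checking <code>sys.modules</code> for the already-imported case.</p>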
|
<python>
|
2023-05-05 06:40:49
| 3
| 4,914
|
David Gilbertson
|
76,179,376
| 14,094,546
|
Which statistics test should I use to check difference in frequency between 2 populations?
|
<p>I have two populations and I want to know whether their difference is significant or not (since population B is the result of a subset of population A after some filtering criteria).
Which test should I use (since I have only 2 numbers)? I am confused trying to find the correct approach (and implement it in Python).
Thanks a lot.</p>
<p><a href="https://i.sstatic.net/5PO01.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5PO01.png" alt="enter image description here" /></a></p>
|
<python><statistics><chi-squared><t-test><population>
|
2023-05-05 06:31:32
| 0
| 520
|
Chiara
|
76,179,314
| 1,668,622
|
How to correctly assign elements of recursively typed collections?
|
<p>According to <a href="https://github.com/python/mypy/issues/731" rel="nofollow noreferrer">https://github.com/python/mypy/issues/731</a> recursive type definition should be possible and enabled by default. And for me it's working somehow.</p>
<p>However, I didn't manage to assign a value from a recursively typed list without <code>mypy</code> complaining:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Union # need this, because strings cannot be `or`-ed
NestedStringSequence = list[Union[str, "NestedStringSequence"]]
element: NestedStringSequence = []
stack: NestedStringSequence = [element]
stack.append([])
element = stack.pop()
</code></pre>
<p>So <code>element</code> should be <code>str</code> or <code>NestedStringSequence</code>, yet <code>mypy</code> gives me</p>
<pre class="lang-bash prettyprint-override"><code>$ mypy nested.py
nested.py:10: error: Incompatible types in assignment (expression has type "Union[str, NestedStringSequence]", variable has type "NestedStringSequence") [assignment]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>.. which sounds like a contradiction to me because <code>NestedStringSequence</code> <em>is</em> by definition <code>Union[str, NestedStringSequence]</code></p>
<p>But of course it's not a contradiction - <code>stack.pop()</code> could return a <code>str</code>, and only I know that I'm only <code>append</code>ing lists..</p>
<p>How should I deal with this? I could <code>cast</code> the result of <code>stack.pop()</code> of course, but that would be cheating, wouldn't it?</p>
<p>The answer to a <a href="https://stackoverflow.com/a/49101404/1668622">similar question</a> suggests <code>@overload</code>ing, but I guess that would imply somehow wrapping <code>list.pop()</code>, which would be even uglier in my eyes..</p>
<p>I'm using <code>mypy</code> 1.2.0</p>
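<p>Not necessarily the canonical answer, but one cast-free pattern is to narrow the popped value with <code>isinstance</code>, which mypy understands (sketch; the bare <code>list[...]</code> needs Python 3.9+):</p>

```python
from typing import Union  # need this, because strings cannot be `or`-ed

NestedStringSequence = list[Union[str, "NestedStringSequence"]]

stack: NestedStringSequence = [[]]
stack.append([])

popped = stack.pop()  # mypy sees: Union[str, NestedStringSequence]
if isinstance(popped, str):
    raise TypeError("expected a nested list, got a str")
element: NestedStringSequence = popped  # narrowed to the list arm, no cast
```

<p>Unlike <code>cast</code>, the narrowing is also checked at runtime, so the "only I know I'm only appending lists" invariant becomes an explicit check instead of a promise.</p>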
|
<python><python-3.x><mypy><recursive-type>
|
2023-05-05 06:21:44
| 1
| 9,958
|
frans
|
76,179,056
| 2,566,283
|
A package import works when running script directly but not when that script is imported by another
|
<p>I have this situation:</p>
<pre><code>main_script.py
tools/
__init__.py
a.py
b.py
</code></pre>
<p>In <code>a.py</code> I have the following</p>
<pre><code>import argparse
from b import b_func
def main():
parser = argparse.ArgumentParser()
parser.add_argument('a')
args = parser.parse_args()
b_func(args.a)
def a_func(x):
return x
if __name__ == '__main__':
main()
</code></pre>
<p>If I want to run this, everything works fine: <code>python tools/a.py foo</code></p>
<p>However, in my <code>main_script.py</code> file, I am importing <code>a.py</code>:</p>
<pre><code>import argparse
from a import a_func
def main():
parser = argparse.ArgumentParser()
parser.add_argument('a')
args = parser.parse_args()
a_func(args.a)
if __name__ == '__main__':
main()
</code></pre>
<p>Now if I run <code>python main_script.py foo</code> I will get this error:</p>
<pre><code>ModuleNotFoundError: No module named 'b'
</code></pre>
<p>I can correct this by doing <code>from tools.b import b_func</code> inside <code>a.py</code> but then I won't be able to do <code>python a.py foo</code> anymore.</p>
<p>Why is this happening and how can I correct it? How can I make sure these references are all correct?</p>
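<p>For what it's worth, here is a self-contained sketch of the usual fix: keep the absolute import <code>from tools.b import b_func</code> inside <code>a.py</code> and run it as a module with <code>python -m tools.a</code> instead of by path. The snippet just builds a throwaway copy of the layout in a temp directory to show both entry points working (the file contents are simplified stand-ins):</p>

```python
import os
import subprocess
import sys
import tempfile
import textwrap

files = {
    "tools/__init__.py": "",
    "tools/b.py": "def b_func(x):\n    return x\n",
    "tools/a.py": textwrap.dedent("""\
        from tools.b import b_func  # absolute import, resolved from the project root

        def a_func(x):
            return b_func(x)

        if __name__ == "__main__":
            import sys
            print(a_func(sys.argv[1]))
        """),
    "main_script.py": textwrap.dedent("""\
        import sys
        from tools.a import a_func

        print(a_func(sys.argv[1]))
        """),
}

with tempfile.TemporaryDirectory() as root:
    for rel, src in files.items():
        path = os.path.join(root, rel)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as fh:
            fh.write(src)
    # run a.py as a module instead of as a file path:
    run_a = subprocess.run([sys.executable, "-m", "tools.a", "foo"],
                           cwd=root, capture_output=True, text=True)
    # the top-level script works unchanged:
    run_main = subprocess.run([sys.executable, "main_script.py", "foo"],
                              cwd=root, capture_output=True, text=True)
    print(run_a.stdout.strip(), run_main.stdout.strip())
```

<p>The point of <code>-m</code> is that the current directory (the project root) ends up on <code>sys.path</code>, so <code>tools.b</code> resolves the same way in both invocations.</p>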
|
<python><python-3.x><import>
|
2023-05-05 05:25:06
| 2
| 2,724
|
teepee
|
76,178,810
| 16,527,596
|
How to call the methods of 3 other classes into another class in python
|
<p>So far I have built 3 classes:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcol
color_dict = mcol.TABLEAU_COLORS
def lin2db(x):
return 10*np.log10(x)
def db2lin(x):
return 10**(x/10)
class ADC():
fs_step = 2.75625e3
def __init__(self,n_bit):
self.n_bit=n_bit
@property
def snr(self):
n_bit=self.n_bit
M=2**n_bit
snr_lin=M**2
return snr_lin
@property
def snr_db(self):
snr_lin=self.snr
snr_db=lin2db(snr_lin)
return snr_db
@property
def m(self):
M=2**self.n_bit
return M
class Digital_signal_information(object):
def __init__(self, signal_power :float, noise_power :float, n_bit_mod:int):
self.__signal_power=signal_power
self.__noise_power=noise_power
self.__n_bit_mod=n_bit_mod
@property
def nbit_mod(self):
return self.__n_bit_mod
@nbit_mod.setter
def set_nbit_mod(self,value):
self.__n_bit_mod=value
@property
def signal_power(self):
return self.__signal_power
@property
def noise_power(self):
return self.__noise_power
@noise_power.setter
def set_noise_power(self):
self.__noise_power=0;
class Line(object):
def __init__(self, loss_coefficient: float, length: float):
self._loss_coefficient = loss_coefficient/1000
self._length = length/1000
@property
def length(self):
return self._length
@property
def loss_coefficient(self):
return self._loss_coefficient;
@property
def Loss(self):
return self.loss_coefficient * self.length
def Noise_Generation(self,sn):
return 1e-9 * sn * self.length
def snr_digital(self,sn):
return lin2db(sn)-lin2db(self.Noise_Generation(sn))-self.Loss
</code></pre>
<p>I know that they work individually without any problems if you instantiate things outside (I will do so later in another function). For example:</p>
<pre><code>dsi=Digital_signal_information(1,2,3)
sn=dsi.signal_power
l=Line(1,2)
print(l.snr_digital(sn))
</code></pre>
<p>So it will print what I want.</p>
<p>The thing is, for the 4th class, <code>PCM</code>, the project says:</p>
<p>Modify the class <code>PCM</code> considering as input attributes <code>Digital_signal_information</code>, <code>ADC</code>, and <code>Line</code>, such that <code>PCM</code> acts as a container class which defines the transmission system in use. Modify the method <code>snr</code>, which has to retrieve <code>snr_digital</code> from <code>Line</code>, and with this we need to print out the <code>db2lin</code> version of it (it's just a function defined at the beginning).</p>
<p>How do I do this, considering that there are many other functions to be defined as well? How do I correctly call these methods from the other classes?</p>
<p>The idea is not to initialize most of them, because we will do so in an <code>exercise</code> function after we are done.</p>
<p>How would you continue beyond this?</p>
<pre><code>class PCM(ADC,Digital_signal_information,Line):
def snr(self):
return
</code></pre>
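<p>Reading "container class" as composition rather than inheritance (my interpretation, not the project's official solution), one possible sketch, with the classes trimmed down to just the parts <code>snr</code> needs:</p>

```python
import numpy as np

def lin2db(x):
    return 10 * np.log10(x)

def db2lin(x):
    return 10 ** (x / 10)

class Line:
    def __init__(self, loss_coefficient, length):
        self._loss_coefficient = loss_coefficient / 1000
        self._length = length / 1000

    @property
    def Loss(self):
        return self._loss_coefficient * self._length

    def Noise_Generation(self, sn):
        return 1e-9 * sn * self._length

    def snr_digital(self, sn):
        return lin2db(sn) - lin2db(self.Noise_Generation(sn)) - self.Loss

class ADC:
    def __init__(self, n_bit):
        self.n_bit = n_bit

class Digital_signal_information:
    def __init__(self, signal_power, noise_power, n_bit_mod):
        self.signal_power = signal_power
        self.noise_power = noise_power
        self.n_bit_mod = n_bit_mod

class PCM:
    """Container class: holds the three objects instead of inheriting from them."""
    def __init__(self, dsi, adc, line):
        self.dsi = dsi
        self.adc = adc
        self.line = line

    def snr(self):
        # retrieve snr_digital from Line, then convert the dB value to linear
        return db2lin(self.line.snr_digital(self.dsi.signal_power))

pcm = PCM(Digital_signal_information(1, 2, 3), ADC(8), Line(1, 2))
print(pcm.snr())
```

<p>The other methods would follow the same pattern: each <code>PCM</code> method delegates to <code>self.dsi</code>, <code>self.adc</code>, or <code>self.line</code>, which keeps the three classes usable on their own and avoids the multiple-inheritance <code>__init__</code> tangle.</p>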
|
<python><oop>
|
2023-05-05 04:11:27
| 1
| 385
|
Severjan Lici
|
76,178,795
| 8,391,698
|
How to test if a sequence contains a newline character within a certain window length in Python
|
<p>I have this list in Python:</p>
<pre><code>my_list = ["KKAWTMYGNLSKKDQNRYNA\nDAT", # false
"EEQLLNEPD\nKIVIIPACVIDELEENKKLKGLEEILKKVRRA", #true
"NNCLQKYKEAEEYYEESIFILKTVN", # false
"EEQLLNEPD\nKIVIIPACV\nIDELEENKKLKGLEEILKKVRRA", #true
"NNCLQKYKEAEEYY\nEESIFILKTVN", # false
]
</code></pre>
<p>I'd like a function to detect whether a given string has a "\n" (newline) character within a 10-character window.</p>
<p>At the end of the day, only two strings satisfy that:</p>
<pre><code>"EEQLLNEPD\nKIVIIPACVIDELEENKKLKGLEEILKKVRRA", #true
"EEQLLNEPD\nKIVIIPACV\nIDELEENKKLKGLEEILKKVRRA", #true
</code></pre>
<p>I'm stuck with this code that doesn't do the job.</p>
<pre><code>def has_newline_within_window(s, window=10):
for i in range(len(s) - window):
if "\n" in s[i:i + window]:
return True
return False
my_list = ["KKAWTMYGNLSKKDQNRYNA\nDAT", # false
"EEQLLNEPD\nKIVIIPACVIDELEENKKLKGLEEILKKVRRA", #true
"NNCLQKYKEAEEYYEESIFILKTVN", # false
"EEQLLNEPD\nKIVIIPACV\nIDELEENKKLKGLEEILKKVRRA", #true
"NNCLQKYKEAEEYY\nEESIFILKTVN", # false
]
for sequence in my_list:
if has_newline_within_window(sequence):
print(sequence)
</code></pre>
<p>Please advise: how can I go about it?</p>
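<p>Judging purely from the expected True/False labels in the comments, "within the 10 characters window" seems to mean within the <em>first</em> 10 characters (my reading of the examples, not a given). Under that assumption the check collapses to a slice:</p>

```python
def has_newline_in_first_window(s, window=10):
    """True if a newline occurs among the first `window` characters."""
    return "\n" in s[:window]

my_list = [
    "KKAWTMYGNLSKKDQNRYNA\nDAT",                      # False: "\n" at index 20
    "EEQLLNEPD\nKIVIIPACVIDELEENKKLKGLEEILKKVRRA",    # True:  "\n" at index 9
    "NNCLQKYKEAEEYYEESIFILKTVN",                      # False: no "\n" at all
    "EEQLLNEPD\nKIVIIPACV\nIDELEENKKLKGLEEILKKVRRA",  # True:  "\n" at index 9
    "NNCLQKYKEAEEYY\nEESIFILKTVN",                    # False: "\n" at index 14
]
results = [has_newline_in_first_window(s) for s in my_list]
```

<p>The sliding-window version in the question returns True for <em>any</em> window containing "\n", which is why all the strings with a newline were matching.</p>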
|
<python><python-3.x><string>
|
2023-05-05 04:08:18
| 3
| 5,189
|
littleworth
|
76,178,774
| 15,584,917
|
Python group by and keep rows with greatest absolute value?
|
<p>I have a very large dataframe. I would like to group by <code>col1</code> and <code>col4</code> and keep rows which have the greatest absolute value in <code>col5</code>.</p>
<p>However I would like the final table to also include <code>col2</code> and <code>col3</code> (but I do not want to group by these columns when identifying the row with the greatest absolute value).</p>
<p>I tried the following:</p>
<pre><code>results_abs_df = results_df.loc[results_df.groupby(['col1','col4',])['col5'].agg([lambda x: max(x,key=abs)])]
</code></pre>
<p>But I received the error <code>Cannot index with multidimensional key</code>.</p>
<p>I'm not sure if there's another way to achieve the end result (it cannot be too memory-intensive because the dataset is large)?</p>
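<p>One pattern that might fit (a sketch on a toy frame, since the real data isn't available here): take <code>idxmax</code> of the absolute value per group, then index the original frame with those labels. That carries <code>col2</code>/<code>col3</code> along without grouping on them, and avoids a per-group <code>apply</code>:</p>

```python
import pandas as pd

results_df = pd.DataFrame({
    "col1": ["a", "a", "b", "b"],
    "col2": [10, 20, 30, 40],      # carried along, not grouped on
    "col3": ["x", "y", "x", "y"],
    "col4": [1, 1, 2, 2],
    "col5": [-5.0, 3.0, 2.0, -7.0],
})

# row label of the greatest |col5| within each (col1, col4) group
idx = results_df["col5"].abs().groupby(
    [results_df["col1"], results_df["col4"]]
).idxmax()
results_abs_df = results_df.loc[idx]
```

<p>Memory-wise this only materialises one extra float column (the absolute values) plus one label per group.</p>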
|
<python><pandas>
|
2023-05-05 04:00:23
| 1
| 339
|
1288Meow
|
76,178,706
| 6,751,456
|
django serializer dictfield vs jsonfield
|
<p>I have a column <code>custom_property</code> of type <code>jsonb</code> in postgres.</p>
<p>I need to validate a payload for this column.</p>
<pre><code>payload:
{
"description": "test description3",
"custom_property": {
"access": "Yes"
}
}
</code></pre>
<p>I have a serializer:</p>
<pre><code>class MySerializer(serializers.Serializer):
description = serializers.CharField(required=False, max_length=500)
custom_property = serializers.DictField(required=False)
</code></pre>
<p>Should I use <code>DictField</code> or <code>JSONField</code> for this column?</p>
<p>It seems that in both cases, it needs to be formatted to a JSON string first with <code>json.dumps()</code>.</p>
|
<python><json><django><dictionary><django-serializer>
|
2023-05-05 03:39:51
| 1
| 4,161
|
Azima
|
76,178,525
| 14,365,042
|
How to combine multiple rows (with two column values are different) into one row in Pandas
|
<p>I have big survey data (over 50K rows), such as:</p>
<pre><code>df1 = pd.DataFrame(list(zip(['0001', '0001', '0002', '0003', '0004', '0004'],
['a', 'b', 'a', 'b', 'a', 'b'],
['USA', 'USA', 'USA', 'USA', 'USA', 'USA'],
['Jan', 'Jan', 'Jan', 'Jan', 'Jan', 'Jan'],
[1,2,3,4,5,6])),
columns=['sample ID', 'compound', 'country', 'month', 'value'])
df1
</code></pre>
<p><a href="https://i.sstatic.net/u0rzc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u0rzc.png" alt="enter image description here" /></a></p>
<p>Two compounds (<code>compound</code>) are included for some samples (<code>sampleID</code>). I want to combine the two rows (with two compounds for the same <code>sampleID</code>) into one row:</p>
<pre><code>df2 = pd.DataFrame(list(zip(['0001', '0002', '0003', '0004'],
['a', 'a', '', 'a'],
[1, 3, np.nan, 5],
['b', '', 'b', 'b'],
[2, np.nan, 4, 6],
['USA', 'USA', 'USA', 'USA'],
['Jan', 'Jan', 'Jan', 'Jan'])),
columns=['sample ID', 'compound1', 'value1', 'compound2', 'value2','country', 'month'])
df2
</code></pre>
<p><a href="https://i.sstatic.net/iakPU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iakPU.png" alt="enter image description here" /></a></p>
<p>The below can work:</p>
<pre><code>pd.merge((df1.loc[df1.compound == 'a']),
(df1.loc[df1.compound == 'b']),
how="outer",
on=['sample ID', 'country', 'month'],
suffixes=("_no3", "_no2"))
</code></pre>
<p>Any better approach?</p>
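<p>Depending on whether the <code>compoundN</code> columns are really needed (the compound letter itself could serve as the column key), a pivot might be simpler (a sketch; <code>pivot</code> with a list of index columns needs a reasonably recent pandas, 1.1+):</p>

```python
import pandas as pd

df1 = pd.DataFrame(
    list(zip(['0001', '0001', '0002', '0003', '0004', '0004'],
             ['a', 'b', 'a', 'b', 'a', 'b'],
             ['USA'] * 6,
             ['Jan'] * 6,
             [1, 2, 3, 4, 5, 6])),
    columns=['sample ID', 'compound', 'country', 'month', 'value'])

# one column per compound, keyed by the compound letter itself
wide = (df1.pivot(index=['sample ID', 'country', 'month'],
                  columns='compound', values='value')
           .add_prefix('value_')
           .rename_axis(columns=None)
           .reset_index())
```

<p>This scales to any number of compounds without hard-coding the 'a'/'b' filters, at the cost of producing <code>value_a</code>/<code>value_b</code> columns rather than the paired <code>compoundN</code>/<code>valueN</code> layout.</p>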
|
<python><pandas>
|
2023-05-05 02:44:18
| 0
| 305
|
Joe
|
76,178,000
| 10,387,506
|
Assemble random sales table with Python
|
<p>I have attempted to assemble a randomized sales table with mock data.</p>
<ul>
<li>I have a customer table with 50,000 entries.</li>
<li>I have a product table with 550,000 entries.</li>
<li>I produced a series with store codes. Ideally I'd like 5 stores.</li>
<li>I also created a date/time series spanning 2 years.</li>
<li>I would love to create a mock sales table that gives me at least 500K sales lines per year.</li>
<li>The end result would be exported to CSV.</li>
</ul>
<p>Ideally, this would give me output as follows, basically mimicking a true sales table:</p>
<pre><code>+------------+-----------+----------+-------------+-----------+---------+---------+
| CustomerID | FirstName | LastName | ProductName | ProductID | StoreID | Date |
+------------+-----------+----------+-------------+-----------+---------+---------+
| 157863 | Kimberly | Archey | Name1 | ID1 | 1 | 1/1/22 |
| 148101 | Tony | Roberson | Name2 | ID2 | 2 | 1/5/23 |
| 113579 | Mandy | Kridel | Name3 | ID3 | 3 | 1/4/22 |
| 23000 | Russell | Cornett | Name4 | ID4 | 4 | 1/3/22 |
| 160104 | Craig | Sterling | Name5 | ID5 | 5 | 1/10/22 |
+------------+-----------+----------+-------------+-----------+---------+---------+
</code></pre>
<p>I found an article online that explained how to create a mock sales table, however, I think my data is too complex. The following code crashes the kernel:</p>
<pre><code>index = pd.MultiIndex.from_product(
[date_range_2022, store_codes, df3['ProductID'], df_customers['CustomerID']],
names = ['Date', 'StoreCode', 'ProductID', 'CustomerID'])
sales = pd.DataFrame(index = index)
sales.head()
</code></pre>
<p>It works OK if I am just combining the time series and store codes, but if I add in other fields, it bombs.</p>
<p>I have not been able to figure out how to generate a sales table from the data I already have.</p>
<p>Where am I going wrong?</p>
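<p>The crash is plausibly the Cartesian product itself: <code>MultiIndex.from_product</code> tries to materialise dates × stores × 550K products × 50K customers, which is astronomically large. A sketch of an alternative, sampling each column independently for the desired number of sales lines (the ID arrays below are stand-ins for the real lookup tables):</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_rows = 1_000_000  # ~500K lines per year across 2 years

customer_ids = np.arange(50_000)    # stand-in for df_customers['CustomerID']
product_ids = np.arange(550_000)    # stand-in for df3['ProductID']
store_ids = np.array([1, 2, 3, 4, 5])
dates = pd.date_range('2022-01-01', '2023-12-31', freq='D')

# sample each column independently instead of building a cross product
sales = pd.DataFrame({
    'CustomerID': rng.choice(customer_ids, n_rows),
    'ProductID': rng.choice(product_ids, n_rows),
    'StoreID': rng.choice(store_ids, n_rows),
    'Date': rng.choice(dates.to_numpy(), n_rows),
})
# names could then be merged in and the result exported, e.g.
# sales = sales.merge(df_customers, on='CustomerID').merge(df3, on='ProductID')
# sales.to_csv('sales.csv', index=False)
```

<p>This keeps memory proportional to the number of sales lines you actually want, not to the product of all the dimensions.</p>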
|
<python><pandas><numpy>
|
2023-05-04 23:39:06
| 1
| 333
|
Dolunaykiz
|
76,177,904
| 5,725,780
|
How do I validate all files in a torrent using libtorrent offline in python?
|
<p>Given a torrent file, I would like to use libtorrent to validate that I have all the files, essentially what a <em>force recheck</em> would do in a bittorrent client, and I want to do it off-line and without involving some other client.</p>
<p>I've only found how to recalculate the hash for all pieces but I'm not quite sure how to use it if I already <em>have</em> everything and just want to make sure it matches what's already available on disk.</p>
<p>Here's a short example of what I would like to accomplish:</p>
<pre class="lang-py prettyprint-override"><code>import libtorrent as lt
torrent = lt.load_torrent_file("torrents/legal_linux_iso.torrent")
torrent.save_path = 'data/'
if torrent.recheck(): # Mythical missing part
print("Hurrah!")
</code></pre>
<p>I'm worried that if I try implementing code that calculate every piece, I will end up missing some detail with regards to version 1 or 2, hybrid torrent files, merkle trees and such. Hopefully there's a short and robust solution I've missed. Also, I want to use <em>libtorrent</em> for this because it's used for other things in my code and the client I'm working with, so if there's any quirks I want the quirks to at least match!</p>
|
<python><bittorrent><libtorrent>
|
2023-05-04 23:07:17
| 1
| 721
|
pipe
|
76,177,732
| 6,286,900
|
How to test API with pytest
|
<p>I have the following <a href="https://github.com/sasadangelo/races/blob/main/app/races_api.py" rel="nofollow noreferrer">CRUD Flask application</a> where I have my CRUD api defined <a href="https://github.com/sasadangelo/races/blob/main/app/races_api.py" rel="nofollow noreferrer">in this file</a> for the Race objects.</p>
<p>I tested the API with some curl commands and they worked fine. Now I want to define a pytest to test them. I created the <a href="https://github.com/sasadangelo/races/blob/main/test_races_api.py" rel="nofollow noreferrer">following test file</a>, but for all my tests I get:</p>
<pre><code>E assert 404 == 201
E + where 404 = <WrapperTestResponse streamed [404 NOT FOUND]>.status_code
</code></pre>
<p>I read docs, I browsed Internet but so far no luck.</p>
<p><strong>Can anyone help me figure out what the issue is?</strong></p>
<p>Moreover, I would like to move my test_races_api.py file into the test folder, but it doesn't work and always gives me the error that the module app cannot be found (I tried ..app but another error appears).</p>
<p><strong>Can anyone help me also on this issue?</strong></p>
<p>Thank you in advance for your help.</p>
|
<python><flask><pytest>
|
2023-05-04 22:21:13
| 0
| 1,179
|
Salvatore D'angelo
|
76,177,622
| 4,885,544
|
Executing celery task through a listener on a RabbitMQ queue
|
<p>I am trying to run a script for a messaging consumer that listens to a RabbitMQ instance and executes a celery job based on that message value. I am running into circular dependency issues and cannot figure out what exactly the issue is and how to fix it. Below are my file examples:</p>
<p>celery.py
import os
from celery import Celery</p>
<pre><code>os.environ.setdefault("DJANGO_SETTINGS_MODULE", "projectTest.settings")
app = Celery(
"appTest",
broker=os.environ.get("RABBITMQ_CONN_STRING"),
backend="rpc://",
)
# Optional configuration, see the application user guide.
app.conf.update(
result_expires=3600,
)
app.config_from_object("appTest.celeryConfig", namespace="CELERY")
app.autodiscover_tasks()
if __name__ == "__main__":
app.start()
</code></pre>
<p>class_task.py</p>
<pre><code>from celery import app
from celery.utils.log import get_task_logger

logger = get_task_logger("emailSvcCelery")
class ClassTask(app.Task):
name = "class-task"
def run(self):
print('Task Ran!')
app.register_task(ClassTask)
</code></pre>
<p>message_consumer.py</p>
<pre><code>import os
import json
import pika
from tasks.class_task import ClassTask
connection_parameters = pika.ConnectionParameters(os.environ.get("RABBITMQ_CONN_STRING"))
connection = pika.BlockingConnection(connection_parameters)
channel = connection.channel()
queues = (
'queue-test'
)
for q in queues:
channel.queue_declare(queue=q, durable=True)
def callback(channel, method, properties, body):
ClassTask().delay()
channel.basic_consume(queue='queue-test', on_message_callback=callback, auto_ack=True)
print("Started Consuming...")
channel.start_consuming()
</code></pre>
<p>The error I am receiving is below after running <code>python3 message_consumer.py</code>:</p>
<pre><code>Traceback (most recent call last):
File "/message_consumer.py", line 4, in <module>
from tasks.class_task import ClassTask
File "/class_task.py", line 1, in <module>
from celery import app
File "/celery.py", line 2, in <module>
from celery import Celery
ImportError: cannot import name 'Celery' from partially initialized module 'celery' (most likely due to a circular import) (/celery.py)
</code></pre>
<p>I am somewhat new to Python and need some help, thank you!</p>
|
<python><django><rabbitmq><celery><pika>
|
2023-05-04 22:00:42
| 1
| 486
|
sclem72
|
76,177,601
| 3,159,428
|
pandas group by date in pandas 2.0
|
<p>I have this dataframe:</p>
<pre class="lang-none prettyprint-override"><code>date in_count out_count
2013-01-03, 1.00000, 0.00000
2013-03-04, 1.00000, 0.00000
2013-04-08, 1.00000, 0.00000
2013-04-22, 1.00000, 0.00000
2013-05-06, 0.00000, 1.00000
</code></pre>
<p>The idea is to group by weeks. In the past the software used pandas 0.24.2, but I need to change to pandas 2.0. For pandas 0.24.2 I have this:</p>
<pre><code>by_week_df = articles_df.set_index(
pd.DatetimeIndex(articles_df['date'])
).groupby(pd.Grouper(freq='1W')).sum().reset_index()
</code></pre>
<p>But running on pandas 2.0 I get the following error:</p>
<pre class="lang-none prettyprint-override"><code> File "pandas/_libs/groupby.pyx", line 717, in pandas._libs.groupby.group_sum
TypeError: unsupported operand type(s) for +: 'datetime.date' and 'datetime.date'
</code></pre>
<p>I tried converting the date column to datetime first, but I obtain the same error.</p>
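<p>For reference, pandas 2.0 no longer silently drops columns that cannot be summed, and here the raw <code>date</code> column (Python <code>date</code> objects) is still present when <code>.sum()</code> runs. A sketch of one workaround, dropping the column before summing (or, equivalently, passing <code>numeric_only=True</code> to <code>sum()</code>):</p>

```python
import pandas as pd

articles_df = pd.DataFrame({
    'date': ['2013-01-03', '2013-03-04', '2013-04-08', '2013-04-22', '2013-05-06'],
    'in_count': [1.0, 1.0, 1.0, 1.0, 0.0],
    'out_count': [0.0, 0.0, 0.0, 0.0, 1.0],
})

by_week_df = (
    articles_df
    .set_index(pd.DatetimeIndex(pd.to_datetime(articles_df['date'])))
    .drop(columns='date')              # remove the non-summable column
    .groupby(pd.Grouper(freq='1W'))
    .sum()                             # or: .sum(numeric_only=True) and skip the drop
    .reset_index()
)
```

<p>Under pandas 0.24 the object-dtype <code>date</code> column was quietly excluded from the sum, which is why the old code never raised.</p>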
|
<python><pandas><dataframe>
|
2023-05-04 21:56:32
| 2
| 486
|
Fermin Pitol
|
76,177,509
| 7,988,497
|
Python Packaging with Pytest
|
<p>I have a tree like this</p>
<pre><code>src
__init__.py
my_app
__init__.py
db.py
config.py
tests
__init__.py
test_db.py
</code></pre>
<p>In <code>db.py</code> I've got an import like this</p>
<pre><code>from config import my_func
</code></pre>
<p>This works fine when I run db.py by itself (python /src/my_app/db.py).</p>
<p>When I try to run it with pytest, it says there is no module named config in the db.py file. If I prepend the path:</p>
<pre><code>import sys, os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from config import my_func
</code></pre>
<p>Then I get an error from pytest that says</p>
<pre><code>from src.my_app.db import foo
"No module found named src"
</code></pre>
<p>I just don't understand what's wrong. I've looked at dozens of posts and feel like I have this right, but apparently not.</p>
|
<python><python-3.x>
|
2023-05-04 21:36:04
| 1
| 1,336
|
MichaelD
|
76,177,506
| 2,562,058
|
Is it possible to display images in IPython (or Jupyter) console?
|
<p>I have just discovered that there are tools that allow embedding images into terminals, like <a href="https://linuxhint.com/display-image-linux-terminal/" rel="nofollow noreferrer">these ones</a>.</p>
<p>I am wondering if there is an <code>IPython</code> (or, more generally, <code>jupyter console</code>) builtin function that allows one to do the same. That would be handy for displaying plots inline.</p>
<p>I know you could do that with <code>QtConsole</code>, but that is not really a terminal.</p>
|
<python><jupyter><ipython><jupyter-console>
|
2023-05-04 21:35:54
| 1
| 1,866
|
Barzi2001
|
76,177,432
| 4,538,768
|
Reduction of queries in ManyToMany field with prefetch_related
|
<p>I want to further reduce the number of queries. Using prefetch_related already decreased the number of queries, but I was wondering if it is possible to reduce it to one query. Please let me show the code involved:</p>
<p>I have a view with prefetch_related:</p>
<pre><code>class BenefitList(generics.ListAPIView):
serializer_class = BenefitGetSerializer
def get_queryset(self):
queryset = Benefit.objects.all()
queryset = queryset.filter(deleted=False)
qs= queryset.prefetch_related('nearest_first_nations__reserve_id')
return qs
</code></pre>
<p>I have the models used by the serializers. Here, it is important to notice the hybrid property <code>name</code>, which I want to display along with reserve_id and reserve_distance:</p>
<p>benefit.py:</p>
<pre><code>class IndianReserveBandDistance(models.Model):
reserve_id = models.ForeignKey(IndianReserveBandName,
on_delete=models.SET_NULL,
db_column="reserve_id",
null=True)
reserve_distance = models.DecimalField(max_digits=16, decimal_places=4, blank=False, null=False)
@property
def name(self):
return self.reserve_id.name
class Benefit(models.Model):
banefit_name = models.TextField(blank=True, null=True)
nearest_first_nations = models.ManyToManyField(IndianReserveBandDistance,
db_column="nearest_first_nations",
blank=True,
null=True)
</code></pre>
<p>The name field is obtained from the model IndianReserveBandName.</p>
<p>indian_reserve_band_name.py:</p>
<pre><code>class IndianReserveBandName(models.Model):
ID_FIELD = 'CLAB_ID'
NAME_FIELD = 'BAND_NAME'
name = models.CharField(max_length=127)
band_number = models.IntegerField(null=True)
</code></pre>
<p>Then, the main serializer using BenefitIndianReserveBandSerializer to obtain the fields reserve_id, reserve_distance and name:</p>
<p>get.py:</p>
<pre><code>class BenefitGetSerializer(serializers.ModelSerializer):
    nearest_first_nations = BenefitIndianReserveBandSerializer(many=True)
</code></pre>
<p>The serializer to obtain the mentioned fields:
distance.py:</p>
<pre><code>class BenefitIndianReserveBandSerializer(serializers.ModelSerializer):
class Meta:
model = IndianReserveBandDistance
fields = ('reserve_id', 'reserve_distance', 'name')
</code></pre>
<p>The above is resulting in two queries which I would like to be one:</p>
<pre><code>SELECT ("benefit_nearest_first_nations"."benefit_id") AS "_prefetch_related_val_benefit_id",
"indianreservebanddistance"."id",
"indianreservebanddistance"."reserve_id",
"indianreservebanddistance"."reserve_distance"
FROM "indianreservebanddistance"
INNER JOIN "benefit_nearest_first_nations"
ON ("indianreservebanddistance"."id" = "benefit_nearest_first_nations"."indianreservebanddistance_id")
WHERE "benefit_nearest_first_nations"."benefit_id" IN (1, 2)
SELECT "indianreservebandname"."id",
"indianreservebandname"."name"
FROM "indianreservebandname"
WHERE "indianreservebandname"."id" IN (678, 140, 627, 660, 214, 607)
ORDER BY "indianreservebandname"."id" ASC
</code></pre>
<p>I am expecting the following query:</p>
<pre><code>SELECT ("benefit_nearest_first_nations"."benefit_id") AS "_prefetch_related_val_benefit_id",
"indianreservebanddistance"."id",
"indianreservebanddistance"."reserve_id",
"indianreservebanddistance"."reserve_distance",
"indianreservebandname"."name"
FROM "indianreservebanddistance"
INNER JOIN "benefit_nearest_first_nations"
ON ("indianreservebanddistance"."id" = "benefit_nearest_first_nations"."indianreservebanddistance_id")
inner JOIN "indianreservebandname"
on ("indianreservebandname"."id" = "indianreservebanddistance"."reserve_id")
WHERE "benefit_nearest_first_nations"."benefit_id" IN (1, 2)
</code></pre>
<p>Would you know if it is possible to get just one query? Am I missing something that is stopping Django from creating just one query?</p>
<p>Thanks a lot</p>
|
<python><django><django-models><django-views>
|
2023-05-04 21:20:11
| 1
| 1,787
|
JarochoEngineer
|
76,177,406
| 1,711,271
|
Compute m columns from 2m columns without a for loop
|
<p>I have a dataframe where some columns are name-paired (for each column ending with <code>_x</code> there is a corresponding column ending with <code>_y</code>) and others are not. For example:</p>
<pre><code>import pandas as pd
import numpy as np
colnames = [
'foo', 'bar', 'baz',
'a_x', 'b_x', 'c_x',
'a_y', 'b_y', 'c_y',
]
rng = np.random.default_rng(0)
data = rng.random((20, len(colnames)))
df = pd.DataFrame(data, columns=colnames)
</code></pre>
<p>Assume I have two lists containing all the column names ending with <code>_x</code>, and all the column names ending with <code>_y</code> (it's easy to build such lists), of the same length <code>m</code> (remember that for each <code>_x</code> column there is one and only one corresponding <code>_y</code> column). I want to create <code>m</code> new columns with a simple formula:</p>
<pre><code>df['a_err'] = (df['a_x'] - df['a_y']) / df['a_y']
</code></pre>
<p>without hard-coding the column names, of course. It's easy to do so with a <code>for</code> loop, but I would like to know if it's possible to do the same without a loop, in the hope that it would be faster (the real dataframe is way bigger than this small example).</p>
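<p>A minimal sketch of the loop-free version I have in mind (reusing the example frame from above): pull the <code>_x</code> and <code>_y</code> blocks out as NumPy arrays so the subtraction and division happen in one shot, then assign all <code>m</code> new columns at once:</p>

```python
import numpy as np
import pandas as pd

# same example frame as above
colnames = ['foo', 'bar', 'baz', 'a_x', 'b_x', 'c_x', 'a_y', 'b_y', 'c_y']
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((20, len(colnames))), columns=colnames)

x_cols = [c for c in df.columns if c.endswith('_x')]
y_cols = [c[:-2] + '_y' for c in x_cols]      # paired columns, same order
err_cols = [c[:-2] + '_err' for c in x_cols]

# one vectorized operation over the underlying arrays, no Python-level loop
x = df[x_cols].to_numpy()
y = df[y_cols].to_numpy()
df[err_cols] = (x - y) / y
```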
|
<python><pandas><apply><calculated-columns>
|
2023-05-04 21:15:42
| 3
| 5,726
|
DeltaIV
|
76,177,382
| 1,224,437
|
Fine grained resource management in an async function?
|
<p>Using <code>@asyncio.coroutine</code> one can write a fine-grained resource manager for acquiring and releasing a shared resource (say a <code>Lock</code> or a global variable or database connection):</p>
<pre class="lang-py prettyprint-override"><code>@asyncio.coroutine
def resource_context(func: Callable[..., Awaitable]):
print("acquire")
for i in func().__await__():
print("release")
yield i
print("acquire")
print("release")
</code></pre>
<p>This fine-grained resource management requires the use of <code>yield</code> rather than <code>yield from</code>. It is much finer grained than a context manager, which would simply acquire once at the beginning and release once at the end. Whereas a context manager would hold the resource while our <code>func</code> waits, this finer-grained version releases the resource each time <code>func</code> is waiting.</p>
<p>But <code>@asyncio.coroutine</code> is deprecated as of Python 3.8. How can I achieve this functionality using the more modern <code>async def</code> coroutines?</p>
<p>Note I'm not looking for an async iterator or an async context manager: neither of those work for this use case. I'd like to be able to await arbitrary other async functions that are called inside <code>func</code> and release the resource when any of those callees awaits anything. That is, I'd like for the resource to be acquired inside <code>func</code> and any code it calls, but to be temporarily released while <code>func</code> yields control.</p>
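<p>For what it's worth, the closest modern equivalent I have come up with is a plain class whose <code>__await__</code> drives the wrapped coroutine manually (the acquire/release callables below stand in for the real resource); I am unsure whether this is idiomatic:</p>

```python
import asyncio

class FineGrained:
    """Awaitable wrapper that releases a resource at every suspension point
    of the wrapped coroutine and reacquires it before resuming."""

    def __init__(self, coro, acquire, release):
        self._coro = coro
        self._acquire = acquire
        self._release = release

    def __await__(self):
        self._acquire()
        it = self._coro.__await__()
        sent = None
        while True:
            try:
                step = it.send(sent)   # run until the coroutine suspends
            except StopIteration as e:
                self._release()
                return e.value         # propagate the coroutine's result
            self._release()            # free the resource while suspended
            sent = yield step          # suspend ourselves; event loop runs
            self._acquire()            # reacquire before resuming

log = []

async def worker():
    await asyncio.sleep(0)             # one suspension point
    return 42

async def main():
    return await FineGrained(worker(),
                             lambda: log.append("acquire"),
                             lambda: log.append("release"))

result = asyncio.run(main())
```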
|
<python><asynchronous><async-await><python-asyncio>
|
2023-05-04 21:10:38
| 1
| 517
|
fritzo
|
76,177,372
| 2,377,957
|
Generating Accessible PDFs in Python
|
<p>I am attempting to generate automated reports which are accessible (508 compliant). I read that <code>reportlab</code> is capable of generating accessible PDFs, but I have yet to see any examples of this being true (there is, however, a paid option). I have been experimenting extensively with various packages including pikepdf and pymupdf. While the functionality of making documents accessible does not appear to be particularly complex, it does not appear to be something that has yet been implemented in an open-source package.</p>
<p>As I have blind friends it is disappointing to see this basic feature missing. The basic requirements are:</p>
<ol>
<li>tagged title</li>
<li>tagged table headers and rows</li>
<li>figures and tables having alternative text (mostly explaining what the content is)</li>
<li>having a specified tab order</li>
</ol>
<p>The following is some example code to generate a simple pdf in reportlab. Could someone tell me how to change my code allow the generated content to be accessible? If not, could you point me to an option which would produce an accessible pdf document?</p>
<p>My only solution for automated reports right now is to generate them as HTML, open them in MS Word, and export as PDF.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from reportlab.lib.pagesizes import letter, landscape
from reportlab.lib import colors
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, Table, TableStyle
from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle
from reportlab.lib.enums import TA_CENTER
# Metadata for PDF
pdf_title = "Sample Report"
pdf_author = "Your Name"
pdf_subject = "Sample PDF Report"
pdf_keywords = "report, sample, pdf, python"
# Sample data for the table
data = {
'Item Name': ['Item A', 'Item B', 'Item C', 'Item D'],
'Quantity': [10, 14, 34, 22],
'Price': [25, 56, 80, 120],
}
df = pd.DataFrame(data)
# Create the PDF document
doc = SimpleDocTemplate("sample_report.pdf", pagesize=landscape(letter),
title=pdf_title, author=pdf_author, subject=pdf_subject,
keywords=pdf_keywords)
# Set up the page styles and content
styles = getSampleStyleSheet()
styles.add(ParagraphStyle(name="Centered", alignment=TA_CENTER))
title = Paragraph("Sample Report", styles['Heading1'])
author = Paragraph("Author: Your Name", styles['Heading2'])
subject = Paragraph("Subject: Sample PDF Report", styles['Heading2'])
table_data = [['Item Name', 'Quantity', 'Price']]
table_data += df.values.tolist()
table = Table(table_data)
table.setStyle(TableStyle([
('BACKGROUND', (0, 0), (-1, 0), colors.grey),
('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke),
('ALIGN', (0, 0), (-1, -1), 'CENTER'),
('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
('FONTSIZE', (0, 0), (-1, 0), 14),
('BOTTOMPADDING', (0, 0), (-1, 0), 12),
('BACKGROUND', (0, 1), (-1, -1), colors.beige),
('GRID', (0, 0), (-1, -1), 1, colors.black),
('BOX', (0, 0), (-1, -1), 2, colors.black)
]))
doc.build([title, Spacer(1, 0.5), author, Spacer(1, 0.5), subject, Spacer(1, 0.5), table])
</code></pre>
|
<python><pdf><accessibility>
|
2023-05-04 21:08:27
| 3
| 4,105
|
Francis Smart
|
76,177,370
| 14,293,020
|
How to reproject xarray dataset memory efficiently with chunks and dask?
|
<p><strong>Context:</strong> I have a netcdf file that I want to reproject. It is a costly operation, and I am learning how to use <code>dask</code> and <code>zarr</code> to do it efficiently without crashing my RAM.</p>
<p><strong>Code presentation:</strong> <code>ds</code> is a 3D <code>xarray</code> dataset (dimensions: <code>time</code>, <code>y</code>, <code>x</code>). This array is in projection <code>EPSG:32607</code> and I want it in <code>EPSG:3413</code>. To do that, I open the dataset chunked with <code>dask</code>, and reproject it (with bilinear resampling) using <code>rioxarray</code>. I then save it as <code>zarr</code> (with either <code>xarray</code> or <code>dask</code>; I don't know which one is best).</p>
<p><strong>Problems:</strong> I want to do two things:</p>
<ol>
<li>Have the reprojection work on the chunks without crashing my memory.</li>
<li>Find valid chunk size automatically so I can save the dataset to <code>.zarr</code> through <code>dask</code> (last line of code).</li>
</ol>
<p>How can I do that ?</p>
<p><strong>Dataset description:</strong>
<a href="https://i.sstatic.net/byT8F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/byT8F.png" alt="Dataset description" /></a></p>
<p><strong>Code:</strong></p>
<pre><code># Fetch the path for essential python scripts, grab only the variable "v"
#ds = xr.open_dataset('Cubes/temp/Cube_32607.nc', chunks=({'time': 100, 'y': 100, 'x': 100})).v
# Example reproducing dataset the size of ds
start_date = np.datetime64('1984-06-01')
end_date = np.datetime64('2020-01-01')
num_dates = 18206
len_x = 333
len_y = 334
# Generate time dimension
date_array = date_range = np.linspace(start_date.astype('datetime64[s]').view('int64'), end_date.astype('datetime64[s]').view('int64'), num_dates).astype('datetime64[s]')
# Generate spatial dimension
xs = np.linspace(486832.5, 566512.5, 333)
ys = np.linspace(6695887.5, 6615967.5, 334)
# Create example data
v = np.random.rand(len(date_array), len(ys), len(xs))
ds = xr.Dataset({'v': (('time', 'y', 'x'), v)},
coords={'x': xs, 'y': ys, 'time': date_array}, chunks=({'mid_date': 100, 'y': 100, 'x': 100}))
# Import pyproj to get the CRS
import pyproj
# Attribute a projection to the dataset (currently missing its projection)
ds = ds.rio.write_crs("EPSG:32607", inplace=True)
# Reproject the dataset
ds = ds.rio.reproject("EPSG:3413",resampling=Resampling.bilinear)
# Save it a zarr
ds.to_zarr('Cubes/temp/Reprojected_Cube/Reprojected_Cube.zarr', mode='w', compute=False)
# Trigger the parallel computation to write the data to the Zarr store (Is this line a better way to do it ?)
dask.compute(ds.to_zarr('Cubes/temp/Reprojected_Cube/Reprojected_Cube.zarr', mode='w'))
</code></pre>
|
<python><dask><python-xarray><chunks><zarr>
|
2023-05-04 21:08:21
| 1
| 721
|
Nihilum
|
76,177,366
| 12,064,467
|
Dot product between NumPy arrays
|
<p>I have a Pandas Series object called X, which has dimension (100,). Each element of X is mapped to a NumPy ndarray of floating point numbers, with shape (25, 25). I also have another Pandas Series object called Y, which also has dimension (100,). Each element of Y is mapped to a Pandas DataFrame; each DataFrame contains only floating point numbers, and is of size (25, 25). Both X and Y have the same indices - all indices are of type DatetimeIndex.</p>
<p>I want to perform the operation X•Y•X, where "•" is the dot product. I want a solution that works, and is robust to the presence of <code>NaN</code>s. How can I do this? The attempts below have all failed, and I'm not sure why:</p>
<pre class="lang-py prettyprint-override"><code>date_index = pd.date_range('2022-01-01', periods=100, freq='D')
labels = list('ABCDEFGHIJKLMNOPQRSTUVWXY')
X = pd.Series([np.random.rand(25, 25) for _ in range(100)], index=date_index)
Y = pd.Series([pd.DataFrame(np.random.rand(25, 25), index=labels[:25], columns=labels[:25]) for _ in range(100)], index=date_index)
# Attempt 1:
X.dot(Y).dot(X) ## ValueError: matrices are not aligned
# Attempt 2:
dot_products = lambda x, y: np.dot(np.dot(x, y), x)
X.apply(lambda x: dot_products(x, Y.loc[x])) # ValueError - can't index with multidimensional key
</code></pre>
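<p>For reference, the closest I have come is pairing the two Series over their shared index and computing each sandwich product explicitly (the <code>np.nan_to_num</code> call is just one assumption about what "robust to NaNs" should mean — it treats missing cells as zero):</p>

```python
import numpy as np
import pandas as pd

date_index = pd.date_range('2022-01-01', periods=100, freq='D')
labels = list('ABCDEFGHIJKLMNOPQRSTUVWXY')
X = pd.Series([np.random.rand(25, 25) for _ in range(100)], index=date_index)
Y = pd.Series([pd.DataFrame(np.random.rand(25, 25), index=labels, columns=labels)
               for _ in range(100)], index=date_index)

def sandwich(x, y):
    # X·Y·X for one date; nan_to_num keeps NaNs from poisoning the product
    return np.nan_to_num(x) @ np.nan_to_num(y.to_numpy()) @ np.nan_to_num(x)

# zip over the Series pairs each X[t] with the Y[t] at the same date
result = pd.Series([sandwich(x, y) for x, y in zip(X, Y)], index=X.index)
```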
|
<python><pandas><numpy>
|
2023-05-04 21:07:42
| 1
| 522
|
DataScienceNovice
|
76,177,151
| 2,312,835
|
Argmax over vectors in numpy
|
<p>I am trying to write as clean and efficient an expression as possible in order to solve the following problem. I have a set <code>[[v11,...,v1n],...,[vm1,...,vmn]]</code> of vectors (stored as an <code>(m,n,a)</code> array) and a second set <code>[b1,...,bk]</code> (again, as a <code>(k,a)</code> array). What I want is <code>argmax_j dot(vij, bx)</code> for all <code>i,x</code>. I have the following for now, which creates unnecessarily large arrays of which I then have to take the diagonal.</p>
<pre class="lang-py prettyprint-override"><code>gamma = [[v11,...,v1n],...,[vm1,...,vmn]] # coordinates (i,j,s)
b = [b1,...,bk] # coordinates (k,s)
# compute argmaxes
argmaxes = (
np.tensordot(gamma, b, axes=[[2], [1]]) # coordinates (i,j,k)
.argmax(axis=1) # coordinates (i,k)
)
result = np.diagonal(
np.take(gamma, argmaxes, axis=1), # coordinates (i, i, k, s)
axis1=0, axis2=1
) # coordinates (i, k, s)
</code></pre>
<p>Is there a way to avoid this step with <code>np.take</code> that gives me a ton of redundant information?</p>
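<p>For concreteness, here is what I am trying to replace it with — something like <code>np.take_along_axis</code>, which (if I understand it correctly) gathers <code>gamma[i, argmaxes[i, x], :]</code> directly without building the redundant <code>(i, i, k, s)</code> array:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, s = 3, 4, 2, 5
gamma = rng.random((m, n, s))   # the vectors v_ij
b = rng.random((k, s))          # the vectors b_x

scores = np.einsum('ijs,ks->ijk', gamma, b)   # dot(v_ij, b_x), shape (m, n, k)
argmaxes = scores.argmax(axis=1)              # shape (m, k)

# broadcast the (m, k) index array to (m, k, 1) so it gathers along axis 1
result = np.take_along_axis(gamma, argmaxes[:, :, None], axis=1)  # (m, k, s)
```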
|
<python><numpy><linear-algebra>
|
2023-05-04 20:26:39
| 1
| 330
|
Daniel Robert-Nicoud
|
76,177,087
| 5,090,680
|
SimpleITK: Rigidly transform image according to 4x4 matrix defined in numpy
|
<p>I have an image that I have read using <code>sitk.ReadImage</code> and a 4x4 numpy array representing a rigid transform I would like to apply on the image. For the life of me, I cannot seem to figure out how to cast the numpy array to a transform object that is accepted by sitk when trying to perform <code>sitk.Resample</code>. Does anyone know how to do this?</p>
|
<python><numpy><itk><simpleitk>
|
2023-05-04 20:17:20
| 0
| 368
|
brohan322
|
76,177,042
| 12,064,467
|
Pandas Correlation Matrices?
|
<p>I have a large DataFrame; the values of all columns are floating point numbers, and the indices are of type DatetimeIndex. For example, here is the first part of my DataFrame:</p>
<pre><code> col0 col1 col2 col3 col4 ...
2035-10-30 1.0 1.0 1.0 1.0 1.0 ...
2035-10-31 1.0 1.0 1.0 1.0 1.0 ...
2035-11-01 1.0 1.0 1.0 1.0 1.0 ...
2035-11-02 1.0 1.0 1.0 1.0 1.0 ...
... ... ... ... ... ... ...
</code></pre>
<p>For every single element in my list of indices (i.e. every single date), I want to find the correlation matrix for the last 1000 rows of the DataFrame using the <code>.corr()</code> method, as demonstrated in the code below:</p>
<pre class="lang-py prettyprint-override"><code>for d in df.index:
cor_mat = other_df[:d].tail(1000).corr()
</code></pre>
<p>In the code above, <code>df</code> and <code>other_df</code> both have the same dimensions and the same indices.</p>
<p>What I currently have is extremely slow, because the dimensions of my DataFrame are very big. How can I vectorize this operation, or otherwise make it more efficient? I've tried playing with the <code>.rolling()</code> method, but I haven't been able to get the right answer yet. Any help would be appreciated.</p>
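<p>In case it clarifies what I've tried with <code>.rolling()</code>: pairwise rolling correlations come back as a single MultiIndexed frame, from which the matrix for any one end date can be sliced (a small window stands in for my real 1000-row one):</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range('2035-10-30', periods=60, freq='D')
df = pd.DataFrame(rng.random((60, 4)), index=idx,
                  columns=['col0', 'col1', 'col2', 'col3'])

window = 10  # stand-in for the real 1000-row window

# one pass over the data: MultiIndex (date, column) -> pairwise correlations
rolling_corr = df.rolling(window).corr()

# the correlation matrix for the window ending at date d
d = idx[30]
cor_mat = rolling_corr.loc[d]
```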
|
<python><pandas><dataframe><data-science>
|
2023-05-04 20:10:00
| 1
| 522
|
DataScienceNovice
|
76,176,965
| 4,027,688
|
Include py.typed file in wheel produced with nuitka
|
<p>I'm trying to build a wheel for a pyproject.toml based project using nuitka as described <a href="https://nuitka.net/doc/user-manual.html#use-case-5-setuptools-wheels" rel="nofollow noreferrer">here</a>. I'd like to include a <code>py.typed</code> file in the final wheel. My repo structure looks like this</p>
<pre><code>pyproject.toml
my_package/
__init__.py
my_file.py
py.typed
</code></pre>
<p>I'm using the <code>[build-system]</code> section in pyproject.toml to specify that the wheel should be built with nuitka.</p>
<pre><code>[build-system]
requires = ["setuptools>=42", "wheel", "nuitka", "toml"]
build-backend = "nuitka.distutils.Build"
</code></pre>
<p>Under the nuitka configuration section, I've tried various combinations of <code>include-package-data</code> and <code>include-data-files</code>. The below appears to be most similar to the recommended setuptools pattern for including <code>py.typed</code> files.</p>
<pre><code>[nuitka]
disable-ccache = true
include-package-data = "my_package=py.typed"
</code></pre>
<p>However, none of the settings I've tried result in a <code>py.typed</code> file being included in the final wheel. How can I include this file?</p>
|
<python><setuptools><python-packaging><nuitka>
|
2023-05-04 19:58:42
| 1
| 3,175
|
bphi
|
76,176,610
| 3,709,062
|
Second test with pytest-postgresql still using schema from first test
|
<p>I am trying to perform some tests using <strong>pytest-postgresql</strong> but it seems I do not fully grasp how the engine generated from the fixture works. The code below is the minimal example I could write to reproduce the error I am getting. There are 2 tests named for tokyo and london and each is meant to use the DB connection to create a schema with the name of the city and then to create the table <code>citymeta</code> in the schema.</p>
<pre class="lang-py prettyprint-override"><code>import pytest
from pytest_postgresql.janitor import DatabaseJanitor
from sqlalchemy import create_engine, inspect
from sqlalchemy.orm import Session
from sqlalchemy import text
from sqlalchemy import (Column, Integer, String)
from sqlalchemy.orm import declarative_base
Base = declarative_base()
class BaseModel(Base):
__abstract__ = True
id = Column(Integer, primary_key=True)
class CityMeta(BaseModel):
__tablename__ = "citymeta"
id = Column(Integer, primary_key=True)
version = Column(String(10), nullable=False)
def create_table(engine, schema):
with engine.connect() as conn:
conn.execute(text(f"""CREATE SCHEMA IF NOT EXISTS
{schema}"""))
conn.commit()
for table in BaseModel.metadata.tables.values():
table.schema = schema
table.create(engine, checkfirst=True)
schema1 = "tokyo"
schema2 = "london"
@pytest.fixture(scope="session")
def engine_postgresql(postgresql_proc):
with DatabaseJanitor(
postgresql_proc.user,
postgresql_proc.host,
postgresql_proc.port,
postgresql_proc.dbname,
postgresql_proc.version,
password=postgresql_proc.password,
):
yield create_engine(
f"postgresql+psycopg2://{postgresql_proc.user}:"
f"{postgresql_proc.password}@{postgresql_proc.host}:"
f"{postgresql_proc.port}/{postgresql_proc.dbname}"
)
def test_tokyo(engine_postgresql):
create_table(engine_postgresql, schema1)
insp = inspect(engine_postgresql)
assert insp.has_table("citymeta", schema=schema1)
with Session(engine_postgresql) as session:
meta = CityMeta(id=1, version='2.3')
session.add(meta)
session.commit()
res = session.query(CityMeta).filter_by(id=1).first()
assert res.version == '2.3'
def test_london(engine_postgresql):
create_table(engine_postgresql, schema2)
insp = inspect(engine_postgresql)
assert insp.has_table("citymeta", schema=schema2)
with Session(engine_postgresql) as session:
meta = CityMeta(id=1, version='0.0')
session.add(meta)
session.commit()
res = session.query(CityMeta).filter_by(id=1).first()
assert res.version == '0.0'
</code></pre>
<p>Since each test creates a separate schema, I would expect both the tests to run without an issue. However, I am getting the following error at the second test:</p>
<pre><code> sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "citymeta_pkey"
E DETAIL: Key (id)=(1) already exists.
E
E [SQL: INSERT INTO tokyo.citymeta (id, version) VALUES (%(id)s, %(version)s)]
E [parameters: {'id': 1, 'version': '0.0'}]
E (Background on this error at: https://sqlalche.me/e/20/gkpj)
</code></pre>
<p>It seems that during the second test (london), even though the london schema is created successfully, when the session is started, somehow it still uses the schema tokyo, and it cannot insert the data due to key violations (both ids are 1). I know that <strong>pytest-postgresql</strong> maintains the same database until the end of the tests, but, since a new schema is generated in the second test, shouldn't the new schema be used by the new session?</p>
|
<python><sqlalchemy><pytest>
|
2023-05-04 19:04:36
| 2
| 1,176
|
GStav
|
76,176,587
| 2,281,318
|
PyLance in strict mode: additional typing hints for external libraries
|
<p>The code [Python 3.11, Visual Studio Code, Pylance strict mode]</p>
<pre class="lang-py prettyprint-override"><code>from typing import List, Tuple
import networkx as nx
graph = nx.MultiDiGraph()
# .add_edge(source node, sink node, edge type)
graph.add_edge("node1", "node2", (0, 0))
graph.add_edge("node1", "node2", (0, 1))
graph.add_edge("node2", "node3", (0, 2))
triplets: List[Tuple[str, str, Tuple[int, int]]] = []
for u, v, e_type in graph.edges(keys=True):
triplets.append((u, v, e_type))
</code></pre>
<p>gives me three groups of PyLance warnings (shown below).</p>
<p><strong>Question:</strong> How do I pass the message <em>please, trust me, <code>u</code> is <code>str</code> and <code>e_type</code> is <code>Tuple[int, int]</code></em> to PyLance, so it would not complain even though, in general, <code>u</code>, <code>v</code> and <code>e_type</code> can be anything (hashable)?</p>
<hr />
<p>The warnings are issued:</p>
<ul>
<li>for the function <code>add_edge</code>:</li>
</ul>
<blockquote>
<p>Type of "add_edge" is partially unknown<br>
Type of "add_edge" is "(u_for_edge: Unknown, v_for_edge: Unknown, key: Unknown | None = None, **attr: Unknown)"</p>
</blockquote>
<ul>
<li>for the for loop (shown for "u", same for "v" and "e_type"):</li>
</ul>
<blockquote>
<p>Type of "u" is unknown</p>
</blockquote>
<ul>
<li>for the append (which is a consequence of the previous warning):</li>
</ul>
<blockquote>
<p>Argument type is partially unknown<br>
Argument corresponds to parameter "__object" in function "append"<br>
Argument type is "tuple[Unknown, Unknown, Unknown]"</p>
</blockquote>
<p>I noticed that including <code>assert isinstance(u, str)</code> helps, but</p>
<ul>
<li><code>isinstance</code> tends to be slow, so I would avoid it if possible</li>
<li>I don't know, how to check that e_type is actually <code>Tuple[int, int]</code> (<code>isinstance</code> does not allow <code>Tuple[int, int]</code> as the second argument)</li>
</ul>
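<p>For reference, the only other workaround I have found is <code>typing.cast</code>, which has zero runtime cost (it returns its argument unchanged and only informs the checker). The stub list below stands in for <code>graph.edges(keys=True)</code>:</p>

```python
from typing import List, Tuple, cast

# stand-in for graph.edges(keys=True); with networkx these elements are untyped
edges = [("node1", "node2", (0, 0)),
         ("node1", "node2", (0, 1)),
         ("node2", "node3", (0, 2))]

triplets: List[Tuple[str, str, Tuple[int, int]]] = []
for u, v, e_type in edges:
    # cast() performs no runtime check -- it simply tells the checker
    # to trust the annotation, unlike isinstance()
    triplets.append((cast(str, u), cast(str, v), cast(Tuple[int, int], e_type)))
```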
|
<python><python-typing><pyright>
|
2023-05-04 19:01:20
| 1
| 966
|
Antoine
|
76,176,534
| 1,246,260
|
Longer Responses with LlamaIndex?
|
<p>I currently have LlamaIndex functioning off some private data just fine, however, it only outputs about 1000 characters worth. How do I extend the output until its completion? I know I can bump the tokens a bit, but I'm looking at potentially pages worth.</p>
<pre><code>llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003"))
max_input_size = 4096
num_output = 100
max_chunk_overlap = 20
chunk_size_limit = 600
prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap, chunk_size_limit=chunk_size_limit)
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
storage_context = StorageContext.from_defaults(persist_dir="./storage")
documents = []
documents += SimpleDirectoryReader('dataDir1').load_data()
documents += SimpleDirectoryReader('dataDir2').load_data()
index = GPTVectorStoreIndex.from_documents(documents, storage_context=storage_context, service_context=service_context)
storage_context.persist()
query_engine = index.as_query_engine()
resp = query_engine.query("Write a policy that is compliant with XYZ.")
print(resp)
</code></pre>
|
<python><openai-api><llama-index>
|
2023-05-04 18:56:12
| 2
| 613
|
Jon
|
76,176,477
| 10,286,813
|
Python, Regex everything before a sequence
|
<p>I have a data frame column which contains the following strings. I want to parse everything to the left of the 4- or 5-digit number into a separate column.</p>
<pre><code>column name
0 129 4,029.16 08-31-13 8043 304.12 02-28-13 00T773466
1 19 179.00 03-31-12 T0707440 7794 2,898.56 01-31-13 00T757276
2 19716 3.36 01-31-15 00T729912 30593 6,041.74 12-31-16 00T770362
3 19870 131.22 02-28-15 00T709284 4 30611 199.46 02-03-23 12-31-16 00T691818
4 19984 12.25 02-28-15 00T879664 30647 199.45 12-31-16 00T691818
5 20505 1,206.13 03-31-15 00T590582 30648 199.45 12-31-16 00T691818
6 20623 84.11 04-30-15 00T621725 30759 3,325.27 12-31-16 00T580639
</code></pre>
<p>So the desired dataframe column would look something like this:</p>
<pre><code>before
0 129 4,029.16 08-31-13
1 19 179.00 03-31-12 T0707440
2 19716 3.36 01-31-15 00T729912
3 19870 131.22 02-28-15 00T709284 4
4 19984 12.25 02-28-15 00T879664
5 20505 1,206.13 03-31-15 00T590582
6 20623 84.11 04-30-15 00T621725
</code></pre>
<p>I tried parsing everything to the right of the 4 or 5 digit using</p>
<pre><code>df['after']=df['column_name'].str.extract('(\s\d{4,5}\s+.*)')
</code></pre>
<p>And it works but how do I do the same to obtain df['before']?</p>
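<p>For context, this is the lazy-match variant I have been experimenting with on a couple of the rows above (the <code>.*?</code> stops at the first whitespace-delimited 4- or 5-digit number):</p>

```python
import pandas as pd

df = pd.DataFrame({'column_name': [
    '129 4,029.16 08-31-13 8043 304.12 02-28-13 00T773466',
    '19 179.00 03-31-12 T0707440 7794 2,898.56 01-31-13 00T757276',
]})

# lazily capture everything before the first \s-delimited 4-5 digit number
df['before'] = df['column_name'].str.extract(r'^(.*?)\s\d{4,5}\s')
```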
<p>Any help is greatly appreciated. Thanks!</p>
|
<python><regex><dataframe>
|
2023-05-04 18:50:15
| 4
| 1,049
|
Nev1111
|
76,176,439
| 2,883,245
|
Why is my streamlit app's second radio disappearing when I change the default answer?
|
<p>I am designing a small annotation interface which will present users with a series of examples. An example will be presented to a user, they will make a mutually exclusive selection using a radio button, then more information will be presented in a second radio button where they can make a new selection with the same options; after submitting this, they move to the next example. I have the following code:</p>
<pre><code>#!/usr/bin/env python3
import streamlit as st
columns = ["question", "answerA", "answerB", "answerC", "explanationA", "explanationB", "explanationC", "answer"]
with open('fake_data.tsv') as f:
examples = [dict(zip(columns, line.strip().split("\t"))) for line in f]
def show_example(examples, i, answers):
example = examples[i]
st.header("context: TODO, wire this up")
answer_before = st.radio(example["question"], [example["answerA"], example["answerB"], example["answerC"], "multiple options"], key=f"q1-{i}")
if st.button("submit", key=f"b1-{i}"):
answer_after = st.radio(example["question"], [
": ".join((example["answerA"], example["explanationA"])),
": ".join((example["answerB"], example["explanationB"])),
": ".join((example["answerC"], example["explanationC"])),
"multiple options"
], key=f"q2-{i}")
if i + 1 == len(examples):
print("Saving answers")
else:
if st.button("next question", key=f"b2-{i}"):
answers.append((answer_before, answer_after))
return show_example(examples, i+1, answers)
show_example(examples, 0, [])
</code></pre>
<p>which when run with a sample data file:</p>
<ol>
<li>Presents the first example's first part</li>
<li>Allows a user to submit their answer</li>
<li>Presents the first example's second part</li>
</ol>
<p>but when a different selection is made in (3), the second radio button disappears completely.</p>
<p>My data file is here:</p>
<h1><code>fake_data.tsv</code>:</h1>
<pre><code>question0 answerA0 answerB0 answerC0 explanationA0 explanationB0 explanationC0 acceptedAnswer0
question1 answerA1 answerB1 answerC1 explanationA1 explanationB1 explanationC1 acceptedAnswer1
question2 answerA2 answerB2 answerC2 explanationA2 explanationB2 explanationC2 acceptedAnswer2
question3 answerA3 answerB3 answerC3 explanationA3 explanationB3 explanationC3 acceptedAnswer3
</code></pre>
<p>How can I achieve my desired interaction? I'm using streamlit 1.22.0.</p>
|
<python><streamlit>
|
2023-05-04 18:43:37
| 1
| 17,125
|
erip
|
76,176,245
| 489,088
|
How to get the indexes of a sort operation instead of a sorted array using numpy?
|
<p>I have a 3D array like so:</p>
<pre><code>arr = [[[20, 5, 10], ...], ...]
</code></pre>
<p>if I do</p>
<pre><code>print(arr.argsort())
</code></pre>
<p>I get the indexes that would sort the array on that last axis:</p>
<pre><code>[[[1, 2, 0], ...], ...]
</code></pre>
<p>But what I want is, for each position, the index that the given element would have if the array was sorted:</p>
<pre><code>[[[2, 0, 1], ...], ...]
</code></pre>
<p>How can I do this using numpy? I have very large ndarrays so I am trying to avoid things like list comprehensions, etc and just accomplish this directly with numpy instead.</p>
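<p>For what it's worth, the pattern I keep running into elsewhere is the "double argsort": applying <code>argsort</code> twice along the last axis yields, for each element, its rank in the sorted order, which looks like exactly the output I described. I'm unsure whether sorting twice is the cheapest way, which is part of what I'm asking:</p>

```python
import numpy as np

arr = np.array([[[20, 5, 10],
                 [3, 1, 2]]])

# argsort of the argsort gives each element's position in the sorted array
ranks = arr.argsort(axis=-1).argsort(axis=-1)
```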
|
<python><arrays><numpy><numpy-ndarray>
|
2023-05-04 18:15:50
| 1
| 6,306
|
Edy Bourne
|
76,176,222
| 1,874,170
|
Overriding __new__ to pre-process arguments and create a subclass
|
<p>I've got a base class, <code>_BaseWidget</code>, which is not instantiated. It's got many <code>abstractmethod</code>s to be filled in by sub-classes like <code>AppleWidget</code> and <code>PearWidget</code> which are actually instantiated and used.</p>
<p>However, <strong>for convenience</strong>, I'd like to make the <code>_BaseWidget</code> constructor usable, and produce an appropriate sub-class if called.</p>
<p>I <em>want to</em> write something like this:</p>
<pre class="lang-py prettyprint-override"><code>class _BaseWidget:
def __init__(self, parameter1, parameter2=None):
if isinstance(parameter1, AppleSeed):
return AppleWidget(_prepare_apple(parameter1, color=parameter2))
        elif isinstance(parameter1, PearSeed):
return PearWidget(_prepare_pear(parameter1))
raise ValueError("convenience constructor couldn't guess what you're trying to do. please just instantiate the appropriate subclass directly")
</code></pre>
<p>However, I'm running up against two issues:</p>
<ol>
<li><code>__init__</code> can't override which class is created.</li>
<li><code>__new__</code> can't access parameters passed to the constructor, which I <em>would</em> like to pre-process slightly in this stage before passing them to the subclass constructors.</li>
</ol>
<p>Is it possible to do both of these at the same time? Again, I want to provide a slightly simplified "convenience" interface to the base class constructor that will do pre-processing on the arguments before giving them to the actual subclasses.</p>
|
<python><class><constructor><subclass>
|
2023-05-04 18:11:26
| 0
| 1,117
|
JamesTheAwesomeDude
|
76,176,132
| 11,155,419
|
Regex to extract part of URL that can be in the middle or end
|
<p>So I have a URL that could be in the following three formats:</p>
<pre class="lang-none prettyprint-override"><code>https://www.example.com/d/abcd-1234/edit
https://www.example.com/d/abcd-1234/
https://www.example.com/d/abcd-1234
</code></pre>
<p>I would like to extract only the <code>abcd-1234</code> bit from the above URLs. I've tried doing so using the following regular expression, however it will only capture the first and second case.</p>
<pre class="lang-py prettyprint-override"><code>import re
id = re.search(r'https://www.example.com/d/(.+?)/', url).group(1)
</code></pre>
<p>For <code>url=https://www.example.com/d/abcd-1234</code> the above will fail with:</p>
<pre class="lang-none prettyprint-override"><code>AttributeError: 'NoneType' object has no attribute 'group'
</code></pre>
<p>given that it does not match the regex.</p>
<p>What regex shall I use in order to indicate that the part of interest could be followed by no character at all?</p>
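<p>For context, the closest I have come is dropping the trailing-slash requirement entirely and matching the path segment itself:</p>

```python
import re

urls = [
    'https://www.example.com/d/abcd-1234/edit',
    'https://www.example.com/d/abcd-1234/',
    'https://www.example.com/d/abcd-1234',
]

# [^/]+ naturally stops at the next slash *or* at the end of the string
ids = [re.search(r'https://www\.example\.com/d/([^/]+)', u).group(1)
       for u in urls]
```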
|
<python><python-3.x><regex>
|
2023-05-04 17:59:20
| 3
| 843
|
Tokyo
|