| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,415,473
| 20,646,427
|
How to make your own pagination in Django
|
<p>I have some JSON data from a 3rd-party API whose parameters look like <code>3rd_party.com/API?NumberOfItemsOnPage=5&PageNumber=1</code>, where NumberOfItemsOnPage is how many items appear on one page and PageNumber is the current page. The JSON contains the items, info about the current page, how many items are on a page, and how many pages there are in total.</p>
<p>I want to know how I can paginate through that JSON using <code>paginate_by</code> in a <code>ListView</code> subclass.</p>
<pre><code>import requests

class MyView(ListView):
    paginate_by = 5

    def get_queryset(self):
        page_number = self.request.GET.get('page')
        # This might be rough, but what matters here is how to paginate
        # using Django's default pagination
        response = requests.get(f'3rd_party.com/API?NumberOfItemsOnPage=5&PageNumber={page_number}')
        return response.json()
</code></pre>
<p>I guess I need to override Django's <code>Paginator</code> class, but I don't know how to do that: the default paginator will only ever paginate the first page it receives and won't fetch the others.</p>
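One possible direction (a sketch, not tested against a real API): Django's `Paginator` only needs an object that supports `len()` and slicing, so `get_queryset` can return a lazy sequence that fetches the matching remote page on demand. `ApiResultList`, `fake_fetch`, and the `TotalItems`/`Items` field names below are hypothetical stand-ins for the real API.

```python
# Sketch: a lazy sequence the Django Paginator can slice; fetch_page and
# the 'TotalItems'/'Items' fields are hypothetical stand-ins for the API.
class ApiResultList:
    def __init__(self, fetch_page, per_page):
        self.fetch_page = fetch_page      # callable: page_number -> parsed json
        self.per_page = per_page

    def __len__(self):
        # total item count, so Paginator can compute num_pages
        return self.fetch_page(1)['TotalItems']

    def __getitem__(self, key):
        # Paginator slices object_list[bottom:top]; map that to one API page
        if isinstance(key, slice):
            page_number = key.start // self.per_page + 1
            return self.fetch_page(page_number)['Items']
        raise TypeError("only slicing is supported in this sketch")

# Fake API for demonstration: 12 items, 5 per page
def fake_fetch(page):
    items = list(range(12))
    start = (page - 1) * 5
    return {'TotalItems': 12, 'Items': items[start:start + 5]}

results = ApiResultList(fake_fetch, per_page=5)
print(len(results), results[5:10])  # second page
```

In the view, `get_queryset` would return such an object while keeping `paginate_by = 5`, so each page request triggers exactly one API call.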
|
<python><django><django-pagination>
|
2023-06-06 13:59:20
| 1
| 524
|
Zesshi
|
76,415,468
| 13,370,214
|
% of revenue change from the previous month and from the same month of the previous year
|
<p>Input Data</p>
<pre><code> Date State Revenue
31-01-2020 M 100
05-05-2020 M 500
05-05-2020 k 500
31-05-2020 M 100
12-04-2021 K 250
15-04-2021 M 300
20-05-2021 K 250
21-05-2021 M 300
</code></pre>
<p>Expected Output:</p>
<p>Only for the Month of May'21</p>
<pre><code>Month   STATE   Total_Revenue   %_change_vs_previous_month   %_change_vs_same_month_previous_year
May'21  M
        K
</code></pre>
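A hedged sketch of one way to compute both comparisons with pandas (column names follow the sample; the exact output layout is a guess): aggregate revenue to monthly totals per state, then self-merge against the month shifted by 1 and by 12, so gaps in the calendar are handled correctly.

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['31-01-2020', '05-05-2020', '05-05-2020', '31-05-2020',
             '12-04-2021', '15-04-2021', '20-05-2021', '21-05-2021'],
    'State': ['M', 'M', 'k', 'M', 'K', 'M', 'K', 'M'],
    'Revenue': [100, 500, 500, 100, 250, 300, 250, 300],
})
df['Date'] = pd.to_datetime(df['Date'], format='%d-%m-%Y')
df['State'] = df['State'].str.upper()  # the sample mixes 'k' and 'K'

# Monthly totals per state
monthly = (df.groupby([df['Date'].dt.to_period('M'), 'State'])['Revenue']
             .sum().rename('Total_Revenue').reset_index())

# Self-merge against the month shifted by 1 (previous month) and by 12
# (same month of the previous year)
prev_m = monthly.assign(Date=monthly['Date'] + 1).rename(
    columns={'Total_Revenue': 'prev_month_rev'})
prev_y = monthly.assign(Date=monthly['Date'] + 12).rename(
    columns={'Total_Revenue': 'prev_year_rev'})
out = (monthly.merge(prev_m, on=['Date', 'State'], how='left')
              .merge(prev_y, on=['Date', 'State'], how='left'))
out['pct_vs_prev_month'] = (out['Total_Revenue'] / out['prev_month_rev'] - 1) * 100
out['pct_vs_same_month_prev_year'] = (out['Total_Revenue'] / out['prev_year_rev'] - 1) * 100

may21 = out[out['Date'] == pd.Period('2021-05')]
print(may21[['State', 'Total_Revenue', 'pct_vs_prev_month',
             'pct_vs_same_month_prev_year']])
```

The shift-and-merge avoids `pct_change`, which compares against the previous *row* and silently gives wrong answers when months are missing.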
|
<python><pandas><dataframe>
|
2023-06-06 13:58:53
| 1
| 431
|
Harish reddy
|
76,415,360
| 7,678,074
|
OpenCV moments() returns zero, causing ZeroDivisionError in calculation of object center and area
|
<p>I am trying to compute the area and the center coordinates of objects in a binary mask using opencv. However, I noticed that in some cases I get the wrong result. For example, if I get the contours like this:</p>
<pre><code>import numpy as np
import cv2
binary_mask = np.array([
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1]])
contours, _ = cv2.findContours(
binary_mask.astype(np.uint8),
cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
>>> contours
(array([[[0, 3]],
[[5, 3]]], dtype=int32),
array([[[2, 0]],
[[2, 1]],
[[3, 1]],
[[3, 0]]], dtype=int32))
</code></pre>
<p>Then I get <code>ZeroDivisionError</code> for the center calculation:</p>
<pre><code>def get_centroid_from_contour(contour):
    M = cv2.moments(contour)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    return (cX, cY)
>>> get_centroid_from_contour(contour)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in get_centroid_from_contour_
ZeroDivisionError: float division by zero
</code></pre>
<p>This I think is related to the fact that somehow opencv thinks the 2x2 squared object has zero area:</p>
<pre><code>>>> cv2.contourArea(contours[0])
0.0
</code></pre>
<p>It seems something related to "open objects". Indeed the first contour only contains two points that do not close the polygon, but I have no idea how to fix this. I also tried closing the contour as suggested <a href="https://stackoverflow.com/a/72249832/7678074">here</a> but it doesn't work.</p>
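One workaround (a sketch that sidesteps `cv2.moments` entirely rather than fixing the contour): compute area and centroid straight from the mask pixels of each object, which is well defined even for the 1-pixel-high bar where contour-based moments degenerate to zero. Here a component is isolated by hand for brevity; with OpenCV you would normally use `cv2.connectedComponents` to get per-object masks.

```python
import numpy as np

def centroid_and_area_from_mask(mask):
    # Pixel-count area and pixel-average centroid: well defined even for
    # 1-pixel-wide/high objects where contour moments come out as 0.
    ys, xs = np.nonzero(mask)
    area = len(xs)
    return (xs.mean(), ys.mean()), area

binary_mask = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1]])

# One component at a time (with OpenCV: cv2.connectedComponents(mask))
top_square = binary_mask.copy()
top_square[3, :] = 0
(cx, cy), area = centroid_and_area_from_mask(top_square)
print((cx, cy), area)  # (2.5, 0.5) 4
```

Note the pixel-count area (4 for the 2x2 square) intentionally differs from `cv2.contourArea`, which measures the polygon traced through pixel centers.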
|
<python><opencv>
|
2023-06-06 13:46:19
| 1
| 936
|
Luca Clissa
|
76,415,323
| 3,507,584
|
Pandas replace multiple columns only when missing or NA from other dataset
|
<p>As a Minimum Working Example, I have two datasets with 2 keys [<code>col1</code> and <code>col2</code>] and multiple columns with data [columns starting with <code>z_</code>].</p>
<pre><code>df1 = pd.DataFrame(data= {'col1': [1, 2], 'col2': ["A", "B"], 'z_col3': [3, np.nan], 'z_col4': [3, 4]} )
df2 = pd.DataFrame(data= {'col1': [1,2], 'col2': ["C", "B"], 'z_col3': [3, 4], 'z_col4': [3, 4]} )
</code></pre>
<p><a href="https://i.sstatic.net/bf574.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bf574.png" alt="Data frames" /></a></p>
<p>I want to do a merge where <code>df1</code> with missing values in the <code>z_</code> columns would get the values from <code>df2</code>. Is there any intelligent way to do so? This is a MWE so I have a quite large table with 50+ columns.</p>
<p>I tried the following but it yields an error:</p>
<pre><code>df1[['z_col3','z_col4']] = df2[['col1','col2']].map(df2.set_index(['col1','col2'])[['z_col3','z_col4']])
</code></pre>
<p>Any idea how to do this?</p>
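One approach that may fit (a sketch using the MWE's own frames): align both frames on the two key columns and let `DataFrame.fillna` patch only the missing `z_` cells, which scales to 50+ columns without naming them one by one.

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'col1': [1, 2], 'col2': ["A", "B"],
                    'z_col3': [3, np.nan], 'z_col4': [3, 4]})
df2 = pd.DataFrame({'col1': [1, 2], 'col2': ["C", "B"],
                    'z_col3': [3, 4], 'z_col4': [3, 4]})

keys = ['col1', 'col2']
z_cols = [c for c in df1.columns if c.startswith('z_')]  # works for 50+ columns

# Align on the keys, patch NaNs in df1 from df2, restore the original shape
patched = (df1.set_index(keys)[z_cols]
              .fillna(df2.set_index(keys)[z_cols])
              .reset_index())
print(patched)
```

`fillna` with a DataFrame argument aligns on both index and columns, so only cells that are NaN in `df1` *and* present under the same key pair in `df2` get replaced; the unmatched `(1, "A")` row is left untouched.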
|
<python><pandas>
|
2023-06-06 13:41:11
| 3
| 3,689
|
User981636
|
76,415,318
| 202,335
|
Python + Selenium + CSS selector
|
<p>This is the HTML code:</p>
<pre><code> </template>
</el-table-column>
<el-table-column prop="announcementTitle" label="公告标题">
<template slot-scope="scope">
<span class="ahover">
<a v-html="scope.row.announcementTitle" target="_blank" :href="linkLastPage(scope.row)"></a>
<span v-show="checkDocType(scope.row.adjunctType)" class="icon-f"><i class="iconfont" :class="[checkDocType(scope.row.adjunctType)]"></i></span>
</span>
</template>
</code></pre>
<p>If I want to get the content of <code>announcementTitle</code> with Python + Selenium and a CSS selector, how should I write the code?</p>
|
<python><selenium-webdriver><css-selectors>
|
2023-06-06 13:40:39
| 2
| 25,444
|
Steven
|
76,415,204
| 811,359
|
Beam/Dataflow: How to trigger results early in a SlidingWindow without losing data?
|
<p>I have three events coming in from PubSub, and I want to join them on a common key, ideally without losing any data. I started out using <code>beam.window.SlidingWindows(10, 5)</code> - two sliding windows of 5 seconds each to ensure I got all the events (duplicates are fine in my use case). This worked for getting 100% of the data to join, but the latency was very high, with an average of 35 seconds to join the data. So I decided to experiment with triggers. Setting a trigger of <code>beam.transforms.trigger.AfterProcessingTime(1)</code> was much faster, with an average latency of 3 sec. However, I'm only joining 70% of my data now. I was expecting this code to join the data it could early, while the window would still fire once the watermark passed the sliding window time. How can I fire some of the data early without losing the data that comes later?</p>
<pre><code>event_1_window = (
    event_1
    | beam.Map(lambda r: (r['requestTXID'], r))
    | "Event 1 Window" >> beam.WindowInto(
        beam.window.SlidingWindows(10, 5),
        trigger=beam.transforms.trigger.AfterProcessingTime(1),
        accumulation_mode=beam.transforms.trigger.AccumulationMode.ACCUMULATING)
)

event_2_window = (
    event_2
    | beam.Map(lambda r: (r['requestTXID'], r))
    | "Event 2 window" >> beam.WindowInto(
        beam.window.SlidingWindows(10, 5),
        trigger=beam.transforms.trigger.AfterProcessingTime(1),
        accumulation_mode=beam.transforms.trigger.AccumulationMode.ACCUMULATING)
)

joined = (
    [event_1_window, event_2_window]
    | "Join" >> beam.CoGroupByKey()
    ....
)
</code></pre>
|
<python><google-cloud-platform><google-cloud-dataflow><apache-beam>
|
2023-06-06 13:30:25
| 1
| 474
|
ahalbert
|
76,415,149
| 1,901,071
|
win32com: Safely restrict emails in a group Outlook inbox before searching on subject
|
<p>I have the following Python code which loops through emails in a group shared outlook mailbox and then saves attachments. However this will do a scan of the entire inbox.</p>
<p>I have seen in other answers that this is understandably a terrible idea although the third answer isn't clear to me how to go about it:</p>
<p><a href="https://stackoverflow.com/questions/31619012/extract-senders-email-address-from-outlook-exchange-in-python-using-win32">Extract sender's email address from Outlook Exchange in Python using win32</a> .</p>
<p>I wish to restrict the inbox to only emails sent by a specific person with keywords in the subject. My pseudo-code, which I have been working on today, is below.</p>
<pre><code># new outlook object
import os
import win32com.client as win32
outlook = win32.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.Folders.Item("group@foo.com").Folders.Item("Inbox")
# Here we would apply the filter and then save each attachment;
# after a file is downloaded we mark the mail as read
items = inbox.Items
for item in items:
    # Check if the item is unread and has the desired sender and subject
    if (item.UnRead
            and item.Sender.GetExchangeUser().PrimarySmtpAddress == "foo@foo.com"
            and "FM1, Football, results" in item.Subject):
        # Loop through all the attachments in the item
        for attachment in item.Attachments:
            try:
                save_path = os.path.join(os.getcwd(), "email_attachments", attachment.FileName)
                # Save the file and mark the email as read
                attachment.SaveAsFile(save_path)
                item.UnRead = False
            except Exception:
                print('Error:', attachment.FileName)
</code></pre>
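The usual way to avoid scanning the whole inbox is `Items.Restrict` with a DASL (`@SQL=`) filter, so Outlook filters server-side before Python ever loops. A sketch of building such a filter string; the address and subject keywords are the question's placeholders, and the exact property names should be checked against the Outlook object model documentation.

```python
def build_restrict_filter(sender_smtp, subject_keyword):
    # DASL (@SQL) filter so Outlook filters server-side before Python loops.
    # The urn:schemas:httpmail property names are the documented DASL names.
    return (
        "@SQL="
        "\"urn:schemas:httpmail:fromemail\" = '{0}'"
        " AND \"urn:schemas:httpmail:subject\" LIKE '%{1}%'"
        " AND \"urn:schemas:httpmail:read\" = 0"
    ).format(sender_smtp, subject_keyword)

flt = build_restrict_filter("foo@foo.com", "FM1, Football, results")
print(flt)
# With the question's mailbox (needs Outlook, so not runnable here):
# for item in inbox.Items.Restrict(flt):
#     ...
```

`Restrict` returns a new collection containing only the matching items, so the attachment-saving loop then touches only relevant mail.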
|
<python><outlook><email-attachments><win32com><office-automation>
|
2023-06-06 13:24:54
| 2
| 2,946
|
John Smith
|
76,415,133
| 22,009,322
|
Using Random.sample with hatches in broken bars doesn't work
|
<p>I want to draw different random hatches (from a predefined set) for each broken bar.
Like this: <a href="https://i.sstatic.net/wMh8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wMh8J.png" alt="https://i.sstatic.net/rfNLf.png" /></a></p>
<p>Below is the full code example:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import random
result = pd.DataFrame([['Bill', 1972, 1974],
['Bill', 1976, 1978],
['Bill', 1967, 1971],
['Danny', 1969, 1975],
['Danny', 1976, 1977],
['James', 1971, 1972],
['Marshall', 1967, 1975]],
columns=['Person', 'Year_start', 'Year_left']).\
sort_values(['Year_start', 'Year_left'], ascending=[True, True]).\
reset_index(drop=True)
fig, ax = plt.subplots()
height = 0.5
names = []
colors = ['tab:blue', 'tab:green', 'tab:red', 'yellow', 'brown', 'black', 'tab:orange', 'aquamarine']
hatches = ['/', 'o', '+', '-', '*', 'O', 'x', 'X', '|']
bc = result.groupby('Person')['Person'].count().max()
# this is broken barh's max count in this dataframe that will be used further in loop (in order to avoid using constants)
print(bc)
for y, (name, g) in enumerate(result.groupby('Person', sort=False)):
    ax.broken_barh(list(zip(g['Year_start'],
                            g['Year_left'] - g['Year_start'])),
                   (y - height / 2, height),
                   facecolors=random.sample(colors, k=bc),
                   hatch=random.sample(hatches, k=bc)
                   )
    names.append(name)
ax.set_ylim(0 - height, len(names) - 1 + height)
ax.set_xlim(result['Year_start'].min() - 1, result['Year_left'].max() + 1)
ax.set_yticks(range(len(names)), names)
ax.set_yticklabels(names)
print(result)
ax.grid(True)
plt.show()
</code></pre>
<p>While it perfectly works with the colors:</p>
<pre><code>facecolors=random.sample(colors, k=bc)
</code></pre>
<p>it does not work with the hatches in the same loop:</p>
<pre><code>hatch=random.sample(hatches, k=bc)
</code></pre>
<p>The hatches are not displayed (image below), which is very strange.
Is this something that is not supported for hatches in particular, or did I make a mistake?
Many thanks!</p>
<p><a href="https://i.sstatic.net/7aEmo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7aEmo.png" alt="enter image description here" /></a></p>
|
<python><pandas><matplotlib><random><bar-chart>
|
2023-06-06 13:22:53
| 1
| 333
|
muted_buddy
|
76,415,091
| 7,846,884
|
SyntaxError: Not all output, log and benchmark files of rule per_read_aggregate contain the same wildcards
|
<p>I've been trying for hours to debug this error. Could you please help identify what I'm missing?</p>
<p>Here is the error I keep getting:</p>
<pre><code>SyntaxError:
Not all output, log and benchmark files of rule per_read_aggregate contain the same wildcards. This is crucial though, in order to avoid that two or more jobs write to the same file.
File "/scripts/workflows/aggregate_per_reads_ONT/Snakefile.smk", line 14, in <module>
</code></pre>
<p>Here's the full snakemake file</p>
<pre><code>configfile: "config/config.yaml"
configfile: "config/samples.yaml"

rule all:
    input:
        expand("results/per_read_aggregate/{samples}/{samples}.5hmC_CpGs_aggregate_reads_stats.rds", samples=config["samples"]),
        expand("results/per_read_aggregate/{samples}/{samples}.5mC_CpGs_aggregate_reads_stats.rds", samples=config["samples"]),
        expand("results/per_read_aggregate/{samples}/{samples}.methylation_states_counts_5hmC_5mC_Unmeth.pdf", samples=config["samples"]),
        expand("results/per_read_aggregate/{samples}/{samples}.hist_nReads_5hmC_5mC_Unmeth.pdf", samples=config["samples"]),
        expand("results/per_read_aggregate/{samples}/{samples}.hist_mean_exp_mod_prob_5hmC_5mC_aggregate_reads_perCpG_sites.pdf", samples=config["samples"]),
        expand("results/per_read_aggregate/{samples}/{samples}.hist_mean_exp_5mC_prob_aggregate_reads_perCpG_sites.pdf", samples=config["samples"]),
        expand("results/per_read_aggregate/{samples}/{samples}.hist_mean_exp_5hmC_prob_aggregate_reads_perCpG_sites.pdf", samples=config["samples"])

rule per_read_aggregate:
    input:
        input_file_from_config=lambda wildcards: config["samples"][wildcards.samples]
    output:
        agg_5hmC_rds="results/per_read_aggregate/{samples}/{samples}.5hmC_CpGs_aggregate_reads_stats.rds",
        agg_5mC_rds="results/per_read_aggregate/{samples}/{samples}.5mC_CpGs_aggregate_reads_stats.rds",
        plt_meth_state="results/per_read_aggregate/{samples}/{samples}.methylation_states_counts_5hmC_5mC_Unmeth.pdf",
        hist_agg_nReads_pdf="results/per_read_aggregate/{samples}/{samples}.hist_nReads_5hmC_5mC_Unmeth.pdf",
        hist_agg_5hmC_5mC_pdf="results/per_read_aggregate/{samples}/{samples}.hist_mean_exp_mod_prob_5hmC_5mC_aggregate_reads_perCpG_sites.pdf",
        hist_agg_5mC_pdf="results/per_read_aggregate/{samples}/{samples}.hist_mean_exp_5mC_prob_aggregate_reads_perCpG_sites.pdf",
        hist_agg_5hmc_pdf="results/per_read_aggregate/{samples}/{samples}.hist_mean_exp_5hmC_prob_aggregate_reads_perCpG_sites.pdf"
    params:
        rscript_from_config=config["Rscript_config"]
    log:
        "logs/{rule}/{samples}/{samples}.log"
    shell:
        """
        Rscript {params.rscript_from_config} --input_file {input.input_file_from_config} --hmc_output_file {output.agg_5hmC_rds} --mc_output_file {output.agg_5mC_rds} --plot_meth_state {output.plt_meth_state} --plot_nReads {output.hist_agg_nReads_pdf} --plot_mean_exp_5mC_5hmC_prob {output.hist_agg_5hmC_5mC_pdf} --plot_mean_exp_5mC_prob {output.hist_agg_5mC_pdf} --plot_mean_exp_5hmC_prob {output.hist_agg_5hmc_pdf} 2> {log}
        """

onsuccess:
    shell("date '+%d/%m/%Y %H:%M:%S' > time_end_Aggregate_per_read_stats_ONT.txt")
</code></pre>
<p>Here's what the samples.yaml looks like:</p>
<pre><code>samples:
    sample1_N: /directory/sample1_N/per_read_modified_base_calls.txt
    sample1_T: /directory/sample1_T/per_read_modified_base_calls.txt
</code></pre>
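A likely culprit, judging only from the error text (an educated guess, not a verified fix): `{rule}` in the `log:` path is parsed as an extra wildcard that never appears in the outputs, so Snakemake concludes the rule's output and log files don't share the same wildcards. Hard-coding the rule name leaves `{samples}` as the only wildcard everywhere:

```
rule per_read_aggregate:
    ...
    log:
        "logs/per_read_aggregate/{samples}/{samples}.log"
```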
|
<python><bash><snakemake>
|
2023-06-06 13:18:16
| 1
| 473
|
sahuno
|
76,415,058
| 3,329,877
|
Matrix multiplication with a transposed NumPy array using Numba JIT does not work
|
<h2>Environment</h2>
<ul>
<li>OS: Windows 10</li>
<li>Python version: 3.10</li>
<li>Numba version: 0.57.0</li>
<li>NumPy version: 1.24.3</li>
</ul>
<h2>Example</h2>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numba import njit

@njit
def matmul_transposed(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # return a @ b.T  # also tried this with a similar result; np.matmul seems to be unsupported by Numba
    return a.dot(b.transpose())

matmul_transposed(np.array([[1.0, 1.0]]), np.array([[1.0, 1.0]]))
</code></pre>
<h2>Error</h2>
<p>The above example raises an error</p>
<pre><code>- Resolution failure for literal arguments:
No implementation of function Function(<function array_dot at 0x...>) found for signature:
>>> array_dot(array(float64, 2d, C), array(float64, 2d, F))
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload in function 'array_dot': File: numba\np\arrayobj.py: Line 5929.
With argument(s): '(array(float64, 2d, C), array(float64, 2d, F))':
Rejected as the implementation raised a specific error:
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<function dot at 0x...>) found for signature:
>>> dot(array(float64, 2d, C), array(float64, 2d, F))
There are 4 candidate implementations:
- Of which 2 did not match due to:
Overload in function 'dot_2': File: numba\np\linalg.py: Line 525.
With argument(s): '(array(float64, 2d, C), array(float64, 2d, F))':
Rejected as the implementation raised a specific error:
LoweringError: Failed in nopython mode pipeline (step: native lowering)
scipy 0.16+ is required for linear algebra
File "[...]\numba\np\linalg.py", line 582:
def _dot2_codegen(context, builder, sig, args):
<source elided>
return lambda left, right: _impl(left, right)
^
During: lowering "$8call_function.3 = call $2load_deref.0(left, right, func=$2load_deref.0, args=[Var(left, linalg.py:582), Var(right, linalg.py:582)], kws=(), vararg=None, varkwarg=None, target=None)" at [...]\numba\np\linalg.py (582)
raised from [...]\numba\core\errors.py:837
- Of which 2 did not match due to:
Overload in function 'dot_3': File: numba\np\linalg.py: Line 784.
With argument(s): '(array(float64, 2d, C), array(float64, 2d, F))':
Rejected as the implementation raised a specific error:
TypingError: missing a required argument: 'out'
raised from [...]\numba\core\typing\templates.py:784
During: resolving callee type: Function(<function dot at 0x...>)
During: typing of call at [...]\numba\np\arrayobj.py (5932)
File "[...]\numba\np\arrayobj.py", line 5932:
def dot_impl(arr, other):
return np.dot(arr, other)
^
raised from [...]\numba\core\typeinfer.py:1086
- Resolution failure for non-literal arguments:
None
During: resolving callee type: BoundFunction((<class 'numba.core.types.npytypes.Array'>, 'dot') for array(float64, 2d, C))
During: typing of call at [...]\example.py (7)
File "scratch_2.py", line 7:
def matmul_transposed(a: np.ndarray, b: np.ndarray) -> np.ndarray:
<source elided>
"""Return a @ b.T"""
return a.dot(b.transpose())
^
</code></pre>
<h2>Interpretation</h2>
<p>From the error message I concluded that Numba seems to transpose the array by changing its layout flag from C to Fortran, which is a cheap operation since it does not have to move the data physically, but it then seems not to know how to multiply the C-ordered array with the Fortran-ordered one.</p>
<h2>Question</h2>
<p>Is there any way to multiply these matrices? Preferably without copying the whole <code>b</code> while doing the transposition?</p>
<p>It seems like this is a fairly ordinary operation so I'm confused that it does not work.</p>
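A commonly reported workaround (it does copy `b`, which the question hoped to avoid; to my knowledge the C/F layout mismatch makes some copy or a hand-written loop necessary): force the transposed view back to C order with `np.ascontiguousarray`. Shown here without the `@njit` decorator so it runs as-is; the same body is the form usually reported to compile under Numba.

```python
import numpy as np

def matmul_transposed(a, b):
    # np.ascontiguousarray makes a C-ordered copy of b.T, so both
    # operands reach the dot path with the same memory layout; inside
    # an @njit function this sidesteps the C/F typing rejection.
    return a.dot(np.ascontiguousarray(b.transpose()))

res = matmul_transposed(np.array([[1.0, 2.0]]), np.array([[3.0, 4.0]]))
print(res)  # [[11.]]
```

The alternative of writing the multiplication as explicit loops avoids the copy but forgoes the BLAS-backed `dot`.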
|
<python><numpy><matrix><matrix-multiplication><numba>
|
2023-06-06 13:14:33
| 1
| 397
|
VaNa
|
76,415,025
| 3,247,006
|
How to completely remove a package from PyPI or TestPyPI?
|
<p>I could remove all releases of my package <code>example_package_superkai</code> but could not remove the package itself from <a href="https://pypi.org/" rel="nofollow noreferrer">PyPI</a> or <a href="https://test.pypi.org/" rel="nofollow noreferrer">TestPyPI</a> as shown below:</p>
<p><a href="https://i.sstatic.net/ccQYF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ccQYF.png" alt="enter image description here" /></a></p>
<p>Or:</p>
<p><a href="https://i.sstatic.net/TPgtV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TPgtV.png" alt="enter image description here" /></a></p>
<p>Actually, <a href="https://stackoverflow.com/questions/20403387/how-to-remove-a-package-from-pypi">How to remove a package from Pypi</a> doesn't have such answers.</p>
<p>So, how can I completely remove a package from <code>PyPI</code> or <code>TestPyPI</code>?</p>
|
<python><package><release><pypi><testpypi>
|
2023-06-06 13:10:17
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,414,933
| 1,581,090
|
How to use telnetlib3 to write and read to/from a telnet connection using python?
|
<p>I am trying to use <a href="https://telnetlib3.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">telnetlib3</a> with python 3.8.10 to open a connection, write data to it and try to read the reply. This is the code I am using:</p>
<pre><code>reader,writer = asyncio.run(telnetlib3.open_connection(host, port))
writer.write("$SYS,INFO")
reply = asyncio.run(reader.read())
</code></pre>
<p>However, I get the following error for which I have no idea what it means, and for which I do not really find anything useful with google:</p>
<pre><code>RuntimeError: Task <Task pending name='Task-3' coro=<TelnetReaderUnicode.read() running at /home/alex/.local/lib/python3.8/site-packages/telnetlib3/stream_reader.py:206> cb=[_run_until_complete_cb() at /usr/lib/python3.8/asyncio/base_events.py:184]> got Future <Future pending> attached to a different loop
</code></pre>
<p>Can this be fixed? How to use <code>telnetlib3</code> properly?</p>
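The error means the connection was created on one event loop (the first `asyncio.run`) and read on another (the second call creates a fresh loop). The usual fix is to do the connect, write, and read inside a single coroutine driven by one `asyncio.run`. The sketch below shows the pattern with a hypothetical stand-in for `telnetlib3.open_connection` so it runs without a Telnet server; with telnetlib3 you would swap in the real call.

```python
import asyncio

# Hypothetical stand-in for telnetlib3.open_connection, so the pattern
# is runnable here; replace with the real call in practice.
async def open_connection(host, port):
    reader = asyncio.StreamReader()
    reader.feed_data(b"$SYS,INFO,OK")
    reader.feed_eof()
    return reader, None  # telnetlib3 also returns a writer

async def main():
    # Connect, write and read inside ONE coroutine driven by a single
    # asyncio.run(), so everything shares the same event loop.
    reader, writer = await open_connection("localhost", 23)
    # writer.write("$SYS,INFO")  # with a real telnetlib3 writer
    return await reader.read()

reply = asyncio.run(main())
print(reply)
```

Calling `asyncio.run` twice, as in the question, gives each call its own loop, which is exactly what the "attached to a different loop" message complains about.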
|
<python><telnet><telnetlib><telnetlib3>
|
2023-06-06 13:01:07
| 1
| 45,023
|
Alex
|
76,414,894
| 7,321,700
|
Using Pivot in a DF to output only the latest available value
|
<p><strong>Scenario:</strong> I have a dataframe with multiple columns. Four of them (A_Id, F_Id, Date and Output) are relevant.</p>
<p><strong>Objective:</strong> I am trying to extract these columns and create a dataframe with the format Index=A_Id, Columns=F_Id, and the values come from the column Output. Whenever there are two or more values of a given combination of A and F, use the column Date to select the latest available.</p>
<p><strong>Data Sample:</strong> Each A_Id entry has a value in combination with each F_Id entry, and a given combination can have 2 or more results with different dates. There are 10 unique F_Id values and 7000 A_Id values. Here is a subset:</p>
<pre><code>FId Date Output AId
628 2020/12/31 Yes 1
629 2020/12/31 No 1
080 2020/12/31 No 1
081 2020/12/31 No 1
628 2020/12/31 Yes 2
629 2020/12/31 No 2
080 2020/12/31 No 2
081 2020/12/31 No 2
628 2021/12/31 Yes 3
629 2021/12/31 Yes 3
080 2021/12/31 No 3
081 2021/12/31 No 3
628 2020/12/31 Yes 14
629 2020/12/31 No 14
080 2020/12/31 No 14
081 2020/12/31 No 14
628 2021/12/31 Yes 14
629 2021/12/31 No 14
080 2021/12/31 No 14
081 2021/12/31 No 14
628 2020/12/31 Yes 15
629 2020/12/31 No 15
080 2020/12/31 No 15
081 2020/12/31 Yes 15
</code></pre>
<p><strong>Desired ouptut:</strong> A Matrix using A_ID as the index and F_Id as columns. Either in one dataframe with 2d, where the first dimension has Output and the Second has the Dates:</p>
<pre><code> 4628 4629 5080 5081 4628 4629 5080 5081
1 Yes No No No 2020/12/31 2020/12/31 2020/12/31 2020/12/31
2 Yes No No No 2020/12/31 2020/12/31 2020/12/31 2020/12/31
3 Yes Yes No No 2021/12/31 2021/12/31 2021/12/31 2021/12/31
14 Yes No No No 2021/12/31 2021/12/31 2021/12/31 2021/12/31
15 Yes No No Yes 2020/12/31 2020/12/31 2020/12/31 2020/12/31
</code></pre>
<p>or two different dataframes, of the same size, one with Output and the other with Dates:</p>
<pre><code> 4628 4629 5080 5081
1 Yes No No No
2 Yes No No No
3 Yes Yes No No
14 Yes No No No
15 Yes No No Yes
4628 4629 5080 5081
1 2020/12/31 2020/12/31 2020/12/31 2020/12/31
2 2020/12/31 2020/12/31 2020/12/31 2020/12/31
3 2021/12/31 2021/12/31 2021/12/31 2021/12/31
14 2021/12/31 2021/12/31 2021/12/31 2021/12/31
15 2020/12/31 2020/12/31 2020/12/31 2020/12/31
</code></pre>
<p><strong>Obs:</strong> There are cases like A_Id 14, where the Outputs are available for two dates. In this case, I am trying to select the Output value for the latest available date, and that date for reference.</p>
<p><strong>What I tried 1:</strong> First I tried passing the Output and Date columns into the pivot_table function.:</p>
<pre><code># source df is read from the raw data on an excel spreadsheet
list_aid =[source['A_Id']]
list_fid = [source['F_Id']]
list_answer = [source['Output']]
testdf33 = source[['AgentId','FactorId','Answer','Date']].pivot_table(values=['Answer','Date'],
index='AgentId',
columns='FactorId',
aggfunc={'Answer': lambda x: ' '.join(x), 'Date': lambda x: ' '.join(x)})
</code></pre>
<p><strong>Issue 1:</strong> Running this raises an error, because I am passing Timestamps whereas the function expects strings.</p>
<pre><code>TypeError: sequence item 0: expected str instance, Timestamp found
</code></pre>
<p><strong>What I tried 2:</strong> I forcibly converted the data in the Date column to str format:</p>
<pre><code>testdf11 = source[['AgentId','FactorId','Answer','Date']]
testdf11['Date'] = testdf11['Date'].astype(str)
testdf33 = testdf11.pivot_table(values=['Answer','Date'],
index='AgentId',columns='FactorId',
aggfunc={'Answer': lambda x: ' '.join(x),
'Date': lambda x: ' '.join(x)})
</code></pre>
<p>Obs2: This gives me all the data aggregated, but I only need the latest output/dates, not an aggregation of whatever is available.</p>
<p><strong>Question:</strong> How can I output only the Output/Date combination for the latest available date?</p>
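One way that may work (a sketch on a subset of the sample data): `sort_values` + `drop_duplicates` keeps only the latest row per key pair *before* pivoting, which removes the duplicates that forced the string-joining `aggfunc` in the first place, and then plain `pivot` works for both value columns.

```python
import pandas as pd

# Subset of the sample, with a duplicated (AId, FId) pair for AId 14
df = pd.DataFrame({
    'FId':    ['628', '629', '628', '629', '628', '629'],
    'Date':   ['2020/12/31', '2020/12/31', '2020/12/31', '2020/12/31',
               '2021/12/31', '2021/12/31'],
    'Output': ['Yes', 'No', 'Yes', 'No', 'Yes', 'No'],
    'AId':    [1, 1, 14, 14, 14, 14],
})
df['Date'] = pd.to_datetime(df['Date'])

# Keep only the latest row per (AId, FId), then pivot each value column
latest = (df.sort_values('Date')
            .drop_duplicates(['AId', 'FId'], keep='last'))
outputs = latest.pivot(index='AId', columns='FId', values='Output')
dates = latest.pivot(index='AId', columns='FId', values='Date')
print(outputs)
print(dates)
```

This yields the two same-shaped matrices of the second desired output; concatenating them column-wise would give the single two-block frame of the first.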
|
<python><pandas><dataframe>
|
2023-06-06 12:55:56
| 2
| 1,711
|
DGMS89
|
76,414,844
| 20,920,790
|
How to correct positions of annotations for graph (plt.bar + plt.plot)
|
<p>I got this data:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">month_dt</th>
<th style="text-align: right;">ses_canceled</th>
<th style="text-align: right;">ses_finished</th>
<th style="text-align: right;">all_ses</th>
<th style="text-align: right;">canceled_ptc</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2021-02-01</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">2021-03-01</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">2021-04-01</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">28.57</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">2021-05-01</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">15</td>
<td style="text-align: right;">19</td>
<td style="text-align: right;">21.05</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">2021-06-01</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">23</td>
<td style="text-align: right;">27</td>
<td style="text-align: right;">14.81</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: left;">2021-07-01</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">30</td>
<td style="text-align: right;">37</td>
<td style="text-align: right;">18.92</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: left;">2021-08-01</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">53</td>
<td style="text-align: right;">59</td>
<td style="text-align: right;">10.17</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: left;">2021-09-01</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">62</td>
<td style="text-align: right;">71</td>
<td style="text-align: right;">12.68</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: left;">2021-10-01</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">90</td>
<td style="text-align: right;">101</td>
<td style="text-align: right;">10.89</td>
</tr>
<tr>
<td style="text-align: right;">9</td>
<td style="text-align: left;">2021-11-01</td>
<td style="text-align: right;">24</td>
<td style="text-align: right;">95</td>
<td style="text-align: right;">119</td>
<td style="text-align: right;">20.17</td>
</tr>
<tr>
<td style="text-align: right;">10</td>
<td style="text-align: left;">2021-12-01</td>
<td style="text-align: right;">30</td>
<td style="text-align: right;">131</td>
<td style="text-align: right;">161</td>
<td style="text-align: right;">18.63</td>
</tr>
<tr>
<td style="text-align: right;">11</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">33</td>
<td style="text-align: right;">159</td>
<td style="text-align: right;">192</td>
<td style="text-align: right;">17.19</td>
</tr>
<tr>
<td style="text-align: right;">12</td>
<td style="text-align: left;">2022-02-01</td>
<td style="text-align: right;">34</td>
<td style="text-align: right;">189</td>
<td style="text-align: right;">223</td>
<td style="text-align: right;">15.25</td>
</tr>
<tr>
<td style="text-align: right;">13</td>
<td style="text-align: left;">2022-03-01</td>
<td style="text-align: right;">60</td>
<td style="text-align: right;">275</td>
<td style="text-align: right;">335</td>
<td style="text-align: right;">17.91</td>
</tr>
<tr>
<td style="text-align: right;">14</td>
<td style="text-align: left;">2022-04-01</td>
<td style="text-align: right;">75</td>
<td style="text-align: right;">391</td>
<td style="text-align: right;">466</td>
<td style="text-align: right;">16.09</td>
</tr>
<tr>
<td style="text-align: right;">15</td>
<td style="text-align: left;">2022-05-01</td>
<td style="text-align: right;">108</td>
<td style="text-align: right;">485</td>
<td style="text-align: right;">593</td>
<td style="text-align: right;">18.21</td>
</tr>
<tr>
<td style="text-align: right;">16</td>
<td style="text-align: left;">2022-06-01</td>
<td style="text-align: right;">90</td>
<td style="text-align: right;">585</td>
<td style="text-align: right;">675</td>
<td style="text-align: right;">13.33</td>
</tr>
<tr>
<td style="text-align: right;">17</td>
<td style="text-align: left;">2022-07-01</td>
<td style="text-align: right;">160</td>
<td style="text-align: right;">775</td>
<td style="text-align: right;">935</td>
<td style="text-align: right;">17.11</td>
</tr>
<tr>
<td style="text-align: right;">18</td>
<td style="text-align: left;">2022-08-01</td>
<td style="text-align: right;">216</td>
<td style="text-align: right;">1140</td>
<td style="text-align: right;">1356</td>
<td style="text-align: right;">15.93</td>
</tr>
<tr>
<td style="text-align: right;">19</td>
<td style="text-align: left;">2022-09-01</td>
<td style="text-align: right;">187</td>
<td style="text-align: right;">955</td>
<td style="text-align: right;">1142</td>
<td style="text-align: right;">16.37</td>
</tr>
</tbody>
</table>
</div>
<p>My code:</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 6))
ax1 = plt.bar(
x=df7_2['month_dt'],
height=df7_2['ses_finished'],
label='Finished sessions',
edgecolor='black',
linewidth=0,
width=20,
color='#3049BF'
)
ax2 = plt.bar(
x=df7_2['month_dt'],
height=df7_2['ses_canceled'],
bottom=df7_2['ses_finished'],
label='Canceled sessions',
edgecolor='black',
linewidth=0,
width=20,
color='#BF9530'
)
secax = ax.twinx()
secax.set_ylim(min(df7_2['canceled_ptc'])-10, max(df7_2['canceled_ptc'])*1.5)
ax3 = plt.plot(
df7_2['month_dt'],
df7_2['canceled_ptc'],
color='#E97800',
label='Canceled sessions, ptc'
)
columns = ['ses_finished', 'ses_canceled', 'canceled_ptc']
colors = ['b', 'y', 'black']
for col, c in zip(columns, colors):
    for x, y in zip(df7_2['month_dt'], df7_2[col]):
        # list with quantiles for the data
        lst = [0, 0.25, 0.5, 0.75, 1]
        describe_nearest = [df7_2[col].quantile(el, interpolation='nearest') for el in lst]
        describe_nearest.append(df7_2[col].values[-1])
        # add annotation if the value is in the quantiles list
        if y in describe_nearest:
            label = '{:.0f}'.format(y)
            plt.annotate(
                label,
                (x, y),
                textcoords='offset points',
                xytext=(0, 0),
                ha='center',
                color=c
            )
ax.grid(False)
secax.grid(False)
h1, l1 = ax.get_legend_handles_labels()
h2, l2 = secax.get_legend_handles_labels()
handles = h1 + h2
labels = l1 + l2
ax.legend(handles, labels)
plt.show()
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/Hbqmx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hbqmx.png" alt="plt.show" /></a>
How can I correct the positions of the annotations?
The values for the bars should sit inside the bars, and the values for the line plot should sit next to the line.</p>
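<p>A note on the placement (a sketch, not necessarily the eventual fix): matplotlib 3.4+ can centre labels inside stacked segments with <code>ax.bar_label(container, label_type='center')</code> for each bar container. The same y-coordinates can also be computed by hand; for a segment stacked on top of another, the centre is <code>bottom + height / 2</code>:</p>

```python
def stacked_label_positions(finished, canceled):
    """y-coordinates that centre a label inside each segment of a
    two-part stacked bar (finished at the bottom, canceled on top)."""
    bottom_centres = [f / 2 for f in finished]
    top_centres = [f + c / 2 for f, c in zip(finished, canceled)]
    return bottom_centres, top_centres

# Toy values standing in for df7_2['ses_finished'] / df7_2['ses_canceled']:
b, t = stacked_label_positions([100, 200], [50, 100])
print(b, t)  # [50.0, 100.0] [125.0, 250.0]
```

<p>The line's labels can use a small vertical offset such as <code>xytext=(0, 5)</code> in <code>annotate</code> so they sit just above the curve instead of on it.</p>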
|
<python><matplotlib>
|
2023-06-06 12:51:06
| 1
| 402
|
John Doe
|
76,414,793
| 13,370,214
|
Find the % change in sales with previous month
|
<p>I want to create a table which contains the region with the highest percentage change in sales for each accounting month.</p>
<pre><code>Accounting_Month Region Sales
Jan'19 North 100
West 50
East 20
South 10
Feb'19 North 200
West 80
East 10
South 12
Mar'19 North 220
West 70
East 200
South 10
</code></pre>
<p>The sample output is as follows:</p>
<pre><code>Accounting_Month Highest change
Feb'19 North
Mar'19 East
</code></pre>
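<p>The computation the question describes, sketched in plain Python on the sample numbers (in pandas one would likely pivot to a region-by-month table and combine <code>pct_change</code> with <code>idxmax</code>, but that is an assumption about the intended approach):</p>

```python
# Sample data from the question, flattened: sales per region, one value per month.
months = ["Jan'19", "Feb'19", "Mar'19"]
sales = {
    "North": [100, 200, 220],
    "West": [50, 80, 70],
    "East": [20, 10, 200],
    "South": [10, 12, 10],
}

def highest_change(months, sales):
    """For each month after the first, the region with the highest
    percentage change in sales versus the previous month."""
    result = []
    for i in range(1, len(months)):
        best = max(
            sales,
            key=lambda region: (sales[region][i] - sales[region][i - 1]) / sales[region][i - 1],
        )
        result.append((months[i], best))
    return result

print(highest_change(months, sales))  # [("Feb'19", 'North'), ("Mar'19", 'East')]
```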
|
<python><pandas><dataframe>
|
2023-06-06 12:44:39
| 1
| 431
|
Harish reddy
|
76,414,718
| 4,681,541
|
What does this error mean when starting RStudio?
|
<p>When RStudio starts the following error appears in the console:</p>
<pre><code>Error in FUN(X[[i]], ...) :
'CreateProcess' failed to run 'C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\python.exe -c "import sys; print(sys.platform)"'
Calls: do.call -> <Anonymous> -> <Anonymous> -> vapply -> FUN
Execution halted
</code></pre>
<p>I am not using Python, but I have it installed. The error does not appear to have any consequences when using R.
Any explanation? Should I worry about it?<br />
Thanks.</p>
|
<python><r><rstudio><startup>
|
2023-06-06 12:36:20
| 0
| 415
|
LDBerriz
|
76,414,559
| 3,336,412
|
Multiple inheritance of two classes with same field
|
<p>I guess for the python experts this is a simple one... but maybe someone can explain me.</p>
<p>imagine we have two classes with the same field:</p>
<pre class="lang-py prettyprint-override"><code>class A:
name = 'a'
class B:
name = 'b'
</code></pre>
<p>and now we got a third class that inherits both:</p>
<pre class="lang-py prettyprint-override"><code>class C(A, B):
...
</code></pre>
<p>Now, when we check the name field with <code>C.name</code>, it turns out that its value is <code>'a'</code>.
Can I assume that it's always the first class that "wins"? Shouldn't the second class overwrite the first one's fields?</p>
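<p>The observed behaviour is Python's Method Resolution Order (C3 linearization): attributes are looked up left to right through the bases, so the first listed class wins, regardless of which class body was defined later. A quick check:</p>

```python
class A:
    name = 'a'

class B:
    name = 'b'

class C(A, B):
    pass

# Attribute lookup follows the MRO: C, then A, then B, then object,
# so A "wins" because it is listed first in the bases, not because of
# definition order.
print([cls.__name__ for cls in C.__mro__])  # ['C', 'A', 'B', 'object']
print(C.name)  # a
```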
|
<python><multiple-inheritance>
|
2023-06-06 12:17:00
| 1
| 5,974
|
Matthias Burger
|
76,414,514
| 4,061,339
|
"cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'" on AWS Lambda using a layer
|
<h1>What I want to achieve</h1>
<p>To scrape an website using AWS Lambda and save the data on S3.</p>
<h1>The issues I'm having</h1>
<p>When I execute Lambda, the following error message appears.</p>
<blockquote>
<p>{ "errorMessage": "Unable to import module 'lambda_function': cannot
import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'
(/opt/python/urllib3/util/ssl_.py)", "errorType":
"Runtime.ImportModuleError", "requestId":
"fb66bea9-cbad-4bd3-bd4d-6125454e21be", "stackTrace": [] }</p>
</blockquote>
<h1>Code</h1>
<p>The minimum Lambda code is as follows.</p>
<pre><code>import requests
import boto3
def lambda_handler(event, context):
s3 = boto3.client('s3')
upload_res = s3.put_object(Bucket='horserace-dx', Key='/raw/a.html', Body='testtext')
return event
</code></pre>
<p>A layer was added to the Lambda. Files were saved into a <code>python</code> folder using the commands below, compressed into a zip file, then uploaded to AWS Lambda as a layer.</p>
<pre><code>!pip install requests -t ./python --no-user
!pip install pandas -t ./python --no-user
!pip install beautifulsoup4 -t ./python --no-user
</code></pre>
<ul>
<li>The bucket <code>horserace-dx</code> exists</li>
<li>The folder <code>raw</code> exists</li>
<li>The role of the Lambda is properly set. It can read from and write to S3</li>
<li>The runtime of the Lambda is Python 3.9. The python version of the local computer is 3.9.13.</li>
</ul>
<h1>What I did so far</h1>
<p>I googled "cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'" and found some suggestions. I rebuilt the layer with the following commands and tried again, to no avail.</p>
<pre><code>!pip install requests -t ./python --no-user
!pip install pandas -t ./python --no-user
!pip install beautifulsoup4 -t ./python --no-user
!pip install urllib3==1.26.15 -t ./python --no-user
</code></pre>
<p>So what should I do to achieve what I want to achieve? Any suggestions would be greatly appreciated.</p>
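<p>For context (worth verifying against the runtime's changelog): urllib3 2.0 removed <code>DEFAULT_CIPHERS</code>, and the botocore bundled with the Python 3.9 runtime still imports it from the layer's urllib3 under <code>/opt/python</code>, which matches the traceback path. The commonly suggested workaround is to keep urllib3 below 2 in the layer; since the separate pin above did not help, installing everything in a single pip invocation into a clean <code>./python</code> folder avoids an earlier install leaving a mixed urllib3 behind. A possible requirements file (an assumption based on the traceback, not a confirmed fix):</p>

```text
# requirements.txt for the layer: keep urllib3 below 2.0 so the runtime's
# botocore can still import DEFAULT_CIPHERS
urllib3<2
requests
pandas
beautifulsoup4
```

<p>Built in one go with <code>pip install -r requirements.txt -t ./python --no-user</code> after deleting any existing <code>./python</code> folder, then zipped as before.</p>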
|
<python><amazon-web-services><amazon-s3><aws-lambda><boto3>
|
2023-06-06 12:10:43
| 13
| 3,094
|
dixhom
|
76,414,505
| 13,950,870
|
JWT decode error: 'Could not deserialize key data' using jwt in Python but the token works when I paste it into online decoders
|
<p>The full error is this:</p>
<p>'Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [_OpenSSLErrorWithText(code=75497580, lib=9, reason=108, reason_text=b'error:0480006C:PEM routines::no start line')]'.</p>
<p>When I paste the token into <a href="https://jwt.io/" rel="nofollow noreferrer">https://jwt.io/</a> (it also says RS256 is the algo) it works fine but when I run this code:</p>
<pre><code>import jwt
jwt.decode('eyJ0..............', algorithms=['RS256'])
</code></pre>
<p>I get the above error. What am I doing wrong? The token is generated through the MSAL package using the acquireSilentToken method in the MSAL-browser npm package.</p>
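<p>Context that may explain the difference (my understanding of the libraries involved): jwt.io merely base64url-decodes the payload and never needs a key, whereas PyJWT's <code>jwt.decode</code> verifies the RS256 signature and therefore requires the RSA public key in PEM form; the "no start line" error is what its key loader raises when no valid PEM is supplied. The unverified peek that jwt.io performs can be reproduced with the standard library (sketch; never trust unverified claims for authorization):</p>

```python
import base64
import json

def peek_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying its signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _b64(obj) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# Hand-built token standing in for the MSAL-issued one:
token = _b64({"alg": "RS256"}) + "." + _b64({"sub": "123"}) + ".signature"
print(peek_claims(token))  # {'sub': '123'}
```

<p>To actually verify an MSAL token, <code>jwt.decode</code> needs the tenant's RSA public key (typically fetched from the issuer's JWKS endpoint) passed as the <code>key</code> argument.</p>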
|
<python><jwt>
|
2023-06-06 12:10:11
| 1
| 672
|
RogerKint
|
76,414,495
| 12,113,049
|
Twisted: builtins.TypeError: a bytes-like object is required, not 'str'
|
<p>I'm using Python 3.8 and Twisted 22.10.0.
I'm connecting to FreeSWITCH with the reactor and using an event socket to handle the events. I'm able to make the TCP connection to FreeSWITCH, but after starting the reactor I get the following error:</p>
<pre><code>Unhandled Error
Traceback (most recent call last):
File "/home/.local/lib/python3.8/site-packages/twisted/python/log.py", line
96, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/home/.local/lib/python3.8/site-packages/twisted/python/log.py", line
80, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/home/.local/lib/python3.8/site-packages/twisted/python/context.py",
line 117, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/.local/lib/python3.8/site-packages/twisted/python/context.py",
line 82, in callWithContext
return func(*args, **kw)
--- <exception caught here> ---
File "/home/.local/lib/python3.8/site-packages/twisted/internet/posixbase.py",
line 487, in _doReadOrWrite
why = selectable.doRead()
File "/home/.local/lib/python3.8/site-packages/twisted/internet/tcp.py", line
249, in doRead
return self._dataReceived(data)
File "/home/.local/lib/python3.8/site-packages/twisted/internet/tcp.py", line
256, in _dataReceived
rval = self.protocol.dataReceived(data)
File "/home/.local/lib/python3.8/site-packages/twisted/protocols/basic.py",
line 542, in dataReceived
line, self._buffer = self._buffer.split(self.delimiter, 1)
builtins.TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>My snippet of what I'm trying:</p>
<pre><code>from twisted.internet import protocol, reactor, defer

class MyFactory(protocol.ReconnectingClientFactory):
    def __init__(self):
        self.password = '----'
        MyFactory.stopDeferred = defer.Deferred()

    def buildProtocol(self, addr):
        # some code here to build the protocol
        return built_protocol

factory = MyFactory()
reactor.connectTCP('127.0.0.1', 8021, factory)
reactor.run(installSignalHandlers=0)
</code></pre>
<p>I added debug prints in <code>basic.py</code> and found that <code>self._buffer</code> and <code>self.delimiter</code> are both byte types. And I believe it's not advisable to typecast to string in that file.
How do I get past this error?</p>
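<p>Independent of Twisted itself, the failing line is <code>bytes.split(str, ...)</code>: under Python 3 that raises exactly this TypeError, so somewhere the protocol's <code>delimiter</code> (Twisted's <code>LineReceiver</code> defaults to <code>b'\r\n'</code>) has ended up as a <code>str</code>, e.g. <code>delimiter = '\n\n'</code> instead of <code>b'\n\n'</code> in a FreeSWITCH event-socket protocol (an assumption, since the protocol code isn't shown). A minimal reproduction outside Twisted:</p>

```python
buffer = b"auth ClueCon\n\n"  # network data arrives as bytes under Python 3

# Splitting bytes with a str delimiter reproduces the error from the traceback:
try:
    buffer.split("\n\n", 1)
except TypeError as exc:
    print(exc)  # a bytes-like object is required, not 'str'

# Splitting with a bytes delimiter works:
line, rest = buffer.split(b"\n\n", 1)
print(line)  # b'auth ClueCon'
```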
|
<python><twisted>
|
2023-06-06 12:09:47
| 0
| 816
|
Eranki
|
76,414,442
| 8,451,248
|
How do I get as close as possible to type safety in Python?
|
<p>I have a tree of nodes that I can access via a <code>select</code> (returns one node) and a <code>select_all</code> (returns a list of nodes) function.
I sometimes run into the problem that I use the <code>select</code> function when I meant to use <code>select_all</code>, and providing type annotations does not make Pylance raise a warning. Are there any tools that can provide this?</p>
<p>Note that switching to a type-safe language is not a solution, of course. If I could not use Python I of course wouldn't.</p>
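<p>One option that may be enough here (illustrative, since the real signatures aren't shown): give the two functions distinct return annotations and enable Pylance's strict type-checking mode (or run mypy); iterating the result of <code>select</code> is then flagged, because <code>Optional[Node]</code> is not iterable:</p>

```python
from typing import Optional, get_type_hints

class Node:
    pass

def select(query: str) -> Optional[Node]:
    """Return the first matching node, or None."""
    return Node()

def select_all(query: str) -> list[Node]:
    """Return every matching node."""
    return [Node(), Node()]

# With these annotations, `for n in select("q"):` is reported by
# Pylance (strict mode) / mypy, because Optional[Node] is not iterable.
```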
|
<python>
|
2023-06-06 12:03:11
| 1
| 310
|
ouai
|
76,414,385
| 16,371,459
|
How to combine multiple succeeding rows in pyspark?
|
<p>I have a dataset, for example</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Column A</th>
<th>Column B</th>
<th>Column C</th>
<th>Column D</th>
</tr>
</thead>
<tbody>
<tr>
<td>Cell A1</td>
<td>Cell B1</td>
<td>Cell C1</td>
<td>Cell D1</td>
</tr>
<tr>
<td>Cell A2</td>
<td>Cell B2</td>
<td>Cell C2</td>
<td>Cell D2</td>
</tr>
<tr>
<td>Cell A3</td>
<td>Cell B3</td>
<td>Cell C3</td>
<td>Cell D3</td>
</tr>
<tr>
<td>Cell A4</td>
<td>Cell B4</td>
<td>Cell C4</td>
<td>Cell D4</td>
</tr>
</tbody>
</table>
</div>
<p>Is there a way to join every n rows together (for example, row 1 and row 2) while maintaining the columns?
So that I can get:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Column A</th>
<th>Column B</th>
<th>Column C</th>
<th>Column D</th>
</tr>
</thead>
<tbody>
<tr>
<td>Cell A1, A2</td>
<td>Cell B1, B2</td>
<td>Cell C1, C2</td>
<td>Cell D1, D2</td>
</tr>
<tr>
<td>Cell A3, A4</td>
<td>Cell B3, B4</td>
<td>Cell C3, C4</td>
<td>Cell D3, D4</td>
</tr>
</tbody>
</table>
</div>
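<p>The transformation itself is: give each row a group id <code>row_index // n</code>, then aggregate every column within a group. In PySpark that would presumably be a <code>row_number()</code> window, integer-dividing it by <code>n</code>, and a <code>groupBy</code> with <code>collect_list</code>/<code>concat_ws</code> per column (an untested assumption); the same logic in plain Python, to pin down the expected output:</p>

```python
def combine_rows(rows, n=2):
    """Merge every n consecutive rows into one, joining each column's
    values with ", " (mirrors the example in the question)."""
    combined = []
    for i in range(0, len(rows), n):
        chunk = rows[i:i + n]
        combined.append(tuple(", ".join(col) for col in zip(*chunk)))
    return combined

rows = [
    ("Cell A1", "Cell B1", "Cell C1", "Cell D1"),
    ("Cell A2", "Cell B2", "Cell C2", "Cell D2"),
    ("Cell A3", "Cell B3", "Cell C3", "Cell D3"),
    ("Cell A4", "Cell B4", "Cell C4", "Cell D4"),
]
```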
|
<python><pyspark><bigdata><data-manipulation>
|
2023-06-06 11:56:08
| 2
| 318
|
Basir Mahmood
|
76,414,377
| 4,161,120
|
Unable to execute system command from sqlalchemy as in mysql
|
<p>Executing a system command through <code>sqlalchemy.engine.base.Connection</code> results in an error.</p>
<pre><code>import sqlalchemy
from urllib.parse import quote_plus as urlquote
user = 'root'
host = 'localhost'
port = 3306
password = 'mypassword'
engine_string = 'mysql://{0}:%s@{1}:{2}/{3}'.format(user, host, port, db_name) % urlquote(password)
engine = sqlalchemy.create_engine(engine_string)
db = engine.connect()
db.execute(r"\! sh /home/satish/test.sh")
</code></pre>
<p>This results in:</p>
<pre><code>sqlalchemy.exc.ProgrammingError: (MySQLdb.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '\! sh /home/satish/test.sh' at line 1")
[SQL: ! sh /home/satish/test.sh]
</code></pre>
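<p>Worth noting: <code>\!</code> is a command of the interactive <code>mysql</code> client, not SQL, so the server (and therefore SQLAlchemy) rejects it. If the goal is simply to run the shell script from Python alongside the database work, <code>subprocess</code> does that directly (sketch; <code>echo done</code> stands in for the script):</p>

```python
import subprocess

# Run the shell command from Python instead of sending it to MySQL as SQL.
# ["sh", "-c", "echo done"] stands in for ["sh", "/home/satish/test.sh"].
result = subprocess.run(
    ["sh", "-c", "echo done"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # done
```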
|
<python><mysql><sqlalchemy>
|
2023-06-06 11:54:58
| 0
| 443
|
SatishV
|
76,414,329
| 9,974,205
|
Problem with implementation of the knapsack problem using DEAP library in python and metaheuristics
|
<p>I am currently working on an implementation of the knapsack problem in Python using the DEAP library. I have to maximize the benefit and minimize the preference. The knapsack cannot contain more elements than a selected number.</p>
<p>I have generated the following function:</p>
<pre><code>def evaluate(individual):
weight = sum(individual[i] * weights[i] for i in range(len(individual)))
benefit = sum(individual[i] * benefits[i] for i in range(len(individual)))
preference = sum(individual[i] * preferences[i] for i in range(len(individual)))
nTotal=sum(individual)
if weight > max_weight:
return -benefit + (weight - max_weight)*10000, preference+10
elif nTotal > nMax:
return -benefit + (nTotal - nMax)*10000, preference+10
else:
return -benefit, preference
</code></pre>
<p>alongside</p>
<pre><code>creator.create("FitnessMulti", base.Fitness, weights=(1.0, 1.0))
creator.create("Individual", list, fitness=creator.FitnessMulti)
toolbox = base.Toolbox()
toolbox.register("attr_bool", random.randint, 0, 1)
toolbox.register("individual", generate_individual)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selNSGA2)
</code></pre>
<p>My solutions do not respect the restriction in weight nor the restriction in the number of elements.</p>
<p>Can someone please help me to define a better version of evaluate so that I can enforce the restrictions?</p>
<p>If someone is interested, I can post the whole code</p>
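<p>Two observations (hedged, since only part of the code is shown): with <code>weights=(1.0, 1.0)</code> both objectives are maximized, so returning <code>-benefit</code> and a raw <code>preference</code> may steer the search the wrong way; and penalty terms alone often fail under NSGA-II because infeasible individuals can remain non-dominated. An alternative is to repair each individual to feasibility before scoring it. A generic sketch on toy data (not the asker's instance), which could be called at the top of <code>evaluate</code>:</p>

```python
def repair(individual, weights, max_weight, n_max):
    """Remove items (heaviest first) until the knapsack is feasible."""
    ind = list(individual)
    chosen = [i for i, bit in enumerate(ind) if bit]
    chosen.sort(key=lambda i: weights[i], reverse=True)
    while chosen and (
        sum(weights[i] for i in chosen) > max_weight or len(chosen) > n_max
    ):
        ind[chosen.pop(0)] = 0  # drop the heaviest remaining item
    return ind

# Toy data (assumed): 5 items, capacity 10, at most 2 items allowed.
weights = [6, 5, 4, 3, 2]
fixed = repair([1, 1, 1, 0, 1], weights, max_weight=10, n_max=2)
print(fixed)  # [0, 0, 1, 0, 1]
```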
|
<python><optimization><constraints><heuristics><deap>
|
2023-06-06 11:47:41
| 1
| 503
|
slow_learner
|
76,414,273
| 1,259,330
|
HID iClass dl smart card how to get card number
|
<p>I'm using the HID omnikey 5022 card reader
The card has HID iClass DL written on it.
Facility code is 147
Card number is 4511</p>
<p>Using PySmartCard I can read:
ATR = 3B8F8001804F0CA0000003060A0018000000007A
UID = 7BA49412FEFF12E0
A smart card reader application tells me the card type is iClass 2ks.
I'm also pretty sure it uses a Wiegand 26-bit code.</p>
<p>I've followed every guide I can find on the internet on how to parse this into a card number, and I get nonsense results.<br />
The common method seems to be:</p>
<ol>
<li>Convert the ATR to binary: 11101110001111100000000000000110000000010011110000110010100000000000000000000000000011000001100000101000000000</li>
<li>Get rid of the LSB (far-right digit).</li>
<li>The next 16 digits from the right should be the card number: 0000010100000000 = 128?? This should be 4511.</li>
<li>The next 8 from the right should be the facility code: 10000011 = 131?? This should be 147.</li>
</ol>
<p>I'm probably doing something silly here, as it looks like it should be pretty straightforward. Any help appreciated. Thanks!</p>
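<p>A caveat first (my understanding of iClass, worth verifying): the ATR describes the card/reader protocol, not the credential, so decoding the ATR will give nonsense; the Wiegand payload normally lives in the card's PACS-bits application area, which typically requires authenticated reads. The standard 26-bit layout itself is 1 parity bit + 8-bit facility code + 16-bit card number + 1 parity bit, which decodes like this and can sanity-check whichever 26 bits are extracted:</p>

```python
def parse_wiegand26(raw: int):
    """Split a standard 26-bit Wiegand frame:
    [even parity][8-bit facility code][16-bit card number][odd parity]."""
    bits = f"{raw:026b}"
    facility = int(bits[1:9], 2)
    card = int(bits[9:25], 2)
    return facility, card

def build_wiegand26(facility: int, card: int) -> int:
    """Inverse helper (parity bits left at 0 for simplicity)."""
    return (facility << 17) | (card << 1)

frame = build_wiegand26(147, 4511)
print(parse_wiegand26(frame))  # (147, 4511)
```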
|
<python><smartcard><wiegand>
|
2023-06-06 11:41:30
| 1
| 412
|
perfo
|
76,414,236
| 13,370,214
|
Categorise date into accounting month
|
<p>Create a table which contains the region with highest percentage change in sales for each accounting month. (An accounting month M is defined as period between 15th of month M to 14th of month M+1. Ex : accounting month May '20 begins on 15th May '20 and ends on 14th June '20)</p>
<p>Sample Input data</p>
<pre><code>Date Region Accounting Month
01-01-2019 North Dec'18
14-01-2019 South Dec'18
20-01-2019 West Jan'19
04-02-2019 North Jan'19
</code></pre>
<p>I tried the following code</p>
<pre><code>df['Accounting Month'] = df['Date'] + pd.offsets.MonthBegin(-1) + pd.DateOffset(days=14)
</code></pre>
<p>But it is providing inaccurate results</p>
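<p>Since accounting month M runs from the 15th of M to the 14th of M+1, shifting every date back by 14 days lands it inside its accounting month, after which only the month label is needed. In plain Python (the pandas equivalent would be along the lines of <code>(df['Date'] - pd.Timedelta(days=14)).dt.strftime("%b'%y")</code>, an untested assumption):</p>

```python
from datetime import date, timedelta

def accounting_month(d: date) -> str:
    """Accounting month M spans the 15th of M to the 14th of M+1,
    so shifting back 14 days maps a date into its accounting month."""
    return (d - timedelta(days=14)).strftime("%b'%y")

# Matches the sample: dates up to Jan 14 belong to Dec'18.
print(accounting_month(date(2019, 1, 14)))  # Dec'18
print(accounting_month(date(2019, 1, 20)))  # Jan'19
```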
|
<python><pandas><dataframe><numpy>
|
2023-06-06 11:36:59
| 2
| 431
|
Harish reddy
|
76,414,156
| 9,251,158
|
Google Translate module randomly throws "the JSON object must be str, bytes or bytearray, not NoneType"
|
<p>I use Google Translate in Python and found this bug in my code. I copy here a minimal reproducible example:</p>
<pre><code>import time
from googletrans import Translator as google_Translator
class Translator:
lang: str
def __init__(self, lang: str = 'en'):
self.lang = lang
def go(self, text: str):
try:
translator = google_Translator()
except Exception as e:
print("First exception type: ", e)
print("Argument: ", text)
print("lang: ", self.lang)
print("translator: ", translator)
return None
try:
t1 = translator.translate(text, dest=self.lang)
except Exception as e:
print("Second exception type: ", e)
print("Argument: '%s'" % text)
print("lang: ", self.lang)
print("translator: ", translator)
return None
try:
t2 = t1.text
except Exception as e:
print("Third exception type: ", e)
print("argument: ", text)
print("lang: ", self.lang)
print("translator: ", translator)
return None
return t2
if __name__ == '__main__':
t = Translator('English')
while True:
print("\n---\n%s\n---\n" % t.go('Mieux Comprendre, Mieux Décider, Mieux Soigner'))
time.sleep(5)
</code></pre>
<p>Sometimes the code works and translates the sentence, other times it does not. Here's sample output:</p>
<pre><code>---
Better understand, decide better, better care
---
---
Better understand, decide better, better care
---
---
Better understand, decide better, better care
---
---
Better understand, decide better, better care
---
Second exception type: the JSON object must be str, bytes or bytearray, not NoneType
Argument: 'Mieux Comprendre, Mieux Décider, Mieux Soigner'
lang: English
translator: <googletrans.client.Translator object at 0x109e96d40>
---
None
---
---
Better understand, decide better, better care
---
</code></pre>
<p>Here are my system specifications:</p>
<pre><code>(venv)$ python3 --version
Python 3.10.11
(venv)$ python3 -m pip show googletrans
Name: googletrans
Version: 4.0.0rc1
Summary: Free Google Translate API for Python. Translates totally free of charge.
Home-page: https://github.com/ssut/py-googletrans
Author: SuHun Han
Author-email: ssut@ssut.me
License: MIT
Location: ~/venv/lib/python3.10/site-packages
Requires: httpx
Required-by:
</code></pre>
<p>Why does it behave like this, and how can I fix it?</p>
<h2>update</h2>
<p>The exception thrown is often <code>TypeError: the JSON object must be str, bytes or bytearray, not NoneType</code>. If the text only has spaces, it also sometimes throws <code>IndexError: list index out of range</code>. Here is the stacktrace for the latter.</p>
<pre><code>TypeError from translator: list index out of range
Text =
Traceback (most recent call last):
File "/Translator.py", line 22, in go
return translator.translate(text, dest=self.lang).text
File "/home/ubuntu/.local/lib/python3.10/site-packages/googletrans/client.py", line 222, in translate
translated_parts = list(map(lambda part: TranslatedPart(part[0], part[1] if len(part) >= 2 else []), parsed[1][0][0][5]))
File "/home/ubuntu/.local/lib/python3.10/site-packages/googletrans/client.py", line 222, in <lambda>
translated_parts = list(map(lambda part: TranslatedPart(part[0], part[1] if len(part) >= 2 else []), parsed[1][0][0][5]))
IndexError: list index out of range
</code></pre>
<p>I'll add the stacktrace of the former when I see it.</p>
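<p>As far as I can tell, googletrans 4.0.0rc1 scrapes the free web endpoint, so intermittent parse failures like these are a known flakiness of the library rather than a bug in the calling code; the official Cloud Translation API would be the robust fix. Short of that, a generic retry-with-backoff wrapper smooths over the transient failures (sketch, not googletrans-specific):</p>

```python
import time

def with_retries(fn, attempts=4, delay=1.0, backoff=2.0):
    """Call fn(); on any exception wait and retry, multiplying the delay.

    Re-raises the last exception once attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
            delay *= backoff
```

<p>Used as, e.g., <code>with_retries(lambda: translator.translate(text, dest=self.lang).text)</code>.</p>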
|
<python><module><google-translate><googletrans>
|
2023-06-06 11:27:37
| 0
| 4,642
|
ginjaemocoes
|
76,414,129
| 4,035,257
|
Making a list of tuples with sequential elements in python
|
<p>Assume <code>K</code> is an integer, <code>Y</code> a <code>pd.DataFrame</code>, and <code>Z</code> a <code>list</code>. How can I make a list of tuples with the following sequential structure for the first element of each tuple:</p>
<p><code>[(K,Y,Z), (K+1,Y,Z), (K+2,Y,Z), (K+3,Y,Z), (K+4,Y,Z),......,(K+N,Y,Z)]</code>, for any positive value of <code>N</code>.</p>
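<p>Assuming the structure above, a list comprehension over <code>range</code> does this directly (with plain stand-ins here for the DataFrame <code>Y</code> and list <code>Z</code>):</p>

```python
def sequential_tuples(k, y, z, n):
    """Build [(k, y, z), (k+1, y, z), ..., (k+n, y, z)] (n + 1 tuples)."""
    return [(k + i, y, z) for i in range(n + 1)]

# Plain stand-ins for the DataFrame Y and the list Z:
result = sequential_tuples(5, "Y", ["Z"], 3)
print(result)  # [(5, 'Y', ['Z']), (6, 'Y', ['Z']), (7, 'Y', ['Z']), (8, 'Y', ['Z'])]
```

<p>Note that every tuple references the same <code>Y</code> and <code>Z</code> objects, which matches the question; copy them if independent copies are needed.</p>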
|
<python><list><tuples>
|
2023-06-06 11:24:13
| 1
| 362
|
Telis
|
76,414,085
| 5,838,180
|
Code action based on print statements from code in python?
|
<p>I am running in a loop an astronomical script written by another person. This is a minimal version of my loop:</p>
<pre><code>from astro_script_file import some_astro_class
for file_name in ['file_1.txt', 'file_2.txt', 'file_n.txt']:
some_astro_class.function_1(file_name)
</code></pre>
<p>When executed, <code>function_1</code> makes calculations based on the input txt file and prints output to the screen of the form <code>number of populated galaxies: some-number</code>. The number of galaxies declines with each iteration of my loop, and I want to break the loop when it reaches 0. For that I need to somehow read in the print statement or load it into a variable. I don't want to alter the astronomical script itself; I only want to tinker outside of it when executing the script.</p>
<p>Is this possible? Is there some way to read print statements as they are executed?</p>
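<p>Yes: as long as the script prints to <code>sys.stdout</code>, its output can be captured without touching it via <code>contextlib.redirect_stdout</code> and then parsed with a regex. A self-contained demonstration (using <code>print</code> as a stand-in for <code>some_astro_class.function_1</code>, and assuming the message format is exactly as quoted):</p>

```python
import io
import re
from contextlib import redirect_stdout

def run_and_capture(fn, *args):
    """Run fn while capturing everything it prints; return the output."""
    buffer = io.StringIO()
    with redirect_stdout(buffer):
        fn(*args)
    return buffer.getvalue()

# Stand-in for some_astro_class.function_1(file_name):
output = run_and_capture(print, "number of populated galaxies: 0")
match = re.search(r"number of populated galaxies:\s*(\d+)", output)
galaxies = int(match.group(1)) if match else None
print(galaxies)  # 0
```

<p>In the loop, <code>break</code> when the parsed number reaches 0.</p>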
|
<python>
|
2023-06-06 11:15:45
| 1
| 2,072
|
NeStack
|
76,414,027
| 16,623,816
|
Lost order between file save and file quit
|
<p>I want to save a file on top of my old file after Python runs a VBA script. The problem is that every time I need to press 'Save' to overwrite my old file. What can I do in this code to make the Save press automatic? When I write <code>workbook.Save()</code> before <code>excel.Quit()</code> it creates a new Excel file.</p>
<p>After the <code>column E hide</code> block, I need to run the VBA and then save the file without my 'touch'.</p>
<pre><code>import os
import io
import pyodbc
import pandas as pd
import openpyxl
from openpyxl import load_workbook
from openpyxl.utils import column_index_from_string
from openpyxl.drawing.image import Image
from tqdm import tqdm
from PIL import Image as PilImage
# Define database connection parameters
server = ...
database = ...
username = ...
password = ...
# Connect to database
cnxn = pyodbc.connect(f"DRIVER={{SQL Server}};SERVER={server};DATABASE={database};Trusted_Connection=yes;UID={username};PWD={password}")
# Read SQL query from file
with open('...item.txt', 'r', encoding='utf-8') as f:
sql_query = f.read()
# Execute SQL query and store results in dataframe
df = pd.read_sql(sql_query, cnxn)
# Create new Excel file
excel_file = '....output.xlsx'
writer = pd.ExcelWriter(excel_file)
# Write dataframe to Excel starting from cell B1
df.to_excel(writer, index=False, startrow=0, startcol=1)
# Save Excel file
writer._save()
# Load the workbook
wb = load_workbook(excel_file)
# Select the active worksheet
ws = wb.active
# Set width of item column to 20
item_col = 'A'
ws.column_dimensions[item_col].width = 20
for i in range(2, len(df) + 2):
ws.row_dimensions[i].height = 85
# Iterate over each row and insert the image in column A
for i, link in enumerate(df['Link to Picture']):
if link.lower().endswith('.pdf'):
continue # Skip PDF links
img_path = link.replace('file://', '')
if os.path.isfile(img_path):
# Open image with PIL Image module
img_pil = PilImage.open(img_path)
# Convert image to RGB mode
img_pil = img_pil.convert('RGB')
# Resize image while maintaining aspect ratio
max_width = ws.column_dimensions[item_col].width * 7
max_height = ws.row_dimensions[i+2].height * 1.3
img_pil.thumbnail((max_width, max_height))
# Convert PIL Image object back to openpyxl Image object
img_byte_arr = io.BytesIO()
img_pil.save(img_byte_arr, format='JPEG')
img_byte_arr.seek(0)
img = Image(img_byte_arr)
cell = f'A{i+2}' # Offset by 2 to account for header row
ws[cell].alignment = openpyxl.styles.Alignment(horizontal="center", vertical="center")
ws.add_image(img, cell)
for col in range(2, ws.max_column + 1):
max_length = 0
column = ws.cell(row=1, column=col).column_letter
for cell in ws[column]:
try:
if len(str(cell.value)) > max_length:
max_length = len(str(cell.value))
except:
pass
adjusted_width = (max_length + 2) * 1.2
ws.column_dimensions[column].width = adjusted_width
# Select column E and hide it
col_E = ws.column_dimensions['E']
col_E.hidden = True
# Save the workbook
wb.save(excel_file)
import win32com.client as win32
# Connect to Excel
excel = win32.gencache.EnsureDispatch('Excel.Application')
excel.Visible = False
# Open the Excel file
workbook = excel.Workbooks.Open(r'....output.xlsx')
# Add the macro to the workbook
vb_module = workbook.VBProject.VBComponents.Add(1) # 1= vbext_ct_StdModule
macro_code = '''
Sub MoveAndSizePictures()
Dim pic As Shape
For Each pic In Sheets("Sheet1").Shapes
If pic.Type = msoPicture Then
pic.Placement = xlMoveAndSize
End If
Next pic
End Sub
'''
vb_module.CodeModule.AddFromString(macro_code)
# Run the macro
excel.Run('MoveAndSizePictures')
# Delete the macro from the workbook
workbook.VBProject.VBComponents.Remove(vb_module)
# Quit Excel
excel.Quit()
</code></pre>
|
<python><vba><visual-studio><autosave>
|
2023-06-06 11:07:12
| 1
| 438
|
Nickname_used
|
76,414,005
| 9,202,041
|
Multiple cumulative cdf plots
|
<p>I have multiyear data (14 years) at 15-minute intervals, and I am trying to make multiple plots comparing short-term and long-term CDF values.</p>
<p>I am computing the cumulative distribution function (CDF) of one variable (Energy values) for each month:</p>
<p>one cumulative distribution function for this variable, each month and each year of data (e.g. one for Jan. 2007, one for Jan. 2008, one for Jan 2009, ... and for each month.).</p>
<p>one long-term cumulative distribution function for each variable and each month (e.g. one for Energy values for January containing all daily values from 2007 to 2020 and similarly for the other months).</p>
<p>Now I want to see the results for each month: plot both the short-term and the long-term CDF on a single plot, with 12 plots in total, one for each month.</p>
<p>Can anyone guide me on how I can do this using Python? I am a little lost on this.</p>
<p>So far I have got as far as computing the CDFs.</p>
<pre><code>import pandas as pd
import numpy as np
columns_to_read = ['DateTime', 'PLANT ENERGY MWh']
df = pd.read_excel(data, skiprows=0, usecols=columns_to_read)
df['DateTime'] = pd.to_datetime(df['DateTime'])
df.set_index('DateTime', inplace=True)
monthly_yearly_cdfs = df.groupby([df.index.month, df.index.year]).apply(lambda x: (x.rank() - 1) / len(x))
# Compute the long-term CDF for each month
long_term_cdfs = df.groupby([df.index.month]).apply(lambda x: (x.rank() - 1) / len(x))
</code></pre>
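<p>For the plotting step: an empirical CDF is just the sorted values plotted against <code>i/n</code>, so each of the 12 subplots (<code>fig, axes = plt.subplots(3, 4)</code>, one axis per month) can draw one short-term line per year plus one long-term line. The core computation, kept pandas-free here so it is easy to verify (a sketch):</p>

```python
def empirical_cdf(values):
    """Sorted values and their cumulative probabilities, ready to plot."""
    xs = sorted(values)
    n = len(xs)
    return xs, [(i + 1) / n for i in range(n)]

xs, ps = empirical_cdf([3.0, 1.0, 2.0, 4.0])
print(xs)  # [1.0, 2.0, 3.0, 4.0]
print(ps)  # [0.25, 0.5, 0.75, 1.0]
```

<p>Each month's axis would then call <code>ax.plot(*empirical_cdf(one_year_values))</code> once per year, and once more for that month's pooled 2007-2020 series.</p>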
<hr />
<h2>Sample Data</h2>
<p>Data looks like this: <a href="https://docs.google.com/spreadsheets/d/1TRmQg241NdpsXOEbMbwmsV5QrRlTlI0Z/edit#gid=1454301022" rel="nofollow noreferrer">Data</a></p>
<p><code>df.sample(n=400, random_state=2023).sort_index()</code></p>
<pre class="lang-none prettyprint-override"><code>DateTime,PLANT ENERGY MWh
1/3/2007 10:37:00 AM,1697.15308896009
1/13/2007 2:37:00 PM,2001.19395988835
1/18/2007 2:22:00 AM,0.0
1/26/2007 4:07:00 PM,1188.33577403658
1/29/2007 11:07:00 AM,2127.49307914357
3/7/2007 5:22:00 PM,334.448532482322
3/12/2007 6:52:00 PM,0.0
4/14/2007 1:22:00 PM,2233.7697588734
4/15/2007 5:22:00 PM,317.205656506989
4/28/2007 6:37:00 AM,488.824535055448
5/7/2007 7:52:00 AM,1275.23932269807
5/14/2007 10:07:00 AM,2239.92631025772
5/18/2007 7:07:00 AM,508.144515654771
5/21/2007 10:37:00 AM,2229.80792618988
5/24/2007 12:52:00 AM,0.0
6/19/2007 6:22:00 PM,7.88971689302022
6/23/2007 11:52:00 PM,0.0
6/26/2007 4:07:00 AM,0.0
7/6/2007 2:22:00 PM,864.618295502631
7/9/2007 5:52:00 AM,57.3243582278141
7/15/2007 4:52:00 PM,810.558329649709
7/30/2007 2:07:00 AM,0.0
8/15/2007 6:52:00 AM,371.352632081607
8/25/2007 9:22:00 PM,0.0
10/2/2007 4:37:00 PM,382.184867573695
10/14/2007 5:22:00 AM,0.0
11/21/2007 11:07:00 AM,423.203318513767
12/31/2007 12:37:00 AM,0.0
1/31/2008 3:37:00 PM,1609.05426883337
3/2/2008 7:37:00 PM,0.0
3/13/2008 11:37:00 AM,2266.95967476719
4/21/2008 9:52:00 PM,0.0
5/11/2008 9:22:00 AM,1829.80960610426
6/1/2008 11:37:00 AM,2245.96073624474
6/21/2008 4:07:00 AM,0.0
6/22/2008 11:07:00 PM,0.0
6/24/2008 12:07:00 PM,2259.96206542845
7/17/2008 5:52:00 AM,51.5973221015606
8/1/2008 4:22:00 AM,0.0
8/6/2008 6:07:00 PM,19.7647009486228
9/17/2008 6:07:00 AM,180.287417415929
11/8/2008 10:52:00 PM,0.0
11/15/2008 11:07:00 AM,545.990451766781
11/17/2008 9:22:00 PM,0.0
11/28/2008 12:37:00 PM,1719.4663563644
12/10/2008 1:52:00 AM,0.0
12/27/2008 6:22:00 PM,0.0
1/3/2009 5:52:00 PM,0.0
1/5/2009 7:22:00 AM,335.136147656082
1/17/2009 2:52:00 PM,1909.33403627444
1/25/2009 11:52:00 PM,0.0
1/28/2009 9:52:00 PM,0.0
2/25/2009 11:07:00 AM,1619.45530002807
3/16/2009 1:37:00 PM,2246.82725911847
3/18/2009 9:52:00 AM,2261.26512617316
3/29/2009 4:52:00 PM,701.565265876862
3/31/2009 9:07:00 AM,1046.46520763076
4/1/2009 1:07:00 PM,2064.36376234579
4/8/2009 5:52:00 AM,12.972388116731
4/17/2009 7:37:00 AM,692.358096524199
4/25/2009 10:37:00 AM,1854.46955717815
5/5/2009 10:52:00 PM,0.0
5/25/2009 8:22:00 AM,1697.85769768111
5/28/2009 8:52:00 PM,0.0
6/26/2009 4:37:00 AM,0.0
7/27/2009 10:22:00 PM,0.0
8/9/2009 11:22:00 PM,0.0
8/29/2009 7:22:00 AM,594.861006878339
9/17/2009 1:52:00 PM,690.529124708089
9/27/2009 1:07:00 PM,1329.97088539607
10/2/2009 9:22:00 PM,0.0
10/7/2009 4:52:00 AM,0.0
10/13/2009 9:07:00 AM,2103.38141683629
10/24/2009 1:22:00 AM,0.0
11/11/2009 10:22:00 AM,2251.0384034407
11/22/2009 9:22:00 AM,1342.49452643981
11/27/2009 2:37:00 PM,1813.79712527398
11/30/2009 9:52:00 PM,0.0
12/9/2009 12:22:00 PM,2189.95023445697
12/27/2009 6:22:00 PM,0.0
1/9/2010 7:37:00 AM,780.595688944353
1/13/2010 11:07:00 PM,0.0
1/14/2010 2:22:00 PM,1730.47123769195
2/13/2010 6:22:00 AM,2.5718700880953
2/21/2010 4:07:00 AM,0.0
3/8/2010 4:07:00 PM,1408.94018629172
3/21/2010 12:52:00 AM,0.0
3/21/2010 2:52:00 PM,2178.13783101468
3/27/2010 1:37:00 PM,2221.39248545497
4/6/2010 5:22:00 PM,384.551199703043
4/10/2010 10:07:00 AM,2231.68051156408
5/1/2010 8:22:00 AM,1539.37393822866
5/2/2010 2:07:00 AM,0.0
5/25/2010 3:22:00 AM,0.0
6/9/2010 10:37:00 PM,0.0
6/15/2010 10:22:00 AM,2221.49331162576
6/19/2010 3:22:00 AM,0.0
7/11/2010 2:37:00 AM,0.0
7/11/2010 5:22:00 AM,0.0
7/18/2010 1:07:00 PM,177.234158020317
7/20/2010 1:52:00 PM,1617.79156092834
8/4/2010 7:07:00 PM,0.0
8/15/2010 1:22:00 AM,0.0
8/23/2010 11:22:00 AM,2240.11034099836
10/5/2010 11:37:00 AM,1015.36807551329
10/11/2010 10:37:00 AM,2233.80329680852
10/23/2010 6:37:00 AM,364.89510315923
11/20/2010 10:37:00 PM,0.0
2/15/2011 11:37:00 AM,2264.61961430358
2/15/2011 5:22:00 PM,325.880976602414
2/27/2011 12:37:00 AM,0.0
3/14/2011 11:52:00 AM,2252.7096856352
3/15/2011 12:37:00 AM,0.0
4/6/2011 12:37:00 PM,1849.3643869394
4/20/2011 2:37:00 PM,2126.29455289231
5/12/2011 10:22:00 PM,0.0
5/13/2011 2:37:00 PM,2128.63365958903
5/29/2011 9:37:00 AM,1293.39647346765
6/5/2011 11:52:00 PM,0.0
6/13/2011 10:52:00 PM,0.0
7/19/2011 4:22:00 AM,0.0
8/15/2011 3:22:00 PM,906.023563345153
8/30/2011 6:37:00 AM,231.904817099589
9/23/2011 11:52:00 PM,0.0
10/12/2011 9:37:00 PM,0.0
10/31/2011 3:52:00 PM,930.09465715263
11/8/2011 3:07:00 PM,977.406939503701
11/13/2011 7:07:00 PM,0.0
11/23/2011 2:52:00 AM,0.0
11/30/2011 5:07:00 PM,27.298205385358
12/1/2011 3:07:00 AM,0.0
12/12/2011 10:37:00 AM,1263.45669467322
12/15/2011 12:37:00 PM,2246.92287639293
12/25/2011 4:52:00 PM,246.438785724009
12/31/2011 7:37:00 PM,0.0
1/18/2012 11:07:00 PM,0.0
2/15/2012 4:07:00 AM,0.0
3/3/2012 3:07:00 AM,0.0
3/14/2012 9:37:00 PM,0.0
4/11/2012 10:07:00 PM,0.0
4/26/2012 11:22:00 PM,0.0
5/13/2012 8:37:00 AM,1838.25097018283
5/14/2012 1:37:00 AM,0.0
5/15/2012 2:52:00 AM,0.0
6/4/2012 11:07:00 AM,1510.79288480949
6/22/2012 12:22:00 AM,0.0
6/30/2012 11:22:00 AM,849.138608900924
7/8/2012 12:07:00 AM,0.0
7/23/2012 3:22:00 AM,0.0
7/25/2012 10:52:00 AM,1051.97121880218
7/27/2012 9:37:00 AM,2069.59700360015
8/29/2012 11:22:00 AM,1344.68789088748
8/29/2012 12:22:00 PM,1320.87610528159
9/1/2012 1:37:00 PM,1616.38568269662
9/4/2012 5:52:00 AM,65.7313698615252
10/3/2012 11:37:00 AM,210.717982319463
11/3/2012 1:22:00 AM,0.0
11/25/2012 2:22:00 AM,0.0
12/3/2012 5:22:00 AM,0.0
12/11/2012 8:07:00 AM,1264.75635450344
12/15/2012 3:22:00 PM,1144.96069403523
12/29/2012 7:37:00 PM,0.0
1/14/2013 4:22:00 PM,493.200390367576
1/15/2013 8:07:00 AM,1017.8240973095
2/2/2013 4:52:00 AM,0.0
2/5/2013 11:52:00 PM,0.0
2/25/2013 3:07:00 AM,0.0
2/25/2013 8:07:00 AM,751.762960018433
3/8/2013 11:07:00 AM,2253.25776764632
3/25/2013 12:22:00 PM,1784.09991583734
5/1/2013 11:37:00 PM,0.0
6/3/2013 5:22:00 PM,385.627924290769
6/6/2013 1:22:00 AM,0.0
6/20/2013 3:22:00 PM,645.051467394196
6/29/2013 12:07:00 PM,1102.03805855432
6/30/2013 5:07:00 AM,0.0
7/2/2013 5:52:00 PM,136.203682460618
7/12/2013 7:22:00 AM,779.231336832705
7/20/2013 9:52:00 PM,0.0
7/31/2013 11:37:00 AM,1744.3572678315
7/31/2013 5:52:00 PM,100.03550973256
8/5/2013 4:07:00 PM,124.45735342751
8/31/2013 6:52:00 AM,440.389115692219
9/7/2013 6:37:00 AM,334.536499194091
9/26/2013 12:22:00 PM,1112.06280915502
9/28/2013 2:52:00 AM,0.0
9/29/2013 9:37:00 AM,2244.28200544644
10/5/2013 4:52:00 AM,0.0
10/25/2013 2:22:00 PM,1879.49706658889
10/30/2013 6:07:00 AM,91.9530243255773
11/22/2013 12:22:00 AM,0.0
11/23/2013 6:07:00 AM,16.8277390161346
11/28/2013 11:22:00 PM,0.0
12/4/2013 10:22:00 PM,0.0
12/8/2013 5:37:00 AM,0.0
12/14/2013 9:22:00 PM,0.0
12/22/2013 2:07:00 AM,0.0
1/30/2014 7:07:00 PM,0.0
2/13/2014 11:37:00 AM,2247.95944038382
2/13/2014 5:37:00 PM,105.199078964879
3/9/2014 2:22:00 AM,0.0
4/12/2014 12:22:00 AM,0.0
4/24/2014 8:22:00 AM,1732.60647545131
5/1/2014 7:07:00 AM,849.041096679454
5/14/2014 8:52:00 PM,0.0
5/21/2014 2:52:00 PM,1364.73057626358
5/24/2014 11:37:00 PM,0.0
6/2/2014 5:37:00 PM,31.3122284453889
6/8/2014 12:07:00 PM,2189.03708035709
6/16/2014 10:37:00 AM,1991.5811944257
6/19/2014 4:07:00 AM,0.0
6/24/2014 4:22:00 AM,0.0
6/29/2014 12:52:00 PM,1666.75943254743
7/5/2014 10:52:00 PM,0.0
7/10/2014 9:22:00 PM,0.0
7/13/2014 4:22:00 AM,0.0
7/18/2014 10:37:00 PM,0.0
7/24/2014 8:22:00 PM,0.0
7/31/2014 11:07:00 PM,0.0
8/7/2014 11:22:00 AM,2072.96745414091
8/21/2014 4:37:00 PM,722.767349369365
8/23/2014 8:07:00 AM,1500.44231651125
8/24/2014 4:07:00 AM,0.0
9/11/2014 4:52:00 AM,0.0
9/11/2014 9:22:00 AM,1869.30516413828
9/12/2014 8:37:00 AM,1710.73033450055
10/20/2014 5:52:00 AM,19.4205739999711
10/21/2014 6:07:00 AM,158.634771951938
10/25/2014 2:52:00 PM,1803.95482753973
10/29/2014 6:37:00 AM,162.919158884821
11/1/2014 2:07:00 PM,1970.73022590591
11/10/2014 3:07:00 PM,911.160075032066
12/12/2014 8:52:00 AM,1768.89711015088
12/29/2014 9:52:00 AM,1996.11311231376
1/17/2015 5:07:00 PM,116.278784714271
1/20/2015 3:07:00 PM,1515.88857937741
1/24/2015 6:07:00 AM,0.0
2/12/2015 9:52:00 PM,0.0
2/21/2015 4:37:00 AM,0.0
2/23/2015 11:07:00 PM,0.0
3/8/2015 8:07:00 AM,1441.39903094257
3/11/2015 12:52:00 PM,1360.3205244187
3/25/2015 12:22:00 PM,1613.28196521365
3/25/2015 9:52:00 PM,0.0
3/25/2015 11:22:00 PM,0.0
3/29/2015 6:37:00 AM,322.274273699383
4/8/2015 6:07:00 AM,106.374454642966
5/20/2015 5:22:00 AM,0.0
5/22/2015 6:22:00 AM,312.223354265717
6/14/2015 1:52:00 AM,0.0
6/27/2015 3:37:00 PM,1581.06172254746
7/27/2015 12:52:00 AM,0.0
8/4/2015 7:37:00 PM,0.0
8/24/2015 1:22:00 AM,0.0
8/24/2015 4:22:00 PM,399.294694339561
9/5/2015 11:07:00 AM,2215.84603206964
10/15/2015 10:52:00 AM,2229.46768062446
11/28/2015 7:52:00 AM,385.100835443082
1/2/2016 11:22:00 AM,2237.30043095103
2/14/2016 10:52:00 AM,2228.87777153332
2/24/2016 12:07:00 PM,2224.84413280813
3/1/2016 7:52:00 AM,805.881212801799
3/12/2016 7:37:00 PM,0.0
5/8/2016 5:37:00 AM,0.0
5/12/2016 4:22:00 AM,0.0
7/9/2016 8:37:00 AM,898.113346822966
8/29/2016 10:52:00 PM,0.0
9/22/2016 11:07:00 AM,1381.94315896979
10/4/2016 8:22:00 AM,1456.48740626995
10/20/2016 11:22:00 AM,1158.60905438133
11/16/2016 9:22:00 AM,785.844444160097
11/20/2016 2:52:00 AM,0.0
12/15/2016 2:37:00 AM,0.0
12/16/2016 11:52:00 AM,417.89954596086
12/28/2016 4:37:00 AM,0.0
1/13/2017 11:52:00 AM,2223.7194438618
1/14/2017 7:52:00 PM,0.0
1/15/2017 11:37:00 PM,0.0
1/25/2017 10:52:00 AM,2084.48118540304
2/3/2017 12:22:00 AM,0.0
2/5/2017 5:22:00 AM,0.0
2/18/2017 2:37:00 PM,490.975221341271
2/19/2017 9:07:00 AM,472.368352031344
2/20/2017 1:52:00 AM,0.0
3/8/2017 6:52:00 PM,0.0
3/10/2017 1:07:00 AM,0.0
3/31/2017 6:37:00 PM,0.0
4/7/2017 1:37:00 PM,2179.86519440919
4/29/2017 7:22:00 AM,992.085502332247
5/21/2017 4:37:00 AM,0.0
6/8/2017 11:37:00 PM,0.0
6/25/2017 2:07:00 PM,1802.34173117944
6/29/2017 8:07:00 PM,0.0
6/30/2017 12:52:00 PM,1988.17404515818
6/30/2017 7:22:00 PM,0.0
7/1/2017 1:37:00 PM,2171.13733947046
7/10/2017 1:52:00 AM,0.0
7/18/2017 5:22:00 AM,0.0
7/23/2017 4:37:00 PM,376.674126050257
7/26/2017 5:52:00 AM,10.6644177186485
8/5/2017 12:37:00 AM,0.0
8/17/2017 6:22:00 AM,212.032074357397
8/29/2017 12:22:00 AM,0.0
9/14/2017 6:22:00 PM,0.0
9/25/2017 2:07:00 PM,429.394672110146
9/26/2017 4:07:00 AM,0.0
10/1/2017 3:22:00 AM,0.0
11/5/2017 9:07:00 AM,84.7524925739651
11/20/2017 8:22:00 PM,0.0
12/27/2017 4:07:00 AM,0.0
3/16/2018 12:37:00 AM,0.0
3/23/2018 10:07:00 PM,0.0
3/27/2018 11:22:00 AM,2212.90966758228
4/19/2018 7:22:00 PM,0.0
4/21/2018 6:22:00 AM,291.299618596293
4/22/2018 5:37:00 PM,192.633704823958
5/10/2018 1:22:00 PM,2151.96101205623
5/17/2018 4:37:00 PM,889.112694648581
6/4/2018 6:37:00 PM,0.0
7/20/2018 3:52:00 AM,0.0
7/22/2018 12:37:00 PM,660.596136467228
7/24/2018 10:37:00 PM,0.0
8/10/2018 6:52:00 AM,215.745023484612
8/26/2018 1:07:00 AM,0.0
8/27/2018 9:37:00 AM,1631.35546624633
9/3/2018 6:37:00 PM,0.0
9/14/2018 8:07:00 AM,179.988860609959
9/19/2018 9:22:00 PM,0.0
9/22/2018 8:22:00 PM,0.0
10/9/2018 3:22:00 AM,0.0
10/17/2018 1:52:00 AM,0.0
10/24/2018 8:22:00 PM,0.0
11/2/2018 4:52:00 PM,225.983282567867
11/18/2018 2:07:00 PM,2100.69577404248
12/16/2018 8:37:00 PM,0.0
12/17/2018 9:37:00 PM,0.0
1/20/2019 8:07:00 AM,1166.56731508099
2/10/2019 7:37:00 AM,815.777148505311
3/9/2019 11:07:00 AM,2091.1494293728
3/21/2019 4:22:00 AM,0.0
3/23/2019 8:37:00 AM,1741.87476037795
3/28/2019 2:52:00 AM,0.0
3/28/2019 4:22:00 AM,0.0
4/10/2019 5:37:00 AM,0.0
4/13/2019 5:37:00 AM,0.0
4/14/2019 6:22:00 AM,115.501506903225
4/28/2019 12:52:00 PM,2155.65013095046
5/3/2019 8:22:00 AM,1601.18794707181
5/6/2019 1:37:00 PM,2218.31975219021
5/9/2019 1:37:00 AM,0.0
5/26/2019 2:07:00 PM,1159.95842527914
5/28/2019 5:22:00 AM,0.0
6/16/2019 4:52:00 AM,0.0
6/29/2019 7:37:00 PM,0.0
7/13/2019 11:37:00 PM,0.0
8/27/2019 5:37:00 PM,70.297367492205
8/28/2019 6:22:00 AM,174.444270586398
8/29/2019 11:52:00 PM,0.0
9/17/2019 7:52:00 AM,849.831780980663
9/29/2019 1:37:00 AM,0.0
9/30/2019 10:52:00 AM,1960.68727802557
10/6/2019 12:07:00 PM,2195.98004768548
10/14/2019 4:52:00 PM,384.731147656982
10/15/2019 11:07:00 AM,2185.13478503396
10/23/2019 7:52:00 PM,0.0
11/8/2019 5:22:00 PM,0.0
11/22/2019 2:37:00 PM,272.810073966629
12/26/2019 8:22:00 PM,0.0
1/10/2020 7:52:00 AM,644.628312963384
1/27/2020 11:22:00 AM,2207.87188724061
1/31/2020 3:07:00 PM,1377.46407332282
2/1/2020 11:37:00 AM,2214.19474593128
3/4/2020 1:07:00 AM,0.0
3/9/2020 9:07:00 PM,0.0
3/19/2020 2:37:00 PM,2165.83856381109
3/23/2020 7:07:00 PM,0.0
4/6/2020 10:22:00 PM,0.0
4/21/2020 5:37:00 AM,0.0
6/1/2020 6:37:00 AM,541.601057136189
6/3/2020 4:37:00 PM,692.598128405066
6/5/2020 11:07:00 AM,2159.80323674935
6/15/2020 5:22:00 AM,0.0
6/27/2020 8:52:00 AM,1801.0129423256
6/29/2020 12:52:00 PM,2171.07665746352
6/30/2020 10:37:00 AM,2176.04432414765
7/4/2020 2:52:00 AM,0.0
8/9/2020 7:37:00 PM,0.0
8/18/2020 3:52:00 PM,1179.07868394892
8/22/2020 9:52:00 PM,0.0
8/28/2020 5:37:00 PM,83.867067156831
9/2/2020 5:22:00 PM,289.017298139724
9/3/2020 3:22:00 AM,0.0
9/5/2020 7:07:00 AM,335.582724269764
9/11/2020 2:07:00 PM,2156.50505885321
9/24/2020 1:52:00 PM,1291.82123782733
9/29/2020 9:52:00 PM,0.0
10/24/2020 3:22:00 AM,0.0
10/25/2020 5:22:00 AM,0.0
11/16/2020 10:37:00 AM,2175.28229042033
12/7/2020 5:07:00 PM,11.7636367947525
</code></pre>
|
<python><matplotlib><seaborn><subplot><cdf>
|
2023-06-06 11:04:48
| 1
| 305
|
Jawairia
|
76,413,960
| 8,704,240
|
Distributed pg_dump job using Apache Spark
|
<p><strong>Hello everyone!</strong><br />
I'm using pg_dump to create a dump of my database and save it to S3.
I manage multiple PostgreSQL databases of different sizes; some are small and others are big and hold a lot of data.
My problem is that the pg_dump command takes too long on large databases. I thought about distributing the command's workload using Apache Spark.</p>
<p><strong>This brings up 2 questions on my side:</strong></p>
<ol>
<li>Is it possible to use Apache Spark in order to run a distributed pg_dump job?</li>
<li>If so how will it be executed?</li>
</ol>
<p>Thank you for your time :)</p>
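Worth noting before reaching for Spark: pg_dump can already parallelize on its own when writing the directory format, via `--jobs`. The sketch below only assembles the command line (database name, job count, and output path are hypothetical placeholders); Spark is generally better suited to reading tables in parallel over JDBC than to orchestrating pg_dump itself.

```python
# Minimal sketch: pg_dump's directory format (--format=directory) supports
# dumping tables concurrently with --jobs. All values below are hypothetical
# placeholders, not taken from the question.
db_name, jobs, out_dir = "mydb", 8, "/tmp/mydb_dump"
cmd = [
    "pg_dump",
    "--format=directory",   # required for parallel dumps
    f"--jobs={jobs}",       # number of concurrent worker connections
    f"--file={out_dir}",
    db_name,
]
print(" ".join(cmd))
```

The resulting directory can then be uploaded to S3 as-is, or restored in parallel with `pg_restore --jobs`.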
|
<python><postgresql><apache-spark><pyspark>
|
2023-06-06 11:00:27
| 1
| 971
|
Yair Chen
|
76,413,927
| 8,477,566
|
Multi-dimensional indexing of numpy arrays along inner axis
|
<ul>
<li>I have a numpy array <code>x</code> with shape <code>[4, 5, 3]</code></li>
<li>I have a 2D array of indices <code>i</code> with shape <code>[4, 3]</code>, referring to indices along dimension 1 (of length 5) in <code>x</code></li>
<li>I'd like to extract a sub-array <code>y</code> from <code>x</code>, with shape <code>[4, 3]</code>, such that <code>y[j, k] == x[j, i[j, k], k]</code></li>
<li>How do I do this?</li>
</ul>
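A minimal sketch of one way to do this with `np.take_along_axis` (the data here is random placeholder input):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((4, 5, 3))
i = rng.integers(0, 5, size=(4, 3))

# Insert a length-1 axis so `i` lines up with x's dimension 1, then remove it.
y = np.take_along_axis(x, i[:, None, :], axis=1).squeeze(axis=1)
```

An equivalent fancy-indexing form is `x[np.arange(4)[:, None], i, np.arange(3)]`.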
|
<python><numpy><indexing><numpy-ndarray><numpy-slicing>
|
2023-06-06 10:57:08
| 4
| 1,950
|
Jake Levi
|
76,413,746
| 125,673
|
saving a model I get: module 'tensorflow.python.saved_model.registration' has no attribute 'get_registered_name'
|
<p>When I try to save my TensorFlow model I get this error message. What is the problem here and how do I fix it?</p>
<pre><code> model = tf.keras.models.Sequential()
# define the neural network architecture
model.add(
tf.keras.layers.Dense(50, input_dim=hidden_dim, activation="relu")
)
model.add(tf.keras.layers.Dense(n_classes))
k += 1
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["mse", "accuracy"],
)
history = model.fit(
x_train,
y_train,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, y_test),
verbose=0,
)
folder = "model_mlp_lm"
file = f"m{k}_model"
os.makedirs(folder, exist_ok=True)
path = f"{folder}/{file}"
if os.path.isfile(path) is False:
model.save(path)
</code></pre>
<blockquote>
<p>module 'tensorflow.python.saved_model.registration' has no attribute 'get_registered_name'</p>
</blockquote>
<p>This is the stack trace:</p>
<pre><code>Traceback (most recent call last):
File "D:\Anaconda\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "D:\Anaconda\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy\__main__.py", line 39, in <module>
cli.main()
File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 430, in main
run()
File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "D:\_lodestar\personality-prediction\finetune_models\MLP_LM.py", line 273, in <module>
File "D:\Anaconda\lib\site-packages\tensorflow\python\saved_model\save.py", line 1450, in _build_meta_graph_impl
object_graph_proto = _serialize_object_graph(
File "D:\Anaconda\lib\site-packages\tensorflow\python\saved_model\save.py", line 1022, in _serialize_object_graph
_write_object_proto(obj, obj_proto, asset_file_def_index,
File "D:\Anaconda\lib\site-packages\tensorflow\python\saved_model\save.py", line 1061, in _write_object_proto
registered_name = registration.get_registered_name(obj)
AttributeError: module 'tensorflow.python.saved_model.registration' has no attribute 'get_registered_name'
</code></pre>
|
<python><tensorflow><keras>
|
2023-06-06 10:33:59
| 2
| 10,241
|
arame3333
|
76,413,729
| 981,499
|
Define partial views on a SQLAlchemy model
|
<p>We are dealing with a very large legacy SQLAlchemy model (and underlying table schema) aggregating many logically-separate models, that cannot, for practical reasons, be refactored.</p>
<p>We would want to be able to design authorizations/permissions on sub-models that restrict read/write to a subset of attributes (and perhaps methods as well).</p>
<p>What would be the best approach to design a "partial view" Mixin/Class/Metaclass that would allow:</p>
<ol>
<li>keeping the same original SQLAlchemy/flask-sqlalchemy model underneath (support <code>.query</code> etc)</li>
<li>once loaded/inited, restrict access to a designated subset (stored at class level) of attributes</li>
</ol>
<p>In essence going from:</p>
<pre><code>class MyLargeModel(db.Model):
id = db.Column(db.Integer, primary_key=True)
foo = db.Column(db.Text, nullable=False)
bar = db.Column(db.Text, nullable=False)
baz = db.Column(db.Text, nullable=False)
</code></pre>
<p>To something like:</p>
<pre><code>class ModelBase(db.Model):
id = db.Column(db.Integer, primary_key=True)
class FooView(ModelBase):
foo = db.Column(db.Text, nullable=False)
[??]
class BarView(ModelBase):
bar = db.Column(db.Text, nullable=False)
[??]
class BarbazView(BarView):
baz = db.Column(db.Text, nullable=False)
[??]
class MyLargeModel(FooView, BarbazView):
[??]
</code></pre>
<p>Where each of the view classes retains as much of the SQLA model's properties as possible (loading, and ideally persisting).</p>
<p>In a way, something similar to what SQLA-Marshmallow etc do for schemas (except while keeping an actual ORM model).</p>
<p>Is there any known pattern that could help me here?</p>
<p>[Edit] After further digging, it <em>seems</em> like SQLAlchemy's <a href="https://docs.sqlalchemy.org/en/20/orm/inheritance.html#single-table-inheritance" rel="nofollow noreferrer">Single table inheritance</a> might hold the key to what we are trying to do, but it seems to require a <code>polymorphic_on</code> column to split vertically on (I only want to split horizontally).</p>
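This is not an ORM-level answer, but the attribute-restriction half of the idea can be sketched framework-free with a wrapper class that stores its allowed subset at class level; whether it can be merged into the SQLAlchemy model itself (keeping <code>.query</code>) remains the open part of the question. All names below are illustrative:

```python
class RestrictedView:
    """Wrap a model instance, exposing only a whitelisted set of attributes."""
    _allowed: frozenset = frozenset()

    def __init__(self, obj):
        object.__setattr__(self, "_obj", obj)

    def __getattr__(self, name):
        # Only consulted for attributes not found normally; enforce the whitelist.
        if name in type(self)._allowed:
            return getattr(self._obj, name)
        raise AttributeError(f"{name!r} is not visible through {type(self).__name__}")


class FooView(RestrictedView):
    _allowed = frozenset({"id", "foo"})


class Record:  # stand-in for the large SQLAlchemy model
    id, foo, bar = 1, "f", "b"


view = FooView(Record())
print(view.foo)
```

Here `view.foo` resolves through the whitelist while `view.bar` raises `AttributeError`.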
|
<python><sqlalchemy><orm><flask-sqlalchemy>
|
2023-06-06 10:32:07
| 2
| 301
|
Dave
|
76,413,715
| 14,679,834
|
`time.time()` in Python is 1 hour behind from the current UNIX epoch
|
<p>I need to get the current UNIX epoch at UTC in Python and I've tried the following:</p>
<pre><code>from datetime import datetime
from datetime import timezone
import time
import calendar
def get_nonce():
unix_epoch = datetime.utcfromtimestamp(0).replace(tzinfo=timezone.utc)
now = datetime.now(tz=timezone.utc)
seconds = (now - unix_epoch).total_seconds()
return int(seconds)
def get_time():
current_time_seconds = time.time()
return int(current_time_seconds)
def get_time_utc():
current_time_seconds = datetime.now(timezone.utc).timestamp()
return int(current_time_seconds)
def get_time_again():
current_time = int(calendar.timegm(time.gmtime()))
return int(current_time)
print(get_nonce())
print(get_time())
print(get_time_utc())
print(get_time_again())
</code></pre>
<p>All are 1 hour behind from the current epoch which I check with this website: <a href="https://www.unixtimestamp.com/" rel="nofollow noreferrer">https://www.unixtimestamp.com/</a></p>
<p>What could be the reason? <code>time.tzname</code> says I'm at UTC, and <code>date +%s</code> on the command line also returns a value that's about an hour behind.</p>
<p>I started getting the wrong epoch value yesterday and it's been hindering my progress, since the API I'm integrating with requires the current epoch value in the request body.</p>
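One quick diagnostic, for what it's worth: all four functions above read the same underlying system clock, so if they agree with each other while lagging real time by an hour, the clock itself is off. That is a known WSL symptom (the VM clock drifts after the host sleeps); restarting WSL with <code>wsl --shutdown</code> or running <code>sudo hwclock -s</code> typically resyncs it. A minimal self-check:

```python
import calendar
import time
from datetime import datetime, timezone

# All of these read the same underlying system clock, so they agree with
# each other even when that clock itself is wrong.
a = int(time.time())
b = int(datetime.now(timezone.utc).timestamp())
c = calendar.timegm(time.gmtime())
print(a, b, c)
```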
|
<python><datetime><time><windows-subsystem-for-linux>
|
2023-06-06 10:30:19
| 1
| 526
|
neil_ruaro
|
76,413,689
| 12,695,210
|
Convert Numpy array to ctypes `int` pointer to call Cython function from Python
|
<p>I am attempting the vendor the method <code>_select_by_peak_distance</code> from <code>scipy.signal._peak_finding_utils</code>. This is a cython <code>.pyx</code> file and the function has the signature:</p>
<pre><code>def _select_by_peak_distance(np.intp_t[::1] peaks not None,
np.float64_t[::1] priority not None,
np.float64_t distance):
</code></pre>
<p>I have recompiled <code>_peak_finding_utils.pyx</code> locally with the setup.py file below.</p>
<p>I have imported the function into Python and am now trying to call it. Naively, I started by
calling it with <code>np.array[int], np.array[float], float</code>; however, I got an error related to the first argument:</p>
<pre><code>*** ValueError: Buffer dtype mismatch, expected 'intp_t' but got 'long'
</code></pre>
<p>I see the first argument should be a pointer to an integer array, not integers. So I tried to cast the numpy array to a pointer based on the numpy <code>ctypes</code> documentation:</p>
<pre><code>pointer = my_array.ctypes.data_as(ctypes.POINTER(ctypes.c_int))
</code></pre>
<p>However, calling the function with this argument results in the error:</p>
<pre><code>*** ValueError: Buffer has wrong number of dimensions (expected 1, got 0)
</code></pre>
<p>I have researched this but cannot find the underlying cause and feel like the <code>int</code> casting procedure is wrong. Any help would be much appreciated.</p>
<p>setup.py</p>
<pre><code>from distutils.core import setup
from Cython.Build import cythonize
import numpy
setup(
ext_modules=cythonize(
"_peak_finding_utils.pyx", compiler_directives={"language_level": "3"}, # TODO: what does this do.
),
include_dirs=[numpy.get_include()]
)
</code></pre>
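For what it's worth, the buffer error suggests a NumPy dtype mismatch rather than anything ctypes-related: the typed memoryview <code>np.intp_t[::1]</code> expects a C-contiguous NumPy array of dtype <code>np.intp</code>, not a raw pointer. A sketch of the conversion (the actual call is commented out, since the recompiled module is not available here):

```python
import numpy as np

peaks = np.array([10, 40, 90])                      # default dtype may be int64 ("long")
peaks = np.ascontiguousarray(peaks, dtype=np.intp)  # matches np.intp_t[::1]
priority = np.ascontiguousarray([0.3, 0.1, 0.2], dtype=np.float64)

# hypothetical call into the recompiled module:
# keep = _select_by_peak_distance(peaks, priority, 25.0)
```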
|
<python><numpy><cython>
|
2023-06-06 10:27:29
| 1
| 695
|
Joseph
|
76,413,570
| 2,829,150
|
if elif turn into more convenient approach
|
<p>Is there any way to turn those if statements into a cleaner approach, by means of some kind of switch statement?</p>
<pre><code>def col_type_generator(self, val):
if "int" or "tinyint" in val:
return " A"
elif "longtext" or "mediumtext" or "text" or "tinytext" or "json" in val:
return " B"
elif "varchar" in val:
return " C"
elif "bit" in val:
return " D"
elif "datetime" in val:
return " E"
else:
return "UNKNOWN COLUMN TYPE"
</code></pre>
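One possible rewrite uses a mapping from keyword groups to labels. Note, as an aside, that the original <code>if "int" or "tinyint" in val</code> is always truthy, because <code>or</code> does not distribute over <code>in</code>; the sketch below (labels taken from the question, structure mine) avoids that pitfall with <code>any()</code>:

```python
# Order matters: dicts preserve insertion order, and e.g. "text" is a
# substring of "longtext", so related keywords live in the same group.
TYPE_GROUPS = {
    ("int", "tinyint"): " A",
    ("longtext", "mediumtext", "text", "tinytext", "json"): " B",
    ("varchar",): " C",
    ("bit",): " D",
    ("datetime",): " E",
}

def col_type_generator(val):
    for keywords, label in TYPE_GROUPS.items():
        if any(k in val for k in keywords):
            return label
    return "UNKNOWN COLUMN TYPE"
```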
|
<python>
|
2023-06-06 10:13:10
| 1
| 3,611
|
Arie
|
76,413,557
| 16,383,578
|
How do I create a smooth image containing all RGB colors in NumPy?
|
<p>I want to create an uncompressed image containing every RGB color exactly once in NumPy.</p>
<p>I am using 24-bit color depth here. An RGB color contains three channels: Red, Green and Blue, each channel can have 256 intensities, so an RGB color is a sequence of 3 bytes and the total number of RGB colors is (2^8)^3 = 2^24 = 16777216.</p>
<p>An image containing every RGB color exactly once would therefore have 16777216 pixels. The square root of 16777216 is 2^12 = 4096, thus I want an image of resolution 4096x4096.</p>
<p>Such an uncompressed image would have a size of exactly 16777216*3/1048576 = 48 MiB. Because the image upload size limit is 2 MiB, I can only upload compressed versions.</p>
<p>Here is what I have got so far:</p>
<p><a href="https://i.sstatic.net/Qhix1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qhix1.png" alt="enter image description here" /></a></p>
<p>It isn't smooth because there are horizontal bands with height of 16 pixels.</p>
<p>My code:</p>
<pre><code>import cv2
import numpy as np
byte = range(256)
colors = np.array(np.meshgrid(byte, byte, byte)).T.reshape(-1,3)
striped = colors.reshape((4096, 4096, 3))
img = np.zeros((4096, 4096, 3), dtype=np.uint8)
for i in range(4096):
img[i] = cv2.rotate(striped[i].reshape((16, 256, 3)), cv2.ROTATE_90_CLOCKWISE).reshape((-1, 3))
new_img = np.zeros((4096, 4096, 3), dtype=np.uint8)
rot90 = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
for i in range(4096):
new_img[i] = cv2.rotate(rot90[i].reshape((256, 16, 3)), cv2.ROTATE_90_CLOCKWISE).reshape((-1, 3))
new_img = np.rot90(new_img)
cv2.imwrite('D:/BGR_colors.png', striped, (cv2.IMWRITE_PNG_COMPRESSION, 0))
cv2.imwrite('D:/BGR_colors_new.png', img, (cv2.IMWRITE_PNG_COMPRESSION, 0))
cv2.imwrite('D:/BGR_colors_new1.png', new_img, (cv2.IMWRITE_PNG_COMPRESSION, 0))
cv2.imwrite('D:/BGR_colors_compressed.png', striped, (cv2.IMWRITE_PNG_COMPRESSION, 9))
cv2.imwrite('D:/BGR_colors_new_compressed.png', img, (cv2.IMWRITE_PNG_COMPRESSION, 9))
cv2.imwrite('D:/BGR_colors_new1_compressed.png', new_img, (cv2.IMWRITE_PNG_COMPRESSION, 9))
</code></pre>
<p>So I first generated all RGB colors using Cartesian product and reshaped it to 4096x4096 image.</p>
<p>The result:</p>
<p><a href="https://i.sstatic.net/AS348.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AS348.png" alt="enter image description here" /></a></p>
<p>It wasn't smooth; it has many small bands: there are 16 segments of 256 pixels horizontally that look very similar, and vertically there are 256 bands of 16 pixels.</p>
<p>There are 4096 rectangles of size 256x16. I want to make the image smooth, by eliminating the rectangles.</p>
<p>I want to get rid of the rectangles, by taking the first pixel of each of the 16 groups of 256 pixels in each row, put them together, then take the second pixel of each group, and so on. Similarly, this should happen to the 256 groups of 16 pixels in each column.</p>
<p>I only succeeded in eliminating the horizontal bands, I have tried many methods to eliminate the vertical bands but the result only gets worse:</p>
<p><a href="https://i.sstatic.net/Igv7C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Igv7C.png" alt="enter image description here" /></a></p>
<p>How can I get rid of the vertical bands, and how can I get the final result in the least amount of steps and only using NumPy methods?</p>
|
<python><python-3.x><image><numpy>
|
2023-06-06 10:11:32
| 1
| 3,930
|
Ξένη Γήινος
|
76,413,536
| 7,615,872
|
Trouble connecting an async function to a PySide6 button signal
|
<p>I'm currently working on a Python application using PySide6 for the GUI. I have a <code>QPushButton</code> and I want to connect its <code>clicked</code> signal to an asynchronous function, but I'm running into some issues.</p>
<p>Here's a simplified version of my code:</p>
<pre><code>from PySide6.QtWidgets import QApplication, QMainWindow, QPushButton
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.setWindowTitle("Async Function Example")
self.login_button = QPushButton("Login", self)
self.login_button.clicked.connect(self.login_button_clicked)
async def login_button_clicked(self):
# Perform async tasks
print("test async button clicked!")
app = QApplication([])
window = MainWindow()
window.show()
app.exec()
</code></pre>
<p>However, when I run this code, I receive the following error:</p>
<pre><code>RuntimeWarning: coroutine 'MainWindow.login_button_clicked' was never awaited
app.exec()
</code></pre>
<p>I understand that the error is caused by not awaiting the asynchronous function. I have tried using the await keyword before the method call when connecting the signal, but PySide6 doesn't directly support connecting asynchronous functions to signals.</p>
<p>I've also attempted to work around this by using a helper function or a coroutine runner, but without success.</p>
<p>Could someone please guide me on how to properly connect an asynchronous function to a PySide6 button signal? Is there a recommended approach or workaround for this situation? Any code examples or suggestions would be greatly appreciated.</p>
<p>Thank you in advance for your help!</p>
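One common workaround is to keep the slot synchronous and have it schedule the coroutine on an asyncio event loop that is integrated with Qt (the third-party <code>qasync</code> package provides such a Qt-compatible loop). The sketch below strips out Qt entirely so it stays runnable and only demonstrates the scheduling step; all names are illustrative:

```python
import asyncio

async def login_button_clicked():
    await asyncio.sleep(0)      # stand-in for the real async work
    return "logged in"

def on_clicked():
    # A plain synchronous slot: schedule the coroutine instead of calling it.
    # In a real app this is what you connect to button.clicked, with a
    # Qt-integrated loop (e.g. qasync.QEventLoop) already running.
    return asyncio.ensure_future(login_button_clicked())

async def main():
    task = on_clicked()          # what the Qt signal would invoke
    return await task

result = asyncio.run(main())
print(result)
```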
|
<python><python-asyncio><pyside6>
|
2023-06-06 10:08:44
| 3
| 1,085
|
Mehdi Ben Hamida
|
76,413,508
| 18,579,739
|
why keyword argument are not passed into __init_subclass__(..)
|
<p>Code:</p>
<pre><code>class ExternalMeta(type):
def __new__(cls, name, base, dct, **kwargs):
dct['district'] = 'Jiading'
x = super().__new__(cls, name, base, dct)
x.city = 'Shanghai'
return x
class MyMeta(ExternalMeta):
def __new__(cls, name, base, dct, age=0, **kwargs):
x = super().__new__(cls, name, base, dct)
x.name = 'Jerry'
x.age = age
return x
def __init__(self, name, base, dct, age=0, **kwargs):
self.country = 'China'
class MyClass(metaclass=MyMeta, age=10):
def __init_subclass__(cls, say_hi, **kwargs):
print(f'keyword arguments are: {kwargs}')
super().__init_subclass__(**kwargs)
cls.hello = say_hi
class DerivedClass(MyClass, say_hi="hello"):
pass
</code></pre>
<p>this throws:</p>
<pre><code>Traceback (most recent call last):
File "app2.py", line 27, in <module>
class DerivedClass(MyClass, say_hi="hello"):
File "app2.py", line 11, in __new__
x = super().__new__(cls, name, base, dct)
File "app2.py", line 4, in __new__
x = super().__new__(cls, name, base, dct)
TypeError: __init_subclass__() missing 1 required positional argument: 'say_hi'
</code></pre>
<p>From the offical doc:</p>
<blockquote>
<p>classmethod object.__init_subclass__(cls)
This method is called whenever the containing class is subclassed. cls is then the new subclass. If defined as a normal instance method, this method is implicitly converted to a class method.</p>
<p>Keyword arguments which are given to a new class are passed to the parent’s class __init_subclass__. For compatibility with other classes using __init_subclass__, one should take out the needed keyword arguments and pass the others over to the base class, as in:</p>
<p><a href="https://docs.python.org/3/reference/datamodel.html#object.__init_subclass__" rel="nofollow noreferrer">https://docs.python.org/3/reference/datamodel.html#object.__init_subclass__</a></p>
</blockquote>
<p>I tried printing <code>kwargs</code>; it's <code>{}</code> (empty). So why is my <code>say_hi</code> argument not passed to the <code>__init_subclass__</code> method?</p>
<p>Edit:
I read more materials and wrote an article with diagrams and tests about the creation of instances and classes in Python:</p>
<p><a href="https://shan-weiqiang.github.io/2023/06/24/Python-metaclass.html" rel="nofollow noreferrer">https://shan-weiqiang.github.io/2023/06/24/Python-metaclass.html</a></p>
<p>References to articles (including this page) are included in it; I hope it can help someone.</p>
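For reference, a minimal sketch of the usual fix (per PEP 487 semantics): the metaclass's <code>__new__</code> must forward the class keywords to <code>super().__new__</code> so that <code>type.__new__</code> can deliver them to <code>__init_subclass__</code>, and its <code>__init__</code> must accept and drop them, since <code>type.__init__</code> does not take extra keywords. The names here are illustrative, not taken from the question:

```python
class Meta(type):
    def __new__(mcls, name, bases, dct, **kwargs):
        # Forward class keywords so type.__new__ hands them to __init_subclass__.
        return super().__new__(mcls, name, bases, dct, **kwargs)

    def __init__(cls, name, bases, dct, **kwargs):
        # Consume the keywords; type.__init__ does not accept extras.
        super().__init__(name, bases, dct)


class Base(metaclass=Meta):
    def __init_subclass__(cls, say_hi=None, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.hello = say_hi


class Child(Base, say_hi="hello"):
    pass


print(Child.hello)
```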
|
<python><metaclass>
|
2023-06-06 10:04:16
| 2
| 396
|
shan
|
76,413,384
| 20,920,790
|
How to add yticks label for tornado plot
|
<p>I got this data:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">ses</th>
<th style="text-align: left;">year_dt</th>
<th style="text-align: right;">mentee_cnt</th>
<th style="text-align: right;">ses_amount</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">12</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">12</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">11</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">33</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">10</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">60</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">9</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">16</td>
<td style="text-align: right;">144</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">8</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">39</td>
<td style="text-align: right;">312</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: right;">7</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">64</td>
<td style="text-align: right;">448</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: right;">6</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">124</td>
<td style="text-align: right;">744</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: right;">5</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">191</td>
<td style="text-align: right;">955</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: right;">4</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">252</td>
<td style="text-align: right;">1008</td>
</tr>
<tr>
<td style="text-align: right;">9</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">361</td>
<td style="text-align: right;">1083</td>
</tr>
<tr>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">368</td>
<td style="text-align: right;">736</td>
</tr>
<tr>
<td style="text-align: right;">11</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">382</td>
<td style="text-align: right;">382</td>
</tr>
<tr>
<td style="text-align: right;">12</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2022-01-01</td>
<td style="text-align: right;">278</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">13</td>
<td style="text-align: right;">4</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">8</td>
</tr>
<tr>
<td style="text-align: right;">14</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">28</td>
<td style="text-align: right;">84</td>
</tr>
<tr>
<td style="text-align: right;">15</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">103</td>
<td style="text-align: right;">206</td>
</tr>
<tr>
<td style="text-align: right;">16</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">309</td>
<td style="text-align: right;">309</td>
</tr>
<tr>
<td style="text-align: right;">17</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">384</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table>
</div>
<p>Code:</p>
<pre><code>years = ['2021', '2022']
hight = [3, 6]
for y, h in zip(years, hight):
df3_2_filter = df3_2[df3_2['year_dt'] == datetime.strptime(f'{y}-01-01', '%Y-%m-%d').date()]
pos = np.array(df3_2_filter['ses'])
fig, (ax_left, ax_right) = plt.subplots(ncols=2, figsize=(13, h))
left = ax_left.barh(pos, df3_2_filter['mentee_cnt'], align='center', facecolor='#DEBB68', linewidth=0)
ax_left.bar_label(left, padding=-20, size=11, label_type='edge')
ax_left.set_xlabel('Mentee')
ax_left.invert_xaxis()
ax_left.set_xlim(max(df3_2_filter['mentee_cnt'])*1.1, 0)
ax_left.tick_params(axis='both', labelsize=11)
ax_left.set_yticks([])
ax_left.grid(False)
right = ax_right.barh(pos, df3_2_filter['ses_amount'], align='center', facecolor='#DEBB68', linewidth=0)
ax_right.bar_label(right, padding=0, size=11, label_type='edge')
ax_right.set_xlabel('Sessions')
ax_right.set_xlim(0, max(df3_2_filter['ses_amount'])*1.1)
ax_right.tick_params(axis='both', labelsize=11)
ax_right.set_yticks(pos)
ax_right.set_yticklabels(df3_2_filter['ses'], ha='center', x=-0.08)
ax_right.grid(False)
plt.suptitle(f'Mentee and sessions {y} year')
plt.show()
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/QIkPC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QIkPC.png" alt="plt.show" /></a>
<a href="https://i.sstatic.net/R8oGT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R8oGT.png" alt="plt.show" /></a></p>
<p>How do I add a title for the y axis of this graph?
Something like this:
<a href="https://i.sstatic.net/FaPAo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FaPAo.png" alt="plt.show" /></a></p>
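If the goal is a single shared y-axis title for both subplots rather than per-axes labels, <code>Figure.supylabel</code> (available since Matplotlib 3.4) may be what's needed. A minimal headless sketch with made-up data:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, (ax_left, ax_right) = plt.subplots(ncols=2, figsize=(8, 3))
ax_left.barh([0, 1, 2], [3, 2, 1])
ax_right.barh([0, 1, 2], [1, 2, 3])
fig.suptitle("Mentee and sessions 2022 year")
label = fig.supylabel("ses")   # one shared y-axis title for the whole figure
plt.close(fig)
```

For a per-axes title instead, <code>ax_left.set_ylabel("ses")</code> would be the usual call.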
|
<python><matplotlib>
|
2023-06-06 09:49:42
| 1
| 402
|
John Doe
|
76,413,283
| 765,766
|
Create Coloring book page from image
|
<p>I want to play and learn a little bit with image processing.</p>
<p>I found a page that converts any image to a coloring book sketch and it does this really well.</p>
<p>Original:
<a href="https://i.sstatic.net/WeU72.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WeU72.png" alt="enter image description here" /></a>
Sketch:
<a href="https://i.sstatic.net/uVnXT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uVnXT.png" alt="enter image description here" /></a></p>
<p>I would like to rebuild something like this. I looked into OpenCV and thought that highlighting contours/edges might work.</p>
<p>I got some results with Canny Edge detection:</p>
<pre><code>img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_blur = cv2.GaussianBlur(img_gray, (3, 3), 0)
edges = cv2.Canny(image=img_gray, threshold1=20, threshold2=100) # Canny Edge Detection
ret,th2 = cv2.threshold(edges,100,255,cv2.THRESH_BINARY_INV)
cv2.imshow('Canny Edge Detection', th2)
</code></pre>
<p><a href="https://i.sstatic.net/FQ7qP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FQ7qP.png" alt="enter image description here" /></a></p>
<p>My result is not as smooth and detailed; the text and my cat's face, for example, are largely lost.
So my question is: am I on the right path, and if so, what else could I try?</p>
<p>Or do I need a machine learning model for this? I suppose I would then need a ton of training data...</p>
|
<python><opencv><image-processing>
|
2023-06-06 09:34:38
| 1
| 699
|
metabolic
|
76,413,246
| 7,114,703
|
Train a classifier on specific labels of MNIST dataset with TensorFlow
|
<p>I would like to train a classifier on the MNIST dataset, but with a limited set of labels. For example, I would like to train a classifier only on labels [1, 4, 5, 6, 8, 9] out of all the labels [0-9]. I am getting the following error:</p>
<p><code>res = tf.nn.sparse_softmax_cross_entropy_with_logits( Node: 'sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits' Received a label value of 9 which is outside the valid range of [0, 6). Label values: 9 5 9 9 6 1 4 4 6 6 9 4 9 1 8 5 9 5 4 8 9 9 1 8 6 4 4 9 9 4 4 8 8 6 6 5 9 4 1 5 5 6 4 1 1 8 9 6 8 5 6 1 6 6 4 6 1 4 4 4 1 1 1 6 9 8 8 8 5 1 8 8 6 6 5 1 1 5 1 6 9 8 1 8 4 6 4 9 8 1 6 5 5 9 1 6 8 1 5 5 6 9 1 9 9 6 4 6 6 4 8 6 6 4 5 4 4 5 8 1 8 6 1 5 4 5 8 1</code></p>
<p>Here is the approach I have used:</p>
<pre><code>import tensorflow_datasets as tfds
import tensorflow as tf
val_split = 20 # percent of training data
(ds_test, ds_valid, ds_train), ds_info = tfds.load(
    'mnist',
split=['test', f'train[0%:{val_split}%]', f'train[{val_split}%:]'],
as_supervised=True,
with_info=True
)
</code></pre>
<p>The ds_train dataset object has the following samples per label <br>
{0: 4705, 1: 5433, 2: 4772, 3: 4936, 4: 4681, 5: 4333, 6: 4728, 7: 4966, 8: 4703, 9: 4743}.</p>
<p>After this I filter the dataset using <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#filter" rel="nofollow noreferrer">filter()</a> as follows:</p>
<pre><code>known_classes = [1, 4, 5, 6, 8, 9]
kc = tf.constant(known_classes, dtype=tf.int64)
def predicate(image, label):
isallowed = tf.equal(kc, label)
reduced = tf.reduce_sum(tf.cast(isallowed, tf.int64))
return tf.greater(reduced, tf.constant(0, dtype=tf.int64))
ds_test = ds_test.filter(predicate)
ds_valid = ds_valid.filter(predicate)
ds_train = ds_train.filter(predicate)
</code></pre>
<p>After filtering, the samples per label for ds_train are <br>
{0: 0, 1: 5433, 2: 0, 3: 0, 4: 4681, 5: 4333, 6: 4728, 7: 0, 8: 4703, 9: 4743}.</p>
<p>Next steps are normalizing the image and preparing the dataset objects for training.</p>
<pre><code>def normalize_img(image, label):
"""Normalizes images: `uint8` -> `float32`."""
return tf.cast(image, tf.float32) / 255., label
ds_train = ds_train.map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(128, drop_remainder=True)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)
ds_test = ds_test.map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(128, drop_remainder=True)
ds_test = ds_test.prefetch(tf.data.AUTOTUNE)
ds_valid = ds_valid.map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_valid = ds_valid.batch(128, drop_remainder=True)
ds_valid = ds_valid.prefetch(tf.data.AUTOTUNE)
</code></pre>
<p>Thereafter, I create a simple model as follows and then proceed with training</p>
<pre><code>model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(len(known_classes)) # known_classes from above
])
model.compile(
optimizer=tf.keras.optimizers.Adam(0.001),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
history = model.fit(
ds_train,
epochs=5,
validation_data=ds_valid,
)
</code></pre>
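<p>From the error, I suspect the kept labels need to be remapped into a contiguous range before training. A pure-Python sketch (no TensorFlow) of what I mean; is this the right direction?</p>

```python
# The sparse cross-entropy loss expects labels in [0, num_classes),
# so the six kept labels would need remapping onto 0..5.
known_classes = [1, 4, 5, 6, 8, 9]
remap = {orig: new for new, orig in enumerate(known_classes)}

raw_labels = [9, 5, 1, 6, 4, 8]          # labels as they come out of the dataset
mapped = [remap[lbl] for lbl in raw_labels]
print(mapped)  # [5, 2, 0, 3, 1, 4]
```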
<p>I am new to TensorFlow and any help is appreciated!</p>
|
<python><tensorflow><keras><classification><tensorflow-datasets>
|
2023-06-06 09:30:16
| 1
| 528
|
Mahendra Singh
|
76,413,151
| 1,745,291
|
How to deal with GenericAliases with python 3.8 with compability with more recent versions?
|
<p>I need to do some introspection on dataclasses.
Currently the project runs on Python 3.8, but it will be upgraded soon. Right now, I would like to know how to check whether a type is a GenericAlias and get its t_orig and t_args. How can I do this so that:</p>
<ul>
<li><p>On python 3.8 :</p>
<ul>
<li>It works with typing.List[int]</li>
</ul>
</li>
<li><p>On python 3.9+ :</p>
<ul>
<li>It works with typing.List[int]</li>
<li>It works with list[int]</li>
</ul>
</li>
<li><p>The same code works all 3.8+ versions</p>
</li>
</ul>
|
<python><compatibility><type-hinting><introspection>
|
2023-06-06 09:18:47
| 0
| 3,937
|
hl037_
|
76,413,124
| 4,091,473
|
Change bar orientation plotly python based on screen size
|
<p>I have a Flask app, and I have no idea whether it is possible (and if so, how) to get a Python Plotly bar graph whose bars are vertically oriented on big screens and horizontally oriented on small screens. I tried the plot configuration, but it did not work. It would be perfect if the solution could simply use <code>python</code> directly, since my JS knowledge is practically nonexistent.</p>
<p>My HTML is:</p>
<pre><code><...>
<!-- Graph and Calendar -->
<div class="row">
<h2 class="text-start mb-2">Title of this section</h2>
<hr>
<div class="col-12 col-lg-8 mb-4">
<div class="card">
<div class="card-header">Title of the card</div>
<div class="card-body">
<!-- Interactive Graph Here -->
{{ graph_html | safe }}
</div>
</div>
</div>
<div class="col-12 col-lg-4 mb-4">
<div id="calendar"></div>
</div>
</div>
<...>
</code></pre>
<p>My Plotly in python is:</p>
<pre><code>def create_bar_graph():
data = [
go.Bar(
x=['Category 1', 'Category 2', 'Category 3'],
y=[10, 20, 15],
)
]
layout = go.Layout(
title='Bar Graph',
barmode='group',
)
fig = go.Figure(data=data, layout=layout)
# This didn't work:
config = {'responsive': True}
# Generate the graph HTML
graph_html = pio.to_html(fig, full_html=False, config=config)
return graph_html
</code></pre>
|
<python><flask><plotly>
|
2023-06-06 09:15:38
| 1
| 368
|
Jzbach
|
76,412,906
| 5,449,876
|
Installing tensorflow 2.0 in anaconda on computer with no Internet
|
<p>I work in an organization where our workstations have no internet access. I wanted to install TensorFlow 2.0.
I have already installed Anaconda 4.4 (which has Python 3.6) on my PC.
Since there is no internet, I installed TensorFlow 1.2.0 with immense difficulty, as it depends on many other libraries. Finally, someone who had done this earlier told me that I needed to install the following prerequisites before installing TensorFlow 1.2.0:</p>
<ol>
<li><p>protobuf-3.2.0-py3-none-any.whl</p>
</li>
<li><p>webencodings-0.5.1-py2.py3-none-any.whl</p>
</li>
<li><p>html5lib-0.9999999</p>
</li>
<li><p>backports.weakref-1.0rc1-py3-none-any.whl</p>
</li>
<li><p>Markdown-2.2.0</p>
</li>
<li><p>bleach-1.5.0-py2.py3-none-any.whl</p>
</li>
<li><p>After all the prerequisites (items 1 to 6 above) were installed, I installed tensorflow-1.2.0-cp36-cp36m-win_amd64.whl through pip.</p>
</li>
</ol>
<p>But after that, installing Keras became the next issue, as it again depends on a number of other libraries.
I came to know that TensorFlow 2.0 has Keras bundled with it, so I plan to install that. But I need to know in advance which prerequisite libraries (similar to the list above) are required, so that I can download them all at once on an internet-connected PC and then copy them to my workstation.
I am even open to installing a higher version of TensorFlow (<em>one that can work with Anaconda 4.4 / Python 3.6</em>), provided the number of prerequisite dependencies is smaller (so that I don't have to keep hopping between the internet PC and my offline workstation) or they are bundled together.
Additional info: my official PC runs Windows 10. <em>I am relatively new to Python.</em></p>
|
<python><tensorflow><keras><anaconda>
|
2023-06-06 08:47:53
| 1
| 310
|
Nostalgic
|
76,412,789
| 1,651,270
|
Assigning a default value to an enum class when the value provided to the constructor is not enumerated
|
<p>I have an enum class:</p>
<pre><code>import enum
class Version(enum.IntEnum):
VERSION_1 = 1
VERSION_2 = 2
VERSION_UNKNOWN = 99
</code></pre>
<p>When a user initializes the enum class with a value that is not listed as a member, I would like them to get VERSION_UNKNOWN, i.e. the following snippet should execute successfully:</p>
<pre><code>e = Version(3)
assert e == Version.VERSION_UNKNOWN
</code></pre>
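<p>For reference, the default behaviour I'm seeing today — a self-contained reproduction showing that an unlisted value simply raises:</p>

```python
import enum

class Version(enum.IntEnum):
    VERSION_1 = 1
    VERSION_2 = 2
    VERSION_UNKNOWN = 99

# Constructing with a non-enumerated value currently raises ValueError.
try:
    Version(3)
except ValueError as exc:
    print(exc)  # 3 is not a valid Version
```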
<p>Is there any way to do that sort of thing?</p>
|
<python>
|
2023-06-06 08:31:09
| 1
| 308
|
Nick
|
76,412,630
| 12,436,050
|
Conditional merging of two dataframes in python3.7
|
<p>I have the following dataframe:</p>
<pre><code>col1 term1 term2
ab|a ab a
cd cd
</code></pre>
<p>I would like to merge this dataframe with another dataframe (df2) on both columns "term1" and "term2", but skip/ignore a column when it is None (as in row 2). I am trying to use an if/else condition inside a for loop. Please see the pseudocode below (it is not functional code and currently raises an error).</p>
<p>Is this the right approach, or is there a nicer way to do it?</p>
<pre><code>df1 = pd.concat([df["col1"], df["col1"].str.split("|", expand=True)], axis=1)
df1.rename(columns={0: 'term1', 1: 'term2'}, inplace=True)
for index, row in df1.iterrows():
if row['term1'] is None:
break
else:
row = row.to_frame()
print (row)
row.merge(df2, how = 'inner', left_on = 'term1', right_on = 'STR')
</code></pre>
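<p>To make this reproducible, here is a tiny version of the two frames (df2 is made up, since I can't share the real one):</p>

```python
import pandas as pd

# Build df1 exactly as in the snippet above, from a two-row df.
df = pd.DataFrame({'col1': ['ab|a', 'cd']})
df1 = pd.concat([df['col1'], df['col1'].str.split('|', expand=True)], axis=1)
df1.rename(columns={0: 'term1', 1: 'term2'}, inplace=True)

# Made-up stand-in for df2 (the real one has an 'STR' column).
df2 = pd.DataFrame({'STR': ['ab', 'a', 'cd'], 'val': [1, 2, 3]})
print(df1)  # note term2 is None in the second row
```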
|
<python><pandas>
|
2023-06-06 08:12:03
| 1
| 1,495
|
rshar
|
76,412,583
| 5,214,109
|
ValueError: 'CartPole DQN model/' is not a valid root scope name
|
<p>I am following this blog- <a href="https://www.pylessons.com/CartPole-reinforcement-learning" rel="nofollow noreferrer">https://www.pylessons.com/CartPole-reinforcement-learning</a> to run a deep Q learning example.</p>
<p>While running the command below in the DQNAgent class's replay() function:</p>
<pre><code> target = self.model.predict(state)
</code></pre>
<p>I get this error:</p>
<p>ValueError: 'CartPole DQN model/' is not a valid root scope name. A root scope name has to match the following pattern: ^[A-Za-z0-9.][A-Za-z0-9_.\/>-]*$</p>
<p><code>self</code> is an object of a class with <code>env</code> and other members defined; <code>model</code> is a user-defined deep learning model:</p>
<pre><code>self.model = OurModel(input_shape=(self.state_size,), action_space = self.action_size)
</code></pre>
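<p>While debugging, I checked the model name against the pattern quoted in the error message, and the spaces do seem to be what the regex rejects (plain <code>re</code> sketch). So the question becomes: where does that scope name come from, and how do I set a valid one?</p>

```python
import re

# The pattern quoted in the error message for a valid root scope name.
pattern = r'^[A-Za-z0-9.][A-Za-z0-9_.\/>-]*$'

print(bool(re.match(pattern, 'CartPole DQN model/')))  # False: spaces rejected
print(bool(re.match(pattern, 'CartPole_DQN_model/')))  # True
```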
|
<python><openai-gym>
|
2023-06-06 08:06:23
| 1
| 659
|
Arpit Sisodia
|
76,412,456
| 1,833,326
|
Installing Packages with Poetry: ERROR: Can not combine '--user' and '--prefix' as they imply different installation locations
|
<p>If I try to run <code>poetry install</code>, I get this error:</p>
<pre><code> • Installing six (1.16.0)
CalledProcessError
Command 'C:\Users\XXX\AppData\Local\pypoetry\Cache\virtualenvs\XXX-MaFxIpG_-py3.9\Scripts\python.exe -m pip install --disable-pip-version-check --isolated --no-input --prefix C:\Users\lazlo\AppData\Local\pypoetry\Cache\virtualenvs\XXXX-MaFxIpG_-py3.9 --no-deps C:\Users\lazlo\AppData\Local\pypoetry\Cache\artifacts\fb\06\dd\b5671b47dd0597663bc05d60d324bb315a8cef56f3179b8f9067f88e50\pycparser-2.21-py2.py3-none-any.whl' returned non-zero exit status 1.
The following error occurred when trying to handle this error:
EnvCommandError
Command C:\Users\XXX\AppData\Local\pypoetry\Cache\virtualenvs\XXX-MaFxIpG_-py3.9\Scripts\python.exe -m pip install --disable-pip-version-check --isolated --no-input --prefix C:\Users\XXX\AppData\Local\pypoetry\Cache\virtualenvs\XXX-MaFxIpG_-py3.9 --no-deps C:\Users\lazlo\AppData\Local\pypoetry\Cache\artifacts\fb\06\dd\b5671b47dd0597663bc05d60d324bb315a8cef56f3179b8f9067f88e50\pycparser-2.21-py2.py3-none-any.whl errored with the following return code 1, and output:
ERROR: Can not combine '--user' and '--prefix' as they imply different installation locations
The following error occurred when trying to handle this error:
PoetryException
Failed to install C:/Users/XXX/AppData/Local/pypoetry/Cache/artifacts/fb/06/dd/b5671b47dd0597663bc05d60d324bb315a8cef56f3179b8f9067f88e50/six-1.16.0-py2.py3-none-any.whl
</code></pre>
<p>But when I look at this path, I can find the file.</p>
<p>My pyproject.toml looks like</p>
<pre><code>[tool.poetry]
name = "lf_privat"
version = "1.0.0"
description = ""
authors = [
"Lazloo <lazloo@xxxx.com>",
]
license = "Proprietary"
[[tool.poetry.source]]
name = "XXXX_nexus"
url = "https://nexus.infrastructure.XXX.net/repository/pypi-all/simple/"
[tool.poetry.dependencies]
python = ">=3.8,<4.0"
pandas = "^1.4.4"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
[tool.pytest.ini_options]
testpaths = ["tests"]
</code></pre>
<p>The package itself does not seem to be the issue: if I try to install other packages, I get the same error message.</p>
<p>What can I do about this?</p>
|
<python><python-poetry><pycparser>
|
2023-06-06 07:48:43
| 1
| 1,018
|
Lazloo Xp
|
76,412,417
| 8,351,318
|
How to create Linked Service for Salesforce in Azure Synapse using Python
|
<p>As per the requirements, I am working in an Azure Synapse environment using Python. I need to create a Linked Service for Salesforce, to be used in a pipeline later.<br />
But I'm facing an error and couldn't find good documentation from Microsoft.</p>
<p>Simplified code -</p>
<pre><code>from azure.synapse.artifacts import ArtifactsClient
from azure.synapse.artifacts.models import *
sf_ls_name = "testnotebookls"
sf_env_url = "https://login.salesforce.com"
sf_user = "<username>"
sf_pass = '<password>',
sf_security_token = '<security-token>',
properties=SalesforceLinkedService(environment_url=sf_env_url,
password=sf_pass,
username=sf_user,
security_token=sf_security_token)
client.linked_service.begin_create_or_update_linked_service(linked_service_name=sf_ls_name, properties=properties)
</code></pre>
<p>Please note that I've already created the <strong>client</strong> object and am able to get linked services and perform Synapse-related operations.</p>
<p><strong>Error</strong></p>
<pre><code>DeserializationError Traceback (most recent call last)
File c:\Users\Someuser\OneDrive - Company\Desktop\folder\folder\salesforce\.venv\lib\site-packages\azure\synapse\artifacts\_serialization.py:710, in Serializer.body(self, data, data_type, **kwargs)
705 deserializer.key_extractors = [
706 rest_key_case_insensitive_extractor,
707 attribute_key_case_insensitive_extractor,
708 last_rest_key_case_insensitive_extractor,
...
1257 found_key = key
1258 break
-> 1260 return data.get(found_key)
SerializationError: (', DeserializationError: (", AttributeError: \'tuple\' object has no attribute \'get\'", \'Unable to deserialize to object: type\', AttributeError("\'tuple\' object has no attribute \'get\'"))', 'Unable to build a model: (", AttributeError: \'tuple\' object has no attribute \'get\'", \'Unable to deserialize to object: type\', AttributeError("\'tuple\' object has no attribute \'get\'"))', DeserializationError(", AttributeError: 'tuple' object has no attribute 'get'", 'Unable to deserialize to object: type', AttributeError("'tuple' object has no attribute 'get'")))
</code></pre>
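<p>One thing I noticed while staring at the traceback: it complains about <code>tuple</code> objects, and in Python a stray trailing comma silently turns an assignment into a one-element tuple:</p>

```python
# A stray trailing comma turns a plain string into a 1-tuple.
sf_user = '<username>'
sf_pass = '<password>',            # note the trailing comma
print(type(sf_user).__name__)      # str
print(type(sf_pass).__name__)      # tuple
```

<p>Is that what the deserializer is choking on, or is the linked-service payload itself wrong?</p>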
|
<python><azure><azure-synapse>
|
2023-06-06 07:43:06
| 1
| 532
|
Indrajeet Singh
|
76,412,385
| 21,691,539
|
import_module on relative path difference between python 3.5 and 3.7
|
<p>I'm trying to implement a plugin system in Python.<br>
Each plugin exposes exactly the same functions, and I want to call one or another, for instance via a simple id.<br>
For instance, the plugins could be image loaders, each handling a different format (<em>jpeg</em>, <em>bmp</em>, ...).<br>
The plugins all live inside a special directory tree, and I try to import them with <em>relative</em> syntax.</p>
<p>My issue is that Python 3.5 and Python 3.7 seem to behave differently.</p>
<p>Here is my directory tree</p>
<pre><code>root_dir
|
--main.py
|
--plugin_mod1
| |
| --extend.py
...
|
--plugin_modN
| |
| --extend.py
...
</code></pre>
<p>here is my "load plugins" function (in <code>main.py</code>)</p>
<pre class="lang-py prettyprint-override"><code># loads plugins with all the same interface
def LoadPlugin():
global root_dir
Extensions = glob.glob(root_dir+'/plugin_*')
# ExtMods will store the retrieved module
ExtMods = {}
sys.path.insert(0, root_dir)
for Ext in Extensions:
if os.path.exists(Ext+'/extend.py'):
# Python 3.7+
# tmp = importlib.import_module(
# '.extend', os.path.basename(os.path.normpath(Ext)))
# Python 3.5
sys.path.insert(0, Ext)
tmp = importlib.import_module('extend')
# bind some id to the found module
ExtMods[tmp.PluginId()] = tmp
return ExtMods
</code></pre>
<p>The mechanism is used like this in <code>main.py</code></p>
<pre class="lang-py prettyprint-override"><code>...
#loading all plugins found in root_dir
PluginManager = LoadPlugin()
...
# run DoSomething for plugin <some ID>
PlunginManager[<some ID>].DoSomething()
...
</code></pre>
<p>If I use the code commented as <code>Python 3.7+</code> with <code>Python 3.7</code> (on Windows), I get the expected result.
If I use the code commented as <code>Python 3.7+</code> with <code>Python 3.5</code> (on Ubuntu), I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "<...>/main.py", line <xxx>, in <module>
PluginManager = LoadPlugin()
File "<...>/main.py", line <xxx>, in LoadPlugin
'.extend', os.path.basename(os.path.normpath(Ext)))
File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 981, in _gcd_import
File "<frozen importlib._bootstrap>", line 931, in _sanity_check
SystemError: Parent module 'plugin_mod1' not loaded, cannot perform relative import
</code></pre>
<p>I looked the suggested post <a href="https://stackoverflow.com/questions/72395188/how-to-dynamically-import-module-from-a-relative-path-with-python-importlib">How to dynamically import_module from a relative path with python importlib</a> but I failed to understand how it can apply to my situation.</p>
<p>Thus my questions are:</p>
<ol>
<li>Why is there a difference between the two Python versions? (I
failed to find an explanation in the Python documentation.)</li>
<li>what would be the correct way to achieve my expected result with the same
code on different python version (at least from 3.5)?</li>
<li>is there a way to avoid the <code>sys.path.insert(0, <something>)</code> (especially
in the 3.5 version that implies, now, to add each plugin path, even
if temporarily)?</li>
</ol>
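<p>For completeness, I also experimented with loading each plugin by file path via <code>importlib.util</code>, which seems to sidestep package-relative imports entirely; a self-contained sketch that fakes the directory tree (I'm unsure whether this is idiomatic):</p>

```python
import importlib.util
import os
import tempfile

# Fake the plugin layout: root_dir/plugin_mod1/extend.py
root_dir = tempfile.mkdtemp()
mod_dir = os.path.join(root_dir, 'plugin_mod1')
os.makedirs(mod_dir)
with open(os.path.join(mod_dir, 'extend.py'), 'w') as f:
    f.write("def PluginId():\n    return 'jpeg'\n")

def load_plugin(ext_dir):
    # Load extend.py by absolute path; no sys.path manipulation needed.
    name = os.path.basename(os.path.normpath(ext_dir)) + '.extend'
    spec = importlib.util.spec_from_file_location(
        name, os.path.join(ext_dir, 'extend.py'))
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

mod = load_plugin(mod_dir)
print(mod.PluginId())  # jpeg
```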
|
<python><python-3.x><python-import>
|
2023-06-06 07:38:40
| 0
| 3,479
|
Oersted
|
76,412,365
| 499,721
|
Configuring request model per endpoint in FastAPI
|
<p>I have a <a href="https://fastapi.tiangolo.com/" rel="nofollow noreferrer">FastAPI</a> application, in which several endpoints require the same input model, but in each, some attributes may be optional while others are required. For example:</p>
<pre><code># file: app/schemas/models.py
from pydantic import BaseModel
class GenericRequestModel(BaseModel):
id: UUID = None # required by all endpoints
attr1: str = None # required by endpoint 1 and 2, optional for 3
    attr2: bool = None   # required by 2, optional for 1, 3
attr3: int = None # optional by all
# file: app/api/endpoints.py
from fastapi import APIRouter
router = APIRouter(prefix='/api')
@router.post('/endpoint-1')
def endpoint_1(params: GenericRequestModel) -> ResponseModel:
return calc_response_using_all_attrs(params)
@router.post('/endpoint-2')
def endpoint_2(params: GenericRequestModel) -> ResponseModel:
    return calc_response_using_attrs_1_and_2(params)
@router.post('/endpoint-3')
def endpoint_3(params: GenericRequestModel) -> ResponseModel:
return calc_generic_response(params)
</code></pre>
<p>Is it possible to configure which of <code>GenericRequestModel</code>'s properties are required per endpoint, without deriving a new request model for each endpoint? If not, what is the most elegant solution?</p>
<p><strong>EDIT</strong>
For completeness, here's the rationale behind my question. Assume you have many endpoints (say 50) and many attributes (say 100). Each endpoint may do complex things with the attributes, and some endpoints can cope with missing data. Obviously, I don't want to create 50 different models.</p>
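<p>To make the scale concrete, here is the kind of per-endpoint check I would otherwise hand-write fifty times; a plain-Python sketch (the field names and endpoint requirements are made up for illustration):</p>

```python
# Hypothetical per-endpoint requirements, keyed by endpoint name.
REQUIRED = {
    'endpoint-1': {'id', 'attr1'},
    'endpoint-2': {'id', 'attr1', 'attr2'},
    'endpoint-3': {'id'},
}

def check_required(endpoint: str, params: dict) -> list:
    """Return the names of required fields that are missing or None."""
    return sorted(f for f in REQUIRED[endpoint] if params.get(f) is None)

print(check_required('endpoint-2', {'id': 'abc', 'attr1': 'x'}))  # ['attr2']
```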
|
<python><fastapi><pydantic>
|
2023-06-06 07:35:37
| 2
| 11,117
|
bavaza
|
76,412,079
| 2,573,075
|
Publish Azure functions (model v2) don't set triggers (functions)?
|
<p>I searched Google, Bing Chat, and Stack Overflow but didn't find a solution to my issue.</p>
<p>I have the following environment:</p>
<p>IDE: PyCharm; CLI: Microsoft az/func (running fine). I initialized the project locally with <code>func init</code> and created the function app in Azure Functions. The settings in local.settings.json are the same as the ones online. The application files are currently published. The venv is shared by the entire project, but I put a separate requirements.txt in the function folder.</p>
<p>The function locally is running nicely with <code>func start</code></p>
<p>But the function published with <code><function folder>\func azure functionapp publish ...</code> has no triggers/functions.</p>
<p>My function code looks like this:</p>
<pre><code>import azure.functions as func
import logging
from py_pdf import PdfOcrReader
app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)
@app.function_name(name="healthcheck")
@app.route(route="healthcheck")
def healthcheck(req: func.HttpRequest) -> func.HttpResponse:
return func.HttpResponse("Status OK", status_code=200)
@app.function_name(name="pypdf")
@app.route(route="pypdf")
def pypdf_api(req: func.HttpRequest) -> func.HttpResponse:
file = req.files.get('file')
if not file:
return func.HttpResponse("No file found in the request.", status_code=400)
# process file
pdf_obj = PdfOcrReader(file)
json_out = {
"pdf_text": pdf_obj.pdf_text,
"ocr_text": pdf_obj.ocr_computer_vision()
}
json_out.update(pdf_obj.metadata)
return func.HttpResponse(
str(json_out), status_code=200
)
</code></pre>
<p>The host.json is looking like this:</p>
<pre><code>{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[4.*, 5.0.0)"
}
}
</code></pre>
<p>P.S.: The log output of <code>func publish</code> looks like this:</p>
<pre><code>WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
WARNING: You are using pip version 21.2.4; however, version 23.1.2 is available.
You should consider upgrading via the '/tmp/oryx/platforms/python/3.10.4/bin/python3.10 -m pip install --upgrade pip' command.
Not a vso image, so not writing build commands
Preparing output...
Copying files to destination directory '/home/site/wwwroot'...
Done in 0 sec(s).
Removing existing manifest file
Creating a manifest file...
Manifest file created.
Copying .ostype to manifest output directory.
Done in 8 sec(s).
Running post deployment command(s)...
Generating summary of Oryx build
Deployment Log file does not exist in /tmp/oryx-build.log
The logfile at /tmp/oryx-build.log is empty. Unable to fetch the summary of build
Triggering recycle (preview mode disabled).
Linux Consumption plan has a 1.5 GB memory limit on a remote build container.
To check our service limit, please visit https://docs.microsoft.com/en-us/azure/azure-functions/functions-scale#service-limits
Writing the artifacts to a squashfs file
Parallel mksquashfs: Using 1 processor
Creating 4.0 filesystem on /home/site/artifacts/functionappartifact.squashfs, block size 131072.
[=============================================================-] 1250/1250 100%
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
compressed data, compressed metadata, compressed fragments,
compressed xattrs, compressed ids
duplicates are removed
Filesystem size 40794.51 Kbytes (39.84 Mbytes)
74.13% of uncompressed filesystem size (55031.18 Kbytes)
Inode table size 10917 bytes (10.66 Kbytes)
31.05% of uncompressed inode table size (35154 bytes)
Directory table size 9962 bytes (9.73 Kbytes)
35.27% of uncompressed directory table size (28244 bytes)
root (0)
Creating placeholder blob for linux consumption function app...
SCM_RUN_FROM_PACKAGE placeholder blob scm-latest-....zip located
Uploading built content /home/site/artifacts/functionappartifact.squashfs for linux consumption function app...
Resetting all workers for ....azurewebsites.net
Deployment successful. deployer = Push-Deployer deploymentPath = Functions App ZipDeploy. Extract zip. Remote build.
Remote build succeeded!
Syncing triggers...
Functions in ....:
(venv) PS C:\Users\civan\PycharmProjects\....\....>
</code></pre>
<p>I don't know if it's relevant, but this is the log stream from when I published a change to host.json:</p>
<pre><code>Connected!
2023-06-06T07:57:50Z [Verbose] Received request to drain the host
2023-06-06T07:57:50Z [Information] DrainMode mode enabled
2023-06-06T07:57:50Z [Information] Calling StopAsync on the registered listeners
2023-06-06T07:57:50Z [Information] Call to StopAsync complete, registered listeners are now stopped
2023-06-06T07:57:50Z [Verbose] Received request to drain the host
2023-06-06T07:58:02Z [Information] File change of type 'Changed' detected for 'C:\Users\civan\PycharmProjects\...\...\host.json'
2023-06-06T07:58:02Z [Information] Host configuration has changed. Signaling restart
2023-06-06T07:58:02Z [Information] File change of type 'Changed' detected for 'C:\Users\civan\PycharmProjects\...\...\host.json'
2023-06-06T07:58:02Z [Information] Host configuration has changed. Signaling restart
2023-06-06T07:58:10Z [Information] Host lock lease acquired by instance ID '00000000000000000000000021706BBA'.
2023-06-06T07:58:10Z [Verbose] Initiating background SyncTriggers operation
2023-06-06T07:58:10Z [Information] Loading functions metadata
2023-06-06T07:58:10Z [Information] Reading functions metadata
2023-06-06T07:58:10Z [Information] 1 functions found
2023-06-06T07:58:10Z [Information] 0 functions loaded
2023-06-06T07:58:10Z [Information] Loading functions metadata
2023-06-06T07:58:10Z [Information] Reading functions metadata
2023-06-06T07:58:10Z [Information] 1 functions found
2023-06-06T07:58:10Z [Information] 0 functions loaded
2023-06-06T07:58:14Z [Verbose] Received request to drain the host
2023-06-06T07:58:14Z [Information] DrainMode mode enabled
2023-06-06T07:58:14Z [Information] Calling StopAsync on the registered listeners
2023-06-06T07:58:14Z [Information] Call to StopAsync complete, registered listeners are now stopped
2023-06-06T07:59:09Z [Verbose] Received request to drain the host
2023-06-06T07:59:09Z [Information] DrainMode mode enabled
2023-06-06T07:59:09Z [Information] Calling StopAsync on the registered listeners
2023-06-06T07:59:09Z [Information] Call to StopAsync complete, registered listeners are now stopped
2023-06-06T07:59:09Z [Verbose] Received request to drain the host
2023-06-06T07:59:09Z [Information] DrainMode mode enabled
2023-06-06T07:59:09Z [Information] Calling StopAsync on the registered listeners
2023-06-06T07:59:09Z [Information] Call to StopAsync complete, registered listeners are now stopped
2023-06-06T07:59:54Z [Verbose] Received request to drain the host
2023-06-06T07:59:54Z [Information] DrainMode mode enabled
2023-06-06T07:59:54Z [Information] Calling StopAsync on the registered listeners
2023-06-06T07:59:54Z [Information] Call to StopAsync complete, registered listeners are now stopped
2023-06-06T08:00:26Z [Information] Host lock lease acquired by instance ID '0000000000000000000000008EA10CF8'.
2023-06-06T08:00:55Z [Information] Host Status: {
"id": "ocroperations",
"state": "Running",
"version": "4.21.3.3",
"versionDetails": "4.21.3+2e42e3beb40b89d4f5d3dd962f3a5d420d376d71",
"platformVersion": "",
"instanceId": "54108609-638216349727465766",
"computerName": "",
"processUptime": 267075,
"functionAppContentEditingState": "NotAllowed",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "4.5.0"
}
}
</code></pre>
|
<python><azure><webdeploy>
|
2023-06-06 06:49:17
| 1
| 633
|
Claudiu
|
76,412,067
| 9,031,512
|
How to sample 5 index and their probabilities from a tensorflow tensor of probability distribution?
|
<p>I have a probability distribution (after applying Softmax) where the values in each row sum to 1:</p>
<pre><code>probs = tf.constant([
[0.0, 0.1, 0.2, 0.3, 0.4],
[0.5, 0.3, 0.2, 0.0, 0.0]])
</code></pre>
<p>I want to sample k indices per row, along with their respective probability values, using TensorFlow operations.</p>
<p>The expected output for k = 3:</p>
<pre><code>index: [
[4, 3, 4],
[0, 1, 0]
]
probs: [
[0.4, 0.3, 0.4],
[0.5, 0.3, 0.5]
]
</code></pre>
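<p>For clarity, here is the behaviour I want written with NumPy (illustration only; I need the equivalent in TensorFlow ops):</p>

```python
import numpy as np

probs = np.array([
    [0.0, 0.1, 0.2, 0.3, 0.4],
    [0.5, 0.3, 0.2, 0.0, 0.0]])
k = 3

rng = np.random.default_rng(0)
# Sample k indices per row according to that row's distribution,
# then gather the matching probabilities.
index = np.stack([rng.choice(probs.shape[1], size=k, p=row) for row in probs])
picked = np.take_along_axis(probs, index, axis=1)
print(index)
print(picked)
```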
<p>How can I achieve this?</p>
|
<python><tensorflow><tensorflow2.0>
|
2023-06-06 06:45:59
| 1
| 757
|
n0obcoder
|
76,412,042
| 4,537,160
|
Python - update the same list and dict using the output of different functions, no code repetition
|
<p>I have a list and a dict, whose values I'm updating in different parts of the code using a structure like this:</p>
<pre><code>final_list = []
final_dict = {}
for iter_1 in iterables_1:
if condition_1:
ret_1, ret_2 = my_func(...)
if ret_1 is not None and ret_2 is not None:
final_list.append(ret_1)
final_dict[iter] = ret_2
elif condition_2:
for iter_2 in iterables_1:
for iter_3 in iterables_3:
ret_1, ret_2 = my_func(...)
if ret_1 is not None and ret_2 is not None:
final_list.append(ret_1)
final_dict[iter] = ret_2
</code></pre>
<p>Now, everything works, but the lines:</p>
<pre><code>ret_1, ret_2 = my_func(...)
if ret_1 is not None and ret_2 is not None:
final_list.append(ret_1)
final_dict[iter] = ret_2
</code></pre>
<p>are repeated (this is an oversimplification, the real code is much longer and this situation occurs more often).<br />
In a case like this, how can I avoid this repetition?</p>
<p>EDIT:
I forgot to mention: one option would be to move the repeated lines into my_func. However, unless I'm missing something, this would require having final_list and final_dict as both arguments and return values of my_func, like this:</p>
<pre><code>def my_func(final_list, final_dict, ...):
# calculate ret_1, ret_2
if ret_1 is not None and ret_2 is not None:
final_list.append(ret_1)
final_dict[iter] = ret_2
return final_list, final_dict
final_list = []
final_dict = {}
for iter_1 in iterables_1:
if condition_1:
final_list, final_dict = my_func(final_list, final_dict, ...)
elif condition_2:
for iter_2 in iterables_1:
for iter_3 in iterables_3:
final_list, final_dict = my_func(final_list, final_dict, ...)
</code></pre>
<p>But I'm not sure this is good practice either.</p>
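<p>A third variant I'm weighing: since lists and dicts are mutable, my_func could mutate them in place without returning them at all (self-contained sketch; dummy logic stands in for the real calculation):</p>

```python
def my_func(final_list, final_dict, key, value):
    ret_1, ret_2 = value * 2, value + 1        # placeholder computation
    if ret_1 is not None and ret_2 is not None:
        final_list.append(ret_1)               # mutates the caller's list
        final_dict[key] = ret_2                # mutates the caller's dict

final_list = []
final_dict = {}
for key, value in [('a', 1), ('b', 2)]:
    my_func(final_list, final_dict, key, value)

print(final_list)   # [2, 4]
print(final_dict)   # {'a': 2, 'b': 3}
```

<p>Is mutating arguments like this considered acceptable here, or is it frowned upon for the same reasons?</p>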
|
<python><design-patterns>
|
2023-06-06 06:41:23
| 1
| 1,630
|
Carlo
|
76,412,036
| 2,380,115
|
Second command on docker-compose is not executed
|
<p>I have the following docker-compose file, and in the command part I have two commands: the first upgrades pip3 and the second installs the requirements of the Python project.</p>
<pre><code>version: '3'
services:
  remote_interpreter:
    image: amazon/aws-glue-libs:glue_libs_3.0.0_image_01
    entrypoint: ""
    command: "pip3 install --upgrade pip && pip3 install -r ./tests/requirements.txt"
</code></pre>
<p>However, when I run <code>docker-compose -f ./my-file.yaml up</code>, I don't see the packages from requirements.txt being installed.</p>
<p>Also, how can I keep the container built from this docker-compose file running, so that I can check whether the needed packages are really installed?</p>
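<p>For what it's worth, a <code>command:</code> given as a single string is split into arguments for the image's entrypoint — the <code>&&</code> is never interpreted by a shell, so the second command never runs. A common fix (a sketch, assuming the image ships <code>sh</code>) is to run the chain through a shell; a trailing long-running command also keeps the container alive for inspection:</p>

```yaml
version: '3'
services:
  remote_interpreter:
    image: amazon/aws-glue-libs:glue_libs_3.0.0_image_01
    entrypoint: ""
    # sh -c makes && meaningful; tail -f /dev/null keeps the container running
    command: sh -c "pip3 install --upgrade pip && pip3 install -r ./tests/requirements.txt && tail -f /dev/null"
```

<p>With the container kept alive, <code>docker-compose exec remote_interpreter pip3 list</code> can confirm whether the packages were installed.</p>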
|
<python><docker-compose>
|
2023-06-06 06:39:59
| 1
| 8,538
|
Jeff
|
76,411,978
| 5,637,881
|
ace python mode single letter local variables why does not prompt autocomplete
|
<p><a href="https://i.sstatic.net/1Petu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1Petu.png" alt="enter image description here" /></a>
The variables n and nn, a and aa are defined as shown in the figure. Why does the editor only suggest nn and aa as local completions when I type a or n below, and why are n and a not suggested?</p>
<p>Is there a way to prompt n and a for autocomplete at the same time when you type n or a?</p>
|
<javascript><python><ace-editor>
|
2023-06-06 06:31:20
| 0
| 1,539
|
ChenLee
|
76,411,887
| 1,335,340
|
Custom DjangoModelPermissions not being applied
|
<p>I have a Django app that allows a user to be a member of multiple Organizations.</p>
<pre><code>from django.conf import settings
from django.db import models
from django.utils.translation import gettext_lazy as _

from app.shared.models import BaseModel


class Organization(BaseModel):
    name = models.CharField(max_length=256)


class OrganizationMember(BaseModel):
    ROLE_OPTIONS = (
        (OrganizationMemberRole.admin.value, _('Admin')),
        (OrganizationMemberRole.editor.value, _('Editor')),
        (OrganizationMemberRole.view.value, _('View Only')),
    )

    organization = models.ForeignKey(
        Organization,
        related_name='members',
        on_delete=models.DO_NOTHING,
    )
    user = models.ForeignKey(
        settings.AUTH_USER_MODEL,
        related_name='user_role',
        on_delete=models.DO_NOTHING,
    )
</code></pre>
<p>I've also got a custom user model with a property that generates a list of Organization IDs for that user</p>
<pre><code>from django.contrib.auth.models import AbstractUser
from django.db import models

from .managers import CustomUserManager


class User(AbstractUser):
    username = None
    email = models.EmailField(max_length=254, unique=True)

    USERNAME_FIELD = 'email'
    REQUIRED_FIELDS = []

    objects = CustomUserManager()

    def __str__(self):
        return self.email

    @property
    def user_organization_ids(self):
        """
        Returns a list of Organization IDs. Converts UUID to str
        """
        user_orgs_qs = self.user_role.all().values_list('organization_id', flat=True)
        return [str(i) for i in user_orgs_qs]
</code></pre>
<p>There are other models in the database that will have a relationship with an Organization. For example, Widget</p>
<pre><code>from django.db import models

from app.shared.models import BaseModel


class Widget(BaseModel):
    organization = models.ForeignKey(Organization, on_delete=models.DO_NOTHING)
</code></pre>
<p>I want to restrict CRUD operations based on OrganizationMember relationships.</p>
<p>So, a user should only be able to perform CRUD operations on Widget if they are a member of the Organization.</p>
<p>I've created</p>
<pre><code>from rest_framework.permissions import DjangoModelPermissions


class OrganizationPermission(DjangoModelPermissions):
    def has_object_permission(self, request, view, obj):
        return hasattr(obj, 'organization_id') and obj.organization_id is not None and obj.organization_id in request.user.user_organization_ids
</code></pre>
<p>and my views:</p>
<pre><code>from rest_framework import viewsets
from rest_framework.permissions import IsAuthenticated

from .models import Widget
from .serializers import WidgetSerializer
from app.shared.permissions import OrganizationPermission


class WidgetViewSet(viewsets.ModelViewSet):
    serializer_class = WidgetSerializer
    permission_classes = (OrganizationPermission, IsAuthenticated)

    def get_queryset(self):
        return Widget.objects.filter(organization_id=self.kwargs['organization_id'])
</code></pre>
<p>but this does not seem to work; in my tests I can see Widgets in organizations that my user does not belong to, and no errors are returned.</p>
<p>Any suggestions on how to modify my DjangoModelPermissions subclass?</p>
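<p>Worth noting: DRF only calls <code>has_object_permission</code> for detail (single-object) actions, via <code>get_object()</code>; list endpoints are never filtered by it, which matches seeing other organizations' Widgets in list results. The usual fix is to also restrict the queryset by membership, e.g. filtering on the <code>user_organization_ids</code> property shown above. A framework-free sketch of that filtering idea (all names here are placeholders):</p>

```python
# Plain-Python stand-in for restricting a list "queryset" to the user's organizations.
widgets = [
    {"id": 1, "organization_id": "org-a"},
    {"id": 2, "organization_id": "org-b"},
]
user_organization_ids = ["org-a"]  # would come from request.user in the real view

visible = [w for w in widgets if w["organization_id"] in user_organization_ids]
print(visible)  # [{'id': 1, 'organization_id': 'org-a'}]
```

<p>In the viewset, the equivalent would be something like <code>Widget.objects.filter(organization_id__in=self.request.user.user_organization_ids)</code> inside <code>get_queryset</code>.</p>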
|
<python><django>
|
2023-06-06 06:16:31
| 1
| 917
|
Joe F.
|
76,411,640
| 6,017,833
|
How to update group in df after iteration
|
<p>I am trying to apply an operation to each data frame group, and then update the corresponding rows in the original data frame. However, the new values are not being inserted at the correct locations. Effectively, I am trying to <code>diff</code> within each group.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

mydf = pd.read_csv("my.csv")
mydf["forwardDelta"] = np.nan

for name, group in mydf.groupby(["a", "b"]):
    group["forwardDelta"] = group["c"] - group["c"].shift(1)
    for index, row in group.iterrows():
        mydf.iloc[index, 4] = row["forwardDelta"]
</code></pre>
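<p>One likely culprit: the group's <code>index</code> values are the original frame's labels, while <code>iloc</code> is positional, so the write-back lands on the wrong rows whenever labels and positions differ. The per-group difference can be computed in one index-aligned, vectorized step with <code>groupby(...).diff()</code> — a sketch with made-up columns <code>a</code>, <code>b</code>, <code>c</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "a": [1, 1, 2, 2],
    "b": ["x", "x", "y", "y"],
    "c": [10, 13, 5, 9],
})

# difference to the previous row within each (a, b) group, aligned on df's index
df["forwardDelta"] = df.groupby(["a", "b"])["c"].diff()
print(df["forwardDelta"].tolist())  # [nan, 3.0, nan, 4.0]
```

<p>This removes both loops and the positional write entirely.</p>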
|
<python><pandas><dataframe>
|
2023-06-06 05:25:48
| 1
| 1,945
|
Harry Stuart
|
76,411,581
| 1,008,636
|
Super()__init__ on multiple parent classes for Python
|
<pre><code>class Base1:
    def __init__(self):
        super().__init__()
        print("Base1 __init__")


class Base2:
    def __init__(self):
        print("Base2 __init__")


class Derived(Base1, Base2):
    pass


# Create an instance of the Derived class
d = Derived()
</code></pre>
<p>This outputs:</p>
<pre><code>Base2 __init__
Base1 __init__
</code></pre>
<p>What principle or mechanism makes it print <code>Base2 __init__</code> first?</p>
<p>Isn't the MRO on Derived supposed to be Base1 -> Base2, and thus shouldn't only Base1's <code>__init__</code> be run?</p>
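<p><code>super()</code> does not mean "my base class": it dispatches to the <em>next</em> class after the current one in the instance's MRO. Derived's MRO is Derived → Base1 → Base2 → object, so <code>super().__init__()</code> inside <code>Base1.__init__</code> reaches <code>Base2.__init__</code>, which prints before <code>Base1</code>'s own line. The MRO can be inspected directly:</p>

```python
class Base1:
    def __init__(self):
        super().__init__()  # follows the MRO: the next class after Base1 is Base2
        print("Base1 __init__")

class Base2:
    def __init__(self):
        print("Base2 __init__")

class Derived(Base1, Base2):
    pass

print([c.__name__ for c in Derived.__mro__])  # ['Derived', 'Base1', 'Base2', 'object']
```

<p>So Base2's <code>__init__</code> runs (and prints) first, then control returns to Base1's.</p>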
|
<python><python-3.x>
|
2023-06-06 05:09:06
| 1
| 3,245
|
user1008636
|
76,411,526
| 5,306,861
|
Understanding mel-scaled spectrogram for a simple sine wave
|
<p>I generate a simple sine wave with a frequency of 100 and calculate an FFT to check that the obtained frequency is correct.</p>
<p>Then I calculate the <code>melspectrogram</code>, but I do not understand what its output means. Where do I see the frequency 100 in this output? Why is the yellow bar located around the 25th bin?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import scipy.fft
import librosa


def generate_sine_wave(freq, sample_rate, duration) -> tuple[np.ndarray, np.ndarray]:
    x = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    frequencies = x * freq
    # 2pi because np.sin takes radians
    y = np.sin(2 * np.pi * frequencies)
    return x, y


sample_rate = 1024
freq = 100
x, y = generate_sine_wave(freq, sample_rate, 2)
plt.figure(figsize=(10, 4))
plt.plot(x, y)
plt.grid(True)

fft = scipy.fft.fft(y)
fft = fft[0 : len(fft) // 2]
fft = np.abs(fft)
xs = np.linspace(0, sample_rate // 2, len(fft))
plt.figure(figsize=(15, 4))
plt.plot(xs, fft)
plt.grid(True)

melsp = librosa.feature.melspectrogram(sr=sample_rate, y=y)
melsp = melsp.T
plt.matshow(melsp)
plt.title('melspectrogram')

max_val = np.max(melsp)  # avoid shadowing the builtin max
print('melsp.shape =', melsp.shape)
print('melsp max =', max_val)
</code></pre>
<p><a href="https://i.sstatic.net/CFHBl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CFHBl.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/WlgqT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WlgqT.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/C1o2L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C1o2L.png" alt="enter image description here" /></a></p>
<p>If I change the frequency to 200, <code>melspectrogram</code> it gives me this:</p>
<p><a href="https://i.sstatic.net/lTlAU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lTlAU.png" alt="enter image description here" /></a></p>
<p>Why is the yellow bar in the 50 area?</p>
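<p>The melspectrogram's second axis is mel <em>bands</em>, not Hz. Assuming librosa's defaults (<code>n_mels=128</code> bands spanning 0 Hz to <code>fmax = sr/2 = 512</code> Hz, Slaney-style mel scale, which is linear below 1 kHz — and this whole range is below 1 kHz), the band centers are roughly evenly spaced in Hz, so one can predict which band a pure tone lands in:</p>

```python
import numpy as np

# With n_mels + 2 evenly spaced mel points over 0..fmax, band k's center
# is approximately k * fmax / (n_mels + 1) Hz (all-linear region of the Slaney scale).
sr, n_mels = 1024, 128
fmax = sr / 2
centers = np.arange(1, n_mels + 1) * fmax / (n_mels + 1)

band_100 = int(np.argmin(np.abs(centers - 100)))  # band holding a 100 Hz tone
band_200 = int(np.argmin(np.abs(centers - 200)))  # band holding a 200 Hz tone
print(band_100, band_200)  # 24 49 — matching the yellow bars near bins 25 and 50
```

<p>So the tone's energy is where it should be; to label the axis in Hz, <code>librosa.display.specshow(..., y_axis='mel')</code> on the untransposed spectrogram does the conversion.</p>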
|
<python><signal-processing><librosa><audio-processing><spectrum>
|
2023-06-06 04:51:20
| 1
| 1,839
|
codeDom
|
76,411,271
| 1,453,157
|
Python Selenium WebDriverWait two conditions
|
<p>After a page is loaded, there will be two outcomes, either one of the following div class will be present.</p>
<ol>
<li><code>css-can-click</code></li>
<li><code>css-donot-click</code></li>
</ol>
<p>If <code>css-can-click</code> is present, then I want to click on that <code>div</code></p>
<p>If <code>css-donot-click</code> is present, then I will not click on the <code>div</code>, and will just skip the rest of the codes.</p>
<p>I am trying to use WebDriverWait to wait until either div class is present.</p>
<p>The following is a partial of my codes:</p>
<pre><code>driver.get(theURL)
WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.XPATH, '//div[@class="css-can-click"]'))).click()
</code></pre>
<p>Until here, the code works well. But now, how do I put in the condition where the class <code>css-donot-click</code> is present?</p>
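<p>Selenium 4 has <code>expected_conditions.any_of</code> for exactly this: the wait resolves as soon as either locator matches. Conceptually it just polls a list of predicates; a framework-free sketch of that either-or wait (the <code>page</code> dict is a stand-in for the DOM):</p>

```python
import time

def wait_until_any(predicates, timeout=5.0, poll=0.1):
    """Poll until any predicate returns True; return its index, or None on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for i, pred in enumerate(predicates):
            if pred():
                return i
        time.sleep(poll)
    return None

# stand-in for the loaded DOM: only the "do not click" div is present
page = {"css-can-click": False, "css-donot-click": True}
winner = wait_until_any([
    lambda: page["css-can-click"],
    lambda: page["css-donot-click"],
])
print("click" if winner == 0 else "skip")  # skip
```

<p>With real Selenium, the same shape is <code>WebDriverWait(driver, 30).until(EC.any_of(EC.presence_of_element_located(loc1), EC.presence_of_element_located(loc2)))</code>, after which checking <code>driver.find_elements(By.XPATH, '//div[@class="css-can-click"]')</code> for a non-empty result decides whether to click.</p>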
|
<python><selenium-webdriver>
|
2023-06-06 03:27:28
| 1
| 503
|
J K
|
76,411,252
| 12,200,808
|
How to remove versions with sed
|
<p>The following Python metadata file contains these required libraries:</p>
<pre><code>Requires-Python: >=3
Requires-Dist: hello (~=0.1.1)
Requires-Dist: bar (~=1.14)
Requires-Dist: hello-world (~=1.10)
Requires-Dist: test (~=1.1) ; python_version < "3.4"
Requires-Dist: test-bar
...
</code></pre>
<p>How to use a single <code>sed</code> bash command to search and replace all versions from all required libraries?</p>
<p><strong>Here is the expected output:</strong></p>
<pre><code>Requires-Python: >=3
Requires-Dist: hello
Requires-Dist: bar
Requires-Dist: hello-world
Requires-Dist: test
Requires-Dist: test-bar
...
</code></pre>
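<p>One way is to keep everything up to the distribution name and drop the rest. A sketch (widen the bracket expression if package names can carry extras like <code>[dev]</code>); sample input is piped straight into <code>sed</code>, and for a real file <code>sed -E -i '…' METADATA</code> edits in place:</p>

```shell
printf '%s\n' \
  'Requires-Python: >=3' \
  'Requires-Dist: hello (~=0.1.1)' \
  'Requires-Dist: test (~=1.1) ; python_version < "3.4"' \
  'Requires-Dist: test-bar' |
sed -E 's/^(Requires-Dist: [[:alnum:]._-]+).*/\1/'
```

<p>Lines that don't start with <code>Requires-Dist:</code> (such as <code>Requires-Python: >=3</code>) pass through unchanged, and names like <code>test-bar</code> with no version suffix are left intact.</p>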
|
<python><bash><ubuntu><sed>
|
2023-06-06 03:20:57
| 1
| 1,900
|
stackbiz
|
76,410,994
| 12,603,110
|
Striding in numpy and pytorch, How force writing to an array or "input tensor and the written-to tensor refer to a single memory location"?
|
<p>Experimenting with both numpy arrays and pytorch tensors I have came across a difference in an attempt to "do black magic" related to time series that rely on previous time steps</p>
<p>A key feature of the numpy-esque libraries is that they often don't copy data but manipulate it through different views. One of the most interesting tools for that in numpy is the stride_tricks functions.</p>
<p>As a simpler problem I was trying to make an "in-place cumsum", but I stumbled upon the fact that numpy doesn't error out on code that is seemingly equivalent to the pytorch version that does. I don't really need an in-place cumsum implementation, but rather an in-place broadcasting operation that uses data calculated earlier in the same broadcast.</p>
<p>So my questions are: how can I force pytorch to allow the sum, and how can I make numpy perform the sum without copying my data?</p>
<pre><code>import numpy as np
# array = np.arange(6.)
array = np.arange(6.).reshape(2,3) + 3
# array([[3., 4., 5.],
# [6., 7., 8.]])
s = np.lib.stride_tricks.as_strided(array, shape=(1,3)) # array([[3., 4., 5.]])
array += s
array
# array([[ 6., 8., 10.],
# [ 9., 11., 13.]])
</code></pre>
<pre><code>import torch
t = torch.arange(6.).reshape(2,3) + 3
# tensor([[3., 4., 5.],
# [6., 7., 8.]])
s = t.as_strided(size=(1,3),stride=(1,1)) # tensor([[3., 4., 5.]])
t += s
t
# RuntimeError Traceback (most recent call last)
# 3 # tensor([[3., 4., 5.],
# 4 # [6., 7., 8.]])
# 5 s = t.as_strided(size=(1,3),stride=(1,1)) # tensor([[3., 4., 5.]])
# ----> 6 t += s
# 7 t
# RuntimeError: unsupported operation: some elements of the input tensor and the written-to tensor refer to a single memory location. Please clone() the tensor before performing the operation.
</code></pre>
<p>Desired:</p>
<pre class="lang-py prettyprint-override"><code>#sum the first row to itself inplace
#sum the 2nd row with the inplace modified 1st row
array([[6., 8., 10.],
[12., 15., 18.]])
</code></pre>
|
<python><numpy><pytorch><sum>
|
2023-06-06 02:02:34
| 1
| 812
|
Yorai Levi
|
76,410,949
| 2,647,447
|
how to count failure occurrences in a column using pandas?
|
<p>I need to use Python's pandas to tabulate the test results from a CSV file. The result could be "passed" or sometimes "failed". After I <code>import pandas as pd</code>, my code is:</p>
<pre><code>df = pd.read_csv('myfile.csv')
pass_res = df['Status'].value_counts()['passed']
fail_res = df['Status'].value_counts()['failed']
</code></pre>
<p>This code works if there IS at least one failure. However, when there is no failure, the last line raises an error. How do I check for a failure first, so that the last line only runs when one exists?</p>
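<p><code>value_counts()</code> returns a Series, so <code>.get</code> with a default avoids the <code>KeyError</code> when a label is absent entirely. A minimal sketch with an all-passed frame standing in for the CSV:</p>

```python
import pandas as pd

df = pd.DataFrame({"Status": ["passed", "passed", "passed"]})  # no failures at all
counts = df["Status"].value_counts()

# .get returns the supplied default instead of raising when the label is missing
pass_res = counts.get("passed", 0)
fail_res = counts.get("failed", 0)
print(pass_res, fail_res)  # 3 0
```

<p>This keeps one code path for both the with-failures and no-failures cases.</p>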
|
<python><pandas>
|
2023-06-06 01:51:02
| 3
| 449
|
PChao
|
76,410,874
| 5,924,264
|
How to set the column of the last row of each group to the previous row's column
|
<p>I have a dataframe <code>df</code>, that has columns <code>id</code>, <code>time</code>, <code>start_quantity</code>, <code>end_quantity</code>. For each <code>id</code> and <code>time</code>, the <code>start_quantity</code> is equal to the <code>end_quantity</code> at the previous time.</p>
<p>Here's an example:</p>
<pre><code>id time start_quantity end_quantity
0 1 0 10
0 2 10 15
.....
23 1 55 87
23 2 87 90
.....
</code></pre>
<p>There's a degenerate case in <code>df</code>, where the last row of each <code>id</code> (note that the dataframe is pregrouped by <code>id</code>, and within each <code>id</code> group it is sorted in ascending order by <code>time</code>) has an incorrect <code>start_quantity</code>. The last row for each <code>id</code> is always <code>time = 10</code>.</p>
<p>For each of these rows, I would like to make the correction, but when I tried</p>
<pre><code>df.loc[df[df.time == 10], "start_quantity"] = df.loc[df[df.time == 9], "end_quantity"]
</code></pre>
<p>It makes the <code>start_quantity</code> for those rows <code>NaN</code>.</p>
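<p>The NaN comes from index alignment: assigning one Series to another via <code>.loc</code> matches rows by index label, and the time-9 rows have different labels than the time-10 rows, so nothing matches. Building the "previous row's <code>end_quantity</code>" with a per-id <code>shift</code> keeps the index aligned. A sketch with made-up numbers, where 999 marks the bad values:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id": [0, 0, 0, 23, 23, 23],
    "time": [8, 9, 10, 8, 9, 10],
    "start_quantity": [1.0, 10.0, 999.0, 50.0, 55.0, 999.0],  # 999 = bad values at time 10
    "end_quantity": [10, 15, 20, 55, 87, 90],
})

# previous row's end_quantity within each id, on the same index as df
prev_end = df.groupby("id")["end_quantity"].shift(1)
df.loc[df["time"] == 10, "start_quantity"] = prev_end
print(df["start_quantity"].tolist())  # [1.0, 10.0, 15.0, 50.0, 55.0, 87.0]
```

<p>Because <code>prev_end</code> shares <code>df</code>'s index, the masked assignment lands on the right rows.</p>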
|
<python><pandas><dataframe>
|
2023-06-06 01:19:42
| 1
| 2,502
|
roulette01
|
76,410,848
| 13,615,987
|
How Can I pass dictionary data as a input to the fast api strawberry mutation?
|
<p>My input data: <code>{"Nepal": "Kathmandu", "Italy": "Rome", "England": "London"}</code></p>
<p>Goal: I want to pass above data as a input to the strawberry mutation.</p>
<p>It seems that strawberry currently does not support the <code>dict</code> type. I found the <code>JSON</code> type, but if I want to use <code>JSON</code>, I need to pass the data like this: <code>"{\"Nepal\": \"Kathmandu\", \"Italy\": \"Rome\", \"England\": \"London\"}"</code></p>
<p>I don't want to escape the quotes with <code>\</code>. Is there an alternative way to handle this?</p>
|
<python><python-3.x><graphql><strawberry-graphql>
|
2023-06-06 01:09:53
| 2
| 659
|
siva
|
76,410,783
| 2,000,548
|
prefect-shell: Task run encountered an exception: RuntimeError: PID xxx failed with return code 6
|
<p>When I run <a href="https://github.com/rclone/rclone" rel="nofollow noreferrer">Rclone</a> to list the difference between a local folder and a S3 folder in shell directly, it succeed.</p>
<pre><code>➜ rclone copy --dry-run --include="*.tdms" --use-json-log /Users/hongbo-miao/Documents/motor hm-s3:my-bucket/motor
{"level":"warning","msg":"Skipped copy as --dry-run is set (size 30.518Mi)","object":"motor-1.tdms","objectType":"*local.Object","size":32000448,"skipped":"copy","source":"operations/operations.go:2360","time":"2023-06-05T17:34:51.1769-07:00"}
{"level":"warning","msg":"\nTransferred: \t 30.518 MiB / 30.518 MiB, 100%, 0 B/s, ETA -\nTransferred: 1 / 1, 100%\nElapsed time: 0.7s\n\n","source":"accounting/stats.go:498","stats":{"bytes":32000448,"checks":0,"deletedDirs":0,"deletes":0,"elapsedTime":0.795462417,"errors":0,"eta":null,"fatalError":false,"renames":0,"retryError":false,"speed":0,"totalBytes":32000448,"totalChecks":0,"totalTransfers":1,"transferTime":0,"transfers":1},"time":"2023-06-05T17:34:51.178495-07:00"}
</code></pre>
<p>However, if I use Prefect <code>ShellOperation</code> from <a href="https://github.com/PrefectHQ/prefect-shell" rel="nofollow noreferrer">prefect-shell</a> v0.1.5 to run same command:</p>
<pre class="lang-py prettyprint-override"><code>@task
async def get_missing_files() -> None:
    log = await ShellOperation(
        commands=['rclone copy --dry-run --include="*.tdms" --use-json-log /Users/hongbo-miao/Documents/motor hm-s3:my-bucket/motor'],
        stream_output=False,
    ).run()


@flow
async def ingest_data() -> None:
    missing_files_list = await get_missing_files(
        source_dirname, s3_raw_path, delta_table_path, location
    )
</code></pre>
<p>I got error</p>
<pre><code>➜ python src/main.py
17:38:33.356 | INFO | prefect.engine - Created flow run 'chirpy-bull' for flow 'ingest-data'
17:38:33.869 | INFO | Flow run 'chirpy-bull' - Created task run 'get_missing_files-0' for task 'get_missing_files'
17:38:33.870 | INFO | Flow run 'chirpy-bull' - Executing 'get_missing_files-0' immediately...
17:38:34.102 | INFO | Task run 'get_missing_files-0' - PID 40879 triggered with 1 commands running inside the '.' directory.
17:38:34.532 | ERROR | Task run 'get_missing_files-0' - Encountered exception during execution:
Traceback (most recent call last):
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/engine.py", line 1550, in orchestrate_task_run
result = await call.aresult()
^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 181, in aresult
return await asyncio.wrap_future(self.future)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 218, in _run_async
result = await coro
^^^^^^^^^^
File "ingest-data/src/tasks/get_missing_files.py", line 27, in get_missing_files
log = await ShellOperation(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/contextlib.py", line 222, in __aexit__
await self.gen.athrow(typ, value, traceback)
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/utilities/processutils.py", line 221, in open_process
yield process
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect_shell/commands.py", line 396, in run
await shell_process.wait_for_completion()
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect_shell/commands.py", line 177, in wait_for_completion
raise RuntimeError(
RuntimeError: PID 40879 failed with return code 6.
17:38:34.621 | ERROR | Task run 'get_missing_files-0' - Finished in state Failed('Task run encountered an exception: RuntimeError: PID 40879 failed with return code 6.\n')
17:38:34.621 | ERROR | Flow run 'chirpy-bull' - Encountered exception during execution:
Traceback (most recent call last):
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/engine.py", line 674, in orchestrate_flow_run
result = await flow_call.aresult()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 181, in aresult
return await asyncio.wrap_future(self.future)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 218, in _run_async
result = await coro
^^^^^^^^^^
File "ingest-data/src/main.py", line 18, in ingest_data
missing_files_list = await get_missing_files(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/api.py", line 109, in wait_for_call_in_loop_thread
return call.result()
^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 173, in result
return self.future.result(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 218, in _run_async
result = await coro
^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/engine.py", line 1132, in get_task_call_return_value
return await future._result()
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/futures.py", line 240, in _result
return await final_state.result(raise_on_failure=raise_on_failure, fetch=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/states.py", line 91, in _get_state_result
raise await get_state_exception(state)
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/engine.py", line 1550, in orchestrate_task_run
result = await call.aresult()
^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 181, in aresult
return await asyncio.wrap_future(self.future)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 218, in _run_async
result = await coro
^^^^^^^^^^
File "ingest-data/src/tasks/get_missing_files.py", line 27, in get_missing_files
log = await ShellOperation(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/contextlib.py", line 222, in __aexit__
await self.gen.athrow(typ, value, traceback)
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/utilities/processutils.py", line 221, in open_process
yield process
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect_shell/commands.py", line 396, in run
await shell_process.wait_for_completion()
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect_shell/commands.py", line 177, in wait_for_completion
raise RuntimeError(
RuntimeError: PID 40879 failed with return code 6.
17:38:34.732 | ERROR | Flow run 'chirpy-bull' - Finished in state Failed('Flow run encountered an exception. RuntimeError: PID 40879 failed with return code 6.\n')
Traceback (most recent call last):
File "ingest-data/src/main.py", line 41, in <module>
asyncio.run(ingest_data())
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/api.py", line 109, in wait_for_call_in_loop_thread
return call.result()
^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 173, in result
return self.future.result(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 218, in _run_async
result = await coro
^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/client/utilities.py", line 40, in with_injected_client
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/engine.py", line 259, in create_then_begin_flow_run
return await state.result(fetch=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/states.py", line 91, in _get_state_result
raise await get_state_exception(state)
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/engine.py", line 674, in orchestrate_flow_run
result = await flow_call.aresult()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 181, in aresult
return await asyncio.wrap_future(self.future)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 218, in _run_async
result = await coro
^^^^^^^^^^
File "ingest-data/src/main.py", line 18, in ingest_data
missing_files_list = await get_missing_files(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/api.py", line 109, in wait_for_call_in_loop_thread
return call.result()
^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 173, in result
return self.future.result(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 218, in _run_async
result = await coro
^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/engine.py", line 1132, in get_task_call_return_value
return await future._result()
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/futures.py", line 240, in _result
return await final_state.result(raise_on_failure=raise_on_failure, fetch=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/states.py", line 91, in _get_state_result
raise await get_state_exception(state)
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/engine.py", line 1550, in orchestrate_task_run
result = await call.aresult()
^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 181, in aresult
return await asyncio.wrap_future(self.future)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 218, in _run_async
result = await coro
^^^^^^^^^^
File "ingest-data/src/tasks/get_missing_files.py", line 27, in get_missing_files
log = await ShellOperation(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/hongbo-miao/.pyenv/versions/3.11.1/lib/python3.11/contextlib.py", line 222, in __aexit__
await self.gen.athrow(typ, value, traceback)
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect/utilities/processutils.py", line 221, in open_process
yield process
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect_shell/commands.py", line 396, in run
await shell_process.wait_for_completion()
File "/Users/hongbo-miao/Library/Caches/pypoetry/virtualenvs/ingest-data-RUzdR1F0-py3.11/lib/python3.11/site-packages/prefect_shell/commands.py", line 177, in wait_for_completion
raise RuntimeError(
RuntimeError: PID 40879 failed with return code 6.
make: *** [poetry-run-dev] Error 1
</code></pre>
<p>What does this error mean? Thanks!</p>
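<p>The <code>RuntimeError</code> itself carries no diagnosis: <code>prefect-shell</code> raises it whenever the spawned process exits non-zero, simply relaying the child's return code. Per rclone's documented exit codes, 6 indicates "less serious errors" that rclone did not retry — so something in the subprocess environment (working directory, rclone config/remote resolution, PATH) likely differs from your interactive shell. A trivial sketch of what Prefect is surfacing:</p>

```shell
# stand-in for the failing rclone invocation; Prefect relays this status verbatim
sh -c 'exit 6'
echo "exit code: $?"   # exit code: 6
```

<p>Setting <code>stream_output=True</code>, or re-running the exact command with rclone's <code>--log-level DEBUG</code> from the flow's working directory, should surface rclone's own error message.</p>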
|
<python><prefect><rclone>
|
2023-06-06 00:48:21
| 1
| 50,638
|
Hongbo Miao
|
76,410,776
| 202,335
|
NameError: name 'market_data' is not defined
|
<pre><code>>>> from pytdx.hq import TdxHq_API
>>>
>>> def get_latest_prices(stock_codes):
...     api = TdxHq_API()
...     with api.connect('119.147.212.81', 7709):
...         market_data = api.get_security_quotes(stock_codes)
...     api.disconnect()
...
>>> latest_prices = {}
>>> for data in market_data:
...     stock_code = data['code']
...     latest_price = data['price']
...     latest_prices[stock_code] = latest_price
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'market_data' is not defined
>>> return latest_prices
</code></pre>
<p>market_data is defined in <code>market_data = api.get_security_quotes(stock_codes)</code>, why does it say that it is not defined?</p>
<p>After modifying it to</p>
<pre><code>from pytdx.hq import TdxHq_API

def get_latest_prices(stock_codes):
    api = TdxHq_API()
    with api.connect('119.147.212.81', 7709):
        market_data = api.get_security_quotes(stock_codes)
    api.disconnect()
    return market_data

    latest_prices = {}
    for data in market_data:
        stock_code = data['code']
        latest_price = data['price']
        latest_prices[stock_code] = latest_price
    return latest_prices
</code></pre>
<p>I get</p>
<pre><code>Cell In[5], line 7
return market_data = api.get_security_quotes(stock_codes)
^
SyntaxError: invalid syntax
</code></pre>
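<p>For reference, <code>market_data</code> is a local name that only exists inside <code>get_latest_prices</code>, and <code>return</code> can only appear inside a function — <code>return x = ...</code> is never valid syntax. Moving the post-processing inside the function (or applying it to the returned value) addresses both errors. A sketch with stub data standing in for the pytdx response:</p>

```python
def get_latest_prices(market_data):
    # in the real code, market_data = api.get_security_quotes(stock_codes) inside the function
    latest_prices = {}
    for data in market_data:
        latest_prices[data["code"]] = data["price"]
    return latest_prices  # the single, final return of the function

# hypothetical stub for what pytdx would return
sample = [{"code": "000001", "price": 10.5}, {"code": "600000", "price": 7.2}]
print(get_latest_prices(sample))  # {'000001': 10.5, '600000': 7.2}
```

<p>Anything the caller needs after the function ends must come out through that return value, not through the function's local names.</p>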
|
<python><python-3.x>
|
2023-06-06 00:43:39
| 2
| 25,444
|
Steven
|
76,410,727
| 202,335
|
Pytdx does exist in Jupyter-lab, but it says that Pytdx is not a package
|
<p>I run Python code in Jupyter-lab, I get a message:</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'pytdx.crawler'; 'pytdx' is not a
package.</p>
</blockquote>
<p>However, pytdx is on my computer, in the python310 directory; when I run !pip show pytdx in JupyterLab, it shows that pytdx exists. When I run the same Python code in Python 3.10 (64-bit), I don't get this error message. What is wrong, and how can I fix it?</p>
<p>when I run</p>
<blockquote>
<p><code>import sys; print(sys.version)</code></p>
</blockquote>
<p>in Jupyter-lab,
I get</p>
<pre><code>3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
</code></pre>
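The message "'pytdx' is not a package" usually means a local file named `pytdx.py` (or a different interpreter used by the Jupyter kernel) is shadowing the installed package. A hedged diagnostic sketch to run inside the kernel; `locate` is a helper defined here, not part of any library:

```python
import importlib.util
import sys

def locate(module_name):
    """Return the file Python would actually load for `module_name`, or None."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec and spec.origin else None

print(sys.executable)    # which interpreter the Jupyter kernel is using
print(locate("pytdx"))   # a bare pytdx.py (instead of a pytdx/ directory)
                         # in your working directory means shadowing
```

If `sys.executable` is not the Python 3.10 you installed pytdx into, install into the kernel's interpreter with `!{sys.executable} -m pip install pytdx`; if `locate("pytdx")` points at a stray `pytdx.py`, rename or delete that file.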
|
<python><jupyter><jupyter-lab>
|
2023-06-06 00:25:39
| 0
| 25,444
|
Steven
|
76,410,678
| 9,538,252
|
Iterating through a text file between start and stop words to assemble a dataframe
|
<p>I have a text file report I need to iterate through, capturing a row number and all errors and warnings associated with that row. Using the example below, I need to capture the RowID (integer only) and then all of the errors and warnings found below the line with the object's name (disregarding the rest of the text file).</p>
<pre class="lang-none prettyprint-override"><code>***********************************************************
Date/Time: 6/5/2023
FileName: somefile.txt
Report: Standard Report
***********************************************************
Success: 1234
Failures: 1234
RowID: 100
Name: Smith, John
This person did not meet the criteria because of x,y,z.
RowID: 101
Name: Smith, Susie
This is a warning.
This is an error.
Criteria was not met.
RowID: 103
Name: Jones, Bob
This person had invalid characters in the email field.
</code></pre>
<p>I have attempted different variations of the following.</p>
<pre><code>search_string = "RowID:"
next_search_string = "Name:"
with open('report.txt') as y:
for line in y:
if line.startswith(search_string):
print(line.split(':')[1].strip())
if line.startswith(next_search_string):
print(next(y))
while not (next(y)).startswith(search_string):
print(next(y))
if (next(y)).startswith(search_string):
pass
</code></pre>
<p>My desired output is:</p>
<p>100, This person did not meet the criteria because of x,y,z. <br />
101, This is a warning. This is an error. Criteria was not met. <br />
103, This person had invalid characters in the email field.</p>
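One way to sketch this, assuming every message line sits between a `Name:` line and the next `RowID:` (as in the sample): track the current row id, skip the `Name:` line, and collect everything else until the next `RowID:` appears.

```python
def parse_report(lines):
    """Map each RowID to the list of message lines that follow its Name: line."""
    results = {}
    row_id = None
    for raw in lines:
        line = raw.strip()
        if line.startswith("RowID:"):
            row_id = line.split(":", 1)[1].strip()
            results[row_id] = []
        elif line.startswith("Name:"):
            continue  # skip the name line itself
        elif row_id is not None and line:
            # header lines before the first RowID are ignored (row_id is None)
            results[row_id].append(line)
    return results
```

With `for rid, msgs in parse_report(open('report.txt')).items(): print(rid + ", " + " ".join(msgs))` this reproduces the desired output.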
|
<python><python-3.x><python-2.7>
|
2023-06-06 00:10:03
| 2
| 311
|
ByRequest
|
76,410,653
| 3,826,733
|
Uploading a starlette UploadFile object using azure Upload_blob method throws expected str, bytes or os.PathLike object, not UploadFile error
|
<p>I am sending a multipart file from my client to my graphql server written in Python. The file gets received as a</p>
<pre><code><starlette.datastructures.UploadFile object>
</code></pre>
<p>Now when I try to upload the file to Azure storage using the upload_blob method I get this error <code>expected str, bytes or os.PathLike object, not UploadFile</code></p>
<p>I tried reading the contents of the file using the file.read() method as mentioned in starlette docs. This is covered in this link at the very bottom <a href="https://www.starlette.io/requests/" rel="nofollow noreferrer">https://www.starlette.io/requests/</a>.</p>
<p>Here is my python code that does all of the above -</p>
<pre><code>@mutation.field("fileUpload")
async def resolve_fileUpload(_, info, file):
print(f"File - {file.filename}")
bytes_file = await file.read()
container_client = blob_service_client.get_container_client(
'4160000000')
if not container_client.exists():
container_client.create_container()
with open(file, "rb") as file:
result = container_client.upload_blob(
name='avatar', data=bytes_file)
return {
"status": 200,
"error": "",
"fileUrl": "www.test.com"
}
</code></pre>
<p>Please advise. Thank you.</p>
<p>June 7th 2023 - Added Stacktrace</p>
<pre><code>[2023-06-07T16:30:47.294Z] expected str, bytes or os.PathLike object, not UploadFile
GraphQL request:3:3
2 | __typename
3 | fileUpload(file: $file) {
| ^
4 | __typename
Traceback (most recent call last):
File "C:\Users\sumch\AppData\Local\Programs\Python\Python310\lib\site-packages\graphql\execution\execute.py", line 528, in await_result
return_type, field_nodes, info, path, await result
File "C:\Users\sumch\OneDrive\Projects\Flutter Projects\bol\bol-api\bol-api\user_operations\mutations.py", line 86, in resolve_fileUpload
with open(file, "rb") as file:
TypeError: expected str, bytes or os.PathLike object, not UploadFile
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\sumch\AppData\Local\Programs\Python\Python310\lib\site-packages\graphql\execution\execute.py", line 1036, in await_result
return build_response(await result, errors) # type: ignore
File "C:\Users\sumch\AppData\Local\Programs\Python\Python310\lib\site-packages\graphql\execution\execute.py", line 403, in set_result
results[response_name] = await result
File "C:\Users\sumch\AppData\Local\Programs\Python\Python310\lib\site-packages\graphql\execution\execute.py", line 535, in await_result
self.handle_field_error(error, return_type)
File "C:\Users\sumch\AppData\Local\Programs\Python\Python310\lib\site-packages\graphql\execution\execute.py", line 569, in handle_field_error
raise error
File "C:\Users\sumch\AppData\Local\Programs\Python\Python310\lib\site-packages\graphql\execution\execute.py", line 528, in await_result
return_type, field_nodes, info, path, await result
File "C:\Users\sumch\OneDrive\Projects\Flutter Projects\bol\bol-api\bol-api\user_operations\mutations.py", line 86, in resolve_fileUpload
with open(file, "rb") as file:
graphql.error.graphql_error.GraphQLError: expected str, bytes or os.PathLike object, not UploadFile
GraphQL request:3:3
2 | __typename
3 | fileUpload(file: $file) {
| ^
4 | __typename
</code></pre>
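The traceback points at `open(file, "rb")`: `file` is an `UploadFile`, not a filesystem path, and its contents were already read into `bytes_file`, so the `open` call is unnecessary. A sketch of the fixed flow; `FakeContainerClient` stands in for the Azure SDK's container client here (it is not part of `azure.storage.blob`) so the logic is testable offline:

```python
import asyncio

class FakeContainerClient:
    """Stand-in for azure.storage.blob's container client (not the real SDK)."""
    def upload_blob(self, name, data):
        return {"name": name, "size": len(data)}

async def upload_avatar(file, container_client):
    bytes_file = await file.read()   # read the UploadFile once
    # Pass the bytes straight to upload_blob -- no open() needed.
    return container_client.upload_blob(name="avatar", data=bytes_file)
```

In the real resolver, the same `container_client.upload_blob(name='avatar', data=bytes_file)` call replaces the `with open(file, "rb")` block.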
|
<python><azure-blob-storage><azure-storage><starlette>
|
2023-06-05 23:58:33
| 0
| 3,842
|
Sumchans
|
76,410,606
| 5,942,100
|
Tricky groupby several columns of a similar prefix while taking the sum based off of categorical values within a column (Pandas)
|
<p>I am looking to <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> several columns if the prefix is similar and take the sum based off of categorical values within a column.</p>
<p><strong>Data</strong></p>
<pre class="lang-none prettyprint-override"><code>name type size
AA:3400 5
AA:3401 FALSE 1
AA:3402 FALSE 2
AA:3404 FALSE 0
AA:3409 FALSE 1
AA:3410 FALSE 8
AA:3412 FALSE 9
BB:3400 TRUE 4
BB:3401 FALSE 7
</code></pre>
<p><strong>Desired</strong></p>
<pre class="lang-none prettyprint-override"><code>name type size
AA TRUE 0
AA FALSE 21
AA 5
BB TRUE 4
BB FALSE 7
BB
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>df.groupby(['name', 'type'], dropna=False, as_index=False)['size'].sum()
</code></pre>
<p>However, how can I group if the value has the same prefix?</p>
<p>Any suggestion is appreciated.</p>
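One hedged sketch: strip the `:suffix` from `name` first, then reuse the same `groupby`. This assumes the prefix is always everything before the first colon.

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["AA:3400", "AA:3401", "AA:3402", "AA:3404", "AA:3409",
             "AA:3410", "AA:3412", "BB:3400", "BB:3401"],
    "type": [None, False, False, False, False, False, False, True, False],
    "size": [5, 1, 2, 0, 1, 8, 9, 4, 7],
})

# Reduce each name to its prefix, then group on prefix + type (keeping nulls).
out = (df.assign(name=df["name"].str.split(":").str[0])
         .groupby(["name", "type"], dropna=False, as_index=False)["size"]
         .sum())
print(out)
```

Note the desired output also lists combinations that never occur in the data (e.g. `AA TRUE 0`); those would have to be added afterwards, for instance by reindexing against the full product of names and types.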
|
<python><pandas><dataframe><numpy>
|
2023-06-05 23:41:16
| 2
| 4,428
|
Lynn
|
76,410,537
| 16,319,191
|
Convert wide format data (separate dfs) to long format using Python
|
<p>Convert wide format data in separate dfs to long format in a single df in Python. Some values are NaNs.</p>
<p>Minimal example:</p>
<pre><code>df1 = pd.DataFrame({
"id": ["Mark", "Dave", "Ron" ],
"c2_A": [2, 3, np.nan ],
"c3_A": [1, np.nan, np.nan ] })
df2 = pd.DataFrame({
"id": ["Mark", "Dave", "Ron" ],
"c2_B": [1, 0, np.nan ],
"c3_B": [1, np.nan, 4 ] })
</code></pre>
<p>Required df:</p>
<pre><code>dffinal = pd.DataFrame({
"id": ["Mark", "Mark","Dave", "Dave", "Ron" , "Ron"],
"cValue": ["A", "B","A", "B", "A", "B"],
"c2Value": [2, 1, 3,0,np.nan,np.nan ],
"c3Value": [1, 1, np.nan,np.nan,np.nan,4 ] }
</code></pre>
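One possible sketch: rename each frame's suffixed columns to the shared names, tag each frame with its `cValue`, and concatenate. The `to_long` helper is defined here for illustration.

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"id": ["Mark", "Dave", "Ron"],
                    "c2_A": [2, 3, np.nan], "c3_A": [1, np.nan, np.nan]})
df2 = pd.DataFrame({"id": ["Mark", "Dave", "Ron"],
                    "c2_B": [1, 0, np.nan], "c3_B": [1, np.nan, 4]})

def to_long(df, suffix):
    # "c2_A" -> "c2Value", "c3_B" -> "c3Value"; "id" is left untouched.
    renamed = df.rename(columns=lambda c: c.split("_")[0] + "Value" if "_" in c else c)
    return renamed.assign(cValue=suffix)

dffinal = pd.concat([to_long(df1, "A"), to_long(df2, "B")])[
    ["id", "cValue", "c2Value", "c3Value"]]
# Keep the original id order (Mark, Dave, Ron) rather than alphabetical.
dffinal["id"] = pd.Categorical(dffinal["id"], categories=df1["id"], ordered=True)
dffinal = dffinal.sort_values(["id", "cValue"]).reset_index(drop=True)
print(dffinal)
```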
|
<python><pandas><wide-format-data>
|
2023-06-05 23:14:58
| 1
| 392
|
AAA
|
76,410,488
| 5,942,100
|
Groupby several columns and take the sum based off of categorical values within a column (Pandas)
|
<p>I am looking to groupby several columns and take the sum based off of categorical values within a column.</p>
<p><strong>Data</strong></p>
<pre><code>name size type
AA 9385 FALSE
AA 9460 FALSE
AA 9572 TRUE
AA 9680
BB 10 TRUE
BB 10 TRUE
BB 20 FALSE
BB 20 FALSE
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>name size type
AA 9572 TRUE
AA 18845 FALSE
AA 9680
BB 20 TRUE
BB 40 FALSE
BB
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>df = df.groupby('name').agg({'size': 'sum', 'type': lambda x: x.value_counts().idxmax()})
</code></pre>
<p>However, this appears to have removed Null values. Any suggestion is appreciated.</p>
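If the goal is one sum per (`name`, `type`) combination with null types kept, one sketch is to group on both columns with `dropna=False` instead of aggregating `type` to its most common value:

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["AA"] * 4 + ["BB"] * 4,
    "size": [9385, 9460, 9572, 9680, 10, 10, 20, 20],
    "type": [False, False, True, None, True, True, False, False],
})

# dropna=False keeps the rows whose type is null as their own group.
out = df.groupby(["name", "type"], dropna=False, as_index=False)["size"].sum()
print(out)
```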
|
<python><pandas><numpy>
|
2023-06-05 23:01:56
| 1
| 4,428
|
Lynn
|
76,410,475
| 3,586,305
|
problems with geoTiff file generation
|
<p>I am trying to make GeoTiff files and they should come out like this when viewed in qgis:</p>
<p><a href="https://i.sstatic.net/0bjX8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0bjX8.png" alt="enter image description here" /></a></p>
<p>Unfortunately, the ones I have made come out like this when viewed in qgis:</p>
<p><a href="https://i.sstatic.net/WREoH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WREoH.png" alt="enter image description here" /></a></p>
<p>without color. Unfortunately the geoTiff I make is not viewable in many maps including leaflet.</p>
<p>Below is my code for in python for making geoTiff files from grib files:</p>
<p>How do I tweak my code in order to make a successful and viable geoTiff file?</p>
<pre><code>import gdal
import osr
import sys
file=sys.argv[1]
print("file is ",file)
ds = gdal.Open("/home/jason/"+file+".grib2")
wind = ds.GetRasterBand(1).ReadAsArray()
driver = gdal.GetDriverByName('GTiff')
outRaster = driver.Create("/home/jason/"+file+".tiff", ds.RasterXSize, ds.RasterYSize, 1, gdal.GDT_Float32)
outRaster.SetGeoTransform(ds.GetGeoTransform())
outband = outRaster.GetRasterBand(1)
outband.WriteArray(wind)
outband.SetMetadata({'name': 'wind'})
outRasterSRS = osr.SpatialReference()
outRasterSRS.ImportFromEPSG(4326)
outRaster.SetProjection(outRasterSRS.ExportToWkt())
outband.FlushCache()
</code></pre>
|
<python><gdal><qgis><geotiff>
|
2023-06-05 22:56:33
| 1
| 1,085
|
jms1980
|
76,410,474
| 12,596,824
|
Splitting python list into two lists with conditions
|
<p>I want to split a list into two groups but I want to make a condition that tom can't be in the same group as jill and barry can't be in the same group as cate and tim can't be in the same group as jody.</p>
<p>I have the following code - but how can I add that condition in?</p>
<pre><code>import random
ppl = ['tom','jill', 'barry', 'cate', 'tim', 'jody', 'john']
random.shuffle(ppl)
team_a = ppl[:3]
team_b = ppl[3:]
</code></pre>
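One simple approach, sketched under the assumption that a valid split exists (it does here, since one member of each conflicting pair can fill the three-person team): keep reshuffling until no conflicting pair shares a team. The `CONFLICTS` list and `make_teams` helper are names introduced for this sketch.

```python
import random

CONFLICTS = [("tom", "jill"), ("barry", "cate"), ("tim", "jody")]

def make_teams(people, conflicts, rng=random):
    """Shuffle until no conflicting pair ends up on the same team."""
    people = list(people)
    while True:
        rng.shuffle(people)
        team_a, team_b = people[:3], people[3:]
        # Accept only if neither team contains a whole conflicting pair.
        if all(not ({a, b} <= set(team_a)) and not ({a, b} <= set(team_b))
               for a, b in conflicts):
            return team_a, team_b

ppl = ["tom", "jill", "barry", "cate", "tim", "jody", "john"]
team_a, team_b = make_teams(ppl, CONFLICTS)
```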
|
<python>
|
2023-06-05 22:56:15
| 1
| 1,937
|
Eisen
|
76,410,412
| 2,406,499
|
How to clean a line read from a txt file to match literal string
|
<p>If I read a line from a txt file with one single line, I can't seem to make that line match a string that is literally exactly the same.</p>
<p>this is my code:</p>
<p>I've uploaded a text file to myfolder in google drive called 'code.txt' that has one single line that reads 'Code123' (with no quotes), but although I can open and read that line, no matter what I do I can't seem to clean it so that it matches a string that also reads Code123.</p>
<pre><code>folder_files = glob.glob('drive/MyDrive/myfolder/*')
codefile_path = ''
for file_path in folder_files:
if 'code.txt' in file_path:
codefile_path = file_path
codefile = open(codefile_path, "r")
lines = codefile.readlines()
codefile.close()
code2 = 'Code123'
code = lines[0].strip()
print(code2) #this prints Code123
print(code) #this also prints Code123
print(code == code2) #yet this line always prints False
</code></pre>
<p>What am I missing?</p>
<p>I've tried creating and reading the txt file with encoding = utf-8.
I've tried cleaning the line by replacing \n with '', and also cleaning it up with lstrip and rstrip, both together, but nothing changes.
One thing I noticed is that <code>print(code2 in code)</code> does give True, which leads me to believe there is some character, other than a space or carriage return, still in the string read from the file line.
I've been at this for hours now... thank you in advance for any help :)</p>
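Printing the string with `repr()` usually reveals the culprit. A common one with files created by web tools is a UTF-8 byte-order mark, which `strip()` does not remove; the BOM in the sample line below is an assumption about what's in the file:

```python
line = "\ufeffCode123\n"   # hypothetical raw line carrying a BOM
print(repr(line))          # repr() exposes hidden characters like '\ufeff'

# strip() removes whitespace but not the BOM; remove it explicitly...
code = line.strip().lstrip("\ufeff")
assert code == "Code123"
# ...or avoid it entirely by opening with the BOM-aware codec:
#   open(codefile_path, "r", encoding="utf-8-sig")
```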
|
<python><google-colaboratory><drive>
|
2023-06-05 22:37:57
| 1
| 1,268
|
Francisco Cortes
|
76,410,402
| 1,914,781
|
re-arrange data by pairs recursively
|
<p>I have a dataframe containing ACQ/REL pairs nested recursively, as below:</p>
<pre><code>import pandas as pd
data = [
['2023-06-05 16:51:27.561','ACQ','location'],
['2023-06-05 16:51:27.564','ACQ','location'],
['2023-06-05 16:51:27.567','ACQ','location'],
['2023-06-05 16:51:27.571','REL','location'],
['2023-06-05 16:51:27.573','REL','location'],
['2023-06-05 16:51:27.587','REL','location'],
['2023-06-05 16:51:28.559','ACQ','location'],
['2023-06-05 16:51:28.561','ACQ','location'],
['2023-06-05 16:51:28.563','ACQ','location'],
['2023-06-05 16:51:28.566','REL','location'],
['2023-06-05 16:51:28.569','REL','location'],
['2023-06-05 16:51:28.575','REL','location']
]
df = pd.DataFrame(data,columns=['ts','action','name'])
</code></pre>
<p>I would like to reorganize it by ACQ/REL pairs, with each outer ACQ/REL pair starting a group, so that the output dataframe looks like this:</p>
<pre><code>0 2023-06-05 16:51:27.561 ACQ location
5 2023-06-05 16:51:27.587 REL location
1 2023-06-05 16:51:27.564 ACQ location
4 2023-06-05 16:51:27.573 REL location
2 2023-06-05 16:51:27.567 ACQ location
3 2023-06-05 16:51:27.571 REL location
6 2023-06-05 16:51:28.559 ACQ location
11 2023-06-05 16:51:28.575 REL location
7 2023-06-05 16:51:28.561 ACQ location
10 2023-06-05 16:51:28.569 REL location
8 2023-06-05 16:51:28.563 ACQ location
9 2023-06-05 16:51:28.566 REL location
</code></pre>
<p>The current example has 3 pairs per group, but that count is not always the same. What's the proper way to get such results?</p>
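The pairs nest like brackets, so a stack works regardless of group size: push each ACQ's row index, pop on each REL, and when the stack empties a group is complete; reversing that group's pairs puts the outermost first. `pair_rows` is a helper defined here for illustration:

```python
def pair_rows(actions):
    """Return (acq_index, rel_index) pairs, outermost pair first in each group."""
    stack, group, order = [], [], []
    for i, action in enumerate(actions):
        if action == "ACQ":
            stack.append(i)
        else:
            group.append((stack.pop(), i))   # innermost pairs complete first
        if not stack and group:
            order.extend(reversed(group))    # flip to outermost-first
            group = []
    return order

# Reordering the dataframe would then be, e.g.:
#   pairs = pair_rows(df["action"])
#   df_out = df.iloc[[i for pair in pairs for i in pair]]
```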
|
<python><pandas>
|
2023-06-05 22:35:45
| 3
| 9,011
|
lucky1928
|
76,410,339
| 1,113,997
|
ModuleNotFound error in Python but directory is on sys.path
|
<p>Ok, so here is my directory structure</p>
<pre><code>company
└── team
└── user_utils
├── __init__.py
├── test.py
├── utils.py
</code></pre>
<p>from the root of my project - which is at <code>/user/user4769/development/python-utils</code></p>
<p>From the root I run :</p>
<p><code>python3.8 /user/user4769/development/python-utils/company/team/user_utils/main.py</code> or the same with <code>-m</code> flag. As expected, when I print <code>sys.path</code> the current directory of <code>main.py</code> is added first - so I do see :</p>
<p><code>['/user/user4769/development/python-utils/company/team/user_utils', ....]</code></p>
<p>In my <code>main.py</code> I do have :</p>
<p><code>import company.team.user_utils.utils</code></p>
<p>But I do get <code>ModuleNotFound</code></p>
<p>If I add this <code>sys.path.append(os.getcwd())</code> then <code>/user/user4769/development/python-utils/</code> which is my root folder gets added onto the path, and then import works.</p>
<p>What am I missing? This is Python3.8</p>
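The short explanation: running `python path/to/main.py` puts the script's own directory (`.../user_utils`) at `sys.path[0]`, not the directory you launched from, so the `company` package root is never on the path. Running it as a module from the project root with `python3.8 -m company.team.user_utils.main` (no path, no `.py`) puts the root on the path instead. A tiny check of what a given invocation did:

```python
import sys

# sys.path[0] depends on how the interpreter was started:
#   python pkg/sub/main.py  -> "pkg/sub"  (the script's own directory)
#   python -m pkg.sub.main  -> "" / cwd   (the directory you launched from)
print("sys.path[0] =", repr(sys.path[0]))
```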
|
<python><python-3.x><python-import>
|
2023-06-05 22:21:18
| 1
| 5,269
|
ghostrider
|
76,410,293
| 4,599,066
|
How do I turn a time series line plot into a bar plot using data from Pandas?
|
<p><strong>The Problem</strong></p>
<p>I have a plot with 3 subplots showing results from two datasets. I want subplot A to use bars and the other two to use lines. I got all three subplots to use lines, but I can't get bars working on subplot A despite fiddling with it for several days. I get bars if I use the Pandas built-in plotting function df.plot(kind="bar") instead of ax.plot, but that breaks my subplot customizations and is inconsistent with the rest of the plot, so I want to use ax.plot if possible.</p>
<p><strong>Code (working example)</strong></p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
data = {}
dlist = ['Data 1', 'Data 2']
data['mean_1']=[0.5,-0.5,0.5,-0.5,0.5]
data[ 'std_1']=[0.2,-0.2,0.2,-0.2,0.2]
data['count_1']=[50,200,50,200,50]
data['mean_2']=[0.4,-0.4,0.4,-0.4,0.4]
data[ 'std_2']=[0.16,-0.16,0.16,-0.16,0.16]
data['count_2']=[70,160,40,240,40]
df = pd.DataFrame.from_dict(data)
df['time'] = pd.date_range('2000-01-01 00:00:00','2000-01-02 00:00:00',freq='6H')
df = df.set_index('time')
df
colors = ['red', 'cornflowerblue']
mean_cols = [col for col in df.columns if 'mean' in col]
std_cols = [col for col in df.columns if 'std' in col]
count_cols = [col for col in df.columns if 'count' in col]
</code></pre>
<p>I've tried two methods using ax.bar, highlighted below.
The first is just the same as subplots B and C but using ax.bar instead of ax.plot. It gives me the error "TypeError: bar() missing 1 required positional argument: 'height'". I can't figure out how to reference the dataframe for the "height".
The second turns the time and y-axis values into lists and uses those for the first two arguments in ax.bar. I get "ValueError: too many values to unpack (expected 1)". Bars do plot, but they don't match the data.</p>
<pre><code>datelist = list(df.index.values)
col_list = df[df.columns[2]].values.tolist()
#Plot initialization
fig, (ax1, ax2, ax3) = plt.subplots(3, figsize=(8, 11))
start, end = '2000-01-01', '2000-01-02'
fig.suptitle('Stuff over Jan 1',fontsize=27,x=0.5)
#Subplot 1- Counts
i = 0
for f in dlist:
p,= ax1.plot(df.loc[start:end, count_cols[i]],color=colors[i])
# p,= ax1.bar(df.loc[start:end, count_cols[i]],color=colors[i]) ##First solution##
# p,= ax1.bar(datelist,col_list,color=colors[0]) ##Second solution##
df[['count_'+str(i+1)]].plot(kind='bar',color=colors[i],alpha=0.5)
i = i+1
#Subplot 2- Means
i = 0
for f in dlist:
p,= ax2.plot(df.loc[start:end, mean_cols[i]],color=colors[i])
i=i+1
#Subplot 3- Stan. Dev.
i = 0
for f in dlist:
p,= ax3.plot(df.loc[start:end,std_cols[i]],color=colors[i])
i=i+1
#Adjustments and save
fig.autofmt_xdate(rotation=0,ha='center')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/4dWWC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4dWWC.png" alt="Line plot- I want the lines in subplot A to display as bars instead." /></a></p>
<p><strong>The Question</strong></p>
<p>How do I turn the lines in the posted image into bars using ax.bar or another similar ax method? I don't want to use the pandas built-in plotting function.</p>
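`ax.bar` needs two positional arguments, x and height; unlike `ax.plot` it cannot take a whole Series as its only argument, and it returns a `BarContainer` rather than a line (so the `p,=` unpacking also fails). A minimal sketch with the same datetime index, offsetting the two datasets so the bars sit side by side; the one-hour offset and 2/24-day width are arbitrary choices:

```python
import matplotlib
matplotlib.use("Agg")              # render off-screen; drop this interactively
import matplotlib.pyplot as plt
import pandas as pd

idx = pd.date_range("2000-01-01", "2000-01-02", freq="6H")
df = pd.DataFrame({"count_1": [50, 200, 50, 200, 50],
                   "count_2": [70, 160, 40, 240, 40]}, index=idx)

fig, ax1 = plt.subplots()
offset = pd.Timedelta(hours=1)
width = 2 / 24                     # bar width in days (matplotlib date units)
bars1 = ax1.bar(df.index - offset, df["count_1"], width=width, color="red")
bars2 = ax1.bar(df.index + offset, df["count_2"], width=width,
                color="cornflowerblue")
```

The same x/height pattern drops into the existing loop as `ax1.bar(df.loc[start:end].index, df.loc[start:end, count_cols[i]], ...)`, without the `p,=` unpacking.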
|
<python><pandas><matplotlib><datetime><bar-chart>
|
2023-06-05 22:09:23
| 1
| 1,909
|
Cebbie
|
76,410,280
| 3,424,416
|
How to make pip install download a python package (torch) to a specified directory?
|
<p>I'd like to specify which directory pip install downloads the file to because I'm getting this error when installing torch:</p>
<pre><code>ERROR: Could not install packages due to an EnvironmentError: [Errno 28] No space left on device
[...]
RuntimeError: Couldn't install torch.
Command: "/home/.../bin/python3" -m pip install torch==2.0.1 torchvision==0.15.2 --extra-index-url https://download.pytorch.org/whl/cu118
Error code: 1
</code></pre>
<p>So I tried <code>"/media/.../python3" -m pip install torch==2.0.1 torchvision==0.15.2 --extra-index-url https://download.pytorch.org/whl/cu118 -t "/directory/"</code> but it still downloads to the root drive of my Debian11/KDE machine. Didn't find the solution in the pip documentation. How can I change where it downloads to?</p>
<p>This package has a size of multiple GBs, has really nobody thought of enabling users to specify where it should be downloaded to? Or is using a symbolic link for the cache dir the only way to do this?</p>
|
<python><pip><download><torch>
|
2023-06-05 22:05:47
| 1
| 799
|
mYnDstrEAm
|
76,410,249
| 13,231,896
|
How to authenticate to ODK aggregate using python
|
<p>I am building a Python application and I need to be able to authenticate through an ODK Aggregate server. By "authenticate" I mean that I need to send the login information (username and password) to the ODK Aggregate server, and it should return whether the credentials are correct and the user is active.
I would rather use Python's requests library, but I am open to other suggestions.</p>
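One hedged approach: request a protected endpoint and check for a 200 versus 401 status. ODK Aggregate commonly protects `/formList` with HTTP Digest auth, but the exact URL layout may differ per deployment; the `fetch_status` hook is added here only so the logic can be exercised without a live server:

```python
def credentials_ok(base_url, username, password, fetch_status=None):
    """Return True if the server accepts the credentials."""
    if fetch_status is None:
        import requests
        from requests.auth import HTTPDigestAuth

        def fetch_status(url):
            return requests.get(url, auth=HTTPDigestAuth(username, password),
                                timeout=10).status_code
    return fetch_status(base_url.rstrip("/") + "/formList") == 200
```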
|
<python><odk>
|
2023-06-05 21:57:50
| 1
| 830
|
Ernesto Ruiz
|
76,410,208
| 1,478,636
|
Django Template: include another html template, but only once
|
<p>How can I ensure that specific HTML files are not included multiple times in Django templates when passing metadata to different frontend frameworks?</p>
<p>I am working on a project where we use Django templates and need to pass metadata from Django to various JavaScript frontend frameworks (currently React, but potentially refactor to others in the future).</p>
<h3>embed_data_foo.html</h3>
<pre><code>{{ foo_data|json_script:"foo_data" }}
</code></pre>
<h3>some_page.html</h3>
<pre><code>{% include "embed_data_foo.html" %}
</code></pre>
<h3>rendered</h3>
<pre class="lang-html prettyprint-override"><code><script id="foo_data" type="application/json">{"foo":"bar"}</script>
</code></pre>
<p>Our templates are designed compositionally and relatively small, which makes it easy to avoid accidental duplication. However, I want to have guardrails in place to prevent these specific files from being included multiple times. So doing this multiple times potentially, in extended templates, should still only result in the only one being placed.</p>
<h3>some_page_but_with_whoopsies.html</h3>
<pre><code>{% include "embed_data_foo.html" %}
{% include "embed_data_foo.html" %}
{% include "embed_data_foo.html" %}
</code></pre>
<h3>rendered</h3>
<pre class="lang-html prettyprint-override"><code><script id="foo_data" type="application/json">{"foo":"bar"}</script>
<script id="foo_data" type="application/json">{"foo":"bar"}</script>
<script id="foo_data" type="application/json">{"foo":"bar"}</script>
</code></pre>
<p>In the C programming language, I would achieve this using the <code>#ifndef</code>, <code>#define</code>, <code>#endif</code> pattern. Is there a similar approach or technique that I can use in Django templates to ensure that files are included only if they have not been included before?</p>
<p>Given that we will be onboarding many people in the near future, it's crucial for me to set them up for success and avoid any issues caused by duplicate imports.</p>
<p>What is the recommended approach to implement this in Django templates and ensure that specific HTML files are included only once?</p>
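Django has no built-in `#ifndef` equivalent, but a custom template tag can keep a per-render set of already-included template names and emit nothing on repeats. The core idea is sketched here as plain Python so it is visible without the Django wiring; `make_include_once` is a name invented for this sketch:

```python
def make_include_once(render):
    """Wrap a `render(name) -> str` callable so each name renders only once."""
    seen = set()

    def include_once(name):
        if name in seen:
            return ""        # already emitted: render nothing this time
        seen.add(name)
        return render(name)

    return include_once
```

In an actual Django tag you would store `seen` in `context.render_context` (which lives for exactly one template render) and call `django.template.loader.render_to_string` in place of the `render` callable, then use `{% include_once "embed_data_foo.html" %}` instead of `{% include %}`.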
|
<python><django>
|
2023-06-05 21:48:40
| 1
| 4,488
|
Nikole
|
76,409,959
| 11,092,636
|
Faster way to use saved Pytorch model (bypassing import torch?)
|
<p>I'm using a Slurm Workload Manager on a server and <code>import torch</code> takes around 30-40 seconds. The IT people running it said they couldn't do much to improve it and it was just hardware related (maybe they missed something? But I've gone through the internet before asking them and couldn't find much either). By comparison, <code>import numpy</code> takes around 1 second.</p>
<p>I would like to know if there is a way to use the saved weights of a <code>pytorch</code> model <strong>to ONLY predict an output with a given input without importing torch</strong> (so no need to import everything related to gradients, etc ...). Theoretically, it is just matrix multiplications (I think?) so it probably is feasible by only using <code>numpy</code>? I need to do this several times on different jobs so I cannot cache / pass around the imported torch which is why I'm actively looking for a solution (but generally speaking taking something from 30-40 seconds to a few is pretty cool anyway).</p>
<p>If that matters, here is the architecture of my model:</p>
<pre class="lang-py prettyprint-override"><code>ActionNN(
(conv_1): Conv2d(5, 16, kernel_size=(3, 3), stride=(1, 1))
(conv_2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1))
(conv_3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1))
(norm_layer_1): InstanceNorm2d(16, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(norm_layer_2): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(norm_layer_3): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(gap): AdaptiveAvgPool2d(output_size=(1, 1))
(mlp): Sequential(
(0): Linear(in_features=71, out_features=128, bias=True)
(1): ReLU()
)
(layer): Linear(in_features=128, out_features=210, bias=True)
(layer): Linear(in_features=128, out_features=210, bias=True)
(layer): Linear(in_features=128, out_features=210, bias=True)
(layer): Linear(in_features=128, out_features=210, bias=True)
(layer): Linear(in_features=128, out_features=28, bias=True)
(layer): Linear(in_features=128, out_features=28, bias=True)
(layer): Linear(in_features=128, out_features=28, bias=True)
(sigmoid): Sigmoid()
(tanh): Tanh()
)
Number of parameters: 152284
</code></pre>
<p>If it was only fully connected layers, it would be "pretty easy" but because my network is a tiny bit more complex, I'm not sure how I should do it.</p>
<p>I saved the parameters using <code>torch.save(my_network.state_dict(), my_path)</code>.</p>
<p>Since my script takes in total on average 35 seconds (<code>import torch</code> included), I would be able to run it in on average a second or two, which would be great.</p>
<p>Here is my profiling of <code>import torch</code>:</p>
<pre class="lang-py prettyprint-override"><code> 1226310 function calls (1209639 primitive calls) in 49.994 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1273 21.590 0.017 21.590 0.017 {method 'read' of '_io.BufferedReader' objects}
5276 12.145 0.002 12.145 0.002 {built-in method posix.stat}
1273 7.427 0.006 7.427 0.006 {built-in method io.open_code}
45/25 5.631 0.125 9.939 0.398 {built-in method _imp.create_dynamic}
2 0.564 0.282 0.564 0.282 {built-in method _ctypes.dlopen}
1273 0.288 0.000 0.288 0.000 {built-in method marshal.loads}
17 0.286 0.017 0.286 0.017 {method 'readline' of '_io.BufferedReader' objects}
2809/2753 0.098 0.000 0.546 0.000 {built-in method builtins.__build_class__}
1620/1 0.062 0.000 49.997 49.997 {built-in method builtins.exec}
50145 0.051 0.000 0.119 0.000 {built-in method builtins.getattr}
1159 0.048 0.000 0.115 0.000 inspect.py:3245(signature)
424 0.048 0.000 0.113 0.000 assumptions.py:596(__init__)
13 0.039 0.003 0.039 0.003 {built-in method io.open}
1411 0.035 0.000 0.045 0.000 library.py:71(impl)
1663 0.034 0.000 12.209 0.007 <frozen importlib._bootstrap_external>:1536(find_spec)
</code></pre>
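For the fully connected parts this is indeed just matrix multiplication; the convolutions and instance norm can also be written in NumPy, though less trivially. A hedged sketch of the dense path, assuming the weights were exported once (while torch is available) with `np.savez(path, **{k: v.numpy() for k, v in model.state_dict().items()})`; the `"mlp.0"`/`"layer"` key names follow PyTorch's state_dict convention for the architecture shown, but should be checked against the real keys:

```python
import numpy as np

def linear(x, weight, bias):
    # torch.nn.Linear stores weight with shape (out_features, in_features)
    return x @ weight.T + bias

def relu(x):
    return np.maximum(x, 0.0)

def mlp_forward(x, params):
    """Replays Sequential(Linear -> ReLU) followed by one head Linear."""
    h = relu(linear(x, params["mlp.0.weight"], params["mlp.0.bias"]))
    return linear(h, params["layer.weight"], params["layer.bias"])

# At job time, only numpy is imported:
#   params = dict(np.load("weights.npz"))
#   y = mlp_forward(features, params)
```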
|
<python><numpy><deep-learning><pytorch><neural-network>
|
2023-06-05 20:59:27
| 1
| 720
|
FluidMechanics Potential Flows
|
76,409,916
| 10,705,248
|
Python Xarray ValueError: unrecognized chunk manager dask - must be one of: []
|
<p>I am using <code>xarray</code> for combining multiple netcdf files using <code>xarray.open_mfdataset</code>. But I get the error while running the command, below are the commands and error.</p>
<pre><code>nc_all = xarray.open_mfdataset(files,combine = 'nested', concat_dim="time")
files = glob.glob("/filepath/*")
</code></pre>
<p>I get the following error-</p>
<pre><code>Traceback (most recent call last):
File "/home/lsrathore/GLEAM/GLEAM_HPC.py", line 85, in <module>
nc_1980_90 = xarray.open_mfdataset(files[1:11],combine = 'nested', concat_dim="time")
File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/backends/api.py", line 1038, in open_mfdataset
datasets = [open_(p, **open_kwargs) for p in paths]
File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/backends/api.py", line 1038, in <listcomp>
datasets = [open_(p, **open_kwargs) for p in paths]
File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/backends/api.py", line 572, in open_dataset
ds = _dataset_from_backend_dataset(
File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/backends/api.py", line 367, in _dataset_from_backend_dataset
ds = _chunk_ds(
File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/backends/api.py", line 315, in _chunk_ds
chunkmanager = guess_chunkmanager(chunked_array_type)
File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/core/parallelcompat.py", line 87, in guess_chunkmanager
raise ValueError(
ValueError: unrecognized chunk manager dask - must be one of: []
</code></pre>
<p>What is causing the problem?</p>
|
<python><python-xarray>
|
2023-06-05 20:50:29
| 6
| 854
|
lsr729
|
76,409,883
| 7,318,120
|
How to make scatter plots in Python with Plotly
|
<p>I want to make a scatter plot with <code>plotly</code>.</p>
<p>So i head to the official example on the plotly site: <a href="https://plotly.com/python/line-and-scatter/" rel="nofollow noreferrer">https://plotly.com/python/line-and-scatter/</a></p>
<p>I slightly modify the code (below) and put into a python file:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
def main():
scatter_plot()
return
def scatter_plot():
''' make a scatter plot with plotly '''
# x and y given as array_like objects
fig = px.scatter(x=[0, 1, 2, 3, 4], y=[0, 1, 4, 9, 16])
fig.show()
return
# main guard idiom
if __name__=='__main__':
main()
</code></pre>
<p>I expect to see a basic scatter plot show up.</p>
<p>Instead i see a tab on my browser launch with a local url: <a href="http://127.0.0.1:59326/" rel="nofollow noreferrer">http://127.0.0.1:59326/</a></p>
<p>but the tab is blank:</p>
<pre><code>This site can’t be reached
127.0.0.1 refused to connect.
</code></pre>
<p>What do i have to do to see the plot ?</p>
<p>Edit:
if I do <code>pip install --upgrade nbformat</code> and then run in a Jupyter notebook (in VS Code) it works. But I would like to run it in a *.py file.</p>
|
<python><plotly>
|
2023-06-05 20:44:09
| 0
| 6,075
|
darren
|
76,409,880
| 11,741,232
|
Python - importing lots of packages that may not exist with IDE linking
|
<p>Imagine I have >20 libraries I want to import called a, b, c, d etc.</p>
<p>These libraries may or may not exist, but I want the script to run anyway - think of them as plugins the script runner may have installed.</p>
<p>Obviously, I'll need to catch the ImportErrors then. But if I do it this way:</p>
<pre class="lang-py prettyprint-override"><code>try:
import a
import b
import c
import d
except ImportError:
pass
</code></pre>
<p>it won't work because the first package that doesn't exist will trip the <code>except</code> and the rest of the packages (that could be available) won't be imported.</p>
<p>One solution is to use exec, like so:</p>
<pre class="lang-py prettyprint-override"><code>imports = ['import a', 'import b', 'import c', 'import d']
for import_cmd in imports:
try:
exec(import_cmd)
except ImportError:
pass
</code></pre>
<p>Ignoring that we're using exec() which is generally frowned upon, this accomplishes the correct behaviour. The problem with this method is that since the import statements are strings, IDEs can't link the libraries properly. For example, if this current file was called myfile.py and I went to a different file and tried to import <code>from myfile import a</code>, it wouldn't autocomplete and would probably give a linting error, even though at runtime, it would work fine. This would also disable the ability to ctrl + click <code>a</code> in the second file to navigate to a, which is annoying.</p>
<p>The solution that accomplishes everything is to try: except: every import, but this is very ugly, repetitive, and it feels like there should be a more scalable solution:</p>
<pre class="lang-py prettyprint-override"><code>try:
import a
except ImportError:
pass
try:
import b
except ImportError:
pass
# etc.
</code></pre>
<p>I've also considered <a href="https://cog.readthedocs.io/en/latest/" rel="nofollow noreferrer">cog</a> but in my opinion it makes the code harder to understand and maintain (needing to make a file that tells cog all the files to modify, running cog at library install time etc.)</p>
<p>So, are there any solutions to this?</p>
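`importlib.import_module` gives the same effect as the `exec` loop without string code, while staying a single compact block. It does not restore IDE linking on its own; a common companion is a `typing.TYPE_CHECKING` block containing the plain `import` statements, which type checkers and IDEs follow but which never executes at run time. A sketch with stdlib module names standing in for the plugins:

```python
import importlib

# Stand-ins for the >20 plugin names; missing ones simply map to None.
OPTIONAL = ["json", "csv", "no_such_plugin_xyz"]

loaded = {}
for name in OPTIONAL:
    try:
        loaded[name] = importlib.import_module(name)
    except ImportError:
        loaded[name] = None   # plugin absent: callers must check for None
```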
|
<python>
|
2023-06-05 20:43:21
| 0
| 694
|
kevinlinxc
|
76,409,874
| 13,721,819
|
How can I embed a Panda3D window in a Tkinter app?
|
<p>I am designing a Tkinter app in Python that uses the Panda3D game library to display some 3D models. With my current design, I run the Panda3D engine in headless mode with the option <code>windowType='offscreen'</code>, and I periodically take screenshots from the headless window and display them in a Tkinter <code>Canvas</code> widget. However, this view does not have any of the interactive features of Panda3D.</p>
<p>I am using Panda3D version 1.10.13.post1.</p>
<p>Is there a way that I could embed the Panda3D window in Tkinter, so that it could be an interactive 3D view?</p>
|
<python><tkinter><tkinter-canvas><panda3d>
|
2023-06-05 20:42:36
| 1
| 612
|
Wilson
|
76,409,740
| 1,914,781
|
check column has the same string
|
<p>I would like to check whether every value in a dataframe column is the same string.
For example, in column AAA of the dataframe below, I want to check that every string is Jack, so the function call should return True.</p>
<pre><code>import pandas as pd
data = {
'AAA' :
['Jack','Jack','Jack','Jack','Jack'],
'BBB' :
['January', 'February', 'February', 'April', 'January'],
'CCC' :
[85, 96, 55, 64,60]
}
df = pd.DataFrame(data)
print(len(df['AAA'].unique()) == 1)
</code></pre>
<p><code>unique</code> can be used to check that all values are the same, but I also need to check that the string is Jack.</p>
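<p>A direct way to express both checks at once (a sketch against the dataframe above) is an element-wise comparison followed by <code>all()</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({'AAA': ['Jack', 'Jack', 'Jack', 'Jack', 'Jack']})

# True only if every value in the column equals 'Jack'
all_jack = bool(df['AAA'].eq('Jack').all())
print(all_jack)  # True
```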
|
<python><pandas>
|
2023-06-05 20:19:21
| 3
| 9,011
|
lucky1928
|
76,409,701
| 6,676,101
|
If you had a text file full of people's names where each person's name was on a separate line, how would you turn that into a clean list of their names?
|
<h3>How do you quickly extract people's names from a text string with a python script?</h3>
<hr />
<h1>A General Description</h1>
<blockquote>
<ol>
<li><p>For any person <em><strong>p</strong></em>, if two lines <strong>L1</strong> and <strong>L2</strong> contain the name of person <em><strong>p</strong></em>, then line <strong>L1</strong> is the same line as line <strong>L2</strong></p>
</li>
<li><p>For any two different people <em><strong>p1</strong></em> and <em><strong>p2</strong></em>, if person <em><strong>p1</strong></em> has their name on line <em><strong>L1</strong></em> and <em><strong>p2</strong></em> has their name on line <em><strong>L2</strong></em>, then line <strong>L1</strong> is different from line <strong>L2</strong></p>
</li>
<li><p>People's names contain upper-case and/or lowercase letters <code>A</code>, <code>B</code>, <code>C</code>, ... <code>Z</code> and people's names do <strong>NOT</strong> contain numbers <code>0</code>, <code>1</code>, <code>2</code>, ..., <code>9</code></p>
</li>
</ol>
</blockquote>
<hr />
<h1>Example Input</h1>
<pre><code>1. ALICIA SANZ 92.0%
(2) ANA FIGUEROA 10.0%
[3] ARIADNA MANZANARES 10.1%
[4] BRIANA CORONIL 82.1%
[5] DRÁP THE KLINGON 71.5%
6. ELEN OF THE DAWN 98.3%
7) INMACULADA FRAGA 14.8%
</code></pre>
<hr />
<h1>Example Output</h1>
<p>Stored in a list or other container type.</p>
<pre class="lang-none prettyprint-override"><code>Alicia Sanz
Ana Figueroa
Ariadna Manzanares
Briana Coronil
Dráp The Klingon
Elen Of The Dawn
Inmaculada Fraga
</code></pre>
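<p>A minimal sketch that handles the example input above (it assumes each line is a numbered name followed by a percentage; the regexes are mine, not a general-purpose name detector):</p>

```python
import re

def extract_names(text):
    names = []
    for line in text.splitlines():
        # strip leading numbering like "1.", "(2)", "[3]" or "7)"
        cleaned = re.sub(r'^\s*[\[(]?\d+[\]).]?\s*', '', line)
        # strip the trailing percentage
        cleaned = re.sub(r'\s*\d+(?:\.\d+)?%\s*$', '', cleaned)
        if cleaned:
            names.append(cleaned.title())
    return names

sample = "1. ALICIA SANZ 92.0%\n(2) ANA FIGUEROA 10.0%\n[3] ARIADNA MANZANARES 10.1%"
print(extract_names(sample))  # ['Alicia Sanz', 'Ana Figueroa', 'Ariadna Manzanares']
```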
<hr />
<p>This question mostly exists so that people who type something like "how do I extract people's names out of a file" can quickly find an answer without having to write code. Presumably, their project is more complicated than getting the names of people out of the file, and we could save them 5 to 30 minutes.</p>
<p>Or, you could create a website like <code>www.extractnames.com</code> and run a bunch of advertisements on the site for money. That is optional, of course.</p>
<hr />
<p>I posted an answer, but hopefully there will be better and better answers posted by different people.</p>
<p>You could certainly make your code more human-readable than mine.</p>
<p>If you wanted a really highly up-voted answer, then perhaps consider writing a scraper which is able to delete superfluous words such as the <code>was painting a picture of a house</code> in the sentence <code>Sophia Gutierez was painting a picture of a house</code>.</p>
<p>We just want her name: <code>Sophia Gutierez</code>.</p>
<p>It is up to you, as long as the code is <em><strong>useful</strong></em>.</p>
<p>May the best pony win.</p>
<hr />
|
<python><string><extract><text-parsing><text-extraction>
|
2023-06-05 20:12:03
| 1
| 4,700
|
Toothpick Anemone
|
76,409,700
| 10,095,043
|
Python Sentry integration only for some modules
|
<p>absolute Sentry beginner here, and junior developer, so sorry in advance if my questions are dumb.</p>
<p>My use case is that I'm making two python packages, foo and bar, one (bar) using the other (foo). My issue is that I want Sentry to only receive errors from foo, not from bar.</p>
<p>So let's assume this simple example:</p>
<pre><code>foo/
foo/
__init__.py
import sentry_sdk
sentry_sdk.init(
dsn="blablabla",
traces_sample_rate=1.0,
in_app_include=["foo"]
)
main.py
def test():
raise ValueError("Foo error") # I want this to be caught by Sentry
bar/
bar/
__init__.py
main.py:
import foo.main
raise ValueError("Bar error") # I dont want this to be caught by Sentry
foo.main.test()
</code></pre>
<p>Am I misunderstanding how Sentry works, of what it is for?</p>
<p>Thanks a lot!</p>
|
<python><sentry>
|
2023-06-05 20:11:42
| 0
| 442
|
Munshine
|
76,409,565
| 7,703,778
|
Passing arguments to subprocess.run from list
|
<p>I want to pass arguments to an <code>rsync</code> subprocess from a list (or string) but can't find any way to do it without specifying each list item individually, i.e. this works:</p>
<pre><code>args = ['--progress', '-avh']
subprocess.run(['rsync', args[0],args[1],loaded_prefs['src_dir'],loaded_prefs['dst_dir']])
</code></pre>
<p>but this doesn't</p>
<pre><code>args = '--progress -avh'
subprocess.run(['rsync', args,loaded_prefs['src_dir'],loaded_prefs['dst_dir']])
</code></pre>
<p>or this</p>
<pre><code>args = ['--progress', '-avh']
subprocess.run(['rsync', ','.join(args),loaded_prefs['dst_dir']])
</code></pre>
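<p>What does work is splicing the list into the command with unpacking, or splitting a string on shell rules with <code>shlex</code> first (the paths below are placeholders):</p>

```python
import shlex

args = ['--progress', '-avh']
cmd = ['rsync', *args, '/tmp/src/', '/tmp/dst/']

# from a single string, split it rather than joining it
arg_str = '--progress -avh'
cmd2 = ['rsync', *shlex.split(arg_str), '/tmp/src/', '/tmp/dst/']

print(cmd)          # ['rsync', '--progress', '-avh', '/tmp/src/', '/tmp/dst/']
print(cmd2 == cmd)  # True
# subprocess.run(cmd) then receives each flag as its own argument
```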
<p>Any help would be much appreciated</p>
|
<python><bash>
|
2023-06-05 19:45:28
| 1
| 601
|
Chris Barrett
|
76,409,411
| 5,306,861
|
Understanding MFCC output for a simple sine wave
|
<p>I generate a simple sine wave with a frequency of 200 and calculate an FFT to check that the obtained frequency is correct.</p>
<p>Then I calculate the MFCC, but I do not understand its output.
What does the output mean, and where do I see the frequency 200 in it?</p>
<pre><code># In[3]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.fft
import librosa
def generate_sine_wave(freq, sample_rate, duration):
x = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
frequencies = x * freq
# 2pi because np.sin takes radians
y = np.sin(2 * np.pi * frequencies)
return x, y
sample_rate = 1024
freq = 200
x, y = generate_sine_wave(freq, sample_rate, 1)
plt.figure(figsize=(10, 4))
plt.plot(x, y)
plt.grid(True)
fft = scipy.fft.fft(y)
fft = fft[0 : len(fft) // 2]
fft = np.abs(fft)
xs = np.linspace(0, sample_rate // 2, len(fft))
plt.figure(figsize=(10, 4))
plt.plot(xs, fft)
plt.grid(True)
mfcc_feat = librosa.feature.mfcc(sr=sample_rate, y=y)
print('\nMFCC Parameters:\n Window Count =', mfcc_feat.shape[0])
print(' Individual Feature Length =', mfcc_feat.shape[1])
mfcc_feat = mfcc_feat.T
plt.matshow(mfcc_feat)
plt.title('MFCC Features - librosa')
</code></pre>
<p><a href="https://i.sstatic.net/FnA9X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FnA9X.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/C3yRe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C3yRe.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Xn9jZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xn9jZ.png" alt="enter image description here" /></a></p>
<p>If I change the frequency to 400 MFCC it gives me this:</p>
<p><a href="https://i.sstatic.net/ZOWNL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZOWNL.png" alt="enter image description here" /></a></p>
<p>What is the meaning of all these colors in three rows?</p>
|
<python><signal-processing><librosa><mfcc>
|
2023-06-05 19:18:04
| 1
| 1,839
|
codeDom
|
76,409,390
| 5,666,203
|
Prevent premature wrapping in 2-column HTML report using CSS
|
<p>I'm building a two-column "report" in HTML and CSS (I'm new to both) and printing it to a PDF via Weasyprint in Python. My problem is that content in the first column is wrapping into the second column prematurely, ultimately resulting in a broken table that should remain in one column:</p>
<p><a href="https://i.sstatic.net/C3v1m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C3v1m.png" alt="enter image description here" /></a></p>
<p>The HTML file calls the CSS file:</p>
<pre><code><html>
<head>
<meta charset="UTF-8">
<link href="report.css" rel="stylesheet">
<title>Report</title>
<meta name="description" content="Report example">
</head>
...
</code></pre>
<p>At some point, I create a page style in CSS called "satgeom":</p>
<pre><code>@page {
@top-left {
background: #FF874A;
content: counter(page);
height: 1cm;
text-align: center;
width: 1cm;
}
@top-center {
background: #FF874A;
content: '';
display: block;
height: .05cm;
opacity: .5;
width: 100%;
}
@top-right {
content: string(heading);
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
}
@page :blank {
@top-left { background: none; content: '' }
@top-center { content: none }
@top-right { content: none }
}
@page no-chapter {
@top-left { background: none; content: none }
@top-center { content: none }
@top-right { content: none }
}
@page :first {
background: url(report_cover.png) no-repeat center;
background-size: cover;
margin: 0;
}
@page chapter {
background: #FF874A;
margin: 0;
@top-left { content: none }
@top-center { content: none }
@top-right { content: none }
}
html {
color: #393939;
font-family: Montserrat;
font-size: 11pt;
font-weight: 300;
line-height: 1.5;
}
h1 {
color: #FF874A;
font-size: 38pt;
margin: 5cm 2cm 0 2cm;
page: no-chapter;
width: 100%;
}
h2, h3, h4 {
color: black;
font-weight: 400;
}
h2 {
break-before: always;
font-size: 28pt;
string-set: heading content();
}
h3 {
font-weight: 300;
font-size: 15pt;
}
h4 {
font-size: 13pt;
}
.column {
display: flex;
flex-direction: column;
flex-basis: 100%;
flex: 1;
}
#satgeom section {
columns: 2;
column-gap: 1cm;
}
#satgeom section p {
text-align: justify;
}
/* Table */
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;word-break:normal;}
.tg .tg-zv4m{border-color:#fcbb9a;text-align:left;vertical-align:top}
.tg .tg-ofj5{border-color:#fcbb9a;text-align:right;vertical-align:top}
</code></pre>
<p>and call this style in the HTML. The contents of this page contain a lengthy table and text. My problem is that the table is wrapping prematurely, and I cannot figure out why. Ideally, I would like to wrap the text into the second column <em>after</em> the first column fills up. A snippet of my HTML for the "satgeom" page is as follows:</p>
<pre><code><article id="satgeom">
<h2 id="satgeom-title">Satellite geometry</h2>
<h3>Satellite geometry, depiction, and description</h3>
<section>
<img src="./satellite.png" alt="">
<p>
<table class="tg" style="table-layout: fixed; width: 300px">
<colgroup>
<col style="width: 150px">
<col style="width: 150px">
</colgroup>
<tr>
<td class="tg-zv4m">Name</td>
<td class="tg-ofj5">Uydu</td>
</tr>
<tr>
<td class="tg-zv4m">Cost [$]</td>
<td class="tg-ofj5">600,000,000</td>
</tr>
<tr>
<td class="tg-zv4m">Manufacturer</td>
<td class="tg-ofj5">TAI</td>
</tr>
<tr>
<td class="tg-zv4m">Duration [years]</td>
<td class="tg-ofj5">15</td>
</tr>
<tr>
<td class="tg-zv4m">Orbit altitude [km]</td>
<td class="tg-ofj5">35,785</td>
</tr>
<tr>
<td class="tg-zv4m">Max. velocity [km/s]</td>
<td class="tg-ofj5">11,051</td>
</tr>
<tr>
<td class="tg-zv4m">Dy mass [kg]</td>
<td class="tg-ofj5">1,577</td>
</tr>
<tr>
<td class="tg-zv4m">NORAD ID</td>
<td class="tg-ofj5"> - </td>
</tr>
<tr>
<td class="tg-zv4m">Uplink [GHz]</td>
<td class="tg-ofj5">7.3 - 18.10</td>
</tr>
<tr>
<td class="tg-zv4m">Downlink [GHz]</td>
<td class="tg-ofj5">11.70 - 12.75</td>
</tr>
<tr>
<td class="tg-zv4m">Reference frame</td>
<td class="tg-ofj5">Geocentric</td>
</tr>
<tr>
<td class="tg-zv4m">Regime</td>
<td class="tg-ofj5">Geostationary</td>
</tr>
</table>
</p>
<p>
Launched in 2024, the Uydu satellite was manufactured by the Turkish
Aerospace Industries for roughly $600,000,000. The satellite's mission
is
</p>
<p>
Construction-wise, the Uydu satellite comprises a main body and two
solar panel arrays extending laterally to its side. For power consumption,
the solar panels can be rotated to face the sun.
</p>
</section>
</article>
</code></pre>
<p>I've tried adding a <code>div{}</code> to my CSS file and messed with the <a href="https://stackoverflow.com/questions/6871996/two-inline-block-width-50-elements-wrap-to-second-line">nowrap property</a>, <a href="https://stackoverflow.com/questions/194803/how-to-make-text-over-flow-into-two-columns-automatically">modifying the CSS file</a>, and have also done a number of Google / SO searches, but haven't found a solution. Honestly, I'm not sure I'm looking for the right phrases.</p>
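<p>One hedged suggestion (I haven't verified how completely Weasyprint supports CSS fragmentation properties): explicitly forbid the layout engine from splitting the table across columns:</p>

```css
/* keep the whole table in one column; the page-break-* form is the older spelling */
#satgeom section table {
    break-inside: avoid-column;
    page-break-inside: avoid;
}
```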
<hr />
<p>Edit:
Stefan's answer below resulted in the "baseball card" solution I was looking for:
<a href="https://i.sstatic.net/UGqxj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UGqxj.png" alt="enter image description here" /></a></p>
|
<python><html><css><python-3.x><weasyprint>
|
2023-06-05 19:13:51
| 2
| 1,144
|
AaronJPung
|
76,409,272
| 11,329,736
|
snakemake built-in md5sum function
|
<p>Under the header "Ensuring output file properties like non-emptyness or checksum compliance" of the <code>snakemake</code> <a href="https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#ensuring-output-file-properties-like-non-emptyness-or-checksum-compliance" rel="nofollow noreferrer">rules documentation</a>, the following is stated:</p>
<blockquote>
<p>It is possible to annotate certain additional criteria for output files to be ensured after they have been generated successfully. For example, this can be used to check for output files to be non-empty, or to compare them against a given sha256 checksum. If this functionality is used, Snakemake will check such annotated files before considering a job to be successful. Non-emptyness can be checked as follows:</p>
<pre><code>rule NAME:
    output:
        ensure("test.txt", non_empty=True)
    shell:
        "somecommand {output}"
</code></pre>
<p>Above, the output file test.txt is marked as non-empty. If the command somecommand happens to generate an empty output, the job will fail with an error listing the unexpected empty file.</p>
<p>A sha256 checksum can be compared as follows:</p>
<pre><code>my_checksum = "u98a9cjsd98saud090923ßkpoasköf9ß32"

rule NAME:
    output:
        ensure("test.txt", sha256=my_checksum)
    shell:
        "somecommand {output}"
</code></pre>
</blockquote>
<p>Is it possible to change sha256 to md5sum? The reason is that I frequently need to download data that comes with md5sums instead of sha256.</p>
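<p>In case <code>ensure()</code> only supports sha256 (I'm not certain whether an md5 option exists), one workaround is to verify the md5 inside the rule itself, e.g. with <code>md5sum -c</code>; the URL and checksum below are placeholders:</p>

```
rule download:
    output:
        "data.txt"
    params:
        md5="d41d8cd98f00b204e9800998ecf8427e"
    shell:
        "wget -O {output} https://example.com/data.txt && "
        "echo '{params.md5}  {output}' | md5sum -c -"
```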
|
<python><snakemake>
|
2023-06-05 18:58:13
| 2
| 1,095
|
justinian482
|
76,409,244
| 2,807,741
|
Android Kotlin: Exception running python script with Chaquopy
|
<p>As I haven't found an Android library to compare two .wav audio files (only found <a href="https://github.com/loisaidasam/musicg" rel="nofollow noreferrer">musicg</a> which is not working for me) I decided to try one of the many I've found for Python, in concrete, <a href="https://github.com/charlesconnell/AudioCompare" rel="nofollow noreferrer">AudioCompare</a>.</p>
<p>For that I've followed the <a href="https://chaquo.com/chaquopy/" rel="nofollow noreferrer">chaquopy</a> page instructions and was able to install v14 with no problems, so I can now run Python scripts from my Android app. The problem is that the audio-compare library I'm trying to run throws an exception:</p>
<pre><code>com.chaquo.python.PyException: OSError: This platform lacks a functioning sem_open implementation, therefore, the required synchronization primitives needed will not function, see issue 3770.
</code></pre>
<p>I don't know about Python, but I'm quite sure the exception is raised in the Matcher.py module (I don't know how to check the line number, as the exception doesn't give me that information), but I'll paste all the files just in case:</p>
<p>main.py:</p>
<pre><code>#!/usr/bin/env python
from error import *
from Matcher import Matcher
from argparse import ArgumentParser
def audio_matcher():
"""Our main control flow."""
parser = ArgumentParser(
description="Compare two audio files to determine if one "
"was derived from the other. Supports WAVE and MP3.",
prog="audiomatch")
parser.add_argument("-f", action="append",
required=False, dest="files",
default=list(),
help="A file to examine.")
parser.add_argument("-d", action="append",
required=False, dest="dirs",
default=list(),
help="A directory of files to examine. "
"Directory must contain only audio files.")
args = parser.parse_args()
from os.path import dirname, join
filename1 = join(dirname(__file__), "file1.wav")
filename2 = join(dirname(__file__), "file2.wav")
search_paths = [filename1, filename2]
#search_paths = args.dirs + args.files
if len(search_paths) != 2:
die("Must provide exactly two input files or directories.")
code = 0
# Use our matching system
matcher = Matcher(search_paths[0], search_paths[1])
results = matcher.match()
for match in results:
if not match.success:
code = 1
warn(match.message)
else:
print(match)
return code
if __name__ == "__main__":
exit(audio_matcher())
</code></pre>
<p>Matcher.py (from <a href="https://github.com/charlesconnell/AudioCompare" rel="nofollow noreferrer">https://github.com/charlesconnell/AudioCompare</a>):</p>
<pre><code>import math
import itertools
from FFT import FFT
import numpy as np
from collections import defaultdict
from InputFile import InputFile
import multiprocessing
#from multiprocessing.dummy import Pool as ThreadPool
import os
import stat
from error import *
from common import *
BUCKET_SIZE = 20
BUCKETS = 4
BITS_PER_NUMBER = int(math.ceil(math.log(BUCKET_SIZE, 2)))
assert((BITS_PER_NUMBER * BUCKETS) <= 32)
NORMAL_CHUNK_SIZE = 1024
NORMAL_SAMPLE_RATE = 44100.0
SCORE_THRESHOLD = 5
class FileResult(BaseResult):
"""The result of fingerprinting
an entire audio file."""
def __init__(self, fingerprints, file_len, filename):
super(FileResult, self).__init__(True, "")
self.fingerprints = fingerprints
self.file_len = file_len
self.filename = filename
def __str__(self):
return self.filename
class ChunkInfo(object):
"""These objects will be the values in
our master hashes that map fingerprints
to instances of this class."""
def __init__(self, chunk_index, filename):
self.chunk_index = chunk_index
self.filename = filename
def __str__(self):
return "Chunk: {c}, File: {f}".format(c=self.chunk_index, f=self.filename)
class MatchResult(BaseResult):
"""The result of comparing two files."""
def __init__(self, file1, file2, file1_len, file2_len, score):
super(MatchResult, self).__init__(True, "")
self.file1 = file1
self.file2 = file2
self.file1_len = file1_len
self.file2_len = file2_len
self.score = score
def __str__(self):
short_file1 = os.path.basename(self.file1)
short_file2 = os.path.basename(self.file2)
if self.score > SCORE_THRESHOLD:
if self.file1_len < self.file2_len:
return "MATCH {f1} {f2} ({s})".format(f1=short_file1, f2=short_file2, s=self.score)
else:
return "MATCH {f2} {f1} ({s})".format(f1=short_file1, f2=short_file2, s=self.score)
else:
return "NO MATCH"
def _to_fingerprints(freq_chunks):
"""Examine the results of running chunks of audio
samples through FFT. For each chunk, look at the frequencies
that are loudest in each "bucket." A bucket is a series of
frequencies. Return the indices of the loudest frequency in each
bucket in each chunk. These indices will be encoded into
a single number per chunk."""
chunks = len(freq_chunks)
fingerprints = np.zeros(chunks, dtype=np.uint32)
# Examine each chunk independently
for chunk in range(chunks):
fingerprint = 0
for bucket in range(BUCKETS):
start_index = bucket * BUCKET_SIZE
end_index = (bucket + 1) * BUCKET_SIZE
bucket_vals = freq_chunks[chunk][start_index:end_index]
max_index = bucket_vals.argmax()
fingerprint += (max_index << (bucket * BITS_PER_NUMBER))
fingerprints[chunk] = fingerprint
# return the indexes of the loudest frequencies
return fingerprints
def _file_fingerprint(filename):
"""Read the samples from the files, run them through FFT,
find the loudest frequencies to use as fingerprints,
turn those into a hash table.
Returns a 2-tuple containing the length
of the file in seconds, and the hash table."""
# Open the file
try:
file = InputFile(filename)
# Read samples from the input files, divide them
# into chunks by time, and convert the samples in each
# chunk into the frequency domain.
# The chunk size is dependent on the sample rate of the
# file. It is important that each chunk represent the
# same amount of time, regardless of the sample
# rate of the file.
chunk_size_adjust_factor = (NORMAL_SAMPLE_RATE / file.get_sample_rate())
fft = FFT(file, int(NORMAL_CHUNK_SIZE / chunk_size_adjust_factor))
series = fft.series()
file_len = file.get_total_samples() / file.get_sample_rate()
file.close()
# Find the indices of the loudest frequencies
# in each "bucket" of frequencies (for every chunk).
# These loud frequencies will become the
# fingerprints that we'll use for matching.
# Each chunk will be reduced to a tuple of
# 4 numbers which are 4 of the loudest frequencies
# in that chunk.
# Convert each tuple in winners to a single
# number. This number is unique for each possible
# tuple. This hopefully makes things more
# efficient.
fingerprints = _to_fingerprints(series)
except Exception as e:
return FileErrorResult(str(e))
return FileResult(fingerprints, file_len, filename)
class Matcher(object):
"""Create an instance of this class to use our matching system."""
def __init__(self, dir1, dir2):
"""The two arguments should be strings that are
file or directory paths. For files, we will simply
examine these files. For directories, we will scan
them for files."""
self.dir1 = dir1
self.dir2 = dir2
@staticmethod
def __search_dir(dir):
"""Returns the regular files residing
in the given directory, OR if the input
is a regular file, return a 1-element
list containing this file. All paths
returned will be absolute paths."""
results = []
# Get the absolute path of our search dir
abs_dir = os.path.abspath(dir)
# Get info about the directory provide
dir_stat = os.stat(abs_dir)
# If it's really a file, just
# return the name of it
if stat.S_ISREG(dir_stat.st_mode):
results.append(abs_dir)
return results
# If it's neither a file nor directory,
# bail out
if not stat.S_ISDIR(dir_stat.st_mode):
die("{d} is not a directory or a regular file.".format(d=abs_dir))
# Scan through the contents of the
# directory (non-recursively).
contents = os.listdir(abs_dir)
for node in contents:
abs_node = abs_dir + os.sep + node
node_stat = os.stat(abs_node)
# If we find a regular file, add
# that to our results list, otherwise
# warn the user.
if stat.S_ISREG(node_stat.st_mode):
results.append(abs_node)
else:
warn("An inode that is not a regular file was found at {f}".format(abs_node))
return results
@staticmethod
def __combine_hashes(files):
"""Take a list of FileResult objects and
create a hash that maps all of their fingerprints
to ChunkInfo objects."""
master = defaultdict(list)
for f in files:
for chunk in range(len(f.fingerprints)):
hash = f.fingerprints[chunk]
master[hash].append(ChunkInfo(chunk, f.filename))
return master
@staticmethod
def __file_lengths(files):
"""Take a list of FileResult objects and
create a hash that maps their filenames
to the length of each file, in seconds."""
results = {}
for f in files:
results[f.filename] = f.file_len
return results
@staticmethod
def __report_file_matches(file, master_hash, file_lengths):
"""Find files from the master hash that match
the given file.
@param file A FileResult object that is our query
@param master_hash The data to search through
@param file_lengths A hash mapping filenames to file lengths
@return A list of MatchResult objects, one for every file
that was represented in master_hash"""
results = []
# A hash that maps filenames to "offset" hashes. Then,
# an offset hash maps the difference in chunk numbers of
# the matches we will find.
# We'll map those differences to the number of matches
# found with that difference.
# This allows us to see if many fingerprints
# from different files occurred at the same
# time offsets relative to each other.
file_match_offsets = {}
for f in file_lengths:
file_match_offsets[f] = defaultdict(lambda: 0)
# For each chunk in the query file
for query_chunk_index in range(len(file.fingerprints)):
# See if that chunk's fingerprint is in our master hash
chunk_fingerprint = file.fingerprints[query_chunk_index]
if chunk_fingerprint in master_hash:
# If it is, record the offset between our query chunk
# and the found chunk
for matching_chunk in master_hash[chunk_fingerprint]:
offset = matching_chunk.chunk_index - query_chunk_index
file_match_offsets[matching_chunk.filename][offset] += 1
# For each file that was in master_hash,
# we examine the offsets of the matching fingerprints we found
for f in file_match_offsets:
offsets = file_match_offsets[f]
# The length of the shorter file is important
# to deciding whether two audio files match.
min_len = min(file_lengths[f], file.file_len)
# max_offset is the highest number of times that two matching
# hash keys were found with the same time difference
# relative to each other.
if len(offsets) != 0:
max_offset = max(offsets.values())
else:
max_offset = 0
# The score is the ratio of max_offset (as explained above)
# to the length of the shorter file. A short file that should
# match another file will result in less matching fingerprints
# than a long file would, so we take this into account. At the
# same time, a long file that should *not* match another file
# will generate a decent number of matching fingerprints by
# pure chance, so this corrects for that as well.
if min_len > 0:
score = max_offset / min_len
else:
score = 0
results.append(MatchResult(file.filename, f, file.file_len, file_lengths[f], score))
return results
def match(self):
"""Takes two AbstractInputFiles as input,
and returns a boolean as output, indicating
if the two files match."""
dir1_files = Matcher.__search_dir(self.dir1)
dir2_files = Matcher.__search_dir(self.dir2)
# Try to determine how many
# processors are in the computer
# we're running on, to determine
# the appropriate amount of parallelism
# to use
try:
cpus = multiprocessing.cpu_count()
except NotImplementedError:
cpus = 1
# Construct a process pool to give the task of
# fingerprinting audio files
pool = multiprocessing.Pool(cpus)
try:
# Get the fingerprints from each input file.
# Do this using a pool of processes in order
# to parallelize the work neatly.
map1_result = pool.map_async(_file_fingerprint, dir1_files)
map2_result = pool.map_async(_file_fingerprint, dir2_files)
# Wait for pool to finish processing
pool.close()
pool.join()
# Get results from process pool
dir1_results = map1_result.get()
dir2_results = map2_result.get()
except KeyboardInterrupt:
pool.terminate()
raise
results = []
# If there was an error in fingerprinting a file,
# add a special ErrorResult to our results list
results.extend([x for x in dir1_results if not x.success])
results.extend([x for x in dir2_results if not x.success])
# Proceed only with fingerprints that were computed
# successfully
dir1_successes = [x for x in dir1_results if x.success and x.file_len > 0]
dir2_successes = [x for x in dir2_results if x.success and x.file_len > 0]
# Empty files should match other empty files
# Our matching algorithm will not report these as a match,
# so we have to make a special case for it.
dir1_empty_files = [x for x in dir1_results if x.success and x.file_len == 0]
dir2_empty_files = [x for x in dir2_results if x.success and x.file_len == 0]
# Every empty file should match every other empty file
for empty_file1, empty_file2 in itertools.product(dir1_empty_files, dir2_empty_files):
results.append(MatchResult(empty_file1.filename, empty_file2.filename, empty_file1.file_len, empty_file2.file_len, SCORE_THRESHOLD + 1))
# This maps filenames to the lengths of the files
dir1_file_lengths = Matcher.__file_lengths(dir1_successes)
dir2_file_lengths = Matcher.__file_lengths(dir2_successes)
# Get the combined sizes of the files in our two search
# paths
dir1_size = sum(dir1_file_lengths.values())
dir2_size = sum(dir2_file_lengths.values())
# Whichever search path has more data in it is the
# one we want to put in the master hash, and then query
# via the other one
if dir1_size < dir2_size:
dir_successes = dir1_successes
master_hash = Matcher.__combine_hashes(dir2_successes)
file_lengths = dir2_file_lengths
else:
dir_successes = dir2_successes
master_hash = Matcher.__combine_hashes(dir1_successes)
file_lengths = dir1_file_lengths
# Loop through each file in the first search path our
# program was given.
for file in dir_successes:
# For each file, check its fingerprints against those in the
# second search path. For matching
# fingerprints, look up the the times (chunk number)
# that the fingerprint occurred
# in each file. Store the time differences in
# offsets. The point of this is to see if there
# are many matching fingerprints at the
# same time difference relative to each
# other. This indicates that the two files
# contain similar audio.
file_matches = Matcher.__report_file_matches(file, master_hash, file_lengths)
results.extend(file_matches)
return results
</code></pre>
<p>Maybe after all this effort it won't work, but I'd at least like to give it a try.</p>
<p>Any help getting this matcher script to run will be much appreciated.</p>
<p>Full exception:</p>
<pre><code>0 = {StackTraceElement@19282} "<python>.java.android.__init__(__init__.py:140)"
1 = {StackTraceElement@19283} "<python>.multiprocessing.synchronize.__init__(synchronize.py:57)"
2 = {StackTraceElement@19284} "<python>.multiprocessing.synchronize.__init__(synchronize.py:162)"
3 = {StackTraceElement@19285} "<python>.multiprocessing.context.Lock(context.py:68)"
4 = {StackTraceElement@19286} "<python>.multiprocessing.queues.__init__(queues.py:336)"
5 = {StackTraceElement@19287} "<python>.multiprocessing.context.SimpleQueue(context.py:113)"
6 = {StackTraceElement@19288} "<python>.multiprocessing.pool._setup_queues(pool.py:343)"
7 = {StackTraceElement@19289} "<python>.multiprocessing.pool.__init__(pool.py:191)"
8 = {StackTraceElement@19290} "<python>.multiprocessing.context.Pool(context.py:119)"
9 = {StackTraceElement@19291} "<python>.Matcher.match(Matcher.py:306)"
10 = {StackTraceElement@19292} "<python>.main.audio_matcher(main.py:38)"
11 = {StackTraceElement@19293} "<python>.chaquopy_java.call(chaquopy_java.pyx:354)"
12 = {StackTraceElement@19294} "<python>.chaquopy_java.Java_com_chaquo_python_PyObject_callAttrThrowsNative(chaquopy_java.pyx:326)"
13 = {StackTraceElement@19295} "com.chaquo.python.PyObject.callAttrThrowsNative(Native Method)"
14 = {StackTraceElement@19296} "com.chaquo.python.PyObject.callAttrThrows(PyObject.java:232)"
15 = {StackTraceElement@19297} "com.chaquo.python.PyObject.callAttr(PyObject.java:221)"
16 = {StackTraceElement@19298} "com.testmepracticetool.toeflsatactexamprep.ui.activities.main.MainActivity.startActivity(MainActivity.kt:104)"
17 = {StackTraceElement@19299} "com.testmepracticetool.toeflsatactexamprep.ui.activities.main.MainActivity.onCreate(MainActivity.kt:80)"
18 = {StackTraceElement@19300} "android.app.Activity.performCreate(Activity.java:7994)"
19 = {StackTraceElement@19301} "android.app.Activity.performCreate(Activity.java:7978)"
20 = {StackTraceElement@19302} "android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1309)"
21 = {StackTraceElement@19303} "android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3422)"
22 = {StackTraceElement@19304} "android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3601)"
23 = {StackTraceElement@19305} "android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:85)"
24 = {StackTraceElement@19306} "android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:135)"
25 = {StackTraceElement@19307} "android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:95)"
26 = {StackTraceElement@19308} "android.app.ActivityThread$H.handleMessage(ActivityThread.java:2066)"
27 = {StackTraceElement@19309} "android.os.Handler.dispatchMessage(Handler.java:106)"
28 = {StackTraceElement@19310} "android.os.Looper.loop(Looper.java:223)"
29 = {StackTraceElement@19311} "android.app.ActivityThread.main(ActivityThread.java:7656)"
30 = {StackTraceElement@19312} "java.lang.reflect.Method.invoke(Native Method)"
31 = {StackTraceElement@19313} "com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592)"
32 = {StackTraceElement@19314} "com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947)"
</code></pre>
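<p>For context: Android's libc does not implement POSIX named semaphores (<code>sem_open</code>), so the process-based <code>multiprocessing.Pool</code> cannot be constructed there. A minimal sketch of the thread-based alternative (the worker function here is a hypothetical stand-in for the real per-chunk work in <code>Matcher.match</code>):</p>
<pre><code># multiprocessing.dummy mirrors the multiprocessing API but is backed
# by threads, so it needs no SemLock/sem_open support from the OS.
from multiprocessing.dummy import Pool

def fingerprint(chunk):
    # placeholder for the real per-chunk work done in Matcher.match
    return sum(chunk)

with Pool(4) as pool:
    results = pool.map(fingerprint, [[1, 2], [3, 4], [5, 6]])
print(results)  # [3, 7, 11]
</code></pre>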
<hr />
<p>Edit 1: New exception after @mhsmith's great help:</p>
<p>After replacing the import with <code>multiprocessing.dummy</code>, I now get the following exception:</p>
<pre><code>com.chaquo.python.PyException: AttributeError: module 'multiprocessing.dummy' has no attribute 'cpu_count'
0 = {StackTraceElement@19430} "<python>.Matcher.match(Matcher.py:300)"
1 = {StackTraceElement@19431} "<python>.main.audio_matcher(main.py:38)"
2 = {StackTraceElement@19432} "<python>.chaquopy_java.call(chaquopy_java.pyx:354)"
3 = {StackTraceElement@19433} "<python>.chaquopy_java.Java_com_chaquo_python_PyObject_callAttrThrowsNative(chaquopy_java.pyx:326)"
4 = {StackTraceElement@19434} "com.chaquo.python.PyObject.callAttrThrowsNative(Native Method)"
5 = {StackTraceElement@19435} "com.chaquo.python.PyObject.callAttrThrows(PyObject.java:232)"
6 = {StackTraceElement@19436} "com.chaquo.python.PyObject.callAttr(PyObject.java:221)"
7 = {StackTraceElement@19437} "com.testmepracticetool.toeflsatactexamprep.ui.activities.main.MainActivity.startActivity(MainActivity.kt:104)"
8 = {StackTraceElement@19438} "com.testmepracticetool.toeflsatactexamprep.ui.activities.main.MainActivity.onCreate(MainActivity.kt:80)"
9 = {StackTraceElement@19439} "android.app.Activity.performCreate(Activity.java:7994)"
10 = {StackTraceElement@19440} "android.app.Activity.performCreate(Activity.java:7978)"
11 = {StackTraceElement@19441} "android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1309)"
12 = {StackTraceElement@19442} "android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3422)"
13 = {StackTraceElement@19443} "android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3601)"
14 = {StackTraceElement@19444} "android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:85)"
15 = {StackTraceElement@19445} "android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:135)"
16 = {StackTraceElement@19446} "android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:95)"
17 = {StackTraceElement@19447} "android.app.ActivityThread$H.handleMessage(ActivityThread.java:2066)"
18 = {StackTraceElement@19448} "android.os.Handler.dispatchMessage(Handler.java:106)"
19 = {StackTraceElement@19449} "android.os.Looper.loop(Looper.java:223)"
20 = {StackTraceElement@19450} "android.app.ActivityThread.main(ActivityThread.java:7656)"
21 = {StackTraceElement@19451} "java.lang.reflect.Method.invoke(Native Method)"
22 = {StackTraceElement@19452} "com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592)"
23 = {StackTraceElement@19453} "com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947)"
</code></pre>
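<p>This second traceback occurs because <code>multiprocessing.dummy</code> does not re-export <code>cpu_count</code>. A small sketch of the likely fix, assuming <code>Matcher.py</code> was calling <code>cpu_count()</code> on the imported module:</p>
<pre><code>import os
from multiprocessing.dummy import Pool

# multiprocessing.dummy has no cpu_count(); take the worker count from
# os.cpu_count() (or multiprocessing.cpu_count()) instead.
workers = os.cpu_count() or 1

with Pool(workers) as pool:
    doubled = pool.map(lambda x: 2 * x, range(4))
print(doubled)  # [0, 2, 4, 6]
</code></pre>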
|
<python><android><kotlin><chaquopy>
|
2023-06-05 18:53:59
| 1
| 2,974
|
Diego Perez
|
76,409,097
| 525,865
|
driver = webdriver.Chrome() :: issues with a selenium approach - how to work around
|
<p>Well - I am trying to figure out the simplest approach to gather data from clutch.co.</p>
<ul>
<li>I have tried various approaches to gather data from the website (clutch.co), but all of them seem to fail:</li>
</ul>
<p>see here</p>
<pre><code>from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Chrome()
url = 'https://clutch.co/it-services/msp'
driver.get(url=url)
soup = BeautifulSoup(driver.page_source,"lxml")
links = []
for l in soup.find_all('li',class_='website-link website-link-a'):
results = (l.a.get('href'))
links.append(results)
print(links, "\n", "Count links - ", len(links))
</code></pre>
<p>throws back this error:</p>
<pre><code>---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
<ipython-input-4-4f37092106f4> in <cell line: 4>()
2 from selenium import webdriver
3
----> 4 driver = webdriver.Chrome()
5
6 url = 'https://clutch.co/it-services/msp'
5 frames
/usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
243 alert_text = value["alert"].get("text")
244 raise exception_class(message, screen, stacktrace, alert_text) # type: ignore[call-arg] # mypy is not smart enough here
--> 245 raise exception_class(message, screen, stacktrace)
WebDriverException: Message: unknown error: cannot find Chrome binary
Stacktrace:
#0 0x55a6ebf424e3 <unknown>
#1 0x55a6ebc71c76 <unknown>
#2 0x55a6ebc98757 <unknown>
#3 0x55a6ebc97029 <unknown>
#4 0x55a6ebcd5ccc <unknown>
#5 0x55a6ebcd547f <unknown>
#6 0x55a6ebcccde3 <unknown>
#7 0x55a6ebca22dd <unknown>
#8 0x55a6ebca334e <unknown>
#9 0x55a6ebf023e4 <unknown>
#10 0x55a6ebf063d7 <unknown>
#11 0x55a6ebf10b20 <unknown>
#12 0x55a6ebf07023 <unknown>
#13 0x55a6ebed51aa <unknown>
#14 0x55a6ebf2b6b8 <unknown>
#15 0x55a6ebf2b847 <unknown>
#16 0x55a6ebf3b243 <unknown>
#17 0x7ffb30c27609 start_thread
</code></pre>
<p>How can I work around this?</p>
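<p>The error means Selenium found no Chrome binary on the machine (common on Colab/headless servers, which the <code>/usr/local/lib/python3.10/dist-packages</code> path in the traceback suggests). A sketch of one workaround: install Chromium (e.g. <code>apt-get install -y chromium-browser chromium-chromedriver</code> on a Debian-based box) and point Selenium at it explicitly. The binary path below is an assumption and may differ on your system:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def make_driver(binary="/usr/bin/chromium-browser"):
    # binary path is an assumed location for a Debian/Colab machine
    options = Options()
    options.binary_location = binary
    options.add_argument("--headless=new")        # no display on servers
    options.add_argument("--no-sandbox")          # required in many containers
    options.add_argument("--disable-dev-shm-usage")
    return webdriver.Chrome(options=options)

if __name__ == "__main__":
    driver = make_driver()
    try:
        driver.get("https://clutch.co/it-services/msp")
        print(driver.title)
    finally:
        driver.quit()
</code></pre>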
|
<python><selenium-webdriver><selenium-chromedriver>
|
2023-06-05 18:27:17
| 1
| 1,223
|
zero
|