| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,334,862
| 1,436,800
|
How to use an API-provided proxy with rotating proxy middleware in Scrapy?
|
<p>I'm trying to scrape websites using Scrapy and want to use an API-provided proxy with the rotating proxy middleware. The proxy is provided by an API endpoint that returns a new proxy IP with each request.
I have already installed the scrapy-rotating-proxies package and configured it with the rotating proxy list in the settings.py file. However, I'm unsure how to integrate the API-provided proxy into the rotating proxy middleware.</p>
<h1>settings.py</h1>
<pre><code>ROTATING_PROXY_LIST = [
    'http://api.scrape.do/?token=MyToken&url=https://httpbin.org/ip?json',
]
</code></pre>
<p>TIA!</p>
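For context, a sketch of one possible direction: an endpoint like the one above is not itself a proxy address but a wrapper API, so (assuming it follows the <code>?token=...&url=...</code> pattern shown in settings.py) one option is to build the wrapped request URL per target rather than putting the endpoint in <code>ROTATING_PROXY_LIST</code>. The helper name here is hypothetical:

```python
from urllib.parse import urlencode

API_ENDPOINT = "http://api.scrape.do/"  # from the question's settings.py

def wrap_in_proxy_api(target_url: str, token: str) -> str:
    # Hypothetical helper: instead of listing the API endpoint as a proxy,
    # request the wrapped URL directly; the API then fetches target_url
    # for you, rotating its own IPs on each request.
    return API_ENDPOINT + "?" + urlencode({"token": token, "url": target_url})

print(wrap_in_proxy_api("https://httpbin.org/ip", "MyToken"))
```

Requests built this way would go through Scrapy as ordinary URLs, with no proxy middleware involved.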
|
<python><proxy><scrapy>
|
2023-05-25 17:23:33
| 1
| 315
|
Waleed Farrukh
|
76,334,816
| 3,911,443
|
Dynamically add tab in Textual
|
<p>In <a href="https://textual.textualize.io/" rel="nofollow noreferrer">Textual</a> I'm trying to dynamically add a tab to an application. Here's the full code:</p>
<pre><code>from textual import on
from textual.app import App, ComposeResult
from textual.widgets import DataTable, Select
from textual.widgets import Footer
from textual.widgets import TabbedContent, TabPane, Static

ROWS1 = [
    ("X", "Y"),
    ("A", "B"),
    ("C", "D")
]
ROWS2 = [
    ("X", "Y"),
    ("AA", "BB"),
    ("CC", "DD")
]
ROWS3 = [
    ("1", "2"),
    ("12", "13"),
    ("14", "15")
]


class DynamicTabApp(App):
    def compose(self) -> ComposeResult:
        # Footer to show keys
        yield Selector()
        with TabbedContent(initial="tab1"):
            with TabPane("Tab 1", id="tab1"):  # First tab
                table = DataTable(id="table1")
                table.add_columns(*ROWS1[0])
                table.add_rows(ROWS1[1:])
                yield table
            with TabPane("Tab 2", id="tab2"):
                table = DataTable(id="table2")
                table.add_columns(*ROWS2[0])
                table.add_rows(ROWS2[1:])
                yield table
        yield Footer()

    def action_show_tab(self, tab: str):
        """Switch to a new tab."""
        self.get_child_by_type(TabbedContent).active = tab


class SelectTab(Static):
    DEFAULT_CSS = """
    Screen {
        align: center top;
    }

    Select {
        width: 60;
        margin: 2;
    }
    """

    def compose(self) -> ComposeResult:
        yield Select([('Tab 1', 'tab1'), ('Tab 2', 'tab2'), ('Tab 3', 'tab3')])

    @on(Select.Changed)
    async def select_changed(self, event: Select.Changed):
        if event.value == 'tab3':
            tab_pane = TabPane("Tab 3", id="tab3")
            table = DataTable(id="table3")
            table.add_columns(*ROWS3[0])
            table.add_rows(ROWS3[1:])
            await tab_pane.mount(table)
            tabbed_content = self.app.query_one(TabbedContent)
            await tabbed_content.mount(tab_pane)
        else:
            self.app.get_child_by_type(TabbedContent).active = event.value


class Selector(Static):
    def compose(self) -> ComposeResult:
        """Create child widgets of a stopwatch."""
        yield SelectTab()


if __name__ == "__main__":
    app = DynamicTabApp()
    app.run()
</code></pre>
<p>When I start the application it looks promising:
<a href="https://i.sstatic.net/O7ynV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O7ynV.png" alt="enter image description here" /></a></p>
<p>In the select box there is <code>Tab 3</code>, which is not yet a tab. If it gets selected, I try to add it dynamically by calling <code>mount</code> on the <code>tabbed_content</code>:</p>
<pre><code>tab_pane = TabPane("Tab 3", id="tab3")
table = DataTable(id="table3")
table.add_columns(*ROWS3[0])
table.add_rows(ROWS3[1:])
await tab_pane.mount(table)
tabbed_content = self.app.query_one(TabbedContent)
await tabbed_content.mount(tab_pane)
</code></pre>
<p>After selecting <code>Tab 3</code>, what I get is this:</p>
<p><a href="https://i.sstatic.net/76YE9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/76YE9.png" alt="enter image description here" /></a></p>
<p>This does not add another tab; instead, it appends the contents to the bottom of the screen.</p>
|
<python><textual>
|
2023-05-25 17:17:51
| 1
| 390
|
pcauthorn
|
76,334,787
| 7,655,687
|
Python given dict of old index: new index move multiple elements in a list
|
<p>In Python 3, what would be the best way to move multiple, potentially non-contiguous elements to new, potentially non-contiguous indexes, given a <code>dict</code> of <code>{old index: new index, old index: new index, old index: new index}</code>?</p>
<p><strong>Important Note</strong>: the <code>dict</code> may not contain the new positions of all elements; this is why all the examples after the first one fail.</p>
<p>Edit: Sorry, I forgot to mention that you don't have to worry about checking that all the indexes in <code>new_idxs</code> are valid and within the bounds of <code>seq</code>. The keys of <code>new_idxs</code> are also already in sorted order.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any


def move_elements(
    seq: list[Any],
    new_idxs: dict,
) -> list[Any]:
    new = []
    idx = 0
    done = set()
    while len(new) < len(seq):
        if idx in new_idxs and idx not in done:
            new.append(seq[new_idxs[idx]])
            done.add(idx)
        elif idx not in done:
            new.append(seq[idx])
            idx += 1
        else:
            idx += 1
    return new
# works
new_idxs = {0: 1, 1: 0}
seq = [0, 1]
seq = move_elements(seq, new_idxs)
print ("\nexpected:", [1, 0])
print ("actual :", seq)
# expected output
# [1, 0]
# doesn't work
new_idxs = {3: 0, 5: 1}
seq = [0, 1, 2, 3, 4, 5, 6, 7]
seq = move_elements(seq, new_idxs)
print ("\nexpected:", [3, 5, 0, 1, 2, 4, 6, 7])
print ("actual :", seq)
# expected output
# [3, 5, 0, 1, 2, 4, 6, 7]
# doesn't work
new_idxs = {3: 6, 5: 7}
seq = [0, 1, 2, 3, 4, 5, 6, 7]
seq = move_elements(seq, new_idxs)
print ("\nexpected:", [0, 1, 2, 4, 6, 7, 3, 5])
print ("actual :", seq)
# expected output
# [0, 1, 2, 4, 6, 7, 3, 5]
new_idxs = {3: 1, 7: 4}
seq = [0, 1, 2, 3, 4, 5, 6, 7]
seq = move_elements(seq, new_idxs)
print ("\nexpected:", [0, 3, 1, 2, 7, 4, 5, 6])
print ("actual :", seq)
# expected output
# [0, 3, 1, 2, 7, 4, 5, 6]
new_idxs = {0: 3, 3: 1, 7: 4}
seq = [0, 1, 2, 3, 4, 5, 6, 7]
seq = move_elements(seq, new_idxs)
print ("\nexpected:", [1, 3, 2, 0, 7, 4, 5, 6])
print ("actual :", seq)
# expected output
# [0, 1, 2, 3, 4, 5, 6, 7]
# [1, 3, 2, 0, 7, 4, 5, 6]
</code></pre>
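One possible sketch of the placement logic the examples describe (place the moved elements first, then fill the gaps with the unmoved elements in their original order); the function name is mine, and the None-sentinel approach assumes <code>seq</code> itself contains no <code>None</code> values:

```python
def reorder(seq, new_idxs):
    # Place each moved element at its target index first.
    result = [None] * len(seq)  # None marks a still-empty slot
    for old, new in new_idxs.items():
        result[new] = seq[old]
    # Fill the remaining slots with the unmoved elements, in order.
    rest = iter(x for i, x in enumerate(seq) if i not in new_idxs)
    return [x if x is not None else next(rest) for x in result]

print(reorder([0, 1, 2, 3, 4, 5, 6, 7], {3: 0, 5: 1}))  # [3, 5, 0, 1, 2, 4, 6, 7]
```

This reproduces the expected output of every example above, including the partial-mapping cases.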
|
<python><python-3.x><list>
|
2023-05-25 17:14:18
| 2
| 2,005
|
ragardner
|
76,334,733
| 2,010,880
|
How can I avoid main plot stopping when closing a second figure?
|
<p>I have a small Python script that displays a Matplotlib plot. This main plot is continuously updated in a loop.
In this plot's figure, I have a button that, when pressed, executes a callback that opens a second plot in a second figure. The main plot continues to update as expected while this "pop-up" figure is open.</p>
<p>The issue arises when I close this second figure.
I would have expected the main loop to continue to update the main figure window, but it stops execution.</p>
<p>Even more strangely, when I close the main figure window, the update loop continues again.</p>
<p>Does anyone have any suggestion of how I can get the main loop to continue to update the main figure window after the second figure window is closed?</p>
<pre><code>import sys
import time
import numpy as np
import random
import matplotlib.pyplot as plt
from matplotlib.widgets import Button


def main():
    # Create a figure and axis that we will display the main window on
    fig, ax = plt.subplots()
    ax_pop = fig.add_axes([0.2, 0.14, 0.2, 0.04])
    pop_up_window_button = Button(ax=ax_pop, label='Open second window', hovercolor='0.975')

    # A callback function for the button press event
    def pop_up_window(event):
        """
        Pop up a second window
        """
        # Plot some random values to a matplotlib graph
        print("Plotting some more random values in second window")
        x = [random.randint(1, 100) for n in range(100)]
        y = [random.randint(1, 100) for n in range(100)]
        fig1, ax1 = plt.subplots()
        ax1.scatter(x, y)
        ax1.axis('off')
        plt.pause(0.1)  # Need to pause to allow the image to be displayed

    # Add a callback to the button press event
    pop_up_window_button.on_clicked(pop_up_window)

    live_view = True
    while live_view:
        # Plot some random values to a matplotlib graph
        print("Plotting some random values")
        x = [random.randint(1, 100) for n in range(100)]
        y = [random.randint(1, 100) for n in range(100)]
        ax.clear()
        ax.scatter(x, y)
        ax.axis('off')
        ax.set_title('Live view')
        plt.pause(0.01)  # Need to pause to allow the image to be displayed
        time.sleep(1)
    print('Done')
    return True


# Main entry point of script
if __name__ == '__main__':
    if main():
        sys.exit(0)
    else:
        sys.exit(1)
</code></pre>
<p>The main figure that is plotted:
<a href="https://i.sstatic.net/jWpma.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jWpma.png" alt="enter image description here" /></a></p>
<p>The second figure that is plotted:
<a href="https://i.sstatic.net/RU3xj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RU3xj.png" alt="enter image description here" /></a></p>
<p>When this second figure is closed, the main plot stops updating.</p>
<pre><code>Plotting some random values
Plotting some random values
Plotting some random values
Plotting some random values
Plotting some random values
Plotting some random values
Plotting some more random values in second window < We clicked on the button here>
Plotting some random values
Plotting some random values
Plotting some random values
< second figure is closed. Console output is paused, main plot stops updating
....
....
If we close the main figure, the console output starts up again>
Plotting some random values
Plotting some random values
Plotting some random values
Plotting some random values
Plotting some random values
</code></pre>
|
<python><matplotlib><user-interface><button><callback>
|
2023-05-25 17:07:12
| 0
| 302
|
Diarmaid O Cualain
|
76,334,702
| 8,442,560
|
How to get the integer value out of a string that might not always contain a number
|
<p>My code now contains the hilarious list comprehension below.</p>
<pre><code>[int([y or 0 for y in ["".join(re.findall(r'[0-9*]', x))]][0]) for x in [acc['id_number']]][0]
</code></pre>
<p>The <code>int</code> function in Python is not that forgiving with such input.</p>
<p>The issue is that I wish there were a cleaner way to get an integer out of <code>acc['id_number']</code>. The code works fine.</p>
<p>I want to insert the data into an SQL database, so I want <code>acc['id_number']</code> to be strictly an integer.</p>
<p><code>id_number</code> can be <code>'1234'</code>, <code>'na'</code>, or <code>'gh1234'</code>. I want the number out of these strings, and 0 where the string contains no number, as in the case of <code>'na'</code>.</p>
<p>If <code>int</code> worked on strings such as <code>'gh000'</code> or even <code>''</code>, this line wouldn't have gotten so long.</p>
<pre class="lang-py prettyprint-override"><code>import re
raw_accounts = [
{'account_name': "someone", 'id_number': "2432"},
{'account_name': "another person", 'id_number': "na"},
{'account_name': "some other person", 'id_number': "gh134"},
{'account_name': "another one", 'id_number': "24124"}
]
data = [
(
acc['account_name'],
[int([y or 0 for y in ["".join(re.findall(r'[0-9*]', x))]][0]) for x in [acc['id_number']]][0]
) for acc in raw_accounts
]
print(data)
</code></pre>
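For comparison, the same digit extraction can be written as a small named helper (the function name is mine), which replaces the nested comprehension entirely:

```python
import re

def to_int(s: str) -> int:
    """Return the digits of s as an int, or 0 if there are none."""
    digits = re.sub(r'\D', '', s)  # strip every non-digit character
    return int(digits) if digits else 0

print([to_int(s) for s in ['2432', 'na', 'gh134', '']])  # [2432, 0, 134, 0]
```

In the original snippet, <code>to_int(acc['id_number'])</code> would then take the place of the long comprehension.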
|
<python>
|
2023-05-25 17:03:36
| 2
| 652
|
surge10
|
76,334,678
| 11,665,178
|
StringParam not working in python Cloud Function Gen2 for global vars
|
<p>I am using the new Cloud Functions gen 2 in Python, following this <a href="https://firebase.google.com/docs/functions/config-env?gen=2nd" rel="nofollow noreferrer">guide</a> and this code sample:</p>
<pre><code>from firebase_functions import https_fn
from firebase_functions.params import IntParam, StringParam

MIN_INSTANCES = IntParam("HELLO_WORLD_MIN_INSTANCES")
WELCOME_MESSAGE = StringParam("WELCOME_MESSAGE")


# To use configured parameters inside the config for a function, provide them
# directly. To use them at runtime, call .value() on them.
@https_fn.on_request(min_instances=MIN_INSTANCES)
def hello_world(req):
    return https_fn.Response(f'{WELCOME_MESSAGE.value()}! I am a function!')
</code></pre>
<p>I get runtime errors because:</p>
<ul>
<li>First, <code>.value()</code> fails because the value is a <code>str</code>, so it's not callable; we need to use <code>.value</code></li>
<li>Second, <code>WELCOME_MESSAGE.value</code> does not seem to be loaded in time for global vars</li>
</ul>
<p>In my code :</p>
<pre><code>db_url = StringParam("DB_URL")
db_name = StringParam("DB_NAME")
client = MongoClient(db_url.value)
db = client.get_database(db_name.value)
</code></pre>
<p>Throws an exception <code>pymongo.errors.ConfigurationError: Empty host (or extra comma in host list)</code>.</p>
<p>The var does seem to be loaded, judging from this message: <code>i functions: Loaded environment variables from .env.</code></p>
<p>Note: if I <code>print</code> the value inside a function, it works. So are parameters simply not available for global vars?</p>
<p>Thanks in advance</p>
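Since the value does resolve inside a function, one generic workaround is to defer creating the client until first use instead of building it at module import time. A plain-Python sketch of that lazy-initialization pattern (the <code>Lazy</code> class and its usage with pymongo are my own illustration, not Firebase API):

```python
class Lazy:
    """Defer creating a resource until it is first used inside a handler."""
    def __init__(self, factory):
        self._factory = factory
        self._obj = None

    def get(self):
        if self._obj is None:
            self._obj = self._factory()  # runs on first call, not at import
        return self._obj

# Hypothetical usage (not executed here):
#   client = Lazy(lambda: MongoClient(db_url.value))
#   ...inside the function body: client.get().get_database(db_name.value)

calls = []
demo = Lazy(lambda: calls.append(1) or "resource")
```

By the time <code>get()</code> first runs, the function is handling a request and the params should already be populated.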
|
<python><firebase><google-cloud-functions>
|
2023-05-25 17:00:13
| 2
| 2,975
|
Tom3652
|
76,334,532
| 1,714,385
|
How to fix ConvergenceWarning in Gaussian process regression in sklearn?
|
<p>I am trying to use fit a sklearn Gaussian process regressor to my data. The data has periodicity but no mean trend, so I defined a kernel similarly to the <a href="https://scikit-learn.org/stable/auto_examples/gaussian_process/plot_gpr_co2.html" rel="nofollow noreferrer">tutorial on the Mauna Loa data</a>, without the long term trend, as follows:</p>
<pre><code>from sklearn.gaussian_process.kernels import (RBF, ExpSineSquared,
                                              RationalQuadratic, WhiteKernel)
from sklearn.gaussian_process import GaussianProcessRegressor as GPR
import numpy as np

# Models the periodicity
seasonal_kernel = (
    2.0**2
    * RBF(length_scale=100.0, length_scale_bounds=(1e-2, 1e7))
    * ExpSineSquared(length_scale=1.0, length_scale_bounds=(1e-2, 1e7),
                     periodicity=1.0, periodicity_bounds="fixed")
)

# Models small variations
irregularities_kernel = 0.5**2 * RationalQuadratic(
    length_scale=1.0, length_scale_bounds=(1e-2, 1e7), alpha=1.0)

# Models noise
noise_kernel = 0.1**2 * RBF(length_scale=0.1, length_scale_bounds=(1e-2, 1e7)) + \
    WhiteKernel(noise_level=0.1**2, noise_level_bounds=(1e-5, 1e5))

co2_kernel = seasonal_kernel + irregularities_kernel + noise_kernel
</code></pre>
<p>Then I use the kernel to define a regressor and fit the data:</p>
<pre><code>gpr = GPR(n_restarts_optimizer=10, kernel=co2_kernel, alpha=150, normalize_y=False)
for x, y in zip(x_list, y_list):
    gpr.fit(x, y)
</code></pre>
<p>However, during fit I get multiple <code>ConvergenceWarning</code>s. They all look like the following:</p>
<pre><code>C:\Users\user\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\sklearn\gaussian_process\kernels.py:430: ConvergenceWarning: The optimal value found for dimension 0 of parameter k1__k2__k1__constant_value is close to the specified upper bound 100000.0. Increasing the bound and calling fit again may find a better value.
C:\Users\user\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\sklearn\gaussian_process\kernels.py:430: ConvergenceWarning: The optimal value found for dimension 0 of parameter k2__k1__k1__constant_value is close to the specified upper bound 100000.0. Increasing the bound and calling fit again may find a better value.
C:\Users\user\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\sklearn\gaussian_process\kernels.py:430: ConvergenceWarning: The optimal value found for dimension 0 of parameter k1__k2__k2__alpha is close to the specified upper bound 100000.0. Increasing the bound and calling fit again may find a better value.
C:\Users\user\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\sklearn\gaussian_process\kernels.py:430: ConvergenceWarning: The optimal value found for dimension 0 of parameter k1__k1__k1__k1__constant_value is close to the specified upper bound 100000.0. Increasing the bound and calling fit again may find a better value.
C:\Users\user\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\sklearn\gaussian_process\kernels.py:420: ConvergenceWarning: The optimal value found for dimension 0 of parameter k1__k1__k1__k2__length_scale is close to the specified lower bound 0.01. Decreasing the bound and calling fit again may find a better value.
C:\Users\user\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\sklearn\gaussian_process\kernels.py:430: ConvergenceWarning: The optimal value found for dimension 0 of parameter k1__k2__k1__constant_value is close to the specified upper bound 100000.0. Increasing the bound and calling fit again may find a better value.
</code></pre>
<p>I managed to fix some of them by blanket-adding the <code>length_scale_bounds</code> argument to all of the functions within the kernel, but I'm not sure whether I've set overly wide bounds that needlessly degrade execution time for parts of the kernel that were running just fine, and I don't know how to remedy the problems with <code>alpha</code> or the constant values. Looking up the errors online does not provide any help.</p>
<p>I know that the model is not being fitted properly because the Gaussian process regressor performs far worse than a simple SVR, despite the latter being much faster. Does anybody know how I can:</p>
<ol>
<li>Associate each warning to a specific subkernel within the wider kernel?</li>
<li>Fix the warnings for <code>alpha</code> and the constant values?</li>
</ol>
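Regarding (1), the names in the warnings appear to be the nested parameter keys returned by the composite kernel's <code>get_params()</code>, so a sketch of mapping a warning back to its subkernel (using a smaller composite kernel as a stand-in for <code>co2_kernel</code>) could look like this:

```python
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# A small composite kernel; the same "k1__k2__..." naming applies to co2_kernel.
kernel = 2.0**2 * RBF(length_scale=100.0) + WhiteKernel(noise_level=0.01)

params = kernel.get_params()
# A warning's parameter name is a get_params() key; strip the trailing
# leaf name (e.g. "constant_value") to look up the offending subkernel.
warn_param = "k1__k1__constant_value"
subkernel = params[warn_param.rsplit("__", 1)[0]]
print(type(subkernel).__name__)
```

Here <code>k1__k1</code> resolves to the <code>ConstantKernel</code> created by the <code>2.0**2 *</code> factor, which is the kind of constant-value parameter the warnings complain about.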
|
<python><scikit-learn><gaussian-process>
|
2023-05-25 16:41:12
| 1
| 4,417
|
Ferdinando Randisi
|
76,334,511
| 9,731,347
|
Seaborn convert BarPlot to histogram-like chart
|
<p>I have a pandas <code>DataFrame</code> that looks like this, and I'm using it to graph the life of a character over period of days. The <code>days</code> column is really "days since birth." For this example, the character was born on May 26th, 2023.</p>
<pre><code> days health months
0 0 30 May 23
1 1 30
2 2 20
3 3 20
4 4 10
5 5 10
6 6 10 Jun 23
7 7 10
8 8 10
9 9 0
</code></pre>
<p>This is the seaborn <code>BarPlot</code>.</p>
<p><a href="https://i.sstatic.net/uHCrc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uHCrc.png" alt="enter image description here" /></a></p>
<p><strong>I have significantly simplified the number of days</strong> the character is alive for the sake of reproducibility, but in my normal code, the number of days is in the hundreds, <em>possibly thousands</em>.</p>
<p>Here is a graph of my normal case.</p>
<p><a href="https://i.sstatic.net/tfhuz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tfhuz.png" alt="enter image description here" /></a></p>
<p>As you can see, this graph is much more overloaded with bars, which seems to be impacting performance pretty negatively, with only a few hundred days.</p>
<p>So my question is this: can I convert the <code>BarPlot</code> to the seaborn equivalent of a histogram with the way my <code>DataFrame</code> is set up?</p>
<p>The ideal would look something like the image below (ignore my bad graphic design job), <strong>The red lines are only to highlight each section of the histogram. I am not looking to add those red lines.</strong></p>
<p>Note, I also need to be able to keep the month labels in the same place as they are above, since there could be a section of time where the character's health stays the same for multiple months.</p>
<p><a href="https://i.sstatic.net/mq83I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mq83I.png" alt="enter image description here" /></a></p>
<p>My code is minimal for the chart, but the size of the <code>DataFrame</code> seems to be causing the slow rendering time.</p>
<pre><code>ax = sns.barplot(dataframe, x='days', y='health', color='blue')
ax.set_xticklabels(dataframe.months)
plt.xticks(rotation=45)
plt.show()
</code></pre>
<p>Here's a line for the example <code>DataFrame</code>, for easy reproducibility:</p>
<pre><code>df = pd.DataFrame({'days': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 'health': [30, 30, 20, 20, 10, 10, 10, 10, 10, 0], 'months': ["May 23", " ", " ", " ", " ", " ", "Jun 23", " ", " ", " "]})
</code></pre>
<p>Thank you in advance.</p>
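One direction that might help, independent of seaborn: collapse consecutive days with the same health value into run-length segments first, so there is one bar per segment instead of one per day. A sketch with the example <code>DataFrame</code> (plotting is left out; the segment table is what matters):

```python
import pandas as pd

df = pd.DataFrame({'days': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
                   'health': [30, 30, 20, 20, 10, 10, 10, 10, 10, 0],
                   'months': ["May 23", " ", " ", " ", " ", " ",
                              "Jun 23", " ", " ", " "]})

# Label each run of consecutive equal health values, then aggregate:
# one row (and later one bar) per run instead of one bar per day.
run_id = (df['health'] != df['health'].shift()).cumsum()
segments = df.groupby(run_id).agg(start=('days', 'min'),
                                  end=('days', 'max'),
                                  health=('health', 'first'))
segments['width'] = segments['end'] - segments['start'] + 1
print(segments)
```

These segments could then be drawn with plain matplotlib, e.g. <code>ax.bar(segments['start'], segments['health'], width=segments['width'], align='edge')</code> (my suggestion, not tested against your full data), with the month tick labels re-applied at their original day positions.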
|
<python><pandas><matplotlib><seaborn>
|
2023-05-25 16:37:27
| 2
| 1,313
|
SanguineL
|
76,334,453
| 12,884,304
|
Install private gitlab package with specific version
|
<p>I have a private GitLab package. Its current version is 0.2.15.</p>
<p>I build and publish the package with Poetry. I usually install the package as</p>
<p><code>pip install sdk --index-url https://__token__:<token>@gitlab.example.com/api/v4/projects/6/packages/pypi/simple</code></p>
<p>I need to install a specific version of the package (e.g. 0.2.13). How can I do it? I tried with <code>@0.2.13</code> and <code>#0.2.13</code>; neither works.</p>
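For reference, pip selects versions with a PEP 440 <code>==</code> specifier rather than <code>@</code> or <code>#</code> suffixes, so pinning would look like this (the token and project URL are the question's placeholders, not real values):

```shell
pip install "sdk==0.2.13" --index-url "https://__token__:<token>@gitlab.example.com/api/v4/projects/6/packages/pypi/simple"
```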
|
<python><pip><gitlab><pypi><python-poetry>
|
2023-05-25 16:30:04
| 1
| 331
|
unknown
|
76,334,452
| 4,931,657
|
How to append additional tasks in asyncio.as_completed(task_list)
|
<p>I have a coroutine function to fetch results from a URL:</p>
<pre><code>from asyncio import Future, get_running_loop


async def create_task_form_url(aio_session, form_url: str) -> Future:
    loop = get_running_loop()
    task = loop.create_task(async_get_results(form_url, aio_session))
    return task
</code></pre>
<p>I'm trying to create the initial task, and use <code>as_completed</code> to get the result. Along the way, <em>depending on some logic</em> I would want to append to the <code>tasks</code> list so that <code>as_completed</code> can fetch the result as well.</p>
<p>This is where my problem comes in, where the <code>for</code> loop closes out before the 2nd task completes. How can I do this correctly?</p>
<pre><code>tasks = [await create_task_form_url(self.aio_session, form_url)]
print(len(tasks))
for each_task in as_completed(tasks):
    print(await each_task)
    tasks.append(await create_task_form_url(self.aio_session, form_url))
    print(len(tasks))
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>1
# Task 1 result is printed.
2 # So we know that the tasks.append works. But the result was never retrieved.
Error msg: Task was destroyed but it is pending!
</code></pre>
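A common way around this limitation (as_completed effectively snapshots the task list when iteration starts) is to manage a mutable <code>pending</code> set with <code>asyncio.wait</code> and <code>FIRST_COMPLETED</code>, re-checking the set after every result. A self-contained sketch with a stand-in coroutine instead of the real HTTP fetch:

```python
import asyncio

async def fetch(n):
    # stand-in for async_get_results(form_url, session)
    await asyncio.sleep(0.01)
    return n

async def main():
    pending = {asyncio.create_task(fetch(1))}
    results = []
    while pending:
        # FIRST_COMPLETED returns control after each batch of completions,
        # so tasks added to `pending` mid-loop are still awaited.
        done, pending = await asyncio.wait(pending, return_when=asyncio.FIRST_COMPLETED)
        for task in done:
            result = task.result()
            results.append(result)
            if result == 1:  # "depending on some logic", enqueue a follow-up task
                pending.add(asyncio.create_task(fetch(2)))
    return results

print(asyncio.run(main()))  # [1, 2]
```

The loop exits only when <code>pending</code> is empty, so no task is left destroyed while still pending.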
|
<python><python-asyncio>
|
2023-05-25 16:29:40
| 2
| 5,238
|
jake wong
|
76,334,295
| 8,895,744
|
Python regex, one word with n characters followed by two words with one char
|
<p>I need to filter strings that start with a word containing 3 or more characters, followed by exactly two words that have only one character. After these three words, anything can follow.</p>
<p>What I tried is this expression:</p>
<pre><code>pattern = r'\w{3,}\s\w\s\w.*'
</code></pre>
<p>but it matches a string <code>apple wrong a b c</code> which is not correct (the word "wrong" has more than one char).</p>
<p>A complete example is here:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'text': ['apple wrong', 'apple wrong b c','apple a b correct', 'apple a b c correct']})
pattern = r'\w{3,}\s\w\s\w.*'
matches = df['text'].str.contains(pattern, regex=True)
result = df[matches]
print(result)
</code></pre>
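The stray match happens because the unanchored pattern can start at "wrong a b" in the middle of the string. Anchoring at the start and requiring a word break after the third word rules that out; a sketch against the question's examples:

```python
import re

# ^ pins the three-word prefix to the start of the string;
# (\s|$) forces a break after the third (one-character) word.
pattern = r'^\w{3,}\s\w\s\w(\s|$)'

tests = ['apple wrong', 'apple wrong b c', 'apple a b correct', 'apple a b c correct']
print([bool(re.search(pattern, t)) for t in tests])  # [False, False, True, True]
```

The same pattern should work unchanged with <code>df['text'].str.contains(pattern, regex=True)</code>, since <code>contains</code> uses <code>re.search</code> semantics and the <code>^</code> anchor keeps it pinned to the start.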
|
<python><pandas><regex>
|
2023-05-25 16:05:38
| 1
| 563
|
EnesZ
|
76,334,289
| 2,422,125
|
model.fit calculates validation only once after validation_freq train epochs and then never again
|
<p>First: I'm not able to provide any code. If that's reason enough to close this, so be it.</p>
<p>The codebase I was provided is confidential, and I was not able to reproduce this behavior in a standalone example after multiple hours of work. It's too much custom code that I barely understand, which makes isolating single parts basically impossible.</p>
<p>That's why I'm now trying the opposite approach: looking at the TensorFlow code to find out what variable combination has to happen to cause this behavior, and then working from there.</p>
<p>I'm asking for advice on how to debug this issue myself / or if anyone has seen something like this:</p>
<hr />
<p><strong>The actual Issue:</strong></p>
<p>I'm training a TensorFlow model with <code>model.fit</code> and set the <code>validation_freq</code> argument to e.g. 10. The behavior I get is that validation is performed on epoch 10, and that's it. On epochs 20, 30, and so on, no validation is performed. To summarise: for each <code>model.fit</code> call I get only one validation run in total.
TensorFlow version: 2.10.0</p>
<p><s>I tried going directly into <code>site-packages/tensorflow/python/keras/engine/training.py</code> to add print statements inside <code>model.fit</code> itself: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/engine/training.py#L1203" rel="nofollow noreferrer">here</a> and <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/engine/training.py#L2758" rel="nofollow noreferrer">here</a> but I got no output so they were either not executed or print from this file doesn't reach stdout for some reason... or... I misunderstood the code flow of TensorFlow ...</s></p>
<p><em>I was changing <code>site-packages/tensorflow/python/keras/engine/training.py</code> while the code was actually using <code>site-packages/keras/engine/training.py</code></em></p>
<p>What could cause this / how could I debug this?</p>
|
<python><tensorflow><keras>
|
2023-05-25 16:05:05
| 1
| 3,868
|
Fabian N.
|
76,334,272
| 4,404,805
|
Pandas: Apply function to each group and store result in new column
|
<p>I have an item dataframe such as:</p>
<pre><code>import numpy as np
import pandas as pd

item_df = pd.DataFrame({'BarCode': ['12345678AAAA', '12345678BBBB', '12345678CCCC',
                                    '12345678ABCD', '12345678EFGH', '12345678IJKL',
                                    '67890123XXXX', '67890123YYYY', '67890123ZZZZ',
                                    '67890123ABCD', '67890123EFGH', '67890123IJKL'],
                        'Extracted_Code': ['12345678', '12345678', '12345678', '12345678', '12345678', '12345678',
                                           '67890123', '67890123', '67890123', '67890123', '67890123', '67890123'],
                        'Description': ['Fruits', 'Fruits', 'Fruits', 'Apples', 'Oranges', 'Mangoes',
                                        'Snacks', 'Snacks', 'Snacks', 'Yoghurt', 'Cookies', 'Oats'],
                        'Category': ['H', 'H', 'H', 'M', 'T', 'S', 'H', 'H', 'H', 'M', 'M', 'F'],
                        'Code': ['0', '2', '3', '1', '2', '4', '0', '2', '3', '3', '4', '2'],
                        'Quantity': [99, 77, 10, 52, 11, 90, 99, 77, 10, 52, 11, 90],
                        'Price': [12.0, 10.5, 11.0, 15.6, 12.9, 67.0, 12.0, 10.5, 11.0, 15.6, 12.9, 67.0]})
item_df = item_df.sort_values(by=['Extracted_Code', 'Category', 'Code'])
item_df['Combined'] = np.NaN
</code></pre>
<p>What I am trying to achieve is a bit tricky. I have to perform groupby on <code>['Extracted_Code']</code> and for each group, create a new column <code>Combined</code>. The column <code>Combined</code> will have value based on:</p>
<ol>
<li>For rows with Category='H', Combined will have NaN values.</li>
<li>For rows with a Category other than 'H' (say a row with Category='M'), the Combined column of that particular row will hold a list of row jsons that have Category='H' in the same group and whose Code is less than or equal to the Code of that particular row.</li>
</ol>
<p>My desired result is:</p>
<pre><code> BarCode Extracted_Code Description Category Code Quantity Price Combined
0 12345678AAAA 12345678 Fruits H 0 99 12.0 NaN
1 12345678BBBB 12345678 Fruits H 2 77 10.5 NaN
2 12345678CCCC 12345678 Fruits H 3 10 11.0 NaN
3 12345678ABCD 12345678 Apples M 1 52 15.6 [{'BarCode': '12345678AAAA', 'Description': 'Fruits', 'Category': 'H', 'Code': '0', 'Quantity': 99, 'Price': 12.0}]
4 12345678IJKL 12345678 Mangoes S 4 90 67.0 [{'BarCode': '12345678AAAA', 'Description': 'Fruits', 'Category': 'H', 'Code': '0', 'Quantity': 99, 'Price': 12.0},
{'BarCode': '12345678BBBB', 'Description': 'Fruits', 'Category': 'H', 'Code': '2', 'Quantity': 77, 'Price': 10.5},
{'BarCode': '12345678CCCC', 'Description': 'Fruits', 'Category': 'H', 'Code': '3', 'Quantity': 10, 'Price': 11.0}]
5 12345678EFGH 12345678 Oranges T 2 11 12.9 [{'BarCode': '12345678AAAA', 'Description': 'Fruits', 'Category': 'H', 'Code': '0', 'Quantity': 99, 'Price': 12.0},
{'BarCode': '12345678BBBB', 'Description': 'Fruits', 'Category': 'H', 'Code': '2', 'Quantity': 77, 'Price': 10.5}]
6 67890123IJKL 67890123 Oats F 2 90 67.0 [{'BarCode': '67890123XXXX', 'Description': 'Snacks', 'Category': 'H', 'Code': '0', 'Quantity': 99, 'Price': 12.0},
{'BarCode': '67890123YYYY', 'Description': 'Snacks', 'Category': 'H', 'Code': '2', 'Quantity': 77, 'Price': 10.5}]
7 67890123XXXX 67890123 Snacks H 0 99 12.0 NaN
8 67890123YYYY 67890123 Snacks H 2 77 10.5 NaN
9 67890123ZZZZ 67890123 Snacks H 3 10 11.0 NaN
10 67890123ABCD 67890123 Yoghurt M 3 52 15.6 [{'BarCode': '67890123XXXX', 'Description': 'Snacks', 'Category': 'H', 'Code': '0', 'Quantity': 99, 'Price': 12.0},
{'BarCode': '67890123YYYY', 'Description': 'Snacks', 'Category': 'H', 'Code': '2', 'Quantity': 77, 'Price': 10.5},
{'BarCode': '67890123ZZZZ', 'Description': 'Snacks', 'Category': 'H', 'Code': '3', 'Quantity': 10, 'Price': 11.0}]
11 67890123EFGH 67890123 Cookies M 4 11 12.9 [{'BarCode': '67890123XXXX', 'Description': 'Snacks', 'Category': 'H', 'Code': '0', 'Quantity': 99, 'Price': 12.0},
{'BarCode': '67890123YYYY', 'Description': 'Snacks', 'Category': 'H', 'Code': '2', 'Quantity': 77, 'Price': 10.5},
{'BarCode': '67890123ZZZZ', 'Description': 'Snacks', 'Category': 'H', 'Code': '3', 'Quantity': 10, 'Price': 11.0}]
</code></pre>
<p>This is what I have done to get list of row jsons:</p>
<pre><code>item_df.groupby(['Extracted_Code', 'Category', 'Code']).apply(lambda x: x.to_dict('records')).reset_index(name='Combined')
</code></pre>
<p>But I am confused about how to apply the condition to each group without losing any columns in the end result.</p>
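One way to keep all columns is to build <code>Combined</code> row by row inside each group rather than aggregating the group away. A sketch on a subset of the data (function names are mine; note the <code>Code</code> comparison is string-wise, which is fine for single-digit codes):

```python
import pandas as pd

item_df = pd.DataFrame({
    'BarCode': ['12345678AAAA', '12345678BBBB', '12345678CCCC', '12345678ABCD'],
    'Extracted_Code': ['12345678'] * 4,
    'Description': ['Fruits', 'Fruits', 'Fruits', 'Apples'],
    'Category': ['H', 'H', 'H', 'M'],
    'Code': ['0', '2', '3', '1'],
    'Quantity': [99, 77, 10, 52],
    'Price': [12.0, 10.5, 11.0, 15.6]})

def add_combined(df):
    cols = ['BarCode', 'Description', 'Category', 'Code', 'Quantity', 'Price']

    def per_group(g):
        h = g[g['Category'] == 'H']  # the 'H' rows of this group
        g = g.copy()
        # NaN for 'H' rows; otherwise the group's 'H' rows with Code <= this row's Code
        g['Combined'] = [
            float('nan') if row['Category'] == 'H'
            else h.loc[h['Code'] <= row['Code'], cols].to_dict('records')
            for _, row in g.iterrows()
        ]
        return g

    return df.groupby('Extracted_Code', group_keys=False).apply(per_group)

result = add_combined(item_df)
print(result.loc[result['BarCode'] == '12345678ABCD', 'Combined'].iloc[0])
```

Because <code>per_group</code> returns the whole (copied) group with one extra column, no columns are lost in the concatenated result.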
|
<python><pandas><function><group-by><apply>
|
2023-05-25 16:02:39
| 1
| 1,207
|
Animeartist
|
76,334,209
| 1,424,729
|
Connections leak from a Psycopg connection pool
|
<p>Connections leak, with a resulting timeout error when getting a connection. I am using a connection pool configured as follows:</p>
<pre><code>import atexit

from psycopg_pool import ConnectionPool

pool = ConnectionPool(conninfo=config['postgres']['url'])


@atexit.register
def pool_close():
    pool.close()
</code></pre>
<p>Then I use the pool's connections to query like this:</p>
<pre><code>def load_smth(...):
    with pool.getconn() as conn:
        with conn.cursor() as cur:
            cur.execute("select cfg from ...", (..))
            return smth
</code></pre>
<p>The problem is that at the 5th request the connection pool doesn't return a connection, and my app fails with a connection pool timeout error:</p>
<blockquote>
<p>psycopg_pool.PoolTimeout: couldn't get a connection after 30.0 sec</p>
</blockquote>
<p>The SQL queries are OK, tested against the PostgreSQL server directly. Moreover, if I reorder the queries, the previously failing query works. So the problem is in getting a connection from the pool.</p>
<p><strong>upd</strong></p>
<p>After some research I tried to use a wrapper that commits after each call:</p>
<pre><code>def run_cursor(sql, params=(), callback=lambda x: x):
    with pool.getconn() as conn:
        with conn.cursor() as cur:
            res = callback(cur.execute(sql, params))
            conn.commit()
            cur.close()
    return res
</code></pre>
<p>But connections are still leaking out from the pool.</p>
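For what it's worth, in psycopg_pool a connection obtained with <code>getconn()</code> must be handed back with <code>putconn()</code>; the connection's own <code>with</code> block commits and closes it, but the pool never learns it is free again (the pool-aware context manager is <code>pool.connection()</code>). A database-free toy model of that suspected leak, matching the "fails on the 5th request" symptom with a pool of 4:

```python
class ToyConn:
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        pass  # commits/closes, but the pool is never told the conn is free

class ToyPool:
    def __init__(self, size=4):
        self._free = [ToyConn() for _ in range(size)]

    def getconn(self):
        if not self._free:
            raise TimeoutError("couldn't get a connection")  # like PoolTimeout
        return self._free.pop()

    def putconn(self, conn):
        self._free.append(conn)

pool = ToyPool(size=4)

def leaky_query():
    with pool.getconn() as conn:   # conn is popped and never returned
        pass                       # ...execute SQL...

def safe_query():
    conn = pool.getconn()
    try:
        pass                       # ...execute SQL...
    finally:
        pool.putconn(conn)         # always hand the connection back
```

Any number of <code>safe_query()</code> calls succeeds, while the 5th <code>leaky_query()</code> times out, which is the behavior described above.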
|
<python><psycopg3>
|
2023-05-25 15:55:38
| 3
| 749
|
Dmitry
|
76,334,048
| 15,637,940
|
ResourceWarning: unclosed socket.socket using unittest.IsolatedAsyncioTestCase
|
<p>A very similar problem is described in another question that has a <a href="https://stackoverflow.com/a/54612520/15637940">good answer</a>, but it didn't help to solve the problem.</p>
<p>I wrote a <code>Scraper</code> class which supports the async context manager protocol. <code>COUNT_OF_TESTS</code> defines the number of methods added to <code>TestScraper</code> to simulate a lot of tests.</p>
<p><code>mre.py</code>:</p>
<pre><code>import unittest
from aiohttp import ClientSession, ClientResponse
COUNT_OF_TESTS = 20
class Scraper:
async def do_get_request(self, url: str) -> ClientResponse:
return await self.session.get(url)
async def __aenter__(self) -> 'Scraper':
self.session = await ClientSession().__aenter__()
return self
async def __aexit__(self, exc_type, exc_val, exc_tb) -> None:
await self.session.close()
class TestScraper(unittest.IsolatedAsyncioTestCase):
async def _some_test(self) -> None:
async with Scraper() as scraper:
resp = await scraper.do_get_request('https://icanhazip.com')
self.assertEqual(resp.status, 200)
for i in range(1, COUNT_OF_TESTS+1):
setattr(TestScraper, f'test_{i}', TestScraper._some_test)
</code></pre>
<p><code>python -m unittest mre.py</code>:</p>
<pre><code>...../usr/local/lib/python3.11/traceback.py:428: ResourceWarning: unclosed <socket.socket fd=6, family=2, type=1, proto=6, laddr=('192.168.0.104', 35912), raddr=('104.18.114.97', 443)>
result.append(FrameSummary(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/usr/local/lib/python3.11/asyncio/selector_events.py:843: ResourceWarning: unclosed transport <_SelectorSocketTransport fd=6>
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/usr/local/lib/python3.11/traceback.py:428: ResourceWarning: unclosed <socket.socket fd=7, family=2, type=1, proto=6, laddr=('192.168.0.104', 32772), raddr=('104.18.115.97', 443)>
result.append(FrameSummary(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/usr/local/lib/python3.11/asyncio/selector_events.py:843: ResourceWarning: unclosed transport <_SelectorSocketTransport fd=7>
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
ResourceWarning: Enable tracemalloc to get the object allocation traceback
...../usr/local/lib/python3.11/traceback.py:428: ResourceWarning: unclosed <socket.socket fd=6, family=2, type=1, proto=6, laddr=('192.168.0.104', 32784), raddr=('104.18.115.97', 443)>
result.append(FrameSummary(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/usr/local/lib/python3.11/traceback.py:428: ResourceWarning: unclosed <socket.socket fd=7, family=2, type=1, proto=6, laddr=('192.168.0.104', 32792), raddr=('104.18.115.97', 443)>
result.append(FrameSummary(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/usr/local/lib/python3.11/traceback.py:428: ResourceWarning: unclosed <socket.socket fd=9, family=2, type=1, proto=6, laddr=('192.168.0.104', 35942), raddr=('104.18.114.97', 443)>
result.append(FrameSummary(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/usr/local/lib/python3.11/asyncio/selector_events.py:843: ResourceWarning: unclosed transport <_SelectorSocketTransport fd=9>
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
ResourceWarning: Enable tracemalloc to get the object allocation traceback
...../usr/local/lib/python3.11/traceback.py:428: ResourceWarning: unclosed <socket.socket fd=6, family=2, type=1, proto=6, laddr=('192.168.0.104', 32810), raddr=('104.18.115.97', 443)>
result.append(FrameSummary(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/usr/local/lib/python3.11/traceback.py:428: ResourceWarning: unclosed <socket.socket fd=7, family=2, type=1, proto=6, laddr=('192.168.0.104', 35948), raddr=('104.18.114.97', 443)>
result.append(FrameSummary(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/usr/local/lib/python3.11/traceback.py:428: ResourceWarning: unclosed <socket.socket fd=9, family=2, type=1, proto=6, laddr=('192.168.0.104', 35950), raddr=('104.18.114.97', 443)>
result.append(FrameSummary(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/usr/local/lib/python3.11/traceback.py:428: ResourceWarning: unclosed <socket.socket fd=11, family=2, type=1, proto=6, laddr=('192.168.0.104', 35964), raddr=('104.18.114.97', 443)>
result.append(FrameSummary(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/usr/local/lib/python3.11/asyncio/selector_events.py:843: ResourceWarning: unclosed transport <_SelectorSocketTransport fd=11>
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
ResourceWarning: Enable tracemalloc to get the object allocation traceback
.....
----------------------------------------------------------------------
Ran 20 tests in 1.330s
OK
</code></pre>
<p>All tests pass, but these warnings remain.
I tried manually closing the <code>ClientSession</code>:</p>
<pre><code>async def _some_test(self) -> None:
async with Scraper() as scraper:
resp = await scraper.do_get_request('https://icanhazip.com')
await scraper.session.close() # <-- Closing session
self.assertEqual(resp.status, 200)
</code></pre>
<p>I also tried creating the session in <code>asyncSetUp()</code> and closing it in <code>asyncTearDown()</code>:</p>
<pre><code>class Scraper:
def __init__(self, session: ClientSession | None = None):
self.session = session
async def do_get_request(self, url: str) -> ClientResponse:
return await self.session.get(url)
async def __aenter__(self) -> 'Scraper':
if self.session is None:
self.session = await ClientSession().__aenter__()
return self
async def __aexit__(self, exc_type, exc_val, exc_tb) -> None:
await self.session.close()
class TestScraper(unittest.IsolatedAsyncioTestCase):
async def asyncSetUp(self) -> None:
self.session = await ClientSession().__aenter__()
async def asyncTearDown(self) -> None:
await self.session.close()
async def _some_test(self) -> None:
# pass session from asyncSetUp
async with Scraper(self.session) as scraper:
resp = await scraper.do_get_request('https://icanhazip.com')
self.assertEqual(resp.status, 200)
</code></pre>
<p>Either way I still get these warnings. Any advice would be helpful, thanks!</p>
<p>P.S. I can't reproduce the same issue if <code>COUNT_OF_TESTS</code> <= 3. If it's > 5 I get the warnings from time to time, and of course in production I have more than 20 tests.</p>
|
<python><unit-testing><python-unittest>
|
2023-05-25 15:38:24
| 0
| 412
|
555Russich
|
76,334,025
| 13,534,060
|
Accessing default value of decorator argument in Python
|
<p>I am trying to write a custom Python decorator which wraps the decorated function in a <code>try ... except</code> block and adds a message with additional context to make debugging easier.</p>
<p>Based on different resources (see <a href="https://www.geeksforgeeks.org/decorators-with-parameters-in-python/" rel="nofollow noreferrer">here</a> and <a href="https://www.freecodecamp.org/news/python-decorators-explained-with-examples/" rel="nofollow noreferrer">here</a> for example) I so far built the following:</p>
<pre class="lang-py prettyprint-override"><code>def _exception_handler(msg):
"""Custom decorator to return a more informative error message"""
def decorator(func):
def wrapper(*args, **kwargs):
try:
# this is the actual decorated function
func(*args, **kwargs)
except Exception as e:
# we catch the exception and raise a new one with the custom
# message and the original exception as cause
raise Exception(f"{msg}: {e}") from e
return wrapper
return decorator
</code></pre>
<p>This works as expected - if I run:</p>
<pre class="lang-py prettyprint-override"><code>@_exception_handler("Foo")
def test():
raise ValueError("Bar")
test()
</code></pre>
<p>This returns:</p>
<pre><code>Exception: Foo: Bar
</code></pre>
<p>Now, I do not always want to pass a custom <code>msg</code> because that's sometimes a bit redundant. So I set a default value of <code>msg=""</code> and I want to check if <code>msg==""</code> and if that is the case I would just like to re-create the <code>msg</code> inside the decorator based on the function name, something like <code>msg = f"An error occurred in {func.__name__}"</code>.</p>
<p>I would then like to use the decorator without any <code>msg</code> argument. <strong>I do not care about the empty parentheses, using <code>@_exception_handler()</code> is perfectly fine for me.</strong></p>
<p>But somehow this does not seem to work:</p>
<pre class="lang-py prettyprint-override"><code>def _exception_handler(msg=""):
"""Custom decorator to return a more informative error message"""
def decorator(func):
def wrapper(*args, **kwargs):
try:
# this is the actual decorated function
func(*args, **kwargs)
except Exception as e:
if msg=="":
# if no custom message is passed, we just use the function name
msg = f"An error occurred in {func.__name__}"
# we catch the exception and raise a new one with the message
# and the original exception as cause
raise Exception(f"{msg}: {e}") from e
return wrapper
return decorator
</code></pre>
<p>If I run:</p>
<pre class="lang-py prettyprint-override"><code>@_exception_handler()
def test():
raise ValueError("Bar")
test()
</code></pre>
<p>I get</p>
<pre><code>UnboundLocalError: local variable 'msg' referenced before assignment
</code></pre>
<p>If I put <code>global msg</code> right below the <code>def decorator(func):</code> line, I get the same error. If I put it below the <code>def wrapper(*args, **kwargs):</code>, I instead get:</p>
<pre><code>NameError: name 'msg' is not defined
</code></pre>
<p>Any ideas how I can get this to work? If possible, I would like to avoid any third-party modules such as <code>wrapt</code>. Using <code>wraps</code> from <code>functools</code> from the standard library is fine of course (although I did not have any luck with that so far either).</p>
|
<python><python-decorators>
|
2023-05-25 15:35:28
| 1
| 851
|
henhesu
|
76,333,958
| 10,266,106
|
Applying np.linspace to Multi-Dimensional Array
|
<p>I have a multi-dimensional Numpy array of the following size:</p>
<pre><code>(1200,2600,200)
</code></pre>
<p>At each point <code>i, j</code>, there is an assortment of unordered data which varies from point to point. I'm performing some analyses which require an evenly spaced array of 200 numbers. My current approach is the following:</p>
<pre><code>x = np.empty(shape=(array.shape[0],array.shape[1],200))
def compiler(i, j):
x[i, j] = np.linspace(0.1, np.max(array[i, j]), 200)
[[compiler(i, j) for i in range(array.shape[0])] for j in range(array.shape[1])]
</code></pre>
<p>Constructing this with a list comprehension seems potentially very inefficient given NumPy's capabilities; surely there is a faster way to execute this process?</p>
<p><strong>Edit:</strong> An example of the result I'm looking for at each point i, j with np.linspace is as follows. I'd expect the resulting ndarray to be of shape <code>(1200,2600,200)</code>:</p>
<pre><code>[ 0.1 0.31248513 0.52497025 0.73745538 0.94994051 1.16242564
1.37491076 1.58739589 1.79988102 2.01236615 2.22485127 2.4373364
2.64982153 2.86230665 3.07479178 3.28727691 3.49976204 3.71224716
3.92473229 4.13721742 4.34970255 4.56218767 4.7746728 4.98715793
5.19964305 5.41212818 5.62461331 5.83709844 6.04958356 6.26206869
6.47455382 6.68703895 6.89952407 7.1120092 7.32449433 7.53697945
7.74946458 7.96194971 8.17443484 8.38691996 8.59940509 8.81189022
9.02437535 9.23686047 9.4493456 9.66183073 9.87431585 10.08680098
10.29928611 10.51177124 10.72425636 10.93674149 11.14922662 11.36171175
11.57419687 11.786682 11.99916713 12.21165225 12.42413738 12.63662251
12.84910764 13.06159276 13.27407789 13.48656302 13.69904815 13.91153327
14.1240184 14.33650353 14.54898865 14.76147378 14.97395891 15.18644404
15.39892916 15.61141429 15.82389942 16.03638455 16.24886967 16.4613548
16.67383993 16.88632505 17.09881018 17.31129531 17.52378044 17.73626556
17.94875069 18.16123582 18.37372095 18.58620607 18.7986912 19.01117633
19.22366145 19.43614658 19.64863171 19.86111684 20.07360196 20.28608709
20.49857222 20.71105735 20.92354247 21.1360276 21.34851273 21.56099785
21.77348298 21.98596811 22.19845324 22.41093836 22.62342349 22.83590862
23.04839375 23.26087887 23.473364 23.68584913 23.89833425 24.11081938
24.32330451 24.53578964 24.74827476 24.96075989 25.17324502 25.38573015
25.59821527 25.8107004 26.02318553 26.23567066 26.44815578 26.66064091
26.87312604 27.08561116 27.29809629 27.51058142 27.72306655 27.93555167
28.1480368 28.36052193 28.57300706 28.78549218 28.99797731 29.21046244
29.42294756 29.63543269 29.84791782 30.06040295 30.27288807 30.4853732
30.69785833 30.91034346 31.12282858 31.33531371 31.54779884 31.76028396
31.97276909 32.18525422 32.39773935 32.61022447 32.8227096 33.03519473
33.24767986 33.46016498 33.67265011 33.88513524 34.09762036 34.31010549
34.52259062 34.73507575 34.94756087 35.160046 35.37253113 35.58501626
35.79750138 36.00998651 36.22247164 36.43495676 36.64744189 36.85992702
37.07241215 37.28489727 37.4973824 37.70986753 37.92235266 38.13483778
38.34732291 38.55980804 38.77229316 38.98477829 39.19726342 39.40974855
39.62223367 39.8347188 40.04720393 40.25968906 40.47217418 40.68465931
40.89714444 41.10962956 41.32211469 41.53459982 41.74708495 41.95957007
42.1720552 42.38454033 42.59702546 42.80951058 43.02199571 43.23448084
43.44696596 43.65945109 43.87193622 44.08442135 44.29690647 44.5093916
44.72187673 44.93436186 45.14684698 45.35933211 45.57181724 45.78430236
45.99678749 46.20927262 46.42175775 46.63424287 46.846728 47.05921313
47.27169826 47.48418338 47.69666851 47.90915364 48.12163876 48.33412389
48.54660902 48.75909415 48.97157927 49.1840644 49.39654953 49.60903466
49.82151978 50.03400491 50.24649004 50.45897516 50.67146029 50.88394542
51.09643055 51.30891567 51.5214008 51.73388593 51.94637106 52.15885618
52.37134131 52.58382644 52.79631156 53.00879669]
</code></pre>
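Since <code>np.linspace(0.1, stop, n)</code> equals <code>0.1 + (stop - 0.1) * np.linspace(0, 1, n)</code>, the whole grid can be built with one broadcast instead of a per-cell loop. A small-scale sketch (shapes shrunk from <code>(1200, 2600, 200)</code> for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
array = rng.random((4, 5, 7))   # small stand-in for the (1200, 2600, 200) input
n = 200

stop = array.max(axis=2)                   # (4, 5): per-cell upper bound
ramp = np.linspace(0.0, 1.0, n)            # (n,): shared unit ramp
x = 0.1 + (stop - 0.1)[..., None] * ramp   # broadcast -> (4, 5, n)

# Spot-check one cell against the original per-cell np.linspace call.
expected = np.linspace(0.1, array[2, 3].max(), n)
print(x.shape, np.allclose(x[2, 3], expected))  # (4, 5, 200) True
```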
|
<python><arrays><numpy><numpy-ndarray>
|
2023-05-25 15:26:56
| 1
| 431
|
TornadoEric
|
76,333,954
| 254,725
|
Geopandas: overlay(how="union") can't handle more than one geometry columns
|
<p><strong>Goal:</strong> Merge country polygons from Natural Earth with the disputed areas, so that I have undisputed and disputed areas in one GeoDataFrame.</p>
<p><strong>Code:</strong></p>
<pre class="lang-py prettyprint-override"><code>
import geopandas as gpd
gpd.read_file("ne_50m_admin_0_countries.shp").overlay(gpd.read_file("ne_50m_admin_0_breakaway_disputed_areas.shp"), how="union")
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>ValueError: GeoDataFrame does not support multiple columns using the geometry column name 'geometry'.
</code></pre>
<p><strong>Question:</strong> Is this a bug or am I missing something? It seems like a logical thing to have two <code>geometry</code> columns when merging two GeoDataFrames. I'm using the regular NE files fresh from their site: <a href="https://www.naturalearthdata.com/downloads/50m-cultural-vectors/" rel="nofollow noreferrer">https://www.naturalearthdata.com/downloads/50m-cultural-vectors/</a></p>
|
<python><geopandas>
|
2023-05-25 15:26:29
| 1
| 956
|
Jonas
|
76,333,507
| 4,954,037
|
Inherit/subclass from `tuple` with correct indexing and slicing
|
<p>I am trying to subclass <code>tuple</code> and make <code>mypy</code> happy in the process. I would like to make slicing and indexing work.</p>
<p>Here is what I have tried:</p>
<pre><code>from functools import singledispatchmethod
from typing import Iterable
class MyIntTuple(tuple[int, ...]):
def __new__(cls, iterable: Iterable[int]) -> "MyIntTuple":
return super().__new__(cls, iterable) # type: ignore
@singledispatchmethod
def __getitem__(self, value: slice, /) -> "MyIntTuple":
return MyIntTuple(super().__getitem__(value))
@__getitem__.register
def __getitem__(self, value: int, /) -> int:
return super().__getitem__(value)
</code></pre>
<p>This almost works, except for slicing:</p>
<pre><code>a = MyIntTuple(range(12))
print(a) # (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
print(a[3]) # 3
print(a[3:6]) # (3, 4, 5)
b = a[5:8]
print(type(a)) # <class '__main__.MyIntTuple'>
print(type(b)) # <class 'tuple'>
</code></pre>
<p>The problem here is that slicing should return an instance of <code>MyIntTuple</code>; I seem to have gotten that wrong.</p>
<p>Also, <code>mypy</code> is not happy about this:</p>
<pre><code>error: Name "__getitem__" already defined on line 488 [no-redef]
</code></pre>
<p>The stubs in <code>tuple</code> are:</p>
<pre><code>Superclass:
@overload
def __getitem__(self, SupportsIndex, /) -> int
@overload
def __getitem__(self, slice, /) -> Tuple[int, ...]
</code></pre>
<p>How do I correctly overload those? Is <code>functools.singledispatchmethod</code> the wrong way to go, or am I using it incorrectly?</p>
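One contributing detail: <code>@__getitem__.register</code> returns the undecorated function, so the second <code>def __getitem__</code> rebinds the name to a plain method and discards the dispatcher entirely, which is both why slices fall back to <code>tuple</code> behaviour and why mypy reports <code>no-redef</code>. A sketch of the conventional alternative: one runtime <code>__getitem__</code> with an <code>isinstance</code> check, plus <code>typing.overload</code> stubs for the type checker:

```python
from typing import Iterable, SupportsIndex, Union, overload


class MyIntTuple(tuple):  # on 3.9+ this can be `tuple[int, ...]`
    def __new__(cls, iterable: Iterable[int]) -> "MyIntTuple":
        return super().__new__(cls, iterable)

    @overload
    def __getitem__(self, value: SupportsIndex, /) -> int: ...
    @overload
    def __getitem__(self, value: slice, /) -> "MyIntTuple": ...

    def __getitem__(
        self, value: Union[SupportsIndex, slice], /
    ) -> Union[int, "MyIntTuple"]:
        result = super().__getitem__(value)
        # Re-wrap only slices, so plain indexing still returns an int.
        return MyIntTuple(result) if isinstance(value, slice) else result


a = MyIntTuple(range(12))
print(type(a[3]).__name__)    # int
print(type(a[3:6]).__name__)  # MyIntTuple
print(a[3:6])                 # (3, 4, 5)
```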
|
<python><python-3.x><mypy>
|
2023-05-25 14:39:28
| 3
| 47,321
|
hiro protagonist
|
76,333,373
| 20,266,647
|
MLRun, ErrorMessage, No space left on device
|
<p>I got this error during ingest data to FeatureSet:</p>
<pre><code>Error - Failed to save aggregation for /k78/online_detail/nosql/sets/on line_detail/0354467518.ed74fc2b
Response status code was 400: b'{\n\t"ErrorCode": -28,\n\t"ErrorMessage": "No space left on device"\n}
Update expression was: pr_ph='0354467518';id=7309877;type='r77'
</code></pre>
<p>I used standard code for ingestion, see:</p>
<pre><code>import mlrun
import mlrun.feature_store as fs
...
project = mlrun.get_or_create_project(project_name, context='./', user_project=False)
feature_set=featureGetOrCreate(True, project_name, 'sample')
...
fs.ingest(feature_set, df)
</code></pre>
<p>It seems like an issue with disk space, but I am 100% sure that I had enough free space for the ingest (it must be something different). Have you had a similar issue?</p>
|
<python><health-check><feature-store><mlrun>
|
2023-05-25 14:25:07
| 1
| 1,390
|
JIST
|
76,333,357
| 1,096,660
|
How to choose template in DetailView based on a field of the model shown?
|
<p>I have a model with a choice field:</p>
<pre><code>type = models.CharField(choices=TYPE_CHOICES,
max_length=1, default=UNSET, db_index=True)
</code></pre>
<p>Depending on the type I'd like to show a different template in the class-based DetailView:</p>
<pre><code>class AlbumDetailView(DetailView):
[...]
</code></pre>
<p>Currently I have set the template by setting:</p>
<pre><code> template_name = 'bilddatenbank/album_detail.html'
</code></pre>
<p>But then it's not possible to access the value of the model field. Where can I set the template while having access to the model?</p>
<p>Thank you.</p>
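Django's documented hook for this is <code>get_template_names()</code>, which <code>DetailView</code> calls after <code>self.object</code> has been set by <code>get_object()</code>, so the field value is available there. A plain-Python mimic of the idea (Django is not imported here, and the type codes and template names are invented):

```python
# Plain-Python mimic of DetailView.get_template_names(); in a real view the
# class would subclass django.views.generic.DetailView instead.
class AlbumDetailView:
    # Hypothetical mapping from the model's `type` choice to a template.
    template_map = {
        "P": "bilddatenbank/album_detail_photo.html",
        "V": "bilddatenbank/album_detail_video.html",
    }
    default_template = "bilddatenbank/album_detail.html"

    def get_template_names(self):
        # In Django this runs after get_object(), so self.object exists.
        return [self.template_map.get(self.object.type, self.default_template)]


class FakeAlbum:
    def __init__(self, type):
        self.type = type


view = AlbumDetailView()
view.object = FakeAlbum("V")
print(view.get_template_names())  # ['bilddatenbank/album_detail_video.html']
```

On the real view the override replaces the fixed <code>template_name</code> attribute; everything else about the <code>DetailView</code> stays the same.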
|
<python><django><django-views>
|
2023-05-25 14:23:23
| 1
| 2,629
|
JasonTS
|
76,333,327
| 4,439,753
|
Modifying a numpy array from multiple processes without locks
|
<p>I have a big <em>numpy.array</em> shared across multiple processes (created using <em>pool.map</em>), and I want each process to modify a different part of the array. Since no two processes can modify the same part of the array, I can get away without using any locks, and therefore without slowing down the code. In my understanding, I should therefore use <em>multiprocessing.sharedctypes.RawArray</em> instead of <em>multiprocessing.Array</em>.</p>
<p>However, when I run the following toy example with only 1 process, the writing time is <em>0.05s</em>. When I run the same code with 32 processes (the multiprocessing.cpu_count() of my machine), the writing time is <em>1.8s</em>. I get the same numbers when I use <em>multiprocessing.Array</em>. It seems to imply that the processes still lock the whole array while writing. Is there any way I can avoid that and speed things up?</p>
<pre><code>import time
import numpy as np
import multiprocessing
from multiprocessing.sharedctypes import RawArray
def modify_array(i):
arr = np.frombuffer(np_x, dtype=np.float32).reshape(np_x_shape)
start_time_local = time.perf_counter()
# Do some processing, each process write to a different row
for j in range(100):
arr[i, ...] = i
    print(f"Writing time {time.perf_counter() - start_time_local}")
def pool_initializer(X, X_shape):
global np_x
np_x = X
global np_x_shape
np_x_shape = X_shape
if __name__ == "__main__":
n_processes = multiprocessing.cpu_count() # 1
# Original numpy array
array_shape = (80, 1920 * 1080 * 3)
data = np.ones(array_shape, dtype=np.float32)
# Allocate the shared Array
X = RawArray('i', np.array(array_shape).prod().item())
X_np = np.frombuffer(X, dtype=np.float32).reshape(array_shape)
# Copy data to the shared array
np.copyto(X_np, data)
# Create the processes
with multiprocessing.Pool(processes=n_processes, initializer=pool_initializer, initargs=(X, array_shape)) as pool:
pool.map(modify_array, range(array_shape[0]))
</code></pre>
|
<python><numpy><multiprocessing><python-multiprocessing>
|
2023-05-25 14:20:07
| 1
| 4,761
|
Neabfi
|
76,333,028
| 7,945,506
|
LGBM model: What could cause varying results with fixed seed?
|
<p>I train a LightGBM model with the following Python code:</p>
<pre class="lang-py prettyprint-override"><code>import lightgbm as lgb
# Set parameters
params = {
"objective": "regression",
"num_leaves": 16,
"learning_rate": 0.05,
"feature_fraction": 0.5,
"verbose": 0,
"nthread": -1,
"metric": "l1",
"linear_tree": False,
"bagging_fraction": 0.632,
"bagging_freq": 1,
"min_data_in_leaf": 20,
"num_iterations": 200,
"early_stopping_round": None,
"seed": 42,
}
# Create datasets
df_train = lgb.Dataset(...)
df_test = lgb.Dataset(...)
# Train model
fit_lgb = lgb.train(
params=params,
train_set=df_train,
valid_sets=[df_train, df_test]
)
</code></pre>
<p>Although I have a fixed seed (<code>seed=42</code>), I get varying results between runs. Where could these variations come from?</p>
<p>Edit:
Following the link from @Marjin, I extended/replaced the params with these:</p>
<pre><code>"deterministic": True,
"nthread": 1,
"force_col_wise": True,
</code></pre>
<p>However, I still get varying results.</p>
|
<python><lightgbm>
|
2023-05-25 13:49:07
| 1
| 613
|
Julian
|
76,333,004
| 8,869,570
|
Why do absolute imports not work in this circular dependency case?
|
<p>I have a circular dependency problem. In the directory <code>/path/to/src</code>, there are two files <code>a.py</code> and <code>b.py</code>.</p>
<p>In <code>a.py</code>, there is a</p>
<pre><code>from . import b
</code></pre>
<p>and in <code>b.py</code>, there is a</p>
<pre><code>from . import a
</code></pre>
<p>So there's a circular dependency. Based on <a href="https://stackoverflow.com/questions/7336802/how-to-avoid-circular-imports-in-python">How to avoid circular imports in Python?</a>, one way to resolve this is to use absolute imports, so I tried</p>
<p>In <code>a.py</code>:</p>
<pre><code>import path.to.src.b as b
</code></pre>
<p>and in <code>b.py</code>, there is a</p>
<pre><code>import path.to.src.a as a
</code></pre>
<p>But I still get a circular dependency error during execution:</p>
<pre><code>module 'path.to.src.a' has no attribute 'b'
</code></pre>
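Switching to absolute imports does not break the cycle by itself: the first module is still only partially initialized when the second import runs, which is what the <code>has no attribute</code> error reports. Deferring one of the imports into the function that needs it does break it. A self-contained sketch that builds a throwaway package to demonstrate (the package name <code>mypkg</code> and its contents are invented):

```python
import pathlib
import sys
import tempfile
import textwrap

# Build a throwaway package on disk so the import machinery is exercised
# exactly as it would be for real files.
root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "mypkg"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "a.py").write_text(textwrap.dedent("""\
    def use_b():
        from mypkg import b  # deferred: runs only at call time, cycle broken
        return b.VALUE
"""))
(pkg / "b.py").write_text(textwrap.dedent("""\
    from mypkg import a  # fine: a.py has no module-level import of b

    VALUE = 42
"""))

sys.path.insert(0, str(root))
from mypkg import a

print(a.use_b())  # 42
```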
|
<python><circular-dependency>
|
2023-05-25 13:45:55
| 0
| 2,328
|
24n8
|
76,332,990
| 7,437,143
|
"Callable[list, Dict]: has too many arguments in its declaration; expected 2 but 3 argument(s) declared" when using Typeguard runtime type checker
|
<h2>Context</h2>
<p>The (typed) <code>get_next_actions</code> function below returns a (typed) function, called <code>actions_0</code> (or None). The <code>actions_0</code> function takes in 3 arguments, and returns a Dict.</p>
<pre class="lang-py prettyprint-override"><code>from typing import TYPE_CHECKING, Callable, Dict, List, Union
if TYPE_CHECKING:
from some_imports import Script
else:
Script = object
# pylint: disable=W0613
@typechecked
def get_next_actions(
required_objects: List[Dict[str, str]],
optional_objects: List[Dict[str, str]],
script: Script,
) -> Union[Callable[[AutomatorDevice, Screen, Script], Dict], None]:
"""Looks at the required objects and optional objects and determines
which actions to take next.
An example of the next actions could be the following List:
0. Select a textbox.
1. Send some data to a textbox.
2. Click on the/a "Next" button.
Then the app goes to the next screen and waits a pre-determined
amount, and optionally retries a pre-determined amount of attempts.
"""
# In the start screen just press ok.
return actions_0
return Screen(
get_next_actions=get_next_actions,
is_start=True,
max_retries=max_retries,
screen_nr=screen_nr,
wait_time_sec=wait_time_sec,
required_objects=required_objects,
)
# pylint: disable=W0613
@typechecked
def actions_0(dev: AutomatorDevice, screen: Screen, script: Script) -> Dict:
"""Performs the actions in option 1 in this screen.
For this screen, it clicks the "OK" button in the "Connection
request".
"""
# Click the ok button.
dev(resourceId="android:id/button1").click()
# Return the expected screens, using get_expected_screen_nrs.
action_nr: int = int(inspect.stack()[0][3][8:])
screen_nr: int = screen.screen_nr
script_flow: nx.DiGraph = script.script_graph
return {
"expected_screens": get_expected_screen_nrs(
G=script_flow, screen_nr=screen_nr, action_nr=action_nr
)
}
</code></pre>
<h2>Error Message</h2>
<p>However, <a href="https://typeguard.readthedocs.io/en/latest/" rel="nofollow noreferrer">Typeguard typechecking</a> does not agree with specifying 3 input types into the function.</p>
<pre class="lang-none prettyprint-override"><code>TypeCheckError: the return value (function) did not match any element in the union:
Callable[list, Dict]: has too many arguments in its declaration; expected 2 but 3 argument(s) declared
NoneType: is not an instance of NoneType
</code></pre>
<p>When I change:</p>
<pre class="lang-py prettyprint-override"><code>) -> Union[Callable[[AutomatorDevice, Screen, Script], Dict], None]:
</code></pre>
<p>to:</p>
<pre class="lang-py prettyprint-override"><code>) -> Union[Callable[[AutomatorDevice, float, Script], Dict], None]:
</code></pre>
<p>the bug disappears.</p>
<h2>Question</h2>
<p>How can I ensure the typechecker understands it is the typing of the <code>actions_0</code> function that is passed into the <code>Callable</code> type?</p>
<h2>MWE</h2>
<pre class="lang-py prettyprint-override"><code>
"""MWE."""
import inspect
from typing import TYPE_CHECKING, Callable, Dict, List
import networkx as nx
from typeguard import typechecked
from uiautomator import AutomatorDevice
from some_imports import Screen
if TYPE_CHECKING:
from some_imports import Script
else:
Script = object
# pylint: disable=W0613
@typechecked
def get_next_actions(
required_objects: List[Dict[str, str]],
optional_objects: List[Dict[str, str]],
script: float,
) -> Callable[[AutomatorDevice, Screen, Script], Dict]:
"""Looks at the required objects and optional objects and determines
which actions to take next.
An example of the next actions could be the following List:
0. Select a textbox.
1. Send some data to a textbox.
2. Click on the/a "Next" button.
Then the app goes to the next screen and waits a pre-determined
amount, and optionally retries a pre-determined amount of attempts.
"""
# In the start screen just press ok.
return actions_0
# pylint: disable=W0613
@typechecked
def actions_0(dev: AutomatorDevice, screen: Screen, script: Script) -> Dict:
"""Performs the actions in option 1 in this screen.
For this screen, it clicks the "OK" button in the "Connection
request".
"""
# Click the ok button.
dev(resourceId="android:id/button1").click()
# Return the expected screens, using get_expected_screen_nrs.
action_nr: int = int(inspect.stack()[0][3][8:])
screen_nr: int = screen.screen_nr
script_flow: nx.DiGraph = nx.DiGraph()
return {
"expected_screens": get_expected_screen_nrs(
G=script_flow, screen_nr=screen_nr, action_nr=action_nr
)
}
@typechecked
def get_expected_screen_nrs(
G: nx.DiGraph, screen_nr: int, action_nr: int
) -> List[int]:
"""Returns the expected screens per screen per action."""
expected_screens: List[int] = []
for edge in G.edges:
if edge[0] == screen_nr:
if action_nr in G[edge[0]][edge[1]]["actions"]:
expected_screens.append(edge[1])
return expected_screens
required_objects: List[Dict[str, str]] = [{"key": "val_1"}]
optional_objects: List[Dict[str, str]] = [{"key": "val_2"}]
script: float = 9.4
something = get_next_actions(
required_objects=required_objects,
optional_objects=optional_objects,
script=script,
)
print(something)
</code></pre>
<p>With: <code>some_imports.py</code> as:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, Dict, List, Union
from typeguard import typechecked
class Screen:
"""Represents an Android app screen."""
# pylint: disable=R0913
# pylint: disable=W0102
@typechecked
def __init__(
self,
is_start: bool,
get_next_actions: Callable[
[Dict[str, str], Dict[str, str], Dict[str, str]],
Union[Callable, None],
],
max_retries: int,
required_objects: List[Dict[str, str]],
screen_nr: int,
wait_time_sec: float,
optional_objects: List[Dict[str, str]] = [],
) -> None:
print("hello world.")
# pylint: disable=R0902
class Script:
"""Experiment manager.
First prepares the environment for running the experiment, and then
calls a private method that executes the experiment consisting of 4
stages.
"""
# pylint: disable=R0903
# pylint: disable=R0913
@typechecked
def __init__(
self,
app_name: str,
overwrite: bool,
package_name: str,
version: str,
cli_input_data: Dict[str, Union[str, Dict[str, str]]],
) -> None:
self.app_name: str = app_name
</code></pre>
<p>Throws error:</p>
<pre><code>raceback (most recent call last):
File "/home/name/git/temp/mwe.py", line 78, in <module>
something = get_next_actions(
File "/home/name/git/temp/mwe.py", line 36, in get_next_actions
return actions_0
File "/home/name/miniconda/envs/snncompare/lib/python3.10/site-packages/typeguard/_functions.py", line 164, in check_return_type
check_type_internal(retval, annotation, memo)
File "/home/name/miniconda/envs/snncompare/lib/python3.10/site-packages/typeguard/_checkers.py", line 756, in check_type_internal
checker(value, origin_type, args, memo)
File "/home/name/miniconda/envs/snncompare/lib/python3.10/site-packages/typeguard/_checkers.py", line 193, in check_callable
raise TypeCheckError(
typeguard.TypeCheckError: the return value (function) has too many arguments in its declaration; expected 2 but 3 argument(s) declared
</code></pre>
<p>When it is run with <code>python mwe.py</code>.</p>
|
<python><python-typing><callable>
|
2023-05-25 13:44:59
| 0
| 2,887
|
a.t.
|
76,332,844
| 11,645,617
|
Django serve views asynchronously
|
<p>Django 3.2.8 (a relatively old version)</p>
<pre class="lang-py prettyprint-override"><code>@csrf_exempt
def xhrView(request, ms, status=200):
sleep(ms / 1000)
response = HttpResponse(
f'{status} DUMMYXHRRESPONSE IN {ms}ms'
)
response['Access-Control-Allow-Origin'] = '*'
response.status_code = status
return response
</code></pre>
<p>I am hosting an API server for generative data testing with Django.</p>
<p>I have views for example serving purposes like:</p>
<ol>
<li>fetch or xhr js test</li>
<li>generate fake json based on api query</li>
<li>generate random long text</li>
</ol>
<p>But due to the nature of Python's <code>GIL</code>, Django is not capable of serving these views asynchronously: for example, if user A is visiting <code>xhr/10000</code>, no other user can even access the server during that time.</p>
<p>In those API views I don't need to access the database. Is there a way to serve the view above asynchronously?</p>
<blockquote>
<p>P.S. I have Channels installed and am using WebSockets somewhere else for social messaging; the project is set up for ASGI.</p>
</blockquote>
<hr />
<p>I have tried</p>
<blockquote>
<p><code>consumers.py</code></p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>import asyncio
from channels.db import database_sync_to_async
from channels.generic.http import AsyncHttpConsumer
from django.http import HttpResponse
class XhrConsumer(AsyncHttpConsumer):
@database_sync_to_async
def sleep_ms(self, ms):
return asyncio.sleep(ms / 1000)
async def handle(self, body):
ms = int(self.scope['url_route']['kwargs'].get('ms', 0))
await self.sleep_ms(ms)
response = HttpResponse(f'{self.get_status()} DUMMYXHRRESPONSE IN {ms}ms')
response['Access-Control-Allow-Origin'] = '*'
await self.send_response(response)
def get_status(self):
return getattr(self, 'status', 200) # Default status is 200
</code></pre>
<blockquote>
<p><code>asgi.py</code></p>
</blockquote>
<pre><code>from channels.routing import ProtocolTypeRouter, URLRouter
from django.core.asgi import get_asgi_application
from django.urls import path

from .consumers import XhrConsumer  # wherever XhrConsumer is defined

django_asgi_app = get_asgi_application()
application = ProtocolTypeRouter({
"http": django_asgi_app,
"websocket": URLRouter(
[path('xhr/<int:ms>/', XhrConsumer.as_asgi())],
)
})
</code></pre>
<p>but <a href="http://127.0.0.1:8000/xhr/10000" rel="nofollow noreferrer">http://127.0.0.1:8000/xhr/10000</a> is not accessible. I am certain that asgi is set up properly, since all my other notification and messaging functionality has passed its unit tests.</p>
|
<python><django><python-asyncio><coroutine><asgi>
|
2023-05-25 13:28:33
| 0
| 3,177
|
Weilory
|
76,332,840
| 1,136,512
|
Scrapy: select last decendant node?
|
<p>I have a <code>dict</code> with selectors which I use to get data:</p>
<pre><code>for key, selector in selectors.items():
data[key] = response.css(selector).get().strip()
</code></pre>
<p>One of the selectors is <code>span::text</code>, but sometimes the text is wrapped in an additional <code>a</code> tag. My solution is to make that entry a list including <code>span a::text</code>:</p>
<pre><code>for key, selector in selectors.items():
if type(selector) == list:
for sel in selector:
data[key] = response.css(sel).get().strip()
if data[key] not in ["", None]: break
else:
data[key] = response.css(selector).get().strip()
</code></pre>
<p>Is there a way to change the selector so that it will get the text I want whether there's an <code>a</code> tag or not? I would like the script to be a single line with <code>.get().strip()</code>.</p>
|
<python><scrapy><selector>
|
2023-05-25 13:28:24
| 1
| 899
|
bur
|
76,332,802
| 9,182,743
|
Plotly: difference between fig.update_layout({'yaxis': dict(matches=None)}) and fig.update_yaxes(matches=None)
|
<p>I am trying to work out the difference between:</p>
<ul>
<li><code>fig.update_layout({'yaxis': dict(matches=None)})</code></li>
<li><code>fig.update_yaxes(matches=None)</code></li>
</ul>
<p>I thought they were the same, but <code>fig.update_layout({'yaxis': dict(matches=None)})</code> doesn't change the yaxis as expected.</p>
<p>Here is the example code:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
import plotly
print("plotly version: " , plotly.__version__)
# Sample data
data = {
'Variable': ['A', 'A', 'B', 'B', 'C', 'C'],
'Value': [1, 2, 3, 4, 5, 6]
}
# Create box plot
fig = px.box(data, y='Value', facet_row='Variable')
fig.update_layout(height=400, width =400)
fig.update_layout({'yaxis': dict(matches=None)})
fig.show()
fig.update_yaxes(matches=None)
fig.show()
</code></pre>
<p>OUT:</p>
<p><a href="https://i.sstatic.net/zlNbe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zlNbe.png" alt="enter image description here" /></a></p>
|
<python><plotly><axis>
|
2023-05-25 13:24:10
| 1
| 1,168
|
Leo
|
76,332,782
| 17,200,348
|
Measuring Top-1 and Top-5 Accuracy using TensorFlow Model Garden
|
<p>I've been using the <a href="https://github.com/tensorflow/models" rel="nofollow noreferrer">TensorFlow Model Garden</a> to train a set of models on custom datasets that I've created for image classification. Now that it's time to evaluate them, I've run into an issue when trying to measure the top-k accuracies of my networks. The repo supplies a handy evaluation script, namely <a href="https://github.com/tensorflow/models/blob/master/research/slim/eval_image_classifier.py" rel="nofollow noreferrer"><code>eval_image_classifier.py</code></a>, which nicely works for my purposes. Near the end of the script, on line 165, the metrics for evaluation are defined, where I can add measurements of my own, which I've done here:</p>
<pre class="lang-py prettyprint-override"><code># Define the metrics:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
'Precision': slim.metrics.streaming_precision(predictions, labels),
'Accuracy': slim.metrics.streaming_accuracy(predictions, labels),
'Recall@5': slim.metrics.streaming_recall_at_k(logits, labels, 5),
'Recall@1': slim.metrics.streaming_recall_at_k(logits, labels, 1)
})
</code></pre>
<p>The TF-slim metrics functions are found <a href="https://github.com/google-research/tf-slim/blob/master/tf_slim/metrics/metric_ops.py" rel="nofollow noreferrer">here</a>, and contain many useful metrics for evaluating a network's performance. However, I see no way to measure top-k performance using the functions provided. There are recall and precision at top-k functions, which are close, but not quite the same. Moreover, when looking into recall at k and precision at k, they seem most often applied to recommender systems, so I'm not sure these are what I need at all. <a href="https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54" rel="nofollow noreferrer">This article</a> says:</p>
<blockquote>
<p>Precision at k is the proportion of recommended items in the top-k set that are relevant</p>
</blockquote>
<p>and</p>
<blockquote>
<p>Recall at k is the proportion of relevant items found in the top-k recommendations</p>
</blockquote>
<p>while top-k accuracy is the number of times the correct item is in the top-k predictions. To me, these all seem quite similar, and I find it very hard to see what the exact differences are, even after doing research on them.</p>
<p>So how do you measure top-k performance using TensorFlow Model Garden? Are the recall and precision at top-k the functions I'm looking for, or do they differ? If not, what would be the best approach to implementing top-k accuracy within the script I'm working with?</p>
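<p>For reference, the metric I am trying to compute, written out in plain numpy (my own sketch, outside TF-slim):</p>

```python
import numpy as np

def top_k_accuracy(logits: np.ndarray, labels: np.ndarray, k: int) -> float:
    # Indices of the k highest logits per sample (order within the k doesn't matter).
    top_k = np.argsort(logits, axis=1)[:, -k:]
    # Fraction of samples whose true label appears among those k predictions.
    hits = [labels[i] in top_k[i] for i in range(len(labels))]
    return float(np.mean(hits))

logits = np.array([[0.1, 0.5, 0.4],   # top-1 prediction: class 1
                   [0.7, 0.2, 0.1],   # top-1 prediction: class 0
                   [0.2, 0.3, 0.5]])  # top-1 prediction: class 2
labels = np.array([1, 2, 2])
```

<p>On this toy input, top-1 accuracy is 2/3 (the second sample's true class 2 is ranked last) while top-3 accuracy is 1.0.</p>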
|
<python><tensorflow><tensorflow-model-garden>
|
2023-05-25 13:21:31
| 1
| 1,629
|
B Remmelzwaal
|
76,332,407
| 687,739
|
Rendered Liquid template to Python dict
|
<p>I have the following Liquid template:</p>
<pre><code>{%- capture vars -%}
{%- capture articleHeadline -%}This is the article of the headline{%- endcapture -%}
{%- capture intro -%}
Introduction to the document
{%- endcapture -%}
{%- capture articleHighlights -%}
Step 1: Get up
Step 2: Go to kitchen
Step 3: Get coffee
{%- endcapture -%}
{%- capture articleSummary -%}
Summary of the document
{%- endcapture -%}
{%- endcapture -%}
</code></pre>
<p>I'd like to parse this text and end up with a <code>dict</code> of the form:</p>
<pre><code>{
"articleHeadline": "This is the article of the headline",
"intro": "\nIntroduction to the document\n"
"articleHighlights": "\nStep 1: Get up\n\nStep 2: Go to kitchen\n\nStep 3: Get coffee\n",
"articleSummary": "\nSummary of the document\n"
}
</code></pre>
<p>I tried <a href="https://jg-rp.github.io/liquid/" rel="nofollow noreferrer">Python Liquid</a> but I'm confident it does not reverse engineer templates like this. I also checked out <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#" rel="nofollow noreferrer">BeautifulSoup</a> for this purpose but it will only parse HTML tags.</p>
<p>Is there a library or method that I can use to parse this (hopefully avoiding regex)?</p>
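<p>For reference, this regex fallback does produce the dict I want on the example above (it only matches the innermost captures, since their bodies contain no <code>{</code>), but I would still prefer a proper parser:</p>

```python
import re

template = """{%- capture vars -%}
{%- capture articleHeadline -%}This is the article of the headline{%- endcapture -%}
{%- capture intro -%}
Introduction to the document
{%- endcapture -%}
{%- capture articleHighlights -%}
Step 1: Get up
Step 2: Go to kitchen
Step 3: Get coffee
{%- endcapture -%}
{%- capture articleSummary -%}
Summary of the document
{%- endcapture -%}
{%- endcapture -%}"""

# The `[^{]*` body cannot span another `{% capture %}` opening, so the outer
# `vars` capture never matches; only the four inner captures do.
pattern = re.compile(r"\{%-\s*capture\s+(\w+)\s*-%\}([^{]*)\{%-\s*endcapture\s*-%\}")
result = {name: body for name, body in pattern.findall(template)}
```
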
|
<python><liquid>
|
2023-05-25 12:44:08
| 1
| 15,646
|
Jason Strimpel
|
76,332,295
| 8,101,253
|
How do I run python code on each task in Airflow
|
<p>I have a sequence of Tasks on my DAG</p>
<p>eg.</p>
<pre><code>task1 >> task2 >> task3
</code></pre>
<p>What I need is to apply cross-cutting functionality to all tasks without repeating code.
For example logging: I would like to execute log.info(task_id) per task,
or some external operation, like an HTTP call per task.</p>
<p>I am new to Airflow and probably missing some concepts, but how would I do something like this?
A custom operator?
An interceptor for each task execution?</p>
<p>I would like to avoid something like this</p>
<pre><code>[task1,logging] >> [task2,logging] >> [task3, logging]
</code></pre>
<p>Another example:
if I want to process the xcom params stored per task in a common way, how would I do it? Instead of putting the processing logic in every task, I would like each task's xcom handled by a separate operator/task/whatever does the processing; the same goes for logging and other functionality.</p>
<p>Triggering my operator/task/whatever after each task would, I guess, be one way to do this, but again, do I need to add it to each task every time, like this?</p>
<pre><code>[task1,process_xcom] >> [task2,process_xcom] >> [task3, process_xcom]
</code></pre>
<blockquote>
<p>A similar concept in java could be the usage of Aspects on Services</p>
</blockquote>
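<p>The closest I can picture (my own sketch, plain Python, not Airflow-specific) is a decorator that wraps every callable once before it becomes a task, though I don't know whether this is the idiomatic Airflow way:</p>

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dag")

def with_task_logging(task_id):
    """Wrap a task callable so the cross-cutting step runs on every execution."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            log.info("running task %s", task_id)  # or an HTTP call, xcom handling, ...
            return func(*args, **kwargs)
        return wrapper
    return decorator

@with_task_logging("task1")
def task1_callable():
    return "task1 done"
```

<p>Every callable passed to a PythonOperator could be wrapped this way, instead of pairing each task with a logging task in the DAG.</p>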
|
<python><airflow><airflow-2.x>
|
2023-05-25 12:30:55
| 1
| 1,091
|
Panos K
|
76,332,293
| 11,665,178
|
How to change cloud function parameters when deploying python gen2?
|
<p>I am writing Firebase Cloud Function in python using the new <a href="https://firebase.google.com/docs/functions/get-started?gen=2nd" rel="nofollow noreferrer">"Firebase way"</a>.</p>
<p>I would like to be able to set the <code>region</code> and memory for the functions when I deploy them; in Node.js this is easy:</p>
<pre><code>setGlobalOptions({ region: "asia-northeast1" });
exports.date = onRequest({
// set concurrency value
concurrency: 500
},
(req, res) => {
// ...
});
</code></pre>
<p>How to do the same in python :</p>
<pre><code>import logging

from firebase_functions import https_fn
from firebase_admin import initialize_app

initialize_app()
logging.getLogger("Functions").setLevel(logging.INFO)
@https_fn.on_request()
def example(req: https_fn.CallableRequest) -> https_fn.Response:
logging.info(f"Request example called : {req.auth}")
return https_fn.Response("Hello world!")
</code></pre>
<p>EDIT :</p>
<p>I have found the <code>set_global_options()</code> method by playing around and guessing from NodeJS.</p>
<p>However, I still need to know how to set the <code>options.HttpsOptions</code> parameters for each function inside <code>@https_fn.on_call(options=HttpsOptions(something...))</code></p>
|
<python><firebase><google-cloud-functions>
|
2023-05-25 12:30:44
| 1
| 2,975
|
Tom3652
|
76,332,198
| 192,923
|
aiobotocore - AttributeError: 'ClientCreatorContext' object has no attribute 'send_message'
|
<p>I have a working application that interacts with SQS using python 3.6 and I am required to upgrade the same to Python3.8. Locally, I am using elasticmq as part of the development.</p>
<p>I have a SQSWrapper class that initializes queues and associates sqs_client with each queue. So if I have 10 queues, I will be creating 10 sqs_clients.</p>
<p>Here is an extract of the code that creates an sqs_client:</p>
<pre><code>from aiobotocore.session import get_session
def _create_sqs_client():
'''
Creates an SQS client using our botocore session.
'''
return get_session().create_client(
'sqs', **_connection_details
)
sqs_client = _create_sqs_client()
coro = sqs_client.send_message(
QueueUrl=await self._get_queue_url(),
MessageBody=self.dumps(message.data)
)
result = await asyncio.wait_for(coro, self.API_TIMEOUT)
</code></pre>
<p>I am getting an error message here:</p>
<pre><code>> coro = sqs_client.send_message(
QueueUrl=await self._get_queue_url(),
MessageBody=self.dumps(message.data)
)
E AttributeError: 'ClientCreatorContext' object has no attribute 'send_message'
</code></pre>
<p>When I debugged, I came to see that sqs_client is</p>
<pre><code><aiobotocore.session.ClientCreatorContext object at 0x108cc3a30>
</code></pre>
<p>I see a warning as well:</p>
<pre><code>sys:1: RuntimeWarning: coroutine 'AioSession._create_client' was never awaited
</code></pre>
<p>I am unsure what I am missing here; I would really appreciate it if someone could help me crack this one.</p>
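<p>The "coroutine was never awaited" warning makes me suspect that the factory now returns something that must be entered with <code>async with</code> rather than a ready client. A generic illustration of that pattern (all names here are mine, not aiobotocore's):</p>

```python
import asyncio
from contextlib import asynccontextmanager

# A factory decorated like this yields the real object only inside
# `async with`; calling methods on the factory's return value fails.
@asynccontextmanager
async def create_client():
    client = {"send_message": lambda body: f"sent: {body}"}
    try:
        yield client
    finally:
        pass  # close connections here

async def main():
    async with create_client() as client:
        return client["send_message"]("hello")

result = asyncio.run(main())
```
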
|
<python><aws-cli><amazon-sqs><python-3.8><botocore>
|
2023-05-25 12:18:45
| 0
| 5,527
|
nimi
|
76,332,142
| 11,052,072
|
Install Python packages from tar.gz or .whl in GCP Composer
|
<p>I need to install some packages in a GCP Composer environment directly from a tar.gz or .whl file. In a normal environment I can just use <code>pip install https://url.to.package/package.whl</code> but Composer seems to not allow this.</p>
<p>Checking the documentation (<a href="https://cloud.google.com/composer/docs/how-to/using/installing-python-dependencies" rel="nofollow noreferrer">https://cloud.google.com/composer/docs/how-to/using/installing-python-dependencies</a>), it seems that you can install packages from a "public repository", but it needs to be a proper git repository. What if my .tar.gz or .whl is on a random web server? Does anyone have a solution?</p>
<p>Thank you in advance!</p>
|
<python><google-cloud-platform><google-cloud-composer>
|
2023-05-25 12:11:11
| 1
| 553
|
Liutprand
|
76,331,894
| 2,960,978
|
Custom FastAPI middleware causes LocalProtocolError("Too much data for declared Content-Length") exception
|
<p>I have a middleware implemented for FastAPI. For responses that includes some content, it works perfectly. But if a response has no body, it is causing <code>LocalProtocolError("Too much data for declared Content-Length")</code> exception.</p>
<p>To isolate the problem, I've reduced the middleware class to this:</p>
<pre class="lang-python prettyprint-override"><code>from starlette.middleware.base import BaseHTTPMiddleware
from fastapi import FastAPI, Request
class LanguageManagerMiddleware(BaseHTTPMiddleware):
def __init__(self, app: FastAPI):
super().__init__(app)
async def dispatch(self, request: Request, call_next) -> None:
return await call_next(request)
</code></pre>
<p>It basically does nothing.</p>
<p>When I add the middleware, I have an exception:</p>
<pre><code> raise LocalProtocolError("Too much data for declared Content-Length")
h11._util.LocalProtocolError: Too much data for declared Content-Length
</code></pre>
<p>When I disable the middleware, I have no problem.</p>
<p>Here is the line that creates the response which triggers the exception:</p>
<pre><code>return Response(status_code=HTTP_204_NO_CONTENT)
</code></pre>
<p>To further debug the problem, I've activated a breakpoint in the <code>h11/_writers.py</code> <code>ContentLengthWriter</code> class, where the actual exception occurs.</p>
<p>I've tried to decode the byte stream with utf-8 and cp437, but had no luck.</p>
<pre class="lang-python prettyprint-override"><code> class ContentLengthWriter(BodyWriter):
def __init__(self, length: int) -> None:
self._length = length
def send_data(self, data: bytes, write: Writer) -> None:
self._length -= len(data)
if self._length < 0:
raise LocalProtocolError("Too much data for declared Content-Length")
write(data)
</code></pre>
<p>I'm stopping the code at this line: <code>self._length -= len(data)</code></p>
<p>If the middleware is <strong>disabled</strong>, <code>data</code> looks like this: <code>b''</code></p>
<p>If the middleware is <strong>enabled</strong>, <code>data</code> looks like this: <code>b'\x1f\x8b\x08\x00\xf6God\x02\xff'</code></p>
<p>What would be modifying the content of the response?</p>
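<p>One clue I noticed while debugging (my own observation, not from any docs): the mystery bytes start with <code>\x1f\x8b</code>, which is the gzip magic number, so something on the middleware path seems to gzip-compress the body:</p>

```python
import gzip

mystery = b"\x1f\x8b\x08\x00\xf6God\x02\xff"

# b"\x1f\x8b" is the gzip magic number, so the body has been gzip-compressed.
is_gzip = mystery[:2] == b"\x1f\x8b"

# Even an empty body becomes a non-empty stream once gzipped,
# which would clash with a declared Content-Length of 0.
compressed_empty = gzip.compress(b"")
```
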
|
<python><fastapi><middleware><starlette>
|
2023-05-25 11:43:41
| 1
| 1,460
|
SercioSoydanov
|
76,331,840
| 6,017,833
|
MongoDB schema optimisation for finding documents with foreign key in list
|
<p>I have the following two MongoDB database collections.</p>
<p><code>races</code></p>
<pre><code>{
"_id": ..., (index)
"track": ...,
"distance": ...,
"timestamp": ...,
...
"runners": [
{
"horse_id": ...,
"name": ...,
"current_weight": ...,
...
},
...
]
}
</code></pre>
<p><code>horses</code></p>
<pre><code>{
"_id": ..., (index)
"name": ...,
"trainer": ...,
...
}
</code></pre>
<p>For a given horse, I need to find all of its historical races. There are approximately 1 million races in total and each horse has on average 30 historical races. I currently use <code>db.races.find({"runners.horse_id": ...})</code> but it takes over 10 seconds each time. How can I optimise my schema? Is there some intermediate table I could use? I can hold all the data in memory, so maybe using something in Python is faster than MongoDB natively?</p>
|
<python><database><mongodb><pymongo>
|
2023-05-25 11:37:04
| 0
| 1,945
|
Harry Stuart
|
76,331,812
| 20,770,190
|
Problem with updating an ENUM type in postgresql and alembic
|
<p>I have an issue with <code>ENUM</code> in PostgreSQL and Alembic, and I couldn't resolve the problem using the existing topics on Stack Overflow in this regard.</p>
<p>I had the following code:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.dialects.postgresql import ENUM
from enum import Enum
class StatusEnum(Enum):
requested = "requested"
accepted = "accepted"
declined = "declined"
class EventModifications(BaseModel):
__tablename__ = "event_modifications"
old_value = Column(Text)
new_value = Column(Text)
status = Column(
ENUM(StatusEnum),
default=StatusEnum.accepted.value,
server_default=StatusEnum.accepted.value
)
</code></pre>
<p>Then I appended an entity in the Enum class and also changed the default value of <code>status</code> column to the new added value:</p>
<pre class="lang-py prettyprint-override"><code>class StatusEnum(Enum):
requested = "requested"
accepted = "accepted"
declined = "declined"
modified = "modified" # new one
class EventModifications(BaseModel):
__tablename__ = "event_modifications"
old_value = Column(Text)
new_value = Column(Text)
status = Column(
ENUM(StatusEnum),
default=StatusEnum.modified.value, # changed the defaults
server_default=StatusEnum.modified.value
)
</code></pre>
<p>Error caused by Alembic:</p>
<pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DatatypeMismatch) column "status" cannot be cast automatically to type statusenum
HINT: You might need to specify "USING status::statusenum".
[SQL: ALTER TABLE event_modifications ALTER COLUMN status TYPE statusenum ]
</code></pre>
<p>But when I change the name of the <code>StatusEnum</code> class to something else, it works well!</p>
|
<python><postgresql><sqlalchemy><enums><alembic>
|
2023-05-25 11:33:44
| 2
| 301
|
Benjamin Geoffrey
|
76,331,512
| 1,334,752
|
How calculate an average value of the most recent events across groups in pandas dataframe?
|
<p>I have a pandas dataframe with events (timestamp, value, company id etc).</p>
<p>EXAMPLE:</p>
<pre><code>
timestamp value name nusers
0 2023-06-01 10:46:11 -1 A 1000
1 2023-06-01 11:12:12 1 A 1000
2 2023-06-01 15:52:44 0 A 1000
3 2023-06-01 18:24:15 0 A 1000
4 2023-06-01 19:19:58 1 A 1000
0 2023-06-01 07:00:41 0 B 2000
1 2023-06-01 09:44:46 -1 B 2000
2 2023-06-01 15:06:21 1 B 2000
3 2023-06-01 15:32:35 0 B 2000
4 2023-06-01 21:55:05 -1 B 2000
0 2023-06-01 08:20:33 0 C 3000
1 2023-06-01 15:02:17 -1 C 3000
2 2023-06-01 17:09:25 1 C 3000
3 2023-06-01 21:51:31 0 C 3000
4 2023-06-01 22:12:48 0 C 3000
</code></pre>
<p>and for each event I need to calculate an average value of the most recent events across companies at that moment in time. Of course, the most straightforward way would be to just loop through all rows, take the most recent event for each company earlier than the current timestamp, and calculate an average.</p>
<p>So for the dataframe above the 'naive' code that works looks like that:</p>
<pre class="lang-py prettyprint-override"><code>res=[]
for index, row in df.iterrows():
recent=df[df.timestamp<=row.timestamp]
latest_values=recent.groupby('name').last()
res.append(dict(timestamp=row.timestamp, value=latest_values.value.mean()))
aggregated_df=pd.DataFrame(res)
aggregated_df.sort_values('timestamp', inplace=True)
aggregated_df
</code></pre>
<p>which results in what I need:</p>
<pre><code> timestamp value
5 2023-06-01 07:00:41 0.000000
10 2023-06-01 08:20:33 0.000000
6 2023-06-01 09:44:46 -0.500000
0 2023-06-01 10:46:11 -0.666667
1 2023-06-01 11:12:12 0.000000
11 2023-06-01 15:02:17 -0.333333
7 2023-06-01 15:06:21 0.333333
8 2023-06-01 15:32:35 0.000000
2 2023-06-01 15:52:44 -0.333333
12 2023-06-01 17:09:25 0.333333
3 2023-06-01 18:24:15 0.333333
4 2023-06-01 19:19:58 0.666667
13 2023-06-01 21:51:31 0.333333
9 2023-06-01 21:55:05 0.000000
14 2023-06-01 22:12:48 0.000000
</code></pre>
<p>But I wonder if there is a more pandas-like way of having the same result.</p>
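<p>One direction I have been eyeing (untested beyond toy data, so I may be missing edge cases such as duplicate timestamps within a company) is pivoting to one column per company, forward-filling, and averaging across columns:</p>

```python
import pandas as pd

# Toy stand-in for the dataframe above (first four events).
df = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-06-01 07:00:41", "2023-06-01 08:20:33",
        "2023-06-01 09:44:46", "2023-06-01 10:46:11",
    ]),
    "value": [0, 0, -1, -1],
    "name": ["B", "C", "B", "A"],
})

# One column per company, holding the latest value seen so far.
wide = (df.pivot_table(index="timestamp", columns="name",
                       values="value", aggfunc="last")
          .ffill())
# Mean of the most recent value per company at each event time
# (companies with no event yet are NaN and are skipped by mean).
running_mean = wide.mean(axis=1)
```
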
|
<python><pandas>
|
2023-05-25 10:53:44
| 1
| 2,092
|
Philipp Chapkovski
|
76,331,447
| 9,536,103
|
Create count of categorical column 1 hour ahead and 1 hour behind current time
|
<p>I have a dataframe with two columns: <code>time_of_day</code> and <code>categorical_column</code>. For each row, I want to count the rows with the same value in <code>categorical_column</code> whose time falls within 1 hour before or after the current row's time:</p>
<p>Example Input:</p>
<pre><code>time_of_day categorical_column
25/05/2023 11:30:00 category1
25/05/2023 11:30:00 category1
25/05/2023 11:45:00 category1
25/05/2023 12:35:00 category1
25/05/2023 13:00:00 category2
25/05/2023 13:30:00 category1
25/05/2023 13:45:00 category1
25/05/2023 14:00:00 category2
25/05/2023 14:15:00 category2
25/05/2023 14:15:00 category1
</code></pre>
<p>Example Output:</p>
<pre><code>time_of_day categorical_column window_count
25/05/2023 11:30:00 category1 3
25/05/2023 11:30:00 category1 3
25/05/2023 11:45:00 category1 4
25/05/2023 12:35:00 category1 3
25/05/2023 13:00:00 category2 2
25/05/2023 13:30:00 category1 4
25/05/2023 13:45:00 category1 3
25/05/2023 14:00:00 category2 3
25/05/2023 14:15:00 category2 2
25/05/2023 14:15:00 category1 3
</code></pre>
<p>e.g. <code>time_of_day=25/05/2023 11:45:00</code> and <code>categorical_column=category1</code> has a value of <code>4</code> because there are 4 values containing <code>category1</code> in range <code>25/05/2023 10:45:00 - 25/05/2023 12:45:00</code></p>
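<p>To pin down the semantics, this brute-force version (pure Python, quadratic, so only a reference, not a solution) reproduces the example output above:</p>

```python
from datetime import datetime

rows = [
    ("25/05/2023 11:30:00", "category1"), ("25/05/2023 11:30:00", "category1"),
    ("25/05/2023 11:45:00", "category1"), ("25/05/2023 12:35:00", "category1"),
    ("25/05/2023 13:00:00", "category2"), ("25/05/2023 13:30:00", "category1"),
    ("25/05/2023 13:45:00", "category1"), ("25/05/2023 14:00:00", "category2"),
    ("25/05/2023 14:15:00", "category2"), ("25/05/2023 14:15:00", "category1"),
]
parsed = [(datetime.strptime(t, "%d/%m/%Y %H:%M:%S"), c) for t, c in rows]

# For each row: count rows of the same category within +/- 1 hour (inclusive).
window_count = [
    sum(1 for t2, c2 in parsed
        if c2 == c and abs((t2 - t).total_seconds()) <= 3600)
    for t, c in parsed
]
```
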
|
<python><pandas>
|
2023-05-25 10:46:22
| 2
| 1,151
|
Daniel Wyatt
|
76,331,409
| 2,354,908
|
Parse JSON string within dataframe and insert extracted information into another column
|
<p>I am trying to extract information from each cell in a row from a data frame and add them as another column.</p>
<pre><code>import json
import pandas as pd
df_nested = pd.read_json('train.json')
df_sample = df_nested.sample(n=50, random_state=0)
display(df_sample)
for index, row in df_sample.iterrows():
table_json = row['table']
paragraphs_json = row['paragraphs']
questions_json = row['questions']
table = json.loads(json.dumps(table_json)).get("table")
#print(table)
paragraphs = [json.loads(json.dumps(x)).get("text") for x in paragraphs_json]
#print(paragraphs)
questions = [json.loads(json.dumps(x)).get("question") for x in questions_json]
answer = [json.loads(json.dumps(x)).get("answer") for x in questions_json]
answer_type = [json.loads(json.dumps(x)).get("answer_type") for x in questions_json]
program = [json.loads(json.dumps(x)).get("derivation") for x in questions_json]
print(program)
</code></pre>
<p>The dataframe is as</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>table</th>
<th>paragraphs</th>
<th>questions</th>
</tr>
</thead>
<tbody>
<tr>
<td>{"uid": "bf2c6a2f-0b76-4bba-8d3c-2ee02d1b7d73", "table": "[[, , December 31,,], [, Useful Life, 2019, 2018], [Computer equipment and software, 3 – 5 years, $57,474, $52,055], [Furniture and fixtures, 7 years, 6,096, 4,367], [Leasehold improvements, 2 – 6 years, 22,800, 9,987], [Renovation in progress, n/a, 8, 1,984], [Build-to-suit property, 25 years, —, 51,058], [Total property and equipment, gross, , 86,378, 119,451], [Less: accumulated depreciation and amortization, , (49,852), (42,197)], [Total property and equipment, net, , $36,526, $77,254]]"}</td>
<td>[{"uid": "07e28145-95d5-4f9f-b313-ac8c3b4a869f", "text": "Accounts Receivable", "order": "1"}, {"uid": "b41652f7-0e68-4cf6-9723-fec443b1e604", "text": "The following is a summary of Accounts receivable (in thousands):", "order": "2"}]</td>
<td>[{"rel_paragraphs": "[2]", "answer_from": "table-text", "question": "Which years does the table provide information for the company's Accounts receivable?", "scale": "", "answer_type": "multi-span", "req_comparison": "false", "order": "1", "uid": "53041a93-1d06-48fd-a478-6f690b8da302", "answer": "[2019, 2018]", "derivation": ""}, {"rel_paragraphs": "[2]", "answer_from": "table-text", "question": "What was the amount of accounts receivable in 2018?", "scale": "thousand", "answer_type": "span", "req_comparison": "false", "order": "2", "uid": "a196a61c-43b0-43f5-bb4b-b059a1103c54", "answer": "[225,167]", "derivation": ""}, {"rel_paragraphs": "[2]", "answer_from": "table-text", "question": "What was the allowance for product returns in 2019?", "scale": "thousand", "answer_type": "span", "req_comparison": "false", "order": "3", "uid": "c8656e5e-2bb7-4f03-ae73-0d04492155c0", "answer": "[(25,897)]", "derivation": ""}, {"rel_paragraphs": "[2]", "answer_from": "table-text", "question": "How many years did the net accounts receivable exceed $200,000 thousand?", "scale": "", "answer_type": "count", "req_comparison": "false", "order": "4", "uid": "fdf08d3d-d570-4c21-9b3e-a3c86e164665", "answer": "1", "derivation": "2018"}, {"rel_paragraphs": "[2]", "answer_from": "table-text", "question": "What was the change in the Allowance for doubtful accounts between 2018 and 2019?", "scale": "thousand", "answer_type": "arithmetic", "req_comparison": "false", "order": "5", "uid": "6ecb2062-daca-4e1e-900e-2b99b2fce929", "answer": "424", "derivation": "-1,054-(-1,478)"}, {"rel_paragraphs": "[]", "answer_from": "table", "question": "What was the percentage change in the Allowance for product returns between 2018 and 2019?", "scale": "percent", "answer_type": "arithmetic", "req_comparison": "false", "order": "6", "uid": "f2c1edad-622d-4959-8cd5-a7f2bd2d7bb1", "answer": "129.87", "derivation": "(-25,897+11,266)/-11,266"}]</td>
</tr>
</tbody>
</table>
</div>
<p>The above code is not efficient. But how do I add the outputs from the <code>df_sample.iterrows()</code> loop (i.e. <code>table, questions, answer, answer_type</code>, etc.) as new columns in my original <code>df_sample</code> dataframe?</p>
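<p>To make the goal concrete on a toy frame (synthetic data of my own, shaped like one field of the real dataset), this is the kind of column-wise extraction I am after instead of <code>iterrows</code>:</p>

```python
import pandas as pd

# Toy stand-in: each cell holds already-parsed JSON (a list of dicts).
df = pd.DataFrame({
    "questions": [
        [{"question": "q1", "answer": "a1"}, {"question": "q2", "answer": "a2"}],
        [{"question": "q3", "answer": "a3"}],
    ]
})

# One new column per extracted field, built from the nested dicts.
for field in ("question", "answer"):
    df[field + "_list"] = df["questions"].apply(
        lambda cells: [c.get(field) for c in cells]
    )
```
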
|
<python><json><pandas><dataframe>
|
2023-05-25 10:41:51
| 1
| 1,270
|
Betafish
|
76,331,334
| 5,618,251
|
How to make a boxplot in using month as x-axis and data as y-axis
|
<p>I have the following datasets (see below).</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
date_month =\
np.array([ 4, 5, 8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 7, 8, 9, 10, 11,
12, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 1, 2, 3, 4,
5, 6, 7, 8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, 12, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 1, 2,
3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
2, 3, 4, 5, 7, 8, 9, 10, 10, 12, 1, 2, 3, 4, 6, 7, 8,
9, 11, 12, 1, 2, 4, 5, 6, 7, 10, 11, 12, 1, 3, 4, 5, 6,
8, 9, 10, 11, 1, 2, 3, 4, 4, 7, 8, 9, 12, 1, 2, 3, 5,
6, 7, 8, 11, 12, 1, 4, 4, 5, 6, 6, 7, 10, 11, 12, 1, 2,
3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12, 1, 2])
delta_gmb_gris =\
np.array([ 0.00000000e+00, 6.52332764e+01, -3.21059875e+02, -1.16000977e+02,
1.49104065e+02, 2.64947510e+00, 6.44912109e+01, 2.50497498e+02,
-2.01494080e+02, 6.72978516e+01, 3.90867310e+01, -4.75610352e+00,
-1.88961121e+02, -2.03859741e+02, 8.72711182e+00, 8.64827881e+01,
3.07638550e+01, -6.92280273e+01, 1.55065308e+01, 6.16711426e+00,
3.14987793e+01, 7.76036987e+01, -4.47878418e+01, 2.07130737e+01,
-1.71567993e+02, -1.27824707e+02, -4.09106445e+01, -6.65779724e+01,
1.22063568e+02, 1.01320801e+01, -7.25188599e+01, 5.64022827e+01,
7.34244995e+01, 1.50355225e+01, 3.38818359e+01, -4.31621094e+01,
-1.35590759e+02, -1.79813110e+02, -1.11535080e+02, 1.75334015e+01,
3.81227112e+00, 1.05325272e+02, -9.73476105e+01, 3.31967773e+01,
1.25141800e+02, -5.99645996e+01, 6.89667358e+01, -4.80169373e+01,
-5.99559021e+01, -1.39211884e+02, -1.25912407e+02, 2.10015106e+01,
1.06963715e+02, -3.59637375e+01, 2.01712189e+01, 7.62488937e+01,
-5.14585114e+01, 1.07143402e+01, 3.43760834e+01, -2.37678070e+01,
-1.97999802e+02, -1.57171906e+02, -3.81405640e+01, -2.07762146e+01,
4.34190369e+01, -3.78499146e+01, 5.10930328e+01, 8.71820450e+01,
-7.65830231e+00, 9.55678940e+01, -9.06094055e+01, -3.79481125e+01,
-1.51231674e+02, -2.04288483e+02, 1.75345154e+01, -4.03978271e+01,
1.17001984e+02, -2.23410034e+01, 7.62485352e+01, 9.95090027e+01,
-6.25500793e+01, -6.23908691e+01, 3.89622498e+01, 8.02760315e+01,
-8.80406799e+01, -3.50216858e+02, -4.16336670e+01, 3.23754883e+01,
1.47501221e+01, 1.26898743e+02, 4.58666992e+00, -2.09291382e+01,
-4.63806152e-01, -7.16840820e+01, 6.68127441e+01, -4.77747803e+01,
-2.39722107e+02, -1.82174377e+02, -1.14288452e+02, 8.92976074e+01,
-1.76489258e+00, 7.19572144e+01, 4.34899902e+00, -6.35200806e+01,
3.36867676e+01, -1.09863281e-02, -3.13136353e+02, -3.04594727e+01,
-2.03292603e+02, -6.26403809e+00, 2.49539795e+01, 8.23956299e+01,
-2.02749023e+01, 6.30859375e-01, 9.74741211e+01, -3.07237549e+01,
2.30983887e+01, -1.05333008e+02, -4.74336426e+02, -1.04421387e+02,
1.18629150e+02, 4.91618652e+01, 1.77366943e+01, -5.09847412e+01,
6.53220215e+01, 5.60297852e+01, 2.20312500e+01, -2.14761841e+02,
-7.57222900e+01, 5.24270020e+01, 2.35379639e+01, -5.40844727e+00,
9.83039551e+01, -1.90515137e+01, 1.13490601e+02, -1.22815186e+02,
-3.75110596e+02, 2.94479980e+01, -8.11767578e+00, 2.30630371e+02,
-2.21856934e+02, 1.98024902e+02, -7.98215332e+01, 4.95251465e+01,
-2.07468262e+01, -2.14999023e+02, -3.74052734e+01, -8.99821777e+01,
4.27360840e+01, 4.12053223e+01, 3.22509766e+01, 1.75200195e+01,
6.32805176e+01, -5.16687012e+01, -2.10808594e+02, -1.83963867e+02,
1.11079346e+02, -1.16482422e+02, 2.03957275e+02, -7.33830566e+01,
1.47921143e+02, -9.34592285e+01, -3.33400879e+01, -4.14875488e+01,
-6.58161621e+01, -6.06201172e+00, 1.03962646e+02, -9.55339355e+01,
-4.34025879e+01, -7.14077148e+01, -1.37099609e+01, 6.51933594e+01,
-2.58312988e+01, 2.11472168e+01, -2.47263672e+02, -1.89649170e+02,
-7.08708496e+01, -5.19438477e+01, 6.52338867e+01, 1.65212402e+01,
-7.45874023e+01, 4.24604492e+01, -8.08146973e+01, -8.29882812e+00,
6.45378418e+01, -3.11279297e+00, -1.39932861e+02, -1.78218018e+02,
5.48134766e+01, 8.66081543e+01, 6.78093262e+01, -9.07756348e+01,
9.45754395e+01, 1.15795898e+00])
</code></pre>
<p>I want to make a boxplot of the data with the month on the x-axis and the data on the y-axis. This is fairly simple in Matlab, where you can just use one line to plot it:</p>
<pre><code>boxchart(date_month,delta_gmb_gris)
</code></pre>
<p>which produces exactly what I want:</p>
<p><a href="https://i.sstatic.net/xEqKv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xEqKv.png" alt="enter image description here" /></a></p>
<p>In Python, however, this is not the case:</p>
<pre><code>import matplotlib.pyplot as plt

data = [date_month, delta_gmb_gris]
fig = plt.figure(figsize=(10, 7))
# Creating axes instance
ax = fig.add_axes([0, 0, 1, 1])
# Creating plot
bp = ax.boxplot(data)
# show plot
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/BLVoM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BLVoM.png" alt="enter image description here" /></a></p>
<p>How can I solve it?</p>
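<p>For what it's worth, I suspect matplotlib wants one array of values per month rather than the raw <code>[x, y]</code> pair. Something like this grouping (sketch on shortened stand-in arrays):</p>

```python
import numpy as np

months = np.array([4, 5, 8, 4, 5, 8])              # stand-in for date_month
values = np.array([1.0, 2.0, 3.0, 5.0, 6.0, 7.0])  # stand-in for delta_gmb_gris

# One array of values per calendar month, in month order (empty if no data).
groups = [values[months == m] for m in range(1, 13)]

# Then, presumably:
# ax.boxplot(groups, positions=range(1, 13))
```
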
|
<python><matplotlib><boxplot>
|
2023-05-25 10:34:18
| 1
| 361
|
user5618251
|
76,331,220
| 10,981,411
|
using python to open my excel and save and close
|
<p>The code works fine; the only issue is that excel.Visible = False works on my colleagues' laptops but doesn't on mine. I think there is the same issue with excel.DisplayAlerts = False.</p>
<p>Any reason why this is happening?</p>
<p>below are my codes</p>
<pre><code>import win32com.client as win32
excel = win32.gencache.EnsureDispatch('Excel.Application')
excel.Visible = False # Set Excel to be hidden
excel.DisplayAlerts = False
wb = excel.Workbooks.Open(filename)
ws = wb.Sheets('tab_name')
ws.Range('K3').Value = 0.55
ws.Range('K4').Value = 1
ws.Range('K7').Value = 0
ws.Range('K8').Value = 1
ws.Calculate()
wb.Save()
wb.Close()
excel.Quit()
</code></pre>
|
<python><win32com>
|
2023-05-25 10:19:38
| 0
| 495
|
TRex
|
76,331,183
| 9,848,968
|
Python List Comprehension: Add n specific elements after each element in a list
|
<p>I would like to add <code>n</code> specific elements after each element in a list using only list comprehension.</p>
<p>Example:</p>
<p><code>l = [A, B, C, D, E]</code></p>
<p><code>element = ''</code></p>
<p>for <code>n = 1</code> should result in</p>
<p><code>l = [A, '', B, '', C, '', D, '', E, '']</code></p>
<p>or <code>n = 2</code> should result in</p>
<p><code>l = [A, '', '', B, '', '', C, '', '', D, '', '', E, '', '']</code></p>
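One way to do this with a single list comprehension is to expand each item into itself followed by `n` copies of the filler; a minimal sketch:

```python
l = ["A", "B", "C", "D", "E"]
element = ""
n = 2

# For each item, emit the item followed by n copies of `element`
result = [x for item in l for x in [item] + [element] * n]
print(result)
# ['A', '', '', 'B', '', '', 'C', '', '', 'D', '', '', 'E', '', '']
```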
|
<python><list-comprehension>
|
2023-05-25 10:15:02
| 3
| 385
|
muw
|
76,331,049
| 7,965
|
ruamel.yaml anchors with Roundtriploader/Roundtripdumper
|
<p>I am trying to load the example YAML file below using the ruamel.yaml Python package.</p>
<pre><code>- database: dev_db
  <<: &defaults
    adapter: postgres
    host: localhost
    username: postgres
    password: password
- database: test_db
  <<: *defaults
- database: prod_db
  <<: *defaults
</code></pre>
<pre><code>from pydantic import BaseModel
from ruamel.yaml import YAML

yaml = YAML(typ='rt')
with open('config.yaml', 'r') as file:
    envs = yaml.load(file)
for env in envs:
    print(env)
</code></pre>
<p>This generates the output below, which misses the merged (aliased) keys completely. But when I change the typ to 'safe', even the merged keys are output correctly.</p>
<pre><code>{'database': 'dev_db'}
{'database': 'test_db'}
{'database': 'prod_db'}
</code></pre>
<p>I am trying to create Pydantic data models from each entry in the YAML. How can I get all the attributes with the default round-trip ('rt') loader?</p>
|
<python><ruamel.yaml>
|
2023-05-25 09:59:24
| 1
| 9,317
|
Sirish Kumar Bethala
|
76,330,816
| 9,454,531
|
Why is my Python file watcher not writing the data from Parquet files to a data frame?
|
<p>I have written a file watcher in Python that watches a specific folder on my laptop; whenever a new Parquet file is created in it, the watcher picks it up, reads the data using Pandas, and constructs a data frame from it.</p>
<p><strong>Issue:</strong> It does all those activities perfectly except the last bit, where it has to write the data to the data frame.</p>
<p>Here is the code I have written:</p>
<pre><code># Imports and decalarations
import os
import sys
import time
import pathlib
import pandas as pd
import pyarrow.parquet as pq
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler, PatternMatchingEventHandler
</code></pre>
<pre><code># Eventhandler class
class Handler(FileSystemEventHandler):
    def on_created(self, event):
        # Import Data
        filepath = pathlib.PureWindowsPath(event.src_path).as_posix()
        time.sleep(10)  # To allow time to complete file write to disk
        dataset = pd.read_parquet(filepath, engine='pyarrow')
        dataset = dataset.reset_index(drop=True)
        dataset.head()

# Code to run for Python Interpreter
if __name__ == "__main__":
    path = r"D:\Folder1\Folder2\Folder3"  # Path to watch
    observer = Observer()
    event_handler = Handler()
    observer.schedule(event_handler, path, recursive=True)
    observer.start()
    try:
        while(True):
            pass
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
</code></pre>
<p>The expected output is the first five rows of the data frame; however, it shows me nothing, and I get no error either.</p>
<p><strong>Some Useful Information</strong></p>
<ul>
<li><p>I have been running this code in Jupyter Notebook.</p>
</li>
<li><p>However, I have also run it in Spyder to see whether a data frame appears at all in its Variable Explorer section. But it didn't.</p>
</li>
</ul>
<p>From this, the natural conclusion would be that the data frame isn't getting created at all. But this is what baffles me, because just yesterday I successfully read this same Parquet file with somewhat less sophisticated code (below), where I fed the file path in as a raw string.</p>
<pre><code># Less Sophisticated Code
filepath = r"D:\Folder1\Folder2\Folder3\filename.parquet"
dataset = pd.read_parquet(filepath, engine='pyarrow')
dataset = dataset.reset_index(drop=True) # Resets index of dataframe and replaces with integers
dataset.head()
</code></pre>
<p><a href="https://i.sstatic.net/0yns1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0yns1.png" alt="Output Screenshot (In Jupyter Notebook)" /></a></p>
<p>Is the filepath the issue then? I am very happy to provide any other information you may need.</p>
<p><strong>Edit:</strong> I have added a screenshot of the output from the code that did not have a file watcher</p>
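One thing worth checking, independent of the watcher itself: a bare `dataset.head()` expression only renders in an interactive Jupyter cell. Inside a watchdog callback (which runs on a background thread), its return value is silently discarded, so nothing appears even when the frame was built correctly. A minimal illustration of the difference, using plain pandas without watchdog:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

df.head()         # returned DataFrame is discarded outside a REPL cell
print(df.head())  # an explicit print shows the rows anywhere, including callbacks
```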
|
<python><pandas><jupyter-notebook><parquet><watchdog>
|
2023-05-25 09:32:40
| 1
| 317
|
Arnab Roy
|
76,330,770
| 7,847,906
|
Creating a PDF file from a folder structure containing images
|
<p>I am looking for a way to generate a PDF from a Folder with several pictures.</p>
<p>I have many pictures like this:</p>
<pre><code>folder1/
    Image1.jpg
    Image2.jpg
    ......
folder2/
    img1.jpg
    pict.jpg
    name1.jpg
</code></pre>
<p>I am looking for a way to automatically generate a PDF using the names of the saved pictures and, if possible, load the pictures into the PDF automatically.</p>
<p>My code:</p>
<pre><code>import os
from PIL import Image
from PyPDF2 import PdfFileWriter, PdfFileReader
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import letter

# Define the path to the folder containing the images
path = "path/to/folder"

# Create a new PDF file
output_pdf = PdfFileWriter()

# Loop through each image in the folder and add it to the PDF file
for filename in os.listdir(path):
    if filename.endswith(".jpg") or filename.endswith(".jpeg") or filename.endswith(".png"):
        # Open the image file using the Pillow library
        with Image.open(os.path.join(path, filename)) as img:
            # Create a new page in the PDF file
            pdf_page = output_pdf.addBlankPage(width=img.width, height=img.height)
            # Convert the image to RGB mode and add it to the PDF page
            img_rgb = img.convert('RGB')
            pdf_page.mergeRGBImage(img_rgb)
            # Add the file name to the PDF page
            c = canvas.Canvas(pdf_page)
            c.setFont("Helvetica", 8)
            c.drawString(10, 10, filename)
            c.save()

# Save the output PDF file
with open("output.pdf", "wb") as out_file:
    output_pdf.write(out_file)
</code></pre>
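PyPDF2 pages have no `mergeRGBImage` method, so the loop above cannot work as written. One simpler approach is to let Pillow write the multi-page PDF directly. A sketch (note it omits the filename caption on each page, which would still need something like reportlab; `folder_to_pdf` and the demo files are names I made up):

```python
import os
import tempfile

from PIL import Image


def folder_to_pdf(folder, out_path):
    # Collect the images in name order and convert them to RGB
    images = [
        Image.open(os.path.join(folder, name)).convert("RGB")
        for name in sorted(os.listdir(folder))
        if name.lower().endswith((".jpg", ".jpeg", ".png"))
    ]
    if images:
        # Pillow writes one page per image when save_all=True
        images[0].save(out_path, save_all=True, append_images=images[1:])


# Tiny self-contained demo with generated placeholder images
demo_dir = tempfile.mkdtemp()
for name in ["a.jpg", "b.jpg"]:
    Image.new("RGB", (40, 40), "white").save(os.path.join(demo_dir, name))
out_file = os.path.join(demo_dir, "output.pdf")
folder_to_pdf(demo_dir, out_file)
```

To cover the nested `folder1/`, `folder2/` layout, the same function could be called once per subfolder, or `os.walk` used instead of `os.listdir`.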
|
<python><windows><pdf-generation>
|
2023-05-25 09:27:26
| 1
| 347
|
maxasela
|
76,330,655
| 13,086,128
|
AttributeError: module 'numpy' has no attribute 'complex'
|
<p>I am trying to make a real number complex using numpy. I am using numpy version <code>1.24.3</code></p>
<p>Here is the code:</p>
<pre><code>import numpy as np
c=np.complex(1)
</code></pre>
<p>However, I get this error:</p>
<pre><code>AttributeError: module 'numpy' has no attribute 'complex'.
</code></pre>
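`np.complex` was merely an alias for the built-in `complex`; it was deprecated in NumPy 1.20 and removed in 1.24, so under 1.24.3 the attribute no longer exists. Either the built-in or an explicit NumPy scalar type works as a replacement:

```python
import numpy as np

c = complex(1)         # plain Python complex, what np.complex used to alias
c2 = np.complex128(1)  # explicit NumPy complex scalar type

print(c, c2)  # (1+0j) (1+0j)
```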
|
<python><python-3.x><numpy><complex-numbers>
|
2023-05-25 09:14:48
| 1
| 30,560
|
Talha Tayyab
|
76,330,509
| 17,718,870
|
Unexpected module object on Runtime Execution
|
<p>I have two modules mod1.py and mod2.py. In the first one i have the following code:</p>
<pre><code># mod1.py
# -------
import inspect

def get_module_object():
    curr_frame = inspect.currentframe()
    return inspect.getmodule(curr_frame)

# =======================================
# mod2.py
# -------
from mod1 import get_module_object

print(get_module_object())  # RETURNS : <module 'mod1' from '...\mod1.py'>
                            # Expected: <module 'mod2' from '...\mod2.py'>
</code></pre>
<p>I could fix this by adding a parameter to the function signature that accepts <code>curr_frame = inspect.currentframe()</code> (<code>def func(curr_frame): ...</code>), but this implies importing <em>inspect</em> or <em>sys</em> (<code>sys._getframe(...)</code>) in every module just to obtain the current frame, which would be tedious, error-prone, and violate the DRY principle.</p>
<p>P.S.</p>
<p>I had a similar problem with <code>__file__</code>, which is set at import time and again produces (for me) unexpected results: by analogy to the example above, it returns the full file path of mod1.py and not mod2.py.</p>
<p>The main point is to encapsulate as much of the logic as possible in mod1 and to use it in the rest of the modules mod2 ... mod_n.</p>
<p>Thank you in advance for your help :)</p>
|
<python>
|
2023-05-25 08:57:57
| 0
| 869
|
baskettaz
|
76,330,456
| 7,257,089
|
Slow responses using Using Google Cloud Run, FastAPI and the Meta Whatsapp API
|
<p>This is quite a specific problem, but I'm wondering if anyone else has encountered it. I'm using the Whatsapp Cloud API (<a href="https://developers.facebook.com/docs/whatsapp/cloud-api/" rel="nofollow noreferrer">https://developers.facebook.com/docs/whatsapp/cloud-api/</a>) for a question-answer chatbot. The messages are passed to an LLM, which takes some time to respond.</p>
<p>Unfortunately in the meantime, the Meta API has sent me the same message a few more times. It seems like unless you almost immediately respond with a 200 status code the Meta API will keep spamming you with the same message (see <a href="https://developers.facebook.com/docs/whatsapp/on-premises/guides/webhooks" rel="nofollow noreferrer">here</a> under the "Retry" heading and previous stackoverflow answer: <a href="https://stackoverflow.com/questions/72894209/whatsapp-cloud-api-sending-old-message-inbound-notification-multiple-time-on-my">WhatsApp cloud API sending old message inbound notification multiple time on my webhook</a>).</p>
<p><strong>What I've tried</strong></p>
<p>My first approach was to use FastAPI's <a href="https://fastapi.tiangolo.com/tutorial/background-tasks/" rel="nofollow noreferrer">background task</a> functionality. This allows me to immediately return a 200 response and then do the LLM stuff as a background process. This works well in as much as it stops the multiple Whatsapp API calls. However, the LLM is very slow to respond because cloud run presumably does not see the background task and therefore shuts down.</p>
<p><strong>What I would prefer not to try</strong></p>
<p>I know you can set cloud run to be "always on", setting the min CPUs to 1. That would presumably solve the background task problem, but I don't want to pay for a server that's constantly on when I'm not sure how much use it will get. It also kind of defeats the object of cloud run.</p>
<p>I could also have 2 microservices, one to receive the Whatsapp messages and immediately acknowledge receipt, the other would then receive each message and do the LLM stuff. I want to try and avoid this as it's a relatively simple codebase and would prefer not to split out into 2 services.</p>
<p>So.....</p>
<p><strong>Is there any way to have this running as a single service on Cloud Run, while solving the problems I mentioned?</strong></p>
|
<python><fastapi><whatsapp><google-cloud-run>
|
2023-05-25 08:50:46
| 1
| 372
|
millsy
|
76,330,421
| 9,370,733
|
Specifying a different input type for a Pydantic model field (comma-separated string input as a list of strings)
|
<p>Using Pydantic, how can I specify an attribute that has an input type different from its actual type?</p>
<p>For example I have a <code>systems</code> field that contains a list of systems (so a list of strings) and the user can provide this systems list as a comma separated string (e.g. <code>"system1,system2"</code>); then I use a validator to split this string into a list of strings.</p>
<p>The code below is doing that and it's working but the type hinting is wrong as the systems field is actually a list of strings, not a string; the validator is splitting the original string into a list of strings.</p>
<p>How can I fix this?</p>
<pre class="lang-py prettyprint-override"><code>import typing
from pydantic import BaseSettings, Field, validator


class Config(BaseSettings):
    systems: str = Field([], description="list of systems as a comma separated list (e.g. 'sys1,sys2')")

    @validator("systems")
    def set_systems(cls, v) -> typing.List[str]:
        if v == "":
            return []
        systems = list(filter(None, v.split(",")))
        return systems


if __name__ == "__main__":
    c = Config(**{"systems": "foo,bar"})
    print(c)
</code></pre>
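A sketch of one way to fix the typing (written against Pydantic v1-style validators, and using `BaseModel` here for brevity where the question uses `BaseSettings`): declare the field with its real type, `List[str]`, and split the incoming string in a `pre=True` validator, which runs before type validation:

```python
import typing

from pydantic import BaseModel, validator


class Config(BaseModel):
    systems: typing.List[str] = []  # the real type is a list of strings

    @validator("systems", pre=True)
    def split_systems(cls, v):
        # Accept a comma separated string and turn it into a list;
        # anything already list-shaped passes through untouched
        if isinstance(v, str):
            return [s for s in v.split(",") if s]
        return v


c = Config(systems="foo,bar")
print(c.systems)  # ['foo', 'bar']
```

Because the split happens before validation, type hints, `.dict()`, and editor completion all see `systems` as `List[str]`.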
|
<python><python-3.x><pydantic>
|
2023-05-25 08:47:34
| 2
| 684
|
cylon86
|
76,330,146
| 8,930,395
|
FastAPI: AttributeError: 'myFastAPI' object has no attribute 'router'
|
<p>I have created a child class that inherits from the FastAPI class. I want to define a lifespan function inside it. To implement lifespan, I need to create a constructor inside the myFastAPI class. Below is sample code.</p>
<pre><code>class myFastAPI(FastAPI):
    def __init__(self):
        self.lifespan = self.lifeSpan

    @asynccontextmanager
    def lifeSpan(self):
        print("Start before Application")
        notification = Notification()
        yield {'client': notification}
        app.state.client.close()


@app.get("/check")
async def index(placeHolder: str) -> str:
    message = "API is up and running"
    client = request.app.state.client
    # some operations
    return message


@app.post("/a/b/c")
async def index(request, data) -> str:
    client = request.app.state.client
    result = somefunc(data, client)
    return result
</code></pre>
<p>When I try to bring up the API, it is giving below error.</p>
<pre><code>File "/sdfjjgjg/File1.py", line 89, in <module>
@app.get("/check")
File "/asjgjgj/python3.8/site-packages/fastapi/applications.py", line 469, in get
return self.router.get(
AttributeError: 'myFastAPI' object has no attribute 'router'
</code></pre>
<p>Why is the above error occurring, and how can I fix it? Note: I want the lifespan function inside the myFastAPI class, not outside as shown in most of the documentation.</p>
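The root cause is that the overridden `__init__` never calls `super().__init__()`, so none of FastAPI's own attributes (including `router`) are ever created; `@app.get` then fails when it reaches for `self.router`. A minimal stdlib illustration of the same failure mode:

```python
class Base:
    def __init__(self):
        self.router = "ready"  # stands in for FastAPI's router setup


class Broken(Base):
    def __init__(self):
        self.lifespan = None   # forgot super().__init__() -> no .router


class Fixed(Base):
    def __init__(self):
        super().__init__()     # parent attributes get created first
        self.lifespan = None


try:
    Broken().router
except AttributeError as e:
    print("Broken:", e)

print("Fixed:", Fixed().router)  # Fixed: ready
```

In the real class that suggests calling `super().__init__(lifespan=...)` (recent FastAPI versions accept a `lifespan` keyword) instead of assigning `self.lifespan` by hand; the lifespan function can stay inside the class as a static or class-level attribute.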
|
<python><fastapi><lifespan>
|
2023-05-25 08:08:01
| 1
| 4,606
|
LOrD_ARaGOrN
|
76,329,988
| 2,118,666
|
Why is asyncio.Future.done() not set to True when the task is done?
|
<p>In this example code an <code>asyncio.Future</code> is created and run. However, its state is not set to done once it is complete.</p>
<pre><code>import asyncio
from concurrent.futures.thread import ThreadPoolExecutor
from time import sleep

_executor = ThreadPoolExecutor(max_workers=32)


def test():
    print('starting test')
    sleep(5)
    print('ending test')


async def main():
    loop = asyncio.get_running_loop()
    result = loop.run_in_executor(_executor, test)
    sleep(3)
    print('sleep 1 complete')
    print(f'{result.done()=}')
    sleep(3)
    print('sleep 2 complete')
    print(f'{result.done()=}')
    print('await result')
    await result
    print(f'{result.done()=}')

asyncio.run(main())
</code></pre>
<p>This results in:</p>
<pre><code>starting test
sleep 1 complete
result.done()=False
ending test
sleep 2 complete
result.done()=False
await result
result.done()=True
</code></pre>
<p>Why is <code>result.done()</code> set to <code>False</code> after the second sleep?</p>
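A likely explanation (worth verifying against the asyncio docs): the blocking `time.sleep` calls never return control to the event loop, so the loop cannot run the callback that marks the wrapper future done; the state only updates once the loop runs again, which first happens at `await result`. Replacing the blocking sleeps with `await asyncio.sleep(...)` lets the loop process that callback, and `done()` flips to True without ever awaiting the future. A minimal sketch:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor


def work():
    time.sleep(0.2)  # short blocking job in a worker thread


async def main():
    loop = asyncio.get_running_loop()
    fut = loop.run_in_executor(ThreadPoolExecutor(max_workers=1), work)
    # Yields to the event loop, which processes the "done" callback
    # scheduled by the executor thread once work() finishes
    await asyncio.sleep(0.5)
    return fut.done()


print(asyncio.run(main()))  # True
```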
|
<python><asynchronous><python-asyncio>
|
2023-05-25 07:47:17
| 1
| 11,241
|
tread
|
76,329,949
| 2,366,887
|
I can't get the langchain agent module to actually execute my prompt
|
<p>I am learning how to use langchain and I have written
a small exercise to try and figure out how agents work.</p>
<p>I have a small Python program that looks like this:</p>
<pre><code>import os
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

topic = input("Topic: ")

prompt = PromptTemplate(
    input_variables=['topic'],
    template='''
    You have been given access to a search
    tool. Please gather information about the
    AI algorithm topic {topic}, and write a
    thousand word blog post on this topic.
    '''
)

os.environ['SERPAPI_API_KEY'] = "<my serpapi key>"
llm = OpenAI(model='text-davinci-003', temperature=0.7, openai_api_key="<my openAPI key>")

tools = load_tools(['serpapi'])
agent = initialize_agent(tools, llm, agent='zero-shot-react-description', verbose=True)

foo = agent.run(prompt)
print(foo)

f = open("new_post", "w")
f.write(foo)
f.close()
</code></pre>
<p>When I run this, I get the following output:</p>
<pre><code>> Entering new AgentExecutor chain...
I could use a search engine to look for the answer
Action: Search
Action Input: "Tree of Thoughts"
Observation: Title:Tree of Thoughts: Deliberate
Problem Solving with Large Language Models ...
Abstract: Language models are increasingly being deployed for ...
Thought: This looks like it could be the answer I'm looking for
Action: Read
Action Input: Title:Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Observation: Read is not a valid tool, try another one.
Thought: I should read the abstract to get an overview of what the paper is about
Action: Read
Action Input: Abstract: Language models are increasingly being deployed for ...
Observation: Read is not a valid tool, try another one.
Thought: I should look for other sources of information about this topic
Action: Search
Action Input: "Tree of Thoughts" + review
Observation: Percival Everett's new novel The Trees hits just the right mark. It's a racial allegory grounded in history, shrouded in mystery, and dripping ...
Thought: This looks like a review of a novel, not what I'm looking for
Action: Search
Action Input: "Tree of Thoughts" + research
Observation: To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the ...
Thought: This looks like it could be the answer I'm looking for
Final Answer: Tree of Thoughts (ToT) is a new framework for language model inference that generalizes over the existing methods and enables more efficient problem solving.
> Finished chain.
</code></pre>
<p>What is going on here? Why am I only getting this one sentence as output? Am I using the wrong model?
Thanks</p>
|
<python><langchain>
|
2023-05-25 07:42:39
| 2
| 523
|
redmage123
|
76,329,787
| 13,314,132
|
ConnectionError in connecting my python code which is in windows os to my hdfs which is in linux os
|
<p>I am new to using HDFS. I have stored a couple of datasets there. I have working Python code on my local (Windows) machine, and I want to connect to HDFS on the Linux OS from within the code itself.
I have gone through some of the documentation and have made the necessary changes to the code as follows:</p>
<p>python-code.py:</p>
<pre><code># Set HDFS client
client = InsecureClient('http://192.168.131.129:9000', user='******')

# Function to find the answer based on similarity matching
def find_answer(question):
    # Load questions and answers from .dat files
    with client.read('/ana/questions.dat') as reader:
        questions = [line.strip() for line in reader.read().decode('utf-8').splitlines()]
    with client.read('/ana/answers.dat') as reader:
        answers = [line.strip() for line in reader.read().decode('utf-8').splitlines()]
</code></pre>
<p>However, every time I run it I get the following error:</p>
<pre><code>ConnectionError: HTTPConnectionPool(host='192.168.131.129', port=55453): Max retries exceeded with url: /webhdfs/v1/ana/questions.dat?user.name=******&offset=0&op=OPEN (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001C3222EA770>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
</code></pre>
<p>I have checked my Namenodes using jps as follows:</p>
<pre><code>56144 NodeManager
60593 Jps
56021 ResourceManager
55787 SecondaryNameNode
55453 NameNode
55581 DataNode
</code></pre>
<p>I have also made the necessary changes to my <code>core-site.xml</code> and <code>hdfs-site.xml</code>.</p>
<p>Here is my <strong>core-site.xml:</strong></p>
<pre><code><configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/aviparna/hdata</value>
    </property>
</configuration>
</code></pre>
<p><strong>hdfs-site.xml:</strong></p>
<pre><code><configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
</code></pre>
<p>Here is a screenshot that my hdfs web ui is running properly:
<a href="https://i.sstatic.net/52zqo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/52zqo.png" alt="enter image description here" /></a></p>
<p>How to connect between these two?</p>
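Two details stand out (both are assumptions to verify against this setup): first, the port in the traceback, 55453, matches the NameNode's process id in the `jps` output rather than a listening port, which suggests the wrong value ended up in the URL at some point; second, the `hdfs` Python client speaks WebHDFS over HTTP (default port 9870 on Hadoop 3, 50070 on Hadoop 2), not the RPC port 9000, and `fs.defaultFS` bound to `localhost` is only reachable from the Linux VM itself. A sketch of the config adjustment:

```xml
<!-- core-site.xml: bind the NameNode to an address the Windows host can reach -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.131.129:9000</value>
</property>
```

On the client side that would mean pointing at the WebHDFS HTTP port, e.g. `InsecureClient('http://192.168.131.129:9870', user=...)`, after restarting the Hadoop daemons.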
|
<python><hadoop><hdfs>
|
2023-05-25 07:22:45
| 0
| 655
|
Daremitsu
|
76,329,691
| 3,590,067
|
Python: how to reshape a list?
|
<p>I am trying to append 10 arrays to a list. The arrays have dimensions <code>(400,)</code> and I would like to have a list with shape, <code>shape(myList) = (10,)</code>, where each component has shape <code>(400,2)</code>, such as <code>shape(myList[0]) = (400,2)</code>.</p>
<p>This is what I am doing:</p>
<pre><code>myList = []
for i in range(0, 10):
    cordsX = X[i]  # len 400
    cordsY = Y[i]  # len 400
    myList.append(np.array(list(zip(cordsX, cordsY))))
</code></pre>
<p>however the shape of <code>myList</code> is the following</p>
<pre><code>shape(myList)
(10, 400, 2)
</code></pre>
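That shape report may not indicate a problem: `np.shape` on a list of ten equal-shape `(400, 2)` arrays reports the stacked shape `(10, 400, 2)`, while `len(myList)` is still 10 and each `myList[i].shape` is `(400, 2)`, exactly as requested. A minimal check (with random stand-ins for X and Y):

```python
import numpy as np

# Hypothetical coordinate arrays: 10 rows of 400 values each
X = np.random.rand(10, 400)
Y = np.random.rand(10, 400)

# Equivalent to zipping cordsX/cordsY: stack the two 400-vectors column-wise
myList = [np.stack([X[i], Y[i]], axis=1) for i in range(10)]

print(len(myList), myList[0].shape)  # 10 (400, 2)
```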
|
<python><arrays><list><numpy>
|
2023-05-25 07:07:35
| 2
| 7,315
|
emax
|
76,329,678
| 8,937,353
|
How to change color of a 3D scatter plot w.r.t. one value other than X,Y,Z
|
<p>I have 4 numpy arrays for X, Y, Z, and Values.</p>
<pre><code>X= [20,30,50,60,..]
Y= [25,35,55,65,...]
Z= [5,6,7,8,...]
Values = [-8,5,0.8,-1.2....]
</code></pre>
<p>All arrays are of the same size and their indices are matched, e.g. X[1], Y[1], Z[1], and Values[1] all correspond to the same point.</p>
<p>Based on X, Y, and Z I have to make a scatter plot, but the colour of each scatter point should change according to the Values array: if the absolute value of an item in Values is high, the point should be red, gradually changing to blue for lower values. Any colormap will do.</p>
<p>I have managed to plot a 3d scatter plot using X,Y,Z but I am not sure how to associate Values with it.</p>
<pre><code>from matplotlib import pyplot as plt
fig = plt.figure(figsize=(30, 30))
ax = fig.add_subplot(projection='3d')
ax.scatter(X,Y,Z)
</code></pre>
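The fourth array can be passed as the `c=` argument of `scatter` together with a colormap; since the colouring should follow the magnitude, pass `np.abs(Values)` so that large absolute values map to red and small ones to blue. A sketch using the small sample arrays above:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
from matplotlib import pyplot as plt

X = np.array([20, 30, 50, 60])
Y = np.array([25, 35, 55, 65])
Z = np.array([5, 6, 7, 8])
Values = np.array([-8, 5, 0.8, -1.2])

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(projection='3d')

# 'coolwarm' maps low -> blue, high -> red; colour by |Values|
sc = ax.scatter(X, Y, Z, c=np.abs(Values), cmap='coolwarm')
fig.colorbar(sc, label='|Value|')
```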
|
<python><numpy><matplotlib><numpy-ndarray><scatter-plot>
|
2023-05-25 07:06:09
| 1
| 302
|
A.k.
|
76,329,612
| 7,441,757
|
How to list poetry plugins in pyproject.toml?
|
<p>I've added a poetry plugin manually with <code>poetry self add xxx</code>, but I don't see any line changed in the pyproject.toml or poetry.lock. I want this plugin to be included in the development environment, so it needs to install when others set-up the environment. Can you add plugins to the <code>--dev</code> dependencies?</p>
|
<python><python-poetry>
|
2023-05-25 06:55:56
| 1
| 5,199
|
Roelant
|
76,329,602
| 7,788,402
|
How to join two pandas DataFrames on trailing part of path / filename
|
<p>I have two DataFrames as follows.</p>
<pre><code>df1 = pd.DataFrame({'PATH':[r'C:\FODLER\Test1.jpg',
r'C:\A\FODLER\Test2.jpg',
r'C:\A\FODLER\Test3.jpg',
r'C:\A\FODLER\Test4.jpg'],
'VALUE':[45,23,45,2]})
df2 = pd.DataFrame({'F_NAME': [r'FODLER\Test1.jpg',
r'FODLER\Test2.jpg',
r'FODLER\Test6.jpg',
r'FODLER\Test3.jpg',
r'FODLER\Test4.jpg',
r'FODLER\Test9.jpg'],
'VALUE_X': ['12', '25', '97', '33', '123', '0'],
'CORDS': ['1', '2', '3', '4', '5', '6']})
</code></pre>
<p>I want to join df2 to df1 where PATH contains F_NAME, so the resulting DataFrame is as follows:</p>
<pre><code>df3 = pd.DataFrame({'PATH':[r'C:\FODLER\Test1.jpg',
r'C:\A\FODLER\Test2.jpg',
r'C:\A\FODLER\Test3.jpg',
r'C:\A\FODLER\Test4.jpg'],
'F_NAME': [r'FODLER\Test1.jpg',
r'FODLER\Test2.jpg',
r'FODLER\Test3.jpg',
r'FODLER\Test4.jpg'],
'VALUE_X': ['12', '25', '33', '123'],
'CORDS': ['1', '2', '4', '5'],
'VALUE':[45,23,45,2]})
</code></pre>
<p>How do I write the pandas merge statement to do this joining?</p>
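One approach: derive the `F_NAME` key on df1 by keeping the last two path components, then do an ordinary equi-join (this assumes every PATH ends in exactly folder\file; for variable depth you could match on the file name alone instead):

```python
import pandas as pd

df1 = pd.DataFrame({'PATH': [r'C:\FODLER\Test1.jpg',
                             r'C:\A\FODLER\Test2.jpg',
                             r'C:\A\FODLER\Test3.jpg',
                             r'C:\A\FODLER\Test4.jpg'],
                    'VALUE': [45, 23, 45, 2]})
df2 = pd.DataFrame({'F_NAME': [r'FODLER\Test1.jpg', r'FODLER\Test2.jpg',
                               r'FODLER\Test6.jpg', r'FODLER\Test3.jpg',
                               r'FODLER\Test4.jpg', r'FODLER\Test9.jpg'],
                    'VALUE_X': ['12', '25', '97', '33', '123', '0'],
                    'CORDS': ['1', '2', '3', '4', '5', '6']})

# Keep the trailing two components of PATH as the join key
df1['F_NAME'] = df1['PATH'].str.split('\\').str[-2:].str.join('\\')

# Inner join keeps only the rows whose F_NAME appears in both frames
df3 = df1.merge(df2, on='F_NAME', how='inner')
```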
|
<python><pandas><merge>
|
2023-05-25 06:54:18
| 2
| 2,301
|
PCG
|
76,329,584
| 4,616,611
|
PySpark: Most efficient way to query from DB for a specific set of ids from an existing data frame
|
<p>I have a PySpark dataframe with these fields: ID1, ID2 and DATE. For each ID1, ID2, and DATE I need to write an SQL statement to extract a new field where I include ID1, ID2 and DATE in the WHERE clause.</p>
<p>What's the most efficient way to code this?</p>
<p>My idea at the moment is to extract ID1, ID2 and DATE fields from the first PySpark dataframe in a list of some sort, and then loop through each 3-pair (ID1, ID2, DATE) and have a SQL statement within the loop. I'm curious if there is a way to only execute 1 SQL statement that does it for all ID1, ID2 and DATE fields from the original PySpark dataframe.</p>
|
<python><apache-spark><pyspark>
|
2023-05-25 06:50:27
| 1
| 1,669
|
Teodorico Levoff
|
76,329,489
| 2,966,197
|
streamlit app only executing successfully for one cycle and hanging after that
|
<p>I have a <code>streamlit</code> app where the sidebar has the following sections:</p>
<pre><code>1. File uploader
2. 3 buttons - Submit, reset, add new row
3. Row(s) of input text area in Key value manner
</code></pre>
<p>At first there is only a single row of point 3 when the app initiates; the user enters key and value data and can click the <code>add new row</code> button to add another set of key-value data. The user then <code>submits</code> and gets the results. Once the result is done, I want the user to be able to add another row to the already visible input key-value rows, or click <code>Reset</code> to return to the first stage (a single empty row of input key-value boxes). But this is not working: the app just hangs and loops.</p>
<p>Here is my current code:</p>
<pre><code>import random
import string
from pathlib import Path

import pyautogui
import streamlit as st

key_dict = {}

# Initialize the key in session state
if 'clicked' not in st.session_state:
    st.session_state.clicked = {"submit": False, "reset": False}

if 'input_keys' not in st.session_state:
    st.session_state.input_keys = [random.choice(string.ascii_uppercase) + str(random.randint(0, 999999))]

# Function to update the value in session state
def clicked(button):
    st.session_state.clicked[button] = True

def sidebar_section():
    c1, c2 = st.sidebar.columns(2)
    c1.write("Main Area")
    new_sec = c2.button("Add New Row")
    if new_sec:
        st.session_state.input_keys.append(
            random.choice(string.ascii_uppercase) + str(random.randint(0, 999999)))
    h1, h2 = st.sidebar.columns(2)
    index = 1
    for input_key in st.session_state.input_keys:
        Key = h1.text_input(f"Input 1 {index}", placeholder="Enter Key",
                            key=input_key)
        val = h2.text_input(f"Input 2 {index}", placeholder="Enter Value",
                            key=input_key + "input")
        key_dict[Key] = val
        index += 1
    return key_dict

file = st.sidebar.file_uploader('Upload', type=["pdf", "txt"])
key_dict = sidebar_section()

st.sidebar.markdown("---")
c1, c2 = st.sidebar.columns([1, 1])
clear = c1.button("Reset", on_click=clicked, args=["reset"], use_container_width=True)
generate = c2.button("Submit", on_click=clicked, args=["submit"], use_container_width=True)

if st.session_state.clicked["reset"]:
    placeholder = st.empty()
    st.session_state.input_keys = [random.choice(string.ascii_uppercase) + str(random.randint(0, 999999))]
    pyautogui.hotkey("ctrl", "F5")

if st.session_state.clicked["submit"]:
    print("Inside Submit buttons \n")
    # Check if file was uploaded
    if file is not None:
        # Save uploaded file to 'F:/tmp' folder.
        save_folder = '.'
        save_path = Path(save_folder, file.name)
        with open(save_path, mode='wb') as w:
            w.write(file.getvalue())
        placeholder.empty()
        # links_list = links.split(",")
        print("length of Key Dict = {}".format(len(key_dict)))
        with st.spinner("Loading Results ! "):
            print("Inside Generate button and Uploaded file is not none. Going to get results \n")
            # Perform local computation
            # Display result using st.write()
</code></pre>
<p>I'm not sure where the issue is, though I know it's related to not retaining the last state of the buttons and inputs; how can I correct it?</p>
|
<python><streamlit>
|
2023-05-25 06:33:26
| 1
| 3,003
|
user2966197
|
76,329,365
| 10,003,538
|
How to add more param to Serialization Model for using at to_presentation
|
<p>Here is my serializer:</p>
<pre><code>class DataFieldSerializer(serializers.ModelSerializer):
    class Meta:
        model = DataField
        fields = ["field", "topic", "category", "group", "alias", "units"]

    def __init__(self, *args, **kwargs):
        self.extra_param = kwargs.pop('extra_param', None)
        super().__init__(*args, **kwargs)

    def to_representation(self, instance):
        data = super().to_representation(instance)
        print("Extra Param:", self.extra_param)
        return data
</code></pre>
<p>This is how I call the <code>DataFieldSerializer</code></p>
<pre><code>result_data = DataFieldSerializer(data_fields, many=True, extra_param='example').data
</code></pre>
</code></pre>
<p>When I run the code I got this result</p>
<pre><code>Extra Param:None
</code></pre>
<p>I guess this is not the right way to add an extra param to a model serializer.</p>
<p>To sum up, I want to find a way to pass an extra param to the serializer model.</p>
|
<python><django>
|
2023-05-25 06:07:58
| 1
| 1,225
|
Chau Loi
|
76,329,232
| 1,660,529
|
Pyspark throwing task failure error while initializing new column with UDF
|
<p>I have this spark dataframe:</p>
<pre><code>+--------------------+--------------------+------+
| qid| question_text|target|
+--------------------+--------------------+------+
|403d7d49a9713e6f7caa|Do coconut trees ...| 0|
|edc9a2709785501cce09|What are 5 must-r...| 0|
|c912510d490de9e8c55c|How can I make my...| 0|
|bcd2395ccebea4d57604|Is reading biogra...| 0|
|d26dcd0879465c08ace5|Is the reason why...| 1|
|1c6398530de0b31e985a|Does John McCain ...| 1|
|69c68408690e66889e6a|Isn't the time th...| 1|
|d1a7d8a1da31041048a6|Why do people eat...| 0|
|54e96f880709e3cf9dd7|How do I get the ...| 0|
|f89c04a1c61487623ba9|What does the kno...| 0|
+--------------------+--------------------+------+
</code></pre>
<p>I am trying to apply this UDF:</p>
<pre><code>def check_all_spelling_correct(sentence):
    doc = nlp(sentence)
    spell_check_performed = 0
    if doc._.performed_spellCheck == True:
        spell_check_performed = 1
    return spell_check_performed
</code></pre>
<p>Using :</p>
<pre><code>check_all_spelling_correct_UDF = udf(lambda x:check_all_spelling_correct(x),IntegerType())
df_sub_2=df_sub.withColumn("Is spelling correct", check_all_spelling_correct_UDF(col("question_text")))
df_sub_2.collect()
</code></pre>
<p>But I am getting this error:</p>
<pre><code>Py4JJavaError: An error occurred while calling o235.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 100.0 failed 1 times, most recent failure: Lost task 0.0 in stage 100.0 (TID 1441) (DESKTOP-2TQTOQ2 executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:188)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:108)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:121)
    at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:162)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:81)
    at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:130)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:863)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:863)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.net.SocketTimeoutException: Accept timed out
    at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
    at java.net.DualStackPlainSocketImpl.socketAccept(DualStackPlainSocketImpl.java:131)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:535)
    at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:189)
    at java.net.ServerSocket.implAccept(ServerSocket.java:545)
    at java.net.ServerSocket.accept(ServerSocket.java:513)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:175)
    ... 24 more
</code></pre>
<p>Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2454)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2403)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2402)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2402)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1160)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1160)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1160)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2642)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2584)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2573)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:938)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2279)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:394)
at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$executeCollect$1(AdaptiveSparkPlanExec.scala:338)
at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.withFinalPlanUpdate(AdaptiveSparkPlanExec.scala:366)
at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.executeCollect(AdaptiveSparkPlanExec.scala:338)
at org.apache.spark.sql.Dataset.$anonfun$collectToPython$1(Dataset.scala:3538)
at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3706)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3704)
at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3535)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:188)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:108)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:121)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:162)
at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:81)
at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:130)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:863)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:863)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(DualStackPlainSocketImpl.java:131)
at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:535)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:189)
at java.net.ServerSocket.implAccept(ServerSocket.java:545)
at java.net.ServerSocket.accept(ServerSocket.java:513)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:175)
... 24 more</p>
|
<python><apache-spark><pyspark><user-defined-functions>
|
2023-05-25 05:43:24
| 1
| 321
|
Alex_ban
|
76,329,199
| 10,982,755
|
How do I update Big Query partition expiration using Python BigQuery Client?
|
<p>We are currently looking to clean up old Big Query data. While creating the dataset, tables and partitions, we did not update the expiration time until recently. So the old partitions do not have expiration time set. Only the newly created tables and partitions have expiration.</p>
<p><a href="https://cloud.google.com/bigquery/docs/managing-partitioned-tables#bq" rel="nofollow noreferrer">https://cloud.google.com/bigquery/docs/managing-partitioned-tables#bq</a></p>
<p>According to the above doc, partition expiry can be updated using SQL, bq commands, or the API. I want to automate the whole process without manually running the commands for each dataset, and I can only run the script from the server since our access is restricted.</p>
<p>Is there a way to automate the process using python script and Python BQ client?</p>
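For what it's worth, a rough sketch of how this could look with the google-cloud-bigquery client (untested against a live project; the dataset id and the 90-day expiration below are placeholder assumptions, and the import is kept inside the function so the sketch reads without the package installed):

```python
def set_partition_expiration(dataset_id, expiration_ms):
    """Set a partition expiration on every partitioned table in a dataset."""
    # Assumption: google-cloud-bigquery is installed and credentials are
    # available on the server (e.g. via GOOGLE_APPLICATION_CREDENTIALS).
    from google.cloud import bigquery

    client = bigquery.Client()
    for item in client.list_tables(dataset_id):
        table = client.get_table(item.reference)
        if table.time_partitioning is None:
            continue  # skip non-partitioned tables
        table.time_partitioning.expiration_ms = expiration_ms
        client.update_table(table, ["time_partitioning"])
        print(f"Updated {table.table_id}")

# Example call (placeholder values): 90 days expressed in milliseconds.
# set_partition_expiration("my_dataset", 90 * 24 * 60 * 60 * 1000)
```

Per the linked doc, updating a partitioned table's expiration applies to its existing partitions as well, so partitions older than the new expiration should then get cleaned up automatically; looping over `client.list_datasets()` would extend this to every dataset.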
|
<python><google-bigquery>
|
2023-05-25 05:35:40
| 1
| 617
|
Vaibhav
|
76,328,921
| 14,109,040
|
Python group by columns and make sure values in group doesn't skip values in order of another dataframe
|
<p>I have 2 dataframes with the following structures:</p>
<pre><code>df1
Group1 Group2 Label
G1 A1 AA
G1 A1 BB
G1 A1 CC
G1 A2 AA
G1 A2 CC
G2 A1 BB
G2 A1 DD
G2 A2 AA
G2 A2 CC
G2 A2 DD
G2 A2 BB
df2
ID Label_ref
1 AA
2 BB
4 CC
5 DD
7 EE
</code></pre>
<p>I want to group <code>df1</code> by the <code>Group1</code> and <code>Group2</code> columns and check whether the <code>Label</code> column contains values from <code>df2</code>'s <code>Label_ref</code> in order of <code>ID</code>.</p>
<p><code>Label</code> in <code>df1</code> doesn't need to contain all values from <code>Label_ref</code> in <code>df2</code>, but the <code>Label</code> values in <code>df1</code> can't skip any <code>Label_ref</code> values in the order of <code>ID</code>.</p>
<p>Expected output:</p>
<p>The group <code>Group1=G1</code>, <code>Group2=A1</code> doesn't skip any values from <code>AA</code> - <code>CC</code>. Therefore the rows corresponding to the group are flagged.</p>
<p>The group <code>Group1=G1</code>, <code>Group2=A2</code> skips values from <code>BB</code> but has the value <code>CC</code>. Therefore the rows corresponding to the group are not flagged.</p>
<p>The group <code>Group1=G2</code>, <code>Group2=A2</code> doesn't skip any values from <code>AA</code> - <code>DD</code> although they are not in order. Therefore the rows corresponding to the group are flagged.</p>
<pre><code>Group1 Group2 Label Flag
G1 A1 AA 1
G1 A1 BB 1
G1 A1 CC 1
G1 A2 AA 0
G1 A2 CC 0
G2 A1 BB 0
G2 A1 DD 0
G2 A2 AA 1
G2 A2 CC 1
G2 A2 DD 1
G2 A2 BB 1
</code></pre>
<p>I haven't been able to make much progress:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({
'Group1': [ 'G1','G1', 'G1','G1','G1',
'G2','G2', 'G2','G2','G2','G2'],
'Group2': ['A1','A1','A1','A2','A2',
'A1','A1','A2','A2','A2','A2'],
'Label': ['AA','BB','CC','AA','CC','BB',
'DD','AA','CC','DD','BB']})
df2 = pd.DataFrame({
'ID': [ 1, 2, 4, 5, 7],
'Label_ref': ['AA','BB','CC','DD','EE']})
</code></pre>
<p>A link to a solution or a function/method I can use to achieve this is appreciated</p>
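One way to express the rule (a sketch; it assumes a group is flagged exactly when its labels form an unbroken prefix of the ID-ordered <code>Label_ref</code> list, which matches all three worked examples above):

```python
import pandas as pd

df1 = pd.DataFrame({
    'Group1': ['G1','G1','G1','G1','G1','G2','G2','G2','G2','G2','G2'],
    'Group2': ['A1','A1','A1','A2','A2','A1','A1','A2','A2','A2','A2'],
    'Label':  ['AA','BB','CC','AA','CC','BB','DD','AA','CC','DD','BB']})
df2 = pd.DataFrame({'ID': [1, 2, 4, 5, 7],
                    'Label_ref': ['AA','BB','CC','DD','EE']})

# position of each reference label in ID order
order = {lab: i for i, lab in enumerate(df2.sort_values('ID')['Label_ref'])}

def no_skips(labels):
    pos = {order[l] for l in labels}
    # flagged iff the positions are exactly 0..max(pos), i.e. nothing skipped
    return int(pos == set(range(max(pos) + 1)))

df1['Flag'] = df1.groupby(['Group1', 'Group2'])['Label'].transform(no_skips)
print(df1)
```

`transform` broadcasts the scalar returned by `no_skips` back to every row of the group, which gives the per-row flag column in the expected output.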
|
<python><pandas><group-by>
|
2023-05-25 04:21:15
| 1
| 712
|
z star
|
76,328,903
| 1,872,234
|
Resample dataframe to add missing dates
|
<p>I have a dataframe with multiple string columns, one date column and one int value column.</p>
<p>I want to <code>ffill</code> the missing dates for each group of text columns. The missing dates are all dates from the min date to max date in the dataframe. I think this is better explained using an example.</p>
<p>Sample Input:</p>
<pre><code>group rtype location hardware date value
my-group type-s NY DTop 2020-08-05 10
my-group type-s NY DTop 2020-08-07 20
my-group type-s NY DTop 2020-08-10 30
my-group type-s NY Tower 2020-08-01 40
my-group type-s NY Tower 2020-08-07 50
ot-group type-t NY LTop 2020-08-08 90
</code></pre>
<p>Min and Max date for this dataframe: (start_date) 2020-08-01 - (end_date) 2020-08-10</p>
<p>Sample Output:</p>
<pre><code>group rtype location hardware date value
my-group type-s NY DTop 2020-08-01 0
my-group type-s NY DTop 2020-08-02 0
my-group type-s NY DTop 2020-08-03 0
my-group type-s NY DTop 2020-08-04 0
my-group type-s NY DTop 2020-08-05 10
my-group type-s NY DTop 2020-08-06 10
my-group type-s NY DTop 2020-08-07 20
my-group type-s NY DTop 2020-08-08 20
my-group type-s NY DTop 2020-08-09 20
my-group type-s NY DTop 2020-08-10 30
my-group type-s NY Tower 2020-08-01 40
my-group type-s NY Tower 2020-08-02 40
my-group type-s NY Tower 2020-08-03 40
my-group type-s NY Tower 2020-08-04 40
my-group type-s NY Tower 2020-08-05 40
my-group type-s NY Tower 2020-08-06 40
my-group type-s NY Tower 2020-08-07 50
my-group type-s NY Tower 2020-08-08 50
my-group type-s NY Tower 2020-08-09 50
my-group type-s NY Tower 2020-08-10 50
ot-group type-t NY LTop 2020-08-01 0
ot-group type-t NY LTop 2020-08-02 0
ot-group type-t NY LTop 2020-08-03 0
ot-group type-t NY LTop 2020-08-04 0
ot-group type-t NY LTop 2020-08-05 0
ot-group type-t NY LTop 2020-08-06 0
ot-group type-t NY LTop 2020-08-07 0
ot-group type-t NY LTop 2020-08-08 90
ot-group type-t NY LTop 2020-08-09 90
ot-group type-t NY LTop 2020-08-10 90
</code></pre>
<p>In this example, I kept the location fixed to avoid an extra long output.
I am able to get the dates I want using <code>pd.date_range()</code>.</p>
<p>I tried using <code>resample</code> with multiindex but I run into errors (similar to <a href="https://stackoverflow.com/questions/15799162/resampling-within-a-pandas-multiindex">this</a>).</p>
<p>I tried the approach mentioned in <a href="https://stackoverflow.com/a/32275705/1872234">this answer</a> but it doesn't seem to work:</p>
<p>My code using:</p>
<pre><code>import pandas as pd
df = pd.read_csv('data.csv')
df.set_index('date', inplace=True)
date_range = pd.date_range(df.index.min(), df.index.max(), freq='D')
print(len(date_range), date_range)
def reindex_by_date(df):
return df.reindex(date_range).ffill()
df = df.groupby(['group','rtype','location','hardware']).apply(reindex_by_date).reset_index([0,1,2,3], drop=True)
print(df.to_string())
</code></pre>
<p>Output of this code:</p>
<pre><code>10 DatetimeIndex(['2020-08-01', '2020-08-02', '2020-08-03', '2020-08-04',
'2020-08-05', '2020-08-06', '2020-08-07', '2020-08-08',
'2020-08-09', '2020-08-10'],
dtype='datetime64[ns]', freq='D')
group rtype location hardware value
2020-08-01 NaN NaN NaN NaN NaN
2020-08-02 NaN NaN NaN NaN NaN
2020-08-03 NaN NaN NaN NaN NaN
2020-08-04 NaN NaN NaN NaN NaN
2020-08-05 NaN NaN NaN NaN NaN
2020-08-06 NaN NaN NaN NaN NaN
2020-08-07 NaN NaN NaN NaN NaN
2020-08-08 NaN NaN NaN NaN NaN
2020-08-09 NaN NaN NaN NaN NaN
2020-08-10 NaN NaN NaN NaN NaN
2020-08-01 NaN NaN NaN NaN NaN
2020-08-02 NaN NaN NaN NaN NaN
2020-08-03 NaN NaN NaN NaN NaN
2020-08-04 NaN NaN NaN NaN NaN
2020-08-05 NaN NaN NaN NaN NaN
2020-08-06 NaN NaN NaN NaN NaN
2020-08-07 NaN NaN NaN NaN NaN
2020-08-08 NaN NaN NaN NaN NaN
2020-08-09 NaN NaN NaN NaN NaN
2020-08-10 NaN NaN NaN NaN NaN
2020-08-01 NaN NaN NaN NaN NaN
2020-08-02 NaN NaN NaN NaN NaN
2020-08-03 NaN NaN NaN NaN NaN
2020-08-04 NaN NaN NaN NaN NaN
2020-08-05 NaN NaN NaN NaN NaN
2020-08-06 NaN NaN NaN NaN NaN
2020-08-07 NaN NaN NaN NaN NaN
2020-08-08 NaN NaN NaN NaN NaN
2020-08-09 NaN NaN NaN NaN NaN
2020-08-10 NaN NaN NaN NaN NaN
</code></pre>
<p>Can someone help please?</p>
<p><strong>EDIT:</strong>
After fixing the DatetimeIndex issue, and using <code>fillna(0)</code>:</p>
<pre><code>df = pd.read_csv('data.csv', parse_dates=['date'])
df.set_index('date', inplace=True)
date_range = pd.date_range(df.index.min(), df.index.max(), freq='D')
print(len(date_range), date_range)
def reindex_by_date(df):
return df.reindex(date_range).ffill().fillna(0)
df = df.groupby(['group','rtype','location','hardware']).apply(reindex_by_date).reset_index([0,1,2,3], drop=True).reset_index().rename(columns={'index': 'date'})
print(df.to_string())
</code></pre>
<p>Output:</p>
<pre><code> group rtype location hardware value
2020-08-01 0 0 0 0 0.0
2020-08-02 0 0 0 0 0.0
2020-08-03 0 0 0 0 0.0
2020-08-04 0 0 0 0 0.0
2020-08-05 my-group type-s NY DTop 10.0
2020-08-06 my-group type-s NY DTop 10.0
2020-08-07 my-group type-s NY DTop 20.0
2020-08-08 my-group type-s NY DTop 20.0
2020-08-09 my-group type-s NY DTop 20.0
2020-08-10 my-group type-s NY DTop 30.0
2020-08-01 my-group type-s NY Tower 40.0
2020-08-02 my-group type-s NY Tower 40.0
2020-08-03 my-group type-s NY Tower 40.0
2020-08-04 my-group type-s NY Tower 40.0
2020-08-05 my-group type-s NY Tower 40.0
2020-08-06 my-group type-s NY Tower 40.0
2020-08-07 my-group type-s NY Tower 50.0
2020-08-08 my-group type-s NY Tower 50.0
2020-08-09 my-group type-s NY Tower 50.0
2020-08-10 my-group type-s NY Tower 50.0
2020-08-01 0 0 0 0 0.0
2020-08-02 0 0 0 0 0.0
2020-08-03 0 0 0 0 0.0
2020-08-04 0 0 0 0 0.0
2020-08-05 0 0 0 0 0.0
2020-08-06 0 0 0 0 0.0
2020-08-07 0 0 0 0 0.0
2020-08-08 ot-group type-t NY LTop 90.0
2020-08-09 ot-group type-t NY LTop 90.0
2020-08-10 ot-group type-t NY LTop 90.0
</code></pre>
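A variant worth trying (a sketch over the sample data above) is to reindex only the <code>value</code> series per group, so the group columns come back from the group keys instead of being lost to NaN/0 fills; note the <code>date_range</code> is given a name so <code>reset_index</code> produces a <code>date</code> column:

```python
import pandas as pd

df = pd.DataFrame(
    [('my-group', 'type-s', 'NY', 'DTop',  '2020-08-05', 10),
     ('my-group', 'type-s', 'NY', 'DTop',  '2020-08-07', 20),
     ('my-group', 'type-s', 'NY', 'DTop',  '2020-08-10', 30),
     ('my-group', 'type-s', 'NY', 'Tower', '2020-08-01', 40),
     ('my-group', 'type-s', 'NY', 'Tower', '2020-08-07', 50),
     ('ot-group', 'type-t', 'NY', 'LTop',  '2020-08-08', 90)],
    columns=['group', 'rtype', 'location', 'hardware', 'date', 'value'])
df['date'] = pd.to_datetime(df['date'])

# one row per calendar day between the global min and max date
full = pd.date_range(df['date'].min(), df['date'].max(), freq='D', name='date')

out = (df.set_index('date')
         .groupby(['group', 'rtype', 'location', 'hardware'])['value']
         .apply(lambda s: s.reindex(full).ffill().fillna(0))
         .reset_index())
print(out)
```

Because only the value series goes through `reindex`, the group labels are restored from the MultiIndex on `reset_index`, giving 10 rows per group with leading gaps filled with 0 and later gaps forward-filled.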
|
<python><pandas><dataframe>
|
2023-05-25 04:17:48
| 2
| 1,643
|
Wajahat
|
76,328,852
| 1,718,174
|
Unable to get typer autocompletion working
|
<p>Fairly new to typer and trying to get a simple CLI application to auto-complete with [TAB] on my terminal, but without success. Here is my code structure and code itself:</p>
<pre><code>vinicius.ferreira@FVFFPG4VQ6L4 dev-env % tree
.
├── README.md
├── dev_env
│ ├── __init__.py
│ ├── main.py
│ ├── services.py
│ ├── team_product1.py
│ └── team_product2.py
├── dist
│ ├── dev_env-0.1.2-py3-none-any.whl
│ └── dev_env-0.1.2.tar.gz
├── poetry.lock
├── pyproject.toml
└── tests
└── __init__.py
3 directories, 11 files
</code></pre>
<pre><code>##### main.py #####
import typer
from . import team_product1
from . import team_product2
from . import services
app = typer.Typer()
app.add_typer(team_product1.app, name="team_product1")
app.add_typer(team_product2.app, name="team_product2")
app.add_typer(services.app, name="services")
if __name__ == "__main__":
app()
</code></pre>
<pre><code>##### team_product1.py #####
import typer
app = typer.Typer()
@app.command()
def start():
print(f"Starting all services for Team Product 1 ...")
@app.command()
def stop():
print(f"Stopping all services for Team Product 1 ...")
@app.command()
def destroy():
print(f"Destroying all services for Team Product 1 ...")
if __name__ == "__main__":
app()
</code></pre>
<pre><code>##### team_product2.py #####
import typer
app = typer.Typer()
@app.command()
def start():
print(f"Starting all services for Team Product 2 ...")
@app.command()
def stop():
print(f"Stopping all services for Team Product 2 ...")
@app.command()
def destroy():
print(f"Destroying all services for Team Product 2 ...")
if __name__ == "__main__":
app()
</code></pre>
<pre><code>##### services.py #####
import typer
app = typer.Typer()
@app.command()
def start(name: str):
print(f"Starting service: {name}")
@app.command()
def stop(name: str):
print(f"Stopping service: {name}")
@app.command()
def destroy(name: str):
print(f"Destroying service: {name}")
if __name__ == "__main__":
app()
</code></pre>
<p>I was able to build it into a .whl file with <code>poetry build</code> command, and later on install the package globally for all users using <code>pip install dist/dev_env-0.1.2-py3-none-any.whl</code> from the project root.</p>
<p>The CLI application runs fine, except for the [TAB] auto-complete. I already executed <code>dev-env --install-completion</code> and restarted my terminal. It reported that "zsh completion installed in /Users/vinicius.ferreira/.zfunc/_dev-env", and the file content is this:</p>
<pre><code>#compdef dev-env
_dev_env_completion() {
eval $(env _TYPER_COMPLETE_ARGS="${words[1,$CURRENT]}" _DEV_ENV_COMPLETE=complete_zsh dev-env)
}
compdef _dev_env_completion dev-env%
</code></pre>
<p>Can anybody tell me what am I missing over here?</p>
<p>Not sure if it helps, but my <code>~/.zshrc</code> file has this at the end:</p>
<pre><code>zstyle ':completion:*' menu select
fpath+=~/.zfunc
</code></pre>
<p>I also noticed the typer tutorial shows completion should be installed in <code>/home/user/.zshrc</code>. Not really sure why on my macOS it was installed in a different place.</p>
|
<python><autocomplete><command-line-interface><typer>
|
2023-05-25 04:03:06
| 1
| 11,945
|
Vini.g.fer
|
76,328,830
| 16,136,190
|
Can you import a class from a sub-directory (from library.directory import subdirectory.class)?
|
<p>Can you import a class from a sub-directory like <code>from library.directory import subdirectory.class</code>?</p>
<p>Let's take Selenium as an example. Its file structure is like this:</p>
<p><code>selenium</code></p>
<ul>
<li><code>webdriver</code>
<ul>
<li><code>support</code>
<ul>
<li><code>ui</code>
<ul>
<li>wait.py (<code>WebDriverWait</code>)</li>
</ul>
</li>
<li>expected_conditions.py</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>I know I can import them separately, in two lines. But suppose <code>support</code> has many sub-directories, so importing everything with <code>import selenium.webdriver.support</code> isn't feasible; or <code>support</code> is a deeply nested sub-sub-...-sub-directory under <code>webdriver</code>; or one is simply too lazy to type the long prefix again. Can someone then write <code>from selenium.webdriver.support import ui.WebDriverWait, expected_conditions as EC, wait</code>?</p>
<p>I tried importing that way (<code>from selenium.webdriver.support import ui.WebDriverWait, ...</code>), but got a <code>SyntaxError</code>. I also tried searching Python's <a href="https://docs.python.org/3/reference/import.html" rel="nofollow noreferrer">docs</a>, but I couldn't find anything related to this. Can it be <code>import</code>ed that way, and if yes, how?</p>
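The short answer is no: the names after <code>import</code> must be plain identifiers, so a dotted name like <code>ui.WebDriverWait</code> is a <code>SyntaxError</code>; you name the deepest submodule in the <code>from</code> part instead. A small sketch with a stdlib package (the selenium equivalents are shown only as comments, since they follow the same pattern):

```python
# from urllib import parse.urljoin         # SyntaxError: dotted name after "import"
from urllib.parse import urljoin, urlparse  # several names from one submodule, one line

# The selenium equivalents follow the same pattern (one line per submodule):
#   from selenium.webdriver.support.ui import WebDriverWait
#   from selenium.webdriver.support import expected_conditions as EC, wait

print(urljoin('https://example.com/a/', 'b'))  # https://example.com/a/b
```

So one line per submodule is the minimum; several names from the *same* submodule can still share a line.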
|
<python><python-3.x><syntax-error><python-import>
|
2023-05-25 03:57:31
| 1
| 859
|
The Amateur Coder
|
76,328,764
| 8,968,910
|
Python: Cannot find the right encoding to print Tableau result
|
<p>I want to print the result of crosstabs from a Tableau worksheet; it contains some Traditional Chinese words.</p>
<pre><code>import sys
sys.stdout.reconfigure(encoding='utf-8')
.
.
.
view_data_raw = querying.get_view_data_dataframe(
conn, view_id=visual_c_id)
print(view_data_raw.to_string()) #A
print(view_data_raw.to_string().encode(encoding='utf-8')) #B
print(view_data_raw.to_string().encode(encoding='cp1252')) #C
print(view_data_raw.to_string().encode(encoding='gbk')) #D
</code></pre>
<p>#A</p>
<pre><code>2023å¹´4æ1æ¥ #it should be 2023年4月1日
</code></pre>
<p>#B</p>
<pre><code>2023\xc3\xa5\xc2\xb9\xc2\xb44\xc3\xa6\xc2\x9c\xc2\x881\xc3\xa6\xc2\x97\xc2\xa5 #it should be 2023年4月1日
</code></pre>
<p>#C</p>
<pre><code>UnicodeEncodeError: 'charmap' codec can't encode characters in position 8-9: character maps to <undefined>
</code></pre>
<p>#D</p>
<pre><code>UnicodeEncodeError: 'gbk' codec can't encode character '\xe4' in position 4: illegal multibyte sequence
</code></pre>
<p>I tried to decode and encode several times but it doesn't work. Any suggestion?</p>
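This looks like classic mojibake: the UTF-8 bytes coming back from the API were decoded as Latin-1 somewhere upstream, so each CJK character shows up as two or three Latin-1 characters. A sketch of undoing it after the fact (the garbled literal below is what <code>2023年4月1日</code> becomes after that wrong decode; the real fix is to make the HTTP/response layer decode as UTF-8 in the first place):

```python
# '2023年4月1日' whose UTF-8 bytes were wrongly decoded as Latin-1:
garbled = '2023\xe5\xb9\xb44\xe6\x9c\x881\xe6\x97\xa5'

# re-encode with the wrong codec to recover the raw bytes,
# then decode them with the right one
fixed = garbled.encode('latin-1').decode('utf-8')
print(fixed)  # 2023年4月1日
```

This also explains attempt #B above: the `\xc3\xa5\xc2\xb9...` bytes are the already-garbled text encoded as UTF-8 a second time, and #C/#D fail because cp1252 and gbk simply can't represent some of the garbled characters.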
|
<python><unicode>
|
2023-05-25 03:36:47
| 1
| 699
|
Lara19
|
76,328,606
| 10,964,685
|
Count of unique days grouped by value - pandas
|
<p>I'm aiming to assign a cumulative count of unique days to a new column in a pandas df. It should count the number of unique days, gathered from <code>Date</code>, grouped by <code>Code</code> and <code>Item</code>. Once consecutive values in <code>Code</code> or <code>Item</code> are broken, the count should reset to 0.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"Date":['2023-03-01', '2023-03-01', '2023-03-01', '2023-03-04', '2023-03-06', '2023-03-06', '2023-03-07', '2023-03-08', '2023-03-09','2023-03-01', '2023-03-02', '2023-03-03', '2023-03-03', '2023-03-03','2023-03-03', '2023-03-04', '2023-03-05', '2023-03-06'],
"Code":['X', 'X', 'X', 'X', 'X', 'X', 'X', 'X', 'X', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y'],
"Item":['A', 'A', 'A', 'B', 'B', 'B', 'B', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A'],
})
df['Date'] = pd.to_datetime(df['Date'])
df['Daily_Count'] = df.groupby(['Code', 'Item', df['Date'].dt.date]).cumcount()
</code></pre>
<p>Intended output:</p>
<pre><code> Date Code Item Daily_Count
0 2023-03-01 X A 1
1 2023-03-01 X A 1
2 2023-03-01 X A 1
3 2023-03-04 X B 1
4 2023-03-06 X B 2
5 2023-03-06 X B 2
6 2023-03-07 X B 3
7 2023-03-08 X A 1
8 2023-03-09 X A 2
9 2023-03-01 Y A 1
10 2023-03-02 Y A 2
11 2023-03-03 Y A 3
12 2023-03-03 Y A 3
13 2023-03-03 Y A 3
14 2023-03-03 Y A 3
15 2023-03-04 Y A 4
16 2023-03-05 Y A 5
17 2023-03-06 Y A 6
</code></pre>
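One possible approach (a sketch; note the intended output above actually starts each run at 1, not 0): build a block id that increments whenever <code>Code</code> or <code>Item</code> changes from the previous row, then factorize the dates within each block so repeated dates share a number:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ['2023-03-01', '2023-03-01', '2023-03-01', '2023-03-04', '2023-03-06',
             '2023-03-06', '2023-03-07', '2023-03-08', '2023-03-09', '2023-03-01',
             '2023-03-02', '2023-03-03', '2023-03-03', '2023-03-03', '2023-03-03',
             '2023-03-04', '2023-03-05', '2023-03-06'],
    "Code": ['X'] * 9 + ['Y'] * 9,
    "Item": ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'A', 'A'] + ['A'] * 9})
df['Date'] = pd.to_datetime(df['Date'])

# new block whenever Code or Item differs from the previous row
blk = (df[['Code', 'Item']] != df[['Code', 'Item']].shift()).any(axis=1).cumsum()

# number the distinct dates within each block, starting at 1
df['Daily_Count'] = df.groupby(blk)['Date'].transform(
    lambda s: s.factorize()[0] + 1)
print(df)
```

Grouping by `blk` rather than by `['Code', 'Item']` directly is what makes the count reset when a run of consecutive values is broken (rows 7-8 restart at 1 even though X/A appeared earlier).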
|
<python><pandas>
|
2023-05-25 02:52:42
| 1
| 392
|
jonboy
|
76,328,503
| 1,467,079
|
How to set a stoploss in vectorbt based on the number of ticks or price per contract
|
<p>There's an options to add <code>tp_stop</code> or <code>sl_stop</code> on <code>vbt.Portfolio.from_signals</code> which is percent based.</p>
<blockquote>
<p>sl_stop : array_like of float Stop loss. Will broadcast.</p>
<p>A percentage below/above the acquisition price for long/short
position. Note that 0.01 = 1%.</p>
</blockquote>
<p>I would like to exit based on a move of <code>n</code> price ticks or <code>n</code> dollars per contract. For example, if price moves 50 ticks against me, then exit.</p>
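Since <code>sl_stop</code> is percent-based but broadcasts, one workaround is to convert the tick distance into a per-bar fraction of price yourself. A sketch of the arithmetic only (<code>tick_size</code> and the prices are assumptions for an ES-style contract; the resulting series would then be passed as <code>sl_stop</code>):

```python
import pandas as pd

tick_size = 0.25  # assumed dollar value of one tick (instrument-specific)
n_ticks = 50      # exit after a 50-tick adverse move

close = pd.Series([4000.0, 4010.0, 3990.0])

# dollar distance of the stop, expressed as a fraction of each bar's price
sl_stop = (n_ticks * tick_size) / close
print(sl_stop)

# then, roughly:
# pf = vbt.Portfolio.from_signals(close, entries, exits, sl_stop=sl_stop)
```

One caveat: the stop is measured from the acquisition price, so using each bar's close as the denominator is only exact if entries fill at the close; otherwise the entry price series should be used instead.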
|
<python><python-3.x><vectorbt>
|
2023-05-25 02:24:55
| 1
| 653
|
ralphinator80
|
76,328,441
| 6,550,894
|
Regex to remove captions with condition not to overlap second match
|
<p>I have the following string, which I extract from a pdf:</p>
<pre><code>This is
Fig. 13: John holding his present and
the flowers
Source: official photographer
a beautiful
Table: a table of some kind
and fully
complete
Table: John holding his present and
Source: official photographer
sentence
</code></pre>
<p>The text includes figs and tables, most of which have a caption on top and a source on bottom, but some don't. Fundamentally, the text I want to be left with should be:</p>
<pre><code>This is
a beautiful
and fully
complete
sentence
</code></pre>
<p>I have tried the following:</p>
<pre><code>s = re.sub(r'(Fig|Table)[\s\S]+?Source:.*\n', '', mystring,flags=re.MULTILINE)
</code></pre>
<p>But unfortunately it returns:</p>
<pre><code>This is
a beautiful
sentence
</code></pre>
<p>With my limited knowledge of regex I cannot figure out how to put <strong>such a condition</strong>:</p>
<p>It should stop at the first <code>\n</code> after <code>Source</code>, only if there is no new <code>fig|table</code> in between, in which case it should have stopped at the first <code>\n</code> from start.</p>
<p>Any idea? Thank you.</p>
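One way to express that condition is a tempered pattern plus an alternation: first try to run from Fig/Table to the next <code>Source</code> line without crossing another Fig/Table; only if that fails, fall back to removing the single caption line. A sketch:

```python
import re

mystring = """This is
Fig. 13: John holding his present and
the flowers
Source: official photographer
a beautiful
Table: a table of some kind
and fully
complete
Table: John holding his present and
Source: official photographer
sentence
"""

pattern = (
    r'(?:Fig|Table)(?:(?!Fig|Table)[\s\S])*?Source:.*\n'  # caption ... Source line
    r'|(?:Fig|Table).*\n'                                 # lone caption line
)
print(re.sub(pattern, '', mystring))
```

The `(?:(?!Fig|Table)[\s\S])*?` part consumes any character only at positions where a new `Fig`/`Table` does not start, so the first alternative can never swallow a second caption; for a caption with no `Source`, the second alternative removes just that one line.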
|
<python><regex>
|
2023-05-25 02:08:11
| 1
| 417
|
lorenzo
|
76,328,191
| 851,699
|
Tensorflow in Pyinstaller on MacOS. Saving model fails with TensorShapeProto error
|
<p>I have a simple test script that just saves and loads a tensorflow model.</p>
<p>It passes when run from Python on my system, but when I run it from a PyInstaller package it fails with <code>TypeError: Parameter to MergeFrom() must be instance of same class: expected tensorflow.TensorShapeProto got tensorflow.TensorShapeProto.</code></p>
<p>The full script is</p>
<pre><code>import tempfile
from dataclasses import dataclass
import cv2
import tensorflow as tf
@dataclass
class ColorFilterModel(tf.Module):
rgb_filter_strengths = (0., 1., 1.)
def compute_mahalonabis_sq_heatmap(self, image):
return tf.reduce_sum(tf.cast(image, tf.float32) * tf.constant(self.rgb_filter_strengths), axis=-1)
def test_basic_save_and_load_model():
print("Testing basic model save/load")
model = ColorFilterModel()
numpy_image = cv2.imread(tf.keras.utils.get_file('basalt_canyon.jpg', "https://raw.githubusercontent.com/petered/data/master/images/basalt_canyon.jpg"))
print(f"Image shape: {numpy_image.shape}")
with tempfile.TemporaryDirectory() as tmpdirname:
concrete_func = tf.function(model.compute_mahalonabis_sq_heatmap, input_signature=[tf.TensorSpec(shape=(None, None, 3), dtype=tf.uint8)]).get_concrete_function()
result_original = concrete_func(numpy_image)
print(f"Result shape: {result_original.shape}")
tf.saved_model.save(model, tmpdirname, signatures={'mahal': concrete_func}) # THIS LINE FAILS
loaded_func = tf.saved_model.load(tmpdirname).signatures['mahal']
result_loaded = loaded_func(image=numpy_image)['output_0']
assert tf.reduce_all(tf.equal(result_original, result_loaded))
print("Passed test_basic_save_and_load_model")
if __name__ == "__main__":
test_basic_save_and_load_model()
</code></pre>
<p>When run from pyinstaller, it first gets the error message</p>
<pre><code>WARNING:tensorflow:AutoGraph is not available in this environment: functions lack code information. This is typical of some environments like the interactive Python shell. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md#access-to-source-code for more information.
</code></pre>
<p>Then outputs:</p>
<pre><code>Testing basic model save/load
Image shape: (1500, 2000, 3)
2023-05-24 17:20:13.912138: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
Result shape: (1500, 2000)
Traceback (most recent call last):
File "main.py", line 26, in <module>
test_basic_save_and_load_model()
File "video_scanner/detection_utils/test_standalone_model_io.py", line 22, in test_basic_save_and_load_model
tf.saved_model.save(model, tmpdirname, signatures={'mahal': concrete_func})
File "tensorflow/python/saved_model/save.py", line 1232, in save
File "tensorflow/python/saved_model/save.py", line 1268, in save_and_return_nodes
File "tensorflow/python/saved_model/save.py", line 1441, in _build_meta_graph
File "tensorflow/python/saved_model/save.py", line 1396, in _build_meta_graph_impl
File "tensorflow/python/saved_model/save.py", line 794, in _fill_meta_graph_def
File "tensorflow/python/saved_model/save.py", line 607, in _generate_signatures
File "tensorflow/python/saved_model/save.py", line 474, in _tensor_dict_to_tensorinfo
File "tensorflow/python/saved_model/save.py", line 475, in <dictcomp>
File "tensorflow/python/saved_model/utils_impl.py", line 78, in build_tensor_info_internal
TypeError: Parameter to MergeFrom() must be instance of same class: expected tensorflow.TensorShapeProto got tensorflow.TensorShapeProto.
[11509] Failed to execute script 'main' due to unhandled exception: Parameter to MergeFrom() must be instance of same class: expected tensorflow.TensorShapeProto got tensorflow.TensorShapeProto.
</code></pre>
<p>Oddly this does not seem to happen on Windows - only Mac.</p>
<p>If I do the saving outside of PyInstaller and only do the loading from PyInstaller, I get an error on load, <code>AttributeError: as_proto</code>, which I'm guessing has the same root cause.</p>
<p>What is going on, and how can I save/load tensorflow models in a pyinstaller app?</p>
|
<python><macos><tensorflow><conda><pyinstaller>
|
2023-05-25 00:33:18
| 1
| 13,753
|
Peter
|
76,328,171
| 13,538,030
|
Interpret one Python function
|
<p>Could you please help me interpret the meaning of this function, and how to use it? Thank you.</p>
<pre><code>def isNaN(num):
return num != num
</code></pre>
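The function relies on an IEEE 754 property: NaN ("not a number") is the only floating-point value that compares unequal to itself, so <code>num != num</code> is <code>True</code> exactly when <code>num</code> is NaN. A small demonstration (<code>math.isnan</code> is the idiomatic standard-library way to do the same check, though it only accepts numbers):

```python
import math

def isNaN(num):
    # NaN is the only float for which x != x is True (IEEE 754)
    return num != num

print(isNaN(float('nan')))  # True
print(isNaN(3.0))           # False
print(isNaN('abc'))         # False - a string equals itself
print(math.isnan(float('nan')))  # True (stdlib equivalent for numbers)
```

The `x != x` trick is handy when the value might not be a float at all (e.g. mixed-type pandas columns), since `math.isnan` raises a `TypeError` on non-numbers.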
|
<python>
|
2023-05-25 00:27:25
| 1
| 384
|
Sophia
|
76,328,094
| 3,482,266
|
Adding a Postgresql datasource to Grafana. Plugin health check failed
|
<p>I'm using a grafana docker container, and a Postgresql docker container.
I think I managed to add a data source to Grafana; however, the plugin health check fails as in the picture below. Also, when I try to query I get a connection error...</p>
<p>(screenshot: the failing plugin health check)</p>
<p>I've already checked, and the user I created in the sql db is a superuser, so it has the permission to query the table.
In my python script, I'm using the grafana http api to add the datasource.</p>
<pre><code>datasource = {
"name": "postgres",
"type": "postgres",
"host": f"http://{database.config.host}",
"database": database.config.database,
"user": database.config.user,
"password": database.config.password,
"access": "proxy",
"port": database.config.port,
"sslmode": "disable",
}
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.api_key}",
}
response = requests.post(
self.grafana_url + self.API_DATASOURCES,
json=datasource,
headers=headers,
timeout=2,
)
</code></pre>
<p>My logs, state:</p>
<pre><code>logger=context t=2023-05-24T23:47:11.909546523Z level=warn msg="failed to look up session from cookie" error="user token not found"
logger=context userId=0 orgId=0 uname= t=2023-05-24T23:47:11.909856964Z level=warn msg=Unauthorized error="user token not found" remote_addr=192.168.16.1 traceID=
logger=context userId=0 orgId=0 uname= t=2023-05-24T23:47:11.909908982Z level=info msg="Request Completed" method=GET path=/api/live/ws status=401 remote_addr=192.168.16.1 time_ms=0 duration=790.303µs size=40 referer= handler=/api/live/ws
logger=context t=2023-05-24T23:47:24.906185083Z level=warn msg="failed to look up session from cookie" error="user token not found"
logger=context userId=0 orgId=0 uname= t=2023-05-24T23:47:24.906382401Z level=warn msg=Unauthorized error="user token not found" remote_addr=192.168.16.1 traceID=
logger=context userId=0 orgId=0 uname= t=2023-05-24T23:47:24.906419291Z level=info msg="Request Completed" method=GET path=/api/live/ws status=401 remote_addr=192.168.16.1 time_ms=0 duration=416.706µs size=40 referer= handler=/api/live/ws
logger=context t=2023-05-24T23:47:26.901251349Z level=warn msg="failed to look up session from cookie" error="user token not found"
logger=context userId=0 orgId=0 uname= t=2023-05-24T23:47:26.901763659Z level=warn msg=Unauthorized error="user token not found" remote_addr=192.168.16.1 traceID=
logger=context userId=0 orgId=0 uname= t=2023-05-24T23:47:26.901862131Z level=info msg="Request Completed" method=GET path=/api/live/ws status=401 remote_addr=192.168.16.1 time_ms=1 duration=1.014872ms size=40 referer= handler=/api/live/ws
logger=context t=2023-05-24T23:47:33.749570154Z level=warn msg="failed to look up session from cookie" error="user token not found"
logger=context userId=0 orgId=0 uname= t=2023-05-24T23:47:33.749850255Z level=info msg="Request Completed" method=GET path=/connections/your-connections/datasources/edit/edec663f-a7cc-4316-9aa1-d672dee7d717 status=302 remote_addr=192.168.16.1 time_ms=0 duration=523.514µs size=29 referer=http://localhost:3000/login handler=/connections/your-connections/datasources/edit/*
logger=context t=2023-05-24T23:47:33.751853587Z level=warn msg="failed to look up session from cookie" error="user token not found"
logger=context t=2023-05-24T23:47:36.899061961Z level=warn msg="failed to look up session from cookie" error="user token not found"
logger=context userId=0 orgId=0 uname= t=2023-05-24T23:47:36.899246767Z level=warn msg=Unauthorized error="user token not found" remote_addr=192.168.16.1 traceID=
logger=context userId=0 orgId=0 uname= t=2023-05-24T23:47:36.899285702Z level=info msg="Request Completed" method=GET path=/api/live/ws status=401 remote_addr=192.168.16.1 time_ms=0 duration=439.616µs size=40 referer= handler=/api/live/ws
logger=context t=2023-05-24T23:47:42.741638708Z level=warn msg="failed to look up session from cookie" error="user token not found"
logger=http.server t=2023-05-24T23:47:42.75135848Z level=info msg="Successful Login" User=admin@localhost
logger=context userId=1 orgId=1 uname=newuser t=2023-05-24T23:47:42.894368364Z level=info msg="Request Completed" method=GET path=/api/live/ws status=-1 remote_addr=192.168.16.1 time_ms=0 duration=841.72µs size=0 referer= handler=/api/live/ws
logger=context userId=1 orgId=1 uname=newuser t=2023-05-24T23:47:42.912548509Z level=info msg="Request Completed" method=GET path=/api/datasources/uid/edec663f-a7cc-4316-9aa1-d672dee7d717 status=404 remote_addr=192.168.16.1 time_ms=0 duration=461.639µs size=35 referer=http://localhost:3000/connections/your-connections/datasources/edit/edec663f-a7cc-4316-9aa1-d672dee7d717 handler=/api/datasources/uid/:uid
logger=context userId=1 orgId=1 uname=newuser t=2023-05-24T23:47:42.946075661Z level=info msg="Request Completed" method=GET path=/api/datasources/edec663f-a7cc-4316-9aa1-d672dee7d717 status=400 remote_addr=192.168.16.1 time_ms=0 duration=341.626µs size=27 referer=http://localhost:3000/connections/your-connections/datasources/edit/edec663f-a7cc-4316-9aa1-d672dee7d717 handler=/api/datasources/:id
logger=context userId=1 orgId=1 uname=newuser t=2023-05-24T23:47:56.90833634Z level=info msg="Request Completed" method=GET path=/api/live/ws status=-1 remote_addr=192.168.16.1 time_ms=1 duration=1.658ms size=0 referer= handler=/api/live/ws
logger=context userId=1 orgId=1 uname=newuser t=2023-05-24T23:48:00.997970403Z level=error msg="Plugin health check failed" error="failed to check plugin health: health check failed" remote_addr=192.168.16.1 traceID=
logger=context userId=1 orgId=1 uname=newuser t=2023-05-24T23:48:00.998018152Z level=error msg="Request Completed" method=GET path=/api/datasources/uid/fb4800be-7c42-40fb-be33-f3e0d87ebaab/health status=500 remote_addr=192.168.16.1 time_ms=0 duration=623.119µs size=53 referer=http://localhost:3000/connections/your-connections/datasources/edit/fb4800be-7c42-40fb-be33-f3e0d87ebaab handler=/api/datasources/uid/:uid/health
</code></pre>
|
<python><postgresql><logging><grafana>
|
2023-05-25 00:03:49
| 1
| 1,608
|
An old man in the sea.
|
76,328,026
| 1,267,218
|
Alexa Skill needs more than 8s for a Lambda to complete
|
<p>I'm using Alexa's Python libraries to write a Lambda that hits a couple of APIs. Sometimes, this takes more than 8s. That's when Alexa is set to timeout.</p>
<pre class="lang-py prettyprint-override"><code>class CustomIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_intent_name("CustomIntentHandler")(handler_input)

    def handle(self, handler_input):
        f = open('secrets.json')
        secrets = json.load(f)
        f.close()
        # bulk of the code omitted (it works, it just takes too long sometimes)
        return (
            handler_input.response_builder
            .speak(speak_output)
            .response
        )
</code></pre>
<p>While I have found a specification in the API to help with this matter by deferring the response, I haven't been able to find any code samples that apply specifically to the Python libraries written for this. Where can I find such code samples to defer the response?</p>
|
<python><alexa-skills-kit>
|
2023-05-24 23:40:37
| 1
| 513
|
Luke
|
76,328,007
| 512,480
|
Inconsistent highlighting of default button under tkinter?
|
<p>The following Python module is designed to be used from any tkinter-based program, to pop up a dialog in front of the parent window. Of particular interest is askquestion, a substitute for the built-in messagebox version which doesn't always appear on top when it needs to. Note the Yes button, which has default="active". When run on Mac, this produces a nice, Mac like blue color to the button. When run on Ubuntu, there is <strong>absolutely no</strong> highlight to tell the user that it is the default button. Mac version:<a href="https://i.sstatic.net/YCizO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YCizO.png" alt="enter image description here" /></a>
Linux version:<a href="https://i.sstatic.net/mZJTh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mZJTh.png" alt="enter image description here" /></a></p>
<p>Is this right? Is this a bug? Is there some parameter I might change somewhere to improve the situation? And for that matter, what is the Linux-like way to indicate a default button?</p>
<pre><code>import tkinter as tk
from tkinter import ttk
from tkinter.font import Font

# This module exists to solve a problem that messagebox popups
# invoked over a child window tend to pop up over the top-level window
# instead under Linux.

messageFont = None

def pop(parent, width, height, title=""):
    '''Returns an open window centered over parent waiting for you to add content.
    Use child.destroy() to get rid of it when you are finished with it.'''
    global messageFont
    if not messageFont:
        messageFont = Font(root=parent, size=16)
    corner = (parent.winfo_x(), parent.winfo_y())
    size = (parent.winfo_width(), parent.winfo_height())
    popSize = (width, height)
    popCorner = sub(add(corner, div(size, 2)), div(popSize, 2))
    child = tk.Toplevel(parent)
    child.attributes("-topmost", True)
    geometry = f"{width}x{height}+{int(popCorner[0])}+{int(popCorner[1])}"
    print(popCorner, geometry)
    child.geometry(geometry)
    child.title(title)
    return child

def askquestion(parent, question, action, title=""):
    def onPress(yesNo):
        child.destroy()
        action(yesNo)
    child = pop(parent, 500, 250, title=title)
    frame = ttk.Frame(child)
    frame.pack(fill=tk.BOTH, expand="y")
    ttk.Label(frame, text=question, font=messageFont).pack(pady=(80, 0))
    buttons = ttk.Frame(frame)
    buttons.pack()
    ttk.Button(buttons, text="No", command=lambda: onPress(False)).pack(side=tk.LEFT)
    ttk.Button(buttons, text="Yes", command=lambda: onPress(True),
               default="active").pack(side=tk.LEFT)
    child.bind("<Escape>", lambda _event: onPress(False))
    child.bind("<Return>", lambda _event: onPress(True))

# basic vector arithmetic
def add(v1, v2):
    return (v1[0] + v2[0], v1[1] + v2[1])

def sub(v1, v2):
    return (v1[0] - v2[0], v1[1] - v2[1])

def mul(v1, c):
    return (v1[0] * c, v1[1] * c)

def div(v1, c):
    return (v1[0] / c, v1[1] / c)

if __name__ == "__main__":
    root = tk.Tk()
    root.geometry("640x480")
    root.update_idletasks()
    askquestion(root, "Are you awake?", lambda yesNo: print(yesNo))
    root.mainloop()
</code></pre>
|
<python><tkinter><button>
|
2023-05-24 23:35:34
| 1
| 1,624
|
Joymaker
|
76,327,976
| 1,968,829
|
How can I efficiently merge these dataframes on range values?
|
<p>I have two dataframes:</p>
<pre><code>section_headers =
start_sect_ end_sect_
0 0 50
1 121 139
2 221 270
sentences =
start_sent_ end_sent_
0 0 50
1 56 76
2 77 85
3 88 111
4 114 120
5 121 139
6 221 270
</code></pre>
<p>I'm trying to merge <code>sentences</code> that belong under each <code>section_header</code>...</p>
<p>A sentence belongs under a section_header when its start_sent_ is greater than or equal to that of a section_header's start_sect_ and less than or equal to the next section_header's start_sect_, etc.</p>
<p>Given this, my desired output is:</p>
<pre><code>merge =
start_sent_ end_sent_ start_sect_
0 0 50 0
1 56 76 0
2 77 85 0
3 88 111 0
4 114 120 0
5 121 139 121
6 221 270 221
</code></pre>
<p>I initially converted this to a dictionary and then created a new dataframe based on the conditions, but the amount of data I'm dealing with was very large and it took forever to iterate through the records.</p>
<p>I'm trying to devise a way to not have to iterate through these records to do a merge of the data. I tried the broadcast method here <a href="https://stackoverflow.com/questions/69087496/how-to-join-two-dataframes-for-which-column-values-are-within-a-certain-range-fo">Solution 2: Numpy Solution for large dataset</a>, but since this method doesn't allow indexing of the arrays, it doesn't work. Otherwise, it works great for two other merge use cases I have.</p>
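<p>For this exact "match each row to the most recent preceding start" pattern, one vectorized option (a sketch, not necessarily the only approach) is <code>pandas.merge_asof</code> with <code>direction="backward"</code>, which avoids Python-level iteration entirely:</p>

```python
import pandas as pd

section_headers = pd.DataFrame({"start_sect_": [0, 121, 221],
                                "end_sect_": [50, 139, 270]})
sentences = pd.DataFrame({"start_sent_": [0, 56, 77, 88, 114, 121, 221],
                          "end_sent_": [50, 76, 85, 111, 120, 139, 270]})

# direction="backward" pairs each sentence with the last section header
# whose start_sect_ <= start_sent_; both inputs must be sorted on the keys
merged = pd.merge_asof(
    sentences.sort_values("start_sent_"),
    section_headers[["start_sect_"]].sort_values("start_sect_"),
    left_on="start_sent_", right_on="start_sect_",
    direction="backward",
)
print(merged)
```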
|
<python><pandas><numpy><merge>
|
2023-05-24 23:24:44
| 1
| 2,191
|
horcle_buzz
|
76,327,919
| 16,988,223
|
Set first date of the year when a value has only the year in a pandas dataframe
|
<p>I have a column name called <code>date</code> in one pandas dataframe, this are the first 10 rows:</p>
<pre><code>0 22-Oct-2022
1 3-Dec-2019
2 27-Jun-2022
3 2023
4 15-Jul-2017
5 2019
6 7-Sep-2022
7 2021
8 30-Sep-2022
9 17-Aug-2021
</code></pre>
<p>I want to convert all those dates into full dates, for example:</p>
<pre><code>0 2023-05-19
1 2023-01-20
2 ...
</code></pre>
<p>And for those rows that only have the year, I want to set a full date. For example, if the original <code>df</code> has:</p>
<pre><code>0 2019
1 2021
</code></pre>
<p>to</p>
<pre><code>5 2019-01-01
7 2021-01-01
</code></pre>
<p>In other words, for these cases I want to set the first date of the year, keeping the original year rather than the current year.</p>
<p>I tried:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'], errors='coerce', format='%d-%b-%Y')
</code></pre>
<p>However, it generates <code>NaT</code> values for the year-only rows. I hope the problem is clear; I'd appreciate any ideas to fix it.</p>
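<p>A sketch of one way around the <code>NaT</code>s (assuming the column is string-typed): parse the two layouts separately and combine them — the <code>%Y</code> format anchors a bare year at January 1 of that year:</p>

```python
import pandas as pd

s = pd.Series(["22-Oct-2022", "3-Dec-2019", "2023", "2019"])

full = pd.to_datetime(s, format="%d-%b-%Y", errors="coerce")
# year-only strings fail the first format; %Y parses them as Jan 1 of that year
year_only = pd.to_datetime(s, format="%Y", errors="coerce")
dates = full.fillna(year_only)
print(dates)
```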
|
<python><pandas><dataframe><datetime>
|
2023-05-24 23:06:28
| 2
| 429
|
FreddicMatters
|
76,327,879
| 1,751,825
|
python jsonschema: Use "regex" module to validate "pattern"
|
<p>I'm trying to use jsonschema for a schema which uses "pattern". However in this application, the "pattern" needs to be able to match unicode characters, which is not support by python's builtin "re" module.</p>
<p>for example</p>
<pre><code>import jsonschema
import regex

schema = {
    "type": "object",
    "properties": {
        "name": {
            "type": "string",
            "pattern": "[\p{L}]+"
        },
    },
}

if regex.compile(schema["properties"]["name"]["pattern"]).search("ᚠᛇᚻ"):
    print("It matched")

jsonschema.validate(instance={"name": "ᚠᛇᚻ"}, schema=schema)
</code></pre>
<p>If I run this, the "regex" search works, but the schema validation fails with...</p>
<pre><code>jsonschema.exceptions.SchemaError: '[\\p{L}]+' is not a 'regex'
</code></pre>
<p>So what I'm wondering is if there is some way to get jsonschema.validate to ignore the normal "pattern" validation and instead check the pattern with the "regex" module. I'm very new to jsonschema, so I don't quite know where to start.</p>
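<p>One possible direction (hedged — not verified against the original schema): <code>jsonschema</code> lets you replace the implementation of the <code>pattern</code> keyword by extending a validator class, and instantiating that class directly skips the metaschema check that rejected <code>\p{L}</code>:</p>

```python
import regex
from jsonschema import validators, Draft7Validator
from jsonschema.exceptions import ValidationError

def unicode_pattern(validator, patrn, instance, schema):
    # delegate matching to the third-party 'regex' module, which
    # understands Unicode property classes like \p{L}
    if validator.is_type(instance, "string") and not regex.search(patrn, instance):
        yield ValidationError(f"{instance!r} does not match {patrn!r}")

UnicodeValidator = validators.extend(Draft7Validator, {"pattern": unicode_pattern})

schema = {"type": "object",
          "properties": {"name": {"type": "string", "pattern": r"[\p{L}]+"}}}

v = UnicodeValidator(schema)
v.validate({"name": "ᚠᛇᚻ"})       # no exception raised
print(v.is_valid({"name": "123"}))  # False
```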
|
<python><python-re><python-jsonschema><python-regex>
|
2023-05-24 22:54:27
| 0
| 4,337
|
user1751825
|
76,327,794
| 2,893,712
|
Pandas Join Rows If Time Is Continuous
|
<p>I have a pandas dataframe that shows when employees want to take time off. The Event Title is always in the format of "User Off" along with the specific time if it is not an all day event. Here is a snippet of the dataframe <code>df</code>:</p>
<pre><code> Event Title Start End Employee
UserA Off (07:00-12:00) 2023-05-08 2023-05-09 UserA
UserA Off (12:00-15:30) 2023-05-08 2023-05-09 UserA
UserB Off 2023-05-10 2023-05-11 UserB
UserC Off (08:00-10:30) 2023-05-30 2023-05-31 UserC
UserC Off (10:30-16:30) 2023-05-30 2023-05-31 UserC
UserD Off (09:30-10:00) 2023-05-10 2023-05-11 UserD
UserE Off (13:00-16:00) 2023-06-02 2023-06-03 UserE
UserE Off (07:30-13:00) 2023-06-02 2023-06-03 UserE
</code></pre>
<p>Users A, C, and E have 2 lines for the same start and end date but the times are continuous (this is because they pull from multiple vacation buckets like Vacation and Floating Holiday). What is the best way to combine these rows if the time is continuous?</p>
<p>Here is the first query I created to only display users that have multiple entries on the same day</p>
<pre><code>test = df.groupby(by=['Start','End','Employee']).filter(lambda x: len(x) > 1)
Event Title Start End Employee
UserA Off (07:00-12:00) 2023-05-08 2023-05-09 UserA
UserA Off (12:00-15:30) 2023-05-08 2023-05-09 UserA
UserC Off (08:00-10:30) 2023-05-30 2023-05-31 UserC
UserC Off (10:30-16:30) 2023-05-30 2023-05-31 UserC
UserE Off (13:00-16:00) 2023-06-02 2023-06-03 UserE
UserE Off (07:30-13:00) 2023-06-02 2023-06-03 UserE
</code></pre>
<p>Now I need to see if the times are continuous (i.e. if the first time off on a day ends at noon, then the next entry for this individual should start at noon). If the times are continuous, it can be assumed to be an all-day absence.</p>
<p>My original idea was to iterate over each dataframe <code>df.groupby(by=['Start','End','Employee'])</code>, sort by <code>Event Title</code> and then do <code>iterrows()</code> on each row and parse if <code>Event Title.str.split('-')[1]</code> equals <code>Event Title.str.split('(')[1].str.split('-')[0]</code> of the next row, but this seems very inefficient.</p>
<p>My end result would be the original dataframe <code>df</code> but have the continuous times be joined like so:</p>
<pre><code> Event Title Start End Employee
UserA Off 2023-05-08 2023-05-09 UserA
UserB Off 2023-05-10 2023-05-11 UserB
UserC Off 2023-05-30 2023-05-31 UserC
UserD Off (09:30-10:00) 2023-05-10 2023-05-11 UserD
UserE Off 2023-06-02 2023-06-03 UserE
</code></pre>
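<p>A sketch of a non-<code>iterrows</code> approach (column names like <code>t_start</code> are mine, not from the data): extract the times once with a regex, group by the date/employee keys, sort each group by start time, and collapse a group only when every entry starts exactly when the previous one ends:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Event Title": ["UserA Off (07:00-12:00)", "UserA Off (12:00-15:30)",
                    "UserD Off (09:30-10:00)"],
    "Start": ["2023-05-08", "2023-05-08", "2023-05-10"],
    "End": ["2023-05-09", "2023-05-09", "2023-05-11"],
    "Employee": ["UserA", "UserA", "UserD"],
})

# pull the (start-end) times out of the title; all-day rows would get NaN
times = df["Event Title"].str.extract(r"\((\d{1,2}:\d{2})-(\d{1,2}:\d{2})\)")
df["t_start"], df["t_end"] = times[0], times[1]

def collapse(g):
    g = g.sort_values("t_start")
    # contiguous means each entry starts exactly when the previous one ends
    if len(g) > 1 and (g["t_start"].iloc[1:].values == g["t_end"].iloc[:-1].values).all():
        row = g.iloc[[0]].copy()
        row["Event Title"] = row["Employee"] + " Off"
        return row
    return g

out = (df.groupby(["Start", "End", "Employee"], group_keys=False)
         .apply(collapse)
         .drop(columns=["t_start", "t_end"]))
print(out)
```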
|
<python><pandas><dataframe>
|
2023-05-24 22:32:05
| 1
| 8,806
|
Bijan
|
76,327,775
| 12,454,639
|
Having Trouble finding the correct scope for my API request to Google Drive API
|
<p>This is likely just a matter of me not knowing how to find the correct scope but I need help all the same.</p>
<pre><code>from google.oauth2 import service_account
from googleapiclient.discovery import build
import requests
import pprint
from dotenv import load_dotenv
import os
forcewinds_folder_url = 'fake-folder-url'
load_dotenv()
api_key = os.getenv('JAKE_KEY')
json_key_location = os.getenv('API_KEY_JSON_PATH')
# Replace 'YOUR_JSON_FILE.json' with the path to your downloaded JSON credentials file
credentials = service_account.Credentials.from_service_account_file(json_key_location, scopes=['https://www.googleapis.com/auth/documents.readonly'])
# Create a Google Drive API service
drive_service = build('drive', 'v3', credentials=credentials)
folder_name = 'fake-folder-id'
results = drive_service.files().list(q=f"name = '{folder_name}' and mimeType = 'application/vnd.google-apps.folder'").execute()
folders = results.get('files', [])
# Check if the folder was found and retrieve its ID
if folders:
    folder_id = folders[0]['id']
    print(f"The folder ID for '{folder_name}' is: {folder_id}")
else:
    print(f"No folder found with the name '{folder_name}'")
</code></pre>
<p>This code is giving me this error</p>
<pre><code>File "get_txt.py", line 21, in <module>
    results = drive_service.files().list(q=f"name = '{folder_name}' and mimeType = 'application/vnd.google-apps.folder'").execute()
  File "/home/zoctavous/.local/lib/python3.8/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/home/zoctavous/.local/lib/python3.8/site-packages/googleapiclient/http.py", line 938, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/drive/v3/files?q=name+%3D+%271RyT2IS-dhGLCNcod_LVt246463PBR3bH%27+and+mimeType+%3D+%27application%2Fvnd.google-apps.folder%27&alt=json returned "Request had insufficient authentication scopes.". Details: "[{'message': 'Insufficient Permission', 'domain': 'global', 'reason': 'insufficientPermissions'}]">
</code></pre>
<p>Which indicates a permissions issue. I have a json for the credentials of my project that has Google Drive API enabled here
<a href="https://i.sstatic.net/Lqbqw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lqbqw.png" alt="enter image description here" /></a></p>
<p>I've made sure I've updated my authentication JSON every time I add permissions to that account, and I've given it every permission that seems relevant for accessing a Google Drive folder on my account. I'm fairly sure that I am authenticating and creating my credentials correctly, so I'm fairly stuck.</p>
<p>What am I doing wrong here?</p>
|
<python><google-drive-api>
|
2023-05-24 22:27:26
| 1
| 314
|
Syllogism
|
76,327,749
| 6,357,916
|
What IP does HAProxy add to the header?
|
<p>We need to specify the mode in the haproxy service description in docker compose file using long syntax:</p>
<pre><code>services:
  haproxy:
    ports:
      # long port syntax https://docs.docker.com/compose/compose-file/compose-file-v3/#long-syntax-1
      - target: 80
        published: 9763
        protocol: tcp
        mode: host
</code></pre>
<p>After reading some articles online, I added following to haproxy's backend section:</p>
<pre><code>backend api
    option forwardfor
    http-request add-header X-Client-IP %[src]
    http-request add-header X-FrontEnd-IP %[dst]
</code></pre>
<p>Also, I start containers by running <code>docker stack deploy -c docker-compose.yml mystack</code> command.</p>
<p>Now note that when I run <code>hostname -I</code> command, I get following output</p>
<pre><code>$ hostname -I
192.168.0.102 172.18.0.1 172.17.0.1 172.19.0.1 192.168.49.1
</code></pre>
<p>Also my wifi settings shows IP <code>192.168.0.102</code>:</p>
<p><a href="https://i.sstatic.net/tBYWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tBYWk.png" alt="enter image description here" /></a></p>
<p>I am able to access the app from the same laptop on which it is running using three IPs: <code>http://172.18.0.1:9763/</code>, <code>http://127.0.0.1:9763/</code> and <code>http://192.168.0.102:9763/</code>.</p>
<ul>
<li><p><strong>Accesing the django web app from laptop using all above three URLs give following output</strong></p>
<p>In python code, I see different header values as follows:</p>
<pre><code> 'HTTP_X_CLIENT_IP' : '172.18.0.1,172.18.0.1'
'HTTP_X_FRONTEND_IP' : '172.18.0.9'
'HTTP_X_FORWARDED_FOR' : '172.18.0.1'
</code></pre>
<p>And <code>172.18.0.1</code> gets logged to database, as I am logging <code>'HTTP_X_FORWARDED_FOR'</code>.</p>
</li>
<li><p><strong>Accesing from tablet using</strong> <code>http://192.168.0.102:9763/login</code></p>
<p>My tablet is also connected to the same router as my laptop running the app. From tablet, I am able to access the app using url <code>http://192.168.0.102:9763/login</code>, but not using <code>http://127.18.0.1:9763/login</code>. When accessed using <code>http://192.168.0.102:9763</code>, various headers have following values:</p>
<pre><code> 'HTTP_X_CLIENT_IP' : '192.168.0.103,192.168.0.103'
'HTTP_X_FRONTEND_IP' : '172.18.0.9'
'HTTP_X_FORWARDED_FOR' : '192.168.0.103'
</code></pre>
<p>And <code>192.168.0.103</code> gets logged to database, as I am logging <code>HTTP_X_FORWARDED_FOR</code>.</p>
</li>
</ul>
<p>My concern is that the IP of my laptop's WiFi NIC is <code>192.168.0.102</code>, but it ends up logging <code>172.18.0.1</code>. Shouldn't it be logging <code>192.168.0.102</code> (similar to how it logs <code>192.168.0.103</code> for the tablet)? Also, why does it add <code>172.18.0.1</code> to the headers in the laptop's case? And how can I make it log <code>192.168.0.102</code> when the app is accessed from the laptop?</p>
|
<python><docker><docker-compose><docker-swarm><haproxy>
|
2023-05-24 22:18:57
| 1
| 3,029
|
MsA
|
76,327,605
| 1,493,192
|
Percent-encoded %2F makes the request fail
|
<p>I have a dictionary that I pass as parameters in a request:</p>
<pre><code>payload = {
    'latitude': 42.406747,
    'longitude': 12.154226,
    'timezone': 'Europe%2FBerlin',  # encoding problem
    'time_format': 'iso8601',
    'temperature_unit': 'celsius',
    'wind_speed_unit': 'ms',
    'precipitation_unit': 'mm',
    'start_date': '2023-05-20',
    'end_date': '2023-05-23',
    'models': ['best_match', 'era5', 'era5_land'],
    'hourly': ['temperature_2m', 'relativehumidity_2m', 'dewpoint_2m', 'apparent_temperature', 'pressure_msl', 'surface_pressure', 'precipitation', 'rain', 'snowfall', 'weathercode', 'cloudcover', 'cloudcover_low', 'cloudcover_mid', 'cloudcover_high', 'shortwave_radiation', 'direct_radiation', 'diffuse_radiation', 'direct_normal_irradiance', 'windspeed_10m', 'windspeed_100m', 'winddirection_10m', 'winddirection_100m', 'windgusts_10m', 'et0_fao_evapotranspiration', 'vapor_pressure_deficit', 'soil_temperature_0_to_7cm', 'soil_temperature_7_to_28cm', 'soil_temperature_28_to_100cm', 'soil_temperature_100_to_255cm', 'soil_moisture_0_to_7cm', 'soil_moisture_7_to_28cm', 'soil_moisture_28_to_100cm', 'soil_moisture_100_to_255cm']
}
</code></pre>
<p>or</p>
<pre><code>import requests

api_url = "https://archive-api.open-meteo.com/v1/archive"
response = requests.get(
    url=api_url,
    params=payload
)
</code></pre>
<p>The problem concerns the encoding of the <strong>Europe%2FBerlin</strong> parameter (i.e., timezone). In fact, in the final url, which causes the request to fail, the parameter <strong>Europe%2FBerlin</strong> appears as <strong>Europe%252FBerlin</strong></p>
<pre><code>// https://archive-api.open-meteo.com/v1/archive?latitude=42.406747&longitude=12.154226&timezone=Europe%252FBerlin&time_format=iso8601&temperature_unit=celsius&wind_speed_unit=ms&precipitation_unit=mm&start_date=2023-05-20&end_date=2023-05-23&models=best_match&models=era5&models=era5_land&hourly=temperature_2m&hourly=relativehumidity_2m&hourly=dewpoint_2m&hourly=apparent_temperature&hourly=pressure_msl&hourly=surface_pressure&hourly=precipitation&hourly=rain&hourly=snowfall&hourly=weathercode&hourly=cloudcover&hourly=cloudcover_low&hourly=cloudcover_mid&hourly=cloudcover_high&hourly=shortwave_radiation&hourly=direct_radiation&hourly=diffuse_radiation&hourly=direct_normal_irradiance&hourly=windspeed_10m&hourly=windspeed_100m&hourly=winddirection_10m&hourly=winddirection_100m&hourly=windgusts_10m&hourly=et0_fao_evapotranspiration&hourly=vapor_pressure_deficit&hourly=soil_temperature_0_to_7cm&hourly=soil_temperature_7_to_28cm&hourly=soil_temperature_28_to_100cm&hourly=soil_temperature_100_to_255cm&hourly=soil_moisture_0_to_7cm&hourly=soil_moisture_7_to_28cm&hourly=soil_moisture_28_to_100cm&hourly=soil_moisture_100_to_255cm
{
"reason": "Invalid timezone",
"error": true
}
</code></pre>
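<p>The double encoding can be demonstrated with the standard library alone; <code>requests</code> percent-encodes parameter values the same way, so the dict should carry the raw <code>Europe/Berlin</code> and let the library encode it once:</p>

```python
from urllib.parse import urlencode

# pre-encoded value: the '%' itself is escaped again, producing %252F
print(urlencode({"timezone": "Europe%2FBerlin"}))  # timezone=Europe%252FBerlin
# raw value: encoded exactly once
print(urlencode({"timezone": "Europe/Berlin"}))    # timezone=Europe%2FBerlin
```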
|
<python><python-requests>
|
2023-05-24 21:46:36
| 2
| 8,048
|
Gianni Spear
|
76,327,478
| 4,117,496
|
Django template render a nested dict with tuples
|
<p>Here's my dict =</p>
<pre><code>{
'123': {'metric_1': [('url_1', 'caption_1'), ('url_2', 'caption_2'), ('url_3', 'caption_3'), ('url_4', 'caption_4')], 'metric_2': [('url_1', 'caption_1'), ('url_2', 'caption_2'), ('url_3', 'caption_3'), ('url_4', 'caption_4')]},
'456': {'metric_1': [('url_1', 'caption_1'), ('url_2', 'caption_2'), ('url_3', 'caption_3'), ('url_4', 'caption_4')], 'metric_2': [('url_1', 'caption_1'), ('url_2', 'caption_2'), ('url_3', 'caption_3'), ('url_4', 'caption_4')]},
'789': {'metric_1': [('url_1', 'caption_1'), ('url_2', 'caption_2'), ('url_3', 'caption_3'), ('url_4', 'caption_4')], 'metric_2': [('url_1', 'caption_1'), ('url_2', 'caption_2'), ('url_3', 'caption_3'), ('url_4', 'caption_4')]},
}
</code></pre>
<p>it basically is a dict of a dict of a list of tuples.</p>
<p>what I'd like to render via Django template is:</p>
<pre><code>123
metric_1
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
metric_2
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
456
metric_1
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
metric_2
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
789
metric_1
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
metric_2
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
</code></pre>
<p>but I'm not able to do so, here's my current Django template:</p>
<pre><code><div class="grid grid-cols-2 gap-2 p-2">
  {% for one_analysis_id, all_metrics in url_dict.items %}
  <table>
    <tr><p>Analysis ID: {{ one_analysis_id }}</p></tr>
    {% for metric_name, url_and_caption_tuple_list in all_metrics.items %}
    <tr><p>{{ metric_name }}</p></tr>
    <tr>
      {% for url_and_caption_tuple in url_and_caption_tuple_list %}
      <td><img width="350" height="300" src="{{ url_and_caption_tuple.0 }}"><div>{{ url_and_caption_tuple.1 }}</div></td>
      {% endfor %}
    </tr>
    {% endfor %}
  </table>
  {% endfor %}
</div>
</code></pre>
<p>What the above code renders is:</p>
<pre><code>123
metric_1
metric_2
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
456
metric_1
metric_2
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
789
metric_1
metric_2
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
url_1 url_2 url_3 url_4
caption_1 caption_2 caption_3 caption_4
</code></pre>
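<p>A plausible explanation (worth confirming in the browser's parsed DOM): a <code>p</code> element is not valid inside a table row, so browsers hoist such content out of the table before rendering — which would explain why both metric headings end up grouped together above the image rows. A hedged fix keeps every heading inside a real <code>td</code> cell:</p>

```html
{% for one_analysis_id, all_metrics in url_dict.items %}
<table>
  <tr><td colspan="4">Analysis ID: {{ one_analysis_id }}</td></tr>
  {% for metric_name, url_and_caption_tuple_list in all_metrics.items %}
  <tr><td colspan="4">{{ metric_name }}</td></tr>
  <tr>
    {% for url_and_caption_tuple in url_and_caption_tuple_list %}
    <td>
      <img width="350" height="300" src="{{ url_and_caption_tuple.0 }}">
      <div>{{ url_and_caption_tuple.1 }}</div>
    </td>
    {% endfor %}
  </tr>
  {% endfor %}
</table>
{% endfor %}
```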
<p>Could anyone please provide any insight?</p>
<p>Thanks!</p>
|
<python><html><django><django-templates><nested-loops>
|
2023-05-24 21:21:33
| 1
| 3,648
|
Fisher Coder
|
76,327,461
| 6,069,586
|
Python asyncio REPL behavior vs normal REPL behavior
|
<p>I'm struggling to understand the difference between these two situations. In a script, I am getting TimeoutErrors from the below websockets.client.connect function, and I can replicate that behavior in a normal python REPL. However, it connects and works correctly in a python asyncio repl <a href="https://github.com/python/cpython/blob/main/Lib/asyncio/__main__.py" rel="nofollow noreferrer">(python -m asyncio)</a>.</p>
<pre class="lang-py prettyprint-override"><code># $ python -m asyncio # Python repl executing asyncio.__main__
import asyncio
import websockets.client as wsc
import ssl # Without this, similar looking error!
wss_url = '' # My WSS url
ws = await wsc.connect(wss_url) # This works
await ws.send('data') # This works
</code></pre>
<pre class="lang-py prettyprint-override"><code># $ python # either in REPL or when executing a script with the below
import asyncio
import websockets.client as wsc
import ssl
wss_url = '' # My WSS url
# ws = asyncio.run(wsc.connect(wss_url)) # Doesn't work, bc wsc.connect isn't a coroutine
async def get_ws():
    return await wsc.connect(wss_url)

ws = asyncio.run(get_ws())
# Raises asyncio.exceptions.TimeoutError

async def main():
    async with wsc.connect(wss_url) as ws:
        print("Connected: websocket")
        # or ws = await wsc.connect(wss_url), same result

if __name__ == '__main__':
    asyncio.run(main())
# Raises asyncio.exceptions.TimeoutError, prints nothing
</code></pre>
<p>What is the asyncio REPL doing that isn't happening when the same code runs under asyncio.run() in main?
The vague struggles of others suggest that there might be a sleep() call missing, or needing to create or get a loop(), but asyncio.run() seems to be sufficient for simpler tests, not using websockets. There's a similar error for the asyncio repl if ssl is not imported; is there a globals difference?</p>
<p>In this particular case, python 3.8, websockets 11.0.3.</p>
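<p>One difference can be reproduced without websockets at all: <code>connect()</code> returns an awaitable <code>Connect</code> object, not a coroutine, so it must be awaited inside a coroutine that is handed to <code>asyncio.run()</code> — which is why the commented-out <code>asyncio.run(wsc.connect(...))</code> line fails. A stand-in class (hypothetical, stdlib only) illustrates this:</p>

```python
import asyncio

class Connect:
    # stand-in for websockets' connect(): awaitable, but not a coroutine
    def __await__(self):
        yield from asyncio.sleep(0).__await__()
        return "connected"

async def main():
    return await Connect()  # fine: awaited inside a coroutine

print(asyncio.run(main()))  # connected
# asyncio.run(Connect()) would fail: run() requires an actual coroutine
```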
|
<python><python-asyncio>
|
2023-05-24 21:18:46
| 1
| 1,223
|
JWCS
|
76,327,370
| 272,023
|
How to use FastAPI request state variable AND also body parameter when using fastapi-cache?
|
<p>I have a FastAPI POST endpoint which receives a parameter in the request body. A global FastAPI Depends sets a Request state value which I want to retrieve in my method.</p>
<p>The following works fine:</p>
<pre><code>from pydantic import BaseModel
from fastapi import APIRouter, Request

class MyPydanticValue(BaseModel):
    # some properties here
    ...

router = APIRouter()

@router.post(
    "/foo"
)
def perform_action(
    my_val: MyPydanticValue,
    request: Request
):
    # do something with my_val
    ...
    # and also reference a state value that was set globally elsewhere
    print(request.state.x_some_value)
</code></pre>
<p>FastAPI sees the <code>my_val</code> parameter, recognises that it is a Pydantic model, and therefore parses it from the request body.</p>
<p>I now want to cache the function, so I use <a href="https://github.com/long2ice/fastapi-cache" rel="nofollow noreferrer">fastapi-cache</a>:</p>
<pre><code>@router.post(
    "/foo"
)
@cache(expire=300)
def perform_action(
    my_val: MyPydanticValue,
    request: Request
):
    # do something with my_val
    ...
    # and also reference a state value that was set globally elsewhere
    print(request.state.x_some_value)
</code></pre>
<p>Now I get an exception:</p>
<pre><code>TypeError: perform_action() got multiple values for argument 'my_val'
</code></pre>
<p>The <a href="https://github.com/long2ice/fastapi-cache#injected-request-and-response-dependencies" rel="nofollow noreferrer">fastapi-cache documentation</a> states:</p>
<blockquote>
<p>The cache decorator injects dependencies for the Request and Response objects, so that it can add cache control headers to the outgoing response, and return a 304 Not Modified response when the incoming request has a matching If-Non-Match header. This only happens if the decorated endpoint doesn't already list these dependencies already.</p>
</blockquote>
<p>My reading of that is that the injection shouldn't happen because I declare a parameter referring to the Request already. I think that's what is happening in the source code for the <code>@cache</code> decorator: <a href="https://github.com/long2ice/fastapi-cache/blob/v0.2.1/fastapi_cache/decorator.py#L41-L46" rel="nofollow noreferrer">https://github.com/long2ice/fastapi-cache/blob/v0.2.1/fastapi_cache/decorator.py#L41-L46</a>. However, I think that my function is indeed actually getting a request parameter being injected by fastapi-cache and then subsequently by FastAPI.</p>
<p>What am I doing wrong - is there a way of working around this? How do I access the state value in the endpoint function whilst also parsing the Pydantic model from the request body?</p>
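<p>The "multiple values" error pattern itself is not specific to fastapi-cache; it can be reproduced with a plain decorator (hypothetical names below) that injects a keyword argument the caller has already supplied positionally:</p>

```python
import functools

def inject_request(func):
    # naive injection: always pass `request` by keyword, like a caching
    # decorator re-adding a dependency the endpoint already declares
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        kwargs["request"] = "cache-injected-request"
        return func(*args, **kwargs)
    return wrapper

@inject_request
def perform_action(my_val, request):
    return my_val, request

try:
    # binding both parameters positionally triggers the clash
    perform_action("some-value", "framework-request")
except TypeError as exc:
    print(exc)  # ... got multiple values for argument 'request'
```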
|
<python><fastapi>
|
2023-05-24 20:59:48
| 0
| 12,131
|
John
|
76,327,286
| 3,826,115
|
How to have only Points respond to mouse tap in Holoviews plot with Points and QuadMesh overlaid
|
<p>The following code generates an interactive plot with a scatter Point plot overlaid on a gridded QuadMesh plot.</p>
<pre><code>import holoviews as hv
import numpy as np
import xarray as xr
import pandas as pd
hv.extension('bokeh')
#create sample gridded data
grid_x = np.linspace(0, 5, 10)
grid_y = np.linspace(0, 5, 10)
grid_data = np.random.rand(len(grid_x), len(grid_y))
grid_data_da = xr.DataArray(grid_data, coords=[grid_x, grid_y], dims=['x', 'y'], name='data')
#create sample point data
point_x = np.arange(5)
point_y = np.arange(5)
point_data = pd.DataFrame(data = {'x':point_x, 'y':point_y})
grid_plot = hv.QuadMesh(grid_data_da)
point_plot = hv.Points(point_data, kdims=['x', 'y']).opts(size = 12, tools = ['tap'])
grid_plot * point_plot
</code></pre>
<p>Currently, both the QuadMesh grid cells and the scatter Points respond to a mouse tap (the tapped point/grid cell is highlighted while the rest are dimmed). What I want is for only the scatter markers to respond to mouse taps, not the underlying grid cells. I thought that setting <code>opts()</code> on only the <code>hv.Points</code> would do it, but apparently not.</p>
<p>Thanks!</p>
|
<python><bokeh><holoviews>
|
2023-05-24 20:45:17
| 1
| 1,533
|
hm8
|
76,327,196
| 6,087,667
|
call concurrent futures from inside a function in another file
|
<p>How can I use parallel computing inside a function that lives in another file and is imported from there?</p>
<p>Here as an example I created a file scratch_tests.py</p>
<pre><code>import concurrent.futures

def g():
    print(__name__)
    global f
    def f(x):
        return x**2
    if __name__=='scratch_tests':
        with concurrent.futures.ProcessPoolExecutor() as executor:
            t = executor.submit(f, 2)
            return t.result()
</code></pre>
<p>In another file I execute these lines:</p>
<pre><code>from scratch_tests import g
g()
</code></pre>
<p>As expected, it errors out. How, in principle, does one handle cases like this, where the concurrent computation needs to happen inside a function residing in some other file?</p>
<pre><code>> BrokenProcessPool: A process in the process pool was terminated
> abruptly while the future was running or pending
</code></pre>
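<p>For context, the pattern that usually works - a sketch, not the poster's exact code - moves the worker function to module top level and keeps the executor behind a guard in the calling script. A ThreadPoolExecutor is used below only so the sketch runs anywhere; the structure is the same for ProcessPoolExecutor:</p>

```python
import concurrent.futures

# The worker must be defined at module top level so a process pool can
# pickle it; defining it inside g() at call time is what breaks.
def f(x):
    return x ** 2

def g():
    # ThreadPoolExecutor keeps this sketch portable; with a
    # ProcessPoolExecutor the same shape works, provided the *calling*
    # script wraps its code in `if __name__ == "__main__":`.
    with concurrent.futures.ThreadPoolExecutor() as executor:
        t = executor.submit(f, 2)
        return t.result()

print(g())  # → 4
```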
|
<python><parallel-processing><multiprocessing><concurrent.futures>
|
2023-05-24 20:28:02
| 1
| 571
|
guyguyguy12345
|
76,327,143
| 4,710,409
|
django-STATICFILES _DIRS not collecting
|
<p>In my django project settings, I defined my static files like so:</p>
<pre><code>STATIC_URL = 'static/'
STATIC_ROOT = BASE_DIR + '/static'

STATICFILES_DIRS = [
    BASE_DIR + '/folder1',
    BASE_DIR + '/folder2',
]
</code></pre>
<p>But collectstatic doesn't collect what is defined in "STATICFILES_DIRS".</p>
<p>Even though it works locally, on the server it doesn't work.</p>
<p>What am I doing wrong?</p>
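<p>Not necessarily the cause of the collectstatic problem, but a hedged sketch of the same settings using <code>os.path.join</code>, which also behaves when <code>BASE_DIR</code> is a <code>pathlib.Path</code> (the folder names are the question's own placeholders):</p>

```python
import os

BASE_DIR = os.path.abspath(os.getcwd())  # stand-in for the real project root

STATIC_URL = 'static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'folder1'),
    os.path.join(BASE_DIR, 'folder2'),
]
print(STATICFILES_DIRS)
```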
|
<python><django><server><static><django-staticfiles>
|
2023-05-24 20:19:48
| 1
| 575
|
Mohammed Baashar
|
76,327,116
| 10,941,410
|
How to send logs from a python application to Datadog using ddtrace?
|
<p>Let's say that I have a python routine that runs periodically using <code>cron</code>. Now, let's say that I want to send logs from it to Datadog. I thought that the simplest way to do it would be via Datadog's agent, e.g. using <code>ddtrace</code>...</p>
<pre class="lang-py prettyprint-override"><code>import ddtrace
ddtrace.patch_all()
import logging
logger = logging.getLogger(__name__)
logger.warning("Dummy log")
</code></pre>
<p>...but this is not working. I've tried with both <code>DD_LOGS_INJECTION=true</code> and <code>DD_LOGS_ENABLED=true</code> but looking at the <a href="https://docs.datadoghq.com/logs/log_collection/python/#configure-the-datadog-agent" rel="nofollow noreferrer">docs</a> it seems that I have to configure something so the Agent will tail the log files. However, by looking at <code>type: file</code> I'd guess that I could send logs without having to worry with creating those configuration files.</p>
<p>What would you say is the simplest way to send logs do Datadog and how to do that from a python application?</p>
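<p>For illustration, one common tail-a-file setup is to write JSON-formatted logs that the Agent can pick up; this sketch uses only the standard library and assumes the Agent is separately configured to tail <code>app.log</code> (a hypothetical path):</p>

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line."""
    def format(self, record):
        return json.dumps({
            "message": record.getMessage(),
            "level": record.levelname,
            "logger": record.name,
        })

handler = logging.FileHandler("app.log", mode="w")  # path the Agent would tail
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("my_cron_job")
logger.addHandler(handler)
logger.warning("Dummy log")
```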
|
<python><logging><datadog>
|
2023-05-24 20:15:32
| 2
| 305
|
Murilo Sitonio
|
76,326,916
| 8,367,943
|
Running R in an AWS Glue job
|
<p>Imagine you had a set of R scripts that form an ETL pipeline that you wanted to run as an AWS Glue job. AWS Glue supports Python and Scala.</p>
<p>Is it possible to call an R script as a Python subprocess (or a bash script that wraps a set of R scripts) within an AWS Glue job running in a container with Python and R dependencies?</p>
<p>If so, please outline the steps required and key considerations.</p>
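<p>In principle the Python side of such a job reduces to a subprocess call; a hypothetical sketch (it assumes the job's container image has the R runtime on PATH, which plain Glue workers do not):</p>

```python
import shutil
import subprocess

# Locate the R runtime; on a stock Glue worker this will be None.
rscript = shutil.which("Rscript")

if rscript:
    result = subprocess.run(
        [rscript, "-e", 'cat("hello from R\\n")'],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)
else:
    print("Rscript not found on PATH")
```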
|
<python><r><amazon-web-services><aws-glue>
|
2023-05-24 19:43:42
| 2
| 8,522
|
Rich Pauloo
|
76,326,571
| 525,865
|
getting data out of a website - using BS4 and request fails permanently - need another method now
|
<p>I am trying to scrape the data from the site <a href="https://www.startupblink.com" rel="nofollow noreferrer">https://www.startupblink.com</a> with Beautiful Soup, Python and requests:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
url = "https://www.startupblink.com"
response = requests.get(url)
soup = BeautifulSoup(response.content, "html.parser")
links = soup.find_all("a")
for link in links:
    print(link.get("href"))
</code></pre>
<p>This finds all the <code>a</code> tags on the page and prints out the values of their href attributes. My data extraction requirement is the following: I want to get all the data OUT OF THE SITE.</p>
<p>By the way, with pandas it would be even easier?!</p>
<p>Using the pandas library can make the process of scraping and processing data simpler: pandas provides powerful data manipulation and analysis tools, including convenient functions for reading HTML tables directly from a URL. Here are some of my ideas on how to use pandas to scrape data from the website <a href="https://www.startupblink.com" rel="nofollow noreferrer">https://www.startupblink.com</a>:</p>
<pre><code>import pandas as pd
import requests

# Send a GET request to the URL we want to scrape and store the response
url = "https://www.startupblink.com"
response = requests.get(url)
</code></pre>
<p>First we read the HTML tables using pandas: the <code>read_html()</code> function parses the HTML and extracts the tables present on the page, returning a list of DataFrame objects, one per table found. Since we're interested in all the tables on the page, we can pass <code>response.content</code> to <code>read_html()</code>:</p>
<pre><code>tables = pd.read_html(response.content)
</code></pre>
<p>Process and use the data: once we have the DataFrame objects representing the tables, we can process and analyze the data using pandas' built-in functions and methods.</p>
<p>What do you think about these different approaches?</p>
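<p>To make the <code>read_html()</code> idea concrete, here is a minimal, self-contained sketch against an in-memory table (the table content is made up; the real page would come from <code>requests.get(url).text</code>):</p>

```python
import io

import pandas as pd

# A stand-in for the fetched page; read_html returns one DataFrame per <table>
html = """
<table>
  <tr><th>rank</th><th>city</th></tr>
  <tr><td>1</td><td>San Francisco</td></tr>
  <tr><td>2</td><td>London</td></tr>
</table>
"""

tables = pd.read_html(io.StringIO(html))
df = tables[0]
print(df)
```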
|
<python><pandas>
|
2023-05-24 18:47:45
| 1
| 1,223
|
zero
|
76,326,219
| 20,999,526
|
How to make changes in a built-in library file in chaquopy?
|
<p>I am facing a problem where I have to change a line in a built-in file in a particular library (installed using pip). I have located the file in</p>
<blockquote>
<p>app\build\pip\debug\common\<library folder></p>
</blockquote>
<p>But every time I run Gradle (to install or to create the APK), the entire folder is <strong>recreated</strong>, and hence the file reverts to its original contents.</p>
<p>Is there any way to make the change permanent?</p>
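<p>One workaround that avoids touching the generated folder at all is to monkey-patch the attribute at runtime, right after importing the library. A toy sketch - <code>some_library</code> and <code>broken_function</code> are placeholder names, not a real package:</p>

```python
import types

# Stand-in for the imported third-party module
some_library = types.SimpleNamespace(broken_function=lambda: "old behaviour")

def fixed_function():
    return "patched behaviour"

# Rebinding the attribute changes what every later call sees, and survives
# Gradle regenerating the installed files because nothing on disk is edited.
some_library.broken_function = fixed_function
print(some_library.broken_function())  # → patched behaviour
```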
|
<python><android><android-studio><pip><chaquopy>
|
2023-05-24 17:50:16
| 1
| 337
|
George
|
76,326,096
| 8,401,294
|
requests, urllib3 and CacheControl version downgrade to working in Poetry
|
<p><a href="https://github.com/urllib3/urllib3/releases" rel="nofollow noreferrer">https://github.com/urllib3/urllib3/releases</a>
<code>2.0.0</code> - In this version, <code>strict</code> was removed.</p>
<p>As instructed in the link:
<a href="https://github.com/python-poetry/poetry/issues/7936" rel="nofollow noreferrer">https://github.com/python-poetry/poetry/issues/7936</a>
I went back to the previous version, being: <code>1.26.15</code></p>
<p>In some of my dependencies, <code>cachecontrol</code> is being used:
<a href="https://github.com/ionrock/cachecontrol/blob/v0.12.11/cachecontrol/serialize.py#L54" rel="nofollow noreferrer">https://github.com/ionrock/cachecontrol/blob/v0.12.11/cachecontrol/serialize.py#L54</a></p>
<p>And no matter how old the <code>cachecontrol</code> version is, it always makes use of <code>u"strict": response.strict</code>.
That is, whatever version <code>cachecontrol</code> is, internally it will keep trying to fetch the <code>strict</code> attribute.</p>
<p>My problem was that, I can't install my dependencies:</p>
<pre><code>[tool.poetry.dependencies]
python = ">=3.7,<4"
requests = "^2.28.0"
[tool.poetry.dev-dependencies]
…
</code></pre>
<p>The error always appears:
<a href="https://i.sstatic.net/Y7tpf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y7tpf.png" alt="enter image description here" /></a></p>
<p>Version of libraries I'm using with <code>pip list</code>:</p>
<pre><code>Package Version
------------------- ------------
CacheControl 0.12.11
requests 2.31.0
urllib3 1.26.15
</code></pre>
<p>Poetry version:
<code>Poetry version 1.1.6</code></p>
<p>Note on <code>requests</code>:
If I use <code>requests = "^2.28.0"</code>, it installs the following dependencies together:</p>
<pre><code>Installing certifi (2023.5.7)
• Installing charset-normalizer (3.1.0)
• Installing idna (3.4)
• Installing urllib3 (2.0.2)
• Installing requests (2.31.0)
</code></pre>
<p>If I update <code>pyproject.toml</code> to <code>requests = "2.28.0"</code>, the dependency downgrade is performed, as it forces use of the older version of the lib:</p>
<pre><code>Updating charset-normalizer (3.1.0 -> 2.0.12)
• Updating urllib3 (2.0.2 -> 1.26.16)
• Updating requests (2.31.0 -> 2.28.0)
</code></pre>
<p>This way, it starts working, because <code>urllib3</code> still had the <code>strict</code> attribute, and <code>cachecontrol</code> doesn't break anymore.</p>
<p><code>tool.poetry.dev-dependencies</code> (some dependency here is indirectly making use of <code>cachecontrol</code>, making it necessary to limit <code>requests</code> to 2.28.0, otherwise it will not work):</p>
<pre><code>[tool.poetry.dev-dependencies]
flake8 = "^4.0.1"
pre-commit = "^2.20.0"
pytest = "^7.0.0"
pytest-cov = "^4.0.0"
bandit = "^1.7.1"
mypy = "^0.931"
pylint = "^2.12.2"
black = "^22.1.0"
</code></pre>
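<p>An alternative worth trying, instead of holding <code>requests</code> back, is pinning the transitive <code>urllib3</code> dependency directly. A hypothetical fragment (untested against this exact dependency set):</p>

```toml
[tool.poetry.dependencies]
python = ">=3.7,<4"
requests = "^2.28.0"
# keep urllib3 below 2.0 so cachecontrol can still read response.strict
urllib3 = "<2.0"
```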
|
<python><python-poetry>
|
2023-05-24 17:34:21
| 0
| 365
|
José Victor
|
76,326,080
| 5,924,264
|
unbound method __init__() error with python2 but not with python3
|
<p>This is related to my earlier question: <a href="https://stackoverflow.com/questions/76324839/unbound-method-init-error-in-unit-tests-but-not-in-regular-executions?noredirect=1#comment134593579_76324839">unbound method __init__() error in unit tests but not in regular executions</a></p>
<p>I made a minimal reproducible example below:</p>
<pre><code>class Base:
    def __init__(self):
        print("base constructor")

class derived(Base):
    def __init__(self):
        print("derived constructor")

class Secondary(Base):
    def __init__(self):
        derived.__init__(self)

s = Secondary()
</code></pre>
<p>If you run this with <code>python3.7</code> (or any python3 version I think), it works fine, but with <code>python2.7</code>, you get the error</p>
<pre><code>$ python2.7 test.py
Traceback (most recent call last):
File "test.py", line 13, in <module>
s = Secondary()
File "test.py", line 11, in __init__
derived.__init__(self)
TypeError: unbound method __init__() must be called with derived instance as first argument (got Secondary instance instead)
</code></pre>
<p>Why does this work with python3 but not python2? Is there a way to make this work with python2?</p>
<p>Also tried using <code>super</code> (had to make <code>Base</code> inherit <code>object</code> to use it):</p>
<pre><code>class Base(object):
    def __init__(self):
        print("base constructor")

class derived(Base):
    def __init__(self):
        print("derived constructor")

class Secondary(Base):
    def __init__(self):
        super(derived, self).__init__()

s = Secondary()
</code></pre>
<p>There's still an issue with <code>python2</code>. The error changes (similar meaning I think):</p>
<pre><code>$ python2.7 test.py
Traceback (most recent call last):
File "test.py", line 13, in <module>
s = Secondary()
File "test.py", line 11, in __init__
super(derived, self).__init__()
TypeError: super(type, obj): obj must be an instance or subtype of type
</code></pre>
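<p>For what it's worth, both errors disappear if the current class - not a sibling subclass - is named in the call. A sketch that runs under both Python 2 (with new-style classes) and Python 3:</p>

```python
class Base(object):
    def __init__(self):
        print("base constructor")

class Derived(Base):
    def __init__(self):
        print("derived constructor")

class Secondary(Base):
    def __init__(self):
        # Name the class this method is defined in, not Derived
        super(Secondary, self).__init__()

s = Secondary()  # prints "base constructor"
```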
|
<python><python-3.x><python-2.7><inheritance>
|
2023-05-24 17:32:03
| 1
| 2,502
|
roulette01
|
76,326,040
| 4,800,907
|
Azure Functions - BlobTrigger - How to trigger function with any file in a container?
|
<p>I'm facing a problem with an Azure Function.</p>
<p>I've built an Azure Function triggered by a new file in a container of a storage account.
The problem is that it seems impossible (to me) to trigger the function with a generic file, without specifying a name!</p>
<p>I've searched the official documentation and it's not specified how to reference a generic file. The expected behaviour is to have a function triggered when I upload any file (with any name) into a specific container, something like "container/*".</p>
<p>This is my simple function:</p>
<p><strong>function.json</strong></p>
<pre><code>{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "inputBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "premium-input/*",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
</code></pre>
<p><strong>__init__.py</strong></p>
<pre><code>import logging
import azure.functions as func

def main(inputBlob: func.InputStream):
    logging.info("START EXECUTION")
    logging.info(f'NAME {inputBlob.name}')
    logging.info(f'URI {inputBlob.uri}')
    logging.info("END EXECUTION")
</code></pre>
<p>I've already tried using an event on EventGrid, but I'd prefer to avoid it...</p>
<p>Can you please help me?</p>
<p>Thanks in advance!!</p>
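<p>For reference, blob trigger paths use a binding-expression placeholder rather than a glob, so a sketch of the same <strong>function.json</strong> that fires for any blob in the container would be:</p>

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "inputBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "premium-input/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```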
|
<python><azure><azure-functions><azure-blob-storage><event-driven>
|
2023-05-24 17:27:28
| 1
| 650
|
walzer91
|
76,325,736
| 926,837
|
Passing a new command to a Powershell Elevated instance using schtasks.exe in python using subprocess.run
|
<p>I'm trying to dynamically execute some PowerShell commands that require admin privileges in my python application,
but I'm having trouble getting the behaviour I want when stacking commands with <strong>subprocess.run</strong>.<br>
So let's discuss the code:<br>
This is an example command that requires admin privileges (an elevated prompt) to run:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
admin_cmd = r'New-NetFirewallRule -DisplayName "Notepad block -Outbound-" -Direction Outbound -Program "C:\Windows\System32\notepad.exe" -Action Block'
subprocess.run(["powershell.exe", "-Command", admin_cmd])
</code></pre>
<p>This command will output an error ("Accesso negato" is Italian for "access denied"):
<strong>New-NetFirewallRule : Accesso negato.</strong>
<a href="https://i.sstatic.net/FnmzS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FnmzS.png" alt="enter image description here" /></a>
Obviously it runs without a problem in an elevated PowerShell.<br>
So as a solution I created a task to bypass UAC and run an elevated PowerShell without workflow interruption.<br>
I can run the elevated shell like this:<br>
<code>subprocess.run(['schtasks.exe', "/run", "/tn", "Powershell_Elevata"])</code>
<br>Or like this:<br>
<code>subprocess.run(["powershell.exe", "-Command", r'Start-Process schtasks.exe -ArgumentList "/run /tn Powershell_elevata"'])</code><br>
But if I try to pipe <strong>admin_cmd</strong> with -c or -Command in either of those, I will get an error,
because schtasks.exe knows nothing about New-NetFirewallRule.</p>
<p>So how do I fix the problem? Possibly without the need of creating new files, just python code.
And if possible the Elevated Powershell should run the example command silently without opening a window.</p>
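<p>One quoting-safe way to hand a whole command through another layer such as schtasks is PowerShell's <code>-EncodedCommand</code> switch, which takes a base64-encoded UTF-16LE string. A sketch (the actual invocation is commented out, since it is Windows-only):</p>

```python
import base64

admin_cmd = ('New-NetFirewallRule -DisplayName "Notepad block -Outbound-" '
             '-Direction Outbound -Program '
             '"C:\\Windows\\System32\\notepad.exe" -Action Block')

# PowerShell expects the payload as base64 over UTF-16LE
encoded = base64.b64encode(admin_cmd.encode("utf-16-le")).decode("ascii")
print(encoded[:32], "...")

# On Windows this would run it without quoting headaches:
# subprocess.run(["powershell.exe", "-EncodedCommand", encoded])
```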
|
<python><powershell><subprocess>
|
2023-05-24 16:38:58
| 1
| 301
|
Relok
|
76,325,688
| 7,657,180
|
Fix No module named 'openai_whisper'
|
<p>I have installed the package <code>openai_whisper</code> using this line</p>
<pre><code>pip install openai-whisper
</code></pre>
<p>But when I used this line in the code <code>import openai_whisper</code>, I got an error <code>ModuleNotFoundError: No module named 'openai_whisper'</code>.</p>
<p>I also tried this line, but to no avail: <code>python -m pip install openai-whisper</code></p>
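<p>Worth noting: the pip distribution name and the import name can differ; the <code>openai-whisper</code> package installs a module called <code>whisper</code>. A small check that compares the two names without assuming either is installed here:</p>

```python
import importlib.util

# find_spec reports whether a module is importable without importing it
status = {
    name: importlib.util.find_spec(name) is not None
    for name in ("openai_whisper", "whisper")
}
print(status)
```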
|
<python>
|
2023-05-24 16:31:40
| 1
| 9,608
|
YasserKhalil
|
76,325,661
| 3,231,250
|
how to extend background size in plotly
|
<p>I have added a text box beside a scatter plot.</p>
<p>Since the figure size is fixed, I cannot see the text box properly.
I would like to extend the background and put the text box beside the scatter plot.</p>
<pre><code>import plotly.graph_objects as go
import plotly.express as px

config = {'staticPlot': True}
fig = go.Figure()
fig = px.scatter(terms, x='pr_auc_x1', y='pr_auc_x2', color='hue2', title='test')
fig.add_shape(type="line", x0=0, y0=0, x1=1, y1=1, line=dict(color="Grey", width=1))
fig.update_layout(
    annotations=[
        go.layout.Annotation(
            text="..<br>".join([i for i in terms[terms.hue == 1].legend]),
            align='left',
            showarrow=False,
            xref='paper',
            yref='paper',
            x=1.22,
            y=0.93,
            bordercolor='black',
            borderwidth=1,
        )
    ]
)
fig.show(config=config)
</code></pre>
<p><a href="https://i.sstatic.net/zfFdE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zfFdE.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2023-05-24 16:29:26
| 1
| 1,120
|
Yasir
|
76,325,603
| 1,607,057
|
Evaluating forward references with typing.get_type_hints in Python for a class defined inside another method/class
|
<p>I'm having trouble calling typing.get_type_hints() for classes that have forward references as strings. My code works when the classes are not defined inside a function. I've reproduced a minimal example below in Python 3.10:</p>
<pre class="lang-py prettyprint-override"><code>import typing

class B:
    pass

class A:
    some_b: "B"

print(typing.get_type_hints(A))  # prints {'some_b': <class '__main__.B'>}
</code></pre>
<pre class="lang-py prettyprint-override"><code>import typing

def func():
    class B:
        pass

    class A:
        some_b: "B"

    print(typing.get_type_hints(A))

func()  # NameError: name 'B' is not defined
</code></pre>
<p>Is this expected behavior? Is there any way to get around this, and make sure that forward references with strings get evaluated in the correct scope?</p>
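<p>One workaround I'm aware of - a sketch, with no claim it covers every case - is to pass the enclosing scope explicitly via <code>localns</code>, since the local classes are not visible from the module namespace that <code>get_type_hints()</code> searches by default:</p>

```python
import typing

def func():
    class B:
        pass

    class A:
        some_b: "B"

    # Supplying the function's locals lets the string "B" be resolved
    return typing.get_type_hints(A, localns=locals())

hints = func()
print(hints)
```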
|
<python><python-typing>
|
2023-05-24 16:23:22
| 1
| 411
|
GoogieK
|