| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,608,134
| 1,411,345
|
python pyloudnorm get RMS values (loudness metering) for PCM WAV
|
<p>Currently I am using this code:</p>
<pre><code>import sys
import soundfile as sf
import pyloudnorm as pyln
f = open( "c:\\temp\\wav_analysis.txt", "w" )
data, rate = sf.read(sys.argv[1]) # load audio (with shape (samples, channels))
meter = pyln.Meter( rate ) # create BS.1770 meter
loudness = meter.integrated_loudness(data) # measure loudness
'''output analysis data'''
for i in range(1, len(data)):
if abs(data[i]) > 0.4:
f.write(str( i / rate ) + "," + str(abs(data[ i ])) + "\n")
</code></pre>
<p>The WAV file is passed in as an argument, it's read in and then analyzed for loudness across all of "data".</p>
<p>I don't want that.
I want to analyze 100 ms windows of data (i.e. 4410 samples at a time), shifting my window by 50 milliseconds, thus creating lots of loudness values.</p>
<p>Is there a way to call meter.integrated_loudness() in such a way that it does that?</p>
<p>Or do I need to somehow create a bunch of 4410-value long data arrays derived from "data" and then feed each one of those to meter.integrated_loudness() one by one?</p>
<p>(The stuff below '''output analysis data''' is just a place holder. I want to replace it with what I need.)</p>
<p>EDIT: See the "slicing" answer below. Also, keep in mind that through trial and error I discovered that integrated_loudness requires the data to be at least 17640 samples long (i.e. 400ms at 44100).</p>
<p>EDIT2: During random searching for something else, I came across this site:<a href="https://pysoundfile.readthedocs.io/en/0.8.0/" rel="nofollow noreferrer">https://pysoundfile.readthedocs.io/en/0.8.0/</a></p>
<p>There, this code snippet was exactly what I was initially looking for for quickly getting the RMS values of the WAV file:</p>
<pre><code>import numpy as np
import soundfile as sf
rms = [np.sqrt(np.mean(block**2)) for block in
sf.blocks('myfile.wav', blocksize=1024, overlap=512)]
</code></pre>
<p>Not only is it much faster, but it is also not limited by the 0.4-second minimum window that I ran into with meter.integrated_loudness.</p>
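<p>A minimal sketch of the 100 ms window / 50 ms hop RMS computation described above (synthetic data stands in for the WAV samples, so the numbers are only illustrative):</p>

```python
import numpy as np

rate = 44100
# Stand-in for `data` from sf.read(): 1 second of fake mono audio in [-0.5, 0.5].
data = np.random.default_rng(0).uniform(-0.5, 0.5, rate)

win = int(0.100 * rate)  # 100 ms window -> 4410 samples
hop = int(0.050 * rate)  # 50 ms hop    -> 2205 samples

# One RMS value per window position.
rms = [
    np.sqrt(np.mean(data[i:i + win] ** 2))
    for i in range(0, len(data) - win + 1, hop)
]
```

<p>Each entry of <code>rms</code> corresponds to a window start time of <code>i / rate</code> seconds; for real loudness (as opposed to RMS), the same slicing applies but each slice fed to <code>meter.integrated_loudness()</code> must meet the ~400 ms minimum noted in the EDIT.</p>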
|
<python><pcm><rms>
|
2023-03-01 19:29:47
| 1
| 619
|
a1s2d3f4
|
75,608,033
| 3,713,236
|
How to apply the same cat.codes to 2 different dataframes?
|
<p>I have 2 dataframes <code>X_train</code> and <code>X_test</code>. These 2 dataframes have the same columns.</p>
<p>There is 1 column called <code>levels</code> that needs to be changed from <code>str</code> to <code>int</code>. However, each dataframe's <code>levels</code> columns has different unique values:</p>
<p><code>X_train</code> has: ['Level 0', 'Level 10', 'Level 30'] as unique values.</p>
<p><code>X_test</code> has: ['Level 20', 'Level 40'] as unique values.</p>
<p>The goal is 1) Combine the unique values from both <code>X_train</code> and <code>X_test</code>, and then 2) apply the <code>cat.codes</code> to both dataframes so that they are consistent. How would I do that? Basically the <code>cat.codes</code> that are applied to both dataframes will be as follows, even though 1 dataframe may not have values the other dataframe has:</p>
<pre><code>{0: 'Level 0', 1: 'Level 10', 2: 'Level 20', 3: 'Level 30', 4: 'Level 40'}
</code></pre>
<p>Right now I only have the below but I'm not sure how to get the unique values of both <code>cat.codes</code>.</p>
<pre><code>X_train['levels'] = X_train['levels'].astype('category').cat.codes
X_test['levels'] = X_test['levels'].astype('category').cat.codes
</code></pre>
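<p>One way, sketched here with small stand-in frames, is to build the combined category list first and pass it to <code>pd.Categorical</code> for both dataframes. Note that <code>sorted</code> happens to give the desired order only because these particular strings sort lexicographically into numeric order:</p>

```python
import pandas as pd

# Hypothetical stand-ins for the question's X_train / X_test dataframes.
X_train = pd.DataFrame({"levels": ["Level 0", "Level 10", "Level 30"]})
X_test = pd.DataFrame({"levels": ["Level 20", "Level 40"]})

# 1) Combine the unique values from both dataframes into one ordered list.
all_levels = sorted(set(X_train["levels"]) | set(X_test["levels"]))

# 2) Apply the *same* categories to both, so the codes are consistent even
#    for values that only appear in one dataframe.
X_train["levels"] = pd.Categorical(X_train["levels"], categories=all_levels).codes
X_test["levels"] = pd.Categorical(X_test["levels"], categories=all_levels).codes

print(dict(enumerate(all_levels)))
```

<p>This prints the shared mapping <code>{0: 'Level 0', 1: 'Level 10', 2: 'Level 20', 3: 'Level 30', 4: 'Level 40'}</code>.</p>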
|
<python><pandas><dataframe><categorical-data>
|
2023-03-01 19:16:10
| 1
| 9,075
|
Katsu
|
75,607,929
| 2,908,017
|
How to display PopUp ShowMessage on Python FMX GUI App?
|
<p>How do I display a <code>MessageBox</code> or <code>ShowMessage</code> popup dialog on a <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX Python App</a>?</p>
<p>I basically want a <code>Form</code> with a <code>Button</code> and when you press on the button, then there should be a popup like this:</p>
<p><a href="https://i.sstatic.net/0D61p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0D61p.png" alt="MessageBox Popup in Python GUI App" /></a></p>
<p>I have a button click event, but I have no idea what to put in there to display such a Message popup. I have the following code currently:</p>
<pre><code>from delphifmx import *
class frmMain(Form):
def __init__(self, owner):
self.Caption = 'My Form'
self.Width = 800
self.Height = 500
self.myButton = Button(self)
self.myButton.Parent = self
self.myButton.Position.X = 100
self.myButton.Position.Y = 100
self.myButton.Width = 200
self.myButton.Height = 200
self.myButton.Text = "Click me"
self.myButton.OnClick = self.myButtonClick
def myButtonClick(self, sender):
print('Button Clicked!')
def main():
Application.Initialize()
Application.Title = "My Application"
Application.MainForm = frmMain(Application)
Application.MainForm.Show()
Application.Run()
Application.MainForm.Destroy()
main()
</code></pre>
<p>What do I write in the <code>myButtonClick</code> method?</p>
|
<python><user-interface><popup><firemonkey><messagebox>
|
2023-03-01 19:04:09
| 1
| 4,263
|
Shaun Roselt
|
75,607,888
| 243,031
|
not able to access pytest mark on module level fixture
|
<p>I am declaring <code>pytest.mark</code> on the test case, and trying to access that in <code>pytest.fixture</code>.</p>
<p><strong>test_my_code.py</strong></p>
<pre><code>import pytest
@pytest.mark.mymark("test", "mark")
def test_the_code():
assert 1 == 1
</code></pre>
<p><strong>conftest.py</strong></p>
<pre><code>import pytest
@pytest.fixture(scope="module")
def my_fixture(request):
my_mark = request.node.get_closest_marker("mymark").args
return
</code></pre>
<p>When I try to run this code, it raises <code>AttributeError: 'NoneType' object has no attribute 'args'</code>, but if I remove the <code>scope="module"</code> from the <code>pytest.fixture</code>, I am able to access that <code>mark</code>.</p>
|
<python><scope><pytest><fixtures>
|
2023-03-01 19:00:44
| 0
| 21,411
|
NPatel
|
75,607,814
| 4,245,834
|
Is it possible to run python38 venv on python37 or python310?
|
<p>My project supports Python 3.7 and Python 3.10. One library (a2ba_sdk) that I will need to use only supports Python 3.8. Is it possible to create a virtual environment for this library and run it from Python 3.7 or Python 3.10?</p>
<p>What I am looking for here is how to work with multiple versions of Python.</p>
|
<python><python-3.x><python-venv><virtual-environment>
|
2023-03-01 18:52:11
| 0
| 453
|
Sagar Mehta
|
75,607,749
| 2,172,751
|
Pycharm Console Does Not Print Dots
|
<p>I am running PyCharm 2022.3.2.</p>
<p>If I <code>print('...')</code> I get blank output in both Python Console and IPython Console. This does not happen if I run Python from the terminal.</p>
<p>This happens with any multiple of three dots at the start of the line: every group of three dots is converted to blanks. It does not happen if there is something else before the dots, e.g. <code>print('...-...')</code> gives <code>-...</code>.</p>
<p>Is there some setting I am missing? Is it a bug? How can I fix this?</p>
|
<python><pycharm>
|
2023-03-01 18:44:22
| 0
| 533
|
Danyal
|
75,607,683
| 259,543
|
Conditional type hints in python
|
<p>How to do proper type hinting in Python when certain packages only declare type hints when <code>typing.TYPE_CHECKING</code> is enabled?</p>
<p>For example, in flask:</p>
<pre><code># This works in mypy, not in python
# because flask checks for t.TYPE_CHECKING
# before declaring WSGIEnvironment
from flask import WSGIEnvironment
environ: WSGIEnvironment
</code></pre>
<p>What is the usual or clean way to solve this?
Do I need to redeclare the type declarations present in the typeshed?</p>
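<p>One common pattern, sketched below, is to guard the import the same way flask does and defer annotation evaluation (via <code>from __future__ import annotations</code>, or equivalently string annotations), so the name only needs to exist for the type checker:</p>

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by type checkers (mypy, pyright), never at runtime,
    # so it does not matter that flask hides the name behind TYPE_CHECKING.
    from flask import WSGIEnvironment

def handle(environ: WSGIEnvironment) -> None:
    # With `from __future__ import annotations`, the annotation is stored
    # as the string "WSGIEnvironment" and never evaluated at runtime.
    ...
```

<p>mypy resolves <code>WSGIEnvironment</code> normally, while at runtime the import never executes and the annotation stays an unevaluated string.</p>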
|
<python><type-hinting>
|
2023-03-01 18:35:08
| 2
| 5,252
|
alecov
|
75,607,640
| 3,768,655
|
Form Recognizer in deep Learning with Annotation
|
<p>I have regular digital forms with blanks, boxes, checkboxes, tables, and signature fields. My aim is to extract the field name along with its fillable coordinates.</p>
<p>For example, if the form has a field named "Name of beneficiary" with its corresponding blank space at (x=500, y=750), I require the field name and its blank space coordinates.</p>
<p>AWS and Azure didn't provide blank space coordinates. Please let me know if there is any existing library or model to capture the names and their corresponding blank spaces.</p>
<p>If I have to develop a custom model, kindly suggest a baseline model I can start with and how I can tell my model which field name maps to which blank space.</p>
<p>Thanks in advance.</p>
<p>Sample forms are:
<a href="https://i.sstatic.net/az0HQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/az0HQ.png" alt="enter image description here" /></a></p>
|
<python><deep-learning><azure-cognitive-search><amazon-textract><document-layout-analysis>
|
2023-03-01 18:30:46
| 1
| 1,859
|
R.K
|
75,607,616
| 5,203,151
|
Django: maximum recursion depth exceeded in comparison - Postman works
|
<p>I have a Django backend running as a service on an Ubuntu machine. After a while, I am getting this error when trying to execute my requests:</p>
<p>maximum recursion depth exceeded in comparison</p>
<p>I have attached to the process and found where the problem is occurring:</p>
<pre><code> response = requests.delete( apiUrlAddEntries, verify=False )
return response
</code></pre>
<p>The debugger does not get inside the delete call; it just triggers that exception. However, if I send the same delete call using Postman (on the same machine, toward the same server), it all goes through fine.</p>
<p>What do you think? Is it because of the requests lib? What could make it work from Postman and not from the requests lib call? If I restart the service, it works fine, for a while that is.</p>
|
<python><django><python-requests>
|
2023-03-01 18:27:11
| 0
| 451
|
Nightrain
|
75,607,567
| 7,437,143
|
Re-using update function for 2 plotly-dash figures?
|
<h2>Context</h2>
<p>After having created a code that adds an arbitrary number of graphs in the Dash web interface, I was trying to re-use the updater function, as it is the same for each respective graph.</p>
<h2>Issue</h2>
<p>When I inspect the graph, they are both the same graph (the last one that was created/updated). This is determined by inspecting the hovertext value of a particular node that is different between the two graphs.</p>
<h2>Dash Layout Code</h2>
<p>The dash app layout is created with:</p>
<pre class="lang-py prettyprint-override"><code>@typechecked
def create_app_layout(
*,
app: dash.Dash,
dash_figures: Dict[str, go.Figure],
plotted_graphs: Dict[str, nx.DiGraph],
temporal_node_colours_dict: Dict[str, List],
) -> dash.Dash:
"""Creates the app layout."""
html_figures: List = []
for graph_name in plotted_graphs.keys():
# Create html figures with different id's.
html_figures.append(
dcc.Slider(
id=f"color-set-slider{graph_name}",
min=0,
max=len(temporal_node_colours_dict[graph_name][0]) - 1,
value=0,
marks={
i: str(i) for i in range(len(temporal_node_colours_dict[graph_name][0]))
},
step=None,
)
)
html_figures.append(
html.Div(dcc.Graph(id=f"Graph{graph_name}", figure=dash_figures[graph_name]))
)
print(f'graph_name={graph_name}, val={dash_figures[graph_name]}')
# Store html graphs in layout.
app.layout = html.Div(
html_figures
)
return app
</code></pre>
<h2>Dash Update code:</h2>
<p>The dash graphs are updated with:</p>
<pre class="lang-py prettyprint-override"><code>@typechecked
def support_updates(
*,
app: dash.Dash,
dash_figures: Dict[str, go.Figure],
identified_annotations_dict: Dict[str, List[NamedAnnotation]],
plot_config: Plot_config,
plotted_graphs: Dict[str, nx.DiGraph],
temporal_node_colours_dict: Dict[str, List],
temporal_node_opacity_dict: Dict[str, List],
) -> None:
"""Allows for updating of the various graphs."""
# State variable to keep track of current color set
initial_t = 0
for graph_name, plotted_graph in plotted_graphs.items():
@app.callback(
Output(f"Graph{graph_name}", "figure"),
[Input(f"color-set-slider{graph_name}", "value")],
)
def update_color(
t: int,
) -> go.Figure:
# ) -> None:
"""Updates the colour of the nodes and edges based on user
input."""
if len(temporal_node_colours_dict[graph_name][0]) == 0:
raise ValueError(
"Not enough timesteps were found. probably took timestep "
+ "of ignored node."
)
update_node_colour_and_opacity(
dash_figure=dash_figures[graph_name],
identified_annotations=identified_annotations_dict[graph_name],
plot_config=plot_config,
plotted_graph=plotted_graph,
t=t,
temporal_node_colours=temporal_node_colours_dict[graph_name],
temporal_node_opacity=temporal_node_opacity_dict[graph_name],
)
update_node_colour(
dash_figure=dash_figures[graph_name],
plot_config=plot_config,
plotted_graph=plotted_graph,
t=t,
)
update_node_hovertext(
dash_figure=dash_figures[graph_name],
plot_config=plot_config,
plotted_graph=plotted_graph,
t=t,
)
return dash_figures[graph_name]
update_color(
t=initial_t,
)
</code></pre>
<h2>Example</h2>
<p>The first image shows the upper graph is the adapted graph (<code>vth=4</code>=last row of first block):
<a href="https://i.sstatic.net/ktbFZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ktbFZ.png" alt="enter image description here" /></a>
Then, after waiting a second or two (while the graph updates automatically upon initialization), it jumps to being the 2nd graph, with <code>vth=9999</code>:
<a href="https://i.sstatic.net/lIKbh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lIKbh.png" alt="enter image description here" /></a></p>
<h2>Question</h2>
<p>How can I ensure both/all graphs are unique and stay unique whilst updating them, whilst re-using the update function?</p>
<h2>Debugging</h2>
<p>I noticed that when I remove the return statement from the <code>update_color()</code> function, the graphs remain unique. However, that is because the graph does not update at all anymore.</p>
<p>So I <strong>assume</strong> the object that is returned by the updater function is applied to both figures, instead of only to the figure to which it pertains according to the <code>layout</code> names.</p>
<p>From what I understand from <a href="https://dash.plotly.com/sharing-data-between-callbacks" rel="nofollow noreferrer">this</a> documentation, I overwrite the figure data. However, if that interpretation is correct, I do not yet understand how, because I return <code>dash_figures[graph_name]</code> which is a different object for the different graph names.</p>
<h2>Bandaid Solution</h2>
<p>When I manually copy the updater function, like:</p>
<pre class="lang-py prettyprint-override"><code>@typechecked
def support_updates(
*,
app: dash.Dash,
dash_figures: Dict[str, go.Figure],
identified_annotations_dict: Dict[str, List[NamedAnnotation]],
plot_config: Plot_config,
plotted_graphs: Dict[str, nx.DiGraph],
temporal_node_colours_dict: Dict[str, List],
temporal_node_opacity_dict: Dict[str, List],
) -> None:
"""Allows for updating of the various graphs."""
# State variable to keep track of current color set
initial_t = 0
graph_name_one='adapted_snn_graph'
first_plotted_graph=plotted_graphs[graph_name_one]
@app.callback(
Output(f"Graph{graph_name_one}", "figure"),
[Input(f"color-set-slider{graph_name_one}", "value")],
)
def update_color_one(
t: int,
) -> go.Figure:
# ) -> None:
"""Updates the colour of the nodes and edges based on user
input."""
if len(temporal_node_colours_dict[graph_name_one][0]) == 0:
raise ValueError(
"Not enough timesteps were found. probably took timestep "
+ "of ignored node."
)
update_node_colour_and_opacity(
dash_figure=dash_figures[graph_name_one],
identified_annotations=identified_annotations_dict[graph_name_one],
plot_config=plot_config,
plotted_graph=plotted_graphs[graph_name_one],
t=t,
temporal_node_colours=temporal_node_colours_dict[graph_name_one],
temporal_node_opacity=temporal_node_opacity_dict[graph_name_one],
)
update_node_colour(
dash_figure=dash_figures[graph_name_one],
plot_config=plot_config,
plotted_graph=plotted_graphs[graph_name_one],
t=t,
)
update_node_hovertext(
dash_figure=dash_figures[graph_name_one],
plot_config=plot_config,
plotted_graph=plotted_graphs[graph_name_one],
t=t,
)
return dash_figures[graph_name_one]
update_color_one(
t=initial_t,
)
# Manual copy
graph_name_two='rad_adapted_snn_graph'
second_plotted_graph=plotted_graphs[graph_name_two]
@app.callback(
Output(f"Graph{graph_name_two}", "figure"),
[Input(f"color-set-slider{graph_name_two}", "value")],
)
def update_color_two(
t: int,
) -> go.Figure:
# ) -> None:
"""Updates the colour of the nodes and edges based on user
input."""
if len(temporal_node_colours_dict[graph_name_two][0]) == 0:
raise ValueError(
"Not enough timesteps were found. probably took timestep "
+ "of ignored node."
)
update_node_colour_and_opacity(
dash_figure=dash_figures[graph_name_two],
identified_annotations=identified_annotations_dict[graph_name_two],
plot_config=plot_config,
plotted_graph=plotted_graphs[graph_name_two],
t=t,
temporal_node_colours=temporal_node_colours_dict[graph_name_two],
temporal_node_opacity=temporal_node_opacity_dict[graph_name_two],
)
update_node_colour(
dash_figure=dash_figures[graph_name_two],
plot_config=plot_config,
plotted_graph=plotted_graphs[graph_name_two],
t=t,
)
update_node_hovertext(
dash_figure=dash_figures[graph_name_two],
plot_config=plot_config,
plotted_graph=plotted_graphs[graph_name_two],
t=t,
)
return dash_figures[graph_name_two]
update_color_two(
t=initial_t,
)
</code></pre>
<p>it does work (the graphs remain unique and updatable).</p>
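<p>This behavior matches Python's standard late-binding closure pitfall: <code>graph_name</code> inside <code>update_color</code> is looked up when the callback actually runs, which is after the <code>for</code> loop has finished, so every registered callback sees the last graph's name. The manual copy works because each copy hard-codes its own name. A minimal, self-contained sketch of the pitfall and two common fixes (the names here are illustrative, not from the Dash app):</p>

```python
# Pitfall: each closure captures the *variable* `name`, not its value at
# each iteration, so after the loop they all see the final value.
funcs = [lambda: name for name in ["a", "b", "c"]]
assert [f() for f in funcs] == ["c", "c", "c"]

# Fix 1: bind the current value as a default argument.
funcs = [lambda name=name: name for name in ["a", "b", "c"]]
assert [f() for f in funcs] == ["a", "b", "c"]

# Fix 2: a factory function creates a fresh scope per iteration.
def make_callback(name):
    def callback():
        return name
    return callback

funcs = [make_callback(name) for name in ["a", "b", "c"]]
assert [f() for f in funcs] == ["a", "b", "c"]
```

<p>Applied to the Dash code, the factory pattern (Fix 2) would mean moving the callback body into a helper that takes <code>graph_name</code> as a parameter and registering the returned function once per graph, instead of defining <code>update_color</code> directly inside the loop.</p>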
|
<python><plotly-dash><updates>
|
2023-03-01 18:20:20
| 1
| 2,887
|
a.t.
|
75,607,544
| 7,483,211
|
How to display formatted Jupyter stack trace from "full output data"
|
<p>I am debugging Python code in VS Code using the Jupyter integration in interactive mode.</p>
<p>The stack trace I get is only partially displayed, because "Output exceeds the <em>size limit</em>." I would like to see the full stack trace.</p>
<p>I'm advised to "Open the full output data <em>in a text editor</em>".</p>
<p>When I do that, by clicking on "in a text editor" I get something that looks like JSON but is too messy for human consumption:</p>
<pre class="lang-json prettyprint-override"><code>{
"name": "InvalidOperationError",
"message": "window expression not allowed in aggregation",
"stack": "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[0;31mInvalidOperationError\u001b[0m Traceback (most recent call last)\n\u001b[1;32m/Users/corneliusromer/Downloads/tmp/so.py\u001b[0m in \u001b[0;36mline 2\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=38'>39</a>\u001b[0m \u001b[39m#%%\u001b[39;00m\n\u001b[0;32m----> <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=39'>40</a>\u001b[0m df\u001b[39m.\u001b[39;49mwith_columns(\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=40'>41</a>\u001b[0m [\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=41'>42</a>\u001b[0m pl\u001b[39m.\u001b[39;49mstruct(\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=42'>43</a>\u001b[0m [\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=43'>44</a>\u001b[0m (\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=44'>45</a>\u001b[0m pl\u001b[39m.\u001b[39;49mcol(specs[specnm][\u001b[39m\"\u001b[39;49m\u001b[39myvar\u001b[39;49m\u001b[39m\"\u001b[39;49m])\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=45'>46</a>\u001b[0m \u001b[39m-\u001b[39;49m pl\u001b[39m.\u001b[39;49mcol(specs[specnm][\u001b[39m\"\u001b[39;49m\u001b[39myvar\u001b[39;49m\u001b[39m\"\u001b[39;49m])\u001b[39m.\u001b[39;49mmean()\u001b[39m.\u001b[39;49mover(specs[specnm][\u001b[39m\"\u001b[39;49m\u001b[39mgvars\u001b[39;49m\u001b[39m\"\u001b[39;49m])\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=46'>47</a>\u001b[0m )\u001b[39m.\u001b[39;49mabs(),\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=47'>48</a>\u001b[0m \u001b[39m*\u001b[39;49mspecs[specnm][\u001b[39m\"\u001b[39;49m\u001b[39mxvars\u001b[39;49m\u001b[39m\"\u001b[39;49m],\n\u001b[1;32m <a 
href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=48'>49</a>\u001b[0m ]\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=49'>50</a>\u001b[0m )\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=50'>51</a>\u001b[0m \u001b[39m.\u001b[39;49mapply(\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=51'>52</a>\u001b[0m partial(\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=52'>53</a>\u001b[0m ols_fitted, yvar\u001b[39m=\u001b[39;49mspecs[specnm][\u001b[39m\"\u001b[39;49m\u001b[39myvar\u001b[39;49m\u001b[39m\"\u001b[39;49m], xvars\u001b[39m=\u001b[39;49mspecs[specnm][\u001b[39m\"\u001b[39;49m\u001b[39mxvars\u001b[39;49m\u001b[39m\"\u001b[39;49m]\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=53'>54</a>\u001b[0m )\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=54'>55</a>\u001b[0m )\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=55'>56</a>\u001b[0m \u001b[39m.\u001b[39;49mover([\u001b[39m\"\u001b[39;49m\u001b[39mdate\u001b[39;49m\u001b[39m\"\u001b[39;49m, \u001b[39m\"\u001b[39;49m\u001b[39mid\u001b[39;49m\u001b[39m\"\u001b[39;49m])\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=56'>57</a>\u001b[0m \u001b[39m.\u001b[39;49malias(\u001b[39mf\u001b[39;49m\u001b[39m\"\u001b[39;49m\u001b[39mfitted_\u001b[39;49m\u001b[39m{\u001b[39;49;00mspecnm\u001b[39m}\u001b[39;49;00m\u001b[39m\"\u001b[39;49m)\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=57'>58</a>\u001b[0m \u001b[39mfor\u001b[39;49;00m specnm \u001b[39min\u001b[39;49;00m \u001b[39mlist\u001b[39;49m(specs\u001b[39m.\u001b[39;49mkeys())\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=58'>59</a>\u001b[0m ]\n\u001b[1;32m <a href='file:///Users/corneliusromer/Downloads/tmp/so.py?line=59'>60</a>\u001b[0m )\n\nFile 
\u001b[0;32m/opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py:6139\u001b[0m, in \u001b[0;36mDataFrame.with_columns\u001b[0;34m(self, exprs, *more_exprs, **named_exprs)\u001b[0m\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=5985'>5986</a>\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39mwith_columns\u001b[39m(\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=5986'>5987</a>\u001b[0m \u001b[39mself\u001b[39m,\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=5987'>5988</a>\u001b[0m exprs: IntoExpr \u001b[39m|\u001b[39m Iterable[IntoExpr] \u001b[39m=\u001b[39m \u001b[39mNone\u001b[39;00m,\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=5988'>5989</a>\u001b[0m \u001b[39m*\u001b[39mmore_exprs: IntoExpr,\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=5989'>5990</a>\u001b[0m \u001b[39m*\u001b[39m\u001b[39m*\u001b[39mnamed_exprs: IntoExpr,\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=5990'>5991</a>\u001b[0m ) \u001b[39m-\u001b[39m\u001b[39m>\u001b[39m Self:\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=5991'>5992</a>\u001b[0m \u001b[39m \u001b[39m\u001b[39m\"\"\"\u001b[39;00m\n\u001b[1;32m <a 
href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=5992'>5993</a>\u001b[0m \u001b[39m Add columns to this DataFrame.\u001b[39;00m\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=5993'>5994</a>\u001b[0m \n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=6133'>6134</a>\u001b[0m \n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=6134'>6135</a>\u001b[0m \u001b[39m \"\"\"\u001b[39;00m\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=6135'>6136</a>\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_from_pydf(\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=6136'>6137</a>\u001b[0m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mlazy()\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=6137'>6138</a>\u001b[0m \u001b[39m.\u001b[39;49mwith_columns(exprs, \u001b[39m*\u001b[39;49mmore_exprs, \u001b[39m*\u001b[39;49m\u001b[39m*\u001b[39;49mnamed_exprs)\n\u001b[0;32m-> <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=6138'>6139</a>\u001b[0m \u001b[39m.\u001b[39;49mcollect(no_optimization\u001b[39m=\u001b[39;49m\u001b[39mTrue\u001b[39;49;00m)\n\u001b[1;32m <a 
href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=6139'>6140</a>\u001b[0m \u001b[39m.\u001b[39m_df\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/dataframe/frame.py?line=6140'>6141</a>\u001b[0m )\n\nFile \u001b[0;32m/opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/lazyframe/frame.py:1323\u001b[0m, in \u001b[0;36mLazyFrame.collect\u001b[0;34m(self, type_coercion, predicate_pushdown, projection_pushdown, simplify_expression, no_optimization, slice_pushdown, common_subplan_elimination, streaming)\u001b[0m\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/lazyframe/frame.py?line=1311'>1312</a>\u001b[0m common_subplan_elimination \u001b[39m=\u001b[39m \u001b[39mFalse\u001b[39;00m\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/lazyframe/frame.py?line=1313'>1314</a>\u001b[0m ldf \u001b[39m=\u001b[39m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_ldf\u001b[39m.\u001b[39moptimization_toggle(\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/lazyframe/frame.py?line=1314'>1315</a>\u001b[0m type_coercion,\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/lazyframe/frame.py?line=1315'>1316</a>\u001b[0m predicate_pushdown,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/lazyframe/frame.py?line=1320'>1321</a>\u001b[0m streaming,\n\u001b[1;32m <a 
href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/lazyframe/frame.py?line=1321'>1322</a>\u001b[0m )\n\u001b[0;32m-> <a href='file:///opt/homebrew/Caskroom/miniforge/base/envs/py11/lib/python3.11/site-packages/polars/internals/lazyframe/frame.py?line=1322'>1323</a>\u001b[0m \u001b[39mreturn\u001b[39;00m pli\u001b[39m.\u001b[39mwrap_df(ldf\u001b[39m.\u001b[39;49mcollect())\n\n\u001b[0;31mInvalidOperationError\u001b[0m: window expression not allowed in aggregation"
}
</code></pre>
<p>How can I display the <code>stack</code> variable in a nicely formatted manner, similar to how the excerpt is shown in the image below?</p>
<p><a href="https://i.sstatic.net/pbQCW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pbQCW.png" alt="How the excerpt is shown, nicely formatted" /></a></p>
<p>Is there maybe a VS code extension for this? Or does the value of the <code>stack</code> key look like some known formatting? Is this using ANSI escapes or something else that I could use a CLI tool for viewing?</p>
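<p>The <code>stack</code> value does use ANSI (SGR) escape sequences (<code>\u001b[0;31m</code> and friends are terminal color codes), so one option is to strip them, or print the raw string in an ANSI-aware terminal to get colors back. A minimal stripping sketch (the JSON string here is a shortened stand-in for the real output):</p>

```python
import json
import re

# Matches SGR (color/style) sequences like "\x1b[0;31m"; a simplification
# that covers the codes appearing in Jupyter tracebacks.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

raw = '{"stack": "\\u001b[0;31mInvalidOperationError\\u001b[0m: boom"}'
stack = json.loads(raw)["stack"]
plain = ANSI_RE.sub("", stack)
print(plain)  # InvalidOperationError: boom
```

<p>Alternatively, writing the unmodified <code>stack</code> string to a terminal (or piping through <code>less -R</code>) renders the colors instead of stripping them.</p>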
|
<python><visual-studio-code><formatting><jupyter><stack-trace>
|
2023-03-01 18:18:09
| 2
| 10,272
|
Cornelius Roemer
|
75,607,502
| 1,720,743
|
conda's behavior differs between two identical pc's using the same version
|
<p>My colleague and I are having a head-scratcher moment. We built an internal library that we are trying to get working on two machines. The machines are nearly identical in hardware spec.</p>
<p>On my machine I can do this:</p>
<pre><code>>conda create --no-default-packages -n library_test_env python=3.8
>conda activate library_test_env
>pip install git+https_link_to_repository
</code></pre>
<p>This installs the library and its dependencies. When I do <code>conda list</code> it doesn't list all the dependencies; it does show some of them, but not our own library. My colleague gets a much longer list, which does contain the dependencies and the library.</p>
<p>When I do <code>pip list</code> I get a much longer list than my colleague, which includes the library and the remaining dependencies.</p>
<p>Here is the catch; a specific function in the library uses sqlalchemy to connect to the company database and run a query. The query returns a timeseries which contains a UTC timestamp column.</p>
<p>On my machine the dtype of this column is <code>datetime64</code> as you would expect, on his machine it is read as an <code>object</code>. Which causes the logic to fail.</p>
<p>We are scratching our heads here. Apart from the different behavior of conda on our machines, I just realised he is using Windows 11 and I am using Windows 10. Although the dependencies show up differently in <code>conda list</code> and <code>pip list</code>, the listed versions are identical.</p>
<p>What could be a possible cause of this?</p>
|
<python><sqlalchemy><anaconda><conda>
|
2023-03-01 18:12:57
| 0
| 770
|
XiB
|
75,607,488
| 5,272,967
|
Mean value calculation for clustered data using NumPy
|
<p>Suppose I have two arrays:</p>
<ul>
<li><code>x</code> which contains <code>m</code> points;</li>
<li><code>c</code> which contains <code>m</code> cluster ids for each corresponding point from <code>x</code>.</li>
</ul>
<p>I want to calculate the mean value for points which share the same id, i.e. which belong to the same cluster. I know that <code>c</code> contains integers from the range <code>[0, k)</code> and all the values are present in the <code>c</code>.
My current solution looks like the following:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
np.random.seed(42)
k = 3
x = np.random.rand(100, 2)
c = np.random.randint(0, k, size=x.shape[0])
mu = np.zeros((k, 2))
for i in range(k):
mu[i] = x[c == i].mean(axis=0)
</code></pre>
<p>While this approach works, I'm wondering if there is a more efficient way to calculate the means in NumPy without having to use an explicit for loop?</p>
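<p>One vectorised alternative, sketched against the same data: <code>np.add.at</code> performs an unbuffered scatter-add, so repeated cluster ids accumulate correctly, and <code>np.bincount</code> supplies the cluster sizes:</p>

```python
import numpy as np

np.random.seed(42)
k = 3
x = np.random.rand(100, 2)
c = np.random.randint(0, k, size=x.shape[0])

# scatter-add each point into its cluster's slot, then divide by cluster sizes
sums = np.zeros((k, 2))
np.add.at(sums, c, x)                  # unbuffered, so repeated ids accumulate
counts = np.bincount(c, minlength=k)
mu = sums / counts[:, None]

# same result as the explicit loop
mu_loop = np.array([x[c == i].mean(axis=0) for i in range(k)])
assert np.allclose(mu, mu_loop)
```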
|
<python><numpy>
|
2023-03-01 18:11:36
| 1
| 313
|
andywiecko
|
75,607,419
| 11,092,636
|
openpyxl hides two columns instead of 1
|
<p>Here is a MRE. <a href="https://wetransfer.com/downloads/89eff5d269cfc2a771dec590a7f3901020230301180308/23666a" rel="nofollow noreferrer">Download "huge_bug.xlsx" (it's a wetransfer link)</a> first:</p>
<pre class="lang-py prettyprint-override"><code>from openpyxl import load_workbook
document = load_workbook("huge_bug.xlsx")
# hide column AJ
document.active.column_dimensions["AJ"].hidden = True
# save workbook
document.save("huge_bug_2.xlsx")
</code></pre>
<p>I want to hide column <code>AJ</code> but both columns <code>AJ</code> and <code>AK</code> get hidden and I have no idea why. There is no merged cell that could indicate why those two columns "work together".</p>
<p>I suspect the problem is in my initial excel file, but I'm not sure where to look.</p>
<p>I'm using <code>openpyxl 3.0.10</code> and <code>Python 3.11.1</code>.</p>
<p>EDIT: As @moken pointed out, it probably has to do with the "fill" of the columns, if you change one of them, only by one point, the bug is not reproducible anymore.</p>
|
<python><excel><openpyxl>
|
2023-03-01 18:04:41
| 0
| 720
|
FluidMechanics Potential Flows
|
75,607,393
| 7,014,837
|
Adding global X/Y labels to a grid of subplots
|
<p>I'm trying to plot a summary of many experiments in one grid of graphs in order to compare the results. My goal is to get a graph that looks as follows:</p>
<p><a href="https://i.sstatic.net/B7x7P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B7x7P.png" alt="Illustration of the desired graph" /></a></p>
<p>I have successfully created the grid of the graphs using <code>subfigures</code> and the titles of rows (and even for each graph in each row). However, I could not get the X/Y-Labels to show.</p>
<p>The code I have been using is as follows:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (20,20) # Increasing the size of the graph to accommodate all plots properly.
fig = plt.figure(constrained_layout=True)
fig.suptitle(f'The big title of the set')
# Attempt 1:
plt.xlabel('The X-Label for all graphs')
plt.ylabel('The Y-Label for all graphs')
# Attempt 2:
axes = fig.gca()
axes.set_xlabel('The X-Label for all graphs', loc='left') # I was wondering if the placement was problematic.
axes.set_ylabel('The Y-Label for all graphs', visible=True) # Or maybe the visibility?
Rows = [four, parts, of, data]
Columns = [four, parts, of, values, per, row]
subfigs = fig.subfigures(nrows=len(Rows), ncols=1)
for row, subfig in zip(Rows, subfigs):
subfig.suptitle(f'Title of row {row}')
subsubfigs = subfig.subplots(nrows=1, ncols=len(Columns))
for column, subsubfig in zip(Columns, subsubfigs):
subsubfig.plot(row_data, column_data)
subsubfig.set_title(f'Title of single graph {row}:{column}')
plt.show()
</code></pre>
<p>However, I get no labels.
I do succeed in adding labels to each of the subplots using <code>subsubfig.set_xlabel</code> and <code>subsubfig.set_ylabel</code>, but I wonder why I couldn't do this on the big grid.</p>
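<p>For what it's worth, a minimal runnable sketch of such a grid with figure-level labels. This assumes Matplotlib >= 3.4, where <code>Figure.supxlabel</code>/<code>supylabel</code> exist; <code>plt.xlabel</code> targets an Axes rather than the Figure, which may explain why nothing figure-wide appeared:</p>

```python
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

fig = plt.figure(constrained_layout=True)
fig.suptitle("The big title of the set")
fig.supxlabel("The X-Label for all graphs")  # figure-level, Matplotlib >= 3.4
fig.supylabel("The Y-Label for all graphs")

subfigs = fig.subfigures(nrows=2, ncols=1)
for row, subfig in enumerate(subfigs):
    subfig.suptitle(f"Title of row {row}")
    axes = subfig.subplots(nrows=1, ncols=2)  # these are Axes objects
    for col, ax in enumerate(axes):
        ax.plot(np.arange(5), np.arange(5) * (row + col + 1))
        ax.set_title(f"Title of single graph {row}:{col}")

fig.savefig("grid_sketch.png")
```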
|
<python><matplotlib>
|
2023-03-01 18:01:00
| 2
| 1,110
|
Kerek
|
75,607,375
| 2,568,341
|
how to combine multiple dictionaries from list?
|
<p>I have a list of dictionaries like this one:</p>
<pre><code> my_dict = [{'year-0': '2022', 'dividend-0': ''},
{'year-1': '2021', 'dividend-1': '52.37'},
{'year-2': '2020', 'dividend-2': '44.57'},
{'year-3': '2019', 'dividend-3': '35.00'},
{'year-4': '2018', 'dividend-4': '24.00'},
{'year-5': '2017', 'dividend-5': '23.94'}]
</code></pre>
<p>How I can combine these dictionaries into one dictionary like that ?:</p>
<pre><code>{'year-0': '2022',
'dividend-0': '',
'year-1': '2021',
'dividend-1': '52.37',
'year-2': '2020',
'dividend-2': '44.57',
'year-3': '2019',
'dividend-3': '35.00',
'year-4': '2018',
'dividend-4': '24.00',
'year-5': '2017',
'dividend-5': '23.94'}
</code></pre>
<p>I can do it using a simple loop, but maybe there is a more elegant way ?</p>
<pre><code>x=dict()
for d in my_dict:
x.update(d)
</code></pre>
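<p>One loop-free alternative is a single dict comprehension over all the pairs (as with the loop, later dictionaries win if a key repeats):</p>

```python
my_dict = [{'year-0': '2022', 'dividend-0': ''},
           {'year-1': '2021', 'dividend-1': '52.37'},
           {'year-2': '2020', 'dividend-2': '44.57'}]

# one expression instead of the explicit loop
merged = {k: v for d in my_dict for k, v in d.items()}
print(merged)
```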
|
<python><list><dictionary>
|
2023-03-01 17:59:24
| 3
| 36,305
|
krokodilko
|
75,607,361
| 21,309,333
|
Custom function in pandas does not print out values as expected
|
<p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>def fillRow(df, key, type_):
print(f"Key (before if statements): {key}, Type: {type_}")
if type_ == 0:
replacement = df[key].median(skipna=True)
elif type_ == 1:
replacement = df[key].mean(skipna=True)
elif type_ == 2:
replacement = df[key].mode(dropna=True)
print(f"Replacement: {replacement}, Key: {key}, Type of filling: {type_}")
df[key] = df[key].fillna(replacement)
return df
</code></pre>
<p>This function is meant to kind of "generalize" different types of replacement (just to save a little bit of time)</p>
<pre class="lang-py prettyprint-override"><code>short_qe = fillRow(short_eq, "Mag", 0)
for i in ["Mo", "Dy", "Latitude", "Longitude"]:
short_eq = fillRow(short_eq, i, 2)
</code></pre>
<p>And this is an example of using it.</p>
<p>The output is following:</p>
<pre><code>Key (before if statements): Mag, Type: 0
Replacement: 6.5, Key: Mag, Type of filling: 0
Key (before if statements): Mo, Type: 2
Replacement: 0    11.0
dtype: float64, Key: Mo, Type of filling: 2
Key (before if statements): Dy, Type: 2
Replacement: 0    25.0
dtype: float64, Key: Dy, Type of filling: 2
Key (before if statements): Latitude, Type: 2
Replacement: 0    36.0
1    38.0
dtype: float64, Key: Latitude, Type of filling: 2
Key (before if statements): Longitude, Type: 2
Replacement: 0    36.1
dtype: float64, Key: Longitude, Type of filling: 2
</code></pre>
<p>Where the hell did "dtype" come from? What are those random numbers? Why does this work properly with "type 0" but not with "type 2"?</p>
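<p>For reference, <code>Series.mode()</code> returns a <em>Series</em> rather than a scalar (there can be ties), which is where the index and the <code>dtype: float64</code> line come from; <code>median()</code> and <code>mean()</code> return scalars, which is why type 0 behaves. A sketch of picking one scalar before <code>fillna</code>:</p>

```python
import numpy as np
import pandas as pd

s = pd.Series([36.0, 38.0, 36.0, 38.0, np.nan])
m = s.mode(dropna=True)
print(m)                  # a Series: the index and "dtype: float64" come from here
assert isinstance(m, pd.Series)

replacement = m.iloc[0]   # pick one scalar before calling fillna
filled = s.fillna(replacement)
print(filled.tolist())
```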
|
<python><pandas><function>
|
2023-03-01 17:57:18
| 1
| 365
|
God I Am Clown
|
75,607,274
| 5,452,378
|
Importing shared Python Module in Dataflow (Apache Beam) job
|
<p>I'm building out a series of data pipelines using Google Cloud's Dataflow tool.</p>
<p>This is my file structure:</p>
<pre><code>-- dataflow_jobs
|-- README.md
|-- cleanse_export
| |-- __init__.py
| |-- run_cleanse_export.py
| |-- setup.py
| `-- src
| |-- __init__.py
| `-- cleanse_export.py
|-- constants
| |-- __init__.py
| `-- constants.py
`-- utilities
|-- __init__.py
`-- data_transform_functions.py
</code></pre>
<p>The idea is that <code>constants</code> and <code>utilities</code> should be shared libraries used by other (future) jobs under <code>dataflow_jobs</code>.</p>
<p>Following <a href="https://github.com/apache/beam/tree/master/sdks/python/apache_beam/examples/complete/juliaset" rel="nofollow noreferrer">this example</a>, I built out a <code>setup.py</code> file that looks like this:</p>
<pre><code>from setuptools import find_packages, setup
setup(
name="dataflow_job",
version="0.0.1",
install_requires=["scrubadub==2.0.0"],
packages=find_packages()
)
</code></pre>
<p>In my main program (<code>cleanse_export.py</code>), I'm importing a couple of local packages:</p>
<pre><code>import time
import uuid
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
import sys
sys.path.append("../")
from constants.constants import (
DATAFLOW_RUNNER,
PROJECT,
REGION,
SETUP_FILE,
)
from utilities.data_transform_functions import (
cleanse_pii,
_logging,
)
</code></pre>
<p>I keep getting a <code>ModuleNotFound</code> error:
<code>ModuleNotFoundError: No module named 'utilities'</code>.</p>
<p>I'm trying to get <code>setup.py</code> to recognize and install my local python modules (<code>utilities</code> and <code>constants</code>) on the worker machines when the job launches.</p>
<p>I tried adding</p>
<pre><code>packages=find_packages() +
packages=find_packages(where='../constants') + packages=find_packages(where='../utilities')
</code></pre>
<p>To my <code>setup.py</code>'s <code>packages</code> parameter, but I'm getting the same result.</p>
<p>Why isn't <code>setup.py</code> recognizing the modules in directories above it, and how can I get it to recognize these local modules in the current file structure?</p>
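<p>A hedged sketch of why this happens: <code>find_packages()</code> only scans the directory containing <code>setup.py</code> and below, so a <code>setup.py</code> inside <code>cleanse_export/</code> can never see <code>constants/</code> or <code>utilities/</code>. Assuming the files can be rearranged, moving <code>setup.py</code> up to the <code>dataflow_jobs/</code> root makes every package discoverable, which this snippet emulates with a temporary tree:</p>

```python
import os
import tempfile
from setuptools import find_packages

# recreate the layout, with setup.py assumed moved to the dataflow_jobs/ root
root = tempfile.mkdtemp()
for pkg in ("cleanse_export", "cleanse_export/src", "constants", "utilities"):
    os.makedirs(os.path.join(root, pkg), exist_ok=True)
    open(os.path.join(root, pkg, "__init__.py"), "w").close()

# from the root, find_packages() sees the shared packages too
found = sorted(find_packages(where=root))
print(found)
```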
|
<python><dependencies><python-import><google-cloud-dataflow><apache-beam>
|
2023-03-01 17:48:06
| 2
| 409
|
snark17
|
75,607,256
| 3,713,236
|
Amazon Product Advertising API: How to translate Curl statement into Python?
|
<p>Amazon Product Advertising API: How to translate Curl statement into Python?</p>
<p>I got a signed curl request from <a href="https://webservices.amazon.com/paapi5/documentation/quick-start/using-curl.html" rel="nofollow noreferrer">this Amazon link</a>. The curl statement looks like this, sensitive info redacted:</p>
<pre><code>curl "https://webservices.amazon.com/paapi5/searchitems" \
-H "Host: webservices.amazon.com" \
-H "Content-Type: application/json; charset=UTF-8" \
-H "X-Amz-Date: 20230301T020028Z" \
-H "X-Amz-Target: com.amazon.paapi5.v1.ProductAdvertisingAPIv1.SearchItems" \
-H "Content-Encoding: amz-1.0" \
-H "User-Agent: paapi-docs-curl/1.0.0" \
-H "Authorization: SENSITIVE_INFO_REDACTED" \
-d "{\"Marketplace\":\"www.amazon.com\",\"PartnerType\":\"Associates\",\"PartnerTag\":\"STORE_NAME",\"Keywords\":\"kindle\",\"SearchIndex\":\"All\",\"ItemCount\":3,\"Resources\":[\"Images.Primary.Large\",\"ItemInfo.Title\",\"Offers.Listings.Price\"]}"
</code></pre>
<p>I translated the above into Python. This is my Python code below. However, I'm getting a <code>Response 401 error</code>:</p>
<pre><code>import requests

headers = {
'Host': 'webservices.amazon.com',
'Content-Type': 'application/json; charset=UTF-8',
'X-Amz-Date': '20230301T020028Z',
'X-Amz-Target': 'com.amazon.paapi5.v1.ProductAdvertisingAPIv1.SearchItems',
'Content-Encoding': 'amz-1.0',
'User-Agent': 'paapi-docs-curl/1.0.0',
'Authorization': 'SENSITIVE_INFO_REDACTED',
}
json_data = {
'Marketplace': 'www.amazon.com',
'PartnerType': 'Associates',
'PartnerTag': 'STORE_NAME',
'Keywords': 'kindle',
'SearchIndex': 'All',
'ItemCount': 3,
'Resources': [
'Images.Primary.Large',
'ItemInfo.Title',
'Offers.Listings.Price',
],
}
response = requests.post('https://webservices.amazon.com/paapi5/searchitems', headers=headers, json=json_data)
</code></pre>
|
<python><json><curl><python-requests><amazon-advertising-api>
|
2023-03-01 17:46:36
| 0
| 9,075
|
Katsu
|
75,607,074
| 386,861
|
Plotting several plots in matplotlib using a list of dicts
|
<p>I have a simple plot function:</p>
<pre><code>def plot_data(dataset="boys", baby_name="David"):
ax = dataset.T.sort_index().loc[:, baby_name].plot(legend = baby_name)
list_of_dict = [{"data": boys, "baby_name": "David",
"data": girls, "baby_name": "Susan",
"data": boys, "baby_name": "Colin",
"data": girls, "baby_name": "Frances"}]
for l in list_of_dict:
ax = plot_data(l['data'], l['baby_name'])
</code></pre>
<p>I can layer up different plots by writing</p>
<pre><code>ax = plot_data("boys", "David")
ax = plot_data("boys", "Susan")
</code></pre>
<p>But if I try to loop through the list and plot I only get one plot. Why is that?</p>
<p>The plot below shows what I'm trying to achieve.</p>
<p><a href="https://i.sstatic.net/U1KIY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U1KIY.png" alt="enter image description here" /></a></p>
|
<python><pandas><matplotlib>
|
2023-03-01 17:27:43
| 2
| 7,882
|
elksie5000
|
75,606,996
| 11,115,072
|
Efficient way of applying calculations on dataframes with different conditions and levels
|
<p>I have a dataframe in the following format, it is sorted by the dates for each item, which have a determined frequency:</p>
<pre class="lang-python prettyprint-override"><code>df = pd.DataFrame(
{
        "date": ["2020-01-01", "2020-02-01", "2020-03-01", "2020-01-01", "2020-02-01"],
"item_ID": ["a", "a", "a", "b", "b"],
"quantity" : [1, 2, 3, 5, 2],
"type": ["y", "y", "n", "y", "n"],
}
)
> df
date | item_ID | quantity | type |
2020-01-01 | a | 1 | y |
2020-02-01 | a | 2 | y |
2020-03-01 | a | 3 | n |
2020-01-01 | b | 5 | y |
2020-02-01 | b | 2 | n |
</code></pre>
<p>Whenever I find a type <code>"n"</code>, I need to subtract the quantity of this row from the previous quantity with type "y" (based on the date), but the quantity cannot be negative, so if the subtraction returns a negative value, it's set to 0 and the remainder is subtracted from the next previous quantity, and so on.</p>
<p>For example, on 2020-03-01 item_ID "a", we have a type "n" and 3 of quantity, so we check the previous date with type "y" (2020-02-01), and subtract the quantity:</p>
<p><em>2-3 = -1</em></p>
<p>as the result is negative, we set the quantity to 0 and carry the remainder (1):</p>
<pre><code>2020-02-01 | a | 0 | y |
</code></pre>
<p>then go to the previous date and repeat the process:</p>
<p><em>1-1 = 0</em></p>
<pre><code>2020-01-01 | a | 0 | y |
</code></pre>
<p>This must be done for each item, until there are no rows with type <code>"n"</code>. The desired output would be this:</p>
<pre><code> date | item_ID | quantity | type |
2020-01-01 | a | 0 | y |
2020-02-01 | a | 0 | y |
2020-01-01 | b | 3 | y |
</code></pre>
<p>I know I can achieve that by looping through the unique items (<code>for item in df["item_ID"].unique():...)</code>, then sorting the dates in descending order and checking each row to apply the criteria, but for a larger dataset (which is my case), that would be too much time consuming, so is there a more efficient way of achieving that same result?</p>
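<p>One possible vectorised approach, offered as a sketch under an assumption that holds in the example: every <code>"n"</code> row is dated after the <code>"y"</code> rows it consumes from. Then the per-item total of <code>"n"</code> quantities can be subtracted from a reverse (latest-first) cumulative sum of the <code>"y"</code> quantities, clipping at zero:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "date": ["2020-01-01", "2020-02-01", "2020-03-01", "2020-01-01", "2020-02-01"],
    "item_ID": ["a", "a", "a", "b", "b"],
    "quantity": [1, 2, 3, 5, 2],
    "type": ["y", "y", "n", "y", "n"],
})

# total "n" quantity to consume, per item
consumed = df[df["type"] == "n"].groupby("item_ID")["quantity"].sum()

out = df[df["type"] == "y"].copy()
# cumulative "y" quantity per item, counted backwards from the latest date
out["rev_cum"] = (out.sort_values("date", ascending=False)
                     .groupby("item_ID")["quantity"].cumsum())
# what's left of each row after consumption, bounded by [0, original quantity]
out["quantity"] = np.clip(out["rev_cum"] - out["item_ID"].map(consumed).fillna(0),
                          0, out["quantity"]).astype(int)
out = out.drop(columns="rev_cum")
print(out)
```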
|
<python><pandas><data-analysis>
|
2023-03-01 17:20:35
| 1
| 381
|
Gabriel Caldas
|
75,606,993
| 8,642,375
|
Azure Pronunciation Assessment SDK return wrong result compare with api call
|
<p>I'm using the Azure Speech SDK to do pronunciation assessment. It works fine when I use the REST API provided by Azure, but when I use the Speech SDK the result is not correct. I followed the sample from the <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py" rel="nofollow noreferrer">cognitive services speech sdk</a> repository.</p>
<p>Here is the code that I used for the SDK:</p>
<pre><code> def speech_recognition_with_pull_stream(self):
class WavFileReaderCallback(speechsdk.audio.PullAudioInputStreamCallback):
def __init__(self, filename: str):
super().__init__()
self._file_h = wave.open(filename, mode=None)
self.sample_width = self._file_h.getsampwidth()
assert self._file_h.getnchannels() == 1
assert self._file_h.getsampwidth() == 2
# assert self._file_h.getframerate() == 16000 #comment this line because every .wav file read is 48000
assert self._file_h.getcomptype() == 'NONE'
def read(self, buffer: memoryview) -> int:
size = buffer.nbytes
print(size)
print(len(buffer))
frames = self._file_h.readframes(len(buffer) // self.sample_width)
buffer[:len(frames)] = frames
return len(frames)
def close(self):
self._file_h.close()
speech_key = os.getenv('AZURE_SUBSCRIPTION_KEY')
service_region = os.getenv('AZURE_REGION')
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
# specify the audio format
wave_format = speechsdk.audio.AudioStreamFormat(samples_per_second=16000, bits_per_sample=16, channels=1)
# setup the audio stream
callback = WavFileReaderCallback('/Users/146072/Downloads/58638f26-ed07-40b7-8672-1948c814bd69.wav')
stream = speechsdk.audio.PullAudioInputStream(callback, wave_format)
audio_config = speechsdk.audio.AudioConfig(stream=stream)
# instantiate the speech recognizer with pull stream input
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config, language='en-US')
reference_text = 'We had a great time taking a long walk outside in the morning'
pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(
reference_text=reference_text,
grading_system=PronunciationAssessmentGradingSystem.HundredMark,
granularity=PronunciationAssessmentGranularity.Word,
)
pronunciation_assessment_config.phoneme_alphabet = "IPA"
pronunciation_assessment_config.apply_to(speech_recognizer)
speech_recognition_result = speech_recognizer.recognize_once()
print(speech_recognition_result.text)
# The pronunciation assessment result as a Speech SDK object
pronunciation_assessment_result = speechsdk.PronunciationAssessmentResult(speech_recognition_result)
print(pronunciation_assessment_result)
# The pronunciation assessment result as a JSON string
pronunciation_assessment_result_json = speech_recognition_result.properties.get(
speechsdk.PropertyId.SpeechServiceResponse_JsonResult
)
print(pronunciation_assessment_result_json)
return json.loads(pronunciation_assessment_result_json)
</code></pre>
<p>and here is the result from the SDK:</p>
<pre><code>"PronunciationAssessment": {
"AccuracyScore": 26,
"FluencyScore": 9,
"CompletenessScore": 46,
"PronScore": 19.8
},
</code></pre>
<p>and here is the code for the API call:</p>
<pre><code> def ackaud(self):
# f.save(audio)
# print('file uploaded successfully')
# a generator which reads audio data chunk by chunk
# the audio_source can be any audio input stream which provides read() method, e.g. audio file, microphone, memory stream, etc.
def get_chunk(audio_source, chunk_size=1024):
while True:
# time.sleep(chunk_size / 32000) # to simulate human speaking rate
chunk = audio_source.read(chunk_size)
if not chunk:
# global uploadFinishTime
# uploadFinishTime = time.time()
break
yield chunk
# build pronunciation assessment parameters
referenceText = 'We had a great time taking a long walk outside in the morning. '
pronAssessmentParamsJson = "{\"ReferenceText\":\"%s\",\"GradingSystem\":\"HundredMark\",\"Dimension\":\"Comprehensive\",\"EnableMiscue\":\"True\"}" % referenceText
pronAssessmentParamsBase64 = base64.b64encode(bytes(pronAssessmentParamsJson, 'utf-8'))
pronAssessmentParams = str(pronAssessmentParamsBase64, "utf-8")
subscription_key = os.getenv('AZURE_SUBSCRIPTION_KEY')
region = os.getenv('AZURE_REGION')
# build request
url = "https://%s.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=%s&usePipelineVersion=0" % (
region, 'en-US')
headers = {'Accept': 'application/json;text/xml',
'Connection': 'Keep-Alive',
'Content-Type': 'audio/wav; codecs=audio/pcm; samplerate=16000',
'Ocp-Apim-Subscription-Key': subscription_key,
'Pronunciation-Assessment': pronAssessmentParams,
'Transfer-Encoding': 'chunked',
'Expect': '100-continue'}
audioFile = open('/Users/146072/Downloads/58638f26-ed07-40b7-8672-1948c814bd69.wav', 'rb')
# audioFile = f
# send request with chunked data
response = requests.post(url=url, data=get_chunk(audioFile), headers=headers)
# getResponseTime = time.time()
audioFile.close()
# latency = getResponseTime - uploadFinishTime
# print("Latency = %sms" % int(latency * 1000))
return response.json()
</code></pre>
<p>and here is the result from the API:</p>
<pre><code>"AccuracyScore": 100,
"FluencyScore": 100,
"CompletenessScore": 100,
"PronScore": 100,
</code></pre>
<p>Am I doing anything wrong in the setup? Thanks a lot.</p>
|
<python><azure><speech-recognition><azure-cognitive-services>
|
2023-03-01 17:20:24
| 2
| 502
|
TRose
|
75,606,960
| 10,404,281
|
How do I add string values from List to DataFrame and recreate DF?
|
<p>I have a data frame and list like below.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'player_id': [298, 118, 108, 109, 168, 198, 116],
'date': ['2018-06-22', '2018-06-23', '2018-07-24', '2018-07-25',
'2019-06-22', '2019-06-25', '2019-07-25'],
'score': [-2, 1, 2, 3, 7, 8, 6]})
df.head()
player_id date score
298 2018-06-22 -2
118 2018-06-23 1
108 2018-07-24 2
109 2018-07-25 3
168 2019-06-22 7
L = ['ab','da','Ae','gf']
</code></pre>
<p>I want to create a single data frame with all of the values in the list.
For example, if I select 'ab', I want to add it to the above df in a new column called "newName".</p>
<p>This is what I'm doing right now.</p>
<pre class="lang-py prettyprint-override"><code>for x in range(len(L)):
    print(L[x])
    df = df.append({'newName': L[x]}, ignore_index=True)
df
</code></pre>
<p>which gives:</p>
<pre><code>player_id   date    score   newName
298.0   2018-06-22  -2.0    NaN
118.0   2018-06-23  1.0     NaN
108.0   2018-07-24  2.0     NaN
109.0   2018-07-25  3.0     NaN
168.0   2019-06-22  7.0     NaN
198.0   2019-06-25  8.0     NaN
116.0   2019-07-25  6.0     NaN
NaN     NaN         NaN     ab
NaN     NaN         NaN     ac
NaN     NaN         NaN     da
NaN     NaN         NaN     gf
</code></pre>
<p>But I want to create something like this:</p>
<pre><code>player_id   date    score   newName
298.0   2018-06-22  -2.0    ab
118.0   2018-06-23  1.0     ab
108.0   2018-07-24  2.0     ab
109.0   2018-07-25  3.0     ab
168.0   2019-06-22  7.0     ab
198.0   2019-06-25  8.0     ab
116.0   2019-07-25  6.0     ab
</code></pre>
<p>Next, I want to select the next item in the list and add it to the data frame:</p>
<pre><code>player_id   date    score   newName
298.0   2018-06-22  -2.0    ab
118.0   2018-06-23  1.0     ab
108.0   2018-07-24  2.0     ab
109.0   2018-07-25  3.0     ab
168.0   2019-06-22  7.0     ab
198.0   2019-06-25  8.0     ab
116.0   2019-07-25  6.0     ab
298.0   2018-06-22  -2.0    da
118.0   2018-06-23  1.0     da
108.0   2018-07-24  2.0     da
109.0   2018-07-25  3.0     da
168.0   2019-06-22  7.0     da
198.0   2019-06-25  8.0     da
116.0   2019-07-25  6.0     da
</code></pre>
<p>Like that, I need to add all the list items.</p>
<p>Is it possible to do this?</p>
<p>Any help would be greatly appreciated!
Thanks in advance!</p>
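<p>One loop- and <code>append</code>-free sketch of the desired block-wise repetition (<code>DataFrame.append</code> is deprecated in recent pandas anyway):</p>

```python
import pandas as pd

df = pd.DataFrame({'player_id': [298, 118],
                   'date': ['2018-06-22', '2018-06-23'],
                   'score': [-2, 1]})
L = ['ab', 'da', 'Ae', 'gf']

# one full copy of df per list value, stacked in list order
out = pd.concat([df.assign(newName=v) for v in L], ignore_index=True)
print(out)
```

<p>An alternative is <code>df.merge(pd.Series(L, name='newName'), how='cross')</code> (pandas >= 1.2), but that interleaves the list values per row instead of producing contiguous blocks.</p>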
|
<python><pandas><dataframe>
|
2023-03-01 17:17:09
| 1
| 819
|
rra
|
75,606,940
| 7,980,206
|
calculate sum of a column after groupby based on unique values of second column
|
<p>I have a <code>dataframe</code> with the columns <code>usr, gp2, gp3, id, sub_id, activity</code>:</p>
<pre><code>usr gp2 gp3 id sub_id activity
1 IN ASIA 1 1 1
1 IN ASIA 1 2 1
1 IN ASIA 2 9 0
2 IN ASIA 3 4 1
2 IN ASIA 3 5 1
2 IN ASIA 4 6 1
2 IN ASIA 4 7 0
2 IN ASIA 4 8 0
</code></pre>
<p>I want to aggregate the above dataframe by grouping on <code>usr, gp2, gp3</code> and calculate two columns: one is 'Account (id)', the number of unique <code>id</code>s in every group, and the other is 'Actuals (Activity)', the activity summed over unique <code>id</code>s, i.e. each <code>id</code> contributes at most once.</p>
<p><code>for example, if id = 1, the activity sum would be 1 not 2</code></p>
<p>Expected output:</p>
<pre><code>usr gp2 gp3  id  activity
1   IN  ASIA 2   1
2   IN  ASIA 2   2
</code></pre>
<p>This is how far I got:</p>
<pre><code>df.groupby(['usr', 'gp2', 'gp3']).agg({'id': pd.Series.nunique, 'activity': LOGIC_REQUIRED})
</code></pre>
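<p>One possible reading, sketched: collapse to one row per <code>id</code> first, treating an <code>id</code> as active if any of its rows is active (which matches the expected output), then aggregate per group:</p>

```python
import pandas as pd

df = pd.DataFrame({'usr': [1, 1, 1, 2, 2, 2, 2, 2],
                   'gp2': ['IN'] * 8, 'gp3': ['ASIA'] * 8,
                   'id': [1, 1, 2, 3, 3, 4, 4, 4],
                   'sub_id': [1, 2, 9, 4, 5, 6, 7, 8],
                   'activity': [1, 1, 0, 1, 1, 1, 0, 0]})

# one row per id: an id counts as active if any of its rows is active
per_id = df.groupby(['usr', 'gp2', 'gp3', 'id'], as_index=False)['activity'].max()

# then count ids and sum the per-id activity
out = (per_id.groupby(['usr', 'gp2', 'gp3'], as_index=False)
             .agg(id=('id', 'nunique'), activity=('activity', 'sum')))
print(out)
```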
|
<python><pandas>
|
2023-03-01 17:14:11
| 2
| 717
|
ggupta
|
75,606,870
| 3,541,631
|
Downloading from web using threads(inside a separate process) and multiprocessing queues
|
<p>I have the following class to download from web:</p>
<pre><code>class Downloader:
SENTINEL = END_QUEUE_SENTINEL
def __init__(self, to_download, downloaded):
self.to_download = to_download
self.downloaded = downloaded
self.mutex = Lock()
self._stop_workers = False
@staticmethod
def _write_to_file(path, content):
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path, "wb") as f:
f.write(content)
def _set_item(self, response: HTTPResponse, item):
if response.status == 200:
self._write_to_file(item.download_path, content=response.read())
return item
def _download_item(self):
while not self._stop_workers:
            item = self.to_download.get()
if item == self.SENTINEL:
print("sentinel received")
self.mutex.acquire()
self._stop_workers = True
print("self._stop_workers becomes True")
self.mutex.release()
print("mutex_released")
print(self.to_download.qsize())
break
req = urlrequest.Request(item.url)
response = urlrequest.urlopen(req)
item = self._set_item(response, item)
self.downloaded.put(item)
def download_items(self, download_workers=50):
threads = [Thread(target=self._download_item) for _ in range(download_workers)]
for t in threads:
t.start()
for t in threads:
t.join()
print("workers stopped")
self._stop_workers = False
self.downloaded.put(self.SENTINEL)
</code></pre>
<p><code>to_download</code> and <code>downloaded</code> are multiprocessing Queues.</p>
<p>A different process is adding data to <code>to_download</code> queue.</p>
<pre><code>downloader = Downloader(to_download,downloaded)
processes = [Process(target=add_item_to_download),
Process(target=downloader.download_items,
kwargs={"download_workers": 10})]
for p in processes:
p.start()
for p in processes:
p.join()
</code></pre>
<p><code>item</code> is an object that has 2 attributes, url and download_path.</p>
<p>After a while, the first process sends a sentinel into <code>Downloader</code>. The sentinel is received, and the following is printed:</p>
<pre><code>sentinel received
self._stop_workers becomes True
mutex_released
0    # qsize
</code></pre>
<p>and then nothing happens; the threads hang, and even with PyCharm debugging I can't find out why this is happening.</p>
<p>The expectation was that the thread <code>join</code>s would finish, "workers stopped" would be printed, and the sentinel would be added to the <code>downloaded</code> queue.</p>
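<p>One likely explanation for the hang, offered as a guess: with 10 worker threads and a single sentinel, only the thread that happens to read the sentinel breaks out of its loop; the other nine stay blocked in <code>to_download.get()</code>, so the <code>join()</code> calls never return. A common fix is to put the sentinel back so it cascades through every worker, sketched here with <code>queue.Queue</code> (the pattern is the same for <code>multiprocessing.Queue</code>):</p>

```python
import queue
import threading

SENTINEL = object()

def worker(q, results):
    while True:
        item = q.get()
        if item is SENTINEL:
            q.put(SENTINEL)   # pass it on so the remaining workers also wake up
            break
        results.append(item * 2)

q = queue.Queue()
results = []
threads = [threading.Thread(target=worker, args=(q, results)) for _ in range(10)]
for t in threads:
    t.start()
for n in range(5):
    q.put(n)
q.put(SENTINEL)               # a single sentinel now stops every worker
for t in threads:
    t.join()                  # returns instead of hanging
print(sorted(results))        # -> [0, 2, 4, 6, 8]
```

<p>An equivalent alternative is to enqueue one sentinel per worker and not re-queue it.</p>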
|
<python><python-3.x><python-3.8>
|
2023-03-01 17:07:19
| 1
| 4,028
|
user3541631
|
75,606,835
| 2,908,017
|
How to display Image on a Python FMX GUI App?
|
<p>I've made a small Form App using the DelphiFMX GUI Library for Python, but I'm not sure how to add or display an <code>Image</code> onto the <code>Form</code>. Here's my current code:</p>
<pre><code>from delphifmx import *
class frmMain(Form):
def __init__(self, owner):
self.Caption = 'My Form'
self.Width = 800
self.Height = 500
self.imgPlay = Image(self)
self.imgPlay.Parent = self
self.imgPlay.Position.X = 100
self.imgPlay.Position.Y = 100
self.imgPlay.Width = 300
self.imgPlay.Height = 300
def main():
Application.Initialize()
Application.Title = "My Application"
Application.MainForm = frmMain(Application)
Application.MainForm.Show()
Application.Run()
Application.MainForm.Destroy()
main()
</code></pre>
<p>I've tried:</p>
<pre><code>self.imgPlay.Bitmap.LoadFromFile('play.png')
</code></pre>
<p>but the error is <code>TypeError: "LoadFromFile" called with invalid arguments.</code></p>
<p>I've also tried:</p>
<pre><code>self.imgPlay.Picture.LoadFromFile('play.png')
</code></pre>
<p>The error is then <code>AttributeError: Error in getting property "Picture". Error: Unknown attribute</code></p>
<p>And lastly, I've tried simply saying:</p>
<pre><code>self.imgPlay.LoadFromFile('play.png')
</code></pre>
<p>With a similar error <code>AttributeError: Error in getting property "LoadFromFile". Error: Unknown attribute</code></p>
<p>What is the correct way to load an image file into the Image Component? How do I do this?</p>
|
<python><image><user-interface><firemonkey>
|
2023-03-01 17:02:34
| 1
| 4,263
|
Shaun Roselt
|
75,606,819
| 2,365,416
|
Pulumi Azure Native - How to manage multiple Azure Subscriptions using Python?
|
<p>I have 2 Azure subscriptions (say Subscription A and B) already created, and a service principal is configured.
I want to configure diagnostics in Subscription A so that I can send data to a workspace in Subscription B.
I'm using Pulumi as my IaC tool; how can I achieve this using the Pulumi Azure Native API?</p>
<p>All I could find was this module: <a href="https://www.pulumi.com/registry/packages/azure-native/api-docs/provider/" rel="nofollow noreferrer">https://www.pulumi.com/registry/packages/azure-native/api-docs/provider/</a> however it doesn't let you call any functions such as 'get_workspace'.</p>
<p>Any suggestions?</p>
<p>Using Pulumi version v3.55.0 and Pulumi Azure Native.</p>
|
<python><azure><pulumi><pulumi-azure><pulumi-python>
|
2023-03-01 17:01:02
| 1
| 2,333
|
Amrit
|
75,606,751
| 4,451,315
|
resample('D').interpolate() fills value, but resample('Y').interpolate() produces nans?
|
<p>Let's start with two dates, two days apart, resample daily, and interpolate:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: ts = pd.Series([1, 2], index=pd.DatetimeIndex(['1950-01-01', '1950-01-03']))
In [2]: ts.resample('D').interpolate()
Out[2]:
1950-01-01 1.0
1950-01-02 1.5
1950-01-03 2.0
Freq: D, dtype: float64
</code></pre>
<p>So far so good. Next, let's try doing it with two dates two years apart, and resample yearly:</p>
<pre class="lang-py prettyprint-override"><code>In [3]: ts = pd.Series([1, 2], index=pd.DatetimeIndex(['1950-01-01', '1952-01-01']))
In [4]: ts.resample('Y').interpolate()
Out[4]:
1950-12-31 NaN
1951-12-31 NaN
1952-12-31 NaN
Freq: A-DEC, dtype: float64
</code></pre>
<p>Why do I get NaNs instead of <code>[1., 1.5, 2.]</code>?</p>
|
<python><pandas><interpolation><pandas-resample>
|
2023-03-01 16:54:14
| 1
| 11,062
|
ignoring_gravity
|
75,606,699
| 7,052,505
|
Checking empty cells using openpyxl
|
<p>In Python, <code>str(None)</code> is <code>"None"</code>. But when reading a cell with openpyxl, an empty cell comes back as <code>None</code>, not an empty string, so when casting the empty cell we end up with an object with a different bool value.</p>
<p>This causes errors whenever a value needs to be checked for emptiness.</p>
<p>And if the value is written out, it won't show an empty string; it will show the text <code>None</code>.</p>
<p>I was not explicitly calling <code>str</code>; I was constructing a formula with an f-string like <code>f"=COUNTIF(A1:A100, {cell_contents})"</code>, where <code>cell_contents</code> was <code>None</code>. Currently I work around the problem by checking whether the value I read is falsy and assigning an empty string instead: <code>cell_contents = i or ""</code>. I'm expecting the cell values to be names, as I'm trying to count the occurrences of these names.
What would be a better implementation for this?</p>
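<p>One caveat with the <code>i or ""</code> idiom is that it also replaces legitimately falsy cell values such as <code>0</code>. An explicit <code>None</code> check avoids that; a minimal sketch of a hypothetical helper:</p>

```python
# hypothetical helper: coerce openpyxl's None (empty cell) into ""
def cell_text(value):
    return "" if value is None else str(value)

assert cell_text(None) == ""
assert cell_text("Alice") == "Alice"
assert cell_text(0) == "0"   # `value or ""` would wrongly turn this into ""
```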
|
<python><openpyxl>
|
2023-03-01 16:49:50
| 0
| 350
|
Monata
|
75,606,607
| 2,908,017
|
Initialize Form as Maximized in a Python FMX GUI App
|
<p>I'm making an app in the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI Library for Python</a>. I have a Form that is being created, but it's being created as a normal window. How do I create and maximize it immediately? It should start as maximized.</p>
<p>This is my current code:</p>
<pre><code>from delphifmx import *
class frmMain(Form):
def __init__(self, owner):
self.Caption = 'My Form'
self.Width = 600
self.Height = 500
def main():
Application.Initialize()
Application.Title = "My Application"
Application.MainForm = frmMain(Application)
Application.MainForm.Show()
Application.Run()
Application.MainForm.Destroy()
main()
</code></pre>
<p>Is there a property or function I can use?</p>
|
<python><user-interface><firemonkey><maximize-window>
|
2023-03-01 16:41:09
| 1
| 4,263
|
Shaun Roselt
|
75,606,583
| 1,459,607
|
Understanding the raft algorithm RequestVote RPC
|
<p>I'm trying to read page 4 of this paper: <a href="https://raft.github.io/raft.pdf" rel="nofollow noreferrer">https://raft.github.io/raft.pdf</a></p>
<p>I am trying to implement the RequestVote RPC, but I'm struggling to understand the second part of the "receiver implementation". "If votedFor is null" makes sense! However, the second part says "or candidateId, and candidate's log is at least as up-to-date as receiver's log, grant vote".</p>
<p>I feel like my interpretation below is mistaken.</p>
<pre><code>class LogEntry:
    term: int
    command: Command

log: list[LogEntry] = []

class RequestVote:
    term: int
    candidateId: str
    lastLogIndex: int
    lastLogTerm: int

def on_request_vote_received(vote: RequestVote) -> None:
    if not votedFor:
        send(success)
    elif log[vote.lastLogIndex].term >= lastLogTerm:
        send(success)
    else:
        send_failure()
</code></pre>
<p>Am I missing something here?</p>
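<p>For context, a hedged reading of §5.4.1 of the paper: "or candidateId" means <code>votedFor == candidateId</code> (a repeated request from the same candidate), and "at least as up-to-date" compares the candidate's <code>lastLogTerm</code>/<code>lastLogIndex</code> against the receiver's <em>own last entry</em> (not <code>log[vote.lastLogIndex]</code>), with terms compared first and log length breaking ties. A sketch of that comparison:</p>

```python
def candidate_up_to_date(last_log_term, last_log_index, log):
    """Raft 5.4.1: compare last entries; higher term wins, equal terms -> longer log wins."""
    my_last_term = log[-1]["term"] if log else 0
    my_last_index = len(log) - 1
    # Python tuple comparison gives exactly the term-then-length ordering
    return (last_log_term, last_log_index) >= (my_last_term, my_last_index)

log = [{"term": 1}, {"term": 2}]               # receiver's log: last term 2, last index 1
assert candidate_up_to_date(3, 0, log)         # higher last term: up to date
assert not candidate_up_to_date(1, 99, log)    # lower last term: not, despite longer log
assert candidate_up_to_date(2, 1, log)         # equal term and length: at least as up to date
assert not candidate_up_to_date(2, 0, log)     # equal term, shorter log: not
```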
|
<python><distributed-computing><raft>
|
2023-03-01 16:39:55
| 1
| 1,386
|
Ryan Glenn
|
75,606,435
| 4,403,073
|
Import Python logistic regression results into R using reticulate
|
<p>Because Python uses multiple cores and is more efficient with memory, I would like to run logistic regression in Python, but then import the summary table to R (as in the end is used in a Quarto document).</p>
<p>My reprex is below. I fail when I try to import the outcome from Python. Please help.
I would like to avoid a solution where I save the outcome from Python as a .csv and read it back into R.</p>
<p>Reprex:</p>
<pre><code>## libraries
require(tidyverse)
require(broom)
require(gt)
require(reticulate)
require(ISLR)
## creating a .csv example data set
## The data contains 1070 purchases where the customer either purchased Citrus Hill
## or Minute Maid Orange Juice. A number of characteristics of the customer and product are recorded.
test_data <- ISLR::OJ %>%
select(Purchase, PriceCH, SpecialCH, Store7)%>%
mutate(Purchase=if_else(Purchase=='CH',1,0))|>
as_tibble()|>
write_csv('data_test.csv')
## now comes the Python code
py_model <- py_run_string("
import pandas as pd
import statsmodels.api as sm
# Import the dataset
data = pd.read_csv('data_test.csv')
# Set up the independent and dependent variables
y = data['Purchase']
X = data[['PriceCH', 'SpecialCH', 'Store7']]
# Set up factor variables
X = pd.get_dummies(X, columns=['Store7'], drop_first=True)
# Add a constant term to the independent variables
X = sm.add_constant(X)
# Fit the logistic regression model
model = sm.Logit(y, X).fit()
# Show summary
print(model.summary())
")
## here I cannot select the model or its summary
## HELP HERE
py_result <- py_model$model.summary()
py_result|>
gt()
## expected outcome, similar to what is needed
glm(Purchase~PriceCH+SpecialCH+Store7,
data=test_data,
family = binomial())%>%
tidy()|>
gt()
</code></pre>
|
<python><r><reticulate>
|
2023-03-01 16:25:37
| 1
| 1,005
|
Justas Mundeikis
|
75,606,409
| 8,406,249
|
Applying a tensorflow function to part of a tensor which meets a condition
|
<p>I have a 2d matrix of values with distances from the origin (to be used for a 2D Fourier Transform):</p>
<pre><code>s = tf.linspace(0, 10, 100)
x_grid, y_grid = tf.meshgrid(s, s)
t = x_grid**2 + y_grid**2
</code></pre>
<p>I want to apply a tensorflow function to this tensor which has piecewise behaviour such that, for values below a threshold, one function is used and above another.</p>
<p>In numpy, this could be easily achieved like so:</p>
<pre><code>t[t <= threshold] = fun_1(t[t <= threshold])
t[t > threshold] = fun_2(t[t > threshold])
</code></pre>
<p>However, in TensorFlow, the same operation results in an error: <code>TypeError: only integer scalar arrays can be converted to a scalar index</code>.</p>
<p>I've spent a long time looking around the docs and web for a good solution to this and cannot find anything. Has anyone solved a similar problem?</p>
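A common TensorFlow pattern for this (not part of the question) is <code>tf.where(cond, fun_1(t), fun_2(t))</code>, which selects elementwise between two full-array evaluations and avoids boolean-mask assignment entirely. The same idea, sketched in NumPy terms with hypothetical branch functions:

```python
import numpy as np

def piecewise(t, threshold, fun_1, fun_2):
    # np.where / tf.where select elementwise: fun_1 where the condition
    # holds, fun_2 elsewhere; both branches are evaluated on the full array
    return np.where(t <= threshold, fun_1(t), fun_2(t))

t = np.array([1.0, 4.0, 9.0, 16.0])
out = piecewise(t, 5.0, np.sqrt, np.log)
```

With TensorFlow the call is identical in shape: `tf.where(t <= threshold, fun_1(t), fun_2(t))`. Because both branches run on every element, guard a branch's input first if it can produce NaN/Inf on out-of-domain values.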
|
<python><numpy><tensorflow>
|
2023-03-01 16:23:28
| 1
| 390
|
Seb Morris
|
75,606,326
| 5,426,539
|
Python Parallel Processing with ThreadPoolExecutor Gives Wrong Results with Keras Model
|
<p>I am using parallel processing using the <code>concurrent.futures.ThreadPoolExecutor</code> class to make multiple predictions using a Keras model for different sets of weights.</p>
<p>But the Keras model predictions using parallel processing are not correct.</p>
<p>This is a reproducible sample code that creates 10 sets of weights. Then, it calculates the model's errors with and without parallel processing.</p>
<p>I set a random seed to <code>NumPy</code> to make sure that there is no randomness across the different runs.</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow.keras
import numpy
import concurrent.futures
numpy.random.seed(1)
def create_rand_weights(model, num_models):
random_model_weights = []
for model_idx in range(num_models):
random_weights = []
for layer_idx in range(len(model.weights)):
layer_shape = model.weights[layer_idx].shape
if len(layer_shape) > 1:
layer_weights = numpy.random.rand(layer_shape[0], layer_shape[1])
else:
layer_weights = numpy.random.rand(layer_shape[0])
random_weights.append(layer_weights)
random_weights = numpy.array(random_weights, dtype=object)
random_model_weights.append(random_weights)
random_model_weights = numpy.array(random_model_weights)
return random_model_weights
def model_error(model_weights):
global data_inputs, data_outputs, model
model.set_weights(model_weights)
predictions = model.predict(data_inputs)
mae = tensorflow.keras.losses.MeanAbsoluteError()
abs_error = mae(data_outputs, predictions).numpy() + 0.00000001
return abs_error
input_layer = tensorflow.keras.layers.Input(3)
dense_layer1 = tensorflow.keras.layers.Dense(5, activation="relu")(input_layer)
output_layer = tensorflow.keras.layers.Dense(1, activation="linear")(dense_layer1)
model = tensorflow.keras.Model(inputs=input_layer, outputs=output_layer)
data_inputs = numpy.array([[0.02, 0.1, 0.15],
[0.7, 0.6, 0.8],
[1.5, 1.2, 1.7],
[3.2, 2.9, 3.1]])
data_outputs = numpy.array([[0.1],
[0.6],
[1.3],
[2.5]])
num_models = 10
random_model_weights = create_rand_weights(model, num_models)
ExecutorClass = concurrent.futures.ThreadPoolExecutor
thread_output = []
with ExecutorClass(max_workers=2) as executor:
output = executor.map(model_error, random_model_weights)
for out in output:
thread_output.append(out)
thread_output=numpy.array(thread_output)
print("Wrong Outputs using Threads")
print(thread_output)
print("\n\n")
correct_output = []
for idx in range(num_models):
error = model_error(random_model_weights[idx])
correct_output.append(error)
correct_output=numpy.array(correct_output)
print("Correct Outputs without Threads")
print(correct_output)
</code></pre>
<p>These are the correct model outputs without parallel processing:</p>
<pre class="lang-py prettyprint-override"><code>[6.78012372 3.42922212 4.96738673 6.64474774 6.83102609 4.41165734 3.34482099 7.6132908 7.97145654 6.98378612]
</code></pre>
<p>These are the wrong model outputs when using parallel processing:</p>
<pre class="lang-py prettyprint-override"><code>[3.42922212 3.42922212 6.90911246 6.64474774 4.41165734 3.34482099 7.6132908 7.97145654 6.98378612 6.98378612]
</code></pre>
<p>Even though I set a random seed for NumPy, the outputs using parallel processing still vary across different runs.</p>
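The duplicated and shuffled errors above are the signature of a race on shared mutable state: both workers call <code>set_weights</code>/<code>predict</code> on the <em>same</em> model object, so one thread can overwrite the weights before the other predicts. A minimal stand-in sketch (a toy "model" object, no Keras) showing the race-free pattern with a lock:

```python
import concurrent.futures
import threading

class ToyModel:
    """Stand-in for a Keras model: mutable weights plus a predict step."""
    def __init__(self):
        self.w = 0
    def set_weights(self, w):
        self.w = w
    def predict(self, x):
        return self.w * x

model = ToyModel()
lock = threading.Lock()

def model_error(w):
    # serialize the set_weights -> predict pair so no other thread can
    # swap the weights in between
    with lock:
        model.set_weights(w)
        return model.predict(10)

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as ex:
    results = list(ex.map(model_error, range(10)))
```

A lock serializes the prediction step and so gives up the parallelism; keeping one model instance per worker thread, or making the prediction function stateless by passing the weights in, preserves the speedup.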
|
<python><tensorflow><keras><parallel-processing><concurrent.futures>
|
2023-03-01 16:16:26
| 1
| 874
|
Ahmed Gad
|
75,606,279
| 3,425,104
|
Docker doesn't see .env in dockerfile when trying to run Django collectstatic
|
<p>I have multiple containers run in the order stated in docker-compose. In an app container, I need to execute file collectstatic. The thing is when I try to do it in dockerfile like that:</p>
<pre><code>RUN python manage.py collectstatic
</code></pre>
<p>it breaks down as it doesn't see env vars, although the env file is declared in docker-compose. When I omit that line (and the lines in other containers that depend on it) and instead log into the container and generate the files manually, everything runs smoothly.</p>
<p>How can I run it automatically?</p>
<pre><code>version: '3.8'
services:
db:
container_name: postgres
build:
context: .
dockerfile: Dockerfiles/postgres/Dockerfile
ports:
- 5432:5432
volumes:
- ./Dockerfiles/postgres/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
- ./Dockerfiles/postgres/postgres-data:/var/lib/postgresql/data
web:
container_name: django_app
build:
context: .
dockerfile: Dockerfiles/app/Dockerfile
command: python manage.py runserver 0.0.0.0:8000
working_dir: /app
env_file:
- .env
volumes:
- ./:/app/
depends_on:
- db
nginx:
container_name: django_nginx
build:
context: .
dockerfile: Dockerfiles/nginx/Dockerfile
ports:
- 1337:80
working_dir: /app
depends_on:
- web
</code></pre>
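For context (a sketch, not tied to the compose file above): <code>env_file</code> injects variables only at <em>run</em> time, while <code>RUN</code> executes at <em>build</em> time, so <code>manage.py</code> sees none of them during the image build. The usual options are explicit build args, or deferring <code>collectstatic</code> to container start. The variable name below is a hypothetical example:

```dockerfile
# Option 1: build-time args (values are baked into the image — avoid for secrets)
ARG DJANGO_SECRET_KEY
ENV DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
RUN python manage.py collectstatic --noinput

# Option 2 (more common): defer to run time, when env_file is available,
# e.g. in docker-compose.yml:
#   command: sh -c "python manage.py collectstatic --noinput && python manage.py runserver 0.0.0.0:8000"
```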
|
<python><django><docker><docker-compose><dockerfile>
|
2023-03-01 16:13:02
| 1
| 2,288
|
Zbyszek Kisły
|
75,606,130
| 6,296,919
|
create new column with incremental values
|
<p>I have the result set below, which was produced by the following code. I need to add a new column GROUP_ID to this result.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv ('dups_check_group_v1.csv',encoding= 'unicode_escape',usecols= ['ID','ENTITY_NAME','ENTITY_VALUE','SECTION_GROUP','DOC_ID'])
mask = df['SECTION_GROUP'].isna()
rest = df[mask]
out = pd.concat([d for _, g in df[~mask].groupby('SECTION_GROUP')
for d in [g, rest]])
print(out.sort_values('DOC_ID'))
</code></pre>
<p>Result</p>
<pre><code> ID ENTITY_NAME ENTITY_VALUE SECTION_GROUP DOC_ID
0 1 dNumber U220059090(C) GROUP 1 40
1 2 tDate 6-Dec-22 GROUP 1 40
4 5 sCompany bp NaN 40
2 3 dNumber U220059090(C) GROUP 2 40
3 4 tDate 6-Dec-22 GROUP 2 40
4 5 sCompany bp NaN 40
5 6 dNumber U220059090(C) GROUP 1 42
6 7 tDate 6-Dec-22 GROUP 1 42
9 10 sCompany bp NaN 42
7 8 dNumber U220059090(C) GROUP 2 42
8 9 tDate 6-Dec-22 GROUP 2 42
9 10 sCompany bp NaN 42
</code></pre>
<p>What I am looking to achieve with GROUP_ID is shown below. Any help is really appreciated.</p>
<p><a href="https://i.sstatic.net/DSUP2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DSUP2.png" alt="enter image description here" /></a></p>
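One common sketch for this (assuming, since the target image is not reproduced here, that GROUP_ID should simply number each (DOC_ID, SECTION_GROUP) combination, leaving rows with a missing SECTION_GROUP blank) uses <code>groupby(...).ngroup()</code> on a reduced example frame:

```python
import pandas as pd

df = pd.DataFrame({
    "DOC_ID": [40, 40, 40, 40, 42, 42, 42],
    "SECTION_GROUP": ["GROUP 1", "GROUP 1", None, "GROUP 2",
                      "GROUP 1", "GROUP 2", None],
})

# ngroup numbers each group; rows whose key is missing are marked
# NaN (or -1 on older pandas), so guard them out before shifting to 1-based
codes = df.groupby(["DOC_ID", "SECTION_GROUP"]).ngroup()
df["GROUP_ID"] = (codes + 1).where(codes >= 0)
```

The result numbers the pairs 1..4 in sorted key order and leaves NaN where SECTION_GROUP is missing; swap in whatever key columns the real grouping needs.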
|
<python><python-3.x><pandas><dataframe><group-by>
|
2023-03-01 16:01:54
| 1
| 847
|
tt0206
|
75,605,972
| 6,077,239
|
Polars: how to handle 'window expression not allowed in aggregation'?
|
<p>I have a task on hand where I want to perform regressions of transformed columns on a set of specified columns in a polars dataframe. The transformation and set of independent columns are all controlled by a <code>specs</code> dict.</p>
<p>Below is a simplified mini example for illustrating purposes.</p>
<pre><code>from functools import partial
import polars as pl
import numpy as np
def ols_fitted(s: pl.Series, yvar: str, xvars: list[str]) -> pl.Series:
df = s.struct.unnest()
y = df[yvar].to_numpy()
X = df[xvars].to_numpy()
fitted = np.dot(X, np.linalg.lstsq(X, y, rcond=None)[0])
return pl.Series(values=fitted, nan_to_null=True)
df = pl.DataFrame(
{
"date": [1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
"id": [1, 1, 1, 2, 2, 2, 2, 3, 3, 3],
"y": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
"g1": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
"g2": [1, 1, 1, 2, 2, 2, 3, 3, 3, 3],
"g3": [1, 1, 2, 2, 2, 3, 3, 4, 4, 4],
"x1": [2, 5, 4, 7, 3, 2, 5, 6, 7, 2],
"x2": [1, 5, 3, 4, 5, 6, 4, 3, 2, 1],
"x3": [3, 6, 8, 6, 4, 7, 5, 4, 8, 1],
}
)
specs = {
"first": {"yvar": "y", "gvars": ["g1"], "xvars": ["x1"]},
"second": {"yvar": "y", "gvars": ["g1", "g2"], "xvars": ["x1", "x2"]},
"third": {"yvar": "y", "gvars": ["g2", "g3"], "xvars": ["x2", "x3"]},
}
df.with_columns(
pl.struct(
(
pl.col(specs[specnm]["yvar"])
- pl.col(specs[specnm]["yvar"]).mean().over(specs[specnm]["gvars"])
).abs(),
*specs[specnm]["xvars"],
)
.map_elements(
partial(
ols_fitted, yvar=specs[specnm]["yvar"], xvars=specs[specnm]["xvars"]
)
)
.over("date", "id")
.alias(f"fitted_{specnm}")
for specnm in list(specs.keys())
)
</code></pre>
<p>However, I got the error below:</p>
<pre><code>InvalidOperationError: window expression not allowed in aggregation
</code></pre>
<p>I'm not sure why <code>over</code> is not supported within an aggregation context. It would be very convenient if it were, as in my example.</p>
<p>So, my real question is how to handle this in my particular case? And, if it cannot be handled, are there any alternative ways to make my code work in a systematic way?</p>
|
<python><python-polars>
|
2023-03-01 15:46:23
| 1
| 1,153
|
lebesgue
|
75,605,946
| 2,163,392
|
Extract features by a CNN with pytorch not working
|
<p>I am trying to use the RESNET-18 pre-trained CNN with Pytorch to act as a feature extractor for new images. To do that, I followed <a href="https://kozodoi.me/python/deep%20learning/pytorch/tutorial/2021/05/27/extracting-features.html" rel="nofollow noreferrer">this tutorial</a>.</p>
<p>Basically, I want the output of the penultimate layer, which would give me 512-D vectors as features for every image. The model summary is something like below:</p>
<pre><code> ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer3): Sequential(
(0): BasicBlock(
(conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=512, out_features=1000, bias=True)
)
</code></pre>
<p>I want the output of (avgpool), which I believe will contain the 512-D vectors.</p>
<p>According to the tutorial, here is how I do it:</p>
<pre><code>import numpy as np
import torch
import torchvision
from cv2 import imread, cvtColor, resize, COLOR_BGR2RGB

def prepare_image(image_name, device):
image = imread(image_name)
# convert BGR to RGB color format
image = cvtColor(image, COLOR_BGR2RGB).astype(np.float32)
# resize for the required size
image_resized = resize(image, (224, 224))
image_resized = np.expand_dims(image_resized, axis=0)
image_resized = image_resized.swapaxes(1, 3)
image_resized = torch.tensor(image_resized).float()
image_resized = image_resized.to(device)
return image_resized
def get_features(name):
def hook(model, input, output):
features[name] = output.detach()
return hook
DEVICE = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
model_ft = torchvision.models.resnet18(pretrained=True)
features = {}
model_ft = model_ft.to(DEVICE)
# avgpool is where I want to get features
model_ft.avgpool.register_forward_hook(get_features('feats'))
image_resized = prepare_image("image.jpg", DEVICE)
</code></pre>
<p>I believe the line below is doing what I want</p>
<pre><code># avgpool is where I want to get features
model_ft.avgpool.register_forward_hook(get_features('feats'))
</code></pre>
<p>However when I apply this model in an input image</p>
<pre><code>preds = model_ft(image_resized)
print(preds.shape)
</code></pre>
<p>It still outputs the last layer's output, i.e. a tensor with torch.Size([1, 1000]). Where am I wrong? Is there another way of doing it?</p>
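The behaviour described is actually expected: a forward hook does not change what the model returns. <code>model(x)</code> still yields the final [1, 1000] logits, while the hook stashes the intermediate tensor into <code>features['feats']</code> as a side effect. A library-free sketch of the same mechanism (hypothetical minimal classes, not torch):

```python
class Layer:
    """Minimal stand-in for an nn.Module that supports forward hooks."""
    def __init__(self, fn):
        self.fn = fn
        self.hooks = []
    def register_forward_hook(self, hook):
        self.hooks.append(hook)
    def __call__(self, x):
        out = self.fn(x)
        for hook in self.hooks:
            hook(self, x, out)   # side effect only; return value is unused
        return out

features = {}
def get_features(name):
    def hook(module, inputs, output):
        features[name] = output
    return hook

avgpool = Layer(lambda x: x * 2)   # stand-in for the 512-D layer
fc = Layer(lambda x: x + 1)        # stand-in for the final classifier
avgpool.register_forward_hook(get_features("feats"))

preds = fc(avgpool(3))  # "model" output: still the last layer's result
# preds is 7, while features["feats"] holds the intermediate 6
```

In the tutorial's code the same applies: run `preds = model_ft(image_resized)` once, then read `features['feats']` — for ResNet-18's avgpool that tensor has shape [1, 512, 1, 1]; flatten it to get the 512-D feature vector.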
|
<python><deep-learning><pytorch><computer-vision><conv-neural-network>
|
2023-03-01 15:44:36
| 1
| 2,799
|
mad
|
75,605,942
| 14,269,252
|
Combination of checkboxes in streamlit app
|
<p>I am building a Streamlit app. I know how to do something if option_1 is chosen, but what if a user chooses both option 1 and option 2? I want the code to run only when option_1 alone is selected, and not for any other selection, including a combination of option_1 and option_2.</p>
<pre><code>option_1 = st.sidebar.checkbox('df1', value=True)
option_2 = st.sidebar.checkbox('df2')
option_3 = st.sidebar.checkbox('df3')
if option_1:
    "do something"
</code></pre>
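Since the checkboxes are plain booleans, "only option_1 and nothing else" is just a boolean expression, sketched here without Streamlit:

```python
def only_first_selected(option_1: bool, option_2: bool, option_3: bool) -> bool:
    # True only when option_1 is ticked and every other box is unticked
    return option_1 and not (option_2 or option_3)
```

In the app this becomes `if option_1 and not (option_2 or option_3): ...`.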
|
<python><pandas><streamlit>
|
2023-03-01 15:44:13
| 1
| 450
|
user14269252
|
75,605,839
| 10,983,819
|
Create a new column using index in python
|
<p>I have a df with a double index (MultiIndex) in Python, where Asset and Scenario are the index levels obtained after using pandas.</p>
<pre><code>As Scen V1 v2 v3
0 1 34 45 78
0 2 30 95 58
0 3 14 -5 68
1 1 54 44 -8
1 2 34 45 78
1 3 39 40 96
2 1 34 45 68
2 2 94 -5 78
2 3 64 55 78
</code></pre>
<p>Additionally, I have two data frames with information related with the indexes.</p>
<pre><code>Index AssetName AssetExp
0 Asset1 X
1 Asset2 Y
2 Asset3 Z
</code></pre>
<pre><code>Index ScenarioName sensitivity
1 Scenario1 5
2 Scenario2 10
3 Scenario3 5
</code></pre>
<p>Using those data frames and the indexes, how can I get the following data frame?</p>
<pre><code> As Scen V1 v2 v3
Asset0 Scenario1 34 45 78
Asset0 Scenario2 30 95 58
Asset0 Scenario3 14 -5 68
Asset1 Scenario1 54 44 -8
Asset1 Scenario2 34 45 78
Asset1 Scenario3 39 40 96
Asset2 Scenario1 34 45 68
Asset2 Scenario2 94 -5 78
Asset2 Scenario3 64 55 78
</code></pre>
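One sketch of the mapping step (assuming the goal is to replace the integer index levels with the names from the two lookup frames; note the expected output labels the assets Asset0..Asset2 while the lookup table says Asset1..Asset3, so adjust the dicts to whichever is intended). `rename` with `level=` maps one index level at a time through a dict:

```python
import pandas as pd

# reduced example with the same MultiIndex shape as the question
df = pd.DataFrame(
    {"V1": [34, 30, 54], "v2": [45, 95, 44]},
    index=pd.MultiIndex.from_tuples(
        [(0, 1), (0, 2), (1, 1)], names=["As", "Scen"]
    ),
)
asset_names = {0: "Asset1", 1: "Asset2", 2: "Asset3"}
scenario_names = {1: "Scenario1", 2: "Scenario2", 3: "Scenario3"}

# rename maps each index level through its dict; level= picks which level
df = df.rename(index=asset_names, level="As")
df = df.rename(index=scenario_names, level="Scen")
```

In the real case the two dicts can be built from the lookup frames, e.g. `dict(zip(assets["Index"], assets["AssetName"]))` (column names assumed from the question).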
|
<python><pandas><dataframe><indexing>
|
2023-03-01 15:34:07
| 1
| 397
|
Eve Chanatasig
|
75,605,559
| 18,091,372
|
Finding the root nodes in a networkx.Graph() and paths between those nodes
|
<p>This should be a simple question, but one I am not sure how to answer since I am new to the networkx api.</p>
<p>The graphs I will be working with will not be directed and be acyclic.</p>
<p>I have created a nx.Graph(), added nodes and edges, can draw it with nx.draw(G), etc.</p>
<p>What I want to find are two things:</p>
<ol>
<li>What are the root nodes? i.e. those nodes with only a single edge</li>
<li>For each root node, what are the edges leading to each other root node?</li>
</ol>
<p>I have attached some sample code, which is based on my real case, creating a graph. Drawing the graph looks like:</p>
<p><a href="https://i.sstatic.net/WB8xv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WB8xv.png" alt="visualized" /></a></p>
<p>The answer to #1 would be the nodes ( 5, 6 ), ( 6, 7 ), and (14, 15 ). How can I use networkx to return those three nodes?</p>
<p>There are three possible paths between the nodes. One of the answers to #2 is:</p>
<p><strong>1.</strong> ( 5, 6 ), ( 4, 5 ), ( 3, 4 ), ( 1, 2 ), ( 11, 12 ), ( 12, 13 ), ( 13, 14 ), ( 14, 15 )</p>
<p><strong>2.</strong> ( 5, 6 ), ( 4, 5 ), ( 3, 4 ), ( 1, 2 ), ( 10, 11 ), ( 9, 10 ), ( 8, 9 ), ( 7, 8 ), ( 6, 7 )</p>
<p><strong>3.</strong> ( 14, 15 ), ( 13, 14 ), ( 12, 13 ), ( 11, 12 ), ( 1, 2 ), ( 10, 11 ), ( 9, 10 ), ( 8, 9 ), ( 7, 8 ), ( 6, 7 )</p>
<p>Clearly, the three paths could be reversed and still be part of a valid answer.</p>
<p>How can I use networkx to return those three paths?</p>
<pre><code>import networkx as nx
class PairToNXObject:
def __init__(self, pair):
self.pair = pair
def __eq__(self, other):
return hash( self ) == hash( other )
def __hash__(self):
return hash( f'( {self.pair[0]}//{self.pair[1]} )' )
def __repr__(self):
return f'( {self.pair[0]} {self.pair[1]} )'
coordinateList = [[(1, 2),
(3, 4),
(4, 5),
(5, 6)],
[(6, 7),
(7, 8),
(8, 9),
(9, 10),
(10, 11),
(1, 2)],
[(1, 2),
(11, 12),
(12, 13),
(13, 14),
(14, 15)]]
G = nx.Graph()
for coordinates in coordinateList:
nodes = []
for coordinate in coordinates:
node = PairToNXObject( coordinate )
G.add_node( node )
nodes.append( node )
for x in range( 0, len( nodes ) - 1 ):
G.add_edge( nodes[x], nodes[ x + 1 ] )
nx.draw(G, with_labels = True, node_color='#eeeeee' )
</code></pre>
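As a sketch of both parts on a small stand-in tree (degree-1 nodes are what the question calls roots; <code>nx.all_simple_paths</code> then enumerates node paths between each pair, which can be zipped into edge pairs):

```python
import itertools
import networkx as nx

G = nx.Graph()
nx.add_path(G, [1, 2, 3, 4])   # small stand-in graph with four leaves
nx.add_path(G, [2, 5])
nx.add_path(G, [2, 6, 7])

# 1) "root" nodes: nodes with exactly one incident edge (degree 1)
roots = [n for n, deg in G.degree() if deg == 1]

# 2) node paths between every pair of roots
paths = [
    p
    for a, b in itertools.combinations(roots, 2)
    for p in nx.all_simple_paths(G, a, b)
]
# edge form of each path: consecutive node pairs
edge_paths = [list(zip(p, p[1:])) for p in paths]
```

On the question's graph the same two comprehensions apply unchanged to the `PairToNXObject` nodes; since each node there already represents a coordinate pair, the node paths are effectively the edge sequences asked for.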
|
<python><graph><networkx>
|
2023-03-01 15:09:23
| 1
| 796
|
Eric G
|
75,605,522
| 3,204,942
|
ast or syntax tree for all the locals() and globals() in a python file
|
<p>How do I get the tree for all the <code>locals()</code> and <code>globals()</code> in a Python file, parsed into a string or a shell stdout? I am currently getting this for globals:</p>
<pre><code>{
'__name__': '__main__',
'__doc__': None,
'__package__': None,
'__loader__': <class '_frozen_importlib.BuiltinImporter' >,
'__spec__': None,
'__annotations__': { },
'__builtins__': <module 'builtins'(built -in) >
}
</code></pre>
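If the goal is per-scope name information rather than the runtime <code>globals()</code> dict, the <code>ast</code> module can produce it statically. A sketch (my own output scheme, not a standard format) collecting module-level names and each function's local names:

```python
import ast

source = """
x = 1
def f(a):
    y = a + x
    return y
"""

tree = ast.parse(source)

# module-level (global) assignments plus function/class definitions
globals_ = [
    t.id
    for node in tree.body
    if isinstance(node, ast.Assign)
    for t in node.targets
    if isinstance(t, ast.Name)
] + [n.name for n in tree.body if isinstance(n, (ast.FunctionDef, ast.ClassDef))]

# per-function locals: arguments plus names assigned inside the body
locals_ = {}
for fn in ast.walk(tree):
    if isinstance(fn, ast.FunctionDef):
        names = [a.arg for a in fn.args.args]
        names += [
            t.id
            for node in ast.walk(fn)
            if isinstance(node, ast.Assign)
            for t in node.targets
            if isinstance(t, ast.Name)
        ]
        locals_[fn.name] = names
```

For real code this would also need to handle `ast.AnnAssign`, tuple targets, comprehension scopes, and `global`/`nonlocal` declarations; the sketch shows the walk, not a complete scope analysis.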
|
<python><global><loader><local-variables><built-in>
|
2023-03-01 15:06:23
| 0
| 2,349
|
Gary
|
75,605,487
| 7,636,192
|
How to implement a multi form 'wizard' using forms
|
<p>I'm <em>not</em> using django-formtools, just built in forms.</p>
<p>I'm struggling to understand the correct flow to construct such a view.</p>
<p>I was trying to chain my logic like so</p>
<pre><code>def step_one(request, ...):
if request.method == 'POST':
form = step_one_form(data=request.POST)
else:
form = step_one_form()
if form.is_valid():
foo = request.POST.get('foo')
return step_two(request,foo)
    return render(request, 'wizard.html', context={'form': form})
def step_two(request, ...):
if request.method == 'POST':
form = step_two_form(data=request.POST)
else:
form = step_two_form()
if form.is_valid():
foo = request.POST.get('foo')
bar = request.POST.get('bar')
return step_three(request, foo, bar)
    return render(request, 'wizard.html', context={'form': form})
def get_view(request):
return step_one(request)
</code></pre>
<p>Edit: I realized that I can pass data=request.POST to my form constructor, and that kind of works, but because step_two etc. are always POST, the form always shows "this field is required" errors on the first rendering.</p>
<p>What is the preferred technique to not render the error on first load? Do you have to use a hidden field or something?</p>
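The "this field is required on first render" symptom comes from binding the form to a foreign POST payload: a form constructed with data= is <em>bound</em> and validates (producing errors), while one constructed with no data is <em>unbound</em> and renders blank fields. A framework-free sketch of that distinction (a hypothetical mini form, not Django's):

```python
class MiniForm:
    """Toy form: bound only when constructed with data, like Django forms."""
    required = ["foo"]

    def __init__(self, data=None):
        self.data = data
        self.is_bound = data is not None

    def errors(self):
        if not self.is_bound:
            return {}          # unbound forms never show validation errors
        return {f: "This field is required."
                for f in self.required if not self.data.get(f)}

first_load = MiniForm()                  # first render: unbound, no errors
submitted = MiniForm(data={"foo": ""})   # submit with empty field: errors
```

The Django pattern has the same shape: instantiate the next step's form with no data for its first render, and only bind request.POST when that step's own form was actually submitted (e.g. distinguished by a hidden step field or a separate URL per step).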
|
<python><django>
|
2023-03-01 15:04:17
| 1
| 759
|
Chris
|
75,605,273
| 9,532,692
|
PySpark in synapse notebook raises Py4JJavaError when using count() and save()
|
<p>I'm using Spark version 3.3.1.5.2 in a Synapse notebook. I first read parquet data from an Azure storage account and do transformations. Finally, when I want to check the size of the pyspark dataframe (final_df) by running <code>final_df.count()</code>, it throws the Py4JJavaError shown below:</p>
<pre><code>----> 1 final_df.count() File /opt/spark/python/lib/pyspark.zip/pyspark/sql/dataframe.py:804, in DataFrame.count(self) 794 def count(self) -> int: 795 """Returns the number of rows in this :class:`DataFrame`. 796 797 .. versionadded:: 1.3.0 (...) 802 2 803 """ --> 804 return int(self._jdf.count()) File ~/cluster-env/env/lib/python3.10/site-packages/py4j/java_gateway.py:1321, in JavaMember.__call__(self, *args) 1315 command = proto.CALL_COMMAND_NAME +\ 1316 self.command_header +\ 1317 args_command +\ 1318 proto.END_COMMAND_PART 1320 answer = self.gateway_client.send_command(command) -> 1321 return_value = get_return_value( 1322 answer, self.gateway_client, self.target_id, self.name) 1324 for temp_arg in temp_args: 1325 temp_arg._detach() File /opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py:190, in capture_sql_exception.<locals>.deco(*a, **kw) 188 def deco(*a: Any, **kw: Any) -> Any: 189 try: --> 190 return f(*a, **kw) 191 except Py4JJavaError as e: 192 converted = convert_exception(e.java_exception) File ~/cluster-env/env/lib/python3.10/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name) 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client) 325 if answer[1] == REFERENCE_TYPE: --> 326 raise Py4JJavaError( 327 "An error occurred while calling {0}{1}{2}.\n". 328 format(target_id, ".", name), value) 329 else: 330 raise Py4JError( 331 "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n". 332 format(target_id, ".", name, value)) Py4JJavaError: An error occurred while calling o4153.count. :
org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in
stage 31.0 failed 4 times, most recent failure: Lost task 5.3 in stage 31.0
(TID 249) (vm-3ac85017 executor 1):
org.apache.spark.sql.execution.QueryExecutionException: Encountered error while
reading file abfss://bronze@adlsblablah.dfs.core.windows.net/MongoDB/blah-blah-O-mongo/LDocument/parquet/2023/02/28/part-00000-bf66db6f-a839-45d0-8119-db3724ebdf63-c000.snappy.parquet.
Details: at
org.apache.spark.sql.errors.QueryExecutionErrors$.cannotReadFilesError(QueryExecutionErrors.scala:731)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:314)
at
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:135)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.hashAgg_doAggregateWithoutKey_0$(Unknown
Source) at
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at
org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:764)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) at
org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
at
org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:136) at
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750) Caused by:
org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in
block -1 in file abfss://bronze@adlsblablah.dfs.core.windows.net/MongoDB/blah-blah-O-mongo/LDocument/parquet/2023/02/28/part-00000-bf66db6f-a839-45d0-8119-db3724ebdf63-c000.snappy.parquet
at
org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:264)
at
org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
at
org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
at
org.apache.spark.sql.execution.datasources.RecordReaderIterator$$anon$1.hasNext(RecordReaderIterator.scala:61)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:135)
at
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:305)
... 18 more Caused by: java.lang.ClassCastException: Expected instance of group
converter but got
"org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter$ParquetStringConverter"
at org.apache.parquet.io.api.Converter.asGroupConverter(Converter.java:34) at
org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:267)
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:147) at
org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:109) at
org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:177)
at
org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:109)
at
org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:141)
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:230)
... 23 more Driver stacktrace: at
org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2682)
at
org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2618)
at
org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2617)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2617) at
org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1190)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1190)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1190)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2870)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2812)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2801)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
Caused by: org.apache.spark.sql.execution.QueryExecutionException: Encountered error while reading file abfss://bronze@adlsblablah.dfs.core.windows.net/MongoDB/blah-blah-O-mongo/LDocument/parquet/2023/02/28/part-00000-bf66db6f-a839-45d0-8119-db3724ebdf63-c000.snappy.parquet. Details:
at org.apache.spark.sql.errors.QueryExecutionErrors$.cannotReadFilesError(QueryExecutionErrors.scala:731)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:314)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:135)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.hashAgg_doAggregateWithoutKey_0$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:764)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file abfss://bronze@adlsblablah.dfs.core.windows.net/MongoDB/blah-blah-O-mongo/LDocument/parquet/2023/02/28/part-00000-bf66db6f-a839-45d0-8119-db3724ebdf63-c000.snappy.parquet
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:264)
at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
at org.apache.spark.sql.execution.datasources.RecordReaderIterator$$anon$1.hasNext(RecordReaderIterator.scala:61)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:135)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:305)
... 18 more
Caused by: java.lang.ClassCastException: Expected instance of group converter but got "org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter$ParquetStringConverter"
at org.apache.parquet.io.api.Converter.asGroupConverter(Converter.java:34)
at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:267)
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:147)
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:109)
at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:177)
at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:141)
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:230)
... 23 more
</code></pre>
<p>I've referred to <a href="https://stackoverflow.com/questions/41840296/pyspark-in-ipython-notebook-raises-py4jjavaerror-when-using-count-and-first">this post</a>, which had a similar problem, but the solution seems to be geared towards conda, which is not the virtual env I'm using (I'm on Synapse using an Azure Spark pool).</p>
<p>I wonder if setting a spark configuration something like this <code>spark.conf.set("spark.sql.caseSensitive" ,"true")</code> would help prevent the error.</p>
<h1>EDIT: 3/1/2023</h1>
<p>Per @CRAFTY DBA's suggestion, I'm sharing the code that I'm using to read data:</p>
<pre><code>account_name = f'adlsblablah{env}'
container_name = 'bronze'
relative_path = 'MongoDB/blah-blah-O-mongo/LDocument/parquet/*/*/*/'
adls_path = f'abfss://{container_name}@{account_name}.dfs.core.windows.net/{relative_path}'
parquet_path = f'{adls_path}/*'
</code></pre>
<p>Although the parquet files are partitioned, it's reading from each directory (specified by each date) rather than from individual files, so I don't think the error is thrown because of the probable cause @CRAFTY DBA mentioned in (3).</p>
<p>Also, per @Sharma's catch, the error seems to stem from files in the <code>.../2023/02/28/</code> directory, as shown in the error message: <code>abfss://bronze@adlsblablah.dfs.core.windows.net/MongoDB/blah-blah-O-mongo/LDocument/parquet/2023/02/28/part-00000-bf66db6f-a839-45d0-8119-db3724ebdf63-c000.snappy.parquet</code>. When I explicitly read a different directory (<code>.../2023/02/09</code>), <code>.save()</code> and <code>.count()</code> both ran successfully. I'm guessing that the partitioned file names in the directory need to follow a similar naming scheme, differing only in the part indicating each partition: 00000, 00001, ..., 00009.</p>
<p>Here is an image of the files in the directory that does NOT throw an error (top = <code>bronze@adlsblablah.dfs.core.windows.net/MongoDB/blah-blah-O-mongo/LDocument/parquet/2023/02/22/</code>) and the one that DOES throw an error (bottom = <code>bronze@adlsblablah.dfs.core.windows.net/MongoDB/blah-blah-O-mongo/LDocument/parquet/2023/02/28/</code>). Notice that the top directory's files share the same name apart from the part indicating each partition, while the bottom directory's files have completely different names and do not belong to the same partitioned entity (not sure if that is the right term).</p>
<h3>TOP</h3>
<p><a href="https://i.sstatic.net/hcyDq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hcyDq.png" alt="works fine" /></a></p>
<h3>BOTTOM</h3>
<p><a href="https://i.sstatic.net/uVyCf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uVyCf.png" alt="throws error" /></a></p>
<h2>EDIT: 3/2/2023</h2>
<p>My hypothesis above, that it writes successfully only when the files have similar names in the same directory, proved to be wrong, as I was able to read files with different names in the same directory and write them. I'm still confused about what this error means: <code>Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file </code>.</p>
|
<python><apache-spark><pyspark>
|
2023-03-01 14:48:47
| 1
| 724
|
user9532692
|
75,605,184
| 1,000,343
|
Left Align the Titles of Each Plotly Subplot
|
<p>I have a facet-wrapped group of Plotly Express bar plots, each with a title. How can I left-align each subplot's title with the left edge of its plot window?
<a href="https://i.sstatic.net/2ZUWs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ZUWs.png" alt="enter image description here" /></a></p>
<pre><code>import lorem
import plotly.express as px
import numpy as np
import random
items = np.repeat([lorem.sentence() for i in range(10)], 5)
response = list(range(1,6)) * 10
n = [random.randint(0, 10) for i in range(50)]
(
px.bar(x=response, y=n, facet_col=items, facet_col_wrap=4, height=1300)
.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1]))
.for_each_xaxis(lambda xaxis: xaxis.update(showticklabels=True))
.for_each_yaxis(lambda yaxis: yaxis.update(showticklabels=True))
.show()
)
</code></pre>
<p>I tried adding <code>.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1], x=0))</code> but it results in:
<a href="https://i.sstatic.net/iaozW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iaozW.png" alt="enter image description here" /></a></p>
|
<python><plotly><facet-grid>
|
2023-03-01 14:42:28
| 2
| 110,512
|
Tyler Rinker
|
75,605,178
| 528,369
|
Mypy type compatible with list, tuple, range, and numpy.array?
|
<p>The code</p>
<pre><code>import numpy as np
def join(v:list, delim:str = ","):
""" join the elements of v using the given delimiter """
return delim.join(str(x) for x in v)
print(join([0,1,2,3]))
print(join((0,1,2,3)))
print(join(range(4)))
print(join(np.array(range(4))))
</code></pre>
<p>runs, but mypy only likes the first call to <code>join</code> and says</p>
<pre><code>x.py:8: error: Argument 1 to "join" has incompatible type "Tuple[int, int, int, int]"; expected "List[Any]" [arg-type]
x.py:9: error: Argument 1 to "join" has incompatible type "range"; expected "List[Any]" [arg-type]
x.py:10: error: Argument 1 to "join" has incompatible type "ndarray[Any, dtype[Any]]"; expected "List[Any]" [arg-type]
Found 3 errors in 1 file (checked 1 source file)
</code></pre>
<p>Is there a different type annotation for argument <code>v</code> that will fix these errors?</p>
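Not from the question itself, but one common approach is to widen the annotation to an abstract type such as <code>Iterable</code>, which lists, tuples, and ranges all satisfy (and which should also cover <code>ndarray</code>, since it is iterable, though I have not verified that against every numpy stubs version) — a sketch:

```python
from collections.abc import Iterable

def join(v: Iterable[object], delim: str = ",") -> str:
    """Join the elements of any iterable using the given delimiter."""
    return delim.join(str(x) for x in v)

print(join([0, 1, 2, 3]))  # 0,1,2,3
print(join((0, 1, 2, 3)))  # 0,1,2,3
print(join(range(4)))      # 0,1,2,3
```

If the function needed indexing or `len()`, `Sequence` would be the narrower choice; `Iterable` is enough here because the body only iterates.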
|
<python><mypy>
|
2023-03-01 14:42:00
| 1
| 2,605
|
Fortranner
|
75,605,177
| 11,267,783
|
Spacing adjustment for gridspec subfigures
|
<p>I wanted to change the size of <code>hspace</code> on my figure without using <code>constrained_layout=True</code>.</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np
fig = plt.figure()
# fig = plt.figure(constrained_layout=True)
gs = gridspec.GridSpec(ncols=1, nrows=2, figure=fig, hspace=0.9)
subfigure_1 = fig.add_subfigure(gs[0, :])
subplots_1 = subfigure_1.subplots(1, 1)
subfigure_2 = fig.add_subfigure(gs[1, :])
subplots_2 = subfigure_2.subplots(1, 1)
plt.show()
</code></pre>
<p>With <code>constrained_layout=True</code> it works, but sometimes I face other issues with that setting that I want to avoid. (Moreover, it seems that <code>constrained_layout=True</code> disables <code>width_ratios</code> on GridSpec.)</p>
|
<python><matplotlib><matplotlib-gridspec>
|
2023-03-01 14:41:56
| 1
| 322
|
Mo0nKizz
|
75,605,089
| 14,385,099
|
Extract string before underscore in python string
|
<p>I have a list of strings in Python that looks something like <code>[AAA_X, BBB_X, CCC_X]</code>.</p>
<p>How can I efficiently extract the part of the string before the underscore?</p>
<p>Thank you!</p>
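For what it's worth, a minimal sketch using <code>str.split</code> with a max-split of 1, so only the part before the first underscore is kept:

```python
strings = ["AAA_X", "BBB_X", "CCC_X"]

# split("_", 1) stops at the first underscore, so "AAA_B_C" -> "AAA"
prefixes = [s.split("_", 1)[0] for s in strings]
print(prefixes)  # ['AAA', 'BBB', 'CCC']
```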
|
<python>
|
2023-03-01 14:33:18
| 2
| 753
|
jo_
|
75,604,996
| 1,330,719
|
Debouncing distributed task dependencies in python
|
<p>I would like to define some tasks that can be triggered dependent on messages from a broker (e.g. SQS). These tasks would be able to re-publish messages triggering other tasks.</p>
<p>While this is partially solved by libraries like celery/selinon, one extra requirement I have is that I would like to debounce certain expensive tasks.</p>
<p>Here is an example of how I would want to define this:</p>
<pre class="lang-py prettyprint-override"><code>@task(event_type="Task1")
def task1(message: Dict):
# Queue Task2 to be called
publish({"type": "Task2"})
return inexpensive_operation(message)
@task(event_type="Task2")
def task2(message: Dict):
return inexpensive_operation(message)
@task(event_type="Task3", run_after=["Task1", "Task2"], debounce=1000)
def task3(messages: List[Dict]):
return expensive(messages)
</code></pre>
<p>In the above code, Task3 runs after either Task1 or Task2 has finished. If Task1 and Task2 get called at the same time, rather than Task3 being called potentially three times, I would instead like Task3 to be called once, but be passed the various messages that triggered its run.</p>
<p>Lastly, these tasks may be called on distributed processes potentially requiring some coordinating DB.</p>
<p>I know I could probably implement my own debouncing consumer using a DB with atomic writes, but this feels like a non-trivial problem with many edge cases.</p>
<p>Are there any libraries that help coordinate these more complex dependency scenarios?</p>
|
<python>
|
2023-03-01 14:26:30
| 1
| 1,269
|
rbhalla
|
75,604,912
| 4,879,688
|
Function or dict comprehension as a Snakefile output
|
<p>I have a Snakemake rule with a highly redundant output (here simplified):</p>
<pre><code>rule redundant:
output:
        a_x = "prefix_a_x.ext",
        b_x = "prefix_b_x.ext",
        a_y = "prefix_a_y.ext",
        b_y = "prefix_b_y.ext"
run:
process_1(output.a_x)
process_2(output.a_y)
process_3(output.b_x)
process_4(output.b_y)
</code></pre>
<p>I can remove redundancy at the cost of the readability:</p>
<pre><code>rule less_readable:
output:
[f"prefix_{a}_{b}.ext"
for a in ["a", "b"]
for b in ["x", "y"]]
run:
process_1(output[0])
process_2(output[1])
process_3(output[2])
process_4(output[3])
</code></pre>
<p>(I do not want to risk use of the
<code>expand("prefix_{a}_{b}.ext", a=["a", "b"], b=["x", "y"])</code>
as the order of outputs matters and I fear it may change
in future Snakemake versions.)</p>
<p>I would like to have the best of two worlds, something like:</p>
<pre><code>rule more_readable:
output:
{f"{a}_{b}": f"prefix_{a}_{b}.ext"
for a in ["a", "b"]
for b in ["x", "y"]}
run:
process_1(output.a_x)
process_2(output.a_y)
process_3(output.b_x)
process_4(output.b_y)
</code></pre>
<p>Sadly, that does not work:</p>
<pre><code>$ snakemake -j 1 prefix_a_x.ext
Building DAG of jobs...
MissingRuleException:
No rule to produce prefix_a_x.ext (if you use input functions make sure that they don't raise unexpected exceptions).
</code></pre>
<p>How can I assign labels to automatically generated targets?</p>
<p>I use</p>
<pre><code>$ snakemake --version
7.14.2
</code></pre>
|
<python><python-3.x><python-3.7><snakemake>
|
2023-03-01 14:20:43
| 1
| 2,742
|
abukaj
|
75,604,761
| 10,416,012
|
How to parse custom operators inside a evaluable python string?
|
<p>Having a formula as a string like:</p>
<pre><code>str_forumla = "x > 0 AND y < 5 AND 'this AND that' in my_string"
</code></pre>
<p>Where the Python operator "&amp;" has been substituted by "AND" (but "AND" inside strings was not affected), how can I revert the operation so that we get the original formula:</p>
<pre><code>python_formula = "x > 0 & y < 5 AND 'this AND that' in my_string"
</code></pre>
<p>I already have a terrible-looking solution (that works for both ' and " strings), but as you can see it is not elegant at all, and I was wondering if there is an easier way of doing this, maybe using ast or some trick assigning the operator to the variable "AND" (as the goal is to evaluate the expression).</p>
<pre><code>formula = 'x > 0 AND y < 5 AND "this AND that" in my_string'
# invert the "AND" operators outside of quotes
inverted_formula = ''
in_quote = False
i = 0
while i < len(formula):
# check if we are inside a quote
if formula[i] == "'" or formula[i] == '"':
inverted_formula += formula[i]
in_quote = not in_quote
i += 1
continue
# check if we have an "AND" operator outside of quotes
if formula[i:i+3] == "AND" and not in_quote:
inverted_formula += "&"
i += 3
continue
# copy the current character to the inverted formula string
inverted_formula += formula[i]
i += 1
print(inverted_formula)
</code></pre>
<p>Another option (not fully functional yet, because it only supports ") would be with regex:</p>
<pre><code>pattern = re.compile(r'\bAND\b(?=([^"]*"[^"]*")*[^"]*$)')
inverted_formula = re.sub(pattern, '&', formula)
</code></pre>
<p>But I'm looking for an easier solution that works for single- and double-quoted strings, as I may need to change more operators like OR and NOT to |, ~, and I'm not sure that my solution will work in more complex cases. Any ideas?</p>
<p>Harder examples:</p>
<pre><code>f = "x > 0 AND y < 5 AND 'this AND tha\'t' in my_string AND 'this AND tha\'t' in my_string"
f2 = 'x > 0 AND y < 5 AND "this AND tha\'t" in my_string AND "this AND tha\'t" in my_string'
f3 = "'this AND tha\'t' in my_string AND 'this AND that' in my_string"
</code></pre>
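Not a complete answer for the escaped-quote examples above, but a somewhat tidier sketch of the same idea as the question's while-loop: let one regex alternation consume quoted strings first, so only bare operator tokens get replaced (the helper name and operator mapping are mine):

```python
import re

# Quoted strings match first, so AND/OR/NOT inside quotes survive;
# bare word-boundary tokens are mapped to their Python operators.
_PATTERN = re.compile(r"'[^']*'|\"[^\"]*\"|\b(?:AND|OR|NOT)\b")
_OPS = {"AND": "&", "OR": "|", "NOT": "~"}

def revert(formula: str) -> str:
    return _PATTERN.sub(
        lambda m: m.group(0) if m.group(0)[0] in "'\"" else _OPS[m.group(0)],
        formula,
    )

print(revert('x > 0 AND y < 5 AND "this AND that" in my_string'))
# x > 0 & y < 5 & "this AND that" in my_string
```

This still does not handle escaped quotes inside strings (the "harder examples"); for those, a real tokenizer such as the stdlib `tokenize` module would be a more robust route.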
|
<python><string><abstract-syntax-tree><python-re>
|
2023-03-01 14:04:39
| 2
| 2,235
|
Ziur Olpa
|
75,604,751
| 5,094,019
|
Execute equation from the elements of a list in python
|
<p>I have the following list and dataframe:</p>
<pre><code>import pandas as pd
data = [[5, 10, 40], [2, 15, 70], [6, 14, 60]]
df = pd.DataFrame(data, columns=['A', 'B', 'C'])
lst = [5, '*', 'A', '+', 'C']
</code></pre>
<p>And I would like to create code to execute the equation in <code>lst</code>, i.e. <code>5 * A + C</code>, returning the following result:</p>
<pre><code>data = [[65], [80], [90]]
dresult = pd.DataFrame(data, columns=['M'])
</code></pre>
<p>Is that possible?</p>
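Yes — one sketch (assuming the tokens always form a valid left-to-right pandas expression over the column names) is to join the tokens into a string and hand it to <code>DataFrame.eval</code>:

```python
import pandas as pd

data = [[5, 10, 40], [2, 15, 70], [6, 14, 60]]
df = pd.DataFrame(data, columns=["A", "B", "C"])
lst = [5, "*", "A", "+", "C"]

# Build "5 * A + C" and evaluate it row-wise against the columns
expr = " ".join(str(tok) for tok in lst)
dresult = df.eval(expr).to_frame("M")
print(dresult)
#     M
# 0  65
# 1  80
# 2  90
```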
|
<python><python-3.x><pandas>
|
2023-03-01 14:03:37
| 1
| 725
|
Thanasis
|
75,604,705
| 10,409,809
|
How do I plot box plot for multiple columns in pyspark?
|
<p>I have a pyspark dataframe (pyspark.sql.dataframe.DataFrame). I would like to plot the numeric columns in a boxplot to detect outliers.</p>
<p>I started by selecting only the numeric columns with:</p>
<pre><code>numeric_columns = [item[0] for item in df.dtypes if item[1].startswith('float')]
</code></pre>
<p>I tried to use Plotly, but then I saw that I first needed to convert to a pandas DataFrame. So, I did:</p>
<pre><code>df_pd = df.toPandas()
fig = px.box(df_pd[numeric_columns])
fig.show()
</code></pre>
<p>I got an error:
"Command result size exceeds limit: Exceeded 20971520 bytes (current = 20973190)"</p>
<p>I guess the dataset is too big to work with pandas. Could you help me? Is it possible to create the plots directly in a pyspark dataframe?</p>
<p>Thank you.</p>
|
<python><dataframe><pyspark>
|
2023-03-01 13:58:59
| 1
| 537
|
jessirocha
|
75,604,692
| 18,455,931
|
How to delete 30 days older files from a directory in Python databricks
|
<p>I have files in a directory whose layout is model/year(current year)/month(current month)/date(current date),
e.g. <code>model/2023/01/24/mm.parquet, model/25/02/1998/mm.parquet</code>.</p>
<p>There are dozens of files in my directory, and I want to delete the files that are more than 30 days old. For example, if the latest file is from today's date, 2023/03/01, then the file that is 30 days older, i.e. 2023/02/01, should be deleted.</p>
<pre><code>import datetime
from datetime import datetime
# now=str(datetime.now().date())
import pandas as pd
def cleanup_model_output( days_to_keep ):
path='C://Users/anubhav.sharma02/Downloads/21-01-1998.xlsx'
date_of_file= int(path.split('/')[-1][0:2])
now=str(datetime.now().date())
a=int(now.split(',')[0][-2:])
subtract=abs(a-date_of_file)
if subtract > days_to_keep:
os.remove('C://Users/anubhav.sharma02/Downloads/21-01-1998.xlsx')
</code></pre>
<p>Can anyone help me with this? I am not able to work out the logic.</p>
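Here is one possible sketch, assuming the layout is <code>model/YYYY/MM/DD/*.parquet</code> on a local or mounted filesystem (on Databricks you would swap the <code>pathlib</code> calls for <code>dbutils.fs</code> operations; the function name mirrors the question's):

```python
from datetime import datetime, timedelta
from pathlib import Path

def cleanup_model_output(root: str, days_to_keep: int = 30) -> None:
    """Delete files under root whose year/month/day directory parts
    encode a date more than `days_to_keep` days in the past."""
    cutoff = datetime.now() - timedelta(days=days_to_keep)
    for f in Path(root).glob("*/*/*/*"):
        year, month, day = f.parts[-4:-1]
        try:
            file_date = datetime(int(year), int(month), int(day))
        except ValueError:
            continue  # skip paths that do not encode a valid date
        if file_date < cutoff:
            f.unlink()
```

Parsing the date from the directory parts avoids relying on file modification times, which can change when files are copied.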
|
<python><pandas><databricks><azure-databricks>
|
2023-03-01 13:58:01
| 1
| 375
|
Anubhav
|
75,604,651
| 9,321,944
|
How to convert c++ std::map<enum, enum> to python type using pythonBinding
|
<p>I have a C++ library that needs to be used with a Python application. I have created a pythonBinding for that library.</p>
<p>header.h</p>
<pre class="lang-cc prettyprint-override"><code>enum class enumA {
app1,
app2,
app3};
enum class enumB {
kInit,
kRunning,
kFailed};
void registerApp(std::map<enumA, enumB>& appMap, std::function<void(std::string)> callback);
</code></pre>
<p>pythonBinding.hpp</p>
<pre class="lang-cc prettyprint-override"><code>class PyAppIface : public AppIface {
void registerApp(std::map<enumA, enumB>& appMap, std::function<void(std::string)> callback)
override {
PYBIND11_OVERRIDE_PURE(void, AppIface, registerApp, appMap, callback);
}
};
</code></pre>
<p>pythonBinding.cpp</p>
<pre class="lang-cc prettyprint-override"><code>static py::object pyCallback;
void callbackFunction(std::string str)
{
PyEval_InitThreads();
PyGILState_STATE state = PyGILState_Ensure();
    pyCallback(str);
PyGILState_Release(state);
}
void registerApp(AppIface& appIface, std::map<enumA, enumB>& appMap, py::object object){
pyCallback = object;
appIface.registerApp(appMap, callbackFunction);
}
PYBIND11_MODULE(AppBinding, bindModule) {
bindModule.doc() = "AppBinding";
py::enum_<enumA>(bindModule, "enumA")
.value("app1", enumA::app1)
.value("app2", enumA::app2)
.value("app3", enumA::app3)
.export_values();
py::enum_<enumB>(bindModule, "enumB")
.value("kInit", enumB::kInit)
.value("kRunning", enumB::kRunning)
.value("kFailed", enumB::kFailed)
.export_values();
py::class_<AppIface, PyAppIface, std::shared_ptr<AppIface>>(bindModule,
"AppIface")
.def(py::init<>())
.def("registerApp", &registerApp);
}
</code></pre>
<p>The <code>registerApp</code> function will be called by python application. How will <code>std::map<enumA, enumB>& appMap</code> be converted to a C++ type and vice versa? What am I missing here?</p>
|
<python><c++><python-3.x><python-bindings>
|
2023-03-01 13:55:03
| 0
| 371
|
deepan muthusamy
|
75,604,621
| 11,829,398
|
How to automatically run python files using python -m in VS Code
|
<p>I need to run files using <code>python -m foo.bar.baz</code> due to having absolute imports in other areas of the codebase.</p>
<p>In VS Code, when I click the top-right corner to <code>Run Python File</code>, I'd like it to automatically do <code>python -m foo.bar.baz</code> instead of <code>python /absolute/path/to/foo/bar/baz.py</code>. Is this possible?</p>
<p>Of course, this should work automatically for any file regardless of where it is, not just for <code>baz.py</code>.</p>
<p>This <a href="https://stackoverflow.com/questions/55173013/how-to-change-default-launch-settings-in-vs-code-debugger">question</a> suggests modifying <code>settings.json</code> but they don't provide an explicit example for how to do it exactly like this.</p>
|
<python><visual-studio-code>
|
2023-03-01 13:51:36
| 1
| 1,438
|
codeananda
|
75,604,590
| 6,296,919
|
merge group by data into single list or dataframe
|
<p>Hello, I am new to Python and not sure how to achieve the following.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">ID</th>
<th style="text-align: center;">ENTITY_NAME</th>
<th style="text-align: right;">ENTITY_VALUE</th>
<th style="text-align: right;">SECTION_GROUP</th>
<th style="text-align: right;">DOC_ID</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">dNumber</td>
<td style="text-align: right;">U220059090</td>
<td style="text-align: right;">GROUP 1</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">tDate</td>
<td style="text-align: right;">6-Dec-22</td>
<td style="text-align: right;">GROUP 1</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">dNumber</td>
<td style="text-align: right;">U220059090</td>
<td style="text-align: right;">GROUP 2</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">tDate</td>
<td style="text-align: right;">6-Dec-22</td>
<td style="text-align: right;">GROUP 2</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: center;">sCompany</td>
<td style="text-align: right;">bp</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: center;">dNumber</td>
<td style="text-align: right;">U220059090</td>
<td style="text-align: right;">GROUP 1</td>
<td style="text-align: right;">45</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: center;">tDate</td>
<td style="text-align: right;">6-Dec-22</td>
<td style="text-align: right;">GROUP 1</td>
<td style="text-align: right;">45</td>
</tr>
<tr>
<td style="text-align: left;">8</td>
<td style="text-align: center;">dNumber</td>
<td style="text-align: right;">U220059090</td>
<td style="text-align: right;">GROUP 2</td>
<td style="text-align: right;">45</td>
</tr>
<tr>
<td style="text-align: left;">9</td>
<td style="text-align: center;">tDate</td>
<td style="text-align: right;">6-Dec-22</td>
<td style="text-align: right;">GROUP 2</td>
<td style="text-align: right;">45</td>
</tr>
<tr>
<td style="text-align: left;">10</td>
<td style="text-align: center;">sCompany</td>
<td style="text-align: right;">bp</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">45</td>
</tr>
</tbody>
</table>
</div>
<p>I applied a group-by on the SECTION_GROUP column of the data below and got two different result sets as dataframes, shown below.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv ('dups_check_group_v1.csv',encoding= 'unicode_escape',usecols= ['ID','ENTITY_NAME','ENTITY_VALUE','SECTION_GROUP','DOC_ID'])
mask = df['SECTION_GROUP'].isna()
rest = df[mask]
for _, g in df[~mask].groupby('SECTION_GROUP'):
g = pd.concat([g, rest])
print(g)
</code></pre>
<p>Result</p>
<pre><code>ID ENTITY_NAME ENTITY_VALUE SECTION_GROUP DOC_ID
0 1 dNumber U220059090(C) GROUP 1 40
1 2 tDate 6-Dec-22 GROUP 1 40
5 6 dNumber U220059090(C) GROUP 1 45
6 7 tDate 6-Dec-22 GROUP 1 45
4 5 sCompany bp NaN 40
9 10 sCompany bp NaN 45
ID ENTITY_NAME ENTITY_VALUE SECTION_GROUP DOC_ID
2 3 dNumber U220059090(C) GROUP 2 40
3 4 tDate 6-Dec-22 GROUP 2 40
7 8 dNumber U220059090(C) GROUP 2 45
8 9 tDate 6-Dec-22 GROUP 2 45
4 5 sCompany bp NaN 40
9 10 sCompany bp NaN 45
</code></pre>
<p>I am trying to merge these two results into a single result, like below. Any help would be really appreciated.</p>
<pre><code>ID ENTITY_NAME ENTITY_VALUE SECTION_GROUP DOC_ID
0 1 dNumber U220059090(C) GROUP 1 40
1 2 tDate 6-Dec-22 GROUP 1 40
5 6 dNumber U220059090(C) GROUP 1 45
6 7 tDate 6-Dec-22 GROUP 1 45
4 5 sCompany bp NaN 40
9 10 sCompany bp NaN 45
2 3 dNumber U220059090(C) GROUP 2 40
3 4 tDate 6-Dec-22 GROUP 2 40
7 8 dNumber U220059090(C) GROUP 2 45
8 9 tDate 6-Dec-22 GROUP 2 45
4 5 sCompany bp NaN 40
9 10 sCompany bp NaN 45
</code></pre>
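If I understand the goal, the per-group pieces can be collected in the loop and stacked with <code>pd.concat</code> instead of being printed separately — a sketch with simplified data (only ID and SECTION_GROUP, so the shape of the idea is visible):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "ID": [1, 2, 3, 4, 5, 6],
    "SECTION_GROUP": ["GROUP 1", "GROUP 1", "GROUP 2", "GROUP 2",
                      np.nan, np.nan],
})

mask = df["SECTION_GROUP"].isna()
rest = df[mask]

# Append the NaN rows to every group, then stack all groups vertically
pieces = [pd.concat([g, rest]) for _, g in df[~mask].groupby("SECTION_GROUP")]
merged = pd.concat(pieces, ignore_index=True)
print(merged)
```

`groupby` sorts group keys by default, so GROUP 1 rows come before GROUP 2 rows, matching the desired output.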
|
<python><python-3.x><pandas><dataframe><group-by>
|
2023-03-01 13:48:58
| 1
| 847
|
tt0206
|
75,604,545
| 9,187,350
|
Share tree structure between 2 instances of dict
|
<p>I have the following <code>dict</code>:</p>
<pre class="lang-py prettyprint-override"><code>d1 = {"parent1": {"get": {"responses": {200: {"content": {"application/json": {}}}}}}}}
d2 = {"parent2": {"get": {"responses": {200: {"content": {"application/json": {}}}}}}}}
</code></pre>
<p>I am trying to avoid code repetition so I could do this:</p>
<pre class="lang-py prettyprint-override"><code>common = {"get": {"responses": {200: {"content": {"application/json": {}}}}}}
d1 = {"parent1": common}
d2 = {"parent2": common}
</code></pre>
<p>I would basically like to know what is your <em>pythonic</em> way to get rid of the <code>["get"]["responses"][200]["content"]["application/json"]</code>:</p>
<pre class="lang-py prettyprint-override"><code>v1 = d1["parent1"]["get"]["responses"][200]["content"]["application/json"]
v2 = d2["parent2"]["get"]["responses"][200]["content"]["application/json"]
</code></pre>
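One pythonic option is a small helper that walks the shared key path with <code>functools.reduce</code> (the path tuple and helper name here are my own):

```python
from functools import reduce

PATH = ("get", "responses", 200, "content", "application/json")

def get_path(d: dict, path=PATH):
    """Follow a sequence of keys down a nested dict."""
    return reduce(lambda node, key: node[key], path, d)

common = {"get": {"responses": {200: {"content": {"application/json": {}}}}}}
d1 = {"parent1": common}
d2 = {"parent2": common}

v1 = get_path(d1["parent1"])
v2 = get_path(d2["parent2"])
print(v1, v2)  # {} {}
```

One caveat with sharing <code>common</code>: <code>v1</code> and <code>v2</code> are the *same* object, so mutating one is visible through both; use <code>copy.deepcopy(common)</code> if the two trees must stay independent.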
|
<python><dictionary>
|
2023-03-01 13:44:58
| 2
| 20,003
|
renatodamas
|
75,604,540
| 453,851
|
Is there a good way to store logging calls for publishing as log messages later?
|
<p>I often find myself stuck in the early phases of a program wanting to write log messages before logging has been configured. This might happen if the log configuration itself is being loaded from some remote location or even logging about steps in logging setup.</p>
<p>Two options have come to mind but I'm not sure about either:</p>
<h3>Option: Save the calls but not actually invoke logging.</h3>
<p>It might be possible to capture some of the logging by using a sort of dummy logger that simply stores a list of <code>functools.partial</code> bindings to actual logger methods. When logging is ready the dummy logger can then rerun all the calls direct into the intended logger.</p>
<p>The problem with this approach is that some aspects of logging depend on the call stack. For example, with standard <code>logging</code> configured via <code>logging.basicConfig()</code>, <code>exc_info=True</code> would need to be converted into <code>exc_info=sys.exc_info()</code> at capture time.</p>
<p>If other, more exotic handlers have been configured such as <a href="https://pypi.org/project/structlog/" rel="nofollow noreferrer">structlog</a> they may try to extract more information out of the call stack than I'm aware.</p>
<h3>Option: Add a special handler and re-run the logs</h3>
<p>In theory this might work, but I'm not sure about re-running information through a logger after it was saved in a log handler, since I don't know what translation happened when it was saved. It feels like there is much more scope with this solution to scramble messages.</p>
<p>As far as I know, log handlers receive subtly different information to the parameters passed to a logger. I don't feel confident I could reverse that translation ready for reingesting.</p>
<p>Yet if I don't reingest the messages, how would I leverage the configuration of log levels, filtering, etc. once logging has been configured and the messages are ready to publish?</p>
<hr />
<p>Just to clarify what I need to do. I need to write log messages before <code>logging</code> has been configured. At the point these log messages are written, I don't even know what my future configuration will be.</p>
<p>Then I configure <code>logging</code> and I have a bunch of saved log messages that I want to write out. I want to write them as if they were written with <code>logging</code> fully configured.</p>
<p>My assumption is that the right way to do this is to ask <code>logging</code> to do it for me once it is configured.</p>
<hr />
<p>Am I missing something? Is there a good way to hold back python logging messages until logging has been fully configured?</p>
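For what it's worth, the standard library already ships something close to the first option: <code>logging.handlers.MemoryHandler</code> buffers whole <code>LogRecord</code> objects (with <code>exc_info</code> resolved into the record at call time) until a target handler is attached — a sketch:

```python
import logging
from logging.handlers import MemoryHandler

# Buffer everything; flushLevel above CRITICAL so nothing flushes early.
buffer = MemoryHandler(capacity=10_000, flushLevel=logging.CRITICAL + 1)
root = logging.getLogger()
root.addHandler(buffer)
root.setLevel(logging.DEBUG)

logging.info("emitted before logging is configured")

# ...later, once the real configuration is known:
real = logging.StreamHandler()
real.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
buffer.setTarget(real)
buffer.flush()              # replays the buffered records through `real`
root.removeHandler(buffer)
root.addHandler(real)
```

One limitation: level and filter decisions made before configuration cannot be fully redone — records suppressed by the pre-configuration level are never captured, so the buffering logger should run at DEBUG.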
|
<python><logging><python-logging>
|
2023-03-01 13:44:28
| 1
| 15,219
|
Philip Couling
|
75,604,236
| 12,845,199
|
Make sure that kilogram are converted to grams pandas
|
<p>I have the following series</p>
<pre><code>s = pd.Series({0: '1kg',
1: '500g',
2: '200g'})
</code></pre>
<p>What I want to do is make a very similar series in which everything uses the same unit of measurement, grams. So in this case, convert '1kg' to the integer 1000 and leave the gram values as plain integers. Note: the numeric part before kg can vary. Any ideas on how I could do this?</p>
<p>Wanted result</p>
<pre><code>{0:1000,
1:500,
2:200}
</code></pre>
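One sketch (assuming values are always a number followed by either "kg" or "g"): extract the number and unit with <code>Series.str.extract</code>, then scale kilograms by 1000:

```python
import pandas as pd

s = pd.Series({0: "1kg", 1: "500g", 2: "200g"})

# Split each value into its numeric part and its unit, then
# multiply by 1000 for kg and 1 for g.
parts = s.str.extract(r"(?P<value>[\d.]+)\s*(?P<unit>kg|g)")
grams = (parts["value"].astype(float)
         * parts["unit"].map({"kg": 1000, "g": 1})).astype(int)
print(grams.to_dict())  # {0: 1000, 1: 500, 2: 200}
```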
|
<python><pandas>
|
2023-03-01 13:15:35
| 4
| 1,628
|
INGl0R1AM0R1
|
75,604,189
| 7,920,004
|
f-string in Python's parameter
|
<p>Is it possible to use an <code>f-string</code> to modify the output of a Python default argument?
I want such functionality to rename one argument per function call.</p>
<p>When calling the function with the argument <code>xyz</code>, I would like to see it injected into <code>v</code> in <code>f"this_is_{v}"</code>.</p>
<p>Below pseudo code to give high-level idea of what I'm aiming at.</p>
<pre><code>def function(parameter=f"this_is_{v}"):
print(parameter)
function("first")
#prints this_is_first
function("second")
#prints this_is_second
</code></pre>
<p>Was thinking about alternative mechanism for below code:</p>
<pre><code>def function(v):
value=f"this_is_{v}"
print(value)
function("first")
function("second")
</code></pre>
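A direct f-string in the default cannot work, because default values are evaluated once, when the <code>def</code> line runs. A common substitute is a deferred template applied with <code>str.format</code> at call time — a sketch (returning the value as well, purely for illustration):

```python
def function(v: str, template: str = "this_is_{}") -> str:
    # str.format defers the interpolation to call time, unlike an
    # f-string, which would be evaluated once at definition time.
    value = template.format(v)
    print(value)
    return value

function("first")   # prints this_is_first
function("second")  # prints this_is_second
```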
|
<python>
|
2023-03-01 13:11:50
| 3
| 1,509
|
marcin2x4
|
75,604,139
| 8,741,781
|
Django/PyCharm - Unresolved attribute reference '' for class 'User'
|
<p>Just wondering if there's a way to get around this super annoying PyCharm warning without having to ignore it altogether.</p>
<p>I have a custom <code>User</code> model with extra fields and methods. When I try to access these attributes in a view (ex. <code>if self.request.user.type == 'foo':</code>) I'll get the warning <code>Unresolved attribute reference 'foo' for class 'User'</code> in the IDE.</p>
<p>It seems that PyCharm thinks that I'm still using <code>django.contrib.auth.User</code> even though I've specified <code>AUTH_USER_MODEL = 'user.User'</code> in the project settings.</p>
|
<python><django><pycharm>
|
2023-03-01 13:06:30
| 1
| 6,137
|
bdoubleu
|
75,604,098
| 11,555,352
|
FedEx REST API (Upload Documents): Invalid request: invalid input: Incoming request [code 1001]
|
<p>I am trying to create a simple Python script to upload a PDF document via the new FedEx REST API.</p>
<p>Below is my minimal code example, which can be used to replicate the issue by placing a file, <code>file.pdf</code>, next to the script and updating to your own FedEx REST API production credentials.</p>
<p>In running the code, I get the below error message. Any inputs are appreciated:</p>
<pre><code>{
"customerTransactionId": "ETD-Pre-Shipment-Upload_test1",
"errors": {
"code": "1001",
"message": "Invalid request: invalid input : Incoming Request"
}
}
</code></pre>
<p>My code is below:</p>
<pre><code># minimal class for upload docs test
class FedexLabelHelper:
def __init__(self, fedex_cred):
self.fedex_cred = fedex_cred
self.access_token = ""
return
# function for retrieving access_token
def get_access_token(self):
import json, requests
url = self.fedex_cred["url_prefix"] + "/oauth/token"
payload = {
"grant_type": "client_credentials",
"client_id": self.fedex_cred["key"],
"client_secret": self.fedex_cred["password"],
}
headers = {"Content-Type": "application/x-www-form-urlencoded"}
response = requests.request("POST", url, data=payload, headers=headers)
access_token = json.loads(response.text)["access_token"]
self.access_token = access_token
# function for uploading PDF document
def upload_pdf_document_fedex_script(self):
import requests, binascii
fileName = "file.pdf"
file_content = open(fileName, "rb").read()
file_content_b64 = binascii.b2a_base64(file_content)
file_content_b64.decode("cp1250")
url = self.fedex_cred["doc_url_prefix"] + "/documents/v1/etds/upload"
payload = {
"document": {
"workflowName": "ETDPreshipment",
"carrierCode": "FDXE",
"name": fileName,
"contentType": "application/pdf",
},
"meta": {
"shipDocumentType": "COMMERCIAL_INVOICE",
"originCountryCode": "DK",
"destinationCountryCode": "BE",
},
}
files = [
(
"attachment",
(fileName, "file_content_b64", "application/pdf"),
)
]
headers = {
"Authorization": f"Bearer {self.access_token}",
"x-customer-transaction-id": "ETD-Pre-Shipment-Upload_test1",
"Cookie": "XYZ",
}
response = requests.post(url, headers=headers, data=payload, files=files)
print(response.text)
# -----------------------
# setup minimal test of the FedEx Upload Documents REST API
fedex_cred = {
"production": {
"url_prefix": "https://apis.fedex.com",
"doc_url_prefix": "https://documentapi.prod.fedex.com",
"key": "XYZ",
"password": "XYZ",
"freight_account_number": "XYZ",
},
}
flh = FedexLabelHelper(fedex_cred["production"])
flh.get_access_token()
flh.upload_pdf_document_fedex_script()
</code></pre>
|
<python><rest><fedex>
|
2023-03-01 13:02:30
| 1
| 1,611
|
mfcss
|
75,604,092
| 3,116,231
|
Azure functions disable logging not successful
|
<p>When running my Azure function locally, I'm getting lots of logging messages which I'd like to disable, e.g.</p>
<pre><code>[2023-03-01T12:45:15.038Z] Response status: 200
[2023-03-01T12:45:15.038Z] Response headers:
[2023-03-01T12:45:15.038Z] 'Cache-Control': 'no-cache'
[2023-03-01T12:45:15.038Z] 'Pragma': 'no-cache'
[2023-03-01T12:45:15.038Z] 'Content-Type': 'application/json; charset=utf-8'
[2023-03-01T12:45:15.038Z] 'Expires': '-1'
[2023-03-01T12:45:15.038Z] 'x-ms-keyvault-region': 'germanywestcentral'
[2023-03-01T12:45:15.038Z] 'x-ms-client-request-id': 'xxx'
[2023-03-01T12:45:15.038Z] 'x-ms-request-id': 'xxx'
[2023-03-01T12:45:15.038Z] 'x-ms-keyvault-service-version': '1.9.713.1'
[2023-03-01T12:45:15.038Z] 'x-ms-keyvault-network-info': 'conn_type=Ipv4;addr=xxx;act_addr_fam=InterNetwork;'
[2023-03-01T12:45:15.038Z] 'X-Content-Type-Options': 'REDACTED'
[2023-03-01T12:45:15.038Z] 'Strict-Transport-Security': 'REDACTED'
[2023-03-01T12:45:15.038Z] 'Date': 'Wed, 01 Mar 2023 12:45:14 GMT'
[2023-03-01T12:45:15.038Z] 'Content-Length': '458'
</code></pre>
<p>or</p>
<pre><code>[2023-03-01T12:45:15.992Z] Request URL: 'https://xxx.file.core.windows.net/xxx/xxx.xlsx'
[2023-03-01T12:45:15.992Z] Request method: 'PUT'
[2023-03-01T12:45:15.992Z] Request headers:
[2023-03-01T12:45:15.992Z] 'x-ms-version': 'REDACTED'
[2023-03-01T12:45:15.992Z] 'x-ms-content-length': 'REDACTED'
[2023-03-01T12:45:15.992Z] 'x-ms-type': 'REDACTED'
[2023-03-01T12:45:15.992Z] 'x-ms-file-permission': 'REDACTED'
[2023-03-01T12:45:15.992Z] 'x-ms-file-attributes': 'REDACTED'
[2023-03-01T12:45:15.992Z] 'x-ms-file-creation-time': 'REDACTED'
[2023-03-01T12:45:15.992Z] 'x-ms-file-last-write-time': 'REDACTED'
[2023-03-01T12:45:15.992Z] 'Accept': 'application/xml'
[2023-03-01T12:45:15.993Z] 'User-Agent': 'azsdk-python-storage-file-share/12.10.1 Python/3.9.16 (macOS-13.1-arm64-arm-64bit)'
[2023-03-01T12:45:15.993Z] 'x-ms-date': 'REDACTED'
[2023-03-01T12:45:15.993Z] 'x-ms-client-request-id': 'xxx'
[2023-03-01T12:45:15.993Z] 'Authorization': 'REDACTED'
[2023-03-01T12:45:15.993Z] No body was attached to the request
[2023-03-01T12:45:16.100Z] Response status: 201
</code></pre>
<p>I tried:</p>
<pre><code> client = SecretClient(vault_url=KVUri, credential=credential, logging=False)
....
# upload file to share
file_client = ShareFileClient.from_connection_string(
conn_str='xxx',
share_name=container_name,
file_path=file_name,
logging_enable=False,
)
</code></pre>
<p>I still would like to be able to create logging output from within my app, e.g. <code>logging.info('example message')</code></p>
<p>But, how can I disable the above HTTP logging?</p>
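<p>A sketch of one common approach: raise the level of the Azure SDK's HTTP-trace logger while leaving the root logger (and your own <code>logging.info</code> calls) untouched. The logger name below is the one azure-core uses for these request/response dumps:</p>

```python
import logging

# Silence the Azure SDK's request/response dump without touching other loggers.
logging.getLogger("azure.core.pipeline.policies.http_logging_policy").setLevel(logging.WARNING)
# Or mute every azure.* logger at once:
logging.getLogger("azure").setLevel(logging.WARNING)

# Your own application logging keeps working:
logging.basicConfig(level=logging.INFO)
logging.info("example message")
```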
|
<python><logging><azure-functions><python-logging>
|
2023-03-01 13:01:42
| 1
| 1,704
|
Zin Yosrim
|
75,603,959
| 3,057,900
|
python find difference between two list of dictionary
|
<p>I'm trying to find the difference between two lists of dictionaries based on the values of certain keys:</p>
<pre><code>test_list1 = [{"name" : "name1", "number": "number1", "data": "data1"},
{"name" : "name2", "number": "number2", "data": "data2"},
{"name" : "name3", "number": "number3", "data": "data3"},
{"name" : "name5", "number": "number5", "data": "data5"}]
test_list2 = [{"name" : "name1", "number": "number1", "data": "data5"},
{"name" : "name2", "number": "number2", "data": "data2"},
{"name" : "name3", "number": "number3", "data": "data3"},
{"name" : "name4", "number": "number4", "data": "data4"}]
</code></pre>
<p>I'm trying to find the dicts whose "number" value appears in test_list1 but not in test_list2; for example,
<code>{"name" : "name5", "number": "number5", "data": "data5"}</code>
is in test_list1 but not in test_list2:</p>
<pre><code>res2 = [i for i in test_list1 for j in test_list2 if i.get("number") != j.get("number")]
</code></pre>
<p>the result is:</p>
<pre><code>[{'name': 'name1', 'number': 'number1', 'data': 'data1'}, {'name': 'name1', 'number': 'number1', 'data': 'data1'}, {'name': 'name1', 'number': 'number1', 'data': 'data1'}, {'name': 'name2', 'number': 'number2', 'data': 'data2'}, {'name': 'name2', 'number': 'number2', 'data': 'data2'}, {'name': 'name2', 'number': 'number2', 'data': 'data2'}, {'name': 'name3', 'number': 'number3', 'data': 'data3'}, {'name': 'name3', 'number': 'number3', 'data': 'data3'}, {'name': 'name3', 'number': 'number3', 'data': 'data3'}, {'name': 'name5', 'number': 'number5', 'data': 'data5'}, {'name': 'name5', 'number': 'number5', 'data': 'data5'}, {'name': 'name5', 'number': 'number5', 'data': 'data5'}, {'name': 'name5', 'number': 'number5', 'data': 'data5'}]
</code></pre>
<p>How can I get the dicts whose "number" is in test_list1 but not in test_list2, like the following:</p>
<pre><code>{"name" : "name5", "number": "number5", "data": "data5"}
</code></pre>
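<p>A sketch of one way to do it: build a set of the "number" values in test_list2 once, then keep only the dicts from test_list1 whose number is missing (the lists below repeat the sample data above):</p>

```python
test_list1 = [{"name": "name1", "number": "number1", "data": "data1"},
              {"name": "name2", "number": "number2", "data": "data2"},
              {"name": "name3", "number": "number3", "data": "data3"},
              {"name": "name5", "number": "number5", "data": "data5"}]
test_list2 = [{"name": "name1", "number": "number1", "data": "data5"},
              {"name": "name2", "number": "number2", "data": "data2"},
              {"name": "name3", "number": "number3", "data": "data3"},
              {"name": "name4", "number": "number4", "data": "data4"}]

# Collect the keys present in test_list2 once (O(n)), then filter test_list1.
numbers2 = {d["number"] for d in test_list2}
res = [d for d in test_list1 if d["number"] not in numbers2]
print(res)  # [{'name': 'name5', 'number': 'number5', 'data': 'data5'}]
```

<p>Unlike the nested comprehension, this compares each dict against the set of all keys rather than against each dict pairwise, so no duplicates are produced.</p>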
|
<python><list><list-comprehension>
|
2023-03-01 12:48:25
| 3
| 1,681
|
ratzip
|
75,603,874
| 7,046,421
|
Trying to calculate an essential matrix using skimage, or cv2
|
<p>I am trying to understand how to work with cv2.findEssentialMat or skimage variant. I have a known 3d model in world coordinates and two calibrated cameras. I computed the true essential matrix of the system and my goal is to project the known 3d model points into each of the cameras and use cv2 (or skimage) to recover the known essential matrix.</p>
<p>Normally the essential matrix is computed from image correspondences because the 3d structure isn't known. Since here it is known, I can simply compute the relative pose between the cameras as <code>∆E=E_r @ E_l^-1</code> and then compute the essential matrix <code>E=T_x @ R</code>, where <code>@</code> is matrix multiplication, <code>T_x</code> is the skew-symmetric matrix associated with the normalized translation vector, and <code>R</code> is the rotation matrix.</p>
<p>After I confirm that my calculations are good, I project the world points into each camera and use <code>cv2.findEssentialMat</code> to compute the essential matrix from the image coordinates. I expect the essential matrix that I calculated to be the same as the one from <code>cv2.findEssentialMat</code>, yet the two are very different.
Does anyone know what I am missing?</p>
<p>Here is my setup</p>
<pre><code>import numpy as np
import cv2

def apply_rt(pts, rt):
return (rt[:3, :3] @ pts.T).T + rt[:3, -1]
def project_points(xyzs, intrinsic, extrinsic, should_normalize=False):
projected_points_cr = (intrinsic @ apply_rt(xyzs, extrinsic).T).T
divisors = projected_points_cr[:, 2:]
projected_points_cr = projected_points_cr / divisors
if should_normalize:
projected_points_cr = (np.linalg.inv(intrinsic) @ projected_points_cr.T).T
return projected_points_cr[:, :2]
xyzs_world = np.array([
[73.008575, 52.755592, 39.83713 ],
[72.075424, 47.25908 , 40.036087],
[72.20231 , 45.843777, 40.351246],
[73.43591 , 54.131153, 39.584625],
[74.108826, 44.653885, 42.031307],
[77.388 , 49.497707, 42.064713],
[77.185585, 48.65105 , 42.162895],
[77.388245, 43.62944 , 42.370674],
[77.02157 , 53.728027, 41.259354],
[74.88333 , 54.320747, 40.528275],
[76.66945 , 43.3657 , 42.48002 ],
[71.57631 , 51.054092, 38.390312],
[71.905975, 52.52903 , 38.386307],
[72.099724, 48.342022, 39.933678],
[73.300545, 45.25413 , 41.518044],
[75.36074 , 54.748974, 40.528008],
[74.20519 , 53.809402, 40.38417 ],
[74.10402 , 48.007614, 41.790726],
[71.70616 , 48.58014 , 39.25619 ],
[77.78094 , 45.657528, 42.211655],
[74.894455, 44.303238, 42.315697],
[72.35703 , 51.532913, 39.52483 ],
[77.83151 , 53.58341 , 41.273026],
[73.03928 , 51.10847 , 40.470455],
[77.50123 , 48.578613, 42.125782],
[74.307816, 55.139206, 39.728397],
[74.199196, 48.10794 , 41.825573],
[72.19932 , 50.435875, 39.648094],
[71.919846, 45.61991 , 39.96143 ],
[77.08149 , 43.011536, 42.4573 ],
[76.36457 , 46.383766, 42.35271 ],
[74.79315 , 48.619663, 41.99683 ],
[74.39389 , 52.70309 , 40.95705 ],
[73.83906 , 44.51386 , 41.910065],
[72.639786, 44.48964 , 40.993587],
[75.381966, 45.770576, 42.32749 ],
[78.57214 , 47.213417, 41.966934],
[77.5113 , 46.790035, 42.220528],
[73.846436, 46.48003 , 41.783493],
[74.31437 , 53.745804, 40.484417],
[74.80585 , 44.500835, 42.28195 ],
[76.932724, 42.753857, 42.496098],
[74.27546 , 47.899445, 41.879692],
[71.30465 , 45.965378, 38.79394 ],
[78.21704 , 43.769154, 42.18486 ],
[76.79267 , 52.646767, 41.56925 ],
[72.67541 , 54.200233, 38.643562],
[78.73034 , 50.631893, 41.681744],
[74.487144, 51.408913, 41.39855 ],
[74.22435 , 51.41863 , 41.266033]], dtype=np.float32)
intrinsics_right = np.array([
[2549.682106, 0.000000, 1193.621053],
[0.000000, 2588.379958, 632.521320],
[0.000000, 0.000000, 1.000000]])
intrinsics_left = np.array([
[2514.183473, 0.000000, 1421.766881],
[0.000000, 2530.292937, 1017.599687],
[0.000000, 0.000000, 1.000000]])
extrinsics_right = np.array([
[0.908730, 0.031275, -0.416211, -52.430834],
[-0.046736, -0.983293, -0.175928, 50.173038],
[-0.414760, 0.179323, -0.892086, 179.569364],
[0.000000, 0.000000, 0.000000, 1.000000]])
extrinsics_left = np.array([
[0.908264, -0.017047, 0.418049, -66.914767],
[-0.008166, -0.999702, -0.023024, 22.166049],
[0.418317, 0.017498, -0.908133, 130.945395],
[0.000000, 0.000000, 0.000000, 1.000000]])
rel_pose = extrinsics_right @ np.linalg.inv(extrinsics_left)
X_r = apply_rt(xyzs_world, extrinsics_right)
X_l = apply_rt(xyzs_world, extrinsics_left)
# T_x: skew-symmetric matrix of the normalized relative translation (see above)
t = rel_pose[:3, -1] / np.linalg.norm(rel_pose[:3, -1])
T_x = np.array([[0, -t[2], t[1]],
                [t[2], 0, -t[0]],
                [-t[1], t[0], 0]])
essential_gt = T_x @ rel_pose[:3,:3]
# result is:
# np.array([[0.005759, -0.415231, -0.020870],
# [-0.415659, -0.153303, 0.895284],
# [0.059779, -0.896665, -0.147390]])
epipolar_constraint = np.linalg.norm(np.einsum('ij,ij->i', X_r, (essential_gt @ X_l.T).T))
print(epipolar_constraint) # print 1e-11
</code></pre>
<p>This value of <code>epipolar_constraint</code> is around <code>1e-11</code>, indicating the calculations are good. Next, I project the world points into each camera so that I have corresponding image points, and then call <code>cv2.findEssentialMat</code>. Following the opencv docs, I normalize the image coordinates and pass the identity in place of the camera matrix because I have two different cameras.</p>
<pre><code>rcs_l = project_points(xyzs_world, intrinsics_left, extrinsics_left, should_normalize=True)
rcs_r = project_points(xyzs_world, intrinsics_right, extrinsics_right, should_normalize=True)
essential, inliers = cv2.findEssentialMat(rcs_l, rcs_r, np.eye(3))
# result is
# np.array([[-0.000108, 0.000701, -0.394863],
# [-0.000634, 0.000407, -0.586585],
# [0.423443, 0.566300, 0.000232]])
</code></pre>
|
<python><opencv><computer-vision><camera-calibration>
|
2023-03-01 12:40:56
| 1
| 333
|
ClimbingTheCurve
|
75,603,734
| 221,270
|
Compare dataframes and extract values that overlap with a minimum occurrence
|
<p>I would like to compare a couple of data frames and extract overlapping row values:</p>
<pre><code>import pandas as pd
#df1
data= {
'id': ['ID1', 'ID2', 'ID3', 'ID4'],
'type': ['1/1', '1/1', '1/1', '1/1'],
'value': [-10, 2, 28, 40]
}
df1 = pd.DataFrame(data)
#df2
data2= {
'id': ['ID1', 'ID5', 'ID6', 'ID7'],
'type': ['1/1', '1/1', '1/1', '1/1'],
'value': [-10, 13, 10, 11]
}
df2 = pd.DataFrame(data2)
#df3
data3= {
'id': ['ID1', 'ID2', 'ID5', 'ID7'],
'type': ['1/1', '1/1', '1/1', '1/1'],
'value': [-10, 2, 13, 7]
}
df3 = pd.DataFrame(data3)
#df4
data4= {
'id': ['ID1', 'ID2'],
'type': ['1/1', '1/1'],
'value': [-10, 2]
}
df4 = pd.DataFrame(data4)
#df5
data5= {
'id': ['ID1', 'ID2'],
'type': ['1/1', '1/1'],
'value': [-10, 2]
}
df5 = pd.DataFrame(data5)
</code></pre>
<p>Now I would like to apply a filter that specifies the minimum number of overlapping rows. For example, requiring at least 3 overlaps (this should be a variable) results in:</p>
<pre><code>id,type,value
ID1,1/1,-10
ID2,1/1,2
</code></pre>
<p>Using merge I can only compare two data frames:</p>
<pre><code>pd.merge(df1, df2, how='inner', on=['id','type','value'])
</code></pre>
<p>How can I apply such a count filter on more than two data frames with a minimum number of occurrences?</p>
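<p>A sketch of one way to generalize this (assuming each frame contains a given row at most once, as in the sample data): concatenate all frames and keep the rows whose full (id, type, value) combination occurs at least <code>min_overlap</code> times. The frames below repeat the sample data:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"id": ["ID1", "ID2", "ID3", "ID4"], "type": ["1/1"] * 4, "value": [-10, 2, 28, 40]})
df2 = pd.DataFrame({"id": ["ID1", "ID5", "ID6", "ID7"], "type": ["1/1"] * 4, "value": [-10, 13, 10, 11]})
df3 = pd.DataFrame({"id": ["ID1", "ID2", "ID5", "ID7"], "type": ["1/1"] * 4, "value": [-10, 2, 13, 7]})
df4 = pd.DataFrame({"id": ["ID1", "ID2"], "type": ["1/1"] * 2, "value": [-10, 2]})
df5 = pd.DataFrame({"id": ["ID1", "ID2"], "type": ["1/1"] * 2, "value": [-10, 2]})

min_overlap = 3  # the tunable threshold

# Count identical (id, type, value) rows across all frames; since each frame
# contributes a row at most once, the count equals the number of frames
# containing that row.
combined = pd.concat([df1, df2, df3, df4, df5], ignore_index=True)
counts = combined.value_counts(["id", "type", "value"])
result = counts[counts >= min_overlap].index.to_frame(index=False)
print(result)
```

<p>With the sample data, ID1 appears in all 5 frames and ID2 in 4, so only those two rows survive the threshold.</p>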
|
<python><pandas>
|
2023-03-01 12:26:39
| 1
| 2,520
|
honeymoon
|
75,603,641
| 15,852,600
|
Function that returns a dataframe without leading 0s in a specific column
|
<p>I have the following dataframe:</p>
<pre><code>df=pd.DataFrame({
'n' : [0,1,2,3, 0,1,2, 0,1,2],
'col1' : ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C'],
'col2' : [0, 0, 0, 0, 3.3, 0, 4, 1.94, 0, 6.17]
})
</code></pre>
<p>It has the form:</p>
<pre><code> n col1 col2
0 0 A 0.00
1 1 A 0.00
2 2 A 0.00
3 3 B 0.00
4 0 B 3.30
5 1 B 0.00
6 2 B 4.00
7 0 C 1.94
8 1 C 0.00
9 2 C 6.17
</code></pre>
<p>I want a function that will have that dataframe as argument and will return a new dataframe without the first rows where values are 0s in the column 'col2'</p>
<p><strong>My code</strong></p>
<pre><code>def remove_lead_zeros(df):
new_df = df[df['col2'] != 0]
return new_df
</code></pre>
<p>My function removes all rows with 0.0 values, while I want to remove only the leading ones.</p>
<p><strong>Goal</strong></p>
<p>Is to get the following dataframe as result:</p>
<pre><code> n col1 col2
0 0 B 3.30
1 1 B 0.00
2 2 B 4.00
3 0 C 1.94
4 1 C 0.00
5 2 C 6.17
</code></pre>
<p>Any help from your side will be highly appreciated (Upvoting all answers), thank you !</p>
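<p>A sketch of one way to drop only the leading zeros: a cumulative-maximum mask flips to True at the first non-zero 'col2' value and stays True afterwards (the frame below repeats the sample data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'n': [0, 1, 2, 3, 0, 1, 2, 0, 1, 2],
    'col1': ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C'],
    'col2': [0, 0, 0, 0, 3.3, 0, 4, 1.94, 0, 6.17]
})

def remove_lead_zeros(df):
    # ne(0) marks non-zero rows; cummax() stays True from the first one on,
    # so only the leading zeros are dropped while later zeros survive.
    mask = df['col2'].ne(0).cummax()
    return df[mask].reset_index(drop=True)

out = remove_lead_zeros(df)
print(out)
```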
|
<python><pandas><dataframe>
|
2023-03-01 12:19:27
| 3
| 921
|
Khaled DELLAL
|
75,603,579
| 9,859,642
|
Multiple series with subplots for shared columns in DataFrames
|
<p>I have two DataFrames with identical row indexes and column names that are sometimes only in one DataFrame, and sometimes in both. I wanted to plot data from columns that are in both DataFrames and arrange them in subplots. The final figure should look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>ColumnA1 + ColumnB1</td>
<td>ColumnA2 + ColumnB2</td>
</tr>
<tr>
<td>ColumnA3 + ColumnB3</td>
<td>ColumnA4 + ColumnB4</td>
</tr>
</tbody>
</table>
</div>
<p>For now I tried to simply have the plots done, without arranging them in subplots. But if any of the columns is not present in both DataFrames, none of the plots are showing:</p>
<pre><code>for column_name in [DataFrameA.columns, DataFrameB.columns]:
DataFrameA[column_name].plot(label = "A")
DataFrameB[column_name].plot(label = "B")
plt.show()
</code></pre>
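<p>A sketch of one approach (with small made-up frames and the headless Agg backend, so it runs without a display): take the intersection of the two column sets first, then give each shared column its own subplot:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import pandas as pd

# Made-up frames; only "x" and "y" exist in both.
dfA = pd.DataFrame({"x": [1, 2, 3], "y": [3, 2, 1], "only_a": [0, 0, 0]})
dfB = pd.DataFrame({"x": [2, 2, 2], "y": [1, 1, 1], "only_b": [9, 9, 9]})

shared = dfA.columns.intersection(dfB.columns)  # columns present in both
fig, axes = plt.subplots(1, len(shared), squeeze=False)

for ax, col in zip(axes.ravel(), shared):
    dfA[col].plot(ax=ax, label="A")  # a missing column can no longer abort the loop
    dfB[col].plot(ax=ax, label="B")
    ax.set_title(col)
    ax.legend()

fig.savefig("shared_columns.png")
```

<p>Iterating over the intersection is what prevents a column that exists in only one frame from raising a KeyError and suppressing all the plots.</p>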
|
<python><pandas><dataframe><matplotlib>
|
2023-03-01 12:13:22
| 1
| 632
|
Anavae
|
75,603,485
| 12,945,785
|
pandas multiindex columns from an other dataframe
|
<p>The first one is like that:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: right;">F2</th>
<th style="text-align: right;">F1</th>
<th style="text-align: right;">F3</th>
<th style="text-align: right;">F4</th>
<th style="text-align: right;">F5</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2019</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: left;">2020</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2021</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div>
<p>The second one like that</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">ID</th>
<th style="text-align: left;">ASSET</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">F1</td>
<td style="text-align: left;">carac3</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">F2</td>
<td style="text-align: left;">carac1</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">F3</td>
<td style="text-align: left;">carac1</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">F4</td>
<td style="text-align: left;">carac2</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">F5</td>
<td style="text-align: left;">carac2</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to get a multiliindex columns dataframe with :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: right;">F2</th>
<th style="text-align: right;">F1</th>
<th style="text-align: right;">F3</th>
<th style="text-align: right;">F4</th>
<th style="text-align: right;">F5</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;"></td>
<td style="text-align: right;">carac1</td>
<td style="text-align: right;">carac3</td>
<td style="text-align: right;">carac1</td>
<td style="text-align: right;">carac2</td>
<td style="text-align: right;">carac2</td>
</tr>
<tr>
<td style="text-align: left;">2019</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: left;">2020</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2021</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div>
<p>Thx</p>
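<p>A sketch of one way to build the second header level: map each column name through the ID→ASSET lookup and combine both levels into a MultiIndex (the frames below mirror the tables above):</p>

```python
import pandas as pd

df1 = pd.DataFrame({"F2": [8, 9, 10], "F1": [1, 1, 2], "F3": [3, 3, 4],
                    "F4": [4, 6, 5], "F5": [6, 1, 1]}, index=[2019, 2020, 2021])
df2 = pd.DataFrame({"ID": ["F1", "F2", "F3", "F4", "F5"],
                    "ASSET": ["carac3", "carac1", "carac1", "carac2", "carac2"]})

# Look up each column's ASSET and pair it with the original column name.
mapping = df2.set_index("ID")["ASSET"]
df1.columns = pd.MultiIndex.from_arrays([df1.columns, df1.columns.map(mapping)])
print(df1)
```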
|
<python><pandas><multi-index>
|
2023-03-01 12:05:02
| 3
| 315
|
Jacques Tebeka
|
75,603,171
| 7,676,365
|
Problem with data table webcsrape (web with open data)
|
<p>I need to read a table from the web (city open data) directly into a pandas dataframe. But when I use <code>pandas.read_html()</code>, python returns the error message <code>No tables found</code>.</p>
<p><strong>My code:</strong></p>
<pre class="lang-py prettyprint-override"><code>import requests
import pandas as pd
url = 'https://opendata.kosice.sk/datasets/kosice-mesto::v%C3%BDmery-typov-pozemkov-1/explore'
html = requests.get(url).content
df_list = pd.read_html(html)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>...
...
ValueError: No tables found
</code></pre>
<p>How can I fix this? I've used this function multiple times and it's always worked.</p>
|
<python><pandas><web-scraping>
|
2023-03-01 11:37:09
| 1
| 403
|
314mip
|
75,602,839
| 18,895,773
|
NP reshape (add extra 2 dimension)
|
<p>I will illustrate my question with an example. If I have the array:</p>
<pre><code>a = np.array([[1,2,3,4],
[9,10,11,12],
[17,18,19,20],
[25,26,27,28]])
</code></pre>
<p>I would like to get</p>
<pre><code>array([[[ 1, 2], [[ 3, 4],
[9, 10], [11, 12],
[17, 18], [19, 20],
[25, 26]] [27, 28]],
</code></pre>
<p>So apparently if my array was <code>MxN</code>, now it will be <code>Mx(N/2)x2</code>. How do I do it? I tried:</p>
<pre><code>import numpy as np
a.reshape(a.shape[0], a.shape[1]//2, 2)
</code></pre>
<p>but this does not produce the arrangement shown above.</p>
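<p>A sketch of one reading of the desired output: the reshape does group each row into consecutive pairs, but a transpose is then needed to separate the first pairs from the second pairs into the two blocks shown above:</p>

```python
import numpy as np

a = np.array([[1, 2, 3, 4],
              [9, 10, 11, 12],
              [17, 18, 19, 20],
              [25, 26, 27, 28]])

# reshape groups each row into consecutive pairs: shape (M, N//2, 2);
# transpose(1, 0, 2) then collects the k-th pair of every row into block k.
out = a.reshape(a.shape[0], -1, 2).transpose(1, 0, 2)
print(out.shape)  # (2, 4, 2)
```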
|
<python><numpy>
|
2023-03-01 11:05:38
| 2
| 362
|
Petar Ulev
|
75,602,801
| 1,436,800
|
How to track that a request is handled by which view in django project?
|
<p>I am working on a django project that consists of multiple apps, and every app has its own urls.py file.
When we send a request from a browser, how can we tell which view in our project handled it?</p>
<p>e.g. We send a request with this url:
<a href="http://mywebsite.com:8000/item?id=2" rel="nofollow noreferrer">http://mywebsite.com:8000/item?id=2</a></p>
<p>On the server, we can see the request logged as:
"GET /item?id=2 HTTP/1.1" 200
How can we tell which view handled this request?</p>
|
<python><django><django-rest-framework><django-views>
|
2023-03-01 11:02:02
| 1
| 315
|
Waleed Farrukh
|
75,602,637
| 981,768
|
How can I build a 2D Array from array stack, selecting non-NAN along third dimension?
|
<p>I have <em>n</em> 2D numpy arrays that are stacked along the third dimension. Each array contains <code>np.nan</code> values. I want to construct a new 2D array out of the stack, in which each value is the first non-<code>np.nan</code> value along the third dimension.</p>
<p>To illustrate, think of the <em>top view</em> of the 2D stack and imagine each <code>np.nan</code> as transparant.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
a = np.array([[1,2,3],
[4,5,np.nan],
[7,8,9],
[10,11,12]])
b = np.array([[np.nan,21,np.nan],
[23,24,25],
[26,27,28],
[29, 30,31]])
c = np.array([[np.nan,np.nan,np.nan],
[43,np.nan,np.nan],
[46,np.nan,48],
[49, 50, 51]])
stack = np.stack([a, b, c], axis=0)
for d in [2,1]:
if d == 2:
fill = stack[d, :,:]
arr = np.where(np.isnan(stack[d,:,:]), stack[d-1,:,:], fill)
fill = arr
</code></pre>
<p>This illustrates the problem and delivers the desired result, see below. Note that <code>c</code> is given the highest priority. This is intentional.</p>
<pre class="lang-py prettyprint-override"><code>>>> arr
array([[ 1., 21., 3.],
[43., 24., 25.],
[46., 27., 48.],
[49., 50., 51.]])
</code></pre>
<p>I was wondering if this can be solved more efficiently and without a loop? Perhaps using <code>np.isnan(stack)</code> and <a href="https://numpy.org/doc/stable/reference/generated/numpy.select.html" rel="nofollow noreferrer">np.select</a>?</p>
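<p>A loop-free sketch of the same priority rule (the last layer in the stack wins): find, per cell, the index of the last non-NaN layer and gather with <code>np.take_along_axis</code>. The arrays below repeat the sample data:</p>

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, np.nan], [7, 8, 9], [10, 11, 12]])
b = np.array([[np.nan, 21, np.nan], [23, 24, 25], [26, 27, 28], [29, 30, 31]])
c = np.array([[np.nan, np.nan, np.nan], [43, np.nan, np.nan],
              [46, np.nan, 48], [49, 50, 51]])
stack = np.stack([a, b, c], axis=0)

valid = ~np.isnan(stack)
# argmax on the reversed stack finds the first valid layer from the top,
# i.e. the last (highest-priority) one in the original order. Cells that
# are NaN in every layer simply stay NaN.
last = stack.shape[0] - 1 - np.argmax(valid[::-1], axis=0)
arr = np.take_along_axis(stack, last[None], axis=0)[0]
print(arr)
```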
|
<python><arrays><numpy>
|
2023-03-01 10:46:42
| 1
| 741
|
Hans Roelofsen
|
75,602,619
| 4,358,785
|
How to left join numpy array python
|
<p>What's the numpy "pythonic" way to left join arrays? Let's say I have two 2-D arrays that share a key:</p>
<pre><code>a.shape # (20, 2)
b.shape # (200, 3)
</code></pre>
<p>Both arrays share a common key in their first column:</p>
<pre><code>a[:, 0] # values from 0..19
b[:, 0] # values from 0..19
</code></pre>
<p>How can I left join the values from a[:, 1] onto b?</p>
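<p>A sketch of one approach using <code>searchsorted</code> as the lookup, shown on small stand-in arrays rather than the (20, 2) and (200, 3) ones, and assuming every key in b's first column also appears in a: sort a by its key once, locate each key of b, and append the matched value as a new column:</p>

```python
import numpy as np

# Small stand-ins: a maps key -> value, b carries the key in column 0.
a = np.array([[0, 10], [2, 12], [1, 11]])
b = np.array([[1, 5, 5], [0, 6, 6], [2, 7, 7], [1, 8, 8]])

order = np.argsort(a[:, 0])                      # sort a by its key column
pos = np.searchsorted(a[order, 0], b[:, 0])      # locate each key of b
joined = np.column_stack([b, a[order[pos], 1]])  # append the matched values
print(joined[:, -1])  # [11 10 12 11]
```

<p>For keys outside a's range, <code>searchsorted</code> would return out-of-bounds positions, so a membership check would be needed for a true left join with missing values.</p>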
|
<python><numpy><join><left-join>
|
2023-03-01 10:45:27
| 1
| 971
|
Ruslan
|
75,602,541
| 11,092,636
|
Whole column background formatted the same manner with openpyxl doesn't work as intended
|
<p>MRE:</p>
<pre class="lang-py prettyprint-override"><code>import openpyxl
from openpyxl.styles import PatternFill
# Create a new workbook
workbook = openpyxl.Workbook()
# Select the active sheet
sheet = workbook.active
# Set the background color of the third column to black
fill = PatternFill(start_color='000000', end_color='000000', fill_type='solid')
for cell in sheet['C']:
cell.fill = fill
# Save the workbook
workbook.save('example.xlsx')
</code></pre>
<p>Only the first cell of column 'C' gets a black background.</p>
<p>I want the whole column to be formatted with black background (like it works in Excel when you format the whole column).</p>
<p>I don't want to iterate through cells from 1 to a big number, because then the Excel file is created with loads of rows. The format should be applied to the whole column <strong>without creating new rows</strong>, like it works in Excel. Is that possible? Or do I need to run <code>xlwings</code> to call a <code>VBA</code> function?</p>
|
<python><excel><openpyxl>
|
2023-03-01 10:36:54
| 1
| 720
|
FluidMechanics Potential Flows
|
75,602,420
| 4,105,440
|
TypeError when applying a function to every element of a nested array
|
<p>I want to vectorize applying the function to an array of arrays in order to avoid the loop.</p>
<p>The input array <code>arr</code> is</p>
<pre class="lang-py prettyprint-override"><code>array([array([], dtype=float64),
array([0.03, 0.04,])],
dtype=object)
</code></pre>
<p>If I do a loop</p>
<pre class="lang-py prettyprint-override"><code>for a in arr:
np.exp(-a)
</code></pre>
<p>I don't get any error . Instead, with <code>np.apply_along_axis</code></p>
<pre class="lang-py prettyprint-override"><code>np.apply_along_axis(func1d=lambda x: np.exp(-x), axis=0, arr=arr)
</code></pre>
<p>I get an error <code>TypeError: loop of ufunc does not support argument 0 of type numpy.ndarray which has no callable exp method</code>. I believe there is some type mismatch but don't really understand why looping through the array works just fine. Also doing a simpler operation like <code>lambda x: x * 6371</code> produces the expected result.</p>
<p>It may also be that <code>np.apply_along_axis</code> is not the best approach here and I may need to convert <code>arr</code> to a structure that allow for vectorization.</p>
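<p>For context on the error: on an object array, <code>np.exp</code> falls back to calling an <code>exp()</code> method on each element, which ndarrays don't have, while <code>x * 6371</code> works because it falls back to each element's <code>__mul__</code>. One sketch that keeps the ragged structure while staying vectorized inside each sub-array:</p>

```python
import numpy as np

# Ragged object array, mirroring the input above.
arr = np.array([np.array([], dtype=float), np.array([0.03, 0.04])], dtype=object)

# Apply the ufunc per sub-array; each call is vectorized over that sub-array,
# and only the outer level is a Python loop.
out = np.array([np.exp(-x) for x in arr], dtype=object)
print(out[1])
```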
|
<python><arrays><numpy>
|
2023-03-01 10:26:28
| 1
| 673
|
Droid
|
75,602,364
| 2,402,501
|
Create a Contingency table of Total and Shared Visitors between two buildings
|
<p>I need to create a contingency table based on a location, and values should be frequency of visits based on a person's check in.</p>
<p>Below is a sample problem with data:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
a = [
["A", "Building 12", 1],
["A", "Building 11", 3],
["A", "Building 12", 1],
["A", "Building 11", 3],
["B", "Building 11", 7],
]
df = pd.DataFrame(a, columns=["ID", "Location", "Visits"])
</code></pre>
<pre><code> ID Location Visits
0 A Building 12 1
1 A Building 11 3
2 A Building 12 1
3 A Building 11 3
4 B Building 11 7
</code></pre>
<p>Each row indicates a specific day. For example, row 0 shows that visitor A went to Building 12 once on that day and then in row 2, they returned on another day.</p>
<p>I need to understand the flow of people through a particular building. Below is expected output:</p>
<pre><code>Location Building 12 Building 11
Location
Building 12 2 0
Building 11 6 13
</code></pre>
<p>From the above:</p>
<ul>
<li>Building 12: was visited by only person A (in total 2 times)</li>
<li>Building 11: was visited by person A & B (in total 13 times)</li>
<li>Building 12: was only visited by person A who did not go to building 11 (hence 0 is placed in top quad)</li>
<li>Building 11: was visited by both however person A visited Building 11 (hence 6 visits is placed in bottom quad)</li>
</ul>
<p>That is, Building 11 shared 6 visits with Building 12.</p>
<p>This is obviously a small sample, and I have many records. I have tried the following:</p>
<pre class="lang-py prettyprint-override"><code>pd.crosstab(df.Location, df.Location, values=df.Visits, aggfunc='sum').fillna(0)
</code></pre>
<p>But, it doesn't create a unique list of buildings and visits are not shared.</p>
|
<python><pandas><group-by>
|
2023-03-01 10:22:28
| 1
| 3,307
|
Bryce Ramgovind
|
75,602,300
| 1,938,096
|
Convert NumPy array to x- and y-axis for matplotlib
|
<p>I've been struggling with this for a couple of days now, because I don't fully understand how all the object/datetime conversions work and which ones matplotlib needs, so I keep trying things without understanding why.</p>
<p>I have this numpy array that I loaded from disk ( <code>combined_list = numpy.load('dumpert.npy', allow_pickle=True)</code>) (I do this because the data comes from an external API with a rate limit). It has 4 columns: 1=timestamp, 2=price, 3=othertimestamp, 4=power.</p>
<pre><code>[datetime.datetime(2023, 3, 1, 21, 0, tzinfo=datetime.timezone.utc),
0.16,
datetime.datetime(2023, 3, 1, 21, 0, tzinfo=datetime.timezone.utc),
250],
[datetime.datetime(2023, 3, 1, 22, 0, tzinfo=datetime.timezone.utc),
0.16,
datetime.datetime(2023, 3, 1, 22, 0, tzinfo=datetime.timezone.utc),
100]], dtype=object)
</code></pre>
<p>I then slice the array like this:</p>
<pre class="lang-py prettyprint-override"><code>rows = len(combined_list)
timestamp = combined_list[0:rows+1,0:1]
prices = (combined_list[0:rows+1,1:2])
solar = (combined_list[0:rows+1,3:4])
</code></pre>
<p>When I now plot the prices and solar, I get a nice graph:</p>
<pre class="lang-py prettyprint-override"><code>plt.plot( prices)
plt.plot( solar)
plt.show()
</code></pre>
<p>But I would actually like to change two things:</p>
<ul>
<li>have the x-axis use the 'timestamp' values, preferably showing only the hours, like "13:00". I've been trying different conversions but keep getting different errors.</li>
<li>have the 'price' be a bar graph instead of plot, but using the following code gives me <code>TypeError: only size-1 arrays can be converted to Python scalars</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code>plt.bar( x=xas, height=prices, width=0.2)
plt.plot( solar)
plt.show()
</code></pre>
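<p>Part of the trouble is the slicing itself: <code>combined_list[0:rows+1, 1:2]</code> keeps a 2-D shape of (n, 1), which is what triggers "only size-1 arrays can be converted to Python scalars" in <code>plt.bar</code>. A sketch of the data handling on a tiny stand-in array (matplotlib calls omitted; <code>plt.bar(hours, prices)</code> and <code>plt.plot(hours, solar)</code> then work on these 1-D values):</p>

```python
import datetime
import numpy as np

# Tiny stand-in for the loaded array: timestamp, price, timestamp, power.
combined_list = np.array([
    [datetime.datetime(2023, 3, 1, 21, tzinfo=datetime.timezone.utc), 0.16,
     datetime.datetime(2023, 3, 1, 21, tzinfo=datetime.timezone.utc), 250],
    [datetime.datetime(2023, 3, 1, 22, tzinfo=datetime.timezone.utc), 0.16,
     datetime.datetime(2023, 3, 1, 22, tzinfo=datetime.timezone.utc), 100],
], dtype=object)

# Single-index slices give 1-D arrays; [0:rows, 1:2] would keep shape (n, 1).
prices = combined_list[:, 1].astype(float)
solar = combined_list[:, 3].astype(float)
hours = [t.strftime("%H:%M") for t in combined_list[:, 0]]
print(hours)  # ['21:00', '22:00']
```

<p>For true datetime spacing on the x-axis, another option is to pass the datetimes directly and format the axis with <code>matplotlib.dates.DateFormatter('%H:%M')</code>.</p>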
|
<python><numpy><matplotlib>
|
2023-03-01 10:16:38
| 1
| 579
|
Gabrie
|
75,602,282
| 12,945,785
|
How to group by a dataframe based on another dataframe
|
<p>The first one is like that:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: right;">F1</th>
<th style="text-align: right;">F2</th>
<th style="text-align: right;">F3</th>
<th style="text-align: right;">F4</th>
<th style="text-align: right;">F5</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2019</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: left;">2020</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2021</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2022</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">8</td>
</tr>
<tr>
<td style="text-align: left;">2023</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">8</td>
</tr>
</tbody>
</table>
</div>
<p>The second one like that</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">ID</th>
<th style="text-align: left;">ASSET</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">F1</td>
<td style="text-align: left;">carac3</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">F2</td>
<td style="text-align: left;">carac1</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">F3</td>
<td style="text-align: left;">carac1</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">F4</td>
<td style="text-align: left;">carac2</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">F5</td>
<td style="text-align: left;">carac2</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to get :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: right;">carac1</th>
<th style="text-align: right;">carac2</th>
<th style="text-align: right;">carac3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2019</td>
<td style="text-align: right;">=4 (1+3)</td>
<td style="text-align: right;">=10 54+6</td>
<td style="text-align: right;">...</td>
</tr>
<tr>
<td style="text-align: left;">2020</td>
<td style="text-align: right;">...</td>
<td style="text-align: right;">...</td>
<td style="text-align: right;">...</td>
</tr>
<tr>
<td style="text-align: left;">2021</td>
<td style="text-align: right;">...</td>
<td style="text-align: right;">...</td>
<td style="text-align: right;">...</td>
</tr>
<tr>
<td style="text-align: left;">2022</td>
<td style="text-align: right;">...</td>
<td style="text-align: right;">...</td>
<td style="text-align: right;">...</td>
</tr>
<tr>
<td style="text-align: left;">2023</td>
<td style="text-align: right;">...</td>
<td style="text-align: right;">...</td>
<td style="text-align: right;">...</td>
</tr>
</tbody>
</table>
</div>
<p>where each '...' is the sum of F(i) for the IDs whose ASSET is carac(i)</p>
<p>Thx</p>
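One possible sketch (assuming the two tables are named <code>df1</code> and <code>df2</code>): turn the second table into an ID-to-ASSET mapping, then group the first table's columns by it:

```python
import pandas as pd

# df1: years x F1..F5 (first two rows of the question's table)
df1 = pd.DataFrame(
    {"F1": [8, 9], "F2": [1, 1], "F3": [3, 3], "F4": [4, 6], "F5": [6, 1]},
    index=[2019, 2020],
)
# df2: maps each column ID to its ASSET group
df2 = pd.DataFrame(
    {"ID": ["F1", "F2", "F3", "F4", "F5"],
     "ASSET": ["carac3", "carac1", "carac1", "carac2", "carac2"]}
)

mapping = df2.set_index("ID")["ASSET"]
# group the columns (transposed to rows) by their ASSET, sum, transpose back
result = df1.T.groupby(mapping).sum().T
```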
|
<python><dataframe><group-by>
|
2023-03-01 10:14:56
| 3
| 315
|
Jacques Tebeka
|
75,602,267
| 1,581,090
|
How to deserialize a string into a dict with json loads with unknown function?
|
<p>When I have a string like this</p>
<pre><code>test = '{"key1":"value1", "key2": UnknownFunction("value2")}'
</code></pre>
<p>I cannot use <code>json.loads</code> to deserialize its content as the serialized data object contains an unknown function. Is there a simple way to map such a function (or any unknown function) to e.g. <code>str</code>, so that I can deserialize the data object to get a dict like</p>
<pre><code>{"key1":"value1", "key2": "value2"}
</code></pre>
<p>?</p>
<p>The following code is working but maybe there is some better way?</p>
<pre><code>for removable in ["UnknownFunction"]:
    test = test.replace(removable + "(", "")
test = test.replace(")", "")
data = json.loads(test)
</code></pre>
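A slightly more robust sketch of the same idea: strip any <code>Name("…")</code> wrapper with a regex that keeps the quoted argument intact, instead of deleting every <code>)</code> in the string:

```python
import json
import re

test = '{"key1":"value1", "key2": UnknownFunction("value2")}'

# replace Name("arg") with just "arg"; the capture group keeps the quoted string
cleaned = re.sub(r'\b\w+\(("(?:[^"\\]|\\.)*")\)', r'\1', test)
data = json.loads(cleaned)
```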
|
<python><json>
|
2023-03-01 10:13:59
| 1
| 45,023
|
Alex
|
75,602,071
| 1,005,334
|
Endpoint with multipart: Quart-Schema only shows 'application/x-www-urlencoded' in Swagger
|
<p>I'm using Quart-Schema and have defined an endpoint for a multipart call with the help of <a href="https://pgjones.gitlab.io/quart-schema/how_to_guides/request_validation.html" rel="nofollow noreferrer">https://pgjones.gitlab.io/quart-schema/how_to_guides/request_validation.html</a>.</p>
<p>Looks like this:</p>
<pre><code>@dataclass
class File:
    file: Optional[UploadFile] = None
    title: Optional[str] = None
    description: Optional[str] = None
    type: Optional[str] = None


@bp.post('/<int:entity_id>/files')
@validate_request(File, source=DataSource.FORM)
async def post_file(data: File, entity_id):
    """Add file for entity to database"""
<p>The important bit is <code>source=DataSource.FORM</code> so it will pick up the form data instead of expecting JSON.</p>
<p>Works great!
And this is how it looks in Swagger (auto-generated docs):</p>
<p><a href="https://i.sstatic.net/EkHAg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EkHAg.png" alt="Swagger multipart call" /></a></p>
<p>It correctly shows all the fields and types and even includes an upload button. The only issue is, the request body type in Swagger is defined as <code>application/x-www-urlencoded</code> (only option in the drop-down). When I try out this request, it works when only sending plain field data, but when I add a file, it fails.</p>
<p>I can see no way to change this to <code>multipart/form-data</code> although I've seen Swagger examples showing this (though these examples don't use Quart-Schema, unfortunately).</p>
<p>It seems <code>source=DataSource.FORM</code> determines the request body type. Since there are only two options for <code>source</code> (the other one being <code>DataSource.JSON</code>) I see no way of getting Swagger to show/send <code>multipart/form-data</code>.</p>
<p>Is this a limitation of Quart-Schema, and if so: is there a solution for this?</p>
|
<python><swagger><multipartform-data><quart>
|
2023-03-01 09:55:05
| 1
| 1,544
|
kasimir
|
75,602,070
| 14,058,726
|
How to test if import raises ImportError if package is missing?
|
<p>I am writing a module which can have <code>pandas</code> as an optional package. The import statement at the top of the file <code>my_submodule.py</code> looks like this.</p>
<pre><code>try:
    import pandas as pd
except (ImportError, ModuleNotFoundError):
    pd = None
</code></pre>
<p>Now I want to test that <code>pandas</code> is not installed and either <code>ImportError</code> or <code>ModuleNotFoundError</code> is raised.</p>
<p>How to do this?</p>
<p>At the moment my test file looks like this:</p>
<pre><code>from unittest import TestCase
from unittest.mock import patch


def test_no_pandas_import():
    with patch('sys.path', []):
        from my_module import my_submodule
        assert my_submodule.pd is None
<p>but the assertion is not True, pandas is imported and the errors are not checked.</p>
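Patching <code>sys.path</code> does not help once <code>pandas</code> is importable (or already cached). What does work is putting <code>None</code> into <code>sys.modules</code>, which makes the import statement itself raise <code>ImportError</code>. A sketch; the <code>import_pandas_or_none</code> helper below stands in for reloading <code>my_submodule</code>:

```python
import sys
from unittest.mock import patch


def import_pandas_or_none():
    # mirrors the module-level try/except in my_submodule
    try:
        import pandas as pd
    except ImportError:  # ModuleNotFoundError is a subclass of ImportError
        pd = None
    return pd


# None in sys.modules forces `import pandas` to raise ImportError
with patch.dict(sys.modules, {"pandas": None}):
    result = import_pandas_or_none()
```

In the real test you would combine this with <code>importlib.reload(my_submodule)</code> inside the <code>patch.dict</code> block so the module-level import runs again.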
|
<python><mocking><python-unittest>
|
2023-03-01 09:54:46
| 1
| 6,392
|
mosc9575
|
75,602,063
| 21,310,501
|
pip install -r requirements.txt is failing: "This environment is externally managed"
|
<p>Command:</p>
<pre class="lang-none prettyprint-override"><code>pip install -r requirements.txt
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.11/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
</code></pre>
<p>I wish someone would explain to me what to do and how to solve it.</p>
|
<python><linux>
|
2023-03-01 09:53:49
| 12
| 1,943
|
iReXes
|
75,602,040
| 4,116,300
|
How to read an excel file from a local directory using <py-script>
|
<p><strong>Requirement:</strong> I want to read an Excel file from my local directory by using <a href="https://pyscript.net/" rel="nofollow noreferrer"><code><py-script></code></a></p>
<p><strong>Problem Statement:</strong> <code>py-script</code> runs in its own environment, so it cannot locate the current working directory; when I check it with the <code>os.getcwd()</code> command, it returns <code>/home/pyodide</code> instead of the local directory files.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><html>
<head>
<meta charset="utf-8">
<link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" />
<script defer src="https://pyscript.net/latest/pyscript.js"></script>
</head>
<body>
<py-script>
import os
print(os.listdir())
print(os.getcwd())
</py-script>
</body>
</html></code></pre>
</div>
</div>
</p>
<p>Hence, It is giving the below error.</p>
<blockquote>
<p>FileNotFoundError: [Errno 44] No such file or directory.</p>
</blockquote>
<p>Is there any way to achieve the requirement using <code>py-script</code> ?</p>
|
<python><pyscript><pyodide>
|
2023-03-01 09:51:52
| 1
| 27,347
|
Rohìt Jíndal
|
75,601,992
| 1,433,751
|
Django ORM: LEFT JOIN condition based on another LEFT JOIN
|
<p>I'm using Django 4.0 and PostgreSQL (13.7) as the backend (upgrading would be possible if its required for a solution)</p>
<p>I have two models: <code>Cat</code> and <code>Attribute</code>.
The <code>Attribute</code> holds generic key-value pairs describing <code>Cat</code> instances.</p>
<p><code>Cat</code> table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>PK</th>
<th>name</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Wanda</td>
</tr>
<tr>
<td>2</td>
<td>Aisha</td>
</tr>
<tr>
<td>3</td>
<td>Thala</td>
</tr>
</tbody>
</table>
</div>
<p><code>Attribute</code> table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>PK</th>
<th>cat_id</th>
<th>key</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>size</td>
<td>10</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>age</td>
<td>5</td>
</tr>
<tr>
<td>3</td>
<td>2</td>
<td>size</td>
<td>7</td>
</tr>
<tr>
<td>4</td>
<td>2</td>
<td>intelligence</td>
<td>75</td>
</tr>
<tr>
<td>5</td>
<td>2</td>
<td>children</td>
<td>3</td>
</tr>
<tr>
<td>6</td>
<td>3</td>
<td>intelligence</td>
<td>60</td>
</tr>
<tr>
<td>7</td>
<td>3</td>
<td>age</td>
<td>9</td>
</tr>
</tbody>
</table>
</div>
<p>I'd like to select different attribute values depending on conditions of other attributes for all instances, e.g.:</p>
<blockquote>
<p>Get <code>size</code> of a <code>Cat</code> if its <code>age</code> is greater than <code>4</code> or it has more than <code>2</code> <code>children</code> - or if there is no match, get <code>age</code> if <code>intelligence</code> is over <code>50</code>.</p>
</blockquote>
<p>Well, the example here contains random attributes and numbers and does not make any sense, but in my application's world the conditions can be overly complex including several recursive AND, OR, EXISTS and NOT conditions.</p>
<p>My query would be:</p>
<pre><code>SELECT
    DISTINCT(cat.id),
    cat.name,
    COALESCE(
        result1.key,
        result2.key
    ) as result_key,
    COALESCE(
        result1.value,
        result2.value
    ) as result_value
FROM cat
-- first condition
LEFT OUTER JOIN attribute cond1 ON (
    cat.id = cond1.cat_id AND (
        cond1.key = 'age' AND cond1.value > 4 OR cond1.key = 'children' and cond1.value > 2
    )
)
-- second condition
LEFT OUTER JOIN attribute cond2 ON (
    cat.id = cond2.cat_id AND (
        cond2.key = 'intelligence' AND cond2.value > 50
    )
)
-- choose the one or other attribute depending on the first two conditions
LEFT OUTER JOIN attribute result1 ON (
    cat.id = result1.cat_id AND (cond1.cat_id IS NOT NULL) AND result1.key = 'size'
)
LEFT OUTER JOIN attribute result2 ON (
    cat.id = result2.cat_id AND (cond2.cat_id IS NOT NULL) AND result2.key = 'intelligence'
)
</code></pre>
<p>The result should be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>PK</th>
<th>name</th>
<th>result_key</th>
<th>result_value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Wanda</td>
<td>size</td>
<td>10</td>
</tr>
<tr>
<td>2</td>
<td>Aisha</td>
<td>size</td>
<td>7</td>
</tr>
<tr>
<td>3</td>
<td>Thala</td>
<td>age</td>
<td>9</td>
</tr>
</tbody>
</table>
</div><hr />
<p>Using Django's ORM I tried the following:</p>
<pre class="lang-py prettyprint-override"><code>Cat.objects.annotate(
cond1=FilteredRelation(
"attribute",
condition=(
Q(attribute__key="age", attribute__value__gt=4)
| Q(attribute__key="children", attribute__value__gt=2)
),
),
cond2=FilteredRelation(
"attribute",
condition=(Q(attribute__key="intelligence", attribute__value__gt=50)),
),
result1=FilteredRelation(
"attribute",
condition=(Q(cond1__cat_id__isnull=False, attribute__key="size")),
),
result2=FilteredRelation(
"attribute",
condition=(Q(cond2__cat_id__isnull=False, attribute__key="age")),
),
result_key=Coalesce(F("result1__key"), F("result2__key")),
result_value=Coalesce(F("result1__value"), F("result2__value")),
).distinct("pk")
</code></pre>
<p>It will fail with the following error:</p>
<pre><code>ValueError: FilteredRelation's condition doesn't support
relations outside the 'attribute' (got
'cond1__cat_id__isnull').
</code></pre>
<p>Is it somehow possible to construct this or any other query to get the expected results or is this a hard limitation of Django and only using a raw query is the last resort to resolve this issue?</p>
|
<python><django><postgresql><left-join><django-orm>
|
2023-03-01 09:47:31
| 1
| 384
|
Noxx
|
75,601,903
| 2,202,989
|
Python statsforecast season_length vs freq
|
<p>When setting up statsforecast models (for example AutoARIMA), one parameter is season_length for each model, and then for the statsforecast object, there is the freq parameter.</p>
<p>My assumption is that if I have weekly data but I think that seasonality is monthly, I set freq to weekly and season_length to 4. Or is it that I set freq to monthly and then season_length to 4 because there are 4 samples in a season?</p>
<p>So how do these relate?</p>
|
<python><arima><statsforecast>
|
2023-03-01 09:40:57
| 1
| 383
|
Nyxeria
|
75,601,889
| 11,801,923
|
Downloading & Uploading Python Logging logs from S3 to AWS Lambda
|
<p>I have an S3 bucket which holds a pipeline.log file. Each time I run a lambda function, I want the logs to be written and then uploaded to my S3 bucket.</p>
<p>I have created custom functions that handle the downloads and uploads from s3. These custom S3 functions are tested and running well in sagemaker, lambdas and gluejobs.</p>
<p>In the next step, the same uploaded log file will be downloaded, written in a GlueJob then uploaded and then propagated to more lambdas and step function entities and so on.</p>
<p>Below is a sample code that I have tried to download a log file from S3, write to it and upload it back to end the module. I am not interested in using Cloudwatch, nor do I want to disable the cloudwatch logs. I just want to plain old download-write-upload.</p>
<pre><code>import logging

logging.basicConfig(filename='/tmp/pipeline.log',
                    level=logging.INFO,
                    format='%(asctime)s %(message)s',
                    filemode='w')


def lambda_handler(event, context):
    download_from_s3(bucket=bucket,  # download pipeline.log from s3
                     key='pipeline.log',
                     to='/tmp/pipeline.log')
    logging.info('Starting Pipeline')  # Add logs to pipeline.log
    upload_to_s3(bucket=bucket,  # Reupload to S3 to be downloaded in next module
                 key='pipeline.log',
                 frm='/tmp/pipeline.log')
    return None
</code></pre>
<p>The output has <strong>no error</strong> and returns <strong>200</strong>. However the pipeline.log file remains empty with a changed timestamp of the most recent lambda run. This code works perfectly with Gluejobs (ipynb uploads), and the written logs are visible in the log file in S3 but somehow I am unable to use the same code to update the log file from Lambdas.</p>
<p>Any idea on how to get this done? I want the same pipeline.log to be downloaded written and uploaded through each modules of the Step function pipeline.</p>
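One likely culprit: <code>basicConfig</code> opens <code>/tmp/pipeline.log</code> (mode <code>'w'</code>) when the module is first imported, and the later S3 download replaces that file, so the handler keeps writing to a stale file object, and nothing is flushed before the upload. A sketch of the safer pattern: open the handler only after the download and close it before the upload. The asker's <code>download_from_s3</code>/<code>upload_to_s3</code> helpers are simulated here with plain local files:

```python
import logging
import os
import tempfile


def append_pipeline_log(log_path, message):
    """Open a handler on the already-downloaded file, append, then close
    so every byte is on disk before the file is uploaded again."""
    handler = logging.FileHandler(log_path, mode="a")
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger = logging.getLogger("pipeline")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    try:
        logger.info(message)
    finally:
        logger.removeHandler(handler)
        handler.close()  # flushed before upload_to_s3 would run


# simulate download -> write -> upload with a local file
log_path = os.path.join(tempfile.mkdtemp(), "pipeline.log")
with open(log_path, "w") as f:
    f.write("earlier module entry\n")  # pretend this came from S3
append_pipeline_log(log_path, "Starting Pipeline")
contents = open(log_path).read()
```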
|
<python><amazon-web-services><logging><aws-lambda><aws-step-functions>
|
2023-03-01 09:39:51
| 2
| 445
|
therion
|
75,601,802
| 4,865,723
|
Get default font size of an empty document in Python-Docx
|
<p>I would like to get the default font size of the current document.</p>
<p><strong>But</strong> the document is empty. There is no paragraph or run I can ask.</p>
<p>The default style "Normal" doesn't have a size set.</p>
<pre><code>print(doc.styles['Normal'].font.size) # None
</code></pre>
<p>So how can I get it?</p>
<p>One workaround would be to create a paragraph. Ask the runs of this paragraph for its font size and delete it then. But deleting a paragraph isn't that easy and still not implemented. And it is a workaround not a solution.</p>
<p>Technically, isn't the default font size somewhere in the document's XML content?</p>
|
<python><python-docx>
|
2023-03-01 09:32:10
| 1
| 12,450
|
buhtz
|
75,601,631
| 940,091
|
Nested loop over all row-pairs in a Pandas dataframe
|
<p>I have a dataframe in the following format with ~80K rows.</p>
<pre><code>df = pd.DataFrame({'Year': [1900, 1902, 1903], 'Name': ['Tom', 'Dick', 'Harry']})
Year Name
0 1900 Tom
1 1902 Dick
2 1903 Harry
</code></pre>
<p>I need to call a function with each combination of the name column as parameters. Currently I am doing this with the following code (substituting print for function call):</p>
<pre><code>for i, n1 in enumerate(df.itertuples()):
    for n2 in df[i:].itertuples():
        print(n1.Name, n2.Name)
</code></pre>
<p>Is there a way to speed this up that I am missing?</p>
<p>PS: I need to keep track of the indices for each name pair. So if I run itertools.combinations on the index, then I still have to make costly df.loc calls.</p>
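The per-pair work can keep indices without any <code>df.loc</code> calls by materialising the <code>(index, name)</code> pairs once; <code>itertools.combinations_with_replacement</code> then reproduces the same self-inclusive pairing as the <code>df[i:]</code> loop:

```python
import itertools

import pandas as pd

df = pd.DataFrame({"Year": [1900, 1902, 1903], "Name": ["Tom", "Dick", "Harry"]})

# one pass over the column; afterwards everything is plain Python objects
records = list(df["Name"].items())  # [(0, 'Tom'), (1, 'Dick'), (2, 'Harry')]

pairs = [((i, a), (j, b))
         for (i, a), (j, b) in itertools.combinations_with_replacement(records, 2)]
```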
|
<python><pandas>
|
2023-03-01 09:17:19
| 6
| 457
|
primelens
|
75,601,614
| 10,829,044
|
Pandas assign value to nearest IF block in else clause
|
<p>I have a pandas dataframe that looks like below</p>
<pre><code>customer_id recency frequency H_cnt Years_with_us
1 0 143 3 0.32
2 14 190 8 1.7
</code></pre>
<p>I would like to do the below</p>
<p>a) If any of my rows is not matched by an <code>IF</code> clause, I want that specific row to be assigned to a nearby if clause based on <code>H_cnt</code></p>
<p>For ex: Nearby if clause is found using <code>H_cnt</code>. If you look at row 1, we can find that <code>H_cnt</code> will not fall into any of the if clauses that I have written.</p>
<p>So, now in else block I would like to write a condition that can assign it to the nearest if block. In this case, nearest if block will be <code>short tenure - promising</code> Because H_cnt = 3 is closer/nearby to H_cnt = 4 (instead of H_cnt >= 9) as shown in <code>1st elif statement in code below</code></p>
<p>Currently my code looks like below</p>
<pre><code>cust = 'customer_id'
classes = []
for row in df.iterrows():
    rec = row[1]
    r = rec['recency']
    f = rec['frequency']
    y = rec['Years_with_us']
    h = rec['H_cnt']
    if ((r <= 11) and (f >= 131) and (y >= 0.6) and (h >= 9)):
        classes.append({rec[cust]: 'Champions'})
    elif ((r <= 11) and (f >= 19) and (y < 0.6) and (h >= 4)):
        classes.append({rec[cust]: 'Short Tenure - Promising'})
    elif ((r <= 62) and (f >= 19) and (y >= 1.5)):
        classes.append({rec[cust]: 'Loyal Customers'})
    elif ((r <= 62) and (f >= 19) and (y >= 0.6) and (y < 1.5)):
        classes.append({rec[cust]: 'Potential Loyalist'})
    elif ((r <= 62) and (f <= 18) and (y >= 0.6)):
        classes.append({rec[cust]: 'New Customers'})
    else:
        print("hi")
        print(row[1])
        classes.append({0: [rec['recency'], rec['frequency'], rec['H_cnt'], rec['Years_with_us']]})
accs = [list(i.keys())[0] for i in classes]
segments = [list(i.values())[0] for i in classes]
df['new_segment'] = df[cust].map(dict(zip(accs, segments)))
</code></pre>
<p>I expect my output to be like as below</p>
<pre><code>customer_id recency frequency H_cnt Years_with_us new_segment
1 0 143 3 0.32 Short Tenure - Promising
2 14 190 8 1.7 Champions
</code></pre>
|
<python><pandas><dataframe><if-statement><group-by>
|
2023-03-01 09:16:02
| 1
| 7,793
|
The Great
|
75,601,593
| 6,730,854
|
Draw open polygon in Tkinter Canvas?
|
<p>How do I draw an open polygon in Tkinter canvas?</p>
<p>I've been playing with the <code>create_polygon</code> option, but that does not seem to have a capability to not close it. I'm writing something where you can draw a polygon, smooth it, and then close it with a double click.</p>
|
<python><tkinter><tkinter-entry>
|
2023-03-01 09:13:12
| 0
| 472
|
Mike Azatov
|
75,601,592
| 3,327,034
|
pandas date_range giving wrong result
|
<p>shouldn't the below example include '2022-01-01'?</p>
<pre><code>>>> import pandas as pd
>>> pd.date_range("2022-01-03", "2023-12-31", freq='MS')
DatetimeIndex(['2022-02-01', '2022-03-01', '2022-04-01', '2022-05-01',
               '2022-06-01', '2022-07-01', '2022-08-01', '2022-09-01',
               '2022-10-01', '2022-11-01', '2022-12-01', '2023-01-01',
               '2023-02-01', '2023-03-01', '2023-04-01', '2023-05-01',
               '2023-06-01', '2023-07-01', '2023-08-01', '2023-09-01',
               '2023-10-01', '2023-11-01', '2023-12-01'],
              dtype='datetime64[ns]', freq='MS')
</code></pre>
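This is expected: <code>freq='MS'</code> (month start) only emits month-start dates on or after the start argument, and 2022-01-03 is already past 2022-01-01. Starting the range on the 1st shows the difference:

```python
import pandas as pd

# start on the 1st and January is included; start on the 3rd and it is skipped
idx_incl = pd.date_range("2022-01-01", "2022-04-01", freq="MS")
idx_skip = pd.date_range("2022-01-03", "2022-04-01", freq="MS")
```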
|
<python><pandas>
|
2023-03-01 09:13:11
| 1
| 405
|
user3327034
|
75,601,543
| 1,826,066
|
Access newly created column in .with_columns() when using polars
|
<p>I am new to Polars and I am not sure whether I am using <code>.with_columns()</code> correctly.</p>
<p>Here's a situation I encounter frequently:
There's a dataframe and in <code>.with_columns()</code>, I apply some operation to a column. For example, I convert some dates from <code>str</code> to <code>date</code> type and then want to compute the duration between start and end date. I'd implement this as follows.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
pl.DataFrame(
{
"start": ["01.01.2019", "01.01.2020"],
"end": ["11.01.2019", "01.05.2020"],
}
).with_columns(
pl.col("start").str.to_date(),
pl.col("end").str.to_date(),
).with_columns(
(pl.col("end") - pl.col("start")).alias("duration"),
)
</code></pre>
<p>First, I convert the two columns, next I call <code>.with_columns()</code> again.</p>
<p>Something shorter like this does not work:</p>
<pre class="lang-py prettyprint-override"><code>pl.DataFrame(
{
"start": ["01.01.2019", "01.01.2020"],
"end": ["11.01.2019", "01.05.2020"],
}
).with_columns(
pl.col("start").str.to_date(),
pl.col("end").str.to_date(),
(pl.col("end") - pl.col("start")).alias("duration"),
)
</code></pre>
<pre><code># InvalidOperationError: sub operation not supported for dtypes `str` and `str`
</code></pre>
<p>Is there a way to avoid calling <code>.with_columns()</code> twice and to write this in a more compact way?</p>
|
<python><dataframe><python-polars>
|
2023-03-01 09:08:10
| 3
| 1,351
|
Thomas
|
75,601,403
| 5,672,950
|
where to store decision trees and multiple regression models?
|
<p>I have implemented decision tree and multiple regression models. I am planning to deploy them and access classification via REST (most likely called from Python). The only question is how and where to store those models. Should I store them in Mongo as JSON, or somewhere else? I don't want to recreate the model every time a classification request comes in. I know that keras/tensorflow models are stored easily on Amazon. What about more trivial algorithms?</p>
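For scikit-learn estimators the usual pattern is to serialise the fitted model once (pickle or joblib), store the bytes wherever convenient (a file, S3, MongoDB GridFS), and load them once at service start-up so nothing is rebuilt per request. A minimal stdlib sketch; the <code>model_state</code> dict is a stand-in for a real fitted model:

```python
import pickle

# stand-in for a fitted model's state (e.g. tree structure / coefficients)
model_state = {"kind": "linear", "coef": [2.0, -1.0], "intercept": 0.5}

blob = pickle.dumps(model_state)   # bytes: write once to a file, S3, or GridFS
restored = pickle.loads(blob)      # load once at service start, reuse per request


def predict(state, x):
    # toy linear prediction using the restored state
    return sum(c * v for c, v in zip(state["coef"], x)) + state["intercept"]
```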
|
<python><scikit-learn><artificial-intelligence><decision-tree>
|
2023-03-01 08:50:49
| 0
| 954
|
Ernesto
|
75,601,358
| 7,760,910
|
Assertion error for BytesIO object in Python unittest
|
<p>I have a core logic classes like below:</p>
<pre><code>class SomeDBTManifestProvider:
    def __init__(self):
        pass

    def extract_manifest_json(self, file_stream: BytesIO):
        try:
            zipf = zipfile.ZipFile(file_stream)
            manifest_json_text = [zipf.read(name) for name in zipf.namelist() if "/manifest.json" in name][0].decode("utf-8")
            return json.loads(manifest_json_text)
        except Exception as e:
            raise FetchManifestJsonException(e)


class DbtProvisioner(object):
    def __init__(self, dbt_manifest_provider: SomeDBTManifestProvider):
        self.dbt_manifest_provider = dbt_manifest_provider

    def provision(self, provision_request: ProvisionRequest):
        self.dbt_manifest_provider.extract_manifest_json(BytesIO(b'ABC'))
        return "Success"
</code></pre>
<p>Where <code>ProvisionRequest</code> is nothing but a <code>DataClass</code> and it can be ignored for now. As it can be seen <code>DbtProvisioner</code> is dependent on <code>SomeDBTManifestProvider</code>, therefore I have used D.I to access the method of <code>SomeDBTManifestProvider</code>. Now, I want to test <code>DbtProvisioner</code> for which I wrote the below test case:</p>
<pre><code>def test_dbt_provision(self):
    provision_request = ProvisionRequest(
        DataProduct("a", "b", "c", "d", "e", "f", {}, [{}]),
        Workload("ab", "cd", "ef", "gh", DbtAttr("path")),
    )
    dbt_manifest = Mock()
    actual = DbtProvisioner(dbt_manifest).provision(provision_request)
    dbt_manifest.extract_manifest_json.assert_called_with(BytesIO(b'ABC'))
    self.assertEqual("Success", actual)
</code></pre>
<p>Now, using the above approach I want to ensure that my method has been called with the right set of arguments. But somehow, it is giving me the below error:</p>
<pre><code>Expected: extract_manifest_json(<_io.BytesIO object at 0x10d675900>)
Actual: extract_manifest_json(<_io.BytesIO object at 0x10f8efef0>)
</code></pre>
<p>How do I resolve this error? What am I missing here?</p>
<p>I also referred to multiple articles from SO but didn't help in any way. TIA</p>
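Two <code>BytesIO</code> objects never compare equal, even with identical bytes, so <code>assert_called_with(BytesIO(b'ABC'))</code> compares two distinct objects and fails. A sketch of the usual workaround: assert the call happened, grab the actual argument, and compare its payload:

```python
from io import BytesIO
from unittest.mock import Mock

dbt_manifest = Mock()
# stands in for DbtProvisioner(dbt_manifest).provision(...) driving the call
dbt_manifest.extract_manifest_json(BytesIO(b"ABC"))

dbt_manifest.extract_manifest_json.assert_called_once()
(stream,), _kwargs = dbt_manifest.extract_manifest_json.call_args
payload = stream.getvalue()
```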
|
<python><python-3.x><unit-testing><python-unittest>
|
2023-03-01 08:46:10
| 1
| 2,177
|
whatsinthename
|
75,601,314
| 9,974,205
|
Cplex optimization program returns results equal to zero
|
<p>I am currently working on an optimization problem in which a lake has 150 units of water. I am paid 3$ for each unit of water sold, but I need to guarantee that 100 units of water will remain at the end of the month or pay 5$ for each unit below the threshold of 100. I know that rain will bring 125 units of water (later on I will add stochastic rain).</p>
<p>My model is as follows</p>
<pre><code>!pip install cplex
!pip install docplex
from docplex.mp.model import Model
from docplex.mp.environment import Environment
env = Environment()
env.print_information()
mdl = Model()
x = mdl.continuous_var(lb=None, ub=None, name=None )
y = mdl.continuous_var(lb=None, ub=None, name=None )
r1=mdl.add_constraint( 150-x+y+125 >= 100 )
s = mdl.solve()
mdl.maximize( 3*x-5*y )
obj = mdl.objective_value
print(x.solution_value)
print(y.solution_value)
print("* best objective is: {:g}".format(obj))
mdl.export("modelo_determinista_bajo.lp")
</code></pre>
<p>where x is the amount of water sold and y is the amount of water below the 100 units mark.</p>
<p>The output of the model is zero for x, y and the benefit.</p>
<p>I cannot see what I am doing wrong. Can someone help me?
Best regards.</p>
|
<python><optimization><cplex><solver><stochastic>
|
2023-03-01 08:41:50
| 2
| 503
|
slow_learner
|
75,601,308
| 6,610,407
|
Aggregating dataframe rows using groupby, combining multiple columns
|
<p>I have the following pandas dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from datetime import date, timedelta
df = pd.DataFrame(
(
(date(2023, 2, 27), timedelta(hours=0.5), "project A", "planning"),
(date(2023, 2, 27), timedelta(hours=1), "project A", "planning"),
(date(2023, 2, 27), timedelta(hours=1.5), "project A", "execution"),
(date(2023, 2, 27), timedelta(hours=0.25), "project B", "planning"),
(date(2023, 2, 28), timedelta(hours=3), "project A", "wrapup"),
(date(2023, 2, 28), timedelta(hours=3), "project B", "execution"),
(date(2023, 2, 28), timedelta(hours=2), "project B", "miscellaneous"),
),
columns=("date", "duration", "project", "description"),
)
print(df)
>>> date duration project description
>>> 0 2023-02-27 0 days 00:30:00 project A planning
>>> 1 2023-02-27 0 days 01:00:00 project A planning
>>> 2 2023-02-27 0 days 01:30:00 project A execution
>>> 3 2023-02-27 0 days 00:15:00 project B planning
>>> 4 2023-02-28 0 days 03:00:00 project A wrapup
>>> 5 2023-02-28 0 days 03:00:00 project B execution
>>> 6 2023-02-28 0 days 02:00:00 project B miscellaneous
</code></pre>
<p>I want to carry out aggregation for the <code>duration</code> and <code>description</code> columns, grouping by <code>date</code> and <code>project</code>. The result should look something like:</p>
<pre class="lang-py prettyprint-override"><code>result = pd.DataFrame(
(
(
date(2023, 2, 27),
"project A",
timedelta(hours=3),
"planning (1.5), execution (1.5)",
),
(date(2023, 2, 27), "project B", timedelta(hours=0.25), "planning"),
(date(2023, 2, 28), "project A", timedelta(hours=3), "wrapup"),
(
date(2023, 2, 28),
"project B",
timedelta(hours=5),
"execution (3), miscellaneous (2)",
),
),
columns=("date", "project", "duration", "description"),
)
print(result)
>>> date project duration description
>>> 0 2023-02-27 project A 0 days 03:00:00 planning (1.5), execution (1.5)
>>> 1 2023-02-27 project B 0 days 00:15:00 planning
>>> 2 2023-02-28 project A 0 days 03:00:00 wrapup
>>> 3 2023-02-28 project B 0 days 05:00:00 execution (3), miscellaneous (2)
</code></pre>
<p>Aggregating the <code>duration</code> column is easy using <code>groupby()</code>:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby(by=["date", "project"])["duration"].sum().to_frame().reset_index()
</code></pre>
<p>But I'm unsure how to handle the <code>description</code> column with <code>groupby()</code>. I considered using <code>DataFrameGroupBy.apply()</code> with custom functions on two levels, one for grouping by <code>date</code> and <code>project</code>, and one by <code>description</code>. Something like:</p>
<pre class="lang-py prettyprint-override"><code>def agg_description(df):
...
def agg_date_project(df):
...
agg_description(...)
...
df.groupby(by=["date", "project"])["duration","description"].apply(agg_date_project)
</code></pre>
<p>But I can't figure it out. A complicating factor is that the aggregation for the <code>description</code> column is based on the <code>duration</code> column as well.
I could do it "manually" (e.g. using loops) but if possible I'd like to do it using <code>groupby()</code> as well.</p>
|
<python><group-by>
|
2023-03-01 08:41:21
| 3
| 475
|
MaartenB
|
75,600,954
| 1,794,617
|
pathlib takes more time traversing directory recursively than os.walk()
|
<p>I'm experimenting to determine if the <code>pathlib</code> module is an improvement over the <code>os</code> for directory traversal. To my surprise, I am getting better readings from the <code>os</code> module when compared to <code>pathlib</code>. Which is something I was not expecting. Is it because the <code>os</code> module is dumb enough to not care if the path string represents a file or a directory or a link etc? So speed vs better control?</p>
<p>Perhaps I am not using <code>pathlib</code> the way it should be used for this.</p>
<p>Here's the code:</p>
<pre><code>import os
import sys
import pathlib
import time
import pdb


def TraverseDir(path=None, oswalk=None):
    if path is None:
        path = pathlib.Path().home()
    oswalk = True if (oswalk == 'True') else False
    if (oswalk == True):
        method = "oswalk"
    else:
        method = "Pathlib"
    start = time.time()
    count = 0
    with open("filelist" + '_' + method, "w+") as file:
        if (oswalk):
            for (_, _, fnames) in os.walk(path):
                for fname in fnames:
                    count += 1
                    file.write(fname + '\n')
                continue
        else:
            for Fullpath in pathlib.Path(path).rglob("*"):
                if Fullpath.is_file():
                    count += 1
                    file.write(str(Fullpath.name) + '\n')
                continue
    end = time.time()
    print(f"Took {end - start} seconds with {method}, counted {count} files")


if __name__ == '__main__':
    try:
        path = sys.argv[1]
        if ((path.lower() == 'true') or (path.lower() == 'false')):
            oswalk = path
            path = None
        else:
            oswalk = sys.argv[2]
    except IndexError:
        path = None
        oswalk = None
    TraverseDir(path, oswalk)
</code></pre>
<p>Is this the most optimum way this <code>pathlib</code> should be used for traversing a directory tree? Please shed some light on this.</p>
<p>UPDATE1: Now that I know that <code>pathlib</code> is not a competitor (so to speak) of <code>os</code>, but rather a complement, I will resort to mixing them both when need be.</p>
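Mixing them is straightforward: let <code>os.walk()</code> (which uses <code>os.scandir()</code> under the hood) do the fast traversal, and wrap each hit in a <code>Path</code> only when the object-oriented API is needed:

```python
import os
import tempfile
from pathlib import Path

# build a tiny tree to walk
root = Path(tempfile.mkdtemp())
(root / "sub").mkdir()
(root / "a.txt").write_text("x")
(root / "sub" / "b.txt").write_text("y")

# os.walk for speed, pathlib objects for ergonomics on each result
files = sorted(Path(dirpath) / name
               for dirpath, _, names in os.walk(root)
               for name in names)
```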
|
<python><python-3.x><pathlib>
|
2023-03-01 08:05:18
| 2
| 900
|
Skegg
|
75,600,944
| 14,715,170
|
How to insert business logic in django template?
|
<p>I am working on a review rating form, and I want to display the rating stars given by the user. I get an int rating value (out of 5) from the database.</p>
<p>However, I want to implement logic in the Django template that would look like the following code:</p>
<pre><code>a = "THIS IS A STRING"
b = "THIS IS B STRING"
max_val = 5
def review_rating(val):
flag = 0
for i in range(max_val):
if flag == 0:
for j in range(val):
print(a)
flag = 1
new_val = max_val - val
for k in range(new_val):
print(b)
break
review_rating(1)
</code></pre>
<p>Note: <code>val</code> is the rating value from database.</p>
<p>I have tried with filters,</p>
<p>Following is my filter code,</p>
<pre><code>@register.filter(name='subtract')
def subtract(value, arg):
    return int(value) - int(arg)
</code></pre>
<p>and following is my django template code,</p>
<pre><code><p class="starsnd-small">
    {% with ''|center:review.review_star as range %}
        {% for _ in range %}
            <b value="" id="checked" href="#"></b>
        {% endfor %}
    {% endwith %}
    {% with 5|subtract:review.review_star as range %}
        {% for _ in range %}
            <b value="" href="#"></b>
        {% endfor %}
    {% endwith %}
</p>
</code></pre>
<p>Can anyone help ?</p>
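Template logic like this is usually easier as a tiny helper that runs in the view (or a custom filter) and hands the template a ready-made list of booleans, one per star, with <code>True</code> meaning filled; the template then only loops:

```python
def star_flags(rating, max_val=5):
    """One flag per star: True = filled, False = empty."""
    return [i < rating for i in range(max_val)]


# e.g. pass star_flags(review.review_star) into the template context, then:
# {% for filled in flags %}{% if filled %}<b id="checked"></b>{% else %}<b></b>{% endif %}{% endfor %}
flags = star_flags(1)
```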
|
<python><python-3.x><django><django-templates><django-filter>
|
2023-03-01 08:04:37
| 1
| 334
|
sodmzs1
|