Dataset columns (with observed min/max values from the dataset viewer):

- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, 15 to 150 characters
- QuestionBody: string, 40 to 40.3k characters
- Tags: string, 8 to 101 characters
- CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, 3 to 30 characters
78,523,132
9,751,398
Actual Python version in conda environment when running in Google Colab?
<p>I run notebooks in Google Colab and there I install miniconda and create an appropriate environment for the application. It is good to be able to read out what Python version you actually have in the environment.</p> <p>For example:</p> <p>Calling platform.python_version() gives 3.10.12</p> <p>Calling !Python --version gives 3.12.2</p> <p>Calling !conda list python gives also 3.12.2</p> <p>Seems to me that platform.python_version() refers to some other Python installation that I am not aware of. Likely in this example what !conda list gives is the relevant information for the local environment, right?</p> <p>And what is this other Python installation really?</p>
<python><jupyter-notebook><google-colaboratory><miniconda>
2024-05-23 12:30:37
1
1,156
janpeter
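The mismatch described in this question comes from two different interpreters: `platform.python_version()` describes the process running the notebook cells (the Colab kernel, started from Colab's system Python), while `!python --version` spawns a new shell process that resolves `python` via `PATH`, which conda activation points at the environment. A small sketch to see which interpreter each call refers to:

```python
import platform
import sys

# Describes THIS process: the notebook kernel, which Colab started from
# its own system Python, not from the conda environment.
print(platform.python_version())

# The path of that kernel interpreter binary.
print(sys.executable)

# By contrast, a shell command such as `!python --version` starts a fresh
# process and finds `python` on PATH, so it reports the conda env's version.
```

So `!conda list python` is indeed the relevant answer for the environment; the "other" installation is the interpreter the kernel itself runs on.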
78,523,114
1,084,174
Why am I getting an IndexError?
<p>Why following code:</p> <pre><code>import torch import torch.nn as nn input_tensor = torch.tensor([[2.0]]) target_tensor = torch.tensor([0], dtype=torch.long) loss_function = nn.CrossEntropyLoss() loss = loss_function(input_tensor, target_tensor) print(loss) input_tensor = torch.tensor([[2.0]]) target_tensor = torch.tensor([1], dtype=torch.long) loss_function = nn.CrossEntropyLoss() loss = loss_function(input_tensor, target_tensor) print(loss) </code></pre> <p>showing following error:</p> <pre><code>--------------------------------------------------------------------------- IndexError Traceback (most recent call last) Cell In[214], line 14 12 target_tensor = torch.tensor([1], dtype=torch.long) 13 loss_function = nn.CrossEntropyLoss() ---&gt; 14 loss = loss_function(input_tensor, target_tensor) 15 print(loss) File ~/jupyter-env-3.10/lib/python3.10/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs) 1106 # If we don't have any hooks, we want to skip the rest of the logic in 1107 # this function, and just call forward. 
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -&gt; 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] File ~/jupyter-env-3.10/lib/python3.10/site-packages/torch/nn/modules/loss.py:1163, in CrossEntropyLoss.forward(self, input, target) 1162 def forward(self, input: Tensor, target: Tensor) -&gt; Tensor: -&gt; 1163 return F.cross_entropy(input, target, weight=self.weight, 1164 ignore_index=self.ignore_index, reduction=self.reduction, 1165 label_smoothing=self.label_smoothing) File ~/jupyter-env-3.10/lib/python3.10/site-packages/torch/nn/functional.py:2996, in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing) 2994 if size_average is not None or reduce is not None: 2995 reduction = _Reduction.legacy_get_string(size_average, reduce) -&gt; 2996 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) IndexError: Target 1 is out of bounds. </code></pre>
<python><pytorch><loss-function>
2024-05-23 12:27:10
1
40,671
Sazzad Hissain Khan
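The error in this question follows from cross-entropy's shape contract: the input `[[2.0]]` has shape `(1, 1)`, i.e. exactly one class, so the only legal target index is 0 and `Target 1 is out of bounds`. A PyTorch-free NumPy sketch of the same contract (function name and shapes here are illustrative):

```python
import numpy as np

def cross_entropy(logits, target):
    """Cross-entropy for one sample; logits has shape (1, C), target < C."""
    logits = np.asarray(logits, dtype=float)
    num_classes = logits.shape[1]
    if not 0 <= target < num_classes:
        raise IndexError(f"Target {target} is out of bounds.")
    # log-softmax, stabilised by subtracting the max logit
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[0, target]

# One logit per class: with two classes, targets 0 and 1 are both valid.
loss = cross_entropy([[2.0, 0.0]], 1)
```

The fix in the question's code is the same idea: give the input one column per class, e.g. `torch.tensor([[2.0, 0.0]])`, so that target 1 is in range.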
78,523,087
14,463,396
Pandas .replace() with multiple many-to-one mappings
<p>I have a pandas column with some strings in. I want to group up the strings that are similar, and replace with their category. In my real example, I have 6 different strings, and I wanted to replace then with 3 different strings for their categories.</p> <p>I found <a href="https://stackoverflow.com/questions/46920454/how-to-replace-multiple-values-with-one-value-python">this answer</a> for how to map many values to 1 using in the replace() function, so I tried expanding some of the answers to do a many to 1 mapping for multiple groups, however not all of my values were correctly changed and I'm not sure why.</p> <p>As an example:</p> <pre><code>df1 = pd.DataFrame({'col1':['foo', 'foo too', 'bar', 'BAR', 'bar ii']}) col1 0 foo 1 foo too 2 bar 3 BAR 4 bar ii </code></pre> <p>From one of the answers, it looked like you could use '|' to separate different key options if you used regex, so I did this like below:</p> <pre><code>df1['col1'].replace({'foo|foo too' : 'Foo', 'bar|BAR|bar ii' : 'Bar'}, regex=True) </code></pre> <p>Which converted most of my strings, but not all:</p> <pre><code> col1 0 Foo 1 Foo too 2 Bar 3 Bar 4 Bar ii </code></pre> <p>From this example, I would guess something to do with the spaces? although in my actual example some of my strings with spaces did get correctly replaced, so I'm not sure. Any help with why this doesn't work/how I could achieve what I'm after would be appreciated</p>
<python><pandas>
2024-05-23 12:21:42
1
3,395
Emi OB
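The behaviour in this question is regex alternation order, not spaces: alternatives are tried left to right, so in `'foo|foo too'` the shorter `foo` matches first and replaces only that substring, leaving `'Foo too'`. Anchoring each pattern to the whole cell (or listing the longest alternative first) fixes it. A sketch:

```python
import pandas as pd

df1 = pd.DataFrame({'col1': ['foo', 'foo too', 'bar', 'BAR', 'bar ii']})

# With ^...$ anchors, only a full-cell match counts; the regex engine
# backtracks to the longer alternative when the shorter one fails at $.
out = df1['col1'].replace(
    {r'^(foo|foo too)$': 'Foo', r'^(bar|BAR|bar ii)$': 'Bar'},
    regex=True,
)
```

Ordering alternatives longest-first (`'foo too|foo'`) works too, but anchors make the intent explicit and are robust to future additions.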
78,523,044
12,137,722
MT5 python module: downloading OHLC data
<p>I am trying to download OHLC data from MT5 using the python module. if I use copy_rates_from_pos, all works fine.</p> <p>If I use dates, the problem is the resulting data is not aligned with the times on the chart.</p> <p>Background: My broker is GMT +2/+3. It changes daylight savings based upon NYC. So there are no timezone that fits this situation (for example, I used Athens, but daylight savings changes on a different date).</p> <p>I have tried 3 things:</p> <ol> <li>I just put date range with datetime objects with no timezone information.</li> </ol> <pre><code>Ex. Code: now_date = datetime.now()). </code></pre> <p>Result: It gives me candles 3 hours before the latest candle on the current chart. For example, the latest candle on the chart is 8:03. The latest candle in the dataframe is 5:03. Now the candle's OHLC data in the dataframe is accurate to the OHLC 5:03 candle on the chart.</p> <ol start="2"> <li>I tried changing timezone to UTC, as per suggestion on MT5 website.</li> </ol> <pre><code>Ex. Code: now_date = datetime.now(pytz.timezone(&quot;Etc/UTC&quot;)). </code></pre> <p>Result: it gave a time of 1 hour behind the latest candle on the chart. The OHLC data does not line up with any candles of the same minutes on the chart (meaning, I compared to latest candle, 1 hour behind that, 1 hour back, etc.).</p> <ol start="3"> <li>I used UTC and NYC timezones and converted it using GMT offset of my broker. Ex. Code:</li> </ol> <pre><code>nyc_timezone = pytz.timezone('America/New_York') utc_now = datetime.now(pytz.utc) nyc_time = utc_now.astimezone(nyc_timezone) if nyc_time.dst() != timedelta(0): broker_offset = 3 - (-4) # +3 GMT minus -4 GMT (NYC DST offset) else: broker_offset = 2 - (-5) # +2 GMT minus -5 GMT (NYC standard offset) broker_time = nyc_time + timedelta(hours=broker_offset) broker_time = broker_time.replace(second=0, microsecond=0) </code></pre> <p>Result: it gave a time of 1 hour behind the latest candle on the chart. 
The OHLC data does not line up with any candles of the same minutes on the chart (meaning, I compared to latest candle, 1 hour behind that, 1 hour back, etc.).</p> <p>So, can someone guide me on how to download data using MT5 module using dates? I cannot figure this out. Thanks in advance.</p>
<python><pandas><metatrader5>
2024-05-23 12:12:50
1
314
Jonathan Stearns
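Whatever copy function is used, the "GMT+2/+3 switching on New York's DST dates" rule described in this question can be encoded directly with `zoneinfo`, by asking whether New York observes DST at the instant in question. This is a sketch of the offset logic only (the +2/+3 rule is the question's assumption about its broker; the MetaTrader5 calls are unchanged):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def broker_utc_offset(dt_utc: datetime) -> timedelta:
    """Offset of a broker on UTC+3 while New York is on DST, else UTC+2."""
    ny = dt_utc.astimezone(ZoneInfo("America/New_York"))
    return timedelta(hours=3 if ny.dst() else 2)

def to_broker_time(dt_utc: datetime) -> datetime:
    # Naive datetime on the broker's clock, for comparing with chart times
    return (dt_utc + broker_utc_offset(dt_utc)).replace(tzinfo=None)
```

Note the MetaTrader5 docs recommend passing timezone-aware UTC datetimes to the copy functions; the helper above is only for aligning the returned bar times with the chart's broker clock.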
78,522,584
8,675,314
Is it possible to change the display of a Binaryfield model attribute in the Django admin panel, based on the value of another attribute?
<p>I'm working on a project where I have a model which stores some BinaryField in the database.</p> <pre class="lang-py prettyprint-override"><code> class Entry(models.Model): ... params = models.JSONField() body = models.BinaryField() ... </code></pre> <p>params will have a tag with the type of the body item. For example it might be html, and the body will be the html text or an image, and the body could be an image etc.</p> <p>Right now, the details of Django admin show something like this:</p> <pre><code> Body: &lt;memory at 0x7355581f0e80&gt; </code></pre> <p>Is it possible to change how django admin displays this info, based on the info I have from the other attributes?</p> <p>Thanks for your time!</p>
<python><django><django-models><django-admin>
2024-05-23 10:48:13
1
415
Basil
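This is possible: the Django admin can show a computed read-only field instead of the raw memoryview. The type dispatch itself can be a plain testable function; the `"type"` key and the `html`/`image` values below are assumptions taken from the question's description of `params`:

```python
import base64

def render_body_preview(params: dict, body: bytes) -> str:
    """Build an HTML snippet for the admin from the type tag in params."""
    kind = params.get("type")  # assumed key name; adjust to the real tag
    if kind == "html":
        return bytes(body).decode("utf-8", errors="replace")
    if kind == "image":
        # MIME type is assumed; store it in params if it varies
        b64 = base64.b64encode(bytes(body)).decode("ascii")
        return f'<img src="data:image/png;base64,{b64}" style="max-width:400px"/>'
    return f"<binary: {len(body)} bytes>"
```

In the `ModelAdmin`, wrap it in a method, e.g. `def body_preview(self, obj): return mark_safe(render_body_preview(obj.params, obj.body))`, and list `"body_preview"` in `readonly_fields`; `mark_safe` (or `format_html`) is what stops the admin escaping the markup, so only use it on content you trust or escape first.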
78,522,348
1,473,517
How to register np.float128 as a valid numba type so I can sum an array?
<p>I want to write a numba function that takes the sum of an array of np.float128s. These have 80 bits of precision on my set up but if it is easier to cast them to real float128s I would be happy with that as well. My goal is to be able to sum an array of np.float128s quickly and without loss of precision, ultimately returning a np.float128 to my python code.</p> <p>To do this I am trying to copy the model from <a href="https://github.com/gmarkall/numba-bfloat16/blob/main/prototype.py" rel="nofollow noreferrer">https://github.com/gmarkall/numba-bfloat16/blob/main/prototype.py</a>. I am skipping the parts relating to CUDA because I will <em>not</em> be writing CUDA code but I don't know if I need to replace them with something.</p> <p>So far I have:</p> <pre><code>import numpy as np from numba.core.types.scalars import Number from numba.np import numpy_support class Float128(Number): def __init__(self, *args, **kws): super().__init__(name='float128') float128_type = Float128() numpy_support.FROM_DTYPE[np.dtype(np.float128)] = float128_type </code></pre> <p>The next part from the link is:</p> <pre><code>@intrinsic def bfloat16_add(typingctx, a, b): sig = bfloat16_type(bfloat16_type, bfloat16_type) def codegen(context, builder, sig, args): i16 = ir.IntType(16) function_type = ir.FunctionType(i16, [i16, i16]) instruction = (&quot;{.reg.b16 one; &quot; &quot;mov.b16 one, 0x3f80U; &quot; &quot;fma.rn.bf16 $0, $1, one, $2;}&quot;) asm = ir.InlineAsm(function_type, instruction, &quot;=h,h,h&quot;) return builder.call(asm, args) return sig, codegen </code></pre> <p>What should I replace bfloat16_add with for my float128? I think I need to define addition for numba....?</p> <h1>Edit</h1> <p>I will be doing sums on millions of arrays of length less than 10k. For arrays of float64s, fast numba code is many times faster than np,sum, especially if you use fastmath=True. 
For example,</p> <pre><code>import numba as nb import numpy as np @nb.njit(cache=True, fastmath=True) def fast_sum(arr): s = 0.0 for v in arr: s += v return s @nb.njit(cache=True) def numba_sum_nofastmath(arr): s = 0.0 for v in arr: s += v return s A = np.random.random(1000) %timeit fast_sum(A) 404 ns ± 2.03 ns per loop (mean ± std.dev. of 7 runs, 1,000,000 loops each) %timeit numba_sum_nofastmath(A) 964 ns ± 14.4 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) %timeit np.sum(A) 2.86 µs ± 12.9 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) %timeit A.sum() 1.72 µs ± 34.9 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each </code></pre> <p>If I convert the type to np.float128 I get:</p> <pre><code>A_float128 = A.astype(np.float128) %timeit np.sum(A_float128) 4.3 µs ± 13.7 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) %timeit A_float128.sum() 3.17 µs ± 70 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) </code></pre> <p>but fast_sum(A_float128) fails to run with</p> <pre><code>NumbaValueError: Unsupported array dtype: float128 </code></pre> <p>I already have numpy.float128s in my code. I would like to be able to use them with numba without having to convert them all to float64s.</p>
<python><numba><llvm-ir>
2024-05-23 10:05:08
3
21,513
Simd
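Numba has no lowering for float128/longdouble arithmetic, so registering the dtype as above still leaves nothing for `+=` to compile to. One alternative technique (not the float128 registration asked about) that numba can compile and that recovers most of the lost precision on float64 hardware is compensated (Kahan) summation:

```python
def kahan_sum(arr):
    """Compensated summation: tracks the rounding error of each addition."""
    s = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in arr:
        y = v - c
        t = s + y
        c = (t - s) - y  # (t - s) recovers what was actually added to s
        s = t
    return s
```

Decorate with `@numba.njit` in real use, but not with `fastmath=True`: fastmath licenses the compiler to algebraically cancel the `(t - s) - y` compensation term, silently turning this back into a naive sum.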
78,522,302
3,608,591
Python (Jupyter) Notebook in browser: no horizontal scrollbar at the bottom of a cell?
<p>I use python notebook (browser) often.</p> <p>There's a vertical scrollbar on a cell, but <strong>no horizontal scrollbar</strong> to allow me to access columns on the <strong>right side of a cell</strong>:</p> <p><a href="https://i.sstatic.net/BHifLl3z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHifLl3z.png" alt="enter image description here" /></a></p> <p>The only way I found to see columns on the right is to select text in the displayed dataframe, and drag the mouse to the right.</p> <p>I've tried pandas options:</p> <pre><code>pd.set_option('display.max_columns', 70) pd.set_option('display.max_rows', 500) pd.set_option('display.width', 1000) from IPython.display import display, HTML display(HTML(&quot;&lt;style&gt;.container { width:100% !important; }&lt;/style&gt;&quot;)) </code></pre> <p>I've even tried full screen, and dragging the 'expand' icon on a cell, nothing works.</p> <p>Am I missing something really obvious ?</p>
<python><pandas><jupyter-notebook><display>
2024-05-23 09:57:57
1
317
Xhattam
78,522,213
16,219,380
How to use elements created in a kivy file to create another element in another kivy file?
<p>I'm creating a button in StackLayout in file <code>info_options.kv</code>. Now I want to call a function in class <code>InfoOptionsPanel</code> whenever this button is pressed. I'm able to call this funtion when I'm replacing <code>return MainPanel()</code> with <code>return InfoOptionsPanel()</code> i.e. when I'm not using the layout created by <code>MainPanel</code>, but when I'm using <code>MainPanel</code> class for gui then button gives below, also note that there is no problem in GUI even when using <code>MainPanel</code> for GUI.</p> <p>error thrown</p> <pre><code>File &quot;C:\Users\..\AppData\Local\Programs\Python\Python312\Lib\site-packages\kivy\lang\builder.py&quot;, line 60, in custom_callback exec(__kvlang__.co_value, idmap) File &quot;K:\path\to\project\Info_options.kv&quot;, line 5, in &lt;module&gt; on_press: self.parent.on_press() ^^^^^^^^^^^^^^^^ AttributeError: 'InfoOptionsPanel' object has no attribute 'on_press' </code></pre> <p>info.kv</p> <pre><code>&lt;InfoPanel@BoxLayout&gt; orientation: 'vertical' size_hint_y: 0.8 pos_hint: {&quot;top&quot;: 1} Label: id: info_label text:&quot;Click button&quot; font_size:50 </code></pre> <p>info_options.kv</p> <pre><code>&lt;InfoOptionsPanel@StackLayout&gt;: orientation: 'lr-tb' Button: text: 'Button' on_press: self.parent.on_press() size_hint :.7, 0.1 </code></pre> <p>main_screen.kv</p> <pre><code>&lt;MainPanel@BoxLayout&gt;: orientation: 'vertical' AnchorLayout: anchor_x: 'left' anchor_y: 'top' InfoPanel: size: 100, 100 AnchorLayout: anchor_x: 'right' anchor_y: 'bottom' InfoOptionsPanel: size: 100, 100 </code></pre> <p>main.py</p> <pre class="lang-py prettyprint-override"><code> from kivy.app import App from kivy.lang.builder import Builder from kivy.uix.boxlayout import BoxLayout from kivy.uix.stacklayout import StackLayout import os kivy_design_files = [&quot;Info_options&quot;, &quot;Info&quot;, &quot;main_screen&quot;,] for kv_file in kivy_design_files: 
Builder.load_file(os.path.join(os.path.abspath(os.getcwd()), kv_file + &quot;.kv&quot;)) class InfoPanel(BoxLayout): pass class InfoOptionsPanel(StackLayout): def on_press(self): print(&quot;Button pressed\n&quot;*55) class MainPanel(BoxLayout): pass class MyApp(App): def build(self): return MainPanel() if __name__ == '__main__': MyApp().run() </code></pre>
<python><user-interface><kivy>
2024-05-23 09:41:01
1
350
Aman
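A likely cause of the error in this question is the `@StackLayout` suffix in the kv rule: `<InfoOptionsPanel@StackLayout>` declares a new *dynamic class* in kv, so the widget the file builds is not the Python `InfoOptionsPanel` that defines `on_press`. Dropping the `@...` part makes the rule apply to the Python class, and `root` is the conventional way to reach it from a child widget. A sketch of the corrected `info_options.kv` (untested against the full project; the same change applies to `<InfoPanel@BoxLayout>` and `<MainPanel@BoxLayout>` since those classes exist in main.py):

```
<InfoOptionsPanel>:
    orientation: 'lr-tb'
    Button:
        text: 'Button'
        size_hint: .7, 0.1
        # `root` is the widget this rule is applied to, i.e. InfoOptionsPanel
        on_press: root.on_press()
```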
78,522,199
1,191,058
Type hint for descriptor in abstract class/interface
<p>I want to have a class with custom properties defined by descriptor classes and an interface that hides implementation details.</p> <p>My code currently looks like this. It works, type checkers do not complain, but it looks ugly because of the <code>cast</code> from <code>Ten</code> to <code>int</code>. Intermediate function <code>ten</code> also complicates things if I want the descriptor to be <code>Generic</code>.</p> <pre class="lang-py prettyprint-override"><code>from typing import cast class Ten: def __get__(self, obj, objtype=None) -&gt; int: return 10 def __set__(self, obj, value: int) -&gt; None: pass # just to make Ten &quot;writeable&quot; def ten() -&gt; int: return cast(int, Ten()) # &lt;-- this is ugly class ABase: x: int class A(ABase): x: int = ten() def fn(value: ABase) -&gt; int: return value.x a = A() print(fn(a)) </code></pre> <p>Is it possible to achieve some level of type checking and no errors from type checkers without using magic hacks like this one?</p>
<python><python-typing>
2024-05-23 09:38:51
1
3,487
j123b567
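The usual way to avoid the `cast` in this question is to give the descriptor an overloaded `__get__`, so type checkers see class access as the descriptor and instance access as the plain value; no intermediate `ten()` function is needed, and the pattern generalises to `Generic`. A sketch (`Const` stands in for `Ten`; like the original, it stores the value on the descriptor, i.e. class-wide, not per instance):

```python
from typing import Any, Generic, TypeVar, overload

T = TypeVar("T")

class Const(Generic[T]):
    """Descriptor returning a stored value; stands in for Ten."""
    def __init__(self, value: T) -> None:
        self._value = value

    @overload
    def __get__(self, obj: None, objtype: type) -> "Const[T]": ...
    @overload
    def __get__(self, obj: object, objtype: type) -> T: ...
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self        # accessed on the class
        return self._value     # accessed on an instance

    def __set__(self, obj: object, value: T) -> None:
        self._value = value

class A:
    x = Const(10)  # checkers infer A().x as int, A.x as Const[int]

a = A()
```

Mypy and pyright both understand this overload pair, so `fn(value: ABase) -> int: return value.x` type-checks with `ABase` declaring `x: int`.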
78,522,148
13,706,389
Change Axes of legend in Matplotlib
<p>I have a figure with some inset axes at variable positions. Each of these inset axes has the same lines and thus the same legend. To avoid cluttering the figure, I would like to only show the legend of one of these insets and position it (for example) in the top right corner of the main ax to indicate that this is a general legend.</p> <p>This is a simplified version of my code:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1.inset_locator import inset_axes import numpy as np fig, ax = plt.subplots(1, 1) ax.plot(range(10), range(10)) inset_ax1 = inset_axes(ax, width=1.5, height=1.5, loc='lower right') inset_ax2 = inset_axes(ax, width=2, height=2, loc='upper left') inset_ax1.plot(range(10), np.random.rand(10,3)) inset_ax2.plot(range(10), np.random.rand(10,3)) inset_ax1.legend(['a', 'b', 'c'], loc='upper right') plt.show() </code></pre> <p>I tried using <code>ax.add_artist(inset_ax1.legend())</code> but this gives the following warning: <code>ValueError: Can not reset the axes. You are probably trying to re-use an artist in more than one Axes which is not supported</code>. How can i resolve this?</p> <p>To give some context: I'm making a <a href="https://windrose.readthedocs.io/en/latest/usage.html#overlay-of-a-map" rel="nofollow noreferrer">figure similar to this</a> but I would like to show a single legend for all the windroses. The position and number of insets is variable.</p>
<python><matplotlib>
2024-05-23 09:28:42
1
684
debsim
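The `add_artist` error in this question comes from trying to move an already-created legend to a second axes. A legend does not re-use the line artists, though: it builds proxy handles from them, so you can create the legend directly on the main axes (or the figure) and pass it the inset's handles. A sketch (plain `fig.add_axes` stands in for `inset_axes` to keep it minimal):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.plot(range(10), range(10))

inset_ax1 = fig.add_axes([0.65, 0.15, 0.25, 0.25])
lines = inset_ax1.plot(range(10), np.random.rand(10, 3))

# Legend lives on the MAIN axes but describes the inset's lines.
legend = ax.legend(handles=lines, labels=['a', 'b', 'c'], loc='upper right')
```

`fig.legend(handles=lines, labels=[...])` also works and is handy when, as with a variable number of windrose insets, no single axes is a natural host.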
78,521,853
15,162,230
RecursionError: maximum recursion depth exceeded | [Previous line repeated 972 more times] | socket.io
<p>I need help that why i am getting error like <code>RecursionError</code>, i am using the <code>python-socket</code></p> <p><strong>BTW before moving ahead i want to make it clear that this issue is not because of socket configuration it is working fine the simple test events work fine.</strong></p> <p>Here is my socket event that listen to specific event:</p> <pre class="lang-py prettyprint-override"><code>@sio_server.on(EVENT.REPLY_ON_TICKET) async def reply_on_ticket_event(sid:str, data:dict): sender_id = None if data[&quot;sender_info&quot;][&quot;sender_id&quot;]: sender_id= data[&quot;sender_info&quot;][&quot;sender_id&quot;] else: sender_id= data[&quot;sender_id&quot;] payload = { &quot;ticket_id&quot;: data[&quot;ticket_id&quot;], &quot;sender_id&quot;: sender_id, &quot;notify&quot;: data[&quot;notify&quot;], &quot;reply&quot;: data[&quot;reply&quot;], } # Obtain a database session manually db_session = next(get_db()) res = await reply_on_ticket_service(payload, db_session) if res[&quot;success&quot;]: # the callback data for client socket return { &quot;status_code&quot;: 200, &quot;success&quot;: True, &quot;message&quot;: &quot;reply data&quot;, &quot;payload&quot;: res[&quot;data&quot;], } return {&quot;status_code&quot;: 400, &quot;success&quot;: False} </code></pre> <p>and here the service that handle the reply tickets process:</p> <pre class="lang-py prettyprint-override"><code> async def reply_on_ticket_service(data:dict, db_session: Session = Depends(get_db)): try: new_reply= TicketReplyModal(reply_text=data[&quot;reply&quot;],ticket_id=data[&quot;ticket_id&quot;],sender_id=data[&quot;sender_id&quot;]) new_reply_id= await db_manager.add_record(new_reply,db_session) if not new_reply_id: return {&quot;success&quot;:False, 'status_code':400, &quot;message&quot;:&quot;Error occurred while creating new replay&quot;} new_record= await db_manager.get_record_by_id(new_reply_id, TicketReplyModal,db_session) if not new_record: return 
{&quot;success&quot;:False, 'status_code':404, &quot;message&quot;:&quot;Unable to find record with newly created replay&quot;} sender_data = db_session.query(UserModel).filter(UserModel.id == new_record.sender_id).first() if not sender_data: return {&quot;success&quot;:False, 'status_code':404, &quot;message&quot;:&quot;Unable to find sender info&quot;} reply = { &quot;id&quot;:new_record.id, &quot;created_at&quot;:new_record.created_at, &quot;reply_text&quot;:new_record.reply_text, &quot;ticket_id&quot;: new_record.ticket_id, &quot;sender&quot;:{ &quot;id&quot;: sender_data.id, &quot;email&quot;: sender_data.email, &quot;profile_image&quot;: sender_data.profile_image, &quot;username&quot;: sender_data.username, &quot;public_key&quot;: sender_data.public_key, } } return {&quot;success&quot;: True, &quot;data&quot;: reply} except Exception as e: raise HTTPException(status_code=500, detail=str(e)) </code></pre> <p>in <code>reply_on_ticket_event</code> return method if i comment the line <code>&quot;payload&quot;: res[&quot;data&quot;]</code> and then it works fine without any issue:</p> <p><strong>Error code-snippet:</strong></p> <pre class="lang-bash prettyprint-override"><code>Task exception was never retrieved future: &lt;Task finished name='Task-39' coro=&lt;AsyncServer._handle_event_internal() done, defined at C:\Users\AK\.virtualenvs\BE-hVO0_cLf\Lib\site-packages\socketio\async_server.py:605&gt; exception=RecursionError('maximum recursion depth exceeded')&gt; Traceback (most recent call last): File &quot;C:\Users\AK\.virtualenvs\BE-hVO0_cLf\Lib\site-packages\socketio\async_server.py&quot;, line 617, in _handle_event_internal await server._send_packet(eio_sid, self.packet_class( File &quot;C:\Users\AK\.virtualenvs\BE-hVO0_cLf\Lib\site-packages\socketio\async_server.py&quot;, line 518, in _send_packet encoded_packet = pkt.encode() ^^^^^^^^^^^^ File &quot;C:\Users\AK\.virtualenvs\BE-hVO0_cLf\Lib\site-packages\socketio\packet.py&quot;, line 64, in encode 
encoded_packet += self.json.dumps(data, separators=(',', ':')) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\Python311\Lib\json\__init__.py&quot;, line 238, in dumps **kw).encode(obj) ^^^^^^^^^^^ File &quot;C:\Program Files\Python311\Lib\json\encoder.py&quot;, line 200, in encode chunks = self.iterencode(o, _one_shot=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\Python311\Lib\json\encoder.py&quot;, line 258, in iterencode return _iterencode(o, 0) ^^^^^^^^^^^^^^^^^ File &quot;D:\BE\app.py&quot;, line 38, in default return super().default(obj) ^^^^^^^^^^^^^^^^^^^^ File &quot;D:\BE\app.py&quot;, line 38, in default return super().default(obj) ^^^^^^^^^^^^^^^^^^^^ File &quot;D:\BE\app.py&quot;, line 38, in default return super().default(obj) ^^^^^^^^^^^^^^^^^^^^ [Previous line repeated 972 more times] RecursionError: maximum recursion depth exceeded </code></pre> <p>And the line to whom the error is pointing is in app.py</p> <pre class="lang-py prettyprint-override"><code>class CustomJSONEncoder(json.JSONEncoder): def default(self, obj): if isinstance(obj, uuid.UUID): return str(obj) return super().default(obj) @app.on_event(&quot;startup&quot;) def startup_event(): json.JSONEncoder.default = CustomJSONEncoder().default </code></pre> <p>I try to comment the above @app.on_event(&quot;startup&quot;) and then execute that socket event then the error complaining about UUID here is the error output:</p> <pre class="lang-bash prettyprint-override"><code>Task exception was never retrieved future: &lt;Task finished name='Task-47' coro=&lt;AsyncServer._handle_event_internal() done, defined at C:\Users\AK\.virtualenvs\pw7-BE-hVO0_cLf\Lib\site-packages\socketio\async_server.py:605&gt; exception=TypeError('Object of type UUID is not JSON serializable')&gt; Traceback (most recent call last): File &quot;C:\Users\AK\.virtualenvs\pw7-BE-hVO0_cLf\Lib\site-packages\socketio\async_server.py&quot;, line 617, in _handle_event_internal await 
server._send_packet(eio_sid, self.packet_class( File &quot;C:\Users\AK\.virtualenvs\pw7-BE-hVO0_cLf\Lib\site-packages\socketio\async_server.py&quot;, line 518, in _send_packet encoded_packet = pkt.encode() ^^^^^^^^^^^^ File &quot;C:\Users\AK\.virtualenvs\pw7-BE-hVO0_cLf\Lib\site-packages\socketio\packet.py&quot;, line 64, in encode encoded_packet += self.json.dumps(data, separators=(',', ':')) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\Python311\Lib\json\__init__.py&quot;, line 238, in dumps **kw).encode(obj) ^^^^^^^^^^^ File &quot;C:\Program Files\Python311\Lib\json\encoder.py&quot;, line 200, in encode chunks = self.iterencode(o, _one_shot=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\Python311\Lib\json\encoder.py&quot;, line 258, in iterencode return _iterencode(o, 0) ^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\Python311\Lib\json\encoder.py&quot;, line 180, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type UUID is not JSON serializable </code></pre>
<python><sockets><websocket><fastapi><asgi>
2024-05-23 08:36:06
1
517
Abidullah
78,521,791
13,638,219
Remove gaps with xOffset
<p>Is there a way to not have gaps at non-existing data when using x/yOffset? In the example below, category B only has 1 group (x) and I would like that bar to be centered at the B-tick.</p> <pre><code>source = pd.DataFrame({ &quot;Category&quot;:list(&quot;AAABCCC&quot;), &quot;Group&quot;:list(&quot;xyzxxyz&quot;), &quot;Value&quot;:[0.1, 0.6, 0.9, 0.7, 0.6, 0.1, 0.2] }) alt.Chart(source).mark_bar().encode( x=&quot;Category:N&quot;, y=&quot;Value:Q&quot;, xOffset=&quot;Group:N&quot;, color=&quot;Group:N&quot; ) </code></pre> <p><a href="https://i.sstatic.net/bZyMyzqU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZyMyzqU.png" alt="enter image description here" /></a></p>
<python><altair>
2024-05-23 08:22:36
1
982
debbes
78,521,776
1,473,517
Can you make a random numpy.float128?
<p>I want to make a random array of type numpy.float128s. This is not valid:</p> <pre><code>import numpy as np a = np.random.random(100, dtype=np.float128) # not valid code </code></pre> <p>I could do</p> <pre><code>a = np.random.random(100).astype(np.float128) </code></pre> <p>but this creates 64 bit floats and then casts them to np.float128 (which is really 80 bit on my system but that isn't the issue here). So this has the wrong precision.</p> <p>Is there any way to create an array of random np.float128s in the range 0 to 1?</p>
<python><numpy><random><floating-point>
2024-05-23 08:20:21
2
21,513
Simd
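NumPy's generators have no longdouble path, but you can build one without losing precision: draw two independent 53-bit uniforms and stack the second below the first, giving about 106 mantissa bits, more than the 64 an x86 80-bit longdouble carries. A sketch (`np.longdouble` is the portable spelling; on Linux x86-64 it is the same type as `np.float128`):

```python
import numpy as np

def random_longdouble(n, rng=None):
    """Uniform [0, 1) samples with (close to) full longdouble precision."""
    rng = np.random.default_rng() if rng is None else rng
    hi = rng.random(n).astype(np.longdouble)   # top 53 bits
    lo = rng.random(n).astype(np.longdouble)   # next 53 bits
    return hi + lo * np.longdouble(2.0) ** -53

a = random_longdouble(1000)
```

Since `hi < 1` by at least one float64 ulp and the correction is strictly smaller than that ulp, the result stays in `[0, 1)`.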
78,521,674
1,320,143
Custom JSONResponse response for FastAPI - cannot access field in subclass
<p>I want my FastAPI application to return JSON:API responses. It would be really cumbersome to craft each response manually so I've looked into creating a custom response class, one based on <code>JSONResponse</code>, since the responses have to be in JSON format.</p> <p>They have an example here:</p> <p><a href="https://fastapi.tiangolo.com/advanced/custom-response/#custom-response-class" rel="nofollow noreferrer">https://fastapi.tiangolo.com/advanced/custom-response/#custom-response-class</a></p> <p>So, subclass your response class (in my case <code>JSONResponse</code>), subclass it, implement the <code>render</code> method.</p> <p>So far so good:</p> <pre><code>class JSONAPIResponse(JSONResponse): def render(self, content: Any) -&gt; Any: # Null object. if content is None: [...] </code></pre> <p>However, the problem comes at the next step.</p> <p>If I add a <code>content_object_type</code> to JSONAPIResponse, I put it in the <code>__init__</code> method and everything, FastAPI still complains about it.</p> <pre><code> class JSONAPIResponse(JSONResponse): content_object_type: str def __init__(self, content: Any, content_object_type: str): super().__init__(content=content) self.content_object_type = content_object_type def render(self, content: Any) -&gt; Any: self.content_object_type </code></pre> <p>Elsewhere in the code base, another method that actually invokes <code>JSONAPIResponse</code>:</p> <pre><code> return JSONAPIResponse(content=jsonable_encoder(content_object), content_object_type=&quot;bla&quot;) </code></pre> <p>Boom 💥 this is the FastAPI error message below</p> <blockquote> <pre><code>def render(self, content: Any) -&gt; Any: self.content_object_type </code></pre> <p><strong>E AttributeError: 'JSONAPIResponse' object has no attribute 'content_object_type'</strong></p> </blockquote> <p>So I've declared the attribute, I have it in the constructor that's used... what's wrong?</p>
<python><fastapi>
2024-05-23 07:59:01
0
1,673
oblio
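The `AttributeError` in this question is an initialisation-order problem: Starlette's `Response.__init__` calls `self.render(content)` immediately, so `super().__init__(content=content)` runs `render` *before* the line `self.content_object_type = content_object_type` executes. The fix is just to reorder those two lines. A dependency-free sketch with a minimal stand-in base class to show the mechanism:

```python
import json
from typing import Any

class BaseResponse:
    """Minimal stand-in for starlette's JSONResponse: __init__ renders."""
    def __init__(self, content: Any):
        self.body = self.render(content)

    def render(self, content: Any) -> bytes:
        return json.dumps(content).encode()

class JSONAPIResponse(BaseResponse):
    def __init__(self, content: Any, content_object_type: str):
        # Assign BEFORE super().__init__, which triggers render() right away.
        self.content_object_type = content_object_type
        super().__init__(content)

    def render(self, content: Any) -> bytes:
        wrapped = {"data": {"type": self.content_object_type,
                            "attributes": content}}
        return json.dumps(wrapped).encode()

resp = JSONAPIResponse({"name": "x"}, content_object_type="bla")
```

One caveat: if the class is also set as a route's `response_class`, FastAPI constructs it with only `content`, so `content_object_type` would then need a default value.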
78,521,607
3,494,047
How to get the browser view file link of a file using Dropbox python API?
<p>I am using the python api for dropbox and my question is the same as <a href="https://stackoverflow.com/questions/40754573/how-to-get-the-link-of-a-file-using-dropbox-python-api">this</a> one except I would like to get a link which shows the image in the browser. Not a link which downloads the files.</p> <p>So I have a FileMetaData object and I would like a link (like one starting with dropbox.com...) which opens the image in the browser. Not a preview link. It looks like this in the browser <a href="https://i.sstatic.net/v8e5OOjo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8e5OOjo.png" alt="enter image description here" /></a></p> <p>How can I do this?</p> <p>the link is made up of: dropbox.com/scl/fi/[random_letters_numbers]/[file_name]?rlkey=[random_letters_numbers]&amp;e=2&amp;st=[random_letters_numbers]</p> <p>I also do not want this link to expire.</p>
<python><dropbox-api><dropbox-sdk>
2024-05-23 07:45:13
1
1,723
user3494047
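With the Dropbox SDK, a browser-view link is a *shared link*: `dbx.sharing_create_shared_link_with_settings(metadata.path_lower)` returns metadata whose `.url` opens the preview page, and such links do not expire unless expiry is set in the settings. The URL's `dl` parameter controls behaviour (`dl=0` renders in the browser, `dl=1` forces a download), so it is worth normalising. The SDK call needs a token, so only the normalising helper is runnable here:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def as_browser_link(url: str) -> str:
    """Rewrite a Dropbox shared link so it renders in the browser (dl=0)."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query["dl"] = ["0"]  # 0 = preview in browser, 1 = direct download
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

# link = dbx.sharing_create_shared_link_with_settings(meta.path_lower)
# print(as_browser_link(link.url))
url = as_browser_link("https://www.dropbox.com/scl/fi/abc/img.png?rlkey=xyz&dl=1")
```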
78,521,530
8,884,239
xml.etree.ElementTree.ParseError: unclosed token with XML Library
<p>I am getting following error when I am trying to parse XML using Python XML library.</p> <pre><code>xml.etree.ElementTree.ParseError: unclosed token </code></pre> <p>I am using following code to parse xml string.</p> <pre><code>from xml.etree import ElementTree as ET try: root = ET.fromstring(xml_string) emp = root.findall(&quot;.//employees&quot;) .... further process with emp........ exception: pass </code></pre> <p>I want to continue even if there is an invalid XML string. I am passing this XML string from the dataframe column.</p> <p>Can anyone suggest how we can avoid this error while parsing xml string or how do we correct the xml like lxml has some recover option while parsing xml (one thought is using lxml, correct missing xml tags by using recovery option and use Python XML library for parsing XML string, but I am not sure how we can implement this).</p> <p>Please suggest how we can handle it, and really appreciate your help.</p>
<python><xml><apache-spark><pyspark><lxml>
2024-05-23 07:27:35
1
301
Bab
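The recovery idea sketched in this question works without a second stdlib parse: lxml elements support the same `.findall` API as `xml.etree.ElementTree`, so you can parse once with `recover=True` and fall back to `None` when even recovery fails. A sketch:

```python
from lxml import etree

def parse_lenient(xml_string):
    """Parse possibly-broken XML; return the root element or None."""
    parser = etree.XMLParser(recover=True)
    try:
        root = etree.fromstring(xml_string.encode("utf-8"), parser=parser)
    except etree.XMLSyntaxError:
        return None
    return root  # may still be None if nothing at all was recoverable

# Missing </employees> close tag: recovered instead of raising
root = parse_lenient("<root><employees><name>A</name></root>")
```

In the PySpark setting, wrap `parse_lenient` (plus the `.findall(".//employees")` processing) in a UDF over the string column and filter out the rows where it returned `None`.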
78,521,358
159,072
Why am I seeing an IndexError in this Python script?
<pre><code>#ResidueNoInEachProtein,Residue,TrueLabel,Feature1,Feature2,Feature3,Feature4,Feature5,Feature6,Feature7,Feature8,Feature9 0 GLN C 0.000 0.000 0.000 1 1 1 1 1 0 1 THR E 7.057 10.394 0.000 1 1 1 1 1 0 2 VAL E 6.710 9.449 13.140 0 0 0 0 1 0 3 PRO E 6.552 9.752 12.974 0 0 0 0 0 0 4 SER C 6.544 7.584 11.239 0 0 0 0 0 0 5 SER C 5.407 5.140 5.159 0 0 0 0 0 0 6 ASP C 5.485 7.378 5.152 0 0 0 0 0 0 7 GLY C 5.723 9.048 9.571 0 0 0 1 1 0 8 THR C 6.347 9.102 10.812 0 0 0 2 2 0 9 PRO E 6.219 9.620 12.486 0 1 1 3 4 0 10 ILE E 6.412 9.721 12.781 0 0 0 3 4 0 11 ALA E 6.603 10.294 13.140 0 1 1 2 3 0 12 PHE E 7.219 10.586 13.126 0 0 0 2 2 0 13 GLU E 6.939 10.295 13.972 0 0 0 0 1 0 14 ARG E 6.814 10.472 13.764 0 0 0 0 0 0 15 SER E 7.061 9.189 12.947 0 0 0 0 0 0 16 GLY E 6.872 9.856 11.521 0 0 0 0 0 0 17 SER C 6.988 9.388 11.337 0 0 0 0 0 0 18 GLY C 6.903 7.889 9.055 0 0 0 0 0 0 </code></pre> <pre><code>import pandas as pd from sklearn.model_selection import train_test_split from lazypredict.Supervised import LazyClassifier # Load the data from full.regular.txt data = pd.read_csv('full.regular.txt', delim_whitespace=True) # Verify the data structure print(&quot;Data preview:&quot;) print(data.head()) print(&quot;\nData columns:&quot;) print(data.columns) # Assuming columns are correctly read, let’s inspect their index positions print(&quot;\nColumn positions and data types:&quot;) print(data.dtypes) # Extract the true label and features based on given specifications # Column 3 is index 2 and columns 4 to 12 are index 3 to 11 try: y = data.iloc[:, 2] X = data.iloc[:, 3:11] except IndexError as e: print(f&quot;IndexError: {e}&quot;) print(&quot;The dataset does not have the expected number of columns.&quot;) print(&quot;Please check the dataset and ensure it matches the expected format.&quot;) # Verify the shapes of X and y if the extraction was successful if 'X' in locals() and 'y' in locals(): print(&quot;\nFeatures shape:&quot;, X.shape) print(&quot;Labels shape:&quot;, 
y.shape) # Split the data into training and testing sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # Initialize and run LazyClassifier clf = LazyClassifier(verbose=0, ignore_warnings=True, custom_metric=None) train, test = clf.fit(X_train, X_test, y_train, y_test) # Display the results print(&quot;\nTraining set evaluation:&quot;) print(train) print(&quot;\nTest set evaluation:&quot;) print(test) </code></pre> <p><strong>Output</strong>:</p> <pre><code>Data preview: #ResidueNoInEachProtein,Residue,TrueLabel,Feature1,Feature2,Feature3,Feature4,Feature5,Feature6,Feature7,Feature8,Feature9 0 GLN C 0.00 0.00 0.00 1 1 1 1 1 0 1 THR E 7.06 10.39 0.00 1 1 1 1 1 0 2 VAL E 6.71 9.45 13.14 0 0 0 0 1 0 3 PRO E 6.55 9.75 12.97 0 0 0 0 0 0 4 SER C 6.54 7.58 11.24 0 0 0 0 0 0 Data columns: Index(['#ResidueNoInEachProtein,Residue,TrueLabel,Feature1,Feature2,Feature3,Feature4,Feature5,Feature6,Feature7,Feature8,Feature9'], dtype='object') Column positions and data types: #ResidueNoInEachProtein,Residue,TrueLabel,Feature1,Feature2,Feature3,Feature4,Feature5,Feature6,Feature7,Feature8,Feature9 int64 dtype: object IndexError: single positional indexer is out-of-bounds The dataset does not have the expected number of columns. Please check the dataset and ensure it matches the expected format. Features shape: (1079134, 8) Labels shape: (1079134,) 100%|██████████| 29/29 [00:00&lt;00:00, 3085.30it/s] Training set evaluation: Empty DataFrame Columns: [Accuracy, Balanced Accuracy, ROC AUC, F1 Score, Time Taken] Index: [] Test set evaluation: Empty DataFrame Columns: [Accuracy, Balanced Accuracy, ROC AUC, F1 Score, Time Taken] Index: [] </code></pre> <p>I am not understanding what I am doing wrong.</p> <p>Why am I seeing Index error in this Python script?</p>
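The printed output gives a strong hint: `Data columns` shows a *single* column whose name is the entire comma-joined header. The header row of `full.regular.txt` is comma-separated while the data rows are whitespace-separated, so `read_csv(..., delim_whitespace=True)` keeps the whole header as one column name, and positional indexing then behaves inconsistently. A hedged sketch of one fix (shown on an inline sample so it is runnable; for the real file you would pass `'full.regular.txt'` instead of the `StringIO` object) — skip the header and supply the names yourself:

```python
import io
import pandas as pd

# The header row is comma-separated but the data rows are whitespace-
# separated, so we skip the header and assign column names explicitly.
header = ("#ResidueNoInEachProtein,Residue,TrueLabel,Feature1,Feature2,"
          "Feature3,Feature4,Feature5,Feature6,Feature7,Feature8,Feature9")
names = header.split(",")

sample = (
    header + "\n"
    "0 GLN C 0.000 0.000 0.000 1 1 1 1 1 0\n"
    "1 THR E 7.057 10.394 0.000 1 1 1 1 1 0\n"
)

data = pd.read_csv(io.StringIO(sample), sep=r"\s+", skiprows=1, names=names)
y = data.iloc[:, 2]       # TrueLabel
X = data.iloc[:, 3:12]    # Feature1..Feature9 -> nine feature columns
print(X.shape, y.shape)
```

Note that `iloc[:, 3:12]` yields all nine feature columns; the original `3:11` only takes eight, which may or may not be intended.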
<python><classification><lazypredict>
2024-05-23 06:45:32
1
17,446
user366312
78,520,833
2,817,520
How to convert a list of Pydantic objects to a JSON string
<p>The following code successfully converts a list of <code>UserSQLAlchemyModel</code> to a list of <code>UserPydanticModel</code>:</p> <pre><code>users_from_db = session.scalars(select(UserSQLAlchemyModel)).all() users = TypeAdapter(list[UserPydanticModel]).validate_python(users_from_db) </code></pre> <p>Now how can I convert <code>users</code> to a JSON string? I have noticed that using <code>dump_json</code> instead of <code>validate_python</code> works, but the problem is that the <code>exclude</code> parameter of <code>dump_json</code> has no effect, and there is no documentation about <code>dump_json</code>. I would also really like to know whether <code>dump_json</code> skips validation or not.</p> <p>My solution, using Pydantic's <code>to_json</code> directly:</p> <pre><code>from pydantic_core import to_json users_json = to_json( [user.model_dump(exclude={'password'}) for user in users] ) </code></pre> <p>But I don't know if it is the right way to do it. And it is sad that Pydantic does not provide a simple solution for an array of objects.</p> <p>Why don't I use Python's <code>json.dumps()</code>? I don't want to deal with the <code>not JSON serializable</code> error.</p>
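A hedged sketch of one approach (assuming Pydantic v2; the `User` model below is a hypothetical stand-in for `UserPydanticModel`): `TypeAdapter.dump_json` serializes without re-validating, and when the adapted type is a *list*, the `exclude` argument needs the `'__all__'` key so the per-field exclusion is applied to every element — which may be why a plain `exclude` appeared to have no effect:

```python
import json

from pydantic import BaseModel, TypeAdapter

class User(BaseModel):           # stand-in for UserPydanticModel
    name: str
    password: str

users = [User(name="a", password="x"), User(name="b", password="y")]

adapter = TypeAdapter(list[User])
# '__all__' routes the nested exclude to every item of the list
users_json = adapter.dump_json(users, exclude={"__all__": {"password"}})
parsed = json.loads(users_json)
print(parsed)
```

If `'__all__'` is not supported in your Pydantic version, the explicit comprehension with `to_json([user.model_dump(exclude={'password'}) for user in users])` remains a reasonable fallback.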
<python><pydantic>
2024-05-23 03:34:59
3
860
Dante
78,520,746
20,235,789
What is the correct way to use classes as dependencies (parameter check)?
<p>Main goal here is mapping UI sent parameters to match my db tables.</p> <hr /> <p>Now call me crazy but this was working a second ago and now its not.</p> <p>I have a few classes and one function(in <code>S_Sorting</code>) here</p> <pre><code>class ScanSort(str, Enum): ASSET_NAME = &quot;assetName&quot; CREATED_AT = &quot;createdAt&quot; ... class BaseSorting(BaseModel): sort_order: Annotated[SortOrder, Query(...)] = SortOrder.ASCENDING ... class S_Sorting(BaseSorting): sort_by: Annotated[ScanSort, Query(...)] = ScanSort.CREATED_AT ... @staticmethod def scan_table_mapping(passed_param: str) -&gt; str: mapping = { ..., &quot;createdAt&quot;: &quot;scans.created_at&quot;, .. } return mapping.get(passed_param, &quot;scans.created_at&quot;) </code></pre> <p>that my function relies on for sorting my data(in a postgres db).</p> <pre><code> def build_query( self, ..., sorting: S_Sorting, sort_by_mapped = S_Sorting.scan_table_mapping(sorting.sort_by) direction = asc if sorting.sort_order == SortOrder.ASCENDING else desc query = query.order_by(direction(text(sort_by_mapped))) </code></pre> <p>So if there is no <code>sort</code> parameter, then it default sorts according to <code>S_Sorting</code> , works fine.</p> <p>Now when I pass in <code>sort=createdAt</code> it should sorting that mapped value.</p> <p>However, this is the error I get:</p> <pre><code>{ &quot;detail&quot;: [ { &quot;type&quot;: &quot;enum&quot;, &quot;loc&quot;: [ &quot;query&quot;, &quot;sort_by&quot; ], &quot;msg&quot;: &quot;Input should be '..., 'createdAt' or 'updatedAt'&quot;, &quot;input&quot;: &quot; createdAt&quot;, &quot;ctx&quot;: { &quot;expected&quot;: &quot;'..., 'createdAt' or 'updatedAt'&quot; } } ] } </code></pre> <p>I'm literally giving it what it expects. What am I doing wrong with classes as dependencies?</p>
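One detail worth noticing in the error payload: the rejected `"input"` is `" createdAt"` with a leading space, so the query string most likely contains stray whitespace (e.g. `?sort_by= createdAt`) rather than a dependency-injection problem. As a hedged, framework-independent sketch, a stdlib `Enum` can be taught to retry the lookup with the value stripped via the `_missing_` hook (whether FastAPI/Pydantic's enum validation consults `_missing_` depends on the version, so trimming the value before it reaches the route is the more robust fix):

```python
from enum import Enum

class ScanSort(str, Enum):
    ASSET_NAME = "assetName"
    CREATED_AT = "createdAt"

    @classmethod
    def _missing_(cls, value):
        # Retry the lookup with surrounding whitespace removed; return
        # None (i.e. fail normally) if stripping changes nothing, to
        # avoid infinite recursion.
        if isinstance(value, str):
            stripped = value.strip()
            if stripped != value:
                return cls(stripped)
        return None

assert ScanSort(" createdAt") is ScanSort.CREATED_AT   # stray space tolerated
assert ScanSort("createdAt") is ScanSort.CREATED_AT    # normal lookup unchanged
```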
<python><dependency-injection><fastapi><pydantic>
2024-05-23 02:58:48
1
441
GustaMan9000
78,520,581
17,835,120
limited trade execution using backtesting.py library
<p>My goal is to backtest data where you provide buy/sell dates and closing prices to backtest module in order to backtest data from one source csv (instead of having 3 separate files such as price data, buy signals and sell signals)</p> <p>I am not sure why not all of the trades execute here. Currently, my trades are being capped at 5? unsure why I cant get more to execute.</p> <p>Tried setting hedge to true, exclusive orders to false in order to allow for overlapping trades. I set up checking for dates to make sure they werent on holiday. I'm stuck here with trades being capped at 5.</p> <pre><code>import pandas as pd from backtesting import Backtest as OriginalBacktest, Strategy from io import StringIO import numpy as np from pandas.tseries.holiday import USFederalHolidayCalendar # Updated CSV data as a string for buy/sell dates and closing prices csv_data = &quot;&quot;&quot; Buy Dates,Sell Dates,Buy Close,Sell Close 2010-01-15,2010-03-20,120.45,130.23 2011-02-25,2011-05-10,125.67,135.89 2012-08-21,2013-07-22,130.54,140.67 2014-05-23,2014-08-28,135.78,145.34 2015-09-10,2016-01-12,140.23,150.56 2017-02-15,2017-06-18,145.45,155.78 2013-04-18,2014-08-28,135.78,145.34 2015-09-10,2016-01-12,140.23,150.56 2017-02-15,2017-06-18,145.45,155.78 &quot;&quot;&quot; # Read the CSV data from the string data = pd.read_csv(StringIO(csv_data), parse_dates=['Buy Dates', 'Sell Dates']) # Adjust for weekends and holidays cal = USFederalHolidayCalendar() holidays = cal.holidays(start=data['Buy Dates'].min(), end=data['Sell Dates'].max()) def adjust_date(date): while date in holidays or date.weekday() &gt; 4: date += pd.Timedelta(days=1) return date data['Buy Dates'] = data['Buy Dates'].apply(adjust_date) data['Sell Dates'] = data['Sell Dates'].apply(adjust_date) # Create a DataFrame to simulate stock data for backtesting date_range = pd.date_range(start=data['Buy Dates'].min(), end=data['Sell Dates'].max(), freq='B') stock_data = pd.DataFrame(index=date_range, columns=['Open', 
'High', 'Low', 'Close', 'Volume']) stock_data['Close'] = stock_data.index.map(lambda x: data['Buy Close'][data['Buy Dates'] &lt;= x].max() if not data[data['Buy Dates'] &lt;= x].empty else None) stock_data.ffill(inplace=True) stock_data.bfill(inplace=True) stock_data['Open'] = stock_data['Close'] stock_data['High'] = stock_data['Close'] stock_data['Low'] = stock_data['Close'] stock_data['Volume'] = 1000 # Arbitrary volume # Define the LinkedTrade strategy class LinkedTrade(Strategy): def init(self): self.buy_dates = set(data['Buy Dates']) self.sell_dates = dict(zip(data['Buy Dates'], data['Sell Dates'])) def next(self): current_date = self.data.index[-1] if current_date in self.buy_dates: if self.hedging or not self.position.is_long: if self.exclusive_orders and self.position: self.position.close() self.buy() elif current_date in self.sell_dates.values(): if self.position.is_long: self.position.close() # Define the custom Backtest class class Backtest(OriginalBacktest): def __init__(self, data, strategy, *, cash=10000, commission=0.0, margin=1.0, trade_on_close=False, hedging=True, exclusive_orders=False): self.hedging = hedging self.exclusive_orders = exclusive_orders strategy.hedging = hedging strategy.exclusive_orders = exclusive_orders super().__init__(data, strategy, cash=cash, commission=commission, margin=margin, trade_on_close=trade_on_close) # Initialize the Backtest instance bt = Backtest(stock_data, LinkedTrade, cash=10_000, commission=.002, hedging=True, exclusive_orders=False) # Run the backtest stats = bt.run() print(stats) # Plot the results bt.plot() # Save individual trade data to CSV with linked buy/sell dates trades = stats['_trades'] # Map the EntryTime and ExitTime to the linked buy/sell dates trades['Linked Buy Date'] = trades['EntryTime'].map(lambda x: data[data['Buy Dates'] == x]['Buy Dates'].values[0] if not data[data['Buy Dates'] == x].empty else None) trades['Linked Sell Date'] = trades['ExitTime'].map(lambda x: data[data['Sell Dates'] == 
x]['Sell Dates'].values[0] if not data[data['Sell Dates'] == x].empty else None) # Save the trades to a CSV file trades.to_csv('linked_trades.csv') </code></pre>
<python><algorithmic-trading><quantitative-finance><back-testing>
2024-05-23 01:29:01
0
457
MMsmithH
78,520,256
6,440,589
How to merge dictionaries contained in a Pandas dataframe as a groupby operation
<p>Let us consider a pandas dataframe <code>df</code> containing dictionaries in one of its columns (column <code>mydict</code>):</p> <blockquote> <pre><code> mystring mydict 0 a {'key1': 'value1'} 1 a {'key2': 'value2'} 2 b {'key2': 'value2'} </code></pre> </blockquote> <p>I would like to &quot;merge&quot; the dictionaries as part of a groupby operation, <em>e.g.</em> <code>df.groupby('mystring')['mydict'].apply(lambda x: merge_dictionaries(x))</code>.</p> <p>The expected output for <code>mystring</code> 'a' would be: <code>{'key1': 'value1', 'key2': 'value2'}</code></p> <p>The usual way of combining dictionaries is to <em>update</em> an existing dictionary (<code>dict1.update(dict2)</code>), so I am not sure how to implement this here.</p> <p>Here is the code snippet to create the dataframe used in this example:</p> <pre><code>mycolumns = ['mystring', 'mydict'] df = pd.DataFrame(columns = mycolumns) df = df.append(pd.DataFrame(['a', {'key1':'value1'}],mycolumns).T) df = df.append(pd.DataFrame(['a', {'key2':'value2'}],mycolumns).T) df = df.append(pd.DataFrame(['b', {'key3':'value3'}],mycolumns).T) df = df.reset_index(drop=True) </code></pre> <p><strong>EDIT:</strong> a way to achieve a similar result without using groupby would be to iterate over the <code>mystring</code> and then update the dictionary:</p> <pre><code>merged_dict = {} for mystring in df.mystring.unique(): for mydict in df[df.mystring==mystring]['mydict']: print(mydict) merged_dict.update(mydict) </code></pre> <p><strong>EDIT 2</strong>: <code>append</code> is no longer available in pandas 2.0.1. 
Here is the alternate snippet to create the example dataframe using <code>concat</code>:</p> <pre><code>mycolumns = ['mystring', 'mydict'] df = pd.DataFrame(columns = mycolumns) df = pd.concat([df, pd.DataFrame(['a', {'key1':'value1'}],mycolumns).T]) df = pd.concat([df, pd.DataFrame(['a', {'key2':'value2'}],mycolumns).T]) df = pd.concat([df, pd.DataFrame(['b', {'key3':'value3'}],mycolumns).T]) df = df.reset_index(drop=True) </code></pre>
<python><pandas><dictionary><group-by>
2024-05-22 22:34:06
1
4,770
Sheldon
78,520,245
214,526
Redundant print with multiprocessing
<p>This is my first attempt with multiprocessing using Python multiprocessing library. The simple version of the code is like below -</p> <pre><code>import multiprocessing as mp from dataclasses import dataclass from typing import Dict, NoReturn import time import logging import signal import numpy as np @dataclass class TmpData: name: str value: int def worker(name: str, data: TmpData) -&gt; NoReturn: logger_obj = mp.log_to_stderr() logger_obj.setLevel(logging.INFO) logger_obj.info(f&quot;name: {name}; value: {data.value}&quot;) if name == &quot;XYZ&quot;: raise RuntimeError(&quot;XYZ worker failed&quot;) time.sleep(data.value) def init_worker_processes() -&gt; None: signal.signal(signal.SIGINT, signal.SIG_IGN) if __name__ == &quot;__main__&quot;: map_data: Dict[str, TmpData] = { key: TmpData(name=key, value=np.random.randint(5, 15)) for key in [&quot;ABC&quot;, &quot;DEF&quot;, &quot;XYZ&quot;] } main_logger = logging.getLogger() with mp.get_context(&quot;spawn&quot;).Pool( processes=2, initializer=init_worker_processes(), ) as pool: results = [] for key in map_data: try: results.append( pool.apply_async( worker, args=( key, map_data[key], ), ) ) except KeyboardInterrupt: pool.terminate() pool.close() pool.join() for result in results: try: result.get() except Exception as err: main_logger.error(f&quot;{err}&quot;) </code></pre> <p>This outputs something like following -</p> <pre><code>[INFO/SpawnPoolWorker-2] name: ABC; value: 10 [INFO/SpawnPoolWorker-1] name: DEF; value: 10 [INFO/SpawnPoolWorker-2] name: XYZ; value: 12 [INFO/SpawnPoolWorker-2] name: XYZ; value: 12 [INFO/SpawnPoolWorker-2] process shutting down [INFO/SpawnPoolWorker-2] process shutting down [INFO/SpawnPoolWorker-2] process exiting with exitcode 0 [INFO/SpawnPoolWorker-1] process shutting down [INFO/SpawnPoolWorker-2] process exiting with exitcode 0 [INFO/SpawnPoolWorker-1] process exiting with exitcode 0 XYZ worker failed </code></pre> <p>What I am concerned is <code>[INFO/SpawnPoolWorker-2] 
name: XYZ; value: 12</code> printed twice. I guess it is only a printing issue (not two processes being spawned, as the error message about the XYZ worker failing appears only once). The issue does not occur when the pool is initialized with 3 processes.</p> <p>Now I want to understand what the root cause is and how to fix it. Can someone help me understand what I may be doing wrong?</p>
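A plausible root cause (hedged, but consistent with the symptoms): `mp.log_to_stderr()` attaches a *new* `StreamHandler` to the shared multiprocessing logger every time it is called, and `worker` calls it once per task — so the second task a given worker process executes (`XYZ` here, after `ABC`, both on `SpawnPoolWorker-2`) logs through two handlers and every record is printed twice. With three processes each worker handles exactly one task, which matches the issue disappearing. (Separately, `initializer=init_worker_processes()` *calls* the initializer in the parent and passes its `None` return value; it should be `initializer=init_worker_processes`.) A minimal stdlib sketch of the mechanism and the usual fix — configure logging once, guarded:

```python
import logging

def per_task_setup():
    # What calling mp.log_to_stderr() inside worker() effectively does:
    # every call adds another StreamHandler to the same logger object.
    logger = logging.getLogger("mp-demo")
    logger.addHandler(logging.StreamHandler())
    return logger

log = per_task_setup()   # first task in this worker process
log = per_task_setup()   # second task: now TWO handlers -> duplicate lines
print(len(log.handlers))  # 2

def setup_once():
    # The fix: configure in the Pool initializer (runs once per process),
    # or guard against re-adding handlers as shown here.
    logger = logging.getLogger("mp-demo-fixed")
    if not logger.handlers:
        logger.addHandler(logging.StreamHandler())
    return logger

log2 = setup_once()
log2 = setup_once()
print(len(log2.handlers))  # 1
```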
<python><python-3.x><multiprocessing>
2024-05-22 22:30:04
1
911
soumeng78
78,520,187
8,652,920
How to remove the last parentheses in a string with its contents in Python using regex?
<p>Say we have a list of files like</p> <p>(these aren't symlinks btw, I'm just using the arrows to indicate the input -&gt; output relationship)</p> <pre><code>test(1) -&gt; test test2(1) -&gt; test2 test3(note to self) (6) -&gt; test3(note to self) </code></pre> <p>edit (not sure why there is a request to close for lack of focus; I'm literally giving you 3 specific test cases here)</p> <p>We can make no assumptions about the rest of the string, other than the fact that there will be a set of parentheses at the end with something inside. That means there might be parentheses before the last set of parentheses, which should not be touched; only the last set should be removed.</p> <p>How can I accomplish this with the Python <code>re</code> module?</p>
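One way to read the three test cases: the final group never contains nested parentheses, so an end-anchored pattern with a `[^()]*` body is enough — a sketch:

```python
import re

def strip_last_parens(name: str) -> str:
    # \([^()]*\) matches a parenthesised group with no nested parens;
    # \s* eats any space before it and $ anchors it to the end of string.
    return re.sub(r"\s*\([^()]*\)$", "", name)

print(strip_last_parens("test(1)"))                    # test
print(strip_last_parens("test2(1)"))                   # test2
print(strip_last_parens("test3(note to self) (6)"))    # test3(note to self)
```

Because `[^()]*` cannot cross a closing parenthesis, earlier groups such as `(note to self)` are left untouched.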
<python><regex>
2024-05-22 22:06:56
1
4,239
notacorn
78,520,027
1,806,566
In Python, is there a way to use shlex to split a line but leave quotes intact?
<p>I know there have been similar questions about shlex, but none that seem to directly address this issue.</p> <p>I would like to do basically what shlex.split() does (in POSIX mode), except <em>just</em> split - don't remove any quotes or backslashes. I would still like quotes and backslashes to function as they normally do (including being able to escape quotes) - I just don't want them removed.</p> <p>An earlier version of my application was in perl, where Text::ParseWords is a rough equivalent of shlex. I'm looking for the python equivalent of Text::ParseWords::quotewords with $keep=1.</p> <p>For instance, I would like:</p> <pre><code>shlex.split('abc &quot;def ghi&quot; jkl') </code></pre> <p>to return:</p> <pre><code>[ 'abc', '&quot;def ghi&quot;', 'jkl' ] </code></pre> <p>I recognize that if I set posix=False, this particular example would work, but that's not a general solution because posix=False also disables escaping quotes and affects lexing in other undesirable ways.</p> <p>From what I can tell in the documentation, whether I use split() or a lower-level interface to the parser, if I use posix, the quotes are stripped before I see them, but if I don't, then escapes aren't handled.</p> <p>I really am doing POSIX-style shell processing (which is what leads me to shlex), but shlex goes one step too far for me - stripping quotes loses the information of how the original was quoted, which is important for shell-like processing, so I want it to stop before it does that.</p> <p>If there right answer isn't shlex, is there something else that doesn't involve writing a lexer from scratch?</p>
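`shlex`'s POSIX lexer strips quoting while it tokenizes and offers no switch to keep it, so one common workaround (a sketch, assuming double quotes, single quotes and backslash escapes are the only quoting mechanisms you need) is a small regex tokenizer that splits on unquoted whitespace but leaves the quote and escape characters in place:

```python
import re

# One token is a run of: an escaped character, a double-quoted string
# (escapes allowed inside, so \" does not terminate it), a single-quoted
# string (no escapes inside, POSIX-style), or any plain character.
TOKEN = re.compile(r"""(?:\\.|"(?:\\.|[^"\\])*"|'[^']*'|[^\s"'\\])+""")

def split_keep_quotes(line: str) -> list[str]:
    return TOKEN.findall(line)

print(split_keep_quotes('abc "def ghi" jkl'))   # ['abc', '"def ghi"', 'jkl']
print(split_keep_quotes(r'a\ b'))               # ['a\\ b']  (escaped space kept)
```

This is not a full shell grammar (no `$`-expansion, no backtick handling), but it preserves the original quoting so a later pass can still see exactly how each token was written.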
<python><shlex>
2024-05-22 21:14:26
0
1,241
user1806566
78,520,016
15,140,144
Python: Heaptypes with metaclasses
<p>In the Python C API, how can I create a heap type that has a metaclass? I'm well aware of <code>PyType_FromSpec</code> and derivatives, and they can do this (from the documentation):</p> <blockquote> <p>Changed in version 3.12: The function now finds and uses a metaclass corresponding to the provided base classes. Previously, only type instances were returned.</p> <p>The tp_new of the metaclass is ignored. which may result in incomplete initialization. Creating classes whose metaclass overrides tp_new is deprecated and in Python 3.14+ it will be no longer allowed.</p> </blockquote> <p>I have two issues with this. First: support for metaclasses was added in Python 3.12, and I want to support Python 3.8+. Second: if the class name is not statically defined (created with malloc and, e.g., string concatenation), then for Python &lt; 3.11 this will leak, as that function doesn't copy the buffer in these versions and there is no good way of freeing that buffer (<code>PyTypeObject-&gt;tp_name</code> is a public field and can be replaced by anything).</p>
<python><c><metaclass><python-c-api>
2024-05-22 21:11:16
1
316
oBrstisf8o
78,519,953
11,751,799
`matplotlib` figure text automatically adjusting position to the corners with multiple axes in the figure
<p><a href="https://stackoverflow.com/questions/78519044/matplotlib-figure-text-automatically-adjusting-position-to-the-top-left/78519857#78519857">This</a> question address how to put text in the top left corner, higher than everything else in the figure, left of everything else in the figure.</p> <p>Now I want text in all four corners, and I want to do it with multiple plots within a single figure (something like <code>fig, ax = plt.subplots(2, 3)</code>).</p> <p>Top left is higher and to the left of everything else in the figure (except for these four annotations).</p> <p>Top right is higher and to the right of everything else in the figure (except for these four annotations).</p> <p>Bottom left is lower and to the left of everything else in the figure (except for these four annotations).</p> <p>Bottom right is lower and to the right of everything else in the figure (except for these four annotations).</p> <p>Just adjusting the accepted answer to go to the bottom or to the right of the corresponding axes is not working.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt fig, ax = plt.subplots(2, 3) ax[0, 0].scatter([1, 2, 3], [4, 5, 6]) yl = ax[0, 0].set_ylabel(&quot;1\n2\n3\n4\n5\n6\n7&quot;) t = ax[0, 0].set_title(&quot;a\nb\nc\nd\ne\nf&quot;) ax[0, 0].annotate( &quot;ABCD&quot;, (0, 1), ha=&quot;right&quot;, va=&quot;bottom&quot;, color = 'red', xycoords=(yl, t) ) ax[1, 2].annotate( &quot;EFGH&quot;, (1, 0), ha=&quot;left&quot;, va=&quot;top&quot;, xycoords=(yl, t) ) ax[1, 0].annotate( &quot;IJKL&quot;, (0, 0), ha = &quot;right&quot;, va = &quot;bottom&quot;, xycoords=(t, yl) ) ax[0, 2].annotate( &quot;MNOP&quot;, (1, 1), ha=&quot;right&quot;, va=&quot;top&quot;, xycoords=(yl, t) ) fig.patch.set_linewidth(5) fig.patch.set_edgecolor(&quot;green&quot;) plt.savefig( &quot;test.png&quot;, dpi = 900, bbox_inches = &quot;tight&quot;, edgecolor = fig.get_edgecolor() ) plt.show() plt.close() </code></pre> <p><a 
href="https://i.sstatic.net/EYr0UPZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EYr0UPZP.png" alt="incorrect image" /></a></p> <p>What would be the correct syntax to do to the above plot (to all four corners) what <a href="https://stackoverflow.com/a/78519857/11751799">the accepted answer here</a> does for the top left corner?</p>
<python><matplotlib><plot><graphics>
2024-05-22 20:54:00
0
500
Dave
78,519,657
3,100,515
How to make a "factory" to create dependent fixtures for multiple parametrizations?
<p>I have a test setup wherein I have multiple parametrizations, and I have secondary tests that depend on primary tests for each parametrization. I have got the following dependency setup working (see my SO question <a href="https://stackoverflow.com/questions/78491778/pytest-dependency-doesnt-work-when-both-across-files-and-parametrized">here</a>, although for this question I'm doing everything in one file for simplicity). However, now that I'm applying it, I'm running into the issue that I have to create two new fixtures for each parametrization, resulting in a lot of duplicated code.</p> <p>My setup:</p> <pre><code>tests/ - common.py - test_0.py </code></pre> <p>common.py:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np ints = [1, 2, 3] strs = ['a', 'b'] pars1 = list(zip(np.repeat(ints, 2), np.tile(strs, 3))) pars2 = list(zip(np.repeat(ints[:2], 2), np.tile(strs, 2))) </code></pre> <p>test_0.py:</p> <pre><code>import numpy as np import pytest from pytest_dependency import depends from common import pars1, pars2 def idfnc_mark(val): if isinstance(val, (int, np.int32, np.int64)): return f&quot;n{val}&quot; def idfnc_fix(val): return &quot;n{}-{}&quot;.format(*val) # I use markers here because I have a lot of code parametrized this way perm_mk1 = pytest.mark.parametrize('num, lbl', pars1, ids=idfnc_mark) perm_mk2 = pytest.mark.parametrize('num, lbl', pars2, ids=idfnc_mark) # 2 of these parametrized tests should fail @perm_mk1 @pytest.mark.dependency() def test_a(num, lbl): if num == 2: assert False else: assert True # 2 of these parametrized tests should fail @perm_mk2 @pytest.mark.dependency() def test_b(num, lbl): if lbl == 'b': assert False else: assert True # Following the example in the documentation for parametrized dependency @pytest.fixture(params=pars1, ids=idfnc_fix) def perm_fixt1(request): return request.param @pytest.fixture() def dep_perms1(request, perm_fixt1): depends(request, [&quot;test_a[n{}-{}]&quot;.format(*perm_fixt1)]) 
return perm_fixt1 @pytest.mark.dependency() def test_c(dep_perms1): pass </code></pre> <p>This all works as expected. But, now I want to define a <code>test_d</code> that depends on <code>test_b</code>, which is parametrized by <code>pars2</code> not <code>pars1</code>. I really don't want to make two new fixtures - my real code has 5 parametrizations, which would be 10 fixtures all following the exact same pattern. Is there a way to generate the dependent fixture in a function or factory of some sort? I tried something like this:</p> <pre class="lang-py prettyprint-override"><code>def make_dependency(name, params, ref_function, perm_format, id_func): @pytest.fixture(params=params, ids=id_func) def orig(request): return request.param @pytest.fixture(name=name) def dep_test(request, orig): depends(request, [ref_function+&quot;[&quot;+perm_format.format(*orig)+&quot;]&quot;]) return orig return dep_test dep_pars1 = make_dependency(&quot;dep_pars1&quot;, pars1, &quot;test_0.py&quot;, &quot;test_a&quot;, &quot;n{}-{}&quot;, idfnc_fix) </code></pre> <p>But, that complains that it can't find the <code>orig</code> fixture. And that makes sort of sense, maybe; I did know that calling it multiple times would probably cause naming / scope issues. Does pytest-dependency support doing something like this?</p>
<python><pytest><pytest-dependency>
2024-05-22 19:35:39
1
5,678
Ajean
78,519,636
850,781
Lookup by datetime in timestamp index does not work
<p>Consider a date-indexed DataFrame:</p> <pre><code>d0 = datetime.date(2024,5,5) d1 = datetime.date(2024,5,10) df0 = pd.DataFrame({&quot;a&quot;:[1,2],&quot;b&quot;:[10,None],&quot;c&quot;:list(&quot;xy&quot;)}, index=[d0,d1]) df0.index Index([2024-05-05, 2024-05-10], dtype='object') </code></pre> <p>Note that <code>df0.index.dtype</code> is <code>object</code>. Now, lookup works for <code>date</code>:</p> <pre><code>df0.loc[d0] a 1 b 10.0 c x Name: 2024-05-05, dtype: object </code></pre> <p>but both <code>df0.loc[str(d0)]</code> and <code>df0.loc[pd.Timestamp(d0)]</code> raise <code>KeyError</code>. This seems to be reasonable.</p> <p>However, consider</p> <pre><code>df1 = df0.reindex(pd.date_range(d0,d1)) df1.index DatetimeIndex(['2024-05-05', '2024-05-06', '2024-05-07', '2024-05-08', '2024-05-09', '2024-05-10'], dtype='datetime64[ns]', freq='D') </code></pre> <p>Note that <code>df1.index.dtype</code> is <code>datetime64</code>. Now, lookup works for <em><strong>both</strong></em> <code>df1.loc[pd.Timestamp(d0)]</code> (as expected) <em><strong>and</strong></em> <code>df1.loc[str(d0)]</code> (<em>why?!</em>) but <em><strong>not</strong></em> for <code>df1.loc[d0]</code> (if it works for a string, why not <code>date</code>?!)</p> <p>Is this the expected behavior (a <em>bug</em> with <em>tenure</em>)? Is this intentional?</p> <p>PS. <a href="https://github.com/pandas-dev/pandas/issues/58815" rel="nofollow noreferrer">Reported</a>.</p>
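One plausible reading (hedged — the linked pandas issue is the authoritative place for this): a `DatetimeIndex` stores `Timestamp` values and additionally supports *partial string indexing*, so string keys get parsed specially, while a `datetime.date` is neither a `Timestamp` nor a string and falls through to an exact-key lookup that finds nothing. Normalizing every date-like key through `pd.Timestamp` sidesteps the asymmetry:

```python
import datetime
import pandas as pd

d0 = datetime.date(2024, 5, 5)

# A DatetimeIndex holds Timestamps; strings are parsed via partial
# string indexing, but a raw datetime.date is used as an exact key.
df1 = pd.DataFrame({"a": [1, 2]},
                   index=pd.to_datetime(["2024-05-05", "2024-05-10"]))

print(df1.loc[pd.Timestamp(d0), "a"])   # exact Timestamp lookup
print(df1.loc["2024-05-05", "a"])       # parsed string lookup

# Portable habit: convert any date-like key before .loc
key = pd.Timestamp(d0)
assert df1.loc[key, "a"] == 1
```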
<python><pandas><dataframe><datetime><indexing>
2024-05-22 19:29:05
1
60,468
sds
78,519,594
2,934,290
How can I have a @classmethod of a generic class act on the type parameter?
<p>How can I call a classmethod by its generic only? Let's use following example:</p> <pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod from typing import Generic, TypeVar class Base(ABC): @classmethod @abstractmethod def test(cls): print('foo') class B(Base): @classmethod def test(cls): print('bar') T = TypeVar(&quot;T&quot;, bound=Base) class A(Generic[T]): def invoke_test(): T.test() a: A = A[B] a.invoke_test() </code></pre> <p>I would imagine the result <code>bar</code> beeing printed, but instead <code>'TypeVar' object has no attribute 'test'</code> will be risen. What am I missing? Python version 3.11.2 if that is of relevance.</p>
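At runtime `T` is just the `TypeVar` object — it never "becomes" `B`, which is exactly what the error says. Two further details: `invoke_test` needs a `self` (or `cls`) parameter, and `A[B]` without parentheses is the generic *alias*, not an instance; you would instantiate with `A[B]()`. One way to recover the type argument at runtime is `__orig_class__`, a CPython implementation detail set on instances created through a parametrized alias (set *after* `__init__`, so it cannot be used inside the constructor) — a trimmed sketch without the `ABC` machinery:

```python
from typing import Generic, TypeVar, get_args

class Base:
    @classmethod
    def test(cls):
        return "foo"

class B(Base):
    @classmethod
    def test(cls):
        return "bar"

T = TypeVar("T", bound=Base)

class A(Generic[T]):
    def invoke_test(self):
        # __orig_class__ is A[B] on instances built via the alias;
        # get_args() extracts the concrete type argument.
        (param,) = get_args(self.__orig_class__)
        return param.test()

a = A[B]()
print(a.invoke_test())   # bar
```

A more explicit (and implementation-independent) alternative is simply passing the class to `A.__init__` and storing it on the instance.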
<python>
2024-05-22 19:16:54
1
722
Bin4ry
78,519,418
2,986,153
How to create nested list columns in polars
<p>In R with tidyverse, using group_by() and nest() I can create list columns that are very useful for fitting models to multiple datasets.</p> <p>How can I achieve the same outcome in polars in Python, such that each element of the data column is a small dataframe?</p> <pre><code>df = pl.DataFrame( [ pl.Series(&quot;dataset_id&quot;, [1, 1, 2, 2, 1, 1, 2, 2], dtype=pl.Int64), pl.Series(&quot;day&quot;, [1, 2, 1, 2, 1, 2, 1, 2], dtype=pl.Int64), pl.Series(&quot;recipe&quot;, [1, 1, 1, 1, 2, 2, 2, 2], dtype=pl.Int64), pl.Series(&quot;cum_trials&quot;, [1000, 2000, 1000, 2000, 1000, 2000, 1000, 2000], dtype=pl.Int64), pl.Series(&quot;cum_events&quot;, [644, 1287, 643, 1262, 645, 1312, 655, 1301], dtype=pl.Int64), pl.Series(&quot;cum_rate&quot;, [0.644, 0.643, 0.643, 0.619, 0.645, 0.667, 0.655, 0.646], dtype=pl.Float64), ] ) </code></pre> <p>In R/tidyverse the syntax I want to replicate is</p> <pre><code>df_list &lt;- df |&gt; group_by(dataset_id, day) |&gt; nest() df_list </code></pre> <p><a href="https://i.sstatic.net/6fZeFbBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6fZeFbBM.png" alt="enter image description here" /></a></p> <pre><code>df_list$data[[1]] </code></pre> <p><a href="https://i.sstatic.net/tr0GPNhy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tr0GPNhy.png" alt="enter image description here" /></a></p>
<python><python-polars>
2024-05-22 18:34:32
1
3,836
Joe
78,519,254
850,781
How to expand a DataFrame by adding index values?
<p>I have a data frame</p> <pre><code>import pandas as pd df = pd.DataFrame({&quot;a&quot;:[1,2,3]},index=[5,7,6]) a 5 1 7 2 6 3 </code></pre> <p>and I want to add some &quot;missing&quot; index values:</p> <pre><code>pd.DataFrame(index=range(3,9)).join(df) a 3 NaN 4 NaN 5 1.0 6 3.0 7 2.0 8 NaN </code></pre> <p>Is <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer">DataFrame.join</a> really the way to go?</p> <p>PS. For some reason I thought that <code>df.index=range(3,9)</code> should work, but it does not...</p>
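`DataFrame.reindex` expresses this intent in one step (the `join` works too, but reindex says "align this frame to these labels" directly). Assigning `df.index = range(3, 9)` cannot work because that *relabels* the existing rows rather than aligning by label — and here the lengths do not even match. A sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]}, index=[5, 7, 6])
out = df.reindex(range(3, 9))   # align by label, insert NaN for new labels
print(out)
```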
<python><pandas><dataframe><join>
2024-05-22 17:56:19
1
60,468
sds
78,519,217
2,547,570
ContextVar MemoryLeak
<p>The following code has a memory leak and I don't understand why there are references to <code>MyObj</code>. <code>run(1)</code> and <code>run(2)</code> are finished, the context is cleared.</p> <pre class="lang-py prettyprint-override"><code>import asyncio import gc from contextvars import ContextVar import objgraph ctx = ContextVar('ctx') class MyObj: def __init__(self, value): self.value = value async def run(value): ctx.set(MyObj(value)) async def main(): await asyncio.gather(run(1), run(2)) gc.collect() print('# of MyObj=', objgraph.count('MyObj')) for obj in objgraph.by_type('MyObj'): print('MyObj.value-',obj.value) print('outer ctx=', ctx.get(None)) objgraph.show_backrefs(objgraph.by_type('MyObj'), filename='ctx.png', max_depth=5) asyncio.run(main()) </code></pre> <pre><code># of MyObj= 2 MyObj.value= 2 MyObj.value= 1 outer ctx= None </code></pre> <p><a href="https://i.sstatic.net/M6VWrrIp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6VWrrIp.png" alt="objgraph" /></a></p> <p>Why are there still 'finished tasks' even though I manually called <code>gc.collect()</code>?</p> <p>Similar questions</p> <ul> <li><a href="https://stackoverflow.com/questions/66856654/what-happens-if-i-dont-reset-pythons-contextvars">What happens if I don&#39;t reset Python&#39;s ContextVars?</a></li> <li><a href="https://stackoverflow.com/questions/67062025/will-a-contextvar-leak-memory-in-async-logic-if-not-reset-after-exception">Will a ContextVar leak memory in async logic if not reset after Exception?</a></li> </ul>
<python><garbage-collection><python-asyncio><python-contextvars>
2024-05-22 17:45:42
1
1,319
mq7
78,519,123
6,915,206
ModuleNotFoundError: No module named 'django.utils.six.moves'
<p>While deploying my Django 3 code on AWS EC2 server I'm getting the error below. I uninstalled <strong>six</strong> many times and deleted the <strong>cache</strong> folder and installed different versions of <strong>six</strong> but none of them are working.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>May 22 17:06:33 ip-172-31-7-56 gunicorn[73506]: from storages.backends.s3boto3 import S3Boto3Storage May 22 17:06:33 ip-172-31-7-56 gunicorn[73506]: File "/home/ubuntu/lighthousemedia/.venv/lib/python3.12/site-packages/storages/backends/s3boto3.py", line 12&gt; May 22 17:06:33 ip-172-31-7-56 gunicorn[73506]: from django.utils.six.moves.urllib import parse as urlparse May 22 17:06:33 ip-172-31-7-56 gunicorn[73506]: ModuleNotFoundError: No module named 'django.utils.six.moves'</code></pre> </div> </div> </p>
<python><python-3.x><django><amazon-web-services><amazon-ec2>
2024-05-22 17:18:21
1
563
Rahul Verma
78,519,081
19,299,757
How to reconnect to a URL with Python Selenium
<p>I've a Selenium-Python test suite which is used to run regression tests for an UI application.</p> <pre><code>@pytest.fixture(scope='class') def setup(request): driver = None try: options = webdriver.ChromeOptions() options.add_argument('--no-sandbox') options.add_argument(&quot;--window-size=1920,1080&quot;) options.add_argument(&quot;--start-maximized&quot;) options.add_argument('--headless') options.add_argument('--ignore-certificate-errors') driver = webdriver.Chrome(options=options) request.cls.driver = driver driver.delete_all_cookies() driver.get(TestData_common.BASE_URL) except WebDriverException as e: print('UAT seems to be down...&gt; ', e) yield driver if driver is not None: print(&quot;Quitting the driver..&quot;) driver.quit() @pytest.mark.usefixtures(&quot;setup&quot;) @pytest.mark.incremental class Test_InvPayment: def test_001_login(self): &quot;&quot;&quot;Login to UAT and click AGREE button in Site Minder&quot;&quot;&quot; assert TestData_common.URL_FOUND, &quot;UAT seems to be down..&quot; self.loginPage = LoginPage(self.driver) self.loginPage.do_click_agree_button() assert TestData_common.AGREE_BTN_FOUND, &quot;Unable to click Site Minder AGREE btn&quot; self.driver.maximize_window() print('Successfully clicked AGREE button in SiteMinder') time.sleep(2) def test_002_enter_user_and_password(self): &quot;&quot;&quot;Enter User name for UAT&quot;&quot;&quot; self.loginPage = LoginPage(self.driver) TestData_common.UserEnabled = True self.loginPage.do_enter_user_name(TestData_common.USER_NAME_6, 'USER') assert TestData_common.UserEnabled, &quot;Unable to enter User Name..&quot; print('Entered User name:', TestData_common.USER_NAME_6) time.sleep(2) def test_003_enter_password(self): &quot;&quot;&quot;Enter Password for Treasury&quot;&quot;&quot; self.loginPage = LoginPage(self.driver) self.loginPage.do_enter_password(TestData_common.PASSWORD, 'PASS') assert TestData_common.PasswordEnabled, &quot;Unable to enter Password..&quot; print('Entered 
password:', TestData_common.PASSWORD) time.sleep(2) def test_004_click_login_button(self): &quot;&quot;&quot;Click LOGIN button&quot;&quot;&quot; self.loginPage = LoginPage(self.driver) self.loginPage.do_click_login_button() assert TestData_common.LOGIN_BTN_FOUND, &quot;Unable to click LOGIN button..&quot; print('Clicked LOGIN button') </code></pre> <p>This code works well most of the time.</p> <p>The SiteMinder application sits in front of the UAT application and is responsible for re-directing traffic to it; this is where I enter the user/pwd and log in to the app. Sometimes, for whatever reason, SiteMinder doesn't re-direct to the UAT app, and in those cases all the tests following test_004 fail. In those cases, I want to build a retry mechanism that launches the URL again and tries logging in one more time. Is there a way to do this here? All I want is that after clicking the login button in test_004, if the UAT application is not launched, control goes back to the login page.</p> <p>Any help is much appreciated.</p>
<python><selenium-webdriver>
2024-05-22 17:08:01
1
433
Ram
78,519,077
1,497,199
Python: async function without await
<p>Is this a true statement?</p> <blockquote> <p>An <code>async</code> Python function that does not include an <code>await</code> statement itself will not yield execution to any other <code>async</code> functions, even though it's invoked by an <code>await</code> call.</p> </blockquote> <p>FastAPI example:</p> <pre><code>from fastapi import FastAPI import some_arbitrary_library app = FastAPI() async def no_await(): # this can be any code provided it does not # include an await statement result = some_arbitrary_library.some_arbitrary_function() return result @app.get(&quot;/&quot;) async def root(): # Belief: this will not ever yield execution to any other # async operation (e.g. other async endpoints) because \ # the guts of the function 'no_await' do not await on anything result = await no_await() return {&quot;message&quot;: result} #... other app endpoints defined with async def </code></pre> <p>Is this a valid corollary to this?</p> <blockquote> <p>Defining a function <code>async</code> that does not include an <code>await</code> statement does not provide any performance enhancements over using the function in a synchronous manner.</p> </blockquote>
<python><async-await><fastapi>
2024-05-22 17:07:48
1
8,229
Dave
78,519,069
5,573,170
Variable visibility in python
<p>Maybe a stupid question but I'm new to python.</p> <p>The following code throws an error:</p> <pre><code>deck = [10, 2, 5, 8, 2, 7] deck_idx = 0 def deal(): card = deck[deck_idx] deck_idx += 1 return card print(deal()) </code></pre> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;main.py&quot;, line 10, in &lt;module&gt; print(deal()) File &quot;main.py&quot;, line 6, in deal card = deck[deck_idx] UnboundLocalError: local variable 'deck_idx' referenced before assignment </code></pre> <p>while this doesn't:</p> <pre><code>deck = [10, 2, 5, 8, 2, 7] deck_idx = 0 def deal(): card = deck[0] return card print(deal()) </code></pre> <blockquote> <p>10</p> </blockquote> <p>both <code>deck</code> and <code>deck_idx</code> are declared in the same place, so I expect them to have the same visibility.</p> <p>What am I missing here?</p>
<python><scope>
2024-05-22 17:06:28
0
1,601
Disti
78,519,044
11,751,799
`matplotlib` figure text automatically adjusting position to the top left
<p>When I run the following code, I have no issue.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.scatter([1, 2, 3], [4, 5, 6]) fig.text( 0, 1, s = &quot;ABCD&quot;, ha = &quot;left&quot;, va = &quot;bottom&quot;, transform = fig.transFigure ) fig.patch.set_linewidth(5) fig.patch.set_edgecolor(&quot;green&quot;) plt.show() plt.close() </code></pre> <p>My goal is to have that <code>ABCD</code> in the top left, higher than everything else in the figure and to the left of everything else in the figure, and that is exactly what happens.</p> <p><a href="https://i.sstatic.net/51r0leMH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51r0leMH.png" alt="correct plot" /></a></p> <p>When I run this, there is a problem.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.scatter([1, 2, 3], [4, 5, 6]) ax.set_ylabel(&quot;1\n2\n3\n4\n5\n6\n7&quot;) ax.set_title(&quot;a\nb\nc\nd\ne\nf&quot;) fig.text( 0, 1, s = &quot;ABCD&quot;, ha = &quot;left&quot;, va = &quot;bottom&quot;, transform = fig.transFigure ) fig.patch.set_linewidth(5) fig.patch.set_edgecolor(&quot;green&quot;) plt.show() plt.close() </code></pre> <p><a href="https://i.sstatic.net/iV3UUltj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iV3UUltj.png" alt="incorrect plot" /></a></p> <p>The goal is to have the <code>ABCD</code> higher than everything else in the figure, but the title displays higher than the <code>ABCD</code>. Likewise, the <code>ABCD</code> is supposed to be to the left of everything else in the figure, yet the y-axis label is to the left of the <code>ABCD</code>.</p> <p>Curiously, the border has no issues adjusting to how wide or how tall the figure gets, just this text label.</p> <p>How can I get the <code>ABCD</code> text label to display in the upper left corner, no matter what I do with the axis labels or main titles? 
I need the bottom of the text to be above everything else in the figure and the right of the text to be to the left of everything else in the figure like I have in the first image.</p>
<python><matplotlib><plot><graphics>
2024-05-22 17:00:38
1
500
Dave
78,518,976
6,642,051
Can I update the type annotation of a variable over time?
<p>I'm making a Blender addon, and part of the flow of one of its functions involves de-duplicating vertices and returning vertex and index lists. Part of the algorithm I'm using involves setting duplicate elements to None, then clearing those elements later using a comprehension. My problem comes from the fact that I'd like to update the type annotation of my vertex list over the course of the function to express that it can contain None, then later back to only containing vertices.</p> <pre class="lang-py prettyprint-override"><code>def deduplicate(vertices: list[Vertex]) # gives error &quot;Declaration &quot;vertices&quot; is obscured by a declaration of the same name&quot; vertices: list[Vertex|None] = vertices.copy() # de-duplication logic ... # vertices: list[Vertex] = [v for v in vertices if v != None] </code></pre> <p>I know I could just use different variables for each step or opt to not have type checking enabled, but the first feels cumbersome and un-pythonic, and the second leaves me without the benefits of the editor checking over my work. Did I miss something or can the type of a variable really never be updated?</p>
<python>
2024-05-22 16:47:09
2
444
ProgramGamer
78,518,954
1,686,814
Unwanted spaces in format string when running Python code in snakemake
<p>I have a project that was running fine until I had to update the conda environment. Suddenly, certain strings got extra spaces around variables that are inserted in the string with <code>f&quot;...&quot;</code>.</p> <p>Consider this:</p> <pre><code>foo = &quot;bar&quot; print(f&quot;foo is now:{foo}.&quot;) </code></pre> <p>The correct output (the one I am getting with the previous conda environment) is <code>foo is now:bar.</code>. In the new environment, however, I get <code>foo is now: bar .</code>.</p> <p>The previous environment had python version 3.10.9 and snakemake version 7.19.1. The new environment has python version 3.12.3 and snakemake version 7.19.1.</p> <p>Here is a minimal snakefile that showcases this behavior (at least in my environment):</p> <pre><code>rule all: run: foo = &quot;bar&quot; print(f&quot;foo is now:{foo}.&quot;) </code></pre>
<python><snakemake>
2024-05-22 16:42:37
0
17,210
January
78,518,777
294,657
Rolling mean over an unstacked, multi-index
<p>I have a CSV file like the following:</p> <pre><code>Animal,Day,Action,Seconds dog,1,eat,10 dog,1,play,20 dog,1,drink,30 cat,1,eat,18 cat,1,play,28 cat,1,drink,21 rabbit,1,eat,34 rabbit,1,play,19 rabbit,1,drink,29 dog,2,eat,20 dog,2,play,20 dog,2,drink,10 cat,2,eat,28 cat,2,play,38 cat,2,drink,31 rabbit,2,eat,24 rabbit,2,play,34 rabbit,2,drink,30 dog,3,eat,30 dog,3,play,20 dog,3,drink,26 cat,3,eat,11 cat,3,play,22 cat,3,drink,32 rabbit,3,eat,50 rabbit,3,play,20 rabbit,3,drink,10 </code></pre> <p>Representing how much has each animal spent doing each action each day (e.g. the dog spent 10s eating on day 1).</p> <p>Using Pandas, how can I show a rolling average of Seconds over 3 days, per Animal, per Action? I want the output to look something like this:</p> <pre><code> Seconds rolling_drink rolling_eat rolling play Action drink eat play Animal Day dog 1 30 10 20 30 10 . 2 10 20 34 20 15 . 3 26 30 34 22 20 . cat 1 21 18 28 21* . . 2 31 28 38 26 . . 3 32 11 22 28 . . rabbit 1 29 37 19 . . . 2 30 27 34 . . . 3 10 50 20 . . . </code></pre> <p>Notes:<br /> . = real value, but no point in calculating each<br /> * = to draw your attention to the rolling mean &quot;resetting&quot;</p> <p>The main point being that rolling resets when the animal changes, i.e. the numbers from different animals aren't rolled together.</p> <p>Here's what I tried:</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_csv('data.csv') x = df.set_index(['Animal', 'Day', 'Action']).unstack('Action') x['rolling_eat'] = x[('Seconds', 'eat')].rolling(window=3, min_periods=1).mean() print(x) </code></pre> <p>This <em>almost</em> gave me what I wanted, but rolling doesn't restart when the animal changes. I'd assume if I indexed over <code>Animal</code> and <code>Day</code>, I'd get a rolling mean per <code>Animal</code> per <code>Day</code>. But nope. And I fail to understand what I'm doing wrong.</p>
<python><pandas><dataframe>
2024-05-22 16:06:48
1
16,064
kaqqao
78,518,728
10,787,371
How to specify a generic over TypedDict?
<p>I want to have a class, that has the following members:</p> <ul> <li>function <code>f</code>, that takes argument of type <code>In</code>, and <code>kwargs</code> specified by <code>Parameters</code></li> <li>function <code>make_parameters</code> that returns <code>Parameters</code>, that can be directly fed into <code>f</code></li> </ul> <p>The code I have now is:</p> <pre class="lang-py prettyprint-override"><code>In = TypeVar(&quot;In&quot;, contravariant=True) Out = TypeVar(&quot;Out&quot;, covariant=True) Parameters = TypeVar(&quot;Parameters&quot;, bound=TypedDict) class Function(Generic[In, Out, Parameters]): class FProtocol(Protocol): def __call__(self, x: In, **parameters: Unpack[Parameters]) -&gt; Out: ... def __init__( self, f: FProtocol, make_parameters: Callable[[], Parameters], ) -&gt; None: super().__init__() self._f = f self._make_parameters = make_parameters def __call__(self, x: In) -&gt; Tuple[Out, Parameters]: parameters = self._make_parameters() return self._f(x, **parameters), parameters </code></pre> <p>However, type hints don't work:</p> <ul> <li>I can't bind <code>Parameters</code> by <code>TypedDict</code></li> <li><code>Unpack</code> expects a <code>TypedDict</code></li> </ul> <p>I could just make parameters a positional argument, but I would prefer it being more natural, with parameters being named arguments.</p> <p>I would want to be able to use my class in the following way:</p> <pre class="lang-py prettyprint-override"><code>def example_f(x: float, a: float, b: float) -&gt; float: return a * x + b class ExampleParameters(TypedDict): a: float b: float f = Function[float, float, ExampleParameters](example_f, lambda: {&quot;a&quot;: 1.0, &quot;b&quot;: 2.0}) </code></pre>
<python><generics><python-typing>
2024-05-22 15:56:53
1
707
aurelia
78,518,643
10,331,807
How to make a grid of plots that include subplots?
<p>Hi, I want to create a grid of plots where each plot has a subplot. The reason for this is that one of my columns (area1) has very high values compared to the others (area2, area3), therefore making the rest look negligible.</p> <p>Here is my current code:</p> <pre><code>import matplotlib.pyplot as plt import seaborn as sns import pandas as pd data1 = { 'Year': [2010, 2011, 2012, 2013, 2014, 2015], 'area1': [800000, 810000, 820000, 830000, 840000, 850000], 'area2': [1000, 1100, 1200, 1300, 1400, 1500], 'area3': [500, 600, 700, 800, 900, 1000]} population = pd.DataFrame(data1) data2 = { 'Year': [2010, 2011, 2012, 2013, 2014, 2015], 'area1': [15000, 16000, 17000, 18000, 19000, 20000], 'area2': [400, 450, 460, 460, 470, 480], 'area3': [200, 250, 260, 270, 280, 290]} houses = pd.DataFrame(data2) fig, axes = plt.subplots(1, 2, figsize=(18, 8)) # Population sns.lineplot(data=population, x='Year', y='area1', ax=axes[0], marker='o', linewidth=2.5, color='#9c2643', label='area1') sns.lineplot(data=population, x='Year', y='area2', ax=axes[0], marker='o', linewidth=2.5, color='#1d2a6b', label='area2') sns.lineplot(data=population, x='Year', y='area3', ax=axes[0], marker='o', linewidth=2.5, color='#e77148', label='area3') axes[0].set_xlabel('Year') axes[0].set_ylabel('Population') axes[0].set_title('Population / Year') axes[0].legend_.remove() # Houses sns.lineplot(data=houses, x='Year', y='area1', ax=axes[1], marker='o', linewidth=2.5, color='#9c2643', label='area1') sns.lineplot(data=houses, x='Year', y='area2', ax=axes[1], marker='o', linewidth=2.5, color='#1d2a6b', label='area2') sns.lineplot(data=houses, x='Year', y='area3', ax=axes[1], marker='o', linewidth=2.5, color='#e77148', label='area3') axes[1].set_xlabel('Year') axes[1].set_ylabel('Number of houses') axes[1].set_title('Number of houses / Year') axes[1].legend_.remove() plt.show() </code></pre> <p><a href="https://i.sstatic.net/AJgC6Tf8.png" rel="nofollow noreferrer"><img 
src="https://i.sstatic.net/AJgC6Tf8.png" alt="enter image description here" /></a></p> <p>I have managed to achieve what I need when I want to plot one attribute at a time. Here is my code:</p> <pre><code>sns.set(font='sans-serif', font_scale=1.4) sns.set_style('whitegrid') sns.despine() # Create figure and adjust spines plt.figure(figsize=(12, 8)) # Define height ratios for subplots height_ratios = [0.5, 2] # Main plot takes twice the space of the second plot # Create subplots with specified height ratios gs = plt.GridSpec(2, 1, height_ratios=height_ratios) # Create subplot for high values ax0 = plt.subplot(gs[0]) # Second subplot (High Values Plot) sns.lineplot(data=population, x='Year', y='area1', marker='o', linewidth=2.5, color='black', linestyle='dashed', label='area1') plt.ylabel('Population') #Common title plt.title('Population', fontweight=&quot;bold&quot;) # Remove the legend from the first subplot ax0.legend().remove() # Hide x-axis labels and ticks ax0.set_xticks([]) # Remove x-axis ticks ax0.xaxis.set_tick_params(labelbottom=False) # Hide x-axis labels # Remove x-axis label ax0.set_xlabel('') # Main plot ax1 = plt.subplot(gs[1]) # First subplot (Main plot) sns.lineplot(data=population, x='Year', y='area2', marker='o', linewidth=2.5, color='#9c2643', label='area2') sns.lineplot(data=population, x='Year', y='area3', marker='o', linewidth=2.5, color='#1d2a6b', label='area3') #plt.title('Main Plot') plt.xlabel('Year') plt.ylabel('Population') # Common y-axis label plt.ylabel('Population', labelpad=20) plt.xlabel('Year', labelpad=20) # Create a common legend outside the subplots handles0, labels0 = ax0.get_legend_handles_labels() handles1, labels1 = ax1.get_legend_handles_labels() handles = handles0 + handles1 labels = labels0 + labels1 plt.tight_layout() # Adjust layout to prevent overlapping plt.show() </code></pre> <p><a href="https://i.sstatic.net/mdK2iKXD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mdK2iKXD.png" alt="enter image 
description here" /></a></p> <p>However, I cannot apply something similar when I want to create the grid of the plots. I need to plot multiple attributes so I want a solution that can support the grid of plots. Any suggestions?</p>
<python><matplotlib><seaborn><matplotlib-gridspec>
2024-05-22 15:40:06
0
305
Anas.S
78,518,562
3,973,269
Azure app function - stuck on using storage blob on localhost
<p>I have a very basic Azure Function app that runs well when deployed, but locally, when I send a POST request, it just gets stuck and eventually the whole host shuts down with the message shown below. When I comment out the line <code>from azure.storage.blob import BlobServiceClient</code>, everything works and a response is returned.</p> <p>My code:</p> <pre><code>import azure.functions as func from azure.storage.blob import BlobServiceClient def main(req: func.HttpRequest) -&gt; func.HttpResponse: return func.HttpResponse( body=&quot;response&quot;, status_code=200 ) </code></pre> <p>The function.json file:</p> <pre><code>{ &quot;scriptFile&quot;: &quot;__init__.py&quot;, &quot;bindings&quot;: [ { &quot;authLevel&quot;: &quot;function&quot;, &quot;type&quot;: &quot;httpTrigger&quot;, &quot;direction&quot;: &quot;in&quot;, &quot;name&quot;: &quot;req&quot;, &quot;methods&quot;: [ &quot;post&quot; ] }, { &quot;type&quot;: &quot;http&quot;, &quot;direction&quot;: &quot;out&quot;, &quot;name&quot;: &quot;$return&quot; } ] } </code></pre> <p>The command to trigger:</p> <pre><code>curl -X POST http://localhost:7071/api/TestFunc -H &quot;Content-Type:application/json&quot; -v -o - </code></pre> <p>The command window gets stuck in this operation. The Azure environment is started in a venv using <code>func host start</code>. The Azure terminal shows: <code>Executing 'Functions.TestFunc' (Reason='This function was programmatically called via the host APIs.</code> and nothing else after that.</p> <p>Is there any more info that I can provide to help find the solution? Does someone know what I can do to solve this issue? 
Having to deploy everytime to test is annoying.</p> <p>pip list:</p> <pre><code>azure-common 1.1.28 azure-communication-email 1.0.0 azure-core 1.30.1 azure-cosmos 4.6.0 azure-eventhub 5.11.7 azure-functions 1.19.0 azure-mgmt-core 1.4.0 azure-storage-blob 12.20.0 azure-storage-queue 12.9.0 certifi 2024.2.2 cffi 1.16.0 charset-normalizer 3.3.2 colorclass 2.2.2 cryptography 42.0.5 docopt 0.6.2 idna 3.7 isodate 0.6.1 msrest 0.7.1 mysql-connector 2.2.9 numpy 1.26.4 oauthlib 3.2.2 packaging 24.0 pandas 2.2.2 pip 24.0 pip-upgrader 1.4.15 pycparser 2.22 python-dateutil 2.9.0.post0 python-http-client 3.3.7 pytz 2024.1 requests 2.31.0 requests-oauthlib 2.0.0 sendgrid 6.11.0 setuptools 65.5.0 six 1.16.0 starkbank-ecdsa 2.2.0 terminaltables 3.1.10 typing_extensions 4.11.0 tzdata 2024.1 urllib3 2.2.1 </code></pre>
<python><azure-functions><azure-blob-storage>
2024-05-22 15:23:06
1
569
Mart
78,518,498
7,441,757
Using fsspec with aws profile_name
<p>With S3fs we can set</p> <pre><code>fs = s3fs.S3FileSystem(profile=profile_name) </code></pre> <p>However this passing doesn't work for fsspec with caching:</p> <pre><code> fs = fsspec.filesystem( &quot;filecache&quot;, target_protocol=&quot;s3&quot;, cache_storage=&quot;some/dir&quot;, cache_check=10, expiry_time=60, profile=profile_name, ) </code></pre> <p>Of course we can use <code>AWS_PROFILE</code> to circumvent fsspec, but is there any way to pass it onwards?</p>
<python><python-s3fs><fsspec>
2024-05-22 15:11:07
1
5,199
Roelant
78,517,765
273,593
explicit tuple generic expansion
<p>Simplified scenario (<a href="https://mypy-play.net/?mypy=latest&amp;python=3.12&amp;gist=765b3582ed8b99e2a546fe1bfb4cd0e3" rel="nofollow noreferrer">playground</a>):</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar, reveal_type class A: ... class B(A): ... class C(A): ... T = TypeVar('T', bound=A) def fun(*args: T) -&gt; tuple[T, ...]: return args # one parameter reveal_type(fun(A())) # Revealed type is &quot;builtins.tuple[gargs.A, ...]&quot; reveal_type(fun(B())) # Revealed type is &quot;builtins.tuple[gargs.B, ...]&quot; reveal_type(fun(C())) # Revealed type is &quot;builtins.tuple[gargs.C, ...]&quot; # multiple parameters, same type reveal_type(fun(A(), A())) # Revealed type is &quot;builtins.tuple[gargs.A, ...]&quot; reveal_type(fun(B(), B())) # Revealed type is &quot;builtins.tuple[gargs.B, ...]&quot; reveal_type(fun(C(), C())) # Revealed type is &quot;builtins.tuple[gargs.C, ...]&quot; # multiple parameters, different types reveal_type(fun(A(), B())) # Revealed type is &quot;builtins.tuple[gargs.A, ...]&quot; reveal_type(fun(B(), C())) # Revealed type is &quot;builtins.tuple[gargs.A, ...]&quot; reveal_type(fun(C(), A())) # Revealed type is &quot;builtins.tuple[gargs.A, ...]&quot; </code></pre> <p>is there a way to annotate <code>fun</code> to teach <code>mypy</code> that <code>fun</code> should return <code>tuple[A, B]</code>, <code>tuple[B, C]</code> and <code>tuple[C, A]</code> in the three latter cases?</p>
<python><mypy><python-typing>
2024-05-22 13:07:11
1
1,703
Vito De Tullio
78,517,713
66,191
Python MySQL mypy typing complaining about incorrect types
<p>I'm using Python with mysql-connector and am running my code through mypy, and it's complaining about types in a way I can't seem to get around.</p> <p>For instance, I have code like this:</p> <pre><code> today = datetime.now().date() mysql_cursor = mysql_connection.cursor() mysql_cursor.execute( &quot;select holiday_date from Holidays&quot;, {} ) rows = mysql_cursor.fetchall() for row in rows: holiday_date = row[ 0 ] if holiday_date &lt;= today: print( holiday_date ) </code></pre> <p>The Holidays table has a single column:</p> <pre><code> holiday_date DATE NOT NULL </code></pre> <p>so I know that what will be returned will be Python date objects, but mypy is complaining...</p> <pre><code>error: Unsupported operand types for &lt;= (&quot;Decimal&quot; and &quot;date&quot;) [operator] error: Unsupported operand types for &lt;= (&quot;bytes&quot; and &quot;date&quot;) [operator] error: Unsupported operand types for &lt;= (&quot;float&quot; and &quot;date&quot;) [operator] error: Unsupported operand types for &lt;= (&quot;int&quot; and &quot;date&quot;) [operator] error: Unsupported operand types for &lt;= (&quot;set[str]&quot; and &quot;date&quot;) [operator] error: Unsupported operand types for &lt;= (&quot;str&quot; and &quot;date&quot;) [operator] error: Unsupported operand types for &lt;= (&quot;timedelta&quot; and &quot;date&quot;) [operator] error: Unsupported operand types for &lt;= (&quot;time&quot; and &quot;date&quot;) [operator] note: Left operand is of type &quot;Any | Decimal | bytes | date | float | int | set[str] | str | timedelta | time | None&quot; </code></pre> <p>I know that the column being returned is &quot;typed&quot; as described above, but I'm not sure how to coerce mypy into believing that the value is a date.</p> <p>Any ideas?</p>
<python><python-3.x><mysql-connector>
2024-05-22 12:57:52
1
2,975
ScaryAardvark
78,517,650
3,412,316
Bulk insert rasters into GPKG
<p>I wonder if there is a way to bulk insert TIFF rasters into a geopackage using python or gdal, to increase write performance.</p> <p>GDAL needs to open an SQLite connection per added raster file (subdataset, RASTER_TABLE), which prevents you from using a single transaction to insert multiple rasters.</p> <p>Concurrency doesn't seem to work with gdal_translate (<code>failed: database is locked ERROR 1: Raster table XXX not correctly initialized due to missing call to SetGeoTransform()</code>). Ex:</p> <pre class="lang-bash prettyprint-override"><code>gdal_translate src_1.tiff dst.gpkg -of GPKG -co APPEND_SUBDATASET=YES -co RASTER_TABLE=src_1 &lt;== creation gdal_translate src_2.tiff dst.gpkg -of GPKG -co APPEND_SUBDATASET=YES -co RASTER_TABLE=src_2&amp; &lt;== parallel insertion attempt gdal_translate src_3.tiff dst.gpkg -of GPKG -co APPEND_SUBDATASET=YES -co RASTER_TABLE=src_3&amp; gdal_translate src_4.tiff dst.gpkg -of GPKG -co APPEND_SUBDATASET=YES -co RASTER_TABLE=src_4&amp; gdal_translate src_5.tiff dst.gpkg -of GPKG -co APPEND_SUBDATASET=YES -co RASTER_TABLE=src_5&amp; </code></pre> <p>GDAL's GPKG driver doesn't seem to support WAL mode.</p>
<python><gdal><geopackage>
2024-05-22 12:47:22
0
2,193
Kiruahxh
78,517,594
2,304,735
How to read only two numbers from python STDIN Function?
<p>I want to take two numbers through the stdin python function and output their total.</p> <pre><code>import sys num1 = None num2 = None print('Please Enter Two Numbers to Find their Sum or Enter q to quit') for line in sys.stdin: if('q'==line.strip()): break else: num = int(line.strip()) if(num1 == None): num1 = num elif(num2 == None): num2 = num else: break total = num1 + num2 print(total) </code></pre> <p>I want the code to do the following:</p> <pre><code>Please Enter Two Numbers to Find their Sum or Enter q to quit 12 11 23 </code></pre> <p>When I run the above code it takes 3 numbers and then gives me the total of the first two numbers.</p> <pre><code>Please Enter Two Numbers to Find their Sum or Enter q to quit 12 11 13 23 </code></pre> <p>I only want it to take 2 numbers and output their total. What do I need to change in the code to do so?</p>
<python><python-3.x>
2024-05-22 12:35:59
2
515
Mahmoud Abdel-Rahman
78,517,501
6,862,405
Unordered y-axis in multiline chart using dash
<p>I am new to dash and struggling to create a multi line chart. I have a data and plotted it using dash and created a multiline chart. The reproducible code is given below-</p> <pre><code>import pandas as pd import plotly.express as px from dash import Dash, dcc, html, callback, Output, Input df = pd.DataFrame({ 'foo': ['ab','ab','ab','ab', 'es','es','es','es', 'la','la','la','la', 'kl','kl','kl','kl', 'ys','ys','ys','ys'], 'months': [1,3,6,12,1,3,6,12,1,3,6,12,1,3,6,12,1,3,6,12], 'metric': [60.25,70.96,104.72,194.33,60.27,72.52,117.66,234.53,61.17,72.30,118.84,239.15,60.76,72.53,118.39,239.00,59.00,65.41,78.04,123.49] }) app = Dash() app.layout = (html.Div([ html.H1(children='Metric assessment', style={'textAlign':'center'}), html.Div([&quot;Enter multiplier for metric &quot;, dcc.Input( id=&quot;some_number&quot;, type=&quot;number&quot;, placeholder=1, min = 1, max = 20, style={&quot;margin-left&quot;: &quot;15px&quot;} ), dcc.Graph(id='graph-content') ]) ]) ) @callback( Output('graph-content', 'figure'), Input(&quot;some_number&quot;, &quot;value&quot;) ) def update_graph(some_number): dff = df.copy(deep = True) if not some_number: some_number = 1 dff['Total metric'] = dff['metric']*some_number dff['Total metric'] = dff['Total metric'].map('{:,.2f}'.format) # dff = dff.sort_values(by = 'Total metric') fig = px.line(dff, x='months', y='Total metric', color = 'foo', markers=True, title = 'Metric vs months') return fig if __name__ == '__main__': app.run(debug=True) </code></pre> <p>When I see the plot, the y axis values are unordered as shown below- <a href="https://i.sstatic.net/otTJQQA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/otTJQQA4.png" alt="enter image description here" /></a></p> <p>Even after sorting the dataframe on 'Total metric' column, the issue persists. What can I do to remove this issue and get the correct graph?</p>
<python><plotly-dash>
2024-05-22 12:20:04
1
831
Ankit Seth
78,517,342
2,955,884
np.save() works with reproducible example but not mith my own numpy array ("NotImplementedError")
<p>np.save() works with a reproducible example but not with my data. How can I save my numpy array as binary?</p> <p>np.save works in this reproducible example:</p> <pre><code>a = np.arange(12, dtype='float32').reshape(2,2,3) a.shape Out[42]: (2, 2, 3) a.dtype Out[43]: dtype('float32') np.save(&quot;delme&quot;,a) </code></pre> <p>But not with my numpy array variable_data:</p> <pre><code>variable_data.shape Out[45]: (3500, 480, 640) variable_data.dtype Out[46]: dtype('float32') ma.is_masked(variable_data) Out[52]: False </code></pre> <p>Then, I get an error:</p> <pre><code>np.save(&quot;delme2&quot;, variable_data) --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) Cell In[41], line 3 1 variable_data.shape 2 variable_data.dtype ----&gt; 3 np.save(&quot;delme2&quot;, variable_data) File ~/miniforge3/envs/sediment/lib/python3.9/site-packages/numpy/lib/npyio.py:546, in save(file, arr, allow_pickle, fix_imports) 544 with file_ctx as fid: 545 arr = np.asanyarray(arr) --&gt; 546 format.write_array(fid, arr, allow_pickle=allow_pickle, 547 pickle_kwargs=dict(fix_imports=fix_imports)) File ~/miniforge3/envs/sediment/lib/python3.9/site-packages/numpy/lib/format.py:730, in write_array(fp, array, version, allow_pickle, pickle_kwargs) 728 else: 729 if isfileobj(fp): --&gt; 730 array.tofile(fp) 731 else: 732 for chunk in numpy.nditer( 733 array, flags=['external_loop', 'buffered', 'zerosize_ok'], 734 buffersize=buffersize, order='C'): File ~/miniforge3/envs/sediment/lib/python3.9/site-packages/numpy/ma/core.py:6221, in MaskedArray.tofile(self, fid, sep, format) 6208 def tofile(self, fid, sep=&quot;&quot;, format=&quot;%s&quot;): 6209 &quot;&quot;&quot; 6210 Save a masked array to a file in binary format. 6211 (...) 6219 6220 &quot;&quot;&quot; -&gt; 6221 raise NotImplementedError(&quot;MaskedArray.tofile() not implemented yet.&quot;) NotImplementedError: MaskedArray.tofile() not implemented yet. 
</code></pre>
<python><numpy>
2024-05-22 11:50:51
2
576
user2955884
78,517,203
13,506,329
Unable to import function from custom package built using pyproject.toml
<p>This is my project structure</p> <pre><code>pkg_root └───venv └───requirements.txt └───pyproject.toml ├───src │ ├───pkg │ │ ├───__init__.py │ │ ├───test_main.py │ │ ├───config │ │ │ ├───json │ │ ├───data │ │ │ ├───input │ │ │ │ └───data_file │ │ │ └───output │ │ │ ├───data_file │ │ ├───sub_pkg_1 │ │ │ ├───__init__.py │ │ │ ├───sub_pkg_1_1 │ │ │ │ ├───__init__.py │ │ │ │ ├───sub_pkg_1_1_a │ │ │ │ │ └─── __init__.py │ │ │ │ │ └─── file.py │ │ │ ├───sub_pkg_1_2 │ │ │ │ └─── __init__.py │ │ │ │ └─── file.py │ │ │ ├───sub_pkg_1_3 │ │ │ │ ├───__init__.py │ │ │ │ ├───sub_pkg_1_3_a │ │ │ │ │ ├───__init__.py │ │ │ │ │ ├───file.py </code></pre> <p>And this is my <code>pyproject.toml</code></p> <pre><code>[build-system] requires = [&quot;setuptools&gt;=67.0.0&quot;, &quot;wheel&quot;] build-backend = &quot;setuptools.build_meta&quot; [tool.setuptools] package-dir = {&quot;&quot; = &quot;src&quot;} [tool.setuptools.packages.find] where = [&quot;src&quot;] [project] name = &quot;pkg&quot; version = &quot;1.0.0&quot; dynamic = [&quot;dependencies&quot;] [tool.setuptools.dynamic] dependencies = {file = [&quot;requirements.txt&quot;]} </code></pre> <p>Running <code>pip install .</code> installs the package in my global Python interpreter in spite of being inside a virtual environment.</p> <p>I opened an <code>idle</code> shell and imported my package and wrote the following code and running it gave me an attribute error.</p> <pre><code>&gt;&gt; import pkg &gt;&gt; pkg.test_main.generate_data() Traceback (most recent call last): File &quot;&lt;pyshell#1&gt;&quot;, line 1, in &lt;module&gt; pkg.test_main.generate_data() AttributeError: module 'pkg' has no attribute 'test_main' </code></pre> <p>Running the command <code>print(pkg)</code> gives the following message</p> <pre><code>&lt;module 'pkg' from 'path:\\to\\site-packages\\pkg\\__init__.py'&gt; </code></pre> <p>The desired behaviour is to import subpackages/modules from <code>pkg</code>, but from what I can understand, 
<code>pkg</code> is being treated as an object. Where am I going wrong?</p>
<python><python-3.x><setuptools><python-packaging><pyproject.toml>
2024-05-22 11:26:46
1
388
Lihka_nonem
78,517,171
8,654,320
How to locate image on a large image with different target size
<p>I am trying to use <code>pyautogui.locateOnScreen(btn, confidence=0.8, grayscale=True)</code> to locate an image, but if the search image has a different size, it cannot locate it.</p> <p>In my case, I want to locate the confirm button on screen. I take a screenshot of the button on site A (a small size) and try to locate it on site B, but it didn't work.</p> <p>I want to detect the position of this button on different websites. Except for the size, all other parts are the same, including color, shape, and button text.</p> <p>I tried <code>cv2.matchTemplate(scr_img, search_image, cv2.TM_CCOEFF_NORMED)</code> but it didn't work either.</p> <p>Is there another way?</p> <hr /> <p>with pyautogui.locateOnScreen:</p> <pre><code>import pyautogui from PIL import Image confirm_btn_img = Image.open(&quot;./img/confirm_btn.png&quot;) try: l, t, w, h = pyautogui.locateOnScreen(confirm_btn_img, confidence=0.8, grayscale=True) except Exception as e: print(&quot;Not Found&quot;) </code></pre> <p>with cv2.matchTemplate:</p> <pre><code>import cv2 import numpy as np import pyautogui confirm_btn_img2 = cv2.imread(&quot;./img/confirm_btn.png&quot;, 0) big_image = pyautogui.screenshot() big_image = cv2.cvtColor(np.array(big_image), cv2.COLOR_RGB2GRAY) res = cv2.matchTemplate(big_image, confirm_btn_img2, cv2.TM_CCOEFF_NORMED) threshold = 0.8 loc = np.where(res &gt;= threshold) print(&quot;loc====&quot;, loc) for pt in zip(*loc[::-1]): cv2.rectangle(big_image, pt, (pt[0] + confirm_btn_img2.shape[1], pt[1] + confirm_btn_img2.shape[0]), (0, 0, 255), 2) cv2.imshow('Detected', big_image) cv2.waitKey(0) cv2.destroyAllWindows() </code></pre> <p>can be found:<br /> <a href="https://i.sstatic.net/fzP0Urs6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzP0Urs6.png" alt="the template can found position" /></a></p> <p>cannot be found:<br /> <a href="https://i.sstatic.net/A2qP4DR8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2qP4DR8.png" alt="cannot found" /></a></p> <p>target image:<br /> <a 
href="https://i.sstatic.net/lMr3fj9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lMr3fj9F.png" alt="target image" /></a></p> <p>example code:</p> <pre><code>import cv2 import numpy as np confirm_btn_img2 = cv2.imread(&quot;./test/ok.png&quot;, 0) big_image = cv2.imread(&quot;./test/can_found_tpl.png&quot;, 0) res = cv2.matchTemplate(big_image, confirm_btn_img2, cv2.TM_CCOEFF_NORMED) threshold = 0.8 loc = np.where(res &gt;= threshold) print(&quot;loc====&quot;, loc) for pt in zip(*loc[::-1]): cv2.rectangle(big_image, pt, (pt[0] + confirm_btn_img2.shape[1], pt[1] + confirm_btn_img2.shape[0]), (0, 0, 255), 2) cv2.imshow('Detected', big_image) cv2.waitKey(0) cv2.destroyAllWindows() </code></pre>
<python><opencv><pyautogui>
2024-05-22 11:21:37
1
1,185
afraid.jpg
78,517,021
6,695,608
Prevent a class variable from being modified outside the class
<p>I have the following class where I have defined a couple of class variables and a helper class method where I am updating the value of these class variables based on some condition.</p> <pre><code>import datetime class Foo: _UPDATED_AT: datetime = datetime.datetime(2000, 1, 1) STATUS: bool = False @classmethod def bar(cls): # update the class variables from here only, based on some condition if some_cond: cls.STATUS = True cls._UPDATED_AT = datetime.datetime.now() return cls.STATUS </code></pre> <p>I want to 1) prevent these class variables from being updated (i.e. being set) directly outside of the class, 2) but be able to update the classvar from the class method, so semantically something behaving like a private class variable in C++.</p> <p>I was able to achieve the first of the two requirements by using a metaclass which basically blocks attribute setting:</p> <pre><code>class MyMeta(type): def __setattr__(cls, key, value): raise AttributeError(f&quot;Can't set {cls}.{key}&quot;) </code></pre> <p>However, as expected, this also prevents updates from happening within the class. I am not sure if this is possible to achieve with pure Python, but basically I guess I somehow need to detect whether the update is happening from inside or outside of the class.</p> <p>I know we can mark the variables as private in Python with a leading underscore (I already have one in my class) that can tell consumers of your code that these are variables meant for internal use and are not supposed to be used directly by the consumer.</p> <p>However, I am just interested out of curiosity whether this is somehow achievable?</p>
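For curiosity's sake, one way to tell "inside" from "outside" is to inspect the caller's stack frame. This is a sketch of a heuristic, not production advice: the check that the calling frame's local `cls` is this class only covers writes made from a classmethod, and frame introspection is CPython-specific.

```python
import inspect

class FrozenMeta(type):
    def __setattr__(cls, key, value):
        frame = inspect.currentframe().f_back
        # Heuristic: allow the write only when the calling frame looks like
        # a classmethod of this class (its local name `cls` is this class).
        if frame is not None and frame.f_locals.get("cls") is cls:
            super().__setattr__(key, value)
        else:
            raise AttributeError(f"Can't set {cls.__name__}.{key}")

class Foo(metaclass=FrozenMeta):
    STATUS: bool = False

    @classmethod
    def bar(cls):
        cls.STATUS = True  # permitted: runs inside a classmethod of Foo
        return cls.STATUS

print(Foo.bar())        # True
try:
    Foo.STATUS = False  # blocked: assignment from module level
except AttributeError as exc:
    print(exc)
```

Like all frame-based tricks, this is advisory rather than airtight — determined callers can still use `type.__setattr__(Foo, ...)` directly.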
<python>
2024-05-22 10:52:39
1
4,308
Rohit
78,516,967
4,586,008
How to log the peak memory usage of Jupyter Notebook
<p>I have a Jupyter Notebook on cloud for a long-running job. The job uses considerable RAM, thus I assign a high-memory (and expensive) machine for it.</p> <p>However, I know that the maximum RAM required is more or less constant among runs, so I want to know the RAM usage at its peak and switch to a cheaper machine with just the right amount of RAM. But since the job takes a long time to run, I do not want to sit there and watch, nor find out via trial and error with machines of different specs.</p> <p>Is there a way to keep a log of the peak RAM used by the job?</p>
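One low-effort approach (a sketch, assuming a Unix machine — the `resource` module is POSIX-only, and `ru_maxrss` is KiB on Linux but bytes on macOS) is a daemon thread that periodically appends the kernel's own peak-RSS counter to a file, so no babysitting is needed:

```python
import resource
import threading
import time

def log_peak_memory(interval=5.0, logfile="mem_peak.log"):
    """Append the process's peak RSS to `logfile` every `interval` seconds.

    ru_maxrss is tracked by the OS for the whole process lifetime, so the
    last line written after the job finishes IS the peak.
    """
    def worker():
        while True:
            peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
            with open(logfile, "a") as f:
                f.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')} peak_ru_maxrss={peak}\n")
            time.sleep(interval)

    t = threading.Thread(target=worker, daemon=True)  # dies with the kernel process
    t.start()
    return t
```

Calling `log_peak_memory()` in the first cell is enough; note this measures only the notebook kernel's process, not any worker subprocesses it spawns.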
<python><memory><jupyter-notebook>
2024-05-22 10:42:20
0
640
lpounng
78,516,819
258,825
How/where does pyca/cryptography "embed" OpenSSL dependency?
<p>I'm using cryptography in a project that is targeting both EL8 and EL9 (we're using Rocky Linux 8/9). We're using python3.9 in both cases. The problem/doubt I'm having is that OpenSSL v3 is not available on EL8 (unless built from sources which is not something we're doing), yet when I install cryptography v42, it reports that it is running against OpenSSL v3:</p> <pre><code>$ docker run -it --rm rockylinux:8 /bin/bash $ openssl version OpenSSL 1.1.1k FIPS 25 Mar 2021 $ yum install -y python39 python39-pip $ pip3 install cryptography $ python3 &gt;&gt;&gt; import cryptography.hazmat.backends.openssl &gt;&gt;&gt; cryptography.hazmat.backends.openssl.backend.openssl_version_text() 'OpenSSL 3.2.1 30 Jan 2024' </code></pre> <p>My initial interpretation of what is going on here was that pip is installing a wheel that has been built against OpenSSL v3 headers which is the version that <code>openssl_version_text()</code> is reporting, yet the documentation for the <code>openssl_version_text()</code> <a href="https://cryptography.io/en/42.0.7/openssl/#cryptography.hazmat.backends.openssl.openssl_version_text" rel="nofollow noreferrer">clearly states</a> that it is the run-time version of the OpenSSL:</p> <blockquote> <p>This is not necessarily the same version as it was compiled against.</p> </blockquote> <p>I've browsed both the <a href="https://pypi.org/project/cryptography/#files" rel="nofollow noreferrer">Pypi wheel</a> and the installed package (<code>pip3 show cryptography</code> -&gt; <code>ls -la /usr/local/lib64/python3.9/site-packages</code>) but only saw a bunch of <code>.pyc</code> files and none of the binaries/dynamic libraries.</p> <p>I'm not familiar with the nuances of the <code>pyca/cryptography</code> build process and with how Python wheels work. Does anyone know what is actually happening here and can explain it?</p>
<python><python-wheel>
2024-05-22 10:17:38
1
566
IvanR
78,516,801
10,242,990
How to get data validation
<p>Use case: I am iterating through rows in a Google sheet and taking some actions dependent on the values in different fields.</p> <pre><code> client = authorize() worksheets_data = get_worksheet(client, CAREEM_SHEET, &quot;UAE&quot;) sheet_data = SheetData(worksheets_data) rows = sheet_data.get_all_values() for row in rows[2:]: # Skip header and template row channel_link_id = row[sheet_data.get_column_index(ActivationSheets.CAREEM_CHANNEL_LINK_ID.value)] careem_mapped = row[sheet_data.get_column_index(ActivationSheets.CAREEM_MAPPED.value)] menu_pushed = row[sheet_data.get_column_index(ActivationSheets.CAREEM_MENU_PUSHED.value)] created = row[sheet_data.get_column_index(ActivationSheets.CAREEM_DRECT_CREATED.value)] if (careem_mapped == &quot;TRUE&quot; and menu_pushed != &quot;Yes&quot; and created == &quot;TRUE&quot;): print(row) </code></pre> <p>Issue: menu_pushed is a drop-down in the sheet using data validation. This does not seem to pick up any values using the get_all_values() function.</p> <p>Do you know how I can access the data validation from a cell? All I can seem to find in the docs is set_data_validation().</p> <p><a href="https://i.sstatic.net/MAI4anpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MAI4anpB.png" alt="Example of sheet" /></a></p> <p>Example output from print statement -</p> <p>['6093be05481dce29eaac6bfe', 'TRUE', 'FALSE', '1/23/2022 13:45:47', '']</p> <p>My SheetData class -</p> <pre><code>class SheetData: def __init__(self, worksheet: pygsheets.Worksheet): self._worksheet = worksheet self._header_indexes = {} self._all_values = None self._populate_fields() def _populate_fields(self): self._all_values = self._worksheet.get_all_values() headers = self._all_values[0] self._header_indexes = {header: index for index, header in enumerate(headers)} @property def header_indexes(self): return self._header_indexes def get_all_values(self): &quot;&quot;&quot; :return: all cell values from within the supplied worksheet.
&quot;&quot;&quot; return self._worksheet.get_all_values() def authorize() -&gt; pygsheets.client.Client: &quot;&quot;&quot; Authorizes the pygsheets client using a service account credentials file. :param credentials_file: str location of Google Service Account credentials file. :returns: A pygsheets.Client object. :raises pygsheets.exceptions.PyGsheetsException: If the authorization process fails. &quot;&quot;&quot; return pygsheets.authorize(service_account_json=SPREADSHEET_SERVICE_ACCOUNT) def get_worksheet(client: pygsheets.client.Client, spreadsheet_url: str, worksheet_name: str) -&gt; pygsheets.Worksheet: &quot;&quot;&quot; Fetches a worksheet from a Google Sheets spreadsheet. :param client: pygsheets.Client object. :param spreadsheet_url: str URL of the spreadsheet. :param worksheet_name: str name of the worksheet to retrieve. :returns: A pygsheets.Worksheet object representing the specified worksheet. :raises pygsheets.exceptions.PyGsheetsException: If the worksheet retrieval fails. &quot;&quot;&quot; spreadsheet = client.open_by_url(spreadsheet_url) return spreadsheet.worksheet_by_title(worksheet_name) </code></pre>
<python><validation><google-sheets><pygsheets>
2024-05-22 10:13:23
1
1,649
lammyalex
78,516,674
2,657,641
matplotlib: precise inset plot location and size with transformations
<p>I have created a plot with several &quot;inset&quot; axes in matplotlib with the intent of visualizing the meaning of error bars. Please consider the following example:</p> <pre><code>import matplotlib.pyplot as plt from numpy import linspace, random from scipy.stats import norm from mpl_toolkits.axes_grid1.inset_locator import inset_axes num_experiments = 10 data_points = random.randn(num_experiments) x_values = range(num_experiments) fig, ax = plt.subplots(figsize=(12, 6)) ax.set_ylim(-5,5) ax.set_xticks(x_values, [&quot;Experiment {:d}&quot;.format(i+1) for i in range(num_experiments)], rotation='vertical') ax.errorbar(x_values, data_points, yerr=1, fmt='o', capsize=5) ax.axhline(y=0, color='gray', linestyle='--') ax.set_ylabel('Measured Value') inset_width = 2.5 for i, (x, y) in enumerate(zip(x_values, data_points)): # Calculate inset position trans_x, trans_y = fig.transFigure.inverted().transform(ax.transData.transform((x, y))) trans_h, trans_w = fig.transFigure.inverted().transform(ax.transData.transform((0.5, inset_width))) - fig.transFigure.inverted().transform(ax.transData.transform((0, 0))) inset_ax = fig.add_axes([trans_x, trans_y - trans_w, trans_h, 2*trans_w]) x_gauss = linspace(-inset_width, inset_width, 100) y_gauss = norm.pdf(x_gauss) inset_ax.plot(y_gauss, x_gauss, color='blue') inset_ax.fill_betweenx(x_gauss, y_gauss, where=(x_gauss &gt;= -1) &amp; (x_gauss &lt;= 1), color='blue', alpha=0.3) inset_ax.axis('off') plt.xticks(rotation=45) plt.show() </code></pre> <p>The result looks like this: <a href="https://i.sstatic.net/xnIqvDiI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xnIqvDiI.png" alt="illustration" /></a></p> <p>As you might be able to see, the shown 67% confidence intervals on the little gaussians don't exactly agree with the +/-1 error bands I plot in the main part of the plot.</p> <p>Why is that? Is there a better, more elegant way to get the transformation right? Is this a bug? 
Am I just missing some hidden padding somewhere that I need to disable?</p> <p>Any suggestions are highly appreciated!</p>
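For what it's worth, newer matplotlib can place an inset directly in data coordinates via <code>Axes.inset_axes</code> with <code>transform=ax.transData</code>, which sidesteps the manual figure-coordinate round trip entirely. This is a sketch with made-up bounds, not the original error-bar plot:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 6))
ax.set_xlim(-1, 10)
ax.set_ylim(-5, 5)

# Bounds are (x, y, width, height) *in data units*: a 0.5-wide, 2-tall
# inset whose lower-left corner sits exactly at the data point (3, -1).
inset_ax = ax.inset_axes([3, -1, 0.5, 2], transform=ax.transData)
inset_ax.axis("off")

fig.canvas.draw()
renderer = fig.canvas.get_renderer()
print(inset_ax.get_window_extent(renderer).bounds)
```

Because the inset is anchored through `ax.transData`, its pixel position is recomputed at draw time, so it also survives later changes to the axes limits — unlike coordinates converted once by hand.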
<python><matplotlib><statistics>
2024-05-22 09:53:09
1
1,385
carsten
78,516,650
4,247,881
Perform aggregation using min,max,avg on all columns
<p>I have a dataframe like</p> <pre><code>┌─────────────────────┬───────────┬───────────┬───────────┬───────────┬──────┐ │ ts ┆ 646150 ┆ 646151 ┆ 646154 ┆ 646153 ┆ week │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ i8 │ ╞═════════════════════╪═══════════╪═══════════╪═══════════╪═══════════╪══════╡ │ 2024-02-01 00:00:00 ┆ 24.490348 ┆ 65.088941 ┆ 53.545259 ┆ 13.499832 ┆ 5 │ │ 2024-02-01 01:00:00 ┆ 15.054187 ┆ 63.095247 ┆ 60.786479 ┆ 29.538156 ┆ 5 │ │ 2024-02-01 02:00:00 ┆ 24.54212 ┆ 63.880298 ┆ 57.535928 ┆ 24.840966 ┆ 5 │ │ 2024-02-01 03:00:00 ┆ 24.85621 ┆ 69.778516 ┆ 67.57284 ┆ 24.672476 ┆ 5 │ │ 2024-02-01 04:00:00 ┆ 21.21628 ┆ 61.137849 ┆ 55.231299 ┆ 16.648383 ┆ 5 │ │ … ┆ … ┆ … ┆ … ┆ … ┆ … │ │ 2024-02-29 19:00:00 ┆ 23.17318 ┆ 62.590752 ┆ 72.026908 ┆ 24.614523 ┆ 9 │ │ 2024-02-29 20:00:00 ┆ 23.86416 ┆ 64.87102 ┆ 61.023656 ┆ 20.095353 ┆ 9 │ │ 2024-02-29 21:00:00 ┆ 18.553397 ┆ 67.530137 ┆ 63.477737 ┆ 17.313834 ┆ 9 │ │ 2024-02-29 22:00:00 ┆ 22.339175 ┆ 67.456563 ┆ 62.552035 ┆ 20.880844 ┆ 9 │ │ 2024-02-29 23:00:00 ┆ 15.5809 ┆ 66.774367 ┆ 57.066264 ┆ 29.529057 ┆ 9 │ └─────────────────────┴───────────┴───────────┴───────────┴───────────┴──────┘ </code></pre> <p>which is generated as follows</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from datetime import datetime, timedelta def generate_test_data(): # Function to generate hourly timestamps for a month def generate_hourly_timestamps(start_date, end_date): current = start_date while current &lt;= end_date: yield current current += timedelta(hours=1) # Define the date range start_date = datetime(2024, 2, 1) end_date = datetime(2024, 2, 29, 23, 0, 0) # February 29th 23:00 for a leap year # Generate the data timestamps = list(generate_hourly_timestamps(start_date, end_date)) num_hours = len(timestamps) data = { &quot;ts&quot;: timestamps, &quot;646150&quot;: np.random.uniform(15, 25, num_hours), # Random temperature data between 15 and 25 &quot;646151&quot;: 
np.random.uniform(60, 70, num_hours), # Random humidity data between 60 and 70 &quot;646154&quot;: np.random.uniform(50, 75, num_hours), # Random sensor data between 50 and 75 &quot;646153&quot;: np.random.uniform(10, 30, num_hours) # Random sensor data between 10 and 30 } df = pl.DataFrame(data) df = df.with_columns(pl.col(&quot;ts&quot;).cast(pl.Datetime)) return df df = generate_test_data() # Add a week column df = df.with_columns((pl.col(&quot;ts&quot;).dt.week()).alias(&quot;week&quot;)) </code></pre> <p>I would like to group by week or some other time intervals and aggregate using <code>min</code>, <code>mean</code>, and <code>max</code>. For this, I could do something like</p> <pre class="lang-py prettyprint-override"><code># Group by week and calculate min, max, and avg aggregated_df = df.groupby(&quot;week&quot;).agg([ pl.col(&quot;646150&quot;).min().alias(&quot;646150_min&quot;), pl.col(&quot;646150&quot;).max().alias(&quot;646150_max&quot;), pl.col(&quot;646150&quot;).mean().alias(&quot;646150_avg&quot;), pl.col(&quot;646151&quot;).min().alias(&quot;646151_min&quot;), pl.col(&quot;646151&quot;).max().alias(&quot;646151_max&quot;), pl.col(&quot;646151&quot;).mean().alias(&quot;646151_avg&quot;), pl.col(&quot;646154&quot;).min().alias(&quot;646154_min&quot;), pl.col(&quot;646154&quot;).max().alias(&quot;646154_max&quot;), pl.col(&quot;646154&quot;).mean().alias(&quot;646154_avg&quot;), pl.col(&quot;646153&quot;).min().alias(&quot;646153_min&quot;), pl.col(&quot;646153&quot;).max().alias(&quot;646153_max&quot;), pl.col(&quot;646153&quot;).mean().alias(&quot;646153_avg&quot;) ]) </code></pre> <p>but I would like to avoid specifying the column names.</p> <p>I would like to generate the dataframe like below where the column value is a list or tuples or some other multiple value format that holds the min, max, avg values.</p> <pre><code>┌─────────────────────┬──────────────────┬──────────────────┐ │ week ┆ 646150 ┆ 646151 │ │ --- ┆ --- ┆ --- │ │ i8 ┆ List[f64] 
┆ List[f64] │ ╞═════════════════════╪══════════════════╪══════════════════╡ │ 5 ┆ [24.1,26.3,25.0] ┆ [22.1,23.3,22.5] │ │ … ┆ … ┆ … ┆ └─────────────────────┴──────────────────┴──────────────────┘ </code></pre> <p>Is this possible in polars ?</p> <p>Thanks</p>
<python><dataframe><python-polars>
2024-05-22 09:49:01
3
972
Glenn Pierce
78,515,954
16,569,183
Suppressing recommendations on AttributeError - Python 3.12
<p>I like to use <code>__getattr__</code> to give objects optional properties while avoiding issues with <code>None</code>. This is a simple example:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any class CelestialBody: def __init__(self, name: str, mass: float | None = None) -&gt; None: self.name = name self._mass = mass return None def __getattr__(self, name: str) -&gt; Any: if f&quot;_{name}&quot; not in self.__dict__: raise AttributeError(f&quot;Attribute {name} not found&quot;) out = getattr(self, f&quot;_{name}&quot;) if out is None: raise AttributeError(f&quot;Attribute {name} not set for {self.name}&quot;) return out </code></pre> <p>My problem is that Python tries to be nice when I raise my custom <code>AttributeError</code> and exposes the &quot;private&quot; attribute for which I am creating the interface:</p> <pre><code>AttributeError: Attribute mass not set for Earth. Did you mean: '_mass'? </code></pre> <p>Is there a standard way to suppress these recommendations?</p>
<python><python-3.12>
2024-05-22 07:37:43
2
313
alfonsoSR
78,515,887
4,453,737
How to get printed response from async function in Python
<p>I want to capture the printed output of a sync function. For that I am using <code>StringIO</code>.</p> <pre><code>import sys from io import StringIO def sample(): print('started to fetch log!') print('response:body') print('status:200') def get_log(): sys.stdout = StringIO() sample() log = sys.stdout.getvalue() sys.stdout = sys.__stdout__ return log # in sync call, out = get_log() print(out) started to fetch log! response:body status:200 </code></pre> <p>But in an async call I am unable to get it. I created an async wrapper for this sync function:</p> <pre><code>async def get(): return asyncio.run(get_log()) </code></pre> <p>This returns None. asyncio cannot run the synchronous function as a coroutine, and without running it I am unable to get the <code>print</code> response.</p> <p>Note: I need the async wrapper to cope with existing logic.</p>
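For reference, a sketch of one way to make this work: <code>contextlib.redirect_stdout</code> (scoped and exception-safe, unlike swapping <code>sys.stdout</code> by hand) for the capture, and <code>asyncio.to_thread</code> (Python 3.9+) so the sync capture stays callable from async code.

```python
import asyncio
import contextlib
import io

def sample():
    print('started to fetch log!')
    print('response:body')
    print('status:200')

def get_log():
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):  # restored automatically on exit
        sample()
    return buf.getvalue()

async def get():
    # Run the blocking sync capture in a worker thread; the wrapper stays
    # awaitable for the existing async call sites.
    return await asyncio.to_thread(get_log)

print(asyncio.run(get()))
```

One caveat: the redirect is process-global, so two tasks capturing stdout concurrently would see each other's prints interleaved.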
<python><python-asyncio><stringio>
2024-05-22 07:22:25
1
20,393
Mohideen bin Mohammed
78,515,729
10,200,497
How can I compare a value in one column to all values that are BEFORE it in another column?
<p>My DataFrame:</p> <pre><code>import pandas as pd df = pd.DataFrame( { 'high': [100, 102, 120, 150, 125, 115], 'target': [99, 105, 118, 108, 140, 141] } ) </code></pre> <p>Expected output is creating column <code>x</code>:</p> <pre><code> high target x 0 100 99 99 1 102 105 99 2 120 118 118 3 150 108 118 4 125 140 118 5 115 141 108 </code></pre> <p>I explain it row by row. I start from row <code>1</code> because it is easier to explain.</p> <p>102 should be compared with all of the <code>target</code>s that are before it or on the same row. That is, 102 should be compared with 99 and 105. Of these two values, I want to find the maximum value that is LESS THAN OR EQUAL to 102. The max value is 99. So this value is chosen for <code>x</code>.</p> <p>For row <code>2</code>, 120 should be compared with 99, 105 and 118. Again the same logic applies and the value for <code>x</code> is 118.</p> <p>For row <code>3</code>, 150 should be compared with 99, 105, 118 and 108. All of these values are less than 150 but the maximum value is 118.</p> <p>This is my attempt that failed:</p> <pre><code>import numpy as np bins = df.target.sort_values(ascending=True).to_list() bins = bins + [np.inf] labels = bins[:-1] df['x'] = pd.cut(df.high, include_lowest=True, right=False, labels=labels, bins=bins) </code></pre>
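A direct (if O(n²)) way to express the rule — for each row, the largest prior-or-current <code>target</code> not exceeding <code>high</code> — is an expanding scan over the prefix. This sketch uses the sample frame from the question:

```python
import pandas as pd

df = pd.DataFrame(
    {
        'high': [100, 102, 120, 150, 125, 115],
        'target': [99, 105, 118, 108, 140, 141],
    }
)

def best_leq(i):
    """Largest `target` among rows 0..i that does not exceed `high` at row i."""
    prior = df.loc[:i, 'target']          # .loc slicing is inclusive of row i
    eligible = prior[prior <= df.at[i, 'high']]
    return eligible.max() if not eligible.empty else pd.NA

df['x'] = [best_leq(i) for i in range(len(df))]
print(df)
```

For long frames, keeping the seen targets in a sorted container and bisecting for each `high` brings this down to O(n log n).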
<python><pandas><dataframe>
2024-05-22 06:46:31
4
2,679
AmirX
78,515,642
2,467,772
How to dump to pickle file for each iteration without saving in memory?
<p>I have Python code which saves dictionaries to a data array and dumps the array to a pickle file.</p> <p>This is the original code.</p> <pre><code>with torch.no_grad(): char_bboxes, char_scores, word_instances = charnet.find_charbox(im, scale_w, scale_h, original_w, original_h, word_boxes, os.path.splitext(im_name)[0]) ALL, BOX, new_dict = save_word_recognition(ALL, BOX, word_instances, os.path.splitext(im_name)[0], args.results_dir, cfg.RESULTS_SEPARATOR, pointsList, transcriptionsList ,im_original, gaussianTransformer) if 'gt' in new_dict: data.append(new_dict) with open('./data/roidb_'+ args.data_name+'.pik','wb') as f: pickle.dump(data,f) f.close() print(ALL, BOX, BOX/ALL) print(len(data)) </code></pre> <p>Since I have a lot of data, memory is not enough.</p> <p>How can I dump to the same pickle file on each iteration? The code is modified as below.</p> <pre><code>with torch.no_grad(): char_bboxes, char_scores, word_instances = charnet.find_charbox(im, scale_w, scale_h, original_w, original_h, word_boxes, os.path.splitext(im_name)[0]) ALL, BOX, new_dict = save_word_recognition(ALL, BOX, word_instances, os.path.splitext(im_name)[0], args.results_dir, cfg.RESULTS_SEPARATOR, pointsList, transcriptionsList ,im_original, gaussianTransformer) if 'gt' in new_dict: with open('./data/roidb_'+ args.data_name+'.pik','a+') as f: pickle.dump(new_dict,f) f.close() </code></pre> <p>I get the error below.</p> <pre><code>File &quot;code/utils/find_char_box.py&quot;, line 183, in &lt;module&gt; pickle.dump(new_dict,f) TypeError: write() argument must be str, not bytes </code></pre> <p>How can I save to pickle without the memory issue?</p>
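The immediate error is the file mode: pickle writes bytes, so the file must be opened in binary append mode (<code>'ab'</code>, not the text-mode <code>'a+'</code>). A minimal sketch of per-iteration dumping and streaming read-back (the records and path here are made up):

```python
import os
import pickle
import tempfile

path = os.path.join(tempfile.mkdtemp(), "roidb.pik")

records = [{"gt": i} for i in range(3)]
# One dump per iteration, binary append mode -- nothing accumulates in RAM.
for rec in records:
    with open(path, "ab") as f:
        pickle.dump(rec, f)

# Reading back: each dump is its own pickle frame, so call load repeatedly
# until EOF instead of a single load of one big list.
loaded = []
with open(path, "rb") as f:
    while True:
        try:
            loaded.append(pickle.load(f))
        except EOFError:
            break
print(loaded)
```

Note that a `with` block already closes the file, so the explicit `f.close()` in the question is redundant.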
<python><pickle>
2024-05-22 06:25:46
1
7,346
batuman
78,515,406
7,700,802
Changing values in a dataframe under growth change with year condition
<p>Suppose I have</p> <pre><code>sample = {&quot;Location+Type&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;], &quot;Year&quot;: [&quot;2010&quot;, &quot;2011&quot;, &quot;2012&quot;, &quot;2013&quot;, &quot;2010&quot;, &quot;2011&quot;, &quot;2012&quot;, &quot;2013&quot;], &quot;Price&quot;: [1, 2, 1, 3, 1, 1, 1, 1]} df_sample = pd.DataFrame(data=sample) df_sample['pct_pop'] = df_sample['Price'].pct_change() df_sample.head(8) </code></pre> <p>From say 2012, I want to make a change in the dataframes values following the pct_chage() values. In other words, if the percentage change is greater than 30 percent than I want to average the previous values with respect to the Location+Type column starting from the year of 2012.</p> <p>Thus the new dataframe would look like this</p> <pre><code>sample = {&quot;Location+Type&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;], &quot;Year&quot;: [&quot;2010&quot;, &quot;2011&quot;, &quot;2012&quot;, &quot;2013&quot;, &quot;2010&quot;, &quot;2011&quot;, &quot;2012&quot;, &quot;2013&quot;], &quot;Price&quot;: [1, 2, 1.5, 1.33, 1, 1, 1, 1]} df_sample = pd.DataFrame(data=sample) df_sample['pct_pop'] = df_sample['Price'].pct_change() df_sample.head(8) </code></pre> <p>Still having some issues following the answers for my particular data which is instead of changing from 2012 its 2023, here is the shared drive <a href="https://drive.google.com/drive/u/0/folders/11dh05Dq_VNfUNgALeMg7MI1keVQdN0ZN" rel="nofollow noreferrer">https://drive.google.com/drive/u/0/folders/11dh05Dq_VNfUNgALeMg7MI1keVQdN0ZN</a></p> <p>And here is the code modified</p> <pre><code>df_complete['Year'] = df_complete['Year'].astype(int) # just in case if &quot;Year&quot; holds strings def calculate_new_price(row, df): current_index = row.name # row number will be used to check if value is in the first of the DF 
# Check if row is not the first if current_index &gt; 0: # assign previous pct_pop value to variable: previous_pct_pop previous_pct_pop = df.loc[current_index - 1, 'pct_pop'] # Check if the year is 2023 or later and if the difference in pct_pop is 0.3 or more if row['Year'] &gt;= 2023 and abs(row['pct_pop'] - previous_pct_pop) &gt;= 0.3: # Filter previous years previous_data = df[(df['Year'] &lt; row['Year']) &amp; (df['Location+Type'] == row['Location+Type'])] # Calculate average if row above is not empty if not previous_data.empty: return previous_data['Median_Home_Value_prediction'].mean() else: return row['Median_Home_Value_prediction'] # if pct_pop change was smaller than 0.3 or year is older than 2012, - use default price. else: return row['Median_Home_Value_prediction'] # if index 0 (first row) use value from first row of col: Price return None # Apply the function to create the new_price column df_complete['Median_Home_Value_prediction_new'] = df_complete.apply(lambda row: calculate_new_price(row, df_complete), axis=1) </code></pre>
<python><pandas>
2024-05-22 05:04:33
4
480
Wolfy
78,515,242
487,993
Combining derivatives when taken with respect to the same variable
<p>Is there a way to collect derivatives w.r.t. the same symbol in a sum? There are several questions that go around this point, but I haven't found an answer for the simple situation:</p> <pre class="lang-py prettyprint-override"><code>from sympy import * x,y,z,t = symbols('x y z t') expr = z * (Derivative(x,t) + Derivative(y,t)) expr 'z*(Derivative(x, t) + Derivative(y, t))' expr.simplify(doit=False) 'z*(Derivative(x, t) + Derivative(y, t))' </code></pre> <p>I would expect to get</p> <pre class="lang-py prettyprint-override"><code>'z*Derivative(x+y, t)' </code></pre> <p>I have tried using the <code>replace</code> function to substitute derivation with multiplication, but going back is not really working due to the factor <code>z</code>being reorganized, or simplification functions failing to handle non-commutative symbols.</p>
<python><sympy><derivative><simplify>
2024-05-22 03:55:47
0
801
JuanPi
78,515,065
1,424,739
lxml.etree.tostring show both start and end tags for empty node
<pre><code>from lxml import etree tree = etree.XML('&lt;foo class=&quot;abc&quot;&gt;&lt;/foo&gt;') print(etree.tostring(tree, encoding='utf-8').decode('utf-8')) </code></pre> <p>The above code shows the following.</p> <pre><code>&lt;foo class=&quot;abc&quot;/&gt; </code></pre> <p>How can I make it not contract the empty element, but show <code>&lt;foo class=&quot;abc&quot;&gt;&lt;/foo&gt;</code> instead?</p>
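If I remember right, serialization methods that forbid the collapsed form can do this: canonical XML (C14N) always writes explicit end tags, and <code>method="html"</code> keeps them too for non-void elements. A sketch:

```python
from lxml import etree

tree = etree.XML('<foo class="abc"></foo>')

# Canonical XML never uses the self-closing <foo/> shorthand.
print(etree.tostring(tree, method="c14n").decode("utf-8"))

# The HTML serializer also emits start+end tags for non-void elements.
print(etree.tostring(tree, method="html").decode("utf-8"))
```

Note the side effects: C14N also normalizes attribute order and drops the XML declaration, and the HTML serializer applies HTML rules, so neither is a drop-in replacement for plain XML output in every case.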
<python><xml><lxml>
2024-05-22 02:26:23
1
14,083
user1424739
78,514,884
5,284,054
Python tkinter root close
<p>As an interim to my full application, I'm closing the window with buttons at different places in the file. Now originally, it works fine: I can close the application from both location within the file. However, I'm trying to clean the code up and have a better style.</p> <p>Here's the code I have (with the better style):</p> <pre><code>from tkinter import * from tkinter import ttk class Location: def __init__(self, root): root.title(&quot;Location&quot;) root.geometry('400x295') self.mainframe = ttk.Frame(root, padding=&quot;3 3 12 12&quot;) self.mainframe.grid(column = 0, row=0, sticky=(N, W, E, S)) root.columnconfigure(0, weight=1) root.rowconfigure(0, weight=1) confirm_button = ttk.Button(self.mainframe, text = 'CONFIRM', command = self.make_confirmation) confirm_button.grid(column=0, row=0) select_button = ttk.Button(self.mainframe, text = 'Geometry', command = root.quit) select_button.grid(column=0, row=1) def make_confirmation(self, *args): root.quit() def main_app(): root = Tk() Location(root) root.mainloop() if __name__ == &quot;__main__&quot;: main_app() </code></pre> <p>The &quot;Geometry&quot; button will close just fine.</p> <p>The &quot;Confirm&quot; button gives me a <code>NameError</code>.</p> <p>Note, <code>def make_confirmation</code> is in <code>class Location</code></p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\tkinter\__init__.py&quot;, line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File &quot;c:\Users\User\Documents\Python\Tutorials\Quit_wStyle.py&quot;, line 22, in make_confirmation root.quit() ^^^^ NameError: name 'root' is not defined </code></pre> <p>In my non-MWE, <code>make.confirmation</code> does a bit more and has arguments passed through. That all works fine. 
I know that because when I get rid of <code>def main_app():</code> and put it all in <code>if __name__ == &quot;__main__&quot;:</code>, then both <code>root.quit</code> calls work.</p> <p>In other words, this works:</p> <pre><code>from tkinter import * from tkinter import ttk class Location: def __init__(self, root): root.title(&quot;Location&quot;) root.geometry('400x295') self.mainframe = ttk.Frame(root, padding=&quot;3 3 12 12&quot;) self.mainframe.grid(column = 0, row=0, sticky=(N, W, E, S)) root.columnconfigure(0, weight=1) root.rowconfigure(0, weight=1) confirm_button = ttk.Button(self.mainframe, text = 'CONFIRM', command = self.make_confirmation) confirm_button.grid(column=0, row=0) select_button = ttk.Button(self.mainframe, text = 'Geometry', command = root.quit) select_button.grid(column=0, row=1) def make_confirmation(self, *args): root.quit() if __name__ == &quot;__main__&quot;: root = Tk() Location(root) root.mainloop() </code></pre> <p>Why is <code>command = root.quit</code> working?</p> <p>But <code>command = self.make_confirmation</code> does not work?</p>
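For the record, the usual pattern (a sketch, independent of the question's MWE) is to keep the Tk root as an instance attribute, so every method can reach it without relying on a module-level global name:

```python
class Location:
    def __init__(self, root):
        self.root = root  # store the widget on the instance

    def make_confirmation(self, *args):
        # Resolved through self, so no global `root` lookup is needed.
        self.root.quit()

# A stand-in object shows the wiring without opening a real Tk window.
class FakeRoot:
    def __init__(self):
        self.stopped = False

    def quit(self):
        self.stopped = True

fake = FakeRoot()
Location(fake).make_confirmation()
print(fake.stopped)  # True
```

This also explains the asymmetry in the question: `command = root.quit` binds the method object at construction time while `root` is still in scope, whereas `make_confirmation` looks `root` up again later, when only the global scope is searched.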
<python><tkinter>
2024-05-22 00:38:40
1
900
David Collins
78,514,849
2,593,878
pytorch matrix multiplication accuracy depends on tensor size
<p>I have the following code where I multiply tensor <code>X</code> by a matrix <code>C</code>. Depending on the size of <code>X</code> and whether <code>C</code> is attached to the computation graph, I get different results when I compare batched multiplication vs looping over each slice of <code>X</code>.</p> <pre><code>import torch from torch import nn for X,C in [(torch.rand(8, 50, 32), nn.Parameter(torch.randn(32,32))), (torch.rand(16, 50, 32), nn.Parameter(torch.randn(32,32))), (torch.rand(8, 50, 32), nn.Parameter(torch.randn(32,32)).detach()) ]: #multiply each entry A = torch.empty_like(X) for t in range(X.shape[1]): A[:,t,:] = (C @ X[:,t,:].unsqueeze(-1)).squeeze(-1) #multiply in batch A1 = (C @ X.unsqueeze(-1)).squeeze(-1) print('equal:', (A1 == A).all().item(), ', close:', torch.allclose(A1, A)) </code></pre> <p>Returns</p> <pre><code>equal: False , close: False equal: True , close: True equal: True , close: True </code></pre> <p>What's going on? I expect them to be equal in all three cases.</p> <p>For reference,</p> <pre><code>import sys, platform print('OS:', platform.platform()) print('Python:', sys.version) print('Pytorch:', torch.__version__) </code></pre> <p>gives:</p> <pre><code>OS: macOS-14.4.1-arm64-arm-64bit Python: 3.12.1 | packaged by conda-forge | (main, Dec 23 2023, 08:01:35) [Clang 16.0.6 ] Pytorch: 2.2.0 </code></pre>
<python><pytorch><floating-point><precision>
2024-05-22 00:13:50
1
7,392
dkv
78,514,800
2,986,153
Can't pip install pystan on Mac
<p>When I try to pip install pystan I get the following error:</p> <pre><code> error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. │ exit code: 1 ╰─&gt; [1 lines of output] Cython&gt;=0.22 and NumPy are required. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─&gt; See above for output. </code></pre> <p>I have confirmed that both Cython&gt;=0.22 and NumPy are already installed so I am not sure what step to take next.</p> <p>Perhaps the issue is that I have an M1 Mac. The pystan installation page lists x86-64 CPU as a requirement. <a href="https://pystan.readthedocs.io/en/latest/installation.html" rel="nofollow noreferrer">https://pystan.readthedocs.io/en/latest/installation.html</a></p>
<python><pystan>
2024-05-21 23:45:13
1
3,836
Joe
78,514,732
11,516,350
Flask blueprints: declaration on each module __init__.py vs inside routes files
<p>This is my project structure:</p> <p><a href="https://i.sstatic.net/l5JTRL9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l5JTRL9F.png" alt="project structure" /></a></p> <p>As you can see, I have two modules: transactions and categories. I'll use the categories module as an example (the transactions module contains exactly the same folders). The goal is to split the routes and modules based on the domain.</p> <p>I have two ways to register the blueprints. One of them is causing some weird problems that I don't understand, and here is my question: What exactly is happening?</p> <p><strong>Way 1</strong></p> <p>This is my method to register blueprints, called by my app factory:</p> <pre><code>def register_blueprints(app): # ... from categories.presentation.routes import categories_blueprint app.register_blueprint(categories_blueprint) # ... </code></pre> <p>Please, look exactly how I am importing categories blueprint. I am using &quot;from categories.presentation.routes ...&quot;. To do this, I have this inside categories routes file:</p> <pre><code># imports.... categories_blueprint = Blueprint('categories_blueprint', __name__, url_prefix='/categories') @categories_blueprint.route(blablabla....) # some routes </code></pre> <p><strong>Way 2</strong></p> <p>This is the original method I have been using, but it is causing a lot of problems.</p> <pre><code>def register_blueprints(app): ... from app.src.categories import categories_blueprint app.register_blueprint(categories_blueprint) ... </code></pre> <p>Pay attention about, in this way, I am using only &quot;from app.src.categories ...&quot; instead of the other &quot;from categories.presentation.routes ...&quot;</p> <p>How is it possible?</p> <p>Because I don't have the blueprint registration inside the categories/routes.py routes file. 
I have this inside the categories <code>__init__.py</code>:</p> <pre><code>from flask import Blueprint categories_blueprint = Blueprint('categories_blueprint', __name__, url_prefix='') from .presentation import routes </code></pre> <p><strong>Problems caused by way 2</strong></p> <p>If I import any content from the 'categories' package into any test, it causes an error.</p> <p>For example, this test works fine:</p> <pre><code>def test_create_first_category(client): response = client.post('/categories/dashboard', data={ 'name': 'New Category', 'description': 'This is a new category.' }, follow_redirects=True) assert response.status_code == 200 assert b&quot;Category successfully created!&quot; in response.data assert b&quot;New Category&quot; in response.data assert b&quot;This is a new category&quot; in response.data </code></pre> <p>But, if I write the same test like this:</p> <pre><code>from categories.domain.category import Category def test_create_first_category(client): category = Category(name=&quot;New Category&quot;, description=&quot;This is a new category.&quot;) response = client.post('/categories/dashboard', data={ 'name': category.name, 'description': category.description, }, follow_redirects=True) assert response.status_code == 200 assert b&quot;Category successfully created!&quot; in response.data assert category.name.encode() in response.data assert category.description.encode() in response.data </code></pre> <p>I have checked that the problem is the Category import. If I leave only the import, even if I am not using it inside the test, it still fails. This is the failure:</p> <p><a href="https://i.sstatic.net/ED4MeziZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ED4MeziZ.png" alt="Error in test" /></a></p> <p>If I use way 1, everything works fine.</p> <p>What is happening, and what is the difference between these two ways of registration?</p>
<python><flask><blueprint>
2024-05-21 23:18:42
0
1,347
UrbanoJVR
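Way 2's breakage is consistent with one package being importable under two different dotted paths (`app.src.categories` from the project root and plain `categories` from `app/src` on `sys.path`): Python executes the same files twice and every class exists as two distinct objects. A minimal, self-contained sketch of that effect — the layout, paths, and `Category` class here are a throwaway toy mirroring the question, not the actual project:

```python
import sys
import tempfile
from pathlib import Path

# Toy layout: root/app/src/categories/__init__.py, with BOTH the project root
# and root/app/src on sys.path (a common situation with test runners).
root = Path(tempfile.mkdtemp())
pkg = root / "app" / "src" / "categories"
pkg.mkdir(parents=True)
(root / "app" / "__init__.py").touch()
(root / "app" / "src" / "__init__.py").touch()
(pkg / "__init__.py").write_text("class Category:\n    pass\n")

sys.path.insert(0, str(root))                  # makes `app.src.categories` importable
sys.path.insert(0, str(root / "app" / "src"))  # makes plain `categories` importable

from app.src.categories import Category as A   # the "Way 2" spelling
from categories import Category as B           # the spelling used in the test file

# Same source file, executed twice under two module names: the classes are
# distinct objects, so identity-based machinery (isinstance checks, ORM
# model registries) sees two different things.
print(A is B)                      # False
print(A.__module__, B.__module__)  # app.src.categories categories
```

If the two `Category` copies were ORM models, importing the second one would re-register the mapping and could raise the kind of error the failing test screenshot suggests; the usual fix is to pick one canonical import path and use it everywhere, tests included — which is what Way 1 does.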
78,514,520
3,487,441
How to resolve "Unclosed client session" from aiohttp
<p>I'm pulling data from an API and populating Janusgraph DB running in the docker container per the installation site:</p> <pre><code>docker run --name janusgraph-default janusgraph/janusgraph:latest </code></pre> <p>The python script I'm using worked well at first, but won't connect at all now. I've removed and recreated the container. The error is:</p> <pre><code>aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host localhost:8182 ssl:default [Connect call failed ('::1', 8182, 0, 0)] Unclosed client session client_session: &lt;aiohttp.client.ClientSession object at 0x1118d2f70&gt; </code></pre> <p>I tried opening a python console to manually connect:</p> <pre><code>import gremlin_python.driver.serializer as srl from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection from gremlin_python.process.anonymous_traversal import traversal remote_conn = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g', message_serializer=srl.GraphSONSerializersV3d0()) g = traversal().with_remote(remote_conn) g.V().toList() </code></pre> <p>The result is the same as above.</p>
<python><aiohttp><janusgraph><tinkerpop3>
2024-05-21 21:56:14
1
1,361
gph
78,514,357
9,310,154
python requests different from curl request
<p>Hi I want to get some informations from one website. This is the curl command:</p> <pre><code>curl 'https://www.kleinanzeigen.de/s-iphone-13/k0' \ -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ -H 'accept-language: de-DE,de;q=0.9,ru-DE;q=0.8,ru;q=0.7,en-US;q=0.6,en;q=0.5' \ -H 'cache-control: no-cache' \ -H 'cookie: _clck=or3005%7C2%7Cfly%7C0%7C1602; CSRF-TOKEN=14dc299d-30f2-4d98-9ff4-c4fbe8f3e569; ak_bmsc=C1C8A8F7FF391E9820102E8CA4EBB694~000000000000000000000000000000~YAAQhmZWuGSaVnmPAQAA1pftnBc7BqlNSSkxUM+mIQNkQI3yuVFydinhQtGa20uPsi5nOxA4EWd/piq70XNXFMfMUS/ryvVLMfNDLkEIJ/voevw9rWS4yLRqSn8kGbnehmg3KwRO0VmTEmjPk7QKEZU6yUU6zGiCu5f6lFCyt+7FT+FmtaM2Hje0ugHMveYB+nqJgasPgrwgJggxSChqa/QpR4M6f9Sckw92ci6oQ2O3MzksBAlzMYeVGRXSDv7eNwk10ysBrBOOECdbYmieHjyHk8Toqub9nL2Rm7vdvtmx2nPUt+SiA4ZjkksVDw9I8Gc5YVOC2crta/zObr+6vZp5hR1B7WeJ7slQCtAOsbBYFz7tnqY7ex8fzEQKezCKvpBIw8LnT/GrwN9MnPYgAcp8mhTkRehrFgHHN1O0TmgSYduWXHusOW8E1SqPYGQeCuNC943d6jJ83MVACpX+AzFd4gJdgrPgTNstbfVsTq0xRp/mj4wnW5lviA==; ekConsentBucketTcf2=full2-exp; 
ekConsentTcf2={%22customVersion%22:6%2C%22encodedConsentString%22:%22CP-9ugAP-9ugAE1ACADEAzEgAP_gAAAAAAYgJepV5D7dbWFFcX53aPt0OY0X19DzIsQhBhSBAyAFgBOQ8JQA02EyNASgBiACEAAAoxZBAAFEHABEAQCAQAAEAADsIgQEgAAIIABEgBEQAAJYAAoKAAAgEAAIgAIFEgAAmBjQKdLmXUiAhIAACgAQACABAICAAgMAAAAIAAIAAAAAAgAAgABgIAIAAAAAARAAAAAAAAAAAAAAIAAACi0AGAAIJelIAMAAQS9JQAYAAgl6QgAwABBL0dABgACCXoyADAAEEvRUAGAAIJeiIAMAAQS9DQAYAAgl6EgAwABBL0A.YAAAAAAAA60A%22%2C%22googleConsentGiven%22:true%2C%22consentInterpretation%22:{%22googleAdvertisingFeaturesAllowed%22:true%2C%22googleAnalyticsAllowed%22:true%2C%22infonlineAllowed%22:true%2C%22theAdexAllowed%22:true%2C%22criteoAllowed%22:true%2C%22facebookAllowed%22:true%2C%22amazonAdvertisingAllowed%22:true%2C%22rtbHouseAllowed%22:true%2C%22linkedInAllowed%22:true%2C%22theTradeDeskAllowed%22:true%2C%22microsoftAdvertisingAllowed%22:true%2C%22pianoAllowed%22:true%2C%22renaultAllowed%22:false}}; iom_consent=0107ff0000&amp;1716324769286; _gid=GA1.2.1532272070.1716324769; __rtbh.uid=%7B%22eventType%22%3A%22uid%22%2C%22id%22%3A%22unknown%22%7D; clientId=496654825.1716324769; _gat=1; FPID=FPID2.2.AOuBPGhHzWT6LnAkpY5kTJhKrpSS%2FkCqyy4lC0XgstU%3D.1716324769; FPLC=q%2BEjZGxjkPyzi60Jabi2fnha46CbqeBJ6r5U2WbreeneTYQSQm2wogu4TCxyjhFwIpCvNTT5dL%2FIMuVk77ixOiL%2Btpwkr%2BhfOXvIFfaYo8901edWtL0mN0jGi%2FQbew%3D%3D; FPAU=1.2.1242298381.1716324771; FPGSID=1.1716324770.1716324770.G-6V6MWRHVN8.TCed7UzLYaViqx_51MOpqg; 
up=%7B%22ln%22%3A%22294028564%22%2C%22lln%22%3A%229813e7ea-faff-435e-a600-244146dd8da9%22%2C%22llstv%22%3A%22BLN-25381-ka-offboarding%3DB%7CBLN-23401_buyNow_in_chat%3DB%7CBLN-18532_highlight%3DA%7CBLN-23248_BuyNow_SB%3DB%7CBLN-22726_buyer_banner%3DB%7Cignite_web_better_session%3DC%7CEBAYKAD-3536_floor_ai%3DA%7Cignite_improve_session%3DC%7CBLN-24684-enc-brndg-data%3DA%7Cliberty_group%3DY%7CFLPRO-130-churn-reason%3DB%7CEKMO-100_reorder_postad%3DB%7CKLUE-274-financing%3DB%7Clws-aws-traffic%3DB%7CBLN-25450_Initial_message%3DB%7CBLN-24652_category_alert%3DB%7CBLN-21783_testingtime%3DB%7Caudex-awr-update%3DA%7CEBAYKAD-2252_group-assign%3DH%7CBLN-25216-new-user-badges%3DB%7CBLN-25556_INIT_MSG_V2%3DB%7CDESKTOP-promo-switch%3DA%7Caudex-libertyjs-reporting%3DA%7CPLC-189_plc-migration%3DB%7Caudex-libertyjs-update%3DA%7Cperformance-test-desktop%3DC%7CBIPHONE-9700_buy_now%3DB%7CEKMO-243_MyAdsC2b_ABC%3DA%7Cprebid-update%3DA%7CEKTNM-4424_DSA_AD_FLAG%3DA%7Cpubads-behind-consent%3DA%7CPLC-104_plc-login%3DB%7Cdesktop_payment_badge_SRP%3DR%22%2C%22ls%22%3A%22k%3Diphone%252013%22%7D; uh=%7B%22sh%22%3A%22k%3Diphone%252013%22%7D; bm_sv=0E46F6D4C2D15DDF4037D3219522F169~YAAQjGZWuLYmhJGPAQAAc7HtnBfptI0O/cZh+z71bjX7FWoEWfWi+7rSmJtAdtnHr2Yy6Sml0PivsexsxpEKRqSMQ6ykXGA2uoSw845qdTHuHFptR+QNvMHbvH/VH/cLCl5ACpcudk630IFOv5PevfMyfExV481WLWdi/qiNeZTnGPaMNkfNGtb6nRhJan8bo9pPBMP4ZvblfI33UGrSX3mmVIEuQ2CZ9w2o6QF97BCNHkRLzeHAyzVnysmuxchKgTS4GHCq~1; bm_sz=DC20044AB93A95DA69EEDF3CBC61933F~YAAQjGZWuLcmhJGPAQAAc7HtnBfghyhORTugDA+HeFjnh5DSij+T1QpBc1T3YOBc7JF2Kd/LUNEwSqqPtyRQMQqA6S5UoqG2w3rLfYD4LfF3p4Oe35m8blAYcoozf4uBqfBuD4kPWSAEdCCW4Qz3yPBOFwLGDO3Imik7kxYRH+0PpEGheEoo6JQSXKpdHrGRhLXOJecz1EKW/uRKZf2n35KDL7Q5uNdu4ZMkwKG4Tsv2b1lNYr5AsR+OyvPuTnHyGbyIo3IsqmoeL4ZDBvgdGlrbhcAyJdm6D8HFNyaNgCIPLyiH/nbL1Md2mVu99LlQHI/atUHs2Xls/fNrHeiEM14d1gRlmS+0t5wt2eB+C3k4KUgjrpSta8jR7hvBMHq+JWrZ71tUnLjVXgozyprHuWzgQYHSN8DQE9lVEBtuqbkyO0eQZmUPJoUBf/p7raJ7rGiX+R6P~3225913~3749176; _pbjs_userid_consent_data=1948453571612999; 
_sharedid=2cc48544-17d6-40f0-8055-433f3bd0eebb; ttd_TDID=5af2038d-4201-4381-a705-008be357e42b; _abck=0BE7DECF6A441CAED9FAE60DD4A2D439~-1~YAAQjGZWuGYnhJGPAQAAF7ntnAvIJaPDK0JcFilje/7MpZxzpI0e91nvMkptlAnNzcvbc/7RsC1cJb9t+U/5WcPd8vR5hPtB4B5DzzMkYqetbfONwbgM9rxJ7WzQZDmNXvHso6HgE1CoHrazlFHCdapmnPYQZf9rKTSLi3bpmBLXajRz9jguHHl/OSeKz60PNLlcnQKxgUbDiS2dOz9K/fdl5pYP8fD54/2/h6P9mGBtoAeUAxPNRUZNuqgo3KhfSpryre97I0qQjtZIffvnvrTTh+VGLk+XXGxxWA3+KQZ7fk89MhFNR6X6iPIln2KisoLu/M7FUsmhBUHuGW4GYxSuzePypKxJLosRLHD2VngeRjrSbP5dEuFI0ib3ZbVGoOm0EdrBwwg4gtTbfH82~-1~-1~-1; ioam2018=001004bb35748f346664cf549:1742244769311:1716324769311:.kleinanzeigen.de:3:ebaykanz:ebayK-2:noevent:1716324775481:xwyv7z; __gsas=ID=06678aa415f2bae1:T=1716324775:RT=1716324775:S=ALNI_MY8M9AWvnu9c04kmWUotsmzbZE9Kw; _uetsid=24053ee0178711ef995c73a37e750a44; _uetvid=2815b710edc911edb77b397abfb0ec6b; _ga=GA1.1.496654825.1716324769; __gads=ID=913a6a8a4ed5f68f:T=1716324775:RT=1716324775:S=ALNI_MbcqPiXeD1up-CfoDLjbZWHR3i2qQ; __gpi=UID=00000e283afa9424:T=1716324775:RT=1716324775:S=ALNI_MaKh8LN1b8hyjtkjGF1q_PBkaguBQ; __eoi=ID=30ca85bff1e0487a:T=1716324775:RT=1716324775:S=AA-AfjZA4xIy4PLoxMR9l_uqa7AS; _ga_123456=GS1.1.1716324776.1.0.1716324776.0.0.1971217686; _clsk=kf2mvr%7C1716324777066%7C2%7C0%7Cv.clarity.ms%2Fcollect; _ga_6V6MWRHVN8=GS1.1.1716324769.1.1.1716324777.0.0.804489043; axd=4362638687387987014; tis=' \ -H 'pragma: no-cache' \ -H 'priority: u=0, i' \ -H 'sec-ch-ua: &quot;Google Chrome&quot;;v=&quot;125&quot;, &quot;Chromium&quot;;v=&quot;125&quot;, &quot;Not.A/Brand&quot;;v=&quot;24&quot;' \ -H 'sec-ch-ua-mobile: ?0' \ -H 'sec-ch-ua-platform: &quot;Linux&quot;' \ -H 'sec-fetch-dest: document' \ -H 'sec-fetch-mode: navigate' \ -H 'sec-fetch-site: none' \ -H 'sec-fetch-user: ?1' \ -H 'upgrade-insecure-requests: 1' \ -H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36' </code></pre> <p>When I use this request it works totaly fine and the response is 
correct. However I want to use python for this (I used curlconverter.com) and this is the python code:</p> <pre><code>import requests cookies = { '_clck': 'or3005%7C2%7Cfly%7C0%7C1602', 'CSRF-TOKEN': '14dc299d-30f2-4d98-9ff4-c4fbe8f3e569', 'ak_bmsc': 'C1C8A8F7FF391E9820102E8CA4EBB694~000000000000000000000000000000~YAAQhmZWuGSaVnmPAQAA1pftnBc7BqlNSSkxUM+mIQNkQI3yuVFydinhQtGa20uPsi5nOxA4EWd/piq70XNXFMfMUS/ryvVLMfNDLkEIJ/voevw9rWS4yLRqSn8kGbnehmg3KwRO0VmTEmjPk7QKEZU6yUU6zGiCu5f6lFCyt+7FT+FmtaM2Hje0ugHMveYB+nqJgasPgrwgJggxSChqa/QpR4M6f9Sckw92ci6oQ2O3MzksBAlzMYeVGRXSDv7eNwk10ysBrBOOECdbYmieHjyHk8Toqub9nL2Rm7vdvtmx2nPUt+SiA4ZjkksVDw9I8Gc5YVOC2crta/zObr+6vZp5hR1B7WeJ7slQCtAOsbBYFz7tnqY7ex8fzEQKezCKvpBIw8LnT/GrwN9MnPYgAcp8mhTkRehrFgHHN1O0TmgSYduWXHusOW8E1SqPYGQeCuNC943d6jJ83MVACpX+AzFd4gJdgrPgTNstbfVsTq0xRp/mj4wnW5lviA==', 'ekConsentBucketTcf2': 'full2-exp', 'ekConsentTcf2': '{%22customVersion%22:6%2C%22encodedConsentString%22:%22CP-9ugAP-9ugAE1ACADEAzEgAP_gAAAAAAYgJepV5D7dbWFFcX53aPt0OY0X19DzIsQhBhSBAyAFgBOQ8JQA02EyNASgBiACEAAAoxZBAAFEHABEAQCAQAAEAADsIgQEgAAIIABEgBEQAAJYAAoKAAAgEAAIgAIFEgAAmBjQKdLmXUiAhIAACgAQACABAICAAgMAAAAIAAIAAAAAAgAAgABgIAIAAAAAARAAAAAAAAAAAAAAIAAACi0AGAAIJelIAMAAQS9JQAYAAgl6QgAwABBL0dABgACCXoyADAAEEvRUAGAAIJeiIAMAAQS9DQAYAAgl6EgAwABBL0A.YAAAAAAAA60A%22%2C%22googleConsentGiven%22:true%2C%22consentInterpretation%22:{%22googleAdvertisingFeaturesAllowed%22:true%2C%22googleAnalyticsAllowed%22:true%2C%22infonlineAllowed%22:true%2C%22theAdexAllowed%22:true%2C%22criteoAllowed%22:true%2C%22facebookAllowed%22:true%2C%22amazonAdvertisingAllowed%22:true%2C%22rtbHouseAllowed%22:true%2C%22linkedInAllowed%22:true%2C%22theTradeDeskAllowed%22:true%2C%22microsoftAdvertisingAllowed%22:true%2C%22pianoAllowed%22:true%2C%22renaultAllowed%22:false}}', 'iom_consent': '0107ff0000&amp;1716324769286', '_gid': 'GA1.2.1532272070.1716324769', '__rtbh.uid': '%7B%22eventType%22%3A%22uid%22%2C%22id%22%3A%22unknown%22%7D', 'clientId': '496654825.1716324769', '_gat': '1', 'FPID': 
'FPID2.2.AOuBPGhHzWT6LnAkpY5kTJhKrpSS%2FkCqyy4lC0XgstU%3D.1716324769', 'FPLC': 'q%2BEjZGxjkPyzi60Jabi2fnha46CbqeBJ6r5U2WbreeneTYQSQm2wogu4TCxyjhFwIpCvNTT5dL%2FIMuVk77ixOiL%2Btpwkr%2BhfOXvIFfaYo8901edWtL0mN0jGi%2FQbew%3D%3D', 'FPAU': '1.2.1242298381.1716324771', 'FPGSID': '1.1716324770.1716324770.G-6V6MWRHVN8.TCed7UzLYaViqx_51MOpqg', 'up': '%7B%22ln%22%3A%22294028564%22%2C%22lln%22%3A%229813e7ea-faff-435e-a600-244146dd8da9%22%2C%22llstv%22%3A%22BLN-25381-ka-offboarding%3DB%7CBLN-23401_buyNow_in_chat%3DB%7CBLN-18532_highlight%3DA%7CBLN-23248_BuyNow_SB%3DB%7CBLN-22726_buyer_banner%3DB%7Cignite_web_better_session%3DC%7CEBAYKAD-3536_floor_ai%3DA%7Cignite_improve_session%3DC%7CBLN-24684-enc-brndg-data%3DA%7Cliberty_group%3DY%7CFLPRO-130-churn-reason%3DB%7CEKMO-100_reorder_postad%3DB%7CKLUE-274-financing%3DB%7Clws-aws-traffic%3DB%7CBLN-25450_Initial_message%3DB%7CBLN-24652_category_alert%3DB%7CBLN-21783_testingtime%3DB%7Caudex-awr-update%3DA%7CEBAYKAD-2252_group-assign%3DH%7CBLN-25216-new-user-badges%3DB%7CBLN-25556_INIT_MSG_V2%3DB%7CDESKTOP-promo-switch%3DA%7Caudex-libertyjs-reporting%3DA%7CPLC-189_plc-migration%3DB%7Caudex-libertyjs-update%3DA%7Cperformance-test-desktop%3DC%7CBIPHONE-9700_buy_now%3DB%7CEKMO-243_MyAdsC2b_ABC%3DA%7Cprebid-update%3DA%7CEKTNM-4424_DSA_AD_FLAG%3DA%7Cpubads-behind-consent%3DA%7CPLC-104_plc-login%3DB%7Cdesktop_payment_badge_SRP%3DR%22%2C%22ls%22%3A%22k%3Diphone%252013%22%7D', 'uh': '%7B%22sh%22%3A%22k%3Diphone%252013%22%7D', 'bm_sv': '0E46F6D4C2D15DDF4037D3219522F169~YAAQjGZWuLYmhJGPAQAAc7HtnBfptI0O/cZh+z71bjX7FWoEWfWi+7rSmJtAdtnHr2Yy6Sml0PivsexsxpEKRqSMQ6ykXGA2uoSw845qdTHuHFptR+QNvMHbvH/VH/cLCl5ACpcudk630IFOv5PevfMyfExV481WLWdi/qiNeZTnGPaMNkfNGtb6nRhJan8bo9pPBMP4ZvblfI33UGrSX3mmVIEuQ2CZ9w2o6QF97BCNHkRLzeHAyzVnysmuxchKgTS4GHCq~1', 'bm_sz': 
'DC20044AB93A95DA69EEDF3CBC61933F~YAAQjGZWuLcmhJGPAQAAc7HtnBfghyhORTugDA+HeFjnh5DSij+T1QpBc1T3YOBc7JF2Kd/LUNEwSqqPtyRQMQqA6S5UoqG2w3rLfYD4LfF3p4Oe35m8blAYcoozf4uBqfBuD4kPWSAEdCCW4Qz3yPBOFwLGDO3Imik7kxYRH+0PpEGheEoo6JQSXKpdHrGRhLXOJecz1EKW/uRKZf2n35KDL7Q5uNdu4ZMkwKG4Tsv2b1lNYr5AsR+OyvPuTnHyGbyIo3IsqmoeL4ZDBvgdGlrbhcAyJdm6D8HFNyaNgCIPLyiH/nbL1Md2mVu99LlQHI/atUHs2Xls/fNrHeiEM14d1gRlmS+0t5wt2eB+C3k4KUgjrpSta8jR7hvBMHq+JWrZ71tUnLjVXgozyprHuWzgQYHSN8DQE9lVEBtuqbkyO0eQZmUPJoUBf/p7raJ7rGiX+R6P~3225913~3749176', '_pbjs_userid_consent_data': '1948453571612999', '_sharedid': '2cc48544-17d6-40f0-8055-433f3bd0eebb', 'ttd_TDID': '5af2038d-4201-4381-a705-008be357e42b', '_abck': '0BE7DECF6A441CAED9FAE60DD4A2D439~-1~YAAQjGZWuGYnhJGPAQAAF7ntnAvIJaPDK0JcFilje/7MpZxzpI0e91nvMkptlAnNzcvbc/7RsC1cJb9t+U/5WcPd8vR5hPtB4B5DzzMkYqetbfONwbgM9rxJ7WzQZDmNXvHso6HgE1CoHrazlFHCdapmnPYQZf9rKTSLi3bpmBLXajRz9jguHHl/OSeKz60PNLlcnQKxgUbDiS2dOz9K/fdl5pYP8fD54/2/h6P9mGBtoAeUAxPNRUZNuqgo3KhfSpryre97I0qQjtZIffvnvrTTh+VGLk+XXGxxWA3+KQZ7fk89MhFNR6X6iPIln2KisoLu/M7FUsmhBUHuGW4GYxSuzePypKxJLosRLHD2VngeRjrSbP5dEuFI0ib3ZbVGoOm0EdrBwwg4gtTbfH82~-1~-1~-1', 'ioam2018': '001004bb35748f346664cf549:1742244769311:1716324769311:.kleinanzeigen.de:3:ebaykanz:ebayK-2:noevent:1716324775481:xwyv7z', '__gsas': 'ID=06678aa415f2bae1:T=1716324775:RT=1716324775:S=ALNI_MY8M9AWvnu9c04kmWUotsmzbZE9Kw', '_uetsid': '24053ee0178711ef995c73a37e750a44', '_uetvid': '2815b710edc911edb77b397abfb0ec6b', '_ga': 'GA1.1.496654825.1716324769', '__gads': 'ID=913a6a8a4ed5f68f:T=1716324775:RT=1716324775:S=ALNI_MbcqPiXeD1up-CfoDLjbZWHR3i2qQ', '__gpi': 'UID=00000e283afa9424:T=1716324775:RT=1716324775:S=ALNI_MaKh8LN1b8hyjtkjGF1q_PBkaguBQ', '__eoi': 'ID=30ca85bff1e0487a:T=1716324775:RT=1716324775:S=AA-AfjZA4xIy4PLoxMR9l_uqa7AS', '_ga_123456': 'GS1.1.1716324776.1.0.1716324776.0.0.1971217686', '_clsk': 'kf2mvr%7C1716324777066%7C2%7C0%7Cv.clarity.ms%2Fcollect', '_ga_6V6MWRHVN8': 'GS1.1.1716324769.1.1.1716324777.0.0.804489043', 'axd': 
'4362638687387987014', 'tis': '', } headers = { 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7', 'accept-language': 'de-DE,de;q=0.9,ru-DE;q=0.8,ru;q=0.7,en-US;q=0.6,en;q=0.5', 'cache-control': 'no-cache', # 'cookie': '_clck=or3005%7C2%7Cfly%7C0%7C1602; CSRF-TOKEN=14dc299d-30f2-4d98-9ff4-c4fbe8f3e569; ak_bmsc=C1C8A8F7FF391E9820102E8CA4EBB694~000000000000000000000000000000~YAAQhmZWuGSaVnmPAQAA1pftnBc7BqlNSSkxUM+mIQNkQI3yuVFydinhQtGa20uPsi5nOxA4EWd/piq70XNXFMfMUS/ryvVLMfNDLkEIJ/voevw9rWS4yLRqSn8kGbnehmg3KwRO0VmTEmjPk7QKEZU6yUU6zGiCu5f6lFCyt+7FT+FmtaM2Hje0ugHMveYB+nqJgasPgrwgJggxSChqa/QpR4M6f9Sckw92ci6oQ2O3MzksBAlzMYeVGRXSDv7eNwk10ysBrBOOECdbYmieHjyHk8Toqub9nL2Rm7vdvtmx2nPUt+SiA4ZjkksVDw9I8Gc5YVOC2crta/zObr+6vZp5hR1B7WeJ7slQCtAOsbBYFz7tnqY7ex8fzEQKezCKvpBIw8LnT/GrwN9MnPYgAcp8mhTkRehrFgHHN1O0TmgSYduWXHusOW8E1SqPYGQeCuNC943d6jJ83MVACpX+AzFd4gJdgrPgTNstbfVsTq0xRp/mj4wnW5lviA==; ekConsentBucketTcf2=full2-exp; ekConsentTcf2={%22customVersion%22:6%2C%22encodedConsentString%22:%22CP-9ugAP-9ugAE1ACADEAzEgAP_gAAAAAAYgJepV5D7dbWFFcX53aPt0OY0X19DzIsQhBhSBAyAFgBOQ8JQA02EyNASgBiACEAAAoxZBAAFEHABEAQCAQAAEAADsIgQEgAAIIABEgBEQAAJYAAoKAAAgEAAIgAIFEgAAmBjQKdLmXUiAhIAACgAQACABAICAAgMAAAAIAAIAAAAAAgAAgABgIAIAAAAAARAAAAAAAAAAAAAAIAAACi0AGAAIJelIAMAAQS9JQAYAAgl6QgAwABBL0dABgACCXoyADAAEEvRUAGAAIJeiIAMAAQS9DQAYAAgl6EgAwABBL0A.YAAAAAAAA60A%22%2C%22googleConsentGiven%22:true%2C%22consentInterpretation%22:{%22googleAdvertisingFeaturesAllowed%22:true%2C%22googleAnalyticsAllowed%22:true%2C%22infonlineAllowed%22:true%2C%22theAdexAllowed%22:true%2C%22criteoAllowed%22:true%2C%22facebookAllowed%22:true%2C%22amazonAdvertisingAllowed%22:true%2C%22rtbHouseAllowed%22:true%2C%22linkedInAllowed%22:true%2C%22theTradeDeskAllowed%22:true%2C%22microsoftAdvertisingAllowed%22:true%2C%22pianoAllowed%22:true%2C%22renaultAllowed%22:false}}; iom_consent=0107ff0000&amp;1716324769286; _gid=GA1.2.1532272070.1716324769; 
__rtbh.uid=%7B%22eventType%22%3A%22uid%22%2C%22id%22%3A%22unknown%22%7D; clientId=496654825.1716324769; _gat=1; FPID=FPID2.2.AOuBPGhHzWT6LnAkpY5kTJhKrpSS%2FkCqyy4lC0XgstU%3D.1716324769; FPLC=q%2BEjZGxjkPyzi60Jabi2fnha46CbqeBJ6r5U2WbreeneTYQSQm2wogu4TCxyjhFwIpCvNTT5dL%2FIMuVk77ixOiL%2Btpwkr%2BhfOXvIFfaYo8901edWtL0mN0jGi%2FQbew%3D%3D; FPAU=1.2.1242298381.1716324771; FPGSID=1.1716324770.1716324770.G-6V6MWRHVN8.TCed7UzLYaViqx_51MOpqg; up=%7B%22ln%22%3A%22294028564%22%2C%22lln%22%3A%229813e7ea-faff-435e-a600-244146dd8da9%22%2C%22llstv%22%3A%22BLN-25381-ka-offboarding%3DB%7CBLN-23401_buyNow_in_chat%3DB%7CBLN-18532_highlight%3DA%7CBLN-23248_BuyNow_SB%3DB%7CBLN-22726_buyer_banner%3DB%7Cignite_web_better_session%3DC%7CEBAYKAD-3536_floor_ai%3DA%7Cignite_improve_session%3DC%7CBLN-24684-enc-brndg-data%3DA%7Cliberty_group%3DY%7CFLPRO-130-churn-reason%3DB%7CEKMO-100_reorder_postad%3DB%7CKLUE-274-financing%3DB%7Clws-aws-traffic%3DB%7CBLN-25450_Initial_message%3DB%7CBLN-24652_category_alert%3DB%7CBLN-21783_testingtime%3DB%7Caudex-awr-update%3DA%7CEBAYKAD-2252_group-assign%3DH%7CBLN-25216-new-user-badges%3DB%7CBLN-25556_INIT_MSG_V2%3DB%7CDESKTOP-promo-switch%3DA%7Caudex-libertyjs-reporting%3DA%7CPLC-189_plc-migration%3DB%7Caudex-libertyjs-update%3DA%7Cperformance-test-desktop%3DC%7CBIPHONE-9700_buy_now%3DB%7CEKMO-243_MyAdsC2b_ABC%3DA%7Cprebid-update%3DA%7CEKTNM-4424_DSA_AD_FLAG%3DA%7Cpubads-behind-consent%3DA%7CPLC-104_plc-login%3DB%7Cdesktop_payment_badge_SRP%3DR%22%2C%22ls%22%3A%22k%3Diphone%252013%22%7D; uh=%7B%22sh%22%3A%22k%3Diphone%252013%22%7D; bm_sv=0E46F6D4C2D15DDF4037D3219522F169~YAAQjGZWuLYmhJGPAQAAc7HtnBfptI0O/cZh+z71bjX7FWoEWfWi+7rSmJtAdtnHr2Yy6Sml0PivsexsxpEKRqSMQ6ykXGA2uoSw845qdTHuHFptR+QNvMHbvH/VH/cLCl5ACpcudk630IFOv5PevfMyfExV481WLWdi/qiNeZTnGPaMNkfNGtb6nRhJan8bo9pPBMP4ZvblfI33UGrSX3mmVIEuQ2CZ9w2o6QF97BCNHkRLzeHAyzVnysmuxchKgTS4GHCq~1; 
bm_sz=DC20044AB93A95DA69EEDF3CBC61933F~YAAQjGZWuLcmhJGPAQAAc7HtnBfghyhORTugDA+HeFjnh5DSij+T1QpBc1T3YOBc7JF2Kd/LUNEwSqqPtyRQMQqA6S5UoqG2w3rLfYD4LfF3p4Oe35m8blAYcoozf4uBqfBuD4kPWSAEdCCW4Qz3yPBOFwLGDO3Imik7kxYRH+0PpEGheEoo6JQSXKpdHrGRhLXOJecz1EKW/uRKZf2n35KDL7Q5uNdu4ZMkwKG4Tsv2b1lNYr5AsR+OyvPuTnHyGbyIo3IsqmoeL4ZDBvgdGlrbhcAyJdm6D8HFNyaNgCIPLyiH/nbL1Md2mVu99LlQHI/atUHs2Xls/fNrHeiEM14d1gRlmS+0t5wt2eB+C3k4KUgjrpSta8jR7hvBMHq+JWrZ71tUnLjVXgozyprHuWzgQYHSN8DQE9lVEBtuqbkyO0eQZmUPJoUBf/p7raJ7rGiX+R6P~3225913~3749176; _pbjs_userid_consent_data=1948453571612999; _sharedid=2cc48544-17d6-40f0-8055-433f3bd0eebb; ttd_TDID=5af2038d-4201-4381-a705-008be357e42b; _abck=0BE7DECF6A441CAED9FAE60DD4A2D439~-1~YAAQjGZWuGYnhJGPAQAAF7ntnAvIJaPDK0JcFilje/7MpZxzpI0e91nvMkptlAnNzcvbc/7RsC1cJb9t+U/5WcPd8vR5hPtB4B5DzzMkYqetbfONwbgM9rxJ7WzQZDmNXvHso6HgE1CoHrazlFHCdapmnPYQZf9rKTSLi3bpmBLXajRz9jguHHl/OSeKz60PNLlcnQKxgUbDiS2dOz9K/fdl5pYP8fD54/2/h6P9mGBtoAeUAxPNRUZNuqgo3KhfSpryre97I0qQjtZIffvnvrTTh+VGLk+XXGxxWA3+KQZ7fk89MhFNR6X6iPIln2KisoLu/M7FUsmhBUHuGW4GYxSuzePypKxJLosRLHD2VngeRjrSbP5dEuFI0ib3ZbVGoOm0EdrBwwg4gtTbfH82~-1~-1~-1; ioam2018=001004bb35748f346664cf549:1742244769311:1716324769311:.kleinanzeigen.de:3:ebaykanz:ebayK-2:noevent:1716324775481:xwyv7z; __gsas=ID=06678aa415f2bae1:T=1716324775:RT=1716324775:S=ALNI_MY8M9AWvnu9c04kmWUotsmzbZE9Kw; _uetsid=24053ee0178711ef995c73a37e750a44; _uetvid=2815b710edc911edb77b397abfb0ec6b; _ga=GA1.1.496654825.1716324769; __gads=ID=913a6a8a4ed5f68f:T=1716324775:RT=1716324775:S=ALNI_MbcqPiXeD1up-CfoDLjbZWHR3i2qQ; __gpi=UID=00000e283afa9424:T=1716324775:RT=1716324775:S=ALNI_MaKh8LN1b8hyjtkjGF1q_PBkaguBQ; __eoi=ID=30ca85bff1e0487a:T=1716324775:RT=1716324775:S=AA-AfjZA4xIy4PLoxMR9l_uqa7AS; _ga_123456=GS1.1.1716324776.1.0.1716324776.0.0.1971217686; _clsk=kf2mvr%7C1716324777066%7C2%7C0%7Cv.clarity.ms%2Fcollect; _ga_6V6MWRHVN8=GS1.1.1716324769.1.1.1716324777.0.0.804489043; axd=4362638687387987014; tis=', 'pragma': 'no-cache', 'priority': 'u=0, i', 'sec-ch-ua': 
'&quot;Google Chrome&quot;;v=&quot;125&quot;, &quot;Chromium&quot;;v=&quot;125&quot;, &quot;Not.A/Brand&quot;;v=&quot;24&quot;', 'sec-ch-ua-mobile': '?0', 'sec-ch-ua-platform': '&quot;Linux&quot;', 'sec-fetch-dest': 'document', 'sec-fetch-mode': 'navigate', 'sec-fetch-site': 'none', 'sec-fetch-user': '?1', 'upgrade-insecure-requests': '1', 'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36', } response = requests.get('https://www.kleinanzeigen.de/s-iphone-13/k0', cookies=cookies, headers=headers) </code></pre> <p>With the python request I get the response that the IP address is banned. How is this possible? With curl it does work multiple times even after executing the python code.</p> <pre><code>requests 2.31.0 requests-file 1.5.1 requests-toolbelt 1.0.0 urllib3 2.2.1 Python 3.11.7 </code></pre>
<python><curl>
2024-05-21 21:06:03
0
2,075
otto
78,514,335
11,530,164
How to create a tensorflow dataset from a list of filenames that need to be loaded and transformed and their corresponding labels
<p>Given a list of npy filenames <code>x</code>:</p> <pre><code>x = ['path/to/file1.npy', 'path/to/file2.npy'] </code></pre> <p>and a list of labels <code>y</code>:</p> <pre><code>y = [1, 0] </code></pre> <p>I want to create a tensorflow <code>Dataset</code> that consists of pairs of the labels and the loaded and transformed numpy arrays contained within the npy files.</p> <h3>Constrains</h3> <ol> <li>Each npy file must be loaded, the numpy array contained within must undergo an arbitrary transformation (irrelevant to the question) and then the array must be finally added to the <code>Dataset</code> along with its corresponding label.</li> <li>It is necessary to use a <code>Dataset</code> as the files are too large to be loaded into the memory at once.</li> <li>The npy files are not all contained in a single directory.</li> </ol> <h3>Existing answers and how they don't match for my case:</h3> <ul> <li><p><a href="https://stackoverflow.com/questions/73918045/tensorflow-dataset-from-lots-of-npy-files">Tensorflow dataset from lots of .npy files</a><br /> a) does not offer clear directions on how to construct the mapping function of the loading and b) focuses on a function that only handles arrays and not their corresponding labels.</p> </li> <li><p><a href="https://stackoverflow.com/questions/68594188/what-is-the-best-way-to-load-data-with-tf-data-dataset-in-memory-efficient-way">What is the best way to load data with tf.data.Dataset in memory efficient way</a><br /> This answer does not provide what I ask about (the mapping function to load both <code>x</code> and <code>y</code>, along with transformations of x) but instead has a placeholder for that function (<code>PARSE_FUNCTION</code>).</p> </li> </ul>
<python><tensorflow><dataset>
2024-05-21 21:00:26
1
573
Mario
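For the mapping function the linked answers leave as a placeholder, one common pattern is to keep the filenames and labels as tensors and do the NumPy loading inside `tf.py_function`, so each file is read lazily during iteration rather than all at once. A sketch under stated assumptions: `transform` is a stand-in for the question's arbitrary per-array step, each `.npy` file holds a single 2-D array, and `make_dataset` is a hypothetical helper name:

```python
import numpy as np
import tensorflow as tf

def transform(arr):
    # Placeholder for the arbitrary transformation from the question.
    return arr.astype(np.float32) / 255.0

def load_npy(path):
    # Runs eagerly inside tf.py_function; `path` arrives as a scalar string tensor.
    arr = np.load(path.numpy().decode("utf-8"))
    return transform(arr)

def parse(path, label):
    arr = tf.py_function(load_npy, inp=[path], Tout=tf.float32)
    arr.set_shape([None, None])  # py_function drops shape info; adjust to your arrays
    return arr, label

def make_dataset(x, y):
    # x: list of npy paths (any directories), y: matching labels.
    ds = tf.data.Dataset.from_tensor_slices((x, y))
    return ds.map(parse, num_parallel_calls=tf.data.AUTOTUNE)
```

Because the paths are plain strings, constraint 3 (files scattered across directories) is satisfied for free; only the filename list and labels live in memory up front.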
78,514,193
8,024,622
Pandas resample per ID
<p>I have a DataFrame that looks like this</p> <pre><code>ID DateHour Value A 2024-05-03 07:00:00 5 A 2024-05-03 09:00:00 2 A 2024-05-03 22:00:00 9 A 2024-05-03 23:00:00 1 B 2024-05-03 02:00:00 8 B 2024-05-03 09:00:00 7 B 2024-05-03 20:00:00 8 </code></pre> <p>I would like to resample it hour wise with the ffill method.</p> <p>What I'm doing now is selecting one ID, resampling and putting all together.</p> <p>Is there an easier way to do this?</p> <p>Here is the expected output</p> <pre><code>ID DateHour Value A 2024-05-03 07:00:00 5 A 2024-05-03 08:00:00 5 A 2024-05-03 09:00:00 2 A 2024-05-03 10:00:00 2 ... A 2024-05-03 21:00:00 2 A 2024-05-03 22:00:00 9 A 2024-05-03 23:00:00 1 B 2024-05-03 02:00:00 8 B 2024-05-03 03:00:00 8 ... B 2024-05-03 08:00:00 8 B 2024-05-03 09:00:00 7 ... B 2024-05-03 19:00:00 7 B 2024-05-03 20:00:00 8 </code></pre>
<python><pandas><resample>
2024-05-21 20:19:37
1
624
jigga
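The per-ID select/resample/concatenate loop described above can be collapsed into one `groupby`/`resample` chain: with `DateHour` as the index, each ID is resampled over its own time range and forward-filled independently. A minimal sketch on a shortened version of the sample data:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": ["A", "A", "B", "B"],
    "DateHour": pd.to_datetime([
        "2024-05-03 07:00:00", "2024-05-03 09:00:00",
        "2024-05-03 02:00:00", "2024-05-03 05:00:00",
    ]),
    "Value": [5, 2, 8, 7],
})

out = (
    df.set_index("DateHour")
      .groupby("ID")["Value"]
      .resample("h")   # hourly grid, built per ID over that ID's own range
      .ffill()
      .reset_index()
)
print(out)
```

Each ID keeps its own start and end hour (A is not padded out to B's range or vice versa), matching the expected output shape in the question.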
78,514,084
850,781
`conda list` mis-reports package version
<p>How come</p> <pre><code>$ conda list --show-channel-urls dateutil # packages in environment at .../miniconda3/envs/c312: # # Name Version Build Channel python-dateutil 2.9.0post0 py312haa95532_0 defaults </code></pre> <p>but</p> <pre><code>$ python -c 'import dateutil; print(dateutil.__version__); print(dateutil.__path__)' 2.8.2 ['.../miniconda3/envs/c312/Lib/site-packages/dateutil'] </code></pre> <p>and</p> <pre><code>cat $(python -c 'import dateutil; print(dateutil.__path__[0])')/_version.py # file generated by setuptools_scm # don't change, don't track in version control ... __version__ = version = '2.8.2' __version_tuple__ = version_tuple = (2, 8, 2) </code></pre> <p><a href="https://github.com/dateutil/dateutil/issues/1314" rel="nofollow noreferrer">Judging by</a></p> <blockquote> <p>.../miniconda3/envs/c312/Lib/site-packages/dateutil/tz/tz.py: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated...</p> </blockquote> <p>the actually used code is 2.8.2.</p> <p>PS. Just in case (cf <a href="https://stackoverflow.com/q/70958434/850781">Unexpected Python paths in Conda environment</a>):</p> <pre><code>$ python -c &quot;import sys; print(sys.path)&quot; ['', '.../miniconda3/envs/c312/python312.zip', '.../miniconda3/envs/c312/DLLs', '.../miniconda3/envs/c312/Lib', '.../miniconda3/envs/c312', '.../miniconda3/envs/c312/Lib/site-packages', '__editable__.FX-0.0.0.finder.__path_hook__', '__editable__.PyApp-36.finder.__path_hook__', '.../miniconda3/envs/c312/Lib/site-packages/win32', '.../miniconda3/envs/c312/Lib/site-packages/win32/lib', '.../miniconda3/envs/c312/Lib/site-packages/Pythonwin'] </code></pre> <p>(I replaced backslashes in paths with normal slashes by hand to protect my eyes).</p> <p>Reported as <a href="https://github.com/conda/conda/issues/13933" rel="nofollow noreferrer"><code>conda list</code> mis-reports package version</a>.</p>
<python><conda><miniconda>
2024-05-21 19:50:41
0
60,468
sds
78,513,947
5,464,233
Creating a Two-dimensional NumPy Array from a Database Resultset
<p>I have created a table <code>yokes</code> in a PostgreSQL database:</p> <pre class="lang-sql prettyprint-override"><code>mydb=# CREATE TABLE yokes(id INT, name TEXT, size FLOAT); CREATE TABLE mydb=# INSERT INTO yokes(id, name, size) VALUES(1, 'foo', 0.5); INSERT 0 1 mydb=# INSERT INTO yokes(id, name, size) VALUES(2, 'bar', 1.5); INSERT 0 1 mydb=# SELECT * FROM yokes; id | name | size ----+------+------ 1 | foo | 0.5 2 | bar | 1.5 (2 rows) </code></pre> <p>I want to turn that into a numpy array with shape <code>(2, 3)</code>. My first approach was this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import psycopg2 connection = psycopg2.connect(dbname='mydb') cursor = connection.cursor() cursor.execute('SELECT id, name, size FROM yokes') rows = cursor.fetchall() arr = np.array(rows, dtype='i4, U8, f8') print(arr.shape) </code></pre> <p>That gives me the output <code>(2,)</code>, not the <code>(2, 3)</code> that I am aiming for. I assume that this is because <code>rows</code> is a list of tuples. Next try:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import psycopg2 connection = psycopg2.connect(dbname='mydb') cursor = connection.cursor() cursor.execute('SELECT id, name, size FROM yokes') rows = cursor.fetchall() rows = [list(row) for row in rows] arr = np.array(rows) print(arr.shape) print(arr.dtype) </code></pre> <p>Output:</p> <pre><code>(2, 3) &lt;U32 </code></pre> <p>The shape is what I want. But the second line shows that all values are now strings, which is not what I want.</p> <p>I have tried adding an argument <code>dtype='i4, U8, f8'</code> to <code>np.array</code> but that gives me an error &quot;ValueError: invalid literal for int() with base 10: 'foo'&quot; that I totally did not expect, because I specified the second (name) column to be of type <code>U8</code> and not <code>&lt;i4</code>.</p> <p>Python, let alone NumPy, is not my forte, so do not be afraid of being rookie-friendly.</p>
<python><numpy><psycopg2>
2024-05-21 19:17:50
0
2,432
Guido Flohr
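Both observations above fall out of NumPy's model: every cell of a (2, 3) array shares one dtype, so mixed int/text/float rows either collapse to strings or must give up the 2-D shape. A short sketch of the two workable options, with a literal `rows` list standing in for `cursor.fetchall()` so no database is needed:

```python
import numpy as np

rows = [(1, "foo", 0.5), (2, "bar", 1.5)]  # stand-in for cursor.fetchall()

# Option 1: a structured array -- shape (2,), one "record" per row, but each
# named field keeps its own dtype. This is why dtype='i4, U8, f8' gave (2,).
structured = np.array(rows, dtype=[("id", "i4"), ("name", "U8"), ("size", "f8")])
print(structured.shape)    # (2,)
print(structured["size"])  # typed per-column access

# Option 2: a genuine (2, 3) array needs one dtype that can hold anything --
# object. Each cell keeps its original Python type.
obj = np.array(rows, dtype=object)
print(obj.shape)           # (2, 3)
```

The ValueError from the list-of-lists attempt is also explained: structured dtypes expect each record as a *tuple*; given inner lists, NumPy instead tries to broadcast every element of the row into each field, so it attempts `int('foo')`. `fetchall()` already returns tuples, which is why the very first approach almost worked.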
78,513,944
9,403,794
How to convert np.NaN to string in ndarray/list?
<p>I have some simple Python code:</p> <pre><code>l2 =[4,np.nan, 5.1] a2 = np.array(l2) a3 = np.where(np.isnan(a2), '3.3', a2) a3 = a3.astype(float) print(a3) a3 = np.where(np.isnan(a2), 'x', a2) print(a3) </code></pre> <p>The code above produces two lines:</p> <pre><code>[4. 3.3 5.1] ['4.0' 'x' '5.1'] </code></pre> <p>My question is how to convert <code>np.NaN</code> to a string without changing the other values.</p> <p>Where there were numbers, they should stay numbers; only where there was NaN should there be the new string. Like this:</p> <pre><code>[4.0 'x' 5.1] </code></pre> <p>UPDATE: If we cannot mix data types in a NumPy ndarray, I have found a new solution:</p> <pre><code>l2 =[4,np.nan, 5.1, np.nan, 3] a2 = np.array(l2) idxNan = (np.argwhere(np.isnan(l2))) for idx in idxNan: l2[idx[0]] = 'x' print(l2) </code></pre> <p>The code above gives me the right output:</p> <pre><code>[4, 'x', 5.1, 'x', 3] </code></pre>
<python><numpy>
2024-05-21 19:17:09
1
309
luki
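The UPDATE's index loop works; the underlying constraint can also be met head-on by giving the array `object` dtype, which may hold floats and strings side by side, so the original `np.where` approach keeps working. A sketch — note it locates the gaps with the NaN-is-not-equal-to-itself property, because `np.isnan` does not accept object arrays:

```python
import numpy as np

l2 = [4, np.nan, 5.1, np.nan, 3]

# object dtype lets numbers and strings coexist in one ndarray
a2 = np.array(l2, dtype=object)

# NaN is the only value that is not equal to itself, so a2 != a2 marks the
# gaps; everything else passes through untouched, types intact.
a3 = np.where(a2 != a2, "x", a2)
print(a3)  # [4 'x' 5.1 'x' 3]
```

Unlike the homogeneous float array in the question, nothing here is coerced to a string except the replaced cells.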
78,513,619
13,186,897
How do I set user Profile using add_argument?
<p>I am trying to launch Chrome with my profile, but it does not open the profile (it opens as User 1; the login has not been carried over into the browser window).</p> <pre><code>from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager options = webdriver.ChromeOptions() options.add_argument(&quot;--window-size=1920,1080&quot;) options.add_argument(&quot;--user_data_dir=C:\\Users\\shulya403\\AppData\\Local\\Google\\Chrome\\User Data&quot;) options.add_argument(&quot;--profile-directory=Default&quot;) options.add_argument(&quot;--remote-debugging-port=9222&quot;) self.driver = webdriver.Chrome(ChromeDriverManager().install(), options=options) </code></pre> <p>All other Chrome windows are closed; the only Chrome window open is the one opened by WebDriver.</p> <p>The path</p> <pre><code>C:\\Users\\shulya403\\AppData\\Local\\Google\\Chrome\\User Data\\Default </code></pre> <p>exists in the system.</p> <p>The other options (<code>window-size</code> and the rest) work properly.</p>
<python><selenium-webdriver><selenium-chromedriver>
2024-05-21 17:57:11
1
424
shulya403
78,513,399
1,245,262
Why does torch.utils.collect_env claim I've pip installed modules that I used conda to install
<p>I've had some odd behavior with a torch installation, so I ran <code>python -m torch.utils.collect_env</code> to get a clear idea of what was in my environment, but it claimed I had used both pip3 and conda to install some modules.</p> <p>So, I created a simplified conda env, which still gives this problem:</p> <pre><code>$ conda create -n temp $ conda activate temp $ conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia &lt;frozen runpy&gt;:128: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour Collecting environment information... PyTorch version: 2.3.0 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: version 3.16.3 Libc version: glibc-2.31 Python version: 3.12.3 | packaged by Anaconda, Inc. | (main, May 6 2024, 19:46:43) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 12.1.105 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti Nvidia driver version: 535.171.04 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True . . . 
Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] torch==2.3.0 [pip3] torchaudio==2.3.0 [pip3] torchvision==0.18.0 [conda] blas 1.0 mkl [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch [conda] mkl 2023.1.0 h213fc3f_46344 [conda] mkl-service 2.4.0 py312h5eee18b_1 [conda] mkl_fft 1.3.8 py312h5eee18b_0 [conda] mkl_random 1.2.4 py312hdb19cb5_0 [conda] numpy 1.26.4 py312hc5e2394_0 [conda] numpy-base 1.26.4 py312h0da6c21_0 [conda] pytorch 2.3.0 py3.12_cuda12.1_cudnn8.9.2_0 pytorch [conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchaudio 2.3.0 py312_cu121 pytorch [conda] torchvision 0.18.0 py312_cu121 pytorch </code></pre> <p>Why does this say I've pip installed some modules which I used conda to install? Is it connected to this message:</p> <pre><code>'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour </code></pre> <p>If so, what should I do? Also, if not, what (if anything) should I do?</p>
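A plausible explanation (not tied to the RuntimeWarning): conda installs write standard `*.dist-info` metadata into `site-packages`, so `pip list` — which `collect_env` consults for its `[pip3]` section — reports the very same packages that conda installed. The two lists describe one installation, not two. That shared metadata is visible directly from Python:

```python
import importlib.metadata

# Both pip and conda-installed packages expose the same *.dist-info
# metadata in site-packages, which is why collect_env sees them twice.
# "pip" is used here only because it is installed almost everywhere.
dist = importlib.metadata.distribution("pip")
print(dist.metadata["Name"], dist.version)
```

If that is the case, there is nothing to fix: the duplicate listing is cosmetic.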
<python><pytorch><pip><conda>
2024-05-21 17:08:16
1
7,555
user1245262
78,513,318
5,589,640
Remove e-mail address with whitespaces
<p>I am working with call centre transcripts. In an ideal case the speech-to-text software will transcribe an e-mail as follows: maya.lucco@proton.me. This will not always be the case. So I am looking at a regular expression (RegEx) solution that accommodates whitespaces in the e-mail address, e.g. maya lucco@proton.me or maya.lucco @proton.me or maya-lucco@pro ton.me</p> <p>I have unsuccessfully tried to extend <a href="https://stackoverflow.com/questions/44027943/python-regex-to-remove-emails-from-string">this solution</a> with <a href="https://regex101.com/" rel="nofollow noreferrer">regex101</a>. Compiling a re object (pattern) as suggested in <a href="https://stackoverflow.com/questions/17681670/extract-email-sub-strings-from-large-document">this solution</a> seems overly complex for the task. I looked at the posts on validating e-mail addresses but they describe a different issue. Below is my code so far:</p> <pre><code>import re import pandas as pd #creating some data test = ['some random text maya @ proton.me with some more text maya+lucco@proton.me', 'maya-lucco@proton.me with another address maya_lucco@proton.me', 'some text maya.lucco @proton.me with some more bla maya.lucco@proton.me', 'maya-lucco_@proton.me more text maya@ proton.me ' ] test = pd.DataFrame(test, columns = ['words']) #creating a function because I like to add some other data cleaning to it later on def anonymiseEmail(text): text = str(text) #make text as string variable text = text.strip() #remove any leading, and trailing whitespaces text = re.sub(r'\S*@\S*\s?', '{e-mail}', text) #remove e-mail address return text # applying the function test['noEmail'] = test.words.apply(anonymiseEmail) #checking the results print(test.noEmail[0]) </code></pre> <pre><code>Output: some random text maya {e-mail}proton.me with some more text {e-mail} </code></pre> <p>The first e-mail address does not get fully deleted. The first name Maya remains. 
This is a problem for the project.</p> <p>How can the code be extended so that the whole e-mail address, regardless of how many whitespaces it has, can be replaced with a placeholder or deleted?</p> <p><strong>Update following the comments:</strong></p> <p>I have looked into RegEx <a href="https://stackoverflow.com/questions/2973436/regex-lookahead-lookbehind-and-atomic-groups">lookahead and lookbehind</a>, i.e. <code>(?=@) </code> and <code>(?&lt;=@)</code> but can't seem to make them match the word(s) that precede or follow the @-sign. I am looking at a code snippet Wiktor Stribiżew kindly provided on another occasion <code>\b(?:Dear|H(?:ello|i))(?:[^\S\r\n]+[A-Z]\w*(?:[’'-]\w+)*\.?)+''', '', text</code> and thought I could update it to <code>(?i)\b(?&lt;=@)(?:[^\S\r\n]+[A-Z]\w*(?:[’'-]\w+)*\.?)+ </code> but it doesn't match any e-mail address according to regex101. Maybe the code snippet <code>(?i)\b(?&lt;=@)</code> (or any other RegEx) can be modified to match the preceding and/or following word(s)?</p> <p>Another possible solution that comes to my mind is to select the 5 words before and after the @-sign, put them in a separate variable, and check if there are any whitespaces 4 letters/characters before and after the @-sign. If yes, put them in a queue for a manual check. What concerns me with this solution is a) computing power, b) technical implementation and c) general feasibility. But I thought I'd share it in the spirit of trying to find a solution.</p>
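One hedged sketch of a more tolerant pattern: allow a single stray space inside the local part, optional spaces around the `@`, and one stray space inside the domain. The inherent ambiguity is that a real word sitting directly before the address ("mail maya-lucco@…") cannot be told apart from a split local part, so it gets redacted too — for anonymisation this errs on the side of privacy:

```python
import re

# Tolerates: "maya lucco@proton.me", "maya.lucco @proton.me",
# "maya-lucco@pro ton.me". Caveat: a word immediately preceding the
# address is indistinguishable from a split local part and is redacted
# with it (over-redaction, usually acceptable when anonymising).
EMAIL_RE = re.compile(
    r"[\w.+-]+"              # first chunk of the local part
    r"(?: [\w.+-]+)?"        # optional chunk after one stray space
    r" ?@ ?"                 # '@', tolerating spaces around it
    r"[\w-]+(?: ?[\w.-]+)*"  # domain, tolerating one stray space
    r"\.\w{2,}"              # top-level domain
)

def anonymise_email(text):
    return EMAIL_RE.sub("{e-mail}", str(text).strip())

print(anonymise_email("write to maya lucco@proton.me soon"))
# write to {e-mail} soon
```

Addresses split by more than one internal space would still need the manual-queue fallback described above.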
<python><python-3.x><regex><email><python-re>
2024-05-21 16:52:43
1
625
Simone
78,513,109
986,896
How to fix DeprecationWarning when concurrent.futures with multiprocessing.Process are used together?
<p>How to fix the DeprecationWarning in the following pytest example? Note this is a minimal reproducer.</p> <p>Use case is rather simple:</p> <ul> <li>I have an object (represented as <code>Obj</code>) that is tested;</li> <li><code>Obj</code> interacts with an external server;</li> <li>For testing purposes, external server is mocked using <code>multiprocessing.Process()</code>;</li> <li><code>Obj</code> uses <code>ProcessPoolExecutor</code> to do some CPU-bound operations;</li> </ul> <pre><code>import asyncio import concurrent.futures import http.server import multiprocessing import os import pytest import pytest_asyncio # This does not work, as mock_server is not picklable... # multiprocessing.set_start_method(&quot;spawn&quot;) def hello_world(): print(&quot;Hello, World!&quot;) class Obj(): def __init__(self): self.pool = concurrent.futures.ProcessPoolExecutor(max_workers=1) asyncio.get_running_loop().run_in_executor(self.pool, hello_world) @pytest_asyncio.fixture async def obj_to_test(): return Obj() @pytest.fixture def mock_server(tmpdir): def server(path, host, port): os.chdir(path) httpd = http.server.HTTPServer( (host, port), http.server.SimpleHTTPRequestHandler ) httpd.serve_forever() process = multiprocessing.Process(target=server, args=(&quot;.&quot;, &quot;127.0.0.1&quot;, 9999)) process.start() yield process process.terminate() process.join() def test_something(obj_to_test, mock_server): # normally obj_to_test will interact here with mock_server pass </code></pre> <p>My environment is:</p> <p>platform linux -- Python 3.12.3, pytest-8.2.0, pluggy-1.5.0 plugins: asyncio-0.23.6</p> <p>I am using Python installation from this Docker image: python:3.12-slim@sha256:2be8daddbb82756f7d1f2c7ece706aadcb284bf6ab6d769ea695cc3ed6016743</p> <pre><code>pip install pytest==8.2.0 pytest-asyncio==0.23.6 </code></pre> <p>After the above, I am getting the following warning, which I want to fix:</p> <blockquote> <p>/usr/local/lib/python3.12/multiprocessing/popen_fork.py:66: 
DeprecationWarning: This process (pid=523) is multi-threaded, use of fork() may lead to deadlocks in the child.</p> </blockquote> <p>Using <code>multiprocessing.set_start_method(&quot;spawn&quot;)</code> does not help.</p> <p>I was able to reproduce the <em>DeprecationWarning</em> outside of the <code>pytest</code> environment as well:</p> <pre><code>import concurrent.futures import multiprocessing import time pool = concurrent.futures.ProcessPoolExecutor(max_workers=1) job = pool.submit(lambda: time.sleep(5)) process = multiprocessing.Process(target=lambda: None) process.start() process.terminate() </code></pre> <p>and its output:</p> <pre><code>PYTHONWARNINGS=default python3 t5.py /usr/local/lib/python3.12/multiprocessing/popen_fork.py:66: DeprecationWarning: This process (pid=1024) is multi-threaded, use of fork() may lead to deadlocks in the child. self.pid = os.fork() </code></pre>
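One direction worth trying, sketched under the assumption that the targets can be made picklable: instead of the process-global `set_start_method()`, build an explicit `"spawn"` context and hand it to both sides — `ProcessPoolExecutor` accepts it via `mp_context`, and the mock server can be created from `ctx.Process`. Spawn requires module-level functions, not the lambdas from the reproducer:

```python
import concurrent.futures
import multiprocessing
import time

# An explicit spawn context avoids fork()ing the multi-threaded parent
# without touching the global start method.
ctx = multiprocessing.get_context("spawn")

def sleeper():          # module-level: picklable under spawn
    time.sleep(0.1)

def child():            # stand-in for the mock server target
    pass

if __name__ == "__main__":
    pool = concurrent.futures.ProcessPoolExecutor(max_workers=1, mp_context=ctx)
    job = pool.submit(sleeper)
    process = ctx.Process(target=child)
    process.start()
    process.join()
    job.result()
    pool.shutdown()
```

In the pytest fixture this means hoisting `server()` out of `mock_server` to module level so the spawn context can pickle it.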
<python><linux><pytest><python-multiprocessing><python-3.12>
2024-05-21 16:09:07
0
755
Kuchara
78,513,041
125,673
OpenAI Completions API: How do I extract the text from the response?
<p>I am attempting to extract text from OpenAI, but I need some help with the correct syntax.</p> <p>My code:</p> <pre><code>import openai _model = &quot;gpt-3.5-turbo-instruct&quot; # Summarize the text using an AI model (e.g., OpenAI's GPT) try: summary = openai.completions.create( model=_model, prompt=f&quot;Summarize this text: {text}&quot;, max_tokens=150 ) except Exception as e: return f&quot;summary: Exception {e}&quot; _summary = summary.choices[0].message.content #_summary = summary['choices'][0]['text'] </code></pre> <p>I want to get a summary of the text. Is this the best way?</p> <p>However, I find when I debug the code that the text object is located in the following structure within <code>summary['choices'][0]</code>:</p> <pre><code>CompletionChoice(finish_reason='stop', index=0, logprobs=None, text='blah blah blah') </code></pre> <p>How do I extract the text from this in Python code?</p>
<python><openai-api><gpt-3>
2024-05-21 15:56:11
1
10,241
arame3333
78,512,968
1,422,096
Order of execution in try except finally
<p>I never really use <code>finally</code>, so I wanted to test a few things before using it more regularly. I noticed that when running:</p> <pre><code>def f(): try: 1/0 # 1/1 except: print('except') # 1 raise # 2 finally: print('finally') # 3 try: f() except: print(&quot;haha&quot;) </code></pre> <p>we get this order of execution:</p> <pre><code>except finally haha </code></pre> <p><strong>What is the rule for the order of execution?</strong></p> <p>From what I see I get this rule:</p> <ul> <li>run everything in the <code>except</code> block, but not the <code>raise</code> statement, <em>if any such statement is present</em></li> <li>run the <code>finally</code> block</li> <li>go back to the <code>except</code> block, and run the last <code>raise</code> statement, <em>if present</em></li> </ul> <p>Is that correct?</p> <p>Are there statements other than <code>raise</code> in an <code>except</code> block that can be <em>postponed</em> after the <code>finally</code> block?</p> <hr /> <p>Clearer with this example:</p> <pre><code>def f(): try: 1/0 # 1/1 except: print('except') #1 raise print(&quot;a&quot;) #2 # nonsense but just to see in which order this line is executed finally: print('finally') #3 </code></pre> <p>which gives</p> <pre><code>except finally </code></pre> <p>(the <code>print(&quot;a&quot;)</code> after the bare <code>raise</code> is unreachable, so <code>a</code> is never printed; the re-raised <code>ZeroDivisionError</code> then propagates as a traceback).</p>
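The rule can be stated more simply: `raise` is executed immediately, not postponed — but the `finally` block always runs while control is *leaving* the `try` statement, i.e. before the exception propagates further. `return` (and `break`/`continue` in a loop) is "postponed" the same way, which a small trace makes visible:

```python
order = []

def g():
    try:
        raise ValueError
    except ValueError:
        order.append("except")
        return "from except"      # the return value is fixed here...
    finally:
        order.append("finally")   # ...but finally still runs before
                                  # the function actually returns

order.append(g())
print(order)  # ['except', 'finally', 'from except']
```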
<python><exception><try-catch><try-catch-finally>
2024-05-21 15:39:38
1
47,388
Basj
78,512,927
7,189,440
How to flush/reset the backlog of a python observer and PatternMatchingEventHandler when lots of changes happen in close succession?
<p>I'm using a python <code>PatternMatchingEventHandler</code> watchdog to call a function each time a file is created (using <code>self.on_created</code>) within a watched directory tree. Minimal code example below. The function is slow, so if a lot of new files are created in a short time period, I can end up with a huge backlog and there are times when I want to fast-forward past them. Since the observer/watchdog pair eventually processes all the files, I'm imagining that at some level, there is an internal list of files that were created, and a loop that calls <code>on_created</code> sequentially for each file in that list. I'm trying to figure out if there is any way to flush/delete the list of events? Or check the length of the backlog and if it gets too long, do an empty loop over them?</p> <pre><code>import time from watchdog.events import PatternMatchingEventHandler class foo(PatternMatchingEventHandler): def __init__(self): PatternMatchingEventHandler.__init__( self,patterns = ['*.txt'],ignore_patterns = None, ignore_directories = False,case_sensitive = True, ) self.function = bar # defined elsewhere. Takes about 1 sec per file def on_created(self,event): self.function(event.src_path) from watchdog.observers import Observer the_handler = foo() the_observer = Observer() the_observer.schedule(the_handler,path_to_dir,recursive=True) the_observer.start() try: while True: time.sleep(0.01) except: the_observer.stop() the_observer.join() </code></pre>
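One way to get control over the backlog without relying on watchdog internals: keep `on_created()` cheap by only enqueueing paths, and do the slow work in a worker thread that owns the queue and can dump it when it grows too long. `MAX_BACKLOG` and `slow_function` below are illustrative stand-ins, not watchdog API:

```python
import queue
import threading
import time

MAX_BACKLOG = 100          # illustrative threshold
events = queue.Queue()

def slow_function(path):
    time.sleep(0.01)       # stand-in for the ~1 s per-file work

def worker(stop):
    while not stop.is_set():
        try:
            path = events.get(timeout=0.1)
        except queue.Empty:
            continue
        if events.qsize() > MAX_BACKLOG:
            # Fast-forward: throw away the whole queued backlog.
            while True:
                try:
                    events.get_nowait()
                except queue.Empty:
                    break
            continue
        slow_function(path)
```

The handler then becomes trivial: `def on_created(self, event): events.put(event.src_path)`, and flushing is just draining `events`.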
<python><python-watchdog>
2024-05-21 15:32:49
1
873
AstroBen
78,512,754
114,708
Change the font name of exponent labels in plotnine/matplotlib
<p>I am trying to make a plot using plotnine. The y-axis is in log-scale and has exponent labels. I want to change the font of the whole plot to something else such as &quot;Gill Sans&quot;. The font is reflected everywhere except in the exponent labels. How to fix that? Below is a working example.</p> <pre><code>import plotnine as p9 import pandas as pd import mizani data = { &quot;Degree&quot;: [3, 3, 3, 3, 3, 3, 6, 6, 6, 6, 6, 6], &quot;Probability&quot;: [0.00001, 0.00002, 0.00003, 0.001, 0.002, 0.003, 0.01, 0.02, 0.03, 0.1, 0.2, 0.3], &quot;p&quot;: [&quot;p=0.01&quot;, &quot;p=0.01&quot;, &quot;p=0.01&quot;, &quot;p=0.1&quot;, &quot;p=0.1&quot;, &quot;p=0.1&quot;, &quot;p=0.01&quot;, &quot;p=0.01&quot;, &quot;p=0.01&quot;, &quot;p=0.1&quot;, &quot;p=0.1&quot;, &quot;p=0.1&quot;], &quot;Scheme&quot;: [&quot;Theoretical&quot;, &quot;Original&quot;, &quot;Proposed&quot;, &quot;Theoretical&quot;, &quot;Original&quot;, &quot;Proposed&quot;, &quot;Theoretical&quot;, &quot;Original&quot;, &quot;Proposed&quot;, &quot;Theoretical&quot;, &quot;Original&quot;, &quot;Proposed&quot;], } df = pd.DataFrame(data) x_val = &quot;Degree&quot; y_val = &quot;Probability&quot; y_label = &quot;Degree Probability&quot; g1_val = &quot;p&quot; g2_val = &quot;Scheme&quot; p = ( p9.ggplot(df) + p9.aes(x=x_val, y=y_val, color=g1_val, shape=g2_val, linetype=g2_val, size=g2_val) + p9.geom_point(alpha=0.8) + p9.geom_line(size=0.5) + p9.scale_x_continuous(name=x_val, breaks=[3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]) + p9.scale_y_continuous(name=y_label, trans=mizani.transforms.log_trans(10), labels=mizani.labels.label_log(mathtex=True)) + p9.geom_vline(xintercept=6, size=0.4, linetype=&quot;dotted&quot;, color=&quot;black&quot;) + p9.geom_vline(xintercept=3, size=0.4, linetype=&quot;dotted&quot;, color=&quot;black&quot;) + p9.scale_shape_manual(name=&quot;Algorithm&quot;, values=[&quot;o&quot;, &quot;*&quot;, &quot;+&quot;]) + p9.scale_color_manual(name=&quot;Rewiring 
Prob.&quot;, values=[&quot;red&quot;, &quot;blue&quot;]) + p9.scale_linetype_manual(name=&quot;Algorithm&quot;, values=[&quot;none&quot;, &quot;none&quot;, &quot;solid&quot;]) + p9.scale_size_manual(name=&quot;Algorithm&quot;, values=[5, 5, 2]) + p9.theme(text=p9.element_text(family=&quot;Gill Sans&quot;, weight=&quot;regular&quot;)) ) p </code></pre> <p><a href="https://i.sstatic.net/261vOEtM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/261vOEtM.png" alt="Output" /></a></p>
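A likely explanation: with `mathtex=True` the `10^n` tick labels are rendered by matplotlib's mathtext engine, which uses its own font set and ignores the regular font family set via the theme. Pointing mathtext's "custom" font set at the same face should align them (assuming "Gill Sans" is installed and visible to matplotlib's font manager):

```python
import matplotlib as mpl

# Mathtext ignores the regular font family; route its "custom" font set
# to the same face before building the plotnine figure.
mpl.rcParams["mathtext.fontset"] = "custom"
mpl.rcParams["mathtext.rm"] = "Gill Sans"
mpl.rcParams["mathtext.it"] = "Gill Sans:italic"
mpl.rcParams["mathtext.bf"] = "Gill Sans:bold"
```

Since plotnine sits on top of matplotlib, these rcParams must be set before the plot is drawn.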
<python><matplotlib><plotnine>
2024-05-21 15:04:51
1
1,712
max
78,512,720
1,115,833
python requests Content-Disposition: form-data
<p>I am looking into using Cloudflare's KV and I see a nasty-looking requests snippet here: <a href="https://developers.cloudflare.com/api/operations/workers-kv-namespace-write-key-value-pair-with-metadata" rel="nofollow noreferrer">https://developers.cloudflare.com/api/operations/workers-kv-namespace-write-key-value-pair-with-metadata</a></p> <p>To summarise, I would like to send some payload but CF is showing me:</p> <pre><code>import requests url = &quot;https://api.cloudflare.com/client/v4/accounts/account_id/storage/kv/namespaces/namespace_id/values/key_name&quot; payload = &quot;-----011000010111000001101001\r\nContent-Disposition: form-data; name=\&quot;metadata\&quot;\r\n\r\n\r\n-----011000010111000001101001\r\nContent-Disposition: form-data; name=\&quot;value\&quot;\r\n\r\n\r\n-----011000010111000001101001--\r\n\r\n&quot; headers = { &quot;Content-Type&quot;: &quot;multipart/form-data; boundary=---011000010111000001101001&quot;, &quot;Authorization&quot;: &quot;Bearer undefined&quot; } response = requests.request(&quot;PUT&quot;, url, data=payload, headers=headers) </code></pre> <p>I would like to parameterise the <code>metadata</code> and <code>value</code> fields but I am struggling to see how I can do this in Python efficiently without mistakes since the string looks nasty :(</p> <p>ps: I am using the requests library and python 3.12.</p>
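The hand-rolled boundary string is unnecessary: requests builds the multipart body (and the `boundary=` in the Content-Type header) itself when given a `files` mapping, and a `(None, value)` tuple marks a plain form field. A sketch — the URL placeholders and token are from the question, the metadata JSON is illustrative:

```python
import requests

url = ("https://api.cloudflare.com/client/v4/accounts/account_id"
       "/storage/kv/namespaces/namespace_id/values/key_name")

# (None, value) => plain multipart field; requests picks the boundary.
files = {
    "metadata": (None, '{"someMetadataKey": "someMetadataValue"}'),
    "value": (None, "some-value"),
}
# Do NOT set Content-Type yourself; requests must control the boundary.
headers = {"Authorization": "Bearer <api_token>"}

prepared = requests.Request("PUT", url, files=files, headers=headers).prepare()
# response = requests.Session().send(prepared)   # the actual call
```

Parameterising is then just a matter of changing the two tuple values.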
<python><python-requests>
2024-05-21 14:56:23
1
7,096
JohnJ
78,512,558
7,958,562
Streaming file from one webserver to another
<p>I'm trying to stream a file from one webserver (JFrog) to another (a Nexus content server). I have working code that does this:</p> <pre><code>import requests def get_file(): url = &quot;https://webserver.com/big_file.zip&quot; with requests.get(url, stream=True) as file_stream: stream_file_to_webserver(file_stream) def stream_file_to_webserver(file_stream): upload_url = &quot;https://nexus.com/big_file.zip&quot; with requests.put(upload_url, data=file_stream.iter_content(chunk_size=8192), headers={'Content-Type': 'application/octet-stream'}) as response: if response.status_code == 201: print(&quot;Success&quot;) </code></pre> <p>This works fine, but I need to upload to the Nexus content server in a different way (because of another issue with other metadata fields missing when you upload this way).</p> <p>There I want to upload this way:</p> <pre><code>import requests from requests.auth import HTTPBasicAuth class FileWrapper: def __init__(self, file_iterator): self.file_iterator = file_iterator def read(self, size=-1): return next(self.file_iterator, b'') def get_file(): url = &quot;https://webserver.com/big_file.zip&quot; with requests.get(url, stream=True) as file_stream: stream_file_to_webserver(file_stream) def stream_file_to_webserver(source_response): upload_url = f&quot;https://nexus.com/repository&quot; file_wrapper = FileWrapper(source_response.iter_content(chunk_size=8192)) payload = { 'raw.directory': ('/'), 'raw.asset1.filename': ('big_file.zip'), 'raw.asset1': ('big_file.zip', file_wrapper) } with requests.post(upload_url, files=payload) as response: if response.status_code == 201: print(&quot;File uploaded to Nexus successfully.&quot;) else: print(f&quot;Failed {response.text}&quot;) </code></pre> <p>But this seems to fail because somehow the payload is not set correctly.</p> <pre><code> [{&quot;id&quot;:&quot;directory&quot;,&quot;message&quot;:&quot;Missing required component field 
'Directory'&quot;},{&quot;id&quot;:&quot;filename&quot;,&quot;message&quot;:&quot;Missing required asset field 'Filename' on '1'&quot;},{&quot;id&quot;:&quot;filename&quot;,&quot;message&quot;:&quot;Missing required asset field 'Filename' on '2'&quot;},{&quot;id&quot;:&quot;filename&quot;,&quot;message&quot;:&quot;Missing required asset field 'Filename' on '3'&quot;},{&quot;id&quot;:&quot;*&quot;,&quot;message&quot;:&quot;The assets 1 and 2 have identical coordinates&quot;},{&quot;id&quot;:&quot;*&quot;,&quot;message&quot;:&quot;The assets 2 and 3 have identical coordinates&quot;}] </code></pre> <p>Does anyone know what I am doing wrong and how I can fix this?</p>
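A hedged sketch of the likely fix: `('/')` is just the string `'/'` (parentheses alone don't make a tuple), so requests treats those entries as nameless file parts rather than plain form fields, and Nexus never sees `raw.directory` or the filename. Using `(None, value)` tuples for the plain fields and an explicit filename plus content type for the asset should produce the shape Nexus expects — the URL below is illustrative:

```python
import io
import requests

file_obj = io.BytesIO(b"fake zip bytes")   # stand-in for the streamed body

payload = {
    # (None, value) => plain form field, not a file part
    "raw.directory": (None, "/"),
    "raw.asset1.filename": (None, "big_file.zip"),
    # (filename, fileobj, content_type) => the actual file part
    "raw.asset1": ("big_file.zip", file_obj, "application/octet-stream"),
}

prepared = requests.Request(
    "POST",
    "https://nexus.example/service/rest/v1/components?repository=my-raw-repo",
    files=payload,
).prepare()
```

Note that multipart uploads are not streamed by requests — the body is built in memory — so the `FileWrapper.read()` signature (it ignores `size` and returns one chunk per call) still works, but a very large file will be buffered.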
<python>
2024-05-21 14:26:05
3
437
John
78,512,493
3,057,377
Display calendar with week numbers
<p>I would like to display a calendar with week numbers.</p> <p>Python provides a nice <code>calendar</code> module that can generate such a calendar, but I can’t find a way to display week numbers.</p> <p>Here is an example of the command and its output:</p> <pre><code>python -m calendar --locale fr --encoding utf-8 --type text 2024 </code></pre> <pre><code> 2024 janvier février mars Lu Ma Me Je Ve Sa Di Lu Ma Me Je Ve Sa Di Lu Ma Me Je Ve Sa Di 1 2 3 4 5 6 7 1 2 3 4 1 2 3 8 9 10 11 12 13 14 5 6 7 8 9 10 11 4 5 6 7 8 9 10 15 16 17 18 19 20 21 12 13 14 15 16 17 18 11 12 13 14 15 16 17 22 23 24 25 26 27 28 19 20 21 22 23 24 25 18 19 20 21 22 23 24 29 30 31 26 27 28 29 25 26 27 28 29 30 31 </code></pre> <p>And here is what I would like to achieve:</p> <pre><code> 2024 janvier février mars Lu Ma Me Je Ve Sa Di Lu Ma Me Je Ve Sa Di Lu Ma Me Je Ve Sa Di 1 1 2 3 4 5 6 7 5 1 2 3 4 9 1 2 3 2 8 9 10 11 12 13 14 6 5 6 7 8 9 10 11 10 4 5 6 7 8 9 10 3 15 16 17 18 19 20 21 7 12 13 14 15 16 17 18 11 11 12 13 14 15 16 17 4 22 23 24 25 26 27 28 8 19 20 21 22 23 24 25 12 18 19 20 21 22 23 24 5 29 30 31 9 26 27 28 29 13 25 26 27 28 29 30 31 </code></pre> <p>Is it possible to obtain this result with the Python calendar module? If it is not directly possible, how can I extend the module to get this new feature?</p> <p>I’m open to a totally different approach if it makes sense.</p>
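The stdlib `calendar` module has no week-number option, but `Calendar.monthdatescalendar()` yields real `date` objects, so the ISO week number can be prefixed to each row by hand. A minimal sketch for one month (extending it to the 3-column year layout is then a formatting exercise):

```python
import calendar

def month_with_weeks(year, month):
    cal = calendar.Calendar()  # weeks start on Monday by default
    lines = []
    for week in cal.monthdatescalendar(year, month):
        weeknum = week[0].isocalendar()[1]   # ISO week of the row's Monday
        days = " ".join(
            f"{d.day:2d}" if d.month == month else "  " for d in week
        )
        lines.append(f"{weeknum:2d} {days}")
    return "\n".join(lines)

print(month_with_weeks(2024, 1))
```

For locale-aware day and month names, the same loop can be combined with `calendar.LocaleTextCalendar`.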
<python><calendar>
2024-05-21 14:11:57
2
1,380
nico
78,512,350
17,917,952
Different output order in custom Object Detection model causing error in android application
<p>I followed the steps as outlined in the following documentation: <a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html" rel="nofollow noreferrer">https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html</a> to train my custom object detection model. For reference, I'm using TF 2.10. However, after converting it to a tflite model and implementing it in an android application in Java, I get the following error:</p> <pre><code>EXCEPTION: Failed on interpreter inference -&gt; Cannot copy from a TensorFlowLite tensor (StatefulPartionedCall:1) with shape [1,10] to a Java object with shape [1,10,4]. </code></pre> <p>Prior to TensorFlow 2.6, the metadata order was boxes, classes, scores, number of detections. Now, it seems to have changed to scores, boxes, number of detections, classes.</p> <p>I have tried two things: 1) downgrading to TF 2.5, which solves this problem but raises incompatibility issues with other libraries, so I do not prefer this method; 2) declaring the sequence of outputs explicitly using the <a href="https://www.tensorflow.org/lite/models/convert/metadata_writer_tutorial#object_detectors" rel="nofollow noreferrer">metadata writer</a> based on one of the suggestions <a href="https://github.com/tensorflow/tensorflow/issues/58764" rel="nofollow noreferrer">here</a>; however, this still raises the same exception as stated above. 
After loading the model (after the metadata writer process) and inspecting the output details, I see the following:</p> <pre><code>[{'name': 'StatefulPartitionedCall:1', 'index': 249, 'shape': array([ 1, 10]), 'shape_signature': array([ 1, 10]), 'dtype': &lt;class 'numpy.float32'&gt;, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'StatefulPartitionedCall:3', 'index': 247, 'shape': array([ 1, 10, 4]), 'shape_signature': array([ 1, 10, 4]), 'dtype': &lt;class 'numpy.float32'&gt;, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'StatefulPartitionedCall:0', 'index': 250, 'shape': array([1]), 'shape_signature': array([1]), 'dtype': &lt;class 'numpy.float32'&gt;, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'StatefulPartitionedCall:2', 'index': 248, 'shape': array([ 1, 10]), 'shape_signature': array([ 1, 10]), 'dtype': &lt;class 'numpy.float32'&gt;, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}] </code></pre> <p>The order of the shapes displayed still do not match the order of boxes, classes, scores, number of detections. 
Without having to modify the android app code, is there anything else that can be done to avoid the distortion of the output shape during the tflite conversion?</p> <p>If required, here is the simple conversion script I'm using to convert the tflite-friendly saved_model to a tflite:</p> <pre><code>import tensorflow as tf import argparse parser = argparse.ArgumentParser( description=&quot;tfLite Converter&quot;) parser.add_argument(&quot;--saved_model_path&quot;, help=&quot;&quot;, type=str) parser.add_argument(&quot;--tflite_model_path&quot;, help=&quot;&quot;, type=str) args = parser.parse_args() converter = tf.lite.TFLiteConverter.from_saved_model(args.saved_model_path) tflite_model = converter.convert() with open(args.tflite_model_path, 'wb') as f: f.write(tflite_model) </code></pre>
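One diagnostic that may help before touching the app: the outputs can be partially identified from `get_output_details()` by shape — the rank-3 tensor must be the boxes and the rank-1 tensor the detection count, while the two `[1, 10]` tensors (scores vs. classes) cannot be told apart by shape alone. A sketch over the details pasted above:

```python
# Partial role assignment by tensor rank; scores vs. classes still need
# the name mapping of the converted model.
def classify_outputs(output_details):
    roles = {}
    for d in output_details:
        shape = tuple(d["shape"])
        if len(shape) == 3:
            roles["boxes"] = d["name"]
        elif len(shape) == 1:
            roles["num_detections"] = d["name"]
    return roles

# Shapes from the question's interpreter dump:
details = [
    {"name": "StatefulPartitionedCall:1", "shape": (1, 10)},
    {"name": "StatefulPartitionedCall:3", "shape": (1, 10, 4)},
    {"name": "StatefulPartitionedCall:0", "shape": (1,)},
    {"name": "StatefulPartitionedCall:2", "shape": (1, 10)},
]
print(classify_outputs(details))
```

This confirms the `[1, 10, 4]` boxes tensor is `StatefulPartitionedCall:3`, not `:1` — the index the Java side is currently reading.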
<python><metadata><tensorflow2.0><object-detection><tensorflow-lite>
2024-05-21 13:52:42
1
310
peru_45
78,512,247
3,070,181
Simpleaudio causes a segmentation fault in Tkinter
<p>I am agnostic as to which solution I use, but in the following code, the <em>simpleaudio</em> solution outputs the sound, but causes a segmentation fault, whereas the <em>pygame</em> solution works perfectly.</p> <p>Why is this?</p> <pre><code>import tkinter as tk from tkinter import ttk import simpleaudio as sa from pygame import mixer root = tk.Tk() def main() -&gt; None: MainFrame() root.mainloop() class MainFrame(): &quot;&quot;&quot;Define the Main frame.&quot;&quot;&quot; def __init__(self): button = ttk.Button(root, text='SimpleAudio', command=self.simple_audio) button.grid(row=0, column=0) button = ttk.Button(root, text='PyGame', command=self.pygame) button.grid(row=1, column=0) def simple_audio(self): ping = sa.WaveObject.from_wave_file('ping.wav') play = ping.play() play.wait_done() def pygame(self): mixer.init() mixer.Sound(&quot;ping.wav&quot;).play() if __name__ == '__main__': main() </code></pre> <p>I am using</p> <p>Python: 3.12.3</p> <p>simpleaudio: 1.0.4</p> <p>pygame: 2.5.2</p>
<python><tkinter><audio>
2024-05-21 13:35:17
1
3,841
Psionman
78,512,177
6,618,051
Playwright Python: 'retain-on-failure'
<p>For Node.js, the docs mention a 'retain-on-failure' option (<a href="https://playwright.dev/docs/videos" rel="nofollow noreferrer">https://playwright.dev/docs/videos</a>) to preserve videos only for failed tests. Nothing similar exists for Python. How can this be achieved (deleting videos for passed tests)? Thanks</p> <pre class="lang-py prettyprint-override"><code>import pytest from playwright.sync_api import sync_playwright @pytest.fixture(scope=&quot;module&quot;) def browser(headless): with sync_playwright() as p: browser = p.chromium.launch(headless=headless) yield browser browser.close() @pytest.fixture def context(browser): context = browser.new_context( record_video_dir=&quot;reports/&quot;, record_video_size={&quot;width&quot;: 1024, &quot;height&quot;: 768}, ) yield context context.close() @pytest.fixture def page(context): page = context.new_page() yield page page.close() </code></pre>
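A hand-rolled 'retain-on-failure' sketch: a pytest hookwrapper (placed in `conftest.py`) stores each test's outcome on the item, and the `page` fixture deletes the recording when the test passed. `Video.delete()` is part of the Playwright Python API; the `context` fixture is the one from the snippet above:

```python
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Attach the report for each phase (setup/call/teardown) to the item.
    outcome = yield
    report = outcome.get_result()
    setattr(item, f"rep_{report.when}", report)

@pytest.fixture
def page(context, request):
    page = context.new_page()
    yield page
    video = page.video            # grab the handle before closing the page
    page.close()
    rep = getattr(request.node, "rep_call", None)
    if video is not None and not (rep and rep.failed):
        video.delete()            # drop recordings of passing tests
```

Videos of failed tests stay in `reports/` exactly as before.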
<python><playwright><playwright-python>
2024-05-21 13:20:31
1
1,939
FieryCat
78,512,062
850,781
Conda update flips between two versions of fmt
<p>When I do <code>conda update -n base --all</code>, I get this:</p> <pre><code>The following packages will be UPDATED: libmamba artifactory/api/conda/conda-forge::li~ --&gt; pkgs/main::libmamba-1.5.8-h99b1521_2 libmambapy artifactory/api/conda/conda-forge::li~ --&gt; pkgs/main::libmambapy-1.5.8-py311h77c03ed_2 pybind11-abi artifactory/api/conda/conda-forge::py~ --&gt; pkgs/main::pybind11-abi-5-hd3eb1b0_0 The following packages will be DOWNGRADED: fmt 10.2.1-h181d51b_0 --&gt; 9.1.0-h181d51b_0 libarchive 3.7.2-h313118b_1 --&gt; 3.6.2-h6f8411a_1 </code></pre> <p>On the next invocation of <code>conda update -n base --all</code> I get</p> <pre><code>The following packages will be UPDATED: fmt 9.1.0-h181d51b_0 --&gt; 10.2.1-h181d51b_0 libarchive 3.6.2-h6f8411a_1 --&gt; 3.7.2-h313118b_1 The following packages will be SUPERSEDED by a higher-priority channel: libmamba pkgs/main::libmamba-1.5.8-h99b1521_2 --&gt; artifactory/api/conda/conda-forge::libmamba-1.5.8-h3f09ed1_0 libmambapy pkgs/main::libmambapy-1.5.8-py311h77c~ --&gt; artifactory/api/conda/conda-forge::libmambapy-1.5.8-py311h0317a69_0 pybind11-abi pkgs/main::pybind11-abi-5-hd3eb1b0_0 --&gt; artifactory/api/conda/conda-forge::pybind11-abi-4-hd8ed1ab_3 </code></pre> <p>and the cycle repeats.</p> <p>I.e., I never get <code>All requested packages already installed</code>.</p> <p>Is there a way to avoid this insanity?</p> <p>PS1: <code>conda 24.5.0</code></p> <p>PS2: After <a href="https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-channels.html" rel="nofollow noreferrer">RTFM</a>, the answer turned out to be</p> <pre><code>conda config --set channel_priority flexible </code></pre> <p>(it was <code>false</code> before -- dunno why, I think I copied it from the local docs without thinking).</p>
<python><conda><miniconda>
2024-05-21 12:58:14
2
60,468
sds
78,511,805
1,169,091
Click a particular button on a page using By.LINK_TEXT but find_element throws exception
<p>On this page: <a href="https://finance.yahoo.com/quote/KO/options" rel="nofollow noreferrer">https://finance.yahoo.com/quote/KO/options</a></p> <p>Is this button:</p> <pre><code>&lt;button class=&quot;tertiary-btn fin-size-small menuBtn tw-justify-center rounded rightAlign svelte-xhcwo&quot; data-ylk=&quot;elm:inpt;elmt:menu;itc:1;sec:qsp-options;slk:date-select;subsec:date&quot; type=&quot;button&quot; aria-label=&quot;May 24, 2024&quot; aria-haspopup=&quot;listbox&quot; data-type=&quot;date&quot; data-rapid_p=&quot;14&quot; data-v9y=&quot;1&quot;&gt; &lt;div class=&quot;icon fin-icon inherit-icn sz-medium svelte-21xhfv&quot;&gt; &lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot; viewBox=&quot;0 0 24 24&quot;&gt; &lt;path d=&quot;M16.59 8.59 12 13.17 7.41 8.59 6 10l6 6 6-6z&quot;&gt;&lt;/path&gt; &lt;/svg&gt; &lt;/div&gt; &lt;span class=&quot; textSelect svelte-q648fa&quot;&gt;May 24, 2024&lt;/span&gt; &lt;/button&gt; </code></pre> <p>I tried this code:</p> <pre><code>browser = webdriver.Firefox() browser.get(&quot;https://finance.yahoo.com/quote/KO/options&quot;) # The option expiration date will change based on when this script runs option_expiration_date = &quot;May 24, 2024&quot; wait = WebDriverWait(browser, 20) field = wait.until(EC.visibility_of_element_located((By.LINK_TEXT, option_expiration_date))) elem = browser.find_element(By.LINK_TEXT, option_expiration_date) elem.click() </code></pre> <p>but it throws this exception:</p> <pre><code>Message=Message: Stacktrace: RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8 WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:193:5 NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:511:5 dom.find/&lt;/&lt;@chrome://remote/content/shared/DOM.sys.mjs:136:16 Source=C:\Users\Leave\source\repos\Selenium Demo\Selenium Demo\src\demoPackage\yahooFinance.py StackTrace: File &quot;C:\Users\Leave\source\repos\Selenium Demo\Selenium 
Demo\src\demoPackage\yahooFinance.py&quot;, line 18, in yahooFinance field = wait.until(EC.visibility_of_element_located((By.LINK_TEXT, option_expiration_date))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Leave\source\repos\Selenium Demo\Selenium Demo\src\mainPackage\main.py&quot;, line 17, in &lt;module&gt; (Current frame) yahooFinance() selenium.common.exceptions.TimeoutException: Message: Stacktrace: RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8 WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:193:5 NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:511:5 dom.find/&lt;/&lt;@chrome://remote/content/shared/DOM.sys.mjs:136:16 </code></pre>
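The probable cause: `By.LINK_TEXT` only matches `<a>` anchors, and the expiry selector here is a `<button>`, so the wait times out with `NoSuchElementError`. An XPath keyed on the button's `aria-label` (visible in the markup above) targets it instead — the helper below just builds the locator string, with the selenium calls sketched in comments:

```python
# By.LINK_TEXT never matches <button> elements; use the aria-label instead.
def expiration_button_xpath(date_label):
    return f"//button[@aria-label='{date_label}']"

print(expiration_button_xpath("May 24, 2024"))
# elem = WebDriverWait(browser, 20).until(
#     EC.element_to_be_clickable((By.XPATH, expiration_button_xpath("May 24, 2024")))
# )
# elem.click()
```

`element_to_be_clickable` is the safer condition here, since the button also has to accept the click.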
<python><selenium-webdriver>
2024-05-21 12:18:05
2
4,741
nicomp
78,511,656
17,220,672
"not enough memory" error using multiprocessing
<p>I have <code>table_data</code>, which is a list. It has 3.5 million <code>sqlalchemy</code> ORM models stored in it.</p> <p>Now, for each element in the <code>table_data</code> list I want to preprocess some fields in these models and convert each one of them to the new sqlalchemy <code>FooModel</code>.</p> <p>So basically I have this code:</p> <pre><code>#In this case, preprocess() function would get a list of table_data models and it would return list of FooModel models table_data = extract_rows_from_database() preprocessed_models = preprocess(table_data) </code></pre> <p>This code above works, but I was wondering if I could speed up this process using multiprocessing:</p> <pre><code># In this case preprocess() function would have one orm model as an argument, and it would return FooModel item with ProcessPoolExecutor(process_workers) as exe: for result in exe.map(preprocess,table_data): preprocessed_models.append(result) </code></pre> <p>But when I use multiprocessing my container keeps exiting with <code>code 137</code> (not enough memory) or</p> <pre><code> concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. </code></pre> <p>The entire function pseudocode would look like this (inspired by this answer: <a href="https://stackoverflow.com/a/20849064/17220672">https://stackoverflow.com/a/20849064/17220672</a>):</p> <pre><code>def function(): table_data = extract_rows_from_database() preprocessed_models = [] with ProcessPoolExecutor(process_workers) as exe: for result in exe.map(preprocess,table_data): preprocessed_models.append(result) load_in_another_database(preprocessed_models) </code></pre> <p>**** load_in_another_database() uses sqlalchemy, and I would like to parallelize this too!</p>
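One plausible cause worth sketching: every ORM instance handed to `exe.map` is pickled (session state and all) on its way to a worker, which multiplies memory across 3.5 million rows. Converting rows to plain dicts first keeps the pickles small, and `chunksize` batches the traffic so millions of tiny tasks are not in flight at once. The field names and cleanup step below are illustrative, not the question's real schema:

```python
from concurrent.futures import ProcessPoolExecutor

def to_plain(row):
    # Cheap-to-pickle snapshot of the ORM row (illustrative fields).
    return {"id": row.id, "field": row.field}

def preprocess_plain(d):
    # Stand-in for the real per-row preprocessing.
    d["field"] = d["field"].strip().lower()
    return d

def run(rows, workers=4):
    plain = (to_plain(r) for r in rows)      # generator: no second big list
    with ProcessPoolExecutor(workers) as exe:
        # chunksize batches rows per inter-process message.
        yield from exe.map(preprocess_plain, plain, chunksize=1000)
```

Consuming `run()` in batches and bulk-inserting each batch (instead of appending all 3.5M results to one list before `load_in_another_database`) keeps the parent's memory bounded too.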
<python><sqlalchemy><multiprocessing>
2024-05-21 11:49:09
2
419
mehekek
78,511,620
18,091,040
Install fastapi using an older version of pip (9.0.3)
<p>I am working on a project that requires an older version of pip (9.0.3), and I wanted to install fastapi with it. When I run:</p> <pre><code>pip install fastapi </code></pre> <p>I get:</p> <pre><code>Collecting fastapi Could not find a version that satisfies the requirement fastapi (from versions: ) No matching distribution found for fastapi You are using pip version 9.0.3, however version 24.0 is available. You should consider upgrading via the 'pip install --upgrade pip' command. </code></pre> <p>Would it be possible to work around this error?</p>
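One workaround sketch, assuming access to any machine with a modern pip: fetch the wheels there and install them offline with the old pip. The `--no-index` and `--find-links` flags already exist in pip 9.0.3; whether the resulting wheels are compatible still depends on the Python version in the constrained environment.

```shell
# On a machine with a recent pip: download fastapi and its dependencies as wheels.
pip download fastapi -d ./wheels

# Copy ./wheels to the constrained environment, then install offline there
# with the old pip 9.0.3:
pip install --no-index --find-links ./wheels fastapi
```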
<python><python-3.x><pip><fastapi>
2024-05-21 11:44:17
0
640
brenodacosta
78,511,471
13,919,925
How to get distinct records based on specific fields in Django ORM?
<pre><code> def get_queryset(self): # current_date = datetime(2024, 9, 2, 12, 56, 54, 324893).today() current_date = datetime.today() # get the booking data from the current date --------- requested_datetime is when the patient has scheduled the booking for booking_data = PatientBooking.objects.annotate( admission_datetime=F('requested_admission_datetime'), delivery_datetime=F('estimated_due_date'), fully_dilation_datetime=ExpressionWrapper( F('estimated_due_date') - timedelta(days=(1)), output_field=DateTimeField() ), source=Value('booking', output_field=CharField()) ).filter(status=&quot;ACCEPTED&quot;, admission_datetime__date=current_date ) # get the patient data from the current date ---- based on when the patient record is created_at patient_data = PatientDetails.objects.annotate( admission_datetime=F('created_at'), delivery_datetime=F('estimated_due_date'), fully_dilation_datetime=F('patient_fully_dilated'), source=Value('patient', output_field=CharField()) ).filter(status=&quot;ACTIVE&quot;, admission_datetime__date=current_date ) # combine the record in one queryset using union on the required fields only combined_queryset = booking_data.values('first_name', 'last_name', 'delivery_type', 'date_of_birth', 'admission_datetime', 'delivery_datetime', 'fully_dilation_datetime', 'source' ).union( patient_data.values('first_name', 'last_name', 'delivery_type', 'date_of_birth', 'admission_datetime', 'delivery_datetime', 'fully_dilation_datetime', 'source' ) ).order_by('first_name', 'last_name', 'date_of_birth') # if records in both the model matches---- then consider it as same patient and consider the record where the source=patient combined_queryset = combined_queryset.distinct( 'first_name', 'last_name', 'date_of_birth' ) return combined_queryset </code></pre> <p>This is my queryset. I have two models, PatientDetails and PatientBooking; they have different numbers of fields with different names, and I want to get a single queryset with specific fields from both tables.
So I first annotate the field names and then apply a union, to get a single queryset with the fields ('first_name', 'last_name', 'delivery_type', 'date_of_birth', 'admission_datetime', 'delivery_datetime', 'fully_dilation_datetime', 'source'). Up to this point everything is working fine and I'm getting the desired result.</p> <p>But after getting the results from both models in a single queryset, I now want to get distinct records based on ('first_name', 'last_name', 'date_of_birth'). I tried</p> <pre><code>combined_queryset = combined_queryset.distinct( 'first_name', 'last_name', 'date_of_birth' ) </code></pre> <p>but it is not working. Can anyone tell me what I'm doing wrong?</p>
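Django does not allow field-based `.distinct(*fields)` on the result of `.union()` (and `DISTINCT ON` is PostgreSQL-only in any case), which is why the call has no effect. A framework-free sketch of the fallback: evaluate the combined queryset and deduplicate in Python, preferring the `source='patient'` row when both models match. Field names are taken from the question; the sample rows are invented.

```python
def dedupe_prefer_patient(rows):
    """Keep one row per (first_name, last_name, date_of_birth),
    preferring the row whose source is 'patient'."""
    chosen = {}
    for row in rows:  # rows: iterable of dicts, e.g. list(combined_queryset)
        key = (row["first_name"], row["last_name"], row["date_of_birth"])
        current = chosen.get(key)
        if current is None or (row["source"] == "patient"
                               and current["source"] != "patient"):
            chosen[key] = row
    return list(chosen.values())

rows = [
    {"first_name": "Amy", "last_name": "Lee", "date_of_birth": "1990-01-01", "source": "booking"},
    {"first_name": "Amy", "last_name": "Lee", "date_of_birth": "1990-01-01", "source": "patient"},
    {"first_name": "Bob", "last_name": "Ray", "date_of_birth": "1985-05-05", "source": "booking"},
]
print(dedupe_prefer_patient(rows))
```

This trades the database's DISTINCT for a single dict pass, which is fine for day-sized result sets like the ones filtered on `current_date` here.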
<python><django>
2024-05-21 11:15:13
0
302
sandeepsinghnegi
78,511,206
1,333,025
How to get docstrings of fields of a NamedTuple or dataclass?
<p>I'm using a <code>NamedTuple</code> with field docstrings <a href="https://stackoverflow.com/q/1606436/1333025">as suggested</a> for Python 3.6+:</p> <pre><code>from typing import NamedTuple class Config(NamedTuple): buy_fee: float &quot;&quot;&quot;My field description...&quot;&quot;&quot; # ... </code></pre> <p>Now I'd like to programmatically get the docstrings of the fields to generate a nice help message.</p> <p>I tried to use <code>inspect</code> as follows:</p> <pre><code>[(f, inspect.getdoc(getattr(cls, f))) for f in cls._fields] </code></pre> <p>But I only get the default alias docstrings: <code>[('buy_fee', 'Alias for field number 0'), ...]</code>.</p> <p>I tried to convert the class to a <code>@dataclass</code> and extract the docs with</p> <pre><code>[(f.name, inspect.getdoc(f)) for f in dataclasses.fields(cls)] </code></pre> <p>But this approach also failed; I got just <code>[('buy_fee', None), ...]</code>.</p> <p>Is there some way to achieve this?</p>
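Attribute docstrings are discarded at runtime (PEP 224, which would have kept them, was rejected), so neither `inspect` nor `dataclasses.fields` can see them — they survive only in the source text. A sketch of the AST approach: pair each annotated field with the string literal on the following line. For a real class you would feed it `inspect.getsource(cls)`; here the source is inlined as a string to keep the example self-contained.

```python
import ast

def field_docstrings(class_source: str) -> dict:
    """Map annotated field names to the string literal that follows them."""
    class_node = ast.parse(class_source).body[0]
    docs, pending = {}, None
    for node in class_node.body:
        if isinstance(node, ast.AnnAssign) and isinstance(node.target, ast.Name):
            pending = node.target.id  # remember the field we just saw
        elif (pending is not None
              and isinstance(node, ast.Expr)
              and isinstance(node.value, ast.Constant)
              and isinstance(node.value.value, str)):
            docs[pending] = node.value.value  # the "docstring" right below it
            pending = None
        else:
            pending = None
    return docs

SOURCE = '''
class Config(NamedTuple):
    buy_fee: float
    """My field description..."""
    sell_fee: float
    """Another description."""
'''

print(field_docstrings(SOURCE.strip()))
```

Note that `ast` only parses, so `NamedTuple` need not be importable for this to work.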
<python><python-dataclasses><docstring>
2024-05-21 10:28:02
1
63,543
Petr
78,511,133
14,253,961
PyTorch: Early stopping mechanism
<p>I work with PyTorch and the CIFAR100 dataset. Since I'm new to this, I would like to incorporate an early stopping mechanism into my code:</p> <pre><code>def train(net,trainloader,epochs,use_gpu = True): ... net.train() for epoch in range(epochs): print (&quot;Epoch {}/{}&quot;.format(epoch+1, epochs)) running_loss = 0.0 running_corrects = 0 for i, data in enumerate(trainloader, 0): images, labels = data[0].to(device), data[1].to(device) optimizer.zero_grad() outputs = net(images) _, preds = torch.max(outputs, 1) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss += loss.item() epoch_loss = running_loss/len(trainloader.dataset) print('Loss: {}'.format(epoch_loss)) </code></pre> <p>This is the early stopping class:</p> <pre><code>class EarlyStopper: def __init__(self, patience=1, min_delta=0): self.patience = patience self.min_delta = min_delta self.counter = 0 self.min_validation_loss = float('inf') def early_stop(self, validation_loss): if validation_loss &lt; self.min_validation_loss: self.min_validation_loss = validation_loss self.counter = 0 elif validation_loss &gt; (self.min_validation_loss + self.min_delta): self.counter += 1 if self.counter &gt;= self.patience: return True return False </code></pre> <p>I don't have a validation set, so I assume the test set serves as the validation set. Now, I would like to know where I should place these lines:</p> <pre><code>early_stopper = EarlyStopper(patience=3, min_delta=10) for epoch in np.arange(n_epochs): train_loss = train_one_epoch(model, train_loader) validation_loss = validate_one_epoch(model, validation_loader) if early_stopper.early_stop(validation_loss): break </code></pre>
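The placement boils down to: compute the validation loss (here: test-set loss) once per epoch, after the training pass, and break out of the epoch loop when the stopper fires — i.e. the stopper check replaces nothing inside `train()`, it wraps it. A torch-free sketch using the question's `EarlyStopper` unchanged, with a fake loss sequence standing in for `train_one_epoch`/`validate_one_epoch`:

```python
class EarlyStopper:
    # Same logic as in the question.
    def __init__(self, patience=1, min_delta=0):
        self.patience = patience
        self.min_delta = min_delta
        self.counter = 0
        self.min_validation_loss = float("inf")

    def early_stop(self, validation_loss):
        if validation_loss < self.min_validation_loss:
            self.min_validation_loss = validation_loss
            self.counter = 0
        elif validation_loss > (self.min_validation_loss + self.min_delta):
            self.counter += 1
            if self.counter >= self.patience:
                return True
        return False

# Fake per-epoch validation losses: improves, then degrades.
fake_val_losses = [0.9, 0.7, 0.5, 0.6, 0.8, 0.9, 1.0]

stopper = EarlyStopper(patience=3, min_delta=0.0)
stopped_at = None
for epoch, val_loss in enumerate(fake_val_losses):
    # Real code: train_one_epoch(net, trainloader)
    #            val_loss = evaluate(net, testloader)  # eval mode, no grad
    if stopper.early_stop(val_loss):
        stopped_at = epoch
        break
print(stopped_at)
```

Note that `min_delta=10` from the question's snippet would almost never trigger on cross-entropy losses; a small value like 0.0 or 0.01 is more plausible.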
<python><pytorch><early-stopping>
2024-05-21 10:14:49
1
741
seni
78,510,975
4,732,111
Polars read AWS RDS DB with a table containing column of type jsonb
<p>I'm trying to read an AWS RDS DB through polars using the following method:</p> <pre><code>df_rds_table_test = pl.read_database_uri(sql_query, uri) </code></pre> <p>The Postgres DB contains a table with a column named <em>'json_message'</em> of type <em><strong>jsonb</strong></em>, and other columns of type String. When the table is read by polars, it treats the data type of the <em>json_message</em> column as String.</p> <p>Further, I'm using DuckDB on the polars dataframe to perform SQL operations.</p> <pre><code>SQL_QUERY_SRC_JOIN = &quot;select id,json_message.amount as amount from df_rds_table_test where id=10020240501&quot; src_df = duckdb.sql(SQL_QUERY_SRC_JOIN).pl() </code></pre> <p>I'm getting an exception that states</p> <pre><code>duckdb.duckdb.BinderException: Binder Error: Cannot extract field 'amount' from expression &quot;json_message&quot; because it is not a struct, union, or json </code></pre> <p>I'm not sure if there's a way to cast the datatype to jsonb in polars, as it's not supported. I tried casting json_message to <em>struct</em> but I'm getting an error.</p> <p>I also tried casting json_message to type JSON in the DuckDB query, which didn't help either.</p> <p>Sample json_message:</p> <pre><code>{ &quot;amount&quot;: 0, &quot;c_id&quot;: null, &quot;branch&quot;: &quot;0502&quot;, &quot;user_id&quot;: &quot;U999999&quot;} </code></pre> <p>It would be great if someone could help me access the JSON string in the polars dataframe using DuckDB.</p>
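Since the column arrives as a plain string, one direction is to decode it on the polars side before handing the frame to DuckDB — in recent polars versions that is an expression along the lines of `pl.col("json_message").str.json_decode()` (the exact method name is version-dependent, so treat it as an assumption to verify). The underlying idea, sketched with only the stdlib so it stays self-contained, using the sample row from the question:

```python
import json

rows = [
    {"id": 10020240501,
     "json_message": '{"amount": 0, "c_id": null, "branch": "0502", "user_id": "U999999"}'},
]

# Decode the string column into real dicts; field access then becomes trivial,
# whether done here or via a decoded/struct column in the dataframe.
decoded = [{**row, "json_message": json.loads(row["json_message"])} for row in rows]
amounts = [r["json_message"]["amount"] for r in decoded if r["id"] == 10020240501]
print(amounts)
```

Once the column is a struct rather than a string, DuckDB's `json_message.amount` binder error should no longer apply, because the field genuinely exists in the schema.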
<python><pandas><amazon-web-services><python-polars><duckdb>
2024-05-21 09:47:18
1
363
Balaji Venkatachalam
78,510,809
7,597,536
Airflow - login next query param does not match the base_url
<p>I deployed my Airflow application behind a <code>Microsoft application gateway</code>. I updated the base_url to match the public IP provided by the application gateway, for example, <code>1.2.3.4</code>.</p> <p>However, when I try to access the login page, the &quot;next&quot; query parameter refers to the internal IP of the machine (for example, <code>192.0.0.0</code>) instead of the public IP of the Application Gateway.</p> <p>So, for instance, I have this value as base_url:</p> <pre><code>BASE_URL=http://1.2.3.4/airflow </code></pre> <p>When I open the login page, the URL looks like this:</p> <pre><code>http://1.2.3.4/airflow/login/?next=http%3A%2F%2F192.0.0.0%2Fairflow%2Fhome </code></pre> <p>I was expecting to see something like this:</p> <pre><code>http://1.2.3.4/airflow/login/?next=http%3A%2F%2F1.2.3.4%2Fairflow%2Fhome </code></pre> <p>Is there some additional configuration that I have to set?</p> <p>NOTE: I'm not using Kubernetes, just plain virtual machines. Airflow 2.6.2.</p>
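The redirect target in `next` is derived from the request headers the webserver actually receives, not from `base_url` alone. So the usual direction (a sketch — verify the option names against the Airflow 2.6 docs on running behind a reverse proxy) is to have the Application Gateway forward `X-Forwarded-Host` / `X-Forwarded-Proto`, and to enable Airflow's proxy-fix middleware so those headers are honored:

```ini
# airflow.cfg (equivalently via env var: AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX=True)
[webserver]
base_url = http://1.2.3.4/airflow
enable_proxy_fix = True
```

Without the forwarded headers, the webserver only sees the internal IP it is bound to, which matches the symptom described above.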
<python><azure><airflow>
2024-05-21 09:18:00
0
1,167
Mattia
78,510,594
10,595,871
Unable to scrape XHR elements from webpage (python)
<p>I've already searched for similar questions, but due to errors and deprecated methods I'm still not able to solve this.</p> <p>I just need to extract the JSON provided by the only XHR element on this page: <a href="https://certificatoricreditors.mimit.gov.it/Consultazione" rel="nofollow noreferrer">https://certificatoricreditors.mimit.gov.it/Consultazione</a></p> <p>The XHR is named Consultazione?..... and I'm totally unable to extract it. I have no code, only attempts based on other sources like: <a href="https://stackoverflow.com/questions/54994218/how-to-scrape-from-xhr-in-chrome-python">How to scrape from XHR in Chrome ( Python )</a> (404 error), <a href="https://stackoverflow.com/questions/74914321/scraping-updated-source-code-from-xhr-response-of-a-website">Scraping UPDATED source code from XHR response of a website</a> (no code provided), etc.</p>
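The usual route is to skip the HTML entirely: copy the XHR's full URL from the browser's DevTools Network tab ("Copy as cURL" also reveals the headers it sends) and request that endpoint directly. A stdlib sketch — the URL placeholder and the header set below are assumptions to be replaced with whatever DevTools shows for the real request:

```python
import json
import urllib.request

def parse_payload(raw: bytes) -> object:
    # XHR endpoints typically return UTF-8 JSON bytes.
    return json.loads(raw.decode("utf-8"))

def fetch_json(url: str) -> object:
    # Headers many endpoints expect from an XHR caller; adjust per DevTools.
    req = urllib.request.Request(url, headers={
        "User-Agent": "Mozilla/5.0",
        "Accept": "application/json",
        "X-Requested-With": "XMLHttpRequest",
    })
    with urllib.request.urlopen(req, timeout=30) as resp:
        return parse_payload(resp.read())

# data = fetch_json("https://certificatoricreditors.mimit.gov.it/Consultazione?<params-from-devtools>")
print(parse_payload(b'{"items": [{"id": 1}]}'))
```

If the endpoint requires cookies or a CSRF token set by the page, those also appear in the copied request and can be added to the headers dict the same way.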
<python><web-scraping><python-requests>
2024-05-21 08:40:54
1
691
Federicofkt