Dataset columns (schema from the viewer header, with min/max values):
- QuestionId: int64 (74.8M to 79.8M)
- UserId: int64 (56 to 29.4M)
- QuestionTitle: string (15 to 150 chars)
- QuestionBody: string (40 to 40.3k chars)
- Tags: string (8 to 101 chars)
- CreationDate: string date (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
- AnswerCount: int64 (0 to 44)
- UserExpertiseLevel: int64 (301 to 888k)
- UserDisplayName: string (3 to 30 chars)

Each record below lists: QuestionId, UserId, QuestionTitle, QuestionBody, Tags, CreationDate, AnswerCount, UserExpertiseLevel, UserDisplayName.
78,532,799
984,003
Get purchase / subscription info using Apple receipt, server-side, "Original API"
<p>I have looked at previous questions/answers for this, but they point to the now deprecated <a href="https://developer.apple.com/documentation/storekit/in-app_purchase/original_api_for_in-app_purchase/validating_receipts_with_the_app_store" rel="nofollow noreferrer">VerifyReceipt endpoint</a>.</p> <p>I am trying to get information related to in-app purchases using the receipt that Apple supplies. I am using the original API. The only information that I save is the receipt. I need to be able to do this server side (Python).</p> <p>I've followed the links from the page above, but they seem to be about validating the original purchase or they link to pages that require a transaction id??</p> <p>This hasn't been an issue before since the purchases were all one-time purchases, and cancellations weren't an issue. But now that I'll be adding auto-renewable purchases I need to look up if people are renewing or cancelling.</p> <p>I should upgrade to the new transaction method at some point, but that won't be right away. Hopefully it's not necessary.</p>
<python><in-app-purchase>
2024-05-25 14:47:07
1
29,851
user984003
78,532,672
1,391,441
Is it bad practice to use empty classes as containers for methods?
<p>I have a few functions that do basically the same task but using different algorithms. Since these algorithms are all related (they belong to the same parent process) I've organized them in my package <code>my_package</code> as methods of an empty class in a module file called <code>my_empty_class.py</code> as so:</p> <pre><code>class my_empty_class: &quot;&quot;&quot;Define an empty class to hold the methods&quot;&quot;&quot; def func1(param1, param2, param3): # algo 1 return results def func2(param1, param3): # algo 2 return results def func3(param2, param4): # algo 3 return results </code></pre> <p>The package files are organized as:</p> <pre><code>my_package/ __init__.py my_empty_class.py another_module.py onemore_module.py ... </code></pre> <p>This is what my <code>__init__.py</code> looks like:</p> <pre><code>from .my_empty_class import my_empty_class as my_empty_class from .another_module import another_module as another_module from .onemore_module import onemore_module as onemore_module ... </code></pre> <p>Finally, I call the methods with:</p> <pre><code>import my_package res1 = my_package.my_empty_class.func1(param1, param2, param3) res2 = my_package.my_empty_class.func2(param1, param3) res3 = my_package.my_empty_class.func3(param2, param4) </code></pre> <p>Is this bad practice? Because it works, but it feels like it is. Is there maybe a better/more recommended way to go about this?</p>
<python><class>
2024-05-25 13:51:53
2
42,941
Gabriel
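A common answer here is that a plain module already provides the namespace the empty class is being used for. A minimal runnable sketch under that assumption (`algorithms` is a hypothetical module name and the function bodies are placeholders):

```python
# A plain module gives the same dotted access as the empty class.
# In the question's layout this file would be my_package/algorithms.py
# (hypothetical name) and __init__.py would do `from . import algorithms`.

def func1(param1, param2, param3):
    # algo 1 (placeholder logic)
    return param1 + param2 + param3

def func2(param1, param3):
    # algo 2 (placeholder logic)
    return param1 * param3

# Call sites then read my_package.algorithms.func1(...) -- no class needed.
res1 = func1(1, 2, 3)
res2 = func2(2, 5)
print(res1, res2)
```

The empty class adds nothing a module doesn't: no state, no instances, and (as written) its "methods" don't even take `self`, which is a sign they are really free functions.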
78,532,668
12,769,783
TypeVarTuple Unpack all contained types and use them as type arguments
<p>I would like to use a generic class in Python that is instantiated with a variable number of type parameters. This can be achieved using <code>TypeVarTuple</code>. For each of these type parameters, I want to fill a data structure (e.g., a <code>list</code>). The list can have a different length for each data type. Ideally, I would like to type hint a tuple of lists, each corresponding to a type from the TypeVarTuple.</p> <p>Here is a very simplified example of what I would like to achieve (note that the syntax below does not work):</p> <pre class="lang-py prettyprint-override"><code>from typing import Generic, Tuple, List from typing_extensions import TypeVarTuple, Unpack Ts = TypeVarTuple('Ts') class Test(Generic[Unpack[Ts]]): def __init__(self) -&gt; None: self.content: Unpack[Tuple[List[Ts]]] = [] def call(self, *values: Unpack[List[Ts]]): for v, c in zip(values, self.content): c.extend(v) # noqa class Implementation(Test[int, str, int]): pass i = Implementation() i.call([1, 2, 3], [], [2]) </code></pre> <p>Is something like this possible with Python's type hinting? If so, how can it be properly implemented?</p>
<python><python-typing>
2024-05-25 13:45:47
1
1,596
mutableVoid
78,532,625
11,861,874
Tkinter Check Combo Box Issue
<p>I am trying to create a combo box with a checkbox against each item. I am not able to select multiple items in one go, as every time I have to click the drop-down and select one. Also, it collapses every time I select one item.</p> <pre><code>from tkinter import * root = Tk() main = Menubutton(root,text=&quot;Various Dates&quot;) main.grid() main.menu = Menu(main,tearoff=0) main['menu'] = main.menu Date0 = StringVar() Date1 = StringVar() Date2 = StringVar() Date3 = StringVar() main.menu.add_checkbutton(label='22/03/2024',variable=Date0) main.menu.add_checkbutton(label='24/03/2024',variable=Date1) main.menu.add_checkbutton(label='26/03/2024',variable=Date2) main.menu.add_checkbutton(label='28/03/2024',variable=Date3) main.pack() root.mainloop() </code></pre> <p>The above code creates a simple dropdown, but it collapses every time after selecting an option. I need the dropdown to stay open until I have selected multiple options, and only collapse once I click the dropdown again, like below.</p> <p><a href="https://i.sstatic.net/F0L0abRV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0L0abRV.png" alt="enter image description here" /></a></p>
<python><user-interface><tkinter>
2024-05-25 13:32:05
0
645
Add
78,532,581
7,421,654
No matching distribution found for intel-openmp==2024.1.2
<p>I am using a package that has an <code>intel-openmp==2024.1.2</code> dependency. When I run it locally on a Windows machine it works fine, but it fails on Ubuntu (CodeBuild). It throws:</p> <pre><code>No matching distribution found for intel-openmp==2024.1.2 </code></pre> <pre><code>Python version 3.11.6 pip version 23.0.1 </code></pre>
<python><pip>
2024-05-25 13:14:58
1
1,493
Mohamed Anser Ali
78,532,439
3,940,749
Covariant and invariant collections in python typing
<p>There is a problem that I have come across in python that the List type is invariant - meaning that it can only hold objects of a specific type or you will get a type error (for example when running mypy) - but sometimes you need to use a collection in a more generic function which can accept all types of a base class (lets call it <code>A</code>) as well as all derived classes. A clear example is this</p> <pre class="lang-py prettyprint-override"><code>class A: pass class B(A): pass def print_list_a(my_list: list[A]) -&gt; None: print(my_list) l1 = [A()] l2 = [B()] print_list_a(l1) print_list_a(l2) </code></pre> <p>When I run the above with mypy in strict mode I get the following</p> <pre><code>main.py:41: error: Argument 1 to &quot;print_list_a&quot; has incompatible type &quot;list[B]&quot;; expected &quot;list[A]&quot; [arg-type] main.py:41: note: &quot;List&quot; is invariant main.py:41: note: Consider using &quot;Sequence&quot; instead, which is covariant </code></pre> <p>Can you explain what the note re invariant and covariant types means and the best way to solve it?</p>
<python><mypy><python-typing>
2024-05-25 12:22:43
1
8,277
Sam Redway
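mypy's note points at the usual fix: accept the read-only, covariant `Sequence` instead of the invariant `list`. A sketch of the question's example with that one change (it runs as-is; the variance claim itself is checked by mypy, not at runtime):

```python
from typing import Sequence

class A:
    pass

class B(A):
    pass

# list is invariant because it is mutable: a function typed list[A] could
# append a plain A into what is really a list[B], breaking the caller's
# type. Sequence has no mutating methods, so a list[B] can safely be read
# wherever Sequence[A] is expected (covariance).
def print_list_a(my_list: Sequence[A]) -> None:
    print(len(my_list))

l1 = [A()]
l2 = [B()]
print_list_a(l1)  # fine
print_list_a(l2)  # now also accepted under mypy --strict
```

The same reasoning applies to `Mapping` vs `dict` and `AbstractSet` vs `set`: take the widest read-only protocol you actually need as a parameter type.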
78,532,421
6,282,576
drf-spectacular hide Schemas from components in Swagger UI
<p>In my Swagger UI, I'm trying to hide the Schemas section from components:</p> <p><a href="https://i.sstatic.net/IYAdb2BW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYAdb2BW.png" alt="enter image description here" /></a></p> <p>I'm using <a href="https://drf-spectacular.readthedocs.io/en/latest/settings.html#example-swaggerui-settings" rel="nofollow noreferrer">drf-spectacular</a> and I didn't find anything relating to <code>Schemas</code> in the <a href="https://swagger.io/docs/open-source-tools/swagger-ui/usage/configuration/" rel="nofollow noreferrer">Swagger Configuration</a>.</p> <p>I tried removing <code>schemas</code> from the JSON response:</p> <pre class="lang-py prettyprint-override"><code>from drf_spectacular.views import SpectacularJSONAPIView class CustomJSONAPIView(SpectacularJSONAPIView): def get(self, request, *args, **kwargs): response = super().get(request, *args, **kwargs) del response.data[&quot;components&quot;][&quot;schemas&quot;] return response </code></pre> <p>Which works, but corrupts rest of the Swagger functionality. Is it possible to simply hide this <em>Schemas</em> component without breaking rest of Swagger?</p>
<python><django><swagger><drf-spectacular>
2024-05-25 12:15:24
0
4,313
Amir Shabani
78,532,299
7,959,614
How to bundle pandas rows together using the opposite of pandas.groupby-methodology
<p>I have the following <code>pandas.DataFrame</code></p> <pre><code> match_id court 0 50311513 1 1 50313011 2 2 50313009 2 3 50317691 1 4 50315247 2 5 50318597 1 6 50318877 1 7 50318983 1 8 50318831 1 9 50318595 1 </code></pre> <p>As you can see there are a total of <code>2</code> courts. I want to bundle each &quot;slot&quot; together. So, the first grouped df should contain match <code>50311513</code> and <code>50313011</code>. The second slot should contain <code>50313009</code> and <code>50317691</code>. After the third slot, the grouped df is basically a single row.</p> <p>How can I tell <code>pandas.groupby()</code> that only one match can be played on a court at a time?</p> <p>Thanks</p> <p><strong>Edit</strong></p> <p>Different input data:</p> <pre><code> match_id court group 0 46768193 1 0 1 46768193 1 1 2 46768187 2 0 3 46768187 2 1 4 46767821 3 0 </code></pre>
<python><pandas>
2024-05-25 11:28:05
1
406
HJA24
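Assuming matches are listed in playing order, one sketch is to number each row within its court using `groupby(...).cumcount()`: the n-th match on a court belongs to slot n, which encodes the "one match per court at a time" rule, and grouping by that slot number gives the bundles:

```python
import pandas as pd

df = pd.DataFrame({
    "match_id": [50311513, 50313011, 50313009, 50317691, 50315247,
                 50318597, 50318877, 50318983, 50318831, 50318595],
    "court":    [1, 2, 2, 1, 2, 1, 1, 1, 1, 1],
})

# cumcount numbers rows within each court: the n-th match on any court
# falls into slot n.
df["slot"] = df.groupby("court").cumcount()

slots = {k: sorted(g["match_id"]) for k, g in df.groupby("slot")}
print(slots[0])  # first bundle
print(slots[1])  # second bundle
```

On the question's first table this yields `[50311513, 50313011]` for slot 0 and `[50313009, 50317691]` for slot 1, matching the desired bundles; later slots shrink to single rows once only court 1 has matches left.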
78,532,060
10,607,799
Perfect Reconstruction Condition of DWT
<p>According to several publications, e.g, <a href="https://link.springer.com/chapter/10.1007/978-1-4614-1821-4_6#Sec5" rel="nofollow noreferrer">https://link.springer.com/chapter/10.1007/978-1-4614-1821-4_6#Sec5</a>, the following conditions ensure perfect reconstruction of a DWT:</p> <ul> <li>alias cancellation: <img src="https://latex.codecogs.com/svg.image?%5Ctilde%7BH%7D(z)H(-z)+%5Ctilde%7BG%7D(z)G(-z)=0" alt="\tilde{H}(z)H(-z)+\tilde{G}(z)G(-z)=0" /></li> <li>distortionless: <img src="https://latex.codecogs.com/svg.image?%5Ctilde%7BH%7D(z)H(z)+%5Ctilde%7BG%7D(z)G(z)=2" alt="\tilde{H}(z)H(z)+\tilde{G}(z)G(z)=2" /></li> </ul> <p>Here, H are low-pass and G are high-pass filters. A tilde denotes the reconstruction filters.</p> <p>I have implemented that in python and checked it with all discrete wavelets from pywt:</p> <pre><code>import pywt import numpy as np for w in pywt.wavelist(kind=&quot;discrete&quot;): dec_lo, dec_hi, rec_lo, rec_hi = map(np.array, pywt.Wavelet(w).filter_bank) alias_cancellation = np.sum(rec_lo * dec_lo[::-1]) + np.sum(rec_hi * dec_hi[::-1]) # should be 0 distortionless = np.sum(rec_lo * dec_lo) + np.sum(rec_hi * dec_hi) # should be 2 print(w, alias_cancellation, distortionless) # returns (w, 2, 0) but should be (w, 0, 2) for all wavelets </code></pre> <p>In contrast to the definitions from the paper, my python script returns 2 for <code>alias_cancellation</code> and 0 for <code>distortionless</code>, but it should be 0 and 2, respectively. What am I missing here?</p>
<python><wavelet><wavelet-transform><pywavelets><pywt>
2024-05-25 09:52:14
0
550
CLRW97
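One likely issue with the script is that summing pointwise coefficient products evaluates only a single lag of the convolution, not the full polynomial identity: products like H̃(z)H(z) correspond to *convolutions* of the coefficient sequences, and the conditions must hold at every lag. A numpy-only sketch with a hand-coded Haar filter bank (sign conventions may differ from pywt's, so this is an illustration of the check, not a drop-in replacement):

```python
import numpy as np

s = 1 / np.sqrt(2)
dec_lo = np.array([s, s])    # H(z)  = s + s*z^-1   (analysis low-pass, Haar)
dec_hi = np.array([s, -s])   # G(z)  = s - s*z^-1   (analysis high-pass)
rec_lo = np.array([s, s])    # H~(z) = s + s*z^-1   (synthesis low-pass)
rec_hi = np.array([-s, s])   # G~(z) = -s + s*z^-1  (synthesis high-pass)

def flip(f):
    """Coefficients of F(-z): negate every odd power of z^-1."""
    return f * (-1) ** np.arange(len(f))

# Filter-polynomial products are coefficient convolutions (np.polymul).
distortion = np.polymul(rec_lo, dec_lo) + np.polymul(rec_hi, dec_hi)
alias = np.polymul(rec_lo, flip(dec_lo)) + np.polymul(rec_hi, flip(dec_hi))

print(distortion)  # ~[0, 2, 0]: equals 2*z^-1, i.e. "2" up to a pure delay
print(alias)       # ~[0, 0, 0]: alias terms cancel at every lag
```

Note the distortionless product comes out as 2·z⁻¹ rather than a bare 2: for causal filters the condition is satisfied up to an overall delay, which is the usual statement in practice.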
78,532,045
5,724,391
Recursion in Python, using window.after()?
<p>Given the following code snippet:</p> <pre><code>def counting_down(count): window.after(1000, counting_down, count - 1) </code></pre> <p>Since the 2nd line, is not a direct call to <code>counting_down</code> but rather it uses a delay mechanism, is this considered to be a recursion?</p> <p>So, if <code>count</code> is a very large number, does it mean the calling stack depth might be &quot;exploded&quot; eventually?</p>
<python><tkinter><recursion>
2024-05-25 09:42:29
1
366
Yaniv G
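The usual answer is no: `after()` registers the callback with Tk's event loop and returns immediately, so the current frame is popped off the stack before the next call begins, and the stack never grows. A stdlib-only sketch that simulates the event loop with a queue (no tkinter needed), running far past Python's recursion limit without error:

```python
import queue

pending = queue.Queue()  # stands in for Tk's event queue

def counting_down(count):
    if count > 0:
        # Like window.after(1000, counting_down, count - 1): schedule the
        # next call, then RETURN. This frame is gone before the next starts.
        pending.put(count - 1)

pending.put(200_000)           # far beyond the default recursion limit (~1000)
while not pending.empty():     # the "event loop" runs one callback at a time
    counting_down(pending.get())
print("finished without RecursionError")
```

So a very large `count` will not blow the stack; the stack depth stays constant because this is iteration driven by the event loop, not recursion.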
78,531,959
2,628,868
Additional properties are not allowed ('tool' was unexpected)Even Better TOML
<p>When I add a <code>pdm.toml</code> file to a Python 3 project,</p> <pre><code>[pypi] verify_ssl = true # https://github.com/pdm-project/pdm/discussions/2406 [tool.pdm.resolution] respect-source-order = true [[tool.pdm.source]] name = &quot;pypi&quot; url = &quot;http://pypi.org/simple&quot; verify_ssl = true [[tool.pdm.source]] name = &quot;fallback&quot; url = &quot;http://pypi.org/simple&quot; verify_ssl = true </code></pre> <p>the TOML file shows warnings like this:</p> <pre><code>Additional properties are not allowed ('tool' was unexpected)Even Better TOML Additional properties are not allowed ('tool' was unexpected)Even Better TOML Additional properties are not allowed ('tool' was unexpected)Even Better TOML </code></pre> <p>This is what the warning looks like:</p> <p><a href="https://i.sstatic.net/UQul5uED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UQul5uED.png" alt="enter image description here" /></a></p> <p>Am I missing something? What should I do to fix this issue?</p>
<python><toml><pdm>
2024-05-25 09:10:00
1
40,701
Dolphin
78,531,848
6,365,949
Speaker Diarization - how to identify the same speakers across different audio files
<p>I am trying to use whisperx speaker Diarization to transcribe audio from a podcast series, turning it from audio to a timestamped text file, where each speaker is identified.</p> <p>This podcast series always has the same 4 guests. I currently have a python script which works in identifying these speakers, but it gives json output with the speakers labeled as <code>SPEAKER_00</code>, <code>SPEAKER_01</code>, etc... :</p> <pre><code>{ &quot;segments&quot;: [ { &quot;start&quot;: 0.089, &quot;end&quot;: 0.729, &quot;text&quot;: &quot; The minis are off, though.&quot;, &quot;words&quot;: [ { &quot;word&quot;: &quot;The&quot;, &quot;start&quot;: 0.089, &quot;end&quot;: 0.189, &quot;score&quot;: 0.33, &quot;speaker&quot;: &quot;SPEAKER_00&quot; }, ... { &quot;word&quot;: &quot;though.&quot;, &quot;start&quot;: 0.569, &quot;end&quot;: 0.729, &quot;score&quot;: 0.18, &quot;speaker&quot;: &quot;SPEAKER_00&quot; } ], &quot;speaker&quot;: &quot;SPEAKER_00&quot; }, { &quot;start&quot;: 1.31, &quot;end&quot;: 6.974, &quot;text&quot;: &quot;I've, uh... I've just been sitting here, waiting for the podcast to start, chilling.&quot;, &quot;words&quot;: [ { &quot;word&quot;: &quot;I've,&quot;, &quot;start&quot;: 1.31, &quot;end&quot;: 2.09, &quot;score&quot;: 0.759, &quot;speaker&quot;: &quot;SPEAKER_03&quot; }, { &quot;word&quot;: &quot;uh...&quot;, &quot;start&quot;: 2.11, &quot;end&quot;: 2.15, &quot;score&quot;: 0.0, &quot;speaker&quot;: &quot;SPEAKER_03&quot; }, .... 
</code></pre> <p>if I run this code for individual podcast episodes, it always labels speaker1 - speaker4 as different voices, meaning that the identified speaker 1 from podcast 400 might be a different voice/person compared to who is labeled as speaker 1 when I run podcast 423 through the same script.</p> <p>My question is: how can I edit my script to always recognize the same voices/speakers consistently throughout 600+ podcast episode audio files?</p> <p>So far I can only think of 2 ways:</p> <ol> <li>combine all audio into one single gigantic file, run the script on that, and separate out all the results.</li> <li>Write a separate script to analyze the voices of the segments and try to match them across all 600+ podcast audio files.</li> </ol> <p>Is there a better way to get this consistent speaker identification across hundreds of audio files? My current working speaker Diarization code is below:</p> <pre><code># -*- coding: utf-8 -*- &quot;&quot;&quot;WhisperX_Speaker_Diarization.ipynb Automatically generated by Colab. 
Original file is located at https://colab.research.google.com/drive/1IHum-j2AOjVOs_ZoqJ5yBUjf1kI4SLmt pip install --q git+https://github.com/m-bain/whisperx.git Run with: python3 Collab\ Notebooks/whisperx_speaker_diarization.py &gt; output/output.log &quot;&quot;&quot; import whisperx import gc from dotenv import load_dotenv import os import json import time # Load environment variables from .env file load_dotenv() # Get Hugging Face token from environment variable huggingface_token = os.getenv(&quot;HUGGINGFACE_TOKEN&quot;) device = &quot;cuda&quot; batch_size = 4 # reduce if low on GPU mem compute_type = &quot;int8&quot; # change from &quot;float16&quot; to &quot;int8&quot; if low on GPU mem (may reduce accuracy) audio_file = &quot;audio/short_MEGA64_PODCAST_483.mp3&quot; audio = whisperx.load_audio(audio_file) model = whisperx.load_model(&quot;large-v2&quot;, device, compute_type=compute_type) result = model.transcribe(audio, batch_size=batch_size) print(result[&quot;segments&quot;]) # before alignment # delete model if low on GPU resources # import gc; gc.collect(); torch.cuda.empty_cache(); del model # 2. 
Align whisper output model_a, metadata = whisperx.load_align_model(language_code=result[&quot;language&quot;], device=device) result = whisperx.align(result[&quot;segments&quot;], model_a, metadata, audio, device, return_char_alignments=False) result diarize_model = whisperx.DiarizationPipeline(use_auth_token=huggingface_token, device=device) diarize_segments = diarize_model(audio, min_speakers=1, max_speakers=8) diarize_segments diarize_segments.speaker.unique() result = whisperx.assign_word_speakers(diarize_segments, result) print(diarize_segments) # print(result[&quot;segments&quot;]) # segments are now assigned speaker IDs # Save the result to a JSON file with a unique filename timestamp = int(time.time() * 1000) audio_filename = os.path.basename(audio_file).split('.')[0] output_filename = f&quot;speaker_timestamps_{audio_filename}_{timestamp}.json&quot; with open(output_filename, 'w') as f: json.dump(result, f) print(f&quot;Results saved to {output_filename}&quot;) </code></pre>
<python><audio><transcription>
2024-05-25 08:21:04
0
1,582
Martin
78,531,808
2,817,520
SQLAlchemy and PostgreSQL unexpected timestamp with onupdate=func.now()
<p>In the following code after 5 seconds sleep I expect the second part of <code>date_updated</code> to be changed, but only the millisecond part is changed. If I use <code>database_url = 'sqlite:///:memory:'</code> it works as expected. Why?</p> <pre><code>class Base(MappedAsDataclass, DeclarativeBase): pass class Test(Base): __tablename__ = 'test' test_id: Mapped[int] = mapped_column(primary_key=True, init=False) name: Mapped[str] date_created: Mapped[datetime] = mapped_column( TIMESTAMP(timezone=True), insert_default=func.now(), init=False ) date_updated: Mapped[datetime] = mapped_column( TIMESTAMP(timezone=True), nullable=True, insert_default=None, onupdate=func.now(), init=False ) database_url: URL = URL.create( drivername='postgresql+psycopg', username='my_username', password='my_password', host='localhost', port=5432, database='my_db' ) engine = create_engine(database_url, echo=True) Base.metadata.drop_all(engine) Base.metadata.create_all(engine) with Session(engine) as session: test = Test(name='foo') session.add(test) session.commit() print(test) time.sleep(5) test.name = 'bar' session.commit() print(test.date_created.time()) # prints: 08:07:45.413737 print(test.date_updated.time()) # prints: 08:07:45.426483 </code></pre>
<python><postgresql><sqlalchemy>
2024-05-25 08:06:40
1
860
Dante
78,531,794
5,606,937
How to mock the results for two open file calls in one class function
<p>The class:</p> <pre><code>class ABC(object): def __init__(self, files): self.store = [] self.parse_files(files) def parse_files(self, files): for filename in files: with open(filename, newline=&quot;&quot;) as f: self.store.append(f.read()) </code></pre> <p>The test:</p> <pre><code>from unittest.mock import Mock, mock_open, patch class TestABC: result1 = 'string1' result2 = 'string2' result = Mock() result.side_effect = [result1, result2] @patch(&quot;builtins.open&quot;, new_callable=mock_open, read_data=result()) def test_parse_files(self, a): item = ABC(['foo', 'bar']) assert item.store == ['string1', 'string2'] </code></pre> <p>But <code>item.store == ['string1', 'string1']</code></p> <p>I thought the open would return result1 and result2 since result() is iter?</p>
<python><unit-testing>
2024-05-25 08:02:11
1
339
TheTeaRex
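`read_data` is fixed once at patch time, and calling `result()` in the decorator just bakes in the first side-effect value for every `open()`. One known recipe is to give the patched mock itself a `side_effect` list of pre-built file handles, one per expected `open()` call. A self-contained sketch (the class is copied from the question):

```python
from unittest.mock import mock_open, patch

class ABC:
    def __init__(self, files):
        self.store = []
        self.parse_files(files)

    def parse_files(self, files):
        for filename in files:
            with open(filename, newline="") as f:
                self.store.append(f.read())

# Each mock_open(read_data=...).return_value is a ready-made file handle
# (it works as a context manager and its read() returns that data).
# side_effect hands them out one per open() call, in order.
m = mock_open()
m.side_effect = [
    mock_open(read_data="string1").return_value,
    mock_open(read_data="string2").return_value,
]

with patch("builtins.open", m):
    item = ABC(["foo", "bar"])

print(item.store)
```

This yields `['string1', 'string2']` as the test expected.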
78,531,672
8,028,981
Transparent shape with opaque background with matplotlib.patches
<p>My goal is to generate an image with a transparent circle and an opaque white background. Like a transparent circular hole in a white sheet.</p> <p>When I try this, the circle is not transparent:</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.patches as patches circle = patches.Circle([0.5, 0.5], 0.1, facecolor=(0.5, 0.5, 0.5, 0.5), edgecolor='none') plt.gca().add_patch(circle) plt.savefig(&quot;circle.pdf&quot;) </code></pre> <p>With <code>plt.savefig(&quot;circle.pdf&quot;, transparent=True)</code>, the transparency of the circle works fine, but the figure background is transparent, too.</p> <p>One thought that I had: Maybe I can define a shape that is the negative of the circle, i.e., a white rectangle with a circle cut out. How can I do that with matplotlib?</p>
<python><matplotlib><transparency>
2024-05-25 07:12:46
1
1,240
Amos Egel
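The negative-shape idea can be sketched with a compound `Path`: concatenate a rectangle with the circle's vertices traversed in the *opposite* direction, so the nonzero winding rule leaves the circle's interior unfilled (a variant of matplotlib's "donut" technique). A sketch, using a polygonal circle approximation so the winding direction is explicit:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.patches import PathPatch

# Rectangle covering the axes (counter-clockwise winding).
rect = Path.unit_rectangle()

# Circle approximated as a polygon, reversed to clockwise winding, so the
# winding numbers cancel inside it and it stays unfilled.
theta = np.linspace(0, 2 * np.pi, 100)
circle_verts = np.column_stack([0.5 + 0.1 * np.cos(theta),
                                0.5 + 0.1 * np.sin(theta)])[::-1]
circle_codes = np.full(len(circle_verts), Path.LINETO)
circle_codes[0] = Path.MOVETO

cutout = Path(np.concatenate([rect.vertices, circle_verts]),
              np.concatenate([rect.codes, circle_codes]))

ax = plt.gca()
ax.add_patch(PathPatch(cutout, facecolor="white", edgecolor="none"))
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
# transparent=True makes the page transparent; the white patch is the
# "sheet", and the cut-out circle shows through it.
plt.savefig("circle.pdf", transparent=True)
```

The key point is that the white sheet and the hole are a single path, so `transparent=True` only affects the figure background, not the sheet.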
78,531,659
21,540,734
How would I return an assigned variable of a class (monitor = Monitor()) as a tuple for a match case comparison?
<p>I have a Dell 2-in-1 computer that I can use as a tablet, and I'm working on a script to resize the window of an application base on the orientation of the display. What I'm trying to do is get an assigned variable <code>monitor = Monitor()</code> to return itself as a tuple in a match case comparison.</p> <pre class="lang-py prettyprint-override"><code>from win32api import GetMonitorInfo, MonitorFromPoint from typing import Optional class Monitor: def __init__(self): self._monitor: Optional[tuple] = None self.update() @property def Width(self) -&gt; int: return self._monitor[0] @property def Height(self) -&gt; int: return self._monitor[1] def update(self): self._monitor = self() def __call__(self) -&gt; tuple: return GetMonitorInfo(MonitorFromPoint((0, 0)))['Monitor'][2:] def __iter__(self): return iter(self._monitor) def __next__(self): return tuple(self._monitor) def __getitem__(self, index): return self._monitor[index] def __eq__(self, other): return self._monitor == other def __str__(self): return str(self._monitor) def __repr__(self): return str(self._monitor) if __name__ == '__main__': monitor = Monitor() print(f'monitor: {monitor}') match monitor: case (1440, 900): print((1440, 900)) case (900, 1440): print((900, 1440)) </code></pre> <p>For several days, I've read through posts here on stackoverflow.com, and I've been searching through Google search looking for code examples, but I haven't found anything. I do know that the <code>class.__call__</code> will give me what I want, but in this class, the <code>__call__</code> is to get the current resolution of the monitor so I can compare the class against itself <code>monitor == monitor()</code>. With Windows 11 Tablet Mode, the touch screen will rotate with the computer the same way a touch screen does on an Android device.</p>
<python><class><tuples><variable-assignment>
2024-05-25 07:05:26
1
425
phpjunkie
78,531,643
1,008,531
github action python unittest traverse all subdirectories
<p>I'm making a codeleet project where the structure is</p> <pre><code>/codeleet /python /ex1 -solution.py -test.py /ex2 -solution.py -test.py </code></pre> <p>I use basic unittest to run my tests, and I want to practice GitHub Actions to test all the code. But <code>python -m unittest discover</code> doesn't traverse the file structure to look for test files. <code>python -m unittest discover -s 'python' -p 'test.py'</code> also doesn't work. Does anyone have any ideas?</p>
<python><unit-testing><github><github-actions>
2024-05-25 07:00:24
0
521
Thomas E
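unittest discovery only recurses into importable *packages*, so the usual fix is an `__init__.py` in each directory on the path to the test files (and `-t .` so imports resolve from the repo root). A sketch that rebuilds the question's layout in a temp directory and runs discovery programmatically (directory and file names mirror the question; the tiny `add` test is a placeholder):

```python
import os
import tempfile
import unittest

# Recreate /python/ex1/{solution,test}.py with __init__.py files so that
# discovery can import test modules as python.ex1.test.
root = tempfile.mkdtemp()
ex1 = os.path.join(root, "python", "ex1")
os.makedirs(ex1)
for d in (os.path.join(root, "python"), ex1):
    with open(os.path.join(d, "__init__.py"), "w"):
        pass
with open(os.path.join(ex1, "solution.py"), "w") as f:
    f.write("def add(a, b):\n    return a + b\n")
with open(os.path.join(ex1, "test.py"), "w") as f:
    f.write(
        "import unittest\n"
        "from .solution import add\n"
        "class TestAdd(unittest.TestCase):\n"
        "    def test_add(self):\n"
        "        self.assertEqual(add(1, 2), 3)\n"
    )

# CLI equivalent from the repo root:
#   python -m unittest discover -s python -p 'test.py' -t .
suite = unittest.TestLoader().discover(
    start_dir=os.path.join(root, "python"), pattern="test.py", top_level_dir=root
)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("ran", result.testsRun, "test(s)")
```

Without the `__init__.py` files, discovery stops at `ex1`/`ex2` because it refuses to descend into non-package directories.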
78,531,630
7,959,614
How to transform unicoded output of requests to dictionary
<p>I have the following code</p> <pre><code>import requests headers = { 'Host': 'extranet-lv.bwfbadminton.com', 'Content-Length': '0', 'Sec-Ch-Ua': '&quot;Chromium&quot;;v=&quot;123&quot;, &quot;Not:A-Brand&quot;;v=&quot;8&quot;', 'Accept': 'application/json, text/plain, */*', 'Content-Type': 'application/json;charset=UTF-8', 'Sec-Ch-Ua-Mobile': '?0', 'Authorization': 'Bearer 2|NaXRu9JnMpSdb8l86BkJxj6gzKJofnhmExwr8EWkQtHoattDAGimsSYhpM22a61e1crjTjfIGTKfhzxA', 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.6312.122 Safari/537.36', 'Sec-Ch-Ua-Platform': '&quot;macOS&quot;', 'Origin': 'https://match-centre.bwfbadminton.com', 'Sec-Fetch-Site': 'same-site', 'Sec-Fetch-Mode': 'cors', 'Sec-Fetch-Dest': 'empty', 'Referer': 'https://match-centre.bwfbadminton.com/', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8', 'Priority': 'u=1, i' } def get_tid() -&gt; str: URL = f'https://extranet-lv.bwfbadminton.com/api/vue-current-live' r = requests.post(URL, headers=headers, json={'drawCount': '0'}) if r.status_code == 200: encoded_r = r.text.encode('utf-8') </code></pre> <p>The output should look as follows:</p> <pre><code>{&quot;results&quot;:[{&quot;id&quot;:4746,&quot;code&quot;:&quot;DAC5B0C1-A817-4281-B3C6-F2F3DA65FD2B&quot;,&quot;name&quot;:&quot;PERODUA Malaysia Masters 2024&quot;, ...} </code></pre> <p>If I'm not mistaken, the output of <code>r.text</code> consists of unicode characters:</p> <p><a href="https://i.sstatic.net/HLo7CqOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HLo7CqOy.png" alt="enter image description here" /></a></p> <p>How do I transform this to the desired dictionary?</p>
<python><dictionary>
2024-05-25 06:53:18
3
406
HJA24
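The `\uXXXX` sequences in `r.text` are just how the JSON text escapes characters; parsing the JSON yields ordinary strings. On a live response, `r.json()` does this in one step, and `json.loads(r.text)` is equivalent. An offline sketch with a sample payload trimmed from the question (no network call, so the headers/token are not needed):

```python
import json

# Stand-in for r.text; on a real Response, data = r.json() is the one-liner.
sample = '{"results":[{"id":4746,"name":"PERODUA Malaysia Masters 2024"}]}'

data = json.loads(sample)          # str -> dict
print(type(data).__name__)
print(data["results"][0]["name"])
```

Note that `r.text.encode('utf-8')` goes the wrong way: it turns the text back into bytes instead of parsing it.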
78,531,601
7,290,715
Azure function app using Python: Issue in parsing a JSON string in request body
<p>I am trying to parse a very simple flat JSON in Azure Function App using Python. This JSON is coming as a POST request. Below is the complete Azure Function App code:</p> <pre><code>import azure.functions as func import logging import json #from urllib.parse import parse_qs app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS) @app.route(route=&quot;functionAppTest&quot;,methods=[func.HttpMethod.POST]) def functionAppTest(req: func.HttpRequest) -&gt; func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') res_b = str(req.get_body()) logging.info(f&quot;Request Bytes: {res_b}&quot;) print(res_b) if res_b: logging.info(f&quot;request transformed:{json.loads(res_b)}&quot;) print(json.loads(res_b)) return func.HttpResponse( json.loads(res_b).values(), status_code=400) else: return func.HttpResponse(&quot;Not Working!&quot;) </code></pre> <p>Since this is a POST request, while testing I am passing a JSON string E.g. <code>{&quot;name&quot;:&quot;Azure&quot;}</code>.</p> <p>And it is giving the below the error:</p> <pre><code>Exception: JSONDecodeError: Expecting value: line 1 column 1 (char 0) </code></pre> <p>Indicating that json.loads() is not getting any string.</p> <p>But from <code>logging.info()</code> , I can clearly see that <code>Request Bytes: b'{&quot;name&quot;:&quot;Azure&quot;}'</code></p> <p>I feel it might be a trivial issue, but unfortunately I am not able to decipher.</p> <p>What I am missing above?</p>
<python><azure-functions>
2024-05-25 06:37:31
1
1,259
pythondumb
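The likely culprit is `str(req.get_body())`: calling `str()` on bytes produces the literal `b'...'` representation, which is not valid JSON, hence `Expecting value: line 1 column 1 (char 0)`. Decoding the bytes first (or using `req.get_json()`, the idiomatic Azure Functions call) fixes it. An offline sketch of the failure and the fix:

```python
import json

body = b'{"name":"Azure"}'   # what req.get_body() returns

broken = str(body)           # 'b\'{"name":"Azure"}\'' -- note the b'...' wrapper
print(broken)
try:
    json.loads(broken)       # fails: the string starts with b', not {
except json.JSONDecodeError as e:
    print("JSONDecodeError:", e)

# Either decode explicitly, or pass the bytes straight to json.loads,
# which accepts UTF-8 bytes as well as str.
fixed = json.loads(body.decode("utf-8"))
print(fixed["name"])
```

In the function itself, `req.get_json()` replaces the whole `get_body`/`loads` dance; also note `HttpResponse` expects a str/bytes body, not a `dict_values` object.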
78,531,433
1,107,474
Horizontal scrollbar overlays middle sub plot of stacked Plotly sub plots
<p>The below code creates stacked Plotly sub plots with a shared x axis.</p> <p>I would like to add a horizontal scrollbar at the bottom.</p> <pre><code>from plotly.subplots import make_subplots import plotly.graph_objects as go fig = make_subplots(rows=3, cols=1, shared_xaxes=True, vertical_spacing=0.02) fig.add_trace(go.Scatter(x=[0, 1, 2, 3], y=[10, 11, 12, 13]), row=3, col=1) fig.add_trace(go.Scatter(x=[0, 1, 2, 3], y=[100, 110, 120, 130]), row=2, col=1) fig.add_trace(go.Scatter(x=[0, 1, 2, 3], y=[1000, 1100, 1200, 1300]), row=1, col=1) fig.update_layout(height=500,xaxis=dict(rangeslider=dict(visible=True), type=&quot;linear&quot;)) fig.show() </code></pre> <p>Unfortunately it's corrupting the first sub plot and overlaying the scrollbar over the middle plot.</p> <p>How do I fix this?</p> <p><a href="https://i.sstatic.net/ED7VoiZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ED7VoiZP.png" alt="enter image description here" /></a></p>
<python><plotly-dash><plotly>
2024-05-25 04:59:30
1
17,534
intrigued_66
78,531,367
16,869,946
Transforming columns in pandas dataframe into quartiles
<p>I have a pandas dataframe that looks like</p> <pre><code>columnA columnB 3 -1 73 2 2 13 -2 24 4 1 8 2 23 3 13 5 2 2 0 3 1 4 2 1 7 2 2 1 2 2 5 3 </code></pre> <p>For columnA that range is 73-(-2)=75 and hence the four quartiles are given by 1st quartile = [-2 - 16.75], 2nd quartile = [16.75, 35.5], 3rd quartile = [35.5, 54.25], 4th quartile = [54.25, 73] and I want to transform the columns by labelling which quartile the element is in. So for example, columnA would become</p> <pre><code>columnA columnB 1 -1 4 2 1 13 1 24 1 1 1 2 2 3 1 5 1 2 1 3 1 4 1 1 1 2 1 1 1 2 1 3 </code></pre> <p>I think it has something to do with <code>.transform</code> but I am not how how to partition it into quartiles.</p>
<python><pandas><dataframe>
2024-05-25 04:06:28
0
592
Ishigami
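The bins described are equal-width quarters of each column's range, which is `pd.cut` with `bins=4` (`pd.qcut` would instead give equal-*count* quantile bins, a different thing despite the name "quartile"). A sketch on the question's data, labelling bins 1-4 and applying the same transform to every column:

```python
import pandas as pd

df = pd.DataFrame({
    "columnA": [3, 73, 2, -2, 4, 8, 23, 13, 2, 0, 1, 2, 7, 2, 2, 5],
    "columnB": [-1, 2, 13, 24, 1, 2, 3, 5, 2, 3, 4, 1, 2, 1, 2, 3],
})

# bins=4 splits each column's [min, max] range into 4 equal-width intervals;
# labels=[1, 2, 3, 4] replaces the interval objects with bin numbers.
df = df.apply(lambda col: pd.cut(col, bins=4, labels=[1, 2, 3, 4]).astype(int))
print(df["columnA"].tolist())
```

For `columnA` this reproduces the desired output: everything up to 16.75 gets label 1, 23 falls in the second bin, and 73 in the fourth.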
78,531,095
6,595,551
Pydantic-Settings: Environment Variables Prioritize Over Init Args with Aliases
<p>I am using Pydantic v2 with BaseSettings (<code>pydantic-settings</code>) to load configurations from environment variables. However, I encountered an issue where environment variables seem to override the initialization arguments, even when I expect the init arguments to take priority.</p> <p>Here’s a simplified version of my code:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import Field, StrictStr from pydantic_settings import BaseSettings, SettingsConfigDict class MySettings(BaseSettings): api_key: StrictStr = Field(..., alias=&quot;TEST_API_KEY&quot;) aws_region: StrictStr = Field(..., alias=&quot;AWS_REGION&quot;) model_config = SettingsConfigDict(extra=&quot;ignore&quot;, populate_by_name=True) # .env file # TEST_API_KEY=&quot;TEST_API_KEY_VALUE&quot; # AWS_REGION=&quot;us-east-1&quot; print(MySettings().model_dump()) # Expected: {'api_key': 'TEST_API_KEY_VALUE', 'aws_region': 'us-east-1'} print(MySettings(api_key=&quot;ANOTHER_API_KEY_TO_OVERRIDE&quot;).model_dump()) # Expected: {'api_key': 'ANOTHER_API_KEY_TO_OVERRIDE', 'aws_region': 'us-east-1'} # Actual: {'api_key': 'TEST_API_KEY_VALUE', 'aws_region': 'us-east-1'} print(MySettings(api_key=111).model_dump()) # Expected: ValidationError due to type mismatch # Actual: {'api_key': 'TEST_API_KEY_VALUE', 'aws_region': 'us-east-1'} </code></pre> <p>I've also tried adjusting <code>SettingsConfigDict</code> and using a custom <code>PydanticBaseSettingsSource</code>, but the behavior persists. 
Debugging the Pydantic code base itself shows both the constructor and environment values present, but the latter is prioritized due to alias matching (maybe).</p> <p>My question basically is How I can ensure that init arguments are prioritized over environment variables when using aliases in <code>pydantic-settings</code>?</p> <pre><code>Python 3.11.8 (arm64) pydantic==2.7.1 pydantic-settings==2.2.1 pydantic_core==2.18.2 </code></pre> <p>UPDATE:</p> <p>I forgot to mention, but by updating the <code>extra=&quot;allow&quot;</code>:</p> <pre class="lang-py prettyprint-override"><code>print(MySettings().model_dump()) print(MySettings(api_key=&quot;ANOTHER_API_KEY_TO_OVERRIDE&quot;).model_dump()) print(MySettings(api_key=111).model_dump()) </code></pre> <pre class="lang-bash prettyprint-override"><code>{'api_key': 'TEST_API_KEY_VALUE', 'aws_region': 'us-east-1'} {'api_key': 'ANOTHER_API_KEY_TO_OVERRIDE', 'aws_region': 'us-east-1'} {'api_key': 111, 'aws_region': 'us-east-1'} </code></pre> <p>As you can see, there will be no validation if I pass integer to <code>api_key</code>.</p>
<python><pydantic><pydantic-settings>
2024-05-25 00:20:38
0
1,647
Iman Shafiei
78,531,083
19,171,308
In mingw/msys2, use `pip install abc` or `pacman -S mingw-w64-i686-python-abc`?
<p>My question is related to this: <a href="https://stackoverflow.com/questions/72293878/failing-to-install-python-cryptography-library-using-pip-on-msys2-mingw">Failing to install python cryptography library using pip on msys2/mingw</a></p> <p>In that question, <code>pip install cryptography</code> failed but <code>pacman -S mingw-w64-x86_64-python3-cryptography</code> installed successfully.</p> <p>There is another question: <a href="https://stackoverflow.com/questions/59656648/msys-pip-install-cffi-fails-due-to-undefined-references">msys: pip install cffi fails due to undefined references</a></p> <p>In the comment, it's suggested to use <code>pacman -S mingw-w64-i686-python-cffi</code> instead of <code>pip install cffi</code>.</p> <p>So my question is a more general one: is it recommended (or even required) to install a pip package, let me call it <code>abc</code>, by <code>pacman -S mingw-w64-i686-python-abc</code> instead of <code>pip install abc</code>?</p>
<python><pip><mingw-w64><msys2>
2024-05-25 00:11:23
0
597
Felix F Xu
78,531,034
5,284,054
Python tkinter multiple windows from separate files
<p>I continue to struggle with multiple windows in tkinter. It's now developing into an application where each window is in its own separate file.</p> <p>I need to open the second window from the first because that's how the application works.</p> <p>I also need to open the second window independently of the first, i.e. by itself, so I can do unit testing as it develops.</p> <p>This question destroys the previous window as the second window opens. Not what I want: <a href="https://stackoverflow.com/questions/61022533/python-tkinter-multiple-windows">Python tkinter multiple windows</a></p> <p>My own previous question here closes the first window as the second window opens. Not what I want. Although the comments point out a typo error (omitting <code>()</code>), the accepted answer uses <code>withdraw</code> and <code>deiconify</code>, which will eventually be helpful but not the solution to this problem. <a href="https://stackoverflow.com/questions/78492310/python-tkinter-cloase-first-window-while-opening-second-window">Python tkinter cloase first window while opening second window</a></p> <p>This is closest because it opens a second window from a first window, but it doesn't address both (i) opening the second window from a separate file and (ii) also being able to open the second window independently: <a href="https://stackoverflow.com/questions/78492179/python-tkinter-class-multiple-windows">Python tkinter class multiple windows</a></p> <p>Here's the SECOND window in a file called <strong>Location.py</strong> and it opens fine independently:</p> <pre><code>import tkinter as tk from tkinter import ttk class Location(tk.Frame): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) root.title(&quot;Location&quot;) root.geometry('400x275') # Frames self.mainframe = ttk.Frame(self.master) self.mainframe.grid(column = 0, row=0) ttk.Label(self.mainframe, text = &quot;Second Window&quot;).grid(column=1, row=1, sticky=(tk.W, tk.E)) if __name__ == 
&quot;__main__&quot;: root = tk.Tk() Location(root) root.mainloop() </code></pre> <p>Here's the FIRST window, which also opens fine, the problem is when I press either button to call the file to open the second window:</p> <pre><code>import tkinter as tk from tkinter import ttk import Location class Building_Info(tk.Frame): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # Frames self.infoframe = ttk.Frame(self.master, height=400, width=200, padding=&quot;50 100 50 100&quot;, borderwidth=10) self.infoframe['relief'] = 'raised' self.infoframe.grid(column = 0, row=0, sticky=(tk.E, tk.N)) self.buttonframe = ttk.Frame(self.master, height=400, width=200, padding=&quot;50 100 50 100&quot;, borderwidth=10) self.buttonframe['relief'] = 'raised' self.buttonframe.grid(column = 1, row=0, sticky=(tk.E, tk.N)) # BUTTONS confirm_button = ttk.Button(self.infoframe, text = 'Stakeholders', command = self.open_location) confirm_button.grid(column=0, row=2) confirm_button = ttk.Button(self.buttonframe, text = 'Location', command = self.open_location) confirm_button.grid(column=0, row=2) for child in self.infoframe.winfo_children(): child.grid_configure(padx=5, pady=5) for child in self.buttonframe.winfo_children(): child.grid_configure(padx=5, pady=5) # METHODS def open_location(self): Location.Location() if __name__ == &quot;__main__&quot;: root = tk.Tk() root.title(&quot;Building Information&quot;) root.geometry('600x400') Building_Info(root) root.mainloop() </code></pre> <p>When I try to pass <code>Location.Location()</code> or <code>Location.Location(root)</code> or <code>Location.Location(self)</code> or <code>Location.Location(self.master)</code>, I get this error:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\tkinter\__init__.py&quot;, line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File &quot;c:\Users\User\Documents\Python\Tutorials\BuildingInfo_Stay_Open.py&quot;, 
line 32, in open_location Location_Stay_Open.Location() File &quot;c:\Users\User\Documents\Python\Tutorials\Location_Stay_Open.py&quot;, line 9, in __init__ root.title(&quot;Location&quot;) ^^^^ NameError: name 'root' is not defined </code></pre> <p>But when I try to pass <code>Location.Location(self.root)</code>, I get asked if I meant <code>root</code>.</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\tkinter\__init__.py&quot;, line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File &quot;c:\Users\User\Documents\Python\Tutorials\BuildingInfo_Stay_Open.py&quot;, line 32, in open_location Location_Stay_Open.Location(self.root) ^^^^^^^^^ AttributeError: 'Building_Info' object has no attribute 'root'. Did you mean: '_root'? </code></pre> <p>Now if I go back to the second window <code>class Location(tk.Frame)</code> and try <code>class Location(tk.Tk)</code>, then the second window doesn't open independently, and gives this error:</p> <pre><code>Traceback (most recent call last): File &quot;c:\Users\User\Documents\Python\Tutorials\Location_Stay_Open.py&quot;, line 29, in &lt;module&gt; Location(root) File &quot;c:\Users\User\Documents\Python\Tutorials\Location_Stay_Open.py&quot;, line 7, in __init__ super().__init__(*args, **kwargs) File &quot;C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\tkinter\__init__.py&quot;, line 2326, in __init__ self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: create() argument 1 must be str or None, not Tk </code></pre> <p>Trying <code>class Location(tk.Toplevel)</code> opens two windows: one titled <strong>Location</strong> and one titled <strong>tk</strong>. Closing one window closes both windows. 
I only want one of those windows.</p> <p>What do I need to do to get the second window (Location) to (i) open independently and to (ii) open from the first window?</p>
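One pattern that satisfies both requirements is to make the second window's class accept whatever master it is given, and let the caller decide whether that master is a fresh `tk.Tk()` (standalone use) or a new `tk.Toplevel` (opened from the first window). The sketch below is an illustration of that idea, not the asker's exact files; the `open_location` helper name is invented for the example:

```python
import tkinter as tk
from tkinter import ttk


class Location(ttk.Frame):
    """Second-window content. Builds itself into whatever master it is given."""

    def __init__(self, master, **kwargs):
        super().__init__(master, **kwargs)
        # winfo_toplevel() returns the Tk root OR the Toplevel that owns this
        # frame, so the class never needs a global `root` (the NameError above).
        top = self.winfo_toplevel()
        top.title("Location")
        top.geometry("400x275")
        ttk.Label(self, text="Second Window").grid(column=1, row=1, sticky=(tk.W, tk.E))
        self.grid(column=0, row=0)


def open_location(parent):
    """Callback for the first window: open Location in a NEW Toplevel,
    leaving the parent window alone."""
    return Location(tk.Toplevel(parent))


# Standalone (unit-test) use:
#     root = tk.Tk(); Location(root); root.mainloop()
# From Building_Info's button:
#     command=lambda: open_location(self.master)
```

Because `Location` is an ordinary `ttk.Frame`, importing it from another file and passing either kind of master works the same way; only one `Tk()` root ever exists.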
<python><tkinter><window>
2024-05-24 23:38:04
1
900
David Collins
78,530,986
214,184
Why does HF Transformers run out of (GPU) memory frequently, when the same model can run fine in Ollama?
<p>I can imagine Python taking more resources in general, but even with 10GB of GPU RAM, I'm unable to run inference using HF Transformers for small models like Phi3 4K. Looking for qualitative and quantitative insights. Can someone touch on how much memory the model takes, vs. PyTorch, vs. the HF Transformers code?</p> <p>The program I'm using is a quite simple inference program shown below.<br> (Note I tried changing torch_dtype to 16-bit too as shown in the code)</p> <pre><code>import sys import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = &quot;microsoft/Phi-3-mini-4k-instruct&quot; model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float16, device_map=&quot;cuda&quot;) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) job_descr = sys.stdin.read() input_ids = tokenizer(job_descr, return_tensors=&quot;pt&quot;).input_ids.to(&quot;cuda&quot;) outputs = model.generate(input_ids, do_sample=True, max_new_tokens=300) print(tokenizer.decode(outputs[0], skip_special_tokens=True), &quot;\n&quot;) </code></pre>
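A rough back-of-the-envelope calculation explains most of the gap: Ollama serves a 4-bit quantized GGUF of the model, while Transformers loads the full-precision checkpoint (fp32 unless `torch_dtype` is set). Assuming roughly 3.8B parameters for Phi-3-mini (the numbers below are approximations, not measurements):

```python
# Back-of-the-envelope GPU memory estimate for model WEIGHTS only.
def model_weight_gb(n_params: float, bytes_per_param: float) -> float:
    """Gigabytes needed just to hold the parameters at a given precision."""
    return n_params * bytes_per_param / 1e9

PHI3_MINI_PARAMS = 3.8e9  # ~3.8B parameters (assumption)

fp32 = model_weight_gb(PHI3_MINI_PARAMS, 4)    # default torch_dtype  -> ~15.2 GB
fp16 = model_weight_gb(PHI3_MINI_PARAMS, 2)    # torch.float16       -> ~7.6 GB
q4   = model_weight_gb(PHI3_MINI_PARAMS, 0.5)  # 4-bit quant (what Ollama's GGUF uses) -> ~1.9 GB

print(f"fp32: {fp32:.1f} GB  fp16: {fp16:.1f} GB  4-bit: {q4:.1f} GB")
```

On top of the weights you still pay for the CUDA context (hundreds of MB), activations, and the KV cache that `model.generate()` grows with every token, so fp16 plus overhead can brush up against a 10 GB card, while a 4-bit runtime fits comfortably.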
<python><pytorch><artificial-intelligence><out-of-memory><huggingface-transformers>
2024-05-24 23:12:48
0
616
slowpoison
78,530,977
1,107,474
How to set Pandas column as the datetime x-axis in Plotly
<p>I have three lists containing data to display via Plotly. The first column represents Epoch timestamps (with milliseconds) and the second and third columns are the series to plot.</p> <p>I am trying to create the pandas dataframe correctly, to pass to Plotly.</p> <p>So far I have this:</p> <pre><code>import pandas as pd epoch=[1716591253000, 1716591254000, 1716591255000, 1716591256000] series_1=[5,6,7,8] series_2=[9,10,11,12] df = pd.DataFrame(data=zip(epoch,series_1,series_2),columns=['Datetime','Series 1','Series 2']) print(df) </code></pre> <p>But I am unsure how I tell pandas that the first column is the date time column and needs to be the x-axis when passed to Plotly.</p> <p>I found these time series examples:</p> <p><a href="https://plotly.com/python/time-series/" rel="nofollow noreferrer">https://plotly.com/python/time-series/</a></p> <p>but unfortunately they're loading pre-created data</p>
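One way to do this (a sketch, assuming the epoch values are milliseconds): convert the column with `pd.to_datetime(..., unit="ms")`; Plotly does not require the x-axis to be the index, you just name the column:

```python
import pandas as pd

epoch = [1716591253000, 1716591254000, 1716591255000, 1716591256000]
series_1 = [5, 6, 7, 8]
series_2 = [9, 10, 11, 12]

df = pd.DataFrame({"Datetime": epoch, "Series 1": series_1, "Series 2": series_2})

# Epoch values are in milliseconds, hence unit="ms"
df["Datetime"] = pd.to_datetime(df["Datetime"], unit="ms")
print(df.dtypes)

# Plotly takes the x column by name:
#   import plotly.express as px
#   fig = px.line(df, x="Datetime", y=["Series 1", "Series 2"])
#   fig.show()
```

Setting the column as the index (`df.set_index("Datetime")`) is only needed if you prefer calling `px.line(df)` with defaults.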
<python><pandas><plotly>
2024-05-24 23:07:26
1
17,534
intrigued_66
78,530,971
6,141,885
How to access package data after specifying location in pyproject.toml
<p>This question is a follow-up to <a href="https://stackoverflow.com/questions/69647590/specifying-package-data-in-pyproject-toml/">this question</a> on including package data using <code>setuptools</code> in <code>pyproject.toml</code>.</p> <p>The file structure for my package is as follows:</p> <pre><code>project_root_directory ├── pyproject.toml └── mypkg ├── models │ ├── __init__.py │ ├── model1.pkl │ └── model2.pkl ├── __init__.py ├── module1.py └── module2.py </code></pre> <p>In <code>pyproject.toml</code>, I include the package data using the below specification, following the <a href="https://setuptools.pypa.io/en/stable/userguide/datafiles.html#subdirectory-for-data-files" rel="nofollow noreferrer">setuptools protocol</a>:</p> <pre><code>[build-system] requires = [&quot;setuptools&gt;=61.0&quot;] build-backend = &quot;setuptools.build_meta&quot; [tool.setuptools.packages.find] include = [&quot;mypkg&quot;] [tool.setuptools.package-data] &quot;mypkg.models&quot; = [&quot;*.pkl&quot;] </code></pre> <p>Following the instructions in the <a href="https://setuptools.pypa.io/en/stable/userguide/datafiles.html#accessing-data-files-at-runtime" rel="nofollow noreferrer">setuptools protocol</a> I include the following code in <code>module1.py</code> to access the data at runtime.</p> <pre class="lang-py prettyprint-override"><code>import importlib.resources import joblib def load_model(package, filename): ''' Reads in the model pickle files from the package ''' with importlib.resources.path(package, filename) as file_path: return joblib.load(file_path) saved_model=load_model('mypkg.models', 'model1.pkl') </code></pre> <p>But I get this error, when I try to package the code and run it:</p> <pre class="lang-py prettyprint-override"><code>ModuleNotFoundError: No module named 'mypkg.models' </code></pre> <p>How can I fix this so that I can load the models in <code>module1.py</code>?</p> <p>Thank you in advance for any help with this!</p>
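One likely cause (an assumption, since the built wheel isn't shown): `include = ["mypkg"]` matches only the top-level package, so setuptools never packages `mypkg.models`; `include = ["mypkg*"]` picks up subpackages. At runtime, `importlib.resources.files()` is the non-deprecated replacement for `importlib.resources.path()`. The sketch below builds a throwaway package on disk to show the access pattern end to end:

```python
import importlib.resources
import pathlib
import sys
import tempfile

# Build a throwaway package: mypkg/__init__.py, mypkg/models/__init__.py, model1.pkl
tmp = pathlib.Path(tempfile.mkdtemp())
models = tmp / "mypkg" / "models"
models.mkdir(parents=True)
(tmp / "mypkg" / "__init__.py").write_text("")
(models / "__init__.py").write_text("")   # without this, mypkg.models is not importable
(models / "model1.pkl").write_bytes(b"fake-model-bytes")

sys.path.insert(0, str(tmp))

# files() returns a Traversable rooted at the package directory
data = importlib.resources.files("mypkg.models").joinpath("model1.pkl").read_bytes()
print(data)
```

In the real package, `joblib.load(importlib.resources.files("mypkg.models") / "model1.pkl")` would replace the `read_bytes()` call; the `ModuleNotFoundError` disappears once `mypkg.models` is both importable (`__init__.py` present) and actually included in the built distribution.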
<python><pip><setuptools><pyproject.toml>
2024-05-24 23:04:31
1
1,327
morepenguins
78,530,855
3,487,441
Why are the child processes not running?
<p>Below is a script that reflects the critical part of a larger application. The outline is:</p> <ul> <li>I'm consuming data from an API which here is simulated by iterating over an array of ints.</li> <li>The code runs the data coming off the API through a long(ish) calculation which here is simulated with <code>process_the_data</code>.</li> <li>Once done, the output is written to a database here simulated as <code>write_to_sql</code>.</li> </ul> <p>I need to speed up the ingestion so I'm trying to parallelize the <code>process_the_data</code> calls which is the biggest bottleneck. The intent is for the 3 child processes to write their results into a single Queue to be written out to a database.</p> <p>The two primary errors I'm getting are: <code>EOFError</code>, <code>BrokenPipe</code>. But after writing this simplified version I think I've misunderstood something about the <code>multiprocessing</code> module and/or <code>Queue</code> class because the <code>output_q</code> is not ever written to.</p> <pre><code>import multiprocessing import random import time def process_the_data(in_q, out_q): while True: data = in_q.get(block=True) time.sleep(random.choice([1,2,3])) out_q.put_nowait(data + 1) def write_to_sql(n): print(n) multiprocessing.set_start_method('fork') mgr = multiprocessing.Manager() input_q = mgr.Queue(maxsize=5) output_q = mgr.Queue() input_data = range(100) p1 = multiprocessing.Process(target=process_the_data, args=(input_q, output_q)) p2 = multiprocessing.Process(target=process_the_data, args=(input_q, output_q)) p3 = multiprocessing.Process(target=process_the_data, args=(input_q, output_q)) p1.start() p2.start() p3.start() for n in input_data: # realized this loop is wrong but doesn’t change the behavior while not input_q.full(): input_q.put(n) while not output_q.empty(): write_to_sql(output_q.get(block=False)) p1.join() p2.join() p3.join() print('finished') </code></pre>
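Two things keep the script from finishing: the `while not input_q.full()` loop re-puts the same item and busy-polls, and the workers loop forever, so `join()` can never return (the `BrokenPipe`/`EOFError` then come from the manager shutting down under them). A common fix is the sentinel ("poison pill") pattern, sketched below with the random sleeps removed so it runs quickly; `"fork"` is assumed to be available (POSIX host):

```python
import multiprocessing

SENTINEL = None  # poison pill telling a worker to exit


def process_the_data(in_q, out_q):
    while True:
        data = in_q.get()
        if data is SENTINEL:
            break
        out_q.put(data + 1)  # the long calculation (sleeps removed for brevity)


def main():
    ctx = multiprocessing.get_context("fork")  # assumption: POSIX host
    input_q, output_q = ctx.Queue(), ctx.Queue()

    workers = [ctx.Process(target=process_the_data, args=(input_q, output_q))
               for _ in range(3)]
    for w in workers:
        w.start()

    input_data = range(100)
    for n in input_data:          # feed every item exactly once
        input_q.put(n)
    for _ in workers:             # then one sentinel per worker
        input_q.put(SENTINEL)

    # Drain exactly as many results as items fed; get() blocks as needed,
    # so there is no need to poll empty()/full().
    results = [output_q.get() for _ in range(len(input_data))]

    for w in workers:             # every worker has hit its sentinel by now
        w.join()
    return results


results = main()
print(len(results))
```

In the real application, `write_to_sql` would replace the list comprehension that drains `output_q`.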
<python><multiprocessing><message-queue>
2024-05-24 22:08:27
1
1,361
gph
78,530,691
1,429,450
sys.stdin.readline() in an inittab
<p>I am using <code>sys.stdin.readline()</code> to hang the execution of a Python script, to keep it persistent. When I put this script in an <code>inittab</code> and it comes time to log into Linux after rebooting, hitting <kbd>enter</kbd> actually makes <code>readline()</code> read the input! Why is this? What is the proper way to keep a Python script daemonized?</p>
<python><persistence><background-process><daemon><inittab>
2024-05-24 21:05:37
0
5,826
Geremia
78,530,625
825,227
Python ffill changing data types
<p>Have a Python dataframe that I'm looking to forward fill across rows.</p> <p>Data looks like this:</p> <pre><code>index trade_date clean_pub_date day_lag ticker transaction_type asset_type clean_amt d0 d0_1m d0_3m d0_6m d0_12m d0_t d0_1m_t d0_3m_t d0_6m_t d0_12m_t 136 2023-01-13 2023-02-09 27 UL P ST 32500 47.2510986328125 46.9969444274902 51.7693290710449 50.7017021179199 49.7376365661621 48.7582054138184 49.1847496032715 51.7501983642578 49.8611183166504 47.3593482971191 142 2023-01-13 2023-02-06 24 CRM P ST 7500 168.829467773438 182.711318969727 197.641815185547 215.778137207031 285.457092285156 149.31494140625 170.856811523438 193.766891479492 226.983489990234 268.838836669922 169 2023-06-09 2023-06-09 0 PTON P ST 7500 8.3100004196167 8.39000034332275 5.82000017166138 6.07999992370606 NaN 8.3100004196167 8.39000034332275 5.82000017166138 6.07999992370606 NaN 170 2023-06-09 2023-07-06 27 TMUS P ST 7500 138.005340576172 135.516159057617 136.914474487305 161.267501831055 NaN 130.270050048828 137.192138671875 136.14094543457 154.882934570313 NaN 171 2023-06-09 2023-06-12 3 EMR P ST 7500 82.3291549682617 90.2124633789063 98.5805587768555 88.8684310913086 NaN 82.4564666748047 87.5781631469727 97.8716278076172 86.8675918579102 NaN </code></pre> <p>And looking to replace the NaNs with preceding within-row entry via this:</p> <pre><code>t = t.ffill(axis=1) </code></pre> <p>This works as desired except data types change with the application of <code>ffill</code>:</p> <p><strong>Before</strong></p> <pre><code>t.dtypes Out[278]: name object trade_date datetime64[ns] clean_pub_date datetime64[ns] day_lag int64 ticker object transaction_type object asset_type object clean_amt int64 d0 float64 d0_1m float64 d0_3m float64 d0_6m float64 d0_12m float64 d0_t float64 d0_1m_t float64 d0_3m_t float64 d0_6m_t float64 </code></pre> <p><strong>And after:</strong></p> <pre><code>t = t.ffill(axis=1) t.dtypes Out[280]: name object trade_date datetime64[ns] clean_pub_date datetime64[ns] 
day_lag object ticker object transaction_type object asset_type object clean_amt object d0 object d0_1m object d0_3m object d0_6m object d0_12m object d0_t object d0_1m_t object d0_3m_t object d0_6m_t object </code></pre> <p>I don't see why this would happen as all values replacing NaNs are floats. Also don't see any option in the documentation to address this.</p> <p>Any ideas how/why this is happening?</p>
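The likely mechanism: `ffill(axis=1)` operates row-wise across ALL columns, so the object columns (`ticker`, `transaction_type`, ...) get mixed into the operation and every filled column is upcast to a common dtype, object. Restricting the row-wise fill to the float columns sidesteps the upcast — a sketch on a cut-down frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ticker": ["UL", "CRM"],        # object column that triggers the upcast
    "d0":     [47.25, 168.83],
    "d0_1m":  [47.00, np.nan],
    "d0_3m":  [np.nan, 197.64],
})

# Row-wise ffill over just the numeric block keeps float64:
num_cols = df.select_dtypes("number").columns
df[num_cols] = df[num_cols].ffill(axis=1)

print(df.dtypes)  # d0 / d0_1m / d0_3m stay float64
print(df)         # row 1: d0_1m filled with 168.83 from d0
```

Explicitly listing the `d0*` columns works just as well if `select_dtypes` would pick up numeric columns (like `day_lag`) that should not participate in the fill.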
<python><dataframe><ffill>
2024-05-24 20:49:52
1
1,702
Chris
78,530,598
10,391,013
Create a kmer database from a huge csv file
<p>I have a huge csv file (7.5GB). It is structured with three columns (no header): the first is a string with 7 characters (SSSSDKI), the second is the count (100) and the third represents the length of the sequence where the kmers (kmer is a term from bioinformatics and represents a string of length k) were counted.</p> <pre><code>MSLLGTP,4,356265492 SLLGTPL,9,356265492 LLGTPLS,8,356265492 LGTPLSS,7,356265492 GTPLSSS,10,356265492 TPLSSSS,13,356265492 PLSSSSD,4,356265492 LSSSSDK,7,356265492 </code></pre> <p>I have SQLite and MySQL installed, so I need to transform this csv file into a kmers database. I have some experience in python code and R, but zero with SQL.</p> <p>The csv file can have repeated kmers since they were counted in different sequences and merged in this huge file. So I need to aggregate the counts for kmers that appear more than once, and also the sequence lengths where they were found.</p> <p>For example:</p> <pre><code>SSDKIML,4,356265492 SSDKIML,3,396290492 </code></pre> <p>The final values for this kmer would be:</p> <pre><code>SSDKIML,7,752555984 </code></pre> <p>After this process I would have a final csv file with all kmers and the aggregated counts and sequence lengths.</p> <p><strong>But I need to ensure that all the lines or all the data from the original file are in my database</strong>.</p> <p>If anyone has the time and patience, any help would be much appreciated.</p> <p>Thank you for your time and kindness.</p> <p><strong>PS - I tried with pandas and dask, but every time it kills my kernel</strong></p> <p>Use this with dask and a similar code with pandas:</p> <pre><code>import os import glob import dask.dataframe as dd def process_file(input_pattern, output_filename): # Read all chunks matching the pattern dfs = [dd.read_csv(f, header=None, names=['kmer', 'count', 'len']) for f in glob.glob(input_pattern)] # Concatenate all chunks into a single Dask DataFrame df = dd.concat(dfs) # Group by the 'String' column and sum the 'Count' column
grouped_df = df.groupby('kmer')['count'].sum().reset_index() # Write the processed DataFrame to a new CSV file grouped_df.to_csv(output_filename, index=False, header=False, single_file=True) </code></pre> <p>And this:</p> <pre><code>import os import csv from collections import defaultdict def merge_kmer_files(folder_path, output_file): kmer_counts = defaultdict(int) # Dictionary to store k-mer counts # Iterate over all files in the folder for file_name in os.listdir(folder_path): file_path = os.path.join(folder_path, file_name) if os.path.isfile(file_path) and file_name.endswith('.csv'): with open(file_path, mode='r') as csvfile: reader = csv.reader(csvfile) next(reader) # Skip header if there is one for row in reader: kmer, count = row[0], int(row[1]) kmer_counts[kmer] += count # Aggregate counts # Write the aggregated results to the output file with open(output_file, mode='w', newline='') as csvfile: writer = csv.writer(csvfile) writer.writerow(['kmer', 'count']) # Write header for kmer, count in kmer_counts.items(): writer.writerow([kmer, count]) def process_directory(root_dir): for dirpath, dirnames, filenames in os.walk(root_dir): csv_files = [f for f in filenames if f.endswith('.csv')] if len(csv_files) &gt; 1: output_file = os.path.join(dirpath, 'merged_kmers.csv') print(f&quot;Merging files in {dirpath} into {output_file}&quot;) merge_kmer_files(dirpath, output_file) </code></pre> <p>Paulo</p>
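Since SQLite is already installed, the whole job can be done with the stdlib `sqlite3` module: stream the csv into a table, let `GROUP BY` do the aggregation, and check totals to prove no rows were lost. A minimal sketch using the example rows from the question (for the real 7.5 GB file, point `sqlite3.connect` at a file path and feed the reader from `open(path, newline="")`):

```python
import csv
import io
import sqlite3

raw = """MSLLGTP,4,356265492
SSDKIML,4,356265492
SSDKIML,3,396290492
"""

con = sqlite3.connect(":memory:")  # use a file path for the real dataset
con.execute("CREATE TABLE kmers (kmer TEXT, count INTEGER, seq_len INTEGER)")

# executemany streams rows; it never holds the whole file in memory
reader = csv.reader(io.StringIO(raw))  # real file: csv.reader(open(path, newline=""))
con.executemany("INSERT INTO kmers VALUES (?, ?, ?)",
                ((k, int(c), int(l)) for k, c, l in reader))
con.commit()

# Aggregate duplicate kmers: sum counts AND sequence lengths
rows = con.execute(
    "SELECT kmer, SUM(count), SUM(seq_len) FROM kmers GROUP BY kmer ORDER BY kmer"
).fetchall()
print(rows)

# Sanity check that no data was dropped: totals must match pre-aggregation
total_in = con.execute("SELECT SUM(count) FROM kmers").fetchone()[0]
assert total_in == sum(r[1] for r in rows)
```

Writing the aggregated rows back out with `csv.writer` then produces the final file; SQLite handles the grouping on disk, which avoids the memory blow-ups seen with pandas/dask here.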
<python><sql><r><csv><bigdata>
2024-05-24 20:40:43
2
504
Paulo Sergio Schlogl
78,530,546
3,931,488
133. Clone Graph: Node with value 2 doesn't exist in the original graph
<p>I get a similar error with a different root cause compared to: <a href="https://stackoverflow.com/questions/68783747/clone-graph-leetcode-133">Clone Graph LeetCode 133</a></p> <p>Consider the below implementation. If I use a Node-type key for <code>processed_node_map</code>, the algorithm passes. If I use <code>node.val</code> as the key, I receive the error: <code>Node with value 2 doesn't exist in the original graph.</code></p> <p>Question: Can anyone explain why this error is occurring? TYIA</p> <p>Constraints and notes:</p> <ol> <li>Node.val is constrained to be unique across nodes in the input.</li> <li>This error is seen in a test case with no input value of 2.</li> </ol> <pre class="lang-py prettyprint-override"><code>class Node: def __init__(self, val = 0, neighbors = None): self.val = val self.neighbors = neighbors if neighbors is not None else [] from typing import Optional processed_node_map = {} class Solution: # solution: https://leetcode.com/problems/clone-graph/solutions/5092202/blind-75-beats-88-06-46-75/ def cloneGraph(self, node: Optional['Node']) -&gt; Optional['Node']: return self.dfs(node) if node else None def dfs(self, node): if node.val in processed_node_map: return processed_node_map[node.val] clone = Node(val=node.val) processed_node_map[node.val] = clone calculated_neighbors = [] for child in node.neighbors: calculated_neighbors.append(self.dfs(child)) clone.neighbors = calculated_neighbors return clone </code></pre>
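The likely culprit is the module-level `processed_node_map`: the judge reuses the module across test cases, so clones keyed by `val` from an EARLIER graph leak into the next one, and the checker then sees a node (e.g. value 2) that the current input never contained. Keying by the `Node` object hides this because each test case has distinct node objects. A sketch of the fix, moving the cache inside the call:

```python
from typing import Optional


class Node:
    def __init__(self, val=0, neighbors=None):
        self.val = val
        self.neighbors = neighbors if neighbors is not None else []


class Solution:
    def cloneGraph(self, node: Optional["Node"]) -> Optional["Node"]:
        if node is None:
            return None
        processed = {}  # fresh cache per call -- no state leaks between test cases

        def dfs(n):
            if n.val in processed:
                return processed[n.val]
            clone = Node(n.val)
            processed[n.val] = clone  # register BEFORE recursing to break cycles
            clone.neighbors = [dfs(c) for c in n.neighbors]
            return clone

        return dfs(node)
```

Since `Node.val` is unique within one graph, keying by `val` is fine once the map's lifetime is one call.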
<python><algorithm><recursion><depth-first-search>
2024-05-24 20:25:10
1
2,466
John Vandivier
78,530,494
504,717
Getting bad char in struct format when reading byte data
<p>I have binary data with the following specification</p> <pre><code>1 Byte SF-Major 1 Byte SF-Minor 1 Byte SF-Patch 1 Byte SF-Build 2 Byte build (little endian) </code></pre> <p>I wrote the following code to read this data</p> <pre class="lang-py prettyprint-override"><code>format_string = ( 'BBBB' '&lt;H' ) parsed_data: bytes = binary_data[:10] unpacked_data = struct.unpack(format_string, header_data) </code></pre> <p>On <code>unpacked_data</code> I get the error</p> <pre><code> unpacked_data = struct.unpack(format_string, header_data) struct.error: bad char in struct format </code></pre> <p>This error goes away if I remove <code>&lt;</code> from the <code>format_string</code>, OR if I put <code>&lt;</code> as the first line in <code>format_string</code> followed by the remaining lines (<code>H</code> without <code>&lt;</code>).</p> <p>The problem is that the binary specification says that <code>build</code> is little endian.</p> <p>What can I do?</p>
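The byte-order character is only legal as the first character of the whole format string, where it applies to every field — it cannot appear mid-format, hence "bad char in struct format". Putting `'<'` first also disables padding, so the header is 6 bytes (not 10, which `struct.unpack` would separately reject). A sketch:

```python
import struct

# '<' must be the FIRST character and sets little-endian for the whole format
HEADER_FORMAT = "<BBBBH"  # 4 single bytes + one little-endian unsigned short
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)
print(HEADER_SIZE)  # 6 -- so slice [:6], not [:10]

binary_data = bytes([1, 2, 3, 4, 0x34, 0x12])  # build = 0x1234 stored little-endian
major, minor, patch, build_byte, build = struct.unpack(
    HEADER_FORMAT, binary_data[:HEADER_SIZE]
)
print(major, minor, patch, build_byte, hex(build))  # 1 2 3 4 0x1234
```

Since the single-byte `B` fields have no byte order of their own, the leading `'<'` costs nothing for them and gives the `H` the little-endian interpretation the spec demands.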
<python><python-3.x><endianness><binary-data><python-3.10>
2024-05-24 20:09:06
1
8,834
Em Ae
78,530,413
8,507,982
Upper Triangular Matrix from pandas multiindex
<p>Building off the question here: <a href="https://stackoverflow.com/questions/34417685/melt-the-upper-triangular-matrix-of-a-pandas-dataframe">Melt the Upper Triangular Matrix of a Pandas Dataframe</a></p> <p>I am looking to do something similar, but across a multi-index (index level 0 of DF)</p> <p>Is there a better way of going about this than a for loop?</p> <pre><code>import pandas as pd import numpy as np # SETUP INPUT A_1 = [1, .5, .3] B_1 = [.5, 1, .4] C_1 = [.3, .4, 1] A_2 = [1, -.5, -.3] B_2 = [-.5, 1, -.4] C_2 = [-.3, -.4, 1] # SETUP DFs DF_1 = pd.DataFrame({'A': A_1, 'B': B_1, 'C': C_1, 'DATE': '2024-05-23', 'ID': ['A', 'B', 'C']}).set_index(['DATE', 'ID']) DF_2 = pd.DataFrame({'A': A_2, 'B': B_2, 'C': C_2, 'DATE': '2024-05-24', 'ID': ['A', 'B', 'C']}).set_index(['DATE', 'ID']) DF = pd.concat([DF_1, DF_2]) def prior_solution(df: pd.DataFrame): # Taken from https://stackoverflow.com/questions/34417685/melt-the-upper-triangular-matrix-of-a-pandas-dataframe df_filt = df.where(np.triu(np.ones(df.shape)).astype(np.bool_)) long = df_filt.stack().reset_index() long.columns = ['DATE', 'ID1', 'ID2', 'VALUE'] return long # get individual pieces - note there will be an arbitrary number of these, based on DATE in the index of the aggregated df L_1 = prior_solution(DF_1) L_2 = prior_solution(DF_2) # Want # note ID1 and ID2 pairs are not repeating WANT = pd.concat([L_1, L_2]) </code></pre> <p>WANT</p> <pre><code>DATE,ID1,ID2,VALUE 2024-05-23,A,A,1.0 2024-05-23,A,B,0.5 2024-05-23,A,C,0.3 2024-05-23,B,B,1.0 2024-05-23,B,C,0.4 2024-05-23,C,C,1.0 2024-05-24,A,A,1.0 2024-05-24,A,B,-0.5 2024-05-24,A,C,-0.3 2024-05-24,B,B,1.0 2024-05-24,B,C,-0.4 2024-05-24,C,C,1.0 </code></pre>
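One way to avoid the manual loop (a sketch, not the only idiom): apply the existing triangular melt per `DATE` group via `groupby(level="DATE")`, which handles an arbitrary number of dates in one pass:

```python
import numpy as np
import pandas as pd


def upper_long(df: pd.DataFrame) -> pd.DataFrame:
    """Melt the upper triangle of every per-DATE block in one grouped pass."""
    def _one(block: pd.DataFrame) -> pd.DataFrame:
        block = block.droplevel("DATE")
        mask = np.triu(np.ones(block.shape, dtype=bool))
        long = block.where(mask).stack().dropna().reset_index()
        long.columns = ["ID1", "ID2", "VALUE"]
        return long

    return (df.groupby(level="DATE", group_keys=True)
              .apply(_one)
              .reset_index(level=0)   # DATE comes back as a column
              .reset_index(drop=True))


# Demo with the data from the question
DF_1 = pd.DataFrame({"A": [1, .5, .3], "B": [.5, 1, .4], "C": [.3, .4, 1],
                     "DATE": "2024-05-23", "ID": ["A", "B", "C"]}).set_index(["DATE", "ID"])
DF_2 = pd.DataFrame({"A": [1, -.5, -.3], "B": [-.5, 1, -.4], "C": [-.3, -.4, 1],
                     "DATE": "2024-05-24", "ID": ["A", "B", "C"]}).set_index(["DATE", "ID"])
long_df = upper_long(pd.concat([DF_1, DF_2]))
print(long_df)
```

The explicit `.dropna()` after `stack()` keeps the behavior stable across pandas versions that change whether `stack` drops NaNs by default.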
<python><pandas><dataframe>
2024-05-24 19:44:29
1
837
ktj1989
78,530,305
2,249,357
Python: How can I call the original of an overloaded method?
<p>Let's say I have this:</p> <pre><code>class MyPackage ( dict ) : def __init__ ( self ) : super().__init__() def __setitem__ ( self, key, value ) : raise NotImplementedError( &quot;use set()&quot; ) def __getitem__ ( self, key ) : raise NotImplementedError( &quot;use get()&quot; ) def set ( self, key, value ) : # some magic self[key] = value def get ( self, key ) : # some magic if not key in self.keys() : return &quot;no!&quot; return self[key] </code></pre> <p>(Here, <code># some magic</code> is additional code that justifies <code>MyPackage</code> as opposed to 'just a dictionary.')</p> <p>The whole point is, I want to provided a dictionary-like object that forces the use of <code>get()</code> and <code>set()</code> methods and disallows all access via <code>[]</code>, i.e. it is not permitted to use <code>a['x']=&quot;wow&quot;</code> or <code>print( a['x'] )</code> However, the minute I call <code>get()</code> or <code>set()</code>, the <code>NotImplementedError</code> is raised. Contrast this with, say, Lua, where you can bypass &quot;overloading&quot; by using &quot;raw&quot; getters and setters. Is there any way I can do this in Python without making <code>MyPackage</code> <em>contain</em> a dictionary (as opposed to <em>being</em> a dictionary)?</p>
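Python has a direct analogue of Lua's `rawget`/`rawset`: call the parent class's dunder explicitly, either as `super().__setitem__(...)` or `dict.__setitem__(self, ...)`. This bypasses the overridden operators while `MyPackage` remains a real dict. A sketch (the `# some magic` from the question omitted):

```python
class MyPackage(dict):
    def __setitem__(self, key, value):
        raise NotImplementedError("use set()")

    def __getitem__(self, key):
        raise NotImplementedError("use get()")

    def set(self, key, value):
        # bypass the overridden operator -- Python's equivalent of Lua's rawset
        super().__setitem__(key, value)   # or: dict.__setitem__(self, key, value)

    def get(self, key):
        # note: this intentionally shadows dict.get(key, default)
        if key not in self.keys():
            return "no!"
        return super().__getitem__(key)   # the equivalent of rawget


p = MyPackage()
p.set("x", "wow")
print(p.get("x"))        # wow
print(p.get("missing"))  # no!
```

`a["x"] = "wow"` and `print(a["x"])` still raise `NotImplementedError`, because operator syntax dispatches to the subclass's overrides; only the explicit `super()`/`dict` calls reach the raw implementation.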
<python>
2024-05-24 19:10:13
1
729
LiamF
78,530,233
9,707,286
So much extra junk in Llama Index metadata when embedding
<p>I am using Llama Index to embed a series of documents. The embeddings are working fine. I have modified the metadata and that is appearing in my db, fine. So, what's the problem you ask? Well, Llama Index is adding a load of junk to the metadata that I cannot &quot;pop&quot; or otherwise remove. I even created, entirely, my own metadata variable. No change. Like this example.</p> <pre><code>{&quot;file_path&quot;: &quot;path/to/text.txt&quot;, &quot;file_name&quot;: &quot;myTextFiles.txt&quot;, &quot;file_type&quot;: &quot;text/plain&quot;, &quot;file_size&quot;: 3024349, &quot;creation_date&quot;: &quot;2024-05-19&quot;, &quot;last_modified_date&quot;: &quot;2023-11-24&quot;, </code></pre> <p>That above is all my metadata. Then, Llama Index adds all of this below. How do I ensure that what is below is not added to my metadata?</p> <pre><code>&quot;_node_content&quot;: &quot;{\&quot;id_\&quot;: \&quot;e0faec05-8a68-43b2-a2d1-51c307775877\&quot;, \&quot;embedding\&quot;: null, \&quot;metadata\&quot;: {\&quot;file_path\&quot;: \&quot;/path/to/textFiles.txt\&quot;, \&quot;file_name\&quot;: \&quot;paul_graham_essays.txt\&quot;, \&quot;file_type\&quot;: \&quot;text/plain\&quot;, \&quot;file_size\&quot;: 3024349, \&quot;creation_date\&quot;: \&quot;2024-05-19\&quot;, \&quot;last_modified_date\&quot;: \&quot;2023-11-24\&quot;}, \&quot;excluded_embed_metadata_keys\&quot;: [\&quot;file_name\&quot;, \&quot;file_type\&quot;, \&quot;file_size\&quot;, \&quot;creation_date\&quot;, \&quot;last_modified_date\&quot;, \&quot;last_accessed_date\&quot;], \&quot;excluded_llm_metadata_keys\&quot;: [\&quot;file_name\&quot;, \&quot;file_type\&quot;, \&quot;file_size\&quot;, \&quot;creation_date\&quot;, \&quot;last_modified_date\&quot;, \&quot;last_accessed_date\&quot;], \&quot;relationships\&quot;: {\&quot;1\&quot;: {\&quot;node_id\&quot;: \&quot;b51cd20a-6dbd-4d1b-b46b-4aa4f4d3d358\&quot;, \&quot;node_type\&quot;: \&quot;4\&quot;, \&quot;metadata\&quot;: {\&quot;file_path\&quot;: 
\&quot;file_name\&quot;: \&quot;texts.txt\&quot;, \&quot;file_type\&quot;: \&quot;text/plain\&quot;, \&quot;file_size\&quot;: 3024349, \&quot;creation_date\&quot;: \&quot;2024-05-19\&quot;, \&quot;last_modified_date\&quot;: \&quot;2023-11-24\&quot;}, \&quot;hash\&quot;: \&quot;412d644dc9ebf3d7aab8e41560ad724ffa0bc36922ce428305ddd694c2b41b3a\&quot;, \&quot;class_name\&quot;: \&quot;RelatedNodeInfo\&quot;}, \&quot;2\&quot;: {\&quot;node_id\&quot;: \&quot;74b97b26-d3b4-4b84-ac0e-e401922ff5f9\&quot;, \&quot;node_type\&quot;: \&quot;1\&quot;, \&quot;metadata\&quot;: {\&quot;file_path\&quot;: \&quot;file_name\&quot;: \&quot;TEXTS.txt\&quot;, \&quot;file_type\&quot;: \&quot;text/plain\&quot;, \&quot;file_size\&quot;: 3024349, \&quot;creation_date\&quot;: \&quot;2024-05-19\&quot;, \&quot;last_modified_date\&quot;: \&quot;2023-11-24\&quot;}, \&quot;hash\&quot;: \&quot;708868dc8c11472299c40c4ad44d643e9aea2dbe9c1bef325cc0dd0336d25d19\&quot;, \&quot;class_name\&quot;: \&quot;RelatedNodeInfo\&quot;}, \&quot;3\&quot;: {\&quot;node_id\&quot;: \&quot;d82535b2-421f-46ca-9972-df8d1e5a0df6\&quot;, \&quot;node_type\&quot;: \&quot;1\&quot;, \&quot;metadata\&quot;: {}, \&quot;hash\&quot;: \&quot;e65800e1e75593d6e58717024fbcf523f02a9f7a9e7a5f9ea739c7c5780fb26f\&quot;, \&quot;class_name\&quot;: \&quot;RelatedNodeInfo\&quot;}}, \&quot;text\&quot;: \&quot;\&quot;, \&quot;start_char_idx\&quot;: 3767, \&quot;end_char_idx\&quot;: 8347, \&quot;text_template\&quot;: \&quot;{metadata_str}\\n\\n{content}\&quot;, \&quot;metadata_template\&quot;: \&quot;{key}: {value}\&quot;, \&quot;metadata_seperator\&quot;: \&quot;\\n\&quot;, \&quot;class_name\&quot;: \&quot;TextNode\&quot;}&quot;, &quot;_node_type&quot;: &quot;TextNode&quot;, &quot;document_id&quot;: &quot;b51cd20a-6dbd-4d1b-b46b-4aa4f4d3d358&quot;, &quot;doc_id&quot;: &quot;b51cd20a-6dbd-4d1b-b46b-4aa4f4d3d358&quot;, &quot;ref_doc_id&quot;: &quot;b51cd20a-6dbd-4d1b-b46b-4aa4f4d3d358&quot;} </code></pre>
<python><embedding><llama-index>
2024-05-24 18:46:04
1
747
John Taylor
78,530,103
1,978,421
'CPUDispatcher' object is not subscriptable error
<p>I am trying to process a very large csv file. The csv file (companies.csv) contains a list of companies with a column of postal codes and some other columns. I have a postalcode.csv file containing official postal prefixes in the UK, such as AB, B, E, EC, etc... The aim is to filter out rows whose postal codes don't start with the provided postal code prefixes. For your convenience you may use this data</p> <pre><code>df = pd.DataFrame({'postal':['AB12', 'AL34', 'BA56', 'B78', '224876']}) prefixes = pd.Series(['AB', 'AL', 'B', 'BA', ]) </code></pre> <p>I am trying to run the following code with numba.jit on the CUDA engine</p> <pre><code>df = df[df['postal'].apply(lambda x: any(x.startswith(pc) for pc in prefixes))] </code></pre> <p>ChatGPT translated the code into this:</p> <pre><code>import numpy as np from numba import njit # Convert PostalCode to a set for faster membership check postal_codes_set = set(PostalCode) @njit def starts_with_any(postal_code, postal_codes_set): for pc in postal_codes_set: if postal_code.startswith(pc): return True return False @njit def filter_postal_codes(postal_codes, postal_codes_set): filtered_indices = [] for i in range(len(postal_codes)): if starts_with_any(postal_codes[i], postal_codes_set): filtered_indices.append(i) return filtered_indices </code></pre> <p>I have tried passing postal_codes as a numpy.array or a simple array. 
Numba just complains about the object and won't run it for me.</p> <p>errors are:</p> <ul> <li>argument 0: Cannot determine Numba type of &lt;class 'pandas.core.frame.DataFrame'&gt;</li> <li>argument 1: Cannot determine Numba type of &lt;class 'pandas.core.series.Series'&gt;</li> </ul> <hr /> <ol> <li>TypeError: 'CPUDispatcher' object is not subscriptable</li> </ol> <hr /> <p>TypingError: Failed in nopython mode pipeline (step: nopython frontend) non-precise type array(pyobject, 1d, C) During: typing of argument at /tmp/ipykernel_39251/917616573.py (10)</p> <p>File &quot;../../../../../../tmp/ipykernel_39251/917616573.py&quot;, line 10:</p> <hr /> <p>TypingError: Failed in cuda mode pipeline (step: nopython frontend) non-precise type array(pyobject, 1d, C) During: typing of argument at /tmp/ipykernel_39251/3483106034.py (1)</p> <p>File &quot;../../../../../../tmp/ipykernel_39251/3483106034.py&quot;, line 1:</p> <p>I just can't get it work for me. What have I done wrong?</p>
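[Editor's note] Numba's nopython mode cannot type pandas DataFrames, Series, or object-dtype string arrays, which is what each of those errors is saying. For prefix filtering, pandas' own vectorized string methods are usually the simpler route: `str.startswith` accepts a tuple of prefixes (pandas 1.5+, if I recall correctly). A minimal sketch with no numba involved; `mask` and `filtered` are illustrative names:

```python
import pandas as pd

df = pd.DataFrame({'postal': ['AB12', 'AL34', 'BA56', 'B78', '224876']})
prefixes = pd.Series(['AB', 'AL', 'B', 'BA'])

# str.startswith accepts a tuple of prefixes and runs in vectorized C,
# so no per-row Python lambda (and no numba compilation) is needed
mask = df['postal'].str.startswith(tuple(prefixes))
filtered = df[mask]
```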
<python><numba><jit>
2024-05-24 18:12:33
1
1,678
Hoy Cheung
78,530,071
2,475,195
Pandas dataframe: interpolate with regular time intervals
<p>My input dataframe looks like this:</p> <pre><code>ts = ['2008-01-02 06:50:00', '2008-01-02 06:51:00', '2008-01-02 06:53:00', '2008-01-02 06:54:00', '2008-01-02 06:57:00', '2008-01-02 06:58:00', '2008-01-02 07:39:00'] a = [1, 2, 3, 4, 5, 6, 7] b = [11, 22, 33, 44, 55, 66, 77] df = pd.DataFrame({'ts':ts, 'a':a, 'b':b}) ts a b 0 2008-01-02 06:50:00 1 11 1 2008-01-02 06:51:00 2 22 2 2008-01-02 06:53:00 3 33 3 2008-01-02 06:54:00 4 44 4 2008-01-02 06:57:00 5 55 5 2008-01-02 06:58:00 6 66 6 2008-01-02 07:39:00 7 77 </code></pre> <p>I want to insert synthetic rows with 1 min intervals such that the value of <code>a</code> is from the previously seen row, and <code>b</code> is 0, but only if the gap is smaller than 30 mins:</p> <pre><code> ts a b 0 2008-01-02 06:50:00 1 11 1 2008-01-02 06:51:00 2 22 2 2008-01-02 06:52:00 2 0 3 2008-01-02 06:53:00 3 33 4 2008-01-02 06:54:00 4 44 5 2008-01-02 06:55:00 4 0 6 2008-01-02 06:56:00 4 0 7 2008-01-02 06:57:00 5 55 8 2008-01-02 06:58:00 6 66 9 2008-01-02 07:39:00 7 77 </code></pre>
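[Editor's note] One hedged way to get exactly the expected output is to build the synthetic minutes pair-by-pair, so the 30-minute cut-off is respected exactly (a plain `resample().ffill(limit=...)` would still fill the first minutes of an over-long gap). `gap_ok`, `pieces`, and `out` are illustrative names; `inclusive='neither'` needs pandas >= 1.4 (older versions used `closed`):

```python
import pandas as pd

ts = ['2008-01-02 06:50:00', '2008-01-02 06:51:00', '2008-01-02 06:53:00',
      '2008-01-02 06:54:00', '2008-01-02 06:57:00', '2008-01-02 06:58:00',
      '2008-01-02 07:39:00']
df = pd.DataFrame({'ts': pd.to_datetime(ts),
                   'a': [1, 2, 3, 4, 5, 6, 7],
                   'b': [11, 22, 33, 44, 55, 66, 77]})

# gap to the NEXT row; only gaps under 30 minutes get synthetic rows
gap_ok = df['ts'].diff().shift(-1) < pd.Timedelta('30min')

pieces = []
for i in range(len(df) - 1):
    pieces.append(df.iloc[[i]])
    if gap_ok.iloc[i]:
        # minutes strictly between the two real timestamps
        rng = pd.date_range(df['ts'].iloc[i], df['ts'].iloc[i + 1],
                            freq='1min', inclusive='neither')
        if len(rng):
            pieces.append(pd.DataFrame({'ts': rng,
                                        'a': df['a'].iloc[i],  # last seen a
                                        'b': 0}))
pieces.append(df.iloc[[-1]])
out = pd.concat(pieces, ignore_index=True)
```

The 06:58 → 07:39 gap (41 minutes) gets no synthetic rows, matching the desired output.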
<python><pandas><dataframe><interpolation><oversampling>
2024-05-24 18:04:00
2
4,355
Baron Yugovich
78,529,987
1,678,010
Python (Gotcha?) - Element gets appended to multiple list items in a list of lists
<p>I've recently hit a gotcha in Python. I've added a minimal example. Can somebody tell me what is happening?</p> <pre><code>a = [list()] * 3 print(a) #[[], [], []] a[0].append(1) print(a) #Out - [[1], [1], [1]] #Expected - [[1],[],[]] </code></pre>
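[Editor's note] The short answer: `[list()] * 3` evaluates `list()` once and then repeats three references to that single object, so every slot aliases the same list. A comprehension evaluates `list()` once per slot, giving three distinct objects:

```python
# [list()] * 3 copies three references to ONE list object
shared = [list()] * 3
shared[0].append(1)          # the change is visible through every reference

# a comprehension evaluates list() three times -> three distinct objects
separate = [list() for _ in range(3)]
separate[0].append(1)
```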
<python><python-object>
2024-05-24 17:36:48
1
13,931
Neo
78,529,944
4,498,251
Python np.char.add changes datatype / weird behavior
<p>Can somebody explain the following behavior?</p> <p>Input: a list of strings as a pandas series (it's a column of a bigger dataframe in fact).</p> <p>Goal: Put a fixed single string &quot;BI-AS-ATLASSIAN-P-&quot; in front of all elements</p> <p>Code:</p> <pre><code>s = pd.Series(['DAIPRODUCT', 'DAISY']) s = np.char.add(&quot;BI-AS-ATLASSIAN-P-&quot;, s) print(s) </code></pre> <p>Expected output:</p> <pre><code>['BI-AS-ATLASSIAN-P-DAIPRODUCT' 'BI-AS-ATLASSIAN-P-DAISY'] </code></pre> <p>Actual output:</p> <pre><code>['BI-AS-ATLASSIAN-P-DAIPRODU' &lt;--- CT is missing here! 'BI-AS-ATLASSIAN-P-DAISY'] </code></pre> <p>It seems as if np.char.append somewhat silently changes the data type to something like &lt;U28 or so. I get from various posts that there seems to be an involved problem with this simple functionality (<a href="https://github.com/numpy/numpy/issues/10062" rel="nofollow noreferrer">https://github.com/numpy/numpy/issues/10062</a>) but this behavior is definitely wrong... if it is complicated then at least fail with an error informing the user that there is some information missing... or am I completely overlooking something here?</p> <p>What is the &quot;correct&quot; way of doing this simple &quot;listified&quot; operation (append a string to a string)?</p>
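[Editor's note] `np.char.add` converts its inputs to fixed-width `<U` arrays and, at least in the NumPy releases covered by the linked issue #10062, can size the result from the wrong operand, silently truncating. For a pandas Series, the plain `+` operator keeps variable-width Python strings, so nothing is cut. A hedged sketch:

```python
import pandas as pd

s = pd.Series(['DAIPRODUCT', 'DAISY'])
# pandas string concatenation has no fixed element width, so no truncation
out = 'BI-AS-ATLASSIAN-P-' + s
```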
<python><pandas><numpy>
2024-05-24 17:23:44
2
1,023
Fabian Werner
78,529,922
1,709,475
dataframe: print entire row/s where keys in the same row hold equal values
<p>I would like to recover the rows in a dataframe where, in the same row, differing keys hold equal values. I can display, for instance, the rows where col2 == col3. I would like to get this code to track across col1 matching across col2, col3 and col4. Then col2 to match across col3 and col4. Then finally col3 across col4.</p> <p>I have read through <a href="https://stackoverflow.com/questions/16476924/how-can-i-iterate-over-rows-in-a-pandas-dataframe">this post</a> and I am unsure whether iteration is the solution to my problem. If so, how can this be done?</p> <p>I can display, for instance, the rows where col2 == col3.</p> <pre><code># -*- coding: utf-8 -*- import pandas as pd ## writing a dataframe rows = {'col1':['5412','5148','5800','2122','5645','1060','4801','1039'], 'col2':['542','512','541','412','565','562','645','152'], 'col3':['542','3120','3410','2112','5650','5620','4801','152'], 'col4':['5800','2122','5645','2112','412','562','562','645'] } df = pd.DataFrame(rows) print(f'Unsorted dataframe \n\n{df}') ## print the rows where col2 == col3 dft = df[(df['col2'] == df['col3'])] print('\n\nupdate - list row of matching row elements') print(dft) ## print all except the rows where col2 == col3 dft = df.drop(df[(df['col2'] == df['col3'])].index) print('\n\nupdate - Dropping rows of matching row elements') print(dft) </code></pre> <p>With this I am getting back</p> <pre><code> col1 col2 col3 col4 0 5412 542 542 5800 7 1039 152 152 645 </code></pre> <p>I would like to get back</p> <pre><code> col1 col2 col3 col4 0 5412 542 542 5800 3 2122 412 2112 2112 4 5645 565 5650 412 5 1060 562 5620 562 6 4801 645 4801 562 7 1039 152 152 645 </code></pre>
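[Editor's note] For plain same-row equality between any pair of columns, no row iteration is needed: OR together one vectorized comparison per column pair. Note this keeps rows 0, 3, 5, 6 and 7 of the sample data; row 4 in the desired output has no two equal cells in the same row, so if it truly belongs there the matching rule must involve something beyond same-row equality. `mask` and `dft` are illustrative names:

```python
from itertools import combinations

import pandas as pd

rows = {'col1': ['5412', '5148', '5800', '2122', '5645', '1060', '4801', '1039'],
        'col2': ['542', '512', '541', '412', '565', '562', '645', '152'],
        'col3': ['542', '3120', '3410', '2112', '5650', '5620', '4801', '152'],
        'col4': ['5800', '2122', '5645', '2112', '412', '562', '562', '645']}
df = pd.DataFrame(rows)

# one vectorized comparison per unordered pair of columns, OR-ed together
mask = pd.Series(False, index=df.index)
for c1, c2 in combinations(df.columns, 2):
    mask |= df[c1].eq(df[c2])
dft = df[mask]
```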
<python><pandas>
2024-05-24 17:17:43
2
326
Tommy Gibbons
78,529,485
391,445
Functions inside inherited Cheetah template can't see global variables
<p>I have some Cheetah templates that are structured using inheritance.</p> <p>basepage.tmpl:</p> <pre><code>from quixote.publish import get_session() #set global $session = get_session() #block content &lt;!DOCTYPE html&gt; &lt;html lang=&quot;$session.lang&quot;&gt; &lt;head&gt; ... meta tags, stylesheets, etc ... &lt;/head&gt; #block body &lt;body&gt;&lt;/body&gt; #end block body &lt;/html&gt; #end block </code></pre> <p>anactualpage.tmpl:</p> <pre><code>#extends basepage #def some_content ... content ... $session.some_variable ... content ... #end def #def body &lt;body&gt; ... body ... $some_content ... more body ... &lt;/body&gt; #end def </code></pre> <p>Quixote is my web framework, but it could be any Python library. Importing isn't even the problem here; it's just that I setup a few global variables using some functions imported from various libraries so this illustrates my use case.</p> <p>When I invoke the template <code>anactualpage</code> in my code it generates the entire page of output as desired without any problems.</p> <p>If instead I want to return just part of the template, <code>anactualpage.some_content()</code> — for example in response to an AJAX request where I want to return just a part of the entire page — I get <code>NameMapper.NotFound</code> in the compiled template code where the variable lookup function <code>VFFSL</code> is looking for <code>session.some_variable</code> in the search list <code>SL</code> (VFFSL and SL are used by Cheetah into the compiled code).</p> <p>I can work around the error by adding extra <code>import</code>s and <code>set</code>s inside the child template — specifically inside the <code>some_content</code> function — but why is this necessary?</p> <p>Is this a bug in Cheetah? I presume not and that I'm abusing it somehow.</p> <p>Is there any other way to make (global) variables always available in the search list so they can be referenced by functions called separately from the entire template?</p>
<python><inheritance><global-variables><python-import><cheetah>
2024-05-24 15:30:03
1
7,809
Colin 't Hart
78,529,455
11,482,075
Will a dynamic list of choices in a Django model evaluate when the model is migrated or when a user tries to select a choice for a model?
<h1>Code</h1> <p>Let's say I have the following model:</p> <pre class="lang-py prettyprint-override"><code>class Course(models.Model): title = models.CharField(max_length=48) YEAR_CHOICES = [(r, r) for r in range( datetime.date.today().year-1, datetime.date.today().year+2 ) ] year = models.IntegerField(_('year'), choices=YEAR_CHOICES) </code></pre> <h1>Question</h1> <p>Will the <code>datetime.date.today()</code> statements be evaluated right when the model is migrated, or will they be evaluated whenever the user accesses a form to set the <code>year</code> value for the <code>Course</code> model?</p> <p>In other words, is my <code>YEAR_CHOICES</code> code above frozen to when I migrated my model or will it dynamically update as the years go by?</p>
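[Editor's note] Neither: the list comprehension runs at import time, whenever the model class body executes, i.e. every time the server process starts — not at migration and not per request (so a long-running process started before New Year keeps stale years). A plain-Python sketch of the timing, plus the usual fix of deferring evaluation behind a callable (recent Django — 5.0, if I recall correctly — accepts a callable for `choices`; on older versions, set the choices on the form field instead):

```python
import datetime

def year_choices():
    # runs every time it is CALLED, e.g. once per form render
    year = datetime.date.today().year
    return [(y, y) for y in range(year - 1, year + 2)]

class Course:
    # a class body, like a Django model's, executes once at import time,
    # so this value is frozen for the life of the process
    YEAR_CHOICES = year_choices()
```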
<python><django><django-models>
2024-05-24 15:23:32
1
361
DevinG
78,529,335
984,621
Python: ModuleNotFoundError: No module named 'openpyxl', although it's installed
<p>I am struggling with this issue.</p> <p>I have a small python project that is running inside a virtual environment. I installed this module as <code>pip install openpyxl</code> (I also tried <code>pip3 install openpyxl</code> or <code>python3 -m pip install openpyxl</code> with the same result).</p> <p>When I run my script, I get this error:</p> <pre><code>ModuleNotFoundError: No module named 'openpyxl' </code></pre> <p>In the traceback, I see references to my version of python (3.11.8):</p> <pre><code>Traceback (most recent call last): File &quot;/opt/homebrew/bin/scrapy&quot;, line 8, in &lt;module&gt; sys.exit(execute()) ^^^^^^^^^ File &quot;/opt/homebrew/lib/python3.11/site-packages/scrapy/cmdline.py&quot;, line 160, in execute cmd.crawler_process = CrawlerProcess(settings) ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/lib/python3.11/site-packages/scrapy/crawler.py&quot;, line 357, in __init__ super().__init__(settings) File &quot;/opt/homebrew/lib/python3.11/site-packages/scrapy/crawler.py&quot;, line 227, in __init__ self.spider_loader = self._get_spider_loader(settings) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/lib/python3.11/site-packages/scrapy/crawler.py&quot;, line 221, in _get_spider_loader return loader_cls.from_settings(settings.frozencopy()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/lib/python3.11/site-packages/scrapy/spiderloader.py&quot;, line 79, in from_settings return cls(settings) ^^^^^^^^^^^^^ File &quot;/opt/homebrew/lib/python3.11/site-packages/scrapy/spiderloader.py&quot;, line 34, in __init__ self._load_all_spiders() File &quot;/opt/homebrew/lib/python3.11/site-packages/scrapy/spiderloader.py&quot;, line 63, in _load_all_spiders for module in walk_modules(name): ^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/lib/python3.11/site-packages/scrapy/utils/misc.py&quot;, line 106, in walk_modules submod = import_module(fullpath) ^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;/opt/homebrew/Cellar/python@3.11/3.11.8/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1204, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1176, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1147, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 690, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 940, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed File &quot;/Users/adam/pythondev/myproj/app.py&quot;, line 17, in &lt;module&gt; from openpyxl import load_workbook ModuleNotFoundError: No module named 'openpyxl' </code></pre> <p>When I do <code>pip list</code> for my <code>venv</code>, I get</p> <pre><code>Package Version ---------- ------- et-xmlfile 1.1.0 openpyxl 3.1.2 pip 24.0 setuptools 69.5.1 wheel 0.43.0 </code></pre> <p>When I do <code>pip list</code> outside of my <code>venv</code>, there's no <code>openpyxl</code>.</p> <p>When I do <code>which python3</code>, I get</p> <pre><code>/Users/adam/pythondev/myproj/.venv/bin/python </code></pre> <p>Based on googling, I apparently installed <code>openpyxl</code> to a wrong version of python? However, how do I fix this and install to my version of python? (<code>python3 --version</code> =&gt; <code>Python 3.11.8</code>)</p>
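[Editor's note] The traceback gives the game away: `/opt/homebrew/bin/scrapy` runs under Homebrew's own interpreter (`/opt/homebrew/lib/python3.11/...`), not the venv where `openpyxl` was installed. Installing scrapy inside the venv (`pip install scrapy` with the venv active) and invoking it as `python -m scrapy ...` keeps everything in one environment. A small diagnostic sketch for checking which interpreter is actually executing your code:

```python
import sys

# the interpreter running this code; if it prints /opt/homebrew/... while
# your packages were installed into .venv, the two environments don't match
print(sys.executable)

# True when running inside a virtual environment
# (base_prefix points at the parent installation)
in_venv = sys.prefix != getattr(sys, 'base_prefix', sys.prefix)
print(in_venv)
```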
<python><python-3.x>
2024-05-24 14:56:15
1
48,763
user984621
78,529,326
13,579,159
__subclasses__() and import
<p>There are two files with a chain of commands and a dict that dynamically collects all non-abstract commands.</p> <h3>logic.py</h3> <pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod class Command(ABC): def __init__(self): ... @abstractmethod def run(): ... class Commands(dict): def __init__(self): import comm breakpoint() for cls in Command.__subclasses__(): names = self.split_camel_case(cls.__name__) self[names] = cls </code></pre> <h3>comm.py</h3> <pre class="lang-py prettyprint-override"><code>from logic import Command class TestCommand(Command): ... </code></pre> <h3>Debugger output</h3> <pre class="lang-none prettyprint-override"><code>&gt; logic.py(28) __init__() -&gt; for cls in Command.__subclasses__(): (Pdb) comm.TestCommand.__mro__ (&lt;class 'comm.TestCommand'&gt;, &lt;class 'logic.Command'&gt;, &lt;class 'abc.ABC'&gt;, &lt;class 'object'&gt;) (Pdb) Command.__subclasses__() [] </code></pre> <p>By the time the <code>Commands.__init__()</code> is called the base class <code>Command</code> is already defined. The module <code>comm</code> is imported and the subclass <code>TestCommand</code> is defined too. But the list returned by <code>Command.__subclass__()</code> is empty.</p> <p>Tell me please, what am I missing?</p>
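[Editor's note] The usual cause of an empty `__subclasses__()` here is two distinct module objects: when `logic.py` is executed as a script it loads as `__main__`, and `comm`'s `from logic import Command` then imports the same file a second time under the name `logic` — so `TestCommand` subclasses a *different* `Command` than the one the running code inspects (the debugger's MRO showing `logic.Command` fits this if `Commands` lives in `__main__`). When both sides go through the one canonical module name, the subclass shows up. A self-contained sketch that writes two throwaway modules to a temp dir:

```python
import sys
import tempfile
import textwrap
from pathlib import Path

tmp = Path(tempfile.mkdtemp())
(tmp / 'logic.py').write_text(textwrap.dedent('''
    class Command:
        pass
'''))
(tmp / 'comm.py').write_text(textwrap.dedent('''
    from logic import Command

    class TestCommand(Command):
        pass
'''))
sys.path.insert(0, str(tmp))
sys.modules.pop('logic', None)   # make sure we load THESE files,
sys.modules.pop('comm', None)    # not something cached under the same names

import logic   # the ONE canonical 'logic' module object
import comm    # defines TestCommand against that same Command class

# both files reference the same class object, so the subclass is visible
subs = [cls.__name__ for cls in logic.Command.__subclasses__()]
```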
<python><python-import>
2024-05-24 14:53:25
2
341
Gennadiy
78,529,229
2,080,441
Least squares fitting with bounded response variable Y in Python
<p>I have a problem of the form:</p> <pre><code>Xb = y </code></pre> <p>where <em>X</em> is the design matrix of a <strong>2D</strong> polynomial, <em>b</em> is the parameter vector and <em>y</em> is the response variable.</p> <p>I'd like to find an optimum parameter vector <em>b</em> that minimizes the 2-norm <code>|y - X b|</code> while at the same time respecting a constraint on <em>Xb</em>, so that <code>c1 &lt; Xb &lt; c2</code>, with <code>c1 &gt; 0</code> and <code>c2 &gt; 0</code> for known <em>c1</em>, <em>c2</em>. In other words, <em>Xb</em> should be bounded to a positive range.</p> <p>I already have a solution in place for unbounded <em>Xb</em>, using <code>scipy.linalg.lstsq</code>:</p> <pre><code> max_degree = sum(self.degree) scale = False x = self.e.to_numpy() y = self.f.to_numpy() z = self.cd.to_numpy() # Flatten input x = np.asarray(x).ravel() y = np.asarray(y).ravel() z = np.asarray(z).ravel() # Remove masked values mask = ~(np.ma.getmask(z) | np.ma.getmask(x) | np.ma.getmask(y)) x, y, z = x[mask].ravel(), y[mask].ravel(), z[mask].ravel() # Scale coordinates to smaller values to avoid numerical problems at larger degrees if scale: x, y, norm, offset = self._scale() coeff = np.zeros((self.degree[0] + 1, self.degree[1] + 1)) idx = EF_Poly2D_CD_Model._get_coeff_idx(coeff) # Calculate elements 1, x, y, x*y, x**2, y**2, ... 
# np.vander will only create powers of 1 variable, it takes 1-D arrays A = self.polyvander2d(self.degree) # masking: We only want the combinations with maximum order COMBINED power if max_degree is not None: mask = (idx[:, 0] + idx[:, 1]) &lt;= int(max_degree) idx = idx[mask] A = A[:, mask] # masking: We only want to keep factors that are in the coeff_mask mask = np.ones((idx.shape[0],)).astype(bool) if self.coeff_mask is not None: for i in range(idx.shape[0]): mask[i] = (idx[i, 0], idx[i, 1]) in self.coeff_mask mask = mask.astype(bool) idx = idx[mask] A = A[:, mask] # add support for regularization n_variables = A.shape[1] if self.lambd != 0: A = np.concatenate([A, np.sqrt(self.lambd) * np.eye(n_variables)]) z = np.concatenate([z, np.zeros(n_variables)]) # Do the actual least squares fit C, residuals, *_ = lstsq(A, z) # Reorder coefficients into numpy compatible 2d array for k, (i, j) in enumerate(idx): coeff[i, j] = C[k] # Reverse the scaling if scale: coeff = self.polyscale2d(coeff, *norm, copy=False) coeff = self.polyshift2d(coeff, *offset, copy=False) </code></pre> <p>Is there an algorithm that can achieve this? Ideally I'd like to use one of the popular python packages: numpy, scipy, scikit-learn etc</p>
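[Editor's note] `scipy.linalg.lstsq` has no constraint support, and `scipy.optimize.lsq_linear` bounds the parameters *b* rather than the fitted values *Xb*. Bounding *Xb* is a linearly constrained least-squares problem, which `scipy.optimize.minimize` handles via inequality constraints. A hedged sketch on a toy 1-D polynomial design (the data, bounds, and names are illustrative, not the asker's 2-D setup):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
X = np.vander(x, 3, increasing=True)          # columns: 1, x, x**2
b_true = np.array([0.5, 1.0, 2.0])
y = X @ b_true + 0.05 * rng.normal(size=len(x))

c1, c2 = 0.01, 10.0                           # keep X @ b in a positive range

def sq_residual(b):
    r = X @ b - y
    return r @ r

# SLSQP inequality constraints are of the form fun(b) >= 0, elementwise
cons = [{'type': 'ineq', 'fun': lambda b: X @ b - c1},   # X @ b >= c1
        {'type': 'ineq', 'fun': lambda b: c2 - X @ b}]   # X @ b <= c2
res = minimize(sq_residual, x0=np.array([1.0, 0.0, 0.0]),  # feasible start
               method='SLSQP', constraints=cons)
b_hat = res.x
```

When the unconstrained optimum already satisfies the bounds, this reduces to ordinary least squares; `nnls`/`lsq_linear` only help if the bounds can be re-expressed on *b* itself.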
<python><scipy><regression>
2024-05-24 14:37:06
1
361
capitan
78,529,179
6,467,567
Using regular expression to only retrieve lists
<p>I am trying to retrieve the lists in the text below.</p> <pre><code>import re # Read the content from the file (here we assume the content is stored in a string for demonstration) content = &quot;&quot;&quot; Variation 1: Based on the provided examples and the input sequence, the next anticipated actions for the input sequence [['Start'], ['Pick', 'mixing_bowl_green']] could be: 1. Pick the sponge (since wiping actions typically require the sponge). 2. Wipe the mixing bowl. 3. Place the mixing bowl. 4. Place the sponge. 5. Pick the broom. 6. Sweep. 7. Place the broom. 8. End the sequence. Here is the completed sequence: [['Start'], ['Pick', 'mixing_bowl_green'], ['Pick', 'sponge_small'], ['Wipe mixing_bowl_green'], ['Place', 'mixing_bowl_green'], ['Place', 'sponge_small'], ['Pick', 'broom'], ['Sweep'], ['Place', 'broom'], ['End']] Variation 2: Based on the provided examples, the anticipated sequence of actions for the given input can be completed as follows: Input = [['Start'], ['Pick', 'mixing_bowl_green']] 1. The next logical step is to pick the sponge, as it is required for the wiping actions. 2. Then, proceed to wipe the mixing bowl. 3. Place the mixing bowl back. 4. Pick the cutting board and wipe it. 5. Place the cutting board back. 6. Pick the plate and wipe it. 7. Place the plate back. 8. Place the sponge back. 9. Pick the broom and sweep. 10. Place the broom back. 11. End the sequence. 
Here is the completed sequence: [['Start'], ['Pick', 'mixing_bowl_green'], ['Pick', 'sponge_small'], ['Wipe mixing_bowl_green'], ['Place', 'mixing_bowl_green'], ['Pick', 'cutting_board_small'], ['Wipe cutting_board_small'], ['Place', 'cutting_board_small'], ['Pick', 'plate_dish'], ['Wipe plate_dish'], ['Place', 'plate_dish'], ['Place', 'sponge_small'], ['Pick', 'broom'], ['Sweep'], ['Place', 'broom'], ['End']] &quot;&quot;&quot; # Define the regular expression pattern pattern = re.compile(r&quot;\[\['Start'\].*?\['End'\]\]&quot;, re.DOTALL) # Find all matches in the content matches = pattern.findall(content) # Print the matches for match in matches: print(&quot;#####&quot;) print(match) print(&quot;#####&quot;) input() </code></pre> <p>However, the code returns me</p> <pre><code>##### [['Start'], ['Pick', 'mixing_bowl_green']] could be: 1. Pick the sponge (since wiping actions typically require the sponge). 2. Wipe the mixing bowl. 3. Place the mixing bowl. 4. Place the sponge. 5. Pick the broom. 6. Sweep. 7. Place the broom. 8. End the sequence. Here is the completed sequence: [['Start'], ['Pick', 'mixing_bowl_green'], ['Pick', 'sponge_small'], ['Wipe mixing_bowl_green'], ['Place', 'mixing_bowl_green'], ['Place', 'sponge_small'], ['Pick', 'broom'], ['Sweep'], ['Place', 'broom'], ['End']] ##### </code></pre> <p>as the first match which is incorrect. How do I write the regular expression to only match lists? The text between the ['Start'] and ['End'] must resemble a list i.e. a comma followed by square brackets. 
The output should be</p> <pre><code>list1 = [['Start'], ['Pick', 'mixing_bowl_green'], ['Pick', 'sponge_small'], ['Wipe mixing_bowl_green'], ['Place', 'mixing_bowl_green'], ['Place', 'sponge_small'], ['Pick', 'broom'], ['Sweep'], ['Place', 'broom'], ['End']] list2 = [['Start'], ['Pick', 'mixing_bowl_green'], ['Pick', 'sponge_small'], ['Wipe mixing_bowl_green'], ['Place', 'mixing_bowl_green'], ['Pick', 'cutting_board_small'], ['Wipe cutting_board_small'], ['Place', 'cutting_board_small'], ['Pick', 'plate_dish'], ['Wipe plate_dish'], ['Place', 'plate_dish'], ['Place', 'sponge_small'], ['Pick', 'broom'], ['Sweep'], ['Place', 'broom'], ['End']] </code></pre>
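[Editor's note] One way is to forbid anything but bracketed items between `['Start']` and `['End']`: each element after `['Start']` must match `, [...]` with no stray brackets inside, and the match must terminate at `['End']]`. The bare input lists then fail (no trailing `['End']`), and any prose breaks the chain. A hedged sketch on abbreviated content; `[^][]*` assumes no nested brackets inside an element, which holds for the action lists shown:

```python
import re

content = """
Input = [['Start'], ['Pick', 'mixing_bowl_green']]
1. Pick the sponge (prose in between).
Here is the completed sequence:
[['Start'], ['Pick', 'mixing_bowl_green'], ['Wipe mixing_bowl_green'], ['End']]
More prose, then a second completed sequence:
[['Start'], ['Pick', 'sponge_small'], ['Sweep'], ['End']]
"""

# every element between ['Start'] and ['End'] must itself be a bracketed
# item -- ", ['...']" with no stray brackets inside -- and the match must
# end at ['End']], so the bare input lists never qualify
pattern = re.compile(r"\[\['Start'\](?:,\s*\[[^][]*\])*,\s*\['End'\]\]")
matches = pattern.findall(content)
```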
<python><regex>
2024-05-24 14:26:24
1
2,438
Kong
78,529,019
823,859
Calculating weighted cosine similarity between vectors of words
<p>I have two word lists, where each word makes up a topic, and has a tf-idf weight for that topic:</p> <pre><code>topic1 = [('blue',.1), ('red',.05), ('sky',.01)] topic2 = [('water',.5), ('fire',.1), ('earth',.02)] </code></pre> <p>I am trying to calculate the cosine similarity between the vectors, but also account for the tf-idf weighting of each word.</p> <p>Is there a commonly accepted way to account for individual weightings when doing vector similarity? How would I implement this?</p>
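[Editor's note] The standard approach is to use the tf-idf weights themselves as the vector components over the union vocabulary and then compute ordinary cosine similarity on those weighted vectors — no separate "weighted cosine" formula is needed. A minimal sketch (`weighted_cosine` and `topic3` are illustrative):

```python
import math

topic1 = [('blue', .1), ('red', .05), ('sky', .01)]
topic2 = [('water', .5), ('fire', .1), ('earth', .02)]
topic3 = [('blue', .2), ('water', .3)]   # hypothetical overlapping topic

def weighted_cosine(t1, t2):
    """Cosine similarity of two sparse {word: tf-idf weight} vectors."""
    d1, d2 = dict(t1), dict(t2)
    dot = sum(w * d2[word] for word, w in d1.items() if word in d2)
    n1 = math.sqrt(sum(w * w for w in d1.values()))
    n2 = math.sqrt(sum(w * w for w in d2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

sim = weighted_cosine(topic1, topic2)   # disjoint vocabularies -> 0.0
```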
<python><nlp><cosine-similarity>
2024-05-24 13:54:18
0
7,979
Adam_G
78,528,796
2,725,810
Multiple tests for output of a module reading standard input
<p>In one coding exercise in my Udemy course, a student is required to write a program that checks whether the total length of three strings read from the standard input is equal to 10, i.e. a one-liner like this:</p> <pre class="lang-py prettyprint-override"><code>print(len(input()) + len(input()) + len(input())==10) </code></pre> <p>The module is called <code>main.py</code>. I am trying to test it:</p> <pre class="lang-py prettyprint-override"><code>import sys from unittest import TestCase from unittest.mock import patch class Evaluate(TestCase): fake_input1 = iter([&quot;a&quot;, &quot;abcdefgh&quot;, &quot;I&quot;]).__next__ fake_input2 = iter([&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]).__next__ @patch(&quot;builtins.input&quot;, fake_input1) def test1(self): import main output = sys.stdout.getvalue().strip() self.assertEqual(&quot;True&quot;, output, &quot;Wrong output&quot;) @patch(&quot;builtins.input&quot;, fake_input2) def test2(self): import main output = sys.stdout.getvalue().strip() self.assertEqual(&quot;False&quot;, output, &quot;Wrong output&quot;) </code></pre> <p>The first test works as expected. For <code>test2</code>, <code>output</code> is empty. I thought it had to do with the module being imported only once and tried to unimport it, but still could not get it to work. What is going on and how can I fix it?</p>
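[Editor's note] The second `import main` is a no-op: Python caches modules in `sys.modules`, so the one-liner's top-level code runs only on the first import — by `test2` the fake input has already been consumed and nothing is printed. Re-executing it per test needs `importlib.reload` (or deleting the entry from `sys.modules`). A self-contained sketch that writes a throwaway `main.py` and runs it twice:

```python
import contextlib
import importlib
import io
import sys
import tempfile
from pathlib import Path
from unittest.mock import patch

tmp = Path(tempfile.mkdtemp())
(tmp / 'main.py').write_text(
    "print(len(input()) + len(input()) + len(input()) == 10)\n")
sys.path.insert(0, str(tmp))
sys.modules.pop('main', None)        # make sure we load THIS main.py

def run_main(lines):
    buf = io.StringIO()
    with patch('builtins.input', iter(lines).__next__), \
         contextlib.redirect_stdout(buf):
        if 'main' in sys.modules:
            # cached from a previous run: re-execute its top-level code
            importlib.reload(sys.modules['main'])
        else:
            importlib.import_module('main')
    return buf.getvalue().strip()

out1 = run_main(['a', 'abcdefgh', 'I'])   # 1 + 8 + 1 == 10
out2 = run_main(['a', 'b', 'c'])          # 1 + 1 + 1 != 10
```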
<python><python-import><python-unittest><python-importlib>
2024-05-24 13:13:42
0
8,211
AlwaysLearning
78,528,690
5,641,051
How to get Python buffer format string from C struct - PEP 3118
<p>I want to know what would be the &quot;thing&quot; generating/inferring what a C struct's &quot;buffer format string&quot; should be, according to PEP 3118.</p> <p>E.g. if I have some cython code that defines</p> <pre><code>cdef struct S: int field_1 int field_2 </code></pre> <p>if I tried to make a memoryview with type <code>cdef S[:]</code>, there is an associated format string, something like</p> <pre><code>T{ i:field_1: i:field_2: } </code></pre> <p>What is the thing that inspects the struct definition to infer the format string?</p>
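[Editor's note] The format string is produced by whatever object *exports* the buffer: CPython only defines the mini-language (PEP 3118 / the `struct` and `memoryview` docs), and each exporter's `__getbuffer__`/`bf_getbuffer` slot builds the string from its own type information. For Cython that generation lives in its buffer-support machinery, which walks the `cdef struct` fields; NumPy does the equivalent for structured dtypes, which makes the result easy to inspect (exact spelling may differ slightly between versions, hence the loose checks):

```python
import numpy as np

# a structured dtype equivalent to: struct S { int field_1; int field_2; };
dt = np.dtype([('field_1', np.int32), ('field_2', np.int32)])
arr = np.zeros(2, dtype=dt)

# the exporter (NumPy) writes the PEP 3118 format string into the buffer
fmt = memoryview(arr).format          # e.g. 'T{i:field_1:i:field_2:}'
```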
<python><c><python-3.x><cython><cpython>
2024-05-24 12:52:01
0
353
statskyy
78,528,444
7,057,529
Get ssl client certificate from socket object after failing to verify the certificate
<p>I have a socket server written in python accepting connections like:</p> <pre class="lang-py prettyprint-override"><code>self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # set the socket as non-blocking self.socket.setblocking(False) # bind self.socket.bind((self.host, int(self.port))) # listen self.socket.listen() </code></pre> <p>In a separate async function, I'm calling:</p> <pre class="lang-py prettyprint-override"><code>conn, address = await self.loop.sock_accept(self.socket) </code></pre> <p>Once the connections is made, it is passed to :</p> <pre class="lang-py prettyprint-override"><code>loop.connect_accepted_socket(lambda: &lt;custom asyncio.Protocol class&gt;, ssl=ssl_context) </code></pre> <p>When the connection fails to establish, the error is caught under:</p> <pre class="lang-py prettyprint-override"><code>except ssl.SSLError as e: print(e) </code></pre> <p>Where <code>e</code> can be a range of ssl errors.</p> <p>Some of them are:</p> <ol> <li>[SSL: WRONG_VERSION_NUMBER]</li> <li>[SSL: SSLV3_ALERT_BAD_CERTIFICATE] sslv3 alert bad certificate (_ssl.c:1091)</li> </ol> <p>Under such scenarios, I'd like to be able to print out the certificate that we had received from the client.</p> <p>Is there a way to do this?</p> <p>I've tried printing <code>conn.getpeercert()</code> in the exception handler, however I get the error message:</p> <blockquote> <p>AttributeError: 'socket' object has no attribute 'getpeercert'</p> </blockquote> <p>Whereas, if the connection does succeed, I'm able to get the peer certs from the transport object by calling:</p> <pre class="lang-py prettyprint-override"><code>transport.get_extra_info(&quot;peercert&quot;) </code></pre> <p>I'd like to know if there is a way to print the certificate received from the client upon failure to verify.</p>
<python><python-3.x><sockets><network-programming><python-asyncio>
2024-05-24 12:03:59
1
498
Anirudh Panchangam
78,528,345
1,805,275
Import nested subfolder file in Flask
<p>I am using Flask and use Heroku in production Everything is working great on my laptop, but impossible to make it work when I push it to Heroku</p> <p>Here is my directory hierarchy from the root directory :</p> <pre><code>__init__.py (empty) app.py Procfile sub1/ ----__init__.py (empty) ----sub2/ --------__init__.py (empty) --------file.py </code></pre> <p>My <code>app.py</code> :</p> <pre><code>from sub1.sub2.file import Myclass x = Myclass() </code></pre> <p>My <code>sub1/sub2/file.py</code> file :</p> <pre><code>class Myclass(): # Some code </code></pre> <p>My <code>Procfile</code> :</p> <pre><code>web: gunicorn app:app </code></pre> <p>Everything is working great on my laptop but as soon as i push it to Heroku I get the error :</p> <pre><code>ModuleNotFoundError: No module named 'sub1.sub2.file' </code></pre> <hr /> <p>I have tried relative import with <code>from .sub1.sub2.file import Myclass</code> but I get the error :</p> <pre><code>ImportError: attempted relative import with no known parent package </code></pre> <p>I have also tried to :</p> <ul> <li><code>import sub1 as sub1</code> inside <code>app.py</code></li> <li><code>x = sub1.sub2.file.Myclass()</code> inside <code>app.py</code></li> <li>add <code>from . import sub2</code> inside <code>sub1/__init__.py</code></li> <li>add <code>from . 
import file</code> inside <code>sub1/sub2/__init__.py</code></li> <li>but still not working</li> </ul> <p>I have also tried to add the folders in the sys.path :</p> <pre><code>base_dir = os.path.dirname(os.path.abspath(__file__)) sys.path.append(base_dir) sys.path.append(os.path.join(base_dir, 'sub1')) sys.path.append(os.path.join(base_dir, 'sub1', 'sub2')) from sub1.sub2.file import Myclass </code></pre> <p>I have also tried :</p> <ul> <li><code>from sub1 import sub2</code> in <code>app.py</code></li> <li><code>x = sub1.Myclass()</code> in <code>app.py</code></li> <li><code>from .sub2 import file</code> in <code>sub1/__init__.py</code></li> <li><code>from .file import Myclass</code> in <code>sub1/sub2/__init__.py</code></li> <li>I get <code>AttributeError: module 'sub1.sub2' has no attribute 'Myclass'</code></li> </ul> <hr /> <p>Once again, each solution works on my laptop when I do <code>python app.py</code> or <code>foreman start -f Procfile</code> but never works on Heroku... is it even possible ?</p>
<python><heroku><python-import>
2024-05-24 11:43:12
1
3,322
SJU
78,528,308
8,389,618
Index difference is not working in pandas
<p>I have the data for 1 year which is time series data hourly basis so few timestamps are missing in between them.The shape of this data is (8188, 3) sample data I have attached below.</p> <p>I am resampling timestamps according to my data duration which will generate all the timestamps of one year even which were missing in my original data <code>df_hourly = temp_df.resample('h').asfreq()</code> the shape of resampled index is (8764, 1)</p> <p>Now, I am taking the difference of resampled data and original data <code>new_rows = df_hourly.index.difference(original_index)</code> so actual index shape should come as (8764-8188=576,) and then I will replace these 576 missing timestamp with <strong>median of total</strong></p> <pre><code>temp_df = temp_df[temp_df['cell'] == cell] print(temp_df.head()) temp_df.to_csv('temp_df.csv') print(temp_df.shape) # get_missing_duplicates(temp_df) # fill_missing_duplicates() print(temp_df.shape) temp_df['_time'] = pd.to_datetime(temp_df['_time']) print(temp_df['_time'].dtype) temp_df.set_index('_time', inplace=True) original_index = temp_df.index print(original_index) print(&quot;original_index&quot;,original_index.shape) df_hourly = temp_df.resample('h').asfreq() print(df_hourly) # This line is not working as expected new_rows = df_hourly.index.difference(original_index) print(&quot;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;&quot;,original_index) median_value = df['Total'].median() # new_rows = df_hourly.index.difference(temp_df.index) print(&quot;new_rows&quot;,new_rows) </code></pre> <p><strong>new_rows = df_hourly.index.difference(original_index)</strong> this line gives the wrong result basically it should return the difference df_hourly.index and</p> <ul> <li>the shape of <strong>temp_df</strong> is coming as <strong>(8188,)</strong></li> <li>the shape of <strong>df_hourly</strong> is coming as <strong>(8764,)</strong></li> <li>the shape of <strong>new_rows</strong> is also coming as 
<strong>(8764,)</strong></li> </ul> <p>The output of the <code>temp_df.index</code> is</p> <pre><code>DatetimeIndex(['2023-05-22 02:00:04+00:00', '2023-05-22 03:00:03+00:00', '2023-05-22 04:00:03+00:00', '2023-05-22 05:00:03+00:00', '2023-05-22 06:00:03+00:00', '2023-05-22 07:00:03+00:00', '2023-05-22 08:00:03+00:00', '2023-05-22 09:00:03+00:00', '2023-05-22 10:00:03+00:00', '2023-05-22 11:00:03+00:00', ... '2024-05-20 17:00:03+00:00', '2024-05-20 18:00:04+00:00', '2024-05-20 20:00:03+00:00', '2024-05-20 21:00:03+00:00', '2024-05-20 22:00:03+00:00', '2024-05-20 23:00:03+00:00', '2024-05-21 01:00:03+00:00', '2024-05-21 02:00:03+00:00', '2024-05-21 04:00:03+00:00', '2024-05-21 05:00:03+00:00'], dtype='datetime64[ns, UTC]', name='_time', length=8188, freq=None) </code></pre> <p>The output of the <code>df_hourly.index</code> is:</p> <pre><code>df_hourly_index RangeIndex(start=0, stop=8764, step=1) (8764, 3) </code></pre> <p>Sample data:</p> <p><a href="https://i.sstatic.net/7A8tFc3e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7A8tFc3e.png" alt="enter image description here" /></a></p>
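[Editor's note] The likely culprit is the seconds jitter: the observed stamps are `HH:00:03`/`HH:00:04`, while resampling produces exact hour marks, so the two indexes share (almost) no common values and `difference` returns nearly everything. Flooring the observed index to the hour before comparing makes the set difference meaningful. A small self-contained sketch with synthetic three-row data (the `Total` column and names are illustrative):

```python
import pandas as pd

# hourly observations with a few seconds of jitter; the 04:00 hour is absent
idx = pd.to_datetime(['2023-05-22 02:00:04',
                      '2023-05-22 03:00:03',
                      '2023-05-22 05:00:03'])
temp_df = pd.DataFrame({'Total': [1.0, 2.0, 3.0]}, index=idx)

# snap 02:00:04 -> 02:00:00 so stamps line up with exact hour marks
temp_df.index = temp_df.index.floor('h')

full = pd.date_range(temp_df.index.min(), temp_df.index.max(), freq='h')
new_rows = full.difference(temp_df.index)    # only the truly missing hours

filled = temp_df.reindex(full)
filled.loc[new_rows, 'Total'] = temp_df['Total'].median()
```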
<python><pandas><dataframe>
2024-05-24 11:34:22
2
348
Ravi kant Gautam
78,528,278
3,220,497
How to prevent multiprocessing spawn from using an edited python script?
<p>I am running multiprocessing in Python with start method 'spawn'. I have the following code:</p> <pre><code>import time import multiprocessing def print_something(): print(&quot;1&quot;) def main(): multiprocessing.set_start_method('spawn') while True: process = multiprocessing.Process(target=print_something) process.start() process.join() time.sleep(1) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>When I run this piece of code, python will print '1' every second. My problem is the following. I start running the main() function and let it run on the background. If I edit the above code, let's say I change print(&quot;1&quot;) to print(&quot;2&quot;), and I save the code, the background function will start printing &quot;2&quot; instead of &quot;1&quot;. This means that saving the code is not safe: if I for example save bugged code, the background function will crash.</p> <p>The multiprocessing start method influences the result. If I use 'fork', it will always print '1' even after editing and saving the code. However, both 'spawn' and 'forkserver' uses the updated script, instead of the script at the moment of starting main().</p> <p>What I want is to safely edit and save the code without it affecting background processes, while using start method 'spawn'. How can I achieve this?</p>
<python><python-multiprocessing>
2024-05-24 11:30:15
0
472
XiozZe
78,528,045
6,224,975
FastAPI and unclosed connection
<p>I have a FastAPI app which (for some reason) throws a lot (!) of warnings <code>ResourceWarning: unclosed connection &lt;asyncpg.connection.Connection object at 0x&gt;</code></p> <p>and</p> <p><code>unclosed resource &lt;TCPTransport closed=False reading=False 0x5a17c850d000&gt;</code></p> <p>I have checked that everywhere I do some kind of database manipulation i.e load models/write models etc. is within a context-manager i.e</p> <pre class="lang-py prettyprint-override"><code> with create_session() as session: session.add(db_models) session.commit() </code></pre> <p>and</p> <pre class="lang-py prettyprint-override"><code> query = select(model.source_identifier).filter(model.source_identifier == id_to_check, model.data[&quot;embedding_model&quot;].astext==CURRENT_EMBEDDING_MODEL.value) async with create_async_session() as session: row = await session.execute(query) row_exists = bool(row.scalar()) return row_exists </code></pre> <p>where <code>create_async_session</code> is</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession, AsyncEngine, async_sessionmaker def _create_async_engine() -&gt; AsyncEngine: hostname = os.environ.get('Postgres__Hostname') username = os.environ.get('Postgres__Username') password = os.environ.get('Postgres__Password') database_name = os.environ.get('Postgres__Name') return create_async_engine( f&quot;postgresql+asyncpg://{username}:{password}@{hostname}/{database_name}&quot; ) _asyncsessionmaker = async_sessionmaker() _asyncsessionmaker.configure(bind=_create_async_engine()) def create_async_session() -&gt; AsyncSession: return _asyncsessionmaker() </code></pre> <p>thus I simply cannot figure it out what else it should be.</p> <p>It seems to be rather random and come in chunks - it is not like after each request it comes.</p>
<python><fastapi>
2024-05-24 10:42:14
0
5,544
CutePoison
78,527,670
2,536,614
What am I doing wrong with CRC16-X25 calculation in Python?
<p>I have this code in Python 3 to calculate CRC16-X25:</p> <pre><code>import crcmod # CRC function using the CRC-16-CCITT standard crc16 = crcmod.mkCrcFun(0x11021, initCrc=0xFFFF, xorOut=0xFFFF, rev=True) def calculate_crc(data): return crc16(data) hex_data = '010e00180510100b1b020100000100ff' # Convert the hex string to bytes binary_data = bytes.fromhex(hex_data) # Calculate the CRC crc_value = calculate_crc(binary_data) # Print the CRC value in hexadecimal format print(f'CRC value: {crc_value:04X}') </code></pre> <p>I am not sure whether rev should be True or False, so I tried both.</p> <p>But in neither case could I get the expected answer, which is <code>0xAF47</code>.</p> <p>However, when I calculate with this online tool <a href="https://crccalc.com/" rel="nofollow noreferrer">calculate crc16-x25</a>,</p> <p>in <code>CRC-16/X-25</code> it does give <code>0xAF47</code>.</p> <p><a href="https://i.sstatic.net/tCOWATwy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCOWATwy.png" alt="enter image description here" /></a></p> <p>So, what is wrong with my Python code?</p> <p>BTW: <code>crcmod.predefined.Crc('x-25')</code> DOES give the expected result. What is wrong with <code>mkCrcFun(0x11021, initCrc=0xFFFF, xorOut=0xFFFF, rev=True)</code>?</p>
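The likely culprit (per the crcmod documentation, so hedged rather than certain): `mkCrcFun`'s `initCrc` is not the raw register preset from the CRC catalogue — it should be the preset already XORed with the final `xorOut`. For X-25 that is `0xFFFF ^ 0xFFFF = 0x0000`, so `crcmod.mkCrcFun(0x11021, initCrc=0x0000, xorOut=0xFFFF, rev=True)` should match `crcmod.predefined.Crc('x-25')`. A dependency-free bit-by-bit reference implementation of CRC-16/X-25 is handy for cross-checking:

```python
def crc16_x25(data: bytes) -> int:
    """CRC-16/X-25: reflected poly 0x1021 (i.e. 0x8408), init 0xFFFF, xorout 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Reflected algorithm: shift right, XOR with reversed polynomial.
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF


# The question's payload; the expected value per the question is 0xAF47.
payload = bytes.fromhex("010e00180510100b1b020100000100ff")
print(f"CRC value: {crc16_x25(payload):04X}")
```

The catalogue check value for CRC-16/X-25 (`b"123456789"` → `0x906E`) is a quick way to confirm any implementation or parameter set is the right variant.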
<python><crc><crc16>
2024-05-24 09:34:01
1
1,263
Mert Mertce
78,527,638
184,379
Cannot find table row tags when scraping page
<p>I am scraping the first few pages of a site. This has recently stopped working after the 10th page:</p> <pre><code> page += 1 rankings_url = f'{URL_RANKINGS}{page}' res = get(rankings_url) html = BeautifulSoup(res.text, 'html.parser') rows = html.find_all('tr', id='row_') logger.info(f'Found {len(rows)} rows for page {page}...') </code></pre> <p>This works for the first 10 pages. However, from page 11 and onward, there are no rows. When I look in the inspector for the request the rows are in the response, as well as when I look at the source.</p> <p>I cannot figure out what the problem could be - it used to work. I'm just fetching it with requests:</p> <pre><code> res = requests.get( url, params=params, headers=headers, timeout=30, allow_redirects=redirect ) </code></pre> <p>The headers and params would be empty, or None and redirect is True. Page in question: <a href="https://boardgamegeek.com/browse/boardgame/page/11" rel="nofollow noreferrer">https://boardgamegeek.com/browse/boardgame/page/11</a></p>
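A first debugging step (a sketch; the cause could be a login wall or bot detection serving different HTML to non-browser clients, which is only a guess here) is to separate a fetch problem from a parse problem: dump `res.text` for page 11 and scan it for the row ids directly, without BeautifulSoup. The stdlib `html.parser` is enough for that check:

```python
from html.parser import HTMLParser


class RowCollector(HTMLParser):
    """Collect the ids of <tr> tags whose id starts with 'row_'."""

    def __init__(self):
        super().__init__()
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            attr = dict(attrs)
            if attr.get("id", "").startswith("row_"):
                self.rows.append(attr["id"])


def count_rows(html_text: str) -> int:
    parser = RowCollector()
    parser.feed(html_text)
    return len(parser.rows)
```

If `count_rows(res.text)` is 0 while the browser's view-source shows rows, the server is returning different content to the script (cookies, headers, or login state differ); if it is nonzero, the problem is in the BeautifulSoup query instead.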
<python><web-scraping>
2024-05-24 09:27:07
0
17,352
Tjorriemorrie
78,527,617
3,161,120
FastAPI - how to handle generic exceptions in websocket endpoints
<p>I would like to learn what is the recommended way of handling exceptions in FastAPI application for <code>websocket</code> endpoints.</p> <p>I tried:</p> <pre><code>app.add_exception_handler(Exception, handle_generic_exception) </code></pre> <p>It catches <code>Exception</code>, but it doesn't catch, e.g. <code>ValueError</code>.</p> <p>I also tried to use <code>@app.middleware(&quot;http&quot;)</code> but it doesn't seem to work with websockets.</p> <pre><code>from fastapi import FastAPI, Request, Response, WebSocket app = FastAPI() app.add_exception_handler(AppException, handle_disconnect) @app.middleware(&quot;http&quot;) async def generic_exception_middleware(request: Request | WebSocket, call_next): try: return await call_next(request) except Exception as exc: send_something_to_client() print(&quot;some exception&quot;) @app.websocket(&quot;/ws&quot;) async def ws(websocket: WebSocket): raise ValueError(&quot;foo&quot;) </code></pre> <p>Would anyone of you know what is the proper way of handling <strong>generic</strong> exceptions for websocket endpoints?</p> <p><strong>EDIT 2024-05-28:</strong></p> <p>I finally added the decorator for handling exceptions for my websocket endpoints. But if there is any more elegant solution, please let me know!</p> <pre><code>def handle_exceptions(func): &quot;&quot;&quot;Decorator for handling exceptions.&quot;&quot;&quot; @functools.wraps(func) async def wrapper(websocket: WebSocket): try: await func(websocket) except WebSocketDisconnect as exc: raise exc except AppError as exc: await app_error_handler(websocket, exc) except Exception as exc: # pylint: disable=broad-exception-caught await generic_exception_handler(websocket, exc) return wrapper [...] 
@handle_exceptions async def accept_api_v1(websocket: WebSocket): &quot;&quot;&quot;Handle api v1 request.&quot;&quot;&quot; await websocket.accept() do_things() @app.websocket(&quot;/api/v1&quot;) async def ws_api(websocket: WebSocket): &quot;&quot;&quot;API v1 endpoint&quot;&quot;&quot; await accept_api_v1(websocket) </code></pre>
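Starlette's error middleware generally covers only HTTP scopes, which is why a per-endpoint wrapper like the one in the edit is the common workaround for WebSockets. Stripped of the FastAPI specifics, the pattern is a generic async decorator — a sketch with a stand-in error handler (names are illustrative):

```python
import asyncio
import functools


def handle_exceptions(on_error):
    """Wrap an async endpoint so any uncaught exception goes to on_error(exc)."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            try:
                return await func(*args, **kwargs)
            except Exception as exc:  # deliberate catch-all, like the question's version
                return await on_error(exc)
        return wrapper
    return decorator


async def log_and_report(exc):
    # In the real app this would send a close frame / error payload to the client.
    return f"handled: {exc}"


@handle_exceptions(on_error=log_and_report)
async def ws_endpoint():
    raise ValueError("foo")
```

Parameterizing the handler keeps one decorator reusable across endpoints; in the real app, re-raise `WebSocketDisconnect` inside the wrapper before the catch-all, as the question's edit does.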
<python><websocket><error-handling><fastapi>
2024-05-24 09:22:01
1
1,830
gbajson
78,527,577
6,108,107
Pandas fillna('value') followed by df.replace('value',np.nan) not working
<p>For some reason df.replace() is not working for me after I pivot my data. I am going from long form data to wide form. I want to fill nan values with a dummy value, pivot, then turn the dummy values back into nans using replace, but replace is not working. On further investigation it seems that the 'yy' value is not being recognised as the same as the fillna value so the function cant find anything to replace. e.g.</p> <blockquote> <p>&quot;Checking again for 'yy' values presence: False&quot;</p> </blockquote> <p>I don't know what's going on. Note this also still happens for me using inplace = True, regex=True and if I put the find and replace items in a dictionary e.g. {'yy':np.nan}. My real data is being read from an excel sheet using read_excel.</p> <pre><code>import pandas as pd import numpy as np # Load example data data_as_dict ={'SiteID': {0: 'Somewhere Creek D/S', 1: 'Somewhere Creek D/S', 2: 'Somewhere Creek D/S', 3: 'Somewhere Creek D/S', 4: 'Somewhere Creek D/S', 5: 'Somewhere Creek D/S', 6: 'Somewhere Creek D/S', 7: 'Somewhere Creek D/S', 8: 'Somewhere Creek D/S'}, 'ParameterID': {0: 'EW_APHA1030E.IONBAL', 1: 'EW_APHA1030E.IONBAL', 2: 'EW_APHA1030E.SUM_OF_IONS', 3: 'EW_APHA1030E.SUM_OF_IONS', 4: 'EW_APHA1030E.TFSS', 5: 'EW_APHA2120C_UV.COLOUR_TRUE', 6: 'EW_APHA2130.TURB_BEFORE', 7: 'EW_APHA2320.ALK_BICAR', 8: 'EW_APHA2320.ALK_BICAR'}, 'SampleDate': {0: '2017-04-03 09:30:00', 1: '2019-04-17 13:30:00', 2: '2017-04-03 09:30:00', 3: '2017-04-03 09:30:01', 4: '2017-04-03 09:30:00', 5: '2017-04-03 09:30:00', 6: '2017-04-03 09:30:00', 7: '2017-04-03 09:30:00', 8: '2019-04-17 13:30:00'}, 'Reading': {0: 15.0, 1: -0.7, 2: 278.0, 3: 975.0, 4: 278.0, 5: 35.0, 6: 20.0, 7: 98.0, 8: 230.0}, 'SampledBy': {0: 'dafdsfd', 1: np.nan, 2: 'dafdsfd', 3: np.nan, 4: 'dafdsfd', 5: 'dafdsfd', 6: 'dafdsfd', 7: 'dafdsfd', 8: np.nan}, 'LabID': {0: 'dagfdfda', 1: np.nan, 2: 'dagfdfda', 3: np.nan, 4: 'dagfdfda', 5: 'dagfdfda', 6: 'dagfdfda', 7: 'dagfdfda', 8: np.nan}, 'Overflow': 
{0: np.nan, 1: np.nan, 2: np.nan, 3: np.nan, 4: np.nan, 5: np.nan, 6: np.nan, 7: np.nan, 8: np.nan}, 'Symbol': {0: '%', 1: '%', 2: 'mg/L', 3: 'mg/L', 4: 'mg/L', 5: 'Hazen', 6: 'NTU', 7: 'mg/L', 8: 'mg/L'}, 'Description': {0: 'Anion-Cation Balance', 1: 'Anion-Cation Balance', 2: 'Sum of Ions', 3: 'Sum of Ions', 4: 'TFSS', 5: 'Colour (True)', 6: 'Turbidity', 7: 'Bicarbonate Alkalinity as CaCO3', 8: 'Bicarbonate Alkalinity as CaCO3'}, 'Parameter': {0: 'Anion-Cation Balance %', 1: 'Anion-Cation Balance %', 2: 'Sum of Ions mg/L', 3: 'Sum of Ions mg/L', 4: 'TFSS mg/L', 5: 'Colour (True) Hazen', 6: 'Turbidity NTU', 7: 'Bicarbonate Alkalinity as CaCO3 mg/L', 8: 'Bicarbonate Alkalinity as CaCO3 mg/L'}} df=pd.DataFrame.from_dict(data_as_dict) # Replace NaNs with &quot;yy&quot; df = df.fillna(&quot;yy&quot;) # Pivot the data dsf = df.pivot(index=['SiteID', 'ParameterID', 'SampleDate', 'SampledBy', 'LabID', 'Overflow', 'Symbol', 'Description' ], columns='Parameter', values='Reading') # replace values dsf = dsf.replace('yy',np.nan) # Check for 'yy' values print(&quot;Checking for 'yy' values presence:&quot;) print((dsf == 'yy').any().any()) </code></pre>
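A likely culprit (an inference, not confirmed in the question): `pivot` moves columns such as `SampledBy` and `LabID` into a MultiIndex, and `DataFrame.replace` operates only on the values, never on the index — so the `'yy'` sentinels survive in the index, while `(dsf == 'yy')` also compares only values and reports False. The usual fix is `dsf = dsf.reset_index()` before replacing (or replace within the index levels directly). The plain-Python shape of the problem:

```python
import math

rows = [
    {"site": "yy", "param": "A", "reading": 15.0},
    {"site": "S1", "param": "B", "reading": "yy"},
]

# "Pivoting" moves 'site' into the key, like pandas moves columns into the index.
keyed = {row["site"]: {row["param"]: row["reading"]} for row in rows}


def replace_values(mapping, old, new):
    """Replace sentinels in the *values* only - keys are untouched,
    just as DataFrame.replace leaves the (Multi)Index alone."""
    return {k: {ik: (new if iv == old else iv) for ik, iv in v.items()}
            for k, v in mapping.items()}


cleaned = replace_values(keyed, "yy", math.nan)
```

After this, the `'yy'` that landed in a key is still there, exactly like the sentinels in the pivoted DataFrame's index — which is why checking only the values reports no occurrences.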
<python><pandas><numpy>
2024-05-24 09:14:09
1
578
flashliquid
78,527,525
476
Allow only certain fields of Pydantic model to be passed to FastAPI endpoint
<p>Let's say I have a Pydantic model with validation:</p> <pre><code>Name = Annotated[str, AfterValidator(validate_name)] class Foo(BaseModel): id: UUID = Field(default_factory=uuid4) name: Name </code></pre> <p>And a FastAPI endpoint:</p> <pre><code>@app.post('/foos') def create_foo(foo: Foo) -&gt; Foo: save_to_database(foo) return foo </code></pre> <p>I only want the caller to be able to pass a value for <code>name</code>, but not for <code>id</code>. Is there any way to do something like this?</p> <pre><code>def create_foo(foo: Annotated[Foo, Body(include=['id'])]) -&gt; Foo: </code></pre> <p>I know I can do:</p> <pre><code>@app.post('/foos') def create_foo(name: Annotated[str, Body(embed=True)]) -&gt; Foo: foo = Foo(name=name) save_to_database(foo) return foo </code></pre> <p>But then the implicit validation error handling doesn't work anymore, and I need to add more code to do that.</p> <p>Any elegant way of handling that?</p>
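One common answer (not the only one) is a separate input model holding only the client-settable fields — in FastAPI terms something like `class FooIn(BaseModel): name: Name`, with the endpoint taking `FooIn` and constructing the full `Foo`; the 422 validation errors then still come for free. The shape of that pattern, shown framework-free with dataclasses (all names illustrative):

```python
from dataclasses import dataclass, field
from uuid import UUID, uuid4


def validate_name(name: str) -> str:
    if not name.strip():
        raise ValueError("name must be non-empty")
    return name


@dataclass
class FooIn:
    """What the client may send: name only."""
    name: str

    def __post_init__(self):
        self.name = validate_name(self.name)


@dataclass
class Foo:
    """The full record: id is server-generated and never client-supplied."""
    name: str
    id: UUID = field(default_factory=uuid4)


def create_foo(payload: FooIn) -> Foo:
    return Foo(name=payload.name)
```

Splitting input and storage models duplicates a field name, but it makes the API contract explicit: anything not on the input model simply cannot be set by the caller.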
<python><fastapi><pydantic><pydantic-v2>
2024-05-24 09:05:49
1
524,499
deceze
78,527,502
7,295,169
What is the mouse press and release event in Flet?
<p>I am trying to learn Python Flet, but I have found that Flet <code>Button</code> only has a &quot;press event&quot; but no &quot;release event&quot;. Where is the &quot;release event&quot;?</p> <pre class="lang-py prettyprint-override"><code>import flet as ft import threading import time import asyncio async def main(page: ft.Page): async def on_click(e): print(&quot;clicked&quot;) async def on_release(e): print(&quot;where is the release ?&quot;) page.add( ft.ElevatedButton(&quot;T&quot;, on_click=on_click) ) ft.app(main) </code></pre>
<python><flet>
2024-05-24 08:59:49
0
1,193
jett chen
78,527,216
8,968,910
python: split column value by another column value in dataframe
<p>I have a df:</p> <pre><code> string word 0 anfhfd f 1 rnvkds v 2 bkfsgk k </code></pre> <p>Code:</p> <pre><code>import pandas as pd df = pd.DataFrame( {'string':['anfhfd', 'rnvkds', 'bkfsgk'], 'word':['f', 'v', 'k'] } ) </code></pre> <p>I need to split the first word in column 'string', so I tried:</p> <pre><code> df[['want', 'notwant']] = df['string'].str.split(df['word'].values[0], n=1, expand=True) </code></pre> <p>result:</p> <pre><code> string word want notwant 0 anfhfd f an hfd 1 rnvkds v rnvkds None 2 bkfsgk k bk sgk </code></pre> <p>The result above is not what I want. It seems like it splits only 'f' in each row. My expected result is:</p> <pre><code> string word want notwant 0 anfhfd f an hfd 1 rnvkds v rn kds 2 bkfsgk k b fsgk </code></pre> <p>Is there any good way to split the word and only keep the column 'want'? I don't need the column 'notwant'</p>
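The snippet uses `df['word'].values[0]` — the first row's separator — for every row, which is why only `'f'` is split everywhere. The separator has to vary per row, e.g. `df['want'] = [s.split(w, 1)[0] for s, w in zip(df['string'], df['word'])]` (or the slower `df.apply(lambda r: r['string'].split(r['word'], 1)[0], axis=1)`). The core per-row logic:

```python
def split_first(strings, separators):
    """For each (string, sep) pair, keep the part before the first sep."""
    return [s.split(sep, 1)[0] for s, sep in zip(strings, separators)]
```

Since only the 'want' part is needed, taking `[0]` of each split avoids creating the 'notwant' column at all.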
<python><pandas><dataframe><split>
2024-05-24 07:58:16
1
699
Lara19
78,526,821
3,103,767
pandas csv to object list is slow
<p>I have a data file like the following (simplified, I have more columns):</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: right;">timestamp</th> <th style="text-align: right;">frame_idx</th> <th style="text-align: right;">gaze_pos_x</th> <th style="text-align: right;">gaze_pos_y</th> <th style="text-align: right;">gaze_dir_x</th> <th style="text-align: right;">gaze_dir_y</th> <th style="text-align: right;">gaze_dir_z</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: right;">2269.17</td> <td style="text-align: right;">45</td> <td style="text-align: right;">893.314</td> <td style="text-align: right;">500.136</td> <td style="text-align: right;">0.165454</td> <td style="text-align: right;">-0.0222454</td> <td style="text-align: right;">0.985967</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: right;">2274.17</td> <td style="text-align: right;">45</td> <td style="text-align: right;">896.61</td> <td style="text-align: right;">502.564</td> <td style="text-align: right;">0.176397</td> <td style="text-align: right;">-0.0098666</td> <td style="text-align: right;">0.98427</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: right;">2279.17</td> <td style="text-align: right;">46</td> <td style="text-align: right;">900.592</td> <td style="text-align: right;">499.049</td> <td style="text-align: right;">0.189087</td> <td style="text-align: right;">-0.018215</td> <td style="text-align: right;">0.981791</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: right;">2284.17</td> <td style="text-align: right;">46</td> <td style="text-align: right;">906.321</td> <td style="text-align: right;">478.184</td> <td style="text-align: right;">0.18891</td> <td style="text-align: right;">-0.0307506</td> <td style="text-align: right;">0.981513</td> </tr> <tr> <td style="text-align: 
right;">4</td> <td style="text-align: right;">2289.17</td> <td style="text-align: right;">46</td> <td style="text-align: right;">893.465</td> <td style="text-align: right;">502.793</td> <td style="text-align: right;">0.175493</td> <td style="text-align: right;">-0.0210113</td> <td style="text-align: right;">0.984257</td> </tr> <tr> <td style="text-align: right;">5</td> <td style="text-align: right;">2294.17</td> <td style="text-align: right;">46</td> <td style="text-align: right;">898.629</td> <td style="text-align: right;">497.182</td> <td style="text-align: right;">0.190142</td> <td style="text-align: right;">-0.0151722</td> <td style="text-align: right;">0.981639</td> </tr> <tr> <td style="text-align: right;">6</td> <td style="text-align: right;">2299.3</td> <td style="text-align: right;">46</td> <td style="text-align: right;">893.554</td> <td style="text-align: right;">496.782</td> <td style="text-align: right;">0.183007</td> <td style="text-align: right;">-0.0150504</td> <td style="text-align: right;">0.982996</td> </tr> <tr> <td style="text-align: right;">7</td> <td style="text-align: right;">2304.3</td> <td style="text-align: right;">46</td> <td style="text-align: right;">905.338</td> <td style="text-align: right;">482.343</td> <td style="text-align: right;">0.188236</td> <td style="text-align: right;">-0.0249608</td> <td style="text-align: right;">0.981807</td> </tr> <tr> <td style="text-align: right;">8</td> <td style="text-align: right;">2309.3</td> <td style="text-align: right;">46</td> <td style="text-align: right;">897.44</td> <td style="text-align: right;">495.476</td> <td style="text-align: right;">0.187434</td> <td style="text-align: right;">-0.0199951</td> <td style="text-align: right;">0.982074</td> </tr> <tr> <td style="text-align: right;">9</td> <td style="text-align: right;">2424.3</td> <td style="text-align: right;">48</td> <td style="text-align: right;">893.358</td> <td style="text-align: right;">495.474</td> <td style="text-align: 
right;">0.171512</td> <td style="text-align: right;">-0.0198278</td> <td style="text-align: right;">0.984982</td> </tr> </tbody> </table></div> <p>And an object like this (again simplified):</p> <pre class="lang-py prettyprint-override"><code>class Gaze: def __init__(self, ts, frame_idx, gaze2D, gaze_dir3D=None): self.ts = ts self.frame_idx = frame_idx self.gaze2D = gaze2D self.gaze_dir3D = gaze_dir3D </code></pre> <p>where <code>gaze2D</code> is a numpy array containing <code>[gaze_pos_x, gaze_pos_y]</code> and <code>gaze_dir3D</code> is a numpy array containing <code>[gaze_dir_x, gaze_dir_y, gaze_dir_z]</code>.</p> <p>I want to efficiently load in the data file and make one <code>Gaze</code> object per row. I have implemented the below, but this is very slow:</p> <pre class="lang-py prettyprint-override"><code>def readDataFromFile(fileName): gazes = [] data = pd.read_csv(str(fileName), delimiter='\t', index_col=False, dtype=defaultdict(lambda: float, frame_idx=int)) allCols = tuple([c for c in data.columns if col in c] for col in ( 'gaze_pos','gaze_dir')) # allCols -&gt; ([gaze_pos_x, gaze_pos_y],[gaze_dir_x, gaze_dir_y, gaze_dir_z]), a list can be empty if a set of columns is missing (gaze_dir is optional) # run through all rows for _, row in data.iterrows(): frame_idx = int(row['frame_idx']) # must cast to int as pd.Series seems to lose typing of dataframe.... :s ts = row['timestamp'] # get all values (None if columns not present) # again need to cast to float despite all items in the series being a float, because the dtype of the series is object... :s args = tuple(row[c].astype('float').to_numpy() if c else None for c in allCols) gazes.append(Gaze(ts, frame_idx, *args)) return gazes </code></pre> <p>As said, this is very slow, the row iteration takes forever, it is prohibitively slow for my use case. Is there a more efficient way of doing this? Using a similar read-in function using a <code>csv.DictReader</code> is a little faster but still way too slow.</p>
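`iterrows` constructs a pandas Series per row (plus the per-row `astype`/`to_numpy` casts), which dominates the runtime. The usual cure — a hedged sketch, column names as in the question — is to pull each column group out once, e.g. `pos = data[['gaze_pos_x', 'gaze_pos_y']].to_numpy()`, and then zip over plain arrays. The pure-Python core of that idea, with lists standing in for the extracted numpy columns:

```python
class Gaze:
    def __init__(self, ts, frame_idx, gaze2D, gaze_dir3D=None):
        self.ts = ts
        self.frame_idx = frame_idx
        self.gaze2D = gaze2D
        self.gaze_dir3D = gaze_dir3D


def build_gazes(ts_col, frame_col, pos_xy_rows, dir_xyz_rows=None):
    """One pass over pre-extracted columns instead of per-row Series objects."""
    if dir_xyz_rows is None:
        dir_xyz_rows = [None] * len(ts_col)
    return [Gaze(ts, int(fi), xy, xyz)
            for ts, fi, xy, xyz in zip(ts_col, frame_col, pos_xy_rows, dir_xyz_rows)]
```

In the pandas version this would be called as `build_gazes(data['timestamp'].to_numpy(), data['frame_idx'].to_numpy(), data[pos_cols].to_numpy(), data[dir_cols].to_numpy() if dir_cols else None)`; the casts happen once per column instead of once per row.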
<python><pandas><performance><object>
2024-05-24 06:31:59
2
983
Diederick C. Niehorster
78,526,684
1,107,474
Multiple time series, with horizontal scroll and toggle on/off
<p>I am creating a Python plotly graph to display prices of multiple stocks. I'd like to be able to scroll horizontally and toggle each series on/off.</p> <p>I found some examples here but there was no toggle:</p> <p><a href="https://plotly.com/python/time-series/" rel="nofollow noreferrer">https://plotly.com/python/time-series/</a></p> <p>I found a toggle example (below) but it doesn't contain a horizontal scroll bar and it seems very complicated to add toggle behavior.</p> <p>Would someone be able to help with a simpler example, say two stock series, with toggling and horizontal scroll? I can then extend it to 3/4/5/6 series.</p> <p><a href="https://stackoverflow.com/questions/65941253/plotly-how-to-toggle-traces-with-a-button-similar-to-clicking-them-in-legend">Plotly: How to toggle traces with a button similar to clicking them in legend?</a></p> <pre><code>import numpy as np import pandas as pd import plotly.graph_objects as go import datetime NPERIODS = 200 np.random.seed(123) df = pd.DataFrame(np.random.randint(-10, 12, size=(NPERIODS, 4)), columns=list('ABCD')) datelist = pd.date_range(datetime.datetime(2020, 1, 1).strftime('%Y-%m-%d'), periods=NPERIODS).tolist() df['dates'] = datelist df = df.set_index(['dates']) df.index = pd.to_datetime(df.index) df.iloc[0] = 0 df = df.cumsum() # set up multiple traces traces = [] buttons = [] for col in df.columns: traces.append(go.Scatter(x=df.index, y=df[col], visible=True, name=col) ) traces.append(go.Scatter(x=df.index, y=df[col]+20, visible=True, name=col) ) buttons.append(dict(method='restyle', label=col, visible=True, args=[{'visible':True},[i for i,x in enumerate(traces) if x.name == col]], args2=[{'visible':'legendonly'},[i for i,x in enumerate(traces) if x.name == col]] ) ) allButton = [ dict( method='restyle', label=col, visible=True, args=[{'visible':True}], args2=[{'visible':'legendonly'}] ) ] # create the layout layout = go.Layout( updatemenus=[ dict( type='buttons', direction='right', x=0.7, y=1.3, 
showactive=True, buttons=allButton + buttons ) ], title=dict(text='Toggle Traces',x=0.5), showlegend=True ) fig = go.Figure(data=traces,layout=layout) # add dropdown menus to the figure fig.show() </code></pre>
<python><plotly>
2024-05-24 05:50:14
1
17,534
intrigued_66
78,526,620
1,224,075
Python 3 type hints do not differentiate between `bytes` and `str`
<p>Consider the following functions:</p> <pre class="lang-py prettyprint-override"><code>def abc(o:bytes): print(o) def xyz(o:str): print(o) </code></pre> <p>Regardless of whether I pass a <code>str</code> or <code>bytes</code> object, I see the functions working. I would have expected an error for the type mismatch, but no such thing happens:</p> <pre><code>&gt;&gt;&gt; abc('abc') abc &gt;&gt;&gt; abc(b'abc') b'abc' &gt;&gt;&gt; xyz('xyz') xyz &gt;&gt;&gt; xyz(b'xyz') b'xyz' </code></pre> <p>What does the PEP standard say about this? Other than explicitly checking the type, is there a way to prevent this?</p>
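This is documented behavior: PEP 484 makes annotations hints for static checkers only, and CPython never enforces them at runtime — running `mypy` over this code would flag `abc('abc')` as an error even though the interpreter accepts it. To reject wrong types at runtime, the check has to be explicit:

```python
def abc(o: bytes) -> bytes:
    # Runtime enforcement of what the annotation only hints at.
    if not isinstance(o, bytes):
        raise TypeError(f"expected bytes, got {type(o).__name__}")
    return o
```

Third-party libraries can generate such checks from the annotations automatically, but under the hood they do the same `isinstance` test.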
<python><python-3.x>
2024-05-24 05:27:09
1
2,107
tinkerbeast
78,526,576
13,079,519
Using Python to do report automation
<p>I am trying to create a report automation tool where I can do the following things with Python:</p> <ol> <li>First import pictures, then have the ability to label each picture and send them to different sections based on their label (a, b, c) later. (My idea would be to create a folder to contain all the pictures and just make my script go through the pictures and let me choose the preset label)</li> <li>Then the tool will send the pictures to ChatGPT with APIs to get some response/feedback, and it has the ability to ask ChatGPT to rewrite, or let me just edit (delete, change, add) the text myself. (Already sorted out the ChatGPT API part)</li> <li>After everything is good, gather the response/feedback text along with the picture and turn it into a PDF file. (Did some research and think Reportlab can do this)</li> </ol> <p>I am kinda stuck on the part where the tool lets me edit the ChatGPT response/feedback (step 2). It would be great to get some help or ideas on this tool. Thanks in advance!</p>
<python><automation><report>
2024-05-24 05:06:42
1
323
DJ-coding
78,526,427
8,968,910
Python: pivot table with growing columns
<p>I have a table df:</p> <pre><code> class teacher January February 0 A Mary 4 3 1 B Ann 5 7 2 C Kevin 6 8 </code></pre> <p>code:</p> <pre><code>import pandas as pd df = pd.DataFrame( {'class':['A', 'B', 'C'], 'teacher':['Mary', 'Ann', 'Kevin'], 'January':['4', '5', '6'], 'February':['3', '7', '8'] } ) </code></pre> <p>I need to pivot month columns to rows as new_df:</p> <pre><code> month class teacher count 0 January A Mary 4 1 January B Ann 5 2 January C Kevin 6 3 February A Mary 3 4 February B Ann 7 5 February C Kevin 8 </code></pre> <p>And the month columns might grow in the future like this, so I need to pivot all of the month to new_df:</p> <p>df in future:</p> <pre><code> class teacher January February March 0 A Mary 4 3 4 1 B Ann 5 7 6 2 C Kevin 6 8 4 </code></pre> <p>Not really sure how to convert df to new_df by pivot. Do I need to swap columns and rows first?</p>
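This is a wide-to-long reshape, which pandas spells `df.melt(id_vars=['class', 'teacher'], var_name='month', value_name='count')` — any month columns added later are picked up automatically because everything outside `id_vars` is melted. The operation itself, shown framework-free:

```python
def melt(rows, id_cols, var_name, value_name):
    """Turn each non-id column of each row into its own output row."""
    out = []
    for row in rows:
        for col, val in row.items():
            if col in id_cols:
                continue
            rec = {k: row[k] for k in id_cols}
            rec[var_name] = col
            rec[value_name] = val
            out.append(rec)
    return out
```

(pandas orders the melted output column by column — all January rows, then all February rows — whereas this sketch goes row by row; a sort on the month column reconciles the two.)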
<python><pivot-table>
2024-05-24 04:04:59
1
699
Lara19
78,526,373
1,759,557
How can I play mp3 files from Python3 on Raspberry Pi 4 (running Ubuntu server 22.04, 64 bit)?
<p>I'd like to do this:</p> <pre><code>playsound(&quot;/dev/shm/tts-speech.mp3&quot;) </code></pre> <p>I used to use <em>playsound</em> in Ubuntu 20.04. It doesn't work in Ubuntu 22.04, so I used <code>playsound from preferredsoundplayer</code>. That works fine on my pc, but doesn't on my rpi4.</p> <p>I get strange output, and no sound:</p> <pre><code>&gt;&gt;&gt; import preferredsoundplayer as ps &gt;&gt;&gt; ps.playsound(&quot;/dev/shm/tts-speech.mp3&quot;) ('\n', '') </code></pre> <p>What is the simplest way to play sound files on rpi4 from python (or fix preferredsoundplayer)?</p>
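A low-dependency route on a headless Pi (a sketch — the player names are common choices such as `mpg123 file.mp3` or `ffplay -nodisp -autoexit file.mp3`, but neither is guaranteed to be installed) is to shell out to whichever CLI player exists:

```python
import shutil
import subprocess

# Candidate players and their quiet-playback argument vectors (assumptions:
# these must be installed separately, e.g. via apt).
PLAYERS = {
    "mpg123": ["mpg123", "-q"],
    "ffplay": ["ffplay", "-nodisp", "-autoexit", "-loglevel", "quiet"],
}


def pick_player(candidates=PLAYERS, which=shutil.which):
    """Return the argv prefix of the first installed player, else None."""
    for name, argv in candidates.items():
        if which(name):
            return argv
    return None


def playsound(path):
    argv = pick_player()
    if argv is None:
        raise RuntimeError("no CLI audio player found (tried mpg123, ffplay)")
    subprocess.run(argv + [path], check=True)
```

Injecting `which` keeps the selection logic testable without any player installed; the strange `('\n', '')` return from `preferredsoundplayer` suggests it is shelling out in a similar way and finding no working backend.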
<python><audio><mp3><ubuntu-22.04><python-playsound>
2024-05-24 03:37:34
1
495
user1759557
78,526,271
4,002,633
PyDev 11.0.3: warning: Debugger speedups using cython not found. Run command fails to improve the situation
<p>This is a recurrent question on StackOverflow and I have browsed the history of them. Alas none of those answers apply to the problem I have at hand (I'm an old hand at PyDev and have done this many times), which is that I get the standard warning, which provides the standard run request:</p> <pre><code>0.10s - warning: Debugger speedups using cython not found. Run '&quot;E:\Venv\Testing\Scripts\python.exe&quot; &quot;E:\Utility\Eclipse\dropins\PyDev.11.0.3\plugins\org.python.pydev.core_11.0.3.202310301107\pysrc\setup_pydevd_cython.py&quot; build_ext --inplace' to build. pydev debugger: starting (pid: 2484) </code></pre> <p>And then I run the command suggested and it completes successfully and without complaint:</p> <pre><code>(Testing) E:\&gt;&quot;E:\Venv\Testing\Scripts\python.exe&quot; &quot;E:\Utility\Eclipse\dropins\PyDev.11.0.3\plugins\org.python.pydev.core_11.0.3.202310301107\pysrc\setup_pydevd_cython.py&quot; build_ext --inplace running build_ext copying build\lib.win-amd64-cpython-312\_pydevd_bundle\pydevd_cython.cp312-win_amd64.pyd -&gt; _pydevd_bundle </code></pre> <p>Yet the complaint is still present on every debug run. It's not gone away.</p> <p>Now that is a puzzle. It has always worked as it should in past, over many versions of PyDev and on many platforms I've used.</p> <p>Aside: I am using 11.0.3 because 12.0.0 (which is released) is not available:</p> <p><a href="https://github.com/fabioz/Pydev/releases/download/pydev_12_0_0/" rel="nofollow noreferrer">https://github.com/fabioz/Pydev/releases/download/pydev_12_0_0/</a></p> <p>But I was probably using 11.0.3 on this very PC before the rebuild (I'm reinstalling after a rebuild of my PC post a nasty malware infection, gotta love Windoze).</p>
<python><debugging><pydev>
2024-05-24 02:47:20
1
2,192
Bernd Wechner
78,526,251
15,200,553
Efficient implementation for random sampling
<p>If you have 4 lists A, B, C, and D that contain objects of particular length, like list <code>A</code> has <code>n_a</code> elements and all the elements are of width 1, list <code>B</code> has <code>n_b</code> elements and all the elements are of width 2, list <code>C</code> has <code>n_c</code> elements and all the elements are of width 3, and list <code>D</code> has <code>n_d</code> elements and all the elements are of width 4. The goal is to create an array whose elements are randomly sampled from A, B, C and D according to a user defined probability distribution like {1: 0.25, 2: 0.25, 3: 0.5, 4: 0.0}. For instance, with distribution={1: 0.25, 2: 0.25, 3: 0.5, 4: 0.0} the output grid should yield random elements with roughly 25% 1-width elements from list A, 25% 2-width elements from list B and 50% 3-width elements from list C, while disallowing 4-width elements.</p> <p>Here is how I am randomly sampling elements from all 4 lists:</p> <pre><code>import numpy as np import random A = [1, 2, 3] B = ['aa', 'bb', 'cc'] C = [111, 222, 333] D = ['aaaa', 'bbbbb', 'cccc'] num_of_rows = 4 # no. of rows in the final grid num_of_coulmns = 5 # no. of columns in the final grid # Define the probabilities p_A = 0.25 p_B = 0.25 p_C = 0.5 p_D = 0.0 probabilities = [p_A, p_B, p_C, p_D] # Create the random length array by sampling from the lists result_array = [] for _ in range(num_of_rows): # Choose a list based on the probabilities chosen_list = random.choices([A, B, C, D], weights=probabilities, k=1)[0] # Sample a random element from the chosen list element = random.choice(chosen_list) # Append the element to the result array result_array.append(element) </code></pre> <p>But this method will be too inefficient for larger values of rows and columns of grid (like 1000). Is there a better way to achieve this.</p>
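The per-cell `random.choices([...], weights=..., k=1)` calls are the bottleneck: each one rebuilds the cumulative-weight table. Drawing all the width choices in one call (or with `numpy.random.choice(len(lists), size=n, p=probabilities)` for a fully vectorized version) amortizes that setup. A sketch keeping the question's four lists:

```python
import random


def sample_elements(lists, weights, n, seed=None):
    """Draw n elements; list i is chosen with probability weights[i]."""
    rng = random.Random(seed)
    # One weighted draw of all list indices instead of n separate k=1 calls.
    picks = rng.choices(range(len(lists)), weights=weights, k=n)
    return [rng.choice(lists[i]) for i in picks]
```

For a full `num_of_rows * num_of_columns` grid, set `n` to the total cell count and reshape afterwards; the seed parameter makes runs reproducible.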
<python><numpy><random>
2024-05-24 02:37:18
3
304
Shravan Patel
78,526,206
546,218
How does one disassemble Python graal bytecode?
<p>I have been considering extending the cross-version python disassembler <a href="https://pypi.org/project/xdis/" rel="nofollow noreferrer">xdis</a> for Python Graal.</p> <p>GraalPython provides a Python Code type that is similar to <a href="https://docs.python.org/3.10/c-api/code.html?#c.PyCode_New" rel="nofollow noreferrer">Python's Code type</a>, but the underlying bytecode bytes <a href="https://docs.python.org/3.10/library/inspect.html?highlight=co_code#types-and-members" rel="nofollow noreferrer"><code>co_code</code></a> is different. In Python, these are <a href="https://docs.python.org/3/library/dis.html#opcode-collections" rel="nofollow noreferrer">bytecode-encoded Python bytecode instructions</a> . In Graal, I am given to understand that this contains JVM bytecode, but there seems to be more than just instructions.</p> <p>Recall that bytecode operands typically are indexes into some other table like a constants pool or a variable-name list. Even though Graal's code type has this information stored in the other parts of the code type in the way that Python does it, I suspect there are <em>additional</em> tables in the <code>co_code</code> byte array.</p> <p>To give some idea of what is in the <code>co_code</code> bytearray, here its value consider this file</p> <pre class="lang-py prettyprint-override"><code>def five(): return 5 </code></pre> <p>Using GraalVM Python 3.8.5 (GraalVM CE Native 22.2.0), a hexdump of <code>python -m compileall /tmp/five.py</code> gives:</p> <pre><code>87654321 0011 2233 4455 6677 8899 aabb ccdd eeff 0123456789abcdef ------------------------------------------------- 00000000: 9e52 0d0a 0000 0000 dd5b 6867 1900 0000 .R.......[hg.... 
00000010: c30c 0000 002f 746d 702f 6669 7665 2e70 ...../tmp/five.p 00000020: 7940 0000 009a 0000 000f 0007 6669 7665 y@..........five 00000030: 2e70 7900 0c2f 746d 702f 6669 7665 2e70 .py../tmp/five.p 00000040: 7900 0000 1964 6566 2066 6976 6528 293a y....def five(): 00000050: 0a20 2020 2072 6574 7572 6e20 350a 0000 . return 5... 00000060: 025b 5d72 2fc8 0a00 0000 0000 0000 0000 .[]r/........... 00000070: 0000 0000 0000 0000 0000 0000 0101 0004 ................ 00000080: 6669 7665 724b cb05 0100 0000 0000 0000 fiverK.......... 00000090: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 000000a0: 0000 0702 1901 1501 724b cb05 0004 6669 ........rK....fi 000000b0: 7665 02ff ffff 0000 3007 1208 0120 011c ve......0.... .. 000000c0: 1901 0501 0000 0000 0000 00 </code></pre> <p>The above hexdump contains module information, the main code, and it looks like embedded source text. The bytecode for function five() might be around 0x80.</p> <p>Changing the return value from 5 to 6 changes:</p> <pre><code>00000080: 6669 7665 724b cb05 0100 0000 0000 0000 fiverK.......... </code></pre> <p>to:</p> <pre><code>00000080: 6669 7665 36c7 9bee 0100 0000 0000 0000 five6........... </code></pre> <p>In sum, how does one decipher this? Are there tools that can be used for doing so?</p> <p><em>I had a bounty added to this which has expired with no potential answers. Should someone sufficiently answer this in the future and want a bounty for it, let me know after the answer is accepted.</em></p> <p><strong>Edit note:</strong> I have had problems with getting a hex dump that seems okay. Best to create your own using <code>compileall</code> and use your own hex dump routine.</p>
<python><bytecode><graalpython>
2024-05-24 02:11:11
0
7,138
rocky
78,526,120
3,161,801
Middleware Metadata Service - Google App Engine
<p>I have the following sync service which is creating an error on the metadata service. Can I ask for help interpreting this? What is the metadata service? Is there a link to what this is doing? The error below appears when the service starts. This error prevents the sync service from starting.</p> <pre><code>sync.yml service: sync instance_class: F2 automatic_scaling: max_instances: 1 runtime: python312 app_engine_apis: true entrypoint: gunicorn -b :$PORT sync:app #inbound_services: #- warmup libraries: - name: jinja2 version: latest - name: ssl version: latest # taskqueue and cron tasks can access admin urls handlers: - url: /.* script: sync.app secure: always redirect_http_response_code: 301 env_variables: MEMCACHE_USE_CROSS_COMPATIBLE_PROTOCOL: &quot;True&quot; NDB_USE_CROSS_COMPATIBLE_PICKLE_PROTOCOL: &quot;True&quot; DEFERRED_USE_CROSS_COMPATIBLE_PICKLE_PROTOCOL: &quot;True&quot; CURRENT_VERSION_TIMESTAMP: &quot;1677721600&quot; </code></pre> <pre><code>sync.py import google.appengine.api client = ndb.Client() def ndb_wsgi_middleware(wsgi_app): def middleware(environ, start_response): with client.context(): return wsgi_app(environ, start_response) return middleware </code></pre> <p>Log that appears when the service is starting: app.wsgi_app = ndb_wsgi_middleware(google.appengine.api.wrap_wsgi_app(app.wsgi_app)) 2024-05-17 04:04:53 sync[20240515t183736] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/_datastore_query.py&quot;, line 373, in _next_batch 2024-05-17 04:04:53 sync[20240515t183736] response = yield _datastore_run_query(query) 2024-05-17 04:04:53 sync[20240515t183736] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-05-17 04:04:53 sync[20240515t183736] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/tasklets.py&quot;, line 319, in _advance_tasklet 2024-05-17 04:04:53 sync[20240515t183736] yielded = self.generator.throw(type(error), error, traceback) 
2024-05-17 04:04:53 sync[20240515t183736] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-05-17 04:04:53 sync[20240515t183736] File &quot;/layers/google.python.pip/pip/lib/python3.12/site-packages/google/cloud/ndb/_datastore_query.py&quot;, line 1030, in _datastore_run_query 2024-05-17 04:04:53 sync[20240515t183736] response = yield _datastore_api.make_call(</p> <p>2024-05-17 04:04:45 sync[20240515t183736] google.api_core.exceptions.RetryError: Maximum number of 3 retries exceeded while calling &lt;function make_call..rpc_call at 0x3e956b7625c0&gt;, last exception: 503 Getting metadata from plugin failed with error: Failed to retrieve <a href="http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true" rel="nofollow noreferrer">http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true</a> from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable</p>
<python><google-app-engine><google-cloud-compute-engine>
2024-05-24 01:23:02
0
775
ffejrekaburb
78,526,105
5,722,359
Is there a compact f-string expression for printing non-str object with space formatting?
<p>f-string allows a very compact expression for printing str objects with spacing like so:</p> <pre><code>a = &quot;Hello&quot; print(f'{a=:&gt;20}') a= Hello </code></pre> <p>Is there a way to do the same for other objects like so:</p> <pre><code>from pathlib import Path b=Path.cwd() print(f'{b=:&gt;20}') Traceback (most recent call last): File &quot;/usr/lib/python3.10/idlelib/run.py&quot;, line 578, in runcode exec(code, self.locals) File &quot;&lt;pyshell#11&gt;&quot;, line 1, in &lt;module&gt; TypeError: unsupported format string passed to PosixPath.__format__ </code></pre> <p>An alternative is:</p> <pre><code>print(f'b={str(b):&gt;20}') b= /home/user </code></pre> <p>But this method loses the object info that is shown when I do:</p> <pre><code>print(f'{b=}') b=PosixPath('/home/user') </code></pre> <p>The desired outcome is to print</p> <pre><code>b= PosixPath('/home/user') </code></pre> <p>Multiple printed statements should show something like:</p> <pre><code>self.project= PosixPath('/home/user/project') self.project_a= PosixPath('/home/user/project/project') self.longer= PosixPath('/home/user/project/project/icons/project/pic.png') self.PIPFILELOCK= PosixPath('/home/user/project/Pipfile.lock') self.VIRTUALENV= PosixPath('/home/user/.local/share/virtualenvs/project-mKDFEK') </code></pre>
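One option that keeps the object info while still accepting an alignment spec is the `!r` conversion: it runs `repr()` first, and the format spec is then applied to the resulting string rather than to the object (which is why the bare spec fails on `PosixPath`). A minimal sketch; the combined `=!r:spec` form should also work, since the `=` debug form accepts a conversion and spec:

```python
from pathlib import Path

b = Path("/home/user")

# !r converts the value with repr() first, so the width/alignment spec
# is applied to the string "PosixPath('/home/user')" instead of being
# passed to PosixPath.__format__ (which rejects non-empty specs)
out = f'b={b!r:>30}'
print(out)

# the self-documenting '=' form can carry the same conversion and spec
out2 = f'{b=!r:>30}'
print(out2)
```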
<python>
2024-05-24 01:15:05
2
8,499
Sun Bear
78,525,945
2,986,153
Fit same model to many datasets in Python
<p>Below I demonstrate a workflow for fitting the same model to many datasets in R by nesting datasets by <code>test_id</code>, and then fitting the same model to each dataset, and extracting a statistic from each model.</p> <p>My goal is to create the equivalent workflow in Python, using polars, but I will use pandas if necessary.</p> <h1>Demonstration in R</h1> <pre><code>library(tidyverse) SIMS &lt;- 3 TRIALS &lt;- 1e3 PROB_A &lt;- .65 PROB_B &lt;- .67 df &lt;- bind_rows( tibble( recipe = &quot;A&quot;, trials = TRIALS, events = rbinom(n=SIMS, size=trials, prob=PROB_A), rate = events/trials) |&gt; mutate(test_id = 1:n()), tibble( recipe = &quot;B&quot;, trials = TRIALS, events = rbinom(n=SIMS, size=trials, prob=PROB_B), rate = events/trials) |&gt; mutate(test_id = 1:n()) ) df </code></pre> <p><a href="https://i.sstatic.net/wi6b8toY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wi6b8toY.png" alt="enter image description here" /></a></p> <pre><code>df_nest &lt;- df |&gt; group_by(test_id) |&gt; nest() df_nest </code></pre> <p><a href="https://i.sstatic.net/kZd2mXJb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kZd2mXJb.png" alt="enter image description here" /></a></p> <p>Define two functions to map over my nested data:</p> <pre><code>glm_foo &lt;- function(.data){ glm(formula = rate ~ recipe, data = .data, weights = trials, family = binomial) } glm_foo(df_nest$data[[1]]) fit_and_extract &lt;- function(.data){ m &lt;- glm(formula = rate ~ recipe, data = .data, weights = trials, family = binomial) m$coefficients['recipeB'] } fit_and_extract(df_nest$data[[1]]) </code></pre> <pre><code>df_nest |&gt; mutate( model = map(.x = data, .f = glm_foo), trt_b = map_dbl(.x = data, .f = fit_and_extract) ) </code></pre> <pre><code>test_id data model trt_b &lt;int&gt; &lt;list&gt; &lt;list&gt; &lt;dbl&gt; 1 &lt;tibble&gt; &lt;S3: glm&gt; 0.05606076 2 &lt;tibble&gt; &lt;S3: glm&gt; 0.11029236 3 &lt;tibble&gt; &lt;S3: glm&gt; 0.01304480 
</code></pre> <h1>Python Section</h1> <p>I can create the same nested data structure in polars, but I am unsure of how to fit the model to each nested dataset within the list column called <code>data</code>.</p> <pre><code>import polars as pl from polars import col import numpy as np SIMS = 3 TRIALS = int(1e3) PROB_A = .65 PROB_B = .67 df_a = pl.DataFrame({ 'recipe': &quot;A&quot;, 'trials': TRIALS, 'events': np.random.binomial(n=TRIALS, p=PROB_A, size=SIMS), 'test_id': np.arange(SIMS) }) df_b = pl.DataFrame({ 'recipe': &quot;B&quot;, 'trials': TRIALS, 'events': np.random.binomial(n=TRIALS, p=PROB_B, size=SIMS), 'test_id': np.arange(SIMS) }) df = (pl.concat([df_a, df_b], rechunk=True) .with_columns( fails = col('trials') - col('events') )) df </code></pre> <p><a href="https://i.sstatic.net/Qs4wrshn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qs4wrshn.png" alt="enter image description here" /></a></p> <pre><code>df_agg = df.group_by('test_id').agg(data = pl.struct('events', 'fails', 'recipe')) df_agg.sort('test_id') </code></pre> <p><a href="https://i.sstatic.net/eskkiXvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eskkiXvI.png" alt="enter image description here" /></a></p> <p>At this point my mental model of pandas starts to crumble. There are so many mapping options and I'm not really sure how to troubleshoot at this stage.</p> <pre><code>df_agg.with_columns( ( pl.struct([&quot;data&quot;]).map_batches( lambda x: smf.glm('events + fails ~ recipe', family=sm.families.Binomial(), data=x.struct.field('data').to_pandas()).fit() ) ).alias(&quot;model&quot;) ) </code></pre> <p>ComputeError: PatsyError: Error evaluating factor: TypeError: cannot use <code>__getitem__</code> on Series of dtype List(Struct({'events': Int64, 'fails': Int64, 'recipe': String})) with argument 'recipe' of type 'str' events + fails ~ recipe</p>
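A couple of observations on this. For the nest-and-map workflow itself, one route worth trying is polars' `partition_by('test_id')`, which yields one small DataFrame per group that can be handed to `smf.glm(...)` after `.to_pandas()`, sidestepping the struct column entirely. Separately, for this particular saturated two-level binomial model the fitted `recipeB` coefficient reduces to the difference in log-odds between the two recipes, so each group can be "fit" without statsmodels at all. A pure-Python sketch of that shortcut (the data values here are made up, not the random draws from the question):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

# one row per (test_id, recipe); illustrative counts
rows = [
    {"test_id": 1, "recipe": "A", "events": 650, "trials": 1000},
    {"test_id": 1, "recipe": "B", "events": 670, "trials": 1000},
    {"test_id": 2, "recipe": "A", "events": 640, "trials": 1000},
    {"test_id": 2, "recipe": "B", "events": 685, "trials": 1000},
]

# "nest" by test_id: map each test_id to its per-recipe observed rate
groups = {}
for r in rows:
    groups.setdefault(r["test_id"], {})[r["recipe"]] = r["events"] / r["trials"]

# for a saturated rate ~ recipe binomial GLM, the recipeB coefficient is
# exactly logit(rate_B) - logit(rate_A), computed independently per group
trt_b = {tid: logit(g["B"]) - logit(g["A"]) for tid, g in groups.items()}
```

The same per-group loop structure applies if a full statsmodels fit per group is needed; only the body of the dict comprehension changes.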
<python><r><python-polars>
2024-05-23 23:37:13
1
3,836
Joe
78,525,904
5,284,054
Python tkinter notebook as a method in a class
<p>This is a problem with creating a tkinter notebook in a method in a class.</p> <p>This question is solved by removing <code>master</code>, which is not my problem: <a href="https://stackoverflow.com/questions/68323776/python-tkinter-how-add-a-notebook-class-into-a-tk-toplevel">Python Tkinter - How add a notebook class into a tk.toplevel?</a>.</p> <p>This question is about adding images, and it doesn't have the same structure (from which I might have been able to infer the solution): <a href="https://stackoverflow.com/questions/16514617/python-tkinter-notebook-widget">Python Tkinter Notebook widget</a>.</p> <p>When I create the notebook from <code>__init__</code>, the notebook opens just fine.</p> <pre><code>import tkinter as tk from tkinter import ttk class Stakeholders(tk.Frame): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) notebook = ttk.Notebook(self.master) notebook.pack(pady=10, expand=True) # create frames frame1 = ttk.Frame(notebook, width=400, height=280) frame2 = ttk.Frame(notebook, width=400, height=280) frame1.pack(fill='both', expand=True) frame2.pack(fill='both', expand=True) # add frames to notebook notebook.add(frame1, text='Tab 1') notebook.add(frame2, text='Tab 2') def create_window(): root = tk.Tk() root.geometry('400x300') root.title('Stakeholders') Stakeholders(root) root.mainloop() if __name__ == &quot;__main__&quot;: create_window() </code></pre> <p>When I put the notebook in a method of the class, the window opens but there is no notebook, no tabs.</p> <pre><code>import tkinter as tk from tkinter import ttk class Stakeholders(tk.Frame): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.make_notebook() def make_notebook(self): self.notebook = ttk.Notebook(self) self.notebook.pack(pady=10, expand=True) frame1 = ttk.Frame(self.notebook, width=300, height=300) frame2 = ttk.Frame(self.notebook, width=300, height=300) frame1.pack(fill='both', expand=True) frame2.pack(fill='both', expand=True) 
self.notebook.add(frame1, text='Owner') self.notebook.add(frame2, text='Architect') def create_window(): root = tk.Tk() root.geometry('400x300') root.title('Stakeholders') Stakeholders(root) root.mainloop() if __name__ == &quot;__main__&quot;: create_window() </code></pre>
<python><class><tkinter><methods>
2024-05-23 23:13:28
1
900
David Collins
78,525,821
2,864,497
Handling C2016 error on Windows using Visual Studio Code
<p>I have to use someone else's C header files, which include empty structs. I have no control over these headers or I would change them as empty structs are not conventional C. The structs are throwing C2016 errors, as expected with the standard compiler in Visual Studio Code (on Windows). The original author of the headers is using some other compiler, which allows empty structs.</p> <p>Here is an example of the error I'm receiving:</p> <pre><code>message_definitions.h(45): error C2016: C requires that a struct or union have at least one member </code></pre> <p>Here is an example of the structs:</p> <pre><code>typedef struct { } Controller_Do_Graceful_Shutdown_t; </code></pre> <p>According to what I've read you are permitted empty structs using other compilers, such as gcc. I have installed gcc and have verified it exists:</p> <pre><code>gcc -v Using built-in specs. COLLECT_GCC=C:\msys64\ucrt64\bin\gcc.exe COLLECT_LTO_WRAPPER=C:/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/13.2.0/lto-wrapper.exe Target: x86_64-w64-mingw32 Configured with: ../gcc-13.2.0/configure --prefix=/ucrt64 --with-local-prefix=/ucrt64/local --build=x86_64-w64-mingw32 --host=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --with-native-system-header-dir=/ucrt64/include --libexecdir=/ucrt64/lib --enable-bootstrap --enable-checking=release --with-arch=nocona --with-tune=generic --enable-languages=c,lto,c++,fortran,ada,objc,obj-c++,jit --enable-shared --enable-static --enable-libatomic --enable-threads=posix --enable-graphite --enable-fully-dynamic-string --enable-libstdcxx-filesystem-ts --enable-libstdcxx-time --disable-libstdcxx-pch --enable-lto --enable-libgomp --disable-libssp --disable-multilib --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-libiconv --with-system-zlib --with-gmp=/ucrt64 --with-mpfr=/ucrt64 --with-mpc=/ucrt64 --with-isl=/ucrt64 --with-pkgversion='Rev3, Built by MSYS2 project' 
--with-bugurl=https://github.com/msys2/MINGW-packages/issues --with-gnu-as --with-gnu-ld --disable-libstdcxx-debug --with-boot-ldflags=-static-libstdc++ --with-stage1-ldflags=-static-libstdc++ Thread model: posix Supported LTO compression algorithms: zlib zstd gcc version 13.2.0 (Rev3, Built by MSYS2 project) </code></pre> <p><strong>The hard part</strong></p> <p>I'm using cffi in Python to &quot;import&quot; the C headers, so the C compiler being used is whatever the ffibuilder decides to use. Out of the box it uses the Microsoft C compiler, which throws the C2016 errors. :-(</p> <p>Here is the cffi code:</p> <pre><code>from cffi import FFI ffibuilder = FFI() ffibuilder.set_source(&quot;_message_definitions&quot;, # name of the output C extension &quot;&quot;&quot; #include &quot;message_definitions.h&quot; &quot;&quot;&quot;, sources=['message_definitions.c'], libraries=[]) if __name__ == &quot;__main__&quot;: ffibuilder.compile(verbose=True) </code></pre> <p>Is there a way to tell cffi to use gcc instead, or suppress the C2016 errors being thrown?</p>
<python><c><windows><gcc><cffi>
2024-05-23 22:30:43
1
557
Kenny Cason
78,525,590
20,591,261
TensorFlow model can't predict on polars dataframe
<p>I trained a TensorFlow model for text classification, but I can't use a Polars DataFrame to make my predictions on it. However, I can use a Pandas DataFrame.</p> <pre><code>import pandas as pd import polars as pl import joblib from tensorflow.keras.models import load_model loaded_model = load_model('model.keras') load_Le = joblib.load('label_encoder.joblib') </code></pre> <p>If I do:</p> <pre><code>text = &quot;some example text&quot; df = pd.DataFrame({&quot;Coment&quot;:[text]}) preddict = loaded_model.predict(df[&quot;Coment&quot;]) </code></pre> <p>I have no problems, but if I do:</p> <pre><code>text = &quot;some example text&quot; df = pl.DataFrame({&quot;Coment&quot;:[text]}) preddict = loaded_model.predict(df[&quot;Coment&quot;]) </code></pre> <p>I get <code>TypeError: cannot convert the argument `type_value`: String to a TensorFlow Dtype.</code></p> <p>Any advice?</p> <p>Some extra info:</p> <p>Before saving my model, I added this so I can predict on any text (works fine with pandas):</p> <pre><code>inputs = keras.Input(shape=(1,), dtype=&quot;string&quot;) processed_inputs = text_vectorization(inputs) outputs = model(processed_inputs) inference_model = keras.Model(inputs, outputs) inference_model.save('model.keras') </code></pre>
<python><tensorflow><keras><tensorflow2.0><python-polars>
2024-05-23 21:13:52
1
1,195
Simon
78,525,564
823,859
Find matching rows in dataframes based on number of matching items
<p>I have two topic models, <code>topics1</code> and <code>topics2</code>. They were created from very similar but different datasets. As a result, the words representing each topic/cluster as well as the topic numbers will be different for each dataset. A toy example looks like:</p> <pre><code>import pandas as pd topics1 = pd.DataFrame({'topic_num':[1,2,3], 'words':[['red','blue','green'], ['blue','sky','cloud'], ['eat','food','nomnom']] }) topics2 = pd.DataFrame({'topic_num':[1,2,3], 'words':[['blue','sky','airplane'], ['blue','green','yellow'], ['mac','bit','byte']] }) </code></pre> <p>For each topic in <code>topics1</code>, I would like to find the topic in <code>topics2</code> with the maximum number of matches. In the above example, in <code>topics1</code> topic_num 1 would match topic_num 2 in <code>topics2</code>, and topic_num 2 in <code>topics1</code> would match topic_num 1 in <code>topics2</code>. In both of these cases, 2 of the 3 words in each row match across dataframes.</p> <p>Is there a way to find this using built-in <code>pandas</code> functions such as <code>eq()</code>? My solution just iterated across every word in <code>topics1</code> and every word in <code>topics2</code>.</p>
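Because the match criterion here is set overlap rather than positional equality, `eq()` does not express it directly; one common approach is to convert each word list to a set and take the argmax of intersection sizes. A minimal sketch with plain dicts (the same logic can be applied row-wise to the DataFrames' `words` columns):

```python
topics1 = {1: ["red", "blue", "green"],
           2: ["blue", "sky", "cloud"],
           3: ["eat", "food", "nomnom"]}
topics2 = {1: ["blue", "sky", "airplane"],
           2: ["blue", "green", "yellow"],
           3: ["mac", "bit", "byte"]}

# precompute sets for topics2 so each comparison is O(len(words))
sets2 = {k: set(v) for k, v in topics2.items()}

# for each topic in topics1, the topics2 topic with the largest word overlap
best_match = {
    k1: max(sets2, key=lambda k2: len(set(w1) & sets2[k2]))
    for k1, w1 in topics1.items()
}
```

Ties (e.g. topic 3 here, which overlaps nothing) resolve to the first candidate; keeping `(overlap, k2)` tuples instead would let a minimum-overlap threshold be enforced.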
<python><pandas><match><topic-modeling>
2024-05-23 21:06:44
3
7,979
Adam_G
78,525,411
1,549,476
Writing to a file in python does not consistently change its mtime
<p>If I run the following code</p> <pre><code>import os import time def check_different_times(): try: os.remove(&quot;temp&quot;) except FileNotFoundError: pass with open(&quot;temp&quot;, &quot;w&quot;) as f: f.write(&quot;hi&quot;) first = os.stat(&quot;temp&quot;).st_mtime_ns time.sleep(0.001) with open(&quot;temp&quot;, &quot;w&quot;) as f: f.write(&quot;b&quot;) second = os.stat(&quot;temp&quot;).st_mtime_ns return first != second print(sum(check_different_times() for _ in range(100))) </code></pre> <p>Based on my understanding of how unix timestamps work, this should print 100, given that 0.001s is 1e6 nanoseconds, so the two modification times should obviously be different.</p> <p>However, if I run this on my Ubuntu laptop, it prints something like 28 or 30. If I remove the <code>sleep</code> line it prints something like 1 or 2. Is this some weird buffering thing, a bug in Ubuntu, in Python, or a flaw in my understanding of what mtime is?</p>
<python><ubuntu><filesystems><filemtime>
2024-05-23 20:25:28
1
4,483
k_g
78,525,363
19,962,393
Algorithm for compound fractions
<p>I have a set of N chemical compounds enumerated 1, 2,..., N. For each compound, I have the fraction of each of its constituents, &quot;A&quot;, &quot;B&quot;, and so on. Compounds can also contain other compounds, in which case the corresponding fraction is given. For instance, for N = 5, a sample set is</p> <pre><code>mixes = { 1: { &quot;A&quot;: 0.32, &quot;B&quot;: 0.12, &quot;C&quot;: 0.15, 2: 0.41 }, 2: { &quot;C&quot;: 0.23, &quot;D&quot;: 0.12, &quot;E&quot;: 0.51, 4: 0.14 }, 3: { &quot;A&quot;: 0.24, &quot;E&quot;: 0.76 }, 4: { &quot;B&quot;: 0.13, &quot;F&quot;: 0.01, &quot;H&quot;: 0.86 }, 5: { &quot;G&quot;: 0.1, 2: 0.4, 3: 0.5 } } </code></pre> <p>I would like an algorithm that gives the net fraction of each constituent in every compound, i.e.</p> <pre><code>mixes = { 1: { &quot;A&quot;: 0.32, &quot;B&quot;: 0.12 + 0.41 * 0.14 * 0.13, &quot;C&quot;: 0.15 + 0.41 * 0.23, &quot;D&quot;: 0.41 * 0.12, &quot;E&quot;: 0.41 * 0.51, &quot;F&quot;: 0.41 * 0.14 * 0.01, &quot;H&quot;: 0.41 * 0.14 * 0.86 }, 2: { &quot;B&quot;: 0.14 * 0.13, &quot;C&quot;: 0.23, &quot;D&quot;: 0.12, &quot;E&quot;: 0.51, &quot;F&quot;: 0.14 * 0.01, &quot;H&quot;: 0.14 * 0.86 }, 3: { &quot;A&quot;: 0.24, &quot;E&quot;: 0.76 }, 4: { &quot;B&quot;: 0.13, &quot;F&quot;: 0.01, &quot;H&quot;: 0.86 }, 5: { &quot;A&quot;: 0.5 * 0.24, &quot;G&quot;: 0.1, &quot;B&quot;: 0.4 * 0.14 * 0.13, &quot;C&quot;: 0.4 * 0.23, &quot;D&quot;: 0.4 * 0.12, &quot;E&quot;: 0.4 * 0.51 + 0.5 * 0.76, &quot;F&quot;: 0.4 * 0.14 * 0.01, &quot;H&quot;: 0.4 * 0.14 * 0.86 } } </code></pre> <p>My current approach involves recursion, but I'd like to know if there's a clever way to do this. Perhaps using a tree-like data structure may help?</p> <p>EDIT: for simplicity, assume that there are no cyclic relationships in the dataset.</p>
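Recursion is a reasonable fit here, and memoization keeps it efficient: since there are no cycles, the structure is a DAG, so each compound only needs to be resolved once and the result can be reused wherever that compound appears as an ingredient. A sketch, assuming (as in the example) that compound keys are ints and constituent names are strings:

```python
def resolve_all(mixes):
    """Expand nested compounds into net fractions of base constituents."""
    memo = {}  # compound id -> fully resolved {constituent: fraction}

    def net(cid):
        if cid not in memo:
            out = {}
            for part, frac in mixes[cid].items():
                if part in mixes:  # nested compound: expand it recursively
                    for elem, f in net(part).items():
                        out[elem] = out.get(elem, 0.0) + frac * f
                else:              # base constituent: accumulate directly
                    out[part] = out.get(part, 0.0) + frac
            memo[cid] = out
        return memo[cid]

    return {cid: net(cid) for cid in mixes}

mixes = {
    1: {"A": 0.32, "B": 0.12, "C": 0.15, 2: 0.41},
    2: {"C": 0.23, "D": 0.12, "E": 0.51, 4: 0.14},
    3: {"A": 0.24, "E": 0.76},
    4: {"B": 0.13, "F": 0.01, "H": 0.86},
    5: {"G": 0.1, 2: 0.4, 3: 0.5},
}
net_fractions = resolve_all(mixes)
```

Since every compound's own fractions sum to 1, each resolved dict should also sum to 1, which makes a handy sanity check.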
<python><algorithm><recursion><data-structures><hierarchical-data>
2024-05-23 20:09:41
1
2,327
CrisPlusPlus
78,525,282
23,260,297
return specific column values when NaN value is present in same row
<p>I am merging a dataframe with a static dataframe (a table in a spreadsheet) that results in a dataframe that looks like this:</p> <pre><code>Counterparty DealType Commodity Product Aron Buy AAA NaN Aron Buy AAA NaN Aron Buy AAA NaN Aron Buy BBB prod1 Aron Buy BBB prod1 Aron Buy BBB prod1 Aron Buy CCC NaN Aron Buy CCC NaN Aron Buy CCC NaN </code></pre> <p>If the dataframe has no NaN values in the Product column then the merge was successful and my program can continue.</p> <p>However, if NaN values do exist in the Product column I need to update my spreadsheet with the missing values.</p> <p>This is the code I have, which returns true if there are any NaN values:</p> <pre><code>if (df['Product'].isnull().any()): print('missing values found') # get list of missing values sys.exit(0) </code></pre> <p>But I want to return a list of the missing values like <code>['AAA', 'CCC']</code></p>
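The same null mask used in the `if` can also index the `Commodity` column directly, so the list of unmatched commodities falls out of one expression. A sketch (the toy data mirrors the table above):

```python
import pandas as pd

df = pd.DataFrame({
    "Commodity": ["AAA", "AAA", "BBB", "BBB", "CCC", "CCC"],
    "Product":   [None,  None,  "prod1", "prod1", None, None],
})

# reuse the null mask to pull the Commodity values that failed to merge
if df["Product"].isnull().any():
    missing = df.loc[df["Product"].isnull(), "Commodity"].unique().tolist()
    print("missing values found for:", missing)
```

`unique()` keeps first-appearance order, so the result here is `['AAA', 'CCC']`; wrap it in `sorted(...)` if a stable alphabetical order matters.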
<python><pandas>
2024-05-23 19:49:33
1
2,185
iBeMeltin
78,525,180
6,930,441
Facet_row_spacing increasing with increasing facet rows
<p>I'm sure there is a simple solution to this but after digging around for a few hours, I can't seem to find the answer. In short - the more rows I add to a faceted series of scatterplots, the greater the gap between the rows (despite trying to hardcode in the desired row gap height)</p> <p>I have a button on a Dash dashboard that generates a plotly scatter plot based on data housed in a table:</p> <p><a href="https://i.sstatic.net/UNmrKsEDl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UNmrKsEDl.png" alt="enter image description here" /></a></p> <p>The function to generate the plot:</p> <pre><code>def generate_all_curves(unique_key_list): hit_curve_data_df = df_curves[df_curves['Unique_key'].isin(unique_key_list)] num_rows = math.ceil(len(unique_key_list)/2) print(&quot;Num rows: &quot;+str(num_rows)) facet_row_max_spacing = 1/num_rows if facet_row_max_spacing &gt; 0.04: facet_row_max_spacing = 0.04 print(&quot;Spacing: &quot;+str(facet_row_max_spacing)) all_curves_figure = px.scatter(hit_curve_data_df, x='Temps', y='Smooth Fluorescence', color='Subplot', color_discrete_sequence=['#FABF73','#F06D4E'], hover_name='Well', hover_data={'Final_decision':False, 'Assay_Plate':False, 'Temps':False, 'Smooth Fluorescence':False, 'Error':True, 'Ctrl_Tm_z-score':True}, #Hover data (Tooltip in Spotfire) facet_col='Unique_key', facet_col_wrap=2, facet_col_spacing=0.08,facet_row_spacing = facet_row_max_spacing,#Facet plots by plate and only allow 2 columns. Column spacing had to be adjusted to allow for individual y-axes render_mode = 'auto', height = 200*num_rows) #Height of plots is equal to half the number of plates (coz 2 columns) with each plot 300px high. 
Width will have to be adjusted return all_curves_figure </code></pre> <p>And then the callback to generate the graphs based on the rows present in a data table:</p> <pre><code>@app.callback( Output('all_graphs_div', 'children'), [Input('generate', 'n_clicks'), Input('clear', 'n_clicks')], [State('results_table_datatable', 'data')]) def update_or_clear_graphs(generate_clicks, clear_clicks, current_table_data): ctx = callback_context if not ctx.triggered: raise PreventUpdate trigger_id = ctx.triggered[0]['prop_id'].split('.')[0] if trigger_id == 'generate': if generate_clicks &gt; 0: key_list = [d['Unique_key'] for d in current_table_data] new_figure = generate_all_curves(key_list) new_figure.update_yaxes(matches=None) new_figure.for_each_yaxis(lambda yaxis: yaxis.update(showticklabels=True)) new_figure.update_layout(height=200 * len(key_list)/2) #Each plot is 400px high (i.e. 200 is half of 400) graph_object = dcc.Graph(id='all_graphs_object', figure=new_figure, style={'font-family': 'Arial'}) return graph_object elif trigger_id == 'clear': if clear_clicks &gt; 0: return None raise PreventUpdate </code></pre> <p>And finally, the layout of these objects (just the relevant section)</p> <pre><code> #Child A: Data table html.Div([ #Data table html.Div([ dash_table.DataTable( default_datatable.to_dict('records'), [{'name': i, 'id': i} for i in default_datatable.loc[:, ['Source_Plate','Well','Subplot','Compound','Fraction','Final_Tm','No. 
Std Dev','Relative_amplitude','Unique_key']]], id = 'results_table_datatable', hidden_columns=['Unique_key'], css=[{'selector': '.show-hide', 'rule': 'display: none'},{'selector':'.export','rule': 'margin:5px'}], row_deletable=True, sort_action='native', export_format='xlsx', style_data_conditional=data_table_style_data_conditional, style_table = {'height':'400px','overflow-y':'scroll'}, style_as_list_view=True, style_cell={'fontSize':12, 'font-family':'Arial'}, style_header = {'backgroundColor': '#cce6ff','fontWeight': 'bold'}),#Styling of table ], id = 'results_table'), #Generate all hit graphs button html.Div([html.Button('Generate all graphs', id='generate', n_clicks=None, style = {'margin-top':'20px', 'margin-right': '20px'})], style = {'display':'inline-block','vertical-align':'top'}), #Clear all graphs button html.Div([html.Button('Clear all graphs', id='clear', n_clicks=0, style = {'margin-top':'20px'})],style = {'display':'inline-block','vertical-align':'top'}), #Div with the plots html.Div([],id = 'all_graphs_div', style = {'height':'500px','overflow-y':'scroll'}) ], id='left_panel', style = {'display':'inline-block','vertical-align':'top', 'width': '50%','overflow-y':'scroll','overflow-x':'scroll', 'height':'90%', 'padding':'10px','margin-top':'5px'}), #Styling of table container </code></pre> <p>As you'll see, I've tried to dynamically change the facet_row_spacing. I try and keep it at 0.04 when I only have a handful of plots, as that looks most aesthetically pleasing. Once it's below that mark, I let it drop to whatever it needs to be to work.</p> <p>The trouble comes in when I start increasing the number of plots. Despite my attempts to define the facet row spacing, this space seems to increase with an increasing number of facet plots.</p> <p>E.g. Here are the plots when there are 9 plots (i.e. 
split into two columns = 5 rows, spacing apparently set to 0.04): <a href="https://i.sstatic.net/3GFZBVull.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GFZBVull.png" alt="enter image description here" /></a></p> <p>However, when I bump this up to 18 plots (9 rows, the spacing is still apparently set to 0.04 as per my print statements), but you'll notice that the space between each row of plots has increased.</p> <p><a href="https://i.sstatic.net/JpZ9vmv2l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpZ9vmv2l.png" alt="enter image description here" /></a></p> <p>Things got weird when I tried cranking this up to 56 rows (112 plots), which were (according to the print statements) supposed to have a facet_row_spacing of 0.0178571429. However, that inter-row spacing is just getting bigger and bigger as the number of plots or rows of plots increases.</p> <p><a href="https://i.sstatic.net/BcI5ABzul.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BcI5ABzul.png" alt="enter image description here" /></a></p> <p>What on earth is going on and what am I doing wrong to have this weird inverse behaviour? Is there something equivalent to new_figure.update_layout(facet_row_spacing = x) that I can put in the call back?</p> <p><strong>EDIT: Following suggestion of:</strong></p> <pre><code>height = 200*num_rows facet_row_max_spacing = 40/height </code></pre> <p>The plots have definitely become less squashed (yay!) but now the curves are no longer sitting inside their designated plot areas:</p> <p><a href="https://i.sstatic.net/2fDyyjvM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fDyyjvM.png" alt="enter image description here" /></a></p> <p>In the above case, there were 56 rows with spacing of 0.00357</p>
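The behaviour described above is consistent with `facet_row_spacing` being a fraction of the overall figure height: 0.04 of a 1800 px figure is a much larger pixel gap than 0.04 of a 600 px figure, so a fixed fraction grows in absolute terms as rows (and therefore height) are added. One way to hold the pixel gap constant is to derive the fraction from the figure height each time. A sketch; the 40 px gap, the 200 px row height, and the cap just under plotly's roughly `1/(rows-1)` upper bound are assumptions to adjust:

```python
def facet_row_spacing_for(num_rows, row_px=200, gap_px=40):
    """Fraction of total figure height that yields a ~gap_px gap between rows."""
    height = row_px * num_rows          # must match the height= passed to px.scatter
    spacing = gap_px / height           # constant pixel gap expressed as a fraction
    if num_rows > 1:
        # plotly rejects spacings at or above about 1/(rows - 1), so stay under it
        spacing = min(spacing, 0.9 / (num_rows - 1))
    return spacing
```

Keeping the `height` used here identical to the one passed to `px.scatter` (and not re-setting it afterwards in `update_layout` with a different value) matters, since changing the height after the fact changes what the fraction means in pixels.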
<python><plotly><plotly-dash><scatter-plot><dashboard>
2024-05-23 19:24:30
1
456
Rainman
78,525,108
850,781
How do I force post-init field coersion?
<p>Suppose I want a field to be coerced to a specific type:</p> <pre><code>@dataclass class Foo: bar: Bar def __post_init__(self): self.bar = to_bar(self.bar) </code></pre> <p>is <a href="https://docs.python.org/3/library/dataclasses.html#dataclasses.__post_init__" rel="nofollow noreferrer"><code>__post_init__</code></a> the right way?</p>
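For stdlib dataclasses, `__post_init__` is indeed the standard hook for this kind of coercion (libraries like attrs and pydantic offer declarative converters/validators, but plain dataclasses do not). A minimal sketch with `int` standing in for `Bar`/`to_bar`:

```python
from dataclasses import dataclass

@dataclass
class Foo:
    bar: int

    def __post_init__(self):
        # coerce whatever was passed into the declared type;
        # the annotation alone is never enforced at runtime
        self.bar = int(self.bar)

f = Foo("42")
```

One caveat: the coercion happens only because `__post_init__` performs it; assigning `f.bar = "x"` later bypasses it, so the guarantee holds at construction time only.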
<python><python-dataclasses>
2024-05-23 19:08:20
0
60,468
sds
78,524,950
1,048,520
How to (better) get NaN data from pandas dataframe into new dataframe?
<p>I have a dataframe and am currently creating a new dataframe with the column names and number of empty cells like this.</p> <pre class="lang-py prettyprint-override"><code>empty = pd.DataFrame(columns=['Column', 'NaNs']) for (columnName, columnData) in dataset.items(): empty.loc[-1] = [columnName, columnData.isnull().any().sum()] empty.index = empty.index + 1 empty = empty.sort_index() </code></pre> <p>This is 5 lines for a simple overview table.</p> <p>I wonder if there's a better, shorter way of achieving the same with <code>transpose</code> and <code>apply</code> or something else which I couldn't figure out so far.</p>
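The loop can typically collapse into a single vectorised expression, because `df.isnull().sum()` already produces the per-column counts as a Series (note that the loop's `.isnull().any().sum()` yields 0/1 per column rather than an actual count). A sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, None, 3],
                   "b": [None, None, "x"],
                   "c": [1, 2, 3]})

# per-column NaN counts, reshaped into the two-column overview table
empty = (df.isnull().sum()
           .rename_axis("Column")
           .reset_index(name="NaNs"))
```

`rename_axis` names the index (which becomes the `Column` column), and `reset_index(name=...)` names the counts.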
<python><pandas><dataframe>
2024-05-23 18:28:38
1
2,391
Karsten S.
78,524,782
10,145,953
Pass multiple S3 files to AWS Textract API in Lambda function
<p>I have an image of a PDF containing several text fields that I want to pass to AWS Textract for text extraction. I have thus created cropped images of these specific text boxes. In order to keep all of these boxes straight (and able to determine which image came from which field), I am operating primarily within a dictionary and created a function to call on the Textract API and enter the resulting response as an entry in the dictionary. An example of the dictionary and the code is below.</p> <p>However, this method is incredibly slow as I am submitting hundreds of images to populate the fields of the dictionary. I cannot figure out how to implement some sort of parallel processing or how I can submit multiple images in one call (and keep them all straight). How can I speed up this task and decrease my processing time? With current code it takes approximately 7 minutes to extract all the text from all the images, and I need to be done in no more than 45 seconds.</p> <p>I did attempt to write the API call as <code>response = textract_client.detect_document_text(Document={'S3Object': {'Bucket': bucket_name,'Name': [&quot;sa1.png&quot;, &quot;sa2.png&quot;, &quot;sb1.png&quot;]}})</code> but that threw an error because you cannot put a list of files into the file name.</p> <p><strong>Dictionary example</strong></p> <pre><code>aws_output = { &quot;Section A&quot;: {&quot;1&quot;: extract_text(imgs['Section A']['1'], &quot;sa1&quot;), &quot;2&quot;: extract_text(imgs['Section A']['2'], &quot;sa2&quot;)}, &quot;Section B&quot;: {&quot;1&quot;: extract_text(imgs['Section B']['1'], &quot;sb1&quot;)} } </code></pre> <p><strong>Function</strong></p> <pre><code>def extract_text(img, loc): img_obj = Image.fromarray(img).convert('RGB') out_img_obj = io.BytesIO() img_obj.save(out_img_obj, format=&quot;png&quot;) out_img_obj.seek(0) file_name = key_id + &quot;_&quot; + loc + &quot;.png&quot; s3.Bucket(bucket_name).put_object(Key=file_name, Body=out_img_obj, 
ContentType=&quot;image/png&quot;) response = textract_client.detect_document_text( Document={ 'S3Object': { 'Bucket': bucket_name, 'Name': file_name } } ) status_code = response['ResponseMetadata']['HTTPStatusCode'] try: status_code == 200 text_len = {} for y in range(len(response['Blocks'])): if 'Text' in response['Blocks'][y]: text_len[y] = len(response['Blocks'][y]['Text']) else: pass if bool(text_len): extracted_text = response['Blocks'][max(text_len, key=text_len.get)]['Text'] if extracted_text == '-': extracted_text = '' else: pass else: extracted_text = '' s3.Object(bucket_name,file_name).delete() return extracted_text except: return f&quot;HTTP Status code is not 200 when running extract_text on {loc}, code is {status_code}&quot; </code></pre>
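Since `detect_document_text` accepts exactly one document per call, the usual way to cut wall-clock time is to issue those calls concurrently, e.g. with a thread pool (the work is I/O bound, so threads work well inside Lambda). A sketch of the fan-out/fan-in shape, with a stub standing in for the real upload-plus-Textract `extract_text`; the dictionary keys mirror the question and the stub's return value is made up:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_text(img, loc):
    # stub standing in for the real S3 upload + Textract call
    return f"text for {loc}"

imgs = {"Section A": {"1": ("img-a1", "sa1"), "2": ("img-a2", "sa2")},
        "Section B": {"1": ("img-b1", "sb1")}}

# flatten the nested dict so each task carries its own (section, field) address
tasks = [(sec, num, img, loc)
         for sec, fields in imgs.items()
         for num, (img, loc) in fields.items()]

aws_output = {}
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = [(sec, num, pool.submit(extract_text, img, loc))
               for sec, num, img, loc in tasks]
    for sec, num, fut in futures:
        aws_output.setdefault(sec, {})[num] = fut.result()
```

Because each future is tagged with its section and field number before submission, the results land back in the right slots regardless of completion order. Textract also enforces per-account transactions-per-second quotas, so `max_workers` may need capping (or retries with backoff) accordingly.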
<python><amazon-web-services><aws-lambda><parallel-processing><amazon-textract>
2024-05-23 17:47:06
0
883
carousallie
78,524,772
19,672,778
PIL TypeError: Cannot handle this data type: (1, 1, 299, 3), |u1
<p>So, I am trying to generate patches of the image, but I am getting this really weird error, and I don't know how to fix it. Can anyone assist me?</p> <p>When checking other questions on this platform, I first thought I had errors in my dimensions and tried to fix them, but it changed nothing...</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from patchify import patchify from PIL import Image import cv2 #ocean =Image.open(&quot;ocean.jpg&quot;) #612 X 408 ocean =cv2.imread(&quot;/kaggle/input/supercooldudeslolz/new_300.jpg&quot;) ocean = cv2.resize(ocean, (1495, 2093)) print(ocean.size) ocean = np.asarray(ocean) patches =patchify(ocean,(299,299, 3),step=299) print(patches.shape) for i in range(patches.shape[0]): for j in range(patches.shape[1]): patch = patches[i, j] patch = Image.fromarray(patch) num = i * patches.shape[1] + j patch.save(f&quot;patch_{num}.jpg&quot;) </code></pre> <p>This is the error:</p> <pre><code>--------------------------------------------------------------------------- KeyError Traceback (most recent call last) File /opt/conda/lib/python3.10/site-packages/PIL/Image.py:3089, in fromarray(obj, mode) 3088 try: -&gt; 3089 mode, rawmode = _fromarray_typemap[typekey] 3090 except KeyError as e: KeyError: ((1, 1, 299, 3), '|u1') The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) Cell In[16], line 15 13 for j in range(patches.shape[1]): 14 patch = patches[i, j] ---&gt; 15 patch = Image.fromarray(patch) 16 num = i * patches.shape[1] + j 17 patch.save(f&quot;patch_{num}.jpg&quot;) File /opt/conda/lib/python3.10/site-packages/PIL/Image.py:3092, in fromarray(obj, mode) 3090 except KeyError as e: 3091 msg = &quot;Cannot handle this data type: %s, %s&quot; % typekey -&gt; 3092 raise TypeError(msg) from e 3093 else: 3094 rawmode = mode TypeError: Cannot handle this data type: (1, 1, 299, 3), |u1 </code></pre> <p>Now my output of patches.shape prints this
shape:</p> <pre><code>(7, 5, 1, 299, 299, 3) </code></pre>
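The reported shape `(7, 5, 1, 299, 299, 3)` is the clue: because the patch size given to `patchify` is three-dimensional, a singleton axis is kept for the channel step, so each `patches[i, j]` is `(1, 299, 299, 3)`, a 4-D array that `Image.fromarray` cannot handle. A minimal sketch of the fix, indexing that axis away (the zero array is just a stand-in for the real patchify output):

```python
import numpy as np

# stand-in for the patchify output described above
patches = np.zeros((7, 5, 1, 299, 299, 3), dtype=np.uint8)

patch = patches[0, 0, 0]  # note the extra index (or: patches[0, 0].squeeze(0))
assert patch.shape == (299, 299, 3)  # now a plain H x W x C uint8 array
```

With `patch = patches[i, j, 0]` in the loop, `Image.fromarray(patch)` should accept the array.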
<python><python-imaging-library>
2024-05-23 17:44:47
1
319
NikoMolecule
78,524,706
7,169,710
Psycopg: get query size
<p>I would like to get size of a query before actually performing it with psycopg but I keep hitting</p> <pre><code>ProgrammingError: the last operation didn't produce a result </code></pre> <p>What i have currently is the following python script that should run a SQL query and return a table with its size as a result.</p> <pre class="lang-py prettyprint-override"><code>import dotenv import os import psycopg from psycopg.rows import dict_row _ = dotenv.load_dotenv(&quot;../env/norther.env&quot;) __ = dotenv.load_dotenv(&quot;../.env.development&quot;) if _ and __: print(&quot;Environment Variables Loaded Successfully&quot;) else: raise Exception(&quot;Environment Variables Not Loaded&quot;) _q = &quot;&quot;&quot; -- DROP FUNCTION IF EXISTS get_query_size; CREATE OR REPLACE FUNCTION get_query_size() RETURNS TABLE(total_size BIGINT, table_size BIGINT, index_size BIGINT) AS $$ BEGIN -- Create the temporary table CREATE TEMP TABLE QueryDataTable AS -- Generic SQL query to get data from a table with a JSONB column SELECT * from my_table; -- Gather size information RETURN QUERY SELECT pg_total_relation_size('QueryDataTable') AS total_size, pg_relation_size('QueryDataTable') AS table_size, pg_total_relation_size('QueryDataTable') - pg_relation_size('QueryDataTable') AS index_size; -- Drop the temporary table PERFORM pg_sleep(0); -- Ensures no empty result error DROP TABLE QueryDataTable; END $$ LANGUAGE plpgsql; -- To call the function and get the size SELECT * FROM get_query_size() &quot;&quot;&quot; # Connect to your PostgreSQL database conn = psycopg.connect( user=os.getenv(&quot;_DATABASE_USERNAME&quot;), password=os.getenv(&quot;_DATABASE_PASSWORD&quot;), host=os.getenv(&quot;_DATABASE_SERVER&quot;), dbname=os.getenv(&quot;_DATABASE_NAME&quot;), port=os.getenv(&quot;_PORT&quot;), row_factory=dict_row, cursor_factory=psycopg.ClientCursor, ) # Create a cursor cur = conn.cursor() # Execute the function call cur.execute(_q) # Fetch the result result = cur.fetchall() # 
Print the result print(result) # Close the cursor and connection cur.close() conn.close() </code></pre> <p>The query on PGAdmin does return a result, though.</p> <ol> <li>What am I missing?</li> <li>Is there an alternative approach to achieve this?</li> </ol> <h3>Solution</h3> <p>Thanks to <a href="https://stackoverflow.com/users/271959/frank-heikens">Frank Heikens</a>.</p> <pre class="lang-py prettyprint-override"><code>connection = psycopg.connect( user=os.getenv(&quot;_DATABASE_USERNAME&quot;), password=os.getenv(&quot;_DATABASE_PASSWORD&quot;), host=os.getenv(&quot;_DATABASE_SERVER&quot;), dbname=os.getenv(&quot;_DATABASE_NAME&quot;), port=os.getenv(&quot;_PORT&quot;), row_factory=dict_row, ) </code></pre> <pre class="lang-py prettyprint-override"><code>try: with connection as conn: cur = conn.cursor() cur.execute(&quot;BEGIN;&quot;) # Create the temporary table cur.execute(&quot;&quot;&quot; CREATE TEMP TABLE QueryDataTable ON COMMIT DROP AS SELECT * from my_table; &quot;&quot;&quot;) # Gather size information cur.execute(&quot;&quot;&quot; SELECT pg_total_relation_size('QueryDataTable') AS total_size, pg_relation_size('QueryDataTable') AS table_size, pg_indexes_size('QueryDataTable') AS index_size; &quot;&quot;&quot;) # Fetch the result result = cur.fetchall() # Print the result print(result) # Commit the transaction cur.execute(&quot;COMMIT;&quot;) except Exception as e: # If an error occurs, rollback the transaction conn.rollback() print(f&quot;An error occurred: {e}&quot;) finally: # Close the connection conn.close() </code></pre> <p>Even better, using <a href="https://www.psycopg.org/psycopg3/docs/basic/transactions.html" rel="nofollow noreferrer">Psycopg Transactions</a>:</p> <pre class="lang-py prettyprint-override"><code>try: with connection as conn: cur = conn.cursor() with conn.transaction(): # Create the temporary table cur.execute(&quot;&quot;&quot; CREATE TEMP TABLE QueryDataTable ON COMMIT DROP AS SELECT * FROM my_table; &quot;&quot;&quot;) # 
Gather size information cur.execute(&quot;&quot;&quot; SELECT pg_total_relation_size('QueryDataTable') AS total_size, pg_relation_size('QueryDataTable') AS table_size, pg_indexes_size('QueryDataTable') AS index_size; &quot;&quot;&quot;) # Fetch the result result = cur.fetchall() # Print the result print(result) except Exception as e: # If an error occurs, print the error and rollback is automatic due to context manager print(f&quot;An error occurred: {e}&quot;) finally: # Close the connection conn.close() </code></pre>
<python><sql><postgresql><psycopg3>
2024-05-23 17:28:23
1
405
Pietro D'Antuono
78,524,577
279,125
Why is alembic setting all of my fields as nullable=False?
<p>Here is my model:</p> <pre><code>class Base(DeclarativeBase): pass class AppUser(Base): __tablename__ = &quot;app_user&quot; id: Mapped[int] = mapped_column(Integer, primary_key=True, index=True) created_at: Mapped[DateTime] = mapped_column(DateTime, default=func.now()) updated_at: Mapped[DateTime] = mapped_column( DateTime, default=func.now(), onupdate=func.now(), ) telegram_id: Mapped[int] = mapped_column(BigInteger, unique=True, index=True) telegram_username: Mapped[str] = mapped_column(String(60), unique=True, index=True) language: Mapped[str] = mapped_column(String(10), default=&quot;en&quot;) comment: Mapped[str] = mapped_column(String(255)) </code></pre> <p>After I run <code>pdm run alembic revision --autogenerate -m &quot;Create a baseline migrations&quot;</code> I get the following migration script:</p> <pre><code>def upgrade() -&gt; None: # ### commands auto generated by Alembic - please adjust! ### op.create_table('app_user', sa.Column('id', sa.Integer(), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('updated_at', sa.DateTime(), nullable=False), sa.Column('telegram_id', sa.BigInteger(), nullable=False), sa.Column('telegram_username', sa.String(length=60), nullable=False), sa.Column('language', sa.String(length=10), nullable=False), sa.Column('comment', sa.String(length=255), nullable=False), sa.PrimaryKeyConstraint('id') ) op.create_index(op.f('ix_app_user_id'), 'app_user', ['id'], unique=False) op.create_index(op.f('ix_app_user_telegram_id'), 'app_user', ['telegram_id'], unique=True) op.create_index(op.f('ix_app_user_telegram_username'), 'app_user', ['telegram_username'], unique=True) # ### end Alembic commands ### </code></pre> <p>Nothing should indicate that telegram_id, telegram_username, language, and comment can't be null.
I haven't seen anything in alembic documentation that says this is default behavior and since SQLAlchemy assumes &quot;nullable=True&quot; by default I would assume alembic should as well...</p>
<python><sqlalchemy><alembic>
2024-05-23 16:55:24
1
1,082
wsaxton
78,524,565
13,866,126
Get exact line numbers of changed lines after a branch with some missing commits is merged
<p>So let's say a particular file in the main branch has the following contents:</p> <pre><code>public class Test { public static void main(String[] args) { int sum = 0; for (int i=0; i&lt;10; i++) { sum += i; } } } </code></pre> <p>Now I fork out a feature branch from main, add few changes and raise a PR to merge it to main. Adding the diff of the change.</p> <pre><code>diff --git a/Test.java b/Test.java index fb3a6d4..5379684 100644 --- a/Test.java +++ b/Test.java @@ -5,6 +5,7 @@ public class Test { for (int i=0; i&lt;10; i++) { sum += i; + log.info(&quot;Sum: {}&quot;, sum); } } } </code></pre> <p>Now, before I commit my feature branch to the main, someone else adds another commit on the main branch that affects the same file. So the contents of the file on main branch is now:</p> <pre><code>public class Test { public static void main(String[] args) { int sum = 0; log.info(&quot;Calculating sum&quot;); // This is the new line added to main by someone else for (int i=0; i&lt;10; i++) { sum += i; } } } </code></pre> <p>In my github PR diff, it will show that line number 8 is added in this PR (refer to the git diff shared above). But I'm interested in knowing what would be the line number of this change when it's merged to master, which would be 9 in this case as other person's commit added a new line above my change.</p> <p>Use case:</p> <p>I'm working with a tool that provides code analysis on a PR but the issue it reports on a specific line contains the line number after the merge is performed on the base branch (line no. 9 in our example).</p> <p>I want to post this specific line issues as a comment on a github PR, but it fails as it expects the line number of the change before the merge is performed (line no. 8 in our example).</p> <p>Note, just for the sake of simplicity I've added a one line change example. 
In practical use cases, the number of lines changed can be huge.</p> <p>I couldn't think of any solution apart from pulling in the latest changes from main into my feature branch, but this is not possible to do in my use case. Is there a way to map the line numbers from merging with the latest main to the line numbers from merging against an older version of main?</p> <p>I'm writing this in Python and using <a href="https://github.com/matiasb/python-unidiff" rel="nofollow noreferrer">unidiff</a> to parse git diffs and make sense out of it.</p> <p>Update:</p> <p>I already have the following approach in mind:</p> <ul> <li>Take out a local checkout of the repository.</li> <li>For every changed file in my branch, append the line number to it at the start/end of the line.</li> <li>Locally perform the squash and merge to main branch.</li> <li>Parse the diff of the commit due to this merge operation. Now the lines from the diff can be mapped to the old lines which are encoded at the start/end of the line content.</li> </ul> <p>But this looks more like a naive way of doing things. Looking for some better way to solve this issue. Also, I'm not sure how this would work in the case of PRs raised from branches on repo forks.</p>
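A lighter-weight alternative to the checkout-and-merge idea, sketched below under the assumption that both file versions can be fetched (e.g. via the GitHub contents API), is to map line numbers between the two versions directly with `difflib`:

```python
import difflib

def map_line(src_text, dst_text, src_lineno):
    """Map a 1-based line number in src_text to its number in dst_text,
    or None if that line was itself changed. Walks the equal blocks that
    SequenceMatcher finds between the two versions."""
    matcher = difflib.SequenceMatcher(
        a=src_text.splitlines(), b=dst_text.splitlines()
    )
    for a, b, size in matcher.get_matching_blocks():
        # block means a[a:a+size] == b[b:b+size], 0-based
        if a < src_lineno <= a + size:
            return b + (src_lineno - a)
    return None
```

Calling it with (post-merge contents, pre-merge contents, line_from_tool) would translate the tool's post-merge line 9 back to the PR diff's line 8 in the example above; lines reported as None were introduced or modified by the merge itself.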
<python><git><github><git-merge>
2024-05-23 16:51:39
1
657
JavaLearner
78,524,564
8,121,824
Read Excel File from github to deploy dash app on render
<p>I have created a dash app and am trying to deploy the app to render. Within this app, it reads multiple excel files. On my personal laptop, it reads the files locally. But I cannot seem to get render to read the files from github correctly. I don't know if it is a path issue, or what the underlying issue is.</p> <p>I have encountered a variety of errors and attempted the solutions as follows. I tried passing a relative path such as .\assets&quot;file_name.xlsx&quot;. However, render returned an error saying no file or path existed.</p> <p>I then tried to pass the url and file name: <code>pd.read_excel(&quot;https://github.com/mtdewrocks/matchup/tree/072ac999722ded50e8b2eeb649c75f091a8ecbcb/assets/2024_Pitching_Logs.xlsx&quot;, usecols=[&quot;Name&quot;, &quot;Date&quot;, &quot;Opp&quot;, &quot;W&quot;, &quot;L&quot;, &quot;IP&quot;, &quot;BF&quot;, &quot;H&quot;, &quot;R&quot;, &quot;ER&quot;, &quot;HR&quot;, &quot;BB&quot;, &quot;SO&quot;,&quot;Pit&quot;])</code>.</p> <p>However, while it does not throw a file not found error, I now get a value error saying the Excel file format cannot be determined, you must specify an engine manually.</p> <p>Therefore I specified the engine as openpyxl and got a BadZipFile - file is not a zip file. I understand this error is typically associated with a corrupt file; however, it happens with multiple files and the files are all fine locally so I don't know that this is the issue.</p> <p>Lastly, I found a question that suggested this approach:</p> <pre><code>url = &quot;https://github.com/mtdewrocks/matchup/tree/072ac999722ded50e8b2eeb649c75f091a8ecbcb/assets/Pitcher_Season_Stats.xlsx&quot; data = requests.get(url).content df = pd.read_excel(BytesIO(data)) </code></pre>
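The `BadZipFile` error fits a known failure mode: a `github.com/.../tree/...` URL returns the HTML page for the file, not the file bytes, and an `.xlsx` is a zip archive, so openpyxl rejects the HTML it actually receives. A hedged sketch of the usual rewrite to the raw-content host (the repo path is taken from the question and assumed correct; the simple string replace is illustrative, not robust against `/tree/` appearing in file paths):

```python
def to_raw_github_url(url: str) -> str:
    """Rewrite a github.com tree/blob URL to its raw.githubusercontent.com
    equivalent, which serves the actual file bytes."""
    return (
        url.replace("https://github.com/", "https://raw.githubusercontent.com/")
           .replace("/tree/", "/")
           .replace("/blob/", "/")
    )

url = to_raw_github_url(
    "https://github.com/mtdewrocks/matchup/tree/"
    "072ac999722ded50e8b2eeb649c75f091a8ecbcb/assets/2024_Pitching_Logs.xlsx"
)
# pd.read_excel(url, engine="openpyxl") should then see a real xlsx file.
```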
<python><pandas><github><render><plotly-dash>
2024-05-23 16:51:33
1
904
Shawn Schreier
78,524,356
974,925
How to Resolve 'ytmusicapi: command not found' Error on Ubuntu 18.04 with Python 3.6.9
<p>I need to generate an <code>oauth.json</code> for the <a href="https://ytmusicapi.readthedocs.io/en/stable/setup/oauth.html" rel="nofollow noreferrer">ytmusicapi library</a> using the <code>ytmusicapi oauth</code> command. I installed the ytmusicapi library with <code>pip install ytmusicapi</code> and ran <code>$ ytmusicapi oauth</code> in the Ubuntu terminal. All I get is this error:</p> <blockquote> <p>ytmusicapi: command not found</p> </blockquote> <ul> <li>Python 3.6.9</li> <li>Ubuntu 18.04.6 LTS</li> </ul> <p>How to fix this error?</p>
<python>
2024-05-23 16:00:00
1
6,034
Tom
78,524,354
20,591,261
Polars for Processing Search Terms in Text Data
<p>I have a Python script that loads search terms from a JSON file and processes a Pandas DataFrame to add new columns indicating whether certain terms are present in the text data. However, I would like to modify the script to use Polars instead of Pandas and possibly remove the JSON dependency. Here is my original code:</p> <pre><code>import pandas as pd import json class SearchTermLoader: def __init__(self, json_file): self.json_file = json_file def load_terms(self): with open(self.json_file, 'r') as f: data = json.load(f) terms = {} for phase_name, phase_data in data.items(): terms[phase_name] = ( phase_data.get('words', []), phase_data.get('exact_phrases', []) ) return terms class DataFrameProcessor: def __init__(self, df: pd.DataFrame, col_name: str) -&gt; None: self.df = df self.col_name = col_name def add_contains_columns(self, search_terms): columns_to_add = [&quot;type1&quot;, &quot;type2&quot;] for column in columns_to_add: self.df[column] = self.df[self.col_name].apply( lambda text: any( term in text for term in search_terms.get(column, ([], []))[0] + search_terms.get(column, ([], []))[1] ) ) return self.df # Example Usage data = {'text_column': ['The apple is red', 'I like bananas', 'Cherries are tasty']} df = pd.DataFrame(data) term_loader = SearchTermLoader('word_list.json') search_terms = term_loader.load_terms() processor = DataFrameProcessor(df, 'text_column') new_df = processor.add_contains_columns(search_terms) new_df </code></pre> <p>Here is an example of the json file:</p> <pre><code>{ &quot;type1&quot;: { &quot;words&quot;: [&quot;apple&quot;, &quot;tasty&quot;], &quot;exact_phrases&quot;: [&quot;soccer ball&quot;] }, &quot;type2&quot;: { &quot;words&quot;: [&quot;banana&quot;], &quot;exact_phrases&quot;: [&quot;red apple&quot;] } } </code></pre> <p>I understand that I can use the <code>.str.contains()</code> function, but I want to use it with specific words and exact phrases. 
Could you provide some guidance on how to get started with this?</p>
<python><python-polars>
2024-05-23 15:59:20
1
1,195
Simon
78,523,886
2,776,885
Python - PPTX: Resize shape to fit text
<p>Powerpoint has the following text options for shapes:</p> <p><a href="https://i.sstatic.net/pz9uMEmf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pz9uMEmf.png" alt="enter image description here" /></a></p> <p>I am able to control the vertical alignment and margins via the following code:</p> <pre><code>import pptx from pptx.util import Cm from pptx.enum.shapes import MSO_SHAPE from pptx.enum.text import MSO_VERTICAL_ANCHOR, MSO_AUTO_SIZE ## Create PowerPoint object prs = pptx.Presentation(&quot;./Template/PPT_Template.pptx&quot;) ## Add a slide slide = prs.slides.add_slide(prs.slide_masters[0].slide_layouts[6]) ## Add a rectangular shape at position (5.5, 6.0) (x, y) and dimensions (12.0, 2.0) (w, h) rec = slide.shapes.add_shape(MSO_SHAPE.RECTANGLE, Cm(5.5), Cm(6.0), Cm(12.0), Cm(2.0)) ## Define a text frame for a previously created rectangular shape txt_frame = rec.text_frame txt_frame.text = &quot;Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged.&quot; ## Set the options for the text frame txt_frame.vertical_anchor = MSO_VERTICAL_ANCHOR.TOP txt_frame.auto_size = MSO_AUTO_SIZE.SHAPE_TO_FIT_TEXT txt_frame.margin_top = Cm(0.39) txt_frame.margin_bottom = Cm(0.25) txt_frame.margin_left = Cm(2.72) txt_frame.margin_right = Cm(0.25) ## Save presentation prs.save(&quot;./Test.pptx&quot;) </code></pre> <p>Despite setting 'Resize shape to fit text`, whenever I open the produced PowerPoint the shape is not resized. This only happens when I enter an additional space within the PowerPoint presentation.</p> <p>How do I achieve this using Python?</p> <p>I am using <code>python-pptx == 0.6.23</code></p>
<python><powerpoint><python-pptx>
2024-05-23 14:37:34
0
4,040
The Dude
78,523,819
289,784
Overlay multiple lines with bokeh
<p>How can I overlay multiple lines on the same figure in <code>bokeh</code>?</p> <p>This is what I've tried; consider the following data.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd df = pd.DataFrame( { &quot;seq&quot;: list(range(5)) + list(range(5)), &quot;a&quot;: [&quot;foo&quot;] * 5 + [&quot;bar&quot;] * 5, &quot;b&quot;: np.concatenate( (np.linspace(1, 5, 5, dtype=int), np.linspace(1, 5, 5, dtype=int) + 3) ), } ) </code></pre> <p>I can draw a scatter plot like this:</p> <pre class="lang-py prettyprint-override"><code>from bokeh.plotting import figure, show from bokeh.models import ColumnDataSource from bokeh.transform import factor_cmap, factor_mark from bokeh.palettes import Category10 cds1 = ColumnDataSource(df) myfig = figure(y_range=(-1, 10), height=400, width=400) myfig.scatter( &quot;seq&quot;, &quot;b&quot;, source=cds1, marker=factor_mark(&quot;a&quot;, [&quot;circle&quot;, &quot;diamond&quot;], [&quot;foo&quot;, &quot;bar&quot;]), color=factor_cmap(&quot;a&quot;, Category10[3], [&quot;foo&quot;, &quot;bar&quot;]), size=10, ) show(myfig) </code></pre> <p><a href="https://i.sstatic.net/HuhogYOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HuhogYOy.png" alt="enter image description here" /></a></p> <p>I would like the points to be connected, but I cannot use <code>factor_cmap</code> with <code>figure.line</code>.
So I tried creating views and calling one at a time like this:</p> <pre class="lang-py prettyprint-override"><code>foo = GroupFilter(column_name=&quot;a&quot;, group=&quot;foo&quot;) bar = GroupFilter(column_name=&quot;a&quot;, group=&quot;bar&quot;) myfig2 = figure(y_range=(-1, 10), height=400, width=400) myfig2.line(&quot;seq&quot;, &quot;b&quot;, source=cds1, view=CDSView(filter=foo), color=Category10[3][0]) myfig2.line(&quot;seq&quot;, &quot;b&quot;, source=cds1, view=CDSView(filter=bar), color=Category10[3][1]) show(myfig2) </code></pre> <p>But then I get errors like this: <code>ERROR:bokeh.core.validation.check:E-1024 (CDSVIEW_FILTERS_WITH_CONNECTED): CDSView filters are not compatible with glyphs with connected topology such as Line or Patch: GlyphRenderer(id='p1094', ...)</code>.</p> <p>I would rather not use <code>figure.multi_line</code> as I would like to toggle the visibility of each line from a <code>CustomJS</code> callback by resetting <code>cds1.data</code> in JS. Any ideas how I can proceed? Or maybe there is a better way to achieve this besides resetting <code>cds1.data</code>?</p>
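One workaround, sketched below and only one of several options, is to skip `CDSView` entirely and give each group its own `ColumnDataSource`. Connected glyphs then have no filter to complain about, and a `CustomJS` callback can toggle `renderer.visible` per group instead of rewriting `cds1.data`:

```python
import numpy as np
import pandas as pd
from bokeh.models import ColumnDataSource
from bokeh.palettes import Category10
from bokeh.plotting import figure

df = pd.DataFrame(
    {
        "seq": list(range(5)) + list(range(5)),
        "a": ["foo"] * 5 + ["bar"] * 5,
        "b": np.concatenate(
            (np.linspace(1, 5, 5, dtype=int), np.linspace(1, 5, 5, dtype=int) + 3)
        ),
    }
)

myfig = figure(y_range=(-1, 10), height=400, width=400)
renderers = {}
# One ColumnDataSource per group sidesteps the CDSView/line restriction;
# keep the renderer handles so CustomJS can flip their visibility later.
for color, (name, group) in zip(Category10[3], df.groupby("a")):
    src = ColumnDataSource(group)
    renderers[name] = myfig.line(
        "seq", "b", source=src, color=color, legend_label=name
    )
# e.g. a CustomJS callback could set renderers["foo"].visible = false
```

Passing the `renderers` dict into the callback's `args` gives the JS side direct handles, which is usually simpler than mutating a shared data dict.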
<python><bokeh><interactive>
2024-05-23 14:26:47
1
4,704
suvayu
78,523,750
2,473,382
Mock a property to return another attribute
<p>With this class:</p> <pre class="lang-py prettyprint-override"><code>class Something(): def __init__(self, name) -&gt; None: self.name=name @property def value(self): # In real life, something long and complicated return &quot;no&quot; </code></pre> <p>I would like to mock it by mocking <code>.value</code> to return <code>.name</code> instead.</p> <p>In my test function, I have:</p> <pre class="lang-py prettyprint-override"><code>def side_effect(side_self:Something): return side_self.name with patch.object(Something, &quot;value&quot;, side_effect=side_effect, autospec=True): ... </code></pre> <p>Errors out because <code>Something().value</code> returns a <code>MagicMock</code>, not the str (name) I expect.</p> <p>If instead I patch with <code>(Something, &quot;value&quot;, new_callable=PropertyMock, side_effect=side_effect</code> I get <code>.side_effect() missing 1 required positional argument: 'side_self'</code>.</p> <p>How can I mock <code>value</code> to return <code>name</code>?</p>
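One way around this, sketched below, follows from the fact that a `PropertyMock` never receives the instance: patch the class attribute with a plain `property` whose getter is the replacement function, and the descriptor protocol then supplies `self` for free:

```python
from unittest.mock import patch

class Something:
    def __init__(self, name):
        self.name = name

    @property
    def value(self):
        return "no"  # stands in for the long, complicated computation

def fake_value(self):
    return self.name

# property(fake_value) is a real descriptor, so attribute access calls
# fake_value(instance) exactly like the original getter would be called.
with patch.object(Something, "value", new=property(fake_value)):
    assert Something("alice").value == "alice"

# the original getter is restored once the context manager exits
assert Something("bob").value == "no"
```

The trade-off versus `new_callable=PropertyMock` is losing the mock's call-tracking; if call assertions are needed, the fake getter can record its own calls.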
<python><python-3.x><python-mock>
2024-05-23 14:15:29
0
3,081
Guillaume
78,523,694
3,906,713
Scipy solve_ivp requests time out of bounds
<p>My code solves a 1st order differential equation using Scipy's <code>solve_ivp</code> function and the standard <code>RK45</code> solver. The code itself is reasonably large, and I will certainly attempt to produce a minimal working example if all else fails. However, there might already be a problem at a conceptual that stage, so I am hoping this is not a bug but just my lack of understanding.</p> <p>The important part of the code is as follows:</p> <pre><code>nStep = 1000 time_arr = np.arange(nStep) sol = solve_ivp(rhs_func, (0, self.nStep-1), [x0, ], t_eval=time_arr, args=(my_args_list,)) </code></pre> <p>I would like to integrate the right hand side on the time interval <code>[0, 999]</code>, as given by the 2nd argument, and and sample the result at integer values of time within that interval, as is given by the <code>t_eval</code> keyword argument.</p> <p>It is my expectation that, internally, the solver would sample the right hand side at arbitrary floating-point times <strong>within</strong> the provided time interval. For many different values of <code>my_args_list</code>, it indeed does so. However, for some fairly reasonable argument values it attempts to sample the right hand side severely outside of the provided range. For example, it attempted to sample at <code>10062.831699320916</code>, which is severely outside of the integration boundaries.</p> <p>Am I right to assume that this behaviour is unexpected and is a bug (either of scipy or of my code)?</p> <p>I would appreciate suggestions on what may be causing such behaviour.</p> <p><strong>EDIT</strong>: I was finally able to produce a minimal example for this bug. I solve a simple Integrator ODE <code>dx/dt = k*[f(t) - x(t)]</code> for some forcing vector <code>f(t)</code>. 
Specifically, I provide a box signal between 0 and 10 seconds, such that <code>f(t) = 2</code> for <code>2 &lt;= t &lt;= 4</code>, and <code>f(t) = 1</code> everywhere else.</p> <pre><code># Libraries import numpy as np import matplotlib.pyplot as plt from numpy.linalg import norm from scipy.optimize import minimize from scipy.integrate import odeint, solve_ivp # Initialization tmin = 0 tmax = 10 nStep = 1000 time_arr = np.linspace(tmin, tmax, nStep) x_forcing_low = 1 x_forcing_high = 2 t_break_l = 2 t_break_r = 4 forcing = np.full(nStep, x_forcing_low) idxs_pulse = np.logical_and(time_arr &gt;= t_break_l, time_arr &lt;= t_break_r) forcing[idxs_pulse] = x_forcing_high # Right hand side of a simple Integrator ODE dx/dt = k*[f(t) - x(t)] for some forcing vector f(t) def dxdt(t, x, param): if (t &lt; tmin) or (t &gt; tmax): raise ValueError(f&quot;{t} is out of bounds [{tmin}, {tmax}]&quot;) f_this = np.interp(t, time_arr, forcing) k, = param return k*(f_this - x) # We go over different values of the integration rate, and test if solve_ivp works kArr = 10 ** np.linspace(-3, 3, 20) for k in kArr: try: sol = solve_ivp(dxdt, (tmin, tmax), [5, ], t_eval=time_arr, args=([k, ],)) print(k, sol['success']) except ValueError as e: print(k, e) </code></pre> <p>The code demonstrates that for <code>k=0.001</code> the solver <code>solve_ivp</code> attempts to sample time at <code>t=12.5</code>, which is out of bounds [0, 10]. However, it works as expected for all other values of <code>k</code>.</p> <p>I have also implemented an analytic solution for this case (omitted for brevity), and it clearly demonstrates that the solver starts progressively breaking down for values of <code>k &lt; 0.1</code>, producing completely unreasonable solutions. I will produce a separate question addressing the accuracy of the forward integration scheme, as the problems are likely to be related.</p>
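For what it's worth, the out-of-range samples most plausibly come from `solve_ivp`'s automatic initial-step heuristic, which probes the right hand side at `t0 + h0` for an `h0` chosen from the problem's own scales; for a very slow system (small `k`) that trial step can land far past `t_max`. A hedged sketch of the usual mitigation, supplying `first_step` (and optionally `max_step`) so the heuristic probe never happens; clamping `t` inside the RHS is a belt-and-braces alternative, and `np.interp` already clamps to the end values anyway:

```python
import numpy as np
from scipy.integrate import solve_ivp

tmin, tmax = 0.0, 10.0
time_arr = np.linspace(tmin, tmax, 1000)
forcing = np.ones(1000)  # flat stand-in for the box signal above

def dxdt(t, x, k):
    # guard: with an explicit first_step the solver should stay in range
    assert tmin <= t <= tmax, f"{t} is out of bounds"
    return k * (np.interp(t, time_arr, forcing) - x)

# k = 0.001 is the value that triggered the t = 12.5 probe above
sol = solve_ivp(dxdt, (tmin, tmax), [5.0], args=(0.001,),
                t_eval=time_arr, first_step=0.1, max_step=1.0)
```

This addresses only the bounds violation; the accuracy degradation for small `k` is a separate step-size-control question, as noted above.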
<python><scipy><differential-equations>
2024-05-23 14:03:37
0
908
Aleksejs Fomins
78,523,432
9,128,863
OpenCV task of detect object in image
<p>I'm trying to detect the fragment in an image, as presented in this <a href="https://www.mathworks.com/help/vision/ug/object-detection-in-a-cluttered-scene-using-point-feature-matching.html" rel="nofollow noreferrer">MATLAB</a> example.</p> <p>I am using the OpenCV library.</p> <pre><code> import cv2 import numpy as np from imutils.object_detection import non_max_suppression # Reading the image and the template img = cv2.imread('SourceImage.png') temp = cv2.imread('TargetFragment.png') # save the image dimensions (shape[:2] is rows, cols = height, width) H, W = temp.shape[:2] # Define a minimum threshold thresh = 0.4 # Converting them to grayscale img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) temp_gray = cv2.cvtColor(temp, cv2.COLOR_BGR2GRAY) # Passing the image to matchTemplate method match = cv2.matchTemplate( image=img_gray, templ=temp_gray, method=cv2.TM_CCOEFF_NORMED) # Select rectangles with # confidence greater than threshold (y_points, x_points) = np.where(match &gt;= thresh) # initialize our list of rectangles boxes = list() # loop over the starting (x, y)-coordinates again for (x, y) in zip(x_points, y_points): # update our list of rectangles boxes.append((x, y, x + W, y + H)) # apply non-maxima suppression to the rectangles # this will create a single bounding box boxes = non_max_suppression(np.array(boxes)) # loop over the final bounding boxes for (x1, y1, x2, y2) in boxes: # draw the bounding box on the image cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 0), 3) cv2.imwrite('result.png', img) </code></pre> <p>The big image is:</p> <p><a href="https://i.sstatic.net/gwPwZxeI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwPwZxeI.png" alt="enter image description here" /></a></p> <p>The target fragment to detect is: <a href="https://i.sstatic.net/0MIQWSCY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0MIQWSCY.png" alt="enter image description here" /></a></p> <p>But two areas are detected instead of one.
One of these areas doesn't contain the target fragment at all:</p> <p><a href="https://i.sstatic.net/JpOJ8FM2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpOJ8FM2.png" alt="enter image description here" /></a></p> <p>What did I miss?</p>
<python><opencv><computer-vision>
2024-05-23 13:20:27
1
1,424
Jelly
78,523,359
4,399,016
Backtesting package with pandas_ta in Python
<p>I am trying to use the <a href="https://kernc.github.io/backtesting.py/" rel="nofollow noreferrer">backtesting.py</a> package. I had difficulty with the talib installation, so I am using pandas_ta instead.</p> <pre><code>from backtesting import Backtest, Strategy import yfinance as yf import pandas as pd import pandas_ta as ta from backtesting.lib import crossover TICKER = 'AAPL' START = '2020-01-01' df = pd.DataFrame() df = yf.download(TICKER, START) class RsiOscillator(Strategy): upper_bound = 70 lower_bound = 30 def init(self): self.rsi = self.I(ta.rsi, self.data.Close, 14) # I --&gt; how we are going to build indicators within the framework def next(self): if crossover(self.rsi, self.upper_bound): self.position.close() elif crossover(self.lower_bound, self.rsi): self.buy() bt = Backtest(df, RsiOscillator, cash = 10000) stats = bt.run() </code></pre> <p>This code is available <a href="https://greyhoundanalytics.com/blog/backtestingpy-a-complete-quickstart-guide/" rel="nofollow noreferrer">here</a>. But it returns an error:</p> <blockquote> <p>ValueError: Indicators must return (optionally a tuple of) numpy.arrays of same length as <code>data</code> (data shape: (1105,); indicator &quot;rsi(C,14)&quot;shape: , returned value: None)</p> </blockquote> <p>What am I doing wrong?</p>
<python><back-testing><pandas-ta>
2024-05-23 13:08:09
1
680
prashanth manohar
78,523,337
6,645,564
How is it possible that rpy2 is altering the values within my dataframe?
<p>I am trying to utilize some R based packages within a Python script using the rpy2 package. In order to implement the code, I first need to convert a Pandas dataframe into an R based data matrix. However, something incredibly strange is happening to the values within the code. Here is a minimally reproducible example of the code</p> <pre><code>import pandas as pd import numpy as np import rpy2.robjects as ro from rpy2.robjects.packages import importr from rpy2.robjects import pandas2ri pandas2ri.activate() utils = importr('utils') # Function to generate random column names def generate_column_names(n, suffixes): columns = [] for _ in range(n): name = ''.join(random.choices(string.ascii_uppercase, k=3)) # Random 3-character string suffix = random.choice(suffixes) # Randomly choose between &quot;_Healthy&quot; and &quot;_Sick&quot; columns.append(name + suffix) return columns # Number of rows and columns n_rows = 1000 n_cols = 15 # Generate random float values between 0 and 10 data = np.random.uniform(0, 10, size=(n_rows, n_cols)) # Introduce NaN values sporadically nan_indices = np.random.choice([True, False], size=data.shape, p=[0.1, 0.9]) data[nan_indices] = np.nan # Generate random column names column_names = generate_column_names(n_cols, [&quot;_Healthy&quot;, &quot;_Sick&quot;]) # Create the DataFrame df = pd.DataFrame(data, columns=column_names) df = df.replace(np.nan, &quot;NA&quot;) with localconverter(ro.default_converter + pandas2ri.converter): R_df = ro.conversion.py2rpy(df) r_matrix = r('data.matrix')(R_df) </code></pre> <p>Now, the input Pandas dataframe looks like this: <a href="https://i.sstatic.net/md6v25RD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/md6v25RD.png" alt="input df" /></a></p> <p>However, after turning it into a R based dataframe using <code>ro.conversion.py2rpy()</code>, and then recasting that as a data matrix using <code>r('data.matrix')</code>, I get a <code>r_matrix</code> dataframe that look like this: <a 
href="https://i.sstatic.net/1QkTH43L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1QkTH43L.png" alt="output df" /></a></p> <p>How could this happen? I have checked the intermediate <code>R_df</code> and have found that it has the same values as the input Pandas <code>df</code>, so it seems that the line <code>r('data.matrix')</code> is drastically altering my contents.</p> <p>I have run the analogous commands in R (after importing the exact same dataframe into R using readr), and <code>data.matrix</code> does not affect my dataframe's contents at all, so I am incredibly confused as to what the problem is. Has anyone else experienced this at all?</p>
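One detail worth isolating (a pandas-only sketch of a plausible cause, not something the question confirms): replacing NaN with the *string* `&quot;NA&quot;` silently changes every affected column's dtype from `float64` to `object`. R's `data.matrix` coerces character/factor columns to their integer level codes, which would explain values being "altered" after conversion.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.5, np.nan, 3.0], "b": [0.2, 7.1, np.nan]})
print(df.dtypes)   # both columns are float64

# Replacing NaN with the string "NA" mixes floats and strings,
# so pandas upcasts the columns to object dtype:
df2 = df.replace(np.nan, "NA")
print(df2.dtypes)  # both columns are now object
```

If the `object` columns are what reach R, `data.matrix` sees non-numeric columns and substitutes factor codes; keeping the columns as `float64` (R's `NA` maps to NaN) may avoid the problem.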
<python><r><pandas><rpy2>
2024-05-23 13:04:20
1
924
Bob McBobson
78,523,156
3,871,036
Python multithreading : How can I share a queue of arguments among already-existing worker objects?
<p>I have 5 workers (&quot;Processors&quot;) that are supposed to process 100 arguments (maybe from some sort of queue?), using a specific method (<code>process()</code>). I want the 5 &quot;processors&quot; to run in parallel. I have researched both <code>concurrent.futures</code> and <code>multiprocessing</code>, but cannot find any example like this:</p> <pre class="lang-py prettyprint-override"><code>import time

import numpy as np


class Processor:
    def __init__(self, name):
        self.name = name

    def process(self, arg):
        print(f'{self.name} : processing {arg}...')
        time.sleep(arg)
        print(f'{self.name} : processing {arg}... DONE')


l_processors = [Processor(f'Processor_{i}') for i in range(5)]

l_arguments = list(range(10))
np.random.shuffle(l_arguments)

# ... what to write beyond this point ?
</code></pre> <p>Any ideas? Thanks in advance for any answer.</p> <p>PS :</p> <ul> <li>I <strong>have</strong> to use these 5 Processor objects; I cannot use a ready-made, all-wrapped-up &quot;ProcessorPool(n_workers=5)&quot;</li> <li>I don't want to pre-assign each &quot;Processor&quot; worker a list of arguments, because I cannot know in advance how long each argument will take (in terms of time). Instead, when a worker is free, it should pick the next argument from the queue.</li> </ul>
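One common pattern for the requirement above (a sketch, not the only possible design): put the arguments in a shared `queue.Queue` and give each existing `Processor` object its own thread that keeps pulling arguments until the queue is drained, so faster workers naturally take more tasks.

```python
import queue
import threading
import time

class Processor:
    def __init__(self, name):
        self.name = name

    def process(self, arg):
        time.sleep(arg * 0.001)  # stand-in for real work of varying duration
        return f'{self.name} processed {arg}'

# Shared pool of arguments: workers pull from it on demand,
# so nothing is pre-assigned to any particular Processor.
task_queue = queue.Queue()
for arg in range(10):
    task_queue.put(arg)

results = []
results_lock = threading.Lock()

def worker(processor):
    # Loop until the queue is empty, then exit the thread.
    while True:
        try:
            arg = task_queue.get_nowait()
        except queue.Empty:
            return
        res = processor.process(arg)
        with results_lock:
            results.append(res)

processors = [Processor(f'Processor_{i}') for i in range(5)]
threads = [threading.Thread(target=worker, args=(p,)) for p in processors]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 10 — every argument processed exactly once
```

Threads are a reasonable fit when the work releases the GIL (I/O, `time.sleep`, many NumPy calls); for CPU-bound `process()` bodies, the same pull-from-a-shared-queue pattern can be rebuilt with `multiprocessing.Queue` and `multiprocessing.Process`, at the cost of pickling the `Processor` objects.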
<python><multithreading><multiprocessing>
2024-05-23 12:34:13
2
1,497
Jean Lescut