Dataset columns (observed min/max from the viewer):
- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, lengths 15 to 150
- QuestionBody: string, lengths 40 to 40.3k
- Tags: string, lengths 8 to 101
- CreationDate: string (date), 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, lengths 3 to 30, nullable
78,937,759
1,317,018
Pytorch `DataSet.__getitem__()` called with `index` bigger than `__len__()`
<p>I have the following torch dataset (I have replaced the actual code to read data from files with random number generation to make it minimally reproducible):</p> <pre><code>from torch.utils.data import Dataset import torch class TempDataset(Dataset): def __init__(self, window_size=200): self.window = window_size self.x = torch.randn(4340, 10, dtype=torch.float32) # None self.y = torch.randn(4340, 3, dtype=torch.float32) self.len = len(self.x) - self.window + 1 # = 4340 - 200 + 1 = 4141 # Hence, last window start index = 4140 # And last window will range from 4140 to 4339, i.e. total 200 elements def __len__(self): return self.len def __getitem__(self, index): # AFAIU, below if-condition should NEVER evaluate to True as last index with which # __getitem__ is called should be self.len - 1 if index == self.len: print('self.__len__(): ', self.__len__()) print('Tried to access element @ index: ', index) return self.x[index: index + self.window], self.y[index + self.window - 1] ds = TempDataset(window_size=200) print('len: ', len(ds)) counter = 0 # no record is read yet for x, y in ds: counter += 1 # above line read one more record from the dataset print('counter: ', counter) </code></pre> <p>It prints:</p> <pre><code>len: 4141 self.__len__(): 4141 Tried to access element @ index: 4141 counter: 4141 </code></pre> <p>As far as I understand, <code>__getitem__()</code> is called with <code>index</code> ranging from <code>0</code> to <code>__len__()-1</code>. 
If that's correct, then why did it try to call <code>__getitem__()</code> with index 4141, when the length of the data itself is 4141?</p> <p>One more thing I noticed is that despite getting called with <code>index = 4141</code>, it does not seem to return any elements, which is why <code>counter</code> stays at 4141.</p> <p>What are my eyes (or brain) missing here?</p> <p>PS: Though it won't have any effect, just to confirm, I also tried to wrap the <code>DataSet</code> with a torch <code>DataLoader</code> and it still behaves the same.</p>
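For context: the call with `index == __len__()` comes from Python's legacy sequence-iteration protocol. A `for`-loop over an object that defines `__getitem__` but not `__iter__` simply probes indices 0, 1, 2, ... until `IndexError` is raised, and never consults `__len__()`. In the dataset above, `self.y[index + self.window - 1]` is `self.y[4340]` at index 4141, which raises that `IndexError`, so no extra element is yielded and `counter` stays at 4141. A torch-free sketch of the protocol:

```python
class Windowed:
    def __init__(self, n):
        self.n = n
        self.probed = []          # every index __getitem__ is called with

    def __len__(self):
        return self.n

    def __getitem__(self, index):
        self.probed.append(index)
        if index >= self.n:
            # This IndexError is what ends the for-loop -- and why the
            # question's dataset sees a call with index == len(self).
            raise IndexError(index)
        return index

ds = Windowed(3)
items = list(ds)   # legacy protocol: probes 0, 1, 2, then 3 -> IndexError
# items == [0, 1, 2]; ds.probed == [0, 1, 2, 3]
```

Defining an explicit `__iter__` (or iterating over `range(len(ds))`) avoids the out-of-range probe entirely.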
<python><python-3.x><machine-learning><deep-learning><pytorch>
2024-09-01 15:52:18
1
25,281
Mahesha999
78,937,753
2,093,469
How to define a channel-wise finite-differencing kernel in tensorflow
<p>I want to implement spatial finite differences using a tensorflow conv2d layer with a fixed kernel.</p> <p>My input data X is of size (nbatch,nx,ny,nchannel) and the output Y must be of the same shape with</p> <pre><code>Y[batch, i, j, channel] == 0.5 * (X[batch, i, j+1, channel] - X[batch, i, j-1, channel]) </code></pre> <p>The corresponding Conv2D 3-by-3 kernel would be something like</p> <pre><code>[[0, -0.5, 0], [0, 0, 0], [0, 0.5, 0]] </code></pre> <p>I don't understand how this must be expanded along the input and output channel dimensions so that the same kernel is applied to each input channel individually as required.</p>
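One way to make a plain `Conv2D` act channel-independently is to place the 2-D stencil on the diagonal of the `(kh, kw, in_channels, out_channels)` kernel, so output channel c only sees input channel c (the same effect `DepthwiseConv2D` gives directly). A numpy sketch of the kernel construction; `channelwise_kernel` is an illustrative helper, not a TensorFlow API:

```python
import numpy as np

def channelwise_kernel(stencil, nchannel):
    """Expand a 2-D stencil to a Conv2D kernel of shape
    (kh, kw, in_channels, out_channels) whose channel diagonal carries the
    stencil, so each output channel sees only the matching input channel."""
    kh, kw = stencil.shape
    kernel = np.zeros((kh, kw, nchannel, nchannel), dtype=np.float32)
    for c in range(nchannel):
        kernel[:, :, c, c] = stencil
    return kernel

stencil = np.array([[0, -0.5, 0],
                    [0,  0.0, 0],
                    [0,  0.5, 0]], dtype=np.float32)
kernel = channelwise_kernel(stencil, nchannel=4)
# kernel.shape == (3, 3, 4, 4); off-diagonal channel pairs stay zero
```

This array can then be assigned as a fixed, non-trainable weight of a Keras `Conv2D` layer with `padding='same'`.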
<python><tensorflow>
2024-09-01 15:47:18
1
9,167
sieste
78,937,366
595,305
Search for a specific tuple in a list of tuples sorted by their first element?
<p>Say I have a list of tuples like this, where the int is the &quot;id&quot; for this purpose:</p> <pre><code>records_by_id = [(10, 'bubble1'), (5, 'bubble2'), (4, 'bubble3'), (0, 'bubble4'), (3, 'bubble5'),] </code></pre> <p>... and I sort this by the first element of the tuple:</p> <pre><code>records_by_id.sort(key = lambda x: x[0]) </code></pre> <p>... this gives:</p> <pre><code>[(0, 'bubble4'), (3, 'bubble5'), (4, 'bubble3'), (5, 'bubble2'), (10, 'bubble1'),] </code></pre> <p>Now, given the number 4, how do I locate the list index of &quot;(4, 'bubble3')&quot;? Obviously these tuples are now sorted by their first element, so a brute force iteration through the list is not ideal. I'm thinking there must be a way of using <code>bisect</code> ... or something similar. Any ideas?</p>
<python><list><search><bisect>
2024-09-01 12:46:15
4
16,076
mike rodent
78,937,235
5,790,653
How to break the loop if iteration is between two members of a list
<p>I have a list like this:</p> <pre class="lang-py prettyprint-override"><code>list1 = [ 'some', 'thing', 'is', 'here', 'dont', 'care', 'them', 'The process completed', 'process id: p1', 'process id: p2', 'Regards', 'some', 'thing', 'is', 'here', 'dont', 'care', 'them', 'The process completed', 'process id: p12', 'Regards', ] </code></pre> <p>I want to create a new list like this (expected output):</p> <pre class="lang-py prettyprint-override"><code>list3 = [ 'The process completed:\nprocess id: p1, process id: p2', 'The process completed:\nprocess id: p12' ] </code></pre> <p>I want to find and save everything from <code>The process completed</code> until <code>Regards</code> (but <code>Regards</code> itself is not included).</p> <p>I tried this:</p> <pre class="lang-py prettyprint-override"><code>new_list = [] new_str = '' for l in list1: if l == 'The process completed': new_str += l new_str += l if l == 'Regards': new_list.append(new_str) new_str = '' pass </code></pre> <p>I know this is not right, but this was my idea for the logic:</p> <pre><code>if `l == The process completed`, then save from that string up to `Regards`, but not `Regards` itself. Then find another `The process completed` and do the same. </code></pre> <p>How can I do that?</p>
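A small state-machine loop along the lines the question sketches (start capturing at the start marker, flush at the stop marker) produces the expected output; `collect_blocks` is a hypothetical name:

```python
def collect_blocks(lines, start='The process completed', stop='Regards'):
    """Capture everything strictly between start and stop markers,
    joining captured lines with ', ' under a 'start:' header."""
    blocks, current = [], None
    for line in lines:
        if line == start:
            current = []                      # begin capturing
        elif line == stop and current is not None:
            blocks.append(start + ':\n' + ', '.join(current))
            current = None                    # stop until the next start marker
        elif current is not None:
            current.append(line)
    return blocks

list1 = ['some', 'thing', 'The process completed', 'process id: p1',
         'process id: p2', 'Regards', 'The process completed',
         'process id: p12', 'Regards']
list3 = collect_blocks(list1)
```

With the question's full `list1`, the noise lines before each `The process completed` fall into the `current is None` state and are discarded automatically.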
<python>
2024-09-01 11:44:52
6
4,175
Saeed
78,936,857
16,363,897
Count NaNs in window size and min_periods of rolling Pandas function
<p>We have the following pandas dataframe:</p> <pre><code> U date 1990-02-28 NaN 1990-03-01 NaN 1990-03-02 -0.068554 1990-03-05 -0.056425 1990-03-06 -0.022294 1990-03-07 -0.038996 1990-03-08 -0.026863 </code></pre> <p>I want to compute the rolling mean of column 'U', with a window size = 5 and min_periods = 4. Pandas built-in rolling function considers only non-NaN values both for window size and min_periods. Instead, I would like to consider NaN values both for window size and min_periods, without affecting the calculation of the mean. This is the expected output:</p> <pre><code> U rolling_mean date 1990-02-28 NaN NaN 1990-03-01 NaN NaN 1990-03-02 -0.068554 NaN 1990-03-05 -0.056425 -0.062489 1990-03-06 -0.022294 -0.049091 1990-03-07 -0.038996 -0.046567 1990-03-08 -0.026863 -0.042626 </code></pre> <p>Any way to accomplish this without loops? Thanks</p>
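One loop-free reading of the requirement: because the window is positional (every row counts, NaN or not), `min_periods` only bites in the first `min_periods - 1` rows, so you can roll with `min_periods=1` (which already skips NaNs when averaging) and then blank out the leading rows. A sketch reproducing the expected output above:

```python
import numpy as np
import pandas as pd

u = pd.Series(
    [np.nan, np.nan, -0.068554, -0.056425, -0.022294, -0.038996, -0.026863],
    index=pd.to_datetime(['1990-02-28', '1990-03-01', '1990-03-02',
                          '1990-03-05', '1990-03-06', '1990-03-07',
                          '1990-03-08']),
    name='U',
)

window, min_periods = 5, 4
# min_periods=1: the mean skips NaNs but is emitted as soon as one real
# value is inside the (positional) 5-row window ...
rolling_mean = u.rolling(window, min_periods=1).mean()
# ... then blank out rows where fewer than `min_periods` positions
# (rows, not non-NaN values) have been seen so far.
rolling_mean.iloc[:min_periods - 1] = np.nan
```

This matches the expected column: -0.062489 on 1990-03-05, -0.049091 on 1990-03-06, and so on.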
<python><pandas><numpy>
2024-09-01 08:12:54
1
842
younggotti
78,936,816
4,987,648
Python: how to run the equivalent of the entry_point `foo.bar:baz` during development?
<p>I have in my <code>setup.py</code>:</p> <pre><code> entry_points={ 'console_scripts': [ # Error at runtime: module planning_web does not exist 'foobaz = foo.bar:baz', ], }, </code></pre> <p>so that when I install it, it automatically creates a script <code>foobaz</code> that runs the function <code>baz</code> from the file <code>foo/bar.py</code>, and it works great (tested with nix). But it is quite annoying to develop a program this way, as I constantly need to build my package for any tiny change, while usually Flask can automatically reload itself when the code changes.</p> <p>So is there a command like:</p> <pre><code>python -X foo.bar:baz </code></pre> <p>in order to run automatically what the entry script would run once installed? I realized I could do <code>python -m foo.bar</code> to run the <code>foo/bar.py</code> file (doing <code>python foo/bar.py</code> directly would produce an error about <code>ImportError: attempted relative import with no known parent package</code>), but I can't find how to run a specific function inside.</p>
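Two common approaches here: an editable install (`pip install -e .`), after which the generated `foobaz` script tracks source edits without rebuilding; or calling the entry point's target directly, e.g. `python -c "from foo.bar import baz; baz()"`. What the generated console script does at run time can also be reproduced in a few lines; `run_entry_point` below is a hypothetical helper mirroring the `module:function` syntax, and the stdlib target in the demo is only a stand-in:

```python
import importlib

def run_entry_point(spec):
    """Mimic a console_script wrapper: 'package.module:function' ->
    import the module, look up the function, call it."""
    mod_name, _, func_name = spec.partition(':')
    func = getattr(importlib.import_module(mod_name), func_name)
    return func()

# During development, the equivalent of the installed 'foobaz' script
# would be run_entry_point('foo.bar:baz'); demo with a stdlib target:
cwd = run_entry_point('os:getcwd')
```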
<python><module>
2024-09-01 07:44:06
0
2,584
tobiasBora
78,936,755
779,130
How do I fix broken pip installation
<p>I'm not sure what has happened to my Ubuntu server. I just migrated from an old Ubuntu 20.04 to 24.04, and most things now seem to be working all right.</p> <p>However, if I run <code>pip</code> I get:</p> <pre><code>svend@localhost:~$ pip -bash: /home/svend/.local/bin/pip: cannot execute: required file not found </code></pre> <p>If I run <code>python -m pip</code> it works all right:</p> <pre><code>svend@localhost:~$ python -m pip --version pip 24.0 from /usr/lib/python3/dist-packages/pip (python 3.12) </code></pre> <p>I think the files in .local are actually from my old server, when I migrated everything in my home directory, and it might be the wrong version:</p> <pre><code>svend@localhost:~$ cat .local/bin/pip #!/usr/bin/python3.10 # -*- coding: utf-8 -*- import re import sys from pip._internal.cli.main import main if __name__ == '__main__': sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) sys.exit(main()) </code></pre> <p>It seems that if I change the <code>#!</code> at the top of the script to <code>#!/usr/bin/python3.12</code>, it works again, but... that seems a bit hacky? Is there a better way of having these regenerated? I've tried: <code>sudo apt install --reinstall python3-pip</code>, but that didn't seem to do it.</p> <p><strong>Edit: More info based on comments</strong></p> <p>If I run <code>which -a pip</code> I get:</p> <pre><code>svend@localhost:~$ which -a pip /home/svend/.local/bin/pip /usr/bin/pip /bin/pip </code></pre> <p>If I try to just run <code>pip uninstall pip</code> I get the <code>This environment is externally managed</code> warning. I presume I haven't overridden that warning to install pip? 
So overriding it with <code>--break-system-packages</code> would presumably be risky if it could break the system python?</p> <p>I tried renaming the <code>pip</code>, <code>pip3</code> and <code>virtualenv</code> in <code>.local/share</code> to <code>oldX</code>, but for some reason it's still trying to run the one in .local:</p> <pre><code>svend@localhost:~$ mv .local/bin/pip .local/bin/oldpip svend@localhost:~$ mv .local/bin/pip3 .local/bin/oldpip3 svend@localhost:~$ mv .local/bin/virtualenv .local/bin/oldvirtualenv svend@localhost:~$ pip -bash: /home/svend/.local/bin/pip: No such file or directory </code></pre> <p><strong>Correction</strong> After deleting/moving them I just needed to start a new session, and that <em>did</em> indeed fix the issue. So thank you for that suggestion :D</p> <p>The only thing in <code>.local/share</code> is an empty directory called <code>virtualenv</code>... I <em>think</em> I might have accidentally deleted everything in there yesterday, when I was also experimenting with removing <code>.local/bin</code> (I zipped up <code>.local/bin</code> and then deleted the entire <code>.local</code> and only realised my mistake when I unzipped the archive and it only contained the <code>.local/bin</code> and not <code>.local/share</code>... :/ I should note that this was <em>after</em> I noticed the broken commands)</p>
<python><ubuntu><pip>
2024-09-01 06:57:07
0
3,343
Svend Hansen
78,936,530
372,172
Can Cython detect if a certain C header exists and compile conditionally?
<p>I have a new C function in my library that doesn't exist in previous versions:</p> <pre class="lang-c prettyprint-override"><code>#define MYLIB_VERSION &quot;dev&quot; const char *mylib_version(void) { return MYLIB_VERSION; } </code></pre> <p>Now, I can do this with my new Cython code:</p> <pre><code>cdef extern from &quot;mylib.h&quot;: const char *mylib_get_version() </code></pre> <p>But this will fail if the system has my old library version instead.</p> <p>The header &quot;mylib.h&quot; does not exist in my old version. Is there a way to conditionally compile Cython code so I can write a fallback for older mylib versions?</p> <p>I tried something weird to detect things at compile time. But it doesn't seem to work:</p> <pre><code>cdef extern from *: #if __has_include(&quot;mylib.h&quot;) #include &quot;mylib.h&quot; #define MYLIB_H_EXISTS 1 #else #define MYLIB_H_EXISTS 0 #endif IF MYLIB_H_EXISTS: cdef extern from &quot;mylib.h&quot;: char *mylib_version() ELSE: cdef inline char *mylib_version(): return &quot;0.0.0&quot; </code></pre> <p>Error:</p> <blockquote> <p>Compile-time name '<code>MYLIB_H_EXISTS</code>' not defined</p> </blockquote>
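The error hints at the underlying issue: Cython's `IF`/`DEF` compile-time names are supplied via the `compile_time_env` argument to `cythonize`, not by C preprocessor macros, so the `#define` emitted into the generated C is invisible to Cython. One route (a sketch, assuming the project builds through `setup.py`; the helper name and search directories are illustrative) is to probe for the header on the Python side of the build and forward the result:

```python
import os

def header_exists(name, search_dirs=('/usr/include', '/usr/local/include')):
    """Build-time probe for a C header on the include path. A real setup.py
    might instead ask the compiler, but a filesystem check is often enough."""
    return any(os.path.exists(os.path.join(d, name)) for d in search_dirs)

# In setup.py, feed the result to Cython so the IF/ELSE blocks can branch:
#   cythonize(extensions,
#             compile_time_env={'MYLIB_H_EXISTS': header_exists('mylib.h')})
```

Note that `IF`/`DEF` are deprecated in Cython 3; the alternative the Cython docs suggest is shipping two `.pxd`/`.pyx` variants or doing the dispatch in C via a small shim header.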
<python><c><cython>
2024-09-01 04:06:01
1
7,998
Koala Yeung
78,936,509
10,589,070
DeltaKernel FFI error while DuckDB reads a Delta table
<p>Getting an FFI error while trying to read a delta table with DuckDB. The delta table is on a network-connected drive via SMB. I just wrote the delta table from the same machine, just prior, using the same Python kernel. I wrote the delta table using Polars and wanted to query it using SQL in DuckDB.</p> <p>The error is here and I cannot figure it out.</p> <blockquote> <p>IOException: IO Error: Hit DeltaKernel FFI error (from: While trying to read from delta table: '....\data\prices_delta/'): Hit error: 8 (ObjectStoreError) with message (Error interacting with object store: Generic URL error: Unable to recognise URL &quot;file://xxx.xxx.x.xxx/airflow/data/prices_delta/&quot;)</p> </blockquote> <p>Here's the code that I'm using to query the table. Note that I've also tried absolute paths, but the relative pathing works for other DuckDB operations (scanning csv/parquet/etc) and I used the relative pathing to write the tables.</p> <pre><code>delta_data_folder = '..\\..\\data\\prices_delta' Query = f&quot;&quot;&quot; SELECT count(*) FROM delta_scan('{delta_data_folder}') WHERE 1=1 AND year = 2024 AND month = 8 AND day = 29 &quot;&quot;&quot; con = duckdb.connect(database = &quot;:memory:&quot;) con.execute(Query).df().head() </code></pre>
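A sketch, not a confirmed fix: the error message shows delta-kernel resolving the table location as a URL, and a Windows-style relative path on an SMB-mapped drive can render as a `file://` URI whose host component the URL parser rejects. Building an explicit absolute URI with `pathlib` is one thing worth trying before the `delta_scan` call:

```python
from pathlib import Path

# Resolve the relative Windows path to an absolute file:// URI so that
# delta-kernel's URL parser gets an unambiguous location. (Assumption:
# the mapped drive resolves to a local absolute path; a raw UNC/SMB
# location may need to be mounted or mapped to a drive letter first.)
delta_path = Path('..') / '..' / 'data' / 'prices_delta'
delta_uri = delta_path.resolve().as_uri()
# then interpolate delta_uri instead of the raw relative path:
#   f"SELECT count(*) FROM delta_scan('{delta_uri}') ..."
```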
<python><delta-lake><duckdb>
2024-09-01 03:37:55
1
446
krewsayder
78,936,478
11,244,192
Python Subprocess Catch STDIN Input and Wait
<p>I'm creating an online Python compiler using Django.</p> <p>I have this code for executing the code:</p> <pre class="lang-py prettyprint-override"><code>def stream_response(): try: process = subprocess.Popen(['python', temp_code_file_path], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin = subprocess.PIPE) process.stdin.write(b'Hello') for line in iter(process.stdout.readline, b''): yield f&quot;data: {line.decode('utf-8')}\n\n&quot; for line in iter(process.stdin.readline, b''): yield f&quot;data: {line.decode('utf-8')}\n\n&quot; for line in iter(process.stderr.readline, b''): yield f&quot;data: {line.decode('utf-8')}\n\n&quot; finally: os.unlink(temp_code_file_path) </code></pre> <p>The streaming process works fine, however, if the file contains an input statement, e.g.</p> <pre class="lang-py prettyprint-override"><code>print(&quot;Hello&quot;) name = input(&quot;Enter your name: &quot;) print(name) </code></pre> <p>the &quot;Enter your name: &quot; is not being streamed. How can I make it so that it'll be sent to my client, a specific JavaScript snippet will run and append an input to the DOM, get the user input, and communicate it to the subprocess STDIN?</p> <p>I have tried these questions:</p> <p><a href="https://stackoverflow.com/questions/44525641/python-communicate-with-subprocess-using-stdin?rq=3">python-communicate-with-subprocess-using-stdin</a></p> <p><a href="https://stackoverflow.com/questions/375427/a-non-blocking-read-on-a-subprocess-pipe-in-python">a-non-blocking-read-on-a-subprocess-pipe-in-python</a></p> <p><a href="https://stackoverflow.com/questions/8475290/how-do-i-write-to-a-python-subprocess-stdin">how-do-i-write-to-a-python-subprocess-stdin</a></p> <p>But I can't find a solution that will work in my use case.</p>
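The core issue can be shown in isolation: `input("prompt")` writes the prompt to stdout without a trailing newline, so `readline()`-based streaming never sees it, and the child's stdout is block-buffered when attached to a pipe. Running the child unbuffered (`-u`) and reading without waiting for a newline exposes the prompt before the child blocks on stdin. A minimal sketch (reading a known byte count stands in for the byte-wise/non-blocking reads a real server would use):

```python
import subprocess
import sys

child_code = 'name = input("Enter your name: "); print("Hello " + name)'
proc = subprocess.Popen(
    [sys.executable, '-u', '-c', child_code],   # -u: unbuffered child stdout
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
)
# The bare prompt arrives even though it has no trailing newline:
prompt = proc.stdout.read(len('Enter your name: '))
proc.stdin.write(b'Ada\n')   # what the browser client would send back
proc.stdin.flush()
rest = proc.stdout.read()    # the rest of the child's output until EOF
proc.wait()
```

For a real interactive compiler, a pseudo-terminal (the stdlib `pty` module) is the more robust route, since it makes the child line-buffer as if attached to a terminal.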
<python><subprocess>
2024-09-01 02:53:32
1
482
Winmari Manzano
78,936,276
3,486,684
DuckDB: importing a Polars dataframe containing an `Enum` column turns it into a `VARCHAR` column in DuckDB?
<pre class="lang-py prettyprint-override"><code>import polars as pl values = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;] df = pl.DataFrame(pl.Series(&quot;x&quot;, values, dtype=pl.Enum(values))) print(df) </code></pre> <pre><code>shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β” β”‚ x β”‚ β”‚ --- β”‚ β”‚ enum β”‚ β•žβ•β•β•β•β•β•β•‘ β”‚ a β”‚ β”‚ b β”‚ β”‚ c β”‚ β””β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <pre class="lang-py prettyprint-override"><code>import duckdb as ddb ddb.sql(&quot;SELECT * FROM df&quot;).show() </code></pre> <pre><code>β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ x β”‚ β”‚ varchar β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ a β”‚ β”‚ b β”‚ β”‚ c β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Why does DuckDB not convert <code>polars</code> <code>Enum</code> into a <a href="https://duckdb.org/docs/sql/data_types/enum.html" rel="nofollow noreferrer">DuckDB <code>Enum</code></a>?</p> <p>(Initially, I was trying to read from a <code>polars</code> dataframe I had saved as a <code>parquet</code> file on disk. On doing so, I ran into the same issue, and I wondered if the problem might have something to do with reading from <code>parquet</code> files, but the issue exists &quot;reading directly&quot; from <code>polars</code> too.)</p>
<python><python-polars><duckdb>
2024-08-31 22:58:54
1
4,654
bzm3r
78,936,155
1,471,828
Is there no autocompletion for interop objects in VS Code?
<p>Using this code:</p> <pre><code>import win32com.client as win32 xl_ens = win32.gencache.EnsureDispatch('Excel.Application') </code></pre> <p>When I type <code>xl_ens.</code>, I do not get auto-completion for the Excel properties and methods, just built-in Python stuff.</p> <p>I somehow expected it to pick up on the Excel stuff, especially since it is all extracted to <code>%AppData%\local\temp\gen_py\3.10\</code>. Also, when hovering over the variable in debug mode, I can see them.</p> <p>According to <a href="https://stackoverflow.com/a/50163150/18359438">this link</a>, early binding should make autocomplete available. What am I missing?</p> <p>I am using Windows 10 with Python 3.10.1 (tags/v3.10.1:2cd268a, Dec 6 2021, 19:10:37) [MSC v.1929 64 bit (AMD64)] and Visual Studio Code 1.92.2 with the latest versions of the Python extensions, including Pylance and Python Debugger; Excel 2019 is 32-bit.</p>
<python><visual-studio-code>
2024-08-31 21:35:06
1
905
Rno
78,936,118
1,236,694
VS Code Python unittest not finding tests to run
<p>VS Code latest version.</p> <p>I have a file <code>test_whatever.py</code></p> <pre><code>import unittest class Test_TestWhatever(unittest.TestCase): def test_whatever(self): self.assertEqual(1, 1) if __name__ == '__main__': unittest.main() </code></pre> <p>In Explorer if I right-click this file and select &quot;Run Tests&quot; it finds the test and runs it.</p> <p>However, if I modify the file as follows:</p> <pre><code>import unittest import binarySearch class Test_TestWhatever(unittest.TestCase): def test_whatever(self): self.assertEqual(binarySearch.binarySearch(), 4) if __name__ == '__main__': unittest.main() </code></pre> <p>Then VS Code reports that &quot;No tests found in the selected file or folder.&quot;</p> <p>By experimentation I found that just the presence of <code>import binarySearch</code> will cause it to no longer be able to find the test.</p> <pre><code>import unittest import binarySearch class Test_TestWhatever(unittest.TestCase): def test_whatever(self): self.assertEqual(4, 4) if __name__ == '__main__': unittest.main() </code></pre> <p>Is there a certain folder/file structure that I need to follow?</p> <p>Here's my layout:</p> <pre><code>.\src\binarySearch.py .\test\test_whatever.py </code></pre> <p>Here is my <code>settings.json</code></p> <pre><code>{ &quot;python.testing.unittestArgs&quot;: [ &quot;-v&quot;, &quot;-s&quot;, &quot;./test&quot;, &quot;-p&quot;, &quot;test_*.py&quot; ], &quot;python.testing.pytestEnabled&quot;: false, &quot;python.testing.unittestEnabled&quot;: true, &quot;python.analysis.extraPaths&quot;: [ &quot;./src&quot; ] } </code></pre>
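What typically happens here: test discovery imports each test module from the workspace root, and if `import binarySearch` raises `ImportError` there (because `src/` is not on `sys.path`), the whole file silently drops out of discovery. `python.analysis.extraPaths` only helps Pylance, not the test runner. A common workaround (a sketch matching the question's layout) is to put `src/` on `sys.path` before the failing import, inside the test file itself:

```python
# test/test_whatever.py (sketch): make src/ importable before the import
# that breaks discovery. Paths follow the layout from the question.
import os
import sys
import unittest

SRC = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'src'))
if SRC not in sys.path:
    sys.path.insert(0, SRC)   # must run before `import binarySearch`

# import binarySearch       # resolvable once src/ is on sys.path
```

Alternatives with the same effect: a `.env` file with `PYTHONPATH=./src` plus the `python.envFile` setting, or restructuring `src/` as an installable package.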
<python><visual-studio-code><python-unittest>
2024-08-31 21:07:29
1
9,151
BaltoStar
78,935,980
7,775,166
Django: Use widgets in Form's init method?
<p>Why does the widget work for the field defined as a class attribute, but not for the instance attribute set inside the <code>__init__</code> method? (I need the init method.)</p> <pre><code>class CropForm(forms.ModelForm): class Meta: model = CropModel fields = ['name', 'label'] label = forms.CharField(label='label', widget=forms.TextInput(attrs={'class': 'input'})) def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.name = forms.CharField(label='name', widget=forms.TextInput(attrs={'class': 'input'})) </code></pre> <p><a href="https://i.sstatic.net/yriQJFL0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yriQJFL0.png" alt="form image" /></a></p> <p>How do I get the widget in the <code>__init__</code> method to work? Thanks.</p>
<python><django><forms><widget>
2024-08-31 19:47:17
1
732
girdeux
78,935,848
2,571,805
Python comprehension with different number of elements
<h1>The general problem</h1> <p>I have a case where I'm generating an element comprehension with a different cardinality to the input source. This cardinality should not be a multiple of the original (data-driven), but rather guided by conditions. Some elements in the original source could translate to one element in the destination, whereas others might translate to more than one, or even none.</p> <h1>Existing approaches</h1> <p>Multiple questions address adjacent situations, but not quite the same:</p> <ul> <li><a href="https://stackoverflow.com/questions/11868964/list-comprehension-returning-two-or-more-items-for-each-item">List comprehension: Returning two (or more) items for each item [duplicate]</a></li> <li><a href="https://stackoverflow.com/questions/1077015/how-can-i-get-a-flat-result-from-a-list-comprehension-instead-of-a-nested-list">How can I get a flat result from a list comprehension instead of a nested list?</a></li> </ul> <p><em>(By the way, I am of the view that those two questions are not duplicates of each another. They address different, if somewhat overlapping issues and although the solutions are to an extent interchangeable, the questions themselves are not.)</em></p> <p>The first of those addresses a comprehension whose length is an exact multiple of the source's length. The second provides a way to generate flat lists from a result that had lists of lists, via <code>reduce</code>, or via double iteration in the comprehension itself. 
Note that in this case both would in fact produce an irregular number of elements (not a multiple of the original), but in a data-driven way, as opposed to a condition-driven way.</p> <h1>The simple problem</h1> <p>I have a list of strings that will either have a number or two numbers separated by a hyphen, let's say:</p> <pre class="lang-py prettyprint-override"><code>['1', '2', '2-3', '3-1', '4-5'] </code></pre> <p>The desired outcome would be:</p> <pre class="lang-py prettyprint-override"><code>['1', '2', '2', '3', '3', '1', '4', '5'] </code></pre> <p>As you can see, no relation in cardinality (except that it grows).</p> <h2>Solution 1 β€” Lousy</h2> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; a = ['1', '2', '2-3', '3-1', '4-5'] &gt;&gt;&gt; '-'.join(a) '1-2-2-3-3-1-4-5' &gt;&gt;&gt; '-'.join(a).split('-') ['1', '2', '2', '3', '3', '1', '4', '5'] </code></pre> <p>This assumes two things:</p> <ol> <li>I'm using lists. In reality, my exercise involves dictionaries, lists of tuples and such paired data collections. I could join some of these components, but I'd completely lose relationship with other parts of the source data.</li> <li>This is hacky, ugly and probably inefficient.</li> </ol> <h2>Solution 2 β€” <code>reduce</code></h2> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; a = ['1', '2', '2-3', '3-1', '4-5'] &gt;&gt;&gt; reduce(lambda acc, elem: acc+elem, (x.split('-') for x in a)) ['1', '2', '2', '3', '3', '1', '4', '5'] </code></pre> <p>This one is better than the previous one, but it's still data-driven. It is also a flattening of a list of lists, although slightly optimised by the sneaky introduction of an inner generator, instead of an inner comprehension.</p> <h1>The more complex problem</h1> <p>What if I just had a list of numbers and I wanted to generate different response lengths, depending on the values? 
Here, I'm moving to number lists, so that string manipulation responses are discouraged.</p> <p>Now, I want a result list that puts the number if it's odd and the number and its half if it's even:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; a = [1, 2, 3, 4, 5] &gt;&gt;&gt; # Magic code [1, 1, 2, 3, 2, 4, 5] </code></pre> <p>So, depending on a condition β€” <code>x % 2 == 0</code>, in this case β€” I want to either add one or two numbers to the list.</p> <h2>Solution 0 β€” Missing good old C?</h2> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; a = [1, 2, 3, 4, 5] &gt;&gt;&gt; res = [] &gt;&gt;&gt; for x in a: ... if x % 2: ... res.append(x) ... else: ... res.append(x//2) ... res.append(x) ... &gt;&gt;&gt; res [1, 1, 2, 3, 2, 4, 5] </code></pre> <p>The list of insults on this solution includes &quot;anti-Pythonic&quot;, &quot;ugly&quot;, etc. It works, but as I always say: &quot;If 'it works' is the best / only good thing you can say about your code...&quot;</p> <h2>Solution 1 β€” Multi-dimensional list flattening</h2> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; a = [1, 2, 3, 4, 5] &gt;&gt;&gt; [x if x % 2 else [x//2, x] for x in a] [1, [1, 2], 3, [2, 4], 5] </code></pre> <p>Now I have a list with both elements and lists. My <code>reduce</code> would be a bit more complex, so I'm going to make them all lists:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; a = [1, 2, 3, 4, 5] &gt;&gt;&gt; [[x] if x % 2 else [x//2, x] for x in a] [[1], [1, 2], [3], [2, 4], [5]] </code></pre> <p>Now I can reduce:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; a = [1, 2, 3, 4, 5] &gt;&gt;&gt; reduce(lambda acc, elem: acc+elem, ([x] if x % 2 else [x//2, x] for x in a)) [1, 1, 2, 3, 2, 4, 5] </code></pre> <p>Once again, works but not directly. 
It still creates an iterable to reduce.</p> <p>I need a solution that simply feeds the target with a variable number of elements calculated from each source element. The reason why this must be expression-like is that this is embedded in a wider expression. So, despite my huge respect for them, no Kernighan&amp;Ritchie imperative solutions, please.</p> <h1>Edit</h1> <p>Some contributors mentioned that I was &quot;willing to use <code>reduce</code>&quot;. I tried to make it obvious that I was looking for something different. I love <code>reduce</code> and all things functional, but its resource usage is only justifiable when data comes in &quot;as is&quot;.</p> <p>As for readability and the imperative <code>for</code>, nothing against it. I work the whole day in Python and C. It's just that <code>for</code> is more about &quot;how&quot;, whereas comprehensions are more about &quot;what&quot;. I guess readability is ultimately a matter of preference:</p> <pre><code>Beautiful is better than ugly. Explicit is better than implicit. ... </code></pre> <p>Some commenters mentioned &quot;real world&quot;, &quot;production-ready&quot; and &quot;readability&quot;. To those, I recommend reading the source code of some libraries that you use every day. I do and it's a great exercise. You should also try PDB and check things like code length, memory usage, etc.</p> <p>I'd like to leave a special mention and thank you to <a href="https://stackoverflow.com/users/108205/jsbueno">jsbueno</a>. You make valid points. <code>yield</code> rocks and it's part of my personal toolchain for performance, for &quot;Pythonicity&quot; is only my #2 goal.</p> <p>Another thank you goes to <a href="https://stackoverflow.com/users/320615/dogbert">Dogbert</a>. Your suggestion was what I think is the way to go. I must confess I had the solution already when I posted this, and the difference between yours and mine is just square vs. 
round.</p> <p>For the rest, if the approach is &quot;as long as it works&quot;, then I guess lots of questions are duplicates and yes, the way from Cambridge to London can go via Honolulu, &quot;as long as we get there&quot;.</p> <p>I abused the term &quot;Pythonic&quot;, not even being 100% sure what it means. I was trying to not use libraries and keep it &quot;native&quot;.</p> <p>Here's my solution:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; a = [1, 2, 3, 4, 5] &gt;&gt;&gt; [ x for i in a for x in ((i,) if i % 2 else (i // 2, i)) ] [1, 1, 2, 3, 2, 4, 5] </code></pre> <p>One line.</p>
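For completeness, the same condition-driven flattening can be spelled with `itertools.chain.from_iterable`, which is equivalent to the double-`for` comprehension but keeps the per-element logic in one generator:

```python
from itertools import chain

a = [1, 2, 3, 4, 5]
# Emit a 1-tuple for odd numbers and a 2-tuple (half, number) for even
# ones, then flatten lazily -- no intermediate list of lists is built.
result = list(chain.from_iterable(
    (x,) if x % 2 else (x // 2, x) for x in a
))
# result == [1, 1, 2, 3, 2, 4, 5]
```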
<python><list><dictionary><functional-programming><list-comprehension>
2024-08-31 18:30:34
3
869
Ricardo
78,935,739
2,986,153
How to set accuracy within mizani.labels.percent()
<p>Can I use <code>mizani.label.percent</code> or another mizani formatter to present the geom_label with one decimal place? The code below works but rounds to an integer.</p> <pre class="lang-py prettyprint-override"><code>import polars as pl from plotnine import * import mizani.labels as ml df = pl.DataFrame({ &quot;group&quot;: [&quot;A&quot;, &quot;B&quot;], &quot;rate&quot;: [.511, .634] }) ( ggplot(df, aes(x=&quot;group&quot;, y=&quot;rate&quot;, label=&quot;ml.percent(rate)&quot;)) + geom_col() + geom_label() + scale_y_continuous(labels=ml.percent) ) </code></pre> <p><a href="https://i.sstatic.net/F3vwvsVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F3vwvsVo.png" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>( ggplot(df, aes(x=&quot;group&quot;, y=&quot;rate&quot;, label=&quot;ml.percent(rate, accuracy=.1)&quot;)) + geom_col() + geom_label() + scale_y_continuous(labels=ml.percent) ) </code></pre> <p>When doing this, I get the following error:</p> <pre><code>PlotnineError: &quot;Could not evaluate the 'label' mapping: 'ml.percent(rate, accuracy=.1)' (original error: __call__() got an unexpected keyword argument 'accuracy')&quot; </code></pre>
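The error itself suggests that `ml.percent` is an already-constructed formatter instance, whose `__call__` takes values, not options; in recent mizani the configurable formatters are classes one instantiates (e.g. `label_percent`), which is worth checking against the installed version. Failing that, a plain function mapped over the column sidesteps the formatter API entirely; `percent1` below is a hypothetical helper, not part of mizani:

```python
def percent1(values):
    """Format fractions as percentages with one decimal place."""
    return [f'{v * 100:.1f}%' for v in values]

labels = percent1([0.511, 0.634])  # ['51.1%', '63.4%']
# in the plot: aes(..., label="percent1(rate)") with percent1 importable
```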
<python><plotnine>
2024-08-31 17:31:19
1
3,836
Joe
78,935,707
20,591,261
Apply Scaler() on each ID on polars dataframe
<p>I have a dataset with multiple columns and an ID column. Each ID can have different magnitudes and varying sizes across these columns. I want to normalize the columns for each ID separately.</p> <pre><code>import polars as pl from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() df = pl.DataFrame( { &quot;ID&quot; : [1,1,2,2,3,3], &quot;Values&quot; : [1,2,3,4,5,6]} ) </code></pre> <p>If I do this, it fits the scaler on the entire dataframe, and I would like to apply <code>scaler()</code> for each ID.</p> <p>I tried this:</p> <pre><code>( df .with_columns( Value_scaled = scaler.fit_transform(df.select(pl.col(&quot;Value&quot;))).over(&quot;ID&quot;), ) ) </code></pre> <p>But I get: <code>AttributeError: 'numpy.ndarray' object has no attribute 'over'</code></p> <p>And I also tried using <code>group_by()</code></p> <pre><code>( df .group_by( pl.col(&quot;ID&quot;) ).agg( scaler.fit_transform(pl.col(&quot;Value&quot;)).alias(&quot;Value_scaled&quot;) ) ) </code></pre> <p>And I get:</p> <p><code>TypeError: float() argument must be a string or a real number, not 'Expr'</code></p>
<python><scikit-learn><python-polars>
2024-08-31 17:09:38
1
1,195
Simon
78,935,686
2,779,130
'asyncpg.pgproto.pgproto.UUID' object has no attribute 'replace'
<p>I'm building a FastAPI application. I have a Postgres DB that I write data to and read from. I'm using SQLAlchemy to interact with my Postgres database. Here is my model:</p> <pre><code>import uuid from db.database import Base from sqlalchemy import Column, String, Boolean, DateTime, func from sqlalchemy.dialects.postgresql import UUID class User(Base): __tablename__ = 'user' id = Column( UUID(as_uuid=True), primary_key=True, default=uuid.uuid4, unique=True, nullable=False, ) full_name = Column(String, nullable=True) password = Column(String, nullable=True) email = Column(String, unique=True, nullable=False) is_active = Column(Boolean, default=True) created_at = Column(DateTime, server_default=func.now()) updated_at = Column(DateTime, server_default=func.now(), onupdate=func.now(), nullable=True) def __repr__(self): return f'&lt;User {self.email}&gt;' UserTable = User.__table__ </code></pre> <p>And I'm trying to run a query like this:</p> <pre><code>from models.user import User as UserDB import uuid from db.database import database from datetime import datetime from asyncpg.exceptions import UniqueViolationError from typing import Optional async def get_user_by_email(email: str) -&gt; Optional[UserDB]: query = UserTable.select().where(UserTable.c.email == email) try: user_record = await database.fetch_one(query) if user_record: user_dict = dict(user_record) return UserDB(**user_dict) except Exception as e: print(f&quot;Error: {e}&quot;) return None </code></pre> <p>This results in the error below:</p> <pre><code>Error: 'asyncpg.pgproto.pgproto.UUID' object has no attribute 'replace' </code></pre>
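The `.replace` in the traceback is a string method: asyncpg hands back its own UUID type, and somewhere downstream it gets treated as a `str`. One defensive fix is to normalise the value to a stdlib `uuid.UUID` before constructing the model; the helper below is a sketch (the `'id'` key matches the model above, and `str()` round-trips both UUID types):

```python
import uuid

def normalise_record(record):
    """Coerce a row dict's 'id' into a stdlib uuid.UUID before passing it
    to the SQLAlchemy model; adjust the key list for other UUID columns."""
    rec = dict(record)
    if rec.get('id') is not None:
        rec['id'] = uuid.UUID(str(rec['id']))  # str() works for both types
    return rec

row = normalise_record({'id': uuid.uuid4(), 'email': 'a@b.c'})
```

In `get_user_by_email`, this would mean `return UserDB(**normalise_record(user_record))`.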
<python><postgresql><sqlalchemy><asyncpg>
2024-08-31 16:55:35
1
804
Rashid
78,935,532
12,415,855
Change some input-fields in a PDF?
<p>I am trying to change some input fields in a PDF using the following code:</p> <pre><code>from fillpdf import fillpdfs erg = fillpdfs.get_form_fields(&quot;template.pdf&quot;) erg[&quot;ΓΎΓΏ\x00f\x002\x00_\x000\x001\x00[\x000\x00]&quot;] = &quot;TEST1&quot; erg[&quot;ΓΎΓΏ\x00f\x002\x00_\x000\x002\x00[\x000\x00]&quot;] = &quot;TEST2&quot; erg[&quot;ΓΎΓΏ\x00f\x002\x00_\x000\x003\x00[\x000\x00]&quot;] = &quot;TEST3&quot; fillpdfs.write_fillable_pdf(&quot;template.pdf&quot;, &quot;new.pdf&quot;, erg, flatten=False) </code></pre> <p>The PDF looks like this:</p> <p><a href="https://i.sstatic.net/l857AQ9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l857AQ9F.png" alt="enter image description here" /></a></p> <p>Why is this not working? Or is there a better way to solve this?</p>
<python><pdf>
2024-08-31 15:43:41
1
1,515
Rapid1898
78,934,919
6,447,399
FastAPI - passing an input to a LangGraph model and getting an output in JSON/HTML
<p>I have the following LangGraph code. I can't seem to integrate it with FastAPI correctly. I want to send to the graph an input which is defined in the <code>inp</code> function, pass it through a langgraph workflow and then return the output in FastAPI.</p> <p>I can go to the playground and input some text: <a href="http://0.0.0.0:5043/generate/playground/" rel="nofollow noreferrer">http://0.0.0.0:5043/generate/playground/</a></p> <p>I get the following output:</p> <pre><code>{ &quot;node_1&quot;: &quot;{'question': ['helo']} Hi &quot;, &quot;node_2&quot;: &quot;{'question': ['helo']} Hi there&quot; } </code></pre> <p>However, when I go to the docs <a href="http://0.0.0.0:5043/docs" rel="nofollow noreferrer">http://0.0.0.0:5043/docs</a> - I click &quot;Try it out&quot; and I can't see where I can input anything.</p> <p><a href="https://i.sstatic.net/rULtWs9k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rULtWs9k.png" alt="enter image description here" /></a></p> <p>The overall objective is to pass a HTML search box, pass it through langgraph and display a HTML output.</p> <p>Additionally, I am a little unsure on the code since I am using <code>add_routes</code> but in the documentation They use <code>async</code> functions and <code>app.get</code>. 
How can I replace the add_routes and define the pages using <code>app.get/app.post</code>?</p> <pre><code>@app.get(&quot;/items/{item_id}&quot;) async def read_item(item_id): return {&quot;item_id&quot;: item_id} </code></pre> <p>Code:</p> <pre><code>from langgraph.graph import Graph from fastapi import FastAPI import uvicorn from langserve import add_routes from langchain_core.runnables import RunnableLambda workflow = Graph() def function_1(input_1): return str(input_1) + &quot; Hi &quot; def function_2(input_2): return input_2 + &quot;there&quot; workflow = Graph() workflow.add_node(&quot;node_1&quot;, function_1) workflow.add_node(&quot;node_2&quot;, function_2) workflow.add_edge('node_1', 'node_2') workflow.set_entry_point(&quot;node_1&quot;) workflow.set_finish_point(&quot;node_2&quot;) app_graph = workflow.compile() def inp(question: str) -&gt; dict: return {&quot;question&quot;: list({question})} ################################################################################ def out(value: dict): result = value return result final_chain = RunnableLambda(inp) | app_graph | RunnableLambda(out) fastapi_app = FastAPI() add_routes(fastapi_app, final_chain, path = &quot;/generate&quot;) if __name__ == &quot;__main__&quot;: import uvicorn uvicorn.run(fastapi_app, host=&quot;0.0.0.0&quot;, port=5043) # # http://localhost:5043/openai/playground/ ################################################################################ </code></pre>
<python><fastapi><langgraph>
2024-08-31 11:13:00
1
7,189
user113156
78,934,877
19,048,626
How do I formalize a repeated relationship among disjoint groups of classes in python?
<p>I have Python code that has the following shape to it:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass @dataclass class Foo_Data: foo: int class Foo_Processor: def process(self, data: Foo_Data): ... class Foo_Loader: def load(self, file_path: str) -&gt; Foo_Data: ... #---------------------------------------------------------------- @dataclass class Bar_Data: bar: str class Bar_Processor: def process(self, data: Bar_Data): ... class Bar_Loader: def load(self, file_path: str) -&gt; Bar_Data: ... </code></pre> <p>I have several instances of this sort of Data/Processor/Loader setup, and the classes all have the same method signatures modulo the specific class family (Foo, Bar, etc.). Is there a pythonic way of formalizing this relationship among classes to enforce a similar structure if I decide to create a <code>Spam_Data</code>, <code>Spam_Processor</code>, and <code>Spam_Loader</code> family of classes? For instance, I want something to enforce that <code>Spam_Processor</code> has a <code>process</code> method which takes an argument of type <code>Spam_Data</code>. Is there a way of achieving this standardization somehow with abstract classes, generic types, or some other structure?</p> <p>I tried using abstract classes, but <a href="https://mypy.readthedocs.io/en/stable/#" rel="nofollow noreferrer">mypy</a> correctly points out that having all *_Data classes be subclasses of an abstract <code>Data</code> class and similarly having all *_Processor classes be subclasses of an abstract <code>Processor</code> class violates the Liskov substitution principle, since each processor is only designed for its respective Data class (i.e., <code>Foo_Processor</code> can't process <code>Bar_Data</code>, but one would expect that it could if these classes have superclasses <code>Processor</code> and <code>Data</code> which are compatible in this way).</p>
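One way to formalize the relationship without tripping the LSP complaint is a generic `Protocol`: each family binds the type variable to its own data class, and mypy checks conformance structurally rather than through a shared base class. A sketch (the `Spam_*` bodies are illustrative):

```python
from dataclasses import dataclass
from typing import Protocol, TypeVar

D = TypeVar("D")

class Processor(Protocol[D]):
    def process(self, data: D) -> None: ...

class Loader(Protocol[D]):
    def load(self, file_path: str) -> D: ...

@dataclass
class Spam_Data:
    spam: float

class Spam_Processor:
    def process(self, data: Spam_Data) -> None:
        print(f"processing spam={data.spam}")

class Spam_Loader:
    def load(self, file_path: str) -> Spam_Data:
        return Spam_Data(spam=0.0)

# Structural typing: mypy checks these assignments with D bound to Spam_Data,
# so Spam_Processor is never asked to accept Foo_Data or Bar_Data.
processor: Processor[Spam_Data] = Spam_Processor()
loader: Loader[Spam_Data] = Spam_Loader()
```

Because the protocols are generic, `Processor[Foo_Data]` and `Processor[Spam_Data]` are distinct types, so there is no substitutability claim between families to violate.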
<python><python-typing>
2024-08-31 10:46:47
1
611
Alex Duchnowski
78,934,698
10,200,497
Why doesn't fillna work as expected in pandas version 2.1.4?
<p>This is my DataFrame:</p> <pre><code>import pandas as pd df = pd.DataFrame( { 'a': ['long', 'long', 'short', 'long', 'short', 'short', 'short'], 'b': [1, -1, 1, 1, -1, -1, 1], } ) </code></pre> <p>Expected output is creating column <code>a_1</code>:</p> <pre><code> a b a_1 0 long 1 long 1 long -1 long 2 short 1 short 3 long 1 long 4 short -1 long 5 short -1 long 6 short 1 short </code></pre> <p>Logic:</p> <p><code>a_1</code> should be created like this:</p> <pre><code>df.loc[df.b.eq(-1), 'a_1'] = 'long' df['a_1'] = df.a_1.fillna(df.a) </code></pre> <p>This problem is really weird: when I try <code>fillna</code> it does not fill the column as expected. With pandas version 1.2.4 it worked, but with version 2.1.4 it does not. That version is currently the default in Colab, which is where I ran this code.</p>
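As a side note, the two-step `loc` + `fillna` dance can be collapsed so no NaN intermediate (and therefore no version-dependent `fillna` behaviour) is involved at all; a sketch using `Series.mask`:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "a": ["long", "long", "short", "long", "short", "short", "short"],
        "b": [1, -1, 1, 1, -1, -1, 1],
    }
)

# Keep 'a' except where b == -1, which becomes 'long'; no NaN column is ever
# created, so fillna never enters the picture.
df["a_1"] = df["a"].mask(df["b"].eq(-1), "long")
print(df)
```

`numpy.where(df["b"].eq(-1), "long", df["a"])` is an equivalent formulation if NumPy is already in play.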
<python><pandas><dataframe>
2024-08-31 08:49:26
2
2,679
AmirX
78,934,548
12,466,687
How to extract date from a column in Pandas?
<p>I am trying to <strong>extract</strong> only <strong>dates</strong> from a <code>column(Result)</code> of a <code>dataframe</code>. Dates will only start from year 2000 and beyond but the format of date could be any including datetime.</p> <p>What I want is just date.</p> <p>Is there a simple way of doing it with some easy Regex codes ?</p> <p><strong>Example of dataset:</strong></p> <pre><code>date_extract_df = pd.DataFrame({ 'Result':[': XYZ',': 39 YRS/M',': Self',': HOME COLLECTION',': 10593974', ': 012408030006',': 03/08/2024',': 03/Aug/2024 11:50 AM',': 03/Aug/2024 03:24 PM', ' ','31.80','15'], 'Unit':['dfd','dfdfd','tytyt','03/08/2024','fgf','tyt','xcx','ere','sds','03/Aug/2024 03:24 PM', '4545','5656'] }) </code></pre> <pre><code>Expected Result: 0 1 2 3 4 5 6 03/08/2024 7 03/Aug/2024 8 03/Aug/2024 9 03/Aug/2024 10 11 </code></pre> <p>I am not good at Regex and have tried below code:</p> <p><code>date_extract_df.Result.str.extract(r&quot;^[0,1]?\d{1}\/(([0-2]?\d{1})|([3][0,1]{1}))\/(([1]{1}[9]{1}[9]{1}\d{1})|([2-9]{1}\d{3}))$&quot;)</code></p> <p>Is there a way to figure out Rows containing dates in the column and then filter that row to extract date ?</p> <p>I was trying this for a similar approach:</p> <p><code>datetime.datetime.isoformat(date_check['Result'][9])</code></p> <p><code>date_check['Result'].apply(lambda x: datetime.datetime.isoformat(x))</code></p>
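A sketch of the row-filtering idea: extract only substrings shaped like `dd/mm/yyyy` or `dd/Mon/yyyy` with a year 2000 or later, leaving every other row as NaN. The exact pattern is an assumption about the data; widen it if other separators appear:

```python
import pandas as pd

date_extract_df = pd.DataFrame({
    "Result": [": XYZ", ": 39 YRS/M", ": Self", ": HOME COLLECTION",
               ": 10593974", ": 012408030006", ": 03/08/2024",
               ": 03/Aug/2024 11:50 AM", ": 03/Aug/2024 03:24 PM",
               " ", "31.80", "15"],
})

# dd/mm/yyyy or dd/Mon/yyyy with a 2000+ year; rows without such a substring
# come back as NaN, which doubles as the "does this row hold a date?" filter.
pattern = r"(\d{1,2}/(?:[A-Za-z]{3}|\d{1,2})/2\d{3})"
date_extract_df["date"] = date_extract_df["Result"].str.extract(pattern, expand=False)
print(date_extract_df["date"])
```

If real datetimes are needed afterwards, the extracted strings can be parsed with `pd.to_datetime(..., dayfirst=True, format="mixed")` on pandas β‰₯ 2.0 (untested here).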
<python><pandas><date><datetime>
2024-08-31 07:18:15
1
2,357
ViSa
78,934,371
72,437
Wildcard rule for @storage_fn.on_object_finalized?
<p>Currently, this Firebase function works fine</p> <pre><code>@storage_fn.on_object_finalized( bucket='XXX.appspot.com', timeout_sec=540, memory=options.MemoryOption.GB_32 ) def process_audio_file(event: storage_fn.CloudEvent[storage_fn.StorageObjectData]): </code></pre> <p>However, it is not efficient because it is triggered every-time there is update on the bucket, even though it is not an audio file.</p> <p>I wish to have the following trigger rule. But, it doesn't work.</p> <pre><code>@storage_fn.on_object_finalized( bucket='XXX.appspot.com', path='{user_id}/audio/', # Specify the folder within the bucket timeout_sec=540, memory=options.MemoryOption.GB_32 ) def process_audio_file(event: storage_fn.CloudEvent[storage_fn.StorageObjectData]): </code></pre> <p>May I know, is it possible to monitor specific folder using wildcard rule? If yes, what is the correct syntax?</p>
<python><google-cloud-storage>
2024-08-31 05:36:12
0
42,256
Cheok Yan Cheng
78,934,312
1,231,714
How to plot a cumulative sum based on a certain columns
<p>Below is sample data from my dataframe. I am trying to plot the cumulative sales by date (X-axis is date that is sorted, Y-axis is cumulative sum of sales_USD). Each item code needs to have its own curve. How do I do this using pandas?</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Date</th> <th>ItemCode</th> <th>Sales_USD</th> </tr> </thead> <tbody> <tr> <td>8/3/2023</td> <td>A</td> <td>339</td> </tr> <tr> <td>3/27/2023</td> <td>A</td> <td>289</td> </tr> <tr> <td>12/17/2022</td> <td>E</td> <td>516</td> </tr> <tr> <td>8/9/2023</td> <td>C</td> <td>138</td> </tr> <tr> <td>8/21/2022</td> <td>D</td> <td>598</td> </tr> <tr> <td>8/25/2022</td> <td>E</td> <td>674</td> </tr> <tr> <td>1/26/2023</td> <td>C</td> <td>140</td> </tr> <tr> <td>3/12/2023</td> <td>E</td> <td>727</td> </tr> <tr> <td>4/11/2023</td> <td>E</td> <td>166</td> </tr> <tr> <td>10/31/2022</td> <td>D</td> <td>609</td> </tr> <tr> <td>3/15/2023</td> <td>C</td> <td>463</td> </tr> <tr> <td>9/6/2022</td> <td>C</td> <td>929</td> </tr> <tr> <td>7/8/2023</td> <td>D</td> <td>262</td> </tr> <tr> <td>7/1/2023</td> <td>B</td> <td>504</td> </tr> <tr> <td>2/22/2023</td> <td>B</td> <td>345</td> </tr> <tr> <td>10/26/2022</td> <td>C</td> <td>602</td> </tr> <tr> <td>3/16/2023</td> <td>B</td> <td>730</td> </tr> <tr> <td>9/4/2022</td> <td>C</td> <td>831</td> </tr> <tr> <td>9/16/2022</td> <td>D</td> <td>502</td> </tr> <tr> <td>11/21/2022</td> <td>C</td> <td>684</td> </tr> <tr> <td>9/7/2022</td> <td>C</td> <td>704</td> </tr> <tr> <td>7/30/2022</td> <td>C</td> <td>222</td> </tr> <tr> <td>4/5/2023</td> <td>D</td> <td>800</td> </tr> </tbody> </table></div>
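A pandas-only sketch (using a smaller sample of the table for brevity): parse the dates, sort, take a per-item cumulative sum, pivot so each item code becomes a column, and plot; `ffill` keeps each curve defined between its own sale dates.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line in a notebook
import pandas as pd

# A small sample of the table for brevity
df = pd.DataFrame({
    "Date": ["8/21/2022", "10/31/2022", "12/17/2022", "8/25/2022", "3/12/2023"],
    "ItemCode": ["D", "D", "E", "E", "E"],
    "Sales_USD": [598, 609, 516, 674, 727],
})

df["Date"] = pd.to_datetime(df["Date"], format="%m/%d/%Y")
df = df.sort_values("Date")
df["CumSales"] = df.groupby("ItemCode")["Sales_USD"].cumsum()

# One column per item code; ffill carries each running total forward between
# that item's own sale dates so every curve is continuous.
curves = df.pivot(index="Date", columns="ItemCode", values="CumSales").ffill()
ax = curves.plot(title="Cumulative sales by item code")
```

`pivot` assumes at most one sale per (Date, ItemCode) pair; with duplicates, `pivot_table(aggfunc="sum")` followed by `cumsum` per column is the safer route.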
<python><pandas><plot>
2024-08-31 04:28:09
1
1,390
SEU
78,934,295
219,153
Can these two Python functions be replaced by a single generic one taking either a list or a tuple argument?
<p>Can these two Python functions:</p> <pre><code>def negativeList(a): return [-e for e in a] def negativeTuple(a): return tuple(-e for e in a) </code></pre> <p>be replaced by an equivalent single generic function <code>negative(a)</code>?</p>
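Yes: as long as the container's constructor accepts an iterable, the output type can be recovered from the argument itself; a sketch:

```python
def negative(a):
    # Rebuild a container of the same type from the negated elements; works for
    # any type whose constructor accepts an iterable (list, tuple, set, ...).
    return type(a)(-e for e in a)

print(negative([1, -2, 3]))  # -> [-1, 2, -3]
print(negative((1.5, 2)))    # -> (-1.5, -2)
```

This covers `list`, `tuple`, `set`, `frozenset` and similar constructors; for NumPy arrays the plain `-a` expression is already generic, and types whose constructors take other arguments would need special-casing.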
<python><arrays><function><tuples>
2024-08-31 04:13:26
4
8,585
Paul Jurczak
78,933,843
548,123
How install own python package to be used as a command system wide after PEP-668?
<p>I have a utility written in Python that I used to use and need to use it again.</p> <p>It's done as a package I install and an executable script with shebang that will be imported and called the main function.</p> <p>The usage is just like any other utility as in other languages. Just call the executable that wraps the module's main call.</p> <p>But after updating Ubuntu from 22.04 to 24.04, when I try to install the packages system-wide using pip, I get the following message defined by the PEP-668:</p> <pre class="lang-none prettyprint-override"><code>$ /usr/bin/python3 -m pip install -U --user -e ./ error: externally-managed-environment Γ— This environment is externally managed ╰─&gt; To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.12/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. </code></pre> <p>I managed to install the dependencies using the APT package equivalents (python-xyz), but for installing my own package this doesn't work.</p> <p>I am not used to venvs, but they require extra steps before using it like activating the environment etc that I can't conciliate with a system-wide available script. 
I can choose the Python binary to run the script in the shebang, but that's all I think I can do.</p> <p>So, after PEP-668, how to create an application to run system-wide, that can be called from the shell or a <code>.desktop</code> XDG application entry without writing a local Debian (or any other OS) package?</p> <p>The application in question is open source and can be found here: <a href="https://github.com/AllanDaemon/uchoose" rel="nofollow noreferrer">https://github.com/AllanDaemon/uchoose</a></p> <p>Using pip with <code>--break-system-packages</code> like above, alongside with other hacks, did work for me. But it seems too hackish. Maybe it's the ideal solution for this kind of case, but I get the feeling this isn't ideal. <code>/usr/bin/python3 -m pip install --break-system-packages -U --user ./</code></p>
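pipx, the route the error message itself suggests for applications, creates a dedicated venv per tool and exposes its entry points on `PATH` (typically `~/.local/bin`), so the command stays callable system-wide with no manual venv activation and no `--break-system-packages`. A sketch, assuming the project carries packaging metadata (a `pyproject.toml` or `setup.py`) declaring the console script:

```shell
sudo apt install pipx
pipx ensurepath               # ensure ~/.local/bin is on PATH (re-open the shell after)
cd /path/to/uchoose           # your project checkout
pipx install --editable .     # editable install, analogous to pip install -e
```

A `.desktop` entry can then point `Exec=` at the installed script in `~/.local/bin` (or its absolute path), since XDG launchers do not read the interactive shell's `PATH` additions reliably.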
<python><pip><python-packaging>
2024-08-30 21:54:52
2
515
Allan Deamon
78,933,840
8,850,850
How to properly shear a line in a 2D image using interpolation in Python?
<p>I'm trying to apply a shearing transformation to a simple 2D line filter (represented as a binary image) using interpolation methods in Python. However, the resulting image after applying the shear transformation looks almost identical to the input filter, with no visible shearing effect.</p> <p>When I apply the same shearing transformation to a Gaussian function (another 2D image), the shearing works as expected.</p> <p>Here’s the code I’m using to apply the shear:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy import interpolate, ndimage # Define the original 2D function f(x, y) def f(x, y, sigma, muu): dst = np.sqrt(x ** 2 + y ** 2) normal = 1 / (2.0 * np.pi * sigma ** 2) gauss = np.exp(-((dst - muu) ** 2 / (2.0 * sigma ** 2))) * normal return gauss # Dimensions x_dim, y_dim = 5, 19 x_values = np.linspace(-(x_dim // 2), (x_dim // 2), x_dim) y_values = np.linspace(-(y_dim // 2), (y_dim // 2), y_dim) xv, yv = np.meshgrid(x_values, y_values, indexing='ij') # Shearing parameters a = 2 b = 1 interp_method = &quot;linear&quot; # Generate image A and sheared version A_ab A = f(xv, yv, 2, 0) A_ab = f(xv * a + yv * b, yv, 2, 0) # Create a line filter filter = np.zeros((x_dim, y_dim)) filter[:, y_dim // 2] = 1 # Interpolation of A and the filter interp_A = interpolate.RegularGridInterpolator( (x_values, y_values), A, bounds_error=False, fill_value=None, method=interp_method) interp_filter = interpolate.RegularGridInterpolator( (x_values, y_values), filter, bounds_error=False, fill_value=None, method=interp_method) # Shearing transformation out_xv = xv * a + yv * b out_yv = yv sheared_coords_EPI = np.transpose(np.array((out_xv, out_yv)), axes=(1, 2, 0)) sheared_A = interp_A(sheared_coords_EPI) sheared_filter = interp_filter(sheared_coords_EPI) # Shift coordinates to positive space for warping x_shift = x_dim // 2 y_shift = y_dim // 2 shifted_out_xv = out_xv + x_shift shifted_out_yv = out_yv + y_shift # Warp the filter using the new coordinates sheared_filter2 = ndimage.map_coordinates(filter, [shifted_out_xv, shifted_out_yv], order=1, mode='constant', cval=0.0) # Flatten and interpolate the filter points = np.vstack((xv.flatten(), yv.flatten())).T values = filter.flatten() new_points = np.vstack((out_xv.flatten(), out_yv.flatten())).T interpolated_filter = interpolate.griddata(points, values, new_points, method='linear') interpolated_filter = interpolated_filter.reshape((x_dim, y_dim)) # Plotting plt.figure(), plt.imshow(A), plt.title(&quot;A&quot;) plt.figure(), plt.imshow(A_ab), plt.title(&quot;A_ab&quot;) plt.figure(), plt.imshow(sheared_A), plt.title(&quot;sheared_A&quot;) plt.figure(), plt.imshow(filter), plt.title(&quot;filter&quot;) plt.figure(), plt.imshow(sheared_filter), plt.title(&quot;sheared_filter&quot;) plt.figure(), plt.imshow(sheared_filter2), plt.title(&quot;sheared_filter2&quot;) plt.figure(), plt.imshow(interpolated_filter), plt.title(&quot;interpolated_filter&quot;) plt.show() </code></pre> <p><a href="https://i.sstatic.net/QSNDF3jn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QSNDF3jn.png" alt="Original" /></a> <a href="https://i.sstatic.net/DdB7dIW4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdB7dIW4.png" alt="Sheared 1" /></a> <a href="https://i.sstatic.net/W4OfR8wX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W4OfR8wX.png" alt="Sheared 3" /></a></p>
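One property of the setup above worth noticing: `filter[:, y_dim // 2] = 1` makes a line that is constant in `x`, i.e. it runs exactly along the axis being sheared. Since the shear only changes the `x` coordinate (`x*a + y*b`, with `y` untouched), every sample lands back on a point with the same filter value, which is why the output looks identical. Orienting the line across the shear direction makes the effect visible; a minimal pure-NumPy sketch of inverse-mapping shear, independent of the question's interpolators:

```python
import numpy as np

# A 7x7 image with a vertical line in the centre column
h, w = 7, 7
img = np.zeros((h, w))
img[:, 3] = 1.0

# Inverse mapping: each OUTPUT pixel (r, c) samples the INPUT at
# (r, c - shear*(r - h//2)), so the line tilts into a diagonal.
shear = 1.0
rows, cols = np.indices(img.shape)
src_c = np.round(cols - shear * (rows - h // 2)).astype(int)
valid = (src_c >= 0) & (src_c < w)
out = np.zeros_like(img)
out[rows[valid], cols[valid]] = img[rows[valid], src_c[valid]]
```

The same inverse-map convention is what `scipy.ndimage.map_coordinates` expects: coordinates in the *input* array for each output pixel, not forward-transformed output positions.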
<python><numpy><matplotlib><scipy>
2024-08-30 21:54:02
1
427
tag
78,933,681
210,559
Airflow Task Group Execution Order
<p>I am trying to understand when airflow tasks will be run. I do not understand why task_3a is running immediately when running this example.</p> <p>How do I make this sample dag run in this order:</p> <ul> <li>Task 1</li> <li>Task 2 if instructed to run</li> <li>Task 3</li> <li>Task 3a and Task 3b (great if these run in parallel)</li> <li>Task 4</li> <li>Task 5</li> </ul> <pre><code>import logging from airflow.decorators import task, dag, task_group from airflow.utils.dates import days_ago @dag( dag_id='taskflow_conditional_dag', start_date=days_ago(1), schedule_interval=None, catchup=False, ) def my_dag(): logger = logging.getLogger(&quot;airflow.task&quot;) @task def task_1(): logger.info(&quot;Task 1 running&quot;) return &quot;run_task_&quot; @task.branch def branching_task(data): if data == &quot;run_task_2&quot;: return &quot;task_2&quot; return &quot;task_3&quot; @task def task_2(): # Task 2 logic here logger.info(&quot;Task 2 running&quot;) pass @task( trigger_rule=&quot;none_failed&quot; ) def task_3(): logger.info(&quot;Task 3 running&quot;) pass @task( trigger_rule=&quot;none_failed&quot; ) def task_4(): logger.info(&quot;Task 4 running&quot;) pass @task_group() def task_after_3_before_4_group(): @task def task_3a(): logger.info(&quot;Task 3a running.&quot;) pass @task def task_3b(): logger.info(&quot;Task 3b running.&quot;) pass return task_3a() &gt;&gt; task_3b() @task( trigger_rule=&quot;none_failed&quot; ) def task_5(): logger.info(&quot;Task 5 running&quot;) pass data = task_1() decision = branching_task(data) task_2_result = task_2() task_3_result = task_3() task_4_result = task_4() task_5_results = task_5() data &gt;&gt; decision decision &gt;&gt; task_2_result &gt;&gt; task_3_result &gt;&gt; task_after_3_before_4_group() &gt;&gt; task_4_result &gt;&gt; task_5_results dag = my_dag() </code></pre>
<python><python-3.x><airflow>
2024-08-30 20:37:41
1
9,488
Scott
78,933,569
3,045,351
Python py7zr extracting .7z archive differently to Linux command line 7zip
<p>I have created a .7z archive using the usual basic Windows UI. It is my understanding this defaults to relative paths for any archives created. When looking in the archive post creation, all I see is a directory called 'autocfg'. When I experimented with absolute paths this changed (as expected).</p> <p>When unzipping on a Google Colab Linux machine using the below Python script:</p> <pre><code>import py7zr inpath = '/usr/local/lib/python3.10/virtual-environments/autocfg/lib/python3.10/site-packages/autocfg/colorama.7z' outpath = '/usr/local/lib/python3.10/virtual-environments/autocfg/lib/python3.10/site-packages' with py7zr.SevenZipFile(inpath, mode='r') as szfile: szfile.extractall(path=outpath) szfile.close() </code></pre> <p>I get the following output:</p> <pre><code>site-packages - Program Files - My Sub Dir - My Sub Dir 2 - autocfg colorama </code></pre> <p>...this is the directory structure from Windows that has been preserved somehow. When I try using the Linux command line version of 7zip as per the below:</p> <pre><code>!7za e -y /usr/local/lib/python3.10/virtual-environments/autocfg/lib/python3.10/site-packages/autocfg/colorama.7z -o/usr/local/lib/python3.10/virtual-environments/autocfg/lib/python3.10/site-packages </code></pre> <p>...the .7z file 'colorama' unpacks correctly to:</p> <pre><code>site-packages - colorama </code></pre> <p>...why is py7zr exhibiting this behaviour? What can I amend in my code to get my output as per how Linux command line is delivering it?</p>
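The 7-Zip `e` command extracts *without* directory structure, while py7zr's `extractall` honours the relative paths stored in the archive by the Windows GUI (behaviour closer to 7-Zip's `x`). If `e`-style flattening is the goal, one option is to extract into a temporary directory and move the files up; a stdlib sketch (how to handle name collisions is left as a policy decision β€” this version overwrites):

```python
import shutil
import tempfile
from pathlib import Path

def extract_flat(extract, outpath: str) -> None:
    # `extract` is any callable that unpacks an archive into a directory,
    # e.g. lambda p: py7zr.SevenZipFile(inpath).extractall(path=p).
    # All files are then moved to outpath's root, discarding the stored tree.
    out = Path(outpath)
    out.mkdir(parents=True, exist_ok=True)
    with tempfile.TemporaryDirectory() as tmp:
        extract(tmp)
        for f in Path(tmp).rglob("*"):
            if f.is_file():
                shutil.move(str(f), str(out / f.name))
```

Newer py7zr releases also expose selective extraction (e.g. an `extract(targets=...)` API), which may allow pulling just the wanted entries directly; check the version's documentation before relying on it.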
<python><python-3.x><linux><7zip>
2024-08-30 19:52:02
0
4,190
gdogg371
78,933,467
3,156,085
How can access the pointer values passed to and returned by C functions from Python?
<p>Can my python code have access to the actual pointer values received and returned by C functions called through <code>ctypes</code>?</p> <p>If yes, how could I achieve that ?</p> <hr /> <p>I'd like to test the pointer values passed to and returned from a shared library function to test an assignment with pytest (here, to test that <code>strdup</code> didn't return the same pointer but a new pointer to a different address).</p> <p>I've wrapped one of the functions to implement (<code>strdup</code>) in a new C function in a file named <code>wrapped_strdup.c</code> to display the pointer values and memory areas contents:</p> <pre><code>/* ** I'm compiling this into a .so the following way: ** - gcc -o wrapped_strdup.o -c wrapped_strdup.c ** - ar rc wrapped_strdup.a wrapped_strdup.o ** - ranlib wrapped_strdup.a ** - gcc -shared -o wrapped_strdup.so -Wl,--whole-archive wrapped_strdup.a -Wl,--no-whole-archive */ #include &lt;stdio.h&gt; #include &lt;stdlib.h&gt; #include &lt;string.h&gt; char *wrapped_strdup(char *src){ char *dst; printf(&quot;From C:\n&quot;); printf(&quot;- src address: %X, src content: [%s].\n&quot;, src, src); dst = strdup(src); printf(&quot;- dst address: %X, dst content: [%s].\n&quot;, dst, dst); return dst; } </code></pre> <p>I also create in the same directory a pytest test file named <code>test_strdup.py</code>:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 import ctypes import pytest # Setting wrapped_strdup: lib_wrapped_strdup = ctypes.cdll.LoadLibrary(&quot;./wrapped_strdup.so&quot;) wrapped_strdup = lib_wrapped_strdup.wrapped_strdup wrapped_strdup.restype = ctypes.c_char_p wrapped_strdup.argtypes = [ctypes.c_char_p] @pytest.mark.parametrize(&quot;src&quot;, [b&quot;&quot;, b&quot;foo&quot;]) def test_strdup(src: bytes): print(&quot;&quot;) dst = wrapped_strdup(src) print(&quot;From Python:&quot;) print(f&quot;- src address: {hex(id(src))}, src content: [{src!r}].&quot;) print(f&quot;- dst address: 
{hex(id(dst))}, dst content: [{dst!r}].&quot;) assert src == dst assert hex(id(src)) != hex(id(dst)) </code></pre> <p>Then, running my test gives me the following output:</p> <pre><code>$ pytest test_strdup.py --maxfail=2 -v -s =================================== test session starts ==================================== platform linux -- Python 3.12.5, pytest-8.3.2, pluggy-1.5.0 -- /usr/bin/python cachedir: .pytest_cache rootdir: /home/vmonteco/code/MREs/MRe_strdup_test_with_ctypes plugins: anyio-4.4.0, cov-5.0.0, typeguard-4.3.0 collected 2 items test_strdup.py::test_strdup[] From C: - src address: C19BDBE8, src content: []. - dst address: 5977DFA0, dst content: []. From Python: - src address: 0x75bcc19bdbc8, src content: [b'']. - dst address: 0x75bcc19bdbc8, dst content: [b'']. FAILED test_strdup.py::test_strdup[foo] From C: - src address: BF00A990, src content: [foo]. - dst address: 59791030, dst content: [foo]. From Python: - src address: 0x75bcbf00a970, src content: [b'foo']. - dst address: 0x75bcbefc18f0, dst content: [b'foo']. 
PASSED ========================================= FAILURES ========================================= ______________________________________ test_strdup[] _______________________________________ src = b'' @pytest.mark.parametrize(&quot;src&quot;, [b&quot;&quot;, b&quot;foo&quot;]) def test_strdup(src: bytes): print(&quot;&quot;) dst = wrapped_strdup(src) print(&quot;From Python:&quot;) print(f&quot;- src address: {hex(id(src))}, src content: [{src!r}].&quot;) print(f&quot;- dst address: {hex(id(dst))}, dst content: [{dst!r}].&quot;) assert src == dst &gt; assert hex(id(src)) != hex(id(dst)) E AssertionError: assert '0x75bcc19bdbc8' != '0x75bcc19bdbc8' E + where '0x75bcc19bdbc8' = hex(129453562518472) E + where 129453562518472 = id(b'') E + and '0x75bcc19bdbc8' = hex(129453562518472) E + where 129453562518472 = id(b'') test_strdup.py:22: AssertionError ================================= short test summary info ================================== FAILED test_strdup.py::test_strdup[] - AssertionError: assert '0x75bcc19bdbc8' != '0x75bcc19bdbc8' =============================== 1 failed, 1 passed in 0.04s ================================ </code></pre> <p>This output shows two things :</p> <ul> <li>Addresses for variables referencing <code>b''</code> in Python are identical either way (that's the same object) despite addresses being different from the lower level perspective. 
This is consistent with some pure Python tests and I guess it could be some optimization feature.</li> <li>Addresses values from C and Python for <code>dst</code> and <code>src</code> variables don't actually seem related.</li> </ul> <p><em>So the above attempt is actually <strong>unreliable</strong> to check that a function returned a pointer to a different area.</em></p> <hr /> <p>I could also try to retrieve the pointer value itself and make a second test run for checking this part specifically by changing the <code>restype</code> attribute :</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 import ctypes import pytest # Setting wrapped_strdup: lib_wrapped_strdup = ctypes.cdll.LoadLibrary(&quot;./wrapped_strdup.so&quot;) wrapped_strdup = lib_wrapped_strdup.wrapped_strdup wrapped_strdup.restype = ctypes.c_void_p # Note that it's not a c_char_p anymore. wrapped_strdup.argtypes = [ctypes.c_char_p] @pytest.mark.parametrize(&quot;src&quot;, [b&quot;&quot;, b&quot;foo&quot;]) def test_strdup_for_pointers(src: bytes): print(&quot;&quot;) dst = wrapped_strdup(src) print(&quot;From Python:&quot;) print(f&quot;- retrieved dst address: {hex(dst)}.&quot;) </code></pre> <p>The above gives the following output :</p> <pre><code>$ pytest test_strdup_for_pointers.py --maxfail=2 -v -s =================================== test session starts ==================================== platform linux -- Python 3.12.5, pytest-8.3.2, pluggy-1.5.0 -- /usr/bin/python cachedir: .pytest_cache rootdir: /home/vmonteco/code/MREs/MRe_strdup_test_with_ctypes plugins: anyio-4.4.0, cov-5.0.0, typeguard-4.3.0 collected 2 items test_strdup_for_pointers.py::test_strdup_for_pointers[] From C: - src address: E15BDBE8, src content: []. - dst address: 84D4D820, dst content: []. From Python: - retrieved dst address: 0x608984d4d820. PASSED test_strdup_for_pointers.py::test_strdup_for_pointers[foo] From C: - src address: DEC7EA80, src content: [foo]. 
- dst address: 84EA7C40, dst content: [foo]. From Python: - retrieved dst address: 0x608984ea7c40. PASSED ==================================== 2 passed in 0.01s ===================================== </code></pre> <p>Which would give the actual address (or at least something that looks related).</p> <p>But without knowing the value the C function receives, it's not of much help.</p> <hr /> <h2>Addendum: what I came up with from Mark's answer (and that works):</h2> <p>Here's a test that implements both the solution suggested in the accepted answer :</p> <pre><code>#!/usr/bin/env python3 import ctypes import pytest # Setting libc: libc = ctypes.cdll.LoadLibrary(&quot;libc.so.6&quot;) strlen = libc.strlen strlen.restype = ctypes.c_size_t strlen.argtypes = (ctypes.c_char_p,) # Setting wrapped_strdup: lib_wrapped_strdup = ctypes.cdll.LoadLibrary(&quot;./wrapped_strdup.so&quot;) wrapped_strdup = lib_wrapped_strdup.wrapped_strdup # Restype will be set directly in the tests. wrapped_strdup.argtypes = (ctypes.c_char_p,) @pytest.mark.parametrize(&quot;src&quot;, [b&quot;&quot;, b&quot;foo&quot;]) def test_strdup(src: bytes): print(&quot;&quot;) # Just to make pytest output more readable. # Set expected result type. wrapped_strdup.restype = ctypes.POINTER(ctypes.c_char) # Create the src buffer and retrieve its address. src_buffer = ctypes.create_string_buffer(src) src_addr = ctypes.addressof(src_buffer) src_content = src_buffer[:strlen(src_buffer)] # Run function to test. dst = wrapped_strdup(src_buffer) # Retrieve result address and content. dst_addr = ctypes.addressof(dst.contents) dst_content = dst[: strlen(dst)] # Assertions. assert src_content == dst_content assert src_addr != dst_addr # Output. print(&quot;From Python:&quot;) print(f&quot;- Src content: {src_content!r}. Src address: {src_addr:X}.&quot;) print(f&quot;- Dst content: {dst_content!r}. 
Dst address: {dst_addr:X}.&quot;) @pytest.mark.parametrize(&quot;src&quot;, [b&quot;&quot;, b&quot;foo&quot;]) def test_strdup_alternative(src: bytes): print(&quot;&quot;) # Just to make pytest output more readable. # Set expected result type. wrapped_strdup.restype = ctypes.c_void_p # Create the src buffer and retrieve its address. src_buffer = ctypes.create_string_buffer(src) src_addr = ctypes.addressof(src_buffer) src_content = src_buffer[:strlen(src_buffer)] # Run function to test. dst = wrapped_strdup(src_buffer) # Retrieve result address and content. dst_addr = dst # cast dst: dst_pointer = ctypes.cast(dst, ctypes.POINTER(ctypes.c_char)) dst_content = dst_pointer[:strlen(dst_pointer)] # Assertions. assert src_content == dst_content assert src_addr != dst_addr # Output. print(&quot;From Python:&quot;) print(f&quot;- Src content: {src_content!r}. Src address: {src_addr:X}.&quot;) print(f&quot;- Dst content: {dst_content!r}. Dst address: {dst_addr:X}.&quot;) </code></pre> <p>Output :</p> <pre><code>$ pytest test_strdup.py -v -s =============================== test session starts =============================== platform linux -- Python 3.10.14, pytest-8.3.2, pluggy-1.5.0 -- /home/vmonteco/.pyenv/versions/3.10.14/envs/strduo_test/bin/python3.10 cachedir: .pytest_cache rootdir: /home/vmonteco/code/MREs/MRe_strdup_test_with_ctypes plugins: anyio-4.4.0, stub-1.1.0 collected 4 items test_strdup.py::test_strdup[] From C: - src address: 661BBE90, src content: []. - dst address: F5D8A7A0, dst content: []. From Python: - Src content: b''. Src address: 7C39661BBE90. - Dst content: b''. Dst address: 57B4F5D8A7A0. PASSED test_strdup.py::test_strdup[foo] From C: - src address: 661BBE90, src content: [foo]. - dst address: F5E03340, dst content: [foo]. From Python: - Src content: b'foo'. Src address: 7C39661BBE90. - Dst content: b'foo'. Dst address: 57B4F5E03340. PASSED test_strdup.py::test_strdup_alternative[] From C: - src address: 661BBE90, src content: []. 
- dst address: F5B0AC50, dst content: []. From Python: - Src content: b''. Src address: 7C39661BBE90. - Dst content: b''. Dst address: 57B4F5B0AC50. PASSED test_strdup.py::test_strdup_alternative[foo] From C: - src address: 661BBE90, src content: [foo]. - dst address: F5BF9C20, dst content: [foo]. From Python: - Src content: b'foo'. Src address: 7C39661BBE90. - Dst content: b'foo'. Dst address: 57B4F5BF9C20. PASSED ================================ 4 passed in 0.01s ================================ </code></pre>
<python><ctypes>
2024-08-30 19:20:11
1
15,848
vmonteco
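The record above turns on two interchangeable ctypes idioms. A minimal, dependency-free sketch of the second one (declare the restype as `c_void_p`, then `ctypes.cast` the raw address back into a readable pointer); no compiled `wrapped_strdup.so` is needed here, a Python-side buffer stands in for the C allocation:

```python
import ctypes

# Stand-in for memory a C function returned; in the real test this integer
# would come from wrapped_strdup declared with restype = ctypes.c_void_p.
buf = ctypes.create_string_buffer(b"foo")
addr = ctypes.addressof(buf)  # a plain Python int, like a c_void_p result

# Cast the raw address back into a char pointer to read the bytes.
ptr = ctypes.cast(ctypes.c_void_p(addr), ctypes.POINTER(ctypes.c_char))
content = ptr[:3]
print(content)  # b'foo'
```

The same cast is what the `test_strdup_alternative` variant above performs on the `c_void_p` return value before slicing out the string content.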
78,933,243
5,305,512
Python package installed, but getting import error in Jupyter notebook
<p>Fresh install of Python 3.12.5 on Mac OS.</p> <p>Getting import error in Jupyter notebook:</p> <p><a href="https://i.sstatic.net/J85zwz2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J85zwz2C.png" alt="enter image description here" /></a></p> <p>But works fine in terminal:</p> <p><a href="https://i.sstatic.net/8MJFNNVT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MJFNNVT.png" alt="enter image description here" /></a></p> <p>I am not using any virtual environment either.</p> <p>Here's where terminal shows python and pip are installed:</p> <p><a href="https://i.sstatic.net/3KmZHy5l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KmZHy5l.png" alt="enter image description here" /></a></p> <p>And here's where Jupyter notebook says python and pip are installed:</p> <p><a href="https://i.sstatic.net/TIcgLHJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TIcgLHJj.png" alt="enter image description here" /></a></p> <p>So, its pointing to the same path (I checked in case Jupyter might be using a different pip internally, but that seems to not be the case).</p> <p>So what's wrong? And how do I fix it?</p> <hr /> <p>What seemed to work is this:</p> <pre><code>import sys !{sys.executable} -m pip install tiktoken </code></pre> <p>But previously I've always used <code>!pip install &lt;package&gt;</code> in Jupyter notebooks, or I do <code>pip install &lt;package&gt;</code> in terminal and am able to import the package inside Jupyter notebook. It's only after freshly installing Python (to upgrade the Python version) that I'm noticing this behaviour. Does anybody know how I can get back to installing packages the simpler way like I used to before? I don't want to have to run <code>!{sys.executable} -m pip install &lt;package&gt;</code> every time I want to install a new package; not to mention that I would also like to be able to install packages from terminal and then import them in Jupyter notebook.</p>
<python><jupyter-notebook><python-import><importerror>
2024-08-30 17:52:28
1
3,764
Kristada673
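The usual cause of the symptom above is that a bare `!pip` resolves through the shell `PATH` while the notebook kernel runs a different interpreter. A quick stdlib-only diagnostic cell (the printed paths are machine-specific):

```python
import shutil
import sys

# The interpreter this kernel/script is actually running.
kernel_python = sys.executable
# The pip a bare `!pip install ...` would pick up from PATH.
path_pip = shutil.which("pip")

print(kernel_python)
print(path_pip)
# `{sys.executable} -m pip install ...` always targets kernel_python,
# which is why that workaround in the question fixed the import.
```

If the two printed locations disagree, aligning them (or always using `python -m pip` from the interpreter you intend to use) restores the simpler workflow.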
78,933,232
20,591,261
Keep training pytorch model on new data
<p>I'm working on a text classification task and have decided to use a PyTorch model for this purpose. The process mainly involves the following steps:</p> <ol> <li>Load and process the text.</li> <li>Use a TF-IDF Vectorizer.</li> <li>Build the neural network and save the TF-IDF Vectorizer and model to predict new data.</li> </ol> <p>However, every day I need to classify new comments and correct any wrong classifications.</p> <p>Currently, my approach is to add the new comments with the correct classification to the dataset and retrain the entire model. This process is time-consuming, and the new comments can be lost during validation. I would like to create a new dataset with the newly classified texts and continue training over this new data (the new comments are classified manually, so each label is correct).</p> <p>Using GPT and some online code, i write the desired process, however, im not sure if its working as expected, or im making some silly mistakes that should not happen.</p> <p>So the mains questions are:</p> <ol> <li>How could i check if the propossed way to solve this problem work as i expect?</li> <li>What can i do with the vectorizer when it face new tokens, can i just do a <code>.fit_transform()</code> or i would loose the original vectorizer?</li> </ol> <p>Here its the full training process:</p> <pre><code>import torch from torch import nn from torch.utils.data import Dataset, DataLoader, random_split from sklearn.preprocessing import LabelEncoder import polars as pl from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import TfidfVectorizer import joblib set1 = ( pl .read_csv( &quot;set1.txt&quot;, separator=&quot;;&quot;, has_header=False, new_columns=[&quot;text&quot;,&quot;label&quot;] ) ) # since the dateset its unbalanced, im going to force to have more balance fear_df = set1.filter(pl.col(&quot;label&quot;) == &quot;fear&quot;) joy_df = set1.filter(pl.col(&quot;label&quot;) == 
&quot;joy&quot;).sample(n=2500) sadness_df = set1.filter(pl.col(&quot;label&quot;) == &quot;sadness&quot;).sample(n=2500) anger_df = set1.filter(pl.col(&quot;label&quot;) == &quot;anger&quot;) train_df = pl.concat([fear_df,joy_df,sadness_df,anger_df]) &quot;&quot;&quot; The text its already clean, so im going to change the labels to numeric and then split it on train, test ,val &quot;&quot;&quot; label_mapping = { &quot;anger&quot;: 0, &quot;fear&quot;: 1, &quot;joy&quot;: 2, &quot;sadness&quot;: 3 } train_mapped = ( train_df .with_columns( pl.col(&quot;label&quot;).replace_strict(label_mapping, default=&quot;other&quot;).cast(pl.Int16) ) ) train_set, pre_Test = train_test_split(train_mapped, test_size=0.4, random_state=42, stratify=train_mapped[&quot;label&quot;]) test_set, val_set = train_test_split(pre_Test, test_size=0.5, random_state=42, stratify=pre_Test[&quot;label&quot;]) # Vectorize text data using TF-IDF vectorizer = TfidfVectorizer(max_features=30000, ngram_range=(1, 2)) X_train_tfidf = vectorizer.fit_transform(train_set['text']).toarray() X_val_tfidf = vectorizer.transform(val_set['text']).toarray() X_test_tfidf = vectorizer.transform(test_set['text']).toarray() y_train = train_set['label'] y_val = val_set['label'] y_test = test_set['label'] class TextDataset(Dataset): def __init__(self, texts, labels): self.texts = texts self.labels = labels def __len__(self): return len(self.texts) def __getitem__(self, idx): text = self.texts[idx] label = self.labels[idx] return text, label train_dataset = TextDataset(X_train_tfidf, y_train) val_dataset = TextDataset(X_val_tfidf, y_val) test_dataset = TextDataset(X_test_tfidf, y_test) batch_size = 32 train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True) val_loader = DataLoader(val_dataset, batch_size=batch_size) test_loader = DataLoader(test_dataset, batch_size=batch_size) class TextClassificationModel(nn.Module): def __init__(self, input_dim, num_classes): super(TextClassificationModel, 
self).__init__() self.fc1 = nn.Linear(input_dim, 64) self.dropout1 = nn.Dropout(0.5) self.fc2 = nn.Linear(64, 32) self.dropout2 = nn.Dropout(0.5) self.fc3 = nn.Linear(32, num_classes) def forward(self, x): x = torch.relu(self.fc1(x)) x = self.dropout1(x) x = torch.relu(self.fc2(x)) x = self.dropout2(x) x = torch.softmax(self.fc3(x), dim=1) return x input_dim = X_train_tfidf.shape[1] model = TextClassificationModel(input_dim, 4) # Define loss and optimizer criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adamax(model.parameters()) # Training loop num_epochs = 17 best_val_acc = 0.0 best_model_path = &quot;modelbest.pth&quot; for epoch in range(num_epochs): model.train() for texts, labels in train_loader: texts, labels = texts.float(), labels.long() outputs = model(texts) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() # Validation model.eval() correct, total = 0, 0 with torch.no_grad(): for texts, labels in val_loader: texts, labels = texts.float(), labels.long() outputs = model(texts) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() val_acc = correct / total if val_acc &gt; best_val_acc: best_val_acc = val_acc torch.save(model.state_dict(), best_model_path) print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}, Val Acc: {val_acc:.4f}') # Load the best model model.load_state_dict(torch.load(best_model_path)) # Load the best model model.load_state_dict(torch.load(best_model_path)) # Test the model model.eval() correct, total = 0, 0 with torch.no_grad(): for texts, labels in test_loader: texts, labels = texts.float(), labels.long() outputs = model(texts) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() test_acc = correct / total print(f'Test Acc: {test_acc:.3f}') # Save the TF-IDF vectorizer vectorizer_path = &quot;tfidf_vectorizer.pkl&quot; joblib.dump(vectorizer, vectorizer_path) # 
Save the PyTorch model model_path = &quot;text_classification_model.pth&quot; torch.save(model.state_dict(), model_path) </code></pre> <p>Proposed code:</p> <pre><code>import torch import joblib import polars as pl from sklearn.model_selection import train_test_split from torch import nn from torch.utils.data import Dataset, DataLoader # Load the saved TF-IDF vectorizer vectorizer_path = &quot;tfidf_vectorizer.pkl&quot; vectorizer = joblib.load(vectorizer_path) input_dim = len(vectorizer.get_feature_names_out()) class TextClassificationModel(nn.Module): def __init__(self, input_dim, num_classes): super(TextClassificationModel, self).__init__() self.fc1 = nn.Linear(input_dim, 64) self.dropout1 = nn.Dropout(0.5) self.fc2 = nn.Linear(64, 32) self.dropout2 = nn.Dropout(0.5) self.fc3 = nn.Linear(32, num_classes) def forward(self, x): x = torch.relu(self.fc1(x)) x = self.dropout1(x) x = torch.relu(self.fc2(x)) x = self.dropout2(x) x = torch.softmax(self.fc3(x), dim=1) return x # Load the saved PyTorch model model_path = &quot;text_classification_model.pth&quot; model = TextClassificationModel(input_dim, 4) model.load_state_dict(torch.load(model_path)) # Map labels to numeric values label_mapping = {&quot;anger&quot;: 0, &quot;fear&quot;: 1, &quot;joy&quot;: 2, &quot;sadness&quot;: 3} sentiments = [&quot;fear&quot;,&quot;joy&quot;,&quot;sadness&quot;,&quot;anger&quot;] new_data = ( pl .read_csv( &quot;set2.txt&quot;, separator=&quot;;&quot;, has_header=False, new_columns=[&quot;text&quot;,&quot;label&quot;] ) .filter(pl.col(&quot;label&quot;).is_in(sentiments)) .with_columns( pl.col(&quot;label&quot;).replace_strict(label_mapping, default=&quot;other&quot;).cast(pl.Int16) ) ) # Vectorize the new text data using the loaded TF-IDF vectorizer X_new = vectorizer.transform(new_data['text']).toarray() y_new = new_data['label'] class TextDataset(Dataset): def __init__(self, texts, labels): self.texts = texts self.labels = labels def __len__(self): return len(self.texts) def 
__getitem__(self, idx): text = self.texts[idx] label = self.labels[idx] return text, label batch_size = 10 # Create DataLoader for the new training data new_train_dataset = TextDataset(X_new, y_new) new_train_loader = DataLoader(new_train_dataset, batch_size=batch_size, shuffle=True) # Define loss and optimizer criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adamax(model.parameters()) num_epochs = 5 new_best_model_path = &quot;modelbest.pth&quot; for epoch in range(num_epochs): model.train() for texts, labels in new_train_loader: texts, labels = texts.float(), labels.long() outputs = model(texts) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() torch.save(model.state_dict(), new_best_model_path) print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}') # Save the PyTorch model new_best_model_path = &quot;new_moedl.pth&quot; torch.save(model.state_dict(), new_best_model_path) </code></pre> <p>The dataset can be found <a href="https://www.kaggle.com/datasets/praveengovi/emotions-dataset-for-nlp" rel="nofollow noreferrer">here</a></p>
<python><scikit-learn><pytorch><nlp><python-polars>
2024-08-30 17:47:59
1
1,195
Simon
78,933,210
1,275,942
Is string slice-by-copy a CPython implementation detail or part of spec?
<p>Python does slice-by-copy on strings: <a href="https://stackoverflow.com/questions/5722006/does-python-do-slice-by-reference-on-strings/">Does Python do slice-by-reference on strings?</a></p> <p>Is this something that all implementations of Python need to respect, or is it just a detail of the CPython implementation?</p>
<python><specifications><cpython>
2024-08-30 17:41:40
1
899
Kaia
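A small demonstration of what is observable here: the *value* of a string slice is fully specified by the language, while object *identity* of slices is left to the implementation, so portable code can rely only on the former:

```python
s = "hello world"
t = s[0:5]

# The value of a slice is guaranteed by the language reference.
print(t)        # hello

# Identity is NOT specified: an implementation may intern or reuse
# objects (e.g. for s[:]), so never depend on `t is s` or `s[:] is s`.
print(t is s)   # False here, but that is an implementation detail
```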
78,933,132
1,275,942
Type-hinting a generator: send_type Any or None?
<p>I have a generator that does not use <code>send()</code> values. Should I type its <code>send_value</code> as <code>Any</code> or <code>None</code>?</p> <pre><code>import typing as t def pi_generator() -&gt; t.Generator[int, ???, None]: pi = &quot;3141592&quot; for digit in pi: yield int(digit) pi_gen = pi_generator() next(pi_gen) # 3 pi_gen.send('foo') # 1 pi_gen.send(pi_gen) # 4 </code></pre> <p>Reasons I see for <code>Any</code>:</p> <ul> <li>The generator works perfectly fine with <code>send()</code> for any type, so if somebody had a reason to use <code>.send(1)</code> with this generator, it's totally fine.</li> <li>Methods' arguments' types should be general, and <code>.send(x: Any)</code> is more general than <code>.send(x: None)</code>.</li> </ul> <p>Reasons I see for <code>None</code>:</p> <ul> <li>Return types should be specific, and &quot;Generator that never uses send&quot; is a more specific type than &quot;Any kind of generator&quot;.</li> <li>If someone is using <code>.send()</code> to this generator, it's likely they're misunderstanding what it does and the type hint should inform them.</li> </ul>
<python><generator><python-typing>
2024-08-30 17:15:22
1
899
Kaia
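For reference, the annotation under discussion has the shape `Generator[YieldType, SendType, ReturnType]`; the `None` choice is the one the typing documentation uses for generators that ignore `send()`, and it changes nothing at runtime either way:

```python
from typing import Generator

def pi_generator() -> Generator[int, None, None]:
    for digit in "3141592":
        yield int(digit)

g = pi_generator()
first = next(g)
second = next(g)
print(first, second)  # 3 1

# At runtime send() still works regardless of the annotation; the hint
# informs the type checker, not the interpreter.
third = g.send("ignored")
print(third)  # 4
```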
78,933,115
12,415,855
File not printed using os.startfile?
<p>I am trying to print a document on Windows using the following code:</p> <pre><code>import os import sys os.startfile(r&quot;D:/DEV/Python-Diverses/os/testb.png&quot;, &quot;print&quot;) </code></pre> <p>But nothing happens at all: the program runs through without errors, yet nothing prints.</p> <p>When I just open the file with</p> <pre><code>os.startfile(r&quot;D:/DEV/Python-Diverses/os/testb.png&quot;) </code></pre> <p>it is opened in the image-viewer program (Windows image viewer).</p> <p>Any ideas why the printing is not working?</p>
<python><windows><python-os>
2024-08-30 17:10:55
1
1,515
Rapid1898
78,933,101
5,625,497
define multiple methods for comparison at once using the same principle
<p>I am building a binary tree in Python, defining the node as a class. I wanted the node to have a value and be comparable to other nodes in order to, for example, sort a list of them.</p> <p>I wanted to know if there is a more elegant way to avoid explicitly defining all comparison methods (<code>__le__</code>, <code>__lt__</code>, <code>__eq__</code>). I tried this and it works:</p> <pre><code>class Node: def __init__(self, value, name=None, left=None, right=None): self.value=value self.name = name self.right, self.left = right, left def is_leaf(self): return self.right is None and self.left is None def __le__(self, other): return self.value.__le__(other.value) # or (self.value &lt;= other.value) # same for __lt__, __eq__ </code></pre> <p>But I wanted to reuse the code. More generally, I want the object to reference <code>self.value</code> for a list of dunder methods, without explicitly coding each one.</p> <p>I considered forcing inheritance from the same base class, e.g. if values are numbers: <code>class Node(float)</code>, or, in the init:</p> <pre><code> def __init__(self, value, name=None, left=None, right=None): type(value).__init__(value) # etc. </code></pre> <p>But both seem like bad practice to me, since they add a lot of potentially unexpected behaviours.</p> <p>Is there a Pythonic/elegant way to avoid explicitly defining all comparison methods in the class when they all obey a common standard?</p>
<python><oop><inheritance>
2024-08-30 17:04:36
1
4,353
Tarifazo
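One standard-library answer to the question above is `functools.total_ordering`, which derives the remaining rich comparisons from `__eq__` plus any one ordering method, with no inheritance from the value type. A sketch on a stripped-down `Node`:

```python
from functools import total_ordering

@total_ordering
class Node:
    def __init__(self, value, name=None, left=None, right=None):
        self.value = value
        self.name = name
        self.left, self.right = left, right

    # Define equality and ONE ordering; total_ordering fills in the rest
    # (__le__, __gt__, __ge__) automatically.
    def __eq__(self, other):
        return self.value == other.value

    def __lt__(self, other):
        return self.value < other.value

nodes = [Node(3), Node(1), Node(2)]
ordered = [n.value for n in sorted(nodes)]
print(ordered)                                  # [1, 2, 3]
print(Node(1) <= Node(2), Node(3) >= Node(2))   # True True
```

One caveat: defining `__eq__` sets `__hash__` to `None`, so add an explicit `__hash__` if nodes must live in sets or dict keys.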
78,932,994
4,875,641
JSON interpretation of chunked data
<p>I need to locate the parameters for a specific object returned from a remote server in JSON format. But the number of objects on the server is always increasing, so the JSON responses get larger and larger. I anticipate that when there are hundreds of thousands of objects, the JSON response will exceed the memory capacity to hold them all at once in a json.load(). So it would probably be necessary to get the remote data in chunks instead. But will I have to write my own parser in this case, as the JSON input will be spanning chunks?</p> <p>In the example below, I must locate the object named 'ID-98349' among all the objects returned. Once I find it, I will be extracting many of the key/value parameters associated with this specific object. But of course, given this is coming as a chunked stream, the string of data could be split across chunks at any point in the string.</p> <p>Is there a set of tools/functions to allow me to find a JSON item arriving in a stream?</p> <pre><code>{ [&quot;objects&quot;]: [ { &quot;Reference&quot;: &quot;obj-1&quot;, &quot;name&quot;: &quot;ID-123&quot;, &lt; a long list of key/value parameters&gt; } { &quot;Reference&quot;: &quot;obj-2&quot;, &quot;name&quot;: &quot;ID-567&quot;, &lt; a long list of key/value parameters&gt; } ... { &quot;Reference&quot;: &quot;obj-4982&quot;, &quot;name&quot;: &quot;ID-98349&quot;, &lt; a long list of key/value parameters&gt; } ] } </code></pre>
<python><json><list><stream><chunks>
2024-08-30 16:29:44
0
377
Jay Mosk
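For full documents the usual tool is an incremental parser such as the third-party `ijson` package, which yields items as they complete rather than loading everything. The underlying buffering idea can be sketched with only the stdlib: accumulate chunks and pull out whole objects with `json.JSONDecoder.raw_decode`. This sketch assumes a stream of concatenated objects rather than the single wrapped array in the question:

```python
import json

def iter_objects(chunks):
    """Yield complete JSON objects from a stream of text chunks."""
    decoder = json.JSONDecoder()
    buf = ""
    for chunk in chunks:
        buf += chunk
        buf = buf.lstrip()
        while buf:
            try:
                obj, end = decoder.raw_decode(buf)
            except json.JSONDecodeError:
                break  # current object not complete yet; read more
            yield obj
            buf = buf[end:].lstrip()

# Fake network chunks that split objects at arbitrary points.
stream = '{"name": "ID-123"} {"name": "ID-98349"} {"name": "ID-7"}'
chunks = [stream[i:i + 5] for i in range(0, len(stream), 5)]
hit = next(o for o in iter_objects(chunks) if o["name"] == "ID-98349")
print(hit["name"])  # ID-98349
```

Memory stays bounded by the largest single object plus one chunk, not by the whole response.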
78,932,929
7,713,770
How to Resolve the Issue: Django with Visual Studio Code Changing Template Without Effect?
<p>I have a Django app, and I am using Visual Studio Code as my editor. I have implemented functionality for recovering passwords via an email template. I edited the template to see what effect it would have on the email, but the changes had no effect. I even deleted the email template, but I still received the old email template in my inbox.</p> <p>The email template is within a folder named templates in my account app.</p> <p>Here is my email template (password_reset_email.html):</p> <pre><code>&lt;!-- templates/password_reset_email.html --&gt; &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;meta charset=&quot;utf-8&quot;&gt; &lt;title&gt;Password Reset Request&lt;/title&gt; &lt;style&gt; /* Add any styles you want for your email */ &lt;/style&gt; &lt;/head&gt; &lt;body&gt; &lt;p&gt;Je hebt om een wachtwoord reset aanvraag gevraagd voor je account.&lt;/p&gt;&lt;br&gt;&lt;p&gt; Klik de link beneden om je wachtwoord te veranderen:&lt;/p&gt; &lt;p&gt;Als je niet om een wachtwoord reset hebt gevraag, neem dan contact op met:&lt;/p&gt; &lt;br&gt;&lt;p&gt; test@test.nl. En klik niet op de link.&lt;/p&gt; &lt;p&gt;Met vriendelijke groet,&lt;br&gt;Het app team&lt;/p&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>But in my email box I still got this email:</p> <pre><code>Dag test gebruiker , Je hebt om een wachtwoord reset aanvraag gevraagd voor je account. Klik de link beneden om je wachtwoord te veranderen: Reset je wachtwoord Als je niet om een wachtwoord reset hebt gevraag, neem dan contact op met: test@nvwa.nl. En klik niet op de link. 
Met vriendelijke groet, Het test app team </code></pre> <p>And this is how the views.py looks:</p> <pre><code>class PasswordResetRequestView(generics.GenericAPIView): permission_classes = [permissions.AllowAny] serializer_class = PasswordResetSerializer def post(self, request, *args, **kwargs): serializer = self.get_serializer(data=request.data) serializer.is_valid(raise_exception=True) #serializer.save() email = serializer.validated_data['email'] user = Account.objects.filter(email=email).first() if user: uid = urlsafe_base64_encode(force_bytes(user.pk)) token = default_token_generator.make_token(user) # React Native App URL reset_link = f&quot;https://test.azurewebsites.net/reset-password/{uid}/{token}/&quot; # Render the email template with the reset link html_content = render_to_string('password_reset_email.html', { 'user': user, 'reset_link': reset_link, }) print(html_content) plain_text_content = strip_tags(html_content) # Code to send the email to user # Send the email send_mail( subject=' aanvraag ingediend', message=plain_text_content, # Plain text version from_email='niels.fischereinie@gmail.com', # Replace with your &quot;from&quot; email recipient_list=[email], # Send email to the user html_message=html_content, # HTML version fail_silently=False, ) print(f&quot;Password reset link: {reset_link}&quot;) # For debugging print(&quot;password sent&quot;) return Response({&quot;message&quot;: &quot;If an account with the provided email exists, a password reset link has been sent.&quot;}, status=status.HTTP_200_OK) </code></pre> <p>Even after deleting the template, I still receive the old email content. 
How is this possible?</p> <p>What I have tried?</p> <ul> <li>I restarted the django server.</li> <li>I closed and reopened visual studio code.</li> <li>python manage.py clear_cache</li> <li>I opened the app in private mode in the browser</li> <li>IN the settings.py file I added debug:True TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [BASE_DIR / 'templates'], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], 'debug':True, }, }, ]</li> </ul> <p>email settings in settings.py file:</p> <pre><code>#Email settings: # Email backend configuration EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' # SMTP host configuration EMAIL_HOST = 'smtp.gmail.com' EMAIL_PORT = 587 EMAIL_HOST_USER = 'test@gmail.com' EMAIL_HOST_PASSWORD = 'password' </code></pre> <p>I created an registration folder and I put the template: password_reset_email.html in it.</p> <p>And in the view.py I changed the path:</p> <pre><code> html_content = render_to_string('templates/registration/password_reset_email.html', { 'user': user, 'reset_link': reset_link, }) </code></pre> <p>And also in the serializers.py I changed the path:</p> <pre><code> # Render the email template email_subject = 'Password Reset Request' email_body = render_to_string('templates/registration/password_reset_email.html', { 'user': user, 'reset_link': reset_link }) </code></pre> <p>Content of the template:</p> <pre><code>&lt;h3&gt;Hello&lt;/h3&gt; </code></pre> <p>Still get the old template??</p> <p>I also restarted the django server.</p> <p>I realy don't know what else to change?</p> <p>And I only have one template with that name. That is for sure.</p> <p>Is there maybe some issue with visual studio code?</p> <p>Question: how to change the email template with effect?</p>
<python><django><visual-studio-code><django-templates>
2024-08-30 16:12:57
1
3,991
mightycode Newton
78,932,922
390,897
How to add padding to matplotlib plot when aspect is "equal"?
<p>Whenever I make plots with plt.gca().set_aspect(&quot;equal&quot;), the plot can become scrunched to the point where it appears collapse. How can I add some extra padding or retain padding while maintaining the aspect ratio?</p> <p>A few examples:</p> <pre><code># A Line plt.plot([0, 0], [0, 100]) ax = plt.gca() ax.set_aspect(&quot;equal&quot;) plt.show() </code></pre> <p><a href="https://i.sstatic.net/0bRNzd8C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0bRNzd8C.png" alt="a line" /></a></p> <pre><code># A Line with a &quot;tail&quot; plt.plot([0, 0, 0, 5], [0, 100, 200, 200]) ax = plt.gca() ax.set_aspect(&quot;equal&quot;) plt.show() </code></pre> <p><a href="https://i.sstatic.net/fzKjveS6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzKjveS6.png" alt="enter image description here" /></a></p> <pre><code># Two lines plt.plot([0, 0], [0, 50]) plt.plot([0, 0], [50, 100]) ax = plt.gca() ax.set_aspect(&quot;equal&quot;) plt.show() </code></pre> <p><a href="https://i.sstatic.net/Z4tpbTkm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4tpbTkm.png" alt="enter image description here" /></a></p>
<python><matplotlib>
2024-08-30 16:11:25
2
33,893
fny
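Two knobs address the scrunching in the question directly: `adjustable="datalim"` tells equal-aspect mode to widen the data limits instead of shrinking the axes box, and `ax.margins()` adds fractional padding. A headless sketch (Agg backend so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 0], [0, 100])
# Keep the axes box; expand the data limits to honour the 1:1 aspect.
ax.set_aspect("equal", adjustable="datalim")
ax.margins(x=0.1, y=0.05)  # fractional padding around the data

fig.canvas.draw()  # the aspect adjustment is applied at draw time
xmin, xmax = ax.get_xlim()
print(xmax - xmin > 10)  # x range widened to match the tall y range
```

With the default `adjustable="box"` the axes box itself would be squeezed to the data's aspect, which is what produces the collapsed-looking plots above.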
78,932,725
8,849,755
Pandas sort one column by custom order and the other naturally
<p>Consider the following code:</p> <pre class="lang-py prettyprint-override"><code>import pandas import numpy strs = ['custom','sort']*5 df = pandas.DataFrame( { 'string': strs, 'number': numpy.random.randn(len(strs)), } ) sort_string_like_this = {'sort': 0, 'custom': 1} print(df.sort_values(['string','number'], key=lambda x: x.map(sort_string_like_this))) </code></pre> <p>which prints</p> <pre><code> string number 1 sort -0.074041 3 sort 1.057676 5 sort -0.612289 7 sort 0.757922 9 sort 0.671288 0 custom -0.339373 2 custom -0.320231 4 custom -1.125380 6 custom 2.120829 8 custom -0.031580 </code></pre> <p>I would like to sort it according to the column <code>string</code> using a custom ordering as given by the dictionary and the column <code>number</code> using the natural ordering of numbers. How can this be done?</p>
<python><pandas><sorting>
2024-08-30 15:15:36
3
3,245
user171780
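The `key` callable in `sort_values` is applied to each sort column independently and receives the column as a named `Series`, so one lambda can dispatch per column: map the custom order for `string`, pass `number` through untouched. A sketch with deterministic numbers in place of the random ones:

```python
import pandas as pd

df = pd.DataFrame({
    "string": ["custom", "sort", "custom", "sort"],
    "number": [3.0, 1.0, 2.0, 0.5],
})
sort_string_like_this = {"sort": 0, "custom": 1}

out = df.sort_values(
    ["string", "number"],
    # key is called once per column; dispatch on the Series name
    key=lambda s: s.map(sort_string_like_this) if s.name == "string" else s,
)
print(out["string"].tolist())  # ['sort', 'sort', 'custom', 'custom']
print(out["number"].tolist())  # [0.5, 1.0, 2.0, 3.0]
```

Mapping every column through the dictionary (as in the question) turns the numbers into NaN keys, which is why the numeric ordering was lost.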
78,932,657
1,914,781
plotly - replace first and last yticks with min and max value
<p>I would like to mark max/min value as yticks like below:</p> <pre><code>import plotly.graph_objects as go import pandas as pd import plotly.express as px def save_fig(fig,pngname): fig.write_image(pngname,format=&quot;png&quot;,width=800,height=500, scale=1) print(&quot;[[%s]]&quot;%pngname) #fig.show() return DATE='Date' VAL='AAPL.High' url='https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv' def main(): df = pd.read_csv(url) fig = px.line(df, x=DATE, y=VAL) for val in [df[VAL].min(),df[VAL].max()]: fig.add_hline( y=val, annotation_text=f&quot;{val:.2f}&quot;, line_width=.5,line_color='grey', annotation_position=&quot;left&quot;) print(df[VAL].min()) save_fig(fig,&quot;/media/sf_work/demo.png&quot;) return main() </code></pre> <p><a href="https://i.sstatic.net/65E0C8tB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65E0C8tB.png" alt="enter image description here" /></a>`</p> <p>But current solution the min/max ticks will overlap. how can I replace start and end ticks with the min,max value instead of add a new one to avoid such overlap? Maybe read current ticks array then replace first and last value but not sure how to do it.</p>
<python><plotly><xticks><yticks>
2024-08-30 14:57:55
0
9,011
lucky1928
78,932,535
1,560,414
Python Async Thread-safe Semaphore
<p>I'm looking for a thread-safe implementation of a Semaphore I can use in Python.</p> <p>The standard libraries <a href="https://docs.python.org/3/library/asyncio-sync.html#asyncio.Semaphore" rel="nofollow noreferrer">asyncio.Semaphore</a> isn't thread-safe.</p> <p>The standard libraries <a href="https://docs.python.org/3/library/threading.html#threading.Semaphore" rel="nofollow noreferrer">threading.Semaphore</a> doesn't have <code>awaitable</code> interface.</p> <p>I am using <a href="https://sanic.dev/en/" rel="nofollow noreferrer">sanic</a> which has multiple threads (workers) but also an asynchronous loop on each thread. I want to be able to yield execution back to the event loop on each of the workers whenever it encounters a blocked semaphore, while it waits.</p> <p>UPDATE: I meant to say process here, not threads. So these should be that Sanic splits across processes, and <a href="https://docs.python.org/3/library/multiprocessing.html" rel="nofollow noreferrer">multiprocessing.Semaphore</a>. I believe the answer given is still relevant to where I can apply a similar solution.</p>
<python><multithreading><python-asyncio>
2024-08-30 14:30:59
2
1,667
freebie
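One portable pattern for the situation above: keep a plain `threading.Semaphore` (or `multiprocessing.Semaphore` for the cross-process case) and make its blocking `acquire` awaitable by pushing it onto an executor, so the event loop keeps serving other coroutines while one waits. A stdlib-only sketch:

```python
import asyncio
import threading

# Thread-safe; swap for multiprocessing.Semaphore to share across processes.
sem = threading.Semaphore(1)

async def worker(name, results):
    loop = asyncio.get_running_loop()
    # Block in a pool thread, not on the event loop.
    await loop.run_in_executor(None, sem.acquire)
    try:
        results.append(name)
        await asyncio.sleep(0.01)  # simulated critical-section work
    finally:
        sem.release()

async def main():
    results = []
    await asyncio.gather(*(worker(i, results) for i in range(3)))
    return results

done = asyncio.run(main())
print(sorted(done))  # [0, 1, 2]
```

Note the release happens on the event-loop thread while the acquire ran in a pool thread; `Semaphore` permits that (unlike `RLock`, which must be released by its owning thread).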
78,932,526
4,435,175
Add X days to a Datetime Series
<p>I have a Datetime Series that always contains the datetime of yesterday like:</p> <pre><code>Series: '' [datetime[ns]] [ 2024-08-29 00:00:00 ] </code></pre> <p>How can I add 2 days to that Datetime Series so that I can add the datetime from 2 days and 3 days ago?</p> <p>End result should be:</p> <pre><code>Series: '' [datetime[ns]] [ 2024-08-29 00:00:00 2024-08-28 00:00:00 2024-08-27 00:00:00 ] </code></pre>
<python><datetime><python-polars>
2024-08-30 14:28:55
2
2,980
Vega
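In polars itself this is duration-arithmetic territory (recent versions expose `Series.dt.offset_by` and `pl.duration`; exact API depends on the installed version). The underlying arithmetic those wrap is plain `timedelta` subtraction, sketched here with only the stdlib:

```python
from datetime import datetime, timedelta

yesterday = datetime(2024, 8, 29)  # stand-in for the single value in the Series
# Build yesterday, two days ago, and three days ago by subtracting offsets.
dates = [yesterday - timedelta(days=offset) for offset in range(3)]
for d in dates:
    print(d)
```

Expected output is the three datetimes from the question, newest first: 2024-08-29, 2024-08-28, 2024-08-27 (all at midnight).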
78,932,341
403,875
Why does 2x - x == x in IEEE floating point precision?
<p>I would expect this to only hold when the last bit of the mantissa is <code>0</code>. Otherwise, in order to subtract them (since their exponents differ by 1), <code>x</code> would lose a bit of precision first and the result would either end up being rounded up or down.</p> <p>But a quick experiment shows that it seems to <em>always</em> hold (assuming <code>x</code> and <code>2x</code> are finite) for any random number (including those with a a trailing <code>1</code> bit).</p> <pre><code>import random import struct from collections import Counter def float_to_bits(f: float) -&gt; int: &quot;&quot;&quot; Convert a double-precision floating-point number to a 64-bit integer. &quot;&quot;&quot; # Pack the float into 8 bytes, then unpack as an unsigned 64-bit integer return struct.unpack(&quot;&gt;Q&quot;, struct.pack(&quot;&gt;d&quot;, f))[0] def check_floating_point_precision(num_trials: int) -&gt; float: true_count = 0 false_count = 0 bit_counts = Counter() for _ in range(num_trials): x = random.uniform(0, 1) if 2 * x - x == x: true_count += 1 else: false_count += 1 bits = float_to_bits(x) # Extract the last three bits of the mantissa last_three_bits = bits &amp; 0b111 bit_counts[last_three_bits] += 1 return (bit_counts, true_count / num_trials) num_trials = 1_000_000 (bit_counts, proportion_true) = check_floating_point_precision(num_trials) print(f&quot;The proportion of times 2x - x == x holds true: {proportion_true:.6f}&quot;) print(&quot;Distribution of last three bits (mod 8):&quot;) for bits_value in range(8): print(f&quot;{bits_value:03b}: {bit_counts[bits_value]} occurrences&quot;) </code></pre> <pre><code>The proportion of times 2x - x == x holds true: 1.000000 Distribution of last three bits (mod 8): 000: 312738 occurrences 001: 62542 occurrences 010: 125035 occurrences 011: 62219 occurrences 100: 187848 occurrences 101: 62054 occurrences 110: 125129 occurrences 111: 62435 occurrences </code></pre>
<python><precision><ieee-754>
2024-08-30 13:47:03
3
5,604
dspyz
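The empirical result above has a classical explanation: Sterbenz's lemma states that if two finite floats of the same sign satisfy y/2 <= x <= 2y, then x - y is exactly representable, so the subtraction incurs no rounding. Taking the minuend a = 2x and subtrahend b = x, the condition b/2 <= a <= 2b holds with equality on the right, hence 2x - x is exactly x for every finite x for which 2x does not overflow; the aligned low bit is not lost because the high bits cancel. A worst-case check on a mantissa whose trailing bit is 1:

```python
import struct

def mantissa_last_bit(f: float) -> int:
    """Last mantissa bit of a double, via its raw 64-bit encoding."""
    return struct.unpack(">Q", struct.pack(">d", f))[0] & 1

x = 1.0 + 2 ** -52               # smallest double above 1.0; trailing bit is 1
print(mantissa_last_bit(x))      # 1
print(2 * x - x == x)            # True: Sterbenz makes the subtraction exact
```

(Multiplying by 2 is itself exact barring overflow, since it only increments the exponent, so both operations in `2 * x - x` are rounding-free.)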
78,932,161
13,336,872
How to calculate second derivative using gpu and PyTorch
<p>I have a python code segment related to a deep RL algorithm where it calculates the second order optimization and second derivative with Hessian matrix and fisher information matrix. Normally I run the whole code on GPU (cuda), but since I got a computational issue to calculate second derivative in cuda,</p> <pre><code>NotImplementedError: the derivative for '_cudnn_rnn_backward' is not implemented. Double backwards is not supported for CuDNN RNNs due to limitations in the CuDNN API. To run double backwards, please disable the CuDNN backend temporarily while running the forward pass of your RNN. For example: with torch.backends.cudnn.flags(enabled=False): output = model(inputs) </code></pre> <p>I had to move to CPU for this code segment, and now the code is executing sequentially instead of in parallel, which takes a long time to run:</p> <pre><code>grads = torch.autograd.grad(policy_loss, self.policy.Actor.parameters(), retain_graph=True) loss_grad = torch.cat([grad.view(-1) for grad in grads]) def Fvp_fim(v = -loss_grad): with torch.backends.cudnn.flags(enabled=False): M, mu, info = self.policy.Actor.get_fim(states_batch) #pdb.set_trace() mu = mu.view(-1) filter_input_ids = set([info['std_id']]) t = torch.ones(mu.size(), requires_grad=True, device=mu.device) mu_t = (mu * t).sum() Jt = compute_flat_grad(mu_t, self.policy.Actor.parameters(), filter_input_ids=filter_input_ids, create_graph=True) Jtv = (Jt * v).sum() Jv = torch.autograd.grad(Jtv, t)[0] MJv = M * Jv.detach() mu_MJv = (MJv * mu).sum() JTMJv = compute_flat_grad(mu_MJv, self.policy.Actor.parameters(), filter_input_ids=filter_input_ids, create_graph=True).detach() JTMJv /= states_batch.shape[0] std_index = info['std_index'] JTMJv[std_index: std_index + M.shape[0]] += 2 * v[std_index: std_index + M.shape[0]] return JTMJv + v * self.damping </code></pre> <p>Above is the main function, where it calculates the second derivative. 
below are the supportive functions and relevant classes it has used.</p> <pre><code>def compute_flat_grad(output, inputs, filter_input_ids=set(), retain_graph=True, create_graph=False): if create_graph: retain_graph = True inputs = list(inputs) params = [] for i, param in enumerate(inputs): if i not in filter_input_ids: params.append(param) grads = torch.autograd.grad(output, params, retain_graph=retain_graph, create_graph=create_graph, allow_unused=True) j = 0 out_grads = [] for i, param in enumerate(inputs): if (i in filter_input_ids): out_grads.append(torch.zeros(param.view(-1).shape, device=param.device, dtype=param.dtype)) else: if (grads[j] == None): out_grads.append(torch.zeros(param.view(-1).shape, device=param.device, dtype=param.dtype)) else: out_grads.append(grads[j].view(-1)) j += 1 grads = torch.cat(out_grads) for param in params: param.grad = None return grads ------ import torch import torch.nn as nn from agents.models.feature_extracter import LSTMFeatureExtractor from agents.models.policy import PolicyModule from agents.models.value import ValueModule class ActorNetwork(nn.Module): def __init__(self, args): super(ActorNetwork, self).__init__() self.FeatureExtractor = LSTMFeatureExtractor(args) self.PolicyModule = PolicyModule(args) def forward(self, s): lstmOut = self.FeatureExtractor.forward(s) mu, sigma, action, log_prob = self.PolicyModule.forward(lstmOut) return mu, sigma, action, log_prob def get_fim(self, x): mu, sigma, _, _ = self.forward(x) if sigma.dim() == 1: sigma = sigma.unsqueeze(0) cov_inv = sigma.pow(-2).repeat(x.size(0), 1) param_count = 0 std_index = 0 id = 0 std_id = id for name, param in self.named_parameters(): if name == &quot;sigma.weight&quot;: std_id = id std_index = param_count param_count += param.view(-1).shape[0] id += 1 return cov_inv.detach(), mu, {'std_id': std_id, 'std_index': std_index} </code></pre> <p>In the bigger picture there are large amounts of batches going through this function, since all of 'em have to go 
sequentially through this function, which greatly increases the total running time. Is there a way to calculate the second derivative with PyTorch while running on CUDA (GPU)?</p>
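The cuDNN error above is specifically about double backward through cuDNN RNN kernels; disabling cuDNN for the forward pass falls back to PyTorch's native GPU kernels and does not move the computation to the CPU. Below is a minimal, hedged sketch (not the original `Fvp_fim`) of the double-differentiation pattern that stays on whatever device the tensors live on:

```python
import torch

# Minimal sketch of a second derivative via double backward; the same
# pattern works on CUDA tensors. For cuDNN RNNs, wrap only the forward
# pass in `with torch.backends.cudnn.flags(enabled=False):` -- this
# selects the non-cuDNN GPU kernels, it does not move work to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.tensor(2.0, requires_grad=True, device=device)
y = x ** 3

# First derivative dy/dx = 3x^2; create_graph=True keeps the graph alive
# so the gradient itself can be differentiated again.
(g,) = torch.autograd.grad(y, x, create_graph=True)

# Second derivative d2y/dx2 = 6x
(h,) = torch.autograd.grad(g, x)

print(g.item(), h.item())  # 12.0 12.0
```

Applied to the question's code, the `torch.backends.cudnn.flags(enabled=False)` context would need to cover the RNN forward *and* both `grad` calls, while all tensors remain on `cuda`.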
<python><pytorch><gpu><reinforcement-learning><cudnn>
2024-08-30 13:07:44
1
832
Damika
78,932,072
13,294,364
Efficiently recalculating dependent values in real-time data streams using NumPy in Python
<p>I'm currently working on a real-time data processing system for financial securities, where I need to perform calculations as soon as new data comes in. Each financial security has multiple data points (about 10-20) being fed into the system in real-time, and I have around 200 different securities.</p> <p>I am using a NumPy array to store this data, where each row represents a security and each column represents a different data point or calculated value. Whenever a new data point arrives, I need to recalculate several dependent values immediately.</p> <h4><strong>Current Approach:</strong></h4> <p>Currently, I have implemented this using Python functions associated with each column in the NumPy array. Here’s a simplified version of how my system works:</p> <ol> <li><strong>Data Feeding:</strong> New data for a security is fed into the corresponding row of the NumPy array.</li> <li><strong>Function Association:</strong> I have functions associated with each column. If a data point that a function depends on changes, the function is added to a list for that specific row.</li> <li><strong>Recalculation:</strong> After updating the NumPy array, I loop through the list of functions for each row and recalculate the dependent values.</li> </ol> <h4><strong>Example:</strong></h4> <p>Here is a dummy table to illustrate the concept:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Security</th> <th>Price</th> <th>Volume</th> <th>Moving Average</th> <th>Volatility</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>100</td> <td>1000</td> <td>102</td> <td>0.02</td> </tr> <tr> <td>B</td> <td>150</td> <td>2000</td> <td>148</td> <td>0.03</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table></div> <ul> <li><strong>Price</strong> and <strong>Volume</strong> are real-time data points.</li> <li><strong>Moving Average</strong> and <strong>Volatility</strong> are calculated values that depend on the 
<strong>Price</strong>.</li> </ul> <p>Example formulas:</p> <ul> <li><code>Moving Average</code> might be calculated as the average of the last 10 prices.</li> <li><code>Volatility</code> might be calculated as the standard deviation of the last 10 prices.</li> </ul> <p>When the <strong>Price</strong> for Security A changes, both the <strong>Moving Average</strong> and <strong>Volatility</strong> need to be recalculated.</p> <h4><strong>My Questions:</strong></h4> <ol> <li><p><strong>Is this approach of using functions and maintaining a list of recalculations for each row an efficient way to handle this problem?</strong><br /> I am concerned about the performance implications as the number of securities and data points increases. Specifically, I wonder if looping through functions for each update is optimal.</p> </li> <li><p><strong>How do systems like Excel handle recalculations of dependent cells?</strong><br /> Excel seems to handle recalculations efficiently, even with large datasets. I'm curious if there's a similar approach or optimization I could implement in Python.</p> </li> <li><p><strong>Would it be more efficient to use a different data structure or database for this kind of problem?</strong><br /> Given that the data is being updated continuously and recalculations need to be done instantly, is there a better tool or library for managing this kind of workload? For example, would a database like Redis or a framework like Apache Kafka be more suited for real-time data processing?</p> </li> </ol> <h4><strong>Example Code:</strong></h4> <p>Here is a basic example of my current implementation using NumPy and functions:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np # Dummy data array for securities data = np.array([ [100, 1000, 0, 0], # Security A [150, 2000, 0, 0], # Security B # ... 
more securities ]) # Example functions for recalculations def calculate_moving_average(price_array): # Simplified moving average calculation return np.mean(price_array[-10:]) def calculate_volatility(price_array): # Simplified volatility calculation return np.std(price_array[-10:]) # Update data and recalculate functions def update_security(security_index, new_price): # Update price data[security_index, 0] = new_price # Add functions to be recalculated for this security functions_to_recalculate = [calculate_moving_average, calculate_volatility] # Loop through functions and recalculate values for func in functions_to_recalculate: # Example recalculation logic result = func(data[security_index, 0]) print(f'Recalculated value: {result}') # Simulate an update update_security(0, 105) </code></pre>
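One common alternative to per-row function lists is to keep a rolling price window per security and recompute every derived column for all securities at once with vectorized NumPy operations. A minimal sketch (shapes and names are illustrative, not the asker's actual system):

```python
import numpy as np

N_SECURITIES, WINDOW = 200, 10
rng = np.random.default_rng(0)

# One rolling window of recent prices per security (rows = securities).
price_window = rng.normal(100.0, 5.0, size=(N_SECURITIES, WINDOW))

def on_price_update(security_index, new_price):
    # Shift that security's window left and append the new price.
    price_window[security_index, :-1] = price_window[security_index, 1:]
    price_window[security_index, -1] = new_price

def recalculate_all():
    # Derived columns for *all* securities in two vectorized calls,
    # instead of looping over per-row function lists.
    moving_average = price_window.mean(axis=1)
    volatility = price_window.std(axis=1)
    return moving_average, volatility

on_price_update(0, 105.0)
ma, vol = recalculate_all()
print(ma.shape, vol.shape)  # (200,) (200,)
```

At 200 securities this full recomputation is microseconds of work, which is usually cheaper than tracking fine-grained dependencies the way a spreadsheet's recalculation graph does.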
<python><arrays><numpy><performance><real-time-data>
2024-08-30 12:49:09
1
305
Harry Spratt
78,932,041
7,169,710
Update or access Pandas DataFrame via API extension register_dataframe_accessor
<p>I would like to edit a dataframe through the <a href="https://pandas.pydata.org/docs/reference/api/pandas.api.extensions.register_dataframe_accessor.html" rel="nofollow noreferrer">`register_dataframe_accessor`</a> available in the pandas API.</p> <p>For example, I would like that, provided the following code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd @pd.api.extensions.register_dataframe_accessor(&quot;geo&quot;) class GeoAccessor: def __init__(self, pandas_obj): # self._validate(pandas_obj) self._obj = pandas_obj @property def center(self): # return the geographic center point of this DataFrame lat = self._obj.latitude lon = self._obj.longitude return (float(lon.mean()), float(lat.mean())) def plot(self): # plot this array's data on a map, e.g., using Cartopy pass def ingest_sample_data(self): # define a sample DataFrame with latitude and longitude data self._obj = pd.DataFrame( { &quot;latitude&quot;: [0, 15, 30, 45, 60], &quot;longitude&quot;: [0, 15, 30, 45, 60], } ) return self._obj </code></pre> <p>Calling the <code>ingest_sample_data</code> method would update the original data.</p> <p>What happens instead is that the dataframe remains unchanged.</p> <p>Therefore, this:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame() df.geo.ingest_sample_data() print(df) </code></pre> <p>returns an empty dataframe.</p> <p>There is a previous question that is related to this one (<a href="https://stackoverflow.com/questions/55180400/access-pandas-dataframe-from-pandas-accessor">Access pandas DataFrame from pandas accessor</a>), but it is rather old and maybe there have been improvements.</p>
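The accessor object is constructed fresh on each attribute access, so rebinding `self._obj` to a brand-new DataFrame only changes the accessor's local reference. Mutating the caller's frame in place, however, does propagate. A minimal sketch (the accessor name `geo_demo` is illustrative):

```python
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("geo_demo")  # illustrative name
class GeoDemoAccessor:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    def ingest_sample_data(self):
        # Assigning `self._obj = pd.DataFrame(...)` would only rebind this
        # accessor's local reference; in-place column assignment is what
        # the caller's `df` actually observes.
        self._obj["latitude"] = [0, 15, 30, 45, 60]
        self._obj["longitude"] = [0, 15, 30, 45, 60]
        return self._obj

df = pd.DataFrame(index=range(5))
df.geo_demo.ingest_sample_data()
print(df.columns.tolist())  # ['latitude', 'longitude']
```

For operations that must replace the frame wholesale, returning the new DataFrame and reassigning at the call site (`df = df.geo.ingest_sample_data()`) remains the idiomatic pattern.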
<python><python-3.x><pandas><dataframe>
2024-08-30 12:42:26
1
405
Pietro D'Antuono
78,932,035
6,930,340
Filter or join a polars dataframe by columns from another dataframe
<p>I have two <code>pl.DataFrame</code>s:</p> <pre><code>from datetime import date import polars as pl df1 = pl.DataFrame( { &quot;symbol&quot;: [ &quot;sec1&quot;, &quot;sec1&quot;, &quot;sec1&quot;, &quot;sec1&quot;, &quot;sec1&quot;, &quot;sec1&quot;, &quot;sec2&quot;, &quot;sec2&quot;, &quot;sec2&quot;, &quot;sec2&quot;, &quot;sec2&quot;, ], &quot;date&quot;: [ date(2021, 9, 14), date(2021, 9, 15), date(2021, 9, 16), date(2021, 9, 17), date(2021, 8, 31), date(2020, 12, 31), date(2021, 9, 14), date(2021, 9, 15), date(2021, 8, 31), date(2021, 12, 30), date(2020, 12, 31), ], &quot;price&quot;: range(11), } ) df2 = pl.DataFrame( { &quot;symbol&quot;: [&quot;sec1&quot;, &quot;sec2&quot;], &quot;current_date&quot;: [date(2021, 9, 17), date(2021, 9, 15)], &quot;mtd&quot;: [date(2021, 8, 31), date(2021, 8, 31)], &quot;ytd&quot;: [date(2020, 12, 31), date(2020, 12, 30)], } ) with pl.Config(tbl_rows=-1): print(df1) print(df2) shape: (11, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ price β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ date ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════║ β”‚ sec1 ┆ 2021-09-14 ┆ 0 β”‚ β”‚ sec1 ┆ 2021-09-15 ┆ 1 β”‚ β”‚ sec1 ┆ 2021-09-16 ┆ 2 β”‚ β”‚ sec1 ┆ 2021-09-17 ┆ 3 β”‚ β”‚ sec1 ┆ 2021-08-31 ┆ 4 β”‚ β”‚ sec1 ┆ 2020-12-31 ┆ 5 β”‚ β”‚ sec2 ┆ 2021-09-14 ┆ 6 β”‚ β”‚ sec2 ┆ 2021-09-15 ┆ 7 β”‚ β”‚ sec2 ┆ 2021-08-31 ┆ 8 β”‚ β”‚ sec2 ┆ 2021-12-30 ┆ 9 β”‚ β”‚ sec2 ┆ 2020-12-31 ┆ 10 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ shape: (2, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ current_date ┆ mtd ┆ ytd β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ date ┆ date ┆ date β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ══════════════β•ͺ════════════β•ͺ════════════║ β”‚ sec1 ┆ 2021-09-17 ┆ 2021-08-31 ┆ 2020-12-31 
β”‚ β”‚ sec2 ┆ 2021-09-15 ┆ 2021-08-31 ┆ 2020-12-30 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>I need to filter the prices of <code>df1</code> for each group with the respective dates from <code>df2</code>. I need to incorporate all columns of type <code>date</code>. The number of these columns in <code>df2</code> might not be fixed.</p> <p>I am looking for the following result:</p> <pre><code>shape: (11, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ price β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ date ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════║ β”‚ sec1 ┆ 2021-09-17 ┆ 3 β”‚ β”‚ sec1 ┆ 2021-08-31 ┆ 4 β”‚ β”‚ sec1 ┆ 2020-12-31 ┆ 5 β”‚ β”‚ sec2 ┆ 2021-09-15 ┆ 7 β”‚ β”‚ sec2 ┆ 2021-08-31 ┆ 8 β”‚ β”‚ sec2 ┆ 2020-12-30 ┆ 9 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>I was thinking of filtering <code>df1</code> by <code>symbol</code> and then do a join operation for every individual <code>date</code> column of <code>df2</code>. I would then subsequently concatenate the resulting dataframes. However, there's probably a much more elegant solution.</p>
<python><dataframe><join><python-polars>
2024-08-30 12:39:57
1
5,167
Andi
78,932,018
4,417,769
Assert that two files have been written correctly
<p>How would I assert that this function wrote to those files in tests?</p> <pre class="lang-py prettyprint-override"><code>def write() -&gt; None: with open('./foo', 'w') as f: f.write('fooo') with open('./bar', 'w') as f: f.write('baar') </code></pre> <pre class="lang-py prettyprint-override"><code>class Test: @patch('builtins.open', idk) def test_write(self, something: MagicMockOrSomething) -&gt; None: write() something.assert_somehow_foo_contains_fooo() something.assert_somehow_bar_contains_baar() </code></pre> <p>I'd like to use the decorator syntax with <code>@patch</code> if possible.</p> <p>The documentation only shows it for one file, and not using the decorator: <a href="https://docs.python.org/3/library/unittest.mock.html#mock-open" rel="nofollow noreferrer">https://docs.python.org/3/library/unittest.mock.html#mock-open</a></p>
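A single `mock_open` can cover both files: the open calls are recorded on the mock itself (in order), and since `mock_open` hands back the same handle for every `open()`, the writes accumulate on that shared handle. A sketch using the question's own `write()`:

```python
from unittest.mock import call, mock_open, patch

def write():
    with open('./foo', 'w') as f:
        f.write('fooo')
    with open('./bar', 'w') as f:
        f.write('baar')

# mock_open records every open() on the mock and every write on a single
# shared handle, preserving order across both files.
with patch('builtins.open', mock_open()) as m:
    write()

handle = m.return_value
assert m.call_args_list == [call('./foo', 'w'), call('./bar', 'w')]
assert handle.write.call_args_list == [call('fooo'), call('baar')]
```

The decorator form is `@patch('builtins.open', new_callable=mock_open)`, which injects the mock as the test method's argument; the same `call_args_list` assertions then go inside the test body. If you need per-file content rather than call order, writing to a real `tmp_path`/`TemporaryDirectory` and reading the files back is often simpler than mocking.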
<python><mocking><python-unittest><python-unittest.mock>
2024-08-30 12:36:50
2
1,228
sezanzeb
78,931,932
9,684,951
Is the generator expression stored anywhere semantically intact?
<p>If I set a generator</p> <pre><code>myra = (x + 100 for x in range(5)) </code></pre> <p>and then later do something with it, like</p> <pre><code>for i in myra: print(i) </code></pre> <p>the generator has run its course, cannot be iterated over again, got that.</p> <p>But is there a way, before, during, or after use, to interrogate the generator object so it returns the generating string, or something semantically equivalent? e.g.</p> <pre><code>&gt;&gt;&gt; print_the_foundation(myra) a + 100 for a in range(5) </code></pre>
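The expression text itself is not stored anywhere: the generator object holds only compiled bytecode. What you can interrogate is its code object, and when the generator was defined in a real source file (not the REPL or `exec`), `inspect` can re-read the defining line from that file:

```python
# The generator object stores compiled bytecode, not the source string.
myra = (x + 100 for x in range(5))

print(myra.gi_code.co_name)      # '<genexpr>'
print(myra.gi_code.co_filename)  # where it was defined

# When this runs from a file on disk, the original text is recoverable:
#   import inspect
#   inspect.getsource(myra.gi_code)  # -> the line containing the genexpr
# `dis.dis(myra)` shows the compiled bytecode in any case.
```

So `print_the_foundation(myra)` is only possible indirectly, by re-reading the source file named in `gi_code.co_filename`; there is no semantically-intact copy of the expression inside the object itself.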
<python><generator>
2024-08-30 12:15:56
2
308
bukwyrm
78,931,861
1,741,868
How to deploy a Python Azure Function app from a mono-repo?
<p>Following on from <a href="https://stackoverflow.com/questions/78928280/deploying-a-python-function-app-where-the-source-is-in-a-subdirectory/78931022#78931022">this question</a>, I've got a mono-repo containing a Flask API and the first of what will be several Azure Function apps. I'm trying to deploy the app to Azure from an Azure DevOps pipeline.</p> <p>How do I deploy the app and tell azure where the entry point is?</p> <p>My python code is structured in a few folders underneath a /backend folder, like</p> <pre><code>./backend ./backend/domain/ - contains domain logic .py files ./backend/entrypoints/api - contains Flask app files ./backend/entrypoints/my-func - contains my new function app ./backend/... - various other folder hierarchies that are imported. </code></pre> <p>I've got an <code>AzureFunctionApp@2</code> task and it's pushing a zip of the project up to a Storage Account but I can't see any Functions registered in the Azure Portal.</p> <pre><code> - task: ArchiveFiles@2 displayName: Package Function Apps inputs: rootFolderOrFile: $(Build.SourcesDirectory) archiveType: 'zip' archiveFile: '$(Build.ArtifactStagingDirectory)/nightly-import-$(Build.BuildId).zip' verbose: true - task: AzureFunctionApp@2 displayName: Deploy Nightly Importer inputs: azureSubscription: ${{parameters.azureServiceConnection}} appType: functionAppLinux appName: nightly-import-${{parameters.environmentMoniker}}-uks-01 package: '$(Pipeline.Workspace)/FuncAppsPackage/nightly-import-$(Build.BuildId).zip' deploymentMethod: auto verbose: true </code></pre> <p><a href="https://i.sstatic.net/yrDXsvE0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrDXsvE0.png" alt="Screen grab of azure portal with no functions on the functions tab of the function app" /></a></p> <p>How do you publish a function app from within a mono-repo in such a way that all dependencies are deployed, including pip packages from the venv and 1st party python modules from outside the function app 
folder (my backend/domain and other modules)?</p>
<python><python-3.x><azure-devops><azure-functions><azure-pipelines>
2024-08-30 11:57:01
1
14,935
Greg B
78,931,843
608,041
In Python unittest log testname, expected value and actual value into a separate file
<p>I have used the standard python unittest framework to create a testsuite for my hardware that read values from sensors. A test might look like below (a bit simplified)</p> <pre><code>def test_temperature1(self): self.assertAlmostEqual(self.temp_sensor1.get_value(),25.0, delta=1) </code></pre> <p>I would be able to both run the test as unittest with success/fail but also log the actual and expect values into a file. I understand this is possible by adding log statements to each test case but it would be preferable to use a testrunner or another unittest framework that supports this out of the box since I dont want to rewrite all my tests.</p>
<python><unit-testing>
2024-08-30 11:52:14
1
534
kungjohan
78,931,736
865,169
Why can I unpack a Python set when sets are unordered?
<p>I am quite used to unpacking sequences in Python like:</p> <pre class="lang-py prettyprint-override"><code>my_tuple = (1, 2, 3) a, b, c = my_tuple </code></pre> <p>I have noticed that I can also do it with sets:</p> <pre class="lang-py prettyprint-override"><code>my_set = set((1, 2, 3)) a, b, c = my_set </code></pre> <p>Why can I do this? I mean, it makes perfect sense for lists and tuples which are ordered. But how does this make sense for a set <a href="https://docs.python.org/3.10/tutorial/datastructures.html#sets" rel="nofollow noreferrer">which is not ordered</a>?<br /> How can I be sure which value ends up in which variable on the left-hand side? Or maybe I do not quite understand ordered vs. unordered?</p> <pre class="lang-py prettyprint-override"><code>a,b,c = set((3,2,1)) </code></pre> <p>Here <code>a==1</code>, <code>b==2</code>, and <code>c==3</code> illustrates what I mean is ambiguous.</p>
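Unpacking requires only an iterable, not an ordered one: `a, b, c = s` simply iterates the set once and assigns in iteration order. In CPython that order is whatever the hash table happens to hold, but it is stable for an unmodified set within a single run, which this sketch makes visible:

```python
s = {3, 2, 1}

# Unpacking just iterates the set; for an unmodified set, CPython's
# iteration order is stable within one run, so it matches list(s).
a, b, c = s
assert [a, b, c] == list(s)

# The order itself is an implementation detail. Small ints often come
# out "sorted" in CPython only because hash(n) == n for small n.
print(list(s))
```

So there is no guarantee about *which* value lands in `a`; unpacking a set only makes sense when you do not care about the assignment order (or the set has one element).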
<python><set><iterable-unpacking>
2024-08-30 11:23:24
4
1,372
Thomas Arildsen
78,931,521
13,946,204
How to pass arguments into schedule_task for Locust?
<p>Let's say that my test case is to get a list of articles from a news site and post a comment on each article.</p> <p>Here is how my code may look:</p> <pre class="lang-py prettyprint-override"><code>class MyTasks(TaskSet): def post_comment(self, article_id: int): self.client.post( f&quot;/{article_id}/comment&quot;, json={&quot;body&quot;: &quot;some random comment&quot;}, name=f&quot;comment_for_{article_id}&quot;, ) @task def get_articles(self): with self.client.get( &quot;/articles&quot;, name=&quot;get_articles&quot;, ) as response: if response.status_code == 200: for article_id in response.json()[&quot;article_ids&quot;]: # I want to add a scheduled task for this specific article # to write a comment for it # BUT! I cannot pass the article ID?! self.schedule_task( self.post_comment, first=True, # how to add arguments for post_comment here? ) </code></pre> <p>As you can see, it does not seem possible to pass arguments to a task in Locust.</p> <p>Does Locust really lack such an obvious and necessary feature as passing arguments to a task?</p> <p>Here is <code>schedule_task</code>'s code:</p> <pre class="lang-py prettyprint-override"><code>def schedule_task(self, task_callable, first=False): &quot;&quot;&quot; Add a task to the User's task execution queue. :param task_callable: User task to schedule. :param first: Optional keyword argument. If True, the task will be put first in the queue. &quot;&quot;&quot; if first: self._task_queue.appendleft(task_callable) else: self._task_queue.append(task_callable) </code></pre> <p>And I couldn't find how to pass parameters.</p> <p>So the question is: how is one supposed to pass arguments and custom parameters to a specific task in Locust?</p>
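Since `schedule_task` only stores a callable in a deque and invokes it later, the usual pattern is to bind the arguments up front with `functools.partial` (or a small closure) before scheduling. A pure-Python sketch of that mechanism, using a plain deque as a stand-in for `TaskSet._task_queue`:

```python
from collections import deque
from functools import partial

# Stand-in for TaskSet._task_queue: it stores callables and later just
# invokes them, so any arguments must be bound before scheduling.
task_queue = deque()

def post_comment(article_id):
    return f"commented on article {article_id}"

for article_id in (7, 8, 9):
    task_queue.appendleft(partial(post_comment, article_id))

results = [task_queue.popleft()() for _ in range(len(task_queue))]
print(results)
```

In Locust that would look like `self.schedule_task(partial(self.post_comment, article_id), first=True)`. One caveat to verify against your Locust version: how the scheduler invokes a queued callable (and whether it passes an extra argument to non-bound callables) has differed across releases, so check `execute_task` in your installed version.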
<python><locust>
2024-08-30 10:26:11
2
9,834
rzlvmp
78,931,345
1,930,011
Python function app how to terminate after a timer without harming other functions
<p>I have inherited a collection of Python functions that are capable of deadlocking (this can happen at several places in the code). When that happens, they make all functions time out on the Function App in Azure, and the process then kills all other running function apps.</p> <p>Obviously the deadlocking shouldn't be happening, nor should the process kill all other function apps running on the same premium Azure instance.</p> <p>While we try to solve the underlying issues, I would like to prevent one function app from killing the others if it deadlocks. As far as we know, the deadlocks happen entirely within one function and no resource locks should be left behind if it gets terminated.</p> <p>As such I am looking for a way to terminate the execution within a function app without harming the other executions on the same thread. Our build script automatically transforms the FAExecutor via a package builder, so I altered that implementation into the following:</p> <pre><code>class FAExecutor(ABC): @abstractmethod def execute(self) -&gt; None: raise NotImplementedError def execute_with_timeout(self) -&gt; None: termination_thread = threading.Thread(target=self.timeout) termination_thread.start() self.execute() def timeout(self) -&gt; None: timeout = 1200 time.sleep(timeout) # Terminate the program os._exit(1) class InfiniteLoop(FAExecutor): def execute(self) -&gt; None: logger = LoggerFactory.create_logger() while True: logger.warn(&quot;doing a looping&quot;) time.sleep(60) </code></pre> <p>This does result in the termination of the program within the function app. However, if I execute this function twice with about 5 minutes between them, then both executions are killed by this timeout. That is not the desired result.</p> <p>Does anybody know how to get the function to kill only the execution that set the timeout, rather than the entire environment? Azure threading is a bit opaque in this regard.</p>
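The root problem is that `os._exit()` terminates the whole worker process, and every concurrent execution lives in that same process. A per-execution timeout can instead be scoped to one future: `future.result(timeout=...)` gives up on that single execution without touching the others. A minimal sketch (deadlock simulated by a long sleep):

```python
import concurrent.futures
import time

def execute(duration):
    # Stand-in for FAExecutor.execute; a deadlock behaves like an
    # indefinitely long sleep.
    time.sleep(duration)
    return "done"

timed_out = False
with concurrent.futures.ThreadPoolExecutor() as pool:
    fast = pool.submit(execute, 0.01)
    slow = pool.submit(execute, 0.5)
    print(fast.result(timeout=2))  # 'done' -- unaffected by the other timeout
    try:
        slow.result(timeout=0.05)
    except concurrent.futures.TimeoutError:
        timed_out = True  # only this one execution is given up on

print(timed_out)  # True
```

Note the caveat: this *abandons* the stuck work rather than killing it, so a genuinely deadlocked thread still leaks. Truly terminating it requires running `execute()` in a child process (e.g. `multiprocessing.Process` plus `terminate()`), which is the per-execution analogue of `os._exit` without tearing down the shared worker.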
<python><azure><azure-functions>
2024-08-30 09:36:50
1
2,633
Thijser
78,931,299
13,392,257
NameError: name 'process' is not defined
<p>I have a base64 string with python function. I want to run this python code</p> <p>My code:</p> <pre><code>import base64 def apply_script(custom_script: str, spark_df=None): script_encoded = base64.b64decode(custom_script).decode('utf-8') print(script_encoded) exec(script_encoded) print(&quot;EXECUTE: &quot;) process(spark_df) # name &quot;process&quot; imported from exec() function return None apply_script(&quot;aW1wb3J0IGpzb24KCgpkZWYgcHJvY2VzcyhkZik6CiAgICBwcmludCgiSGVsbG8gd29ybGQiKQogICAgcmV0dXJuIGRm&quot;) </code></pre> <p>I have the following output and error:</p> <pre><code>python test_custom_script.py import json def process(df): print(&quot;Hello world&quot;) return df EXECUTE: Traceback (most recent call last): File &quot;test_custom_script.py&quot;, line 13, in &lt;module&gt; apply_script(&quot;aW1wb3J0IGpzb24KCgpkZWYgcHJvY2VzcyhkZik6CiAgICBwcmludCgiSGVsbG8gd29ybGQiKQogICAgcmV0dXJuIGRm&quot;) File &quot;test_custom_script.py&quot;, line 9, in apply_script process(spark_df) # name &quot;process&quot; imported from exec() function NameError: name 'process' is not defined </code></pre> <p>How to fix the error?</p>
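Inside a function, `exec` cannot create new local variables, so `process` never lands in `apply_script`'s scope. Passing an explicit namespace dict to `exec` and looking the function up there fixes it. A sketch (the script is re-encoded inline so the example is self-contained):

```python
import base64

def apply_script(custom_script, df=None):
    script = base64.b64decode(custom_script).decode('utf-8')
    namespace = {}
    # Give exec an explicit namespace: names defined by the script land
    # in this dict instead of the function's (non-writable) local scope.
    exec(script, namespace)
    return namespace['process'](df)

encoded = base64.b64encode(b"def process(df):\n    return df\n").decode()
print(apply_script(encoded, 42))  # 42
```

The same fix applied to the question's code is a two-line change: `exec(script_encoded, namespace)` and `namespace['process'](spark_df)`.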
<python>
2024-08-30 09:25:20
1
1,708
mascai
78,931,121
12,550,791
Pytest assert the original exception raised using `raise AnyException from MyExceptionToAssert`
<p>I wrote a suit of tests that asserts exception (following what was said here <a href="https://stackoverflow.com/questions/23337471/how-do-i-properly-assert-that-an-exception-gets-raised-in-pytest">How do I properly assert that an exception gets raised in pytest?</a> and in the doc). However, there is one instance of my code where I need to raise an exception from the exception I want to assert.</p> <p>How can I properly catch (and potentially match the message of) causing exceptions using pytest?</p> <p>Here is a small reproducible code sample:</p> <p>-- edit -- here is the code to test:</p> <pre class="lang-py prettyprint-override"><code># in src/main.py # this is not exactly like that in the code, but it is what s equivalent class ModelValidationError(Exception): def __init__(self, *_): super().__init__( f&quot;Unable to initialize model&quot; ) class InvalidParametersWarning(Warning): def __init__(self, *_): super().__init__( &quot;The given parameters are invalid&quot; ) def handle_model_validation_error(environment: str, exception: ValueError): &quot;&quot;&quot;Raise the correct exception based on the environment we are in. To avoid frightening the users, we raise a warning in production environments and an exception in development environments. Parameters ---------- environment : str in which application environment we are in. exception : ValueError Raises ------ ModelValidationError For development environments. Is is raised from the ValueError. InvalidParametersWarning For production environments. &quot;&quot;&quot; if environment == &quot;development&quot;: raise ModelValidationError() from exception else: raise InvalidParametersWarning() </code></pre> <pre class="lang-py prettyprint-override"><code>import pytest from src.main import handle_model_validation_error def test_exception(): # Here, I would like to match the message &quot;nope&quot; of ValueError # How can I do that? 
with pytest.raises(ValueError, match=&quot;nope&quot;): handle_model_validation_error(&quot;development&quot;, ValueError(&quot;nope&quot;)) </code></pre> <p>-- end of edit --</p> <p>which I think is equivalent to managing to match ValueError in:</p> <pre class="lang-py prettyprint-override"><code>import pytest def test_exception(): # Here, I would like to match the message &quot;nope&quot; of ValueError # How can I do that? with pytest.raises(ValueError, match=&quot;nope&quot;): raise Exception from ValueError(&quot;nope&quot;) </code></pre> <p>Thank you for your time</p>
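`pytest.raises` must be given the exception type that is actually raised (here `ModelValidationError`/`RuntimeError`, not `ValueError`); the chained original then sits on `excinfo.value.__cause__`. The sketch below demonstrates the same check in plain Python so it runs without pytest, with the pytest form shown in the comment:

```python
def handle_model_validation_error(environment, exception):
    # Simplified stand-in for the question's function
    if environment == "development":
        raise RuntimeError("Unable to initialize model") from exception

# With pytest this would read:
#   with pytest.raises(RuntimeError) as excinfo:
#       handle_model_validation_error("development", ValueError("nope"))
#   assert isinstance(excinfo.value.__cause__, ValueError)
#   assert str(excinfo.value.__cause__) == "nope"
try:
    handle_model_validation_error("development", ValueError("nope"))
except RuntimeError as err:
    cause = err.__cause__

assert isinstance(cause, ValueError)
assert str(cause) == "nope"
```

The `match=` argument of `pytest.raises` only checks the *outer* exception's message; matching the cause's message has to go through `__cause__` as above.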
<python><exception><pytest>
2024-08-30 08:31:26
3
391
Marco Bresson
78,931,082
1,111,652
Abstract base class function pointer python
<p>I'd like to make an abstraction of one of my api classes to resolve the following problem. Let's say I have a base class like:</p> <pre><code>class AbstractAPI(ABC): @abstractmethod def create(self): pass @abstractmethod def delete(self): pass </code></pre> <p>And a concrete class:</p> <pre><code>class API(AbstractAPI): def create(self): print(&quot;create&quot;) def delete(self): print(&quot;delete&quot;) </code></pre> <p>When requests come in, I don't have access to my API instance. Both due multithreading and to avoid some circular imports. At that point I do know which method I would like to call later. So my plan was to put a function pointer of AbstractAPI onto a queue and wait until I do have access to the API instance.</p> <pre><code>function_pointer=AbstractAPI.create # later on ... function_pointer(ConcreteAPIInstance) </code></pre> <p>At that point call the function pointer onto the API instance, ba da bim, ba da boom. That does not work. Of course calling the function pointer of AbstractAPI onto an API instance calls the empty AbstractAPI method. Nothing happens. Is there a way to make this work?</p>
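`AbstractAPI.create` is a plain function, so calling it on an instance runs the abstract body with no dynamic dispatch. Deferring the *lookup* instead of the function works: store the method name (or a `methodcaller`) and resolve it with `getattr` once the instance is available. A sketch:

```python
from abc import ABC, abstractmethod
from operator import methodcaller

class AbstractAPI(ABC):
    @abstractmethod
    def create(self):
        pass

class API(AbstractAPI):
    def create(self):
        return "create"

api = API()

# AbstractAPI.create is a plain function: calling it on an instance runs
# the abstract (empty) body directly -- no dynamic dispatch happens.
assert AbstractAPI.create(api) is None

# Storing the *name* and resolving it later goes through normal
# attribute lookup, which finds the concrete override.
pending = "create"
assert getattr(api, pending)() == "create"

# operator.methodcaller packages the same idea as a ready-made callable
# that can sit on a queue until the instance arrives.
assert methodcaller("create")(api) == "create"
```

Either form (`"create"` as a string, or `methodcaller("create")`) can be put on the queue in place of the `AbstractAPI.create` function pointer.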
<python><inheritance>
2024-08-30 08:20:56
2
1,168
hasdrubal
78,930,856
219,153
What is an equivalent of in operator for 2D Numpy array?
<p>Using Python lists:</p> <pre><code>a = [[0, 1], [3, 4]] b = [0, 2] print(b in a) </code></pre> <p>I'm getting <code>False</code> as an output, but with Numpy arrays:</p> <pre><code>a = np.array([[0, 1], [3, 4]]) b = np.array([0, 2]) print(b in a) </code></pre> <p>I'm getting <code>True</code> as an output. What is an equivalent of <code>in</code> operator above for 2D Numpy arrays?</p>
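For ndarrays, `b in a` effectively asks whether `(a == b).any()` holds elementwise, which is why it returns the surprising `True`. Whole-row membership needs the comparison reduced per row. A sketch:

```python
import numpy as np

a = np.array([[0, 1], [3, 4]])

def row_in(b, a):
    # True only if some entire row of `a` equals `b`:
    # broadcast-compare, require all columns to match, then any row.
    return bool((a == b).all(axis=1).any())

print(row_in(np.array([0, 2]), a))  # False (matches the list behaviour)
print(row_in(np.array([0, 1]), a))  # True
```

This assumes `b` has the same length as a row of `a`; for ragged or multi-query lookups, broadcasting against a stack of candidate rows generalises the same `all(axis=-1).any()` reduction.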
<python><arrays><numpy-ndarray>
2024-08-30 07:18:44
5
8,585
Paul Jurczak
78,930,100
1,492,613
How to efficiently write chunks to a partitioned dataset?
<p>I have multiple levels of index in my data, for example:</p> <pre><code>schema = pa.schema( [ ('level1', pa.dictionary(pa.int64(), pa.utf8())), ('level2', pa.binary(16)), ('level3', pa.int64()), ('doc', pa.string()) ] ) </code></pre> <p>Usually I have 10-100 level2 values for each level1, and 100k to 3M level3 values in each level2. I partition the dataset by level1/level2, so if I create a table for each level3 chunk, all the level1 and level2 values within it are identical.</p> <p>Normally we prepare the chunk as a table and then write it to disk.</p> <p>However, the table wastes a lot of memory storing the exact same index values over and over.</p> <p>Is there a way to make this more memory efficient?</p>
<python><pyarrow>
2024-08-30 00:58:10
1
8,402
Wang
78,930,091
5,328,289
Why http.server does not deliver the data to the CGI script in this basic example?
<p>I am testing the legacy CGI functionality of python http.server module by implementing a &quot;hello world&quot; alike example that sends data from a fictional &quot;add customer&quot; form from the web front end to the backend. The data is processed by a CGI script which just writes the text received into a file.</p> <p>This is what I have tried:</p> <p>HTML:</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;title&gt;Add New Customer&lt;/title&gt; &lt;script&gt; function submitForm() { // Test string to send fixedLengthData = &quot;FOO TEST STRING&quot;; // Post to console console.log(fixedLengthData); // Create and send POST request to COBOL backend fetch('/cgi-bin/customer_add', { method: 'POST', headers: { 'Content-Type': 'text/plain' }, body: fixedLengthData }) .then(response =&gt; { if (!response.ok) { throw new Error('Network response was not ok ' + response.statusText); } return response; }) .catch(error =&gt; { console.error('There was a problem with the fetch operation:', error); }); } &lt;/script&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Add New Customer&lt;/h1&gt; &lt;form id=&quot;customerForm&quot; onsubmit=&quot;event.preventDefault(); submitForm();&quot;&gt; &lt;label for=&quot;CUSTOMER-ID&quot;&gt;Customer ID:&lt;/label&gt; &lt;input type=&quot;text&quot; id=&quot;CUSTOMER-ID&quot; name=&quot;CUSTOMER-ID&quot; maxlength=&quot;4&quot; required&gt;&lt;br&gt; &lt;label for=&quot;CUSTOMER-NAME&quot;&gt;Customer Name:&lt;/label&gt; &lt;input type=&quot;text&quot; id=&quot;CUSTOMER-NAME&quot; name=&quot;CUSTOMER-NAME&quot; maxlength=&quot;50&quot; required&gt;&lt;br&gt; &lt;label for=&quot;CUSTOMER-EMAIL&quot;&gt;Email:&lt;/label&gt; &lt;input type=&quot;email&quot; id=&quot;CUSTOMER-EMAIL&quot; name=&quot;CUSTOMER-EMAIL&quot; maxlength=&quot;100&quot;&gt;&lt;br&gt; &lt;button type=&quot;submit&quot;&gt;Add Customer&lt;/button&gt; 
&lt;/form&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>This is the backend CGI script located under <code>cgi-bin</code> folder:</p> <pre><code>#!/bin/sh # Open the file 'test.txt' for writing exec 3&gt; test.txt # Read all input from stdin into a single variable input=$(cat) # Write the input to the file echo &quot;$input&quot; &gt;&amp;3 # Close the file exec 3&gt;&amp;- </code></pre> <p>Running the http.server and accessing the CGI script at the browser gets the CGI script launched but:</p> <ol> <li>it never ends and</li> <li>writes nothing to the text file</li> </ol> <p>I am wondering if it could be that no data is provided to the CGI script in the standard input or the script not finishing makes the data written to file not being flushed.</p> <pre><code>$ python3 -m http.server --bind localhost --cgi 8000 Serving HTTP on ::1 port 8000 (http://[::1]:8000/) ... ::1 - - [30/Aug/2024 02:39:27] &quot;POST /cgi-bin/customer_add HTTP/1.1&quot; 200 - </code></pre> <p><code>ps</code> shows that the CGI process does not end.</p> <pre><code>$ ps user 24194 0.0 0.0 13556 2644 10 S+ 02:39 0:00.00 /bin/sh /home/user/test/cgi-bin/customer_add </code></pre> <p>The CGI script itself seems to work as stand alone:</p> <pre><code>$ echo -n &quot;FOO TEST STRING&quot; | ./customer_add $ ls -ltr total 64 -rwxrwxr-x 1 user user 209 Aug 30 02:37 customer_add -rw-rw-r-- 1 user user 4 Aug 30 02:37 test.txt $ cat test.txt FOO TEST STRING </code></pre> <p>Why the CGI process gets stuck, could it be that it is not receiving the data?</p>
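The likely culprit is `input=$(cat)`: `http.server`'s CGI handler does not close the script's stdin after the request body, so `cat` blocks forever waiting for an EOF that never comes (the standalone `echo | ./customer_add` test works precisely because the pipe *does* close). A CGI script should read exactly `CONTENT_LENGTH` bytes and emit a response header before any body. A sketch of the corrected script:

```shell
#!/bin/sh
# Read exactly CONTENT_LENGTH bytes; `cat` would block forever because
# http.server keeps the socket (the script's stdin) open after the body.
input=$(head -c "${CONTENT_LENGTH:-0}")

printf '%s\n' "$input" > test.txt

# A CGI response must start with at least a Content-Type header,
# followed by a blank line, before any body.
printf 'Content-Type: text/plain\r\n\r\n'
printf 'OK\n'
```

With the header emitted and the read bounded, the process exits normally and the browser's `fetch` receives a well-formed 200 response.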
<javascript><python><cgi><http.server>
2024-08-30 00:45:59
1
5,635
M.E.
78,929,964
1,401,640
UTF-16 as sequence of code units in python
<p>I have the string <code>'abΓ§'</code> which in UTF-8 is <code>b'ab\xc3\xa7'</code>.</p> <p>I want it in UTF-16, but not this way:</p> <pre><code>b'ab\xc3\xa7'.decode('utf-8').encode('utf-16-be') </code></pre> <p>which gives me:</p> <p><code>b'\x00a\x00b\x00\xe7'</code></p> <p>The answer I want is the UTF-16 code units, that is, a list of int:</p> <blockquote> <p>[32, 33, 327]</p> </blockquote> <p>Is there any straightforward way to do that?</p> <p>And of course, the reverse. Given a list of ints which are UTF-16 code units, how do I convert that to UTF-8?</p>
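For what it's worth, the UTF-16 code units of `'abΓ§'` are 0x61, 0x62, 0xE7 (97, 98, 231) rather than `[32, 33, 327]`. A straightforward stdlib-only sketch of both directions (helper names are illustrative):

```python
def to_utf16_units(utf8_bytes: bytes) -> list[int]:
    # Decode UTF-8, encode as UTF-16-BE, then pair up the bytes.
    data = utf8_bytes.decode("utf-8").encode("utf-16-be")
    return [int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2)]

def from_utf16_units(units: list[int]) -> bytes:
    # Join the code units back into UTF-16-BE bytes, then re-encode as UTF-8.
    data = b"".join(u.to_bytes(2, "big") for u in units)
    return data.decode("utf-16-be").encode("utf-8")

units = to_utf16_units(b"ab\xc3\xa7")
print(units)  # [97, 98, 231]
```

Characters outside the BMP come out as surrogate pairs (two code units) automatically, since that is what `encode('utf-16-be')` emits; the reverse direction will raise on a list containing a lone surrogate.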
<python><unicode><utf-8><utf-16>
2024-08-29 23:26:29
3
465
Andy Jewell
78,929,867
4,476,484
In python, how do you assert the type of a variable after checking it?
<h2>Overview</h2> <p>There is a general pattern in programming that goes like this</p> <pre class="lang-none prettyprint-override"><code>if (something is not initialized) { initialize the thing } do something with the initialized thing </code></pre> <p>In python, I have an object that's of type <code>A | B</code>. I then call some initialization function that guarantees it will be of type <code>A</code>. Then I want to access a member of the object. However, the type of the variable is not narrowed down to just <code>A</code>. I want to signal to python that I know it is <code>A</code> and not <code>A | B</code>. How can I do that?</p> <h2>Example</h2> <p>Here is some sample code that demonstrates the problem.</p> <pre class="lang-py prettyprint-override"><code>import random # Start with a variable that is &quot;uninitialized&quot; # (it might be str and it might be None) value: str | None = None if random.random() &gt; 0.5 else &quot;goodbye&quot; # Create a function that initializes it. After this function runs, # the variable will be of type str. It cannot be None. def set(): global value value = &quot;hello&quot; # Check if the variable is initialized. This is the part mentioned # in the Overview section. Here I am taking some action that will # guarantee that value is of type str. However, python does not # know that. It only knows I am calling a function and not using # the result. But I as the programmer know that the external effect # of calling set() will be that value can no longer be None if value is None: set() # Now that I have initialized my value, I want to do something with # it. Let's say I want to capitalize it, since I know it's a str result = value.capitalize() # Print out the result print(result) </code></pre> <p>Uh oh! 
When the code runs, it prints &quot;Hello&quot; or &quot;Goodbye&quot; correctly; however, there is an error on this line:</p> <pre class="lang-py prettyprint-override"><code>result = value.capitalize() </code></pre> <p>The error says <code>Cannot access attribute &quot;capitalize&quot; for class &quot;None&quot; Attribute &quot;capitalize&quot; is unknown (Pyright reportAttributeAccessIssue)</code></p> <p>Right, of course. Python doesn't know about what <code>set()</code> does, and it still assumes that my value is <code>str | None</code> when I as the programmer know that it is in fact just <code>str</code>.</p> <h2>Note</h2> <p>In the example above, I can remove the <code>if value is None</code> check and the same question applies. <code>set()</code> still guarantees that <code>value</code> is of type <code>str</code>.</p> <h2>Context</h2> <p>In TypeScript, I would do this:</p> <pre class="lang-js prettyprint-override"><code>let value: string | null = Math.random() &gt; 0.5 ? 'goodbye' : null; const set = () =&gt; { value = 'hello'; }; if (value === null) { set(); } const result = (value as string).toUpperCase(); console.log(result); </code></pre> <p>The key point is the type assertion <code>value as string</code>. Without that part, it will complain that <code>toUpperCase</code> is not valid on <code>null</code>, just like how python is complaining that <code>capitalize</code> is not valid on <code>None</code>.</p> <h2>Question</h2> <p>What is the idiomatic or normal way to do this kind of assertion? How can I signal to python that my value has been initialized/modified/set and that its type is now <code>A</code> instead of <code>A | B</code>?</p>
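For reference, the usual Python idioms for this are an `assert` (which both narrows the type for the checker and fails fast at runtime) and `typing.cast` (the closest analogue of TypeScript's `value as string`, a pure no-op at runtime). A sketch mirroring the question's example; `set()` is renamed `init()` here only to avoid shadowing the builtin:

```python
import random
from typing import Optional, cast

value: Optional[str] = None if random.random() > 0.5 else "goodbye"

def init() -> None:
    global value
    value = "hello"

if value is None:
    init()

# Idiom 1: assert narrows the type for the checker and raises if the
# assumption is ever wrong at runtime.
assert value is not None
result_a = value.capitalize()

# Idiom 2: cast() only informs the type checker; it does nothing at runtime.
result_b = cast(str, value).capitalize()

print(result_a, result_b)
```

The `assert` form is generally preferred because it also checks the assumption at runtime, whereas `cast` (like `as` in TypeScript) silently trusts you.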
<python><python-typing><type-assertion>
2024-08-29 22:47:48
0
2,737
nullromo
78,929,802
10,022,961
Plone REST API - Filtering search results using a value inside an object
<p>I am trying to use <code>@search</code> or <code>@querystring-search</code> endpoints to limit the response to include only items with <code>priority.token</code> = 1.</p> <p>An item includes a <code>priority</code> object as follows:</p> <pre><code>&quot;priority&quot;: { &quot;title&quot;: &quot;1 Important&quot;, &quot;token&quot;: &quot;1&quot; } </code></pre> <p>Using <code>@search</code> endpoint, I tried adding <code>priority.token=1</code>, but that resulted in this error:</p> <p><code>&quot;Query for index &lt;FieldIndex at priority&gt; is missing a 'query' key!&quot;</code></p> <p>So, is it possible to filter the results using a value inside an object? And how to do that?</p>
<python><plone>
2024-08-29 22:18:51
1
466
Abdallah El-Yaddak
78,929,762
9,158,985
Is it possible in polars to give the full schema of a LazyFrame/DataFrame in a function argument, and get type errors?
<p>There are occasions when I know ahead of time the full schema of a table I'm working with. In those scenarios, it would be nice to be able to specify the full schema (call it a <code>FullyDefinedFrame</code>). Then the type system could help me out with things like:</p> <ol> <li>error when accessing a column that doesn't exist</li> <li>type checking. E.g can't add a string column to an int column</li> <li>have it generate a new schema from an operation on a <code>FullyDefinedFrame</code>.</li> <li>Combine 1. and 3. to do, e.g. &quot;you performed a pivot, and then accidentally accessed a column that doesn't exist anymore&quot;</li> </ol> <p>I understand that polars does this at run time once it has the full schema of the data it's working on. But what if you could get all that information while still developing?</p> <p>At the moment, I imagine you could get a crummy version of this experience by having a tool that creates a dummy <code>LazyFrame</code>/<code>DataFrame</code> with the schema of the <code>FullyDefinedFrame</code>, and then call your functions on it, and give you the results.</p> <p>Is this possible in general? And if so, what would it take to make it work?</p>
<python><python-polars><rust-polars>
2024-08-29 22:00:21
1
880
natemcintosh
78,929,742
160,245
Claude/Sonnet Python API - more tokens freezes, less tokens truncates
<p>My prompt was to create a blog with 5 sections covering 5 different individuals in an industry niche.</p> <p>When I had max_tokens=300 (and then 1000), it ran quickly, but the result was not complete.</p> <p>When I tried max_tokens=1500 (or 3000), the program freezes and doesn't come back after even about 5 minutes. I asked Claude what to do, and it provided this code to retry, but anytime I'm over 1500 tokens, I get the timeout.</p> <pre><code> def get_claude_response(prompt, max_retries=3, initial_max_tokens=2000, timeout=60): for attempt in range(max_retries): try: message = client.messages.create( model=&quot;claude-3-sonnet-20240229&quot;, max_tokens=initial_max_tokens * (attempt + 1), # Increase max_tokens on each retry messages=[ {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: prompt} ], timeout=timeout # Set a timeout for the API call ) return message.content[0].text except anthropic.APITimeoutError: print(f&quot;Timeout occurred. Retrying with increased max_tokens. Attempt {attempt + 1}/{max_retries}&quot;) time.sleep(2) # Wait for 2 seconds before retrying except Exception as e: print(f&quot;An error occurred: {str(e)}&quot;) return None print(&quot;Max retries reached. Unable to get a complete response.&quot;) return None # Example usage prompt = &quot;details here...&quot; print(&quot;Prompt:&quot;) print(prompt) response = get_claude_response(prompt) if response: print(response) else: print(&quot;Failed to get a response from Claude.&quot;) </code></pre> <p>I asked Claude itself for a solution and it came up with a method to call and get back chunks. Although it worked, it was very slow and took 5 calls, which might be costing me more in the long run than one call with more tokens.</p> <pre><code>
def call_claude_sonnet_in_chunks(prompt, max_chunks=10, tokens_per_chunk=500, timeout=55): full_response = &quot;&quot; messages = [{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: prompt}] for i in range(max_chunks): try: print (&quot;Call Claude:Sonnet i=&quot; + str(i) + &quot; tokens_per_chunk=&quot; + str(tokens_per_chunk)) message = client.messages.create( model=&quot;claude-3-sonnet-20240229&quot;, max_tokens=tokens_per_chunk, messages=messages, timeout=timeout ) chunk = message.content[0].text full_response += chunk # Check if the response is complete if chunk.strip().endswith('.'): break # Prepare for the next chunk messages.append({&quot;role&quot;: &quot;assistant&quot;, &quot;content&quot;: chunk}) messages.append({&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Please continue your previous response.&quot;}) time.sleep(1) # Small delay between requests except anthropic.APITimeoutError: print(f&quot;Timeout occurred on chunk {i + 1}. Moving to next chunk.&quot;) except Exception as e: print(f&quot;An error occurred: {str(e)}&quot;) break return full_response.strip() </code></pre>
<python><claude>
2024-08-29 21:49:04
0
18,467
NealWalters
78,929,674
10,319,707
How can I dramatically increase the logging from running pandas.DataFrame.to_csv with an S3 target?
<p>There are many closed bugs claiming that <code>pandas.DataFrame.to_csv</code> fails silently when saving to S3 if it has problems on the S3 side. I think that I have another, but the number of sources claiming that these bugs are closed makes me uneasy. I want to submit a good bug report. How can I get as much logging as possible from running <code>pandas.DataFrame.to_csv</code> with an S3 target?</p> <p>The <code>errors</code> parameter seems irrelevant. It is documented as only being for encoding/decoding issues. My issue is not regarding that. It is either to do with the S3 bucket being incorrectly specified or permissions being wrong. <code>storage_options</code> could be relevant, but I have yet to find how.</p>
<python><pandas><amazon-s3><logging><export-to-csv>
2024-08-29 21:18:03
0
1,746
J. Mini
78,929,522
2,986,153
How to format geom_label() values within plotnine
<p>When I am using plotnine, I can use mizani.labels to format the axis labels as percent strings. Is there a similar method to format geom_label values? I cannot use <code>label = ml.percent(&quot;rate&quot;)</code> as this will trigger an error.</p> <pre><code>import polars as pl from plotnine import * import mizani.labels as ml df = pl.DataFrame({ &quot;group&quot;: [&quot;A&quot;, &quot;B&quot;], &quot;rate&quot;: [.511, .634] }) plot = (ggplot(df, aes(x = &quot;group&quot;, y = &quot;rate&quot;, label = &quot;rate&quot;)) + geom_col() + geom_label() + scale_y_continuous(labels = ml.percent) ) plot.show() </code></pre> <p><a href="https://i.sstatic.net/f5KgLFD6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5KgLFD6.png" alt="enter image description here" /></a></p> <p>I can preformat a rate_label before adding my dataframe to ggplot, but I was hoping to avoid this prework as I am able to avoid it when using ggplot in R.</p> <pre><code>df = df.with_columns(rate_label = (pl.col(&quot;rate&quot;) * 100).round(1).cast(pl.Utf8) + '%') plot = (ggplot(df, aes(x = &quot;group&quot;, y = &quot;rate&quot;, label = &quot;rate_label&quot;)) + geom_col() + geom_label() + scale_y_continuous(labels = ml.percent) ) plot.show() </code></pre> <p><a href="https://i.sstatic.net/68d0wfBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/68d0wfBM.png" alt="enter image description here" /></a></p> <h1>Example in R</h1> <pre><code>library(tidyverse) library(scales) df = tibble( group = c(&quot;A&quot;, &quot;B&quot;), rate = c(.511, .634) ) ggplot(df, aes(x = group, y = rate, label = percent(rate))) + geom_col() + geom_label() + scale_y_continuous(labels = percent) </code></pre> <p><a href="https://i.sstatic.net/AiP5MF8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AiP5MF8J.png" alt="enter image description here" /></a></p>
<python><plotnine>
2024-08-29 20:22:09
1
3,836
Joe
78,929,497
2,893,712
Pandas Export to Excel with MultiIndex Column with Formatting
<p>I have a dataframe in Pandas which groups shift attendance for specific days</p> <pre><code>DAYS = ['26-Aug','27-Aug','28-Aug'] SHIFTS = ['S1','S2','S3'] EMPLOYEES = ['John Doe','Jane Doe'] # Create Column Index of shifts for each day idx = pd.MultiIndex.from_product([DAYS, SHIFTS], names=['Days','Shifts']) # Create df for each employee df = pd.DataFrame('', EMPLOYEES, idx) # ... fill df based on certain conditions I have ... </code></pre> <p>Here is an example of what the output would look like when i do `df.to_excel('output.xlsx')</p> <p><a href="https://i.sstatic.net/BMyxbJzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BMyxbJzu.png" alt="example output" /></a></p> <p>However I am looking to format the excel output using <code>xlsxwriter</code> so I can add some formatting/color to some of the specific rows/columns.</p> <p>I made this function as an easy way to output to Excel but having the data returned as a table with autofit column widths. However does not seem to play nice with my dataframe because of the MultiIndex column.</p> <pre><code>def createExcel(df, filename, sheet_name): writer = pd.ExcelWriter(filename, engine=&quot;xlsxwriter&quot;) df.drop_duplicates(inplace=True) df.to_excel(writer, sheet_name=sheet_name, startrow=1, header=False, index=False) # Get the xlsxwriter workbook and worksheet objects. workbook = writer.book worksheet = writer.sheets[sheet_name] # Get the dimensions of the dataframe. (max_row, max_col) = df.shape # Create a list of column headers, to use in add_table(). column_settings = [{&quot;header&quot;: column} for column in df.columns] # Add the Excel table structure. Pandas will add the data. worksheet.add_table(0, 0, max_row, max_col - 1, {&quot;columns&quot;: column_settings}) # Make the columns wider for clarity. 
worksheet.autofit() writer.close() </code></pre> <p>I get an error <code>NotImplementedError: Writing to Excel with MultiIndex columns and no index ('index'=False) is not yet implemented.</code>. If I change <code>index=True</code>, I get another error because of how I set the header in <code>column_settings</code> (df.columns returns tuple, not just column name).</p> <p>My question is, how can I output this to spreadsheet combined with <code>worksheet.add_table()</code> and <code>worksheet.autofit()</code>?</p>
<python><excel><pandas>
2024-08-29 20:14:15
1
8,806
Bijan
78,929,356
614,443
Sphinx autodoc - Skipping members with automethod set
<p>I'm using sphinx and a few of the files have the directive <code>.. automethod::</code> set. I need a way to have sphinx show all <code>__init__</code> methods, without removing the <code>.. automethod::</code> directive and without any warnings. I tried to look through stack, but didn't see anything like this. Here's a MWE:</p> <pre class="lang-py prettyprint-override"><code>class A(): &quot;&quot;&quot; documentation .. automethod:: __init__ &quot;&quot;&quot; def __init__(self): &quot;&quot;&quot; documentation for A &quot;&quot;&quot; class B: &quot;&quot;&quot; documentation &quot;&quot;&quot; def __init__(self): &quot;&quot;&quot; documentation for B &quot;&quot;&quot; </code></pre> <pre class="lang-py prettyprint-override"><code>def skip_member(app, what, name, obj, skip, options): // What do I put here to skip the __init__ of class A, but *not* the __init__ of class B? if name == '__init__': return False return skip def setup(app): app.connect('autodoc-skip-member', skip_member) </code></pre> <p>With this I get the following error:</p> <pre><code>/home/xxx/sphinx-test/classFile.py:docstring of classFile.A.__init__:1: WARNING: duplicate object description of classFile.A.__init__, other instance in usage, use :no-index: for one of them </code></pre> <p>I want to be able to get rid of this error message <strong>without</strong> removing the <code>.. automethod::</code> directive. Is this even possible? An alternative that is also acceptable is to keep <code>skip_member</code> as is, but to alter <code>MethodDocumenter</code> (or something else?) to ignore the <code>automethod</code> directive if the name is <code>__init__</code>, but not sure the best way to do that.</p> <hr /> <p>Why am I trying to do this? I'm working on a collaborative project so I can't remove the <code>automethod</code> directives, but I want an automated way to show <em>all</em> documentation including private/hidden methods. 
(So we would have 2 documentations: 1 for consumers and one for developers in essence) Removing the <code>automethod</code> directive would mean that the consumer version would now have more info than needed, so I don't want to remove those, but by not removing those, when I try and run the sphinx documentation, I'm getting a bunch of duplicate methods and so I need some way to suppress either the ones being included from <code>automethod</code> or from the <code>skip_method</code> way.</p> <p><strong>Edit:</strong> Something that maybe wasn't clear in the above. I could technically use <code>:no-index:</code> to have the error not appear, <em>but</em> then the method gets duplicated (shows up twice in the docs).</p> <pre><code>.. automethod:: __init__ :no-index: </code></pre> <p>When I try and look at the <code>options</code> variable in <code>skip_member</code>, I get a dictionary, but the only option showing up is <code>members</code>. In addition, the <code>what</code> is coming back as <code>class</code> which seems weird? I would have assumed it would come back as <code>method</code> since we're doing <code>automethod</code>?</p>
<python><python-sphinx><autodoc>
2024-08-29 19:23:02
0
2,551
Aram Papazian
78,929,322
7,200,174
Pandas groupby and concat multiple rows
<p><strong>CONTEXT</strong></p> <p>I want to group by both <code>Rule_ID</code> and <code>Calc_ID</code> and transform multiple columns into one row, where the values in each column are concatenated with a <code>&quot;,&quot;</code>.</p> <p><strong>DATA EXAMPLE</strong></p> <pre><code>Calc_ID Rule_ID Name Tracked? 100 Rule1 Y 100 Rule2 N 100 Rule3 N YYY Test1 Y YYY Test2 Y YYY Test3 N </code></pre> <p><strong>EXPECTED OUTCOME</strong></p> <pre><code>Calc_ID Rule_ID Name Tracked? 100 Rule1, Rule2, Rule3 Y, N, N YYY Test1, Test2, Test3 Y, Y, N </code></pre> <p><strong>CURRENT CODE</strong></p> <p>I tried to apply a groupby one at a time for each of the columns, but that doesn't work.</p> <pre><code>import pandas as pd df = pd.read_csv(path) df = df.fillna('') # &lt;- to fix NaNs on groupby Calc_ID / Rule_ID df = df.groupby(['Rule_ID', 'Calc_ID'])['Name'].apply(','.join).reset_index() # df = df.groupby(['Rule_ID', 'Calc_ID'])['Tracked?'].apply(','.join).reset_index() # ^ but this doesn't work because the initial groupby removes the other columns </code></pre>
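A sketch of one way around the column-dropping problem (hedged: the column names are taken from the example, and the grouping here is by <code>Calc_ID</code> alone, since that matches the expected outcome): pass a dict of per-column aggregation functions to a single <code>agg</code> call so all columns are joined in one pass.

```python
import pandas as pd

# Toy data mirroring the question's example
df = pd.DataFrame({
    "Calc_ID": ["100", "100", "100", "YYY", "YYY", "YYY"],
    "Name": ["Rule1", "Rule2", "Rule3", "Test1", "Test2", "Test3"],
    "Tracked?": ["Y", "N", "N", "Y", "Y", "N"],
})

out = (
    df.fillna("")                              # mirrors the NaN fix in the question
      .groupby("Calc_ID", as_index=False)
      .agg({"Name": ", ".join, "Tracked?": ", ".join})
)
print(out)
#   Calc_ID                 Name Tracked?
# 0     100  Rule1, Rule2, Rule3  Y, N, N
# 1     YYY  Test1, Test2, Test3  Y, Y, N
```

`groupby` preserves the within-group row order, so the joined strings follow the original ordering.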
<python><pandas><dataframe><group-by>
2024-08-29 19:12:21
1
331
KL_
78,929,301
3,798,035
Parallel performance with xarray and dask
<p>I am trying to perform operations in parallel on a very large array using <code>xarray</code>. My current approach is roughly as follows:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import xarray as xr # Read dataset (netCDF4). Holds data variable 'p2(time, distance)' data = xr.open_dataset('some_file.nc', chunks={'time': 'auto'}) # Compute mean of a slice along the time axis start = np.timedelta64(7, 'h') end = np.timedelta64(19, 'h') mean1 = data.p2.sel(time=slice(start,end)).mean('time') # Compute mean of another slice start = np.timedelta64(20, 'h') end = np.timedelta64(24, 'h') mean2 = data.p2.sel(time=slice(start,end)).mean('time') # Compute some weighted average, obtain the result as numpy array wavg = ((0.2*mean1 + 0.8*mean2) / 16).compute().data # Do some work with wavg ... </code></pre> <p>I was expecting that work on the array would be performed in parallel using all available CPUs, but instead only a few CPUs (about 30 of 256 available CPUs running at about 30% load) are used and memory usage is rather low. As a result the operations are slow and inefficient. I have already tried to change the chunk size and to chunk the array along the <code>distance</code> axis or along both axes. But all these attempts had little impact on the CPU usage. Is my approach wrong, or am I missing something here? I am new to <code>xarray</code> and <code>dask</code>, so any help would be appreciated and sorry if I missed something obvious!</p>
<python><python-xarray><dask-dataframe>
2024-08-29 19:05:48
0
652
TMueller83
78,929,276
243,158
Creating a pylint coverage report
<p>We have a lot of legacy python code in our github repo that has the very useful type and pylint checks disabled:</p> <pre><code># type: ignore # pylint: skip-file </code></pre> <p>In some files where this is not the case, there are whole sections with no pylint coverage.</p> <p>We'd like to create a pylint coverage report, which is similar to the pytest coverage report, and shows which files are covered and how much is covered.</p> <p>Are there any tools that do this automatically?</p>
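I am not aware of a ready-made "pylint coverage" tool, but as a crude first pass, `grep -rL` lists files that do not contain the opt-out pragma, which gives a file-level coverage figure (a sketch; the `demo` directory and file names here are illustrative scaffolding):

```shell
# Build a tiny illustrative tree: one opted-out file, one covered file.
mkdir -p demo
printf '# pylint: skip-file\nx = 1\n' > demo/legacy.py
printf 'x = 1\n' > demo/covered.py

# Files that are NOT opted out of pylint (no "pylint: skip-file" pragma):
grep -rL "pylint: skip-file" --include="*.py" demo

# Rough file-level coverage numbers:
total=$(find demo -name '*.py' | wc -l)
skipped=$(grep -rl "pylint: skip-file" --include="*.py" demo | wc -l)
echo "$skipped of $total files skip pylint"
```

Section-level gaps (`# pylint: disable=...` blocks) would need a per-line count, e.g. grepping for `pylint: disable` occurrences per file, so this is only a starting point.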
<python><code-coverage><pylint>
2024-08-29 18:56:05
1
1,725
Ido Cohn
78,929,240
8,521,346
Memory Leak When Using Django Bulk Create
<p>I have the following code that constantly checks an API endpoint and then, if needed, adds the data to my Postgres database.</p> <p>Every iteration of this loop is leaking memory in the <code>postgresql\operations.py</code> file.</p> <p>I'm not sure what data is still being referenced and isn't being cleared, so realistically, what could be causing this issue?</p> <pre><code>class DataObject(models.Model): raw_data = models.TextField() name = models.TextField(null=True) def post_process(self): data = json.loads(self.raw_data) self.name = data['name'] def do_from_schedule(): api = API() loads = api.grab_all_data() del api datas_to_create = [] for load in loads: data = DataObject() data.raw_data = json.dumps(load) data.post_process() datas_to_create.append(data) DataObject.objects.bulk_create(datas_to_create, ignore_conflicts=True, batch_size=200) del datas_to_create gc.collect() while True: do_from_schedule() time.sleep(5) </code></pre> <p>Here are the results of tracemalloc. There's a consistent, unending memory leak.</p> <ul> <li><p>#1: postgresql\operations.py:322: 156.9 KiB return cursor.query.decode()</p> </li> <li><p>#1: postgresql\operations.py:322: 235.3 KiB return cursor.query.decode()</p> </li> <li><p>#1: postgresql\operations.py:322: 313.7 KiB return cursor.query.decode()</p> </li> <li><p>#1: postgresql\operations.py:322: 392.0 KiB return cursor.query.decode()</p> </li> <li><p>#1: postgresql\operations.py:322: 470.4 KiB return cursor.query.decode()</p> </li> </ul> <p>I found this for someone using MySql <a href="https://stackoverflow.com/a/65098227/8521346">https://stackoverflow.com/a/65098227/8521346</a></p> <p>But I'm not sure how to even test or fix this if it is an issue with the postgres client.</p> <p>--Edit--</p> <p>The purported duplicate question had nothing to do with the issue, nor did it fix it. I already included <code>reset_queries()</code> in my code well before posting. This issue is strictly related to <code>bulk_create</code>.</p>
<python><django>
2024-08-29 18:39:43
0
2,198
Bigbob556677
78,929,001
1,330,734
Flask WebSocket Messages not emitted
<p>I want to use a WebSocket to stream dummy data (a random 4-char string) to a div on a HTML page, using Python with the Flask webserver. The source for both Python and HTML follows.</p> <p>main7.py</p> <pre><code>from flask import Flask, render_template from flask_socketio import SocketIO import random import string app = Flask(__name__) socketio = SocketIO(app) @app.route('/') def index(): return render_template('index7.html') def generate_random_chars(): while True: random_chars = ''.join(random.choices(string.ascii_letters + string.digits, k=4)) socketio.emit('update', random_chars) socketio.sleep(2) if __name__ == '__main__': socketio.start_background_task(target=generate_random_chars) socketio.run(app, host='localhost', port=5000, debug=True) </code></pre> <p>index7.html:</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;title&gt;Flask WebSocket Test&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;div id=&quot;random-chars&quot;&gt;Waiting for updates...&lt;/div&gt; &lt;script src=&quot;https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.0.0/socket.io.js&quot;&gt;&lt;/script&gt; &lt;script&gt; const socket = io(); socket.on('update', function(data) { document.getElementById('random-chars').innerText = data; }); &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>The page is served with no error and there appears to be a WebSocket connection established but after that, there none of the emitted messages appear in the network traffic. There are also no error messages from the Flask debug output, or the DevTools console.</p> <pre><code> * Restarting with watchdog (inotify) * Debugger is active! 
* Debugger PIN: 136-937-281 (210008) wsgi starting up on http://127.0.0.1:5000 (210008) accepted ('127.0.0.1', 49994) 127.0.0.1 - - [29/Aug/2024 19:48:45] &quot;GET / HTTP/1.1&quot; 200 640 0.003211 (210008) accepted ('127.0.0.1', 49998) 127.0.0.1 - - [29/Aug/2024 19:48:45] &quot;GET / HTTP/1.1&quot; 400 293 0.001261 127.0.0.1 - - [29/Aug/2024 19:49:04] &quot;GET / HTTP/1.1&quot; 200 611 0.002471 127.0.0.1 - - [29/Aug/2024 19:49:04] &quot;GET /socket.io/?EIO=4&amp;transport=polling&amp;t=P6V2cR7 HTTP/1.1&quot; 200 278 0.000470 (210008) accepted ('127.0.0.1', 42628) (210008) accepted ('127.0.0.1', 42634) 127.0.0.1 - - [29/Aug/2024 19:49:04] &quot;POST /socket.io/?EIO=4&amp;transport=polling&amp;t=P6V2cRC&amp;sid=WyHUScU-T8s22PT_AAAA HTTP/1.1&quot; 200 219 0.001391 127.0.0.1 - - [29/Aug/2024 19:49:04] &quot;GET /socket.io/?EIO=4&amp;transport=polling&amp;t=P6V2cRD&amp;sid=WyHUScU-T8s22PT_AAAA HTTP/1.1&quot; 200 213 0.000228 127.0.0.1 - - [29/Aug/2024 19:49:04] &quot;GET /socket.io/?EIO=4&amp;transport=polling&amp;t=P6V2cRM&amp;sid=WyHUScU-T8s22PT_AAAA HTTP/1.1&quot; 200 181 0.000252 127.0.0.1 - - [29/Aug/2024 19:49:48] &quot;GET /socket.io/?EIO=4&amp;transport=websocket&amp;sid=WyHUScU-T8s22PT_AAAA HTTP/1.1&quot; 200 0 43.720072 127.0.0.1 - - [29/Aug/2024 19:49:48] &quot;GET / HTTP/1.1&quot; 200 611 0.001028 127.0.0.1 - - [29/Aug/2024 19:49:48] &quot;GET /socket.io/?... </code></pre> <p><a href="https://i.sstatic.net/2crjkYM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2crjkYM6.png" alt="devtools" /></a></p> <p>Any insight into the problem appreciated!</p>
<python><flask><websocket>
2024-08-29 17:23:04
1
490
user1330734
78,928,919
8,308,617
Issue with Updating Entries in Asset_Insert Array
<p>Note: This specific code is being executed on the Ignition Perspective platform, but the problem I am having is more related to logic than to the platform.</p> <p><strong>Problem Description:</strong></p> <p>I'm encountering a problem with updating entries in an array named <code>Asset_Insert</code> within an Ignition Perspective project. When I modify a cell value in a table, the <code>Asset_Insert</code> array should be updated with the new data. However, instead of updating the existing entry, a new entry is being added each time. (The method will be executed every time on each commit/value + enter)</p> <pre><code>def runAction(self, event): col = event.column row = event.row BASE_ASSET_NAME = self.getSibling(&quot;Txt_Selected_BT&quot;).props.text plantName = self.props.data[0].Plant_Name.value assetInsert = self.custom.Asset_Insert if 'value' in self.props.data[row][col]: self.props.data[row][col]['value'] = event.value # Flag to check if we found the row to update updated = False # Iterate and update existing entry for i, item in enumerate(assetInsert): if item[&quot;ASSET_NAME&quot;] == asset_data[&quot;ASSET_NAME&quot;]: # Update existing entry assetInsert[i] = asset_data updated = True break # Append the asset_data if not updated if not updated: assetInsert.append(asset_data) </code></pre> <p><strong>Request for Help:</strong></p> <p>How can I modify the code to ensure that the existing entry in <code>Asset_Insert</code> is updated rather than creating a new one?</p> <p>Any suggestions or insights on how to resolve this issue would be greatly appreciated.</p> <p>I have used a for loop to iterate through the items in the <code>Asset_Insert</code> array, but it only works if there is exactly one row in the table. If there is more than one row, it first creates one object per row and then keeps updating only the last object.</p>
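As a plain-Python sketch of the update-or-append ("upsert") logic in isolation (names mirror the question; how `asset_data` is built is assumed, since the snippet does not show it): keying the match on the unique field and replacing in place cannot duplicate entries, so if duplicates still appear, the cause is likely how the property is written back rather than the loop itself.

```python
def upsert(items, new_item, key="ASSET_NAME"):
    """Replace the entry whose `key` matches, or append if none does."""
    for i, item in enumerate(items):
        if item[key] == new_item[key]:
            items[i] = new_item  # update existing entry in place
            return items
    items.append(new_item)       # no match: genuinely new entry
    return items

assets = []
upsert(assets, {"ASSET_NAME": "Pump1", "VALUE": 1})
upsert(assets, {"ASSET_NAME": "Pump1", "VALUE": 2})  # updates, no duplicate
upsert(assets, {"ASSET_NAME": "Pump2", "VALUE": 3})
print(assets)
```

One Perspective-specific caveat worth checking (an assumption, not verified here): mutating `self.custom.Asset_Insert` in place may not persist the way a plain list would; copying it to a normal Python list, modifying, and reassigning `self.custom.Asset_Insert = new_list` is a common pattern.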
<python><datatable><logic>
2024-08-29 16:59:37
0
462
Uniquedesign
78,928,866
4,648,809
Python multiprocessing manager does not work in uwsgi for spawn method
<p>The Python multiprocessing manager does not work under uwsgi with the 'spawn' start method on Linux. Code:</p> <pre><code>from torch.multiprocessing import Manager import torch.multiprocessing as mp if __name__ == 'uwsgi_file__home_kasha_uwsgi_test_test': ctx = mp.set_start_method('spawn') m = mp.Manager() def application(env, start_response): start_response('200 OK', [('Content-Type','text/html')]) return [b&quot;Hello World&quot;] </code></pre> <p>Run:</p> <pre><code>uwsgi --ini ./test.ini </code></pre> <p>Error:</p> <pre><code>unable to load configuration from from multiprocessing.resource_tracker import main;main(9) /usr/bin/uwsgi-core: unrecognized option '--multiprocessing-fork' </code></pre> <p>OS:</p> <blockquote> <p>DISTRIB_ID=Ubuntu DISTRIB_RELEASE=22.04 DISTRIB_CODENAME=jammy DISTRIB_DESCRIPTION=&quot;Ubuntu 22.04.2 LTS&quot;</p> </blockquote> <p>If I pass <code>'fork'</code> instead of <code>'spawn'</code> to <code>mp.set_start_method(...)</code>, it works.</p> <p>test.ini:</p> <pre><code>[uwsgi] master = true processes = 1 socket = 127.0.0.1:9000 buffer-size=32768 #protocol = http-socket socket = /home/kasha/uwsgi_test/test.sock chown-socket = kasha:kasha vacuum = true die-on-term = true plugins = python3 #logto = /home/kasha/uwsgi_test/err.log wsgi-file = /home/kasha/uwsgi_test/test.py lazy = true enable-threads = false </code></pre>
<python><linux><uwsgi>
2024-08-29 16:42:09
0
1,031
Alex
78,928,846
6,231,539
Kafka partitioning key difference between Python and Nestjs
<p>In my infrastructure I have multiple microservices in multiple languages communicating over Kafka. In my Kafka I have multiple partitions, and to keep consistency I use a key when I push messages so that messages with the same key go to the same partition.</p> <p>It works well when different Nestjs microservices communicate with each other. The same goes for different Python microservices. But if a Python service and a Nestjs one use the same key, the messages do not go to the same partition.</p> <p>Here is my Nestjs code using ClientKafka from @nestjs/microservices:</p> <pre><code>private client: ClientKafka; ... private sendMessage(topic: string, message: any, partitionKey: string): void { console.log('key', partitionKey); this.client.emit(topic, { key: partitionKey, value: message }); } </code></pre> <p>Here is my Python code using Producer from <code>confluent_kafka</code>:</p> <pre><code>def send_message(topic: str, message: str, partitioning_key: str): producer = Producer(...) producer = get_kafka_producer() producer.produce(topic, key=partitioning_key, value=message.encode(&quot;utf-8&quot;), callback=delivery_callback) producer.poll(1) producer.flush() </code></pre>
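One likely culprit, worth verifying against the specific client versions: different Kafka clients default to different partitioners. librdkafka-based clients such as confluent-kafka default to a CRC32-based "consistent_random" partitioner, while the Java/KafkaJS family hashes keys with murmur2, so the same key can land on different partitions. If that is the cause, aligning the Python side is a one-line config change (a sketch; the broker address is a placeholder):

```python
# Hypothetical confluent-kafka producer config choosing the murmur2-based
# partitioner so key hashing matches Java-compatible clients.
conf = {
    "bootstrap.servers": "localhost:9092",  # placeholder
    "partitioner": "murmur2_random",        # murmur2 hash; random for null keys
}
# from confluent_kafka import Producer
# producer = Producer(conf)
```

The safest check is empirical: produce the same key from both services and compare the partition reported in the delivery callback on each side.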
<python><apache-kafka><nestjs><confluent-kafka-python>
2024-08-29 16:36:14
1
1,922
Antoine Grenard
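The two clients ship different default partitioners: librdkafka (which backs confluent-kafka) defaults to `consistent_random`, a CRC32-based hash, while KafkaJS (used by NestJS's `ClientKafka`, and Java clients) hashes keys with murmur2. Assuming librdkafka's documented config names, passing `{"partitioner": "murmur2_random"}` to the Python `Producer` should line the two up. The Java-compatible mapping can be sketched in pure Python to check which partition a key ought to land on:

```python
def murmur2(data: bytes) -> int:
    """Pure-Python port of Kafka's Java murmur2 hash (seed 0x9747b28c)."""
    m, r = 0x5BD1E995, 24
    h = (0x9747B28C ^ len(data)) & 0xFFFFFFFF
    n = len(data) // 4 * 4
    # body: 4-byte little-endian chunks
    for i in range(0, n, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * m) & 0xFFFFFFFF
        k ^= k >> r
        k = (k * m) & 0xFFFFFFFF
        h = ((h * m) & 0xFFFFFFFF) ^ k
    # tail: remaining 1-3 bytes, mirroring the Java switch fall-through
    left = len(data) & 3
    if left == 3:
        h ^= data[n + 2] << 16
    if left >= 2:
        h ^= data[n + 1] << 8
    if left >= 1:
        h ^= data[n]
        h = (h * m) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * m) & 0xFFFFFFFF
    h ^= h >> 15
    return h

def java_compatible_partition(key: bytes, num_partitions: int) -> int:
    # Kafka masks off the sign bit before taking the modulus
    return (murmur2(key) & 0x7FFFFFFF) % num_partitions
```

Comparing `java_compatible_partition(key, n)` against where each client actually writes is a quick way to confirm which side is using the non-Java hash.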
78,928,797
1,087,370
How to wait for a confirmation dialog in Flet?
<p>I press a button, a confirmation popup should appear, and based on the option picked (e.g. yes or no) the app should decide what to do next.</p> <p>Using <code>flet==0.22.*</code></p>
<python><flet>
2024-08-29 16:20:18
2
5,934
Alex
78,928,639
897,272
Is it possible to have python workers use different global context?
<p>We have a singleton object we use for our program that refers to an ugly god-object that I wish would die - we inherited the code from a team that had already implemented it this way. Unfortunately, stripping the singleton out is a non-trivial task at this point.</p> <p>We want to have a REST endpoint, using django rest_framework to be exact, which uses a partially initialized version of our main program, with just enough set up to answer questions about how our main program would respond to certain inputs.</p> <p>The problem is that the singleton object has state that would need to be different for each incoming request, but right now, since it's a global variable, changing it in one worker changes it in all workers, which risks data-race issues if two commands that need different state come in at the same time on separate threads.</p> <p>I'm wondering if there is a way to set up workers to have their own version of global variables and our singleton, unaffected by changes to the singleton in other workers, so we can change state in a worker without worrying about it affecting other REST calls - short of spinning up a new subprocess each time?</p>
<python><django-rest-framework><singleton>
2024-08-29 15:41:25
1
6,521
dsollen
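Short of a real refactor, the usual trick is to make the singleton's mutable state thread-local (or a `contextvars.ContextVar`, which also behaves correctly in async handlers), so each worker thread sees its own copy without any subprocesses. A sketch with hypothetical names standing in for the god-object:

```python
import threading

class GodObjectState(threading.local):
    """Subclassing threading.local means __init__ runs once per thread,
    so every thread that touches STATE gets an independent copy."""
    def __init__(self):
        self.mode = "default"
        self.inputs = []

STATE = GodObjectState()  # still a module-level 'singleton' by name

def simulate(value):
    # mutations here are only visible to the current thread
    STATE.inputs.append(value)
    return (STATE.mode, list(STATE.inputs))
```

The module-level name stays put, so callers of the singleton don't change; only the state behind it becomes per-thread.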
78,928,619
494,134
Two if statements in a list comprehension
<p>Looking at <a href="https://stackoverflow.com/q/77893051/494134">another question</a>, I saw what I assumed was a simple typo or misunderstanding of list comprehension syntax -- but it actually worked!</p> <p>This is a simplified version of the code in that question:</p> <pre><code>message = &quot;ABCABCABCABC&quot; data = [ch for ch in message if ch != 'A' if ch != 'B'] print(data) </code></pre> <p>It prints <code>['C', 'C', 'C', 'C']</code>.</p> <p>The double <code>if</code> conditions at the end seem very odd to me. Why isn't that a syntax error? How is it parsed?</p>
<python><list-comprehension>
2024-08-29 15:37:50
0
33,765
John Gordon
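Per the comprehension grammar, any number of trailing `for` and `if` clauses may follow the expression; each extra `if` simply nests inside the previous clause, so stacked `if`s behave like conditions joined with `and`:

```python
message = "ABCABCABCABC"

# two stacked ifs...
stacked = [ch for ch in message if ch != 'A' if ch != 'B']

# ...desugar to nested ifs in the equivalent loop:
nested = []
for ch in message:
    if ch != 'A':
        if ch != 'B':
            nested.append(ch)

# which is the same as a single combined condition
combined = [ch for ch in message if ch != 'A' and ch != 'B']
```

All three produce `['C', 'C', 'C', 'C']`, which is why the double-`if` form is legal, if unidiomatic.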
78,928,608
8,758,459
How to fix mixed content error when embedding a FastAPI app as iframe
<p>I'm deploying a web application using NiceGUI (fastAPI) on Google Cloud Run. The main application is served over HTTPS, but I’m embedding a sub-application (Chainlit) as an iframe within the main app. The Chainlit app is mounted as a FastAPI subapp.</p> <p>When I load the main app (served locally or remotely), I get a mixed content error and the Chainlit endpoint does not load: &quot;Mixed Content: The page at 'https://main-app.a.run.app/' was loaded over HTTPS, but requested an insecure frame 'http://main-app.a.run.app/chat/'. This request has been blocked; the content must be served over HTTPS.</p> <p>As an observation when I manually change the iframe URL with the browser inspector tool to something else and then put it back again, the iframe loads correctly.</p> <p><strong>Question</strong>: What could be causing the subapp to default to HTTP, and how can I ensure it is always served over HTTPS to avoid the mixed content error?</p> <p>My implementation:</p> <pre><code>from nicegui import context, app, ui import chainlit as cl from chainlit.utils import mount_chainlit ui.html(f'&lt;iframe id=&quot;chat_frame&quot; src=&quot;{CHAINLIT_BASE_URL}&quot; style=&quot;width: 100%; height: 100%; border: none;&quot;&gt;&lt;/iframe&gt;').classes('w-full h-full') @app.middleware(&quot;http&quot;) async def add_security_headers(request, call_next): response = await call_next(request) # Content Security Policy header for iframe embedding response.headers['Content-Security-Policy'] = f&quot;frame-ancestors 'self' {URL_REMOTE};&quot; return response mount_chainlit(app=app, target=&quot;chainlit_app.py&quot;, path=&quot;/chat&quot;) # Mount the Chainlit app ui.run(title=TITLE, favicon=FAVICON, storage_secret=API_KEY, host=BASE_URL, port=PORT) </code></pre>
<python><fastapi><google-cloud-run><nicegui><chainlit>
2024-08-29 15:34:27
0
395
John Szatmari
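Since the page and the subapp share an origin, one way to sidestep the problem entirely is to avoid hard-coding a scheme: a relative `src="/chat/"` (or scheme-relative `//host/chat/`) inherits the page's HTTPS. Behind Cloud Run's TLS-terminating proxy the app itself sees plain HTTP, so absolute URLs it builds come out as `http://` unless the forwarded headers are honoured (e.g. running uvicorn with `--proxy-headers`, an assumption about the serving setup). If an absolute URL must be emitted, normalizing it before rendering avoids the mixed-content block:

```python
from urllib.parse import urlsplit, urlunsplit

def force_https(url: str) -> str:
    """Upgrade an http:// URL to https://; leave relative and https URLs alone."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

e.g. `f'<iframe src="{force_https(CHAINLIT_BASE_URL)}" ...>'` in the `ui.html` call above.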
78,928,486
2,715,498
How to deduce whether a classmethod is called on an instance or on a class
<p>I'm writing a logger decorator that can be applied to, among others, classmethods (and theoretically any kind of function or method). My problem is that their parametrization changes according to whether a function is</p> <ul> <li>a standalone_function(*args)</li> <li>a member_method(self, *args) or</li> <li>a class_method(cls, *args)</li> </ul> <p>Example code that works, but is anything but elegant and can be tricked by parametrization:</p> <pre><code>def some_decorator(func): def wrapper(*argsWrap, **kwargsWrap): if isinstance(func, classmethod): if isinstance(argsWrap[0], SomeClass): # Called by some_instance.some_method as the first parameter is SomeClass print(argsWrap[1]) return func.__func__(*argsWrap, **kwargsWrap) else: # Called by SomeClass.some_method print(argsWrap[0]) return func.__func__(SomeClass, *argsWrap, **kwargsWrap) else: return func(*argsWrap, **kwargsWrap) return wrapper class SomeClass: @some_decorator @classmethod def some_method(cls, some_var: str): print(some_var) if __name__ == '__main__': SomeClass.some_method('Test') some_instance = SomeClass() some_instance.some_method('Test2') </code></pre> <p>The question is simple: is it possible to make a cleaner and safer decision regarding just the parametrization?</p> <p>Note that for a general-purpose decorator (used by others) one cannot dictate that it must be placed below <code>@classmethod</code> rather than above, etc.</p> <p>Also note that this solution is a &quot;bad&quot; workaround in the sense that one can easily imagine functions that use more instances of the same class (for example, a typical overloaded operator like <code>__eq__</code> works like this).</p> <p>Also note that the real problem, as I narrowed it down, is that a classmethod called on a class and one called on an instance of that class receive <em>different</em> parametrization (the first parameter being the first arg for a class-bound call and the class reference for an instance-bound call).</p>
<python><python-decorators>
2024-08-29 15:04:35
1
3,372
Gyula SΓ‘muel Karli
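A cleaner route than inspecting `argsWrap[0]` is to make the decorator itself a descriptor and delegate binding to whatever it wraps: `classmethod` objects, plain functions, and methods all implement `__get__`, so the wrapper never has to guess what the first positional argument means. A sketch; the `calls` list stands in for real logging:

```python
import functools

calls = []  # stand-in for a real logger

class logged:
    """Decorator usable above @classmethod, on methods, or on plain functions."""

    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        # Delegate binding: classmethod.__get__ supplies cls,
        # plain function.__get__ supplies self.
        bound = self.func.__get__(obj, objtype)

        @functools.wraps(bound)
        def wrapper(*args, **kwargs):
            calls.append((getattr(bound, "__name__", "?"), args, kwargs))
            return bound(*args, **kwargs)

        return wrapper

    def __call__(self, *args, **kwargs):
        # Standalone functions are called directly, never via __get__
        calls.append((self.func.__name__, args, kwargs))
        return self.func(*args, **kwargs)
```

Because binding is delegated, `cls` is filled in by `classmethod` itself for both `SomeClass.some_method(...)` and `some_instance.some_method(...)`, and the wrapper only ever sees the caller-supplied arguments.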
78,928,344
495,786
Tool to extract functions used from specific package in Python
<p>Does anyone know a tool to copy all the functions used in a project coming from a specific package? A possible use case would be something like the following:</p> <p>Project A is a Python application that uses several functions from Package B as a dependency. Let's say that Package B is a private project; if we distribute Project A together with all its dependencies, this will include all the code from Package B, even the functions not used in Project A.</p> <p>Is there any automatic way to find every function from Package B that's used in Project A so that it can be copied using the same module structure? That way, just by changing imports from <code>from package_b import A, B, C</code> we could just use <code>from project_a import A, B, C</code>.</p> <p>I understand that due to Python's dynamic nature the general case could be quite complicated, but I would be happy with a tool that receives as input one or more package names like &quot;package_b&quot;, then identifies imports &quot;from package_b.* import x, y, z&quot; and then copies over the code for functions &quot;x&quot;, &quot;y&quot; and &quot;z&quot;.</p>
<python><dependencies>
2024-08-29 14:33:19
1
1,987
skd
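As a rough starting point for the "identify the imports" half, the stdlib `ast` module can collect every name pulled in via `from package_b... import ...` without executing anything; attribute-style uses (`package_b.x`) and dynamic imports would still need extra handling:

```python
import ast

def names_imported_from(source: str, package: str) -> set:
    """Collect names imported via 'from package[.sub] import ...'."""
    wanted = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom) and node.module:
            # match the package itself or any of its submodules
            if node.module == package or node.module.startswith(package + "."):
                wanted.update(alias.name for alias in node.names)
    return wanted
```

Running this over every `.py` file in Project A yields the set of names whose definitions would then need to be located and copied out of Package B (the harder half of the problem, since those functions have transitive dependencies of their own).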
78,928,280
1,741,868
Deploying a python Function App where the source is in a subdirectory
<p>I've got a mono-repo with a python project that I'm deploying to Azure. There's an API being deployed as a container and I'm trying to add a function app.</p> <p>My python code is structured in a few folders underneath a /backend folder, like</p> <pre><code>./backend ./backend/domain/ - contains domain logic .py files ./backend/entrypoints/api - contains Flask app files ./backend/entrypoints/my-func - contains my new function app ./backend/... - various other folder hierarchies that are imported. </code></pre> <p>I have a venv set up and a requirements.txt in the root of the project.</p> <p>The function app imports <code>domain</code> and other modules. All folders have an <code>__init__.py</code> to mark them as modules.</p> <p>I'm struggling to run the project. If I <code>func start</code> in the <code>./backend/entrypoints/my-func</code> directory I get a <code>ModuleNotFoundError</code>:</p> <pre><code>Exception: ModuleNotFoundError: No module named 'dotenv'. Cannot find module. Please check the requirements.txt file for the missing module. </code></pre> <p>My flask app can load <code>dotenv</code> just fine and it's in the requirements.txt and installed.</p> <pre><code>$ pip install -r requirements.txt ... Requirement already satisfied: python-dotenv~=1.0.1 in /home/greg/.pyenv/versions/3.12.2/envs/fish12-2/lib/python3.12/site-packages (from -r requirements.txt (line 27)) (1.0.1) ... </code></pre> <p>Is this possible?</p>
<python><python-3.x><azure-functions>
2024-08-29 14:15:25
1
14,935
Greg B
78,928,259
1,325,861
scrape link for jwplayer calculated with JS using python
<p>I'm trying to scrape video link (m3u8) from this website: <a href="https://deaddrive.xyz/embed/fa31e" rel="nofollow noreferrer">https://deaddrive.xyz/embed/fa31e</a></p> <p>While inspecting the page, I realized that the link is calculated on the fly using JS in the function:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>&lt;script type="text/javascript"&gt;eval(function(p,a,c,k,e,d){while(c--)if(k[c])p=p.replace(new RegExp('\\b'+c.toString(a)+'\\b','g'),k[c]);return p}('q 4g=[];16("ai").ah({ag:[{2i:"1h://af.ae.1w/ad/ac/ab/aa/a9.a8?t=a7&amp;s=3z&amp;e=a6&amp;f=40&amp;a5=2k&amp;i=0.4&amp;a4=a3&amp;a2=2k&amp;a1=2k&amp;a0=4m"}],9z:"1h://4k.4j/9y.4i?v=4m",9x:"2j%",9w:"2j%",9v:"9u",9t:"9s.85",9r:\'9q\',9p:\'9o\',9n:{9m:{4l:"#1k",9l:"#1k"},9k:{9j:"#1k"},9i:{4l:"#1k"}},9h:"1n",p:[{2i:"/45?42=9g&amp;1e=29&amp;9f=1h://4k.4j/9e.4i",9d:"9c"}],22:{9b:1,9a:\'#1k\',99:\'#98\',97:"94",93:30,92:2j,},"91":{"90":"4e","8z":"8y"},\'8x\':{"8w":"8v"},8u:"8t",8s:"1h://2h.1w",4h:{2i:"1h://2h.1w/8r-2d/8q.8p","1v":u,2f:"1h://2h.1w/?8o=4h",1c:"8n-8m",8l:"5",1v:u},8k:1n,20:[0.25,0.5,0.75,1,1.25,1.5,2]});q 2e,2g;q 8j=0,8i=0,8h=0;q n=16();q 4b=0,8g=0,8f=0,1b=0;$.8e({8d:{\'8c-8b\':\'8a-89\'}});n.18(\'4f\',o(x){k(5&gt;0&amp;&amp;x.1c&gt;=5&amp;&amp;2g!=1){2g=1;$(\'1j.88\').86(\'84\')}4g.83(1d=&gt;{k(1d.4f&lt;=x.1c&amp;&amp;1d.4c==0){k(1d.82==\'4e\'){n.81(1d.2f)}2q{q a=4d.80(\'7z\');a.7y=1d.2f;4d.7x.7w(a);}1d.4c=1}});k(x.1c&gt;=1b+5||x.1c&lt;1b){1b=x.1c;2b.7u(\'2a\',7t.7s(1b),{7r:60*60*24*7})}});n.18(\'1p\',o(x){4b=x.1c});n.18(\'7q\',o(x){4a(x)});n.18(\'7p\',o(){$(\'1j.49\').7o();2b.7n(\'2a\')});n.18(\'7m\',o(x){});o 4a(x){$(\'1j.49\').1v();$(\'#7k\').1v();k(2e)1y;2e=1;1u=0;k(7j.7i===7h){1u=1}$.3y(\'/45?42=7g&amp;7f=7e&amp;7d=40-87-3t-3z-7c&amp;7b=1&amp;7a=79.78&amp;1u=\'+1u,o(2d){$(\'#77\').76(2d)});q 1b=2b.3y(\'2a\');k(1b&gt;0){16().1p(1b)}}o 74(){q 
p=n.1z(3x);3w.3v(p);k(p.1e&gt;1){2o(i=0;i&lt;p.1e;i++){k(p[i].2n==3x){3w.3v(\'!!=\'+i);n.2l(i)}}}}n.18(\'72\',o(){16().23(\'&lt;w 3g="3f://3e.3d.3c/3b/w" 3a="b-w-1g b-w-1g-71" 39="0 0 1s 1s" 38="u"&gt;&lt;1q d="m 25.70,57.6z v 6y.3 c 0.6x,2.6w 2.6u,4.6t 4.8,4.8 h 62.7 v -19.3 h -48.2 v -96.4 3u 6s.6r v 19.3 c 0,5.3 3.6,7.2 8,4.3 l 41.8,-27.9 c 2.6q,-1.6p 4.6o,-5.6n 2.7,-8 -0.6m,-1.6l -1.6k,-2.6j -2.7,-2.7 l -41.8,-27.9 c -4.4,-2.9 -8,-1 -8,4.3 v 19.3 3u 30.6i c -2.6h,0.6g -4.6f,2.6e -4.9,4.9 z m 3t.6d,73.6c c -3.3s,-6.3r -10.3q,-10.3p -17.7,-10.6 -7.3o,0.3n -13.3m,4.3k -17.7,10.6 -8.1t,14.3j -8.1t,32.3i 0,46.3 3.3s,6.3r 10.3q,10.3p 17.7,10.6 7.3o,-0.3n 13.3m,-4.3k 17.7,-10.6 8.1t,-14.3j 8.1t,-32.3i 0,-46.3 z m -17.7,47.2 c -7.8,0 -14.4,-11 -14.4,-24.1 0,-13.1 6.6,-24.1 14.4,-24.1 7.8,0 14.4,11 14.4,24.1 0,13.1 -6.5,24.1 -14.4,24.1 z m -47.6b,9.6a v -51 l -4.8,4.8 -6.8,-6.8 13,-12.69 c 3.68,-3.67 8.66,-0.65 8.2,3.4 v 62.64 z"&gt;&lt;/1q&gt;&lt;/w&gt;\',"63 10 33",o(){16().1p(16().31()+10)},"3h");$("1j[26=3h]").2y().2x(\'.b-1g-28\');16().23(\'&lt;w 3g="3f://3e.3d.3c/3b/w" 3a="b-w-1g b-w-1g-28" 39="0 0 1s 1s" 38="u"&gt;&lt;1q d="61.2,5z.5y.1a,21.1a,0,0,0-17.7-10.6,21.1a,21.1a,0,0,0-17.7,10.6,44.1r,44.1r,0,0,0,0,46.3,21.1a,21.1a,0,0,0,17.7,10.6,21.1a,21.1a,0,0,0,17.7-10.6,44.1r,44.1r,0,0,0,0-46.5x-17.7,47.2c-7.8,0-14.4-11-14.4-24.5w.6-24.1,14.4-24.1,14.4,11,14.4,24.5v.4,37.5u,95.5,37.5t-43.4,9.7v-5s-4.8,4.8-6.8-6.8,13-5r.8,4.8,0,0,1,8.2,3.5q.7l-9.6-.5p-5o.5n.5m.36,4.36,0,0,1-4.8,4.5l.6v-19.5k.2v-96.5j.5i.5h,5.3-3.6,7.2-8,4.3l-41.8-27.5g.35,6.35,0,0,1-2.7-8,5.34,5.34,0,0,1,2.7-2.5f.8-27.5e.4-2.9,8-1,8,4.5d.5c.5b.29,4.29,0,0,1,5a.1,57.59"&gt;&lt;/1q&gt;&lt;/w&gt;\',"58 10 33",o(){q 1o=16().31()-10;k(1o&lt;0)1o=0;16().1p(1o)},"2z");$("1j[26=2z]").2y().2x(\'.b-1g-28\');});n.18("r",o(56){q 
p=n.1z();k(p.1e&lt;2)1y;$(\'.b-j-g-26\').18(\'55\',()=&gt;{$(\'.b-g-r\').15(\'y-1m\',\'u\');$(\'.b-g-r\').15(\'y-1i\',\'u\');$(\'#b-j-g-r\').1l(\'b-j-g-1f\')});$(\'.b-j-54-53\').52(o(){$(\'#b-j-g-r\').1l(\'b-j-g-1f\');$(\'.b-g-r\').15(\'y-1i\',\'u\')});n.23("/50/4z.w","4y 4x",o(){$(\'.b-2t\').4w(\'b-j-2s\');$(\'.b-j-22, .b-2w-2u, .b-j-20\').15(\'y-1m\',\'u\');$(\'.b-j-22, .b-2w-2u, .b-j-20\').15(\'y-1i\',\'u\');k($(\'.b-2t\').4v(\'b-j-2s\')){$(\'.b-g-r\').15(\'y-1m\',\'1n\');$(\'.b-g-r\').15(\'y-1i\',\'1n\');$(\'.b-j-g\').1l(\'b-j-g-1f\');$(\'#b-j-g-r\').2r(\'b-j-g-1f\');$(\'.b-j-g:4u\').2r(\'b-j-g-1f\');}2q{$(\'.b-g-r\').15(\'y-1m\',\'u\');$(\'.b-g-r\').15(\'y-1i\',\'u\');$(\'.b-j-g-r\').1l(\'b-j-g-1f\')}},"4t");k(!4s.4r(\'4q\'))4p("2p(\'4o\')",4n)});q 1x;o 2p(2m){q p=n.1z();k(p.1e&gt;1){2o(i=0;i&lt;p.1e;i++){k(p[i].2n==2m){k(i==1x){1y}1x=i;n.2l(i)}}}}',36,379,'|||||||||||jw|||||submenu|||settings|if|||player|function|tracks|var|audioTracks|||false||svg||aria|||||||attr|jwplayer||on||589|lastt|position|item|length|active|icon|https|expanded|div|FFFFFF|removeClass|checked|true|tt|seek|path|769|240|60009|adb|hide|com|current_audio|return|getAudioTracks|playbackRates||captions|addButton|||button||rewind|974|tto5kba0nbcmc1|ls||data|vvplay|link|vvad|vidhide|file|100|91ejjoFPHQ2N|setCurrentAudioTrack|audio_name|name|for|audio_set|else|addClass|open|controls|quality||tooltip|insertAfter|detach|ff00||getPosition||sec|887|013|867|178|focusable|viewBox|class|2000|org|w3|www|http|xmlns|ff11|06475|23525|29374||97928|30317|31579|29683|38421|30626|72072|163|H|log|console|track_name|get|1724940213|9118284||op|||dl||||video_ad|doPlay|prevt|loaded|document|vast|time|uas|logo|jpg|cc|laving|text|3320|300|ΰ€Ήΰ€Ώΰ€¨ΰ₯ΰ€¦ΰ₯€|setTimeout|default_audio|getItem|localStorage|dualSound|last|hasClass|toggleClass|Track|Audio|dualy|images||mousedown|buttons|topbar|click|event||Rewind|778Z|214|2A4|3H209|3v19|9c4|7l41|9a6|3c0|1v19|4H79|3h48|8H146|3a4|2v125|130|1Zm162|4v62|13a4|51l|278Zm|278|1S103
|1s6|3Zm|078a21|131||M113||Forward|69999|88605|21053|03598|02543|99999|72863|77056|04577|422413|210431|860275|03972|689569|893957|124979|52502|174985|57502|04363|13843|480087|93574|99396|160|76396|164107||63589|03604|125|778|993957|rewind2|ready||set_audio_track||html|fviews|xyz|deaddrive|referer|embed|7a70a6d4c15fada1bbee046caaeeb156|hash|o5kba0nbcmc1|file_code|view|undefined|cRAds1|window|over_player_msg||pause|remove|show|complete|play|ttl|round|Math|set||appendChild|body|src|script|createElement|playAd|xtype|forEach|slow||fadeIn||video_ad_fadein|cache|no|Cache|Content|headers|ajaxSetup|v2done|tott|pop3done|vastdone2|vastdone1|playbackRateControls|margin|left|top|ref|png|logo_28779|upload|aboutlink|VidHide|abouttext|720p|866|qualityLabels|insecure|vpaidmode|client|advertising|fontOpacity|backgroundOpacity|Tahoma|||fontFamily|303030|backgroundColor|color|userFontScale|thumbnails|kind|o5kba0nbcmc10000|url|get_slides|androidhls|menus|progress|timeslider|icons|controlbar|skin|none|fullscreenOrientationLock|auto|preload|973|duration|uniform|stretching|height|width|o5kba0nbcmc1_xt|image|asn|p2|p1|500|sp|srv|129600|zwM4N63Iatoh52phXBqaJRlmUGtZ15AENgnDrxl3BwA|m3u8|master|o5kba0nbcmc1_n|01823|01|hls2|hgagecdn|V0eSu3D58smE|sources|setup|vplayer'.split('|'))) &lt;/script&gt;</code></pre> </div> </div> </p> <p>Can I somehow execute this JS script within python using Selenium or something else and unscramble the m3u8 link?</p> <p>Thank you</p>
<javascript><python><selenium-webdriver><web-scraping>
2024-08-29 14:10:32
0
535
Gaurav Suman
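The `eval(function(p,a,c,k,e,d){...})` wrapper is the classic "p.a.c.k.e.r" obfuscator, and it can be reversed in plain Python without Selenium or a browser: every base-`a` token in the payload `p` is replaced by the matching word from the `k` dictionary, exactly as the packed bootstrap loop does. The jwplayer `setup({...})` call with the m3u8 `file:` URL then appears in the unpacked text. A minimal sketch of the substitution step (extracting `p`, `a`, `c`, `k` from the page is left to a regex over the script tag):

```python
import re
import string

# digit set matching JavaScript's Number.prototype.toString(36)
DIGITS = string.digits + string.ascii_lowercase

def to_base(n: int, base: int) -> str:
    if n == 0:
        return "0"
    out = ""
    while n:
        n, rem = divmod(n, base)
        out = DIGITS[rem] + out
    return out

def unpack(p: str, a: int, c: int, k: list) -> str:
    """Replay the packer loop: replace token i (written in base a) with k[i]."""
    for i in range(c - 1, -1, -1):
        if k[i]:
            token = re.escape(to_base(i, a))
            # lambda replacement so backslashes in k[i] are taken literally
            p = re.sub(r"\b%s\b" % token, lambda _m, w=k[i]: w, p)
    return p
```

For the snippet above, `a` is 36, `c` is 379, and `k` is the long `'...'.split('|')` list at the end; the m3u8 URL can then be pulled from the unpacked string with a simple `re.search`.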
78,927,980
2,832,011
Hiding facet row axis in Altair chart
<p>I'm using a faceted chart in Altair in order to split a lengthy timeline into multiple rows.</p> <p>My initial dataset is a pandas dataframe with &quot;Start&quot; and &quot;End&quot; timestamp columns and a &quot;Product&quot; string column.</p> <p>I bin the dataset into roughly equal rows by evenly dividing the time range:</p> <pre><code>timestamp_normalized = (data.Start - data.Start[0]) / (data.Start.iloc[-1] - data.Start[0]) # range from 0 to 1 data['Row'] = (timestamp_normalized * 3 - 1e-9).astype(int) # divide into 3 bins </code></pre> <p>Then I draw the facet chart like this:</p> <pre><code>import altair as alt alt.Chart(data).mark_bar().encode( x=alt.X('Start', title='') x2='End', color=alt.Color('Product', scale=alt.Scale(scheme='dark2')) ).properties( width=800, height=50 ).facet( row=alt.Row('Row', title=''), ).resolve_scale( x='independent', ) </code></pre> <p>This produces the right chart, but unfortunately the bin indices (which are completely irrelevant, as they only serve to split it into three pieces) are shown on the left side. Is there any way to hide these?</p> <p><a href="https://i.sstatic.net/82uSLU7T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82uSLU7T.png" alt="enter image description here" /></a></p>
<python><pandas><facet><altair>
2024-08-29 13:12:11
1
4,709
Christoph Burschka
78,927,977
8,703,313
update dictionaries in the list comprehension
<p>There is a dictionary:</p> <pre class="lang-py prettyprint-override"><code>d = [{&quot;a&quot;:1, &quot;b&quot;:2},{&quot;a&quot;:3, &quot;b&quot;:4},{&quot;a&quot;:5, &quot;b&quot;:6}] </code></pre> <p>I'd like to update values of keys <code>b</code>.</p> <pre class="lang-py prettyprint-override"><code>d = [{**m}.update({&quot;b&quot;:5}) for m in d] </code></pre> <p>but I don't understand why this gives <code>d = [None, None, None]</code></p> <p>I'd expect <code>d = [{&quot;a&quot;:1, &quot;b&quot;:5},{&quot;a&quot;:3, &quot;b&quot;:5},{&quot;a&quot;:5, &quot;b&quot;:5}]</code></p>
<python><list-comprehension>
2024-08-29 13:11:59
3
310
Honza S.
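`dict.update()` mutates the dictionary in place and returns `None`, and the comprehension collects those return values, hence `[None, None, None]` (each freshly built `{**m}` copy is updated and then discarded). Building the merged dict as a single expression gives the expected result:

```python
d = [{"a": 1, "b": 2}, {"a": 3, "b": 4}, {"a": 5, "b": 6}]

# build new dicts: later keys win, so "b" is overwritten
updated = [{**m, "b": 5} for m in d]

# Python 3.9+: the merge operator reads the same way
merged = [m | {"b": 5} for m in d]
```

Both forms leave the original dicts in `d` untouched; to mutate in place instead, use a plain `for` loop with `m["b"] = 5`.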
78,927,891
16,958,410
what is Password-based authentication in the UserCreationForm in Django?
<p>I created a signup form in Django using Django forms, and when I run my code there is a field I didn't expect: <strong>Password-based authentication</strong>. I did not add it and have no idea what it is, so can anyone tell me what it is and how I can remove it from the user signup form? <a href="https://i.sstatic.net/Z9aQLmSb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z9aQLmSb.png" alt="signup form" /></a></p> <h2>form.py</h2> <pre><code>from django import forms from django.contrib.auth import get_user_model from django.contrib.auth.forms import UserCreationForm from django.contrib.auth.hashers import check_password class RegisterForm(UserCreationForm): &quot;&quot;&quot;Form to Create new User&quot;&quot;&quot; def __init__(self, *args,hashed_code=None, **kwargs) -&gt; None: super(RegisterForm,self).__init__(*args, **kwargs) self.hashed_code = hashed_code code = forms.CharField(max_length=4, required=True, label=&quot;code&quot;, help_text=&quot;Enter the four-digit code&quot; ) def is_valid(self): &quot;&quot;&quot;Return True if the form has no errors, or False otherwise.&quot;&quot;&quot; if not self.hashed_code: self.add_error(&quot;code&quot;,&quot;you have not any valid code get the code first&quot;) elif not check_password(self.data.get(&quot;code&quot;),self.hashed_code) : self.add_error(&quot;code&quot;,&quot;code is invalid&quot;) return self.is_bound and not self.errors class Meta: model = get_user_model() fields = [&quot;email&quot;, &quot;password1&quot;, &quot;password2&quot;,&quot;code&quot;] </code></pre>
<python><django><django-forms>
2024-08-29 12:52:52
2
596
mehdi_ahmadi
78,927,863
1,614,809
removing a label on a jira issue using python
<p>I can add a label like this</p> <pre><code>issue.fields.labels.append(&quot;MYNEWLABEL&quot;) </code></pre> <p>but I have searched the docs and duckduckgo'd until my hair has gone greyer but I have not figured out how to remove a label. I tried the method I used to remove a component but it didn't work</p> <pre><code># doesn't work: issue.update(update={&quot;labels&quot;: [{&quot;remove&quot;: {&quot;name&quot;: &quot;MYOLDLABEL&quot;}}],},) </code></pre> <p>this works for components, so I thought something similar would work for labels, but not for me</p> <pre><code># works: issue.update(update={&quot;components&quot;: [{&quot;remove&quot;: {&quot;name&quot;: &quot;MYOLDCOMPONENT&quot;}}],},) </code></pre> <p>would be ever so happy to hear how to do this, I don't want to have to manually change hundreds of issues (the bulk editor doesnt' work for me in the browser).</p> <hr /> <p>FYI, here's the core essence of my script:</p> <pre><code>#!/usr/bin/env python3 import sys # https://pypi.org/project/jira/ from jira import JIRA JIRA_API_TOKEN_ENV_NAME = 'JIRA_API_TOKEN' JIRA_FQDN = 'https://jira.example.com' JIRA_JQL = 'labels in (OLDLABEL and labels not in (NEWLABEL)' def main(): jira_api_token = os.environ[JIRA_API_TOKEN_ENV_NAME] issues = jira.search_issues(JIRA_JQL) issue_num = 1 for issue in issues: print(f'=== {issue_num} - {issue.key} ===') print(f'project { issue.fields.project.key}') print(f'state.name { issue.fields.status.name}') was_closed = False if issue.fields.status.name == 'Closed': was_closed = True print(f'labels before { issue.fields.labels }') if was_closed: print('Was closed, reopening') jira.transition_issue(issue, '3') # transition id 3 - name Reopen Issue # works: issue.fields.labels.append('NEWLABEL') # fails: issue.update(update={&quot;labels&quot;: [{&quot;remove&quot;: {&quot;name&quot;: &quot;OLDLABEL&quot;}}],},) # works: issue.update(fields={&quot;labels&quot;: issue.fields.labels}) if was_closed: print('Was closed, reclosing') 
jira.transition_issue(issue, '2') # transition id 2 - name Close Issue issue_num = issue_num + 1 print('') </code></pre>
<python><jira><python-jira>
2024-08-29 12:45:50
1
320
Paul M
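Unlike components, labels in Jira's REST edit payload are bare strings, so (assuming the standard issue-edit semantics the `jira` library forwards) the operation objects are `{"add": "X"}` / `{"remove": "X"}` with no inner `{"name": ...}` wrapper, i.e. `issue.update(update={"labels": [{"remove": "MYOLDLABEL"}]})`. A tiny helper makes the shape explicit:

```python
def label_update(add=(), remove=()):
    """Build the 'update' argument for issue.update().

    Labels are plain strings in Jira's API, so each operation is
    {"add": <str>} or {"remove": <str>} - not {"remove": {"name": ...}}
    as it would be for components.
    """
    ops = [{"add": label} for label in add]
    ops += [{"remove": label} for label in remove]
    return {"labels": ops}
```

Usage would be `issue.update(update=label_update(remove=["MYOLDLABEL"]))`; the fallback of filtering `issue.fields.labels` and writing it back via `fields=` also works, as the script already shows for adds.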
78,927,743
4,049,658
How to debug a running python program?
<p>I'm a programmer but don't know much about Python. The program in question is the a1111 stable diffusion webui, and due to hardware quirks I'm running it in an anaconda environment with unsupported library versions. Every now and then, the back end seemingly just hangs when inferencing in a certain way, and since I haven't found anyone else describing the same issue, I'm assuming it's to do with my environment and how it's set up. I'm not expecting this to get fixed by somebody else, so I want to go and fix it myself.</p> <p>TLDR start here: In order to run the program, I launch the anaconda prompt, activate the associated environment and then call python with the associated file name. Since it's an interpreted language, I'm assuming it's possible to manually break the execution when it hangs and get a stack trace? What kind of toolchain do I need to do that?</p>
<python><debugging>
2024-08-29 12:17:05
0
6,683
user81993
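Yes, a running CPython process can be made to dump its stacks without a debugger. From outside, `py-spy dump --pid <PID>` is the usual tool (no code changes needed). From inside, the stdlib `faulthandler` module prints every thread's traceback on demand, which is usually enough to see where a hang is sitting:

```python
import faulthandler
import tempfile

def snapshot_stacks() -> str:
    """Dump the traceback of every running thread and return it as text."""
    # faulthandler writes through a real file descriptor,
    # so io.StringIO won't work here - use a temp file
    with tempfile.TemporaryFile(mode="w+") as f:
        faulthandler.dump_traceback(file=f, all_threads=True)
        f.seek(0)
        return f.read()

# On Unix, a dump can instead be triggered externally, e.g.:
#   import signal
#   faulthandler.register(signal.SIGUSR1)   # then: kill -USR1 <PID>
```

Adding the `faulthandler.register(...)` line near the top of the webui's launcher would let a hang be inspected with a single `kill -USR1` from another terminal.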
78,927,692
16,187,613
How to get all styling parameters configurable by `ttk.Style().configure()` for a widget in themed Tkinter?
<p>I have been searching the answer for this question from a long time but with no success had to ask it here. I am able to get the styling parameter for from the <a href="https://tcl.tk/man/tcl8.6/TkCmd/contents.htm" rel="nofollow noreferrer">tcl documentation</a>, but my question is how can I achieve the same result programmatically.</p> <p>For example in Tkinter, we can use <code>widget.configure()</code> with no parameters to get all valid parameters for that widgets, since all design parameter must be changed using <code>Style()</code> only in themed tkinter, how can achieve the same functionality?</p> <p><strong>Edit</strong> Consider this example:</p> <pre><code>import tkinter as tk root =tk.Tk() a = tk.Label(root) print(a.configure()) #Output {'activebackground': ('activebackground', 'activeBackground', 'Foreground', &lt;string object: 'SystemButtonFace'&gt;, 'SystemButtonFace'), 'activeforeground': ('activeforeground', 'activeForeground', 'Background', &lt;string object: 'SystemButtonText'&gt;, 'SystemButtonText'), 'anchor': ('anchor', 'anchor', 'Anchor', &lt;string object: 'center'&gt;, 'center'), 'background': ('background', 'background', 'Background', &lt;string object: 'SystemButtonFace'&gt;, 'SystemButtonFace'), 'bd': ('bd', '-borderwidth'), 'bg': ('bg', '-background'), 'bitmap': ('bitmap', 'bitmap', 'Bitmap', '', ''), 'borderwidth': ('borderwidth', 'borderWidth', 'BorderWidth', &lt;string object: '2'&gt;, &lt;string object: '2'&gt;), 'compound': ('compound', 'compound', 'Compound', &lt;string object: 'none'&gt;, 'none'), 'cursor': ('cursor', 'cursor', 'Cursor', '', ''), 'disabledforeground': ('disabledforeground', 'disabledForeground', 'DisabledForeground', &lt;string object: 'SystemDisabledText'&gt;, 'SystemDisabledText'), 'fg': ('fg', '-foreground'), 'font': ('font', 'font', 'Font', &lt;string object: 'TkDefaultFont'&gt;, 'TkDefaultFont'), 'foreground': ('foreground', 'foreground', 'Foreground', &lt;string object: 'SystemButtonText'&gt;, 
'SystemButtonText'), 'height': ('height', 'height', 'Height', 0, 0), 'highlightbackground': ('highlightbackground', 'highlightBackground', 'HighlightBackground', &lt;string object: 'SystemButtonFace'&gt;, 'SystemButtonFace'), 'highlightcolor': ('highlightcolor', 'highlightColor', 'HighlightColor', &lt;string object: 'SystemWindowFrame'&gt;, 'SystemWindowFrame'), 'highlightthickness': ('highlightthickness', 'highlightThickness', 'HighlightThickness', &lt;string object: '0'&gt;, &lt;string object: '0'&gt;), 'image': ('image', 'image', 'Image', '', ''), 'justify': ('justify', 'justify', 'Justify', &lt;string object: 'center'&gt;, 'center'), 'padx': ('padx', 'padX', 'Pad', &lt;string object: '1'&gt;, &lt;string object: '1'&gt;), 'pady': ('pady', 'padY', 'Pad', &lt;string object: '1'&gt;, &lt;string object: '1'&gt;), 'relief': ('relief', 'relief', 'Relief', &lt;string object: 'flat'&gt;, 'flat'), 'state': ('state', 'state', 'State', &lt;string object: 'normal'&gt;, 'normal'), 'takefocus': ('takefocus', 'takeFocus', 'TakeFocus', '0', '0'), 'text': ('text', 'text', 'Text', '', ''), 'textvariable': ('textvariable', 'textVariable', 'Variable', '', ''), 'underline': ('underline', 'underline', 'Underline', -1, -1), 'width': ('width', 'width', 'Width', 0, 0), 'wraplength': ('wraplength', 'wrapLength', 'WrapLength', &lt;string object: '0'&gt;, &lt;string object: '0'&gt;)} </code></pre> <p>but</p> <pre><code>import tkinter.ttk as ttk b= ttk.Label(root, text=&quot;hello&quot;) print(ttk.Style(root).configure(&quot;TLabel&quot;) #Output: //Nothing </code></pre> <p><em>Image from Tcl Official Docs</em> <a href="https://i.sstatic.net/VCMNEIct.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCMNEIct.png" alt="Tcl Doc page screenshot" /></a></p> <p>Hence, I would like to get all the <strong>styling options configurable with ttk::style</strong> for a particular widget via python program.</p> <p><strong>Note:</strong></p> <p>This question differs from similar question 
asked previously on StackOverflow such as <a href="https://stackoverflow.com/questions/45389166/how-to-know-all-style-options-of-a-ttk-widget?noredirect=1&amp;lq=1">How to know all style options of a ttk widget?</a> in sense that it asks about the options of all element within a widget, <strong>But this question is specfically about the options which if passed into <code>style.configure(&quot;Widget&quot;, options)</code> are valid and have effect to the apperacnce of the widget.</strong></p>
<python><tkinter><ttk>
2024-08-29 12:05:32
1
1,200
Faraaz Kurawle
78,927,686
8,683,461
Scrapy perform action on all retrieved items
<p>I'm a noob at Scrapy. I found lots of info about pipelines, but they seem to treat items only individually. I wish to perform some actions - let's say ordering items - on the complete set, after it's been scraped. Is there a Scrapy way to do that?</p> <p>Thanks</p>
<python><scrapy><scrapy-pipeline>
2024-08-29 12:03:34
1
534
MarvinLeRouge
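Yes, a pipeline can see the whole set: buffer items in `process_item` and do the collective work in `close_spider`, which Scrapy calls once after the crawl finishes. Pipelines are duck-typed plain classes (no base class required), so the pattern is easy to sketch; the `price` key is just an illustrative item field:

```python
class SortedExportPipeline:
    """Collect every scraped item, then process the full set at the end."""

    def open_spider(self, spider):
        self.items = []

    def process_item(self, item, spider):
        self.items.append(item)
        return item  # pass the item on to any later pipeline

    def close_spider(self, spider):
        # whole-set work goes here: ordering, de-duplication, export...
        self.items.sort(key=lambda it: it.get("price", 0))
```

Register it under `ITEM_PIPELINES` in settings as usual; the sorted list would then typically be written out (file, database) at the end of `close_spider`.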
78,927,654
19,369,310
Pandas Dataframe groupby count number of rows quickly
<p>I have a dataframe that looks like</p> <pre><code>Class_ID Student_ID feature 1 4 31 1 4 86 1 4 2 1 2 11 1 2 0 5 3 2 5 9 3 5 9 2 </code></pre> <p>and I would like to count the number of times a student appears in each <code>Class_ID</code>, so the desired outcome looks like this:</p> <pre><code>Class_ID Student_ID feature count 1 4 31 3 1 4 86 3 1 4 2 3 1 2 11 2 1 2 0 2 5 3 2 1 5 9 3 2 5 9 2 2 </code></pre> <p>and here's how I did it:</p> <pre><code>df['dummy'] = 1 df['count'] = df.groupby(['Class_ID', 'Student_ID'], group_keys=False)['dummy'].transform(lambda x: x.sum()) </code></pre> <p>It works fine, but my actual dataframe is rather large (~1M rows) and the code is quite slow, so I would like to ask: is there any quicker/better way to do it? Thanks.</p>
<python><pandas><dataframe><group-by>
2024-08-29 11:56:20
2
449
Apook
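The slowdown comes from the Python-level `lambda`; groupby has a built-in `'size'` aggregation that runs in compiled code and needs no dummy column, e.g. `df['count'] = df.groupby(['Class_ID', 'Student_ID'])['Student_ID'].transform('size')` (assuming a reasonably recent pandas). What `'size'` computes is just each (Class_ID, Student_ID) pair's frequency broadcast back onto its rows, which pure Python shows on the question's data:

```python
from collections import Counter

# (Class_ID, Student_ID) pairs from the example dataframe
rows = [(1, 4), (1, 4), (1, 4), (1, 2), (1, 2), (5, 3), (5, 9), (5, 9)]

# what groupby(...).transform('size') broadcasts back to each row
freq = Counter(rows)
counts = [freq[pair] for pair in rows]
```

The `transform('size')` one-liner is typically orders of magnitude faster than the `lambda x: x.sum()` version on ~1M rows, because no per-group Python function is invoked.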
78,927,503
12,466,687
Python Regex to only include/extract numbers, decimals and Hyphen (-)
<p>I am not good at <strong>regex</strong> and have been trying &amp; failing to <strong>extract only</strong> <code>numbers, decimals and -</code> from a column in python.</p> <p>Even better if spaces can also be removed but if not even then it is still manageable.</p> <p>I have tested <code>^(\d.+)|[-]</code> and <code>^(\d.+)|[-]?[^a-z]+$/i</code> and <code>^(\d.+)|[-]?(\d+)?</code> but none of them worked correctly.</p> <p>Test Cases (Basically these are Ranges from inconsistent format)</p> <pre><code>28193.13 28913 28913-13 28193.13-28193.13 28193.13 - 28193.13 28193.13 - 28193.13 / cm - 28193.13 -28193.13 28913- 28913 - </code></pre> <p>Dataframe</p> <pre><code>test_df = pd.DataFrame({&quot;Range&quot;: [28193.13,28913,'28913-13','28193.13-28193.13', '28193.13 - 28193.13','28193.13 - 28193.13 / cm', '- 28193.13','28913-','28913 -']}) test_df </code></pre> <p>Code tried: <code>test_df['Range'].str.extract(r&quot;^(\d.+)|[-]?[^a-z]+$/i&quot;)</code></p> <p>Desired Results on above cases:</p> <pre><code>28193.13 28913 28913-13 28193.13-28193.13 28193.13-28193.13 28193.13-28193.13 -28193.13 -28193.13 28913- 28913- </code></pre> <p><strong>Issue:</strong> I am unable to remove characters from this <code>28193.13 - 28193.13 / cm</code> with my regex code as the desired result from this would be <code>28193.13-28193.13</code>.</p> <p><strong>Tool:</strong> I have used this <a href="https://www.akto.io/tools/numbers-regex-go-tester" rel="nofollow noreferrer">regex test</a> website to test regex code</p> <p>Appreciate any help.</p>
<python><regex>
2024-08-29 11:17:09
2
2,357
ViSa
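Since the goal is "keep only digits, dots, and hyphens", inverting the character class and deleting everything else handles every listed case in one pass, including stripping spaces and the `/ cm` suffix:

```python
import re

def clean_range(value) -> str:
    # delete every character that is not a digit, '.', or '-'
    return re.sub(r"[^0-9.\-]+", "", str(value))
```

In pandas this would be something like `test_df['Range'].astype(str).str.replace(r'[^0-9.\-]+', '', regex=True)`; the `astype(str)` matters because the first two rows are numeric, not strings.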
78,927,337
3,405,291
ModuleNotFoundError: No module named 'mononphm'
<p>I have followed the instructions here:</p> <p><a href="https://github.com/SimonGiebenhain/MonoNPHM?tab=readme-ov-file#31-demo" rel="nofollow noreferrer">https://github.com/SimonGiebenhain/MonoNPHM?tab=readme-ov-file#31-demo</a></p> <p>To run the model by:</p> <pre class="lang-bash prettyprint-override"><code>python scripts/inference/rec.py --model_type nphm --exp_name pretrained_mononphm --ckpt 2500 --seq_name 00898 --no-intrinsics_provided --downsample_factor 0.33 </code></pre> <p>I'm receiving this error:</p> <blockquote> <p>Traceback (most recent call last):</p> <p>File &quot;/home/arisa/MonoNPHM/scripts/inference/rec.py&quot;, line 7, in </p> <p>from mononphm.photometric_tracking.tracking import track</p> <p>ModuleNotFoundError: No module named 'mononphm'</p> </blockquote> <p>The error happens when importing by <code>from mononphm</code>:</p> <pre class="lang-py prettyprint-override"><code>import json, os, yaml import torch import numpy as np import tyro from typing import Literal from mononphm.photometric_tracking.tracking import track from mononphm.photometric_tracking.wrapper import WrapMonoNPHM from mononphm.models.neural3dmm import nn3dmm from mononphm.models import setup_training from mononphm import env_paths from mononphm.utils.others import EasyDict </code></pre> <p>I have tried some solutions here: <a href="https://stackoverflow.com/q/43728431/3405291">Relative imports - ModuleNotFoundError: No module named x</a></p> <p>But none of the solutions I tried have worked so far.</p> <p>It looks like others are <em>not</em> getting my error, they go beyond and receive other errors: <a href="https://github.com/SimonGiebenhain/MonoNPHM/issues/7#issuecomment-2220001296" rel="nofollow noreferrer">https://github.com/SimonGiebenhain/MonoNPHM/issues/7#issuecomment-2220001296</a></p> <p>Can anyone help?</p>
<python><python-3.x><package><python-import><relative-import>
2024-08-29 10:31:04
1
8,185
Megidd
78,927,330
18,769,241
dlib's load_rgb_image() couldn't be found
<p>So I have successfully installed <code>dlib</code> version <code>18.17.0</code>, which is compatible with <code>Python 2</code>. Classes such as <code>simple_object_detector</code> could be instantiated and returned an instance of the object. But a function called <code>load_rgb_image</code> couldn't be found, and when I ran the following code:</p> <pre><code>import dlib img = dlib.load_rgb_image(filename) </code></pre> <p>I got the error:</p> <pre><code>AttributeError: 'module' object has no attribute 'load_rgb_image' </code></pre> <p>How can I call this function? Is it a static function?</p>
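Editor's note (hedged, since it depends on the exact builds involved): `load_rgb_image` does not exist in dlib 18.17; it only appeared in later 19.x releases, which is why the attribute lookup fails while older classes like `simple_object_detector` work. Era-appropriate examples read images with another library instead (e.g. `skimage.io.imread`). A version-tolerant pattern is a `getattr` fallback; the sketch below uses stand-in objects so it runs without dlib installed:

```python
import types

def pick_image_loader(dlib_module, fallback_reader):
    """Use dlib.load_rgb_image when the installed build provides it,
    otherwise fall back to another reader (e.g. skimage.io.imread)."""
    return getattr(dlib_module, "load_rgb_image", fallback_reader)

# Stand-ins for the two situations (dlib itself is not imported here):
old_dlib = types.SimpleNamespace()  # like 18.17: attribute is missing
new_dlib = types.SimpleNamespace(load_rgb_image=lambda p: ("rgb", p))

fallback = lambda p: ("fallback", p)
print(pick_image_loader(old_dlib, fallback)("face.jpg"))  # -> ('fallback', 'face.jpg')
print(pick_image_loader(new_dlib, fallback)("face.jpg"))  # -> ('rgb', 'face.jpg')
```

With a real install, upgrading dlib (where Python 2 support allows) is the simpler route than the fallback.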
<python><python-2.x><attributeerror><dlib>
2024-08-29 10:28:55
1
571
Sam
78,927,079
14,860,526
Compare excel files in python
<p>I want to compare two Excel files generated by Pandas. What I want is that they are exactly the same, not just the content but the formatting as well. If I use filecmp, however, it reports that they are different:</p> <pre><code>import filecmp import pandas as pd df1 = pd.DataFrame( [['a', 'b'], ['c', 'd']], index=['row 1', 'row 2'], columns=['col 1','col 2'] ) df1.to_excel(&quot;output.xlsx&quot;, sheet_name='Sheet_name_1') df1.to_excel(&quot;output2.xlsx&quot;, sheet_name='Sheet_name_1') filecmp.cmp(&quot;output.xlsx&quot;, &quot;output2.xlsx&quot;) </code></pre> <p>I can, and did, write a function that loops over all the sheets in the workbooks, all the cells in a sheet, and all the attributes in a cell and checks that they are the same. But this is extremely slow. Is there any faster way?</p>
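Editor's note (an assumption about why `filecmp` disagrees): an `.xlsx` file is a zip container, and members such as `docProps/core.xml` embed per-write creation/modification timestamps, so two otherwise identical writes differ at the byte level. Comparing the zip members directly, skipping that metadata part, is much faster than a cell-by-cell loop and still covers formatting, since styles live in `xl/styles.xml`:

```python
import zipfile

def xlsx_equal(path_a, path_b, ignore=("docProps/core.xml",)):
    """Byte-compare the members of two .xlsx zip containers,
    skipping `ignore` (core.xml carries per-write timestamps)."""
    with zipfile.ZipFile(path_a) as za, zipfile.ZipFile(path_b) as zb:
        names_a = set(za.namelist()) - set(ignore)
        names_b = set(zb.namelist()) - set(ignore)
        if names_a != names_b:
            return False
        return all(za.read(n) == zb.read(n) for n in sorted(names_a))
```

One caveat: if the zip compression settings ever differ between writers, byte comparison can report a difference where the decompressed XML is identical; for files produced by the same library version that is unlikely.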
<python><excel>
2024-08-29 09:30:41
1
642
Alberto B
78,926,673
1,981,797
bin data after filtering
<p>I have an experimental dataset in which I could see a feature when plotting a 2D histogram: <a href="https://i.sstatic.net/6iuJ48BM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6iuJ48BM.png" alt="enter image description here" /></a></p> <p>As can be seen, something appears in the region around 218.46 on the x-axis and 86000 on the y-axis.</p> <p>After filtering and aggregating my dataset with this</p> <pre><code>edc = ( df_TR.filter( (pl.col(&quot;t&quot;) &gt; 85_700) &amp; (pl.col(&quot;t&quot;) &lt; 86_090) &amp; (pl.col(&quot;delay&quot;) &gt; 218.30) &amp; (pl.col(&quot;delay&quot;) &lt; 218.72) ) .group_by(&quot;delay&quot;).agg(pl.col(&quot;t&quot;).sum().alias(&quot;sum_t&quot;)) ) </code></pre> <p>The resulting plot I'm getting is this one:</p> <pre><code>f, ax = plt.subplots() ax.plot(edc[&quot;delay&quot;], edc[&quot;sum_t&quot;], &quot;.&quot;) ax.set_ylim(0, 1e7); </code></pre> <p><a href="https://i.sstatic.net/192GcQS3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/192GcQS3.png" alt="enter image description here" /></a></p> <p>But that's not exactly what I have in mind, because I would need to bin the data after filtering/grouping in order to produce something like this (forget about the fitting, as this plot belongs to another experiment):</p> <p><a href="https://i.sstatic.net/pNjeONfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pNjeONfg.png" alt="enter image description here" /></a></p> <p>But I can't find a &quot;binning&quot; operation in polars. I would need to bin the delay axis and, in each of those bins, sum (or average) all the times.</p> <p>Any ideas how to achieve that?</p>
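Editor's note (not from the question): the usual trick when a library has no explicit histogram step is to derive each point's bin edge arithmetically, `floor(delay / width) * width`, and group on that derived value. In polars that would be an expression along the lines of `(pl.col("delay") / width).floor() * width` used as the `group_by` key (written from the general polars expression API, not tested against this dataset). The arithmetic itself in a dependency-free sketch:

```python
import math
from collections import defaultdict

def bin_sum(delays, values, width):
    """Sum `values` over fixed-width bins of `delays`;
    keys are the left edges of the bins."""
    sums = defaultdict(float)
    for d, v in zip(delays, values):
        edge = math.floor(d / width) * width
        sums[round(edge, 12)] += v  # round to tame float noise in the key
    return dict(sorted(sums.items()))

print(bin_sum([1.0, 1.2, 2.5], [10, 20, 30], 1.0))  # {1.0: 30.0, 2.0: 30.0}
```

Averaging instead of summing only needs a second accumulator counting the points per bin.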
<python><python-polars>
2024-08-29 07:43:57
0
641
Maxwell's Daemon