Columns: QuestionId (int64, 74.8M–79.8M); UserId (int64, 56–29.4M); QuestionTitle (string, 15–150 chars); QuestionBody (string, 40–40.3k chars); Tags (string, 8–101 chars); CreationDate (date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18); AnswerCount (int64, 0–44); UserExpertiseLevel (int64, 301–888k); UserDisplayName (string, 3–30 chars)
77,532,591
8,854,526
Is it possible to optimize a function with discrete variables in `mealpy`?
<p>I have been working with the <a href="https://pypi.org/project/mealpy/" rel="nofollow noreferrer"><code>mealpy</code> Python library</a> for quite some time. It is a repository of around 200 metaheuristic algorithms, including GA, DE, ACO, PSO and others, authored by <a href="https://github.com/thieu1995" rel="nofollow noreferrer">Nguyen Van Thieu, alias <code>thieu1995</code></a>.</p> <p>However, working with <strong>discrete variables</strong> (particularly categorical variables in the form of strings, or integer variables) remains a nightmare. Variables have to be encoded, then decoded again in the fitness function or outside of it, which is prone to cause several anomalies, especially when there are dependent variables in the model alongside discrete constraints or ranges.</p> <p>The same (though somewhat less painful) applies to integer variables, which have to be rounded or truncated every time the fitness function is evaluated, and once again when the best solution is presented.</p> <p>Is it possible to encode <strong>integer</strong> or <strong>categorical</strong> variables with <code>mealpy</code>?</p>
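For reference, the usual workaround (absent native discrete support) is exactly the pattern described above: keep every variable continuous for the optimizer and decode it inside the fitness function. A minimal sketch of that encode/decode pattern, independent of mealpy — the category list, bounds, and toy fitness are all made up for illustration, and a grid search stands in for the metaheuristic. Newer mealpy releases reportedly ship typed variable classes (integer/string variables), so checking the library's current documentation is worthwhile before hand-rolling this.

```python
import numpy as np

CATEGORIES = ["relu", "tanh", "sigmoid"]  # hypothetical categorical variable

def decode(solution):
    """Map a continuous solution vector to (int, category) before evaluating."""
    n_layers = int(round(float(solution[0])))              # integer var: round
    idx = min(int(float(solution[1])), len(CATEGORIES) - 1)  # categorical: truncate
    return n_layers, CATEGORIES[idx]

def fitness(solution):
    n_layers, activation = decode(solution)
    # toy objective: prefer 3 layers and "tanh"
    return abs(n_layers - 3) + (0 if activation == "tanh" else 1)

# continuous bounds handed to the optimizer: [0.51, 10.49] and [0, 3)
best = min(
    (np.array([x, c]) for x in np.linspace(0.51, 10.49, 50)
                      for c in np.linspace(0.0, 2.99, 30)),
    key=fitness,
)
print(decode(best))
```

The decode step is the single place where rounding/indexing happens, so the anomalies described above stay contained in one function.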
<python><optimization><genetic-algorithm><discrete-optimization>
2023-11-22 19:17:53
1
326
Partha D.
77,532,463
10,859,585
Collecting data from nested lists and dictionaries without so many for/if statements
<p>I've been working a lot with monday.com's API lately. Once I receive queried data, I retrieve specific pieces to check them with other values, manipulate them, etc. Due to the complexity of the query, I often find myself having to iterate through nested lists of dictionaries with values of more nested lists of dictionaries. It's fairly easy to iterate through them to find the exact information I am looking for, but I'd like to know if there are some better practices than using list comprehensions, or multiple <code>for</code>/<code>if</code> statements.</p> <p>Let's use the below as an example. The goal is to find the &quot;items&quot; value id where the name is equal to the &quot;groups&quot; title after the second dash (<code>-</code>).</p> <p>The first &quot;groups&quot; title to match is <code>2000 Foo Bar: Mapping</code>. That would return the id of <code>1234564130</code>.</p> <p>The second &quot;groups&quot; title is <code>Computer 2023</code>. That would return the id of <code>1234564074</code>.</p> <p>So on and so forth.</p> <p>The code I have below works and is fast enough for my current situation, but I know that using double <code>for</code> loops can become quite slow (O(n²)). And I have 5 nested <code>for</code> loops....
I understand this question can be opinion-based, but are there some better methods for digging this specific information out?</p> <pre><code>{'data': {'boards': [{'groups': [{'title': '123456-G123456.00 - 2000 Foo Bar: Mapping', 'items_page': {'cursor': None, 'items': [{'id': '1234564130', 'name': '2000 Foo Bar: Mapping'}, {'id': '1234564156', 'name': '2000.5 - 2000.5 Ground Model'}]}}, {'title': '123456-R12345.00 - Computer 2023', 'items_page': {'cursor': None, 'items': [{'id': '1234564074', 'name': 'Computer 2023'}, {'id': '1234564096', 'name': '3000.1 - 3000.1 Veggies'}]}}, {'title': '123456-T12345.00 - Dodge - Design', 'items_page': {'cursor': None, 'items': [{'id': '1234564028', 'name': 'Dodge - Design'}, {'id': '1234564048', 'name': '-'}]}}, {'title': 'Group Title', 'items_page': {'cursor': None, 'items': [{'id': '1234563996', 'name': 'Task 1'}]}}]}]}, 'account_id': 123456} </code></pre> <pre><code>query_group_id = f&quot;&quot;&quot; {{ boards (ids: {my_board_id}) {{ groups {{ title items_page (limit: 50) {{ cursor items {{ id name }} }} }} }} }} &quot;&quot;&quot; data = {'query' : query_group_id} r = requests.post(url=apiUrl, json=data, headers=headers) r_dict = r.json() group_board_info = r_dict['data']['boards'][0]['groups'] # This is absolutely disgusting for dictionary in group_board_info: for k,v in dictionary.items(): if k == 'title': g_name = '-'.join(v.split('-')[2:]).lstrip() # Remove leading white space if k == &quot;items_page&quot;: for k2, v2 in v.items(): if k2 == 'items': for dictionary2 in v2: vals_list = list(dictionary2.values()) keys_list = list(dictionary2.keys()) for idx, key in enumerate(keys_list): if key == 'id': if vals_list[1] == g_name: create_subitem_for_item(vals_list[0], 'init') </code></pre> <p><strong>Edits</strong>:</p> <p>With the comments below and some reconsideration, here is my new code. This is much easier to read and comprehend, and should be easy to integrate with other iteration methods.
A colleague of mine suggested looking into <code>glom</code>.</p> <pre><code>for dictionary in group_board_info: items = dictionary['items_page']['items'] for item in items: if '-'.join(dictionary['title'].split('-')[2:]).lstrip() == item['name']: create_subitem_for_item(item['id'], 'init') </code></pre>
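For comparison, the whole lookup can also be collapsed into one comprehension over the parsed response. This sketch uses the sample payload from the question (abbreviated to the first two groups), and collects matching ids into a list instead of calling <code>create_subitem_for_item</code>:

```python
data = {'data': {'boards': [{'groups': [
    {'title': '123456-G123456.00 - 2000 Foo Bar: Mapping',
     'items_page': {'cursor': None, 'items': [
         {'id': '1234564130', 'name': '2000 Foo Bar: Mapping'},
         {'id': '1234564156', 'name': '2000.5 - 2000.5 Ground Model'}]}},
    {'title': '123456-R12345.00 - Computer 2023',
     'items_page': {'cursor': None, 'items': [
         {'id': '1234564074', 'name': 'Computer 2023'},
         {'id': '1234564096', 'name': '3000.1 - 3000.1 Veggies'}]}},
]}]}, 'account_id': 123456}

def group_name(title):
    """Everything after the second dash, with leading whitespace removed."""
    return '-'.join(title.split('-')[2:]).lstrip()

matching_ids = [
    item['id']
    for group in data['data']['boards'][0]['groups']
    for item in group['items_page']['items']
    if item['name'] == group_name(group['title'])
]
print(matching_ids)   # -> ['1234564130', '1234564074']
```

The comprehension has the same asymptotic cost as the loops (every item is still visited once), but it removes the manual key/value bookkeeping entirely.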
<python><list><dictionary>
2023-11-22 18:50:49
2
414
Binx
77,532,451
9,251,158
Authenticated request to a RESTful API gives 404
<p>I want to use the BeyondWords RESTful API, with documentation <a href="https://api.beyondwords.io/docs" rel="nofollow noreferrer">here</a>:</p> <pre><code>POST /projects/{project_id}/content (Create) </code></pre> <p>Unfortunately, they don't provide sample code, so I use their request body and these threads (<a href="https://stackoverflow.com/questions/53075939/calling-rest-api-with-an-api-key-using-the-requests-package-in-python">Calling REST API with an API key using the requests package in Python</a> and <a href="https://stackoverflow.com/questions/32659134/python-post-call-throwing-400-bad-request">Python Post call throwing 400 Bad Request</a>) to write this code:</p> <pre><code>import requests BEYOND_WORDS_KEY = &quot;some_key&quot; BEYOND_WORDS_PROJECT = 12345 j = { &quot;type&quot;: &quot;auto_segment&quot;, &quot;title&quot;: &quot;&lt;h1 data-beyondwords-marker=\&quot;h1-title\&quot;&gt;Example title&lt;/h1&gt;&quot;, &quot;summary&quot;: &quot;&lt;h2 data-beyondwords-marker=\&quot;h2-summary\&quot;&gt;This article is about TTS&lt;/h2&gt;&quot;, &quot;body&quot;: &quot;&lt;p data-beyondwords-marker=\&quot;paragraph-1\&quot;&gt;Example body&lt;/p&gt;&quot;, &quot;source_id&quot;: &quot;example-source-id&quot;, &quot;source_url&quot;: &quot;https://example.com/some-article&quot;, &quot;author&quot;: &quot;John Smith&quot;, &quot;image_url&quot;: &quot;https://example.com/image.jpeg&quot;, &quot;metadata&quot;: { &quot;key&quot;: &quot;value&quot; }, &quot;published&quot;: True, &quot;publish_date&quot;: &quot;2023-01-01 00:00:00 UTC&quot;, &quot;ads_enabled&quot;: True, &quot;auto_segment_updates_enabled&quot;: True, &quot;title_voice_id&quot;: 1, &quot;summary_voice_id&quot;: 1, &quot;body_voice_id&quot;: 1 } headers = {'Accept': 'application/json'} auth = requests.auth.HTTPBasicAuth('apikey', BEYOND_WORDS_KEY) response = requests.post( url=&quot;https://api.beyondwords.io/projects/%d/content&quot; % BEYOND_WORDS_PROJECT, headers=headers, auth=auth, 
json=j ) print(response, response.status_code, response.reason) </code></pre> <p>and the result is:</p> <pre><code>&lt;Response [404]&gt; 404 Not Found </code></pre> <p>Is my code correct, in which case I should contact the website for support?</p>
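One way to rule out client-side mistakes before contacting support is to inspect the prepared request: the exact URL <code>requests</code> will hit and the Authorization header it will send (a 404, rather than 401/403, usually points at the path or project id rather than the credentials). A sketch using the placeholder values from the question:

```python
import base64

import requests

BEYOND_WORDS_KEY = "some_key"
BEYOND_WORDS_PROJECT = 12345

req = requests.Request(
    method="POST",
    url="https://api.beyondwords.io/projects/%d/content" % BEYOND_WORDS_PROJECT,
    headers={"Accept": "application/json"},
    auth=requests.auth.HTTPBasicAuth("apikey", BEYOND_WORDS_KEY),
    json={"type": "auto_segment"},
)
prepared = req.prepare()          # what would actually go on the wire
print(prepared.url)               # exact URL the server will see
token = prepared.headers["Authorization"].split()[1]
print(base64.b64decode(token))    # credentials as the server decodes them
```

Comparing these values against the API documentation (some APIs version their paths, e.g. a <code>/v1/</code> prefix, or expect the key in a bearer/header scheme rather than basic auth — an assumption worth checking here) often explains a 404 without any support ticket.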
<python><rest>
2023-11-22 18:49:04
0
4,642
ginjaemocoes
77,532,284
3,963,478
Python module setup.py dependency updates by Renovate combined with GitLab Package Registry
<p>We have Python modules, which were referencing other Python modules from an internal GitLab Package Registry. The <code>setup.py</code> looks like the following:</p> <pre class="lang-py prettyprint-override"><code>import setuptools setuptools.setup( name=&quot;dummy-module&quot;, version=&quot;0.1.0&quot;, author=&quot;Dummy User&quot;, author_email=&quot;dummy.user@local.com&quot;, description=&quot;Dumm module.&quot;, packages=setuptools.find_packages(), classifiers=[ &quot;Programming Language :: Python :: 3&quot;, &quot;License :: OSI Approved :: MIT License&quot;, &quot;Operating System :: OS Independent&quot;, ], python_requires='&gt;=3.6', install_requires=[ &quot;other-internal-module @ https://gitlab.local.com/api/v4/projects/1/packages/pypi/files/0635d9dc9b32911047c19d2814b0b574be7c91756de2283f5217d9e098f79bab/other-internal-module-0.1.0-py3-none-any.whl&quot;, &quot;python-ldap==3.4.4&quot;] ) </code></pre> <p>Our goal is to not use a separate <code>requirements.txt</code> with redundant content and also to install all dependencies directly when installing the Python module <code>dummy-module</code>. This works so far by using the <code>setup.py</code> with <code>install_requires</code>, as shown above.</p> <p>Renovate is configured and also able to detect and update <code>python-ldap</code> inside the <code>setup.py</code>. However, the <code>other-internal-module</code> will not be updated. 
It might be because of the &quot;direct reference&quot; (-&gt; <code>@ https://gitlab.local.com/api/v4/projects/1/packages/pypi/files/0635d9dc9b32911047c19d2814b0b574be7c91756de2283f5217d9e098f79bab/other-internal-module-0.1.0-py3-none-any.whl</code>), or Renovate may not be able to handle this kind of reference.</p> <p><a href="https://i.sstatic.net/MWZ3h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MWZ3h.png" alt="enter image description here" /></a></p> <p>So my questions would be:</p> <ul> <li>Does anyone have a hint how it would be possible to get dependency updates by Renovate for <code>other-internal-module</code> in this kind of construct?</li> <li>Or does anyone have another idea of how to define internally hosted (by the GitLab Package Registry) Python modules inside the <code>setup.py</code>, so that during installation of <code>dummy-module</code> the module <code>other-internal-module</code> will also be installed from the local GitLab Package Registry?</li> </ul>
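One commonly suggested workaround (an assumption, not verified against Renovate's documentation) is to avoid the PEP 508 direct URL reference entirely: point pip at the GitLab registry as an extra index, so the dependency becomes an ordinary versioned requirement that version-bumping tools can parse. A sketch of the metadata that would be passed to <code>setuptools.setup(**metadata)</code>:

```python
# "other-internal-module" is now an ordinary pinned requirement. The GitLab
# Package Registry is supplied out of band, e.g. in pip.conf or on the CLI:
#   pip install --extra-index-url \
#       https://gitlab.local.com/api/v4/projects/1/packages/pypi/simple \
#       dummy-module
metadata = {
    "name": "dummy-module",
    "version": "0.1.0",
    "install_requires": [
        "other-internal-module==0.1.0",  # a plain pin a bot can bump
        "python-ldap==3.4.4",
    ],
}
```

The trade-off is that the registry URL moves from <code>setup.py</code> into pip configuration (or CI variables), but transitive installation of <code>other-internal-module</code> during <code>pip install dummy-module</code> still works.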
<python><gitlab><setup.py><renovate>
2023-11-22 18:14:51
1
2,228
Patrick
77,531,932
10,985,257
Interpretation of branch coverage
<p>This is kind of a follow-up question to <a href="https://stackoverflow.com/questions/37304078/how-do-i-interpret-python-coverage-py-branch-coverage-results">How do I interpret Python coverage.py branch coverage results?</a></p> <p>I have a different situation: I check a variable <code>perr</code> which, in the different scenarios I am testing, can be either an empty <code>List</code> or a <code>List[str]</code>.</p> <p>The code looks similar to this:</p> <pre class="lang-py prettyprint-override"><code>if perr: message = self.trim_password_prompt(perr[0]) if len(message): raise SystemError(f&quot;{message}&quot;) return pout </code></pre> <p>Assuming that the if-statement is at line 92 and the return-statement on line 97, the missing branch is listed as <code>92-&gt;97</code>.</p> <p>I have other if-statements in my code, which are successfully tested in multiple tests.</p> <p>After all the things I have read, I don't understand why the coverage fails here, since I have tests that reach both lines.</p>
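One common explanation (hedged, since the real tests aren't shown): the partial branch <code>92-&gt;97</code> only disappears when some test enters the function with a <em>falsy</em> <code>perr</code>, so execution jumps directly from the <code>if</code> to the <code>return</code>. Reaching both lines is not enough; coverage wants that specific arc. A stand-alone reproduction with <code>trim_password_prompt</code> stubbed out, enumerating the three paths:

```python
def run(perr, pout="ok", trim=lambda s: s):  # trim stands in for the real method
    if perr:                                 # "line 92"
        message = trim(perr[0])
        if len(message):
            raise SystemError(f"{message}")
    return pout                              # "line 97"

# Path 1: perr truthy, message non-empty -> raises, never reaches line 97
try:
    run(["bad password"])
except SystemError:
    pass
# Path 2: perr truthy, message empty -> 92 -> 93 -> ... -> 97
assert run([""]) == "ok"
# Path 3: perr falsy -> the 92->97 arc that coverage reports as missing
assert run([]) == "ok"
```

If only paths 1 and 2 appear in the test suite, both lines 92 and 97 show as covered while the <code>92-&gt;97</code> branch stays missing.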
<python><coverage.py>
2023-11-22 17:12:53
1
1,066
MaKaNu
77,531,821
2,473,382
Typing/completion in a generic container
<p>Imagine this setup, where I define a generic container whose goal is to make it easy to search inside:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any, Generic, Self, TypeVar from pydantic import BaseModel T = TypeVar(&quot;T&quot;) class Container(Generic[T]): def __init__(self, values: list[T]) -&gt; None: self._values = values def search(self, column: str, value: Any) -&gt; list[T]: return [r for r in self._values if getattr(r, column) == value] class SomeData(BaseModel): col1: int col2: int </code></pre> <p>I would use it this way:</p> <pre class="lang-py prettyprint-override"><code>c = Container( values=[ SomeData(col1=1, col2=2), SomeData(col1=10, col2=20), ] ) print(c.search(column=&quot;col1&quot;, value=1)) # returns [SomeData(col1=1, col2=2)] </code></pre> <p>This works perfectly at runtime.</p> <p>I will always use this container to hold one and only one object type.</p> <p>I would like to have typing/autocompletion so that only the column names of <code>SomeData</code> are allowed as the <code>column</code> parameter, but I cannot think of a way to do it.</p> <p>Is it at all possible? Ideally, I would love to have a SQLAlchemy-like syntax: <code>c.search(col1 == 1)</code> or <code>c.search(c.col1 == 1)</code>.</p>
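One partial workaround (a sketch, not a full answer) is to declare a <code>Literal</code> alias of the column names next to the model: type checkers will then autocomplete the names and reject unknown ones, at the cost of repeating the field list and tying this <code>search</code> signature to one model (a per-model subclass would generalize it). A dataclass stands in for the pydantic model here:

```python
from dataclasses import dataclass
from typing import Any, Generic, Literal, TypeVar

T = TypeVar("T")
SomeDataColumn = Literal["col1", "col2"]  # must be kept in sync with the model

@dataclass
class SomeData:
    col1: int
    col2: int

class Container(Generic[T]):
    def __init__(self, values: list[T]) -> None:
        self._values = values

    def search(self, column: SomeDataColumn, value: Any) -> list[T]:
        # a type checker flags c.search(column="col3", ...) at this signature
        return [r for r in self._values if getattr(r, column) == value]

c = Container([SomeData(1, 2), SomeData(10, 20)])
print(c.search(column="col1", value=1))
```

Note that <code>Literal</code> is a static-analysis construct only: at runtime any string still passes, so this gives completion and type-check errors, not enforcement.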
<python><python-typing><pydantic>
2023-11-22 16:53:09
1
3,081
Guillaume
77,531,793
13,874,745
The Loss can’t back propagate to model’s parameters with my customized loss function
<p>I designed a customized loss:</p> <pre><code>class CustomIndicesEdgeAccuracyLoss(torch.nn.Module): def __init__(self, num_classes: int, selected_indices: list): super(CustomIndicesEdgeAccuracyLoss, self).__init__() self.num_classes = num_classes self.selected_indices = selected_indices def forward(self, input: torch.Tensor, target: torch.Tensor) -&gt; torch.Tensor: batch_size, num_classes, feature_size = input.shape selected_input = input[::, ::, self.selected_indices] selected_target = target[::, self.selected_indices] selected_preds = torch.argmax(selected_input, dim=1) edge_acc = torch.eq(selected_preds, selected_target).sum()/torch.numel(selected_preds) loss = 1 - edge_acc loss.requires_grad = True return loss </code></pre> <p>But the <code>loss</code> won’t back-propagate to the model’s parameters; in other words, the gradients of the model’s parameters are always 0 and the parameters can’t be updated. What are the possible reasons? How should I revise the code?</p> <p>Here is some information about the local variables of <code>forward()</code>:</p> <pre><code>input.shape: torch.Size([64, 3, 5]) target.shape: torch.Size([64, 5]) selected_input.shape: torch.Size([64, 3, 2]) selected_target.shape: torch.Size([64, 2]) </code></pre> <p>PS. I asked the same question <a href="https://discuss.pytorch.org/t/the-loss-cant-back-propagate-to-models-parameters-with-my-customized-loss-function/192393" rel="nofollow noreferrer">here</a>, so I will copy the answer from this post to there, and vice versa.</p>
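For reference: <code>argmax</code> and <code>eq</code> are piecewise-constant, so their gradient is zero almost everywhere, and setting <code>loss.requires_grad = True</code> only masks the broken graph rather than fixing it. A common substitute (an assumption about the intent, not the asker's exact metric) is a differentiable surrogate such as cross-entropy restricted to the selected indices:

```python
import torch
import torch.nn.functional as F

selected_indices = [1, 3]                           # hypothetical indices
input = torch.randn(8, 3, 5, requires_grad=True)    # (batch, classes, features)
target = torch.randint(0, 3, (8, 5))

selected_input = input[:, :, selected_indices]      # (8, 3, 2), keeps the graph
selected_target = target[:, selected_indices]       # (8, 2)

# cross_entropy expects class scores in dim 1, so this shape works directly
loss = F.cross_entropy(selected_input, selected_target)
loss.backward()
print(input.grad[:, :, selected_indices].abs().sum())  # non-zero: gradients flow
```

Training with this surrogate decreases as the (hard) edge accuracy increases, while the original accuracy-based expression can still be logged as a metric.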
<python><pytorch><loss-function>
2023-11-22 16:48:55
1
451
theabc50111
77,531,749
2,749,397
The computation had not completed because of the undecidable set membership is found in every candidates
<p>Using Sympy from IPython, I receive an error that I cannot understand.</p> <p>The <code>Y1</code> function below is a polynomial in <em>x</em>, that has a maximum in [0,1] (see the plot, <em>please notice that the <em>y</em> axis is inverted!</em>), and I've checked that using simpler polynomials works, e.g. <code>maximum(x*(1-x), x, Interval(0, 1)) → 1/4</code>, but with <code>Y1</code>, to repeat myself, I receive an error message that I cannot understand. Anyone willing to help me?</p> <pre><code>In [8]: plot((Y1, interval1), (Y2, interval2), backend=InvertYAxis); </code></pre> <p><a href="https://i.sstatic.net/i2g2u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i2g2u.png" alt="enter image description here" /></a></p> <pre><code>In [9]: Y1 Out[9]: x**4/24 - 5*x**3/64 + 7*x/192 In [10]: maximum(Y1, x, Interval(0, 1)) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[10], line 1 ----&gt; 1 maximum(Y1, x, Interval(0, 1)) File /usr/lib/python3.11/site-packages/sympy/calculus/util.py:792, in maximum(f, symbol, domain) 789 if domain is S.EmptySet: 790 raise ValueError(&quot;Maximum value not defined for empty domain.&quot;) --&gt; 792 return function_range(f, symbol, domain).sup 793 else: 794 raise ValueError(&quot;%s is not a valid symbol.&quot; % symbol) File /usr/lib/python3.11/site-packages/sympy/calculus/util.py:208, in function_range(f, symbol, domain) 203 raise NotImplementedError( 204 'Infinite number of critical points for {}'.format(f)) 206 critical_points += solution --&gt; 208 for critical_point in critical_points: 209 vals += FiniteSet(f.subs(symbol, critical_point)) 211 left_open, right_open = False, False File /usr/lib/python3.11/site-packages/sympy/sets/sets.py:1561, in Intersection.__iter__(self) 1559 if not candidates: 1560 raise TypeError(&quot;None of the constituent sets are iterable&quot;) -&gt; 1561 raise TypeError( 1562 &quot;The computation had 
not completed because of the &quot; 1563 &quot;undecidable set membership is found in every candidates.&quot;) TypeError: The computation had not completed because of the undecidable set membership is found in every candidates. </code></pre>
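Until the symbolic path works, a numeric fallback sidesteps the undecidable set membership entirely (this avoids SymPy rather than fixing it): differentiate the polynomial, take the real roots of the derivative inside the interval with NumPy, and compare those critical points against the endpoints.

```python
import numpy as np

# Y1 from the question: x**4/24 - 5*x**3/64 + 7*x/192
p = np.poly1d([1 / 24, -5 / 64, 0, 7 / 192, 0])
dp = p.deriv()

# real critical points inside [0, 1], plus the interval endpoints
candidates = [r.real for r in dp.r if abs(r.imag) < 1e-12 and 0 <= r.real <= 1]
xs = np.array([0.0, 1.0] + candidates)
best_x = xs[np.argmax(p(xs))]
print(best_x, p(best_x))   # location and value of the maximum of Y1 on [0, 1]
```

For this polynomial both endpoints evaluate to 0 and there is a single interior critical point, so the maximum is attained strictly inside the interval.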
<python><sympy>
2023-11-22 16:43:25
1
25,436
gboffi
77,531,574
16,707,518
Reduce value of maximum value in multiple columns by one in pandas dataframe by group
<p>Apologies - this is very similar to a query I had earlier but I was unable to delete as I'd made a mistake within it.</p> <p>I simply want to find the maximum value <strong>by group</strong> in some of the columns (not all of them) in a dataframe - and reduce this grouped maximum value by one. It will be a single row, not multiple rows that have the maximum value.</p> <p>So let's say I have a table with the following columns below. I want to reduce the maximum value by group (i.e. Name) in columns A, B and D by one:</p> <pre><code>Name A B C D Bob 3 2 5 5 Bob 4 9 7 7 Bob 6 0 1 3 Jim 7 0 7 9 Jim 1 3 9 1 Jim 2 7 1 5 </code></pre> <p>The end result I'm looking for is where the maximum value by column (for columns A, B and D) by group is reduced by one, per below - but column C remains untouched. Thanks.</p> <pre><code>Name A B C D Bob 3 2 5 5 Bob 4 8 7 6 Bob 5 0 1 3 Jim 6 0 7 8 Jim 1 3 9 1 Jim 2 6 1 5 </code></pre>
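A sketch of one way to do this with <code>groupby().idxmax()</code>: for each of the chosen columns, find the row label of each group's (first) maximum and decrement just those cells, reproducing the expected table above.

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["Bob", "Bob", "Bob", "Jim", "Jim", "Jim"],
    "A": [3, 4, 6, 7, 1, 2],
    "B": [2, 9, 0, 0, 3, 7],
    "C": [5, 7, 1, 7, 9, 1],
    "D": [5, 7, 3, 9, 1, 5],
})

for col in ["A", "B", "D"]:
    # idxmax returns, per group, the label of the first row holding the maximum
    df.loc[df.groupby("Name")[col].idxmax(), col] -= 1

print(df)
```

Because <code>idxmax</code> returns a single label per group, exactly one row per group is changed in each column, which matches the "single row, not multiple rows" requirement.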
<python><pandas>
2023-11-22 16:17:15
2
341
Richard Dixon
77,531,410
9,443,671
How can I take the difference of two overlapping byte streams in Python?
<p>I have an audio stream coming in from an API call which generated text-to-speech; I am currently generating overlapping chunks of consecutive texts and want to only write the difference between a stream chunk and the one preceding it (excluding the overlap), how can I do that when what I'm dealing with are byte streams and not actual audio/text streams?</p> <p>Here's an example:</p> <p>let's say at iteration 0, I make an API call with the text <code>I am feeling good</code> and that returns an audio stream which is written to a file</p> <p>now at iteration 1, I make another API call with the text <code>feeling good because I</code> and that returns an audio stream too, but what I want to do is skip the overlapping <code>feeling good</code> and append the remaining part <code>because I</code> from the audio stream to the file. Any suggestions on how to do this when I'm working with byte streams instead of text?</p> <p>here's what I have so far, which is storing the previous stream and the current one, but how can I take the difference here?</p> <pre><code>for i in sentences: curr_audio_stream = API_call(i) #Returns a botocore.response.StreamingBody object with open(&quot;file.mp3&quot;, &quot;ab&quot;) as file: file.write(curr_audio_stream.read()) prev_audio_stream = curr_audio_stream </code></pre> <p>Here's an example bunch of texts overlapping:</p> <pre><code>1. Oh, that sounds like 2. a fun game! Let's do it! So 3. Let's do it! So do you want to 4. do you want to start by giving me 5. start by giving me a clue about the </code></pre>
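If the encoder were byte-deterministic, the overlap could be found as the longest suffix of the previous chunk that is a prefix of the current one; a sketch on plain bytes is below. Be warned that this assumption rarely holds for compressed audio such as MP3, where the same text does not reproduce identical bytes across calls — the robust fix is usually to send only the new text to the TTS API instead of overlapping inputs.

```python
def overlap_len(prev: bytes, curr: bytes) -> int:
    """Length of the longest suffix of prev that is also a prefix of curr."""
    for k in range(min(len(prev), len(curr)), 0, -1):
        if prev[-k:] == curr[:k]:
            return k
    return 0

# text stands in for the audio bytes, purely to show the mechanics
prev = b"I am feeling good"
curr = b"feeling good because I"
k = overlap_len(prev, curr)
new_part = curr[k:]           # what would be appended to the file
print(k, new_part)            # -> 12 b' because I'
```

Even when byte-identical overlap does exist, concatenating MP3 mid-frame can produce glitches; decoding to PCM (e.g. WAV) before splicing is the safer route.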
<python><audio><stream><byte><botocore>
2023-11-22 15:55:50
0
687
skidjoe
77,531,227
540,665
Filtering Pandas Dataframe Records, based on "key:value" match in a list of dictionaries as 2nd field of the Dataframe
<p>I have a dataset like the one below in a pandas dataframe. What I want is to check the values in the second column, which look like a list of dictionaries, and keep only those full records for which <code>platform</code> is <code>Windows</code> and <code>script_type</code> is <code>PowerShell</code>.</p> <p>Is there a way to do it using only DataFrame operations, instead of a for loop iterating across the list of dictionaries for each of the fields in the 2nd column?</p> <p>The first column is named <code>Names</code>, and the second column is <code>Data</code>.</p> <pre><code>script1 [{'platform': 'Windows','script': 'script1 code for Windows','script_type': 'PowerShell'},{'platform': 'Linux','script': 'script1 code for Linux','script_type': 'UnixShell'}] script2 [{'platform': 'Windows','script': 'script2 code for Windows','script_type': 'PowerShell'},{'platform': 'Linux','script': 'script2 code for Linux','script_type': 'UnixShell'}] script3 [{'platform': 'Windows','script': 'script3 code for Windows','script_type': 'VBScript'}] script4 [{'platform': 'Linux','script': 'script4 code for Linux','script_type': 'UnixShell'}] script5 [{'platform': 'Windows','script': 'script5 code for Windows','script_type': 'PowerShell'},{'platform': 'Linux','script': 'script5 code for Linux','script_type': 'UnixShell'}] script6 [{'platform': 'AIX','script': 'script6 code for AIX','script_type': 'UnixShell'},{'platform': 'Linux','script': 'script6 code for Linux','script_type': 'UnixShell'}] </code></pre>
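A sketch using <code>explode</code> plus <code>json_normalize</code>, so the per-row dictionaries become ordinary columns to filter on (sample data abbreviated from the table above):

```python
import pandas as pd

df = pd.DataFrame({
    "Names": ["script1", "script3", "script4"],
    "Data": [
        [{"platform": "Windows", "script": "s1 win", "script_type": "PowerShell"},
         {"platform": "Linux", "script": "s1 lin", "script_type": "UnixShell"}],
        [{"platform": "Windows", "script": "s3 win", "script_type": "VBScript"}],
        [{"platform": "Linux", "script": "s4 lin", "script_type": "UnixShell"}],
    ],
})

exploded = df.explode("Data")                       # one dict per row
attrs = pd.json_normalize(exploded["Data"].tolist())  # dict keys -> columns
attrs.index = exploded.index                        # keep original row labels
mask = (attrs["platform"] == "Windows") & (attrs["script_type"] == "PowerShell")
result = df.loc[mask[mask].index.unique()]          # full original records
print(result["Names"].tolist())   # -> ['script1']
```

Mapping the mask back through the preserved index is what lets the filter select the full original record rather than just the exploded dictionary row.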
<python><pandas><dataframe>
2023-11-22 15:29:26
1
2,398
dig_123
77,531,208
536,262
Python 3.12 SyntaxWarning: invalid escape sequence on triple-quoted string, `\d` must be `\\d`
<p>After updating to Python 3.12, I get warnings about invalid escape sequences in some triple-quoted strings.</p> <p>Is this a new restriction? I have the habit of documenting code using triple-quoted strings, but this was never a problem prior to Python 3.12.</p> <pre><code>python3 --version Python 3.12.0 $ ./some_script.py /some_script.py:123: SyntaxWarning: invalid escape sequence '\d' &quot;&quot;&quot; </code></pre> <p>I tried replacing all lines with <code>\d</code>:</p> <p><code>20230808122708.445|INFO|C:\dist\work\trk-fullstack-test\namespaces.py</code></p> <p>with <code>\\d</code>:</p> <p><code>20230808122708.445|INFO|C:\\dist\work\trk-fullstack-test\namespaces.py</code></p> <p>The warning disappears.</p> <p>Suppressing the warning does not seem to work:</p> <pre><code>import warnings warnings.filterwarnings('ignore', category=SyntaxWarning) </code></pre> <p>I hope I do not have to escape all Windows paths documented in triple quotes in our code.</p>
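For context: invalid escape sequences have emitted a <code>DeprecationWarning</code> since Python 3.6; 3.12 promoted them to a visible <code>SyntaxWarning</code>. Because the warning is produced when the file is compiled, a runtime <code>filterwarnings</code> call in the same file comes too late to suppress it. Raw strings avoid escaping each backslash individually:

```python
def parse_log():
    r"""Raw docstring, so Windows paths need no escaping:

    20230808122708.445|INFO|C:\dist\work\trk-fullstack-test\namespaces.py
    """
    return None

# the backslash sequences survive verbatim inside the raw docstring
print("\\d" in parse_log.__doc__)   # literal backslash followed by 'd'
```

Prefixing the affected triple-quoted strings with <code>r</code> is typically a much smaller diff than doubling every backslash.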
<python><warnings><string-literals><quoting><python-3.12>
2023-11-22 15:25:57
2
3,731
MortenB
77,531,190
4,340,985
How to read a CSV file with a date and time with milliseconds? ('DD.MM.YYYY hh:mm:ss:mmm')
<p>I have a CSV file with timestamps that have (empty) milliseconds in them (<code>01.01.1901 00:00:00:000</code>) and pandas <code>read_csv</code> can't seem to parse them:</p> <p>test.csv:</p> <pre><code>date,value 18.03.2019 00:00:00:000,3.97 19.06.2019 00:00:00:000,4.22 10.09.2019 00:00:00:000,4.4 16.12.2019 00:00:00:000,5.6 </code></pre> <p><code>testdf = pd.read_csv('test.csv', index_col=0, header=0, parse_dates=True, dayfirst=True)</code></p> <pre><code>testdf.index Index(['18.03.2019 00:00:00:000', '19.06.2019 00:00:00:000', '10.09.2019 00:00:00:000', '16.12.2019 00:00:00:000'], dtype='object', name='date') </code></pre> <p>Obviously, I don't want that to be <code>'object'</code>, but <code>'datetime64[ns]'</code>, as I'd get if I removed the empty time data by hand beforehand. How do I specify to <code>read_csv</code> how the time in my index column is formatted? Normally, it's pretty good at parsing dates, so it seems very odd to me that it's totally thrown off by the milliseconds, which is not that rare a time format.</p>
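One approach that should work: read the column as text and convert it afterwards with an explicit format string, since the third colon before the milliseconds defeats automatic parsing. <code>%H:%M:%S:%f</code> mirrors the file's unusual <code>hh:mm:ss:mmm</code> layout:

```python
import io

import pandas as pd

csv = """date,value
18.03.2019 00:00:00:000,3.97
19.06.2019 00:00:00:000,4.22
10.09.2019 00:00:00:000,4.4
16.12.2019 00:00:00:000,5.6
"""

df = pd.read_csv(io.StringIO(csv), index_col=0, header=0)
# explicit format: note the colon (not a dot) before the %f milliseconds
df.index = pd.to_datetime(df.index, format="%d.%m.%Y %H:%M:%S:%f")
print(df.index.dtype)   # -> datetime64[ns]
```

On pandas 2.0+ the same format string can reportedly be passed directly to <code>read_csv</code> via its <code>date_format</code> parameter together with <code>parse_dates</code>; the two-step version above works on older releases as well.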
<python><pandas><datetime><time>
2023-11-22 15:23:12
0
2,668
JC_CL
77,531,042
18,237,126
Images with Tkinter
<p>I am having quite some problems trying to implement images in my game. I want to put each image in the frames I created for the different stages of the game, however, this seems impossible. I can only get it to work if I place the image in the window (and thus I can't change the background). What would be the best way to do this?</p> <p>Also, the image dimensions are the same as the ones for the window I created. However, how do I make it so the image fits the whole screen instead of just a small part in the middle? (attaching image and code for reference)</p> <p>Issues are only in the functions setting_up_bg and setting_up_bg_2 I think (the second one is meant for the second background but it's not working)</p> <p><a href="https://i.sstatic.net/tau9d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tau9d.png" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>class Main_Menu(): def __init__(self): self.window = Tk() self.window.state('zoomed') self.main_menu_frame = Frame(self.window, width=800, height=600) #try to put background here in this frame self.configure_window() self.create_menu() def setting_up_bg(self): # self.background = PhotoImage(file=r'C:\Users\acout\OneDrive - Activos Reais\Uni\Git\Tkinter\race_track.png') # self.background_label = Label(self.window, image=self.background) # self.background_label.place(x=0, y=0, relwidth=1, relheight=1) # self.background_label.tkimg = self.background self.background = Image.open(r&quot;C:\Users\acout\OneDrive - Activos Reais\Uni\Git\Tkinter\race_game_bg.png&quot;) test = ImageTk.PhotoImage(self.background) label1 = Label(image=test) label1.image = test label1.place(x=0, y=0, relwidth=1, relheight=1) def setting_up_bg_2(self): self.background_2_frame = Frame(self.window, bg='white') self.background_2_frame.place(x=0, y=0, relwidth=1, relheight=1) self.background_2 = PhotoImage(file=r'C:\Users\acout\OneDrive - Activos 
Reais\Uni\Git\Tkinter\race_game_bg.png') self.background_2_label = Label(self.background_2_frame, image=self.background_2) self.background_2_label.place(x=0, y=0, relwidth=1, relheight=1) self.background_label.tkimg = self.background_2 def configure_window(self): self.window.geometry(&quot;800x600&quot;) self.window.configure(background='#b3ffff') self.window.title(&quot;My Tkinter game yo - car go brrr&quot;) self.window.iconbitmap(default=r'C:\Users\acout\OneDrive - Activos Reais\Uni\Git\Tkinter\carro.ico') # setting up grid self.window.columnconfigure(0, weight=1) self.window.columnconfigure(1, weight=1) self.window.columnconfigure(2, weight=1) self.window.rowconfigure(0, weight=7) self.window.rowconfigure(1, weight=1) self.window.rowconfigure(2, weight=1) self.window.rowconfigure(3, weight=1) self.window.rowconfigure(4, weight=1) self.window.rowconfigure(5, weight=5) self.setting_up_bg() </code></pre>
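For reference, a minimal pattern that usually resolves both symptoms (the image only appearing on the window, and images vanishing): give the <code>Label</code> the frame as its master, and keep a Python reference to the <code>PhotoImage</code> so it is not garbage-collected. This sketch uses a tiny generated image instead of the real file, and skips itself when no display is available:

```python
import tkinter as tk

placed = False
try:
    root = tk.Tk()
except tk.TclError:          # no display available (e.g. running headless)
    root = None

if root is not None:
    frame = tk.Frame(root, width=800, height=600)
    frame.pack(fill="both", expand=True)
    bg = tk.PhotoImage(width=1, height=1)   # stand-in for the background file
    label = tk.Label(frame, image=bg)       # master is the frame, not the window
    label.image = bg                        # keep a reference alive
    label.place(x=0, y=0, relwidth=1, relheight=1)
    placed = True
    root.update_idletasks()
    root.destroy()
```

For making the image fill the screen, the usual approach is to resize it to the window's current size with PIL's <code>Image.resize</code> before creating the <code>PhotoImage</code> (re-running that in a <code>&lt;Configure&gt;</code> handler if the window can change size).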
<python><image><tkinter><background><python-imaging-library>
2023-11-22 15:03:20
0
314
António Rebelo
77,530,731
546,465
Thread & Process PoolExecutor with workers that keep state
<p>For an FTP download problem at scale I'd like to use Python's <a href="https://docs.python.org/3/library/concurrent.futures.html" rel="nofollow noreferrer">concurrent futures</a> to get multiple FTP connections and have each download multiple files.</p> <p>My question is: how can I make workers that keep their state, such that I don't have to reconnect to the FTP server for each file I download?</p> <p>Ideally I'd do something like this:</p> <pre><code>from concurrent.futures import ProcessPoolExecutor as Executor class FTP_Worker: def __init__(self, idn): self.idn = idn self.connection = ftplib.connect(server, password) def downloadfile(filename): self.connection.download(filename) workers = [FTP_Worker(i) for i in range(12)] filenames = filenames # list of 1000s remote files Executor(max_workers=12).map(workers, filenames) </code></pre> <p>And ideally this works for both ThreadPoolExecutor and ProcessPoolExecutor.</p>
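One pattern that fits this shape is the executor's <code>initializer</code> argument combined with <code>threading.local</code>: each worker thread opens its connection once, and <code>map</code> then only passes filenames. The FTP connection is faked below so the sketch is self-contained; for <code>ProcessPoolExecutor</code> the same shape works with a module-level global instead of thread-local storage, since each process gets its own copy.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class FakeFTP:                        # stands in for ftplib.FTP(...) + login
    instances = 0
    def __init__(self):
        FakeFTP.instances += 1        # count how many connections were opened
    def download(self, filename):
        return f"downloaded {filename}"

tls = threading.local()

def init_worker():
    tls.conn = FakeFTP()              # one connection per worker thread

def download(filename):
    return tls.conn.download(filename)  # reuses this thread's connection

filenames = [f"file{i}.txt" for i in range(100)]
with ThreadPoolExecutor(max_workers=4, initializer=init_worker) as ex:
    results = list(ex.map(download, filenames))

print(len(results), FakeFTP.instances)   # 100 downloads, at most 4 connections
```

The key point is that <code>map</code> distributes work items, not workers: state lives in the initializer-created per-thread object, so 100 files cost at most 4 connections here instead of 100.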
<python><multithreading><multiprocessing>
2023-11-22 14:20:59
1
4,712
Bastiaan
77,530,677
2,123,706
How to parse XML to a pandas dataframe using Beautiful Soup
<p>I have:</p> <pre><code>xml_data = &quot;&quot;&quot;&lt;address_details sequence_number=&quot;1&quot; match_indicator=&quot;L&quot;&gt; &lt;address_input id=&quot;ADI&quot; address_details=&quot;street address and postcode&quot; /&gt; &lt;address_matched id=&quot;ADO&quot; address_key=&quot;123&quot; house_name=&quot;&quot; house_number=&quot;&quot; street_1=&quot;&quot; street_2=&quot;&quot; district=&quot;&quot; posttown=&quot;&quot; county=&quot;&quot; postcode=&quot;AB1 2CD&quot; address_type=&quot;&quot; /&gt; &lt;electoral_roll id=&quot;ELR&quot; name_match_indicator=&quot;A&quot; title=&quot;&quot; forename=&quot;firstName&quot; second_name=&quot;J&quot; surname=&quot;LastName&quot; date_of_birth=&quot;&quot; period=&quot;19-21&quot; junior_senior=&quot;&quot; /&gt; &lt;electoral_roll id=&quot;ELR&quot; name_match_indicator=&quot;C&quot; title=&quot;&quot; forename=&quot;FirstName&quot; second_name=&quot;M&quot; surname=&quot;MILES&quot; date_of_birth=&quot;&quot; period=&quot;19-21&quot; junior_senior=&quot;&quot; /&gt; &lt;telephone_data id=&quot;TLR&quot; title=&quot;&quot; forename=&quot;D&quot; second_name=&quot;&quot; surname=&quot;LastName&quot; date_of_birth=&quot;&quot; std_code=&quot;******&quot; local_number=&quot;*****&quot; tps=&quot;&quot; date_loaded=&quot;2020-02-07&quot; LineType=&quot;N&quot; /&gt; &lt;insight id=&quot;INR&quot; name_match_indicator=&quot;A&quot; title=&quot;MR&quot; forename=&quot;firstName&quot; second_name=&quot;J&quot; surname=&quot;LastName&quot; date_of_birth=&quot;YYYY-MM-DD&quot; /&gt; &lt;coded_insight_for_id_verification id=&quot;IDC&quot; number_of_accounts=&quot;6&quot; /&gt; &lt;/address_details&gt;&quot;&quot;&quot; soup = BeautifulSoup(xml_data, &quot;html.parser&quot;) </code></pre> <p>I want to loop through each <code>child?</code> node (not sure terminology for <code>address matched</code>, <code>electoral_roll</code>, etc are with respect to the <code>address_details</code>), and create a small 
dataframe for each child.</p> <p>My current method is:</p> <pre><code>ls_er = [] er = soup.find_all('electoral_roll') for i in range(len(er)): ls_er.append(er[i].attrs) ls_ai = [] er = soup.find_all('address_input') for i in range(len(er)): ls_ai.append(er[i].attrs) ls_am = [] er = soup.find_all('address_matched') for i in range(len(er)): ls_am.append(er[i].attrs) ls_tele = [] er = soup.find_all('telephone_data') for i in range(len(er)): ls_tele.append(er[i].attrs) ls_i = [] er = soup.find_all('insight') for i in range(len(er)): ls_i.append(er[i].attrs) ls_cofi = [] er = soup.find_all('coded_insight_for_id_verification') for i in range(len(er)): ls_cofi.append(er[i].attrs) pd.DataFrame(ls_ai) pd.DataFrame(ls_am) pd.DataFrame(ls_er) pd.DataFrame(ls_tele) pd.DataFrame(ls_i) pd.DataFrame(ls_cofi) </code></pre> <p>This works ok, but there are 2 issues:</p> <ul> <li><p>there may/not be extra/fewer child nodes (ie <code>electoral_roll</code> might not be present, or <code>new_category</code> might be present)</p> </li> <li><p>is there a way to loop through all of the child nodes and perform requested operation?</p> </li> </ul> <p>pseudocode idea:</p> <pre><code>num = len(soup.number_children_nodes) ls_child_soup0, ls_child_soup1 ,,, ls_chld_soupn = ([] for i in range(num)) for child in soup: child_soup = soup.find_all(child) for i in range(len(child_soup)): ls_child_soup0.append(child_soup[i].attrs) </code></pre> <p>I would like to return a few dataframes that look like:</p> <p><a href="https://i.sstatic.net/yOQ12.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yOQ12.png" alt="enter image description here" /></a></p>
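A generic version of the pseudocode above, sketched with the stdlib's <code>ElementTree</code> instead of BeautifulSoup: group every child of the root by its tag name — whatever tags happen to be present — and build one dataframe per tag. The XML is abbreviated from the question's sample (fewer attributes, same structure):

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

import pandas as pd

xml_data = """<address_details sequence_number="1" match_indicator="L">
<address_input id="ADI" address_details="street address and postcode" />
<electoral_roll id="ELR" name_match_indicator="A" forename="firstName" surname="LastName" />
<electoral_roll id="ELR" name_match_indicator="C" forename="FirstName" surname="MILES" />
<insight id="INR" name_match_indicator="A" title="MR" surname="LastName" />
</address_details>"""

root = ET.fromstring(xml_data)
rows = defaultdict(list)
for child in root:                    # iterates direct children, any tag name
    rows[child.tag].append(child.attrib)

frames = {tag: pd.DataFrame(attrs) for tag, attrs in rows.items()}
print(frames["electoral_roll"])       # one dataframe per child tag
```

Because the grouping is keyed on <code>child.tag</code>, missing tags simply produce no dataframe and new tags appear automatically, which addresses both bullet points. The same loop works with BeautifulSoup by iterating the root's element children and reading <code>.name</code>/<code>.attrs</code>.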
<python><pandas><xml><beautifulsoup>
2023-11-22 14:13:57
3
3,810
frank
77,530,583
7,233,155
PyO3 "`From<&PyCell<mod::struct>>` is not implemented for `f64`"
<p>I am trying to expose a Rust struct to Python class.</p> <pre class="lang-rust prettyprint-override"><code>use pyo3::prelude::*; use std::sync::Arc; use indexmap::set::IndexSet; use ndarray::{Array1, Array}; #[pyclass] #[derive(Clone, Debug)] pub struct Dual { real : f64, vars : Arc&lt;IndexSet&lt;String&gt;&gt;, dual : Array1&lt;f64&gt;, } #[pymethods] impl Dual { fn new(real: f64, vars: Vec&lt;String&gt;, dual: Vec&lt;f64&gt;) -&gt; Self { let new_dual; if dual.len() != 0 &amp;&amp; vars.len() != dual.len() { panic!(&quot;`dual` must have same length as `vars` or have zero length.&quot;) } else if dual.len() == 0 &amp;&amp; vars.len() &gt; 0 { new_dual = Array::ones(vars.len()); } else { new_dual = Array::from_vec(dual); } Self { real: real, vars: Arc::new(IndexSet::from_iter(vars)), dual: new_dual, } } fn __repr__(&amp;self) -&gt; String { return format!(&quot;&lt;Dual: ... , [...], [...]&gt;&quot;); } } </code></pre> <p>When using <code>maturin develop</code> I get the error:</p> <pre><code>error[E0277]: the trait bound `f64: From&lt;&amp;PyCell&lt;point::Dual&gt;&gt;` is not satisfied --&gt; src\point\mod.rs:23:18 | 23 | fn new(real: f64, vars: Vec&lt;String&gt;, dual: Vec&lt;f64&gt;) -&gt; Self { | ^^^ the trait `From&lt;&amp;PyCell&lt;point::Dual&gt;&gt;` is not implemented for `f64` = help: the following other types implement trait `From&lt;T&gt;`: &lt;f64 as From&lt;bool&gt;&gt; &lt;f64 as From&lt;i8&gt;&gt; &lt;f64 as From&lt;i16&gt;&gt; &lt;f64 as From&lt;i32&gt;&gt; &lt;f64 as From&lt;u8&gt;&gt; &lt;f64 as From&lt;u16&gt;&gt; &lt;f64 as From&lt;u32&gt;&gt; &lt;f64 as From&lt;f32&gt;&gt; = note: required for `&amp;PyCell&lt;point::Dual&gt;` to implement `Into&lt;f64&gt;` = note: required for `f64` to implement `TryFrom&lt;&amp;PyCell&lt;point::Dual&gt;&gt;` </code></pre> <p>I do not understand this error, nor why, of all the types being used, this seems to fail on the basic float object.</p> <p>If anyone can share any pointers or elucidate this error 
to help me learn what to do, I would appreciate it.</p>
<python><rust><pyo3>
2023-11-22 14:01:16
0
4,801
Attack68
77,530,570
2,947,077
How to find a 10x10x3 numpy array within a 1000x1000x3 numpy array (fast)?
<p>I am wondering the following:</p> <p>I have a 10x10x3 and a 1000x1000x3 numpy array. I want to check if the smaller array can be found within the bigger array (and where). This is pretty easy with 2 for loops, but that solution is <em>very</em> slow:</p> <pre class="lang-py prettyprint-override"><code>H, W, _ = big.shape h, w, _ = small.shape for i in range(H - h + 1): for j in range(W - w + 1): if np.array_equal(big[i:i+h, j:j+w, :], small): print(i,j) </code></pre> <p>I was wondering if there is a way to vectorize this? Or a way to write this routine faster?</p>
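A vectorized sketch of this search (one possible approach, not guaranteed to be the fastest) uses NumPy's `sliding_window_view` to compare every candidate window at once; note that it materializes a large boolean intermediate for a 1000x1000 image:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def find_patch(big, small):
    """Return (row, col) top-left positions where `small` occurs in `big`."""
    h, w, c = small.shape
    # windows has shape (H-h+1, W-w+1, 1, h, w, c): one view per candidate position
    windows = sliding_window_view(big, (h, w, c))
    # compare every window to the patch, then reduce over the window axes
    hits = (windows == small).all(axis=(-3, -2, -1)).squeeze(axis=2)
    return np.argwhere(hits)

big = np.zeros((50, 60, 3), dtype=np.uint8)
small = (np.arange(300).reshape(10, 10, 3) % 7 + 1).astype(np.uint8)
big[12:22, 34:44] = small
print(find_patch(big, small))  # [[12 34]]
```

A cheap pre-filter can also help: shortlist candidate corners with a single-pixel comparison first, then run the full window comparison only on those few positions.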
<python><numpy>
2023-11-22 13:59:41
2
545
dljve
77,530,451
3,114,229
Terminal prompts for Jupyter Notebook for Python in VScode
<p>I'm having trouble running a batch file from a Jupyter Notebook in VScode. The prompts from the batch file are not showing up in the terminal when I run the script. Is there a way to print the prompts in the terminal instead of between cells? I've looked at a <a href="https://stackoverflow.com/questions/38616077/live-stdout-output-from-python-subprocess-in-jupyter-notebook">similar problem on Stack Overflow</a>, but it doesn't quite address my issue. I created a test file to open in a separate cmd window, but I would prefer to print it in the terminal instead. Here is the code I'm using:</p> <p>Jupyter Notebook code:</p> <pre><code>subprocess.run(&quot;start test.bat&quot;, stdout=subprocess.PIPE, text=True, shell=True) </code></pre> <p>test.bat code:</p> <pre><code>@echo off echo. echo foo bar baz </code></pre> <p>Please see photo for clarification: <a href="https://i.sstatic.net/9al7U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9al7U.png" alt="enter image description here" /></a></p>
<python><visual-studio-code><jupyter-notebook>
2023-11-22 13:42:52
1
419
Martin
77,530,412
9,422,346
Python Arrow datetime function returns a timestamp with seconds field value more than 60
<p>I have a function in my code that takes a UTC time argument like <code>2023-11-06T20:53:39.062Z</code>, converts it to EST time and then returns it in the format 'MM-DD-YYYY HH:MM:SS'.</p> <pre><code>def date_conv(time): est = zoneinfo.ZoneInfo('America/Toronto') est_time = arrow.get(time).astimezone(est) return str(arrow.get(est_time).format(&quot;MM-DD-YYYY HH:MM:SS&quot;)) </code></pre> <p>However, this sometimes returns a time such as <code>16-11-2023 15:11:79</code> with the seconds field &gt; 60, which is not desirable. What exactly is wrong in the code?</p>
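Arrow's format tokens are case-sensitive, in the moment.js style: `MM` is the month and `mm` the minute, while `SS` is a sub-second token and `ss` whole seconds, so `"MM-DD-YYYY HH:MM:SS"` prints month and sub-second values into the time slots (hence fields like `:79`). The intended format string would be `"MM-DD-YYYY HH:mm:ss"`. The standard library has the same trap in reverse, as the stdlib-only sketch of the equivalent conversion below shows:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def date_conv(time_str):
    # 'Z' suffix means UTC; fromisoformat wants an explicit offset on older Pythons
    dt = datetime.fromisoformat(time_str.replace("Z", "+00:00"))
    est_time = dt.astimezone(ZoneInfo("America/Toronto"))
    # strftime's casing is the opposite of Arrow's: %m is month, %M is minute
    return est_time.strftime("%m-%d-%Y %H:%M:%S")

print(date_conv("2023-11-06T20:53:39.062Z"))  # 11-06-2023 15:53:39
```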
<python><datetime><utc><python-arrow>
2023-11-22 13:37:25
1
407
mrin9san
77,530,306
13,371,166
Convert Python curl response.content bytes object to readable format (json, xml)
<p>I fetch some data from an API with the code below</p> <pre class="lang-py prettyprint-override"><code>headers = { 'Authorization': &quot;Basic XXXXXXXXX&quot;, 'Accept': 'application/xml', } resp = requests.get('https://api_request', headers=headers) </code></pre> <p>I receive a correct response from the API (code 200), but the content is a bytes object, so it is not easy to read. Printing resp.content gives:</p> <pre class="lang-py prettyprint-override"><code>print(resp.content) b'&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; standalone=&quot;yes&quot;?&gt;&lt;legalEntity xmlns=&quot;http://XXXX.io&quot;&gt;&lt;status&gt;AGT_VALIDATED&lt;/status&gt;&lt;id&gt;&lt;code&gt;140012&lt;/c......... </code></pre> <p>Is there a way to convert this to an easy-to-manipulate format so that I can extract the specific info I need?</p>
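The bytes type is not a problem for a parser; `xml.etree.ElementTree` accepts bytes directly. The one catch in the shown payload is the default namespace, which has to be mapped when searching. A sketch against a stand-in payload (tag names follow the truncated snippet in the question; the namespace URI here is a placeholder for the real one):

```python
import xml.etree.ElementTree as ET

# stand-in for resp.content (the namespace URI is a placeholder)
content = (b'<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
           b'<legalEntity xmlns="http://example.io">'
           b'<status>AGT_VALIDATED</status>'
           b'<id><code>140012</code></id>'
           b'</legalEntity>')

root = ET.fromstring(content)        # parses bytes directly
ns = {"le": "http://example.io"}     # map the default namespace to a prefix
print(root.findtext("le:status", namespaces=ns))      # AGT_VALIDATED
print(root.findtext("le:id/le:code", namespaces=ns))  # 140012
```

If the API also honors `'Accept': 'application/json'`, `resp.json()` returns a ready-made dict and skips XML handling entirely.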
<python><xml>
2023-11-22 13:23:37
1
542
kenshuri
77,530,281
16,707,518
Reduce maximum value in multiple columns by one in pandas dataframe
<p>Bit of a niche one here that I've not been able to find equivalent code for anywhere. I simply want to find the maximum value in some of the columns (not all of them) and reduce this maximum value by one. It will be a single row, not multiple rows that have the maximum value.</p> <p>So let's say I have a table with the following columns. I want to reduce the maximum value in columns A, B and D by one:</p> <pre><code>A B C D 3 2 5 5 4 9 6 7 7 0 7 9 1 3 9 1 </code></pre> <p>...to produce a table with the maximum reduced by one in columns A, B and D.</p> <pre><code>A B C D 3 2 5 5 4 8 6 7 6 0 7 8 1 3 9 1 </code></pre> <p>(I'm actually working with a table of about 40 rows, about 8 of which I want to reduce the maximum value by 1 within). Thanks.</p>
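Since each maximum sits in a single row, `Series.idxmax` gives the row label to decrement. A minimal sketch over the example data (note that with ties, `idxmax` returns only the first occurrence):

```python
import pandas as pd

df = pd.DataFrame({"A": [3, 4, 7, 1], "B": [2, 9, 0, 3],
                   "C": [5, 6, 7, 9], "D": [5, 7, 9, 1]})

for col in ["A", "B", "D"]:
    df.loc[df[col].idxmax(), col] -= 1   # decrement the row holding the max

print(df["A"].tolist(), df["B"].tolist(), df["D"].tolist())
# [3, 4, 6, 1] [2, 8, 0, 3] [5, 7, 8, 1]
```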
<python><pandas><dataframe><max>
2023-11-22 13:19:53
2
341
Richard Dixon
77,530,232
2,063,900
How I can add button to the variants tab in the products screen in Odoo
<p>I try to add a button in the products screen but I can't find the correct path, and I do not know the problem.</p> <p>this is my code</p> <pre><code> &lt;xpath expr=&quot;//page[@name='variants']&quot; position=&quot;inside&quot;&gt; &lt;button string=&quot;Custom Button&quot; type=&quot;object&quot; name=&quot;custom_button_method&quot;/&gt; &lt;/xpath&gt; </code></pre> <p>==========================================</p> <p>but it gives an error</p> <pre><code>Element '&lt;xpath expr=&quot;//page[@name=&amp;#39;variants&amp;#39;]&quot;&gt;' cannot be located in parent view </code></pre> <p>Can anyone help me?</p>
<python><xml><odoo><odoo-16>
2023-11-22 13:11:33
1
361
ahmed mohamady
77,530,111
8,324,480
Pandas DF eval failing if it's not explicit
<p>I have a dataframe column named <code>foo</code> containing booleans, True or False, and I want to understand why:</p> <pre><code>df.eval(&quot;foo&quot;) -&gt; fails with ValueError: unknown type object df.eval(&quot;foo == True&quot;) -&gt; works </code></pre> <p>It seems like the first line should select the <code>True</code> values as well.</p> <p>Traceback:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/scheltie/pyvenv/mscheltienne/eeg-flow/lib/python3.10/site-packages/pandas/core/frame.py&quot;, line 4725, in eval return _eval(expr, inplace=inplace, **kwargs) File &quot;/home/scheltie/pyvenv/mscheltienne/eeg-flow/lib/python3.10/site-packages/pandas/core/computation/eval.py&quot;, line 357, in eval ret = eng_inst.evaluate() File &quot;/home/scheltie/pyvenv/mscheltienne/eeg-flow/lib/python3.10/site-packages/pandas/core/computation/engines.py&quot;, line 81, in evaluate res = self._evaluate() File &quot;/home/scheltie/pyvenv/mscheltienne/eeg-flow/lib/python3.10/site-packages/pandas/core/computation/engines.py&quot;, line 121, in _evaluate return ne.evaluate(s, local_dict=scope) File &quot;/home/scheltie/pyvenv/mscheltienne/eeg-flow/lib/python3.10/site-packages/numexpr/necompiler.py&quot;, line 975, in evaluate raise e File &quot;/home/scheltie/pyvenv/mscheltienne/eeg-flow/lib/python3.10/site-packages/numexpr/necompiler.py&quot;, line 877, in validate signature = [(name, getType(arg)) for (name, arg) in File &quot;/home/scheltie/pyvenv/mscheltienne/eeg-flow/lib/python3.10/site-packages/numexpr/necompiler.py&quot;, line 877, in &lt;listcomp&gt; signature = [(name, getType(arg)) for (name, arg) in File &quot;/home/scheltie/pyvenv/mscheltienne/eeg-flow/lib/python3.10/site-packages/numexpr/necompiler.py&quot;, line 717, in getType raise ValueError(&quot;unknown type %s&quot; % a.dtype.name) ValueError: unknown type object </code></pre>
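The traceback bottoms out in numexpr's `getType`, which suggests the column's dtype is `object` (Python bools stored generically) rather than `bool`; `foo == True` survives because the comparison produces a proper boolean array first. A sketch of two ways around it, assuming that diagnosis is right:

```python
import pandas as pd

# an object-dtype column of Python bools reproduces the situation
df = pd.DataFrame({"foo": pd.Series([True, False, True], dtype=object)})

# Option 1: give the column a real bool dtype once, up front
df["foo"] = df["foo"].astype(bool)
print(df.eval("foo").tolist())  # [True, False, True]

# Option 2 (leaves the dtype alone): bypass numexpr for this expression
# df.eval("foo", engine="python")
```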
<python><pandas>
2023-11-22 12:54:05
2
5,826
Mathieu
77,530,069
1,609,428
missing labels in matplotlib chart created with pandas
<p>Consider the following example</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt ddd = pd.DataFrame({'time': [1,2,3,4], 'var1': [1,4,6,3], 'var2': [4,3,2,1]}) fig, ax = plt.subplots() for col in ['var1', 'var2']: ddd.set_index('time')[col].plot(label = col) </code></pre> <p>which gives:</p> <p><a href="https://i.sstatic.net/d5ec2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d5ec2.png" alt="enter image description here" /></a></p> <p>However, as you can see, the legend is not correctly labeled even though I used the <code>label</code> argument in the <code>plot()</code> call. Do you know how to fix that while keeping the loop structure (adding charts sequentially while sharing the <code>ax</code>)?</p> <p>Thanks!</p>
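One likely fix (assuming the pandas wrapper passes `label` through to matplotlib, which recent versions do): draw every series onto the shared `ax` explicitly and call `ax.legend()` once after the loop, since the legend is only built from the labels when it is (re)drawn. Sketch, using the Agg backend so it runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt
import pandas as pd

ddd = pd.DataFrame({'time': [1, 2, 3, 4], 'var1': [1, 4, 6, 3], 'var2': [4, 3, 2, 1]})
fig, ax = plt.subplots()
for col in ['var1', 'var2']:
    ddd.set_index('time')[col].plot(ax=ax, label=col)  # share the axes explicitly
ax.legend()  # build the legend after all lines carry their labels

print([t.get_text() for t in ax.get_legend().get_texts()])  # ['var1', 'var2']
```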
<python><pandas><matplotlib>
2023-11-22 12:47:58
1
19,485
ℕʘʘḆḽḘ
77,530,056
10,962,766
Improving performance when looping through country codes in App Store Scraper
<p>I am using the <a href="https://pypi.org/project/app-store-scraper/" rel="nofollow noreferrer">App Store Scraper</a> to get podcast reviews from the Apple Store. One thing students and I realised is that, naturally, popular international podcasts get reviews from several countries, and when we want to catch them all, we need to loop through the country codes. As even people in Sweden or Poland may comment on a BBC podcast in English, we did not want to exclude any countries but use the whole set, which I have (as a starting point) hard-coded as follows:</p> <pre><code># Select country codes # full list of countries where Apple podcasts are available has been shared on Gitlab countries=[&quot;DZ&quot;, &quot;AO&quot;, &quot;AI&quot;, &quot;AR&quot;, &quot;AM&quot;, &quot;AU&quot;, &quot;AT&quot;, &quot;AZ&quot;, &quot;BH&quot;, &quot;BB&quot;, &quot;BY&quot;, &quot;BE&quot;, &quot;BZ&quot;, &quot;BM&quot;, &quot;BO&quot;, &quot;BW&quot;, &quot;BR&quot;, &quot;VG&quot;, &quot;BN&quot;, &quot;BG&quot;, &quot;CA&quot;, &quot;KY&quot;, &quot;CL&quot;, &quot;CN&quot;, &quot;CO&quot;, &quot;CR&quot;, &quot;HR&quot;, &quot;CY&quot;, &quot;CZ&quot;, &quot;DK&quot;, &quot;DM&quot;, &quot;EC&quot;, &quot;EG&quot;, &quot;SV&quot;, &quot;EE&quot;, &quot;FI&quot;, &quot;FR&quot;, &quot;DE&quot;, &quot;GH&quot;, &quot;GB&quot;, &quot;GR&quot;, &quot;GD&quot;, &quot;GT&quot;, &quot;GY&quot;, &quot;HN&quot;, &quot;HK&quot;, &quot;HU&quot;, &quot;IS&quot;, &quot;IN&quot;, &quot;ID&quot;, &quot;IE&quot;, &quot;IL&quot;, &quot;IT&quot;, &quot;JM&quot;, &quot;JP&quot;, &quot;JO&quot;, &quot;KE&quot;, &quot;KW&quot;, &quot;LV&quot;, &quot;LB&quot;, &quot;LT&quot;, &quot;LU&quot;, &quot;MO&quot;, &quot;MG&quot;, &quot;MY&quot;, &quot;ML&quot;, &quot;MT&quot;, &quot;MU&quot;, &quot;MX&quot;, &quot;MS&quot;, &quot;NP&quot;, &quot;NL&quot;, &quot;NZ&quot;, &quot;NI&quot;, &quot;NE&quot;, &quot;NG&quot;, &quot;NO&quot;, &quot;OM&quot;, &quot;PK&quot;, &quot;PA&quot;, &quot;PY&quot;, 
&quot;PE&quot;, &quot;PH&quot;, &quot;PL&quot;, &quot;PT&quot;, &quot;QA&quot;, &quot;MK&quot;, &quot;RO&quot;, &quot;RU&quot;, &quot;SA&quot;, &quot;SN&quot;, &quot;SG&quot;, &quot;SK&quot;, &quot;SI&quot;, &quot;ZA&quot;, &quot;KR&quot;, &quot;ES&quot;, &quot;LK&quot;, &quot;SR&quot;, &quot;SE&quot;, &quot;CH&quot;, &quot;TW&quot;, &quot;TZ&quot;, &quot;TH&quot;, &quot;TN&quot;, &quot;TR&quot;, &quot;UG&quot;, &quot;UA&quot;, &quot;AE&quot;, &quot;US&quot;, &quot;UY&quot;, &quot;UZ&quot;, &quot;VE&quot;, &quot;VN&quot;, &quot;YE&quot;] </code></pre> <p>Then I loop through this list to get the reviews, which works OK -- but the process is very slow! Whenever a podcast does not have any reviews, the app store scraper (according to the notifications I get) tries the request 20 times before moving on to the next item, so the loop takes ages. How can I make the process faster, e.g. by forcing the script to move on if the first request is unsuccessful? This is what I have so far:</p> <pre><code># Set podcast details app_id = 1614435903 app_name = '28ish-days-later' # important: country codes will be selected from the list above # Set output path path_out = &quot;podcast_reviews&quot; filename_csv = f'{app_name}_reviews_table.csv' file_csv = directory + path_out + filename_csv # Optional: use (how_many=n) after sys.review to limit output # otherwise all reviews are fetched for c in countries: # Create class object sysk = Podcast(country=c, app_name=app_name, app_id=app_id) sysk.review() print(f&quot;No. of reviews found for country {c}:&quot;) #pprint(sysk.reviews) pprint(sysk.reviews_count) # NOTE: the review count seen on the landing page differs from the actual number of reviews fetched. # This is simply because only some users who rated the app also leave reviews.
</code></pre> <p>The notification I get in the output when no reviews are found is this:</p> <pre><code>ERROR:Base:Something went wrong: HTTPSConnectionPool(host='amp-api.podcasts.apple.com', port=443): Max retries exceeded with url: /v1/catalog/dz/podcasts/1614435903/reviews?l=en-GB&amp;offset=0&amp;limit=20 (Caused by ResponseError('too many 404 error responses')) No. of reviews found for country DZ: 0 </code></pre> <p>My first attempt was to include a <code>try</code> and <code>except</code>, but that does not stop the script from attempting the max retries before raising the error, so I got rid of it. Perhaps it is possible to give the script a &quot;how_many=1&quot; limitation for all country codes and write only the ones that retrieve a result to a new list before starting the loop. I will post this as an answer if it works.</p>
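The 20 retries appear to come from the scraper's underlying HTTP session rather than anything in this loop, and I have not verified a public option to lower them. What helps regardless is running the per-country fetches concurrently, so the retry stalls overlap instead of queueing. A sketch with a stand-in fetch function (whether `Podcast` is safe to construct and use per-thread is an assumption to verify):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_reviews(country):
    # stand-in for: p = Podcast(country=country, app_name=app_name, app_id=app_id)
    #               p.review(); return country, p.reviews_count
    return country, 0

countries = ["DZ", "AO", "US", "GB"]   # abbreviated list for the sketch
counts = {}
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = {pool.submit(fetch_reviews, c): c for c in countries}
    for fut in as_completed(futures):
        country, n = fut.result()
        counts[country] = n

print(sorted(counts))  # ['AO', 'DZ', 'GB', 'US']
```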
<python><web-scraping><podcast>
2023-11-22 12:45:45
1
498
OnceUponATime
77,529,961
16,844,801
How to sort and query jsonb data from postgresql query
<p>I have a similarity matrix dataframe. The dataframe has two columns: movie_id and similar_movies. The column similar_movies is a jsonb object showing cosine similarity to other movies. Example row 3 (movie_id = 3) as a dict:</p> <pre><code>{'movie_id': '3', 'similar_movies': '{&quot;0&quot;: 0.014912461842819367, &quot;1&quot;: 0.012710171518762043, &quot;2&quot;: 0.0013154224298452897, &quot;3&quot;: 1.0000000000000004, &quot;4&quot;: 0.013113069599780204, </code></pre> <p>Or like this in the df:</p> <p><a href="https://i.sstatic.net/v5ARa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v5ARa.png" alt="df" /></a></p> <p>The movie id is 3, and as you can see the similarity of 3 to itself is 1, while the rest are cosine similarity values. However, the dataset is very large and has tens of thousands of movies, so it'd be more efficient if I could fetch only the top 10 movies at the database level rather than doing the clean-up in Python.</p> <p>How can I query the jsonb object so that it returns only the top 10 similarities? For example I want the output to be:</p> <pre><code>&quot;similar_movies&quot;: { &quot;3&quot;: 1.0000000000000004, &quot;255556&quot;: .8995446778842004, &quot;512&quot;: .86214312412155666} </code></pre> <p>The equivalent of this:</p> <p><code>dict(sorted(dict_sim.items(), key=lambda x: x[1], reverse=True)[:10])</code></p> <p>but in query form.</p>
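On the PostgreSQL side, `jsonb_each_text` can expand the object into (key, value) rows, which then sort and limit server-side. A sketch, assuming a table named `movies` with the column names from the question:

```sql
-- expand similar_movies into one row per (movie, similarity), rank, keep 10
SELECT j.key          AS other_movie_id,
       j.value::float AS similarity
FROM   movies AS m,
       jsonb_each_text(m.similar_movies) AS j
WHERE  m.movie_id = '3'
ORDER  BY j.value::float DESC
LIMIT  10;
```

If the result must come back as a single jsonb object in the shape shown, the ten rows can be re-aggregated with `jsonb_object_agg` around a subquery.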
<python><postgresql>
2023-11-22 12:31:34
1
434
Baraa Zaid
77,529,913
3,752,185
Pydantic model_validate with generic abstract classes
<p>I have a generic response class</p> <pre><code>RESP = TypeVar(&quot;RESP&quot;, bound=BaseModel) class TestResponse(BaseModel, Generic[RESP]): status: str data: RESP </code></pre> <p>And an example model class</p> <pre><code>class Example(BaseModel): description: str ExampleResponse = RootModel[list[Example]] </code></pre> <p>I have an abstract method and an implementation that returns response_model</p> <pre><code># file 1 @abstractmethod def response_model(self) -&gt; Type[RESP]: pass # file 2 - implementation def response_model(self) -&gt; Type[ExampleResponse]: return ExampleResponse </code></pre> <p>And am trying to validate the model in the abstract class using</p> <pre><code>response = {&quot;status&quot;: &quot;success&quot;, &quot;data&quot;: [{&quot;description&quot;: &quot;ok&quot;}]} resp_model = self.response_model() return TestResponse[resp_model].model_validate(response) </code></pre> <p>the code seems to work (no exceptions thrown and if I change spelling of &quot;description&quot; to &quot;descriptions&quot; in response.data[0] an exception will be thrown as expected). However, MyPy shows an error</p> <blockquote> <p>Mypy: The type &quot;Type[ConsumerResponse[Any]]&quot; is not generic and not indexable [misc]</p> </blockquote> <p>and</p> <blockquote> <p>Mypy: Type expected within [...] [misc]</p> </blockquote> <p>What is the cause of that and how to fix it?</p>
<python><mypy><pydantic>
2023-11-22 12:23:16
0
2,711
Janar
77,529,835
14,374,599
Passing arguments to a nested function in Python
<p>I have two functions, called <code>update_y_with_x</code> and <code>update_x</code>, which are as follows:</p> <p><em>update_y_with_x</em></p> <pre><code>def update_y_with_x(chosen_tissue): for i, trace in enumerate(f.data): if trace.type == &quot;scatter&quot;: # reset all line width f.data[i].line.width = 1 if chosen_tissue in trace.text: f.data[i].line.width = 1 + 9 </code></pre> <p><em>update_x</em></p> <pre><code> def update_x(trace, points, selector): if len(points.point_inds) == 0: return # Remove existing bar shades and linked lines' highlights f.layout.shapes = () for i, trace in enumerate(f.data): if trace.type == &quot;scatter&quot; and f.data[i].line.width != 1: f.data[i].line.width = 1 # Get tissue identity chosen_tissue = f.data[points.trace_index].y[points.point_inds[0]] # Get the index of the chosen tissue possible_trace_indices = [ i for i, trace in enumerate(f.data) if trace.type == &quot;bar&quot; ] chosen_tissue_ranks = [ list(f.data[index].y).index(chosen_tissue) for index in possible_trace_indices ] for i, _ in enumerate(possible_trace_indices): # Specify the y-range around the clicked bar for shading y0 = chosen_tissue_ranks[i] + 0.5 y1 = y0 - 1 f.add_hrect( y0=y0, y1=y1, line_width=0, fillcolor=&quot;grey&quot;, opacity=0.3, row=1, col=i * 2 + 1, ) # Update lines update_y_with_x(chosen_tissue) </code></pre> <p>As you can see, update_x calls the update_y_with_x function, at the moment, both use a hard-coded plotly go figurewidget, where</p> <p><code>f = go.FigureWidget(fig)</code></p> <p>and are called like so:</p> <pre><code>for i, trace in enumerate(f.data): if trace.type == 'bar': # Check if it's a barplot f.data[i].on_click(update_x) </code></pre> <p>I would like to do two things:</p> <ol> <li><p>Move these two functions to an independent function file and import them into the main.ipynb file I'm working with</p> </li> <li><p>Pass <code>f</code> as a parameter to each function, rather than having it hard - coded as it currently exists 
in the function, and make sure that it is passed to <code>update_y_with_x</code> prior to it being used in <code>update_x</code></p> </li> </ol> <p><code>update_x</code> handles click events on a bar plot. When a bar is clicked, it removes existing shapes, resets line widths for linked scatter plots, adds shaded rectangles around the clicked bars, and updates lines which link these bars together.</p> <p><strong>What I tried:</strong></p> <p>I initially exported these functions to a new file called &quot;click_events.py&quot;, and added in the argument for each <code>figure_widget</code>, like so:</p> <ul> <li><p>def update_x(<code>trace, points, selector, figure_widget):</code> (replaced f in file)</p> </li> <li><p>def update_y_with_x(<code>chosen_tissue, figure_widget):</code> (replaced f in file)</p> </li> <li><p>modify the code in <code>update_x</code> so that <code>update_y_with_x(chosen_tissue, figure_widget = figure_widget)</code></p> </li> </ul> <p>However, I believe that when I import and try to run these functions in the way mentioned above, the figure_widget isn't being passed to <code>update_y_with_x</code>.</p>
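plotly's `on_click` invokes the handler with exactly `(trace, points, selector)`, so the usual way to thread an extra argument like the figure widget through is `functools.partial`, binding it at registration time. A minimal stand-in sketch (a plain dict plays the role of `go.FigureWidget`, and the "Liver" tissue name is made up):

```python
from functools import partial

# click_events.py: both handlers take the widget explicitly
def update_y_with_x(chosen_tissue, figure_widget):
    figure_widget["highlighted"] = chosen_tissue

def update_x(trace, points, selector, figure_widget):
    # ...derive chosen_tissue from points, then delegate:
    update_y_with_x("Liver", figure_widget=figure_widget)

# main.ipynb: bind the widget once, register a 3-argument callable
figure_widget = {}                                   # stand-in for go.FigureWidget(fig)
handler = partial(update_x, figure_widget=figure_widget)
# f.data[i].on_click(handler)                        # real registration
handler("trace", "points", "selector")               # simulate a click
print(figure_widget)  # {'highlighted': 'Liver'}
```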
<python><nested><plotly>
2023-11-22 12:10:08
0
497
KLM117
77,529,638
14,408,656
How to correctly read .exr files in python?
<p>like the title says I want to read .exr image files which encode depth.</p> <p>Sample image - displayed using openexr-viewer<a href="https://i.sstatic.net/Fu2jr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fu2jr.png" alt="" /></a> (Source: <a href="https://synscapes.on.liu.se/" rel="nofollow noreferrer">Synscapes datatset</a>)</p> <p>I tried:</p> <pre><code>os.environ[&quot;OPENCV_IO_ENABLE_OPENEXR&quot;]=&quot;1&quot; import cv2 import numpy as np exr_file_path = 'path/to/file' image = cv2.imread(exr_file_path, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH) </code></pre> <p>The dimensions of image are correct, but the image is completely filled with zeros.</p> <p>Debugger shows this:</p> <pre><code>-&gt;image.shape (720, 1440) -&gt;np.unique(image) array([0.], dtype=float32) </code></pre> <p>What am I missing ?</p> <p>My goal would be to create a function which accepts a path to an .exr file and returns the image as a pillow image.</p> <p>Currently this is what I have:</p> <pre><code>def exr_to_pil(exr_file_path): exr_file_path = str(exr_file_path) image = cv2.imread(exr_file_path, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH) # Debugger stops here to check output of imread img = Image.fromarray(image.astype('uint8')) return img </code></pre>
<python><image><opencv><image-processing><openexr>
2023-11-22 11:43:12
1
484
Chris
77,529,079
21,034,926
Extract informations from excel file if a certain column is not empty
<p>I'm trying to extract some information from an Excel file using Python and openpyxl. I'm still learning, so I don't know the language well. I've written this code, which extracts data from the Excel file and prints it to the console.</p> <pre class="lang-py prettyprint-override"><code># Importing openpyxl import openpyxl # loading excel file wb = openpyxl.load_workbook(&quot;BEVANDE.xlsx&quot;) # loading active sheet ws = wb.active # looping sheet for values for row in ws.values: for value in row: print(type(value)) </code></pre> <p>The output looks something like the following, and for certain extracted rows, like this one, I will also have a fourth value, which is a price variation.</p> <pre><code>80369691 VITASNELLA ACQUA ML.1500 0,28 -0,01 </code></pre> <p>Using the <code>type()</code> function I'm able to see that all the output values are strings. What I need to achieve is to get only the products that have a price variation, like in the example. What do I need to do with the strings, and how can I achieve this using an if statement?</p>
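Each row from `ws.values` arrives as a plain tuple, with `None` for empty cells, so the filter is ordinary Python: keep the rows whose fourth element is not `None`. A sketch with literal tuples standing in for the worksheet rows (the second row's values are invented):

```python
# rows as openpyxl's ws.values would yield them (None marks an empty cell)
rows = [
    ("80369691", "VITASNELLA ACQUA ML.1500", "0,28", "-0,01"),
    ("80369692", "ACQUA MINERALE ML.500", "1,00", None),
]

with_variation = [row for row in rows if row[3] is not None]
for code, name, price, variation in with_variation:
    print(code, name, price, variation)  # 80369691 VITASNELLA ACQUA ML.1500 0,28 -0,01
```

If arithmetic is needed later, the comma-decimal strings can be converted with `float(variation.replace(',', '.'))`.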
<python><excel><openpyxl>
2023-11-22 10:22:27
0
501
OHICT
77,529,042
859,141
How to mask an int with bitfields
<p>I have a series of 25 flags represented by Bitfields and a session integer which represents any flags set at this time. I want to mask the integer e.g. 268698112 and see what flags are set.</p> <pre class="lang-py prettyprint-override"><code>class Flags: flaga = 0x0001 flagi = 0x0100 flagp = 0x8000 flagq = 0x010000 flagv = 0x10000000 </code></pre> <p>I have tried the following two lines but they only return zero.</p> <pre class="lang-py prettyprint-override"><code>test_a = session &amp; flag_mask test_b = (session &gt;&gt; flag_mask) &amp; 1 </code></pre> <p>I have looked at <a href="https://stackoverflow.com/questions/34174198/extract-bitfields-from-an-int-in-python">Extract bitfields from an int in Python</a> and although I can see Bitstrings might be helpful I cannot see how to then add the integer into the mix without getting an error. For example I have tried:</p> <pre class="lang-py prettyprint-override"><code>bitstring.Bits(session) &amp; bitstring.BitArray(flag_mask) (bitstring.Bits(session) &gt;&gt; bitstring.BitArray(flag_mask)) &amp; 1 </code></pre> <p>but get:</p> <pre class="lang-py prettyprint-override"><code>Bitstrings must have the same length for and operator &lt; not supported between instances of &quot;BitArray&quot; and &quot;int&quot; </code></pre> <p>How can I extract true or false values from the session int for the given flags?</p>
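The masking itself only needs `session & flag`, which is non-zero exactly when the flag's bit is set; the earlier attempt `session >> flag_mask` fails because `>>` expects a bit position, not a mask. Checking the example value 268698112 (0x10040200):

```python
class Flags:
    flaga = 0x0001
    flagi = 0x0100
    flagp = 0x8000
    flagq = 0x010000
    flagv = 0x10000000

def set_flags(session, flags_cls):
    # a flag is set when ANDing session with its bit mask is non-zero
    return [name for name, bit in vars(flags_cls).items()
            if not name.startswith("_") and session & bit]

session = 268698112                 # == 0x10040200
print(bool(session & Flags.flagv))  # True  (the 0x10000000 bit is set)
print(bool(session & Flags.flaga))  # False
print(set_flags(session, Flags))    # ['flagv']
```

To use a shift instead, the mask must first be converted to a bit index: `(session >> (flag.bit_length() - 1)) & 1`.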
<python><bit-fields>
2023-11-22 10:17:11
1
1,184
Byte Insight
77,528,844
521,347
Getting asyncio.run() cannot be called from a running event loop
<p>I have a python application which is running inside a uvicorn server. I have created a Pub/sub subscriber and trying to enable it from my main.py. I am using a streaming-pull subscription. Now, my requirement is that once the subscriber is created, control should come back to main.py instead of getting blocked in the subscriber listening for events</p> <p>The code for my subscriber is as follows-</p> <pre><code>from google.cloud import pubsub_v1 from app.services.subscription_service import save_bill_events from app.utils import constants from app.utils.logging_tracing_manager import get_logger import traceback print(&quot;Entered in bill_subscriber----------------------&quot;) logger = get_logger(__file__) def callback(message: pubsub_v1.subscriber.message.Message) -&gt; None: save_bill_events(message.data) message.ack() async def create_bill_subscriber(): subscriber = pubsub_v1.SubscriberClient() subscription_path = subscriber.subscription_path(&quot;{projectId}&quot;, constants.BILL_EVENT_SUBSCRIPTION_ID) # Limit the subscriber to only have fixed number of outstanding messages at a time. flow_control = pubsub_v1.types.FlowControl(max_messages=50) streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback, flow_control=flow_control) with subscriber: try: # When `timeout` is not set, result() will block indefinitely, # unless an exception is encountered first. streaming_pull_future.result() except Exception as e: # Even in case of an exception, subscriber should keep listening logger.error( f&quot;An error occurred while pulling message from subscription {constants.BILL_EVENT_SUBSCRIPTION_ID}&quot;, exc_info=True) traceback.print_exc() pass </code></pre> <p>From my main.py, I am trying to call above method using asyncio</p> <pre><code>asyncio.run(main=bill_subscriber.create_bill_subscriber()) </code></pre> <p>But I am seeing an error <code>RuntimeError: asyncio.run() cannot be called from a running event loop</code>. 
Am I not using <code>asyncio.run()</code> correctly?</p> <p>Is it possible that uvicorn runs the application within an event loop and therefore we can't start another event loop? If that is the case, is there another way to start the subscriber in the background?</p>
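uvicorn does indeed own a running event loop, which is exactly what `asyncio.run()` refuses to nest inside. Separately, `streaming_pull_future.result()` is a blocking call, not a coroutine, so even awaiting it directly would stall the loop. One pattern is to hand the blocking pull to a worker thread with `run_in_executor`, so control returns immediately; sketched below with a dummy blocking function standing in for the Pub/Sub pull:

```python
import asyncio
import time

def blocking_listen():
    # stands in for streaming_pull_future.result(), which blocks the caller
    time.sleep(0.05)
    return "subscriber finished"

async def startup():
    loop = asyncio.get_running_loop()        # reuse the already-running loop
    task = loop.run_in_executor(None, blocking_listen)
    # control returns here immediately; the rest of startup can proceed
    return await task                        # awaited here only so the demo ends

print(asyncio.run(startup()))  # subscriber finished
```

Inside the real app, `asyncio.run()` is dropped entirely: the `run_in_executor` call goes in an async startup hook, where `get_running_loop()` picks up uvicorn's loop.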
<python><async-await><python-asyncio><google-cloud-pubsub><publish-subscribe>
2023-11-22 09:45:04
1
1,780
Sumit Desai
77,528,736
14,471,688
How to unpack bits from packbits results?
<p>I am not familiar with np.packbits and I want to perform a huge XOR operation using it.</p> <p>Here is my toy example:</p> <pre><code>import numpy as np # Example arrays u_values = np.array([[True, True, True, True, True, True, False, True, True, True], [True, True, True, True, True, True, True, False, False, True], [True, True, True, True, False, True, False, True, True, True], [True, True, True, True, False, True, True, False, False, True], [True, True, True, True, True, False, False, False, False, True], [True, True, True, False, False, False, False, False, False, True], [True, False, False, False, False, False, False, False, False, True], [True, True, False, True, False, True, False, True, True, True]]) v_values = np.array([[True, True, True, True, True, True, False, True, True, True], [True, False, True, False, True, True, True, False, False, True], [True, True, True, True, False, True, False, True, True, True], [True, False, True, True, False, False, True, False, False, True], [True, True, False, True, True, False, False, False, False, True], [True, True, True, False, False, True, False, False, False, True]]) </code></pre> <p>I tried the following code:</p> <pre><code>u_values_packed = np.packbits(u_values, axis=1) v_values_packed = np.packbits(v_values, axis=1) result_packed = u_values_packed[:, None, :] ^ v_values_packed[None, :, :] result_packed = result_packed.reshape((-1, result_packed.shape[2])) result_unpacked = np.unpackbits(result_packed, axis=1) print(result_unpacked.shape) #(48,16) </code></pre> <p>I got the output shape (48,16) instead of (48,10).</p> <p>How can I get the desired result so that I can perform XOR for each line of u_values with v_values, resulting in a total dimension of (48,10)?</p> <p>Here is the entire example output:</p> <pre><code>[[False False False False False False False False False False] [False True False True False False True True True False] [False False False False True False False False False
False] [False True False False True True True True True False] [False False True False False True False True True False] [False False False True True False False True True False] [False False False False False False True True True False] [False True False True False False False False False False] [False False False False True False True True True False] [False True False False True True False False False False] [False False True False False True True False False False] [False False False True True False True False False False] [False False False False True False False False False False] [False True False True True False True True True False] [False False False False False False False False False False] [False True False False False True True True True False] [False False True False True True False True True False] [False False False True False False False True True False] [False False False False True False True True True False] [False True False True True False False False False False] [False False False False False False True True True False] [False True False False False True False False False False] [False False True False True True True False False False] [False False False True False False True False False False] [False False False False False True False True True False] [False True False True False True True False False False] [False False False False True True False True True False] [False True False False True False True False False False] [False False True False False False False False False False] [False False False True True True False False False False] [False False False True True True False True True False] [False True False False True True True False False False] [False False False True False True False True True False] [False True False True False False True False False False] [False False True True True False False False False False] [False False False False False True False False False False] [False True True True True True False True True False] 
[False False True False True True True False False False] [False True True True False True False True True False] [False False True True False False True False False False] [False True False True True False False False False False] [False True True False False True False False False False] [False False True False True False False False False False] [False True True True True False True True True False] [False False True False False False False False False False] [False True True False False True True True True False] [False False False False True True False True True False] [False False True True False False False True True False]] </code></pre>
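The 16 comes from byte padding: `packbits` rounds each 10-bit row up to 2 bytes, and `unpackbits` returns all 16 bits unless told otherwise. Its `count` argument (or plain slicing, `[:, :10]`) trims back to the original width. A sketch verifying the round trip against the plain boolean XOR:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random((8, 10)) > 0.5
v = rng.random((6, 10)) > 0.5

up = np.packbits(u, axis=1)      # 10 bits packed into 2 bytes per row
vp = np.packbits(v, axis=1)
xor = (up[:, None, :] ^ vp[None, :, :]).reshape(-1, up.shape[1])

out = np.unpackbits(xor, axis=1, count=10).astype(bool)  # count drops the 6 pad bits
print(out.shape)  # (48, 10)

# identical to XOR-ing the unpacked booleans directly
expected = (u[:, None, :] ^ v[None, :, :]).reshape(-1, 10)
assert np.array_equal(out, expected)
```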
<python><numpy><bit-manipulation>
2023-11-22 09:29:08
1
381
Erwin
77,528,673
8,318,946
Celery Task ECS Termination Issue - Need Help Updating Decorator for Handling ProtectionEnabled State Changes
<p><strong>Explanation:</strong></p> <p>I have a Django application where I am running multiple Celery tasks on AWS Elastic Container Service (ECS), using SQS as the broker. I am encountering an issue where the Celery tasks are being started in an existing ECS task once the previous one is completed. The issue arises because my decorator changes the status of ProtectionEnabled from true to false, and after a couple of seconds, the ECS task is terminated. The newly started task then fails to work. Below is the command I am running to start celery task.</p> <pre><code>celery -A myapp_settings.celery worker --concurrency=1 l info -Q sqs-celery </code></pre> <p>I am using alerts on CloudWatch to check messages in broker and terminate those ECS tasks that are completed. The problem is that celery is starting task in existing ECS task once the previous one was completed. It would not be a problem but my decorator changes the status of ProtectionEnabled from true to false and after 20 seconds ECS task is terminated and newly started task is not working anymore.</p> <p><strong>Question:</strong></p> <p>I am considering updating my decorator to change back the ProtectionEnabled value from false to true if a new Celery task starts, but I am unsure how to implement this.</p> <p><strong>Code:</strong></p> <p>container_decorator.py</p> <pre><code>class ContainerAgent: class Error(Exception): pass class RequestError(Error, IOError): pass def __init__( self, ecs_agent_uri: str, timeout: int = 10, session: requests.Session = None, logger: logging.Logger = None, ) -&gt; None: self._ecs_agent_uri = ecs_agent_uri self._timeout = timeout self._session = session or requests.Session() self._logger = logger or logging.getLogger(self.__class__.__name__) def _request(self, *, path: str, data: Optional[dict] = None) -&gt; dict: url = f&quot;{self._ecs_agent_uri}{path}&quot; self._logger.info(f&quot;Performing request... 
{url=} {data=}&quot;) try: response = self._session.put(url=url, json=data, timeout=self._timeout) self._logger.info(f&quot;Got response. {response.status_code=} {response.content=}&quot;) response.raise_for_status() return response.json() except requests.RequestException as e: response_body = e.response.text if e.response is not None else None self._logger.warning(f&quot;Request error! {url=} {data=} {e=} {response_body=}&quot;) raise self.RequestError(str(e)) from e def toggle_scale_in_protection(self, *, enable: bool = True, expire_in_minutes: int = 2880): response = self._request( path=&quot;/task-protection/v1/state&quot;, data={&quot;ProtectionEnabled&quot;: enable, &quot;ExpiresInMinutes&quot;: expire_in_minutes}, ) try: return response[&quot;protection&quot;][&quot;ProtectionEnabled&quot;] except KeyError as e: raise self.Error(f&quot;Task scale-in protection endpoint error: {response=}&quot;) from e def enable_scale_in_protection(*, logger: logging.Logger = None): def decorator(f): if not (ecs_agent_uri := os.getenv(&quot;ECS_AGENT_URI&quot;)): (logger or logging).warning(f&quot;Scale-in protection not enabled. {ecs_agent_uri=}&quot;) return f client = ContainerAgent(ecs_agent_uri=ecs_agent_uri, logger=logger) @wraps(f) def wrapper(*args, **kwargs): try: client.toggle_scale_in_protection(enable=True) except client.Error as e: (logger or logging).warning(f&quot;Scale-in protection not enabled. 
{e}&quot;) protection_set = False else: protection_set = True try: return f(*args, **kwargs) finally: if protection_set: client.toggle_scale_in_protection(enable=False) return wrapper return decorator </code></pre> <p>celery_tasks.py</p> <pre><code>@shared_task(name=&quot;add_spider_schedule&quot;, base=AbortableTask) @enable_scale_in_protection(logger=get_task_logger(__name__)) def add_spider_schedule(user_id, spider_id): settings_module = os.environ.get('DJANGO_SETTINGS_MODULE') if settings_module == 'myapp_settings.settings.production': return add_spider_schedule_production(user_id, spider_id, add_spider_schedule) else: return print('Unknown settings module') def add_spider_schedule_production(user_id, spider_id, task_object): &quot;&quot;&quot; Adds the schedule for the specified spider. :param spider_id: The ID of the spider to schedule. :return: A string representation of the spider and task IDs. &quot;&quot;&quot; # below is the logging setup to include all 'prints' (both in the below &amp; in the spider script) in the logger logger = get_task_logger(task_object.request.id) old_outs = sys.stdout, sys.stderr rlevel = add_spider_schedule.app.conf.worker_redirect_stdouts_level add_spider_schedule.app.log.redirect_stdouts_to_logger(logger, rlevel) # Get the Spider model instance spider = Spider.objects.get(id=spider_id) # Get the current user user = User.objects.get(id=user_id) # Get the names of the relevant files from the model instance spider_config_file = spider.spider_config_file.file yaml_config_file = spider.yaml_config_file.file template_file = spider.template_file.file mongodb_database_name = spider.mongodb_collection.database_name mongodb_collection_name = spider.mongodb_collection.collection_name # Read the contents of the files from the S3 bucket spider_config_file_contents = load_content_from_s3(AWS_STORAGE_BUCKET_NAME, rf&quot;{PUBLIC_MEDIA_LOCATION}/{spider_config_file}&quot;) yaml_config_path = load_content_from_s3(AWS_STORAGE_BUCKET_NAME, 
rf&quot;{PUBLIC_MEDIA_LOCATION}/{yaml_config_file}&quot;) input_file_path = load_content_from_s3(AWS_STORAGE_BUCKET_NAME, rf&quot;{PUBLIC_MEDIA_LOCATION}/{template_file}&quot;) # Convert the JSON-encoded keyword arguments to a dictionary kwargs = json.loads(spider.kwargs) if spider.kwargs else {} # Create a module from the contents of the spider_config_file spider_module = import_module(spider_config_file_contents, &quot;spider_config&quot;) is_scraping_finished = False async def run_spider(): try: await spider_module.run( yaml_config_path=yaml_config_path, # page_type = page_type, # fields_to_scrape = fields_to_scrape, input_file_path=input_file_path, mongodb_name=mongodb_database_name, mongodb_collection_name=mongodb_collection_name, task_object=task_object, mode=&quot;sf-lab&quot;, **kwargs ) nonlocal is_scraping_finished # print(f&quot;CELERY TASK OBJECT DETAILS: {task_object.request}&quot;) is_scraping_finished = True except Exception as e: raise Exception(f&quot;An error occurred while running the spider: {e}&quot;) async def check_if_aborted(): while True: if task_object.is_aborted(): print(&quot;Parralel function detected that task was cancelled.&quot;) raise Exception(&quot;task was cancelled&quot;) elif is_scraping_finished: # print(&quot;Scraping finished - breaking check-if-aborted loop&quot;) break await asyncio.sleep(0.1) loop = asyncio.get_event_loop() loop.run_until_complete(asyncio.gather(run_spider(), check_if_aborted())) sys.stdout, sys.stderr = old_outs # needed for logging part return f&quot;[spider: {spider_id}, task_id: {task_object.request.id}]&quot; </code></pre>
<python><django><celery><amazon-ecs>
2023-11-22 09:19:48
1
917
Adrian
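One way to avoid the race described above (protection dropped between back-to-back tasks on the same ECS task) is to reference-count the toggle so protection is only released when no task is running. This is a minimal, framework-free sketch: `ProtectionGuard` and the `toggle` callable are hypothetical names; in the real code the callable would be `client.toggle_scale_in_protection` and `acquire`/`release` would wrap the decorated Celery task body.

```python
import threading

class ProtectionGuard:
    """Reference-counted toggle: protection stays on while any task runs."""
    def __init__(self, toggle):
        self._toggle = toggle          # callable: toggle(enable: bool)
        self._active = 0
        self._lock = threading.Lock()

    def acquire(self):
        with self._lock:
            self._active += 1
            if self._active == 1:
                self._toggle(True)     # first task in: enable protection

    def release(self):
        with self._lock:
            self._active -= 1
            if self._active == 0:
                self._toggle(False)    # last task out: allow scale-in

calls = []
guard = ProtectionGuard(lambda enable: calls.append(enable))

# Two overlapping tasks: the toggle is called once on, once off.
guard.acquire()   # task A starts -> protection enabled
guard.acquire()   # task B starts while A runs -> no extra call
guard.release()   # task A ends -> still protected
guard.release()   # task B ends -> protection released
```

In the decorator, `wrapper` would call `guard.acquire()` in place of `toggle_scale_in_protection(enable=True)` and `guard.release()` in the `finally` block.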
77,528,663
726,730
Why QMenu shortcuts didn't triggered from modal QDialog?
<p>For this question, i have two windows: one QMainWindow with a QMenu and one QPushButton, and one QDialog window with just a QLabel.</p> <p>When the user hits the QMenu action, a console message is appeared in the screen. This action has a shortcut key (F10).</p> <p>When the user clicks the button one modal QDialog is appeared. I used <code>show()</code> method instead of <code>exec()</code> or <code>exec_()</code> for no stop the mainwindow functionallity. The problem is that from QDialog i cannot trigger the action of QMenu in QMainWindow.</p> <p>If i re-define the trigger then i should use the shortcut, but how can i resolve it, with no many changes in my code?</p> <p><strong>File: untitled.py</strong></p> <pre class="lang-py prettyprint-override"><code>from PyQt5 import QtCore, QtGui, QtWidgets class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.resize(363, 62) self.centralwidget = QtWidgets.QWidget(MainWindow) self.gridLayout = QtWidgets.QGridLayout(self.centralwidget) self.pushButton = QtWidgets.QPushButton(self.centralwidget) self.gridLayout.addWidget(self.pushButton, 0, 0, 1, 1) MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtWidgets.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 363, 21)) self.menuMenu = QtWidgets.QMenu(self.menubar) MainWindow.setMenuBar(self.menubar) self.actionMenu_action_1 = QtWidgets.QAction(MainWindow) self.menuMenu.addAction(self.actionMenu_action_1) self.menubar.addAction(self.menuMenu.menuAction()) self.pushButton.setText(&quot;Click to open modal Dialog&quot;) self.menuMenu.setTitle(&quot;Menu&quot;) self.actionMenu_action_1.setText(&quot;Menu action 1 (print in console)&quot;) self.actionMenu_action_1.setShortcut(&quot;F10&quot;) QtCore.QMetaObject.connectSlotsByName(MainWindow) </code></pre> <p><strong>File: untitled_2.py</strong></p> <pre class="lang-py prettyprint-override"><code>from PyQt5 import QtCore, QtGui, QtWidgets class Ui_Dialog(object): def setupUi(self, 
Dialog): Dialog.resize(228, 80) self.gridLayout = QtWidgets.QGridLayout(Dialog) self.label = QtWidgets.QLabel(Dialog) self.label.setAlignment(QtCore.Qt.AlignCenter) self.gridLayout.addWidget(self.label, 0, 0, 1, 1) self.label.setText(&quot;Press F10 to test shortcut&quot;) QtCore.QMetaObject.connectSlotsByName(Dialog) </code></pre> <p><strong>File run_app.py</strong></p> <pre class="lang-py prettyprint-override"><code>from PyQt5 import QtCore, QtGui, QtWidgets from untitled import Ui_MainWindow from untitled_2 import Ui_Dialog import sys import os class Run_me: def __init__(self): self.app = QtWidgets.QApplication(sys.argv) self.MainWindow = QtWidgets.QMainWindow() self.ui = Ui_MainWindow() self.ui.setupUi(self.MainWindow) self.MainWindow.show() self.ui.pushButton.clicked.connect(lambda state:self.open_q_dialog(state)) self.ui.actionMenu_action_1.triggered.connect(lambda:print(&quot;Action triggered&quot;)) sys.exit(self.app.exec_()) def open_q_dialog(self,state): self.Dialog = CustomQDialog() self.dialog_ui = Ui_Dialog() self.dialog_ui.setupUi(self.Dialog) self.Dialog.show() class CustomQDialog(QtWidgets.QDialog): def __init__(self,*args,**kwards): super().__init__(*args,**kwards) self.setModal(True) if __name__ == &quot;__main__&quot;: program = Run_me() </code></pre> <p>Maybe the action is not triggered because the QMainWindow is not focused. But it will never be focused because of <code>self.setModal(True)</code> in <code>Custom_QDialog</code>.</p>
<python><pyqt5><shortcut>
2023-11-22 09:18:29
1
2,427
Chris P
77,528,563
4,555,249
Serial communication logging with data transfer
<p>I am trying to develop a communication script for a device that is connected through serial port in python. I am sending and receiving data from/to the device using functions. Now what I would like to do is keep a full communication log. If I am sending and receiving data that I initiate this is trivial. However the device is capable to send data on its own in 'unexpected' times, for example the device sends a message when it is restarted. I struggle to come up with an idea how to log these unexpected events.</p> <p>My code so far:</p> <pre><code>import logging def create_logger(): logger = logging.getLogger('main_log') logger.setLevel(logging.DEBUG) formatter = logging.Formatter(fmt=&quot;%(asctime)s.%(msecs)02d\t%(message)s&quot;, datefmt='%H:%M:%S') fh = TimedRotatingFileHandler(f'logfile.log', when=&quot;midnight&quot;, interval=5) fh.suffix = &quot;%Y%m%d&quot; fh.setFormatter(formatter) strm = logging.StreamHandler() strm.setFormatter(formatter) logger.addHandler(fh) logger.addHandler(strm) return logger LOGGER = create_logger() port = serial.Serial(&quot;/dev/ttyUSB0&quot;, timeout=1, baudrate=115200, parity=serial.PARITY_EVEN, rtscts=False, bytesize=serial.SEVENBITS) def communication_function1(): LOGGER.info(&quot;TO DEVICE: something...&quot;) port.write(b&quot;somedata&quot;) reply = port.read() LOGGER.info(f&quot;FROM DEVICE: received some data {reply}&quot;) return 'success' def communication_function2(): LOGGER.info(&quot;TO DEVICE: other thing...&quot;) port.write(b&quot;otherdata&quot;) reply = port.read() LOGGER.info(f&quot;FROM DEVICE: received some other data {reply}&quot;) return 'success' </code></pre> <p>The communication functions work perfectly and the data I send/receive are logged perfectly. However, in an event the device sends some unexpected data I am not able to receive or log it. 
My current idea is to set up a thread that constantly communicates, and kill that thread at the beginning of each communication function, and restart it at the end.</p> <p>Idea for the threaded communication:</p> <pre><code>from time import sleep from threading import Thread from threading import Event def task(event): # execute a task in a loop while (True): # Check if incoming bytes are waiting to be read from the serial input # buffer. if (port.inWaiting() &gt; 0): # read the bytes and convert from binary array to ASCII data_str = port.read(port.inWaiting()) LOGGER.info(data_str) sleep(0.01) if event.is_set(): break # create the event event = Event() # create and configure a new thread thread = Thread(target=task, args=(event,)) def start_thread(): thread.start() def stop_thread(): event.set() thread.join() </code></pre> <p>Maybe a decorator for each communication_function to stop and start the &quot;background&quot; communication thread.</p> <p>However I feel like this is not the solution, and I am kind of stuck. What would be the proper way to implement something like this?</p>
<python><python-3.x><logging><serial-port><serial-communication>
2023-11-22 09:00:14
0
3,689
Gábor Erdős
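Rather than killing and restarting the reader thread around every command, a lock shared between the background reader and the command functions keeps each request/reply pair atomic while still logging unsolicited data. The sketch below is framework-free: `FakePort` stands in for the real pyserial port, and `LoggingPort`, `read_nowait`, and `exchange` are hypothetical names for illustration.

```python
import threading
import time

class LoggingPort:
    """Background thread logs unsolicited data; command() holds the lock
    so the reader cannot interleave with a request/reply exchange."""
    def __init__(self, port, log):
        self.port = port
        self.log = log
        self.lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._reader, daemon=True)
        self._thread.start()

    def _reader(self):
        while not self._stop.is_set():
            # only read when no command is in flight
            if self.lock.acquire(timeout=0.01):
                try:
                    data = self.port.read_nowait()
                    if data:
                        self.log(f"UNSOLICITED: {data}")
                finally:
                    self.lock.release()
            time.sleep(0.01)

    def command(self, out):
        with self.lock:                 # reader is blocked for the exchange
            self.log(f"TO DEVICE: {out}")
            reply = self.port.exchange(out)
            self.log(f"FROM DEVICE: {reply}")
            return reply

    def close(self):
        self._stop.set()
        self._thread.join()

class FakePort:
    """Stand-in for a pyserial port, for demonstration only."""
    def __init__(self):
        self.unsolicited = ["device restarted"]
    def read_nowait(self):
        return self.unsolicited.pop() if self.unsolicited else None
    def exchange(self, out):
        return f"ack:{out}"

log_lines = []
lp = LoggingPort(FakePort(), log_lines.append)
time.sleep(0.1)                         # give the reader a chance to log
reply = lp.command("somedata")
lp.close()
```

With a real port, `read_nowait` would wrap `port.read(port.inWaiting())` and `exchange` would be a write followed by a read, as in the question's communication functions.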
77,528,429
21,034,926
ModuleNotFoundError: No module Openpyxl found
<p>I'm learning python and I want to create a simple script to analyze some data from an excel file. I'm reading <a href="https://www.datacamp.com/tutorial/python-excel-tutorial" rel="nofollow noreferrer">this tutorial</a> and it suggests installing the openpyxl library using the command <code>pip install openpyxl</code></p> <p>I have this code in my python file at the moment</p> <pre class="lang-py prettyprint-override"><code># Test script for xls manipulation import openpyxl # Loading file xlsx workbook = openpyxl.load_workbook('BEVANDE.xlsx') </code></pre> <p>And if I try to run it using the VS Code debugger I get this error</p> <pre><code>No module named 'openpyxl' </code></pre> <p>I've read in another similar question that the pip command may install the module only for Python 2, and someone suggested using the pip3 install command, but I get these messages in the console</p> <pre><code>Requirement already satisfied: Openpyxl in c:\python312\lib\site-packages (3.1.2) Requirement already satisfied: et-xmlfile in c:\python312\lib\site-packages (from Openpyxl) (1.1.0) </code></pre> <p>How can I fix this and what is the correct way to import and install modules in python?</p>
<python><openpyxl>
2023-11-22 08:40:07
3
501
OHICT
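This symptom (pip says "already satisfied" but the import fails) usually means the debugger runs a different interpreter than the one pip installed into. A quick way to check is to print the interpreter the script is actually running under, then install with that exact executable via `python -m pip install openpyxl`:

```python
import sys

# The interpreter VS Code launches may not be the one pip installed into.
# Compare this path with the one pip reports, and install with
# `<this executable> -m pip install openpyxl` so they are guaranteed to match.
print(sys.executable)   # interpreter running this script
print(sys.prefix)       # its environment root
```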
77,528,415
9,974,205
How to join two dataframes for which column values are within a certain range and some values do not match?
<p>I have been reading <a href="https://stackoverflow.com/questions/46525786/how-to-join-two-dataframes-for-which-column-values-are-within-a-certain-range">this other post</a>, since I am dealing with a similar situation.</p> <p>However, I have a problem. In my version of <code>df_1</code>, I have timestamps which are outside of the values of the time ranges presented in <code>df_2</code>. Let's say I have an extra row</p> <pre><code>print df_1 timestamp A B 0 2016-05-14 10:54:33 0.020228 0.026572 1 2016-05-14 10:54:34 0.057780 0.175499 2 2016-05-14 10:54:35 0.098808 0.620986 3 2016-05-14 10:54:36 0.158789 1.014819 4 2016-05-14 10:54:39 0.038129 2.384590 5 2023-11-22 10:54:39 0.000500 6.258710 print df_2 start end event 0 2016-05-14 10:54:31 2016-05-14 10:54:33 E1 1 2016-05-14 10:54:34 2016-05-14 10:54:37 E2 2 2016-05-14 10:54:38 2016-05-14 10:54:42 E3 </code></pre> <p>I need to know how can I modify the solution to the previous post</p> <pre><code>df_2.index = pd.IntervalIndex.from_arrays(df_2['start'],df_2['end'],closed='both') df_1['event'] = df_1['timestamp'].apply(lambda x : df_2.iloc[df_2.index.get_loc(x)]['event']) </code></pre> <p>so that I get a null value for the fifth row, since I am getting now an error</p>
<python><pandas><dataframe><datetime><intervals>
2023-11-22 08:38:34
3
503
slow_learner
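One way to handle out-of-range timestamps (a sketch with a reduced version of the question's data): `IntervalIndex.get_indexer` returns `-1` for values outside every interval instead of raising like `get_loc`, so misses can be mapped to null.

```python
import pandas as pd

df_1 = pd.DataFrame({"timestamp": pd.to_datetime(
    ["2016-05-14 10:54:33", "2016-05-14 10:54:35", "2023-11-22 10:54:39"])})
df_2 = pd.DataFrame({
    "start": pd.to_datetime(["2016-05-14 10:54:31", "2016-05-14 10:54:34"]),
    "end":   pd.to_datetime(["2016-05-14 10:54:33", "2016-05-14 10:54:37"]),
    "event": ["E1", "E2"]})

idx = pd.IntervalIndex.from_arrays(df_2["start"], df_2["end"], closed="both")
# get_indexer gives the interval position per timestamp, or -1 for no match
pos = idx.get_indexer(df_1["timestamp"])
df_1["event"] = [df_2["event"].iloc[i] if i != -1 else None for i in pos]
```

Note `get_indexer` requires the intervals to be non-overlapping, which holds for the data shown.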
77,528,356
16,703,301
'gpiod' is not a package
<p>I'm trying to use libgpiod in python on my orangepi model 4B. I installed it with:</p> <pre><code>sudo apt install libgpiod-dev python3-libgpiod </code></pre> <p>Now I try to use it:</p> <pre><code>from gpiod.line import Direction, Value </code></pre> <p>But I get an error:</p> <pre><code>ModuleNotFoundError: No module named 'gpiod.line'; 'gpiod' is not a package </code></pre> <p>If I open python in terminal and import gpiod the autocomplete options for <code>gpiod.</code> are:</p> <pre><code>gpiod.Chip( gpiod.LINE_REQ_FLAG_OPEN_DRAIN gpiod.ChipIter( gpiod.LINE_REQ_FLAG_OPEN_SOURCE gpiod.LINE_REQ_DIR_AS_IS gpiod.Line( gpiod.LINE_REQ_DIR_IN gpiod.LineBulk( gpiod.LINE_REQ_DIR_OUT gpiod.LineEvent( gpiod.LINE_REQ_EV_BOTH_EDGES gpiod.LineIter( gpiod.LINE_REQ_EV_FALLING_EDGE gpiod.find_line( gpiod.LINE_REQ_EV_RISING_EDGE gpiod.version_string( gpiod.LINE_REQ_FLAG_ACTIVE_LOW </code></pre> <p>If I install gpiod through pip it says <code>module 'gpiod' has no attribute 'Chip'</code> when I try to use gpiod.Chip.</p> <p>What is wrong? Thanks in advance.</p>
<python><python-3.x><orange-pi><libgpiod>
2023-11-22 08:30:06
1
336
Dmitry
77,528,329
12,603,110
How to splat Powershell arguments in the same manner that Python *args, **kwargs work
<p>Both python and powershell support a form of splatting array and named arguments as function inputs, a very useful feature.</p> <p>However powershell seem to be internally inconsistent somewhat. I am trying to reproduce powershell code that behaves similarly to the following python code:</p> <pre class="lang-py prettyprint-override"><code>def echoName(Name, *args, **kwargs): print(&quot;Name is:&quot;, Name) def wrapper(*args, **kwargs): print(&quot;args:&quot;, args) print(&quot;kwargs:&quot;, kwargs) echoName(*args, **kwargs) d = { &quot;Name&quot;: &quot;John&quot;, &quot;Age&quot;: 25 } wrapper(**d) # args: () # kwargs: {'Name': 'John', 'Age': 25} # Name is: John </code></pre> <p>As far as I am aware <code>ValueFromRemainingArguments</code> is the only way to accept left over parameters in a powershell advanced function</p> <pre><code>function echoName { param( [CmdletBinding()] [string]$Name, [parameter(Mandatory = $False, ValueFromRemainingArguments = $True)] [Object[]] $Arguments ) Write-Host &quot;Name is: $Name&quot; } function wrapper { [CmdletBinding()] param( [parameter(Mandatory = $False, ValueFromRemainingArguments = $True)] [Object[]] $Arguments ) Write-Host &quot;Arguments is: $Arguments&quot; echoName @Arguments } $d = @{ Name = 'John' Age = 25 } wrapper @d # Arguments is: -Name: John -Age: 25 # Name is: -Name: </code></pre> <p>I have 3 issues with powershell's output</p> <ol> <li>Arguments is now an array</li> <li>the named arguments were prefixed with <code>-</code> and suffixed with <code>: </code></li> <li>this is a weird behavior at best:</li> </ol> <pre><code>$a = @(1,2,3) wrapper @a @d # Arguments is: 1 2 3 -Name: John -Age: 25 # Name is: 1 </code></pre> <p>How can I chain and only partially consume variables as possible in python?</p> <p><a href="https://stackoverflow.com/questions/51911385/what-is-the-difference-between-a-cmdlet-and-a-function/51912471#51912471fu">What is the difference between a cmdlet and a function?</a><br /> <a 
href="https://stackoverflow.com/questions/62622385/wrapper-function-for-cmdlet-pass-remaining-parameters">Wrapper function for cmdlet - pass remaining parameters</a><br /> <a href="https://stackoverflow.com/questions/55539278/is-there-a-way-to-create-an-alias-to-a-cmdlet-in-a-way-that-it-only-runs-if-argu/55539863#55539863">Is there a way to create an alias to a cmdlet in a way that it only runs if arguments are passed to the alias?</a></p>
<python><powershell><arguments><keyword-argument><parameter-splatting>
2023-11-22 08:25:25
1
812
Yorai Levi
77,528,014
1,068,879
How to run Huey consumer on Windows IIS
<p>I am working on a Django application which is deployed on a Windows Server using IIS. I want to use <code>Huey</code> to be able to run background tasks e.g. sending of emails and report generation. <code>Celery</code> does not support Windows hence my decision to use <code>Huey</code>.</p> <p>According to the <a href="https://huey.readthedocs.io/en/latest/contrib.html#running-the-consumer" rel="nofollow noreferrer">documentation</a>, you can run the consumer using the following command:</p> <p><code>./manage.py run_huey</code></p> <p>My question is, how and where will I need to run this command to work with my setup i.e. Windows and IIS? It should also work when the server is restarted.</p>
<python><django><iis><windows-server><python-huey>
2023-11-22 07:26:03
0
41,075
Frankline
77,527,951
9,586,338
How to cancel tasks in anyio.TaskGroup context?
<p>I write a script to find out the fastest one in a list of cdn hosts:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3.11 import time from contextlib import contextmanager from enum import StrEnum import anyio import httpx @contextmanager def timeit(msg: str): start = time.time() yield cost = time.time() - start print(msg, f&quot;{cost = }&quot;) class CdnHost(StrEnum): jsdelivr = &quot;https://cdn.jsdelivr.net/npm/swagger-ui-dist@5.9.0/swagger-ui.css&quot; unpkg = &quot;https://unpkg.com/swagger-ui-dist@5.9.0/swagger-ui.css&quot; cloudflare = ( &quot;https://cdnjs.cloudflare.com/ajax/libs/swagger-ui/5.9.0/swagger-ui.css&quot; ) TIMEOUT = 5 LOOP_INTERVAL = 0.1 async def fetch(client, url, results, index): try: r = await client.get(url) except (httpx.ConnectError, httpx.ReadError): ... else: print(f&quot;{url = }\n{r.elapsed = }&quot;) if r.status_code &lt; 300: results[index] = r.content class StopNow(Exception): ... async def find_fastest_host(timeout=TIMEOUT, loop_interval=LOOP_INTERVAL) -&gt; str: urls = list(CdnHost) results = [None] * len(urls) try: async with anyio.create_task_group() as tg: with anyio.move_on_after(timeout): async with httpx.AsyncClient() as client: for i, url in enumerate(urls): tg.start_soon(fetch, client, url, results, i) for _ in range(int(timeout / loop_interval) + 1): for res in results: if res is not None: raise StopNow await anyio.sleep(0.1) except ( StopNow, httpx.ReadError, httpx.ReadTimeout, httpx.ConnectError, httpx.ConnectTimeout, ): ... 
for url, res in zip(urls, results): if res is not None: return url return urls[0] async def main(): with timeit(&quot;Sniff hosts&quot;): url = await find_fastest_host() print(&quot;cdn host:&quot;, CdnHost) print(&quot;result:&quot;, url) if __name__ == &quot;__main__&quot;: anyio.run(main) </code></pre> <p>There are three cdn hosts (<a href="https://cdn.jsdelivr.net" rel="nofollow noreferrer">https://cdn.jsdelivr.net</a>, <a href="https://unpkg.com" rel="nofollow noreferrer">https://unpkg.com</a>, <a href="https://cdnjs.cloudflare.com" rel="nofollow noreferrer">https://cdnjs.cloudflare.com</a>). I make three concurrent async tasks to fetch them with httpx. If one of them gets a response with status_code&lt;300, then stop all tasks and return the right url. But I don't know how to cancel tasks without using a custom exception (in the script, <code>StopNow</code>).</p>
<python><httpx><anyio>
2023-11-22 07:12:35
2
6,549
Waket Zheng
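With anyio, the idiomatic answer is to call `tg.cancel_scope.cancel()` from inside the winning task (or after storing its result), which cancels the whole task group without any custom exception. The same cancel-on-first-result idea is sketched below with stdlib asyncio so it stays self-contained; `asyncio.sleep` stands in for the HTTP request.

```python
import asyncio

async def fetch(url, delay, results):
    await asyncio.sleep(delay)      # stand-in for the httpx request
    results[url] = f"content-from-{url}"

async def fastest(urls_delays):
    results = {}
    tasks = [asyncio.ensure_future(fetch(u, d, results))
             for u, d in urls_delays]
    # return as soon as the first task finishes; cancel the rest
    await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in tasks:
        t.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)
    return next(iter(results))

winner = asyncio.run(fastest([("slow", 0.2), ("fast", 0.01), ("mid", 0.1)]))
```

In the question's code, the polling loop and `StopNow` could then be dropped entirely: each `fetch` stores its result and cancels the scope itself.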
77,527,947
6,643,219
How to check email validation using regex in Django
<p>I'm trying to create a registration form for my Django project. I want to check if the user's email address is correct. I used built-in email validator, but it just check the domain. I need to prevent user to enter emails like this: john#doe@gmail.com etc. here is my models.py</p> <pre><code>class Profile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, null=True, blank=True) name = models.CharField(max_length=300, null=True, blank=True) email = models.EmailField(max_length=200, null=True, blank=True, validators=[validate_email]) username = models.CharField(max_length=200, null=True, blank=True) created = models.DateTimeField(auto_now_add=True) id = models.UUIDField(default=uuid.uuid4, unique=True, primary_key=True, editable=False) def __str__(self): return str(self.user.username) </code></pre> <p>forms.py</p> <pre><code>class CustomUserCreationForm(UserCreationForm): class Meta: model = User fields = ['first_name', 'email', 'username', 'password1', 'password2'] labels = {'first_name':'Name',} </code></pre> <p>signals.py</p> <pre><code>@receiver(post_save, sender=User) def createProfile(sender, instance, created, **kwargs): if created: user = instance profile = Profile.objects.create( user = user, username = user.username, email = user.email, name = user.first_name, ) </code></pre> <p>and views.py for registration</p> <pre><code>def registerUser(request): page = 'register' form = CustomUserCreationForm() if request.method == 'POST': form = CustomUserCreationForm(request.POST) if form.is_valid(): user = form.save(commit=False) user.username = user.username.lower() user.save() messages.success(request, 'User Account Was Created!') login(request, user) return redirect('profile') else: messages.error(request, 'An error has accurred during registration!') context = {'page' : page, 'form' : form} return render(request, 'users/login-register.html', context) </code></pre>
<python><django>
2023-11-22 07:11:00
1
457
Shima Erfan
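A stricter local-part check can be done with a plain regex before relying on Django's validator; the pattern below is a deliberately simplified restriction (real RFC-compliant addresses are far more permissive, and `#` is technically legal), shown as a minimal sketch that could be wired into the model via Django's `RegexValidator`.

```python
import re

# Simplified pattern: one '@', a restricted local part (no '#' etc.),
# and a dotted domain with a 2+ letter TLD.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(value):
    return bool(EMAIL_RE.match(value))

ok = is_valid_email("john.doe@gmail.com")    # accepted
bad = is_valid_email("john#doe@gmail.com")   # rejected, as the question wants
```

In the model this would look like `validators=[RegexValidator(EMAIL_RE.pattern)]` on the `email` field, alongside or instead of `validate_email`.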
77,527,847
7,067,333
Jax vmap limit memory
<p>I'm wondering if there is a good way to limit the memory usage for Jax's VMAP function? Equivalently, to vmap in batches at a time if that makes sense?</p> <p>In my specific use case, I have a set of images and I'd like to calculate the affinity between each pair of images; so ~order((num_imgs)^2 * (img shape)) bytes of memory used all at once if I'm understanding vmap correctly (which gets huge since in my real example I have 10,000 100x100 images).</p> <p>A basic example is:</p> <pre><code>def affininty_matrix_ex(n_arrays=10, img_size=5, key=jax.random.PRNGKey(0), gamma=jnp.array([0.5])): arr_of_imgs = jax.random.normal(jax.random.PRNGKey(0), (n_arrays, img_size, img_size)) arr_of_indices = jnp.arange(n_arrays) inds_1, inds_2 = zip(*combinations(arr_of_indices, 2)) v_cPA = jax.vmap(calcPairAffinity2, (0, 0, None, None), 0) affinities = v_cPA(jnp.array(inds_1), jnp.array(inds_2), arr_of_imgs, gamma) print() print(jax.make_jaxpr(v_cPA)(jnp.array(inds_1), jnp.array(inds_2), arr_of_imgs, gamma)) affinities = affinities.reshape(-1) arr = jnp.zeros((n_arrays, n_arrays), dtype=jnp.float16) arr = arr.at[jnp.triu_indices(arr.shape[0], k=1)].set(affinities) arr = arr + arr.T arr = arr + jnp.identity(n_arrays, dtype=jnp.float16) return arr def calcPairAffinity2(ind1, ind2, imgs, gamma): #Returns a jnp array of 1 float, jnp.sum adds all elements together image1, image2 = imgs[ind1], imgs[ind2] diff = jnp.sum(jnp.abs(image1 - image2)) normed_diff = diff / image1.size val = jnp.exp(-gamma*normed_diff) val = val.astype(jnp.float16) return val </code></pre> <p>I suppose I could just say something like &quot;only feed into vmap X pairs at a time, and loop through n_chunks = n_arrays/X, appending each groups results to a list&quot; but that doesn't seem to be ideal. My understanding is vmap does not like generators, not sure if that would be an alternative way around the issue.</p>
<python><jax>
2023-11-22 06:52:02
1
620
Evan Mata
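The batching idea ("only feed X pairs at a time") can itself be written as a plain loop over chunks of the pair index list, so peak memory scales with the chunk size rather than with all ~n² pairs at once. The sketch below is framework-free for illustration; in JAX, `pair_affinity` would be the `jax.vmap`-ed `calcPairAffinity2` applied to each chunk of index arrays (and newer JAX versions expose the same pattern directly).

```python
from itertools import combinations

def pair_affinity(i, j):
    return abs(i - j)            # stand-in for the vmapped affinity kernel

def chunked_affinities(n, chunk):
    """Process all C(n, 2) pairs, at most `chunk` pairs resident at once."""
    pairs = list(combinations(range(n), 2))
    out = []
    for start in range(0, len(pairs), chunk):
        batch = pairs[start:start + chunk]
        out.extend(pair_affinity(i, j) for i, j in batch)
    return out

vals = chunked_affinities(5, chunk=4)   # 10 pairs, processed 4 at a time
```

The results can then be scattered into the symmetric matrix exactly as in the question's `triu_indices` code.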
77,527,741
12,522,881
Replace pandas column with list values from another panadas dataframe
<p>I'm trying to modify the <code>main_df</code> dataframe by replacing its <strong>Product</strong> list values with corresponding Model list values from the <code>mapping_df</code>, but only if they are found in the lookup list <code>to_replace_values</code>.</p> <p>For example, the first <strong>Product</strong> value at index 0 in the <code>main_df</code> is replaced with all corresponding <strong>Model</strong> values from the <code>mapping_df</code>, but this is only applicable if the <strong>Product</strong> value is found in the lookup list <code>to_replace_values</code>.</p> <pre><code> main_df = pd.DataFrame(data = { 'Product': { 0: &quot;['Passenger Cars']&quot;, 1: &quot;['New Cars', 'Mercedes-Maybach']&quot;, 2: &quot;['New Cars', 'Mercedes-Benz C-Class']&quot;, 3: &quot;['New Cars', 'Mercedes-Benz C-Class', 'Mercedes-AMG CLA Coupe']&quot; } }) to_replace_values = [ 'Passenger Cars', 'Mercedes-Benz Vans', 'New Cars', 'New Vans', 'Mercedes-Maybach', 'Mercedes-Benz', 'Mercedes-AMG', 'EQ Technology' ] mapping_df = pd.DataFrame({ 'Product': { 0: 'New Cars', 1: 'New Cars', 2: 'Passenger Cars', 3: 'Passenger Cars', 4: 'Mercedes-Maybach', 5: 'Mercedes-Maybach' }, 'Model': { 0: 'Mercedes-Benz GLA', 1: 'Mercedes-Benz GLB', 2: 'Mercedes-Benz GLE', 3: 'Mercedes-Benz GLS', 4: 'Mercedes-Maybach GLS', 5: 'Mercedes-Maybach S-Class' }}) expected_outputs = pd.DataFrame( data = { 'Product': { 0: &quot;['Mercedes-Benz GLE', 'Mercedes-Benz GLS']&quot;, 1: &quot;['Mercedes-Benz GLA', 'Mercedes-Benz GLB', 'Mercedes-Maybach GLS', 'Mercedes-Maybach S-Class']&quot;, 2: &quot;['Mercedes-Benz GLA', 'Mercedes-Benz GLB', 'Mercedes-Benz C-Class']&quot;, 3: &quot;['Mercedes-Benz GLA', 'Mercedes-Benz GLB', 'Mercedes-Benz C-Class', 'Mercedes-AMG CLA Coupe']&quot; } }) </code></pre>
<python><pandas><dataframe>
2023-11-22 06:32:20
2
473
Ibrahim Ayoup
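Since the `Product` cells are stringified lists, one approach is to parse each cell, expand only the products found in `to_replace_values` through a `{Product: [Model, ...]}` dict, and re-serialize. The sketch below shows the per-cell `expand` helper on a reduced mapping (hypothetical subset of the question's data); the dict itself could plausibly be built with `mapping_df.groupby('Product')['Model'].apply(list).to_dict()` and the helper applied with `main_df['Product'].map(expand)`.

```python
import ast

# Reduced stand-in for mapping_df, grouped into Product -> list of Models
mapping = {
    "New Cars": ["Mercedes-Benz GLA", "Mercedes-Benz GLB"],
    "Passenger Cars": ["Mercedes-Benz GLE", "Mercedes-Benz GLS"],
}
to_replace = {"New Cars", "Passenger Cars"}

def expand(cell):
    # cell is the stringified list stored in the 'Product' column
    products = ast.literal_eval(cell)
    out = []
    for p in products:
        # replace only listed products that have models; keep the rest as-is
        out.extend(mapping.get(p, [p]) if p in to_replace else [p])
    return str(out)

row = expand("['New Cars', 'Mercedes-Benz C-Class']")
```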
77,527,647
435,563
How can I best type a python function returning a named tuple type
<p>In python 3.11 (mypy 1.7.0) I have a function that constructs a dynamic NamedTuple type. It returns the type constructed - not an instance. I first tried typing it with <code>Type[NamedTuple]</code>, but that causes python to expect the named tuple constructor itself, not a subclass.</p> <p>I tried a protocol such as:</p> <pre class="lang-py prettyprint-override"><code>class PNamedTuple(Protocol): _fields: Sequence[str] </code></pre> <p>And used <code>Type[PNamedTuple]</code> as my function. This allows me to create a namedtuple class dynamically. However, when I access <code>_fields</code> I get an error:</p> <p><code>&quot;_fields&quot; is protected and used outside of the class in which it is declared</code></p> <ol> <li><p>Can I do anything to suppress this error? It doesn't occur for <code>NamedTuple</code> subtypes - how can I replicate this behavior?</p> </li> <li><p>Is there a simple way to inherit from another protocol to express the &quot;tuple-like&quot; behavior? <code>PNamedTuple(Sequence, Protocol)</code> doesn't work as Sequence is not a Protocol. Is there a simple work around for this?</p> </li> </ol> <p><strong>UPDATE</strong></p> <ol> <li><p>NamedTuple isn't a class, so <code>Type[NamedTuple]</code> is misguided. (See accepted answer by @blhsing.)</p> </li> <li><p>As @dROOOze points out, the <code>_fields</code> error is coming from pylance (and thus from pyright, I think). The solution seemed to give me an error for the same reason (pyright), but mypy is fine with it, and the protocol class is not needed.</p> </li> </ol>
<python><mypy>
2023-11-22 06:09:27
1
5,661
shaunc
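Since `typing.NamedTuple` is not a real class (per the update above), a factory returning a dynamic named tuple can simply be annotated as returning `type[tuple]`; `_fields` remains available at runtime on the produced class. A minimal sketch (`make_record` is a hypothetical name):

```python
from collections import namedtuple

def make_record(name: str, fields: list[str]) -> type[tuple]:
    """Build a namedtuple class dynamically; the return annotation avoids
    the misguided Type[NamedTuple] without needing a Protocol."""
    return namedtuple(name, fields)

Point = make_record("Point", ["x", "y"])
p = Point(1, 2)
```

mypy accepts this; strict checkers may still flag `Point._fields` as protected access, which can be silenced per-line if needed.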
77,527,407
11,790,637
python in conda uses packages in the user directory instead of those in virtual environments
<p>I am on ubuntu 20.04 and using anaconda 22.9.0. I installed two versions of pytorch in my user directory and the conda virtual environment, respectively. Despite <code>which python</code> and <code>which pip</code> showing:</p> <pre class="lang-bash prettyprint-override"><code>(openmmlab) user@hostname:~/Applications/anaconda3$ which python /home/user/Applications/anaconda3/envs/openmmlab/bin/python (openmmlab) user@hostname:~/Applications/anaconda3$ which pip /home/user/Applications/anaconda3/envs/openmmlab/bin/pip </code></pre> <p>importing a package in python still shows it's using the one in my user directory <code>/home/user/.local/lib/python3.8/site-packages</code>, instead of in the conda environment directory <code>/home/user/Applications/anaconda3/envs/openmmlab/lib/python3.8/site-packages</code> (I can confirm both packages exist). How should I solve it?</p> <p>Additional information if it helps:</p> <ul> <li>I am trying to use 2 different versions of pytorch</li> <li>pytorch in my user directory was installed by <code>pip</code>, while the one in conda was installed using <code>conda install</code> while the environment was activated.</li> </ul>
<python><anaconda><conda>
2023-11-22 04:54:36
0
2,337
ihdv
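The usual diagnosis for this is that `~/.local/lib/.../site-packages` (the user site directory) sits ahead of the env's `site-packages` on `sys.path`. A small probe makes the shadowing visible: check `module.__file__` for the package in question, and note that launching with `PYTHONNOUSERSITE=1` (or `python -s`) disables the user site directory so the conda env's copy wins.

```python
import sys
import json as probe   # any importable module works as a probe;
                       # with torch installed, `import torch as probe`

# If probe.__file__ points under ~/.local/lib/... the user site-packages
# is shadowing the conda env. `python -s` or PYTHONNOUSERSITE=1 turns the
# user site directory off for that run.
print(probe.__file__)
print("user site disabled:", bool(sys.flags.no_user_site))
```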
77,527,302
13,881,506
Are there a C# equivalents of Python's iter() and next() functions?
<p>Python's <a href="https://docs.python.org/3.9/library/functions.html?highlight=iter#iter" rel="nofollow noreferrer"><code>iter(.)</code></a> and <a href="https://docs.python.org/3.9/library/functions.html?highlight=iter#next" rel="nofollow noreferrer"><code>next(.)</code></a> built-in functions allow iterating a list (or other objects that implement <code>__iter__(self)</code> and <code>__next__(self)</code>) without a for loop and without an index.</p> <p>Does C# have something equivalent to <code>iter(.)</code> and <code>next(.)</code>? The</p> <hr /> <p>Say that for some reason you wanted to merge two lists sorted numbers without using for loops nor indices. You could do so using iterators and <code>next(.)</code> as follows:</p> <pre class="lang-py prettyprint-override"><code>def merge(nums1, nums2): nums1Iter = iter(nums1) nums2Iter = iter(nums2) num1 = next(nums1Iter) num2 = next(nums2Iter) while True: if num1 &lt;= num2: yield num1 try: num1 = next(nums1Iter) except StopIteration: yield num2 yield from nums2Iter break else: yield num2 try: num2 = next(nums2Iter) except StopIteration: yield num1 yield from nums1Iter break nums1 = range(0, 10, 2) nums2 = range(1, 20, 3) print(list(merge(nums1, nums2))) </code></pre>
<python><c#><iterator><ienumerator>
2023-11-22 04:18:53
2
1,013
joseville
77,526,956
2,056,201
Could not build wheels for NumPy when installing ML-Agents
<p>All my code is below,</p> <p>I am trying to install ml agents with the simple anaconda commands using Windows 10,</p> <pre><code>conda create --name mlagents python==3.10.8 conda activate mlagents cd ml-agents pip install -e ./ml-agents </code></pre> <p>How do I fix this wheels error?</p> <pre><code>(base) c:\Code\unity&gt;conda create --name mlagents python==3.10.8 Channels: - defaults Platform: win-64 Collecting package metadata (repodata.json): done Solving environment: done ## Package Plan ## environment location: C:\Users\Admin\.conda\envs\mlagents added / updated specs: - python==3.10.8 The following packages will be downloaded: package | build ---------------------------|----------------- bzip2-1.0.8 | he774522_0 113 KB ca-certificates-2023.08.22 | haa95532_0 123 KB libffi-3.4.4 | hd77b12b_0 113 KB openssl-1.1.1w | h2bbff1b_0 5.5 MB pip-23.3 | py310haa95532_0 2.9 MB python-3.10.8 | h966fe2a_1 15.8 MB setuptools-68.0.0 | py310haa95532_0 934 KB sqlite-3.41.2 | h2bbff1b_0 894 KB tk-8.6.12 | h2bbff1b_0 3.1 MB tzdata-2023c | h04d1e81_0 116 KB vc-14.2 | h21ff451_1 8 KB vs2015_runtime-14.27.29016 | h5e58377_2 1007 KB wheel-0.41.2 | py310haa95532_0 127 KB xz-5.4.2 | h8cc25b3_0 592 KB zlib-1.2.13 | h8cc25b3_0 113 KB ------------------------------------------------------------ Total: 31.3 MB The following NEW packages will be INSTALLED: bzip2 pkgs/main/win-64::bzip2-1.0.8-he774522_0 ca-certificates pkgs/main/win-64::ca-certificates-2023.08.22-haa95532_0 libffi pkgs/main/win-64::libffi-3.4.4-hd77b12b_0 openssl pkgs/main/win-64::openssl-1.1.1w-h2bbff1b_0 pip pkgs/main/win-64::pip-23.3-py310haa95532_0 python pkgs/main/win-64::python-3.10.8-h966fe2a_1 setuptools pkgs/main/win-64::setuptools-68.0.0-py310haa95532_0 sqlite pkgs/main/win-64::sqlite-3.41.2-h2bbff1b_0 tk pkgs/main/win-64::tk-8.6.12-h2bbff1b_0 tzdata pkgs/main/noarch::tzdata-2023c-h04d1e81_0 vc pkgs/main/win-64::vc-14.2-h21ff451_1 vs2015_runtime pkgs/main/win-64::vs2015_runtime-14.27.29016-h5e58377_2 wheel 
pkgs/main/win-64::wheel-0.41.2-py310haa95532_0 xz pkgs/main/win-64::xz-5.4.2-h8cc25b3_0 zlib pkgs/main/win-64::zlib-1.2.13-h8cc25b3_0 Proceed ([y]/n)? y Downloading and Extracting Packages: Preparing transaction: done Verifying transaction: done Executing transaction: done # # To activate this environment, use # # $ conda activate mlagents # # To deactivate an active environment, use # # $ conda deactivate (base) c:\Code\unity&gt;conda activate mlagents (mlagents) c:\Code\unity&gt;cd ml-agents (mlagents) c:\Code\unity\ml-agents&gt;pip install -e ./ml-agents Obtaining file:///C:/Code/unity/ml-agents/ml-agents Preparing metadata (setup.py) ... done Collecting grpcio&lt;=1.48.2,&gt;=1.11.0 (from mlagents==1.0.0) Using cached grpcio-1.48.2-cp310-cp310-win_amd64.whl (3.6 MB) Collecting h5py&gt;=2.9.0 (from mlagents==1.0.0) Downloading h5py-3.10.0-cp310-cp310-win_amd64.whl.metadata (2.5 kB) Collecting mlagents_envs==1.0.0 (from mlagents==1.0.0) Downloading mlagents_envs-1.0.0-py3-none-any.whl.metadata (2.4 kB) Collecting numpy&lt;2.0,&gt;=1.13.3 (from mlagents==1.0.0) Downloading numpy-1.26.2-cp310-cp310-win_amd64.whl.metadata (61 kB) ---------------------------------------- 61.2/61.2 kB 467.6 kB/s eta 0:00:00 Collecting Pillow&gt;=4.2.1 (from mlagents==1.0.0) .... 
# had to skip a few lines due to limit ---------------------------------------- 102.2/102.2 kB 1.2 MB/s eta 0:00:00 Downloading packaging-23.2-py3-none-any.whl (53 kB) ---------------------------------------- 53.0/53.0 kB 1.4 MB/s eta 0:00:00 Downloading requests-2.31.0-py3-none-any.whl (62 kB) ---------------------------------------- 62.6/62.6 kB 1.1 MB/s eta 0:00:00 Downloading tensorboard_data_server-0.7.2-py3-none-any.whl (2.4 kB) Downloading tqdm-4.66.1-py3-none-any.whl (78 kB) ---------------------------------------- 78.3/78.3 kB 731.4 kB/s eta 0:00:00 Using cached typing_extensions-4.8.0-py3-none-any.whl (31 kB) Downloading werkzeug-3.0.1-py3-none-any.whl (226 kB) ---------------------------------------- 226.7/226.7 kB 432.9 kB/s eta 0:00:00 Using cached networkx-3.2.1-py3-none-any.whl (1.6 MB) Downloading cachetools-5.3.2-py3-none-any.whl (9.3 kB) Downloading certifi-2023.11.17-py3-none-any.whl (162 kB) ---------------------------------------- 162.5/162.5 kB 423.0 kB/s eta 0:00:00 Downloading charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl (100 kB) ---------------------------------------- 100.3/100.3 kB 523.9 kB/s eta 0:00:00 Using cached MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl (17 kB) Downloading urllib3-2.1.0-py3-none-any.whl (104 kB) ---------------------------------------- 104.6/104.6 kB 377.4 kB/s eta 0:00:00 Downloading pyasn1-0.5.1-py2.py3-none-any.whl (84 kB) ---------------------------------------- 84.9/84.9 kB 399.4 kB/s eta 0:00:00 Building wheels for collected packages: numpy Building wheel for numpy (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for numpy (pyproject.toml) did not run successfully. │ exit code: 1 ╰─&gt; [287 lines of output] setup.py:63: RuntimeWarning: NumPy 1.21.2 may not yet support Python 3.10. warnings.warn( Running from numpy source directory. 
Processing numpy/random\_bounded_integers.pxd.in Processing numpy/random\bit_generator.pyx Processing numpy/random\mtrand.pyx Processing numpy/random\_bounded_integers.pyx.in Processing numpy/random\_common.pyx Processing numpy/random\_generator.pyx Processing numpy/random\_mt19937.pyx Processing numpy/random\_pcg64.pyx Processing numpy/random\_philox.pyx Processing numpy/random\_sfc64.pyx Cythonizing sources blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE blis_info: libraries blis not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE openblas_info: libraries openblas not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to 
compile Fortran code on platform 'nt' NOT AVAILABLE accelerate_info: NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS libraries tatlas not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE atlas_3_10_blas_info: libraries satlas not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\system_info.py:2026: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. if self._calc_info(blas): blas_info: libraries blas not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\system_info.py:2026: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. 
if self._calc_info(blas): blas_src_info: NOT AVAILABLE C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\system_info.py:2026: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. if self._calc_info(blas): NOT AVAILABLE non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: libraries mkl_rt not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE openblas_lapack_info: libraries openblas not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE openblas_clapack_info: libraries openblas,lapack not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE flame_info: libraries flame not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\Users\Admin\.conda\envs\mlagents\lib libraries tatlas,tatlas not found in C:\Users\Admin\.conda\envs\mlagents\lib libraries lapack_atlas not found in C:\ libraries tatlas,tatlas not found in C:\ libraries lapack_atlas not found in C:\Users\Admin\.conda\envs\mlagents\libs libraries tatlas,tatlas not found in C:\Users\Admin\.conda\envs\mlagents\libs libraries lapack_atlas not found in C:\Code\conda\Library\lib libraries tatlas,tatlas not found in C:\Code\conda\Library\lib &lt;class 'numpy.distutils.system_info.atlas_3_10_threads_info'&gt; NOT AVAILABLE 
atlas_3_10_info: libraries lapack_atlas not found in C:\Users\Admin\.conda\envs\mlagents\lib libraries satlas,satlas not found in C:\Users\Admin\.conda\envs\mlagents\lib libraries lapack_atlas not found in C:\ libraries satlas,satlas not found in C:\ libraries lapack_atlas not found in C:\Users\Admin\.conda\envs\mlagents\libs libraries satlas,satlas not found in C:\Users\Admin\.conda\envs\mlagents\libs libraries lapack_atlas not found in C:\Code\conda\Library\lib libraries satlas,satlas not found in C:\Code\conda\Library\lib &lt;class 'numpy.distutils.system_info.atlas_3_10_info'&gt; NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\Users\Admin\.conda\envs\mlagents\lib libraries ptf77blas,ptcblas,atlas not found in C:\Users\Admin\.conda\envs\mlagents\lib libraries lapack_atlas not found in C:\ libraries ptf77blas,ptcblas,atlas not found in C:\ libraries lapack_atlas not found in C:\Users\Admin\.conda\envs\mlagents\libs libraries ptf77blas,ptcblas,atlas not found in C:\Users\Admin\.conda\envs\mlagents\libs libraries lapack_atlas not found in C:\Code\conda\Library\lib libraries ptf77blas,ptcblas,atlas not found in C:\Code\conda\Library\lib &lt;class 'numpy.distutils.system_info.atlas_threads_info'&gt; NOT AVAILABLE atlas_info: libraries lapack_atlas not found in C:\Users\Admin\.conda\envs\mlagents\lib libraries f77blas,cblas,atlas not found in C:\Users\Admin\.conda\envs\mlagents\lib libraries lapack_atlas not found in C:\ libraries f77blas,cblas,atlas not found in C:\ libraries lapack_atlas not found in C:\Users\Admin\.conda\envs\mlagents\libs libraries f77blas,cblas,atlas not found in C:\Users\Admin\.conda\envs\mlagents\libs libraries lapack_atlas not found in C:\Code\conda\Library\lib libraries f77blas,cblas,atlas not found in C:\Code\conda\Library\lib &lt;class 'numpy.distutils.system_info.atlas_info'&gt; NOT AVAILABLE lapack_info: libraries lapack not found in ['C:\\Users\\Admin\\.conda\\envs\\mlagents\\lib', 'C:\\', 
'C:\\Users\\Admin\\.conda\\envs\\mlagents\\libs', 'C:\\Code\\conda\\Library\\lib'] NOT AVAILABLE C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\system_info.py:1858: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. return getattr(self, '_calc_info_{}'.format(name))() lapack_src_info: NOT AVAILABLE C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\system_info.py:1858: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. return getattr(self, '_calc_info_{}'.format(name))() NOT AVAILABLE numpy_linalg_lapack_lite: FOUND: language = c define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')] Warning: attempted relative import with no known parent package C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\dist.py:275: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running bdist_wheel running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources creating build creating build\src.win-amd64-3.10 creating build\src.win-amd64-3.10\numpy creating build\src.win-amd64-3.10\numpy\distutils building library &quot;npymath&quot; sources Traceback (most recent call last): File 
&quot;C:\Users\Admin\.conda\envs\mlagents\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;C:\Users\Admin\.conda\envs\mlagents\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) File &quot;C:\Users\Admin\.conda\envs\mlagents\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 251, in build_wheel return _build_backend().build_wheel(wheel_directory, config_settings, File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 211, in build_wheel return self._build_with_temp_dir(['bdist_wheel'], '.whl', File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 197, in _build_with_temp_dir self.run_setup() File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 248, in run_setup super(_BuildMetaLegacyBackend, File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 142, in run_setup exec(compile(code, __file__, 'exec'), locals()) File &quot;setup.py&quot;, line 448, in &lt;module&gt; setup_package() File &quot;setup.py&quot;, line 440, in setup_package setup(**metadata) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\core.py&quot;, line 169, in setup return old_setup(**new_attr) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\__init__.py&quot;, line 165, in setup return distutils.core.setup(**attrs) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\core.py&quot;, line 148, in setup 
dist.run_commands() File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\dist.py&quot;, line 967, in run_commands self.run_command(cmd) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\dist.py&quot;, line 986, in run_command cmd_obj.run() File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\wheel\bdist_wheel.py&quot;, line 299, in run self.run_command('build') File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\cmd.py&quot;, line 313, in run_command self.distribution.run_command(command) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\dist.py&quot;, line 986, in run_command cmd_obj.run() File &quot;C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\command\build.py&quot;, line 61, in run old_build.run(self) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\command\build.py&quot;, line 135, in run self.run_command(cmd_name) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\cmd.py&quot;, line 313, in run_command self.distribution.run_command(command) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\dist.py&quot;, line 986, in run_command cmd_obj.run() File &quot;C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\command\build_src.py&quot;, line 144, in run self.build_sources() File &quot;C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\command\build_src.py&quot;, line 155, in build_sources 
self.build_library_sources(*libname_info) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\command\build_src.py&quot;, line 288, in build_library_sources sources = self.generate_sources(sources, (lib_name, build_info)) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\command\build_src.py&quot;, line 378, in generate_sources source = func(extension, build_dir) File &quot;numpy\core\setup.py&quot;, line 661, in get_mathlib_info st = config_cmd.try_link('int main(void) { return 0;}') File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\command\config.py&quot;, line 243, in try_link self._link(body, headers, include_dirs, File &quot;C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\command\config.py&quot;, line 163, in _link return self._wrap_method(old_config._link, lang, File &quot;C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\command\config.py&quot;, line 98, in _wrap_method ret = mth(*((self,)+args)) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\command\config.py&quot;, line 137, in _link (src, obj) = self._compile(body, headers, include_dirs, lang) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\command\config.py&quot;, line 106, in _compile src, obj = self._wrap_method(old_config._compile, lang, File &quot;C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\command\config.py&quot;, line 98, in _wrap_method ret = mth(*((self,)+args)) File 
&quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\command\config.py&quot;, line 132, in _compile self.compiler.compile([src], include_dirs=include_dirs) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py&quot;, line 401, in compile self.spawn(args) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-build-env-8fi9o_k7\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py&quot;, line 505, in spawn return super().spawn(cmd, env=env) File &quot;C:\Users\Admin\AppData\Local\Temp\pip-install-lf3hykwo\numpy_681c1c40b47341459a0decfc0f360314\numpy\distutils\ccompiler.py&quot;, line 88, in &lt;lambda&gt; m = lambda self, *args, **kw: func(self, *args, **kw) TypeError: CCompiler_spawn() got an unexpected keyword argument 'env' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for numpy Failed to build numpy ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects </code></pre>
<python><numpy><pytorch><conda><ml-agent>
2023-11-22 02:07:21
1
3,706
Mich
77,526,928
11,237,476
How Does NumPy Internally Handle Matrix Multiplication with Non-contiguous Slices?
<p>Hello Stack Overflow community,</p> <p>I'm working with NumPy for matrix operations and I have a question regarding how NumPy handles matrix multiplication, especially when dealing with non-contiguous slices of matrices.</p> <p>Consider a scenario where we have a large matrix, say of size [1000, 1000], and we need to perform a matrix multiplication on a sliced version of this matrix with steps, such as [::10, ::10]. I understand that NumPy likely uses optimized BLAS routines like <code>GEMM</code> for matrix multiplication under the hood. However, BLAS routines generally require contiguous memory layouts to function efficiently.</p> <p>My question is: How does NumPy internally handle such scenarios where the input matrices for multiplication are non-contiguous due to slicing with steps? Specifically, I'm interested in understanding if NumPy:</p> <ol> <li>Automatically reallocates these slices to a new contiguous memory block and then performs <code>GEMM</code>.</li> <li>Has an optimized way to handle non-contiguous slices without reallocating memory.</li> <li>Uses any specific variant of BLAS routines or NumPy's own implementation to handle such cases.</li> </ol> <p>This information will help me better understand the performance implications of using slices with steps in matrix multiplications in NumPy.</p> <p>Thank you in advance for your insights!</p>
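The internal dispatch is exactly what the question asks about, so the sketch below makes no claim about it; it only shows what is observable from Python: the stepped slice is a non-contiguous view (so BLAS cannot be handed its buffer directly), and an explicit contiguous copy via `np.ascontiguousarray` produces the same product:

```python
import numpy as np

a = np.random.rand(1000, 1000)
s = a[::10, ::10]                     # strided view, no data copied

# The sliced view is non-contiguous and still shares memory with `a`.
print(s.flags['C_CONTIGUOUS'])        # False
print(s.base is a)                    # True: `s` is a view into `a`

# Making the copy explicit; whether NumPy buffers like this internally
# before calling GEMM is the open question, not asserted here.
c = np.ascontiguousarray(s)
print(c.flags['C_CONTIGUOUS'])        # True

r1 = s @ s.T                          # matmul on the strided view
r2 = c @ c.T                          # matmul on the contiguous copy
print(np.allclose(r1, r2))            # True: numerically the same result
```

Timing `s @ s.T` against `c @ c.T` (e.g. with `timeit`) would be one empirical way to probe whether the view path pays a buffering cost.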
<python><numpy><blas>
2023-11-22 01:55:11
1
1,337
musako
77,526,909
20,122,390
Is it more efficient to use del or a dictionary comprehension in Python?
<p>Suppose I have a Python dictionary with the following structure:</p> <pre><code>data = { &quot;key_1&quot;: &quot;some value&quot;, &quot;key_2&quot;: &quot;some value&quot;, &quot;key_3&quot;: &quot;&quot;, &quot;key_4&quot;: &quot;&quot; } </code></pre> <p>So, I want to remove all key-value pairs for which the value is an empty string. I could use the del statement in the following way:</p> <pre><code>for key in list(data.keys()): if data[key] == &quot;&quot;: del data[key] </code></pre> <p>Or I could build a new dictionary with a comprehension:</p> <pre><code>new_dict = { key: value for key, value in data.items() if value != &quot;&quot; } </code></pre> <p>Both solutions are very readable, but which is more efficient and why?</p>
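Rather than guessing, the two approaches can be benchmarked directly; a minimal sketch with `timeit` (the helper names and the 1000-key test dictionary are made up for illustration):

```python
import timeit

# a larger dictionary so the timing is meaningful; every other value is empty
data = {f"key_{i}": ("" if i % 2 else "some value") for i in range(1000)}

def with_del(d):
    d = dict(d)                       # copy so each timed run starts fresh
    for key in list(d.keys()):
        if d[key] == "":
            del d[key]
    return d

def with_comprehension(d):
    return {k: v for k, v in d.items() if v != ""}

# Both produce the same result...
assert with_del(data) == with_comprehension(data)

# ...so the choice comes down to measured time, not asymptotics
# (both are O(n) over the number of keys):
print(timeit.timeit(lambda: with_del(data), number=1000))
print(timeit.timeit(lambda: with_comprehension(data), number=1000))
```

Note the two are not semantically identical: `del` mutates the original dictionary in place, while the comprehension builds a new one, which may matter if other code holds a reference to `data`.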
<python><dictionary-comprehension>
2023-11-22 01:47:13
0
988
Diego L
77,526,899
7,930,118
Running NetLogo Model from Python Throws RuntimeException
<p>I am trying to connect NetLogo from Python. I am following the official pyNetLogo documentation. Here is my code.</p> <pre><code>import pynetlogo netlogo = pynetlogo.NetLogoLink( gui=True, netlogo_home=&quot;/home/protik/NetLogo/&quot; ) netlogo.load_model(&quot;../Netlogo-model/ICS1023.nlogo&quot;) netlogo.command(&quot;setup&quot;) </code></pre> <p>It starts the NetLogo interface but fails to run the NetLogo model. It throws this following error.</p> <pre><code>Exception Traceback (most recent call last) File NetLogoLink.java:121, in netLogoLink.NetLogoLink.loadModel() Exception: Java Exception The above exception was the direct cause of the following exception: java.lang.RuntimeException Traceback (most recent call last) File ~/anaconda3/envs/netlogo-env/lib/python3.8/site-packages/pynetlogo/core.py:246, in NetLogoLink.load_model(self, path) 245 try: --&gt; 246 self.link.loadModel(path) 247 except jpype.JException as ex: java.lang.RuntimeException: java.lang.RuntimeException During handling of the above exception, another exception occurred: NetLogoException Traceback (most recent call last) Cell In[1], line 8 1 import pynetlogo 3 netlogo = pynetlogo.NetLogoLink( 4 gui=True, 5 netlogo_home=&quot;/home/protik/NetLogo/&quot; 6 ) ----&gt; 8 netlogo.load_model(&quot;../Netlogo-model/ICS1023.nlogo&quot;) 9 netlogo.command(&quot;setup&quot;) File ~/anaconda3/envs/netlogo-env/lib/python3.8/site-packages/pynetlogo/core.py:249, in NetLogoLink.load_model(self, path) 247 except jpype.JException as ex: 248 print(ex.stacktrace()) --&gt; 249 raise NetLogoException(str(ex)) NetLogoException: java.lang.RuntimeException </code></pre> <p>On the NetLogo interface a dialogue box pops up and shows the following line.</p> <pre><code>error in loading model java.lang.reflect.invocationtargetexception </code></pre> <p>By the way, the netlogo model itself runs fine. It only creates error while running from Python. 
I have checked the <code>jvm_path</code>, which seems to be correct:</p> <pre><code>/usr/lib/jvm/java-11-openjdk-amd64/lib/server/libjvm.so </code></pre> <p>What could be the solution?</p>
<python><netlogo><pynetlogo>
2023-11-22 01:42:47
0
529
Protik Nag
77,526,134
4,628,504
Failure to authenticate with Artifactory using Python Poetry
<p>I'm using the Poetry package manager for Python for the first time. I have a private package stored on our enterprise Artifactory instance.</p> <p>I followed the very simple instructions for how to declare a package source and credentials for it.</p> <p>There are basically two ways to declare creds.</p> <ol> <li>Storing them in Poetry's internal config</li> <li>Storing them in environment variables</li> </ol> <p>Neither worked.</p> <p>I was able to auth to the Artifactory <code>/simple/</code> endpoint in a browser with my creds. I was able to auth when I embedded the creds in the URL directly (not acceptable for storage in version control). I was able install the package using <code>pip</code>.</p> <p>I bothered a colleague for a while and remembered what it was like to be a noob Python programmer...</p> <p>Long story short, see my answer below for the... answer.</p>
<python><authentication><python-poetry>
2023-11-21 21:53:14
1
2,140
John Carrell
77,526,110
1,228,532
How do I assign a generic return type to the method of a generic typed class?
<p>In the code below, <code>GameObjects.to_dict</code> needs to reflect the generic for a <code>TypeDict</code>, depending on the type that has been passed to <code>GameObjects</code>. In this specific example, mypy gives an error:</p> <p><code>line 37: error: List comprehension has incompatible type List[GameObjectDict]; expected List[D] [misc]</code></p> <p>I want to avoid having to create a concrete class for <code>Assets</code> just for the purpose of defining the return type there - I potentially have a lot more such containers which will be functionally identical.</p> <pre><code>from abc import ABC, abstractmethod from dataclasses import dataclass, field from pathlib import Path from typing import Generic, TypedDict, TypeVar T = TypeVar(&quot;T&quot;, bound=&quot;GameObject&quot;) D = TypeVar(&quot;D&quot;, bound=&quot;GameObjectDict&quot;) class GameObjectDict(TypedDict): name: str class AssetDict(GameObjectDict): path: str class AbstractGameObject(ABC, Generic[D]): @abstractmethod def to_dict(self) -&gt; D: raise NotImplementedError @dataclass class GameObject(AbstractGameObject, Generic[D]): name: str def to_dict(self) -&gt; GameObjectDict: return GameObjectDict(name=self.name) @dataclass class GameObjects(Generic[T]): items: dict[str, T] = field(default_factory=dict) def to_dict(self) -&gt; list[D]: return [item.to_dict() for item in self.items.values()] def add(self, item: T): if item.name in self.items.keys(): raise KeyError(f&quot;{item.name} already exists&quot;) self.items[item.name] = item @dataclass class Asset(GameObject): path: Path def to_dict(self) -&gt; AssetDict: return AssetDict(**super().to_dict(), path=str(self.path)) Assets = GameObjects[Asset]() </code></pre>
<python><python-3.x><types><mypy>
2023-11-21 21:46:30
0
653
James N
77,525,969
4,382,391
How do you serve *all* files in a directory in Flask (not just an individual file), with index.html as the default?
<p>Say I have project structure:</p> <pre><code>- app.py // flask server - static/ - index.html - subfolder/ - index.html - main.js </code></pre> <p>In Flask I can go:</p> <pre class="lang-py prettyprint-override"><code>@app.route('/') def index(): return send_from_directory(os.getcwd() + &quot;/static&quot;, &quot;index.html&quot;) </code></pre> <p>and serve the index file within the root dir... But it will not accept subfolders or any other file.</p> <p>I have searched following the following: <a href="https://stackoverflow.com/questions/20646822/how-to-serve-static-files-in-flask">How to serve static files in Flask</a> <a href="https://stackoverflow.com/questions/66379197/how-to-serve-static-file-from-static-folder-using-flask">How to serve static file from static folder using Flask</a> <a href="https://stackoverflow.com/questions/24578330/flask-how-to-serve-static-html">Flask: How to serve static html?</a></p> <p>and found I can do:</p> <pre class="lang-py prettyprint-override"><code>@app.route('/&lt;path:path&gt;') def index(path): return send_from_directory(os.getcwd() + &quot;/static&quot;, path) </code></pre> <p>But this requires you to explicitly supply index.html to the end of all routes.</p> <p>An alternative is then to use:</p> <pre><code>@app.route('/&lt;path:path&gt;' , defaults={'path': 'index.html'}) def index(path): print(f&quot;looking in: {os.getcwd()}/static{path}&quot;) return send_from_directory(os.getcwd() + &quot;/static&quot;, path) </code></pre> <p>But then this doesn't supply subdirectories, and will always redirect you to /static/index.html. 
Furthermore, if you <em>do</em> supply an explicit index.html to the url, it will render the html correctly, but not the .js file with the error: <code>Uncaught SyntaxError: Unexpected token '&lt;' (at main.js:1:1)</code></p> <p>main.js:</p> <pre><code>console.log(&quot;hello&quot;) </code></pre> <p>subfolder/index.html:</p> <pre><code>&lt;head&gt; &lt;script src=&quot;main.js&quot;&gt;&lt;/script&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;henlo&lt;/h1&gt; &lt;/body&gt; </code></pre> <p>I just want to serve a static folder. How do I do this? The documentation itself says:</p> <pre><code>Static Files: Dynamic web applications also need static files. That’s usually where the CSS and JavaScript files are coming from. Ideally your web server is configured to serve them for you, but during development Flask can do that as well. Just create a folder called static in your package or next to your module and it will be available at /static on the application. To generate URLs for static files, use the special 'static' endpoint name: url_for('static', filename='style.css') The file has to be stored on the filesystem as static/style.css. </code></pre> <p>but of course, <code>url_for</code> does not exist as a function in python nor any dependency or component of the flask module, and even if it did, there is no example of how to use it.</p>
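The directory-plus-default-index lookup the question describes can be sketched as a plain function, separate from Flask itself (`resolve_static` is a hypothetical helper, not Flask API; the commented route wiring is an untested illustration of where it would plug in):

```python
import tempfile
from pathlib import Path
from typing import Optional

def resolve_static(root: Path, requested: str) -> Optional[Path]:
    """Map a URL path to a file under `root`, falling back to index.html
    for directory requests. Returns None if nothing matches."""
    candidate = (root / requested.lstrip("/")).resolve()
    root = root.resolve()
    # refuse paths that escape the static root (e.g. "../app.py")
    if root not in candidate.parents and candidate != root:
        return None
    if candidate.is_dir():
        candidate = candidate / "index.html"
    return candidate if candidate.is_file() else None

# In a Flask app this could plug into a catch-all route, roughly:
#   @app.route('/', defaults={'path': ''})
#   @app.route('/<path:path>')
#   def serve(path):
#       target = resolve_static(Path('static'), path)
#       return send_from_directory(target.parent, target.name) if target else abort(404)

# quick demonstration against a throwaway directory tree
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "subfolder").mkdir()
    (root / "index.html").write_text("root")
    (root / "subfolder" / "index.html").write_text("sub")
    (root / "subfolder" / "main.js").write_text("console.log('hello')")

    print(resolve_static(root, ""))                    # .../index.html
    print(resolve_static(root, "subfolder"))           # .../subfolder/index.html
    print(resolve_static(root, "subfolder/main.js"))   # .../subfolder/main.js
    print(resolve_static(root, "missing.css"))         # None
```

Resolving `subfolder` to `subfolder/index.html` is also why the relative `<script src="main.js">` breaks without a trailing slash: the browser resolves `main.js` against the URL it requested, not the file actually served.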
<javascript><python><flask>
2023-11-21 21:15:27
1
1,070
Null Salad
77,525,668
2,423,091
How do I define a class in Python to say that that class can have an instance of itself as a child
<p>How do I define a base class in Python, with type hinting, to say that the class can have an instance of itself as an attribute.</p> <p>For example:</p> <pre><code>class Person(): first_name: str = None last_name: str = None parent: Person = None def __init__(self, first_name: str = None, last_name: str = None, parent: Person = None): self.first_name = first_name self.last_name = last_name self.parent = parent </code></pre> <p>I get an error about the class not existing if I try to type hint the class inside itself. Seems like it should have a really simple solution, but I feel really dumb. Thanks in advance.</p>
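A sketch of the two usual fixes for the self-reference: either quote the name as a string (`parent: "Person" = None`), or enable postponed evaluation of annotations (PEP 563), which makes every annotation lazy so the class name no longer needs to exist at definition time:

```python
from __future__ import annotations  # PEP 563: annotations become lazy strings
from typing import Optional

class Person:
    def __init__(self, first_name: Optional[str] = None,
                 last_name: Optional[str] = None,
                 parent: Optional[Person] = None):  # OK: evaluated lazily
        self.first_name = first_name
        self.last_name = last_name
        self.parent = parent
```

Note `Optional[str]` rather than `str = None`: a default of `None` on a plain `str` annotation is itself a type error under strict checkers.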
<python><python-typing>
2023-11-21 20:08:02
0
410
hfrog713
77,525,333
1,084,174
InvalidArgumentError: `predictions` contains negative values
<p>I was trying to run the TensorFlow audio classification code following the TensorFlow <a href="https://www.tensorflow.org/tutorials/audio/simple_audio" rel="nofollow noreferrer">article</a>. When I ran the following Python code after running all the cells above it in order:</p> <p><strong>Code:</strong></p> <pre><code>confusion_mtx = tf.math.confusion_matrix(y_true, y_pred) plt.figure(figsize=(10, 8)) sns.heatmap(confusion_mtx, xticklabels=label_names, yticklabels=label_names, annot=True, fmt='g') plt.xlabel('Prediction') plt.ylabel('Label') plt.show() </code></pre> <p>I am getting the following error in both my Kaggle and Jupyter notebooks. How can I solve it, and what is the real cause of the prediction values being negative?</p> <p><strong>Error:</strong></p> <pre><code>--------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) Cell In[32], line 1 ----&gt; 1 confusion_mtx = tf.math.confusion_matrix(y_true, y_pred) 2 plt.figure(figsize=(10, 8)) 3 sns.heatmap(confusion_mtx, 4 xticklabels=label_names, 5 yticklabels=label_names, 6 annot=True, fmt='g') File /opt/conda/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.&lt;locals&gt;.error_handler(*args, **kwargs) 151 except Exception as e: 152 filtered_tb = _process_traceback_frames(e.__traceback__) --&gt; 153 raise e.with_traceback(filtered_tb) from None 154 finally: 155 del filtered_tb File /opt/conda/lib/python3.10/site-packages/tensorflow/python/ops/check_ops.py:487, in _binary_assert(sym, opname, op_func, static_func, x, y, data, summarize, message, name) 484 if message is not None: 485 data = [message] + list(data) --&gt; 487 raise errors.InvalidArgumentError( 488 node_def=None, 489 op=None, 490 message=('\n'.join(_pretty_print(d, summarize) for d in data))) 492 else: # not context.executing_eagerly() 493 if data is None: InvalidArgumentError: `predictions` contains negative values.
Condition x &gt;= 0 did not hold element-wise: x (shape=(832, 8) dtype=int64) = ['-10', '-4', '-1', '...'] </code></pre>
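This error typically means `y_pred` holds raw model outputs (logits, which can be negative) rather than class indices, since `tf.math.confusion_matrix` expects non-negative integer labels. A minimal NumPy sketch of the usual fix (an assumption about the cause here, not a confirmed diagnosis; `tf.argmax(logits, axis=1)` is the analogous TensorFlow call):

```python
import numpy as np

# Raw model outputs (logits) for 2 samples over 3 classes -- these can be negative
logits = np.array([[-10.0, -4.0, 3.0],
                   [2.0, -1.0, -5.0]])

# Convert to predicted class indices, which are always non-negative
y_pred = np.argmax(logits, axis=1)
```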
<python><tensorflow><classification><tensor><kaggle>
2023-11-21 18:59:31
1
40,671
Sazzad Hissain Khan
77,525,270
11,402,435
Filtering even columns in SAS
<p>I have a data set with financial data that has a particular structure:</p> <pre><code>|Date |price1 |Date |price2 |...|Date |priceN | |dd/mm/yy|numeric |dd/mm/yy|numeric |...|dd/mm/yy|numeric | |dd/mm/yy|numeric |dd/mm/yy|numeric |...|dd/mm/yy|numeric | |dd/mm/yy|numeric |dd/mm/yy|numeric |...|dd/mm/yy|numeric | </code></pre> <p>I need to pivot the data from wide to long, but first I have to select only the first column (Date) and all the price columns, i.e., all the even columns. I did it in Python and it worked, so I used ChatGPT to translate it to SAS, but it didn't work. ChatGPT gave the following code:</p> <pre><code>data filter_price; set price; array columns[*] _all_; do i=1 to dim(columns); if mod(i,2)=0 then output; end; keep columns:; run; </code></pre> <p>The result of the code above is a data set like this:</p> <pre><code>|Date |price1 |Date |price2 |...|Date |priceN |i | |dd/mm/yy|numeric |dd/mm/yy|numeric |...|dd/mm/yy|numeric |2 | |dd/mm/yy|numeric |dd/mm/yy|numeric |...|dd/mm/yy|numeric |4 | |dd/mm/yy|numeric |dd/mm/yy|numeric |...|dd/mm/yy|numeric |6 | </code></pre> <p>It adds a column with something like an index that takes only even numbers. What I need as a result is the following:</p> <pre><code>|Date |price1 |price2 |...|priceN | |dd/mm/yy|numeric |numeric |...|numeric | |dd/mm/yy|numeric |numeric |...|numeric | |dd/mm/yy|numeric |numeric |...|numeric | </code></pre> <p>Can you help me with this?
I leave what I did in Python below in case it is useful.</p> <pre><code>import pandas as pd # Read the price file df = pd.read_excel(&quot;/content/prueba_pivote_precios.xlsm&quot;) # Drop the first row (which has no data) df.drop(0, axis=0, inplace=True) # Identify the column names (a valid column name appears every two columns) names = df.columns.to_list() names2 = [] for i in range(len(names)): if i%2!=0: names2.append(names[i]) # Select the valid columns df2 = df[names2] # Recover the date column from the original data set df2[&quot;FECHA&quot;] = df.iloc[:, 0] </code></pre>
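For reference, the pandas side of this can also be done without the explicit loop, using position-based slicing. A small self-contained sketch (the column names and sample values here are made up to mimic the wide layout described above):

```python
import pandas as pd

# Toy frame mimicking the wide layout: Date, price1, Date, price2, ...
wide = pd.DataFrame({
    "Date":   ["01/01/23", "02/01/23"],
    "price1": [1.0, 2.0],
    "Date.1": ["01/01/23", "02/01/23"],  # pandas deduplicates repeated names
    "price2": [3.0, 4.0],
})

# Price columns sit at odd positions (1, 3, 5, ...)
prices = wide.iloc[:, 1::2]

# Keep the first Date column alongside the price columns
result = pd.concat([wide.iloc[:, [0]], prices], axis=1)
```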
<python><sas>
2023-11-21 18:47:13
3
301
MARIO ANTONIO CASTILLO MACHUCA
77,525,271
4,621,513
How can mutable variables be treated as "volatile" for static type checking?
<p>Consider this class which has a variable <code>state</code> that is compared twice in the method <code>check_twice</code> and its value is changed in between the two checks by the method <code>work</code>:</p> <pre><code>import enum class State(enum.Enum): INIT = 0 DONE = 1 class Worker: def __init__(self) -&gt; None: self.state = State.INIT def work(self): self.state = State.DONE def check_twice(self) -&gt; None: if self.state is State.DONE: print(&quot;DONE at first try&quot;) return self.work() if self.state is State.DONE: # according to mypy this will not happen x: int = &quot;x&quot; # mypy will not check this print(&quot;DONE at second try&quot;) return print(&quot;not DONE after second try&quot;) if __name__ == &quot;__main__&quot;: w = Worker() w.check_twice() </code></pre> <p>As expected, when executing this, the output is</p> <pre class="lang-none prettyprint-override"><code>$ python3 volatile.py DONE at second try </code></pre> <p>because at the beginning of <code>check_twice</code>, the variable was still <code>INIT</code>, but at the second check it has been changed to <code>DONE</code> by <code>work()</code> &quot;in the background&quot;.</p> <p>However, according to mypy (version 1.5.0), when using the <code>--strict-equality</code> option, the branch which prints this output is unreachable,</p> <pre class="lang-none prettyprint-override"><code>$ mypy --strict-equality volatile.py volatile.py:23: error: Non-overlapping identity check (left operand type: &quot;Literal[State.INIT]&quot;, right operand type: &quot;Literal[State.DONE]&quot;) [comparison-overlap] </code></pre> <p>because mypy thinks that if <code>state</code> was <code>DONE</code> at the first check, the method returns and the only possible value for <code>state</code> in the rest of the method is <code>INIT</code> (by type narrowing). 
It does not consider that the value can still be changed to <code>DONE</code> by other code and &quot;optimizes away&quot; the second <code>if self.state is State.DONE</code> statement.</p> <p>The error can be silenced by adding <code># type: ignore [comparison-overlap]</code>. However, mypy will still not check the rest of that <code>if</code> statement which contains an obvious type error, for which no error message is generated.</p> <p>How can we indicate to mypy (or other type checkers) that the variable can change its value after it has been accessed once, and that the body of the second <code>if</code> statement must be checked, analogous to e.g. the <code>volatile</code> keyword in the C language?</p>
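One hedged workaround (not a true <code>volatile</code>, just a way to defeat the narrowing) is to read the state through a small accessor method: mypy narrows the expression <code>self.state</code>, but it does not narrow the result of a fresh call expression, so each check sees the full <code>State</code> type. A sketch of the original example restructured that way:

```python
import enum

class State(enum.Enum):
    INIT = 0
    DONE = 1

class Worker:
    def __init__(self) -> None:
        self.state = State.INIT

    def work(self) -> None:
        self.state = State.DONE

    def current_state(self) -> State:
        # Indirection: mypy does not narrow the return value of a call,
        # so callers are forced to treat every read as potentially changed
        return self.state

    def check_twice(self) -> str:
        if self.current_state() is State.DONE:
            return "DONE at first try"
        self.work()
        if self.current_state() is State.DONE:  # no comparison-overlap error
            return "DONE at second try"
        return "not DONE after second try"
```

The runtime behavior is unchanged; only the static analysis differs.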
<python><mypy><python-typing>
2023-11-21 18:47:12
1
24,148
mkrieger1
77,525,057
183,717
Selenium code to download a file from a link not working
<p>I have written the following code to download a file from the URL: <a href="https://www.nseindia.com/market-data/oi-spurts" rel="nofollow noreferrer">https://www.nseindia.com/market-data/oi-spurts</a> every minute. However, I have not been successful in getting it to work. I am seeing that Selenium opens a new tab instead of showing the default download pop-up when the link is clicked. I am hoping someone can help me out with this.</p> <pre><code>import logging from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.chrome.options import Options DOWNLOAD_URL = &quot;https://www.nseindia.com/market-data/oi-spurts&quot; DOWNLOAD_PATH = &quot;/Users/myuser/Downloads/&quot; class SeleniumDriver(object): def __init__(self, download_path=DOWNLOAD_PATH): logging.debug(&quot;Selenium DEBUG&quot;) self.__download_path = download_path def initiate_download(self): chrome_options = Options() chrome_options.add_argument('--no-sandbox') # For Linux systems # chrome_options.add_argument('--incognito') chrome_options.add_argument(&quot;disable-web-security&quot;) chrome_options.add_argument(&quot;--disable-popup-blocking&quot;) chrome_options.add_argument(f'--download.default_directory={self.__download_path}') # Create a Chrome WebDriver instance driver = webdriver.Chrome(options=chrome_options) try: # Navigate to the website where the file is located driver.get(DOWNLOAD_URL) import time time.sleep(10) download_link = driver.find_element(By.CSS_SELECTOR, &quot;div.xlsdownload a&quot;) print(download_link) print(dir(download_link)) download_link.click() # driver.execute_script(&quot;arguments[0].click();&quot;, download_link) #download_link[0].send_keys(Keys.ENTER) # Perform actions on the website to trigger the file download # ...
# Wait for the file to download (you may need to adjust the sleep time) time.sleep(5) finally: # Close the browser session driver.quit() __slots__ = ( &quot;__download_path&quot;, ) </code></pre> <p>The error I see every time is : <a href="https://i.sstatic.net/C41ME.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C41ME.png" alt="Error" /></a></p>
<python><selenium-webdriver><automation>
2023-11-21 18:13:00
1
9,853
name_masked
77,524,976
2,954,547
Polars LazyFrame streaming directly to (partitioned) Parquet without collecting
<p>I am reading some data from AWS S3 with <code>polars.scan_pyarrow_dataset</code>. When I am finished with my data processing, I would like to write the results back to cloud storage, in partitioned Parquet files. However, I'd like to avoid collecting all results in memory. I believe this should be possible with the Parquet format because of its support for row groups, as well as the ability to split the data into physically separate files.</p> <p>I don't see any support for this in the Polars LazyFrame API. I see that <code>collect</code> has <code>streaming=True</code>, but that still looks like it collects the final result in memory.</p> <p>Is there some way to achieve this that I am not seeing?</p>
<python><python-polars>
2023-11-21 17:57:54
2
14,083
shadowtalker
77,524,974
5,424,117
How to resolve error running pipdeptree with Python 3.11
<p>I'm following the instructions here for upgrading Django: <a href="https://www.youtube.com/watch?v=9i1VZQg8mFg" rel="nofollow noreferrer">Django Upgrade Video</a></p> <p>And I get a big error when trying to run the first command for deriving dependencies:</p> <pre><code>pipdeptree -f --warn silence | grep -E '^[a-zA-Z0-9\-]+' </code></pre> <p>The following is the error stack trace</p> <pre><code>Traceback (most recent call last): File &quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/bin/pipdeptree&quot;, line 8, in &lt;module&gt; sys.exit(main()) ^^^^^^ File &quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pipdeptree/__main__.py&quot;, line 44, in main render(options, tree) File &quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pipdeptree/_render/__init__.py&quot;, line 27, in render render_text( File &quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pipdeptree/_render/text.py&quot;, line 34, in render_text _render_text_with_unicode(tree, nodes, max_depth, frozen) File &quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pipdeptree/_render/text.py&quot;, line 107, in _render_text_with_unicode lines = chain.from_iterable([aux(p) for p in nodes]) ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pipdeptree/_render/text.py&quot;, line 107, in &lt;listcomp&gt; lines = chain.from_iterable([aux(p) for p in nodes]) ^^^^^^ File &quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pipdeptree/_render/text.py&quot;, line 59, in aux node_str = node.render(parent, frozen=frozen) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pipdeptree/_models/package.py&quot;, line 53, in render return render(frozen=frozen) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pipdeptree/_models/package.py&quot;, line 114, in render_as_root return self.as_frozen_repr(self._obj) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pipdeptree/_models/package.py&quot;, line 79, in as_frozen_repr fr = FrozenRequirement.from_dist(our_dist) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pip/_internal/operations/freeze.py&quot;, line 244, in from_dist req, editable, comments = get_requirement_info(dist) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/john.xxxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pip/_internal/operations/freeze.py&quot;, line 174, in get_requirement_info if not dist_is_editable(dist): ^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/john.xxx/Documents/workspace/employee-maestro/venv_3_11/lib/python3.11/site-packages/pip/_internal/utils/misc.py&quot;, line 384, in dist_is_editable egg_link = os.path.join(path_item, dist.project_name + &quot;.egg-link&quot;) ^^^^^^^^^^^^^^^^^ AttributeError: 'Distribution' object has no attribute 'project_name' </code></pre> <p>I'm new to Django and Python. I do not know how to solve this. Any advice would be appreciated.</p> <p>Thanks.</p>
<python><python-3.x><django><pip>
2023-11-21 17:57:32
2
2,474
jb62
77,524,958
18,904,265
Mypy shows error when using a function to return a type
<p>I have written a function <code>string_prompt</code> to return a type annotation which I have to reuse a lot in the context of <a href="https://typer.tiangolo.com/" rel="nofollow noreferrer">typer</a>. This is a working example (run with <code>python -m</code>):</p> <pre class="lang-py prettyprint-override"><code>from typing import Annotated import typer app = typer.Typer() def string_prompt(prompt: bool | str = True): return Annotated[str, typer.Option(prompt=prompt)] @app.command() def public(name: string_prompt(), project_number: string_prompt()): print(f&quot;You called public with {name}, {project_number}&quot;) if __name__ == &quot;__main__&quot;: app() </code></pre> <p>This code works, but mypy shows the error:</p> <blockquote> <p>Invalid type comment or annotation (<a href="https://mypy.readthedocs.io/en/latest/_refs.html#code-valid-type" rel="nofollow noreferrer">valid-type</a>). Suggestion: use string_prompt[...] instead of string_prompt(...)</p> </blockquote> <p>I've tried using square brackets instead, which obviously returns a syntax error, since I can't use square brackets to call a function.</p> <p>Now my question is: Is there a different way I should declare this function to make mypy understand what I am trying to achieve? I had a look at the documentation of the mypy error, in which it explains that functions are not a valid type, but the solution proposed there, which uses <code>Callable</code>, is not applicable in my case, since I want to actually use the return value of the function call.</p>
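For comparison, a module-level alias assignment (rather than a function call) is something mypy does accept as an annotation. A sketch with a hypothetical stand-in for <code>typer.Option</code>, since only the annotation mechanics matter here (the stand-in class and names are assumptions, not typer's API):

```python
from typing import Annotated

class Option:
    """Hypothetical stand-in for typer.Option -- just carries metadata."""
    def __init__(self, prompt=True):
        self.prompt = prompt

# A plain module-level assignment creates a type alias that mypy treats
# as a valid type, unlike the result of a function call at annotation time.
StringPrompt = Annotated[str, Option(prompt=True)]

def public(name: StringPrompt, project_number: StringPrompt) -> str:
    return f"You called public with {name}, {project_number}"
```

The trade-off is losing the per-call <code>prompt</code> argument: each distinct prompt configuration needs its own alias (e.g. <code>StringPromptOptional = Annotated[str, Option(prompt=False)]</code>).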
<python><mypy><typer>
2023-11-21 17:55:55
1
465
Jan
77,524,726
608,294
Create a new package from an existing installed python package
<p>I made many changes to a library installed with pip (the library is not maintained anymore), and I want to create a new package that I can use in my own projects without having to install the library with pip and then re-apply my changes every time.</p> <p>Is there a way?</p>
<python>
2023-11-21 17:17:52
0
1,741
Andrea
77,524,700
17,487,457
Creating an ensemble of classifiers based on predefined feature subsets
<p>The following MWE creates an ensemble method from the features selected using <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html" rel="nofollow noreferrer">SelectKBest</a> algorithm and <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier" rel="nofollow noreferrer">RandomForest</a> classifier.</p> <pre class="lang-py prettyprint-override"><code># required import import numpy as np import pandas as pd from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split from sklearn.feature_selection import SelectKBest, f_classif from sklearn.ensemble import RandomForestClassifier, VotingClassifier from sklearn.pipeline import Pipeline # ensemble created from features selected def get_ensemble(n_features): # define base models models = [] # enumerate the features in the training dataset for i in range(1, n_features + 1): # feature selection transform fs = SelectKBest(score_func=f_classif, k=i) # create the model model = RandomForestClassifier(n_estimators=50) # create the pipeline pipe = Pipeline([('fs', fs), ('m', model)]) # list of tuple of models for voting models.append((str(i), pipe)) # define the voting ensemble ensemble_clf = VotingClassifier(estimators=models, voting='hard') return ensemble_clf </code></pre> <p>So, to use the ensemble model:</p> <pre class="lang-py prettyprint-override"><code># generate data for a 3-class classification X, y = make_classification(n_samples=1000, n_features=10, n_classes=3, n_informative=3) X = pd.DataFrame(X, columns=list('ABCDEFGHIJ')) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) X_train.head() A B C D E F G H I J 541 0.1756 -0.3772 -1.6396 -0.7524 0.2138 0.3113 -1.4906 -0.2885 0.1226 0.2057 440 -0.4381 -0.3302 0.7514 -0.4684 -1.2477 -0.5081 -0.7934 -0.3138 0.8423 -0.4038 482 -0.6648 1.2337 -0.2878 -1.6737 
-1.2377 -0.4479 -1.1843 -0.2424 -0.9935 -1.4537 422 0.6099 0.2475 0.9612 -0.7339 0.6926 -1.5761 -1.6061 -0.3879 -0.1895 1.3738 778 -1.4893 0.5234 1.6126 0.8704 -2.7363 -1.3818 -0.2196 -0.7894 -1.1755 -2.8779 # get the ensemble model ensemble_clssifier = get_ensemble(X_train.shape[1]) ensemble_clssifier.fit(X_train, y_train) </code></pre> <p>Creates 10 base models (<code>n_features=10</code>) and then an ensemble VotingClassifier based on majority (<code>voting = hard</code>).</p> <p><strong>Question:</strong></p> <p>The MWE described above works fine. However, I would like to replace the <code>SelectKBest</code> feature selection process in the <code>get_ensemble</code> function.</p> <p>I have conducted a different feature selection process, and discovered the &quot;optimal&quot; feature subset for each class in this dataset as follows:</p> <pre><code> | best predictors -------------+------------------- class 0 | A, B, C class 1 | D, E, F, G class 2 | G, H, I, J -------------+------------------- </code></pre> <p>So the modification I would like to make to <code>get_ensemble</code> is that, instead of iterating over the number of available features, creating <code>n</code> base-models, it should create 3 (no. 
of classes) base models, where:</p> <ul> <li><p><code>base-model 1</code> will be fitted using the feature subset <code>['A', 'B', 'C']</code>.</p> </li> <li><p><code>base-model 2</code> will be fitted using the feature subset <code>['D', 'E', 'F', 'G']</code>.</p> </li> <li><p><code>base-model 3</code> will be fitted using the feature subset <code>['G', 'H', 'I', 'J']</code>.</p> </li> <li><p>finally, the <code>ensemble_classifier</code> votes by majority on the sub-models' outputs.</p> </li> </ul> <p>That is, when I make the call to:</p> <pre class="lang-py prettyprint-override"><code>ensemble_clssifier.fit(X_train, y_train) </code></pre> <p>It proceeds like so:</p> <pre class="lang-py prettyprint-override"><code># 1st base model fitted on its feature subset model.fit(X_train[['A', 'B', 'C']], y_train) # 2nd base model model.fit(X_train[['D', 'E', 'F', 'G']], y_train) # 3rd model also model.fit(X_train[['G', 'H', 'I', 'J']], y_train) </code></pre> <p>This scenario should apply as well during prediction, making sure each base model selects the appropriate feature subset from <code>X_test</code> to make its prediction in <code>ensemble_clssifier.predict(X_test)</code> before the final voting.</p> <p>I am not sure how to proceed. Any ideas?</p> <p><strong>EDIT</strong></p> <p>Regarding this question, I made some changes (e.g. not using the <code>VotingClassifier</code>) to further train the final ensemble on the output of the base models (the base models' confidences).
Then finally make predictions.</p> <p>I created the following ensemble class:</p> <pre class="lang-py prettyprint-override"><code>from sklearn.base import clone class CustomEnsemble: def __init__(self, base_model, best_feature_subsets): self.base_models = {class_label: clone(base_model) for class_label in best_feature_subsets} self.best_feature_subsets = best_feature_subsets self.final_model = base_model def train_base_models(self, X_train, y_train): for class_label, features in self.best_feature_subsets.items(): model = self.base_models[class_label] model.fit(X_train[features], (y_train == class_label)) return self def train_final_model(self, X_train, y_train): &quot;&quot;&quot; Probably better to implement the train methods (base models &amp; ensemble) altogether in one method such as train_base_models. &quot;&quot;&quot; predictions = pd.DataFrame() for class_label, model in self.base_models.items(): predictions[class_label] = model.predict_proba(X_train[self.best_feature_subsets[class_label]])[:, 1] self.final_model.fit(predictions, y_train) def predict_base_models(self, X_test): predictions = pd.DataFrame() for class_label, model in self.base_models.items(): predictions[class_label] = model.predict_proba(X_test[self.best_feature_subsets[class_label]])[:, 1] return predictions def predict(self, X_test): base_model_predictions = self.predict_base_models(X_test) return self.final_model.predict(base_model_predictions) def predict_proba_base_models(self, X_test): predictions = pd.DataFrame() for class_label, model in self.base_models.items(): predictions[class_label] = model.predict_proba(X_test[self.best_feature_subsets[class_label]])[:, 1] return predictions def predict_proba(self, X_test): base_model_predictions = self.predict_proba_base_models(X_test) return self.final_model.predict_proba(base_model_predictions) </code></pre> <p><em><strong>Usage:</strong></em></p> <ol> <li>Define dictionary of best feature subsets for classes:</li> </ol> <pre
class="lang-py prettyprint-override"><code>optimal_features = { 0: ['A', 'B', 'C'], 1: ['D', 'E', 'F', 'G'], 2: ['G', 'H', 'I', 'J'] } </code></pre> <ol start="2"> <li>Instantiate class and train models:</li> </ol> <pre class="lang-py prettyprint-override"><code>classifier = RandomForestClassifier() ensemble = CustomEnsemble(classifier, optimal_features) </code></pre> <ol start="3"> <li>Train models:</li> </ol> <pre class="lang-py prettyprint-override"><code># first, train base models ensemble.train_base_models(X_train, y_train) # then, train the ensemble ensemble.train_final_model(X_train, y_train) </code></pre> <ol start="4"> <li>Make predictions:</li> </ol> <pre class="lang-py prettyprint-override"><code>yhat = ensemble.predict(X_test) yhat_proba = ensemble.predict_proba(X_test) # so as to calculate roc_auc_score() </code></pre> <ol> <li><p>However, it appears I am not doing things right. I am not training the ensemble on the output of base models, but on the original input features.</p> </li> <li><p>Also, I am not sure if separating <code>train_base_models()</code> and <code>train_final_model()</code> is the best approach (this implies fitting twice: base models then final model as in the usage). Or better to combine these into one method (say <code>train_ensemble()</code>).</p> </li> </ol>
<python><machine-learning><scikit-learn><classification><ensemble-learning>
2023-11-21 17:12:25
2
305
Amina Umar
77,524,524
3,368,201
Tkinter Canvas - 2px internal padding appearing
<p>I noticed a strange behavior with the Tkinter Canvas implementation. Here is a small snippet to highlight this behavior.</p> <pre class="lang-py prettyprint-override"><code>shall_configure_innerframe = False shall_configure_periodically = False from tkinter import Tk, Canvas from tkinter.ttk import Style, Frame, Label win = Tk() # Styles Style().configure('outer.TFrame', background='black') Style().configure('inner.TFrame', background='red') # Outer frame (to highlight Canvas area) frame = Frame(win, style='outer.TFrame') frame.pack(expand=True, fill='both', padx=1, pady=1) # Canvas canvas = Canvas(frame) canvas.pack(expand=True, fill='both', padx=3, pady=3) # Inner frame innerframe = Frame(canvas, style='inner.TFrame') win_id = canvas.create_window(0, 0, anchor='nw', window=innerframe) # Let's add some content to actually see the frame Label(innerframe, text='AAA').pack() # Callback when canvas is configured (to set window width to the full canvas) def on_canvas_configure(event): canvas_width = event.width canvas.itemconfig(win_id, width=canvas_width) canvas.bind('&lt;Configure&gt;', on_canvas_configure) # Callback when innerframe gets configured def on_innerframe_configure(event): region = canvas.bbox('all') print(f'onscroll: {region}') canvas.config(scrollregion=region) if shall_configure_innerframe: innerframe.bind('&lt;Configure&gt;', on_innerframe_configure) # Function to periodically call the on_innerframe_configure def configure_periodically(*args, **kwargs): on_innerframe_configure(None) win.after(2000, configure_periodically) if shall_configure_periodically: win.after(2000, configure_periodically) win.mainloop() </code></pre> <p>As you can see, the &quot;core&quot; part of a window is a Canvas with a Frame inside, and this frame is then resized to fit the canvas (width).</p> <p>According to the configuration of the two booleans at the beginning, I can get one of the three results highlighted here:</p> <p><a href="https://i.sstatic.net/wsLqZ.png" 
rel="nofollow noreferrer"><img src="https://i.sstatic.net/wsLqZ.png" alt="Three different canvas options" /></a></p> <p>Basically:</p> <ul> <li>If I never configure the <code>scrollregion</code> parameter (so <code>shall_configure_innerframe = False</code> and <code>shall_configure_periodically = False</code>) I get result A: the inner frame aligns with the Canvas border</li> <li>If I configure the <code>scrollregion</code> parameter later in time (so <code>shall_configure_innerframe = False</code> and <code>shall_configure_periodically = True</code>) I get result A until the timer runs out, then as soon as the function gets called I get result B: the inner frame first aligns with the Canvas border, then a 2px vertical padding appears (but not on the horizontal side)</li> <li>If I configure the <code>scrollregion</code> parameter as soon as possible (so <code>shall_configure_innerframe = True</code> and <code>shall_configure_periodically</code> doesn't matter) I get result C: the inner frame has a 2px padding both on vertical and horizontal sides</li> </ul> <p>What I noticed in the latter case is that the inner frame is shifted, not resized, since the last two pixels disappear outside the Canvas (I noticed this in the real application, where content disappears).</p> <p>In all cases the calculated bbox is always the same: <code>onscroll: (0, 0, 382, 19)</code>.</p> <p>Why is the canvas behaving this way? For the moment I'd simply add a 4px padding on the right, but this does not seem correct to me.</p> <p>Note: of course the real application is a bit more complicated, and I actually need to set the scrollregion</p>
<python><tkinter><tkinter-canvas>
2023-11-21 16:42:11
1
2,880
frarugi87
77,524,506
5,374,234
Equivalent of Linux 'zip -FF' in Python
<p>I have a python application that works with .zip files. Occasionally this application will fail with error <code>BadZipFile: File is not a zip file</code></p> <p>In spot checking the failed files, I've found that a working zip file can be created using the <code>zip -FF</code> Linux command.</p> <p>The issue is that the application works with the files in memory via BytesIO. I would prefer to avoid having to write them to disk in order to run the Linux command.</p> <p>Is there any python module which implements the equivalent of <code>zip -FF</code> that I can use on these files in memory?</p>
<python><zip>
2023-11-21 16:39:19
0
478
Chris Decker
77,524,430
17,744,230
Django optional, non-capturing string in URLs
<p>I am trying to add a prefix parameter to our API URLs for our application implemented in Python Django. It currently looks like this:</p> <pre><code>urlpatterns= [ path(&quot;api/&quot;, include( [ path(&quot;books/&quot;, include(books.urls)), path(&quot;events/&quot;, include(events.urls)), path(&quot;meetings/&lt;str:group_id&gt;/&quot;, handle_meetings) ]) ) ] </code></pre> <p>Now what I want to do is make this <code>api</code> prefix a non-capturing and optional parameter. By non-capturing I mean I want the parameter to be in the URL but not be passed to the views and handlers. I don't want to add it as a parameter to my views and handlers because I won't use it for now.</p> <p>And I want it to be optional. Here is what I want to achieve, by example:</p> <p>/api/books - should work</p> <p>/api_v1/books - should work</p> <p>/books - should work</p> <p>Making this non-capturing may not seem very meaningful, because the user can basically enter anything. But it is just a temporary solution, and this is how we decided to go. Is this even possible?</p> <p>If this is not possible, I am okay with giving up on making it non-capturing as well. I still couldn't fix it. So in that case you can think of it as this:</p> <pre><code>path(&quot;books/&quot;, include(books.urls)), path(&quot;events/&quot;, include(events.urls)), path(&quot;meetings/&lt;str:group_id&gt;/&quot;, handle_meetings) </code></pre> <p>I have these. I want all of these to have an optional prefix parameter. The same request-handling cases as above apply; I will just have to update the handlers in my views in order to use the parameter that I add. You can assume the use case is that we have to add api/ as a prefix without breaking the production code and without disturbing people using our API. Hence it should be optional.</p> <p>How can I do this? I am super confused. Thank you in advance.</p>
<python><django><regex><rest>
2023-11-21 16:28:14
1
738
dense8
77,524,428
12,477,405
Problems with inheritance using SQLModel
<p>I want the role attribute from AppUser populated. When I declare it in AppUser class, it works as expected, joining the userrole table. But if I declare it in BaseUser class, it's not populating the field, ignoring the join. My question is, how can I make this inheritance model work? With the code below, you can view the sql query. I'm working with python 3.10.12 and SQLModel 0.0.12</p> <pre><code> from sqlmodel import Field, SQLModel,Column from sqlalchemy import create_engine from sqlalchemy.orm import relationship from sqlalchemy.pool import StaticPool from typing import Optional from sqlalchemy import BigInteger from sqlmodel import Field, Relationship, SQLModel, Column from sqlmodel import Session, select class Base(SQLModel, table = False): id:Optional[int] = Field(sa_column=Column(BigInteger(), default=None, primary_key=True)) class UserRole(Base, table = True): name:str class BaseUser(Base, table = False): username:str role_id:int = Field(default=None, foreign_key=&quot;userrole.id&quot;) role:UserRole = Relationship(sa_relationship=relationship(&quot;UserRole&quot;, lazy=&quot;joined&quot;)) class AppUser(BaseUser, table = True): email:str # Uncomment to make join work #role:UserRole = Relationship(sa_relationship=relationship(&quot;UserRole&quot;, lazy=&quot;joined&quot;)) engine = create_engine( 'sqlite:///demo2.db', connect_args={&quot;check_same_thread&quot;: False}, poolclass=StaticPool, echo=True ) SQLModel.metadata.create_all(engine) with Session(engine) as session: statement = select(AppUser) results = session.exec(statement) items = results.all() </code></pre>
<python><inheritance><sqlalchemy><orm><sqlmodel>
2023-11-21 16:27:59
1
454
Gabriel Macus
77,524,234
1,957,873
asyncio: how to handle exceptions
<p>I'm using <code>asyncio.gather()</code> to run concurrently two coroutines. They should run forever, so if one of them returns (correctly or with an exception) I'd like to know.</p> <p>If I use <code>try/except</code> for <code>asyncio.gather()</code>, I can correctly catch the exceptions of coroutines.</p> <p>The problem is that one coroutine could create and run a new task with <code>asyncio.create_task()</code>. In this case, if an exception occurs during this new task, this exception can't be handled in the context of <code>gather()</code>.</p> <pre><code>import asyncio async def coro3(): raise RuntimeError async def coro1(): asyncio.create_task(coro3()) while True: print(&quot;coro1&quot;) await asyncio.sleep(1) async def coro2(): while True: print(&quot;coro2&quot;) await asyncio.sleep(1) async def main(): try: await asyncio.gather(coro1(), coro2()) except Exception as e: print(e) # The exception of coro3() can't be handled here asyncio.run(main()) </code></pre>
<python><python-asyncio>
2023-11-21 16:01:21
2
846
pozzugno
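A minimal sketch of one way around the problem above: a task created with `asyncio.create_task()` holds its exception until something awaits it, so if the coroutine registers the task somewhere the caller can see, the caller can await it and catch the failure in the same `try/except` context. This is a simplified illustration (the `tasks` list parameter is an assumption added for the demo), not the only fix; on Python 3.11+ `asyncio.TaskGroup` propagates child-task exceptions automatically.

```python
import asyncio

async def coro3():
    raise RuntimeError("boom")

async def coro1(tasks):
    # keep a reference to the background task instead of dropping it
    tasks.append(asyncio.create_task(coro3()))
    await asyncio.sleep(0)

async def main():
    tasks = []
    await asyncio.gather(coro1(tasks))
    try:
        # awaiting the registered task re-raises its stored exception here
        await asyncio.gather(*tasks)
    except RuntimeError as e:
        return f"caught: {e}"

print(asyncio.run(main()))  # caught: boom
```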
77,523,991
5,594,008
Poetry, keyring backends error on creating
<p>After installing poetry using pipx, I got this error:</p> <pre><code>&quot;/Users/my_user/.local/pipx/venvs/poetry/lib/python3.12/site-packages/keyring/backend.py&quot;, line 199, in _load_plugins entry_points = metadata.entry_points()['keyring.backends'] ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/python@3.12/3.12.0/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/metadata/__init__.py&quot;, line 287, in __getitem__ raise KeyError(name) KeyError: 'keyring.backends' </code></pre> <p>How can it be fixed?</p>
<python><python-poetry>
2023-11-21 15:25:21
0
2,352
Headmaster
77,523,657
17,194,313
How do you "condense" long recursive Polars expressions?
<p>The Polars expression language is amazing, but it very occasionally can cause issues when the expression definition becomes extremely large.</p> <p>In my experience this only happens with recursion - for example:</p> <pre class="lang-py prettyprint-override"><code>def diffuse(c:pl.Expr, n_time_steps=1, conductivity:float=0.05)-&gt;pl.Expr: '''Applies a heat-equation inspired smoothing to a time series of values''' c_conductivity=(1-conductivity) new_c = c for _ in range(n_time_steps): new_c = (c_conductivity*new_c) + (0.5*conductivity*(new_c.shift(-1).forward_fill() + new_c.shift(1).backward_fill())) return new_c </code></pre> <p>With <code>n_time_steps &gt; 10</code> the <code>diffuse</code> function can easily return a multi-gigabyte expression.</p> <p>One mitigation to this is to apply the function in &quot;chunks&quot; - it seems to work:</p> <pre class="lang-py prettyprint-override"><code>df = (df .with_columns(__temp__ = pl.col('heat')) .with_columns(__temp__ = pl.col('__temp__').pipe(diffuse, n_time_steps = 5)) # Equivalent to pl.col('heat').pipe(diffuse, n_time_steps=5) .with_columns(__temp__ = pl.col('__temp__').pipe(diffuse, n_time_steps = 5)) # Equivalent to pl.col('heat').pipe(diffuse, n_time_steps=10) .with_columns(__temp__ = pl.col('__temp__').pipe(diffuse, n_time_steps = 5)) # Equivalent to pl.col('heat').pipe(diffuse, n_time_steps=15) .with_columns(__temp__ = pl.col('__temp__').pipe(diffuse, n_time_steps = 5)) # Equivalent to pl.col('heat').pipe(diffuse, n_time_steps=20) .rename({'__temp__':'heat[diffused with n_time_steps=20]'}) ) </code></pre> <p>However, I suspect I'm missing some Polars functionality that would handle this situation better.</p> <p>To clarify, I am trying to keep this as a function that takes in a <code>pl.Expr</code> and returns a <code>pl.Expr</code> - otherwise the above could be implemented as a <code>pl.DataFrame</code> to <code>pl.DataFrame</code> function:</p> <pre><code>def add_diffused_column( df:pl.DataFrame, 
col:str, n_time_steps=1, conductivity:float=0.05 )-&gt;pl.DataFrame: dummy_column_name = f'__dummy_column_{np.random.rand()}' assert dummy_column_name not in df.columns, 'By chance the dummy column name already exists in the dataframe :(' df = df.with_columns(pl.col(col).alias(dummy_column_name)) for i in range(n_time_steps): df = df.with_columns(pl.col(dummy_column_name).pipe(diffuse, n_time_steps=1, conductivity=conductivity)) return df.rename({dummy_column_name: f'{col}[diffused with n_time_steps={n_time_steps}]'}) df.pipe(add_diffused_column, 'heat', n_time_steps=100) </code></pre>
<python><python-polars>
2023-11-21 14:33:44
1
3,075
MYK
77,523,623
3,129,604
Is there a way to take screenshot from a window that is not currently active through a Python script?
<p>I have seen ways to easily take screenshots of the active window in Windows.</p> <p>I would like to know whether there is a way to take screenshots of windows that are minimized in Windows, through a Python script, and how.</p> <p>The reason is that I am building a bot app for a game that will run for multiple clients, and it works largely based on screenshots and on reading numbers and words from those screenshots, but I'm not having any luck finding a working solution for the screenshot part. I even tried ChatGPT and Bard, but apparently there is some sort of protection from Windows itself.</p>
<python><windows><screenshot>
2023-11-21 14:28:18
0
2,753
Matteus Barbosa
77,523,527
1,328,355
numpy Polynomial to vector
<p>How do I extract the coefficient vector from a <code>numpy</code> polynomial?</p> <p>In Python, when using the <code>numpy</code> package to construct a polynomial, how do I extract the coefficients of this polynomial as a vector?</p>
<python><numpy><polynomials>
2023-11-21 14:14:17
1
3,681
Bastiaan Quast
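For the NumPy polynomial question above: both polynomial APIs expose the coefficients as an attribute. `numpy.polynomial.Polynomial` stores them in `coef` in ascending order of powers, while the legacy `np.poly1d` stores them (under the aliases `coeffs`/`coef`/`coefficients`/`c`) in descending order. A quick sketch:

```python
import numpy as np
from numpy.polynomial import Polynomial

p = Polynomial([1, 2, 3])   # represents 1 + 2x + 3x**2
print(p.coef)               # [1. 2. 3.]  (ascending powers)

q = np.poly1d([3, 2, 1])    # represents 3x**2 + 2x + 1
print(q.coefficients)       # [3 2 1]     (descending powers)

# as a plain Python list
vec = p.coef.tolist()
print(vec)                  # [1.0, 2.0, 3.0]
```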
77,523,524
183,054
Create view that shows all objects relating to current user's department
<p>I have the following Django model:</p> <pre class="lang-py prettyprint-override"><code>from django.conf import settings from django.db import models class Appeal(models.Model): DEPARTMENTS = [ (&quot;a&quot;, &quot;DeptA&quot;), (&quot;b&quot;, &quot;DeptB&quot;), (&quot;c&quot;, &quot;DeptC&quot;), # long list of departments here ] title = models.CharField(max_length=400) department = models.CharField(choices=DEPARTMENTS) author = models.ForeignKey( settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name=&quot;appeals&quot; ) </code></pre> <p>And a view like this:</p> <pre class="lang-py prettyprint-override"><code>from django.shortcuts import get_list_or_404, render def appeal_list_view(request): visible_appeals = get_list_or_404(Appeal, author=request.user) return render(request, &quot;appeals/index.html&quot;, {&quot;appeals&quot;: visible_appeals}) </code></pre> <p>This shows only the appeals by the current user. I would like to change this code so that:</p> <ul> <li>each user is associated with a department,</li> <li>a user is shown all appeals from their own department (regardless of whether or not they or another user from the same department submitted it).</li> </ul> <p>What would be the best way to achieve this? I considered creating a group per department, which would solve the first requirement but then I'm not sure how to approach the second one.</p>
<python><django><django-views>
2023-11-21 14:14:05
1
1,536
RafG
77,523,522
15,915,737
ImportError: cannot import name 'SecretField' from 'pydantic'
<p>I'm using prefect on a GitLab CI and it was working fine until now, when it started failing with this error:</p> <pre><code>from pydantic import BaseModel, Field, SecretField ImportError: cannot import name 'SecretField' from 'pydantic' (/usr/local/lib/python3.8/site-packages/pydantic/__init__.py) </code></pre> <p>I thought it was coming from a dependency issue, but apparently not; I have compatible versions (prefect==2.8.3 and pydantic==1.10.11).</p> <p>Could it come from another package's dependencies? Here is the requirements.txt file I'm using to generate my Docker image:</p> <pre><code>pandas==1.5.3 asyncpg==0.26.0 prefect==2.8.3 pydantic==1.10.11 simple_salesforce==1.12.3 snowflake-connector-python==3.0.3 snowflake-sqlalchemy==1.4.7 dbt-snowflake==1.4.2 beautifulsoup4==4.12.2 </code></pre> <p>Where could it come from?</p>
<python><package><pydantic><prefect>
2023-11-21 14:13:45
2
418
user15915737
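For reference on the failing name above: the secret types that are reliably importable from pydantic's top level are `SecretStr` and `SecretBytes`; `SecretField` appears to be an internal base class that not every 1.10.x release re-exports, which would be consistent with the traceback. The sketch below only demonstrates the public API; it does not fix prefect's own import, which likely needs a prefect/pydantic version pair whose names agree:

```python
from pydantic import BaseModel, SecretStr

class Settings(BaseModel):
    api_token: SecretStr  # masked in repr/str, retrievable on demand

s = Settings(api_token="hunter2")
print(s.api_token)                     # **********
print(s.api_token.get_secret_value())  # hunter2
```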
77,523,409
1,216,183
Python slower inside docker container
<p>I'm experiencing a slower python when running from inside of a docker container than the one of my system. I can't explain this difference.</p> <p>I've found a lot of content about this on the Internet, but the solutions I've tried don't work.</p> <h3>Test</h3> <p>Host system Up-to-date <em>Arch linux</em></p> <pre><code>$ python --version Python 3.11.5 $ python -m timeit 'from random import randrange for _ in range(100): [randrange(50,500) for _ in range(randrange(30000,50000))]' 1 loop, best of 5: 3.07 sec per loop </code></pre> <p>Using python:3.11.5 docker image</p> <pre><code>$ docker run python:3.11.5-slim python --version Python 3.11.5 $ docker run python:3.11.5-slim python -m timeit 'from random import randrange for _ in range(100): [randrange(50,500) for _ in range(randrange(30000,50000))]' 1 loop, best of 5: 3.99 sec per loop </code></pre> <p>I've tried many times with more loops, and the result is the same. 0.5 to 1 sec more time when run inside container.</p> <p>I've tried with <code>--security-opt seccomp:unconfined</code> but it does not make a major difference</p> <p>What is happening on my system ? 
docker isn't supposed to add such overhead.</p> <h3>Edit: other tests</h3> <h4>Bash loop</h4> <p>On local system:</p> <pre><code>$ bash -c &quot;time for ((i=0; i&lt;100; i++)); do for ((j=0; j&lt;$(shuf -i 7000-8000 -n 1); j++)); do : done done&quot; </code></pre> <p>Empiric (manually run 10 times in a row) result around 2.1s</p> <p>Inside docker container</p> <pre><code>$ docker run python:3.11.5-slim bash -c &quot;time for ((i=0; i&lt;100; i++)); do for ((j=0; j&lt;$(shuf -i 7000-8000 -n 1); j++)); do : done done&quot; </code></pre> <p>Empiric (manually run 10 times in a row) result around 2.5s</p> <h4>Python complex math</h4> <p>On local system:</p> <pre><code>$ python -m timeit '(82637**533333).bit_length()' 1 loop, best of 5: 1.36 sec per loop </code></pre> <p>Inside docker container</p> <pre><code>$ docker run python:3.11.5-slim python -m timeit '(82637**533333).bit_length()' 1 loop, best of 5: 1.63 sec per loop </code></pre>
<python><docker><performance>
2023-11-21 14:00:37
0
2,213
fabien-michel
77,523,265
3,932,908
struggling to get unique values in large dataset
<p>I have a large dataset that can't fit in memory with fields &quot;function_name&quot; and &quot;trace&quot; (and a couple of others). The traces are very long strings, and what I want to do is for each function_name, ensure there are no duplicate traces without running out of memory. I have tried the following:</p> <pre><code>df = pl.scan_parquet(Path(&quot;...&quot;, &quot;dataset.parquet&quot;)) df = df.unique([&quot;function_name&quot;, &quot;trace&quot;]).sink_parquet(Path(&quot;...&quot;, &quot;output.parquet&quot;)) </code></pre> <p>But I quickly get an out-of-memory error. What is the right way to do this? (If I were to do it sequentially, so filtering by function name then getting unique values, it will work but is just really slow).</p>
<python><python-polars>
2023-11-21 13:36:39
0
399
Henry
77,523,119
2,002,076
Python 2.7 to Python 3 upgrade
<p>I am working on updating a series of python 2.7 plugins to python 3.10 that get loaded into the software individually. Below is an example of how the python 2.7 plugin is organized.</p> <p>ProjectDir</p> <ul> <li>Plugin1 <ul> <li><code>__init__.py</code></li> </ul> </li> <li>Plugin2 <ul> <li><code>__init__.py</code></li> </ul> </li> <li>Plugin3 <ul> <li><code>__init__.py</code></li> </ul> </li> <li>Common <ul> <li>CommonFunctions.py</li> </ul> </li> </ul> <p>All three of the plugins use functions out of the CommonFunctions.py file. When I update the import statements for the three plugins to</p> <pre><code>from ..Common.CommonFunctions import Example1 </code></pre> <p>I get the exception:</p> <pre><code>ImportError: attempted relative import beyond top-level package </code></pre> <p>Based on my research this is due to the Common directory not being in the same directory as the <code>__init__.py</code> file (or one of its subfolders). Is there a way to solve this issue without copying the Common directory to each plugin?</p> <p>I have verified that when I do copy the folder over, the issue is resolved. However, that is not a desirable solution.</p>
<python><python-3.x><python-2.7>
2023-11-21 13:13:36
2
1,352
Travis Pettry
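One common fix for the error above, sketched below with a throwaway layout built in a temp directory: keep `Common` where it is, make sure `ProjectDir` itself is on `sys.path`, and use absolute imports (`from Common.CommonFunctions import Example1`) instead of relative ones. Relative imports cannot climb above the top-level package, which is exactly the `ImportError` shown. The directory and function names mirror the question; the temp-dir scaffolding is only for the demo.

```python
import os
import sys
import tempfile

# Recreate the question's layout in a temporary ProjectDir
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Common"))
os.makedirs(os.path.join(root, "Plugin1"))
open(os.path.join(root, "Common", "__init__.py"), "w").close()
with open(os.path.join(root, "Common", "CommonFunctions.py"), "w") as fh:
    fh.write("def Example1():\n    return 'shared code'\n")
with open(os.path.join(root, "Plugin1", "__init__.py"), "w") as fh:
    # absolute import: no '..', so no 'beyond top-level package' error
    fh.write("from Common.CommonFunctions import Example1\n")

sys.path.insert(0, root)  # what the host application must guarantee
import Plugin1

print(Plugin1.Example1())  # shared code
```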
77,523,055
833,300
_MissingDynamic: `license` defined outside of `pyproject.toml` is ignored
<p>For the last 24 hours or so I'm seeing the below error messages when I try to install either of sexpdata 1.0.1 or segno 1.5.3 in python.</p> <p>I should perhaps note that these installs usually happen via github code runners (automated tests), which is what alerted me to the issue. I can reproduce locally.</p> <p>My requirements.txt hasn't changed in many weeks and no substantive changes for even longer. Climbing back up the commit tree doesn't change the error. So I suspect the problem isn't in my code, but I also am not sure where the problem might lie, as sexpdata hasn't changed since June, according to its <a href="https://github.com/jd-boyd/sexpdata" rel="noreferrer">github repo</a>.</p> <p>Any pointers what might be happening or what to look at?</p> <pre><code>Collecting sexpdata&gt;=0.0.3 (from epc) Using cached sexpdata-1.0.1.tar.gz (8.6 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; [71 lines of output] /tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/config/_apply_pyprojecttoml.py:75: _MissingDynamic: `license` defined outside of `pyproject.toml` is ignored. !! ******************************************************************************** The following seems to be defined outside of `pyproject.toml`: `license = 'BSD License'` According to the spec (see the link below), however, setuptools CANNOT consider this value unless `license` is listed as `dynamic`. https://packaging.python.org/en/latest/specifications/declaring-project-metadata/ To prevent this problem, you can list `license` under `dynamic` or alternatively remove the `[project]` table from your file and rely entirely on other means of configuration. ******************************************************************************** !! 
_handle_missing_dynamic(dist, project_table) /tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/config/_apply_pyprojecttoml.py:75: _MissingDynamic: `keywords` defined outside of `pyproject.toml` is ignored. !! ******************************************************************************** The following seems to be defined outside of `pyproject.toml`: `keywords = ['s-expression', 'lisp', 'parser']` According to the spec (see the link below), however, setuptools CANNOT consider this value unless `keywords` is listed as `dynamic`. https://packaging.python.org/en/latest/specifications/declaring-project-metadata/ To prevent this problem, you can list `keywords` under `dynamic` or alternatively remove the `[project]` table from your file and rely entirely on other means of configuration. ******************************************************************************** !! _handle_missing_dynamic(dist, project_table) Traceback (most recent call last): File &quot;/var/www/danube/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;/var/www/danube/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) File &quot;/var/www/danube/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) File &quot;/tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 325, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) File &quot;/tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 295, in _get_build_requires self.run_setup() File 
&quot;/tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 311, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 8, in &lt;module&gt; File &quot;/tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/__init__.py&quot;, line 103, in setup return distutils.core.setup(**attrs) File &quot;/tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/_distutils/core.py&quot;, line 159, in setup dist.parse_config_files() File &quot;/var/www/danube/venv/lib/python3.10/site-packages/_virtualenv.py&quot;, line 21, in parse_config_files result = old_parse_config_files(self, *args, **kwargs) File &quot;/tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/dist.py&quot;, line 627, in parse_config_files pyprojecttoml.apply_configuration(self, filename, ignore_option_errors) File &quot;/tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/config/pyprojecttoml.py&quot;, line 67, in apply_configuration return _apply(dist, config, filepath) File &quot;/tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/config/_apply_pyprojecttoml.py&quot;, line 56, in apply _apply_project_table(dist, config, root_dir) File &quot;/tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/config/_apply_pyprojecttoml.py&quot;, line 82, in _apply_project_table corresp(dist, value, root_dir) File &quot;/tmp/pip-build-env-ga37r2a6/overlay/lib/python3.10/site-packages/setuptools/config/_apply_pyprojecttoml.py&quot;, line 183, in _license _set_config(dist, &quot;license&quot;, val[&quot;text&quot;]) KeyError: 'text' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; See above for output. 
note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre>
<python><python-3.x><pip>
2023-11-21 13:04:53
2
3,799
jma
77,523,045
4,255,096
Failure to partition a dataframe to write to DeltaLake
<p>I'm getting an <code>IllegalArgumentException</code> while attempting to write to Delta.</p> <pre><code>E pyspark.sql.utils.IllegalArgumentException: requirement failed: The provided partitioning does not match of the table. E - provided: identity(class_name) E - table: </code></pre> <p>This is what I am using:</p> <pre><code> df.repartition(col('class_name')).write \ .option(&quot;mergeSchema&quot;,&quot;true&quot;) \ .format(&quot;delta&quot;) \ .mode(&quot;append&quot;) \ .partitionBy(&quot;class_name&quot;) \ .saveAsTable(&quot;dummy_table&quot;) </code></pre> <p>I've already tested the command without partitioning and it works fine but I need to partition the table by that column. The below works fine.</p> <pre><code> df.write \ .option(&quot;mergeSchema&quot;,&quot;true&quot;) \ .format(&quot;delta&quot;) \ .mode(&quot;append&quot;) \ .saveAsTable(&quot;dummy_table&quot;) </code></pre> <p>In truth, the column is created before using .withColumn but I'm assuming that shouldn't be a problem as the write to Delta works fine.</p> <p>Based on the page from Delta it should work.</p> <p><a href="https://delta.io/blog/2023-01-18-add-remove-partition-delta-lake/" rel="nofollow noreferrer">https://delta.io/blog/2023-01-18-add-remove-partition-delta-lake/</a></p>
<python><apache-spark><pyspark><delta-lake>
2023-11-21 13:02:30
0
375
Geosphere
77,523,017
8,302,849
How to toggle swap camera view between 2 cameras on live feed frame?
<p>I have only 2 USB cameras that are connected to my PC (Note: highly confident that I won't use more than 2 cameras now and in future) . My aim of this program is to capture front and rear view concurrently. And I have successfully have the working prototype of the program which allowed to view 2 live feed video at the same time .</p> <p><a href="https://i.sstatic.net/FGsKW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FGsKW.png" alt="enter image description here" /></a></p> <p>I attached the sample program here which allowed live video feeds from 2 cameras concurrently at the same time.</p> <pre><code>import cv2 import tkinter as tk from datetime import datetime from PIL import Image, ImageTk, ImageDraw global photo1, photo2 photo1 = None photo2 = None def print_camera_info(width, height, camera_number): print(f&quot;Camera {camera_number}&quot;) print(f&quot;Width: {width}&quot;) print(f&quot;Height: {height}&quot;) print(&quot;---&quot;) def swap_camera(): print(&quot;In swap_camera!&quot;) #debugging purpose global photo1, photo2 print(&quot;After define global in swap_camera&quot;) #debugging purpose print(&quot;Before swap&quot;)#debugging purpose # Swap the frames algorithm tmp = photo1 photo1 = photo2 photo2 = tmp print(&quot;After Swap&quot;) #debugging purpose def main(): # Open two camera devices front_cap = cv2.VideoCapture(0) # Use 0 for the first camera (Front Camera) rear_cap = cv2.VideoCapture(1) # Use 1 for the second camera (Rear Camera) # Check if the cameras opened successfully if not (front_cap.isOpened() and rear_cap.isOpened()): print(&quot;Error: Could not open cameras.&quot;) print(&quot;front_cap opened:&quot;, front_cap.isOpened()) print(&quot;rear_cap opened:&quot;, rear_cap.isOpened()) return # Set the desired width and height for live feed target_width, target_height = 320, 240 # Set the CAP_PROP_FRAME_WIDTH and CAP_PROP_FRAME_HEIGHT properties for both cameras front_cap.set(cv2.CAP_PROP_FRAME_WIDTH, target_width) 
front_cap.set(cv2.CAP_PROP_FRAME_HEIGHT, target_height) rear_cap.set(cv2.CAP_PROP_FRAME_WIDTH, target_width) rear_cap.set(cv2.CAP_PROP_FRAME_HEIGHT, target_height) # Get the actual dimensions after setting the properties actual_width1 = int(front_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) actual_height1 = int(front_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) actual_width2 = int(rear_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) actual_height2 = int(rear_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) # Print camera information with the actual resized dimensions print_camera_info(actual_width1, actual_height1, &quot;Front&quot;) print_camera_info(actual_width2, actual_height2, &quot;Rear&quot;) # Create Tkinter window root = tk.Tk() root.title(&quot;Combined Cameras&quot;) # Create a Checkbutton for camera toggle camera_toggle_button = tk.Checkbutton(root, text=&quot;Toggle Camera&quot;, command=swap_camera) camera_toggle_button.grid(row=1, column=1, padx=10, pady=10) # Create label widget for date and time datetime_label = tk.Label(root, text=&quot;&quot;, font=(&quot;Helvetica&quot;, 14)) datetime_label.grid(row=1, column=2, columnspan=2, pady=10) # Create label widgets for camera views label1 = tk.Label(root, text=&quot;FRONT VIEW&quot;) label2 = tk.Label(root, text=&quot;REAR VIEW&quot;) label1.grid(row=2, column=2) label2.grid(row=2, column=3) # Create image labels for camera feeds video_livefeedfront = tk.Label(root) video_livefeedrear = tk.Label(root) video_livefeedfront.grid(row=3, column=2, padx=10) video_livefeedrear.grid(row=3, column=3, padx=10) while True: # Read frames from both cameras ret1, frame1 = front_cap.read() ret2, frame2 = rear_cap.read() # Check if the frames were read successfully if not (ret1 and ret2): print(&quot;Error: Could not read frames from cameras.&quot;) break # Resize frames to the target dimensions frame_resized1 = cv2.resize(frame1, (target_width, target_height)) frame_resized2 = cv2.resize(frame2, (target_width, target_height)) # Convert frames to RGB format for 
PIL frame_rgb1 = cv2.cvtColor(frame_resized1, cv2.COLOR_BGR2RGB) frame_rgb2 = cv2.cvtColor(frame_resized2, cv2.COLOR_BGR2RGB) # Convert frames to PhotoImage format for Tkinter photo1 = ImageTk.PhotoImage(Image.fromarray(frame_rgb1)) photo2 = ImageTk.PhotoImage(Image.fromarray(frame_rgb2)) # Update image live videofeed video_livefeedfront.configure(image=photo1) video_livefeedfront.photo = photo1 video_livefeedrear.configure(image=photo2) video_livefeedrear.photo = photo2 # Update the Tkinter window root.update() # Release the camera devices and destroy the Tkinter window front_cap.release() rear_cap.release() cv2.destroyAllWindows() root.destroy() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>There are times when the front and rear views might not be at their designated locations when connected (i.e., the front camera shows the rear live video feed). Therefore, I added a <code>camera_toggle_button</code> to swap the views. However, there is a weird bug right now: when I click on the <code>camera_toggle_button</code> checkbox, I expect the FRONT VIEW to be swapped with the REAR VIEW, but as shown below, that does not happen at all. As you can see, I successfully print out the debugging notes (meaning it enters all the algorithm sequences correctly), but somehow the live video feeds do not get swapped at all. It is weird, to say the least. <a href="https://i.sstatic.net/7E4Uh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7E4Uh.png" alt="enter image description here" /></a></p>
<python><tkinter>
2023-11-21 12:59:06
2
423
Alan Koh W.T
77,522,801
1,581,090
How to create a rotating animation of a Scatter3D plot with plotly and save it as gif?
<p>With plotly in jupyter I am creating a Scatter3D plot as follows:</p> <pre><code># Configure the trace. trace = go.Scatter3d( x=x, y=y, z=z, mode='markers', marker=dict(color=colors, size=1) ) # Configure the layout. layout = go.Layout( margin={'l': 0, 'r': 0, 'b': 0, 't': 0}, height = 1000, width = 1000 ) data = [trace] plot_figure = go.Figure(data=data, layout=layout) # Render the plot. plotly.offline.iplot(plot_figure) </code></pre> <p>How can I rotate a plot generated like this in order to create a gif video out of it i.e. stored as a gif file like <code>rotate.gif</code> which shows an animation of the plot rotated?</p> <p>Based on the comments given I created this code (complete, working example):</p> <pre><code>import plotly.graph_objects as go import numpy as np import plotly.io as pio # Helix equation t = np.linspace(0, 10, 50) x, y, z = np.cos(t), np.sin(t), t fig= go.Figure(go.Scatter3d(x=x, y=y, z=z, mode='markers')) x_eye = -1.25 y_eye = 2 z_eye = 0.5 fig.update_layout( title='Animation Test', width=600, height=600, scene_camera_eye=dict(x=x_eye, y=y_eye, z=z_eye), updatemenus=[dict(type='buttons', showactive=False, y=1, x=0.8, xanchor='left', yanchor='bottom', pad=dict(t=45, r=10), buttons=[dict(label='Play', method='animate', args=[None, dict(frame=dict(duration=5, redraw=True), transition=dict(duration=1), fromcurrent=True, mode='immediate' )] ) ] ) ] ) def rotate_z(x, y, z, theta): w = x+1j*y return np.real(np.exp(1j*theta)*w), np.imag(np.exp(1j*theta)*w), z frames=[] for k, t in enumerate(np.arange(0, 6.26, 0.1)): xe, ye, ze = rotate_z(x_eye, y_eye, z_eye, -t) newframe = go.Frame(layout=dict(scene_camera_eye=dict(x=xe, y=ye, z=ze))) frames.append(newframe) pio.write_image(newframe, f&quot;images/images_{k+1:03d}.png&quot;, width=400, height=400, scale=1) fig.frames=frames fig.show() </code></pre> <p>which runs without an error and does rotate the scenery when I press on Play, however the image that is saved just shows an empty 2D coordinate 
system:</p> <p><a href="https://i.sstatic.net/cdryC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cdryC.png" alt="enter image description here" /></a></p> <p>but not what I actually see rotating. It also seems those image are created when I execute the cell in the jupyter notebook and not after I press &quot;Play&quot;. Seems that there are two figures, one that I can see rotating, and the image of an empty 2D coordinate system that gets saved to a file ...</p>
<python><animation><3d><plotly><jupyter>
2023-11-21 12:24:12
2
45,023
Alex
77,522,771
110,451
How can I get python to get exactly the bytes written to the standard output when running with subprocess.run?
<p>The problem I have is the following: in a subprocess (on Windows), someone is writing something as:</p> <p><code>print(&quot;line1\r\nline2\r\nline3\n&quot;)</code></p> <p>and for some reason additional <code>\r</code> chars end up in the stream being written.</p> <p>I'm running in binary mode (<code>text=False</code>) -- which I thought would let me get exactly the same bytes written, but I just can get it unless the user just writes bytes to <code>sys.stdout.buffer</code>, which isn't really feasible given that the codebase has <code>print</code> in the code which I'd like to capture properly (just not with the additional chars there).</p> <p>Below is a sample code on what I was expecting to work (but doesn't in practice and I'm not really sure why):</p> <pre><code>if __name__ == &quot;__main__&quot;: import sys args = sys.argv[1:] if &quot;-&quot; not in sys.argv[1:]: import subprocess result = subprocess.run( [sys.executable, __file__, &quot;-&quot;], capture_output=True, text=False ) output = result.stdout assert ( output == b&quot;line1\r\nline2\r\nline3\nline1\r\nline2\r\nline3\n&quot; ), f&quot;Found: {output!r}&quot; else: sys.stdout.write(&quot;line1\r\nline2\r\nline3\n&quot;) sys.stdout.buffer.write(b&quot;line1\r\nline2\r\nline3\n&quot;) </code></pre> <p>Also, depending on whether I'm using a real terminal or running from the IDE, the contents written are different.</p> <p>When running from the IDE (tested in PyDev and in PyCharm) I get:</p> <p><code>AssertionError: Found: b'line1\r\r\nline2\r\r\nline3\r\nline1\r\nline2\r\nline3\n'</code></p> <p>and when running from the terminal I get:</p> <p><code>AssertionError: Found: b'line1\r\nline2\r\nline3\nline1\r\r\nline2\r\r\nline3\r\n'</code></p> <p>-- notice that in the terminal additional <code>\r</code> chars are added to the contents written to <code>sys.stdout.buffer</code> part, whereas in the IDE additional <code>\r</code> chars are added to the <code>sys.stdout</code> part.</p>
<python><windows>
2023-11-21 12:20:25
1
25,390
Fabio Zadrozny
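A hedged sketch of one way to make the child's `print` output byte-exact for the question above: reconfigure the child's stdout with `newline=''` (available via `TextIOWrapper.reconfigure` since Python 3.7), which turns off the `\n` to `\r\n` translation applied to the text stream on Windows. This does not explain the IDE-vs-terminal difference, but it makes the captured bytes predictable on any platform:

```python
import subprocess
import sys

child = (
    "import sys;"
    "sys.stdout.reconfigure(newline='');"   # disable newline translation
    "print('line1\\r\\nline2\\r\\nline3')"  # print appends a plain '\n'
)
result = subprocess.run(
    [sys.executable, "-c", child], capture_output=True, text=False
)
print(result.stdout)  # b'line1\r\nline2\r\nline3\n' on Windows and POSIX alike
```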
77,522,743
8,547,163
How to create straight edges at specific corners using graphviz
<p>I have the code below to create a simple flow using graphviz, to be visualized in a Jupyter notebook:</p> <pre><code>from graphviz import Digraph # Create Digraph object dot = Digraph() # Add nodes dot.node('1', shape='box') dot.node('2', shape='rectangle') dot.node('3', shape='parallelogram') dot.node('4', shape='diamond') dot.node('5', shape='box') # Add edges between nodes dot.edges(['12', '23', '34', '42', '45' ]) # Visualize the graph dot </code></pre> <p>How can I make the edge from the decision box's right corner to box 2 a straight line?</p>
<python><jupyter-notebook><graphviz>
2023-11-21 12:16:43
2
559
newstudent
77,522,726
1,850,978
Populating text on a grid does not place text in the grid cell nor do the cells appear
<p>I have the following Python code:</p> <pre><code>import json import sys import matplotlib.pyplot as plt import numpy as np if len(sys.argv) != 2: print(&quot;Usage: python script_name.py &lt;path_to_input_json&gt;&quot;) sys.exit(1) filename = sys.argv[1] with open(filename) as f: data = json.load(f) # Set up figure and axes fig, ax = plt.subplots(figsize=(12, 10)) # Set up data for grid periods = ['early_morning', 'morning', 'noon', 'afternoon', 'evening', 'night', 'late_night'] days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] grid_data = np.full((len(periods), len(days)), '', dtype=object) # Set up axes ax.set_xticks(np.arange(len(days))) ax.set_yticks(np.arange(len(periods))) ax.set_xticklabels(days) ax.set_yticklabels(periods) ax.set_title('Weekly Schedule') ax.set_xlabel('Day of the week') ax.set_ylabel('Time Periods') # Rotate x-axis labels to prevent overlap plt.setp(ax.get_xticklabels(), rotation=45, ha=&quot;right&quot;, rotation_mode=&quot;anchor&quot;) # Adjust spacing to prevent label overlap fig.tight_layout() # Populate grid for i, day in enumerate(days): for j, period in enumerate(periods): cell_text = [] if day in data and period in data[day]: if data[day][period]['accuracy_level'] &gt;= 0.7: for item in data[day][period]['study_material']: if item['accuracy_level'] &gt;= 0.7: cell_text.append(item['course']) cell_text.append(item['subject']) if data[day][period]['accuracy_level'] &gt;= 0.7: cell_text.append(data[day][period]['platform_group']) if cell_text: ax.text(j, i, cell_text, ha='center', va='center') # Save figure plt.savefig('weekly_schedule.png') </code></pre> <p>It outputs the following result:</p> <p><a href="https://i.sstatic.net/G480F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G480F.png" alt="output" /></a></p> <p>Please advise on how a few questions:</p> <ol> <li>how to align the text into the proper location on the grid, based on the <code>periods</code> and 
<code>days</code> values I have from my input data?</li> <li>Is it possible to draw the grid lines?</li> <li>Is it possible to space the 'early_morning' &amp; 'late_night' values on the Y axis and 'Monday' &amp; 'Sunday' on the X axis, respectively?</li> </ol>
<python><matplotlib>
2023-11-21 12:14:00
1
5,853
David Faizulaev
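The question above hinges on two details of `ax.text`: it expects a single string (not a list), and its first two arguments are the x (column/day) and y (row/period) data coordinates, which the posted loop swaps relative to the tick labels. A minimal sketch of cell-aligned text plus grid lines, using made-up sample data rather than the question's JSON:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

days = ['Monday', 'Tuesday', 'Wednesday']
periods = ['morning', 'noon', 'evening']

fig, ax = plt.subplots()
ax.set_xticks(np.arange(len(days)))
ax.set_yticks(np.arange(len(periods)))
ax.set_xticklabels(days)
ax.set_yticklabels(periods)

# pad the limits so the first/last rows and columns sit inside the axes
ax.set_xlim(-0.5, len(days) - 0.5)
ax.set_ylim(-0.5, len(periods) - 0.5)

# minor ticks halfway between cells give grid lines at the cell borders
ax.set_xticks(np.arange(-0.5, len(days)), minor=True)
ax.set_yticks(np.arange(-0.5, len(periods)), minor=True)
ax.grid(which='minor', color='grey', linewidth=0.8)

# ax.text wants one string: join the pieces, and pass (day index, period index)
cell_text = ['Math', 'Algebra']
ax.text(1, 2, "\n".join(cell_text), ha='center', va='center')

fig.savefig('schedule.png')
```

In the question's loop the equivalent fix is `ax.text(i, j, "\n".join(cell_text), ...)` with `i` the day index and `j` the period index, matching the tick labels.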
77,522,673
5,506,400
Google Calendar API Understanding of token.json
<p>I am working with the Google calendar API, the python quickstart in particular, but the language does not matter.</p> <p>The example from <a href="https://developers.google.com/calendar/api/quickstart/python" rel="nofollow noreferrer">https://developers.google.com/calendar/api/quickstart/python</a> has:</p> <pre><code> if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( &quot;credentials.json&quot;, SCOPES ) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open(&quot;token.json&quot;, &quot;w&quot;) as token: token.write(creds.to_json()) </code></pre> <p>I am working on a website, that is mostly server side. That people will log in, and be able to create a calendar, that the server will allow them to create a calendar, and automatically add events depending on events that occur.</p> <p>Question 1: My question is about <code>token.json</code>, is that file shared between all users, or should a separator file be created for each person?</p> <p>Question 2: Should it be backed up, cause if I lost the file then will everyone be logged out?</p>
<python><google-oauth><google-calendar-api><google-api-python-client>
2023-11-21 12:04:57
1
2,550
run_the_race
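On the substance of this question: `token.json` in the quickstart stores one user's OAuth access/refresh token pair, so on a multi-user site it must not be shared between users; losing it only forces users to re-authorize, it does not lose data. A hedged sketch of per-user token storage (the file layout and helper name are assumptions, not part of the Google client API; production systems usually keep tokens in a database keyed by user id):

```python
from pathlib import Path

def token_path(user_id: str, base: str = "tokens") -> Path:
    # one credentials file per user: each token.json holds that user's
    # OAuth grant, so sharing one file would let users act as each other
    base_dir = Path(base)
    base_dir.mkdir(parents=True, exist_ok=True)
    return base_dir / f"token_{user_id}.json"

# the quickstart's save step then becomes per-user, e.g.:
# with open(token_path(user_id), "w") as token:
#     token.write(creds.to_json())
```

Backing the tokens up is worthwhile only to spare users a re-consent prompt; the `credentials.json` client secret, by contrast, is app-wide and should be kept safe.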
77,522,575
13,674,431
Python Selenium Choose date from a Datepicker Calendar
<p>I want to select a date (dd/mm/yyyy) from a Datepicker calendar</p> <p><a href="https://i.sstatic.net/lgDE4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lgDE4.png" alt="enter image description here" /></a></p> <p>So I used the following code, which doesn't display an error but doesn't work on the browser (I don't have the date selected that I'm looking for).</p> <pre><code>dateDebut=driver.find_element(By.XPATH ,'XPATH on Browser') #here I copied the XPATH on browser dateDebut.send_keys('01/01/2021')#I have no error, but the date on the browser is not &quot;01/01/2021&quot; </code></pre> <p>Any thoughts which can point me in the right direction would be great. Thanks.</p>
<python><selenium-webdriver><selenium-chromedriver><webdriver>
2023-11-21 11:48:29
1
315
Ruser-lab9
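A typical cause for the behaviour in this question: many datepicker inputs are read-only, so `send_keys` is silently ignored and you must click the matching day cell instead. As an illustration only (assuming a jQuery-UI-style widget whose day cells carry 0-based `data-month` and `data-year` attributes — inspect the actual page markup to confirm), the XPath for a given date can be built like this:

```python
from datetime import date

def day_cell_xpath(target: date) -> str:
    # jQuery UI renders data-month 0-based, hence the "- 1"
    return (
        f"//td[@data-month='{target.month - 1}' and @data-year='{target.year}']"
        f"/a[text()='{target.day}']"
    )

xpath = day_cell_xpath(date(2021, 1, 1))
```

In Selenium you would first click the input to open the calendar, navigate months if needed, then `driver.find_element(By.XPATH, day_cell_xpath(...)).click()`.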
77,522,478
7,841,952
Why does xgboost prediction have lower AUC than evaluation of same data in eval_set?
<p>I am training a binary classifier and I want to know the AUC value for its performance on a test set. I thought there were 2 similar ways to do this: 1) I enter the test set into parameter <code>eval_set</code>, and then I receive corresponding AUC values for each boosting round in <code>model.evals_result()</code>; 2) After model training I make a prediction for the test set and then calculate the AUC for that prediction. I had thought that these methods should produce similar values, but the latter method (calculating AUC of a prediction) consistently produces much lower values. Can you help me understand what is going on? I must have misunderstood the function of <code>eval_set</code>.</p> <p>Here is a fully reproducible example using a kaggle dataset (available <a href="https://www.kaggle.com/datasets/uciml/red-wine-quality-cortez-et-al-2009/" rel="nofollow noreferrer">here</a>):</p> <pre><code>import pandas as pd from sklearn.model_selection import train_test_split from sklearn.metrics import RocCurveDisplay, roc_curve, auc from xgboost import XGBClassifier # xgboost version 1.7.6 import matplotlib.pyplot as plt # Data available on kaggle here https://www.kaggle.com/datasets/uciml/red-wine-quality-cortez-et-al-2009/ data = pd.read_csv('winequality-red.csv') data.head() # Separate targets X = data.drop('quality', axis=1) y = data['quality'].map(lambda x: 1 if x &gt;= 7 else 0) # wine quality &gt;7 is good, rest is not good # Split into training and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # Create model and fit params = { 'eval_metric':'auc', 'objective':'binary:logistic' } model = XGBClassifier(**params) model.fit( X_train, y_train, eval_set=[(X_test, y_test)] ) </code></pre> <p>First I visualize the AUC metrics resulting from evaluating the test set provided in eval_set:</p> <pre><code>results = model.evals_result() plt.plot(np.arange(0,100),results['validation_0']['auc']) plt.title(&quot;AUC from 
eval_set&quot;) plt.xlabel(&quot;Estimator (boosting round)&quot;) plt.ylabel(&quot;AUC&quot;) </code></pre> <p><a href="https://i.sstatic.net/MRZ8E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MRZ8E.png" alt="enter image description here" /></a></p> <p>Next, I make a prediction on the same test set, get the AUC, and visualize the ROC curve:</p> <pre><code>test_predictions = model.predict(X_test) fpr, tpr, thresholds = roc_curve(y_true=y_test, y_score=test_predictions,pos_label=1) roc_auc = auc(fpr, tpr) display = RocCurveDisplay(roc_auc=roc_auc, fpr=fpr, tpr=tpr) display.plot() </code></pre> <p><a href="https://i.sstatic.net/lm6A4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lm6A4.png" alt="enter image description here" /></a></p> <p>As you can see, the AUC value of the prediction is 0.81, which is lower than any AUC calculated from evaluating the same test set in <code>eval_set</code>. How have I misunderstood the two methods? Thanks, xgboost is new to me and I appreciate your advice.</p>
<python><xgboost>
2023-11-21 11:32:40
1
436
j45612
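The gap in the question above almost certainly comes from scoring hard class labels: `model.predict` returns 0/1, while the AUC logged for `eval_set` is computed from continuous scores (what `model.predict_proba(X_test)[:, 1]` returns). Thresholding collapses the ranking, which can only lower AUC. A self-contained numpy illustration of the effect (sample data is made up):

```python
import numpy as np

def auc_score(y_true, y_score):
    # rank-based AUC: the probability that a random positive outranks
    # a random negative (ties count half)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

y_true = np.array([0, 0, 1, 1, 0, 1])
proba = np.array([0.20, 0.40, 0.70, 0.60, 0.55, 0.90])  # predict_proba-style scores
labels = (proba >= 0.5).astype(int)                      # predict-style 0/1 labels

auc_from_proba = auc_score(y_true, proba)    # perfect ranking here
auc_from_labels = auc_score(y_true, labels)  # lower: thresholding loses information
```

In the question's code, replacing `model.predict(X_test)` with `model.predict_proba(X_test)[:, 1]` before `roc_curve` should bring the two AUC figures into line.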
77,522,454
21,691,539
importlib.import_module loading the wrong module in custom plugin system
<p>I'm trying to implement a plugin system in Python.<br> Each plugin exposes exactly the same function and I want to call one or another, for instance with a simple Id.<br> For instance, the plugins can be image loader, each plugin handling a different format (<em>jpeg</em>,<em>bmp</em>,...).<br> Plugins are all inside a special directory tree and I try to import them with <em>relative</em> syntax.</p> <p>My issue is that, as soon as I've got more than one plugin, my plugin loader always loads the same one.</p> <p>Here is my directory tree</p> <pre><code>root_dir | --main.py | --plugin_mod1 | | | --extend.py ... | --plugin_modN | | | --extend.py ... </code></pre> <p>here is my &quot;load plugins&quot; function (in <code>main.py</code>)</p> <pre class="lang-py prettyprint-override"><code># loads plugins with all the same interface def LoadPlugin(): global root_dir Extensions = glob.glob(root_dir+'/plugin_*') # ExtMods will store the retrieved module ExtMods = {} sys.path.insert(0, root_dir) for Ext in Extensions: if os.path.exists(Ext+'/extend.py'): sys.path.insert(0, Ext) tmp = importlib.import_module('extend') # bind some id to the found module ExtMods[tmp.PluginId()] = tmp # on second iteration, tmp is still imported from first plugin # same undesired behavior if following line is uncommented # sys.path.remove(Ext) return ExtMods </code></pre> <p>The mechanism is used like this in <code>main.py</code></p> <pre class="lang-py prettyprint-override"><code>... #loading all plugins found in root_dir PluginManager = LoadPlugin() ... # run DoSomething for plugin &lt;some ID&gt; PlunginManager[&lt;some ID&gt;].DoSomething() ... 
</code></pre> <p>The Python 3.5 solution proposed <a href="https://stackoverflow.com/a/67692/21691539">here</a> works, but why doesn't my implementation work (<code>import_module</code> always loads the same one)?</p> <p>Optional: is there a better design for managing plugins in Python?</p> <p>NB: I also encountered <a href="https://stackoverflow.com/questions/76412385/import-module-on-relative-path-difference-between-python-3-5-and-3-7">issues with respect to the Python version</a> but couldn't find a definitive answer.<br> Ideally the solution should work with any Python 3, whatever the minor version.</p> <p>Thanks</p>
<python><python-import>
2023-11-21 11:29:08
0
3,479
Oersted
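The behaviour described above is module caching: on the second iteration `importlib.import_module('extend')` finds `sys.modules['extend']` and returns it without ever consulting the modified `sys.path`. Loading each file under a unique module name avoids the cache collision; a sketch (the helper and the throwaway plugin files are mine, not from the question):

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

def load_plugin(path, module_name):
    # a unique name per plugin keeps sys.modules from handing back
    # a previously imported 'extend'
    spec = importlib.util.spec_from_file_location(module_name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module
    spec.loader.exec_module(module)
    return module

# demo with two throwaway plugins mimicking the question's layout
root = Path(tempfile.mkdtemp())
for name in ('mod1', 'mod2'):
    plugin_dir = root / f'plugin_{name}'
    plugin_dir.mkdir()
    (plugin_dir / 'extend.py').write_text(
        f"def PluginId():\n    return '{name}'\n")

mods = {}
for ext in sorted(root.glob('plugin_*')):
    mod = load_plugin(ext / 'extend.py', f'extend_{ext.name}')
    mods[mod.PluginId()] = mod
```

This also removes the need to mutate `sys.path` at all, which sidesteps the ordering bugs the question mentions.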
77,522,434
4,667,839
LLM SQL Agents and few shot learning
<p>I want to use sql Agent together with few shot examples. I followed this example: <a href="https://python.langchain.com/docs/use_cases/qa_structured/sql#extending-the-sql-toolkit" rel="nofollow noreferrer">https://python.langchain.com/docs/use_cases/qa_structured/sql#extending-the-sql-toolkit</a></p> <p>However, I want to use VertexAI instead of the OpenAI ones.</p> <p>I changed the example only on 2 places:</p> <pre><code>embeddings = HuggingFaceEmbeddings( model_name='sentence-transformers/all-MiniLM-L6-v2') </code></pre> <pre><code>agent = create_sql_agent( llm=llm, toolkit=toolkit, verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, extra_tools=custom_tool_list, suffix=custom_suffix, ) </code></pre> <p>I encounter the following error when running the create_sql_agent():</p> <pre><code>__root__ Invalid prompt schema; check for mismatched or missing input parameters. {'agent_scratchpad', 'input'} (type=value_error) ``` </code></pre>
<python><langchain><large-language-model><py-langchain>
2023-11-21 11:25:13
1
399
Userrrrrrrr
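The validation error above says the assembled prompt's variables don't match what the ZeroShot agent supplies at run time (`input` and `agent_scratchpad`). A common culprit is a `custom_suffix` that omits those placeholders. Independent of langchain, you can check a template's variables up front; the suffix below is an illustrative assumption, not the library's canonical one:

```python
import string

custom_suffix = """Here are some examples of similar questions and their SQL:
{few_shot_examples}

Begin!

Question: {input}
{agent_scratchpad}"""

def template_variables(template: str) -> set:
    # names referenced by str.format-style placeholders
    return {name for _, name, _, _ in string.Formatter().parse(template) if name}

missing = {'input', 'agent_scratchpad'} - template_variables(custom_suffix)
```

If `missing` is non-empty for your suffix, the agent constructor will raise exactly the `Invalid prompt schema` error quoted in the question, regardless of whether the LLM is OpenAI or VertexAI.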
77,522,380
2,725,810
Some invocations of a Lambda function take much longer despite a warm start (not a duplicate)
<p><strong>Note:</strong> the question has been previously mistakenly marked as a duplicate of <a href="https://stackoverflow.com/questions/1685221/accurately-measure-time-python-function-takes">this one</a>. Please note that my question is about AWS Lambda and the issue is not how to measure execution time (my code measures time very accurately with millisecond precision). If you do think that the question is a duplicate, please leave a comment first, as it is difficult to reinstate a questions once closed.</p> <p>Here is my AWS Lambda function:</p> <pre class="lang-py prettyprint-override"><code>import json import time import pickle def ms_now(): return int(time.time_ns() / 1000000) class Timer(): def __init__(self): self.start = ms_now() def stop(self): return ms_now() - self.start timer = Timer() from punctuators.models import PunctCapSegModelONNX model_name = &quot;pcs_en&quot; model_sentences = PunctCapSegModelONNX.from_pretrained(model_name) with open('model_embeddings.pkl', 'rb') as file: model_embeddings = pickle.load(file) cold_start = True init_time = timer.stop() print(&quot;Time to initialize:&quot;, init_time, flush=True) def segment_text(texts): sentences = model_sentences.infer(texts) sentences = [ [(s, len(model_embeddings.tokenizer.encode(s))) for s in el] for el in sentences] return sentences def get_embeddings(texts): return model_embeddings.encode(texts) def compute(body): command = body['command'] if command == 'ping': return 'success' texts = body['texts'] if command == 'embeddings': result = get_embeddings(texts) return [el.tolist() for el in result] if command == 'sentences': return segment_text(texts) assert(False) def lambda_handler(event, context): global cold_start global init_time stats = {'cold_start': cold_start, 'init_time': init_time} cold_start = False init_time = 0 stats['started'] = ms_now() result = compute(event['body']) stats['finished'] = ms_now() return { 'statusCode': 200, 'headers': { 'Content-Type': 'application/json' }, 
'body': {'result': result, 'stats': stats} } </code></pre> <p>This Lambda function, along with the packages and the models (so that those don't need to be downloaded), is deployed as a docker image.</p> <p>In addition to the timestamps of when the function started and finished (not including the cold start initialization), the response contains the information about whether it was a cold start and how long it took to initialize. I have another function, which invokes this function 15 times in parallel.</p> <p>The anomaly happens with the first of these parallel invocations. Usually, it takes ~300ms (computed as the difference of the timestamps in the response). But sometimes it takes 900ms and longer (with the same input).</p> <p>This does not happen due to a cold start, since I have <code>init_time==0</code> in the response (when a cold start occurs, <code>init_time&gt;6000</code>). It happens both with <code>command == 'embeddings'</code> and with <code>command == 'sentences'</code>.</p> <p>What could be the explanation for these spikes? With a warm start, what can cause a Lambda function to take much longer than usual?</p> <p>P.S. The <a href="https://repost.aws/questions/QUI-eP97dOQ-m6USeb4NP30g/some-invocations-of-a-lambda-function-take-much-longer-despite-a-warm-start" rel="nofollow noreferrer">question at re:Post</a></p>
<python><amazon-web-services><aws-lambda><huggingface-transformers>
2023-11-21 11:16:37
0
8,211
AlwaysLearning
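One way to narrow down spikes like the ones described above is to tag each execution environment: module scope runs once per container (like the model loading in the question), so returning a per-container id lets you see whether the 900 ms invocations cluster on particular sandboxes — for instance the first warm hit after a parallel burst forces scale-up — or are spread evenly. A minimal sketch with a placeholder workload (`do_work` stands in for the real inference):

```python
import time
import uuid

# module scope: runs once per container, so this id identifies the sandbox
INSTANCE_ID = str(uuid.uuid4())

def ms_now():
    return time.time_ns() // 1_000_000

def do_work(event):
    # stand-in for the real computation, e.g. compute(event['body'])
    return event

def lambda_handler(event, context):
    start = ms_now()
    result = do_work(event)
    return {
        'statusCode': 200,
        'body': {
            'result': result,
            'instance': INSTANCE_ID,   # correlate slow calls with containers
            'took_ms': ms_now() - start,
        },
    }
```

If the slow invocations always carry fresh instance ids, they are effectively paying hidden scale-up cost even though `init_time` reports 0 for the handler-visible part.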
77,522,301
2,749,397
How to change sympy plot properties in Jupyter with matplotlib methods
<p>The following code in a script works as expected,</p> <pre><code>from sympy import * x = symbols('x') p = plot(x, x*(1-x), (x, 0, 1)) ax = p._backend.ax[0] ax.set_yticks((0, .05, .25)) p._backend.fig.savefig('Figure_1.png') </code></pre> <p><a href="https://i.sstatic.net/1zg1x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1zg1x.png" alt="enter image description here" /></a></p> <p>but when I copy the code above in a notebook cell, this is what I get</p> <p><a href="https://i.sstatic.net/1kncP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1kncP.png" alt="enter image description here" /></a></p> <p>If it is possible to manipulate the (hidden) attributes of a Sympy's <code>plot</code> when one works in a Jupyter notebook, how can it be done?</p>
<python><matplotlib><jupyter-notebook><sympy><jupyter-lab>
2023-11-21 11:07:24
2
25,436
gboffi
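In a notebook, the inline backend renders the figure the moment `plot(...)` runs, before the tick change. Deferring rendering with `show=False` usually works; a sketch follows (`_backend` is a private attribute, as the question itself notes, so the exact access may vary between SymPy versions):

```python
import matplotlib
matplotlib.use("Agg")  # headless here; in Jupyter keep your inline backend
from sympy import symbols, plot

x = symbols('x')
p = plot(x, x*(1 - x), (x, 0, 1), show=False)  # build, but don't render yet
p.show()                                        # creates the matplotlib backend
ax = p._backend.ax[0]                           # private API, as in the question
ax.set_yticks((0, .05, .25))
p._backend.fig.savefig('Figure_1.png')
```

In a notebook cell, put the customisation before displaying, and end the cell with `p._backend.fig` so the modified figure (not the auto-rendered one) is what gets shown.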
77,522,268
5,539,782
creates a new pandas column named "list_of_all_next_high" and populates it with lists of all next high values for each row
<p>I have a dataframe for stock prices:</p> <pre><code> dt open close high low 0 2023-09-15 07:51:00 1.06521 1.06521 1.06525 1.06521 1 2023-09-15 07:52:00 1.06521 1.06520 1.06522 1.06520 2 2023-09-15 07:53:00 1.06520 1.06523 1.06524 1.06520 3 2023-09-15 07:54:00 1.06521 1.06532 1.06534 1.06521 4 2023-09-15 07:55:00 1.06533 1.06529 1.06534 1.06523 5 2023-09-15 07:56:00 1.06530 1.06529 1.06532 1.06525 </code></pre> <p>I want to creates a new column named &quot;list_of_all_next_high&quot; and populates it with lists of all next high values for each row. the result will be like this:</p> <pre><code> dt open close high low list_of_all_next_high 0 2023-09-15 07:51:00 1.06521 1.06521 1.06525 1.06521 [1.06522, 1.06524, 1.06534, 1.06534, 1.06532] 1 2023-09-15 07:52:00 1.06521 1.06520 1.06522 1.06520 [1.06524, 1.06534, 1.06534, 1.06532] 2 2023-09-15 07:53:00 1.06520 1.06523 1.06524 1.06520 [1.06534, 1.06534, 1.06532] 3 2023-09-15 07:54:00 1.06521 1.06532 1.06534 1.06521 [1.06534, 1.06532] 4 2023-09-15 07:55:00 1.06533 1.06529 1.06534 1.06523 [1.06532] 5 2023-09-15 07:56:00 1.06530 1.06529 1.06532 1.06525 [] </code></pre> <p>I use this code to obtain the result, but it take very long time, the dataframe is 1 million rows.</p> <pre><code>df['list_of_all_next_high'] = [df['high'].iloc[i+1:].tolist() for i in range(len(df)-1)] + [[]] </code></pre> <p>Any other optimized method that can handle a large dataset (with more than 1million rows) to avoid consuming excessive resources?</p>
<python><pandas><dataframe>
2023-11-21 11:03:01
2
547
Khaled Koubaa
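If the downstream code can consume arrays instead of Python lists, one option is to slice a single numpy buffer: each slice is a view into the same memory, so the extra cost stays O(n) instead of the ~n²/2 Python floats materialised by the question's list comprehension. A sketch on the sample data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'high': [1.06525, 1.06522, 1.06524, 1.06534, 1.06534, 1.06532]})

high = df['high'].to_numpy()
# each element is a view into the shared buffer, not a copy
df['list_of_all_next_high'] = [high[i + 1:] for i in range(len(df))]
```

Where an actual `list` is needed, convert a single row on demand with `row.tolist()`; for a million rows, though, it is worth asking whether the downstream computation (e.g. "max of all future highs") can be expressed directly with reversed cumulative operations instead of storing the suffixes at all.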
77,522,191
1,581,090
How to create a 3D scatter plot with plotly with all markers individually colored?
<p>Following the suggestion <a href="https://community.plotly.com/t/specifying-a-color-for-each-point-in-a-3d-scatter-plot/12652" rel="nofollow noreferrer">HERE</a> I have tried the following code to create a Scatter3D plot with plotly in a jupyter notebook so each marker is colored individually, like you can do with matplotlib and something like</p> <pre><code>plt.scatter(x,y, c=z) </code></pre> <p>Here is the code:</p> <pre><code>cmap = matplotlib.colormaps['brg'] param = &quot;elevation_deg&quot; min_value = min(vector) max_value = max(vector) range_ = max_value - min_value colors = [] for value in vector: rgba = cmap((value-min_value)/range_) colors.append(f&quot;rgb({int(255*rgba[0])},{int(255*rgba[1])},{int(255*rgba[2])})&quot;) # Configure the trace. trace = go.Scatter3d( x=x, y=y, z=z, mode='markers', marker=dict(colors, size=10) ) </code></pre> <p>But I get the error</p> <pre><code>ValueError: dictionary update sequence element #0 has length 13; 2 is required </code></pre> <p>I also had a look at the documentation for <a href="https://plotly.github.io/plotly.py-docs/generated/plotly.graph_objects.Scatter3d.html" rel="nofollow noreferrer">Scatter3D</a>, but I am totally lost in this page, it is totally confusing.</p> <p>So maybe there is a more way way to do so? And also how to plot the colorbar, as you can do with matplotlib with <code>plt.colorbar()</code>?</p>
<python><plotly><scatter3d>
2023-11-21 10:49:45
0
45,023
Alex
77,522,184
3,575,623
Scale colour over multiple graphs
<p>I have this function that plots data along two axes, and colours them according to their distance to the diagonal:</p> <pre><code>def scatter_diagdistance(x, y) : z = abs(y-x) fig, ax = plt.subplots(dpi=200) ax2 = ax.twinx() ##Create secondary axis ax2.set_yticks([]) ##No ticks for the secondary axis sc = ax.scatter(x, y, c=z, s=50, edgecolor='none') ax2.set_ylabel('Distance from diagonal') ##Label for secondary axis ax.plot([0, 1], [0, 1], '-', c=&quot;red&quot;, transform=ax.transAxes) #Line from 0 to 1 cbar = fig.colorbar(sc) ax.set_xlabel('Ref mutation frequencies') ax.set_ylabel('Decoded mutation frequencies') return fig, z </code></pre> <p>The figures it creates look like this:</p> <p><a href="https://i.sstatic.net/FcGdI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FcGdI.png" alt="scatter plot created by the above code" /></a></p> <p>For the next part of my project, I'd like to compare these graphs between a few different setups. I'd like to use the same colour scale for the whole multi-panel figure, so as to show that one method sticks closer to the diagonal than others. I'd also like the associated colour bar, which doesn't need to be included in the figure output, as I'll be editing the panels together afterwards.</p> <p>How do I combine all the <code>z</code> values together, establish a colour scale based on that, then separate them back again for each plot?</p>
<python><matplotlib><colors><colorbar>
2023-11-21 10:48:00
1
507
Whitehot
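matplotlib handles this case directly: compute the global min/max of all the `z` arrays once, then pass them as `vmin`/`vmax` (or as a shared `Normalize`) to every `scatter` call, and each colorbar will show the identical scale. A sketch with random stand-in data in place of the question's frequencies:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
datasets = [(rng.random(30), rng.random(30)) for _ in range(3)]
zs = [np.abs(y - x) for x, y in datasets]

vmin = min(z.min() for z in zs)
vmax = max(z.max() for z in zs)   # one scale shared by every panel

scatters = []
for (x, y), z in zip(datasets, zs):
    fig, ax = plt.subplots()
    sc = ax.scatter(x, y, c=z, vmin=vmin, vmax=vmax, s=50, edgecolor='none')
    fig.colorbar(sc, label='Distance from diagonal')
    scatters.append(sc)
```

In the question's function this just means accepting `vmin`/`vmax` as parameters computed over all setups beforehand; `norm=plt.Normalize(vmin, vmax)` is an equivalent alternative.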
77,522,054
16,405,935
Count unique value with prioritize value in pandas
<p>I have a simple data frame as below:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'CUS_NO': ['900636229', '900636229', '900636080', '900636080', '900636052', '900636052', '900636053', '900636054', '900636055', '900636056'], 'indicator': ['both', 'left_only', 'both', 'left_only', 'both', 'left_only', 'both', 'left_only', 'both', 'left_only'], 'Nationality': ['VN', 'VN', 'KR', 'KR', 'VN', 'VN', 'KR', 'VN', 'KR', 'VN']}) CUS_NO indicator Nationality 0 900636229 both VN 1 900636229 left_only VN 2 900636080 both KR 3 900636080 left_only KR 4 900636052 both VN 5 900636052 left_only VN 6 900636053 both KR 7 900636054 left_only VN 8 900636055 both KR 9 900636056 left_only VN </code></pre> <p>I want to count unique value of <code>CUS_NO</code> so I used <code>pd.Series.nunique</code> by below code:</p> <pre><code>df2 = pd.pivot_table(df, values='CUS_NO', index='Nationality', columns='indicator', aggfunc=pd.Series.nunique, margins=True).reset_index() df2 </code></pre> <p>And here is the result:</p> <pre><code>indicator Nationality both left_only All 0 KR 3 1 3 1 VN 2 4 4 2 All 5 5 7 </code></pre> <p>But I my expectation is if <code>CUS_NO</code> was same and indicator was different, I just need to count <code>both</code> indicator. So below is my expected Output:</p> <pre><code>indicator Nationality both left_only All 0 KR 3 0 3 1 VN 2 2 4 2 All 5 2 7 </code></pre> <p>Thank you.</p>
<python><pandas><pivot>
2023-11-21 10:26:48
1
1,793
hoa tran
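One way to express "'both' wins over 'left_only' for the same customer" is to dedupe before counting: since `'both'` sorts alphabetically before `'left_only'`, a stable sort on the indicator followed by `drop_duplicates('CUS_NO')` keeps the `'both'` row. On the question's data this reproduces the expected table:

```python
import pandas as pd

df = pd.DataFrame({
    'CUS_NO': ['900636229', '900636229', '900636080', '900636080', '900636052',
               '900636052', '900636053', '900636054', '900636055', '900636056'],
    'indicator': ['both', 'left_only', 'both', 'left_only', 'both',
                  'left_only', 'both', 'left_only', 'both', 'left_only'],
    'Nationality': ['VN', 'VN', 'KR', 'KR', 'VN', 'VN', 'KR', 'VN', 'KR', 'VN'],
})

# 'both' < 'left_only' lexicographically, so it survives drop_duplicates
dedup = df.sort_values('indicator').drop_duplicates('CUS_NO')
out = pd.crosstab(dedup['Nationality'], dedup['indicator'], margins=True)
```

If the priority order ever stops matching alphabetical order, map `indicator` to an ordered categorical (or an integer rank) first and sort on that instead.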
77,521,745
13,518,907
Map_Reduce prompt with RetrievalQA Chain
<p>in the code below you see how I built my RAG model with the ParentDocumentRetriever from Langchain with Memory. At the moment I am using the RetrievalQA-Chain with the default chain_type=&quot;stuff&quot;. However I want to try different chain types like &quot;map_reduce&quot;. But when replacing chain_type=&quot;map_reduce&quot; and creating the Retrieval QA chain, I get the following Error:</p> <pre><code>ValidationError: 1 validation error for RefineDocumentsChain prompt extra fields not permitted (type=value_error.extra) </code></pre> <p>I am assuming that my Prompt is not built correctly, but how do I have to change it to make it work? I saw that two different prompts are required for &quot;map_reduce&quot;: &quot;map_prompt&quot; and &quot;combine_prompt&quot;. But I am not sure how I have to change the prompts for a typical RAG retrieval task, where a user can interact with the Model and ask the model to answer questions for him. Here is my code:</p> <pre><code>from langchain.chains import RetrievalQA from langchain.memory import ConversationBufferMemory from langchain.prompts import PromptTemplate from langchain.callbacks.manager import CallbackManager from langchain.document_loaders import PyPDFLoader, DirectoryLoader loader = DirectoryLoader(&quot;MY_PATH_TO_PDF_FILES&quot;, glob='*.pdf', loader_cls=PyPDFLoader) documents = loader.load() # This text splitter is used to create the parent documents - The big chunks parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=400) # This text splitter is used to create the child documents - The small chunks child_splitter = RecursiveCharacterTextSplitter(chunk_size=400) # The vectorstore to use to index the child chunks from chromadb.errors import InvalidDimensionException try: vectorstore = Chroma(collection_name=&quot;split_parents&quot;, embedding_function=bge_embeddings, persist_directory=&quot;chroma_db&quot;) except InvalidDimensionException: Chroma().delete_collection() 
vectorstore = Chroma(collection_name=&quot;split_parents&quot;, embedding_function=bge_embeddings, persist_directory=&quot;chroma_db&quot;) # The storage layer for the parent documents store = InMemoryStore() big_chunks_retriever = ParentDocumentRetriever( vectorstore=vectorstore, docstore=store, child_splitter=child_splitter, parent_splitter=parent_splitter, ) big_chunks_retriever.add_documents(documents) qa_template = &quot;&quot;&quot; Use the following information from the context (separated with &lt;ctx&gt;&lt;/ctx&gt;) to answer the question. Answer in German only, because the user does not understand English! \ If you don't know the answer, answer with &quot;Unfortunately, I don't have the information.&quot; \ If you don't find enough information below, also answer with &quot;Unfortunately, I don't have the information.&quot; \ ------ &lt;ctx&gt; {context} &lt;/ctx&gt; ------ &lt;hs&gt; {chat_history} &lt;/hs&gt; ------ {query} Answer: &quot;&quot;&quot; prompt = PromptTemplate(template=qa_template, input_variables=['context','history', 'question']) chain_type_kwargs={ &quot;verbose&quot;: True, &quot;prompt&quot;: prompt, &quot;memory&quot;: ConversationSummaryMemory( llm=build_llm(), memory_key=&quot;history&quot;, input_key=&quot;question&quot;, return_messages=True)} refine = RetrievalQA.from_chain_type(llm=build_llm(), chain_type=&quot;map_reduce&quot;, return_source_documents=True, chain_type_kwargs=chain_type_kwargs, retriever=big_chunks_retriever, verbose=True) query = &quot;Hi, I am Max, can you help me??&quot; refine(query) </code></pre>
<python><information-retrieval><langchain><nlp-question-answering>
2023-11-21 09:40:33
2
565
Maxl Gemeinderat
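With `chain_type="map_reduce"`, RetrievalQA builds a map-reduce documents chain that takes two prompts — one applied to each retrieved chunk and one that merges the per-chunk extracts — instead of the single `prompt` accepted by `"stuff"`, hence the "extra fields not permitted" error. Independent of langchain, the two templates might look like this (the placeholder names `{context}`, `{question}`, `{summaries}` follow the usual map_reduce QA convention; verify against your langchain version, and note they would typically be passed via `chain_type_kwargs` as `question_prompt` and `combine_prompt` wrapped in `PromptTemplate`s):

```python
map_template = """Use the following piece of context to extract any text
relevant to the question. Return the relevant text verbatim.
{context}
Question: {question}
Relevant text, if any:"""

combine_template = """Given the extracted parts of the documents and a question,
write a final answer in German. If the parts contain no answer, reply
"Unfortunately, I don't have the information."
QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER:"""
```

The memory/history placeholder from the `"stuff"` prompt cannot simply be copied into both templates: it belongs only where the chain actually injects the history, which for map_reduce is normally the combine step.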
77,521,686
22,466,650
How to make overlapping windows of weeks based on the nearest available dates in both sides?
<p>Sorry guys for the title but it is really what I'm trying to do.</p> <p>Here is a table to explain that more. The bold lines makes the years and the the thin ones makes the weeks.</p> <p><a href="https://i.sstatic.net/dhO22.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dhO22.png" alt="enter image description here" /></a></p> <p>For the expected output. It really doesn't matter its format. All I need is that if I ask for the dates of a pair <code>YEAR/WEEK</code>, I get the corresponding window of dates.</p> <p>For example, if I do <code>some_window_function(2022, 5)</code> I should have the result below (it correspond to the <strong>RED WINDOW</strong>)</p> <pre><code> DATE YEAR WEEK 2020 30 Friday, July 24, 2020 2022 5 Wednesday, February 2, 2022 5 Thursday, February 3, 2022 5 Friday, February 4, 2022 7 Tuesday, February 15, 2022 </code></pre> <p>And for example, if I do <code>some_window_function(2022, 7)</code> I should have the result below (it correspond to the <strong>BLUE WINDOW</strong>)</p> <pre><code> DATE YEAR WEEK 2022 5 Friday, February 4, 2022 2022 7 Tuesday, February 15, 2022 7 Wednesday, February 16, 2022 7 Thursday, February 17, 2022 2023 44 Tuesday, October 31, 2023 </code></pre> <p>The dataframe used is this :</p> <pre><code>df = pd.DataFrame({'YEAR': [2020, 2020, 2020, 2020, 2020, 2020, 2020, 2022, 2022, 2022, 2022, 2022, 2022, 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2023], 'WEEK': [29, 29, 29, 30, 30, 30, 30, 5, 5, 5, 7, 7, 7, 44, 44, 44, 44, 45, 45, 45, 46, 46, 46, 46], 'DATE': ['Monday, July 13, 2020', 'Thursday, July 16, 2020', 'Friday, July 17, 2020', 'Monday, July 20, 2020', 'Tuesday, July 21, 2020', 'Thursday, July 23, 2020', 'Friday, July 24, 2020', 'Wednesday, February 2, 2022', 'Thursday, February 3, 2022', 'Friday, February 4, 2022', 'Tuesday, February 15, 2022', 'Wednesday, February 16, 2022', 'Thursday, February 17, 2022', 'Tuesday, October 31, 2023', 'Wednesday, November 02, 2023', 'Friday, 
November 03, 2023', 'Sunday, November 05, 2023', 'Monday, November 06, 2023', 'Tuesday, November 07, 2023', 'Wednesday, November 08, 2023', 'Monday, November 13, 2023', 'Tuesday, November 14, 2023', 'Wednesday, November 15, 2023', 'Thursday, November 16, 2023']}) </code></pre> <p>I made the code below but it gives a similar dataframe to my input :</p> <pre><code>def make_windows(group): if group.name == df.loc[df['YEAR'] == group.name, 'WEEK'].min(): group.at[group.index[-1]+1, 'DATE'] = df.at[group.index[-1]+1, 'DATE'] return group.ffill() elif group.name &lt; df.loc[df['YEAR']== group.name, 'WEEK'].max(): group.at[group.index[-1]+1, 'DATE'] = df.at[group.index[-1]+1, 'DATE'] return group.iloc[1:].ffill() else: return group.iloc[1:].ffill() results = df.groupby('YEAR').apply(make_windows) </code></pre>
<python><pandas><dataframe>
2023-11-21 09:30:38
2
1,085
VERBOSE
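A direct way to build the windows this question asks for: enumerate the (YEAR, WEEK) groups in order, take the whole requested group, plus the last row of the previous group and the first row of the next. A sketch on the question's data (the function name mirrors the question's placeholder):

```python
import pandas as pd

df = pd.DataFrame({
    'YEAR': [2020]*7 + [2022]*6 + [2023]*11,
    'WEEK': [29, 29, 29, 30, 30, 30, 30, 5, 5, 5, 7, 7, 7,
             44, 44, 44, 44, 45, 45, 45, 46, 46, 46, 46],
    'DATE': ['Monday, July 13, 2020', 'Thursday, July 16, 2020',
             'Friday, July 17, 2020', 'Monday, July 20, 2020',
             'Tuesday, July 21, 2020', 'Thursday, July 23, 2020',
             'Friday, July 24, 2020', 'Wednesday, February 2, 2022',
             'Thursday, February 3, 2022', 'Friday, February 4, 2022',
             'Tuesday, February 15, 2022', 'Wednesday, February 16, 2022',
             'Thursday, February 17, 2022', 'Tuesday, October 31, 2023',
             'Wednesday, November 02, 2023', 'Friday, November 03, 2023',
             'Sunday, November 05, 2023', 'Monday, November 06, 2023',
             'Tuesday, November 07, 2023', 'Wednesday, November 08, 2023',
             'Monday, November 13, 2023', 'Tuesday, November 14, 2023',
             'Wednesday, November 15, 2023', 'Thursday, November 16, 2023'],
})

def some_window_function(data, year, week):
    # ordered, de-duplicated list of (YEAR, WEEK) groups as they appear
    keys = list(dict.fromkeys(zip(data['YEAR'], data['WEEK'])))
    pos = keys.index((year, week))
    groups = [data[(data['YEAR'] == y) & (data['WEEK'] == w)] for y, w in keys]
    parts = [groups[pos]]
    if pos > 0:
        parts.insert(0, groups[pos - 1].tail(1))   # nearest earlier date
    if pos + 1 < len(groups):
        parts.append(groups[pos + 1].head(1))      # nearest later date
    return pd.concat(parts)
```

This assumes the frame is already sorted chronologically, as the sample data is; sort by a parsed date column first otherwise.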