QuestionId
int64
74.8M
79.8M
UserId
int64
56
29.4M
QuestionTitle
stringlengths
15
150
QuestionBody
stringlengths
40
40.3k
Tags
stringlengths
8
101
CreationDate
stringdate
2022-12-10 09:42:47
2025-11-01 19:08:18
AnswerCount
int64
0
44
UserExpertiseLevel
int64
301
888k
UserDisplayName
stringlengths
3
30
78,648,819
20,920,790
How to change Airflow docker image?
<p>I have a working Airflow image, but I need to add a new Python library to it.</p> <p>My compose file is &quot;/opt/beget/airflow/docker-compose.yml&quot;. <a href="https://i.sstatic.net/ZLfzdJ1m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZLfzdJ1m.png" alt="enter image description here" /></a></p> <p>I created a Dockerfile in the same folder with this content:</p> <pre><code>FROM apache/airflow RUN pip install clickhouse-connect </code></pre> <p>So I need to modify the current apache/airflow image. Here are my images (<code>docker images</code>): <a href="https://i.sstatic.net/lziUEw9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lziUEw9F.png" alt="enter image description here" /></a></p> <p>All I need is to edit the current Airflow image.</p> <p>P.S. I am using:</p> <blockquote> <p>Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-107-generic x86_64)</p> <p>Docker version 26.1.3, build b72abbb</p> </blockquote>
<python><docker><airflow>
2024-06-20 17:05:31
1
402
John Doe
78,648,677
10,164,669
How to apply the color of a styled `Labelframe` to its label text also
<p>The following code:</p> <pre><code>from tkinter import * from tkinter.ttk import * root = Tk() root['bg'] = 'yellow' root.title(&quot;Styled Labelframe&quot;) root.geometry(&quot;250x150&quot;) root.columnconfigure(0, weight=1) root.rowconfigure(0, weight=1) Style().configure('my.TLabelframe', background='red') frame = Labelframe(root, style='my.TLabelframe', text=&quot; Testing styled Labelframe... &quot;) frame.grid(column=0, row=0, sticky=(N,W,E,S), padx=20, pady=20) root.mainloop() </code></pre> <p>displays this:</p> <p><a href="https://i.sstatic.net/e8fSKyRv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8fSKyRv.png" alt="enter image description here" /></a></p> <p>I want the text of the <code>Labelframe</code> to have the same background color (red) as the rest of the <code>Labelframe</code>. I also want to center it, instead of aligning to left.</p> <p>How can I accomplish this?</p> <p>Since I am a beginner, any other comments on my coding are welcome also.</p>
<python><tkinter>
2024-06-20 16:29:38
1
607
FedKad
78,648,595
7,378,537
What is the benefit of letting FastAPI handle SQLAlchemy's session vs the provided context manager?
<p>I am doing a POC on FastAPI with a Controller-Service-Repository architecture. Service and Repository are classes, with dependency injection done in the <code>__init__</code> function like so:</p> <p>(This is the architecture, above my paygrade)</p> <pre><code>Class MyService(IService): def __init__(self, repository): self.repository = MyRepository() # Dependency Injection ... Class MyRepository(IRepository): def __init__(self): ... # May create Engine and Session </code></pre> <p>I've been following <a href="https://fastapi.tiangolo.com/tutorial/sql-databases/" rel="nofollow noreferrer">FastApi's documentation on Relational Database</a>, and this block of code is suggested:</p> <pre><code># Dependency def get_db(): db = SessionLocal() try: yield db finally: db.close() </code></pre> <p>This creates a fresh sqlalchemy session when called and closes the session when the request ends, to be injected to the path operator function so that we get a fresh session per request.</p> <p>According to <a href="https://stackoverflow.com/a/12223711/7378537">this answer</a> by the creator of SQLAlchemy:</p> <blockquote> <p>[T]he question is, what's the difference between making a new Session() at various points versus just using one all the way through. The answer, not very much.</p> </blockquote> <p>and</p> <blockquote> <p>This practice (one session per request) ensures that the new request begins &quot;clean&quot;. If some objects from the previous request haven't been garbage collected yet, and if maybe you've turned off &quot;expire_on_commit&quot;, maybe some state from the previous request is still hanging around, and that state might even be pretty old. 
If you're careful to leave expire_on_commit turned on and to definitely call commit() or rollback() at request end, then it's fine, but if you start with a brand new Session, then there's not even any question that you're starting clean.</p> </blockquote> <p>Per my understanding, FastAPI's <code>one-session-per-request</code> is not necessary if we use SQLAlchemy's defaults and context managers, like so:</p> <pre><code># create session and add objects with Session(engine) as session: session.add(some_object) session.add(some_other_object) session.commit() </code></pre> <p>By using FastAPI's <code>one-session-per-request</code>, a new <code>MyService</code> instance and/or <code>MyRepository</code> instance is created for each request. <a href="https://fastapi.tiangolo.com/tutorial/dependencies/sub-dependencies/" rel="nofollow noreferrer">FastApi docs</a> says that dependencies are &quot;cached&quot; and reused only in the same request. However, if we stick with SQLAlchemy's doc, we can make my <code>MyService</code> and <code>MyRepository</code> singletons.</p> <p><strong>What is the benefit of letting FastAPI handle SQLAlchemy's session vs the provided context manager?</strong></p> <p>Is the overheads of creating instances per request negligible? Is there something with asyncio that I'm unaware of? Something to do with threads or process, or FastAPI's implementation?</p>
<python><sqlalchemy><fastapi>
2024-06-20 16:10:15
1
755
Yeile
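One way to see the trade-off asked about above is to strip the pattern down to plain Python. The sketch below imitates what FastAPI's generator dependency does per request, using a dummy `Session` class (the names `Session`, `get_db`, and `handle_request` mirror the question but are stand-ins, not SQLAlchemy or FastAPI APIs): the `finally` block guarantees the session is closed when the request ends, regardless of what the handler did, which is the deterministic-cleanup property the one-session-per-request style buys you.

```python
# Minimal sketch of FastAPI's generator-dependency lifecycle, with a dummy
# Session so the mechanics are visible without SQLAlchemy installed.

class Session:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def get_db():
    # FastAPI advances this generator before the handler runs and resumes it
    # (triggering `finally`) after the response is sent.
    db = Session()
    try:
        yield db
    finally:
        db.close()

def handle_request():
    # Simulate what the dependency machinery does for one request.
    gen = get_db()
    db = next(gen)       # a fresh session for this request
    try:
        return db        # the handler body would use db here
    finally:
        gen.close()      # runs the dependency's finally block

session = handle_request()
print(session.closed)  # True: closed as soon as the request ends
```

The overhead of creating one small object per request is normally negligible next to the network and database round-trips; the benefit is that no request can observe state leaked from a previous one even if a handler forgets to commit or roll back.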
78,648,574
4,126,652
Python singleton pattern with type hints
<p>I have been trying to use the Singleton pattern in Python with proper type hints.</p> <p>Here is my attempt</p> <pre class="lang-py prettyprint-override"><code>from typing import Any, TypeVar, override T = TypeVar('T', bound='SingletonBase') class Singleton(type): _instances: dict[type[T], T] = {} # type variable T has no meaning in this context @override def __call__(cls: type[T], *args: Any, **kwargs: dict[str, Any]) -&gt; T: if cls not in cls._instances: # Access to generic instance variable through class is ambiguous instance = super().__call__(*args, **kwargs) cls._instances[cls] = instance # Access to generic instance variable through class is ambiguous return cls._instances[cls] # Expression of type T is incompatible with return type T@__call__ class SingletonBase(metaclass=Singleton): pass </code></pre> <p>I get a lot of complaints from the type checker. (I have annotate my code with the complaints as comments)</p> <p>I found this answer <a href="https://stackoverflow.com/a/43024667/4126652">Type hinting for generic singleton?</a></p> <p>But that only handles annotations for the return type. I would like to understand what's going on here and learn how to implement the Singleton pattern with proper type hints.</p>
<python><python-typing><pyright>
2024-06-20 16:03:51
1
3,263
Vikash Balasubramanian
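The core of the type-checker complaints above is that a `TypeVar` declared at class scope of a metaclass has nothing to bind to, so annotations like `dict[type[T], T]` are meaningless there. A common workaround, sketched below under that assumption, is to type the cache as `dict[type, Any]` and accept one `Any` at the single retrieval point, keeping the rest of the code fully typed:

```python
from typing import Any, ClassVar

class Singleton(type):
    # Keyed by the concrete class. Typing the values as Any at this one spot
    # sidesteps the "T has no meaning in this context" diagnostic.
    _instances: ClassVar[dict[type, Any]] = {}

    def __call__(cls, *args: Any, **kwargs: Any) -> Any:
        if cls not in Singleton._instances:
            Singleton._instances[cls] = super().__call__(*args, **kwargs)
        return Singleton._instances[cls]

class SingletonBase(metaclass=Singleton):
    pass

class Config(SingletonBase):
    def __init__(self) -> None:
        self.value = 42

a = Config()
b = Config()
print(a is b)  # True: __init__ ran once, both names refer to one instance
```

Callers still get the right type because `Config()` is inferred as `Config` from the class itself; only the metaclass internals fall back to `Any`, which is the compromise the linked answer makes as well.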
78,648,564
3,048,243
How to Groupby and assign Series Values to each row?
<p>I have the following data-frame (read from a csv file):</p> <pre><code> my_df: my_date my_id values key factor 1/1/2024 _One 123 key1 .56 1/7/2024 _One 567 key1 .75 1/14/2024 _One 100 key1 .81 1/14/2024 _One 100 key2 .44 1/1/2024 _Two 150 key3 .91 1/7/2024 _Two 130 key3 .88 1/1/2024 _Three 200 key4 0 1/1/2024 _Three 200 key5 .45 </code></pre> <p>So, there is an overlap of certain dates for two or more keys belonging to the same 'id'. What i want my data-frame to look like is as follows, that is, i need to calculate the allocated values based on the factor weights. Note: the calculated weight is obtained by dividing the factor by the sum of the factors in the overlapping periods. Say,</p> <pre><code> my_df: my_date my_id values key factor weights allocated_values 1/1/2024 _One 123 key1 0.56 1 123 1/7/2024 _One 500 key1 0.75 1 500 1/14/2024 _One 100 key1 0.81 0.648 64.8 1/14/2024 _One 100 key2 0.44 0.352 35.2 1/1/2024 _Two 160 key3 0.91 1 160 1/7/2024 _Two 130 key3 0.88 1 130 1/1/2024 _Three 200 key4 0 0.50 100 1/1/2024 _Three 200 key5 0.45 0.50 100 </code></pre> <p>To achieve the above result, i am doing the following group by:</p> <pre><code> for name, group in my_df.groupby('my_id'): for name1, group1 in group.groupby('key'): factors = group1['factor'] weight = factors['factor']/factors.sum() if factors.sum() != 0 | (factors==0).any() else 1/len(factors) #what i tried- approach1 group['weights'] = weight #doesn't work #what i tried next my_df['weights'] = my_df.update(group) #doesn't work </code></pre> <p>I am so tired now, unable to think any further. So posting it here for any help/guidance.</p> <p>Would much appreciate any hints.</p>
<python><dataframe>
2024-06-20 16:01:23
1
4,202
5122014009
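The nested loops above mutate a copy of each group, which is why the assignments don't stick. A sketch of the usual vectorized route (assuming, per the sample data, that rows sharing `my_id` and `my_date` form one overlapping period) is `groupby(...).transform`, which writes the result back aligned to the original index:

```python
import pandas as pd

# Reconstruction of the sample frame from the question.
df = pd.DataFrame({
    "my_date": ["1/1/2024", "1/7/2024", "1/14/2024", "1/14/2024",
                "1/1/2024", "1/7/2024", "1/1/2024", "1/1/2024"],
    "my_id":   ["_One"] * 4 + ["_Two"] * 2 + ["_Three"] * 2,
    "values":  [123, 500, 100, 100, 160, 130, 200, 200],
    "factor":  [0.56, 0.75, 0.81, 0.44, 0.91, 0.88, 0.0, 0.45],
})

def to_weights(s: pd.Series) -> pd.Series:
    # Equal weights when the group sums to zero or contains a zero factor
    # (the fallback in the question); otherwise normalize by the group sum.
    if s.sum() == 0 or (s == 0).any():
        return pd.Series(1 / len(s), index=s.index)
    return s / s.sum()

# transform returns a Series aligned to df's index, so plain assignment works.
df["weights"] = df.groupby(["my_id", "my_date"])["factor"].transform(to_weights)
df["allocated_values"] = df["values"] * df["weights"]
```

For the 1/14/2024 `_One` rows this yields weights 0.648 and 0.352, and for the `_Three` rows (one zero factor) 0.5 each, matching the desired output.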
78,648,465
6,930,340
Replace column level values with tuple in Pandas dataframe
<pre><code>import pandas as pd arrays = [[&quot;array_0&quot;, &quot;array_0&quot;], [&quot;col1&quot;, &quot;col2&quot;]] col_idx = pd.MultiIndex.from_arrays(arrays, names=[&quot;level_0&quot;, &quot;level_1&quot;]) df = pd.DataFrame(data=[[1,2],[3,4],[5,6]], columns=col_idx) print(df) level_0 array_0 level_1 col1 col2 0 1 2 1 3 4 2 5 6 </code></pre> <p>I would like to replace the string <code>&quot;array_0&quot;</code> in column level <code>level_0</code> with a tuple, e.g. <code>(10,20,30)</code>.</p> <p>Effectively, I want to replace the value of <code>df.columns.levels[0]</code></p> <p><code>Index(['array_0'], dtype='object', name='level_0')</code></p> <p>That's what I am looking for:</p> <pre><code>level_0 (10,20,30) level_1 col1 col2 0 1 2 1 3 4 2 5 6 </code></pre> <p>EDIT: Clearly, this is a toy example. In real life, there might be more than two column levels, and I don't know in advance which column level I want to replace with a tuple.<br /> Also, it might be possible that the column level I want to replace has more than one value, i.e., I have a list of tuples to replace the original values.</p> <p><strong>I am really wondering if there isn't an easy way to replace the content of <code>df.columns.levels[x]</code> with a list of tuples?</strong></p> <p>I was thinking along those lines:</p> <pre><code>df.columns.set_levels([(10,20,30)], level=0]) </code></pre> <p>However, this leads to an error that I don't understand.</p> <pre><code>TypeError: Levels must be list-like </code></pre>
<python><pandas>
2024-06-20 15:41:59
2
5,167
Andi
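A likely cause of the `TypeError` above: when `set_levels` receives a plain list of tuples, pandas tries to "tupleize" it (the same behavior that turns a list of tuples into a `MultiIndex`), and the validation then rejects it. A sketch of a workaround, in the pandas versions I have checked, is to wrap the new values in `pd.Index(..., tupleize_cols=False)` so each tuple stays a single atomic label, and to pass `level` as a list:

```python
import pandas as pd

# Reconstruction of the frame from the question.
arrays = [["array_0", "array_0"], ["col1", "col2"]]
col_idx = pd.MultiIndex.from_arrays(arrays, names=["level_0", "level_1"])
df = pd.DataFrame(data=[[1, 2], [3, 4], [5, 6]], columns=col_idx)

# tupleize_cols=False keeps (10, 20, 30) as one object-dtype label instead of
# letting pandas expand it into a MultiIndex of its own. `level` is given as a
# list so the outer list is read as "one sequence of values per level".
new_level = pd.Index([(10, 20, 30)], tupleize_cols=False)
df.columns = df.columns.set_levels([new_level], level=[0])
```

For several levels, pass a list of such wrapped indexes and the matching list of level positions; this generalizes to the "list of tuples" case mentioned in the edit.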
78,648,443
545,591
How to get python 2d numpy arrays of shape 2x2 from 4 1d arrays
<p>Suppose I have 4 component arrays of size 6 (e.g., denoting 6 spatial locations in a grid):</p> <p><code>Sxx = array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])</code></p> <p><code>Sxy = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])</code></p> <p>Then assume <code>Syx = Sxy</code>, and</p> <p><code>Syy = array([1.1, 2.1, 3.1, 4.1, 5.1, 6.1])</code></p> <p>How do I retrieve the above arrays as 6 2×2 arrays, e.g., as below, in a &quot;pythonic&quot; way?</p> <code> S = array([ [[1.0, 0.1], [0.1, 1.1]], [[2.0, 0.2], [0.2, 2.1]], [[3.0, 0.3], [0.3, 3.1]], .... ]) </code> <p>I could even work with 6 1-d 1x4 arrays as:</p> <code> S = [ [1.0, 0.1, 0.1, 1.1], [2.0, 0.2, 0.2, 2.1], [3.0, 0.3, 0.3, 3.1], .... ] </code> <p>Any ideas are much appreciated!</p>
<python><arrays><numpy>
2024-06-20 15:37:12
1
1,356
squashed.bugaboo
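Both requested layouts above fall out of one `np.stack` call: stacking the four component arrays as columns gives the flat `6×4` form, and a reshape views each row as a `2×2` matrix. A sketch:

```python
import numpy as np

Sxx = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Sxy = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
Syx = Sxy
Syy = np.array([1.1, 2.1, 3.1, 4.1, 5.1, 6.1])

# Flat form: one row [Sxx, Sxy, Syx, Syy] per spatial location -> shape (6, 4).
S_flat = np.stack([Sxx, Sxy, Syx, Syy], axis=-1)

# Matrix form: reinterpret each length-4 row as a 2x2 matrix -> shape (6, 2, 2).
S = S_flat.reshape(-1, 2, 2)

print(S[0])
# [[1.  0.1]
#  [0.1 1.1]]
```

Because the reshape is row-major, `S[i]` is exactly `[[Sxx[i], Sxy[i]], [Syx[i], Syy[i]]]`, i.e. the symmetric tensor at location `i`, with no Python-level loop.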
78,648,426
20,804,255
Skip lines with JSON document that cause "ValueError: Unexpected character found when decoding object value" in pd.read_json(..., lines=True)
<p>Consider a document <code>btc_transactions.json</code> that contains the following data:</p> <pre><code> {&quot;txid&quot;:&quot;00a5b60bf38d0605a7ed65c557722e42c1637e1dade80e37b7fc73cea3b67d9b&quot;,&quot;consensus_time&quot;:&quot;2013-07-14T07:07:24.000000000Z&quot;,&quot;tx_position&quot;:&quot;1058687963627546&quot;,&quot;n_balance_updates&quot;:&quot;4&quot;,&quot;amount&quot;:&quot;0.7&quot;,&quot;block_hash&quot;:&quot;000000000000004b880aa75d36e2586b8570b8e15dc192f33e5e834bf51fd06a&quot;,&quot;height&quot;:&quot;246495&quot;,&quot;miner_time&quot;:&quot;2013-07-14T07:43:12.000000000Z&quot;,&quot;min_chain_sequence_number&quot;:&quot;1058687963627730&quot;,&quot;max_chain_sequence_number&quot;:&quot;1058687963627733&quot;,&quot;version&quot;:&quot;1&quot;,&quot;physical_size&quot;:&quot;227&quot;,&quot;consensus_size&quot;:&quot;908&quot;,&quot;fee&quot;:&quot;0.0001&quot;} {&quot;txid&quot;:&quot;991cbe2565efb9712d55db1683548625403b325ea9a9dfed906fce1fb9bd1941&quot;,&quot;consensus_time&quot;:&quot;2013-07-14T07:07:24.000000000Z&quot;,&quot;tx_position&quot;:&quot;1058687963627547&quot;,&quot;n_balance_updates&quot;:&quot;4&quot;,&quot;amount&quot;:&quot;0.66434783&quot;,&quot;block_hash&quot;:&quot;000000000000004b880aa75d36e2586b8570b8e15dc192f33e5e834bf51fd06a&quot;,&quot;height&quot;:&quot;246495&quot;,&quot;miner_time&quot;:&quot;2013-07-14T07:43:12.000000000Z&quot;,&quot;min_chain_sequence_number&quot;:&quot;1058687963627734&quot;,&quot;max_chain_sequence_number&quot;:&quot;1058687963627737&quot;,&quot;version&quot;:&quot;1&quot;,&quot;physical_size&quot;:&quot;258&quot;,&quot;consensus_size&quot;:&quot;1032&quot;,&quot;fee&quot;:&quot;0.0001&quot;} 
{&quot;txid&quot;:&quot;49044a76c558afad2b8f3bb895c958922e08bdf47931a60110ff9{&quot;txid&quot;:&quot;5038df9cc96824f7b990f4cc9405a5babe4a742e79b0c325e0487033b78d1e85&quot;,&quot;consensus_time&quot;:&quot;2013-07-16T06:43:24.000000000Z&quot;,&quot;tx_position&quot;:&quot;1060118187737455&quot;,&quot;n_balance_updates&quot;:&quot;4&quot;,&quot;amount&quot;:&quot;0.64502946&quot;,&quot;block_hash&quot;:&quot;000000000000004e79a680599d4e3a2f3fc244d7ef7238666fa52418008d53a0&quot;,&quot;height&quot;:&quot;246828&quot;,&quot;miner_time&quot;:&quot;2013-07-16T07:33:26.000000000Z&quot;,&quot;min_chain_sequence_number&quot;:&quot;1060118187739070&quot;,&quot;max_chain_sequence_number&quot;:&quot;1060118187739073&quot;,&quot;version&quot;:&quot;1&quot;,&quot;physical_size&quot;:&quot;227&quot;,&quot;consensus_size&quot;:&quot;908&quot;,&quot;fee&quot;:&quot;0.0001&quot;} {&quot;txid&quot;:&quot;c785ea0649bf6df832afa9e58b5719967b8d30eef4878fc0bb6302e8b8fc0e31&quot;,&quot;consensus_time&quot;:&quot;2013-07-14T07:12:41.000000000Z&quot;,&quot;tx_position&quot;:&quot;1058692258594819&quot;,&quot;n_balance_updates&quot;:&quot;4&quot;,&quot;amount&quot;:&quot;400&quot;,&quot;block_hash&quot;:&quot;0000000000000022847171b0b5a2e0c0ddf2444ef2dd6925c8a85891f1199e08&quot;,&quot;height&quot;:&quot;246496&quot;,&quot;miner_time&quot;:&quot;2013-07-14T08:09:32.000000000Z&quot;,&quot;min_chain_sequence_number&quot;:&quot;1058692258594827&quot;,&quot;max_chain_sequence_number&quot;:&quot;1058692258594830&quot;,&quot;version&quot;:&quot;1&quot;,&quot;physical_size&quot;:&quot;225&quot;,&quot;consensus_size&quot;:&quot;900&quot;,&quot;fee&quot;:&quot;0&quot;} </code></pre> <p>The penultimate line is an invalid JSON document while the other 3 lines are valid (there are many more lines and files which I want to process using <code>dask</code>).</p> <p>Loading this data with <code>pd.read_json(..., lines=True)</code> fails with <code>ValueError: Unexpected character found when decoding object 
value</code>. Is there an opportunity to skip these lines similar to the behavior of pandas.read_csv's <code>on_bad_lines</code> parameter?</p> <p>I believe this <a href="https://github.com/pandas-dev/pandas/issues/40273" rel="nofollow noreferrer">enhancement request</a> aimed for something similar but it looks like it did not come to pass. I am aware of <a href="https://stackoverflow.com/questions/49993514/json-file-to-dataframe-conversion-valueerror-unexpected-character-found-when-de">this question</a> with the same point of departure that could also benefit from a potential solution.</p> <p>Importantly, the faulty line is <em>not</em> causing an encoding related error. E.g. the option <code>encoding_errors='ignore'</code> is not resolving the problem and neither is a switch to the <code>pyarrow</code> engine. However, the latter yields a more precise error message</p> <blockquote> <p>pyarrow.lib.ArrowInvalid: JSON parse error: Missing a comma or '}' after an object member. in row 2</p> </blockquote>
<python><json><pandas><dataframe><jsonparser>
2024-06-20 15:33:59
1
315
TLeitzbach
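Since `pd.read_json(..., lines=True)` has no `on_bad_lines` equivalent, one workable approach to the question above is to pre-filter the lines with the stdlib `json` module and hand only the valid records to pandas; the same function can be mapped over files or partitions under dask. A sketch (the helper name and the toy records are illustrative, not from the original data):

```python
import io
import json
import pandas as pd

def read_json_lines_skip_bad(fh):
    """Parse NDJSON line by line, skipping lines that fail to decode."""
    records, bad = [], 0
    for line in fh:
        line = line.strip()
        if not line:
            continue
        try:
            records.append(json.loads(line))
        except ValueError:  # json.JSONDecodeError is a ValueError subclass
            bad += 1
    return pd.DataFrame(records), bad

raw = "\n".join([
    '{"txid": "aa", "amount": "0.7"}',
    '{"txid": "bb", "amount": "0.66"}',
    '{"txid": "cc", "am{"txid": "dd"}',   # corrupted line, like in the question
    '{"txid": "ee", "amount": "400"}',
])
df, n_bad = read_json_lines_skip_bad(io.StringIO(raw))
print(len(df), n_bad)  # 3 1
```

In real use, replace `io.StringIO(raw)` with `open("btc_transactions.json")`, and consider logging the bad line numbers so the corrupted records can be inspected or repaired upstream.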
78,648,362
7,227,146
Match punctuation sign or end of a line
<p>I want to improve the NLTK sentence tokenizer. Unfortunately, it doesn't work too well when the text doesn't leave any whitespace between the period and the next sentence.</p> <pre><code>from nltk.tokenize import sent_tokenize text = &quot;I love you.i hate you.I understand. i comprehend. i have 3.5 lines.I am bored&quot; sentences = sent_tokenize(text) sentences </code></pre> <p>Output:</p> <pre><code>['I love you.i hate you.I understand.', 'i comprehend.', 'i have 3.5 lines.I am bored'] </code></pre> <p>So with regex I can split the first line into 3 separate sentences. However, I don't know how can I get the last sentence too, which doesn't end in a punctuation sign.</p> <pre><code>import re new_sentences = [] for i in sentences: sents = re.findall(r'\w+.*?[.?!$](?!\d)', i, flags=re.S) new_sentences.extend(sents) new_sentences </code></pre> <p>Output:</p> <pre><code>['I love you.', 'i hate you.', 'I understand.', 'i comprehend.', 'i have 3.5 lines.'] </code></pre> <p>I put the <code>$</code> there indicating end of line, but it doesn't seem to care.</p>
<python><regex>
2024-06-20 15:17:18
2
679
zest16
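The reason the `$` "doesn't care" in the attempt above is that inside a character class `[.?!$]`, the dollar sign is a literal `$` character, not an end-of-string anchor. Moving it out of the class into an alternation restores its anchor meaning, which also captures the final unterminated sentence:

```python
import re

text = "I love you.i hate you.I understand. i comprehend. i have 3.5 lines.I am bored"

# End a sentence at . ? ! not followed by a digit, OR at end of input.
# The `|$` alternative is what picks up a trailing sentence with no terminator.
sentences = re.findall(r'\w+.*?(?:[.?!](?!\d)|$)', text, flags=re.S)
print(sentences)
# ['I love you.', 'i hate you.', 'I understand.', 'i comprehend.',
#  'i have 3.5 lines.', 'I am bored']
```

Because `\w+` requires at least one word character, the `$` branch cannot produce a spurious empty match at the end of the string.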
78,648,039
5,551,849
Check for existing data before writing to database
<p>When writing time series data from a <code>pandas</code> dataframe to an InfluxDB bucket, to check whether a specific row of data already exists in the bucket (and thus prevent data from being written again).</p> <p>Format of the time series data that exists in the pandas dataframe (sample):</p> <pre><code>epoch,open,high,low,close,volume 1332374520.0,2.341,2.341,2.341,2.341,1.0 1332374700.0,2.343,2.343,2.343,2.343,1.0 1332374940.0,2.344,2.344,2.344,2.344,1.0 1332375420.0,2.344,2.344,2.344,2.344,2.0 1332375660.0,2.344,2.344,2.344,2.344,2.0 1332376080.0,2.344,2.344,2.344,2.344,1.0 </code></pre> <p>The current Python program, as seen below, isn't detecting that the same data has already been written to the <code>database</code> bucket. If the program is run over and over, the output from <code>print</code> statement should be visible notifying that duplicate data has been detected.</p> <pre><code>import os import pandas as pd from tqdm import tqdm from influxdb_client import InfluxDBClient, Point, WriteOptions from influxdb_client.client.write_api import SYNCHRONOUS from influxdb_client.client.query_api import QueryApi # Example OHLCV data data = { &quot;epoch&quot;: [1330902000, 1330902060, 1330902120], &quot;open&quot;: [2.55, 2.532, 2.537], &quot;high&quot;: [2.55, 2.538, 2.549], &quot;low&quot;: [2.521, 2.531, 2.537], &quot;close&quot;: [2.534, 2.538, 2.548], &quot;volume&quot;: [150, 69, 38] } concat_of_all_dfs = pd.DataFrame(data) def data_point_exists(epoch, bucket, org): query = f''' from(bucket: &quot;{bucket}&quot;) |&gt; range(start: 0) |&gt; filter(fn: (r) =&gt; r[&quot;_measurement&quot;] == &quot;ohlcv&quot;) |&gt; filter(fn: (r) =&gt; r[&quot;epoch&quot;] == {epoch}) ''' result = query_api.query(org=org, query=query) return len(result) &gt; 0 if __name__ == &quot;__main__&quot;: # Database credentials token = os.getenv('INFLUXDB_TOKEN') bucket = &quot;bucket_test&quot; org = &quot;organisation_test&quot; url = &quot;http://localhost:8086&quot; # 
Initialize InfluxDB Client client = InfluxDBClient(url=url, token=token, org=org) write_api = client.write_api(write_options=SYNCHRONOUS) query_api = client.query_api() # Write data points one by one for index, row in tqdm(concat_of_all_dfs.iterrows(), total=len(concat_of_all_dfs)): epoch = row['epoch'] if not data_point_exists(epoch, bucket, org): point = Point(&quot;ohlcv&quot;) \ .field(&quot;epoch&quot;, row['epoch']) \ .field(&quot;open&quot;, row['open']) \ .field(&quot;high&quot;, row['high']) \ .field(&quot;low&quot;, row['low']) \ .field(&quot;close&quot;, row['close']) \ .field(&quot;volume&quot;, row['volume']) write_api.write(bucket=bucket, org=org, record=point) else: print(f&quot;Data point for epoch {epoch} already exists. Skipping...&quot;) client.close() </code></pre> <p>Are there any mistakes in the above code that would prevent repeat data from being detected (possibly in the querying function seen via the flux script, or anywhere else)?</p>
<python><database><time-series><influxdb>
2024-06-20 14:18:53
1
743
p.luck
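One plausible reason the duplicate check above never fires: `epoch` is written as a *field*, and in Flux a field does not appear as a column named `r["epoch"]` before a pivot — it surfaces as `r["_field"]` / `r["_value"]`. So `r["epoch"] == {epoch}` matches nothing and `data_point_exists` always returns `False`. A hedged sketch of a corrected query builder (pure string construction so the filter shape is visible; bucket and measurement names follow the question, the rest is an assumption):

```python
def build_exists_query(bucket: str, epoch: float) -> str:
    # Filter on _field/_value, where unpivoted field data actually lives.
    return f'''
from(bucket: "{bucket}")
  |> range(start: 0)
  |> filter(fn: (r) => r["_measurement"] == "ohlcv")
  |> filter(fn: (r) => r["_field"] == "epoch" and r["_value"] == {float(epoch)})
  |> limit(n: 1)
'''

query = build_exists_query("bucket_test", 1330902000)
print('r["_field"] == "epoch"' in query)  # True
```

A more idiomatic fix is to write `epoch` as the point's timestamp (`Point(...).time(...)`) or as a tag: InfluxDB then overwrites points with identical timestamp/tag set automatically, making the per-row existence query (and its per-row round-trip cost) unnecessary.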
78,647,959
3,965,828
GCP Cloud Function worker timeout
<p>I have a Google Cloud Function, 1st gen, for which I've set the Timeout value to 540 seconds (9 minutes), the maximum allowed for a 1st gen function. This is a python function that executes a Dataform workflow, then polls the execution every 5 seconds for the execution's status.</p> <p>Starting two days ago, the function has been crashing at 300000ms (5 minutes) with the following exception:</p> <pre><code>File &quot;/layers/google.python.pip/pip/lib/python3.11/site- packages/gunicorn/workers/sync.py&quot;, line 135, in handle self.handle_request(listener, req, client, addr) File &quot;/layers/google.python.pip/pip/lib/python3.11/site- packages/gunicorn/workers/sync.py&quot;, line 178, in handle_request respiter = self.wsgi(environ, resp.start_response) </code></pre> <p>and</p> <pre><code>[1] [CRITICAL] WORKER TIMEOUT (pid:13) </code></pre> <p>I'm not sure where to begin figuring out the issue, and I've been searching Google and here for &quot;GCP Cloud Function gunicorn worker timeout&quot;, but that's only returning results for processes that explicitly use gunicorn. Again, the timeout value for this function is well above the time that this error is occurring.</p>
<python><google-cloud-platform><google-cloud-functions><timeout>
2024-06-20 14:06:04
0
2,631
Jeffrey Van Laethem

78,647,917
11,328,614
Python unittest.mock, wrap instance method, turn mock on/off
<p>I would like to write an unit test for an instance method of a class under test containing some simple logic but calling another more complicated instance method.</p> <p>I would like to wrap the complicated instance method in a mock. Additionally, it should be possible to forwards calls to the original method via the mock, when requested. If the call is forwarded, the spec of the original method should be used.</p> <p>I would like to control that via the return value of the mock, if it is <code>unittest.mock.DEFAULT</code> pass the call through, if it is something else, mock out the entire method.</p> <p>In the code, I came up with, I define a class under test (<code>A</code>) and a derived fake class (<code>FakeA</code>) which mocks out the complicated method. The derived class serves as an isolation layer so that dependencies of the test code to the concrete production code are centralized and kept at a minimum level.</p> <p>One of the tests checks the logic of the <code>simple_method</code> and mocks out the <code>complicated_method</code> at all. The other test should call the <code>complicated_method</code> via the <code>simple_method</code> but mock out external dependencies.</p> <p>The problem now is that the spec of the <code>complicated_method</code> is not determined correctly. 
Therefore I get a failing <code>test1</code> with the error message:</p> <pre><code> if self._mock_wraps is not None: &gt; return self._mock_wraps(*args, **kwargs) E TypeError: A.complicated_method() missing 1 required positional argument: 'self' </code></pre> <p>It seems the mock is erroneously considered as a <code>classmethod</code> whereas the original method is an <code>instancemethod</code>.</p> <p>How can I fix this?</p> <pre class="lang-py prettyprint-override"><code>import unittest.mock from unittest.mock import MagicMock class A: def simple_method(self): return self.complicated_method() def complicated_method(self): # Calls some external dependency return 42 class FakeA(A): def __init__(self): super().__init__() type(self).complicated_method = MagicMock(wraps=A.complicated_method, autospec=True, return_value=unittest.mock.DEFAULT) &quot;&quot;&quot; Besides mocking out nested calls, the Fake class should provide some additional methods wrapping internals of class A so that the tests need only minimal adaptation when the production code changes &quot;&quot;&quot; class TestClassA: def test1(self): # This test lets the call pass through to the original method # Therefore we assume here that external dependencies have been mocked out a = FakeA() assert a.simple_method() == 42 def test2(self): # This test mocks out the original method at all providing a fake return_value a = FakeA() a.complicated_method.return_value = 77 assert a.simple_method() == 77 </code></pre> <p>Here is my second attempt, which fails for the same reason:</p> <pre class="lang-py prettyprint-override"><code>class FakeA(MagicMock(wraps=A, autospec=A)): def __init__(self): super().__init__() type(self).complicated_method.return_value=unittest.mock.DEFAULT </code></pre>
<python><python-3.x><unit-testing><class-method><instance-methods>
2024-06-20 13:55:46
0
1,132
Wör Du Schnaffzig
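The `missing 1 required positional argument: 'self'` error above comes from wrapping the *unbound* function `A.complicated_method`: the mock calls it without supplying `self`. A sketch of one fix is to wrap the *bound* method per instance, which also keeps the DEFAULT-passthrough behavior asked for (this moves the mock from the class onto each instance, a deliberate change from the original design):

```python
import unittest.mock
from unittest.mock import MagicMock

class A:
    def simple_method(self):
        return self.complicated_method()

    def complicated_method(self):
        return 42  # stands in for the external dependency

class FakeA(A):
    def __init__(self):
        super().__init__()
        # Wrap the bound method of *this* instance: the mock then supplies
        # `self` implicitly. return_value=DEFAULT makes Mock forward the call
        # to the wrapped target; any other return_value short-circuits it.
        self.complicated_method = MagicMock(
            wraps=super().complicated_method,
            return_value=unittest.mock.DEFAULT,
        )

a = FakeA()
print(a.simple_method())   # 42: forwarded to the real method

a.complicated_method.return_value = 77
print(a.simple_method())   # 77: method fully mocked out
a.complicated_method.assert_called()
```

A side effect worth noting: because the mock lives on the instance, two `FakeA` objects no longer share call history, which is usually what a test wants anyway.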
78,647,821
1,031,191
How to save django db data in the middle of a playwright test?
<p>I'm having trouble figuring out how to call the &quot;sync&quot; <code>create_user()</code>.</p> <p>Error message:</p> <pre><code>django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async. </code></pre> <p>However, when I use sync_to_async, it cannot be awaited because this is not an async function...</p> <pre class="lang-py prettyprint-override"><code>from functools import wraps from django.test import TestCase from playwright.sync_api import sync_playwright def playwright_test(func): @wraps(func) def wrapper(self, *args, **kwargs): with sync_playwright() as self.p: self.browser = self.p.chromium.launch() self.page = self.browser.new_page() try: result = func(self, *args, **kwargs) finally: self.browser.close() return result return wrapper class TC(TestCase): @playwright_test def test_login(self): self.page.goto(self.host) self.page.fill('input[type=&quot;email&quot;]', 'my@email.com') self.page.fill('input[type=&quot;password&quot;]', 'TestLogin') self.page.click('text=&quot;Login&quot;') # expect &quot;Incorrect Credentials&quot; message (no user created yet) assert &quot;Incorrect Credentials&quot; in self.page.content() User = get_user_model() User.objects.create_user('my@email.com', password='TestLogin') # Login again, this time successfully self.page.fill('input[type=&quot;email&quot;]', 'my@email.com') self.page.fill('input[type=&quot;password&quot;]', 'TestLogin') self.page.click('text=&quot;Login&quot;') assert &quot;Login successful. Welcome back!&quot; in self.page.content() </code></pre> <p>If you have a suggestion, please let me know, my hair starts to fall out. 🙏</p>
<python><python-3.x><django><playwright>
2024-06-20 13:36:36
2
12,634
Barney Szabolcs
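The `SynchronousOnlyOperation` above is raised because Playwright's sync API runs an asyncio event loop internally, and Django refuses sync ORM calls from a thread with a running loop. Django's documented escape hatch for tests is the `DJANGO_ALLOW_ASYNC_UNSAFE` environment variable. A sketch of folding it into the existing decorator (shown standalone, without Django, so only the env-var mechanics are demonstrated):

```python
import os
from functools import wraps

def playwright_test(func):
    # Same decorator shape as in the question; the env var lifts Django's
    # async-context guard for the duration of the test, then restores it.
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true"
        try:
            return func(self, *args, **kwargs)
        finally:
            os.environ.pop("DJANGO_ALLOW_ASYNC_UNSAFE", None)
    return wrapper

class DummyTC:
    @playwright_test
    def test_login(self):
        # User.objects.create_user(...) would be callable here without
        # SynchronousOnlyOperation being raised.
        return os.environ.get("DJANGO_ALLOW_ASYNC_UNSAFE")

print(DummyTC().test_login())  # true
```

The guard exists to catch accidental blocking calls in async code, so scoping the override to the decorator (rather than setting it process-wide) keeps that protection everywhere else.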
78,647,624
5,562,431
Can Python's input be terminated with another key?
<p>I am writing an interactive CLI and just thought about giving the user a different way of confirming an input. So, usually I would use python's <code>input()</code>. With that a user would submit input by pressing <kbd>Enter</kbd>.</p> <p>Would it be possible to let a user submit input by pressing e.g. <kbd>Tab</kbd> or <kbd>Ctrl+D</kbd>? I wanted to make it possible to quickly use the CLI with the left hand only (<kbd>Enter</kbd> is far away).</p> <p>I tried to work around with <kbd>Ctrl+D</kbd> by catching <code>EOFError</code></p> <pre class="lang-py prettyprint-override"><code>def _input(msg: str) -&gt; str: try: inpt = input(msg) except EOFError: pass return inpt </code></pre> <p>but of course that doesn't work because the exception prevents user input from being read. <code>sys.stdin</code> and <code>fileinput.input</code> also rely on <kbd>Enter</kbd>.</p>
<python><input><command-line-interface>
2024-06-20 12:58:35
0
894
mRcSchwering
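Alternate submit keys are possible, but not through `input()`: the terminal's line discipline only hands the program input after <kbd>Enter</kbd>. The usual approach is to put the terminal in raw/cbreak mode and read characters one at a time until a chosen terminator appears. The sketch below parameterizes the stream so the loop itself is testable; the platform-specific raw-mode setup (`termios`/`tty` on POSIX, `msvcrt.getwch` on Windows) is noted but omitted:

```python
def read_until(stream, terminators=("\t", "\n", "\x04")):
    """Collect characters until Tab, Enter, or Ctrl+D (EOT, 0x04) is seen.

    On a real terminal, `stream` must be in raw or cbreak mode (see the
    `termios` and `tty` modules on POSIX) so characters arrive unbuffered;
    that setup is platform specific and left out of this sketch.
    """
    chars = []
    while True:
        ch = stream.read(1)
        if ch == "" or ch in terminators:   # EOF or a terminator key
            return "".join(chars)
        chars.append(ch)

import io
print(read_until(io.StringIO("yes\tignored")))  # yes
```

In raw mode, Ctrl+D arrives as the literal `\x04` byte rather than raising `EOFError`, which is why catching the exception around `input()` could not work: by then the partial line was never delivered to the program.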
78,647,552
17,672,187
Using pyenv python inplace of system default
<p>I am on archlinux and trying to run krita scripter (built-in python console) with a pyenv python version. The system default python version is 3.12, however the script I am running uses python libraries that require version 3.11.</p> <p>Here is what I have tried so far.</p> <ol> <li><code>pyenv init</code>, checked the version on console, it shows correct 3.11 (also tried setting <code>pyenv global 3.11</code></li> <li>Install <code>pyqt5</code> package with <code>pip</code> - this installs the packages in site-packages of pyenv 3.11 but I am not sure if it also builds the correct binaries required (the ones installed with the system package <code>pacman -S python-pyqt5</code>)</li> <li><code>export PYTHONHOME=$HOME/.pyenv/versions/3.11.9</code></li> <li><code>export PYTHONPATH=$HOME/.pyenv/versions/3.11.9/lib/python3.11:$PYTHONPAH</code></li> <li><code>export LD_LIBRARY_PATH=$HOME/.pyenv/versions/3.11.9/lib/python3.11/site-packages/PyQt5/Qt5/lib:$LD_LIBRARY_PATH</code></li> </ol> <p>Now if I run krita from console I get error</p> <blockquote> <p>krita: symbol lookup error: /usr/lib/libKF5WidgetsAddons.so.5: undefined symbol: _ZN11QToolButton13checkStateSetEv, version Qt_5</p> </blockquote> <p>Then I did :</p> <p><code>ldd /usr/lib/libKF5WidgetsAddons.so.5 | grep Qt</code></p> <p>This shows <code>~/.pyenv/versions/3.11.9/lib/python3.11/site-packages/PyQt5/Qt5/lib/libQt5Widgets.so.5</code> and other similar files.</p> <p>so the so is found but looks like it's not compatible with libKF5WidgetsAddons.so (KDE qt5)?</p> <p>How can I resolve this issue? Any other approach to simply replace default python with custom python in krita is also welcome. 
As a side note: I tried downloading the appimage of krita and it works fine with built-in 3.10 (this of course doesn't help, as I can't reinstall all the required libraries in this and repackage the appimage)</p>
<python><pyqt><pyqt5><archlinux><pyenv>
2024-06-20 12:44:40
0
691
Loma Harshana
78,647,145
4,100,282
Inconsistent covariance estimates from sklearn.covariance.MinCovDet vs numpy.cov
<p>I would expect that when considering a large sample from a bivariate Gaussian population, covariance estimates from <code>sklearn.covariance.MinCovDet</code> should be equivalent to those from <code>numpy.cov</code> ? Yet when I test this using the following code, I get systematically smaller variance estimates using <code>MinCovDet</code>. Why is that?</p> <pre class="lang-py prettyprint-override"><code>import numpy import sklearn import sklearn.covariance from random import gauss N = 10000 print(f'\n{&quot;MinCovDet&quot;:&gt;12}|{&quot;MLE&quot;:&gt;12}') print(f'{&quot;s_x&quot;:&gt;6}{&quot;s_y&quot;:&gt;6}|{&quot;s_x&quot;:&gt;6}{&quot;s_y&quot;:&gt;6}\n') for ___ in range(16): X = numpy.array([[gauss() for _ in range(N)] for __ in range(2)]).T RC = sklearn.covariance.MinCovDet(assume_centered = True).fit(X).covariance_ C = numpy.cov(X.T) print(f'{RC[0,0]**.5:6.2f}{RC[1,1]**.5:6.2f}|{C[0,0]**.5:6.2f}{C[1,1]**.5:6.2f}') print(f'\n[using sklearn v{sklearn.__version__}]') </code></pre> <p>Output:</p> <pre><code> MinCovDet| MLE s_x s_y| s_x s_y 0.98 0.94| 1.02 0.99 0.95 0.94| 1.00 0.99 0.95 0.95| 1.00 0.99 0.94 0.96| 0.99 1.00 0.95 0.95| 0.99 0.99 0.96 0.95| 1.01 1.00 0.94 0.96| 1.00 1.00 0.96 0.95| 1.01 1.00 0.94 0.95| 0.99 1.01 0.95 0.93| 1.00 1.00 0.94 0.96| 0.99 1.00 0.95 0.97| 1.00 1.01 0.95 0.95| 1.00 0.99 0.96 0.95| 1.00 1.00 0.95 0.95| 1.00 1.00 0.96 0.94| 0.99 1.00 [using sklearn v1.5.0] </code></pre>
<python><scikit-learn>
2024-06-20 11:19:16
1
305
Mathieu
78,647,069
1,367,722
Extracting SQL JOIN condition column names not working with Antlr using Python
<p>The code below is my attempt to identify join relationships between tables in an Oracle database.</p> <p>The idea is to build a map of joins to help identify implicit FK relationships.</p> <p>In the code below I use the PlSqlParserListener to walk the parse tree.</p> <p>In the <code>Join_on_partContext</code> handler I break out all the child tokens and try to check if these are column names (I also want table names, but I'm keeping this example simple).</p> <p>The end result I would like is a list or similar which shows:</p> <pre><code>                 &lt;--- Joins To ---&gt;
Table   Column      Table   Column
----------------------------------------
t1      id1         t2      id1
t2      col5        t3a     t3a_col3
t2      col5        t3      col3      &lt;--- Ideally resolve inline view aliases to actual tables and columns
</code></pre> <p>The script would be run against an entire codebase to build a list of all table JOIN relationships.</p> <p>On the surface this sounds like simply a case of getting each JOIN ON condition, and splitting out the left and right side of the expression. But you can get things like:</p> <p><code>FROM tabB tB JOIN tabA tA ON tA.colA = NVL( tB.col1, tB.col2 )</code></p> <p>Which I would want to show as</p> <pre><code>                 &lt;--- Joins To ---&gt;
Table   Column      Table   Column
----------------------------------------
tabB    col1        tabA    colA
tabB    col2        tabA    colA
</code></pre> <p>The code below is a first attempt to extract out the table and column names as separate values. 
My plan is to add a cross check against a list of table/column names to verify whether we have a real table/column or an alias/expression that we need to parse further.</p> <pre><code>import antlr4 from PlSqlLexer import PlSqlLexer from PlSqlParser import PlSqlParser from pretty_print_antlr_tree import to_string_tree from PlSqlParserListener import PlSqlParserListener class SQLListener(PlSqlParserListener): def is_column_name(self, token, ctx): for child_ctx in ctx.getChildren(): if isinstance(child_ctx, PlSqlParser.Column_nameContext): token_start = child_ctx.start.tokenIndex token_stop = child_ctx.stop.tokenIndex if token.tokenIndex &gt;= token_start and token.tokenIndex &lt;= token_stop: return True return False def enterJoin_on_part(self, ctx:PlSqlParser.Join_on_partContext): print('Found a join on :', ctx.getText()) idx1 = ctx.start.tokenIndex idx2 = ctx.stop.tokenIndex istrm = ctx.parser._input tks = istrm.getTokens(idx1, idx2 + 1) for tk in tks: print( &quot; Check &quot; , tk.text, &quot; is a column&quot; ) if self.is_column_name(tk, ctx): print (&quot; join col name:&quot; , tk.text ) def enterTableview_name(self, ctx:PlSqlParser.Tableview_nameContext): print ( &quot; Found table name: &quot;, ctx.getText() ) def enterColumn_name(self, ctx:PlSqlParser.Column_nameContext): print(' Found a columnName:', ctx.getText() ) def enterTable_alias(self, ctx:PlSqlParser.Table_aliasContext): print(' Found a table alias:', ctx.getText() ) def enterColumn_alias(self, ctx:PlSqlParser.Column_aliasContext): print(' Found a column alias:', ctx.getText() ) def enterGeneral_element(self, ctx:PlSqlParser.General_elementContext): print ( &quot; Found expression: &quot;, ctx.getText()) sqltext = &quot;&quot;&quot; select t1.col1, t2,col2 from t1 join t2 on t1.id1 = t2.id1 join ( select t3.col3 as t3_col3 from t3 where t3.col4 = 25 ) t3a on t2.col5 = t3a.t3_col3 where t1.col5 = 2024 order by 1 &quot;&quot;&quot; lexer = PlSqlLexer(antlr4.InputStream(sqltext)) parser = 
PlSqlParser(antlr4.CommonTokenStream(lexer)) root = parser.sql_script() print(to_string_tree(root, lexer.symbolicNames)) antlr4.ParseTreeWalker.DEFAULT.walk(SQLListener(), root) </code></pre> <p>I am expecting the output to be:</p> <pre><code> Found expression: t1.col1 Found expression: t1 Found expression: t2 Found expression: col2 Found table name: t1 Found table name: t2 Found a join on : ont1.id1=t2.id1 Found expression: t1.id1 join col name: id1 &lt;&lt;&lt; Missing from actual output Found expression: t1 Found expression: t2.id1 join col name: id1 &lt;&lt;&lt; Missing from actual output Found expression: t2 Found expression: t3.col3 join col name: col3 &lt;&lt;&lt; Missing from actual output Found expression: t3 Found a column alias: ast3_col3 Found table name: t3 Found expression: t3.col4 join col name: col4 &lt;&lt;&lt; Missing from actual output Found expression: t3 Found a table alias: t3a Found a join on : ont2.col5=t3a.t3_col3 Found expression: t2.col5 Found expression: t2 Found expression: t3a.t3_col3 join col name: t3_col3 &lt;&lt;&lt; Missing from actual output Found expression: t3a Found expression: t1.col5 join col name: col5 &lt;&lt;&lt; Missing from actual output Found expression: t1 </code></pre> <p>I am thinking maybe it would be better to fashion a state machine so I can tell when I am in a JOIN clause, then break up the &quot;table.column&quot;, &quot;alias.column&quot; or &quot;columns&quot; token and check off against a lookup list of tables and columns. But this seems to suggest a gaping hole in the Antlr way of doing things, and makes me think there must a way to achieve what I want within the Antlr framework.</p> <p>The tree output is (snipped to avoid 30,000 limit):</p> <pre><code>. . 
║ ╠═ from_clause ║ ║ ╠═ &quot;from&quot; (FROM) ║ ║ ╚═ table_ref_list ║ ║ ╚═ table_ref ║ ║ ╠═ table_ref_aux ║ ║ ║ ╚═ table_ref_aux_internal_one ║ ║ ║ ╚═ dml_table_expression_clause ║ ║ ║ ╚═ tableview_name ║ ║ ║ ╚═ identifier ║ ║ ║ ╚═ id_expression ║ ║ ║ ╚═ regular_id ║ ║ ║ ╚═ &quot;t1&quot; (REGULAR_ID) ║ ║ ╠═ join_clause ║ ║ ║ ╠═ &quot;join&quot; (JOIN) ║ ║ ║ ╠═ table_ref_aux ║ ║ ║ ║ ╚═ table_ref_aux_internal_one ║ ║ ║ ║ ╚═ dml_table_expression_clause ║ ║ ║ ║ ╚═ tableview_name ║ ║ ║ ║ ╚═ identifier ║ ║ ║ ║ ╚═ id_expression ║ ║ ║ ║ ╚═ regular_id ║ ║ ║ ║ ╚═ &quot;t2&quot; (REGULAR_ID) ║ ║ ║ ╚═ join_on_part ║ ║ ║ ╠═ &quot;on&quot; (ON) ║ ║ ║ ╚═ condition ║ ║ ║ ╚═ expression ║ ║ ║ ╚═ logical_expression ║ ║ ║ ╚═ unary_logical_expression ║ ║ ║ ╚═ multiset_expression ║ ║ ║ ╚═ relational_expression ║ ║ ║ ╠═ relational_expression ║ ║ ║ ║ ╚═ compound_expression ║ ║ ║ ║ ╚═ concatenation ║ ║ ║ ║ ╚═ model_expression ║ ║ ║ ║ ╚═ unary_expression ║ ║ ║ ║ ╚═ atom ║ ║ ║ ║ ╚═ general_element ║ ║ ║ ║ ╠═ general_element ║ ║ ║ ║ ║ ╚═ general_element_part ║ ║ ║ ║ ║ ╚═ id_expression ║ ║ ║ ║ ║ ╚═ regular_id ║ ║ ║ ║ ║ ╚═ &quot;t1&quot; (REGULAR_ID) ║ ║ ║ ║ ╠═ &quot;.&quot; (PERIOD) ║ ║ ║ ║ ╚═ general_element_part ║ ║ ║ ║ ╚═ id_expression ║ ║ ║ ║ ╚═ regular_id ║ ║ ║ ║ ╚═ &quot;id1&quot; (REGULAR_ID) ║ ║ ║ ╠═ relational_operator ║ ║ ║ ║ ╚═ &quot;=&quot; (EQUALS_OP) ║ ║ ║ ╚═ relational_expression ║ ║ ║ ╚═ compound_expression ║ ║ ║ ╚═ concatenation ║ ║ ║ ╚═ model_expression ║ ║ ║ ╚═ unary_expression ║ ║ ║ ╚═ atom ║ ║ ║ ╚═ general_element ║ ║ ║ ╠═ general_element ║ ║ ║ ║ ╚═ general_element_part ║ ║ ║ ║ ╚═ id_expression ║ ║ ║ ║ ╚═ regular_id ║ ║ ║ ║ ╚═ &quot;t2&quot; (REGULAR_ID) ║ ║ ║ ╠═ &quot;.&quot; (PERIOD) ║ ║ ║ ╚═ general_element_part ║ ║ ║ ╚═ id_expression ║ ║ ║ ╚═ regular_id ║ ║ ║ ╚═ &quot;id1&quot; (REGULAR_ID) ║ ║ ╚═ join_clause ║ ║ ╠═ &quot;join&quot; (JOIN) ║ ║ ╠═ table_ref_aux ║ ║ ║ ╠═ table_ref_aux_internal_one ║ ║ ║ ║ ╚═ dml_table_expression_clause ║ ║ ║ ║ ╠═ &quot;(&quot; 
(LEFT_PAREN) ║ ║ ║ ║ ╠═ select_statement ║ ║ ║ ║ ║ ╚═ select_only_statement ║ ║ ║ ║ ║ ╚═ subquery ║ ║ ║ ║ ║ ╚═ subquery_basic_elements ║ ║ ║ ║ ║ ╚═ query_block ║ ║ ║ ║ ║ ╠═ &quot;select&quot; (SELECT) ║ ║ ║ ║ ║ ╠═ selected_list ║ ║ ║ ║ ║ ║ ╚═ select_list_elements ║ ║ ║ ║ ║ ║ ╚═ expression ║ ║ ║ ║ ║ ║ ╚═ logical_expression ║ ║ ║ ║ ║ ║ ╚═ unary_logical_expression ║ ║ ║ ║ ║ ║ ╚═ multiset_expression ║ ║ ║ ║ ║ ║ ╚═ relational_expression ║ ║ ║ ║ ║ ║ ╚═ compound_expression ║ ║ ║ ║ ║ ║ ╚═ concatenation ║ ║ ║ ║ ║ ║ ╚═ model_expression ║ ║ ║ ║ ║ ║ ╚═ unary_expression ║ ║ ║ ║ ║ ║ ╚═ atom ║ ║ ║ ║ ║ ║ ╚═ general_element ║ ║ ║ ║ ║ ║ ╠═ general_element ║ ║ ║ ║ ║ ║ ║ ╚═ general_element_part ║ ║ ║ ║ ║ ║ ║ ╚═ id_expression ║ ║ ║ ║ ║ ║ ║ ╚═ regular_id ║ ║ ║ ║ ║ ║ ║ ╚═ &quot;t3&quot; (REGULAR_ID) ║ ║ ║ ║ ║ ║ ╠═ &quot;.&quot; (PERIOD) ║ ║ ║ ║ ║ ║ ╚═ general_element_part ║ ║ ║ ║ ║ ║ ╚═ id_expression ║ ║ ║ ║ ║ ║ ╚═ regular_id ║ ║ ║ ║ ║ ║ ╚═ &quot;col3&quot; (REGULAR_ID) ║ ║ ║ ║ ║ ╠═ from_clause ║ ║ ║ ║ ║ ║ ╠═ &quot;from&quot; (FROM) ║ ║ ║ ║ ║ ║ ╚═ table_ref_list ║ ║ ║ ║ ║ ║ ╚═ table_ref ║ ║ ║ ║ ║ ║ ╚═ table_ref_aux ║ ║ ║ ║ ║ ║ ╚═ table_ref_aux_internal_one ║ ║ ║ ║ ║ ║ ╚═ dml_table_expression_clause ║ ║ ║ ║ ║ ║ ╚═ tableview_name ║ ║ ║ ║ ║ ║ ╚═ identifier ║ ║ ║ ║ ║ ║ ╚═ id_expression ║ ║ ║ ║ ║ ║ ╚═ regular_id ║ ║ ║ ║ ║ ║ ╚═ &quot;t3&quot; (REGULAR_ID) ║ ║ ║ ║ ║ ╚═ where_clause ║ ║ ║ ║ ║ ╠═ &quot;where&quot; (WHERE) ║ ║ ║ ║ ║ ╚═ condition ║ ║ ║ ║ ║ ╚═ expression ║ ║ ║ ║ ║ ╚═ logical_expression ║ ║ ║ ║ ║ ╚═ unary_logical_expression ║ ║ ║ ║ ║ ╚═ multiset_expression ║ ║ ║ ║ ║ ╚═ relational_expression ║ ║ ║ ║ ║ ╠═ relational_expression ║ ║ ║ ║ ║ ║ ╚═ compound_expression ║ ║ ║ ║ ║ ║ ╚═ concatenation ║ ║ ║ ║ ║ ║ ╚═ model_expression ║ ║ ║ ║ ║ ║ ╚═ unary_expression ║ ║ ║ ║ ║ ║ ╚═ atom ║ ║ ║ ║ ║ ║ ╚═ general_element ║ ║ ║ ║ ║ ║ ╠═ general_element ║ ║ ║ ║ ║ ║ ║ ╚═ general_element_part ║ ║ ║ ║ ║ ║ ║ ╚═ id_expression ║ ║ ║ ║ ║ ║ ║ ╚═ regular_id ║ ║ ║ ║ ║ ║ ║ ╚═ &quot;t3&quot; (REGULAR_ID) ║ ║ ║ ║ ║ ║ 
╠═ &quot;.&quot; (PERIOD) ║ ║ ║ ║ ║ ║ ╚═ general_element_part ║ ║ ║ ║ ║ ║ ╚═ id_expression ║ ║ ║ ║ ║ ║ ╚═ regular_id ║ ║ ║ ║ ║ ║ ╚═ &quot;col4&quot; (REGULAR_ID) ║ ║ ║ ║ ║ ╠═ relational_operator ║ ║ ║ ║ ║ ║ ╚═ &quot;=&quot; (EQUALS_OP) ║ ║ ║ ║ ║ ╚═ relational_expression ║ ║ ║ ║ ║ ╚═ compound_expression ║ ║ ║ ║ ║ ╚═ concatenation ║ ║ ║ ║ ║ ╚═ model_expression ║ ║ ║ ║ ║ ╚═ unary_expression ║ ║ ║ ║ ║ ╚═ atom ║ ║ ║ ║ ║ ╚═ constant ║ ║ ║ ║ ║ ╚═ numeric ║ ║ ║ ║ ║ ╚═ &quot;25&quot; (UNSIGNED_INTEGER) ║ ║ ║ ║ ╚═ &quot;)&quot; (RIGHT_PAREN) ║ ║ ║ ╚═ table_alias ║ ║ ║ ╚═ identifier ║ ║ ║ ╚═ id_expression ║ ║ ║ ╚═ regular_id ║ ║ ║ ╚═ &quot;t3a&quot; (REGULAR_ID) ║ ║ ╚═ join_on_part ║ ║ ╠═ &quot;on&quot; (ON) ║ ║ ╚═ condition ║ ║ ╚═ expression ║ ║ ╚═ logical_expression ║ ║ ╚═ unary_logical_expression ║ ║ ╚═ multiset_expression ║ ║ ╚═ relational_expression ║ ║ ╠═ relational_expression ║ ║ ║ ╚═ compound_expression ║ ║ ║ ╚═ concatenation ║ ║ ║ ╚═ model_expression ║ ║ ║ ╚═ unary_expression ║ ║ ║ ╚═ atom ║ ║ ║ ╚═ general_element ║ ║ ║ ╠═ general_element ║ ║ ║ ║ ╚═ general_element_part ║ ║ ║ ║ ╚═ id_expression ║ ║ ║ ║ ╚═ regular_id ║ ║ ║ ║ ╚═ &quot;t2&quot; (REGULAR_ID) ║ ║ ║ ╠═ &quot;.&quot; (PERIOD) ║ ║ ║ ╚═ general_element_part ║ ║ ║ ╚═ id_expression ║ ║ ║ ╚═ regular_id ║ ║ ║ ╚═ &quot;col5&quot; (REGULAR_ID) ║ ║ ╠═ relational_operator ║ ║ ║ ╚═ &quot;=&quot; (EQUALS_OP) ║ ║ ╚═ relational_expression ║ ║ ╚═ compound_expression ║ ║ ╚═ concatenation ║ ║ ╚═ model_expression ║ ║ ╚═ unary_expression ║ ║ ╚═ atom ║ ║ ╚═ general_element ║ ║ ╠═ general_element ║ ║ ║ ╚═ general_element_part ║ ║ ║ ╚═ id_expression ║ ║ ║ ╚═ regular_id ║ ║ ║ ╚═ &quot;t3a&quot; (REGULAR_ID) ║ ║ ╠═ &quot;.&quot; (PERIOD) ║ ║ ╚═ general_element_part ║ ║ ╚═ id_expression ║ ║ ╚═ regular_id ║ ║ ╚═ &quot;col2&quot; (REGULAR_ID) ║ ╠═ where_clause ║ ║ ╠═ &quot;where&quot; (WHERE) ║ ║ ╚═ condition ║ ║ ╚═ expression . . . ╚═ &quot;&lt;EOF&gt;&quot; </code></pre>
<python><join><antlr4>
2024-06-20 11:02:33
0
4,034
TenG
78,646,926
13,942,929
How can I use enum in CPP and connect it with Cython?
<p>In the CPP folder, I have <code>MyWork.h</code> and <code>MyWork.cpp</code>. In the Cython folder, I have <code>MyWork.pxd</code> and <code>MyWork.pyx</code>.</p> <p>Now I want to use an enum in CPP and then connect it in Cython as follows.</p> <p>[MyWork.h]</p> <pre><code>enum Task {
    REGULAR,
    DEV,
    MARKETING
};

class _MyWork {
    _MyWork();
    int get_my_work_id(Task value);
};
</code></pre> <p>Let's skip <code>MyWork.cpp</code>, because I don't think it matters in this case.</p> <p>[MyWork.pxd]</p> <pre><code>cdef extern from &quot;MyWork.h&quot;:
    cdef enum Task:
        REGULAR, DEV, MARKETING

    cdef cppclass _MyWork:
        _MyWork()
        int get_my_work_id(Task value)
</code></pre> <p>[MyWork.pyx]</p> <pre><code>cdef class Triangle:
    def __cinit__(self):
        # STH
        pass

    def get_my_work_id(self, Task value):
        # What should I do here?
        pass
</code></pre> <p><strong>I think I know how to write it in <code>.pxd</code>, but how do I write it in <code>.pyx</code>?</strong></p>
<python><c++><enums><cython><cythonize>
2024-06-20 10:32:21
1
3,779
Punreach Rany
78,646,747
1,051,765
Wrong shape at fully connected layer: mat1 and mat2 shapes cannot be multiplied
<p>I have the following model. It is training well. The shapes of my splits are:</p> <ul> <li>X_train (98, 1, 40, 844)</li> <li>X_val (21, 1, 40, 844)</li> <li>X_test (21, 1, 40, 844)</li> </ul> <p>However, I am getting the following error at <code>x = F.relu(self.fc1(x))</code> in <code>forward</code> when I attempt to interpret the model on the validation set:</p> <pre><code># Create a DataLoader for the validation set
valid_dl = learn.dls.test_dl(X_val, y_val)

# Get predictions and interpret them on the validation set
interp = ClassificationInterpretation.from_learner(learn, dl=valid_dl)
</code></pre> <p><code>RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x2110 and 67520x128)</code></p> <p>I have checked dozens of similar questions but I am unable to find a solution. Here is the code.</p> <pre class="lang-py prettyprint-override"><code>from fastai.vision.all import *
import librosa
import numpy as np
from sklearn.model_selection import train_test_split
import torch
import torch.nn as nn
from torchsummary import summary

[...]

# labels in y can be [0,1,2,3]

# Split the data
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.3, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.5, random_state=42)

# Reshape data for CNN input (add channel dimension)
X_train = X_train[:, np.newaxis, :, :]
X_val = X_val[:, np.newaxis, :, :]
X_test = X_test[:, np.newaxis, :, :]

# X_train.shape, X_val.shape, X_test.shape
# ((98, 1, 40, 844), (21, 1, 40, 844), (21, 1, 40, 844))

class DraftCNN(nn.Module):
    def __init__(self):
        super(DraftCNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1)

        # Calculate flattened size based on input dimensions
        with torch.no_grad():
            dummy_input = torch.zeros(1, 1, 40, 844)  # shape of one input sample
            dummy_output = self.pool(self.conv2(self.pool(F.relu(self.conv1(dummy_input)))))
            self.flattened_size = dummy_output.view(dummy_output.size(0), -1).size(1)

        self.fc1 = nn.Linear(self.flattened_size, 128)
        self.fc2 = nn.Linear(128, 4)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(x.size(0), -1)  # Flatten the output of convolutions
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Initialize the model and the Learner
model = DraftCNN()
learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(),
                metrics=[accuracy, Precision(average='macro'), Recall(average='macro'), F1Score(average='macro')])

# Train the model
learn.fit_one_cycle(8)

print(summary(model, (1, 40, 844)))

# Create a DataLoader for the validation set
valid_dl = learn.dls.test_dl(X_val, y_val)

# Get predictions and interpret them on the validation set
interp = ClassificationInterpretation.from_learner(learn, dl=valid_dl)
interp.plot_confusion_matrix()
interp.plot_top_losses(5)
</code></pre> <p>I tried changing the forward function and the shapes of the layers, but I keep getting the same error.</p> <p>Edit: upon request, I have added more code.</p>
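The 67520 in the error message can be reproduced with a little shape arithmetic, independent of any framework. Note also that 2110 = 10 × 211, the flattened size of a *single* channel, which suggests the batches built at interpretation time may be missing the channel axis (an assumption, not a confirmed diagnosis):

```python
def conv2d_out(n, k, s=1, p=0):
    """Output size of one spatial dimension for Conv2d/MaxPool2d (floor division)."""
    return (n + 2 * p - k) // s + 1

h, w = 40, 844
h, w = conv2d_out(h, 3, 1, 1), conv2d_out(w, 3, 1, 1)  # conv1 (same size)
h, w = conv2d_out(h, 2, 2), conv2d_out(w, 2, 2)        # pool
h, w = conv2d_out(h, 3, 1, 1), conv2d_out(w, 3, 1, 1)  # conv2 (same size)
h, w = conv2d_out(h, 2, 2), conv2d_out(w, 2, 2)        # pool

flattened = 32 * h * w  # 32 output channels of conv2
print(h, w, flattened)  # 10 211 67520
```

So `fc1` genuinely expects 67520 inputs; the 32x2110 on the left-hand side is what points at the data fed through `test_dl`, not at the layer sizes.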
<python><pytorch><neural-network><fast-ai>
2024-06-20 09:53:10
1
1,369
Carlos Vega
78,646,723
14,982,219
How to lock table row with SQLAlchemy ORM after commit?
<p>I am working with the SQLAlchemy ORM. I add a record using <code>session.add(object)</code> and then I commit it with <code>session.commit()</code>.<br /> After the commit I continue working on the ORM object, so I need to lock it so other processes can't edit it. I need the same behaviour as <code>session.query.with_for_update</code>. Is there a way to do it?</p> <pre class="lang-py prettyprint-override"><code>import MyOrm
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine

engine = create_engine(url=f'postgresql://{username}:{password}@{host}:{port}/{database}')
Session = sessionmaker(bind=engine)

data = {&quot;name&quot;: &quot;John&quot;, &quot;number&quot;: 1, &quot;request&quot;: &quot;registered&quot;}

with Session() as session:
    try:
        object = MyOrm(**data)
        session.add(object)

        # persist in db as data has been registered
        session.commit()

        # this function processes the data and does some things
        process_data(object)

        # after the function, change the request value
        object.request = &quot;processed&quot;

        # save the changes
        session.commit()
    except Exception:
        session.rollback()

print(&quot;completed&quot;)
</code></pre> <p>It is between the two <code>session.commit()</code> calls that I want to lock the record so it won't be modified by another process.</p>
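One hedged option: a `commit()` ends the transaction and releases all row locks, so the lock has to be re-acquired at the start of the next transaction, e.g. with `session.refresh(obj, with_for_update=True)` (SQLAlchemy 1.4+), which re-SELECTs the row `FOR UPDATE`. The sketch below only *compiles* the equivalent locking SELECT for PostgreSQL, so it runs without a database; the model is a stand-in for the question's `MyOrm`:

```python
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.dialects import postgresql
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class MyOrm(Base):  # hypothetical stand-in for the question's model
    __tablename__ = "my_orm"
    id = Column(Integer, primary_key=True)
    request = Column(String)

# After the first commit() the row lock is gone; re-acquiring it at the
# start of the new transaction, e.g. session.refresh(obj, with_for_update=True),
# emits SQL equivalent to:
stmt = select(MyOrm).where(MyOrm.id == 1).with_for_update()
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
assert "FOR UPDATE" in sql
```

The lock then holds until the second `commit()` or `rollback()`.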
<python><postgresql><sqlalchemy>
2024-06-20 09:50:39
1
381
vll1990
78,646,608
1,804,490
Estimating the size of data when loaded from parquet file into an arrow table
<p>I have a pyarrow table with a large amount of columns (&gt;2000). For 1000 rows, it takes about 20M RAM. For many columns, there’s a single value over all the rows. When I save it to a parquet file, the size of the resulting file on storage is ~4MB.</p> <p>Now, when looking on the <code>total_compressed_size</code> (=size on storage?) and <code>total_uncompressed_size</code> (=size in RAM?) parameters in the parquet file metadata, I would expect them to reflect this X4 ratio. But the values for both parameters in my case are ~2M, with the ‘total_uncompressed_size’ value only slightly larger than <code>total_compressed_size</code>.</p> <p>So, my question is whether I misinterpret the meaning of these parameters or they are just not reliable? And, most importantly, is there any reliable way to estimate the size a given parquet file will take in RAM before loading the whole data?</p>
<python><parquet><pyarrow>
2024-06-20 09:29:48
1
591
urim
78,646,484
1,867,328
Combining two Pandas dataframe with unequal number of columns (superset)
<p>I have the below code:</p> <pre><code>import numpy as np
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2, 3], 'y': [7, 8, 9]})
df2 = pd.DataFrame({'b': [10, 11, 12], 'x': [13, 14, 15], 'y': [16, 17, 18]})

pd.DataFrame(np.vstack([df1, df2]), columns=df1.columns)
</code></pre> <p>The above code generates an error. I expect the final dataframe to contain all columns from <code>df1</code> and <code>df2</code>; there will therefore be missing values, as the column names differ.</p> <p>Is there any way to achieve this?</p>
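`np.vstack` requires equal widths, but `pd.concat` aligns on the union of the column names and fills the gaps with NaN, which matches the behaviour described. A sketch using the question's frames:

```python
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2, 3], 'y': [7, 8, 9]})
df2 = pd.DataFrame({'b': [10, 11, 12], 'x': [13, 14, 15], 'y': [16, 17, 18]})

# concat aligns on the union of the columns; missing cells become NaN
out = pd.concat([df1, df2], ignore_index=True)
print(out)
```

With the default `sort=False`, the column order is the order of first appearance: `a`, `y`, `b`, `x`.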
<python><pandas><dataframe>
2024-06-20 09:03:41
1
3,832
Bogaso
78,646,440
5,334,903
Inner function as callback not called
<p>So I've got some classes that allow me to upload file-like or IO objects to Azure Blob Storage.</p> <p>My problem here is that I want to pass a callback during the export of those objects, and this callback needs to do multiple things (hence call a higher method).</p> <p>Here is the code:</p> <pre><code>from azure.storage.blob import BlobServiceClient class UploadStatus: def __init__(self, uploaded=0, total=None): self.uploaded = uploaded self.total = total def update(self, uploaded, total): if total is not None: self.total = self._normalize(total) if uploaded is not None: self.uploaded = self._normalize(uploaded) def progress(self): &quot;&quot;&quot;Calculate the progress made for the upload incorring.&quot;&quot;&quot; return 100.0 * self.uploaded / self.total if self.total &gt; 0 else 0 def _normalize(self, integer): return integer if integer else 0 class Resource: &quot;&quot;&quot;A wrapper encapsulating an io-like object&quot;&quot;&quot; def __init__(self, ...): ... class AzureBlobProxy: def __init__(self, client): self.client = client @classmethod def build(cls, params: dict): client = BlobServiceClient(**params) return cls(client) def blob_export(self, container: str, name: str, io, block=None) -&gt; dict: &quot;&quot;&quot;Export file-like object to a Storage. Args: container (string): Container to use name (string): Name to use io (io.IOBase): File-like io object base on IOBase. 
&quot;&quot;&quot; blob_client = self.client.get_blob_client( container=container, blob=name ) return blob_client.upload_blob( data=io, progress_hook=block ) class AzureProvider: def upload(self, resource, status, params={}, block=None): &quot;&quot;&quot;See BaseProvider.upload() description.&quot;&quot;&quot; proxy = AzureBlobProxy.build(params) container, name = self._extract_container_and_name_from_params(params) def progress_callback(sent, total): print(block) # doesn't display the block function status.update(uploaded=sent, total=total) if block and callable(block): block.__call__(status) with resource.with_io() as io: status.update(uploaded=0, total=sys.getsizeof(io)) proxy.blob_export(container, name, io, progress_callback) return self._upload_strategy() def _extract_container_and_name_from_params(self, params): &quot;&quot;&quot;Return container and name for blob&quot;&quot;&quot; ... def __upload_strategy(self): return 'azure-blob' def print_progress(status): print('PROGRESS {}%'.format(int(status.progress()))) parameters = { ... } resource = Resource(...) upload_status = UploadStatus() strategy = provider.upload(resource, upload_status, parameters, print_progress) </code></pre> <p>Have a look at my inner function <code>progress_callback</code> that is the progress_hook passed to this final method <a href="https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.blobclient?view=azure-python#azure-storage-blob-blobclient-upload-blob" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.blobclient?view=azure-python#azure-storage-blob-blobclient-upload-blob</a></p> <p>It should call the function <code>print_progress</code>, but that's not the case, I don't see any printing &quot;PROGRESS %&quot; appearing.</p> <p>I can confirm, though, that a file-like object is exported to Azure Blob Storage.</p> <p>Do you have any idea?</p>
<python><azure-blob-storage>
2024-06-20 08:55:29
1
955
Bloodbee
78,646,329
261,006
Provide multiple languages via Odoo RPC call
<p>I would like to write multiple languages to a <code>ir.ui.view</code> Object's <code>arch_db</code> field.</p> <p>However, if I provide a dict/json value with languages as keys and HTML as values (<code>{&quot;de_DE&quot;=&gt;&lt;german-html&gt;, &quot;en_US&quot;=&gt;&lt;american-html&gt;}</code>), validation will fail.</p> <p>If I write html with a <code>de_DE</code> context first, and then with <code>en_US</code> context, the latter will overwrite the former for both languages.</p> <p>How can I write different HTML for different languages?</p> <p>Is there e.g. some way to call <code>update_raw</code> via RPC somehow?</p> <h2>Example</h2> <pre class="lang-golang prettyprint-override"><code>// This example demonstrates how (not) to create a view with translated HTML content. package main import ( &quot;errors&quot; &quot;fmt&quot; &quot;log&quot; &quot;github.com/kolo/xmlrpc&quot; ) func main() { viewArch := TranslatedHTML{ LangDE: `&lt;p&gt;Deutscher Text&lt;/p&gt;`, LangEN: `&lt;p&gt;English text&lt;/p&gt;`, } cl, err := NewClient( &quot;http://localhost:3017&quot;, &quot;odoo_17&quot;, &quot;admin&quot;, &quot;admin&quot;, LangDE, ) panicOnErr(err) reply, err := cl.CreateView(viewArch) panicOnErr(err) fmt.Println(reply) } func panicOnErr(err error) { if err != nil { panic(err) } } func wrapErr(err error, msg string) error { if err != nil { return fmt.Errorf(&quot;%s: %w&quot;, msg, err) } return nil } type Lang string const LangDE = Lang(&quot;de_DE&quot;) const LangEN = Lang(&quot;en_US&quot;) type Client struct { *xmlrpc.Client ContextLang Lang uid int // stores user id after login // Needed per call: OdooDB string Username string Password string } func NewClient(url, odooDB, username, password string, contextLang Lang) (*Client, error) { loginClient, err := xmlrpc.NewClient(fmt.Sprintf(&quot;%s/xmlrpc/2/common&quot;, url), nil) if err != nil { return nil, wrapErr(err, &quot;failed to create login client&quot;) } var uid int err = 
loginClient.Call(&quot;authenticate&quot;, []any{ odooDB, username, password, map[string]any{}, }, &amp;uid) if err != nil { return nil, wrapErr(err, &quot;failed to authenticate&quot;) } client, err := xmlrpc.NewClient(fmt.Sprintf(&quot;%s/xmlrpc/2/object&quot;, url), nil) if err != nil { return nil, wrapErr(err, &quot;failed to create object client&quot;) } return &amp;Client{client, contextLang, uid, odooDB, username, password}, nil } func (c *Client) WithContextLang(contextLang Lang) *Client { return &amp;Client{c.Client, contextLang, c.uid, c.OdooDB, c.Username, c.Password} } type TranslatedHTML map[Lang]string func (th TranslatedHTML) Langs() []Lang { langs := make([]Lang, 0, len(th)) for lang := range th { langs = append(langs, lang) } return langs } func (cl *Client) ExecuteKW(model, method string, args, reply any) error { return cl.Call( &quot;execute_kw&quot;, []any{cl.OdooDB, cl.uid, cl.Password, model, method, args, map[string]any{&quot;context&quot;: map[string]string{&quot;lang&quot;: string(cl.ContextLang)}}}, reply, ) } func (cl *Client) CreateView(arch TranslatedHTML) (any, error) { langs := arch.Langs() if (len(langs)) == 0 { return nil, errors.New(&quot;no translations provided&quot;) } firstLang := langs[0] restLangs := langs[1:] var reply any err := cl.WithContextLang(firstLang).ExecuteKW(&quot;ir.ui.view&quot;, &quot;create&quot;, []any{map[string]string{&quot;arch_db&quot;: arch[firstLang], &quot;type&quot;: &quot;qweb&quot;}}, &amp;reply) if err != nil { return reply, err } log.Printf(&quot;created view with ID %d, Lang %s, %s&quot;, reply.(int64), firstLang, arch[firstLang]) viewID := reply.(int64) for _, lang := range restLangs { var reply any err := cl.WithContextLang(lang).ExecuteKW(&quot;ir.ui.view&quot;, &quot;write&quot;, []any{viewID, map[string]any{&quot;arch_db&quot;: arch[lang]}}, &amp;reply) if err != nil { return reply, err } log.Printf(&quot;updated view with Lang %s, %v, %s&quot;, lang, reply, arch[lang]) } return nil, nil } 
</code></pre>
<python><go><odoo><rpc><odoo-17>
2024-06-20 08:30:31
1
2,561
Jasper
78,646,170
9,112,151
Supply only payload schema for FastAPI endpoint without actual validation by Pydantic
<p>I need to supply only the payload schema for an API endpoint, without any <code>Pydantic</code> validation. I'm using:</p> <ul> <li><code>FastAPI==0.110.1</code></li> <li><code>Pydantic v2</code></li> </ul> <pre class="lang-py prettyprint-override"><code>from typing import Literal, Annotated

import uvicorn
from fastapi import FastAPI
from pydantic import BaseModel, Field, RootModel

app = FastAPI()


def get_openapi_extra(model) -&gt; dict:
    return {
        &quot;requestBody&quot;: {
            &quot;content&quot;: {&quot;application/json&quot;: {&quot;schema&quot;: model.schema(ref_template=&quot;#/components/schemas/{model}&quot;)}}
        }
    }


class Cat(BaseModel):
    pet_type: Literal['cat']
    meows: int


class Dog(BaseModel):
    pet_type: Literal['dog']
    barks: float


class Lizard(BaseModel):
    pet_type: Literal['reptile', 'lizard']
    scales: bool


class Model(RootModel):
    root: Annotated[Cat | Dog | Lizard, Field(..., discriminator='pet_type')]


@app.get(&quot;/&quot;, openapi_extra=get_openapi_extra(Model))
def index():
    return &quot;ok&quot;
</code></pre> <p>When I was using <code>Pydantic v1</code> and <code>FastAPI==0.98.0</code> it worked without problems. But when I upgraded to <code>Pydantic v2</code> and <code>FastAPI 0.110.1</code> I started facing the following problem:</p> <p><a href="https://i.sstatic.net/fnZT506t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fnZT506t.png" alt="enter image description here" /></a></p> <p>It seems that FastAPI does not know about the Lizard, Dog and Cat schemas...</p> <p>How can I fix it?</p>
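One likely contributing factor, stated as an assumption: in Pydantic v2, `model_json_schema()` returns the sub-model definitions under a local `$defs` key, and `ref_template` only rewrites the `$ref` strings. Nothing registers those definitions with FastAPI, so the rewritten `$ref`s point at `components/schemas` entries that were never added. A sketch showing where the definitions end up (Pydantic v2 assumed; only two sub-models for brevity):

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field, RootModel

class Cat(BaseModel):
    pet_type: Literal['cat']
    meows: int

class Dog(BaseModel):
    pet_type: Literal['dog']
    barks: float

class Model(RootModel):
    root: Annotated[Union[Cat, Dog], Field(discriminator='pet_type')]

schema = Model.model_json_schema(ref_template="#/components/schemas/{model}")
defs = schema.pop("$defs", {})  # these never reach the OpenAPI components on their own
print(sorted(defs))
```

The popped `defs` dict would then need to be merged into the generated spec's `components.schemas`, for example inside a custom `app.openapi` function.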
<python><fastapi><openapi><pydantic><pydantic-v2>
2024-06-20 07:55:48
1
1,019
Альберт Александров
78,646,082
9,425,034
get spectrum value with numpy only
<p>I want to make a kind of frequency-monitoring program, using the rtl-sdr wrapper in Python and NumPy. This program must run in console mode, with no graphical interface. I do not want a dependency on matplotlib or scipy, so I'm looking for a pure Python and NumPy solution.</p> <p>I know how to read data from the rtl-sdr dongle (for example, centered at 446 MHz with a sample rate of 250 kHz should give me a spectrum from 445.875 MHz to 446.125 MHz). But how do I get this spectrum data? How do I compute the power of 80 linear bands in this range (the frequency resolution will be 3.125 kHz) on data sampled for 100 ms?</p> <p>I know I have to read 25000 samples (100 ms at 250 kHz), but I do not know how to compute the frequency power in NumPy only.</p> <p>Here is my skeleton code. The <code>get_spectrum()</code> function is what I want to implement, but all the attempts I made with NumPy (based on the FFT) were probably wrong, because the results were inconsistent (nothing special happens when I transmit on this frequency).</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
from rtlsdr import RtlSdr

def get_spectrum(samples, sample_rate, freq_resolution):
    # TODO :)
    return [0, 1, 2, 3, 4]

sdr = RtlSdr()
sdr.sample_rate = 250_000
sdr.center_freq = 446_000_000
sdr.gain = 40

duration = 0.100  # in seconds

sdr.read_samples(2048)  # get rid of initial empty samples

while True:
    samples = sdr.read_samples(duration * sdr.sample_rate)

    spectrum = get_spectrum(
        samples = samples,
        sample_rate = sdr.sample_rate,
        freq_resolution = sdr.sample_rate / 80)  # to be drawn in an 80-column terminal

    # normalize data for easy visualization
    min_val = np.min(spectrum)
    max_val = np.max(spectrum)
    scaled_data = 9 * ((spectrum - min_val) / (max_val - min_val))

    for i in scaled_data:
        print(int(i), end='')
    print()
</code></pre> <p>For my use case, the power level does not have to be accurate; there is no need for a real scientific unit. It is only to be able to visualize and compare the power of each frequency.</p> <p>I know it is related to the FFT and/or Power Spectral Density, but I don't know how to do it in NumPy only.</p>
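A minimal NumPy-only approach is an FFT power spectrum averaged into bands, with `fftshift` so band 0 corresponds to the lower band edge (centre frequency minus half the sample rate). This is a sketch with uncalibrated power, which the question says is acceptable; `sample_rate` is kept only to match the question's signature:

```python
import numpy as np

def get_spectrum(samples, sample_rate, n_bands=80):
    """Mean power per band; bands run from -sample_rate/2 up to +sample_rate/2."""
    samples = np.asarray(samples)
    n = len(samples) - len(samples) % n_bands  # truncate so all bands are equal
    spec = np.fft.fftshift(np.fft.fft(samples[:n]))
    power = np.abs(spec) ** 2
    return power.reshape(n_bands, -1).mean(axis=1)

# self-check: a complex tone 50 kHz above the centre frequency, fs = 250 kHz
fs, f = 250_000.0, 50_000.0
tone = np.exp(2j * np.pi * f * np.arange(24_000) / fs)
bands = get_spectrum(tone, fs)
peak = int(np.argmax(bands))
expected = int((f + fs / 2) / fs * 80)  # should land in band 56 of 80
print(peak, expected)
assert abs(peak - expected) <= 1
```

A window (e.g. `np.hanning(n)`) applied before the FFT would reduce leakage between bands, at the cost of a little resolution.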
<python><numpy><fft><spectrogram><pyrtlsdr>
2024-06-20 07:36:18
1
744
JayMore
78,645,930
10,200,497
How can I find the first row after a number of duplicated rows?
<p>My DataFrame is:</p> <pre><code>import pandas as pd

df = pd.DataFrame(
    {
        'x': ['a', 'a', 'a', 'b', 'b', 'c', 'c', 'c'],
        'y': list(range(8))
    }
)
</code></pre> <p>And this is the expected output. I want to create column <code>z</code>:</p> <pre><code>   x  y    z
0  a  0  NaN
1  a  1  NaN
2  a  2  NaN
3  b  3    3
4  b  4  NaN
5  c  5  NaN
6  c  6  NaN
7  c  7  NaN
</code></pre> <p><em><strong>The logic is:</strong></em></p> <p>I want to find the first row after the first group of duplicated rows. For example, in column <code>x</code>, the value <code>a</code> is the first duplicated value. I want to find the one row after the <code>a</code> values end, and then put the <code>y</code> of that row in the <code>z</code> column.</p> <p>This is my attempt, which did not give me the output:</p> <pre><code>m = (df.x.duplicated())
out = df[m]
</code></pre>
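Assuming the first run of duplicates always starts at row 0, one sketch is to locate the first row whose `x` differs from the very first value:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        'x': ['a', 'a', 'a', 'b', 'b', 'c', 'c', 'c'],
        'y': list(range(8))
    }
)

# first index label where 'x' differs from the very first value,
# i.e. the row right after the first run of duplicates
pos = (df['x'] != df['x'].iloc[0]).idxmax()

df['z'] = np.nan
df.loc[pos, 'z'] = df.loc[pos, 'y']
print(df)
```

One caveat: if the whole column holds a single value, `idxmax` over an all-False mask returns the first label, so that edge case would need an explicit check.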
<python><pandas><dataframe>
2024-06-20 07:00:59
2
2,679
AmirX
78,645,886
2,919,585
Replacement for legacy scipy.interpolate.interp1d for piecewise linear interpolation with extrapolation
<p>The documentation for <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html" rel="nofollow noreferrer"><code>scipy.interpolate.interp1d</code></a> tells me</p> <blockquote> <p>This class is considered legacy and will no longer receive updates. This could also mean it will be removed in future SciPy versions. For a guide to the intended replacements for <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html#scipy.interpolate.interp1d" rel="nofollow noreferrer"><code>interp1d</code></a> see <a href="https://docs.scipy.org/doc/scipy/tutorial/interpolate/1D.html#tutorial-interpolate-1dsection" rel="nofollow noreferrer">1-D interpolation</a>.</p> </blockquote> <p>The linked page basically tells me to use <a href="https://numpy.org/devdocs/reference/generated/numpy.interp.html" rel="nofollow noreferrer"><code>numpy.interp</code></a> for piecewise linear interpolation. However, as far as I can tell, that function does not support linear extrapolation beyond the data range (only constant extrapolation). This makes it an inadequate replacement where linear extrapolation is desired.</p> <p>What is the recommended replacement for <code>interp1d</code> in those cases, now that it is no longer recommended for new code?</p> <p>Here's an example showing what I want to achieve (orange) and what <code>np.interp</code> does (green).</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.interpolate import interp1d
from matplotlib import pyplot as plt

a = np.linspace(0, 10, 11)
b = np.sin(a)
plt.plot(a, b, 'o')

x = np.linspace(-2, 12, 51)

y = interp1d(a, b, fill_value='extrapolate')(x)
plt.plot(x, y, '+-')

y = np.interp(x, a, b)
plt.plot(x, y, 'x:')
</code></pre> <p><a href="https://i.sstatic.net/9Q2fI9fK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Q2fI9fK.png" alt="example output" /></a></p>
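One candidate from the same transition guide, stated as a sketch rather than an official recommendation: `scipy.interpolate.make_interp_spline` with `k=1` builds a degree-1 (piecewise linear) B-spline, and the resulting `BSpline` extrapolates by default, continuing the end segments linearly like `interp1d(..., fill_value='extrapolate')`:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

a = np.array([0.0, 1.0, 2.0])
b = np.array([0.0, 1.0, 4.0])

# k=1 -> piecewise linear; BSpline extrapolates beyond the data by default
f = make_interp_spline(a, b, k=1)

print(f(3.0))   # last segment has slope 3, so 4 + 3*1 = 7
print(f(-1.0))  # first segment has slope 1, so 0 - 1 = -1
```

For a NumPy-only alternative, `np.interp` would need its two end values patched manually from the end-segment slopes.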
<python><numpy><scipy><interpolation><extrapolation>
2024-06-20 06:48:43
1
571
schtandard
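For the linear-with-extrapolation case, SciPy's 1-D interpolation guide points at `scipy.interpolate.make_interp_spline(a, b, k=1)`, whose returned `BSpline` is piecewise linear and extrapolates by default. If SciPy is to be avoided entirely, the behaviour of `interp1d(..., fill_value='extrapolate')` is also easy to reproduce by hand; a dependency-free sketch (the function name is mine):

```python
from bisect import bisect_right

def lin_interp_extrap(x, xp, fp):
    """Piecewise linear interpolation through (xp, fp), extending the
    first/last segment linearly outside the data range, i.e. the
    behaviour of interp1d(xp, fp, fill_value='extrapolate').
    xp must be sorted ascending."""
    i = bisect_right(xp, x) - 1
    # clamp the segment index so points outside the range reuse the
    # slope of the first or last segment
    i = max(0, min(i, len(xp) - 2))
    slope = (fp[i + 1] - fp[i]) / (xp[i + 1] - xp[i])
    return fp[i] + slope * (x - xp[i])
```

For array inputs the same clamping vectorises as `np.clip(np.searchsorted(xp, x, side='right') - 1, 0, len(xp) - 2)` followed by the same arithmetic.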
78,645,873
3,760,519
How do I declare a numpy array to be 1 dimensional of arbitrary length?
<p>I am working on a python code base and the team has decided to make everything statically typed. I want to declare a numpy array of floats to be one dimension, with arbitrary length. I currently have the following:</p> <pre><code>float64 = np.dtype[np.float64] floats = np.ndarray[Any, float64] </code></pre> <p>What exactly is the role of <code>Any</code> in this case? Does this declaration permit multi-dimensional arrays? How do I statically differentiate between 1, 2, or 3 dimensional arrays?</p>
<python><numpy><python-typing>
2024-06-20 06:45:49
3
2,406
Chechy Levas
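In `np.ndarray[Any, np.dtype[np.float64]]` the first parameter is the *shape* type and the second the dtype, so the `Any` there accepts arrays of any dimensionality. Recent NumPy additionally lets the shape be spelled as a tuple of `int`s, which is how 1-D, 2-D and 3-D arrays can be distinguished statically (type checkers only enforce this with new-enough NumPy; at runtime the subscription is purely cosmetic). A sketch, with illustrative alias names:

```python
from typing import Any

import numpy as np

# shape type first, dtype second
FloatsAnyD = np.ndarray[Any, np.dtype[np.float64]]            # any number of dims
Floats1D = np.ndarray[tuple[int], np.dtype[np.float64]]       # exactly 1-D, any length
Floats2D = np.ndarray[tuple[int, int], np.dtype[np.float64]]  # exactly 2-D

def mean_of(v: Floats1D) -> np.float64:
    return v.mean()
```

`numpy.typing.NDArray[np.float64]` is shorthand for the `Any`-shaped form; for checkers to actually reject a 2-D array passed as `Floats1D`, a shape-typed NumPy release (2.1 or later, as I understand it) is needed.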
78,645,751
471,376
discover which package installed a Python script entry point?
<p><em><strong>tl;dr</strong></em> how do I discover which package installed a script entry point under <code>Scripts</code> directory?</p> <p>Given a Python environment at path <code>Python</code>, it has a scripts entry points directory at <code>Python/Scripts</code>. For script <code>Python/Scripts/foo.py</code>, how do I discover which Python package installed <code>foo.py</code>?</p> <p>Or in other words, how do I &quot;reverse lookup&quot; Python script entry points?</p>
<python><pip><setuptools>
2024-06-20 06:10:39
1
7,289
JamesThomasMoon
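The metadata needed for this reverse lookup ships with every installed wheel: each distribution records its declared entry points and its installed files (the `RECORD`), both readable through the stdlib `importlib.metadata`. A sketch (the function name is mine) that checks declared console/GUI scripts first and falls back to scanning each distribution's file list for a matching file under `Scripts`/`bin`:

```python
from importlib import metadata

def script_owners(script_name):
    """Names of installed distributions that provide `script_name`,
    either as a console/gui entry point or as an installed file."""
    owners = set()
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if name is None:
            continue  # broken/partial install, no usable metadata
        # entry points declared as console or GUI scripts
        if any(ep.name == script_name
               and ep.group in ("console_scripts", "gui_scripts")
               for ep in dist.entry_points):
            owners.add(name)
            continue
        # plain files the distribution installed (covers Scripts/foo.py)
        for f in dist.files or ():
            if f.name in (script_name, script_name + ".py") and \
                    any(part in ("Scripts", "scripts", "bin") for part in f.parts):
                owners.add(name)
                break
    return sorted(owners)
```

For the opposite direction (which module an entry point imports), `ep.value` holds the `module:attr` string.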
78,645,711
2,604,247
Is Polars Guaranteed to Maintain Order After Deduplicating Over a Column?
<h6>The Code</h6> <pre class="lang-py prettyprint-override"><code>import polars as pl ... # Sort by date, then pick the first row for each UID (earliest date) sample_frame=sample_frame.sort(by=DATE_COL).unique(subset=UID_COL, keep='first') </code></pre> <h6>Question</h6> <p>I expected the resulting frame after the above operation to be sorted in order of date, but seems not the case.</p> <p>So does the deduplication operation mess up the order of the remaining rows as well? Do the polars documentation or its maintainers provide any guarantee on the row ordering after calling <code>unique</code>?</p>
<python><sorting><duplicates><python-polars>
2024-06-20 05:58:36
1
1,720
Della
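No ordering guarantee by default: `unique` may be hash-based and parallel, and Polars exposes exactly this switch as `unique(..., maintain_order=True)` (at some performance cost); alternatively the frame can simply be re-sorted afterwards. The intended "earliest date per UID, result in date order" semantics, sketched without Polars using a stable sort plus an insertion-ordered dict:

```python
def first_per_key(rows, key, sort_key):
    """Sort rows by sort_key, keep the first row seen per key, and
    preserve that sorted order: the effect of
    .sort(DATE_COL).unique(subset=UID_COL, keep='first', maintain_order=True)."""
    kept = {}
    for row in sorted(rows, key=sort_key):  # sorted() is stable
        k = key(row)
        if k not in kept:
            kept[k] = row                   # dicts preserve insertion order
    return list(kept.values())
```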
78,645,618
739,809
How Can I Optimize Machine Translation Model Training to Overcome GPU Memory Overflow Issues?
<p>I'm trying to train a fairly standard machine translation transformer model using PyTorch. It's based on the &quot;Attention is All You Need&quot; paper. When I ran it on my PC with standard hyperparameters and a batch size of 128 segments (pairs of source and target language sentences), it worked fine but was slow, as expected.</p> <p>Now, I'm running it on an AWS p2.xlarge instance with a Tesla K80 GPU, and the program crashes quickly due to GPU memory overflow. I've tried everything to free up GPU memory, but I've had to reduce the batch size to 8, which is obviously inefficient for learning.</p> <p>Even with a batch size of 8, I occasionally get this error message:</p> <blockquote> <p>File &quot;C:\Projects\MT004.venv\Lib\site-packages\torch\autograd\graph.py&quot;, line 744, in _engine_run_backward return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.95 GiB. GPU</p> </blockquote> <p>I've tried both SpaCy's tokenizer and the XLM-R tokenizer. 
With the XLM-R tokenizer, I can only use a batch size of 2, and even then, it sometimes crashes.</p> <p>Here is the code where things crash:</p> <pre><code>def train_epoch(src_train_sent, tgt_train_sent, model, optimizer): model.train() losses = 0 torch.cuda.empty_cache() # Clear cache before forward pass train_dataloader = SrcTgtIterable(src_train_sent, tgt_train_sent, batch_size=BATCH_SIZE, collate_fn=collate_fn) for src, tgt in train_dataloader: src = src.to(DEVICE) tgt = tgt.to(DEVICE) tgt_input = tgt[:-1, :] src_mask, tgt_mask, src_padding_mask, tgt_padding_mask = create_mask(src, tgt_input) logits = model(src, tgt_input, src_mask, tgt_mask, src_padding_mask, tgt_padding_mask, src_padding_mask) optimizer.zero_grad() tgt_out = tgt[1:, :].long() loss = loss_fn(logits.reshape(-1, logits.shape[-1]), tgt_out.reshape(-1)) # Delete unnecessary variables before backward pass del src, tgt_input, src_mask, tgt_mask, src_padding_mask, tgt_padding_mask, logits, tgt_out torch.cuda.empty_cache() # Clear cache after deleting variables loss.backward() optimizer.step() losses += loss.item() # Free GPU memory del loss torch.cuda.empty_cache() # Clear cache after each batch </code></pre> <p>Things crash on <code>loss.backward()</code></p> <p>Unfortunately, I cannot use a bigger server since I don't have enough quota on EC2.</p> <p>Any idea what I might be doing wrong? Any suggestions on how to optimize things?</p>
<python><memory-management><gpu><machine-translation>
2024-06-20 05:22:55
0
2,537
dsb
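Beyond the `empty_cache()` calls (which do not shrink the live tensors the backward pass needs), the standard levers here are length-bucketed batches, mixed precision, and gradient accumulation: run several micro-batches, scale each loss by `1/accum_steps`, and call `optimizer.step()` only every `accum_steps` batches, giving the statistical effect of a large batch at small-batch memory cost. The control flow, framework-free (the callables are stand-ins for the torch calls named in the comments):

```python
ACCUM_STEPS = 16  # effective batch = micro_batch_size * ACCUM_STEPS

def train_epoch(batches, forward_backward, opt_step, opt_zero_grad):
    """forward_backward(batch, scale): loss = loss_fn(...) * scale; loss.backward()
    opt_step / opt_zero_grad: optimizer.step() / optimizer.zero_grad()"""
    opt_zero_grad()
    steps = 0
    for i, batch in enumerate(batches, start=1):
        # gradients accumulate in-place across backward() calls
        forward_backward(batch, 1.0 / ACCUM_STEPS)
        if i % ACCUM_STEPS == 0:
            opt_step()
            opt_zero_grad()
            steps += 1
    return steps
```

Note also that `optimizer.zero_grad()` belongs before the forward pass, not between `forward` and `backward` as in the question's loop, once accumulation is in play.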
78,645,414
2,057,516
1 out of many calls to overridden django `Model.save()` generates a pylint `unexpected-keyword-arg` error - how do I satisfy it?
<p>I implemented and have been using this model (super)class I created for years, and it is completely stable on an active site. I just made a small minor tweak to some completely unrelated portion of its code, and checked my work (on my machine) with the latest superlinter. Our CI tests on github currently use an old superlinter and it has never complained, so I was surprised to see a linting error on code I haven't touched in years.</p> <p>My model class has an overridden <code>save()</code> method, which adds a few extra arguments that it pops off <code>kwargs</code> before calling <code>super().save()</code>. I have a number of calls to save inside the class, and the linter is fine with thos extra arguments everywhere else, except in 1 spot (an override of <code>from_db</code>):</p> <pre class="lang-py prettyprint-override"><code>@classmethod def from_db(cls, *args, **kwargs): # Instantiate the model object rec = super().from_db(*args, **kwargs) # If autoupdates are not enabled (i.e. we're not in &quot;lazy&quot; mode) if not cls.get_coordinator().are_lazy_updates_enabled(): return rec # Get the field names queryset_fields = set(args[1]) # field_names # Intersect the queryset fields with the maintained fields common_fields = set(cls.get_my_update_fields()).intersection(queryset_fields) # Look for maintained field values that are None lazy_update_fields = [fld for fld in common_fields if getattr(rec, fld) is None] if len(lazy_update_fields) &gt; 0: # Trigger a lazy auto-update rec.save(fields_to_autoupdate=lazy_update_fields, via_query=True) return rec </code></pre> <p>I tried typehinting <code>rec</code> when it's assigned, but that didn't satisfy the linter. I figure it's the call to <code>super().from_db()</code> that is setting a type that doesn't include my args... 
but my save overload is definitely called.</p> <p>So how do I satisfy <code>pylint</code> here?</p> <p><strong>ADDENDUM</strong>:</p> <p>Incidentally, I confirmed that the old pylint/superlinter still doesn't complain about the call to save. I pushed the code despite my local newer superlinter's complaint.</p> <p>I looked at the base class's <code>from_db</code> method (below). It's pretty simple. Perhaps I could just copy the code into my method and not call <code>super().from_db</code> (even though that seems ridiculous)...</p> <pre class="lang-py prettyprint-override"><code>@classmethod def from_db(cls, db, field_names, values): if len(values) != len(cls._meta.concrete_fields): values_iter = iter(values) values = [ next(values_iter) if f.attname in field_names else DEFERRED for f in cls._meta.concrete_fields ] new = cls(*values) new._state.adding = False new._state.db = db return new </code></pre> <p>Also: just incidentally, I confirmed via the logs that my save override is correctly called:</p> <pre><code>[Tue May 28 15:13:35.396716 2024] [wsgi:error] [pid 2648907:tid 2649014] [remote 172.20.202.79:47324] Triggering lazy auto-update of fields: TracerLabel.{name} [Tue May 28 15:13:35.396755 2024] [wsgi:error] [pid 2648907:tid 2649014] [remote 172.20.202.79:47324] Auto-updated TracerLabel.name in TracerLabel.9 using TracerLabel._name from [&lt;empty&gt;] to [13C6]. </code></pre> <p>The <code>Triggering lazy auto-update of fields: ...</code> message is only printed in this method (though I'd removed it from my code above) and the <code>Auto-updated ...</code> message is only ever printed via my <code>.save()</code> override.</p> <p>So, just to put a fine point on it, that call to save does indeed call my save and passes it the custom arguments.</p>
<python><django><pylint>
2024-06-20 03:49:12
0
1,225
hepcat72
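A plausible explanation for the single complaint: inside `from_db`, pylint infers `rec` from `super().from_db(...)`, i.e. as the *base* model type, whose `save()` lacks the custom keywords; everywhere else the receiver is inferred as the subclass, so the widened signature is visible. The usual remedies are a targeted inline disable or a `typing.cast`. A minimal, Django-free reconstruction of the pattern (all names invented):

```python
class Base:
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_db(cls, value):
        return cls(value)  # static tools tend to see `Base` here

    def save(self):
        return "base save"


class AutoUpdateModel(Base):
    def save(self, via_query=False):  # widened signature, as in the question
        return ("auto save", via_query)

    @classmethod
    def from_db(cls, value):
        rec = super().from_db(value)  # inferred as Base; really an instance of cls
        # either silence the false positive on the one call site ...
        result = rec.save(via_query=True)  # pylint: disable=unexpected-keyword-arg
        # ... or help inference instead: rec = typing.cast("AutoUpdateModel", rec)
        return rec, result
```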
78,645,312
2,264,738
Python requests GET call gets into unlimited waiting, where curl succeeds
<p>In our legacy application we use Python2.6 with <strong>requests 2.9.1</strong> for making API requests. There a strange behavior is noticed with Python requests library that is, it gets in to unlimited wait where <strong>curl</strong> requests succeeds</p> <pre><code>curl https://url # getting 200 response </code></pre> <p>Wherein,</p> <pre><code>import requests requests.get('https://url', verify=False) # getting in to unlimited waiting if no timeout is mentioned. </code></pre> <p>The same URL is working from the browsers as well, not able to figure out the reason behind it</p> <p><strong>System Info</strong></p> <ul> <li>CentOS release 6.10 (Final)</li> <li>Python 2.6.6</li> <li>requests 2.9.1</li> </ul> <p><strong>curl Requests</strong></p> <pre><code># curl 'https://b3-reader.ew1-wip.umm.is-pr.com/abcd/healthstatus' -vvv * About to connect() to b3-reader.ew1-wip.umm.is-pr.com port 443 (#0) * Trying &lt;IP&gt;... connected * Connected to b3-reader.ew1-wip.umm.is-pr.com (&lt;IP&gt;) port 443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * Server certificate: * subject: CN=ew1-wip.umm.is-pr.com * start date: May 29 00:00:00 2024 GMT * expire date: Jun 27 23:59:59 2025 GMT * common name: ew1-wip.umm.is-pr.com * issuer: CN=Amazon RSA 2048 M03,O=Amazon,C=US &gt; GET /abcd/healthstatus HTTP/1.1 &gt; User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.44 zlib/1.2.3 libidn/1.18 libssh2/1.4.2 &gt; Host: b3-reader.ew1-wip.umm.is-pr.com &gt; Accept: */* &gt; &lt; HTTP/1.1 200 OK &lt; Date: Thu, 20 Jun 2024 02:57:58 GMT &lt; Content-Type: application/json; charset=utf-8 &lt; Transfer-Encoding: chunked &lt; Connection: keep-alive &lt; X-Content-Type-Options: nosniff &lt; X-Frame-Options: SAMEORIGIN &lt; Content-Security-Policy: frame-src 'self'; &lt; * Connection #0 to host b3-reader.ew1-wip.umm.is-pr.com left intact * Closing 
connection #0 </code></pre> <p>NOTE: <em>Some information's above are masked..</em></p> <p><strong>Error info when the wait is interrupted</strong></p> <pre><code>/usr/lib/python2.6/site-packages/requests/packages/urllib3/util/ssl_.py:315: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning. SNIMissingWarning /usr/lib/python2.6/site-packages/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning. InsecurePlatformWarning ^CTraceback (most recent call last): File &quot;ping-test.py&quot;, line 14, in &lt;module&gt; response = requests.get(url,verify=False) File &quot;/usr/lib/python2.6/site-packages/requests/api.py&quot;, line 67, in get return request('get', url, params=params, **kwargs) File &quot;/usr/lib/python2.6/site-packages/requests/api.py&quot;, line 53, in request return session.request(method=method, url=url, **kwargs) File &quot;/usr/lib/python2.6/site-packages/requests/sessions.py&quot;, line 468, in request resp = self.send(prep, **send_kwargs) File &quot;/usr/lib/python2.6/site-packages/requests/sessions.py&quot;, line 576, in send r = adapter.send(request, **kwargs) File &quot;/usr/lib/python2.6/site-packages/requests/adapters.py&quot;, line 376, in send timeout=timeout File &quot;/usr/lib/python2.6/site-packages/requests/packages/urllib3/connectionpool.py&quot;, line 560, in urlopen body=body, headers=headers) File 
&quot;/usr/lib/python2.6/site-packages/requests/packages/urllib3/connectionpool.py&quot;, line 345, in _make_request self._validate_conn(conn) File &quot;/usr/lib/python2.6/site-packages/requests/packages/urllib3/connectionpool.py&quot;, line 785, in _validate_conn conn.connect() File &quot;/usr/lib/python2.6/site-packages/requests/packages/urllib3/connection.py&quot;, line 252, in connect ssl_version=resolved_ssl_version) File &quot;/usr/lib/python2.6/site-packages/requests/packages/urllib3/util/ssl_.py&quot;, line 317, in ssl_wrap_socket return context.wrap_socket(sock) File &quot;/usr/lib/python2.6/site-packages/requests/packages/urllib3/util/ssl_.py&quot;, line 132, in wrap_socket return wrap_socket(socket, **kwargs) File &quot;/usr/lib64/python2.6/ssl.py&quot;, line 341, in wrap_socket suppress_ragged_eofs=suppress_ragged_eofs) File &quot;/usr/lib64/python2.6/ssl.py&quot;, line 120, in __init__ self.do_handshake() File &quot;/usr/lib64/python2.6/ssl.py&quot;, line 279, in do_handshake self._sslobj.do_handshake() KeyboardInterrupt </code></pre>
<python><http><python-requests><centos6><python-2.6>
2024-06-20 02:54:34
0
334
user2264738
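A likely diagnosis, given the `SNIMissingWarning` in the interrupted traceback: Python 2.6's `ssl` module cannot send the TLS Server Name Indication extension, while curl's NSS stack can, and a front end that routes on SNI (common behind load balancers, as the Amazon-issued certificate here suggests) may stall or mishandle a ClientHello without it, leaving `do_handshake()` blocked forever. Two practical notes regardless of root cause: `requests` has *no default timeout*, so always pass one (`requests.get(url, timeout=(5, 30))` for connect/read seconds), and the SNI capability of the local build can be checked directly:

```python
import ssl

def sni_available():
    """True when the ssl module can send Server Name Indication.
    Python 2.6 builds typically report False; Python >= 2.7.9 / 3.x
    with modern OpenSSL report True."""
    return bool(getattr(ssl, "HAS_SNI", False))
```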
78,645,168
19,276,472
Handling sync vs async with scrapy + Playwright
<p>I'm using scrapy with Playwright to load a Google Jobs search results page. Playwright is needed to be able to load the page in a browser setting, then to click on different jobs to reveal the details of the job.</p> <p>Example URL I want to extract information from: <a href="https://www.google.com/search?q=product+designer+nyc&amp;ibp=htl;jobs" rel="nofollow noreferrer">https://www.google.com/search?q=product+designer+nyc&amp;ibp=htl;jobs</a></p> <p>While I can get the code to open that page in a Playwright browser and parse the fields I want in an interactive python environment, I'm not sure how to integrate Playwright into scrapy smoothly. I have the <code>start_requests</code> function set up correctly, in the sense that Playwright is set up and it'll open up a browser to the desired page, like the URL above.</p> <p>Here's what I have so far for the <code>parse</code> function:</p> <pre><code>async def parse(self, response): page = response.meta[&quot;playwright_page&quot;] jobs = page.locator(&quot;//li&quot;) num_jobs = jobs.count() for idx in range(num_jobs): # For each job found, first need to click on it await jobs.nth(idx).click() # Then grab this large section of the page that has details about the job # In that large section, first click a couple of &quot;More&quot; buttons job_details = page.locator(&quot;#tl_ditsc&quot;) more_button1 = job_details.get_by_text(&quot;More job highlights&quot;) await more_button1.click() more_button2 = job_details.get_by_text(&quot;Show full description&quot;) await more_button2.click() # Then take that large section and pass it to another function for parsing soup = BeautifulSoup(job_details, 'html.parser') data = self.parse_single_jd(soup) ... yield {data here} return </code></pre> <p>When I try to run the above, it errors on the <code>for idx in range(num_jobs)</code> line with &quot;TypeError: 'coroutine' object cannot be interpreted as an integer&quot;. 
When running in an interactive python shell, the use of <code>page.locator</code>, <code>jobs.count()</code>, <code>jobs.nth(#).click()</code>, etc all work. This leads me to believe that I'm misunderstanding something fundamental about the async nature of parse, which I believe is needed in order to be able to do things like click on the page (per <a href="https://github.com/scrapy-plugins/scrapy-playwright?tab=readme-ov-file#receiving-page-objects-in-callbacks" rel="nofollow noreferrer">this documentation</a>). It's like I need to force <code>num_jobs = jobs.count()</code> to 'evaluate', but it's not doing so.</p> <p>(Note that a bit further down, if I want to create an <code>if more_button1.count()</code> check before the <code>await more_button1.click()</code> line, I run into the same sort of error - it's as if I need to force the <code>.count()</code> to 'evaluate')</p> <p>Any advice?</p>
<python><scrapy><playwright>
2024-06-20 01:33:55
1
720
Allen Y
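The diagnosis in the last paragraph is right: in Playwright's *async* API every call (`locator.count()`, `click()`, the lot) returns a coroutine until awaited, so the fix is `num_jobs = await jobs.count()` and `if await more_button1.count():` (the interactive experiments presumably used the sync API, where these return values directly). Likewise `job_details` is a Locator, so its markup must be fetched with `await job_details.inner_html()` before handing it to BeautifulSoup. The failure mode, reproduced without Playwright:

```python
import asyncio

async def count():                 # stand-in for the async locator.count()
    return 3

async def broken():
    coro = count()
    try:
        return list(range(coro))   # missing await: range() gets a coroutine
    except TypeError as exc:
        coro.close()               # avoid the "never awaited" warning
        return str(exc)            # "'coroutine' object cannot be interpreted..."

async def fixed():
    return list(range(await count()))
```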
78,645,106
7,563,454
Get distance from a point to the nearest box
<p>I have a 3D space where positions are stored as tuples, eg: <code>(2, 0.5, -4)</code>. If I want to know the distance between two points I just do <code>dist = (abs(x1 -x2), abs(y1 - y2), abs(z1 - z2))</code> and if I want a radius <code>distf = (dist[0] + dist[1] + dist[2]) / 3</code>. Now I have boxes each defined by two min / max positions (eg: <code>(-4 8 -16)</code> to <code>(4, 12, 6)</code>) and I want to know the distance between my point to the closest one: What is the simplest way to know the distance to the closest face in all 3 directions, or 0 in case the position is inside a box? Just looking for the lightest solution that doesn't require numpy or libraries other than defaults like <code>math</code> since I'm not using those in my project.</p> <p>This is my messy solution which should probably work but I'd like to know if there's anything better.</p> <pre><code>point = (8, 12, 16) box_min = (-4, -4, -4) box_max = (4, 4, 4) box_center = ((box_min[0] + box_max[0]) / 2, (box_min[1] + box_max[1]) / 2, (box_min[2] + box_max[2]) / 2) box_scale = (abs(box_max[0] - box_min[0]), abs(box_max[1] - box_min[1]), abs(box_max[2] - box_min[2])) dist = (abs(box_center[0] - point[0]) + box_scale[0] / 2, abs(box_center[1] - point[1]) + box_scale[1] / 2, abs(box_center[2] - point[2]) + box_scale[2] / 2) </code></pre>
<python><math><3d>
2024-06-20 00:50:35
1
1,161
MirceaKitsune
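The standard closed form for point-to-box distance is simpler than the center/scale reconstruction above: per axis, measure how far the point overshoots the `[min, max]` interval (zero when inside), then combine. This yields 0 inside the box, gives the per-axis face distances directly, and the nearest of many boxes is then just a `min()` over them. Using only `math` (names are mine):

```python
import math

def point_box_distance(p, box_min, box_max):
    """Distance from point p to an axis-aligned box; 0.0 when inside.
    Returns (per_axis_distances, euclidean_distance)."""
    # per-axis overshoot: how far p lies outside [min, max] on that axis
    per_axis = tuple(max(box_min[i] - p[i], 0.0, p[i] - box_max[i])
                     for i in range(3))
    return per_axis, math.sqrt(sum(d * d for d in per_axis))

def nearest_box_distance(p, boxes):
    """boxes: iterable of (box_min, box_max) pairs."""
    return min(point_box_distance(p, lo, hi)[1] for lo, hi in boxes)
```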
78,644,937
1,498,830
How can I ensure some code is run even if a test suite is aborted?
<p>I have a Behave test suite that starts by spinning up my application in a Docker container. I've added some code using <a href="https://docs.python.org/3/library/atexit.html" rel="nofollow noreferrer"><code>atexit</code></a> to ensure that the container is stopped and removed when the suite exits 'normally':</p> <pre><code>def handle_exit(): context.container.stop() context.docker_client.close() atexit.register(handle_exit) </code></pre> <p>This works fine as long as the test suite exits normally, even with failing tests. But sometimes the tests are aborted:</p> <pre class="lang-none prettyprint-override"><code>HOOK-ERROR in before_all: FileNotFoundError: [Errno 2] No such file or directory: '/home/peter/Nextcloud/PycharmProjects/boardgamelibrary/boardgamelibrary' HOOK-ERROR in after_all: AttributeError: 'Context' object has no attribute 'driver' ABORTED: By user. </code></pre> <p>In which case the container is not cleaned up, and subsequent attempts to run the suite will fail.</p> <p>Is there a <em>stronger</em> version of <code>atexit</code> that will run my <code>handle_exit</code> routine in all exit/abort scenarios?</p> <p>To be clear, this is not a question of exception handling, or if it is, the exceptions are being handled internally by behave. There is no place for me to insert a try-catch block.</p>
<python><python-behave>
2024-06-19 23:01:55
0
2,962
spierepf
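`atexit` only fires on interpreter exit paths that unwind normally: Ctrl-C becomes `KeyboardInterrupt` and usually does run handlers, but a default-handled SIGTERM does not, and SIGKILL can never be caught. A robust layering is to start the container with `auto_remove=True` (the SDK equivalent of `docker run --rm`, so the daemon cleans up even if the test process dies hard) and register one idempotent cleanup with both `atexit` and signal handlers. A sketch (function names mine):

```python
import atexit
import signal
import sys

def install_cleanup(cleanup):
    """Run `cleanup` exactly once: on normal exit, SIGINT or SIGTERM.
    (Nothing can catch SIGKILL; rely on auto_remove=True / --rm there.)"""
    state = {"done": False}

    def run_once(*_):
        if not state["done"]:
            state["done"] = True
            cleanup()

    atexit.register(run_once)

    def handler(signum, _frame):
        run_once()
        sys.exit(128 + signum)

    for sig in (signal.SIGINT, signal.SIGTERM):
        try:
            signal.signal(sig, handler)
        except ValueError:   # not in the main thread; atexit alone applies
            pass
    return run_once
```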
78,644,927
539,490
"pre-import" python dependencies in docker image
<p>I am building a Python 3.10 Docker image in an Ubuntu-latest GitHub action that is uploaded (by serverless.com CLI) to form an AWS lambda. It has many Python dependencies. With a clean install of the dependencies on my local Mac, it can take 20 seconds to import the main source file (calc.py). Similarly, when the AWS Lambda first starts, it can take 20 seconds to import the main source file (calc.py). If I comment out all of the dependencies or they are left in but it is a subsequent call to the AWS Lambda then the calc.py file is imported very quickly (&lt; 1 second).</p> <p>I want to run an initial import in the Docker container to get the Python dependencies to &quot;pre-compile&quot; so that when the image is started by the AWS Lambda it is &quot;warmed up&quot; and ready to go instead of taking 20+ seconds to serve its first request.</p> <p>I have the following Dockerfile, but the line <code>RUN python -c &quot;from src.calc import process_payload&quot;</code> does not seem to have any effect on the AWS Lambda despite working in my local environment.</p> <pre><code># Using the SHA hash as suggested by: https://snyk.io/blog/best-practices-containerizing-python-docker/#https://snyk.io/blog/best-practices-containerizing-python-docker/#1. 
Use explicit and deterministic Docker base image tags for containerized Python applications # This is the SHA hash for the `public.ecr.aws/lambda/python:3.10` image # You can get it by running `docker pull public.ecr.aws/lambda/python:3.10` and then `docker images --digests | grep &quot;public.ecr.aws/lambda/python.*3.10&quot;` FROM public.ecr.aws/lambda/python:3.10@sha256:7688a9c4c1a27ea3bfcf12df55cc958e188bb15eea81b463750fe42722e0de87 RUN mkdir src # Seeing the following warning: # Matplotlib created a temporary cache directory at /tmp/matplotlib-4sw1aw8x because the default path (/home/sbx_user1051/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing. # https://stackoverflow.com/questions/9827377/setting-matplotlib-mplconfigdir-consider-setting-mplconfigdir-to-a-writable-dir#comment120692932_9947889 # Make a non temporary directory RUN mkdir /matplotlib_cache ENV MPLCONFIGDIR=&quot;/matplotlib_cache&quot; COPY ./requirements.txt ./ # Install system dependencies using yum RUN yum -y update &amp;&amp; \ yum -y install ffmpeg libSM libXext mesa-libGL &amp;&amp; \ yum clean all &amp;&amp; \ rm -rf /var/cache/yum RUN pip install -r requirements.txt COPY ./src/ ./src/ # https://github.com/matplotlib/matplotlib/pull/16374#issuecomment-580549298 RUN python -c &quot;import matplotlib&quot; # Still trying to get code to precompile RUN python -c &quot;from src.calc import process_payload&quot; CMD [&quot;src/main.lambda_handler&quot;] </code></pre> <p>Any advice on how to &quot;pre-import&quot; the code and dependencies in the Docker image so that the AWS lambda starts quickly? 
Please let me know if you need more info.</p> <p>* Update 1 *</p> <p>I changed the Dockerfile to use <code>FROM python:3.10-slim</code> and <code>CMD [&quot;uvicorn&quot;, &quot;main:app&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;8000&quot;]</code> (also installed uvicorn) and instead built an image and started a container on a DigitalOcean Ubuntu server (droplet). This results in the container consistently responding to an initial request about 4-5 times faster (~4-6 seconds vs 20-25) on a machine 2 times smaller. However this isn't a like for like comparison because if I build the same Docker image on the DigitalOcean server but without the <code>RUN python -c &quot;from src.calc import process_payload&quot;</code> then the initial start up time is longer, though harder to measure as the container is just unresponsive, but the initial and subsequent requests to the endpoint is still 4-6 seconds which suggests that the code is already &quot;pre-compiled&quot;. Stopping and restarting the container it then has a fast start up time again as well as requests to it also being 4-6 seconds. So I think it's some problem with building the image from GitHub's Ubuntu server and uploading it to the AWS ECR that prevents the <code>RUN python -c &quot;from src.calc import process_payload&quot;</code> from helping but also I suspect that using the <code>uvicorn</code> command is also effecting how the container is starting (in a good way). For now I'll just stick with a self hosted container on DigitalOcean and avoid the AWS Lambda.</p>
<python><docker><aws-lambda><python-import><digital-ocean>
2024-06-19 22:56:08
0
29,009
AJP
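It is worth separating what a build-time import can and cannot bake into the image: process state cannot be snapshotted, so every module body still executes at Lambda cold start; what *does* persist in the layer are filesystem side effects, chiefly `__pycache__` bytecode and caches such as matplotlib's font cache (which is why `MPLCONFIGDIR` must point at the same writable path at build and run time). Making the bytecode step explicit with `RUN python -m compileall /var/task` is more predictable than importing, since an import that writes `.pyc` files under a different user or path leaves nothing reusable. The effect, demonstrable locally:

```python
import compileall
import pathlib
import tempfile

def precompile_tree(tree):
    """Byte-compile every .py under `tree`, as `RUN python -m compileall`
    would inside the image; returns a truthy value on full success."""
    return compileall.compile_dir(str(tree), quiet=1)
```

If cold-start latency itself is the constraint, provisioned concurrency (or keeping a container warm, as in the DigitalOcean update) is the real lever; image-side tricks only shave the import, they cannot eliminate it.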
78,644,909
147,507
UPDATE + SubQuery with conditions in SQLAlchemy 2.0 not being rendered
<p>I'm trying to update a table with info from some other rows from the same table. However, I cannot get SQLAlchemy to generate the proper SQL. It always ends up with a <code>WHERE false</code> clause in the subquery, which nullifies the effect.</p> <p>I have tried several approaches, and this one seems the most correct but still doesn't work. Other examples I found in here are for older versions of SQLAlchemy.</p> <p>Here's my code to execute it. (Please forgive the ambiguous naming -- I'm trying to obscure the code from the original source but keep the readable for troubleshooting.)</p> <pre class="lang-py prettyprint-override"><code>parent_id: UUID = ... iteration: int = ... current_generation_number: int = ... previous_generation_number: int = current_generation - 1 previous_generation = ( select(Run) .where(Run.parent_id == parent_id) .where(Run.grand_iteration == iteration) .where(Run.generation == previous_generation_number) .where(Run.data_partition in [DataPartition.VALIDATION, DataPartition.TEST]) .subquery(name=&quot;previous_generation&quot;) ) update_operation = ( update(Run) .where(Run.parent_id == parent_id) .where(Run.grand_iteration == iteration) .where(Run.generation == current_generation_number) .where(Run.arguments == previous_generation.c.arguments) .where(Run.data_partition == previous_generation.c.data_partition) .values( metric1=previous_generation.c.metric1, metric2=previous_generation.c.metric2, metric3=previous_generation.c.metric3, ) ) self.db.execute(update_operation) self.db.commit() </code></pre> <p>What I expect to be generated is something of the sort:</p> <pre class="lang-sql prettyprint-override"><code>UPDATE runs SET metric1=previous_generation.metric1, metric2=previous_generation.metric2, metric3=previous_generation.metric3, FROM ( SELECT /* ... columns ... 
*/ FROM runs WHERE parent_id = %(parent_id_1)s::UUID AND iteration = %(iteration_1)s AND generation = %(generation_1)s AND data_partition IN (&quot;TEST&quot;, &quot;VALIDATION&quot;) ) AS previous_generation WHERE runs.parent_id = %(parent_id_1)s::UUID AND runs.iteration = %(iteration_1)s AND runs.generation = %(generation_2)s AND runs.arguments = previous_generation.arguments AND runs.data_partition = previous_generation.data_partition </code></pre> <p>And here's the SQL that SQLAlchemy logs output. Interestingly, it is output twice (I'm not sure if that's part of the problem). Notes below.</p> <pre class="lang-sql prettyprint-override"><code>UPDATE runs SET metric1=previous_generation.metric1, metric2=previous_generation.metric2, metric3=previous_generation.metric3, FROM ( SELECT runs.id AS id, runs.parent_id AS parent_id, runs.generation AS generation, runs.iteration AS iteration, runs.arguments AS arguments, runs.data_partition AS data_partition, runs.metric1 AS metric1, runs.metric2 AS metric2, runs.metric3 AS metric3 FROM runs WHERE false ) AS previous_generation WHERE runs.parent_id = %(parent_id_1)s::UUID AND runs.iteration = %(iteration_1)s AND runs.generation = %(generation_1)s AND runs.arguments = previous_generation.arguments AND runs.data_partition = previous_generation.data_partition RETURNING runs.id </code></pre> <p>And the parameters:</p> <pre class="lang-py prettyprint-override"><code>{ 'parent_id_1': UUID('1cb259e1-9f2e-40b8-884a-5706a8275312'), 'iteration_1': 1, 'generation_1': 3 } </code></pre> <p>Note the differences:</p> <ol> <li>My different variables are not captured and rendered in the subquery</li> <li>As such, the subquery ends up with <code>WHERE false</code>, and my conditions are not even included</li> </ol> <p>What am I doing wrong in here? Any guidance is appreciated.</p> <p>Context: SQLAlchemy 2.0, Python 3.9, PostgreSQL 16.2</p>
<python><sql><postgresql><sqlalchemy>
2024-06-19 22:48:04
1
7,898
Alpha
78,644,814
9,781,768
Robot Framework is Automatically closing the browser
<p>I am new to Robot and have this simple code from a tutorial. The code is supposed open chrome and visit the login page of this website. It does that, however, it closes the browser automatically. I've tried to debug by adding the params <code>options=add_experimental_option(&quot;detach&quot;,${True})</code> after create webdriver, however, this makes things worse.</p> <pre><code>*** Settings *** Documentation To validate the Login form Library SeleniumLibrary *** Test Cases *** Validate Unsuccessful Login Open The Browser With The Mortgage Payment URL # Fill the login form # Wait until it checks and displays error message # Verify error message is correct *** Keywords *** Open The Browser With The Mortgage Payment URL Create Webdriver Chrome Go To https://rahulshettyacademy.com/loginpagePractise/ </code></pre>
<python><testing><robotframework>
2024-06-19 22:05:37
1
784
User9123
78,644,760
3,486,684
`xarray`: setting `drop=True` when filtering a `Dataset` causes `IndexError: dimension coordinate conflicts between indexed and indexing objects`
<p>Some preliminary setup:</p> <pre class="lang-py prettyprint-override"><code>import xarray as xr import numpy as np xr.set_options(display_style=&quot;text&quot;) </code></pre> <pre><code>&lt;xarray.core.options.set_options at 0x7f3777111e50&gt; </code></pre> <p>Suppose that I have <code>label</code>s which are composed of two parts: <code>first</code> and <code>second</code>:</p> <pre class="lang-py prettyprint-override"><code>raw_labels = np.array( [[&quot;a&quot;, &quot;c&quot;], [&quot;b&quot;, &quot;a&quot;], [&quot;a&quot;, &quot;b&quot;], [&quot;c&quot;, &quot;a&quot;]], dtype=&quot;&lt;U1&quot;, ) raw_labels </code></pre> <pre><code>array([['a', 'c'], ['b', 'a'], ['a', 'b'], ['c', 'a']], dtype='&lt;U1') </code></pre> <p>I can make an <code>xarray.DataArray</code> easily enough to represent this raw information with informative tags:</p> <pre class="lang-py prettyprint-override"><code>label_metas = xr.DataArray( raw_labels, dims=(&quot;label&quot;, &quot;parts&quot;), coords={ &quot;label&quot;: [&quot;-&quot;.join(x) for x in raw_labels], &quot;parts&quot;: [&quot;first&quot;, &quot;second&quot;], }, name=&quot;meta&quot;, ) label_metas </code></pre> <pre>&lt;xarray.DataArray &#x27;meta&#x27; (label: 4, parts: 2)&gt; Size: 32B array([[&#x27;a&#x27;, &#x27;c&#x27;], [&#x27;b&#x27;, &#x27;a&#x27;], [&#x27;a&#x27;, &#x27;b&#x27;], [&#x27;c&#x27;, &#x27;a&#x27;]], dtype=&#x27;&lt;U1&#x27;) Coordinates: * label (label) &lt;U3 48B &#x27;a-c&#x27; &#x27;b-a&#x27; &#x27;a-b&#x27; &#x27;c-a&#x27; * parts (parts) &lt;U6 48B &#x27;first&#x27; &#x27;second&#x27;</pre> <p>Now suppose that I have additional information for a label: let's say it is some count information for simplicity.</p> <pre class="lang-py prettyprint-override"><code>raw_counts = np.random.randint(0, 100, size=len(label_metas)) raw_counts </code></pre> <pre><code>array([95, 23, 6, 77]) </code></pre> <pre class="lang-py prettyprint-override"><code>label_counts = xr.DataArray( raw_counts, 
dims=&quot;label&quot;, coords={&quot;label&quot;: label_metas.coords[&quot;label&quot;]}, name=&quot;count&quot;, ) label_counts </code></pre> <pre>&lt;xarray.DataArray &#x27;count&#x27; (label: 4)&gt; Size: 32B array([95, 23, 6, 77]) Coordinates: * label (label) &lt;U3 48B &#x27;a-c&#x27; &#x27;b-a&#x27; &#x27;a-b&#x27; &#x27;c-a&#x27;</pre> <p>How do I combine these clearly related <code>xr.DataArray</code>s? From what I understand: by using <code>xr.Dataset</code>s.</p> <pre class="lang-py prettyprint-override"><code>label_info = xr.merge([label_metas, label_counts]) label_info </code></pre> <pre>&lt;xarray.Dataset&gt; Size: 160B Dimensions: (label: 4, parts: 2) Coordinates: * label (label) &lt;U3 48B &#x27;a-c&#x27; &#x27;b-a&#x27; &#x27;a-b&#x27; &#x27;c-a&#x27; * parts (parts) &lt;U6 48B &#x27;first&#x27; &#x27;second&#x27; Data variables: meta (label, parts) &lt;U1 32B &#x27;a&#x27; &#x27;c&#x27; &#x27;b&#x27; &#x27;a&#x27; &#x27;a&#x27; &#x27;b&#x27; &#x27;c&#x27; &#x27;a&#x27; count (label) int64 32B 95 23 6 77</pre> <p>Now suppose I want to filter this dataset, so that I only have left those labels with first part <code>'a'</code>. How would I go about it? According to the docs, <a href="https://docs.xarray.dev/en/stable/generated/xarray.Dataset.where.html" rel="nofollow noreferrer"><code>where</code> can apply to <code>xr.Dataset</code> too</a>, but no examples are given showing this in action. 
Here are the results of my experiments:</p> <pre class="lang-py prettyprint-override"><code>label_info[&quot;meta&quot;].sel(parts=&quot;first&quot;) </code></pre> <pre>&lt;xarray.DataArray &#x27;meta&#x27; (label: 4)&gt; Size: 16B array([&#x27;a&#x27;, &#x27;b&#x27;, &#x27;a&#x27;, &#x27;c&#x27;], dtype=&#x27;&lt;U1&#x27;) Coordinates: * label (label) &lt;U3 48B &#x27;a-c&#x27; &#x27;b-a&#x27; &#x27;a-b&#x27; &#x27;c-a&#x27; parts &lt;U6 24B &#x27;first&#x27;</pre> <pre class="lang-py prettyprint-override"><code>label_info.where(label_info[&quot;meta&quot;].sel(parts=&quot;first&quot;) == &quot;a&quot;) </code></pre> <pre>&lt;xarray.Dataset&gt; Size: 192B Dimensions: (label: 4, parts: 2) Coordinates: * label (label) &lt;U3 48B &#x27;a-c&#x27; &#x27;b-a&#x27; &#x27;a-b&#x27; &#x27;c-a&#x27; * parts (parts) &lt;U6 48B &#x27;first&#x27; &#x27;second&#x27; Data variables: meta (label, parts) object 64B &#x27;a&#x27; &#x27;c&#x27; nan nan &#x27;a&#x27; &#x27;b&#x27; nan nan count (label) float64 32B 95.0 nan 6.0 nan</pre> <p>We see that those points that do not match the <code>where</code> are replaced with a <code>np.nan</code>, as expected from the docs. Does that mean there is some re-allocation of backing arrays involved? Suppose then that we just asked for those regions that do not match to be dropped, does that also cause a re-allocation? 
I am not sure, because I am unable to drop those values due to <code>IndexError: dimension coordinate 'parts' conflicts between indexed and indexing objects</code>:</p> <pre class="lang-py prettyprint-override"><code>label_info.where(label_info[&quot;meta&quot;].sel(parts=&quot;first&quot;) == &quot;a&quot;, drop=True) </code></pre> <pre><code>--------------------------------------------------------------------------- IndexError Traceback (most recent call last) Cell In[20], line 1 ----&gt; 1 label_info.where(label_info[&quot;meta&quot;].sel(parts=&quot;first&quot;) == &quot;a&quot;, drop=True) File ~/miniforge3/envs/xarray-tutorial/lib/python3.11/site-packages/xarray/core/common.py:1225, in DataWithCoords.where(self, cond, other, drop) 1222 for dim in cond.sizes.keys(): 1223 indexers[dim] = _get_indexer(dim) -&gt; 1225 self = self.isel(**indexers) 1226 cond = cond.isel(**indexers) 1228 return ops.where_method(self, cond, other) File ~/miniforge3/envs/xarray-tutorial/lib/python3.11/site-packages/xarray/core/dataset.py:2972, in Dataset.isel(self, indexers, drop, missing_dims, **indexers_kwargs) 2970 indexers = either_dict_or_kwargs(indexers, indexers_kwargs, &quot;isel&quot;) 2971 if any(is_fancy_indexer(idx) for idx in indexers.values()): -&gt; 2972 return self._isel_fancy(indexers, drop=drop, missing_dims=missing_dims) 2974 # Much faster algorithm for when all indexers are ints, slices, one-dimensional 2975 # lists, or zero or one-dimensional np.ndarray's 2976 indexers = drop_dims_from_indexers(indexers, self.dims, missing_dims) File ~/miniforge3/envs/xarray-tutorial/lib/python3.11/site-packages/xarray/core/dataset.py:3043, in Dataset._isel_fancy(self, indexers, drop, missing_dims) 3040 selected = self._replace_with_new_dims(variables, coord_names, indexes) 3042 # Extract coordinates from indexers -&gt; 3043 coord_vars, new_indexes = selected._get_indexers_coords_and_indexes(indexers) 3044 variables.update(coord_vars) 3045 indexes.update(new_indexes) File 
~/miniforge3/envs/xarray-tutorial/lib/python3.11/site-packages/xarray/core/dataset.py:2844, in Dataset._get_indexers_coords_and_indexes(self, indexers) 2840 # we don't need to call align() explicitly or check indexes for 2841 # alignment, because merge_variables already checks for exact alignment 2842 # between dimension coordinates 2843 coords, indexes = merge_coordinates_without_align(coords_list) -&gt; 2844 assert_coordinate_consistent(self, coords) 2846 # silently drop the conflicted variables. 2847 attached_coords = {k: v for k, v in coords.items() if k not in self._variables} File ~/miniforge3/envs/xarray-tutorial/lib/python3.11/site-packages/xarray/core/coordinates.py:941, in assert_coordinate_consistent(obj, coords) 938 for k in obj.dims: 939 # make sure there are no conflict in dimension coordinates 940 if k in coords and k in obj.coords and not coords[k].equals(obj[k].variable): --&gt; 941 raise IndexError( 942 f&quot;dimension coordinate {k!r} conflicts between &quot; 943 f&quot;indexed and indexing objects:\n{obj[k]}\nvs.\n{coords[k]}&quot; 944 ) IndexError: dimension coordinate 'parts' conflicts between indexed and indexing objects: &lt;xarray.DataArray 'parts' (parts: 2)&gt; Size: 48B array(['first', 'second'], dtype='&lt;U6') Coordinates: * parts (parts) &lt;U6 48B 'first' 'second' vs. &lt;xarray.Variable ()&gt; Size: 24B array('first', dtype='&lt;U6') </code></pre>
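For completeness, a sketch of a workaround that does run for me: extract the boolean mask as a plain NumPy array (which strips the scalar `parts` coordinate that seems to trigger the conflict) and index positionally with `isel`:

```python
import numpy as np
import xarray as xr

# Rebuild the small example dataset from above.
raw_labels = np.array([["a", "c"], ["b", "a"], ["a", "b"], ["c", "a"]], dtype="<U1")
label_metas = xr.DataArray(
    raw_labels,
    dims=("label", "parts"),
    coords={"label": ["-".join(x) for x in raw_labels], "parts": ["first", "second"]},
    name="meta",
)
label_counts = xr.DataArray(
    np.array([95, 23, 6, 77]),
    dims="label",
    coords={"label": label_metas.coords["label"]},
    name="count",
)
label_info = xr.merge([label_metas, label_counts])

# .values drops the scalar 'parts' coordinate from the comparison result;
# flatnonzero turns the mask into integer positions for isel.
mask = (label_info["meta"].sel(parts="first") == "a").values
filtered = label_info.isel(label=np.flatnonzero(mask))
print(filtered["label"].values)
```

This avoids the NaN-filling behaviour of `where` entirely, though presumably it still allocates new (smaller) backing arrays. I have also seen the suggestion of stripping the conflicting coordinate before `where(..., drop=True)` via `cond.drop_vars("parts")`, but I have not verified that variant.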
<python><python-xarray>
2024-06-19 21:43:12
1
4,654
bzm3r
78,644,650
412,137
GitHub Actions Workflow with Poetry Failing: "Backend subprocess exited when trying to invoke build_wheel"
<p>I'm encountering an issue with my GitHub Actions workflow that uses Poetry for dependency management. The workflow has been working fine until recently, but now it fails during the virtual environment creation and package installation steps. The error log is as follows:</p> <pre><code>[virtualenv] create virtual environment via CPython3Posix(dest=/tmp/tmp0id2fedr/.venv, clear=False, no_vcs_ignore=False, global=False) 2024-06-19T20:48:12.0128143Z [virtualenv] create folder /tmp/tmp0id2fedr/.venv/bin 2024-06-19T20:48:12.0128439Z [virtualenv] create folder /tmp/tmp0id2fedr/.venv/lib/python3.10/site-packages ... 2024-06-19T20:48:12.0142856Z [virtualenv] add activators for Bash, CShell, Fish, Nushell, PowerShell, Python 2024-06-19T20:48:12.0143272Z Source (PyPI): 217 packages found for setuptools &gt;=40.8.0 2024-06-19T20:48:12.0143401Z Source (PyPI): 1 packages found for setuptools &gt;=40.8.0 2024-06-19T20:48:12.0143457Z [build:build] Getting build dependencies for wheel... ... 2024-06-19T20:48:12.0152045Z 1 ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/poetry/installation/chef.py:121 in prepare 2024-06-19T20:48:12.0152216Z 119│ if archive.is_dir(): 2024-06-19T20:48:12.0152536Z 120│ destination = output_dir or Path(tempfile.mkdtemp(prefix=&quot;poetry-chef-&quot;)) 2024-06-19T20:48:12.0152846Z → 121│ return self._prepare(archive, destination=destination, editable=editable) 2024-06-19T20:48:12.0152962Z 122│ 2024-06-19T20:48:12.0153246Z 123│ return self._prepare_sdist(archive, destination=output_dir) 2024-06-19T20:48:12.0153254Z 2024-06-19T20:48:12.0153357Z ChefBuildError 2024-06-19T20:48:12.0153367Z 2024-06-19T20:48:12.0153548Z Backend subprocess exited when trying to invoke build_wheel 2024-06-19T20:48:12.0153790Z 2024-06-19T20:48:12.0154053Z We need both setuptools AND wheel packages installed for bdist_wheel to work. 
Try running: pip install wheel 2024-06-19T20:48:12.0154149Z 2024-06-19T20:48:12.0154157Z 2024-06-19T20:48:12.0154533Z at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/poetry/installation/chef.py:164 in _prepare 2024-06-19T20:48:12.0154650Z 160│ 2024-06-19T20:48:12.0154912Z 161│ error = ChefBuildError(&quot;\n\n&quot;.join(message_parts)) 2024-06-19T20:48:12.0155024Z 162│ 2024-06-19T20:48:12.0155204Z 163│ if error is not None: 2024-06-19T20:48:12.0155394Z → 164│ raise error from None 2024-06-19T20:48:12.0155507Z 165│ 2024-06-19T20:48:12.0155680Z 166│ return path 2024-06-19T20:48:12.0155796Z 167│ 2024-06-19T20:48:12.0156109Z 168│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -&gt; Path: 2024-06-19T20:48:12.0169670Z ##[error]Process completed with exit code 1. </code></pre> <p>The main error seems to be:</p> <pre><code>ChefBuildError Backend subprocess exited when trying to invoke build_wheel We need both setuptools AND wheel packages installed for bdist_wheel to work. Try running: pip install wheel </code></pre> <p>Does anyone have any suggestions on how to resolve this issue?</p> <p>Additional Context:</p> <ul> <li>GitHub Actions runner is using Python 3.10.14</li> <li>This issue started happening without any changes to the workflow file</li> <li>I tried installing wheel and setuptools (with the latest version) but it didn't help</li> </ul> <p>Thank you for your help!</p>
<python><github-actions><ubuntu-22.04>
2024-06-19 21:03:01
0
2,767
Nadav
78,644,484
1,867,328
Formatting month to the single digit with pandas
<p>I have the following code:</p> <pre><code>import pandas as pd pd.to_datetime(['8/23/1999']).strftime(&quot;%m/%d/%Y&quot;).astype('str') </code></pre> <p>This generates <code>08/23/1999</code>.</p> <p>However, I want to get <code>8/23/1999</code> (month without the leading zero).</p> <p>Is there a specific format directive to be used for this case?</p>
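For reference, a portable sketch of what I'm after (the `%-m` directive that would do this directly is glibc-only, and Windows uses `%#m` instead, so this builds the string from the date components):

```python
import pandas as pd

dates = pd.to_datetime(['8/23/1999', '12/01/2000'])
# Build the string manually so the month is unpadded on every platform.
formatted = [f"{d.month}/{d.strftime('%d/%Y')}" for d in dates]
print(formatted)  # ['8/23/1999', '12/01/2000']
```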
<python><pandas>
2024-06-19 20:18:23
1
3,832
Bogaso
78,644,422
11,608,962
Persistent MySQL connection issues with FastAPI deployed on DigitalOcean and Azure
<p>I have deployed a FastAPI application on DigitalOcean droplets and Azure App Service, with MySQL databases hosted on DigitalOcean and Azure respectively. Despite several attempts to mitigate the issue, I am consistently facing database connection problems, resulting in errors like:</p> <pre><code>2024-06-06T07:27:13.243707191Z: [ERROR] File &quot;/usr/local/lib/python3.11/site-packages/pymysql/connections.py&quot;, line 759, in _write_bytes 2024-06-06T07:27:13.243710391Z: [ERROR] raise err.OperationalError( 2024-06-06T07:27:13.243713391Z: [ERROR] sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2006, &quot;MySQL server has gone away (ConnectionResetError(104, 'Connection reset by peer'))&quot;) 2024-05-21T12:03:39.909286838Z: [ERROR] File &quot;/usr/local/lib/python3.11/site-packages/pymysql/connections.py&quot;, line 692, in _read_packet 2024-05-21T12:03:39.909290038Z: [ERROR] packet_header = self._read_bytes(4) 2024-05-21T12:03:39.909293038Z: [ERROR] ^^^^^^^^^^^^^^^^^^^ 2024-05-21T12:03:39.909296338Z: [ERROR] File &quot;/usr/local/lib/python3.11/site-packages/pymysql/connections.py&quot;, line 738, in _read_bytes 2024-05-21T12:03:39.909299638Z: [ERROR] raise err.OperationalError( 2024-05-21T12:03:39.909303838Z: [ERROR] pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query ([Errno 110] Connection timed out)') </code></pre> <p>The issue occurs intermittently, causing disruptions to my application. 
Here are the steps I've taken to address it:</p> <ul> <li>Added middleware to handle database connections.</li> <li>Configured Azure settings to &quot;Open to all host (*)&quot;.</li> <li>Set <code>pool_pre_ping=True</code> in the <code>create_engine()</code> function.</li> <li>Appended <code>?connect_timeout=10&amp;read_timeout=30&amp;write_timeout=30</code> to the database URL.</li> <li>Implemented <code>db_session.close()</code> in a <code>finally</code> block to ensure connections are properly closed.</li> </ul> <p>Despite these efforts, the problem persists. I previously encountered similar issues with a Flask application on DigitalOcean, which were mitigated by explicitly closing database connections at the end of each API route. However, this approach is not feasible for my current FastAPI setup.</p> <p><strong>Details:</strong></p> <ul> <li>FastAPI version: 0.90.0</li> <li>SQLAlchemy version: 2.0.2</li> <li>Python version: 3.11</li> </ul> <p>Any insights or suggestions on how to resolve these MySQL connection issues in FastAPI deployments on DigitalOcean and Azure would be greatly appreciated. 
Thank you!</p> <p>Code Structure:</p> <ul> <li><code>main.py</code></li> </ul> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware from routers import * app = FastAPI() origins = ['*'] app.add_middleware( CORSMiddleware, allow_origins = origins, allow_credentials = True, allow_methods = [&quot;*&quot;], allow_headers = [&quot;*&quot;], ) app.include_router(user.router) </code></pre> <ul> <li><code>app/database.py</code></li> </ul> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker from sqlalchemy.ext.declarative import declarative_base SQLALCHEMY_DATABASE_URL = f&quot;mysql+pymysql://{user}:{pwrd}@{host}:{port}/{name}&quot; SQLALCHEMY_DATABASE_URL = ( f&quot;mysql+pymysql://{user}:{pwrd}@{host}:{port}/{name}&quot; &quot;?connect_timeout=10&amp;read_timeout=30&amp;write_timeout=30&quot; ) engine = create_engine(SQLALCHEMY_DATABASE_URL, pool_pre_ping = True) SessionLocal = sessionmaker(autocommit = False, autoflush = False, bind = engine) Base = declarative_base() def get_db(): try: db_session = SessionLocal() yield db_session finally: db_session.close() </code></pre> <ul> <li><code>db_models/SQL_Models.py</code> (contains all SQLAlchemy classes defining tables)</li> </ul> <pre class="lang-py prettyprint-override"><code>import sqlalchemy as db from app.database import Base class Table_1(Base): __tablename__ = 'Table_1' row_id = db.Column(db.Integer, primary_key=True, index=True, autoincrement=True) ... 
from app.database import engine; Base.metadata.create_all(engine) </code></pre> <ul> <li><code>routers/routes.py</code></li> </ul> <pre class="lang-py prettyprint-override"><code>@router.get('/{student_id}') async def get_student_details( student_id: int, db_session: Session = Depends(get_db), # Here I am handling database connection token: Union[str, None] = Header(default=None) ): students = db_session \ .query(Students) \ .filter(Students.student_id == student_id) \ .first() return students </code></pre> <p>Note that I am using MySQL RDBMS.</p>
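One more knob I'm currently looking at (a sketch with an in-memory SQLite stand-in URL so the snippet runs anywhere; in the real app this would be the `mysql+pymysql://...` URL from `app/database.py`): SQLAlchemy's `pool_recycle`, which proactively replaces pooled connections older than a threshold, so a checkout never hands back a connection that MySQL's `wait_timeout` or a cloud load balancer has already dropped.

```python
from sqlalchemy import create_engine

# Stand-in URL; the point is the pool configuration.
engine = create_engine(
    "sqlite://",
    pool_pre_ping=True,  # test each connection on checkout
    pool_recycle=280,    # replace connections older than ~4.5 min,
                         # below common server/load-balancer idle timeouts
)
```

The `280` seconds is a guess pitched below typical Azure/DigitalOcean idle timeouts, not a documented recommendation.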
<python><mysql><azure><sqlalchemy><fastapi>
2024-06-19 19:59:11
2
1,427
Amit Pathak
78,644,359
3,325,401
How to make statsmodels' ANOVA result match R's ANOVA result
<p>The following question is sort of a concrete adaptation of a post on <a href="https://stats.stackexchange.com/questions/10182/intraclass-correlation-coefficient-vs-f-test-one-way-anova/11732#11732">StatsExchange</a>.</p> <p>The following R script runs just fine:</p> <pre class="lang-r prettyprint-override"><code>library(reshape) J1 &lt;- c(9,6,8,7,10,6) J2 &lt;- c(2,1,4,1,5,2) J3 &lt;- c(5,3,6,2,6,4) J4 &lt;- c(8,2,8,6,9,7) Subject &lt;- c('S1', 'S2', 'S3', 'S4', 'S5', 'S6') sf &lt;- data.frame(Subject, J1, J2, J3, J4) sf.df &lt;- melt(sf, varnames=c(&quot;Subject&quot;, &quot;Rater&quot;)) anova(lm(value ~ Subject, sf.df)) anova(lm(value ~ Subject*variable, sf.df)) </code></pre> <p>However, when I try to translate this into Python, I run into an exception that gets thrown from scipy.</p> <p>Here's the equivalent Python script:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import statsmodels.api as sm from statsmodels.formula.api import ols # Create the data J1 = [9, 6, 8, 7, 10, 6] J2 = [2, 1, 4, 1, 5, 2] J3 = [5, 3, 6, 2, 6, 4] J4 = [8, 2, 8, 6, 9, 7] Subject = ['S1', 'S2', 'S3', 'S4', 'S5', 'S6'] # Create the DataFrame sf = pd.DataFrame({'Subject': Subject, 'J1': J1, 'J2': J2, 'J3': J3, 'J4': J4}) print(sf) # Melt the DataFrame sf_melted = pd.melt(sf, id_vars=['Subject'], var_name='Rater', value_name='Value') print(sf_melted) # Perform ANOVA model1 = ols('Value ~ Subject', data=sf_melted).fit() anova_table1 = sm.stats.anova_lm(model1, typ=2) print(anova_table1) model2 = ols('Value ~ Subject * Rater', data=sf_melted).fit() anova_table2 = sm.stats.anova_lm(model2, typ=2) print(anova_table2) </code></pre> <p>The ANOVA calculation for the <em>first</em> model runs just fine, but there seems to be an issue with the second model.</p> <p>Here's the full stack trace:</p> <pre class="lang-none prettyprint-override"><code>/home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages/statsmodels/regression/linear_model.py:1717: 
RuntimeWarning: divide by zero encountered in double_scalars return np.dot(wresid, wresid) / self.df_resid --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[246], line 26 23 print(anova_table1) 25 model2 = ols('Value ~ Subject * Rater', data=sf_melted).fit() ---&gt; 26 anova_table2 = sm.stats.anova_lm(model2, typ=2) 27 print(anova_table2) File ~/anaconda3/envs/python3/lib/python3.10/site-packages/statsmodels/stats/anova.py:353, in anova_lm(*args, **kwargs) 351 if len(args) == 1: 352 model = args[0] --&gt; 353 return anova_single(model, **kwargs) 355 if typ not in [1, &quot;I&quot;]: 356 raise ValueError(&quot;Multiple models only supported for type I. &quot; 357 &quot;Got type %s&quot; % str(typ)) File ~/anaconda3/envs/python3/lib/python3.10/site-packages/statsmodels/stats/anova.py:84, in anova_single(model, **kwargs) 81 return anova1_lm_single(model, endog, exog, nobs, design_info, table, 82 n_rows, test, pr_test, robust) 83 elif typ in [2, &quot;II&quot;]: ---&gt; 84 return anova2_lm_single(model, design_info, n_rows, test, pr_test, 85 robust) 86 elif typ in [3, &quot;III&quot;]: 87 return anova3_lm_single(model, design_info, n_rows, test, pr_test, 88 robust) File ~/anaconda3/envs/python3/lib/python3.10/site-packages/statsmodels/stats/anova.py:207, in anova2_lm_single(model, design_info, n_rows, test, pr_test, robust) 205 LVL = np.dot(np.dot(L1,robust_cov),L2.T) 206 from scipy import linalg --&gt; 207 orth_compl,_ = linalg.qr(LVL) 208 r = L1.shape[0] - L2.shape[0] 209 # L1|2 210 # use the non-unique orthogonal completion since L12 is rank r File ~/anaconda3/envs/python3/lib/python3.10/site-packages/scipy/linalg/_decomp_qr.py:129, in qr(a, overwrite_a, lwork, mode, pivoting, check_finite) 125 raise ValueError(&quot;Mode argument should be one of ['full', 'r',&quot; 126 &quot;'economic', 'raw']&quot;) 128 if check_finite: --&gt; 129 a1 = numpy.asarray_chkfinite(a) 130 else: 131 a1 = 
numpy.asarray(a) File ~/anaconda3/envs/python3/lib/python3.10/site-packages/numpy/lib/function_base.py:603, in asarray_chkfinite(a, dtype, order) 601 a = asarray(a, dtype=dtype, order=order) 602 if a.dtype.char in typecodes['AllFloat'] and not np.isfinite(a).all(): --&gt; 603 raise ValueError( 604 &quot;array must not contain infs or NaNs&quot;) 605 return a ValueError: array must not contain infs or NaNs </code></pre> <p>Things I've tried:</p> <ol> <li>Changing library versions. I was originally on numpy 1.22.4 and scipy 1.12, and now I'm on numpy 2.0.0 and scipy 1.13.1.</li> <li>I've tried enforcing categorical variables in the linear model by writing <code>Value ~ Subject * C(Rater)</code> for the second model. I wonder if it's not treating the data as categorical....but from what I've read statsmodels is supposed to correctly infer that any string variable is categorical, so this may be the wrong idea.</li> </ol> <p>Can any R/Python/statsmodels experts help me resolve this?</p> <p>The expected output (from the R script) is the following:</p> <pre><code>Analysis of Variance Table Response: value Df Sum Sq Mean Sq F value Pr(&gt;F) Subject 5 56.208 11.2417 1.7947 0.1648 Residuals 18 112.750 6.2639 Analysis of Variance Table Response: value Df Sum Sq Mean Sq F value Pr(&gt;F) Subject 5 56.208 11.242 Rater 3 97.458 32.486 Subject:Rater 15 15.292 1.019 Residuals 0 0.000 </code></pre> <p>^ Note that this expected output also corresponds with the output shown in the referenced StatsExchange post (the MeanSq value for the &quot;Rater&quot; variable of 32.486 is roughly the value of 32.49 associated with the &quot;Judge&quot; variable in the StatsExchange post).</p>
<python><r><dataframe><statsmodels><anova>
2024-06-19 19:41:10
2
2,767
hobscrk777
78,644,353
6,622,697
Using SQLAlchemy metadata to get info about actual database vs the definition
<p>I want to use the Metadata object to get information about my definitions (i.e., my Model objects) as well as going out to the actual database. But no matter what I try, it always uses the defined information and not what's in the database.</p> <p>Here's what I have:</p> <pre><code>engine = create_engine('...') class ModelBase(DeclarativeBase): pass metadata = ModelBase.metadata print(metadata.tables.keys()) </code></pre> <p>Even if I delete tables from the database, it still prints the names of the tables as they are defined.</p> <p>I have tried using <code>metadata.reflect(bind=engine)</code>, but that doesn't seem to have any effect. Are there two different metadata objects, or two different ways to create the object?</p> <p>I want to be able to compare my definitions with what is in the actual database.</p>
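A sketch of the two-object approach I understand to be the intended pattern (in-memory SQLite stands in for my real database): reflect into a *separate* `MetaData` rather than the one hanging off `DeclarativeBase`, then compare key sets.

```python
from sqlalchemy import MetaData, create_engine, text

engine = create_engine("sqlite://")  # stand-in for the real connection URL
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE person (id INTEGER PRIMARY KEY)"))

# Reflect what actually exists in the database into a fresh MetaData,
# leaving the model-defined ModelBase.metadata untouched.
db_metadata = MetaData()
db_metadata.reflect(bind=engine)
print(db_metadata.tables.keys())
```

Comparing `db_metadata.tables.keys()` against `ModelBase.metadata.tables.keys()` would then give the defined-vs-actual difference.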
<python><sqlalchemy>
2024-06-19 19:39:21
1
1,348
Peter Kronenberg
78,644,255
1,867,328
Failed to convert the pandas datetime object to another format
<p>I tried to convert a <code>python</code> datetime object to another format as below:</p> <pre><code>import pandas as pd pd.to_datetime(['1/1/1900']).dt.strftime('%Y-%m') </code></pre> <p>However, the above code generates an error.</p> <p>Could you please tell me what the right approach would be?</p>
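For comparison, a sketch of the two variants that do run for me: `pd.to_datetime` on a list returns a `DatetimeIndex`, which exposes `strftime` directly, while the `.dt` accessor only exists on a `Series`.

```python
import pandas as pd

# DatetimeIndex: call strftime directly (an Index has no .dt accessor).
idx = pd.to_datetime(['1/1/1900'])
print(idx.strftime('%Y-%m'))

# Series: wrap first, then .dt works.
s = pd.Series(pd.to_datetime(['1/1/1900']))
print(s.dt.strftime('%Y-%m'))
```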
<python><pandas>
2024-06-19 19:07:22
1
3,832
Bogaso
78,644,027
1,867,328
Managing date column with different format
<p>I am trying to convert a <code>pandas</code> dataframe column that is stored as text but represents dates in mixed formats into a proper datetime type. Below is one such example:</p> <pre><code>import pandas as pd pd.to_datetime(['0-Jan-00', '8/23/1999']) </code></pre> <p>The above code generates an error.</p> <p>Is there any method available to directly handle such a mix of date formats?</p>
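A sketch of what I've been experimenting with (assumes pandas &gt;= 2.0): `format='mixed'` parses each element independently instead of inferring one format from the first element, and `errors='coerce'` turns genuinely invalid entries (like day `0` in `'0-Jan-00'`) into `NaT` instead of raising.

```python
import pandas as pd

out = pd.to_datetime(['0-Jan-00', '8/23/1999'], format='mixed', errors='coerce')
# '0-Jan-00' is not a real date, so it becomes NaT; the other parses.
print(out)
```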
<python><pandas>
2024-06-19 18:03:00
2
3,832
Bogaso
78,643,864
7,236,133
Check the existence of records in the elastic search vector store
<p>I have such entries in my elasticsearch index: <a href="https://i.sstatic.net/8MsCz7zT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MsCz7zT.png" alt="enter image description here" /></a></p> <p>It's unstructured data, in this case the content of a PDF that was split into chunks, then a LangChain document was created for each chunk and pushed to the index as a different vector.</p> <p>I faced the issue that each time I loaded the pdf and pushed it to the index, new entries were pushed (with the same content). The code that is used for that purpose is:</p> <pre><code>def push_to_elasticsearch(es_index_name,embeddings,docs): elastic_vector_search = ElasticsearchStore( # es_cloud_id=es_cloud_id, # es_endpoint=es_endpoint, # es_apikey=es_apikey, index_name=es_index_name, # docs=docs, embedding=embeddings, es_connection=es_connection ) docs_ids = [doc.metadata[&quot;hash_id&quot;] for doc in docs] # # print(&quot;----------------------------------------------------------&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&quot;, docs_ids) vector_exists_dict = check_vectors_exist_by_hash_id(es_index_name, docs_ids) print(&quot;----------------------------------------------------------&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&quot;, vector_exists_dict) idempotency_docs = [doc for doc in docs if not vector_exists_dict.get(doc.metadata[&quot;hash_id&quot;], False)] # idempotency_docs = [doc for doc in docs if not vector_exists_dict.get(calculate_content_hash(doc.page_content), False)] print('Len of docs:', len(docs)) print('Len of idempotency_docs:', len(idempotency_docs)) elastic_vector_search.add_documents(documents=docs) db = ElasticsearchStore.from_documents( docs, embeddings, es_connection=es_connection, index_name=es_index_name, ) return db </code></pre> <p>In order to check the existence of vectors before pushing them, I guess I couldn't use the existing _id field (since it's not pushed yet), so I added a new hash_id field in the metadata column (based on 
hash content), and I want to use it for searching the index before pushing. I still don't know how exactly to implement it. I thought about this implementation:</p> <pre><code>def check_vectors_exist_by_hash_id(index_name, docs_hash_ids): &quot;&quot;&quot; Check if vectors exist for a list of document IDs. Args: doc_ids (list): List of document IDs to check. Returns: dict: A dictionary where keys are document IDs and values are boolean (True if vector exists, False otherwise). &quot;&quot;&quot; vector_exists_dict = {} try: # Fetch documents by IDs responses = es_connection.mget(index=index_name, body={&quot;hash_ids&quot;: docs_hash_ids}) for response in responses[&quot;docs&quot;]: doc_id = response[&quot;hash_id&quot;] vector_exists_dict[doc_id] = &quot;embedding&quot; in response[&quot;_source&quot;] except Exception as e: print(f&quot;Error checking vector existence for doc_ids: {e}&quot;) return vector_exists_dict </code></pre> <p>But haven't figured out yet how to filter by these hash_ids!</p>
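The shape I've been sketching for the lookup (untested against a live cluster — `metadata.hash_id` is my guess at the field path LangChain writes, and the `.keyword` sub-field is an assumption for exact matching):

```python
# Build a terms query against the metadata field; ids_to_check stands in
# for the docs_hash_ids list computed above.
ids_to_check = ["abc123", "def456"]

query = {
    "query": {
        "terms": {
            # assumed field path + keyword sub-field for exact match
            "metadata.hash_id.keyword": ids_to_check,
        }
    },
    "_source": False,
    "fields": ["metadata.hash_id"],
    "size": len(ids_to_check),
}
# resp = es_connection.search(index=es_index_name, body=query)
```

The idea is that any hash_id returned in `resp` already exists, and the rest are safe to push.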
<python><elasticsearch><langchain><vectorstore>
2024-06-19 17:18:29
1
679
zbeedatm
78,643,804
4,659,442
TypeError using pandas read_sql_query() with dtype=UNIQUEIDENTIFIER
<p>I'm trying to use Pandas' <code>read_sql_query()</code>, specifying a <code>dtype</code> of <code>UNIQUEIDENTIFIER</code> for one of the columns:</p> <pre class="lang-py prettyprint-override"><code>import os import urllib import pandas as pd from sqlalchemy import create_engine, engine, exc from sqlalchemy.dialects.mssql import UNIQUEIDENTIFIER driver='pyodbc' driver_version='ODBC Driver 17 for SQL Server' dialect='mssql' server=os.environ['odbc_server'] database=os.environ['odbc_database'] authentication='ActiveDirectoryInteractive' username=os.environ['odbc_username'] connection_string = ( f'{dialect}+{driver}:///?odbc_connect=' + urllib.parse.quote_plus( f'DRIVER={driver_version};SERVER={server};DATABASE={database};' + f'UID={username};AUTHENTICATION={authentication};' ) ) connection = create_engine(connection_string) df_ifg_appt = pd.read_sql_query( ''' select p.id, p.name, p.start_date, p.end_date from core.person p ''', con=connection, dtype={ 'id': UNIQUEIDENTIFIER, } ) </code></pre> <p>This is giving an error: <code>TypeError: dtype '&lt;class 'sqlalchemy.dialects.mssql.base.UNIQUEIDENTIFIER'&gt;' not understood</code>.</p> <p>What am I doing wrong?</p>
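For context, a sketch of the distinction as I understand it, using an in-memory SQLite stand-in for the SQL Server connection: pandas' `dtype=` argument expects numpy/pandas dtypes (strings like `'string'` or `np.dtype` objects), not SQLAlchemy column types such as `UNIQUEIDENTIFIER`.

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # stand-in for the mssql engine
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE person (id TEXT, name TEXT)"))
    conn.execute(text(
        "INSERT INTO person VALUES ('0E984725-C51C-4BF4-9960-E1C80E27ABA0', 'Ann')"
    ))

df = pd.read_sql_query(
    "select id, name from person",
    con=engine,
    dtype={"id": "string"},  # a pandas dtype, not a SQLAlchemy type
)
print(df.dtypes)
```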
<python><sql-server><pandas>
2024-06-19 17:06:03
0
727
philipnye
78,643,652
3,903,479
Render HTML to pdf and append images
<p>I'm writing a python script that can read html from stdin and render it to a file (with css, though my example code omits that), along with attaching any trailing image paths on the command. Using the <a href="https://py-pdf.github.io/fpdf2/CombineWithPdfrw.html#adding-a-page-to-an-existing-pdf" rel="nofollow noreferrer">fpdf2 documentation</a> I wrote a script that depends on pdfkit (and wkhtmltopdf), fpdf, and pdfrw:</p> <pre class="lang-py prettyprint-override"><code># Usage: # cat test.html | python3 script.py out.pdf img1.png img2.jpg from pdfrw import PdfReader, PdfWriter from fpdf import FPDF import pdfkit import sys OUTPUT_FILE = sys.argv[1] pdfkit.from_string(&quot;&quot;.join(sys.stdin.readlines()), OUTPUT_FILE) writer = PdfWriter() for page in PdfReader(OUTPUT_FILE).pages: writer.addpage(page) for path in sys.argv[2:]: fpdf = FPDF() fpdf.add_page() fpdf.image(path) reader = PdfReader(fdata=bytes(fpdf.output())) writer.addpage(reader.pages[0]) writer.write(OUTPUT_FILE) </code></pre> <p>It works! The problem—it's writing to <code>OUTPUT_FILE</code>, creating an empty <code>PdfWriter</code>, opening <code>OUTPUT_FILE</code>, appending it to the writer, and writing <code>OUTPUT_FILE</code> again. I've tried a few approaches but they always omit first page of rendered html:</p> <ol> <li>Storing the rendered html and importing that into the writer: <pre class="lang-py prettyprint-override"><code>pdf = pdfkit.from_string(&quot;&quot;.join(sys.stdin.readlines())) writer = PdfWriter(trailer=PdfReader(fdata=pdf)) </code></pre> </li> <li>Writing the rendered html pdf and opening it directly with the writer: <pre class="lang-py prettyprint-override"><code>pdfkit.from_string(&quot;&quot;.join(sys.stdin.readlines()), output_filename) writer = PdfWriter(trailer=PdfReader(output_filename)) </code></pre> </li> </ol> <p>Any ideas how I can tweak this to manipulate the pdf in memory and only write to disk once?</p>
<python><pdf><pdf-generation>
2024-06-19 16:23:03
0
1,942
GammaGames
78,643,575
13,187,876
Control where Source Code for Azure ML Command gets Uploaded
<p>I'm working in a notebook in Azure Machine Learning Studio and I'm using the following code block to instantiate a job using the <a href="https://learn.microsoft.com/en-us/python/api/azure-ai-ml/azure.ai.ml?view=azure-python#azure-ai-ml-command" rel="nofollow noreferrer">command function</a>.</p> <pre><code>from azure.ai.ml import command, Input, Output from azure.ai.ml.entities import Data from azure.ai.ml.constants import AssetTypes subscription_id = &quot;&lt;subscription_id&gt;&quot; resource_group = &quot;&lt;resource_group&gt;&quot; workspace = &quot;&lt;workspace&gt;&quot; storage_account = &quot;&lt;storage_account&gt;&quot; input_path = &quot;&lt;input_path&gt;&quot; output_path = &quot;&lt;output_path&gt;&quot; input_dict = { &quot;input_data_object&quot;: Input( type=AssetTypes.URI_FILE, path=f&quot;azureml://subscriptions/{subscription_id}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{storage_account}/paths/{input_path}&quot; ) } output_dict = { &quot;output_folder_object&quot;: Output( type=AssetTypes.URI_FOLDER, path=f&quot;azureml://subscriptions/{subscription_id}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{storage_account}/paths/{output_path}&quot;, ) } job = command( code=&quot;./src&quot;, command=&quot;python 01_read_write_data.py -v --input_data=${{inputs.input_data_object}} --output_folder=${{outputs.output_folder_object}}&quot;, inputs=input_dict, outputs=output_dict, environment=&quot;&lt;asset_env&gt;&quot;, compute=&quot;&lt;compute_cluster&gt;&quot;, ) returned_job = ml_client.create_or_update(job) </code></pre> <p>This runs successfully but with each run, if the code stored within the <code>./src</code> directory changes then a new copy is uploaded to the default blob storage account. I don't mind this, but with each run, the code is uploaded to a new container at the root of my blob storage account. Therefore my default storage account is getting cluttered with containers. 
I've read the docs for instantiating a <code>command</code> object using the <code>command()</code> function, but I see no parameter available to control where my <code>./src</code> code gets uploaded. Is there any way to control this?</p>
<python><azure><machine-learning><command><azure-machine-learning-service>
2024-06-19 16:06:28
1
773
Matt_Haythornthwaite
78,643,538
2,066,855
Upgrading to Stable Diffusion 3 from 2-1 on mac
<p>I'm upgrading my stable diffusion from 2-1 to stable-diffusion-3-medium-diffusers.</p> <p>Here is my code, which is working for version 2-1:</p> <pre><code># source venv/bin/activate from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained(&quot;stabilityai/stable-diffusion-2-1&quot;) pipe = pipe.to(&quot;mps&quot;) pipe.enable_attention_slicing() print(&quot;Starting Process&quot;) steps = 200 query = &quot;Stormy Weather in Monte Carlo&quot; image = pipe(query, num_inference_steps=steps).images[0] image.save(&quot;oneOffImage.jpg&quot;) print(&quot;Successfully Created Image as oneOffImage.jpg&quot;) </code></pre> <p>I upgraded <code>diffusers</code>, signed up on hugging face for access to the gated repo, created and added the HF_TOKEN to my .env, and ran this code:</p> <pre><code># source venv/bin/activate from diffusers import StableDiffusion3Pipeline from dotenv import load_dotenv import os load_dotenv() print(&quot;Starting Process&quot;) pipe = StableDiffusion3Pipeline.from_pretrained(&quot;stabilityai/stable-diffusion-3-medium-diffusers&quot;) pipe = pipe.to(&quot;mps&quot;) # pipe.set_progress_bar_config(disable=True) pipe.enable_attention_slicing() print(&quot;Starting Process&quot;) steps = 200 query = &quot;Stormy Weather in Monte Carlo&quot; image = pipe(query, num_inference_steps=steps).images[0] image.save(&quot;oneOffImage.jpg&quot;) print(&quot;Successfully Created Image as oneOffImage.jpg&quot;) </code></pre> <p>I was able to download the model, and I logged the token and confirmed it's in the env vars. I tried adding torch and setting <code>, torch_dtype=torch.float16)</code>, but that did nothing (plus I think that's for CUDA). I also tried adding an auth tag, but that did nothing, and I upgraded my transformers, but I don't even think that did anything. 
I'm running out of ideas.</p> <p>Here is the current error</p> <pre><code>(venv) mikeland@mikes-mac-mini WeatherWindow % python3 oneOffGenStableDiffusion.py /Users/mikeland/WeatherWindow/venv/lib/python3.9/site-packages/diffusers/models/transformers/transformer_2d.py:34: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 1.0.0. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.modeling_outputs import Transformer2DModelOutput`, instead. deprecate(&quot;Transformer2DModelOutput&quot;, &quot;1.0.0&quot;, deprecation_message) Starting Process Loading pipeline components...: 0%| | 0/9 [00:00&lt;?, ?it/s] Traceback (most recent call last): File &quot;/Users/mikeland/WeatherWindow/oneOffGenStableDiffusion.py&quot;, line 15, in &lt;module&gt; pipe = StableDiffusion3Pipeline.from_pretrained(&quot;stabilityai/stable-diffusion-3-medium-diffusers&quot;) File &quot;/Users/mikeland/WeatherWindow/venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py&quot;, line 114, in _inner_fn return fn(*args, **kwargs) File &quot;/Users/mikeland/WeatherWindow/venv/lib/python3.9/site-packages/diffusers/pipelines/pipeline_utils.py&quot;, line 881, in from_pretrained loaded_sub_model = load_sub_model( File &quot;/Users/mikeland/WeatherWindow/venv/lib/python3.9/site-packages/diffusers/pipelines/pipeline_loading_utils.py&quot;, line 703, in load_sub_model loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs) File &quot;/Users/mikeland/WeatherWindow/venv/lib/python3.9/site-packages/transformers/modeling_utils.py&quot;, line 3122, in from_pretrained raise ImportError( ImportError: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate` </code></pre> <p>I'm just looking for ANY example I can get to work on Mac at this point.</p>
<python><macos><machine-learning><stable-diffusion><metal-performance-shaders>
2024-06-19 15:58:26
1
1,902
lando2319
78,643,391
11,400,815
Adding a conditional variable in GEKKO leads to no solution
<p>I'm using gekko to optimize a certain function. When I use a dummy objective function like <code>m.Obj(0)</code> just to test for feasibility, the solver is able to find a feasible solution. However, when I add in my main objective function (commented out below), the solver fails to find a solution. Here's is some fully functioning test code below:</p> <pre><code># ## Imports from gekko import GEKKO import numpy as np ## Set fixed variabls h_matrix = np.array( [ [119.4, 119.4, 119.4, 119.4, 119.4, 119.4, 111.4, 111.4, 111.4, 111.4], [119.4, 119.4, 119.4, 119.4, 119.4, 119.4, 111.4, 111.4, 111.4, 111.4], [119.4, 119.4, 119.4, 119.4, 119.4, 111.4, 111.4, 111.4, 111.4, 111.4] ] ) z_matrix = np.array( [ [383.91, 383.91, 383.91, 383.91, 383.91, 383.91, 254.49, 254.49, 254.49, 254.49], [383.91, 383.91, 383.91, 383.91, 383.91, 383.91, 254.49, 254.49, 254.49, 254.49], [383.91, 383.91, 383.91, 383.91, 383.91, 254.49, 254.49, 254.49, 254.49, 254.49] ] ) w = np.array([47.93, 66.37]) h = np.array([12.10, 8.6]) t = np.array([104, 48]) ## Reshape the matrices to make calulations easier h_matrix_reshaped = h_matrix.reshape(30,1) z_matrix_reshaped = z_matrix.reshape(30,1) # ## Initialize # Initialize the model m = GEKKO(remote=False) ## Fixed variables h_constants = [m.Param(value=h_matrix_reshaped[i][0]) for i in range(30)] z_constants = [m.Param(value=z_matrix_reshaped[i][0]) for i in range(30)] w_base = [m.Param(value=w[i]) for i in range(w.shape[0])] h_base = [m.Param(value=h[i]) for i in range(h.shape[0])] t_base = [m.Param(value=t[i]) for i in range(t.shape[0])] h_cm = m.Param(value=220) ho_cm = m.Param(value=14.4) w_kg = m.Param(value=21.2) s_constraint = m.Param(value=40) # ### Set up x var (main integer variable) ## Initialize x array x = np.empty((30, t.shape[0]), dtype=object) for i in range(30): for j in range(t.shape[0]): x[i][j] = m.Var(value=0, lb=0, integer=True) # ### Set up Constraints ## Total constraint for j in range(len(t_base)): t_contraints = sum(x[i][j] 
for i in range(30)) m.Equation(t_contraints == t_base[j]) ## Weight contraints for i in range(30): w_constraints = sum(x[i][j]*w_base[j] for j in range(len(w_base))) + w_kg m.Equation(w_constraints &lt;= z_constants[i]) ## Height constraints for i in range(30): h_constraints = sum(x[i][j]*h_base[j] for j in range(len(h_base))) + ho_cm m.Equation(h_constraints &lt;= h_cm) ## Neighbor constraints for i in range(9): # set neighbor constraints horizontally over first row neighbor_constraints_1 = m.abs3((sum(x[i][j]*h_base[j] for j in range(len(h_base))) + h_constants[i]) - (sum(x[i+1][j]*h_base[j] for j in range(len(h_base))) + h_constants[i+1])) m.Equation(neighbor_constraints_1 &lt;= s_constraint) for i in range(10,19): # set neighbor constraints horizontally over second row neighbor_constraints_2 = m.abs3((sum(x[i][j]*h_base[j] for j in range(len(h_base))) + h_constants[i]) - (sum(x[i+1][j]*h_base[j] for j in range(len(h_base))) + h_constants[i+1])) m.Equation(neighbor_constraints_2 &lt;= s_constraint) for i in range(20,29): # set neighbor constraints horizontally over second row neighbor_constraints_3 = m.abs3((sum(x[i][j]*h_base[j] for j in range(len(h_base))) + h_constants[i]) - (sum(x[i+1][j]*h_base[j] for j in range(len(h_base))) + h_constants[i+1])) m.Equation(neighbor_constraints_3 &lt;= s_constraint) for i in range(10): # set neighbor constrainst vertically A with B neighbor_constraints_4 = m.abs3((sum(x[i][j]*h_base[j] for j in range(len(h_base))) + h_constants[i]) - (sum(x[i+10][j]*h_base[j] for j in range(len(h_base))) + h_constants[i+10])) m.Equation(neighbor_constraints_4 &lt;= s_constraint) for i in range(10,20): # set neighbor constrainst vertically B with C neighbor_constraints_5 = m.abs3((sum(x[i][j]*h_base[j] for j in range(len(h_base))) + h_constants[i]) - (sum(x[i+10][j]*h_base[j] for j in range(len(h_base))) + h_constants[i+10])) m.Equation(neighbor_constraints_5 &lt;= s_constraint) # ### Mix Score section below ################## ## Create a 
binary variable/array b that identifies if x[i][j] is non-zero or not ## We will use the count of these b[i][j] values in our objective function ## Constraint to set b[i][k] = 1 if x[i][k] &gt; 0, and 0 otherwise ## Use if3 to set b directly based on x values b = np.empty((30, len(t_base)), dtype=object) epsilon = 1e-2 # Small margin allows floating-point considerations for i in range(30): for j in range(len(t_base)): b[i][j] = m.if3(x[i][j] - epsilon, 0, 1) # # Calculation of count(i) for each row counts = [m.Intermediate(m.sum(b[i])) for i in range(30)] # # x_sums for sum of each row in x x_sums = [m.Intermediate(m.sum(x[i])) for i in range(30)] ### Mix Score section above ############################# # ## Run Solver ## ## Set a dummy objective just to identify solutions that are feasible m.Obj(0) # Define the main objective function # mix_score = [counts[i] / m.max2(x_sums[i], 1e-3) for i in range(30)] # m.Obj(m.sum(mix_score)) # Set the solver options m.options.SOLVER = 1 # APOPT solver for non-linear programs # Increase max iterations because we don't care much about time m.solver_options = ['minlp_gap_tol 1.0e-4',\ 'minlp_maximum_iterations 50000',\ 'minlp_max_iter_with_int_sol 40000'] ## Solve m.solve(disp=True) </code></pre> <p>In the Mix score section, you'll see where I include conditional variables defined as the <code>b[i][j]</code> that are 1 or 0 depending on the value of <code>x[i][j]</code> (1 if <code>x[i][j]</code> is non-zero, and 0 otherwise). 
I would like to apply this binary variable to a count function I'm using in the objective function that is commented out:</p> <p><code># mix_score = [counts[i] / m.max2(x_sums[i], 1e-3) for i in range(30)]</code></p> <p><code># m.Obj(m.sum(mix_score))</code></p> <p>I'm surprised that when I enable the objective function</p> <p><code>m.Obj(m.sum(mix_score))</code></p> <p>the solver fails to find a solution, in spite of the fact that I know it can identify at least one feasible solution (from running <code>m.Obj(0)</code>). I would think the solver should at least return a value for the objective function for the feasible <code>x[i][j]</code> found when using <code>m.Obj(0)</code>, but it doesn't. Any help here would be much appreciated.</p>
<python><optimization><gekko>
2024-06-19 15:25:46
1
315
jim
78,643,168
15,648,409
Parse Google BQ SQL queries and get all tables referenced
<p>In a console Google Cloud project I have several datasets with tables/views in them, so most of them have the query from which they came. I am trying to parse every single one of them to get their table dependencies. With &quot;table dependencies&quot; I mean the tables next to their FROM statements or JOIN statements, not returning aliases or whole subqueries/ SELECT statements.</p> <p>I am using this code:</p> <pre><code>from google.cloud import bigquery import sqlparse from sqlparse.sql import IdentifierList, Identifier, Parenthesis, Token from sqlparse.tokens import Keyword, DML def extract_tables(parsed): tables = set() from_seen = False for token in parsed.tokens: if from_seen: if isinstance(token, IdentifierList): for identifier in token.get_identifiers(): tables.add(identifier.get_real_name()) elif isinstance(token, Identifier): tables.add(token.get_real_name()) elif isinstance(token, Parenthesis): subquery = extract_tables(token) tables.update(subquery) elif token.ttype is Keyword and token.value.upper() in ('WHERE', 'GROUP BY', 'HAVING', 'ORDER BY', 'LIMIT', 'UNION', 'EXCEPT', 'INTERSECT'): from_seen = False continue if token.ttype is Keyword and token.value.upper() in ('FROM', 'JOIN'): from_seen = True return tables def extract_table_names(sql): tables = set() parsed = sqlparse.parse(sql) for stmt in parsed: if stmt.get_type() == 'SELECT': tables.update(extract_tables(stmt)) return tables def get_table_query(client, dataset_id, table_id): table = client.get_table(f&quot;{dataset_id}.{table_id}&quot;) if table.table_type == 'VIEW': return table.view_query return None def list_tables_and_sources(project_id): client = bigquery.Client(project=project_id) datasets = list(client.list_datasets()) table_sources = {} for dataset in datasets: dataset_id = dataset.dataset_id tables = list(client.list_tables(dataset_id)) for table in tables: table_id = table.table_id query = get_table_query(client, dataset_id, table_id) if query: source_tables = 
extract_table_names(query) table_key = f&quot;{dataset_id}.{table_id}&quot; table_sources[table_key] = list(source_tables) # Convert set to list return table_sources project_id = 'my-project-id' table_sources = list_tables_and_sources(project_id) import json print(json.dumps(table_sources, indent=4)) </code></pre> <p>I want my output to be in that schema <code>{&quot;my-project-id.reported_data.orders&quot; : {dataset.tableName, dataset.TableName}}</code>.</p> <p>Most tables are being parsed fine outputting exactly what it should be but some tables/queries are skipped entirely getting a result of an empty array.</p> <p>For example, I have this view</p> <pre><code>SELECT barcode, CAST(CONCAT(min(EXTRACT(YEAR FROM active_date)), '-', min(EXTRACT(MONTH FROM active_date)), '-01') as DATE) as first_of_month, count(active_date) as active_days, FROM `my-project-id.erp_raw_data.fsn_lines` GROUP BY barcode, (EXTRACT(YEAR FROM active_date)), EXTRACT(MONTH FROM active_date) ORDER BY barcode, first_of_month; </code></pre> <p>But my output looks like this</p> <pre><code>{&quot;erp_formatted_data.active_days&quot;: [],...} </code></pre> <p>Another view/query that after parsing returns an empty array is that one</p> <pre><code>SELECT date, barcode, calendar_month, calendar_year, quarter, image_url, sold_quantity, number_of_sales, number_of_invoices, invoiced_quantity, invoiced_revenue, views, impressions, supplier_name, main_category, sub_category, (number_of_sales / views) as sales_per_views, (number_of_sales / impressions) as sales_per_impressions, IF(number_of_sales &gt; 0 OR views &gt; 0 OR impressions &gt; 0 OR active_day = 1, 1, 0) as active_day, FROM `my-project-id.erp_formatted_data.daily_barcodes_stats` </code></pre> <p>I have no idea what the problem could be in my code. Any ideas?</p>
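A side note on why those particular views might come back empty: both failing queries have a trailing comma right before `FROM`, which can throw off sqlparse's statement grammar so that the `FROM` clause ends up inside an `IdentifierList`. As a hedged sanity check (not a replacement for a real SQL parser — it ignores comments, CTEs, and unquoted table names), a plain regex over backtick-quoted BigQuery identifiers can be compared against the sqlparse output:

```python
import re

def extract_bq_tables(sql: str) -> set:
    # Grab backtick-quoted identifiers that directly follow FROM or JOIN.
    # This deliberately sidesteps statement parsing, which can be confused
    # by a trailing comma immediately before FROM.
    pattern = re.compile(r"\b(?:FROM|JOIN)\s+`([^`]+)`", re.IGNORECASE)
    return set(pattern.findall(sql))

# One of the queries from the question, abbreviated, with the
# trailing comma before FROM kept intact.
sql = """
SELECT barcode,
       count(active_date) as active_days,
FROM `my-project-id.erp_raw_data.fsn_lines`
GROUP BY barcode
"""
tables = extract_bq_tables(sql)
```

If the regex finds tables that `extract_table_names` misses, the trailing comma is the likely culprit.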
<python><sql><google-bigquery><sql-parser>
2024-06-19 14:38:11
2
431
Zoi K.
78,643,131
1,616,528
HDF Error when reading a NetCDF file as part of tests
<p>My code saves and analyzes data in NetCDF4 format. I have no problem whatsoever with the analysis. However, when I run unit tests in <code>tox</code> I get a ton of HDF and OS errors, e.g.: <a href="https://github.com/StingraySoftware/HENDRICS/actions/runs/9580442835/job/26417155244?pr=164" rel="nofollow noreferrer">https://github.com/StingraySoftware/HENDRICS/actions/runs/9580442835/job/26417155244?pr=164</a></p> <p>I could reproduce this when running <code>tox -e py311-test-alldeps</code>, but only on Linux. On Mac OS (M1) the same <code>tox</code> command works with no issue, and if I run the tests with <code>pytest</code> in a fresh conda environment with the same software versions (in particular, the same <code>netcdf4</code>, <code>h5py</code>, and <code>numpy</code> versions) as the <code>tox</code> environment, it works on all architectures. Apparently, I can only reproduce the issue while running with <code>tox</code> on <code>Linux</code>. This makes debugging a lot more difficult.</p> <p>Based on <a href="https://stackoverflow.com/questions/49317927/errno-101-netcdf-hdf-error-when-opening-netcdf-file">an old question</a>, I tried to set <code>HDF5_USE_FILE_LOCKING=FALSE</code> in the <code>setenv</code> section of <code>tox.ini</code>, to no avail.</p>
<python><pytest><hdf5><netcdf4><tox>
2024-06-19 14:31:58
1
329
matteo
78,643,122
4,498,251
pandas df.dtypes does not identify the timestamp data type correctly?
<p>Edit: I don't see this as a duplicate of the question marked right now. The issue there is about loc and &quot;memory allocation&quot; (how the data is being presented) while in this question it seems to be about mixing different timezones (what the data actually is)...</p> <p>I have the following pandas dataframe at hand:</p> <pre><code> key created 0 DLAND-1957 2024-05-23 12:59:25+02:00 1 DLAND-1956 2024-05-22 13:53:09+01:00 </code></pre> <p>it is being created in this way:</p> <pre><code>import pandas as pd key = [&quot;DLAND-1957&quot;, &quot;DLAND-1956&quot;] created = [&quot;2024-05-23 12:59:25+02:00&quot;, &quot;2024-05-22 13:53:09+01:00&quot;] df = pd.DataFrame({&quot;key&quot;:key, &quot;created&quot;:created}) df[&quot;created&quot;] = pd.to_datetime(df[&quot;created&quot;]) </code></pre> <p>As you can see, the column &quot;created&quot; is a &quot;timestamp&quot;:</p> <pre><code>type(list(df.iloc[0])[1]) type(list(df.iloc[1])[1]) </code></pre> <p>both return</p> <pre><code>pandas._libs.tslibs.timestamps.Timestamp </code></pre> <p>However,</p> <pre><code>df.dtypes </code></pre> <p>returns</p> <pre><code>key object created object </code></pre> <p>So just the mere fact that timestamps are present from different timezones makes pandas generalize the type of the column to become &quot;object&quot;? That is a bit problematic when trying to detect data types in a pandas dataframe... how (if not using df.dtypes) do I correctly detect timestamp and other data types in a pandas dataframe?</p>
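A sketch of one way around the mixed-offset problem (assuming normalizing everything to UTC is acceptable for the use case): passing `utc=True` to `pd.to_datetime` converts the differing offsets into a single tz-aware dtype, which `df.dtypes` and pandas' dtype-inspection helpers then recognize instead of falling back to `object`:

```python
import pandas as pd

key = ["DLAND-1957", "DLAND-1956"]
created = ["2024-05-23 12:59:25+02:00", "2024-05-22 13:53:09+01:00"]
df = pd.DataFrame({"key": key, "created": created})

# utc=True normalizes the mixed offsets to UTC, so the column becomes
# datetime64[ns, UTC] instead of an object column of Timestamp scalars.
df["created"] = pd.to_datetime(df["created"], utc=True)

dtype_name = str(df["created"].dtype)
is_datetime = pd.api.types.is_datetime64_any_dtype(df["created"])
```

The original wall-clock-with-offset values are preserved in meaning (only the display changes), and detection via `pd.api.types.is_datetime64_any_dtype` works as expected.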
<python><pandas><timestamp>
2024-06-19 14:30:39
0
1,023
Fabian Werner
78,643,121
11,756,186
Share Python object between blocks in Simulink
<p>I have created a class in Python. In my application this class is a kind of lookup table.</p> <p>Here is a sample class to illustrate the topic:</p> <pre><code>class ExampleClass: def __init__(self, var1): self.prop1 = var1 def get_property(self): return self.prop1 </code></pre> <p>Matlab offers the ability to call Python modules directly within the Matlab environment (see <a href="https://fr.mathworks.com/help/matlab/matlab_external/call-user-defined-custom-module.html" rel="nofollow noreferrer">Matlab docs</a>)</p> <p>One calls Python like:</p> <pre><code>&gt;&gt;&gt; py.importlib.import_module(&quot;ExampleClass&quot;) &gt;&gt;&gt; x = py.ExampleClass.ExampleClass(1) &gt;&gt;&gt; x.get_property() ans = 1 </code></pre> <p>What I would like to do is to make <code>x</code> available in Simulink m-function block<strong>s</strong> in order to call <code>get_property()</code> from all the m-function blocks. Note that the creation of <code>x</code> is computationally expensive, therefore I want to load it into the workspace once when initializing the simulation.</p> <p>What I tried so far: I tried passing my <code>ExampleClass</code> to the m-functions via a mask parameter, but I get an error:</p> <p><code>Error:Expression 'object name' for type of data 'x' did not evaluate to a valid type.</code></p> <p>This is understandable, because Simulink does not allow the use of data types other than <code>int</code>, <code>double</code> and other standard data types.</p> <p>Is there a way around this to be able to call <code>x.get_property()</code> from my m-functions?</p>
<python><matlab><simulink>
2024-06-19 14:30:28
1
681
Arthur
78,643,088
1,088,979
Pylance in VS Code Jupyter Notebooks cannot resolve modules
<p>I am working with VS Code Jupyter Notebooks, and Pylance cannot resolve some of the modules that I know have been successfully installed into my virtual environment.</p> <p>Below is a screenshot:</p> <p><a href="https://i.sstatic.net/EyVfaLZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EyVfaLZP.png" alt="enter image description here" /></a></p> <p>The notebook is using the proper virtual environment:</p> <p><a href="https://i.sstatic.net/mJA95ZDs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mJA95ZDs.png" alt="enter image description here" /></a></p> <p>The code runs successfully. The problem is with Pylance, as I think it is somehow confused.</p> <p>How can I have Pylance acknowledge the library that has been successfully installed into the virtual environment?</p>
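A hedged pointer (the paths below are illustrative, not taken from the screenshots): besides picking the kernel for the notebook, the Python extension's own interpreter selection is what drives Pylance; if the two differ, Pylance analyzes against the wrong site-packages even though the code runs. Reselecting the interpreter ("Python: Select Interpreter" in the command palette) usually fixes it; explicit extra search paths in the workspace `settings.json` are another lever:

```jsonc
{
    // Illustrative workspace settings.json; adjust both paths to the
    // actual virtual environment the notebook kernel uses.
    "python.defaultInterpreterPath": "${workspaceFolder}/venv/bin/python",
    "python.analysis.extraPaths": [
        "./venv/lib/python3.11/site-packages"
    ]
}
```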
<python><visual-studio-code><jupyter-notebook><pylance>
2024-06-19 14:21:22
0
9,584
Allan Xu
78,643,081
3,572,950
How to group by and aggregate Int field and array[int] fields?
<p>So, let's say I've got some table with 2 fields - <code>first_col</code> is <code>int</code> and <code>second_col</code> is <code>array[int]</code>:</p> <pre><code>from sqlalchemy import ( Table, Column, MetaData, select, Integer, ) from sqlalchemy.dialects.postgresql import ARRAY metadata = MetaData() some_table = Table( &quot;some_table&quot;, metadata, Column(&quot;id&quot;, Integer, primary_key=True), Column(&quot;first_col&quot;, Integer), Column(&quot;second_col&quot;, ARRAY(Integer)), ) </code></pre> <p>And I want to take all objects from the <code>db</code> with both <code>first_col</code> and <code>second_col</code> and aggregate them; I can show what I want in Python code:</p> <pre><code>def get_some_stuff_and_aggregate() -&gt; dict: query = ( select( [ some_table.c.first_col, some_table.c.second_col, ] ) .select_from(some_table) ) # execute query here... it's not important here query_res = ... res_dict = {} for some_obj in query_res: for some_ids in ( [some_obj.first_col], some_obj.second_col, ): for some_id in some_ids: if some_id in res_dict: res_dict[some_id] += 1 else: res_dict[some_id] = 1 return res_dict </code></pre> <p>But I guess I can do this at the SQL (ORM) level without this dirty and slow Python aggregation - can you please help me? How can I do it better, using the <code>ORM</code>?</p>
<python><postgresql><sqlalchemy>
2024-06-19 14:20:06
1
1,438
Alexey
78,643,077
20,920,790
How to make a stop/continue task in Airflow?
<p>I'm trying to make an Airflow DAG that updates a database. If there are no errors while getting data from the API, I need to insert the data into the database. If there are any errors, I need to send error messages.</p> <p>So I need to add a check on the length of the errors dict.<a href="https://i.sstatic.net/OuESHU18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OuESHU18.png" alt="enter image description here" /></a></p> <p>Actually I need to transform the data when there are no errors, before inserting it into the database, but I think there's no need to add that code to the question.</p> <p>How do I make tasks stop/continue using decorators? The continue/stop logic is: if len(errors) == 0, insert the table into the database; else, send the error text to Telegram.</p> <p>P.S. I build a dict with the table because I actually get several tables from the API.</p> <p>Here's my code:</p> <pre><code>import datetime import pandas as pd import time import clickhouse_connect import api_library from airflow.decorators import dag, task default_args = { 'owner': 'owner', 'depends_on_past': False, 'retries': 2, 'retry_delay': datetime.timedelta(minutes=5), 'start_date': datetime.datetime(2024, 6, 20) } schedule_interval = '*/15 * * * *' connect = clickhouse_connect.get_client( host = '*.*.*.*' , port = 8443 , database = 'database' , username = 'admin' , password = 'password' ) bearer_key = '***' user_key = '***' def get_data_or_raise_error_with_retry(func, max_tries=10, **args): for _ in range(max_tries): try: # add sleep to avoid API break time.sleep(0.3) if len(args) == 0: return func() else: return func(**args) # I get error text instead of a pd.DataFrame except Exception as e: return e def make_dict_api_tables(tables: list): error_text = 'Error in table {}.\nCheck function.{}.' # dict with functions I use to get tables tables = { 'stores': { 'text': 'stores' # table name , 'function': 'get_table' # function to get data from API , 'result': tables[0] # result with pd.DataFrame or tuple in some cases or error text } } tables_from_api = {} tables_from_api['stores'] = { 'result': tables['stores']['result'] , 'error_text': error_text.format(tables['stores']['text'], tables['stores']['function']) } return tables_from_api def make_dict_with_errors(tables_dict: dict): messages_with_transform_errors = {} for key in tables_dict.keys(): if type(tables_dict[key]['result']) not in [pd.DataFrame, tuple]: # if result is not a pd.DataFrame or tuple, it's an error # add error text to dict messages_with_transform_errors[key] = tables_dict[key]['error_text'] return messages_with_transform_errors @dag(default_args=default_args, schedule_interval=schedule_interval, catchup=False, concurrency=4) def dag_update_database(): @task def connect_to_api(bearer_key: str, user_key: str): # connecting to API api = api_library(bearer_key, user_key) return api @task def get_table_from_api(api, tries: int): # get table, result is a pd.DataFrame result_from_salons_api = get_data_or_raise_error_with_retry(api.get_table, tries) return list(result_from_salons_api) @task def make_dict_with_tables_and_errors(table: list): # make dict with table tables_dict = make_dict_api_tables(table) # make dict with errors, dict will be empty if there's no errors errors = make_dict_with_errors(tables_dict) return tables_dict, errors </code></pre>
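One way to express the continue/stop logic described above (a sketch with hypothetical task ids, not the asker's actual DAG): Airflow 2.x provides a `@task.branch` decorator whose callable returns the task_id to follow, and every other direct downstream task gets skipped. The decision itself is plain Python and can be unit-tested on its own:

```python
# Plain-Python version of the branch decision. Inside the DAG you would
# decorate it with @task.branch (Airflow 2.x) and wire it as
#   branch >> [insert_tables_to_database, send_errors_to_telegram]
# The task ids below are hypothetical placeholders.
def choose_next_task(errors: dict) -> str:
    if len(errors) == 0:
        return "insert_tables_to_database"
    return "send_errors_to_telegram"
```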
<python><airflow>
2024-06-19 14:18:40
1
402
John Doe
78,642,905
13,728,700
Mean over two consecutive elements of array
<p>I would like to compute the mean of two consecutive elements of a python array, such that the length of the final array has the length equal to that of the original array minus one (so something like <code>np.diff</code>, but with the mean instead of the difference).</p> <p>So if I have an array</p> <pre><code>a = [1, 2, 3, 4, 5, 6] </code></pre> <p>the output I would like to have would be</p> <pre><code>a_mean = [1.5, 2.5, 3.5, 4.5, 5.5] </code></pre> <p>Is there any smarter solution using numpy rather than looping? I could not come up with a smart solution.</p>
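A minimal sketch of the usual vectorized idiom, the additive sibling of `np.diff`: average each element with its successor using two shifted slices, so the result is one element shorter than the input and no Python loop is needed:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6])

# Pairwise mean of consecutive elements: a[:-1] is [1..5], a[1:] is [2..6],
# so their elementwise mean has length len(a) - 1, just like np.diff.
a_mean = (a[:-1] + a[1:]) / 2
```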
<python><arrays><numpy><mean>
2024-06-19 13:48:33
1
306
BlackPhoenix
78,642,653
315,168
Discrete Real dimension spacing in scikit-optimize
<p>Let's say I am searching over a dimension:</p> <pre class="lang-py prettyprint-override"><code>from skopt import space search_space = [ space.Real(1, 10, name=&quot;my_scale&quot;) ] </code></pre> <p>How can I make this Real dimension be searched in discrete steps, e.g. 0.25? In my case, the data calculated for each real value can be cached, and I know that fine-tuning the value with small steps like 0.00001 does not give meaningful improvements. However, if the value is evaluated at 1.25, 1.50, 1.75, etc., I can cache the results and speed up the optimisation process a lot.</p> <p>E.g. something like:</p> <pre class="lang-py prettyprint-override"><code> search_space = [ space.Real(1, 10, step=0.25, name=&quot;my_scale&quot;) ] </code></pre>
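skopt's `Real` does not appear to take a `step` argument, so a common workaround is to enumerate the grid yourself and search it as a `Categorical` dimension. The grid arithmetic below is plain Python; the scikit-optimize call is left in comments since it needs the package installed:

```python
# Build the discrete grid 1.0, 1.25, ..., 10.0. 0.25 is exactly
# representable in binary floating point, so the endpoints come out exact.
low, high, step = 1.0, 10.0, 0.25
n_steps = int(round((high - low) / step))
grid = [low + i * step for i in range(n_steps + 1)]

# Then, with scikit-optimize installed (sketch, not verified here):
# from skopt import space
# search_space = [space.Categorical(grid, name="my_scale")]
```

Caching per grid value then works because the optimizer can only ever propose one of the 37 enumerated points.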
<python><mathematical-optimization><scikits><scikit-optimize>
2024-06-19 12:57:31
0
84,872
Mikko Ohtamaa
78,642,650
9,072,753
How to type hint a function that parallelizes multiple functions and returns their results?
<p>Link to mypy: <a href="https://mypy-play.net/?mypy=latest&amp;python=3.12&amp;gist=a4da5db5bfbdf1e6bddce442286cc843" rel="nofollow noreferrer">https://mypy-play.net/?mypy=latest&amp;python=3.12&amp;gist=a4da5db5bfbdf1e6bddce442286cc843</a></p> <p>More often than not, I find myself connecting to multiple APIs and then collecting the results. Requests to multiple APIs can be made in parallel.</p> <pre><code>from typing import Dict, List, TypeVar, Callable, Tuple, overload from concurrent.futures import ThreadPoolExecutor def get_airflow() -&gt; str: return &quot;a&quot; def get_prometheus() -&gt; Dict[str, str]: return {&quot;b&quot;: &quot;c&quot;} def get_zabbix() -&gt; List[str]: return [&quot;d&quot;, &quot;e&quot;] adata = get_airflow() pdata = get_prometheus() zdata = get_zabbix() </code></pre> <p>I can parallelize it using ThreadPoolExecutor:</p> <pre><code>exe = ThreadPoolExecutor() arr = [ exe.submit(get_airflow), exe.submit(get_prometheus), exe.submit(get_zabbix), ] adata = arr[0].result() pdata = arr[1].result() zdata = arr[2].result() </code></pre> <p>However, this design requires that I keep a separate list of submits and a separate list of variable assignments, and I have to keep those lists in sync and don't mix. It also loses any typing information. Is there a way I could write it better? Something along:</p> <pre><code>adata, pdata, zdata = paralelize(get_airflow, get_prometheus, get_zabbix) </code></pre> <p>How to write such a function to preserve typing information? I tried the following, but I do not understand how to preserve types or construct a tuple with a dynamic number of elements:</p> <pre><code>T1 = TypeVar('T1') T2 = TypeVar('T2') T3 = TypeVar('T3') @overload def parallelize(a: Callable[[], T1]) -&gt; Tuple[T1]: ... @overload def parallelize(a: Callable[[], T1], b: Callable[[], T2]) -&gt; Tuple[T1, T2]: ... @overload def parallelize(a: Callable[[], T1], b: Callable[[], T2], c: Callable[[], T3]) -&gt; Tuple[T1, T2, T3]: ... 
def parallelize(*cbs: Callable[[], Any]) -&gt; Tuple[Any]: with ThreadPoolExecutor() as exe: return (x.result() for x in [exe.submit(cb) for cb in cbs]) </code></pre> <p>How to write such a function to preserve typing information? Is it possible using standard functions, like <code>futures.concurrent.wait(..)[0]</code>? Is there a better way to do it?</p>
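A sketch of a working version of the attempt above: the overload stack is the standard way to preserve per-callable return types (PEP 646's `TypeVarTuple` cannot map `Callable[[], T]` over a tuple of types, so overloads remain the practical route), and the runtime bug is that a generator expression was returned where a `Tuple` was declared:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Callable, Tuple, TypeVar, overload

T1 = TypeVar("T1")
T2 = TypeVar("T2")
T3 = TypeVar("T3")

@overload
def parallelize(a: Callable[[], T1]) -> Tuple[T1]: ...
@overload
def parallelize(a: Callable[[], T1], b: Callable[[], T2]) -> Tuple[T1, T2]: ...
@overload
def parallelize(
    a: Callable[[], T1], b: Callable[[], T2], c: Callable[[], T3]
) -> Tuple[T1, T2, T3]: ...

def parallelize(*cbs: Callable[[], Any]) -> Tuple[Any, ...]:
    # Submit first, then collect into a *real* tuple: returning a
    # generator expression neither matches the declared Tuple type nor
    # resolves the futures before the executor shuts down.
    with ThreadPoolExecutor() as exe:
        futures = [exe.submit(cb) for cb in cbs]
        return tuple(f.result() for f in futures)

# Usage mirrors the question (lambdas stand in for the API calls):
adata, pdata, zdata = parallelize(
    lambda: "a", lambda: {"b": "c"}, lambda: ["d", "e"]
)
```

Type checkers then infer `adata: str`, `pdata: Dict[str, str]`, `zdata: List[str]` through the matching overload; add further overloads for more than three callables.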
<python><python-typing>
2024-06-19 12:57:26
1
145,478
KamilCuk
78,642,622
584,532
How to start a replication with psycopg3?
<p>How can I start the replication with psycopg3? Or is this only supported by psycopg2 so far?</p> <p>In psycopg2, one would create a connection with <code>connection_factory = psycopg2.extras.LogicalReplicationConnection</code> and then call <code>start_replication</code> on a cursor. Is there a similar connection factory available in psycopg3?</p>
<python><postgresql><psycopg3>
2024-06-19 12:50:41
1
2,643
nrainer
78,642,527
11,714,087
Python package dependency upgrade from requirements.txt
<p>I have a Python application with ~70 packages in its requirements.txt file.</p> <p>It was running fine, but suddenly <code>snowflake-connector-python==2.7.3</code> and <code>schemachange==3.4.2</code> started installing numpy==2.0.0, while they were installing numpy==1.26.4 just a day before. This is resulting in a compatibility issue with Pandas, as my app is throwing the following errors:</p> <pre><code> from pandas._libs.interval import Interval File &quot;pandas/_libs/interval.pyx&quot;, line 1, in init pandas._libs.interval ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject </code></pre> <ul> <li><p>If I add pandas==2.2.2 or another recent version, then other compatibility issues arise.</p> </li> <li><p>For now I am running my app by restricting numpy to 1.26.4.</p> </li> <li><p>I want to understand why this might have happened, and whether it is normal in Python.</p> </li> <li><p>What is the best way to deal with this type of situation?</p> </li> </ul>
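The mechanism here: transitive dependencies like numpy are not pinned unless something in the dependency tree pins them, so the moment numpy 2.0.0 was released, fresh installs started picking it up. The usual containment (a sketch; exact versions are whatever the app is known to work with) is an explicit upper bound, either directly in requirements.txt or in a constraints file applied with `pip install -r requirements.txt -c constraints.txt`:

```text
# constraints.txt (illustrative) - caps transitive dependencies
# without turning them into direct requirements
numpy<2
```

A fully pinned lockfile (e.g. generated by pip-tools or similar) is the stronger version of the same idea: it freezes the entire tree so releases elsewhere cannot change what gets installed.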
<python><numpy><pip><numpy-2.x>
2024-06-19 12:30:01
1
377
palamuGuy
78,642,391
8,973,620
Inconsistent graphs with Altair despite same package version
<p>I am working on creating graphs using Altair in different environments with different package sets. Despite ensuring the Altair version is consistent across these environments, I am observing significant differences in the graphs generated. Strangely, one set of graphs closely resembles those generated using Pandas plotting functions.</p> <pre class="lang-py prettyprint-override"><code>import altair as alt charts = [] for i in frame[&quot;class&quot;].unique(): chart = ( alt.Chart(frame.query(f'class == &quot;{i}&quot;')) .mark_line() .encode( x=alt.X(&quot;date&quot;).title(&quot;Date&quot;), y=alt.Y(&quot;value&quot;).title(f'{label_map[i]}'), ) .properties(title=titles_map[i], width=600, height=200) ) charts.append(chart) chart = alt.vconcat(*charts) chart.save(plot_filename) </code></pre> <p>Environment A (correct layout, one of the three graphs):</p> <p><a href="https://i.sstatic.net/AhBrO08J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AhBrO08J.png" alt="enter image description here" /></a></p> <p>Environment B (wrong layout, similar to Pandas graph):</p> <p><a href="https://i.sstatic.net/Fy102JyV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fy102JyV.png" alt="enter image description here" /></a></p>
<python><graph><altair>
2024-06-19 12:02:50
1
18,110
Mykola Zotko
78,642,383
508,907
Python, Typer: disable printing of elements such as the traceback and locals
<p>I am using <a href="https://typer.tiangolo.com/" rel="nofollow noreferrer">Typer</a> and it looks pretty cool.</p> <p>However, in a particular case, I want to hide some details from being printed. In particular, consider a case like:</p> <pre class="lang-py prettyprint-override"><code>import os import typer app = typer.Typer() @app.command() def connect_to_super_secret_server(): host = os.environ.get(&quot;HOST&quot;, 'foohost') port = ... password = ... username = ... .... assert error_foobar_did_not_happen, &quot;FooBar must be set.&quot; ... return connection </code></pre> <p>When the assertion triggers an exception, I am getting something like:</p> <pre><code>╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ .............. │ .............. │ ..Huge traceback. │ .............. │ .............. │ .............. │ .............. │ .............. │ .............. │ .............. │ .............. │ │ │ ╭──────── locals ────────╮ │ │ │ host = 'localhost' │ │ │ │ password = sensitive │ │ │ │ port = 9200 │ │ │ │ username = 'admin' │ │ │ ╰────────────────────────╯ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ AssertionError: FooBar must be set </code></pre> <p>The traceback is really cool when developing, but would it be possible to disable it in production?</p> <p>Also, most importantly, would there be a way to hide the value of the sensitive password variable so as not to print it in plaintext?</p>
<python><typer>
2024-06-19 12:01:50
1
14,360
ntg
78,642,298
8,964,393
Check the following element in a list in a pandas dataframe
<p>I have created the following pandas dataframe</p> <pre><code>import pandas as pd import numpy as np ds = { 'col1' : [ ['U', 'U', 'U', 'U', 'U', 1, 0, 0, 0, 'U','U', None], [6, 5, 4, 3, 2], [0, 0, 0, 'U', 'U'], [0, 1, 'U', 'U', 'U'], [0, 'U', 'U', 'U', None] ] } df = pd.DataFrame(data=ds) </code></pre> <p>The dataframe looks like this:</p> <pre><code>print(df) col1 0 [U, U, U, U, U, 1, 0, 0, 0, U, U, None] 1 [6, 5, 4, 3, 2] 2 [0, 0, 0, U, U] 3 [0, 1, U, U, U] 4 [0, U, U, U, None] </code></pre> <p>For each row in <code>col1</code>, I need to check whether any element equal to <code>U</code> in the list is followed (from left to right) by any value apart from <code>U</code> and <code>None</code>: in that case I'd create a new column (called <code>iCount</code>) with a value of 1, else 0.</p> <p>In the example above, the resulting dataframe would look like this:</p> <pre><code> col1 iCount 0 [U, U, U, U, U, 1, 0, 0, 0, U, U, None] 1 1 [6, 5, 4, 3, 2] 0 2 [0, 0, 0, U, U] 0 3 [0, 1, U, U, U] 0 4 [0, U, U, U, None] 0 </code></pre> <p>Only in the first row is the value <code>U</code> followed by a value which is neither <code>U</code> nor <code>None</code> (it is <code>1</code>).</p> <p>I have tried this code:</p> <pre><code>col5 = np.array(df['col1']) for i in range(len(df)): iCount = 0 for j in range(len(col5[i])-1): print(col5[i][j]) if((col5[i][j] == &quot;U&quot;) &amp; ((col5[i][j+1] != None) &amp; (col5[i][j+1] != &quot;U&quot;))): iCount += 1 else: iCount = iCount </code></pre> <p>But I get this (wrong) dataframe:</p> <pre><code> col1 iCount 0 [U, U, U, U, U, 1, 0, 0, 0, U, U, None] 0 1 [6, 5, 4, 3, 2] 0 2 [0, 0, 0, U, U] 0 3 [0, 1, U, U, U] 0 4 [0, U, U, U, None] 0 </code></pre> <p>Can anyone help me please?</p>
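For what it's worth, a compact sketch of the check described above: pair each element with its successor via `zip` and flag rows where a `'U'` is immediately followed by something other than `'U'`/`None` (uses the frame from the question):

```python
import pandas as pd

def u_followed_by_value(lst):
    # 1 if any 'U' is immediately followed (left to right) by an element
    # that is neither 'U' nor None, else 0
    return int(any(a == 'U' and b != 'U' and b is not None
                   for a, b in zip(lst, lst[1:])))

ds = {'col1': [['U', 'U', 'U', 'U', 'U', 1, 0, 0, 0, 'U', 'U', None],
               [6, 5, 4, 3, 2],
               [0, 0, 0, 'U', 'U'],
               [0, 1, 'U', 'U', 'U'],
               [0, 'U', 'U', 'U', None]]}
df = pd.DataFrame(ds)
df['iCount'] = df['col1'].apply(u_followed_by_value)
print(df['iCount'].tolist())  # [1, 0, 0, 0, 0]
```

Unlike the loop in the question, this assigns the per-row result back into the dataframe instead of recomputing (and discarding) a scalar `iCount`.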
<python><pandas><dataframe><list>
2024-06-19 11:46:36
5
1,762
Giampaolo Levorato
78,642,191
4,435,175
How to write a dataframe to BigQuery and overwrite partition instead of the table?
<p>I need to write a polars dataframe into a BigQuery table. The table is partitioned by date.</p> <p>When I need to run a backfilling script, I iterate over a date range, get the data from some source (an API in this case), convert it into a dataframe, manipulate it a bit and write it into the BQ table.</p> <p>But instead of overwriting the partition for that date, it overwrites the whole table.</p> <p>How can I only overwrite the partition?</p> <p>My code so far:</p> <pre><code>import io import polars as pl from google.cloud import bigquery # create period_range from internal util_package for date in period_range: data = &quot;get some API data here per date&quot; df = pl.read_csv(data).select(pl.col(pl.INT64)) client = bigquery.Client() with io.BytesIO() as stream: df.write_parquet(stream) stream.seek(0) job = client.load_table_from_file( file_obj=stream, destination=&quot;analytics.ads.vendor_name&quot;, project=&quot;mycompany_ads&quot;, location=&quot;EU&quot;, job_config=bigquery.LoadJobConfig( source_format=bigquery.SourceFormat.PARQUET, time_partitioning=bigquery.TimePartitioning( type_=bigquery.TimePartitioningType.DAY, field=&quot;date&quot;, # name of column to use for partitioning require_partition_filter=True, ), clustering_fields=[&quot;domain&quot;, &quot;type&quot;, &quot;placement&quot;], autodetect=True, schema=None, write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE, ), ) job.result() # Waits for the job to complete print(&quot;ETL finished&quot;) </code></pre>
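One approach worth noting (standard BigQuery behavior, not specific to polars): target a single partition with a partition decorator, `table$YYYYMMDD`, so that `WRITE_TRUNCATE` replaces only that day's partition instead of the whole table. A minimal sketch of building the destination string per loop iteration:

```python
from datetime import date

def partition_destination(table: str, day: date) -> str:
    # BigQuery partition decorator: "dataset.table$YYYYMMDD" addresses a
    # single day partition, so WRITE_TRUNCATE only replaces that partition.
    return f"{table}${day:%Y%m%d}"

dest = partition_destination("analytics.ads.vendor_name", date(2024, 6, 18))
print(dest)  # analytics.ads.vendor_name$20240618

# then, per date in the loop (sketch, not run here):
# job = client.load_table_from_file(file_obj=stream, destination=dest, ...)
```

Assumption: the target table already exists as a date-partitioned table; decorator writes address existing partitions rather than (re)creating the table's partitioning.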
<python><dataframe><google-bigquery><python-polars>
2024-06-19 11:20:54
1
2,980
Vega
78,642,079
2,736,559
How to properly calculate PSD plot (Power Spectrum Density Plot) for images in order to remove periodic noise?
<p>I'm trying to remove periodic noise from an image using PSDP, I had some success, but I'm not sure if what I'm doing is 100% correct.<br /> This is basically a kind of follow up to this <a href="https://www.youtube.com/watch?v=s2K1JfNR7Sc" rel="nofollow noreferrer">video lecture</a> which discusses this very subject on 1d signals.</p> <p>What I have done so far:</p> <ol> <li>Initially I tried flattening the whole image, and then treating it as a 1D signal, this obviously gives me a plot, but the plot doesn't look right honestly and the final result is not that appealing.</li> </ol> <p>This is the first try:</p> <pre class="lang-py prettyprint-override"><code># img link https://github.com/VladKarpushin/Periodic-noise-removing-filter/blob/master/www/images/period_input.jpg?raw=true img = cv2.imread('./img/periodic_noisy_image2.jpg',0) img_flattened = img.flatten() n = img_flattened.shape[0] # 447561 fft = np.fft.fft(img_flattened, img_flattened.shape[0]) # the values range is just absurdly large, so # we have to use log at some point to get the # values range to become sensible! psd = fft*np.conj(fft)/n freq = 1/n * np.arange(n) L = np.arange(1,np.floor(n/2),dtype='int') # use log so we have a sensible range! psd_log = np.log(psd) print(f'{psd_log.min()=} {psd_log.max()=}') # cut off range to remove noise! 
indexes = psd_log&lt;15 # use exp to get the original values for plotting comparison psd_cleaned = np.exp(psd_log * indexes) # get the denoised fft fft_cleaned = fft * indexes # in case the initial parts were affected, # let's restore it from fft so the final image looks right span = 10 fft_cleaned[:span] = fft[:span] # get back the image denoised_img = np.fft.ifftn(fft_cleaned).real.clip(0,255).astype(np.uint8).reshape(img.shape) plt.subplot(2,2,1), plt.imshow(img,cmap='gray'), plt.title('original image') plt.subplot(2,2,2), plt.imshow(denoised_img, cmap='gray'), plt.title('denoised image') plt.subplot(2,2,3), plt.plot(freq[L],psd[L]), plt.title('PSD') plt.subplot(2,2,4), plt.plot(freq[L],psd_cleaned[L]), plt.title('PSD clean') plt.show() </code></pre> <p>This is the output. The image is denoised a bit, but overall it doesn't sit right with me, as I assume I should at least get as good a result as in my second attempt; the plots also look weird.<br /> <a href="https://i.sstatic.net/TM1yiTUJ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TM1yiTUJ.jpg" alt="enter image description here" /></a></p> <ol start="2"> <li>In my second attempt, I simply calculated the power spectrum the normal way, and got a much better result imho:</li> </ol> <pre class="lang-py prettyprint-override"><code># Read the image in grayscale img = cv2.imread('./img/periodic_noisy_image2.jpg', 0) # Perform 2D Fourier transform fft = np.fft.fftn(img) fft_shift = np.fft.fftshift(fft) # Calculate Power Spectrum Density, it's the same as doing fft_shift*np.conj(fft_shift) # note that abs(fft_shift) calculates the square root of the power spectrum, so we **2 it to get the actual power spectrum! # but we still need to divide it by the frequency to get the plot (for visualization only)! # this is what we do next!
# I use log to make large numbers smaller and small numbers larger so they show up properly in visualization psd = np.log(np.abs(fft_shift)**2) # now I can filter out the bright spots which signal noise # take the indexes belonging to these large values # and then use that to set them in the actual fft to 0 to suppress them # 20-22 image gets too smoothed out, and &gt;24, it's still visibly noisy ind = psd&lt;23 psd2 = psd*ind fft_shift2 = ind * fft_shift # since this is not accurate, we may very well end up destroying # the center of the fft which contains low freq important image information # (it has large values there as well) so we grab that area from fft and copy # it back to restore the lost values this way! cx,cy = img.shape[0]//2, img.shape[1]//2 area = 20 # restore the center in case it was overwritten! fft_shift2[cx-area:cx+area,cy-area:cy+area] = fft_shift[cx-area:cx+area,cy-area:cy+area] ifft_shift2 = np.fft.ifftshift(fft_shift2) denoised_img = np.fft.ifftn(ifft_shift2).real.clip(0,255).astype(np.uint8) # Get frequencies for each dimension freq_x = np.fft.fftfreq(img.shape[0]) freq_y = np.fft.fftfreq(img.shape[1]) # Create a meshgrid of frequencies freq_x, freq_y = np.meshgrid(freq_x, freq_y) # Plot the PSD plt.figure(figsize=(10, 7)) plt.subplot(2,2,1), plt.imshow(img, cmap='gray'), plt.title('img') plt.subplot(2,2,2), plt.imshow(denoised_img, cmap='gray'), plt.title('denoised image') #plt.subplot(2,2,3), plt.imshow(((1-ind)*255)), plt.title('mask-inv') plt.subplot(2,2,3), plt.imshow(psd2, extent=(np.min(freq_x), np.max(freq_x), np.min(freq_y), np.max(freq_y))), plt.title('Power Spectrum Density[cleaned]') plt.subplot(2,2,4), plt.imshow(psd, extent=(np.min(freq_x), np.max(freq_x), np.min(freq_y), np.max(freq_y))),plt.title('Power Spectrum Density[default]') plt.xlabel('Frequency (X)') plt.ylabel('Frequency (Y)') plt.colorbar() plt.show() </code></pre> <p><a href="https://i.sstatic.net/29JdezM6.jpg" rel="nofollow noreferrer"><img
src="https://i.sstatic.net/29JdezM6.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/DaAOfm84.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DaAOfm84.jpg" alt="enter image description here" /></a></p> <p>This seems to work, but I'm not getting a good result, I'm not sure if I am doing something wrong here, or this is the best that can be achieved.</p> <ol start="3"> <li>What I did next was, I tried to completely set a rectangle around all the bright spots and set them all to zeros, this way we I make sure the surrounding values are also taken care of as much as possible and this is what I get as the output:</li> </ol> <pre class="lang-py prettyprint-override"><code> img = cv2.imread('./img/periodic_noisy_image2.jpg') while (True): # calculate the dft ffts = np.fft.fftn(img) # now shift to center for better interpretation ffts_shifted = np.fft.fftshift(ffts) # power spectrum ffts_shifted_mag = (20*np.log(np.abs(ffts_shifted))).astype(np.uint8) # use selectROI to select the spots we want to set to 0! noise_rois = cv2.selectROIs('select periodic noise spots(press Spc to take selection, press esc to end selection)', ffts_shifted_mag,False, False,False) print(f'{noise_rois=}') # now set the area in fft_shifted to zero for y,x,h,w in noise_rois: # we need to provide a complex number! 
ffts_shifted[x:x+w,y:y+h] = 0+0j # shift back iffts_shifted = np.fft.ifftshift(ffts_shifted) iffts = np.fft.ifftn(iffts_shifted) # get back the image img_denoised = iffts.real.clip(0,255).astype(np.uint8) # let's calculate the new image magnitude denoise_ffts = np.fft.fftn(img_denoised) denoise_ffts_shifted = np.fft.fftshift(denoise_ffts) denoise_mag = (20*np.log(np.abs(denoise_ffts_shifted))).astype(np.uint8) cv2.imshow('img-with-periodic-noise', img) cv2.imshow('ffts_shifted_mag', ffts_shifted_mag) cv2.imshow('denoise_mag',denoise_mag) cv2.imshow('img_denoised', img_denoised) # note we are using 0 so it only goes next when we press it, otherwise we can't see the result! key = cv2.waitKey(0)&amp;0xFF cv2.destroyAllWindows() if key == ord('q'): break </code></pre> <p><a href="https://i.sstatic.net/cT2rUegY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cT2rUegY.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/1KjGsxO3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1KjGsxO3.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/192gfpC3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/192gfpC3.png" alt="enter image description here" /></a></p> <p>Again I had assumed that, by removing this periodic noise, the image would look much better, but I can still see patterns, which means they are not removed completely,
but at the same time, I did remove the bright spots.</p> <p>This gets even harder (so far impossible) to get this image denoised using this method:</p> <p><a href="https://i.sstatic.net/p5mGqrfg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p5mGqrfg.jpg" alt="enter image description here" /></a></p> <p>This is clearly a periodic noise, so what is it that I'm missing or doing wrong here?</p> <p>For the reference this is the other image with periodic noise which I have been experimenting with:</p> <p><a href="https://i.sstatic.net/7LOLyaeK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7LOLyaeK.jpg" alt="enter image description here" /></a></p> <h2>Update :</h2> <p>After reading the comments and suggestions so far, I came up with the following version, which overall works decently, but I face these issues:</p> <ol> <li>I don't get tiny imaginary values, even when the output looks fairly good! I can't seem to rely on this check to see what has gone wrong, it exists when there are very little/barely noticeable noise, and when there are noise everywhere.</li> <li>Still face a considerable amount of noise in some images (example given) I'd be great to know if this is expected and I should move on, or there's something wrong which needs to be addressed.</li> </ol> <pre class="lang-py prettyprint-override"><code>def onchange(x):pass cv2.namedWindow('options') cv2.createTrackbar('threshold', 'options', 130, 255, onchange) cv2.createTrackbar('area', 'options', 40, max(*img.shape[:2]), onchange) cv2.createTrackbar('pad', 'options', 0, max(*img.shape[:2]), onchange) cv2.createTrackbar('normalize_output', 'options', 0, 1, onchange) while(True): threshold = cv2.getTrackbarPos('threshold', 'options') area = cv2.getTrackbarPos('area', 'options') pad = cv2.getTrackbarPos('pad', 'options') normalize_output = cv2.getTrackbarPos('normalize_output', 'options') input_img = cv2.copyMakeBorder(img, pad, pad, pad, pad, cv2.BORDER_REFLECT) if pad&gt;0 else 
img fft = np.fft.fftn(input_img) fft_shift = np.fft.fftshift(fft) # note since we plan on normalizing the magnitude spectrum, # we dont clip and we dont cast here! # +1 so for the images that have 0s we dont get -inf down the road and dont face issues when we want to normalize and create a mask out of it! fft_shift_mag = 20*np.log(np.abs(fft_shift)+1) # now lets normalize and get a mask out of it, # the idea is to identify bright spot and set them to 0 # while retaining the center of the fft as it has a lot # of image information fft_shift_mag_norm = cv2.normalize(fft_shift_mag, None, 0,255, cv2.NORM_MINMAX) # now lets threshold and get our mask if img.ndim&gt;2: mask = np.array([cv2.threshold(fft_shift_mag_norm[...,i], threshold, 255, cv2.THRESH_BINARY)[1] for i in range(3)]) # the mask/img needs to be contiguous, (a simple .copy() would work as well!) mask = np.ascontiguousarray(mask.transpose((1,2,0))) else: ret, mask = cv2.threshold(fft_shift_mag_norm, threshold, 255, cv2.THRESH_BINARY) w,h = input_img.shape[:2] cx,cy = w//2, h//2 mask = cv2.circle(mask, (cy,cx), radius=area, color=0, thickness=cv2.FILLED) # now that we have our mask prepared, we can simply use it with the actual fft to # set all these bright places to 0 fft_shift[mask!=0] = 0+0j ifft_shift = np.fft.ifftshift(fft_shift) img_denoised = np.fft.ifftn(ifft_shift).real.clip(0,255).astype(np.uint8) img_denoised = img_denoised[pad:w-pad,pad:h-pad] # check the ifft imaginary parts are close to zero otherwise sth is wrong! 
almost_zero = np.all(np.isclose(ifft_shift.imag,0,atol=1e-8)) if not almost_zero: print('imaginary components not close to 0, something is wrong!') else: print(f'all is good!') # do a final contrast stretching: if normalize_output: p2, p98 = np.percentile(img_denoised, (2, 98)) img_denoised = img_denoised.clip(p2, p98) img_denoised = cv2.normalize(img_denoised, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U) cv2.imshow('input_img', input_img) cv2.imshow('fft-shift-mag-norm', fft_shift_mag_norm) cv2.imshow('fft_shift_mag', ((fft_shift_mag.real/fft_shift_mag.real.max())*255).clip(0,255).astype(np.uint8)) cv2.imshow('mask', mask) cv2.imshow('denoised', img_denoised) key = cv2.waitKey(30)&amp;0xFF if key == ord('q') or key == 27: cv2.destroyAllWindows() break </code></pre> <p>relatively good output:<br /> <a href="https://i.sstatic.net/zOpaKyT5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOpaKyT5.png" alt="enter image description here" /></a><br /> <a href="https://i.sstatic.net/gYaydECI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYaydECI.png" alt="enter image description here" /></a></p> <p>Not so much! This is the one example I still get lots of noise. 
I'm not sure if this is the best I can expect, or there is still room for improvements:</p> <p><a href="https://i.sstatic.net/4a5nrMHL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4a5nrMHL.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/WirgD84w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WirgD84w.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/BOIeOSez.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOIeOSez.png" alt="enter image description here" /></a></p> <p>There are other samples, where I couldn't remove all the noise either, such as this one(I could tweak it a bit but there would still be artifacts):</p> <p><a href="https://i.sstatic.net/Tpc1YxyJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tpc1YxyJ.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/82wwx3lT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82wwx3lT.png" alt="enter image description here" /></a></p> <p>I attributed this to the low quality of the image itself and accepted it, However, I expected the second example to have room for improvements, I thought I should be able to ultimately get something like this or close to it:</p> <p><a href="https://i.sstatic.net/2fJyS5jM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fJyS5jM.jpg" alt="enter image description here" /></a></p> <ul> <li>Are my assumptions incorrect?</li> <li>Are these artifacts/noises we are seeing in the outputs, periodic noise or some other types of noise?</li> <li>Relatively speaking, Is this the best one can achieve/hope for when using this method? I mean by purely removing periodic noise and not resorting to anything advanced?</li> </ul>
<python><numpy><opencv><computer-vision><fft>
2024-06-19 10:55:06
1
26,332
Hossein
78,642,075
8,501,483
How to disable gperftool profiling in python
<p>I want to do CPU profiling with <code>gperftools</code> by setting <code>LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libprofiler.so.0 CPUPROFILE=/tmp/myprofile.out</code>.</p> <p>It works fine in a purely C++ environment, but gperftools doesn't work with Python, which generates an empty file.</p> <p>In my environment, the call chain looks like this (hard to change due to a few constraints):</p> <ul> <li><code>executor program</code> -&gt; <code>python program</code> -&gt; <code>C++ subprocess</code></li> <li>or, <code>executor program</code> -&gt; <code>C++ program</code></li> </ul> <p>All programs are launched by an executor program via <code>subprocess.Popen</code>; it's impossible to tell whether the program to execute is Python or C++.</p> <p>To make things worse, the Python program spawns its own (C++) subprocess and passes down the env variables.</p> <p>The result is that both the Python and C++ processes dump profiles to the same location, so the profile likely ends up as an empty file due to a race.</p> <p>Both the Python and C++ programs have an internal main function (which performs a few internal setup steps). I tried unsetting the env vars in the Python main function, but a profile file is still generated by Python.</p> <p>My question is: is there a way to uniformly enable CPU profiling via <code>LD_PRELOAD</code> and <code>CPUPROFILE</code> for all C++ processes, but not the Python ones?</p>
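A partial mitigation sketch (helper name hypothetical): it does not stop the already-preloaded `libprofiler` inside the Python process itself, but it removes the race on the shared output file by giving the C++ child its own `CPUPROFILE`; the Python process's own dump can then simply be discarded. gperftools also documents a `CPUPROFILESIGNAL` variable that keeps profiling off until the process receives that signal, which may allow targeting only the C++ processes — worth verifying against the gperftools docs.

```python
import os
import subprocess
import sys

def spawn_with_separate_profile(cmd, profile_path):
    # Hypothetical helper: run the (C++) child with its own CPUPROFILE so
    # the Python wrapper (profiled because libprofiler was preloaded at
    # startup) and the child no longer write to the same output file.
    env = os.environ.copy()
    env["CPUPROFILE"] = profile_path
    return subprocess.Popen(cmd, env=env)

# demo: the child sees the redirected path
p = spawn_with_separate_profile(
    [sys.executable, "-c", "import os; print(os.environ['CPUPROFILE'])"],
    "/tmp/myprofile.child.out",
)
p.wait()
```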
<python><c++><gperftools>
2024-06-19 10:53:46
0
604
Tinyden
78,641,487
5,731,861
Problem with numpy: pip install numpy "Requirement already satisfied: numpy"
<p>When working with numpy, I have run into a problem where submodules or packages installed in a virtual environment don't recognize numpy.</p> <pre><code>(venv) D:\felipe\work\ml\mnist&gt;pip install numpy Requirement already satisfied: numpy in d:\felipe\work\ml\mnist\venv\lib\site-packages (2.0.0) </code></pre> <p>Apparently the problem is caused by a new, incompatible version of numpy.</p>
<python><numpy>
2024-06-19 09:00:15
1
2,257
Felipe Valdes
78,641,299
5,061,637
How to deal with binary extensions (.pyd files) and binary scripts (.exe files) in a Python wheel file?
<p>I’ve made a binary Python extension using C++ with the help of PyBind11. This extension comes with a “binary script” which is a regular executable file.</p> <p>Now I’m struggling with the creation of a wheel file to distribute it.</p> <p>Using pyproject.toml and the setuptools backend, I’m able to create a wheel file to install a regular Python package (made of .py files) with regular script files (using entry points in .py files).</p> <p>But how do I deal with my .pyd file, which contains my whole extension package?</p> <p>I’ve tried several things; the best I could achieve is a wheel file which installs my package as “<strong>python-install-dir/Lib/site-packages/mypackage/mypackage.pyd</strong>”.</p> <p>But with that, my package members are accessible from a Python script using the “<strong>mypackage.mypackage.mymember</strong>” syntax.</p> <p>If I manually install my pyd file as “<strong>python-install-dir/Lib/site-packages/mypackage.pyd</strong>” (without the “mypackage” subdirectory), it works as expected: I can access my package members with the “<strong>mypackage.mymember</strong>” syntax.</p> <p>How do I configure my pyproject.toml file to achieve this? Or is there anything I’ve missed in my .pyd file? Maybe should I switch to another build backend?</p> <p>My extension also comes with a binary tool (a “mytool.exe” file). How can I include it in my wheel file so it ends up in the “<strong>python-install-dir/Scripts</strong>” directory?</p>
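For reference, a hedged sketch of one common setuptools arrangement for this layout (file paths hypothetical; this is build configuration, not a definitive answer): declaring the extension as a *top-level* module in `ext_modules` makes the wheel install `mypackage.pyd` directly into `site-packages` rather than into a `mypackage/` subdirectory, and files listed under `scripts` end up in the wheel's `.data/scripts` directory, which installs into `python-install-dir/Scripts` on Windows.

```python
# setup.py — minimal sketch (assumes pybind11 is installed)
from pybind11.setup_helpers import Pybind11Extension, build_ext
from setuptools import setup

setup(
    name="mypackage",
    version="1.0",
    # Top-level extension module: installs as site-packages/mypackage.pyd,
    # importable as "mypackage.mymember" (no extra package directory).
    ext_modules=[Pybind11Extension("mypackage", ["src/mypackage.cpp"])],
    cmdclass={"build_ext": build_ext},
    # Prebuilt tool: placed in the wheel's .data/scripts directory.
    scripts=["tools/mytool.exe"],
)
```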
<python><setuptools><python-wheel>
2024-06-19 08:22:11
0
1,182
Aurelien
78,641,200
1,107,595
Initiate copy but exit without waiting for it to finish
<p>I'm using boto3 to copy a large object (3.5 GiB) from one bucket to another, using the following code:</p> <pre><code>boto3.client('s3').copy_object(Bucket=dst_bucket, Key=filepath, CopySource={'Bucket': src_bucket, 'Key': filepath}) </code></pre> <p>It works fine, but it takes ~4-5 minutes. I don't want to wait around for the copy to finish; I'd rather just initiate the copy and stop the script.</p> <p>How can I do that? I thought about launching the copy in a thread and exiting the program after 2 seconds, but it doesn't feel right; surely boto3/AWS has a way to do what I'm trying to do?</p>
<python><amazon-web-services><amazon-s3><boto3>
2024-06-19 08:00:40
2
2,538
BlueMagma
78,641,150
13,086,128
A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.0
<p>I installed numpy 2.0.0</p> <p><a href="https://i.sstatic.net/4aB0aWDL.png" rel="noreferrer"><img src="https://i.sstatic.net/4aB0aWDL.png" alt="enter image description here" /></a></p> <pre><code>pip install numpy==2.0.0 import numpy as np np.__version__ #2.0.0 </code></pre> <p>then I installed:</p> <pre><code>pip install opencv-python Requirement already satisfied: opencv-python in /usr/local/lib/python3.10/dist-packages (4.8.0.76) Requirement already satisfied: numpy&gt;=1.21.2 in /usr/local/lib/python3.10/dist-packages (from opencv-python) (2.0.0) </code></pre> <p>Then I did:</p> <pre><code>import cv2 </code></pre> <p>I am getting this error:</p> <hr /> <pre><code>A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.0 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11&gt;=2.12'. If you are a user of the module, the easiest solution will be to downgrade to 'numpy&lt;2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2. 
Traceback (most recent call last): File &quot;/usr/lib/python3.10/runpy.py&quot;, line 196, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;/usr/lib/python3.10/runpy.py&quot;, line 86, in _run_code exec(code, run_globals) File &quot;/usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py&quot;, line 37, in &lt;module&gt; ColabKernelApp.launch_instance() File &quot;/usr/local/lib/python3.10/dist-packages/traitlets/config/application.py&quot;, line 992, in launch_instance app.start() File &quot;/usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py&quot;, line 619, in start self.io_loop.start() File &quot;/usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py&quot;, line 195, in start self.asyncio_loop.run_forever() File &quot;/usr/lib/python3.10/asyncio/base_events.py&quot;, line 603, in run_forever self._run_once() File &quot;/usr/lib/python3.10/asyncio/base_events.py&quot;, line 1909, in _run_once handle._run() File &quot;/usr/lib/python3.10/asyncio/events.py&quot;, line 80, in _run self._context.run(self._callback, *self._args) File &quot;/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py&quot;, line 685, in &lt;lambda&gt; lambda f: self._run_callback(functools.partial(callback, future)) File &quot;/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py&quot;, line 738, in _run_callback ret = callback() File &quot;/usr/local/lib/python3.10/dist-packages/tornado/gen.py&quot;, line 825, in inner self.ctx_run(self.run) File &quot;/usr/local/lib/python3.10/dist-packages/tornado/gen.py&quot;, line 786, in run yielded = self.gen.send(value) File &quot;/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py&quot;, line 361, in process_one yield gen.maybe_future(dispatch(*args)) File &quot;/usr/local/lib/python3.10/dist-packages/tornado/gen.py&quot;, line 234, in wrapper yielded = ctx_run(next, result) File &quot;/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py&quot;, line 261, in 
dispatch_shell yield gen.maybe_future(handler(stream, idents, msg)) File &quot;/usr/local/lib/python3.10/dist-packages/tornado/gen.py&quot;, line 234, in wrapper yielded = ctx_run(next, result) File &quot;/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py&quot;, line 539, in execute_request self.do_execute( File &quot;/usr/local/lib/python3.10/dist-packages/tornado/gen.py&quot;, line 234, in wrapper yielded = ctx_run(next, result) File &quot;/usr/local/lib/python3.10/dist-packages/ipykernel/ipkernel.py&quot;, line 302, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File &quot;/usr/local/lib/python3.10/dist-packages/ipykernel/zmqshell.py&quot;, line 539, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File &quot;/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py&quot;, line 2975, in run_cell result = self._run_cell( File &quot;/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py&quot;, line 3030, in _run_cell return runner(coro) File &quot;/usr/local/lib/python3.10/dist-packages/IPython/core/async_helpers.py&quot;, line 78, in _pseudo_sync_runner coro.send(None) File &quot;/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py&quot;, line 3257, in run_cell_async has_raised = await self.run_ast_nodes(code_ast.body, cell_name, File &quot;/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py&quot;, line 3473, in run_ast_nodes if (await self.run_code(code, result, async_=asy)): File &quot;/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py&quot;, line 3553, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File &quot;&lt;ipython-input-4-c8ec22b3e787&gt;&quot;, line 1, in &lt;cell line: 1&gt; import cv2 File &quot;/usr/local/lib/python3.10/dist-packages/google/colab/_import_hooks/_cv2.py&quot;, line 78, in load_module cv_module = imp.load_module(name, *module_info) File 
&quot;/usr/lib/python3.10/imp.py&quot;, line 245, in load_module return load_package(name, filename) File &quot;/usr/lib/python3.10/imp.py&quot;, line 217, in load_package return _load(spec) File &quot;/usr/local/lib/python3.10/dist-packages/cv2/__init__.py&quot;, line 181, in &lt;module&gt; bootstrap() File &quot;/usr/local/lib/python3.10/dist-packages/cv2/__init__.py&quot;, line 153, in bootstrap native_module = importlib.import_module(&quot;cv2&quot;) File &quot;/usr/lib/python3.10/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;/usr/local/lib/python3.10/dist-packages/google/colab/_import_hooks/_cv2.py&quot;, line 78, in load_module cv_module = imp.load_module(name, *module_info) File &quot;/usr/lib/python3.10/imp.py&quot;, line 243, in load_module return load_dynamic(name, filename, file) File &quot;/usr/lib/python3.10/imp.py&quot;, line 343, in load_dynamic return _load(spec) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) AttributeError: _ARRAY_API not found --------------------------------------------------------------------------- ImportError Traceback (most recent call last) &lt;ipython-input-4-c8ec22b3e787&gt; in &lt;cell line: 1&gt;() ----&gt; 1 import cv2 8 frames /usr/lib/python3.10/imp.py in load_dynamic(name, path, file) 341 spec = importlib.machinery.ModuleSpec( 342 name=name, loader=loader, origin=path) --&gt; 343 return _load(spec) 344 345 else: ImportError: numpy.core.multiarray failed to import --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the &quot;Open Examples&quot; button below. </code></pre> <hr />
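As the error text itself suggests, until the affected modules ship builds compiled against NumPy 2, the quickest fixes are a pin or an upgrade (command sketch; the exact opencv-python version that supports NumPy 2 should be checked against its release notes):

```shell
# Option 1: pin NumPy below 2 (what the error message recommends)
pip install "numpy<2"

# Option 2: upgrade the affected module to a build compiled for NumPy 2
# (opencv-python 4.8.0.76 here predates NumPy 2)
pip install --upgrade opencv-python
```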
<python><python-3.x><numpy><pip><numpy-2.x>
2024-06-19 07:49:51
3
30,560
Talha Tayyab
78,641,125
1,082,349
Scipy sparse solve prints: dgstrf info
<p>The following code</p> <pre><code>from scipy.sparse.linalg import spsolve foo = spsolve(AMatrixT, dist) </code></pre> <p>prints to stdout:</p> <pre><code>dgstrf info 10200 </code></pre> <p>And if I inspect <code>foo</code>, it's all <code>NaN</code>. The shape of <code>dist</code> happens to be (10200, 1). I have uploaded the sparse <code>AMatrixT</code> matrix and the dense <code>dist</code> array as .npz files <a href="https://file.io/dco1sizIchWl" rel="nofollow noreferrer">here</a>. What does the output mean, and how can I stop it?</p> <pre><code># Name Version Build Channel python 3.11.4 h955ad1f_0 scipy 1.12.0 py311h08b1b3b_0 </code></pre>
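For context, `dgstrf` is SuperLU's LU-factorization routine; a positive `info` value k means U(k,k) is exactly zero, i.e. the matrix is singular, and with `info` equal to the full dimension (10200) the factorization failed, so `spsolve` returns NaNs. A small reproduction sketch with a deliberately singular system, plus a least-squares fallback:

```python
import warnings

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import MatrixRankWarning, lsqr, spsolve

A = csr_matrix(np.array([[1.0, 2.0],
                         [2.0, 4.0]]))   # rank 1: exactly singular
b = np.array([1.0, 2.0])

with warnings.catch_warnings():
    warnings.simplefilter("ignore", MatrixRankWarning)
    x = spsolve(A, b)                    # factorization fails -> NaNs
print(x)

# a least-squares solver still returns a usable (minimum-norm) solution
x_ls = lsqr(A, b)[0]
print(A @ x_ls)  # close to b
```

So the message is a symptom rather than something to silence: the fix is to repair the rank deficiency of `AMatrixT` (or switch to a least-squares solver), not to suppress the output.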
<python><numpy><scipy>
2024-06-19 07:45:28
1
16,698
FooBar
78,640,937
16,521,194
Update Swagger when dynamically adding FastAPI endpoints
<p>I would like to update the Swagger when adding a new endpoint dynamically.</p> <p>To dynamically add endpoints, I am using the <a href="https://stackoverflow.com/a/74035526/16521194">accepted answer</a> to <a href="https://stackoverflow.com/q/70783994">Reload routes in FastAPI during runtime</a>. This looks like the following:</p> <pre class="lang-py prettyprint-override"><code>import fastapi import uvicorn from uvicorn.config import LOGGING_CONFIG app = fastapi.FastAPI() @app.get(&quot;/add&quot;) async def add(name: str): async def dynamic_controller(): return f&quot;dynamic: {name}&quot; app.add_api_route(f&quot;/dyn/{name}&quot;, dynamic_controller, methods=[&quot;GET&quot;]) return &quot;ok&quot; def route_matches(route, name): return route.path_format == f&quot;/dyn/{name}&quot; @app.get(&quot;/remove&quot;) async def remove(name: str): for i, r in enumerate(app.router.routes): if route_matches(r, name): del app.router.routes[i] return &quot;ok&quot; return &quot;not found&quot; def main(): uvicorn.run(&quot;dynamic_router:app&quot;, host=&quot;0.0.0.0&quot;, workers=1, log_config=LOGGING_CONFIG, port=5000) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>While this creates new functional endpoints, the Swagger UI does not update to reflect these changes.<br /> Is there a way to update the Swagger? If not, can we force the reload?<br /> Is there an entirely other way to go about it?</p> <h2>EDIT 1</h2> <p>As pointed out by <a href="https://stackoverflow.com/users/3001761/jonrsharpe">@jonrsharpe</a>'s comment. This was already partially solved in the <a href="https://stackoverflow.com/a/74035526/16521194">referenced answer</a>. 
The lines <code>app.openapi_schema = None</code> and <code>app.setup()</code> allow for the Swagger to be updated when refreshing the page.<br /> Is there a way to force the refresh?</p> <h2>EDIT 2</h2> <p>If someone tries to use this endpoint generation while using the router mechanism, this is what I've done, still without being able to force the refresh.<br /> <code>generative_router.py</code></p> <pre class="lang-py prettyprint-override"><code>from fastapi import APIRouter, FastAPI def generative_router_generator(app: FastAPI) -&gt; APIRouter: generative_router: APIRouter = APIRouter() def update_swagger(): app.openapi_schema = None app.setup() @generative_router.get(&quot;/add&quot;, tags=[&quot;GEN&quot;]) async def add(name: str): async def dynamic_controller(): return f&quot;dynamic {name}&quot; app.add_api_route(f&quot;/dyn/{name}&quot;, dynamic_controller, methods=[&quot;GET&quot;], tags=[&quot;DYN&quot;]) update_swagger() return &quot;ok&quot; def route_matches(route, name): return route.path_format == f&quot;/dyn/{name}&quot; @generative_router.get(&quot;/remove&quot;, tags=[&quot;GEN&quot;]) async def remove(name: str): for i, r in enumerate(app.router.routes): if route_matches(r, name): del app.router.routes[i] update_swagger() return &quot;ok&quot; return &quot;not found&quot; return generative_router </code></pre> <p><code>app.py</code></p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, APIRouter from fastapi_offline import FastAPIOffline from generative_router import generative_router_generator def generate_fastapi_app() -&gt; FastAPI: fastapi_app: FastAPI = FastAPIOffline() generative_router: APIRouter = generative_router_generator(app=fastapi_app) fastapi_app.include_router(generative_router, prefix=&quot;/generative&quot;) return fastapi_app app: FastAPI = generate_fastapi_app() </code></pre> <p><code>main.py</code></p> <pre class="lang-py prettyprint-override"><code>import uvicorn from uvicorn.config import 
LOGGING_CONFIG def main(): uvicorn.run(&quot;app:app&quot;, host=&quot;0.0.0.0&quot;, workers=1, log_config=LOGGING_CONFIG, port=5000) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Here we use a generator for the <code>generative_router</code> router. This is done only to pass in the main <code>FastAPI</code> object used by the <code>add</code> and <code>remove</code> endpoints, which is necessary to update the Swagger.</p>
<python><swagger><fastapi>
2024-06-19 06:56:23
0
1,183
GregoirePelegrin
78,640,606
22,407,544
Why is my Django app running out of memory after switching to Gunicorn?
<p>I got the following error when trying to run a ML/AI app in Django/Docker. The app allows a user to upload audio files which are then transcribed. I started getting the following error after switching to Gunicorn. I understand it is due to memory allocation limitations, but I am not sure how to fix it. The error is inconsistent and only happens with some files and not with others and it is not always the same error. Sometimes I only get a <code>internal server error</code>. The error also happens with longer audio files even if a shorter audio file is larger(so a 10MB, 20 sec audio file may be successful but a 2MB, 3 min file is never successful and always returns a similar error):</p> <pre><code> [2024-06-18 08:56:09 -0500] [19] [INFO] Worker exiting (pid: 19) web-1 | [2024-06-18 13:56:10 +0000] [1] [ERROR] Worker (pid:19) was sent SIGKILL! Perhaps out of memory? web-1 | [2024-06-18 13:56:10 +0000] [34] [INFO] Booting worker with pid: 34 web-1 | /usr/local/lib/python3.11/site-packages/whisper/__init__.py:63: UserWarning: /root/.cache/whisper/tiny.pt exists, but the SHA256 checksum does not match; re-downloading the file web-1 | warnings.warn( 78%|█████████████████████████████ | 56.6M/72.1M [00:24&lt;00:05, 2.74MiB/s][2024-06-18 13:57:18 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:34) 79%|█████████████████████████████ | 56.7M/72.1M [00:24&lt;00:06, 2.42MiB/s] web-1 | [2024-06-18 08:57:18 -0500] [34] [INFO] Worker exiting (pid: 34) web-1 | [2024-06-18 13:57:19 +0000] [1] [ERROR] Worker (pid:34) exited with code 1 web-1 | [2024-06-18 13:57:19 +0000] [1] [ERROR] Worker (pid:34) exited with code 1. 
web-1 | [2024-06-18 13:57:19 +0000] [45] [INFO] Booting worker with pid: 45 web-1 | /usr/local/lib/python3.11/site-packages/whisper/__init__.py:63: UserWarning: /root/.cache/whisper/tiny.pt exists, but the SHA256 checksum does not match; re-downloading the file web-1 | warnings.warn( 72%|██████████████████████████▊ | 52.2M/72.1M [00:24&lt;00:10, 2.03MiB/s][2024-06-18 14:02:49 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:45) 73%|██████████████████████████▉ | 52.4M/72.1M [00:24&lt;00:09, 2.22MiB/s] web-1 | [2024-06-18 09:02:49 -0500] [45] [INFO] Worker exiting (pid: 45) web-1 | [2024-06-18 14:02:49 +0000] [1] [ERROR] Worker (pid:45) exited with code 1 web-1 | [2024-06-18 14:02:49 +0000] [1] [ERROR] Worker (pid:45) exited with code 1. web-1 | [2024-06-18 14:02:49 +0000] [56] [INFO] Booting worker with pid: 56 web-1 | /usr/local/lib/python3.11/site-packages/whisper/__init__.py:63: UserWarning: /root/.cache/whisper/tiny.pt exists, but the SHA256 checksum does not match; re-downloading the file web-1 | warnings.warn( 100%|█████████████████████████████████████| 72.1M/72.1M [00:22&lt;00:00, 3.39MiB/s] web-1 | [2024-06-18 14:20:30 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:56) web-1 | [2024-06-18 09:20:30 -0500] [56] [INFO] Worker exiting (pid: 56) web-1 | [2024-06-18 14:20:31 +0000] [1] [ERROR] Worker (pid:56) exited with code 1 web-1 | [2024-06-18 14:20:31 +0000] [1] [ERROR] Worker (pid:56) exited with code 1. web-1 | [2024-06-18 14:20:31 +0000] [79] [INFO] Booting worker with pid: 79 </code></pre> <p>I ran docker stats to analyse memory usage. 
This is the output:</p> <pre><code>CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 0.06% 571.5MiB / 7.387GiB 7.56% 278MB / 9.01MB 0B / 0B 13 0ac4bc893e43 djangoprojects-db-1 0.00% 19.68MiB / 7.387GiB 0.26% 20.8kB / 18.1kB 0B / 0B 7 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 0.06% 571.5MiB / 7.387GiB 7.56% 278MB / 9.01MB 0B / 0B 13 0ac4bc893e43 djangoprojects-db-1 0.00% 19.68MiB / 7.387GiB 0.26% 20.8kB / 18.1kB 0B / 0B 7 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 0.03% 571.5MiB / 7.387GiB 7.56% 278MB / 9.01MB 0B / 0B 13 0ac4bc893e43 djangoprojects-db-1 0.03% 19.68MiB / 7.387GiB 0.26% 20.8kB / 18.1kB 0B / 0B 7 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 0.03% 571.5MiB / 7.387GiB 7.56% 278MB / 9.01MB 0B / 0B 13 0ac4bc893e43 djangoprojects-db-1 0.03% 19.68MiB / 7.387GiB 0.26% 20.8kB / 18.1kB 0B / 0B 7 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 0.02% 571.5MiB / 7.387GiB 7.56% 278MB / 9.01MB 0B / 0B 13 0ac4bc893e43 djangoprojects-db-1 0.01% 19.68MiB / 7.387GiB 0.26% 20.8kB / 18.1kB 0B / 0B 7 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 0.02% 571.5MiB / 7.387GiB 7.56% 278MB / 9.01MB 0B / 0B 13 0ac4bc893e43 djangoprojects-db-1 0.01% 19.68MiB / 7.387GiB 0.26% 20.8kB / 18.1kB 0B / 0B 7 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 0.93% 571.5MiB / 7.387GiB 7.56% 278MB / 9.01MB 0B / 0B 13 0ac4bc893e43 djangoprojects-db-1 0.72% 21.15MiB / 7.387GiB 0.28% 21.7kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 0.93% 571.5MiB / 7.387GiB 7.56% 278MB / 9.01MB 0B / 0B 13 0ac4bc893e43 
djangoprojects-db-1 0.72% 21.15MiB / 7.387GiB 0.28% 21.7kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 3.78% 571.5MiB / 7.387GiB 7.56% 278MB / 9.02MB 0B / 0B 13 0ac4bc893e43 djangoprojects-db-1 0.00% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 3.78% 571.5MiB / 7.387GiB 7.56% 278MB / 9.02MB 0B / 0B 13 0ac4bc893e43 djangoprojects-db-1 0.00% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 106.80% 681.8MiB / 7.387GiB 9.01% 279MB / 9.04MB 0B / 0B 14 0ac4bc893e43 djangoprojects-db-1 0.00% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 106.80% 681.8MiB / 7.387GiB 9.01% 279MB / 9.04MB 0B / 0B 14 0ac4bc893e43 djangoprojects-db-1 0.00% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 11.82% 692.5MiB / 7.387GiB 9.15% 279MB / 9.05MB 0B / 0B 14 0ac4bc893e43 djangoprojects-db-1 0.01% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 11.82% 692.5MiB / 7.387GiB 9.15% 279MB / 9.05MB 0B / 0B 14 0ac4bc893e43 djangoprojects-db-1 0.01% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 9.66% 697.1MiB / 7.387GiB 9.22% 280MB / 9.06MB 0B / 0B 22 0ac4bc893e43 djangoprojects-db-1 0.00% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 9.66% 697.1MiB / 7.387GiB 9.22% 
280MB / 9.06MB 0B / 0B 22 0ac4bc893e43 djangoprojects-db-1 0.00% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 5.36% 697MiB / 7.387GiB 9.21% 280MB / 9.06MB 0B / 0B 22 0ac4bc893e43 djangoprojects-db-1 0.00% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 5.36% 697MiB / 7.387GiB 9.21% 280MB / 9.06MB 0B / 0B 22 0ac4bc893e43 djangoprojects-db-1 0.00% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 5.86% 697MiB / 7.387GiB 9.21% 280MB / 9.07MB 0B / 0B 22 0ac4bc893e43 djangoprojects-db-1 0.00% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 5.86% 697MiB / 7.387GiB 9.21% 280MB / 9.07MB 0B / 0B 22 0ac4bc893e43 djangoprojects-db-1 0.00% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 6.62% 697.1MiB / 7.387GiB 9.22% 281MB / 9.08MB 0B / 0B 22 0ac4bc893e43 djangoprojects-db-1 0.00% 21.15MiB / 7.387GiB 0.28% 21.8kB / 19.5kB 0B / 0B 8 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dab126e56daf djangoprojects-web-1 6.62% 697.1MiB / 7.387GiB 9.22% 281MB / 9.1MB 0B / 0B 13 </code></pre> <p>I can see that in the Net I/O section, memory usage is greater than the limit. However, I'm not sure how to make the necessary changes and I'm not sure if that is even the actual problem. I also read that simply increasing the Gunicorn timeout setting is only a band-aid solution. 
Here is my requirement.txt:</p> <pre><code>aiohttp==3.8.5 aiosignal==1.3.1 asgiref==3.7.2 async-timeout==4.0.3 attrs==23.1.0 certifi==2023.7.22 chardet==5.2.0 charset-normalizer==3.2.0 colorama==0.4.6 Django==4.2.1 django-environ==0.11.2 filelock==3.12.4 fire==0.5.0 fonttools==4.42.1 frozenlist==1.4.0 gunicorn==21.2.0 idna==3.4 Jinja2==3.1.2 llvmlite==0.41.0 lxml==4.9.3 MarkupSafe==2.1.3 more-itertools==10.1.0 mpmath==1.3.0 multidict==6.0.4 networkx==3.1 numba==0.58.0 numpy==1.25.2 openai==0.27.8 openai-whisper @ git+https://github.com/openai/whisper.git@0a60fcaa9b86748389a656aa013c416030287d47 opencv-python==4.9.0.80 packaging==23.2 Pillow==10.0.0 pypdf==3.15.4 PyPDF2==3.0.1 python-dotenv==1.0.0 python-magic==0.4.27 regex==2023.8.8 reportlab==4.0.4 requests==2.31.0 six==1.16.0 sqlparse==0.4.4 sympy==1.12 termcolor==2.3.0 tiktoken==0.3.3 torch==2.0.1 tqdm==4.66.1 typing_extensions==4.6.2 tzdata==2023.3 urllib3==2.0.4 yarl==1.9.2 psycopg2-binary==2.9.3 environs[django]==9.5.0 whitenoise==6.1.0 django-storages[s3]==1.14.2 </code></pre> <p>Here is my Dockerfile:</p> <pre><code># Pull base image FROM python:3.11.4-slim-bullseye # Set environment variables ENV PIP_NO_CACHE_DIR off ENV PIP_DISABLE_PIP_VERSION_CHECK 1 ENV PYTHONUNBUFFERED 1 ENV PYTHONDONTWRITEBYTECODE 1 ENV COLUMNS 80 #install Debian and other dependencies that are required to run python apps(eg. git, python-magic). RUN apt-get update \ &amp;&amp; apt-get install -y --force-yes nano python3-pip gettext chrpath libssl-dev libxft-dev libfreetype6 libfreetype6-dev libfontconfig1 libfontconfig1-dev ffmpeg git libmagic-dev libpq-dev gcc \ &amp;&amp; rm -rf /var/lib/apt/lists/* # Set working directory for Docker image WORKDIR /code/ # Install dependencies COPY requirements.txt . RUN pip install -r requirements.txt # Copy project COPY . . </code></pre> <p>Here is my dockerfile-compose.yml(containing my gunicorn command):</p> <pre><code>#version: &quot;3.9&quot; services: web: build: . 
#command: python /code/manage.py runserver 0.0.0.0:8000 command: gunicorn mysite.wsgi -b 0.0.0.0:8000 # new volumes: - .:/code ports: - 8000:8000 depends_on: - db environment: - &quot;DJANGO_SECRET_KEY=secret&quot; - &quot;DJANGO_DEBUG=True&quot; - &quot;DJANGO_SECURE_SSL_REDIRECT=False&quot; - &quot;DJANGO_SECURE_HSTS_SECONDS=0&quot; - &quot;DJANGO_SECURE_HSTS_INCLUDE_SUBDOMAINS=False&quot; - &quot;DJANGO_SECURE_HSTS_PRELOAD=False&quot; - &quot;DJANGO_SESSION_COOKIE_SECURE=False&quot; # new - &quot;ACCESS_KEY_ID=key_id&quot; - &quot;SECRET_ACCESS_KEY=secret_key&quot; - &quot;STORAGE_BUCKET_NAME=spaces&quot; - &quot;S3_CUSTOM_DOMAIN=domain&quot; db: image: postgres:13 volumes: - postgres_data:/var/lib/postgresql/data/ environment: - &quot;POSTGRES_HOST_AUTH_METHOD=trust&quot; volumes: postgres_data: </code></pre> <p>I'm completely stumped. Any help is much appreciated.</p>
<python><django><docker>
2024-06-19 05:11:44
0
359
tthheemmaannii
78,640,325
801,967
Cannot install pandas in Python 3.9
<p>I am using python 3.9 with Spark.</p> <pre><code>python --version Python 3.9.0 </code></pre> <p>When I install pandas with</p> <pre><code>pip install pandas </code></pre> <p>I got the following error</p> <pre><code>Collecting pandas Using cached pandas-2.2.2.tar.gz (4.4 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. │ exit code: 1 ╰─&gt; [12 lines of output] + meson setup C:\Users\barba\AppData\Local\Temp\pip-install-jw566d_g\pandas_b36c8476d5fa4a8dbafeeed465827c7c C:\Users\barba\AppData\Local\Temp\pip-install-jw566d_g\pandas_b36c8476d5fa4a8dbafeeed465827c7c\.mesonpy-gh0nls41\build -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --vsenv --native-file=C:\Users\barba\AppData\Local\Temp\pip-install-jw566d_g\pandas_b36c8476d5fa4a8dbafeeed465827c7c\.mesonpy-gh0nls41\build\meson-python-native-file.ini The Meson build system Version: 1.2.1 Source dir: C:\Users\barba\AppData\Local\Temp\pip-install-jw566d_g\pandas_b36c8476d5fa4a8dbafeeed465827c7c Build dir: C:\Users\barba\AppData\Local\Temp\pip-install-jw566d_g\pandas_b36c8476d5fa4a8dbafeeed465827c7c\.mesonpy-gh0nls41\build Build type: native build Project name: pandas Project version: 2.2.2 ..\..\meson.build:2:0: ERROR: Compiler cl cannot compile programs. A full log can be found at C:\Users\barba\AppData\Local\Temp\pip-install-jw566d_g\pandas_b36c8476d5fa4a8dbafeeed465827c7c\.mesonpy-gh0nls41\build\meson-logs\meson-log.txt [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre> <p>What could be causing this?</p>
<python><pandas><pip><pyproject.toml>
2024-06-19 02:58:01
1
341
ferrito
78,640,299
14,250,641
Dataframe Expansion: Generating Genomic Positions +/- 250 Nucleotides
<p>I have a df that looks like this (with 300k more rows of other genomic coordinates):</p> <pre><code> chromosome start end chr1 11859 11879 </code></pre> <p>I want to expand the df so that each row produces one row per position in its start-to-end range, with each new row spanning 250 nucleotides on either side of that position. I need to do this efficiently, since my df is expected to be millions of rows long after this process (so probably avoiding a for loop). For example, the chr1:11859-11879 row will be expanded to 21 rows.</p> <p>How to calculate:</p> <pre><code> chromosome start end chr1 11859-250 11859+250 chr1 11860-250 11860+250 ... chr1 11879-250 11879+250 </code></pre> <p>Final df:</p> <pre><code> chromosome start end 0 chr1 11609 12109 1 chr1 11610 12110 ... 20 chr1 11629 12129 </code></pre> <p>It seems so simple, but I think I'm using overly complicated methods to get to this point.</p> <p>Here's the general formula:</p> <pre><code>chromosome start end chr1 START-250 START+250 chr1 (START+1)-250 (START+1)+250 ... chr1 END-250 END+250 </code></pre>
<python><pandas><dataframe><numpy><bioinformatics>
2024-06-19 02:44:25
1
514
youtube
78,640,245
2,328,154
Snowflake SQLAlchemy - Dynamically created column with Timestamp?
<p>This is a follow-up question to my previous one.</p> <p><a href="https://stackoverflow.com/questions/78614458/snowflake-sqlalchemy-create-table-with-timestamp/78623238#78623238">Snowflake SQLAlchemy - Create table with Timestamp?</a></p> <p>I am dynamically creating columns and I have this schema.</p> <p>I need the &quot;my_time_stamp&quot; column to have the server default.</p> <pre><code>json_cls_schema = { &quot;clsname&quot;: &quot;MyClass&quot;, &quot;tablename&quot;: &quot;my_table&quot;, &quot;columns&quot;: [ {&quot;name&quot;: &quot;id&quot;, &quot;type&quot;: &quot;integer&quot;, &quot;is_pk&quot;: True, &quot;is_auto&quot; : True}, {&quot;name&quot;: &quot;my_time_stamp&quot;, &quot;type&quot;: &quot;timestamp&quot;, 'serverdefault': text('current_timestamp')} ], } </code></pre> <p>How would I set so that the code below generates this column.</p> <pre><code>my_time_stamp = Column(TIMESTAMP,server_default=text('current_timestamp()')) </code></pre> <p>Right now I am getting the error TypeError: Object of type TextClause is not JSON serializable and the dictionary I am generating this column from looks like this.</p> <pre><code>{'name': 'my_time_stamp', 'type': 'timestamp', 'serverdefault': &lt;sqlalchemy.sql.elements.TextClause object at 0x000001B57664F560&gt;} </code></pre> <p>code:</p> <pre><code>_type_lookup = { &quot;integer&quot;: Integer, &quot;timestamp&quot;: TIMESTAMP, } def mapping_for_json(json_cls_schema): clsdict = {&quot;__tablename__&quot;: json_cls_schema[&quot;tablename&quot;]} clsdict.update( { rec[&quot;name&quot;]: Column( _type_lookup[rec[&quot;type&quot;]], primary_key=rec.get(&quot;is_pk&quot;, False), autoincrement=rec.get(&quot;is_auto&quot;, False), server_default=rec.get(&quot;serverdefault&quot;, '') ) for rec in json_cls_schema[&quot;columns&quot;] } ) return type(json_cls_schema[&quot;clsname&quot;], (Base,), clsdict) </code></pre>
<python><json><sqlalchemy><snowflake-cloud-data-platform>
2024-06-19 02:16:06
1
421
MountainBiker
78,640,035
3,156,085
Are `None` and `type(None)` really equivalent for type analysis?
<p>According to the <a href="https://peps.python.org/pep-0484/#using-none" rel="nofollow noreferrer">PEP 484's &quot;Using None&quot; part</a>:</p> <blockquote> <p>When used in a type hint, the expression <code>None</code> is considered equivalent to <code>type(None)</code>.</p> </blockquote> <p>However, I encountered a case where both don't seem equivalent :</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable, NamedTuple, Type, Union # I define a set of available return types: ReturnType = Union[ int, None, ] # I use this Union type to define other types, like this callable type. SomeCallableType = Callable[..., ReturnType] # But I also want to store some functions metadata (including the function's return type) in a `NamedTuple`: class FuncInfos(NamedTuple): return_type: Type[ReturnType] # This works fine: fi_1 = FuncInfos(return_type=int) # But this issues an error: # main.py:21: error: Argument &quot;return_type&quot; to &quot;FuncInfos&quot; has incompatible type &quot;None&quot;; expected &quot;type[int] | type[None]&quot; [arg-type] # Found 1 error in 1 file (checked 1 source file) fi_2 = FuncInfos(return_type=None) # But this works fine: fi_3 = FuncInfos(return_type=type(None)) </code></pre> <p>It doesn't pose me much problem to write <code>type(None)</code> rather than simply <code>None</code>, but I would've liked to understand the above error issued that seems to contradict the quote from PEP 484.</p> <p>Snippet available for execution <a href="https://mypy-play.net/?mypy=latest&amp;python=3.12&amp;gist=dfe50768e0dfe3e3a3abddda83cccfd9" rel="nofollow noreferrer">here</a>.</p> <hr /> <p><strong>EDIT:</strong> It actually seems to boil down to the following:</p> <pre class="lang-py prettyprint-override"><code>from typing import Type a: Type[None] # This seems to cause an issue: # main.py:4: error: Incompatible types in assignment (expression has type &quot;None&quot;, variable has type &quot;type[None]&quot;) [assignment] # 
Found 1 error in 1 file (checked 1 source file) a = None # This seems to work: a = type(None) </code></pre> <p>Snippet available for execution <a href="https://mypy-play.net/?mypy=latest&amp;python=3.12&amp;gist=e84f81ba16aeac4bd5dd986658878636" rel="nofollow noreferrer">here</a>.</p>
<python><python-typing>
2024-06-19 00:00:58
2
15,848
vmonteco
78,639,873
3,486,684
Keeping a "pointer" to the "parent array" from which a "derived array" was produced?
<p>(Aside: my question is equally applicable to <code>numpy</code> structured arrays and non-structured arrays.)</p> <p>Suppose I have a numpy <a href="https://numpy.org/doc/stable/user/basics.rec.html" rel="nofollow noreferrer">structured array</a> with the <code>dtype</code>:</p> <pre class="lang-py prettyprint-override"><code>EXAMPLE_DTYPE = np.dtype([(&quot;alpha&quot;, np.str_), (&quot;beta&quot;, np.int64)]) </code></pre> <p>I have a wrapper around the <code>numpy</code> data array of this <code>dtype</code>, and then I want to implement a special <code>__getitem__</code> for it:</p> <pre class="lang-py prettyprint-override"><code>class ExampleArray: data: np.ndarray = np.array([(&quot;hello&quot;, 0), (&quot;world&quot;, 1)], dtype=EXAMPLE_DTYPE) def __getitem__(self, index: int|str) -&gt; SpecializedArray: return SpecializedArray(index, self.data[index]) </code></pre> <p><code>SpecializedArray</code> is a class which keeps track of the indices used to specialize from a parent array into a derived array:</p> <pre class="lang-py prettyprint-override"><code>class SpecializedArray: specialization: int|str data: np.ndarray def __init__(self, specialization: int|str, data: np.ndarray) -&gt; None: self.specialization = specialization self.data = data def __repr__(self) -&gt; str: return f&quot;{self.specialization} -&gt; {self.data}&quot; </code></pre> <p>I would like <code>SpecializedArray</code> to ideally have a &quot;reference&quot; to the parent array it was derived from. What is the best way to provide such a reference? Should I provide the parent by slicing it, since slicing creates a view?</p> <pre class="lang-py prettyprint-override"><code># method in ExampleArray def __getitem__(self, index: int|str) -&gt; SpecializedArray: # suppose that SpecializedArray took the parent as a third argument return SpecializedArray(index, self.data[index], self.data[:]) </code></pre>
<python><numpy><numpy-slicing>
2024-06-18 22:34:39
0
4,654
bzm3r
78,639,645
1,946,418
Python/Powershell combo - make pwsh load even faster
<p>My main programming is done in Python, and I want to invoke custom PowerShell cmdlets I wrote. I added my <code>.psm1</code> file to the <code>$PSModulePath</code>, so my cmdlets are always loaded.</p> <p>I pass <code>-NoProfile</code> and <code>-NoLogo</code> to make the <code>pwsh</code> command start a little faster. Something like:</p> <pre><code>from subprocess import PIPE, Popen cmd = ['pwsh', '-NoLogo', '-NoProfile', '-Command', cmdToRun] process = Popen(cmd, stderr=PIPE, stdout=PIPE) </code></pre> <p>But this still takes 5+ seconds to return/process.</p> <p>Does anyone know of other ways to run PowerShell scripts even faster? TIA</p>
<python><powershell><powershell-core>
2024-06-18 21:01:23
1
1,120
scorpion35
78,639,642
4,752,738
python requests: ValueError: Timeout value connect was <object object at 0x7c6b5e484a80>, but it must be an int, float or None
<p>I updated <code>google-cloud-bigquery</code> from version <code>3.11.4</code> to <code>3.12.0</code>. requests and urllib3 are pined.</p> <pre><code>requests==2.31.0 requests-futures==1.0.1 requests_pkcs12==1.21 urllib3==1.26.18 </code></pre> <p>Sometimes I get this error and don't understand why:</p> <pre><code> File &quot;/usr/local/lib/python3.10/site-packages/google/api_core/future/polling.py&quot;, line 282, in exception self._blocking_poll(timeout=timeout) File &quot;/usr/local/lib/python3.10/site-packages/google/cloud/bigquery/job/query.py&quot;, line 1318, in _blocking_poll super(QueryJob, self)._blocking_poll(timeout=timeout, **kwargs) File &quot;/usr/local/lib/python3.10/site-packages/google/api_core/future/polling.py&quot;, line 137, in _blocking_poll polling(self._done_or_raise)(retry=retry) File &quot;/usr/local/lib/python3.10/site-packages/google/api_core/retry.py&quot;, line 366, in retry_wrapped_func return retry_target( File &quot;/usr/local/lib/python3.10/site-packages/google/api_core/retry.py&quot;, line 204, in retry_target return target() File &quot;/usr/local/lib/python3.10/site-packages/google/cloud/bigquery/job/query.py&quot;, line 1460, in _done_or_raise self.reload(retry=retry, timeout=transport_timeout) File &quot;/usr/local/lib/python3.10/site-packages/google/cloud/bigquery/job/base.py&quot;, line 781, in reload api_response = client._call_api( File &quot;/usr/local/lib/python3.10/site-packages/google/cloud/bigquery/client.py&quot;, line 816, in _call_api return call() File &quot;/usr/local/lib/python3.10/site-packages/google/cloud/_http/__init__.py&quot;, line 482, in api_request response = self._make_request( File &quot;/usr/local/lib/python3.10/site-packages/google/cloud/_http/__init__.py&quot;, line 341, in _make_request return self._do_request( File &quot;/usr/local/lib/python3.10/site-packages/google/cloud/_http/__init__.py&quot;, line 379, in _do_request return self.http.request( File 
&quot;/usr/local/lib/python3.10/site-packages/google/auth/transport/requests.py&quot;, line 542, in request response = super(AuthorizedSession, self).request( File &quot;/usr/local/lib/python3.10/site-packages/requests/sessions.py&quot;, line 589, in request resp = self.send(prep, **send_kwargs) File &quot;/usr/local/lib/python3.10/site-packages/requests/sessions.py&quot;, line 703, in send r = adapter.send(request, **kwargs) File &quot;/usr/local/lib/python3.10/site-packages/requests/adapters.py&quot;, line 483, in send timeout = TimeoutSauce(connect=timeout, read=timeout) File &quot;/usr/local/lib/python3.10/site-packages/urllib3/util/timeout.py&quot;, line 102, in __init__ self._connect = self._validate_timeout(connect, &quot;connect&quot;) File &quot;/usr/local/lib/python3.10/site-packages/urllib3/util/timeout.py&quot;, line 147, in _validate_timeout raise ValueError( ValueError: Timeout value connect was &lt;object object at 0x7c6b5e484a80&gt;, but it must be an int, float or None. </code></pre> <p>I have seen on other issues that it's because I need to downgrade urllib3 to a version smaller than 2 but this is already the situation in my case.</p>
<python><google-bigquery><python-requests><urllib3>
2024-06-18 20:59:59
0
943
idan ahal
78,639,630
8,510,149
Scalable approach instead of apply in python
<p>I use apply to loop over the rows and get the column names of feat1, feat2 or feat3 if they are equal to 1 and scored is equal to 0. The column names are then inserted into a new feature called reason.</p> <p>This solution doesn't scale to larger datasets. I'm looking for a faster approach. How can I do that?</p> <pre><code>df = pd.DataFrame({'ID':[1,2,3], 'feat1_tax':[1,0,0], 'feat2_move':[1,0,0], 'feat3_coffee': [0,1,0], 'scored':[0,0,1]}) def get_not_scored_reason(row): exclusions_list = [col for col in df.columns if col.startswith('feat')] reasons = [col for col in exclusions_list if row[col] == 1] return ', '.join(reasons) if reasons else None df['reason'] = df.apply(lambda row: get_not_scored_reason(row) if row['scored'] == 0 else None, axis=1) print(df) </code></pre> <pre><code> ID feat1_tax feat2_move feat3_coffee scored reason 0 1 1 1 0 0 feat1_tax, feat2_move 1 2 0 0 1 0 feat3_coffee 2 3 0 0 0 1 None </code></pre>
<python><pandas><numpy>
2024-06-18 20:56:50
3
1,255
Henri
78,639,591
2,280,641
gitlab-ce does not show test coverage on the project badge, always 'unknown'
<p>I'm trying to make the <code>coverage</code> badge work on gitlab-ce, but no success so far.</p> <p>My badge is still unknown:</p> <p><a href="https://i.sstatic.net/Jp74VRT2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jp74VRT2.png" alt="enter image description here" /></a></p> <p>The badge configuration is:</p> <p><a href="https://i.sstatic.net/8vI7qjTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8vI7qjTK.png" alt="enter image description here" /></a></p> <p>I tried both with the job-specific URL and without specifying the job. It didn't do any good.</p> <p>The job works fine and it reports the coverage percentage:</p> <p><a href="https://i.sstatic.net/jCNQbsFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jCNQbsFd.png" alt="enter image description here" /></a></p> <p>gitlab-ce also shows the percentage in the merge request, with the test summary:</p> <p><a href="https://i.sstatic.net/oTy3J3XA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTy3J3XA.png" alt="enter image description here" /></a></p> <p>The job is:</p> <pre class="lang-yaml prettyprint-override"><code>run_tests_job: stage: testing tags: - prepare_test environment: staging needs: [&quot;install_dependencies_job&quot;] rules: - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == &quot;dev&quot;' when: on_success - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == &quot;hml&quot;' when: on_success script: - source venv/bin/activate - python3.12 -m pytest --cov-config=.coveragerc --cov=.
--cov-fail-under=70 --cov-report term-missing --cov-report xml:coverage_detailed.xml --junitxml=coverage.xml - echo &quot;Running tests&quot; - echo &quot;Current package passed all tests successfully&quot; coverage: '/Total coverage:.*?(\d+.\d+)/' artifacts: paths: - coverage.xml - coverage_detailed.xml reports: junit: coverage.xml coverage_report: coverage_format: cobertura path: coverage_detailed.xml cache: key: $CI_PROJECT_NAME paths: - venv/ policy: pull allow_failure: false </code></pre> <p>I tried a lot of tips and even ChatGPT 4, without any success.</p> <p>Any advice? Thanks in advance!</p>
<python><gitlab><continuous-integration><code-coverage><gitlab-ce>
2024-06-18 20:46:52
0
523
DPalharini
78,639,491
21,152,416
How to shutdown resources using dependency_injector
<p>I'm using <a href="https://python-dependency-injector.ets-labs.org/index.html" rel="nofollow noreferrer">dependency_injector</a> to manage DI. I don't understand how to release my resources using this library.</p> <p>I found <a href="https://python-dependency-injector.ets-labs.org/index.html" rel="nofollow noreferrer">shutdown_resources</a> method but have no idea how to use it properly.</p> <p><strong>Example:</strong></p> <pre class="lang-py prettyprint-override"><code>class Resource: &quot;&quot;&quot;Resource example.&quot;&quot;&quot; def __init__(self): &quot;&quot;&quot;.&quot;&quot;&quot; # Initialize session def close(self): &quot;&quot;&quot;Release resources.&quot;&quot;&quot; # Close session class ApplicationContainer(DeclarativeContainer): &quot;&quot;&quot;Application container.&quot;&quot;&quot; resource: Singleton[Resource] = Singleton[Resource](Resource) container = ApplicationContainer() # Do something container.shutdown_resources() # Call close method here </code></pre>
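For orientation, the library's documented route for this is to declare the dependency with `providers.Resource` rather than `Singleton`, so the container knows how to tear it down; `container.shutdown_resources()` then runs the cleanup half. Below is a stdlib-only sketch of that init/shutdown contract (no `dependency_injector` import here, so the provider and method names in the comments are taken from its docs, not exercised):

```python
# A generator-style resource: code before the yield initializes, code after
# the yield releases. This mirrors what a Resource provider drives.
events = []

def session_resource():
    events.append("init")          # what __init__ would do: open the session
    session = "session-object"
    yield session                  # the container hands this out
    events.append("close")         # runs on shutdown_resources()

gen = session_resource()
session = next(gen)                # ~ container.init_resources()
try:
    next(gen)                      # ~ container.shutdown_resources()
except StopIteration:
    pass

print(events)
```

With the library itself, the analogous declaration would be `resource = providers.Resource(session_resource)` inside the container, with `init_resources()` / `shutdown_resources()` driving the generator.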
<python><dependency-injection>
2024-06-18 20:18:32
1
1,197
Victor Egiazarian
78,639,455
3,486,684
How to install a local folder as a package using `conda`? (yet another relative imports question)
<h1>High Level problem</h1> <p>I experiment with/test the Python code I am writing in &quot;playground scripts&quot;.</p> <p>Usually I don't keep these playground scripts around, but recently I have been finding some value in saving them for longer term use. So I decided to create a separate playground folder for them.</p> <h1>Problem Setup</h1> <p>I have an active conda environment, <code>myproject</code>.</p> <p>My working folder looks like:</p> <pre><code>~/x __init__.py something.py ~/x/playground playscript.py </code></pre> <p>I would like to import <code>something.py</code> in <code>playscript.py</code>. I run <code>playscript.py</code> as an interactive window using Jupyter.</p> <p>If I try a relative import of <code>something.py</code>, that fails with:</p> <pre><code>{ &quot;name&quot;: &quot;ImportError&quot;, &quot;message&quot;: &quot;attempted relative import with no known parent package&quot;, &quot;stack&quot;: &quot;--------------------------------------------------------------------------- ImportError Traceback (most recent call last) File ~/x/playground/playscript.py:1 ----&gt; 1 from .. import something ImportError: attempted relative import with no known parent package&quot; } </code></pre> <p><code>conda develop</code> (as of 2024 JUN) is not a solution: <a href="https://github.com/conda/conda-build/issues/4251" rel="nofollow noreferrer">Deprecate or remove conda develop</a>. Reading through that, multiple people seem to suggest:</p> <p><code>pip install --no-build-isolation --no-deps -e .</code></p> <h1>Questions</h1> <p>Suppose I make changes to <code>~/x/something.py</code>, or add entirely new modules...do I need to <code>pip install --no-build-isolation --no-deps -e .</code> again? If so, then this is not a good solution.</p> <h1>Related issues</h1> <p>See also the related: <a href="https://github.com/mamba-org/mamba/issues/695" rel="nofollow noreferrer"> &quot;develop&quot; mode? #695 </a></p>
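A hedged sketch of why an editable install does not need re-running when modules are edited or added: it effectively puts the source directory itself on `sys.path`, so imports always resolve against the live tree. The temp package below is an invented stand-in for `~/x`.

```python
import importlib
import os
import sys
import tempfile

src = tempfile.mkdtemp()
with open(os.path.join(src, "something.py"), "w") as f:
    f.write("VALUE = 1\n")

sys.path.insert(0, src)            # roughly what `pip install -e .` arranges
import something
print(something.VALUE)

# A module added later is importable with no reinstall step:
with open(os.path.join(src, "brand_new.py"), "w") as f:
    f.write("VALUE = 2\n")
importlib.invalidate_caches()      # be safe about finder caching
import brand_new
print(brand_new.VALUE)
```

Metadata-level changes (new entry points, a renamed package) do still call for a reinstall, but plain edits and new modules under the linked tree do not.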
<python><conda><jupyter><python-packaging>
2024-06-18 20:07:24
1
4,654
bzm3r
78,639,402
3,705,854
Capturing stdout from InteractiveConsole.compile
<p>I have the following code:</p> <pre><code>from code import InteractiveConsole from io import StringIO from contextlib import redirect_stdout cons = InteractiveConsole() code = cons.compile(&quot;2&quot;) f = StringIO() with redirect_stdout(f): exec(code) s = f.getvalue() print(&quot;-&quot; * 20) print(s) </code></pre> <p>For some reason, the output of <code>exec</code> is not captured by <code>redirect_stdout</code>. How can I capture the output of <code>exec</code>?</p> <p>This code is run in a Jupyter notebook.</p>
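One relevant detail: `InteractiveConsole.compile` compiles in `"single"` mode, so the bare `2` is shown via `sys.displayhook` rather than an ordinary `print`, and under Jupyter that hook is IPython's, which bypasses the redirected stream. The sketch below (an assumption about plain-CPython behaviour) pins the hook to the builtin one so the redirect sees it:

```python
import sys
from contextlib import redirect_stdout
from io import StringIO

captured = StringIO()
old_hook = sys.displayhook
sys.displayhook = sys.__displayhook__   # force the builtin stdout-based hook
try:
    with redirect_stdout(captured):
        # "single" mode is what InteractiveConsole.compile produces; bare
        # expressions then go through sys.displayhook
        exec(compile("2", "<input>", "single"))
finally:
    sys.displayhook = old_hook

print(repr(captured.getvalue()))  # -> '2\n'
```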
<python><stdout>
2024-06-18 19:52:47
0
349
JKS
78,639,348
9,191,338
Is it possible to skip initialization of __main__ module in Python multiprocessing?
<p>It is common in Python multiprocessing to use <code>if __name__ == &quot;__main__&quot;</code>. However, if I know my child process does not need anything from the <code>__main__</code> module, can I remove this part? e.g.</p> <pre class="lang-py prettyprint-override"><code># test_child.py from multiprocessing import Process, get_context def f(x): print(f&quot;{x}&quot;) def start(): ctx = get_context(&quot;spawn&quot;) p = ctx.Process(target=f, args=(&quot;hello&quot;,)) p.start() p.join() </code></pre> <pre class="lang-py prettyprint-override"><code># test_parent.py from test_child import start start() </code></pre> <p>When I run <code>python test_parent.py</code>, ideally the child process does not need anything from the <code>__main__</code> module, so it can skip this part, and I don't need to add <code>if __name__ == &quot;__main__&quot;</code> in <code>test_parent.py</code>.</p> <p>Currently it will cause an error though.</p> <p><strong>Edit: thanks for all the answers. I think one possible way is to use <code>subprocess</code> to run the <code>test_child.py</code>, and do serialization &amp; deserialization manually.</strong></p>
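The subprocess route from the edit can be sketched in a few lines: the child is an ordinary interpreter process running its own code, so nothing ever re-imports the parent's `__main__`. The payload and child snippet here are invented for illustration.

```python
import pickle
import subprocess
import sys

# The child's code is self-contained; it deserializes its input from stdin.
child_code = (
    "import pickle, sys\n"
    "x = pickle.load(sys.stdin.buffer)\n"
    "print(f'{x}')\n"
)

proc = subprocess.run(
    [sys.executable, "-c", child_code],
    input=pickle.dumps("hello"),   # manual serialization of the argument
    capture_output=True,
)
print(proc.stdout.decode())
```

The trade-off is doing the argument/result serialization yourself, which the edit already anticipates.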
<python><multiprocessing>
2024-06-18 19:40:43
3
2,492
youkaichao
78,639,247
4,823,526
How to get the values of a dictionary type from a parquet file using pyarrow?
<p>I have a parquet file which I am reading with pyarrow.</p> <pre><code>In [83]: pq.read_schema('dummy_file.parquet').field('dummy_column').type Out[83]: DictionaryType(dictionary&lt;values=string, indices=int32, ordered=0&gt;) </code></pre> <p>It says it is a column of dictionary type, which is similar to a SQL enum or pandas category type. Now I want to find the values present in the dictionary type. How do I do that?</p> <p>It says <a href="https://arrow.apache.org/docs/python/generated/pyarrow.DictionaryType.html#pyarrow.DictionaryType.value_type" rel="nofollow noreferrer">here</a> that:</p> <blockquote> <p>The dictionary values are found in an instance of DictionaryArray.</p> </blockquote> <p>But how do I get this <code>DictionaryArray</code>?</p> <hr />
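For orientation, a dictionary-encoded column is just two arrays: integer codes plus the unique values they point into; per the pyarrow docs, after reading the data each chunk of such a column is a `DictionaryArray` whose `.dictionary` attribute holds those values (that attribute name is from the docs, not exercised below). A stdlib-only sketch of the layout itself:

```python
# The dictionary: unique values stored once.
values = ["low", "medium", "high"]
# Per-row int32 codes indexing into `values` (what the file stores per row).
indices = [0, 2, 1, 0, 2]

decoded = [values[i] for i in indices]
print(decoded)  # -> ['low', 'high', 'medium', 'low', 'high']
```

So "the values present in the dictionary type" means the `values` side of this pair, and the schema alone does not carry it; the data must be read.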
<python><pandas><dictionary><parquet><pyarrow>
2024-06-18 19:15:17
1
462
In78
78,639,203
2,862,945
How to plot a simple line with mayavi?
<p>Apparently, I do not understand how to plot a simple line with mayavi. According to the <a href="http://docs.enthought.com/mayavi/mayavi/auto/mlab_helper_functions.html#mayavi.mlab.plot3d" rel="nofollow noreferrer">docs</a>, I should use <code>mlab.plot3d</code>, where the first three arguments <code>x</code>, <code>y</code>, <code>z</code> contain the coordinates:</p> <blockquote> <p>x, y, z and s are numpy arrays or lists of the same shape. x, y and z give the positions of the successive points of the line</p> </blockquote> <p>[quoted from the docs, see the previous link]</p> <p>So I thought I should give this a try and wrote the following piece of code:</p> <pre><code>import numpy as np from mayavi import mlab fig1 = mlab.figure( bgcolor=(1,1,1), fgcolor=(0,0,0)) myLine = mlab.plot3d( np.array( [2, 3, 4] ), # x np.array( [1, 1, 1] ), # y np.array( [1, 1, 1] ), # z representation='points', line_width=1, ) ax1 = mlab.axes( color=(0,0,0), nb_labels=4, extent=[0, 5, 0, 2, 0, 2, ], ) mlab.outline(ax1) mlab.show() </code></pre> <p>Unfortunately, this does not result in a line, but in a cuboid: <a href="https://i.sstatic.net/b1NxhTUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b1NxhTUr.png" alt="enter image description here" /></a></p> <p>Looking at the image, also the starting coordinates are not what I set in the code: I thought it should start at (2,1,1), corresponding to the first value of each of the arrays.</p> <p>Summarizing, I seem not to understand how to use <code>plot3d</code>. What am I missing?</p>
<python><draw><mayavi><mayavi.mlab>
2024-06-18 19:04:26
1
2,029
Alf
78,639,151
1,275,942
Preserving typing/typechecking while extending function with many arguments
<p>I want to subclass/wrap subprocess.Popen. However, it has a lot of arguments. The usual ways to solve this, as far as I'm aware, are 1. &quot;biting the bullet&quot;:</p> <pre class="lang-py prettyprint-override"><code>class MyPopen1(subprocess.Popen): def __init__(self, myarg1, myarg2, bufsize=-1, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=True, shell=False, cwd=None, env=None, universal_newlines=None, startupinfo=None, creationflags=0, restore_signals=True, start_new_session=False, pass_fds=(), *, user=None, group=None, extra_groups=None, encoding=None, errors=None, text=None, umask=-1, pipesize=-1, process_group=None): arguments = self.handle_custom_arguments(myarg1, myarg2) super().__init__(arguments, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, user=user, group=group, extra_groups=extra_groups, encoding=encoding, errors=errors, text=text, umask=umask, pipesize=pipesize, process_group=process_group) </code></pre> <p>or 2. using *args, **kwargs.</p> <pre class="lang-py prettyprint-override"><code>class MyPopen2(subprocess.Popen): def __init__(self, myarg1, myarg2, *args, **kwargs): arguments = self.handle_custom_arguments(myarg1, myarg2) super().__init__(arguments, *args, **kwargs) </code></pre> <p>The second is a lot easier to write; it requires no maintenance if a new argument is added to subprocess.Popen in 3.1x. The downside is that it has no types, which means no static typechecking and no function signature hinting in a text editor.</p> <p>We can get a little closer with <code>@functools.wraps(...)</code>, but at the cost of the signature needing to be the exact same as subprocess.Popen, and it overriding <code>__doc__</code>, etc. I don't think it's what wraps is intended for.</p> <pre class="lang-py prettyprint-override"><code># Not good solution. 
class MyPopen3(subprocess.Popen): @functools.wraps(subprocess.Popen.__init__) def __init__(self, myarg1: int, myarg2: str, *args, **kwargs): &quot;&quot;&quot; :param myarg1: does foo :param myarg2: does bar &quot;&quot;&quot; arguments = self.handle_custom_arguments(myarg1, myarg2) super().__init__(arguments, *args, **kwargs) </code></pre> <p>Ideally, I'd want a type signature like...</p> <pre class="lang-py prettyprint-override"><code>def __init__(self, myarg1: int, myarg2: str, *args: ArgsOf[subprocess.Popen.__init__][1:], **kwargs: KwargsOf[subprocess.Popen.__init__]): </code></pre> <p>Is there any way to get this sort of effect? If not, how can I preserve typechecking when doing something like this?</p>
<python><python-typing>
2024-06-18 18:47:38
2
899
Kaia
78,638,998
11,092,636
Radix sort slower than expected compared to standard sort
<p>I've implemented two versions of radix sort (the version that allows sorting integers whose values go up to n² where n is the size of the list to sort) in Python for benchmarking against the standard sort (Timsort). I'm using PyPy for a fairer comparison.</p> <p>Surprisingly, my radix sort implementation, even without using a hashmap (using a direct access array instead), is slower than the standard sort even for larger input sizes. There must be some micro-optimisation I'm missing, since O(n) should eventually beat O(n log n). I'm seeking advice to achieve better performance. I'm doing this for learning purposes, therefore I'm not looking for built-ins, libraries, or C-compiled code to call from Python.</p> <p>Are there micro-optimisations I could apply? Is my code really O(n)?</p> <p><a href="https://i.sstatic.net/2fbgloyM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fbgloyM.png" alt="enter image description here" /></a></p> <p>My code takes up to 10 seconds to run on an AMD Ryzen 9 7950X CPU:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import random import time from collections import defaultdict def radix_sort(arr, size): least_sig_digit = defaultdict(list) for num in arr: q, r = divmod(num, size) least_sig_digit[r].append(q) highest_sig_digit = defaultdict(list) for k in range(size): # k goes in order of lowest significant digit for q in least_sig_digit[k]: highest_sig_digit[q].append(q*size+k) i: int = 0 for k in range(size): for num in highest_sig_digit[k]: arr[i] = num i += 1 return arr def radix_sort_no_hashmap(arr, size): least_sig_digit = [[] for _ in range(size)] for num in arr: q, r = divmod(num, size) least_sig_digit[r].append(q) highest_sig_digit = [[] for _ in range(size)] for k in range(size): # k goes in order of lowest significant digit for q in least_sig_digit[k]: highest_sig_digit[q].append(q*size+k) i: int = 0 for k in range(size): for num in highest_sig_digit[k]: arr[i] = num i += 1 return arr
def benchmark_sorting_algorithms(): sizes = [1000, 10000, 100000, 200000, 1000000, 2000000, 3000000, 4000000, 5000000, 6000000, 10000000] radix_times = [] radix_sort_no_hashmap_times = [] std_sort_times = [] for size in sizes: array = random.sample(range(1, size**2), size) new_arr = array.copy() start_time = time.time() a = radix_sort(new_arr, size) radix_times.append(time.time() - start_time) new_arr = array.copy() start_time = time.time() b = radix_sort_no_hashmap(new_arr, size) radix_sort_no_hashmap_times.append(time.time() - start_time) new_arr = array.copy() start_time = time.time() c = sorted(new_arr) std_sort_times.append(time.time() - start_time) for k in range(len(array)): assert a[k] == b[k] == c[k] return sizes, radix_times, std_sort_times, radix_sort_no_hashmap_times sizes, radix_times, std_sort_times, radix_sort_no_hashmap_times = benchmark_sorting_algorithms() plt.figure(figsize=(12, 6)) plt.plot(sizes, radix_times, label='Radix Sort (O(n))') plt.plot(sizes, std_sort_times, label='Standard Sort (O(nlogn))') plt.plot(sizes, radix_sort_no_hashmap_times, label='Radix Sort (O(n)) - No Hashmap') plt.xlabel('Input size (n)') plt.xscale('log') plt.ylabel('Time (seconds)') plt.yscale('log') plt.title('Radix Sort vs Standard Sort') plt.legend() plt.grid(True) plt.show() </code></pre> <p>The same question was posted <a href="https://stackoverflow.com/questions/78643069/optimization-of-radix-sort-implementation-slower-than-expected-compared-to-stan">here in C++</a> following the advice of @user24714692.</p>
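One direction such timings usually point to (an assumption about the bottleneck, not a measurement) is the per-bucket Python lists: every element costs an `append` into a list-of-lists. A flat counting-sort pass over preallocated arrays performs the same two-digit radix sort with no per-bucket lists at all:

```python
def counting_pass(arr, base, divisor):
    # Stable counting sort of arr by the digit (num // divisor) % base.
    counts = [0] * base
    for num in arr:
        counts[(num // divisor) % base] += 1
    # Prefix sums: counts[d] becomes the starting offset of digit d's bucket.
    total = 0
    for d in range(base):
        counts[d], total = total, total + counts[d]
    out = [0] * len(arr)
    for num in arr:
        d = (num // divisor) % base
        out[counts[d]] = num
        counts[d] += 1
    return out

def radix_sort_flat(arr, size):
    # Same scheme as the question: two base-`size` digits cover values < size².
    arr = counting_pass(arr, size, 1)       # least significant digit
    return counting_pass(arr, size, size)   # most significant digit

print(radix_sort_flat([8, 3, 5, 0, 7, 2, 6, 1], 8))
```

Whether this actually beats Timsort under PyPy still depends on constants (PyPy's list strategies are very good), so only a benchmark on the same machine settles it.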
<python><algorithm><sorting><complexity-theory><pypy>
2024-06-18 18:07:51
1
720
FluidMechanics Potential Flows
78,638,972
13,135,901
Remove all duplicate rows except first and last in pandas
<p>I have a signal log with a lot of redundant data that I parse with pandas. To remove all duplicate rows besides the first and last, I use the following code:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame( { &quot;A&quot;: [1, 2, 3, 4, 5, 6, 7, 8, 9], &quot;B&quot;: [0, 0, 0, 0, 1, 1, 1, 2, 3], } ) &gt;&gt;&gt; df A B 0 1 0 1 2 0 2 3 0 3 4 0 4 5 1 5 6 1 6 7 1 7 8 2 8 9 3 &gt;&gt;&gt; df = df[~((df.B == df.B.shift()) &amp; (df.B == df.B.shift(-1)))] &gt;&gt;&gt; df A B 0 1 0 3 4 0 4 5 1 6 7 1 7 8 2 8 9 3 </code></pre> <p>The log files can get pretty big (hundreds of megabytes), and the app is running on an AWS EC2 VPS with only 1 GB of RAM. So if I try to parse a large file, it crashes the server. So my questions are these:</p> <ol> <li>Does this method require 3 times the file's size in RAM (because it &quot;creates&quot; 3 dataframes)?</li> <li>Is there a more memory-efficient way to do it?</li> </ol>
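For scale, the same keep-first-and-last-of-each-run idea can be expressed as a streaming pass that never holds boolean masks or a second frame; this stdlib-only sketch uses tuples as stand-ins for rows (with pandas itself, chunked reading would be the analogous move):

```python
from itertools import groupby

def first_last_of_runs(rows, key):
    # Keep only the first and last row of each run of equal key values.
    for _, run in groupby(rows, key=key):
        run = list(run)            # one run at a time, not the whole log
        yield run[0]
        if len(run) > 1:
            yield run[-1]

rows = [(1, 0), (2, 0), (3, 0), (4, 0), (5, 1), (6, 1), (7, 1), (8, 2), (9, 3)]
kept = list(first_last_of_runs(rows, key=lambda r: r[1]))
print(kept)
```

The output matches the question's pandas result (A values 1, 4, 5, 7, 8, 9), and peak memory tracks the longest single run rather than the file size.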
<python><pandas><dataframe>
2024-06-18 17:58:20
2
491
Viktor
78,638,748
9,357,484
Unable to change the default version of Python in the virtual environment
<p>I want to create a virtual environment with Python version 3.9. I am using the Windows 11 operating system. I found various Python versions in the folder C:\Users\user1\AppData\Local\Programs\Python. The Python versions are as follows: <a href="https://i.sstatic.net/TMM1BbpJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMM1BbpJ.png" alt="enter image description here" /></a></p> <p>Now if I try to create a virtual environment with the command <code>python39 -m venv myvenv</code>,</p> <p>I receive the error message: 'python39' is not recognized as an internal or external command, operable program or batch file. <a href="https://i.sstatic.net/M6NaiOrp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6NaiOrp.png" alt="enter image description here" /></a></p> <p>I also tried another way to create a virtual environment, but that did not work either.</p> <p>How can I solve this?</p> <p>Thank you.</p> <p><a href="https://i.sstatic.net/bZXz32JU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZXz32JU.png" alt="enter image description here" /></a></p>
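The root cause here is that the installer never creates a `python39` command; the usual fix is to name the interpreter explicitly. A hedged sketch — the Windows paths below are assumptions modelled on the folder in the question, and the runnable last lines use whatever interpreter is on PATH:

```shell
# On Windows (cmd), either of these is the usual route:
#   "C:\Users\user1\AppData\Local\Programs\Python\Python39\python.exe" -m venv myvenv
#   py -3.9 -m venv myvenv     # via the Windows "py" launcher, if installed

# The same pattern with the interpreter on PATH, shown so the command can be
# run anywhere (--without-pip just keeps the demo self-contained):
python3 -m venv --without-pip /tmp/demo_venv39
ls /tmp/demo_venv39/bin
```

The point is the same on every platform: pick the interpreter by path or launcher version, and `-m venv` builds the environment from that exact interpreter.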
<python><python-venv>
2024-06-18 16:54:54
1
3,446
Encipher
78,638,695
6,622,697
How to use separate model files with SQLAlchemy
<p>There are many questions about this and I tried various combinations, but can't get this to work. I want to be able to separate my model files into separate files.</p> <p>I have the following structure</p> <pre><code>app.py db Database.py models __init__.py City.py Meteo.py </code></pre> <p>In <code>models.__init__.py</code>, I am including all the models</p> <pre><code>from sqlalchemy.orm import DeclarativeBase import City import Meteo class ModelBase(DeclarativeBase): pass metadata = ModelBase.metadata </code></pre> <p><code>Database.py</code> contains the initialization of the db</p> <pre><code>from sqlalchemy import URL, create_engine from db.models import metadata engine = create_engine(&quot; &quot;) def create_tables(engine): metadata.create_all(engine) # Print the names of all tables in the database def print_all_tables(engine): metadata.reflect(bind=engine) tables = metadata.tables.keys() print(&quot;List of tables:&quot;) for table in tables: print(f' {table}') </code></pre> <p><code>app.py</code> includes the models and tries to create all the tables</p> <pre><code>from db.Database import engine, create_tables, print_all_tables from views.calibration_views import calibration import db.models create_tables(engine) print_all_tables(engine) if __name__ == '__main__': app.run() </code></pre> <p>And just for completeness, <code>City.py</code> looks like this:</p> <pre><code>from sqlalchemy import String, Integer from sqlalchemy.orm import mapped_column, relationship from db.models import ModelBase class City(ModelBase): __tablename__ = 'city' city_id = mapped_column(Integer, primary_key=True) city_name = mapped_column(String) city_climate = mapped_column(String) city_meteo_data = relationship(&quot;Meteo&quot;, backref=&quot;city&quot;) </code></pre> <p>But it dies in Database.py when it includes <code>db.models</code>. 
Somehow, it can't find the other files in the models directory.</p> <p><a href="https://i.sstatic.net/nuECCYOP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuECCYOP.png" alt="enter image description here" /></a></p> <p>Is there something wrong with the way I have <code>__init__.py</code> defined?</p>
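The import rule at play: inside a package's `__init__.py`, a bare `import City` searches `sys.path` for a top-level module, while `from . import City` is the package-relative form. A hedged stdlib sketch with a throwaway package (named `demo_models` to avoid clashing with anything importable; it stands in for the `db/models` folder) shows the relative form resolving:

```python
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "demo_models")
os.makedirs(pkg)
with open(os.path.join(pkg, "City.py"), "w") as f:
    f.write("NAME = 'city module'\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from . import City\n")   # relative import resolves inside the package

sys.path.insert(0, root)
demo_models = importlib.import_module("demo_models")
print(demo_models.City.NAME)  # -> city module
```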
<python><sqlalchemy>
2024-06-18 16:43:53
1
1,348
Peter Kronenberg
78,638,694
1,943,571
How to test a Pydantic BaseModel with MagicMock spec and wraps
<p>Given this example in Python 3.8, using Pydantic v2:</p> <pre><code>from pydantic import BaseModel import pytest from unittest.mock import MagicMock class MyClass(BaseModel): a: str = '123' # the actual implementation has many intertwined attributes # and methods that I'd like to test @pytest.fixture(name='myclass') def fixture_myclass(): yield MagicMock(wraps=MyClass(), spec=MyClass) def test_myclass_wraps(myclass): assert myclass.a == '123' </code></pre> <p>Running this will raise:</p> <p><code>AttributeError: Mock object has no attribute 'a'</code></p> <p>I expect attribute access to pass through the wrapper here. However, since Pydantic uses metaclass trickery to not store attributes and methods normally until after instantiation, <code>a</code> doesn't exist in <code>dir</code> or <code>myclass.__dict__</code>. I think this is why the <code>spec</code> doesn't work the way I expect, since <code>MagicMock</code> uses <code>dir</code> under the hood to inspect the attributes on an object. So, it's unable to spec the instance properly because of how Pydantic stores stuff.</p> <p>So how can I mock a <code>BaseModel</code> class? I would like to use <code>spec</code> for test safety and <code>wraps</code> for simplifying the amount of manual mocking.</p>
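The dir()-based explanation can be reproduced without Pydantic at all, which suggests the behaviour is general to `spec`, not only to Pydantic's metaclass machinery. A hedged stdlib sketch: attributes that only exist per-instance are rejected when the class is the spec, even though `wraps` points at a real instance that has them.

```python
from unittest.mock import MagicMock

class Plain:
    def __init__(self):
        self.a = "123"   # instance-only attribute; not in dir(Plain)

by_class = MagicMock(wraps=Plain(), spec=Plain)       # spec from the class
by_instance = MagicMock(wraps=Plain(), spec=Plain())  # spec from an instance

try:
    by_class.a
    class_spec_allows_a = True
except AttributeError:
    class_spec_allows_a = False

print(class_spec_allows_a)          # -> False
print(hasattr(by_instance, "a"))    # -> True
```

So spec'ing on an instance (`spec=MyClass()`) is one workaround to try; whether `wraps` then passes the stored value through rather than a child mock is version-dependent behaviour worth asserting explicitly in the suite.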
<python><pytest><python-unittest.mock><pydantic-v2>
2024-06-18 16:43:30
3
2,702
Remolten