[Dataset preview columns: QuestionId (int64), UserId (int64), QuestionTitle (string), QuestionBody (string), Tags (string), CreationDate (string date), AnswerCount (int64), UserExpertiseLevel (int64), UserDisplayName (string)]
78,085,020
1,839,674
Packaging with pyproject.toml that will include other multi level directories
<p>I have spent two full days trying to figure this out with no success. I have a Python project that I want to package. That is the easy part. The part I can't figure out is how to copy other multilevel folders into the package.</p> <p>I am trying to get <em>myproj</em> as a package with <em>dosomething.py</em>, but also to include the <em>algorithms</em> and <em>configs</em> folders in their entirety (folder structure and all files included).</p> <p>Project structure:</p> <pre><code>- myproj
  - src
    - myproj
      - dosomething.py
  - tests
    - test_do_something.py
  - algorithms
    - Type1
      - algorithm1
        - eval.py
        - hyperparameters.yml
        - a bunch of other folders and files
      - algorithm2
        - eval.py
        - hyperparameters.yml
        - a bunch of other folders and files
    - Type2
      - algorithm3
        - eval.py
        - hyperparameters.yml
        - a bunch of other folders and files
      - algorithm4
        - eval.py
        - hyperparameters.yml
        - a bunch of other folders and files
  - configs
    - type1.yml
    - type2.yml
  - pyproject.toml
  - mypy.ini
  - tox.ini
  - run-mypy
</code></pre> <p>Contents of pyproject.toml:</p> <pre><code>[build-system]
requires = [&quot;setuptools &gt;= 61.0&quot;, &quot;wheel&quot;]
build-backend = &quot;setuptools.build_meta&quot;

[project]
name = &quot;myproj&quot;
version = &quot;0.0.1&quot;
readme = &quot;README.md&quot;
dependencies = [
    &quot;numpy &gt;= 1.26.4&quot;,
]

[project.urls]
repository = &quot;https://gitlab.com/somerepo&quot;

[project.optional-dependencies]
dev = [
    &quot;tox&quot;,
    &quot;pytest&quot;,
    &quot;pytest-sugar&quot;,
    &quot;pytest-cov&quot;,
    &quot;black&quot;,
    &quot;flake8&quot;,
    &quot;flake8-docstrings&quot;,
    &quot;mypy&quot;,
    &quot;pre-commit&quot;,
    &quot;sphinx&quot;,
    &quot;sphinx-rtd-theme&quot;
]
</code></pre> <p>After doing a pip install in a new conda env, I am looking for the following:</p> <pre><code>- env/test_env/lib/python3.9/site-packages
  - myproj
    - algorithms and everything under it
    - configs and everything under it
    - dosomething.py
</code></pre> <p>I have tried MANIFEST.in, and I have tried everything on this page: <a href="https://setuptools.pypa.io/en/latest/userguide/datafiles.html" rel="nofollow noreferrer">https://setuptools.pypa.io/en/latest/userguide/datafiles.html</a></p> <p>I figured it would be trivial to copy folders and files in their entirety into a package folder, but I am stumped.</p> <p>Any help would be greatly appreciated!</p>
<python><setuptools><python-packaging><pyproject.toml>
2024-03-01 00:37:50
1
620
lr100
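For the packaging question above, one direction that may help (a sketch, not a verified fix for this exact project): a wheel only installs files that live *inside* a package, so top-level `algorithms/` and `configs/` directories will never land in `site-packages` no matter what `MANIFEST.in` says. If those trees are moved under `src/myproj/`, setuptools can ship them as package data declared in `pyproject.toml`:

```toml
[tool.setuptools.packages.find]
where = ["src"]

[tool.setuptools.package-data]
# ship every file under these folders inside the wheel;
# recursive '**' globs need a reasonably recent setuptools (>= 62.3)
myproj = ["algorithms/**/*", "configs/*"]
```

At runtime the files can then be located with `importlib.resources.files("myproj")` rather than a path relative to the source checkout.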
78,084,959
6,943,622
Count Unguarded Cells in the Grid (recursion)
<p>It's a semi-popular LeetCode question, and I know there are a lot of solutions online, but I am practicing by solving all these problems myself to really master recursion. I am wondering what's wrong with my current approach; I keep maxing out the call stack. I would appreciate feedback! Here's the question:</p> <p>&quot;You are given two integers m and n representing a 0-indexed m x n grid. You are also given two 2D integer arrays guards and walls where guards[i] = [rowi, coli] and walls[j] = [rowj, colj] represent the positions of the ith guard and jth wall respectively.</p> <p>A guard can see every cell in the four cardinal directions (north, east, south, or west) starting from their position unless obstructed by a wall or another guard. A cell is guarded if there is at least one guard that can see it.</p> <p>Return the number of unoccupied cells that are not guarded.&quot;</p> <p><strong>Example</strong></p> <p>Input:</p> <pre><code>m = 4, n = 6, guards = [[0,0],[1,1],[2,3]], walls = [[0,1],[2,2],[1,4]]
</code></pre> <p>Output:</p> <pre><code>&quot;7&quot;
</code></pre> <p>Here's the code I've written so far:</p> <pre><code>def countUnguarded(m: int, n: int, guards: List[List[int]], walls: List[List[int]]) -&gt; int:
    def dfs(grid, row, col):
        if row &lt; 0 or row &gt;= m or col &lt; 0 or col &gt;= n or grid[row][col] == 'W':
            return
        # Mark cell as watched
        print(&quot;marking&quot;)
        grid[row][col] = '1'
        # Recursive call
        dfs(grid, row + 1, col)
        dfs(grid, row - 1, col)
        dfs(grid, row, col + 1)
        dfs(grid, row, col - 1)

    grid = [['0'] * n for _ in range(m)]

    # Update grid to mark guards as 'G'
    for guard in guards:
        row, col = guard
        grid[row][col] = 'G'

    # Update grid to mark walls as 'W'
    for wall in walls:
        row, col = wall
        grid[row][col] = 'W'

    # Run dfs for each cell with Guard
    for row in range(m):
        for col in range(n):
            if grid[row][col] == 'G':
                print(f&quot;running DFS at point {row, col}&quot;)
                dfs(grid, row, col)

    # count result
    unguarded_count = 0
    for row in range(m):
        for col in range(n):
            if grid[row][col] == '0':
                unguarded_count += 1
    return unguarded_count
</code></pre>
<python><algorithm><recursion><depth-first-search>
2024-03-01 00:08:32
2
339
Duck Dodgers
78,084,852
4,979,733
How to handle Path objects when dumping to YAML
<p>I have an object that contains <code>PosixPath</code> instances as data members. How do I write this out as a YAML file? I currently do:</p> <pre><code>safe_dump_all(data, f_out, indent=2, default_flow_style=False)
</code></pre> <p>but I'm getting an error that it doesn't know how to represent Path objects.</p> <pre><code>Type: &lt;class 'yaml.representer.RepresenterError'&gt;
2024-02-29 15:28:18 ERROR - Value: ('cannot represent an object', PosixPath('/a/b/c'))
</code></pre> <p>When using JSON, I solved it by writing a custom encoder like this:</p> <pre><code>class MyJSONEncoder(JSONEncoder):
    def default(self, o):
        if isinstance(o, Path):
            return os.fspath(o)
        return JSONEncoder.default(self, o)
</code></pre> <p>but I'm not sure how to do it for YAML.</p>
<python><yaml>
2024-02-29 23:31:30
1
3,491
user4979733
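The YAML analogue of a custom `JSONEncoder` is registering a *representer* on the dumper. A sketch with PyYAML (assuming `safe_dump_all` above comes from PyYAML's safe dumper): `add_multi_representer` covers every `PurePath` subclass, so both `PosixPath` and `WindowsPath` are handled.

```python
import os
from pathlib import Path, PurePath

import yaml  # PyYAML

def path_representer(dumper, data):
    # serialize any Path as a plain string scalar
    return dumper.represent_str(os.fspath(data))

# register once, for all PurePath subclasses, on the safe dumper
# used by yaml.safe_dump / yaml.safe_dump_all
yaml.add_multi_representer(PurePath, path_representer, Dumper=yaml.SafeDumper)

print(yaml.safe_dump({"output_dir": Path("/a/b/c")}, default_flow_style=False))
```

After registration the original `safe_dump_all(...)` call works unchanged.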
78,084,790
2,635,863
split data frame index by separator and convert into columns in pandas
<pre><code>df = pd.DataFrame({'x':[1,2,3,4]})
df.index = ['A:100','B:103','C:105','D:108']
</code></pre> <p>I'm trying to split the index at <code>:</code> and convert the output into columns <code>y</code> and <code>z</code>:</p> <pre><code>x  y  z
1  A  100
2  B  103
3  C  105
4  D  108
</code></pre> <p>I thought this would work:</p> <pre><code>df[['y','z']] = df.index.str.split(':', expand=True)
</code></pre> <p>but I get a <code>ValueError: Columns must be same length as key</code></p>
<python><pandas>
2024-02-29 23:11:08
2
10,765
HappyPy
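A sketch of why the `ValueError` appears, and one workaround: `.str.split(expand=True)` on an *Index* returns a `MultiIndex`, not a two-column `DataFrame`, and the two-column assignment can't consume that. Going through a `Series` first makes `expand=True` return a `DataFrame` that aligns on the same index:

```python
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3, 4]})
df.index = ['A:100', 'B:103', 'C:105', 'D:108']

# Index.str.split(expand=True)  -> MultiIndex (assignment fails)
# Series.str.split(expand=True) -> DataFrame (assignment aligns by index)
df[['y', 'z']] = df.index.to_series().str.split(':', expand=True)
print(df)
```

Note the resulting `z` values are strings; add `df['z'] = df['z'].astype(int)` if numbers are wanted.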
78,084,763
1,447,953
Collect common groups on non-index column across two dataframes
<p>Here are two dataframes grouped how I want them:</p> <pre><code>last5s = pd.Timestamp.now().replace(microsecond=0) - pd.Timedelta('5s') dates = pd.date_range(last5s, periods = 5, freq='s') N=10 data1 = np.random.randint(0,10,N) data2 = np.random.randint(0,10,N) df1 = pd.DataFrame({'timestamp': np.random.choice(dates, size=N), 'A': data1}) df2 = pd.DataFrame({'timestamp': np.random.choice(dates, size=N), 'B': data2}) print(df1) print(df2) print() g1 = df1.groupby(pd.Grouper(key='timestamp', freq='1s')) print(&quot;g1:&quot;) for time, group in g1: print('time:', time) print(group) print() print() g2 = df2.groupby(pd.Grouper(key='timestamp', freq='1s')) print('g2:') for time, group in g2: print('time:', time) print(group) print() </code></pre> <p>Output (e.g.):</p> <pre><code> timestamp A 0 2024-03-01 10:05:26 7 1 2024-03-01 10:05:25 8 2 2024-03-01 10:05:28 1 3 2024-03-01 10:05:24 2 4 2024-03-01 10:05:28 5 5 2024-03-01 10:05:27 4 6 2024-03-01 10:05:24 6 7 2024-03-01 10:05:26 3 8 2024-03-01 10:05:26 8 9 2024-03-01 10:05:28 8 timestamp B 0 2024-03-01 10:05:25 1 1 2024-03-01 10:05:26 6 2 2024-03-01 10:05:25 5 3 2024-03-01 10:05:28 7 4 2024-03-01 10:05:27 7 5 2024-03-01 10:05:28 1 6 2024-03-01 10:05:28 4 7 2024-03-01 10:05:25 0 8 2024-03-01 10:05:24 6 9 2024-03-01 10:05:24 5 g1: time: 2024-03-01 10:05:24 timestamp A 3 2024-03-01 10:05:24 2 6 2024-03-01 10:05:24 6 time: 2024-03-01 10:05:25 timestamp A 1 2024-03-01 10:05:25 8 time: 2024-03-01 10:05:26 timestamp A 0 2024-03-01 10:05:26 7 7 2024-03-01 10:05:26 3 8 2024-03-01 10:05:26 8 time: 2024-03-01 10:05:27 timestamp A 5 2024-03-01 10:05:27 4 time: 2024-03-01 10:05:28 timestamp A 2 2024-03-01 10:05:28 1 4 2024-03-01 10:05:28 5 9 2024-03-01 10:05:28 8 g2: time: 2024-03-01 10:05:24 timestamp B 8 2024-03-01 10:05:24 6 9 2024-03-01 10:05:24 5 time: 2024-03-01 10:05:25 timestamp B 0 2024-03-01 10:05:25 1 2 2024-03-01 10:05:25 5 7 2024-03-01 10:05:25 0 time: 2024-03-01 10:05:26 timestamp B 1 2024-03-01 10:05:26 6 time: 
2024-03-01 10:05:27 timestamp B 4 2024-03-01 10:05:27 7 time: 2024-03-01 10:05:28 timestamp B 3 2024-03-01 10:05:28 7 5 2024-03-01 10:05:28 1 6 2024-03-01 10:05:28 4 </code></pre> <p>How do I &quot;join&quot; the groups together such that I can iterate over them together? E.g. I want to be able to do:</p> <pre><code>for time, group1, group2 in somehow_joined(g1,g2): &lt;do stuff with group1 and group2 in this common time group&gt; </code></pre>
<python><pandas><dataframe><group-by>
2024-02-29 23:00:38
2
2,974
Ben Farmer
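One way to sketch `somehow_joined`, assuming both frames are grouped with the same `freq` so the keys are comparable timestamps: iterate over the union of group keys and pull each side's subframe with `get_group`, substituting an empty frame when one side lacks that key (`g.obj` is the original ungrouped frame, so the empty slice keeps the right columns):

```python
import pandas as pd

def somehow_joined(g1, g2):
    # union of group keys from both groupbys, in sorted order
    keys = sorted(set(g1.groups) | set(g2.groups))
    for key in keys:
        # empty frame with the correct columns when a side lacks the key
        grp1 = g1.get_group(key) if key in g1.groups else g1.obj.iloc[0:0]
        grp2 = g2.get_group(key) if key in g2.groups else g2.obj.iloc[0:0]
        yield key, grp1, grp2
```

Usage then matches the desired loop: `for time, group1, group2 in somehow_joined(g1, g2): ...`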
78,084,675
3,834,841
Python class method type overloading returns invalid type according to Pylint
<p>I've written a function that returns a <code>dict</code> from a config, and am trying to type it so that based on the key, the correct value type is returned (a <code>list</code> of an aliased <code>TypedDict</code>). I tried to achieve this with method overloading, but am receiving a Pylint error when trying to use the returned <code>list</code>: <code>Value 'x' is unsubscriptable E1136:unsubscriptable-object</code>. Here's a couple of examples that show the problem:</p> <p>With these types defined</p> <pre class="lang-py prettyprint-override"><code>class AType(TypedDict): prop_a: str class BType(TypedDict): prop_b: str class CType(TypedDict): prop_c: str </code></pre> <p>This example using module-level functions returns the type I expect, which is a <code>list</code> of the aliased <code>TypedDict</code></p> <pre class="lang-py prettyprint-override"><code>__config: dict[str, list[AType] | list[BType] | list[CType]] = {&quot;a&quot;: [{&quot;prop_a&quot;: &quot;a&quot;}], &quot;b&quot;: [{&quot;prop_b&quot;: &quot;b&quot;}], &quot;c&quot;: [{&quot;prop_c&quot;: &quot;c&quot;}]} @overload def func(s: Literal[&quot;a&quot;]) -&gt; list[AType]: ... @overload def func(s: Literal[&quot;b&quot;]) -&gt; list[BType]: ... @overload def func(s: Literal[&quot;c&quot;]) -&gt; list[CType]: ... 
def func(s: Literal[&quot;a&quot;] | Literal[&quot;b&quot;] | Literal[&quot;c&quot;]) -&gt; list[AType] | list[BType] | list[CType]: return __config[s] x = func(&quot;c&quot;) y = x[0][&quot;prop_c&quot;] # No error </code></pre> <p>However, as soon as I move these functions into a class and try to use the returned value the same way, I get the Pylint error described above (value is unsubscriptable).</p> <pre class="lang-py prettyprint-override"><code>class Test: __config: dict[str, list[AType] | list[BType] | list[CType]] def __init__(self): self.__config = {&quot;a&quot;: [{&quot;prop_a&quot;: &quot;a&quot;}], &quot;b&quot;: [{&quot;prop_b&quot;: &quot;b&quot;}], &quot;c&quot;: [{&quot;prop_c&quot;: &quot;c&quot;}]} @overload def func(self, s: Literal[&quot;a&quot;]) -&gt; list[AType]: ... @overload def func(self, s: Literal[&quot;b&quot;]) -&gt; list[BType]: ... @overload def func(self, s: Literal[&quot;c&quot;]) -&gt; list[CType]: ... def func(self, s: Literal[&quot;a&quot;] | Literal[&quot;b&quot;] | Literal[&quot;c&quot;]) -&gt; list[AType] | list[BType] | list[CType]: return self.__config[s] test = Test() x = test.func(&quot;c&quot;) y = x[0][&quot;prop_c&quot;] # Value 'x' is unsubscriptable E1136:unsubscriptable-object </code></pre> <p>Any idea what I'm doing wrong here? For reference, this is with Python 3.11.5 and Pylint 3.0.3, using the bundled typechecker. Also, when I reveal the type of <code>x</code> in my editor, it's shown correctly in both scenarios as <code>list[CType]</code>.</p>
<python><python-3.x><pylint>
2024-02-29 22:39:29
0
3,875
awarrier99
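Since the editor's reveal-type already shows `list[CType]` in both scenarios, the E1136 looks like the linter losing track of `@overload` on *methods* rather than a typing mistake in the code. A low-ceremony workaround sketch is an explicit `cast`, which pins the type for static checkers and is a no-op at runtime (shown with a simplified stand-in, since the full `Test` class from the question isn't reproduced here):

```python
from typing import TypedDict, cast

class CType(TypedDict):
    prop_c: str

def config_lookup() -> object:
    # stand-in for the overloaded method whose return type the linter loses
    return [{"prop_c": "c"}]

# cast() fixes the static type without changing runtime behavior
x = cast(list[CType], config_lookup())
y = x[0]["prop_c"]
```

A targeted `# pylint: disable=unsubscriptable-object` on the offending line is the other common escape hatch.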
78,084,659
12,027,869
Two Plots in One Figure using seaborn
<p>I am trying to plot two plots in one figure: a histogram and a line plot based on the same x axis tick labels from the histogram.</p> <p>The histogram is just the frequency by <code>state</code> and <code>group</code>.</p> <p>The line plot is the <code>target</code> by <code>state</code>.</p> <p>I want the legends to be in the same box even though there are two axes (<code>ax1</code>, <code>ax2</code>).</p> <p>I tried the two approaches below, but neither gives me what I want.</p> <p>This works, but there are two separate legend boxes:</p> <pre><code>h2, l2 = ax2.get_legend_handles_labels()
ax2.legend(h2, l2)
</code></pre> <p>This only displays the <code>target</code> label:</p> <pre><code>h1, l1 = ax.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax.legend(h1+h2, l1+l2, loc=2)
</code></pre> <p>Here is the full code:</p> <pre><code>data = {
    'state': ['nj', 'nj', 'nj', 'ny', 'ny', 'ny'],
    'group': ['A', 'A', 'B', 'A', 'B', 'B'],
    'target': [20, 21, 19, 18, 15, 33]
}
df_test1 = pd.DataFrame(data)

fig, ax = plt.subplots(figsize=(10, 6))
sns.histplot(
    data=df_test1,
    x='state',
    hue='group',
    hue_order=['A', 'B'],
    multiple='dodge',
    ax=ax,
    shrink=.5,
)
ax2 = ax.twinx()
sns.lineplot(
    data=df_test1,
    x='state',
    y='target',
    label='target',
    ax=ax2,
)

h1, l1 = ax.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax.legend(h1+h2, l1+l2, loc=2)
</code></pre>
<python><seaborn>
2024-02-29 22:35:07
0
737
shsh
78,084,456
15,963,311
How to maintain "pydantic style" errors for validations outside of pydantic?
<p>We are using fastapi and pydantic to excellent effect. However, there are some validations that we perform that can not (<a href="https://github.com/pydantic/pydantic/issues/1227#issuecomment-585723753" rel="nofollow noreferrer">should not?</a>) be done in a pydantic validator. (Mostly database/world-state kinds of validations)</p> <p>Examples:</p> <ul> <li>When creating a <code>User</code> with a <code>username</code>, the pydantic model has validations for length and allowed characters, but a uniqueness check requires a database call, and so should not be done in a pydantic validator. <ul> <li>Further, this should be enforced by the database, so it can be done atomically (create and check for error, rather than check if name exists and then try to create).</li> </ul> </li> <li>Creating a <code>Document</code> object with several &quot;related to&quot; links that mention other documents by <code>id</code> <ul> <li>These are foreign key relationships in the database, so the API endpoint must check if the linked ids are legitimate ids that are in use. (Again, should be done at the database layer)</li> </ul> </li> </ul> <p>The end result:</p> <ul> <li>The API endpoint itself implements a number of validations beyond those in the pydantic model (acceptable/expected)</li> <li>These validations are applied one at a time, so a user can only see one error message, fix it, and then go on to see the next one. (undesirable)</li> <li>It is difficult to return standardized error messages. (undesirable) <ul> <li>pydantic's error messages are excellent: structured, describe individual fields, and can describe multiple errors at once.</li> </ul> </li> </ul> <pre><code>{ &quot;detail&quot;: [ { &quot;loc&quot;: [ &quot;body&quot;, &quot;username&quot; ], &quot;msg&quot;: &quot;Username must be at least 5 characters long. 
Username cannot contain '-' character.&quot;, &quot;type&quot;: &quot;value_error&quot; } ] } </code></pre> <p>How can I:</p> <ul> <li>Return error messages with this same structure? The ideal solution would be something like:</li> </ul> <pre><code>try:
    db.create_user(user)
except UserAlreadyExists:
    raise pydantic.&lt;something&gt;(User.username, &quot;This username is already in use.&quot;)
</code></pre> <ul> <li>Aggregate a number of errors at once: e.g. the Document name is not unique and the &quot;related to&quot; links (a list) item #3 is not found.</li> </ul> <p>I know that I can build (and have built) error handling to reflect this same structure, and even tried to make it reasonably dynamic to handle different models/fields, but it is a lot of effort. If there were something provided by pydantic directly, that would be more convenient.</p> <p>I did come across <a href="https://stackoverflow.com/a/76601052/15963311">https://stackoverflow.com/a/76601052/15963311</a>, which mentions <code>ErrorWrapper</code> from pydantic. At a glance, it seems to do what I want, but once I discovered that it is deprecated in pydantic V2, I didn't bother investigating it thoroughly.</p>
<python><fastapi><pydantic>
2024-02-29 21:50:05
1
394
Luke Nelson
78,084,311
14,055,985
How do I port python2 code using RSA.importKey().decrypt() to python3?
<p>We have a legacy python2 application that uses an RSA private key to decrypt a base64-encoded file. How do we port this to python3?</p> <h2>Python2.7 code, this does work:</h2> <pre class="lang-py prettyprint-override"><code> def _unlock_keys(self, passphrase): #get user priv key passphrase and perform unlock of randbits master_key = None master_key_path = 'master.pem' with open(master_key_path) as f: master_key = RSA.importKey(f.read(), passphrase) with open('host_keys/part.A') as f: enc_bits = f.read() self.part_A = master_key.decrypt(b64decode(enc_bits)) # self.part_A has the data we need </code></pre> <h2>Python3 code, this does not work:</h2> <p>This is what we've tried on the python3 code, but so far it decrypts to the empty bytestring <code>b''</code>:</p> <pre class="lang-py prettyprint-override"><code> def _unlock_keys(self, passphrase): #get user priv key passphrase and perform unlock of randbits master_key = None master_key_path = 'master.pem' with open(master_key_path) as f: master_key = RSA.importKey(f.read(), passphrase) with open('host_keys/part.A') as f: enc_bits = f.read() # Note: PKCS1_OAEP doesn't work because this is raw, unpadded: decryptor = PKCS1_v1_5.new(master_key) self.part_A = decryptor.decrypt(b64decode(enc_bits), &quot;error&quot;) print(self.part_A) # self.part_A prints as b'' </code></pre> <h2>OpenSSL command that works</h2> <p>We can use OpenSSL to decrypt it as follows. 
Note the <code>-raw</code> argument because there is no PKCS padding, and this is PKCS v1.5:</p> <pre class="lang-bash prettyprint-override"><code>openssl rsautl -pkcs -raw -decrypt -in &lt;(base64 -d host_keys/part.A) -out /proc/self/fd/1 -inkey master.pem Enter pass phrase for master.pem: &lt;correct output&gt; </code></pre> <p>(Maybe I should have posed the question <em>&quot;how can I implement this openssl command in python3?&quot;</em> but methinks that would be an XY question...)</p> <h2>Key format</h2> <p>In case it helps, this is what the private key format looks like.</p> <pre><code>-----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: DES-EDE3-CBC,&lt;REDACTED&gt; ... -----END RSA PRIVATE KEY----- </code></pre>
<python><python-3.x><python-2.7><cryptography><public-key-encryption>
2024-02-29 21:17:10
1
1,981
KJ7LNW
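What the python2 `master_key.decrypt()` did (and what `openssl rsautl -raw` does) is textbook RSA with no padding: `m = c^d mod n`. PyCryptodome deliberately removed that raw method from key objects, and `PKCS1_v1_5` strips padding that was never there, hence the empty result. The arithmetic itself is one `pow()` call; a sketch (the PyCryptodome usage in the trailing comment is an assumption based on the question's `master_key`):

```python
from base64 import b64decode

def raw_rsa_decrypt(ciphertext: bytes, d: int, n: int, key_bytes: int) -> bytes:
    """Textbook (unpadded) RSA decryption: m = c^d mod n.

    Matches `openssl rsautl -raw -decrypt`. Note the result is left-padded
    with zero bytes to the key size; python2 PyCrypto's decrypt() dropped
    leading zeros, so strip them if the old behavior is needed.
    """
    c = int.from_bytes(ciphertext, "big")
    m = pow(c, d, n)
    return m.to_bytes(key_bytes, "big")

# With a PyCryptodome key loaded as in the question, this would be roughly:
#   key = RSA.importKey(open("master.pem").read(), passphrase)
#   part_A = raw_rsa_decrypt(b64decode(enc_bits), key.d, key.n, key.size_in_bytes())
```

The tiny textbook parameters below (p=61, q=53, e=17, d=2753) exercise the same math.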
78,084,169
23,260,297
Power Automate Desktop '[Errno 2] No such file or directory when running powershell script
<p>I am trying to use Power Automate Desktop to run a Python script. Since Power Automate Desktop does not have an action to run a Python 3 script (only Python 2), I have to use the &quot;Run PowerShell script&quot; action instead. I am passing the path to my Python interpreter and the absolute path to my Python script. However, when I run the flow, I get:</p> <pre><code>C:\Python27\python.exe: can't open file 'C:\Users\myname\Source\Repos\Projects\Solution\HQ': [Errno 2] No such file or directory
</code></pre> <p>It is cutting off the rest of my path because there is whitespace in it, which I wasn't aware was an issue when creating the repository that my scripts are in. Is there a way to get the PowerShell script to recognize the rest of the path without having to rename anything?</p>
<python><powershell><power-automate-desktop>
2024-02-29 20:42:41
1
2,185
iBeMeltin
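The usual fix, sketched (the path below is a placeholder, not the asker's real one): quote the script path, and invoke the quoted interpreter with PowerShell's call operator `&`, so the embedded space stays part of one argument instead of splitting it:

```powershell
# '&' runs a quoted command; quoting keeps "path with spaces" as ONE argument
& "C:\Python27\python.exe" "C:\path with spaces\script.py"
```

It is also worth confirming the interpreter path points at the intended Python 3 — the error shows `C:\Python27\python.exe`, which would run the script under Python 2 even once the quoting is fixed.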
78,084,157
1,134,337
FastAPI and Pydantic - ignore but warn when extra elements are provided to a router input Model
<p>This is the code I wrote:</p> <pre><code>from typing import Any

from fastapi import APIRouter
from pydantic import BaseModel, ConfigDict, ValidationError


class State(BaseModel):
    mode: str | None = None
    alarm: int = 0

class StateLoose(State):
    model_config = ConfigDict(extra='allow')

class StateExact(State):
    model_config = ConfigDict(extra='forbid')


def validated_state(state: dict) -&gt; State:
    try:
        return StateExact(**state)
    except ValidationError as e:
        logger.warning(&quot;Sanitized input state that caused a validation error. Error: %s&quot;, e)
        return State(**state)


@router.put(&quot;/{client_id}&quot;, response_model=State)
def update_state(client_id: str, state: StateLoose) -&gt; Any:
    v_state = validated_state(state.dict()).dict()
    return update_resource(client_id=client_id, state=v_state)


# Example State inputs
a = {&quot;mode&quot;: &quot;MANUAL&quot;, &quot;alarm&quot;: 1}
b = {&quot;mode&quot;: &quot;MANUAL&quot;, &quot;alarm&quot;: 1, &quot;dog&quot;: &quot;bau&quot;}

normal = State(**a)
loose = StateLoose(**a)
exact = StateExact(**a)
</code></pre> <p>From my understanding of/tests with pydantic:</p> <ul> <li>State &quot;adapts&quot; the input dict to fit the model and throws an exception only when something is very wrong (non-convertible type, missing required field). However, extra fields are lost.</li> <li>StateLoose accepts extra fields and shows them by default (or with <strong>pydantic_extra</strong>)</li> <li>StateExact throws a ValidationError whenever something extra is provided</li> </ul> <p>What I wanted to achieve is:</p> <ul> <li>Show the &quot;State scheme as input&quot; in the FastAPI generated docs (this means having a State-like input in the &quot;put function&quot;).</li> <li>Accept States that have extra elements but ignore the extra elements (this means using State to remove extra args)</li> <li>Log a warning when extra elements are detected so that I can trace this, since it probably means something went not as planned</li> </ul> <p>To achieve this I was forced to create 3 different State classes and play with those. Since I plan to have lots of Models, I don't like the idea of having 3 versions of each, and it feels like I am doing something quite wrong if it's so weird to accomplish.</p> <p><strong>Is there a less redundant way to:</strong></p> <ul> <li>accept extra elements in a Model;</li> <li>use the Model as an input to FastAPI router.put;</li> <li>generate a warning;</li> <li>ignore extra elements and continue with the right ones?</li> </ul>
<python><fastapi><pydantic><python-3.11>
2024-02-29 20:41:00
1
766
Bertone
78,084,123
2,137,570
python - URL listener for request submission
<p>What in Python can do this? A listener? I'm not sure what to use.</p> <p>I'm trying to figure out how to do the following:</p> <ul> <li>Website1 generates a code and loads a website2 URL: <a href="https://website2.com/?code=123423439934&amp;state=619" rel="nofollow noreferrer">https://website2.com/?code=123423439934&amp;state=619</a></li> <li>On website2, how do I use Python to listen/watch for this URL submission and capture the request, specifically the code?</li> </ul> <p>Any help would be appreciated.</p>
<python>
2024-02-29 20:33:05
1
5,998
Lacer
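If website2 is (or can be) a small Python process, the standard library alone can sketch this: an HTTP handler parses the query string of the incoming request and stores the `code`. Names and port are illustrative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class CodeHandler(BaseHTTPRequestHandler):
    captured = {}  # class-level store for the last ?code=...&state=... seen

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        CodeHandler.captured["code"] = params.get("code", [None])[0]
        CodeHandler.captured["state"] = params.get("state", [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"code received\n")

    def log_message(self, *args):  # silence per-request console logging
        pass

def wait_for_code(host="0.0.0.0", port=8080):
    # block until exactly one request (the redirect) arrives, then return the code
    with HTTPServer((host, port), CodeHandler) as server:
        server.handle_request()
    return CodeHandler.captured.get("code")
```

For anything beyond a one-shot capture, a micro-framework route (e.g. Flask's `request.args["code"]`) is the more usual shape.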
78,083,956
4,815,601
How to use balanced sampler for torch Dataset/Dataloader
<p>My simplified Dataset looks like:</p> <pre><code>class MyDataset(Dataset):
    def __init__(self) -&gt; None:
        super().__init__()
        self.images: torch.Tensor[n, w, h, c]  # n images in memory - specific use case
        self.labels: torch.Tensor[n, w, h, c]  # n images in memory - specific use case
        self.positive_idx: List  # positive 1 out of 10000 negative
        self.negative_idx: List

    def __len__(self):
        return 10000  # fixed value for training

    def __getitem__(self, idx):
        return self.images[idx], self.labels[idx]

ds = MyDataset()
# Weighted Sampler? Shuffle False because I guess the sampler should handle shuffling.
dl = DataLoader(ds, batch_size=100, shuffle=False, sampler=...)
</code></pre> <p>What is the most &quot;torch&quot; way of balancing the sampling for the DataLoader so that each batch is constructed as 10 positive + 90 random negative in each epoch, duplicating positives when there are not enough of them?</p> <p>For the purpose of this exercise I'm not implementing augmentation to increase the sample size of positives.</p>
<python><pytorch><dataset><sampling><dataloader>
2024-02-29 19:56:22
1
1,042
Mateusz Konopelski
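One "torch-ish" shape for an exact 10+90 split, sketched here without importing torch: `DataLoader` also accepts a `batch_sampler=` argument that yields whole index lists per batch, so a small generator can draw 10 positives (with replacement, which covers the too-few-positives case) plus 90 random negatives. Wrapping this logic in a `torch.utils.data.Sampler` subclass and passing it as `batch_sampler=` is the assumed integration point; `WeightedRandomSampler` is the simpler alternative, but it only balances counts *in expectation*, not exactly per batch.

```python
import random

def balanced_batches(positive_idx, negative_idx, num_batches,
                     batch_size=100, n_pos=10):
    """Yield index lists: n_pos positives + (batch_size - n_pos) negatives."""
    for _ in range(num_batches):
        pos = random.choices(positive_idx, k=n_pos)              # with replacement
        neg = random.sample(negative_idx, k=batch_size - n_pos)  # without
        batch = pos + neg
        random.shuffle(batch)  # don't let positives cluster at the front
        yield batch
```

With a Sampler wrapper, usage would look like `DataLoader(ds, batch_sampler=MyBalancedSampler(...))`; `batch_size` and `shuffle` must then be left unset, since the batch sampler owns both.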
78,083,860
11,812,216
Why does this python decorator refuse to set the setter?
<p>This is my code:</p> <pre><code>class Property:
    def __init__(self, fget, fset):
        self.fget = fget
        self.fset = fset

    def __get__(self, obj, objtype=None):
        return self.fget(obj)

    def __set__(self, obj, value):
        self.fset(obj, value)


class X:
    def __init__(self, val):
        self.__x = val

    @Property
    def x(self):
        return self.__x

    @x.setter
    def x(self, val):
        self.__x = int(val)


myx = X(13)
print(myx.x)
</code></pre> <p>I get this error:</p> <pre><code>Traceback (most recent call last):
  File &quot;/home/izilinux/git/python/property/ex2.py&quot;, line 21, in &lt;module&gt;
    class X:
  File &quot;/home/izilinux/git/python/property/ex2.py&quot;, line 26, in X
    def x(self):
TypeError: Property.__init__() missing 1 required positional argument: 'fset'
</code></pre> <p>The getter works fine! The problem started once I tried incorporating a setter into the code. Any ideas?</p>
<python><python-decorators><python-descriptors>
2024-02-29 19:37:03
0
424
forstack overflowizi
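The traceback happens because `@Property` hands `__init__` exactly one argument — the decorated getter — so the required `fset` parameter is missing, and there is no `setter` attribute for `@x.setter` to call anyway. The built-in `property` solves both by making `fset` optional and exposing a `setter` method that returns a *new* descriptor. A sketch mirroring that design:

```python
class Property:
    def __init__(self, fget, fset=None):
        self.fget = fget
        self.fset = fset

    def __get__(self, obj, objtype=None):
        if obj is None:          # accessed on the class, not an instance
            return self
        return self.fget(obj)

    def __set__(self, obj, value):
        if self.fset is None:
            raise AttributeError("can't set attribute")
        self.fset(obj, value)

    def setter(self, fset):
        # return a NEW descriptor combining the stored getter with this
        # setter -- exactly how the built-in property.setter behaves
        return type(self)(self.fget, fset)


class X:
    def __init__(self, val):
        self.__x = val

    @Property
    def x(self):
        return self.__x

    @x.setter
    def x(self, val):
        self.__x = int(val)
```

`@x.setter` now rebinds the class attribute `x` to the combined getter+setter descriptor.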
78,083,607
1,743,843
TypeError with Datetimes in SQLModel
<p>I'm working with SQLModel ORM framework in a Python application and encountering a TypeError related to datetime objects when trying to insert new records into a PostgreSQL database.</p> <pre><code>from datetime import datetime, timezone import uuid from sqlmodel import Field, SQLModel, Relationship, UniqueConstraint from typing import List class UserBase(SQLModel): id: uuid.UUID = Field(default_factory=uuid.uuid4, primary_key=True) phone_number: str = Field(max_length=255) phone_prefix: str = Field(max_length=10) class User(UserBase, table=True): __table_args__ = ( UniqueConstraint(&quot;phone_number&quot;, &quot;phone_prefix&quot;, name=&quot;phone_numbe_phone_prefix_constraint&quot;), ) registered_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc)) interests: List[&quot;Interest&quot;] = Relationship(back_populates=&quot;user&quot;) </code></pre> <p>When attempting to insert a new <code>User</code> record, the following error is encountered:</p> <pre><code>E TypeError: can't subtract offset-naive and offset-aware datetimes asyncpg/pgproto/./codecs/datetime.pyx:152: TypeError </code></pre> <p>The above exception was the direct cause of the following exception:</p> <pre><code>self = &lt;sqlalchemy.dialects.postgresql.asyncpg.AsyncAdapt_asyncpg_cursor object at 0x108b78ee0&gt; operation = 'INSERT INTO &quot;user&quot; (id, phone_number, phone_prefix, registered_at) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::TIMESTAMP WITHOUT TIME ZONE)' parameters = ('d9999373-a43d-4154-935c-f28f13f17d3e', '8545227945', '+342', datetime.datetime(2024, 2, 29, 18, 25, 54, 21935, tzinfo=datetime.timezone.utc)) </code></pre> <p>How can I resolve this TypeError to ensure compatibility between the timezone-aware datetime objects in my SQLModel?</p>
<python><sqlmodel>
2024-02-29 18:39:26
1
34,339
softshipper
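The column was created as `TIMESTAMP WITHOUT TIME ZONE`, and asyncpg refuses to mix that with a timezone-aware Python datetime. Two common directions, offered as sketches rather than a verified fix for this schema: make the column timezone-aware via SQLAlchemy's `DateTime(timezone=True)` (in SQLModel, through `sa_column`), or keep the column as-is and store naive UTC so the value matches it. The naive-UTC helper:

```python
from datetime import datetime, timezone

def utcnow_naive() -> datetime:
    # take an aware "now" in UTC, then strip tzinfo so the value fits
    # a TIMESTAMP WITHOUT TIME ZONE column
    return datetime.now(timezone.utc).replace(tzinfo=None)

# hypothetical use in the model (untested fragment):
#   registered_at: datetime = Field(default_factory=utcnow_naive)
```

The `sa_column` route keeps the data genuinely timezone-aware in PostgreSQL (`TIMESTAMPTZ`), which is generally the safer long-term choice.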
78,083,513
14,293,020
Python np.nansum() arrays but not return 0 when summing two nans
<p><strong>Context:</strong> I have a 3D numpy array (3,10,10): <code>arr</code>. I have a list gathering indices of the 0th dimension of <code>arr</code>: <code>grouped_indices</code>. I want to calculate the sum of these grouped indices of <code>arr</code>, and store them in a host array: <code>host_arr</code>.</p> <p><strong>Problem:</strong> I am using <code>np.nansum()</code>, however a sum of two NaNs gives me a 0 and I would like it to return a NaN. I don't want to set all the zeros to NaNs once I have calculated the sum.</p> <p><strong>Question:</strong> How can I calculate the nansum of <em>n</em> 2D arrays (same shape), but set as NaN any cell for which all the arrays have a NaN ?</p> <p><strong>Example:</strong></p> <pre><code>import numpy as np import matplotlib.pyplot as plt # Generate example data np.random.seed(0) arr_shape = (10, 10) num_arrays = 3 # Create a 3D numpy array with random values arr = np.random.rand(num_arrays, *arr_shape) # Introduce NaNs arr[0, :5, :5] = np.nan arr[1, 2:7, 2:7] = np.nan arr[2] = np.nan arr[2, :2, :2] = 10 # Generate a list of arrays containing indices of the 0th dimension of arr grouped_indices = [np.array([0,1]), np.array([0,1,2])] # Create a host array that is the sum of grouped_indices slices host_arr = np.array([np.nansum(arr[indices], axis=0) for indices in grouped_indices]) # Plot the nansums plt.figure() plt.imshow(host_arr[0]) # indices [2:5, 2:5] should be NaNs plt.colorbar() plt.figure() plt.imshow(host_arr[1]) # indices [2:5, 2:5] should be NaNs too plt.colorbar() </code></pre>
<python><numpy><sum><nan>
2024-02-29 18:16:02
1
721
Nihilum
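A sketch of the usual pattern for this: compute `nansum` exactly as before, then overwrite only the cells where *every* summed slice was NaN, detected with `np.isnan(...).all(axis=0)` — no global zero-to-NaN rewrite involved:

```python
import numpy as np

def nansum_keep_all_nan(stack, axis=0):
    """np.nansum along `axis`, but cells that are NaN in *every* slice stay NaN."""
    total = np.nansum(stack, axis=axis)
    all_nan = np.isnan(stack).all(axis=axis)
    return np.where(all_nan, np.nan, total)

# applied to the question's setup (hypothetical variable names from it):
#   host_arr = np.array([nansum_keep_all_nan(arr[indices]) for indices in grouped_indices])
```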
78,083,235
1,711,271
Add a constant to an existing column
<p>Dataframe:</p> <pre><code>rng = np.random.default_rng(42) df = pl.DataFrame( { &quot;nrs&quot;: [1, 2, 3, None, 5], &quot;names&quot;: [&quot;foo&quot;, &quot;ham&quot;, &quot;spam&quot;, &quot;egg&quot;, None], &quot;random&quot;: rng.random(5), &quot;A&quot;: [True, True, False, False, False], } ) </code></pre> <p>Currently, to add a constant to a column I do:</p> <pre><code>df = df.with_columns(pl.col('random') + 500.0) </code></pre> <p>Questions:</p> <ol> <li><p>Why does <code>df = df.with_columns(pl.col('random') += 500.0)</code> throw a <code>SyntaxError</code>?</p> </li> <li><p>Various AIs tell me that <code>df['random'] = df['random'] + 500</code> should also work, but it throws the following error instead:</p> <pre><code>TypeError: DataFrame object does not support `Series` assignment by index Use `DataFrame.with_columns`. </code></pre> <p>Why is <code>polars</code> throwing an error? I've been using <code>df['random']</code> to identify the <code>random</code> column in other parts of my code, and it worked.</p> </li> </ol>
<python><python-polars>
2024-02-29 17:17:37
1
5,726
DeltaIV
78,083,169
3,225,420
Send SGD Command in JSON to Zebra Printer
<p>Trying to send commands (not labels) to Zebra printers using Python.</p> <p>On page 574 of the <a href="https://support.zebra.com/cpws/docs/zpl/zpl-zbi2-pm-en.pdf" rel="nofollow noreferrer">documentation</a> it shows: <a href="https://i.sstatic.net/tDWui.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tDWui.png" alt="enter image description here" /></a></p> <p>Here's my code:</p> <pre><code>mysocket = socket.socket(socket.AF_INET,socket.SOCK_STREAM) host= &quot;192.168.100.245&quot; # verified IP address of Zebra printer port = 9100 mysocket.connect((host, port)) name_string= '''&quot;sgd.name&quot;:null''' my_string= f'''{{}}{{{name_string}}}''' x = json.dumps(obj=my_string) mysocket.sendall(bytes(x,encoding=&quot;utf-8&quot;)) data= mysocket.recv(1024) print(data.decode('utf-8')) </code></pre> <p>The printer responds to pings and other non-JSON Zebra commands sent to it (i.e. <code>mysocket.send(b&quot;~hs&quot;)</code>). However, with the code above I wait for a long time and no response returns from the printer.</p> <p>Tried multiple variations of the JSON formatting, what should I try next?</p> <p>Per @bruan comment I tried the following variations but did not work:</p> <p><code>my_string= '''&quot;sgd.name&quot;:null&quot;'''</code></p> <p><code>my_string= '''{}{&quot;sgd.name&quot;:null}'''</code></p>
<python><json><python-3.x><zpl><zpl-ii>
2024-02-29 17:03:39
1
1,689
Python_Learner
78,083,042
520,556
No module named 'matplotlib.typing'
<p>I am receiving an error message: <code>ModuleNotFoundError: No module named 'matplotlib.typing'</code> with this bit of code:</p> <pre><code>from plotnine import * (ggplot(the, aes(x='Trial', y='Weights', colour='Cues')) + geom_line() + scale_colour_manual(values=['#1f78b4','#b2df8a','#fb9a99','#e31a1c','#ae017e'])) </code></pre> <p>I am running Anaconda on macOS:</p> <ul> <li><code>conda 23.7.4</code></li> <li><code>plotnine 0.13.0 pypi_0 pypi</code></li> <li><code>matplotlib 3.7.1 py311hecd8cb5_1 </code></li> </ul> <p>This, however, works on my older machine with all the same specs but <code>plotnine</code> being <code>0.12.4</code>.</p> <p>Can you please help? Thanks!</p>
<python><matplotlib><plotnine>
2024-02-29 16:45:48
2
1,598
striatum
78,082,815
7,479,217
Python requests returns 503 status code as response when sending a file through POST request
<p>So I am using Python requests to send a POST request to another server (both servers are internal to the company). It worked some weeks ago, but I tried today and it has suddenly stopped working.</p> <p>I can use Postman to send the same request and the server responds with 201. I can perform a GET request from my Python code using the requests library.</p> <p>The problem ONLY occurs on POST requests from Python code.</p> <p>Here's the piece:</p> <pre><code>with open(filepath, &quot;rb&quot;) as f: # f is a file-like object. data = {&quot;name&quot;: experiment_id + &quot;.zip&quot;, &quot;content&quot;: f} #The CALCSCORE_URL is an env variable named CALCSCORE_BE set up in each environment. calcscore_url = f&quot;{settings.CALCSCORE_URL}/api/hive/experiment/{experiment_id}/roi&quot; response = requests.post(calcscore_url, files=data) </code></pre> <p>I tried different things, like sending the data in separate 'files' and 'data' arguments, but it's always the same error. The file is 5 MB in size. I don't think it's that big?</p> <p><code>response.content</code> shows <code>response timed out</code>.</p>
<python><python-3.x><django><flask><python-requests>
2024-02-29 16:08:03
0
352
Mazhar Ali
78,082,467
13,088,678
repartition vs coalesce performance on smaller partition
<p>I'm reading input data from s3, writing the result data back to s3.</p> <ul> <li>Input data volume (from delta table): 5TB</li> <li>Result data volume : ~5GB</li> </ul> <p>I used different options during write as below:</p> <ul> <li>Spark default behaviour (multiple files) : 6hr</li> <li>repartition(11) : completed in 6 hr</li> <li>coalesce(11) : cancelled as it was overrunning for 10+hr</li> </ul> <p>I have used the same cluster configuration (5 worker + 1 Driver + 16 core/64 GB mem per machine) for all of the above runs.</p> <p>Just curious as to why coalesce runs so much longer than repartition. Shouldn't coalesce be faster, since no shuffle is involved?</p>
<python><scala><apache-spark>
2024-02-29 15:12:22
0
407
Matthew
78,082,446
12,415,855
Python DeepTranslator and GoogleTranslator not working with text?
<p>I try to translate a text using the following code -</p> <pre><code># -*- coding: utf-8 -*- text = &quot;&quot;&quot; ดำเนินธุรกิจทางด้าน ผลิต ถุงพลาสติก ถุงบรรจุอาหาร พร้อมด้วย ถุงพิมพ์สอดสี ระบบกราเวียร์ อันทันสมัย นอกจากนี้ยัง ผลิตถุงขยะ, แผ่นพลาสติก Green House ที่มีลูกค้าทั้งในและต่างประเทศ ตลอดระยะเวลา 40 กว่าปีที่ผ่านมา บริษัทได้เติบโตอย่างต่อเนื่องด้วยดีตลอดมา จนกระทั่งในปี 2532 บริษัทได้ขยายกิจการ โดยเพิ่มทุนจดทะเบียนเป็น 15 ล้านบาท มีการจัดตั้งเป็นรูปแบบโรงงานขึ้น ในพื้นที่ประมาณ 4 ไร่ บริเวณซอยประชาอุทิศ 23 เขตราษฎร์บูรณะ จังหวัดกรุงเทพมหานคร ด้วยการเจริญเติบโตอย่างต่อเนื่องของบริษัท และเป็นการขยายกิจการ ทางบริษัทจึงย้ายโรงงานใหม่มาที่ ตำบลพันท้ายนรสิงห์ จังหวัดสมุทรสาคร ในปี 2552 มีการเพิ่มเครื่องจักรที่ทันสมัย และเพิ่มบุคลากรในด้านต่างๆ เพื่อให้เพียงพอต่อความต้องการของตลาดในปัจจุบัน – บริษัท สมบัติชายอุตสาหกรรมพลาสติก จำกัด เริ่มก่อตั้ง เมื่อปี 2510 ดำเนินธุรกิจทางด้าน ผลิต ถุงพลาสติก ถุงบรรจุอาหาร พร้อมด้วย ถุงพิมพ์สอดสี ระบบกราเวียร์ อันทันสมัย นอกจากนี้ยัง ผลิตถุงขยะ, แผ่นพลาสติก Green House ที่มีลูกค้าทั้งในและต่างประเทศ ตลอดระยะเวลา 40 กว่าปีที่ผ่านมา บริษัทได้เติบโตอย่างต่อเนื่องด้วยดีตลอดมา จนกระทั่งในปี 2532 บริษัทได้ขยายกิจการ โดยเพิ่มทุนจดทะเบียนเป็น 15 ล้านบาท มีการจัดตั้งเป็นรูปแบบโรงงานขึ้น ในพื้นที่ประมาณ 4 ไร่ บริเวณซอยประชาอุทิศ 23 เขตราษฎร์บูรณะ จังหวัดกรุงเทพมหานคร ด้วยการเจริญเติบโตอย่างต่อเนื่องของบริษัท และเป็นการขยายกิจการ ทางบริษัทจึงย้ายโรงงานใหม่มาที่ ตำบลพันท้ายนรสิงห์ จังหวัดสมุทรสาคร ในปี 2552 มีการเพิ่มเครื่องจักรที่ทันสมัย และเพิ่มบุคลากรในด้านต่างๆ เพื่อให้เพียงพอต่อความต้องการของตลาดในปัจจุบัน ปัจจุบัน – บริษัท สมบัติชพลาสติก green house ที่มีลูกค้าทั้งในและต่างประเทศ ตลอดระยะเวลา 40 กว่าปีที่ผ่านมา บริษัทได้เติบโตอย่างต่อเนื่องด้วยดีตลอดมา จนกระทั่งในปี 2532 บริษัทได้ขยายกิจการ โดยเพิ่มทุนจดทะเบียนเป็น 15 ล้านบาท มีการจัดตั้งเป็นรูปแบบโรงงานขึ้น ในพื้นที่ประมาณ 4 ไร่ บริเวณซอยประชาอุทิศ 23 เขตราษฎร์บูรณะ จังหวัดกรุงเทพมหานคร ด้วยการเจริญเติบโตอย่างต่อเนื่องของบริษัท และเป็นการขยายกิจการ 
ทางบริษัทจึงย้ายโรงงานใหม่มาที่ ตำบลพันท้ายนรสิงห์ จังหวัดสมุทรสาคร ในปี 2552 มีการเพิ่มเครื่องจักรที่ทันสมัย และเพิ่มบุคลากรในด้านต่างๆ เพื่อให้เพียงพอต่อความต้องการของตลาดในปัจจุบัน &quot;&quot;&quot; from deep_translator import GoogleTranslator translated = GoogleTranslator(source='auto', target='en').translate(text) </code></pre> <p>But when i running this code i get this error:</p> <pre><code>(test) C:\DEV\Python-Diverses\DeepTranslator&gt;python exmplDeepTranslator.py ' ดำเนินธุรกิจทางด้าน ผลิต ถุงพลาสติก ถุงบรรจุอาหาร พร้อมด้วย ถุงพิมพ์สอดสี ระบบกราเวียร์ อันทันสมัย นอกจากนี้ยัง ผลิตถุงขยะ, แผ่นพลาสติก Green House ที่มีลูกค้าทั้งในและต่างประเทศ ตลอดระยะเวลา 40 ายกิจการ โดยเพิ่มทุนจดทะเบียนเป็น 15 ล้านบาท มีการจัดตั้งเป็นรูปแบบโรงงานขึ้น ในพื้นที่ประมาณ 4 ไร่ บริเวณซอยประชาอุทิศ 23 เขตราษฎร์บูรณะ จังหวัดกรุงเทพมหานคร ด้วยการเจริญเติบโตอย่างต่อเนื่องของบริ สาคร ในปี 2552 มีการเพิ่มเครื่องจักรที่ทันสมัย และเพิ่มบุคลากรในด้านต่างๆ เพื่อให้เพียงพอต่อความต้องการของตลาดในปัจจุบัน – บริษัท สมบัติชายอุตสาหกรรมพลาสติก จำกัด เริ่มก่อตั้ง เมื่อปี 2510 ดำเนินธุรกิจทาง ัย นอกจากนี้ยัง ผลิตถุงขยะ, แผ่นพลาสติก Green House ที่มีลูกค้าทั้งในและต่างประเทศ ตลอดระยะเวลา 40 กว่าปีที่ผ่านมา บริษัทได้เติบโตอย่างต่อเนื่องด้วยดีตลอดมา จนกระทั่งในปี 2532 บริษัทได้ขยายกิจการ โด ่ บริเวณซอยประชาอุทิศ 23 เขตราษฎร์บูรณะ จังหวัดกรุงเทพมหานคร ด้วยการเจริญเติบโตอย่างต่อเนื่องของบริษัท และเป็นการขยายกิจการ ทางบริษัทจึงย้ายโรงงานใหม่มาที่ ตำบลพันท้ายนรสิงห์ จังหวัดสมุทรสาคร ใน เป็นการขยายกิจการ ทางบริษัทจึงย้ายโรงงานใหม่มาที่ ตำบลพันท้ายนรสิงห์ จังหวัดสมุทรสาคร ในปี 2552 มีการเพิ่มเครื่องจักรที่ทันสมัย และเพิ่มบุคลากรในด้านต่างๆ เพื่อใองด้วยดีตลอดมา จนกระทั่งในปี 2532 บริษัทไห้เพียงพอต่อความต้องการของตลาดในปัจจุบัน\n' 2033 Traceback (most recent call last): File &quot;C:\DEV\Python-Diverses\DeepTranslator\exmplDeepTranslator.py&quot;, line 16, in &lt;module&gt; translated = GoogleTranslator(source='auto', target='en').translate(text) File 
&quot;C:\DEV\.venv\test\lib\site-packages\deep_translator\google.py&quot;, line 72, in translate raise RequestError() deep_translator.exceptions.RequestError: Request exception can happen due to an api connection error. Please check your connection and try again </code></pre> <p>But this has nothing to do with my connection. When I use another text, or provide only one of these two lines in the text variable, the translation works fine. Why is this error happening with the full text with both lines?</p>
<python><google-translator-toolkit>
2024-02-29 15:09:11
1
1,515
Rapid1898
78,082,220
8,436,290
Cron job in Python every x hours - 1 minute
<p>I would like to create a cron job that triggers at 23h59, then 3h59, 7h59, 11h59...</p> <p>My solution</p> <pre><code>scheduler.add_job( func=myfunc, trigger=&quot;cron&quot;, day='*', hour=&quot;*/4&quot;, minute=59, second=0, ) </code></pre> <p>works but triggers every time one hour too late: 00.59, 4.59... Any solution please?</p>
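The `*/4` field expands to hours 0, 4, 8, …, 20, which is why everything fires one hour late; the wanted times are minute 59 of hours 3, 7, 11, 15, 19 and 23 (23:59 belongs to hour 23, not hour 0). A quick check of the expansion:

```python
# "*/4" means hours 0,4,8,...,20; with minute=59 that is 00:59, 04:59...
# -- exactly one hour after the wanted 23:59, 03:59, 07:59, ...
# The wanted hours are 3,7,11,15,19,23, which cron spells as "3-23/4":
wanted = [h for h in range(24) if h % 4 == 3]
print(wanted)  # [3, 7, 11, 15, 19, 23]
```

So, assuming the snippet uses APScheduler's cron trigger, `hour="3-23/4", minute=59` (or the explicit `hour="3,7,11,15,19,23"`) should fire at the intended times.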
<python><cron>
2024-02-29 14:33:32
2
467
Nicolas REY
78,082,169
3,923,576
numpy.intersect1d does not work on dictionary.keys()
<p>I tried to use numpy's <code>intersect1d</code> to compare the keys in two dictionaries. However, this always returns an intersection of zero, for some reason related to dictionary keys being objects. I want to know why this behavior is desireable in any way.</p> <pre><code>d1 = {'a':1, 'b':2} d2 = {'b':2, 'c':3} np.intersect1d(d1.keys(), d2.keys()) &gt; array([], dtype=object) </code></pre> <p>However,</p> <pre><code>np.intersect1d(list(d1.keys()), list(d2.keys())) &gt; array(['b'], dtype='&lt;U1') </code></pre> <p>Is this intended behavior and if so, why?</p>
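The empty result appears to come from the conversion step: `np.asarray()` does not recognize a dict `KeysView` as a sequence, so each view becomes a 0-d object array holding the whole view, and `intersect1d` then compares the two view objects rather than the keys. For a plain key intersection, dictionary views already support set algebra, so no numpy (or `list()`) round trip is needed:

```python
import numpy as np

d1 = {"a": 1, "b": 2}
d2 = {"b": 2, "c": 3}

# A KeysView is wrapped as a zero-dimensional object array:
print(np.asarray(d1.keys()).shape)  # ()

# Dict views are set-like, so the stdlib intersection is one operator:
common = d1.keys() & d2.keys()
print(common)  # {'b'}
```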
<python><numpy><dictionary><set>
2024-02-29 14:24:34
2
569
Vira
78,081,978
1,181,452
Sqlite - How to efficiently insert or update millions of rows?
<p>I have thousands of text files consisting of an item and the supplier code that are in the following format:</p> <pre><code>item_name,supplier_code </code></pre> <p>So for example:</p> <pre><code>Potatoes,10294 Rope,49013 Beans,23958 Soap,12495 </code></pre> <p>I want to add them to a single table in sqlite with just 2 columns, one for the item name and the other for the list of suppliers, where each unique supplier is separated by a comma. So for example it can look something like this:</p> <pre><code>Beans|23958,23467,35994 </code></pre> <p>Each item can only occur once in the table and the second column can only have unique supplier codes - so if a text file had Beans,23958, which was also in another text file that was previously processed, then this line would not get added to the table, as both the item name and supplier code already exist.</p> <p>As I have this data in thousands of text files, which will ultimately lead to hundreds of thousands of rows (if not millions), how can I efficiently add each item while ensuring that the item is only added once and any subsequent additions of the item merely append to the supplier code column (without adding duplicate supplier codes)? I have thought about running 2 queries: the first checking if an item name exists in the table - if it does, then just update the supplier code by appending the new code if it too doesn't exist. If the item name doesn't exist, then simply run a basic insert query with the item name and supplier.</p> <p>This way seems like it will not be efficient with the sheer number of lines I will be processing. I have looked at using transactions, but I am not sure how this will work, as if a transaction isn't committed then we will not know whether an insert or an update is needed.</p> <p>I would appreciate any ideas or logic, or even suggestions to look at alternative storage methods.</p>
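Since SQLite 3.24 the check-then-insert-or-update can be a single upsert statement run in one transaction with `executemany`, so no pre-check query is needed; the `instr()` guard keeps duplicate codes out of the comma-separated list. A sketch with an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (item_name TEXT PRIMARY KEY, suppliers TEXT NOT NULL)"
)

rows = [
    ("Potatoes", "10294"),
    ("Beans", "23958"),
    ("Beans", "23467"),
    ("Beans", "23958"),   # duplicate pair: must be ignored
]

# Insert a new item, or append the supplier code to the existing list --
# but only when that code is absent (instr() looks for ",code," in
# ",list,", so "2395" cannot false-match inside "23958").
sql = """
INSERT INTO items (item_name, suppliers) VALUES (?, ?)
ON CONFLICT(item_name) DO UPDATE
SET suppliers = suppliers || ',' || excluded.suppliers
WHERE instr(',' || suppliers || ',', ',' || excluded.suppliers || ',') = 0
"""
with conn:                     # one transaction for the whole batch
    conn.executemany(sql, rows)

print(dict(conn.execute("SELECT * FROM items")))
# {'Potatoes': '10294', 'Beans': '23958,23467'}
```

That said, at millions of rows a normalized two-column table `(item_name, supplier_code)` with a composite `UNIQUE` constraint and `INSERT OR IGNORE` is usually the more robust alternative; the comma-list view can then be produced on demand with `group_concat`.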
<python><database><sqlite><transactions>
2024-02-29 13:55:54
1
3,296
M9A
78,081,891
1,914,781
Why is Feb 29 reported as an invalid date?
<pre><code>import datetime tstr = &quot;02-29 13:34:57.041&quot; dt = datetime.datetime.strptime(tstr, '%m-%d %H:%M:%S.%f') </code></pre> <p>My script reports such an error:</p> <pre class="lang-none prettyprint-override"><code> File &quot;/usr/lib/python3.11/_strptime.py&quot;, line 579, in _strptime_datetime return cls(*args) ^^^^^^^^^^ ValueError: day is out of range for month </code></pre> <p>Why? Today is Feb 29, 2024.</p>
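`strptime` fills every field missing from the format string with defaults taken from 1900-01-01, and 1900 is not a leap year, so Feb 29 cannot exist in it. Supplying an explicit leap year fixes it (2024 here, as an assumption about which year the timestamp belongs to):

```python
import datetime

tstr = "02-29 13:34:57.041"

# Without %Y the parser assumes year 1900 -- not a leap year -- hence
# "day is out of range for month".  Prepend a real (leap) year:
dt = datetime.datetime.strptime(f"2024-{tstr}", "%Y-%m-%d %H:%M:%S.%f")
print(dt)  # 2024-02-29 13:34:57.041000
```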
<python><datetime>
2024-02-29 13:42:24
1
9,011
lucky1928
78,081,853
1,661,267
Reading a multi-part 7z file using Python is failing due to memory issues
<p>I am using a loop to read multi-archive 7z files with this code.</p> <pre><code>import py7zr import multivolumefile zip_path = f&quot;{ARCHIVE_PATH}/test.7z&quot; with multivolumefile.open(zip_path, mode='rb') as multizip_handler: with py7zr.SevenZipFile(multizip_handler, 'r', password=PASSWORD, filters=filters) as zip_handler: for fname, fcontent in zip_handler.read(targets=None).items(): pass </code></pre> <p>The archive is relatively large (73 parts with a total size of 700 MB). I have noticed that the memory footprint is quite high (even without storing any variable content like <code>fname</code> or <code>fcontent</code> in memory). This loop is working, but if I intentionally fill the memory with commands such as <code>head -c 7G /dev/zero | tail</code>, the loop gives me a CRC Error (while the archive is actually fine, as verified with the <code>7z</code> command). The loop is quite simple and uses only library functions, so I cannot make it lighter than it is.</p> <p>EDIT: to be more precise:</p> <ul> <li>For some archives the loop is totally failing</li> <li>For some others, the loop is working, and I can deduce it is a memory issue by filling the memory and watching the loop fail (the code and the archive being the same). Filling the memory was done in a way that enough space remains (let's say 1 GB).</li> </ul> <p>So my guess is that one of the two libraries <code>multivolumefile</code> or <code>py7zr</code> is internally consuming a lot of memory.</p> <p>Is there a way to reduce the memory footprint so we can ensure that reading a multipart archive always succeeds, independently of the size of the archive or the size of the files inside the archive?</p>
<python><7zip><py7zr>
2024-02-29 13:36:29
1
1,252
mountrix
78,081,625
632,472
How to dynamically get a property?
<p>I have this python code:</p> <pre><code>class MyObject: def __init__(self): self._obj1 = &quot;obj1&quot; self._obj2 = &quot;obj2&quot; @property def obj1(self): return self._obj1 @property def obj2(self): return self._obj2 def get_property(obj, chose): if chose == &quot;obj1&quot;: return obj.obj1 elif chose == &quot;obj2&quot;: return obj.obj2 test1 = MyObject() test2 = MyObject() config = {&quot;obj1&quot;: test1, &quot;obj2&quot;:test2} for key, value in config.items(): #my_value = value.key &lt;&lt;- how make it works? my_value = get_property(value, key) print(my_value) </code></pre> <p>As you can see, I will have to manually create a big list of <code>get_property</code> to translate the value. How can I make it more dynamically and this code just works:</p> <pre><code>for key, value in config.items(): my_value = value.key </code></pre>
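The built-in `getattr()` performs the attribute lookup by its string name at runtime, so `get_property` and its if/elif chain can go away entirely:

```python
class MyObject:
    def __init__(self):
        self._obj1 = "obj1"
        self._obj2 = "obj2"

    @property
    def obj1(self):
        return self._obj1

    @property
    def obj2(self):
        return self._obj2

config = {"obj1": MyObject(), "obj2": MyObject()}

# getattr(obj, name) resolves the attribute (including properties) from
# its name at runtime -- no manual dispatch table needed:
values = [getattr(value, key) for key, value in config.items()]
print(values)  # ['obj1', 'obj2']
```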
<python>
2024-02-29 12:59:08
1
12,804
Rodrigo
78,081,432
10,164,750
Handling a dynamic sql query in Psycopg2 with multiple WHERE condition
<p>I want to execute a sql query dynamically with multiple <code>WHERE</code> conditions. The query is as below:</p> <pre><code>query = &quot;&quot;&quot;SELECT * FROM UBO_EXCEPTION WHERE reg_nb = %s AND return_date = %s::date AND data_type = %s ORDER BY db_timestamp LIMIT 1&quot;&quot;&quot; </code></pre> <p>The method that runs the query is as below:</p> <pre><code>cursor.execute(query, param) result = cursor.fetchall() </code></pre> <p>Sample <code>param</code> value is as below; there are 4 records in it:</p> <pre><code>(('00246102', datetime.date(2024, 3, 25), 'UKNI'), ('00246107', datetime.date(2024, 3, 25), 'SH01'), ('00246109', datetime.date(2024, 3, 25), 'CS01'), ('00246101', datetime.date(2024, 3, 25), '')) </code></pre> <p>This is giving an error: <code>not all arguments converted during string formatting</code>.</p> <p>I tried a minor change in the method, which is as below:</p> <pre><code>cursor.execute(query, (param,)) result = cursor.fetchall() </code></pre> <p>This gave me: <code>tuple index out of range</code></p> <p>I am not sure what to do next. Any suggestion is welcome. Thank you.</p>
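`cursor.execute()` binds exactly one parameter tuple, so handing it a tuple of four 3-tuples supplies 12 values for 3 placeholders, hence "not all arguments converted". Because each triple needs its own `ORDER BY … LIMIT 1`, the straightforward fix is one `execute` per triple. Sketched here with stdlib `sqlite3` (qmark placeholders and a toy table) as a stand-in, since psycopg2 needs a live server; with psycopg2 the query keeps its `%s` placeholders and the `date` objects can be passed as-is:

```python
import sqlite3
import datetime

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ubo_exception"
    " (reg_nb TEXT, return_date TEXT, data_type TEXT, db_timestamp INT)"
)
conn.executemany(
    "INSERT INTO ubo_exception VALUES (?, ?, ?, ?)",
    [("00246102", "2024-03-25", "UKNI", 2),
     ("00246102", "2024-03-25", "UKNI", 1)],
)

query = """SELECT * FROM ubo_exception
           WHERE reg_nb = ? AND return_date = ? AND data_type = ?
           ORDER BY db_timestamp LIMIT 1"""
params = (("00246102", datetime.date(2024, 3, 25), "UKNI"),)

results = []
for reg_nb, return_date, data_type in params:   # one execute per triple
    row = conn.execute(
        query, (reg_nb, return_date.isoformat(), data_type)
    ).fetchone()
    if row is not None:
        results.append(row)

print(results)  # [('00246102', '2024-03-25', 'UKNI', 1)]
```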
<python><postgresql><psycopg2>
2024-02-29 12:30:18
0
331
SDS
78,081,304
11,067,209
Issue with Padding Mask in PyTorch Transformer Encoder
<p>I'm encountering an issue with the padding mask in PyTorch's Transformer Encoder. I'm trying to ensure that the values in the padded sequences do not affect the output of the model. However, even after setting the padded values to zeros in the input sequence, I'm still observing differences in the output.</p> <p>Here's a simplified version of my code:</p> <pre><code>import torch as th from torch import nn # Data batch_size = 2 seq_len = 5 input_size = 16 src = th.randn(batch_size, seq_len, input_size) # Set some values to a high value src[0, 2, :] = 1000.0 src[1, 4, :] = 1000.0 # Generate a padding mask padding_mask = th.zeros(batch_size, seq_len, dtype=th.bool) padding_mask[0, 2] = 1 padding_mask[1, 4] = 1 # Pass the data through the encoder of the model encoder = nn.TransformerEncoder( nn.TransformerEncoderLayer( d_model=input_size, nhead=1, batch_first=True, ), num_layers=1, norm=None, ) out1000 = encoder(src, src_key_padding_mask=padding_mask) # Modify the input data so that the masked vector does not affect src[0, 2, :] = 0.0 src[1, 4, :] = 0.0 # Pass the modified data through the model out0 = encoder(src, src_key_padding_mask=padding_mask) # Check if the results are the same assert th.allclose( out1000[padding_mask == 0], out0[padding_mask == 0], atol=1e-5, ) </code></pre> <p>Despite setting the padded values to zeros in the input sequence, I'm still observing differences in the output of the Transformer Encoder. Could someone please help me understand why this might be happening? How can I ensure that the values in the padded sequences do not affect the output of the model?</p>
<python><pytorch><transformer-model>
2024-02-29 12:08:16
2
665
Angelo
78,081,023
1,095,967
Creating dataframe from dictionary with multiple dtypes set for columns
<p>I would like to create a pandas dataframe from a dictionary but set multiple dtypes on selected columns at creation. EDIT - should have made this clear - I have already created the dictionary and have passed it in as col_dets in this example.</p> <pre><code>df = pd.DataFrame(col_dets, index=[0]).astype(dict.fromkeys(df.columns[[3, 6, 9]], 'float64')) </code></pre> <p>I am aware the above is incorrect, but is there a method to do this? Ideally stating columns at these indices are float64 and at other indices are int64 (for example?)</p>
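The snag in the one-liner is that `df.columns` cannot be referenced inside the very expression that creates `df`. Building the frame first and casting in a second step works; `col_dets` is stood in by a hypothetical 10-column dict here:

```python
import pandas as pd

# Hypothetical stand-in for the asker's col_dets dictionary.
col_dets = {f"c{i}": i for i in range(10)}

# Step 1: create the frame; step 2: cast by positional index --
# float64 at positions 3, 6, 9 and int64 everywhere else.
df = pd.DataFrame(col_dets, index=[0])
float_cols = set(df.columns[[3, 6, 9]])
df = df.astype(
    {c: "float64" if c in float_cols else "int64" for c in df.columns}
)
print(df.dtypes.to_dict())
```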
<python><pandas><dataframe><dictionary><dtype>
2024-02-29 11:26:40
1
661
MaxRussell
78,080,790
12,013,353
Solving implicit equation with scipy.optimize.fsolve
<p>I came to ask this question from <a href="https://stackoverflow.com/questions/64400639/solve-an-implicit-equation-python">the very similar one</a> where I &quot;learned&quot; how to employ fsolve for solving implicit equations.<br /> My equation is defined by this function:</p> <pre><code>def fn4(Fs): return ( np.sum( (lamL * c + lamW * np.tan(np.radians(phi))) / (np.cos(lamA) * (1 + np.tan(lamA) * np.tan(np.radians(phi)) / Fs)) ) / np.sum(lamW * np.sin(lamA)) ) </code></pre> <p>where:</p> <pre><code>c = 10 phi = 30 lamela1 = {'W':887.36, 'alpha':np.radians(46.71), 'L':19.325} lamela2 = {'W':1624.8, 'alpha':np.radians(22.054), 'L':14.297} lamela3 = {'W':737.43, 'alpha':np.radians(1.9096), 'L':13.258} lamele = [lamela1, lamela2, lamela3] lamW = np.array([[i['W'] for i in lamele]]) lamA = np.array([[i['alpha'] for i in lamele]]) lamL = np.array([[i['L'] for i in lamele]]) </code></pre> <p>Now, I solved this by making a simple recursion:</p> <pre><code>def iterf(f, Fsi): if np.isclose(Fsi, fn4(Fsi)) == True: return Fsi else: return iterf(f, fn4(Fsi)) </code></pre> <p>which gives</p> <pre><code>iterf(fn4, 1) 1.8430 </code></pre> <p>But when I try to use <code>scipy.optimize.fsolve</code> (as <code>fsolve(fn4, 1)</code>), then I get:</p> <pre><code>RuntimeWarning: divide by zero encountered in true_divide / (np.cos(lamA) * (1 + np.tan(lamA) * np.tan(np.radians(phi)) / Fs)) </code></pre> <p>I don't know exactly how <code>fsolve</code> works (which I'd like to learn), but my guess is that for the optimization it needs to input Fs as 0 at one point, which produces the zero-division. How can I use a function like this one to get the output?</p> <p>EDIT:<br /> The theoretical background is stability of soil slopes. The Fs, which is the result, is the factor of safety - a measure whose value is &gt;1 for stable slopes, and &lt;=1 for unstable slopes. The chosen method is really simple as it requires the solution of only this one equation. 
The slope is discretized with a number of &quot;lamellae&quot; or &quot;columns&quot;, in this case only 3 for simplicity.</p>
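A root finder looks for a zero of the function it is given, so `fsolve(fn4, 1)` asks for `fn4(Fs) = 0` — which this equation never satisfies — and the solver then likely wanders into `Fs` values that blow up the denominator. Since `fn4(Fs)` returns the *next* estimate of `Fs`, the equation to hand a solver is the fixed-point residual `fn4(Fs) - Fs = 0`. A sketch with a bracketed solver, reusing the data above (flattened to 1-D arrays):

```python
import numpy as np
from scipy.optimize import brentq

c, phi = 10, 30
lamele = [
    {"W": 887.36, "alpha": np.radians(46.71), "L": 19.325},
    {"W": 1624.8, "alpha": np.radians(22.054), "L": 14.297},
    {"W": 737.43, "alpha": np.radians(1.9096), "L": 13.258},
]
lamW = np.array([lam["W"] for lam in lamele])
lamA = np.array([lam["alpha"] for lam in lamele])
lamL = np.array([lam["L"] for lam in lamele])

def fn4(Fs):
    return np.sum(
        (lamL * c + lamW * np.tan(np.radians(phi)))
        / (np.cos(lamA) * (1 + np.tan(lamA) * np.tan(np.radians(phi)) / Fs))
    ) / np.sum(lamW * np.sin(lamA))

# Solve the fixed-point condition fn4(Fs) - Fs = 0.  brentq never leaves
# the bracket [1, 10], so the denominator stays well away from zero.
Fs = brentq(lambda x: fn4(x) - x, 1.0, 10.0)
print(round(Fs, 3))  # ~1.843, matching the hand-rolled recursion
```

`fsolve(lambda x: fn4(x) - x, 1)` should converge to the same value, but the bracketed method makes the divide-by-zero structurally impossible.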
<python><scipy><fsolve>
2024-02-29 10:51:23
1
364
Sjotroll
78,080,761
9,322,863
Confusing finite difference results with numpy and scipy sparse for simple ODE
<p>I'm having trouble getting the solution to a simple ODE using finite difference. The equation I'm trying to solve is more complicated, but even for a sin function I'm getting different results with <code>scipy.sparse</code> and <code>numpy.linalg</code>. Neither is particularly close to the real solution, even for 20000 x values, which I would expect to be plenty. Any help appreciated!</p> <pre><code>import numpy as np import scipy as sp nx = 20000 xs = np.linspace(0,2*np.pi,nx) dx = 2*np.pi/nx # Solving y'' = v = sin(x) v = np.sin(xs) sol_actual = -np.sin(xs) # Build second-order finite difference matrix with periodic BCs diagonals = [1,-2,1] A = sp.sparse.diags_array(diagonals,offsets=[-1,0,1],shape=(nx,nx)) a_l = A.tolil() a_l[-1,0] = 1 a_l[0,-1] = 1 a_l = a_l/dx**2 A_sparse = a_l.tocsr() A_numpy = A_sparse.toarray() sol_sparse = sp.sparse.linalg.spsolve(A_sparse,v) sol_np = np.linalg.solve(A_numpy,v) from matplotlib import pyplot as plt plt.plot(xs,v,label='d2x/dx2') plt.plot(xs,sol_actual,label='analytical') plt.plot(xs,sol_sparse,label='scipy sparse') plt.plot(xs,sol_np,label='numpy') plt.legend() </code></pre> <p><a href="https://i.sstatic.net/Z25ah.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z25ah.png" alt="enter image description here" /></a></p>
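Two separate issues are likely at play in the snippet above: (1) `np.linspace(0, 2*pi, nx)` includes *both* endpoints, so x = 0 and x = 2π are duplicated sample points and `dx = 2*pi/nx` disagrees with the actual spacing — periodic grids want `endpoint=False`; (2) the periodic Laplacian is singular (constants lie in its nullspace), so any direct solve returns an arbitrary, typically garbage, representative. Pinning one known value such as y(0) = 0 removes the nullspace. A dense-matrix sketch on a coarser grid:

```python
import numpy as np

nx = 200
xs = np.linspace(0, 2 * np.pi, nx, endpoint=False)  # no duplicated point
dx = 2 * np.pi / nx

v = np.sin(xs)  # solving y'' = sin(x); exact solution y = -sin(x)

# Periodic second-difference matrix
A = (np.diag(np.full(nx, -2.0))
     + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1))
A[0, -1] = A[-1, 0] = 1.0
A /= dx**2

# Replace one (linearly dependent) row with the constraint y[0] = 0,
# removing the constant nullspace so the system has a unique solution.
A[0, :] = 0.0
A[0, 0] = 1.0
v[0] = 0.0  # = -sin(0), the known value at x = 0

sol = np.linalg.solve(A, v)
err = np.max(np.abs(sol - (-np.sin(xs))))
print(err)  # second-order small -- roughly 1e-4 at nx = 200
```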
<python><numpy><scipy><sparse-matrix><differential-equations>
2024-02-29 10:47:38
1
323
TIF
78,080,591
1,714,692
Python get_type_hints not returning parent's annotations of child dataclass
<p>I want to get all fields of a inherited dataclass:</p> <pre><code>from dataclasses import dataclass, fields from typing import get_type_hints @dataclass class A: a: int = 0 @dataclass class B(A): b: int = 0 a = A(a=4) b = B(a=4, b=5) print(a.__annotations__) print(b.__annotations__) </code></pre> <p>As already in other posts this will print:</p> <pre><code>{'a': &lt;class 'int'&gt;} {'b': &lt;class 'int'&gt;} </code></pre> <p>Some posts like <a href="https://stackoverflow.com/a/76026531/1714692">this</a>, suggest the usage of <code>get_type_hints</code>. But in my case doing:</p> <pre><code>print(get_type_hints(a)) print(get_type_hints(b)) </code></pre> <p>which prints</p> <pre><code>{'a': &lt;class 'int'&gt;} {'b': &lt;class 'int'&gt;} </code></pre> <p>Whereas I would have expected something like <code>{'a': &lt;class 'int'&gt;, 'b': &lt;class 'int'&gt;}</code> in the second print.</p> <p>Is this correct? Am I doing something wrong? The only way I could extract parent's fields is the following:</p> <pre><code>print([el.name for el in fields(b)]) </code></pre> <p>Is this the only way?</p>
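The difference seems to be class vs instance: given a *class*, `get_type_hints()` merges `__annotations__` along the whole MRO, but given an *instance* it appears to fall back to a plain `__annotations__` lookup, which resolves to `B`'s own un-merged dict. Passing the class (or `type(instance)`) gives the merged hints:

```python
from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class A:
    a: int = 0

@dataclass
class B(A):
    b: int = 0

b = B(a=4, b=5)

# Pass the class, not the instance, to get inherited annotations too:
hints = get_type_hints(type(b))
print(hints)  # {'a': <class 'int'>, 'b': <class 'int'>}
```

`fields(b)` remains the more dataclass-aware option, since it also carries defaults and metadata.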
<python><python-typing><python-dataclasses>
2024-02-29 10:24:00
1
9,606
roschach
78,080,499
13,520,223
Fetching API tokens does not work with requests-oauthlib (but without)
<p>I try to get a token from an API using <code>requests-oauthlib</code>. This is the essential part of my code:</p> <pre><code>from flask import Flask, request, redirect from requests_oauthlib import OAuth2Session # setting constants … app = Flask(__name__) app.config[&quot;SECRET_KEY&quot;] = SECRET_KEY @app.route(&quot;/login/&quot;) def login(): api_session = OAuth2Session( API_CLIENT_ID, redirect_uri=API_REDIRECT_URI ) authorization_url, state = api_session.authorization_url( API_AUTHORIZATION_BASE_URL ) session[&quot;oauth_state&quot;] = state return redirect(authorization_url) @app.route(&quot;/callback/&quot;) def callback(): api_session = OAuth2Session(API_CLIENT_ID, state=session[&quot;oauth_state&quot;]) token = api_session.fetch_token( API_TOKEN_URL, client_secret=API_CLIENT_SECRET, authorization_response=request.url, include_client_id=True, code=request.args.get(&quot;code&quot;), ) return &quot;Tokens received.&quot; </code></pre> <p>When I open the login URL in my browser, I get redirected to the <code>API_AUTHORIZATION_BASE_URL</code> which redirects back to my <code>API_REDIRECT_URI</code> (callback). 
Then <code>oauthlib.oauth2.rfc6749.errors.AccessDeniedError: (access_denied)</code> is raised.</p> <p>When I try to do the same <em>without</em> <code>requests-oauthlib</code>, just with hand-made requests, it works perfectly and I get the token:</p> <pre><code>import requests from flask import Flask, request, redirect # setting constants … app = Flask(__name__) app.config[&quot;SECRET_KEY&quot;] = SECRET_KEY @app.route(&quot;/login/&quot;) def login(): return redirect( f&quot;{API_AUTHORIZATION_BASE_URL}?client_id={API_CLIENT_ID}&amp;redirect_uri={API_REDIRECT_URI}&amp;response_type=code&quot; ) @app.route(&quot;/callback/&quot;) def callback(): data = { &quot;client_secret&quot;: API_CLIENT_SECRET, &quot;grant_type&quot;: &quot;authorization_code&quot;, &quot;code&quot;: request.args.get(&quot;code&quot;), &quot;client_id&quot;: API_CLIENT_ID, &quot;redirect_uri&quot;: API_REDIRECT_URI, } response = requests.post(API_TOKEN_URL, data=data) return &quot;Tokens received.&quot; </code></pre> <p>Even though I tried hard (and learned a lot about the OAuth2 protocol and <a href="https://docs.python-requests.org/en/latest/user/authentication/#oauth-2-and-openid-connect-authentication" rel="nofollow noreferrer">requests-oauthlib</a>), I still did not find what's wrong with the requests-oauthlib approach.</p> <p>Does anybody have a hint for me? This would be great.</p>
<python><flask><oauth-2.0><requests-oauthlib>
2024-02-29 10:10:59
1
693
Ralf Zosel
78,080,376
11,322,034
How to check whether a specific opc ua node already exists with asyncua?
<p>How to check with <a href="https://github.com/FreeOpcUa/opcua-asyncio" rel="nofollow noreferrer">asyncua</a> (python) whether a node with a specific name exists already on the server?</p> <p>If it exists it should be used if it doesn't exist then the node should be created.</p> <p><strong>Use case:</strong> The opcua-client has to write data on a opcua-server. The client also creates the objects and nodes for the data. If the client goes offline the server keeps the nodes and objects. When the client comes back online it has to check whether the server has the objects or has been reseted.</p>
<python><opc-ua>
2024-02-29 09:54:29
1
441
Samuel
78,080,336
23,461,455
Why is a blank row inserted by default when writing a CSV from a Python list?
<p>I have an example of writing a CSV from a Python list of lists:</p> <pre><code>import csv # field names fields = ['Name', 'Branch', 'Year', 'CGPA'] # data rows of csv file rows = [ ['Nikhil', 'COE', '2', '9.0'], ['Sanchit', 'COE', '2', '9.1'], ['Aditya', 'IT', '2', '9.3'], ['Sagar', 'SE', '1', '9.5'], ['Prateek', 'MCE', '3', '7.8'], ['Sahil', 'EP', '2', '9.1']] with open('Test.csv', 'w') as f: # using csv.writer method from CSV package write = csv.writer(f) write.writerow(fields) write.writerows(rows) </code></pre> <p>This works fine, except I get the following result:</p> <pre><code>Name,Branch,Year,CGPA Nikhil,COE,2,9.0 Sanchit,COE,2,9.1 Aditya,IT,2,9.3 Sagar,SE,1,9.5 Prateek,MCE,3,7.8 Sahil,EP,2,9.1 </code></pre> <p>As you can see, in my output CSV file I have blank lines between the data. I don't understand where these unwanted line breaks come from.</p> <p>Why is there an extra row by default? What is its purpose, since it's not there in the input data?</p>
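The csv writer terminates rows with `\r\n` itself; opening the file in default text mode on Windows translates the `\n` a second time, producing `\r\r\n`, which spreadsheet viewers show as a blank line after every row. The csv docs require `newline=""` on the file object:

```python
import csv
import os
import tempfile

fields = ["Name", "Branch", "Year", "CGPA"]
rows = [["Nikhil", "COE", "2", "9.0"],
        ["Sanchit", "COE", "2", "9.1"]]

# newline="" hands line-ending control to the csv module (which writes
# \r\n itself); without it, Windows text mode re-translates the \n and
# every row ends in \r\r\n -- the "blank" extra rows.
path = os.path.join(tempfile.mkdtemp(), "Test.csv")
with open(path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(fields)
    writer.writerows(rows)

with open(path, newline="") as f:
    content = f.read()
print(content.count("\r\r\n"))  # 0 -> no doubled line endings
```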
<python><excel><csv><export-to-csv>
2024-02-29 09:48:52
1
1,284
Bending Rodriguez
78,080,284
2,546,099
Test for missing import with pytest
<p>In the <code>__init__.py</code>-file of my project I have the following code, for retrieving the current version of the program from the <code>pyproject.toml</code>-file:</p> <pre><code>from typing import Any from importlib import metadata try: import tomllib with open(&quot;pyproject.toml&quot;, &quot;rb&quot;) as project_file: pyproject: dict[str, Any] = tomllib.load(project_file) __version__ = pyproject[&quot;tool&quot;][&quot;poetry&quot;][&quot;version&quot;] except Exception as _: __version__: str = metadata.version(__package__ or __name__) </code></pre> <p>This code works fine on Python 3.11 and newer. On older versions, however, <code>tomllib</code> does not exist, and therefore the exception should be raised (and the version will be determined by using <code>metadata</code>)</p> <p>When testing on Python 3.11 with <code>pytest</code>, is there any way of designing a test to check if that approach of determining the version works both with and without <code>tomllib</code>, without having to create a second venv without <code>tomllib</code>? I.e. how can I artificially generate the exception, such that I can test both branches without having to switch to different versions?</p>
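One stdlib way, also usable from pytest: mapping a name to `None` in `sys.modules` makes any `import` of it raise `ModuleNotFoundError` (an `ImportError` subclass), so the fallback branch can be exercised even on 3.11+ without a second venv. Sketched against a stand-in function with the same try/except shape as the `__init__.py` above:

```python
import sys
from unittest import mock

def resolve_version():
    # Same try/except shape as in __init__.py (simplified stand-in).
    try:
        import tomllib  # noqa: F401 -- stdlib only on Python >= 3.11
        return "from-pyproject"
    except ImportError:
        return "from-metadata"

# None in sys.modules forces `import tomllib` to fail, driving the
# code down the metadata fallback.  In pytest, the same idea is
# monkeypatch.setitem(sys.modules, "tomllib", None).
with mock.patch.dict(sys.modules, {"tomllib": None}):
    branch = resolve_version()
print(branch)  # from-metadata
```

Note the original code catches bare `Exception`, so this technique also covers it; for the "tomllib present" branch, just call the function without the patch on 3.11+.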
<python><pytest>
2024-02-29 09:41:59
3
4,156
arc_lupus
78,080,274
2,352,158
Using Python interpreter explicitly does not ignore shebang
<p>I have read <a href="https://stackoverflow.com/questions/64120768/does-shebang-overwrite-the-python-interpreter-path">Does shebang overwrite the python interpreter path</a>, and it seems the given answer does not apply to me.</p> <p>Some context: I am on Windows using <code>C:\Program Files\Git\bin\bash.exe</code> and have two Python installations, as you can see:</p> <pre><code>$ py --version Python 3.10.11 $ which py /c/Users/Username/AppData/Local/Programs/Python/Launcher/py $ python3 --version Python 3.9.10 $ which python3 /c/ProgramData/chocolatey/bin/python3 </code></pre> <p>I have created a dummy Python file <code>shebang.py</code> to print out the Python version used:</p> <pre><code>#!/usr/bin/env python3 import sys print(sys. version) </code></pre> <pre><code>$ py shebang.py 3.9.10 (heads/mingw-v3.9.10-dirty:12d1cb5b7c, Dec 9 2022, 03:24:49) [GCC UCRT 12.2.0 64 bit (AMD64)] </code></pre> <p>Looks like <code>python3</code> is used and not <code>py</code>. Same behaviour if I call <code>./shebang.py</code>.</p> <p>How can I make sure <code>py</code> is used instead?</p> <p>edit (@wRAR comment):</p> <p>When I remove the shebang line from <code>shebang.py</code> I can indeed use <code>py</code></p> <pre><code>$ py shebang.py 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] </code></pre>
<python><python-3.x><shebang>
2024-02-29 09:40:02
2
4,695
coincoin
78,080,209
17,797,258
ModuleNotFoundError in Python AzureFunction App for custom package
<p>I am developing a <code>Python</code> <code>Azure Function</code> app and I use a <code>Custom Package</code> for this project. Because of the custom package, I build and install all dependencies in my <code>CI/CD</code> pipeline before making the <code>zip</code> file and deploying it on <code>Azure</code>, based on this <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-how-to-azure-devops?tabs=python%2Cyaml&amp;pivots=v2" rel="nofollow noreferrer">documentation</a>. My problem is that although the app deploys successfully, I get an error about the <code>Custom Package</code> not being found when I run the app. The full error message is</p> <blockquote> <p>Result: Failure Exception: ModuleNotFoundError: No module named '{custom package name}'. Cannot find module. Please check the requirements.txt file for the missing module.</p> </blockquote> <p>I checked this <a href="https://aka.ms/functions-modulenotfound" rel="nofollow noreferrer">documentation</a> and double-checked my artifact. The only thing I found, based on the documentation, was that the package-name format must be '<code>&lt;package-name&gt;-&lt;version&gt;-dist-info</code>', but in my artifact it is '<code>&lt;package-name&gt;-&lt;version&gt;.dist-info</code>'.</p> <p>Also, I tried these configurations in <code>Azure Function</code> but they do not make a difference.</p> <p>Configurations:</p> <ul> <li><p>BUILD_FLAGS=UseExpressBuild</p> </li> <li><p>PYTHON_ISOLATE_WORKER_DEPENDENCIES=0</p> </li> <li><p>SCM_DO_BUILD_DURING_DEPLOYMENT=true</p> </li> <li><p>ENABLE_ORYX_BUILD=true</p> </li> </ul>
<python><azure><azure-devops><azure-functions>
2024-02-29 09:29:26
2
440
Milad Karimifard
78,080,159
7,541,530
How to type-annotate a polars DataFrame created from an arrow Table?
<p>In python, I'm trying to create a polars DataFrame from a pyarrow table, like so:</p> <pre class="lang-py prettyprint-override"><code>import pyarrow as pa import polars as pl table = pa.table( { &quot;a&quot;: [1, 2, 3], &quot;b&quot;: [4, 5, 6], } ) df = pl.from_arrow(table) </code></pre> <p>The return-type of <code>pl.from_arrow()</code> is <code>(DataFrame | Series)</code>, even though my <code>df</code> really always is a DataFrame. This results in type-checker-warnings further down the line, where I want to perform actions on the data. For instance:</p> <pre class="lang-py prettyprint-override"><code>df.select(pl.col(&quot;a&quot;)) </code></pre> <pre><code>Cannot access member &quot;select&quot; for type &quot;Series&quot; Member &quot;select&quot; is unknown Pylance(reportAttributeAccessIssue) </code></pre> <p>I could try to find a work-around, but haven't found a &quot;proper&quot; way to do this yet. For instance, I could do:</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(table) df.select(pl.col(&quot;a&quot;)) # or maybe even if (isinstance(df, pl.Series)): raise TypeError </code></pre> <p>What would be the proper way to deal with this situation? How to assure the type-checker that <code>pl.from_arrow()</code> really returns a DataFrame?</p>
<python><python-polars>
2024-02-29 09:22:45
1
321
Taeke
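For the narrowing pattern itself, shown here with a stdlib stand-in so it runs without polars/pyarrow (`fake_from_arrow` is hypothetical, standing in for `pl.from_arrow`): an `isinstance` check narrows the union at a small runtime cost, while `typing.cast` asserts the type with no runtime effect. With real polars, the same code reads `df = pl.from_arrow(table)` followed by `isinstance(df, pl.DataFrame)`, so the `raise TypeError` variant in the question is essentially this pattern already.

```python
from typing import Union, cast

class Series: ...                      # stand-ins for pl.Series / pl.DataFrame
class DataFrame:
    def select(self) -> str:
        return "selected"

def fake_from_arrow(data: object) -> Union[DataFrame, Series]:
    # hypothetical stand-in for pl.from_arrow, annotated DataFrame | Series
    return DataFrame()

# Option 1: runtime narrowing; after the check, type checkers know it's a DataFrame
df = fake_from_arrow(None)
if not isinstance(df, DataFrame):
    raise TypeError("expected a DataFrame")
print(df.select())                     # no reportAttributeAccessIssue here

# Option 2: typing.cast; zero runtime cost, purely a promise to the checker
df2 = cast(DataFrame, fake_from_arrow(None))
print(df2.select())
```

`cast` is the lighter-weight choice when the caller is certain (a pyarrow `Table` always yields a `DataFrame`); the `isinstance` check is safer if the input type could vary.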
78,080,061
12,314,521
What is an efficient way for merge list of same key dictionaries which value is tensor [Pytorch]
<p>Is there a more efficient way to do this than the following?</p> <pre><code>mention_inputs = defaultdict(list) for idx in mention_indices: mention_input, _ = ... for key,value in mention_input.items(): # value is a tensor with shape (dim,) mention_inputs[key].append(value) mention_inputs = {key:torch.stack(value) for key, value in mention_inputs.items()} </code></pre>
<python><pytorch><tensor>
2024-02-29 09:02:36
1
351
jupyter
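One pass of bookkeeping can be saved with a single dict comprehension over the keys of the first dict (all the dicts share the same keys here). Sketched torch-free so it runs anywhere: plain lists stand in for tensors and `tuple` stands in for `torch.stack`; with tensors the merge line would read `{k: torch.stack([d[k] for d in items]) for k in items[0]}`.

```python
# hypothetical stand-in data: each dict maps the same keys to a "tensor"
items = [
    {"a": [1.0], "b": [2.0]},
    {"a": [3.0], "b": [4.0]},
]

# one comprehension instead of defaultdict-append plus a second stacking pass
merged = {key: tuple(d[key] for d in items) for key in items[0]}
print(merged)
```

Asymptotically this is the same O(n·k) work as the defaultdict version, so any real speed-up comes from handing `torch.stack` the list directly rather than from Python-level bookkeeping.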
78,079,908
1,743,843
AttributeError: 'async_generator' object has no attribute 'post'
<p>I'm writing pytest tests for my FastAPI async project. Tests causes error :</p> <pre><code>test_main.py::test_create_user FAILED [100%] test_main.py:32 (test_create_user) client = &lt;async_generator object client at 0x106585fc0&gt; generate_random_phone_number = &lt;function generate_random_phone_number.&lt;locals&gt;._generate at 0x104ae4220&gt; generate_random_phone_prefix = &lt;function generate_random_phone_prefix.&lt;locals&gt;._generate at 0x10654ac00&gt; @pytest.mark.asyncio async def test_create_user(client: AsyncClient, generate_random_phone_number, generate_random_phone_prefix): user_data = { &quot;phone_number&quot;: generate_random_phone_number(), &quot;phone_prefix&quot;: generate_random_phone_prefix() } &gt; response = await client.post(&quot;/api/user/&quot;, json=user_data) E AttributeError: 'async_generator' object has no attribute 'post' test_main.py:40: AttributeError </code></pre> <p>Here is code</p> <pre><code>import random import string import pytest from httpx import AsyncClient from main import app # Make sure this import points to your FastAPI app instance @pytest.fixture async def client(): async with AsyncClient(app=app, base_url=&quot;http://test&quot;) as client: yield client @pytest.fixture def generate_random_phone_number(): def _generate(length=10): return ''.join(random.choices(string.digits, k=length)) return _generate @pytest.fixture def generate_random_phone_prefix(): def _generate(): prefix_length = random.randint(1, 3) return '+' + ''.join(random.choices(string.digits, k=prefix_length)) return _generate @pytest.mark.asyncio async def test_create_user(client: AsyncClient, generate_random_phone_number, generate_random_phone_prefix): user_data = { &quot;phone_number&quot;: generate_random_phone_number(), &quot;phone_prefix&quot;: generate_random_phone_prefix() } response = await client.post(&quot;/api/user/&quot;, json=user_data) assert response.status_code == 201 data = response.json() assert data[&quot;phone_number&quot;] == 
user_data[&quot;phone_number&quot;] assert data[&quot;phone_prefix&quot;] == user_data[&quot;phone_prefix&quot;] @pytest.mark.asyncio async def test_duplicate_user(generate_random_phone_number, generate_random_phone_prefix): phone_number = generate_random_phone_number() phone_prefix = generate_random_phone_prefix() user_data = { &quot;phone_number&quot;: phone_number, &quot;phone_prefix&quot;: phone_prefix } async with AsyncClient(app=app, base_url=&quot;http://test&quot;) as ac: await ac.post(&quot;/api/user/&quot;, json=user_data) response = await ac.post(&quot;/api/user/&quot;, json=user_data) assert response.status_code == 400 data = response.json() assert data[&quot;detail&quot;] == &quot;A user with the given phone number and prefix already exists.&quot; </code></pre> <p>changing the code to</p> <pre><code>import random import string import pytest from httpx import AsyncClient from main import app # Make sure this import points to your FastAPI app instance @pytest.fixture def generate_random_phone_number(): def _generate(length=10): return ''.join(random.choices(string.digits, k=length)) return _generate @pytest.fixture def generate_random_phone_prefix(): def _generate(): prefix_length = random.randint(1, 3) return '+' + ''.join(random.choices(string.digits, k=prefix_length)) return _generate @pytest.mark.asyncio async def test_create_user(generate_random_phone_number, generate_random_phone_prefix): user_data = { &quot;phone_number&quot;: generate_random_phone_number(), &quot;phone_prefix&quot;: generate_random_phone_prefix() } async with AsyncClient(app=app, base_url=&quot;http://test&quot;) as ac: response = await ac.post(&quot;/api/user/&quot;, json=user_data) assert response.status_code == 201 data = response.json() assert data[&quot;phone_number&quot;] == user_data[&quot;phone_number&quot;] assert data[&quot;phone_prefix&quot;] == user_data[&quot;phone_prefix&quot;] @pytest.mark.asyncio async def test_duplicate_user(generate_random_phone_number, 
generate_random_phone_prefix): phone_number = generate_random_phone_number() phone_prefix = generate_random_phone_prefix() user_data = { &quot;phone_number&quot;: phone_number, &quot;phone_prefix&quot;: phone_prefix } async with AsyncClient(app=app, base_url=&quot;http://test&quot;) as ac: await ac.post(&quot;/api/user/&quot;, json=user_data) response = await ac.post(&quot;/api/user/&quot;, json=user_data) assert response.status_code == 400 data = response.json() assert data[&quot;detail&quot;] == &quot;A user with the given phone number and prefix already exists.&quot; </code></pre> <p>works perfectly.</p> <p>What is wrong with the first part of code?</p>
<python><fastapi>
2024-02-29 08:35:29
1
34,339
softshipper
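The root cause can be reproduced with the stdlib alone: a plain `@pytest.fixture` does not drive an async generator, so the test receives the raw `async_generator` object, which has no `.post`, instead of the yielded client. The usual fix is to decorate the fixture with `@pytest_asyncio.fixture` (or enable `asyncio_mode = auto`) so the plugin consumes the generator for you. A minimal sketch of what is going on:

```python
import asyncio

async def client():
    # this is what the fixture body is: an async generator function
    yield "the actual client"

gen = client()                         # calling it returns an async_generator...
print(type(gen).__name__)              # async_generator
assert not hasattr(gen, "post")        # ...which is what the failing test was handed

# pytest-asyncio's fixture machinery effectively does this step for you:
value = asyncio.run(gen.__anext__())
print(value)                           # the actual client
```

That also explains why the second version works: it never relies on a fixture to unwrap the generator, it builds the `AsyncClient` inline with `async with`.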
78,079,857
1,645,154
Apache Arrow with Apache Spark - UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer not available
<p>I am trying to integrate Apache Arrow with Apache Spark in a PySpark application, but I am encountering an issue related to <code>sun.misc.Unsafe</code> or <code>java.nio.DirectByteBuffer</code> during the execution.</p> <pre><code>import os import pandas as pd from pyspark.sql import SparkSession extra_java_options = os.getenv(&quot;SPARK_EXECUTOR_EXTRA_JAVA_OPTIONS&quot;, &quot;&quot;) spark = SparkSession.builder \ .appName(&quot;ArrowPySparkExample&quot;) \ .getOrCreate() spark.conf.set(&quot;Dio.netty.tryReflectionSetAccessible&quot;, &quot;true&quot;) spark.conf.set(&quot;spark.sql.execution.arrow.pyspark.enabled&quot;, &quot;true&quot;) pdf = pd.DataFrame([&quot;midhun&quot;]) df = spark.createDataFrame(pdf) result_pdf = df.select(&quot;*&quot;).toPandas() </code></pre> <p>Error Message:</p> <pre><code>in stage 0.0 (TID 11) (192.168.140.22 executor driver): java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.&lt;init&gt;(long, int) not available at org.apache.arrow.memory.util.MemoryUtil.directBuffer(MemoryUtil.java:174) at org.apache.arrow.memory.ArrowBuf.getDirectBuffer(ArrowBuf.java:229) </code></pre> <p>Environment:</p> <p>Apache Spark version: 3.4 Apache Arrow version: 1.5 Java version: jdk 21</p>
<python><apache-spark><pyspark><pyarrow>
2024-02-29 08:25:19
2
1,179
Midhun Pottammal
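Two hedged observations rather than a confirmed fix: Spark 3.4 officially supports Java 8/11/17, so JDK 21 itself is a likely culprit; and the netty flag in the question is set after the JVM has already started (and is missing its `-D` prefix), so `spark.conf.set` cannot help. If staying on a newer JDK, the options have to reach the JVM at launch time, for example:

```
# sketch (unverified on JDK 21): pass real JVM options at submit/session-build time
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dio.netty.tryReflectionSetAccessible=true --add-opens=java.base/java.nio=ALL-UNNAMED" \
  --conf "spark.executor.extraJavaOptions=-Dio.netty.tryReflectionSetAccessible=true --add-opens=java.base/java.nio=ALL-UNNAMED" \
  app.py
```

The same two `--conf` values can be passed via `SparkSession.builder.config(...)` before `getOrCreate()`. Downgrading to JDK 17 is the lower-risk path if these flags do not resolve the `sun.misc.Unsafe` error.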
78,079,717
10,721,627
Polars - AttributeError: 'ArrowExtensionArray' object has no attribute 'to_pydatetime'
<p>I am trying to push dates to the SQLite database using Polars' <code>write_database</code> method:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl from datetime import date import sqlite3 dates_dt = [date(2000, 1, 1), date(2000, 1, 2), date(2000, 1, 3)] df = pl.DataFrame({&quot;date&quot;: dates_dt}) conn = sqlite3.connect(&quot;test.db&quot;) df.write_database(&quot;test_table&quot;, &quot;sqlite:///test.db&quot;, if_table_exists=&quot;replace&quot;) conn.close() </code></pre> <p>However, I got the following error:</p> <pre><code>AttributeError: 'ArrowExtensionArray' object has no attribute 'to_pydatetime' </code></pre> <p>The same problem occurs if date strings are converted to a Date column:</p> <pre class="lang-py prettyprint-override"><code>date_str = [&quot;2000-01-01&quot;, &quot;2000-01-02&quot;, &quot;2000-01-03&quot;] df = pl.DataFrame({&quot;date&quot;: date_str}) df = df.with_columns(pl.col(&quot;date&quot;).str.to_date(&quot;%Y-%m-%d&quot;)) </code></pre>
<python><pandas><python-polars>
2024-02-29 07:59:51
1
2,482
Péter Szilvási
78,079,454
14,466,860
How to register .otf font in reportlab python
<p>I am working with <code>reportlab</code> in Python and want to use the <code>Avenir</code> font, which I have in <code>OTF</code> format. reportlab provides the <code>TTFont</code> class, but I want to register an <code>.otf</code> font. How can I do that?</p> <pre class="lang-py prettyprint-override"><code>pdfmetrics.registerFont(TTFont(&quot;Avenir&quot;, avenir_font_path)) # this only works with .ttf fonts </code></pre> <p>All suggestions would be highly appreciated.</p>
<python><pdf><fonts><reportlab>
2024-02-29 07:10:58
0
7,847
Munsif Ali
78,079,168
2,804,197
How can I make my D-Bus service exit automatically when idle?
<p>I have a D-Bus service written in Python with GDBus (using bindings provided by <a href="https://gnome.pages.gitlab.gnome.org/pygobject/" rel="nofollow noreferrer">PyGObject</a>).</p> <p>Since it's written in Python and has a few dependencies, that service has a relatively big memory footprint (about 30MiB). It's only needed when the client app is actually being used by the user, so most of the time it's just sitting around idle. It's installed as a D-Bus-activated systemd service, so it's started on-demand.</p> <p>I'm looking for a way to have it automatically exit if it hasn't been used for some time. I was hoping that systemd could do this for me automatically but it doesn't seem to support that. It <a href="https://github.com/systemd/systemd/blob/b5a3418332bbac3e499b98645f4cc3d586516fd6/src/shared/bus-util.c#L97-L148" rel="nofollow noreferrer">does implement this feature</a> for its own services though. I also found <a href="https://github.com/cgwalters/test-exit-on-idle" rel="nofollow noreferrer">this PoC</a> for a race-free exit-on-idle GDBus service but it's written in C.</p>
<python><service><systemd><dbus><gdbus>
2024-02-29 06:04:17
1
402
user2804197
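A toolkit-agnostic sketch of the exit-on-idle pattern (stdlib `threading` here; in the real service the same bookkeeping would use `GLib.timeout_add_seconds` and `loop.quit()` so everything stays on the main loop). Every method handler pokes the timer; if no poke arrives within the timeout, the exit callback fires. The race the linked C PoC additionally handles, a request landing while the bus name is being released, is not covered by this sketch:

```python
import threading

class IdleExiter:
    """Arms a countdown that is reset by activity; fires on_idle when it lapses."""

    def __init__(self, timeout_s, on_idle):
        self._timeout = timeout_s
        self._on_idle = on_idle
        self._timer = None
        self.poke()                                  # arm at startup

    def poke(self):
        # call this at the top of every D-Bus method handler
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self._timeout, self._on_idle)
        self._timer.daemon = True
        self._timer.start()

# demo: one simulated method call, then let the "service" go idle
exited = threading.Event()
svc = IdleExiter(0.2, exited.set)
svc.poke()                                           # a method call arrived
exited.wait(5)                                       # fires ~0.2 s after last poke
print("idle, exiting:", exited.is_set())
```

Since the service is D-Bus activated, exiting is cheap: the next client call simply restarts it, which is what makes this pattern worthwhile for a 30 MiB process.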
78,078,945
12,314,521
Is it efficient to pass model into a custom dataset to run model inference during training for sampling strategy?
<p>I'm trying to design a training flow that samples data points during training.</p> <p>My data look like this:</p> <pre><code>defaultdict(list, {'C1629836-28004480': [0, 5, 6, 12, 17, 19, 28], 'C0021846-28004480': [1, 7, 15], 'C0162832-28004480': [2, 9], 'C0025929-28004480': [3, 10, 30], 'C1515655-28004480': [4], ... } </code></pre> <p>where each key is a label and each value is a list of data indices.</p> <p>I have a custom dataset class whose <code>__getitem__(self, idx)</code> method needs to calculate the distance between an anchor (chosen randomly) and the other data points. It looks like this:</p> <pre><code>def __getitem__(self, idx): item_label = self.labels[idx] # C1629836-28004480 item_data = self.data[item_label] # [0, 5, 6, 12, 17, 19, 28] anchor_index = random.sample(item_data,1) mentions_indices = [idx for idx in item_data if idx != anchor_index] with torch.no_grad(): self.model.eval() anchor_input = ... anchor_embedding = self.model.mention_encoder(anchor_input) for idx in mention_indices: ... </code></pre> <p>An alternative that avoids passing the model into the custom dataset is to run inference inside the <code>training_step</code> function during training.</p> <p>But I have read that using a dataset and dataloader to prepare the data fed into the model can save training time, since they have a parallel loading mechanism.</p> <p>However, I need these distances computed from the latest state of the model's weights during training. Does that parallel mechanism guarantee this? (In Python, variables are references rather than copies, which is why I am unsure.)</p> <p>So which approach is more correct and professional?</p> <p>Edit:</p> <p>I tried both approaches, and the second approach is much faster than the first.</p>
<python><pytorch><dataset><sampling>
2024-02-29 04:57:29
1
351
jupyter
78,078,821
7,644,846
pyautogui is not finding the image on my screen while using the `locateCenterOnScreen` command
<p>So, I'm playing with <code>pyautogui</code> automation python library and trying to find the axis (x, y) of a picture on my screen.</p> <p>Following the <a href="https://pyautogui.readthedocs.io/en/latest/quickstart.html" rel="nofollow noreferrer">official documentation</a>, we have the following command, which supposedly discovers the axis of the picture on the screen.</p> <pre class="lang-rb prettyprint-override"><code>&gt;&gt;&gt; pyautogui.locateCenterOnScreen('looksLikeThis.png') # returns center x and y (898, 423) </code></pre> <p>However, it is not finding the image on my screen</p> <ol> <li>Here is a simple code that this error is already happening:</li> </ol> <pre class="lang-rb prettyprint-override"><code>import pyautogui pyautogui.FAILSAFE = False pyautogui.moveTo(0, 0, duration=0) # pyautogui.click(x=650, y=850, clicks=1, interval=0, button='left', duration=1) # Chrome x, y = pyautogui.locateCenterOnScreen(&quot;resources/chrome.png&quot;) pyautogui.click(x=x, y=y, clicks=1, interval=0, button='left', duration=1) </code></pre> <ol start="2"> <li>Here is the image I'm searching:</li> </ol> <p>‣ <a href="https://i.imgur.com/pSbaUlL.png" rel="nofollow noreferrer">https://i.imgur.com/pSbaUlL.png</a></p> <ol start="3"> <li>Here is my screen (MacOS):</li> </ol> <p>‣ <a href="https://i.imgur.com/6W6Aic0.png" rel="nofollow noreferrer">https://i.imgur.com/6W6Aic0.png</a></p> <ol start="4"> <li>Here is the error:</li> </ol> <pre><code>&gt;&gt;&gt; pyautogui.locateCenterOnScreen(&quot;resources/chrome.png&quot;) Traceback (most recent call last): File &quot;/Users/victorcosta/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/pyautogui/__init__.py&quot;, line 172, in wrapper return wrappedFunction(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/victorcosta/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/pyautogui/__init__.py&quot;, line 204, in locateCenterOnScreen return pyscreeze.locateCenterOnScreen(*args, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/victorcosta/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/pyscreeze/__init__.py&quot;, line 447, in locateCenterOnScreen coords = locateOnScreen(image, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/victorcosta/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/pyscreeze/__init__.py&quot;, line 405, in locateOnScreen retVal = locate(image, screenshotIm, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/victorcosta/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/pyscreeze/__init__.py&quot;, line 383, in locate points = tuple(locateAll(needleImage, haystackImage, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/victorcosta/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/pyscreeze/__init__.py&quot;, line 371, in _locateAll_pillow raise ImageNotFoundException('Could not locate the image.') pyscreeze.ImageNotFoundException: Could not locate the image. During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/Users/victorcosta/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/pyautogui/__init__.py&quot;, line 174, in wrapper raise ImageNotFoundException # Raise PyAutoGUI's ImageNotFoundException. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ pyautogui.ImageNotFoundException </code></pre>
<python><macos><automation><ui-automation><pyautogui>
2024-02-29 04:13:04
1
2,226
Victor Cordeiro Costa
78,078,811
6,997,665
Gradients for selection from array operation in PyTorch
<p>This is a followup question from <a href="https://stackoverflow.com/q/78072628/6997665">here</a>. I have obtained a tensor, say, <code>d</code> with gradients. Now I have another tensor array, say <code>e</code> from which I need to pick the first <code>d</code> elements. MWE below.</p> <pre><code>import torch a = torch.tensor([4.], requires_grad=True) b = torch.tensor([5.]) c = torch.tensor([6.]) d = a.min(b).min(c) e = torch.arange(10) f = e[:d] # Throws error &quot;TypeError: only integer tensors of a single element can be converted to an index&quot; </code></pre> <p>Based on the answer <a href="https://discuss.pytorch.org/t/typeerror-only-integer-tensors-of-a-single-element-can-be-converted-to-an-index/45641/2" rel="nofollow noreferrer">here</a>, the following line works.</p> <pre><code>f = e[:d.to(dtype=torch.long)] </code></pre> <p>However, the gradients are lost. Is there someway I can pass the gradients or this operation is not differentiable at all? Many thanks.</p>
<python><python-3.x><machine-learning><deep-learning><pytorch>
2024-02-29 04:07:18
2
3,502
learner
78,078,717
22,371,917
html and js not working the same with flask
<p>I have a folder containing <code>server.py</code>, a <code>templates</code> folder with <code>index.html</code>, and a <code>js</code> folder with <code>script.js</code>. server.py:</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask, render_template import random app = Flask(__name__) @app.route('/') def home(): return render_template('index.html') if __name__ == '__main__': app.run() </code></pre> <p>index.html:</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;IE=edge&quot;&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot;&gt; &lt;title&gt;Sparkle Trail and Dust Cursor&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Sparkle Trail and Dust Cursor&lt;/h1&gt; &lt;script src=&quot;{{ url_for('static', filename='js/script.js') }}&quot;&gt;&lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p><code>script.js</code> can be found here: <a href="https://codepen.io/sarahwfox/pen/pNrYGb" rel="nofollow noreferrer">https://codepen.io/sarahwfox/pen/pNrYGb</a> (it is too long to paste). I'm trying to reproduce that effect, but either nothing shows at all, or the trail follows the cursor without the + sparkles.</p>
<javascript><python><html><flask>
2024-02-29 03:29:37
1
347
Caiden
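The template resolves `url_for('static', filename='js/script.js')` to `/static/js/script.js`, but by default Flask only serves that URL from a folder literally named `static` next to `server.py`; a top-level `js` folder is never served. A layout that should match the template (assuming the default `static_folder`):

```
project/
├── server.py
├── templates/
│   └── index.html
└── static/
    └── js/
        └── script.js   # moved here from the top-level js/ folder
```

If the effect still misbehaves after the move, the browser network tab showing a 200 (rather than 404) for `/static/js/script.js` is the quickest way to confirm the script is actually being loaded.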
78,078,676
2,201,789
How to properly setup ALLOWED_HOST to allow other machines in the network access the server
<p>I have a Django server on machine TestPC-A, at 192.25.56.120.</p> <p>I want it accessible from other computers in the same 192.25.56.xxx network.</p> <p>What I have configured:</p> <p><strong>1. settings.py</strong></p> <pre class="lang-py prettyprint-override"><code> ALLOWED_HOSTS = [&quot;127.0.0.1&quot;, &quot;localhost&quot;, &quot;TestPC-A&quot; , &quot;0.0.0.0&quot;, &quot;192.25.56.120&quot;] #and this which I am not sure if this is necessary: CSRF_TRUSTED_ORIGINS = [ &quot;http://127.0.0.1&quot;, &quot;https://127.0.0.1&quot;, &quot;http://localhost&quot;, &quot;https://localhost&quot;, &quot;https://TestPC-A&quot;, &quot;http://TestPC-A&quot;, ] </code></pre> <p><strong>2. runserver.bat</strong></p> <pre><code>@echo off REM Activate virtual environment in the current terminal session and run server cmd /k &quot;.venv\Scripts\activate &amp;&amp; python manage.py runserver_plus --cert-file cert.pem --key-file key.pem 0.0.0.0:8000&quot; </code></pre> <p>I have also tried adding the following to the Windows hosts file <code>C:\Windows\System32\drivers\etc\hosts</code>:</p> <pre><code>192.25.56.120 TestPC-A </code></pre> <p>After restarting the server, the page still does not load in another computer's web browser at <code>https://TestPC-A:8000/</code>.</p> <p>What else do I need to set?</p>
<python><django>
2024-02-29 03:13:36
2
1,201
user2201789
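A hedged diagnosis: `ALLOWED_HOSTS` as posted already covers the relevant names, so the blockers are more likely outside `settings.py`. The client machines need their own hosts entry for `TestPC-A` (an entry in the server's hosts file does nothing for them; trying `https://192.25.56.120:8000/` from the client tests this), Windows Firewall on the server must allow inbound TCP 8000, and on Django 4+ each `CSRF_TRUSTED_ORIGINS` entry needs its scheme plus the non-default port. A settings sketch reflecting that last point:

```python
# settings.py sketch -- host names/IPs taken from the question
ALLOWED_HOSTS = ["127.0.0.1", "localhost", "TestPC-A", "192.25.56.120"]

# Django 4+: origins are scheme + host + (non-default) port
CSRF_TRUSTED_ORIGINS = [
    "https://TestPC-A:8000",
    "https://192.25.56.120:8000",
    "https://localhost:8000",
]
```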
78,078,327
951,739
python re.split gives different results depending on char group order
<pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import re &gt;&gt;&gt; s = &quot;i'm happy 7 times&quot; &gt;&gt;&gt; re.split(r' ',s) [&quot;i'm&quot;, 'happy', '7', 'times'] &gt;&gt;&gt; re.split(r&quot;[, -]&quot;,s) [&quot;i'm&quot;, 'happy', '7', 'times'] &gt;&gt;&gt; re.split(r&quot;[, -:]&quot;,s) ['i', 'm', 'happy', '', '', 'times'] </code></pre> <p>Added a colon to the char group and the apostrophe and the digit become targets for the <code>split</code>.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; re.split(r&quot;[:, -]&quot;,s) [&quot;i'm&quot;, 'happy', '7', 'times'] </code></pre> <p>Moved the colon to the front of the group and got the expected result.</p> <p>Put a colon in the string:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; s = &quot;i'm happy 7 times: again&quot; &gt;&gt;&gt; re.split(r&quot;[:, -]&quot;,s) [&quot;i'm&quot;, 'happy', '7', 'times', '', 'again'] </code></pre> <p>I can't find anything in the re documentation about colons having a meaning in a char group. Is this correct behaviour?</p> <p>I'm using:</p> <pre><code>Python 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] on win32 </code></pre>
<python><regex>
2024-02-29 00:45:15
1
769
knowingpark
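Yes, this is documented character-class behaviour, not a bug, and the colon itself has no special meaning. Inside `[...]`, a `-` sitting between two characters denotes a range, so `[, -:]` means "`,` plus every code point from `' '` (0x20) through `':'` (0x3A)", a span that covers the digits 0-9 (0x30-0x39) and the apostrophe (0x27). A `-` that is first, last, or backslash-escaped is literal, which is why `[:, -]` behaves as intended:

```python
import re

s = "i'm happy 7 times"

# " -:" is a RANGE: 0x20 (space) .. 0x3A (colon), so it matches ' and 0-9 too
print(re.split(r"[, -:]", s))    # ['i', 'm', 'happy', '', '', 'times']

# hyphen last or escaped -> literal; only comma, space, colon can split
print(re.split(r"[:, -]", s))    # ["i'm", 'happy', '7', 'times']
print(re.split(r"[,\- :]", s))   # ["i'm", 'happy', '7', 'times']
```

Escaping (`\-`) is the safest habit, since it keeps the hyphen literal no matter how the class is later rearranged.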
78,078,290
3,543,200
botocore >= 1.28.0 slower in multithread application
<p>The official Boto3 docs recommends creating a new resource per thread: <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/guide/resources.html#multithreading-or-multiprocessing-with-resources" rel="nofollow noreferrer">https://boto3.amazonaws.com/v1/documentation/api/latest/guide/resources.html#multithreading-or-multiprocessing-with-resources</a></p> <p>Botocore 1.28.0 merged a feature which appears to generate a list of all possible endpoints on resource creation: <a href="https://github.com/boto/botocore/pull/2785" rel="nofollow noreferrer">https://github.com/boto/botocore/pull/2785</a></p> <p>I have a test suite which uses <code>motoserver</code> and an application that relies heavily on parallelized downloads from / uploads to s3 from a process pool. With botocore 1.28.0, the test suite takes an extra 20 minutes to run as compared to the previous version.</p> <p>I've done some work with <code>cProfile</code> and I can confirm that at least half of the additional time is spent inside of <code>botocore</code>'s <code>load_service_model</code> method called during botocore client creation. Haven't tracked down the other ~50% of extra time yet but it's somewhere in botocore usage.</p> <p>What can I do to speed this up again with the version bump?</p>
<python><amazon-web-services><amazon-s3><boto><botocore>
2024-02-29 00:30:11
1
997
gmoss
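Independent of pinning `botocore<1.28`, the construction cost can usually be amortised: boto3's documentation marks low-level *clients* (unlike resources) as generally thread-safe, so one client per process can serve the whole pool and the endpoint-model loading happens once. A stdlib sketch of that shape, where `make_client()` is a hypothetical stand-in for `boto3.session.Session().client("s3")`:

```python
import functools
import time

CONSTRUCTIONS = 0

def make_client():
    """Hypothetical stand-in for boto3 client creation."""
    global CONSTRUCTIONS
    CONSTRUCTIONS += 1
    time.sleep(0.01)          # models the load_service_model cost
    return object()

@functools.lru_cache(maxsize=1)
def shared_client():
    # construct once per process; every worker thread reuses the same object
    return make_client()

first = shared_client()       # pay the cost once, before the pool spins up
assert all(shared_client() is first for _ in range(100))
print("constructions:", CONSTRUCTIONS)
```

For the per-thread `resource(...)` call sites this means refactoring toward clients (or constructing eagerly before forking workers); profiling whether the remaining ~50% of the regression also sits in client construction would confirm the approach.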
78,077,986
358,680
Vectorized version of `__contains__` (contained in) in pandas/numpy
<p>Consider:</p> <pre class="lang-py prettyprint-override"><code>a = pd.DataFrame([[&quot;x&quot;, &quot;y&quot;, &quot;z&quot;], [&quot;y&quot;, &quot;x&quot;, &quot;z&quot;]]) valid = (&quot;x&quot;, &quot;y&quot;) print(a in valid) # Doesn't work </code></pre> <p>I want the output to be:</p> <pre><code> 0 1 2 0 True True False 1 True True False </code></pre> <p><code>np.logical_or.reduce</code> can do this but the result is not a dataframe (or series) but rather a raw numpy array:</p> <pre class="lang-py prettyprint-override"><code>np.logical_or.reduce(a == v for v in valid) </code></pre> <p>Is there a better way?</p>
<python><pandas><numpy>
2024-02-28 22:37:04
1
1,323
Luis
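pandas has this membership test built in: `DataFrame.isin` returns a boolean frame of the same shape, which is exactly the desired output, and it keeps the DataFrame index/columns that `np.logical_or.reduce` discards:

```python
import pandas as pd

a = pd.DataFrame([["x", "y", "z"], ["y", "x", "z"]])
valid = ("x", "y")

mask = a.isin(valid)          # elementwise "contained in", stays a DataFrame
print(mask)
```

The same method exists on `Series`, so the pattern carries over to single columns as well.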
78,077,924
2,893,712
Pandas First Value That Doesnt Return KeyError
<p>I have a dataframe that has <em>either</em> column <code>A</code> or column <code>B</code>.</p> <p>I was hoping to use the <code>or</code> operator with a syntax like:</p> <pre><code>Value = df.at['Total','A'] or df.at['Total','B'] </code></pre> <p>but I still receive a <code>KeyError</code> exception. Is there a shorthand way to achieve this instead of doing something like:</p> <pre><code>if 'A' in df.columns: Value = df.at['Total','A'] else: Value = df.at['Total','B'] </code></pre>
<python><pandas><dataframe><keyerror>
2024-02-28 22:19:17
3
8,806
Bijan
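`or` cannot help here: although it short-circuits, the `KeyError` is raised while evaluating the left operand itself, before any boolean logic runs. A short probe over the candidate column names collapses the `if/else` to one line. Sketched with a plain dict standing in for the frame so it runs without pandas; with a real frame it reads `col = next(c for c in ('A', 'B') if c in df.columns)` followed by `df.at['Total', col]`:

```python
# hypothetical stand-in: this "frame" only has column B
df_columns = ["B"]
cells = {("Total", "B"): 42}

col = next((c for c in ("A", "B") if c in df_columns), None)
if col is None:
    raise KeyError("neither column A nor B is present")

value = cells[("Total", col)]
print(col, value)             # B 42
```

The `None` default keeps the failure mode explicit; dropping it makes `next` raise `StopIteration` instead, which is harder to diagnose.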
78,077,883
22,407,544
'ProgrammingError: cannot cast type bigint to uuid' in Django
<p>I've been moving development of my website over to using Docker. I replaced sqlite as my database with postgresql then ran the command <code>docker-compose exec web python manage.py migrate</code> in my Docker environment and it produced the following error:</p> <pre><code> File &quot;/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py&quot;, line 87, in _execute return self.cursor.execute(sql) ^^^^^^^^^^^^^^^^^^^^^^^^ psycopg2.errors.CannotCoerce: cannot cast type bigint to uuid LINE 1: ...quirementschat&quot; ALTER COLUMN &quot;id&quot; TYPE uuid USING &quot;id&quot;::uuid ^ The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/code/manage.py&quot;, line 22, in &lt;module&gt; main() File &quot;/code/manage.py&quot;, line 18, in main execute_from_command_line(sys.argv) File &quot;/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py&quot;, line 442, in execute_from_command_line utility.execute() File &quot;/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py&quot;, line 436, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File &quot;/usr/local/lib/python3.11/site-packages/django/core/management/base.py&quot;, line 412, in run_from_argv self.execute(*args, **cmd_options) File &quot;/usr/local/lib/python3.11/site-packages/django/core/management/base.py&quot;, line 458, in execute output = self.handle(*args, **options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/django/core/management/base.py&quot;, line 106, in wrapper res = handle_func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/django/core/management/commands/migrate.py&quot;, line 356, in handle post_migrate_state = executor.migrate( ^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/django/db/migrations/executor.py&quot;, line 135, in migrate state = 
self._migrate_all_forwards( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/django/db/migrations/executor.py&quot;, line 167, in _migrate_all_forwards state = self.apply_migration( ^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/django/db/migrations/executor.py&quot;, line 252, in apply_migration state = migration.apply(state, schema_editor) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/django/db/migrations/migration.py&quot;, line 132, in apply operation.database_forwards( File &quot;/usr/local/lib/python3.11/site-packages/django/db/migrations/operations/fields.py&quot;, line 235, in database_forwards schema_editor.alter_field(from_model, from_field, to_field) File &quot;/usr/local/lib/python3.11/site-packages/django/db/backends/base/schema.py&quot;, line 830, in alter_field self._alter_field( File &quot;/usr/local/lib/python3.11/site-packages/django/db/backends/postgresql/schema.py&quot;, line 287, in _alter_field super()._alter_field( File &quot;/usr/local/lib/python3.11/site-packages/django/db/backends/base/schema.py&quot;, line 1055, in _alter_field self.execute( File &quot;/usr/local/lib/python3.11/site-packages/django/db/backends/postgresql/schema.py&quot;, line 48, in execute return super().execute(sql, None) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/django/db/backends/base/schema.py&quot;, line 201, in execute cursor.execute(sql, params) File &quot;/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py&quot;, line 102, in execute return super().execute(sql, params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py&quot;, line 67, in execute return self._execute_with_wrappers( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py&quot;, line 80, in _execute_with_wrappers return executor(sql, params, 
many, context) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py&quot;, line 84, in _execute with self.db.wrap_database_errors: File &quot;/usr/local/lib/python3.11/site-packages/django/db/utils.py&quot;, line 91, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File &quot;/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py&quot;, line 87, in _execute return self.cursor.execute(sql) ^^^^^^^^^^^^^^^^^^^^^^^^ django.db.utils.ProgrammingError: cannot cast type bigint to uuid LINE 1: ...quirementschat&quot; ALTER COLUMN &quot;id&quot; TYPE uuid USING &quot;id&quot;::uuid ^ </code></pre> <p>I haven't been able to find any working solutions. I renamed my 'id' field and ran migrations but still ended up with the same errors. I deleted all records in my database, changed the field from Charfield to UUIDField but the error persists.</p> <p>Here is my models.py:</p> <pre><code>class RequirementsChat(models.Model): id = models.CharField(primary_key=True, max_length=38) #id = models.UUIDField(default=uuid.uuid4, unique=True, primary_key=True, max_length=37) alias = models.CharField(max_length=20, blank=True, null=True) email = models.CharField(max_length=80, blank=True, null=True) language = models.CharField(max_length=10, blank=True, null=True) due_date = models.CharField(max_length=10, blank=True, null=True) subtitle_type = models.CharField(max_length=10, blank=True, null=True) transcript_file_type = models.CharField(max_length=10, blank=True, null=True) additional_requirements = models.TextField(max_length=500, blank=True, null=True) date = models.DateTimeField(auto_now_add=True, blank=True, null=True) url = models.CharField(max_length=250, blank=True, null=True) task_completed = models.BooleanField(default=False) class UploadedFile(models.Model): input_file = models.FileField(upload_to='files') chat_id = models.CharField(max_length=33, null= True) requirements_chat = 
models.ForeignKey(RequirementsChat, on_delete=models.CASCADE, related_name='files', null=True) </code></pre> <p>Any suggestions?</p>
<python><sql><django><postgresql><sqlite>
2024-02-28 22:07:48
1
359
tthheemmaannii
78,077,822
7,124,155
Databricks spark submit - getting error with --py-files
<p>In Databricks workflows, I submit a spark job (Type = &quot;Spark Submit&quot;), and a bunch of parameters, starting with --py-files. The appl_src.zip contains application code and common functions.</p> <pre><code>&quot;--py-files&quot;, &quot;s3://some_path/appl_src.zip&quot;, &quot;s3://some_path/main.py&quot;, </code></pre> <p>That works fine. But now I'd like to have three Python files, putting the &quot;common&quot; module in a different s3 path, like this:</p> <pre><code>&quot;--py-files&quot;, &quot;s3://some_path/appl_src.py&quot;, &quot;s3://some_path/main.py&quot;, &quot;s3://a_different_path/common.py&quot;, </code></pre> <p>But then I get an error saying &quot;common&quot; doesn't exist, when I know in fact the path exists. In fact the log4j mentions the first two, but not the third:</p> <pre><code>24/02/28 21:41:00 INFO Utils: Fetching s3://some_path/appl_src.py to ... 24/02/28 21:41:00 INFO Utils: Fetching s3://some_path/main.py to ... </code></pre> <p>And then from Standard output:</p> <pre><code>Traceback (most recent call last): File &quot;/local_disk0/tmp/spark-123/appl_src.py&quot;, line 21, in &lt;module&gt; from common import my_functions ModuleNotFoundError: No module named 'common' </code></pre> <p>Does Spark only read the first two Python arguments?</p>
<python><apache-spark><databricks>
2024-02-28 21:52:35
0
1,329
Chuck
78,077,509
53,491
Using aiohttp with requests_aws4auth
<p>I've asked a previous version of this, but I've progressed much since then.</p> <p>I've been told that I need to get the authorization headers from AWS4auth and add them as headers into the aiohttp session calls. I'm not sure how to do that.</p> <p>Currently I have:</p> <pre><code> self.aws4auth = AWS4Auth(access_key, secret_key, 'us-east-1', 's3') with open(file_path, 'rb') as file: content = file.read() url = f'{self.host}/{bucket}/{object_name}' async with aiohttp.ClientSession() as session: for k,w in self.aws4auth.signing_key.__dict__.items(): session.headers[k] = str(w) # pprint(list(session.headers.items())) async with session.put(url, data=content) as resp: print(resp) pass </code></pre> <p>It gives me a 400 error.</p> <p>I've also tried putting the headers into the session.put with the same result.</p> <p>When I use the AWS4auth directly with a synchronous request, it works fine, so the data itself isn't the problem.</p> <p>ETA:</p> <p>I've grabbed the headers out of a successful get call from requests, and it gives me a 403, presumably because I've already used some part of the authentication...</p> <p>Current code:</p> <pre><code> url = f'{self.host}/{bucket}/{object_name}' listurl = f'{self.host}/{bucket}' listresp = requests.request(&quot;GET&quot;, listurl, auth=self.aws4auth) headers= listresp.request.headers async with aiohttp.ClientSession(headers=headers) as session: async with session.put(url, data=content) as resp: print(resp.text) pass </code></pre> <p>The list request gives a 200 response.</p> <p>I should add that the entire purpose of this is to not have to re-implement: <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html</a> if I don't have to.</p> <p>ETA: I don't believe that this is currently possible with these tools, at least not without re-implementing the entire signing process above.... 
I'm looking at other tools.</p>
<python><python-asyncio><aiohttp>
2024-02-28 20:49:15
1
12,317
Brian Postow
78,077,448
4,706,711
TLS version mismatch between curl and server. How can I accept multiple TLS versions?
<p>I made the following servers:</p> <pre><code>import socket import threading import queue import time import ssl import multiprocessing class SocketServer: &quot;&quot;&quot; Basic Socket Server in python &quot;&quot;&quot; def __init__(self,host,port,max_threads): print(&quot;Create Server For Http&quot;) self.host = host self.port = port self.server_socket = self.__initSocket() self.max_threads = max_threads self.request_queue = queue.Queue() def __initSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_STREAM) def __postProcessSocket(self,client_socket): return client_socket def __accept(self): self.server_socket.listen(5) while True: client_socket, client_address = self.server_socket.accept() actual_client_socket = self.__postProcessSocket(client_socket) self.request_queue.put((actual_client_socket, client_address)) def __handle(self): while True: # Dequeue a request and process it client_socket, address = self.request_queue.get() print(address) # Read HTTP Request # Log Http Request # Manipulate Http Request # Forward or respond content = '&lt;html&gt;&lt;body&gt;Hello World&lt;/body&gt;&lt;/html&gt;\r\n'.encode() headers = f'HTTP/1.1 200 OK\r\nContent-Length: {len(content)}\r\nContent-Type: text/html\r\n\r\n'.encode() client_socket.sendall(headers + content) client_socket.close() self.request_queue.task_done() def __initThreads(self): for _ in range(self.max_threads): threading.Thread(target=self.__handle, daemon=True).start() def start(self): self.server_socket.bind((self.host, self.port)) self.__initThreads() self.__accept() class TLSSocketServer(SocketServer): def __init__(self,host,port,max_threads): super().__init__(host,port,max_threads) self.context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) def __postProcessSocket(self,client_socket): # @todo load SNI self.context.load_cert_chain(certfile=&quot;/etc/certs/cert.crt&quot;, keyfile=&quot;/etc/certs/key.key&quot;) return self.context.wrap_socker(client_socket,server=true) if __name__ == 
&quot;__main__&quot;: # @todo read settings file host = &quot;0.0.0.0&quot; port = 80 tls_port=443 max_threads = 5 server = SocketServer(host, port, max_threads) server_process = multiprocessing.Process(target=server.start) server_process.start() # Add other main application code here if needed tls_server = TLSSocketServer(host, tls_port, max_threads) tls_server_process = multiprocessing.Process(target=tls_server.start) tls_server_process.start() tls_server_process.join() </code></pre> <p>And as you can see, I am running them in multiple processes in Python. But once I do:</p> <pre><code>$ curl -vvv https://10.0.0.2 </code></pre> <p>I get this error:</p> <pre><code> * Trying 10.0.0.2:443... * Connected to 10.0.0.2 (10.0.0.2) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * CAfile: /etc/ssl/certs/ca-certificates.crt * CApath: /etc/ssl/certs * TLSv1.0 (OUT), TLS header, Certificate Status (22): * TLSv1.3 (OUT), TLS handshake, Client hello (1): * (5454) (IN), , Unknown (72): * error:0A00010B:SSL routines::wrong version number * Closing connection 0 curl: (35) error:0A00010B:SSL routines::wrong version number </code></pre> <p>How can I make it accept any TLS version? I am doing this as a local debug/analysis tool instead of something that will run remotely. Therefore, I want to use any available TLS version, with the latest preferred if possible.</p> <p>How can I do this?</p> <p>If I cannot, how can I fix this error? 
I also tried with firefox and I get: SSL_ERROR_RX_RECORD_TOO_LONG</p> <p>In order to mitigate the issue I also tried fruitlessly:</p> <pre><code>class TLSSocketServer(SocketServer): def __initSocket(self): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) print(&quot;Load TLS&quot;) context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2) context.load_cert_chain(certfile='/etc/certs/cert.crt', keyfile='/etc/certs/key.key') return context.wrap_socket(sock, server_side=True) </code></pre> <p>Furthermore, I tried:</p> <pre><code>class TLSSocketServer(SocketServer): def __initSocket(self): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) print(&quot;Load TLS&quot;) context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) context.load_cert_chain(certfile='/etc/certs/cert.crt', keyfile='/etc/certs/key.key') return context.wrap_socket(sock, server_side=True) </code></pre> <p>And tried to enforce TLS 1.3 but I still got the error:</p> <pre><code>$ curl -vvv -k --tlsv1.3 https://10.0.0.2 * Trying 10.0.0.2:443... * Connected to 10.0.0.2 (10.0.0.2) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * TLSv1.0 (OUT), TLS header, Certificate Status (22): * TLSv1.3 (OUT), TLS handshake, Client hello (1): * (5454) (IN), , Unknown (72): * error:0A00010B:SSL routines::wrong version number * Closing connection 0 curl: (35) error:0A00010B:SSL routines::wrong version number </code></pre>
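One detail worth checking before the TLS settings themselves: double-underscore method names are name-mangled per class in Python, so `TLSSocketServer.__postProcessSocket` actually defines `_TLSSocketServer__postProcessSocket` and never overrides the base class's `_SocketServer__postProcessSocket`. The accept loop keeps calling the base version, the socket is never wrapped, and plain HTTP on port 443 is exactly what produces curl's `wrong version number` (and Firefox's `SSL_ERROR_RX_RECORD_TOO_LONG`). A minimal sketch of the mangling behaviour:

```python
class Base:
    def handle(self):
        # compiled as self._Base__post(): the subclass override is invisible here
        return self.__post()

    def __post(self):
        return "base"


class Child(Base):
    # becomes _Child__post: Base.handle() never sees this override
    def __post(self):
        return "child"


print(Child().handle())  # base
```

Renaming the hook to a single underscore (e.g. `_post_process_socket`) lets the override take effect; after that, wrapping the accepted connection with `context.wrap_socket(client_socket, server_side=True)` (note the spelling `wrap_socket` and `server_side=True`, unlike the posted `wrap_socker(..., server=true)`) should negotiate normally.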
<python><ssl><multiprocessing>
2024-02-28 20:38:28
1
10,444
Dimitrios Desyllas
78,077,410
23,260,297
Styling negative numbers in pandas
<p>I have a dataframe that I am exporting to Excel. I would also like to style it before the export.</p> <p>I have this code which changes the background color and text color and works fine, but I would like to add to it:</p> <pre><code>df.style.set_properties(**{'background-color': 'black', 'color': 'lawngreen', 'border-color': 'white'}).to_excel(writer, sheet_name='Sheet1', startrow=rowPos, float_format = &quot;%0.5f&quot;) </code></pre> <p>I need columns with strings and dates to have a white text color, and then positive numbers to be green and negative numbers to be red. I pulled these styles directly from pandas <a href="https://pandas.pydata.org/pandas-docs/version/1.1/user_guide/style.html" rel="nofollow noreferrer">documentation</a> on styling since I have never used it before, and am unsure how to achieve these results.</p> <p>Let's say my dataframe looks like this:</p> <pre><code>StartDate ExpiryDate Commodity Quantity Price Total --------- ---------- ---------- ------- ----- ----- 02/28/2024 12/28/2024 HO 10000 -3.89 -38900 02/28/2024 12/28/2024 WPI 10000 4.20 42000 </code></pre> <p>How could I achieve what I am looking for?</p>
<python><pandas><excel><dataframe><pandas-styles>
2024-02-28 20:31:47
2
2,185
iBeMeltin
78,077,316
2,414,957
Exception: Not found: 'python/cv2/py.typed' ERROR: Failed building wheel for opencv-python in Anaconda Python 3.6
<p>I am following the env setup in CenterPose repo. However, opencv-python doesn't get installed.</p> <p><code>(CenterPose) mona@ada:/data/CenterPose/data$ pip install opencv-python</code></p> <p>I get this error:</p> <pre><code> -- Installing: /tmp/pip-install-wjrf6kvm/opencv-python_e5e6222fa4024d2d8f3e1d1c3bd5fb1f/_skbuild/linux-x86_64-3.6/cmake-install/share/opencv4/lbpcascades/lbpcascade_profileface.xml -- Installing: /tmp/pip-install-wjrf6kvm/opencv-python_e5e6222fa4024d2d8f3e1d1c3bd5fb1f/_skbuild/linux-x86_64-3.6/cmake-install/share/opencv4/lbpcascades/lbpcascade_silverware.xml Copying files from CMake output creating directory _skbuild/linux-x86_64-3.6/cmake-install/cv2 copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/python-3/cv2.abi3.so -&gt; _skbuild/linux-x86_64-3.6/cmake-install/cv2/cv2.abi3.so copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/__init__.py -&gt; _skbuild/linux-x86_64-3.6/cmake-install/cv2/__init__.py copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/load_config_py2.py -&gt; _skbuild/linux-x86_64-3.6/cmake-install/cv2/load_config_py2.py copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/load_config_py3.py -&gt; _skbuild/linux-x86_64-3.6/cmake-install/cv2/load_config_py3.py copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/config.py -&gt; _skbuild/linux-x86_64-3.6/cmake-install/cv2/config.py copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/config-3.py -&gt; _skbuild/linux-x86_64-3.6/cmake-install/cv2/config-3.py Traceback (most recent call last): File &quot;/home/mona/anaconda3/envs/CenterPose/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py&quot;, line 349, in &lt;module&gt; main() File &quot;/home/mona/anaconda3/envs/CenterPose/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py&quot;, line 331, in main json_out['return_val'] = hook(**hook_input['kwargs']) File 
&quot;/home/mona/anaconda3/envs/CenterPose/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py&quot;, line 249, in build_wheel metadata_directory) File &quot;/tmp/pip-build-env-o9p39xto/overlay/lib/python3.6/site-packages/setuptools/build_meta.py&quot;, line 231, in build_wheel wheel_directory, config_settings) File &quot;/tmp/pip-build-env-o9p39xto/overlay/lib/python3.6/site-packages/setuptools/build_meta.py&quot;, line 215, in _build_with_temp_dir self.run_setup() File &quot;/tmp/pip-build-env-o9p39xto/overlay/lib/python3.6/site-packages/setuptools/build_meta.py&quot;, line 268, in run_setup self).run_setup(setup_script=setup_script) File &quot;/tmp/pip-build-env-o9p39xto/overlay/lib/python3.6/site-packages/setuptools/build_meta.py&quot;, line 158, in run_setup exec(compile(code, __file__, 'exec'), locals()) File &quot;setup.py&quot;, line 537, in &lt;module&gt; main() File &quot;setup.py&quot;, line 310, in main cmake_source_dir=cmake_source_dir, File &quot;/tmp/pip-build-env-o9p39xto/overlay/lib/python3.6/site-packages/skbuild/setuptools_wrap.py&quot;, line 683, in setup cmake_install_dir, File &quot;setup.py&quot;, line 450, in _classify_installed_files_override raise Exception(&quot;Not found: '%s'&quot; % relpath_re) Exception: Not found: 'python/cv2/py.typed' ---------------------------------------- ERROR: Failed building wheel for opencv-python Failed to build opencv-python ERROR: Could not build wheels for opencv-python which use PEP 517 and cannot be installed directly </code></pre> <p>To reproduce:</p> <pre><code>CenterPose_ROOT=/path/to/clone/CenterPose git clone https://github.com/NVlabs/CenterPose.git $CenterPose_ROOT conda create -n CenterPose python=3.6 conda activate CenterPose pip install -r requirements.txt conda install -c conda-forge eigenpy </code></pre> <p>a bit sys info:</p> <pre><code>(base) mona@ada:~$ uname -a Linux ada 6.5.0-21-generic #21~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Feb 9 13:32:52 UTC 2 x86_64 x86_64 
x86_64 GNU/Linux (base) mona@ada:~$ lsb_release -a LSB Version: core-11.1.0ubuntu4-noarch:security-11.1.0ubuntu4-noarch Distributor ID: Ubuntu Description: Ubuntu 22.04.3 LTS Release: 22.04 Codename: jammy </code></pre>
<python><opencv><cmake><anaconda><conda>
2024-02-28 20:11:21
1
38,867
Mona Jalal
78,077,096
8,272,788
Subset list of text and matching embeddings using set() and np.unique() gives different length results
<p>I have a list of strings, as well as an array containing the dense embeddings generated from those strings (generated using <code>SentenceTransformer(&quot;all-mpnet-base-V2&quot;)</code>). In my analysis I notice that a number of the strings are repeated text. I want to run my analysis on unique values. Because generating embeddings is expensive, I want to subset my unique strings and get the relevant embeddings. I try this using <code>set()</code> and <code>np.unique()</code>, but get different length results. Why is this?</p> <p>I don't post a reproducible example, because my array of vectors is large. But can anyone explain what might be happening and why these lengths wouldn't match? They are close but not the same.</p> <pre><code>#Basic structure of my data: titles = [&quot;some text&quot;, &quot;other text&quot;, ...] embeddings = [Array1, Array2, ...] #Get unique items of the list unique_titles = list(set(titles)) unique_embeddings = np.unique(embeddings, axis = 0) len(unique_titles) == len(unique_embeddings) False </code></pre> <p>I can get around this all with the following for loop:</p> <pre><code>titles_unique = [] embeddings_unique = np.array([0 for i in range(embeddings.shape[1])]) for t, e in zip(titles, embeddings): if t not in titles_unique: titles_unique.append(t) embeddings_unique = np.vstack([embeddings_unique, e]) #Get rid of the first row of the array, used to create the correct number of dimensions embeddings_unique = np.delete(embeddings_unique, (0), axis = 0) </code></pre> <p>But this is slow and I will have to do this for a much larger data set shortly.</p> <p>The fact that <code>set()</code> and <code>np.unique()</code> don't give the same results makes me think I am losing information somewhere.</p>
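A hedged guess at the mismatch: `set()` deduplicates by exact string equality, while `np.unique(embeddings, axis=0)` deduplicates rows by exact float equality — two occurrences of the same title can carry embeddings that differ in the last bits, or two different titles can happen to share identical rows, so the two counts drift apart. Either way, deduplicating by title index and slicing the array with those indices avoids both the per-row `vstack` and the float comparison; a sketch with toy data:

```python
import numpy as np

titles = ["some text", "other text", "some text", "third text"]
embeddings = np.array([
    [0.1, 0.2],
    [0.3, 0.4],
    [0.1, 0.2],
    [0.5, 0.6],
])

# remember the index of the first occurrence of each title
first_seen = {}
for idx, title in enumerate(titles):
    first_seen.setdefault(title, idx)

keep = sorted(first_seen.values())
titles_unique = [titles[i] for i in keep]
embeddings_unique = embeddings[keep]        # one fancy-index, no vstack per row

print(titles_unique)            # ['some text', 'other text', 'third text']
print(embeddings_unique.shape)  # (3, 2)
```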
<python><numpy><sentence-transformers>
2024-02-28 19:24:18
1
1,261
MorrisseyJ
78,077,069
2,079,306
Python - Read specific sheet of XLSM book that contains 7 sheets with macros and formulas. Find string in cell, replace string. Write back to XLSM
<p>Issue: Pandas to_excel writing as .xlsx instead of .xlsm. Need to maintain macros in books.</p> <p>I have about 3000 xlsm files that are moderately complex with a few macros. There are about 7 sheets per book. I need to find a string(s) in a static column on a specific sheet and replace that string in all books.</p> <p>The approach I am taking is using pandas with openpyxl. I use pandas to read in a dataframe from the specific sheet, check the specific column for my string, if found, replace and write to a new sheet. (At least for now. I'll be replacing the original sheet with the new dataframe as I can't write just one cell or I would do that, but this is my proof of concept for now). The issue I get is pandas' to_excel function writes the file as xlsx (it actually writes the filename as a .xlsm but it's corrupted, changing the ext to .xlsx allows it to be opened). Any ideas on how to write a pandas dataframe to xlsm? Or go about this issue another way?</p> <pre><code>for file in sorted(files): warnings.filterwarnings('ignore', category=UserWarning, module='openpyxl') df = pandas.read_excel(files[file], sheet_name='Planning') #get data from sheet string_found=False for comment in df['Comment'].values: try: if re.search('mystring', comment, re.IGNORECASE): string_found=True print(file, comment) except TypeError: pass #not a string #add new sheet. if string_found: book = openpyxl.load_workbook(files[file], keep_vba = True) with pandas.ExcelWriter(files[file], mode='a', if_sheet_exists='replace', engine='openpyxl') as writer: writer.workbook = book writer.sheets.update(dict((ws.title, ws) for ws in book.worksheets)) writer.vba_archive = book.vba_archive df.to_excel(writer, sheet_name= 'Planning2') </code></pre> <p>This does what I ask it to, just writes it as xlsx (with a .xlsm extension), losing the macros. Thoughts?</p>
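One hedged alternative (a sketch, with hypothetical sheet/column positions) is to skip the pandas writer entirely: since only cell values change, `openpyxl` can edit the original sheet in place and save the same workbook, and `keep_vba=True` keeps the macro archive so the file stays a valid `.xlsm`:

```python
import re
from openpyxl import load_workbook

def replace_in_column(ws, col_idx, pattern, repl, first_row=2):
    """Regex-replace matching string cells in one worksheet column, in place."""
    rx = re.compile(pattern, re.IGNORECASE)
    changed = 0
    for (cell,) in ws.iter_rows(min_row=first_row, min_col=col_idx, max_col=col_idx):
        if isinstance(cell.value, str) and rx.search(cell.value):
            cell.value = rx.sub(repl, cell.value)
            changed += 1
    return changed

# hedged usage on a macro-enabled book (path, sheet name, and column are placeholders):
# wb = load_workbook("book.xlsm", keep_vba=True)
# if replace_in_column(wb["Planning"], col_idx=3, pattern="mystring", repl="newstring"):
#     wb.save("book.xlsm")   # still macro-enabled, no pandas round-trip
```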
<python><pandas><excel>
2024-02-28 19:17:09
2
1,123
john stamos
78,077,010
12,569,441
Optimizing the rational quadratic kernel function
<p>Given the following functions, what are some optimizations that can be done to speed up computations?</p> <p>Yes, I tried using ChatGPT and Bard, the reason I'm mentioning this is that there's a &quot;caveat&quot; to their solutions, <code>np.nan if (index - i) &lt; 0 else price_feed[index - i]</code> has to hold. So for each period, I need to check the previous periods, if on the dataframe that period is <code>index 0</code>, then it is np.nan, since doing a lookup to the past is nonsense, since 1) it is not possible in live envirioments, 2) you would be looking &quot;into the future&quot;.</p> <pre class="lang-py prettyprint-override"><code>from pandas import DataFrame import numpy as np from numba import jit, float32, uint32, float64 @jit( float64[:](float64[:], uint32, float32, uint32), cache=True, parallel=True, nopython=True, target_backend=&quot;cuda&quot;, forceobj=False, nogil=True, ) def rational_quadratic( price_feed: np.ndarray, lookback: int, relative_weight: float, start_at_bar: int, ) -&gt; np.ndarray: length_of_prices = len(price_feed) bars_calculated = start_at_bar + 1 result = np.zeros(length_of_prices, dtype=float) lookback_squared = np.power(lookback, 2) denominator = lookback_squared * 2 * relative_weight for index in range(length_of_prices): current_weight = 0.0 cumulative_weight = 0.0 for i in range(bars_calculated): y = np.nan if (index - i) &lt; 0 else price_feed[index - i] w = np.power( 1 + (np.power(i, 2) / denominator), -relative_weight, ) current_weight += y * w cumulative_weight += w result[index] = current_weight / cumulative_weight return result def rational_quadratic_wrapper( dataframe: DataFrame, lookback: int, relative_weight: float, start_at_bar: int, candle_type: str, ) -&gt; DataFrame: dataframe = dataframe.copy() ohlc4_values = dataframe[candle_type].values no_filter_values = rational_quadratic(ohlc4_values, lookback, relative_weight, start_at_bar) dataframe[&quot;no_filter&quot;] = no_filter_values 
dataframe[&quot;yhatdelt2&quot;] = rational_quadratic( no_filter_values, lookback, relative_weight, start_at_bar ) dataframe[&quot;smooth&quot;] = dataframe[&quot;no_filter&quot;] - (dataframe[&quot;no_filter&quot;] - dataframe[&quot;yhatdelt2&quot;]) dataframe[&quot;zero_lag&quot;] = dataframe[&quot;no_filter&quot;] + ( dataframe[&quot;no_filter&quot;] - dataframe[&quot;yhatdelt2&quot;] ) return dataframe import pandas as pd fake_price_data = {'ohlc4': [4308.172, 4175.935, 4070.76, 4112.74, 4029.135, 4308.172, 4175.935, 4070.76, 4112.74, 4029.135, 4308.172, 4175.935, 4070.76, 4112.74, 4029.135, 4308.172, 4175.935, 4070.76, 4112.74, 4029.135, 4308.172, 4175.935, 4070.76, 4112.74, 4029.135, 4029.135, 4308.172, 4175.935, 4070.76, 4112.74, 4029.135, 4308.172, 4175.935, 4070.76, 4112.74, 4029.135]} dates = pd.date_range(start='2017-08-17', periods=36, freq='D') df = pd.DataFrame(fake_price_data, index=dates) results = rational_quadratic_wrapper(df, 8, 1, 5, &quot;ohlc4&quot;) print(results) # Make sure your optimization matches the results dataframe output # I tried using ChatGPT and Bard, all of their solutions break the constraints or generate a ```shape error``` when doing cross multiplications with the ```relative weight```. Wish you luck in the challenge! </code></pre>
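Since the weights depend only on `i` (not on the price index), the double loop is a fixed-kernel correlation; a hedged vectorized sketch that keeps the look-back constraint by leaving the first `start_at_bar` outputs as NaN (matching the loop, where a NaN price poisons `current_weight`) and normalizing by the full weight sum, exactly as `cumulative_weight` does:

```python
import numpy as np

def rational_quadratic_fast(price_feed, lookback, relative_weight, start_at_bar):
    n = start_at_bar + 1
    i = np.arange(n, dtype=np.float64)
    # same kernel as the loop: (1 + i^2 / (2 * r * h^2)) ** -r
    w = (1.0 + i**2 / (2.0 * relative_weight * lookback**2)) ** (-relative_weight)
    # convolution places sum_i price[t - i] * w[i] at output position t
    num = np.convolve(price_feed, w)[: len(price_feed)]
    out = np.full(len(price_feed), np.nan)
    out[n - 1:] = num[n - 1:] / w.sum()   # earlier bars would look before t=0 -> NaN
    return out
```

For `index >= start_at_bar` this reproduces the loop's output to floating-point tolerance; earlier entries are NaN, as in the original (where `np.nan * w` propagates through `current_weight`).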
<python><numpy><performance><optimization><vectorization>
2024-02-28 19:03:21
1
403
Corfucinas
78,076,936
13,775,706
Single QPushButton Object on multiple frames
<p>I have an application that I'm writing that will have different layers for different purposes. Some layers will have functionality that differs from other layers, but some functionality will remain the same for all layers. As a user switches between the layers, I want to avoid having to have the user switch back to a previous layer to do certain operations. As an example, I need an enable button that should be present for all layers. The enable functionality does the same thing for all layers, with zero difference. So rather than creating five buttons, one for each layer, I was thinking one button, that is passed to each of the different layers would be the way to go.</p> <p>my first attempt at this has failed. I get a RuntimeError</p> <pre><code>RuntimeError: wrapped C/C++ object of type QDoubleSpinBox has been deleted </code></pre> <p>This isn't going to work as I expected, which I'm guessing has to do with some form of ownership over the objects. So I created a simpler application, that has a single button, with two different panels.</p> <pre><code>import sys from PyQt6.QtCore import Qt from PyQt6.QtWidgets import ( QApplication, QFrame, QMainWindow, QPushButton, QSplitter, QStackedWidget, QVBoxLayout, QWidget, ) class IO: def __init__(self) -&gt; None: self.button = QPushButton(&quot;Panel A&quot;) def get_io(self): return self.button class FrameA: def get_frame(self, btn): frame = QFrame() frame.setFrameShape(QFrame.Shape.NoFrame) frame.setLineWidth(0) frame.setMidLineWidth(0) layout = QVBoxLayout(frame) layout.addWidget(btn) return frame class PanelA(QWidget): def __init__(self, io, parent=None): QWidget.__init__(self) self.parent = parent self.build_panel(io) def build_panel(self, btn): layout = QVBoxLayout() vertical_splitter = QSplitter(Qt.Orientation.Vertical) vertical_splitter.setStyleSheet(&quot;QSplitter::handle {image: none;}&quot;) vertical_splitter.addWidget(FrameA().get_frame(btn)) layout.addWidget(vertical_splitter) self.setLayout(layout) class 
PanelB(QWidget): def __init__(self, io, parent=None): QWidget.__init__(self) self.parent = parent self.build_panel(io) def build_panel(self, btn): layout = QVBoxLayout() vertical_splitter = QSplitter(Qt.Orientation.Vertical) vertical_splitter.setStyleSheet(&quot;QSplitter::handle {image: none;}&quot;) vertical_splitter.addWidget(FrameA().get_frame(btn)) layout.addWidget(vertical_splitter) self.setLayout(layout) class UI(QWidget, IO): def __init__(self, parent=None): QWidget.__init__(self) IO.__init__(self) layout = QVBoxLayout() self.stack = QStackedWidget() self.stack.addWidget(PanelA(self.get_io())) self.stack.addWidget(PanelB(self.get_io())) self.stack.setCurrentIndex(1) layout.addWidget(self.stack) self.setLayout(layout) self.init_btn_connections() def init_btn_connections(self): self.button.clicked.connect(lambda: self.btnstate(self.button)) def btnstate(self, btn): print(f&quot;Button: {btn}&quot;) if btn.text() == &quot;Panel A&quot;: print( f&quot;----&gt; Button, Current Stack Idx: {self.stack.currentIndex()}&quot; ) self.button.setText(&quot;Panel B&quot;) self.stack.setCurrentIndex(1) print(f&quot;\tSwitched to stack index: {self.stack.currentIndex()}&quot;) elif btn.text() == &quot;Panel B&quot;: print( f&quot;---&gt; Default, Current Stack Idx: {self.stack.currentIndex()}&quot; ) self.button.setText(&quot;Panel A&quot;) self.stack.setCurrentIndex(0) class App(QMainWindow): def __init__(self): QMainWindow.__init__(self) self.gui = UI() self.setCentralWidget(self.gui) def main(): app = QApplication(sys.argv) ex = App() ex.show() sys.exit(app.exec()) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>I've tried to replicate in the simplest way possible what my actual application is doing. Here is the rub of it. I can not replicate the *Object has been deleted error with this simpler app. This seems to tell me that the object that is being deleted is going out of scope and Python is tossing it. 
Which means I have to chase it down for the object itself.</p> <p>This simple application runs just fine. However, what I don't have is a button on both panels. When the application launches I see nothing but the application frame. If I set the stack to the 2nd position I see the button.</p> <p>What appears to be happening in this situation, is that I set the button on the first stack. But when I attempt to build the 2nd panel, it removes it from the 1st stack and places it on the 2nd stack.</p> <p>I would not be opposed to this if there was a way for me to refresh the panel with the button when I switch between the stacks.</p> <p>What I was hoping to avoid by using stacks, was the need to build a new panel every time I switch between panels.</p> <p>How do others handle situations like this?</p>
<python><user-interface><pyqt6>
2024-02-28 18:47:21
0
304
Michael
78,076,624
225,020
How do I define a classmethod dunder?
<p>I have a class <code>percent</code> that I'd like to be able to define <code>__rsub__</code> for the class and not instances. For instance using this class:</p> <pre><code>class percent: def __init__(self, value: int) -&gt; None: self.value = value @classmethod def __rsub__(cls, value: int) -&gt; None: return percent(value) def __repr__(self) -&gt; str: return f&quot;{self.value}%&quot; </code></pre> <p>This obviously doesn't work, but ultimately I'd like to be able to create a <code>percent</code> instance like so:</p> <pre><code>&gt;&gt;&gt; 50-percent 50% </code></pre>
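Special methods are looked up on the *type* of the operand, not the operand itself — so for the expression `50 - percent` (where the class object is the right operand), `__rsub__` has to live on `type(percent)`, i.e. on a metaclass, rather than as a `@classmethod`. A sketch:

```python
class PercentMeta(type):
    def __rsub__(cls, value: int) -> "percent":
        # reflected subtraction with the class object itself on the right-hand side
        return cls(value)


class percent(metaclass=PercentMeta):
    def __init__(self, value: int) -> None:
        self.value = value

    def __repr__(self) -> str:
        return f"{self.value}%"


print(50 - percent)  # 50%
```

`int.__sub__(50, percent)` returns `NotImplemented`, so Python falls back to `type(percent).__rsub__`, which the metaclass provides.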
<python><python-3.x>
2024-02-28 17:48:44
0
27,595
Jab
78,076,401
13,339,621
Uninstall uv Python package installer
<p>I recently installed <a href="https://github.com/astral-sh/uv" rel="nofollow noreferrer">uv</a> on Linux using the command line provided in the documentation:</p> <pre class="lang-bash prettyprint-override"><code>curl -LsSf https://astral.sh/uv/install.sh | sh </code></pre> <p>It created an executable in <code>/home/currentuser/.cargo/bin/uv</code>. Now, just out of curiosity, I would like to know how to remove it properly. Is it as simple as deleting a file? Or is there a script or command line which is cleaner?</p> <p>Please note that I also tried with <code>pip install/uninstall uv</code>, and it worked perfectly, but the installation path was different.</p>
<python><linux><python-packaging><uv>
2024-02-28 17:12:25
2
1,549
matleg
78,076,355
1,773,592
TypeError when using regex to change column names in polars
<p>I have a df:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({ &quot;A&quot;: [0], &quot;B&quot;: [1], '{&quot;C_C&quot;, &quot;1&quot;}': [2], '{&quot;D_D&quot;, &quot;6&quot;}': [3], }) </code></pre> <p>I want to change the column names so that if they have quotation marks they are joined with an underscore and <code>_count</code> is added at end, so <code>{&quot;C_C&quot;, &quot;1&quot;}</code> becomes <code>C_C_1_count</code>. I have tried:</p> <pre><code>def flatten_pivot_polars(d:pl.DataFrame, col_str: str)-&gt;pl.DataFrame: import re d=d.select( pl.exclude([&quot;Step&quot;, &quot;RunId&quot;]).name.map(lambda col_name: '_'.join([re.findall('&quot;([^&quot;]*)&quot;',col_name), col_str])) ) return d flatten_pivot_polars(df, 'count') </code></pre> <p>but this gives:</p> <pre><code>ComputeError: Python function in 'name.map' produced an error: TypeError: sequence item 0: expected str instance, list found. </code></pre> <p>I am guessing it is because I am not excluding the non quoted columns properly but don't know what else to do.</p>
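A hedged reading of the traceback: `re.findall` returns a *list* of captured groups (e.g. `['C_C', '1']`), and the original lambda passes that list as a single element of `'_'.join([...])`, hence `expected str instance, list found`. Concatenating the list with the suffix before joining fixes it; the renaming logic isolated as a plain function (so it can be dropped into `.name.map`):

```python
import re

def flatten_name(col_name: str, suffix: str = "count") -> str:
    parts = re.findall(r'"([^"]*)"', col_name)
    # parts is a list of strings; extend it with the suffix, then join
    return "_".join(parts + [suffix]) if parts else col_name

print(flatten_name('{"C_C", "1"}'))  # C_C_1_count
print(flatten_name('{"D_D", "6"}'))  # D_D_6_count
print(flatten_name("A"))             # A
```

Inside the original function this would read `pl.exclude(["Step", "RunId"]).name.map(flatten_name)` — untested against this exact polars version, but the list-vs-str fix is the crux, and leaving unquoted names unchanged avoids renaming `A` and `B` as well.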
<python><regex><dataframe><python-polars>
2024-02-28 17:04:46
1
3,391
schoon
78,076,346
16,383,578
how to download file using subprocess and update QProgressBar in PyQt6
<p>I am writing a program to download a bunch of files, the program is very long, I download the files by calling <a href="https://github.com/aria2/aria2/releases/tag/release-1.37.0" rel="nofollow noreferrer">aria2c.exe</a> via subprocess, and I have encountered a problem.</p> <p>Specifically, when using aria2c + subprocess + QThread to download the file in the background, the GUI hangs and the progressbar and related labels don't update while the download is running, the GUI remains unresponsive until the download is complete.</p> <p>I used the same method to download the files in console without the GUI using aria2c + subprocess + threading.Thread, the download completed successfully and all stats are updated correctly, and the threads complete without errors.</p> <p>This is the minimal code required to reproduce the issue, though it is rather long:</p> <pre><code>import re import requests import subprocess import sys import time from PyQt6.QtCore import Qt, QThread, pyqtSignal, pyqtSlot from PyQt6.QtGui import ( QFont, QFontMetrics, ) from PyQt6.QtWidgets import ( QApplication, QGridLayout, QGroupBox, QHBoxLayout, QLabel, QProgressBar, QPushButton, QSizePolicy, QVBoxLayout, QWidget, ) BASE_COMMAND = [ &quot;aria2c&quot;, &quot;--async-dns=false&quot;, &quot;--connect-timeout=3&quot;, &quot;--disk-cache=256M&quot;, &quot;--disable-ipv6=true&quot;, &quot;--enable-mmap=true&quot;, &quot;--http-no-cache=true&quot;, &quot;--max-connection-per-server=16&quot;, &quot;--min-split-size=1M&quot;, &quot;--piece-length=1M&quot;, &quot;--split=32&quot;, &quot;--timeout=3&quot;, ] url = &quot;http://ipv4.download.thinkbroadband.com/100MB.zip&quot; UNITS_SIZE = {&quot;B&quot;: 1, &quot;KiB&quot;: 1 &lt;&lt; 10, &quot;MiB&quot;: 1 &lt;&lt; 20, &quot;GiB&quot;: 1 &lt;&lt; 30} DOWNLOAD_PROGRESS = re.compile( &quot;(?P&lt;downloaded&gt;\d+(\.\d+)?[KMG]iB)/(?P&lt;total&gt;\d+(\.\d+)?[KMG]iB)&quot; ) UNITS = (&quot;B&quot;, &quot;KiB&quot;, &quot;MiB&quot;, &quot;GiB&quot;, 
&quot;TiB&quot;, &quot;PiB&quot;, &quot;EiB&quot;, &quot;ZiB&quot;, &quot;YiB&quot;, &quot;RiB&quot;, &quot;QiB&quot;) ALIGNMENT = Qt.AlignmentFlag.AlignLeft | Qt.AlignmentFlag.AlignTop class Font(QFont): def __init__(self, size: int = 10) -&gt; None: super().__init__() self.setFamily(&quot;Times New Roman&quot;) self.setStyleHint(QFont.StyleHint.Times) self.setStyleStrategy(QFont.StyleStrategy.PreferAntialias) self.setPointSize(size) self.setBold(True) self.setHintingPreference(QFont.HintingPreference.PreferFullHinting) FONT = Font() FONT_RULER = QFontMetrics(FONT) class Box(QGroupBox): def __init__(self) -&gt; None: super().__init__() self.setAlignment(ALIGNMENT) self.setContentsMargins(3, 3, 3, 3) self.setSizePolicy(QSizePolicy.Policy.Minimum, QSizePolicy.Policy.Minimum) self.vbox = make_vbox(self) class Button(QPushButton): def __init__(self, text: str) -&gt; None: super().__init__() self.setFont(FONT) self.setFixedSize(72, 20) self.setText(text) def make_box( box_type: type[QHBoxLayout] | type[QVBoxLayout] | type[QGridLayout], parent: QWidget, margin: int, ) -&gt; QHBoxLayout | QVBoxLayout | QGridLayout: box = box_type(parent) if parent else box_type() box.setAlignment(ALIGNMENT) box.setContentsMargins(*[margin] * 4) return box def make_vbox(parent: QWidget = None, margin: int = 0) -&gt; QVBoxLayout: return make_box(QVBoxLayout, parent, margin) def make_hbox(parent: QWidget = None, margin: int = 0) -&gt; QHBoxLayout: return make_box(QHBoxLayout, parent, margin) class Label(QLabel): def __init__(self, text: str) -&gt; None: super().__init__() self.setFont(FONT) self.set_text(text) def autoResize(self) -&gt; None: self.Height = FONT_RULER.size(0, self.text()).height() self.Width = FONT_RULER.size(0, self.text()).width() self.setFixedSize(self.Width + 3, self.Height + 8) def set_text(self, text: str) -&gt; None: self.setText(text) self.autoResize() class ProgressBar(QProgressBar): def __init__(self) -&gt; None: super().__init__() self.setFont(FONT) 
self.setValue(0) self.setFixedSize(1000, 25) class DownThread(QThread): update = pyqtSignal(dict) def __init__(self, parent: QWidget, url: str, folder: str) -&gt; None: super().__init__(parent) self.url = url self.folder = folder self.line = &quot;&quot; self.stats = {} def run(self) -&gt; None: self.total = 0 res = requests.head(url) if res.status_code == 200 and (total := res.headers.get(&quot;Content-Length&quot;)): self.total = int(total) self.process = subprocess.Popen( BASE_COMMAND + [f&quot;--dir={self.folder}&quot;, self.url], stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) self.monitor() self.quit() def monitor(self) -&gt; None: self.start = self.elapsed = time.time_ns() self.downloaded = 0 output = self.process.stdout self.buffer = &quot;&quot; while self.process.poll() is None: char = output.read(1) if char in (b&quot;\n&quot;, b&quot;\r&quot;): self.line = self.buffer self.buffer = &quot;&quot; self.update_stats() else: self.buffer += char.decode() self.finish() def update_stats(self) -&gt; None: if match := DOWNLOAD_PROGRESS.search(self.line): current = time.time_ns() new, total = map(self.parse_size, match.groupdict().values()) delta = (current - self.elapsed) / 1e9 speed = (new - self.downloaded) / delta self.stats = { &quot;downloaded&quot;: new, &quot;total&quot;: total, &quot;speed&quot;: speed, &quot;elapsed&quot;: (current - self.start) / 1e9, &quot;eta&quot;: ((total - new) / speed) if speed != 0 else 1e309, } self.elapsed = current self.downloaded = new self.update.emit(self.stats) @staticmethod def parse_size(size: str) -&gt; int: unit = size[-3:] size = size.replace(unit, &quot;&quot;) return (float if &quot;.&quot; in size else int)(size) * UNITS_SIZE[unit] def finish(self): self.elapsed = (time.time_ns() - self.start) // 1e9 total = self.total or self.stats[&quot;total&quot;] self.stats[&quot;downloaded&quot;] = total self.stats[&quot;total&quot;] = total self.stats[&quot;elapsed&quot;] = self.elapsed self.stats[&quot;eta&quot;] = 0 
self.stats[&quot;speed&quot;] = total / self.elapsed self.update.emit(self.stats) class Underbar(Box): def __init__(self): super().__init__() self.setFixedHeight(256) self.progressbar = ProgressBar() self.hbox = make_hbox() self.hbox.addWidget(self.progressbar) self.displays = {} for name in (&quot;Downloaded&quot;, &quot;Total&quot;, &quot;Speed&quot;, &quot;Elapsed&quot;, &quot;ETA&quot;): self.hbox.addWidget(Label(name)) widget = Label(&quot;0&quot;) self.hbox.addWidget(widget) self.displays[name] = widget self.vbox.addLayout(self.hbox) self.button = Button(&quot;Test&quot;) self.vbox.addWidget(self.button) self.button.clicked.connect(self.test) def test(self): self.progressbar.setValue(0) down = DownThread(self, url, &quot;D:/downloads&quot;) down.update.connect(self.update_displays) down.run() def update_displays(self, stats): self.progressbar.setValue(100 * int(stats[&quot;downloaded&quot;] / stats[&quot;total&quot;] + 0.5)) for name, suffix in ((&quot;Downloaded&quot;, &quot;&quot;), (&quot;Total&quot;, &quot;&quot;), (&quot;Speed&quot;, &quot;/s&quot;)): self.displays[name].setText( f&quot;{round(stats[name.lower()] / 1048576, 2)}MiB{suffix}&quot; ) self.displays[&quot;Elapsed&quot;].setText(f'{round(stats[&quot;elapsed&quot;], 2)}s') self.displays[&quot;ETA&quot;].setText(f'{round(stats[&quot;eta&quot;], 2)}s') for label in self.displays.values(): label.autoResize() if __name__ == &quot;__main__&quot;: app = QApplication([]) app.setStyle(&quot;Fusion&quot;) window = Underbar() window.show() sys.exit(app.exec()) </code></pre> <p>How to fix this?</p>
<python><subprocess><qthread><pyqt6>
2024-02-28 17:03:51
1
3,930
Ξένη Γήινος
78,076,321
11,431,332
Inner join a database table with an Excel file by query in Python
<p>I need to perform an inner join between a table in Amazon Redshift and an Excel table. The database table is quite large, and I only need to extract the IDs that are in the Excel file, which contains a substantial list of IDs (approximately 30,000).</p> <p>I attempted to pass the Excel table directly into the <code>execute</code> function but encountered issues. How can I do this correctly?</p> <pre><code>id_table=df['id'] def query_(): query =f&quot;&quot;&quot; SELECT distinct t1.id, t1.value FROM table1 t1 inner join (SELECT id2 from %s ) as t2 on t1.id=t2.id2 &quot;&quot;&quot; return query def get_dataframe2(): query = query_() with psycopg2.connect(dbname=database_, host=server_, user=username_, password=password_, port=port_) as conn: with conn.cursor() as cur: cur.execute(query,(id_table,)) # cur.execute(query) success = False while not success: try: data = cur.fetchall() success = True except: cur.nextset() cols = ['id','value'] return pd.DataFrame(data, columns=cols) </code></pre> <p><a href="https://i.sstatic.net/TOwFc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TOwFc.png" alt="enter image description here" /></a></p>
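One approach that avoids interpolating a pandas Series into the SQL (a sketch, not from the original post: the table and column names are made up, and the stdlib sqlite3 driver stands in for the Redshift connection; psycopg2 would use `%s` placeholders instead of `?`): send the Excel IDs as a parameterized `IN` list, in chunks, so the statement never grows unbounded.

```python
import sqlite3

def fetch_matching(conn, ids, chunk_size=1000):
    """Fetch rows whose id is in `ids`, querying in chunks so the IN
    clause stays bounded. Placeholder style here is sqlite's `?`;
    psycopg2/Redshift would use `%s` instead."""
    rows = []
    cur = conn.cursor()
    for start in range(0, len(ids), chunk_size):
        chunk = ids[start:start + chunk_size]
        placeholders = ",".join("?" * len(chunk))
        cur.execute(
            f"SELECT DISTINCT id, value FROM table1 WHERE id IN ({placeholders})",
            chunk,
        )
        rows.extend(cur.fetchall())
    return rows

# Demo with an in-memory database standing in for Redshift.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER, value TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c"), (4, "d")])
print(fetch_matching(conn, [2, 4, 99]))  # only rows for ids 2 and 4 come back
```

On Redshift specifically, loading the 30,000 IDs into a temporary table and joining against it is another common route, but the chunked `IN` list above needs no extra privileges.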
<python><sql><pandas><excel><amazon-redshift>
2024-02-28 17:00:40
1
331
Yasmine
78,076,178
225,020
How do I type hint the return type of a function based on init if init is overloaded
<p>I have a class that accepts either ints or floats in its init, but the arguments must all be int or all be float, so I am using <code>typing.overload</code> to express that. I want to be able to type hint the return of a function based on the given values.</p> <pre><code>class Vector3: @overload def __init__(self, x: int, y: int, z: int) -&gt; None: ... @overload def __init__(self, x: float, y: float, z: float) -&gt; None: ... def __init__(self, x, y, z) -&gt; None: self._x = x self._y = y self._z = z # This function def __key(self) -&gt; tuple[int | float, int | float, int | float]: return (self._x, self._y, self._z) </code></pre> <p>Also, how would I type hint the values of x, y, and z? I plan to use <code>@property</code> to encapsulate the _x, _y, _z values and don't know how I'd type hint them either.</p>
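A sketch of one alternative (my own reconstruction, not the asker's code): if the real constraint is "all coordinates share one numeric type", a `TypeVar` constrained to `int` and `float` lets the return types follow the instance's type without any `overload`:

```python
from typing import Generic, TypeVar

T = TypeVar("T", int, float)  # constrained: either all-int or all-float

class Vector3(Generic[T]):
    def __init__(self, x: T, y: T, z: T) -> None:
        self._x = x
        self._y = y
        self._z = z

    def _key(self) -> "tuple[T, T, T]":
        # The tuple's element type follows the type the instance was built with.
        return (self._x, self._y, self._z)

    @property
    def x(self) -> T:
        return self._x

v_int = Vector3(1, 2, 3)            # checkers infer Vector3[int]
v_float = Vector3(1.0, 2.0, 3.0)    # checkers infer Vector3[float]
print(v_int._key(), v_float.x)
```

With this shape, `v_int._key()` is seen as `tuple[int, int, int]` by a type checker, and the properties are annotated once as `T` instead of `int | float`.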
<python><python-typing>
2024-02-28 16:37:44
1
27,595
Jab
78,076,083
2,423,851
Error when running a Python machine learning model
<p>I am trying to follow tutorial video and run <a href="https://colab.research.google.com/github/camenduru/PanoHead-colab/blob/main/PanoHead_custom_colab.ipynb#scrollTo=v9wpwlGfiX2e" rel="nofollow noreferrer">this notebook</a> on google colab.</p> <p>After running second block of code, the following error is produced:</p> <pre><code>/content/3DDFA_V2/bfm/bfm.py:34: FutureWarning: In the future `np.long` will be defined as the corresponding NumPy scalar. self.keypoints = bfm.get('keypoints').astype(np.long) # fix bug Traceback (most recent call last): File &quot;/content/3DDFA_V2/recrop_images.py&quot;, line 322, in &lt;module&gt; main(args) File &quot;/content/3DDFA_V2/recrop_images.py&quot;, line 183, in main tddfa = TDDFA(gpu_mode=gpu_mode, **cfg) File &quot;/content/3DDFA_V2/TDDFA.py&quot;, line 34, in __init__ self.bfm = BFMModel( File &quot;/content/3DDFA_V2/bfm/bfm.py&quot;, line 34, in __init__ self.keypoints = bfm.get('keypoints').astype(np.long) # fix bug File &quot;/usr/local/lib/python3.10/dist-packages/numpy/__init__.py&quot;, line 328, in __getattr__ raise AttributeError(&quot;module {!r} has no attribute &quot; AttributeError: module 'numpy' has no attribute 'long'. Did you mean: 'log'? cp: cannot stat '/content/3DDFA_V2/crop_samples/img/*': No such file or directory </code></pre> <p>update: have found more info about the particular error here:</p> <p><a href="https://stackoverflow.com/questions/76389395/attributeerror-module-numpy-has-no-attribute-long">AttributeError: module &#39;numpy&#39; has no attribute &#39;long&#39;</a></p> <p>Would like to try to roll back <code>numpy</code> to previous version but not sure how to go about doing that in a colab like this.</p>
<python><numpy><machine-learning>
2024-02-28 16:26:07
1
776
Petar Ivcec
78,076,046
7,270,723
Why is str("").strip() in (".32") True in Python?
<p>I've developed a habit of searching within collections to verify the existence of elements as part of my conditions. Recently, I encountered a test case that necessitated this approach, but I found myself unable to provide a clear explanation for the observed behavior or outcome.</p> <p>Why <code>str(&quot;&quot;).strip() in (&quot;.32&quot;) =&gt; True</code></p> <p><code>print(str(&quot;&quot;).strip() in (&quot;.32&quot;))</code></p> <p>But <code>str(&quot;&quot;).strip() in [&quot;.32&quot;] =&gt; False</code></p> <p>It seems to be something to do with the tuple, but I don't know why...</p>
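For reference, the behavior can be demonstrated directly: `in` on a string is a substring test (and the empty string is a substring of every string), `(".32")` is just a parenthesized string rather than a tuple, and `in` on a list compares whole elements for equality:

```python
s = ".32"

print("" in s)         # True:  membership on a string is a substring test
print("" in [s])       # False: membership on a list compares whole elements
print(type((".32")))   # parentheses alone do not make a tuple; this is a str
print(type((".32",)))  # the trailing comma is what makes a tuple
```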
<python><python-3.x><list>
2024-02-28 16:22:24
1
351
Jesse
78,075,946
4,692,610
Is there a way in Python to translate a hexadecimal representation of a fixed-point number to the decimal representation of a fixed-point number?
<p>I am aware of <a href="https://pypi.org/project/numfi/" rel="nofollow noreferrer">numfi</a>, which allows me to represent a decimal fixed point as a hexadecimal. For example:</p> <pre><code>import numfi y = numfi(0.5378, 1, 32, 30) y.hex array(['226B50B1'], dtype='&lt;U8') </code></pre> <p>However, let's say I only have the value <code>0x226B50B1</code> and know that the representation is Q1.30, but don't know what decimal value that is. Is there a method that allows me to see the equivalent decimal representation of 0.5378? If not in numfi, then any other method in Python?</p> <p>For example:</p> <pre><code>y = numfi(0x226B50B1, 1, 32, 30) </code></pre> <p>Does not do what I want.</p>
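Independent of numfi, a Qm.n value is just its raw integer divided by 2**n, with a two's-complement adjustment when the sign bit is set. A stdlib-only sketch (the function name is my own):

```python
def q_to_float(raw: int, total_bits: int = 32, frac_bits: int = 30) -> float:
    """Interpret `raw` as a signed two's-complement fixed-point number
    with `total_bits` bits, `frac_bits` of them fractional (e.g. Q1.30)."""
    if raw >= 1 << (total_bits - 1):   # sign bit set, so the value is negative
        raw -= 1 << total_bits
    return raw / (1 << frac_bits)

print(q_to_float(0x226B50B1))  # ~0.5378, matching the numfi example above
```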
<python><decimal><fixed-point><bit-representation>
2024-02-28 16:10:12
0
323
MW2023
78,075,943
8,648,710
Multi-node/host training with the sharding API
<p>I wanted to do multi-node training with JAX. For context, I'm on TPUs (I have 2x <code>v3-8</code> nodes).</p> <p>The <a href="https://jax.readthedocs.io/en/latest/multi_process.html#initializing-the-cluster" rel="nofollow noreferrer">docs</a> suggest using <code>distributed.initialize()</code> along with <code>xmap</code>/<code>pmap</code>. However, that is now discouraged - the official way is to use sharding with <code>device_put</code> calls and let XLA autoparallelize the code across the local devices. This is what I'm using.</p> <p>But I'm still confused about how to use sharding with a multi-node setup. AIUI, we should be able to do a sort of 3D sharding like <code>(2, 8, 1)</code> for 2x TPUs with <code>8</code> local devices each, DDP styled. This would allow us to switch between <code>n</code>-way data parallelism and <code>m</code>-way model parallelism as outlined <a href="https://jax.readthedocs.io/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html#examples-neural-networks" rel="nofollow noreferrer">here</a>.</p> <p>That however doesn't seem like the correct way to accomplish this.</p> <p>Can someone provide a minimal example here to demonstrate how exactly we modify the sharding to work in a multi-host setting?</p>
<python><python-3.x><jax>
2024-02-28 16:09:57
0
1,257
neel g
78,075,813
9,284,651
Python: Iterate over a data frame in pandas and replace values that do not contain a string from a list
<p>My DF looks like below:</p> <pre><code>x date_from cleaned_date 1 21 JUNE 23.59 2024-06-23 2 18TH JUN 23:59 2024-06-18 3 01TH JULY (23.59 HRS) 2024-07-01 4 28th June 2023 2023-06-28 5 5TH MAY 2023 2023-05-05 6 JUNE 27, 2023 2023-06-27 </code></pre> <p>I wrote code that extracts the correct date from the date_from column, but somehow it adds a 'year' out of nothing. There is no year information in some cases, but it still tries to add it. I wrote code that deals with this: it replaces the whole value in the cleaned_date column with None if there is no year information. It looks like below:</p> <pre><code>df.loc[(~df['date_from'].astype('str').str.contains('2025')) &amp; (~df['date_from'].astype('str').str.contains('2024')) &amp; (df['date_from'].astype('str') != 'nan') &amp; (~df['date_from'].astype('str').str.contains('2023')) &amp; (~df['date_from'].astype('str').str.contains('2022')) &amp; (~df['date_from'].astype('str').str.contains('2021')) &amp; (~df['date_from'].astype('str').str.contains('2020')) &amp; (~df['date_from'].astype('str').str.contains('2019')), 'cleaned_date'] = None </code></pre> <p>Unfortunately I have more years to check, so is there a way to use, for instance, a for loop? Do you have any idea?</p> <p>Regards</p>
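One way to collapse the repeated conditions (a sketch using made-up sample rows, not the asker's full data): generate the year strings with `range` and join them into a single regex for one `str.contains` call:

```python
import pandas as pd

df = pd.DataFrame({
    "date_from": ["21 JUNE 23.59", "28th June 2023", "5TH MAY 2023"],
    "cleaned_date": ["2024-06-23", "2023-06-28", "2023-05-05"],
})

# One regex alternation instead of one .str.contains() call per year.
years_pattern = "|".join(str(y) for y in range(2019, 2026))
has_year = df["date_from"].astype(str).str.contains(years_pattern, na=False)
df.loc[~has_year, "cleaned_date"] = None
print(df)
```

Widening the checked range is then a matter of changing the `range` bounds rather than adding another chained condition.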
<python><pandas><dataframe><for-loop>
2024-02-28 15:48:40
2
403
Tmiskiewicz
78,075,720
10,721,627
Polars throws OperationalError when trying to write a dataframe to a sqlite3 database
<pre class="lang-py prettyprint-override"><code>import polars as pl import sqlite3 conn = sqlite3.connect(&quot;test.db&quot;) df = pl.DataFrame({&quot;col1&quot;: [1, 2, 3]}) </code></pre> <p>According to the documentation of <a href="https://docs.pola.rs/py-polars/html/reference/api/polars.DataFrame.write_database.html" rel="nofollow noreferrer"><code>pl.write_database</code></a>, I need to pass a connection URI string e.g. &quot;sqlite:////path/to/database.db&quot; for SQLite database:</p> <pre class="lang-py prettyprint-override"><code>df.write_database(&quot;test_table&quot;, f&quot;sqlite:////test.db&quot;, if_table_exists=&quot;replace&quot;) </code></pre> <p>However, I got the following error:</p> <pre><code>OperationalError: (sqlite3.OperationalError) unable to open database file </code></pre> <p>EDIT: Based on the answer, install SQLAlchemy with the <code>pip install polars[sqlalchemy]</code> command.</p>
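For what it's worth, the slash count is easy to get wrong here: `sqlite:///` is followed by the path, and an absolute path contributes its own leading `/`, so four slashes mean "absolute path from the filesystem root"; `sqlite:////test.db` therefore points at `/test.db`, which is usually not writable. A small helper illustrating the convention (the function is my own sketch and works at the string level only; actually writing the frame additionally needs SQLAlchemy installed, e.g. via `pip install polars[sqlalchemy]`):

```python
from pathlib import Path

def sqlite_uri(path: str) -> str:
    """Build a SQLAlchemy-style SQLite URI: 'sqlite:///' plus the path.
    An absolute path starts with '/', which is where the four-slash
    form comes from."""
    return f"sqlite:///{Path(path).as_posix()}"

print(sqlite_uri("test.db"))        # sqlite:///test.db (relative to the cwd)
print(sqlite_uri("/data/test.db"))  # sqlite:////data/test.db (absolute)
```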
<python><sqlite><python-polars>
2024-02-28 15:37:56
2
2,482
Péter Szilvási
78,075,639
12,415,855
Selenium can't input text into a field on the website
<p>I tried to put a string in a txt-field on the following site with the following code:</p> <pre><code>import time from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By if __name__ == '__main__': options = Options() options.add_argument(&quot;start-maximized&quot;) options.add_argument('--log-level=3') options.add_experimental_option(&quot;prefs&quot;, {&quot;profile.default_content_setting_values.notifications&quot;: 1}) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('excludeSwitches', ['enable-logging']) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') srv=Service() driver = webdriver.Chrome (service=srv, options=options) waitWD = WebDriverWait (driver, 10) firstRun = True driver.get (&quot;https://translate.google.com/&quot;) time.sleep(3) waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[@aria-label=&quot;Alle akzeptieren&quot;]'))).click() waitWD.until(EC.element_to_be_clickable((By.XPATH, '//c-wiz[@jsdata=&quot;deferred-c2&quot;]'))).click() waitWD.until(EC.element_to_be_clickable((By.XPATH, '//c-wiz[@jsdata=&quot;deferred-c2&quot;]'))).send_keys(&quot;This is some test!&quot;) </code></pre> <p>But I only get this error:</p> <pre><code>(openAIALL) C:\DEV\Fiverr\ORDER\robalf\SOLtranslateTXT&gt;python test1.py Traceback (most recent call last): File &quot;C:\DEV\Fiverr\ORDER\robalf\SOLtranslateTXT\test1.py&quot;, line 28, in &lt;module&gt; waitWD.until(EC.element_to_be_clickable((By.XPATH, '//c-wiz[@jsdata=&quot;deferred-c2&quot;]'))).send_keys(&quot;This is some test!&quot;) File 
&quot;C:\DEV\.venv\openAIALL\lib\site-packages\selenium\webdriver\remote\webelement.py&quot;, line 231, in send_keys self._execute( File &quot;C:\DEV\.venv\openAIALL\lib\site-packages\selenium\webdriver\remote\webelement.py&quot;, line 395, in _execute return self._parent.execute(command, params) File &quot;C:\DEV\.venv\openAIALL\lib\site-packages\selenium\webdriver\remote\webdriver.py&quot;, line 347, in execute self.error_handler.check_response(response) File &quot;C:\DEV\.venv\openAIALL\lib\site-packages\selenium\webdriver\remote\errorhandler.py&quot;, line 229, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable (Session info: chrome=120.0.6099.200) </code></pre> <p>Why is it not possible to put a text in this field?</p>
<python><selenium-webdriver>
2024-02-28 15:25:51
3
1,515
Rapid1898
78,075,463
395,457
FB Hydra extending configurations
<p>I'm trying to use Facebook's Hydra package to configure a project. In the example below, I've tried to keep names as generic as possible. Suppose that there are multiple model architecture types (e.g. different types of segmentation task) that the user can load. The main application loads the appropriate model using a <code>task</code> key in the model config.</p> <p>My config file tree looks like:</p> <pre><code>├── config.yaml ├── data │   └── default.yaml ├── model │   ├── task_1 │   │   ├── base.yaml │   │   ├── variant_1.yaml │   │   ├── variant_2.yaml │   └── task_2 │   ├── base.yaml │   ├── default.yaml │   ├── variant_1.yaml </code></pre> <p>My default config contains:</p> <pre class="lang-yaml prettyprint-override"><code>defaults: - data: default - model: task_1/base </code></pre> <p>Loading this works, and I get a <code>model</code> key with the contents of the <code>base.yaml</code> config file.</p> <p>Now, suppose base has a key called <code>config</code> that maps to a dictionary of hyperparameters. I would like to load an extended config e.g. <code>variant_1.yaml</code> which extends that config and of keeps the original parameters in the base configuration.</p> <pre class="lang-py prettyprint-override"><code>launch(&quot;config&quot;, overrides=[&quot;model=task_1/variant_1&quot;])` </code></pre> <p>In <code>variant_1.yaml</code> I've tried:</p> <pre><code>defaults: - base </code></pre> <p>but this gives an exception:</p> <pre><code>MissingConfigException: In 'model/task_1/variant_1': Could not load 'model/base'. </code></pre> <p>if I use <code>task_1/base</code> instead, then this works, but then the entire base config is loaded as a separate dictionary key called <code>task_1</code>:</p> <pre><code>{ task_1: {from base}, config: {... from variant_1}} } </code></pre> <p>rather than:</p> <pre><code>{ ... 
config: { base, extended with other params from variant_1} } </code></pre> <p>What is the correct way to inherit with this sort of nested directory structure (or is there a more canonical way to do this)? The documentation from Hydra seems to suggest this is straightforward, but I can't figure it out from the examples.</p>
<python><fb-hydra><omegaconf>
2024-02-28 15:00:56
1
2,917
Josh
78,075,375
13,812,982
Calculate row-by-row changes on a DataFrame where row data is a vector
<p>As an example:</p> <pre><code>import pandas as pd a = pd.Series([[1,2,3],[4,5,6],[7,8,9]]) a 0 [1, 2, 3] 1 [4, 5, 6] 2 [7, 8, 9] dtype: object </code></pre> <p>I am looking for the percentage change for each row to the row below, eg the result:</p> <pre><code>0 [-0.75, -0.6, -0.5] 1 [-0.42857142857142855, -0.375, -0.333333333333... 2 None dtype: object </code></pre> <p>I have this solution:</p> <pre><code>import pandas as pd a = pd.Series([[1,2,3],[4,5,6],[7,8,9]]) pd.Series([ [ [ (xx-yy)/yy for xx,yy in zip(x,y)] if y is not None else None] for x,y in zip(a,a.shift(-1))]) </code></pre> <p>but this seems overly complex. I'd be grateful for a neater solution.</p>
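If the inner lists are all the same length (an assumption beyond the question's example), stacking them into a 2-D NumPy array makes the whole computation one vectorized expression:

```python
import numpy as np
import pandas as pd

a = pd.Series([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

arr = np.array(a.tolist(), dtype=float)   # shape (3, 3): one row per list
pct = (arr[:-1] - arr[1:]) / arr[1:]      # change from each row to the next
result = pd.Series(list(pct) + [None])    # the last row has no successor
print(result)
```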
<python><dataframe>
2024-02-28 14:49:40
2
4,331
DS_London
78,075,240
2,353,911
Python timedelta: microseconds vs milliseconds confusion
<p>I have what appears to be a very silly question.</p> <p>My impression is that the <code>timedelta.microseconds</code> attribute really returns milliseconds (at least in Python 3.10).</p> <p>Running the following code:</p> <pre><code>import sys from datetime import datetime, timedelta from time import sleep print(sys.version) start: datetime = datetime.now() sleep(3) # time.sleep takes a non-keyword &quot;secs&quot; argument aka seconds - see docstring end: datetime = datetime.now() delta: timedelta = (end - start) print(delta) print(delta.microseconds) print(delta.microseconds / 1000) print(delta.seconds) </code></pre> <p>... will print out something in the lines of (my comments at end of line):</p> <pre><code>3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] # full version info for reference 0:00:03.003650 # representation of the delta as requested 3650 # isn't that &gt;&gt; milliseconds &lt;&lt;&lt;? aka roughly 3 seconds * 1000 ? 3.65 # dividing the delta by 1000 roughly gives me the original argument 3 # accessing the &quot;seconds&quot; attribute instead works as expected --&gt; 3 seconds (roughly) </code></pre> <p>I couldn't find anything to clarify this for me in the official 3.10 datetime doc <a href="https://docs.python.org/3.10/library/datetime.html#datetime.timedelta.microseconds" rel="nofollow noreferrer">page</a>, which makes me wonder whether this is an actual naming bug or I'm overlooking something obvious.</p> <p>Note also that the timedelta class doesn't appear to have a &quot;milliseconds&quot; attribute I can use for comparison.</p>
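For reference, `timedelta.microseconds` is only the fractional-second component (0..999999), not the total duration converted to any unit, which is why the value above merely resembles milliseconds (the delta is 3 whole seconds plus 3650 leftover microseconds). A minimal demonstration:

```python
from datetime import timedelta

# The delta printed above: 3 whole seconds plus 3650 leftover microseconds.
delta = timedelta(seconds=3, microseconds=3650)

print(delta.seconds)            # 3     (whole-seconds component)
print(delta.microseconds)       # 3650  (microseconds component, 0..999999)
print(delta.total_seconds())    # 3.00365 (the full duration in seconds)
print(delta / timedelta(milliseconds=1))  # 3003.65 (duration in milliseconds)
```

Dividing by `timedelta(milliseconds=1)` is the documented way to express a duration in milliseconds, which covers the missing "milliseconds" attribute.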
<python><python-3.x><datetime><timedelta>
2024-02-28 14:31:00
1
48,632
Mena
78,075,159
2,469,032
Concatenate values in a column by group as a list and assign it to a variable in a pandas dataframe
<p>I have the following dataframe:</p> <pre><code>game = pd.DataFrame({ 'team': ['A', 'A', 'B', 'B', 'C', 'C', 'C'], 'members': [1, 2, 3, 4, 5, 6, 7] }) game </code></pre> <pre><code> team members 0 A 1 1 A 2 2 B 3 3 B 4 4 C 5 5 C 6 6 C 7 </code></pre> <p>I would like to concatenate the values from column 'members' by group 'team' to form a list, and attach them to a new variable all_team_members. The expected results are:</p> <pre><code> team members all_team_members 0 A 1 [1, 2] 1 A 2 [1, 2] 2 B 3 [3, 4] 3 B 4 [3, 4] 4 C 5 [5, 6, 7] 5 C 6 [5, 6, 7] 6 C 7 [5, 6, 7] </code></pre> <p>I have the following code that I thought would work, but results are not as expected</p> <pre><code>game['all_teamm_members'] = game.groupby('team').members.transform(lambda x : x.tolist()) </code></pre> <p>P.S.: I know I can accomplish this by first use <code>game.groupby('team').members.apply(lambda x : x.tolist())</code> to create the list at unique group ('team') level and then merge the dataframe back to the original dataframe, however I am really curious how I can accomplish this using <code>transform()</code></p>
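A sketch of one commonly used alternative (not `transform`-based, since `transform` is built around returning one value per row): aggregate each group to a list once, then broadcast it back with `map`:

```python
import pandas as pd

game = pd.DataFrame({
    "team": ["A", "A", "B", "B", "C", "C", "C"],
    "members": [1, 2, 3, 4, 5, 6, 7],
})

# Build one list per team, then map each row's team to its list.
team_lists = game.groupby("team")["members"].agg(list)
game["all_team_members"] = game["team"].map(team_lists)
print(game)
```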
<python><pandas>
2024-02-28 14:21:49
2
1,037
PingPong
78,075,152
3,836,815
Preserve line wrapping in ruamel.yaml
<p>Is there a way to preserve line wrapping with <code>ruamel.yaml</code>? The closest I have found is to use the <code>width</code> property but that doesn't quite do it.</p> <p><a href="https://stackoverflow.com/questions/42170709/prevent-long-lines-getting-wrapped-in-ruamel-yaml">Prevent long lines getting wrapped in ruamel.yaml</a></p> <p>My use case: I have a yaml file with some long lines and some where they are already wrapped. To minimize changes when doing a round-trip update, I'd like the untouched lines to keep their wrapping.</p> <pre><code>import ruamel.yaml import sys yaml = ruamel.yaml.YAML() # yaml.width = 999 instr = &quot;&quot;&quot;\ description: This is a long line that has been wrapped. url: https://long-url.com/add-more-characters-so-that-it-goes-out-farther-than-the-default-80-cols value: 7 &quot;&quot;&quot; test = yaml.load(instr) # Modify &lt;value&gt; test['value'] += 1 yaml.dump(test, sys.stdout) </code></pre> <p>Output below. The <code>description</code> is unwrapped and <code>url</code> is moved to a new line.</p> <pre><code>description: This is a long line that has been wrapped. url: https://long-url.com/add-more-characters-so-that-it-goes-out-farther-than-the-default-80-cols value: 8 </code></pre> <p>If I uncomment the <code>yaml.width = 999</code> line the <code>url</code> looks the way I want, but <code>description</code> changes.</p> <pre><code>description: This is a long line that has been wrapped. url: https://long-url.com/add-more-characters-so-that-it-goes-out-farther-than-the-default-80-cols value: 8 </code></pre> <p>What I really want is the <code>description</code> and <code>url</code> lines to match the original, only <code>value</code> changing:</p> <pre><code>description: This is a long line that has been wrapped. url: https://long-url.com/add-more-characters-so-that-it-goes-out-farther-than-the-default-80-cols value: 8 </code></pre>
<python><yaml><ruamel.yaml>
2024-02-28 14:20:40
2
22,535
crdavis
78,075,088
3,042,018
Get CodeSpaces VS Code to apply correct indentation
<p>I have checked every setting I can think of, but I must be missing something, as when I hit <kbd>Enter</kbd>, I get a 2-space indent, not a 4-space indent when trying to write Python in Visual Studio Code in <a href="https://github.com/features/codespaces" rel="nofollow noreferrer">GitHub Codespaces</a>.</p> <p>This makes me sad. Please see the images of settings and the result. Is there a solution?</p> <p><a href="https://i.sstatic.net/Qh1zg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qh1zg.png" alt="Enter image description here" /></a></p> <p><a href="https://i.sstatic.net/XYIDL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XYIDL.png" alt="Enter image description here" /></a></p> <p><a href="https://i.sstatic.net/9U5FQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9U5FQ.png" alt="Enter image description here" /></a></p> <p><a href="https://i.sstatic.net/BbaF2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BbaF2.png" alt="Enter image description here" /></a></p> <p><a href="https://i.sstatic.net/szu1Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/szu1Q.png" alt="Enter image description here" /></a></p>
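One commonly cited culprit (an assumption here, not confirmed by the screenshots) is `editor.detectIndentation`, which silently overrides the configured tab size based on the file's existing contents. A `settings.json` fragment that pins 4-space indentation for Python:

```json
{
  // Stop VS Code from inferring a 2-space indent from the file's contents.
  "editor.detectIndentation": false,
  "editor.tabSize": 4,
  "editor.insertSpaces": true,
  // Per-language override so other file types keep their own settings.
  "[python]": {
    "editor.tabSize": 4,
    "editor.insertSpaces": true
  }
}
```

In Codespaces this can live in the workspace `.vscode/settings.json` or in the user settings synced to the codespace.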
<python><visual-studio-code><indentation><github-codespaces>
2024-02-28 14:11:04
0
3,842
Robin Andrews
78,075,052
4,119,262
Attempt to split a word at uppercase letters (without enumerate, functions or regex) leads to errors (e.g. not enough values to unpack)
<p>I am trying to learn Python. In this context I need to split a word when encountering an uppercase letter, convert that letter to lower case and then insert a &quot;_&quot;.</p> <p>I have seen various quite complex answers <a href="https://stackoverflow.com/questions/63715861/insert-space-if-uppercase-letter-is-preceded-and-followed-by-one-lowercase-lette">here</a>, but I am trying to do it as simply as possible.</p> <p>So far, here is my code:</p> <pre><code>word = input(&quot;What is the camelCase?&quot;) for i , k in word: if i.isupper() and k.islower(): word2 = k + &quot;_&quot; + i.lower + k print(word2) else: print(word) </code></pre> <p>This leads to &quot;<code>not enough values to unpack (expected 2, got 1)</code>&quot; after I input my word.</p> <p>Another attempt here:</p> <pre><code>word = input(&quot;What is the camelCase?&quot;) for i and k in word: if i.isupper() and k.islower(): word2 = k + &quot;_&quot; + i.lower + k print(word2) else: print(word) </code></pre> <p>In this case I cannot even enter my word; I directly get the error: &quot;<code>cannot assign to expression</code>&quot;</p> <p>A proposed option has been as follows:</p> <pre><code>word = input(&quot;What is the camelCase?&quot;) word2 = &quot;&quot; for i, char in enumerate(word): if i &gt; 0 and char.isupper() and word[i - 1].islower(): word2 += &quot;_&quot; + char.lower() else: word2 += char print(word2) </code></pre> <p>However, I also do not want to use &quot;enumerate&quot;.</p> <p>I think that I have something almost ok now, which is this:</p> <pre><code>word = input(&quot;What is the camelCase? &quot;) for char in word: if char.islower(): print(char) else: print(&quot;_&quot; + char.lower()) </code></pre> <p>Sadly, the output it gives me is not perfect; for &quot;testTest&quot; I get:</p> <pre><code>t e s t _t e s t </code></pre>
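A sketch of the last attempt with the per-character `print` calls replaced by string accumulation (the `input` call is swapped for a fixed word so the example is self-contained):

```python
word = "testTest"  # stand-in for input("What is the camelCase? ")

snake = ""
for char in word:
    if char.isupper():
        snake += "_" + char.lower()   # break the word before each capital
    else:
        snake += char

print(snake)  # testTest -> test_test
```

Printing once at the end, instead of inside the loop, is what removes the one-character-per-line output; no `enumerate`, extra function or regex is involved.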
<python>
2024-02-28 14:04:46
1
447
Elvino Michel
78,074,899
13,181,599
OutOfMemoryError: CUDA out of memory, both on a local machine and in Google Colab
<p>I am trying to use the Stable Diffusion XL model to generate images. But after installing and painfully matching versions of Python, PyTorch, diffusers and CUDA, I got this error:</p> <p><code>OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU 0 has a total capacty of 14.75 GiB of which 857.06 MiB is free. Process 43684 has 13.91 GiB memory in use. Of the allocated memory 13.18 GiB is allocated by PyTorch, and 602.64 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. </code></p> <p>Now it may seem obvious to just get more GPU memory, but I have tried this on my local computer with an NVIDIA GeForce RTX 3060 6GB, and also in Google Colab with 15 GB of VRAM!</p> <p>I have tried every solution on Stack Overflow and GitHub and still can't fix this issue. Solutions I have tried:</p> <ol> <li>I am not training the model here. When training, batch_size was 1.</li> <li>Added these environment variables: PYTHONUNBUFFERED=1;PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256</li> <li>Resized images to 512x512</li> <li>I have read somewhere that I need to downgrade the PyTorch version to 1.8 because of the RTX 3060 GPU and CUDA version 11.3.
But can't install pytorch version 1.8 : <code>Could not find a version that satisfies the requirement torch==1.8.1</code></li> </ol> <p>Here is my python code:</p> <pre><code> from diffusers import DiffusionPipeline, StableDiffusionXLImg2ImgPipeline import torch import gc #for cleaning memory gc.collect() del variables torch.cuda.empty_cache() model = &quot;stabilityai/stable-diffusion-xl-base-1.0&quot; pipe = DiffusionPipeline.from_pretrained( model, torch_dtype=torch.float16, ) pipe.to(&quot;cuda&quot;) pipe.load_lora_weights(&quot;model/&quot;, weight_name=&quot;pytorch_lora_weights.safetensors&quot;) refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained( &quot;stabilityai/stable-diffusion-xl-refiner-1.0&quot;, torch_dtype=torch.float16, ) refiner.to(&quot;cuda&quot;) prompt = &quot;a portrait of maha person 4k, uhd&quot; for seed in range(1): generator = torch.Generator(&quot;cuda&quot;).manual_seed(seed) image = pipe(prompt=prompt, generator=generator, num_inference_steps=25) image = image.images[0] image.save(f&quot;output_images/{seed}.png&quot;) image = refiner(prompt=prompt, generator=generator, image=image) image = image.images[0] image.save(f&quot;images_refined/{seed}.png&quot;) </code></pre>
<python><pytorch><google-colaboratory><stable-diffusion>
2024-02-28 13:41:01
1
301
Mahammad Yusifov
78,074,843
307,297
Jython: syntax check available?
<p>I am using Jython for a simple Python script task inside a Java app.</p> <pre class="lang-java prettyprint-override"><code>Properties ps = new Properties(); ps.put (&quot;python.console.encoding&quot;, &quot;UTF-8&quot;); ps.put (&quot;python.import.site&quot;, &quot;false&quot;); PythonInterpreter.initialize (System.getProperties (), ps, new String [0]); this.pi = new PythonInterpreter (); String pycode = &quot;print \&quot;abc\&quot;&quot;; pi.exec (pycode); </code></pre> <p>This works, but now I need to think of a way to check the code before it is executed.</p> <p>I tried <code>pi.compile</code>:</p> <pre class="lang-java prettyprint-override"><code>PyCode pc = pi.compile (pycode); pi.exec (pc); </code></pre> <p>But I do not see any effect with that regarding my problem.</p> <p>Method compile produces a result even if exec fails afterwards (e.g. wrong syntax).</p>
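For comparison, here is how the same check looks in plain (C)Python using the built-in `compile`, which raises `SyntaxError` on invalid source; whether Jython's `PythonInterpreter.compile` fails at the same point may depend on the Jython version (an assumption, not verified here):

```python
def check_syntax(source):
    """Return an error description if `source` is not valid Python, else None."""
    try:
        compile(source, "<script>", "exec")
    except SyntaxError as exc:
        return "line %s: %s" % (exc.lineno, exc.msg)
    return None

print(check_syntax('print("abc")'))       # None: the source parses
print(check_syntax("if True\n    pass"))  # reports the syntax error (missing colon)
```

Note that Jython implements Python 2 grammar, so sources such as `print "abc"` that fail under Python 3's `compile` are valid there.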
<python><java><jython>
2024-02-28 13:33:00
0
12,695
chris01
78,074,725
22,221,987
How to typehint dataclass field in function signature
<p>I have a frozen dataclass with some constants and a function which accepts any constant from this class as its argument.<br /> How can I typehint this mechanic? I need to tell the user that the function expects any field's value from this specific dataclass.<br /> It should look like this:</p> <pre><code>from dataclasses import dataclass @dataclass(frozen=True) class Consts: const_0: int = 0 const_1: int = 1 const_2: int = 2 const_3: int = 3 def foo(param: Consts.field): return param </code></pre> <p><strong>UPD</strong>:<br /> Following @JaredSmith's hint I've tried to use an Enum. That looks much more correct, but the problem still exists.<br /> I've tried to use <code>typing.Literal</code>, like this:</p> <pre><code>def foo(param: Literal[Consts.const_0, Consts.const_1]): return param </code></pre> <p>but it will not give us a correct typehint. In the case of <code>foo(Consts)</code> we will get this not very obvious warning: <code>Expected type 'Consts', got 'Type[Consts]' instead </code> instead of something like: <code>Expected type Consts.value got Consts instead</code></p> <p>So, the main question, after the updates, is: how to aggregate constants into logical groups in the code to simplify their usage (dataclass or Enum), and how to typehint the corresponding solution?</p> <p><strong>UPD</strong>: Many thanks for all the comments. They showed me a lot of interesting facts. As the answer to the current question I chose the IntEnum variant by @JaredSmith. Also, thanks @chepner and @InSync for the deep explanation.</p>
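A sketch of the `IntEnum` variant the asker settled on (member names upper-cased by convention; the rest is my own reconstruction): the enum class itself becomes the annotation, so a checker accepts any member and rejects the class object or a bare int.

```python
from enum import IntEnum

class Consts(IntEnum):
    CONST_0 = 0
    CONST_1 = 1
    CONST_2 = 2
    CONST_3 = 3

def foo(param: Consts) -> Consts:
    return param

value = foo(Consts.CONST_1)
print(value is Consts.CONST_1)  # True: the member itself comes back
print(value + 1)                # 2: IntEnum members still behave like ints
```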
<python><python-typing><python-dataclasses>
2024-02-28 13:16:09
2
309
Mika
78,074,666
10,764,260
Depth calculation for 3D-2D projections with different intrinsics and extrinsic matrices
<p>I have an IKEA manual and the 3D parts in the assembled state. For each step in the manual I have calculated intrinsic and extrinsic matrices to project the part onto the manual. Currently the intrinsic matrix is different for each part, and it is my understanding that this is unavoidable: the manual steps are drawn, so each part might have individual scaling and distortion, which is something that cannot be encoded just in the extrinsics.</p> <p>Projecting the parts onto the manual already works reasonably well, but the problem is handling occlusions. If my understanding is correct, then techniques like z-buffering would work well if one intrinsic matrix were used for all the parts. But with different intrinsics it doesn't work, i.e. different focal lengths will change the z-axis, for instance.</p> <p>To provide some sample code:</p> <pre class="lang-py prettyprint-override"><code>def _get_mask_part(self, path: pathlib.Path, extrinsic: Float[np.ndarray, &quot;4 4&quot;], intrinsic: Float[np.ndarray, &quot;3 3&quot;], height: int, width: int) -&gt; Tuple[Bool[np.ndarray, &quot;height width&quot;], Float[np.ndarray, &quot;height width&quot;]]: &quot;&quot;&quot; Uses trimesh and pyrender to load the mesh of the specific parts, applies all transformations, puts a camera in the scene and finally projects and renders the scene onto an image to obtain bitmasks for the parts. Args: path: Path to the mesh .obj file of this part. extrinsic: Extrinsic transformation of this part. intrinsic: Intrinsic transformation of this part. height: Height of the manual image. width: Width of the manual image. Returns: Tuple consisting of the bitmask of the specific part in the image plane and pixel depth information of this part which will be used for mean depth calculation. 
&quot;&quot;&quot; scene = pyrender.Scene(ambient_light=[0.3, 0.3, 0.3, 1.0], bg_color=[0.0, 0.0, 0.0, 1.0]) fuze_trimesh = trimesh.load(path, force='mesh') mesh = pyrender.Mesh.from_trimesh(fuze_trimesh) scene.add(mesh, pose=extrinsic) fx, fy, cx, cy = intrinsic[0, 0], intrinsic[1, 1], intrinsic[0, 2], intrinsic[1, 2] camera = pyrender.IntrinsicsCamera(fx=fx, fy=fy, cx=cx, cy=cy, znear=0.05, zfar=100) camera_pose = np.eye(4) camera_pose[1][1] = -1 scene.add(camera, pose=camera_pose) renderer = pyrender.OffscreenRenderer(viewport_height=height, viewport_width=width) _, depth = renderer.render(scene) # for now rough calibration of pixel depth with focal length depth = 1000 / fx * depth mask = depth != 0 renderer.delete() return mask, depth </code></pre> <p>This will be done for each part, and then a mean depth is calculated based on the non-zero pixels, which is used for sorting the parts:</p> <pre class="lang-py prettyprint-override"><code>mean_depth = np.mean(depth[depth != 0]) </code></pre> <p>As you can see, I make a rough estimate by scaling the depth with the focal length to account for the different focal lengths. The results look okayish, but there are plenty of failure cases.</p> <p>So my question is:</p> <ol> <li>Given this setup with different intrinsics and extrinsics, is it possible to correctly calculate the depth? (Note: in case it matters, the assumption that all parts are in a specific order seems to hold, but ideally it would also work at the pixel level.)</li> <li>Is there some other method for this, using something different than intrinsics and extrinsics? It would be possible for me to annotate, say, keypoints in 2D and 3D to calculate something useful. (Currently I do PnP to estimate the intrinsics / extrinsics.)</li> </ol> <p>In case it is needed: this is the original dataset being used <a href="https://cs.stanford.edu/%7Ercwang/projects/ikea_manual/" rel="nofollow noreferrer">https://cs.stanford.edu/~rcwang/projects/ikea_manual/</a></p>
<python><graphics><3d><computer-vision>
2024-02-28 13:07:21
0
308
Leon0402
78,074,623
4,354,477
"The truth value of an array with more than one element is ambiguous" when trying to train a new JAX+Equinox model a second time
<p><strong>TL;DR</strong>: I create a new instance of my <code>equinox.Module</code> model and fit it using Optax. Everything works fine. When I create a <em>new</em> instance of the same model and try to fit it from scratch, using the same code, same initial values, same everything, I get:</p> <pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() </code></pre> <p>...somewhere deep in Optax code. My code doesn't compare any arrays. The error message doesn't show where exactly the comparison happens. What's wrong?</p> <h2>Code</h2> <pre class="lang-py prettyprint-override"><code># 1. Import dependencies. import jax; jax.config.update(&quot;jax_enable_x64&quot;, True) import jax.numpy as np, jax.random as rnd, equinox as eqx import optax # 2. Define loss function. I'm fairly confident this is correct. def npdf(x, var): return np.exp(-0.5 * x**2 / var) / np.sqrt(2 * np.pi * var) def mixpdf(x, ps, vars): return ps.dot(npdf(x, vars)) def loss(model, series): weights, condvars = model(series) return -jax.vmap( lambda x, vars: np.log(mixpdf(x, weights, vars)) )(series[1:], condvars[:-1]).mean() # 3. Define recurrent neural network. 
class RNNCell(eqx.Module): bias: np.ndarray Wx: np.ndarray Wh: np.ndarray def __init__(self, ncomp: int, n_in: int=1, *, key: np.ndarray): k1, k2, k3 = rnd.split(key, 3) self.bias = rnd.uniform(k1, (ncomp, )) self.Wx = rnd.uniform(k2, (ncomp, n_in)) self.Wh = 0.9 * rnd.uniform(k3, (ncomp, )) def __call__(self, vars_prev, obs): vars_new = self.bias + self.Wx @ obs + self.Wh * vars_prev return vars_new, vars_new class RNN(eqx.Module): cell: RNNCell logits: np.ndarray vars0: np.ndarray = eqx.field(static=True) def __init__(self, vars0: np.ndarray, n_in=1, *, key: np.ndarray): self.vars0 = np.array(vars0) K = len(self.vars0) self.cell = RNNCell(K, n_in, key=key) self.logits = np.zeros(K) def __call__(self, series: np.ndarray): _, hist = jax.lax.scan(self.cell.__call__, self.vars0, series**2) return jax.nn.softmax(self.logits), abs(hist) def condvar(self, series): weights, variances = self(series) return variances @ weights def predict(self, series: np.ndarray): return self.condvar(series).flatten()[-1] # 4. Training/fitting code. def fit(model, logret, nepochs: int, optimizer, loss): loss_and_grad = eqx.filter_value_and_grad(loss) @eqx.filter_jit def make_step(model, opt_state): loss_val, grads = loss_and_grad(model, logret) updates, opt_state = optimizer.update(grads, opt_state) model = eqx.apply_updates(model, updates) return loss_val, model, opt_state opt_state = optimizer.init(model) for epoch in range(nepochs): loss_val, model, opt_state = make_step(model, opt_state) print(&quot;Works!&quot;) return model def experiment(): series = rnd.normal(rnd.PRNGKey(8), (100, 1)) model = RNN([0.4, 0.6, 0.8], key=rnd.PRNGKey(8)) return fit(model, series, 100, optax.adam(0.01), loss) # 5. Run the exact same code twice. experiment() # 1st call, works experiment() # 2nd call, error </code></pre> <h2>Error message</h2> <pre><code>&gt; python my_RNN.py Works! 
Traceback (most recent call last): File &quot;/Users/forcebru/test/my_RNN.py&quot;, line 75, in &lt;module&gt; experiment() # 2nd call, error ^^^^^^^^^^^^ File &quot;/Users/forcebru/test/my_RNN.py&quot;, line 72, in experiment return fit(model, series, 100, optax.adam(0.01), loss) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/test/my_RNN.py&quot;, line 65, in fit loss_val, model, opt_state = make_step(model, opt_state) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/equinox/_jit.py&quot;, line 206, in __call__ return self._call(False, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/equinox/_module.py&quot;, line 935, in __call__ return self.__func__(self.__self__, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/equinox/_jit.py&quot;, line 200, in _call out = self._cached(dynamic_donate, dynamic_nodonate, static) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/traceback_util.py&quot;, line 179, in reraise_with_filtered_traceback return fun(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/pjit.py&quot;, line 248, in cache_miss outs, out_flat, out_tree, args_flat, jaxpr, attrs_tracked = _python_pjit_helper( ^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/pjit.py&quot;, line 136, in 
_python_pjit_helper infer_params_fn(*args, **kwargs) File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/api.py&quot;, line 325, in infer_params return pjit.common_infer_params(pjit_info_args, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/pjit.py&quot;, line 495, in common_infer_params jaxpr, consts, out_shardings, out_layouts_flat, attrs_tracked = _pjit_jaxpr( ^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/pjit.py&quot;, line 1150, in _pjit_jaxpr jaxpr, final_consts, out_type, attrs_tracked = _create_pjit_jaxpr( ^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/linear_util.py&quot;, line 350, in memoized_fun ans = call(fun, *args) ^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/pjit.py&quot;, line 1089, in _create_pjit_jaxpr jaxpr, global_out_avals, consts, attrs_tracked = pe.trace_to_jaxpr_dynamic( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/profiler.py&quot;, line 336, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/interpreters/partial_eval.py&quot;, line 2314, in trace_to_jaxpr_dynamic jaxpr, out_avals, consts, attrs_tracked = trace_to_subjaxpr_dynamic( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/interpreters/partial_eval.py&quot;, line 2336, in trace_to_subjaxpr_dynamic ans = fun.call_wrapped(*in_tracers_) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/linear_util.py&quot;, line 192, in call_wrapped ans = self.f(*args, **dict(self.params, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/equinox/_jit.py&quot;, line 49, in fun_wrapped out = fun(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/test/my_RNN.py&quot;, line 59, in make_step updates, opt_state = optimizer.update(grads, opt_state) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/optax/_src/combine.py&quot;, line 59, in update_fn updates, new_s = fn(updates, s, params, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/optax/_src/base.py&quot;, line 337, in update return tx.update(updates, state, params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/optax/_src/transform.py&quot;, line 369, in update_fn mu_hat = bias_correction(mu, b1, count_inc) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/traceback_util.py&quot;, line 179, in reraise_with_filtered_traceback return fun(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^ File 
&quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/pjit.py&quot;, line 248, in cache_miss outs, out_flat, out_tree, args_flat, jaxpr, attrs_tracked = _python_pjit_helper( ^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/pjit.py&quot;, line 136, in _python_pjit_helper infer_params_fn(*args, **kwargs) File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/api.py&quot;, line 325, in infer_params return pjit.common_infer_params(pjit_info_args, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/pjit.py&quot;, line 491, in common_infer_params canonicalized_in_shardings_flat, in_layouts_flat = _process_in_axis_resources( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;string&gt;&quot;, line 4, in __eq__ File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/core.py&quot;, line 745, in __bool__ check_bool_conversion(self) File &quot;/Users/forcebru/.pyenv/versions/3.12.1/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/jax/_src/core.py&quot;, line 662, in check_bool_conversion raise ValueError(&quot;The truth value of an array with more than one element is &quot; ValueError: The truth value of an array with more than one element is ambiguous. 
Use a.any() or a.all() </code></pre> <h2>Problem</h2> <ul> <li>The error message says <code>File &quot;&lt;string&gt;&quot;, line 4, in __eq__</code>, which doesn't help.</li> <li>It refers to the line <code>mu_hat = bias_correction(mu, b1, count_inc)</code> in Optax code, but as far as I understand, it <a href="https://github.com/google-deepmind/optax/blob/8e4aa35dec085e9646dc6d633fe914a691ffbac4/optax/_src/transform.py#L110" rel="nofollow noreferrer">doesn't compare any arrays</a>.</li> <li>It also refers to JAX code that's supposedly responsible for JIT compilation, but this seems outside my control.</li> </ul> <p>Is there a bug in my model definition (<code>RNNCell</code> or <code>RNN</code>)? Did I implement the training loop wrong? I basically copied it straight from <a href="https://docs.kidger.site/equinox/examples/train_rnn/" rel="nofollow noreferrer">Equinox docs</a>, so it should be fine. Why does it work when I call <code>experiment()</code> the first time, but not the second?</p>
<python><machine-learning><neural-network><jax>
2024-02-28 13:01:01
1
45,042
ForceBru
78,074,445
16,511,234
Scatter plot on Plotly map but with search bar
<p>I use Plotly to create scatter plots on maps, like this for example:</p> <pre><code>import plotly.express as px import geopandas as gpd geo_df = gpd.read_file(gpd.datasets.get_path('naturalearth_cities')) px.set_mapbox_access_token(open(&quot;.mapbox_token&quot;).read()) fig = px.scatter_mapbox(geo_df, lat=geo_df.geometry.y, lon=geo_df.geometry.x, hover_name=&quot;name&quot;, zoom=1) fig.show() </code></pre> <p><a href="https://plotly.com/python/scattermapbox/" rel="nofollow noreferrer">https://plotly.com/python/scattermapbox/</a></p> <p>I export it as an HTML file and everything works fine. My question is, can I add a search bar to it to filter locations and export it as an HTML file? I want to type Berlin into the search bar and only Berlin should show up.</p>
<python><plotly>
2024-02-28 12:34:53
0
351
Gobrel
78,074,401
820,410
Removing dense grid lines from CAPTCHA & converting into a clear image
<p>I want to extract the text from this image. I'm a newbie to OpenCV. I've tried various OpenCV snippets from other questions, but none is working for me.</p> <p>How can I extract text from this? Or maybe remove the grid lines &amp; that circular bulge in the middle so that the image is straightened &amp; then I can extract the text.</p> <p>Any help is appreciated.</p> <p>One of the sample snippets I tried - <a href="https://dsp.stackexchange.com/questions/52089/removing-noisy-lines-from-image-opencv-python">this</a></p> <p><a href="https://i.sstatic.net/AmWEI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AmWEI.png" alt="enter image description here" /></a></p>
<python><opencv><ocr><captcha><text-recognition>
2024-02-28 12:29:28
1
16,203
Pankaj Singhal
78,074,286
316,723
Handling unregistered Celery tasks during continuous deployment
<p>I have a setup of multiple Celery workers on different machines. The tasks can get added at any time through a continuous deployment process. So, at any point different workers may have different sets of tasks, even though they are on the same task queue.</p> <p>So, the following state is possible:</p> <ul> <li>tasks on machine A: <code>add</code></li> <li>tasks on machine B: <code>add</code>, <code>subtract</code></li> </ul> <p>Now, I call task <code>subtract</code>, machine A picks it up (round robin), and throws a <code>celery.exceptions.NotRegistered</code> exception. The task fails and is never retried again.</p> <p>Is there a clean pattern where machine A would simply ignore tasks that are not in its registry so other workers that do have them could pick them up? I cannot use separate queues, for example, because the tasks are on the same queue (they just diverge temporarily during deployment).</p> <p>Of course, I can detect the failure and retry on the client, but it seems like it should be a common enough pattern to have built-in support, but I just can't find anything about this.</p> <p>I tried the <code>task_acks_late</code> and <code>task_reject_on_worker_lost</code> configuration parameters but they didn't seem to be relevant.</p> <p>Machine A:</p> <pre><code>app = Celery('worker', broker='pyamqp://guest@localhost//', backend='redis://localhost:6389/0') @app.task def add(x, y): return x + y </code></pre> <p>Machine B:</p> <pre><code>app = Celery('worker', broker='pyamqp://guest@localhost//', backend='redis://localhost:6389/0') @app.task def add(x, y): return x + y @app.task def subtract(x, y): return x - y </code></pre> <p>Client:</p> <pre><code>&gt;&gt;&gt; result = app.send_task('subtract', args=[3, 2]) &gt;&gt;&gt; result.get() 1 &gt;&gt;&gt; result = app.send_task('subtract', args=[3, 2]) &gt;&gt;&gt; result.get() ... celery.exceptions.NotRegistered: subtract </code></pre>
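Absent built-in support, one client-side pattern is to catch the failure and resubmit so that a worker with the task registered eventually picks it up. The sketch below is deliberately framework-free: `send` and the `machine_*` callables are stand-ins for `app.send_task(...).get()` against workers with diverging registries, not Celery API.

```python
# Hypothetical client-side fallback (not Celery API): retry on a
# "task not registered" style failure so another worker can answer.
class NotRegistered(Exception):
    pass

def call_with_retry(send, name, args, retries=3):
    last_exc = None
    for _ in range(retries):
        try:
            return send(name, args)
        except NotRegistered as exc:
            last_exc = exc  # a worker that knows the task may answer next time
    raise last_exc

# Simulated round-robin: machine A lacks 'subtract', machine B has it.
def machine_a(name, args):
    raise NotRegistered(name)

def machine_b(name, args):
    return args[0] - args[1]

_workers = iter([machine_a, machine_b])

def send(name, args):
    return next(_workers)(name, args)
```

Here `call_with_retry(send, 'subtract', [3, 2])` fails on the first (machine A) attempt, retries, and succeeds on machine B.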
<python><celery>
2024-02-28 12:10:40
0
4,106
naktinis
78,074,124
19,356,117
How to read zarr files correctly from minio?
<p>I want to read a big <code>zarr</code> file from my MinIO (S3) server; however, all three methods I tried crash:</p> <pre><code>import hydrodata.configs.config as conf # Method 1 # https://pastebin.com/vkM1M3VV zarr_path = await conf.FS.open_async('s3://datasets-origin/usgs_streamflow_nldas_hourly.zarr') zds = xr.open_dataset(zarr_path, engine='zarr') # Method 2 # https://pastebin.com/fKKECf3U zarr_path = conf.FS.get_mapper('s3://datasets-origin/usgs_streamflow_nldas_hourly.zarr') wrapped_store = zarr.storage.KVStore(zarr_path) zds = xr.open_zarr(wrapped_store) # Method 3 # AttributeError: __enter__ with conf.FS.open_async('s3://datasets-origin/usgs_streamflow_nldas_hourly.zarr') as zarr_path: zds = xr.open_dataset(zarr_path) </code></pre> <p>And this is <code>conf.FS</code>:</p> <pre><code>FS = s3fs.S3FileSystem( client_kwargs={&quot;endpoint_url&quot;: MINIO_PARAM[&quot;endpoint_url&quot;]}, key=MINIO_PARAM[&quot;key&quot;], secret=MINIO_PARAM[&quot;secret&quot;], use_ssl=False, ) </code></pre> <p>So how can I solve these problems and get the correct data?</p> <p>———————————————————————————————————</p> <p>This is my crash report for Method 2:</p> <pre><code>name = 'xarray.core.daskmanager' import_ = &lt;function _gcd_import at 0x7fe2aabbb400&gt; &gt; ??? E ModuleNotFoundError: No module named 'xarray.core.daskmanager' </code></pre> <p>However, I have run <code>pip install xarray[complete]</code> and <code>conda install -c conda-forge xarray dask netCDF4 bottleneck</code> before, so where's the problem? This is my pip list: <a href="https://pastebin.com/BUbcNqtT" rel="nofollow noreferrer">https://pastebin.com/BUbcNqtT</a></p>
<python><asynchronous><amazon-s3><bigdata><minio>
2024-02-28 11:44:35
1
1,115
forestbat
78,074,062
7,819,329
Deal with "non-stable" entries in a noisy signal
<p>I have a signal in which I want to identify &quot;outliers&quot;, actually corresponding to samples for which the signal is not sufficiently stable.</p> <p>I have implemented this:</p> <pre><code>def count_outliers(series, window_size=5, stability_threshold=0.1): rolling_min = series.rolling(window=window_size, min_periods=1, center=True).min() rolling_max = series.rolling(window=window_size, min_periods=1, center=True).max() rolling_mean = series.rolling(window=window_size, min_periods=1, center=True).mean() # Calculate the max-min over a rolling window max_min_difference = rolling_max - rolling_min # Calculate the mean over the rolling window mean_over_window = rolling_mean # Create a binary vector based on the condition # if the gap is high (transition/instability) outliers = ((max_min_difference &gt; stability_threshold * mean_over_window) &amp; ( abs(series - mean_over_window) &gt; stability_threshold * mean_over_window)).astype(int) return outliers </code></pre> <p>but this is not convincing; let me explain why with two examples.</p> <p><strong>Example 1:</strong></p> <p>Assume this one:</p> <pre><code>test_vec=pd.Series([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7.03, 6.21, 15.84, 16.81, 17.78, 30.16, 29.23, 28.3, 28.215, 28.13, 28.195, 28.245, 28.305, 28.305, 28.345]) </code></pre> <p>I want the elements at indices 10 to 14 (inclusive), that is, the third &quot;row&quot; (presented like this only for readability), to be labeled as outliers. The two following elements might or might not be classified as outliers, depending on the threshold. 
This is not so critical.</p> <p><strong>Example 2:</strong></p> <p>Assume this one, which is the trickiest thing I could encounter:</p> <pre><code>test_vec=pd.Series([0, 0, 0, 0, 0, 0, 26.09, 11.36, 6.04, 0, 0, 5.2, 26.15, 27.825, 29.5, 0, 0, 5.32, 26.18, 27.37, 28.56, 0, 0, 0, 0, 0]) </code></pre> <p>In fact, all non-zero elements (I don't care much about the 0's in this problem) should be outliers: the signal is never stable enough.</p> <p>What can I try next?</p>
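One direction worth trying (an assumption, not part of the original code): replace the max-min spread with a rolling coefficient of variation, i.e. the standard deviation divided by the mean over a centered window. A dependency-free sketch:

```python
def unstable_mask(values, window=5, cv_threshold=0.1):
    """Flag samples whose centered window has std/mean above cv_threshold.

    Windows with zero mean are treated as stable, since the zeros are
    explicitly out of scope in the problem statement.
    """
    n = len(values)
    half = window // 2
    out = []
    for i in range(n):
        # Centered window, truncated at the edges of the series.
        chunk = values[max(0, i - half):min(n, i + half + 1)]
        mean = sum(chunk) / len(chunk)
        if mean == 0:
            out.append(0)
            continue
        var = sum((x - mean) ** 2 for x in chunk) / len(chunk)
        out.append(1 if var ** 0.5 / abs(mean) > cv_threshold else 0)
    return out
```

On the first example this flags the transition region (roughly indices 10 to 16) while leaving the settled tail unflagged, and on the second example the short bursts keep the window's coefficient of variation high, so they stay flagged throughout.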
<python><filter><time-series><signal-processing>
2024-02-28 11:32:05
0
1,161
MysteryGuy
78,074,032
22,221,987
PyCharm Google-docstring Russian language parsing problems
<p>I have a docstring in some function. It is a Google-styled docstring. My PyCharm has default syntax checking enabled.<br /> For some reason I got a PEP hint that the first letter in a sentence should be capitalised. But here is the tricky part: this hint occurred under the Arg name.<br /> Here is a screenshot illustrating that different sentence contents affect this situation differently.<br /> <a href="https://i.sstatic.net/xhxqC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xhxqC.png" alt="enter image description here" /></a><br /> As you can see, I have no hints in <code>foo</code>, but I have a hint in <code>foo1</code>. The only difference is in the content of the previous text line in these functions. Most importantly, the hint says that I need to start every sentence with a capital letter, yet in both <code>foo</code> and <code>foo1</code> we have a dot at the end of the <code>param1</code> line.</p> <p>Here is the code for testing:</p> <pre><code>def foo(param1: str, param2: int) -&gt; bool: &quot;&quot;&quot; Текст, текст, текст. Args: param1: Некоторый текст, некоторой длины. param2: Некоторый текст, некоторой длины. Returns: &quot;&quot;&quot; def foo1(param1: str, param2: int) -&gt; bool: &quot;&quot;&quot; Текст, текст, текст. Args: param1: Некоторый текст, некоторой длины текст. param2: Некоторый текст, некоторой длины. Returns: None &quot;&quot;&quot; </code></pre> <p>I've tried to reproduce it in English, but I didn't obtain any result.<br /> What causes such irrational behaviour?</p> <p>My specs:<br /> PyCharm 2023.1.5 (Community Edition)<br /> Build #PC-231.9414.12, built on February 14, 2024</p> <p><strong>UPD</strong>: Here is the hint example:</p> <p><a href="https://i.sstatic.net/haUQV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/haUQV.png" alt="enter image description here" /></a><br /> It says &quot;this sentence does not start with a capital letter&quot;.</p>
<python><python-3.x><pycharm><docstring><pep>
2024-02-28 11:25:02
1
309
Mika
78,073,757
14,565,295
Create class attribute derived from base class's class attribute
<p>I have a base class with a class attribute in Python:</p> <pre><code>class BaseClass(): TENANT_VARIABLES = # Some kind of dictionary </code></pre> <p>then I have some classes that inherit from that class:</p> <pre><code>class A(BaseClass): pass </code></pre> <p>and finally I have the lower-level class in the inheritance, where I'm trying to do something like the following:</p> <pre><code>class LowLevelClass(A): VARIABLES = TENANT_VARIABLES.get(&quot;BACKEVENTS&quot;) </code></pre> <p>How can I refer to the base class's attribute <code>TENANT_VARIABLES</code> (which should also be a class attribute of <code>LowLevelClass</code>)?</p> <p>I've tried several approaches like:</p> <pre><code>class LowLevelClass(A): VARIABLES = LowLevelClass.TENANT_VARIABLES.get(&quot;BACKEVENTS&quot;) </code></pre> <pre><code>class LowLevelClass(A): VARIABLES = cls.TENANT_VARIABLES.get(&quot;BACKEVENTS&quot;) </code></pre> <pre><code>class LowLevelClass(A): VARIABLES = super.TENANT_VARIABLES.get(&quot;BACKEVENTS&quot;) </code></pre> <p>but I have not found a way to access the original value. Only the following approach works, but I don't like it because it is not clear at first sight for a new developer that <code>LowLevelClass</code> inherits from <code>BaseClass</code>:</p> <pre><code>class LowLevelClass(A): VARIABLES = BaseClass.TENANT_VARIABLES.get(&quot;BACKEVENTS&quot;) </code></pre> <p>Is there an easier way to define attributes in low-level classes based on the base class's class attributes? Is there maybe some kind of class property? I could do the following, but then every instance of that class carries the same value, and I don't think it makes complete sense to have to go through <code>self</code>:</p> <pre><code>class LowLevelClass(A): @property def VARIABLES(self): return self.TENANT_VARIABLES.get(&quot;BACKEVENTS&quot;) </code></pre>
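For reference, a sketch of one way to keep the derivation in the base class so subclasses never have to name `BaseClass` at the use site: an `__init_subclass__` hook (the dictionary contents and the `section` keyword are illustrative assumptions). A class body cannot see inherited names, but the hook runs after the subclass exists, so normal attribute lookup along the MRO works.

```python
class BaseClass:
    TENANT_VARIABLES = {"BACKEVENTS": {"enabled": True}}  # illustrative contents

    def __init_subclass__(cls, section=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if section is not None:
            # Normal attribute lookup: finds TENANT_VARIABLES wherever it
            # is (re)defined along the MRO, without naming BaseClass.
            cls.VARIABLES = cls.TENANT_VARIABLES.get(section)

class A(BaseClass):
    pass

class LowLevelClass(A, section="BACKEVENTS"):
    pass
```

With this, `LowLevelClass.VARIABLES` is the `"BACKEVENTS"` entry as a true class attribute, while `A` (declared without a `section`) gets none.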
<python>
2024-02-28 10:38:54
1
349
rodvictor
78,073,490
12,878,983
Precise transliteration of Chinese words in Python
<p>I have these two words in Chinese:</p> <pre><code>l1 = '陕西省' l2 = '山西省' </code></pre> <p>and their transliterations are 'Shaanxi Sheng' and 'Shanxi Sheng'. I'm using the Python unidecode package</p> <pre><code>import unidecode print(unidecode.unidecode(l1), unidecode.unidecode(l2)) </code></pre> <p>but it gives me the same transliteration for both words (i.e. Shanxi Sheng). I've also tried packages such as pypinyin and xpinyin</p> <pre><code>from pypinyin import pinyin, Style def transliterate_with_pypinyin(text): result = pinyin(text, style=Style.NORMAL) return ''.join([item[0] for item in result]) from xpinyin import Pinyin def transliterate_with_xpinyin(text): p = Pinyin() return p.get_pinyin(text, '') </code></pre> <p>but they also give me the same result. Any idea on how to achieve a precise result?</p>
<python>
2024-02-28 09:57:29
1
573
marco
78,073,485
10,816,027
/usr/bin/python3: No module named colab_kernel_launcher
<p>I need python==3.7 and am working on Google Colab. The list of commands to downgrade the Python version is as follows:</p> <pre><code>!python --version !sudo apt-get install python3.7 !sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1 </code></pre> <p>It seems to work:</p> <pre><code>!python --version </code></pre> <p>output --</p> <pre><code>Python 3.7.17 </code></pre> <p>I then restart the runtime according to <a href="https://www.geeksforgeeks.org/how-to-downgrade-python-version-in-colab/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/how-to-downgrade-python-version-in-colab/</a>.</p> <p>However, after the restart, the runtime doesn't reconnect. These are the logs from app.log:</p> <pre><code>Feb 28, 2024, 3:17:22 PM WARNING /usr/bin/python3: No module named colab_kernel_launcher Feb 28, 2024, 3:17:22 PM WARNING WARNING:root:kernel 30b3411f-a987-403e-9ac5-1fcc10285314 restarted Feb 28, 2024, 3:17:22 PM INFO KernelRestarter: restarting kernel (4/5), keep random ports Feb 28, 2024, 3:17:19 PM WARNING /usr/bin/python3: No module named colab_kernel_launcher Feb 28, 2024, 3:17:19 PM WARNING WARNING:root:kernel 30b3411f-a987-403e-9ac5-1fcc10285314 restarted Feb 28, 2024, 3:17:19 PM INFO KernelRestarter: restarting kernel (3/5), keep random ports Feb 28, 2024, 3:17:16 PM WARNING /usr/bin/python3: No module named colab_kernel_launcher Feb 28, 2024, 3:17:16 PM WARNING WARNING:root:kernel 30b3411f-a987-403e-9ac5-1fcc10285314 restarted Feb 28, 2024, 3:17:16 PM INFO KernelRestarter: restarting kernel (2/5), keep random ports Feb 28, 2024, 3:17:13 PM WARNING /usr/bin/python3: No module named colab_kernel_launcher </code></pre>
<python><google-cloud-colab-enterprise>
2024-02-28 09:56:38
0
1,057
Om Rastogi