QuestionId: int64 (74.8M–79.8M)
UserId: int64 (56–29.4M)
QuestionTitle: string (15–150 chars)
QuestionBody: string (40–40.3k chars)
Tags: string (8–101 chars)
CreationDate: date string (2022-12-10 09:42:47 – 2025-11-01 19:08:18)
AnswerCount: int64 (0–44)
UserExpertiseLevel: int64 (301–888k)
UserDisplayName: string (3–30 chars)
77,920,093
11,355,926
Python 3.12: Correct typing for list[list[int,str,list[list[str]]]]?
<p>I have a list of users that I would like to type correctly, but I'm not able to find an answer explaining how to do this.</p> <p>The list looks like this:</p> <pre><code>listofusers = [[167, 'john', 'John Fresno', [[538, 'Privileged'], [529, 'User']]]] </code></pre> <p>I tried it like this:</p> <pre><code>def test(listofusers: list[list[str,list[list[str]]]]) -&gt; None: ... </code></pre> <p>But VS Code only shows typing for: <code>list[list[str]]</code>:</p> <p><a href="https://i.sstatic.net/8ySeW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8ySeW.png" alt="screenshot" /></a></p> <p>I also tried using Union, like this:</p> <pre><code>listofusers: list[list[Union[int,str,list[list[str]]]]] </code></pre> <p>But again, this doesn't seem to work.</p> <p>Can someone shed some light on this?</p>
<python><python-3.x><python-typing>
2024-02-01 12:13:02
2
3,060
Cow
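The nested lists in the question above have fixed positions (id, login, display name, roles), which maps more naturally onto tuple types than onto `list[...]`, since `list[]` takes a single element type. A hedged sketch of both options; the `User` alias and `count_roles` helper are illustrative names, not part of the original code:

```python
from typing import Union

# Each inner 4-element list has a fixed shape (id, login, display name, roles),
# so a tuple type models it more precisely than list[...]:
User = tuple[int, str, str, list[tuple[int, str]]]

def count_roles(listofusers: list[User]) -> int:
    # Unpacking works for lists at runtime even though the annotation says tuple.
    return sum(len(roles) for _uid, _login, _name, roles in listofusers)

# If the data really must stay as nested lists, the closest annotation is a
# union over the element types, losing per-position precision:
LooseUser = list[Union[int, str, list[list[str]]]]

listofusers = [[167, 'john', 'John Fresno', [[538, 'Privileged'], [529, 'User']]]]
print(count_roles(listofusers))  # → 2
```

Converting the inner lists to actual tuples would let a checker verify each position; the loose `Union` form type-checks but gives up that precision.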
77,919,954
9,247,345
FastAPI overwrite pydantic model during startup
<p>In my FastAPI application, I want to create a dynamic pydantic model which is used as an input argument. If a condition (that can only be checked during startup) is True, I want to overwrite the model.</p> <p>In the minimal working example, the model <code>Model</code> is initialized with the default value <code>lim = 10</code>. In the lifespan event, I overwrite <code>Model</code> by initializing a new dynamic model with <code>lim = 100</code>.</p> <p>However, in <code>/docs</code>, the default model is still used. Is there a way to overwrite the model such that all endpoints that use it are updated? Since I am using the model as an input argument in multiple endpoints, I would prefer not to create all endpoints dynamically in the lifespan event.</p> <p><strong>MWE</strong></p> <pre><code>from contextlib import asynccontextmanager from typing import Annotated, Type from fastapi import FastAPI, Depends, Query from pydantic import BaseModel, create_model def dynamic_model(lim=10) -&gt; Type[BaseModel]: return create_model(&quot;DynamicModel&quot;, lim=(Annotated[int, Query(ge=1, le=lim)], 1)) Model = dynamic_model() @asynccontextmanager async def lifespan(_app: FastAPI): global Model something_is_true = True if something_is_true: Model = dynamic_model(lim=100) # overwrite model yield app = FastAPI(lifespan=lifespan) @app.get(&quot;/&quot;) async def root(model: Annotated[Model, Depends()]): # Using the shortcut to be able to call the endpoint with query parameters # https://fastapi.tiangolo.com/tutorial/dependencies/classes-as-dependencies/#shortcut return model if __name__ == &quot;__main__&quot;: import uvicorn uvicorn.run(&quot;main:app&quot;, reload=True) </code></pre> <p><strong>Endpoint with default model</strong>: the MWE does not manage to overwrite the model.</p> <p><a href="https://i.sstatic.net/exnhX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/exnhX.png" alt="Root endpoint with default model" /></a></p>
<python><fastapi><pydantic>
2024-02-01 11:50:21
0
317
brnk
77,919,918
839,963
Missing center point in scatter plot on matplotlib PolarAxes
<p>I am trying to draw markers on a matplotlib <code>PolarAxes</code> using <code>plot</code> or <code>scatter</code> (I tried both).</p> <p>Here is my code (not showing the part that creates the plot and colorbar):</p> <pre class="lang-py prettyprint-override"><code>figure = plt.figure(figsize=(6, 5)) axes: plt.Axes = figure.add_axes( position, projection=&quot;polar&quot; ) axes.plot( apos, rpos, 'o', markersize=4, markerfacecolor='white', markeredgecolor='darkgrey' ) </code></pre> <p>The data, with angular and radial positions <code>apos</code> and <code>rpos</code>, clearly contains the center point with coordinates (0, 0); in total there are 49 points.</p> <p><a href="https://i.sstatic.net/yZmy9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yZmy9.png" alt="enter image description here" /></a></p> <p>The problem is that the center point is always missing from the resulting plot.</p> <p><a href="https://i.sstatic.net/1AKr0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1AKr0.png" alt="enter image description here" /></a></p> <p>In fact, if I manipulate the data and increase the radial position of the first point from <code>0</code> to <code>5</code> or higher, the corresponding marker suddenly appears. For radii &lt; 5, no marker is drawn.</p> <p>I'd like to understand why this is not working as intended and how I can fix it so that the center point does appear on the plot.</p> <p>This is using Python 3.8.10 and matplotlib 3.2.2.</p> <p>In case anyone is interested, this is a visualization of a wafer-map measurement of sheet resistance on an 8&quot; silicon wafer coated with a Cu thin film. The markers indicate the sites of the 49-point measurement pattern.</p>
<python><matplotlib>
2024-02-01 11:44:45
1
745
Glemi
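For the polar-plot question above, one commonly suggested workaround is to disable clipping on the marker artist: a point at r = 0 sits exactly on the axes patch boundary and can be clipped away. A minimal self-contained sketch with synthetic data and a headless backend, not the asker's exact figure:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

# Synthetic stand-ins for the question's apos/rpos; the first point is the center.
apos = np.array([0.0, 0.5, 1.0, 1.5])
rpos = np.array([0.0, 20.0, 40.0, 60.0])

figure = plt.figure(figsize=(6, 5))
axes = figure.add_subplot(projection="polar")
axes.plot(
    apos, rpos, 'o',
    markersize=4, markerfacecolor='white', markeredgecolor='darkgrey',
    clip_on=False,  # don't clip markers sitting exactly on the r = 0 boundary
)
figure.savefig("polar_markers.png")
```

`clip_on` is a standard `Line2D` property, so it can be passed straight through `plot`; upgrading beyond matplotlib 3.2 is also worth trying, since marker clipping behaviour has changed between releases.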
77,919,397
1,711,271
Polars: create an integer column from another one by computing the difference with the largest smaller value from a list
<p>I have a <code>polars</code> dataframe like this one:</p> <pre><code>shape: (10, 2) ┌────────┬───────┐ │ foo ┆ bar │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞════════╪═══════╡ │ 86 ┆ 11592 │ │ 109 ┆ 2765 │ │ 109 ┆ 4228 │ │ 153 ┆ 4214 │ │ 153 ┆ 7217 │ │ 153 ┆ 11095 │ │ 160 ┆ 1134 │ │ 222 ┆ 5509 │ │ 225 ┆ 10150 │ │ 239 ┆ 4151 │ └────────┴───────┘ </code></pre> <p>And a <strong>sorted</strong> list of integers <code>points</code>:</p> <pre><code>points = [0, 1500, 3000, 4500, 6000, 7500, 9000, 10500, 12000] </code></pre> <p>I want to create a new column <code>baz</code>, such that for each element <code>y</code> of <code>bar</code>, I find the largest <code>x</code> in <code>points</code> such that <code>x &lt;= y</code>. Then the element of <code>baz</code> is <code>y - x</code>. How can I do that?</p>
<python><list><python-polars>
2024-02-01 10:28:09
1
5,726
DeltaIV
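The lookup in the question above ("largest x such that x <= y" in a sorted list) is exactly a right-sided binary search. A sketch of the core computation with NumPy on the question's data; polars has a similar `search_sorted` on Series, but the polars-specific spelling is not shown here:

```python
import numpy as np

points = [0, 1500, 3000, 4500, 6000, 7500, 9000, 10500, 12000]
bar = np.array([11592, 2765, 4228, 4214, 7217, 11095, 1134, 5509, 10150, 4151])

# searchsorted(..., side="right") gives, for each y, the insertion index that
# keeps `points` sorted; subtracting 1 yields the index of the largest x <= y.
idx = np.searchsorted(points, bar, side="right") - 1
baz = bar - np.asarray(points)[idx]
print(baz.tolist())  # → [1092, 1265, 1228, 1214, 1217, 595, 1134, 1009, 1150, 1151]
```

The result array can then be attached back to the dataframe as the new `baz` column.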
77,919,370
4,105,440
Unexpected behaviour when applying rolling operation on a dataframe with no data
<p>Consider the following code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd data = pd.DataFrame({'time': pd.date_range('2024-01-01 00:00', '2024-01-02 00:00', freq='1H'), 'data' : np.random.rand(25)}) data.rolling(window='3H', on='time').max() </code></pre> <p>This produces, as expected, the following dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>time</th> <th>data</th> </tr> </thead> <tbody> <tr> <td>2024-01-01 00:00:00</td> <td>0.826762</td> </tr> <tr> <td>2024-01-01 01:00:00</td> <td>0.826762</td> </tr> <tr> <td>2024-01-01 02:00:00</td> <td>0.826762</td> </tr> <tr> <td>2024-01-01 03:00:00</td> <td>0.792733</td> </tr> <tr> <td>2024-01-01 04:00:00</td> <td>0.618498</td> </tr> <tr> <td>...</td> <td>...</td> </tr> </tbody> </table> </div> <p>If I remove the <code>data</code> column and apply the same operation</p> <pre class="lang-py prettyprint-override"><code>data.drop(columns=['data']).rolling('3H', on='time').max() </code></pre> <p>I get no errors or warnings and, as a result, a DataFrame with just the original <code>RangeIndex</code> and no columns in it. I would have expected the function to return the original untouched dataframe.</p> <p>I understand what I'm doing doesn't make sense, because the input DataFrame does not contain any data to apply the rolling operation to, but shouldn't the function return the input DataFrame (so with the time column) rather than removing it from the results?</p> <p>In my case I was using the same function over a dataframe that sometimes didn't contain that column, and this unexpected return value was causing the whole pipeline afterwards to fail because it couldn't find the time column anymore.</p>
<python><pandas><dataframe><rolling-computation>
2024-02-01 10:23:53
0
673
Droid
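Until pandas changes this behaviour, a small guard around the rolling call keeps the `on` column from disappearing when there are no value columns to aggregate. The `rolling_max` wrapper below is an illustrative helper, not a pandas API:

```python
import numpy as np
import pandas as pd

def rolling_max(df: pd.DataFrame, window: str, on: str) -> pd.DataFrame:
    # With no value columns besides `on`, rolling(...).max() returns a frame
    # without the `on` column, so return the input unchanged in that case.
    if not [c for c in df.columns if c != on]:
        return df.copy()
    return df.rolling(window=window, on=on).max()

data = pd.DataFrame({
    "time": pd.date_range("2024-01-01 00:00", "2024-01-02 00:00", freq="1h"),
    "data": np.random.rand(25),
})
print(list(rolling_max(data, "3h", on="time").columns))
print(list(rolling_max(data.drop(columns=["data"]), "3h", on="time").columns))
```

The same guard applied in the pipeline would keep the `time` column flowing through even when the value column is absent.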
77,918,877
5,938,276
Adding watch to Python project in Debug mode
<p>I am working in VSCode on Windows and have created a debug configuration. I want to add a watch to inspect the contents of a data class as it is populated. My data class is set up like so:</p> <pre><code>class BSM: def __init__(self): self.ct = CTimer() def process(self, BP:float) -&gt; None|CTimer: if ..... self.ct = set_attribute1 </code></pre> <p>If I just add <code>ct</code> or <code>self.ct</code> to the <strong>Watch</strong> window in VSCode I see nothing, yet my <code>print(f&quot;{self.ct}&quot;)</code> shows data is in the CTimer dataclass instance <code>ct</code>.</p> <p>What am I doing wrong, and how can I watch the instance <code>self.ct</code>?</p>
<python><visual-studio-code>
2024-02-01 09:12:38
1
2,456
Al Grant
77,918,750
3,727,079
How can I optimize a search over two sets of files to merge them?
<p>I've got a bunch of .txt files with alphabetically-sorted file names:</p> <pre><code>aaa.txt aab.txt aac.txt . . . zzz.txt </code></pre> <p>And another set of .txt files stored in a different location:</p> <pre><code>ant.txt bat.txt cat.txt lion.txt ... </code></pre> <p>I want to take the text of .txt files in the latter group and append them to the appropriate file in the first group. (There is a chance that a file in the second group does not exist in the first group.)<br /> For example, I want to take the contents of <code>ant.txt</code> from the second group and append that to <code>ant.txt</code> in the first group.</p> <p>How can I do this efficiently?<br /> The obvious way is:</p> <pre><code>for file in second group: for file in first group: # check if the names are identical; if they are, append </code></pre> <p>But this seems really inefficient. A human trying to find <code>cat.txt</code> in the first group wouldn't start searching from aaa.txt, they'd immediately jump to the files that start with ca.</p> <p>I imagine one way to optimize this is to remove <code>cat.txt</code> from the search once it's been updated, possibly by storing the updated file in a third directory, and deleting <code>cat.txt</code> from the first group(?)</p> <p>If it matters, I'm using Python.</p>
<python><algorithm><optimization>
2024-02-01 08:47:48
2
399
Allure
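For the file-merge question above, building a set of the first group's names turns each lookup into an average O(1) operation, so the nested quadratic loop isn't needed. A sketch under assumed directory layout; `merge_groups` is an illustrative name:

```python
from pathlib import Path

def merge_groups(first_dir: str, second_dir: str) -> list[str]:
    """Append each second-group file to the same-named first-group file.

    Returns the names from the second group that had no match.
    """
    first = Path(first_dir)
    second = Path(second_dir)
    first_names = {p.name for p in first.glob("*.txt")}  # one O(n) scan
    unmatched = []
    for src in sorted(second.glob("*.txt")):
        if src.name in first_names:  # O(1) set membership, no inner loop
            with open(first / src.name, "a") as dst:
                dst.write(src.read_text())
        else:
            unmatched.append(src.name)
    return unmatched
```

Alternatively, skip the scan entirely and test `(first / src.name).exists()` per file; the filesystem lookup is effectively the "jump straight to the files that start with ca" a human would do.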
77,918,600
7,295,599
Irreproducible crash/behaviour of pyinstaller executable on different PCs
<p>I'm writing a little Python (3.11.3) program using PyQt5 and a few more modules. Then I packed this with pyInstaller (5.10.1) into a single-file executable for Windows10.</p> <p>On my laptop, I did not experience any problems, neither with the Python script nor with the executable. However, when I copied the .exe to 3 other PCs and started it, I got strange behaviour three times:</p> <ol> <li>on PC 1 the .exe was crashing (I couldn't see what the error message was)</li> <li>on PC 2 some table cells were suddenly empty after some table action</li> <li>on PC 3 it crashed (error was something like <code>ValueError: invalid literal for int() with base 10: '3-5</code>, however, this case should be excluded by the script)</li> </ol> <p>Now, here comes the next strange thing: This happened only when starting the .exe on the other PCs for the <strong>first</strong> time. After restarting, these crashes/errors did not occur even with the same input.</p> <p>Here on StackOverflow, there have been similar questions, but as far as I can judge no duplicates:</p> <p><a href="https://stackoverflow.com/q/64896400/7295599">Why python .exe doesn&#39;t run on some PCs and others it does (Using pyinstaller)</a><br> No conclusive answer there. 
In my case, the PCs are all Windows10/64 systems.</p> <p><a href="https://stackoverflow.com/q/67418982/7295599">PyQt5 app compiled with PyInstaller using --onefile and --noconsole, but exe fails to launch</a><br> In my case, the .exe started the second time, no pyInstaller <code>hiddenimports=[]</code> issues.</p> <p>Some information about the used modules:</p> <pre><code>from PyQt5.QtWidgets import QAbstractItemView, QAction, QApplication, QHBoxLayout, QLabel, QLineEdit, QMainWindow, QMessageBox, QPushButton, QTableWidget, QTableWidgetItem, QVBoxLayout, QWidget from PyQt5.QtCore import pyqtSlot, Qt from PyQt5.QtGui import QIcon import sys, re, os from pypdf import PdfReader, PdfWriter, Transformation from math import cos, sin, atan, sqrt, radians, pi </code></pre> <p>Since the initial .exe was about 50 MB which in my opinion seems pretty large (but this is <a href="https://stackoverflow.com/q/77009377/7295599">another unsolved topic</a>), I tried to reduce the size by using some excludes in the pyInstaller .spec file.</p> <pre><code>excludes=['pandas', 'numpy', 'matplotlib', 'scipy', 'PIL', 'sqlite3', 'setuptools', 'opencv', 'ocr', 'paramiko'], </code></pre> <p>With this, the .exe size was reduced to about 40 MB. And again, with no crash or functional issues on my original PC.</p> <p>Hence, unfortunately, I <strong>cannot</strong> give a minimal, reproducible example (maybe except posting the whole project). I know this will probably make it difficult to give some advice. However, maybe someone has experienced similar issues and can explain or give some hints about what might be going wrong here?</p> <p>My suspicion was that on the different PCs there might be different versions of the &quot;Microsoft Visual C++ redistributable&quot;. I can check that if necessary. If any additional information might be necessary, I'll be happy to provide it.</p>
<python><windows><pyqt5><crash><pyinstaller>
2024-02-01 08:19:28
0
27,030
theozh
77,918,540
10,232,932
Map the upper group to the column in pandas
<p>I have the following (ordered) dataframe <code>df</code>:</p> <pre><code>Level ID 5 A 8 DD 8 DA 8 AC 5 B 8 BA 8 BB 8 BC 8 BD </code></pre> <p>The logic is that the Level 5 row, which stands above the Level 8 rows, has to get mapped to them; that means:</p> <pre><code>Level ID Upper_ID 5 A A 8 DD A 8 DA A 8 AC A 5 B B 8 BA B 8 BB B 8 BC B 8 BD B </code></pre> <p>How can I achieve this? I want to map the &quot;upper group ID&quot; to each row. Every time a new <code>5</code> occurs in the Level column, it becomes the new upper group for the Level 8 rows until the next 5 occurs.</p>
<python><pandas>
2024-02-01 08:06:52
1
6,338
PV8
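Since every Level 5 row starts a new group, this is a mask-and-forward-fill: keep the ID only on the group-starting rows, then propagate it downward. A sketch on the question's data:

```python
import pandas as pd

df = pd.DataFrame({
    "Level": [5, 8, 8, 8, 5, 8, 8, 8, 8],
    "ID": ["A", "DD", "DA", "AC", "B", "BA", "BB", "BC", "BD"],
})

# Keep the ID only where Level == 5 (other rows become NaN), then
# forward-fill it onto the Level 8 rows that follow.
df["Upper_ID"] = df["ID"].where(df["Level"].eq(5)).ffill()
print(df["Upper_ID"].tolist())  # → ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B']
```

This relies only on row order, so it keeps working however many Level 8 rows follow each Level 5 row.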
77,918,537
1,936,853
Play Developer API: edits.tracks.update does not work
<p>I have an Android app released on the Play Store.</p> <p>In general, when I release a new version to the Play Store, I set the 'user fraction' to 30%.</p> <p>Then, I will monitor it for a few days and if there are no major problems, I will gradually increase it and deploy it to 100%.</p> <p>I do this on the Play Console web site.</p> <p>But I want to do this programmatically.</p> <p>So I found the API and created some Python code.</p> <pre class="lang-py prettyprint-override"><code>import httplib2 from googleapiclient.discovery import build from oauth2client.service_account import ServiceAccountCredentials args = argparser.parse_args() user_fraction = args.user_fraction credentials = ServiceAccountCredentials.from_json_keyfile_name('key.json', 'https://www.googleapis.com/auth/androidpublisher') http = httplib2.Http() http = credentials.authorize(http) service = build('androidpublisher', 'v3', http=http) # get edit id edit_request = service.edits().insert(body={}, packageName=package_name) result = edit_request.execute() edit_id = result['id'] print('edit id: %s' % edit_id) result = service.edits().tracks().patch( editId=edit_id, packageName=package_name, track=&quot;production&quot;, body={ &quot;releases&quot;: [ { &quot;versionCodes&quot;: [ target_code ], &quot;userFraction&quot;: 0.4, &quot;status&quot;: &quot;inProgress&quot; } ] } ).execute() print(result) </code></pre> <p>When I run this code, it prints the response below:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;track&quot;: &quot;production&quot;, &quot;releases&quot;: [ { &quot;versionCodes&quot;: [ xxx ], &quot;status&quot;: &quot;inProgress&quot;, &quot;userFraction&quot;: 0.4 } ] } </code></pre> <p>The userFraction was changed! But when I open the Play Console, the user fraction doesn't change. It still shows 30%.</p> <p>Why is the web page still showing 0.3? Is this a caching issue?</p>
<python><android><google-play-developer-api>
2024-02-01 08:06:22
1
2,861
dev.farmer
77,918,532
9,072,753
How to overload class member type depending on argument?
<p>I recently learned I can overload return type using <code>Literal[True]</code> and <code>Literal[False]</code> argument. I am implementing my own <code>subprocess.Popen</code>-ish interface and I am not able to overload <code>self.stdin</code> return type to be <code>IO[bytes]</code> or <code>IO[str]</code> depending on the value of <code>text</code> in the constructor. I am using pyright for static type checking.</p> <p>I have tried the following:</p> <pre><code>from typing import IO, Literal, Optional, overload class MyPopen: @overload def __init__(self, text: Literal[False] = False): self.stdin: Optional[IO[bytes]] @overload def __init__(self, text: Literal[True] = True): self.stdin: Optional[IO[str]] def __init__(self, text: bool = False): self.stdin = None pp = MyPopen(text=True) assert pp.stdin pp.stdin.write(&quot;text&quot;) # should be ok pp.stdin.write(b&quot;text&quot;) # should error pp = MyPopen(text=False) assert pp.stdin pp.stdin.write(&quot;text&quot;) # should error pp.stdin.write(b&quot;text&quot;) # should be ok </code></pre> <p>However, it assumes <code>pp.stdin</code> to be <code>IO[str]</code> twice, and the declaration stdin is obscured:</p> <pre><code>$ pyright /dev/stdin &lt;1.py /dev/stdin /dev/stdin:7:14 - error: Declaration &quot;stdin&quot; is obscured by a declaration of the same name (reportRedeclaration) /dev/stdin:20:1 - error: No overloads for &quot;write&quot; match the provided arguments (reportCallIssue) /dev/stdin:20:16 - error: Argument of type &quot;Literal[b&quot;text&quot;]&quot; cannot be assigned to parameter &quot;__s&quot; of type &quot;str&quot; in function &quot;write&quot;   &quot;Literal[b&quot;text&quot;]&quot; is incompatible with &quot;str&quot; (reportArgumentType) /dev/stdin:24:1 - error: No overloads for &quot;write&quot; match the provided arguments (reportCallIssue) /dev/stdin:24:16 - error: Argument of type &quot;Literal[b&quot;text&quot;]&quot; cannot be assigned to parameter &quot;__s&quot; of type 
&quot;str&quot; in function &quot;write&quot;   &quot;Literal[b&quot;text&quot;]&quot; is incompatible with &quot;str&quot; (reportArgumentType) 5 errors, 0 warnings, 0 informations </code></pre> <p>How can I overload a class instance member's type depending on the value of an argument?</p> <p>Note how it works with subprocess.Popen:</p> <pre><code>import subprocess pp = subprocess.Popen(&quot;&quot;, text=True) assert pp.stdout pp.stdout.write(&quot;text&quot;) # ok pp.stdout.write(b&quot;text&quot;) # error pp = subprocess.Popen(&quot;&quot;, text=False) assert pp.stdout pp.stdout.write(&quot;text&quot;) # error pp.stdout.write(b&quot;text&quot;) # ok </code></pre> <p>I tried reading the subprocess source code <a href="https://github.com/python/cpython/blob/main/Lib/subprocess.py#L1015" rel="nofollow noreferrer">https://github.com/python/cpython/blob/main/Lib/subprocess.py#L1015</a>, but I do not know where the type annotations are stored.</p>
<python><python-typing><pyright>
2024-02-01 08:05:25
2
145,478
KamilCuk
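For reference, `subprocess.Popen` gets this behaviour from its typeshed stub rather than from annotations in `subprocess.py`: the stub makes `Popen` generic over the payload type and overloads construction on the `text` flag. A hedged sketch of the same pattern, overloading `__new__` instead of `__init__` (overloading `__init__` cannot change the type of an attribute, but `__new__` overloads can select the class's type argument):

```python
from typing import IO, Generic, Literal, Optional, TypeVar, overload

# Constrain T to the two payload types; the checker picks one per construction.
T = TypeVar("T", bytes, str)

class MyPopen(Generic[T]):
    stdin: Optional[IO[T]]

    # Overload __new__: each overload returns a differently parameterized
    # MyPopen, which in turn fixes the type of stdin.
    @overload
    def __new__(cls, text: Literal[True]) -> "MyPopen[str]": ...
    @overload
    def __new__(cls, text: Literal[False] = ...) -> "MyPopen[bytes]": ...
    def __new__(cls, text: bool = False):
        self = super().__new__(cls)
        self.stdin = None
        return self

pp = MyPopen(text=True)    # checker infers MyPopen[str]; stdin: Optional[IO[str]]
pp_b = MyPopen(text=False) # checker infers MyPopen[bytes]
```

With this shape, pyright should flag `pp.stdin.write(b"text")` and accept `pp.stdin.write("text")`, and vice versa for `pp_b`; exact diagnostics depend on the checker version, so treat this as a sketch rather than a guarantee.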
77,918,290
3,099,733
How to test if an object is Annotated in Python?
<p>Consider the following code:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, fields from typing import Annotated @dataclass class Foo: x: Annotated[int, 'x'] print(fields(Foo)[0].type) # typing.Annotated[int, 'x'] print(isinstance(fields(Foo)[0].type, Annotated)) # False print(fields(Foo)[0].type is Annotated) # False </code></pre> <p>I want to test whether the type of the <code>x</code> field in a <code>dataclass</code> is an <code>Annotated</code> type, but it always returns <code>False</code>. What's wrong, and how can I fix it?</p>
<python><python-typing>
2024-02-01 07:17:22
1
1,959
link89
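`Annotated[...]` is a typing special form rather than a class, so `isinstance` and `is` checks cannot detect it; `typing.get_origin` is the documented way to test for it, and `get_args` recovers the wrapped type and metadata. A sketch:

```python
from dataclasses import dataclass, fields
from typing import Annotated, get_args, get_origin

@dataclass
class Foo:
    x: Annotated[int, 'x']

tp = fields(Foo)[0].type
print(get_origin(tp) is Annotated)  # → True
print(get_args(tp))                 # → (<class 'int'>, 'x')
```

One caveat: if the module uses `from __future__ import annotations` (PEP 563), `field.type` is a string, and the annotation must first be resolved with `typing.get_type_hints(Foo, include_extras=True)` before `get_origin` can see the `Annotated` wrapper.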
77,917,969
9,542,989
Persist Package Installations in Python-based Docker Container
<p>I have a Python app running in a Docker container. I want to have the flexibility of installing new packages while the container is running. Eventually, I want to do this via the GUI, but right now I am doing it using the following commands,</p> <pre><code>docker exec -it container-name sh pip install &lt;package-name&gt; exit docker restart container-name </code></pre> <p>This is fine for now, but I want these package installations to persist across stops and restarts of the container. How can I do this?</p>
<python><docker>
2024-02-01 05:57:02
1
2,115
Minura Punchihewa
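One common pattern for the Docker question above is to install the extra packages into a directory backed by a named volume and put that directory on `PYTHONPATH`, so installs survive container restarts and re-creation. The sketch below uses illustrative names (`extra-packages`, `my-python-image`, `container-name`); rebuilding the image from an updated requirements file remains the reproducible long-term fix.

```shell
# Create a named volume and mount it where runtime installs will live.
docker volume create extra-packages
docker run -d --name container-name \
  -v extra-packages:/extra-packages \
  -e PYTHONPATH=/extra-packages \
  my-python-image

# Install into the volume-backed directory instead of the image layer:
docker exec -it container-name pip install --target=/extra-packages <package-name>
docker restart container-name
```

`pip install --target` writes the package into the given directory, and Python picks it up via `PYTHONPATH`; anything installed into the image's own site-packages at runtime is lost when the container is removed.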
77,917,904
4,066,329
Pycharm debugger console not autocompleting variables created in the debugger console
<p>I have just upgraded to PyCharm 2023.3.3 Community Edition and am experiencing a weird issue that hasn't occurred before. The system is Windows 10 64-bit, and this has worked fine on this machine before.</p> <p>When I debug a script, hit a breakpoint, and the program is suspended with the debugger window open, autocomplete doesn't seem to work on new variables I create in the debugging console; however, it works on variables created in the script before the breakpoint was hit.</p> <p><strong>e.g., Script</strong></p> <p><a href="https://i.sstatic.net/WfI5q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WfI5q.png" alt="Script and breakpoint" /></a></p> <p><strong>Debugging console showing autocomplete working for variable <em>testing</em> but not <em>test</em></strong></p> <p><a href="https://i.sstatic.net/Qmc9o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qmc9o.png" alt="enter image description here" /></a></p> <p>Any ideas why the debugging console is not autocompleting <em>test</em>, and how I could fix this?</p>
<python><debugging><pycharm><console><code-completion>
2024-02-01 05:37:18
1
684
Sighonide
77,917,840
2,827,181
How to translate PySpark res = notesCollege.select(*[f.mean(c).alias(c) for c in notesCollege.columns]) into Java Spark?
<p>I would like to apply a PySpark statement suggested in <a href="https://stackoverflow.com/questions/53855300/pyspark-calculate-mean-of-all-columns-in-one-line">pyspark calculate mean of all columns in one line</a> question, into my Java program but I don't know how.</p> <p>In the next codes, <code>notesCollege</code> is a <code>Dataset</code> / <code>Dataframe</code>:</p> <pre><code>+----+----+----+-----+----+----+----+----+----+----+----+----+-----+----+ |ORTH|GRAM|EXPR|RECI1|MATH|ANGL|HIST|BIOL|EDNU|ARTS|TECH|EPS |RECI2|EXPO| +----+----+----+-----+----+----+----+----+----+----+----+----+-----+----+ |13.0|10.0|2.0 |4.0 |9.0 |9.0 |8.0 |7.0 |7.5 |1.5 |14.0|10.0|10.5 |13.0| |6.5 |8.0 |8.5 |14.0 |13.0|7.0 |11.0|8.5 |16.0|4.0 |18.0|18.0|16.0 |15.0| |14.0|6.5 |8.0 |5.0 |11.0|8.0 |9.5 |8.0 |18.5|9.5 |14.0|16.5|14.0 |13.0| |13.0|7.5 |9.0 |5.0 |10.0|10.5|10.0|16.0|16.0|11.5|0.0 |11.5|15.0 |18.0| |15.0|7.5 |10.0|14.0 |12.0|11.0|9.0 |11.0|16.5|13.5|16.0|13.0|15.51|17.0| |5.0 |8.0 |5.5 |6.5 |16.0|12.0|9.0 |7.0 |13.5|5.0 |16.0|12.5|13.0 |17.5| |12.0|6.5 |9.0 |16.0 |18.0|13.5|9.0 |10.0|15.0|11.0|16.0|13.5|14.0 |18.0| |8.5 |2.5 |9.0 |13.0 |12.0|9.5 |12.0|13.5|16.5|8.0 |13.0|12.0|14.0 |17.0| |15.5|7.5 |12.5|16.0 |15.0|13.0|12.0|13.5|17.0|14.0|15.0|16.0|15.0 |17.0| |20.0|14.5|16.5|10.5 |18.0|16.5|15.0|10.5|18.0|13.5|12.0|14.5|13.0 |15.0| |6.0 |4.0 |11.0|9.5 |13.0|12.0|11.0|7.5 |17.5|12.0|18.0|13.5|13.0 |17.0| |0.0 |5.5 |6.0 |6.0 |9.0 |3.0 |7.0 |2.0 |12.0|1.5 |15.0|12.5|14.0 |0.0 | |15.5|12.0|12.0|8.0 |17.0|17.0|11.0|4.5 |16.0|9.0 |13.0|5.0 |12.5 |16.5| |15.0|6.5 |11.5|10.0 |13.0|12.5|7.0 |11.0|14.0|7.0 |14.0|11.0|15.0 |13.0| |11.0|9.0 |7.5 |11.0 |12.5|11.0|9.0 |11.5|18.0|8.5 |17.0|15.0|15.0 |15.0| |6.5 |9.0 |10.0|10.0 |10.0|6.5 |7.0 |12.5|18.0|14.5|16.0|13.5|13.0 |18.0| |7.0 |4.5 |6.0 |6.0 |15.0|13.5|11.0|7.0 |7.0 |6.5 |11.0|10.5|13.5 |16.0| |11.5|8.5 |7.5 |12.0 |10.0|7.5 |8.0 |10.0|17.5|7.0 |11.0|16.5|15.5 |14.0| |4.5 |7.5 |9.0 |12.0 |12.0|9.0 |7.0 
|11.5|17.0|12.5|12.0|16.0|13.0 |18.0| |5.5 |8.0 |8.0 |11.0 |10.0|12.0|8.0 |7.0 |12.0|5.0 |18.0|10.5|13.0 |15.0| |16.0|13.5|14.5|9.5 |16.0|17.0|11.5|12.5|19.0|13.5|6.0 |11.0|16.0 |16.0| |5.5 |2.0 |7.5 |14.5 |11.0|10.0|7.0 |8.0 |15.5|8.0 |16.0|13.5|13.0 |15.0| |14.0|8.0 |6.5 |6.5 |10.0|14.5|9.0 |5.0 |9.5 |3.5 |0.0 |18.5|13.0 |0.0 | |5.5 |8.0 |4.5 |15.0 |8.0 |7.5 |5.0 |5.0 |13.5|8.0 |6.0 |9.0 |0.0 |13.0| |11.0|4.0 |10.5|6.0 |11.0|12.0|9.0 |17.0|14.0|12.5|8.0 |10.5|17.5 |15.0| |4.5 |7.0 |10.5|11.5 |14.0|8.5 |7.5 |7.0 |18.0|14.0|13.0|15.5|18.0 |13.0| |9.0 |8.5 |7.5 |10.0 |14.0|11.5|9.0 |15.0|17.5|9.0 |13.0|13.5|17.5 |16.0| +----+----+----+-----+----+----+----+----+----+----+----+----+-----+----+ or as csv: ORTH,GRAM,EXPR,RECI1,MATH,ANGL,HIST,BIOL,EDNU,ARTS,TECH,EPS,RECI2,EXPO 13.00,10.00,2.00,4.00,9.00,9.00,8.00,7.00,7.50,1.50,14.00,10.00,10.50,13.00 6.50,8.00,8.50,14.00,13.00,7.00,11.00,8.50,16.00,4.00,18.00,18.00,16.00,15.00 14.00,6.50,8.00,5.00,11.00,8.00,9.50,8.00,18.50,9.50,14.00,16.50,14.00,13.00 13.00,7.50,9.00,5.00,10.00,10.50,10.00,16.00,16.00,11.50,0.00,11.50,15.00,18.00 15.00,7.50,10.00,14.00,12.00,11.00,9.00,11.00,16.50,13.50,16.00,13.00,15.51,17.00 5.00,8.00,5.50,6.50,16.00,12.00,9.00,7.00,13.50,5.00,16.00,12.50,13.00,17.50 12.00,6.50,9.00,16.00,18.00,13.50,9.00,10.00,15.00,11.00,16.00,13.50,14.00,18.00 8.50,2.50,9.00,13.00,12.00,9.50,12.00,13.50,16.50,8.00,13.00,12.00,14.00,17.00 15.50,7.50,12.50,16.00,15.00,13.00,12.00,13.50,17.00,14.00,15.00,16.00,15.00,17.00 20.00,14.50,16.50,10.50,18.00,16.50,15.00,10.50,18.00,13.50,12.00,14.50,13.00,15.00 6.00,4.00,11.00,9.50,13.00,12.00,11.00,7.50,17.50,12.00,18.00,13.50,13.00,17.00 0.00,5.50,6.00,6.00,9.00,3.00,7.00,2.00,12.00,1.50,15.00,12.50,14.00,0.00 15.50,12.00,12.00,8.00,17.00,17.00,11.00,4.50,16.00,9.00,13.00,5.00,12.50,16.50 15.00,6.50,11.50,10.00,13.00,12.50,7.00,11.00,14.00,7.00,14.00,11.00,15.00,13.00 11.00,9.00,7.50,11.00,12.50,11.00,9.00,11.50,18.00,8.50,17.00,15.00,15.00,15.00 
6.50,9.00,10.00,10.00,10.00,6.50,7.00,12.50,18.00,14.50,16.00,13.50,13.00,18.00 7.00,4.50,6.00,6.00,15.00,13.50,11.00,7.00,7.00,6.50,11.00,10.50,13.50,16.00 11.50,8.50,7.50,12.00,10.00,7.50,8.00,10.00,17.50,7.00,11.00,16.50,15.50,14.00 4.50,7.50,9.00,12.00,12.00,9.00,7.00,11.50,17.00,12.50,12.00,16.00,13.00,18.00 5.50,8.00,8.00,11.00,10.00,12.00,8.00,7.00,12.00,5.00,18.00,10.50,13.00,15.00 16.00,13.50,14.50,9.50,16.00,17.00,11.50,12.50,19.00,13.50,6.00,11.00,16.00,16.00 5.50,2.00,7.50,14.50,11.00,10.00,7.00,8.00,15.50,8.00,16.00,13.50,13.00,15.00 14.00,8.00,6.50,6.50,10.00,14.50,9.00,5.00,9.50,3.50,0.00,18.50,13.00,0.00 5.50,8.00,4.50,15.00,8.00,7.50,5.00,5.00,13.50,8.00,6.00,9.00,0.00,13.00 11.00,4.00,10.50,6.00,11.00,12.00,9.00,17.00,14.00,12.50,8.00,10.50,17.50,15.00 4.50,7.00,10.50,11.50,14.00,8.50,7.50,7.00,18.00,14.00,13.00,15.50,18.00,13.00 9.00,8.50,7.50,10.00,14.00,11.50,9.00,15.00,17.50,9.00,13.00,13.50,17.50,16.00 </code></pre> <p>Here's the Python code I would like to translate:</p> <pre class="lang-py prettyprint-override"><code># f imports org.apache.spark.sql.functions res = notesCollege.select(*[f.mean(c).alias(c) for c in notesCollege.columns]) </code></pre> <p>that calculates means for each of its columns.</p> <p>Starting with this expression,<br /> it looks like I should replace the <code>*</code> inside by some kind of lambda (?),<br /> but I don't see how.</p> <pre class="lang-java prettyprint-override"><code>import static org.apache.spark.sql.functions.*; Dataset&lt;Row&gt; res = notesCollege.select(*[mean(c).alias(c) for c in notesCollege.columns()]) </code></pre> <p>How to make this select statement work with Java too?</p>
<python><java><apache-spark><pyspark><apache-spark-sql>
2024-02-01 05:13:33
1
3,561
Marc Le Bihan
77,917,829
8,516,987
Stripe Proration Webhook Interception on Downgrades
<p>I'm looking for a way to ensure that on a plan downgrade, the user gets charged the new plan's price in the next payment period.</p> <p>If Susan is going from a $20/month to a $10/month plan, she continues to get the benefits of the $20/month plan until the end of her subscription period. However, the Stripe UI says she pays $0 for the next month, and I'm guessing that's because of proration.</p> <p>I've tried to modify the subscription on 'customer.subscription.updated' but the subscription doesn't show the change in the Stripe subscription UI or the customer portal.</p> <p>What else can I do besides building my own user flow for checking out and upgrading/downgrading the plans?</p>
<python><stripe-payments>
2024-02-01 05:08:02
1
642
itsPav
77,917,717
1,601,580
How does one create a HF tokenizer with only a fraction of the vocabulary but without changing the model?
<p>I tried to create a smaller tokenizer:</p> <pre><code>def get_tokenizer_with_subset_of_vocab(tokenizer: GPT2Tokenizer, percentage_to_keep: float) -&gt; GPT2Tokenizer: &quot;&quot;&quot; Create a tokenizer with a fraction of the vocabulary. &quot;&quot;&quot; from copy import deepcopy tok = deepcopy(tokenizer) assert id(tok) != id(tokenizer), &quot;The tokenizer is not a deep copy!&quot; special_tokens = tok.all_special_tokens # to make sure there is always a token set no matter what tok.unk_token = &quot;the&quot; # but &quot;the&quot; is hopefully common enough that it doesn't damage the semantics of the sentence too much, however, putting EOS or something else might screw up the semantics of the sentence # Calculate the number of tokens to keep total_tokens = len(tok) tokens_to_keep_count = int(total_tokens * percentage_to_keep) # Get all non-special tokens vocab = tok.get_vocab() all_tokens = list(vocab.keys()) non_special_tokens = [token for token in all_tokens if token not in special_tokens] assert &quot;the&quot; in non_special_tokens, &quot;The token 'the' is not in the non-special tokens!&quot; # Randomly sample from non-special tokens random_sampled_tokens = random.sample(non_special_tokens, tokens_to_keep_count - len(special_tokens)) # Combine special tokens with the randomly sampled tokens final_tokens_to_keep = set(special_tokens + random_sampled_tokens + [&quot;the&quot;]) assert &quot;the&quot; in non_special_tokens, &quot;The token 'the' is not in the non-special tokens!&quot; assert tok.unk_token == &quot;the&quot;, &quot;The token 'the' is not the unknown token!&quot; # Update the tokenizer's vocab new_vocab = {token: idx for token, idx in vocab.items() if token in final_tokens_to_keep} tok.vocab = new_vocab tok.ids_to_tokens = {v: k for k, v in vocab.items()} return tok </code></pre> <p>but the unit test doesn't work as expected but code looks right:</p> <pre><code>def _test0_does_hacky_fraction_tokenizer_work(): # - have a tokenizer with only 
the special token &quot;the&quot;, check everything is &quot;the&quot; text_seq: str = &quot;the cat is nice&quot; tokenizer = GPT2Tokenizer.from_pretrained('gpt2') new_tokenizer = get_tokenizer_with_subset_of_vocab(tokenizer, 1/tokenizer.vocab_size) # encode to tokens then decode to text tokens = new_tokenizer.encode(text_seq) llm_seq_txt: str = new_tokenizer.decode(tokens) assert llm_seq_txt == &quot;the the the the&quot;, f'Error: {llm_seq_txt=}' # have a tokenizer with only the special token &quot;the&quot; and &quot;cat&quot;, check the-&gt;the anything_else-&gt;the and cat-&gt;cat text_seq: str = &quot;the cat is nice&quot; tokenizer = GPT2Tokenizer.from_pretrained('gpt2') new_tokenizer = get_tokenizer_with_subset_of_vocab(tokenizer, 1/tokenizer.vocab_size) # encode to tokens then decode to text tokens = new_tokenizer.encode(text_seq) llm_seq_txt = new_tokenizer.decode(tokens) assert llm_seq_txt == &quot;the cat the the&quot;, f'Error: {llm_seq_txt=} </code></pre> <p>why?</p> <p>Error</p> <pre><code>Exception has occurred: AssertionError (note: full exception trace is shown but execution is paused at: _run_module_as_main) Error: llm_seq_txt='the cat is nice' File &quot;/lfs/ampere9/0/brando9/beyond-scale-language-data-diversity/src/diversity/embeddings/div_act_based.py&quot;, line 199, in _test0_does_hacky_fraction_tokenizer_work assert llm_seq_txt == &quot;the the the the&quot;, f'Error: {llm_seq_txt=}' File &quot;/lfs/ampere9/0/brando9/beyond-scale-language-data-diversity/src/diversity/embeddings/div_act_based.py&quot;, line 620, in &lt;module&gt; _test0_does_hacky_fraction_tokenizer_work() File &quot;/lfs/ampere9/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/runpy.py&quot;, line 86, in _run_code exec(code, run_globals) File &quot;/lfs/ampere9/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/runpy.py&quot;, line 196, in _run_module_as_main (Current frame) return _run_code(code, main_globals, None, AssertionError: Error: llm_seq_txt='the cat is 
nice' </code></pre> <p><a href="https://discuss.huggingface.co/t/how-does-one-create-a-hf-tokenizer-with-only-a-fraction-of-the-vocabulary-but-without-changing-the-model/71418" rel="nofollow noreferrer">https://discuss.huggingface.co/t/how-does-one-create-a-hf-tokenizer-with-only-a-fraction-of-the-vocabulary-but-without-changing-the-model/71418</a></p>
<python><huggingface-transformers>
2024-02-01 04:24:00
1
6,126
Charlie Parker
77,917,599
11,053,343
Zope 5 Log To Error Log From Python Script
<p>Normally when an exception occurs, Zope automatically logs it to the error_log via the Products.SiteErrorLog package, however I have a python script with a try/except block because I need to return json whether or not the script succeeds. Unfortunately, the exception block overrides Zope's default logging to the error_log, and since I cannot use raise, because it halts execution of the rest of the script (including, a finally block or the rest of the exception block), I need a way to make Zope still log the error to the error_log.</p> <p>From what I can tell this is done via the SiteErrorLog.raising() method, and since in restricted python I can't find a way to allow the use of the raising method, even though I have allowed the entire SiteErrorLog module and the SiteErrorLog class, I used an external method, but can't get it to work.</p> <p>For reference, in my <code>__init__.py</code> to allow the module and class I used:</p> <pre><code>allow_module(Products.SiteErrorLog.SiteErrorLog) from Products.SiteErrorLog.SiteErrorLog import SiteErrorLog allow_class(SiteErrorLog) </code></pre> <p>which does allow the SiteErrorLog class and module but does not allow the use of the raising method from within the ZMI via a python script.</p> <p>If using SiteErrorLog is not the correct answer and there's an easier way, I'm all ears.</p> <p>Here's an example of my python script:</p> <p>import json</p> <pre><code>try: context.my_zsql_method() def returnjson(): value = {&quot;output&quot;: &quot;success&quot;} return json.dumps(value) return returnjson() except Exception as e: def returnjson(): value = {&quot;output&quot;: &quot;failure&quot;} return json.dumps(value) return returnjson() </code></pre> <p>I do not want a custom error message or anything other than the standard exception error that is normally logged to the zope error_log, and I only want it to be logged in the except block.</p> <p>As I mentioned, I tried writing an external method so I could use the 
SiteErrorLog.raising() like this:</p> <pre><code>from Products.SiteErrorLog.SiteErrorLog import SiteErrorLog def logError(error): errorType = type(error).__name__ errorStr = str(error) error_info = (errorType, errorStr, None) SiteErrorLog.raising(None, error_info) </code></pre> <p>and updating my except block like this:</p> <pre><code>except Exception as e: context.logError(e) def returnjson(): value = {&quot;output&quot;: &quot;failure&quot;} return json.dumps(value) return returnjson() </code></pre> <p>but nothing logs to the error log. SiteErrorLog.raising() requires 2 parameters (self, info) and supposedly calling the external method is automatically supposed to pass self, but that doesn't seem to be happening. I've tried passing context.error_log into the external method (doesn't work), as well using SiteErrorLog as self (doesn't work).</p> <p>I saw an answer to a question about using both raise and return from a long time ago saying to use a finally block, so of course, I also tried:</p> <pre><code>except Exception as e: raise finally: def returnjson(): value = {&quot;output&quot;: &quot;failure&quot;} return json.dumps(value) return returnjson() </code></pre> <p>even though I knew that was incorrect. Of course, as soon as raise is called, the rest of the script stops all execution so the json is never returned, but I thought I'd at least give it a try.</p> <p>Again, I do not want to log to a log file, write a custom log message, or anything else. I just want the exception error to be logged to zope's error log as normal, while still allowing me to return json.</p> <p>Seems like this should be a lot simpler than everything I'm doing, but I can't find an answer anywhere and would love some assistance. Thanks!</p>
<python><logging><zope>
2024-02-01 03:41:08
2
1,515
kittonian
77,917,567
11,277,108
AttributeError for compare_metadata with multiple schemas
<p>I'm trying to use the <code>compare_metadata</code> function with the following setup:</p> <p><strong>base.py</strong></p> <pre><code>from sqlalchemy.orm import DeclarativeBase class Base(DeclarativeBase): pass </code></pre> <p><strong>workspace.py</strong></p> <pre><code>from sqlalchemy import Column, Integer, String from base import Base class Workspace(Base): __table_args__ = {&quot;schema&quot;: &quot;workspace&quot;} __tablename__ = 'workspace_table' id = Column(Integer, primary_key=True) name = Column(String(10)) </code></pre> <p><strong>host.py</strong></p> <pre><code>from sqlalchemy import Column, Integer, String from base import Base class Host(Base): __table_args__ = {&quot;schema&quot;: &quot;host&quot;} __tablename__ = 'host_table' id = Column(Integer, primary_key=True) ip = Column(String(10)) </code></pre> <p><strong>main.py</strong></p> <pre><code>import sqlalchemy as sa from alembic.autogenerate import compare_metadata from alembic.migration import MigrationContext from sqlalchemy.schema import CreateSchema import host import workspace from base import Base def main(): conn_str = &quot;mysql+mysqlconnector://root:&lt;my_password&gt;@localhost&quot; engine = sa.create_engine(conn_str) connection = engine.connect() connection.execute(CreateSchema(&quot;workspace&quot;, if_not_exists=True)) connection.execute(CreateSchema(&quot;host&quot;, if_not_exists=True)) Base.metadata.drop_all(bind=engine) Base.metadata.create_all(bind=engine) mc = MigrationContext.configure(engine.connect()) diff_list = compare_metadata(mc, Base.metadata) print(diff_list) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>However, running <code>main.py</code> results in the following error:</p> <pre><code>AttributeError: 'NoneType' object has no attribute 'replace' </code></pre> <p>Tracking this back it seems that SQLAlchemy can't find a schema.</p> <p>Reading around I have tried:</p> <pre><code>mc = MigrationContext.configure(engine.connect(), 
opts={&quot;include_schemas&quot;: True}) </code></pre> <p>But same error in the same place.</p> <p>Any ideas on where I'm going wrong?</p> <p>Details:</p> <ul> <li>macOS Sonoma</li> <li>python 3.11</li> <li>SQLAlchemy 2.0.7</li> <li>alembic 1.13.1</li> <li>mysql-connector-python 8.2.0</li> </ul>
<python><sqlalchemy><alembic>
2024-02-01 03:29:16
1
1,121
Jossy
77,917,429
7,662,164
Indexing multiple elements in a multidimensional numpy array
<p>I would like to extract elements of a given multidimensional numpy array, using another array of indices. However it doesn't behave in the way I expected. Below is a simple example:</p> <pre><code>import numpy as np a = np.random.random((3, 3, 3)) idx = np.asarray([[0, 0, 0], [0, 1, 2]]) b = a[idx] print(b.shape) # expect (2, ), got (2, 3, 3, 3) </code></pre> <p>Why is that the case? And how should I modify the code so that <code>b</code> contains only two elements: <code>a[0, 0, 0]</code> and <code>a[0, 1, 2]</code>?</p>
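<p>This happens because NumPy interprets <code>a[idx]</code> as indexing only the first axis with the whole <code>idx</code> array, so each row of <code>idx</code> selects full <code>(3, 3)</code> slabs instead of single elements. The standard advanced-indexing fix is to supply one index array per axis, sketched here:</p>

```python
import numpy as np

a = np.random.random((3, 3, 3))
idx = np.asarray([[0, 0, 0], [0, 1, 2]])  # each row is one (i, j, k) triple

# One index array per axis picks out individual elements:
b = a[idx[:, 0], idx[:, 1], idx[:, 2]]

# Equivalent shorthand: transpose so each row indexes one axis
b2 = a[tuple(idx.T)]
```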
<python><arrays><numpy><indexing><numpy-ndarray>
2024-02-01 02:34:12
1
335
Jingyang Wang
77,917,402
5,896,591
When does Python require parentheses around "not x"?
<p>I get the following result in both Python 2.7 and Python 3.5. Why do the parentheses make a difference?</p> <pre><code>&gt;&gt;&gt; True == (not False) True &gt;&gt;&gt; True == not False File &quot;&lt;stdin&gt;&quot;, line 1 True == not False ^ SyntaxError: invalid syntax &gt;&gt;&gt; </code></pre>
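<p>The short answer is operator precedence: <code>not</code> binds more loosely than the comparison operators, so the grammar never allows a bare <code>not</code> on the right-hand side of <code>==</code>; parentheses are required there. On the left-hand side no parentheses are needed, but the comparison is evaluated first. A small demonstration:</p>

```python
# Right-hand side: `not` after `==` is a grammar error without parentheses.
ok = compile("True == (not False)", "<demo>", "eval")

try:
    compile("True == not False", "<demo>", "eval")
    parses = True
except SyntaxError:
    parses = False

# Left-hand side: allowed, but parsed as `not (False == True)` because
# `==` binds tighter than `not`.
left = eval("not False == True")
```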
<python>
2024-02-01 02:23:17
1
4,630
personal_cloud
77,917,342
2,175,052
How do I protect my FastAPI REST-ful services by making sure requests are coming from only a certain domain (front-end application)?
<p>I have an application stack as follows.</p> <ul> <li>Backend: FastAPI, REST-ful service; eg r.domain.com</li> <li>Frontend: Angular 17 application; eg w.domain.com</li> </ul> <p>On the backend, I want to ensure that a response is generated only for requests coming from the frontend application (eg w.domain.com). My current strategy is to look at the incoming request headers (eg referrer, host, user-agent) and reject. I have something like the following defined (middleware).</p> <pre class="lang-py prettyprint-override"><code>if headers_not_ok(request): return JSONResponse(content='BAD_REQUEST', status_code=403) </code></pre> <p>This approach, I found, is not strong enough to protect or only allow processing for requests that comes from the frontend app. In Google Chrome, you can inspect the network and copy the request as a curl command, and executing such copied command will bypass checking based on headers (since the correct headers are declared as a part of the request).</p> <p>I thought maybe all backends are susceptible to this problem, but apparently not. For example, if I am on Facebook and I copy the request to a curl command and execute it in the terminal, Facebook's backend logic is able to detect that I'm not logged in. I wonder how are they achieving this feat and how I can mimic the same for my situation.</p> <p>In my situation, there is no login yet. It is opened to the public. However, the main goal is to not respond to any request that is not coming from the frontend application.</p> <p>Any help is appreciated.</p>
<python><angular><rest><cors><fastapi>
2024-02-01 01:56:27
0
8,955
Jane Wayne
77,917,266
1,944,636
pyspark splitting words into rows
<p>I'm trying out pyspark and looking at this example: <a href="https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html</a></p> <p>I got the code working easy enough, so the guide is pretty good, but one part has got me scratching my head which is the split part</p> <pre><code>words = lines.select( explode( split(lines.value, &quot; &quot;) ).alias(&quot;word&quot;) ) </code></pre> <p>I get what it is doing, it's splitting each row by space and turning each word into a row. But I would have expected something along the lines of this</p> <pre><code>words = lines.explode(lambda l: l.split(&quot; &quot;)).alias(&quot;word&quot;) </code></pre> <p>In the code they have lines.value, lines is the dataframe and value is the column name. Why are we pulling the value from the data frame? How does it know what row we are pulling the value from? Shouldn't we have a row object (l in the second example) and be pulling value from that?</p> <p>My other question is how is it that dataframe has a function/property called &quot;value&quot;. If I change the column name to &quot;bob&quot; then dataframe suddenly appears to have a function/property &quot;bob&quot;. Is this a python feature where objects can have dynamic functions?</p> <p>It appears the actual split function is not the python split function but a function provided by pyspark libraries (there is an import at the top). Why did they do it this way?</p>
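<p>On the two sub-questions: yes, this is a Python feature — PySpark's <code>DataFrame</code> defines <code>__getattr__</code>, so any attribute name that is not a real attribute is resolved as a column reference, which is why <code>lines.value</code> (or <code>lines.bob</code> after a rename) "appears" dynamically. And <code>split</code>/<code>explode</code> come from <code>pyspark.sql.functions</code>: they build column expressions that Spark executes on the JVM executors rather than calling a Python lambda per row, which is why no row object is ever handled in Python. A toy illustration of the <code>__getattr__</code> mechanism (hypothetical classes, not PySpark source):</p>

```python
# Minimal sketch of dynamic column access via __getattr__.
class Column:
    def __init__(self, name):
        self.name = name

class MiniFrame:
    def __init__(self, columns):
        self._columns = set(columns)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        if name in self._columns:
            return Column(name)
        raise AttributeError(name)

df = MiniFrame(["value", "bob"])
col = df.value        # resolved dynamically; no `value` attribute was defined

try:
    df.missing
    missing_raises = False
except AttributeError:
    missing_raises = True
```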
<python><pyspark>
2024-02-01 01:28:08
0
1,077
MikeKulls
77,917,257
1,497,385
Python SQLite database is locked in a single-thread interactive process despite closing all SQLAlchemy sessions/engines/connections
<p>There are many SO questions with same issues. However, all other answers focus on user errors that are not valid for me. In most other SO question, the situation involves 2+ process conflicting over the DB file. In my case, there is only a single process conflicting with itself. If you think about it, there is no reason why a single process cannot lose a file lock. Perhaps, SQLAlchemy or sqlite driver just creates a lock and forgets to ever release it.</p> <p>My database file is on local internal HDD (not network). I work from IPython/Jupyter environment on Windows via VSCode Interactive. There is just a single python.exe process. I do not use any viewers/consoles/apps that may access the DB. I do not use multi-threading. I have a single interactive session. I have an sqlite database file. The file is FS-locked and cannot be moved. As per Process Explorer, the file is opened by a single python process which is the <code>python.exe</code> process running my ipykernel interactive session: <code>c:\Users\xxx\AppData\Local\Programs\Python\Python311\python.exe -m ipykernel_launcher --ip=127.0.0.1 --stdin=9013 --control=9011 &quot;--transport=\&quot;tcp\&quot;&quot; --iopub=9014 --f=c:\Users\xxx\AppData\Roaming\jupyter\runtime\kernel-v2-xxx.json&quot;</code></p> <p>I've tried calling <code>sqlite.orm.close_all_sessions()</code> and <code>engine.dispose()</code>. This does not unlock the file.</p> <p>I can create engine/session and query the DB. But when I close all sessions and dispose the engine, the DB is still locked.</p> <p>Today's lock has occurred when I tried to add index to the DB and it said &quot;database is locked&quot;. It's been locked since then. DB was created via SQLAlchemy ORM with no fancy changes or features.</p> <p><strong>How do I release the DB lock that my python.exe process is holding without killing the process?</strong></p> <p>P.S. <strong>Is there any way to list all Python objects in my process that own file locks?</strong></p>
<python><sqlite><sqlalchemy>
2024-02-01 01:24:27
0
6,866
Ark-kun
77,917,202
7,662,164
Tensor power of 1D numpy array
<p>I would like to implement a function that takes what I call the &quot;tensor power&quot; of an one-dimensional numpy array:</p> <pre><code>def tensor_pow(x: np.ndarray, n: int) -&gt; np.ndarray: # code goes here return y </code></pre> <p>For any numpy array <code>x</code> of shape <code>(Nx)</code>, the output <code>y</code> should be a numpy array of shape <code>(Nx, ..., Nx)</code> where there are <code>n</code> copies of <code>Nx</code>, and its entries are defined as <code>y[i, j, ..., k] = x[i] * x[j] * ... * x[k]</code>. A simple example would be:</p> <pre><code>y = tensor_pow(np.arange(3), 2) # an array of shape (3, 3) y[1, 2] == 2 # returns True </code></pre> <p>Is there any easy way to achieve this?</p>
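<p>One compact way is to fold <code>numpy.multiply.outer</code> over <code>n</code> copies of the array — each step adds one axis, giving exactly <code>y[i, j, ..., k] = x[i] * x[j] * ... * x[k]</code>:</p>

```python
import functools
import numpy as np

def tensor_pow(x: np.ndarray, n: int) -> np.ndarray:
    """n-fold outer product of a 1D array with itself."""
    return functools.reduce(np.multiply.outer, [x] * n)

y = tensor_pow(np.arange(3), 2)   # shape (3, 3)
```

<p>An equivalent broadcasting spelling for fixed small <code>n</code> is <code>x[:, None] * x[None, :]</code>; the <code>reduce</code> version generalises to any <code>n &gt;= 1</code>.</p>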
<python><arrays><numpy><tensor><broadcast>
2024-02-01 01:00:31
1
335
Jingyang Wang
77,917,146
9,670,371
Failure in converting the SparkDF to Pandas DF
<p>I have a Spark code running on a Dataproc cluster that reads a table from BigQuery into a Spark dataframe. In this code, there is a step where I need to perform some data processing using pandas dataframe logic. However, when I try to convert the Spark dataframe to a pandas dataframe, I encounter an error that I am unable to resolve. It's worth noting that this code works fine on Hadoop without any issues.</p> <p>I would appreciate any assistance or guidance in resolving this issue with the conversion from Spark dataframe to pandas dataframe.</p> <pre><code>df=df.toPandas() File &quot;/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/pandas/conversion.py&quot;, line 141, in toPandas File &quot;/opt/conda/default/lib/python3.8/site-packages/pandas/core/frame.py&quot;, line 2317, in from_records mgr = arrays_to_mgr(arrays, columns, result_index, typ=manager) File &quot;/opt/conda/default/lib/python3.8/site-packages/pandas/core/internals/construction.py&quot;, line 153, in arrays_to_mgr return create_block_manager_from_column_arrays( File &quot;/opt/conda/default/lib/python3.8/site-packages/pandas/core/internals/managers.py&quot;, line 2142, in create_block_manager_from_column_arrays mgr._consolidate_inplace() File &quot;/opt/conda/default/lib/python3.8/site-packages/pandas/core/internals/managers.py&quot;, line 1829, in _consolidate_inplace self.blocks = _consolidate(self.blocks) File &quot;/opt/conda/default/lib/python3.8/site-packages/pandas/core/internals/managers.py&quot;, line 2272, in _consolidate merged_blocks, _ = _merge_blocks( File &quot;/opt/conda/default/lib/python3.8/site-packages/pandas/core/internals/managers.py&quot;, line 2297, in _merge_blocks new_values = np.vstack([b.values for b in blocks]) # type: ignore[misc] File &quot;&lt;__array_function__ internals&gt;&quot;, line 180, in vstack File &quot;/opt/conda/default/lib/python3.8/site-packages/numpy/core/shape_base.py&quot;, line 282, in vstack return _nx.concatenate(arrs, 0) File 
&quot;&lt;__array_function__ internals&gt;&quot;, line 180, in concatenate numpy.core._exceptions.MemoryError: Unable to allocate 69.9 MiB for an array with shape (4289, 2137) and data type int64 </code></pre> <p>The code I am working on processes a relatively small amount of data, with the maximum number of rows being around 10,000. However, the main challenge lies in the fact that the data source has over 4,000 columns. In this particular example, the code failed while processing 2,137 rows of data.</p>
<python><pandas><dataframe><apache-spark><google-cloud-dataproc>
2024-02-01 00:36:12
1
415
trougc
77,917,091
5,942,100
unique melt and pivot crosstab format with a groupby using pandas
<p>I have a dataset where I would like for some values to become column headers, as well as a crosstab format.</p> <p>data</p> <pre><code>year qtr ID type growth re nondd_re se_re or 2024 2024Q1 NY aa 3.18 1.14 0 0 0 2024 2024Q2 NY aa 2.1 1.14 0 0 0 2024 2024Q1 NY dd 6.26 3.07 3.07 0 0 2024 2024Q2 NY dd 4.13 3.07 3.07 0 0 2024 2024Q1 CA aa 0 0 0 0 0 2024 2024Q2 CA aa 0.03 0 0 0 0 2024 2024Q1 CA dd 0 0 0 0 0 2024 2024Q2 CA dd 0.06 0 0 0 0 </code></pre> <p>desired</p> <pre><code>ID type type 2024Q1 2024Q2 NY growth dd 6.26 4.13 NY nond_ re dd 3.07 3.07 NY se_re dd 0 0 NY or dd 0 0 NY re dd 3.07 3.07 NY growth aa 3.18 2.1 NY nond_ re aa 0 0 NY se_re aa 0 0 NY or aa 0 0 NY re aa 1.14 1.14 CA growth dd 0 0.6 CA nond_ re dd 0 0 CA se_re dd 0 0 CA or dd 0 0 CA re dd 0 0 CA growth aa 0 0.3 CA nond_ re aa 0 0 CA se_re aa 0 0 CA or aa 0 0 CA re aa 0 0 </code></pre> <p>doing</p> <pre><code># Melt the dataframe to transform metrics columns into rows melted_df = df.melt(id_vars=[&quot;year&quot;, &quot;qtr&quot;, &quot;ID&quot;, &quot;type&quot;], var_name=&quot;type&quot;, value_name=&quot;value&quot;) # Pivot the melted dataframe pivot_df = melted_df.pivot_table(index=[&quot;ID&quot;,&quot;type&quot;], columns=&quot;qtr&quot;, values=&quot;value&quot;, fill_value=0) # Reset index to turn multi-index into columns pivot_df = pivot_df.reset_index() </code></pre> <p>The issue is that all of the values aren't getting. The above code produces output with missing values Any suggestion is appreciated</p>
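<p>A plausible culprit (worth checking against the full data): <code>var_name=&quot;type&quot;</code> in the melt collides with the existing <code>type</code> id column, so the metric names and the <code>aa</code>/<code>dd</code> labels end up fighting over one column name and values disappear in the pivot. Using a distinct name such as <code>metric</code> (a made-up name here) keeps both, as this reduced sketch shows:</p>

```python
import pandas as pd

# Reduced version of the posted data (two metrics only)
df = pd.DataFrame({
    "year": [2024] * 4,
    "qtr": ["2024Q1", "2024Q2", "2024Q1", "2024Q2"],
    "ID": ["NY"] * 4,
    "type": ["aa", "aa", "dd", "dd"],
    "growth": [3.18, 2.1, 6.26, 4.13],
    "re": [1.14, 1.14, 3.07, 3.07],
})

# var_name must not collide with the existing "type" id column
melted = df.melt(id_vars=["year", "qtr", "ID", "type"],
                 var_name="metric", value_name="value")

pivot = (melted.pivot_table(index=["ID", "metric", "type"],
                            columns="qtr", values="value", fill_value=0)
               .reset_index())
```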
<python><pandas><numpy>
2024-02-01 00:15:42
2
4,428
Lynn
77,916,993
3,433,802
Why won't a custom font work with ezdxf on Ubuntu?
<p>I am running this test program.</p> <pre><code>import ezdxf doc = ezdxf.new(&quot;AC1032&quot;) doc.styles.add(&quot;IAmOnlineWithU&quot;, font=&quot;IAmOnlineWithU-o96q.ttf&quot;) msp = doc.modelspace() t = msp.add_text(&quot;TEST TEXT&quot;, height=1, dxfattribs = {&quot;style&quot;: &quot;IAmOnlineWithU&quot;}) t.set_placement((0,0)) with open(&quot;out.dxf&quot;, &quot;w&quot;) as f: doc.write(f) </code></pre> <p>I am hoping to use this single-line connected font I found to draw the text for CNC purposes, but when I do this, the font seems to have no effect. Some default font is used.</p> <p>I am using Ubuntu 23.10, and I have installed the <a href="https://www.fontspace.com/i-am-online-with-u-font-f11732" rel="nofollow noreferrer">font which I found here</a> on my system. Here is the output of my font list command <code>fc-list | grep IAm</code> :</p> <pre><code>/home/mmachenry/.local/share/fonts/IAmOnlineWithU-o96q.ttf: I am online with u:style=line </code></pre> <p>So it seems like it should be found. I checked ezdxf's docs on where it looks for these fonts and .local/share/fonts was one of them.</p> <p>I have tried adding this to my code and it had not effect:</p> <pre><code>from ezdxf.fonts import fonts fonts.build_system_font_cache() </code></pre> <p>I'm able to find the font in my system through the ezdxf Python package as well. This is a python REPL sessions showing that:</p> <pre><code>&gt;&gt;&gt; from ezdxf.fonts.fonts import find_font_face &gt;&gt;&gt; f = find_font_face(&quot;IAmOnlineWithU-o96q.ttf&quot;) &gt;&gt;&gt; f.family 'I am online with u' </code></pre> <p>I'm wondering if this could actually be made to be a local to the repository search so I don't have to install it on my machine but that's a different question.</p> <p>edit: I should also add that I am using LibreCAD to view the resultant DXF and I have added my local font directory setup properly in the Application Settings.</p>
<python><ezdxf>
2024-01-31 23:43:48
0
1,982
mmachenry
77,916,633
4,590,499
Understanding Gunicorn Worker and Thread Configuration for Flask-SocketIO with Eventlet
<p>I'm developing a Flask web application that uses SocketIO and am facing issues with configuring Gunicorn, particularly with the number of workers and threads. My server runs on Ubuntu with 32GB DDR4 memory and an i7 7700k CPU. I've tried several configurations, but I'm puzzled by the performance results and SocketIO errors I'm encountering. Here's what happens with different commands:</p> <ol> <li><strong>Basic Gunicorn Command:</strong></li> </ol> <pre><code>gunicorn --bind 0.0.0.0:5000 wsgi:app Result: Extremely poor performance and SocketIO errors. </code></pre> <ol start="2"> <li><strong>Increased Workers and Threads:</strong></li> </ol> <pre><code>gunicorn -w 12 -t 3 --bind 0.0.0.0:5000 wsgi:app Result: Great performance but still SocketIO errors. </code></pre> <ol start="3"> <li><strong>Using Eventlet with One Worker:</strong></li> </ol> <pre><code>gunicorn -k eventlet -w 1 -t 100 --bind 0.0.0.0:5000 wsgi:app Result: Great performance and no SocketIO errors. </code></pre> <ol start="4"> <li><strong>Using Eventlet with Two Workers:</strong></li> </ol> <pre><code>gunicorn -k eventlet -w 2 -t 100 --bind 0.0.0.0:5000 wsgi:app Result: Great performance but back to SocketIO errors. </code></pre> <p>Given the above scenarios, I have several questions:</p> <ol> <li><strong>Performance with Single Worker in Eventlet:</strong> Why am I getting good performance in scenario 3 with only a single Eventlet worker? How does this single worker manage to handle the load effectively?</li> <li><strong>SocketIO Errors with Multiple Workers:</strong> Why do SocketIO errors reappear when I increase the number of workers (scenario 4) while using Eventlet? If <code>eventlet -w 1</code> is allocating single worker for eventlet, then is my program being allocated extra workers automatically? 
If not then why is my app performing good?</li> <li><strong>Optimal Configuration for Performance and Stability:</strong> Based on my server specs and the behavior observed, what would be the optimal Gunicorn configuration for balancing performance and stability with Flask-SocketIO and Eventlet?</li> </ol>
<python><flask><socket.io><gunicorn><eventlet>
2024-01-31 21:56:24
1
964
Daqs
77,916,345
23,260,297
Write multiple dataframes to multiple sheets in excel file
<p>I have 4 different programs that read 4 different types of csv files to get the data in the same format. My goal is to export dataframes from each program into a single excel workbook onto 4 seperate sheets. Each time I write a dataframe to excel it overwrites the data that was just exported there.</p> <p>this is the code I have which is the same for each program:</p> <pre><code>rowPos = 1 with pd.ExcelWriter(f&quot;Y:\HedgeFundRecon\JAron\Output\JAronOutput.xlsx&quot;, datetime_format='mm/dd/yy') as writer: for item in df_list: item.to_excel(writer, sheet_name='JAron', startrow=rowPos) rowPos += (len(item) + 2) </code></pre> <p>I changed the code to this to check if the file does not exists then just write to the file, but if it does exist then overwrite the sheet:</p> <pre><code>rowPos = 1 if(os.path.exists(path)): with pd.ExcelWriter(path, engine='openpyxl', mode='a', if_sheet_exists='replace', datetime_format='dd/mm/yyyy') as writer: for item in df_list: item.to_excel(writer, sheet_name='JAron', startrow=rowPos) rowPos += (len(item) + 2) else: with pd.ExcelWriter(path, engine='openpyxl', mode='w', date_format='dd/mm/yyyy') as writer: for item in df_list: item.to_excel(writer, sheet_name='JAron', startrow=rowPos) rowPos += (len(item) + 2) </code></pre> <p>How would I alter this code to stop the data from being overwritten? I tried putting the mode to append but xlsxwriter does not support that. Any suggestions?</p>
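<p>For four independent programs targeting four sheets, one common pattern is: give each program its own <code>sheet_name</code>, let the first create the workbook with <code>mode=&quot;w&quot;</code>, and have the others open it with <code>mode=&quot;a&quot;</code> and <code>if_sheet_exists=&quot;replace&quot;</code> (openpyxl engine, pandas &gt;= 1.3) so each run replaces only its own sheet and leaves the rest intact. A self-contained sketch with made-up sheet names:</p>

```python
import os
import tempfile

import pandas as pd

df_a = pd.DataFrame({"x": [1, 2]})
df_b = pd.DataFrame({"y": [3, 4]})

path = os.path.join(tempfile.mkdtemp(), "out.xlsx")

# "First program": create the workbook with its own sheet.
with pd.ExcelWriter(path, engine="openpyxl", mode="w") as writer:
    df_a.to_excel(writer, sheet_name="SheetA", index=False)

# "Second program": append, replacing only its own sheet if it exists.
with pd.ExcelWriter(path, engine="openpyxl", mode="a",
                    if_sheet_exists="replace") as writer:
    df_b.to_excel(writer, sheet_name="SheetB", index=False)

sheets = pd.read_excel(path, sheet_name=None)   # both sheets survive
```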
<python><pandas><excel>
2024-01-31 20:51:55
1
2,185
iBeMeltin
77,916,261
5,728,714
How to create a grid containing all possible parameters for a function?
<p>I have the following parameters:</p> <pre class="lang-py prettyprint-override"><code>trend_types = ['add', 'mul'] seasonal_types = ['add', 'mul'] boxcox_options = [True, False, 'log'] </code></pre> <p>To be used in the following function:</p> <pre class="lang-py prettyprint-override"><code>model = sm.tsa.ExponentialSmoothing(df, trend=trend_type, seasonal=seasonal_type, use_boxcox=boxcox) fittedModel = model.fit() </code></pre> <p>And I would like to generate an array containing all possible combinations, like:</p> <pre class="lang-json prettyprint-override"><code>[ {trend_type: 'add', seasonal_type: 'add', boxcox: True}, {trend_type: 'add', seasonal_type: 'add', boxcox: False}, {trend_type: 'add', seasonal_type: 'add', boxcox: Log}, {trend_type: 'add', seasonal_type: 'mul', boxcox: True}, {trend_type: 'add', seasonal_type: 'mul', boxcox: False}, {trend_type: 'add', seasonal_type: 'mul', boxcox: Log}, [..] ] </code></pre> <p>That way I can use such an array like:</p> <pre class="lang-py prettyprint-override"><code>for parameter in parameters: model = sm.tsa.ExponentialSmoothing(df, trend=parameter.trend_type, seasonal=parameter.seasonal_type, use_boxcox=parameter.boxcox) fittedModel = model.fit() </code></pre> <p>Is there a way to generate such an array?</p>
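<p><code>itertools.product</code> does exactly this — it yields every combination of the option lists, which a comprehension can wrap into dicts (note the entries are dicts, so access is <code>parameter[&quot;trend_type&quot;]</code> rather than attribute access):</p>

```python
import itertools

trend_types = ['add', 'mul']
seasonal_types = ['add', 'mul']
boxcox_options = [True, False, 'log']

# 2 * 2 * 3 = 12 combinations, in the nested-loop order of the inputs
parameters = [
    {"trend_type": t, "seasonal_type": s, "boxcox": b}
    for t, s, b in itertools.product(trend_types, seasonal_types, boxcox_options)
]
```

<p>If scikit-learn is already a dependency, <code>sklearn.model_selection.ParameterGrid</code> offers the same expansion from a dict of option lists.</p>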
<python>
2024-01-31 20:34:06
1
1,877
Mathias Hillmann
77,916,213
1,214,800
Python 3.11 + Mypy generic type hinting that can accept both types and type definitions
<p>I'm trying to write an arg getter with generics that casts the result to the correct type based on a generic type input:</p> <pre><code>T = TypeVar(&quot;T&quot;) R = TypeVar(&quot;R&quot;) P = ParamSpec(&quot;P&quot;) GenericCallable = Callable[P, R] def get_arg( name_or_pos: str | int, # 0-based index typ: type[T], args: tuple[Any, ...] = (), kwargs: dict[str, Any] = {}, fallback: T | None = None, throw: bool = False, ) -&gt; T: found: Any = None if isinstance(name_or_pos, str): name = name_or_pos found = kwargs.get(name, None) if found is None: if throw: raise RuntimeError( f&quot;Could not find keyword argument '{name}' of type '{typ!r}' in&quot; f&quot; {kwargs!r}&quot; ) found = fallback elif isinstance(name_or_pos, int): pos = name_or_pos if pos &gt;= len(args): if throw: raise RuntimeError( f&quot;Could not find argument at (0-based) pos={pos}, not enough&quot; f&quot; arg(s) (only have {len(args)}: {', '.join(*args)}&quot; ) found = fallback # type: ignore found = args[pos] # type: ignore if found is None: found = fallback return cast(T, found) </code></pre> <p>Given the following test cases:</p> <pre><code>def test_get_arg_for_functions(): def func1(a: str, b: str) -&gt; str: return f&quot;{a} {b}&quot; def func2(a: str, b: str, c: str) -&gt; str: return f&quot;{a} {b} {c}&quot; class Class1: def __init__(self, x: int, y: str) -&gt; None: self.x = x self.y = y kwargs = {&quot;func1&quot;: func1, &quot;func2&quot;: func2, &quot;class1&quot;: Class1(1, &quot;a&quot;)} got_func1: Callable[..., str] = get_arg(&quot;func1&quot;, type(func1), kwargs=kwargs) assert got_func1(&quot;a&quot;, &quot;b&quot;) == &quot;a b&quot; got_class1_instance = get_arg(&quot;class1&quot;, Class1, kwargs=kwargs) assert got_class1_instance.x == 1 assert got_class1_instance.y == &quot;a&quot; Func1Type = GenericCallable[[str, str], str] got_func1_from_type = get_arg(&quot;func1&quot;, Func1Type, kwargs=kwargs) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ assert 
got_func1_from_type(&quot;a&quot;, &quot;b&quot;) == &quot;a b&quot; </code></pre> <p>Mypy complains with the following, for <code>got_func1_from_type</code>:</p> <blockquote> <p><code>Argument 2 to &quot;get_arg&quot; has incompatible type &quot;&lt;typing special form&gt;&quot;; expected &quot;type[Never]&quot;</code></p> </blockquote> <p>This (I believe) is because it is expecting <code>type[T]</code> and returning <code>T</code>, but here I'm passing a type definition.</p> <p>So I tried instead with <code>type: T</code> and return <code>T</code> (or, to avoid confusion, let's call it <code>D</code>:</p> <pre><code>def get_arg( ... typ: D, ... ) -&gt; D: ... </code></pre> <p>Now, Mypy is fine with me passing in the type definition to the generic <code>typ</code>:</p> <pre><code>Func1Type = GenericCallable[[str, str], str] got_func1_from_type = get_arg_def(&quot;func1&quot;, Func1Type, kwargs=kwargs) assert got_func1_from_type(&quot;a&quot;, &quot;b&quot;) == &quot;a b&quot; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ </code></pre> <p>but <code>got_func1_from_type</code> isn't callable because it's <code>type[(str, str) -&gt; str]</code> and Mypy complains with:</p> <blockquote> <p><code>Cannot instantiate type &quot;GenericCallable[(str, str), str]&quot;</code></p> </blockquote> <p>This makes sense, but I don't know what to do about it. 
I tried doing clever things like:</p> <pre><code>D = TypeVar(&quot;D&quot;, bound=type) # or D = TypeVar(&quot;D&quot;, bound=AnyType) # or D = TypeVar(&quot;D&quot;, bound=type[AnyType]) </code></pre> <p>...but these are all binding the type definition to <code>type[TypeDef]</code> instead of <code>actual instance of type D</code>.</p> <p>Is there a way I can either cast, or declare that it's returning the actual instance of the correct type, while allowing it to work for all of these use-cases?</p> <pre><code>P = ParamSpec(&quot;P&quot;) R = TypeVar(&quot;R&quot;) GenericCallable = Callable[P, R] a_str = get_arg(&quot;a&quot;, str, kwargs=kwargs) # a_str: str FuncType = GenericCallable[..., str] a_func = get_arg(&quot;func&quot;, FuncType, kwargs=kwargs) # a_func: FuncType an_instance = get_arg(&quot;class&quot;, MyClass, kwargs=kwargs) # an_instance: MyClass </code></pre> <p>(P.S., I tried a bunch of fancy stuff with <code>@overload</code> but couldn't get it to play any nicer than unions)</p>
<python><mypy><python-typing><python-3.11>
2024-01-31 20:22:23
2
73,674
brandonscript
77,916,199
535,782
Python logger does not print debug messages, even if logger.getEffectiveLevel() is DEBUG
<p>The following code changes the log level from INFO to DEBUG, and even though logger.getEffectiveLevel() returns DEBUG, the debug message doesn't get logged, as the output shows.</p> <p>version of python: 3.10.12</p> <p>---out.log---</p> <pre><code>log level 1: INFO log level 2: DEBUG </code></pre> <p>Code</p> <pre><code>import logging import logging.config def test_change_log_level(): logger = logging.Logger(&quot;my-logger&quot;, logging.INFO) file_handler = logging.FileHandler(filename=&quot;out.log&quot;) file_handler.setLevel(logging.INFO) logger.addHandler(file_handler) logger.debug(&quot;should not be logged ...&quot;) logger.info(&quot;log level 1: %s&quot;, logging.getLevelName(logger.getEffectiveLevel())) logger.setLevel(logging.DEBUG) file_handler.setLevel(logging.DEBUG) logger.info(&quot;log level 2: %s&quot;, logging.getLevelName(logger.getEffectiveLevel())) logger.debug(&quot;debug level is effective&quot;) if __name__ == '__main__': test_change_log_level() </code></pre>
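<p>For reference, a record must clear two gates — the logger's level and each handler's level — and the posted code does lower both before the final <code>debug()</code> call, so as written it should emit; if it still does not, a common explanation is inspecting <code>out.log</code> before the handler has flushed/closed, or a second handler or file instance getting in the way. A minimal self-checking version of the same two-gate behaviour, using an in-memory stream instead of a file:</p>

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger("my-logger-demo")
logger.setLevel(logging.INFO)

handler = logging.StreamHandler(stream)
handler.setLevel(logging.INFO)
logger.addHandler(handler)

logger.debug("should not be logged ...")   # blocked by the logger's INFO level

logger.setLevel(logging.DEBUG)
handler.setLevel(logging.DEBUG)            # both gates must be lowered
logger.debug("debug level is effective")

output = stream.getvalue()
```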
<python><python-logging>
2024-01-31 20:20:39
1
10,224
Boutran
77,916,018
1,294,072
How to efficiently share complex and expensive objects between Python files
<p>In my case, I initialize and use these (complex and expensive) objects in file1, import them in file2 and use them there. This avoids wasting time creating multiple objects in multiple functions across the two files, and works for me.</p> <pre><code># utils.py from fancy_module import Fancy_class expensive_obj = Fancy_class() def func1(): expensive_obj.do_stuff() </code></pre> <pre><code># work.py from utils import func1, expensive_obj def work_func(): expensive_obj.do_other_stuff() </code></pre> <p>However, when I hand it over to my colleague to deploy, he points out it takes quite some time to do <code>expensive_obj = Fancy_class()</code> during import, and it causes troubles in the prod framework we are using (and I cannot change that). He asks me to put it in a getter and use @lru_cache to avoid duplication.</p> <pre><code># utils.py from fancy_module import Fancy_class @lru_cache def get_expensive_obj(): return Fancy_class() def func1(): expensive_obj = get_expensive_obj() expensive_obj.do_stuff() </code></pre> <pre><code># work.py from utils import func1, get_expensive_obj def work_func(): expensive_obj = get_expensive_obj() expensive_obj.do_other_stuff() </code></pre> <p>Not knowing how exactly lru_cache works, I worry whether it would really avoid duplicating <code>expensive_obj</code>. Plus I need to create a few objects like this <code>expensive_obj</code> in a few dozen or so functions similar to func1() and work_func(). Kind of messy.</p> <p>Are there other solutions that allow me to:</p> <ol> <li>share objects between functions across files</li> <li>and avoid expensive initialization at import time</li> </ol> <p>Thanks!!</p> <p>EDIT: Thank you all for good suggestions (@chepner) and tips on caching (@Munya Murape). I cannot resist the temptation of the convenience of using globals (and since these <code>expensive_obj</code> never change once created, they act like constants). Here is another option that I'd like to hear opinions on:</p> <ul> <li>create a small file <code>expensive.py</code> that only contains these expensive objects.</li> <li>import expensive and initialize where they are needed</li> </ul> <pre><code># expensive.py - shared objects only from fancy_module import Fancy_class expensive_obj = None def init_expensive_obj(): global expensive_obj expensive_obj = Fancy_class() </code></pre> <pre class="lang-py prettyprint-override"><code># work.py import expensive if expensive.expensive_obj is None: expensive.init_expensive_obj() def work_func(): expensive.expensive_obj.do_other_stuff() </code></pre>
<python><performance><object><initialization><python-lru-cache>
2024-01-31 19:41:45
1
746
xyliu00
77,915,989
4,500,083
How to make a class with async aiohttp thread safe?
<p>I have this Python class and want your opinion on whether this <code>cls._instance._cache = {}</code> is thread safe for <code>tornado</code>. If not, how can I make this cache thread safe?</p> <pre><code>import logging import aiohttp import time # Constants DEFAULT_TIMEOUT = 20 MAX_ERRORS = 3 HTTP_READ_TIMEOUT = 1 class HTTPRequestCache: _instance = None def __new__(cls): if cls._instance is None: cls._instance = super().__new__(cls) # TODO: check whether it's thread safe with tornado event loop cls._instance._cache = {} cls._instance._time_out = DEFAULT_TIMEOUT cls._instance._http_read_timeout = HTTP_READ_TIMEOUT cls._instance._loop = None return cls._instance async def _fetch_update(self, url): try: async with aiohttp.ClientSession() as session: logging.info(f&quot;Fetching {url}&quot;) async with session.get(url, timeout=self._http_read_timeout) as resp: resp.raise_for_status() resp_data = await resp.json() cached_at = time.time() self._cache[url] = { &quot;cached_at&quot;: cached_at, &quot;config&quot;: resp_data, &quot;errors&quot;: 0 } logging.info(f&quot;Updated cache for {url}&quot;) except aiohttp.ClientError as e: logging.error(f&quot;Error occurred while updating cache for {url}: {e}&quot;) async def get(self, url): if url not in self._cache or self._cache[url][&quot;cached_at&quot;] &lt; time.time() - self._time_out: await self._fetch_update(url) return self._cache.get(url, {}).get(&quot;config&quot;) </code></pre>
<python><tornado><aiohttp>
2024-01-31 19:32:51
1
1,170
Mini
77,915,929
608,294
How do I define the constraints of a multiple assignment problem with Pulp?
<p>I am trying to solve an assignment problem: A supervisor can be assigned to multiple consultants according to the number of languages a supervisor and a consultant speak (the more languages they have in common the better). The constraints are:</p> <ul> <li>each supervisor has a fixed number of hours that it can use to supervise, namely: 14, 11, 7, or 0 hour/s</li> <li>each consultant has a fixed number of hours that it can use to be supervised, namely: 6, 3, 2, 1, or 0 hour/s</li> <li>a supervisor can have multiple consultants to supervise according to the number of hours the supervisor is available (e.g. a supervisor with 14 hours can have 2 consultants with 6 hours and 2 with 1 hour, or just 1 with 6 hours, and so on. Not all the available hours of the supervisor have to be covered, while the hours of the consultant have to be all covered.)</li> <li>a supervisor, to be chosen, needs to have a seniority &gt;= of the seniority of the consultant</li> </ul> <pre><code> supervisor_h = ... # number of hours of the availability per each supervisor consultant_h = ... # number of hours needed per each consultant y = pulp.LpVariable.dicts(&quot;pairs&quot;, [(i,j) for i in supervisors for j in consultants] ,cat='Binary') prob = pulp.LpProblem(&quot;matching&quot;, pulp.LpMaximize) prob += pulp.lpSum([costs[i][m] * y[(i,j)] for i in supervisors for m, j in enumerate(consultants)]) # each supervisor can have a number of consultants &gt;= 1 for i in supervisors: prob += pulp.lpSum(y[(i,j)] for j in consultants) &gt;= 1 # each consultant can have only one supervisor for j in consultants: prob += pulp.lpSum(y[(i,j)] for i in supervisors) &lt;= 1 # a supervisor can accept a consultant if the number of hours the supervisor still has available is &gt;= than the number of hours requested by the consultant # Here I have a problem in defining the constraint for n, i in enumerate(supervisors): prob += supervisor_h[n] - pulp.lpSum(qcee_h[m] for m, j in enumerate(consultants)) &lt;= consultant_h[n] # I have the same problem in defining the constraint for the seniority </code></pre> <p>It is the first time I use pulp, so if you can help me I appreciate it.</p>
<python><optimization><constraints><linear-programming><pulp>
2024-01-31 19:20:41
1
1,741
Andrea
77,915,868
12,092,252
"module 'os' has no attribute 'add_dll_directory'" in aws lambda function
<p>I'm facing an issue that <code>os</code> does not have an <code>add_dll_directory</code> attribute in the AWS lambda function on the trigger.</p> <p>I have written code for creating a thumbnail of a video and tried it locally, where it works fine, but when I upload this function with the required libraries I get this issue, and I'm still unable to resolve it. I did not get any help from the internet yet regarding this issue.</p> <p><code>This is the issue that I'm facing</code></p> <pre><code>[ERROR] AttributeError: module 'os' has no attribute 'add_dll_directory' Traceback (most recent call last): File &quot;/var/lang/lib/python3.12/importlib/__init__.py&quot;, line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1381, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1354, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1325, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 929, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 994, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 488, in _call_with_frames_removed File &quot;/var/task/lambda_function.py&quot;, line 2, in &lt;module&gt; import cv2 File &quot;/var/task/cv2/__init__.py&quot;, line 11, in &lt;module&gt; import numpy File &quot;/var/task/numpy/__init__.py&quot;, line 112, in &lt;module&gt; _delvewheel_patch_1_5_2() File &quot;/var/task/numpy/__init__.py&quot;, line 109, in _delvewheel_patch_1_5_2 os.add_dll_directory(libs_dir) </code></pre> <p><code>This is the code that I'm using for creating a thumbnail</code></p> <pre><code>import json import cv2 import tempfile from PIL import Image from io import BytesIO import boto3 s3 = boto3.client('s3') def get_video_file(bucket, key): video_file = s3.get_object(Bucket=bucket, Key=key)['Body'].read() return video_file def upload_image_file(image_byte_data, bucket, key): s3.put_object(Bucket=bucket, Key=key, Body=image_byte_data, ContentType=&quot;image/png&quot;) def generate_thumbnail(video_byte_data): with tempfile.NamedTemporaryFile(suffix=&quot;.mp4&quot;, delete=False) as temp_file: temp_file_path = temp_file.name temp_file.write(video_byte_data) video_capture = cv2.VideoCapture(temp_file_path, cv2.CAP_ANY) success, frame = video_capture.read() if success: frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) thumbnail_image = Image.fromarray(frame_rgb) thumbnail_size = (320,320) thumbnail_image.resize(thumbnail_size) thumbnail_data = BytesIO() thumbnail_image.save(thumbnail_data, format='PNG') thumbnail_bytes = thumbnail_data.getvalue() return thumbnail_bytes else: raise ValueError('Unable to read video frame') def lambda_handler(event, context): bucket = event['Records'][0]['s3']['bucket']['name'] key = event['Records'][0]['s3']['object']['key'] video_file = get_video_file(bucket, key) image_output_file = generate_thumbnail(video_file) upload_image_file(image_output_file, &quot;thumbnail876&quot;, key.split(&quot;.&quot;)[0]+&quot;.png&quot;) return { &quot;statusCode&quot;: 200, &quot;body&quot;: json.dumps( { &quot;message&quot;: &quot;hello world&quot;, } ), } </code></pre> <p>Please can someone tell me why I'm facing this issue and how I can resolve it? Thanks</p>
<python><amazon-web-services><amazon-s3><aws-lambda><video-thumbnails>
2024-01-31 19:09:33
2
1,053
Muhammad AbuBaker
77,915,826
5,838,180
Defining a variable as a copy of another one with a condition
<p>I have a variable <code>a</code>, that's an array, and I want to define another variable <code>b</code>, which is the copy of <code>a</code>, except for a condition (let's say, every non-zero element should be set to 1). I can write this in the following way:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a = np.array([2, 7, -2, 0, 0, 9]) &gt;&gt;&gt; b = np.where(a != 0, 1, 0) &gt;&gt;&gt; print(b) &gt;&gt;&gt; array([1, 1, 1, 0, 0, 1]) </code></pre> <p>That's about fine, but I think I have encountered before a simpler Python notation, using no <code>numpy</code>, that allowed defining <code>b</code> as a copy of <code>a</code> with a condition, all in one line of code containing two equality signs. It was something like this:</p> <pre><code>b = a[a!=0]=1 </code></pre> <p>The above doesn't work, but I hope you can help me find the code that I refer to or some other kind of simplification.</p>
<python><numpy><variables>
2024-01-31 19:01:36
2
2,072
NeStack
77,915,759
312,444
Creating a pydantic Enum from a string
<pre><code>from pydantic import BaseModel from uuid import UUID from enum import Enum from datetime import datetime class NoteTemplate(str, Enum): general_practitioner = &quot;general_practitioner&quot; cardiologist = &quot;cardiologist&quot; soap = &quot;soap&quot; consultation_report = &quot;consultation_report&quot; plastic_surgeon = &quot;plastic_surgeon&quot; orthopedist = &quot;orthopedist&quot; </code></pre> <p>I would like to create a NoteTemplate from a string. e.g.</p> <pre><code>NoteTemplate(&quot;general_practitioner&quot;) </code></pre>
<python><pydantic>
2024-01-31 18:45:59
0
8,446
p.magalhaes
77,915,725
1,401,560
Pass/fail result for Paramiko SFTPClient get() and put() functions?
<p>I'm writing an Ansible module using Paramiko's <a href="https://docs.paramiko.org/en/3.4/api/sftp.html" rel="nofollow noreferrer"><code>SFTPClient</code> class</a>. I have the following code snippets</p> <pre class="lang-py prettyprint-override"><code>import paramiko from ansible.module_utils.basic import * def get_sftp_client(host, port, user, password): # Open a transport transport = paramiko.Transport((host,port)) # Auth transport.connect(None,user,password) # Get a client object sftp = paramiko.SFTPClient.from_transport(transport) return (transport, sftp) def transfer_file(params): has_changed = False meta = {&quot;upload&quot;: &quot;not yet implemented&quot;} user = params['user'] password = params['password'] host = params['host'] port = params['port'] direction = params['direction'] src = params['src'] dest = params['dest'] transport, sftp = get_sftp_client(host, port, user, password) stdout = None stderr = None if direction == 'download': sftp.get(src, dest) else: sftp.put(src, dest) meta = { &quot;source&quot;: src, &quot;destination&quot;: dest, &quot;direction&quot;: direction, &quot;stdout&quot;: stdout, &quot;stderr&quot;: stderr } # Close if sftp: sftp.close() if transport: transport.close() return (has_changed, meta) def main(): fields = { &quot;user&quot;: {&quot;required&quot;: True, &quot;type&quot;: &quot;str&quot;}, &quot;password&quot;: {&quot;required&quot;: True, &quot;type&quot;: &quot;str&quot;, &quot;no_log&quot;: True}, &quot;host&quot;: {&quot;required&quot;: True, &quot;type&quot;: &quot;str&quot;}, &quot;port&quot;: {&quot;type&quot;: &quot;int&quot;, &quot;default&quot;: 22}, &quot;direction&quot;: { &quot;type&quot;: &quot;str&quot;, &quot;choices&quot;: [&quot;upload&quot;, &quot;download&quot;], &quot;default&quot;: &quot;upload&quot;, }, &quot;src&quot;: {&quot;type&quot;: &quot;path&quot;, &quot;required&quot;: True}, &quot;dest&quot;: {&quot;type&quot;: &quot;path&quot;, &quot;required&quot;: True}, } module = AnsibleModule(argument_spec=fields) # Do the work has_changed, result = transfer_file(module.params) module.exit_json(changed=has_changed, meta=result) if __name__ == '__main__': main() </code></pre> <p>How can I capture the pass/fail result of the <code>get()</code> and <code>put()</code> functions? The docs don't mention the return value for <code>get()</code>, and the return value for <code>put()</code> is an <code>SFTPAttribute</code> class, which I don't need. I just want to know if the transaction passed or failed, and if failed, get the error. Thanks.</p>
<python><ansible><sftp><paramiko>
2024-01-31 18:38:48
1
17,176
Chris F
77,915,597
4,531,757
Need help turning off Docker in 'Autogen AutoBuilder'
<p>I am new and struggling to turn off Docker. I'd like to either set 'use_docker': false or set an environment variable. Either option will work. Thanks for your help. The screenshot is enclosed. <a href="https://i.sstatic.net/5Vhqt.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Vhqt.gif" alt="enter image description here" /></a></p>
<python><docker><artificial-intelligence><ms-autogen>
2024-01-31 18:14:23
1
601
Murali
77,915,540
639,676
How to implement the following function (using dynamic shapes) in JAX?
<p>I have a simple function that takes a jax Array as input, searches for the first occurrence of 1, and replaces it with another jax Array (specified as a second input):</p> <pre><code>rules_int = [ jnp.array([0,0]), jnp.array([1,1,1]), ] # Even with the same size of inputs, the sizes of outputs can be different def replace_first_one(arr, action): index = jnp.where(arr == 1)[0] if index.size == 0: return arr index = index[0] new_arr = jnp.concatenate([arr[:index], rules_int[action], arr[index+1:]]) return new_arr replace_first_one(jnp.array([1]), 0) # result is Array([0, 0], dtype=int32) </code></pre> <p>But when I use <code>vmap</code> I get an exception:</p> <pre><code>batch_arr = jnp.array([ jnp.array([1, 4, 5, 1]), jnp.array([6, 1, 8, 1]) ]) batch_actions = jnp.array([0, 1]) # Corresponding actions for each array # Vectorize the function vectorized_replace_first_one = vmap(replace_first_one, in_axes=(0, 0)) result = vectorized_replace_first_one(batch_arr, batch_actions) </code></pre> <p><em>index = jnp.where(arr == 1)[0]</em> <em>The size argument of jnp.nonzero must be statically specified to use jnp.nonzero within JAX transformations. This BatchTracer with object id 140260750414512 was created on line:</em></p> <p>I read in the <a href="https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#dynamic-shapes" rel="nofollow noreferrer">JAX docs</a>:</p> <blockquote> <p>JAX code used within transforms like jax.jit, jax.vmap, jax.grad, etc. requires all output arrays and intermediate arrays to have static shape: that is, the shape cannot depend on values within other arrays.</p> </blockquote> <p>Please suggest how to make this work.</p> <p>Ideally, these rules should be applied recursively until there are no rules to apply. (a string rewriting system)</p>
<python><jax>
2024-01-31 18:03:04
1
4,143
Oleg Dats
77,915,195
2,610,522
Why does Dask show a smaller size than the actual size of the data (numpy array)?
<p><code>Dask</code> shows a slightly smaller size than the actual size of a numpy array. Here is an example of a <code>numpy</code> array that is exactly 32 Mb:</p> <pre><code>import dask as da import dask.array import numpy as np shape = (1000,4000) ones_np = np.ones(shape) print(f&quot;Size:{ones_np.nbytes / 1e6} Mb&quot;) &gt;&gt; Size: 32.0 Mb </code></pre> <p>However, with Dask it shows 30.52:</p> <pre><code>ones_da = da.array.ones(shape) ones_da </code></pre> <p><a href="https://i.sstatic.net/OaqvB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OaqvB.png" alt="enter image description here" /></a></p> <p>Though if I do <code>ones_da.nbytes/1e6</code> it returns the correct (32 Mb) size.</p> <p>I thought the dask Array size should show the actual size?</p>
<python><numpy><dask><dask-distributed>
2024-01-31 17:10:55
1
810
Ress
77,915,134
10,719,369
opencv-python installation issue using pip
<p>I am trying to install <code>opencv-python</code> inside a Python venv using pip on a server (Ubuntu 22.04, Python 3.10.6). When I try to import it with <code>import cv2</code> I get the error below:</p> <pre><code>ImportError: libGL.so.1: cannot open shared object file: No such file or directory </code></pre> <p>One of the SO answers suggested that <code>opencv-python-headless</code> would work. I tried it and it worked on my machine.</p> <p>Now I also want to install <code>rapid_latex_ocr</code>, where <code>opencv-python</code> is one of the dependencies that gets added automatically when I install rapid_latex_ocr with <code>pip install rapid_latex_ocr</code>.</p> <p>My venv now contains both <code>opencv-python-headless</code> and <code>opencv-python</code>, and the <strong>ImportError: libGL.so.1: cannot open shared object file: No such file or directory</strong> problem repeats when I <code>import cv2</code>.</p> <p>What should I do in this situation? Kindly help.</p>
<python><opencv><ocr><importerror><python-venv>
2024-01-31 17:01:20
0
708
Mari
77,915,106
6,010,142
How to insert into oracle database with custom object type using sqlalchemy
<p>Given the following code:</p> <p>Custom oracle object type:</p> <pre class="lang-sql prettyprint-override"><code>CREATE TYPE address_type AS OBJECT ( street VARCHAR2(30), city VARCHAR2(20), state CHAR(2), postal_code VARCHAR2(6) ) </code></pre> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine, Column, CHAR, VARCHAR from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.types import UserDefinedType class AddressType(UserDefinedType): def get_col_spec(self): return &quot;ADDRESS_TYPE&quot; def bind_processor(self, dialect): def process(value): if value is not None: street = value['street'] city = value['city'] state = value['state'] postal_code = value['postal_code'] return f&quot;ADDRESS_TYPE('{street}', '{city}', '{state}', '{postal_code}')&quot; return None return process def result_processor(self, dialect, coltype): def process(value): if value is not None: return {'street': value.street, 'city': value.city, 'state': value.state, 'postal_code': value.postal_code} return None return process Base = declarative_base() class MyEntity(Base): __tablename__ = 'MY_TABLE' id = Column(Integer, primary_key=True) custom_address = Column(AddressType, nullable=False) engine = create_engine('oracle://your_username:your_password@your_host:your_port/your_database') Base.metadata.create_all(engine) ... somewhere later ... new_entity = MyEntity(id=1, custom_address={'street':'street', 'city':'city', 'state':'ST', 'postal_code':'000000'}) db_session.add(new_entity) db_session.commit() # This will throw the error </code></pre> <p>I can successfully read from the database using sqlalchemy. But when trying to create a record I get several errors. Even when returning None, I get the error:</p> <p><code>ORA-00932: inconsistent datatypes: expected address_type got CHAR</code></p> <p>How can I insert a new entity using sqlalchemy?</p> <p>EDIT:</p> <p>Thanks to the tip in the comments: If I use the method <code>bind_expression</code> instead, I can actually write a hardcoded value to the database.</p> <pre><code> def bind_expression(self, bindvalue): return text(f&quot;ADDRESS_TYPE('street', 'random city', 'NY', '10001')&quot;) </code></pre> <p>But I am not able to access any data I set on the variable <code>custom_address</code>. The <code>bindvalue</code> does not contain any of those. How do I access my data inside the <code>bind_expression</code> function?</p>
<python><oracle-database><sqlalchemy>
2024-01-31 16:57:36
0
831
Ali
77,914,954
3,112,672
Best practice for isolating dependencies during unit tests
<p>I have a Python module which contains utility methods for use in Databricks. The module essentially looks like this (<code>utils.py</code>):</p> <pre><code>from databricks.sdk.runtime import spark, dbutil def someUsefulFunction(): pass </code></pre> <p>etc.</p> <p>I have a unit test module built in Python unittest which looks sort of like this (<code>test.py</code>):</p> <pre><code>from utils import someUsefulFunction import unittest class TestSomething(unittest.TestCase): def testSomeUsefulFunction(self): pass </code></pre> <p>etc.</p> <p>My problem is that the <code>databricks.sdk.runtime</code> import is required for the code to run in Databricks, but if the code is running in unit tests it tries to attach to the spark runtime and fails (just from the import - it doesn't require that I use spark).</p> <p>There are a couple of ideas I have:</p> <ol> <li>separate the code that relies on spark from the code that doesn't into different modules, and</li> <li>use a strategy pattern that uses Spark in Databricks and does something else in unit tests, or alternatively,</li> <li>mock spark in unit tests.</li> </ol> <p>As far as I know options 2 and 3 both retain a dependency on spark and will still fail.</p> <p>My big issue is that I don't know in Python how to truly separate dependencies in the way I need to.</p>
<python><apache-spark><python-unittest>
2024-01-31 16:34:00
0
1,174
absmiths
77,914,854
6,705,902
Is threading replaceable in Python?
<p>I have done some reading on the topic and <a href="https://stackoverflow.com/questions/27435284/multiprocessing-vs-multithreading-vs-asyncio">found that</a> we can use <code>threading</code> if a problem is <strong>io-bound</strong> but <strong>io is not very slow</strong>. But I couldn't understand why we still couldn't use <code>asyncio</code>. Since Python <code>threading</code> anyway runs on a single thread because of the GIL, what makes it more useful than <code>asyncio</code>?</p> <p>I.e., is there any problem in Python where <code>threading</code> is more useful than using <code>asyncio</code> or <code>multiprocessing</code>?</p> <p>I found tons of examples of each and differences between them, but still could not find a single source explaining a clear use case of Python threading.</p> <p>Thank you.</p> <p>Note: I found <a href="https://stackoverflow.com/questions/75471729/use-cases-for-threading-and-asyncio-in-python">this question</a>, which sounds similar, but I understand why multiprocessing is important when a Python process is CPU heavy and can run independently with minimal communication.</p>
<python><multithreading><python-asyncio><python-multithreading><gil>
2024-01-31 16:18:06
1
1,120
Wenuka
77,914,786
1,028,270
Can sphinx.ext.viewcode allow me to include the _content_ of the source code in my doc in addition to linking to it?
<p>I'm trying to avoid having to do literal includes with specific line numbers to embed snippets of my code in my docs.</p> <p>I'm asking almost this same question: <a href="https://stackoverflow.com/questions/58277247/sphinx-how-to-include-a-python-function-as-source-code">sphinx - How to include a Python function as source code</a></p> <p>I want to include the <em>content</em> of the object referenced in my rst doc and have it appear as a highlighted python code snippet.</p> <p>In that post no one mentions the <code>sphinx.ext.viewcode</code> plugin though, which includes the source code and generates links to it.</p> <p>If <code>sphinx.ext.viewcode</code> can include the code and links in my docs is there a way to use it to include that code as a snippet in my doc pages directly as well?</p> <p>For example here is what one of the links looks like: <code>my_rendered_docs/_modules/module/sub_module/another_sub_module.html#my_func</code></p> <p>Can I include that content directly as a python code block through some relative pathing? Or does viewcode support a directive for this?</p> <h1>Edit</h1> <p>Per comments <code>pyobject</code> is what I needed.</p> <pre><code>.. literalinclude :: ../../my_proj/some_path/my_module.py :pyobject: a_function_inside_my_module :linenos: :language: python </code></pre>
<python><python-sphinx><restructuredtext>
2024-01-31 16:08:09
0
32,280
red888
77,914,732
5,269,749
How to get the layer details in tensorflow?
<p>I have a layer which is defined as following</p> <pre><code>import tensorflow.keras.layers as layers avg_pool = layers.AveragePooling1D(pool_size=2) </code></pre> <p>Is there a way to print the details of this layer? (in this case it would be printing the <code>pool_size</code>)? I cannot use summary on this layer as it gives me <code>AttributeError: 'AveragePooling1D' object has no attribute 'summary'</code></p>
<python><tensorflow><keras><tf.keras>
2024-01-31 15:59:47
2
1,264
Alex
77,914,684
302,102
How do you write a statically typed Python function that converts a string to a parameterized type?
<p>I would like to write a statically typed Python function that accepts a general type and a string and that returns a value of the given type, derived from the given string. For example:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; parse(int | None, &quot;5&quot;) 5 &gt;&gt;&gt; parse(int | None, &quot;&quot;) is None True </code></pre> <p>Here's a Python 3.12 attempt (which has a problem in the type signature since union types can't be assigned to variables of type <code>type</code>):</p> <pre class="lang-py prettyprint-override"><code>def parse[a](x: type[a], v: str) -&gt; a: if x in (bool, bool | None): return bool(v) if x in (float, float | None): return float(v) if x in (int, int | None): return int(v) if x in (str, str | None): return v raise ValueError </code></pre> <p>However, this code yields errors with Pyright 1.1.349:</p> <pre><code>error: Expression of type &quot;bool&quot; cannot be assigned to return type &quot;a@parse&quot; error: Expression of type &quot;float&quot; cannot be assigned to return type &quot;a@parse&quot; error: Expression of type &quot;int&quot; cannot be assigned to return type &quot;a@parse&quot; </code></pre> <p>How can this be fixed?</p>
<python><pyright><python-3.12>
2024-01-31 15:52:17
2
1,935
aparkerlue
77,914,585
9,869,260
Pandas / Openpyxl : Write Dataframe with left corner on a specific cell
<p>I would like to write a table in an excel sheet using openpyxl.</p> <p>My Pandas Dataframe looks like this:</p> <pre><code>df = pd.DataFrame({ &quot;A&quot;: [1, 3, 5], &quot;B&quot;: [2, 4, 6], }) </code></pre> <p>I want to create a xlsx file that looks like this (right corner of my dataframe in cell D9):</p> <p><a href="https://i.sstatic.net/uHb8o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uHb8o.png" alt="enter image description here" /></a></p> <p>How could I do this? Is there a better way than writing cell by cell, iterating over the whole dataframe? Thanks</p>
<python><pandas>
2024-01-31 15:37:53
2
437
Chjul
77,914,578
10,145,953
Detect if a checkbox is checked OpenCV
<p>I am using OpenCV to process an image of a form which contains many checkboxes. I have used the pixel coordinates to crop around the checkbox so all I am left with is just the single checkbox. From there, I need to detect if there is anything inside of that checkbox (x, checkmark, slash, etc.) or not.</p> <p>I used the code from <a href="https://stackoverflow.com/a/63085222/10145953">this</a> response and it was able to successfully draw a line around the checkbox, successfully detecting the checkbox (and part of the x). However, how do I then get it to tell me if the box is checked off in some way or if it's empty?</p> <p>The checkbox in question before the code:</p> <p><a href="https://i.sstatic.net/hooVx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hooVx.png" alt="enter image description here" /></a></p> <p>The code:</p> <pre><code># Code courtesy of Sreekiran A R on Stack Overflow: https://stackoverflow.com/questions/63084676/checkbox-detection-opencv image=box_5a_f ### converting BGR to Grayscale gray_scale=cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) ### Binarising image th1,img_bin = cv2.threshold(gray_scale,180,225,cv2.THRESH_OTSU) ### defining kernels lWidth = 2 lineMinWidth = 15 kernal1 = np.ones((lWidth,lWidth), np.uint8) kernal1h = np.ones((1,lWidth), np.uint8) kernal1v = np.ones((lWidth,1), np.uint8) kernal6 = np.ones((lineMinWidth,lineMinWidth), np.uint8) kernal6h = np.ones((1,lineMinWidth), np.uint8) kernal6v = np.ones((lineMinWidth,1), np.uint8) ### finding horizontal lines img_bin_h = cv2.morphologyEx(~img_bin, cv2.MORPH_CLOSE, kernal1h) # bridge small gap in horizontal lines img_bin_h = cv2.morphologyEx(img_bin_h, cv2.MORPH_OPEN, kernal6h) # keep only horiz lines by eroding everything else in hor direction ## detect vert lines img_bin_v = cv2.morphologyEx(~img_bin, cv2.MORPH_CLOSE, kernal1v) # bridge small gap in vert lines img_bin_v = cv2.morphologyEx(img_bin_v, cv2.MORPH_OPEN, kernal6v) # keep only vert lines by eroding everything else in vert direction def fix(img): img[img&gt;127]=255 img[img&lt;127]=0 return img img_bin_final = fix(fix(img_bin_h)|fix(img_bin_v)) ### getting labels ret, labels, stats,centroids = cv2.connectedComponentsWithStats(~img_bin_final, connectivity=8, ltype=cv2.CV_32S) ### drawing rectangles for visualisation for x,y,w,h,area in stats[2:]: cv2.rectangle(image,(x,y),(x+w,y+h),(0,0,255),2) </code></pre> <p>The checkbox after the code:</p> <p><a href="https://i.sstatic.net/URNV3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/URNV3.png" alt="enter image description here" /></a></p> <p>I could also adjust my pixel coordinates to only crop the inside of the checkbox, but I'm still not sure what step to take from there and if it would really do anything to solve my issue.</p>
<python><opencv><checkbox><computer-vision><omr>
2024-01-31 15:37:29
2
883
carousallie
77,914,497
10,232,932
Keep all rows before first occurrence of a value
<p>I have a pandas dataframe and the opposite question to <a href="https://stackoverflow.com/questions/38707165/drop-all-rows-before-first-occurrence-of-a-value">Drop all rows before first occurrence of a value</a></p> <p>Instead of dropping all rows before the first occurrence of a value, I want to keep the rows before the first occurrence of a value. So, for example, I want to keep all the rows before the occurrence of <code>1</code> in the column Count:</p> <pre><code>Year ID Count 1997 1 0 1998 2 0 1999 3 1 2000 4 0 2001 5 1 </code></pre> <p>leads to:</p> <pre><code>Year ID Count 1997 1 0 1998 2 0 </code></pre>
<python><pandas>
2024-01-31 15:25:55
2
6,338
PV8
77,914,370
10,415,970
Pydantic/SQLAlchemy: How to Avoid cyclic reference error on bidirectional models?
<p>I can't run <code>model_validate</code> on my SQLAlchemy ORM because it contains a foreignkey relationship (error at bottom):</p> <p>Example:</p> <pre class="lang-py prettyprint-override"><code># models.py from __future__ import annotations from pydantic import BaseModel BaseModel.model_config = {'from_attributes': True} class Job(BaseModel): id: int | None = None user: User | None = None class User(BaseModel): id: int | None = None jobs: list[Job] </code></pre> <p>SQLAlchemy models:</p> <pre class="lang-py prettyprint-override"><code># db/models.py from sqlalchemy.orm import declarative_base Base = declarative_base() class Job(Base): __tablename__ = 'jobs' id = Column(Integer, primary_key=True) user_id = Column(Integer, ForeignKey('users.id'), nullable=True) user = relationship('User', back_populates='jobs') class User(Base): __tablename__ = 'users' id = Column(Integer, primary_key=True) </code></pre> <p>Now say you create some users:</p> <pre><code># main.py from models import User as UserModel, Job as JobModel from db.models import User as UserORM, Job as JobORM session = Session() # from wherever # Create users with jobs db_users = [] db_jobs = [] for _ in range(3): db_user = UserModel() db_job = JobModel(user=user) db_users.append(db_user) db_jobs.append(db_job) session.add_all(db_users) session.add_all(db_jobs) session.commit() # Try to get pydantic models from the existing models db_user = users[0] UserModel.model_validate(db_user) </code></pre> <p>The last part of ^ would raise something like</p> <pre><code> pydantic_core._pydantic_core.ValidationError: 1 validation error for Job user.jobs.0 Recursion error - cyclic reference detected [type=recursion_loop, input_value=&lt;Job https://www.codement...pen-requests/iq7j9h5s1j&gt;, input_type=Job] For further information visit https://errors.pydantic.dev/2.6/v/recursion_loop </code></pre> <p>But there seems to be no way around it! 
How on earth do I convert the data back to Pydantic models when it throws an error on <code>model_validate</code>?</p>
<python><sqlalchemy><pydantic>
2024-01-31 15:06:47
0
4,320
Zack Plauché
77,914,336
4,696,802
Where to find the pyconfig.h when building Python for Windows?
<p>To use the C API of the Python interpreter you include the Python.h header. And you can <a href="https://github.com/python/cpython" rel="nofollow noreferrer">see</a> that the Python.h header includes a file called pyconfig.h. There is no pyconfig.h in that include folder, which is why I get an error when I include Python.h. Now, there is a pyconfig.h.in in the main folder and at the top of it it says:</p> <blockquote> <p>/* pyconfig.h.in. Generated from configure.ac by autoheader. */</p> </blockquote> <p>But there is no pyconfig.h that ends up where it's supposed to go (I'm guessing in the Include folder). I know on Linux you run the configure command, but not on Windows. The way to build Python on Windows is to run PCBuild/build.bat, and after it finishes there is still no pyconfig.h in sight. What am I doing wrong? I am calling build.bat like this:</p> <pre><code>build.bat -p x64 </code></pre>
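Worth knowing here (stated as my understanding of the CPython source layout, so verify against your checkout): on Windows, pyconfig.h is not generated from pyconfig.h.in at all — a hand-maintained copy lives in the `PC` directory of the source tree, and MSVC builds are expected to put both `Include` and `PC` on the compiler's include path. For an already-installed interpreter, the headers (pyconfig.h included) sit in the directory that `sysconfig` reports:

```python
import sysconfig

# the directory an installed CPython exposes for embedding/extension builds;
# it contains Python.h and, on Windows installs, pyconfig.h alongside it
include_dir = sysconfig.get_paths()["include"]
print(include_dir)
```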
<python><windows>
2024-01-31 15:02:11
1
16,228
Zebrafish
77,914,222
880,874
read_sql_query() using stored procedure and parameters
<p>How can I pass a parameter to a SQL Stored Procedure?</p> <p>I've tried so many ways, but they all fail.</p> <p>Here is my latest attempt.</p> <p>The parameter is <code>BookId</code>.</p> <p>Here is my code:</p> <pre><code>bookId = 653 connection_string = &quot;DRIVER={ODBC Driver 17 for SQL Server};SERVER=dev;DATABASE=Nemisis&quot; connection_url = URL.create( &quot;mssql+pyodbc&quot;, query={&quot;odbc_connect&quot;: connection_string}) engine = sa.create_engine(connection_url) qry = text(&quot;EXECUTE dbo.LibraryBookData @BookId = &quot; &amp; bookId) with engine.connect() as con: rs = con.execute(qry) df = pd.read_sql_query(qry, engine) </code></pre> <p>This time I get this error:</p> <pre class="lang-none prettyprint-override"><code>Exception has occurred: TypeError unsupported operand type(s) for &amp;: 'str' and 'int' </code></pre> <p>I also tried this:</p> <pre><code>qry = text(&quot;EXECUTE dbo.LibraryBookData :BookId&quot;) with engine.connect() as con: rs = con.execute(qry, BookId = bookId) </code></pre> <p>Which gave me this error:</p> <pre class="lang-none prettyprint-override"><code>TypeError: Connection.execute() got an unexpected keyword argument 'bookId' </code></pre>
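Two separate things are going on here, sketched below with plain strings (no database needed): `&` is Python's bitwise AND, not string concatenation, which explains the first TypeError; and the usual way to avoid concatenation entirely is to bind the value as a parameter. The commented SQLAlchemy/pandas lines are an assumption about the 1.4/2.0-style API and are not executed here:

```python
# `&` is bitwise AND in Python; string concatenation is `+` (or an f-string)
try:
    _ = "EXECUTE dbo.LibraryBookData @BookId = " & 653
except TypeError as exc:
    concat_error = str(exc)  # "unsupported operand type(s) for &: ..."

# better: keep the value out of the SQL string and bind it instead, e.g.
#   qry = text("EXECUTE dbo.LibraryBookData @BookId = :book_id")
#   df = pd.read_sql_query(qry, engine, params={"book_id": 653})
# (the keyword passed alongside the query must match the placeholder name
# exactly, including case -- :BookId needs BookId, not bookId)
qry = "EXECUTE dbo.LibraryBookData @BookId = ?"
params = (653,)
```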
<python><sql-server><pandas><sqlalchemy>
2024-01-31 14:45:34
1
7,206
SkyeBoniwell
77,914,135
19,125,840
Weird Behavior of __new__
<p>I have encountered a peculiar behavior with the <code>__new__</code> method in Python and would like some clarification on its functionality in different scenarios. Let me illustrate with two unrelated classes, <code>A</code> and <code>B</code>, and provide the initial code:</p> <pre class="lang-py prettyprint-override"><code>class A: def __new__(cls, *args, **kwargs): return super().__new__(B) def __init__(self, param): print(param) class B: def __init__(self, param): print(param) if __name__ == '__main__': a = A(1) </code></pre> <p>In this case, no output is generated, and neither <code>A</code>'s <code>__init__</code> nor <code>B</code>'s <code>__init__</code> is called.</p> <p>However, when I modify the code to make B a child of A:</p> <pre class="lang-py prettyprint-override"><code>...... class B(A): ...... </code></pre> <p>Suddenly, <code>B</code>'s <code>__init__</code> is invoked, and it prints 1.</p> <p>I am seeking clarification on how this behavior is occurring. In the first case, if I want to invoke <code>B</code>'s <code>__init__</code> explicitly, I find myself resorting to the following modification:</p> <pre class="lang-py prettyprint-override"><code>class A: def __new__(cls, *args, **kwargs): obj = super().__new__(B) obj.__init__(*args, **kwargs) return obj </code></pre> <p>Can someone explain why the initial code behaves as it does and why making <code>B</code> a child of <code>A</code> alters the behavior? Additionally, how does the modified code explicitly calling <code>B</code>'s <code>__init__</code> achieve the desired outcome and not without it?</p>
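The behavior follows from how `type.__call__` works: after `cls.__new__` returns, `__init__` is invoked only when the returned object is an instance of `cls` (i.e. `isinstance(result, cls)` holds). A minimal, runnable illustration of both cases side by side:

```python
class A1:
    def __new__(cls, *args, **kwargs):
        return super().__new__(B1)   # returns an object unrelated to A1
    def __init__(self, param):
        calls.append("A1.__init__")

class B1:
    def __init__(self, param):
        calls.append("B1.__init__")

class A2:
    def __new__(cls, *args, **kwargs):
        return super().__new__(B2)   # returns an instance of a subclass
    def __init__(self, param):
        calls.append("A2.__init__")

class B2(A2):
    def __init__(self, param):
        calls.append("B2.__init__")

calls = []
a = A1(1)   # B1 is unrelated to A1 -> isinstance check fails, no __init__
b = A2(1)   # B2 subclasses A2 -> isinstance passes, B2.__init__ runs
```

This is also why the manual `obj.__init__(*args, **kwargs)` inside `__new__` works: it bypasses the isinstance gate that `type.__call__` applies.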
<python><python-3.x><magic-methods>
2024-01-31 14:30:09
1
460
demetere._
77,913,802
10,721,627
How to mock a database connection in FastAPI using polars?
<p>I want to mock <code>pyodbc.Connection</code> in a FastAPI application that uses the <code>polars</code> package to read the database. This is the initial <code>main.py</code> file:</p> <pre><code>from fastapi import Depends, FastAPI import polars as pl import pyodbc import os app = FastAPI() def init_db() -&gt; pyodbc.Connection: connstring = os.environ.get(&quot;AZURE_SQL_CONNECTIONSTRING&quot;, &quot;no_env_var&quot;) return pyodbc.connect(connstring) @app.get(&quot;/&quot;) async def root(conn: pyodbc.Connection = Depends(init_db)) -&gt; str: df = pl.read_database(query=&quot;SELECT * FROM test_table&quot;, connection=conn) return df.write_json() </code></pre> <p>The <code>Depends</code> class injects the database connection for testing afterward. I connected to the database via <a href="https://docs.pola.rs/py-polars/html/reference/api/polars.read_database.html" rel="nofollow noreferrer">pl.read_database</a> function. I want to create a unit test case without connecting to the real database.</p> <p>How can I test the API calls without connecting to the real database using <code>pytest</code>?</p> <p>EDIT: Replacing the <code>to_dicts</code> to <code>write_json</code> for more efficient conversion.</p>
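One low-tech route (a sketch, not the only option): in the test, swap the dependency with `app.dependency_overrides[init_db] = lambda: fake_conn`, and let a `unittest.mock.MagicMock` stand in for the `pyodbc.Connection` — `pl.read_database` talks to it through the ordinary DB-API cursor protocol (an assumption worth verifying against your polars version). The mock part alone looks like this:

```python
from unittest.mock import MagicMock

fake_conn = MagicMock()
cursor = fake_conn.cursor.return_value
# canned DB-API results: column descriptions plus rows
cursor.description = [("id", None), ("name", None)]
cursor.fetchall.return_value = [(1, "alice"), (2, "bob")]

# anything that drives the connection through the cursor protocol
# now sees the canned data instead of a real database
rows = fake_conn.cursor().fetchall()
cols = [d[0] for d in fake_conn.cursor().description]
```

In the pytest itself you would install the override before calling the endpoint with `TestClient(app)`, and clear `app.dependency_overrides` afterwards so other tests get the real dependency.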
<python><pytest><fastapi><python-polars>
2024-01-31 13:41:13
1
2,482
Péter Szilvási
77,913,773
8,703,313
pandas: select value from column name given as a value of another column
<p>I have the following dfs:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'a': [94, 170, 5], 'b': [31, 115, 8]}, index=[11, 12, 13]) df1 = pd.DataFrame({'idx': [&quot;a&quot;, &quot;b&quot;, &quot;a&quot;]}, index=[11, 12, 13]) </code></pre> <p><strong>I need to select a value from either column <code>&quot;a&quot;</code> or <code>&quot;b&quot;</code> of <code>df</code> based on the value in <code>df1[&quot;idx&quot;]</code>.</strong></p> <p>I'd like it to be efficient, since my dataframes will be big.</p>
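A vectorized approach (a sketch, assuming every value in `df1['idx']` is a valid column name of `df` and both frames share the same index): translate the labels to column positions once with `get_indexer`, then do a single NumPy fancy-index over the rows:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [94, 170, 5], 'b': [31, 115, 8]}, index=[11, 12, 13])
df1 = pd.DataFrame({'idx': ["a", "b", "a"]}, index=[11, 12, 13])

cols = df.columns.get_indexer(df1["idx"])        # ['a','b','a'] -> [0, 1, 0]
picked = pd.Series(df.to_numpy()[np.arange(len(df)), cols], index=df.index)
print(picked.tolist())  # -> [94, 115, 5]
```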
<python><pandas><select>
2024-01-31 13:37:25
1
310
Honza S.
77,913,743
6,281,366
fastapi - how to use middleware to change the request body?
<p>I know I can create a middleware function using the <code>@app.middleware(&quot;http&quot;)</code> decorator.</p> <p>I would like to create a function that, before every POST request, modifies the request body in some way.</p> <p>I know the generic structure is of this form:</p> <pre><code>@app.middleware(&quot;http&quot;) async def middleware_function_example(request: Request, call_next): response = await call_next(request) return response </code></pre> <p>But how can I modify the request body?</p>
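Starlette reads the body from the ASGI `receive` channel, so a middleware can consume the body and hand the downstream app a replacement `receive` that replays the modified bytes. The snippet below isolates just that trick with plain asyncio; in the real middleware you would do `body = await request.body()`, set `request._receive = replay(new_body)` (note `_receive` is a Starlette-private attribute, so treat this as a fragile sketch), and only then `await call_next(request)`:

```python
import asyncio

def replay(new_body: bytes):
    """Build an ASGI-style receive callable that yields `new_body` once."""
    async def receive():
        return {"type": "http.request", "body": new_body, "more_body": False}
    return receive

# simulate what downstream code sees when it awaits the swapped receive()
message = asyncio.run(replay(b'{"patched": true}')())
```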
<python><fastapi><starlette>
2024-01-31 13:33:39
1
827
tamirg
77,913,720
16,383,578
How to parse MFT File Record bytes using ctypes and struct?
<p>Don't know if this is a duplicate, anyway my Google-fu is weak and Google almost never finds anything relevant if I type more than five words.</p> <p>I am trying to parse Master File Table as located by <code>&quot;//?/X:/$MFT&quot;</code>, I know trying to open it directly will raise <code>PermissionDenied</code>. Of course I have figured a way to circumvent it. By opening <code>&quot;//?/X:&quot;</code> this creates a handle that lets me to read the boot sector, I can then read the MFT from there...</p> <p>I have already written the code, or at least the vast majority of it, I can already read all of the MFT into primary memory but at this stage the memory usage is high and the information is not well organized. But I have parsed all information I wanted with the help of <a href="http://inform.pucp.edu.pe/%7Einf232/Ntfs/ntfs_doc_v0.5/index.html" rel="nofollow noreferrer">this documentation</a>.</p> <p>You can see my code <a href="https://codereview.stackexchange.com/questions/289225/python-master-file-table-parser">here</a>.</p> <p>As you can see from my code, I use a lot of offsets to slice the bytes into chunks and call corresponding functions to decode these chunks iteratively, I will show you what I mean:</p> <pre><code>from typing import NamedTuple class Record_Header_Flags(NamedTuple): In_Use: bool Directory: bool Extension: bool Special_Index: bool class Record_Header(NamedTuple): LogFile_Serial: int Written: int Hardlinks: int Flags: Record_Header_Flags Record_Size: int Base_Record: int Base_Writes: int Record_ID: int HEADER_FLAGS = (1, 2, 4, 8) def parse_signed_little_endian(data: bytes) -&gt; int: return ( -1 * (1 + sum((b ^ 0xFF) * (1 &lt;&lt; i * 8) for i, b in enumerate(data))) if data[-1] &amp; 128 else int.from_bytes(data, &quot;little&quot;) ) def parse_little_endian(data: bytes) -&gt; int: return int.from_bytes(data, &quot;little&quot;) def parse_header_flags(data: bytes) -&gt; Record_Header_Flags: flag = data[0] return 
Record_Header_Flags(*(bool(flag &amp; bit) for bit in HEADER_FLAGS)) FILE_RECORD_HEADER = ( (8, 16, parse_little_endian), (16, 18, parse_little_endian), (18, 20, parse_little_endian), (22, 24, parse_header_flags), (24, 28, parse_little_endian), (32, 38, parse_little_endian), (38, 40, parse_little_endian), (44, 48, parse_little_endian), ) def parse_record_header(data: bytes) -&gt; Record_Header: return Record_Header( *(func(data[start:end]) for start, end, func in FILE_RECORD_HEADER) ) data = b&quot;FILE0\x00\x03\x00\x9dt \x13\x0c\x00\x00\x00\x08\x00\x02\x008\x00\x01\x00\xd8\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\xff\xff\x00\x00&quot; print(parse_record_header(data)) </code></pre> <pre><code>Record_Header(LogFile_Serial=51860501661, Written=8, Hardlinks=2, Flags=Record_Header_Flags(In_Use=True, Directory=False, Extension=False, Special_Index=False), Record_Size=472, Base_Record=0, Base_Writes=0, Record_ID=65535) </code></pre> <p>Someone told me this is inefficient and unPythonic, and the proper way to do this is to use a combination of <code>struct</code> and <code>ctypes</code>.</p> <p>I know I can parse 4 bytes little endian sequences using <code>struct.unpack(&quot;&lt;i&quot;, data)[0]</code>, 4 bytes unsigned LE with this format string: <code>&quot;&lt;I&quot;</code>, 8 bytes LE with <code>&quot;&lt;q&quot;</code> and 8 bytes ULE with <code>&quot;&lt;Q&quot;</code>. 
But some values are sequences of 6 bytes.</p> <p>And I don't know how to use <code>ctypes</code> structures.</p> <p>Further MFT uses non-standard formats like Windows File Time:</p> <pre><code>from datetime import datetime, timedelta from typing import NamedTuple EPOCH = datetime(1601, 1, 1, 0, 0, 0) def parse_NTFS_timestamp(data: bytes) -&gt; datetime: return EPOCH + timedelta(seconds=int.from_bytes(data, &quot;little&quot;) * 1e-7) </code></pre> <p>How would one use <code>ctypes</code> and <code>struct</code> to parse the example I have given, and parse byte sequences containing non-standard encodings for example timestamp fields from 0x10 $STANDARD_INFORMATION?</p>
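For the record-header slice specifically, one `struct.unpack_from` call can replace the per-field loop. Six-byte fields have no struct format code, but reading them as a 4-byte low part plus a 2-byte high part (both little-endian) and recombining works; pad codes (`2x`, `4x`) skip the fields the original tuple ignored. Checked against the sample record bytes from the question:

```python
import struct

data = b"FILE0\x00\x03\x00\x9dt \x13\x0c\x00\x00\x00\x08\x00\x02\x008\x00\x01\x00\xd8\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\xff\xff\x00\x00"

# starting at offset 8: LSN (Q), sequence/written (H), hardlinks (H),
# skip attrs offset (2x), flags (H), real size (I), skip allocated size
# (4x), 6-byte base record as 4+2 bytes (I, H), base writes (H),
# skip next attribute id (4x), record id (I)
(lsn, written, links, flags, size,
 base_lo, base_hi, base_writes, record_id) = struct.unpack_from(
    "<QHH2xHI4xIHH4xI", data, 8)
base_record = base_lo | (base_hi << 32)   # recombine the 6-byte value
```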
<python><struct><ctypes><ntfs-mft>
2024-01-31 13:31:06
1
3,930
Ξένη Γήινος
77,913,561
1,445,531
v2 python programming model and Azure Functions: dynamic amount of blob_output bindings
<p>I'm creating an Azure Function that will receive an array of images URLs and should download them all on Blob storage. I'm using the new v2 programming model with decorators. Here is the dummy code to illustrate:</p> <pre><code>import azure.functions as func import datetime import json import logging from pathlib import Path import urllib import uuid app = func.FunctionApp() @app.route(route=&quot;MyHttpTrigger&quot;, auth_level=func.AuthLevel.ANONYMOUS) @app.blob_output(arg_name=&quot;outputblob&quot;, path=&quot;newblob/folder&quot;, connection=&quot;AzureWebJobsStorage&quot;) def MyHttpTrigger(req: func.HttpRequest, outputblob: func.Out[str]) -&gt; func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') images = [ { &quot;url&quot;: &quot;https://.../dodgenc_dodgenc16juldji0001(466).jpg&quot;, &quot;filename&quot;: &quot;file1.jpg&quot;, &quot;size&quot;: 12345, }, { &quot;url&quot;: &quot;https://.../greenwood_img00112.jpg&quot;, &quot;filename&quot;: &quot;file2.jpg&quot;, &quot;size&quot;: 12345, }, ] for img in images: img_extension = Path(img['filename']).suffix local_path = Path('folder/'+str(uuid.uuid4()) + img_extension) try: urllib.request.urlretrieve(img['url'], local_path) logging.info(f&quot;Downloading {img['url']} to {local_path}&quot;) # Upload data from local to blob storage. logging.warning(f&quot;Uploading {local_path} to {outputblob}&quot;) outputblob.set(local_path.read_bytes()) except Exception as e: logging.warning(f&quot;Could not fetch image {img.url}. Exception: \n{e}&quot;) return func.HttpResponse( &quot;This HTTP triggered function executed successfully.&quot;, status_code=200 ) </code></pre> <p>This code works fine except that there are two images written to the same blob, so the second one overwrites the first one. I could work around that by introducing a second <code>blob_output</code>, one for each image. 
But what do I do if <strong>I don't know in advance</strong> how many images to process?</p> <p>Is there a way to dynamically set the number of output bindings (I expect not)? Alternatively, how could I download all the images to a folder in blob storage?</p> <p>Thanks in advance!</p>
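Output bindings are declared at deploy time, so their count can't vary per invocation; the usual escape hatch (described here as my understanding — check the azure-storage-blob docs) is to drop the binding and upload inside the function with `BlobServiceClient`, giving each image its own blob name. The helper below only shows the per-image naming; the commented SDK calls are assumptions and are not executed here:

```python
import uuid
from pathlib import Path

def blob_name_for(filename: str, prefix: str = "folder") -> str:
    """One unique blob path per downloaded image."""
    return f"{prefix}/{uuid.uuid4()}{Path(filename).suffix}"

# inside the function, roughly (assumed azure-storage-blob usage):
#   container = BlobServiceClient.from_connection_string(conn_str) \
#       .get_container_client("newblob")
#   for img in images:
#       container.upload_blob(name=blob_name_for(img["filename"]), data=payload)
name = blob_name_for("file1.jpg")
```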
<python><azure-functions>
2024-01-31 13:06:25
1
678
jerorx
77,913,473
15,461,255
Why pandas value_counts() generates tuples as index?
<p>I have the following pandas dataframe:</p> <pre><code>import pandas as pd df = pd.DataFrame({'a': [&quot;Q1&quot;, &quot;Q2&quot;, &quot;Q1&quot;, &quot;P1&quot;]}) </code></pre> <p>That is:</p> <pre><code> a 0 Q1 1 Q2 2 Q1 3 P1 </code></pre> <p>When I count the values with <code>counts = df.value_counts(normalize=True)</code>, the index entries become tuples. That is, <code>counts.index</code> is now:</p> <pre><code>MultiIndex([('Q1',), ('P1',), ('Q2',)], names=['a']) </code></pre> <p>but I want to preserve the strings as indexes. In other words, I want <code>counts.index</code> to be an Index object instead of a MultiIndex.</p> <p>So, my questions are:</p> <ul> <li>Why does <code>value_counts</code> return a MultiIndex?</li> <li>Is there an alternative or a &quot;fix&quot; to that?</li> </ul>
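The short answer is that `DataFrame.value_counts` counts distinct *rows*, so its index is a row-key — a MultiIndex — even when there is only one column (newer pandas releases reportedly flatten the single-column case, so behavior is version-dependent). Two fixes, sketched:

```python
import pandas as pd

df = pd.DataFrame({'a': ["Q1", "Q2", "Q1", "P1"]})

# option 1: count the column as a Series -> plain Index of strings
counts = df['a'].value_counts(normalize=True)

# option 2: flatten the DataFrame result's one-level index explicitly
counts2 = df.value_counts(normalize=True)
counts2.index = counts2.index.get_level_values('a')
```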
<python><pandas>
2024-01-31 12:53:27
1
350
Palinuro
77,913,463
9,208,758
How to resolve the stride assert error produced by the PyTorch compile?
<p>I am currently trying to run the code from the section 3.2 of the &quot;A QuickPyTorch 2.0 Tutorial&quot; (<a href="https://www.learnpytorch.io/pytorch_2_intro/#27-create-training-and-testing-loops" rel="nofollow noreferrer">https://www.learnpytorch.io/pytorch_2_intro/#27-create-training-and-testing-loops</a>). When I run the code, I get the following error:</p> <p>AssertionError: expected size 64==64, stride 3136==1 at dim=1</p> <p><a href="https://i.sstatic.net/5N2DP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5N2DP.png" alt="enter image description here" /></a></p> <p>Partial output and full error below:</p> <pre><code>... /home/isaac-aktam/anaconda3/lib/python3.10/site-packages/torch/overrides.py:110: UserWarning: 'has_cuda' is deprecated, please use 'torch.backends.cuda.is_built()' torch.has_cuda, /home/isaac-aktam/anaconda3/lib/python3.10/site-packages/torch/overrides.py:111: UserWarning: 'has_cudnn' is deprecated, please use 'torch.backends.cudnn.is_available()' torch.has_cudnn, /home/isaac-aktam/anaconda3/lib/python3.10/site-packages/torch/overrides.py:117: UserWarning: 'has_mps' is deprecated, please use 'torch.backends.mps.is_built()' torch.has_mps, /home/isaac-aktam/anaconda3/lib/python3.10/site-packages/torch/overrides.py:118: UserWarning: 'has_mkldnn' is deprecated, please use 'torch.backends.mkldnn.is_available()' torch.has_mkldnn, Training Epoch 0: 99%|█████████▉| 195/196 [01:53&lt;00:00, 1.72it/s, train_loss=0.671, train_acc=0.769] 0%| | 0/5 [01:53&lt;?, ?it/s] --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) Cell In[25], line 43 39 print(f&quot;Time to compile: {compile_time} | Note: the first time you compile a model, the first epoch may take longer due optimizations happening behind the scenes&quot;) 41 # Train the compiled model ---&gt; 43 single_run_compile_results = train(model = compiled_model.to(device), 44 train_dataloader = 
train_dataloader_CIFAR10, 45 test_dataloader = test_dataloader_CIFAR10, 46 optimizer = optimizer, 47 loss_fn = loss_fn, 48 epochs = NUM_EPOCHS, 49 device = device) Cell In[24], line 204, in train(model, train_dataloader, test_dataloader, optimizer, loss_fn, epochs, device, disable_progress_bar) 200 for epoch in tqdm(range(epochs), disable=disable_progress_bar): 201 202 # Perform training step and time it 203 train_epoch_start_time = time.time() --&gt; 204 train_loss, train_acc = train_step(epoch=epoch, 205 model=model, 206 dataloader=train_dataloader, 207 loss_fn=loss_fn, 208 optimizer=optimizer, 209 device=device, 210 disable_progress_bar=disable_progress_bar) 211 train_epoch_end_time = time.time() 212 train_epoch_time = train_epoch_end_time - train_epoch_start_time Cell In[24], line 60, in train_step(epoch, model, dataloader, loss_fn, optimizer, device, disable_progress_bar) 57 optimizer.zero_grad() 59 # 4. Loss backward ---&gt; 60 loss.backward() 62 # 5. Optimizer step 63 optimizer.step() File ~/anaconda3/lib/python3.10/site-packages/torch/_tensor.py:483, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs) 436 r&quot;&quot;&quot;Computes the gradient of current tensor wrt graph leaves. 437 438 The graph is differentiated using the chain rule. If the tensor is (...) 480 used to compute the attr::tensors. 
481 &quot;&quot;&quot; 482 if has_torch_function_unary(self): --&gt; 483 return handle_torch_function( 484 Tensor.backward, 485 (self,), 486 self, 487 gradient=gradient, 488 retain_graph=retain_graph, 489 create_graph=create_graph, 490 inputs=inputs, 491 ) 492 torch.autograd.backward( 493 self, gradient, retain_graph, create_graph, inputs=inputs 494 ) File ~/anaconda3/lib/python3.10/site-packages/torch/overrides.py:1560, in handle_torch_function(public_api, relevant_args, *args, **kwargs) 1556 if _is_torch_function_mode_enabled(): 1557 # if we're here, the mode must be set to a TorchFunctionStackMode 1558 # this unsets it and calls directly into TorchFunctionStackMode's torch function 1559 with _pop_mode_temporarily() as mode: -&gt; 1560 result = mode.__torch_function__(public_api, types, args, kwargs) 1561 if result is not NotImplemented: 1562 return result File ~/anaconda3/lib/python3.10/site-packages/torch/utils/_device.py:77, in DeviceContext.__torch_function__(self, func, types, args, kwargs) 75 if func in _device_constructors() and kwargs.get('device') is None: 76 kwargs['device'] = self.device ---&gt; 77 return func(*args, **kwargs) File ~/anaconda3/lib/python3.10/site-packages/torch/_tensor.py:492, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs) 482 if has_torch_function_unary(self): 483 return handle_torch_function( 484 Tensor.backward, 485 (self,), (...) 
490 inputs=inputs, 491 ) --&gt; 492 torch.autograd.backward( 493 self, gradient, retain_graph, create_graph, inputs=inputs 494 ) File ~/anaconda3/lib/python3.10/site-packages/torch/autograd/__init__.py:251, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs) 246 retain_graph = create_graph 248 # The reason we repeat the same comment below is that 249 # some Python versions print out the first line of a multi-line function 250 # calls in the traceback and some print out the last line --&gt; 251 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 252 tensors, 253 grad_tensors_, 254 retain_graph, 255 create_graph, 256 inputs, 257 allow_unreachable=True, 258 accumulate_grad=True, 259 ) File ~/anaconda3/lib/python3.10/site-packages/torch/autograd/function.py:288, in BackwardCFunction.apply(self, *args) 282 raise RuntimeError( 283 &quot;Implementing both 'backward' and 'vjp' for a custom &quot; 284 &quot;Function is not allowed. 
You should only implement one &quot; 285 &quot;of them.&quot; 286 ) 287 user_fn = vjp_fn if vjp_fn is not Function.vjp else backward_fn --&gt; 288 return user_fn(self, *args) File ~/anaconda3/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py:3232, in aot_dispatch_autograd.&lt;locals&gt;.CompiledFunction.backward(ctx, *flat_args) 3230 out = CompiledFunctionBackward.apply(*all_args) 3231 else: -&gt; 3232 out = call_compiled_backward() 3233 return out File ~/anaconda3/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py:3204, in aot_dispatch_autograd.&lt;locals&gt;.CompiledFunction.backward.&lt;locals&gt;.call_compiled_backward() 3199 with tracing(saved_context), context(), track_graph_compiling(aot_config, &quot;backward&quot;): 3200 CompiledFunction.compiled_bw = aot_config.bw_compiler( 3201 bw_module, placeholder_list 3202 ) -&gt; 3204 out = call_func_with_args( 3205 CompiledFunction.compiled_bw, 3206 all_args, 3207 steal_args=True, 3208 disable_amp=disable_amp, 3209 ) 3211 out = functionalized_rng_runtime_epilogue(CompiledFunction.metadata, out) 3212 return tuple(out) File ~/anaconda3/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py:1506, in call_func_with_args(f, args, steal_args, disable_amp) 1504 with context(): 1505 if hasattr(f, &quot;_boxed_call&quot;): -&gt; 1506 out = normalize_as_list(f(args)) 1507 else: 1508 # TODO: Please remove soon 1509 # https://github.com/pytorch/pytorch/pull/83137#issuecomment-1211320670 1510 warnings.warn( 1511 &quot;Your compiler for AOTAutograd is returning a function that doesn't take boxed arguments. &quot; 1512 &quot;Please wrap it with functorch.compile.make_boxed_func or handle the boxed arguments yourself. 
&quot; 1513 &quot;See https://github.com/pytorch/pytorch/pull/83137#issuecomment-1211320670 for rationale.&quot; 1514 ) File ~/anaconda3/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:328, in _TorchDynamoContext.__call__.&lt;locals&gt;._fn(*args, **kwargs) 326 dynamic_ctx.__enter__() 327 try: --&gt; 328 return fn(*args, **kwargs) 329 finally: 330 set_eval_frame(prior) File ~/anaconda3/lib/python3.10/site-packages/torch/_dynamo/external_utils.py:17, in wrap_inline.&lt;locals&gt;.inner(*args, **kwargs) 15 @functools.wraps(fn) 16 def inner(*args, **kwargs): ---&gt; 17 return fn(*args, **kwargs) File ~/anaconda3/lib/python3.10/site-packages/torch/_inductor/codecache.py:374, in CompiledFxGraph.__call__(self, inputs) 373 def __call__(self, inputs) -&gt; Any: --&gt; 374 return self.get_current_callable()(inputs) File ~/anaconda3/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:628, in align_inputs_from_check_idxs.&lt;locals&gt;.run(new_inputs) 626 def run(new_inputs): 627 copy_misaligned_inputs(new_inputs, inputs_to_check) --&gt; 628 return model(new_inputs) File ~/anaconda3/lib/python3.10/site-packages/torch/_inductor/codecache.py:401, in _run_from_cache(compiled_graph, inputs) 391 from .codecache import PyCodeCache 393 compiled_graph.compiled_artifact = PyCodeCache.load_by_key_path( 394 compiled_graph.cache_key, 395 compiled_graph.artifact_path, (...) 
398 else (), 399 ).call --&gt; 401 return compiled_graph.compiled_artifact(inputs) File /tmp/torchinductor_isaac-aktam/fi/cfignuiw5cmdhtpeow6axfwfjocu44zvgnrf4rlockxh2k5th7f3.py:5638, in call(args) 5636 del primals_4 5637 buf473 = buf472[0] -&gt; 5638 assert_size_stride(buf473, (s0, 64, 56, 56), (200704, 1, 3584, 64)) 5639 buf474 = buf472[1] 5640 assert_size_stride(buf474, (64, 64, 1, 1), (64, 1, 64, 64)) AssertionError: expected size 64==64, stride 3136==1 at dim=1 </code></pre> <p>I found a relevant post on GitHub (<a href="https://github.com/pytorch/pytorch/pull/91605" rel="nofollow noreferrer">https://github.com/pytorch/pytorch/pull/91605</a>), but don't understand how to implement it.</p> <p>Note: the code works when torch.compile() is not utilised. Furthermore, when torch.compile() is on, the code crashes after the first epoch.</p>
<python><machine-learning><deep-learning><pytorch><resnet>
2024-01-31 12:52:11
0
589
Isaac A
77,913,408
7,123,933
How to keep original indentations in YML file after section deletion
<p>I am using a Python script to remove specific sections from a YML file, which is the schema of source tables for DBT, in a format like:</p> <pre class="lang-yaml prettyprint-override"><code>models: - name: model1 columns: - name: ID description: user_id data_type: INT64 - name: ATTR description: attribute_name data_type: INT64 - name: model2 columns: - name: USER description: username </code></pre> <p>and when I delete one section, let's say &quot;- name: model1&quot;, using this function:</p> <pre class="lang-py prettyprint-override"><code>import ruamel.yaml def remove_section_from_yml(yml_path, section_name): yaml = ruamel.yaml.YAML() with open(yml_path, 'r') as f: data = yaml.load(f) for i, model in enumerate(data['models']): if 'name' in model and model['name'] == section_name: del data['models'][i] break with open(yml_path, 'w') as f: yaml.preserve_quotes = True yaml.dump(data, f) </code></pre> <p>it works properly, removing the section I do not need, but in the output file the indentation is not correct and looks like this:</p> <pre class="lang-yaml prettyprint-override"><code>models: - name: model2 columns: - name: USER description: username </code></pre> <p>It moves the section with the model name completely to the left, without indentation, and I don't know how to resolve it. Any ideas? :)</p>
<python><yaml>
2024-01-31 12:42:05
1
359
Lukasz
77,913,385
9,138,097
Find duplicates in environment variables defined in YAML file
<p>I have a kubernetes deployment YAML file with defined environment variables, including duplicated environment variables. My goal is to detect them and print them, as I have hundreds of files which need to be parsed.</p> <p>Example YAML:</p> <pre><code>kind: ReplicationController apiVersion: v1 spec: template: spec: containers: env: - name: PROFILE value: some_value - name: PROFILE value: some_value </code></pre> <p>The environment variable key is duplicated with a value; I want to detect it and print the duplicated key.</p> <p>I tried using the PyYAML library in Python, but it does not detect duplicates at the environment-variable level.</p> <p>Any examples would be highly appreciated.</p>
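PyYAML won't flag this on its own because the duplicates are not mapping keys — `env` is a *list* of `{name, value}` mappings, which is perfectly legal YAML — so after `yaml.safe_load` you have to walk the parsed structure and count the `name` fields yourself. The sketch below hard-codes the parsed form of the example (the shape `safe_load` would return, assumed here to stay self-contained):

```python
from collections import Counter

# what yaml.safe_load(...) yields for the example manifest (assumed shape)
doc = {
    "kind": "ReplicationController",
    "apiVersion": "v1",
    "spec": {"template": {"spec": {"containers": {"env": [
        {"name": "PROFILE", "value": "some_value"},
        {"name": "PROFILE", "value": "some_value"},
    ]}}}},
}

env = doc["spec"]["template"]["spec"]["containers"]["env"]
dupes = [name for name, n in Counter(e["name"] for e in env).items() if n > 1]
print(dupes)  # -> ['PROFILE']
```

For real manifests you would loop this over every file and every container's `env` list (in well-formed manifests `containers` is itself a list).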
<python><python-3.x><parsing><yaml><pyyaml>
2024-01-31 12:37:28
0
811
Aziz Zoaib
77,913,227
9,758,017
Remove the boxes but keep the labels for a YOLOv8 prediction
<p>I am working on a project where I have trained a model using YOLO (from the Ultralytics library - version: 8.0.132) to detect specific objects in images. My classification categories are <strong>[A, B, C, D]</strong>.</p> <p>In my Jupyter Notebook, I have the following code:</p> <pre><code>from ultralytics import YOLO model_path = &quot;{pathToModel}/best.pt&quot; print(&quot;Loading model...&quot;) model = YOLO(model_path) result = model.predict(&quot;{pathToImage}.png&quot;, conf=0.3, save=True) print(result) # Checking the presence of predictions for r in result: print(r.masks) </code></pre> <p>This successfully saves an image showing the predicted segments with bounding boxes and probabilities (e.g., category B with a probability of 0.61).</p> <p>However, when I modify the <code>predict</code> method (by reading through the documentation: <a href="https://docs.ultralytics.com/de/modes/predict/" rel="nofollow noreferrer">https://docs.ultralytics.com/de/modes/predict/</a>) to exclude bounding boxes (<code>boxes=False</code>), the saved image shows the segments without the boxes and crucially, without labels or probabilities.</p> <p>Attempting to include labels and probabilities with <code>labels=True</code> (same with <code>show_labels</code>) and <code>probs=True</code> results in the following error:</p> <pre><code>Traceback (most recent call last): File c:\Users\myuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\IPython\core\interactiveshell.py:3442 in run_code exec(code_obj, self.user_global_ns, self.user_ns) Cell In[39], line 1 result = model.predict(&quot;{pathToMyImage}.png&quot; , conf = 0.3, save = True, boxes=False, labels=True, probs=True) File c:\Users\myuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\utils\_contextlib.py:115 in decorate_context return func(*args, **kwargs) File c:\Users\myuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\ultralytics\yolo\engine\model.py:249 in predict self.predictor 
= TASK_MAP[self.task][3](overrides=overrides, _callbacks=self.callbacks) File c:\Users\myuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\ultralytics\yolo\v8\segment\predict.py:13 in __init__ super().__init__(cfg, overrides, _callbacks) File c:\Users\myuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\ultralytics\yolo\engine\predictor.py:86 in __init__ self.args = get_cfg(cfg, overrides) File c:\Users\myuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\ultralytics\yolo\cfg\__init__.py:114 in get_cfg check_cfg_mismatch(cfg, overrides) File c:\Users\myuser\AppData\Local\Programs\Python\Python311\Lib\site-packages\ultralytics\yolo\cfg\__init__.py:187 in check_cfg_mismatch raise SyntaxError(string + CLI_HELP_MSG) from e ... Docs: https://docs.ultralytics.com Community: https://community.ultralytics.com GitHub: https://github.com/ultralytics/ultralytics </code></pre> <p>Seems like the combination of</p> <pre><code>show_boxes=False, show_conf=True, show_labels=True </code></pre> <p>does not work out (what is described in here: <a href="https://stackoverflow.com/questions/76893048/unable-to-hide-bounding-boxes-and-labels-in-yolov8">Unable to hide bounding boxes and labels in YOLOv8</a>). <code>show_boxes</code> throws and error and <code>boxes</code> more or less overrides everything I set as an argument.</p> <p>I followed the documentation but am struggling to display both the segment and its associated label/probability without the bounding box. Any insights or suggestions on how to achieve this would be greatly appreciated.</p>
<python><yolo><yolov8><ultralytics>
2024-01-31 12:14:37
1
1,778
41 72 6c
77,913,154
7,662,164
Vectorizing power of `jax.grad`
<p>I'm trying to vectorize the following &quot;power-of-grad&quot; function so that it accepts multiple <code>order</code>s: (<a href="https://github.com/google/jax/discussions/18834" rel="nofollow noreferrer">see here</a>)</p> <pre><code>def grad_pow(f, order, argnum): for i in jnp.arange(order): f = grad(f, argnums=argnum) return f </code></pre> <p>This function produces the following error after applying <code>vmap</code> on the argument <code>order</code>:</p> <pre><code>jax.errors.ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: traced array with shape int32[]. It arose in the jnp.arange argument 'stop' </code></pre> <p>I have tried writing a static version of <code>grad_pow</code> using <code>jax.lax.cond</code> and <code>jax.lax.scan</code>, following the logic <a href="https://stackoverflow.com/questions/76334231/how-can-i-implement-a-vmappable-sum-over-a-dynamic-range-in-jax">here</a>:</p> <pre><code>def static_grad_pow(f, order, argnum): order_max = 3 ## maximum order def grad_pow(f, i): return cond(i &lt;= order, grad(f, argnum), f), None return scan(grad_pow, f, jnp.arange(order_max+1))[0] if __name__ == &quot;__main__&quot;: test_func = lambda x: jnp.exp(-2*x) test_func_grad_pow = static_grad_pow(jax.tree_util.Partial(test_func), 1, 0) print(test_func_grad_pow(1.)) </code></pre> <p>Nevertheless, this solution still produces an error:</p> <pre><code> return cond(i &lt;= order, grad(f, argnum), f), None TypeError: differentiating with respect to argnums=0 requires at least 1 positional arguments to be passed by the caller, but got only 0 positional arguments. </code></pre> <p>Just wondering how this issue can be resolved?</p>
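The root issue is that the differentiation order must be known at trace time — `jax.grad` builds a new function, so it cannot depend on a traced integer. One workaround (a sketch, bounded by a static `max_order`): precompute every order up front at Python level and pick between them with `lax.switch`, which *is* vmappable over the selector:

```python
import jax
import jax.numpy as jnp

def grad_pow(f, max_order):
    # build [f, f', f'', ...] once, outside any tracing
    funcs = [f]
    for _ in range(max_order):
        funcs.append(jax.grad(funcs[-1]))
    def apply(order, x):
        return jax.lax.switch(order, funcs, x)
    return apply

f = lambda x: jnp.exp(-2 * x)
d = grad_pow(f, max_order=2)
values = jax.vmap(d, in_axes=(0, None))(jnp.arange(3), 1.0)
```

Under `vmap`, `switch` evaluates the branches and selects per element, so all orders up to `max_order` are paid for; that is the usual price of making a structurally static choice look dynamic.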
<python><loops><vectorization><jax>
2024-01-31 12:01:31
1
335
Jingyang Wang
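One way the `grad_pow` question above can be made `vmap`-compatible is to unroll the derivative orders in a Python loop at trace time (so the loop bound is static) and dispatch on the traced `order` with `jax.lax.switch`. This is a sketch under the assumption that a static `max_order` bound is acceptable; `make_grad_pow` is a hypothetical helper name, not part of JAX:

```python
import jax
import jax.numpy as jnp

def make_grad_pow(f, max_order):
    # Build f, f', f'', ... in a Python loop, so max_order must be static.
    derivs = [f]
    for _ in range(max_order):
        derivs.append(jax.grad(derivs[-1]))

    def grad_pow(order, x):
        # lax.switch accepts a traced index, so grad_pow can be vmapped over order.
        return jax.lax.switch(order, derivs, x)

    return grad_pow

f = lambda x: jnp.exp(-2.0 * x)
grad_pow = make_grad_pow(f, max_order=3)
# Evaluate f, f', f'', f''' at x = 1.0 in one vectorized call.
values = jax.vmap(grad_pow, in_axes=(0, None))(jnp.arange(4), 1.0)
```

Since `d^n/dx^n exp(-2x) = (-2)^n exp(-2x)`, the entries of `values` should alternate in sign and double in magnitude. Note that a vmapped `switch` evaluates all branches and selects, so this trades compute for vectorizability.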
77,913,003
275,088
How does autocommit work with multiple queries?
<p>I have a psycopg connection with autocommit turned on. Say, I run a query that is a combination of multiple queries, e.g.:</p> <pre><code>query = &quot;;&quot;.join([create_table, insert_data, analyze_table]) conn.execute(query) </code></pre> <p>Is this batch executed in a single transaction or multiple transactions? What happens if a query in the middle fails?</p>
<python><psycopg2>
2024-01-31 11:34:03
1
16,548
planetp
77,912,987
6,231,251
Error with setup.py egg_info while pip-installing my own package
<p>I am building a Python package via <code>pip</code> and <code>setuptools</code>. I am testing it on <code>test.pypi</code> and I am building it following <a href="https://betterscientificsoftware.github.io/python-for-hpc/tutorials/python-pypi-packaging/" rel="nofollow noreferrer">this guide</a>.</p> <p>Prior to building the package and uploading, I made sure to update everything I need with</p> <pre><code>pip install --upgrade pip setuptools </code></pre> <p>My project directory is structured as follows:</p> <pre><code>mypackage |_ /mypackage |_ /docs |_ /demos |_ LICENSE |_ README.md |_ requirements.txt |_ setup.py |_ /tests </code></pre> <p>While building, everything runs smoothly. In my project directory, I run the following commands:</p> <pre><code> python setup.py check python setup.py sdist python setup.py bdist_wheel --universal python setup.py egg_info twine upload --repository-url https://test.pypi.org/legacy/ dist/mypackage-0.0.2.tar.gz </code></pre> <p>It successfully builds, adding the folders <code>mypackage.egg-info</code>, <code>dist</code> and <code>build</code> to my project folder.</p> <p>However, when I try to install it by doing</p> <pre><code>pip install -i https://test.pypi.org/simple/ mypackage </code></pre> <p>I get the following error:</p> <pre><code>Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. 
│ exit code: 1 ╰─&gt; [8 lines of output] Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 2, in &lt;module&gt; File &quot;&lt;pip-setuptools-caller&gt;&quot;, line 34, in &lt;module&gt; File &quot;/private/var/folders/1l/rpytfnks7cg8cqm_7qp4r_v40000gn/T/pip-install-_mx6wquv/mypackage_5557314a6a5a415c90c3b8da2b6c8efa/setup.py&quot;, line 11, in &lt;module&gt; with open(path.join(here, &quot;requirements.txt&quot;), encoding=&quot;utf-8&quot;) as f: File &quot;/Users/myname/opt/anaconda3/envs/pip_mypackage/lib/python3.10/codecs.py&quot;, line 906, in open file = builtins.open(filename, mode, buffering) FileNotFoundError: [Errno 2] No such file or directory: '/private/var/folders/1l/rpytfnks7cg8cqm_7qp4r_v40000gn/T/pip-install-_mx6wquv/mypackage_5557314a6a5a415c90c3b8da2b6c8efa/requirements.txt' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre> <p>It looks like it is not able to locate the <code>requirements.txt</code> file at some point during the process. The error stays even if I specify the requirements directly in <code>setup.py</code>instead of sourcing them from the external file.</p> <p>I am not sure what I am missing and how I can fix the issue on my own package. Please let me know if I can provide further details.</p>
<python><pip><setuptools><pypi><python-packaging>
2024-01-31 11:31:48
0
882
sato
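The traceback in the question above shows `setup.py` failing because `requirements.txt` is absent from the unpacked sdist; the usual root cause is that the file was never declared in `MANIFEST.in` (e.g. with a line `include requirements.txt`). Independently, `setup.py` can be made defensive so a missing file doesn't crash metadata generation. The `read_requirements` helper below is an illustration, not part of setuptools:

```python
from pathlib import Path

def read_requirements(base_dir):
    # Tolerate a missing requirements.txt (e.g. when it was left out of the sdist)
    # instead of raising FileNotFoundError during `pip install`.
    req = Path(base_dir) / "requirements.txt"
    if not req.is_file():
        return []
    lines = req.read_text(encoding="utf-8").splitlines()
    # Drop blank lines and comments.
    return [ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")]

# setup.py would then pass:
#   install_requires=read_requirements(Path(__file__).parent)
```

The defensive fallback only hides the symptom; shipping the file via `MANIFEST.in` (or moving the dependency list into `setup.py`/`pyproject.toml`) is the actual fix.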
77,912,835
4,965,381
groupby on pandas dataframe created with pandas.read_csv
<p>How do you create a dataframe from a CSV file and be able to execute the same <code>groupby</code> command that simply works when creating the dataframe by hand?</p> <p>Step by step, here is what I tried:</p> <p>I create a pandas dataframe with multiindexed columns:</p> <pre><code>df = pd.DataFrame({ ('A', 'x'): [1, 2, 3, 4], ('A', 'y'): [10, 15, 20, 25], ('B', 'x'): [7, 8, 9, 10], ('B', 'y'): [1, 1, 2, 2], }) </code></pre> <p>I then add a new column with some &quot;metadata&quot; that has only one column identifier:</p> <pre><code>df[&quot;C&quot;] = [&quot;F1&quot;, &quot;F1&quot;, &quot;F2&quot;, &quot;F2&quot;] </code></pre> <p>Results in:</p> <pre><code> A B C x y x y 0 1 10 7 1 F1 1 2 15 8 1 F1 2 3 20 9 2 F2 3 4 25 10 2 F2 </code></pre> <p>And I group my data on this column <code>&quot;C&quot;</code> - this works:</p> <pre><code>grouped = df.groupby(&quot;C&quot;).sum() A B x y x y C F1 3 25 15 2 F2 7 45 19 4 </code></pre> <p>Now I store the dataframe as a CSV file and read it again:</p> <pre><code>df.to_csv(&quot;test.csv&quot;) df2 = pd.read_csv(&quot;test.csv&quot;, header=[0, 1], index_col=0) A B C x y x y Unnamed: 5_level_1 0 1 10 7 1 F1 1 2 15 8 1 F1 2 3 20 9 2 F2 3 4 25 10 2 F2 </code></pre> <p>Executing the same groupby command on the read-in dataframe results in an error:</p> <pre><code>df2.groupby(&quot;C&quot;).sum() File &quot;xxx\py311\Lib\site-packages\pandas\core\groupby\grouper.py&quot;, line 980, in get_grouper raise ValueError(f&quot;Grouper for '{name}' not 1-dimensional&quot;) ValueError: Grouper for 'C' not 1-dimensional </code></pre> <p>I just can't seem to work out what I need to do to be able to group by the column after reading the data back in from CSV.</p>
<python><pandas><group-by>
2024-01-31 11:08:34
2
1,067
voiDnyx
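The error in the question above happens because `read_csv` fills the blank second header of the `C` column with `Unnamed: 5_level_1`, so the label `"C"` alone no longer identifies a single 1-D column. One fix is to collapse the auto-generated level back to an empty string and group by the full tuple key. A sketch (using an in-memory buffer instead of `test.csv`):

```python
import io

import pandas as pd

df = pd.DataFrame({
    ("A", "x"): [1, 2, 3, 4],
    ("A", "y"): [10, 15, 20, 25],
    ("B", "x"): [7, 8, 9, 10],
    ("B", "y"): [1, 1, 2, 2],
})
df["C"] = ["F1", "F1", "F2", "F2"]

buf = io.StringIO()
df.to_csv(buf)
buf.seek(0)
df2 = pd.read_csv(buf, header=[0, 1], index_col=0)

# Collapse the auto-generated "Unnamed: ..." second level back to "".
df2.columns = pd.MultiIndex.from_tuples(
    [(a, "" if b.startswith("Unnamed") else b) for a, b in df2.columns])

# Group by the fully-qualified column key ("C", "").
grouped = df2.groupby(("C", "")).sum()
```

After the rename, `("C", "")` matches exactly one column, so the grouper is 1-dimensional again and the sums match the hand-built dataframe.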
77,912,770
2,233,500
How to extract data from wikilinks?
<p>I want to extract data from the wikilinks returned by the <a href="https://github.com/earwig/mwparserfromhell" rel="nofollow noreferrer">mwparserfromhell</a> lib. I want, for instance, to parse the following string:</p> <pre><code>[[File:Warszawa, ul. Freta 16 20170516 002.jpg|thumb|upright=1.18|[[Maria Skłodowska-Curie Museum|Birthplace]] of Marie Curie, at 16 Freta Street, in [[Warsaw]], [[Poland]].]] </code></pre> <p>If I split the string using the character <code>|</code>, it doesn't work, as there is a link inside the description of the image that uses the <code>|</code> as well: <code>[[Maria Skłodowska-Curie Museum|Birthplace]]</code>.</p> <p>I'm using a regexp to first replace all links in the string before splitting it. It works (in this case) but it doesn't feel clean (see code below). Is there a better way to extract information from such a string?</p> <pre class="lang-py prettyprint-override"><code>import re wiki_code = &quot;[[File:Warszawa, ul. Freta 16 20170516 002.jpg|thumb|upright=1.18|[[Maria Skłodowska-Curie Museum|Birthplace]] of Marie Curie, at 16 Freta Street, in [[Warsaw]], [[Poland]].]]&quot; # Remove [[File: at the beginning of the string prefix = &quot;[[File:&quot; if (wiki_code.startswith(prefix)): wiki_code = wiki_code[len(prefix):] # Remove ]] at the end of the string suffix = &quot;]]&quot; if (wiki_code.endswith(suffix)): wiki_code = wiki_code[:-len(suffix)] # Replace links with their labels link_pattern = re.compile(r'\[\[.*?\]\]') matches = link_pattern.findall(wiki_code) for match in matches: content = match[2:-2] arr = content.split(&quot;|&quot;) label = arr[-1] wiki_code = wiki_code.replace(match, label) print(wiki_code.split(&quot;|&quot;))</code></pre>
<python><wikipedia>
2024-01-31 10:57:24
1
867
Vincent Garcia
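Instead of rewriting nested links before splitting, the wikilink question above can be handled by splitting only at bracket depth 0, so pipes inside nested `[[...|...]]` are preserved verbatim. A stdlib-only sketch; `split_top_level` is an illustrative helper name, and the accented characters of the sample string are simplified to ASCII here:

```python
def split_top_level(s, sep="|"):
    # Split on `sep` only outside [[...]], so nested links keep their pipes.
    parts, buf, depth, i = [], [], 0, 0
    while i < len(s):
        if s.startswith("[[", i):
            depth += 1
            buf.append("[[")
            i += 2
        elif s.startswith("]]", i):
            depth -= 1
            buf.append("]]")
            i += 2
        elif s[i] == sep and depth == 0:
            parts.append("".join(buf))
            buf = []
            i += 1
        else:
            buf.append(s[i])
            i += 1
    parts.append("".join(buf))
    return parts

wiki = ("[[File:Warszawa, ul. Freta 16 20170516 002.jpg|thumb|upright=1.18|"
        "[[Maria Sklodowska-Curie Museum|Birthplace]] of Marie Curie, at 16 Freta "
        "Street, in [[Warsaw]], [[Poland]].]]")
# Strip the outer [[File: ... ]] wrapper, then split the remaining fields.
fields = split_top_level(wiki[len("[[File:"):-len("]]")])
```

This yields four fields (filename, `thumb`, `upright=1.18`, caption) with the nested links left intact in the caption, which can then be parsed separately.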
77,912,637
5,769,814
What is `\*` in a Python function definition?
<p>I know what a single asterisk (<code>*</code>) means in a function definition. See, for example, <a href="https://stackoverflow.com/questions/14301967/what-does-a-bare-asterisk-do-in-a-parameter-list-what-are-keyword-only-parame">this question</a>.</p> <p>However, the <a href="https://docs.python.org/3/library/warnings.html#available-functions" rel="nofollow noreferrer">documentation of <code>warnings.warn</code></a> contains the entry <code>\*</code>. What does this symbol combination mean in a function definition?</p>
<python><python-3.x>
2024-01-31 10:39:53
1
1,324
Mate de Vita
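On the `\*` question above: the backslash is just reStructuredText escaping in the docs' source; rendered, it is the bare `*` marker, which makes every parameter after it keyword-only. A small demonstration (the signature below is modeled loosely on `warnings.warn` for illustration, not copied from it):

```python
def warn_like(message, category=UserWarning, stacklevel=1, *, skip_file_prefixes=()):
    # Parameters after the bare * can only be passed by keyword.
    return message, skip_file_prefixes

# Works: keyword-only argument passed by name.
ok = warn_like("careful", skip_file_prefixes=("/tmp",))

# Fails: warn_like("careful", UserWarning, 1, ("/tmp",)) raises TypeError,
# because skip_file_prefixes cannot be passed positionally.
```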
77,912,612
13,392,257
MySQLdb/_mysql.c:521:9: error: call to undeclared function 'mysql_ssl_set'
<p>My system -- Python version -- 3.8.18 and OS version -- macOS ventura 13.6.4</p> <p>I have an error when I want to install mysqlclient</p> <pre><code> pip install mysqlclient==2.2.0 </code></pre> <p>Error: (full log below)</p> <pre><code>MySQLdb/_mysql.c:521:9: error: call to undeclared function 'mysql_ssl_set' </code></pre> <p>I was trying to use mysql 8.0 instead of 8.3 (according to this post <a href="https://stackoverflow.com/questions/77903414/pip-install-mysqlclient-fails-with-call-to-undeclared-function-error">pip install mysqlclient fails with &quot;call to undeclared function&quot; error</a> )</p> <p>Then I created a new env <code>python3.8 -m venv test_venv38</code></p> <pre><code>brew install pkg-config // version 0.29.2 pip install mysqlclient==2.2.0 </code></pre> <p>Error for the last command</p> <pre><code>File &quot;/private/var/folders/5f/mqmy5hxd2d9_ydtgx8jfch2r0000gn/T/pip-build-env-10bqx287/overlay/lib/python3.8/site-packages/setuptools/build_meta.py&quot;, line 311, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 154, in &lt;module&gt; File &quot;&lt;string&gt;&quot;, line 48, in get_config_posix File &quot;&lt;string&gt;&quot;, line 27, in find_package_name Exception: Can not find valid pkg-config name. Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually [end of output] </code></pre> <hr /> <p>on the step <strong>pip install mysqlclient==2.2.0</strong></p> <p>Initial error logs:</p> <pre><code>Building wheels for collected packages: mysqlclient, thriftpy2 Building wheel for mysqlclient (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for mysqlclient (pyproject.toml) did not run successfully. 
│ exit code: 1 ╰─&gt; [59 lines of output] Trying pkg-config --exists mysqlclient # Options for building extention module: extra_compile_args: ['-I/usr/local/Cellar/mysql/8.3.0/include/mysql', '-std=c99'] extra_link_args: ['-L/usr/local/Cellar/mysql/8.3.0/lib', '-lmysqlclient'] define_macros: [('version_info', (2, 2, 0, 'final', 0)), ('__version__', '2.2.0')] running bdist_wheel running build running build_py creating build creating build/lib.macosx-13-x86_64-cpython-38 creating build/lib.macosx-13-x86_64-cpython-38/MySQLdb copying src/MySQLdb/release.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb copying src/MySQLdb/cursors.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb copying src/MySQLdb/connections.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb copying src/MySQLdb/__init__.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb copying src/MySQLdb/times.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb copying src/MySQLdb/converters.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb copying src/MySQLdb/_exceptions.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb creating build/lib.macosx-13-x86_64-cpython-38/MySQLdb/constants copying src/MySQLdb/constants/FLAG.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb/constants copying src/MySQLdb/constants/CLIENT.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb/constants copying src/MySQLdb/constants/__init__.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb/constants copying src/MySQLdb/constants/ER.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb/constants copying src/MySQLdb/constants/CR.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb/constants copying src/MySQLdb/constants/FIELD_TYPE.py -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb/constants running egg_info writing src/mysqlclient.egg-info/PKG-INFO writing dependency_links to src/mysqlclient.egg-info/dependency_links.txt writing top-level names to src/mysqlclient.egg-info/top_level.txt 
reading manifest file 'src/mysqlclient.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' adding license file 'LICENSE' writing manifest file 'src/mysqlclient.egg-info/SOURCES.txt' copying src/MySQLdb/_mysql.c -&gt; build/lib.macosx-13-x86_64-cpython-38/MySQLdb running build_ext building 'MySQLdb._mysql' extension creating build/temp.macosx-13-x86_64-cpython-38 creating build/temp.macosx-13-x86_64-cpython-38/src creating build/temp.macosx-13-x86_64-cpython-38/src/MySQLdb clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX13.sdk &quot;-Dversion_info=(2, 2, 0, 'final', 0)&quot; -D__version__=2.2.0 -I/Users/aamoskalenko/0_root_folder/02_dev/13_feast_bitbucket/feast/test_venv38/include -I/usr/local/opt/python@3.8/Frameworks/Python.framework/Versions/3.8/include/python3.8 -c src/MySQLdb/_mysql.c -o build/temp.macosx-13-x86_64-cpython-38/src/MySQLdb/_mysql.o -I/usr/local/Cellar/mysql/8.3.0/include/mysql -std=c99 src/MySQLdb/_mysql.c:524:9: error: call to undeclared function 'mysql_ssl_set'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] mysql_ssl_set(&amp;(self-&gt;connection), key, cert, ca, capath, cipher); ^ src/MySQLdb/_mysql.c:524:9: note: did you mean 'mysql_close'? /usr/local/Cellar/mysql/8.3.0/include/mysql/mysql.h:797:14: note: 'mysql_close' declared here void STDCALL mysql_close(MYSQL *sock); ^ src/MySQLdb/_mysql.c:1792:9: error: call to undeclared function 'mysql_kill'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] r = mysql_kill(&amp;(self-&gt;connection), pid); ^ src/MySQLdb/_mysql.c:1792:9: note: did you mean 'mysql_ping'? 
/usr/local/Cellar/mysql/8.3.0/include/mysql/mysql.h:525:13: note: 'mysql_ping' declared here int STDCALL mysql_ping(MYSQL *mysql); ^ src/MySQLdb/_mysql.c:2001:9: error: call to undeclared function 'mysql_shutdown'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] r = mysql_shutdown(&amp;(self-&gt;connection), SHUTDOWN_DEFAULT); ^ 3 errors generated. error: command '/usr/bin/clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for mysqlclient Building wheel for thriftpy2 (pyproject.toml) ... done Created wheel for thriftpy2: filename=thriftpy2-0.4.17-cp38-cp38-macosx_13_0_x86_64.whl size=662548 sha256=951292c9a91c7f1fea19732e135c45aad6129a4cdd99a0c867dadca3bd88d752 Stored in directory: /Users/aamoskalenko/Library/Caches/pip/wheels/b2/d2/6d/0a47d1a0250236680e0f12b3f3c5093ffc3da4871fe1fa2804 Successfully built thriftpy2 Failed to build mysqlclient ERROR: Could not build wheels for mysqlclient, which is required to install pyproject.toml-based projects </code></pre>
<python><mysql><pip><feast>
2024-01-31 10:36:32
0
1,708
mascai
77,912,467
7,347,925
How to clip tile imagery by patch?
<p>I'm trying to clip the satellite background image by circle:</p> <pre><code>import io from PIL import Image import cartopy.crs as ccrs import matplotlib.patches as patches import matplotlib.pyplot as plt import cartopy.io.img_tiles as cimgt from urllib.request import urlopen, Request def image_spoof(self, tile): '''this function reformats web requests from OSM for cartopy''' url = self._image_url(tile) # get the url of the street map API req = Request(url) # start request req.add_header('User-agent','Anaconda 3') # add user agent to request fh = urlopen(req) im_data = io.BytesIO(fh.read()) # get image fh.close() # close url img = Image.open(im_data) # open image with PIL img = img.convert(self.desired_tile_form) # set image format return img, self.tileextent(tile), 'lower' # reformat for cartopy proj = ccrs.PlateCarree() ax = plt.axes(projection=proj) # set extent lon_min = -98.853627 lon_max = -98.752037 lat_min = 19.274685 lat_max = 19.376275 ax.set_extent((lon_min, lon_max, lat_min, lat_max), crs=proj) cimgt.QuadtreeTiles.get_image = image_spoof # reformat web request for street map spoofing osm_img = cimgt.QuadtreeTiles() # spoofed, downloaded street map osm_img = ax.add_image(osm_img, 16) # add OSM with zoom specification patch = patches.Circle((-98.802832, 19.32548), radius=0.03, transform=ax.transData) ax.add_patch(patch) # osm_img.set_clip_path(patch) </code></pre> <p>However, I see <code>set_clip_path</code> does not work for this case. Is it possible to clip <code>osm_img</code> by the circle patch? Or any other method to plot a circled satellite background?</p> <p><a href="https://i.sstatic.net/LUJ64.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LUJ64.png" alt="example" /></a></p>
<python><matplotlib><patch><cartopy><clip>
2024-01-31 10:13:25
1
1,039
zxdawn
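On the cartopy clipping question above: in plain matplotlib, clipping an image artist by a patch does work through `set_clip_path`; whether cartopy's slippy-tile artist honors it depends on the cartopy version, so this sketch only demonstrates the matplotlib mechanism on an `imshow` image (one workaround in cartopy is to iterate `ax.get_images()` after drawing and clip each artist the same way):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches

fig, ax = plt.subplots()
img = ax.imshow(np.random.rand(50, 50), extent=(0, 1, 0, 1), origin="lower")

# A circle in data coordinates; pixels outside it are not drawn.
circle = patches.Circle((0.5, 0.5), radius=0.3, transform=ax.transData,
                        facecolor="none", edgecolor="k")
ax.add_patch(circle)
img.set_clip_path(circle)
fig.canvas.draw()
```

The key detail is that the clip patch must share the artist's coordinate system (`transform=ax.transData` here), otherwise the clip region lands in the wrong place.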
77,912,307
11,963,167
Managing pd.Series and pd.DatetimeIndex in a function simultaneously
<p>I have a function meant to examine time-series data, usually obtained from a pandas dataframe. However, there are usually two ways to store this kind of data: the time-series can be stored in a column (in that case, it is a <code>pd.Series</code>, kind of), or it serves as the dataframe's index. In that latter case, it's of type <code>pd.DatetimeIndex</code>.</p> <p>I'd like my function to be usable for both cases. The best I can think of is a transformation from index to series in the function itself, but that seems ugly: it requires checking the nature of the object and turning the index into a series and back (which I suppose is costly in terms of performance).</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from typing import Union def function(time_series: Union[pd.Series, pd.DatetimeIndex]): &quot;&quot;&quot; Do something :param time_series: my time-series, retrieved from a dataframe, and of type pd.Series, or pd.DatetimeIndex &quot;&quot;&quot; series = time_series.copy() if isinstance(time_series, pd.DatetimeIndex): series = time_series.to_series() # do something on series if isinstance(time_series, pd.DatetimeIndex): return pd.DatetimeIndex(series) else: return series </code></pre> <p>I think it's something the pandas library has to handle somehow. Does anyone have a hint?</p>
<python><pandas><dataframe><time-series>
2024-01-31 09:50:53
0
496
Clej
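One way to avoid repeating the type check from the question above in every function is to factor it into a decorator that normalizes a `DatetimeIndex` to a `Series` on the way in and restores the original type on the way out. The names (`accepts_series_or_index`, `shift_one_day`) are illustrative:

```python
from functools import wraps

import pandas as pd

def accepts_series_or_index(func):
    # Normalize a DatetimeIndex to a Series going in; restore the type coming out.
    @wraps(func)
    def wrapper(ts, *args, **kwargs):
        is_index = isinstance(ts, pd.DatetimeIndex)
        series = pd.Series(ts) if is_index else ts
        result = func(series, *args, **kwargs)
        return pd.DatetimeIndex(result) if is_index else result
    return wrapper

@accepts_series_or_index
def shift_one_day(ts):
    # The body only ever sees a Series.
    return ts + pd.Timedelta(days=1)
```

The per-call conversion cost is a single array wrap, which is usually negligible next to whatever the function body does; the win is that each time-series function is written once, against one type.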
77,912,302
4,609,258
PyCloudinary "Invalid value for parameter kind" while trying to upload to Cloudinary
<p>I'm trying to use <a href="https://cloudinary.com/documentation/python_quickstart" rel="nofollow noreferrer">the quickstart guide</a> with our test system.</p> <pre><code> cloudinary.uploader.upload(&quot;https://cloudinary-devs.github.io/cld-docs-assets/assets/images/butterfly.jpeg&quot;, public_id=&quot;quickstart_butterfly&quot;, unique_filename = False, overwrite=True) </code></pre> <p>First I received <code>Field 'country' is mandatory and cannot be left empty&quot;</code>, so I learned that I need to <a href="https://cloudinary.com/documentation/image_upload_api_reference" rel="nofollow noreferrer">add meta data</a> to the call:</p> <pre><code> cloudinary.uploader.upload(..., metadata=&quot;country=de|&quot;) </code></pre> <p>Another gotcha here is that you will need to use the <code>external_id</code> values for the metadata fields and not the ones displayed in the UI of Cloudinary. Also be aware that multi-select fields need to be handed over as lists:</p> <pre><code> cloudinary.uploader.upload(..., metadata=&quot;country=de|language=[\&quot;without_language\&quot;]&quot;) </code></pre> <p>But now above error message keeps being thrown and I have no clue what <code>kind</code> is supposed to be.</p>
<python><cloudinary>
2024-01-31 09:50:14
1
866
Florian Straub
77,912,077
10,721,627
How to connect to Azure SQL Database in Python using pyodbc?
<p>I would like to connect to an Azure SQL database via <code>pyodbc</code> Python package. I obtained the connection string by navigating to the Azure SQL database resource and going to the <strong>Connection strings</strong> menu under the <strong>Settings</strong> group. After, I selected the <strong>ODBC</strong> tab and copied the connection string that contains the username and password authentication.</p> <p>The following code tries to connect to the database:</p> <pre class="lang-py prettyprint-override"><code>import pyodbc driver = &quot;{ODBC Driver 18 for SQL Server}&quot; server = &quot;&lt;server&gt;&quot; database = &quot;&lt;database&gt;&quot; username = &quot;&lt;user_name&gt;&quot; password = &quot;&lt;password&gt;&quot; conn_str = str.format( &quot;Driver={0};Server={1},1433;Database={2};Uid={3};Pwd={4}&quot;, driver, server, database, username, password, ) connection = pyodbc.connect(conn_str) connection.close() </code></pre> <p>However, I got the following error:</p> <blockquote> <p>pyodbc.InterfaceError: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)')</p> </blockquote>
<python><sql-server><azure><pyodbc>
2024-01-31 09:15:36
1
2,482
Péter Szilvási
77,912,040
2,688,929
Using local library in Ansible filter_plugin
<p>I am creating a bunch of filter plugins for Ansible (2.9/2.13) and I would like to have a tools library that can be used by these filter plugins.</p> <p>I have the directory <code>filter_plugins</code> under my playbook directory and in there, I have a bunch of python scripts that provide Ansible with various filters. This works fine.</p> <p>Now, some of these python scripts have the same functions. Naturally, I would like to move these shared function to a separate library file. So here is what I tried:</p> <ul> <li>Option 1: have the library file (tools.py) in the directory <code>filter_plugins</code> <ul> <li>This does not work well. Python can't find the module tools and Ansible gives me this error:</li> </ul> </li> </ul> <pre><code>[WARNING]: Skipping plugin (/&lt;REDACTED&gt;/playbook_dir/filter_plugins/tools.py) as it seems to be invalid: module 'ansible.plugins.filter.966997458576792353_tools' has no attribute 'FilterModule' </code></pre> <ul> <li>Option 2: Create a library folder and put the tools.py script there (<code>import library.tools</code> or <code>from library.tools import shared_function</code>) <ul> <li>This doesn't work either (with or without an empty <code>__init__.py</code>). 
Ansible gives me this error:</li> </ul> </li> </ul> <pre><code>[WARNING]: Skipping plugin (/&lt;REDACTED&gt;/playbook_dir/filter_plugins/check.py) as it seems to be invalid: No module named 'tools' </code></pre> <p>All the filter scripts look basically the same (some have more filters, some have less):</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/python class FilterModule(object): def filters(self): return { 'filter1': self.filter1, 'filter2': self.filter2 } def shared_function(data_to_process): processed_data = &lt;process_data&gt; return processed_data def filter1(self, in_data): &quot;&quot;&quot;some code to process in_data to out_data&quot;&quot;&quot; process_data = &lt;do_some_stuff&gt; out_data = shared_function(process_data) return out_data def filter2(self, in_data): &quot;&quot;&quot;some other code to process in_data to out_data&quot;&quot;&quot; process_data = &lt;do_some_different_stuff&gt; out_data = shared_function(process_data) return out_data </code></pre> <p>As <code>shared_function</code> is used in more than one filter script, I want to move it out of there to the <code>tools.py</code> library</p> <p>Can anybody help me find the correct way to include local library files in filter plugins for Ansible?</p> <p>Thanx for your help.</p>
<python><ansible>
2024-01-31 09:07:45
2
427
Johan G
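On the Ansible question above: Ansible imports each filter file under a mangled module name (visible in the warning as `ansible.plugins.filter.966997458576792353_tools`), so sibling imports inside `filter_plugins/` don't resolve. A common workaround is to keep the shared code in its own directory next to the playbook and extend `sys.path` at the top of each filter file. The snippet below simulates that layout with temp files; the directory name `plugin_utils` is an arbitrary choice, not an Ansible convention:

```python
import os
import sys
import tempfile

# Simulate a playbook directory with a shared library next to filter_plugins/.
root = tempfile.mkdtemp()
utils_dir = os.path.join(root, "plugin_utils")  # arbitrary directory name
os.makedirs(utils_dir)
with open(os.path.join(utils_dir, "tools.py"), "w") as fh:
    fh.write("def shared_function(data):\n    return [d.upper() for d in data]\n")

# This is the pattern a filter_plugins/*.py file would use at import time:
sys.path.insert(0, utils_dir)
from tools import shared_function

result = shared_function(["a", "b"])
```

In a real filter plugin, `utils_dir` would be derived from `os.path.dirname(__file__)` so the path works regardless of where the playbook is invoked from. Keeping `tools.py` outside `filter_plugins/` also stops Ansible from trying (and failing) to load it as a filter plugin.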
77,912,012
265,521
"Collection if" in Python
<p>Dart has a neat feature called <a href="https://dart.dev/language/collections#control-flow-operators" rel="nofollow noreferrer">collection if</a>. It lets you conditionally include an element in a list literal. For example:</p> <pre><code>var nav = ['Home', 'Furniture', 'Plants', if (promoActive) 'Outlet']; </code></pre> <p>Is there an equivalent way to do this in Python? The best I can come up with is:</p> <pre><code>nav = ['Home', 'Furniture', 'Plants', *(['Outlet'] if promoActive else [])]; </code></pre> <p>But that sucks. Is there something clearer?</p> <p>Edit: To be clear, I want it to work in the situations where you can use an array literal. <code>.append()</code> is going to be horrible for situations like this:</p> <pre><code>a = { &quot;foo&quot;: { &quot;bar&quot;: [ [&quot;a&quot;], [&quot;b&quot;, &quot;c&quot;, *([&quot;d&quot;] if foo else []), &quot;e&quot;, &quot;f&quot;], ... </code></pre>
<python>
2024-01-31 09:02:25
2
98,971
Timmmm
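Python has no direct equivalent of Dart's collection-if, but beyond the unpacking trick in the question above there are a couple of patterns that some find more readable inside literals. All three below build the same list:

```python
promo_active = True

# Unpacking a conditional sub-list (the pattern from the question):
nav1 = ["Home", "Furniture", "Plants", *(["Outlet"] if promo_active else [])]

# Multiplying a one-element list by a bool (bool is an int subclass, so
# True -> one copy, False -> empty):
nav2 = ["Home", "Furniture", "Plants", *(["Outlet"] * promo_active)]

# A comprehension over (item, condition) pairs, which scales to many
# conditional entries without nesting:
nav3 = [item for item, keep in [
    ("Home", True),
    ("Furniture", True),
    ("Plants", True),
    ("Outlet", promo_active),
] if keep]
```

The comprehension form is arguably closest in spirit to collection-if for deeply nested literals, since each entry carries its own condition inline.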
77,911,884
987,074
Python Azure Function works in local emulator but no HTTP triggers found when deploying
<p>I have 5 Azure Functions I'm developing, and 4 of them deploy and work fine, just having this trouble with one of them.</p> <p>I have the Azure Functions Core Tools installed and Azurite running as a VSCode plugin, and am able to run my function with no errors by calling <code>func start --python --verbose</code> from CLI.</p> <p>I can call the HTTP trigger from the local emulator and it returns a correct response.</p> <p>When I deploy to Azure Functions, the logs end in <code>No HTTP triggers found</code> with no errors shown throughout the deploy process.</p> <p>I've been able to download the web package via SCM and confirmed that the files are the same in there as they are in my development environment.</p> <p>What steps can I take to get more verbose information out of the Azure Functions environment to help with identifying the error which is causing the HTTP trigger to fail to deploy to production?</p>
<python><azure-functions>
2024-01-31 08:40:46
1
3,544
bdx
77,911,883
1,860,805
How get local IANA zone name in python 3
<p>In Python 3, how do I get the local zone name (e.g. Australia/Sydney)?</p> <p>I tried this:</p> <pre><code>import datetime local_tzname = datetime.datetime.now().astimezone().tzinfo print(local_tzname) </code></pre> <p>This outputs:</p> <blockquote> <p>AEDT</p> </blockquote> <p>But I want it to output the local time zone name, like this:</p> <blockquote> <p>Australia/Sydney</p> </blockquote> <p>I can execute <em>timedatectl</em> in a subprocess and read the output, but that is not what I want. The script I am writing needs to be executed on several RedHat (7/8/9) and Ubuntu servers where I don't have control over the available modules. Therefore it must only use the modules available when Python 3 gets installed.</p> <p>My requirements are:</p> <ol> <li>use only natively available Python 3 modules</li> <li>not execute external commands and read their output (e.g. via subprocess)</li> <li>not require any special module to be installed, e.g. tzlocal</li> </ol> <p>zoneinfo is a good module for working with the IANA format, but I couldn't use it to get the system time zone.</p>
<python><python-3.x><linux><timezone>
2024-01-31 08:40:43
1
523
Ramanan T
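For the time-zone question above, a stdlib-only approach on Linux is to read where `/etc/localtime` points, since on most distributions it is a symlink into the zoneinfo tree, with `/etc/timezone` as a Debian-style fallback. This is a sketch (`local_iana_zone` is an illustrative name); it can return `None` on systems where neither source is available:

```python
import os

def local_iana_zone():
    # Prefer an explicit TZ environment variable if one is set.
    tz = os.environ.get("TZ")
    if tz:
        return tz.lstrip(":")
    # On most Linux distributions /etc/localtime is a symlink into the
    # zoneinfo tree, e.g. /usr/share/zoneinfo/Australia/Sydney.
    try:
        target = os.readlink("/etc/localtime")
    except OSError:
        target = None
    if target and "zoneinfo/" in target:
        return target.split("zoneinfo/", 1)[1]
    # Debian-style fallback: the zone name is written in /etc/timezone.
    try:
        with open("/etc/timezone") as fh:
            return fh.read().strip() or None
    except OSError:
        return None

zone = local_iana_zone()
```

On hosts where `/etc/localtime` is a regular file rather than a symlink (some RedHat setups), only the `/etc/timezone` or `TZ` paths can succeed, so callers should handle the `None` case.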
77,911,745
2,784,032
Install Python on Self Hosted Windows runner for Github Action
<p>I am using Windows Server 2019 as a self-hosted runner for my GitHub Actions. I need to install Python. Shall I install Python as a one-time activity, or through the GitHub Actions workflow every time the workflow is executed?</p> <p>I came across <a href="https://github.com/actions/setup-python/blob/main/docs/advanced-usage.md#windows" rel="nofollow noreferrer">this</a>, but it is confusing. Do I need to install it using the MSI installer? Do I also need to update PATH?</p>
<python><github-actions-self-hosted-runners>
2024-01-31 08:13:58
1
856
Maxx
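Either approach from the question above can work: installing Python once on the runner keeps workflows simple, while `actions/setup-python` lets each workflow pin its own version (per the linked advanced-usage notes, on a self-hosted Windows runner that action expects the runner's tool-cache directory to be set up, which the MSI installer alone does not do). A minimal workflow fragment using the action; the version and runner label here are placeholders:

```yaml
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: python --version
```

With a one-time manual install instead, the `setup-python` step is dropped and the workflow relies on Python already being on the runner's PATH.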
77,911,732
3,099,733
Final is not working with type alias in Python
<p>Given the following code:</p> <pre class="lang-py prettyprint-override"><code>from typing import Final ImmutableStr = Final[str] class Test: s1: ImmutableStr = 'abc' s2: Final[str] = 'abc' t = Test() t.s1 = 'def' # pass type check t.s2 = 'def' # error detected </code></pre> <p>VS Code only reports an error for the second case.</p> <p><a href="https://i.sstatic.net/7sSH7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7sSH7.png" alt="vscode-example" /></a></p> <p>The warning for s2 is:</p> <pre><code>Class constant name &quot;s2&quot; doesn't conform to UPPER_CASE naming stylePylintC0103:invalid-name (constant) s2: Literal['abc'] </code></pre> <p>The error message for s2 is as follows. I would expect s1 to report the same error.</p> <pre><code>Cannot assign member &quot;s2&quot; for type &quot;Test&quot; Member &quot;s2&quot; cannot be assigned through a class instance because it is a ClassVar &quot;s2&quot; is declared as Final and cannot be reassigned Member &quot;__set__&quot; is unknownPylancereportGeneralTypeIssues (constant) s2: Literal['def'] </code></pre> <p>Is this a problem with VS Code, or does <code>Final</code> not work with type aliases? I am using <code>python 3.9.12</code> with Pylance.</p> <p>And is there anything I can do to create an immutable type so that an IDE like VS Code will report an error when someone tries to reassign a value to it?</p>
<python><python-typing><pyright>
2024-01-31 08:12:11
0
1,959
link89
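On the `Final` question above: PEP 591 only permits `Final` directly in an assignment or variable annotation, so type checkers are within spec to ignore an alias like `ImmutableStr = Final[str]` — this is expected behavior, not a VS Code bug. For immutability that is enforced at runtime (not just flagged by the type checker), one option is a read-only property:

```python
class Test:
    def __init__(self) -> None:
        self._s1 = "abc"

    @property
    def s1(self) -> str:
        # No setter is defined, so assigning t.s1 raises AttributeError at runtime,
        # and type checkers flag the assignment as well.
        return self._s1

t = Test()
```

This catches reassignment both statically and at runtime, at the cost of a little boilerplate per attribute; for whole-object immutability, a frozen dataclass is the usual alternative.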
77,911,535
10,053,485
SQLAlchemy running on Kubernetes Pod, broken behaviour compared to localhost: Async issue / Missing Greenlet
<p>When running an API (using FastAPI) that I'm building on my localhost, it works as intended; when deploying it to a Kubernetes pod, it does not. It is unclear to me why, though the error appears to suggest an async issue. This seems unusual to me, as I would then expect it to also be an issue on localhost.</p> <p>Having looked into the code and the traceback, the issue appears to be triggered by the following line of my code:</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import MetaData from .db_connection import engine metadata_obj = MetaData() metadata_obj.reflect(bind=engine) # &lt;---- this is the problematic line. my_table = metadata_obj[&quot;my_table&quot;] </code></pre> <p>I am aware that table reflection as above is not ideal, and I am planning to change this. However, due to dependencies this is not possible at this time.</p> <p>The engine is built as follows:</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine database_url = &quot;mysql+mysqlconnector://root:@localhost:3306/my_db&quot;  # local database_url = &quot;mysql+aiomysql://project_name:pass@db_link:3306/db_name&quot;  # pod engine = create_engine(database_url) </code></pre> <p>As far as I can see the environments should be equivalent: same Python version, same libraries.</p> <p>Thus my question: Why does this cause an issue in the deployment pod, but not on my localhost? 
And what is the best approach to fixing this?</p> <p>Full traceback, some names obfuscated.</p> <pre><code> File &quot;&lt;frozen runpy&gt;&quot;, line 189, in _run_module_as_main File &quot;&lt;frozen runpy&gt;&quot;, line 148, in _get_module_details File &quot;&lt;frozen runpy&gt;&quot;, line 112, in _get_module_details File &quot;/app/folder_name/server/__init__.py&quot;, line 1, in &lt;module&gt; from .app import app File &quot;/app/folder_name/server/app.py&quot;, line 13, in &lt;module&gt; from .routes.device import router as device_router File &quot;/app/folder_name/server/routes/device.py&quot;, line 9, in &lt;module&gt; from folder_name.database.db_reader import read_credentials File &quot;/app/folder_name/database/db_reader.py&quot;, line 5, in &lt;module&gt; from .db_tables import my_table File &quot;/app/folder_name/database/db_tables.py&quot;, line 12, in &lt;module&gt; metadata_obj.reflect(bind=engine) File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/sql/schema.py&quot;, line 5725, in reflect with inspection.inspect(bind)._inspection_context() as insp: ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/inspection.py&quot;, line 145, in inspect ret = reg(subject) ^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/reflection.py&quot;, line 303, in _engine_insp return Inspector._construct(Inspector._init_engine, bind) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/reflection.py&quot;, line 236, in _construct init(self, bind) File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/reflection.py&quot;, line 247, in _init_engine engine.connect().close() ^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py&quot;, line 3269, in connect return self._connection_cls(self) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py&quot;, line 145, in __init__ self._dbapi_connection = engine.raw_connection() ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py&quot;, line 3293, in raw_connection return self.pool.connect() ^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/pool/base.py&quot;, line 452, in connect return _ConnectionFairy._checkout(self) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/pool/base.py&quot;, line 1269, in _checkout fairy = _ConnectionRecord.checkout(pool) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/pool/base.py&quot;, line 716, in checkout rec = pool._do_get() ^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/pool/impl.py&quot;, line 169, in _do_get with util.safe_reraise(): File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 146, in __exit__ raise exc_value.with_traceback(exc_tb) File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/pool/impl.py&quot;, line 167, in _do_get return self._create_connection() ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/pool/base.py&quot;, line 393, in _create_connection return _ConnectionRecord(self) ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/pool/base.py&quot;, line 678, in __init__ self.__connect() File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/pool/base.py&quot;, line 902, in __connect with util.safe_reraise(): File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 146, in __exit__ raise exc_value.with_traceback(exc_tb) File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/pool/base.py&quot;, line 898, in __connect self.dbapi_connection = connection = 
pool._invoke_creator(self) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/create.py&quot;, line 645, in connect return dialect.connect(*cargs, **cparams) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/default.py&quot;, line 616, in connect return self.loaded_dbapi.connect(*cargs, **cparams) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/aiomysql.py&quot;, line 264, in connect await_only(creator_fn(*arg, **kw)), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/sqlalchemy/util/_concurrency_py3k.py&quot;, line 121, in await_only raise exc.MissingGreenlet( sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_only() here. Was IO attempted in an unexpected place? (Background on this error at: https://sqlalche.me/e/20/xd2s) </code></pre>
<python><kubernetes><sqlalchemy>
2024-01-31 07:36:30
0
408
Floriancitt
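A note on the likely cause: the traceback bottoms out in SQLAlchemy's aiomysql dialect (`await_only(...)` raising `MissingGreenlet`), and the one visible difference between the environments is the URL, where the pod uses the async driver `aiomysql` while localhost uses the sync driver `mysqlconnector`. A synchronous `create_engine` cannot drive an async DBAPI, so `metadata_obj.reflect(bind=engine)` fails only in the pod. A minimal stdlib sketch of a guard for this (the helper name and driver list are illustrative, not SQLAlchemy API):

```python
# Hypothetical helper: detect URLs that need SQLAlchemy's async engine.
ASYNC_DRIVERS = {"aiomysql", "asyncpg", "aiosqlite"}  # common async DBAPIs

def needs_async_engine(database_url: str) -> bool:
    scheme = database_url.split("://", 1)[0]   # e.g. "mysql+aiomysql"
    driver = scheme.split("+", 1)[-1]          # e.g. "aiomysql"
    return driver in ASYNC_DRIVERS

# The pod URL uses an async driver, the local one does not:
assert needs_async_engine("mysql+aiomysql://project_name:pass@db_link:3306/db_name")
assert not needs_async_engine("mysql+mysqlconnector://root:@localhost:3306/my_db")

# With an async driver, reflection has to go through the async engine, e.g.:
#   engine = create_async_engine(database_url)
#   async with engine.begin() as conn:
#       await conn.run_sync(metadata_obj.reflect)
```

So the fix is either to point the pod at a sync driver, or to reflect through `create_async_engine` with `run_sync` as sketched in the comments.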
77,911,530
1,153,768
Identifying what triggers a join timeout multiprocessing
<p>I am trying to work out why my multiprocessing hangs. It is not regular: sometimes it hangs after 10 minutes, sometimes after hours.</p> <p>This is a reduced version of the code. I start the jobs and then join them:</p> <pre><code>jobs = [] for item in items_to_do: p = multiprocessing.Process(target=index_a_doc, args=(item, has_errors)) jobs.append(p) p.start() for job in jobs: job.join(timeout=20) if job.is_alive(): print(&quot;TERMINATED&quot;) else: print(&quot;Finished OK. No Timeout&quot;) </code></pre> <p>I added the join timeout because I was aware that the code hung sometimes; if a job hangs for 20 seconds it is treated as terminated. My question is: can I identify where in the code a job was when it timed out (where it was hanging), or stop the troublesome job and then carry on?</p> <p>All the jobs are independent of each other, and don't actually need to finish.</p> <p>Thanks</p> <p>Grant</p>
<python><multiprocessing><timeout>
2024-01-31 07:35:37
1
929
mozman2
77,911,424
17,580,381
Interesting behaviour of contextlib.redirect_stdout
<p>Let's start with a contrived piece of code. Explanation will follow:</p> <pre><code>from timeit import timeit from os import devnull from contextlib import redirect_stdout def process(): for i in range(1_000): print(f&quot;Message {i}&quot;) N = 10_000 if __name__ == &quot;__main__&quot;: with redirect_stdout(None): t = timeit(process, number=N) print(t) with open(devnull, &quot;a&quot;) as _null: with redirect_stdout(_null): t = timeit(process, number=N) print(t) </code></pre> <p><strong>Output:</strong></p> <pre><code>1.0380362079886254 2.2988220420083962 </code></pre> <p>So, as you can see, there's a significant difference in the timings. This is due to passing None to redirect_stdout() rather than a valid file-like object.</p> <p>The documentation for redirect_stdout() says nothing about this &quot;trick&quot; - I just stumbled across it whilst researching something else. Under the covers, what happens is that None is assigned to sys.stdout.</p> <p>I have 2 questions:</p> <ol> <li>Is there documentation anywhere that explains this (very useful) behaviour? - i.e., when sys.stdout is assigned None, stdout output is suppressed.</li> <li>Is it likely to be future-proof?</li> </ol> <p><strong>Platform:</strong></p> <pre><code>macOS 14.3 on M2 Python 3.12.1 </code></pre>
<python>
2024-01-31 07:13:15
1
28,997
Ramrab
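The behaviour is easy to confirm: `redirect_stdout` does no type checking, so `redirect_stdout(None)` simply assigns `None` to `sys.stdout`, and CPython's built-in `print` silently returns when `sys.stdout` is `None` (a state the `sys` docs allow for consoleless runs such as `pythonw`). That short-circuit, which skips formatting-to-stream and write calls entirely, is why the `None` variant beats writing to `os.devnull`. A small demonstration:

```python
import io
import sys
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    print("visible")
    with redirect_stdout(None):      # sys.stdout is literally None here
        assert sys.stdout is None
        print("suppressed")          # built-in print is a no-op now
    print("visible again")

assert sys.stdout is not None        # restored on exit from the blocks
assert buf.getvalue() == "visible\nvisible again\n"
```

Whether this counts as a stable contract is less clear: the None-stdout state itself is documented, but the no-op behaviour of `print` is an implementation detail of CPython, albeit a very long-standing one.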
77,911,210
12,789,602
In odoo 16, ValueError: message_post does not support subtype parameter anymore. Please give a valid subtype_id or subtype_xmlid value instead
<p>I call the <code>message_post</code> function from a wizard TransientModel, using the subtype <code>hr_recruitment.mt_job_applicant_hired</code>, and I have added the dependent module in <code>__manifest__.py</code>, but it shows this error:</p> <pre><code>ValueError: message_post does not support subtype parameter anymore. Please give a valid subtype_id or subtype_xmlid value instead. </code></pre> <pre class="lang-py prettyprint-override"><code>applicant.job_id.message_post( body=_( 'New Employee %s Hired') % applicant.partner_name if applicant.partner_name else applicant.name, subtype=&quot;hr_recruitment.mt_job_applicant_hired&quot;) </code></pre> <p>How can I resolve this in Odoo 16?</p>
<python><odoo><odoo-16>
2024-01-31 06:30:02
1
552
Bappi Saha
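Since mail's `message_post()` dropped the `subtype` keyword (newer Odoo expects `subtype_id` or `subtype_xmlid`), the fix is simply renaming the keyword in the call. A hypothetical stub that mimics the signature change (this is not Odoo's implementation, only the calling convention):

```python
# Stub mimicking the current message_post() calling convention.
def message_post(body=None, subtype_id=None, subtype_xmlid=None, **kwargs):
    if "subtype" in kwargs:
        raise ValueError(
            "message_post does not support subtype parameter anymore. "
            "Please give a valid subtype_id or subtype_xmlid value instead."
        )
    return {"body": body, "subtype_xmlid": subtype_xmlid}

# Old keyword -> the ValueError from the question:
try:
    message_post(body="Hired", subtype="hr_recruitment.mt_job_applicant_hired")
    raised = False
except ValueError:
    raised = True
assert raised

# Renamed keyword -> accepted:
msg = message_post(body="Hired",
                   subtype_xmlid="hr_recruitment.mt_job_applicant_hired")
assert msg["subtype_xmlid"] == "hr_recruitment.mt_job_applicant_hired"
```

In the wizard, that means changing `subtype=` to `subtype_xmlid=` in the `applicant.job_id.message_post(...)` call.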
77,911,173
4,238,408
Discrepancy between OpenCV Python and C++ implementations of `imreconstruct`
<p>I'm implementing MATLAB's <code>imreconstruct</code> in both Python and C++. However, for my test case, the Python implementation matches the MATLAB's output, the C++ one doesn't.</p> <p>This is the Python implementation:</p> <pre class="lang-py prettyprint-override"><code>def imReconstruct(marker: np.array, mask: np.array) -&gt; np.array: &quot;&quot;&quot; Naive implementation of MatLAB's imReconstruct function works when `mask` consists of mostly background (global minumum) will be slow otherwise &quot;&quot;&quot; kernel = cv.getStructuringElement(cv.MORPH_RECT, (3, 3)) # calculate the extreme values of the mask image min_val, max_val, _, _ = cv.minMaxLoc(mask) # clip the marker by global extrema of mask _, marker = cv.threshold(marker, min_val, max_val, cv.THRESH_TRUNC | cv.THRESH_BINARY_INV) while True: expanded = cv.dilate(marker, kernel) expanded = np.minimum(expanded, mask) # return `expanded` when the difference is small if np.max(np.abs(expanded - marker)) &lt; 1e-5: return expanded # set expanded to marker and repeat marker = expanded D = np.array( [[4.2426405, 3.6055512, 3.1622777, 3. , 3. ], [3.6055512, 2.828427 , 2.236068 , 2. , 2. ], [3.1622777, 2.236068 , 1.4142135, 1. , 1. ], [3. , 2. , 1. , 0. , 0. ], [3. , 2. , 1. , 0. , 0. ]], dtype=float32) imReconstruct(D-.85,D) # output # array([[3.3926406, 3.3926406, 3.1622777, 3. , 3. ], # [3.3926406, 2.828427 , 2.236068 , 2. , 2. ], # [3.1622777, 2.236068 , 1.4142135, 1. , 1. ], # [3. , 2. , 1. , 0. , 0. ], # [3. , 2. , 1. , 0. , 0. 
]], dtype=float32) </code></pre> <p>And C++:</p> <pre class="lang-cpp prettyprint-override"><code>Mat imReconstruct(Mat marker, Mat mask){ /************** * naive implementation of MatLAB's imReconstruct * works when `mask` consist of mostly background (global minimum) * will be slow otherwise *************/ Mat kernel = getStructuringElement(MORPH_RECT, Size(3,3)); // calculate the min and max values from mask double minMask, maxMask; minMaxLoc(mask, &amp;minMask, &amp;maxMask); // clip the marker by global extrema of mask threshold(marker, marker, minMask, maxMask, THRESH_TRUNC|THRESH_BINARY_INV); Mat expanded; // keep filling the holes with `dilate` // until there are no more changes while (1){ dilate(marker, expanded, kernel); expanded = min(expanded, mask); // compute the max difference minMaxLoc(expanded-marker, &amp;minMask, &amp;maxMask); // return image when changes are small if (maxMask&lt;1e-5) return expanded; // set expanded as marker and continue looping marker = expanded; } } // test case cv::Mat D = (cv::Mat_&lt;float&gt; (5,5) &lt;&lt; 4.2426405, 3.6055512, 3.1622777, 3. , 3. , 3.6055512, 2.828427 , 2.236068 , 2. , 2. , 3.1622777, 2.236068 , 1.4142135, 1. , 1. , 3. , 2. , 1. , 0. , 0. , 3. , 2. , 1. , 0. , 0. ); std::cout &lt;&lt; imReconstruct(D - .85, D) &lt;&lt; std::endl; // output // [3.3926406, 3.3926406, 3.1622777, 2.7555513, 2.3122778; // 3.3926406, 2.8284271, 2.236068, 2, 2; // 3.1622777, 2.236068, 1.4142135, 1, 1; // 2.7555513, 2, 1, 0, 0; // 2.3122778, 2, 1, 0, 0] </code></pre> <p>What is the cause of this discrepancy? I may have overlooked something simple, but I already spent few hours in vain without any positive results.</p>
<python><c++><opencv>
2024-01-31 06:20:07
1
151,120
Quang Hoang
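One behavioural difference worth checking first (an assumption, not a confirmed diagnosis): `cv::Mat` assignment in C++ copies only the header, so after `marker = expanded;` both names share one pixel buffer, and a later in-place write through either name affects both. The Python loop is immune to this because `np.minimum` allocates a fresh array each iteration before `marker = expanded` rebinds. The same aliasing trap is easy to reproduce in NumPy terms:

```python
import numpy as np

marker = np.zeros(3, dtype=np.float32)
expanded = marker                 # like cv::Mat operator=: one shared buffer
expanded += 1                     # in-place write mutates marker too
assert marker.tolist() == [1.0, 1.0, 1.0]

# The Python imReconstruct avoids this: np.minimum returns a new array,
# so rebinding marker never aliases the buffer the next dilate writes into.
expanded = np.minimum(expanded, np.full(3, 0.5, dtype=np.float32))
expanded += 1
assert marker.tolist() == [1.0, 1.0, 1.0]  # marker is untouched this time
```

If that is the cause, using `marker = expanded.clone();` in the C++ loop (forcing a deep copy) would be a cheap experiment to run before looking elsewhere.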
77,911,172
18,730,707
If you specify a background color in pandas in python, number formatting is disabled
<p>After sorting the data in pandas, I save it to Excel, and I am trying to improve readability by changing the formatting. I specified the number format and the background color at the same time, but only one of the two is actually applied. I'm pretty sure this isn't a bug, just something I'm not understanding, but I don't know where I made a mistake. I would like some help.</p> <p>Below is the example code I wrote and the results of running it. The first snippet sets only the number format, and it works normally.</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({ 'Name': ['John', 'Isac'], 'Value': [100000, 300000000] }) with pd.ExcelWriter('test.xlsx', engine='xlsxwriter') as writer: df.to_excel(writer, sheet_name='test', index=False) workbook = writer.book worksheet = writer.sheets['test'] format_with_commas = workbook.add_format({'num_format':'#,##0'}) worksheet.set_column('B:B', 15, format_with_commas) </code></pre> <p><a href="https://i.sstatic.net/CnXM5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CnXM5.png" alt="enter image description here" /></a></p> <p>However, the problem is that when I add code to set the background color, the number format no longer works. 
I have included the code and execution results as images.</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({ 'Name': ['John', 'Isac'], 'Value': [100000, 300000000] }) styled_df = df.style.set_properties(**{'background-color':'yellow', 'color':'black'}) with pd.ExcelWriter('test.xlsx', engine='xlsxwriter') as writer: styled_df.to_excel(writer, sheet_name='test', index=False) workbook = writer.book worksheet = writer.sheets['test'] format_with_commas = workbook.add_format({'num_format':'#,##0'}) worksheet.set_column('B:B', 15, format_with_commas) </code></pre> <p><a href="https://i.sstatic.net/uRFC4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uRFC4.png" alt="enter image description here" /></a></p> <p>If my method is not correct, please let me know where I am wrong. Alternatively, if this code is outdated, please point that out as well.</p>
<python><pandas><xlsxwriter>
2024-01-31 06:20:00
2
878
na_sacc
77,911,091
5,751,473
Not more than one special symbol in a range from a long text
<p>Simplify the problem:</p> <p>There is an article (long text)</p> <p>Extract the content between <code>start</code> (included) and <code>end</code> (included)</p> <p>Requirement: There cannot be more than one <code>\n</code> between <code>start</code> and <code>end</code></p> <p>Find all matches</p> <p>Use <code>python</code> <code>re</code> only</p> <p>For code:</p> <pre class="lang-py prettyprint-override"><code>lines = re.findall(pattern, text, re.DOTALL) for line in lines: print(line) print('===') </code></pre> <p>So, how can I fix my pattern?</p> <p>Patterns I tried:</p> <ol> <li><code>start[^\n]*\n?[^\n]*end</code> with text:</li> </ol> <pre class="lang-none prettyprint-override"><code>... start just me and python regex 1 end start just me and python regex 2 end start just me and python regex 3 end ... </code></pre> <p><code>wrong</code>:</p> <pre class="lang-none prettyprint-override"><code>start just me and python regex 1 end start just me and python regex 2 end --&gt; should be split from the line before === start just me and python regex 3 end === </code></pre> <ol start="2"> <li><code>start(?:(?!\n\n).)*?end</code> and <code>start(?:[^\n]|\n(?!\n))*?end</code> with text:</li> </ol> <pre class="lang-none prettyprint-override"><code>start just me and python regex 1 end start just me and python regex 2 end start just me and python regex 3 end </code></pre> <p><code>wrong</code>:</p> <pre class="lang-none prettyprint-override"><code>start just me and python regex 1 end --&gt; should not match, because there are two `\n` in it === start just me and python regex 2 end === start just me and python regex 3 end === </code></pre>
<python><regex><python-re>
2024-01-31 06:01:20
1
430
Se ven
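For "at most one `\n` between start and end", a pattern that allows exactly one optional newline is enough: keep the stretches on either side of it newline-free, and make everything lazy so one match never leaps across lines and swallows a later one. A sketch:

```python
import re

# start, then a newline-free stretch, optionally ONE newline plus another
# newline-free stretch, then end; lazy so matches stay as short as possible.
pattern = r"start[^\n]*?(?:\n[^\n]*?)?end"

text = (
    "start just me and python regex 1 end\n"
    "start just me and\npython regex 2 end\n"
    "start just me and\n\npython regex 3 end\n"
)

matches = re.findall(pattern, text)
assert matches == [
    "start just me and python regex 1 end",
    "start just me and\npython regex 2 end",
]
```

No `re.DOTALL` is needed, since the only newline the pattern can cross is the single explicit `\n`; the double-newline case (regex 3) fails to match because `[^\n]` can never step over the second newline.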
77,910,877
3,795,219
Why is VSCode Ctrl+Space suggesting all words, vice methods/attributes?
<p>Using <code>Ctrl+Space</code> while editing a python file triggers a suggestions popup. However, rather than linting the object and suggesting its methods/attributes, my popup shows me all words in the file (sorted alphabetically).</p> <p>For instance, <code>10_000</code> and <code>10k</code> are just words listed in my script, not methods attached to the <code>model</code> object's class: <a href="https://i.sstatic.net/sAP1d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sAP1d.png" alt="terrible suggestions from VSCode" /></a></p> <p><strong><em>Question:</em> How can I limit these suggestions to the relevant methods/attributes?</strong></p> <p>It seems to me that Intellisense is treating my code as text, but I set the language mode to Python in VSCode.</p>
<python><visual-studio-code><intellisense><lint>
2024-01-31 04:51:52
0
8,645
Austin
77,910,876
3,123,290
"relation x does not exist" in Sqlalchemy 2.0 Async when it very clearly does
<p>I have tried to boil this down to the most basic of test cases. This is part of our test mocking; the db_session is mapped to a test-specific schema, I confirm that the setup works and the test-specific schema has the the required table, and then add a record to that table.</p> <pre class="lang-py prettyprint-override"><code> async def generate(self) -&gt; &quot;Any&quot;: tables = await self.db_session.execute(text(&quot;SELECT table_name FROM test_hero.information_schema.tables where table_schema = 'test_smoke_respond_to_message'&quot;)) if (&quot;organization&quot;,) in tables and self.db_session.get_bind().get_execution_options()[&quot;schema_translate_map&quot;][&quot;public&quot;] == &quot;test_smoke_respond_to_message&quot;: org = SqlOrganization(**self.model_dict) async with self.db_session as session: session.add(org) await session.commit() await session.refresh(org) </code></pre> <p>The problem is that <code>await session.commit()</code> throws this error:</p> <pre class="lang-py prettyprint-override"><code>sqlalchemy.exc.ProgrammingError: (sqlalchemy.dialects.postgresql.asyncpg.ProgrammingError) &lt;class 'asyncpg.exceptions.UndefinedTableError'&gt;: relation &quot;organization&quot; does not exist [SQL: INSERT INTO organization (h_engine_id, natural_language_name, _id) VALUES ($1::VARCHAR, $2::VARCHAR, $3::UUID) RETURNING organization.created_at, organization.updated_at] [parameters: ('0fda47a9dfe14d02a6a70e2fd15ddfec', 'Archer Inc', UUID('7b0d6120-6255-4c21-8276-4cd289c7d339'))] (Background on this error at: https://sqlalche.me/e/20/f405) &gt; /app/app/tests/mock_factory/models.py(51)generate() </code></pre> <p>I am completely out of ideas. How can this possibly execute if the table does <em>not</em> exist? How can it possibly fail if the table <em>does</em> exist? 
This is using the same DB connection so there's no possibility of different permissions/scopes.</p> <p>Here are the related definitions (most of the extraneous methods removed for clarity)</p> <pre class="lang-py prettyprint-override"><code> # models.py class SqlOrganization(NamedBase, TimestampedBase): &quot;&quot;&quot;the highest level of hierarchy in our system.&quot;&quot;&quot; __tablename__ = &quot;organization&quot; __pydantic_model__ = &quot;PyOrganization&quot; id_prefix = &quot;o&quot; engine_id: Mapped[Optional[str]] = mapped_column( String(255), nullable=True, doc=&quot;the id of the organization in the Engine&quot; ) platforms: Mapped[List[&quot;SqlPlatform&quot;]] = relationship( &quot;SqlPlatform&quot;, back_populates=&quot;organization&quot;, lazy=&quot;selectin&quot; ) # bases.py class SqlalchemyBase(AsyncAttrs, DeclarativeBase): __abstract__ = True @property def __pydantic_model__(self) -&gt; &quot;str&quot;: &quot;&quot;&quot;str representation of the pydantic model&quot;&quot;&quot; raise NotImplementedError class NamedBase(SqlalchemyBase): __abstract__ = True natural_language_name: Mapped[str] = mapped_column(String(255)) # conftest.py @pytest_asyncio.fixture(scope=&quot;function&quot;) async def test_async_sessionmaker(request): function_ = request.node.name engine = get_async_engine() if &quot;[&quot; in function_: function_ = function_.split(&quot;[&quot;)[0] async with engine.begin() as connection: await connection.execute(text('CREATE EXTENSION IF NOT EXISTS &quot;vector&quot;')) await connection.execute(text(f&quot;CREATE SCHEMA IF NOT EXISTS {function_}&quot;)) await connection.execute(text(f&quot;SET search_path TO {function_},public&quot;)) await connection.run_sync(SqlalchemyBase.metadata.drop_all) await connection.run_sync(SqlalchemyBase.metadata.create_all) await connection.execution_options(schema_translate_map={&quot;public&quot;: function_}) scoped_engine = get_async_engine(execution_options={&quot;schema_translate_map&quot;: 
{&quot;public&quot;: function_}}) def _test_async_sessionmaker() -&gt; &quot;sessionmaker[AsyncSession]&quot;: return sessionmaker(bind=scoped_engine, class_=AsyncSession, expire_on_commit=False) return _test_async_sessionmaker </code></pre>
<python><sqlalchemy><python-asyncio>
2024-01-31 04:51:32
1
778
EthanK
77,910,795
11,753,262
How is the __name__ of a Python module defined with different import methods?
<p>I'm struggling to understand how <code>__name__</code> of a module be defined, especially in different import methods. Suppose a directory tree:</p> <pre><code>├ main.py └ test/ ├── __init__.py ├── a │ ├── a1.py │ ├── base.py &lt;- a1.py and b1.py will import it │ └── __init__.py └── b ├── b1.py └── __init__.py </code></pre> <p>And here is <code>base.py</code>:</p> <pre class="lang-py prettyprint-override"><code># test/a/base.py print(&quot;Base.py be imported, __name__ =&quot;, __name__) </code></pre> <p><code>a1.py</code> imports <code>base.py</code> through relative import</p> <pre class="lang-py prettyprint-override"><code># test/a/a1.py print(&quot;[a1.py] import base: start&quot;) from . import base print(&quot;[a1.py] import base: end&quot;) </code></pre> <p>then <code>a/__init__.py</code> imports <code>a1.py</code> and <code>base.py</code></p> <pre class="lang-py prettyprint-override"><code># test/a/__init__.py from . import base from .a1 import * </code></pre> <p>For package <code>b</code>, the module <code>b1.py</code> import base through absolute import</p> <pre class="lang-py prettyprint-override"><code># test/b/b1.py print(&quot;[b1.py] import base: start&quot;) from test.a import base print(&quot;[b1.py] import base: end&quot;) </code></pre> <p>and <code>b/__init__.py</code> import <code>b.py</code></p> <pre class="lang-py prettyprint-override"><code># test/b/__init__.py from test.b.b1 import * </code></pre> <p>Finally, the <code>test/__init__.py</code> will import both package <code>a</code> and <code>b</code> in different methods</p> <pre class="lang-py prettyprint-override"><code># method1 print(&quot; ----- import a -----&quot;) from .a import * print(&quot; ----- import b -----&quot;) from .b import * # method2 import sys import os sys.path.append(os.path.dirname(os.path.abspath(__file__)) + &quot;/./&quot;) print(&quot; ----- import a -----&quot;) from a import * print(&quot; ----- import b -----&quot;) from b import * </code></pre> <p>If I 
import <code>test</code> in <code>main.py</code>, the output looks like it in method1</p> <pre><code>----- import a ----- Base.py be imported!, __name__ = test.a.base [a1.py] import base: start [a1.py] import base: end ----- import b ----- [b1.py] import base: start [b1.py] import base: end </code></pre> <p>The <code>__name__</code> of <code>base</code> is <code>test.a.base</code>. If I try it again in method2, the output looks like</p> <pre><code>----- import a ----- Base.py be imported!, __name__ = a.base [a1.py] import base: start [a1.py] import base: end ----- import b ----- [b1.py] import base start Base.py be imported!, __name__ = test.a.base [a1.py] import base: start [a1.py] import base: end [b1.py] import base: end </code></pre> <p>which is quite different to the method1.</p> <p>How a module's <code>__name__</code> be defined? Does different import method, like absolute/relative import or different import seaching path, change the module's <code>__name__</code>? I have seen <a href="https://stackoverflow.com/questions/15883526/how-is-the-name-variable-in-a-python-module-defined">this post</a> but it still confuse me, and I want to know more about it under the hood. Thanks a lot!</p>
<python><python-3.x><python-import>
2024-01-31 04:20:55
1
385
Chun-Ye Lu
77,910,640
1,070,833
python OCIO colour space conversions - colour issue
<p>I have a test 16bit tiff file that is using ACEScc colour space: <a href="https://i.sstatic.net/7NzcT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7NzcT.png" alt="acess_cc" /></a></p> <p>I want to process it using OCIO to, for instance, linear sRGB. OCIO provides examples of how to do it: <a href="https://opencolorio.readthedocs.io/en/latest/guides/developing/developing.html" rel="nofollow noreferrer">https://opencolorio.readthedocs.io/en/latest/guides/developing/developing.html</a> and it seems quite trivial to use:</p> <pre><code>import cv2 import numpy as np import PyOpenColorIO as OCIO path = &quot;ACEScc.tif&quot; input_cs = &quot;ACES - ACEScc&quot; output_cs = &quot;Utility - Linear - sRGB&quot; img = cv2.imread(path) #I use the default config downloaded from OCIO website config = OCIO.Config.CreateFromFile(&quot;config.ocio&quot;) in_colour = config.getColorSpace(input_cs) out_colour = config.getColorSpace(output_cs) processor = config.getProcessor(in_colour, out_colour) cpu = processor.getDefaultCPUProcessor() #this seems to be missing in the docs. the processor seems to expect floating point values #scaled to 0-1 range. Here I might be doing it wrong and this might be causing the colour issues. img2 = img.astype(np.float32) / 255.0 cpu.applyRGB(img2) #take it back to 8bit to have something to look at: img3 = (img2*255.0).astype(np.uint8) cv2.imwrite(r&quot;ocio_test.tif&quot;, img3) </code></pre> <p>the resulting image is linearised correctly. the values that are out of 8 bit range are clipped (this to be expected and I didn't handle it in any way). However the colours that are not clipped are not right. 
What am I missing?</p> <p><a href="https://i.sstatic.net/HIRlx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HIRlx.png" alt="srgb linear" /></a></p> <p>here is a closeup of the macbeth chart showing the problem with the colour (left is correct, right is what I get from the above python)</p> <p><a href="https://i.sstatic.net/0Gknv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Gknv.png" alt="compare" /></a></p> <p>some colour are ok but yellows and oranges are shifted towards pink. I also didn't expect the green plant to be out of range. I tested both with ocio v1 and v2 configs downloaded from here: <a href="https://opencolorio.readthedocs.io/en/latest/quick_start/downloads.html" rel="nofollow noreferrer">https://opencolorio.readthedocs.io/en/latest/quick_start/downloads.html</a></p> <p>I'm not sure what is the problem here and would love any suggestions.</p>
<python><image-processing><colors><openimageio>
2024-01-31 03:24:41
2
1,109
pawel
77,910,394
2,805,482
How to get text in white color from image in grey background and paste it on another image
<p>Hi I am using opencv to extract text in white from one image and paste it on another. I am using below approach for black image background and it works fine. Check the below code</p> <pre><code>import cv2 import numpy as np image_msg = (30, 209, 187, 31) base_path=&quot;/Users/images/&quot; mask_image1 = &quot;{}show_image_2.PNG&quot;.format(base_path) image_1_title = &quot;{}show_image_2_logo.PNG&quot;.format(base_path) img = cv2.imread(mask_image1) image_1_title_img = cv2.imread(image_1_title) image_1_titl_img = cv2.cvtColor(image_1_title_img, cv2.COLOR_BGR2RGB) im1r_title = cv2.resize(image_1_titl_img, (image_msg[2], image_msg[3])) print(im1r_title.shape) plt.imshow(im1r_title,cmap='gray') plt.axis('off') plt.show() print(img.shape) plt.imshow(img,cmap='gray') plt.axis('off') plt.show() img[image_msg[1]: image_msg[1] + image_msg[3], image_msg[0]: image_msg[0] + image_msg[2], ] = np.where(im1r_title &lt; [100, 100, 100], img[image_msg[1]: image_msg[1] + image_msg[3], image_msg[0]: image_msg[0] + image_msg[2], ], im1r_title) plt.imshow(img,cmap='gray') plt.axis('off') plt.show() </code></pre> <p><a href="https://i.sstatic.net/OwFHa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OwFHa.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/eVikX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eVikX.png" alt="enter image description here" /></a></p> <p>Result:</p> <p><a href="https://i.sstatic.net/SUVsH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SUVsH.png" alt="enter image description here" /></a></p> <p>however I am struggling to find a way to get the text in white color from the below, if you open this image you will see text in white, how can I extract and paste on another image?</p> <p>Problematic image with text:</p> <p><a href="https://i.sstatic.net/oBr4h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oBr4h.png" alt="enter image description here" /></a></p>
<python><numpy><opencv><image-processing>
2024-01-31 01:51:50
2
1,677
Explorer
77,910,368
1,412,564
Django - is there a way to shuffle and run only a subset of tests?
<p>We are using Django with tests. We have in total about 6,000 tests which take about 40 minutes to run. Is there a way to shuffle the tests and run only 200 (randomly chosen) tests? This should be done with a command line parameter with the number 200 (which may change), since usually we run all the tests.</p> <p>This can be used together with <code>--shuffle</code>. If the number of tests is less than 200, run all the tests (and not 200 tests).</p> <p>Here is some of our code:</p> <p>File <code>https://github.com/speedy-net/speedy-net/blob/main/speedy/core/base/management/commands/test.py</code>:</p> <pre class="lang-py prettyprint-override"><code>from django.core.management.commands import test class Command(test.Command): def add_arguments(self, parser): super().add_arguments(parser=parser) parser.add_argument( &quot;--test-all-languages&quot;, action=&quot;store_true&quot;, help=&quot;If run with this argument, test all languages, and don't skip languages.&quot;, ) </code></pre> <p>File <code>https://github.com/speedy-net/speedy-net/blob/main/speedy/core/base/test/models.py</code>:</p> <pre class="lang-py prettyprint-override"><code> class SiteDiscoverRunner(DiscoverRunner): def __init__(self, *args, **kwargs): assert (django_settings.TESTS is True) super().__init__(*args, **kwargs) self.test_all_languages = kwargs.get('test_all_languages', False) def build_suite(self, test_labels=None, extra_tests=None, **kwargs): if (not (test_labels)): # Default test_labels are all the relevant directories under &quot;speedy&quot;. For example [&quot;speedy.core&quot;, &quot;speedy.net&quot;]. # Due to problems with templates, &quot;speedy.match&quot; label is not added to speedy.net tests, and &quot;speedy.net&quot; label is not added to speedy.match tests. # ~~~~ TODO: fix this bug and enable these labels, although the tests there are skipped. 
test_labels = [] for label in django_settings.INSTALLED_APPS: if (label.startswith('speedy.')): label_to_test = '.'.join(label.split('.')[:2]) if (label_to_test == 'speedy.net'): add_this_label = (django_settings.SITE_ID == django_settings.SPEEDY_NET_SITE_ID) elif (label_to_test == 'speedy.match'): add_this_label = (django_settings.SITE_ID == django_settings.SPEEDY_MATCH_SITE_ID) elif (label_to_test == 'speedy.composer'): add_this_label = (django_settings.SITE_ID == django_settings.SPEEDY_COMPOSER_SITE_ID) elif (label_to_test == 'speedy.mail'): add_this_label = (django_settings.SITE_ID == django_settings.SPEEDY_MAIL_SOFTWARE_SITE_ID) else: add_this_label = True if (add_this_label): if (not (label_to_test in test_labels)): test_labels.append(label_to_test) print(test_labels) return super().build_suite(test_labels=test_labels, extra_tests=extra_tests, **kwargs) def setup_test_environment(self, **kwargs): super().setup_test_environment(**kwargs) django_settings.TEST_ALL_LANGUAGES = self.test_all_languages def teardown_test_environment(self, **kwargs): super().teardown_test_environment(**kwargs) del django_settings.TEST_ALL_LANGUAGES </code></pre> <p>There is more code which you can see on our <a href="https://github.com/speedy-net/speedy-net" rel="nofollow noreferrer">repository</a> on GitHub.</p>
<python><django><django-tests><django-unittest><manage.py>
2024-01-31 01:40:35
2
3,361
Uri
77,910,348
13,379,052
D5RF - Display something in HTML format before processing API requests
<p><strong>Django 5 - Rest Framework</strong></p> <p>I am building a Django REST API project. I am currently dealing with a request that takes so long to process. I want to know if I could display something like an HTML Response before the request is processed, or just like a text saying: <code>Loading... Please wait.</code></p> <p>Here is my code:</p> <pre><code>from rest_framework.authentication import SessionAuthentication, BasicAuthentication from rest_framework.permissions import IsAuthenticated from rest_framework.views import APIView class SomeView(APIView): authentication_classes = [SessionAuthentication, BasicAuthentication] permission_classes = [IsAuthenticated] def get(self, request): # I wanted to display it here. # ----------------- # Some process here # ----------------- # And then the actual response of the process. return Response(response, status=status.HTTP_200_OK) </code></pre> <p>Any help would be appreciated.</p>
<python><django><django-rest-framework>
2024-01-31 01:33:27
1
442
Coderio
77,910,224
8,540,947
How to check if a node in Maya is read only using Python
<p>I was trying to find a Maya command or flag to use to check if a node is read only in Maya using Python. There does not seem to be a direct way to query this, but it must be needed fairly often?</p>
<python><maya>
2024-01-31 00:49:24
2
519
winteralfs
77,910,172
22,212,435
Why do Entries have different text using the same Variable?
<p>Basically, I was just experimenting with tkinter and decided to write this useless code:</p> <pre><code>import tkinter as tk

root = tk.Tk()
var = tk.IntVar(value=1)

class SmallClass(tk.Frame):
    def __init__(self, special_var: tk.IntVar, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.label = tk.Entry(self, textvariable=special_var, font=('', 20))
        self.label.pack()
        self.bind('&lt;Enter&gt;', lambda event: (print('value in the Entry: ', self.label.get()),
                                             print(&quot;value set in Variable: &quot;, self.getvar(self.label['textvariable']))))
        special_var.trace('w', lambda *options: special_var.set(special_var.get()*2))

SmallClass(var).pack()
SmallClass(var).pack()
SmallClass(var).pack()
SmallClass(var).pack()
SmallClass(var).pack()
SmallClass(var).pack()

root.mainloop()
</code></pre> <p>I think it's better to run this code and see what happens. I am just trying to understand why it works in such a strange way. It looks like when I change the variable in one place, it runs this small lambda function for each instance and multiplies the current value by 2 several times. So the <code>Entry</code> widgets start to have different values even though they all use the same <code>textvariable</code>. At first, I thought it would cause an error, but there were no errors at all. Maybe this is due to update issues, but I have checked that the <code>Entry</code> widgets actually contain the text you see, and it is not equal to the <code>textvariable</code>'s value.</p> <p>So maybe you have some idea why it behaves the way it does? Is it a tkinter bug, or did I just miss something?</p> <p>Also, you can modify the code and move the line</p> <pre><code>special_var.trace('w', lambda *options: special_var.set(special_var.get()*2))
</code></pre> <p>outside the class, changing it to</p> <pre><code>var.trace('w', lambda *options: var.set(var.get()*2))
</code></pre> <p>This way the function will run once but still won't work as &quot;expected&quot;.</p>
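A key observation about the code above: every `SmallClass(var)` registers one more `'w'` trace on the same `IntVar`, so a single external write fires one doubling callback per instance. The sketch below is a tkinter-free model of just that part. It is deliberately simplified — it suppresses callback re-firing while callbacks are already running, whereas Tcl's actual trace re-entrancy rules are more intricate (which is exactly what lets the real `Entry` widgets capture different intermediate values) — but it shows the core effect: six registered traces means six doublings per write.

```python
class TracedVar:
    """Tiny stand-in for tk.IntVar with 'write' trace callbacks."""

    def __init__(self, value):
        self._value = value
        self._callbacks = []
        self._firing = False   # crude guard against infinite recursion

    def trace_add(self, callback):
        self._callbacks.append(callback)

    def get(self):
        return self._value

    def set(self, value):
        self._value = value
        if self._firing:       # a callback wrote back: update, but don't re-fire
            return
        self._firing = True
        try:
            for callback in self._callbacks:
                callback()
        finally:
            self._firing = False

var = TracedVar(1)
for _ in range(6):             # six SmallClass instances -> six registered traces
    var.trace_add(lambda: var.set(var.get() * 2))

var.set(1)                     # one external write...
print(var.get())               # ...gets doubled once per trace: prints 64
```

In real tkinter the write-backs from inside a trace handler interact with the still-running handler chain, so the per-widget snapshots diverge instead of all landing on one clean final value — but the multiplication-per-registered-trace mechanism is the same.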
<python><tkinter>
2024-01-31 00:29:02
0
610
Danya K