| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,084,529
| 8,947,266
|
Streamlit Javascript integration
|
<p>I can't get any JS function in Streamlit (Python) to work; the return value is always 0, no matter how I write it. For instance:</p>
<pre><code>anonymous = st_javascript("""
(() => {
return 2;
})()
""")
st.write(f"JavaScript returned: {anonymous}")
</code></pre>
<p>What am I doing wrong here? ChatGPT and Claude couldn't help, nor did I find any helpful posts here.</p>
<p>Can someone provide the correct syntax?</p>
|
<javascript><python><streamlit>
|
2024-10-14 01:58:22
| 1
| 304
|
Romero Azzalini
|
79,084,460
| 13,634,560
|
pandas, determine if set value already exists in dataframe
|
<p>I am checking to see if a tuple of unordered values is already in another list. I am new to Python, and so have not used sets much, but am a heavy pandas user and so was thrilled to find <a href="https://stackoverflow.com/questions/66279365/how-to-check-if-a-slice-of-a-tuple-in-a-set-of-tuples-is-in-another-set-of-tuple">this question</a> on checking to see if a slice of something is in another tuple.</p>
<p>The MRE is good for my use case, thank you to @forgetso, but I add one complication: each value in the frame is an unordered tuple.</p>
<pre><code>import pandas as pd
c1 = [("a", "b", "c", "e"), ("f", "g", "j"), ("a", "z", "x", "q"), ("q", "w", "e", "r")]
df1 = pd.DataFrame({
"A": c1,
})
c1 = [("e", "c", "d", "a"), ("f", "g", "j"), ("a", "z", "x", "q"), ("q", "w", "e", "r")]
df2 = pd.DataFrame({
"A": c1,
})
</code></pre>
<p>So that the two frames look like this:</p>
<pre><code> A
0 (a, b, c, e)
1 (f, g, j)
2 (a, z, x, q)
3 (q, w, e, r)
</code></pre>
<p>and</p>
<pre><code> A
0 (e, c, d, a)
1 (f, g, j)
2 (a, z, x, q)
3 (q, w, e, r)
</code></pre>
<p>So, for example, the values of A in df2 are all already present in A in df1, even though <code>df2.loc[0, "A"]</code> is in a different order.</p>
<p>Apologies for the complexity, but is a dataframe truly the best way to achieve what I'm after? Or should I pivot to sets?</p>
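One way to sketch the unordered membership check (editor's example, not from the original post): converting each tuple to a <code>frozenset</code> makes element order irrelevant, so a plain set lookup suffices.

```python
import pandas as pd

# Abbreviated versions of the frames above; the first row of df2 is
# deliberately NOT an (unordered) match for anything in df1
df1 = pd.DataFrame({"A": [("a", "b", "c", "e"), ("f", "g", "j")]})
df2 = pd.DataFrame({"A": [("e", "c", "d", "a"), ("f", "g", "j")]})

# frozenset ignores order: frozenset(("a","b")) == frozenset(("b","a"))
existing = {frozenset(t) for t in df1["A"]}
mask = df2["A"].map(lambda t: frozenset(t) in existing)
```

`mask` can then be used to filter df2 or to assert that all rows already exist.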
|
<python><pandas><dataframe><set>
|
2024-10-14 00:46:41
| 1
| 341
|
plotmaster473
|
79,084,176
| 12,011,020
|
Polars read_json fails to parse date type schema
|
<p>I want to read a Polars dataframe from a JSON string containing dates in the standard ISO format "yyyy-mm-dd".
When I try to read the string in and set the dtype of the date column with either <code>schema</code> or <code>schema_overrides</code>, the result contains only null values.</p>
<h2>MRE</h2>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timedelta
from io import StringIO
import polars as pl
# Generate a list of dates
start_date = datetime.today()
dates = [start_date + timedelta(days=i) for i in range(100)]
date_strings = [date.strftime("%Y-%m-%d") for date in dates]
# Create a Polars DataFrame
df = pl.DataFrame({"dates": date_strings})
df_reread = pl.read_json(
StringIO(df.write_json()),
schema_overrides={"dates": pl.Date},
)
</code></pre>
<h2>Output</h2>
<p>Output of <code>print(df_reread)</code>:</p>
<pre><code>shape: (100, 1)
┌───────┐
│ dates │
│ ---   │
│ date  │
╞═══════╡
│ null  │
│ null  │
│ null  │
│ null  │
│ null  │
│ …     │
│ null  │
│ null  │
│ null  │
│ null  │
│ null  │
└───────┘
</code></pre>
<h2>Question</h2>
<p>Is there any way to correctly read in the Date dtype from a JSON string?</p>
|
<python><json><dataframe><python-polars>
|
2024-10-13 21:25:18
| 2
| 491
|
SysRIP
|
79,084,149
| 7,077,159
|
How to iteratively run a Python script on the contents of a CSV file
|
<p>I have a Python program that takes a URL as a runtime argument. I want to create a 'wrapper' that processes a CSV file containing a list of URLs as the input, instead of a single URL. The script should be executed once for each row in the CSV file.</p>
<p>Here is my simple script 'myscript.py':</p>
<pre><code>#! /usr/bin/env python3
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('url', help='input URL')
args = parser.parse_args()
print('This is the argument')
print(args.url)
</code></pre>
<p>which I run with <code>python3 myscript.py https://www.bbc.co.uk</code></p>
<p>and it outputs:</p>
<pre><code>This is the argument
https://www.bbc.co.uk
</code></pre>
<p>With a CSV file 'urls.csv' containing a list of URLs, one per line, I want to be able to run something like <code>python3 myscript.py urls.csv</code>.
The script should then run once per URL in urls.csv, producing (for example):</p>
<pre><code>This is the argument
https://www.bbc.co.uk
This is the argument
https://www.itv.com
This is the argument
https://www.channel4.com
This is the argument
https://www.channel5.com
</code></pre>
<p>I prefer a 'wrapper' approach rather than modifying the 'argparse' command in my existing script. This is in a Windows environment.</p>
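A minimal wrapper sketch (editor's example; the file name <code>run_urls.py</code> is hypothetical, and <code>myscript.py</code> is assumed to sit in the working directory). It leaves the existing script untouched and invokes it once per row:

```python
#!/usr/bin/env python3
"""Hypothetical wrapper: run myscript.py once per URL listed in a CSV file."""
import csv
import subprocess
import sys

def read_urls(csv_path):
    # One URL per row, first column; skip blank rows
    with open(csv_path, newline="") as f:
        return [row[0] for row in csv.reader(f) if row]

if __name__ == "__main__" and len(sys.argv) > 1:
    for url in read_urls(sys.argv[1]):
        # Invoke the existing script unchanged, once per URL
        subprocess.run([sys.executable, "myscript.py", url], check=True)
```

Run as <code>python3 run_urls.py urls.csv</code>; <code>sys.executable</code> picks the current interpreter, so it works the same on Windows.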
|
<python><csv>
|
2024-10-13 21:09:21
| 3
| 333
|
Packwood
|
79,084,135
| 1,285,061
|
Python regex inconsistent match for the same expression for using `search` and `findall`
|
<p>Why does <code>re</code> produce different matches for the same expression when using <code>search</code> and <code>findall</code>?</p>
<pre><code>import re
msg = 'data from 192.168.10.255 and 10.10.10.10'
ipregex_d = re.compile(r'((25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\.){3}(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])')
op = ipregex_d.search(msg)
print (op)
op = ipregex_d.findall(msg)
print (op)
</code></pre>
<p>Output</p>
<pre><code><re.Match object; span=(10, 24), match='192.168.10.255'> # First hit on the search is properly matched
[('10.', '10', '255'), ('10.', '10', '10')] # IPs are not properly matched
</code></pre>
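This asymmetry is documented behaviour: when a pattern contains capturing groups, <code>findall</code> returns the groups rather than the whole match. Making every group non-capturing with <code>(?:...)</code> restores full-match results (editor's sketch):

```python
import re

msg = 'data from 192.168.10.255 and 10.10.10.10'

# Same octet alternation as above, but inside non-capturing groups (?:...)
octet = r'(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])'
ipregex = re.compile(rf'(?:{octet}\.){{3}}{octet}')

print(ipregex.findall(msg))  # ['192.168.10.255', '10.10.10.10']
```

With no capturing groups, <code>findall</code> and <code>search</code> agree on what a "match" is.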
|
<python><python-3.x><regex>
|
2024-10-13 21:02:11
| 1
| 3,201
|
Majoris
|
79,083,774
| 3,825,948
|
How to use DTMF input in Twilio to call Python function
|
<p>I have Twilio working with a Python IVR app using a bi-directional websocket connection. DTMF messages from Twilio are used to detect user digit input while the websocket connection is alive. This is working well. However, when I receive and recognize DTMF input, for example the number 1 was pressed, I want to call a Python function and run some code. I want to do this without closing the websocket connection in my Python IVR app. From what I've read in the Twilio documentation and confirmed via testing, Twilio does not execute the next TwiML verb until the bi-directional websocket connection is closed. My code is as follows:</p>
<pre><code>ws.close() // Closes the bi-directional websocket connection in the Python IVR app
// TwiML code that starts the bi-directional websocket connection and calls the number once the connection is closed by the Python IVR app as shown by the code above
<Connect>
<Stream url="wss://{{ host }}/audio/" />
</Connect>
<Dial>
<Number>{{ number }}</Number>
</Dial>
</code></pre>
<p>This code works as expected when the bi-directional websocket connection is closed. But I would like to dial the number without closing the websocket connection. I have not figured out a way to do this since when I close the websocket, control is passed to the TwiML template and no other Python code can be run.</p>
<p>Any help would be appreciated.</p>
|
<python><websocket><twilio><dtmf>
|
2024-10-13 17:41:18
| 0
| 937
|
Foobar
|
79,083,667
| 6,312,511
|
How do I change a column of single-value Python tuples into integers?
|
<p>As part of a class assignment, I have received a dataframe which has a data value associated with a single-value tuple:</p>
<pre><code>d = {'value': [0.827278, 0.586009, 0.832050, 0.576557, 0.943456],
'userId': [(75,), (106,), (686,), (815,), (1040,)]}
df = pd.DataFrame(data=d)
</code></pre>
<p>I need to use <code>userId</code> in a merge onto an integer-valued ID column in another dataframe, which obviously returns an error since <code>userId</code> holds tuples. How do I convert <code>userId</code> into an integer column consisting of the first value of each tuple? All of my attempts along the lines of <code>df['userId'] = df['userId'][0]</code> return errors. (You can assume that the first value in each tuple is guaranteed to be an integer; I've checked that.)</p>
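One way to sketch the conversion (editor's example): map each tuple to its first element, element-wise, before merging.

```python
import pandas as pd

d = {'value': [0.827278, 0.586009],
     'userId': [(75,), (106,)]}
df = pd.DataFrame(data=d)

# Take the first item of each tuple; astype(int) guarantees an integer dtype
df['userId'] = df['userId'].map(lambda t: t[0]).astype(int)
```

After this, <code>df.merge(other, on='userId')</code> against an integer ID column works as usual.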
|
<python><pandas>
|
2024-10-13 16:53:37
| 2
| 1,447
|
mmyoung77
|
79,083,492
| 922,130
|
How to include a directory with compiled binaries in a Python wheel using pyproject.toml?
|
<p>I want to create a Python wheel that contains a single Python module (<code>script.py</code>) and a directory of compiled C++ binaries. Below is my project structure:</p>
<pre class="lang-bash prettyprint-override"><code>root/
│
├── bin/
│   ├── binary1
│   ├── binary2
│   ├── binary3
│   └── binary4
├── script.py
├── pyproject.toml
├── MANIFEST.in
└── README.md
</code></pre>
<p>pyproject.toml:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=61.0.0", "wheel"]
build-backend = "setuptools.build_meta"
[tool.setuptools]
py-modules = ["script"]
[project]
name = "script-test"
description = "Lorem ipsum"
readme = "README.md"
license = { file = "LICENSE" }
authors = [
{ name = "", email = "your.email@example.com" },
]
requires-python = ">=3.7"
dynamic = ["version"]
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Natural Language :: English",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux"
]
[tool.setuptools.dynamic]
version = {attr = "script.__version__"}
[tool.setuptools.package-data]
"*" = ["bin/*"]
[project.scripts]
script = "script:main"
</code></pre>
<p>MANIFEST.in</p>
<pre><code>include README.md
include LICENSE
recursive-include bin *
</code></pre>
<p>After running:</p>
<pre><code>python -m build
</code></pre>
<p>The wheel (<code>script_test-0.0.4-py3-none-any.whl</code>) is created, but the <code>bin/</code> directory <em>is not included</em> in the wheel, while it is correctly included in the source distribution (<code>script_test-0.0.4.tar.gz</code>).</p>
<p>Build output (truncated):</p>
<pre><code>* Building wheel...
/private/var/folders/1k/dnrtvbx9445dpk1shw82v9fw0000gn/T/build-env-jgwgjjff/lib/python3.12/site-packages/setuptools/config/expand.py:126: SetuptoolsWarning: File '/private/var/folders/1k/dnrtvbx9445dpk1shw82v9fw0000gn/T/build-via-sdist-l75sfsez/script_test-0.0.4/LICENSE' cannot be found
for path in _filter_existing_files(_filepaths)
running bdist_wheel
running build
running build_py
creating build/lib
copying script.py -> build/lib
running egg_info
writing script_test.egg-info/PKG-INFO
writing dependency_links to script_test.egg-info/dependency_links.txt
writing entry points to script_test.egg-info/entry_points.txt
writing top-level names to script_test.egg-info/top_level.txt
reading manifest file 'script_test.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'LICENSE'
writing manifest file 'script_test.egg-info/SOURCES.txt'
installing to build/bdist.macosx-10.13-universal2/wheel
running install
running install_lib
creating build/bdist.macosx-10.13-universal2/wheel
copying build/lib/script.py -> build/bdist.macosx-10.13-universal2/wheel/.
running install_egg_info
Copying script_test.egg-info to build/bdist.macosx-10.13-universal2/wheel/./script_test-0.0.4-py3.12.egg-info
running install_scripts
creating build/bdist.macosx-10.13-universal2/wheel/script_test-0.0.4.dist-info/WHEEL
creating '/Users/andrzej/Desktop/vclust_env/vclust_test/dist/.tmp-8zpx71kj/script_test-0.0.4-py3-none-any.whl' and adding 'build/bdist.macosx-10.13-universal2/wheel' to it
adding 'script.py'
adding 'script_test-0.0.4.dist-info/METADATA'
adding 'script_test-0.0.4.dist-info/WHEEL'
adding 'script_test-0.0.4.dist-info/entry_points.txt'
adding 'script_test-0.0.4.dist-info/top_level.txt'
adding 'script_test-0.0.4.dist-info/RECORD'
removing build/bdist.macosx-10.13-universal2/wheel
Successfully built script_test-0.0.4.tar.gz and script_test-0.0.4-py3-none-any.whl
</code></pre>
<p>I would like the <code>bin/</code> directory to be part of the wheel, at the same level as <code>script.py</code>, and for the binaries to be installed into the Python <code>site-packages/</code> directory alongside the script. How can I include the <code>bin/</code> directory in the wheel and ensure it is installed with the package?</p>
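A possible direction (editor's note, untested against this exact layout): setuptools' <code>package-data</code> only applies to packages, and this project declares only a py-module, so <code>bin/</code> has nowhere to attach in the wheel. One common restructuring is to move <code>script.py</code> and <code>bin/</code> into a package directory (here the package name <code>script_test</code> is hypothetical):

```toml
# Assumes the tree becomes: root/script_test/{__init__.py, script.py, bin/...}
[tool.setuptools]
packages = ["script_test"]

[tool.setuptools.package-data]
script_test = ["bin/*"]
```

The binaries then land under <code>site-packages/script_test/bin/</code> and can be located at runtime, e.g. via <code>importlib.resources</code>.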
|
<python><setuptools><python-packaging><python-wheel>
|
2024-10-13 15:21:23
| 1
| 909
|
sherlock85
|
79,083,302
| 307,050
|
Slice list of 2D points for plotting with matplotlib
|
<p>I've got a list of two-dimensional points represented in a <code>numpy</code> style array:</p>
<pre><code>lines = np.array([
[[1,1], [2,3]], # line 1 (x,y) -> (x,y)
[[-1,1], [-2,2]], # line 2 (x,y) -> (x,y)
[[1,-1], [2,-7]] # line 3 (x,y) -> (x,y)
])
</code></pre>
<p>I'd like to plot these lines with <code>matplotlib</code> in the simplest form possible.
However, most of matplotlib's methods expect points to be given component-wise, like <code>([x1, x2, x3, ...], [y1, y2, y3, ...])</code>, instead of point-wise.</p>
<p>I managed to get the slices right for a <code>quiver</code> type plot with vectors:</p>
<pre><code>x = lines[:,0,0]
y = lines[:,0,1]
u = lines[:,1,0]
v = lines[:,1,1]
# quiver([X, Y], U, V, [C], **kwargs)
plt.quiver(x, y, u, v, color=['r','b','g'], scale=1, scale_units='xy', angles='xy')
plt.xticks(np.arange(-10, 10, 1))
plt.yticks(np.arange(-10, 10, 1))
plt.grid()
plt.show()
</code></pre>
<p>But it's my impression that <code>quiver</code> isn't really the right type of plot for my use case. Especially the <code>scale</code>, <code>scale_units</code> and <code>angles</code> arguments took me quite a while to figure out. Without them, vectors were displayed "wrong".</p>
<p>I'd rather use the simpler <code>plot</code>, in this signature</p>
<pre><code>plot([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs)
</code></pre>
<p>which for each line to plot expects the two x coordinates followed by the y coordinates.</p>
<p>So for the lines above, we need</p>
<pre><code>rearranged = [ [1,2],[1,3], [-1,-2],[1,2], [1,2],[-1,-7] ]
# ^line1 ^line2 ^line3
</code></pre>
<p>If I add these to a call of <code>plot</code> statically, it's exactly what I need.</p>
<p>Question is, how can I slice or rearrange my initial <code>lines</code> array?</p>
<p>All I've got so far is</p>
<pre><code>lines[:,[0,0],[0,1]]
</code></pre>
<p>but that only gives me the x coordinates of each line:</p>
<pre><code>[[ 1 1]
[-1 1]
[ 1 -1]]
</code></pre>
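One slicing sketch (editor's example): split x and y along the last axis, so that each row holds the start/end coordinates of one line; transposing then gives the column-per-line layout that <code>plt.plot</code> accepts.

```python
import numpy as np

lines = np.array([
    [[1, 1], [2, 3]],    # line 1
    [[-1, 1], [-2, 2]],  # line 2
    [[1, -1], [2, -7]],  # line 3
])

xs = lines[:, :, 0]  # row i = [x_start, x_end] of line i
ys = lines[:, :, 1]  # row i = [y_start, y_end] of line i

# plt.plot(xs.T, ys.T) draws one line per column pair,
# matching the 'rearranged' layout from the question
```

This avoids building the interleaved <code>rearranged</code> list by hand.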
|
<python><numpy><matplotlib><numpy-slicing>
|
2024-10-13 13:57:23
| 1
| 1,347
|
mefiX
|
79,083,280
| 3,286,489
|
When "pip install github", it errors with "aiohttp/_websocket.c:198:12: fatal error: 'longintrepr.h' file not found"
|
<p>When I run <code>pip install github</code>, I get the following error:</p>
<pre><code>Collecting github
Using cached github-1.2.7-py3-none-any.whl.metadata (1.7 kB)
Collecting aiohttp==3.8.1 (from github)
Using cached aiohttp-3.8.1.tar.gz (7.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions in /opt/homebrew/lib/python3.11/site-packages (from github) (4.11.0)
Requirement already satisfied: attrs>=17.3.0 in /opt/homebrew/lib/python3.11/site-packages (from aiohttp==3.8.1->github) (23.1.0)
Collecting charset-normalizer<3.0,>=2.0 (from aiohttp==3.8.1->github)
Using cached charset_normalizer-2.1.1-py3-none-any.whl.metadata (11 kB)
Requirement already satisfied: multidict<7.0,>=4.5 in /opt/homebrew/lib/python3.11/site-packages (from aiohttp==3.8.1->github) (6.0.4)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /opt/homebrew/lib/python3.11/site-packages (from aiohttp==3.8.1->github) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in /opt/homebrew/lib/python3.11/site-packages (from aiohttp==3.8.1->github) (1.15.1)
Requirement already satisfied: frozenlist>=1.1.1 in /opt/homebrew/lib/python3.11/site-packages (from aiohttp==3.8.1->github) (1.3.3)
Requirement already satisfied: aiosignal>=1.1.2 in /opt/homebrew/lib/python3.11/site-packages (from aiohttp==3.8.1->github) (1.3.1)
Requirement already satisfied: idna>=2.0 in /opt/homebrew/lib/python3.11/site-packages (from yarl<2.0,>=1.0->aiohttp==3.8.1->github) (3.4)
Requirement already satisfied: propcache>=0.2.0 in /opt/homebrew/lib/python3.11/site-packages (from yarl<2.0,>=1.0->aiohttp==3.8.1->github) (0.2.0)
Using cached github-1.2.7-py3-none-any.whl (15 kB)
Using cached charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Building wheels for collected packages: aiohttp
Building wheel for aiohttp (pyproject.toml) ... error
error: subprocess-exited-with-error
  × Building wheel for aiohttp (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [104 lines of output]
*********************
* Accelerated build *
*********************
running bdist_wheel
running build
running build_py
creating build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_ws.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/worker.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/multipart.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_response.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/client_ws.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/test_utils.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/tracing.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_exceptions.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_middlewares.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/http_exceptions.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_app.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/streams.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_protocol.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/log.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/client.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_urldispatcher.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_request.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/http_websocket.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/client_proto.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/locks.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/__init__.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_runner.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_server.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/base_protocol.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/payload.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/client_reqrep.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/http.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_log.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/resolver.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/formdata.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/payload_streamer.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_routedef.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/connector.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/client_exceptions.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/typedefs.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/hdrs.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/web_fileresponse.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/http_writer.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/tcp_helpers.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/helpers.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/http_parser.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/cookiejar.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/pytest_plugin.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/abc.py -> build/lib.macosx-13-arm64-cpython-311/aiohttp
running egg_info
writing aiohttp.egg-info/PKG-INFO
writing dependency_links to aiohttp.egg-info/dependency_links.txt
writing requirements to aiohttp.egg-info/requires.txt
writing top-level names to aiohttp.egg-info/top_level.txt
reading manifest file 'aiohttp.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'aiohttp' anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.lib' found anywhere in distribution
warning: no previously-included files matching '*.dll' found anywhere in distribution
warning: no previously-included files matching '*.a' found anywhere in distribution
warning: no previously-included files matching '*.obj' found anywhere in distribution
warning: no previously-included files found matching 'aiohttp/*.html'
no previously-included directories found matching 'docs/_build'
adding license file 'LICENSE.txt'
writing manifest file 'aiohttp.egg-info/SOURCES.txt'
copying aiohttp/_cparser.pxd -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_find_header.c -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_find_header.h -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_find_header.pxd -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_headers.pxi -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_helpers.c -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_helpers.pyi -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_helpers.pyx -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_http_parser.c -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_http_parser.pyx -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_http_writer.c -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_http_writer.pyx -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_websocket.c -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/_websocket.pyx -> build/lib.macosx-13-arm64-cpython-311/aiohttp
copying aiohttp/py.typed -> build/lib.macosx-13-arm64-cpython-311/aiohttp
creating build/lib.macosx-13-arm64-cpython-311/aiohttp/.hash
copying aiohttp/.hash/_cparser.pxd.hash -> build/lib.macosx-13-arm64-cpython-311/aiohttp/.hash
copying aiohttp/.hash/_find_header.pxd.hash -> build/lib.macosx-13-arm64-cpython-311/aiohttp/.hash
copying aiohttp/.hash/_helpers.pyi.hash -> build/lib.macosx-13-arm64-cpython-311/aiohttp/.hash
copying aiohttp/.hash/_helpers.pyx.hash -> build/lib.macosx-13-arm64-cpython-311/aiohttp/.hash
copying aiohttp/.hash/_http_parser.pyx.hash -> build/lib.macosx-13-arm64-cpython-311/aiohttp/.hash
copying aiohttp/.hash/_http_writer.pyx.hash -> build/lib.macosx-13-arm64-cpython-311/aiohttp/.hash
copying aiohttp/.hash/_websocket.pyx.hash -> build/lib.macosx-13-arm64-cpython-311/aiohttp/.hash
copying aiohttp/.hash/hdrs.py.hash -> build/lib.macosx-13-arm64-cpython-311/aiohttp/.hash
running build_ext
building 'aiohttp._websocket' extension
creating build/temp.macosx-13-arm64-cpython-311/aiohttp
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX13.sdk -I/opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11/include/python3.11 -c aiohttp/_websocket.c -o build/temp.macosx-13-arm64-cpython-311/aiohttp/_websocket.o
aiohttp/_websocket.c:198:12: fatal error: 'longintrepr.h' file not found
#include "longintrepr.h"
^~~~~~~~~~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for aiohttp
Failed to build aiohttp
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (aiohttp)
</code></pre>
<p>My Python version is Python 3.13.0</p>
<p>I tried <code>pip uninstall aiohttp</code> and reinstalled it with <code>pip install aiohttp</code>, which works fine. But <code>pip install github</code> still fails.</p>
<p>Is this a Python issue, or an issue with my machine?</p>
|
<python><python-3.x><github>
|
2024-10-13 13:47:25
| 0
| 61,245
|
Elye
|
79,082,898
| 4,432,498
|
Wait in a method until some methods finished in another threads
|
<p>I need to implement the waiting in a method until some other methods in different threads are finished.</p>
<p>This is my implementation (not working as expected):</p>
<pre><code>import queue
from concurrent.futures import ThreadPoolExecutor
import threading
import time
import logging
logging.basicConfig(
level=logging.INFO,
format="%(message)s",
)
class Foo(object):
def func_1(self):
log = logging.getLogger(__name__)
log.info("start func_1")
time.sleep(2)
log.info("end func_1")
def func_2(self):
log = logging.getLogger(__name__)
log.info("start func_2")
time.sleep(2)
log.info("end func_2")
def func_3(self):
log = logging.getLogger(__name__)
log.info("start func_3")
time.sleep(2)
log.info("end func_3")
def func_4_free(self):
log = logging.getLogger(__name__)
log.info("start func_4_free")
time.sleep(2)
log.info("end func_4_free")
def do(self):
log = logging.getLogger(__name__)
log.info("start do")
time.sleep(1)
log.info("This is do")
time.sleep(1)
log.info("end do")
class Bar(Foo):
cv_from_do = threading.Event()
FUNC_SYNCHRONISE = {"do": ["func_1", "func_2", "func_3", "do"]}
def __init__(self):
super().__init__()
self.funcs = queue.Queue()
self.cv_from_do.set()
def __getattribute__(self, name):
attribute = super().__getattribute__(name)
if name in ["cv_from_do", "funcs", "FUNC_SYNCHRONISE"]:
return attribute
if name in self.FUNC_SYNCHRONISE.keys():
def wrap(*args, **kwargs):
log = logging.getLogger(__name__)
log.info(f"start {name}. wait till other functions finish...")
try:
self.cv_from_do.wait()
self.cv_from_do.clear() # force to wait other functions
while not self.funcs.empty():
pass
self.funcs.put(name)
attribute(*args, **kwargs)
self.funcs.get(name)
self.cv_from_do.set()
except Exception as ex:
log.exception(ex)
return wrap
elif name in self.FUNC_SYNCHRONISE["do"]:
def wrap(*args, **kwargs):
log = logging.getLogger(__name__)
try:
self.cv_from_do.wait()
self.funcs.put(name)
attribute(*args, **kwargs)
self.funcs.get(name)
except Exception as ex:
log.exception(ex)
return wrap
else:
return attribute
with ThreadPoolExecutor(max_workers=None) as executor:
obj = Bar()
# function of 'obj' can be called any times, in any threads, in any order
# following is just for test
for f in [obj.func_1, obj.func_3, obj.func_4_free, obj.do, obj.func_2, obj.func_1, obj.do, obj.func_4_free, obj.do, obj.func_3, obj.func_2, obj.func_1]:
future = executor.submit(
f,
**{}
)
</code></pre>
<p>The inheritance must be preserved <code>Bar(Foo)</code>. <code>Foo</code> - must not be modified</p>
<p>The expectation is:</p>
<ol>
<li>Function of <code>obj = Bar()</code> can be called in a thread in any amount, in any time.</li>
<li>When any functions are started in threads they are working
independent in parallel.</li>
<li>Once <code>do</code> starts in a thread - it waits until other functions (including other <code>do</code>s if they exist in other threads) finish, then other functions wait until the current <code>do</code> is finished. Once current <code>do</code> finishes -> other functions can continue.</li>
<li>Only function of <code>Foo</code> in <code>FUNC_SYNCHRONISE</code> have to be synchronized. Other functions of <code>Foo</code> can work independently.</li>
</ol>
<p>I got following output:</p>
<pre><code>start func_1
start func_3
start func_4_free
start do. wait till other functions finish...
start do. wait till other functions finish...
start func_4_free
start do. wait till other functions finish...
end func_1
end func_3
end func_4_free
start do <<< - from here
end func_4_free
This is do
end do <<< - to here is correct. 'do' behaves atomic
start do <<< - from here
start func_1 <<< - to here is NOT correct, as 'do' has to be finished before other methods sart
start func_3
start func_2
start func_2
start func_1
This is do
end func_1
end do
end func_3
end func_2
end func_2
end func_1
start do
This is do
end do
</code></pre>
<p>The problem: if there is only one <code>do</code> across the threads, my code works. But if I call several <code>do</code>s in different threads, the first works as expected, while subsequent <code>do</code>s run without waiting for each other to complete. In short, <code>do</code> should always run sequentially, never concurrently with another <code>do</code>.</p>
<p>Please help if possible.</p>
<p><strong>UPD</strong></p>
<p>I'm thinking of creating a queue of <code>do</code> calls and allowing each to execute when its turn comes, but checking the head of the queue and then popping it is not atomic, and I believe it may lead to concurrency problems.</p>
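One alternative sketch (editor's example, not the original design): treat <code>do</code> as an exclusive "writer" and the other synchronised functions as shared "readers", using a Condition-based readers-writer gate. A wrapper like the one in <code>__getattribute__</code> would call <code>acquire_exclusive</code>/<code>release_exclusive</code> around <code>do</code> and <code>acquire_shared</code>/<code>release_shared</code> around the others; writer-starvation handling is deliberately omitted.

```python
import threading

class RWGate:
    """Readers-writer gate: many 'shared' holders OR one 'exclusive' holder."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_shared(self):
        with self._cond:
            while self._writer:          # wait while a 'do' is running
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:  # wait for everyone
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

Because <code>acquire_exclusive</code> waits while <code>_writer</code> is set, two <code>do</code> calls can never overlap, which is the property the Event-based version lacks.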
|
<python><multithreading><concurrency>
|
2024-10-13 10:18:10
| 1
| 550
|
Anton
|
79,082,892
| 15,869,059
|
Fusion-like timeline in Python
|
<p>In Autodesk Fusion, the timeline shows individual operations on a 3D model:</p>
<p><a href="https://i.sstatic.net/c6V5BngY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c6V5BngY.png" alt="a timeline containing operations on a 3D model" /></a></p>
<p>Each operation can be added, removed or modified at any point, with the 3D model being updated accordingly.</p>
<p>Under the hood, modifying an operation might work as follows:</p>
<pre class="lang-py prettyprint-override"><code>for i in range(modified_operation_index, n):
    cache[i] = apply_operation(operation=operations[i],
                               initial_state=cache[i - 1])
display(cache[n - 1])
</code></pre>
<p>The implication of this is that changes to operations do not require Fusion to run through all operations, but rather only the operations starting from the modified one.</p>
<p>I would like to have the same functionality in Build123d, which is a Python CAD library.</p>
<pre class="lang-py prettyprint-override"><code># %% Operation 1
from build123d import *
# ... more time-consuming imports ...
# %% Operation 2
box = Box(1, 2, 3)
# %% Operation 3
# ... more code that modifies box ...
# %% Operation 4
circle = Circle(4, 5)
# %% Operation 5
# ... more code that modifies circle ...
# %% Operation ...
# %% Operation n - 1
result = box + circle + ...
# %% Operation n
# show result in viewer
</code></pre>
<p>I don't want to wait for operations 1 through 16 to finish executing each time I want to preview a change to operation 17's code.</p>
<p>Is there a way to accomplish this in Python, possibly using external software such as IPython?</p>
|
<python><jupyter-notebook><3d>
|
2024-10-13 10:16:08
| 1
| 431
|
glibg10b
|
79,082,779
| 8,275,142
|
gmodels::fit.contrast equivalent in Python
|
<p>I would like to test for synergy between 2 drugs given to animals, and I measured the tumor growth at different time-points in 15 animals in each group.</p>
<p>I use the statistical test for Bliss independence synergy, which is a modified t-statistic derived from the principles of linear models and the properties of the t-distribution. This statistic essentially measures the difference between the combined effect and the sum of individual effects, as <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6876842/" rel="nofollow noreferrer">nicely described elsewhere</a>.</p>
<p>The code in R uses a straightforward approach with a contrast matrix:</p>
<pre class="lang-none prettyprint-override"><code>model.fit <- lm(lnTV ~ lnTV.baseline + Group, data = TV.data.day)
fit.combo <- gmodels::fit.contrast(model.fit, "Group", c(1, -1, -1, 1), conf.int = 0.9, df = TRUE)
</code></pre>
<p>Is there a way to leverage the contrast matrix in Python, for example using <code>statsmodels.stats.contrast</code> or anything with <code>patsy.contrasts</code>? I couldn't translate the methods described in the documentation pages <a href="https://www.statsmodels.org/stable/examples/notebooks/generated/contrasts.html" rel="nofollow noreferrer">Contrast Overview</a> or the <a href="https://www.statsmodels.org/dev/contrasts.html" rel="nofollow noreferrer">Contrast coding system</a>.</p>
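<p>For reference, <code>fit.contrast</code> tests the hypothesis c'&beta; = 0 for a contrast vector c; in statsmodels the closest equivalent is <code>results.t_test(c)</code> on a fitted OLS model. The computation is also small enough to write out with NumPy alone — a hedged sketch on synthetic data (not your design matrix):</p>

```python
import numpy as np

def fit_contrast(X, y, c):
    """t-statistic for the linear contrast c'beta = 0 in the OLS model y = X beta + e."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]                  # residual degrees of freedom
    s2 = resid @ resid / dof                   # residual variance estimate
    est = c @ beta                             # contrast estimate c'beta
    se = np.sqrt(s2 * (c @ np.linalg.inv(X.T @ X) @ c))
    return est, se, est / se                   # estimate, std. error, t-statistic

# Tiny check: intercept + dummy for group 2, contrast picks the group difference
X = np.array([[1, 0], [1, 0], [1, 1], [1, 1]], dtype=float)
y = np.array([1.0, 2.0, 3.0, 5.0])
est, se, t = fit_contrast(X, y, np.array([0.0, 1.0]))
```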
|
<python><r><statistics><statsmodels><patsy>
|
2024-10-13 09:01:10
| 1
| 852
|
SΓ©bastien Wieckowski
|
79,082,403
| 3,847,651
|
Why a Python-based Windows service can't be started
|
<p>I want to develop a Windows service using Python. As a try, I copied code from <a href="https://thepythoncorner.com/posts/2018-08-01-how-to-create-a-windows-service-in-python/" rel="nofollow noreferrer">this page</a>. Then I compiled it to an EXE file with <a href="https://pyinstaller.org/en/stable/" rel="nofollow noreferrer">PyInstaller</a>. Then I used <a href="https://en.wikipedia.org/wiki/WiX" rel="nofollow noreferrer">WiX</a> to make an <a href="https://en.wikipedia.org/wiki/Windows_Installer" rel="nofollow noreferrer">MSI</a> file and installed it as a Windows 11 service.</p>
<p>But after it is installed, my service in the <a href="https://en.wikipedia.org/wiki/Service_Control_Manager" rel="nofollow noreferrer">SCM</a> window is always <em>"STOPPED"</em> and can't be started. I don't know why.</p>
<p><a href="https://i.sstatic.net/XkfvjVcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XkfvjVcg.png" alt="Enter image description here" /></a></p>
<p>My WiX project is just normal; I added a ServiceInstall element. I don't know if WiX matters.</p>
<p>The whole process doesn't report any error.</p>
<p>I checked <a href="https://en.wikipedia.org/wiki/Event_Viewer" rel="nofollow noreferrer">Event Viewer</a> and found an error in it. It reads like: waiting for service MyServiceTest connection timeout (30000 ms)</p>
<p>But I didn't understand the error message yet.</p>
<p>I finally found the fix <a href="https://stackoverflow.com/questions/25770873/python-windows-service-pyinstaller-executables-error-1053">here</a>.</p>
|
<python><windows><service>
|
2024-10-13 04:30:39
| 0
| 1,553
|
Wason
|
79,082,174
| 565,635
|
If I install two distribution packages that provide the same import package, which one is imported?
|
<p>As a quick reminder, a <a href="https://packaging.python.org/en/latest/glossary/#term-Distribution-Package" rel="nofollow noreferrer"><em>distribution package</em></a> is what you install through <code>pip</code>. Each distribution package can contain (multiple) <a href="https://packaging.python.org/en/latest/glossary/#term-Import-Package" rel="nofollow noreferrer"><em>import packages</em></a> which may or may not have the same name. For example the <em>distribution package</em> <code>scikit-learn</code> provides the <em>import package</em> <code>sklearn</code>.</p>
<p>Now, if I install two distribution packages through <code>pip</code>, but both distribution packages offer an import package with the same name, which will get imported?</p>
<p>For example suppose I install distribution packages <code>a</code> and <code>b</code>, but both packages offer an import package <code>foo</code>. When I write <code>import foo</code>, which import package will I get?</p>
|
<python><pip><python-import><python-packaging>
|
2024-10-13 00:10:50
| 0
| 119,106
|
orlp
|
79,082,153
| 14,579,156
|
Google Classroom API Assign Coursework To Students
|
<p>I'm trying to create a script that will assign students by ID to coursework. The code appears to be working, but I'm getting a 500 response from the server. Any idea what I'm doing wrong? I've tried different student IDs and they all give the same result.</p>
<pre><code>def assign_coursework_to_students(service, course_id, coursework_id, student_ids_to_add):
"""Assign coursework to specific individual students with retry on internal error."""
print(student_ids_to_add)
for attempt in range(3): # Retry up to 3 times
try:
# Prepare the request body
body = {
"assigneeMode": "INDIVIDUAL_STUDENTS",
"modifyIndividualStudentsOptions": {
"addStudentIds": student_ids_to_add
}
}
# Make the API call to modify assignees
response = service.courses().courseWork().modifyAssignees(
courseId=course_id,
id=coursework_id,
body=body
).execute()
print(f"Successfully assigned coursework to students: {response['title']} (ID: {response['id']})")
return # Exit the function on success
except HttpError as error:
print(f"An error occurred: {error.resp.status} - {error._get_reason()}")
if error.resp.status == 500:
print(f"Internal error encountered. Retrying... (Attempt {attempt + 1})")
time.sleep(2) # Wait before retrying
continue # Retry the request
else:
print(f"An error occurred while assigning coursework: {error}")
return # Exit on non-500 error
except Exception as e:
print(f"An unexpected error occurred: {e}")
return # Exit on unexpected errors
</code></pre>
|
<python><google-classroom>
|
2024-10-12 23:48:15
| 0
| 370
|
ShadowGunn
|
79,082,075
| 2,675,349
|
How to efficiently process messages from Rabbit MQ?
|
<p>I have an architecture-related question about implementing a Python API/microservice that needs to process 30 million incoming requests.</p>
<p>I am planning to use RabbitMQ or Amazon SQS.</p>
<p>Question: Will the callback mechanism be able to handle the message volume efficiently, or do you have any better suggestions? Please advise.</p>
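<p>For what it's worth, the usual RabbitMQ pattern is a bounded prefetch (<code>channel.basic_qos(prefetch_count=...)</code>) plus several consumer workers, acking after each message. Below is a broker-free sketch of that flow, with <code>queue.Queue</code> standing in for the broker so it runs anywhere; the helper name and parameters are illustrative:</p>

```python
import queue
import threading

def run_consumers(messages, handle, workers=4, prefetch=100):
    q = queue.Queue(maxsize=prefetch)       # bounded buffer, like basic_qos
    results, lock = [], threading.Lock()

    def worker():
        while True:
            msg = q.get()
            if msg is None:                 # poison pill -> shut down
                return
            out = handle(msg)               # your callback / business logic
            with lock:
                results.append(out)         # "ack" only after handling

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for m in messages:
        q.put(m)                            # blocks when the buffer is full
    for _ in threads:
        q.put(None)
    for t in threads:
        t.join()
    return results
```

<p>At 30 million messages the callback mechanism itself is rarely the bottleneck; throughput is governed by the prefetch size, the number of consumers, and whether the handler does blocking I/O (in which case async consumers or multiple processes help more than extra threads).</p>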
|
<python><asynchronous><rabbitmq><microservices><event-driven>
|
2024-10-12 22:44:35
| 0
| 1,027
|
Ullan
|
79,081,999
| 8,397,886
|
VS Code terminal is inconsistent with normal terminal MacOS ARM
|
<p>Below I have attached two images: one from my VS Code terminal and the other from my normal terminal. The difference matters because I keep getting dependency issues due to packages being compiled for x86_64 instead of Arm64, which is what it should be.</p>
<p>What is the discrepancy due to, and how can I ensure they both use Arm64?</p>
<p>When I run this, it seems both architectures are available in my Python installation:</p>
<pre><code>file $(which python3)
/Library/Frameworks/Python.framework/Versions/3.12/bin/python3
/Library/Frameworks/Python.framework/Versions/3.12/bin/python3: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64:Mach-O 64-bit executable arm64]
/Library/Frameworks/Python.framework/Versions/3.12/bin/python3 (for architecture x86_64): Mach-O 64-bit executable x86_64
/Library/Frameworks/Python.framework/Versions/3.12/bin/python3 (for architecture arm64): Mach-O 64-bit executable arm64
</code></pre>
<p>Any help would be much appreciated!</p>
<p><a href="https://i.sstatic.net/piuVPIfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/piuVPIfg.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/lGYd6pu9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGYd6pu9.png" alt="enter image description here" /></a></p>
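<p>A quick way to see which slice of a universal2 binary each terminal is actually running (a diagnostic sketch, not a fix): run this in both terminals and compare. If VS Code reports <code>x86_64</code>, its shell is likely launched under Rosetta or with an <code>arch -x86_64</code> prefix somewhere in its startup chain.</p>

```python
import platform

# A universal2 Python can start as either architecture depending on how it
# is launched; this reports the architecture of the *running* interpreter.
print(platform.machine())   # 'arm64' or 'x86_64'
```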
|
<python><visual-studio-code><terminal><arm64><python-venv>
|
2024-10-12 21:44:52
| 1
| 2,577
|
Ludo
|
79,081,924
| 189,878
|
With spaCy, how can I get all lemmas from a string?
|
<p>I have a pandas data frame with a column of text values (documents). I want to apply lemmatization to these values with the spaCy library using the pandas <code>apply</code> function. I've defined my <code>to_lemma</code> function to iterate through the words in the document and concatenate the corresponding lemmas into the output string; however, this is very slow. Is there a way to extract the lemmatized form of a document in spaCy?</p>
<pre><code>def to_lemma(text):
tp = nlp(text)
line = ""
for word in tp:
line = line + word.lemma_ + " "
return line
</code></pre>
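<p>For reference, a common speed-up (a sketch, assuming a loaded pipeline such as <code>nlp = spacy.load("en_core_web_sm")</code>) is to build the string with <code>str.join</code> instead of repeated concatenation, and to feed all documents through <code>nlp.pipe</code> in batches rather than calling <code>nlp()</code> once per row:</p>

```python
def to_lemma(doc):
    # join is O(n), unlike repeated string concatenation in a loop
    return " ".join(token.lemma_ for token in doc)

# In real code (assumes a loaded spaCy model named nlp):
#   df["lemmas"] = [to_lemma(doc) for doc in nlp.pipe(df["text"], batch_size=64)]

# Tiny stand-in tokens so the helper runs here without a spaCy model:
from types import SimpleNamespace
doc = [SimpleNamespace(lemma_="run"), SimpleNamespace(lemma_="fast")]
print(to_lemma(doc))   # run fast
```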
|
<python><pandas><nlp><spacy><lemmatization>
|
2024-10-12 21:03:21
| 2
| 2,346
|
Patrick
|
79,081,900
| 1,790,839
|
Get context from previous airflow task
|
<p>I have the following DAG:</p>
<blockquote>
<p>task_a >> task_b >> task_c</p>
</blockquote>
<p>I would like to be able:</p>
<ul>
<li>to somehow access inside task_c some parameters (environment variables to be precise) passed to task_b (which is a KubernetesOperator)</li>
<li>without changing task_b (so no xcom involved: because of the legacy code base I'm working on, I would like to avoid having to modify task_b whatsoever and so I don't want to have task_b pushing xcom)</li>
</ul>
<p>I found and tried some stackoverflow post (<a href="https://stackoverflow.com/questions/56309773/airflow-get-previous-task-id-in-the-next-task">Airflow: Get previous task id in the next task</a>) where I can access the previous task_id from a given task. I would like to know if, using the same spirit, I could access the previous task context in my task.</p>
|
<python><airflow>
|
2024-10-12 20:44:10
| 0
| 450
|
kofi_kari_kari
|
79,081,887
| 8,190,068
|
Adjusting Checkbox and Label alignment in a GridLayout
|
<p>I am relatively new to Kivy, and I'm having problems adjusting checkboxes and their associated labels within a GridLayout. With reference to the image below...</p>
<p><a href="https://i.sstatic.net/Cbdh6b8r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cbdh6b8r.png" alt="Import Dialog with Grid Layout" /></a></p>
<p>In the screenshot above, there are three GridLayout widgets, which contain checkboxes and labels, arranged in 4 columns. I want the columns containing the checkboxes to be narrow, and the columns containing the labels to split the remaining space.</p>
<p>My questions are:</p>
<ol>
<li><p>The labels in the second column are quite distant, horizontally, from the checkboxes that they describe in the first column. <strong>How can I get them to be closer, as with the widgets in columns 3 & 4?</strong></p>
</li>
<li><p>The checkboxes are not vertically-aligned with the labels that describe them. <strong>How can I vertically-align these widgets?</strong></p>
</li>
</ol>
<p><strong>UPDATE: I have tried adding <code>height</code> (to match the checkboxes), <code>valign</code> and <code>padding</code> (0,0,0,0) attributes to the labels, but it has not changed the vertical alignment in the slightest.</strong></p>
<ol start="3">
<li><p>I wanted a little space above each section heading, but I wasn't sure how to do that without doing a bunch of manual calculations and playing around with the height of each GridLayout. So, to let Kivy figure this out, I added an extra row with an empty label. <strong>Is there a better way to do this?</strong></p>
</li>
<li><p>It's not clear to me from the online documentation or the demo programs how to determine which of these checkboxes (i.e. radio buttons) was selected when the user clicks Load. <strong>Must I do an id search in the app to find each checkbox by its id, and then figure out which one is 'True'? Or is there a better way?</strong></p>
</li>
</ol>
<p>Here is my kivy-language code:</p>
<pre><code><ImportTypeDialog>:
BoxLayout:
orientation: "vertical"
size: root.size
pos: root.pos
Label:
text: '[i]Which type of data are you importing?[/i]'
markup: 'True'
text_size: self.width, None
size: self.texture_size
halign: 'left'
Label:
text: '[u][size=16]Customer-related items[/size][/u]'
markup: 'True'
text_size: self.width, None
size: self.texture_size
halign: 'left'
GridLayout:
cols: 4
spacing: '8dp'
size_hint_y: None
CheckBox:
id: chkBoxCustomerAddresses
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Addresses'
text_size: self.width, None
halign: 'left'
CheckBox:
id: chkBoxCustomerMessaging
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Electronic Messaging'
text_size: self.width, None
halign: 'left'
CheckBox:
id: chkBoxCustomerCommunications
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Communications'
text_size: self.width, None
CheckBox:
id: chkBoxCustomerTransactions
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Transactions'
text_size: self.width, None
Label:
text: ' '
text_size: self.width, None
Label:
text: '[u][size=16]Supplier-related items[/size][/u]'
markup: 'True'
text_size: self.width, None
size: self.texture_size
halign: 'left'
GridLayout:
cols: 4
spacing: '8dp'
size_hint_y: None
CheckBox:
id: chkBoxSupplierAddresses
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Addresses'
text_size: self.width, None
halign: 'left'
CheckBox:
id: chkBoxSupplierMessaging
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Electronic Messaging'
text_size: self.width, None
halign: 'left'
CheckBox:
id: chkBoxSupplierCommunications
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Communications'
text_size: self.width, None
CheckBox:
id: chkBoxSupplierTransactions
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Transactions'
text_size: self.width, None
Label:
text: ' '
text_size: self.width, None
Label:
text: '[u][size=16]Employee-related items[/size][/u]'
markup: 'True'
text_size: self.width, None
size: self.texture_size
halign: 'left'
GridLayout:
cols: 4
spacing: '8dp'
size_hint_y: None
CheckBox:
id: chkBoxEmployeeAddresses
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Addresses'
text_size: self.width, None
halign: 'left'
CheckBox:
id: chkBoxEmployeeMessaging
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Electronic Messaging'
text_size: self.width, None
halign: 'left'
CheckBox:
id: chkBoxEmployeeCommunications
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Communications'
text_size: self.width, None
CheckBox:
id: chkBoxEmployeeTransactions
size_hint_y: None
size_hint_x: None
height: '14dp'
width: '30dp'
group: 'importTypes'
Label:
text: 'Transactions'
text_size: self.width, None
Label:
text: ' '
text_size: self.width, None
BoxLayout:
orientation: "horizontal"
size_hint_y: None
height: 30
Button:
text: "Cancel"
on_release: root.cancel()
Button:
text: "Load"
on_release: app.ImportFile(filechooser.path, filechooser.selection)
</code></pre>
|
<python><kivy-language><grid-layout>
|
2024-10-12 20:38:05
| 1
| 424
|
Todd Hoatson
|
79,081,885
| 1,492,229
|
Unable to install Torch using pip
|
<p>I am using Windows.</p>
<p>I installed Python 3.13</p>
<p>I am trying to <code>pip install torch</code></p>
<p>However, when attempting to do so, I encounter this error.</p>
<pre><code>F:\Kit>pip install torch
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
</code></pre>
<p>What is the issue and how can it be resolved?</p>
|
<python><pytorch><pip>
|
2024-10-12 20:37:14
| 2
| 8,150
|
asmgx
|
79,081,866
| 2,287,458
|
Polars Pivot treats null values as 0 when summing
|
<p>I have this code:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
pl.DataFrame({
'label': ['AA', 'CC', 'BB', 'AA', 'CC'],
'account': ['EU', 'US', 'US', 'EU', 'EU'],
'qty': [1.5, 43.2, None, None, 18.9]})\
.pivot('account', index='label', aggregate_function='sum')
</code></pre>
<p>which gives</p>
<pre><code>shape: (3, 3)
βββββββββ¬βββββββ¬βββββββ
β label β EU β US β
β --- β --- β --- β
β str β f64 β f64 β
βββββββββͺβββββββͺβββββββ‘
β AA β 1.5 β null β
β CC β 18.9 β 43.2 β
β BB β null β 0.0 β
βββββββββ΄βββββββ΄βββββββ
</code></pre>
<p>Now, when there are any <code>null</code> values in the original data, I want the pivot table to show <code>null</code> in the respective cell. However, AA-EU shows 1.5 (but should be null), and BB-US shows 0.0 (but should also be null).</p>
<p>I tried using</p>
<pre class="lang-py prettyprint-override"><code>aggregate_function=lambda col: pl.when(col.has_nulls())\
.then(pl.lit(None, dtype=pl.Float64))\
.otherwise(pl.sum(col))
</code></pre>
<p>but it errors out with <code>AttributeError: 'function' object has no attribute '_pyexpr'</code>.</p>
<p>How can I fix this?</p>
|
<python><dataframe><null><pivot-table><python-polars>
|
2024-10-12 20:29:48
| 1
| 3,591
|
Phil-ZXX
|
79,081,859
| 7,886,968
|
How do I ensure that my Python script and its child processes handle Ctrl-C properly?
|
<p>System:</p>
<ol>
<li><p>A Raspberry Pi-4 running Buster using Python 3.7 with an Adafruit 128x32 2.23" OLED Bonnet connected to the GPIO pins. Ref: <a href="https://learn.adafruit.com/adafruit-2-23-monochrome-oled-bonnet" rel="nofollow noreferrer">https://learn.adafruit.com/adafruit-2-23-monochrome-oled-bonnet</a></p>
</li>
<li><p>The original version of the script I am using can be found at: <a href="https://learn.adafruit.com/adafruit-2-23-monochrome-oled-bonnet/usage#verify-i2c-device-3064554" rel="nofollow noreferrer">https://learn.adafruit.com/adafruit-2-23-monochrome-oled-bonnet/usage#verify-i2c-device-3064554</a></p>
</li>
</ol>
<p>Issue:<br>
When running a Python program <em><strong>via a terminal window</strong></em>, CTL-C doesn't always produce the orderly shutdown I want to happen. When run <em><strong>inside of Thonny</strong></em>, it appears to always work.</p>
<hr>
<p>Note the following Python script which was lifted <em>verbatim</em> from Adafruit's web site to test their 128x32 OLED Raspberry Pi "Bonnet".</p>
<p>I modified it by adding some code to capture and handle signal interrupts such that it would clear and close the display when the program ends.</p>
<p>Viz.:</p>
<pre><code>#!/usr/bin/python3.7
# SPDX-FileCopyrightText: <text> 2020 Tony DiCola, James DeVito,
# and 2020 Melissa LeBlanc-Williams, for Adafruit Industries </text>
# SPDX-License-Identifier: MIT
# This example is for use on (Linux) computers that are using CPython with
# Adafruit Blinka to support CircuitPython libraries. CircuitPython does
# not support PIL/pillow (python imaging library)!
# Modified 2024/10/12 by Jim Harris to allow for signal capture
# and a graceful shutdown
import os
import signal
import sys
from threading import Condition, Thread, Event
import time
import subprocess
from board import SCL, SDA, D4
import busio
import digitalio
from PIL import Image, ImageDraw, ImageFont
import adafruit_ssd1305
# check if it's ran with Python3
assert sys.version_info[0:1] == (3,)
# for triggering the shutdown procedure when a signal is detected
keyboard_trigger = Event()
def signal_handler(signal, frame):
print('\nSignal detected. Stopping threads.')
keyboard_trigger.set()
# registering both types of signals
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
# Get PID of this running script
pid = os.getpid()
# Define the Reset Pin
oled_reset = digitalio.DigitalInOut(D4)
# Create the I2C interface.
i2c = busio.I2C(SCL, SDA)
# Create the SSD1305 OLED class.
# The first two parameters are the pixel width and pixel height. Change these
# to the right size for your display!
disp = adafruit_ssd1305.SSD1305_I2C(128, 32, i2c, reset=oled_reset)
# Clear display.
disp.fill(0)
disp.show()
# Create blank image for drawing.
# Make sure to create image with mode '1' for 1-bit color.
width = disp.width
height = disp.height
image = Image.new("1", (width, height))
# Get drawing object to draw on image.
draw = ImageDraw.Draw(image)
# Draw a black filled box to clear the image.
draw.rectangle((0, 0, width, height), outline=0, fill=0)
# Draw some shapes.
# First define some constants to allow easy resizing of shapes.
padding = -2
top = padding
bottom = height - padding
# Move left to right keeping track of the current x position for drawing shapes.
x = 0
# Load default font.
#font = ImageFont.load_default()
# Alternatively load a TTF font. Make sure the .ttf font file is in the
# same directory as the python script!
# Some other nice fonts to try: http://www.dafont.com/bitmap.php
font = ImageFont.truetype('/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf', 9)
print("Started system statistics printout as PID", pid)
# Main drawing routine
#while True:
while not keyboard_trigger.is_set():
# Draw a black filled box to clear the image.
draw.rectangle((0, 0, width, height), outline=0, fill=0)
# Shell scripts for system monitoring from here:
# https://unix.stackexchange.com/questions/119126/command-to-display-memory-usage-disk-usage-and-cpu-load
cmd = "hostname -I | cut -d' ' -f1"
IP = subprocess.check_output(cmd, shell=True).decode("utf-8")
cmd = "top -bn1 | grep load | awk '{printf \"CPU Load: %.2f\", $(NF-2)}'"
CPU = subprocess.check_output(cmd, shell=True).decode("utf-8")
cmd = "free -m | awk 'NR==2{printf \"Mem: %s/%s MB %.2f%%\", $3,$2,$3*100/$2 }'"
MemUsage = subprocess.check_output(cmd, shell=True).decode("utf-8")
cmd = 'df -h | awk \'$NF=="/"{printf "Disk: %d/%d GB %s", $3,$2,$5}\''
Disk = subprocess.check_output(cmd, shell=True).decode("utf-8")
# Write four lines of text.
draw.text((x, top + 0), "IP: " + IP, font=font, fill=255)
draw.text((x, top + 8), CPU, font=font, fill=255)
draw.text((x, top + 16), MemUsage, font=font, fill=255)
draw.text((x, top + 25), Disk, font=font, fill=255)
# Display image.
disp.image(image)
disp.show()
time.sleep(0.1)
# until some keyboard event is detected
print("\nA keyboard event was detected.")
print("Clearing and shutting down display")
# Clear display.
disp.fill(0)
disp.show()
sys.exit(0)
</code></pre>
<p>Case 1: Run from within Thonny<br>
Running from within Thonny <em><strong>always</strong></em> produces the following output within Thonny's terminal (REPL) window.</p>
<pre><code>Python 3.7.3 (/usr/bin/python3)
>>> %cd /home/pi/startup_scripts
>>> %Run Stats.py
Started system statistics printout as PID 24225
Signal detected. Stopping threads.
A keyboard event was detected.
Clearing and shutting down display
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Python 3.7.3 (/usr/bin/python3)
>>>
</code></pre>
<p>Each and every time, the script responds to a CTL-C with a clean shutdown and clearing the display, which is the desired and expected behavior.</p>
<p>Case 2: Running within a terminal window.<br>
When run within a terminal, I get the following:</p>
<p>Try 1: Sending an explicit kill signal from another terminal window:</p>
<pre><code>pi@Adafruit128x32:~/startup_scripts $ ./Stats.py
Started system statistics printout as PID 15893
Signal detected. Stopping threads.
A keyboard event was detected.
Clearing and shutting down display
pi@Adafruit128x32:~/startup_scripts $
</code></pre>
<p>This has happened every single time I've "Killed" the process via another terminal.</p>
<p>Try 2: Killing within the running terminal window using CTL-C</p>
<p>Attempt 1:</p>
<pre><code>pi@Adafruit128x32:~/startup_scripts $ ./Stats.py
Started system statistics printout as PID 16921
^C
Signal detected. Stopping threads.
A keyboard event was detected.
Clearing and shutting down display
pi@Adafruit128x32:~/startup_scripts $
</code></pre>
<p>Attempt 2:</p>
<pre><code>pi@Adafruit128x32:~/startup_scripts $ ./Stats.py
Started system statistics printout as PID 17587
^C
Signal detected. Stopping threads.
Traceback (most recent call last):
File "./Stats.py", line 104, in <module>
Disk = subprocess.check_output(cmd, shell=True).decode("utf-8")
File "/usr/lib/python3.7/subprocess.py", line 395, in check_output
**kwargs).stdout
File "/usr/lib/python3.7/subprocess.py", line 487, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'df -h | awk '$NF=="/"{printf "Disk: %d/%d GB %s", $3,$2,$5}'' died with <Signals.SIGINT: 2>.
pi@Adafruit128x32:~/startup_scripts $
</code></pre>
<p>Here I got an unhandled exception thrown, and this is the most common outcome. (The try where it worked was actually a surprise to me.)</p>
<p>What I suspect is happening is that there are other bits of library code running that catch the CTL-C keyboard signal, and the "bits of library code" aren't passing the exception "up the ladder" to the process where I am handling the error.</p>
<p>Question:<br></p>
<ol>
<li><p>Why is this happening?</p>
</li>
<li><p>Is there a way to ensure that my code captures the keyboard signal so that it can shutdown cleanly without the messy unhandled exception that usually happens?</p>
</li>
</ol>
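<p>Regarding question 2, the traceback points at the likely cause: the terminal delivers Ctrl-C's SIGINT to the entire foreground process group, so the <code>df</code>/<code>awk</code> children die mid-<code>check_output</code> and raise <code>CalledProcessError</code> before your flag is ever consulted. One hedged mitigation (a POSIX-only sketch) is to start each child in its own session, so only the Python process receives the terminal's SIGINT — and to wrap the calls in <code>try/except</code> in case a child is already dying when the trigger fires:</p>

```python
import subprocess

def run(cmd):
    # start_new_session=True puts the child in its own session and process
    # group, so a terminal Ctrl-C (SIGINT to the foreground process group)
    # no longer reaches it -- only this script handles the signal.
    return subprocess.check_output(cmd, shell=True,
                                   start_new_session=True).decode("utf-8")

print(run("echo hello").strip())
```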
|
<python><linux><signals><raspberry-pi4>
|
2024-10-12 20:26:12
| 2
| 643
|
Jim JR Harris
|
79,081,361
| 174,365
|
Echo y from within python
|
<p>I'm trying to import and use a module originally made as a standalone script, preferably without altering it, to keep tool commonality with its authors.
One function I'm using prompts for a y/n response, to which I always want to answer "y":</p>
<pre><code>input("\nContinue? (y/n): ").lower()
</code></pre>
<p>What's the best way to automate this answer?</p>
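<p>One non-invasive option is to temporarily replace <code>builtins.input</code> around the call into the unmodified module (shown here with a stand-in function, since the real module isn't named):</p>

```python
import builtins
from unittest import mock

def legacy_prompt():                      # stand-in for the imported function
    return input("\nContinue? (y/n): ").lower()

with mock.patch.object(builtins, "input", return_value="y"):
    answer = legacy_prompt()              # every input() inside answers "y"
print(answer)   # y
```

<p>Alternatively, if the tool is run as a subprocess rather than imported, piping <code>yes</code> into its stdin achieves the same effect.</p>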
|
<python>
|
2024-10-12 15:54:52
| 2
| 2,613
|
Emilio M Bumachar
|
79,081,039
| 561,243
|
Failing to serialize a dataclass generated with make_dataclass with pickle
|
<p>In my code, I am generating classes run-time using the make_dataclass function.</p>
<p>The problem is that such dataclasses cannot be serialized with pickle, as shown in the code snippet below.</p>
<p>It is important to know that for my application those dataclass instances <strong>must</strong> be picklable, because they need to be transferred to a <code>multiprocessing</code> pool of executors.</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, make_dataclass, asdict
import pickle
#
# Standard data class generated using the decorator approach
@dataclass
class StdDataClass:
num: int = 0
a = StdDataClass(12)
ser_a = pickle.dumps(a)
des_a = pickle.loads(ser_a)
# serialization and deserialization with pickle is working
assert a.num == des_a.num
# Run time created class using the make_dataclass approach.
# In the real case, the name, the type and the default value of each field is
# not available before the code is executed. The structure of this dataclass is
# known only at run-time.
fields = [('num', int, 0)]
B = make_dataclass('B', fields)
b = B(2)
try:
# An attempt to serialize the object is triggering an exception
# Can't pickle <class 'types.B'>: attribute lookup B on types failed
ser_b = pickle.dumps(b)
des_b = pickle.loads(ser_b)
assert b.num == des_b.num
except pickle.PickleError as e :
print(e)
</code></pre>
<p>Serializing a class defined with the make_dataclass method is triggering an exception. I think that this is actually to be expected, because in the <a href="https://docs.python.org/3/library/pickle.html#what-can-be-pickled-and-unpickled" rel="nofollow noreferrer">documentation</a> it is written:</p>
<blockquote>
<p>The following types can be pickled:</p>
<ul>
<li>built-in constants (None, True, False, Ellipsis, and NotImplemented);</li>
<li>integers, floating-point numbers, complex numbers;</li>
<li>strings, bytes, bytearrays;</li>
<li>tuples, lists, sets, and dictionaries containing only picklable objects;</li>
<li>functions (built-in and user-defined) accessible from the top level of a module (using def, not lambda);</li>
<li><strong>classes accessible from the top level of a module;</strong></li>
<li>instances of such classes whose the result of calling <code>__getstate__()</code> is picklable (see section Pickling Class Instances for details).</li>
</ul>
</blockquote>
<p>I think the problem is that there is no top-level definition of a class <code>B</code> in the module (bold line), and that's why it fails, but I am not sure about this.</p>
<p>The workaround I have found is to transform the run time created dataclass into a dictionary, serialize the dictionary and when needed deserialize the dictionary to recreate the dataclass.</p>
<pre class="lang-py prettyprint-override"><code># A base data class with no data members but with a class method 'constructor'
# and a convenience method to convert the class into a dictionary
@dataclass
class BaseDataClass:
@classmethod
def from_dict(cls, d: dict):
new_instance = cls()
for key in d:
setattr(new_instance, key, d[key])
return new_instance
def to_dict(self):
return asdict(self)
# Another baseclass defined with the make_dataclass approach but
# using BaseDataClass as base.
C = make_dataclass('C', fields, bases=(BaseDataClass,))
c = C(13)
# WORKAROUND
#
# Instead of serializing the class object, I am pickling the
# corresponding dictionary
ser_c = pickle.dumps(c.to_dict())
# Deserialize the dictionary and use it to recreate the dataclass
des_c = C.from_dict(pickle.loads(ser_c))
assert c.num == des_c.num
</code></pre>
<p>Even though the workaround is actually working, I was wondering if it is not possible to teach pickle to do the same for whatever dataclass derived from BaseDataClass.</p>
<p>I tried implementing the <code>__reduce__</code> method, and both <code>__setstate__</code> and <code>__getstate__</code>, but with no luck.</p>
<p>I thought about subclassing <code>Pickler</code> to have a custom reducer, but that is the recommended approach when you cannot modify the class of the object to be serialized (e.g. one generated by an external library); moreover, I do not know how to tell the <code>multiprocessing</code> module to use my pickle subclass instead of the base one.</p>
<p>Do you have any idea how I can solve this issue?</p>
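<p>For what it's worth, a hedged alternative to the dict round-trip: pickle fails only because it cannot look the class up by module and name, so registering the generated class in a real module makes plain pickling work (the helper name below is illustrative). The same registration must also run in the worker processes — e.g. at module import time — so the lookup succeeds on their side too:</p>

```python
import pickle
import sys
from dataclasses import make_dataclass

def make_picklable_dataclass(name, fields, module=__name__):
    cls = make_dataclass(name, fields)
    cls.__module__ = module                    # where pickle will look...
    setattr(sys.modules[module], name, cls)    # ...and what it will find there
    return cls

B = make_picklable_dataclass('B', [('num', int, 0)])
b2 = pickle.loads(pickle.dumps(B(2)))          # no PickleError anymore
print(b2.num)   # 2
```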
|
<python><multiprocessing><pickle><python-dataclasses>
|
2024-10-12 12:54:30
| 1
| 367
|
toto
|
79,080,790
| 10,200,497
|
How to get the largest streak of negative numbers by sum?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [-3, -1, -2, -5, 10, -3, -13, -3, -2, 1, -200, -100],
}
)
</code></pre>
<p>Expected output:</p>
<pre><code> a
10 -200
11 -100
</code></pre>
<p>Logic:</p>
<p>I want to return the largest streak of negative numbers in terms of sum of them. In other words I want AT LEAST two consecutive negative rows and then I want the sum of it to be the largest in terms of absolute value:</p>
<p>My attempt based on this <a href="https://stackoverflow.com/a/78824669/10200497">answer</a>:</p>
<pre><code>s = np.sign(df['a'])
g = s.ne(s.shift()).cumsum()
out = df[g.eq(df[s.eq(-1)].groupby(g).sum().idxmax())]
</code></pre>
<p>It gives me an empty DataFrame.</p>
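<p>For reference, a sketch of the intended logic: group on sign changes, keep only negative runs of length at least two, and take the run with the minimum (most negative) sum — note <code>idxmin</code>, since <code>idxmax</code> would pick the least negative total:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [-3, -1, -2, -5, 10, -3, -13, -3, -2, 1, -200, -100]})

neg = df['a'].lt(0)                                # True on negative rows
g = neg.ne(neg.shift()).cumsum()                   # run id, bumps on sign change
runlen = df.groupby(g)['a'].transform('size')      # length of each run
mask = neg & runlen.ge(2)                          # negative runs, length >= 2
sums = df.loc[mask, 'a'].groupby(g[mask]).sum()    # total of each qualifying run
out = df[g.eq(sums.idxmin())]                      # run with most negative total
print(out)
```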
|
<python><pandas><dataframe>
|
2024-10-12 10:19:35
| 1
| 2,679
|
AmirX
|
79,080,448
| 22,466,650
|
How to truly compare two dataframes based on a key column?
|
<p>My inputs are two dataframes:</p>
<pre><code>import pandas as pd
df_having = pd.DataFrame({'ID': ['ID_01', 'ID_01', 'ID_01', 'ID_01', 'ID_02', 'ID_03', 'ID_03', 'ID_05', 'ID_06', 'ID_06'], 'NAME': ['A', 'A', 'A', 'A', 'E', 'B', 'B', 'E', 'A', 'A'], 'TYPE': ['A', 'A', 'B', 'A', 'C', 'A', 'B', 'F', 'A', 'A'], 'CATEGORY': [1, 1, 3, 3, 3, 1, 2, 1, 1, 1]})
df_tohave = pd.DataFrame({'ID': ['ID_01', 'ID_01', 'ID_02', 'ID_02', 'ID_03', 'ID_03', 'ID_03', 'ID_04', 'ID_05'], 'NAME': ['A', 'A', 'A', 'A', 'B', 'B', 'E', 'A', 'D'], 'TYPE': ['A', 'B', 'C', 'C', 'A', 'A', 'B', 'D', 'G'], 'CATEGORY': [1, 2, 1, 2, 1, 2, 3, 3, 2]})
</code></pre>
<p>I'm trying to do comparison based on the column 'ID' so that I end up with third dataframe (at the very right of the image below) :</p>
<p><a href="https://i.sstatic.net/6HXaUGNB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HXaUGNB.png" alt="enter image description here" /></a></p>
<p><code>df_having</code> reflects the actual situation of the system and <code>df_tohave</code> is what this one should have.</p>
<p>The <code>FLAG</code> can have the following values:</p>
<ul>
<li>NO_ACTION: The 'ID' has the correct 'NAME', 'TYPE' and 'CATEGORY' in <code>df_having</code></li>
<li>TO_CREATE: the 'ID' is missing in <code>df_having</code> and exists in <code>df_tohave</code></li>
<li>TO_UPDATE: the 'ID' exists in <code>df_having</code> but at least one of the fields 'NAME', 'TYPE' and 'CATEGORY' is incorrect</li>
<li>TO_DELETE: the 'ID' exists in <code>df_having</code> and not in <code>df_tohave</code></li>
</ul>
<p>The <code>FIELDS</code> are simply the fields concerned by the <code>FLAG</code>.</p>
<p>When building the comparison dataframe, if the flag is 'TO_DELETE' we grab the row from <code>df_having</code>, otherwise we get it from <code>df_tohave</code>.</p>
<p>I made a try but the resulting dataframe has 34 rows while it should be 13 and the majority of flags are wrong.</p>
<pre><code>df_merged = pd.merge(df_having, df_tohave, on=['ID', 'NAME', 'TYPE', 'CATEGORY'], how='outer', indicator=True)
df_merged['FLAG'] = df_merged['_merge'].map({'both': 'NO_ACTION', 'left_only': 'TO_DELETE', 'right_only': 'TO_CREATE'})
df_merged['FIELDS'] = df_merged['FLAG'].map({'NO_ACTION': 'NONE', 'TO_DELETE': 'ALL', 'TO_CREATE': 'ALL'})
import functools
cols = ['NAME', 'TYPE', 'CATEGORY']
df_diff = functools.reduce(lambda left, right: pd.merge(left, right, on='ID', how='outer', suffixes=('_having', '_tohave')),
[df_having[['ID'] + cols], df_tohave[['ID'] + cols]])
for col in cols:
df_diff[f'{col}_diff'] = df_diff[f'{col}_having'] != df_diff[f'{col}_tohave']
df_diff['FIELDS'] = df_diff[[f'{col}_diff' for col in cols]].dot(pd.Index(cols) + ' & ').str.strip(' & ')
df_diff = df_diff[df_diff['FIELDS'] != ''].assign(FLAG='TO_UPDATE')
df_comparison = pd.concat([df_merged, df_diff[['ID', 'NAME_tohave', 'TYPE_tohave', 'CATEGORY_tohave', 'FLAG', 'FIELDS']].rename(columns={
'NAME_tohave': 'NAME', 'TYPE_tohave': 'TYPE', 'CATEGORY_tohave': 'CATEGORY'})]).drop(columns='_merge').reset_index(drop=True)
</code></pre>
<p>Do you guys know what I'm doing wrong?</p>
<p>Can anyone provide a solution?</p>
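A sketch of one way to compute per-ID flags; note it collapses duplicates first (an assumption — the expected output in the screenshot keeps duplicate rows, so adapt the granularity as needed):

```python
import pandas as pd

df_having = pd.DataFrame({'ID': ['ID_01', 'ID_01', 'ID_01', 'ID_01', 'ID_02', 'ID_03', 'ID_03', 'ID_05', 'ID_06', 'ID_06'], 'NAME': ['A', 'A', 'A', 'A', 'E', 'B', 'B', 'E', 'A', 'A'], 'TYPE': ['A', 'A', 'B', 'A', 'C', 'A', 'B', 'F', 'A', 'A'], 'CATEGORY': [1, 1, 3, 3, 3, 1, 2, 1, 1, 1]})
df_tohave = pd.DataFrame({'ID': ['ID_01', 'ID_01', 'ID_02', 'ID_02', 'ID_03', 'ID_03', 'ID_03', 'ID_04', 'ID_05'], 'NAME': ['A', 'A', 'A', 'A', 'B', 'B', 'E', 'A', 'D'], 'TYPE': ['A', 'B', 'C', 'C', 'A', 'A', 'B', 'D', 'G'], 'CATEGORY': [1, 2, 1, 2, 1, 2, 3, 3, 2]})

cols = ['NAME', 'TYPE', 'CATEGORY']
h = df_having.drop_duplicates('ID').set_index('ID')
t = df_tohave.drop_duplicates('ID').set_index('ID')
merged = h.join(t, how='outer', lsuffix='_h', rsuffix='_t')

def classify(row):
    if pd.isna(row['NAME_h']):           # missing on the 'having' side
        return 'TO_CREATE', 'ALL'
    if pd.isna(row['NAME_t']):           # missing on the 'tohave' side
        return 'TO_DELETE', 'ALL'
    diff = [c for c in cols if row[f'{c}_h'] != row[f'{c}_t']]
    return ('TO_UPDATE', ' & '.join(diff)) if diff else ('NO_ACTION', 'NONE')

res = merged.apply(classify, axis=1, result_type='expand')
merged['FLAG'], merged['FIELDS'] = res[0], res[1]

# Take values from df_tohave, except for TO_DELETE rows which only exist in df_having.
for c in cols:
    merged[c] = merged[f'{c}_t'].where(merged['FLAG'] != 'TO_DELETE', merged[f'{c}_h'])
out = merged.reset_index()[['ID'] + cols + ['FLAG', 'FIELDS']]
```

The key difference from the attempt in the question is that the merge happens once, keyed on 'ID' alone, so each ID yields exactly one classified row instead of a cross-product of partial matches.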
|
<python><pandas><dataframe>
|
2024-10-12 07:50:55
| 0
| 1,085
|
VERBOSE
|
79,080,145
| 2,816,062
|
Convert Django JSON field to Text in subquery
|
<p>I have two models <code>Assignment</code> and <code>Application</code>. <code>Assignment</code> has <code>meta</code> field which has <code>uuid</code> data that is used to match <code>uuid</code> field on <code>Application</code>. My goal is to join assignee information when I query applications.</p>
<p>I have some code like this using subquery.</p>
<pre class="lang-py prettyprint-override"><code>assignment_subquery = Assignment.objects.filter(
meta__uuid=Cast(OuterRef('uuid'), TextField())
).values('assignee__username')[:1]
# Query applications and annotate them with the assignee's username
applications_with_assignee = Application.objects.annotate(
assignee_username=Subquery(assignment_subquery)
)
</code></pre>
<p>It produce SQL like this</p>
<pre class="lang-sql prettyprint-override"><code>SELECT "application"."id",
"application"."uuid",
(SELECT U1."username"
FROM "assignment" U0
INNER JOIN "auth_user" U1 ON (U0."assignee_id" = U1."id")
WHERE (U0."meta" -> 'uuid') = ("application"."uuid")::text
LIMIT 1) AS "assignee_username"
FROM "application";
</code></pre>
<p>It is almost correct except <code>U0."meta" -> 'uuid'</code> instead of <code>U0."meta" ->> 'uuid'</code>, which I believe extracts the value associated with the specified key as text. I can't figure out how to get it to generate the right query.</p>
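If the goal is the <code>->></code> operator, Django's <code>KeyTextTransform</code> (in <code>django.db.models.fields.json</code>) extracts the key as text directly. A sketch against the models described above (untested here, since it needs the full project; Django 4.2+ also offers the shorter <code>KT</code> expression):

```python
from django.db.models.fields.json import KeyTextTransform

assignment_subquery = Assignment.objects.annotate(
    uuid_text=KeyTextTransform('uuid', 'meta'),  # renders as "meta" ->> 'uuid'
).filter(
    uuid_text=Cast(OuterRef('uuid'), TextField()),
).values('assignee__username')[:1]
```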
|
<python><django>
|
2024-10-12 03:46:27
| 0
| 2,868
|
Andrew Zheng
|
79,079,917
| 5,508,532
|
Filter polars DataFrame where values starts with any string in a list
|
<p>I have a DataFrame with a few million rows, e.g.:</p>
<pre><code>df = pl.DataFrame({
'col1': ['12345', '12', '54467899', '5433523353','0024355']
})
</code></pre>
<p>And a list of a couple hundred prefixes such as:</p>
<pre><code>prefixes = ['123', '544', '55443345']
</code></pre>
<p>Is there an efficient way to filter the large DataFrame to retain only the values where <code>col1</code> starts with any of the prefixes in the <code>prefixes</code> list?
I.e. the original frame:</p>
<pre><code>ββββββββββββββ
β col1 β
β --- β
β str β
ββββββββββββββ‘
β 12345 β
β 12 β
β 54467899 β
β 5433523353 β
β 0024355 β
ββββββββββββββ
</code></pre>
<p>is filtered down to</p>
<pre><code>ββββββββββββββ
β col1 β
β --- β
β str β
ββββββββββββββ‘
β 12345 β
β 54467899 β
ββββββββββββββ
</code></pre>
<p>Edit: I have tried (with the real prefixes list, about 600 items)</p>
<pre><code> prefix_regex = '|'.join(f"^{prefix}" for prefix in prefixes)
ldf = pl.scan_parquet('./data.parquet')
ldf = ldf.filter(pl.col("col1").str.contains(prefix_regex)).collect()
</code></pre>
<p>It's still sitting here, not finished, 5 minutes later. Without the filter it collects in a few seconds.</p>
|
<python><regex><dataframe><python-polars>
|
2024-10-11 23:19:59
| 3
| 1,938
|
binary01
|
79,079,634
| 2,402,098
|
Exclude objects with a related date range
|
<p>Using these (simplified of course) models:</p>
<pre><code>class Person(models.Model):
name = models.CharField()
class InactivePeriod(models.Model)
person = models.ForeignKeyField(Person)
start_date = models.DateField()
end_date = models.DateField()
class Template(models.Model):
day = models.CharField() #choices = monday, tuesday, etc...
person = models.ForeignKeyField(Person)
class TheList(models.Model):
person = models.ForeignKeyField(Person)
date = models.DateField(default=datetime.today)
attended = models.BooleanField(default=False)
</code></pre>
<p>I have a workflow where people call in to sign up for a daily event (there will be no online registration). Originally my users were going to enter each attendee manually each day as they call by creating a <code>TheList</code> record, and then check off if they attended. However, as each event has regulars that typically come on the same days. It was decided that I would provide a <code>Template</code> table, where the user could designate that 'Mike' is a regular on 'Monday' and then I would use a cronjob to automatically populate the <code>TheList</code> table with records for people who were regulars for that day. Then, my user interface is going to automatically filter the <code>TheList</code> list view down to the current day's attendees, and then my users can make any changes as necessary. It just reduces the amount of input they need to do.</p>
<p>The wrinkle is that my users requested that they be able to designate someone as inactive for a period of time. If a <code>Person</code> has a related <code>InactivePeriod</code> record where 'today' falls between that record's <code>start_date</code> and <code>end_date</code>, my cronjob will omit that <code>Person</code>. I decided against a simple 'active' <code>BooleanField</code> as I think it will be easier for them to not have to remember if someone is inactive or not, and to not have to worry about communicating that they marked someone inactive to another user.</p>
<p>My script is essentially:</p>
<pre><code>def handle(self, *args, **options):
today = datetime.today()
day = str(today.weekday()) #we store this as a string as it is a charfield
is_inactive_today = (Q(person__inactiveperiod__start_date__lte=today) & \
Q(person__inactiveperiod__end_date__gte=today))
attendance_list = []
for t in models.Template.objects.filter(day=day).exclude(is_inactive_today):
attendance_list.append(models.TheList(person=t.person, date=today))
models.TheList.objects.bulk_create(attendance_list)
</code></pre>
<p>assuming 'today' is 10/11/2024 (october 11 2024), if I add two <code>InactivePeriod</code>'s for a <code>Person</code> where the date ranges are (10/9/2024, 10/10/2024) and (10/12/2024, 10/13/2024), the query incorrectly does not return that <code>Person</code>'s related <code>Template</code> record despite the fact that both of their <code>InactivePeriod</code>'s do not include today, and as such I cannot create an appropriate <code>TheList</code> record. If I remove either of the records, leaving just one <code>InactivePeriod</code> record, the query does return the related <code>Template</code> record successfully.</p>
<p>Is this possible to do using the ORM or do I have to write some logic into my for loop?</p>
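A likely explanation, per Django's documentation on queries spanning multi-valued relationships: the conditions inside a single <code>exclude()</code> are not guaranteed to apply to the same related row, so with two <code>InactivePeriod</code> records one row can satisfy <code>start_date__lte</code> and a different row <code>end_date__gte</code>. A sketch of the usual workaround with a correlated subquery (names follow the models above; untested without the full project):

```python
from django.db.models import Exists, OuterRef

def handle(self, *args, **options):
    today = datetime.today()
    day = str(today.weekday())  # stored as a string since day is a CharField
    # One correlated subquery per Template row: both date conditions are
    # evaluated against the SAME InactivePeriod record.
    inactive_today = models.InactivePeriod.objects.filter(
        person=OuterRef('person'),
        start_date__lte=today,
        end_date__gte=today,
    )
    templates = models.Template.objects.filter(day=day).filter(~Exists(inactive_today))
    models.TheList.objects.bulk_create(
        [models.TheList(person=t.person, date=today) for t in templates]
    )
```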
|
<python><django><postgresql><cron>
|
2024-10-11 20:21:27
| 1
| 342
|
DrS
|
79,079,460
| 1,485,877
|
How to type annotate the splat/spread function
|
<p>Consider the common <code>splat</code> function, also known as <code>spread</code>, which takes a function of <code>n</code> arguments and wraps it in a function of one <code>n</code>-element tuple argument:</p>
<pre class="lang-py prettyprint-override"><code>def splat(f: Callable[???, A]) -> Callable[[???], A]:
def splatted(args: ???) -> A:
return f(*args)
return splatted
</code></pre>
<p>How do I annotate the type of parameter <code>f</code> and the return value? Is it even expressible in Python's type system?</p>
|
<python><python-typing>
|
2024-10-11 19:03:10
| 1
| 9,852
|
drhagen
|
79,079,448
| 9,707,202
|
In python xarray, how do I create and subset lazy variables without loading the whole dataarray?
|
<p>I am trying to create a python function that opens a remote dataset (on an OPeNDAP server) using xarray and automatically creates new variables lazily. A use case would be to calculate magnitude and direction when u and v components are available, e.g.:</p>
<pre><code>import xarray as xr
import dask
import dask.array as da
@dask.delayed
def uv2mag(u, v):
return (u**2 + v**2)**0.5
@dask.delayed
def uv2dir(u, v):
return np.rad2deg(np.arctan2(u, v))
def open_dataset(*args, **kwargs) -> xr.Dataset:
uv = kwargs.pop("uv", None)
ds = xr.open_dataset(*args, **kwargs)
if uv:
uvar, vvar = uv
ds["magnitude"] = (
ds[uvar].dims,
da.from_delayed(uv2mag(ds[uvar], ds[vvar]),
ds[uvar].shape,
dtype=ds[uvar].dtype),
{"long_name": "magnitude"},
)
ds["direction"] = (
ds[uvar].dims,
da.from_delayed(uv2dir(ds[uvar], ds[vvar]),
ds[uvar].shape,
dtype=ds[uvar].dtype),
{"long_name": "direction"},
)
return ds
url = "https://tds.hycom.org/thredds/dodsC/FMRC_ESPC-D-V02_uv3z/FMRC_ESPC-D-V02_uv3z_best.ncd"
uvar = "water_u"
vvar = "water_v"
ds = open_dataset(url, drop_variables="tau", uv=[uvar, vvar])
</code></pre>
<p>On first look everything seems to work just fine and the new magnitude and direction variables are created.</p>
<pre><code><xarray.Dataset>
Dimensions: (depth: 40, lat: 4251, lon: 4500, time: 121)
Coordinates:
* depth (depth) float64 0.0 2.0 4.0 6.0 ... 2.5e+03 3e+03 4e+03 5e+03
* lat (lat) float64 -80.0 -79.96 -79.92 -79.88 ... 89.92 89.96 90.0
* lon (lon) float64 0.0 0.07996 0.16 0.24 ... 359.7 359.8 359.8 359.9
* time (time) datetime64[ns] 2024-09-29T12:00:00 ... 2024-10-14T12:...
time_run (time) datetime64[ns] ...
Data variables:
time_offset (time) datetime64[ns] ...
water_u (time, depth, lat, lon) float32 ...
water_v (time, depth, lat, lon) float32 ...
magnitude (time, depth, lat, lon) float32 dask.array<chunksize=(121, 40, 4251, 4500), meta=np.ndarray>
direction (time, depth, lat, lon) float32 dask.array<chunksize=(121, 40, 4251, 4500), meta=np.ndarray>
Attributes: (12/22)
classification_level:
distribution_statement: Approved for public release; distribution unli...
downgrade_date: not applicable
classification_authority: not applicable
institution: Fleet Numerical Meteorology and Oceanography C...
source: HYCOM archive file, GLBz0.04
...
time_origin: 2024-10-05 12:00:00
_CoordSysBuilder: ucar.nc2.dataset.conv.CF1Convention
cdm_data_type: GRID
featureType: GRID
location: Proto fmrc:FMRC_ESPC-D-V02_uv3z
history: FMRC Best Dataset
</code></pre>
<p>The problem arises when I try to get a very small subset of the data. A "real" variable returns the data instantly, e.g.:</p>
<pre><code>>>> ds.isel(time=slice(0, 5), depth=0, lat=1000, lon=1000)["water_u"].values
array([0.27400002, 0.23600002, 0.12 , 0.108 , 0.24400002],
dtype=float32)
</code></pre>
<p>But when I try to get a subset from the lazy variable, an error is raised due to trying to load the whole 4D matrix in memory, not just the subset.</p>
<pre><code>>>> ds.isel(time=slice(0, 5), depth=0, lat=1000, lon=1000)["magnitude"].values
MemoryError: Unable to allocate 345. GiB for an array with shape (121, 40, 4251, 4500) and data type float32
</code></pre>
<p>Does anyone have a suggestion of how this can be fixed?</p>
<p>Thank you.</p>
<p>My expectation was that the subset performed by <code>isel</code> would also be applied to the lazy variables, and that loading into memory would be as fast as for the "real" variables.</p>
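One plausible fix is to skip <code>dask.delayed</code> entirely: ordinary xarray arithmetic on dask-backed variables already builds a lazy graph and, unlike a delayed wrapper around the whole array, propagates <code>isel</code> indexers down to the source. A self-contained sketch with a small synthetic dataset standing in for the OPeNDAP one (assumes dask is installed; for the remote case, open with <code>open_dataset(url, chunks={})</code> so the variables are dask-backed):

```python
import numpy as np
import xarray as xr

# Synthetic stand-in for the remote dataset; .chunk() makes it dask-backed.
rng = np.random.default_rng(0)
ds = xr.Dataset(
    {
        'water_u': (('time', 'lat'), rng.random((4, 5))),
        'water_v': (('time', 'lat'), rng.random((4, 5))),
    }
).chunk({'time': 2})

# Plain arithmetic stays lazy on dask-backed variables, and isel() subsets
# BEFORE anything is computed -- no dask.delayed wrapper needed.
ds['magnitude'] = (ds['water_u'] ** 2 + ds['water_v'] ** 2) ** 0.5
ds['direction'] = np.rad2deg(np.arctan2(ds['water_u'], ds['water_v']))

sub = ds['magnitude'].isel(time=0, lat=2).values  # computes only this slice
```

The `da.from_delayed` approach in the question instead produces one monolithic chunk, so any access has to materialize the full array first.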
|
<python><dask><python-xarray><opendap><thredds>
|
2024-10-11 18:56:26
| 1
| 436
|
Marcelo Andrioni
|
79,079,425
| 7,124,155
|
How to emulate Python nested or "sub" attributes
|
<p>How can I track a sub-attribute that applies to some attributes but not others? It seems such a thing does not exist, but please confirm.</p>
<p>I read some similar questions <a href="https://stackoverflow.com/questions/17914737/can-python-objects-have-nested-properties">here</a> and <a href="https://stackoverflow.com/questions/39310436/python-sub-attribute-in-a-class">here</a>, but one answer is quite old and the other is closed.</p>
<p>Consider the example:</p>
<pre><code>class FoodTest:
def __init__(self, food_group, food, variety=''):
self.food_group = food_group
self.food = food
self.variety = variety
def get_food(self):
print(f'I like {self.food}s, which are a {self.food_group}')
if self.food == 'apple':
print(f'The {self.food} variety is {self.variety}')
oran = FoodTest('fruit', 'orange')
oran.get_food()
app = FoodTest('fruit', 'apple', 'fuji')
app.get_food()
</code></pre>
<p>Output:</p>
<pre><code>I like oranges, which are a fruit
I like apples, which are a fruit
The apple variety is fuji
</code></pre>
<p>What I want is a variety only for apples (fuji, granny smith, etc.) but not for oranges. Of course I can access app.variety, but what I'm really looking for is "app.food.variety".</p>
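One common way to get <code>app.food.variety</code> is composition: make <code>food</code> an object rather than a string. A sketch reusing the example's names:

```python
class Food:
    """A food with an optional variety; None for foods without varieties."""
    def __init__(self, name, variety=None):
        self.name = name
        self.variety = variety


class FoodTest:
    def __init__(self, food_group, food):
        self.food_group = food_group
        self.food = food  # a Food object, not a bare string

    def get_food(self):
        print(f'I like {self.food.name}s, which are a {self.food_group}')
        if self.food.variety is not None:
            print(f'The {self.food.name} variety is {self.food.variety}')


oran = FoodTest('fruit', Food('orange'))
app = FoodTest('fruit', Food('apple', 'fuji'))
```

Now the "sub-attribute" exists only where it makes sense: oranges simply carry `variety=None`, and `get_food` no longer needs the hard-coded `if self.food == 'apple'` check.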
|
<python><python-3.x><class><attributes>
|
2024-10-11 18:47:34
| 1
| 1,329
|
Chuck
|
79,079,387
| 23,260,297
|
Applying styles to different columns in table
|
<p>I am getting an error when trying to apply a color to specific columns when creating a table with reportlab.</p>
<p>Here is my entire script for a reproducible example (you will need to put an output path on line 109):</p>
<pre><code>import pandas as pd
from datetime import datetime
from reportlab.platypus import Frame
from reportlab.lib.pagesizes import A4, landscape
from reportlab.platypus import PageTemplate
from reportlab.platypus import BaseDocTemplate
from reportlab.platypus import Image
from reportlab.lib.units import inch
from reportlab.platypus import Table, Paragraph
from reportlab.lib import colors
from reportlab.platypus import NextPageTemplate, PageBreak
from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle
def on_page(canvas, doc, pagesize=A4):
page_num = canvas.getPageNumber()
canvas.drawCentredString(pagesize[0]/2, 50, str(page_num))
now = datetime.now()
today = now.strftime("%B %d, %Y")
current_time = now.strftime("%I:%M %p")
canvas.drawString(50, pagesize[1] - 50, f"{today}")
canvas.drawString(50, pagesize[1] - 70, f"{current_time}")
def on_page_landscape(canvas, doc):
return on_page(canvas, doc, pagesize=landscape(A4))
def format_nums(num):
if num < 0:
x = '(${:.2f})'.format(abs(num))
elif num > 0:
x = '${:.2f}'.format(num)
else:
x = '${:.0f}'.format(num)
return x
def df2table(df, custom_style):
for col in df.columns:
if df[col].dtype == 'float64':
df[col] = df[col].apply(lambda x: format_nums(x))
#('BACKGROUND', (start_col, start_row), (end_col, end_row), color)
return Table(
[[Paragraph(col, custom_style) for col in df.columns]] + df.values.tolist(),
style = [
('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'), # Header font
('FONTSIZE', (0, 1), (-1, -1), 8), # Body font size
('TEXTCOLOR', (0, 0), (-1, 0), colors.white), # Header text color
('BACKGROUND', (0, 0), (-1, 0), colors.lightgrey), # Header background color
('BACKGROUND', (0, 0), (0, 0), colors.lightblue), # first Header background color
('BACKGROUND', (0, 1), (0, -1), colors.lightblue), # First column background color
('LINEBELOW', (0, 0), (-1, 0), 1, colors.black), # Line below header
('INNERGRID', (0, 0), (-1, -1), 0.25, colors.black), # Inner grid lines
('BOX', (0, 0), (-1, -1), 1, colors.black)], # Outer box
hAlign = 'LEFT')
def df2table2(df, custom_style, commodity_cols):
for col in df.columns:
if df[col].dtype == 'float64':
df[col] = df[col].apply(lambda x: format_nums(x))
data = [[Paragraph(col, custom_style) for col in df.columns]] + df.values.tolist()
#('BACKGROUND', (start_col, start_row), (end_col, end_row), color)
style = [
('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'), # Header font
('FONTSIZE', (0, 1), (-1, -1), 8), # Body font size
('TEXTCOLOR', (0, 0), (-1, 0), colors.white), # Header text color
('BACKGROUND', (0, 0), (-1, 0), colors.lightgrey), # Header background color
('LINEBELOW', (0, 0), (-1, 0), 1, colors.black), # Line below header
('INNERGRID', (0, 0), (-1, -1), 0.25, colors.black), # Inner grid lines
('BOX', (0, 0), (-1, -1), 1, colors.black)] # Outer box
# Apply colors based on column types
for i, col in enumerate(df.columns):
if col in commodity_cols:
style.append(('BACKGROUND', (i, 1), (i, -1), colors.lightgrey)) # Commodity column
else:
if col != 'Counterparty':
style.append(('BACKGROUND', (i, 1), (i, -1), colors.lightgreen)) # Aggregation column
return Table(data, style, hAlign='LEFT')
df = pd.DataFrame({
'Counterparty': ['foo', 'fizz', 'fizz', 'fizz','fizz', 'foo'],
'Commodity': ['bar', 'bar', 'bar', 'bar','bar', 'ab cred'],
'DealType': ['Buy', 'Buy', 'Buy', 'Buy', 'Buy', 'Buy'],
'StartDate': ['07/01/2024', '09/01/2024', '10/01/2024', '11/01/2024', '12/01/2024', '01/01/2025'],
'FloatPrice': [18.73, 17.12, 17.76, 18.72, 19.47, 20.26],
'MTMValue':[10, 10, 10, 10, 10, 10]
})
commodity_cols = df['Commodity'].unique()
out = pd.pivot_table(df, values = 'MTMValue', index='Counterparty', columns = 'Commodity', aggfunc='sum').reset_index().rename_axis(None, axis=1).fillna(0)
out['Cumulative Exposure'] = out[out.columns[1:]].sum(axis = 1)
path = <INSERT PDF OUTPUT FILE HERE>
padding = dict(
leftPadding=72,
rightPadding=72,
topPadding=72,
bottomPadding=18)
portrait_frame = Frame(0, 0, *A4, **padding)
landscape_frame = Frame(0, 0, *landscape(A4), **padding)
portrait_template = PageTemplate(
id='portrait',
frames=portrait_frame,
onPage=on_page,
pagesize=A4)
landscape_template = PageTemplate(
id='landscape',
frames=landscape_frame,
onPage=on_page_landscape,
pagesize=landscape(A4))
doc = BaseDocTemplate(
path,
pageTemplates=[
#portrait_template
landscape_template
]
)
styles = getSampleStyleSheet()
custom_style = ParagraphStyle(name='CustomStyle', fontSize=8)
# NOT WORKING
#story = [
# Paragraph('Title 1', styles['Title']),
# Paragraph('Title 2', styles['Title']),
# Paragraph("<br/><br/>", styles['Normal']),
# Paragraph('Current Total Positions', styles['Heading2']),
# df2table2(out, custom_style, commodity_cols)
#]
# WORKING
story = [
Paragraph('Title 1', styles['Title']),
    Paragraph('Title 2', styles['Title']),
Paragraph("<br/><br/>", styles['Normal']),
Paragraph('Current Total Positions', styles['Heading2']),
df2table(out, custom_style)
]
doc.build(story)
</code></pre>
<p>My issue arises in the <code>df2table2</code> function somewhere with the full error message:</p>
<pre><code>unsupported operand type(s) for -=: 'float' and 'tuple'
</code></pre>
<p>I have been looking at this for hours and cannot for the life of me figure out what is possibly going wrong. Any ideas what the issue could be?</p>
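One likely culprit, judging from the traceback: <code>Table</code>'s second positional parameter is <code>colWidths</code>, not <code>style</code>, so <code>Table(data, style, hAlign='LEFT')</code> in <code>df2table2</code> hands the style tuples to the column-width machinery, which then attempts arithmetic on them. A minimal sketch of the hypothetical fix (pass the style by keyword, as <code>df2table</code> effectively does):

```python
# inside df2table2, replacing the last line:
return Table(data, style=style, hAlign='LEFT')
```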
|
<python><pandas><reportlab>
|
2024-10-11 18:34:16
| 1
| 2,185
|
iBeMeltin
|
79,079,203
| 11,638,153
|
How to combine multiple docx files into a single in python
|
<p>I combined multiple text files into a single text file using simple code:</p>
<pre><code>with open("Combined_file.txt", 'w') as f1:
for indx1, fil1 in enumerate(files_to_combine):
with open(files_to_combine[indx1], 'r') as f2:
for line1 in f2:
f1.write(line1)
f1.write("\n")
</code></pre>
<p><code>files_to_combine</code> is a list of files to combine into the single file <code>Combined_file.txt</code>. I want to combine MS Word <code>.docx</code> files in a similar way, and I looked at this answer <a href="https://stackoverflow.com/a/48925828">https://stackoverflow.com/a/48925828</a> using the <code>python-docx</code> module. But I couldn't figure out how to open and save a docx file inside the top <code>for</code> loop of the code above, since the <code>with open</code> construct won't work here. Also, if a source docx file contains an image, can it be copied using the code above together with the answer's code?</p>
|
<python><python-3.x><ms-word><python-docx>
|
2024-10-11 17:22:45
| 1
| 441
|
ewr3243
|
79,079,170
| 3,785,010
|
PyQt6 failure in Python 3.13.0 - ModuleNotFoundError: No module named PyQt6.sip
|
<p>The following simple example fails at line 2, the import, when run with Python 3.13.0 and PyQt6 6.7.1 in October of 2024:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt6.QtWidgets import QApplication, QWidget
app = QApplication(sys.argv)
w = QWidget()
w.resize(250, 200)
w.setWindowTitle('HelloWorld')
w.show()
sys.exit(app.exec())
</code></pre>
<p>The error:</p>
<pre><code>"C:\Program Files\Python\python.exe" c:\projects\python\PyQtSimpleExample\main.py
Traceback (most recent call last):
File "c:\projects\python\PyQtSimpleExample\main.py", line 2, in <module>
from PyQt6.QtWidgets import QApplication, QWidget
***ModuleNotFoundError: No module named 'PyQt6.sip'***
</code></pre>
<p>The temporary workaround is to stop using PyQt6 6.7.1 and switch to PySide6 6.7.3, which does not suffer from the same issue.</p>
|
<python><python-3.x><pyqt6><python-3.13>
|
2024-10-11 17:10:45
| 1
| 1,109
|
IntelligenceGuidedByExperience
|
79,079,142
| 1,016,784
|
Intermittently losing ContextVar when passing from parent to child thread
|
<p>I have a subclass of <code>Thread</code> that I use across my project. In this class, I pass in the ContextVar manually. However, at times (once or twice a day), I notice that the ContextVar in the child thread is not set (it has reverted to its default value).</p>
<pre class="lang-py prettyprint-override"><code>class MyThread(Thread):
def __init__(
self,
group: None = None,
target: Callable[..., Any] | None = None,
name: str | None = None,
args: tuple[Any, ...] = (),
kwargs: dict[str, Any] | None = None,
*,
daemon: bool | None = None,
):
super().__init__(group=group, target=target, name=name, args=args, kwargs=kwargs, daemon=daemon)
self.my_resource = get_resource_info()
def run(self):
self._exception = None
try:
set_my_resource_info(self.my_resource.name, self.my_resource.kind)
self._return_value = super().run()
except BaseException as e:
self._exception = e
def join(self, timeout: float | None = None):
super().join(timeout)
if self._exception:
raise self._exception
return self._return_value
</code></pre>
<p>And in another module I have:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class MyResourceInfo:
name: str
kind: str ="unknown"
resource_info: ContextVar[MyResourceInfo] = ContextVar(
'my_resource_info',
default=MyResourceInfo(name=get_default_resource_name()),
)
def set_resource_info(name: str, kind: str = 'unknown') -> Token[MyResourceInfo]:
return resource_info.set(MyResourceInfo(name=name, kind=kind))
</code></pre>
<p>Why does the context var revert to default value intermittently in child threads?</p>
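One plausible cause: ContextVars never propagate to new threads automatically — every thread starts from an empty context, so any read that happens before <code>set_my_resource_info</code> runs (or on a code path that skips it, e.g. when <code>run</code> raises early) falls back to the default. A robust pattern, sketched below, is to snapshot the parent's entire context at construction time and run the thread body inside that snapshot:

```python
import contextvars
from threading import Thread


class ContextThread(Thread):
    """Run the thread body inside a snapshot of the parent's context."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Taken in the PARENT thread, so all its ContextVar values are captured.
        self._ctx = contextvars.copy_context()

    def run(self):
        # Every var.get() inside the target sees the parent's values.
        self._ctx.run(super().run)
```

This removes the need to ferry individual values by hand, and one thread can no longer observe a half-initialized context.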
|
<python><python-3.x><multithreading><python-multithreading><python-contextvars>
|
2024-10-11 17:01:35
| 1
| 1,240
|
Kyuubi
|
79,078,996
| 4,048,657
|
How to optimize a single index in a PyTorch tensor?
|
<p>I have code like this optimizing an (N,3) tensor.</p>
<pre class="lang-py prettyprint-override"><code>deform_verts = torch.full(verts_shape, 0.0, device=device, requires_grad=True)
optimizer = torch.optim.SGD([deform_verts], lr=5e-2, momentum=0.9)
</code></pre>
<p>However, what I really want is to optimize a single index.</p>
<pre class="lang-py prettyprint-override"><code>deform_verts = torch.full(verts_shape, 0.0, device=device, requires_grad=True)
optimizer = torch.optim.SGD([deform_verts[275][1]], lr=5e-2, momentum=0.9)
</code></pre>
<blockquote>
<p>ValueError: can't optimize a non-leaf Tensor</p>
</blockquote>
<p>How can I optimize this single index?</p>
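A common workaround, sketched below, is to make only the entry you care about a leaf tensor and splice it into a frozen copy of the full tensor each step; autograd flows through the indexed assignment. The objective here is made up purely for illustration:

```python
import torch

# Keep the full tensor frozen and make only one scalar a leaf.
verts = torch.zeros(300, 3)                   # not optimized
single = torch.zeros((), requires_grad=True)  # scalar leaf for slot [275, 1]
opt = torch.optim.SGD([single], lr=5e-2, momentum=0.9)

for _ in range(200):
    opt.zero_grad()
    deform = verts.clone()
    deform[275, 1] = single            # splice the leaf in; grads flow through
    loss = (deform.sum() - 1.0) ** 2   # toy objective over the whole tensor
    loss.backward()
    opt.step()
```

Since `deform[275][1]` is produced by indexing, it is a non-leaf view, which is exactly why the optimizer rejects it; the leaf must be created up front.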
|
<python><pytorch>
|
2024-10-11 16:07:32
| 0
| 1,239
|
Cedric Martens
|
79,078,922
| 3,398,536
|
PKCS12 pfx certificate doesn't load in python playwright
|
<p>I'm using Playwright for Python, and loading PKCS12 certificates does not seem to be working properly.</p>
<p>We tried to follow the <a href="https://playwright.dev/python/docs/release-notes#version-146" rel="nofollow noreferrer">playwright documentation</a> and this example from <a href="https://stackoverflow.com/a/78943350/3398536">stackoverflow</a> too, but neither seems to work: the certificate doesn't show up in the list of options for login.</p>
<pre><code>import asyncio
from fake_useragent import UserAgent
from playwright.async_api import async_playwright
async def page_launch_playwright():
playwright = await async_playwright().start()
device = playwright.devices["Desktop Firefox"]
device['user_agent'] = UserAgent().firefox
device['viewport'] = {'width': 1366, 'height': 768}
# https://github.com/nodejs/node/issues/40672#issuecomment-1243648223
# convert old certificate to new pfx
# openssl pkcs12 -in oldPfxFile.pfx -nodes -legacy -out decryptedPfxFile.tmp
# openssl pkcs12 -in decryptedPfxFile.tmp -export -out newPfxFile.pfx
# base example https://playwright.dev/python/docs/release-notes#version-146
device["client_certificates"] = [
{
"origin": 'https://website.com',
"pfxPath": './newPfxFile.pfx',
"passphrase": 'mypass',
}
]
url_base_path = "https://website.com"
launch_config = {}
launch_config["headless"] = False
launch_config["timeout"] = 10000.0
launch_config["proxy"] = None
launch_config["slow_mo"] = 190
browser = await playwright.firefox.launch(**launch_config)
context = await browser.new_context(**device)
await context.add_init_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")
page_driver = await context.new_page()
await page_driver.goto(url_base_path)
await page_driver.wait_for_timeout(2000.0)
if __name__ == "__main__":
loop = asyncio.get_event_loop()
loop.run_until_complete(page_launch_playwright())
</code></pre>
<p>What have we done wrong in this example?</p>
|
<python><playwright><playwright-python>
|
2024-10-11 15:46:43
| 2
| 341
|
Iron Banker Of Braavos
|
79,078,889
| 19,502,111
|
What is the difference between Dropout(1.0) and stop_gradient?
|
<p>Consider these two architectures:</p>
<pre><code>prev_layer -> dropout 1.0 -> next_layer (output layer)
prev_layer -> stop_gradient -> next_layer (output layer)
</code></pre>
<p>As gradients flow from the output layer to the input, both must produce the same behavior where <code>prev_layer</code> weights will not be updated, so what is the difference?</p>
<p>I have verified it with this code:</p>
<pre><code>input_layer = Input(shape=(1,))
prev_layer = Dense(32, activation='relu')(input_layer)
dropout_layer = Dropout(.99999)(prev_layer) # This is effectively disabling the layer
output_layer = Dense(1, activation='linear')(dropout_layer)
model_dropout = Model(inputs=input_layer, outputs=output_layer)
model_dropout.compile(optimizer='adam', loss='mse')
input_layer = Input(shape=(1,))
prev_layer = Dense(32, activation='relu')(input_layer)
stop_gradient_layer = Lambda(lambda x: tf.stop_gradient(x))(prev_layer)
output_layer = Dense(1, activation='linear')(stop_gradient_layer)
model_stopgradient = Model(inputs=input_layer, outputs=output_layer)
model_stopgradient.compile(optimizer='adam', loss='mse')
</code></pre>
<p>Train them:</p>
<pre><code>before_train_dropout = model_dropout.layers[1].get_weights()
before_train_stopgradient = model_stopgradient.layers[1].get_weights()
X_dummy = np.random.rand(5, 1)
y_dummy = np.random.rand(5, 1)
model_dropout.fit(X_dummy, y_dummy, epochs=50, verbose=0)
model_stopgradient.fit(X_dummy, y_dummy, epochs=50, verbose=0)
after_train_dropout = model_dropout.layers[1].get_weights()
after_train_stopgradient = model_stopgradient.layers[1].get_weights()
# Is array equal
print('weight')
display(np.array_equal(np.array(before_train_dropout[0]), np.array(after_train_dropout[0])))
display(np.array_equal(np.array(before_train_stopgradient[0]), np.array(after_train_stopgradient[0])))
print('bias')
display(np.array_equal(np.array(before_train_dropout[1]), np.array(after_train_dropout[1])))
display(np.array_equal(np.array(before_train_stopgradient[1]), np.array(after_train_stopgradient[1])))
</code></pre>
<p>Returned:</p>
<pre><code>weight
True
True
bias
True
True
</code></pre>
<p>So, when should I use <code>Dropout(1.0)</code> versus <code>stop_gradient</code>?</p>
|
<python><arrays><numpy><tensorflow><keras>
|
2024-10-11 15:36:08
| 1
| 353
|
Citra Dewi
|
79,078,880
| 13,031,996
|
Vectorised linear interpolation where x, xp and yp are all 2D
|
<p>I have a similar issue to <a href="https://stackoverflow.com/questions/49713617/linear-interpolation-of-two-2d-arrays?noredirect=1&lq=1">Linear interpolation of two 2D arrays</a>.</p>
<p>But in my case x, xp and yp are all 2 dimensional. For instance:</p>
<pre><code>import numpy as np
x = np.array([[2, 3], [1, 2]], dtype=float)
xp = np.array([[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]], dtype=float)
yp = np.array([[0, 1, 3, 6, 7], [0, 3, 5, 7, 9]], dtype=float)
</code></pre>
<p>I can do it simply with a loop, but this is slow for big arrays.</p>
<pre><code>result = np.array([np.interp(x[i], xp[i], yp[i]) for i in range(len(x))])
</code></pre>
<p>Is there a possibility to do it in a vectorised way?</p>
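One vectorised formulation: locate each query's right-hand neighbour per row with a broadcasted comparison, then apply the usual linear-interpolation formula. A sketch (assumes each <code>xp</code> row is strictly increasing; unlike <code>np.interp</code>, points outside the range extrapolate instead of clamping):

```python
import numpy as np

x = np.array([[2, 3], [1, 2]], dtype=float)
xp = np.array([[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]], dtype=float)
yp = np.array([[0, 1, 3, 6, 7], [0, 3, 5, 7, 9]], dtype=float)


def interp_rows(x, xp, yp):
    """Row-wise np.interp; each xp row must be strictly increasing."""
    # Right-hand neighbour index per query point: count of xp values <= x.
    j = np.clip((xp[:, None, :] <= x[:, :, None]).sum(-1), 1, xp.shape[1] - 1)
    r = np.arange(x.shape[0])[:, None]          # row index for fancy indexing
    x0, x1 = xp[r, j - 1], xp[r, j]
    y0, y1 = yp[r, j - 1], yp[r, j]
    return y0 + (x - x0) * (y1 - y0) / (x1 - x0)


result = interp_rows(x, xp, yp)
```

The broadcasted comparison builds an `(n_rows, n_queries, n_points)` boolean array, so for very long `xp` rows a `np.searchsorted`-per-row variant may be more memory-friendly.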
|
<python><numpy>
|
2024-10-11 15:32:13
| 2
| 957
|
Stefan
|
79,078,872
| 3,196,122
|
How to create schema with fixed values using pydantic
|
<p>I have a json that looks like the following, exported from pandas Dataframe:</p>
<p><code>{"columns": ["x", "y", "z"], "data": [[0, 0, 0.5], [1, 1, null]]}</code></p>
<p>This, I want to send to a FastAPI and validate using pydantic.</p>
<ol>
<li>How can I enforce that <code>"columns"</code> equals the list <code>["x", "y", "z"] </code></li>
<li><code>"data"</code> is a list of format <code>[int, int, Optional[float]]</code>. How can I enforce that?</li>
</ol>
<p>The closest I got so far is the following:</p>
<pre><code>class MyModel(BaseModel):
columns: List[Literal['x', 'y', 'z']] = ['x', 'y', 'z']
data: List[conlist(Union[int, Optional[float]], min_length=3, max_length=3)]
</code></pre>
<p>However, this would also allow <code>columns = ['x', 'x', 'x']</code>, or a row like <code>[0, 0, 0]</code> where the types are not tied to their positions.</p>
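A sketch of one way to pin both constraints: a fixed-length tuple of single-value <code>Literal</code>s fixes the column order and values, and a per-position tuple type replaces the union shared by all three slots (pydantic coerces the incoming JSON lists to tuples):

```python
from typing import List, Literal, Optional, Tuple

from pydantic import BaseModel, ValidationError


class MyModel(BaseModel):
    # Exactly ('x', 'y', 'z'), in that order -- ['x', 'x', 'x'] is rejected.
    columns: Tuple[Literal['x'], Literal['y'], Literal['z']] = ('x', 'y', 'z')
    # Each row must be (int, int, float-or-None), position by position.
    data: List[Tuple[int, int, Optional[float]]]


m = MyModel(columns=['x', 'y', 'z'], data=[[0, 0, 0.5], [1, 1, None]])
```

Note that pydantic's default numeric coercion still accepts an `int` in the third slot (as a `float`); add strict types if that matters.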
|
<python><pandas><fastapi><pydantic>
|
2024-10-11 15:29:59
| 2
| 483
|
KrawallKurt
|
79,078,823
| 9,274,940
|
LangGraph Error - Invalid Tool Calls when using AzureChatOpenAI
|
<p>I'm following the LangGraph <a href="https://academy.langchain.com/courses/take/intro-to-langgraph/lessons/58239232-lesson-6-agent" rel="nofollow noreferrer">tutorial</a> (from LangGraph) and I'm getting an error when switching from "ChatOpenAI" to "AzureChatOpenAI". This is the code example from the website:</p>
<pre><code>from langchain_openai import AzureChatOpenAI
from dotenv import find_dotenv, load_dotenv
import os
from langgraph.graph import MessagesState
from langchain_core.messages import HumanMessage, SystemMessage
from langgraph.graph import START, StateGraph
from langgraph.prebuilt import tools_condition
from langgraph.prebuilt import ToolNode
from IPython.display import Image, display
load_dotenv(find_dotenv(), override=True)
</code></pre>
<p>This is the only change I've made: I've modified how the LLM is initialized, using <code>AzureChatOpenAI</code> instead of the ChatOpenAI 4o model.</p>
<pre><code>llm = AzureChatOpenAI(
openai_api_version=os.getenv("OPENAI_API_VERSION"),
azure_deployment=os.getenv("DEPLOYMENT_NAME"),
azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
temperature=0,
)
</code></pre>
<p>Then the remain code:</p>
<pre><code>def multiply(a: int, b: int) -> int:
"""Multiply a and b.
Args:
a: first int
b: second int
"""
return a * b
def add(a: int, b: int) -> int:
"""Adds a and b.
Args:
a: first int
b: second int
"""
return a + b
tools = [add, multiply]
llm_with_tools = llm.bind_tools(tools)
sys_msg = SystemMessage(content="You are a helpful assistant tasked with performing arithmetic on a set of inputs.")
def assistant(state: MessagesState):
return {"messages": [llm_with_tools.invoke([sys_msg] + state["messages"])]}
# Graph
builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "assistant")
builder.add_conditional_edges(
"assistant",
tools_condition,
)
builder.add_edge("tools", "assistant")
react_graph = builder.compile()
</code></pre>
<p>And this is the call (how to reproduce the error):</p>
<pre><code>messages = [HumanMessage(content="Add 3 and 4. Multiply the output by 2. Using the local tool")]
messages = react_graph.invoke({"messages": messages})
</code></pre>
<p>print the result:</p>
<pre><code>for m in messages['messages']:
m.pretty_print()
</code></pre>
<p>And this is the error I get:</p>
<p><a href="https://i.sstatic.net/lGCQGu49.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGCQGu49.png" alt="enter image description here" /></a></p>
|
<python><langchain><langgraph>
|
2024-10-11 15:14:09
| 0
| 551
|
Tonino Fernandez
|
79,078,703
| 23,260,297
|
Order columns in dataframe before converting to report lab
|
<p>I am creating a PDF report using ReportLab, based on multiple pandas DataFrames.</p>
<p>The tables that are displayed in the PDF need to have their columns in a specific order.</p>
<p>here is a sample dataframe:</p>
<pre><code>df = pd.DataFrame({
'Counterparty': ['foo', 'fizz', 'fizz', 'fizz','fizz', 'foo'],
'Commodity': ['bar', 'bar', 'bar', 'bar','bar', 'ab cred'],
'DealType': ['Buy', 'Buy', 'Buy', 'Buy', 'Buy', 'Buy'],
'StartDate': ['07/01/2024', '09/01/2024', '10/01/2024', '11/01/2024', '12/01/2024', '01/01/2025'],
'FloatPrice': [18.73, 17.12, 17.76, 18.72, 19.47, 20.26],
'MTMValue':[10, 10, 10, 10, 10, 10]
})
out = pd.pivot_table(df, values = 'MTMValue', index='Counterparty', columns = 'Commodity', aggfunc='sum').reset_index().rename_axis(None, axis=1).fillna(0)
out['Cumulative Exposure'] = out[out.columns[1:]].sum(axis = 1)
Counterparty ab cred bar Cumulative Exposure
0 fizz 0.0 40.0 40.0
1 foo 10.0 10.0 20.0
</code></pre>
<p>I need this:</p>
<pre><code>Counterparty bar ab cred Cumulative Exposure
0 fizz 40.0 0.0 40.0
1 foo 10.0 10.0 20.0
</code></pre>
<p>where <code>ab cred</code> must always come after the last <code>commodity</code> column and before <code>Cumulative Exposure</code>, since any number (<code>x</code>) of commodities can be added to the data (<code>ab cred</code> is always included as a commodity). More columns can also be added after <code>Cumulative Exposure</code> at any time.</p>
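<p>The ordering rule itself, independent of pandas, could be sketched like this (illustrative; the helper name is mine, and I assume the extra trailing columns are appended separately):</p>

```python
def order_columns(cols):
    # 'ab cred' goes after the last commodity, just before 'Cumulative Exposure'.
    head = ["Counterparty"]
    tail = ["ab cred", "Cumulative Exposure"]
    middle = [c for c in cols if c not in head + tail]  # the commodities
    return head + middle + tail

print(order_columns(["Counterparty", "ab cred", "bar", "Cumulative Exposure"]))
# ['Counterparty', 'bar', 'ab cred', 'Cumulative Exposure']
```

With pandas this would presumably translate to something like <code>out = out[order_columns(out.columns)]</code>.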
|
<python><pandas><reportlab>
|
2024-10-11 14:45:12
| 2
| 2,185
|
iBeMeltin
|
79,078,700
| 19,648,465
|
Error when deploying Django/DRF backend on Google Cloud: No matching distribution found for Django==5.0.4
|
<p>I am trying to deploy my Django backend (using Django Rest Framework) on a Google Cloud VM instance. However, when I run <code>pip install -r requirements.txt</code>, I encounter the following error:</p>
<pre><code>Collecting asgiref==3.8.1
Using cached asgiref-3.8.1-py3-none-any.whl (23 kB)
Collecting attrs==23.2.0
Using cached attrs-23.2.0-py3-none-any.whl (60 kB)
ERROR: Could not find a version that satisfies the requirement Django==5.0.4 (from -r requirements.txt (line 3)) (from versions: 1.1.3, 1.1.4, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.3, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.4, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.4.10, 1.4.11, 1.4.12, 1.4.13, 1.4.14, 1.4.15, 1.4.16, 1.4.17, 1.4.18, 1.4.19, 1.4.20, 1.4.21, 1.4.22, 1.5, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.5.11, 1.5.12, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.6.8, 1.6.9, 1.6.10, 1.6.11, 1.7, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.11, 1.8a1, 1.8b1, 1.8b2, 1.8rc1, 1.8, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7, 1.8.8, 1.8.9, 1.8.10, 1.8.11, 1.8.12, 1.8.13, 1.8.14, 1.8.15, 1.8.16, 1.8.17, 1.8.18, 1.8.19, 1.9a1, 1.9b1, 1.9rc1, 1.9rc2, 1.9, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 1.9.6, 1.9.7, 1.9.8, 1.9.9, 1.9.10, 1.9.11, 1.9.12, 1.9.13, 1.10a1, 1.10b1, 1.10rc1, 1.10, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6, 1.10.7, 1.10.8, 1.11a1, 1.11b1, 1.11rc1, 1.11, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.11.5, 1.11.6, 1.11.7, 1.11.8, 1.11.9, 1.11.10, 1.11.11, 1.11.12, 1.11.13, 1.11.14, 1.11.15, 1.11.16, 1.11.17, 1.11.18, 1.11.20, 1.11.21, 1.11.22, 1.11.23, 1.11.24, 1.11.25, 1.11.26, 1.11.27, 1.11.28, 1.11.29, 2.0a1, 2.0b1, 2.0rc1, 2.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.0.10, 2.0.12, 2.0.13, 2.1a1, 2.1b1, 2.1rc1, 2.1, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.7, 2.1.8, 2.1.9, 2.1.10, 2.1.11, 2.1.12, 2.1.13, 2.1.14, 2.1.15, 2.2a1, 2.2b1, 2.2rc1, 2.2, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.2.6, 2.2.7, 2.2.8, 2.2.9, 2.2.10, 2.2.11, 2.2.12, 2.2.13, 2.2.14, 2.2.15, 2.2.16, 2.2.17, 2.2.18, 2.2.19, 2.2.20, 2.2.21, 2.2.22, 2.2.23, 2.2.24, 2.2.25, 2.2.26, 2.2.27, 2.2.28, 3.0a1, 3.0b1, 3.0rc1, 3.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.0.6, 3.0.7, 3.0.8, 3.0.9, 3.0.10, 3.0.11, 3.0.12, 3.0.13, 3.0.14, 3.1a1, 3.1b1, 3.1rc1, 
3.1, 3.1.1, 3.1.2, 3.1.3, 3.1.4, 3.1.5, 3.1.6, 3.1.7, 3.1.8, 3.1.9, 3.1.10, 3.1.11, 3.1.12, 3.1.13, 3.1.14, 3.2a1, 3.2b1, 3.2rc1, 3.2, 3.2.1, 3.2.2, 3.2.3, 3.2.4, 3.2.5, 3.2.6, 3.2.7, 3.2.8, 3.2.9, 3.2.10, 3.2.11, 3.2.12, 3.2.13, 3.2.14, 3.2.15, 3.2.16, 3.2.17, 3.2.18, 3.2.19, 3.2.20, 3.2.21, 3.2.22, 3.2.23, 3.2.24, 3.2.25, 4.0a1, 4.0b1, 4.0rc1, 4.0, 4.0.1, 4.0.2, 4.0.3, 4.0.4, 4.0.5, 4.0.6, 4.0.7, 4.0.8, 4.0.9, 4.0.10, 4.1a1, 4.1b1, 4.1rc1, 4.1, 4.1.1, 4.1.2, 4.1.3, 4.1.4, 4.1.5, 4.1.6, 4.1.7, 4.1.8, 4.1.9, 4.1.10, 4.1.11, 4.1.12, 4.1.13, 4.2a1, 4.2b1, 4.2rc1, 4.2, 4.2.1, 4.2.2, 4.2.3, 4.2.4, 4.2.5, 4.2.6, 4.2.7, 4.2.8, 4.2.9, 4.2.10, 4.2.11, 4.2.12, 4.2.13, 4.2.14, 4.2.15, 4.2.16)
ERROR: No matching distribution found for Django==5.0.4 (from -r requirements.txt (line 3))
</code></pre>
<p>Here are the steps I performed</p>
<pre><code>cd Project_Backend
python3 -m venv env
sudo apt-get install python3-venv
python3 -m venv env
source ./env/bin/activate
pip install -r requirements.txt
</code></pre>
<p>That's my requirements.txt</p>
<pre><code>asgiref==3.8.1
attrs==23.2.0
Django==5.0.4
django-cors-headers==4.4.0
djangorestframework==3.15.1
drf-spectacular==0.27.2
inflection==0.5.1
jsonschema==4.22.0
jsonschema-specifications==2023.12.1
PyYAML==6.0.1
referencing==0.35.1
rpds-py==0.18.1
sqlparse==0.4.4
typing_extensions==4.11.0
tzdata==2024.1
uritemplate==4.1.1
</code></pre>
<p>I am running Ubuntu on my Google Cloud VM. How should I resolve this issue and successfully deploy my Django application?</p>
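<p>Since Django 5.0 only supports Python 3.10 and newer, it may be worth checking which interpreter the virtual environment actually uses (a diagnostic sketch; run it with the venv activated):</p>

```shell
# Django 5.0.x supports Python 3.10 - 3.12; older interpreters
# cause pip to report "No matching distribution found".
python3 --version
python3 -c "import sys; print(sys.version_info >= (3, 10))"
```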
|
<python><django><linux><ubuntu><google-cloud-platform>
|
2024-10-11 14:44:55
| 1
| 705
|
coder
|
79,078,340
| 14,720,215
|
How to check type of Ellipsis
|
<p>I have a really specific case where I need to check types with <code>isinstance</code> in Python, and in really rare cases I want to check that a value is <code>Ellipsis</code>. For example, take this dict:</p>
<pre class="lang-py prettyprint-override"><code>customer_ids_map = {
"1": "aaa",
"2": ...,
}
</code></pre>
<p>But this doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>isinstance(..., Ellipsis)
TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union
</code></pre>
<p>And this doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>isinstance(..., ...)
TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union
</code></pre>
<p>So is there any elegant way to do this?</p>
<p>P.S. I'm using python3.11</p>
|
<python><python-3.x>
|
2024-10-11 12:59:06
| 1
| 1,338
|
Kirill Ilichev
|
79,078,329
| 8,467,078
|
Python typing specific substring type
|
<p>Is it possible to define a type in Python that only allows strings starting (or ending) with a specific substring, such that at runtime any string can still be passed, but a static type checker only allows the specific substring?</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>special_str_type = MyType("X*") # doesn't exist
def foo(mystr: special_str_type) -> str:
...
# starts with "X", passes type checking
result = foo("Xabc")
# doesn't start with "X", fails type checking
result = foo("abc")
</code></pre>
<p>I know I can simply check this (at runtime) using e.g. <code>mystr.startswith("X")</code> in this case, and potentially throw an exception if it's not the case. But I'm specifically wondering if this is possible in the typing context. Maybe somehow with the builtin <code>typing.NewType</code>?</p>
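<p>For context, the closest I can get with <code>typing.NewType</code> is a nominal wrapper plus a runtime-checked constructor (sketch below; the names are mine). This still does not make the checker reject an arbitrary string literal, it only forces callers through the constructor:</p>

```python
from typing import NewType

XPrefixed = NewType("XPrefixed", str)

def make_x_prefixed(s: str) -> XPrefixed:
    # Runtime guard; a static checker only sees the nominal type.
    if not s.startswith("X"):
        raise ValueError("string must start with 'X'")
    return XPrefixed(s)

def foo(mystr: XPrefixed) -> str:
    return mystr

print(foo(make_x_prefixed("Xabc")))  # Xabc
```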
|
<python><python-typing>
|
2024-10-11 12:55:58
| 1
| 345
|
VY_CMa
|
79,078,313
| 8,261,345
|
Docker is downgrading SSL for MySQL connection
|
<h3>The server</h3>
<p>I have a MySQL server v8.4 running in Google Cloud with <code>skip-name-resolve</code>. It requires SSL connections, but without trusted client certificates.</p>
<p>I have a user configured on the server to connect from any IP: <code>'myuser'@'%'</code>.</p>
<p>I have verified the user can connect from any IP using the <code>mysql</code> DB where <code>SELECT Host, User FROM user</code> returns:</p>
<pre><code>+-----------+----------------------------+
| Host | User |
+-----------+----------------------------+
| % | myuser |
| % | root |
+-----------+----------------------------+
</code></pre>
<p>For Google Cloud config, connections are allowed from any IP, set using the CIDR range <code>0.0.0.0/0</code>.</p>
<h3>The client</h3>
<p>I have a Python program using SQLAlchemy to connect to the server:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine
from sqlalchemy.orm import Session
with Session(create_engine('mysql://myuser:mypass@X.X.X.X:3306/mydb?ssl=true')) as session:
result = session.query(MyTable).all()
print(result)
</code></pre>
<p>When running on my local machine, <strong>this connects to the remote server without issue and prints the rows.</strong> Using a VPN and switching to various IP addresses, this continues to work.</p>
<h3>The problem</h3>
<p>As soon as I put this client code inside a Docker container, the server starts denying the connections.</p>
<pre><code>sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1045, "Access denied for user 'myuser'@'Y.Y.Y.Y' (using password: YES)")
</code></pre>
<p><code>Y.Y.Y.Y</code> matches my public IP address exactly. Connecting from outside the Docker container from this same IP address does not have any issues.</p>
<p>My Dockerfile:</p>
<pre class="lang-none prettyprint-override"><code>FROM python:3.12-slim-bullseye
WORKDIR /usr/src/app
ENV PYTHONPATH=.
COPY requirements.txt requirements.txt
COPY myfile.py myfile.py
RUN apt-get update && apt-get install -y pkg-config default-libmysqlclient-dev
RUN pip3 install -r requirements.txt
CMD python3 myfile.py
</code></pre>
<p>The database logs show that the request is reaching the server but provides no additional information:</p>
<pre><code>Access denied for user 'myuser'@'Y.Y.Y.Y' (using password: YES)
</code></pre>
<p><strong>Disabling the SSL requirement on the server fixes the issue.</strong> So it seems that the container is ignoring the <code>?ssl=true</code>.</p>
<p>How do I get the container's connection to use SSL?</p>
|
<python><ssl><libmysqlclient>
|
2024-10-11 12:51:58
| 1
| 694
|
Student
|
79,078,306
| 12,466,687
|
Unable to install python-poppler on Windows for pdftotext
|
<p>I am trying to install poppler on Windows for Python as I want to use pdftotext.</p>
<p>I have referred to several SO posts like:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/52336495/cannot-install-pdftotext-on-windows-because-of-poppler">cannot install pdftotext on windows because of poppler</a></li>
<li><a href="https://stackoverflow.com/questions/18381713/how-to-install-poppler-on-windows">How to install Poppler on Windows?</a></li>
</ul>
<p>I have downloaded the poppler files from <a href="https://github.com/oschwartz10612/poppler-windows/releases" rel="nofollow noreferrer">this source</a>.</p>
<p>Even after several CMake fixes I am still getting the error below:
<a href="https://i.sstatic.net/yrTKQ4Q0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrTKQ4Q0.png" alt="enter image description here" /></a></p>
<p>It seems it is not able to find the poppler-cpp files, even though I have put the poppler folder in Program Files and added it to PATH.</p>
<p>I have added Path to Environment variables:</p>
<p><a href="https://i.sstatic.net/19zN1Ve3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19zN1Ve3.png" alt="enter image description here" /></a></p>
<p>files structure: <code>C:\Program Files\poppler-24.07.0\Library\bin</code>
<a href="https://i.sstatic.net/V1rPjpth.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V1rPjpth.png" alt="enter image description here" /></a></p>
<p>file structure: <code>C:\Program Files\poppler-24.07.0\Library\lib</code>
<a href="https://i.sstatic.net/CqieIJrk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CqieIJrk.png" alt="enter image description here" /></a></p>
<p>I have also restarted the machine after setting PATH.</p>
<p>What else should I do to make this work?</p>
|
<python><pdftotext><poppler>
|
2024-10-11 12:50:28
| 1
| 2,357
|
ViSa
|
79,078,287
| 8,040,369
|
Groupby a df column based on other column and add a default value to everylist
|
<p>I have a df which has 2 columns, let's say Region and Country.</p>
<pre><code>Region Country
================================
AMER US
AMER CANADA
APJ INDIA
APJ CHINA
</code></pre>
<p>I have grouped the unique Country list for each Region using the code below, which produces the following output:</p>
<pre><code>df.drop_duplicates().groupby("Region")['Country'].agg(lambda x: sorted(x.unique().tolist())).to_dict()
</code></pre>
<p><strong>OUTPUT</strong></p>
<pre><code>{ 'AMER': ['US', 'CANADA'], 'APJ': ['INDIA', 'CHINA'] }
</code></pre>
<p>Is there a way to add a default value <strong>"ALL"</strong> to every list?</p>
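<p>For illustration, starting from the output dict above, prepending the default with a plain dict comprehension would look like this (a post-processing sketch, not a pandas-native solution):</p>

```python
grouped = {"AMER": ["US", "CANADA"], "APJ": ["INDIA", "CHINA"]}

# Prepend "ALL" to every region's country list.
with_default = {region: ["ALL"] + countries for region, countries in grouped.items()}
print(with_default)
# {'AMER': ['ALL', 'US', 'CANADA'], 'APJ': ['ALL', 'INDIA', 'CHINA']}
```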
<p><strong>=============================================================</strong></p>
<p><strong>EDIT:</strong>
I have a similar situation again and really would need some help here.</p>
<p>I have a df which has 3 columns, let's say Region, Country and AREA_CODE.</p>
<pre><code>Region Country AREA_CODE
===================================
AMER US A1
AMER CANADA A1
AMER US B1
AMER US A1
</code></pre>
<p>I want the output to be a list of AREA_CODE values for each country under each Region, with 'ALL' included in every list as well, something like:</p>
<pre><code>{
"AMER": {
"US": ["ALL", "A1", "B1"],
"CANADA": ["ALL", "A1"]
}
}
</code></pre>
<p>So far I have tried to group by both the Region and Country columns and then group and aggregate by AREA_CODE, but it throws an error:</p>
<pre><code>df.drop_duplicates().groupby(["Region", "Country"]).groupby("Country")['AREA_CODE'].agg(lambda x: ["ALL"]+sorted(x.unique().tolist())).to_dict()
</code></pre>
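<p>For reference, the target structure can at least be built in plain Python from the rows (a sketch of the output I am after, not the pandas solution):</p>

```python
rows = [
    ("AMER", "US", "A1"),
    ("AMER", "CANADA", "A1"),
    ("AMER", "US", "B1"),
    ("AMER", "US", "A1"),
]

# Collect unique area codes per (region, country), then prepend "ALL".
nested = {}
for region, country, code in rows:
    nested.setdefault(region, {}).setdefault(country, set()).add(code)

result = {
    region: {country: ["ALL"] + sorted(codes) for country, codes in d.items()}
    for region, d in nested.items()
}
print(result)
# {'AMER': {'US': ['ALL', 'A1', 'B1'], 'CANADA': ['ALL', 'A1']}}
```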
<p>Could someone kindly help me with this.</p>
<p>Thanks,</p>
|
<python><python-3.x><pandas>
|
2024-10-11 12:44:45
| 1
| 787
|
SM079
|
79,078,271
| 313,768
|
Why is lstsq such a poor fit for this case?
|
<p>I am attempting to produce a plane of best fit through a locus of points in an RGB colour space. These points are:</p>
<pre class="lang-py prettyprint-override"><code>>>> colours
array([[ 0, 0, 0],
[255, 255, 255],
[120, 136, 97],
[135, 129, 86],
[ 93, 67, 31],
[242, 239, 186],
[247, 207, 202],
[203, 167, 119],
[246, 207, 179],
[204, 119, 34],
[250, 245, 231],
[235, 214, 154],
[ 39, 28, 19],
[103, 50, 26],
[244, 198, 166],
[124, 138, 104],
[255, 191, 104],
[ 90, 67, 100],
[ 59, 42, 66],
[ 75, 53, 69],
[203, 151, 53]])
</code></pre>
<p>If I add an artificial offset to help with the (0, 0, 0) case <em>and</em> only fit to five points, then the fit is good:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
offset = 256
# Plane of best fit to form a(x + 256) + b(y + 256) + c(z + 256) = 255
sel = [0, 6, 12, 18, -1]
normal, residuals, rank, singular = np.linalg.lstsq(
a=colours[sel] + offset,
b=np.full(shape=len(sel), fill_value=255),
rcond=None,
)
rhs = 255 - offset * normal.sum()
print('(r g b) .', normal, '~', rhs)
</code></pre>
<pre class="lang-none prettyprint-override"><code>(r g b) . [-4.87196165 6.64635746 -0.73171352] ~ -11.92666587980193
residuals = 3657.53108184
rank = 3
singular = [1368.55050253, 86.54356905, 12.63853985]
</code></pre>
<p>Here the locus is depicted projecting toward the (invisible) plane in the middle.</p>
<p><a href="https://i.sstatic.net/gYo1oiLI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYo1oiLI.png" alt="OK projection" /></a></p>
<p>If I attempt to perform a fit to the whole locus (with or without still retaining the offset), the result seems to be significantly biased in a way that does not look like a best linear fit:</p>
<pre class="lang-py prettyprint-override"><code>
import numpy as np
offset = 256
# Plane of best fit to form a(x + 256) + b(y + 256) + c(z + 256) = 255
normal, residuals, rank, singular = np.linalg.lstsq(
a=colours + offset,
b=np.full(shape=len(colours), fill_value=255),
rcond=None,
)
rhs = 255 - offset * normal.sum()
print('(r g b) .', normal, '~', rhs)
</code></pre>
<pre class="lang-none prettyprint-override"><code>(r g b) . [ 0.58106115 -0.47111639 0.51868464] ~ 94.07087269776349
residuals = 43765.21160088
rank = 3
singular = 3175.31193247, 163.11543773, 56.93404491
</code></pre>
<p>projecting to</p>
<p><a href="https://i.sstatic.net/26jnasxM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26jnasxM.png" alt="bad fit" /></a></p>
<p>Why is this so, and what is the best way to fit a plane to all of the points?</p>
<h2>Reference Code</h2>
<pre class="lang-py prettyprint-override"><code>import typing
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
def dict_to_array(colours: dict[str, bytes]) -> np.ndarray:
return np.array([tuple(triple) for triple in colours.values()])
def triples_to_hex(triples: typing.Iterable[tuple[int, int, int]]) -> tuple[str, ...]:
return tuple(
f'#{red:02x}{green:02x}{blue:02x}'
for red, green, blue in triples
)
def fit_plane(
colours: np.ndarray,
) -> tuple[np.ndarray, float]:
offset = 256
# Plane of best fit to form ax + by + cz + 256 = 255
sel = [0, 6, 12, 18, -1]
normal, residuals, rank, singular = np.linalg.lstsq(
a=colours[sel] + offset,
b=np.full(shape=len(sel), fill_value=255),
rcond=None,
)
if rank != 3:
raise ValueError(f'Deficient rank {rank}')
# (rgb + offset)@norm = 255
# rgb@norm = 255 - offset*norm.sum()
rhs = 255 - offset*normal.sum()
print(f'Colour plane of best fit: (r g b) .', normal, '~', rhs)
return normal, rhs
def project_grid(normal: np.ndarray, rhs: float) -> np.ndarray:
channel = np.linspace(start=0, stop=255, num=40)
ggbb = np.stack(
np.meshgrid(channel, channel), axis=-1,
)
rr = rhs/normal[0] - ggbb@(normal[1:]/normal[0])
rgb = np.concatenate((rr[..., np.newaxis], ggbb), axis=2)
rgb[(rgb > 255).any(axis=-1)] = np.nan
return rgb
def project_irregular(colours: np.ndarray, normal: np.ndarray, rhs: float) -> np.ndarray:
offset = rhs/normal[0], 0, 0
v = colours - offset
w = normal
return offset + v - np.outer(v@w, w/w.dot(w))
def plot_2d(
colours: np.ndarray, projected: np.ndarray,
rgb_grid: np.ndarray,
colour_strs: tuple[str, ...], proj_strs: tuple[str, ...],
) -> plt.Axes:
ax: plt.Axes
fig, ax = plt.subplots()
rgb = np.nan_to_num(rgb_grid, nan=100).astype(np.uint8) # dark grey
ax.imshow(rgb, extent=(0, 255, 0, 255), origin='lower')
ax.scatter(colours[:, 1], colours[:, 2], c=colour_strs)
ax.scatter(
projected[:, 1], projected[:, 2], edgecolors='black', linewidths=0.2,
c=proj_strs,
)
ax.set_title('RGB coordinate system, plane of best fit')
ax.set_xlabel('green')
ax.set_ylabel('blue')
return ax
def plot_3d(
colours: np.ndarray, projected: np.ndarray,
colour_strs: tuple[str, ...], proj_strs: tuple[str, ...],
) -> Axes3D:
fig, ax = plt.subplots(subplot_kw={'projection': '3d'})
ax: Axes3D
ax.scatter3D(*colours.T, c=colour_strs, depthshade=False)
ax.scatter3D(*projected.T, c=proj_strs, depthshade=False)
ax.set_xlabel('red')
ax.set_ylabel('green')
ax.set_zlabel('blue')
return ax
def plot_correspondences(
ax2: plt.Axes, ax3: Axes3D, colours: np.ndarray, projected: np.ndarray,
) -> None:
for orig, proj in zip(colours, projected):
ax2.plot(
[orig[1], proj[1]], [orig[2], proj[2]],
c='black',
)
ax3.plot(
[orig[0], proj[0]],
[orig[1], proj[1]],
[orig[2], proj[2]],
c='black',
)
def demo() -> None:
colour_dict = {
'black': b'\x00\x00\x00',
'white': b'\xFF\xFF\xFF',
'olive': b'\x78\x88\x61',
'olive-brown': b'\x87\x81\x56',
'brown': b'\x5d\x43\x1f',
'yellow': b'\xf2\xef\xba',
'pink': b'\xf7\xcf\xca',
'tan': b'\xcb\xa7\x77',
'salmon': b'\xf6\xcf\xb3',
'ochre': b'\xcc\x77\x22',
'cream': b'\xfa\xf5\xe7',
'buff': b'\xeb\xd6\x9a',
'blackish-brown': b'\x27\x1c\x13',
'reddish-brown': b'\x67\x32\x1a',
'pinkish-brown': b'\xf4\xc6\xa6',
'green': b'\x7c\x8a\x68',
'yellow-orange': b'\xff\xbf\x68',
'purple': b'\x5a\x43\x64',
'purple-black': b'\x3b\x2a\x42',
'purple-brown': b'\x4b\x35\x45',
'yellow-brown': b'\xcb\x97\x35',
}
colours = dict_to_array(colour_dict)
normal, rhs = fit_plane(colours=colours)
rgb_float = project_grid(normal=normal, rhs=rhs)
projected = project_irregular(colours=colours, normal=normal, rhs=rhs)
colour_strs = triples_to_hex(triples=colour_dict.values())
proj_strs = triples_to_hex(triples=projected.clip(min=0, max=255).astype(np.uint8))
ax2 = plot_2d(colours=colours, projected=projected, rgb_grid=rgb_float,
colour_strs=colour_strs, proj_strs=proj_strs)
ax3 = plot_3d(colours=colours, projected=projected,
colour_strs=colour_strs, proj_strs=proj_strs)
plot_correspondences(ax2=ax2, ax3=ax3, colours=colours, projected=projected)
plt.show()
demo()
</code></pre>
|
<python><numpy><linear-regression><linear-algebra>
|
2024-10-11 12:40:49
| 0
| 16,660
|
Reinderien
|
79,078,236
| 3,753,826
|
Capturing Matplotlib coordinates with mouse clicks using ipywidgets in Jupyter Notebook
|
<h1>Short question</h1>
<p>I want to capture coordinates by clicking different locations with a mouse on a Matplotlib figure inside a Jupyter Notebook. I want to use <code>ipywidgets</code> <strong>without</strong> using any Matplotlib magic command (like <code>%matplotlib ipympl</code>) to switch the backend and <strong>without</strong> using extra packages apart from Matplotlib, ipywidgets and Numpy.</p>
<h1>Detailed explanation</h1>
<p>I know how to achieve this using the <a href="https://matplotlib.org/ipympl/" rel="nofollow noreferrer">ipympl package</a> and the corresponding Jupyter <em>magic command</em> <code>%matplotlib ipympl</code> to switch the backend from <code>inline</code> to <code>ipympl</code> <a href="https://stackoverflow.com/a/42539917/3753826">(see HERE)</a>.</p>
<p>After installing <code>ipympl</code>, e.g. with <code>conda install ipympl</code>, and switching to the <code>ipympl</code> backend, one can follow <a href="https://stackoverflow.com/a/25525143/3753826">this procedure</a> to capture mouse click coordinates in Matplotlib.</p>
<pre class="lang-none prettyprint-override"><code>import matplotlib.pyplot as plt
# Function to store mouse-click coordinates
def onclick(event):
x, y = event.xdata, event.ydata
plt.plot(x, y, 'ro')
xy.append((x, y))
# %%
# Start Matplotlib interactive mode
%matplotlib ipympl
plt.plot([0, 1])
xy = [] # Initializes coordinates
plt.connect('button_press_event', onclick)
</code></pre>
<p><a href="https://i.sstatic.net/4aMThiKLl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aMThiKLl.png" alt="enter image description here" /></a></p>
<p>However, I find this switching back and forth between <code>inline</code> and <code>ipympl</code> backend quite confusing in a Notebook.</p>
<p>An alternative for interactive Matplotlib plotting in Jupyter Notebook is to use the <a href="https://ipywidgets.readthedocs.io/en/latest/" rel="nofollow noreferrer">ipywidgets package</a>. For example, with the <code>interact</code> command one can easily create sliders for Matplotlib plots, without the need to switch backend. <a href="https://stackoverflow.com/a/25024206/3753826">(see HERE)</a>.</p>
<pre class="lang-none prettyprint-override"><code>from ipywidgets import interact
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2 * np.pi)
def update(w=1.0):
plt.plot(np.sin(w * x))
plt.show()
interact(update);
</code></pre>
<p><a href="https://i.sstatic.net/Um4ADSBEl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Um4ADSBEl.png" alt="enter image description here" /></a></p>
<p>However, I have not found a way to use the <code>ipywidgets</code> package to capture <code>(x,y)</code> coordinates from mouse clicks, equivalent to my above example using <code>ipympl</code>.</p>
|
<python><numpy><matplotlib><jupyter-notebook><ipywidgets>
|
2024-10-11 12:34:11
| 2
| 17,652
|
divenex
|
79,078,033
| 4,262,344
|
yield from with a Generator class
|
<p>Because I need a self-reference inside the generator, I use Generator classes instead of generator functions. But I also want to use <code>yield from</code>, as in:</p>
<pre><code>def gen1():
yield "foo"
def gen2():
t = gen1()
yield from t
yield "bla"
for i in gen2():
print(i)
</code></pre>
<p>Is this the best way to simulate yield from with Generator classes?</p>
<pre><code>from collections.abc import Generator
class Test(Generator):
def __init__(self):
        self.count = 5
def send(self, *args):
print(f"Test.send({args})")
self.count -= 1
if self.count == 0:
raise StopIteration()
return "foo"
def throw(self, value):
print(f"Test.throw({value})")
class Test2(Generator):
def __init__(self):
        self.count = 5
self.test = Test()
def send(self, *args):
if self.test:
try:
return self.test.send(*args)
except StopIteration:
self.test = None
print(f"Test2.send({args})")
self.count -= 1
if self.count == 0:
raise StopIteration()
return "foo"
def throw(self, value):
print(f"Test2.throw({value})")
test2 = Test2()
while True:
    try:
        res = test2.send("bla")
    except StopIteration:
        break  # generator exhausted
    print(f"res = {res}")
</code></pre>
<p>How would I do the same with inheritance including multiple levels of inheritance?</p>
<pre><code>class Test2(Test): ...
</code></pre>
<p>Update:
For (multiple) inheritance, the classes just need to use <code>self.__count</code>, and <code>self.test = Test()</code> can become <code>self.__parent = super()</code>. So only the first part of the question remains unanswered.</p>
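<p>An alternative sketch, for comparison with the Generator-ABC approach above: keep the self-reference by building the generator from a method, so native <code>yield from</code> still works (class and method names are mine):</p>

```python
class SelfRefGen:
    # The generator is created from a bound method, so it can
    # read self.* while still delegating with native 'yield from'.
    def __init__(self):
        self.count = 2
        self._gen = self._run()

    def _run(self):
        yield from ("foo" for _ in range(self.count))
        yield "bla"

    def __iter__(self):
        return self._gen

    def send(self, value):
        return self._gen.send(value)

print(list(SelfRefGen()))  # ['foo', 'foo', 'bla']
```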
|
<python><generator>
|
2024-10-11 11:31:03
| 1
| 12,446
|
Goswin von Brederlow
|
79,077,951
| 21,185,825
|
Class is in another directory - returns 0 tests
|
<p>I want to create different classes for my tests. But when I run my unit tests, it does not run the test in my test class. The <code>setUp</code> function is never reached and zero tests are run.</p>
<pre class="lang-none prettyprint-override"><code>python -m unittest discover -s src -p "tests.py"
Ran 0 tests in 0.000s
OK
</code></pre>
<p>This is the main script <code>tests.py</code>:</p>
<pre><code>import unittest
def main():
unittest.main()
if __name__ == '__main__':
main()
</code></pre>
<p>The test class is in another file:</p>
<pre><code>import unittest
from Tools import Tools
class TestSomething(unittest.TestCase):
def setUp(self):
self.logger = Tools.get_logger(__file__)
self.logger.info("here")
def test_something(self):
....
</code></pre>
<p>I tried different ways to launch the script but I always have the same problem.</p>
<p>Can't the unit tests be separated into classes?</p>
<p>I add a screenshot that shows the name of the files present in my <code>src</code> folder:</p>
<p><a href="https://i.sstatic.net/tCS093Xy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCS093Xy.png" alt="enter image description here" /></a></p>
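<p>For comparison, here is a self-contained experiment (throwaway paths, purely illustrative) where discovery does pick up a test class from a separate file. Note that <code>-p "tests.py"</code> only ever matches a file literally named <code>tests.py</code>, so test classes in other files are never loaded:</p>

```shell
# Hypothetical throwaway layout: discovery matches files by pattern.
mkdir -p /tmp/discover_demo
cat > /tmp/discover_demo/test_example.py <<'EOF'
import unittest

class TestSomething(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)
EOF

python3 -m unittest discover -s /tmp/discover_demo -p "test_*.py" -v
```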
|
<python><python-unittest>
|
2024-10-11 11:09:30
| 2
| 511
|
pf12345678910
|
79,077,824
| 3,961,495
|
How to plot bar graphs with pandas using cut function and interval when NaNs are involved?
|
<p>I'm wrestling with the following:</p>
<p>I have a dataframe with 2 columns of float values that may include NaNs.</p>
<p>For example:</p>
<pre><code>In [5]: df = pd.DataFrame({'vals1': [10,20,25,15,np.nan, 2], 'vals2': [5, 11, 12, np.nan, np.nan, np.nan]})
In [6]: df
Out[6]:
vals1 vals2
0 10.0 5.0
1 20.0 11.0
2 25.0 12.0
3 15.0 NaN
4 NaN NaN
5 2.0 NaN
</code></pre>
<p>I would like to create "bins" using <code>vals1</code> and then plot a bar graph with the value counts for both <code>vals1</code> and <code>vals2</code>.</p>
<p>The critical point is that I would like to <em>reuse</em> the bins created from <code>vals1</code> in such a way that the value counts can be plotted and the NaNs are plotted along as a separate category/bin.</p>
<p>Without the NaNs I can do this:</p>
<pre><code>In [7]: bins = sorted(pd.cut(df['vals1'], 3).value_counts(dropna=True).index)
In [8]: bins
Out[8]:
[Interval(1.977, 9.667, closed='right'),
Interval(9.667, 17.333, closed='right'),
Interval(17.333, 25.0, closed='right')]
In [9]: pd.cut(df['vals2'], bins=bins)
Out[9]:
0 (1.977, 9.667]
1 (9.667, 17.333]
2 (9.667, 17.333]
3 NaN
4 NaN
5 NaN
Name: vals2, dtype: category
Categories (3, interval[float64, right]): [(1.977, 9.667] < (9.667, 17.333] < (17.333, 25.0]]
In [10]: plt.figure()
plt.bar([str(b) for b in bins], pd.cut(df['vals1'], bins=bins).value_counts().sort_values(), label='vals1', alpha=0.4)
plt.bar([str(b) for b in bins], pd.cut(df['vals2'], bins=bins).value_counts().sort_values(), label='vals2', alpha=0.4)
plt.legend()
</code></pre>
<p>This plots the non-NaN values nicely (see below).</p>
<p><strong>Question:</strong> But is there a way to add the NaN as a "category" or "bin" in an out-of-the-box way?</p>
<p><a href="https://i.sstatic.net/gT3WilIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gT3WilIz.png" alt="How to add the category NaN here?" /></a></p>
|
<python><pandas><matplotlib><nan><binning>
|
2024-10-11 10:29:46
| 1
| 3,127
|
Ytsen de Boer
|
79,077,781
| 4,245,090
|
How can I use Python with Selenium to automatically run a website (Chrome) in the background when I want to press a button and download?
|
<p>I would like to create an automated program in Python with the help of Selenium that opens a website, navigates to a section, presses a button and downloads the data. If the program is not running in the background, it works fine (then I don't use <code>chrome_options.add_argument("--headless=old")</code>). If the website runs in the background, the program runs without error, but the data is not downloaded; I don't think the button gets pressed.
I made the following Chrome settings:</p>
<pre><code>chrome_options = Options()
chrome_options.add_argument("--headless=old")  # "--headless" or "--headless=new" don't work
chrome_options.add_argument("--disable-gpu")
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument("--window-size=1920,1080")
chrome_options.add_argument("--log-level=3")
chrome_options.add_argument("--disable-extensions")
</code></pre>
<p>...</p>
<pre><code>driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)
driver.get("https://www.example.com")
time.sleep(1)
</code></pre>
<p>button press</p>
<p>...</p>
<pre><code>button = driver.find_element(By.XPATH, '//*[@data-form-button="primary"]')
button.click()
</code></pre>
<p>or</p>
<pre><code>button = wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="dialog-region"]/form/div/footer/ul/li[2]/button')))
driver.execute_script("arguments[0].click();", button)
</code></pre>
<p>or</p>
<pre><code>form = driver.find_element(By.XPATH, '//*[@id="dialog-region"]/form')
form.submit()
</code></pre>
<p>It doesn't work.</p>
<p>When I use the following Chrome setup:
<code>chrome_options.add_argument("--headless")</code> or <code>chrome_options.add_argument("--headless=new")</code>, a white panel appears until I close the program. The program only runs in the background when I use <code>"--headless=old"</code>. I use this for the button press:</p>
<pre><code>button = driver.find_element(By.XPATH, '//*[@data-form-button="primary"]')
button.click()
</code></pre>
<p>Version of Chrome: 129.0.6668.101</p>
<p>Version of Selenium : 4.25.0</p>
<p>Operation system: Windows 10</p>
<p>Thank you in advance for your help.</p>
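<p>One known headless-Chrome pitfall, which may or may not be the cause here: headless sessions do not save downloads unless a download directory is explicitly allowed. A configuration sketch (untested here; the download path is hypothetical):</p>

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Allow downloads in headless Chrome via prefs (path is hypothetical).
opts = Options()
opts.add_argument("--headless=new")
opts.add_experimental_option("prefs", {
    "download.default_directory": r"C:\downloads",
    "download.prompt_for_download": False,
})
driver = webdriver.Chrome(options=opts)
# For the old headless mode, the equivalent CDP command is:
driver.execute_cdp_cmd("Page.setDownloadBehavior",
                       {"behavior": "allow", "downloadPath": r"C:\downloads"})
```

<p>If the click itself works but no file appears, this download-behavior setting is the usual first thing to check.</p>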
|
<python><selenium-webdriver><button><background>
|
2024-10-11 10:16:09
| 1
| 627
|
Sevi
|
79,077,734
| 5,269,892
|
Pandas missing value representation in aggregated dataframe
|
<p>When applying an aggregation to a grouped pandas DataFrame, the aggregated output appears to contains different values for aggregated all-missing-value-columns, depending on the type of the dataframe column. Below is a minimal example, containing one non-missing-value (an integer, a string and a tuple), one <code>NaN</code>, and one <code>None</code> each:</p>
<pre><code>import pandas as pd
import numpy as np
a1 = pd.DataFrame({'a': [3, np.nan, None], 'b': [0,1,2]})
a2 = pd.DataFrame({'a': ['tree', np.nan, None], 'b': [0,1,2]})
a3 = pd.DataFrame({'a': [(0,1,2), np.nan, None], 'b': [0,1,2]})
a1.groupby('b')['a'].first()
a2.groupby('b')['a'].first()
a3.groupby('b')['a'].first()
a1.groupby('b')['a'].agg('first')
a2.groupby('b')['a'].agg('first')
a3.groupby('b')['a'].agg('first')
</code></pre>
<p>Looking at the <code>dtypes</code> of column <code>'a'</code>, it can be seen that these are <code>float64</code>, <code>object</code> and <code>object</code> for <code>a1</code>, <code>a2</code> and <code>a3</code>, respectively. The <code>None</code> in <code>a1</code> is converted to <code>NaN</code> at dataframe creation. Therefore I would have the following</p>
<p><strong>Expected output behavior:</strong></p>
<ul>
<li><code>a1</code>: <code>NaN</code> for rows 1 and 2 (that is the case)</li>
<li><code>a2</code>: <code>NaN</code> and <code>None</code> for rows 1 and 2 (not the case)</li>
<li><code>a3</code>: <code>NaN</code> and <code>None</code> for rows 1 and 2 (not the case)</li>
</ul>
<p><strong>Actual output:</strong></p>
<pre><code>b
0 3.0
1 NaN
2 NaN
Name: a, dtype: float64
b
0 tree
1 None
2 None
Name: a, dtype: object
b
0 (0, 1, 2)
1 None
2 None
Name: a, dtype: object
</code></pre>
<p><strong>Why does the aggregation change the data from <code>NaN</code> to <code>None</code> for row 1 in <code>a2</code> and <code>a3</code>?</strong> As the column is anyways of dtype object, there should be no issue in returning <code>NaN</code> and <code>None</code> for rows 1 and 2, respectively; and we are not in a scenario here, where any group to be aggregated contains both <code>NaNs</code> and <code>None</code>. The documentation (<a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.first.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.first.html</a>) is not very precise on this behavior either, it just mentions the returned value for all-NA-columns is NA.</p>
<hr />
<p><strong>Update:</strong></p>
<p>As mentioned in <a href="https://stackoverflow.com/a/79077748/5269892">@mozway's answer</a> further below, for pure NaN/None-groups, <code>skipna=False</code> can be used to preserve NaN and None respectively. However, this does not work when having both mixed non-missing-/missing-value and all-missing columns (e.g. <code>[[np.nan, None, 'tree'],[np.nan, None]]</code>), where we still would like to get the first non-missing value, as that would require passing <code>skipna=True</code>.</p>
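<p>For the mixed case described in the update, one hedged workaround is a plain Python-level aggregation (so slower than the built-in cython path) that returns the first non-missing value when one exists and otherwise aims to pass the group's own missing value through unchanged:</p>

```python
import numpy as np
import pandas as pd

# Hypothetical custom "first": first non-missing value if any, otherwise the
# group's own first (missing) entry, instead of pandas' normalized output.
def first_preserving(s: pd.Series):
    non_na = s.dropna()
    return non_na.iloc[0] if len(non_na) else s.iloc[0]

df = pd.DataFrame({
    "a": [np.nan, "tree", np.nan, None],   # group 0 is mixed, 1 and 2 all-NA
    "b": [0, 0, 1, 2],
})
out = df.groupby("b")["a"].agg(first_preserving)
```

<p>Group 0 yields <code>"tree"</code>, while the all-missing groups keep a missing value rather than requiring <code>skipna=False</code>.</p>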
|
<python><pandas><aggregate><nan><nonetype>
|
2024-10-11 10:01:31
| 1
| 1,314
|
silence_of_the_lambdas
|
79,077,664
| 2,086,511
|
Weird subprocess.Popen pipe issue
|
<p>I am coding a Turkish AI assistant. In order to chat with the language model, I load the language model into the assistant. The voice questions I ask are converted to text and entered as input, and the output is read aloud. Unfortunately, the code snippet below does not work as I want. Normally, I need to be able to send input to stdin and read output from stdout interactively using the subprocess.Popen pipe method, but this does not work.
I want to do this interactively.</p>
<p>If I run the following code, I can make chats without any problems via the terminal with the llama model:</p>
<pre class="lang-py prettyprint-override"><code>process = subprocess.Popen(
[
"llama-cli", "-m", MODEL_FILE_PATH,
"-i", "-cnv", "-p", "new session", "--color", "--temp", "0.5",
"--no-context-shift", "--no-warmup", "-n", "80"
],
# stdin=subprocess.PIPE,
# stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
bufsize=1
)
</code></pre>
<p>But if I uncomment <code>stdout=subprocess.PIPE,</code> the model does not load and hangs.
If I uncomment <code>stdin=subprocess.PIPE,</code>
the model loads, but it keeps giving irrelevant output, as if input were being provided continuously. The process cannot be killed with <code>ctrl+x</code>; it continues to run even if I close the terminal. I had to run</p>
<pre><code> killall llama-cli.
</code></pre>
<p>Would you please guide me?</p>
<p>EDIT1: The code below somehow ensures that the model is loaded. However, since it cannot detect when the model is loaded, the interactive chat fails. It is delayed. The results can only be seen after pressing enter a few times.</p>
<pre><code># Start the llama model
process = subprocess.Popen(
[
"llama-cli", "-m", MODEL_FILE_PATH,
"-i", "-cnv", "-p", "new session", "--color", "--temp", "0.5",
"--no-context-shift", "--no-warmup", "-n", "80"
],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
bufsize=1,
universal_newlines=True
)
# Wait for a fixed amount of time for the model to load (e.g., 10 seconds)
print("Waiting for the model to load...")
time.sleep(10) # Wait for 10 seconds (adjust this value based on your model's load time)
print("Model should be ready!")
# Now start the interactive input-output loop
while True:
user_input = input("You: ").strip()
if user_input:
process.stdin.write(user_input + "\n")
process.stdin.flush()
# Read output from the model
output = process.stdout.readline().strip()
if output:
print(f"Model Output: {output}")
</code></pre>
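<p>The usual fix for this class of problem is to drain the child's stdout on a background thread so reads never block the main loop, instead of sleeping and hoping the model is loaded. A self-contained sketch, where a tiny echo child stands in for <code>llama-cli</code>:</p>

```python
import queue
import subprocess
import sys
import threading

# A trivial echo child stands in for llama-cli so the sketch is runnable.
child = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import sys\nfor line in sys.stdin: print('echo: ' + line.strip(), flush=True)"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True, bufsize=1,
)

# Background thread drains stdout into a queue; the main loop polls the queue
# (with a timeout) instead of blocking on readline().
lines = queue.Queue()
threading.Thread(target=lambda: [lines.put(l) for l in child.stdout],
                 daemon=True).start()

child.stdin.write("hello\n")
child.stdin.flush()
reply = lines.get(timeout=5).strip()

child.stdin.close()
child.wait(timeout=5)
```

<p>With this pattern the main loop can keep accepting user input while output arrives asynchronously, which is what an interactive chat needs.</p>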
|
<python><shell><command-line><subprocess><pipe>
|
2024-10-11 09:41:37
| 1
| 328
|
kenn
|
79,077,528
| 451,878
|
Use JWT token with FactoryBoy's tests and FastAPI
|
<p>I'm having trouble finding examples of using FactoryBoy (with FastAPI) together with a JWT token.</p>
<p>My FastAPI + JWT integration is working. But when it comes to testing, I'm confused.</p>
<p>Here's my code :</p>
<pre><code>def fake_oauth2_token():
client_token = TestClient(app)
response = client_token.post(
"api/auth/login/",
data={
"grant_type": "password",
"username": "me",
"password": "mypass",
"scope": "",
"client_id": "",
"client_secret": "",
},
)
token = json.loads(response.content)
return token["access_token"] if token else {}
def test_get_flower():
""" """
flower = FlowerFactory()
mock_session = MagicMock()
mock_session.query.return_value.all.return_value = [flower]
app.dependency_overrides[oauth2_scheme] = lambda: fake_oauth2_token()
app.dependency_overrides[get_db] = lambda: mock_session
client = TestClient(app)
response = client.get("api/flower/")
assert response.status_code == 200
</code></pre>
<p>I get this error :</p>
<blockquote>
<p>TypeError: <code>hash must be unicode or bytes, not unittest.mock.MagicMock</code></p>
</blockquote>
<p>And, if I remove this lines :</p>
<pre><code>app.dependency_overrides[oauth2_scheme] = lambda: fake_oauth2_token()
</code></pre>
<p>then I get a 401 error, which makes sense: it's an authentication issue.</p>
<p>How can I inject my token in headers please?</p>
|
<python><fastapi><factory-boy>
|
2024-10-11 09:01:36
| 1
| 1,481
|
James
|
79,077,213
| 2,300,597
|
Anaconda3 - downgrade python version to 3.11.8 in the current (base) environment
|
<p>Windows 11 OS</p>
<p>I installed a clean Anaconda3 platform just 3 days ago. It came with Python 3.12.4. But it seems some things (PySpark related) just don't work with python 3.12.4.</p>
<p>For example, I am getting an error similar to this one when I try running some PySpark examples.</p>
<p><a href="https://stackoverflow.com/questions/78240322/error-from-pyspark-code-to-showdataframe-py4j-protocol-py4jjavaerror">Error from PySpark code to showdataFrame : py4j.protocol.Py4JJavaError</a></p>
<pre><code>py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 1 times, most recent failure: Lost task 1.0 in stage 0.0 (TID 1) (laptop001 executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
</code></pre>
<p>So now I want to downgrade the python version in my Anaconda3 <code>base</code> env to 3.11.8.</p>
<p>Is it possible to downgrade the existing python version to 3.11.8 in the current (base) environment? If so, how?</p>
<p>I don't want to create a new conda environment (with 3.11.8) as most of the tutorials suggest. I just want to set python 3.11.8 in the current base environment. I am not finding any info anywhere if this can be done.</p>
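<p>For what it's worth, conda does allow pinning the interpreter version inside an existing environment; the caveat (and likely why tutorials avoid it) is that downgrading base may force conda to change many packages. A sketch of the standard command:</p>

```shell
# Sketch: pin the interpreter in the existing base environment.
# Caveat: the solver may need to downgrade/upgrade many dependent packages.
conda activate base
conda install python=3.11.8
python --version   # should report 3.11.8 if the solve succeeded
```
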
|
<python><anaconda><conda><anaconda3>
|
2024-10-11 07:38:18
| 0
| 39,631
|
peter.petrov
|
79,077,054
| 2,156,537
|
Creating many hard links to a single file gradually slows down hard link creation
|
<p>I've written a backup script in Python that uses hard links to create full backups while saving a lot of space. When a new backup is made, the most recent backup is compared to the source files. If a source file has not changed (as determined by the file size and the modification time) since the last backup, a hard link to the most recent backup is made instead of copying the source file (think <code>rsync</code> with <code>--link-dest=$previous_backup</code> or Time Machine on macOS). This saves a lot of space since most of my files never change (e.g., photos, music, videos, downloaded emails).</p>
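<p>For concreteness, the link-or-copy decision described above can be sketched as follows (paths and the "unchanged" test are simplified; <code>os.link</code> behaves the same way on NTFS, though this sketch is portable):</p>

```python
import os
import shutil
import tempfile

# Link when size+mtime match the previous backup, otherwise copy.
def backup_file(src, prev_backup, dest):
    try:
        prev, cur = os.stat(prev_backup), os.stat(src)
        unchanged = (prev.st_size == cur.st_size
                     and prev.st_mtime == cur.st_mtime)
    except FileNotFoundError:
        unchanged = False
    if unchanged:
        os.link(prev_backup, dest)    # hard link instead of a copy
    else:
        shutil.copy2(src, dest)       # real copy (copy2 preserves mtime)

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "photo.jpg")
with open(src, "wb") as f:
    f.write(b"data")
first = os.path.join(tmp, "backup1.jpg")
shutil.copy2(src, first)              # first full backup: a real copy
second = os.path.join(tmp, "backup2.jpg")
backup_file(src, first, second)       # unchanged source -> hard link
linked = os.stat(first).st_ino == os.stat(second).st_ino
```
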
<p>After the eighth backup, where most files are hard-linked, the backups gradually take longer and longer. See the plot below, where the x-axis is the number of backups after the first one (hence, the number of hard links for most files), and the y-axis is the number of minutes to complete the backup. The size of the backups is about 350 GB in 166,000 files.</p>
<p><a href="https://i.sstatic.net/JpKDxPv2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpKDxPv2.png" alt="Plot of the time needed to complete a backup versus the number of hardlinks each file has on average. A trend line shows that each backup after the 10th takes about one minute longer than the previous one." /></a></p>
<p>If I force a backup to copy all files without creating hard links, then all subsequent backups are fast once again. But, the time taken once again gradually increases after the 9th or 10th backup.</p>
<p>Is this behavior inherent to hard links or the NTFS file system? Is there some way to work around it?</p>
<p>Some relevant writings found through googling:</p>
<ul>
<li><a href="https://community.osr.com/t/ntfs-hardlink-creation-performance-issues/45461/2" rel="nofollow noreferrer">Post discussing similar behavior observed with some guesses as to the cause in the replies</a></li>
<li><a href="https://github.com/microsoft/WSL/issues/873#issuecomment-425272829" rel="nofollow noreferrer">Microsoft engineer discusses the general architecture of NTFS and how it relates to system calls</a></li>
</ul>
|
<python><windows><backup><ntfs><hardlink>
|
2024-10-11 06:40:19
| 0
| 617
|
Mark H
|
79,076,840
| 17,889,492
|
matplotlib path patch outside axes
|
<p>I want to create a tab for a plot. I have a <code>PathPatch</code> object that I would like to place just above the top spine (i.e. outside the plotting area). Second, I would like a method that dynamically adjusts the width of the path to match the width of the text.</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.path as mpath
fig, ax = plt.subplots()
Path = mpath.Path
path_data = [
(Path.MOVETO, (0, 0)),
(Path.CURVE4, (0, 1.0)),
(Path.CURVE4, (1, 1.0)),
(Path.CURVE4, (1, 1)),
(Path.LINETO, (4, 1)),
(Path.CURVE4, (4, 1)),
(Path.CURVE4, (5, 1.05)),
(Path.CURVE4, (5, 0)),
(Path.CLOSEPOLY, (0, 0)),
]
codes, verts = zip(*path_data)
path = mpath.Path(verts, codes)
patch = mpatches.PathPatch(path, facecolor='r', alpha=0.5)
ax.add_patch(patch)
x, y = zip(*path.vertices)
#line, = ax.plot(x, y, 'go-')
text_obj = plt.text(x = 0.75, y = 0.35, s = 'Text', fontsize = 32)
ax.grid()
ax.axis('equal')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/GPHUU6bQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPHUU6bQ.png" alt="enter image description here" /></a></p>
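<p>A sketch of one way to get both pieces (using a <code>FancyBboxPatch</code> rather than the hand-built path, which is a simplification): draw in axes coordinates with <code>clip_on=False</code> so the tab sits above the top spine, and size it from the rendered text extent.</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
from matplotlib.patches import FancyBboxPatch

fig, ax = plt.subplots()
# Text placed just above the axes (y > 1 in axes coords), not clipped.
text = ax.text(0.05, 1.02, "Text", transform=ax.transAxes, fontsize=32,
               va="bottom", clip_on=False)
fig.canvas.draw()                               # realize the text extent
# Convert the pixel-space text bbox back into axes coordinates.
bbox = text.get_window_extent().transformed(ax.transAxes.inverted())
tab = FancyBboxPatch((bbox.x0 - 0.02, bbox.y0), bbox.width + 0.04, bbox.height,
                     boxstyle="round,pad=0.02", transform=ax.transAxes,
                     clip_on=False, facecolor="r", alpha=0.5, zorder=0)
ax.add_patch(tab)
```

<p>Because the patch width is computed from the text's window extent, it adapts automatically when the label or font size changes.</p>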
|
<python><matplotlib>
|
2024-10-11 04:51:43
| 2
| 526
|
R Walser
|
79,076,445
| 4,377,521
|
It appears that Dramatiq with asyncio operates with just a single worker
|
<p>I am trying to run several jobs in parallel.<br>
When I define an actor as a sync function, it uses different workers.<br>
But when the function is <code>async</code>, it uses only one thread (worker).<br></p>
<p>I generally understand the difference between asynchronous parallelism and <code>multithreading</code>/<code>multiprocessing</code>, but is there a way to use all spawned workers with <code>async</code> actors?</p>
<p>For example, you might be limited by a library you use inside the actor.<br>
Making the actor synchronous and spawning a new event loop inside it doesn't feel like an elegant solution, though.</p>
<p>The code I use:</p>
<pre><code>redis_broker = RedisBroker(
host=settings.redis_url,
port=settings.redis_port,
db=settings.redis_db,
password=settings.redis_password,
middleware=[CurrentMessage()] # with AsyncIO() in async case
)
dramatiq.set_broker(redis_broker)
</code></pre>
<p>Sync actor:</p>
<pre><code>@dramatiq.actor
def jobs(param: int):
logger.info(f"Started {param}")
time.sleep(param)
logger.info(f"Ended {param}")
</code></pre>
<p>Async actor:</p>
<pre><code>@dramatiq.actor
async def jobs(param: int):
logger.info(f"Started {param}")
await asyncio.sleep(param)
logger.info(f"Ended {param}")
</code></pre>
<p>And I run it as:</p>
<pre><code>g = group([
jobs.send(1),
jobs.send(2),
jobs.send(3),
]).run()
</code></pre>
<p>Sync results:</p>
<pre><code>[2024-10-11 02:38:42,251] [PID 420038] [Thread-3] [dramatiq] [INFO] Started 3
[2024-10-11 02:38:42,251] [PID 420038] [Thread-5] [dramatiq] [INFO] Started 2
[2024-10-11 02:38:42,251] [PID 420038] [Thread-6] [dramatiq] [INFO] Started 3
[2024-10-11 02:38:42,252] [PID 420038] [Thread-7] [dramatiq] [INFO] Started 1
[2024-10-11 02:38:42,252] [PID 420038] [Thread-8] [dramatiq] [INFO] Started 2
[2024-10-11 02:38:42,252] [PID 420038] [Thread-4] [dramatiq] [INFO] Started 1
[2024-10-11 02:38:43,253] [PID 420038] [Thread-7] [dramatiq] [INFO] Ended 1
[2024-10-11 02:38:43,254] [PID 420038] [Thread-4] [dramatiq] [INFO] Ended 1
[2024-10-11 02:38:44,254] [PID 420038] [Thread-5] [dramatiq] [INFO] Ended 2
[2024-10-11 02:38:44,254] [PID 420038] [Thread-8] [dramatiq] [INFO] Ended 2
[2024-10-11 02:38:45,255] [PID 420038] [Thread-6] [dramatiq] [INFO] Ended 3
[2024-10-11 02:38:45,255] [PID 420038] [Thread-3] [dramatiq] [INFO] Ended 3
</code></pre>
<p>Async results:</p>
<pre><code>[2024-10-11 02:50:03,245] [PID 422276] [Thread-1] [dramatiq] [INFO] Started 3
[2024-10-11 02:50:03,245] [PID 422276] [Thread-1] [dramatiq] [INFO] Started 2
[2024-10-11 02:50:03,245] [PID 422276] [Thread-1] [dramatiq] [INFO] Started 3
[2024-10-11 02:50:03,245] [PID 422276] [Thread-1] [dramatiq] [INFO] Started 1
[2024-10-11 02:50:03,245] [PID 422276] [Thread-1] [dramatiq] [INFO] Started 2
[2024-10-11 02:50:03,245] [PID 422276] [Thread-1] [dramatiq] [INFO] Started 1
[2024-10-11 02:50:04,246] [PID 422276] [Thread-1] [dramatiq] [INFO] Ended 1
[2024-10-11 02:50:04,246] [PID 422276] [Thread-1] [dramatiq] [INFO] Ended 1
[2024-10-11 02:50:05,247] [PID 422276] [Thread-1] [dramatiq] [INFO] Ended 2
[2024-10-11 02:50:05,247] [PID 422276] [Thread-1] [dramatiq] [INFO] Ended 2
[2024-10-11 02:50:06,246] [PID 422276] [Thread-1] [dramatiq] [INFO] Ended 3
[2024-10-11 02:50:06,246] [PID 422276] [Thread-1] [dramatiq] [INFO] Ended 3
</code></pre>
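<p>Note that the async log above still shows overlap: all jobs start together and finish after 1, 2 and 3 seconds, exactly as one event loop on one thread would schedule them. A plain-asyncio sketch (no Dramatiq) of the same single-thread-but-concurrent behavior:</p>

```python
import asyncio
import time

# One event loop on one thread still overlaps the awaits, so wall time
# tracks the longest job (0.3 s here), not the sum (0.6 s).
async def job(delay):
    await asyncio.sleep(delay)
    return delay

async def main():
    return await asyncio.gather(job(0.1), job(0.2), job(0.3))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
```

<p>So a single worker thread is not necessarily a throughput problem for awaitable work; it only becomes one when the actor does CPU-bound or blocking calls.</p>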
|
<python><dramatiq>
|
2024-10-10 23:43:55
| 1
| 2,938
|
sashaaero
|
79,076,434
| 15,848,470
|
How to create a sequence of floats in Polars of type List[f64]
|
<p>I have a polars List[f64], column "a". I want to create a new List[f64], column "b", which is a sequence from the min to the max of that row's list in column a, in intervals of 0.5, inclusive. So for a row with a column "a" list of <code>[0.0, 3.0, 2.0, 6.0, 2.0]</code>, the value in column b should be <code>[0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0]</code>.</p>
<p>This is my solution, but it has an error.</p>
<pre><code>df = df.with_columns(
pl.col("a").list.eval(
pl.arange(pl.element().min(), pl.element().max(), 1)
.append(pl.arange(pl.element().min(), pl.element().max(), 1) + 0.5)
.append(pl.element().max())
.append(pl.element().max() - 0.5)
.unique()
.sort(),
parallel=True,
)
.alias("b")
)
</code></pre>
<p>It fails in the edge case where a row's list in column a contains only one unique value. Since polars only seems to have an integer <code>arange()</code> function, when I create the second list and add 0.5, a single unique value results in two values in the output: the actual value seen, and the actual value seen - 0.5.</p>
<p>Here is some toy data. Column "a" contains the lists, the min's and max's of which should be used to define the boundaries of the sequence, which is column "b".</p>
<pre><code>pl.DataFrame([
pl.Series('a', [[4.0, 5.0, 3.0, 7.0, 0.0, 1.0, 6.0, 2.0], [2.0, 4.0, 3.0, 0.0, 1.0], [1.0, 2.0, 3.0, 0.0, 4.0], [1.0, 3.0, 2.0, 0.0], [1.0, 0.0]], dtype=pl.List(pl.Float64)),
pl.Series('b', [[0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0], [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0], [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0], [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0], [0.0, 0.5, 1.0]], dtype=pl.List(pl.Float64))
])
</code></pre>
<p>Speed is pretty important here, I am rewriting in Polars for that purpose here. Thanks.</p>
|
<python><dataframe><python-polars>
|
2024-10-10 23:35:59
| 2
| 684
|
GBPU
|
79,076,277
| 5,978,560
|
Find truncated strings for provided word in python
|
<p>I am attempting to do a string comparison that checks for common truncated words in the string while doing a word-by-word comparison.</p>
<p>Does a Python library exist for finding variations of truncated words in the English language?</p>
<pre><code>if "inc" in truncated_stings_for("Incorporated"):
pass
</code></pre>
<p>Below are more examples of potentially truncated words in strings, for clarity. I will be finding string A using string B:</p>
<pre><code>a = "LINCOLN ELECTRIC HOLDINGS"
b = "LINCOLN ELEC HLDGS"
</code></pre>
<pre><code>a = "INCYTE CORP"
b = "Incyte Co."
</code></pre>
<pre><code>a = "RLJ Lodging Trust"
b = "RLJ LODGING TR"
</code></pre>
<pre><code>a = "KRATOS DEFENSE & SECURITY"
b = "KRATOS DEFENSE & SEC"
</code></pre>
<pre><code>a = "Brookdale Senior Living"
b = "BROOKDALE SR LIVING"
</code></pre>
<p>Note: I do NOT have a complete list of potential words that I might come across that might be truncated, so this isn't something I can hard code in variations for.</p>
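<p>Absent a ready-made library, one library-free heuristic worth sketching: treat b's word as a truncation of a's word when it starts with the same letter and is an in-order subsequence of it (so "HLDGS" matches "HOLDINGS", "SR" matches "SENIOR", "Co." matches "CORP"). It is deliberately permissive, so false positives are possible.</p>

```python
# Subsequence-based truncation check: no word list needed.
def is_truncation(short, full):
    short, full = short.lower().rstrip("."), full.lower()
    if not short or short[0] != full[0]:
        return False
    chars = iter(full)
    # 'ch in chars' consumes the iterator, enforcing in-order matching
    return all(ch in chars for ch in short)

def match(a, b):
    wa, wb = a.split(), b.split()
    return len(wa) == len(wb) and all(
        x.lower() == y.lower() or is_truncation(y, x)
        for x, y in zip(wa, wb))
```

<p>A real pipeline would likely combine this with fuzzy scoring (e.g. a ratio cutoff) to filter the permissive matches.</p>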
|
<python>
|
2024-10-10 22:00:32
| 0
| 526
|
brw59
|
79,076,276
| 14,256,643
|
Python scrapy playwright getting error ValueError: Page.evaluate: The future belongs to a different loop
|
<p>Here are the full error logs:</p>
<pre><code>line 514, in wrap_api_call
raise rewrite_error(error, f"{parsed_st['apiName']}: {error}") from None
ValueError: Page.evaluate: The future belongs to a different loop than the one specified as the loop argument
</code></pre>
<p>I am trying to click the submit button after getting the captcha response from the API. My full code:</p>
<pre><code>import scrapy
from anticaptchaofficial.recaptchav2proxyless import recaptchaV2Proxyless
from scrapy_playwright.page import PageMethod
class RecaptchaSpider(scrapy.Spider):
name = "recaptcha_spider"
start_urls = ["https://www.google.com/recaptcha/api2/demo"]
delay_after_submit = 5 # Time in seconds to wait after submit
def start_requests(self):
for url in self.start_urls:
yield scrapy.Request(
url,
dont_filter=True,
meta={
"playwright": True,
"playwright_include_page": True,
},
callback=self.solve_captcha,
)
async def solve_captcha(self, response):
# Step 1: Solve the CAPTCHA using AntiCaptcha API
api_key = "my api key" # Replace with your actual API key
solver = recaptchaV2Proxyless()
solver.set_key(api_key)
solver.set_website_url(response.url)
solver.set_website_key("6Le-wvkSAAAAAPBMRTvw0Q4Muexq9bi0DJwx_mJ-") # Replace with Google reCAPTCHA site key
captcha_solution = solver.solve_and_return_solution()
if captcha_solution:
self.logger.info(f"Solved CAPTCHA: {captcha_solution}")
# Step 2: Execute JavaScript to fill the CAPTCHA solution and click the submit button
page = response.meta["playwright_page"]
await page.evaluate(f'document.getElementById("g-recaptcha-response").innerHTML = "{captcha_solution}";') # Replace with the correct ID if different
await page.click("button[type='submit']") # Replace with the correct selector for the submit button
# Wait for the next page or any other action you want to perform
await page.wait_for_timeout(self.delay_after_submit)
# Step 3: Handle the next step after clicking submit (if necessary)
# You might want to retrieve the new page content here
new_content = await page.content()
self.logger.info("New page content after submit:")
self.logger.info(new_content)
else:
self.logger.error("Failed to solve CAPTCHA.")
</code></pre>
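<p>Two things worth checking for this error (stated as assumptions, not a confirmed diagnosis): scrapy-playwright's documented requirement of the asyncio Twisted reactor (<code>TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"</code>), and the blocking <code>solve_and_return_solution()</code> call sitting inside the async callback, which can leave the Playwright page bound to a different loop. A standalone sketch of keeping blocking work off the event loop with <code>run_in_executor</code> (the function below is a stand-in for the solver):</p>

```python
import asyncio
import time

def blocking_solver():
    time.sleep(0.1)        # stands in for the network-bound captcha request
    return "token-123"     # hypothetical solution string

async def solve():
    # Offload the blocking call so the coroutine (and any Playwright objects)
    # keep running on the event loop they were created on.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, blocking_solver)

solution = asyncio.run(solve())
```

<p>In the spider, the equivalent would be awaiting <code>run_in_executor(None, solver.solve_and_return_solution)</code> instead of calling it directly.</p>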
|
<python><python-3.x><web-scraping><scrapy>
|
2024-10-10 22:00:11
| 1
| 1,647
|
boyenec
|
79,076,160
| 3,127,764
|
Which Emojis are supported by the Zendesk API? (It looks like a SUBSET of the ones the web UI supports!)
|
<p>I use <a href="https://developer.zendesk.com/api-reference/ticketing/tickets/ticket_comments/" rel="nofollow noreferrer">Zendesk ("ZD") developer API to update tickets with comments</a>. I frequently put markdown into ticket comments (using the "body" item and not the "html_body" item when I do that), and it seems that <em>some</em> emojis are supported, e.g. <code>:boom:</code> and <code>:x:</code>. Others seem to be supported in ZD's web interface, but they don't render via the API, e.g. <code>:up arrow:</code> or <code>:chains:</code>. The former render as expected in the resulting comment, and the latter just render as text.</p>
<p>Does anyone know what explains this and what the comprehensive list of API-supported emojis might be (so I don't have to come up with it myself)?</p>
<p>I can use unicode characters like \u2191 and \u2193 to get what I want in some cases, but emojis are so much more fun!</p>
<p>Some things I tried in hopes of teasing out any idiosyncrasies:</p>
<ul>
<li>adding spaces before / after, e.g. <code>:x: :chains: :boom:</code> instead of <code>:x::chains::boom:</code>. (In both cases, only the <code>:x:</code> and <code>:boom:</code> emojis rendered as expected)</li>
<li>searching the web and ZD's documentation</li>
</ul>
|
<python><emoji><zendesk-api>
|
2024-10-10 21:07:51
| 1
| 6,307
|
HaPsantran
|
79,075,956
| 219,153
|
How to improve random access of a video frame with Python?
|
<p>I'm using <code>pims</code> library (<a href="https://github.com/soft-matter/pims" rel="nofollow noreferrer">https://github.com/soft-matter/pims</a>) to access frames from .MOV file with over 25K frames, 3840 Γ 2160, H.264 (High Profile), 60fps.</p>
<pre><code>import pims
from time import perf_counter
video = pims.Video('video/v1.MOV')
t0 = perf_counter()
img = video[1000]
t1 = perf_counter()
print(f'{(t1-t0):.3f}s {img.shape}')
</code></pre>
<p>Here are the timings on a fairly fast PC, after file open:</p>
<ul>
<li>frame #100: 2.97s</li>
<li>frame #1,000: 28.39s</li>
<li>frame #10,000: 280.19s</li>
</ul>
<p>AFAIK, this library uses <code>ffmpeg</code>, which can access frame #10,000 with <code>ffmpeg -ss 00:02:46.66 -i v1.MOV -frames:v 1 f10k.png</code> practically instantaneously. Is there a way to improve frame access with <code>pims</code> or some other method?</p>
|
<python><ffmpeg><video-capture>
|
2024-10-10 19:51:10
| 1
| 8,585
|
Paul Jurczak
|
79,075,821
| 8,734,075
|
Finding the difference in rows of a table, and returning only the columns where values are different
|
<p>I'm working on analyzing some data. I have a way of doing this in Excel, but it's slow and requires too much manual work. I'd like to find a more effective way to get what I'm looking for.</p>
<p>Here's the scenario:
I have a DB table (multiple, but let's just focus on a single one for now) that has many rows and many columns. Think of this as transactional data and we can call it Table0. It looks like the sample below.</p>
<p>Table0 has differences in columns 0,2,3,5 and has identical data in columns 1,4. I need to process this table, and only return the columns with differences: columns 0,2,3,5.</p>
<p>I'm looking for a solution that will work with either Python or SQL (postgres) that can provide the sample output table below. It doesn't seem like a complex issue, but I don't have the luxury of time to get a custom solution running properly.</p>
<p>Are there any well-known methods of manipulating my data like this?</p>
<pre><code>Table0
C0 C1 C2 C3 C4 C5
R0 aaa ax ay aq 123 555
R1 aab ax ay aq 123 555
R2 aac ax ay aw 123 557
R3 aad ax ax aw 123 555
R4 aae ax ay aw 123 559
R5 aaf ax ay ae 123 555
Output
C0 C2 C3 C5
R0 aaa ay aq 555
R1 aab ay aq 555
R2 aac ay aw 557
R3 aad ax aw 555
R4 aae ay aw 559
R5 aaf ay ae 555
</code></pre>
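<p>In pandas, one sketch of this is a single boolean column mask built from <code>nunique()</code> (in Postgres the analogous idea would be per-column <code>count(distinct ...)</code> checks, which need dynamic SQL):</p>

```python
import pandas as pd

# Recreate the sample Table0 from above.
df = pd.DataFrame({
    "C0": ["aaa", "aab", "aac", "aad", "aae", "aaf"],
    "C1": ["ax"] * 6,
    "C2": ["ay", "ay", "ay", "ax", "ay", "ay"],
    "C3": ["aq", "aq", "aw", "aw", "aw", "ae"],
    "C4": [123] * 6,
    "C5": [555, 555, 557, 555, 559, 555],
})
# Keep only the columns whose values are not all identical.
out = df.loc[:, df.nunique() > 1]
```

<p>Note that <code>nunique()</code> ignores NaN by default; pass <code>dropna=False</code> if a column mixing one value with NaN should count as "different".</p>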
|
<python><sql><pandas><database><postgresql>
|
2024-10-10 19:05:57
| 3
| 374
|
MKUltra
|
79,075,777
| 4,048,657
|
How can I run backward() on individual image pixels without causing an error of trying to backward through the graph a second time?
|
<p>I have this code</p>
<pre class="lang-py prettyprint-override"><code>...
vertex_id = 275
deform_verts.retain_grad() # input
predicted_silhouette.retain_grad() # output
impact_img = torch.zeros_like(predicted_silhouette, requires_grad=False)
for i in range(image_size):
for j in range(image_size):
pixel = predicted_silhouette[i][j]
pixel.retain_grad()
pixel.backward()
impact = deform_verts.grad[vertex_id]
impact_img[i][j] += impact.sum()
plt.imshow(impact_img.detach().cpu().numpy())
</code></pre>
<p>I'm trying to create an image based off how a single entry of deform_verts affects the entire image. For this purpose, I go through every pixel of the output image and call <code>.backward()</code> and insert its gradient into a new image for visualization purposes. However, when I call <code>backward()</code> on a first pixel, I cannot call it on a second one because I suspect intermediate variables have been used in the backpropagation already. I tried to use <code>retain_graph</code> but I don't think this does what I want since the image looks far from what I expected.</p>
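<p>Since the quantity wanted here is the derivative of every pixel with respect to a single input entry, forward-mode AD computes it in one pass (a Jacobian-vector product with a one-hot tangent), avoiding per-pixel <code>backward()</code> and <code>retain_graph</code> entirely. A toy sketch, not the mesh/silhouette pipeline:</p>

```python
import torch

# Toy stand-in for the differentiable render: a (4, 5) "image" from 5 inputs.
def render(x):
    return torch.outer(x, x).sum(0).repeat(4, 1)

x = torch.arange(1.0, 6.0)
k = 2                               # the single input entry of interest
tangent = torch.zeros_like(x)
tangent[k] = 1.0                    # one-hot direction selects d/dx_k
# jvp returns (render(x), d render / d x_k) for ALL pixels in one pass
_, impact_img = torch.autograd.functional.jvp(render, (x,), (tangent,))
```

<p>In the original code, <code>render</code> would be the silhouette renderer and <code>k</code> the flattened index of the vertex coordinate, giving <code>impact_img</code> directly without the double loop.</p>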
|
<python><pytorch>
|
2024-10-10 18:53:43
| 1
| 1,239
|
Cedric Martens
|
79,075,630
| 12,005,060
|
Vercel Deployment Not Creating Tables in PostgreSQL with FastAPI/SQLModel
|
<p>I'm deploying a FastAPI app with SQLModel on Vercel. The environment variable <code>DATABASE_URL</code> is set correctly in Vercel, and the app is able to connect to the PostgreSQL database hosted on Supabase.</p>
<p>However, the problem is that during deployment, tables are not being created in the database. Despite connecting successfully, the app only interacts with the existing tables (if they are already created), but it fails to create new ones when I try to initialize the database schema.</p>
<p>Hereβs what Iβve done so far:</p>
<ol>
<li>Verified that the <code>DATABASE_URL</code> environment variable is set correctly in Vercel's environment variables.</li>
<li>The FastAPI app works perfectly locally, and tables are created when I run it on my local machine.</li>
<li>Iβve ensured that the table creation logic is being triggered in the FastAPI app. Hereβs the code for the engine and table creation:</li>
</ol>
<p><code>db.py</code></p>
<pre class="lang-py prettyprint-override"><code> from sqlmodel import SQLModel, create_engine
import os
database_url = os.getenv("DATABASE_URL")
if not database_url:
raise ValueError("DATABASE_URL environment variable not set")
#Note: My DATABASE_URL also starts with postgresql://.... and is hosted online
engine = create_engine(database_url)
def init_db():
SQLModel.metadata.create_all(engine)
</code></pre>
<p><code>main.py</code></p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI
import uvicorn
from contextlib import asynccontextmanager
from db import engine, init_db
@asynccontextmanager
async def lifespan(app: FastAPI):
try:
init_db()
yield
finally:
await engine.dispose()
app = FastAPI(
lifespan=lifespan)
if __name__ == "__main__":
uvicorn.run(app)
</code></pre>
<ol start="4">
<li>I've tried redeploying the app multiple times, and even attempted different ways of triggering table creation, but Vercel doesnβt seem to create them on the PostgreSQL database.</li>
<li>Checked logs in Vercel, and there are no errors related to database connections or table creation.</li>
</ol>
<p><strong>Question:</strong>
Why might Vercel not be creating new tables in the PostgreSQL database even though the app connects to it successfully, and why does localhost create the tables on the same URL while Vercel doesn't? Is there something I need to configure differently on Vercel for table creation to work? Any advice or debugging steps would be greatly appreciated!</p>
|
<python><postgresql><sqlalchemy><fastapi><sqlmodel>
|
2024-10-10 17:58:36
| 0
| 325
|
Mubashar Hussain
|
79,075,564
| 6,141,238
|
What is the best way to fit a quadratic polynomial to p-dimensional data and compute its gradient and Hessian matrix?
|
<p>I have been trying to use the scikit-learn library to solve this problem. Roughly:</p>
<pre><code>from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
# Make or load an n x p data matrix X and n x 1 array y of the corresponding
# function values.
poly = PolynomialFeatures(degree=2)
Xp = poly.fit_transform(X)
model = LinearRegression()
model.fit(Xp, y)
# Approximate the derivatives of the gradient and Hessian using the relevant
# finite-difference equations and model.predict.
</code></pre>
<p>As the above illustrates, <code>sklearn</code> makes the design choice to separate polynomial regression into <code>PolynomialFeatures</code> and <code>LinearRegression</code> rather than combine these into a single function. This separation has conceptual advantages but also a major drawback: it effectively prevents <code>model</code> from offering the methods <code>gradient</code> and <code>hessian</code>, and <code>model</code> would be significantly more useful if it did.</p>
<p>My current work-around uses finite-difference equations and <code>model.predict</code> to approximate the elements of the gradient and Hessian (as described <a href="https://math.stackexchange.com/questions/199174/computing-the-elements-of-a-hessian-matrix-with-finite-difference">here</a>). But I don't love this approach β it is sensitive to floating-point error and the "exact" information needed to build the gradient and Hessian is already contained in <code>model.coef_</code>.</p>
<p>Is there any more elegant or accurate method to fit a p-dimensional polynomial and find its gradient and Hessian within Python? I would be fine with one that uses a different library.</p>
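For concreteness, the finite-difference work-around described in the question might be sketched like this (a minimal illustration with made-up data and a known quadratic, not the hoped-for exact method):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # n x p data matrix
y = (X ** 2).sum(axis=1) + X[:, 0] * X[:, 1]  # a known quadratic, for checking

poly = PolynomialFeatures(degree=2)
model = LinearRegression().fit(poly.fit_transform(X), y)

def predict(x):
    """Evaluate the fitted quadratic at a single point x of shape (p,)."""
    return model.predict(poly.transform(x.reshape(1, -1)))[0]

def fd_gradient(x, h=1e-5):
    """Central-difference gradient of the fitted model at x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (predict(x + e) - predict(x - e)) / (2 * h)
    return g

print(fd_gradient(np.ones(3)))  # analytic gradient here is [3, 3, 2]
```

Central differences are exact (up to roundoff) for a quadratic, so the floating-point sensitivity mentioned above mostly comes from the choice of `h` on non-quadratic surrogates.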
|
<python><scikit-learn><linear-regression><polynomials><hessian-matrix>
|
2024-10-10 17:32:12
| 3
| 427
|
SapereAude
|
79,075,522
| 952,161
|
How to efficiently import many modules while providing a "one module index"
|
<p>I have a project with the current structure:</p>
<pre><code>├── program.py
└── interfaces
    ├── domain1.py
    ├── domain2.py
    └── index.py
</code></pre>
<p>with
<em>interfaces/index.py</em>:</p>
<pre><code>from . import domain1
from . import domain2
</code></pre>
<p><em>interfaces/domain1.py</em>:</p>
<pre><code>import re  # fast import

def print_local():
    print("This is domain1")
</code></pre>
<p><em>interfaces/domain2.py</em>:</p>
<pre><code>import torch  # slow import

def print_local():
    print("This is domain2")
</code></pre>
<p>and <em>program.py</em>:</p>
<pre><code>from interfaces import index as id

def main():
    id.domain1.print_local()

if __name__ == "__main__":
    main()
</code></pre>
<p>The reason why index.py is importing all domain*.py files and is used in program.py (versus importing domain directly in program) is to offer a one-stop import namespace that contains all domains. This is very convenient for discoverability purposes.</p>
<p>This follows a similar idea to what pandas provides, for example, where users generally do:</p>
<pre><code>import pandas as pd
# use pd.XXX
</code></pre>
<h2>The problem</h2>
<p>As our real project contains tons of imports, this is very inefficient. Here program.py is importing torch while not using it and thus is quite slow:</p>
<pre><code>> time python program.py
This is domain1
real 0m0.480s
user 0m0.425s
sys 0m0.054s
</code></pre>
<p>whereas it should take about 20x less time if it were not importing torch.</p>
<h2>What I have tried</h2>
<p>In program.py:</p>
<pre><code>from interfaces import domain1 as id.domain1
</code></pre>
<p>=> This is not a valid syntax</p>
<pre><code>from interfaces import domain1 as id_dm1
</code></pre>
<p>=> This solves the issue of preformance but I'm losing the one stop domain name.</p>
<p>I've also looked at other repos like dagster/pandas, but their solutions seem to load all of their code.</p>
<h2>Question</h2>
<p>How can I offer the feature of:</p>
<pre><code>from interfaces import index as id
id.domain1.print_local()
</code></pre>
<p>while not importing other domains in index ?</p>
<p>I'm also open to other paradigms to solve these kind of problematic.</p>
<p>I would like to avoid as much as possible to use importlib or manipulate sys.path to solve this.</p>
<p>The solution should use relatively standard mechanism.</p>
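One relatively standard mechanism worth noting here is PEP 562, module-level `__getattr__` (Python 3.7+), which defers a sub-module import until the attribute is first accessed. A minimal single-file simulation, using a `types.ModuleType` stand-in for `interfaces/index.py` and `json` as a stand-in for the slow domain module:

```python
import types

# Stand-in for interfaces/index.py: a module object whose __getattr__ is only
# consulted when normal attribute lookup fails (PEP 562).
index = types.ModuleType("index")

def _lazy(name):
    if name == "domain2":
        import json as mod          # stands in for the slow `from . import domain2`
        setattr(index, name, mod)   # cache: __getattr__ won't fire again for this name
        return mod
    raise AttributeError(name)

index.__getattr__ = _lazy

assert "domain2" not in index.__dict__  # nothing heavy has been imported yet
print(index.domain2.dumps({"x": 1}))    # first access triggers the import
```

In a real package, the `def __getattr__(name): ...` would simply live at the top level of `index.py`, so `id.domain1.print_local()` keeps working while untouched domains are never imported.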
|
<python><import>
|
2024-10-10 17:19:36
| 1
| 433
|
Fabrice E.
|
79,075,509
| 1,275,942
|
Remove packages that do not exist anymore and can be edited in Pip
|
<p>I install a local package in editable mode.</p>
<p><code>pip install -e ~/subdir/test_example</code></p>
<p>Then, I switch branches to a branch where <code>subdir/test_example/pyproject.toml</code> does not exist yet.</p>
<p><code>git switch master</code></p>
<p>Now Pip is in a broken state:</p>
<pre><code>$ pip freeze
ERROR: Exception:
Traceback (most recent call last):
...
File "<my_venv>\lib\site-packages\pip\_internal\vcs\git.py", line 501, in get_repository_root
r = cls.run_command(
File "<my_venv>\lib\site-packages\pip\_internal\vcs\versioncontrol.py", line 650, in run_command
return call_subprocess(
File "<my_venv>\lib\site-packages\pip\_internal\utils\subprocess.py", line 141, in call_subprocess
proc = subprocess.Popen(
File "C:\Python39\lib\subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Python39\lib\subprocess.py", line 1420, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
NotADirectoryError: [WinError 267] The directory name is invalid
</code></pre>
<p>I can manually remove the editable install and fix Pip: <code>pip uninstall test_example</code>, after which <code>pip freeze</code> and other commands are fine.</p>
<p>The obvious solution here is "ensure your python packages exist across all branches of a repo", but that may not be an option.</p>
<p>Is there a way to remove all broken links to editable packages?</p>
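As a hedged diagnostic sketch: recent pip versions implement editable installs (PEP 660) by dropping `__editable__.<name>.pth` files into site-packages, and a stale one can point at a project directory that no longer exists on the current branch. A helper like the following could flag candidates to uninstall by hand (the filename pattern and path-style `.pth` contents are assumptions about pip internals and may vary by pip version):

```python
import pathlib
import site

def stale_editables(site_packages):
    """Return (pth filename, missing target) pairs for broken editable hooks."""
    stale = []
    for pth in pathlib.Path(site_packages).glob("__editable__*.pth"):
        for line in pth.read_text().splitlines():
            # path-style .pth lines name the project directory directly;
            # import-hook style lines start with "import" and are skipped here
            if line and not line.startswith("import") and not pathlib.Path(line).is_dir():
                stale.append((pth.name, line))
    return stale

for sp in site.getsitepackages():
    for name, target in stale_editables(sp):
        print(f"{name} -> missing {target}")
```

Each flagged package could then be removed with `pip uninstall <name>` as in the manual fix above.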
|
<python><pip><python-packaging>
|
2024-10-10 17:14:10
| 0
| 899
|
Kaia
|
79,075,233
| 9,878,263
|
Launch a subset of docker compose services (containers). Use a script?
|
<p>How do I start only some of the containers from the list of services present in a docker-compose.yml file?</p>
<p>MS Visual Studio supports it, see <a href="https://learn.microsoft.com/en-us/visualstudio/containers/launch-profiles?view=vs-2022" rel="nofollow noreferrer">here</a>.</p>
<p>This question has already been asked: <a href="https://stackoverflow.com/questions/30233105/docker-compose-up-for-only-certain-containers">docker-compose up for only certain containers</a> but I'm not satisfied with the answers: I want to start <em>several</em> services in one command.</p>
<p>If it can't be done with one simple command I can use a Python script or a Powershell one (powershell v7 if possible) to do the job.</p>
|
<python><powershell><docker-compose>
|
2024-10-10 15:53:27
| 0
| 1,591
|
Mathieu CAROFF
|
79,075,095
| 10,518,698
|
How to get site id of One Drive for Business with MS Graph API
|
<p>I'm struggling to get the site id for my OneDrive link with the MS Graph API.</p>
<p>This is my one drive link - <code>https://sharedspace-my.sharepoint.com/:f:/r/personal/naruto/Documents/Shared%20Folders/hinata?csf=1&web=1&e=M3GWjd</code></p>
<p>and I took the host and site url from the above link.</p>
<pre><code>HOST = r"sharedspace-my.sharepoint.com"
SITE_URL = r"/personal/naruto/Documents/Shared%20Folders/hinata"
</code></pre>
<p>I can get the access token as well; however, when I try to get the site id, I get an <code>ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1108)</code> error. I confirmed that the link is accessible.</p>
<p>This is the code I tried.</p>
<pre><code>auth_url = f'https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token'
data = {
    'client_id': client_id,
    'scope': 'files.readwrite.all',
    'username': username,
    'password': password,
    'grant_type': 'password',
    'response_type': 'code',
    'client_secret': client_secret
}
response = requests.post(auth_url, data=data)
access_token = response.json()['access_token']
url = "https://graph.microsoft.com/v1.0/sites/" + host + ":" + site_url
headers = { "Authorization" : "Bearer " + access_token }
r = requests.get(url, headers=headers)
data = json.loads(r.content)
site_id = ','.join(data['id'].split(',')[1:])
</code></pre>
<p>Am I missing something here?</p>
|
<python><azure><microsoft-graph-api><onedrive>
|
2024-10-10 15:12:01
| 1
| 513
|
JSVJ
|
79,074,576
| 72,813
|
Different PyInstaller behaviour when signing app using QWebEngineView
|
<p>I have an app using a QWebEngineView widget, and when I create a distribution package with PyInstaller, I get different behaviour depending on whether or not I sign the app. I created a small reproducible example (<em>tester.py</em>):</p>
<pre><code>import time
import sys
from PySide6.QtCore import QUrl
from PySide6.QtWebEngineWidgets import QWebEngineView
from PySide6.QtWidgets import QApplication, QWidget, QPushButton, QVBoxLayout
app = QApplication(sys.argv)
web = QWebEngineView()
web.setHtml('<html><body></body></html>')
wdg = QWidget()
vl = QVBoxLayout(wdg)
btn1 = QPushButton('Clear')
btn2 = QPushButton('Something')
btn3 = QPushButton('Google')
vl.addWidget(web)
vl.addWidget(btn1)
vl.addWidget(btn2)
vl.addWidget(btn3)
wdg.setLayout(vl)
btn1.clicked.connect(lambda x: web.setHtml('<html><body></body></html>'))
btn2.clicked.connect(lambda x: web.load(QUrl("https://something.com")))
btn3.clicked.connect(lambda x: web.load(QUrl("https://google.com")))
wdg.show()
sys.exit(app.exec())
</code></pre>
<p>This works fine using <code>python tester.py</code>, the contents can be cleared and both sites load fine. If I create a distribution using <code>pyinstaller tester.py</code>, then running <code>./dist/tester/tester</code> works just as well:</p>
<p><a href="https://i.sstatic.net/2B4IaSM6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2B4IaSM6.jpg" alt="QWebEngineView working fine" /></a></p>
<p>However, if I sign the app with <code>pyinstaller --codesign-identity XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX tester.py</code>, then when running the binary, I get different behaviours. In a Mac with an Intel Core i7 running MacOS Big Sur, the clear page and something.com load fine, but google.com seems to disable the QWebEngineView widget. If I sign the app on a Mac with an Apple Silicon M2 Max running MacOS Sequoia 15.0.1, then the widget seems to be constantly disabled:</p>
<p><a href="https://i.sstatic.net/eAYQmQYv.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAYQmQYv.jpg" alt="QWebEngineView disabled when loading content" /></a></p>
<p>The codesign identity is valid (masked above for obvious reasons). I've tried using a windowed version with -w, no difference. I also tried specifying the target architecture with --target-architecture, still no difference.</p>
<p>All test computers used Python 3.12, PyInstaller 6.7.0, PySide6 6.6.1.</p>
<p>Any ideas? I need to distribute the app, hence needing to sign it.</p>
|
<python><pyinstaller><pyside6><qwebengineview>
|
2024-10-10 13:14:36
| 1
| 448
|
nicolaum
|
79,073,992
| 4,636,579
|
How to implement a preprocessor for code in python
|
<p>I have some code and want to scan it line by line. The problem is that in code, statements are built up over several lines, like</p>
<pre><code>def UART(ONE):
    ONE = UUUU001 + 4
    print(ONE)
</code></pre>
<p>I would like to have a tool that scans the code and prepares it for me to work on: extracting names, calls, parameters, etc. I was given the hint to use ast, but I have the impression that I do not understand ast very well.</p>
<p>For example, I made a class MyVisitor which should do the preprocessing part, but I am not able to get the correct statements from the code because "if" statements, function definitions, and for-loops are not visited recursively and therefore appear as one line (shown in the output). (On the other hand, function calls are assembled into one statement.) So what am I missing?</p>
<pre class="lang-py prettyprint-override"><code>import ast

class MyVisitor(ast.NodeVisitor):
    def __init__(self):
        self.konvertiert = []

    def generic_visit(self, node):
        if isinstance(node, ast.Constant):
            pass
        elif isinstance(node, ast.Store):
            pass
        elif isinstance(node, ast.Load):
            pass
        elif isinstance(node, ast.Name):
            pass
        elif isinstance(node, ast.Attribute):
            pass
        elif isinstance(node, ast.Assign):
            self.konvertiert.append(ast.unparse(node))
        elif isinstance(node, ast.BinOp):
            pass
        elif isinstance(node, ast.If):
            self.konvertiert.append(ast.unparse(node))
            #super().generic_visit(node)
        elif isinstance(node, ast.Call):
            self.konvertiert.append(ast.unparse(node))
            #super().generic_visit(node)
        elif isinstance(node, ast.FunctionDef):
            self.konvertiert.append(ast.unparse(node))
            #super().generic_visit(node)
        else:
            super().generic_visit(node)

code = '''
ONE = 7
if ONE==7:
    print('Found one')
open(file, line)
foo = abrakdabra(15, 5,
                 16, 67,
                 eins, 99)
result = get_nummer(142, 141, 140)     # 10
def UART(ONE):                         # 11
    ONE = UUUU001 + 4
    print(ONE)
WWWWWWWW(eins, 32, irgendwas, 'a[6]')  # 12
name2 = 'Elise'                        # 13
'''

tree = ast.parse(code)
vis = MyVisitor()
vis.konvertiert = []
vis.visit(tree)

lauf = 0
for i in vis.konvertiert:
    lauf += 1
    print(f'{lauf}: {i}')
</code></pre>
<p>Output:</p>
<pre><code>1: ONE = 7
2: if ONE == 7:
    print('Found one')
3: open(file, line)
4: foo = abrakdabra(15, 5, 16, 67, eins, 99)
5: result = get_nummer(142, 141, 140)
6: def UART(ONE):
    ONE = UUUU001 + 4
    print(ONE)
7: WWWWWWWW(eins, 32, irgendwas, 'a[6]')
8: name2 = 'Elise'
</code></pre>
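For comparison, a visitor that keeps calling `generic_visit` after recording a node does descend into nested bodies. A minimal sketch (collecting only assignments is an assumption made for illustration):

```python
import ast

# By overriding visit_Assign and *continuing the walk* with generic_visit,
# nested bodies (function definitions, if-blocks, ...) are reached too.
class AssignCollector(ast.NodeVisitor):
    def __init__(self):
        self.found = []

    def visit_Assign(self, node):
        self.found.append(ast.unparse(node))
        self.generic_visit(node)  # keep descending into child nodes

code = """
x = 1
def f():
    y = 2
"""

v = AssignCollector()
v.visit(ast.parse(code))
print(v.found)  # ['x = 1', 'y = 2']
```

The commented-out `super().generic_visit(node)` calls in the question's `If`/`Call`/`FunctionDef` branches are exactly what would re-enable this recursion there.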
|
<python><abstract-syntax-tree>
|
2024-10-10 10:36:59
| 0
| 681
|
Coliban
|
79,073,763
| 11,814,996
|
facebook/m2m100_418M model - how to translate longer sequences of text
|
<p>I have extracted the following text from Wikipedia's Wiki article (<a href="https://en.wikipedia.org/wiki/Wiki" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Wiki</a>):</p>
<pre><code>A wiki is a form of hypertext publication on the internet which is collaboratively edited and managed by its audience directly through a web browser. A typical wiki contains multiple pages that can either be edited by the public or limited to use within an organization for maintaining its internal knowledge base.
Wikis are powered by wiki software, also known as wiki engines. Being a form of content management system, these differ from other web-based systems such as blog software or static site generators in that the content is created without any defined owner or leader. Wikis have little inherent structure, allowing one to emerge according to the needs of the users. Wiki engines usually allow content to be written using a lightweight markup language and sometimes edited with the help of a rich-text editor. There are dozens of different wiki engines in use, both standalone and part of other software, such as bug tracking systems. Some wiki engines are free and open-source, whereas others are proprietary. Some permit control over different functions (levels of access); for example, editing rights may permit changing, adding, or removing material. Others may permit access without enforcing access control. Further rules may be imposed to organize content. In addition to hosting user-authored content, wikis allow those users to interact, hold discussions, and collaborate.
</code></pre>
<p>and tried to translate it to French in the way shown in the model card, and found that part of the text was discarded from the translation.</p>
<p>This is the French translation I got,</p>
<pre><code>Un wiki est une forme de publication hypertexte sur Internet qui est collaborativement Γ©ditΓ© et gΓ©rΓ© par son public directement par le biais d'un navigateur Web. Un wiki typique contient plusieurs pages qui peuvent soit Γͺtre Γ©ditΓ© par le public ou limitΓ© Γ utiliser dans une organisation pour maintenir sa base de connaissances interne. Wikis sont alimentΓ©es par le logiciel wiki, Γ©galement connu sous le nom de moteurs wiki. Γtre une forme de systΓ¨me de gestion du contenu, ces derniers diffΓ¨rent d'autres systΓ¨mes web tels que le logiciel de blog ou les gΓ©nΓ©rateurs de site statiques dans lequel le contenu est créé sans propriΓ©taire ou leader dΓ©fini. Wikis ont peu de structure inhΓ©rente, permettant Γ l'un d'Γ©merger en fonction des besoins des utilisateurs. Les moteurs wiki permettent gΓ©nΓ©ralement le contenu
</code></pre>
<p>and the English version of that, according to Google Translate, is (I wanted to verify the translation since I don't know French):</p>
<pre><code>A wiki is a form of hypertext publication on the Internet that is collaboratively edited and managed by its audience directly through a web browser. A typical wiki contains multiple pages that can either be edited by the public or restricted to use within an organization to maintain its internal knowledge base. Wikis are powered by wiki software, also known as wiki engines. Being a form of content management system, these differ from other web systems such as blogging software or static site generators in which content is created without a defined owner or leader. Wikis have little inherent structure, allowing one to emerge based on user needs. Wiki engines typically allow content
</code></pre>
<p>So, as can be seen from both (facebook/m2m100_418M and Google Translate), the translation from m2m100_418M is shorter, and I presume the input was truncated because of the model's sequence-length limit.</p>
<p>How can I translate longer sequences using this pretrained model? And if the option is to feed shorter sequences in a loop, how do I find the threshold so that chunks end at sentence boundaries (not at a character limit) and the context does not get lost?</p>
<p>Here is my code:</p>
<pre><code>>>> # translate Hindi to French
>>> tokenizer.src_lang = "eng"
>>> # `inp_text` variable has the text from above
>>> inp_encoded = tokenizer(inp_text, return_tensors="pt")
>>> generated_tokens = model.generate(**inp_encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
</code></pre>
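The loop-based option mentioned above — feeding shorter sequences while still ending each chunk on a sentence boundary — could be sketched by greedily packing whole sentences under a budget. The regex splitter is naive, and the budget is counted in words here as an assumption; in practice you would count tokenizer tokens against the model's limit instead:

```python
import re

def chunk_sentences(text, max_words=120):
    """Greedily pack whole sentences into chunks under a word budget."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], []
    for s in sentences:
        # flush the current chunk if adding this sentence would overflow it
        if current and len(" ".join(current + [s]).split()) > max_words:
            chunks.append(" ".join(current))
            current = []
        current.append(s)
    if current:
        chunks.append(" ".join(current))
    return chunks

print(chunk_sentences("One. Two two. Three three three.", max_words=3))
# ['One. Two two.', 'Three three three.']
```

Each chunk would then be translated separately with `model.generate` and the results concatenated; cross-chunk context is still lost, which is an inherent limitation of chunked translation.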
|
<python><facebook><nlp><huggingface-transformers><machine-translation>
|
2024-10-10 09:45:20
| 0
| 3,172
|
Naveen Reddy Marthala
|
79,073,468
| 9,974,205
|
How to handle repeated events in a time range intersection calculation in Pandas?
|
<p>I am working on a Python script using Pandas to analyze event data. My goal is to calculate the intersection of active events.</p>
<p>My code works fine if the same event doesn't happen twice. However, if it happens twice, my code returns empty dataframes.</p>
<p>Here it is the defective code:</p>
<pre><code>import pandas as pd
def is_active(df, event, start_date, end_date):
    """Filters events that are active within a time range."""
    filter = (df['event_name'] == event) & (
        (df['start_date'] <= end_date) & (df['end_date'] >= start_date)
    )
    return df[filter].shape[0] > 0

def is_not_active(df, event, start_date, end_date):
    """Filters events that are inactive within a time range."""
    filter = (df['event_name'] == event) & (
        (df['start_date'] <= end_date) & (df['end_date'] >= start_date)
    )
    return df[filter].empty

def generate_active_intersection(df_events, active_events, inactive_events, combination):
    """Generates a DataFrame with active events and filters inactive ones."""
    # Define initial intersection range as the maximum start_date and minimum end_date among active events
    df_filtered = df_events[df_events['event_name'].isin(active_events)]
    if df_filtered.empty:
        return pd.DataFrame()  # No common active events

    max_start_date = df_filtered['start_date'].max()
    min_end_date = df_filtered['end_date'].min()

    # Ensure the time range is valid
    if max_start_date > min_end_date:
        return pd.DataFrame()  # No overlap of all active events

    # Verify that there is a temporal intersection among all active events
    for event in active_events:
        if not is_active(df_events, event, max_start_date, min_end_date):
            return pd.DataFrame()  # No overlap of all active events

    # Verify that inactive events are NOT active in the same time range
    for inactive_event in inactive_events:
        event_without_no = inactive_event.replace("NO_", "")
        if not is_not_active(df_events, event_without_no, max_start_date, min_end_date):
            return pd.DataFrame()  # Some event that should be inactive is active

    # Calculate active time in seconds
    active_time_seconds = (min_end_date - max_start_date).total_seconds()

    # If all conditions are met, return the common time range with the catalog format
    return pd.DataFrame({
        'start_date': [max_start_date],
        'end_date': [min_end_date],
        'catalog': [combination],  # Use the original catalog combination
        'active_time_seconds': [active_time_seconds]
    })

def process_catalog(df_events, df_catalog):
    """Processes each catalog combination and generates the corresponding DataFrames."""
    results = []
    for index, row in df_catalog.iterrows():
        combination = row['catalog']
        events = combination.split(', ')
        active_events = [e for e in events if not e.startswith('NO_')]
        inactive_events = [e for e in events if e.startswith('NO_')]
        df_result = generate_active_intersection(df_events, active_events, inactive_events, combination)
        if not df_result.empty:
            results.append(df_result)

    if results:
        return pd.concat(results, ignore_index=True)
    else:
        return pd.DataFrame(columns=['start_date', 'end_date', 'catalog', 'active_time_seconds'])  # No valid results
</code></pre>
<p>The sample data is here:</p>
<pre><code># Example DataFrames:
events = pd.DataFrame({
    'event_name': ['C', 'A', 'B', 'D', 'E', 'F'],
    'start_date': pd.to_datetime([
        '2023-10-01 09:45:00',
        '2023-10-01 12:00:00',
        '2023-10-02 14:30:00',
        '2023-10-04 16:00:00',
        '2023-10-05 18:15:00',
        '2023-10-05 18:20:00'
    ]),
    'end_date': pd.to_datetime([
        '2023-10-03 11:30:00',
        '2023-10-05 18:00:00',
        '2023-10-06 23:59:59',
        '2023-10-07 08:45:00',
        '2023-10-08 20:00:00',
        '2023-10-08 10:00:00',
    ])
})

candidates = pd.DataFrame({
    'catalog': ['A, B, C', 'B, C, D', 'A, NO_E, C']
})

# Process and get the results
df_results = process_catalog(events, candidates)
print(f"Catalog results with start_time, end_time, name and active time in seconds: \n", df_results, "\n")
</code></pre>
<p>The code works fine as it is, but if you change the last "F" to an "A" in "event_name", the whole thing fails and the function returns an empty dataframe as a result.</p>
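To see why the duplicate breaks things: with two 'A' rows, the `max(start_date)`/`min(end_date)` pair computed in `generate_active_intersection` is taken across *different* rows, which can produce an inverted (empty) window even though each 'A' row individually overlaps the other events. Using the two 'A' rows from the modified data:

```python
import pandas as pd

# The two 'A' rows after changing the last 'F' to 'A'
df = pd.DataFrame({
    "event_name": ["A", "A"],
    "start_date": pd.to_datetime(["2023-10-01 12:00", "2023-10-05 18:20"]),
    "end_date": pd.to_datetime(["2023-10-05 18:00", "2023-10-08 10:00"]),
})

max_start = df["start_date"].max()  # comes from the second 'A' row
min_end = df["end_date"].min()      # comes from the first 'A' row
print(max_start > min_end)  # True -> treated as "no overlap", empty result
```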
|
<python><pandas><dataframe><data-analysis><intersection>
|
2024-10-10 08:41:16
| 1
| 503
|
slow_learner
|
79,073,176
| 4,505,998
|
Speeding up Dataset.__getitems__
|
<p>I have a model with a <code>forward</code> function that receives optional parameters, like this:</p>
<pre class="lang-py prettyprint-override"><code>class MyModel(nn.Module):
    ...
    def forward(self, interactions: torch.Tensor, user_features: Optional[torch.Tensor] = None):
        """
        Where _N_ is the number of items
        interactions (Tensor): Nx2
        user_features (Tensor): Nx(number of features)
        """
        ...
</code></pre>
<p>For this reason, my <code>Dataset</code> also returns a dict, based on a DataFrame</p>
<pre class="lang-py prettyprint-override"><code>class StackOverflowDataset(torch.utils.data.Dataset):
    def __init__(self, data, user_features=None):
        self._data = data
        self._user_features = user_features

    def __getitem__(self, idx):
        if self._user_features is None:
            return {'interactions': self._data[idx]}
        else:
            return {
                'interactions': self._data[idx],
                'user_features': self._user_features[self._data[idx]['user']]
            }

    def __len__(self):
        return len(self._data)
</code></pre>
<p>Most of the training time is spent on <code>__getitem__</code>, which makes me wonder: is having optional arguments on <code>MyModel.forward</code> bad practice? I guess most of the time is spent playing with numpy <em>item by item</em>, and then converting it to PyTorch's tensors and moving it to the GPU. Is there any way I can "pre-process" all of this beforehand but still use a <code>Dataloader</code> that returns a dict?</p>
<p>It seems that <a href="https://pytorch.org/data/beta/dp_tutorial.html" rel="nofollow noreferrer">Datapipe</a> also goes row by row. Would it be possible to directly return a dict with the whole 'interactions' tensor, and then have the <code>Dataloader</code> slice it up?</p>
<blockquote>
<p><strong>EDIT:</strong> It seems DataPipe will be deprecated</p>
</blockquote>
<p><a href="https://i.sstatic.net/xFQCEyui.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFQCEyui.png" alt="profiling of my model" /></a></p>
|
<python><pytorch><pytorch-dataloader>
|
2024-10-10 07:23:08
| 1
| 813
|
David DavΓ³
|
79,073,159
| 9,542,989
|
MS Teams Bot Cannot Respond on Web Chat
|
<p>I have created a bot via MS Teams. I registered the bot in MS Bot Framework portal and then created an app that is connected to this bot on Teams.</p>
<p>I am able to chat with my bot and it responds without an issue. However, it does not work when I try to use the web chat option that is provided to test the bot in the Bot Framework portal. When I try to respond, it fails with a 'forbidden' error. What could be causing this? This is the Python code I am using to send responses:</p>
<pre><code>credentials = MicrosoftAppCredentials(
    app_id=APP_ID,
    password=APP_PASSWORD
)
connector = ConnectorClient(credentials, base_url=SERVICE_URL)
connector.conversations.send_to_conversation(
    CONVERSATION_ID,
    Activity(
        type=ActivityTypes.message,
        channel_id=CHANNEL_ID,
        recipient=ChannelAccount(
            id=RECIPIENT_ID
        ),
        from_property=ChannelAccount(
            id=FROM_ID,
        ),
        text="HELLO WORLD!"
    )
)
)
</code></pre>
<p>All of these attributes such as the <code>SERVICE_URL</code> and so forth are extracted from the message that is sent by the bot.</p>
<p>How can I refactor this to also reply on the web chat?</p>
|
<python><botframework><microsoft-teams>
|
2024-10-10 07:14:08
| 0
| 2,115
|
Minura Punchihewa
|
79,073,040
| 6,455,731
|
How to type partial application?
|
<p>How to type partial application (preferably with 3.12 syntax)? With the following simple implementation:</p>
<pre class="lang-py prettyprint-override"><code>class simple_partial:
    def __init__(self, f: Callable, *args, **kwargs) -> None:
        self.f = f
        self.args = args
        self.kwargs = kwargs

    def __call__(self, *args, **kwargs):
        return self.f(*(*self.args, *args), **(self.kwargs | kwargs))

partially = simple_partial(lambda x, y, z=3: (x, y, z), 1)
partially(2)
</code></pre>
<p>The only thing I can think of is something like:</p>
<pre class="lang-py prettyprint-override"><code>class simple_partial[T, **P]:
    def __init__(self, f: Callable[P, T], *args: P.args, **kwargs: P.kwargs) -> None:
        self.f = f
        self.args = args
        self.kwargs = kwargs

    def __call__(self, *args: P.args, **kwargs: P.kwargs) -> T:
        return self.f(*(*self.args, *args), **(self.kwargs | kwargs))
</code></pre>
<p>but this isn't correct because it doesn't express that args/kwargs are potentially incomplete and really <code>P.args</code> and <code>P.kwargs</code> will be different in <code>__init__</code> and <code>__call__</code>. Maybe this can be done with <code>typing.Concatenate</code> and two <code>ParamSpec</code>s but <code>Concatenate</code> doesn't allow two <code>ParamSpec</code>s.</p>
<p>Edit: Two <code>ParamSpec</code> version:</p>
<pre class="lang-py prettyprint-override"><code>class simple_partial[T, **P, **P2]:
    def __init__(
        self, f: Callable[Concatenate[P, P2], T], *args: P.args, **kwargs: P.kwargs
    ) -> None:
        self.f = f
        self.args = args
        self.kwargs = kwargs

    def __call__(self, *args: P2.args, **kwargs: P2.kwargs) -> T:
        return self.f(*(*self.args, *args), **(self.kwargs | kwargs))
</code></pre>
<p><code>Concatenate</code> does not allow two <code>ParamSpec</code>s though.</p>
|
<python><python-typing>
|
2024-10-10 06:45:43
| 1
| 964
|
lupl
|
79,072,552
| 1,230,724
|
Cumulative sum with overrides on condition
|
<p>I'm trying to calculate the balance (the level) of an inventory over time and have incoming and outgoing quantities as input (and a category for each type of inventory). Usually I would calculate <code>incoming - outgoing</code> and carry over to the next period (cumulative sum), but in this case an added difficulty is that the balance can be overridden at various points in time which "resets" the balance to these values (and incoming/outgoings need to be added to these overrides from this point in time forward).</p>
<p>I came up with a way to calculate this by offsetting the calculated balance (= cumsum(incoming - outgoing)) when there's an override balance (by the negative calculated cumsum, i.e. setting the inventory to 0 when there's an override balance), but that doesn't work when there are multiple overrides at different times.</p>
<p>This is my current approach which works fine for the given dataframe (=only one override (<code>bal</code>) per category (<code>cat</code>)).</p>
<pre class="lang-py prettyprint-override"><code>>>> df = pd.DataFrame({
... 'cat': ['a', 'a', 'b', 'b', 'a', 'a', 'a', 'a', 'a', 'b'],
... 'time': [1, 2, 1, 2, 4, 5, 6, 7, 8, 9],
... 'in': [None, 10, None, None, None, 20, 11, 9, 10, None],
... 'out': [10, None, None, 20, 10, 5, None, 30, None, None],
... 'bal': [None, None, None, None, 50, None, None, None, None, None]
^ at this time, the balance should be set to 50, irrespective of prior `in` and `out`.
... })
>>>
>>> # cumsum goes by row, so order matters
>>> df = df.sort_values(by=['time'])
>>> df
cat time in out bal
0 a 1 NaN 10.0 NaN
2 b 1 NaN NaN NaN
1 a 2 10.0 NaN NaN
3 b 2 NaN 20.0 NaN
4 a 4 NaN 10.0 50.0
5 a 5 20.0 5.0 NaN
6 a 6 11.0 NaN NaN
7 a 7 9.0 30.0 NaN
8 a 8 10.0 NaN NaN
9 b 9 NaN NaN NaN
>>>
>>>
>>> # Calculate the balance as if 'bal' (the override) wasn't there (cumsum(in - out))
>>> df['inout'] = df['in'].fillna(0) - df['out'].fillna(0)
>>> df['cumsum'] = df[['cat', 'inout']].groupby(['cat']).cumsum()
>>> df
cat time in out bal inout cumsum
0 a 1 NaN 10.0 NaN -10.0 -10.0
2 b 1 NaN NaN NaN 0.0 0.0
1 a 2 10.0 NaN NaN 10.0 0.0
3 b 2 NaN 20.0 NaN -20.0 -20.0
4 a 4 NaN 10.0 50.0 -10.0 -10.0 <-- we want to override this with the value from 'bal' (50) and continue the calculation
5 a 5 20.0 5.0 NaN 15.0 5.0
6 a 6 11.0 NaN NaN 11.0 16.0
7 a 7 9.0 30.0 NaN -21.0 -5.0
8 a 8 10.0 NaN NaN 10.0 5.0
9 b 9 NaN NaN NaN 0.0 -20.0
>>>
>>> # Find the positions where a balance would override the calculated balance
>>> df['correction'] = -df.loc[pd.notnull(df['bal']), 'cumsum']
>>> df
cat time in out bal inout cumsum correction
0 a 1 NaN 10.0 NaN -10.0 -10.0 NaN
2 b 1 NaN NaN NaN 0.0 0.0 NaN
1 a 2 10.0 NaN NaN 10.0 0.0 NaN
3 b 2 NaN 20.0 NaN -20.0 -20.0 NaN
4 a 4 NaN 10.0 50.0 -10.0 -10.0 10.0
5 a 5 20.0 5.0 NaN 15.0 5.0 NaN
6 a 6 11.0 NaN NaN 11.0 16.0 NaN
7 a 7 9.0 30.0 NaN -21.0 -5.0 NaN
8 a 8 10.0 NaN NaN 10.0 5.0 NaN
9 b 9 NaN NaN NaN 0.0 -20.0 NaN
>>>
>>>
>>> # Calculate with the corrected balance
>>> df['inout2'] = df['in'].fillna(0) - df['out'].fillna(0) + df['bal'].fillna(0) + df['correction'].fillna(0)
>>> df['cumsum2'] = df[['cat', 'inout2']].groupby(['cat']).cumsum()
>>> df
cat time in out bal inout cumsum correction inout2 cumsum2
0 a 1 NaN 10.0 NaN -10.0 -10.0 NaN -10.0 -10.0
2 b 1 NaN NaN NaN 0.0 0.0 NaN 0.0 0.0
1 a 2 10.0 NaN NaN 10.0 0.0 NaN 10.0 0.0
3 b 2 NaN 20.0 NaN -20.0 -20.0 NaN -20.0 -20.0
4 a 4 NaN 10.0 50.0 -10.0 -10.0 10.0 50.0 50.0 (override from 'bal')
5 a 5 20.0 5.0 NaN 15.0 5.0 NaN 15.0 65.0 <--- 50 (override) +15 (in-out)
6 a 6 11.0 NaN NaN 11.0 16.0 NaN 11.0 76.0
7 a 7 9.0 30.0 NaN -21.0 -5.0 NaN -21.0 55.0
8 a 8 10.0 NaN NaN 10.0 5.0 NaN 10.0 65.0
9 b 9 NaN NaN NaN 0.0 -20.0 NaN 0.0 -20.0
>>>
>>>
>>> df[df['cat'] == 'a']
cat time in out bal inout cumsum correction inout2 cumsum2
0 a 1 NaN 10.0 NaN -10.0 -10.0 NaN -10.0 -10.0
1 a 2 10.0 NaN NaN 10.0 0.0 NaN 10.0 0.0
4 a 4 NaN 10.0 50.0 -10.0 -10.0 10.0 50.0 50.0
5 a 5 20.0 5.0 NaN 15.0 5.0 NaN 15.0 65.0
6 a 6 11.0 NaN NaN 11.0 16.0 NaN 11.0 76.0
7 a 7 9.0 30.0 NaN -21.0 -5.0 NaN -21.0 55.0
8 a 8 10.0 NaN NaN 10.0 5.0 NaN 10.0 65.0
</code></pre>
<p>That looks good. At index 4, the simple balance calculation is overridden (was -10, now is 50 as expected) and subsequent period inout flows are added as expected.</p>
<p>However, when I introduce another override the above algorithm breaks.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
    'cat': ['a', 'a', 'b', 'b', 'a', 'a', 'a', 'a', 'a', 'b'],
    'time': [1, 2, 1, 2, 4, 5, 6, 7, 8, 9],
    'in': [None, 10, None, None, None, 20, 11, 9, 10, None],
    'out': [10, None, None, 20, 10, 5, None, 30, None, None],
    'bal': [None, None, None, None, 50, None, None, 30, None, None]
    #                                               ^
})
... same pipeline as before
>>> df
cat time in out bal inout cumsum correction inout2 cumsum2
0 a 1 NaN 10.0 NaN -10.0 -10.0 NaN -10.0 -10.0
2 b 1 NaN NaN NaN 0.0 0.0 NaN 0.0 0.0
1 a 2 10.0 NaN NaN 10.0 0.0 NaN 10.0 0.0
3 b 2 NaN 20.0 NaN -20.0 -20.0 NaN -20.0 -20.0
4 a 4 NaN 10.0 50.0 -10.0 -10.0 10.0 50.0 50.0 # still ok
5 a 5 20.0 5.0 NaN 15.0 5.0 NaN 15.0 65.0
6 a 6 11.0 NaN NaN 11.0 16.0 NaN 11.0 76.0
7 a 7 9.0 30.0 30.0 -21.0 -5.0 5.0 14.0 90.0 # expect 30
8 a 8 10.0 NaN NaN 10.0 5.0 NaN 10.0 100.0 # expect 30 + 10 = 40
9 b 9 NaN NaN NaN 0.0 -20.0 NaN 0.0 -20.0
</code></pre>
<p>I'd like to modify the algorithm to keep with the simplicity of using <code>cumsum</code> (functional), but can't work out how to proceed. It's almost like I need a conditional cumsum which replaces the intermediate values when there's a condition being met (in this case, a value in <code>bal</code>). However, I'd much rather calculate yet another correcting column (or fix the existing one) and add it (but I hit a wall as I probably looked at it for too long). Any help is greatly appreciated.</p>
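For what it's worth, one way to express the "reset" with plain groupby/cumsum is to segment on the overrides: each non-null <code>bal</code> starts a new segment, the flows are cumsummed within each segment, and the last override is carried forward as the base. A sketch on just the 'a' rows of the second example (extending the grouping to include <code>cat</code> is left out for brevity, and this is one possible approach rather than the only one):

```python
import pandas as pd

# in - out flows and override balances for the 'a' rows of the second example
s_flow = pd.Series([-10.0, 10.0, -10.0, 15.0, 11.0, -21.0, 10.0])
s_bal = pd.Series([None, None, 50.0, None, None, 30.0, None], dtype=float)

seg = s_bal.notna().cumsum()            # a new segment starts at each override
base = s_bal.ffill().fillna(0)          # last seen override (0 before any)
flow = s_flow.where(s_bal.isna(), 0.0)  # an override row replaces, not adds, its flow
result = base + flow.groupby(seg).cumsum()

print(result.tolist())  # [-10.0, 0.0, 50.0, 65.0, 76.0, 30.0, 40.0]
```

This reproduces the expected 30 at the second override and 30 + 10 = 40 after it, while keeping the functional cumsum style.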
|
<python><pandas><dataframe><cumulative-sum>
|
2024-10-10 02:25:48
| 1
| 8,252
|
orange
|
79,072,308
| 2,595,216
|
Build Python C extension with cibuildwheel
|
<p>I am trying to build/compile wheels for <code>lupa</code> under Windows. I got one working 'manual' job in <a href="https://github.com/emcek/pyi/actions/runs/11264308367" rel="nofollow noreferrer">GitHub Actions</a> for Python 3.6.</p>
<p>I needed replace original <code>setup.py</code> to be able build <code>luajit</code> wheel for windows, line 370 is different:</p>
<pre class="lang-none prettyprint-override"><code>-or (platform.startswith('win') and 'luajit' in os.path.basename(lua_bundle_path.rstrip(os.sep)))
+or (get_machine() != "AMD64" and get_machine() != "x86_64" and 'luajit' in os.path.basename(lua_bundle_path.rstrip(os.sep)))
</code></pre>
<p>So, I try use <code>cibuildwheel</code> to build for all Pythons 3.6-3.12 but it failed with:</p>
<pre class="lang-none prettyprint-override"><code>error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Tools\\MSVC\\14.41.34120\\bin\\HostX86\\x86\\link.exe' failed with exit status 1120
</code></pre>
<p>I guess key command from manual step is:</p>
<pre class="lang-none prettyprint-override"><code>python setup.py build_ext -i bdist_wheel --use-bundle --with-cython
</code></pre>
<p><strong>Note:</strong> It is important (for me) to build the <code>luajit21</code> module (since it is the fastest). In the manual step I was able to successfully build all modules: <code>luajit21</code>, <code>luajit20</code>, <code>lua54</code>-<code>lua51</code>.
When I install <code>lupa-2.2-cp36-cp36m-win_amd64.whl</code> from the manual job, all imports work:</p>
<pre class="lang-py prettyprint-override"><code>import lupa.luajit21
import lupa.luajit20
import lupa.lua54
import lupa.lua53
import lupa.lua52
import lupa.lua51
</code></pre>
|
<python><visual-studio><github-actions><lupa>
|
2024-10-09 23:15:33
| 1
| 553
|
emcek
|
79,072,229
| 8,414,875
|
django-haystack elasticsearch spanish special chars (accents)
|
<p>I'm using Django-haystack 3.3.0 over Django 4.x, and use Elasticsearch 7.17.24 as backend.</p>
<p>I'm dealing with texts in Spanish, so, for example, I index the text:</p>
<pre><code>El corazón del mar
</code></pre>
<p>And when I try to search for:</p>
<pre><code>corazon
</code></pre>
<p>I get <code>0</code> results, but when I search for
<code>corazón</code> or <code>Corazón</code>,<br>
I get <code>n</code> results; <code>n>0</code>.</p>
<p>Any idea?</p>
|
<python><django><elasticsearch><django-haystack>
|
2024-10-09 22:22:29
| 1
| 347
|
juanbits
|
79,072,098
| 275,088
|
Solving Leetcode's "1813. Sentence Similarity III" using regexes
|
<p>I'm trying to solve this <a href="https://leetcode.com/problems/sentence-similarity-iii/description/" rel="nofollow noreferrer">problem</a> from Leetcode using regexes (just for fun):</p>
<blockquote>
<p>You are given two strings <code>sentence1</code> and <code>sentence2</code>, each
representing a sentence composed of words. A sentence is a list of
words that are separated by a single space with no leading or trailing
spaces. Each word consists of only uppercase and lowercase English
characters.</p>
<p>Two sentences <code>s1</code> and <code>s2</code> are considered similar if it is possible
to insert an arbitrary sentence (possibly empty) inside one of these
sentences such that the two sentences become equal. Note that the
inserted sentence must be separated from existing words by spaces.</p>
<p>For example,</p>
<ul>
<li><code>s1 = "Hello Jane"</code> and <code>s2 = "Hello my name is Jane"</code> can be made equal by inserting <code>"my name is"</code> between <code>"Hello"</code> and <code>"Jane"</code> in <code>s1</code>.</li>
<li><code>s1 = "Frog cool"</code> and <code>s2 = "Frogs are cool"</code> are not similar, since although there is a sentence <code>"s are"</code> inserted into <code>s1</code>, it
is not separated from <code>"Frog"</code> by a space.</li>
</ul>
<p>Given two sentences <code>sentence1</code> and <code>sentence2</code>, return true if
<code>sentence1</code> and <code>sentence2</code> are similar. Otherwise, return false.</p>
</blockquote>
<p>Assuming <code>small</code> and <code>big</code> are the smaller and bigger sentences respectively, it's easy to check whether <code>small</code> is a prefix/suffix of <code>big</code>:</p>
<pre><code>if re.match(f'^{small} .*$', big) or re.match(f'^.* {small}$', big):
print('similar')
</code></pre>
<p>How to best check if the sentences can be made equal by inserting a new sentence <em>in the middle</em> of <code>small</code> using a regex?</p>
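<p>One sketch of such a check (the function name <code>similar</code> is mine): try every split of <code>small</code> at a word boundary into a prefix and a suffix, and match <code>^prefix .* suffix$</code> against <code>big</code>:</p>

```python
import re

def similar(s1: str, s2: str) -> bool:
    small, big = sorted((s1, s2), key=len)
    if small == big:          # the inserted sentence may be empty
        return True
    esc = re.escape
    # small is a prefix or a suffix of big
    if re.fullmatch(esc(small) + r' .*', big) or re.fullmatch(r'.* ' + esc(small), big):
        return True
    # middle insertion: split small at every word boundary
    words = small.split(' ')
    return any(
        re.fullmatch(esc(' '.join(words[:i])) + r' .* ' + esc(' '.join(words[i:])), big)
        for i in range(1, len(words))
    )

print(similar("Hello Jane", "Hello my name is Jane"))  # True
print(similar("Frog cool", "Frogs are cool"))          # False
```

I'm not sure whether the loop over split points can be folded into a single pattern, which is really what I'm after.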
|
<python><regex>
|
2024-10-09 21:18:59
| 2
| 16,548
|
planetp
|
79,071,911
| 1,492,613
|
in python how can I annotate the type of Literal type?
|
<p>I basically want to have an argument that accepts the Literal object as input:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal, TypeVar, get_args
MyLiteral=Literal["a", "b"]
def func1(
cls_literal # how to annotate here?
):
print(get_args(cls_literal))
func1(cls_literal=MyLiteral)
</code></pre>
<p><code>Type[Literal]</code> or <code>Literal</code> will not work at all.</p>
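<p>As far as I can tell, a <code>Literal</code> alias is a typing special form rather than a class, so the best workaround I've found is to annotate with <code>Any</code> (or <code>object</code>) and validate with <code>get_args</code> at runtime, but I'd like a proper annotation if one exists:</p>

```python
from typing import Any, Literal, get_args

MyLiteral = Literal["a", "b"]

def func1(cls_literal: Any) -> tuple:
    # A Literal alias is not a class, so Type[Literal] cannot work;
    # Any/object plus a runtime check is the workaround I know of.
    args = get_args(cls_literal)
    if not args:
        raise TypeError(f"{cls_literal!r} is not a parameterized Literal")
    return args

print(func1(MyLiteral))  # ('a', 'b')
```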
|
<python><python-typing>
|
2024-10-09 20:10:54
| 2
| 8,402
|
Wang
|
79,071,739
| 20,591,261
|
Optimizing Variable Combinations to Maximize a Classification
|
<p>I am working with a dataset where users interact via an app or a website, and I need to determine the optimal combination of variables <code>(x1, x2, ... xn)</code> that will maximize the number of users classified as "APP Lovers." According to the business rule, a user is considered an "APP Lover" if they use the app more than 66% of the time.</p>
<p>Here's a simplified example of the data structure:</p>
<pre><code>import polars as pl
df = pl.DataFrame({
"ID": [1, 2, 3, 1, 2, 3, 1, 2, 3],
"variable": ["x1", "x1", "x1", "x2", "x2", "x2", "x3", "x3", "x3"],
"Favourite": ["APP", "APP", "WEB", "APP", "WEB", "APP", "APP", "APP", "WEB"]
})
</code></pre>
<p>In this dataset, each <code>ID</code> represents a user, and <code>variable</code> refers to the function <code>(e.g., x1, x2, x3)</code>, with Favourite indicating whether the function was executed via the app or the website.</p>
<p>I pivot the data to count how many actions were performed via APP or WEB:</p>
<pre><code>(
df
.pivot(
index=["ID"],
on="Favourite",
values=["variable"],
aggregate_function=pl.col("Favourite").len()
).fill_null(0)
)
</code></pre>
<p>Output:</p>
<pre><code>shape: (3, 3)
βββββββ¬ββββββ¬ββββββ
β ID β APP β WEB β
β --- β --- β --- β
β i64 β u32 β u32 β
βββββββͺββββββͺββββββ‘
β 1 β 3 β 0 β
β 2 β 2 β 1 β
β 3 β 1 β 2 β
βββββββ΄ββββββ΄ββββββ
</code></pre>
<p>Next, I calculate the proportion of app usage for each user and classify them:</p>
<pre><code>(
df2
.with_columns(
Total = pl.col("APP") + pl.col("WEB")
)
.with_columns(
Proportion = pl.col("APP") / pl.col("Total")
)
.with_columns(
pl
.when(pl.col("Proportion") >= 0.6).then(pl.lit("APP Lover"))
.when(pl.col("Proportion") > 0.1).then(pl.lit("BOTH"))
.otherwise(pl.lit("Inactive"))
)
)
shape: (3, 6)
βββββββ¬ββββββ¬ββββββ¬ββββββββ¬βββββββββββββ¬ββββββββββββ
β ID β APP β WEB β Total β Proportion β literal β
β --- β --- β --- β --- β --- β --- β
β i64 β u32 β u32 β u32 β f64 β str β
βββββββͺββββββͺββββββͺββββββββͺβββββββββββββͺββββββββββββ‘
β 1 β 3 β 0 β 3 β 1.0 β APP Lover β
β 2 β 2 β 1 β 3 β 0.666667 β APP Lover β
β 3 β 1 β 2 β 3 β 0.333333 β BOTH β
βββββββ΄ββββββ΄ββββββ΄ββββββββ΄βββββββββββββ΄ββββββββββββ
</code></pre>
<p>The challenge: in my real dataset, I have at least 19 different x variables. As I <a href="https://stackoverflow.com/questions/79067125/efficiently-handling-large-combinations-in-polars-without-overloading-ram?">asked yesterday</a>, I tried iterating over all possible combinations of these variables to find the ones that result in the highest number of "APP Lovers", but the number of combinations <code>(2^19)</code> is too large to compute efficiently.</p>
<p>Question: How can I efficiently determine the best combination of xn variables that maximizes the number of "APP Lovers"? I'm looking for guidance on how to approach this in terms of algorithmic optimization or more efficient iterations.</p>
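<p>To make the objective concrete, here is a toy brute-force sketch over the example above (the per-user APP/WEB tallies are transcribed by hand from the example df, and the 0.6 threshold matches the classification code shown). For 19 variables this outer loop is exactly the <code>2^19</code> enumeration I want to avoid:</p>

```python
from itertools import combinations

# Per-user counts transcribed from the example df.
app = {1: {"x1": 1, "x2": 1, "x3": 1},
       2: {"x1": 1, "x2": 0, "x3": 1},
       3: {"x1": 0, "x2": 1, "x3": 0}}
web = {1: {"x1": 0, "x2": 0, "x3": 0},
       2: {"x1": 0, "x2": 1, "x3": 0},
       3: {"x1": 1, "x2": 0, "x3": 1}}
variables = ["x1", "x2", "x3"]

def app_lovers(chosen):
    # Number of users whose app share over the chosen variables is >= 0.6.
    lovers = 0
    for user in app:
        a = sum(app[user][v] for v in chosen)
        w = sum(web[user][v] for v in chosen)
        if a + w and a / (a + w) >= 0.6:
            lovers += 1
    return lovers

best = max(
    (subset for r in range(1, len(variables) + 1)
     for subset in combinations(variables, r)),
    key=app_lovers,
)
print(best, app_lovers(best))
```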
|
<python><optimization><python-polars>
|
2024-10-09 19:10:51
| 3
| 1,195
|
Simon
|
79,071,274
| 6,458,245
|
What is wrong with this NumPy linear algebra matrix rank calculator?
|
<p>Consider:</p>
<pre><code>import numpy as np
from numpy.linalg import matrix_rank
for i in range(1, 70):
y = np.linspace(0,i, i+1)
y = 2 ** y
x = np.ones(i+1)
matrix = np.stack([x,y])
print(matrix_rank(matrix))
</code></pre>
<p>After 48, I see the rank shifting from 2 to 1. How do I fix this?</p>
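<p>My working hypothesis (a guess, which I'd like confirmed): for large <code>i</code> the huge <code>2**i</code> entries inflate the largest singular value, so the default tolerance of <code>matrix_rank</code> (which scales with the largest singular value) swallows the tiny second one. Rescaling each row to unit norm, which does not change the true rank, seems to restore rank 2:</p>

```python
import numpy as np
from numpy.linalg import matrix_rank

i = 60
y = 2.0 ** np.linspace(0, i, i + 1)
x = np.ones(i + 1)
matrix = np.stack([x, y])
print(matrix_rank(matrix))  # reports 1 here: tol ~ eps * max(m, n) * largest
                            # singular value, which dwarfs the second one
normalized = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
print(matrix_rank(normalized))  # 2
```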
|
<python><numpy><linear-algebra>
|
2024-10-09 16:45:57
| 1
| 2,356
|
JobHunter69
|
79,071,043
| 17,580,381
|
Shared memory sizing for a numpy array
|
<p>In the example seen on <a href="https://superfastpython.com/numpy-array-sharedmemory/" rel="nofollow noreferrer">superfastpython.com</a>, the size of a shared memory segment to be used to support a 1-dimensional numpy array is calculated as the number of elements multiplied by the data type size.</p>
<p>We know that the size parameter given to the SharedMemory constructor is a <em>minimum</em>. Thus, in many cases, the actual size may be larger than that specified - and that's fine.</p>
<p>But what if the specified size is an exact multiple of the underlying memory page size?</p>
<p>Consider this:</p>
<pre><code>import numpy as np
from multiprocessing.shared_memory import SharedMemory
n = 2048
s = n * np.dtype(np.double).itemsize
shm = SharedMemory(create=True, size=s)
try:
assert s == shm.size
a = np.ndarray((n,), dtype=np.double, buffer=shm.buf)
a.fill(0.0)
finally:
shm.close()
shm.unlink()
</code></pre>
<p>In this case (Python 3.13.0 on macOS 15.0.1) the value of <em>s</em> is 16,384, which happens to be a precise multiple of the underlying page size, and therefore <code>shm.size</code> is equal to <em>s</em>.</p>
<p>Maybe I don't know enough about numpy but I would have imagined that the ndarray would need more space for internal / management structures.</p>
<p>Can someone please explain why this works and why there's no apparent need to allow extra space in the shared memory segment?</p>
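<p>My current understanding (which I'd like confirmed): the ndarray's shape, strides and dtype live in the Python object itself, so the buffer only needs to hold the raw elements. A quick check with a plain <code>bytearray</code> buffer, sized exactly to the element data, seems consistent with that:</p>

```python
import numpy as np

n = 2048
buf = bytearray(n * np.dtype(np.double).itemsize)  # exactly n * 8 bytes
a = np.ndarray((n,), dtype=np.double, buffer=buf)
a.fill(0.0)
# the buffer stores only the element data; the header lives on `a` itself
print(a.nbytes)  # 16384
```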
|
<python><arrays><numpy><shared-memory>
|
2024-10-09 15:38:56
| 1
| 28,997
|
Ramrab
|
79,071,022
| 2,881,888
|
I am not able to create a python module that correctly supports type checking
|
<p>I have created a Python module, MyModule, that is written entirely with typing support (for ease of maintenance).</p>
<p>But when I write a script that uses MyModule, let's say MyScript.py:</p>
<pre><code>from MyModule import MyClass, MyOtherClass
if __name__ == "__main__":
a = MyClass(1, 2)
a = MyOtherClass(90)
</code></pre>
<p>To run my script, I use <code>PYTHONPATH=pathtomodule python MyScript.py</code> and it runs fine.</p>
<p>But, when I run <code>PYTHONPATH=pathtomodule mypy MyScript.py</code>, I get the following error:</p>
<pre><code>MyScript.py:1: error: Module "MyModule " has no attribute "MyClass" [attr-defined]
MyScript.py:1: error: Module "MyModule " has no attribute "MyOtherClass" [attr-defined]
</code></pre>
<p>To be complete, here is the layout of my module files:</p>
<ul>
<li>pathtomodule/
<ul>
<li>MyModule/
<ul>
<li>py.typed : empty</li>
<li>__init__.py : empty</li>
<li>MyModule.py : contains the classes definition</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>I have noticed that if I add the following line to <code>__init__.py</code>, it will work:</p>
<pre><code>from .MyModule import MyClass, MyOtherClass
</code></pre>
<p>but I do not understand why. Do I need to add all the interfaces to that file to support typing in scripts that use my module?</p>
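<p>For the record, the package-layout fragment I have in my <code>MyModule/__init__.py</code> for now (if I understand mypy's docs correctly, the redundant <code>as</code> form is what it treats as an explicit re-export, and <code>__all__</code> is an alternative, but I may be misreading them):</p>

```
# MyModule/__init__.py
# Either redundant-alias re-exports...
from .MyModule import MyClass as MyClass, MyOtherClass as MyOtherClass
# ...or a plain import plus __all__:
# from .MyModule import MyClass, MyOtherClass
# __all__ = ["MyClass", "MyOtherClass"]
```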
|
<python>
|
2024-10-09 15:33:43
| 2
| 1,401
|
Louis Caron
|
79,070,976
| 11,266,889
|
request from Django to djangorestframework from the same Docker Container timeouts
|
<p>I have 2 containers with docker compose:</p>
<pre><code>services:
web:
container_name: web
build:
context: .
dockerfile: Dockerfile
command: bash -c "python manage.py makemigrations && python manage.py migrate && python
manage.py collectstatic --no-input && gunicorn mysite.wsgi:application
--bind 0.0.0.0:8000"
volumes:
- .:/app
- static:/app/static
env_file:
- .env
ports:
- "8000:8000"
nginx:
build: ./nginx
volumes:
- static:/app/static
ports:
- "80:80"
depends_on:
- web
</code></pre>
<p>here is the nginx config:</p>
<pre><code>upstream django {
server web:8000;
}
server {
listen 80;
location / {
proxy_pass http://django;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /static/ {
alias /app/static/;
}
# Increase client max body size to 50M
    client_max_body_size 50M;
}
</code></pre>
<p>In the web container I am running a Django app that also uses djangorestframework. If I try the API endpoints with Postman, it works fine. The problem is that when I call the API endpoint from a Django view, it times out. Any idea what is wrong? Thank you for your time.</p>
|
<python><django><docker><django-rest-framework>
|
2024-10-09 15:21:17
| 0
| 305
|
pipikej
|
79,070,891
| 6,480,757
|
Vectorize the operation of finding the lowest value of a column in dataframe groups defined by another column
|
<p>I have the following dataframe:</p>
<pre><code>df = pd.DataFrame(
data = [
[0,0],
[1,0],
[2,2],
[3,3],
[4,3],
[5,3],
[6,4],
[7,4],
],
columns = ['day_count', 'level']
)
</code></pre>
<p>I am wondering if there is a vectorizable way to calculate the lowest value of the <code>day_count</code> column in each <code>level</code> group to produce this output:</p>
<pre><code>df.groupby(['level'])['day_count'].transform('min')
</code></pre>
<p><a href="https://i.sstatic.net/iViwdgHj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iViwdgHj.png" alt="enter image description here" /></a></p>
<p>This is an operation that is part of an expensive optimization function call on a large dataset, and I want to see if it's possible to produce this through a series of vectorizable operations to speed things up and to possibly lump multiple calls together.</p>
<p>My attempts so far focus on trying to produce shifts of the <code>level</code> column and use it to take differences and perform numpy where logic on the differences to identify the beginnings of groups and extract the <code>day_count</code> value at those points. But either way I find myself having to perform some sort of sequential scan that would not be vectorizable.</p>
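<p>Here is one sketch along those lines that does seem to work, assuming (as in the example) that equal <code>level</code> values form contiguous runs: find the run starts from the differences of <code>level</code>, then broadcast the first <code>day_count</code> of each run with <code>np.repeat</code>:</p>

```python
import numpy as np

day_count = np.array([0, 1, 2, 3, 4, 5, 6, 7])
level     = np.array([0, 0, 2, 3, 3, 3, 4, 4])

# A run starts at index 0 and wherever level differs from its predecessor.
starts = np.flatnonzero(np.r_[True, level[1:] != level[:-1]])
run_lengths = np.diff(np.r_[starts, len(level)])
# The minimum day_count of each run is its first element (rows are ordered),
# broadcast back across the run.
group_min = np.repeat(day_count[starts], run_lengths)
print(group_min)  # [0 0 2 3 3 3 6 6]
```

I'm unsure whether this counts as fully vectorized for my purposes, or how to batch several such calls together.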
|
<python><pandas><group-by><vectorization>
|
2024-10-09 14:57:58
| 0
| 544
|
Mike
|
79,070,591
| 5,181,947
|
Saving styled pandas dataframe as image throws ValueError
|
<p>I have python code like so:</p>
<pre><code>import dataframe_image as dfi
def color_negative_red(val):
color = f'rgba(255, 50, 50, {min(1, val / 350)})'
return f'background-color: {color}'
styled_df = df.style.applymap(color_negative_red, subset=pd.IndexSlice[:, pd.IndexSlice['Active (bps)', :]])
dfi.export(styled_df, 'test.png', table_conversion='matplotlib')
</code></pre>
<p>but it throws this error:</p>
<blockquote>
<p>ValueError: Invalid RGBA argument: 'rgba(255, 50, 50, 0.002857)'</p>
</blockquote>
<p>df has MultiIndex columns and I want to color only "Active (bps)" columns:</p>
<pre><code>import pandas as pd
data = {
('Act', 'bps'): [-14, 341, -14],
('Dur', 'bps'): [49, 379, 50],
('Active (bps)', '3M'): [1, -7, 3],
('Active (bps)', '1Y'): [3, 3, 4],
('Active (bps)', '2Y'): [14, 10, -36],
('Active (bps)', '3Y'): [118, 105, -59],
('Active (bps)', '5Y'): [-295, 205, 68],
('Active (bps)', '7Y'): [101, 25, 5]
}
df = pd.DataFrame(data)
df.index = ['NO2', 'BSB', 'GOB']
df.columns = pd.MultiIndex.from_tuples(df.columns)
print(df)
Act Dur Active (bps)
(bps) (bps) 3M 1Y 2Y 3Y 5Y 7Y
NO2 -14 49 1 3 14 118 -295 101
BSB 341 379 -7 3 10 105 205 25
GOB -14 50 3 4 -36 -59 68 5
</code></pre>
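<p>For context, the workaround I'm considering (a guess on my part: matplotlib seems not to parse the CSS <code>rgba(...)</code> function syntax, but it does accept 8-digit <code>#RRGGBBAA</code> hex colors) is to encode the alpha in hex:</p>

```python
# A guess at a workaround: emit an 8-digit hex color (#RRGGBBAA) instead
# of the CSS rgba() function syntax.
def color_negative_red(val):
    # clamp to [0, 1] so negative values don't produce an invalid alpha
    alpha = int(255 * max(0.0, min(1.0, val / 350)))
    return f'background-color: #ff3232{alpha:02x}'

print(color_negative_red(350))  # background-color: #ff3232ff
print(color_negative_red(-14))  # background-color: #ff323200
```

Is this the right direction, or is there a way to keep the <code>rgba()</code> strings?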
|
<python><pandas>
|
2024-10-09 13:48:43
| 1
| 1,924
|
Ankur
|
79,070,422
| 7,550,314
|
Poetry - ModuleNotFoundError: No module named jwt
|
<p>I added the PyJWT package to the project but am getting an error that the module is not found. I added the package using the <code>poetry add</code> command from a private repository.</p>
<pre><code>from jwt import decode, get_unverified_header
alg = get_unverified_header(token)["alg"]
decoded_token = decode(token, algorithms=[alg], options={"verify_signature": False})
</code></pre>
<p>Tried direct import of jwt.</p>
<p>The pyproject.toml file looks like this</p>
<pre><code>[tool.poetry.dependencies]
python = "^3.11"
streamlit = "1.38.0"
ndg-httpsclient = "^0.5.1"
streamlit-javascript = "^0.1.5"
uuid = "^1.30"
certifi = "2024.7.4"
setuptools = "70.0.0"
idna = "3.7"
pytz = "^2024.1"
pillow = "10.3.0"
pyjwt = "2.8.0"
</code></pre>
<p>The lock file looks like this</p>
<pre><code>[[package]]
name = "pyjwt"
version = "2.8.0"
description = "JSON Web Token implementation in Python"
optional = false
python-versions = ">=3.7"
files = [
{file = "PyJWT-2.8.0-py3-none-any.whl", hash = "sha256:59127c392cc44c2da5bb3192169a91f429924e17aff6534d70fdc02ab3e04320"},
{file = "PyJWT-2.8.0.tar.gz", hash = "sha256:57e28d156e3d5c10088e0c68abb90bfac3df82b40a71bd0daa20c65ccd5c23de"},
]
[package.extras]
crypto = ["cryptography (>=3.4.0)"]
dev = ["coverage[toml] (==5.0.4)", "cryptography (>=3.4.0)", "pre-commit", "pytest (>=6.0.0,<7.0.0)", "sphinx (>=4.5.0,<5.0.0)", "sphinx-rtd-theme", "zope.interface"]
docs = ["sphinx (>=4.5.0,<5.0.0)", "sphinx-rtd-theme", "zope.interface"]
tests = ["coverage[toml] (==5.0.4)", "pytest (>=6.0.0,<7.0.0)"]
[package.source]
type = "legacy"
url = "https://repo.nexuscloud.aws.rabo.cloud/repository/gr-pypi-13/simple"
reference = "pypi-group"
</code></pre>
<p>The dockerfile looks like</p>
<pre><code>FROM <link-to-nexus>/python:3.11.8-slim
WORKDIR /app
RUN useradd -ms /bin/bash portal && chown -R portal /home/portal && chown -R portal /app
COPY pyproject.toml poetry.lock ./
ARG NEXUSCLOUD_USERNAME
ARG NEXUSCLOUD_PASSWORD
RUN pip config --user set global.index-url "https://$NEXUSCLOUD_USERNAME:$NEXUSCLOUD_PASSWORD@<link-to-nexus>/repository/<repo>/simple"
#Should be same as poetry.lock file
RUN pip install poetry==1.7.1 streamlit==1.38.0 streamlit-javascript==0.1.5 --no-cache-dir
RUN poetry config repositories.pypi-group "https://<link-to-nexus>/repository/<repo>/simple" \
&& poetry config http-basic.pypi-group $NEXUSCLOUD_USERNAME $NEXUSCLOUD_PASSWORD
RUN poetry install --no-cache
COPY . .
ENV ENVIRONMENT=""
ENV BASE_API_URL=$BASE_API_URL
EXPOSE 8501
HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health
ENV PATH="${PATH}:/root/.local/bin"
USER portal
ENTRYPOINT ["streamlit", "run", "portal/streamlit_app.py", "--server.port=8501", "--server.address=0.0.0.0", "--server.enableXsrfProtection=false", "--server.maxUploadSize=1024", "--client.toolbarMode=minimal", "--client.showErrorDetails=false", "--browser.gatherUsageStats=false"]
</code></pre>
<p>The build is successful but on running the app it throws the module not found error.</p>
<p><strong>EDIT: Already checked no dependency confusion with JWT pkg.</strong></p>
|
<python><python-poetry><pyjwt>
|
2024-10-09 13:16:19
| 1
| 415
|
Faisal
|
79,070,395
| 352,290
|
How to avoid duplicate words within an array
|
<p>How can I avoid duplicate words within an array in the following scenarios?</p>
<p><code>First scenario:</code> I want all 3 entries to be present in the output, including <code>Phil</code>. <code>Phil</code> is a substring of <code>Phillies</code> but a different word, so it should not be removed from the output.</p>
<p><code>Second scenario:</code> I want "New York Mets" and "Philadelphia Phillies" to be present in the output, but <code>Phillies</code> should be removed since it matches a whole word in <code>Philadelphia Phillies</code>.</p>
<p><code>Third scenario:</code> I want <code>Cristiano Ronaldo</code> alone to be present in the output; since <code>Cristiano</code> is already present as part of the longer name, it should be removed.</p>
<pre><code># Initial array
football_players = ["New York Mets", "Philadelphia Phillies", "Phil"]
# football_players = ["New York Mets", "Philadelphia Phillies", "Phillies"]
# football_players = ["Cristiano", "Cristiano Ronaldo"]
# Function to remove smaller words contained in larger words
def remove_subwords(players):
result = []
for player in players:
# Check if the player name is a substring of any other name in the list
if not any(player in other and player != other for other in players):
result.append(player)
return result
# Call the function and print the result
filtered_players = remove_subwords(football_players)
print(filtered_players)
</code></pre>
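<p>One sketch that seems to satisfy all three scenarios (comparing whole words instead of raw substrings, so <code>Phil</code> survives next to <code>Phillies</code> while <code>Phillies</code> is dropped next to <code>Philadelphia Phillies</code>), though I'm not sure it covers every edge case:</p>

```python
def remove_subwords(players):
    # Compare whole words, not raw substrings: a name is dropped only if
    # all of its words appear among another (different) name's words.
    result = []
    for player in players:
        words = set(player.split())
        redundant = any(
            player != other and words <= set(other.split())
            for other in players
        )
        if not redundant:
            result.append(player)
    return result

print(remove_subwords(["New York Mets", "Philadelphia Phillies", "Phil"]))
# ['New York Mets', 'Philadelphia Phillies', 'Phil']
print(remove_subwords(["New York Mets", "Philadelphia Phillies", "Phillies"]))
# ['New York Mets', 'Philadelphia Phillies']
print(remove_subwords(["Cristiano", "Cristiano Ronaldo"]))
# ['Cristiano Ronaldo']
```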
|
<python>
|
2024-10-09 13:10:49
| 1
| 1,360
|
user352290
|
79,070,211
| 7,971,750
|
Local package isn't recognized even with __init__.py
|
<h2>Original question</h2>
<p>Current project setup is</p>
<pre><code>project/
ββ ams/
β ββ __init__.py
β ββ common/
β β ββ dataclasses.py
β β ββ __init__.py
ββ entry_point/
β ββ bar_api/
β β ββ swagger/
β β β ββ main.yaml
β β ββ __init__.py
β ββ run.py
β ββ __init__.py
</code></pre>
<p>with <code>run.py</code> having a line <code>from ams.common.dataclasses import Foo</code>, which gives me an <code>ImportError</code> when I run <code>run.py</code> both from project root via <code>python entry_point/run.py</code> and via <code>python run.py</code> after moving to <code>entry_point</code>.</p>
<p>As you can see, I have <code>__init__.py</code> literally everywhere, and the project seems to work with the current setup for the other developer (I'm just trying to deploy it locally to test stuff).</p>
<p>I've seen suggestions of adding some variation of the path to the project root to the <code>PYTHONHOME</code> or <code>PYTHONPATH</code> env variables via the env activation script, but trying this led me to <code>ModuleNotFoundError: No module named 'encodings'</code> and a traceback that suggests Python is looking in the wrong directory for the stdlib. Googling said error leads to a discussion on GitHub that suggests modifying either of those variables is a sign you're doing something wrong and should not be done.</p>
<p>At this point, what do I do?</p>
<h2>Update 1</h2>
<p>Moving <code>run.py</code> to root and running it from there presented me with a file-not-found error: <code>FileNotFoundError: [Errno 2] No such file or directory: 'bar_api/swagger/main.yaml'</code>. I've updated the project structure above accordingly. I assume this time the working directory should be <code>entry_point</code>, so by now I'm heavily inclined towards adding every single folder in the project directory to <code>PYTHONPATH</code>.</p>
|
<python><pip><python-3.8>
|
2024-10-09 12:25:15
| 1
| 322
|
bqback
|
79,069,996
| 964,491
|
Feedparser errors during SSl read operation when accessing NASDAQ RSS Feeds
|
<p>I am using Python 3.12 and feedparser 6.0.11, with ca-certificates installed.</p>
<p>When attempting to read this RSS feed: <a href="https://www.nasdaq.com/feed/rssoutbound?category=Financial+Advisors" rel="nofollow noreferrer">https://www.nasdaq.com/feed/rssoutbound?category=Financial+Advisors</a>, the <code>feedparser</code> library hangs until interrupted, producing this traceback:</p>
<pre><code>https://www.nasdaq.com/feed/rssoutbound?category=Innovation
^CTraceback (most recent call last):
File "/home/nckr/kiwichi/read.py", line 78, in <module>
NewsFeed = feedparser.parse(url)
^^^^^^^^^^^^^^^^^^^^^
File "/home/nckr/kiwichi/venv/lib/python3.12/site-packages/feedparser/api.py", line 216, in parse
data = _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers, result)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nckr/kiwichi/venv/lib/python3.12/site-packages/feedparser/api.py", line 115, in _open_resource
return http.get(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers, result)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nckr/kiwichi/venv/lib/python3.12/site-packages/feedparser/http.py", line 171, in get
f = opener.open(request)
^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/urllib/request.py", line 515, in open
response = self._open(req, data)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/urllib/request.py", line 532, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/urllib/request.py", line 492, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/usr/lib/python3.12/urllib/request.py", line 1392, in https_open
return self.do_open(http.client.HTTPSConnection, req,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/urllib/request.py", line 1348, in do_open
r = h.getresponse()
^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/http/client.py", line 1428, in getresponse
response.begin()
File "/usr/lib/python3.12/http/client.py", line 331, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/http/client.py", line 292, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/socket.py", line 707, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/ssl.py", line 1252, in recv_into
return self.read(nbytes, buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/ssl.py", line 1104, in read
return self._sslobj.read(len, buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt
</code></pre>
<p>I tried setting the unverified SSL: context(<a href="https://stackoverflow.com/questions/28282797/feedparser-parse-ssl-certificate-verify-failed">Feedparser.parse() 'SSL: CERTIFICATE_VERIFY_FAILED'</a>) at the beginning of the script but still get this error.</p>
<pre><code>if hasattr(ssl, '_create_unverified_context'):
ssl._create_default_https_context = ssl._create_unverified_context
</code></pre>
<p>I'm assuming the global SSL setting will be picked up automatically by the <code>feedparser</code> lib and I don't need to pass it explicitly.</p>
<p>Is there another workaround, or any way to get more info regarding the actual error beyond an SSL read error? It could also be a timeout error. However, I can access the URL successfully using curl on the command line.</p>
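<p>One workaround sketch I'm considering (an assumption on my part: since curl works, the hang looks like the server stalling the default Python client, so a browser-like <code>User-Agent</code> and an explicit timeout might help): fetch the bytes with <code>urllib</code> myself and hand them to <code>feedparser.parse()</code>, which also accepts raw content:</p>

```python
import urllib.request

def fetch_feed_bytes(url: str, timeout: float = 10) -> bytes:
    # An explicit timeout turns a silent stall into a raised exception,
    # and the User-Agent mimics a browser in case the server filters bots.
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read()

# feed = feedparser.parse(fetch_feed_bytes(url))  # parse() accepts bytes
```

Would this be a reasonable approach, or is there a way to configure feedparser itself?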
|
<python><python-3.x><ssl><feedparser>
|
2024-10-09 11:27:37
| 1
| 2,563
|
Dan
|