| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,814,853
| 10,677,643
|
Create a widget factory in Qt
|
<p>Hello~ I'm creating a set of custom widgets that extend the native widgets in Qt. My custom widgets are supposed to be constructed from a data source, and they all provide a custom function <code>Foobar</code>. For example:</p>
<pre class="lang-cpp prettyprint-override"><code>class CheckBox: public QCheckBox, public Control
{
    Q_OBJECT
public:
    CheckBox(QWidget* parent = 0);
    CheckBox(DataSource* data, QWidget* parent = 0);
    virtual ~CheckBox();
    virtual void Foobar();
};

class ComboBox: public QComboBox, public Control
{
    Q_OBJECT
public:
    ComboBox(QWidget* parent = 0);
    ComboBox(DataSource* data, QWidget* parent = 0);
    virtual ~ComboBox();
    virtual void Foobar();
};
</code></pre>
<p>Each widget class has a corresponding data source class that is responsible for creating the widget. For example:</p>
<pre class="lang-cpp prettyprint-override"><code>class CheckBoxDataSource: public DataSource
{
public:
    CheckBoxDataSource();
    virtual ~CheckBoxDataSource();
    virtual QWidget* createControl(QWidget* parent);
};

class ComboBoxDataSource: public DataSource
{
public:
    ComboBoxDataSource();
    virtual ~ComboBoxDataSource();
    virtual QWidget* createControl(QWidget* parent);
};
</code></pre>
<p>After these classes are exposed to Python, Python clients who have a list of various data sources are expected to use it like this:</p>
<pre class="lang-py prettyprint-override"><code>for data in dataSources:
    widget = data.createControl(parent)  # returns a QCheckBox/QComboBox/etc.
    # at a later time ...
    widget.Foobar()
</code></pre>
<p>However, this is not going to work, because <code>data.createControl()</code> will return an object of type <code>QCheckBox</code> or <code>QComboBox</code> in PyQt, so clients won't be able to call <code>widget.Foobar()</code>, as <code>Foobar()</code> isn't available on those classes. How can I work around this?
So far I have tried the approaches below, but none of them works...</p>
<h5>Attempt 1</h5>
<p>In Python, I tried casting the widget to the right C++ class so that I can call <code>Foobar()</code>.</p>
<pre class="lang-py prettyprint-override"><code>widget.__class__ = CheckBox
widget.__class__ = ComboBox
</code></pre>
<p>In some cases this works fine; in other cases I get an error <code>AttributeError: 'CheckBox' object has no attribute 'Foobar'</code> even though it's the correct type. Interestingly, if I put a breakpoint on the line and step over it in the debugger, it always works; without the breakpoint it fails, so there seems to be a timing issue that I could not figure out. In any case, mutating <code>__class__</code> is dangerous, so I should probably avoid it.</p>
<h5>Attempt 2</h5>
<p>I could add <code>virtual void Foobar()</code> to the base class <code>Control</code> and declare <code>createControl()</code> to return a <code>Control*</code> instead of a <code>QWidget*</code>. This would allow clients to call <code>Foobar()</code> from Python, but they would lose the ability to treat the result as a <code>QWidget</code>.</p>
<h5>Attempt 3</h5>
<pre class="lang-py prettyprint-override"><code>widget = CheckBox(data.createControl(parent))
</code></pre>
<p>By wrapping the returned object in another constructor call, the widget in Python should now have the right type. But this will create 2 <code>CheckBox</code> widgets, not 1, so I have to "delete" one of them. Looks like the copy constructor is being called in this case. I don't think I can <code>move</code> it in Python.</p>
<p>Sorry, my brain is a huge mess at the moment. Is there a better factory design for this?</p>
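Stripped of Qt and PyQt entirely, here is a minimal pure-Python sketch of the pattern I'm after, just to make the intended shape concrete (all class names are hypothetical stand-ins for the real widget and data-source classes):

```python
# Qt-free sketch: each data source knows which control class to build,
# and every control mixes in a common Control base providing Foobar().
class Control:
    def Foobar(self):
        raise NotImplementedError

class CheckBox(Control):
    def __init__(self, data):
        self.data = data
    def Foobar(self):
        return "CheckBox.Foobar"

class ComboBox(Control):
    def __init__(self, data):
        self.data = data
    def Foobar(self):
        return "ComboBox.Foobar"

class CheckBoxDataSource:
    def createControl(self, parent=None):
        return CheckBox(self)

class ComboBoxDataSource:
    def createControl(self, parent=None):
        return ComboBox(self)

# clients iterate over heterogeneous data sources and only rely on Foobar()
dataSources = [CheckBoxDataSource(), ComboBoxDataSource()]
widgets = [d.createControl(None) for d in dataSources]
print([w.Foobar() for w in widgets])
```

In plain Python this "just works" because of duck typing; the difficulty is only that the PyQt binding layer types the returned object as the Qt base class.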
|
<python><c++><qt>
|
2024-07-31 07:07:09
| 2
| 574
|
neo-mashiro
|
78,814,811
| 7,227,146
|
SSLError: HTTPSConnectionPool(host='huggingface.co', port=443)
|
<p>I have the following script:</p>
<p><strong>test.py</strong>:</p>
<pre><code>from transformers import AutoTokenizer
checkpoint = "kevinscaria/joint_tk-instruct-base-def-pos-neg-neut-combined"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
</code></pre>
<p>When I run <code>python test.py</code> I get the following error:</p>
<blockquote>
<p>requests.exceptions.SSLError:
(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443):
Max retries exceeded with url:
/kevinscaria/joint_tk-instruct-base-def-pos-neg-neut-combined/resolve/main/tokenizer_config.json
(Caused by SSLError(SSLCertVerificationError(1, '[SSL:
CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed
certificate in certificate chain (_ssl.c:1000)')))"), '(Request ID:
2d2eb919-ddf8-4e5d-b0fc-b0ab1f666251)')</p>
</blockquote>
<p>This is despite the fact that I already added <code>huggingface.co</code> to my pip configuration file as a trusted host:</p>
<pre><code>>pip config list
global.trusted-host='pypi.org files.pythonhosted.org pypi.python.org huggingface.co'
</code></pre>
<p>Note that I'm working on a different computer, and I keep running into this kind of SSL error, which I never had on my previous computer.</p>
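For what it's worth, pip's <code>trusted-host</code> setting only applies to pip's own downloads; the <code>requests</code> library that <code>transformers</code> uses verifies certificates through Python's/its own CA bundle. A small stdlib-only check of where the interpreter's default SSL context looks for CA certificates on this machine:

```python
import ssl

# pip's trusted-host option does not affect requests/transformers;
# this shows the CA file/path the default SSL context would consult.
paths = ssl.get_default_verify_paths()
print(paths.cafile, paths.capath)
```

If a corporate proxy injects a self-signed certificate into the chain, that certificate would need to be present in whichever bundle is actually consulted.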
|
<python><ssl><huggingface>
|
2024-07-31 06:54:51
| 0
| 679
|
zest16
|
78,814,672
| 9,879,534
|
Differentiating two tuple types based on their first elements
|
<p>I have a function whose return type is <code>tuple[bool, set[int] | str]</code>. If the 0th item is <code>True</code>, then the 1st item is the result <code>set[int]</code>; otherwise the 1st item is a <code>str</code> giving the reason why it failed.</p>
<p>It's like this:</p>
<pre class="lang-py prettyprint-override"><code>def callee(para_a: int) -> tuple[bool, set[int] | str]:
    result = set([1, 2, 3])
    if something_wrong:  # some failure condition
        return False, "something wrong"
    return True, result
</code></pre>
<p>Now I call this function from other functions:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI
from fastapi.responses import JSONResponse

api = FastAPI()

def kernel(para: set[int]):
    return [i for i in para]

@api.post("/test")
def caller(para_a: int):
    res = callee(para_a)
    if res[0] is True:
        return {"result": kernel(res[1])}
    return JSONResponse(status_code=500, content={"fail_msg": res[1]})
</code></pre>
<p>Never mind <code>fastapi</code>; it's only there to show that this pattern is sometimes useful in web code.</p>
<p>Now <code>mypy</code> complains: <code>error: Argument 1 to "kernel" has incompatible type "set[int] | str"; expected "set[int]" [arg-type]</code>, so I'd like to make <code>mypy</code> understand that if the 0th item is <code>True</code>, then the 1st item is the result <code>set[int]</code>; otherwise the 1st item is a <code>str</code>. I thought about <code>overload</code>, so I wrote:</p>
<pre class="lang-py prettyprint-override"><code>from typing import overload, Literal

@overload
def callee(para_a: int) -> tuple[Literal[True], set[int]]:
    ...
@overload
def callee(para_a: int) -> tuple[Literal[False], str]:
    ...
</code></pre>
<p>Then <code>mypy</code> complains: <code>Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader</code>.</p>
<p>What I'd like to know is whether there is a good way to solve my problem.</p>
<p>Since I can't use <code>overload</code> in this situation, what should I use to make <code>mypy</code> know that <code>res[1]</code> is a <code>set[int]</code> and never a <code>str</code> when <code>res[0] is True</code>?</p>
|
<python><python-typing>
|
2024-07-31 06:14:59
| 3
| 365
|
Chuang Men
|
78,814,532
| 7,984,318
|
How to pass value between steps? 'Context' object has no attribute
|
<p>Code:</p>
<pre><code>@then('the user should see the generated survey code')
def step_then_user_sees_generated_survey_code(context):
    context.survey_code = context.driver.find_element(By.CSS_SELECTOR, "input#survey_code").get_attribute('value')
    print(22222222, context.survey_code)
    assert context.survey_code is not None

@given('the user has a generated survey code')
def step_given_user_has_generated_survey_code(context):
    print(444444444444444, context.survey_code)
    assert context.survey_code is not None
</code></pre>
<p>In this line:</p>
<pre><code>print(22222222,context.survey_code)
</code></pre>
<p>I can get a value.</p>
<p>But in this line:</p>
<pre><code>print(444444444444444,context.survey_code)
</code></pre>
<p>I received an error:</p>
<pre><code> AttributeError: 'Context' object has no attribute 'survey_code'
</code></pre>
<p>My purpose is to pass the <code>survey_code</code> value from the <code>@then</code> step to the <code>@given</code> step.</p>
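If the two steps run in different scenarios, my understanding is that behave gives each scenario a fresh context layer, so attributes set on <code>context</code> in one scenario do not survive into the next. A hypothetical workaround I've been considering is a plain module-level store shared between the step files:

```python
# Hypothetical shared.py: a module-level dict survives across scenarios,
# unlike attributes set on behave's per-scenario context layer.
shared_state = {}

def save_code(code):
    shared_state["survey_code"] = code

def load_code():
    return shared_state.get("survey_code")

# one step saves the value...
save_code("ABC123")
# ...and a later step (even in another scenario) reads it back
print(load_code())
```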
|
<python><python-behave>
|
2024-07-31 05:21:48
| 1
| 4,094
|
William
|
78,814,397
| 16,525,263
|
How to handle FileNotFoundException in python
|
<p>I have a function that reads avro files (day-wise folders) from a path and writes them back to the same path, aggregating into month-wise folders. The function works fine if the folders contain .avro files.</p>
<p>But I get an error if the folders are empty.</p>
<pre><code>java.io.FileNotFoundException: No avro files found. If files don't have .avro extension, set ignoreExtension to true
</code></pre>
<p>I have handled it with a try/except block, but even then I'm getting the error. This is the function:</p>
<pre><code>def aggregate_data(path_to_source, input_format, output_format, output_table_format='avro'):
    all_input_dates = [str(fileStatus.getPath()).split('/')[-1] for fileStatus in
                       lake.listStatus(fs.Path(path_to_source))
                       if 'latest' not in str(fileStatus.getPath())]
    all_output_dates = list(set(datetime.datetime.strptime(i, input_format).strftime(output_format)
                                for i in all_input_dates if validate_date(i, input_format))
                            - {datetime.date.today().strftime(output_format)})
    print("***** {0} *****".format(path_to_source))
    try:
        for _partition in all_output_dates:
            src_df = spark.read.option("ignoreExtension", "true").format(output_table_format)\
                .load(os.path.join(path_to_source, '{0}*'.format(_partition)))
            agg_df = src_df.repartition(estimate_part_num(src_df)).write.mode('overwrite')\
                .format(output_table_format).save(os.path.join(path_to_source, _partition))
            command_exist = ['hadoop', 'fs', '-rm', '-r', os.path.join(path_to_source, '{0}[.]*'.format(_partition))]
            subprocess.call(command_exist)
    except FileNotFoundException as e:
        print("**** Exception occurred in {0} ****".format(path_to_source), e)
</code></pre>
<p>How can I handle this FileNotFoundException in Python?</p>
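Part of what confuses me: <code>java.io.FileNotFoundException</code> is raised inside Spark's JVM, so it never arrives in Python under that name; it is wrapped in a Python-side exception whose message merely contains the Java class name. A Spark-free sketch of matching on the wrapped message (the wrapper class below is a stand-in for the real one, e.g. py4j's <code>Py4JJavaError</code>):

```python
# Stand-in for the wrapper exception that PySpark surfaces for JVM errors;
# the Java class name only appears inside the message text.
class JavaErrorWrapper(Exception):
    pass

def read_partition(path):
    # simulate Spark failing on an empty folder
    raise JavaErrorWrapper("java.io.FileNotFoundException: No avro files found.")

skipped = []
try:
    read_partition("/data/2024-07")
except JavaErrorWrapper as e:
    if "FileNotFoundException" in str(e):
        skipped.append("/data/2024-07")  # treat the empty partition as skippable
    else:
        raise
print(skipped)
```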
|
<python><exception><filenotfoundexception>
|
2024-07-31 04:10:45
| 0
| 434
|
user175025
|
78,814,350
| 3,966,456
|
Under what circumstance can `sys.getrefcount()` return zero?
|
<p>I was reading the Python 3.12 official documentation on the <code>sys</code> module, and <a href="https://docs.python.org/3.12/library/sys.html#sys.getrefcount" rel="nofollow noreferrer">the part about <code>getrefcount()</code></a> seems rather confusing to me (emphasis mine).</p>
<blockquote>
<p>Return the reference count of the object. The count returned is
generally one higher than you might expect, because it includes the
(temporary) reference as an argument to <code>getrefcount()</code>.</p>
<p>Note that the returned value may not actually reflect how many
references to the object are actually held. For example, some objects
are “immortal” and have a very high refcount that does not reflect the
actual number of references. <strong>Consequently, do not rely on the returned</strong>
<strong>value to be accurate, other than a value of 0 or 1.</strong></p>
<blockquote>
<p>Changed in version 3.12: Immortal objects have very large refcounts
that do not match the actual number of references to the object.</p>
</blockquote>
</blockquote>
<p>When <code>sys.getrefcount(expr)</code> is invoked in Python, if nothing else, there is at least a reference to the object resulting from <code>expr</code>: the reference from <code>getrefcount()</code> itself. For example, the following code returns the minimal possible return value of <code>getrefcount()</code> per my understanding, which is 1.</p>
<pre><code>import sys
print(sys.getrefcount(object()))
</code></pre>
<p>I suspect this is a documentation inaccuracy due to a direct copy from the <a href="https://docs.python.org/3.12/c-api/refcounting.html#c.Py_REFCNT" rel="nofollow noreferrer">documentation of Py_REFCNT</a>. Note how one paragraph duplicates exactly the same text as the previously quoted documentation.</p>
<blockquote>
<p>Get the reference count of the Python object <code>o</code>.</p>
<p>Note that the returned value may not actually reflect how many
references to the object are actually held. For example, some objects
are “immortal” and have a very high refcount that does not reflect the
actual number of references. Consequently, do not rely on the returned
value to be accurate, other than a value of 0 or 1.</p>
<p>Use the <code>Py_SET_REFCNT()</code> function to set an object reference count.</p>
<blockquote>
<p>Changed in version 3.10: <code>Py_REFCNT()</code> is changed to the inline static
function.</p>
</blockquote>
<blockquote>
<p>Changed in version 3.11: The parameter type is no longer <code>const PyObject*</code>.</p>
</blockquote>
</blockquote>
<p>In the case of <code>Py_REFCNT()</code> I believe it's reasonable to return a value of zero because passing a pointer to <code>Py_REFCNT()</code> does not automatically form an additional reference to the underlying Python object. However, I do not believe that there can be a case where the <code>getrefcount()</code> function from Python <code>sys</code> module can possibly return zero.</p>
<p>Am I missing anything obvious or any edge cases? Is there a case where <code>sys.getrefcount()</code> can return zero, as the documentation currently seems to allude to? It would also help if someone could clarify what the bolded sentence actually means. Does it mean that a <code>sys.getrefcount()</code> return value of 3 (which is a rather common case) should be considered inaccurate?</p>
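To illustrate the "one higher" behavior from the first quoted paragraph, a minimal check (the exact value depends on interpreter internals, so I only assert a lower bound):

```python
import sys

x = object()
# x itself holds one reference; passing x to getrefcount() temporarily
# adds another, so the result is at least 2 for a plain local object.
count = sys.getrefcount(x)
print(count)
```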
|
<python><cpython><reference-counting><python-3.12>
|
2024-07-31 03:42:25
| 0
| 5,550
|
Weijun Zhou
|
78,813,947
| 1,324,833
|
adding buttons to a PyQT/Matplotlib NavigationToolbar2QT
|
<p>I've successfully added a button to the standard navigation toolbar using the following function; however, my new button appears at the right side of the toolbar, and I would like it to follow the existing buttons on the left side. I can't find any reference to positioning or aligning the buttons.</p>
<p>I'm guessing the reason is that there is a QSpacerItem added to the layout, but how do I squeeze my button in before it?</p>
<pre><code>def add_toolbar(self):
    self.toolbar = NavigationToolbar(self.canvas, self, True)
    self.measure_button = QtWidgets.QToolButton(self.toolbar)
    self.measure_button.clicked.connect(self.measure)
    self.measure_button.setIcon(QtGui.QIcon('MyButtonIcon.jpg'))
    self.toolbar.addWidget(self.measure_button)
    self.addToolBar(QtCore.Qt.TopToolBarArea, self.toolbar)
</code></pre>
|
<python><matplotlib><pyqt>
|
2024-07-30 23:09:05
| 1
| 1,237
|
marcp
|
78,813,933
| 13,802,418
|
Kivy 2.3 VideoPlayer Android Crash
|
<p>I'm trying to create a video .apk which includes a <code>Video</code> or <code>VideoPlayer</code> widget.</p>
<p>main.py</p>
<pre><code>import kivy
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.widget import Widget
from kivy.uix.videoplayer import VideoPlayer
import os

os.environ["KIVY_VIDEO"] = "ffpyplayer"

from kivy.lang import Builder
from kivy.utils import platform

kv = """
MainLayout:
    orientation: 'vertical'
    BoxLayout:
        Button:
            text: "Is file Exist?"
            on_release: root.checkFile()
        Label:
            id: l1
    BoxLayout:
        orientation: 'vertical'
        Button:
            text: "Load 1.mp4 to Video & VideoPlayer"
            on_release: root.loadmp4()
        Label:
            id: l2
            text: "Waiting to Load File"
    BoxLayout:
        Video:
            id: vd
        VideoPlayer:
            id: vdp
    BoxLayout:
        Button:
            id: vdb
            text: "Video Load file first"
            on_release: root.playVideo()
            disabled: True
        Button:
            text: "VideoPlayer Load file first"
            id: vdpb
            on_release: root.playVideoPlayer()
            disabled: True
    BoxLayout:
        orientation: 'vertical'
        BoxLayout:
            Button:
                text: "use gstplayer"
                on_release: root.setKivyVideoEnviron("gstreamer")
            Button:
                text: "use ffmpeg"
                on_release: root.setKivyVideoEnviron("gstplayer")
            Button:
                text: "use ffpyplayer"
                on_release: root.setKivyVideoEnviron("ffpyplayer")
        Button:
            text: "Show Kivy Video Providers"
            on_release: root.showKivyVideoOptions()
        Label:
            id: log
            text: "Logs"
"""

class MainLayout(BoxLayout):
    def vpwPlay(self, *args):
        self.ids.vpw.state = "play"

    def showKivyVideoOptions(self, *args):
        App.get_running_app().askpermission()  # also check read&write permission
        self.ids.log.text = str(kivy.kivy_options["video"])

    def setKivyVideoEnviron(self, env, *args):
        try:
            os.environ["KIVY_VIDEO"] = env
        except Exception as err:
            self.ids.log.text = str(err)

    def checkFile(self, *args):
        try:
            file_exists = os.path.isfile(os.path.join(os.getcwd(), "1.mp4"))
            self.ids.l1.text = "File exist: " + str(file_exists)
        except Exception as err:
            self.ids.log.text = str(err)

    def loadmp4(self, *args):
        try:
            file_path = os.path.join(os.getcwd(), "1.mp4")
            self.ids.vd.source = file_path
            self.ids.vdp.source = file_path
            self.ids.l2.text = "Loaded file: " + file_path
            self.ids.vdb.text = "[VIDEO]\nPlay Video"
            self.ids.vdb.disabled = False
            self.ids.vdpb.text = "[VIDEOPLAYER]\nPlay Video"
            self.ids.vdpb.disabled = False
        except Exception as err:
            self.ids.log.text = str(err)

    def playVideo(self, *args):
        try:
            self.ids.vd.state = 'play'
        except Exception as err:
            self.ids.log.text = str(err)

    def playVideoPlayer(self, *args):
        try:
            self.ids.vdp.state = 'play'
        except Exception as err:
            self.ids.log.text = str(err)

class MyApp(App):
    def askpermission(self, *args):
        if platform == "android":
            from android.permissions import request_permissions, Permission
            request_permissions([Permission.WRITE_EXTERNAL_STORAGE, Permission.READ_EXTERNAL_STORAGE])

    def build(self):
        return Builder.load_string(kv)

if __name__ == '__main__':
    MyApp().run()
</code></pre>
</code></pre>
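One thing I'm unsure about in my code: my understanding from the Kivy docs is that environment variables like <code>KIVY_VIDEO</code> are read when Kivy's core modules are first imported, so changing them afterwards (as my <code>setKivyVideoEnviron</code> buttons do at runtime) would have no effect on the provider. A minimal sketch of the ordering I believe is required (the provider name here is just the one this app targets):

```python
import os

# KIVY_VIDEO must be in the environment before any kivy import, because
# the video provider is selected while kivy's core modules load.
os.environ["KIVY_VIDEO"] = "ffpyplayer"

# only after this point would the kivy imports follow, e.g.:
# import kivy
# from kivy.app import App
print(os.environ["KIVY_VIDEO"])
```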
<p>buildozer.spec:</p>
<pre><code>[app]
title = Test Video APK
package.name = myapp.videotestetststeste
package.domain = org.testtestestest
source.dir = .
source.include_exts =
version = 0.1
requirements = python3,kivy==2.3.0,olefile,ffpyplayer,ffpyplayer_codecs
orientation = portrait
osx.python_version = 3.11
osx.kivy_version = 2.3.0
fullscreen = 1
android.permissions = android.permission.INTERNET, (name=android.permission.WRITE_EXTERNAL_STORAGE;maxSdkVersion=18)
</code></pre>
<p>Log:</p>
<pre><code>07-31 02:01:02.992 15263 15263 V pythonutil: Loading library: python3.11
07-31 02:01:03.038 15263 15263 V pythonutil: Loading library: main
07-31 02:01:03.042 15263 15263 V pythonutil: Loaded everything!
07-31 02:01:03.146 15263 15302 I python : Initializing Python for Android
07-31 02:01:03.146 15263 15302 I python : Setting additional env vars from p4a_env_vars.txt
07-31 02:01:03.147 15263 15302 I python : Changing directory to the one provided by ANDROID_ARGUMENT
07-31 02:01:03.147 15263 15302 I python : /data/user/0/org.testtestestest.myapp.videotestetststeste/files/app
07-31 02:01:03.170 15263 15302 I python : Preparing to initialize python
07-31 02:01:03.170 15263 15302 I python : _python_bundle dir exists
07-31 02:01:03.170 15263 15302 I python : calculated paths to be...
07-31 02:01:03.170 15263 15302 I python : /data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/stdlib.zip:/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/modules
07-31 02:01:03.170 15263 15302 I python : set wchar paths...
07-31 02:01:03.194 15263 15263 W SDLThread: type=1400 audit(0.0:56907): avc: granted { execute } for path="/data/data/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/modules/zlib.cpython-311.so" dev="sda15" ino=900967 scontext=u:r:untrusted_app:s0:c221,c257,c512,c768 tcontext=u:object_r:app_data_file:s0:c221,c257,c512,c768 tclass=file app=org.testtestestest.myapp.videotestetststeste
07-31 02:01:03.221 15263 15302 I python : Initialized python
07-31 02:01:03.221 15263 15302 I python : AND: Init threads
07-31 02:01:03.222 15263 15302 I python : testing python print redirection
07-31 02:01:03.223 15263 15302 I python : Android path ['.', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/stdlib.zip', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/modules', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages']
07-31 02:01:03.223 15263 15302 I python : os.environ is environ({'PATH': '/product/bin:/apex/com.android.runtime/bin:/apex/com.android.art/bin:/system_ext/bin:/system/bin:/system/xbin:/odm/bin:/vendor/bin:/vendor/xbin', 'ANDROID_BOOTLOGO': '1', 'ANDROID_ROOT': '/system', 'ANDROID_ASSETS': '/system/app', 'ANDROID_DATA': '/data', 'ANDROID_STORAGE': '/storage', 'ANDROID_ART_ROOT': '/apex/com.android.art', 'ANDROID_I18N_ROOT': '/apex/com.android.i18n', 'ANDROID_TZDATA_ROOT': '/apex/com.android.tzdata', 'EXTERNAL_STORAGE': '/sdcard', 'ASEC_MOUNTPOINT': '/mnt/asec', 'BOOTCLASSPATH': '/apex/com.android.art/javalib/core-oj.jar:/apex/com.android.art/javalib/core-libart.jar:/apex/com.android.art/javalib/core-icu4j.jar:/apex/com.android.art/javalib/okhttp.jar:/apex/com.android.art/javalib/bouncycastle.jar:/apex/com.android.art/javalib/apache-xml.jar:/system/framework/framework.jar:/system/framework/ext.jar:/system/framework/telephony-common.jar:/system/framework/voip-common.jar:/system/framework/ims-common.jar:/system/framework/framework-atb-backward-compatibility.jar:/system/framework/tcmiface.jar:/system/framework/telephony-ext.jar:/system/framework/qcom.fmradio.jar:/system/framework/QPerformance.jar:/system/framework/UxPerformance.jar:/system/framework/WfdCommon.jar:/apex/com.android.conscrypt/javalib/conscrypt.jar:/apex/com.android.media/javalib/updatable-media.jar:/apex/com.android.mediaprovider/javalib/framework-mediaprovider.jar:/apex/com.android.os.statsd/javalib/framework-statsd.jar:/apex/com.android.permission/javalib/framework-permission.jar:/apex/com.android.sdkext/javalib/framework-sdkextensions.jar:/apex/com.android.wifi/javalib/framework-wifi.jar:/apex/com.android.tethering/javalib/framework-tethering.jar', 'DEX2OATBOOTCLASSPATH': 
'/apex/com.android.art/javalib/core-oj.jar:/apex/com.android.art/javalib/core-libart.jar:/apex/com.android.art/javalib/core-icu4j.jar:/apex/com.android.art/javalib/okhttp.jar:/apex/com.android.art/javalib/bouncycastle.jar:/apex/com.android.art/javalib/apache-xml.jar:/system/framework/framework.jar:/system/framework/ext.jar:/system/framework/telephony-common.jar:/system/framework/voip-common.jar:/system/framework/ims-common.jar:/system/framework/framework-atb-backward-compatibility.jar:/system/framework/tcmiface.jar:/system/framework/telephony-ext.jar:/system/framework/qcom.fmradio.jar:/system/framework/QPerformance.jar:/system/framework/UxPerformance.jar:/system/framework/WfdCommon.jar', 'SYSTEMSERVERCLASSPATH': '/system/framework/com.android.location.provider.jar:/system/framework/services.jar:/system/framework/ethernet-service.jar:/apex/com.android.permission/javalib/service-permission.jar:/apex/com.android.wifi/javalib/service-wifi.jar:/apex/com.android.ipsec/javalib/android.net.ipsec.ike.jar', 'DOWNLOAD_CACHE': '/data/cache', 'ANDROID_SOCKET_zygote': '18', 'ANDROID_SOCKET_usap_pool_primary': '23', 'ANDROID_ENTRYPOINT': 'main.pyc', 'ANDROID_ARGUMENT': '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app', 'ANDROID_APP_PATH': '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app', 'ANDROID_PRIVATE': '/data/user/0/org.testtestestest.myapp.videotestetststeste/files', 'ANDROID_UNPACK': '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app', 'PYTHONHOME': '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app', 'PYTHONPATH': '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app:/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/lib', 'PYTHONOPTIMIZE': '2', 'P4A_BOOTSTRAP': 'SDL2', 'PYTHON_NAME': 'python', 'P4A_IS_WINDOWED': 'False', 'KIVY_ORIENTATION': 'Portrait', 'P4A_NUMERIC_VERSION': 'None', 'P4A_MINSDK': '21', 'LC_CTYPE': 'C.UTF-8'})
07-31 02:01:03.223 15263 15302 I python : Android kivy bootstrap done. __name__ is __main__
07-31 02:01:03.223 15263 15302 I python : AND: Ran string
07-31 02:01:03.223 15263 15302 I python : Run user program, change dir and execute entrypoint
07-31 02:01:03.284 15263 15263 W SDLThread: type=1400 audit(0.0:56908): avc: granted { execute } for path="/data/data/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/modules/_typing.cpython-311.so" dev="sda15" ino=900949 scontext=u:r:untrusted_app:s0:c221,c257,c512,c768 tcontext=u:object_r:app_data_file:s0:c221,c257,c512,c768 tclass=file app=org.testtestestest.myapp.videotestetststeste
07-31 02:01:03.334 15263 15263 W SDLThread: type=1400 audit(0.0:56909): avc: granted { execute } for path="/data/data/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/modules/math.cpython-311.so" dev="sda15" ino=900956 scontext=u:r:untrusted_app:s0:c221,c257,c512,c768 tcontext=u:object_r:app_data_file:s0:c221,c257,c512,c768 tclass=file app=org.testtestestest.myapp.videotestetststeste
07-31 02:01:03.364 15263 15263 W SDLThread: type=1400 audit(0.0:56912): avc: granted { execute } for path="/data/data/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/modules/_opcode.cpython-311.so" dev="sda15" ino=900928 scontext=u:r:untrusted_app:s0:c221,c257,c512,c768 tcontext=u:object_r:app_data_file:s0:c221,c257,c512,c768 tclass=file app=org.testtestestest.myapp.videotestetststeste
07-31 02:01:03.389 15263 15302 I python : [ERROR ] Error when copying logo directory
07-31 02:01:03.389 15263 15302 I python : Traceback (most recent call last):
07-31 02:01:03.390 15263 15302 I python : File "/mnt/KivyVideoAPK/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/python-installs/myapp.videotestetststeste/arm64-v8a/kivy/__init__.py", line 372, in <module>
07-31 02:01:03.390 15263 15302 I python : File "/mnt/KivyVideoAPK/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/python3/arm64-v8a__ndk_target_21/python3/Lib/shutil.py", line 561, in copytree
07-31 02:01:03.390 15263 15302 I python : File "/mnt/KivyVideoAPK/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/python3/arm64-v8a__ndk_target_21/python3/Lib/shutil.py", line 515, in _copytree
07-31 02:01:03.390 15263 15302 I python : shutil.Error: [('/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/data/logo/kivy-icon-128.png', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-128.png', "[Errno 13] Permission denied: '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-128.png'"), ('/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/data/logo/kivy-icon-16.png', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-16.png', "[Errno 13] Permission denied: '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-16.png'"), ('/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/data/logo/kivy-icon-24.png', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-24.png', "[Errno 13] Permission denied: '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-24.png'"), ('/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/data/logo/kivy-icon-256.png', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-256.png', "[Errno 13] Permission denied: '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-256.png'"), ('/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/data/logo/kivy-icon-32.png', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-32.png', "[Errno 13] Permission denied: '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-32.png'"), 
('/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/data/logo/kivy-icon-48.png', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-48.png', "[Errno 13] Permission denied: '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-48.png'"), ('/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/data/logo/kivy-icon-512.png', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-512.png', "[Errno 13] Permission denied: '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-512.png'"), ('/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/data/logo/kivy-icon-64.ico', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-64.ico', "[Errno 13] Permission denied: '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-64.ico'"), ('/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/data/logo/kivy-icon-64.png', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-64.png', "[Errno 13] Permission denied: '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon/kivy-icon-64.png'"), ('/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/data/logo', '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon', "[Errno 13] Permission denied: '/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/icon'")]
07-31 02:01:03.399 15263 15302 I python : [WARNING] [Config ] Older configuration version detected (0 instead of 27)
07-31 02:01:03.400 15263 15302 I python : [WARNING] [Config ] Upgrading configuration in progress.
07-31 02:01:03.400 15263 15302 I python : [DEBUG ] [Config ] Upgrading from 0 to 1
07-31 02:01:03.403 15263 15302 I python : [INFO ] [Logger ] Record log in /data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/.kivy/logs/kivy_24-07-31_0.txt
07-31 02:01:03.405 15263 15302 I python : [INFO ] [Kivy ] v2.3.0
07-31 02:01:03.406 15263 15302 I python : [INFO ] [Kivy ] Installed at "/data/user/0/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/__init__.pyc"
07-31 02:01:03.406 15263 15302 I python : [INFO ] [Python ] v3.11.5 (main, Jul 31 2024, 01:30:29) [Clang 14.0.6 (https://android.googlesource.com/toolchain/llvm-project 4c603efb0
07-31 02:01:03.406 15263 15302 I python : [INFO ] [Python ] Interpreter at ""
07-31 02:01:03.407 15263 15302 I python : [INFO ] [Logger ] Purge log fired. Processing...
07-31 02:01:03.407 15263 15302 I python : [INFO ] [Logger ] Purge finished!
07-31 02:01:04.354 15263 15263 W SDLThread: type=1400 audit(0.0:56934): avc: granted { execute } for path="/data/data/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/modules/binascii.cpython-311.so" dev="sda15" ino=900913 scontext=u:r:untrusted_app:s0:c221,c257,c512,c768 tcontext=u:object_r:app_data_file:s0:c221,c257,c512,c768 tclass=file app=org.testtestestest.myapp.videotestetststeste
07-31 02:01:04.374 15263 15263 W SDLThread: type=1400 audit(0.0:56935): avc: granted { execute } for path="/data/data/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/modules/_contextvars.cpython-311.so" dev="sda15" ino=900911 scontext=u:r:untrusted_app:s0:c221,c257,c512,c768 tcontext=u:object_r:app_data_file:s0:c221,c257,c512,c768 tclass=file app=org.testtestestest.myapp.videotestetststeste
07-31 02:01:04.384 15263 15263 W SDLThread: type=1400 audit(0.0:56936): avc: granted { execute } for path="/data/data/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/modules/_asyncio.cpython-311.so" dev="sda15" ino=900869 scontext=u:r:untrusted_app:s0:c221,c257,c512,c768 tcontext=u:object_r:app_data_file:s0:c221,c257,c512,c768 tclass=file app=org.testtestestest.myapp.videotestetststeste
07-31 02:01:04.424 15263 15263 W SDLThread: type=1400 audit(0.0:56937): avc: granted { execute } for path="/data/data/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/_event.so" dev="sda15" ino=901266 scontext=u:r:untrusted_app:s0:c221,c257,c512,c768 tcontext=u:object_r:app_data_file:s0:c221,c257,c512,c768 tclass=file app=org.testtestestest.myapp.videotestetststeste
07-31 02:01:04.445 15263 15302 I python : [INFO ] [Factory ] 195 symbols loaded
07-31 02:01:04.602 15263 15302 I python : [INFO ] [ImageLoaderFFPy] Using ffpyplayer 4.3.2
07-31 02:01:04.603 15263 15302 I python : [INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2, img_ffpyplayer (img_pil ignored)
07-31 02:01:04.907 15263 15302 I python : [INFO ] [VideoFFPy ] Using ffpyplayer 4.3.2
07-31 02:01:04.908 15263 15302 I python : [INFO ] [Video ] Provider: ffpyplayer(['video_ffmpeg'] ignored)
07-31 02:01:04.940 15263 15302 I python : [INFO ] [Window ] Provider: sdl2
07-31 02:01:05.020 15263 15302 I python : [INFO ] [GL ] Using the "OpenGL ES 2" graphics system
07-31 02:01:05.026 15263 15302 I python : [INFO ] [GL ] Backend used <sdl2>
07-31 02:01:05.026 15263 15302 I python : [INFO ] [GL ] OpenGL version <b'OpenGL ES 3.2 V@0502.0 (GIT@d4cfdf3, Ic907de5ed0, 1601055299) (Date:09/25/20)'>
07-31 02:01:05.027 15263 15302 I python : [INFO ] [GL ] OpenGL vendor <b'Qualcomm'>
07-31 02:01:05.027 15263 15302 I python : [INFO ] [GL ] OpenGL renderer <b'Adreno (TM) 610'>
07-31 02:01:05.027 15263 15302 I python : [INFO ] [GL ] OpenGL parsed version: 3, 2
07-31 02:01:05.028 15263 15302 I python : [INFO ] [GL ] Texture max size <16384>
07-31 02:01:05.028 15263 15302 I python : [INFO ] [GL ] Texture max units <16>
07-31 02:01:05.082 15263 15302 I python : [INFO ] [Window ] auto add sdl2 input provider
07-31 02:01:05.083 15263 15302 I python : [INFO ] [Window ] virtual keyboard not allowed, single mode, not docked
07-31 02:01:05.084 15263 15302 I python : [ERROR ] [Image ] Error loading <1.mp4>
07-31 02:01:05.087 15263 15302 I python : [WARNING] [Base ] Unknown <android> provider
07-31 02:01:05.087 15263 15302 I python : [INFO ] [Base ] Start application main loop
07-31 02:01:05.604 15331 15331 F DEBUG : #06 pc 000000000001539c /data/data/org.testtestestest.myapp.videotestetststeste/files/app/_python_bundle/site-packages/kivy/core/window/_window_sdl2.so
07-31 02:01:05.604 15331 15331 F DEBUG : #07 pc 00000000002c82b8 /data/app/~~s0lBApATuUpzckMd1_r55w==/org.testtestestest.myapp.videotestetststeste-PxUqzVwevgRc14pJJJu1eQ==/lib/arm64/libpython3.11.so (_PyEval_EvalFrameDefault+2052)
07-31 02:01:05.604 15331 15331 F DEBUG : #08 pc 00000000002c78e4 /data/app/~~s0lBApATuUpzckMd1_r55w==/org.testtestestest.myapp.videotestetststeste-PxUqzVwevgRc14pJJJu1eQ==/lib/arm64/libpython3.11.so (PyEval_EvalCode+268)
07-31 02:01:05.604 15331 15331 F DEBUG : #09 pc 00000000003187e8 /data/app/~~s0lBApATuUpzckMd1_r55w==/org.testtestestest.myapp.videotestetststeste-PxUqzVwevgRc14pJJJu1eQ==/lib/arm64/libpython3.11.so (_PyRun_SimpleFileObject+1188)
07-31 02:01:05.604 15331 15331 F DEBUG : #10 pc 00000000003190ac /data/app/~~s0lBApATuUpzckMd1_r55w==/org.testtestestest.myapp.videotestetststeste-PxUqzVwevgRc14pJJJu1eQ==/lib/arm64/libpython3.11.so (PyRun_SimpleFileExFlags+60)
</code></pre>
<p>The code works on Windows and Linux but fails on Android. I can't figure out what is wrong from these last few rows of the log.</p>
|
<python><android><video><kivy><buildozer>
|
2024-07-30 23:02:28
| 1
| 505
|
320V
|
78,813,866
| 799,927
|
How to Traverse a Python Dictionary While Avoiding KeyErrors?
|
<p>While parsing a large JSON, it's possible that some keys may only exist in certain circumstances, such as when an error is present. It's not uncommon to get a <code>200 OK</code> from the server's API, but then the response you got contains errors that should be checked for.</p>
<p>What's the best way to handle this?</p>
<p>I know that using things like <code>get()</code> is one way to handle KeyErrors.</p>
<pre><code>if json.get('errors') is not None:
    do_something
</code></pre>
<p>But what if we need to keep going? This feels horrible:</p>
<pre><code>if json.get('errors') is not None:
    if json.get('errors').get('code') is not None:
        if json.get('errors').get('code') != 0:
            this_is_bad_throw_error
</code></pre>
<p>Is there a better way? For example, <code>ruby</code> has a <code>dig()</code> method that basically condenses that whole line into <code>x = dig(:errors, :code)</code> and then you just check on <code>x</code> if it's a <code>nil</code> object or not; so 2 lines, instead of 4 (or more if you have to keep going deeper).</p>
<p>I have used <code>try-except</code> before to catch the <code>KeyError</code>, but this isn't an approach I necessarily care for.</p>
<p>I'm not against external libraries, but due to the location of where the code is being run it's <em>extremely</em> beneficial to stick to built-ins.</p>
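<p>One built-ins-only option (a sketch, not a standard recipe): a small <code>dig()</code>-style helper built on <code>functools.reduce</code> that walks the keys and returns a default on any miss:</p>

```python
from functools import reduce

def dig(mapping, *keys, default=None):
    """Walk nested dicts like Ruby's dig(), returning `default` on any miss."""
    try:
        return reduce(lambda d, k: d[k], keys, mapping)
    except (KeyError, IndexError, TypeError):
        return default

response = {"errors": {"code": 7}}
print(dig(response, "errors", "code"))    # → 7
print(dig(response, "errors", "detail"))  # → None
```

<p>This collapses the nested <code>if</code> chain above into a single lookup plus one check on the result.</p>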
|
<python><dictionary>
|
2024-07-30 22:31:15
| 3
| 966
|
mire3212
|
78,813,844
| 214,526
|
"process died unexpectedly" with cythonized version of multiprocessing code
|
<p>This is an offshoot of <a href="https://stackoverflow.com/questions/78520245/redundant-print-with-multiprocessing">this question</a>. The code in Python runs fine. When I tried the cythonized version, I started getting <code>Can't pickle &lt;cyfunction init_worker_processes at 0x7fffd7da5a00&gt;</code> even though I defined <code>init_worker_processes</code> at the top level. So I moved it to another module and used the imported <code>init_worker_processes</code>. Now, I get the below error:</p>
<pre><code>error: unrecognized arguments: -s -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=8, pipe_handle=16) --multiprocessing-fork
Python3/lib/python3.9/multiprocessing/resource_tracker.py:96: UserWarning: resource_tracker: process died unexpectedly, relaunching. Some resources might leak.
warnings.warn('resource_tracker: process died unexpectedly, '
</code></pre>
<p>I'm not explicitly using <code>-s</code> or <code>-c</code> as reported in the error. The error is coming from the code below in the multiprocessing library (method <code>ensure_running</code>):</p>
<pre><code>warnings.warn('resource_tracker: process died unexpectedly, '
              'relaunching. Some resources might leak.')
</code></pre>
<p>How to resolve this issue?</p>
<pre><code># updated Python code
# ---------------------- mp_app.py ------------------
import argparse
import logging
import signal
import sys
import time
import multiprocessing as mp
from dataclasses import dataclass
from typing import Dict, NoReturn

import numpy as np

from mp_utils import init_worker_processes


@dataclass
class TmpData:
    name: str
    value: int


def worker(name: str, data: TmpData) -> NoReturn:
    logger_obj = mp.get_logger()
    logger_obj.info(f"processing : {name}; value: {data.value}")
    time.sleep(data.value)


def get_args(logger: logging.Logger) -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="test MP app")
    parser.add_argument(
        "-m",
        "--max-time",
        type=int,
        dest="max_time",
        required=True,
        help="max timeout in seconds",
    )
    parser.add_argument(
        "-j",
        dest="num_workers",
        type=int,
        default=1,
        required=False,
        help=argparse.SUPPRESS,
    )
    try:
        args = parser.parse_args()
    except argparse.ArgumentError as err:
        logger.exception(parser.print_help())
        raise err
    return args


def mp_app(options: argparse.Namespace, logger: logging.Logger) -> NoReturn:
    map_data: Dict[str, TmpData] = {
        key: TmpData(name=key, value=np.random.randint(1, options.max_time))
        for key in ["ABC", "DEF", "GHI", "JKL", "PQR", "STU", "XYZ"]
    }
    with mp.get_context("fork").Pool(
        processes=options.num_workers,
        initializer=init_worker_processes,
    ) as pool:
        results = []
        for key in map_data:
            try:
                results.append(
                    pool.apply_async(
                        worker,
                        args=(
                            key,
                            map_data[key],
                        ),
                    )
                )
            except KeyboardInterrupt:
                pool.terminate()
                pool.close()
                pool.join()
        for result in results:
            try:
                result.get()
            except Exception as err:
                logger.error(f"{err}")


if __name__ == "__main__":
    main_logger = logging.getLogger()
    try:
        args = get_args(main_logger)
        mp_app(options=args, logger=main_logger)
    except Exception as e:
        main_logger.error(e)
        raise SystemExit(1) from e
    sys.exit(0)

# --------------------- mp_utils.py --------------------------
import multiprocessing
import logging
import signal
from typing import NoReturn


def init_worker_processes() -> NoReturn:
    """
    Initializes each worker to handle signals

    Returns:
        None
    """
    this_process_logger = multiprocessing.log_to_stderr()
    this_process_logger.setLevel(logging.INFO)
    signal.signal(signal.SIGINT, signal.SIG_IGN)
</code></pre>
<p>Please note the main issue seems to be <code>-s</code> and <code>-c</code> being unrecognized options; I'm not sure where those are coming from.</p>
<h2>Edit-2:</h2>
<p>While I'm still trying to decipher the cython build process, as it happens through a complex make system in our environment, I think I have traced the root cause of the <code>-s</code> and <code>-c</code> options. <code>-s</code> seems to be coming from the <code>_args_from_interpreter_flags</code> method in subprocess.py (module subprocess).</p>
<p>In my Python shell I see <code>sys.flags</code> as follows:</p>
<pre><code>>>> sys.flags
sys.flags(debug=0, inspect=0, interactive=0, optimize=0, dont_write_bytecode=0, no_user_site=1, no_site=0, ignore_environment=0, verbose=0, bytes_warning=0, quiet=0, hash_randomization=1, isolated=0, dev_mode=False, utf8_mode=0, int_max_str_digits=-1)
</code></pre>
<p>Since sys.flags.no_user_site is 1, <code>-s</code> seems to get appended.</p>
<p><code>get_command_line</code> in spawn.py seems to be adding <code>-c</code>. Since this branch comes from the else of <code>if getattr(sys, 'frozen', False)</code>, is the spawn approach not supposed to work with a cythonized binary?</p>
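<p>That guess is easy to check from a Python shell. Note that <code>_args_from_interpreter_flags</code> is a private stdlib helper (it is what multiprocessing uses to rebuild the child's command line), so this is only a diagnostic sketch:</p>

```python
import subprocess
import sys

# Private stdlib helper that multiprocessing forwards to the child process;
# with sys.flags.no_user_site set, the returned list includes '-s'.
flags = subprocess._args_from_interpreter_flags()
print(sys.flags.no_user_site, flags)
```

<p>Running this inside the cythonized binary versus plain Python should show which interpreter flags get forwarded and where <code>-s</code> enters the child's command line.</p>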
<h2>EDIT-3:</h2>
<p>I tried with both "fork" and "spawn". Both work in Python. But with the cythonized build of the "spawn"-based app, I get the "UserWarning: resource_tracker: process died unexpectedly, relaunching. Some resources might leak" message along with "unrecognized arguments" for -s and -c. The cythonized version of the "fork"-based app simply hangs at launch, as if it's waiting on some lock. I tried pstack on the process id, but could not spot anything:</p>
<pre><code># top 20 frames from pstack
#0 0x00007ffff799675d in read () from /usr/lib64/libpthread.so.0
#1 0x00007ffff70c3996 in _Py_read (fd=fd@entry=3, buf=0x7fffbabfdbf0, count=count@entry=4) at Python/fileutils.c:1707
#2 0x00007ffff70ce872 in os_read_impl (module=<optimized out>, length=4, fd=3) at ./Modules/posixmodule.c:9474
#3 os_read (module=<optimized out>, nargs=<optimized out>, args=<optimized out>) at ./Modules/clinic/posixmodule.c.h:5012
#4 os_read (module=<optimized out>, args=<optimized out>, nargs=<optimized out>) at ./Modules/clinic/posixmodule.c.h:4977
#5 0x00007ffff6fc444f in cfunction_vectorcall_FASTCALL (func=0x7ffff7f4aa90, args=0x7fffbae68f10, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/methodobject.c:430
#6 0x00007ffff6f303ec in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x7fffbae68f10, callable=0x7ffff7f4aa90, tstate=0x419cb0) at ./Include/cpython/abstract.h:118
#7 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x7fffbae68f10, callable=<optimized out>) at ./Include/cpython/abstract.h:127
#8 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0x419cb0) at Python/ceval.c:5077
#9 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3520
#10 0x00007ffff7066ea4 in _PyEval_EvalFrame (throwflag=0, f=0x7fffbae68d60, tstate=0x419cb0) at ./Include/internal/pycore_ceval.h:40
#11 _PyEval_EvalCode (tstate=tstate@entry=0x419cb0, _co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=2, kwnames=0x0, kwargs=0x7fffbac0b750, kwcount=0, kwstep=1, defs=0x7fffbae95298, defcount=1, kwdefs=0x0, closure=0x0, name=0x7fffbae8d230, qualname=0x7fffbae8c210) at Python/ceval.c:4329
#12 0x00007ffff6f7baba in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/call.c:396
#13 0x00007ffff6f311aa in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x7fffbac0b740, callable=0x7fffbabefe50, tstate=0x419cb0) at ./Include/cpython/abstract.h:118
#14 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x7fffbac0b740, callable=<optimized out>) at ./Include/cpython/abstract.h:127
#15 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0x419cb0) at Python/ceval.c:5077
#16 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3506
#17 0x00007ffff7066ea4 in _PyEval_EvalFrame (throwflag=0, f=0x7fffbac0b5b0, tstate=0x419cb0) at ./Include/internal/pycore_ceval.h:40
#18 _PyEval_EvalCode (tstate=tstate@entry=0x419cb0, _co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=2, kwnames=0x0, kwargs=0x7fffbac08f58, kwcount=0, kwstep=1, defs=0x7fffbae83b08, defcount=1, kwdefs=0x0, closure=0x0, name=0x7fffbae84830, qualname=0x7fffbae8c3f0) at Python/ceval.c:4329
#19 0x00007ffff6f7baba in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/call.c:396
#20 0x00007ffff6f311aa in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x7fffbac08f48, callable=0x7fffbabeff70, tstate=0x419cb0) at ./Include/cpython/abstract.h:118
</code></pre>
<p>I checked the cython build process prints something like following:</p>
<pre><code>cython --3str --embed --no-docstrings -o mp_app.c mp_app.py
gcc -Os -Loss/Python3/lib -DNDEBUG -Wl,--strip-all -IPython-3.9.15/include/python3.9 -LPython-3.9.15/lib/python3.9/config-3.9-x86_64-linux-gnu -LPython-3.9.15/lib -lcrypt -lpthread -ldl -lutil -lm -lm -B /binutils/bin -static-libgcc -static-libstdc++ -fPIC -lpython3.9 mp_app.c -o mp_app.pex
</code></pre>
<p>PS: I've also edited the source code example.</p>
|
<python><python-3.x><multiprocessing><cython><python-multiprocessing>
|
2024-07-30 22:17:13
| 1
| 911
|
soumeng78
|
78,813,619
| 1,390,639
|
regex to interpret awkward scientific notation
|
<p>Ok, so I'm working with this <code>ENDF</code> data, see <a href="https://www.nndc.bnl.gov/endf-b8.0/download.html" rel="nofollow noreferrer">here</a>. Sometimes the files use what is quite possibly the most annoying encoding of scientific-notation floating-point numbers I have ever seen<sup>1</sup>. Instead of <code>1.234e-3</code> they often write something like <code>1.234-3</code> (omitting the "e").</p>
<p>Now I've seen a library that simply changes <code>-</code> into <code>e-</code> or <code>+</code> into <code>e+</code> by a simple substitution. But that doesn't work when some of the numbers can be negative. You end up getting some nonsense like <code>e-5.122e-5</code> when the input was <code>-5.122-5</code>.</p>
<p>So, I guess I need to move on to regex? I'm open to another solution that's simpler, but it's the best I can think of right now. I am using the <code>re</code> Python library. I can do a simple substitution where I look for <code>[0-9]-[0-9]</code> and replace it like this:</p>
<pre><code>import re
str1='-5.634-5'
x = re.sub('[0-9]-[0-9]','4e-5',str1)
print(x)
</code></pre>
<p>But obviously this won't work in general, because the digits before and after the <code>-</code> need to stay what they were, not something I made up. I've used capturing groups before, but what would be the fastest way in this context to capture the digits before and after the <code>-</code> and feed them back into the substitution with Python's <code>re</code> library?</p>
<p>1 Yes, I know, fortran...80 characters...save space...punch cards...nobody cares anymore.</p>
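<p>For reference, a capturing-group substitution along these lines handles the leading sign, because requiring a digit immediately before the <code>+</code>/<code>-</code> distinguishes an exponent sign from a mantissa sign (a sketch, not tuned for speed):</p>

```python
import re

def fix_exponent(s: str) -> str:
    # A '+' or '-' that directly follows a digit can only start the exponent,
    # so reinsert the missing 'e' there; a leading sign on the mantissa has
    # no digit before it and is left untouched.
    return re.sub(r'(\d)([+-]\d)', r'\1e\2', s)

print(fix_exponent('-5.634-5'))  # → -5.634e-5
print(fix_exponent('1.234+3'))   # → 1.234e+3
```

<p>Numbers already written with an explicit <code>e</code> or <code>E</code> are untouched, since the sign there is preceded by a letter rather than a digit.</p>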
|
<python><regex><scientific-notation>
|
2024-07-30 20:46:07
| 4
| 1,259
|
villaa
|
78,813,476
| 17,174,267
|
python function: argument after argument with asterisk
|
<p>I was wondering why defining the following function is considered fine.</p>
<pre><code>def foo(*x, y):
    pass
</code></pre>
<p>As far as I know there is no way of calling this function, since it's always missing the value for y. (Please correct me if I'm wrong.)</p>
<p>Does this have any utility I'm not aware of?</p>
|
<python>
|
2024-07-30 19:57:03
| 0
| 431
|
pqzpkaot
|
78,813,418
| 21,905,651
|
How do I upload an image from a URL to Discord to be used for an embed?
|
<p>I have a Discord embed for Instagram posts but the <a href="https://scontent-sjc3-1.cdninstagram.com/v/t51.29350-15/449322999_988059366331988_1840488979349194447_n.webp?stp=dst-jpg_e35_s1080x1080&_nc_ht=scontent-sjc3-1.cdninstagram.com&_nc_cat=108&_nc_ohc=Ip7b_s9PzD0Q7kNvgG-5F6W&edm=AOQ1c0wBAAAA&ccb=7-5&ig_cache_key=MzQwMDI5MjkzOTQxODAzNzkwMQ%3D%3D.2-ccb7-5&oh=00_AYDIS-rl8iS6vSWTxTvE5jcMXNT5RFTNNHhnkxYknPlZZg&oe=66A719ED&_nc_sid=8b3546" rel="nofollow noreferrer">image in the embed</a> breaks after around a week with it saying:</p>
<blockquote>
<p>URL signature expired</p>
</blockquote>
<p><a href="https://i.sstatic.net/DaeYCuN4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DaeYCuN4.png" alt="Screenshot of broken embed" /></a></p>
<p>The embed uses a link to the Instagram post's media. This link expires. I want to upload the media to Discord so it doesn't expire.</p>
<p>Here is the code I currently have:</p>
<pre class="lang-py prettyprint-override"><code>import requests
from instaloader import Instaloader, LoginRequiredException, Post, Profile
from requests.exceptions import HTTPError


def create_webhook_json(post: Post):
    webhook_json = {
        "embeds": [
            {
                "title": post.owner_profile.full_name,
                "url": "https://instagram.com/p/" + post.shortcode,
                "image": {"url": post.url},
            }
        ],
        "attachments": [],
    }
    return webhook_json


def send_to_discord(post: Post):
    payload = create_webhook_json(post)
    try:
        print("Sending post to Discord")
        r = requests.post(discord_webhook_url, json=payload)
        r.raise_for_status()
    except HTTPError as http_error:
        print(f"HTTP error occurred: {http_error}")
    else:
        print("New post sent to Discord successfully.")


send_to_discord(new_post)
</code></pre>
<p>It just creates a JSON for a webhook and then sends that JSON webhook. Note that I'm using <a href="https://instaloader.github.io/" rel="nofollow noreferrer">Instaloader</a>.</p>
<p>I've tried searching online for ways to upload an image to Discord in Python but every result either is super complicated or uses a third party library which I've been trying to avoid, since I would need to refactor a lot of my code. On <a href="https://discord.com/developers/docs/resources/webhook#execute-webhook" rel="nofollow noreferrer">Discord's docs about executing a webhook</a> it mentions about uploading files for attachments but I don't want an attachment.</p>
<p>I want to be able to not have to use a third-party Discord library to be able to do this.</p>
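<p>For what it's worth, the webhook endpoint linked above accepts <code>multipart/form-data</code> with the JSON in a <code>payload_json</code> field and file bytes in <code>files[n]</code> parts, and an embed can then reference the upload as <code>attachment://&lt;filename&gt;</code>. A sketch using <code>requests</code> (already a dependency here); <code>webhook_url</code>, <code>image_url</code>, and the field values are placeholders:</p>

```python
import json

def build_webhook_payload(full_name: str, shortcode: str, filename: str) -> dict:
    # The embed points at the uploaded attachment instead of the expiring CDN URL
    return {
        "embeds": [
            {
                "title": full_name,
                "url": "https://instagram.com/p/" + shortcode,
                "image": {"url": "attachment://" + filename},
            }
        ],
    }

def send_with_attachment(webhook_url: str, image_url: str, payload: dict,
                         filename: str = "post.jpg") -> None:
    import requests  # already used by the code above
    image_bytes = requests.get(image_url, timeout=30).content
    # multipart/form-data: the embed JSON goes in 'payload_json',
    # the image bytes go in a 'files[0]' part
    resp = requests.post(
        webhook_url,
        data={"payload_json": json.dumps(payload)},
        files={"files[0]": (filename, image_bytes)},
    )
    resp.raise_for_status()
```

<p>Since Discord then hosts the bytes itself, the embed no longer depends on Instagram's signed URL staying valid.</p>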
|
<python><python-requests><discord><webhooks>
|
2024-07-30 19:37:16
| 1
| 635
|
Ryan Luu
|
78,813,291
| 20,591,261
|
Reload custom python function
|
<p>I'm trying to reload a custom function from a Python script because I'm editing the function constantly and I don't want to restart the kernel each time I make a change to the file. I have read that I can use <code>importlib</code>.</p>
<p>However, when I try to use reload I get <code>ImportError: module Classifier not in sys.modules</code></p>
<p>My code:</p>
<pre><code>from importlib import reload
from FilterFile import Classifier as foo
reload(foo)
</code></pre>
<p>I tried just using <code>reload(Classifier)</code> and <code>reload(FilterFile import Classifier)</code>, but I'm getting the same error.</p>
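<p>Note that <code>importlib.reload()</code> operates on module objects, not on classes imported from them, which is what the error message points at. A self-contained sketch, using a throwaway <code>filterdemo</code> module written to a temp directory as a stand-in for <code>FilterFile</code>:</p>

```python
import importlib
import pathlib
import sys
import tempfile

# Create a throwaway module on disk (stands in for FilterFile)
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "filterdemo.py").write_text("VERSION = 1\n")
sys.path.insert(0, tmp)

import filterdemo
first = filterdemo.VERSION

# Simulate editing the file, then reload the MODULE object, not the class
pathlib.Path(tmp, "filterdemo.py").write_text("VERSION = 2  # edited\n")
importlib.reload(filterdemo)
second = filterdemo.VERSION
print(first, second)  # → 1 2
```

<p>Applied to the question's names, that would mean <code>import FilterFile</code>, then <code>importlib.reload(FilterFile)</code>, then re-binding <code>Classifier = FilterFile.Classifier</code> after each reload.</p>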
|
<python><python-importlib>
|
2024-07-30 18:54:14
| 1
| 1,195
|
Simon
|
78,813,152
| 8,436,290
|
Ask multiple queries in parallel with Langchain Python
|
<p>I have managed to ask one query using langchain like this:</p>
<pre><code>template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Answer questions using the information in this document and be precise.
Provide only one number if asked for it.
{context}
Question: {question}
Helpful Answer:"""
custom_rag_prompt = PromptTemplate.from_template(template)
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | custom_rag_prompt
    | llm
    | StrOutputParser()
)
rag_chain.invoke(query)
</code></pre>
<p>But now I would like to have something like</p>
<pre><code>queries = ["tell me this?", "tell me that?"]
</code></pre>
<p>And ask those questions in parallel. Any idea how I could do this, please?</p>
|
<python><large-language-model><py-langchain>
|
2024-07-30 18:11:37
| 0
| 467
|
Nicolas REY
|
78,813,033
| 2,941,921
|
Error calculating shifted delta cepstrum for one image
|
<p>I am trying to compute the shifted delta cepstrum and am running the code for one image, and I'm getting an error on the highlighted line. I have also attached a screenshot of the error.</p>
<pre><code>import numpy as np
import cv2
from scipy.fftpack import dct, idct


def compute_cepstrum(image):
    # Compute the 2D DCT (Discrete Cosine Transform)
    dct_image = dct(dct(image.T, norm='ortho').T, norm='ortho')
    # Compute the log magnitude
    log_magnitude = np.log(np.abs(dct_image) + 1e-8)
    # Compute the inverse DCT to get the cepstrum
    cepstrum = idct(idct(log_magnitude.T, norm='ortho').T, norm='ortho')
    return cepstrum


def compute_delta_cepstrum(cepstrum, delta=1):
    rows, cols = cepstrum.shape
    delta_cepstrum = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            if i - delta >= 0:
                delta_cepstrum[i, j] = cepstrum[i, j] - cepstrum[i - delta, j]
            if j - delta >= 0:
                delta_cepstrum[i, j] = cepstrum[i, j] - cepstrum[i, j - delta]  # <-- error on this line
    return delta_cepstrum


def compute_shifted_delta_cepstrum(cepstrum, window_size=3, delta=1):
    rows, cols = cepstrum.shape
    shifted_delta_cepstrum = np.zeros((rows, cols, window_size * 2 + 1))
    for shift in range(-window_size, window_size + 1):
        shifted_delta_cepstrum[:, :, shift + window_size] = compute_delta_cepstrum(cepstrum, delta + shift)
    return shifted_delta_cepstrum


# Load and preprocess the image
image_path = '/content/drive/My Drive/RMC2/Full Clean Run/sample.jpg'  # Replace with your image path
image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
image = image.astype(np.float32) / 255.0

# Compute cepstrum
cepstrum = compute_cepstrum(image)

# Compute shifted delta cepstrum
shifted_delta_cepstrum = compute_shifted_delta_cepstrum(cepstrum)

# Save or process the shifted delta cepstrum as needed
print("Shifted Delta Cepstrum calculation completed.")
</code></pre>
<p>I'm getting the error below:</p>
<pre><code>IndexError Traceback (most recent call last)
<ipython-input-4-f742ff8a661b> in <cell line: 45>()
43
44 # Compute shifted delta cepstrum
---> 45 shifted_delta_cepstrum = compute_shifted_delta_cepstrum(cepstrum)
46
47 # Save or process the shifted delta cepstrum as needed
1 frames
<ipython-input-4-f742ff8a661b> in compute_delta_cepstrum(cepstrum, delta)
21 delta_cepstrum[i, j] = cepstrum[i, j] - cepstrum[i - delta, j]
22 if j - delta >= 0:
---> 23 delta_cepstrum[i, j] = cepstrum[i, j] - cepstrum[i, j - delta]
24
25 return delta_cepstrum
IndexError: index 1740 is out of bounds for axis 1 with size 1740
</code></pre>
<p><a href="https://i.sstatic.net/65MghKOB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65MghKOB.png" alt="enter image description here" /></a></p>
|
<python><numpy>
|
2024-07-30 17:36:10
| 0
| 301
|
Qasim0787
|
78,813,024
| 5,458,723
|
Fast(est) way to process an expanding linear sequence in Python
|
<p>I have the following conditions:</p>
<ul>
<li>The number u(0) = 1 is the first one in u.</li>
<li>For each x in u, then y = 2 * x + 1 and z = 3 * x + 1 must be in u also.</li>
<li>There are no other numbers in u.</li>
<li>No duplicates should be present.</li>
<li>The numbers must be in ascending sequential order</li>
</ul>
<p>Ex: u = [1, 3, 4, 7, 9, 10, 13, 15, 19, 21, 22, 27, ...]</p>
<p>The program wants me to give the value of a member at a given index.</p>
<p>I've already found ways to solve this using insort and I hacked together a minimal binary search tree, as well. Unfortunately, this process needs to be faster than what I have and I'm at a loss as to what to do next. I thought the BST would do it, but it is not fast enough.</p>
<p>Here's my BST code:</p>
<pre><code>class BSTNode:
    def __init__(self, val=None):
        self.left = None
        self.right = None
        self.val = val

    def insert(self, val):
        if not self.val:
            self.val = val
            return
        if self.val == val:
            return
        if val < self.val:
            if self.left:
                self.left.insert(val)
                return
            self.left = BSTNode(val)
            return
        if self.right:
            self.right.insert(val)
            return
        self.right = BSTNode(val)

    def inorder(self, vals):
        if self.left is not None:
            self.left.inorder(vals)
        if self.val is not None:
            vals.append(self.val)
        if self.right is not None:
            self.right.inorder(vals)
        return vals
</code></pre>
<p>and here's my function:</p>
<pre><code>from sys import setrecursionlimit


def twotimeslinear(n):
    # setrecursionlimit(2000)
    i = 0
    u = [1]
    ended = False
    bst = BSTNode()
    bst.insert(1)
    while i < n and not ended:
        for j in range(2, 4):
            k = 1
            cur = j * bst.inorder([])[i] + 1
            bst.insert(cur)
            if len(u) == n:
                ended = True
                break
        i += 1
    return bst.inorder([])[n]
</code></pre>
<p>I just need directions as to what I could do to make the process faster. I can solve the problem if I only knew what I was missing. I'm probably overlooking some data structure that would work better, but I don't even know what to look for. Thank you for any help.</p>
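<p>One direction that may help (a sketch, under the assumption that only the nth member is needed): because both maps 2x+1 and 3x+1 are increasing, the list can be grown with a two-pointer merge over itself, in the style of the classic Hamming-numbers technique, with no tree, no sorting, and no repeated traversals:</p>

```python
def nth_member(n: int) -> int:
    u = [1]
    i = j = 0  # read positions feeding the 2x+1 and 3x+1 candidate streams
    while len(u) <= n:
        y, z = 2 * u[i] + 1, 3 * u[j] + 1
        m = min(y, z)
        u.append(m)
        # Advance whichever stream produced the minimum; advancing both
        # when y == z drops the duplicate (e.g. 2*15+1 == 3*10+1 == 31).
        if y == m:
            i += 1
        if z == m:
            j += 1
    return u[n]

print(nth_member(11))  # → 27
```

<p>Each member is produced in O(1), so reaching index n costs O(n) total, versus the repeated in-order traversals of the BST version.</p>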
|
<python><performance><data-structures><containers><linear-algebra>
|
2024-07-30 17:33:12
| 1
| 365
|
ep84
|
78,812,985
| 8,605,685
|
Ruff pandas-use-of-dot-at (PD008) - convert to numpy array for performance
|
<p>The <code>ruff</code> linting code <a href="https://docs.astral.sh/ruff/rules/pandas-use-of-dot-at/" rel="nofollow noreferrer"><code>pandas-use-of-dot-at (PD008)</code></a> states:</p>
<blockquote>
<p>If performance is an important consideration, convert the object to a NumPy array, which will provide a much greater performance boost than using <code>.at</code> over <code>.loc.</code></p>
</blockquote>
<p>What does this mean, and why is it faster? How can I apply it to their <a href="https://docs.astral.sh/ruff/rules/pandas-use-of-dot-at/#example" rel="nofollow noreferrer">example</a> of selecting <code>"Maria"</code> from <code>students_df</code>?</p>
<pre><code>import pandas as pd
students_df = pd.read_csv("students.csv")
students_df.at["Maria"]
</code></pre>
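<p>As an illustration of what the rule means (note the linked example is abbreviated: <code>.at</code> normally takes both a row and a column label, and the frame below is a made-up stand-in for <code>students.csv</code>):</p>

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for students.csv, indexed by name
students_df = pd.DataFrame({"age": [21, 22, 23]},
                           index=["Ana", "Maria", "Joe"])

# .at goes through pandas' label-based indexing machinery on every call
age_slow = students_df.at["Maria", "age"]

# Converting the column to a NumPy array once makes repeated lookups
# plain positional array accesses with far less per-call overhead
ages = students_df["age"].to_numpy()
age_fast = ages[students_df.index.get_loc("Maria")]

print(age_slow, age_fast)  # → 22 22
```

<p>The payoff only shows up when many lookups are done against the same data, since the conversion itself is paid once up front.</p>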
|
<python><pandas><numpy><ruff>
|
2024-07-30 17:21:02
| 1
| 12,587
|
Salvatore
|
78,812,936
| 5,790,653
|
Writing json to sqlite3 but with a condition to remove a person from sqlite3 if person is not in json file
|
<p>I have a json file (that's dynamic and is generated daily) which is like this for yesterday:</p>
<pre class="lang-json prettyprint-override"><code>[
  {"name": "saeed1", "age": 29},
  {"name": "saeed2", "age": 30},
  {"name": "saeed3", "age": 31}
]
</code></pre>
<p>And this is my python code to include them in a sqlite file:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import json
import sqlite3

with open('my_file.json', 'r') as file:
    names = json.loads(file.read())

filename = "names.sqlite3"
table_name = 'names'

sqlite3_connection = sqlite3.connect(filename)

sqlite3_connection.execute(
    f'''
    CREATE TABLE IF NOT EXISTS {table_name}
    (name TEXT,
     date TEXT,
     sent_status TEXT,
     PRIMARY KEY(name)
    );
    '''
)

for name in names:
    sqlite3_connection.execute(
        f"""
        INSERT OR IGNORE INTO {table_name} (name, date, sent_status)
        VALUES (
            '{name['name']}',
            '{(datetime.datetime.now()).strftime('%Y-%m-%d')}',
            'not_yet'
        )
        """
    )

sqlite3_connection.commit()
sqlite3_connection.close()
</code></pre>
<p>Now suppose this is my json file for today:</p>
<pre class="lang-json prettyprint-override"><code>[
  {"name": "saeed2", "age": 30},
  {"name": "saeed3", "age": 31},
  {"name": "saeed4", "age": 32}
]
</code></pre>
<p>The difference between the first and the second json file is that <code>saeed1</code> is removed in the second json output and <code>saeed4</code> is new.</p>
<p>With the code I have, new names are inserted in the database and because of <code>INSERT OR IGNORE INTO</code>, it doesn't write duplicate exact names.</p>
<p>The question based on the above is: How can I remove <code>saeed1</code> when I run my python code for today's json file?</p>
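<p>One possibility (sketched against an in-memory database with made-up rows): after upserting today's names, issue a parameterized <code>DELETE … WHERE name NOT IN (…)</code> for every stored name absent from today's JSON. Placeholders also avoid the SQL-injection risk of the f-string interpolation used above:</p>

```python
import sqlite3

# Today's JSON payload (sample data)
names = [{"name": "saeed2"}, {"name": "saeed3"}, {"name": "saeed4"}]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE names (name TEXT, date TEXT, sent_status TEXT, PRIMARY KEY(name))"
)
# Yesterday's leftover row
conn.execute("INSERT INTO names VALUES ('saeed1', '2024-07-29', 'not_yet')")

# Upsert today's names with placeholders instead of f-string interpolation
conn.executemany(
    "INSERT OR IGNORE INTO names (name, date, sent_status) VALUES (?, ?, 'not_yet')",
    [(n["name"], "2024-07-30") for n in names],
)

# Remove every stored name that is absent from today's JSON
qmarks = ",".join("?" for _ in names)
conn.execute(f"DELETE FROM names WHERE name NOT IN ({qmarks})",
             [n["name"] for n in names])
conn.commit()

print(sorted(r[0] for r in conn.execute("SELECT name FROM names")))
# → ['saeed2', 'saeed3', 'saeed4']
```

<p>Run against the two sample files, this keeps <code>saeed2</code>-<code>saeed4</code> and drops <code>saeed1</code> in a single statement.</p>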
|
<python><sqlite>
|
2024-07-30 17:08:10
| 1
| 4,175
|
Saeed
|
78,812,763
| 19,130,803
|
unable to install kaleido
|
<p>I am working on a dash app. I have a plotly graph which I am trying to save as an image. On running, I got the first error:</p>
<pre><code>Image export using the "kaleido" engine requires the kaleido package,
which can be installed using pip:
$ pip install -U kaleido
</code></pre>
<p>While trying to install it, I am getting this error:</p>
<pre><code>ubuntu@ubuntu:~/demo_project$ poetry add kaleido
Using version ^0.2.1.post1 for kaleido
Updating dependencies
Resolving dependencies... (0.1s)
Package operations: 1 install, 0 updates, 0 removals
- Installing kaleido (0.2.1.post1): Failed
RuntimeError
Unable to find installation candidates for kaleido (0.2.1.post1)
at ~/.local/share/pypoetry/venv/lib/python3.12/site-packages/poetry/installation/chooser.py:74 in choose_for
70│
71│ links.append(link)
72│
73│ if not links:
→ 74│ raise RuntimeError(f"Unable to find installation candidates for {package}")
75│
76│ # Get the best link
77│ chosen = max(links, key=lambda link: self._sort_key(package, link))
78│
Cannot install kaleido.
</code></pre>
<p>Please help.</p>
|
<python><plotly-dash>
|
2024-07-30 16:13:14
| 0
| 962
|
winter
|
78,812,608
| 20,591,261
|
Use regex function on Polars
|
<p>I'm cleaning a column of Spanish text using the following function that uses <code>re</code> and <code>unicodedata</code>:</p>
<pre><code>import re
import unicodedata


def CleanText(texto: str) -> str:
    texto = texto.lower()
    texto = ''.join(c for c in unicodedata.normalize('NFD', texto) if unicodedata.category(c) != 'Mn')
    texto = re.sub(r'[^a-z0-9 \n\.,]', '', texto)
    texto = re.sub(r'([.,])(?![\s])', r'\1 ', texto)
    texto = re.sub(r'\s+', ' ', texto).strip()
    texto = texto.replace('.', '')
    texto = texto.replace(',', '')
    return texto
</code></pre>
<p>And then I apply it to my <code>DataFrame</code> using:</p>
<pre><code>(
    df
    .with_columns(
        pl.col("Comment").map_elements(CleanText, return_dtype=pl.String).alias("CleanedText")
    )
)
</code></pre>
<p>However, since polars accepts the <code>regex</code> crate, I think I could just use polars to do the cleaning without needing to create auxiliary functions.</p>
<p>How could I use a polars expression to do the same?</p>
|
<python><regex><python-polars>
|
2024-07-30 15:40:00
| 1
| 1,195
|
Simon
|
78,812,350
| 2,050,158
|
How to plot a mean line on kdeplot for each variable
|
<p>I would like to visualize the mean and percentiles for each variable plotted on kdeplot</p>
<p>The code provided here "<a href="https://stackoverflow.com/questions/63307440/how-to-plot-a-mean-line-on-a-kdeplot-between-0-and-the-y-value-of-the-mean">kdeplot showing mean and quartiles</a>" does draw the mean and percentiles on the plot, but I would like to do so for a plot that has several variables such as the one displayed by the code below.</p>
<pre><code>sns.kdeplot(data=penguins, x="flipper_length_mm", hue="species", multiple="stack");
</code></pre>
<p>In other words, is there a way to obtain the transformed flipper_length_mm data used in generating the plot for each of the 3 species?</p>
|
<python><matplotlib><seaborn>
|
2024-07-30 14:36:38
| 2
| 503
|
Allan K
|
78,812,288
| 2,571,805
|
Cython dynamic array not converting to Python
|
<p>I'm working on:</p>
<ul>
<li>Ubuntu 24.04</li>
<li>Python 3.12</li>
<li>Cython 3.0</li>
<li>cc 13.2.</li>
</ul>
<p>I have a C module that implements an arbitrary tree and a few constructor functions:</p>
<pre class="lang-c prettyprint-override"><code>typedef unsigned char byte;
typedef unsigned long ulong;
// ; #region Tree
typedef struct _file {
ulong length;
ulong offset;
byte *data;
} file;
typedef struct _node {
ulong length;
byte *value;
ulong childrenCount;
struct _node *children;
} node;
// ; #endregion Tree
// ; #region Functions
file read_file(char *file_name);
node process_file(char *file_name);
// ; #endregion Functions
</code></pre>
<p>All I have from the C side is a header file and a <code>.so</code> library. The tree is implemented via the <code>children</code> field, which is subject to a <code>malloc</code>. This all works perfectly well on C alone, with no leaks, no <code>SIGSEGV</code>, no fuss.</p>
<p>I'm trying to connect this to Python via Cython. I have the following files:</p>
<pre class="lang-py prettyprint-override"><code># cfile.pxd
cdef extern from "file.h":
ctypedef unsigned char byte;
ctypedef unsigned short ushort;
ctypedef unsigned long ulong;
ctypedef struct file:
ulong length
ulong offset
byte* data
ctypedef struct node:
ulong length
byte* value
ulong childrenCount
node* children
file read_file(char *file_name)
node process_file(char *file_name);
</code></pre>
<p>and:</p>
<pre class="lang-py prettyprint-override"><code># pyfile.pyx
cimport cfile
from libc.stdlib cimport malloc, free
from libc.string cimport memcpy
cdef class File:
cdef cfile.file c_struct
def __cinit__(self, unsigned long length = 0, unsigned long offset = 0, bytes data = None):
self.c_struct.length = length
self.c_struct.offset = offset
self.c_struct.data = <unsigned char *> malloc(length)
memcpy(self.c_struct.data, <unsigned char *> data, length)
@property
def length(self):
return self.c_struct.length
@property
def offset(self):
return self.c_struct.offset
@property
def data(self):
return list(self.c_struct.data[:self.c_struct.length])
cdef class Node:
cdef cfile.node c_struct
def __cinit__(self, unsigned long length, bytearray value, unsigned long childrenCount):
self.c_struct.length = length
self.c_struct.value = <unsigned char *> malloc(length)
memcpy(self.c_struct.value, <unsigned char *> value, length)
self.c_struct.childrenCount = childrenCount
self.c_struct.children = NULL
@property
def length(self):
return self.c_struct.length
@property
def value(self):
return bytearray(self.c_struct.value[:self.c_struct.length])
@property
def childrenCount(self):
return self.c_struct.childrenCount
@property
def children(self):
return [Node.from_c_struct(self.c_struct.children[i]) for i in range(self.c_struct.childrenCount)]
@property
def from_c_struct(self,):
cdef Node node = self.__new__(self)
node.c_struct = self.c_struct
return node
def read_file(str file_name):
cdef cfile.file c_data
c_data = cfile.read_file(file_name.encode('utf-8'))
file = File.__new__(File)
file._c_struct = c_data
return file
def process_file(str file_name):
cdef cfile.node c_node
c_node = cfile.process_file(file_name.encode('utf-8'))
return Node.from_c_struct(c_node)
</code></pre>
<p>Finally, I am using the following setup:</p>
<pre class="lang-py prettyprint-override"><code># setup.py
from setuptools import setup
from Cython.Build import cythonize
from setuptools.extension import Extension
extensions = [
Extension(
name="file",
sources=["file.pyx"],
include_dirs=["."],
libraries=["file"],
library_dirs=["."],
runtime_library_dirs=["."],
extra_objects=["libfile.so"]
)
]
setup(
name="file",
ext_modules=cythonize(extensions),
)
</code></pre>
<p>and running <code>python setup.py build_ext --inplace</code>.</p>
<h1>ERRORS</h1>
<p>I'm getting these:</p>
<pre><code> @property
def children(self):
return [Node.from_c_struct(self.c_struct.children[i]) for i in range(self.c_struct.childrenCount)]
^
------------------------------------------------------------
file.pyx:53:57: Cannot convert 'node' to Python object
</code></pre>
<p>and</p>
<pre><code>def process_file(str file_name):
cdef cfile.node c_node
c_node = cfile.process_file(file_name.encode('utf-8'))
return Node.from_c_struct(c_node)
^
------------------------------------------------------------
file.pyx:72:30: Cannot convert 'node' to Python object
</code></pre>
<p>I believe this is related to the <code>children</code> field, as removing it and all connected logic seems to work.</p>
<h1>Question</h1>
<p>Any ideas on how to make this work directly with Cython?</p>
<p>I'm looking for a solution that avoids third-party packages. I found many solutions based on <code>cpython</code> imports or <code>numpy</code>, or even both; I would like to avoid any of that, if possible.</p>
<p>I'm also looking for a solution that does not involve compiling the C code together with the Python code. In order to keep concerns separated, all I want from the C side is the <code>.so</code>.</p>
|
<python><c><malloc><cython>
|
2024-07-30 14:23:51
| 0
| 869
|
Ricardo
|
78,812,281
| 301,302
|
How to properly type PyMongo?
|
<p>I'm trying to use PyMongo in a type safe(ish) way in VS Code. I've tried following the <a href="https://pymongo.readthedocs.io/en/stable/examples/type_hints.html" rel="nofollow noreferrer">type hints</a> section in the PyMongo documentation and arrived at the following (<code>app</code> is a Flask instance):</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, TypedDict
from pymongo import MongoClient
from app import app
from pymongo.collection import Collection
class Movie(TypedDict):
title: str
genres: list[str]
def get_db():
conn_string = str(app.config['MONGO_URI'])
return MongoClient[dict[str, Any]](conn_string)
@app.route('/test')
def models():
client = get_db()
db = client['sample_mflix']
coll: Collection[Movie] = cast(Collection[Movie], db['movies'])
genres = list(coll.aggregate([
{"$unwind": "$genres"},
{"$group": {"_id": None, "genres": {"$addToSet": "$genres"}}}
]))[0]['genres']
return make_response(jsonify(genres), 200)
</code></pre>
<p>I realize that the error stems from the fact that I've typed <code>db</code> to be <code>dict[str, Any]</code>, so the type of the collection is <code>Collection[dict[str, Any]]</code> requiring a cast to the type (<code>Movie</code>). I could get around this if typed the client as <code>MongoClient[Movie]</code>, but presumably, I will want to interact with collections other than the <code>movies</code> collection.</p>
<p>Should I make a client per collection? Live with the cast? Some other pattern I'm missing?</p>
|
<python><mongodb><pymongo>
|
2024-07-30 14:22:50
| 1
| 2,958
|
naivedeveloper
|
78,812,242
| 12,522,881
|
How to group rows and sum costs where revenue is zero
|
<p>I'm trying to group by all the columns I have (except the date column), keep only the rows where <strong>Revenue</strong> is non-zero, and sum the <strong>Cost</strong> values into them.</p>
<p>For example, the first three rows will be summed with the fourth row, since their <strong>Revenue</strong> is zero. The three rows will then be dropped, and the summed <strong>Cost</strong> values will be grouped into the fourth row.</p>
<p><strong>Original data shape</strong></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>date</th>
<th>dim_channel_group</th>
<th>dim_channel</th>
<th>dim_hotel_name</th>
<th>dim_market</th>
<th>Cost</th>
<th>Revenue</th>
</tr>
</thead>
<tbody>
<tr>
<td>10/10/2023</td>
<td>Social</td>
<td>Snapchat</td>
<td>Hotel A</td>
<td>KSA</td>
<td>28.5</td>
<td>0</td>
</tr>
<tr>
<td>11/10/2023</td>
<td>Social</td>
<td>Snapchat</td>
<td>Hotel A</td>
<td>KSA</td>
<td>108</td>
<td>0</td>
</tr>
<tr>
<td>12/10/2023</td>
<td>Social</td>
<td>Snapchat</td>
<td>Hotel A</td>
<td>KSA</td>
<td>126.7</td>
<td>0</td>
</tr>
<tr>
<td>13/10/2023</td>
<td>Social</td>
<td>Snapchat</td>
<td>Hotel A</td>
<td>KSA</td>
<td>156.1</td>
<td>8195.8</td>
</tr>
<tr>
<td>14/10/2023</td>
<td>Social</td>
<td>Snapchat</td>
<td>Hotel A</td>
<td>KSA</td>
<td>120.9</td>
<td>0</td>
</tr>
<tr>
<td>15/10/2023</td>
<td>Social</td>
<td>Snapchat</td>
<td>Hotel A</td>
<td>KSA</td>
<td>115.4</td>
<td>0</td>
</tr>
<tr>
<td>16/10/2023</td>
<td>Social</td>
<td>Snapchat</td>
<td>Hotel A</td>
<td>KSA</td>
<td>114.6</td>
<td>2352.4</td>
</tr>
<tr>
<td>10/01/2023</td>
<td>Search</td>
<td>Google</td>
<td>Hotel B</td>
<td>Australia</td>
<td>34.1</td>
<td>0</td>
</tr>
<tr>
<td>11/01/2023</td>
<td>Search</td>
<td>Google</td>
<td>Hotel B</td>
<td>Australia</td>
<td>42.7</td>
<td>0</td>
</tr>
<tr>
<td>12/01/2023</td>
<td>Search</td>
<td>Google</td>
<td>Hotel B</td>
<td>Australia</td>
<td>93.6</td>
<td>0</td>
</tr>
<tr>
<td>13/01/2023</td>
<td>Search</td>
<td>Google</td>
<td>Hotel B</td>
<td>Australia</td>
<td>29.3</td>
<td>3500</td>
</tr>
<tr>
<td>14/01/2023</td>
<td>Search</td>
<td>Google</td>
<td>Hotel B</td>
<td>Australia</td>
<td>44.8</td>
<td>0</td>
</tr>
<tr>
<td>15/01/2023</td>
<td>Search</td>
<td>Google</td>
<td>Hotel B</td>
<td>Australia</td>
<td>88.6</td>
<td>0</td>
</tr>
<tr>
<td>16/01/2023</td>
<td>Search</td>
<td>Google</td>
<td>Hotel B</td>
<td>Australia</td>
<td>35.5</td>
<td>0</td>
</tr>
<tr>
<td>17/01/2023</td>
<td>Search</td>
<td>Google</td>
<td>Hotel B</td>
<td>Australia</td>
<td>81.9</td>
<td>7500</td>
</tr>
</tbody>
</table></div>
<p><strong>Desired data shape</strong></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>date</th>
<th>dim_channel_group</th>
<th>dim_channel</th>
<th>dim_hotel_name</th>
<th>dim_market</th>
<th>Cost</th>
<th>Revenue</th>
</tr>
</thead>
<tbody>
<tr>
<td>13/10/2023</td>
<td>Social</td>
<td>Snapchat</td>
<td>Hotel A</td>
<td>KSA</td>
<td>419.3</td>
<td>8195.8</td>
</tr>
<tr>
<td>16/10/2023</td>
<td>Social</td>
<td>Snapchat</td>
<td>Hotel A</td>
<td>KSA</td>
<td>350.9</td>
<td>2352.4</td>
</tr>
<tr>
<td>13/01/2023</td>
<td>Search</td>
<td>Google</td>
<td>Hotel B</td>
<td>Australia</td>
<td>199.7</td>
<td>3500</td>
</tr>
<tr>
<td>17/01/2023</td>
<td>Search</td>
<td>Google</td>
<td>Hotel B</td>
<td>Australia</td>
<td>250.8</td>
<td>7500</td>
</tr>
</tbody>
</table></div>
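<p>For what it's worth, the closest I have gotten is a bottom-up cumulative-count trick, shown here on a toy version of the data (values made up to mirror the first group above); I'm not sure this is the idiomatic way:</p>

```python
import pandas as pd

# Toy data mirroring the first group above
df = pd.DataFrame({
    "date": ["10/10/2023", "11/10/2023", "12/10/2023", "13/10/2023"],
    "dim_channel": ["Snapchat"] * 4,
    "Cost": [28.5, 108.0, 126.7, 156.1],
    "Revenue": [0.0, 0.0, 0.0, 8195.8],
})

dims = ["dim_channel"]
# Counting non-zero revenues from the bottom up gives every zero-revenue
# row the same group id as the next non-zero row below it.
grp = df["Revenue"].ne(0)[::-1].cumsum()[::-1].rename("grp")

out = (
    df.groupby(dims + [grp])
      .agg({"date": "last", "Cost": "sum", "Revenue": "last"})
      .reset_index()
)
out = out[out["Revenue"].ne(0)].drop(columns="grp")  # drop trailing zero-only runs
```
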
|
<python><pandas>
|
2024-07-30 14:16:35
| 2
| 473
|
Ibrahim Ayoup
|
78,812,183
| 6,061,080
|
Split strings in a Series, convert to array and average the values
|
<p>I have a Pandas Series that has these unique values:</p>
<pre><code>array(['17', '19', '21', '20', '22', '23', '12', '13', '15', '24', '25',
'18', '16', '14', '26', '11', '10', '12/16', '27', '10/14',
'16/22', '16/21', '13/17', '14/19', '11/15', '10/15', '15/21',
'13/19', '13/18', '32', '28', '12/15', '29', '42', '30', '31',
'34', '46', '11/14', '18/25', '19/26', '17/24', '19/24', '17/23',
'13/16', '11/16', '15/20', '36', '17/25', '19/25', '17/22',
'18/26', '39', '41', '35', '50', '9/13', '33', '10/13', '9/12',
'93/37', '14/20', '10/16', '14/18', '16/23', '37', '9/11', '37/94',
'20/54', '22/31', '22/30', '23/33', '44', '40', '50/95', '38',
'16/24', '15/23', '15/22', '18/23', '16/20', '37/98', '19/27',
'38/88', '23/31', '14/22', '45', '39/117', '28/76', '33/82',
'15/19', '23/30', '47', '46/115', '14/21', '17/18', '25/50',
'12/18', '12/17', '21/28', '20/27', '26/58', '22/67', '22/47',
'25/51', '35/83', '39/86', '31/72', '24/56', '30/80', '32/85',
'42/106', '40/99', '30/51', '21/43', '52', '56', '25/53', '34/83',
'30/71', '27/64', '35/111', '26/62', '32/84', '39/95', '18/24',
'22/29', '42/97', '48', '55', '58', '39/99', '49', '43', '40/103',
'22/46', '54/133', '25/54', '36/83', '29/72', '28/67', '35/109',
'25/62', '14/17', '42/110', '52/119', '20/60', '46/105', '25/56',
'27/65', '25/74', '21/49', '29/71', '26/59', '27/62'], dtype=object)
</code></pre>
<p>For the ones that have a '/', I want to split them into arrays and then average their values. A simpler but flawed approach is to simply extract the first value:
<code>master_data["Cmb MPG"].str.split('/').str[0].astype('int8')</code></p>
<p>However, what I truly require is the average of the two values.
I have tried several commands, and this one:</p>
<pre><code>np.array(master_data["Cmb MPG"].str.split('/')).astype('int8').mean()
</code></pre>
<p>should ideally do the job, but I get a TypeError that then causes a ValueError:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
TypeError: int() argument must be a string, a bytes-like object or a real number, not 'list'
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
Cell In[88], line 1
----> 1 np.array(master_data["Cmb MPG"].str.split('/')).astype('int8')
ValueError: setting an array element with a sequence.
</code></pre>
<p>The <code>slice()</code> method returns a Series but it won't proceed either with the splitting of strings.</p>
<p>What is required is:</p>
<pre><code>'18/25' ---> [18, 25] ---> 22 (rounded)
</code></pre>
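<p>To make the goal concrete, this element-wise version does what I want (via <code>map</code>, which works but feels clumsy; I'd prefer a vectorised approach):</p>

```python
import pandas as pd

s = pd.Series(["17", "18/25", "10/14"])

avg = s.str.split("/").map(
    # average the integer parts, rounding to the nearest whole number
    lambda parts: round(sum(map(int, parts)) / len(parts))
)
```

<p>Here <code>'18/25'</code> becomes 22, and plain values like <code>'17'</code> pass through unchanged.</p>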
|
<python><pandas>
|
2024-07-30 14:05:51
| 4
| 1,107
|
shiv_90
|
78,811,831
| 7,057,547
|
xlwings keep app open beyond lifetime of pytest script
|
<p>I'd like to test an Excel file using xlwings and pytest. While writing tests, it would be helpful to have an option to keep the xlwings.App instance open with the test file, so that I can inspect what's going on.</p>
<p>I'm using something like this:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
import xlwings as xw
EXCEL_VISIBLE = True # toggle headless excel
EXCEL_KEEPALIVE = True # toggle automatic closing of excel application
XL_TESTFILE = "myfile.xlsm"
@pytest.fixture(scope="module")
def file(request):
# copy to temp directory, set up paths etc.
with xw.App(visible=EXCEL_VISIBLE) as xl_app:
xl_app.books.open(XL_TESTFILE)
xl_app.activate(steal_focus=True)
xl_app.macro('LoadData')()
yield xl_app
# FIXME: do something to optionally keep xl_app open, otherwise quit and unlink the test file
if EXCEL_KEEPALIVE:
pass
else:
pass
def test_simple(file):
    assert file.books[XL_TESTFILE].sheets["mysheet"].range("A1").value == "expected"
</code></pre>
<p>How can I keep the application open after the tests complete, so that I can inspect what's happening? In the docs I can see the kill() and quit() methods, but no way to detach from the test script. Is that possible?</p>
<p>I don't care if keeping the app open results in manual cleanup - this is just something that makes it easier for me to write the tests.</p>
|
<python><windows><xlwings>
|
2024-07-30 12:48:32
| 1
| 351
|
squarespiral
|
78,811,795
| 27,876
|
Python S3 streaming, return stream from function
|
<p>Is it possible to return an open write stream from a function?
I am getting the error:</p>
<blockquote>
<p>ValueError: I/O operation on closed file.</p>
</blockquote>
<pre><code>import boto3
import io
s3 = boto3.client('s3')
# Download file from S3 and upload a copy
def download_s3():
response = s3.get_object(
Bucket="bedrock-data-files",
Key="test/filename.txt"
)
stream = io.BytesIO(response['Body'].read())
return stream
def upload_s3():
returnedStream = io.BytesIO()
s3.upload_fileobj(
returnedStream,
"bedrock-data-files",
"test/copy.txt"
)
return returnedStream
downloadstream = download_s3()
uploadstream = upload_s3()
for chunk in downloadstream.read():
uploadstream.write(chunk)
</code></pre>
<p>This version works, but I need the write stream to be created in a function.</p>
<pre><code>def download_s3():
response = s3.get_object(
Bucket="bedrock-data-files",
Key="test/filename.txt"
)
stream = io.BytesIO(response['Body'].read())
return stream
downloadstream = download_s3()
print(downloadstream) # This line returns <_io.BytesIO object at 0x10679dbc0>
s3.upload_fileobj(
downloadstream,
"bedrock-data-files",
"test/copy.txt"
)
</code></pre>
|
<python><amazon-s3><stream>
|
2024-07-30 12:43:28
| 1
| 818
|
Jon Wilson
|
78,811,761
| 3,540,161
|
Python Scapy not detecting my npcap or winpcap installation
|
<p>I have a python script which imports scapy to do some network packet analysis. The first thing my script reports is "WARNING: No libpcap provider available ! pcap won't be used". Without libpcap or ncap my script will not work, so I need to solve this problem.</p>
<p>The minimal code to show the problem is a python file with this single line in it:</p>
<pre class="lang-py prettyprint-override"><code>from scapy.arch.windows import *
</code></pre>
<p>I am running this on Windows 11, AMD64 architecture, in a Parallels VM. I have installed Python 3.8.0 and npcap 1.79. In a CMD window, I get the following output:</p>
<pre><code>C:\Users\rolf> python scapytest.py
WARNING: No libpcap provider available ! pcap won't be used
</code></pre>
<p>To check that npcap is installed correctly, I installed nmap 7.95 and ran a local scan:</p>
<pre><code>C:\Users\rolf> nmap 127.0.0.1
Starting Nmap 7.95 ( https://nmap.org ) at 2024-07-30 14:20 W. Europe Daylight Time
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000026s latency).
Not shown: 998 closed tcp ports (reset)
PORT STATE SERVICE
135/tcp open msrpc
445/tcp open microsoft-ds
Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds
</code></pre>
<p>When I uninstall npcap, nmap throws an error so I know it is being used and is working. I tried installing npcap with the winpcap option and also without. Both installations give me the same python problem.</p>
<p>I see other people with <a href="https://stackoverflow.com/questions/68691090/python-scapy-error-no-libpcap-provider-available">similar problems</a>, but no answers that help analyse the problem.</p>
<p>I am running out of options to check. What can I do to find out why my Python script does not seem to detect npcap on my system?</p>
|
<python><scapy><winpcap><npcap>
|
2024-07-30 12:35:39
| 0
| 7,308
|
Rolf
|
78,811,604
| 893,254
|
Numpy array `fill_value` fills matrix with same object, with same id, not independent objects
|
<p>Here is a short example which demonstrates how numpy array creation works with <code>fill_value</code>:</p>
<pre><code>import numpy

class ExampleObject():
    pass

my_array = numpy.full(
    shape=(3, 3),
    fill_value=ExampleObject(),
)

for i in range(3):
    for j in range(3):
        print(id(my_array[i][j]))  # prints the same id for all entries
</code></pre>
<p>It is obvious why this is happening. <code>fill_value</code> is set to a single instance of an object. In this case <code>ExampleObject()</code>.</p>
<p>The behaviour I actually want is for each entry to be independent, such that each entry in the matrix is a unique instance of <code>ExampleObject</code>.</p>
<p>I also tried this:</p>
<pre><code>my_array = numpy.full(
shape=(3, 3),
fill_value=[ExampleObject() for _ in range(3*3)],
)
</code></pre>
<p>However, this failed with the error <code>ValueError: could not broadcast input array from shape (9,) into shape (3,3)</code>.</p>
<p>Is there a way to make this work? Possibly something like using a function (or other callable) which can return a new <code>ExampleObject</code> each time it is called? (Basically a factory function.)</p>
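<p>For reference, the closest workaround I've found is pre-allocating an object array and filling it in a loop, though I was hoping numpy had something more direct:</p>

```python
import numpy as np

class ExampleObject():
    pass

my_array = np.empty((3, 3), dtype=object)
for idx in np.ndindex(my_array.shape):
    my_array[idx] = ExampleObject()  # a fresh instance per cell
```
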
|
<python><numpy>
|
2024-07-30 12:06:01
| 1
| 18,579
|
user2138149
|
78,811,597
| 4,013,571
|
mypy: typing a tuple with large number of arguments
|
<p>I have the following requirement to type</p>
<pre class="lang-py prettyprint-override"><code>n: int = 256
x: tuple[(int,) * n] = (1,) * n
</code></pre>
<p><code>mypy</code>, however, fails with <code>Type expected within [...] [misc]</code>. How can this be fixed?</p>
<p>Current reading:</p>
<ul>
<li><a href="https://github.com/python/typing/issues/221#issuecomment-239597477" rel="nofollow noreferrer">https://github.com/python/typing/issues/221#issuecomment-239597477</a></li>
<li><a href="https://github.com/python/mypy/issues/3345" rel="nofollow noreferrer">https://github.com/python/mypy/issues/3345</a></li>
</ul>
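<p>For context, I know the variable-length form below type-checks, but it throws away exactly the length information I'm trying to encode:</p>

```python
n: int = 256
x: tuple[int, ...] = (1,) * n  # accepted by mypy, but the length 256 is not tracked
```
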
|
<python><python-typing><mypy>
|
2024-07-30 12:04:46
| 0
| 11,353
|
Alexander McFarlane
|
78,811,505
| 19,672,778
|
Why does order of the data matter for neural network?
|
<p>Recently, I discovered some really weird behaviour in my AI model. I wanted to build an AI model that would try to guess implicit functions based on the data I gave it. For example, the equation of the flower is:</p>
<p><a href="https://i.sstatic.net/1Xq8o13L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1Xq8o13L.png" alt="enter image description here" /></a></p>
<p>And this is how I wrote it in the Numpy array:</p>
<pre class="lang-py prettyprint-override"><code>K = 0.75
a = 9
def f(x):
return x - 2 - 2*np.floor((x - 1)/2)
def r1(x):
return abs(6 * f(a * K * x / 2 / np.pi)) - a/2
t = np.linspace(0, 17 * math.pi, 1000)
x = np.cos(t) * r1(t)
y = np.sin(t) * r1(t)
points = np.vstack((x, y)).T
</code></pre>
<p>After that, I tried to experiment a bit and allowed my AI to actually try and guess the shape of this flower! On the first try, it actually got it right.</p>
<p>Here it is:</p>
<p><a href="https://i.sstatic.net/lGLX4T9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGLX4T9F.png" alt="enter image description here" /></a></p>
<p>Well, that was a great result. After that, I experimented with what would happen if I shuffled the points array, and I got completely devastating results!</p>
<p><a href="https://i.sstatic.net/53kfUaeH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/53kfUaeH.png" alt="enter image description here" /></a></p>
<p>I couldn't explain why the order of the Cartesian coordinates mattered when approximating the flower's implicit function. Can anyone explain it?</p>
<p>Here is the code for AI.</p>
<pre class="lang-py prettyprint-override"><code># Define the neural network model
model = Sequential()
model.add(Dense(128, input_dim=2, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(2, activation='linear'))
# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')
# Train the model
model.fit(points, points, epochs=100, batch_size=32)
</code></pre>
|
<python><numpy><tensorflow>
|
2024-07-30 11:45:54
| 2
| 319
|
NikoMolecule
|
78,811,474
| 784,587
|
Decrypt AES-256-CBC in php
|
<p>I encrypt in Python this way:</p>
<pre><code>from Crypto.Cipher import AES
from base64 import b64decode, b64encode
BLOCK_SIZE = 16
pad = lambda s: s + (BLOCK_SIZE - len(s.encode()) % BLOCK_SIZE) * chr(BLOCK_SIZE - len(s.encode()) % BLOCK_SIZE)
unpad = lambda s: s[:-ord(s[len(s) - 1:])]
class AESCipher:
def __init__(self, key: str, iv: str):
self.key = key
self.iv = iv
def encrypt(self, text):
text = pad(text).encode()
cipher = AES.new(key=self.key.encode(), mode=AES.MODE_CBC, iv=self.iv.encode())
encrypted_text = cipher.encrypt(text)
enc = b64encode(encrypted_text).decode('utf-8')
return enc
def decrypt(self, encrypted_text):
encrypted_text = b64decode(encrypted_text)
cipher = AES.new(key=self.key.encode(), mode=AES.MODE_CBC, iv=self.iv.encode())
decrypted_text = cipher.decrypt(encrypted_text)
return unpad(decrypted_text).decode('utf-8')
key = '12345hg5bnlg4mtae678900cdy7ta4vy'
iv = '12345hg5bnlg4mtae678900cdy7ta4vy'[:16]
json = '{"email":"test@test.com","password":"secret","firstName":"test","lastName":"test"}'
# Encrypt
cipher = AESCipher(key, iv)
enc = cipher.encrypt(json)
print(enc)
</code></pre>
<p>And I need to decrypt in PHP this way:</p>
<pre><code>function encrypt($data, $key) {
$encryptedData = openssl_encrypt($data, 'AES-256-CBC', $key, OPENSSL_ZERO_PADDING, substr($key, 0, 16));
return base64_encode($encryptedData);
}
function decrypt($encryptedData, $key) {
$decryptedData = openssl_decrypt(base64_decode($encryptedData), 'AES-256-CBC', $key, OPENSSL_ZERO_PADDING, substr($key, 0, 16));
return $decryptedData;
}
</code></pre>
<p>Unfortunately, I get an empty result on decryption. Could you tell me where my bug is? I don't get any error, so it is difficult to understand where the problem lies.</p>
<p>Thanks</p>
|
<python><php><encryption><aes><cbc-mode>
|
2024-07-30 11:38:20
| 2
| 857
|
Rougher
|
78,811,382
| 12,106,577
|
random.sample(population, X) sometimes not contained in random.sample(population, Y) when Y>X
|
<p><strong>EDIT</strong>: Edited original post replacing my code with an MRE from user <a href="https://stackoverflow.com/users/16759116/no-comment">@no comment</a>.</p>
<p>I noticed a seemingly non-intuitive behaviour using a seeded <code>random.sample(population, k)</code> call to sample from a list of files in a directory.</p>
<p>I would expect that sampling 100 items with <code>k=100</code> while using a seed, guarantees that when I subsequently sample with <code>k=800</code>, the 100 sampled items are guaranteed to be within the list of 800 sampled items.</p>
<p>Below is an example for a number of population sizes and seeds:</p>
<pre class="lang-py prettyprint-override"><code>import random
for n in 1000, 2000, 5000:
print(end=f'{n=}: ')
for i in range(10):
random.seed(i)
x = random.sample(range(n), 100)
random.seed(i)
y = random.sample(range(n), 800)
print(sum(i==j for i,j in zip(x,y[:100])), end=' ')
print()
</code></pre>
<p>The output shows the expected perfect match when sampling from a population of 1000 or 5000, but somehow randomly only a partial match when sampling from 2000:</p>
<pre><code>n=1000: 100 100 100 100 100 100 100 100 100 100
n=2000: 93 61 38 44 68 36 39 33 71 86
n=5000: 100 100 100 100 100 100 100 100 100 100
</code></pre>
<p>Similarly, after running my sampling script in some directories, I get mixed results. In some directories, the 100 are within the 800, but in others a few files differ: some are in one sample but not the other, and vice versa.</p>
<p>Are there any sources of randomness that I am not aware of?</p>
<p>I tried sorting the <code>os.listdir()</code> call that lists directory contents but this didn't change anything. I know I can refactor the script to work by first sampling a greater value and slicing the sample list, but I would expect my original script to work in the same way too.</p>
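<p>For completeness, the refactor I mention at the end (sampling once with the larger k and slicing) does guarantee the containment I expected:</p>

```python
import random

def nested_samples(population, seed, k_small, k_big):
    # A single seeded call: the first k_small items of the big sample
    # serve as the small sample, so containment holds by construction.
    random.seed(seed)
    big = random.sample(population, k_big)
    return big[:k_small], big

small, big = nested_samples(range(2000), 0, 100, 800)
```
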
|
<python><random><sample>
|
2024-07-30 11:17:01
| 1
| 399
|
John Karkas
|
78,811,118
| 10,658,339
|
How to Efficiently Handle and Display Over 1000 Plots with User Input in Jupyter Notebook with VS Code
|
<p>I'm running an analysis that requires plotting data and having a human (engineer) decide whether to accept the fitting based on their judgment. The dataset is large, with over 1000 plots, and user input is necessary for each plot. However, I'm encountering a problem where Jupyter Notebook stops displaying outputs after reaching 500 outputs, which hinders my analysis.</p>
<p>Is there a way to efficiently handle and display such a large number of plots while still allowing for user input for each plot in Jupyter Notebook? Below is a simplified version of my code:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Constants for column names
ID_COLUMN = 'ID'
Z_COLUMN = 'Z'
X_COLUMN = 'X'
Y_COLUMN = 'Y'
# Sample DataFrame
data = pd.DataFrame({
ID_COLUMN: np.random.choice(['A', 'B', 'C'], 1000),
Z_COLUMN: np.random.choice([10, 20, 30], 1000),
X_COLUMN: np.random.rand(1000) * 100,
Y_COLUMN: np.random.rand(1000) * 50
})
def plot_data(data):
unique_ids = data[ID_COLUMN].unique()
for data_id in unique_ids:
filtered_data = data[data[ID_COLUMN] == data_id]
unique_z_values = filtered_data[Z_COLUMN].unique()
for z_value in unique_z_values:
filtered_data_z = filtered_data[filtered_data[Z_COLUMN] == z_value]
plt.figure()
plt.plot(filtered_data_z[X_COLUMN], filtered_data_z[Y_COLUMN])
plt.title(f'ID: {data_id}, Z: {z_value}')
plt.xlabel(X_COLUMN)
plt.ylabel(Y_COLUMN)
plt.show()
user_input = input(f'Accept plot for ID {data_id} at Z {z_value}? (y/n): ')
if user_input.lower() == 'abort':
return
# Running the plotting function
plot_data(data)
</code></pre>
<p><strong>Details:</strong></p>
<ul>
<li>Each plot must be reviewed and accepted by a human.</li>
<li>The dataset results in over 1000 plots.</li>
<li>Jupyter Notebook stops displaying outputs after 500 plots, which disrupts the analysis process.</li>
</ul>
<p><strong>Questions:</strong></p>
<ul>
<li>How can I manage and display more than 500 plots in Jupyter Notebook efficiently?</li>
<li>Are there any best practices or alternative approaches to handle such large-scale plotting with user input?</li>
</ul>
<p>Thank you for your help!</p>
|
<python><visual-studio-code><jupyter-notebook><user-input><display>
|
2024-07-30 10:12:26
| 0
| 527
|
JCV
|
78,810,998
| 18,769,241
|
How to install speech recognition module for Python 2.7?
|
<p>I am trying to install the following module <a href="https://github.com/Uberi/speech_recognition" rel="nofollow noreferrer">https://github.com/Uberi/speech_recognition</a> from PyPi (<a href="https://pypi.org/project/SpeechRecognition/" rel="nofollow noreferrer">https://pypi.org/project/SpeechRecognition/</a>) for my installed Python 2.7 using the command:</p>
<pre><code>py -2.7 -m pip install SpeechRecognition==3.8.1
</code></pre>
<p>But the error I get is:</p>
<pre><code>Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError(SSLError(1, '_ssl.c:499: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol'),)) - skipping
</code></pre>
<p>How to solve this and eventually install The SpeechRecognition v3.8.1 which is compatible with Python 2.7?</p>
|
<python><speech-recognition>
|
2024-07-30 09:49:04
| 0
| 571
|
Sam
|
78,810,832
| 1,602,160
|
install python package dynamically via subprocess pip but still see import error
|
<p>I have to dynamically install the package <code>paho-mqtt</code> (which itself depends on <code>typing-extensions</code>) on a remote machine (Python 3.7) via:</p>
<pre><code>import subprocess
import sys
try:
import paho.mqtt.client as mqtt_client
except ImportError:
subprocess.check_call([sys.executable, "-m", "pip", "install", 'paho-mqtt'])
subprocess.check_call([sys.executable, "-m", "pip", "install", 'typing-extensions==4.7.1'])
finally:
import paho.mqtt.client as mqtt_client
</code></pre>
<p>The packages appear to install correctly, but the final import still fails:</p>
<pre><code>Collecting paho-mqtt
Using cached https://files.pythonhosted.org/packages/c4/cb/00451c3cf31790287768bb12c6bec834f5d292eaf3022afc88e14b8afc94/paho_mqtt-2.1.0-py3-none-any.whl
Installing collected packages: paho-mqtt
Successfully installed paho-mqtt-2.1.0
Collecting typing-extensions==4.7.1
Using cached https://files.pythonhosted.org/packages/ec/6b/63cc3df74987c36fe26157ee12e09e8f9db4de771e0f3404263117e75b95/typing_extensions-4.7.1-py3-none-any.whl
Installing collected packages: typing-extensions
Successfully installed typing-extensions-4.7.1
Traceback (most recent call last):
File "tt.py", line 10, in <module>
import paho.mqtt.client as mqtt_client
ModuleNotFoundError: No module named 'paho'
</code></pre>
<p>I also tried calling <code>pip</code> once to install both packages, but got the same error:</p>
<pre><code>subprocess.check_call([sys.executable, "-m", "pip", "install", 'typing-extensions==4.7.1','paho-mqtt'])
</code></pre>
<p>Any ideas?</p>
|
<python>
|
2024-07-30 09:17:58
| 0
| 734
|
Shawn
|
78,810,717
| 4,399,016
|
Extracting contents from Zipfile in Python downloaded from URL
|
<p>I have this <a href="https://www.statssa.gov.za/timeseriesdata/Excel/P6420%20Food%20and%20beverages%20(202405).zip" rel="nofollow noreferrer">URL</a> for downloading a zip file.
The zip file can be downloaded by clicking the link.</p>
<p>But this code I have does not work.</p>
<pre><code>from datetime import datetime, timedelta
import requests, zipfile, io
from zipfile import BadZipFile
TwoMonthsAgo = datetime.now() - timedelta(60)
zip_file_url = 'https://www.statssa.gov.za/timeseriesdata/Excel/P6420%20Food%20and%20beverages%20('+ datetime.strftime(TwoMonthsAgo, '%Y%m') +').zip'
r = requests.get(zip_file_url)
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall()
</code></pre>
<p>This returns an error:</p>
<blockquote>
<p>BadZipFile: File is not a zip file</p>
</blockquote>
<p>When I try to download the zipfile to cwd and extract it, there is once again the same error.</p>
<pre><code>import urllib.request
filename = 'P6420 Food and beverages ('+ datetime.strftime(TwoMonthsAgo, '%Y%m') +').zip'
urllib.request.urlretrieve(zip_file_url, filename)
import zipfile
with zipfile.ZipFile(filename, 'r') as zip_ref:
    zip_ref.extractall()
</code></pre>
<blockquote>
<p>BadZipFile: File is not a zip file</p>
</blockquote>
<p>The downloaded file also fails to extract when I try to open it manually, so it seems the file is corrupted during the download. How can I overcome this?</p>
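<p>One diagnostic that might narrow this down: check whether the downloaded bytes are a zip archive at all. If the check below fails, the server is probably returning something else (e.g. an HTML error page, perhaps because it expects a browser-like <code>User-Agent</code> header). A sketch:</p>

```python
import io
import zipfile

def looks_like_zip(data: bytes) -> bool:
    # every non-empty zip archive begins with the local-file-header
    # signature b"PK\x03\x04"
    return data[:4] == b"PK\x03\x04"

# sanity check against an in-memory archive
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("a.txt", "hello")
print(looks_like_zip(buf.getvalue()))   # → True
print(looks_like_zip(b"<html><body>"))  # → False
```

<p>With a real response I would call <code>looks_like_zip(r.content)</code> and, if it returns <code>False</code>, inspect <code>r.content[:200]</code> to see what the server actually sent.</p>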
|
<python><python-zipfile>
|
2024-07-30 08:59:19
| 1
| 680
|
prashanth manohar
|
78,810,393
| 961,207
|
Finding quartile in Python given mean, std.dev
|
<p>Given the mean and standard deviation of a normal distribution, how in Python can I determine the quartile of a given point?</p>
<p>E.g.:</p>
<ul>
<li>mu = 0.7</li>
<li>sigma = 0.1</li>
</ul>
<p>Which quartile is the value 0.75 in, with Python?</p>
<p>I'm after the quartile itself, not the quartile bounds - i.e. a function should take mu, sigma, and val, and return the quartile as an integer in the range 0 to 3.</p>
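<p>To illustrate the signature I have in mind, here is a sketch using the stdlib <code>NormalDist</code> (assuming "quartile" means mapping the CDF onto four equal-probability bins; I don't know if this is the idiomatic way):</p>

```python
from statistics import NormalDist

def quartile(mu, sigma, val):
    # fraction of the distribution below val, mapped onto bins 0-3
    p = NormalDist(mu, sigma).cdf(val)
    return min(int(p * 4), 3)

print(quartile(0.7, 0.1, 0.75))  # → 2 (i.e. the third quartile)
```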
|
<python><statistics>
|
2024-07-30 07:48:58
| 1
| 571
|
Leon Derczynski
|
78,810,340
| 22,496,572
|
Allow eval access to class variables but also to a function from an imported module
|
<p>I want <code>eval()</code> to only have access to variables present in a class instance <code>instance</code>, but also to functions from the module <code>itertools</code>. How can I do it most elegantly?</p>
<p>Without the functions from the module, I do <code>eval(expr, instance.__dict__)</code>. If I want specific functions, I can add them for example by doing something like <code>eval(expr, instance.__dict__ | {'product': itertools.product})</code>.</p>
<p>But maybe there is a less ad-hoc way of allowing access to the imported modules' functions?</p>
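<p>One variant I am considering (a sketch; <code>Box</code> is a hypothetical stand-in for my real class) is building the environment from the module's public names:</p>

```python
import itertools

class Box:
    # hypothetical stand-in for the real instance
    def __init__(self):
        self.xs = [1, 2]
        self.ys = ["a", "b"]

instance = Box()

# all public names the module defines, overlaid with the instance's
# attributes so the instance wins on any name clash
env = {k: v for k, v in vars(itertools).items() if not k.startswith("_")}
env |= instance.__dict__

print(eval("list(product(xs, ys))", env))  # → [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
```

<p>But this still feels ad-hoc, and it pulls in every public name of the module rather than a curated set.</p>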
|
<python><eval>
|
2024-07-30 07:36:37
| 0
| 371
|
Sasha
|
78,810,300
| 13,840,270
|
Pythonic approach to avoid nested loops for string concatenation
|
<p>I want to find all 5-digit strings, for which</p>
<ul>
<li>the first three digits are in my first list,</li>
<li>the second through fourth are in my second, and</li>
<li>the third through fifth are in my last list:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>l0=["123","567","451"]
l1=["234","239","881"]
l2=["348","551","399"]
</code></pre>
<p>should thus yield: <code>['12348', '12399']</code>.</p>
<p>I have therefore written a function <code>is_successor(a,b)</code> that tests if a and b overlap:</p>
<pre class="lang-py prettyprint-override"><code>def is_successor(a: str, b: str) -> bool:
    """tests if strings a and b overlap"""
    return a[1:] == b[:2]
</code></pre>
<p>I can then achieve my goal with this nested loop/check structure, which builds the strings back to front and collects all valid results:</p>
<pre class="lang-py prettyprint-override"><code>pres = []
for c in l2:
    for b in l1:
        if is_successor(b, c):
            for a in l0:
                if is_successor(a, b):
                    pres.append(a[0] + b[0] + c)
pres
</code></pre>
<ul>
<li>I know I could write it as a list comprehension, but for my original data I have more nested lists and I lose readability even in a list comprehension.</li>
<li>I iterate from <code>l2</code> -> <code>l0</code> because in my original data the lists get longer as the index decreases, so this direction filters out more cases early.</li>
<li>A single loop through all combinations of <code>l0,l1,l2</code> that checks the succession of all items <code>a,b,c</code> simultaneously would work, but it tests far more unnecessary combinations than my current construct does.</li>
</ul>
<p><strong>Question</strong>:</p>
<p>How can this nested loop-and-check structure be abstracted? Is there a pythonic way to capture the repeated <code>for</code> -> <code>is_successor()</code> pattern?</p>
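<p>The kind of generalisation I have in mind is a fold over the list of lists, something roughly like the sketch below (though I am not sure this counts as pythonic):</p>

```python
def is_successor(a: str, b: str) -> bool:
    """Tests if strings a and b overlap."""
    return a[1:] == b[:2]

def chain_overlaps(lists):
    # fold from the last list backwards, extending each partial
    # string leftwards by every compatible item
    partials = list(lists[-1])
    for lst in reversed(lists[:-1]):
        partials = [item[0] + p
                    for p in partials
                    for item in lst
                    if is_successor(item, p)]
    return partials

l0 = ["123", "567", "451"]
l1 = ["234", "239", "881"]
l2 = ["348", "551", "399"]
print(chain_overlaps([l0, l1, l2]))  # → ['12348', '12399']
```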
<p>A larger input might be:</p>
<pre class="lang-py prettyprint-override"><code>primes = [2, 3, 5, 7, 11, 13, 17]
lsts = [
    [
        str(j).zfill(3)
        for j in range(12, 988)
        if not j % prime
    ]
    for prime in primes
]
</code></pre>
|
<python>
|
2024-07-30 07:27:36
| 5
| 3,215
|
DuesserBaest
|
78,810,190
| 1,867,328
|
Difference in estimates of SE between R's GLM and Python's minimize method
|
<p>I have below dataset in <code>R</code></p>
<pre><code>dat = structure(list(PurchasedProb = c(0.37212389963679, 0.572853363351896,
0.908207789994776, 0.201681931037456, 0.898389684967697, 0.944675268605351,
0.660797792486846, 0.62911404389888, 0.0617862704675645, 0.205974574899301,
0.176556752528995, 0.687022846657783, 0.384103718213737, 0.769841419998556,
0.497699242085218, 0.717618508264422, 0.991906094830483, 0.380035179434344,
0.777445221319795, 0.934705231105909, 0.212142521282658, 0.651673766085878,
0.125555095961317, 0.267220668727532, 0.386114092543721, 0.0133903331588954,
0.382387957070023, 0.86969084572047, 0.34034899668768, 0.482080115471035,
0.599565825425088, 0.493541307048872, 0.186217601411045, 0.827373318606988,
0.668466738192365, 0.79423986072652, 0.107943625887856, 0.723710946040228,
0.411274429643527, 0.820946294115856, 0.647060193819925, 0.78293276228942,
0.553036311641335, 0.529719580197707, 0.789356231689453, 0.023331202333793,
0.477230065036565, 0.7323137386702, 0.692731556482613, 0.477619622135535,
0.8612094768323, 0.438097107224166, 0.244797277031466, 0.0706790471449494,
0.0994661601725966, 0.31627170718275, 0.518634263193235, 0.6620050764177,
0.406830187188461, 0.912875924259424, 0.293603372760117, 0.459065726259723,
0.332394674187526, 0.65087046707049, 0.258016780717298, 0.478545248275623,
0.766310670645908, 0.0842469143681228, 0.875321330036968, 0.339072937844321,
0.839440350187942, 0.34668348915875, 0.333774930797517, 0.476351245073602,
0.892198335845023, 0.864339470630512, 0.389989543473348, 0.777320698834956,
0.960617997217923, 0.434659484773874, 0.712514678714797, 0.399994368897751,
0.325352151878178, 0.757087148027495, 0.202692255144939, 0.711121222469956,
0.121691921027377, 0.245488513959572, 0.14330437942408, 0.239629415096715,
0.0589343772735447, 0.642288258532062, 0.876269212691113, 0.778914677444845,
0.79730882588774, 0.455274453619495, 0.410084082046524, 0.810870242770761,
0.604933290276676, 0.654723928077146, 0.353197271935642, 0.270260145887733,
0.99268406117335, 0.633493264438584, 0.213208135217428, 0.129372348077595,
0.478118034312502, 0.924074469832703, 0.59876096714288, 0.976170694921166,
0.731792511884123, 0.356726912083104, 0.431473690550774, 0.148211560677737,
0.0130775754805654, 0.715566066093743, 0.103184235747904, 0.446284348610789,
0.640101045137271, 0.991838620044291, 0.495593577856198, 0.484349524369463,
0.173442334868014, 0.754820944508538, 0.453895489219576, 0.511169783771038,
0.207545113284141, 0.228658142732456, 0.595711996313184, 0.57487219828181,
0.0770643802825361, 0.0355405795853585, 0.642795492196456, 0.928615199634805,
0.598092422354966, 0.560900748008862, 0.526027723914012, 0.985095223877579,
0.507641822332516, 0.682788078673184, 0.601541217649356, 0.238868677755818,
0.258165926672518, 0.729309623362496, 0.452570831403136, 0.175126768415794,
0.746698269620538, 0.104987640399486, 0.864544949028641, 0.614644971676171,
0.557159538846463, 0.328777319053188, 0.453131445450708, 0.500440972624347,
0.180866361130029, 0.529630602803081, 0.0752757457084954, 0.277755932649598,
0.212699519237503, 0.284790480975062, 0.895094102947041, 0.4462353233248,
0.779984889784828, 0.880619034869596, 0.413124209502712, 0.0638084805104882,
0.335487491684034, 0.723725946620107, 0.337615333497524, 0.630414122482762,
0.840614554006606, 0.856131664710119, 0.39135928102769, 0.380493885604665,
0.895445425994694, 0.644315762910992, 0.741078648716211, 0.605303446529433,
0.903081611497328, 0.293730155099183, 0.19126010988839, 0.886450943304226,
0.503339485730976, 0.877057543024421, 0.189193622441962, 0.758103052387014,
0.724498892668635, 0.943724818294868, 0.547646587016061, 0.711743867723271,
0.388905099825934, 0.100873126182705, 0.927302088588476, 0.283232500310987,
0.59057315881364, 0.110360604943708, 0.840507032116875, 0.317963684443384,
0.782851336989552, 0.267508207354695, 0.218645284883678, 0.516796836396679,
0.268950592027977, 0.181168327340856, 0.518576137488708, 0.562782935798168,
0.129156854469329, 0.256367604015395, 0.717935275984928, 0.961409936426207,
0.100140846567228, 0.763222689507529, 0.947966354666278, 0.818634688388556,
0.308292330708355, 0.649579460499808, 0.953355451114476, 0.953732650028542,
0.339979203417897, 0.262474110117182, 0.165453933179379, 0.322168056620285,
0.510125206550583, 0.923968471353874, 0.510959698352963, 0.257621260825545,
0.0464608869515359, 0.41785625834018, 0.854001502273604, 0.347230677725747,
0.131442320533097, 0.374486864544451, 0.631420228397474, 0.390078933676705,
0.689627848798409, 0.689413412474096, 0.554900623159483, 0.429624407785013,
0.452720062807202, 0.306443258887157, 0.578353944001719, 0.910370304249227,
0.142604082124308, 0.415047625312582, 0.210925750667229, 0.428750370861962,
0.132689975202084, 0.460096445865929, 0.942957059247419, 0.761973861604929,
0.932909828843549, 0.470678497571498, 0.603588067693636, 0.484989680582657,
0.10880631650798, 0.247726832982153, 0.498514530714601, 0.372866708086804,
0.934691370232031, 0.523986077867448, 0.317144671687856, 0.277966029476374,
0.787540507735685, 0.702462512534112, 0.165027638664469, 0.0644575387705117,
0.754705621628091, 0.620410033036023, 0.169576766667888, 0.0622140523046255,
0.109029268613085, 0.381716351723298, 0.169310914585367, 0.298652542056516,
0.192209535045549, 0.257170021301135, 0.181231822818518, 0.477313709678128,
0.770737042883411, 0.0277871224097908, 0.527310776989907, 0.880319068906829,
0.373063371982425, 0.047959131654352, 0.138628246728331, 0.321492120390758,
0.154831611318514, 0.132228172151372, 0.221305927727371, 0.226380796171725,
0.13141653384082, 0.981563460314646, 0.327013726811856, 0.506939497077838,
0.68144251476042, 0.0991691031958908, 0.118902558228001, 0.0504396595060825,
0.929253919748589, 0.673712232848629, 0.0948578554671258, 0.492596120806411,
0.461551840649918, 0.375216530868784, 0.991099219536409, 0.176350713707507,
0.813435208518058, 0.0684466371312737, 0.40044974675402, 0.141144325723872,
0.193309862399474, 0.841351716779172, 0.719913988374174, 0.267212083097547,
0.495001644827425, 0.0831138978246599, 0.353884240612388, 0.969208805356175,
0.624714189674705, 0.664618249749765, 0.312489656498656, 0.405689612729475,
0.996077371528372, 0.855082356370986, 0.953548396006227, 0.812305092345923,
0.782182115828618, 0.267878128914163, 0.762151529546827, 0.986311589134857,
0.293605549028143, 0.399351106956601, 0.812131523853168, 0.0771516691893339,
0.363696809858084, 0.442592467181385, 0.156714132521302, 0.582205270184204,
0.970162178855389, 0.98949983343482, 0.176452036481351, 0.542130424408242,
0.384303892031312, 0.676164050819352, 0.26929377974011, 0.469250942347571,
0.171800082316622, 0.36918946239166, 0.725405272562057, 0.486149104312062,
0.0638024667277932, 0.784546229988337, 0.418321635806933, 0.981018084799871,
0.282883955864236, 0.847882149508223, 0.0822392308618873, 0.886458750581369,
0.471930730855092, 0.109100963454694, 0.333277984522283, 0.837416569236666,
0.276849841699004, 0.587035141419619, 0.836732269730419, 0.0711540239863098,
0.702778743347153, 0.69882453721948, 0.46396238100715, 0.436931110452861,
0.562176787760109, 0.928483226336539, 0.230466414242983, 0.221813754411414,
0.420215893303975, 0.333520805696025, 0.864807550329715, 0.177194535732269,
0.49331872863695, 0.429713366786018, 0.564263842999935, 0.656162315513939,
0.97855406277813, 0.232161148451269, 0.240811596158892, 0.796836083987728,
0.831671715015545, 0.113507706439123, 0.963312016334385, 0.147322899661958,
0.143626942764968, 0.925229935208336, 0.507035602582619, 0.15485101705417,
0.348302052123472, 0.659821025794372, 0.311772374436259, 0.351573409279808,
0.147845706902444, 0.658877609064803), Age = c(19L, 35L, 26L,
27L, 19L, 27L, 27L, 32L, 25L, 35L, 26L, 26L, 20L, 32L, 18L, 29L,
47L, 45L, 46L, 48L, 45L, 47L, 48L, 45L, 46L, 47L, 49L, 47L, 29L,
31L, 31L, 27L, 21L, 28L, 27L, 35L, 33L, 30L, 26L, 27L, 27L, 33L,
35L, 30L, 28L, 23L, 25L, 27L, 30L, 31L, 24L, 18L, 29L, 35L, 27L,
24L, 23L, 28L, 22L, 32L, 27L, 25L, 23L, 32L, 59L, 24L, 24L, 23L,
22L, 31L, 25L, 24L, 20L, 33L, 32L, 34L, 18L, 22L, 28L, 26L, 30L,
39L, 20L, 35L, 30L, 31L, 24L, 28L, 26L, 35L, 22L, 30L, 26L, 29L,
29L, 35L, 35L, 28L, 35L, 28L, 27L, 28L, 32L, 33L, 19L, 21L, 26L,
27L, 26L, 38L, 39L, 37L, 38L, 37L, 42L, 40L, 35L, 36L, 40L, 41L,
36L, 37L, 40L, 35L, 41L, 39L, 42L, 26L, 30L, 26L, 31L, 33L, 30L,
21L, 28L, 23L, 20L, 30L, 28L, 19L, 19L, 18L, 35L, 30L, 34L, 24L,
27L, 41L, 29L, 20L, 26L, 41L, 31L, 36L, 40L, 31L, 46L, 29L, 26L,
32L, 32L, 25L, 37L, 35L, 33L, 18L, 22L, 35L, 29L, 29L, 21L, 34L,
26L, 34L, 34L, 23L, 35L, 25L, 24L, 31L, 26L, 31L, 32L, 33L, 33L,
31L, 20L, 33L, 35L, 28L, 24L, 19L, 29L, 19L, 28L, 34L, 30L, 20L,
26L, 35L, 35L, 49L, 39L, 41L, 58L, 47L, 55L, 52L, 40L, 46L, 48L,
52L, 59L, 35L, 47L, 60L, 49L, 40L, 46L, 59L, 41L, 35L, 37L, 60L,
35L, 37L, 36L, 56L, 40L, 42L, 35L, 39L, 40L, 49L, 38L, 46L, 40L,
37L, 46L, 53L, 42L, 38L, 50L, 56L, 41L, 51L, 35L, 57L, 41L, 35L,
44L, 37L, 48L, 37L, 50L, 52L, 41L, 40L, 58L, 45L, 35L, 36L, 55L,
35L, 48L, 42L, 40L, 37L, 47L, 40L, 43L, 59L, 60L, 39L, 57L, 57L,
38L, 49L, 52L, 50L, 59L, 35L, 37L, 52L, 48L, 37L, 37L, 48L, 41L,
37L, 39L, 49L, 55L, 37L, 35L, 36L, 42L, 43L, 45L, 46L, 58L, 48L,
37L, 37L, 40L, 42L, 51L, 47L, 36L, 38L, 42L, 39L, 38L, 49L, 39L,
39L, 54L, 35L, 45L, 36L, 52L, 53L, 41L, 48L, 48L, 41L, 41L, 42L,
36L, 47L, 38L, 48L, 42L, 40L, 57L, 36L, 58L, 35L, 38L, 39L, 53L,
35L, 38L, 47L, 47L, 41L, 53L, 54L, 39L, 38L, 38L, 37L, 42L, 37L,
36L, 60L, 54L, 41L, 40L, 42L, 43L, 53L, 47L, 42L, 42L, 59L, 58L,
46L, 38L, 54L, 60L, 60L, 39L, 59L, 37L, 46L, 46L, 42L, 41L, 58L,
42L, 48L, 44L, 49L, 57L, 56L, 49L, 39L, 47L, 48L, 48L, 47L, 45L,
60L, 39L, 46L, 51L, 50L, 36L, 49L), b0 = c(1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1), Age1 = c(-1.77956878790224,
-0.253270175924977, -1.11181314516219, -1.01641948191361, -1.77956878790224,
-1.01641948191361, -1.01641948191361, -0.539451165670714, -1.20720680841077,
-0.253270175924977, -1.11181314516219, -1.11181314516219, -1.68417512465366,
-0.539451165670714, -1.87496245115082, -0.82563215541645, 0.891453783057969,
0.700666456560812, 0.79606011980939, 0.986847446306548, 0.700666456560812,
0.891453783057969, 0.986847446306548, 0.700666456560812, 0.79606011980939,
0.891453783057969, 1.08224110955513, 0.891453783057969, -0.82563215541645,
-0.634844828919293, -0.634844828919293, -1.01641948191361, -1.58878146140508,
-0.921025818665029, -1.01641948191361, -0.253270175924977, -0.444057502422135,
-0.730238492167871, -1.11181314516219, -1.01641948191361, -1.01641948191361,
-0.444057502422135, -0.253270175924977, -0.730238492167871, -0.921025818665029,
-1.39799413490792, -1.20720680841077, -1.01641948191361, -0.730238492167871,
-0.634844828919293, -1.30260047165934, -1.87496245115082, -0.82563215541645,
-0.253270175924977, -1.01641948191361, -1.30260047165934, -1.39799413490792,
-0.921025818665029, -1.4933877981565, -0.539451165670714, -1.01641948191361,
-1.20720680841077, -1.39799413490792, -0.539451165670714, 2.03617774204092,
-1.30260047165934, -1.30260047165934, -1.39799413490792, -1.4933877981565,
-0.634844828919293, -1.20720680841077, -1.30260047165934, -1.68417512465366,
-0.444057502422135, -0.539451165670714, -0.348663839173556, -1.87496245115082,
-1.4933877981565, -0.921025818665029, -1.11181314516219, -0.730238492167871,
0.128304477069338, -1.68417512465366, -0.253270175924977, -0.730238492167871,
-0.634844828919293, -1.30260047165934, -0.921025818665029, -1.11181314516219,
-0.253270175924977, -1.4933877981565, -0.730238492167871, -1.11181314516219,
-0.82563215541645, -0.82563215541645, -0.253270175924977, -0.253270175924977,
-0.921025818665029, -0.253270175924977, -0.921025818665029, -1.01641948191361,
-0.921025818665029, -0.539451165670714, -0.444057502422135, -1.77956878790224,
-1.58878146140508, -1.11181314516219, -1.01641948191361, -1.11181314516219,
0.0329108138207596, 0.128304477069338, -0.0624828494278193, 0.0329108138207596,
-0.0624828494278193, 0.414485466815075, 0.223698140317917, -0.253270175924977,
-0.157876512676398, 0.223698140317917, 0.319091803566496, -0.157876512676398,
-0.0624828494278193, 0.223698140317917, -0.253270175924977, 0.319091803566496,
0.128304477069338, 0.414485466815075, -1.11181314516219, -0.730238492167871,
-1.11181314516219, -0.634844828919293, -0.444057502422135, -0.730238492167871,
-1.58878146140508, -0.921025818665029, -1.39799413490792, -1.68417512465366,
-0.730238492167871, -0.921025818665029, -1.77956878790224, -1.77956878790224,
-1.87496245115082, -0.253270175924977, -0.730238492167871, -0.348663839173556,
-1.30260047165934, -1.01641948191361, 0.319091803566496, -0.82563215541645,
-1.68417512465366, -1.11181314516219, 0.319091803566496, -0.634844828919293,
-0.157876512676398, 0.223698140317917, -0.634844828919293, 0.79606011980939,
-0.82563215541645, -1.11181314516219, -0.539451165670714, -0.539451165670714,
-1.20720680841077, -0.0624828494278193, -0.253270175924977, -0.444057502422135,
-1.87496245115082, -1.4933877981565, -0.253270175924977, -0.82563215541645,
-0.82563215541645, -1.58878146140508, -0.348663839173556, -1.11181314516219,
-0.348663839173556, -0.348663839173556, -1.39799413490792, -0.253270175924977,
-1.20720680841077, -1.30260047165934, -0.634844828919293, -1.11181314516219,
-0.634844828919293, -0.539451165670714, -0.444057502422135, -0.444057502422135,
-0.634844828919293, -1.68417512465366, -0.444057502422135, -0.253270175924977,
-0.921025818665029, -1.30260047165934, -1.77956878790224, -0.82563215541645,
-1.77956878790224, -0.921025818665029, -0.348663839173556, -0.730238492167871,
-1.68417512465366, -1.11181314516219, -0.253270175924977, -0.253270175924977,
1.08224110955513, 0.128304477069338, 0.319091803566496, 1.94078407879234,
0.891453783057969, 1.6546030890466, 1.36842209930086, 0.223698140317917,
0.79606011980939, 0.986847446306548, 1.36842209930086, 2.03617774204092,
-0.253270175924977, 0.891453783057969, 2.13157140528949, 1.08224110955513,
0.223698140317917, 0.79606011980939, 2.03617774204092, 0.319091803566496,
-0.253270175924977, -0.0624828494278193, 2.13157140528949, -0.253270175924977,
-0.0624828494278193, -0.157876512676398, 1.74999675229518, 0.223698140317917,
0.414485466815075, -0.253270175924977, 0.128304477069338, 0.223698140317917,
1.08224110955513, 0.0329108138207596, 0.79606011980939, 0.223698140317917,
-0.0624828494278193, 0.79606011980939, 1.46381576254944, 0.414485466815075,
0.0329108138207596, 1.17763477280371, 1.74999675229518, 0.319091803566496,
1.27302843605228, -0.253270175924977, 1.84539041554376, 0.319091803566496,
-0.253270175924977, 0.605272793312233, -0.0624828494278193, 0.986847446306548,
-0.0624828494278193, 1.17763477280371, 1.36842209930086, 0.319091803566496,
0.223698140317917, 1.94078407879234, 0.700666456560812, -0.253270175924977,
-0.157876512676398, 1.6546030890466, -0.253270175924977, 0.986847446306548,
0.414485466815075, 0.223698140317917, -0.0624828494278193, 0.891453783057969,
0.223698140317917, 0.509879130063654, 2.03617774204092, 2.13157140528949,
0.128304477069338, 1.84539041554376, 1.84539041554376, 0.0329108138207596,
1.08224110955513, 1.36842209930086, 1.17763477280371, 2.03617774204092,
-0.253270175924977, -0.0624828494278193, 1.36842209930086, 0.986847446306548,
-0.0624828494278193, -0.0624828494278193, 0.986847446306548,
0.319091803566496, -0.0624828494278193, 0.128304477069338, 1.08224110955513,
1.6546030890466, -0.0624828494278193, -0.253270175924977, -0.157876512676398,
0.414485466815075, 0.509879130063654, 0.700666456560812, 0.79606011980939,
1.94078407879234, 0.986847446306548, -0.0624828494278193, -0.0624828494278193,
0.223698140317917, 0.414485466815075, 1.27302843605228, 0.891453783057969,
-0.157876512676398, 0.0329108138207596, 0.414485466815075, 0.128304477069338,
0.0329108138207596, 1.08224110955513, 0.128304477069338, 0.128304477069338,
1.55920942579802, -0.253270175924977, 0.700666456560812, -0.157876512676398,
1.36842209930086, 1.46381576254944, 0.319091803566496, 0.986847446306548,
0.986847446306548, 0.319091803566496, 0.319091803566496, 0.414485466815075,
-0.157876512676398, 0.891453783057969, 0.0329108138207596, 0.986847446306548,
0.414485466815075, 0.223698140317917, 1.84539041554376, -0.157876512676398,
1.94078407879234, -0.253270175924977, 0.0329108138207596, 0.128304477069338,
1.46381576254944, -0.253270175924977, 0.0329108138207596, 0.891453783057969,
0.891453783057969, 0.319091803566496, 1.46381576254944, 1.55920942579802,
0.128304477069338, 0.0329108138207596, 0.0329108138207596, -0.0624828494278193,
0.414485466815075, -0.0624828494278193, -0.157876512676398, 2.13157140528949,
1.55920942579802, 0.319091803566496, 0.223698140317917, 0.414485466815075,
0.509879130063654, 1.46381576254944, 0.891453783057969, 0.414485466815075,
0.414485466815075, 2.03617774204092, 1.94078407879234, 0.79606011980939,
0.0329108138207596, 1.55920942579802, 2.13157140528949, 2.13157140528949,
0.128304477069338, 2.03617774204092, -0.0624828494278193, 0.79606011980939,
0.79606011980939, 0.414485466815075, 0.319091803566496, 1.94078407879234,
0.414485466815075, 0.986847446306548, 0.605272793312233, 1.08224110955513,
1.84539041554376, 1.74999675229518, 1.08224110955513, 0.128304477069338,
0.891453783057969, 0.986847446306548, 0.986847446306548, 0.891453783057969,
0.700666456560812, 2.13157140528949, 0.128304477069338, 0.79606011980939,
1.27302843605228, 1.17763477280371, -0.157876512676398, 1.08224110955513
)), row.names = c(NA, -400L), class = "data.frame")
</code></pre>
<p>Now I fit a <code>GLM</code> on above data</p>
<pre><code>glm("PurchasedProb ~ Age1", data = dat, family = binomial())
</code></pre>
<p>Now I want to get estimates of the model coefficients using the <code>scp.optimize.minimize</code> method on the same data, as below:</p>
<pre><code>import scipy as scp
import scipy.optimize  # ensure the optimize submodule is loaded
import pandas as pd
import numpy as np

def LL_Function(Params, Data, VariableY, VariableX):
    Data['X_Beta'] = Data.assign(B0 = 1)[['B0'] + VariableX].to_numpy() @ np.array(Params)
    Data['Mu'] = 1 / (1 + np.exp(-1 * Data['X_Beta']))
    Data['LL'] = Data[VariableY].to_numpy() * np.log(Data['Mu']) + (1 - Data[VariableY]) * np.log(1 - Data['Mu'])
    return -1 * Data['LL'].sum()

opt = scp.optimize.minimize(LL_Function, [0, 1], args = (dat, 'PurchasedProb', ['Age1']))
</code></pre>
<p>I also obtained the <code>SE</code> as below:</p>
<pre><code>print(np.sqrt(np.diag(opt.hess_inv)))
</code></pre>
<p>However, while the <code>SE</code> estimates are close to the estimates from <code>R</code> for this dataset, I found noticeable differences between the two approaches on several other datasets. Because of some restrictions I cannot reproduce such a dataset here.</p>
<p><strong>I wonder if anyone has ever come across such a difference.</strong></p>
<p>So my question is: is <code>print(np.sqrt(np.diag(opt.hess_inv)))</code> the right approach to obtain the <code>SE</code>? Or are other settings/methods required to get similar <code>SE</code> estimates between <code>R</code> and <code>Python</code>?</p>
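<p>For context, my understanding is that the <code>SE</code> reported by <code>R</code> comes from the inverse of the Fisher information rather than from an optimizer's <code>hess_inv</code> approximation. A numpy-only sketch of what I believe I should be comparing against (toy design matrix, hypothetical names):</p>

```python
import numpy as np

def logistic_se(X, mu):
    # Fisher information for a logit-link GLM: I = X' W X with
    # W = diag(mu * (1 - mu)); standard errors are sqrt(diag(I^-1))
    W = mu * (1 - mu)
    info = X.T @ (X * W[:, None])
    return np.sqrt(np.diag(np.linalg.inv(info)))

# toy check: intercept-only design, fitted mean 0.5 everywhere
X = np.ones((100, 1))
mu = np.full(100, 0.5)
print(logistic_se(X, mu))  # → [0.2], i.e. sqrt(1 / (100 * 0.25))
```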
|
<python><r><scipy><glm>
|
2024-07-30 06:58:58
| 0
| 3,832
|
Bogaso
|
78,810,121
| 23,468,281
|
Python Regex to match the FIRST repetition of a digit
|
<p>Examples:</p>
<ol>
<li>For <strong>0123123123</strong>, <strong>1</strong> should be matched since the 2nd <strong>1</strong> appears before the repetition of any other digit.</li>
<li>For <strong>01234554321</strong>, <strong>5</strong> should be matched since the 2nd <strong>5</strong> appears before the repetition of any other digit.</li>
</ol>
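<p>For reference, the behaviour I want is easy to express without a regex as a plain scan; I am specifically after a regex equivalent of this:</p>

```python
def first_repeated(s):
    # return the first digit whose second occurrence appears
    # before any other digit repeats
    seen = set()
    for ch in s:
        if ch in seen:
            return ch
        seen.add(ch)
    return None

print(first_repeated("0123123123"))   # → 1
print(first_repeated("01234554321"))  # → 5
```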
<p>Some regexes that I have tried:</p>
<ol>
<li>The below works for the 1st but not the 2nd example. For the 2nd it matches <strong>1</strong> instead, because <strong>1</strong> is the first digit in the string that is subsequently repeated.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import re
m = re.search(r"(\d).*?\1", string)
print(m.group(1))
</code></pre>
<ol start="2">
<li>The below works for the 2nd but not the 1st example. For the 1st it matches <strong>3</strong> instead - specifically the 2nd and 3rd occurrences of that digit. I do not know why it behaves that way.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import re
m = re.search(r"(\d)(?!(\d).*?\2).*?\1", string)
print(m.group(1))
</code></pre>
|
<python><regex>
|
2024-07-30 06:39:58
| 3
| 361
|
plsssineedheeeelp
|
78,810,016
| 142,605
|
Importing third-party classes as if they were local
|
<p>I'm reviewing some code and came across a case where a third-party class</p>
<pre class="lang-py prettyprint-override"><code># tables.py
from sqlalchemy import Enum
</code></pre>
<p>is imported as if it were local:</p>
<pre class="lang-py prettyprint-override"><code># db_update.py
from .tables import Enum
</code></pre>
<p>Is there any downside to this? It seems like a way to allow replacing the <code>Enum</code> definition in <code>tables.py</code> in the future, e.g. if sqlalchemy is one day swapped for another library.</p>
<p>I've seen this approach applied to local packages to make imports shorter/clearer, but never for third-party libraries:</p>
<pre><code># __init__.py
from .x.y.z import MyClass
</code></pre>
|
<python>
|
2024-07-30 06:04:48
| 1
| 4,654
|
dzieciou
|
78,809,881
| 3,215,004
|
Why does `os.execl` interfere with `stdin` on Windows?
|
<p>My minimal example is null.py:</p>
<pre><code>import os, sys
os.execl(sys.executable, sys.executable)
</code></pre>
<p>I would have thought that <code>python null.py</code> would be pretty much the same as running <code>python</code>, which is the case on Ubuntu 22.04. However, on Windows 11, it seems to mess up stdin dramatically; for example, if I press a key, it may or may not appear on the screen, and pressing Enter mostly does nothing. This does not happen if I replace <code>os.execl(...)</code> with <code>os.system(sys.executable)</code>. Is there a better way to <code>exec</code> under Windows?</p>
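<p>My workaround for now is to spawn a child instead of replacing the process (a sketch; this is essentially what the <code>os.system</code> variant does), but I would prefer a true exec:</p>

```python
import subprocess
import sys

# run a fresh interpreter as a child process; the console's stdin
# stays attached to the parent, so it behaves normally afterwards
ret = subprocess.call([sys.executable, "-c", "print('child ok')"])
print(ret)  # → 0
```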
|
<python><windows><stdin>
|
2024-07-30 05:07:58
| 0
| 855
|
gmatht
|
78,809,285
| 4,367,371
|
Pandas read csv that has multiple tables
|
<p>I have a URL that downloads a csv file, to open it I am using the following code:</p>
<pre><code> df = pd.read_csv(url)
</code></pre>
<p>Most URLs I am using contain only a single table and open fine, but some have the following format, which causes an error:</p>
<p><a href="https://i.sstatic.net/v8UpnyTo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8UpnyTo.png" alt="enter image description here" /></a></p>
<p>The csv file is divided into two tables by a set of two empty rows.</p>
<p>The code currently returns the following error:</p>
<pre><code>ParserError: Error tokenizing data. C error: Expected 4 fields in line 9, saw 5
</code></pre>
<p>I am trying to read in both tables and then combine them into one table such as the following:</p>
<p><a href="https://i.sstatic.net/8W4JtgTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8W4JtgTK.png" alt="enter image description here" /></a></p>
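<p>My current idea (an untested sketch, assuming the separator rows are truly empty rather than rows of commas) is to split the raw text myself before handing each piece to pandas:</p>

```python
import io

import pandas as pd

def read_tables(text):
    # split the raw CSV text on runs of blank lines, then parse
    # each non-empty chunk as its own table
    chunks = [c.strip() for c in text.split("\n\n") if c.strip()]
    return [pd.read_csv(io.StringIO(c)) for c in chunks]

sample = "a,b\n1,2\n\n\nc,d,e\n3,4,5\n"
tables = read_tables(sample)
print(len(tables))  # → 2
```

<p>With the real URL I would call <code>read_tables(requests.get(url).text)</code> and then combine the resulting frames.</p>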
|
<python><pandas><dataframe><csv><parsing>
|
2024-07-29 23:02:48
| 1
| 3,671
|
Mustard Tiger
|
78,809,025
| 2,521,423
|
Adding variable indentation in front of each line of python logging output
|
<p>I am building logging into a python app, and I would like it to be human-readable. At the moment, the debug logs document every function that is called, with argument and return values. This means that practically, a debug log for a nested function call might look like this:</p>
<pre><code>2024-07-29 16:52:26,641: DEBUG: MainController.initialize_components called with args <controllers.main_controller.MainController(0x1699fdcdda0) at 0x000001699F793300>
2024-07-29 16:52:26,643: DEBUG: MainController.setup_connections called with args <controllers.main_controller.MainController(0x1699fdcdda0) at 0x000001699F793300>
2024-07-29 16:52:26,645: DEBUG: MainController.setup_connections returned None
2024-07-29 16:52:26,646: DEBUG: MainController.initialize_components returned None
</code></pre>
<p>I would like to make it obvious from reading the logs when things are nested, using indentation, python-style. So the ideal output would be this:</p>
<pre><code>2024-07-29 16:52:26,641: DEBUG: MainController.initialize_components called with args <controllers.main_controller.MainController(0x1699fdcdda0) at 0x000001699F793300>
    2024-07-29 16:52:26,643: DEBUG: MainController.setup_connections called with args <controllers.main_controller.MainController(0x1699fdcdda0) at 0x000001699F793300>
    2024-07-29 16:52:26,645: DEBUG: MainController.setup_connections returned None
2024-07-29 16:52:26,646: DEBUG: MainController.initialize_components returned None
</code></pre>
<p>I currently produce this documentation by wrapping class methods with the following decorator:</p>
<pre><code>import functools
import logging
def log(_func=None, *, logger):
    def decorator_log(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if logger.handlers:
                current_formatter = logger.handlers[0].formatter
                current_formatter.set_tabs(current_formatter.get_tabs() + 1)
            self = args[0]
            name = f'{self.__class__.__name__}.{func.__name__}'
            if logger.root.level < logging.DEBUG:
                logger.info(f"Entering {name}")
            else:
                args_repr = [repr(a) for a in args]
                kwargs_repr = [f"{k}={v!r}" for k, v in kwargs.items()]
                signature = ", ".join(args_repr + kwargs_repr)
                logger.debug(f"{name} called with args {signature}")
            try:
                result = func(*args, **kwargs)
            except Exception as e:
                logger.exception(f"Exception raised in {name}: {str(e)}")
                raise e
            if logger.root.level < logging.DEBUG:
                logger.info(f"Leaving {name}")
            else:
                logger.debug(f"{name} returned {result}")
            if logger.handlers:
                current_formatter = logger.handlers[0].formatter
                current_formatter.set_tabs(current_formatter.get_tabs() - 1)
            return result
        return wrapper
    if _func is None:
        return decorator_log
    else:
        return decorator_log(_func)
</code></pre>
<p>I could add an attribute <code>tabs</code> to the logger with <code>setattr</code>, increment it at the start and decrement it at the end of the decorator, but this only applies the tabs to the <code>message</code> portion of the output, like so:</p>
<pre><code>2024-07-29 16:52:26,641: DEBUG: MainController.initialize_components called with args <controllers.main_controller.MainController(0x1699fdcdda0) at 0x000001699F793300>
2024-07-29 16:52:26,643: DEBUG:     MainController.setup_connections called with args <controllers.main_controller.MainController(0x1699fdcdda0) at 0x000001699F793300>
2024-07-29 16:52:26,645: DEBUG:     MainController.setup_connections returned None
2024-07-29 16:52:26,646: DEBUG: MainController.initialize_components returned None
</code></pre>
<p>Which is better than nothing, but not quite what I want. How can I update this (ideally without using a global variable) to have variable indentation at the start of each line of logger output?</p>
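<p>One possible approach (a sketch I'd try, not something from the question): subclass <code>logging.Formatter</code> and override <code>format()</code> so the indentation is applied to the entire rendered line, timestamp and all. The <code>get_tabs</code>/<code>set_tabs</code> names match the decorator above; the rest is assumed demo wiring.</p>

```python
import io
import logging

class IndentFormatter(logging.Formatter):
    """Indent the ENTIRE formatted line (timestamp and all), not just the message."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._tabs = 0

    def get_tabs(self):
        return self._tabs

    def set_tabs(self, n):
        self._tabs = max(0, n)  # never go negative on unbalanced calls

    def format(self, record):
        rendered = super().format(record)
        prefix = "    " * self._tabs
        # prefix every physical line, so multi-line messages stay aligned too
        return "\n".join(prefix + line for line in rendered.splitlines())

# demo wiring (a StringIO stream instead of a file, so the result is easy to inspect)
logger = logging.getLogger("indent-demo")
logger.setLevel(logging.DEBUG)
logger.propagate = False
stream = io.StringIO()
handler = logging.StreamHandler(stream)
formatter = IndentFormatter("%(levelname)s: %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)

logger.debug("entering")          # depth 0
formatter.set_tabs(formatter.get_tabs() + 1)
logger.debug("nested call")       # depth 1, whole line shifted
formatter.set_tabs(formatter.get_tabs() - 1)
```

<p>Because the depth lives on the formatter instance, no global variable is needed; the decorator keeps calling <code>set_tabs</code> exactly as it already does.</p>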
|
<python><python-logging>
|
2024-07-29 20:59:47
| 1
| 1,488
|
KBriggs
|
78,808,926
| 1,084,875
|
Define constraints for different JSON schema models in a list
|
<p>I have some JSON with a structure similar to what is shown below. The <code>threshold</code> list represents objects where the type can be <code>"type": "upper_limit"</code> or <code>"type": "range"</code>. Notice the <code>"target"</code> value should be an integer or float depending on the type of the object.</p>
<pre class="lang-json prettyprint-override"><code>{
"name": "blah",
"project": "blah blah",
"threshold": [
{
"id": "234asdflkj",
"group": "walkers",
"type": "upper_limit",
"target": 20,
"var": "distance"
},
{
"id": "asdf34asf2654",
"group": "runners",
"type": "range",
"target": 1.7,
"var": "speed"
}
]
}
</code></pre>
<p>Pydantic models to generate a JSON schema for the above data are given below:</p>
<pre class="lang-py prettyprint-override"><code>class ThresholdType(str, Enum):
upper_limit = "upper_limit"
range = "range"
class ThresholdUpperLimit(BaseModel):
id: str
group: str
type: ThresholdType = "upper_limit"
target: int = Field(gt=2, le=20)
var: str
class ThresholdRange(BaseModel):
id: str
group: str
type: ThresholdType = "range"
target: float = Field(gt=0, lt=10)
var: str
class Campaign(BaseModel):
name: str
project: str
threshold: list[ThresholdUpperLimit | ThresholdRange]
</code></pre>
<p>The models validate the JSON, but the constraints for the <code>target</code> value are being ignored for the type. For example, if a threshold object contains <code>"type": "range", "target": 12,</code> then no errors are thrown because the value is parsed as an integer, so the constraints defined by <code>ThresholdUpperLimit</code> are used; but the constraints defined by <code>ThresholdRange</code> should apply because the type is <code>"range"</code>. Any suggestions on how to handle this properly?</p>
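<p>A hedged sketch of one way to fix this, assuming Pydantic v2: mark <code>type</code> as a <code>Literal</code> on each model and declare the union with a discriminator, so the member is chosen by the <code>type</code> value rather than by whichever model happens to validate first:</p>

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field

class ThresholdUpperLimit(BaseModel):
    id: str
    group: str
    type: Literal["upper_limit"] = "upper_limit"
    target: int = Field(gt=2, le=20)
    var: str

class ThresholdRange(BaseModel):
    id: str
    group: str
    type: Literal["range"] = "range"
    target: float = Field(gt=0, lt=10)
    var: str

# The discriminator tells pydantic to pick the union member by the "type"
# value instead of trying each member until one validates.
Threshold = Annotated[Union[ThresholdUpperLimit, ThresholdRange],
                      Field(discriminator="type")]

class Campaign(BaseModel):
    name: str
    project: str
    threshold: list[Threshold]
```

<p>With this, a <code>"range"</code> item with <code>target: 12</code> is validated against <code>ThresholdRange</code> and rejected by its <code>lt=10</code> constraint.</p>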
|
<python><json><pydantic>
|
2024-07-29 20:24:55
| 2
| 9,246
|
wigging
|
78,808,903
| 18,476,381
|
pandas read_sql resulting in Value error: "year -10100 is out of range" caused by corrupted dates in database
|
<p>I am running a script to migrate data from oracle to postgres. When running the below</p>
<pre><code>df = pd.read_sql(
query,
oracle_conn,
)
</code></pre>
<p>It is resulting in an error of</p>
<blockquote>
<p>ValueError: year -10100 is out of range</p>
</blockquote>
<p>For some rows across thousands, there are dates that are corrupted and have impossibly large years. For example, the first date below is corrupted while the second is valid:</p>
<pre><code>10101-11-29 22:58:59000.
2024-03-19 18:25:49.000
</code></pre>
<p>Is there any way to alter the read_sql so it still reads in corrupted dates no matter what the year range is?</p>
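<p>Not from the question, but one workaround sketch: have the database return the date column as text (e.g. with Oracle's <code>TO_CHAR</code> in the query) and convert it on the pandas side with <code>errors="coerce"</code>, which turns out-of-range values into <code>NaT</code> instead of raising:</p>

```python
import pandas as pd

# Stand-in for the result of read_sql once the query returns the date
# column as a string, e.g. TO_CHAR(d, 'YYYY-MM-DD HH24:MI:SS') in Oracle.
df = pd.DataFrame({"my_date": ["10101-11-29 22:58:59", "2024-03-19 18:25:49"]})

# Out-of-range years become NaT instead of raising "year ... is out of range"
df["my_date"] = pd.to_datetime(df["my_date"], format="%Y-%m-%d %H:%M:%S",
                               errors="coerce")
```

<p>If the corrupted values should be preserved rather than dropped, keep the string column alongside the coerced one and write the strings to Postgres as-is.</p>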
|
<python><sql><pandas><dataframe>
|
2024-07-29 20:14:23
| 1
| 609
|
Masterstack8080
|
78,808,891
| 808,713
|
PasswordInput does not react with carriage return
|
<p>I'm trying to avoid the need for a submit button. The below code is currently running from a remote jupyter lab. The message will be printed out only after the cursor focus is removed from the password widget. I would like carriage return to trigger the message printing. Any clues?</p>
<pre><code>import panel as pn
from panel.widgets import PasswordInput, TextInput

pn.extension()

def on_enter(event=None):
    message_pane.object = "<h2>Password Entered</h2>"

username_input = TextInput(name='Username')
password_input = PasswordInput(name='Password', width=200)
message_pane = pn.pane.HTML('')

layout = pn.Column(username_input, password_input, message_pane)
layout.servable()
</code></pre>
<p>In ipywidgets, this can be accomplished with observe.</p>
|
<python><panel>
|
2024-07-29 20:10:41
| 0
| 2,573
|
csta
|
78,808,829
| 8,436,290
|
failed_generation with Groq LLM and llama index in Python
|
<p>My code is the following:</p>
<pre><code>from llama_parse import LlamaParse
from llama_index.llms.groq import Groq
from llama_parse.base import ResultType, Language
from llama_index.core.node_parser import MarkdownElementNodeParser

parser = LlamaParse(
    api_key="xxx",
    result_type=ResultType.MD,
    language=Language.ENGLISH,
    parsing_instructions="""\
The document is an exam document. \
It contains many tables and images. \
Do not parse the definition, requirements, or explanation sections.\
""",
)

llm = Groq(
    model="llama3-8b-8192",
    api_key="xxx",
)

documents = parser.load_data("path/to/file.pdf")

node_parser = MarkdownElementNodeParser(
    llm=llm,
    summary_query_str="""
The document is a property valuation document. \
It shows the key metrics for a property sale. \
The paper includes detailed figures like the price, the size of the property, \
the year of construction, the number of bedrooms and bathrooms. \
Answer questions using the information in this document and be precise. \
Skip lines for each bullet point. \
Answer with only one number if asked for it.""",
)

nodes = node_parser.get_nodes_from_documents(documents)
</code></pre>
<p>But it raises an error:</p>
<blockquote>
<p>BadRequestError: Error code: 400 - {'error': {'message': "Failed to call a function. Please adjust your prompt. See 'failed_generation' for more details.", 'type': 'invalid_request_error', 'code': 'tool_use_failed', 'failed_generation': '\n{\n "tool_calls": [\n {\n "id": "pending",\n "type": "function",\n "function": {\n "name": "TableOutput"\n },\n "parameters": {\n "table_id": "29 Jones St",\n "table_title": "Property Information",\n "summary": "Summary of property information"\n }\n }\n ]\n}\n'}}</p>
</blockquote>
<p>Any idea why I get that please? I have tried many things and none seems to work.</p>
|
<python><large-language-model><llama-index><groq>
|
2024-07-29 19:45:48
| 0
| 467
|
Nicolas REY
|
78,808,780
| 445,810
|
Subclass ctypes.c_void_p to represent a custom handler with a custom C function for constructor and destructor / free
|
<p>I have a somewhat exotic question.</p>
<p>Imagine there is some kind of C interface with ops like (in practice I'm thinking of <a href="https://webrtc.googlesource.com/src/+/refs/heads/main/common_audio/vad/include/webrtc_vad.h" rel="nofollow noreferrer">webrtc vad API</a>):</p>
<pre class="lang-cpp prettyprint-override"><code>Handler* create();
int process(Handler*);
void free(Handler*);
</code></pre>
<p>I know how to represent these functions using <code>ctypes</code> and <code>ctypes.c_void_p</code> to represent the pointer to a handler.</p>
<p>Now the exotic question. Can one represent this pattern as a class derived from <code>ctypes.c_void_p</code>?</p>
<ul>
<li>Is it in general correct to derive from <code>ctypes.c_void_p</code>?</li>
<li>Are there any constraints for a constructor of such a derived class? Will the constructor be used by ctypes for constructing marshalled returned handles?</li>
<li>Is it correct to modify the <code>self.value</code> in derived class <code>__init__</code>?</li>
<li>How does ctypes create object of <code>HandlerPointer</code>, will it call the <code>__init__</code> and will it prevent it from correctly functioning?</li>
<li>Does ctypes support/marshal correctly <code>c_void_p</code>-derived argtypes/restypes?</li>
<li>Is there a more natural way to represent such capsule-like objects (pointer to a possibly opaque structure along with a destructor function) / APIs?</li>
</ul>
<p>E.g. to do something like:</p>
<pre class="lang-py prettyprint-override"><code>import os
import ctypes
class HandlerPointer(ctypes.c_void_p):
@staticmethod
def ffi(lib_path = os.path.abspath('mylib.so')):
lib = ctypes.CDLL(lib_path)
lib.create.argtypes = []
lib.create.restype = HandlerPointer
lib.process.argtypes = [HandlerPointer]
lib.process.restype = ctypes.c_int
lib.free.argtypes = [HandlerPointer]
lib.free.restype = None
return lib
def __init__(self):
super().__init__()
self.value = self.lib.create().value
def process(self):
return self.lib.process(self)
def __del__(self):
self.lib.free(self)
# can't do this var init inside the class as can't refer yet to HandlerPointer inside `.ffi()` if class not initialized yet
HandlerPointer.lib = HandlerPointer.ffi() # otherwise
</code></pre>
<hr />
<p><strong>UPD</strong>: it appears that having <code>__init__</code> method works, except for <code>__del__</code> method (which calls a C-land custom free method) whose existence will lead to a crash with <code>double free or corruption (!prev)</code>. Found the problem, fix described in my comment below, will post a complete solution soon.</p>
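<p>For what it's worth, here is a sketch of the double-free guard idea, using <code>libc</code>'s <code>malloc</code>/<code>free</code> as a stand-in for <code>create</code>/<code>free</code> (everything here is illustrative, not the webrtc API): subclassing <code>ctypes.c_void_p</code> works, but the destructor should only release the pointer when this instance actually owns it, since ctypes may construct additional bare instances of the subclass while marshalling values.</p>

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.restype = None
libc.free.argtypes = [ctypes.c_void_p]

class Handle(ctypes.c_void_p):
    """c_void_p subclass owning a heap block; frees it exactly once."""
    def __init__(self, size=16):
        super().__init__(libc.malloc(size))
        self._owned = True  # only the instance created here owns the pointer

    def close(self):
        # guard against double free: instances ctypes creates during
        # marshalling never have _owned set, and close() is idempotent
        if getattr(self, "_owned", False) and self.value:
            libc.free(self)
            self._owned = False
            self.value = None

    def __del__(self):
        self.close()

h = Handle(32)
```

<p>The same ownership flag protects against the crash mentioned in the UPD: copies of the pointer that ctypes produces never enter the <code>free</code> branch.</p>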
|
<python><ctypes>
|
2024-07-29 19:30:46
| 1
| 1,164
|
Vadim Kantorov
|
78,808,729
| 595,305
|
Is there any way to disguise a mock object as an acceptable PyQt object in Qt method parameters?
|
<p>This is an MRE. You need to have <code>pytest</code> installed... and in fact <a href="https://pypi.org/project/pytest-qt/" rel="nofollow noreferrer">pytest-qt</a> won't do any harm.</p>
<pre><code>import sys, pytest
from PyQt5 import QtWidgets
from unittest import mock

class MyLineEdit(QtWidgets.QLineEdit):
    def __init__(self, parent):
        super().__init__(parent)
        self.setPlaceholderText('bubbles')

class Window(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Pytest MRE")
        self.setMinimumSize(400, 200)
        layout = QtWidgets.QVBoxLayout()
        qle = MyLineEdit(self)
        layout.addWidget(qle)
        self.setLayout(layout)

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = Window()
    window.show()
    sys.exit(app.exec_())

def test_MyLineEdit_sets_a_title():
    mock_window = mock.Mock(spec=QtWidgets.QWidget)
    print(f'mock_window.__class__ {mock_window.__class__}')  # says "QWidget"
    print(f'type(mock_window) {type(mock_window)}')  # says "mock.Mock"
    with mock.patch('PyQt5.QtWidgets.QLineEdit.setPlaceholderText') as mock_set_placeholder:
        # with mock.patch('PyQt5.QtWidgets.QLineEdit.__init__') as mock_super_init:
        my_line_edit = MyLineEdit(mock_window)
        mock_set_placeholder.assert_called_once()
    assert False
</code></pre>
<p>This illustrates a recurrent problem and I'm not sure whether there's a way round it: I want to pass a mock as a parameter of a method of a PyQt class (in this case a constructor: it is the act of passing the mock to the superclass <code>__init__</code> which causes the problem).</p>
<p>Typically, when I try to run this test I get:</p>
<pre><code>E TypeError: arguments did not match any overloaded call:
E QLineEdit(parent: Optional[QWidget] = None): argument 1 has unexpected type 'Mock'
E QLineEdit(contents: Optional[str], parent: Optional[QWidget] = None): argument 1 has unexpected type 'Mock'
test_grey_out_qle_exp.py:7: TypeError
</code></pre>
<p>I run up against this problem with many scenarios: "has unexpected type 'Mock'"...</p>
<p>Over and over again, during my testing of PyQt apps, I find I have to pass a <strong>real</strong> <code>QtWidget</code> (or other real PyQt class object). This in turn can cause all manner of problems, at worst nasty intermittent "Windows access violation" errors ... depending on what type of combination of real PyQt objects and mocks I'm trying to get results from.</p>
<p>I want somehow to pass a mock into the PyQt code, under a disguise. I've tried various combinations of <code>MagicMock</code>, <code>spec</code>, <code>autospec</code>, etc. But the PyQt code is never fooled. It appears to be checking using the built-in function <code>type(x)</code>. However, I don't know that for sure as I can't examine any source code for this, as it appears that the PyQt modules are automatically generated from the C++ code.</p>
<p>One thing that occurred to me: might it be possible to patch the built-in function <code>type()</code>... ? Probably an unlikely solution, not least because the PyQt module is I suspect not using Python code when checking the type. However I did search on this, to no avail.</p>
<p>PS as illustrated in the commented-out <code>mock.patch...</code> line, one possible workaround is to start patching out any PyQt superclass methods which are causing problems ...</p>
<p><strong>later: musicamante's suggestion</strong></p>
<p>Nice idea. I tried this:</p>
<pre><code>class HybridMock(QtWidgets.QWidget, mock.Mock):
    def __init__(self, *args, **kwargs):
        # super().__init__(*args, **kwargs)
        QtWidgets.QWidget.__init__(self, *args, **kwargs)

def test_MyLineEdit_sets_a_title():
    # mock_window = mock.Mock(spec=QtWidgets.QWidget)
    mock_window = HybridMock()
</code></pre>
<p>On running <code>pytest</code> this just crashes and says "Python has stopped working ..." (with no explanation). NB platform is W10. I'm not that surprised.</p>
|
<python><pyqt><mocking><pytest><pytest-qt>
|
2024-07-29 19:13:06
| 1
| 16,076
|
mike rodent
|
78,808,725
| 596,922
|
python regex to match multiple words in a line without going to the next line
|
<p>I'm writing a parser to parse the below output:</p>
<pre><code> admin@str-s6000-on-5:~$ show interface status Ethernet4
Interface Lanes Speed MTU Alias Vlan Oper Admin Type Asym PFC
--------------- ----------- ------- ----- ------------ ------ ------ ------- -------------- ----------
Ethernet4 29,30,31,32 40G 9100 fortyGigE0/4 trunk up up QSFP+ or later off
PortChannel0001 N/A 40G 9100 N/A routed up up N/A N/A
PortChannel0002 N/A 40G 9100 N/A routed up up N/A N/A
PortChannel0003 N/A 40G 9100 N/A routed up up N/A N/A
PortChannel0004 N/A 40G 9100 N/A routed up up N/A N/A
</code></pre>
<p>I have made an attempt to write a regex to match all the fields as below</p>
<pre><code>(\S+)\s+([\d,]+)\s+(\S+)\s+(\d+)\s+(\S+)\s+(\S+)\s+([up|down])+\s+([up|down]+)\s+([\w\s+?]+)\s+(\S+)
</code></pre>
<p>I'm able to get up to the Admin column correctly. The Type column contains multiple words, so I used the pattern <code>([\w\s+?]+)</code>, hoping it would match multiple words separated by single spaces (with <code>+</code> optional), followed by <code>(\S+)</code> to match the last column. The problem I face is that <code>([\w\s+?]+)</code> spans multiple lines, giving me output like this:</p>
<p><code>Ethernet4 29,30,31,32 40G 9100 fortyGigE0/4 trunk up up QSFP+ or later off PortChannel0001 N/A </code></p>
<p>I see that <code>\s</code> matches the newline as well. How can I restrict it from matching the newline? Could someone please help me understand this?</p>
<p>I looked at this space <a href="https://stackoverflow.com/questions/25131949/regex-for-one-or-more-words-separated-by-spaces">Regex for one or more words separated by spaces</a> but that is not helping me either. can someone help me to understand this better?</p>
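<p>A sketch of the usual fix (field layout assumed from the sample output): replace <code>\s</code> with a class that excludes the newline — <code>[^\S\n]</code> ("whitespace that is not a newline"), or simply <code>[ \t]</code> — for the multi-word Type column. Note also that <code>[up|down]</code> is a character class, not an alternation; <code>(up|down)</code> is what matches the whole words:</p>

```python
import re

# [^\S\n] = any whitespace EXCEPT newline, so the Type column cannot
# spill onto the next line; ([\d,]+|N/A) also admits the N/A lanes.
pattern = re.compile(
    r'(\S+)\s+([\d,]+|N/A)\s+(\S+)\s+(\d+)\s+(\S+)\s+(\S+)'
    r'\s+(up|down)\s+(up|down)'
    r'\s+(\S+(?:[^\S\n]\S+)*)[^\S\n]+(\S+)'
)

line = ("      Ethernet4        29,30,31,32     40G   9100  fortyGigE0/4"
        "   trunk    up       up  QSFP+ or later       off")
m = pattern.search(line)
```

<p>The same pattern also handles the <code>PortChannel</code> rows, where the multi-word group simply matches the single token <code>N/A</code>.</p>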
|
<python><regex>
|
2024-07-29 19:12:35
| 5
| 1,865
|
Vijay
|
78,808,609
| 24,191,255
|
Excluding a label on the y-axis in Matplotlib: How?
|
<p>I would not like to have the first label on the y-axis on my figure which looks like this <a href="https://i.sstatic.net/AJthAFD8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJthAFD8.png" alt="Figure" /></a> now, since I don't find it very nice to have it like this.</p>
<p>In my attempt, I tried excluding the corresponding (first) tick; however, removing the tick itself is not what I want. I just want the label (i.e., 0.3) to disappear.</p>
<p>My try for achieving this goal was the following addition to the script:</p>
<pre><code>y_ticks = ax.get_yticks()
ax.set_yticks(y_ticks[1:])
</code></pre>
<p>But I could not solve the problem.</p>
<p>The complete part regarding plotting looks like this</p>
<pre><code>plt.style.use('seaborn-v0_8-bright')
plt.rcParams.update({'font.size': 11, 'font.family': 'serif'})
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(specific_alpha_degrees, best_X_values, linestyle='-', marker='', label=r'$P_1$', color='blue')
ax.plot(specific_alpha_degrees, best_X_values2, linestyle='--', marker='', label=r'$P_2$', color='green')
ax.plot(specific_alpha_degrees, best_X_values3, linestyle='-.', marker='', label=r'$P_3$', color='red')
ax.plot(specific_alpha_degrees, best_X_values4, linestyle=':', marker='', label=r'$P_4$', color='purple')
ax.plot(specific_alpha_degrees, best_X_values5, linestyle='-', marker='', label=r'$P_5$', color='orange')
y_ticks = ax.get_yticks()
ax.set_yticks(y_ticks[1:])
ax.set_xlabel(r'Incline [$deg$]')
ax.set_ylabel(r'$x_{\text{opt}}$')
ax.set_xlim(min(specific_alpha_degrees)-1, max(specific_alpha_degrees)+1)
ax.grid(True, which='major', linestyle='--', linewidth=0.5)
ax.minorticks_on()
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize='medium', frameon=False, framealpha=0.9, borderpad=1)
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_color('black')
ax.spines['left'].set_color('black')
fig.tight_layout()
plt.show()
</code></pre>
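<p>One approach that I believe works (a minimal sketch using the Agg backend; do this after the limits are final, since fixing the labels pins them): draw the canvas so the labels are generated, then blank only the first label while keeping its tick mark:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0.3, 0.6, 0.9])

fig.canvas.draw()  # finalize tick positions and label texts
labels = [tick.get_text() for tick in ax.get_yticklabels()]
labels[0] = ""                     # blank only the first label
ax.set_yticks(ax.get_yticks())     # pin the ticks so set_yticklabels is safe
ax.set_yticklabels(labels)
```

<p>In the script above, this would replace the <code>y_ticks = ax.get_yticks()</code> / <code>ax.set_yticks(y_ticks[1:])</code> pair, placed just before <code>plt.show()</code>.</p>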
|
<python><matplotlib><axis>
|
2024-07-29 18:39:11
| 1
| 606
|
Márton Horváth
|
78,808,552
| 15,307,950
|
How to run python subprocess with PIPE in parallel?
|
<p>I'm converting a bunch of SVG images to PNG using inkscape.
Single threaded:</p>
<pre><code>
import subprocess
import time
import os
inkscape_path = r'C:\Program Files\Inkscape\bin\inkscape.com'
steps=30
filenames = []
processes = []
# t_start = time.process_time()
t_start = time.time()
for i in range(steps):
    template = bytes(f"""<?xml version="1.0" encoding="UTF-8"?>
<svg width="100" height="100" viewBox="-10 -10 30 30" xmlns="http://www.w3.org/2000/svg">
<line x1="0" y1="0" x2="{i*9/(steps-1)+1}" y2="0" stroke="green"/>
</svg>
""",'UTF-8')
    filename = f'img{i:02}.png'
    filenames.append(filename)
    process = subprocess.run([inkscape_path, '-p', '-o', filename], input=template)
# elapsed_time = time.process_time() - t_start
elapsed_time = time.time() - t_start
print(elapsed_time)
</code></pre>
<p>This works and takes 85 seconds in total for 30 images.</p>
<p>If I try it with communicate() it waits for completion for each process. So I set timeout to 0 and ignore the raised exception.</p>
<pre><code>
import subprocess
import time
import os
inkscape_path = r'C:\Program Files\Inkscape\bin\inkscape.com'
steps=30
filenames = []
processes = []
# t_start = time.process_time()
t_start = time.time()
for i in range(steps):
    template = bytes(f"""<?xml version="1.0" encoding="UTF-8"?>
<svg width="100" height="100" viewBox="-10 -10 30 30" xmlns="http://www.w3.org/2000/svg">
<line x1="0" y1="0" x2="{i*9/(steps-1)+1}" y2="0" stroke="green"/>
</svg>
""",'UTF-8')
    filename = f'img{i:02}.png'
    filenames.append(filename)
    process = subprocess.Popen([inkscape_path, '-p', '-o', filename], stdin=subprocess.PIPE, env=dict(os.environ, SELF_CALL="xxx"))
    try:
        process.communicate(template, timeout=0)
    except:
        pass
    processes.append(process)

for p in processes:
    p.wait()
# elapsed_time = time.process_time() - t_start
elapsed_time = time.time() - t_start
print(elapsed_time)
</code></pre>
<p>This works and takes 30 seconds, but I don't think this is a clean way to do it.</p>
<p>How do I run multiple of these subprocesses in parallel in a clean way. Preferably in 1 line.</p>
<p>(SELF_CALL is a workaround as inkscape sometimes throws exceptions when multiple instances are running)</p>
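<p>A cleaner sketch (the command here is a stand-in so the example runs anywhere; for the real case it would be <code>[inkscape_path, '-p', '-o', filename]</code> with the SVG bytes as payload, plus the <code>SELF_CALL</code> env var): let <code>subprocess.run</code> handle the PIPE and waiting, and push those blocking calls into a thread pool, so no timeout tricks are needed:</p>

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_one(job):
    """Start one process, feed it stdin, and wait — all inside a worker thread."""
    cmd, payload = job
    return subprocess.run(cmd, input=payload, capture_output=True, check=True).stdout

# Stand-in command that uppercases stdin, so the sketch is runnable anywhere.
echo_upper = [sys.executable, "-c",
              "import sys; sys.stdout.write(sys.stdin.read().upper())"]
jobs = [(echo_upper, f"img{i:02}".encode()) for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_one, jobs))
```

<p>Each worker thread blocks in <code>run()</code> while its process does the work, so <code>max_workers</code> directly caps the number of concurrent inkscape instances.</p>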
|
<python><subprocess>
|
2024-07-29 18:23:00
| 0
| 726
|
elechris
|
78,808,445
| 13,132,728
|
How to style groups of rows differently based on a column's value with Pandas Styler
|
<h1>What I am trying to accomplish:</h1>
<p>I have the following dataframe, <code>df</code>:</p>
<pre><code>data = {'person': {0: 'a',
1: 'a',
2: 'a',
3: 'a',
4: 'a',
5: 'a',
6: 'b',
7: 'b',
8: 'b',
9: 'b',
10: 'b',
11: 'b',
12: 'c',
13: 'c',
14: 'c',
15: 'c',
16: 'c',
17: 'c'},
'x': {0: 1,
1: 1,
2: 1,
3: 1,
4: 1,
5: 1,
6: 1,
7: 1,
8: 1,
9: 1,
10: 1,
11: 1,
12: 1,
13: 1,
14: 1,
15: 1,
16: 1,
17: 1},
'y': {0: 2,
1: 2,
2: 2,
3: 2,
4: 2,
5: 2,
6: 2,
7: 2,
8: 2,
9: 2,
10: 2,
11: 2,
12: 2,
13: 2,
14: 2,
15: 2,
16: 2,
17: 2},
'z': {0: 'foo',
1: 'foo',
2: 'foo',
3: 'bar',
4: 'bar',
5: 'bar',
6: 'foo',
7: 'foo',
8: 'foo',
9: 'bar',
10: 'bar',
11: 'bar',
12: 'foo',
13: 'foo',
14: 'foo',
15: 'bar',
16: 'bar',
17: 'bar'}}
df = pd.DataFrame.from_dict(data, orient='columns')
</code></pre>
<p>And I want to style groups of rows differently (in different sets of alternating colors) based on the value of <code>z</code> FOR EACH VALUE OF <code>person</code>.</p>
<hr />
<h1>My desired output:</h1>
<p><a href="https://i.sstatic.net/ixUVDAj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ixUVDAj8.png" alt="enter image description here" /></a></p>
<hr />
<h1>What I have tried:</h1>
<p>Initially, I figured I could use a nested loop to break out each <code>z</code> for each <code>person</code>. I tried testing this out initially for just one <code>person</code>, like so:</p>
<pre><code>
COLORS = {
    'foo': ['red', 'green'],
    'bar': ['blue', 'yellow'],
}

test = df.loc[df.person=='a'].copy()
sub_person = pd.DataFrame()
for val in test.z.unique():
    i_test = test.loc[test.z==val].copy()
    c1 = COLORS[val][0]
    c2 = COLORS[val][-1]
    css_alt_rows = f'background-color: {c1}; color: {c2};'
    i_test = i_test.style.apply(lambda col: np.where(col.index % 2, css_alt_rows, None))
    sub_person = pd.concat([sub_person, i_test])
</code></pre>
<p>I thought this was a clever solution to handle the different stylings individually, but I was met with the error:</p>
<pre><code>TypeError: cannot concatenate object of type '<class 'pandas.io.formats.style.Styler'>'; only Series and DataFrame objs are valid
</code></pre>
<p>So, turns out this code cannot work as you cannot concat Styler objects.</p>
<p>Next, I tried a similar strategy by nesting the lambda function with another <code>np.where()</code> conditional:</p>
<pre><code>COLORS = {
'foo':['red','green'],
'bar':['blue','yellow']
}
test = df.loc[df.person=='a'].copy()
for val in test.z.unique():
c1 = COLORS[val][0]
c2 = COLORS[val][-1]
css_alt_rows = f'background-color: {c1}; color: {c2};'
test = (test.style.apply(lambda col: np.where(np.where(col.index % 2, css_alt_rows,None),None)))
</code></pre>
<p>But I get the following error:</p>
<pre><code>AttributeError: 'Styler' object has no attribute 'style'
</code></pre>
<p>Which makes sense as after the first iteration of the loop, <code>test</code> is a styler object, which results in <code>test.style</code> raising an error in the next iteration.</p>
<hr />
<p>So, how would I go about applying these stylings for each <code>z</code> for each <code>person</code>?</p>
<p>Also, how could I add a bottom border in the last row of each <code>person</code> without being able to style groupings individually and concatenating them?</p>
<p>Note: Yes, the colors in <code>colors</code> won't match the image, which is fine.</p>
|
<python><pandas><dataframe><pandas-styles>
|
2024-07-29 17:49:58
| 2
| 1,645
|
bismo
|
78,808,130
| 6,439,229
|
When to use event() vs. eventFilter()
|
<p>I'm confused about when you can use <code>event()</code> vs. <code>eventFilter()</code> to modify a widget's behaviour.</p>
<p>Two examples:<br />
A Pushbutton where you can disable tooltips without changing the tooltip text:</p>
<pre><code>class PButton(QPushButton):
    def __init__(self, *args):
        super().__init__(*args)
        self.tt_enabled = False

    def event(self, event: QEvent):
        if event.type() == QEvent.Type.ToolTip and not self.tt_enabled:
            return True
        return super().event(event)
</code></pre>
<p>A Delegate that doesn't close the editor when pressing Return or Enter:</p>
<pre><code>class Delegate(QStyledItemDelegate):
    def eventFilter(self, obj, event: QEvent):
        if event.type() == QEvent.Type.KeyPress and event.key() in (0x01000004, 0x01000005):  # Return, Enter
            return False
        return super().eventFilter(obj, event)
</code></pre>
<p>Both examples work as intended but one uses <code>event()</code> and the other uses <code>eventFilter()</code>, and it doesn't work when you switch it around: use <code>event()</code> on the delegate and <code>eventFilter()</code> on the Pushbutton.<br />
What's the difference?</p>
<p>I've read the docs on <a href="https://doc.qt.io/qt-6/eventsandfilters.html" rel="nofollow noreferrer">The Event System</a>, <a href="https://doc.qt.io/qt-6/qobject.html#eventFilter" rel="nofollow noreferrer">eventFilter()</a>, and <a href="https://doc.qt.io/qt-6/qobject.html#event" rel="nofollow noreferrer">event()</a>, but did not find an answer there.</p>
|
<python><events><pyqt><eventfilter>
|
2024-07-29 16:22:06
| 0
| 1,016
|
mahkitah
|
78,808,076
| 3,322,533
|
How do you install a generated openapi client to a zipapp?
|
<p>So I generated an openapi client using <a href="https://github.com/OpenAPITools/openapi-generator" rel="nofollow noreferrer">openapi-generator</a>.</p>
<p>I now want to install it into my zipapp.</p>
<p>There must be something wrong with my overall structure, though.</p>
<p>I have an <code>app</code> folder. It contains my <code>__main__.py</code> file, along with everything else I need.</p>
<p>It's also where I <code>pip install</code> my dependencies into. (Not sure if that's the correct way to go about it, but it worked for other dependencies, so ...)</p>
<p>So after generating the openapi client, I</p>
<pre><code>OUT_ABSOLUTE="<path to generated openapi client>"
APP="${PWD}/app"
sudo chown -R "${USER}" "${OUT_ABSOLUTE}"
pip install -t "${APP}" "${OUT_ABSOLUTE}" --upgrade
</code></pre>
<p>this says it succeeds, with</p>
<pre><code>Building wheels for collected packages: openapi-client
Building wheel for openapi-client (pyproject.toml) ... done
Created wheel for openapi-client: filename=openapi_client-1.0.0-py3-none-any.whl size=1666754 sha256=f7beef08b8727cad41d0b23f4eb18b523d75e05c953b52d8079870ad0fa9e79b
Stored in directory: /tmp/pip-ephem-wheel-cache-05c68l5b/wheels/15/66/d5/db16f91fb5af2f414682e9db528a1a63d5611d8afcfafdb1df
Successfully built openapi-client
Installing collected packages: urllib3, typing-extensions, six, annotated-types, python-dateutil, pydantic-core, pydantic, openapi-client
Successfully installed annotated-types-0.7.0 openapi-client-1.0.0 pydantic-2.8.2 pydantic-core-2.20.1 python-dateutil-2.9.0.post0 six-1.16.0 typing-extensions-4.12.2 urllib3-2.0.7
</code></pre>
<p>after which I compose and run my app</p>
<pre><code>python -m zipapp app -o my_app.pyz
chmod +x my_app.pyz
python my_app.pyz
</code></pre>
<p>This is where things stop going the way I want them to.</p>
<p>After <code>pip install</code>ing the openapi client, my <code>app</code> folder looks like this:</p>
<p><a href="https://i.sstatic.net/90jJflKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/90jJflKN.png" alt="enter image description here" /></a></p>
<p>As you can see, the <code>pydantic_core._pydantic_core</code> package is present.</p>
<p>And yet, trying to run the app gets python to complain that</p>
<pre><code>Traceback (most recent call last):
File "/home/user1291/anaconda3/envs/my_app/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/user1291/anaconda3/envs/my_app/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/user1291/code/my_app/my_app.pyz/__main__.py", line 5, in <module>
File "/home/user1291/code/my_app/my_app.pyz/my_app.py", line 5, in <module>
File "/home/user1291/code/my_app/my_app.pyz/openapi_client/__init__.py", line 20, in <module>
File "/home/user1291/code/my_app/my_app.pyz/openapi_client/api/__init__.py", line 4, in <module>
File "/home/user1291/code/my_app/my_app.pyz/openapi_client/api/aa_sequences_api.py", line 15, in <module>
File "/home/user1291/code/my_app/my_app.pyz/pydantic/__init__.py", line 404, in __getattr__
File "/home/user1291/anaconda3/envs/my_app/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/user1291/code/my_app/my_app.pyz/pydantic/validate_call_decorator.py", line 8, in <module>
File "/home/user1291/code/my_app/my_app.pyz/pydantic/_internal/_validate_call.py", line 7, in <module>
File "/home/user1291/code/my_app/my_app.pyz/pydantic_core/__init__.py", line 6, in <module>
ModuleNotFoundError: No module named 'pydantic_core._pydantic_core'
</code></pre>
<p>I appreciate the irony of getting a "not found" error on line 404, but I'd rather python finds its stuff.</p>
<p>How do I get this to work?</p>
|
<python><python-3.x><pip><openapi-generator><zipapp>
|
2024-07-29 16:06:41
| 1
| 8,239
|
User1291
|
78,808,015
| 9,999,861
|
Django date input format seems not to be accepted
|
<p>I put a DateField inside a ModelForm, which seems to work fine, if I add new entries to my database. Then I added a page, where I can edit one of those entries. Of course, I am passing the date of the desired entry to the HTML, but the date format is not set as expected.</p>
<pre><code># models.py
class SomeModel(models.Model):
    someDate = models.DateField()
    description = models.CharField(max_length=100)
</code></pre>
<pre><code># forms.py
class SomeForm(forms.ModelForm):
    someDate = forms.DateField(input_formats=["%d.%m.%y"], label="Some Date")

    class Meta:
        model = SomeModel
        widgets = {
            "someDate": forms.DateInput(format=("%d.%m.%y"))
        }
</code></pre>
<pre><code># views.py
def someModel_edit(request: HttpRequest, id: int):
    # ...
    query = SomeModel.objects.get(id=id)
    form = SomeForm(instance=query)
    args = {
        'form': form
    }
    return render(request, 'SomeHtml.html', args)
</code></pre>
<pre><code>{% extends 'base.html' %}
{% block content %}
<h2>Edit Model</h2>
<form action="{% url 'someModel_edit' %}" method="post">
{% csrf_token %}
{{ form }}
<input type="submit" name="edit-someModel" value="Edit">
</form>
{% endblock %}
</code></pre>
<p>I expect the format of the date in the edit page to be as I configured in the forms.py file (e.g. 12.08.2024). Sadly it always looks like this:</p>
<p><a href="https://i.sstatic.net/pBcHZK1f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBcHZK1f.png" alt="Dateformat on the page" /></a></p>
<p>And even worse: if I edit the description of the model entry and hit save, the software tells me that I am using the wrong date format.</p>
|
<python><html><django><django-forms>
|
2024-07-29 15:46:58
| 1
| 507
|
Blackbriar
|
78,807,906
| 2,081,427
|
scrapy renew access token after some time
|
<p>I am using scrapy to query an api with restricted access.</p>
<pre><code>def start_requests(self):
    self.initialize_gcs_store()
    url = "https://api.example.com/authenticate"
    headers = {'Content-Type': 'application/json'}
    headers = {'authority': 'https://apply.workable.com'}
    payload = {
        "username": "username",
        "password": "password",
    }
    yield scrapy.Request(
        url=url,
        method='POST',
        meta={"task": ""},
        headers=headers,
        body=json.dumps(payload),
        errback=self.errback,
        callback=self.parse_access_token,
    )

def parse_access_token(self, response):
    access_token = json.loads(response.text)["jwt"]
    ...
</code></pre>
<p>The response from the authentication request is an access token that is valid for 60 minutes. I would like to renew the access token automatically after 60 minutes. How could I do that with Scrapy?</p>
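<p>One building block for this (a hypothetical sketch, independent of Scrapy itself): keep the token together with its acquisition time in a small cache object, and re-yield the authentication request whenever the cache reports the token as expired, e.g. from a downloader middleware or at the top of each callback:</p>

```python
import time

class TokenCache:
    """Caches an access token and reports when it must be refreshed."""
    def __init__(self, ttl_seconds=55 * 60):  # refresh a bit before the 60-min expiry
        self.ttl = ttl_seconds
        self.token = None
        self.acquired_at = 0.0

    def set(self, token, now=None):
        self.token = token
        self.acquired_at = time.time() if now is None else now

    def expired(self, now=None):
        now = time.time() if now is None else now
        return self.token is None or now - self.acquired_at >= self.ttl
```

<p>With this, <code>parse_access_token</code> would call <code>cache.set(access_token)</code>, and any code about to hit the API checks <code>cache.expired()</code> first and yields the authentication request again if needed; refreshing before the hard 60-minute limit avoids racing the expiry.</p>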
|
<python><scrapy>
|
2024-07-29 15:23:23
| 1
| 2,568
|
DJJ
|
78,807,798
| 3,406,193
|
Mypy 1.10 reports error when functools.wraps() is used on a generic function
|
<h4>TLDR;</h4>
<p>I have a decorator that:</p>
<ul>
<li>changes the function signature</li>
<li>the wrapped function uses some generic type arguments</li>
<li>Other than the signature I would like to use <code>funtools.wraps</code> to preserve the rest of the information.</li>
</ul>
<p>Is there any way to achieve that without <code>mypy</code> complaining?</p>
<hr />
<h2>More context</h2>
<p>A minimal working example would look like this:</p>
<pre class="lang-py prettyprint-override"><code>from functools import wraps
from typing import Callable, TypeVar

B = TypeVar('B', bound=str)

def str_as_int_wrapper(func: Callable[[int], int]) -> Callable[[B], B]:
    WRAPPER_ASSIGNMENTS = ('__module__', '__name__', '__qualname__', '__doc__',)
    WRAPPER_UPDATES = ('__dict__', '__annotations__')

    @wraps(func, assigned=WRAPPER_ASSIGNMENTS, updated=WRAPPER_UPDATES)
    def _wrapped_func(val: B) -> B:
        num = int(val)
        result = func(num)
        return val.__class__(result)

    return _wrapped_func

@str_as_int_wrapper
def add_one(val: int) -> int:
    return val + 1
</code></pre>
<p>This seems to work alright, but <code>mypy</code> (version 1.10.0) does not like it. Instead, it complains with</p>
<pre><code>test.py:17: error: Incompatible return value type (got "_Wrapped[[int], int, [Never], Never]", expected "Callable[[B], B]") [return-value]
test.py:17: note: "_Wrapped[[int], int, [Never], Never].__call__" has type "Callable[[Arg(Never, 'val')], Never]"
</code></pre>
<p>If I either remove the <code>@wraps</code> decorator or replace the <code>B</code> type annotations by <code>str</code>, the error disappears.</p>
<h2>Question</h2>
<p>Am I missing something? Is this some already reported bug or limitation from <code>mypy</code> (couldn't find anything)? Should it be reported?</p>
<p>Thanks!</p>
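One common workaround (a sketch, not an official fix): keep `@wraps` for the runtime metadata but `cast` the inner function back to the signature the decorator advertises, since mypy's special-cased `_Wrapped` type loses the generic parameter.

```python
from functools import wraps
from typing import Callable, TypeVar, cast

B = TypeVar("B", bound=str)

def str_as_int_wrapper(func: Callable[[int], int]) -> Callable[[B], B]:
    @wraps(func)
    def _wrapped_func(val: B) -> B:
        return type(val)(func(int(val)))

    # mypy infers functools.wraps' _Wrapped[...] type here; cast back to the
    # signature we actually expose (silences the error without changing runtime)
    return cast(Callable[[B], B], _wrapped_func)

@str_as_int_wrapper
def add_one(val: int) -> int:
    return val + 1
```

The `cast` is a no-op at runtime, so the decorated function still behaves exactly as before.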
|
<python><python-typing><mypy><python-decorators>
|
2024-07-29 14:57:14
| 1
| 4,044
|
mgab
|
78,807,787
| 801,924
|
Replace NaN by None by without raise SettingWithCopyWarning
|
<p>I have a data frame looking of this:</p>
<pre><code> x u i rowid
0 1.0 1.0 5.0 1.0
1 2.0 2.0 4.0 2.0
2 3.0 3.0 3.0 3.0
3 4.0 4.0 2.0 NaN
4 5.0 5.0 1.0 5.0
5 6.0 4.0 0.0 6.0
6 7.0 3.0 1.0 7.0
7 8.0 2.0 2.0 NaN
8 9.0 1.0 3.0 9.0
9 10.0 0.0 4.0 10.0
10 11.0 1.0 5.0 11.0
11 12.0 2.0 3.0 12.0
12 13.0 3.0 2.0 NaN
13 14.0 4.0 1.0 14.0
14 15.0 5.0 0.0 15.0
16 17.0 3.0 2.0 17.0
</code></pre>
<p>I need to replace NaN with None in the <code>rowid</code> column. I'm doing this with:</p>
<pre><code>data_frame = data_frame.where(pandas.notnull(data_frame), None)
</code></pre>
<p>But, this raise a:</p>
<pre><code>/usr/lib/python3.8/site-packages/pandas/core/indexing.py:1843: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
</code></pre>
<p>I read a lot of docs <strong>and other questions</strong> without success. The closest solution I found is:</p>
<pre><code>data_frame.loc[numpy.isnan(data_frame[ROW_ID]), ROW_ID] = None
</code></pre>
<p>It doesn't work (data_frame is unchanged). But I know I'm close, because when I replace <code>None</code> with <code>-1</code>, the <code>NaN</code> values are replaced by <code>-1</code>.</p>
<p>Any idea?</p>
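A sketch that addresses both symptoms: take an explicit `.copy()` of the slice (which is what silences `SettingWithCopyWarning`), and cast the column to `object` before masking — a `float64` column cannot hold `None`, so pandas would otherwise silently turn it straight back into `NaN`, which is why the `.loc` assignment appeared to do nothing.

```python
import numpy as np
import pandas as pd

src = pd.DataFrame({"rowid": [1.0, 2.0, np.nan, 4.0]})

# .copy() makes this an independent frame, not a view of a slice,
# so later assignments do not raise SettingWithCopyWarning
data_frame = src[src["rowid"] != 3.0].copy()

# cast to object first: in a float64 column, None is coerced back to NaN
data_frame["rowid"] = (data_frame["rowid"]
                       .astype(object)
                       .where(data_frame["rowid"].notna(), None))
```

The column ends up as dtype `object`, holding real `None` values where the NaNs were.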
|
<python><pandas><numpy>
|
2024-07-29 14:54:42
| 1
| 7,711
|
bux
|
78,807,728
| 4,505,998
|
Sort by two columns where first is already sorted in pandas
|
<p>I have a kind of big DataFrame, and therefore it can't be sorted on O(n²). I want to sort by two columns, and <strong>one of those columns is already sorted</strong>.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
    'col1': [0, 0, 1, 2, 2, 2],
    'col2': ['c', 'a', 'b', 'e', 'h', 'f'],
})

df2 = sort_ordered(df, 'col1', 'col2')
assert df2.equals(df.sort_values(['col1', 'col2']))
</code></pre>
<p>How can I sort by those two columns, <strong>assuming that the first column is already sorted</strong>?.</p>
<p>That way, as only the elements in the second column need to be sorted "per group", the complexity would be O(n·k²) worst case, or O(n·k log k) on average, where k is the number of elements in each "partition" of the first column. In this case, k is a number between 1 and 12, making the proposed solution almost linear instead of worst-case quadratic.</p>
<p>I provide the <em>pseudocode</em> of an iterative approach:</p>
<pre class="lang-py prettyprint-override"><code>wstart = 0
for i in range(len(df)):
    if df.iloc[i]['col1'] != df.iloc[wstart]['col1']:
        wstart = i
        df.iloc[wstart:i] = df.iloc[wstart:i].sort_values('col2')
</code></pre>
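The pseudocode above maps onto a per-run sort over the already-sorted first column (a sketch; `sort_ordered` is an assumed helper, not a pandas function): find the boundaries between runs of equal `col1` values, argsort only inside each run, and take the rows in that order.

```python
import numpy as np
import pandas as pd

def sort_ordered(df, first, second):
    """Sort by (first, second) assuming `first` is already sorted.

    Only the runs of equal `first` values are sorted, so the total cost is
    O(sum over runs of k log k) = O(n log k) rather than O(n log n).
    """
    key = df[first].to_numpy()
    vals = df[second].to_numpy()
    # positions where a new run of equal `first` values begins
    bounds = np.flatnonzero(key[1:] != key[:-1]) + 1
    order = np.arange(len(df))
    for s, e in zip(np.r_[0, bounds], np.r_[bounds, len(df)]):
        order[s:e] = s + np.argsort(vals[s:e], kind="stable")
    return df.iloc[order]

df = pd.DataFrame({
    "col1": [0, 0, 1, 2, 2, 2],
    "col2": ["c", "a", "b", "e", "h", "f"],
})
df2 = sort_ordered(df, "col1", "col2")
```

The Python-level loop runs once per run, not per row, so for a small k it is cheap even on large frames.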
|
<python><pandas><dataframe><algorithm><sorting>
|
2024-07-29 14:41:02
| 2
| 813
|
David Davó
|
78,807,596
| 4,505,998
|
Dask merge ordered
|
<p>I have a fairly huge dataset (about 100GB) based on blockchain data. I want to merge two tables based on the <code>transactionHash</code>, which would normally be impossible (O(n^2)), except that these two tables are both ordered by <code>blockNumber</code>, so it can be done in O(|A|+|B|)=O(n).</p>
<p>The tables would have something like this, among other columns</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">blockNumber</th>
<th style="text-align: left;">transactionHash</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">0x0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">0x1</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">0x6</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">0x2</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">0x8</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">0xf</td>
</tr>
</tbody>
</table></div>
<p>In pandas, I would use <a href="https://pandas.pydata.org/docs/reference/api/pandas.merge_ordered.html" rel="nofollow noreferrer"><code>pd.merge_ordered</code></a>, but it seems this is not available in dask.</p>
<pre class="lang-py prettyprint-override"><code>blockNumber = [1, 1, 1, 2, 2, 2]
transactionHash = ['0x0', '0x1', '0x6', '0x2', '0x8', '0xf']

df1 = pd.DataFrame({
    'blockNumber': blockNumber,
    'transactionHash': transactionHash,
    'field1': [42, 37, 21, 78, 32, 45],
})
df2 = pd.DataFrame({
    'blockNumber': blockNumber,
    'transactionHash': transactionHash,
    'field2': [True, False, True, True, False, False],
})

res1 = pd.merge(df1, df2, on=['transactionHash'])
res2 = pd.merge(df1, df2, on=['blockNumber', 'transactionHash'])
assert res1.equals(res2)
</code></pre>
<p>How can I either:</p>
<ul>
<li>a) Implement this algorithm myself, what should I take into account and how can I "align" partitions</li>
<li>b) Use an alternative with other dask functions</li>
</ul>
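For option (a), the core of the algorithm is a two-pointer walk over the block numbers with a hash join inside each block. A plain-Python sketch over lists of row dicts (the names and the per-partition wiring are assumptions; in Dask this logic would run on pairs of partitions after repartitioning both tables to identical `blockNumber` divisions):

```python
def merge_ordered(left, right):
    """Inner-join two lists of row dicts on (blockNumber, transactionHash).

    Both inputs are sorted by blockNumber, so one linear pass suffices:
    O(|A| + |B|) instead of a quadratic nested loop.
    """
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        bi, bj = left[i]["blockNumber"], right[j]["blockNumber"]
        if bi < bj:
            i += 1
        elif bj < bi:
            j += 1
        else:
            # gather the full block on each side, then hash-join inside it
            i2 = i
            while i2 < len(left) and left[i2]["blockNumber"] == bi:
                i2 += 1
            j2 = j
            while j2 < len(right) and right[j2]["blockNumber"] == bi:
                j2 += 1
            by_hash = {r["transactionHash"]: r for r in right[j:j2]}
            for row in left[i:i2]:
                match = by_hash.get(row["transactionHash"])
                if match is not None:
                    out.append({**row, **match})
            i, j = i2, j2
    return out
```

Because only one block per side is materialised at a time, the same structure streams over partitions without holding either table in memory.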
|
<python><pandas><dask>
|
2024-07-29 14:10:33
| 0
| 813
|
David Davó
|
78,807,556
| 2,856,499
|
Normalising data in dat file with sections into a pandas dataframe
|
<p>I have a file that show an example of a data as below</p>
<pre><code>CAR REPORT AREA 1
Car Honda Country America
Type Car
Built. location Japan Ship. location China
Date 2023/01/02 transport shipping
============================================================================================================================
PPM Price Age
============================================================================================================================
2,000 100 12
3,000 100 13
3,000 100 13
3,000 100 13
3,000 100 13
CAR REPORT AREA 2
Car Toyota Country America
Type Car
Built. location Japan Ship. location China
Date 2023/01/02 transport shipping
============================================================================================================================
PPM Price Age
============================================================================================================================
2,000 100 12
3,000 100 13
3,000 100 13
3,000 100 13
3,000 100 13
</code></pre>
<p>I'm trying to write a script in pandas/Python that produces the dataframe shown below.</p>
<p>I tried opening the file via Python and reading it line by line: I used a regex to put the subsections, like the header portions, into a separate dataframe and then merged them back. It works, but it's a little inefficient.</p>
<p>I wonder whether there are any ready-made pandas functions I can leverage instead of looping over the file, as there are multiple sections and multiple files involved.</p>
<pre><code>CAR | COUNTRY | TYPE | Built.Location | ship.Location | Date | Transport | PPM | Price| Age
Honda| America | Car | Japan | China | 2023/01/02 | shipping | 2,000 | 100| 12
Honda| America | Car | Japan | China | 2023/01/02 | shipping | 3,000 | 100| 13
Honda| America | Car | Japan | China | 2023/01/02 | shipping | 3,000 | 100| 13
Honda| America | Car | Japan | China | 2023/01/02 | shipping | 3,000 | 100| 13
Honda| America | Car | Japan | China | 2023/01/02 | shipping | 3,000 | 100| 13
Toyota| America | Car | Japan | China | 2023/01/02 | shipping | 2,000 | 100| 12
Toyota| America | Car | Japan | China | 2023/01/02 | shipping | 3,000 | 100| 13
Toyota| America | Car | Japan | China | 2023/01/02 | shipping | 3,000 | 100| 13
Toyota| America | Car | Japan | China | 2023/01/02 | shipping | 3,000 | 100| 13
Toyota| America | Car | Japan | China | 2023/01/02 | shipping | 3,000 | 100| 13
</code></pre>
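There is no single pandas reader that understands sectioned reports like this, but the per-line looping can be replaced by per-section regex work. A sketch (only `Car` and `Country` are extracted; the other header fields — `Type`, `Built. location`, `Date`, etc. — would be pulled out the same way and are left out for brevity):

```python
import re
import pandas as pd

def parse_report(text):
    frames = []
    # one chunk of text per "CAR REPORT AREA n" section
    for sec in re.split(r"CAR REPORT AREA \d+", text)[1:]:
        # header fields: label followed by its value
        car = re.search(r"Car\s+(\S+)", sec).group(1)
        country = re.search(r"Country\s+(\S+)", sec).group(1)
        # data rows: three numeric columns, commas allowed in the first
        rows = re.findall(r"^\s*([\d,]+)\s+(\d+)\s+(\d+)\s*$", sec, flags=re.M)
        df = pd.DataFrame(rows, columns=["PPM", "Price", "Age"])
        df.insert(0, "CAR", car)
        df.insert(1, "COUNTRY", country)
        frames.append(df)
    return pd.concat(frames, ignore_index=True)
```

Each section's header values are broadcast down its data rows by `insert`, which gives the repeated-per-row layout of the expected output without a separate merge step.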
|
<python><pandas>
|
2024-07-29 14:03:13
| 1
| 1,315
|
Adam
|
78,807,441
| 2,233,500
|
Problem with text rendering using CairoSVG
|
<p>I'm using CairoSVG to convert an SVG file into a PNG image.
The position of the text seems to be off compared to the images rendered by Firefox, Safari, and co. I cannot tell whether the problem comes from my code, from CairoSVG, or from the SVG file itself.</p>
<p>Here is the URL of the SVG file: <a href="https://upload.wikimedia.org/wikipedia/commons/7/7d/Roll_pitch_yaw_mnemonic.svg" rel="nofollow noreferrer">https://upload.wikimedia.org/wikipedia/commons/7/7d/Roll_pitch_yaw_mnemonic.svg</a></p>
<p>Here is the image generated using CairoSVG:</p>
<p><a href="https://i.sstatic.net/yuNNNY0w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yuNNNY0w.png" alt="Image generated using CairoSVG" /></a></p>
<p>Here is the Python code I'm using:</p>
<pre class="lang-py prettyprint-override"><code>import cairosvg
URL = "https://upload.wikimedia.org/wikipedia/commons/7/7d/Roll_pitch_yaw_mnemonic.svg"
cairosvg.svg2png(url=URL, write_to="output.png")
</code></pre>
<p>Did I miss something or is it a known issue?
If so, is there a better lib than CairoSVG?</p>
|
<python><cairo>
|
2024-07-29 13:42:30
| 1
| 867
|
Vincent Garcia
|
78,807,196
| 4,451,315
|
Convert PyArrow table to pandas, but transform string to `string[pyarrow]` in pandas directly
|
<p>Say I have</p>
<pre class="lang-py prettyprint-override"><code>import pyarrow as pa
tbl = pa.table({'a': ['foo', 'bar']})
</code></pre>
<p>If I do <code>tbl.to_pandas()</code>, then the dtypes of the result are <code>object</code>:</p>
<pre><code>In [14]: tbl.to_pandas().dtypes
Out[14]:
a object
dtype: object
</code></pre>
<p>How can I convert <code>tbl</code> to pandas such that column <code>'a'</code> ends up being of type <code>string[pyarrow]</code>? I'd like this to happen during the conversion itself, so I can't accept <code>tbl.to_pandas().convert_dtypes</code> or anything similar that converts <em>after</em> the <code>to_pandas</code> call.</p>
|
<python><pyarrow>
|
2024-07-29 12:56:33
| 1
| 11,062
|
ignoring_gravity
|
78,807,069
| 625,396
|
How to implement in pytorch - numpy's .unique WITH(!) return_index = True?
|
<p>In numpy.unique there is an option <strong>return_index=True</strong> - which returns positions of unique elements (first occurrence - if several).</p>
<p>Unfortunately, there is no such option in torch.unique!</p>
<p><strong>Question:</strong> What are fast, torch-style ways to get the indices of the unique elements?</p>
<p>=====================</p>
<p>More generally, my issue is the following: I have two vectors v1, v2, and I want to get the positions of those elements in v2 which are not in v1; for repeated elements I need only one position. Numpy's unique with return_index=True immediately gives the solution. How can I do it in torch? If we know that vector v1 is sorted, can that be used to speed up the process?</p>
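The standard trick is to combine `return_inverse` with a scatter-minimum over positions: every position writes its own index into the slot of its unique value, keeping the minimum. Shown here with NumPy for brevity; the same shape should carry over to torch via `torch.unique(..., return_inverse=True)` followed by a scatter-min (e.g. `scatter_reduce` with `reduce="amin"` — an assumption about the exact torch call, so verify against your torch version):

```python
import numpy as np

def unique_with_index(a):
    """np.unique(a, return_index=True) re-implemented via return_inverse,
    using only ops (unique, arange, scatter-minimum) that exist in torch too."""
    uniq, inverse = np.unique(a, return_inverse=True)
    first = np.full(len(uniq), len(a), dtype=np.int64)
    # scatter-minimum: position i goes to slot inverse[i], keeping the smallest
    np.minimum.at(first, inverse, np.arange(len(a)))
    return uniq, first
```

The scatter is a single vectorised pass, so no Python-level loop over elements is needed.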
|
<python><numpy><pytorch><torch>
|
2024-07-29 12:26:16
| 1
| 974
|
Alexander Chervov
|
78,806,993
| 8,000,016
|
How could call a psql command from Python script
|
<p>I'm trying to create a database table with a psql command called from a Python script, but I get the following error: <code>[Errno 2] No such file or directory: b'psql -d postgres -U postgres -h postgres -W -c "CREATE DATABASE db_316c76d0 OWNER user_316c76d0;"'</code></p>
<p>The script is:</p>
<pre><code>try:
    create_user_sh = f'psql -d {settings.DB_NAME} -U {settings.DB_USER} -h {settings.DB_HOST} -W -c "CREATE DATABASE {name} OWNER {user};"'.encode()
    proc = Popen(create_user_sh, stdin=PIPE, stdout=PIPE)
    proc.communicate(settings.DB_PASSWORD.encode())
except Exception as ex:
    print(str(ex))
</code></pre>
<p>Any idea how to solve it?</p>
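The `[Errno 2]` happens because `Popen` without `shell=True` treats the entire quoted string as the name of a single executable. A sketch that sidesteps shell quoting entirely by passing an argument list, and uses the `PGPASSWORD` environment variable instead of the interactive `-W` prompt:

```python
import os
import subprocess

def build_psql_command(dbname, user, host, sql):
    # a list of arguments avoids both the [Errno 2] error and quoting pitfalls
    return ["psql", "-d", dbname, "-U", user, "-h", host, "-c", sql]

def run_psql(dbname, user, host, password, sql):
    env = {**os.environ, "PGPASSWORD": password}  # no -W password prompt needed
    return subprocess.run(build_psql_command(dbname, user, host, sql),
                          env=env, capture_output=True, text=True)
```

With `settings` from the question, the call would look like `run_psql(settings.DB_NAME, settings.DB_USER, settings.DB_HOST, settings.DB_PASSWORD, f"CREATE DATABASE {name} OWNER {user};")`.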
|
<python><postgresql><psql>
|
2024-07-29 12:09:31
| 1
| 1,264
|
Alberto Sanmartin Martinez
|
78,806,818
| 19,363,912
|
Join same name dataframe column values into list
|
<p>I create a dataframe with distinct column names and use <code>rename</code> to create columns with the same name.</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "a_1": ["x", "x"],
    "a_2": ["", ""],
    "b_1": ["", ""],
    "a_3": ["", "y"],
    "c_1": ["z", "z"],
})

names = {
    "a_1": "a",
    "a_2": "a",
    "a_3": "a",
    "b_1": "b",
    "c_1": "c",
}

df2 = df.rename(columns=names)
</code></pre>
<p>This produces</p>
<pre><code> a a b a c
0 x z
1 x y z
</code></pre>
<p>How to join values in the columns with same name into a list?</p>
<pre><code>out = pd.DataFrame({
"a": [["x"], ["x", "y"]],
"b": [[""], [""]],
"c": [["z"], ["z"]],
})
a b c
0 [x] [] [z]
1 [x, y] [] [z]
</code></pre>
<p><strong>Bad attempt</strong></p>
<p>I suspect this can be resolved with lambda</p>
<pre><code>for c in ['a', 'b', 'c']:
    df2[c] = df2[c].apply(lambda x: x.tolist() if x.any() else [], axis=1)
</code></pre>
<p>This however produces</p>
<pre><code>TypeError: <lambda>() got an unexpected keyword argument 'axis'
</code></pre>
<p>Any idea?</p>
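One way (a sketch): select each duplicated label with a boolean column mask — which yields a sub-DataFrame even for a single match — and collect the non-empty values per row. Empty strings are dropped, so all-empty rows come out as `[]`.

```python
import pandas as pd

df2 = pd.DataFrame([["x", "", "", "", "z"],
                    ["x", "", "", "y", "z"]],
                   columns=["a", "a", "b", "a", "c"])

out = pd.DataFrame({
    # dict.fromkeys keeps the first-seen order while deduplicating labels
    name: df2.loc[:, df2.columns == name]
             .apply(lambda row: [v for v in row if v != ""], axis=1)
    for name in dict.fromkeys(df2.columns)
})
```

The reason the attempt in the question failed is that `df2[c]` for a duplicated label returns a DataFrame, but assigning a per-element `apply` back expects Series semantics; the mask-then-`apply(axis=1)` form above works row-wise from the start.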
|
<python><pandas><dataframe>
|
2024-07-29 11:28:06
| 1
| 447
|
aeiou
|
78,806,812
| 13,573,168
|
THIRD_PARTY_NOTICES.chromedriver - Exec format error - undetected_chromedriver
|
<p><strong>undetected_chromedriver</strong> with <strong>webdriver_manager</strong> was working well a few days ago for scraping websites, but out of nowhere it started throwing the error:</p>
<pre><code>OSError: [Errno 8] Exec format error:
'/Users/pd/.wdm/drivers/chromedriver/mac64/127.0.6533.72/chromedriver-mac-x64/THIRD_PARTY_NOTICES.chromedriver'
</code></pre>
<p>I am guessing it is related to recent update of <strong>webdriver_manager</strong>.</p>
<p>This is the code:</p>
<pre class="lang-py prettyprint-override"><code>import undetected_chromedriver as uc
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.support import expected_conditions as EC

def get_driver():
    options = uc.ChromeOptions()
    # options.add_argument("--headless")
    options.add_argument("--no-sandbox")
    options.add_argument("--disable-dev-shm-usage")
    options.add_argument("--start-maximized")
    options.add_argument('--disable-popup-blocking')
    driver = uc.Chrome(driver_executable_path=ChromeDriverManager().install(), options=options, version_main=116)
    driver.maximize_window()
    return driver
</code></pre>
<p>It would be really great if someone can help me on this, Thanks.</p>
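One reported cause (an assumption here) is that a `webdriver_manager` release picked the wrong file out of the new Chrome-for-Testing archive layout, returning the path of `THIRD_PARTY_NOTICES.chromedriver` instead of the binary — which produces exactly this `Exec format error`. A defensive sketch that resolves the real executable sitting next to whatever `install()` returned (`locate_chromedriver` is a hypothetical helper, not part of either library):

```python
import os

def locate_chromedriver(install_path: str) -> str:
    """Return the real chromedriver binary in the same folder as install_path.

    Works around driver managers that return the path of
    THIRD_PARTY_NOTICES.chromedriver instead of the executable.
    """
    if os.path.basename(install_path) == "chromedriver":
        return install_path
    folder = os.path.dirname(install_path)
    candidate = os.path.join(folder, "chromedriver")
    if os.path.exists(candidate):
        return candidate
    return install_path  # fall back to whatever the manager gave us
```

Usage would then be `uc.Chrome(driver_executable_path=locate_chromedriver(ChromeDriverManager().install()), ...)`; upgrading `webdriver_manager` may also resolve it outright.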
|
<python><selenium-webdriver><web-scraping><undetected-chromedriver>
|
2024-07-29 11:27:10
| 4
| 4,946
|
Prakash Dahal
|
78,806,693
| 7,227,146
|
Error while finding module specification for '' (ModuleNotFoundError: __path__ attribute not found on '' while trying to find '')
|
<p>I have a minimal directory with the following structure:</p>
<ul>
<li>my_project
<ul>
<li>code
<ul>
<li>my_code.py</li>
</ul>
</li>
<li>functions
<ul>
<li>module1.py</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>The contents of each file are:</p>
<pre><code>## module1.py
def add_three(number):
    return number + 3
</code></pre>
<p>And</p>
<pre><code>## my_code.py
from functions.module1 import add_three
result_1 = add_three(1)
print(result_1)
</code></pre>
<p>In the command line I <code>cd</code> to the <code>my_project</code> directory, then I run <code>python -m code.my_code</code> and I get this:</p>
<p><code>C:\[my_user]\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\python.exe: Error while finding module specification for 'code.my_code' (ModuleNotFoundError: __path__ attribute not found on 'code' while trying to find 'code.my_code')</code></p>
<p>Most answers I've seen for this error suggest not to include the <code>.py</code> extension with the <code>-m</code> flag, but of course I've already done that.</p>
<p>What's funny is if I <code>cd ..</code>, change the import to <code>from my_project.functions.module1 import add_three</code>, and run <code>python -m my_project.code.my_code</code>, it runs correctly.</p>
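A likely explanation (an assumption, but consistent with the error): the standard library already ships a plain module named `code` (the `InteractiveConsole` module), and during `-m` resolution a regular module found anywhere on `sys.path` takes precedence over a local directory without an `__init__.py` (which would only count as a namespace package). A plain module has no `__path__`, hence the exact message. You can check what the name `code` resolves to:

```python
import importlib.util

spec = importlib.util.find_spec("code")
print(spec.origin)                       # the stdlib .../code.py, not your directory
print(spec.submodule_search_locations)   # None -> a plain module, no __path__
```

Renaming the directory (or adding `__init__.py` files so it becomes a regular package that shadows the stdlib module) avoids the clash — which also explains why running from one level up works: there the top-level name is `my_project`, which collides with nothing.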
|
<python>
|
2024-07-29 10:59:03
| 1
| 679
|
zest16
|
78,806,611
| 1,082,673
|
Keyringrc.cfg Permission Issues when installing packages via a python script using poetry
|
<p>I have an app being managed by <code>supervisor</code> running as the user <code>app_user</code>. When the script starts, it checks if there are new packages and tries to install them using the code below. It however throws the error <code>[Errno 13] Permission denied: '/root/.config/python_keyring/keyringrc.cfg'</code></p>
<p>Function to install packages:</p>
<pre><code>import os
import sys
import subprocess
import logging

logger = logging.getLogger("general_log")

def install_app_packages():
    """
    Run poetry install to update application packages
    """
    python_path = sys.executable
    current_dir = os.path.dirname(os.path.realpath(__file__))
    project_directory = os.path.dirname(current_dir)

    poetry_install_command = "poetry install --with prod --no-root"
    activate_path = f"{python_path[:-7]}/activate"
    export_poetry = 'export PATH="/opt/apps/venv/poetry/bin:$PATH"'
    full_command = f"{export_poetry} && source {activate_path} && {poetry_install_command}"

    try:
        subprocess.run(
            full_command,
            shell=True,
            cwd=project_directory,
            executable="/bin/ash"
        )
        return True
    except Exception as e:
        logger.error(f"Error while running Poetry Install: {e.__str__()}")
        return False
</code></pre>
<p><strong>Detailed error:</strong></p>
<pre><code>[2024-07-29 12:53:01 +0300] [24580] [INFO] Using worker: sync
[2024-07-29 12:53:02 +0300] [24581] [INFO] Booting worker with pid: 24581
Installing dependencies from lock file
Package operations: 1 install, 0 updates, 0 removals
- Installing semver (3.0.2)
PermissionError
[Errno 13] Permission denied: '/root/.config/python_keyring/keyringrc.cfg'
at /usr/lib/python3.11/pathlib.py:1013 in stat
1009│ """
1010│ Return the result of the stat() system call on this path, like
1011│ os.stat() does.
1012│ """
→ 1013│ return os.stat(self, follow_symlinks=follow_symlinks)
1014│
1015│ def owner(self):
1016│ """
1017│ Return the login name of the file owner.
Cannot install semver.
[2024-07-29 12:53:03 +0300] [24580] [ERROR] Worker (pid:24581) exited with code 3
[2024-07-29 12:53:03 +0300] [24580] [ERROR] Shutting down: Master
[2024-07-29 12:53:03 +0300] [24580] [ERROR] Reason: Worker failed to boot.
</code></pre>
<p>What works though?</p>
<p><strong>1. Installation works when invoked directly using gunicorn</strong></p>
<p>When I run the application directly using <code>gunicorn</code> while running as the same <code>app_user</code>, it successfully installs.</p>
<p><code>(poetry-venv)project_directory$ gunicorn --bind 0.0.0.0:8030 server.wsgi --error-logfile /tmp/gunicorn_error.log --access-logfile /tmp/gunicorn_access.log --preload</code></p>
<p><strong>2. Changing Supervisor to run as root</strong></p>
<p>in supervisor conf file for the app, if I specify <code>user=root</code> and <code>group=root</code> instead of the <code>app_user</code>, the installation also succeeds.</p>
<p>So I'm wondering what could be wonky here. App permissions, or some poetry setting? Why is the script looking for keyringrc.cfg under root when "root" is not in play here?</p>
<p>I had an earlier <a href="https://stackoverflow.com/questions/78313914/how-to-determine-the-user-in-a-python-script-ran-by-supervisord">question on permissions here</a> but I haven't found the answer. Could it be related?</p>
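Two things worth checking (both assumptions): under supervisor, `HOME` may be unset or inherited as `/root`, which would explain why keyring looks under `/root/.config` even though the process runs as `app_user`; and keyring honours the `PYTHON_KEYRING_BACKEND` environment variable, so the lookup can be disabled outright for the subprocess. A sketch of the environment to pass to `subprocess.run`:

```python
import os

def poetry_env(home="/home/app_user"):
    """Environment for the `poetry install` subprocess.

    The default `home` is an assumption: set it to app_user's real home.
    """
    env = dict(os.environ)
    # disable keyring lookups entirely; honoured by keyring and hence by poetry
    env["PYTHON_KEYRING_BACKEND"] = "keyring.backends.null.Keyring"
    env["HOME"] = home  # avoid inheriting /root from supervisor
    return env
```

In `install_app_packages` this would become `subprocess.run(full_command, shell=True, cwd=project_directory, env=poetry_env(), executable="/bin/ash")`.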
|
<python><python-3.x><supervisord><python-poetry>
|
2024-07-29 10:38:41
| 1
| 4,090
|
lukik
|
78,806,270
| 335,427
|
Create columns from grouped output
|
<p>I have data which looks like this</p>
<pre><code>startDate endDate value sourceName
0 2024-06-03 22:26:00+02:00 2024-06-03 22:46:00+02:00 HKCategoryValueSleepAnalysisAsleepCore AppleWatch
6 2024-06-03 22:40:00+02:00 2024-06-04 07:48:00+02:00 HKCategoryValueSleepAnalysisAsleepCore Connect
1 2024-06-03 22:46:00+02:00 2024-06-03 22:49:00+02:00 HKCategoryValueSleepAnalysisAwake AppleWatch
2 2024-06-03 22:49:00+02:00 2024-06-04 00:56:00+02:00 HKCategoryValueSleepAnalysisAsleepREM AppleWatch
3 2024-06-04 00:56:00+02:00 2024-06-04 03:56:00+02:00 HKCategoryValueSleepAnalysisAsleepCore AppleWatch
4 2024-06-04 05:56:00+02:00 2024-06-04 07:56:00+02:00 HKCategoryValueSleepAnalysisAsleepREM AppleWatch
5 2024-06-04 22:40:00+02:00 2024-06-05 07:48:00+02:00 HKCategoryValueSleepAnalysisAsleepCore AppleWatch
</code></pre>
<p>I group them into "sleep sessions" by device and by start/end date, provided the gap is no larger than 2 hours, like so:</p>
<pre><code> startDate endDate duration
sourceName
AppleWatch 0 2024-06-03 22:26:00+02:00 2024-06-04 07:56:00+02:00 7.500000
1 2024-06-04 22:40:00+02:00 2024-06-05 07:48:00+02:00 9.133333
Connect 1 2024-06-03 22:40:00+02:00 2024-06-04 07:48:00+02:00 9.133333
</code></pre>
<p>I would like to get columns which sum the duration of each grouped value (within the session). example</p>
<pre><code>REM_duration
Core_duration
Awake_duration
</code></pre>
<p>Also, any gaps between stages (see between row indexes 3 and 4) should be added to Awake_duration. For example, <code>Awake_duration</code> for session 0 should be <code>2.05</code>.</p>
<p>So expected output would be</p>
<pre><code> startDate endDate duration sourceName rem_duration core_duration awake_duration
0 2024-06-03 22:26:00+02:00 2024-06-04 07:56:00+02:00 7.500000 AppleWatch 4.116667 3.333333 2.05
1 2024-06-04 22:40:00+02:00 2024-06-05 07:48:00+02:00 9.133333 AppleWatch 0.000000 9.133333 0.00
1 2024-06-03 22:40:00+02:00 2024-06-04 07:48:00+02:00 8.133333 Connect 1.000000 7.133333 1.00
</code></pre>
<p>This is what I have so far</p>
<pre><code>import pandas as pd
from datetime import timedelta

data = [
    {
        "startDate": pd.Timestamp("2024-06-03 22:26:00+0200"),
        "endDate": pd.Timestamp("2024-06-03 22:46:00+0200"),
        "value": "HKCategoryValueSleepAnalysisAsleepCore",
        "sourceName": "AppleWatch"
    },
    {
        "startDate": pd.Timestamp("2024-06-03 22:46:00+0200"),
        "endDate": pd.Timestamp("2024-06-03 22:49:00+0200"),
        "value": "HKCategoryValueSleepAnalysisAwake",
        "sourceName": "AppleWatch"
    },
    {
        "startDate": pd.Timestamp("2024-06-03 22:49:00+0200"),
        "endDate": pd.Timestamp("2024-06-04 00:56:00+0200"),
        "value": "HKCategoryValueSleepAnalysisAsleepREM",
        "sourceName": "AppleWatch"
    },
    {
        "startDate": pd.Timestamp("2024-06-04 00:56:00+0200"),
        "endDate": pd.Timestamp("2024-06-04 03:56:00+0200"),
        "value": "HKCategoryValueSleepAnalysisAsleepCore",
        "sourceName": "AppleWatch"
    },
    {
        "startDate": pd.Timestamp("2024-06-04 05:56:00+0200"),
        "endDate": pd.Timestamp("2024-06-04 07:56:00+0200"),
        "value": "HKCategoryValueSleepAnalysisAsleepREM",
        "sourceName": "AppleWatch"
    },
    {
        "startDate": pd.Timestamp("2024-06-04 22:40:00+0200"),
        "endDate": pd.Timestamp("2024-06-05 07:48:00+0200"),
        "value": "HKCategoryValueSleepAnalysisAsleepCore",
        "sourceName": "AppleWatch"
    },
    {
        "startDate": pd.Timestamp("2024-06-03 22:40:00+0200"),
        "endDate": pd.Timestamp("2024-06-04 07:48:00+0200"),
        "value": "HKCategoryValueSleepAnalysisAsleepCore",
        "sourceName": "Connect"
    }
]

# Create DataFrame
df_orig = pd.DataFrame.from_records(data).sort_values('startDate')

max_gap = 2

df = df_orig.copy()
df = df.sort_values(['sourceName', 'startDate'])
df['duration'] = (df['endDate'] - df['startDate']).div(pd.Timedelta(hours=1))
g = df['startDate'].sub(df['endDate'].shift()).div(pd.Timedelta(hours=1))
df2 = df.groupby(['sourceName', g.gt(max_gap).cumsum()]).agg(
    {'startDate': 'min', 'endDate': 'max', 'duration': 'sum'})
</code></pre>
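Building on the same `g`/session grouping idea, the per-stage columns can come from a `pivot_table` keyed by the session id, and the gap time can be folded into `Awake` as whatever part of the session span the stages do not account for. A self-contained sketch on a minimal three-row example (column and variable names are assumptions):

```python
import pandas as pd

df = pd.DataFrame({
    "startDate": pd.to_datetime(["2024-06-03 22:26", "2024-06-03 22:46", "2024-06-03 22:49"]),
    "endDate":   pd.to_datetime(["2024-06-03 22:46", "2024-06-03 22:49", "2024-06-04 00:56"]),
    "value": ["HKCategoryValueSleepAnalysisAsleepCore",
              "HKCategoryValueSleepAnalysisAwake",
              "HKCategoryValueSleepAnalysisAsleepREM"],
    "sourceName": ["AppleWatch"] * 3,
}).sort_values(["sourceName", "startDate"])

max_gap = 2
df["duration"] = (df["endDate"] - df["startDate"]) / pd.Timedelta(hours=1)
df["stage"] = df["value"].str.extract(r"(REM|Core|Awake)")

# same session id as in the question: a gap > max_gap hours starts a new session
g = df["startDate"].sub(df["endDate"].shift()).div(pd.Timedelta(hours=1))
session = g.gt(max_gap).cumsum()

per_stage = df.pivot_table(index=["sourceName", session], columns="stage",
                           values="duration", aggfunc="sum",
                           fill_value=0).add_suffix("_duration")

# fold untracked gaps between stages into Awake_duration: the whole session
# span minus the hours accounted for by any stage
span = df.groupby(["sourceName", session]).agg(
    start=("startDate", "min"), end=("endDate", "max"))
span_h = (span["end"] - span["start"]) / pd.Timedelta(hours=1)
per_stage["Awake_duration"] = (per_stage.get("Awake_duration", 0)
                               + (span_h - per_stage.sum(axis=1)))
```

`per_stage` then joins back onto the existing session-level frame (`df2` in the question) on the same `(sourceName, session)` index.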
|
<python><pandas>
|
2024-07-29 09:21:35
| 1
| 3,735
|
Chris
|
78,806,145
| 3,684,931
|
Is there a way to generate the computation graph of a PyTorch model from just a forward pass?
|
<p>Imagine a network like this:</p>
<pre><code>import torch
import torch.nn as nn

class CoolCNN(nn.Module):
    def __init__(self):
        super(CoolCNN, self).__init__()
        self.initial_conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
        self.parallel_conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
        self.secondary_conv = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
        self.max_pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.fully_connected1 = nn.Linear(32 * 8 * 8, 128)
        self.output_layer = nn.Linear(128, 10)

    def forward(self, x):
        main_path = self.max_pool(torch.relu(self.initial_conv(x)))
        parallel_path = self.max_pool(torch.relu(self.parallel_conv(x)))
        x = (main_path + parallel_path) / 2
        x = self.max_pool(torch.relu(self.secondary_conv(x)))
        x = x.view(-1, 32 * 8 * 8)
        x = torch.relu(self.fully_connected1(x))
        x = self.output_layer(x)
        return x
</code></pre>
<p>It has this tree-like structure:</p>
<pre class="lang-none prettyprint-override"><code>CoolCNN
└── Forward Pass
├── parallel_path
│ ├── parallel_conv (Conv2d)
│ ├── ReLU Activation
│ └── max_pool (MaxPool2d)
│
├── main_path
│ ├── initial_conv (Conv2d)
│ ├── ReLU Activation
│ └── max_pool (MaxPool2d)
│
├── Average main_path and parallel_path
│
├── secondary_conv (Conv2d)
├── ReLU Activation
└── max_pool (MaxPool2d)
│
├── Flatten the Tensor
│
├── fully_connected1 (Linear)
├── ReLU Activation
│
└── output_layer (Linear)
</code></pre>
<p>As you can see, the main and parallel paths being parallel branches gets accurately represented in the above illustrated tree representation.</p>
<p>I wanted to know if it's possible to somehow extract this tree-like structure from just a forward pass through the network <strong>without</strong> relying on the backward pass at all. There are libraries like <a href="https://github.com/szagoruyko/pytorchviz/blob/master/torchviz/dot.py" rel="nofollow noreferrer">torchviz</a> which use the computation graph from backward pass which I don't want to use.</p>
<p>The only approach that I could find was forward hooks. That is in fact how <a href="https://github.com/sksq96/pytorch-summary/blob/master/torchsummary/torchsummary.py" rel="nofollow noreferrer">torchsummary</a> prints the model's summary.</p>
<p>However, with forward hooks, all you get to know is which order the nodes get called in. So it is just a topological sort of the underlying graph and it's impossible to accurately reconstruct a graph from just its topological sort. This means it's impossible to accurately derive the actual unique tree representation from forward hooks alone (at least that is my understanding).</p>
<p>I have already seen <a href="https://stackoverflow.com/questions/63082507/obtaining-computation-graph-structure-of-a-forward-pass-in-pytorch">this</a> question where the OP seemed to be happy without an exact graph reconstruction, so it doesn't address my need.</p>
<p>So is there a way by which we can reconstruct the computation graph from a forward pass alone?</p>
|
<python><graph><pytorch>
|
2024-07-29 08:45:16
| 1
| 1,984
|
Sachin Hosmani
|
78,805,881
| 4,451,315
|
Shift a chunkedarray
|
<p>If I start with</p>
<pre><code>import pyarrow as pa
ca = pa.chunked_array([[1,3, 2], [5, 2, 1]])
</code></pre>
<p>I'd like to end up with a pyarrow array with elements</p>
<pre><code>[None, 1, 3, 2, 5, 2]
</code></pre>
<p>I don't really mind if the output is an array or a chunkedarray</p>
|
<python><pyarrow>
|
2024-07-29 07:40:00
| 1
| 11,062
|
ignoring_gravity
|
78,805,524
| 143,397
|
How to build a PyO3 extension with Yocto / OpenEmbedded?
|
<p>I have a Python/Rust project that uses PyO3 to build a Python extension, written in Rust.</p>
<p>I have it set up with <code>maturin</code> and it works fine locally - it will build a wheel (.whl) and within that is my Python code and the Rust extension shared object, just as I'd expect.</p>
<p>I need to cross-compile this with Yocto (I'm stuck with Langdale, unfortunately), to put it in my embedded root filesystem, and I've quickly come to the realisation that I will need to ditch <code>maturin</code> for <code>setuptools-rust</code>, as there is no <code>maturin</code> support in Yocto Langdale at all.</p>
<p>So now my Rust/Python/PyO3 project is set up very much like the <code>setuptools-rust</code> example and I can build it locally, and the generated wheel contains what I'd expect.</p>
<p>Directory layout:</p>
<pre><code>.
├── Cargo.lock
├── Cargo.toml
├── MANIFEST.in
├── pyproject.toml
├── python
│ ├── my_project
│ │ └── my_module.py
│ └── tests
│ └── test_my_module.py
└── src
└── lib.rs
</code></pre>
<p><code>pyproject.toml</code> (partial):</p>
<pre><code>[build-system]
requires = ["setuptools>=61.0.0", "setuptools-rust"]
build-backend = "setuptools.build_meta"
[tool.setuptools.packages]
find = { where = ["python"] }
[[tool.setuptools-rust.ext-modules]]
target = "my_project._lib"
path = "Cargo.toml"
binding = "PyO3"
</code></pre>
<p>(EDIT: removed <code>setuptools-scm</code> because Langdale version is < 8, use <code>MANIFEST.in</code> instead)</p>
<p><code>MANIFEST.in</code>:</p>
<pre><code>include Cargo.toml
recursive-include src *.rs
</code></pre>
<p><code>Cargo.toml</code> (partial):</p>
<pre><code>[lib]
name = "_lib"
crate-type = ["cdylib"]
path = "src/lib.rs"
[dependencies]
pyo3 = "0.22.0"
</code></pre>
<p>Recipe <code>python3-my-project.bb</code>:</p>
<pre><code>inherit externalsrc
EXTERNALSRC = "${TOPDIR}/../../my_project"

DEPENDS += " \
    python3-tomli-native \
    python3-setuptools-rust-native"

inherit python_setuptools_build_meta
</code></pre>
<p>EDIT: including <code>python3-tomli-native</code> fixed the error:</p>
<pre><code>| File "build/tmp/work/cortexa72-cortexa53-xilinx-linux/my_package/1.0-r0/recipe-sysroot-native/usr/lib/python3.10/site-packages/picobuild/__init__.py", line 93, in __init__
| pyproject = tomllib.load(f)
| AttributeError: module 'tomli' has no attribute 'load'
</code></pre>
<p>This builds a .rpm but inspecting the contents shows no shared library present:</p>
<pre><code>/usr/lib/python3.10/site-packages/my_project-0.1.0.dist-info/METADATA
/usr/lib/python3.10/site-packages/my_project-0.1.0.dist-info/RECORD
/usr/lib/python3.10/site-packages/my_project-0.1.0.dist-info/WHEEL
/usr/lib/python3.10/site-packages/my_project-0.1.0.dist-info/top_level.txt
/usr/lib/python3.10/site-packages/my_project/__pycache__/my_project.cpython-310.pyc
/usr/lib/python3.10/site-packages/my_project/my_project.py
/usr/lib/python3.10/site-packages/tests/__pycache__/test_my_project.cpython-310.pyc
/usr/lib/python3.10/site-packages/tests/test_my_project.py
</code></pre>
<p>(Not sure why this .whl contains the <code>tests</code> directory? Is that typical? Perhaps a quirk of the setuptools-rust directory layout.)</p>
<p>Looking in the bitbake log I can see that something called "picobuild" is used:</p>
<pre><code>DEBUG: python3-my-project-1.0-r0 do_compile: Executing shell function do_compile
+ cd build/tmp/work/cortexa72-cortexa53-xilinx-linux/python3-my-project/1.0-r0/python3-my-project-1.0
+ do_compile
+ export _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata
+ _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata
+ python_pep517_do_compile
+ nativepython3 -m picobuild --source /.../build/../../python/my_project --dest /.../build/tmp/work/cortexa72-cortexa53-xilinx-linux/python3-my-project/1.0-r0/dist --wheel
...
</code></pre>
<p>Digging into this a bit, I see that the standard pypa/build front-end was replaced by "picobuild" around May 2022:</p>
<p><a href="https://patchwork.yoctoproject.org/project/oe-core/cover/20220526171001.4074388-1-ross.burton@arm.com/" rel="nofollow noreferrer">https://patchwork.yoctoproject.org/project/oe-core/cover/20220526171001.4074388-1-ross.burton@arm.com/</a></p>
<p>But a newer post in Jan 2023 seems to suggest that "picobuild" was changed back to "build" (a hint at a "migration"):</p>
<p><a href="https://patchwork.yoctoproject.org/project/oe-core/patch/20230112112336.2254765-4-ross.burton@arm.com/#7902" rel="nofollow noreferrer">https://patchwork.yoctoproject.org/project/oe-core/patch/20230112112336.2254765-4-ross.burton@arm.com/#7902</a></p>
<p>Does anyone know the history of this? Is "picobuild" my problem here, in that it doesn't support the setuptools-rust plugin, somehow?</p>
<hr />
<p>Inheriting from <code>python_setuptools3_rust</code> instead of <code>python_setuptools_build_meta</code> successfully builds and packages the <em>Python</em> parts of my project, but the Rust extension (I expected to see a <code>.so</code> or <code>.pyd</code> file) is completely absent. I think this is because <code>python_setuptools3_rust</code> does not use a PEP-517-compliant build step, instead invoking <code>setup.py bdist_wheel</code> directly, which skips the Rust config in <code>pyproject.toml</code>.</p>
<p>At this point I've run out of documentation or examples to guide me. I'm at a bit of a loss as to how to proceed.</p>
<p>It's a real shame I'm stuck with Yocto Langdale because the latest LTS, Scarthgap, does seem to have <code>maturin</code> support.</p>
|
<python><rust><yocto><openembedded><pyo3>
|
2024-07-29 05:34:21
| 0
| 13,932
|
davidA
|
78,805,430
| 10,722,752
|
How to sort a pandas dataframe using group by
|
<p>I am working on a dataframe similar to below sample:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(0)
np.random.seed(0)
df = pd.DataFrame({'date' : np.tile(['2024-05-01', '2024-06-01'], 4),
'State' : np.repeat(['fl', 'ny', 'mi', 'nc'], 2),
'Rev' : [21000, 18200, 51200, 48732, 5676, 6798, 24012, 25005],
'Score' : np.random.normal(size = 8),
'Value' : np.random.randint(10, 50, size = 8)})
df
date State Rev Score Value
0 2024-05-01 fl 21000 1.764052 34
1 2024-06-01 fl 18200 0.400157 22
2 2024-05-01 ny 51200 0.978738 11
3 2024-06-01 ny 48732 2.240893 48
4 2024-05-01 mi 5676 1.867558 49
5 2024-06-01 mi 6798 -0.977278 33
6 2024-05-01 nc 24012 0.950088 34
7 2024-06-01 nc 25005 -0.151357 27
</code></pre>
<p>The expected output should be the <code>dataframe</code> sorted by <code>Rev</code>, largest to smallest, and within each <code>State</code>, the <code>date</code> column should be sorted in ascending order.</p>
<p>Tried below code:</p>
<pre><code>(df.sort_values(by = ['Rev'], ascending = [False]).
groupby('State', as_index = False).
apply(lambda x : x.sort_values('date')).reset_index(drop = True))
</code></pre>
<p>But it's not giving me the required output.</p>
<pre><code> date State Rev Score Value
0 2024-05-01 fl 21000 1.764052345967664 34
1 2024-06-01 fl 18200 0.4001572083672233 22
2 2024-05-01 mi 5676 1.8675579901499675 49
3 2024-06-01 mi 6798 -0.977277879876411 33
4 2024-05-01 nc 24012 0.9500884175255894 34
5 2024-06-01 nc 25005 -0.1513572082976979 27
6 2024-05-01 ny 51200 0.9787379841057392 11
7 2024-06-01 ny 48732 2.240893199201458 48
</code></pre>
<p>The output should be NY, NC, FL and MI in that order based on the <code>Rev</code> and <code>date</code> columns.
i.e. for a <code>State</code> group, the <code>Rev</code> value for <code>2024-05-01</code> will decide which state will take precedence in the final output order.</p>
<p>Can someone help me with the code?</p>
<p>Expected Output:</p>
<pre><code>df.iloc[[2,3, 6,7, 0,1, 4,5], : ]
date State Rev Score Value
2 2024-05-01 ny 51200 0.978738 11
3 2024-06-01 ny 48732 2.240893 48
6 2024-05-01 nc 24012 0.950088 34
7 2024-06-01 nc 25005 -0.151357 27
0 2024-05-01 fl 21000 1.764052 34
1 2024-06-01 fl 18200 0.400157 22
4 2024-05-01 mi 5676 1.867558 49
5 2024-06-01 mi 6798 -0.977278 33
</code></pre>
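One hedged sketch of an approach (not from the original post): derive a per-state sort key from each state's <code>2024-05-01</code> revenue, sort by that key descending and by <code>date</code> ascending, then drop the helper. The <code>_key</code> column name is an arbitrary choice for illustration.

```python
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame({'date': np.tile(['2024-05-01', '2024-06-01'], 4),
                   'State': np.repeat(['fl', 'ny', 'mi', 'nc'], 2),
                   'Rev': [21000, 18200, 51200, 48732, 5676, 6798, 24012, 25005],
                   'Score': np.random.normal(size=8),
                   'Value': np.random.randint(10, 50, size=8)})

# Map each State to its 2024-05-01 Rev, sort by that key descending,
# then by date ascending within each State, and drop the helper column.
key = df.loc[df['date'] == '2024-05-01'].set_index('State')['Rev']
out = (df.assign(_key=df['State'].map(key))
         .sort_values(['_key', 'date'], ascending=[False, True])
         .drop(columns='_key')
         .reset_index(drop=True))
print(out['State'].tolist())  # ['ny', 'ny', 'nc', 'nc', 'fl', 'fl', 'mi', 'mi']
```

This avoids the <code>groupby(...).apply(...)</code> round-trip entirely, which is what scrambled the state order in the attempt above.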
|
<python><pandas>
|
2024-07-29 04:52:44
| 3
| 11,560
|
Karthik S
|
78,805,181
| 14,761,117
|
ValueError: Unrecognized keyword arguments passed to LSTM: {'batch_input_shape'} in Keras
|
<p>I'm trying to build and train a stateful LSTM model using Keras in TensorFlow, but I keep encountering a ValueError when specifying the batch_input_shape parameter.</p>
<p>The error message:</p>
<pre><code>ValueError: Unrecognized keyword arguments passed to LSTM: {'batch_input_shape': (1, 1, 14)}
</code></pre>
<p>Here's a simplified version of my code:</p>
<pre><code>import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
# Load your data
file_path = 'path_to_your_file.csv'
data = pd.read_csv(file_path)
# Create 'date' column with the first day of each month
data['date'] = pd.to_datetime(data['tahun'].astype(str) + '-' + data['bulan'].astype(str) + '-01')
data['date'] = data['date'] + pd.offsets.MonthEnd(0)
data.set_index('date', inplace=True)
# Group by 'date' and sum the 'amaun_rm' column
df_sales = data.groupby('date')['amaun_rm'].sum().reset_index()
# Create a new dataframe to model the difference
df_diff = df_sales.copy()
df_diff['prev_amaun_rm'] = df_diff['amaun_rm'].shift(1)
df_diff = df_diff.dropna()
df_diff['diff'] = df_diff['amaun_rm'] - df_diff['prev_amaun_rm']
# Create new dataframe from transformation from time series to supervised
df_supervised = df_diff.drop(['prev_amaun_rm'], axis=1)
for inc in range(1, 13):
field_name = 'lag_' + str(inc)
df_supervised[field_name] = df_supervised['diff'].shift(inc)
# Adding moving averages
df_supervised['ma_3'] = df_supervised['amaun_rm'].rolling(window=3).mean().shift(1)
df_supervised['ma_6'] = df_supervised['amaun_rm'].rolling(window=6).mean().shift(1)
df_supervised['ma_12'] = df_supervised['amaun_rm'].rolling(window=12).mean().shift(1)
df_supervised = df_supervised.dropna().reset_index(drop=True)
df_supervised = df_supervised.fillna(df_supervised.mean())
# Split the data into train and test sets
train_set, test_set = df_supervised[0:-6].values, df_supervised[-6:].values
scaler = MinMaxScaler(feature_range=(-1, 1))
scaler = scaler.fit(train_set)
train_set_scaled = scaler.transform(train_set)
test_set_scaled = scaler.transform(test_set)
# Split into input and output
X_train, y_train = train_set_scaled[:, 1:], train_set_scaled[:, 0]
X_test, y_test = test_set_scaled[:, 1:], test_set_scaled[:, 0]
X_train = X_train.reshape((X_train.shape[0], 1, X_train.shape[1]))
X_test = X_test.reshape((X_test.shape[0], 1, X_test.shape[1]))
# Check the shape of X_train
print("X_train shape:", X_train.shape) # Should output (44, 1, 14)
# Define the LSTM model
model = Sequential()
model.add(LSTM(4, stateful=True, batch_input_shape=(1, X_train.shape[1], X_train.shape[2])))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
# Train the model
model.fit(X_train, y_train, epochs=100, batch_size=1, verbose=1, shuffle=False)
# Summarize the model
model.summary()
</code></pre>
<p><strong>What I have tried:</strong></p>
<ul>
<li>I verified the shape of <code>X_train</code> which is <code>(44, 1, 14)</code>.</li>
<li>I attempted to use <code>input_shape</code> instead of <code>batch_input_shape</code>, which led to different errors.</li>
<li>I ensured that the versions of TensorFlow and Keras are compatible.</li>
</ul>
<p><strong>System Information:</strong></p>
<ul>
<li><p>Python version: 3.12</p>
</li>
<li><p>TensorFlow version: 2.17.0</p>
</li>
<li><p>Keras version: 3.4.1</p>
</li>
</ul>
<p><strong>Question:</strong>
How can I correctly specify the <code>batch_input_shape</code> for my stateful LSTM model in Keras to avoid this error? Are there any specific version requirements or additional configurations needed?</p>
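For what it's worth, a hedged sketch of one possible fix: Keras 3 removed the legacy <code>batch_input_shape</code> keyword, and a fixed batch size is now declared through an explicit <code>Input</code> layer via its <code>batch_shape</code> argument. This is a sketch under that assumption, not a verified fix for this exact setup:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, LSTM

# Keras 3 sketch: declare the fixed batch via Input(batch_shape=...) instead
# of passing batch_input_shape to the LSTM layer, which Keras 3 rejects.
model = Sequential([
    Input(batch_shape=(1, 1, 14)),   # (batch, timesteps, features)
    LSTM(4, stateful=True),
    Dense(1),
])
model.compile(loss='mean_squared_error', optimizer='adam')
```

With this change the rest of the training loop (batch_size=1, shuffle=False) should be usable as posted.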
|
<python><keras><lstm>
|
2024-07-29 01:50:24
| 1
| 331
|
AmaniAli
|
78,804,934
| 2,697,895
|
How to make an Android service in Python?
|
<p>I want to build a simple Android app, like the <a href="https://pushover.net/" rel="nofollow noreferrer">PushOver app</a>, that runs a TCP server and receives text messages, which it logs and then shows as push notifications. This part is already done and working fine. But I want to receive messages even when the GUI app is closed. I know this is possible, because the PushOver app does it! I guess I may need a service... but I don't know how to make one, because this is my first Android app. As a development environment I use BeeWare (Toga/Briefcase) and Python on a Kubuntu machine. I activated USB debugging on the phone and I run the code directly on Android.</p>
<p><strong>app.py</strong></p>
<pre><code>import toga
from toga.style import Pack
from toga.style.pack import COLUMN, ROW
import threading, socket
Debug = True
# ----- Color codes ----------------------
RED = '\033[31m'
GREEN = '\033[32m'
YELLOW = '\033[33m'
MAGENTA = '\033[35m'
CYAN = '\033[36m'
WHITE = '\033[37m'
EXCEPT = '\033[38;5;147m' # light purple
RESET = '\033[0m'
EventColor = ['', GREEN, YELLOW, RED]
class TCPServer(threading.Thread):
def __init__(self, app):
super().__init__()
self.daemon = True
self.app = app
self.terminated = threading.Event()
self.start()
def Terminate(self):
if self.is_alive():
self.terminated.set()
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as ESocket:
try:
ESocket.settimeout(0.2)
ESocket.connect(('localhost', 12345))
except: pass
self.join()
def run(self):
def HandleClient(Conn):
with Conn:
message = Conn.recv(1024).decode()
if not message: return
self.app.loop.call_soon_threadsafe(self.app.LogMessage, message)
#self.app.SendNotif(message)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as SSocket:
try:
SSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
SSocket.bind(('192.168.0.21', 12345))
SSocket.listen(10)
if Debug:
print(GREEN + 'TCP Server is listening on port 12345...' + RESET)
try:
while not self.terminated.is_set():
Conn, Addr = SSocket.accept()
if self.terminated.is_set():
Conn.close(); break
HandleClient(Conn)
except Exception as E1:
if Debug: print(RED + f'TCP Server exception (listen): {E1}' + RESET)
finally:
if Debug: print(YELLOW + 'TCP Server has stopped listening.' + RESET)
except Exception as E2:
if Debug: print(RED + f'TCP Server exception (setup): {E2}' + RESET)
class Application(toga.App):
def startup(self):
self.on_exit = self.exit_handler
self.main_window = toga.MainWindow(title=self.formal_name)
self.log = toga.MultilineTextInput(readonly=True, style=Pack(flex=1))
main_box = toga.Box(children=[self.log], style=Pack(direction=COLUMN, padding=10))
self.main_window.content = main_box
self.main_window.show()
self.tcp_server = TCPServer(self)
def exit_handler(self, app):
try:
self.tcp_server.Terminate()
finally:
return True
def LogMessage(self, message):
self.log.value += message + '\n'
def SendNotif(self, message):
self.main_window.info_dialog('New Message', message)
def main():
return Application()
</code></pre>
|
<python><android><service><beeware>
|
2024-07-28 22:20:44
| 0
| 3,182
|
Marus Gradinaru
|
78,804,920
| 10,746,224
|
Django Admin TabularInline: How do I hide the object name of a M2M through model?
|
<p>How do I hide <strong><code>Unit_attribute object (3)</code></strong> from the admin display?</p>
<p><a href="https://i.sstatic.net/zOahrSx5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOahrSx5.png" alt="Django Admin" /></a></p>
<p><strong>admin.py:</strong></p>
<pre><code>from django.contrib import admin
from core.models import Attribute, Unit
class UnitAttributeInline(admin.TabularInline):
model = Unit.attributes.through
@admin.register(Unit)
class UnitAdmin(admin.ModelAdmin):
inlines = [UnitAttributeInline]
</code></pre>
<p><strong>models.py:</strong></p>
<pre><code>class Attribute(models.Model):
name = models.CharField(max_length=45)
class Unit(models.Model):
attributes = models.ManyToManyField(Attribute)
</code></pre>
|
<python><django><django-admin>
|
2024-07-28 22:13:23
| 1
| 16,425
|
Lord Elrond
|
78,804,834
| 208,080
|
Timeout parameter in subprocess does not work in chainlit app
|
<p>I have a script to build a Chainlit UI for my GraphRAG app running on Windows. The GraphRAG query runs fine in the terminal, although it takes about 120 seconds (screenshot attached). However, when I run this Chainlit script, the timeout=300 in subprocess.run does not work as expected: I get a 'can't reach server' error after approximately 60 seconds, even though the result is eventually returned (terminal screenshot also attached). How can I make the script wait until the response is received?</p>
<pre><code>import chainlit as cl
import subprocess
import shlex
@cl.on_chat_start
def start():
cl.user_session.set("history", [])
# cl.set_theme(cl.Theme(background="white"))
# cl.set_header("Duality Expert Chat")
@cl.on_message
async def main(message: cl.Message):
history = cl.user_session.get("history")
query = message.content
cmd = [
"python", "-m", "graphrag.query",
"--root", r"C:\Users\Lei Shang\Documents\Projects\LLM_RAG\GraphRAG_ollama_LMStudio_chainlit\graphRag\mistral-large\graphRag-mistral-large",
"--method", "global",
]
cmd.append(shlex.quote(query))
try:
print(cmd)
result = subprocess.run(cmd, capture_output=True, text=True, check=True, timeout=300)
print(result)
output = result.stdout
# extract content tailing "SUCCESS: Global Search Response:"
response = output.split("SUCCESS: Global Search Response:", 1)[-1].strip()
history.append((query, response))
cl.user_session.set("history", history)
await cl.Message(content=response).send()
except subprocess.CalledProcessError as e:
error_message = f"An error occurred: {e.stderr}"
await cl.Message(content=error_message).send()
if __name__ == "__main__":
cl.run()
</code></pre>
<p><a href="https://i.sstatic.net/4aAIrUOL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aAIrUOL.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Bcyslszu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bcyslszu.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/xlA4xjiI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xlA4xjiI.png" alt="enter image description here" /></a></p>
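One hedged guess at the cause (not confirmed against Chainlit internals): <code>subprocess.run</code> blocks the asyncio event loop inside the <code>async def main</code> handler, so the server cannot answer websocket keep-alives while the 120-second query runs, and the client gives up long before the 300-second timeout matters. A sketch that runs the subprocess without blocking the loop; the command here is a placeholder, not the real graphrag invocation:

```python
import asyncio
import sys

async def run_query(cmd, timeout=300):
    # Launch the subprocess through asyncio so the event loop keeps
    # servicing websocket heartbeats while the query executes.
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
    except asyncio.TimeoutError:
        proc.kill()
        raise
    return stdout.decode(), stderr.decode(), proc.returncode

# Standalone demonstration with a placeholder command; inside the Chainlit
# handler you would simply `await run_query(cmd)` instead of asyncio.run().
out, err, rc = asyncio.run(run_query([sys.executable, '-c', 'print("ok")']))
```

The handler's post-processing (splitting on "SUCCESS: Global Search Response:") would then operate on the returned stdout string as before.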
|
<python><chainlit><graphrag>
|
2024-07-28 21:15:24
| 0
| 1,296
|
Mavershang
|
78,804,625
| 446,786
|
Django Models - trying to get the "best of both worlds" when it comes to copying a model vs subclassing it
|
<p>Let's say I have a base class (Django model) called Character that looks something like</p>
<pre><code>class Character(models.Model):
strength = models.blabla
dex = models.blabla
...
</code></pre>
<p>I can set that up with its attributes and hook it up to Admin and everything is lovely.</p>
<p>Then I decide I want a NonPlayerCharacter model, which has everything the Character model has, plus a new field or two. I can think of <strong>at least two ways</strong> to do this, each with pros and cons.</p>
<p><strong>Option 1:</strong></p>
<pre><code># The basic obvious stupid-simple answer is just to copy the model and add the fields
class NonPlayerCharacter(models.Model):
strength = models.blabla
dex = models.blabla
...
faction = models.blabla
</code></pre>
<p>Pros: Everything is kept very simple and easy to manage, and if the models need to diverge in the future, that's easy too.</p>
<p>Cons: Obvious code duplication and keeping universal changes in sync. If I change a stat or add a method on one, I've got to duplicate it on the other. If I add another type of Character, like Mob, well then it triples the upkeep to keep the necessary parts in sync.</p>
<p><strong>Option 2:</strong></p>
<pre><code># The next most obvious solution is to subclass Character
class NonPlayerCharacter(Character):
faction = models.blabla
</code></pre>
<p>Pros: I don't have to worry about keeping my classes in sync if I add something or make a change. Each is only concerned with its own NEW attributes and methods.</p>
<p>Cons: Everywhere I query for Character will also, by default, pull every subclass of Character. I'm (somewhat) aware of how to filter that out on a query-by-query basis, but it sure would be nice if Character.objects just got me characters and left the rest to queries for that particular model/subclass.</p>
<p>Is there a handy method to get the best of both worlds without filtering Character each time, or am I just asking something unreasonable?</p>
|
<python><django><django-models>
|
2024-07-28 19:11:41
| 1
| 311
|
Josh
|
78,804,536
| 850,781
|
How to update python modules for numpy 2?
|
<p>On linux with pip the new numpy 2 seems to work with pandas fine:</p>
<pre><code>$ python3 -c 'import numpy as np; print(np.__version__); import pandas as pd; print(pd.__version__)'
2.0.1
2.2.2
</code></pre>
<p>however, on windows with miniconda I get</p>
<pre><code>$ ${localappdata}/miniconda3/envs/c312/python.exe -c 'import numpy as np; print(np,np.__version__); import pandas as pd; print(pd,pd.__version__)'
<module 'numpy' from '${localappdata}\\miniconda3\\envs\\c312\\Lib\\site-packages\\numpy\\__init__.py'> 2.0.1
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.1 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
Traceback (most recent call last): File "<string>", line 1, in <module>
File "${localappdata}\miniconda3\envs\c312\Lib\site-packages\pandas\__init__.py", line 26, in <module>
from pandas.compat import (
File "${localappdata}\miniconda3\envs\c312\Lib\site-packages\pandas\compat\__init__.py", line 27, in <module>
from pandas.compat.pyarrow import (
File "${localappdata}\miniconda3\envs\c312\Lib\site-packages\pandas\compat\pyarrow.py", line 8, in <module>
import pyarrow as pa
File "${localappdata}\miniconda3\envs\c312\Lib\site-packages\pyarrow\__init__.py", line 65, in <module>
import pyarrow.lib as _lib
AttributeError: _ARRAY_API not found
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.1 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
Traceback (most recent call last): File "<string>", line 1, in <module>
File "${localappdata}\miniconda3\envs\c312\Lib\site-packages\pandas\__init__.py", line 49, in <module>
from pandas.core.api import (
File "${localappdata}\miniconda3\envs\c312\Lib\site-packages\pandas\core\api.py", line 9, in <module>
from pandas.core.dtypes.dtypes import (
File "${localappdata}\miniconda3\envs\c312\Lib\site-packages\pandas\core\dtypes\dtypes.py", line 24, in <module>
from pandas._libs import (
File "${localappdata}\miniconda3\envs\c312\Lib\site-packages\pyarrow\__init__.py", line 65, in <module>
import pyarrow.lib as _lib
AttributeError: _ARRAY_API not found
<module 'pandas' from '${localappdata}\\miniconda3\\envs\\c312\\Lib\\site-packages\\pandas\\__init__.py'> 2.2.2
</code></pre>
<p>Reinstalling (<code>conda install --force-reinstall pandas</code>) did not help.</p>
<p>So, what do I need to do to get numpy2 working?</p>
|
<python><pandas><numpy>
|
2024-07-28 18:30:44
| 0
| 60,468
|
sds
|
78,804,323
| 525,865
|
bs4-approach to wikipedia-page: getting the infobox
|
<p>I am currently trying to apply a bs4 approach to a Wikipedia page: the results do not get stored in a DataFrame.</p>
<p>Since scraping Wikipedia is a very common technique, one that can be applied to many different jobs, I want to get the results back and store them in a DataFrame, but I ran into some issues.</p>
<p>As an example of a very common Wikipedia/bs4 job, we can take this one:</p>
<p>On this page we have more than 600 results, in sub-pages: url = "https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_in_Deutschland#Liste_der_St%C3%A4dte_in_Deutschland"</p>
<p>For a first experimental script I proceed like so: first I scrape the table from the Wikipedia page, and afterwards I convert it into a Pandas DataFrame. Make sure you have requests, beautifulsoup4, and pandas installed; you can install them using pip if you haven't already:</p>
<p>pip install requests beautifulsoup4 pandas</p>
<pre><code>import pandas as pd
# URL of the Wikipedia page
url = "https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_in_Deutschland#Liste_der_St%C3%A4dte_in_Deutschland"
table = pd.read_html(url, extract_links='all')[1]
base_url = 'https://de.wikipedia.org'
table = table.apply(lambda col: [v[0] if v[1] is None else f'{base_url}{v[1]}' for v in col])
links = list(table.iloc[:,0])
for link in links:
print('\n',link)
try:
df = pd.read_html(link)[0]
print(df)
except Exception as e:
print(e)
</code></pre>
<p>See what I get back: only two records instead of hundreds. By the way, I guess the best way would be to collect everything in a DataFrame and/or store it.</p>
<pre><code>Document is empty
https://de.wikipedia.org/wiki/Aach_(Hegau)
Wappen \
0 NaN
1 NaN
2 Basisdaten
3 Koordinaten:
4 Bundesland:
5 Regierungsbezirk:
6 Landkreis:
7 Höhe:
8 Fläche:
9 Einwohner:
10 Bevölkerungsdichte:
11 Postleitzahl:
12 Vorwahl:
13 Kfz-Kennzeichen:
14 Gemeindeschlüssel:
15 LOCODE:
16 Adresse der Stadtverwaltung:
17 Website:
18 Bürgermeister:
19 Lage der Stadt Aach im Landkreis Konstanz
20 Karte
Deutschlandkarte
0 NaN
1 NaN
2 Basisdaten
3 47° 51′ N, 8° 51′ OKoordinaten: 47° 51′ N, 8° ...
4 Baden-Württemberg
5 Freiburg
6 Konstanz
7 545 m ü. NHN
8 10,68 km2
9 2384 (31. Dez. 2022)[1]
10 223 Einwohner je km2
11 78267
12 07774
13 KN, STO
14 08 3 35 001
15 DE AAC
16 Hauptstraße 16 78267 Aach
17 www.aach.de
18 Manfred Ossola
19 Lage der Stadt Aach im Landkreis Konstanz
20 Karte
</code></pre>
<p>note: we have several hunderds records there:
<a href="https://i.sstatic.net/jy0R4QoF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jy0R4QoF.png" alt="enter image description here" /></a></p>
<p>see the infobox: i am wanting to fetch the data of the infobox</p>
<p><a href="https://i.sstatic.net/FaQR1eVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FaQR1eVo.png" alt="enter image description here" /></a></p>
<p><strong>update, what is aimed at:</strong> how to get the full results, stored in a df, containing all the data in the infobox (see image above), with the contact info etc.</p>
<p><strong>update2:</strong></p>
<p>the overview - page: <a href="https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_in_Deutschland#Liste_der_St%C3%A4dte_in_Deutschland" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_in_Deutschland#Liste_der_St%C3%A4dte_in_Deutschland</a></p>
<p>it takes us to approx 1000 sub-pages: like the following</p>
<p>Aach (Hegau): <a href="https://de.wikipedia.org/wiki/Aach_(Hegau)" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Aach_(Hegau)</a>
Aachen: <a href="https://de.wikipedia.org/wiki/Aachen" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Aachen</a>
Aalen: <a href="https://de.wikipedia.org/wiki/Aalen" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Aalen</a></p>
<p>see a result- of the so called "info-box": <a href="https://de.wikipedia.org/wiki/Babenhausen_(Hessen)" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Babenhausen_(Hessen)</a>
Babenhausen (Hessen)</p>
<pre><code>+----------------------+--------------------------------------------------------------+
| | |
+----------------------+--------------------------------------------------------------+
| koordinaten: | ♁49° 58′ N, 8° 57′ OKoordinaten: 49° 58′ N, 8° 57′ O | | OSM |
| Bundesland: | Hessen |
| Regierungsbezirk: | Darmstadt |
| Landkreis: | Darmstadt-Dieburg |
| Höhe: | 124 m ü. NHN |
| Fläche: | 66,85 km2 |
| Einwohner: | 17.579 (31. Dez. 2023)[1] |
| Bevölkerungsdichte: | 263 Einwohner je km2 |
| Postleitzahl: | 64832 |
| Vorwahl: | 06073 |
| Kfz-Kennzeichen: | DA, DI |
| Gemeindeschlüssel: | 06 4 32 002 |
| Stadtgliederung: | 6 Stadtteile |
| Adresse der | |
| Stadtverwaltung: | Rathaus |
| Marktplatz 2 | |
| 64832 Babenhausen | |
| Website: | www.babenhausen.de |
| Bürgermeister: | Dominik Stadler (parteilos) |
+----------------------+--------------------------------------------------------------+
</code></pre>
<p><a href="https://de.wikipedia.org/wiki/Bacharach" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Bacharach</a>
<a href="https://de.wikipedia.org/wiki/Backnang" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Backnang</a></p>
<p><strong>update3:</strong> if I run this code to fetch 300 records, it works well; if I run it to fetch 2400, it fails:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
def get_info(city_url: str) -> dict:
info_data = {}
response = requests.get(city_url)
soup = BeautifulSoup(response.text, 'lxml')
for x in soup.find('tbody').find_all(
lambda tag: tag.name == 'tr' and tag.get('class') == ['hintergrundfarbe-basis']):
if not x.get('style'):
if 'Koordinaten' in x.get_text():
info_data['Koordinaten'] = x.findNext('span', class_='coordinates').get_text()
else:
info_data[x.get_text(strip=True).split(':')[0]] = x.get_text(strip=True).split(':')[-1]
info_data['Web site'] = soup.find('a', {'title':'Website'}).findNext('a').get('href')
return info_data
cities = []
response = requests.get('https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_in_Deutschland#Liste_der_St%C3%A4dte_in_Deutschland')
soup = BeautifulSoup(response.text, 'lxml')
for city in soup.find_all('dd'):  # optionally limit, e.g. [:2500]
city_url = 'https://de.wikipedia.org' + city.findNext('a').get('href')
result = {'City': city.get_text(), 'URL': 'https://de.wikipedia.org' + city.findNext('a').get('href')}
result |= get_info(city_url)
cities.append(result)
df = pd.DataFrame(cities)
print(df.to_string())
------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-28-4391c852fd75> in <cell line: 24>()
25 city_url = 'https://de.wikipedia.org' + city.findNext('a').get('href')
26 result = {'City': city.get_text(), 'URL': 'https://de.wikipedia.org' + city.findNext('a').get('href')}
---> 27 result |= get_info(city_url)
28 cities.append(result)
29 df = pd.DataFrame(cities)
<ipython-input-28-4391c852fd75> in get_info(city_url)
15 else:
16 info_data[x.get_text(strip=True).split(':')[0]] = x.get_text(strip=True).split(':')[-1]
---> 17 info_data['Web site'] = soup.find('a', {'title':'Website'}).findNext('a').get('href')
18 return info_data
19
AttributeError: 'NoneType' object has no attribute 'findNext'
</code></pre>
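The traceback in update3 comes from pages whose infobox has no "Website" anchor, so <code>soup.find(...)</code> returns <code>None</code> and the chained <code>.findNext</code> raises. A hedged sketch of a None-tolerant lookup; it only assumes the soup object offers the same <code>find</code>/<code>findNext</code>/<code>get</code> calls used in the post:

```python
def safe_website(soup):
    # Return the infobox website href, or None if the page has no
    # "Website" anchor, instead of raising AttributeError mid-crawl.
    label = soup.find('a', {'title': 'Website'})
    if label is None:
        return None
    link = label.findNext('a')
    if link is None:
        return None
    return link.get('href')
```

In <code>get_info</code>, writing <code>info_data['Web site'] = safe_website(soup)</code> would let the loop continue past the occasional page without a website row, which is presumably why 300 records succeed and 2400 do not.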
|
<python><pandas><beautifulsoup><wikipedia>
|
2024-07-28 16:54:03
| 1
| 1,223
|
zero
|
78,803,752
| 14,838,954
|
My PyTorch Model in Kaggle uses 100% CPU and 0% GPU During Training
|
<p>I'm trying to fine-tune a PyTorch classification model to classify plant disease images. I have properly initialized the CUDA device and sent the model, train, validation, and test data to the device. However, when training the model, it uses 100% CPU and 0% GPU. Why is this happening?</p>
<pre class="lang-py prettyprint-override"><code>device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
</code></pre>
<pre class="lang-py prettyprint-override"><code>def train_model(model, train_loader, val_loader, criterion, optimizer, num_epochs, patience):
train_losses, val_losses = [], []
train_accuracies, val_accuracies = [], []
best_val_loss = np.inf
patience_counter = 0
lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
for epoch in range(num_epochs):
model.train()
running_loss, running_corrects, total = 0.0, 0, 0
for inputs, labels in tqdm(train_loader):
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item() * inputs.size(0)
_, preds = torch.max(outputs, 1)
running_corrects += torch.sum(preds == labels.data)
total += labels.size(0)
epoch_loss = running_loss / total
epoch_acc = running_corrects.double() / total
train_losses.append(epoch_loss)
train_accuracies.append(epoch_acc.item())
val_loss, val_acc = evaluate_model(model, val_loader, criterion)
val_losses.append(val_loss)
val_accuracies.append(val_acc)
print(f"Epoch {epoch}/{num_epochs-1}, Train Loss: {epoch_loss:.4f}, Train Acc: {epoch_acc:.4f}, Val Loss: {val_loss:.4f}, Val Acc: {val_acc:.4f}")
if val_loss < best_val_loss:
best_val_loss = val_loss
torch.save(model.state_dict(), 'best_model.pth')
patience_counter = 0
else:
patience_counter += 1
if patience_counter >= patience:
print("Early stopping")
break
lr_scheduler.step()
return train_losses, train_accuracies, val_losses, val_accuracies
</code></pre>
<p>Here is my notebook: <a href="https://www.kaggle.com/code/nirmalsankalana/efficientnet-with-augmentation" rel="nofollow noreferrer">EfficientNet with Augmentation</a></p>
<p>Edit:
I resized the dataset to 256x256 and ran the code. Checking my GPU usage, surprisingly, the usage periodically goes up and down. As @kale-kundert said, there is a performance bottleneck in loading the data and applying the augmentation pipeline.</p>
<p><a href="https://i.sstatic.net/3GsNZVVl.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GsNZVVl.gif" alt="enter image description here" /></a></p>
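One hedged way to confirm a data-loading bottleneck is to split wall time between fetching batches and running the training step. This probe is framework-agnostic (the loader can be any iterable, so a PyTorch DataLoader works too); the toy loader and timings below are illustrative only:

```python
import time

def profile_loop(loader, step_fn):
    # Split wall time between waiting on the loader (data_time) and
    # running the step function (compute_time); if data_time dominates,
    # the input pipeline, not the GPU, is the bottleneck.
    data_time = compute_time = 0.0
    t0 = time.perf_counter()
    for batch in loader:
        t1 = time.perf_counter()
        data_time += t1 - t0
        step_fn(batch)
        t0 = time.perf_counter()
        compute_time += t0 - t1
    return data_time, compute_time

# Toy demonstration with a deliberately slow "loader":
def slow_loader():
    for i in range(3):
        time.sleep(0.02)   # simulate slow augmentation / disk IO
        yield i

data_t, compute_t = profile_loop(slow_loader(), lambda batch: time.sleep(0.001))
```

If <code>data_time</code> dominates, raising the DataLoader's <code>num_workers</code> and enabling <code>pin_memory</code> are the usual first levers to try (an assumption about typical setups, not a guaranteed fix for this notebook).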
|
<python><deep-learning><pytorch><gpu><kaggle>
|
2024-07-28 12:24:12
| 1
| 485
|
Nirmal Sankalana
|
78,803,628
| 7,514,722
|
Trying pybind11 in window: undefined reference to __imp_PyGILState_Check
|
<p>I'm trying to get pybind11 to work on Windows. I've installed pybind11 using conda-forge, then write the simplest code:</p>
<pre><code>#include "pybind11/pybind11.h"
namespace py = pybind11;
</code></pre>
<p>compile it with :</p>
<pre><code>g++ -std=c++17 -O2 -mavx -IC:/Users/beng_/anaconda3/Lib/site-packages/pybind11/include -IC:/Users/beng_/anaconda3/include/pybind11_global/ -IC:/Users/beng_/anaconda3/include/ - pb_test.cpp -o pb_test
</code></pre>
<p>The -I path avoids the earlier problem of Python.h not being found. But now I get a bunch of errors of the undefined reference to '__imp_Py*' kind. Too many to count. My guess is that I am missing a linker flag; after some searching, the suggested hint seems to involve the /anaconda3/libs folder with all the DLLs. If I try that with -lC:/Users/beng_/anaconda3/libs/python311 or python3, none of it works. It will instead complain:</p>
<pre><code>cannot find -lC:/Users/beng_/anaconda3/libs/python311
collect2.exe: error: ld returned 1 exit status
</code></pre>
<p>But both python3.dll and python311.dll are in that folder.</p>
<p>I've checked posts like the following, but nothing has helped me so far:
<a href="https://stackoverflow.com/questions/52728091/pybind11-error-undefined-reference-to-py-getversion">Pybind11 error undefined reference to `Py_GetVersion'</a></p>
<p>Please help. I'm seriously stuck. Not sure my suspicion about linking error is even correct.</p>
|
<python><c++><pybind11>
|
2024-07-28 11:24:57
| 1
| 335
|
Ong Beng Seong
|
78,803,612
| 10,976,122
|
How can I use custom variables in Loguru in Python?
|
<p>I don't want the structure that Loguru gives me. I want to use a custom serialized JSON base structure, for example:</p>
<pre><code>{
    "time": "5:30pm",
    "error": "Type error"
}
</code></pre>
<p>I don't want any extra params.</p>
<p>From the source code I think it comes from this function:</p>
<pre><code> @staticmethod
def _serialize_record(text, record):
exception = record["exception"]
if exception is not None:
exception = {
"type": None if exception.type is None else exception.type.__name__,
"value": exception.value,
"traceback": bool(exception.traceback),
}
serializable = {
"text": text,
"record": {
"elapsed": {
"repr": record["elapsed"],
"seconds": record["elapsed"].total_seconds(),
},
"exception": exception,
"extra": record["extra"],
"file": {"name": record["file"].name, "path": record["file"].path},
"function": record["function"],
"level": {
"icon": record["level"].icon,
"name": record["level"].name,
"no": record["level"].no,
},
"line": record["line"],
"message": record["message"],
"module": record["module"],
"name": record["name"],
"process": {"id": record["process"].id, "name": record["process"].name},
"thread": {"id": record["thread"].id, "name": record["thread"].name},
"time": {"repr": record["time"], "timestamp": record["time"].timestamp()},
},
}
return json.dumps(serializable, default=str, ensure_ascii=False) + "\n"
</code></pre>
<p>From handler.</p>
<p>I tried overriding it, but was unable to do it.</p>
<p>Can anyone give me a better solution to achieve this?</p>
<p>I am using it in FastAPI.</p>
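<p>For context, the custom structure I want could be produced by a plain serializer function over the record; the field choices below are mine, and the <code>logger.patch</code> wiring in the comment is the custom-serialization recipe from the Loguru docs. The serializer itself is testable with a stub record:</p>

```python
import json
from datetime import datetime

# Build exactly the JSON you want from a Loguru record (a dict-like object
# with "time", "message", etc.). The chosen fields are illustrative.
def serialize(record):
    return json.dumps({
        "time": record["time"].strftime("%I:%M%p").lstrip("0").lower(),
        "error": record["message"],
    })

# With Loguru, wire it in via the documented patching recipe (sketch):
#   def patching(record):
#       record["extra"]["serialized"] = serialize(record)
#   logger = logger.patch(patching)
#   logger.add(sys.stderr, format="{extra[serialized]}")

# Stub record for illustration:
record = {"time": datetime(2024, 7, 28, 17, 30), "message": "Type error"}
print(serialize(record))  # {"time": "5:30pm", "error": "Type error"}
```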
|
<python><logging><fastapi><loguru>
|
2024-07-28 11:20:53
| 2
| 350
|
shaswat kumar
|
78,803,573
| 694,360
|
Manage packages inside a conda environment exclusively throught pip
|
<p>I would like to use conda as environment manager (in a Windows system) and pip as package manager.</p>
<p>As suggested <a href="https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#pip-in-env" rel="nofollow noreferrer">here</a>, using conda after pip should be avoided; moreover</p>
<blockquote>
<p>Only after conda has been used to install as many packages as possible should pip be used to install any remaining software.</p>
</blockquote>
<p>If (in a conda environment) I cannot use conda after pip, the only solution is to avoid installing anything with conda and start using pip from the beginning; so I created a totally empty conda environment but, when I install python, conda tries to install many additional packages:</p>
<pre><code>(fem) C:\Users\UP>conda install python
Retrieving notices: ...working... done
Channels:
- defaults
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: done
## Package Plan ##
environment location: C:\Users\UP\miniconda3\envs\fem
added / updated specs:
- python
The following NEW packages will be INSTALLED:
bzip2 pkgs/main/win-64::bzip2-1.0.8-h2bbff1b_6
ca-certificates pkgs/main/win-64::ca-certificates-2024.7.2-haa95532_0
expat pkgs/main/win-64::expat-2.6.2-hd77b12b_0
libffi pkgs/main/win-64::libffi-3.4.4-hd77b12b_1
openssl pkgs/main/win-64::openssl-3.0.14-h827c3e9_0
pip pkgs/main/win-64::pip-24.0-py312haa95532_0
python pkgs/main/win-64::python-3.12.4-h14ffc60_1
setuptools pkgs/main/win-64::setuptools-69.5.1-py312haa95532_0
sqlite pkgs/main/win-64::sqlite-3.45.3-h2bbff1b_0
tk pkgs/main/win-64::tk-8.6.14-h0416ee5_0
tzdata pkgs/main/noarch::tzdata-2024a-h04d1e81_0
vc pkgs/main/win-64::vc-14.2-h2eaa2aa_4
vs2015_runtime pkgs/main/win-64::vs2015_runtime-14.29.30133-h43f2093_4
wheel pkgs/main/win-64::wheel-0.43.0-py312haa95532_0
xz pkgs/main/win-64::xz-5.4.6-h8cc25b3_1
zlib pkgs/main/win-64::zlib-1.2.13-h8cc25b3_1
Proceed ([y]/n)?
</code></pre>
<p>Is it possible to install a bare Python in a conda environment (with or without conda), in order to use exclusively pip as the package manager inside that conda environment?</p>
<p>In other words <strong>I would like to use conda (which is both an environment and package manager) only as environment manager and pip as package manager</strong>.</p>
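<p>To make the goal concrete, the workflow I am aiming for would look roughly like this (a sketch; note that most of the packages listed above are dependencies of the <code>python</code> package itself, so some of them appear to be unavoidable):</p>

```shell
# Create an env whose only explicit spec is python, then switch to pip
conda create -n fem python=3.12
conda activate fem
python -m pip install numpy scipy   # example packages; manage everything via pip from here on
```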
|
<python><pip><conda>
|
2024-07-28 10:59:21
| 1
| 5,750
|
mmj
|
78,803,555
| 4,340,793
|
FastAPI dependency injection by class type
|
<p>I'm familiar with dependency injection, and I was trying to do something I have seen in the past in other frameworks (Spring's Autowired annotation).</p>
<p>I have the following 2 classes :</p>
<pre><code>class MyServerConfig:
    def __init__(self):
        serviceA_host = getenv("SERVICE_A_HOST")
        serviceA_port = getenv("SERVICE_A_PORT")
        self._serviceA_addr = f"{serviceA_host}:{serviceA_port}"
</code></pre>
<pre><code>class DependencyContainer:
    def __init__(self, config: MyServerConfig):
        serviceA_channel = grpc.insecure_channel(config._serviceA_addr)
        self._serviceA_client = SomeGrpcStub(serviceA_channel)

    def service_a_client(self):
        return self._serviceA_client
</code></pre>
<p>In my <code>main.py</code> I initialize both classes and pass them as dependencies to my routes :</p>
<pre><code> config = MyServerConfig()
dependency_container = DependencyContainer(config)
</code></pre>
<p>And I tried to pass them as dependencies to my inner routes :</p>
<pre><code> def get_dependency_container() -> DependencyContainer:
return dependency_container
def get_server_config() -> MyServerConfig:
return config
shared_dependencies = [
Depends(get_server_config),
Depends(get_dependency_container),
]
app.include_router(
somemodule.products_router_v1(),
dependencies=shared_dependencies,
)
</code></pre>
<p>My router code:</p>
<pre><code>def products_router_v1():
    products_router = APIRouter(
        prefix="/products",
    )

    @products_router.post(
        "",
    )
    def register_product(
        save_request: SaveProductRequest,
        http_request: Request,
        dependency_container: DependencyContainer = Depends(),
    ) -> Response:
        ...
</code></pre>
<p>I was hoping that FastAPI would understand that <code>DependencyContainer</code> is available in the route dependencies and would inject it, but it doesn't do that.</p>
<p>My goal is to create the clients in one place (main.py) and inject the relevant clients to the relevant endpoints. I prefer to create globals only in my main.py and not in other modules.</p>
<p>With the current code I'm getting the following exception:</p>
<blockquote>
<p>fastapi.exceptions.FastAPIError: Invalid args for response field! Hint: check that <class 'x.x.x.x.MyServerConfig'> is a valid Pydantic field type.</p>
</blockquote>
<p>Is there a way to implement this injection via FastAPI?</p>
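<p>For reference, the variant I have not yet tried is a sketch based on FastAPI's documented pattern of passing a callable to <code>Depends</code> — a bare <code>Depends()</code> makes FastAPI try to construct the annotated class itself, which fails here. It assumes <code>get_dependency_container</code> is importable where the router is defined; the other names come from the code above:</p>

```python
from fastapi import Depends, Request, Response

def register_product(
    save_request: SaveProductRequest,
    http_request: Request,
    # Pass the provider function explicitly instead of a bare Depends():
    dependency_container: DependencyContainer = Depends(get_dependency_container),
) -> Response:
    ...
```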
|
<python><dependency-injection><fastapi>
|
2024-07-28 10:51:15
| 0
| 4,140
|
JeyJ
|
78,803,339
| 7,377,543
|
Cannot read excel file in Pandas as required
|
<ol>
<li>I am learning <code>stack()</code>/ <code>unstack()</code> in Pandas</li>
<li>Link to the excel file is <a href="https://github.com/codebasics/py/blob/master/pandas/12_stack/stocks.xlsx" rel="nofollow noreferrer">here</a>. The data in the file looks as under:</li>
</ol>
<p><a href="https://i.sstatic.net/WTy3tdwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WTy3tdwX.png" alt="theexceldata" /></a></p>
<ol start="3">
<li><p>My code is as under:</p>
<pre><code> import pandas as pd
df = pd.read_excel("stocks.xlsx",header=[0,1]) # on local machine
df
</code></pre>
</li>
<li><p>The output looks as under:</p>
</li>
</ol>
<p><a href="https://i.sstatic.net/6Hyhow1B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6Hyhow1B.png" alt="theoutput" /></a></p>
<ol start="5">
<li>The required output is as under:</li>
</ol>
<p><a href="https://i.sstatic.net/GsF5nqgQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GsF5nqgQ.png" alt="therequiredoutput" /></a></p>
<p><strong>The question:</strong></p>
<ul>
<li><p>Why am I not getting the required output? What is wrong with the code?</p>
</li>
<li><p>Since I am not getting required output, I cannot work further on the data.</p>
</li>
<li><p>Please guide me. Thanks.</p>
</li>
</ul>
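<p>To rule out my environment, here is a self-contained version of the two-row-header pattern (using <code>read_csv</code> with a string buffer instead of <code>read_excel</code> so no file is needed; the data is made up). If the Excel file's merged header cells are the problem, the first data column may also need <code>index_col=0</code>:</p>

```python
import io
import pandas as pd

csv = io.StringIO(
    "Price,Price,P/E,P/E\n"   # header row 0 (the measure)  -- made-up data
    "FB,GOOG,FB,GOOG\n"       # header row 1 (the ticker)
    "150,800,30,28\n"
    "153,820,31,29\n"
)
df = pd.read_csv(csv, header=[0, 1])  # two header rows -> MultiIndex columns
print(df.columns.nlevels)             # 2
stacked = df.stack(level=1)           # move the ticker level into the row index
```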
|
<python><excel><pandas>
|
2024-07-28 08:51:53
| 0
| 430
|
Nizamuddin Shaikh
|
78,803,329
| 1,371,666
|
default maximized window in tkinter with red background
|
<p>I am using Python 3.11.9 on Windows 11 (Home edition).<br>
There is a table inside a scrollable frame.<br>
Here is the simplified code</p>
<pre><code>import tkinter as tk
class Table(tk.Frame):
def __init__(self, master, header_labels:tuple, *args, **kwargs):
tk.Frame.__init__(self, master, *args, **kwargs)
# configuration for all Labels
# easier to maintain than directly inputting args
self.lbl_cfg = {
'master' : self,
'foreground' : 'blue',
'relief' : 'raised',
'font' : 'Arial 16 bold',
'padx' : 0,
'pady' : 0,
'borderwidth': 1,
'width' : 11,
}
self.headers = []
self.rows = []
for col, lbl in enumerate(header_labels):
self.grid_columnconfigure(col, weight=1)
# make and store header
(header := tk.Label(text=lbl, **self.lbl_cfg)).grid(row=0, column=col, sticky='nswe')
self.headers.append(header)
def add_row(self, desc:str, quantity:int, rate:float, amt:float, pending:bool) -> None:
self.rows.append([])
for col, lbl in enumerate((desc, quantity, rate, amt, pending), 1):
(entry := tk.Label(text=lbl, **self.lbl_cfg)).grid(row=len(self.rows), column=col, sticky='nswe')
self.rows[-1].append(entry)
class Application(tk.Tk):
def __init__(self, title:str="Sample Application", x:int=0, y:int=0, **kwargs):
tk.Tk.__init__(self)
self.title(title)
self.config(**kwargs)
header_labels = ('', 'Description', 'Quantity', 'Rate', 'Amt', 'pending')
self.gui_btn_look = {
'foreground' : 'blue',
'relief' : 'groove',
'font' : 'Arial 20 bold',
'padx' : 5,
'pady' : 0,
}
self.content_frame=tk.Frame(self,highlightbackground="black",highlightthickness=1)
self.content_frame.grid(column=0,row=0,sticky='news')
self.canvas=tk.Canvas(self.content_frame)
scrollbar=tk.Scrollbar(self.content_frame,orient="vertical",command=self.canvas.yview,width=50)
self.canvas.configure(yscrollcommand=scrollbar.set)
self.table_frame=tk.Frame(self.canvas)
self.table_frame.grid(column=0,row=0,sticky='news')
self.table_frame.bind("<Configure>",lambda e:self.canvas.configure(scrollregion=self.canvas.bbox("all")))
self.content_frame.columnconfigure(0,weight=2)
self.content_frame.rowconfigure(0,weight=2)
self.canvas.create_window((0, 0),window=self.table_frame,anchor="nw")
self.frame_id=self.canvas.create_window((0,0),window=self.table_frame,anchor="nw")
self.canvas.grid(row=0,column=0,sticky="nswe")
self.canvas.bind_all("<MouseWheel>",self._on_mousewheel)
scrollbar.grid(row=0,column=1,sticky="ns")
self.table_frame.trial_var=0
self.table = Table(self.table_frame,header_labels,bg='red')
self.table.grid(row=0,column=0,sticky='nswe')
self.keyboard_frame=tk.Frame(self,highlightbackground="black",highlightthickness=1)
self.keyboard_frame.grid(column=1,row=0,sticky='',padx=(0,10))
self.grid_rowconfigure(0, weight=1)
self.grid_columnconfigure(0, weight=1)
for numbers in range(1,11):
self.add_number_button=tk.Button(self.keyboard_frame,text=str(numbers%10),**self.gui_btn_look)
self.add_number_button.grid(column=int(numbers-1)%3,row=int((numbers-1)/3))
self.clear_button=tk.Button(self.keyboard_frame,text='Del',**self.gui_btn_look)
self.clear_button.grid(column=1,row=3,columnspan=2,sticky='e')
self.enter_button=tk.Button(self.keyboard_frame,text='Enter',**self.gui_btn_look)
self.enter_button.grid(column=0,row=4,columnspan=3,sticky='')
#maximizing table cell
self.grid_rowconfigure(0, weight=1)
self.grid_columnconfigure(0, weight=1)
# update so we can get the current dimensions
self.update_idletasks()
self.geometry(f'{self.winfo_screenwidth()}x{self.winfo_screenheight()}+{0}+{0}')
# test
self.table.add_row("A", 1, 2.0, 3, True)
self.table.add_row("B", 4, 5.0, 6, False)
self.table.add_row("C", 7, 8.0, 9, True)
self.table.add_row("D", 10, 11.0, 12, False)
self.table.add_row("A", 1, 2.0, 3, True)
self.table.add_row("B", 4, 5.0, 6, False)
self.table.add_row("C", 7, 8.0, 9, True)
self.table.add_row("D", 10, 11.0, 12, False)
self.table.add_row("A", 1, 2.0, 3, True)
self.table.add_row("B", 4, 5.0, 6, False)
self.table.add_row("C", 7, 8.0, 9, True)
self.table.add_row("D", 10, 11.0, 12, False)
self.table.add_row("A", 1, 2.0, 3, True)
self.table.add_row("B", 4, 5.0, 6, False)
self.table.add_row("C", 7, 8.0, 9, True)
self.table.add_row("D", 10, 11.0, 12, False)
self.table.add_row("A", 1, 2.0, 3, True)
self.table.add_row("B", 4, 5.0, 6, False)
self.table.add_row("C", 7, 8.0, 9, True)
self.table.add_row("D", 10, 11.0, 12, False)
self.table.add_row("A", 1, 2.0, 3, True)
self.table.add_row("B", 4, 5.0, 6, False)
self.table.add_row("C", 7, 8.0, 9, True)
self.table.add_row("D", 10, 11.0, 12, False)
self.table.add_row("A", 1, 2.0, 3, True)
self.table.add_row("B", 4, 5.0, 6, False)
self.table.add_row("C", 7, 8.0, 9, True)
self.table.add_row("D", 10, 11.0, 12, False)
def _on_mousewheel(self,event):
print('scroll in table')
self.canvas.yview_scroll(int(-1 * (event.delta / 120)), "units")
if __name__ == "__main__":
Application(title="My Application").mainloop()
</code></pre>
<p>I want to do the following:<br>
(1) When I run it, the window should appear maximized, meaning there will be a restore button between the close and minimize buttons in the top-right corner. I think <code>self.winfo_screenheight()</code> does not subtract the current Windows taskbar height from the Y screen resolution.<br>
(2) I tried <code>bg='red'</code> but it did not make the whole window background red. I want a red background everywhere instead of the grey you currently see. Then I will add code to change all the red to white when a particular button is clicked.<br>
<br>Thanks.</p>
|
<python><windows><tkinter>
|
2024-07-28 08:45:11
| 0
| 481
|
user1371666
|
78,803,308
| 13,658,665
|
django consistently fails to load css on deployment
|
<p>I was deploying a simple Django application to Kubernetes and encountered the error of not being able to load CSS and JS. I have tried several times to change the Dockerfile and the volume path, but I can't fix it.
<a href="https://i.sstatic.net/YjJFsHdx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjJFsHdx.png" alt="enter image description here" /></a></p>
<p><strong>I've spent a week and still have not found the error; am I overlooking something?</strong></p>
<p>Here are my files and code</p>
<p>main/settings.py</p>
<pre><code>BASE_DIR = Path(__file__).resolve().parent.parent
env.read_env(os.path.join(BASE_DIR, 'main/.env'))
....
STATIC_URL = '/static/'
STATIC_ROOT = '/app/static'
</code></pre>
<p>Dockerfile</p>
<pre><code># Use the official Python image as the base image
ARG ARCH=
FROM ${ARCH}python:3.12-slim as builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --default-timeout=100 --no-cache-dir -r requirements.txt
COPY . .
FROM ${ARCH}python:3.12-slim
WORKDIR /app
COPY --from=builder /app /app
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
EXPOSE 8080
ENV PYTHONUNBUFFERED 1
# Specify the command to run your Django app
CMD ["gunicorn", "--workers=3", "--bind=0.0.0.0:8080", "main.wsgi:application" ]
</code></pre>
<p>deployment.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: static-files-pv
spec:
capacity:
storage: 0.5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /mnt/static-files
storageClassName: do-block-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: static-files-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 0.5Gi
storageClassName: do-block-storage
---
apiVersion: v1
kind: ConfigMap
metadata:
name: auth-db-config
data:
DB_NAME: auth
DB_HOST: patroni.default.svc.cluster.local
DB_PORT: "5432"
---
apiVersion: v1
kind: Secret
metadata:
name: auth-db-secret
type: Opaque
data:
DB_USER: Y***************y
DB_PASSWORD: S***************4=
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: xxxx.com/auth/app:latest # my private registry
env:
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: auth-db-config
key: DB_NAME
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: auth-db-config
key: DB_HOST
- name: DB_PORT
valueFrom:
configMapKeyRef:
name: auth-db-config
key: DB_PORT
- name: DB_USER
valueFrom:
secretKeyRef:
name: auth-db-secret
key: DB_USER
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: auth-db-secret
key: DB_PASSWORD
ports:
- containerPort: 8080
volumeMounts:
- name: static-files
mountPath: /app/static
volumes:
- name: static-files
persistentVolumeClaim:
claimName: static-files-pvc
imagePullSecrets:
- name: auth
---
apiVersion: v1
kind: Service
metadata:
name: auth
spec:
type: ClusterIP
selector:
app: auth
ports:
- protocol: TCP
port: 80
targetPort: 8080
</code></pre>
<p>ingress-nginx.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web-ingress
annotations:
cert-manager.io/issuer: letsencrypt-nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
tls:
- hosts:
- xxxx.com
secretName: letsencrypt-nginx
rules:
- host: xxxx.com
http:
paths:
- path: /static/
pathType: Prefix
backend:
service:
name: auth
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
name: auth
port:
number: 80
</code></pre>
<p>The static files do exist in the container:</p>
<pre><code>root@auth-7794b4dc86-h7ww4:/app# ls /app/static
admin rest_framework
root@auth-7794b4dc86-h7ww4:/app#
</code></pre>
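<p>For what it's worth, one alternative I have seen suggested is WhiteNoise (a different technique from the volume approach, assuming the package is added to requirements): it serves collected static files from the Gunicorn workers and sidesteps the Ingress <code>/static/</code> path entirely. Note also that the <code>rewrite-target: /</code> annotation above rewrites every matched path, including <code>/static/...</code>, to <code>/</code>, which may itself be the cause.</p>

```python
# settings.py fragment (sketch) -- WhiteNoise serves files collected by
# `python manage.py collectstatic --noinput`, which would run at image build time.
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",  # directly after SecurityMiddleware
    # ... the rest of the middleware ...
]
STATIC_URL = "/static/"
STATIC_ROOT = "/app/static"
STORAGES = {
    "staticfiles": {"BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage"},
}
```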
|
<python><django><kubernetes><dockerfile><kubernetes-ingress>
|
2024-07-28 08:34:51
| 1
| 394
|
HappyKoala
|
78,803,220
| 827,927
|
How to efficiently check whether x appears before y in a list
|
<p>Given a list <code>L</code> in Python, and two elements <code>x</code> and <code>y</code> that are guaranteed to be in <code>L</code>, I wish to know whether <code>x</code> appears before <code>y</code> in <code>L</code>.</p>
<p>If there is only one pair, I can simply loop over <code>L</code> until I find either <code>x</code> or <code>y</code>. This takes time linear in the length of <code>L</code>. But, I have to answer many such queries for the same <code>L</code> (for different <code>x</code> and <code>y</code>). Is there an efficient way, maybe a data structure or iterator in Python, that I can use to answer many queries on the same <code>L</code> quickly?</p>
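<p>To make the query pattern concrete, here is a sketch of the precompute-then-query approach I am imagining (the names are my own): one O(n) pass builds a dict of first-occurrence positions, and each query is then a single O(1) comparison:</p>

```python
# Precompute first-occurrence positions once, then answer
# "does x appear before y?" queries in O(1) each.
def make_before_checker(L):
    pos = {}
    for i, v in enumerate(L):
        pos.setdefault(v, i)  # keep the FIRST occurrence
    def before(x, y):
        return pos[x] < pos[y]
    return before

before = make_before_checker(["c", "a", "b", "a", "d"])
print(before("c", "b"))  # True: "c" (index 0) precedes "b" (index 2)
print(before("d", "a"))  # False
```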
|
<python><algorithm><performance>
|
2024-07-28 07:42:35
| 2
| 37,410
|
Erel Segal-Halevi
|
78,803,025
| 1,307,876
|
set widget field_value with PyMuPDF (fitz) does not work
|
<p>I have a simple script that should fill AcroForms inside a PDF file.</p>
<pre class="lang-py prettyprint-override"><code>import fitz # PyMuPDF
def fill_pdf_form(input_pdf, output_pdf, data):
doc = fitz.open(input_pdf)
for page_num in range(len(doc)):
page = doc.load_page(page_num)
for widget in page.widgets():
field_name = widget.field_name
print("fieldName: ", field_name)
if field_name in data:
widget.field_value = data[field_name]
print("fieldValue1: ", data[field_name])
print("fieldValue2: ", widget.field_value)
doc.save(output_pdf, use_objstms=1)
doc.close()
if __name__ == "__main__":
input_pdf = "file.pdf"
output_pdf = "file_filled.pdf"
data = {
"field_01": "123456 _ggtl",
"field_multiline_01": "lorem Ipsum\nnext line\nagain a line",
}
fill_pdf_form(input_pdf, output_pdf, data)
print(f"success: {output_pdf}")
</code></pre>
<p>fieldValue1 and fieldValue2 look correct in the printed output, but the saved PDF has no values in the form fields.</p>
<p>What do you have to do to ensure that the set values are included in the output PDF?</p>
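<p>For completeness, the PyMuPDF requirement I may be missing is calling <code>update()</code> on the widget after changing its attributes; setting <code>field_value</code> alone only changes the Python-side object. The inner loop above would become (sketch):</p>

```python
# After assigning the new value, push it into the PDF with widget.update()
for widget in page.widgets():
    if widget.field_name in data:
        widget.field_value = data[widget.field_name]
        widget.update()  # without this, the change never reaches the document
```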
|
<python><pdf><pymupdf>
|
2024-07-28 05:37:27
| 0
| 658
|
bitkorn
|
78,802,856
| 1,532,338
|
How to parse and print API response and Error code from pymeter in python script?
|
<p>How to parse and print API response and Error code from pymeter in python script?</p>
<p>PyMeter is a Python version of JMeter
(PyMeter help document - <a href="https://pymeter.readthedocs.io/en/latest/api.html" rel="nofollow noreferrer">https://pymeter.readthedocs.io/en/latest/api.html</a>)</p>
<p>I am trying to get the performance stats for the API - <a href="https://reqres.in/api/users?page=2" rel="nofollow noreferrer">https://reqres.in/api/users?page=2</a>, which gives the following response. I am able to execute the API using the PyMeter sampler; however, I am unable to find any help doc about how to parse the API response.</p>
<pre><code>{"page":2,"per_page":6,"total":12,"total_pages":2,"data":[{"id":7,"email":"michael.lawson@reqres.in","first_name":"Michael","last_name":"Lawson","avatar":"https://reqres.in/img/faces/7-image.jpg"},{"id":8,"email":"lindsay.ferguson@reqres.in","first_name":"Lindsay","last_name":"Ferguson","avatar":"https://reqres.in/img/faces/8-image.jpg"},{"id":9,"email":"tobias.funke@reqres.in","first_name":"Tobias","last_name":"Funke","avatar":"https://reqres.in/img/faces/9-image.jpg"},{"id":10,"email":"byron.fields@reqres.in","first_name":"Byron","last_name":"Fields","avatar":"https://reqres.in/img/faces/10-image.jpg"},{"id":11,"email":"george.edwards@reqres.in","first_name":"George","last_name":"Edwards","avatar":"https://reqres.in/img/faces/11-image.jpg"},{"id":12,"email":"rachel.howell@reqres.in","first_name":"Rachel","last_name":"Howell","avatar":"https://reqres.in/img/faces/12-image.jpg"}],"support":{"url":"https://reqres.in/#support-heading","text":"To keep ReqRes free, contributions towards server costs are appreciated!"}}
</code></pre>
<p>Here is my sample script -</p>
<pre><code>from unittest import TestCase
from pymeter.api.config import TestPlan, ThreadGroupWithRampUpAndHold
from pymeter.api.samplers import HttpSampler
from pymeter.api.reporters import HtmlReporter
from pymeter.api.postprocessors import JsonExtractor
from pymeter.api.assertions import ResponseAssertion
class TestTestPlanClass(TestCase):
def __init__(self, *args, **kwargs):
super(TestTestPlanClass, self).__init__(*args, **kwargs)
print("TestTestPlanClass")
@classmethod
def setUpClass(cls):
print("setUpClass")
def test_case_1(self):
json_extractor = JsonExtractor("variable", "args.data")
# create HTTP sampler, sends a get request to the given url
ra = ResponseAssertion().contains_substrings("data")
http_sampler = HttpSampler("echo_get_request", "https://reqres.in/api/users?page=2", json_extractor)
# create html reporter
html_reporter = HtmlReporter()
# create a thread group that will rump up 10 threads in 1 second and
# hold the load for additional 10 seconds, give it the http sampler as a child input
thread_group_main = ThreadGroupWithRampUpAndHold(1, 1, 1, http_sampler)
# create a test plan with the required thread group
test_plan = TestPlan(thread_group_main, html_reporter)
# run the test plan and take the results
stats = test_plan.run()
self.assertLess(stats.sample_time_99_percentile_milliseconds, 2000)
@classmethod
def tearDownClass(cls):
print("tearDownClass")
</code></pre>
|
<python><jmeter>
|
2024-07-28 02:43:50
| 1
| 3,846
|
Abhishek Kulkarni
|
78,802,794
| 5,458,723
|
Reversing function to obtain the lexicographic index of a given permutation
|
<p>I'm working on a Python3 code challenge where a message is encoded on or decoded from the shuffling order of a 52-card standard playing deck. I have already found how to encode the message, but I am having difficulty reversing the function to get a message from a given shuffling order. I have a function that finds a lexicographic permutation of a list of cards based on its index. I need one now that takes a permutation from a list of cards and outputs its index. I have some ideas, but I'm not as good at number theory as I should be. I have put some comments with my ideas.</p>
<pre><code>deck = ["AC", "2C", "3C", "4C", "5C", "6C", "7C", "8C", "9C", "TC", "JC", "QC", "KC",
"AD", "2D", "3D", "4D", "5D", "6D", "7D", "8D", "9D", "TD", "JD", "QD", "KD",
"AH", "2H", "3H", "4H", "5H", "6H", "7H", "8H", "9H", "TH", "JH", "QH", "KH",
"AS", "2S", "3S", "4S", "5S", "6S", "7S", "8S", "9S", "TS", "JS", "QS", "KS",]
from math import factorial

def get_nth_permutation(p_index, in_list, out_list=None):
    if out_list is None:  # avoid sharing a mutable default list between calls
        out_list = []
    if p_index >= factorial(len(in_list)): return []
    if not in_list: return out_list
    f = factorial(len(in_list) - 1)
    index = p_index // f     # integer division (floats lose precision near 52!); p_index * f?
    remainder = p_index % f  # pow(p_index, -1, f) - reverse modulo?
    # reverse divmod by adding index + remainder in each step to increase p_index?
    out_list.append(in_list[index])
    del in_list[index]
    if not remainder:  # function should end when out_list + in_list matches deck
        return out_list + in_list  # return p_index
    else:  # keep recursion if possible
        return get_nth_permutation(remainder, in_list, out_list)
</code></pre>
<p>Any help is appreciated. It doesn't even have to be code, even a pseudocode explanation or some next steps is better than where I am right now.</p>
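<p>Here is a sketch of the inverse I am trying to write (my own attempt, mirroring the forward function step by step): for each element of the permutation, count its position among the items still remaining in a working copy of the deck, weight that by the factorial of the remaining length, and accumulate:</p>

```python
from math import factorial

def get_permutation_index(perm, base):
    """Index of `perm` among the permutations of `base`, in the same
    ordering the forward function uses (inverse of get_nth_permutation)."""
    base = list(base)          # working copy; items are removed as we go
    index = 0
    for item in perm:
        i = base.index(item)   # position among the remaining items
        index += i * factorial(len(base) - 1)
        base.pop(i)
    return index

print(get_permutation_index(["b", "a", "c"], ["a", "b", "c"]))  # 2
```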
|
<python><algorithm><permutation><steganography><playing-cards>
|
2024-07-28 01:26:55
| 1
| 365
|
ep84
|
78,802,744
| 1,485,877
|
Reliably get TargetData for current process
|
<p>In llvmlite, an instance of <code>TargetData</code> is required to get the ABI size of an object. This makes sense given that the size of an object is dependent on the word size and alignment. If I want to immediately compile and use the code in the current process, I shouldn't need anything other than the target data of the current process. The docs would seem to suggest that this would reliably get the target data for the current process:</p>
<pre class="lang-py prettyprint-override"><code>import llvmlite.binding as llvm
# Initialize the LLVM
# https://llvmlite.readthedocs.io/en/latest/user-guide/binding/examples.html
llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()
target_string = llvm.get_process_triple()
target_data = llvm.create_target_data(target_string)
</code></pre>
<p>However, this last line aborts the Python process with the error:</p>
<pre><code>LLVM ERROR: Unknown specifier in datalayout string
Aborted (core dumped)
</code></pre>
<p>Presumably, this is related to the fact that my process triple is <code>x86_64-unknown-linux-gnu</code>. I have Ubuntu 20.04 on a Lenovo P53, so it shouldn't be that weird.</p>
<p>How do I get a working default <code>TargetData</code> for the current process?</p>
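<p>One workaround I am considering (a sketch using llvmlite's <code>Target</code>/<code>TargetMachine</code> binding API): <code>create_target_data()</code> expects a <em>data-layout</em> string, not a target triple, which would explain the "Unknown specifier in datalayout string" abort. Going through a target machine avoids hand-writing any layout string:</p>

```python
import llvmlite.binding as llvm

llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()

# Build a TargetMachine for this process and read its data layout,
# instead of passing the triple where a layout string is expected.
target = llvm.Target.from_triple(llvm.get_process_triple())
target_machine = target.create_target_machine()
target_data = target_machine.target_data  # TargetData for the current process
```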
|
<python><llvmlite>
|
2024-07-28 00:39:43
| 1
| 9,852
|
drhagen
|
78,802,587
| 11,106,801
|
Remove internal border inside a Text widget
|
<p>Is it possible to remove the border between the <code>tkinter.Text</code> widget and the background colour of a tag inside it? I set both the border width and <code>highlightthickness</code> to <code>0</code> but the border is still there:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
kwargs = dict(highlightthickness=0, bd=0)
root = tk.Tk()
root.geometry("200x200")
# relief="flat" doesn't help
text = tk.Text(root, wrap="none", bg="yellow", **kwargs)
text.pack()
text.insert("end", ("#"*30+"\n")*30)
text.tag_config("mytag", background="blue")
text.tag_add("mytag", "1.0", "end")
root.mainloop()
</code></pre>
<p>I know how to change the colour of the yellow border, but I want to remove it:</p>
<p><a href="https://i.sstatic.net/6bQEyOBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6bQEyOBM.png" alt="A tkinter window with a text widget with hashtags inside that have a blue background" /></a></p>
<p>I am using python 3.10 with tcl/tk 8.6 on Ubuntu 22.04</p>
<p>In my full program, the text box is free to move inside a canvas but the border is annoying so I want to remove it. The full code is actually an answer to <a href="https://stackoverflow.com/a/78802499/11106801">another</a> SO question.</p>
|
<python><tkinter><tcl><tk-toolkit>
|
2024-07-27 22:27:47
| 1
| 7,800
|
TheLizzard
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.