| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,265,244
| 4,434,140
|
Can I run a python static binary without root privilege in a docker container?
|
<p>I have the following <code>Dockerfile</code>:</p>
<pre><code>FROM cgr.dev/chainguard/wolfi-base as builder
ENV PATH=/venv/bin:$PATH
COPY . /code/app
RUN apk add --no-cache python-3.11 git wget jq build-base posix-libc-utils \
&& python3 -m venv /venv
WORKDIR /tmp
RUN wget $(wget -qO- https://api.github.com/repos/NixOS/patchelf/releases/latest | jq -r '.assets[] | select(.name | contains("aarch64")) | .browser_download_url') -O patchelf.tar.gz \
&& tar -xvf patchelf.tar.gz && install ./bin/patchelf /usr/bin/patchelf
RUN pip install --upgrade pip \
&& pip install --no-cache-dir pyinstaller staticx \
&& pip install --no-cache-dir -r /code/app/requirements-ci.txt \
&& pip install --no-cache-dir /code/app
WORKDIR /code/app
RUN pyinstaller static.spec && staticx dist/main dist/main_app
FROM cgr.dev/chainguard/static:latest-glibc
COPY --from=builder /code/app/dist/main_app /main_app
# this runs as user nonroot
ENTRYPOINT ["/main_app"]
</code></pre>
<p>I start with the installation of Python in the <code>builder</code> stage, together with some tools needed to build the static version of my Python app with <code>pyinstaller</code> and <code>staticx</code>. The last stage publishes the application in the <code>chainguard/static</code> image, which is more or less equivalent to <code>FROM scratch</code> (with some basic advantages, like proper user management, time zones, certificates, etc.).</p>
<p>When I run it in docker image <code>cgr.dev/chainguard/wolfi-base</code>, where the default user is <code>root</code>, the binary produced by <code>staticx</code> works flawlessly. The application runs as expected.</p>
<p>When I run it as above, in docker image <code>cgr.dev/chainguard/static:latest-glibc</code>, it turns out the user is <code>nonroot</code>, which is great, but then I get the following error:</p>
<pre><code>Failed to open /proc/self/exe: Permission denied
</code></pre>
<p>That's because <code>nonroot</code> is not provided access to <code>/proc/self/exe</code>.</p>
<p>I want to produce a static binary from my Python application that I can run as <code>nonroot</code>. I believe the approach described above is fairly standard, but unfortunately it doesn't work here. Is there another way?</p>
|
<python><docker><wolfi><chainguard>
|
2024-04-03 04:36:00
| 1
| 1,331
|
Laurent Michel
|
78,265,205
| 11,922,567
|
Decode URL strings with Pydantic
|
<p>I am using <code>Pydantic</code> to validate and type an incoming <code>S3 Event</code> in an <code>AWS Lambda</code> function.</p>
<p>The event looks like this (only including relevant bits):</p>
<pre class="lang-json prettyprint-override"><code>{
    "Records": [
        {
            "s3": {
                "bucket": {
                    "name": "my-bucket"
                },
                "object": {
                    "key": "MYKEY%28CSV%29/XXXX.CSV"
                }
            }
        }
    ]
}
</code></pre>
<p>I define my <code>Model</code> like this to get the relevant information.</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel

class ObjectInfo(BaseModel):
    key: str

class BucketInfo(BaseModel):
    name: str

class S3Schema(BaseModel):
    bucket: BucketInfo
    object: ObjectInfo

class Record(BaseModel):
    s3: S3Schema

class DeletionEvent(BaseModel):
    Records: list[Record]

def handler(event: dict, _):
    eventTyped = DeletionEvent(**event)
    return True
</code></pre>
<p>Now the problem is that the correct value for <code>key</code> is supposed to be <code>MYKEY(CSV)/XXXX.CSV</code>, not <code>MYKEY%28CSV%29/XXXX.CSV</code>. I usually fix this issue using <code>urllib.parse.unquote_plus</code> to decode the <code>%XX</code> bits representing special characters. I think I can define a custom <a href="https://docs.pydantic.dev/2.1/usage/types/encoded/" rel="nofollow noreferrer">decoder</a> but this seems like overkill.</p>
<p>Is there any way to get <code>pydantic</code> to do this decoding for me? It has a <a href="https://docs.pydantic.dev/2.1/usage/types/urls/" rel="nofollow noreferrer">bunch of classes for working with URLs</a> but I don't see anything about decoding URL encoded strings by themselves.</p>
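<p>One way to get the decoding to happen inside the model (a sketch using Pydantic v2's <code>field_validator</code>, not something confirmed by the docs linked above) is:</p>

```python
from urllib.parse import unquote_plus

from pydantic import BaseModel, field_validator


class ObjectInfo(BaseModel):
    key: str

    @field_validator("key")
    @classmethod
    def _decode_key(cls, v: str) -> str:
        # decode %XX escapes (and '+') in the raw S3 object key
        return unquote_plus(v)
```

<p>With this, <code>ObjectInfo(key="MYKEY%28CSV%29/XXXX.CSV").key</code> would come out as <code>MYKEY(CSV)/XXXX.CSV</code>.</p>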
|
<python><urllib><pydantic><urldecode>
|
2024-04-03 04:22:17
| 1
| 1,822
|
Wesley Cheek
|
78,265,179
| 6,757,815
|
filtering in folium based on two column values
|
<p>I aim to utilize <code>ipywidgets</code> <strong>dropdowns</strong> and <code>Folium</code> to filter and visualize geodataframes. As an example, let's consider a geodataframe sourced from Natural Earth vectors, containing columns such as 'continent' and 'name'.</p>
<p>My objective is to create a user interface where users can first select a continent from a dropdown. Upon selection, another dropdown dynamically populates with the countries falling within that continent. Finally, when a country is chosen from the second dropdown, its location will be displayed on a Folium map.</p>
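<p>The two-level filtering logic itself can be sketched with plain data, independent of <code>ipywidgets</code> and <code>folium</code> (the data below is hypothetical, standing in for the 'continent' and 'name' columns):</p>

```python
# hypothetical stand-in for the geodataframe's 'continent'/'name' columns
countries_by_continent = {
    "Africa": ["Nigeria", "Kenya"],
    "Asia": ["Japan", "India"],
}

def country_options(continent: str) -> list[str]:
    """Options for the second dropdown, given the first dropdown's value."""
    return countries_by_continent.get(continent, [])
```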
<p>I intend to save this interactive notebook as an <strong>HTML</strong> file.</p>
<p>Below is the code I've attempted so far, based on <a href="https://stackoverflow.com/questions/72468801/how-do-add-drop-down-menu-for-the-folium-map">How do add drop down menu for the folium map?</a>:</p>
<pre><code>import geopandas as gpd
import ipywidgets
from IPython.display import HTML, display
import folium

df = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))
df = df[['continent', 'geometry']]

out = ipywidgets.Output(layout={'border': '1px solid black'})

w = ipywidgets.Dropdown(
    options=df['continent'].unique().tolist(),
    value=df['continent'].tolist()[0],
    description='Column:',
    disabled=False,
)

# Define a function to filter the dataframe
def filter_dataframe(name):
    return df[df['continent'] == name]

def on_dropdown_change(change):
    out.clear_output()
    df = filter_dataframe(w.value)
    with out:
        display(df.explore(df.columns[0], cmap="Blues"))

w.observe(on_dropdown_change, names='value')

display(w)
with out:
    display(df.explore(df.columns[0], cmap="Blues"))
out
</code></pre>
|
<python><folium>
|
2024-04-03 04:14:53
| 1
| 517
|
gis.rajan
|
78,265,109
| 10,378,232
|
What is the difference between placing signal.signal before or after the child process is created and started?
|
<p>I'm encountering an odd problem when using the <code>signal</code> module to manage process behavior.</p>
<p>I want to send a <code>SIGTERM</code> signal from <code>b.py</code> after <code>a.py</code> is running to terminate the main and child processes in <code>a.py</code>.</p>
<p>I find that when <code>signal.signal(signal.SIGTERM, handle_signal)</code> is placed before the <code>__main__</code> entrypoint, the child process is not terminated as expected; it keeps running.</p>
<p>But if I place <code>signal.signal(signal.SIGTERM, handle_signal)</code> after the child process starts, the child process is terminated as expected when <code>a.py</code> receives the <code>SIGTERM</code> signal from <code>b.py</code>.</p>
<h5>a.py</h5>
<pre class="lang-py prettyprint-override"><code>import multiprocessing
import os
import signal
import time

def process1():
    while True:
        print("the child process is running")
        time.sleep(1)

def handle_signal(signum, frame):
    print("signal received:", signum, "pid:", os.getpid())

# register signal
signal.signal(signal.SIGTERM, handle_signal)  # place here 1

if "__main__" == __name__:
    a = multiprocessing.Process(target=process1)
    a.daemon = True
    a.start()
    # signal.signal(signal.SIGTERM, handle_signal)  # place here 2
    child_pid = a.pid
    parent_pid = os.getpid()
    parent_pid_group = os.getpgid(parent_pid)
    with open("./crawler.pid", "w") as f:
        f.write(str(parent_pid_group))
    a.join()
    print("Parent id:", parent_pid)
    print("Child id", child_pid)
    print("all terminated!")
</code></pre>
<h5>b.py</h5>
<pre class="lang-py prettyprint-override"><code>import os
import signal

with open("./crawler.pid", "r") as f:
    try:
        pid = int(f.read())
        os.killpg(pid, signal.SIGTERM)
        print("Signal sent successfully to process", pid)
    except Exception as e:
        print("Error sending signal:", e)
</code></pre>
<p>The wrong output - signal is registered before the child process starts:</p>
<pre class="lang-bash prettyprint-override"><code>╰─ python a.py
the child process is running
the child process is running
the child process is running
the child process is running
the child process is running
signal received: 15 pid: 1379
signal received: 15 pid: 1380
the child process is running
the child process is running
the child process is running
the child process is running
^CTraceback (most recent call last):
  File "a.py", line 34, in <module>
    a.join()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 149, in join
    res = self._popen.wait(timeout)
  File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 47, in wait
    return self.poll(os.WNOHANG if timeout == 0.0 else 0)
  File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
signal received: 15 pid: 1380
Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "a.py", line 10, in process1
    time.sleep(1)
KeyboardInterrupt
----------------------
╰─ python b.py
Signal sent successfully to process 1379
</code></pre>
<p>And the right output - signal is registered after the child process starts:</p>
<pre class="lang-bash prettyprint-override"><code>╰─ python a.py
the child process is running
the child process is running
the child process is running
signal received: 15 pid: 1961
Parent id: 1961
Child id 1962
all terminated!
---------------------
╰─ python b.py
Signal sent successfully to process 1961
</code></pre>
<p>Is this a Python mechanism or an operating-system design?</p>
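<p>For context: with the default <code>fork</code> start method on Linux, the child inherits whatever signal dispositions are in place at the moment of the fork. A handler registered at module level is therefore already installed when the child is created, so <code>SIGTERM</code> no longer terminates the child by default; it just runs the (non-exiting) handler. A minimal sketch of what registration changes (assumptions: main thread, POSIX fork semantics):</p>

```python
import signal

def handle_signal(signum, frame):
    print("signal received:", signum)

# installing the handler replaces SIGTERM's default "terminate" disposition;
# a child forked after this point inherits the non-terminating handler
signal.signal(signal.SIGTERM, handle_signal)
assert signal.getsignal(signal.SIGTERM) is handle_signal
```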
|
<python><signals>
|
2024-04-03 03:50:22
| 1
| 412
|
Runstone
|
78,265,106
| 1,989,579
|
How to Convert u-law 8000 Hz Audio Files to OPUS in Python
|
<p>I'm working on a Python project where I need to convert audio files from μ-law format with an 8000 Hz sampling rate to OPUS format. The challenge is to keep the original channels separated in the conversion process. Despite several attempts with different methods and libraries, I haven't succeeded yet. I'm looking for advice or a Python library that can efficiently handle this conversion.</p>
<p>What I've tried:</p>
<p>1- Using pydub to load and export files: I attempted to load .au files with pydub and export them as OPUS. This approach resulted in ffmpeg-related errors, specifically "Invalid data found when processing input".</p>
<pre><code>from pydub import AudioSegment
audio_1 = AudioSegment.from_file("path/to/file1.au", format="au", frame_rate=8000)
audio_2 = AudioSegment.from_file("path/to/file2.au", format="au", frame_rate=8000)
combined_audio = AudioSegment.from_mono_audiosegments(audio_1, audio_2)
combined_audio.export("output.opus", format="opus", bitrate="32k")
</code></pre>
<p>2- Direct Conversion Using ffmpeg: Also tried direct conversion with ffmpeg from both the command line and within Python, specifying codecs, but kept encountering "Invalid data found when processing input".</p>
<pre><code>ffmpeg -acodec pcm_mulaw -i input_file.au -ar 8000 -ac 1 output.wav
</code></pre>
<p>3- Playback with aplay: Interestingly, the files play back fine using aplay with specified sampling rate and format, confirming the files are not corrupted.</p>
<pre><code>aplay -r 8000 -f MU_LAW input_file.au
</code></pre>
<p>How can I convert audio files from u-law 8000 Hz format to OPUS format in Python, ideally keeping the original audio channels separate? Is there a specific Python library or tool that works well for this type of audio conversion?</p>
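<p>For reference, μ-law decoding itself is simple enough to sketch in pure Python (a G.711 decode of one byte to a 16-bit linear sample; this is illustrative, not a replacement for a proper converter):</p>

```python
def ulaw_to_linear(byte: int) -> int:
    """Decode one 8-bit mu-law byte to a 16-bit linear PCM sample (G.711)."""
    byte = ~byte & 0xFF            # mu-law bytes are stored complemented
    sign = byte & 0x80
    exponent = (byte >> 4) & 0x07  # 3-bit segment number
    mantissa = byte & 0x0F         # 4-bit step within the segment
    value = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -value if sign else value
```

<p>For example, the mu-law bytes <code>0x00</code> and <code>0x80</code> decode to the extremes -32124 and +32124, and <code>0xFF</code> decodes to 0.</p>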
|
<python><ffmpeg><opus><mu-law>
|
2024-04-03 03:49:23
| 1
| 3,512
|
user60108
|
78,265,101
| 6,388,372
|
Missing Few days data historical : Yahoo Finance API Python yahoo_fin
|
<p>I am fetching historical data for a forex pair from Yahoo Finance, and data for a few days is missing. I tried a smaller version of the code and hit the same issue. Please help.</p>
<pre><code>stock_symbol = 'USDCHF=X'
#fine the timeframe
start_date = '2024-04-02'
end_date = '2024-04-03'
# Fetch the data
stock_data = get_data(stock_symbol, start_date, end_date)
</code></pre>
<p>Results:</p>
<pre><code> open high low close adjclose volume ticker
2024-04-01 0.90507 0.90894 0.90490 0.90507 0.90507 0 USDCHF=X
2024-04-03 0.90757 0.90854 0.90725 0.90809 0.90809 0 USDCHF=X
</code></pre>
<p>I don't have data for April 2nd. Please explain why, and how I can get it. Yesterday night this data was being returned, but now it is not there.</p>
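<p>Independent of why Yahoo omits the day, gaps like this can be detected with pandas by comparing the returned index against a business-day calendar (a sketch, not from the original post):</p>

```python
import pandas as pd

# the dates actually returned (from the result above)
returned = pd.to_datetime(["2024-04-01", "2024-04-03"])

# every weekday in the requested window
expected = pd.bdate_range(returned.min(), returned.max())

# dates present in the calendar but absent from the result
missing = expected.difference(returned)
print(missing.strftime("%Y-%m-%d").tolist())  # ['2024-04-02']
```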
|
<python><yahoo-finance><forex>
|
2024-04-03 03:48:18
| 0
| 602
|
KMittal
|
78,265,092
| 464,277
|
Error when plotting confidence intervals with NeuralProphet
|
<p>I'm following the tutorial on quantile regression in <a href="https://neuralprophet.com/tutorials/tutorial08.html" rel="nofollow noreferrer">NeuralProphet</a>, but there is an issue when plotting the forecast.</p>
<pre><code>confidence_lv = 0.9
quantile_list = [round(((1 - confidence_lv) / 2), 2), round((confidence_lv + (1 - confidence_lv) / 2), 2)]
m = NeuralProphet(
yearly_seasonality=True,
weekly_seasonality=True,
daily_seasonality=False,
quantiles=quantile_list,
n_lags=30,
epochs=10,
n_forecasts=30,
)
m.set_plotting_backend('plotly')
metrics = m.fit(df)
df_future = m.make_future_dataframe(
df,
n_historic_predictions=True,
periods=30,
)
forecast = m.predict(df_future)
m.plot(forecast, forecast_in_focus=30)
</code></pre>
<p>I'm getting the error below. I can't find the parameter it mentions anywhere in the functions above (I'm using version <code>0.6.2</code>).</p>
<pre><code>in NeuralProphet.plot(self, fcst, df_name, ax, xlabel, ylabel, figsize, forecast_in_focus, plotting_backend)
1886 if len(self.config_train.quantiles) > 1:
1887 if (self.highlight_forecast_step_n) is None and (
1888 self.n_forecasts > 1 or self.n_lags > 0
1889 ): # rather query if n_forecasts >1 than n_lags>1
-> 1890 raise ValueError(
1891 "Please specify step_number using the highlight_nth_step_ahead_of_each_forecast function"
1892 " for quantiles plotting when auto-regression enabled."
1893 )
1894 if (self.highlight_forecast_step_n or forecast_in_focus) is not None and self.n_lags == 0:
1895 log.warning("highlight_forecast_step_n is ignored since auto-regression not enabled.")
ValueError: Please specify step_number using the highlight_nth_step_ahead_of_each_forecast function for quantiles plotting when auto-regression enabled.
</code></pre>
|
<python><deep-learning><neural-network><time-series>
|
2024-04-03 03:44:15
| 1
| 10,181
|
zzzbbx
|
78,265,014
| 202,335
|
DNS lookup error when I use Scrapy, but I can access the site with a browser
|
<pre><code>import scrapy

class CninfoSpider(scrapy.Spider):
    name = 'cninfo_spider'
    start_urls = ['https://www.cninfo.com.cn/new/commonUrl/pageOfSearch?url=disclosure/list/search']

    def parse(self, response):
        # Extract the table rows
        rows = response.xpath('//table[@id="notice-table"]/tbody/tr')
        for row in rows:
            # Extract the data from each column
            code = row.xpath('.//td[1]/span[@class="code"]/text()').get()
            name = row.xpath('.//td[2]/span[@class="code"]/text()').get()
            announcement_title = row.xpath('.//td[3]/a/text()').get()
            announcement_time = row.xpath('.//td[4]/span[@class="date"]/text()').get()
            yield {
                'code': code.strip() if code else None,
                'name': name.strip() if name else None,
                'announcement_title': announcement_title.strip() if announcement_title else None,
                'announcement_time': announcement_time.strip() if announcement_time else None,
            }
</code></pre>
<p>The crawl output:</p>
<pre><code>C:\Users\Steven\cninfo_scraper>scrapy crawl cninfo_spider
2024-04-03 10:47:33 [scrapy.utils.log] INFO: Scrapy 2.11.1 started (bot: cninfo_scraper)
2024-04-03 10:47:33 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.9.0, w3lib 2.1.2, Twisted 24.3.0, Python 3.12.0 (tags/v3.12.0:0fb18b0, Oct 2 2023, 13:03:39) [MSC v.1935 64 bit (AMD64)], pyOpenSSL 24.1.0 (OpenSSL 3.1.4 24 Oct 2023), cryptography 41.0.5, Platform Windows-11-10.0.22631-SP0
2024-04-03 10:47:33 [scrapy.addons] INFO: Enabled addons:
[]
2024-04-03 10:47:33 [asyncio] DEBUG: Using selector: SelectSelector
2024-04-03 10:47:33 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2024-04-03 10:47:33 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2024-04-03 10:47:33 [scrapy.extensions.telnet] INFO: Telnet Password: 4e5a2f9878fdbea5
2024-04-03 10:47:33 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2024-04-03 10:47:33 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'cninfo_scraper',
'FEED_EXPORT_ENCODING': 'utf-8',
'NEWSPIDER_MODULE': 'cninfo_scraper.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['cninfo_scraper.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2024-04-03 10:47:33 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2024-04-03 10:47:33 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2024-04-03 10:47:33 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2024-04-03 10:47:33 [scrapy.core.engine] INFO: Spider opened
2024-04-03 10:47:33 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-04-03 10:47:33 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2024-04-03 10:47:33 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://cninfo.com.cn/robots.txt> (failed 1 times): DNS lookup failed: no results for hostname lookup: cninfo.com.cn.
2024-04-03 10:47:33 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://cninfo.com.cn/robots.txt> (failed 2 times): DNS lookup failed: no results for hostname lookup: cninfo.com.cn.
2024-04-03 10:47:33 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://cninfo.com.cn/robots.txt> (failed 3 times): DNS lookup failed: no results for hostname lookup: cninfo.com.cn.
2024-04-03 10:47:33 [scrapy.downloadermiddlewares.robotstxt] ERROR: Error downloading <GET https://cninfo.com.cn/robots.txt>: DNS lookup failed: no results for hostname lookup: cninfo.com.cn.
Traceback (most recent call last):
File "C:\Users\Steven\AppData\Roaming\Python\Python312\site-packages\twisted\internet\defer.py", line 1999, in _inlineCallbacks
result = context.run(
File "C:\Users\Steven\AppData\Roaming\Python\Python312\site-packages\twisted\python\failure.py", line 519, in throwExceptionIntoGenerator
return g.throw(self.value.with_traceback(self.tb))
File "C:\Users\Steven\AppData\Roaming\Python\Python312\site-packages\scrapy\core\downloader\middleware.py", line 54, in process_request
return (yield download_func(request=request, spider=spider))
File "C:\Users\Steven\AppData\Roaming\Python\Python312\site-packages\twisted\internet\defer.py", line 1078, in _runCallbacks
current.result = callback( # type: ignore[misc]
File "C:\Users\Steven\AppData\Roaming\Python\Python312\site-packages\twisted\internet\endpoints.py", line 1025, in startConnectionAttempts
raise error.DNSLookupError(
twisted.internet.error.DNSLookupError: DNS lookup failed: no results for hostname lookup: cninfo.com.cn.
2024-04-03 10:47:33 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://cninfo.com.cn> (failed 1 times): DNS lookup failed: no results for hostname lookup: cninfo.com.cn.
2024-04-03 10:47:33 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://cninfo.com.cn> (failed 2 times): DNS lookup failed: no results for hostname lookup: cninfo.com.cn.
2024-04-03 10:47:33 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://cninfo.com.cn> (failed 3 times): DNS lookup failed: no results for hostname lookup: cninfo.com.cn.
2024-04-03 10:47:33 [scrapy.core.scraper] ERROR: Error downloading <GET https://cninfo.com.cn>
Traceback (most recent call last):
File "C:\Users\Steven\AppData\Roaming\Python\Python312\site-packages\twisted\internet\defer.py", line 1999, in _inlineCallbacks
result = context.run(
File "C:\Users\Steven\AppData\Roaming\Python\Python312\site-packages\twisted\python\failure.py", line 519, in throwExceptionIntoGenerator
return g.throw(self.value.with_traceback(self.tb))
File "C:\Users\Steven\AppData\Roaming\Python\Python312\site-packages\scrapy\core\downloader\middleware.py", line 54, in process_request
return (yield download_func(request=request, spider=spider))
File "C:\Users\Steven\AppData\Roaming\Python\Python312\site-packages\twisted\internet\defer.py", line 1078, in _runCallbacks
current.result = callback( # type: ignore[misc]
File "C:\Users\Steven\AppData\Roaming\Python\Python312\site-packages\twisted\internet\endpoints.py", line 1025, in startConnectionAttempts
raise error.DNSLookupError(
twisted.internet.error.DNSLookupError: DNS lookup failed: no results for hostname lookup: cninfo.com.cn.
2024-04-03 10:47:33 [scrapy.core.engine] INFO: Closing spider (finished)
2024-04-03 10:47:33 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 6,
'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 6,
'downloader/request_bytes': 1314,
'downloader/request_count': 6,
'downloader/request_method_count/GET': 6,
'elapsed_time_seconds': 0.4481,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2024, 4, 3, 2, 47, 33, 867424, tzinfo=datetime.timezone.utc),
'log_count/DEBUG': 7,
'log_count/ERROR': 4,
'log_count/INFO': 10,
'retry/count': 4,
'retry/max_reached': 2,
'retry/reason_count/twisted.internet.error.DNSLookupError': 4,
"robotstxt/exception_count/<class 'twisted.internet.error.DNSLookupError'>": 1,
'robotstxt/request_count': 1,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2024, 4, 3, 2, 47, 33, 419324, tzinfo=datetime.timezone.utc)}
2024-04-03 10:47:33 [scrapy.core.engine] INFO: Spider closed (finished)
[a second run at 10:52:08 repeated the identical DNS failures; duplicate log omitted]
</code></pre>
<p>I can access cninfo with a browser.</p>
|
<python><scrapy><dns>
|
2024-04-03 03:09:22
| 0
| 25,444
|
Steven
|
78,264,876
| 5,863,348
|
why python list doesn't implement __copy__ and __deepcopy__
|
<p>Unlike Python's <code>array.array</code>, which implements <code>__copy__</code> and <code>__deepcopy__</code> so that the <code>copy</code> module can use them, Python's <code>list</code> does not implement <code>__copy__</code> or <code>__deepcopy__</code> (although it does have a <code>copy</code> method). Instead, the list-copying logic lives in the <code>copy</code> and <code>deepcopy</code> functions of the <code>copy</code> module.</p>
<p>Why doesn't <code>list</code> have <code>__copy__</code> and <code>__deepcopy__</code> methods, unlike <code>array</code>?</p>
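<p>The asymmetry is easy to observe (a quick check, not from the original post): <code>array.array</code> exposes the hooks, <code>list</code> does not, yet <code>copy.deepcopy</code> still handles lists through its internal dispatch logic.</p>

```python
import copy
from array import array

# array.array implements the copy-protocol hooks; list does not
assert hasattr(array, "__deepcopy__")
assert not hasattr(list, "__deepcopy__")

# copy.deepcopy still deep-copies lists via its own dispatch logic
nested = [[1, 2], [3]]
dup = copy.deepcopy(nested)
assert dup == nested and dup[0] is not nested[0]
```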
|
<python>
|
2024-04-03 02:15:20
| 1
| 513
|
YouHoGeon
|
78,264,848
| 1,610,864
|
How to get the full Sales by Product/Service Detail Report using QuickBooks API?
|
<p>I have the below Python code trying to get <code>Sales by Product/Service Detail</code> Report data.</p>
<p>Under the documentation: <a href="https://developer.intuit.com/app/developer/qbo/docs/api/accounting/report-entities/SalesByProduct" rel="nofollow noreferrer">https://developer.intuit.com/app/developer/qbo/docs/api/accounting/report-entities/SalesByProduct</a></p>
<p>It looks like the endpoint is going to be <code>[ItemSales]</code>, even though the endpoint in the API documentation is <code>[SalesByProduct]</code>, which is confusing.</p>
<p>How can I change the code below to be able to pull all the default metrics:</p>
<ul>
<li>Product/Service</li>
<li>Date</li>
<li>Transaction type</li>
<li>Num</li>
<li>Customer name</li>
<li>Memo/Description</li>
<li>Qty</li>
<li>Rate</li>
<li>Amount</li>
<li>Balance</li>
</ul>
<p>I attached a screenshot of what the report looks like in the QuickBooks UI when I run the <code>Sales by Product/Service Detail</code> report.</p>
<p>Also, I want to be able to pull all the data; currently I only get 96 objects, but when I run the report via the UI I get 2,315 rows. Is there a pagination mechanism within the reports API? If so, how can I implement it?</p>
<p>Note: When I run the report in the UI, I don't set any value for the grouping, so that I get the data at the item line level. Does that mean I should not use the [summarize_column_by] attribute?</p>
<pre><code>def fetch_sales_by_product(access_token):
url = f'https://quickbooks.api.intuit.com/v3/company/11111111111/reports/ItemSales'
headers = {
'Authorization': f'Bearer {access_token}',
'Accept': 'application/json'
}
params = {
'start_date': '2021-01-01',
'end_date': '2024-04-01',
'summarize_column_by': 'ProductsAndServices'
}
all_data = []
counter = 1
while True:
response = requests.get(url, headers=headers, params=params)
print(f"{response.status_code=}")
if response.status_code != 200:
raise Exception(f"Failed to fetch report: {response.text}" )
data = response.json()
all_data.extend(data['Rows']['Row'])
counter += 1
return all_data
</code></pre>
<p><a href="https://i.sstatic.net/5WxMK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5WxMK.png" alt="enter image description here" /></a></p>
|
<python><quickbooks><quickbooks-online>
|
2024-04-03 02:03:33
| 0
| 5,784
|
mongotop
|
78,264,508
| 4,447,761
|
Fastest way to extract moving dynamic crop from video using ffmpeg
|
<p>I'm working on an AI project that involves object detection and action recognition on roughly 30 minute videos.</p>
<p>My pipeline is the following:</p>
<ul>
<li>determine crops using object detection model</li>
<li>extract crops using Python and write them to disk as individual images for each frame.</li>
<li>use action recognition model by inputting a series of the crops.</li>
</ul>
<p>The models are fast, but the actual writing of the crops to disk is slow. An SSD would speed it up, but I'm sure ffmpeg would speed it up far more.</p>
<p>Some of the challenges with the crops:</p>
<ul>
<li>the output size is always 128x128</li>
<li>the input size is variable</li>
<li>the crop moves on every frame</li>
</ul>
<p>My process for extracting crops is simple using <code>cv2.imwrite(output_crop_path, crop)</code> in a for loop.</p>
<p>I've done experiments trying to use <code>sendcmd</code> and <code>filter_complex</code>. I tried this <a href="https://stackoverflow.com/a/67508233/4447761">https://stackoverflow.com/a/67508233/4447761</a>, but it outputs an image with black below the crop, and the image gets wrapped around on the x axis.</p>
<p><a href="https://i.sstatic.net/JhwYU.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JhwYU.jpg" alt="enter image description here" /></a></p>
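<p>Not an ffmpeg filter answer, but a sketch of one alternative under an assumption about the pipeline: since the crops are consumed by the action-recognition model anyway, they can stay in memory as arrays instead of being written with <code>cv2.imwrite</code>, which removes the disk bottleneck entirely:</p>

```python
import numpy as np

def extract_crop(frame, x, y, w, h):
    """Clamp the detection box to the frame bounds and slice the crop.
    frame is an H x W x 3 array (e.g. decoded by OpenCV or ffmpeg
    rawvideo output); the result is a view, so no copy or disk I/O."""
    H, W = frame.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(W, x + w), min(H, y + h)
    return frame[y0:y1, x0:x1]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
print(extract_crop(frame, 100, 100, 128, 128).shape)   # (128, 128, 3)
print(extract_crop(frame, 1200, 650, 128, 128).shape)  # clamped: (70, 80, 3)
```

<p>Each clamped crop would then be resized to the fixed 128x128 input (e.g. with <code>cv2.resize</code>) before batching into the model.</p>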
|
<python><opencv><image-processing><ffmpeg><video-processing>
|
2024-04-02 23:41:18
| 2
| 3,016
|
NateW
|
78,264,493
| 388,640
|
How to summarize counts by date and item as the column header with Pandas DataFrame
|
<p>Given the following DataFrame in Pandas:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
"Date": [
pd.Timestamp("2000-01-02"),
pd.Timestamp("2000-01-02"),
pd.Timestamp("2000-01-05"),
pd.Timestamp("2000-01-06"),
pd.Timestamp("2000-01-06"),
pd.Timestamp("2000-01-06")
],
"Item": ["A", "A", "B", "C", "C", "C"]
}
)
</code></pre>
<p>which produces the following DataFrame:</p>
<pre><code> Date Item
0 2000-01-02 A
1 2000-01-02 A
2 2000-01-05 B
3 2000-01-06 C
4 2000-01-06 C
5 2000-01-06 C
</code></pre>
<p>How do I summarize the data by Date with Item as column headers like the following format?</p>
<pre><code>Date A B C
2000-01-02 2 0 0
2000-01-05 0 1 0
2000-01-06 0 0 3
</code></pre>
<p>I tried</p>
<pre><code>pivot_df = df.pivot_table(index='Date', columns='Item', aggfunc=len, fill_value=0)
</code></pre>
<p>but that's giving me:</p>
<pre><code>Item A B C
Date
2000-01-02 2 0 0
2000-01-05 0 1 0
2000-01-06 0 0 3
</code></pre>
<p>I don't want the "Item" in the column, just "Date A B C" as the columns.</p>
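<p>One way to get exactly that layout (a sketch; the existing <code>pivot_table</code> call would work equally well in place of <code>pd.crosstab</code>) is to drop the columns-axis name and promote <code>Date</code> back to a regular column:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Date": [pd.Timestamp("2000-01-02"), pd.Timestamp("2000-01-02"),
             pd.Timestamp("2000-01-05"), pd.Timestamp("2000-01-06"),
             pd.Timestamp("2000-01-06"), pd.Timestamp("2000-01-06")],
    "Item": ["A", "A", "B", "C", "C", "C"],
})

# crosstab counts occurrences; rename_axis(columns=None) removes the
# "Item" label above the columns; reset_index turns the Date index
# into an ordinary column.
out = (pd.crosstab(df["Date"], df["Item"])
         .rename_axis(columns=None)
         .reset_index())
print(out.columns.tolist())  # ['Date', 'A', 'B', 'C']
```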
|
<python><pandas><dataframe>
|
2024-04-02 23:37:09
| 2
| 2,215
|
Niner
|
78,264,432
| 6,202,327
|
Cuda error when trying to setup a project using pytorch
|
<p>I was trying to follow the instructions of an <a href="https://github.com/Pointcept/SegmentAnything3D" rel="nofollow noreferrer">opensource repo</a> for segmentation. At some point I ran <code>python setup.py install</code> and got this error.</p>
<pre><code>!!
self.initialize_options()
running bdist_egg
running egg_info
writing pointops.egg-info/PKG-INFO
writing dependency_links to pointops.egg-info/dependency_links.txt
writing requirements to pointops.egg-info/requires.txt
writing top-level names to pointops.egg-info/top_level.txt
reading manifest file 'pointops.egg-info/SOURCES.txt'
writing manifest file 'pointops.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/torch/utils/cpp_extension.py:425: UserWarning: There are no g++ version bounds defined for CUDA version 11.8
warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')
building 'pointops._C' extension
/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/torch/cuda/__init__.py:628: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
Traceback (most recent call last):
File "setup.py", line 16, in <module>
setup(
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/__init__.py", line 103, in setup
return distutils.core.setup(**attrs)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/dist.py", line 989, in run_command
super().run_command(command)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/command/install.py", line 84, in run
self.do_egg_install()
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/command/install.py", line 132, in do_egg_install
self.run_command('bdist_egg')
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/dist.py", line 989, in run_command
super().run_command(command)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 167, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 153, in call_command
self.run_command(cmdname)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/dist.py", line 989, in run_command
super().run_command(command)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/command/install_lib.py", line 111, in build
self.run_command('build_ext')
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/dist.py", line 989, in run_command
super().run_command(command)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 88, in run
_build_ext.run(self)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 345, in run
self.build_extensions()
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 871, in build_extensions
build_ext.build_extensions(self)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 467, in build_extensions
self._build_extensions_serial()
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 493, in _build_extensions_serial
self.build_extension(ext)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 249, in build_extension
_build_ext.build_extension(self, ext)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 548, in build_extension
objects = self.compiler.compile(
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 675, in unix_wrap_ninja_compile
cuda_post_cflags = unix_cuda_flags(cuda_post_cflags)
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 574, in unix_cuda_flags
cflags + _get_cuda_arch_flags(cflags))
File "/home/makogan/anaconda3/envs/sam3d/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1976, in _get_cuda_arch_flags
arch_list[-1] += '+PTX'
IndexError: list index out of range
</code></pre>
<p>I have tried reinstalling cuda and rebooting my computer both without success, what can I try?</p>
|
<python><deep-learning><pytorch>
|
2024-04-02 23:12:47
| 0
| 9,951
|
Makogan
|
78,264,382
| 5,404,647
|
Weight and shape different than the number of channels in input
|
<p>I'm trying to fine-tune the VAE of <a href="https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/main/vae" rel="nofollow noreferrer">SD 1.4</a></p>
<p>I'm in a multi gpu environment, and I'm using <code>accelerate</code> library for handling that.
This is my code summarized:</p>
<pre><code>import os
import torch.nn.functional as F
import yaml
from PIL import Image
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision.transforms import Compose, Resize, ToTensor, Normalize
from diffusers import AutoencoderKL
from torch.optim import Adam
from accelerate import Accelerator
from torch.utils.tensorboard import SummaryWriter
# Load configuration
with open('config.yaml', 'r') as file:
config = yaml.safe_load(file)
def save_checkpoint(model, optimizer, epoch, step, filename="checkpoint.pth.tar"):
checkpoint = {
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'epoch': epoch,
'step': step
}
torch.save(checkpoint, filename)
class ImageDataset(Dataset):
def __init__(self, root_dir, transform=None):
self.root_dir = root_dir
self.transform = transform
self.images = [os.path.join(root_dir, f) for f in os.listdir(root_dir) if f.endswith('.png')]
def __len__(self):
return len(self.images)
def __getitem__(self, idx):
img_path = self.images[idx]
image = Image.open(img_path).convert('RGB')
if self.transform:
image = self.transform(image)
return image
# Setup dataset and dataloader based on config
transform = Compose([
Resize((config['dataset']['image_size'], config['dataset']['image_size'])),
ToTensor(),
Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])
dataset = ImageDataset(root_dir=config['dataset']['root_dir'], transform=transform)
dataloader = DataLoader(dataset, batch_size=config['training']['batch_size'], shuffle=True, num_workers=config['training']['num_workers'])
# Initialize model, accelerator, optimizer, and TensorBoard writer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_path = config['model']['path']
vae = AutoencoderKL.from_pretrained(model_path).to(device)
optimizer = Adam(vae.parameters(), lr=config['training']['learning_rate'])
accelerator = Accelerator()
vae, dataloader = accelerator.prepare(vae, dataloader)
writer = SummaryWriter()
# Training loop
for epoch in range(config['training']['num_epochs']):
vae.train()
total_loss = 0
for step, batch in enumerate(dataloader):
with accelerator.accumulate(vae):
# Assuming the first element of the batch is the image
target = batch[0].to(next(vae.parameters()).dtype)
# Access the original model for custom methods
model = vae.module if hasattr(vae, "module") else vae
posterior = model.encode(target).latent_dist
z = posterior.mode()
pred = model.decode(z).sample
kl_loss = posterior.kl().mean()
mse_loss = F.mse_loss(pred, target, reduction="mean")
loss = mse_loss + config['training']["kl_scale"] * kl_loss
optimizer.zero_grad()
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad() # Clear gradients after updating weights
# Checkpointing every 10 steps
if step % 10 == 0:
checkpoint_path = f"checkpoint_epoch_{epoch}_step_{step}.pth"
accelerator.save({
"epoch": epoch,
"model_state_dict": model.state_dict(),
"optimizer_state_dict": optimizer.state_dict(),
"loss": loss,
}, checkpoint_path)
print(f"Checkpoint saved to {checkpoint_path}")
writer.close()
print("Training complete.")
</code></pre>
<p>When running the code, I got the following error:</p>
<pre><code>RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [128] and input of shape [128, 1024, 1024]:
</code></pre>
<p>My input folder contains a set of PNG images of different sizes, which are resized to 1024x1024 per the configuration file.</p>
<p>I do not know why this is happening. Does anyone know the cause, or an easier way to fine-tune the VAE weights using my images?
Thanks.</p>
<p>Edit:
My <code>config.yaml</code> file</p>
<pre><code>model:
path: 'vae1dot4' # Path to your pre-trained model directory
dataset:
root_dir: 'segmented' # Directory containing your PNG images
image_size: 1024 # Target size for image resizing
training:
batch_size: 8 # Batch size for training
num_epochs: 10 # Number of epochs to train
learning_rate: 0.0005 # Learning rate for the optimizer
num_workers: 4 # Number of worker processes for data loading
kl_scale: 1
gradient_accumulation_steps: 1
logging:
tensorboard_dir: 'runs' # Directory for TensorBoard logs
</code></pre>
|
<python><pytorch><huggingface>
|
2024-04-02 22:54:13
| 2
| 622
|
Norhther
|
78,264,232
| 1,231,450
|
Calculate the standard deviation of a distribution
|
<p>Let's say, we have tick data like the following</p>
<pre><code>,timestamp,close,security_code,volume,bid_volume,ask_volume
2024-04-02 01:00:00.128123+00:00,2024-04-02 01:00:00.128123+00:00,18465.5,NQ,1,0,1
2024-04-02 01:00:00.128123+00:00,2024-04-02 01:00:00.128123+00:00,18465.5,NQ,1,0,1
2024-04-02 01:00:03.782064+00:00,2024-04-02 01:00:03.782064+00:00,18465.25,NQ,1,0,1
2024-04-02 01:00:04.112603+00:00,2024-04-02 01:00:04.112603+00:00,18465.0,NQ,1,0,1
2024-04-02 01:00:04.112603+00:00,2024-04-02 01:00:04.112603+00:00,18465.0,NQ,1,0,1
2024-04-02 01:00:04.112603+00:00,2024-04-02 01:00:04.112603+00:00,18464.75,NQ,1,0,1
2024-04-02 01:00:04.112603+00:00,2024-04-02 01:00:04.112603+00:00,18464.75,NQ,1,0,1
2024-04-02 01:00:05.759876+00:00,2024-04-02 01:00:05.759876+00:00,18464.5,NQ,1,0,1
2024-04-02 01:00:06.273686+00:00,2024-04-02 01:00:06.273686+00:00,18464.75,NQ,5,5,0
</code></pre>
<p>Then one could calculate the current high, the current low and the most often point (the poc) as follows</p>
<pre><code>import pandas as pd, matplotlib.pyplot as plt
from collections import defaultdict
df = pd.read_csv("csv/nq_out_daily.csv")
df.drop('Unnamed: 0', inplace=True, axis=1)
df['timestamp'] = pd.to_datetime(df['timestamp'])
df["timestamp"] = df['timestamp'].dt.strftime('%d-%m-%Y %H:%M:%S')
summary = {"high": [], "low": [], "poc": []}
dist = defaultdict(float)
current_high = current_low = None
for idx, (timestamp, tick, ask, bid) in enumerate(zip(df.timestamp, df.close, df.ask_volume, df.bid_volume)):
current_high = tick if (current_high is None or tick > current_high) else current_high
current_low = tick if (current_low is None or tick < current_low) else current_low
dist[tick] += 1
summary["high"].append(current_high)
summary["low"].append(current_low)
summary["poc"].append(max(dist, key=dist.get))
# plot the summary
fig = plt.figure()
x = range(len(summary["high"]))
plt.scatter(x, summary["high"], s=1)
plt.scatter(x, summary["low"], s=1)
plt.scatter(x, summary["poc"], s=1)
plt.legend(['high', 'low', 'poc'])
plt.savefig(f"distribution.png")
plt.close(fig)
</code></pre>
<p>Which would yield the following figure for today
<a href="https://i.sstatic.net/KUMpC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KUMpC.png" alt="Plotted high, low and poc" /></a></p>
<p>Now, how can I calculate the first standard deviation up and down from the poc?</p>
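<p>Treating the tick-count dictionary as a volume profile, the first deviation band can be computed as a weighted standard deviation (a sketch; note the deviation is measured around the weighted mean, which need not coincide with the poc — whether to center the band on the poc instead is a modeling choice):</p>

```python
import numpy as np

# dist maps price level -> tick count, as built in the loop above;
# the values here are illustrative placeholders.
dist = {18464.5: 1, 18464.75: 3, 18465.0: 2, 18465.25: 1, 18465.5: 2}

prices = np.array(list(dist.keys()))
weights = np.array(list(dist.values()), dtype=float)

mean = np.average(prices, weights=weights)
std = np.sqrt(np.average((prices - mean) ** 2, weights=weights))

poc = prices[weights.argmax()]   # most-traded price level
band = (poc - std, poc + std)    # first deviation down / up from the poc
print(round(std, 4))
```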
|
<python><pandas>
|
2024-04-02 21:56:45
| 1
| 43,253
|
Jan
|
78,263,388
| 11,360,093
|
Autograd returning None
|
<p>I am trying to create a Contractive Autoencoder, and I read in a couple of papers that the main idea is to use the norm of the Jacobian of the encoder's output with respect to its inputs.</p>
<p>In other words, I'm trying to obtain the gradients of the encoder's output while using the original input.</p>
<p>So far, I have something like this:</p>
<pre><code>def compute_model_loss(self, predX, X, latent_X):
gradients = torch.autograd.grad(outputs = latent_X, inputs = X, grad_outputs = torch.ones_like(latent_X),
create_graph = True, allow_unused=True)[0]
print(gradients) # None!
frobenius_norm = torch.mean(torch.norm(gradients, p = 'fro', dim = (1,2)))
contractive_penalty = self.args['lambda'] * frobenius_norm
total_loss += contractive_penalty
</code></pre>
<p>The problem arises when computing the gradients in the first line. For some reason, it returns None, so what I've first tried is to see what the data looks like.</p>
<p><strong>Input data X</strong>: (32 x 2866)</p>
<pre><code>tensor([[0.4663, 0.3859, 0.6573, ..., 0.7819, 0.0822, 0.3332],
[0.4204, 0.8448, 0.6168, ..., 0.2698, 0.3503, 0.3372],
[0.6329, 0.4084, 0.7437, ..., 0.3490, 0.4902, 0.8333],
...,
[0.3004, 0.6908, 0.7698, ..., 0.8115, 0.9253, 0.1996],
[0.6895, 0.6812, 0.4595, ..., 0.8959, 0.6600, 0.5660],
[0.5647, 0.2448, 0.5046, ..., 0.6494, 0.4483, 0.5269]],
device='cuda:0', grad_fn=<SigmoidBackward0>)
</code></pre>
<p><strong>Encoder's output</strong>: (32 x 32)</p>
<pre><code>tensor([[0.3837, 0.3975, 0.5000, ..., 0.6480, 0.9503, 0.8660],
[0.4182, 0.5000, 0.8683, ..., 0.4916, 0.7044, 0.5293],
[0.5000, 0.7034, 0.5588, ..., 0.3750, 0.5000, 0.5000],
...,
[0.4598, 0.4478, 0.9167, ..., 0.8179, 0.7026, 0.5000],
[0.5000, 0.5370, 0.4786, ..., 0.4529, 0.3132, 0.4245],
[0.4134, 0.5000, 0.4898, ..., 0.4799, 0.5000, 0.7334]],
device='cuda:0', grad_fn=<SigmoidBackward0>)
</code></pre>
<p>So both of these tensors do have a computational graph... All these tensors also have <code>requires_grad</code> set to <code>True</code>.</p>
<p>Regarding the autoencoder architecture, it's mainly just linear layer after linear layer, and my forward function looks like:</p>
<pre><code>def forward(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded, encoded
</code></pre>
<p>As for the training part:</p>
<pre><code>x_batch = torch.tensor(tr_model.X_train[b]).to(self.device)
x_batch.requires_grad_(True)
x_batch.retain_grad()
# h is the encoder's output
x_pred_batch, h = tr_model.model.forward(x_batch)
# Here we compute the contractive loss
loss = tr_model.compute_model_loss(x_pred_batch, x_batch, h)
</code></pre>
<p>If it helps, when I do loss.backward() with a normal autoencoder, everything works fine. (though I don't use the encoder's output there)</p>
<p>Thanks a lot for reading this far!!</p>
<p>EDIT ::</p>
<p>The encoder's architecture:</p>
<pre><code> self.encoder = torch.nn.Sequential(
custom_block(input_dim, 3000),
custom_block(3000, 2500),
custom_block(2500, 2000),
custom_block(2000, 1500),
custom_block(1500, 1200),
custom_block(1200, 1000),
custom_block(1000, 800),
custom_block(800, 600),
custom_block(600, 400),
custom_block(400, 300),
custom_block(300, 150),
custom_block(150, 100),
custom_block(100, 50),
custom_block(50, L),
torch.nn.Sigmoid()
)
</code></pre>
<p>where <code>custom_block</code> is:</p>
<pre><code>def custom_block(input_dim, output_dim, dropout_rate=0.4):
return torch.nn.Sequential(
torch.nn.Linear(input_dim, output_dim),
torch.nn.BatchNorm1d(output_dim),
torch.nn.PReLU(),
torch.nn.Dropout(dropout_rate)
)
</code></pre>
|
<python><machine-learning><pytorch><autoencoder>
|
2024-04-02 18:36:19
| 0
| 472
|
Sanzor
|
78,263,371
| 13,592,012
|
How to generate point cloud using depth image in drake simulation
|
<p>I'm currently working on setting up a simulation environment using Drake in Python. My goal is to load various objects into the scene and generate a point cloud from a depth camera. While I've successfully set up the environment and managed to obtain the depth map, I'm struggling with generating a point cloud using DepthImageToPointCloud or any alternative methods.</p>
<p>I've referred to tutorials and documentation, including <a href="https://deepnote.com/workspace/Drake-0b3b2c53-a7ad-441b-80f8-bf8350752305/project/Tutorials-2b4fc509-aef2-417d-a40d-6071dfed9199/notebook/rendering_multibody_plant-77f3cc79fd1c4e15b6ebef6b03c8554e" rel="nofollow noreferrer">this example</a> for getting a depth image, but haven't found a suitable implementation for generating point clouds.</p>
<p>Could someone provide guidance or point me to a resource that demonstrates how to implement point cloud generation from depth images within the Drake simulation environment?</p>
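<p>While wiring up Drake's <code>DepthImageToPointCloud</code> system is the intended route, as a library-agnostic sketch of what that system computes, depth pixels can be back-projected with the pinhole camera model; the intrinsics below (<code>fx</code>, <code>fy</code>, <code>cx</code>, <code>cy</code>) are placeholders that would come from the depth camera's intrinsics (in Drake, its <code>CameraInfo</code>):</p>

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud
    in the camera frame using the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop zero/invalid depth pixels

cloud = depth_to_points(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (4, 3)
```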
|
<python><point-clouds><robotics><drake><depth-camera>
|
2024-04-02 18:32:54
| 1
| 967
|
Karan Owalekar
|
78,263,337
| 7,619,353
|
C# Program cannot ReadKey when Python is running it as a subprocess
|
<p>I have a C# app that runs perfectly fine when executing it through a command line terminal or manually executing it. When using a python script to run the C# app, the program spits out the following error:</p>
<pre><code>Unhandled Exception: System.InvalidOperationException: Cannot read keys when either application does not have a console or when console input has been redirected from a file. Try Console.Read. at System.Console.ReadKey(Boolean intercept)
</code></pre>
<p>Is there a reason why the program cannot read keys when run using subprocess.Popen in python?</p>
<pre><code>p = subprocess.Popen(
f"SomeCSharpProgram.exe",
stdin=PIPE,
stdout=PIPE,
stderr=PIPE,
shell=True
)
output, error = p.communicate(input=b'\n')
</code></pre>
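<p>The exception comes from the redirection itself: with <code>stdin=PIPE</code> the child's standard input is a pipe, not a console, which is exactly the condition <code>Console.ReadKey</code> rejects. A sketch demonstrating this with a stand-in child process (the fix is to leave <code>stdin</code> unredirected so the child inherits the parent's console, or to switch the C# side to <code>Console.Read</code>):</p>

```python
import subprocess
import sys

# Stand-in for SomeCSharpProgram.exe: any console program works here.
# Passing input="" implies stdin=PIPE, so the child sees a pipe rather
# than a console, and isatty() reports False.
child = [sys.executable, "-c", "import sys; print(sys.stdin.isatty())"]

redirected = subprocess.run(child, input="", capture_output=True, text=True)
print(redirected.stdout.strip())  # False: stdin is a pipe, not a console
```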
|
<python><c#><subprocess>
|
2024-04-02 18:25:44
| 1
| 1,840
|
tyleax
|
78,263,331
| 3,700,524
|
Python module with name __main__.py doesn't run while imported
|
<p>Somebody asked me a question and I honestly had never tried it before, so it was interesting to find out what exactly happens when we name a module <code>__main__.py</code>. I named a module <code>__main__.py</code> and imported it in another file named <code>test.py</code>. Surprisingly, when I run <code>test.py</code> it prints nothing, and none of the functions of <code>__main__.py</code> are available in <code>test.py</code>. Here are the contents of these files:</p>
<p>Here is the contents of <code>__main__.py</code> :</p>
<pre><code>def add(a,b):
result = a+b
return result
print(__name__)
if __name__=='__main__':
print(add(1,2))
</code></pre>
<p>Here is the contents of <code>test.py</code> :</p>
<pre><code>import __main__
</code></pre>
<p>Why isn't the print statement from <code>__main__.py</code> reached? When I rename <code>__main__.py</code> to some other name, such as <code>func.py</code>, the program runs correctly and the line that prints the module's name works fine.</p>
|
<python><python-module>
|
2024-04-02 18:23:52
| 2
| 3,421
|
Mohsen_Fatemi
|
78,263,178
| 17,174,267
|
tkinter export Undo Stack
|
<p>As far as I understand, the undo feature in tkinter's Text class is implemented via a stack storing the insertions/deletions (<a href="https://tkdocs.com/shipman/text-undo-stack.html" rel="nofollow noreferrer">tkdocs</a>).
How can I export this stack so that I can import it the next time the program is launched? (I'm of course also saving the text inside the text box.)</p>
<p>-> The text box undo behaviour should be the same after relaunching the application.</p>
<pre><code>maxUndo = 30
self.__textarea = Text(self.__root, undo=maxUndo != 0, maxundo=maxUndo)
</code></pre>
|
<python><tkinter>
|
2024-04-02 17:51:53
| 1
| 431
|
pqzpkaot
|
78,263,052
| 11,233,365
|
Write a setup.cfg installation key to install dependencies from other installation keys
|
<p><strong>The Problem:</strong></p>
<p>I'm writing a <code>setup.cfg</code> file for a Python package that requires different sets of dependencies depending on what it's used for. For development purposes, I want to add a key to the <code>[options.extras_require]</code> table of the <code>setup.cfg</code> file that installs multiple sets of dependencies simultaneously.</p>
<p>The relevant section of the <code>setup.cfg</code> file at the moment:</p>
<pre class="lang-toml prettyprint-override"><code>[options.extras_require]
option1 =
pkg1
pkg2
pkg3
pkg4
pkg5
option2 =
pkg1
pkg2
pkg6
pkg7
pkg8
option3 =
pkg3
pkg4
pkg9
pkg10
all =
pkg1
pkg2
pkg3
pkg4
pkg5
pkg6
pkg7
pkg8
pkg9
pkg10
</code></pre>
<p>The accompanying <code>setup.py</code> file is minimally dressed, and all our metadata is currently still kept in the <code>setup.cfg</code> file:</p>
<pre><code>from __future__ import annotations
import setuptools
if __name__ == "__main__":
# Do not add any parameters here. Edit setup.cfg instead.
setuptools.setup()
</code></pre>
<p>So far, I've added the dependencies manually to the <code>all</code> key, but this would require me to manually crawl through and add dependencies to it every time the package's list of dependencies changes.</p>
<p>Is there a way to write the <code>all</code> key such that it will install everything listed under options 1, 2, and 3 without the need for me to manually write them to <code>all</code> each time the environment changes?</p>
<p><strong>BUILD:</strong>
The Python version I'm testing for at the moment is 3.11.8, and I'm using <code>setuptools</code> 68.2.2 along with <code>pip</code> 23.3.1. The package is supposed to work from Python 3.9 and up, so it will need to be compatible with multiple versions of <code>setuptools</code> and <code>pip</code>.</p>
<p><strong>UPDATE:</strong></p>
<p><strong>Feeding Modified Dictionary to <code>setup()</code>:</strong></p>
<p>I tried tinkering with the <code>setup.py</code> file as suggested below, and it now looks like this:</p>
<pre><code>from __future__ import annotations
from setuptools import setup
from setuptools.config.setupcfg import read_configuration
# Run code only if file is run as a script
if __name__ == "__main__":
# Read the configuration from setup.cfg
config = read_configuration("setup.cfg")
# Get the dependencies from the other sections
option1_deps = config["options"]["extras_require"]["option1"]
option2_deps = config["options"]["extras_require"]["option2"]
option3_deps = config["options"]["extras_require"]["option3"]
# Combine dependencies
all_deps = option1_deps + option2_deps + option3_deps
# Update the config with the combined dependences
config["options"]["extras_require"]["all"] = all_deps
    # Pass the updated configuration to setuptools.setup()
setup(**config["options"])
</code></pre>
<p>However, I then ran into the following issue when trying to run <code>pip install -e .[all]</code>:</p>
<pre><code> Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
error in setup command: 'python_requires' must be a string containing valid version specifiers; 'SpecifierSet' object has no attribute 'split'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
</code></pre>
<p><strong>Testing <code>read_configuration()</code> and <code>setup()</code> individually:</strong></p>
<p>After some investigation, I found that <code>read_configuration</code> works as advertised, and is able to generate the dictionary structure for the <code>[metadata]</code> and <code>[options]</code> tables correctly. However, when that is fed into <code>setup()</code> it causes it to be unable to read the <code>setup.cfg</code> correctly afterwards. Running <code>setup()</code> without any arguments works just fine.</p>
<p>How should I proceed from here?</p>
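<p>One approach worth trying before patching <code>setup.py</code> at all: recent <code>pip</code>/<code>setuptools</code> versions support self-referential extras, where an extra depends on the package itself with other extras enabled, so <code>all</code> stays in sync automatically (verify this works across the oldest toolchain you target; <code>mypackage</code> below stands in for your actual project name):</p>

```ini
[options.extras_require]
option1 =
    pkg1
    pkg2
    pkg3
    pkg4
    pkg5
# ... option2 and option3 unchanged ...
all =
    mypackage[option1]
    mypackage[option2]
    mypackage[option3]
```

<p>With this, <code>pip install -e .[all]</code> resolves the package's own extras recursively, and no dependency list needs to be duplicated.</p>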
|
<python><configuration>
|
2024-04-02 17:21:45
| 1
| 301
|
TheEponymousProgrammer
|
78,263,034
| 3,151,415
|
Langchain: Dynamically route logic based on input
|
<p>I am trying to dynamically bind two chains. It works when the question is about 'langchain', but fails with the error below otherwise.</p>
<p>Individually the RAG chain works fine.</p>
<pre><code>classification_chain = (
PromptTemplate.from_template(
"""Given the user question below, classify it as either being about 'Langchain' or 'something else'\
Do not respond with more than one word.
<question>
{question}
</question>
Classification:"""
)
| get_llm()
| StrOutputParser()
)
langchain_chain = (
PromptTemplate.from_template(
"""You are an expert in langchain. Always answer with Daddy says.\
<question>
{question}
</question>
Answer:"""
)
| get_llm()
| StrOutputParser()
)
retrieval_chain = ''
def route(info):
if "langchain" in info["topic"].lower():
return langchain_chain
else:
print('-'*100)
print(retrieval_chain)
return retrieval_chain
def get_RAG_response(collection_name: str, question: str):
db = get_vector_db(collection_name)
retriever = db.as_retriever()
model = get_llm()
template = """Answer the following question based only on the provided context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
global retrieval_chain
retrieval_chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| model
| StrOutputParser()
)
# response = retreival_chain.invoke(question)
full_chain = {"topic": classification_chain, "question": lambda x: x["question"]} | RunnableLambda(
route
)
    response = full_chain.invoke({"question": question})
print('*'*100)
print('response:', response)
return response
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>INFO: 127.0.0.1:47854 - "POST /chat/chats HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/applications.py", line 116, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 91, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 146, in simple_response
await self.app(scope, receive, send)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 55, in wrapped_app
raise exc
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 44, in wrapped_app
await app(scope, receive, sender)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/routing.py", line 746, in __call__
await route.handle(scope, receive, send)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/routing.py", line 75, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 55, in wrapped_app
raise exc
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 44, in wrapped_app
await app(scope, receive, sender)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/routing.py", line 70, in app
response = await func(request)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/fastapi/routing.py", line 299, in app
raise e
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/fastapi/routing.py", line 294, in app
raw_response = await run_endpoint_function(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/fastapi/routing.py", line 193, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/starlette/concurrency.py", line 35, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
return await future
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
File "/home/garg10may/coding/pipa/backend/app/api/chat/chat_router.py", line 107, in create_chat_route
return create_chat_service(db, chatCreate)
File "/home/garg10may/coding/pipa/backend/app/api/chat/chat_service.py", line 146, in create_chat_service
bot_message = get_RAG_response(file_group.group_name, chatCreate.message)
File "/home/garg10may/coding/pipa/backend/app/api/chat/utility.py", line 288, in get_RAG_response
response = full_chain.invoke({"question": question})
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2446, in invoke
input = step.invoke(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3908, in invoke
return self._call_with_config(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1625, in _call_with_config
context.run(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3792, in _invoke
output = output.invoke(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2446, in invoke
input = step.invoke(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3091, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3091, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/retrievers.py", line 141, in invoke
return self.get_relevant_documents(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/retrievers.py", line 245, in get_relevant_documents
raise e
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/retrievers.py", line 238, in get_relevant_documents
result = self._get_relevant_documents(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 696, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_community/vectorstores/pgvector.py", line 543, in similarity_search
embedding = self.embedding_function.embed_query(text=query)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_community/embeddings/openai.py", line 697, in embed_query
return self.embed_documents([text])[0]
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_community/embeddings/openai.py", line 668, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_community/embeddings/openai.py", line 471, in _get_len_safe_embeddings
token = encoding.encode(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/tiktoken/core.py", line 116, in encode
if match := _special_token_regex(disallowed_special).search(text):
TypeError: expected string or buffer
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3091, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3091, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/retrievers.py", line 141, in invoke
return self.get_relevant_documents(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/retrievers.py", line 245, in get_relevant_documents
raise e
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/retrievers.py", line 238, in get_relevant_documents
result = self._get_relevant_documents(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 696, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_community/vectorstores/pgvector.py", line 543, in similarity_search
embedding = self.embedding_function.embed_query(text=query)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_community/embeddings/openai.py", line 697, in embed_query
return self.embed_documents([text])[0]
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_community/embeddings/openai.py", line 668, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/langchain_community/embeddings/openai.py", line 471, in _get_len_safe_embeddings
token = encoding.encode(
File "/home/garg10may/coding/pipa/backend/venv/lib/python3.10/site-packages/tiktoken/core.py", line 116, in encode
if match := _special_token_regex(disallowed_special).search(text):
TypeError: expected string or buffer
</code></pre>
|
<python><artificial-intelligence><langchain>
|
2024-04-02 17:17:37
| 1
| 6,248
|
garg10may
|
78,262,838
| 16,912,844
|
Python Logging Color Output In Terminal Without ANSI Escape Code In File
|
<p>This is more of a two-part question regarding logging and terminal output.</p>
<ol>
<li>How, if possible, can I log colored output in the terminal, but not have the ANSI escape codes displayed when redirecting (<code>></code>) or writing to a log file? I'd prefer not to use any external library for this.</li>
</ol>
<p>I understand you can use ANSI escape codes like <code>\x1b[31m</code> (foreground red) or <code>\x1b[43m</code> (background yellow) to display color in the terminal. But when I redirect (<code>></code>) or write to a file, the raw codes show up as <code>ESC[36m</code> in the log file.</p>
<p>Python have a <a href="https://github.com/Textualize/rich" rel="nofollow noreferrer">rich</a> library that can output stylized text in terminal, and when I tried it, it output the result to file without the ANSI escape code.</p>
<ol start="2">
<li>I was wondering how they achieve this? I read some of the documentation, and skim some of the code, but still cannot figure out how they are doing this.</li>
</ol>
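<p>Regarding question 2: one common approach (and, as far as I understand, essentially what <code>rich</code> does) is to check whether stdout is attached to a real terminal with <code>sys.stdout.isatty()</code> and only emit escape codes when it is. A minimal stdlib-only sketch — the class name and colour choices are illustrative:</p>

```python
import logging
import sys

class ColorFormatter(logging.Formatter):
    """Adds ANSI colour only when writing to a real terminal."""
    COLORS = {"WARNING": "\x1b[33m", "ERROR": "\x1b[31m"}
    RESET = "\x1b[0m"

    def __init__(self, *args, use_color=None, **kwargs):
        super().__init__(*args, **kwargs)
        # redirected output (> file) is not a tty, so no escape codes leak
        self.use_color = sys.stdout.isatty() if use_color is None else use_color

    def format(self, record):
        text = super().format(record)
        color = self.COLORS.get(record.levelname)
        if self.use_color and color:
            return f"{color}{text}{self.RESET}"
        return text
```

<p>Attach it with <code>handler.setFormatter(ColorFormatter("%(levelname)s %(message)s"))</code>; the same logger then prints colour interactively and plain text when redirected.</p>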
|
<python><python-logging>
|
2024-04-02 16:41:24
| 1
| 317
|
YTKme
|
78,262,772
| 1,391,441
|
Remove bottom edge on matplotlib bar plot
|
<p>I need to generate a <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.bar.html" rel="nofollow noreferrer">bar plot </a> but without the bottom edge. Here's an example output and code:</p>
<p><a href="https://i.sstatic.net/NraKy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NraKy.png" alt="enter image description here" /></a></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.array([0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95])
y = np.array([0.0801 , 0.08062, 0.08136, 0.08728, 0.0836 , 0.08862, 0.09642,
0.12204, 0.0722 , 0.20776])
plt.bar(x, y, width=0.1, alpha=0.5, align='center',
fill=False, ls=':', lw=5, edgecolor='r')
plt.show()
</code></pre>
<p>I want to hide the bottom edge of the bars. Can this be done?</p>
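<p>One possible workaround, sketched below on the assumption that drawing the edges manually is acceptable: skip <code>plt.bar</code>'s edge entirely and plot only the left, right, and top segments of each bar, so a bottom edge simply never exists.</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

x = np.array([0.05, 0.15, 0.25])
y = np.array([0.0801, 0.08062, 0.08136])
w = 0.1

fig, ax = plt.subplots()
for xi, yi in zip(x, y):
    left, right = xi - w / 2, xi + w / 2
    # draw only three sides of each bar; the bottom edge is never drawn
    ax.plot([left, left], [0, yi], ls=":", lw=5, color="r")
    ax.plot([right, right], [0, yi], ls=":", lw=5, color="r")
    ax.plot([left, right], [yi, yi], ls=":", lw=5, color="r")
# plt.show() or fig.savefig(...) as usual
```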
|
<python><matplotlib><bar-chart>
|
2024-04-02 16:28:02
| 3
| 42,941
|
Gabriel
|
78,262,738
| 540,725
|
Why does Pandas loc with multiindex return a DataFrame with a single row
|
<p>This question is similar to <a href="https://stackoverflow.com/questions/20383647/pandas-selecting-by-label-sometimes-return-series-sometimes-returns-dataframe">Pandas selecting by label sometimes return Series, sometimes returns DataFrame</a>, however I didn't find a solution there. I have 2 dataframes read from CSV with a multi-index <code>(str,int)</code>.</p>
<pre class="lang-py prettyprint-override"><code>data1 = pd.read_csv(file1, sep=";", index_col=['pdf_name', 'page'])
data2 = pd.read_csv(file2, sep=";", index_col=['pdf_name', 'page'])
idx = data1.index[0] # first index: ('0147S00044', 0)
data1.loc[idx] # returns Series, as I would expect
data2.loc[idx] # returns 1xN DataFrame
data2['col1'].loc[idx] # returns Series with 1 value
data2.loc[idx[0]].loc[idx[1]] # returns Series -- how is this different from above???
data2['col1'].loc[idx[0]].loc[idx[1]] # returns actual value
</code></pre>
<p>the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer">docs</a> describe the behaviour with <code>data1</code>, which also makes sense to me. What is happening with <code>data2</code>, why does it behave in this rather weird way?</p>
<p><strong>EDIT: working example:</strong></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from io import StringIO
file1 = StringIO("pdf_name;page;col1;col2\npdf1;0;val1;val2\npdf2;0;asdf;ffff")
file2 = StringIO("pdf_name;page;col1;col2\npdf1;0;;\npdf2;0;;\npdf2;0;;")
data1 = pd.read_csv(file1, sep=";", index_col=['pdf_name', 'page'])
data2 = pd.read_csv(file2, sep=";", index_col=['pdf_name', 'page'])
data2 = data2.sort_index()  # avoid performance warning
idx = data1.index[0]
print(idx) # ('pdf1', 0)
print("data1.loc[idx]", type(data1.loc[idx])) # data1.loc[idx] <class 'pandas.core.series.Series'>
print("data2.loc[idx]", type(data2.loc[idx])) # data2.loc[idx] <class 'pandas.core.frame.DataFrame'>
print("data2.loc[idx].shape", data2.loc[idx].shape) # (1, 2) -- single row
print("data2['col1'].loc[idx]", type(data2['col1'].loc[idx])) # data2['col1'].loc[idx] <class 'pandas.core.series.Series'>
</code></pre>
<p>I figured this happens whenever the dataset has at least two rows with identical index, even if the queried index does not have any duplicates. Is this wanted behaviour?</p>
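<p>For anyone reproducing this: the observation above suggests the behaviour is tied to whether the index as a whole is unique, which can be checked directly. A small sketch mirroring the working example:</p>

```python
import pandas as pd

# three rows, with ('pdf2', 0) duplicated -- like file2 above
idx = pd.MultiIndex.from_tuples(
    [("pdf1", 0), ("pdf2", 0), ("pdf2", 0)], names=["pdf_name", "page"]
)
df = pd.DataFrame({"col1": [1, 2, 3], "col2": [4, 5, 6]}, index=idx).sort_index()

print(df.index.is_unique)   # False: ('pdf2', 0) appears twice
row = df.loc[("pdf1", 0)]   # per the question, a DataFrame here,
print(type(row).__name__)   # even though this particular key is unique
```

<p>So one possible guard is to branch on <code>df.index.is_unique</code>, or to deduplicate the index before lookups.</p>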
|
<python><pandas><dataframe><multi-index><pandas-loc>
|
2024-04-02 16:23:55
| 0
| 1,857
|
N4ppeL
|
78,262,629
| 234,146
|
Why is libopenblas from numpy so big?
|
<p>We are deploying an open source application based on numpy that includes libopenblas.{cryptic string}.gfortran-win32.dll. It is part of the Python numpy package. This dll is over 27MB in size. I'm curious why it is so big and where I can find the source for it to see for myself.</p>
<p>Ultimately I'd like to see whether it can be reduced in size for my application.</p>
<p>Thanks</p>
|
<python><numpy><blas>
|
2024-04-02 16:06:45
| 1
| 1,358
|
Max Yaffe
|
78,262,552
| 1,467,533
|
How to get `GenericAlias` super types in python?
|
<p>Say I have a class defined as follows:</p>
<pre class="lang-py prettyprint-override"><code>class MyList(list[int]): ...
</code></pre>
<p>I'm looking for a method that will return <code>list[int]</code> when I give it <code>MyList</code>, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>>>> inspect.getsupers(MyList)
[list[int]]
</code></pre>
<p>The trouble is that no such method exists, as far as I can find. There is <code>inspect.getmro(...)</code>, but this only returns <code>list</code>, not <code>list[int]</code>. Is there anything that I can reuse or implement that will give me this, short of some magic involving <code>inspect.getsource(...)</code>?</p>
<p>The context is that I'm trying to write a generic <code>issubtype(t1, t2)</code> method that works for arbitrary classes and <code>GenericAlias</code>es. This is one issue I've run into in writing this method that I'm not sure how to solve.</p>
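<p>For what it's worth, classes that subclass a parameterized generic keep the unerased bases in <code>__orig_bases__</code> (PEP 560), which looks like exactly what's being asked for here:</p>

```python
class MyList(list[int]): ...

# getmro() erases the parameterization, but __orig_bases__ preserves it
print(MyList.__orig_bases__)   # (list[int],)

# on Python 3.12+, types.get_original_bases(MyList) is the documented accessor
```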
|
<python><python-typing>
|
2024-04-02 15:52:06
| 1
| 1,855
|
mattg
|
78,262,522
| 11,049,863
|
not all arguments converted during string formatting when trying to execute mssql stored procedure from django
|
<p>My procedure inserts data coming from a REST API into a table in the database.<br/>
p1, p2, p3 are the parameters coming from the user interface.<br/>
P1, P2, P3 are my stored procedure's parameters.<br/>
Here is my code:<br/></p>
<pre><code>def test(request):
if request.method == "POST":
with connection.cursor() as cursor:
P1 = request.data["p1"]
P2 = request.data["p2"]
P3 = request.data["p3"]
params = (P1,P2,P3)
print(params)
try:
cursor.execute("{CALL [dbo].[myStoredProcedure] (?,?,?)}",params)
if cursor.return_value == 1:
result_set = cursor.fetchall()
print(result_set)
finally:
cursor.close()
return Response({"msg":"post"})
else:
return Response({"msg": "get"})
</code></pre>
<p>When I inline the parameters like this: <code>cursor.execute("{CALL [dbo].[myStoredProcedure] (P1,P2,P3)}")</code>, the literal strings "P1", "P2", "P3" are inserted into the database instead of their values.<br/>
How can I solve this problem?</p>
<pre><code> File "C:\Users\HP\.virtualenvs\GFA\Lib\site-packages\rest_framework\views.py", line 509, in dispatch
response = self.handle_exception(exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HP\.virtualenvs\GFA\Lib\site-packages\rest_framework\views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "C:\Users\HP\.virtualenvs\GFA\Lib\site-packages\rest_framework\views.py", line 480, in raise_uncaught_exception
raise exc
File "C:\Users\HP\.virtualenvs\GFA\Lib\site-packages\rest_framework\views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HP\.virtualenvs\GFA\Lib\site-packages\rest_framework\decorators.py", line 50, in handler
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "E:\projets\quantum\GFA\api\views.py", line 19, in agence_actions
cursor.execute("{CALL myStoredProcedure (?,?,?)}",params)
File "C:\Users\HP\.virtualenvs\GFA\Lib\site-packages\django\db\backends\utils.py", line 121, in execute
with self.debug_sql(sql, params, use_last_executed_query=True):
File "C:\Users\HP\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 158, in __exit__
self.gen.throw(typ, value, traceback)
File "C:\Users\HP\.virtualenvs\GFA\Lib\site-packages\django\db\backends\utils.py", line 139, in debug_sql
sql = self.db.ops.last_executed_query(self.cursor, sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HP\.virtualenvs\GFA\Lib\site-packages\mssql\operations.py", line 424, in last_executed_query
return sql % params
~~~~^~~~~~~~
TypeError: not all arguments converted during string formatting
</code></pre>
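<p>The last frame of the traceback hints at the cause: this backend's logging formats the query with <code>sql % params</code>, so it expects <code>%s</code>-style placeholders rather than ODBC-style <code>?</code>. A quick stand-alone reproduction of that failure mode (the procedure name is just the one from the question):</p>

```python
params = ("a", "b", "c")

# ODBC-style placeholders: nothing for %-formatting to consume -> the error
try:
    "{CALL [dbo].[myStoredProcedure] (?,?,?)}" % params
except TypeError as exc:
    print(exc)  # not all arguments converted during string formatting

# %s-style placeholders survive the `sql % params` logging step
print("EXEC [dbo].[myStoredProcedure] %s, %s, %s" % params)
```

<p>This suggests trying <code>%s</code> placeholders in the <code>cursor.execute()</code> call with this Django backend, though I haven't verified it against this exact driver.</p>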
|
<python><sql-server><django>
|
2024-04-02 15:45:27
| 1
| 385
|
leauradmin
|
78,262,486
| 20,920,790
|
How to write a method for pandas.to_sql that performs an upsert?
|
<p>I have some tables in the database which must be updated.
How can I perform this with <code>DataFrame.to_sql</code> for PostgreSQL 16.2?
Running <code>.to_sql</code> with <code>if_exists='replace'</code> does not work, because this table is referenced by another one.</p>
<p>I know that I can write a custom <code>method</code> for this, but I can't find working code or a good explanation of how to do it.</p>
<p>Edit:
My best shot is this method:</p>
<pre><code>from sqlalchemy.dialects import postgresql
def pg_upsert(table, conn, keys, data_iter):
    for row in data_iter:
row_dict = dict(zip(keys, row))
stmt = postgresql.insert(table).values(**row_dict)
upsert_stmt = stmt.on_conflict_do_update(
index_elements=table.index,
set_=row_dict)
conn.execute(upsert_stmt)
</code></pre>
<p>But then I use it:</p>
<pre><code>table.to_sql(
'table',
con=engine,
schema='public',
if_exists='replace',
index=False,
chunksize=200,
method=pg_upsert
)
</code></pre>
<p>I get error:</p>
<pre><code>DependentObjectsStillExist: can't delete table, cause there' data that depents of it.
</code></pre>
<p>And hint:</p>
<pre><code>HINT: use DROP ... CASCADE.
</code></pre>
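<p>Note that the <code>DependentObjectsStillExist</code> error comes from <code>if_exists='replace'</code>, which drops and recreates the table; with a custom upsert method the call presumably needs <code>if_exists='append'</code> so the existing table (and its dependents) stay in place. A sketch of what that might look like — the conflict column <code>"id"</code> is an assumption, and note that pandas passes a <code>SQLTable</code> wrapper into the method, so the underlying SQLAlchemy table is <code>table.table</code>:</p>

```python
from sqlalchemy.dialects import postgresql

def pg_upsert(table, conn, keys, data_iter):
    # pandas hands the rows in via data_iter (not a variable named `data`)
    for row in data_iter:
        row_dict = dict(zip(keys, row))
        stmt = postgresql.insert(table.table).values(**row_dict)
        upsert = stmt.on_conflict_do_update(
            index_elements=["id"],  # assumed primary-key / unique column
            set_=row_dict,
        )
        conn.execute(upsert)

# df.to_sql("table", con=engine, schema="public",
#           if_exists="append", index=False, method=pg_upsert)
```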
|
<python><pandas><postgresql><sqlalchemy>
|
2024-04-02 15:39:42
| 0
| 402
|
John Doe
|
78,262,455
| 2,168,359
|
Accessing Images in media/ folder In Django, as public url
|
<p>I can access the image on the local machine as
<a href="http://127.0.0.1:8000/media/taj.jpg" rel="nofollow noreferrer">http://127.0.0.1:8000/media/taj.jpg</a>
in the browser.</p>
<p>How can I access the same image publicly in the browser?
Like url: <a href="http://49.128.160.80:8000/media/taj.jpg" rel="nofollow noreferrer">http://49.128.160.80:8000/media/taj.jpg</a></p>
|
<python><django><media>
|
2024-04-02 15:33:40
| 0
| 399
|
Nands
|
78,262,292
| 12,545,041
|
Error loading Hugging Face model: SafeTensorsInfo.__init__() got an unexpected keyword argument 'sharded'
|
<p>I have been using Hugging Face transformers <a href="https://huggingface.co/TheBloke/Llama-2-7B-Chat-AWQ" rel="nofollow noreferrer">quantized Llama2 model</a>. Suddenly, code I was able to run earlier today is throwing an error when I try to load the model.</p>
<p>This code is straight from the docs:</p>
<pre class="lang-py prettyprint-override"><code>from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Llama-2-7b-Chat-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(
model_name_or_path,
fuse_layers = True,
trust_remote_code = False,
safetensors = True
)
</code></pre>
<p>This suddenly generates this error:</p>
<pre><code>TypeError: SafeTensorsInfo.__init__() got an unexpected keyword argument 'sharded'
</code></pre>
<p>The bizarre thing is that I was able to run this code earlier today on the same machine. It is the same model. I haven't installed or updated any new Python (or other) packages.</p>
<p>This is using Ubuntu 20.04. Here are the relevant package versions:</p>
<pre><code>accelerate==0.28.0
autoawq==0.2.4
autoawq_kernels==0.0.6
bitsandbytes==0.43.0
huggingface-hub==0.22.2
torch==2.2.2+cu121
torchaudio==2.2.2+cu121
torchvision==0.17.2+cu121
transformers==4.38.2
</code></pre>
<p>How can an error suddenly arise when I haven't changed anything? It might make sense if my cached version of the model were no longer available and it had downloaded a newer version. The way that <code>transformers</code> downloads models is a bit mysterious to me, but the <a href="https://huggingface.co/TheBloke/Llama-2-7B-Chat-AWQ/tree/main" rel="nofollow noreferrer">model files</a> page doesn't indicate anything has changed. What can I do to return to the position I was in this morning, when this code would run?</p>
|
<python><nlp><huggingface-transformers><huggingface><llama>
|
2024-04-02 15:10:17
| 1
| 23,071
|
SamR
|
78,262,179
| 9,652,160
|
How to pick timestamps from ndarray if time() is bigger than 5:00
|
<p>I have an ndarray of timestamps. I need to pick timestamps if they represent a time after 5:00.</p>
<p>The sample code is:</p>
<pre><code>import numpy as np
from datetime import datetime
arr = np.array([1672713000, 1672716600, 1672720200, 1672723800, 1672727400, 1672731000])
threshold_time = datetime.strptime('5:00', '%H:%M').time()
new_arr = np.where(datetime.utcfromtimestamp(arr).time() > threshold_time)
</code></pre>
<p>However, when I run this code, I get the following error:</p>
<pre><code>TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
<p>How to query ndarray correctly in this case? I can't use pandas to solve this issue.</p>
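<p>Since Unix timestamps count seconds from UTC midnight boundaries, one fully vectorised alternative — assuming the 5:00 threshold is meant in UTC, as with <code>utcfromtimestamp</code> — is to compare each timestamp's second-of-day directly:</p>

```python
import numpy as np

arr = np.array([1672713000, 1672716600, 1672720200,
                1672723800, 1672727400, 1672731000])

seconds_of_day = arr % 86400              # seconds since UTC midnight
new_arr = arr[seconds_of_day > 5 * 3600]  # strictly after 05:00 UTC
print(new_arr)  # [1672723800 1672727400 1672731000]
```

<p>This avoids <code>datetime</code> entirely, so it works element-wise on the whole array without pandas.</p>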
|
<python><numpy><numpy-ndarray>
|
2024-04-02 14:52:46
| 2
| 505
|
chm
|
78,262,116
| 764,592
|
How to elegantly handle optional arguments when all are explicitly defined but the call unpacks a dict whose keys may not exist?
|
<p>My question is about the Pythonic way to elegantly define a function whose optional keyword arguments are all declared explicitly, when the call site unpacks its arguments dynamically from a dictionary with <code>**</code>.</p>
<p>Example: I want my function to be defined as follows (or equivalent):</p>
<pre><code>def say(greeting='Hello', message=None):
print(greeting)
print(message)
</code></pre>
<p>This is basic use case</p>
<pre><code>amy = {
'greeting': 'Morning',
'message': 'World',
}
say(**amy)
</code></pre>
<p>How can I elegantly unpack dictionary in which there might be keys in the dictionary that are not optional arguments? i.e.</p>
<pre><code># Dict may have some other keys
bob = {
'greeting': 'Salute',
'message': 'Whoa',
'year': '2024',
}
# Expected:
# TypeError: say() got an unexpected keyword argument 'year'
say(**bob)
</code></pre>
<p>Solutions</p>
<ol>
<li><p>Just filter the dictionary before passing it to the function:</p>
<pre><code> say(**{k: bob[k] for k in {'greeting','message'}}
</code></pre>
<p>What bothers me is the filtering step. First, it assumes every key exists, which may not always be the case. I could use <code>bob.get(k)</code> instead, but that would override the default value I explicitly declared in the function definition. Solution #2 below solves this issue.</p>
</li>
<li><p>Accept <code>**kwargs</code> in the function definition, but don't use it:</p>
<pre><code>def say(greeting='Hello', message=None, **kwargs):
print(greeting)
print(message)
say(**bob)
</code></pre>
<p>What bothers me is that unused variable still have to be defined. So, Solution #3.</p>
</li>
<li><p>Use <code>_</code> underscore as a throwaway variable</p>
<pre><code>def say(greeting='Hello', message=None, **_):
print(greeting)
print(message)
say(**bob)
</code></pre>
<p>Well, I can't complain to this. At least some will know that underscore is a throwaway variable and is a way out. But is this the Pythonic solution for such definition?</p>
</li>
</ol>
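<p>A variation on solution #1 that avoids hard-coding the key set is to derive the accepted keys from the function's signature. The helper name below is made up for illustration:</p>

```python
import inspect

def say(greeting='Hello', message=None):
    print(greeting)
    print(message)

def call_with_known_kwargs(func, kwargs):
    # keep only keys matching named parameters; absent keys are simply
    # omitted, so the defaults declared on `func` still apply
    accepted = inspect.signature(func).parameters
    return func(**{k: v for k, v in kwargs.items() if k in accepted})

bob = {'greeting': 'Salute', 'message': 'Whoa', 'year': '2024'}
call_with_known_kwargs(say, bob)  # prints Salute / Whoa; 'year' is dropped
```

<p>Note this silently ignores extra keys rather than raising the <code>TypeError</code> shown in the "Expected" comment, so it trades strictness for convenience.</p>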
|
<python>
|
2024-04-02 14:43:37
| 0
| 11,858
|
Yeo
|
78,262,071
| 11,618,586
|
Getting into poetry shell environment and calling jupyter-notebook via a batchfile
|
<p>I'm using <code>poetry shell</code>, and I want to create a batch file that goes to the folder containing the Python environment and runs the <code>jupyter-notebook</code> command to bring up my Jupyter notebook.</p>
<p>The actions I execute as follows:</p>
<ol>
<li>CD to the folder the with environment</li>
<li>Poetry shell</li>
<li>Jupyter-notebook</li>
</ol>
<p>I created 2 batch files with the following:</p>
<p>1st batch file <code>Call Poetry.bat</code>:</p>
<pre><code>@ echo off
cd\
cd C:\Path\PythonScripts
poetry shell
</code></pre>
<p>2nd batch file:</p>
<pre><code>@echo off
call "C:\Path\Call Poetry.bat"
call Jupyter-notebook
</code></pre>
<p>But when i execute <code>Call Poetry.bat</code> it just gets into the poetry environment command line and waits. When i exit the environment, it then executes the next line which is <code>jupyter-notebook</code>.
How can i get a batch file to get into an environment and then execute commands?</p>
|
<python><batch-file><virtualenv><python-poetry>
|
2024-04-02 14:37:39
| 0
| 1,264
|
thentangler
|
78,261,993
| 7,800,760
|
Python Wordcloud: Help getting (near) what designers ask for
|
<p>I am generating a wordcloud from a term-frequency dictionary and got this:
<a href="https://i.sstatic.net/REArx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/REArx.png" alt="Wordcloud of people mentioned on a given day" /></a></p>
<p>by using the following wordcloud parameters:</p>
<pre><code>wordcloud = WordCloud(
width=667,
height=375,
font_path="resources/NotoSerif-SemiBold.ttf",
prefer_horizontal=1,
max_words=20,
background_color="whitesmoke",
min_font_size=11,
max_font_size=64,
).generate_from_frequencies(freqdict)
</code></pre>
<p>What I'm really not achieving is the colour and size scheme the designers request, according to these specs:
<a href="https://i.sstatic.net/BHosx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHosx.png" alt="Color and size design guide" /></a>
Can any of you come up with at least some approximation of what they want? Thank you.
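<p>In case it helps: <code>WordCloud</code> accepts a <code>color_func</code> that is called per word with its final <code>font_size</code>, so a size-based palette like the design guide's can be approximated by thresholding. The thresholds and hex colours below are placeholders for whatever the spec actually says:</p>

```python
def size_color_func(word, font_size, position, orientation,
                    random_state=None, **kwargs):
    # hypothetical buckets standing in for the designers' size/colour spec
    if font_size >= 48:
        return "#d62728"   # largest words
    if font_size >= 24:
        return "#1f77b4"   # mid-sized words
    return "#7f7f7f"       # smallest words

# pass color_func=size_color_func to WordCloud(...), or re-colour an
# already-generated cloud with wordcloud.recolor(color_func=size_color_func)
```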
|
<python><word-cloud>
|
2024-04-02 14:25:29
| 1
| 1,231
|
Robert Alexander
|
78,261,825
| 7,194,474
|
PuLP: Constrain the number of distinct people that can work on a task
|
<p>I have an LP/IP problem that I'm trying to solve using PuLP: dividing work among employees. The thing I can't figure out is how to limit the solver to planning only one employee per action. In reality the problem has significantly more constraints, but I have minimised it as much as possible here.</p>
<p>Say I have a specific workload:</p>
<pre><code>import pandas as pd
data = {
"actions": [
"ActionA",
"ActionB",
"ActionC",
"ActionD",
],
"value": [5, 2, 1, 1],
"available_work": [8, 4, 12, 24],
}
work = pd.DataFrame(data=data).set_index("actions")
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>actions</th>
<th>value</th>
<th>available_work</th>
</tr>
</thead>
<tbody>
<tr>
<td>ActionA</td>
<td>5</td>
<td>8</td>
</tr>
<tr>
<td>ActionB</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<td>ActionC</td>
<td>1</td>
<td>12</td>
</tr>
<tr>
<td>ActionD</td>
<td>1</td>
<td>24</td>
</tr>
</tbody>
</table></div>
<p>and also a list of people able to work:</p>
<pre><code>data = {
"ID": [
"01",
"02",
"03",
"04",
"05",
],
"time": [7, 7, 6, 5, 5],
}
employees = pd.DataFrame(data=data).set_index("ID")
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>time</th>
</tr>
</thead>
<tbody>
<tr>
<td>01</td>
<td>7</td>
</tr>
<tr>
<td>02</td>
<td>7</td>
</tr>
<tr>
<td>03</td>
<td>6</td>
</tr>
<tr>
<td>04</td>
<td>5</td>
</tr>
<tr>
<td>05</td>
<td>5</td>
</tr>
</tbody>
</table></div>
<p>I want to maximize the value using the time of the 5 employees:
<code>prob = pulp.LpProblem('PlanningActions', pulp.LpMaximize)</code></p>
<p>I create a dictionary called vars to contain the referenced variables:</p>
<pre><code>vars = pulp.LpVariable.dicts(
"division",
(work.index, employees.index),
    lowBound=0,
cat=pulp.LpInteger,
)
</code></pre>
<p>I don't want to plan more work on a person than there is available work:</p>
<pre><code>for details in work.itertuples():
prob += (
pulp.lpSum([vars[details.Index][m] for m in employees.index])
<= details.available_work,
f"Not more than available work {details.Index}",
)
</code></pre>
<p>and I also don't want to plan more work on an employee than it has available time:</p>
<pre><code>for m in employees.index:
prob += (
pulp.lpSum([vars[w][m] for w in work.index])
<= int(employees.loc[m, "time"]),
f"Not more than available time employee {m}",
)
</code></pre>
<p>This all works properly. However, I can't figure out the constraint that plans only one employee on an action. I figured it would look something like this:</p>
<pre><code>for action in work.index:
if action == 'ActionA':
prob += (
            pulp.lpSum([vars[action][m] > 0 for m in employees.index])
<= 1,
f"Not more than one employee for {action}",
)
</code></pre>
<p>But I can't seem to compare the variable (vars) with integer, since I'm getting the following error:
<code>TypeError: '>' not supported between instances of 'LpVariable' and 'int'</code></p>
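A common way to express "at most one employee per action" in an (integer) linear program — a sketch, not necessarily the asker's exact model — is to add binary indicator variables and link them to the integer assignment variables with a big-M constraint. The names (`hours`, `used`) and the value of `M` below are illustrative; `M` just needs to be an upper bound on the hours one employee can spend:

```python
import pulp

works = ["ActionA", "ActionB"]
emps = ["01", "02", "03"]

prob = pulp.LpProblem("PlanningActions", pulp.LpMaximize)
hours = pulp.LpVariable.dicts("division", (works, emps), lowBound=0, cat=pulp.LpInteger)
# Binary indicator: 1 if employee m works on action w at all.
used = pulp.LpVariable.dicts("used", (works, emps), cat=pulp.LpBinary)

M = 7  # big-M: any upper bound on the hours one employee can spend
for w in works:
    for m in emps:
        # hours[w][m] can only be positive if the indicator is switched on
        prob += hours[w][m] <= M * used[w][m]
    # at most one employee may be assigned to each action
    prob += pulp.lpSum(used[w][m] for m in emps) <= 1

prob += pulp.lpSum(hours[w][m] for w in works for m in emps)  # toy objective
prob.solve(pulp.PULP_CBC_CMD(msg=False))
```

This sidesteps the `LpVariable > int` comparison entirely: the "is this variable positive?" condition becomes a separate binary variable constrained linearly.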
|
<python><pulp><integer-programming>
|
2024-04-02 13:54:32
| 2
| 1,897
|
Paul
|
78,261,786
| 1,668,622
|
How to conditionally clear script output after execution?
|
<p>I want to build/provide a wrapper for scripts with output necessary only while the script is being executed, but which should conditionally be cleaned up after script execution.</p>
<p>E.g. (only an example, I'm not talking about Docker)</p>
<pre><code>docker build <ARGS>
</code></pre>
<p>can be quite verbose even when executed from cache. You also don't want to hide the output while it's running generally, because then the user doesn't see what's going on.</p>
<p>You <em>could</em> use <code>screen</code> and run</p>
<pre><code>screen docker build <ARGS>
</code></pre>
<p>Now you can see the output while <code>docker build</code> is running, and it hides after script termination. But now you have a dependency to <code>screen</code> and you also don't see erroneous output in case something goes wrong.</p>
<p>Is there a cheap (in terms of dependencies) way in Python or Bash to remove the output of a script after execution (as with <code>screen</code> but without extra dependencies) or (even better) do it conditionally?</p>
<p>(Note: I don't want to clear the whole screen of course, only the output of the wrapped script)</p>
<p>To shed some light on the use case: as in modern tools like <code>pipenv</code>, <code>npm</code>, <code>cargo</code>, <code>docker-build</code> I want to keep the user informed about an ongoing process while only errors should stay after the script terminated.</p>
<p>So the ideal behavior after running <code>my-wrapper <COMMAND></code> looks like this:</p>
<ul>
<li>Don't clear the terminal on startup (as most <code>curses</code> programs do)</li>
<li>Run <code><COMMAND></code> and pass <code>stdout</code> and <code>stderr</code> through (keeping <code>stderr</code> <code>stderr</code> and <code>stdout</code> <code>stdout</code>)</li>
<li>after the spawned process terminates, react based on the return code: remove the output from the terminal (as <code>screen</code> does, but without printing <code>[screen is terminating]</code>) on success, but keep it (or re-print it if necessary) on failure</li>
</ul>
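One dependency-free approach — a sketch, assuming a VT100-compatible terminal — is to count the lines the wrapped command printed and, on success, emit the ANSI escape sequence that moves the cursor up one line and erases it, once per printed line. The function names here (`erase_lines`, `run_and_maybe_clear`) are illustrative:

```python
import subprocess
import sys

def erase_lines(n: int) -> str:
    """ANSI sequence: move the cursor up one line and clear it, n times."""
    return "\x1b[1A\x1b[2K" * n

def run_and_maybe_clear(cmd: list) -> int:
    """Run cmd, echo its output live, and erase that output if it succeeded."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    lines = 0
    for line in proc.stdout:
        sys.stdout.write(line)
        lines += line.count("\n")
    code = proc.wait()
    if code == 0:
        sys.stdout.write(erase_lines(lines))
    return code
```

Two caveats: this merges stderr into stdout (deviating from the "keep the streams separate" requirement — keeping them separate while counting lines needs `select`/threads), and lines longer than the terminal width wrap onto multiple physical lines but count as one here.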
|
<python><bash><terminal>
|
2024-04-02 13:47:34
| 0
| 9,958
|
frans
|
78,261,772
| 13,184,183
|
How to prevent consequences of code execution in try block if exception is raised?
|
<p>I want to undo the effects of a <code>try</code> block if it raises an <code>Exception</code> of any kind. Example:</p>
<pre class="lang-py prettyprint-override"><code>a = 1
try:
a += 1
raise ValueError
except:
pass
print(a)
</code></pre>
<p>After that <code>a</code> would be equal to 2. I want it to remain 1. In that particular case I could add <code>a -=1</code> into <code>except</code> block, but that is not a solution for more complex stuff. Is there a way to do that smoothly?</p>
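One common pattern (a sketch, not the only option) is to do the work on a scratch copy and commit it back only when the whole block succeeds, using the <code>else</code> clause of <code>try</code>:

```python
import copy

a = 1
try:
    tmp = copy.deepcopy(a)  # work on a scratch copy (deepcopy for nested data)
    tmp += 1
    raise ValueError
except ValueError:
    pass  # `a` was never touched
else:
    a = tmp  # commit only if no exception was raised
print(a)  # → 1
```

For "more complex stuff" the same idea applies: mutate a copy of the state, and swap it in atomically at the end.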
|
<python><exception><try-catch>
|
2024-04-02 13:45:39
| 1
| 956
|
Nourless
|
78,261,716
| 2,546,099
|
Usage of torchaudio.transforms.MelSpectrogram for tensor residing on GPU
|
<p>I want to calculate a MelSpectrogram using torchaudio on a GPU. For testing, I wrote the following code:</p>
<pre><code>from typing import Optional
import torch
import torchaudio
import numpy as np
from tests.__init__ import (
__target_clock__ as TARGET_CLOCK,
__number_of_test_data_vals__ as NUMBER_OF_TEST_DATA_VALS,
)
# Set general parameters:
TARGET_DEVICE = "CUDA"
TARGET_FREQUENCY: int = 440
NUMBER_OF_FFT_SLOTS: int = 1024
HOP_LENGTH: Optional[int] = None
NUMBER_OF_MEL_SLOTS: int = 128
if __name__ == "__main__":
target_device = torch.device(
"cuda" if (TARGET_DEVICE == "CUDA" and torch.cuda.is_available()) else "cpu"
)
print(f"Using device {target_device}")
sampling_vec: np.ndarray = np.arange(NUMBER_OF_TEST_DATA_VALS) / TARGET_CLOCK
frequency_vec: np.ndarray = np.sin(
2 * np.pi * TARGET_FREQUENCY * sampling_vec
).astype("float32")
frequency_tensor: torch.Tensor = torch.Tensor(frequency_vec).to(target_device)
mel_spectrogram: torch.Tensor = torchaudio.transforms.MelSpectrogram(
sample_rate=TARGET_CLOCK,
n_fft=NUMBER_OF_FFT_SLOTS,
hop_length=HOP_LENGTH,
n_mels=NUMBER_OF_MEL_SLOTS,
)(frequency_tensor)
print(f"Obtained MEL-Spectrogram: {mel_spectrogram}")
</code></pre>
<p>When running it on the CPU (i.e. setting <code>TARGET_DEVICE</code> to anything else than <code>"CUDA"</code>), the code runs without issues. When trying to use CUDA, however, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "~\testing_modules\test_melspectrogram_GPU.py", line 39, in <module>
mel_spectrogram: torch.Tensor = torchaudio.transforms.MelSpectrogram(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AppData\Local\pypoetry\Cache\virtualenvs\testbed-rg5q6nje-py3.11\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AppData\Local\pypoetry\Cache\virtualenvs\testbed-rg5q6nje-py3.11\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AppData\Local\pypoetry\Cache\virtualenvs\testbed-rg5q6nje-py3.11\Lib\site-packages\torchaudio\transforms\_transforms.py", line 619, in forward
specgram = self.spectrogram(waveform)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AppData\Local\pypoetry\Cache\virtualenvs\testbed-rg5q6nje-py3.11\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AppData\Local\pypoetry\Cache\virtualenvs\testbed-rg5q6nje-py3.11\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AppData\Local\pypoetry\Cache\virtualenvs\testbed-rg5q6nje-py3.11\Lib\site-packages\torchaudio\transforms\_transforms.py", line 110, in forward
return F.spectrogram(
^^^^^^^^^^^^^^
File "~\AppData\Local\pypoetry\Cache\virtualenvs\testbed-rg5q6nje-py3.11\Lib\site-packages\torchaudio\functional\functional.py", line 126, in spectrogram
spec_f = torch.stft(
^^^^^^^^^^^
File "~\AppData\Local\pypoetry\Cache\virtualenvs\testbed-rg5q6nje-py3.11\Lib\site-packages\torch\functional.py", line 660, in stft
return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore[attr-defined]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: stft input and window must be on the same device but got self on cuda:0 and window on cpu
</code></pre>
<p>What am I doing wrong here, and what can I do to run MelSpectrogram on the GPU? The current torch version is <code>2.2.2+cu121</code></p>
|
<python><pytorch><torchaudio>
|
2024-04-02 13:36:30
| 1
| 4,156
|
arc_lupus
|
78,261,632
| 14,045,537
|
Folium Custom legend compatibility issue with Fullscreen plugin
|
<p>I'm using the code from <a href="https://github.com/python-visualization/folium/issues/528#issuecomment-421445303" rel="nofollow noreferrer"><code>How can I add a legend to a folium map?</code></a> to create a custom draggable legend.</p>
<p><a href="https://i.sstatic.net/E6v6q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E6v6q.png" alt="enter image description here" /></a></p>
<p>On adding the <a href="https://github.com/python-visualization/folium/blob/main/folium/plugins/fullscreen.py" rel="nofollow noreferrer"><code>Fullscreen folium plugin</code></a>, with fullscreen mode the draggable legend disappears.</p>
<p><a href="https://i.sstatic.net/2Kn1W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2Kn1W.png" alt="enter image description here" /></a></p>
<p>Is there a way to show the legend in fullscreen mode with the folium fullscreen plugin?</p>
<p>Reproducible Code:</p>
<pre class="lang-py prettyprint-override"><code>from branca.element import Template, MacroElement
import folium
from folium import plugins
m = folium.Map(location=(30, 20), zoom_start=4)
template = """
{% macro html(this, kwargs) %}
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>jQuery UI Draggable - Default functionality</title>
<link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
<script src="https://code.jquery.com/jquery-1.12.4.js"></script>
<script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>
<script>
$( function() {
$( "#maplegend" ).draggable({
start: function (event, ui) {
$(this).css({
right: "auto",
top: "auto",
bottom: "auto"
});
}
});
});
</script>
</head>
<body>
<div id='maplegend' class='maplegend'
style='position: absolute; z-index:9999; border:2px solid grey; background-color:rgba(255, 255, 255, 0.8);
border-radius:6px; padding: 10px; font-size:14px; right: 20px; top: 20px;'>
<div class='legend-title'>Population</div>
<div class='legend-scale'>
<div class='legend-labels' style="display: flex; justify-content: space-between;">
<div style="display: flex; flex-direction: column; align-items: center;">
<span style='background:#fee5d9; opacity:0.8; width: 50px; height: 20px; display: inline-block;'></span>
<span>(0-50)</span>
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<span style='background:#fcbba1; opacity:0.8; width: 50px; height: 20px; display: inline-block;'></span>
<span>(51-100)</span>
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<span style='background:#fc9272; opacity:0.8; width: 50px; height: 20px; display: inline-block;'></span>
<span>(101-250)</span>
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<span style='background:#fb6a4a; opacity:0.8; width: 50px; height: 20px; display: inline-block;'></span>
<span>(251-500)</span>
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<span style='background:#ef3b2c; opacity:0.8; width: 50px; height: 20px; display: inline-block;'></span>
<span>(501-750)</span>
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<span style='background:#cb181d; opacity:0.8; width: 50px; height: 20px; display: inline-block;'></span>
<span>(751-1000)</span>
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<span style='background:#99000d; opacity:0.8; width: 50px; height: 20px; display: inline-block;'></span>
<span>(> 1001)</span>
</div>
</div>
</div>
</div>
</body>
</html>
<style type='text/css'>
.maplegend .legend-title {
text-align: left;
margin-bottom: 5px;
font-weight: bold;
font-size: 90%;
}
.maplegend .legend-scale ul {
margin: 0;
margin-bottom: 5px;
padding: 0;
float: left;
list-style: none;
}
.maplegend .legend-scale ul li {
font-size: 80%;
list-style: none;
margin-left: 0;
line-height: 18px;
margin-bottom: 2px;
}
.maplegend ul.legend-labels li span {
display: block;
float: left;
height: 16px;
width: 30px;
margin-right: 5px;
margin-left: 0;
border: 1px solid #999;
}
.maplegend .legend-source {
font-size: 80%;
color: #777;
clear: both;
}
.maplegend a {
color: #777;
}
</style>
{% endmacro %}"""
macro = MacroElement()
macro._template = Template(template)
m.get_root().add_child(macro)
# Add the full screen button.
plugins.Fullscreen(
position = "bottomleft",
title = "Open full-screen map",
title_cancel = "Close full-screen map",
force_separate_button = True,
).add_to(m)
m
</code></pre>
|
<python><html><css><folium><folium-plugins>
|
2024-04-02 13:24:46
| 1
| 3,025
|
Ailurophile
|
78,261,613
| 14,824,108
|
OpenAI Fine-tuning API error: "ImportError: cannot import name 'FineTune'"
|
<p>I'm having an issue with the OpenAI Fine-tuning API, getting the same error message after having tried several versions:</p>
<p><code>ImportError: cannot import name 'FineTune' from 'openai.cli'</code></p>
<p>Which is related to the following imports:</p>
<pre><code>from openai import FineTune as FineTune
from openai.cli import FineTune as FineTuneCli
</code></pre>
<p>Does anyone know how to fix this issue?</p>
|
<python><openai-api><fine-tuning>
|
2024-04-02 13:21:21
| 1
| 676
|
James Arten
|
78,261,368
| 1,714,692
|
filter result of groupby with pd.Series of boolean
|
<p>Consider two dataframes that share a column <code>a</code>. In the first dataframe column <code>a</code> has unique values, whereas in the second one it does not, although the possible values of the column are the same in both dataframes and both of them are sorted on <code>a</code>.</p>
<pre><code>df = pd.DataFrame([{'a': 1, 'b':2}, {'a':2, 'b': 4}, {'a':3, 'b': 4}])
df2 = pd.DataFrame([{'a': 1, 'c':2}, {'a': 1, 'c':3}, {'a':2, 'c': 4}, {'a': 2, 'c':5}, {'a': 3, 'c':5}, {'a': 3, 'c':5}])
</code></pre>
<p>From the first dataframe I want to consider only a given set of rows, based on column <code>b</code>. For example:</p>
<pre><code>my_indexes = df1[df1['b']==4]
</code></pre>
<p>the result <code>my_indexes</code> is a <code>pd.Series</code> of booleans (not exactly indexes, but let me keep this naming), and normally in a dataframe it can be used as <code>df[my_indexes]</code> to extract some rows. I want to use it on <code>df2</code> to extract the rows whose <code>a</code> values correspond to those for which <code>'b'==4</code>, i.e., the ones for which the <code>my_indexes</code> entries are True. However, <code>df2</code> has more rows than <code>df</code>. For this reason I first group on <code>a</code> as:</p>
<pre><code>grouped = df2.groupby("a")
</code></pre>
<p>in this way <code>grouped</code> will have the same length as <code>my_indexes</code>, since all values of column <code>a</code> in <code>df</code> are also in <code>df2</code> and since both are sorted on <code>a</code>. So I thought I could use <code>my_indexes</code> directly on <code>grouped</code> as if it were a dataframe, but <code>grouped[my_indexes]</code> does not work.</p>
<p>How can I use <code>my_indexes</code> to filter <code>df2</code>?</p>
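One sketch of an answer (using the `df`/`df2` definitions above): rather than indexing the GroupBy object, select the matching `a` values from the first frame and filter the second with `isin`, which does not require the frames to have the same length:

```python
import pandas as pd

df = pd.DataFrame([{'a': 1, 'b': 2}, {'a': 2, 'b': 4}, {'a': 3, 'b': 4}])
df2 = pd.DataFrame([{'a': 1, 'c': 2}, {'a': 1, 'c': 3}, {'a': 2, 'c': 4},
                    {'a': 2, 'c': 5}, {'a': 3, 'c': 5}, {'a': 3, 'c': 5}])

wanted_a = df.loc[df['b'] == 4, 'a']      # the `a` values where b == 4
filtered = df2[df2['a'].isin(wanted_a)]   # rows of df2 with those `a` values
```

This also removes the reliance on both frames being sorted on `a`.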
|
<python><pandas><dataframe><filter><group-by>
|
2024-04-02 12:40:12
| 1
| 9,606
|
roschach
|
78,261,324
| 20,920,790
|
Getting UnicodeDecodeError when running read_sql_query for PostgreSQL
|
<p>I create the database with this code:</p>
<pre><code>CREATE DATABASE "new_db"
WITH
OWNER "postgres"
ENCODING 'UTF8'
LC_COLLATE = 'en_US.UTF-8'
LC_CTYPE = 'en_US.UTF-8'
TEMPLATE template0;
</code></pre>
<p>Successfully added table:</p>
<pre><code>CREATE TABLE public.rounds
(
rounds_id bigint NOT NULL,
rounds_bot_date date,
rounds_bot_time time without time zone,
PRIMARY KEY (rounds_id)
)
</code></pre>
<p>In Python I create engine:</p>
<pre><code>db_config = {'user': 'postgres',
'pwd': '****',
'host': 'localhost',
'port': 5432,
'db': 'new_db'}
connection_string = f"postgresql://{db_config['user']}:{db_config['pwd']}@{db_config['host']}:{db_config['port']}/{db_config['db']}"
engine = create_engine(connection_string)
</code></pre>
<p>And then I try to run pandas.read_sql_query:</p>
<pre><code>pd.read_sql_query('select * from rounds', con=engine)
</code></pre>
<p>I get this error:</p>
<pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc2 in position 61: invalid continuation byte
</code></pre>
<p>The same thing happens when I try to append data to the table.
I suspect an encoding mistake in the database,
or an incorrect connection_string.</p>
<p>I also tried using postgresql+pg8000, but got another error:</p>
<pre><code>ProgrammingError: (pg8000.dbapi.ProgrammingError) {'S': '�����', 'V': 'FATAL', 'C': '28P01',
</code></pre>
<p>With "postgresql+psycopg" error:</p>
<pre><code>OperationalError: (psycopg.OperationalError) connection failed: ������������ "postgres" ��
</code></pre>
<p>I also tried this:</p>
<pre><code>import ssl
ssl_context = ssl.create_default_context()
engine = create_engine(
connection_string,
connect_args={"ssl_context": ssl_context},
)
</code></pre>
<p>And got this error:</p>
<pre><code>InterfaceError: (pg8000.exceptions.InterfaceError) Server refuses SSL
</code></pre>
<p>P.S. The database contains only English content.</p>
<p>P.P.S. The same query runs perfectly in DBeaver.</p>
<p>Python ver. 3.11.7</p>
<p>Pandas ver. 2.2.1</p>
<p>SQLAlchemy ver. 2.0.25</p>
<p>PostgreSQL ver. 16.2, compiled by Visual C++ build 1937, 64-bit</p>
|
<python><pandas><postgresql><sqlalchemy>
|
2024-04-02 12:32:47
| 1
| 402
|
John Doe
|
78,261,311
| 3,240,681
|
py is not linked with latest python version
|
<p>According to this <a href="https://stackoverflow.com/a/50896577/3240681">answer</a>, <code>py</code> by default, i.e. without options, should use the latest Python installed on the system. But for some reason I have the following on my Windows 11:</p>
<pre><code>$ py --version
Python 3.8.10
$ python3 --version
Python 3.12.2
</code></pre>
<p>How can this be explained? And how to link <code>py</code> with latest python version?</p>
<p>Unfortunately I don't remember how I did install them.</p>
|
<python><python-3.x><windows>
|
2024-04-02 12:30:09
| 1
| 5,172
|
αλεχολυτ
|
78,261,229
| 9,212,050
|
Cannot run Matlab SDK module for Python inside Docker, but no issues locally
|
<p>I need to run a Matlab script inside Python. To achieve that, first I installed Matlab Compiler Runtime (MCR) using <a href="https://fr.mathworks.com/products/compiler/matlab-runtime.html" rel="nofollow noreferrer">this</a> documentation. Then, I used <a href="https://mathworks.com/help/compiler/package-matlab-standalone-applications-into-docker-images.html" rel="nofollow noreferrer">this</a> documentation to create and install a Python module of the Matlab script. In particular, I used the Matlab Compiler SDK app to create the module, then installed the generated directory using <code>python setup.py install</code> (<code>setup.py</code> lives in that directory), and I could successfully execute the script locally inside Python using this code:</p>
<pre><code>import test_function
mypkg = test_function.initialize()
arg1 = 1
arg2 = 2
out1, out2 = mypkg.test_function(arg1, arg2, nargout=2)
print(out1, out2)
mypkg.terminate()
</code></pre>
<p>Now I would like to execute the module from inside a Docker container. So, I installed the MCR installation zip file (Linux 64 from <a href="https://fr.mathworks.com/products/compiler/matlab-runtime.html" rel="nofollow noreferrer">here</a>), unzipped it and saved it to a directory <code>matlab_runtime</code>. In the Dockerfile, I copied the directory with the Matlab SDK module (<code>test_function</code>), copied the directory with MCR installation directory (<code>matlab_runtime</code>) and tried to install Matlab Runtime. Here are the changes to the Dockerfile:</p>
<pre><code>FROM mathworks/matlab-runtime-deps:R2023b
# other not-related commands
# ...
# other not-related commands
RUN apt-get install -y python3.9 python3-pip
COPY test_function /app/test_function
COPY matlab_runtime /app/matlab_runtime
RUN ls /app/matlab_runtime
RUN pip install setuptools
WORKDIR /app
USER root
RUN chmod +x ./matlab_runtime/install
RUN python3 test_function/setup.py install
RUN sudo -H ./matlab_runtime/install
</code></pre>
<p>However I get this error: <code>failed to solve: rpc error: code = Unknown desc = process "/bin/sh -c sudo -H ./matlab_runtime/install" did not complete successfully: exit code: 42</code>.</p>
<p>I tried to use <code>make</code> in the Dockerfile:</p>
<pre><code>RUN apt update && apt install -y make
RUN make ./matlab_runtime/install
</code></pre>
<p>Docker doesn't return any errors, but when I run the Python script above inside the Docker container, I get the error: <code>AttributeError: module 'LC_filter_for_AI' has no attribute 'initialize'</code>. When I check the module's methods using dir(test_function) I get</p>
<pre><code>['__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']
</code></pre>
<p>while for the local installation of the module I get the methods</p>
<pre><code>['_PathInitializer', '__builtins__', '__cached__', '__doc__', '__exit_packages', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_pir', 'atexit', 'glob', 'importlib', 'initialize', 'initialize_runtime', 'os', 'pdb', 'platform', 're', 'sys', 'terminate_runtime', 'weakref']
</code></pre>
|
<python><docker><matlab><sdk><matlab-compiler>
|
2024-04-02 12:14:04
| 1
| 1,404
|
Sayyor Y
|
78,261,001
| 2,526,586
|
Different types of Integer division in Python
|
<p>In terms of the resulting value (ignoring the resulting data type), are the following the same in Python if <code>x</code> and <code>y</code> are both numbers?</p>
<pre><code>int(x / y)
</code></pre>
<pre><code>x // y
</code></pre>
<p>If so, which is better to use in a real application? And why?</p>
<p>P.S. Are there any other methods in Python that achieve similar result but more suitable in different use cases? For example, if <code>y</code> is 2^n, then we can do bitwise shifting - that's all I know.</p>
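The two forms differ for negative operands (`//` floors toward negative infinity, while `int()` truncates toward zero) and for large integers, where `x / y` goes through float64 and can lose precision. A quick check:

```python
# Negative operands: floor vs truncation
assert -7 // 2 == -4          # floor division rounds toward -infinity
assert int(-7 / 2) == -3      # true division then truncation toward zero

# Large integers: x / y passes through float and can lose precision
big = 10**18 + 1
assert big // 1 == big
assert int(big / 1) != big    # float64 cannot represent 10**18 + 1 exactly

# Bit shifting matches floor division for powers of two, including negatives
assert -7 >> 1 == -7 // 2 == -4
```

So for exact integer results `x // y` is generally the safer choice; `int(x / y)` is appropriate only when truncation-toward-zero semantics are specifically wanted and the magnitudes stay within float precision.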
|
<python><integer-division>
|
2024-04-02 11:35:28
| 1
| 1,342
|
user2526586
|
78,260,947
| 1,473,517
|
How to parallelize/speed up embarrassingly parallel numba code?
|
<p>I have the following code:</p>
<pre><code>@nb.njit(cache=True)
def find_two_largest(arr):
# Initialize the first and second largest elements
if arr[0] >= arr[1]:
largest = arr[0]
second_largest = arr[1]
else:
largest = arr[1]
second_largest = arr[0]
# Iterate through the array starting from the third element
for num in arr[2:]:
if num > largest:
second_largest = largest
largest = num
elif num > second_largest:
second_largest = num
return largest, second_largest
@nb.njit(cache=True)
def max_bar_one(arr):
largest, second_largest = find_two_largest(arr)
missing_maxes = np.empty_like(arr)
for i in range(arr.shape[0]):
if arr[i] == largest:
if largest != second_largest:
missing_maxes[i] = second_largest
else:
missing_maxes[i] = largest # largest == second_largest
else:
missing_maxes[i] = largest
return missing_maxes
@nb.njit(cache=True)
def replace_max_row_wise_add_first_delete_last(d):
"""
Run max_bar_one on each row but the last, prepend an all -inf row
"""
m, n = d.shape
result = np.empty((m, n))
result[0] = -np.inf
for i in range(0, m - 1):
result[i + 1, :] = max_bar_one(d[i, :])
return result
@nb.njit(cache=True)
def main_function(d, subcusum, j):
temp = replace_max_row_wise_add_first_delete_last(d)
for i1 in range(temp.shape[0]):
for i2 in range(temp.shape[1]):
temp[i1, i2] = max(temp[i1, i2], d[i1, i2]) + subcusum[j, i2]
return temp
</code></pre>
<p>I then set up the data with:</p>
<pre><code>n = 5000
A = np.random.randint(-3, 4, (n, n)).astype(float)
cusum_rows = np.cumsum(A, axis=1)
rowseq = np.arange(n)
d = np.random.randint(-3, 4, (5000, 5000))
</code></pre>
<p>We can then time it with:</p>
<pre><code>%timeit main_function(d, cusum_rows, 0)
166 ms ± 1.87 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>Is it possible to parallelize the for loop or the code in general to speed this up? I tried using parallel=True in replace_max_row_wise_add_first_delete_last
but it didn't speed up the code and only reported:</p>
<pre><code>Instruction hoisting:
loop #1:
Failed to hoist the following:
dependency: $value_var.73 = getitem(value=_72call__function_11, index=$parfor__index_72.90, fn=<built-in function getitem>)
</code></pre>
<p>This is surprising as all the calls in the for loop are independent.</p>
<p>Can this code be sped up and/or parallelized?</p>
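As a point of comparison (not the asker's numba path), the per-row "max excluding self" can be vectorized in NumPy by computing each row's largest and second-largest values once via `np.partition`; the function name and sample data below are illustrative:

```python
import numpy as np

def max_bar_one_rows(d: np.ndarray) -> np.ndarray:
    """Row-wise: replace each element with the row max taken over the
    other elements (i.e. the second largest at the argmax positions)."""
    top2 = np.partition(d, -2, axis=1)[:, -2:]   # two largest values per row
    second, largest = top2[:, 0], top2[:, 1]
    # where an element equals the row max, use the second largest instead;
    # duplicated maxima are handled naturally since second == largest then
    return np.where(d == largest[:, None], second[:, None], largest[:, None])

d = np.array([[1.0, 5.0, 3.0],
              [2.0, 2.0, 1.0]])
result = max_bar_one_rows(d)
```

This removes the Python-level per-row loop; whether it beats the numba version depends on memory bandwidth, but it composes cleanly with numba's `prange` applied over row blocks if further parallelism is needed.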
|
<python><performance><numba>
|
2024-04-02 11:25:33
| 1
| 21,513
|
Simd
|
78,260,939
| 348,168
|
Make a pcap file in python with two udp packets with different timestamps
|
<p>I have a piece of code for making pcap files taken from here:
<a href="https://www.codeproject.com/tips/612847/generate-a-quick-and-easy-custom-pcap-file-using-p" rel="nofollow noreferrer">https://www.codeproject.com/tips/612847/generate-a-quick-and-easy-custom-pcap-file-using-p</a></p>
<pre><code>port = 9600
#Custom Foo Protocol Packet
message = ('01 01 00 08' #Foo Base Header
'01 02 00 00' #Foo Message (31 Bytes)
'00 00 12 30'
'00 00 12 31'
'00 00 12 32'
'00 00 12 33'
'00 00 12 34'
'D7 CD EF' #Foo flags
'00 00 12 35')
"""----------------------------------------------------------------"""
""" Do not edit below this line unless you know what you are doing """
"""----------------------------------------------------------------"""
import sys
import binascii
#Global header for pcap 2.4
pcap_global_header = ('D4 C3 B2 A1'
'02 00' #File format major revision (i.e. pcap <2>.4)
'04 00' #File format minor revision (i.e. pcap 2.<4>)
'00 00 00 00'
'00 00 00 00'
'FF FF 00 00'
'01 00 00 00')
#pcap packet header that must preface every packet
pcap_packet_header = ('AA 77 9F 47'
'90 A2 04 00'
'XX XX XX XX' #Frame Size (little endian)
'YY YY YY YY') #Frame Size (little endian)
eth_header = ('00 00 00 00 00 00' #Source Mac
'00 00 00 00 00 00' #Dest Mac
'08 00') #Protocol (0x0800 = IP)
ip_header = ('45' #IP version and header length (multiples of 4 bytes)
'00'
'XX XX' #Length - will be calculated and replaced later
'00 00'
'40 00 40'
'11' #Protocol (0x11 = UDP)
'YY YY' #Checksum - will be calculated and replaced later
'7F 00 00 01' #Source IP (Default: 127.0.0.1)
'7F 00 00 01') #Dest IP (Default: 127.0.0.1)
udp_header = ('80 01'
'XX XX' #Port - will be replaced later
'YY YY' #Length - will be calculated and replaced later
'00 00')
def getByteLength(str1):
    return len(''.join(str1.split())) // 2
def writeByteStringToFile(bytestring, filename):
bytelist = bytestring.split()
bytes = binascii.a2b_hex(''.join(bytelist))
bitout = open(filename, 'wb')
bitout.write(bytes)
def generatePCAP(message,port,pcapfile):
udp = udp_header.replace('XX XX',"%04x"%port)
udp_len = getByteLength(message) + getByteLength(udp_header)
udp = udp.replace('YY YY',"%04x"%udp_len)
ip_len = udp_len + getByteLength(ip_header)
ip = ip_header.replace('XX XX',"%04x"%ip_len)
checksum = ip_checksum(ip.replace('YY YY','00 00'))
ip = ip.replace('YY YY',"%04x"%checksum)
pcap_len = ip_len + getByteLength(eth_header)
hex_str = "%08x"%pcap_len
reverse_hex_str = hex_str[6:] + hex_str[4:6] + hex_str[2:4] + hex_str[:2]
pcaph = pcap_packet_header.replace('XX XX XX XX',reverse_hex_str)
pcaph = pcaph.replace('YY YY YY YY',reverse_hex_str)
bytestring = pcap_global_header + pcaph + eth_header + ip + udp + message
writeByteStringToFile(bytestring, pcapfile)
#Splits the string into a list of tokens every n characters
def splitN(str1,n):
return [str1[start:start+n] for start in range(0, len(str1), n)]
#Calculates and returns the IP checksum based on the given IP Header
def ip_checksum(iph):
#split into bytes
words = splitN(''.join(iph.split()),4)
csum = 0;
for word in words:
csum += int(word, base=16)
csum += (csum >> 16)
csum = csum & 0xFFFF ^ 0xFFFF
return csum
"""------------------------------------------"""
""" End of functions, execution starts here: """
"""------------------------------------------"""
if len(sys.argv) < 2:
    print('usage: pcapgen.py output_file')
exit(0)
generatePCAP(message,port,sys.argv[1])
</code></pre>
<p>The above code works for a single packet built from the payload <code>message</code>.</p>
<pre><code>message = ('01 01 00 08' #Foo Base Header
'01 02 00 00' #Foo Message (31 Bytes)
'00 00 12 30'
'00 00 12 31'
'00 00 12 32'
'00 00 12 33'
'00 00 12 34'
'D7 CD EF' #Foo flags
'00 00 12 35')
</code></pre>
<p>I want to add a second packet to the pcap file with payload</p>
<pre><code>message2 = ('f1 b1 a0 08' #
'01 02 00 00' #
'00 00 12 30'
'00 00 12 31'
'00 00 12 32'
'00 00 12 33'
'00 00 12 34'
'e7 CD EF' #
'00 00 12 35')
</code></pre>
<p>with a different timestamp (delayed by 1.52345 seconds from the first packet). In the Time column of the Wireshark viewer I must see 1.52345 for the second packet. I tried to change the part</p>
<pre><code>bytelist = bytestring.split()
bytes = binascii.a2b_hex(''.join(bytelist))
</code></pre>
<p>like <code>bytelist.append(bytelist)</code> and all. But in vain.Packet is not valid as mentioned by Wireshark.</p>
|
<python><pcap>
|
2024-04-02 11:23:42
| 1
| 4,378
|
Vinod
|
78,260,872
| 14,298,525
|
OpenTelemetry on Gunicorn and Falcon not showing spans properly
|
<p>I have an application running with a combination of Falcon and Gunicorn. I am trying to use OpenTelemetry to instrument it and send traces to Jaeger.</p>
<p>The following is my code:</p>
<p>pyproject.toml:</p>
<pre><code>opentelemetry-distro = {extras = ["otlp"], version = "0.44b0" }
opentelemetry-instrumentation = "0.44b0"
opentelemetry-instrumentation-falcon = "0.44b0"
</code></pre>
<p>gunicorn_conf.py:</p>
<pre><code>from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
def post_fork(server, worker):
from opentelemetry.instrumentation.auto_instrumentation import sitecustomize
server.log.info("Worker spawned (pid: %s)", worker.pid)
resource = Resource.create(attributes={
"service.name": "my-app"
})
trace.set_tracer_provider(TracerProvider(resource=resource))
span_processor = BatchSpanProcessor(
OTLPSpanExporter(endpoint=telemetry_endpoint, insecure=telemetry_insecure)
)
trace.get_tracer_provider().add_span_processor(span_processor)
</code></pre>
<p>And the launch command:</p>
<pre><code>OTEL_RESOURCE_ATTRIBUTES=service.name={app_name} OTEL_EXPORTER_OTLP_TRACES_ENDPOINT={telemetry_endpoint} OTEL_EXPORTER_OTLP_METRICS_ENDPOINT={telemetry_endpoint} OTEL_EXPORTER_OTLP_INSECURE={telemetry_insecure} OTEL_TRACES_EXPORTER=otlp OTEL_METRICS_EXPORTER=none opentelemetry-instrument gunicorn -c gunicorn_conf.py
</code></pre>
<p>The application runs and the traces appear in Jaeger, but they show up as a single span rather than multiple spans, even when there is a DB call or an external API call.</p>
<p><img src="https://i.sstatic.net/ZlKKd.png" alt="Attached image for reference" /></p>
|
<python><gunicorn><open-telemetry><otel><falcon>
|
2024-04-02 11:13:25
| 2
| 341
|
mifol68042
|
78,260,854
| 7,246,472
|
Python - custom type annotations for tree nodes and branches
|
<p>I have a class representation of a ternary tree, where every node has three child nodes, which are generated via different "branches", which are actually generating functions for the child nodes. The class has methods which need to process the nodes and branches, and I would like to create the correct type annotations for these.</p>
<p>Each node is a tuple of (positive) integers <code>(r, s)</code>, where <code>r > s >= 1</code>, and each "branch" is a callable, in this case, a lambda <code>lambda x, y: # -> returns (f(x), f(y))</code>, where <code>f</code> is the mathematical function associated with the branch - it produces integers. Branches take the unpacked args of a node and produce another node.</p>
<p>All nodes have the same form, and all three branches are of the same form but with different generating functions.</p>
<p>Currently, I have the following typing for the nodes and branches, using the <code>type</code> parameter syntax <a href="https://docs.python.org/3/reference/compound_stmts.html#generic-type-aliases" rel="nofollow noreferrer">introduced in 3.12</a>:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, Unpack
...
type TreeNode = tuple[int, int]
type TreeBranch = Callable[Unpack[TreeNode], TreeNode]
</code></pre>
<p>This seems correct, but I have a feeling I'm missing something. So this is my first question: given what I've described, are these type annotations correct?</p>
<p>Would the equivalent for 3.11 be?</p>
<pre class="lang-py prettyprint-override"><code>TreeNode = TypeVar('TreeNode', bound=tuple[int, int])
Branch = TypeVar('TreeBranch', bound=Callable[Unpack[TreeNode], TreeNode])
</code></pre>
<p>Second question: the pairs of integers <code>(r, s)</code> in each node have the property that <code>r > s >= 1</code>. Is there a way of indicating this via type annotations?</p>
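Regarding the second question: value constraints like <code>r > s >= 1</code> cannot be enforced by static type checkers. A common workaround (sketch; the metadata string is purely documentary and the constructor name is illustrative) is <code>Annotated</code> metadata plus a runtime-validating constructor:

```python
from typing import Annotated

# Type checkers ignore the metadata, but it documents the invariant, and
# third-party validators (e.g. annotated-types/pydantic) can consume it.
TreeNode = Annotated[tuple[int, int], "r > s >= 1"]

def make_node(r: int, s: int) -> tuple[int, int]:
    # Enforce the invariant at runtime, since typing cannot.
    if not r > s >= 1:
        raise ValueError(f"invalid node ({r}, {s}): need r > s >= 1")
    return (r, s)
```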
|
<python><python-typing>
|
2024-04-02 11:10:45
| 1
| 595
|
srm
|
78,260,613
| 188,331
|
After creating a Custom Tokenizer using HF Tokenizers library, how to create a model that fits the Tokenizer?
|
<p>I followed <a href="https://huggingface.co/learn/nlp-course/chapter6/8?fw=pt" rel="nofollow noreferrer">this tutorial</a> to create a custom Tokenizer based on <code>SentencePieceBPE</code> class, with a custom pre-tokenizer class. The newly trained Tokenizer was successfully trained with a dataset and saved on the HuggingFace platform.</p>
<p>I can load my custom Tokenizer class without a problem, using code like this:</p>
<pre><code>tokenizer = MyCustomTokenizerFast.from_pretrained('myusername/mytokenizer')
</code></pre>
<p>Specifically, let's take a pre-trained model as an example. Originally, my sequence-to-sequence pipeline uses a pre-trained model named <code>fnlp/bart-base-chinese</code> as tokenizer and model.</p>
<pre><code>from transformers import AutoTokenizer, BertTokenizer
checkpoint = 'fnlp/bart-base-chinese'
tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BartForConditionalGeneration.from_pretrained(checkpoint, output_attentions = True, output_hidden_states = True)
</code></pre>
<p>If I change the tokenizer to my custom Tokenizer class, the model has to be modified/re-trained as well.</p>
<pre><code>checkpoint = 'fnlp/bart-base-chinese'
tokenizer = MyCustomTokenizerFast.from_pretrained('myusername/mytokenizer')
model = BartForConditionalGeneration.from_pretrained(checkpoint, output_attentions = True, output_hidden_states = True) # this one has to change?!
</code></pre>
<p>In theory, a tokenizer tokenizes a corpus into a set of token IDs, and the model that uses the tokenizer understands that set of token IDs. If I change the tokenizer to my custom one, the model has to be changed/re-trained as well.</p>
<p>Is my concept correct? If yes, how should I re-build the model to make it "compatible" with my custom Tokenizer?</p>
<p>Thanks in advance.</p>
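This understanding is broadly correct: the model's input embedding matrix is indexed by the tokenizer's token IDs, so a different tokenizer generally means resizing (and usually fine-tuning) the embeddings — in transformers this is what <code>model.resize_token_embeddings(len(tokenizer))</code> adjusts. A plain-Python sketch of the idea, with no transformers dependency:

```python
# Conceptual sketch: the embedding table maps token IDs to vectors, so its
# first dimension must equal the tokenizer's vocabulary size. Newly added
# rows are effectively randomly initialised and benefit from fine-tuning.
old_vocab, new_vocab, dim = 4, 6, 3
embeddings = [[0.0] * dim for _ in range(old_vocab)]
# grow the table to match the new vocabulary size
embeddings.extend([[0.1] * dim for _ in range(new_vocab - old_vocab)])
```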
|
<python><huggingface-transformers><huggingface-tokenizers>
|
2024-04-02 10:28:20
| 1
| 54,395
|
Raptor
|
78,260,588
| 4,320,131
|
Deploying Django App onto Elastic Beanstalk (mysqlclient issue)
|
<p>Having a bit of a mare trying to deploy my <strong>Django app</strong> to <strong>Elasticbeanstalk AWS</strong></p>
<p>I'm getting issues related to the installation of mysql. See log files:</p>
<pre><code>----------------------------------------
/var/log/eb-engine.log
----------------------------------------
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [24 lines of output]
Trying pkg-config --exists mysqlclient
Command 'pkg-config --exists mysqlclient' returned non-zero exit status 1.
Trying pkg-config --exists mariadb
Command 'pkg-config --exists mariadb' returned non-zero exit status 1.
Trying pkg-config --exists libmariadb
Command 'pkg-config --exists libmariadb' returned non-zero exit status 1.
Traceback (most recent call last):
File "/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
main()
File "/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 130, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-0u1shkl5/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 325, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "/tmp/pip-build-env-0u1shkl5/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 295, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-0u1shkl5/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 155, in <module>
File "<string>", line 49, in get_config_posix
File "<string>", line 28, in find_package_name
Exception: Can not find valid pkg-config name.
Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually
[end of output]
</code></pre>
<p>My Python <strong>pip requirements</strong> file is:</p>
<pre><code>beautifulsoup4==4.12.3
bs4==0.0.2
Django==2.2
gunicorn==20.1.0
mysqlclient==2.2.4
numpy==1.24.4
pandas==2.0.3
python-dateutil==2.9.0.post0
pytz==2024.1
six==1.16.0
soupsieve==2.5
sqlparse==0.4.4
tzdata==2024.1
whitenoise==6.4.0
</code></pre>
<p>and I've also specified the <strong>SQL flags</strong> in an additional <code>.config</code> file in the <code>.ebextensions</code> folder (as well as in the AWS console itself)</p>
<pre><code>option_settings:
aws:elasticbeanstalk:application:environment:
MYSQLCLIENT_CFLAGS: "-I/usr/include/mysql"
MYSQLCLIENT_LDFLAGS: "-L/usr/lib64/mysql"
</code></pre>
<p>as well as <strong>yum package</strong> installation files in another:</p>
<pre><code>packages:
yum:
python3-devel: []
mariadb-devel.x86_64: []
mariadb.x86_64: []
gcc: []
</code></pre>
<p>Some guidance on how to move forward would be appreciated, as I've been stuck playing with different options for a while</p>
|
<python><amazon-web-services><amazon-elastic-beanstalk><libmysqlclient>
|
2024-04-02 10:23:27
| 0
| 1,807
|
William Baker Morrison
|
78,260,507
| 367,079
|
How to provide type hints in Python to accepts graphs with certain connections
|
<p>I'm working on an application, which can boil down to traversing different graphs. I have a set of shared functions that require that the graph adheres to a certain "shape" (i.e. has expected transitions between vertices). I want to somehow express it using type hints. Here is the simplest example of a graph:</p>
<pre><code>from __future__ import annotations  # allows the forward references to NodeB/NodeC below

class NodeA:
def go_to_B(self) -> NodeB:
return NodeB()
def go_to_C(self) -> NodeC:
return NodeC()
class NodeB:
def go_to_A(self) -> NodeA:
return NodeA()
class NodeC:
def go_to_A(self) -> NodeA:
return NodeA()
</code></pre>
<p>Now, let's say I have a function which expects that from a given node X I can go to node Y and back to X. I could use <code>Protocols</code> to express it:</p>
<pre><code>from typing import Protocol

class BGoer(Protocol):
def go_to_B(self) -> AGoer:
pass
class AGoer(Protocol):
def go_to_A(self) -> BGoer:
pass
class Traverser:
def traverse(self, start: BGoer) -> BGoer:
b: AGoer = start.go_to_B()
a: BGoer = b.go_to_A()
return a
</code></pre>
<p>However, using this approach, I'm losing information about the original types and cannot now go to C:</p>
<pre><code>def main() -> None:
a = NodeA()
t = Traverser()
a_after_traversing:BGoer = t.traverse(a)
a_after_traversing.go_to_C()
</code></pre>
<p>This gives error from mypy: <code>graph.py:37: error: "BGoer" has no attribute "go_to_C"; maybe "go_to_B"? [attr-defined]</code>.</p>
<p>While this code:</p>
<pre><code>def main() -> None:
a = NodeA()
t = Traverser()
a_after_traversing:NodeA = t.traverse(a)
a_after_traversing.go_to_C()
</code></pre>
<p>gives this error: <code>graph.py:35: error: Incompatible types in assignment (expression has type "BGoer", variable has type "NodeA") [assignment]</code></p>
<p>I have been trying to use generic protocols to somehow express this, but cannot make it work. How can I express the expected transitions in a graph, while maintain the original types as well?</p>
|
<python><mypy><python-typing><duck-typing>
|
2024-04-02 10:08:41
| 0
| 605
|
Filip
|
78,260,392
| 5,349,916
|
How to overload functions to handle Any-arguments?
|
<p>I have a function whose return type is sensitive to multiple arguments:
If a given predicate is strong enough to provide a type constraint,
the input values are similarly constrained (<code>T | None -> T</code> or <code>T -> R where R <: T</code>).
This is straightforward to type-hint if all types are known:</p>
<pre><code>from typing import Any, Callable, Iterable, Iterator, TypeGuard, TypeVar, overload
T = TypeVar("T")
R = TypeVar("R")
@overload # 1
def select(pred: None, values: Iterable[T | None]) -> Iterator[T]: ...
@overload # 2
def select(pred: Callable[[T], TypeGuard[R]], values: Iterable[T]) -> Iterator[R]: ...
@overload # 3
def select(pred: Callable[[T], Any], values: Iterable[T]) -> Iterator[T]: ...
</code></pre>
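For reference, a runtime implementation consistent with these overloads might look like this (a sketch; the overload-matching problem itself is purely a static-typing matter):

```python
from typing import Any, Callable, Iterable, Iterator, Optional, TypeVar

T = TypeVar("T")

def select(pred: Optional[Callable[[T], Any]], values: Iterable[T]) -> Iterator[T]:
    # pred=None drops None values; otherwise the predicate decides.
    if pred is None:
        return (v for v in values if v is not None)
    return (v for v in values if pred(v))
```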
<p>However, there is a problem if the argument types are not known:
If the predicate is <code>Any</code>, one would expect the least typing constraint and thus the same output type as the input values.<br />
Yet, available type checkers do not agree<sup>1</sup> with this:
MyPy loses the type almost completely to the overly generic <code>Iterator[Any]</code>, and Pyright matches the <code>None</code> special case and produces the over-constrained <code>Iterator[int]</code>.</p>
<p><strong>How do I correctly type-hint an <code>overload</code> in which <code>Any</code> can match special cases?</strong></p>
<p>Notably, reordering is not sufficient.
If I order overloads as 2-3-1 then <code>Any</code> does not provide an <code>R</code> and type checkers cannot fall back to <code>T</code>.
If I order overloads as 3-2-1 then <code>(T) -> Any</code> shadows <code>(T) -> TypeGuard[R]</code> completely.</p>
<hr />
<p><sup>1</sup> Given a prelude of</p>
<pre><code>from typing import Any, reveal_type
iitr: list[int | None] = [0, 1, 2, None, 3]
any_pred: Any = lambda val: not val # could be any unknown function
</code></pre>
<p>the type checkers I tested all fail in various ways when the unknown predicate is used:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>case</th>
<th>expected</th>
<th>MyPy</th>
<th>Pyright</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>select(None, iitr)</code></td>
<td><code>Iterator[int]</code></td>
<td><code>Iterator[int]</code></td>
<td><code>Iterator[int]</code></td>
</tr>
<tr>
<td><code>select(bool, iitr)</code></td>
<td><code>Iterator[int | None]</code></td>
<td><code>Iterator[int | None]</code></td>
<td><code>Iterator[int | None]</code></td>
</tr>
<tr>
<td><code>select(any_pred, iitr)</code></td>
<td><code>Iterator[int | None]</code></td>
<td><code>Iterator[Any]</code></td>
<td><code>Iterator[int]</code></td>
</tr>
</tbody>
</table></div>
|
<python><python-typing>
|
2024-04-02 09:48:54
| 1
| 53,360
|
MisterMiyagi
|
78,260,387
| 1,581,090
|
How to bind a socket to a multicast UDP port on Windows-10 with python?
|
<p>This is a python-related programming question that maybe cannot be answered at all. Maybe there is no solution.</p>
<p>But here is the setup: I am using Windows-10 on a hp laptop and a device in a local network sends out UDP packages for destination <code>239.0.0.4</code> and port <code>45004</code> (As can be confirmed via wireshark). That device is connected to a USB Ethernet connector named <code>Ethernet</code> with a static IP address <code>192.168.200.5</code> and subnetmask <code>255.255.255.0</code>.</p>
<p><a href="https://i.sstatic.net/u79tI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u79tI.png" alt="enter image description here" /></a></p>
<p>I am trying the following python (3.10.11) code to create a socket in order to read the packages from that device (the exact same python code works without any issues on Ubuntu on the same laptop and the same device):</p>
<pre><code>import socket
import struct
mcast_grp = "239.0.0.3"
port = 45004
adaptor = "192.168.200.5"
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((mcast_grp, port)) # FAILING LINE
mreq = struct.pack("4s4s", socket.inet_aton(mcast_grp), socket.inet_aton(adaptor))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
</code></pre>
<p>But that python code fails on the indicated line with the error</p>
<pre><code> sock.bind((mcast_grp, port))
OSError: [WinError 10049] The requested address is not valid in its context
</code></pre>
<p>Is there ANY way to rewrite this Python code so it also works on Windows?</p>
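For what it's worth, the usual workaround (a sketch, not verified on every Windows version) is to bind to the wildcard address instead of the group address, since Windows rejects binding to a multicast address with WinError 10049:

```python
import socket
import struct

def open_multicast_socket(group: str, port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Bind to the wildcard address: works on both Windows and Linux, whereas
    # binding to the 239.x group address fails on Windows.
    sock.bind(("", port))
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
    try:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    except OSError:
        # Joining can fail on hosts without a multicast route; in that case
        # pass the adaptor IP (e.g. "192.168.200.5") instead of "0.0.0.0".
        pass
    return sock
```

Usage would then be <code>sock = open_multicast_socket("239.0.0.3", 45004)</code> followed by <code>sock.recvfrom(65535)</code>.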
|
<python><websocket><windows-10>
|
2024-04-02 09:48:30
| 2
| 45,023
|
Alex
|
78,260,375
| 2,351,983
|
AzureOpenAI returning openai.NotFoundError
|
<p>I am trying to create embeddings as described here: <a href="https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/ai-services/openai/how-to/embeddings.md" rel="nofollow noreferrer">https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/ai-services/openai/how-to/embeddings.md</a></p>
<p>So this:</p>
<pre><code>import os
from openai import AzureOpenAI
client = AzureOpenAI(
api_key = os.getenv("AZURE_OPENAI_API_KEY"),
api_version = "2023-05-15",
azure_endpoint =os.getenv("AZURE_OPENAI_ENDPOINT")
)
response = client.embeddings.create(
input = "Your text string goes here",
model= "text-embedding-ada-002"
)
print(response.model_dump_json(indent=2))
</code></pre>
<p>I get the error <code>openai.NotFoundError: Error code: 404 - {'error': {'code': 'DeploymentNotFound', 'message': 'The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.'}}</code></p>
<p>I am using <code>openai</code> version <code>1.14.3</code>. I copied the code exactly from the link, the environment variables are set up correctly and this worked fine with <code>langchain_openai.AzureChatOpenAI</code>. The deployment is set up in the environment variable <code>AZURE_OPENAI_API_DEPLOYMENT_NAME</code>.</p>
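One thing worth checking (an assumption based on the <code>DeploymentNotFound</code> error, not something confirmed by the question): with Azure OpenAI, the <code>model</code> argument must be the <em>deployment name</em> created in the Azure portal, which is not necessarily the base model name. A sketch of building the call arguments from the env var the question mentions (the fallback name here is a placeholder):

```python
import os

# "my-embedding-deployment" is a hypothetical placeholder; the real value is
# the deployment name configured on the Azure resource.
deployment = os.getenv("AZURE_OPENAI_API_DEPLOYMENT_NAME", "my-embedding-deployment")
embed_kwargs = {"input": "Your text string goes here", "model": deployment}
# response = client.embeddings.create(**embed_kwargs)
```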
|
<python><azure><openai-api><azure-openai>
|
2024-04-02 09:46:36
| 1
| 356
|
luanpo1234
|
78,260,128
| 5,022,847
|
Django-cte gives: 'QuerySet' object has no attribute 'with_cte'
|
<p>I have records in below format:</p>
<pre><code>| id | name | created |
-----------------------------------------------
|1 | A |2024-04-10T02:49:47.327583-07:00|
|2 | A |2024-04-01T02:49:47.327583-07:00|
|3 | A |2024-03-01T02:49:47.327583-07:00|
|4 | A |2024-02-01T02:49:47.327583-07:00|
|5 | B |2024-02-01T02:49:47.327583-07:00|
</code></pre>
<p>Model:</p>
<pre><code>class Model1(model.Models):
name = models.CharField(max_length=100)
created = models.DateTimeField(auto_now_add=True)
</code></pre>
<p>I want to group the records in Django by the month of the <code>created</code> field and get the latest record from each month.</p>
<p>Expected output:</p>
<pre><code>| id | name | created |
-----------------------------------------------
|1 | A |2024-04-10T02:49:47.327583-07:00|
|3 | A |2024-03-01T02:49:47.327583-07:00|
|4 | A |2024-02-01T02:49:47.327583-07:00|
</code></pre>
<p>I am using <a href="https://pypi.org/project/django-cte/" rel="nofollow noreferrer">django-cte</a> to perform the above action</p>
<pre><code>from django.db.models import F, Window
from django.db.models.functions import DenseRank, ExtractMonth
from django_cte import With

m = Model1.objects.get(id=1)
cte = With(
    Model1.objects.filter(name=m.name).annotate(
        rank=Window(
            expression=DenseRank(),
            partition_by=[ExtractMonth("created")],
            order_by=F("created").desc(),
        )
    )
)
qs = cte.queryset().with_cte(cte).filter(rank=1)
</code></pre>
<p>But the above give error:</p>
<pre><code>qs = cte.queryset().with_cte(cte).filter(rank=1)
^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'QuerySet' object has no attribute 'with_cte'
</code></pre>
<p>Please help!</p>
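For reference, the intended result corresponds to a plain SQL window query. A self-contained sqlite3 sketch of the equivalent SQL (illustrative only — the real query runs through the Django ORM against your database backend):

```python
import sqlite3

# In-memory stand-in for the Model1 table, using the rows from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE model1 (id INTEGER, name TEXT, created TEXT);
INSERT INTO model1 VALUES
 (1,'A','2024-04-10'),(2,'A','2024-04-01'),
 (3,'A','2024-03-01'),(4,'A','2024-02-01'),
 (5,'B','2024-02-01');
""")
# Rank rows per month (newest first) and keep only the top-ranked row.
rows = conn.execute("""
SELECT id, name, created FROM (
  SELECT *, DENSE_RANK() OVER (
      PARTITION BY strftime('%m', created) ORDER BY created DESC) AS rnk
  FROM model1 WHERE name = 'A')
WHERE rnk = 1 ORDER BY created DESC
""").fetchall()
print(rows)
```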
|
<python><django><common-table-expression><django-orm><django-annotate>
|
2024-04-02 09:04:36
| 2
| 1,430
|
TechSavy
|
78,260,093
| 984,621
|
Python - No module named 'psycopg2' even though it is installed
|
<p>I have installed <code>psycopg2</code> for PostgreSQL on OSX (<code>Requirement already satisfied: psycopg2 in ./venv/lib/python3.12/site-packages (2.9.9)</code>). When I run <code>pip list</code>, I get the following:</p>
<pre><code>Package Version
---------- -------
pip 24.0
psycopg2 2.9.9
setuptools 69.2.0
wheel 0.43.0
</code></pre>
<p>Python version (<code>python -V</code>) is <code>Python 3.12.2</code>.</p>
<p>When I run my python script, I get the following output:</p>
<pre><code>Traceback (most recent call last):
File "/opt/homebrew/bin/scrapy", line 8, in <module>
sys.exit(execute())
^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/scrapy/cmdline.py", line 160, in execute
cmd.crawler_process = CrawlerProcess(settings)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/scrapy/crawler.py", line 357, in __init__
super().__init__(settings)
File "/opt/homebrew/lib/python3.11/site-packages/scrapy/crawler.py", line 227, in __init__
self.spider_loader = self._get_spider_loader(settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/scrapy/crawler.py", line 221, in _get_spider_loader
return loader_cls.from_settings(settings.frozencopy())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/scrapy/spiderloader.py", line 79, in from_settings
return cls(settings)
^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/scrapy/spiderloader.py", line 34, in __init__
self._load_all_spiders()
File "/opt/homebrew/lib/python3.11/site-packages/scrapy/spiderloader.py", line 63, in _load_all_spiders
for module in walk_modules(name):
^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/scrapy/utils/misc.py", line 106, in walk_modules
submod = import_module(fullpath)
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.8/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/myname/pythondev/project/spiders/app.py", line 10, in <module>
from ..db import Database as DB
File "/Users/myname/pythondev/project/db.py", line 3, in <module>
import psycopg2
ModuleNotFoundError: No module named 'psycopg2'
</code></pre>
<p>The Python script apparently cannot find <code>psycopg2</code>, despite it being installed. I noticed in the error output that there's a reference to Python 3.11, but my Python version is 3.12.</p>
<p>Can this be the problem? I don't see anywhere in the code any reference to version <code>3.11</code>.</p>
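It may well be: the traceback shows Homebrew's Python 3.11 (<code>/opt/homebrew/lib/python3.11/...</code>) running scrapy, while <code>psycopg2</code> lives in the 3.12 virtualenv. A quick sketch to confirm which interpreter actually executes:

```python
import sys

# If this prints the Homebrew 3.11 path rather than ./venv/bin/python,
# scrapy is running outside the virtualenv where psycopg2 is installed.
print(sys.executable)
print(sys.version_info[:2])
```

Installing into the interpreter that actually runs scrapy (<code>python -m pip install psycopg2</code>) or invoking scrapy through the venv (<code>./venv/bin/scrapy</code>) avoids the mismatch.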
|
<python><python-3.x><macos><pip>
|
2024-04-02 08:57:39
| 0
| 48,763
|
user984621
|
78,259,971
| 5,379,182
|
Celery - cannot stop test worker within shutdown timeout
|
<p>When my celery worker is ready I register a custom timer</p>
<pre><code>@worker_ready.connect
def worker_ready_handler(sender: Consumer, **kwargs):
logger.info(f"Worker process ready: {sender.pid}")
timer: Timer = sender.timer
timer.call_repeatedly(
60, do_something, ()
)
</code></pre>
<p>When I test it using the <code>celery_worker</code> fixture provided by pytest-celery, I get the error<br />
"RuntimeError: Worker thread failed to exit within the allocated timeout. Consider raising <code>shutdown_timeout</code> if your tasks take longer to execute."</p>
<p>This error only occurs when I have <em>two</em> tests using the <code>celery_worker</code> fixture, I.e.</p>
<pre><code>def test_some_worker_test(celery_app, celery_worker):
# test a specific task on a worker thread
def test_worker_does_something_periodically(mocker, celery_app, celery_worker):
# test that the timer is created and doing something periodically...
# THIS ONE THROWS THE ERROR
</code></pre>
<p>I have already tried to increase the shutdown timeout, doesn't help.</p>
<p>When I run the above tests individually, they both work fine.</p>
<p>Could it be that the timer is somehow keeping the worker thread alive and hence the shutdown cannot take place?</p>
|
<python><pytest><celery>
|
2024-04-02 08:36:11
| 0
| 3,003
|
tenticon
|
78,259,837
| 15,648,070
|
Speech to text pipelining
|
<p>Quite new to using models; I'm trying to use the <a href="https://huggingface.co/ivrit-ai/whisper-large-v2-tuned" rel="nofollow noreferrer">ivrit-ai/whisper-large-v2-tuned model</a></p>
<p>with a 'long-form' audio file as they advise here: <a href="https://huggingface.co/ivrit-ai/whisper-large-v2-tuned#long-form-transcription" rel="nofollow noreferrer">Long-Form Transcription instructions</a></p>
<p>I'm getting the following error</p>
<pre><code>raise ValueError(
ValueError: Multiple languages detected when trying to predict the most likely target
language for transcription.
It is currently not supported to transcribe to different languages in a single batch.
Please make sure to either force a single language by passing `language='...'` or make sure all input audio is of the same language.`
</code></pre>
<p>My code</p>
<pre><code>import torch
from transformers import pipeline
device = "cuda:0" if torch.cuda.is_available() else "cpu"
pipe = pipeline(
"automatic-speech-recognition",
model="ivrit-ai/whisper-large-v2-tuned",
chunk_length_s=30,
device=device,
)
audio_file = './audio/sales_call.mp3'
with open(audio_file, 'rb') as file:
audio = file.read()
prediction = pipe(audio, batch_size=8, return_timestamps=True)["chunks"]
with open('transcription.txt', 'w', encoding='utf-8') as file:
for item in prediction:
file.write(f"{item['text']},{item['timestamp']}\n")
</code></pre>
|
<python><pipeline><huggingface-transformers>
|
2024-04-02 08:10:26
| 1
| 636
|
Eyal Solomon
|
78,259,726
| 22,414,610
|
How to get closest geo points in Elasticsearch with Python (Flask)
|
<p>I am following the <a href="https://www.elastic.co/search-labs/tutorials/search-tutorial/full-text-search" rel="nofollow noreferrer">official tutorial</a> of Elasticsearch using Python with Flask. I implemented the full text matching search, but I wanted to extend the functionality to find nearest locations. I found again from the official documentation <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-geo-distance-query.html" rel="nofollow noreferrer">geo-distance query</a>. I tried the query, but it doesn't work on my side. Here is my creation of the Search object:</p>
<pre><code>from pprint import pprint
from elasticsearch import Elasticsearch
from decouple import config
from managers.home_manager import HomeManager
from schemas.response.home_response import HomeResponseSchema, PinSchema
mapping = {
"mappings": {
"properties": {"pin": {"properties": {"location": {"type": "geo_point"}}}}
}
}
class Search:
def __init__(self) -> None:
self.es = Elasticsearch(
api_key=config("ELASTIC_API_KEY"), cloud_id=config("ELASTIC_CLOUD_ID")
)
client_info = self.es.info()
print("Connected to Elasticsearch!")
pprint(client_info.body)
def create_index(self):
self.es.indices.delete(index="real_estate_homes", ignore_unavailable=True)
self.es.indices.create(index="real_estate_homes", body=mapping)
def insert_document(self, document):
return self.es.index(index="real_estate_homes", body=document)
def insert_documents(self, documents):
operations = []
for document in documents:
operations.append({"index": {"_index": "real_estate_homes"}})
operations.append(document)
return self.es.bulk(operations=operations)
def reindex_homes(self):
self.create_index()
homes = HomeManager.select_all_homes()
pins = []
for home in homes:
pin = {
"location": {"lat": float(home.latitude), "lon": float(home.longitude)}
}
pins.append(pin)
return self.insert_documents(pins)
def search(self, **query_args):
return self.es.search(index="real_estate_homes", **query_args)
es = Search()
</code></pre>
<p>I want to mention that I tried to dump JSON files using the PinSchema object.</p>
<p>Here is the code with the queries:</p>
<pre><code>from flask_restful import Resource
from managers.home_manager import HomeManager
from search import es
from schemas.response.home_response import HomeResponseSchema
class ElasticResource(Resource):
def get(self, home_id):
print(home_id)
home = HomeManager.select_home_by_id(home_id)
geo_query = es.search(
body={
"query": {
"bool": {
"must": {"match_all": {}},
"filter": {
"geo_distance": {
"distance": "2000km",
"pin.location": {"lat": 43, "lon": 27},
}
},
}
}
}
)
print(geo_query)
result = es.search(
query={
"bool": {
"must": {"match": {"city": {"query": home.city}}},
"must_not": {"match": {"id": {"query": home.id}}},
}
}
)
print(result["hits"]["hits"])
if len(result["hits"]["hits"]) == 0:
return "No results."
suggested_homes = [
suggested_home["_source"] for suggested_home in result["hits"]["hits"]
]
resp_schema = HomeResponseSchema()
return resp_schema.dump(suggested_homes, many=True), 200
</code></pre>
<p>Can anybody help me to find the problem and receive matches?</p>
<p>I tried to find solutions on Stack Overflow, but they didn't work either.</p>
|
<python><elasticsearch><flask><geolocation>
|
2024-04-02 07:52:36
| 1
| 424
|
Mr. Terminix
|
78,259,423
| 988,279
|
Find substring with same start and end characters
|
<p>I have to detect a substring which starts and ends with the same characters.</p>
<pre><code>import re
text = "/image/123.__W500__.png"
print(re.findall('__.*?__', text))
-> ['__W500__']
</code></pre>
<p>How can I get rid of the search pattern?</p>
<p>-> ['W500']</p>
<p>Thanks.</p>
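In case it helps to sketch the usual approach: a capturing group makes <code>findall</code> return only the inner text instead of the full match:

```python
import re

text = "/image/123.__W500__.png"
# parentheses capture just the part between the delimiters
print(re.findall(r"__(.*?)__", text))  # ['W500']
```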
|
<python><python-3.x>
|
2024-04-02 06:44:20
| 3
| 522
|
saromba
|
78,259,328
| 4,399,016
|
Downloading an Excel file from an URL using Python
|
<p>The URL for the Excel file is this:
<a href="https://www.gso.gov.vn/wp-content/uploads/2024/03/IIP-ENG.xlsx" rel="nofollow noreferrer">https://www.gso.gov.vn/wp-content/uploads/2024/03/IIP-ENG.xlsx</a></p>
<p>I have this code:</p>
<pre><code>from datetime import datetime, timedelta
url = 'https://www.gso.gov.vn/wp-content/uploads/' + datetime.strftime(datetime.now() - timedelta(30), '%y') +'/' + datetime.strftime(datetime.now() - timedelta(30), '%m') + '/IIP-ENG.xlsx'
import requests
resp = requests.get(url, verify=False)
output = open('IIP.xlsx', 'wb')
output.write(resp.content)
output.close()
</code></pre>
<p>I can see a file being downloaded but I can't open it in Office Excel. The file is corrupted.</p>
<pre><code>resp
</code></pre>
<blockquote>
<p><Response [404]></p>
</blockquote>
<p>I also can't open it using this code:</p>
<pre><code>import pandas as pd
df = pd.read_excel(open('IIP.xlsx', 'rb'),sheet_name=0, engine='openpyxl')
print(df.head(5))
</code></pre>
<blockquote>
<p>BadZipFile error. The file is not a Zip file.</p>
</blockquote>
<p>How to fix this ?</p>
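The 404 response means an HTML error page was saved under an <code>.xlsx</code> name, which is why both Excel and openpyxl reject it. One likely culprit (an assumption based on the URL shown in the question) is the year directive: <code>%y</code> produces a two-digit year, while the working URL uses four digits:

```python
from datetime import datetime

d = datetime(2024, 3, 15)
print(d.strftime('%y'))  # '24'   -> builds .../24/03/...   (404)
print(d.strftime('%Y'))  # '2024' -> builds .../2024/03/...
```

Checking <code>resp.status_code</code> (or calling <code>resp.raise_for_status()</code>) before writing the file would also prevent saving error pages to disk.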
|
<python><web-scraping><python-requests><openpyxl>
|
2024-04-02 06:24:40
| 3
| 680
|
prashanth manohar
|
78,258,962
| 132,438
|
I have exported an HLL sketch from Snowflake, how can I estimate its count?
|
<p>Snowflake exports its HLL sketches in dense and sparse format, depending on the size of the sketch:</p>
<ul>
<li><a href="https://docs.snowflake.com/en/user-guide/querying-approximate-cardinality#dense-format" rel="nofollow noreferrer">https://docs.snowflake.com/en/user-guide/querying-approximate-cardinality#dense-format</a></li>
</ul>
<pre><code>{
"version" : 3,
"precision" : 12,
"dense" : [3,3,3,3,5,3,4,3,5,6,2,4,4,7,5,6,6,3,2,2,3,2,4,5,5,5,2,5,5,3,6,1,4,2,2,4,4,5,2,5,...,4,6,3]
}
{
"version" : 3,
"precision" : 12,
"sparse" : {
"indices": [1131,1241,1256,1864,2579,2699,3730],
"maxLzCounts":[2,4,2,1,3,2,1]
}
}
</code></pre>
<p>How can I transform these objects generated by <code>HLL_EXPORT()</code> into estimate counts with Python?</p>
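A sketch of the standard HyperLogLog estimator applied to the dense register array (the registers are max leading-zero counts). Snowflake's exact bias corrections are not documented in the question, so treat this as an approximation to validate against known counts:

```python
import math

def hll_estimate(precision: int, dense: list[int]) -> float:
    m = 1 << precision               # number of registers (4096 for precision 12)
    assert len(dense) == m
    alpha = 0.7213 / (1 + 1.079 / m)  # standard alpha_m approximation
    raw = alpha * m * m / sum(2.0 ** -r for r in dense)
    zeros = dense.count(0)
    if raw <= 2.5 * m and zeros:
        return m * math.log(m / zeros)  # linear-counting correction for small ranges
    return raw
```

A sparse export can be expanded first: start from <code>[0] * (1 &lt;&lt; precision)</code> and set <code>dense[i] = c</code> for each pair taken from <code>indices</code>/<code>maxLzCounts</code>.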
|
<python><snowflake-cloud-data-platform><hyperloglog>
|
2024-04-02 04:32:48
| 1
| 59,753
|
Felipe Hoffa
|
78,258,960
| 1,362,485
|
Does pyodbc support multiprocessing?
|
<p>The intent of the code below is to perform several database updates in parallel using pyodbc and mysql. Question: are the connections opened independently and in parallel? Will this code work or I need to take a different approach? I tried to investigate pyodbc and multithreading and didn't find much.</p>
<pre><code>import multiprocessing, pyodbc

def process_table(table):
    conn = pyodbc.connect(.....)
    cursor = conn.cursor()
    for i in range(0, 10):
        sql = 'update ' + table + ' set col = ' + str(i)
        cursor.execute(sql)
        conn.commit()
    cursor.close()
    conn.close()

if __name__ == '__main__':
    tables = ['tab1', 'tab2', 'tab3', 'tab4', 'tab5', 'tab6']
    for table in tables:
        p = multiprocessing.Process(target=process_table, args=(table,))
        p.start()
</code></pre>
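In general each process should open its own connection (DB connection objects are not safely shareable across process boundaries), and the worker function must be defined before the processes are started. A minimal stand-in sketch of the pattern, with the pyodbc call left as a comment since it depends on your DSN:

```python
import multiprocessing
import os

def process_table(table: str) -> str:
    # conn = pyodbc.connect(...)  # open one connection per process here
    # ... run the updates, commit, close ...
    return f"{table}:{os.getpid()}"

if __name__ == "__main__":
    tables = ["tab1", "tab2", "tab3"]
    with multiprocessing.Pool(processes=3) as pool:
        # each worker process runs process_table with its own connection
        print(pool.map(process_table, tables))
```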
|
<python><python-multiprocessing><python-multithreading><pyodbc>
|
2024-04-02 04:32:18
| 1
| 1,207
|
ps0604
|
78,258,943
| 16,525,263
|
How to read the configuration from one python file in another python file
|
<p>I have a .py file as below:</p>
<pre><code>dict_nova = {
"export_nova_rosterlight":{
"metadata_resource_id": "export_nova_rosterlight",
"metadata_product_version" : "1.0",
"List_Fields": [
"data_export",
"userID",
"deviceName",
"osVersion",
"emailAddress",
"model",
"serialNumber"]
},
"export_nova_export":{
"metadata_resource_id":"Export_nova_export",
"metadata_product_version" : "1.0",
"List_Fields": [ "data_export",
"userMail",
"userID",
"deviceName",
"emailAddress"]
}
}
</code></pre>
<p>In another .py file I have a function to validate some columns based on some conditions as below:</p>
<pre><code>from pyspark.sql import functions as F

def validate_counts(df):
distinct_mail = df.agg(F.countDistinct('emailAddress')).collect()[0][0]
distinctdevice = df.agg(F.countDistinct('deviceName')).collect()[0][0]
if 10000 <= distinct_mail <= 20000 and 5000 <= distinctdevice <= 10000:
result = 'valid'
else:
result = 'invalid'
return result
</code></pre>
<p>But I need to validate the columns using the <code>"export_nova_rosterlight"</code> key defined in the other .py file. How do I do this?</p>
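A plain import is enough to read a module-level dict from another .py file. Sketch — the config file is written out here only to make the example self-contained, and <code>nova_config</code> is an assumed module name (in the real project it is the name of your config file):

```python
import importlib
import pathlib
import sys
import tempfile

# Stand-in for the existing config .py file:
cfg_dir = tempfile.mkdtemp()
pathlib.Path(cfg_dir, "nova_config.py").write_text(
    'dict_nova = {"export_nova_rosterlight": '
    '{"List_Fields": ["emailAddress", "deviceName"]}}'
)

sys.path.insert(0, cfg_dir)
nova_config = importlib.import_module("nova_config")  # i.e. `import nova_config`
fields = nova_config.dict_nova["export_nova_rosterlight"]["List_Fields"]
```

<code>validate_counts</code> could then iterate over <code>fields</code> instead of hard-coding the column names.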
|
<python><pyspark>
|
2024-04-02 04:26:57
| 1
| 434
|
user175025
|
78,258,602
| 1,413,856
|
Python - Understanding TKInter Frames
|
<p>I’m trying to understand how the Notebook and Frames work in Python.</p>
<p>As far as I understand, if I want a tabbed interface, I create a Notebook widget and add Frame widgets. Here is a simple working sample:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter
from tkinter import ttk
# Main Window
window = tkinter.Tk()
# Notebook
notebook = ttk.Notebook(window)
notebook.pack(expand=True)
# Frames
apple = ttk.Frame(notebook)
notebook.add(apple, text='Apple')
alabel = ttk.Label(apple, text='Apples')
alabel.pack()
banana = ttk.Frame(notebook)
notebook.add(banana, text='Banana')
blabel = ttk.Label(banana, text='Bananas')
blabel.pack()
tkinter.mainloop()
</code></pre>
<p>Any document I can find states that the <code>Frame</code> widget requires a parent, which is the <code>notebook</code> variable above. However:</p>
<ul>
<li>If I leave the constructor empty, it works just as well.</li>
<li>It seems redundant, since I then add the frame to the notebook anyway.</li>
</ul>
<p>I can’t find anything on the Internet other than the syntax which always includes the parent, and examples. The question is, what is the point of the parent parameter in the <code>Frame</code> constructor?</p>
<p>Python 3.12 on a Macintosh, if that’s helpful.</p>
|
<python><tkinter>
|
2024-04-02 01:57:19
| 1
| 16,921
|
Manngo
|
78,258,461
| 264,755
|
Rolling count/tally between a list of values in a creative way
|
<p>I have 2 DataFrames. The first has a list of dicts of values. The second has no data, but has a list of columns that are integers.</p>
<pre><code>data1 = [{'Start': 51, 'End': 55},{'Start':24, 'End':37},{'Start':89,'End':122},{'Start':44, 'End':31}, {'Start':77, 'End':50}, {'Start':10, 'End':9}]
dfm1 = pd.DataFrame.from_dict(data1)
data2 = [-40, -30, -20, -10, 0, 10, 20, 30, 40]
dfm2 = pd.DataFrame([], columns=data2)
</code></pre>
<p>Let's assume that data1 has 500 data points. My goal is that I want a tally of ranges of the differences in <code>dfm1</code> based upon a variable sliding window size and I want that tally to exist in <code>dfm2</code>.</p>
<p>I want to create a sliding window of calculation of the difference between <code>data1[index + window] - data1[index]</code>. Then, based upon that difference between the values at the 2 indexes, I want to add a tally to dfm2 if it is less than or equal to the <code>dfm1</code> column value, but not less than the column-1 value. We would assume that, in my example, column <code>-40</code> would never ever have a tally greater than 0.</p>
<p>My desired output for <code>dfm2</code>, given the <code>dfm1</code> values above, tallying Start values with a window of size 2, would be:</p>
<pre><code>[0, 1, 1, 0, 0, 0, 1, 0, 1]
</code></pre>
<p>This would be performing 51-89 = -38, 24-44 = -20, 89-77=12, 44-10=34 for a window size of 2. A window size of 3 would be 51-44, 24-77, and 89-10...</p>
<p>The cheap and easy way is obviously for me to iterate and create tallies. But I know that <code>DataFrame</code> has some methods like <code>rolling</code> which may work really well for this.</p>
<p>What if I wanted to do this same rolling tally, but rather than subtracting Start from Start, what if I wanted to subtract an index's Start from its same End, and then perform that rolling tally based upon the difference from <code>window_size</code> away?</p>
<p>What if I didn't preset the column names in <code>dfm2</code>, and I let them be auto added as new tallies are discovered, say in ranges of 10?</p>
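One vectorised route is a sketch along these lines — not the only way. It follows the worked example's arithmetic (<code>data1[index] - data1[index + window]</code>, i.e. <code>shift(-window)</code>) and uses <code>pd.cut</code> rather than <code>rolling</code>, since <code>rolling</code> does not naturally bin cross-row differences into columns:

```python
import pandas as pd

data1 = [{'Start': 51, 'End': 55}, {'Start': 24, 'End': 37}, {'Start': 89, 'End': 122},
         {'Start': 44, 'End': 31}, {'Start': 77, 'End': 50}, {'Start': 10, 'End': 9}]
dfm1 = pd.DataFrame.from_dict(data1)
cols = [-40, -30, -20, -10, 0, 10, 20, 30, 40]

window = 2
# data1[index] - data1[index + window]; the tail rows with no partner drop out
diffs = (dfm1['Start'] - dfm1['Start'].shift(-window)).dropna()
# each diff lands in the interval (col-1, col]: the first column it is <= to
binned = pd.cut(diffs, bins=[float('-inf')] + cols, labels=cols)
tally = binned.value_counts().sort_index()
print(tally.tolist())  # -> [0, 1, 1, 0, 0, 0, 1, 0, 1]
```

Subtracting an index's End from its Start instead is the same pipeline with a different <code>diffs</code> expression, and auto-discovered decade columns would fall out of letting <code>pd.cut</code> build the bins from the observed range instead of presetting <code>cols</code>.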
|
<python><pandas><dataframe><rolling-computation>
|
2024-04-02 00:49:34
| 1
| 1,705
|
jasonmclose
|
78,258,396
| 3,118,190
|
How to compile a pydantic BaseModel using mypyc?
|
<p>In an environment where we have installed:</p>
<pre><code>pip install -U pydantic mypy
</code></pre>
<p>Given the example <code>test_basemodel.py</code>:</p>
<pre><code>from pydantic import BaseModel
class A(BaseModel):
pass
</code></pre>
<p>We run the command: <code> mypyc test_basemodel.py</code>
and see:</p>
<pre><code>running build_ext
building 'test_basemodel' extension
x86_64-linux-gnu-gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/home/guy/repos/DeepracerService/src/local/processor/.venv/lib/python3.12/site-packages/mypyc/lib-rt -I/home/guy/repos/DeepracerService/src/local/processor/.venv/include -I/usr/include/python3.12 -c build/__native.c -o build/temp.linux-x86_64-cpython-312/build/__native.o -O3 -g1 -Werror -Wno-unused-function -Wno-unused-label -Wno-unreachable-code -Wno-unused-variable -Wno-unused-command-line-argument -Wno-unknown-warning-option -Wno-unused-but-set-variable -Wno-ignored-optimization-argument -Wno-cpp
build/__native.c: In function ‘CPyDef___top_level__’:
build/__native.c:130:16: error: ‘CPyModule_pydantic___main’ undeclared (first use in this function); did you mean ‘CPyModule_pydantic’?
130 | cpy_r_r9 = CPyModule_pydantic___main;
| ^~~~~~~~~~~~~~~~~~~~~~~~~
| CPyModule_pydantic
build/__native.c:130:16: note: each undeclared identifier is reported only once for each function it appears in
build/__native.c: At top level:
cc1: note: unrecognized command-line option ‘-Wno-ignored-optimization-argument’ may have been intended to silence earlier diagnostics
cc1: note: unrecognized command-line option ‘-Wno-unknown-warning-option’ may have been intended to silence earlier diagnostics
cc1: note: unrecognized command-line option ‘-Wno-unused-command-line-argument’ may have been intended to silence earlier diagnostics
</code></pre>
<p>According to <a href="https://mypyc.readthedocs.io/en/latest/native_classes.html#inheritance" rel="nofollow noreferrer">https://mypyc.readthedocs.io/en/latest/native_classes.html#inheritance</a></p>
<blockquote>
<p>Most non-native classes can’t be used as base classes</p>
</blockquote>
<p>Is it possible to compile pydantic BaseModels using mypyc? Pydantic works so well with mypy(not mypyc) and mypy is so strongly tied with mypyc that I can't believe the relationship isn't explicitly documented, let alone unsupported.</p>
|
<python><mypy><pydantic><pydantic-v2><mypyc>
|
2024-04-02 00:23:42
| 1
| 648
|
GDub
|
78,258,291
| 214,526
|
Does accessing a class variable by class in instance methods cause problems?
|
<p>I have seen multiple posts on this - some are quite old and some are not so old. So, I thought of asking the question again as pylint issues warnings on some of the accepted answers from previous posts like <a href="https://stackoverflow.com/a/70422556/214526">https://stackoverflow.com/a/70422556/214526</a></p>
<pre><code>class Foo:
_THRESHOLD_VAL: float = 0.8
def __init__(self, value: float):
self._val = value
def method1(self):
if self._val < self.__class__._THRESHOLD_VAL:
print("less than threshold")
else:
print("more than or equal to threshold")
</code></pre>
<p>The default way to access a class variable is <code>self.<var_name></code> but are there reasons not to use <code>self.__class__.<var_name></code>?</p>
<p>An advantage of using <code>self.__class__.<var_name></code> is that it explicitly says that a class variable is being used. However, pylint issues the following warning:</p>
<p><code>warning: Access to a protected member _THRESHOLD_VAL of a client class</code></p>
<p>What are the actual risks, and reasons not to use <code>self.__class__.<var_name></code>? How should I avoid or prevent the warning from arising?</p>
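The behavioural difference between the two access forms only shows up once an instance attribute shadows the class attribute; a small self-contained sketch of that (names mirror the question's):

```python
class Foo:
    _THRESHOLD_VAL: float = 0.8

f = Foo()
assert f._THRESHOLD_VAL == 0.8          # no instance attribute yet: falls back to the class

f._THRESHOLD_VAL = 0.5                  # creates an instance attribute that shadows it
assert f._THRESHOLD_VAL == 0.5          # instance lookup now wins
assert type(f)._THRESHOLD_VAL == 0.8    # class-level lookup bypasses the shadow
```

So <code>self._THRESHOLD_VAL</code> reads the class value only until something assigns a same-named instance attribute, while <code>self.__class__._THRESHOLD_VAL</code> always reads (and writes) the class-level value. Whether a given pylint version flags one spelling and not another is worth verifying locally rather than assuming.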
|
<python><python-3.x>
|
2024-04-01 23:38:42
| 1
| 911
|
soumeng78
|
78,257,914
| 2,279,796
|
Showing a string variable in pyspark sql
|
<p>How do I use a string variable in a Spark SQL query?
I have a table called</p>
<pre><code>delta.`first/merchant/loaddate=04-02-2024`
</code></pre>
<p>and the last part of my table name marks the latest data.</p>
<p>I want to somehow build the latest table name and pass it to Spark SQL, so it will always load the latest data.</p>
<p>So I did the following</p>
<pre><code>from datetime import datetime
a = datetime.today().strftime('%Y-%m-%d')
print(a)
tableName = "delta." + "`first\merchant\loaddate=" + a + '`'
</code></pre>
<p>then I do the following</p>
<pre><code>df = spark.sql(
'''
select
*
from
{0}
'''.format(tableName)
)
</code></pre>
<p>But it does not seem to show the table when I call <code>df.show()</code>.</p>
|
<python><sql><pyspark>
|
2024-04-01 21:34:40
| 1
| 549
|
Fernando Martinez
|
78,257,849
| 4,535,717
|
SqlAlchemy bindparam fails on mssql (but works on mysql)
|
<p>I am having trouble making the following work on an azure db, while it works on mysql db.
Explanation why this is happening and help with resolving it would be appreciated.</p>
<pre><code> with engine.begin() as conn:
print(f"Running with {engine.name=}")
sql_q = text("SELECT count(*) FROM ticker WHERE id in :ids")
sql_q.bindparams(bindparam('ids', expanding=True))
result = conn.execute(
sql_q, ids=[300, 400]
).scalar_one()
print(f"{engine.name=}: {result=}")
</code></pre>
<pre><code> Running with engine.name='mysql'
engine.name='mysql': result=2`
</code></pre>
<p>The generated mysql query is: <code>'SELECT count(*) FROM ticker WHERE id in %(ids)s'</code>
with parameters: <code>{'ids': [300, 400]}</code></p>
<pre><code># with mssql engine (azure db)
Running with engine.name='mssql'
# ...
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ("A TVP's rows must be Sequence objects.", 'HY000')
[SQL: SELECT count(*) FROM ticker WHERE id in ?]
[parameters: ([300, 400],)]
(Background on this error at: https://sqlalche.me/e/14/f405)
</code></pre>
<ul>
<li>Python 3.8.10</li>
<li>SQLAlchemy==1.4.39</li>
<li>pyodbc==4.0.39</li>
</ul>
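One backend-independent bug worth noting in the snippet: <code>bindparams()</code> returns a new statement rather than mutating <code>sql_q</code> in place, so the <code>expanding=True</code> flag is discarded as written. A sketch of the corrected pattern against an in-memory SQLite engine (illustrative schema; dict-style parameters also survive the 1.4 → 2.0 transition, unlike keyword arguments):

```python
from sqlalchemy import bindparam, create_engine, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE ticker (id INTEGER)"))
    conn.execute(text("INSERT INTO ticker (id) VALUES (300), (400), (500)"))

    # bindparams() returns a NEW TextClause: keep the return value
    sql_q = text("SELECT count(*) FROM ticker WHERE id IN :ids").bindparams(
        bindparam("ids", expanding=True)
    )
    result = conn.execute(sql_q, {"ids": [300, 400]}).scalar_one()
```

With the expanding bind actually attached, SQLAlchemy rewrites the IN clause into one placeholder per list element, which may change how the mssql dialect renders the parameters compared with the TVP path in the traceback.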
|
<python><sql-server><sqlalchemy><azure-sql-database>
|
2024-04-01 21:16:07
| 1
| 941
|
dgeorgiev
|
78,257,767
| 4,447,853
|
Save an Arrow Dataset or Pandas DataFrame as .bin and .idx files
|
<p>I'm working with Hugging Face's <code>datasets</code> library to load a DataFrame using the <code>from_pandas()</code> method and save it as a Hugging Face dataset using <code>save_to_disk()</code>. The saved dataset consists of a folder with files <code>data-00000-of-00001.arrow</code>, <code>dataset_info.json</code>, and <code>state.json</code>.</p>
<p>However, I'm facing compatibility issues when trying to use this dataset with Megatron-LM (an accelerated version by Hugging Face, not NVIDIA's). Megatron-LM expects the dataset in <code>.idx</code> and <code>.bin</code> formats. The specific code causing issues can be found <a href="https://github.com/huggingface/Megatron-LM/blob/100b522bb8044d98413398f9e71563af15b83325/megatron/data/indexed_dataset.py#L61" rel="nofollow noreferrer">here</a>.</p>
<p>Is there a method to convert a pandas DataFrame or a Hugging Face dataset into the required <code>.bin</code> and <code>.idx</code> formats? My DataFrame has multiple columns, if that matters, and contains around 100k records.</p>
<p>Edit:
I understand this is only necessary when using <a href="https://huggingface.co/docs/accelerate/main/en/usage_guides/megatron_lm#advanced-features-to-leverage-writing-custom-train-step-and-megatron-lm-indexed-datasets" rel="nofollow noreferrer">Megatron-LM indexed datasets</a>/<code>MegatronLMDummyDataLoader</code>, and "normal" datasets can potentially be used, but it is recommended when datasets grow to extremely large sizes which will potentially happen in the future.</p>
|
<python><pandas><artificial-intelligence><pyarrow><binning>
|
2024-04-01 20:56:35
| 0
| 527
|
chriscrutt
|
78,257,700
| 9,343,043
|
Add gridlines and y=x line to seaborn lmplot
|
<p>I have the following code below to plot a regression plot with respect to categorical variables. The variable of interest in this question is "<code>Site</code>".</p>
<pre><code>children_with_PNC_plot_site = sns.lmplot(x="Z Score, Post Central WM Right for All Patients vs. All Controls, Ages 6-18",
y="Z Score, Post Central WM Right for Children Patients vs. All Controls",
hue = "Site", palette = "hsv", data=df, fit_reg=False)
</code></pre>
<p><a href="https://i.sstatic.net/1b9hh.png" rel="noreferrer"><img src="https://i.sstatic.net/1b9hh.png" alt="Site Identities stripped from output above" /></a></p>
<p>Is there a way to add both horizontal and vertical gridlines to the plot below, along with the line <code>y=x</code>?</p>
<p>When I try to do any of the above, I always get a new plot with either the gridlines only, or the <code>y=x</code> line.</p>
<p>Thanks!</p>
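<code>lmplot</code> returns a <code>FacetGrid</code>, so the trick is to draw on its existing Axes (e.g. <code>children_with_PNC_plot_site.ax</code>) instead of letting a new figure be created. Shown below on a bare matplotlib Axes for brevity; the same two calls apply to the grid's <code>.ax</code>:

```python
import matplotlib
matplotlib.use("Agg")  # headless-safe backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter([1, 2, 3], [1.2, 1.9, 3.3])       # stand-in for the lmplot scatter
ax.grid(True)                                 # horizontal + vertical gridlines
ax.axline((0, 0), slope=1, linestyle="--")    # the y = x reference line
```

<code>axline</code> keeps the line infinite in both directions regardless of the data limits, which is usually what is wanted for a y = x reference.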
|
<python><matplotlib><seaborn><regression><lmplot>
|
2024-04-01 20:42:23
| 2
| 871
|
florence-y
|
78,257,695
| 20,999,380
|
break out of while loop if a certain time passes or any key is pressed (record keypress)
|
<p>I need a piece of code that waits for a user input OR an elapsed time. If a user presses any key, it records the key pressed and uses that as a variable in the next function before breaking the while loop. If 3 seconds have elapsed and they have not pressed anything, I need it to activate a different function and then break out of the while loop.</p>
<p>Within the context of my program, I need something like:</p>
<pre><code>import time
import datetime
import tkinter as tk
def bind_keypress(func):
root.bind("<KeyPress>", func)
def function_a():
print("not important")
# etc.
def function_b():
if entry_label.get() in ([a list of keys]):
print("whatever")
elif entry_label.get() in ([another list of keys]):
print("okay")
else:
print("nope")
# etc.
start_countdown = time.time()
while True:
wait_length = time.time() - start_countdown
if wait_length >= 3:
function_a()
break
elif [any key is pressed]:
bind_keypress(function_b)
break
</code></pre>
<p>I have tried to simplify here. I realize that the stuff in my functions could just go directly into the while loop, but that is not what I need. Assume I need the code structured roughly the way it is. I am looking for a solution to the "elif [any key is pressed]" part.</p>
<p>Windows 10, Python 3.1.1</p>
<p>EDIT to reopen:
I have been through the similar questions cited (e.g., <a href="https://stackoverflow.com/questions/42028768/how-do-i-let-tkinter-wait-for-a-keypress-before-continuing">here</a> and <a href="https://stackoverflow.com/questions/24072790/how-to-detect-key-presses">here</a>). These do not solve my problem. I already know how to detect keypresses (including specific keys), and I know how to wait for a keypress. I specifically need something that waits up to three seconds for a user input and then continues down one of two paths accordingly (path_a if they do not respond fast enough and path_b if they give an input in time).</p>
<p>Once a given path is followed, a new round should begin, so I need to be able to exit out of the input/timeout loop and continue in the same GUI. Root.quit and root.update both kill the program, even if I try to use some sort of nested root.mainloop.</p>
|
<python>
|
2024-04-01 20:41:28
| 1
| 345
|
grace.cutler
|
78,257,537
| 557,576
|
Catch "request body too large" error in python on azure functions
|
<p>Azure storage queue has a message size limit of 64KB. I want to catch this error when it occurs in azure functions. For that I use the following code:</p>
<pre class="lang-py prettyprint-override"><code>@app.blob_trigger(arg_name='inputFile', path="blob",
connection='queue_con_str')
@app.queue_output(arg_name='outputInfo', queue_name='q-input',
connection='queue_con_str')
def process_input(inputFile: func.InputStream, outputInfo: func.Out[typing.List[str]]):
data = json.loads(inputFile.read())
o_data = [json.dumps(v) for v in data]
try:
outputInfo.set(o_data)
except Exception as e:
print("EXCEPTION ", e)
print(type(e))
</code></pre>
<p>When the message is large, I get an exception but Python cannot catch the error. I want to split the message if the error occurs. Instead I get a
<code>System.Private.CoreLib: Exception while executing function: Functions.process_input. </code> error from <code>.NET</code>. How can I catch the exception in Python?</p>
<p>Logged messages</p>
<pre><code>Executed 'Functions.process_input' (Failed, Id=ae07d776-ecec-4544-96bb-2b598ef4033d, Duration=24506ms)
System.Private.CoreLib: Exception while executing function: Functions.process_input. Azure.Storage.Queues: The request body is too large and exceeds the maximum permissible limit.
RequestId:d326abfd-d003-0071-116b-8483bb000000
Time:2024-04-01T19:33:02.1219320Z
Status: 413 (The request body is too large and exceeds the maximum permissible limit.)
ErrorCode: RequestBodyTooLarge
Additional Information:
MaxLimit: 65536
Content:
<?xml version="1.0" encoding="utf-8"?><Error><Code>RequestBodyTooLarge</Code><Message>The request body is too large and exceeds the maximum permissible limit.
RequestId:d326abfd-d003-0071-116b-84845b000000
Time:2024-04-01T19:33:02.1219320Z</Message><MaxLimit>65536</MaxLimit></Error>
Headers:
Server: Windows-Azure-Queue/1.0,Microsoft-HTTPAPI/2.0
x-ms-request-id: d326abfd-d003-0071-116b-84845b000000
x-ms-version: 2018-11-09
x-ms-error-code: RequestBodyTooLarge
Date: Mon, 01 Apr 2024 19:33:01 GMT
Content-Length: 286
Content-Type: application/xml
</code></pre>
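Because the 413 surfaces in the .NET host after the Python invocation hands over its output bindings, one hedged workaround is to never give the binding an oversized message in the first place: measure and batch the records yourself before calling <code>outputInfo.set()</code>. A sketch (the 65536-byte limit comes from the log above; the headroom factor is an assumption to cover possible base64 encoding of message bodies):

```python
import json

# limit from the error log, with headroom for possible base64 inflation (~4/3)
MAX_MESSAGE_BYTES = 65536 * 3 // 4

def to_queue_messages(items, max_bytes=MAX_MESSAGE_BYTES):
    """Pack items into JSON-array message strings, each under max_bytes when UTF-8 encoded.

    Note: a single record larger than max_bytes still yields one oversized
    message; such records need a different strategy (e.g. blob offloading).
    """
    messages, batch = [], []
    for item in items:
        candidate = batch + [item]
        if batch and len(json.dumps(candidate).encode("utf-8")) > max_bytes:
            messages.append(json.dumps(batch))
            batch = [item]
        else:
            batch = candidate
    if batch:
        messages.append(json.dumps(batch))
    return messages
```

The resulting list of strings can then be passed to <code>outputInfo.set()</code>, with each string comfortably under the queue's per-message cap, and the downstream consumer unpacking each message with <code>json.loads</code>.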
|
<python><azure><error-handling><azure-functions><azure-storage-queues>
|
2024-04-01 20:02:30
| 0
| 832
|
Shishir Pandey
|
78,257,311
| 16,845
|
Type annotations: pathlib.Path vs importlib.resources.abc.Traversable
|
<p>When loading file resources that are bundled inside my distribution package, I use <code>importlib.resources.files</code>. When loading files from disk, I use <code>pathlib.Path</code>.</p>
<p>Sometimes I want to write a function that will take either:</p>
<pre><code>from importlib.resources.abc import Traversable
from pathlib import Path
def process_file(file_to_process: Traversable | Path) -> None: ...
</code></pre>
<p>It feels cumbersome to annotate all file-processing functions with <code>Traversable | Path</code>. I could define and use my own union type, but that feels like something that Python might already have built-in that I'm missing. I only require the basic subset of both types: open / close / read / write, etc, and nothing like touching permissions or using OS-specific details.</p>
<p>What is the correct type to use for a file resource that can be loaded from a distribution package via <code>importlib</code> <em>or</em> from a drive via <code>pathlib</code>?</p>
<p>I'm currently using Python 3.11 but will upgrade over time, so all 3.11+ answers are welcome.</p>
|
<python><python-typing>
|
2024-04-01 19:11:46
| 2
| 1,216
|
Charles Nicholson
|
78,257,277
| 5,837,992
|
How do I Dump MySql Tables One At a Time Using Python
|
<p>I am using MySQL 5.6.21 on a Windows server and need to migrate from one machine to another. Been trying to use the MySQL Workbench Export Tables and I keep getting an error 22 on write at some point during the export process.</p>
<p>I am using Jupyter for simplicity.</p>
<p>I want to try to dump each table individually and see which one is causing the issue.</p>
<p>So first thing I did was to write a test using one table, as follows:</p>
<pre><code>import subprocess
tablename="templedger"
gettablestring="mysqldump -u user1 -pB57$52 db_main %s > G:\Databasebackup\dbmain\%s.sql" % (tablename,tablename)
subprocess.Popen(gettablestring, shell=True)
print("Done")
</code></pre>
<p>The word "Done" immediately came back, but no dump</p>
<p>I then tried the following</p>
<pre><code>!{gettablestring}
</code></pre>
<p>And got "Access Denied"</p>
<p>How do I code this up so that I can execute the dump command from within a Jupyter cell?</p>
<p>Thanks</p>
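"Done" coming back immediately is expected: <code>Popen</code> starts the process and returns without waiting. A sketch that waits for completion, writes the dump from Python instead of relying on shell <code>&gt;</code> redirection (which also sidesteps quoting trouble from the <code>$</code> in the password), and surfaces stderr on failure — credentials and paths below are the question's placeholders:

```python
import subprocess

def run_to_file(cmd, outpath):
    """Run cmd (an argument list, no shell), stream stdout to outpath,
    wait for completion, and raise with stderr on failure."""
    with open(outpath, "w") as f:
        result = subprocess.run(cmd, stdout=f, stderr=subprocess.PIPE, text=True)
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())

# tablename = "templedger"
# run_to_file(["mysqldump", "-u", "user1", "-pB57$52", "db_main", tablename],
#             rf"G:\Databasebackup\dbmain\{tablename}.sql")
```

Iterating this over the output of <code>SHOW TABLES</code> would then dump one table at a time and raise on the first table that fails, which is exactly the isolation the question is after.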
|
<python><mysql><database-backups>
|
2024-04-01 19:02:33
| 1
| 1,980
|
Stumbling Through Data Science
|
78,257,262
| 6,328,506
|
How to Identify Numbers from WAV Files for Comparison?
|
<p>I have an HTTP request that returns an audio file where Google's voice speaks a sequence of 6 numbers in Portuguese, ranging from 0 to 9, separated by silence. I have saved WAV files in my development environment with the audio of each of the possible numbers. Now, I want to extract a segment of raw data from each of these WAV files to compare it with the audio chunk received from the HTTP request.</p>
<p>For example, in the following Python code, different binary segments are used for comparison, but I need to know how to select representative segments from my WAV files. How can I achieve this?</p>
<pre class="lang-py prettyprint-override"><code>def discaptcher(file_content):
# legacy code, other audio
footprints_to_numbers = {
b"\x72\x00\x77\x00\x7a\x00\x4a\x00\x58\x00\x7a\x00\x55\x00\x53\x00": "0", # noqa
b"\x86\x00\x7f\x00\x8c\x00\x7f\x00\x71\x00\x62\x00\x52\x00\x52\x00": "1", # noqa
b"\x15\x00\x1e\x00\x1b\x00\x05\x00\x16\x00\x1b\x00\x20\x00\x17\x00": "2", # noqa
b"\x8e\xff\x85\xff\x8c\xff\x85\xff\x79\xff\xb6\xff\xbd\xff\xaa\xff": "3", # noqa
b"\xfd\xff\x0b\x00\x0f\x00\xf6\xff\xd7\xff\xcf\xff\xbb\xff\xb2\xff": "4", # noqa
b"\x87\x00\x85\x00\x7c\x00\x77\x00\x7b\x00\x81\x00\x79\x00\x7f\x00": "5", # noqa
b"\x65\xff\x87\xff\xb1\xff\xa3\xff\x6a\xff\x20\xff\x26\xff\x2c\xff": "6", # noqa
b"\xb9\xfe\xbb\xfe\xae\xfe\xa6\xfe\xa1\xfe\xb8\xfe\xd2\xfe\xfb\xfe": "7", # noqa
b"\x5b\xff\x66\xff\x90\xff\x85\xff\x76\xff\x75\xff\x79\xff\x75\xff": "8", # noqa
b"\x16\x01\x1e\x01\x29\x01\x13\x01\x0b\x01\x05\x01\x06\x01\xf8\x00": "9",
} # noqa
captcha_dict = {}
captcha = ""
for snd, lt in footprints_to_numbers.items():
pos = [m.start() for m in regex.finditer(regex.escape(snd), file_content)]
for i in pos:
captcha_dict[i] = lt
for key in sorted(captcha_dict.keys()):
captcha += captcha_dict[key]
return captcha
</code></pre>
<p>I tried, after the initial HTTP request, to split the received audio into chunks using the split_on_silence method from pydub and then chunk.export('filename.wav', format='wav'), but the comparison between two files representing the same number doesn't work. I also attempted to generate MD5 hashes for the WAV files, but in this case, the comparison also failed.</p>
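Exact byte slices and MD5 hashes fail because two renderings of the same spoken digit are never sample-for-sample identical. Comparing decoded sample arrays by normalised cross-correlation is far more tolerant; a sketch on synthetic signals (in practice <code>chunk</code> and the template arrays would come from your WAV files decoded to sample arrays, e.g. with the <code>wave</code> module plus <code>numpy.frombuffer</code>):

```python
import numpy as np

def best_match(chunk, templates):
    """Return the label whose template correlates best with the chunk."""
    def score(a, b):
        n = min(len(a), len(b))
        a = a[:n] - a[:n].mean()   # remove DC offset
        b = b[:n] - b[:n].mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom else 0.0
    return max(templates, key=lambda label: score(chunk, templates[label]))

# synthetic stand-ins for two per-digit recordings
t = np.linspace(0, 1, 8000, endpoint=False)
templates = {"0": np.sin(2 * np.pi * 440 * t), "1": np.sin(2 * np.pi * 660 * t)}
chunk = 0.7 * np.sin(2 * np.pi * 440 * t) + 0.01  # quieter, offset copy of "0"
```

Normalisation makes the score insensitive to volume and offset, which is exactly what byte-level comparison and hashing cannot tolerate; a minimum-score threshold would guard against matching pure noise.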
|
<python><python-3.x><audio>
|
2024-04-01 18:58:21
| 0
| 416
|
Kafka4PresidentNow
|
78,257,192
| 2,910,704
|
Python Django ModelViewSet implementation with SAML ACS
|
<p>In the current system with the legacy IAM, we have implemented a class inherited from <code>ModelViewSet</code> with login and logout functions. In the legacy IAM, it is not compulsory to obtain <code>name_id</code> and <code>session_index</code> to logout. Therefore, we can bypass <code>acs</code> (a.k.a. assertion_consumer_service) to obtain these information and go straight to the logout.</p>
<p>Now, a new IAM system is deployed and we need to extend the current implementation to support both login and logout (along with acs). <code>name_id</code> and <code>session_index</code> shall be provided in <code>LogoutRequest</code>. Given we have different set of URLs for</p>
<ol>
<li>login/logout: <a href="https://example.com/saml2/account/%5Blogin%7Clogout%5D" rel="nofollow noreferrer">https://example.com/saml2/account/[login|logout]</a></li>
<li>acs: <a href="https://example.com/saml2/sso/acs" rel="nofollow noreferrer">https://example.com/saml2/sso/acs</a></li>
</ol>
<p>How can we update the following code to support the callback from acs so that we can save the <code>name_id</code> and <code>session_index</code>?</p>
<p><strong>urls.py</strong></p>
<pre><code>router = DefaultRouter()
router.register("saml2/account", Saml2AccountView, basename="account")
urlpatterns = [
url("", include(router.urls)),
]
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>class Saml2AccountView(viewsets.ModelViewSet):
@action(detail=False, methods=['get'])
def login(self, request, *args, **kwargs):
# implement the login function
@action(detail=False, methods=['get'])
def logout(self, request, *args, **kwargs):
# implement the logout function
</code></pre>
<p>NOTE: We are using <a href="https://pypi.org/project/django-saml2-auth/" rel="nofollow noreferrer">https://pypi.org/project/django-saml2-auth/</a> for the SAML implementation with the login/logout.</p>
|
<python><django><saml><django-saml2-auth>
|
2024-04-01 18:41:43
| 1
| 708
|
chesschi
|
78,257,175
| 2,791,346
|
Preprocess image for better barcode read results
|
<p>I am trying to read barcodes from images. Unfortunately, sometimes images are of low quality. I tried to preprocess the image so that the <code>Pyzbar</code> library would return better results.</p>
<p>I tried the following:</p>
<p>resizing</p>
<pre><code>def resize(_image, _scalar):
x, y = _image.size
return _image.resize((int(round(x*_scalar)), int(round(y*_scalar))))
</code></pre>
<p>blurring/sharpening</p>
<pre><code>from PIL import ImageEnhance
def blur(_image, _sharpness):
return ImageEnhance.Sharpness(_image).enhance(_sharpness)
</code></pre>
<p>Binarisation</p>
<pre><code>def binarisation(_image, factor=127):
_img = np.array(_image)
_img = grey(_img)
ret,th1 = cv2.threshold(_img,factor,255,cv2.THRESH_BINARY)
return th1
</code></pre>
<p>Gaussian binarisation</p>
<pre><code>def gaussian_binar(_image, t1=11, t2=2):
_img = np.array(_image)
gray_image = grey(_img)
return cv2.adaptiveThreshold(gray_image,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,t1,t2)
</code></pre>
<p>Otsu binarisation</p>
<pre><code>def otsu(_image, win_size=5):
_img = np.array(_image)
gray_image = grey(_img)
_blur = cv2.GaussianBlur(gray_image,(win_size,win_size),0)
ret3,th3 = cv2.threshold(_blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
return th3
</code></pre>
<p>Clahe</p>
<pre><code>from PIL import ImageOps
def normalise_CLAHE(_image):
_img = np.array(_image)
cvim = grey(_img)
clahe = cv2.createCLAHE()
clahe_im = clahe.apply(cvim)
return clahe_im
</code></pre>
<p>with erosion and dilatation</p>
<pre><code>def erode(_image, _iteration=1, _size=3):
_img = np.array(_image)
_img = grey(_img)
_kernel = np.ones((_size, _size), np.uint8)
return cv2.erode(_img, _kernel, iterations=_iteration)
def dilate(_image, _iteration=1, _size=3):
_img = np.array(_image)
_img = grey(_img)
_kernel = np.ones((_size, _size), np.uint8)
return cv2.dilate(_img, _kernel, iterations=_iteration)
</code></pre>
<p>and of course with the combination of those</p>
<pre><code>import os
class ContinueI(Exception):
pass
continue_i = ContinueI()
strategies = {}
images_solved = {}
directory = 'test_images'
for filename in os.listdir(directory):
try:
file_name=os.path.join(directory, filename)
print(f'___________*** {file_name} ***___________')
image = Image.open(file_name)
find = search_for_codes(image, 'Original', file_name)
if find:
raise continue_i
img = image.copy()
for scalar in [0.5, 1, 2]:
img1 = resize(img, scalar)
find = search_for_codes(img1, f'Scaled {scalar}', file_name)
if find:
raise continue_i
for sharpness in [1, 0.1, 0.5, 1.5, 2]:
img2 = blur(img1, sharpness)
find = search_for_codes(img2, f'Scaled {scalar}, blured {sharpness}', file_name)
if find:
raise continue_i
for bin in [50, 150]:
img3 = binarisation(img2, bin)
find = search_for_codes(img3, f'Scaled {scalar}, blured {sharpness}, bin {bin}', file_name)
if find:
raise continue_i
# for mor in [3,7,15,21]:
# img36 = morpholog(img3, mor)
# search_for_codes(img36, f'Scaled {scalar}, blured {sharpness}, bin {bin}, mor {mor}')
for er in [1,3,5]:
img37 = erode(img3, 1, er)
find = search_for_codes(img37, f'Scaled {scalar}, blured {sharpness}, bin {bin}, er {er}', file_name)
if find:
raise continue_i
for dil in [1,3,5]:
img38 = dilate(img3, 1, dil)
find = search_for_codes(img38, f'Scaled {scalar}, blured {sharpness}, bin {bin}, dil {dil}', file_name)
if find:
raise continue_i
for er_dil in [1,3,5]:
img39 = dilate(img3, 1, er_dil)
img39 = erode(img39, 1, er_dil)
find = search_for_codes(img39, f'Scaled {scalar}, blured {sharpness}, bin {bin}, dil {er_dil} + er {er_dil}', file_name)
if find:
raise continue_i
for gau in [3, 5, 7, 11,13, 15, 17]:
img4 = gaussian_binar(img2, gau, 2)
search_for_codes(img4, f'Scaled {scalar}, blured {sharpness}, gau {gau}')
for ots in [3, 5]:
img5 = otsu(img2, ots)
find = search_for_codes(img5, f'Scaled {scalar}, blured {sharpness}, ots {ots}', file_name)
if find:
raise continue_i
for mor in [3,7,15,21]:
img6 = morpholog(img2, mor)
find = search_for_codes(img6, f'Scaled {scalar}, blured {sharpness}, mor {mor}', file_name)
if find:
raise continue_i
for er in [1,3,5]:
img7 = erode(img2, 1, er)
find = search_for_codes(img7, f'Scaled {scalar}, blured {sharpness}, er {er}', file_name)
if find:
raise continue_i
for dil in [1,3,5]:
img8 = dilate(img2, 1, dil)
find = search_for_codes(img8, f'Scaled {scalar}, blured {sharpness}, dil {dil}', file_name)
if find:
raise continue_i
for er_dil in [1,3,5]:
img9 = dilate(img2, 1, er_dil)
img9 = erode(img9, 1, er_dil)
find = search_for_codes(img9, f'Scaled {scalar}, blured {sharpness}, dil {er_dil} + er {er_dil}', file_name)
if find:
raise continue_i
img7 = normalise_CLAHE(img2)
find = search_for_codes(img7, f'Scaled {scalar}, blured {sharpness}, clahe', file_name)
if find:
raise continue_i
except ContinueI:
continue
print(strategies)
</code></pre>
<p>But for some images, even though they looked fine to my eye, the library could not read them, no matter what I did.</p>
<p>An example is this image:</p>
<p><a href="https://i.sstatic.net/AY8HY.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AY8HY.jpg" alt="enter image description here" /></a>
(the white rectangle is for privacy reasons - only here at StackOverflow)</p>
<p>Can anyone suggest something else to do with those images? Any other approach for preprocessing images?</p>
|
<python><opencv><image-processing><barcode><pyzbar>
|
2024-04-01 18:35:35
| 0
| 8,760
|
Marko Zadravec
|
78,256,967
| 2,074,830
|
How to compare each element of tensor and record result
|
<p>I'm trying to check each element of a tensor to determine which of the points I specify it is closest to, and then associate the element with that point somehow.
The data is like a table: the X and Y axes are image pixel coordinates and the values are depth values in millimeters (the <code>depth_data</code> variable).</p>
<p>My implementation currently is like this:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import torch
</code></pre>
<pre><code>def depth_clustering(num_points, depth_data):
max = torch.max(depth_data)
min = torch.min(depth_data)
cluster_centers = create_cluster_centers(num_points, min, max)
for y, line in enumerate(depth_data):
for x, depth_val in enumerate(line):
closest_cluster_center = None
for cluster_center in cluster_centers:
if closest_cluster_center is None:
closest_cluster_center = cluster_center
continue
diff = torch.abs(torch.sub(depth_val, cluster_center.value))
if torch.lt(diff, torch.abs(depth_val - closest_cluster_center.value)):
closest_cluster_center = cluster_center
closest_cluster_center.add_and_adjust(torch.tensor([x, y, depth_val]))
return cluster_centers
</code></pre>
<pre><code>class DepthValueCluster:
def __init__(self, value, init_capacity=100):
self.value = value
self.values = torch.zeros(init_capacity, 3)
self.count = 0
self.init_capacity = init_capacity
def add(self, value):
if self.values.size(0) < self.count + 1:
new_array = torch.zeros((self.init_capacity, 3))
self.values = torch.cat((self.values, new_array), 0)
self.values[self.count] = value
self.count += 1
def add_and_adjust(self, value):
self.add(value)
self.move_cluster()
def move_cluster(self):
new = torch.mean(self.values, -2)
self.value = new[2]
</code></pre>
<pre><code>def create_cluster_centers(num_points, min, max):
diff = max - min
cluster_range = diff / num_points
clusters = list()
count = cluster_range / 2
for i in range(num_points):
clusters.append(DepthValueCluster(count.detach().clone(), 1000))
count = count + cluster_range
return clusters
</code></pre>
<pre><code>if __name__ == '__main__':
depth_data = torch.tensor([[1150., 1150., 1155., 2041., 2041., 2041.],
[1153., 1153., 1155., 2048., 2048., 2048.],
[1150., 1150., 1155., 2048., 2048., 0],
[890., 893., 0, 0, 0., 0],
[889., 889., 892., 2560., 0, 0],
[889., 889., 892., 2549., 2549., 0]])
clusters = depth_clustering(4, depth_data)
for cluster in clusters:
print("Cluster val: {0}, size: {1}".format(cluster.value, cluster.count))
</code></pre>
<p><code>cluster_centers</code> is a list of Python objects each has a center value I assign and a tensor, to which I record points that are closest to it.</p>
<p>The <code>add()</code> method is like a list implementation, which uses a tensor and grows it (by 1000) when it is full.
This is ridiculously slow: it takes about 4 minutes to run over 300K samples on GPU, and is somewhat faster on CPU. So it's obvious I'm doing everything incorrectly, but I wasn't able to find any guidance on how to solve this the PyTorch way.</p>
<p>I want this to return some data structure that has those cluster centers in it (say X,Y,value) and depth values that were found to be closest to each of the centers.</p>
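The slowness comes from the three nested Python loops; the closest-centre search collapses to a single broadcasted <code>argmin</code>, which is the idiomatic PyTorch shape for this. A sketch (centre values as a 1-D tensor; updating the centres afterwards can then use the returned labels with reductions like <code>torch.bincount</code> instead of per-point appends):

```python
import torch

def assign_clusters(depth_data, centers):
    """Vectorised nearest-centre assignment over every pixel at once."""
    vals = depth_data.reshape(-1, 1)               # (N, 1) depth values
    dists = (vals - centers.reshape(1, -1)).abs()  # (N, K) |pixel - centre|
    labels = dists.argmin(dim=1)                   # index of closest centre
    return labels.reshape(depth_data.shape)

# illustrative data, not the question's full grid
depth = torch.tensor([[1150., 2041.], [890., 2560.]])
centers = torch.tensor([900., 1100., 2000., 2500.])
labels = assign_clusters(depth, centers)
```

With the labels tensor in hand, the per-cluster point lists are just boolean masks (<code>depth[labels == k]</code>), and the mean-depth centre update is one <code>mean()</code> per mask rather than incremental tensor concatenation.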
|
<python><pytorch>
|
2024-04-01 17:44:38
| 0
| 334
|
Zmur
|
78,256,905
| 3,078,502
|
Do Python packages have to be built in the same environment as the project (source) files?
|
<p>I am working through the official <a href="https://packaging.python.org/en/latest/tutorials/packaging-projects/" rel="nofollow noreferrer">Python Packaging Tutorial</a>. I use <code>conda</code> environments, but the question should be the same for <code>venv</code> virtual environments.</p>
<p>The tutorial presents a very simple example package with one module and no dependencies. The module contains a function which merely adds one to the passed number:</p>
<pre class="lang-py prettyprint-override"><code>def add_one(number):
return number + 1
</code></pre>
<p>I created a new module which mimics <code>add_one</code>, except it uses NumPy to add an array of ones to the passed array:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def add_ones(array):
return array + np.ones(array.shape)
</code></pre>
<p>I then created a <code>conda</code> environment called <code>packaging</code> and installed <code>build</code> and <code>twine</code>. I was able to successfully package the example package using two build backends (Hatchling and Flit). I was also able to upload the package to TestPyPI. I was then able to <code>pip install</code> the package from PyPI. If I install it to an environment that is missing NumPy, the import of course fails, but so long as I install it to an environment that does have NumPy, the import works and the function runs successfully.</p>
<p>My question is whether this behavior is reliable? For this simple example, <code>build</code> works from an environment that is missing the package's dependencies, such as my <code>packaging</code> environment which is missing NumPy.</p>
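<p>A note on why this is reliable: <code>python -m build</code> creates an isolated environment containing only what <code>[build-system].requires</code> lists (the backend itself), and it never imports your modules; runtime dependencies are recorded as static metadata and resolved at <em>install</em> time by pip. A sketch of the relevant sections (project name and version are illustrative):</p>

```toml
# Sketch of a pyproject.toml; only the backend is needed at build time.
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "example_package"   # illustrative
version = "0.0.1"
# Recorded as metadata in the wheel; pip resolves it when installing.
dependencies = ["numpy"]
```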
|
<python><dependencies><packaging>
|
2024-04-01 17:30:24
| 2
| 609
|
Lee Hachadoorian
|
78,256,836
| 7,796,833
|
SQLAlchemy: Automatically update Column on Update
|
<p>TL;DR: SQLAlchemy <code>Column(server_onupdate=text('now()')</code> does not trigger update.</p>
<p>I am defining a simple SQLAlchemy class:</p>
<pre class="lang-py prettyprint-override"><code>class Task(Base):
__tablename__ = "tasks"
id = Column(Integer, primary_key=True, index=True, nullable=False)
name = Column(String, index=True, nullable=False)
project_id = Column(Integer, ForeignKey("projects.id", ondelete="CASCADE"), nullable=False)
created_at = Column(TIMESTAMP(timezone=True), nullable=False, server_default=text("now()"))
updated_at = Column(TIMESTAMP(timezone=True), nullable=False, server_default=text("now()"), server_onupdate=text("now()"))
__table_args__ = (UniqueConstraint("name", "project_id", name="unique_task_name_project_id"),)
def __repr__(self):
return f"<Task {self.name}>"
</code></pre>
<p>I want <code>updated_at</code> to automatically update every time I change <em>any</em> values in that row.</p>
<p>I thought that the column <code>server_onupdate</code> would do that. When I run</p>
<pre class="lang-sql prettyprint-override"><code>UPDATE projects SET description = 'Explain duck typing to intern' WHERE id = 1;
</code></pre>
<p>I can see that this was successful, but the <code>updated_at</code> column is not updated. How do I fix my code?</p>
<p>Can I prevent anyone (but this trigger) from setting this column manually?</p>
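<p>Background on the observed behavior: <code>server_onupdate</code> only <em>describes</em> a server-side default so SQLAlchemy knows to re-fetch the value after an update; it does not create any behavior in the database, and a raw SQL <code>UPDATE</code> bypasses the ORM entirely. A PostgreSQL trigger covers both paths; a sketch against the question's table (the function and trigger names are illustrative assumptions):</p>

```sql
-- Sketch: BEFORE UPDATE trigger so updated_at changes on every row update,
-- whether the UPDATE comes through SQLAlchemy or raw SQL.
CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at = now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tasks_set_updated_at
    BEFORE UPDATE ON tasks
    FOR EACH ROW
    EXECUTE FUNCTION set_updated_at();
```

<p>Because the trigger overwrites whatever value was supplied, it also addresses the second question: manual writes to the column are effectively ignored.</p>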
|
<python><postgresql><sqlalchemy>
|
2024-04-01 17:16:16
| 0
| 618
|
Carol Eisen
|
78,256,679
| 6,242,883
|
Why does Python's multiprocess use all cores if I am telling it not to?
|
<p>I am running a script that includes</p>
<pre><code>import multiprocess as mp
def sample():
# ... do things...
return x
num_cores = 50
num_samples = 1000
with mp.Pool(num_cores) as p:
samples = p.starmap(self, [() for i in range(num_samples)])
</code></pre>
<p>There are 128 cores in my server. However, even though I create a pool with just 50 of them, when I run <code>htop</code> in the terminal I see that all 128 cores are being used by my script, and each at 100% usage.</p>
<p>Why is this happening? I thought only <code>num_cores</code> cores could be used by my script. Or does this depend on what <code>sample()</code> is actually doing?</p>
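<p>One common cause, offered as a hedged guess since <code>sample()</code> is elided: the pool does cap the number of <em>processes</em> at 50, but numerical libraries inside each worker (NumPy/BLAS, OpenMP) start their own thread pools, so 50 workers can still saturate 128 cores. Capping those thread pools is a quick test of that theory; the variables must be set before the numeric libraries are imported:</p>

```python
# Sketch: cap the per-process thread pools used by common numeric libraries.
# Must run before numpy/scipy/etc. are imported anywhere in the process.
import os

for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS",
            "MKL_NUM_THREADS", "NUMEXPR_NUM_THREADS"):
    os.environ[var] = "1"

# ... then import the numeric stack and build the Pool as before:
# 50 workers x 1 thread each stays near 50 cores.
```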
|
<python><python-3.x><multiprocessing><python-multiprocessing>
|
2024-04-01 16:45:02
| 1
| 1,176
|
Tendero
|
78,256,629
| 4,218,883
|
Apply style on different columns from different rules through a for loop
|
<p>I am trying to highlight different columns of a DataFrame using different lists of words as references. If I chain the <code>apply</code> calls on the Styler object, it works, but if I call <code>apply</code> successively through a for loop, it highlights both columns with the last list of words.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({"A": ["foo", "bar", "corge"], "B": ["corge", "fred", "garply"]})
str_lists = [["foo", "bar"], ["corge", "fred"]]
def highlight_str(s, str_lst):
return ["background-color: yellow" if v in str_lst else "" for v in s]
styled_df = df.style
# Works
styled_df.apply(lambda x: highlight_str(x, str_lists[0]), subset=["A"]).apply(
lambda x: highlight_str(x, str_lists[1]), subset=["B"]
)
# Does not work
for str_lst, subset in zip(str_lists, df.columns):
print(str_lst, subset)
styled_df.apply(lambda x: highlight_str(x, str_lst), subset=[subset])
styled_df
</code></pre>
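<p>The loop version fails because of Python's late binding: every lambda closes over the <em>variable</em> <code>str_lst</code>, and all of them see its final value when the styles are eventually rendered. Binding the current value as a default argument fixes it; a minimal sketch of the effect:</p>

```python
# Late binding: all three closures read i at call time, after the loop ended.
late = [lambda: i for i in range(3)]
# Default arguments are evaluated at definition time, capturing each value.
early = [lambda i=i: i for i in range(3)]

print([f() for f in late])   # [2, 2, 2]
print([f() for f in early])  # [0, 1, 2]
```

<p>Applied to the loop: <code>styled_df.apply(lambda x, lst=str_lst: highlight_str(x, lst), subset=[subset])</code>.</p>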
|
<python><pandas>
|
2024-04-01 16:35:50
| 2
| 1,812
|
Marc
|
78,256,497
| 254,172
|
Using already deployed Snowflake UDF from snowpark
|
<p>So I have a prebuilt and predeployed set of Python UDFs that I would like to use from within a snowpark program. I defined the UDFs in SQL:</p>
<pre><code>CREATE OR REPLACE FUNCTION dg_utility__field_contains_phone_number(str string, pre_clean boolean)
RETURNS BOOLEAN
LANGUAGE PYTHON
RUNTIME_VERSION = 3.8
HANDLER = 'dg_utility__field_contains_phone_number'
AS
$$
.......udf stuff
$$;
</code></pre>
<p>Now I want to leverage those UDFs from snowpark python dsl. Something like:</p>
<pre><code>df = df.withColumn(col, dg_utility__field_contains_phone_number(col))
</code></pre>
<p>Is there a way to do this, or am I required to redefine dg_utility__field_contains_phone_number within my snowpark program?</p>
<p>I want to be able to share UDFs between SQL and python. I think I could just use session.sql, but that would require me to generate dynamic SQL against the dataframe I want to apply these functions to and I'm looking to avoid that.</p>
|
<python><snowflake-cloud-data-platform><user-defined-functions>
|
2024-04-01 16:04:49
| 1
| 595
|
Brutus35
|
78,256,362
| 6,011,193
|
When run onnx with CUDAExecutionProvider, it raise "FAIL : Failed to load library libonnxruntime_providers_cuda.so with error"
|
<pre><code>2024-04-01 23:16:16.891282913 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1193 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory
</code></pre>
<p>I used the following command to locate the library:</p>
<pre><code>~ $ locate libonnxruntime_providers_cuda.so
/home/roroco/.pyenv/versions/3.9.1/lib/python3.9/site-packages/onnxruntime/capi/libonnxruntime_providers_cuda.so
</code></pre>
<p>I guess it's an environment path issue. How do I set the path so that Python's onnxruntime can find <code>libonnxruntime_providers_cuda.so</code>?</p>
<p>I located the <code>.so</code> (it exists) and manually added its directory to <code>LD_LIBRARY_PATH</code>, but the error persists:</p>
<pre><code>~/Dropbox/mix/real-esrgan/submodule/Real-ESRGAN $ locate libcublasLt.so
/home/roroco/.pyenv/versions/3.9.1/lib/python3.9/site-packages/nvidia/cublas/lib/libcublasLt.so.11
/home/roroco/.pyenv/versions/3.9.1/lib/python3.9/site-packages/nvidia/cublas/lib/libcublasLt.so.12
^[[A~/Dropbox/mix/real-esrgan/submodule/Real-ESRGAlocate libonnxruntime_providers_cuda.so
/home/roroco/.pyenv/versions/3.9.1/lib/python3.9/site-packages/onnxruntime/capi/libonnxruntime_providers_cuda.so
~/Dropbox/mix/real-esrgan/submodule/Real-ESRGAN $ PYTHON_PATH=. LD_LIBRARY_PATH="/home/roroco/.pyenv/versions/3.9.1/lib/python3.9/site-packages/onnxruntime/capi:/home/roroco/.pyenv/versions/3.9.1/lib/python3.9/site-packages/nvidia/cublas/lib" python onnx/run_onnx.py
2024-04-01 23:50:25.975940926 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1193 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcudnn.so.8: cannot open shared object file: No such file or directory
2024-04-01 23:50:25.975963148 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:747 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
out /home/roroco/Dropbox/mix/real-esrgan/submodule/Real-ESRGAN/onnx/out.jpg
</code></pre>
<p>following is my full onnx code:</p>
<pre><code>import os.path
import onnxruntime
import numpy as np
import cv2
# Load the ONNX model
d = os.path.abspath(f"{__file__}/..")
onnx_model_path = f'{d}/RealESRGAN_x4plus_anime_6B.onnx'
ort_session = onnxruntime.InferenceSession(onnx_model_path, providers=["CUDAExecutionProvider", 'AzureExecutionProvider',
'CPUExecutionProvider']) # in my old deivce with 18.04, cannot install libonnxruntime_providers_cuda, so cannot use cuda
# ort_session = onnxruntime.InferenceSession(onnx_model_path, providers=["CUDAExecutionProvider"])
# Load and preprocess input image
input_image_path = f"{d}/t.jpg"
input_image = cv2.imread(input_image_path)
input_image = cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
input_image = input_image.astype(np.float32) / 255.0 # Normalize to [0, 1]
input_image = np.transpose(input_image, (2, 0, 1)) # Change the shape from HWC to CHW
input_image = np.expand_dims(input_image, axis=0) # Add batch dimension
# Perform inference
ort_inputs = {ort_session.get_inputs()[0].name: input_image}
ort_outs = ort_session.run(None, ort_inputs)
# Post-process the output
output_image = ort_outs[0][0] # Assuming only one output
output_image = np.transpose(output_image, (1, 2, 0)) # Change the shape from CHW to HWC
output_image = (output_image * 255).clip(0, 255).astype(np.uint8) # Convert back to uint8 and clip values
# Display or save the output image
out = f"{d}/out.jpg" # Adjust output path as needed
cv2.imwrite(out, cv2.cvtColor(output_image, cv2.COLOR_RGB2BGR))
print(f"out {out}")
</code></pre>
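<p>Two things seem to be going on. First, on glibc <code>LD_LIBRARY_PATH</code> is read by the dynamic loader at process start, so exporting it for the whole command (as in the last attempt) is the right mechanism. Second, the remaining error is a <em>different</em> missing library, <code>libcudnn.so.8</code>, which suggests cuDNN isn't installed at all. A hedged shell sketch (the site-packages path is an assumption based on the pyenv layout above):</p>

```shell
# Sketch: expose every nvidia/*/lib directory from site-packages to the
# loader. This must happen in the shell *before* launching Python.
SITE="${SITE:-$HOME/.pyenv/versions/3.9.1/lib/python3.9/site-packages}"
LD_LIBRARY_PATH="${LD_LIBRARY_PATH:-}"
for d in "$SITE"/nvidia/*/lib; do
    if [ -d "$d" ]; then
        LD_LIBRARY_PATH="$d${LD_LIBRARY_PATH:+:}$LD_LIBRARY_PATH"
    fi
done
export LD_LIBRARY_PATH
# python onnx/run_onnx.py
```

<p>If <code>libcudnn.so.8</code> is still reported missing, installing the CUDA 11 cuDNN wheel (<code>pip install nvidia-cudnn-cu11</code>) should provide it; the <code>.so.11</code>/<code>.so.8</code> suffixes in the errors indicate a CUDA 11 build of onnxruntime, so the <code>-cu12</code> variants will not satisfy it.</p>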
|
<python><linux><artificial-intelligence><onnx><onnxruntime>
|
2024-04-01 15:33:51
| 1
| 4,195
|
chikadance
|
78,256,329
| 5,722,359
|
What is the correct syntax for a Python function/method argument when it has more than one possible type hint?
|
<pre><code>import os
def scan(path) -> os.DirEntry :
return os.scandir(path)
</code></pre>
<p>What is the correct type hint for the <code>path</code> argument of this function?</p>
<p>According to <a href="https://docs.python.org/3/library/os.html#os.scandir" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p><em>path</em> may be a <a href="https://docs.python.org/3/glossary.html#term-path-like-object" rel="nofollow noreferrer">path-like</a> object. If <em>path</em> is of type <code>bytes</code> (directly or
indirectly through the <a href="https://docs.python.org/3/library/os.html#os.scandir" rel="nofollow noreferrer">PathLike</a> interface), the type of the <a href="https://docs.python.org/3/library/os.html#os.scandir" rel="nofollow noreferrer">name</a> and
<a href="https://docs.python.org/3/library/os.html#os.DirEntry.path" rel="nofollow noreferrer">path</a> attributes of each <code>os.DirEntry</code> will be <code>bytes</code>; in all other
circumstances, they will be of type <code>str</code>.</p>
</blockquote>
<p>So, it can either be a <code>str</code> or <code>bytes</code> object representing a path, or an object implementing the <code>os.PathLike</code> protocol. How do I write these possibilities for the type hinting?</p>
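<p>The standard spelling for this is a union. One thing worth tightening in passing: <code>os.scandir</code> returns an iterator of <code>os.DirEntry</code> objects, not a single entry, so the return hint can be made more precise as well. A sketch:</p>

```python
# Sketch: a parameter hint covering str, bytes and path-like objects.
import os
from typing import Iterator, Union

def scan(path: Union[str, bytes, os.PathLike]) -> Iterator[os.DirEntry]:
    # os.scandir yields os.DirEntry objects, one per directory entry.
    return os.scandir(path)

entries = list(scan("."))
```

<p>On Python 3.10+ the same union can be written as <code>str | bytes | os.PathLike</code>.</p>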
|
<python><python-typing>
|
2024-04-01 15:28:43
| 3
| 8,499
|
Sun Bear
|
78,256,223
| 116,906
|
How, in Python 3, can I have a client open a socket to a server, send 1 line of JSON-encoded data, read 1 line JSON-encoded data back, and continue?
|
<p>I have the following code for a server listening on a port:</p>
<pre><code> def handle_oracle_query(self, sock, address):
sockIn = sock.makefile('rb')
sockOut = sock.makefile('wb')
line = sockIn.readline()
submitted_data = json.loads(line)
self.get_thread_specific_storage()['environmental_variables'] = submitted_data['environmental_variables']
self.get_thread_specific_storage()['cgi'] = submitted_data['cgi']
generate_output()
print_output(sockOut)
sock.close()
sockIn.close()
sockOut.close()
self.remove_thread_specific_storage()
</code></pre>
<p>I am attempting to connect to it with the client below:</p>
<pre><code> sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ip = server[0]
try:
port = int(server[1])
except:
port = DEFAULT_PORT
sock.connect((ip, port))
sockIn = sock.makefile("rb")
sockOut = sock.makefile("wb")
cgi_hash = {}
for key in cgi.FieldStorage().keys():
cgi_hash[key] = cgi.FieldStorage()[key].value
as_dictionary = dict(os.environ)
submitted_data = {
'environmental_variables': as_dictionary,
'cgi': cgi_hash}
encoded_data = json.dumps(submitted_data)
sys.stderr.write(encoded_data + '\r\n')
sockOut.write(encoded_data + '\r\n')
sockOut.flush()
result = json.loads(sockIn.read())
sock.close()
sockIn.close()
sockOut.close()
</code></pre>
<p>When I try to connect with the server I get the following stacktrace:</p>
<pre><code>Traceback (most recent call last):
File "/home/christos/fathers/bin/./fathersd", line 4075, in <module>
multitasking.start_oracle()
File "/home/christos/fathers/bin/./fathersd", line 2396, in start_oracle
self.run_oracle()
File "/home/christos/fathers/bin/./fathersd", line 2372, in run_oracle
self.handle_oracle_query(newsocket, address)
File "/home/christos/fathers/bin/./fathersd", line 2309, in handle_oracle_query
submitted_data = json.loads(line)
File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
</code></pre>
<p>What changes should I be making to the above code in the client and/or server so the client sends a single line of input containing a JSON string and the server, after constructing the output, returns a single line of output containing a JSON string of output which the client can parse and work from there?</p>
<p>I am in the process of porting an application to Python 3. I would like a character encoding of UTF-8.</p>
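<p>There are two concrete bugs here: the client writes a <code>str</code> to a file opened in binary mode (<code>'wb'</code>), and it reads the reply with <code>sockIn.read()</code>, which blocks until EOF instead of stopping at one line. A minimal sketch of the one-line-each-way protocol, with explicit UTF-8 and <code>socket.socketpair()</code> standing in for the real connection (the helper names are illustrative):</p>

```python
# Sketch: one JSON line each way over a socket, UTF-8 encoded explicitly.
import json
import socket

server_sock, client_sock = socket.socketpair()

def send_line(sock_file, obj):
    # Encode to bytes: a makefile('wb') stream rejects str.
    sock_file.write((json.dumps(obj) + "\r\n").encode("utf-8"))
    sock_file.flush()

def recv_line(sock_file):
    # readline() stops at the newline; read() would block until EOF.
    return json.loads(sock_file.readline().decode("utf-8"))

client_out = client_sock.makefile("wb")
send_line(client_out, {"cgi": {"q": "1"}})

server_in = server_sock.makefile("rb")
request = recv_line(server_in)
print(request)  # {'cgi': {'q': '1'}}
```

<p>The same two helpers work for the response in the other direction. Alternatively, <code>sock.makefile("r", encoding="utf-8")</code> / <code>sock.makefile("w", encoding="utf-8")</code> give text streams so no manual encoding is needed.</p>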
|
<python><json><python-3.x><sockets>
|
2024-04-01 15:06:20
| 2
| 6,021
|
Christos Hayward
|
78,256,159
| 7,906,796
|
How can I make CTRL+C interrupt the script without closing Selenium WebDriver?
|
<p>My Python script uses Selenium to automate a browser on Windows. I want to be able to interrupt the script by pressing <kbd>CTRL</kbd>+<kbd>C</kbd>. But even if I catch the exception, the WebDriver terminates on the interrupt:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
driver = webdriver.Chrome()
try:
while True:
pass
except KeyboardInterrupt:
print('KeyboardInterrupt caught')
driver.get('https://www.google.com') # will fail
</code></pre>
<p>When <kbd>CTRL</kbd>+<kbd>C</kbd> is pressed, the WebDriver window closes immediately. Any subsequent calls to driver methods raise a urllib3 exception.</p>
<p>If I am using WebDriver in Python REPL and I need to terminate some function via interrupt, I don't want that to destroy my driver instance. Similarly when testing a longer automated process, I would like to be able to cancel execution manually if something goes awry but allow the browser window to remain open for debugging.</p>
<p>I could use an instance of Remote WebDriver (via Selenium Grid) on my local machine as suggested <a href="https://stackoverflow.com/questions/46135888/python-selenium-ctrlc-closes-chromedrive">here</a> but I do not want to add complexity with managing the grid. The solutions mentioned <a href="https://stackoverflow.com/questions/27495376/catching-keyboardinterrupt-without-closing-selenium-webdriver-sessions-in-pyth">here</a> seem to only work on Unix.</p>
<p>Is there a simple way to achieve this on Windows?</p>
|
<python><windows><selenium-webdriver><keyboardinterrupt>
|
2024-04-01 14:53:41
| 2
| 629
|
ryyyn
|
78,256,137
| 20,793,070
|
Keep websocket alive
|
<p>I have a websocket subscription watcher:</p>
<pre><code>async def run():
start = get_start_status() ### ON/OFF
while start == 1:
live = True
while live == True:
try:
async with websockets.connect(WEBSOCKET_URL, timeout=15, ping_interval=10) as websocket:
# Send subscription request
await websocket.send(json.dumps({
"jsonrpc": "2.0",
"id": 1,
"method": ###...
"params": ###...
}))
first_resp = await websocket.recv()
response_dict = json.loads(first_resp)
if 'result' in response_dict:
print("Subscription successful. Subscription ID: ", response_dict['result'])
# Continuously read from the WebSocket
async for response in websocket:
response_dict = json.loads(response)
### TO DO WiTH RESPONSES
except ConnectionTimeoutError as e:
print('Error connecting. Retrying...')
live = False
async def main():
asyncio.ensure_future(run())
asyncio.get_event_loop().run_forever()
asyncio.run(main())
</code></pre>
<p>It works properly, but at some point it freezes and stops receiving responses. The <code>timeout=15, ping_interval=10</code> settings don't help; it just hangs indefinitely until a manual restart.</p>
<p>I understand that I need to insert a ping-pong function that will periodically check its performance. But I don't understand where and how to add a checker. How can I keep my websocket alive?</p>
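<p>One simple watchdog, sketched below under the assumption that the hang happens while awaiting messages: wrap each receive in <code>asyncio.wait_for</code> so that "no message for N seconds" raises instead of blocking forever, letting the outer loop reconnect. (Whether N should be seconds or minutes depends on how chatty the subscription is.)</p>

```python
# Sketch: raise if the websocket goes silent for max_idle seconds, so the
# surrounding reconnect loop can tear down and re-subscribe.
import asyncio

async def read_messages(websocket, max_idle=30):
    while True:
        try:
            message = await asyncio.wait_for(websocket.recv(), timeout=max_idle)
        except asyncio.TimeoutError:
            raise ConnectionError("websocket idle for %s seconds" % max_idle)
        yield message
```

<p>In <code>run()</code>, replace <code>async for response in websocket</code> with <code>async for response in read_messages(websocket)</code> and catch <code>ConnectionError</code> alongside the existing timeout handler.</p>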
|
<python><python-3.x><websocket>
|
2024-04-01 14:49:02
| 0
| 433
|
Jahspear
|
78,255,992
| 4,976,543
|
PyTorch Geometric SAGEConv - Expected scalar type Long, but found Float?
|
<p>I am trying to implement graph neural networks from the torch_geometric library of model types. I am receiving an error: "RuntimeError: expected scalar type Long but found Float" in this line of the SAGEConv module:</p>
<pre><code>"(My Path)\Python310\lib\site-packages\torch_geometric\nn\dense\linear.py", line 147, in forward
F.linear(x, self.weight, self.bias)
RuntimeError: expected scalar type Long but found Float
</code></pre>
<p>I realize that there are similar questions on Stack Overflow, but I don't think that there is clear guidance regarding how to troubleshoot effectively in torch_geometric. I tried to reduce my problem to the simplest code possible:</p>
<p>First, I import SAGEConv:</p>
<pre><code>import torch
from torch_geometric.nn import SAGEConv
sconv = SAGEConv((-1, -1), 64)
</code></pre>
<p>Then, I create very basic node and edge tensors for the graph:</p>
<pre><code>x = torch.tensor([[1,0],[2,4],[5,7]]) # Three Node Graph; Two "features" per node
ei = torch.tensor([[1, 1],[0,2]]) # Edge Index Matrix - Node 1 to Node 0 and Node 1 to Node 2
</code></pre>
<p>Finally, I call my SAGEConv layer</p>
<pre><code>sconv(x, ei)
>>> (...) RuntimeError: expected scalar type Long but found Float
</code></pre>
<p>I can't wrap my head around this because both "x" and "ei" variables are LongTensor type:</p>
<pre><code>x.type()
>>> 'torch.LongTensor'
ei.type()
>>> 'torch.LongTensor'
</code></pre>
<p>This has been driving me a little crazy. Any assistance with finding what I've done wrong would be extremely appreciated. In case of a version issue, here is my pip freeze for the packages:</p>
<pre><code>torch==2.0.1
torch_geometric==2.5.0
</code></pre>
<p>Edit 1:</p>
<p>I upgraded both <code>torch</code> and <code>torch_geometric</code> to the most recent versions and the error still exists. Though the message is now "expected m1 and m2 to have the same dtype, but got: __int64 != float"</p>
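<p>The usual cause of this exact message, offered as a hedged guess: the <em>dtype</em>, not the tensor class, is what matters. <code>torch.tensor([[1, 0], ...])</code> with integer literals produces an integer ("Long") tensor, but the linear layer inside <code>SAGEConv</code> holds float weights, so node features must be floating point; only the edge index should stay integral:</p>

```python
# Sketch: node features as float, edge indices as long (int64).
import torch

x = torch.tensor([[1, 0], [2, 4], [5, 7]], dtype=torch.float)
ei = torch.tensor([[1, 1], [0, 2]], dtype=torch.long)

# sconv(x, ei) should now pass: F.linear needs x to match the float weights.
```

<p>This reading also fits the newer message "expected m1 and m2 to have the same dtype, but got: __int64 != float": the int64 features versus the float weight matrix.</p>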
|
<python><types><pytorch><pytorch-geometric>
|
2024-04-01 14:18:15
| 1
| 712
|
Branden Keck
|
78,254,765
| 6,751,456
|
Django custom user admin not reflected in admin site
|
<p>I am new to Django and am going through a legacy Django project where a custom user admin is configured as follows:</p>
<pre><code>app1/admin.py
</code></pre>
<pre><code>from .models import User
@admin.register(User)
class UserAdmin(DjangoUserAdmin):
"""Define admin model for custom User model with no email field."""
fieldsets = (
(None, {'fields': ('username', 'password')}),
(_('Personal info'), {'fields': ('email', 'first_name', 'last_name', 'phone_number',
'type_of_user', 'timezone', 'deleted', 'is_supervisor',
'supervisor', 'role')}),
(_('Permissions'), {'fields': ('is_active', 'is_staff', 'is_superuser',
'groups', 'user_permissions')}),
(_('Important dates'), {'fields': ('last_login', 'date_joined',
'created_date')}),
)
add_fieldsets = (
(None, {
'classes': ('wide',),
'fields': ('email', 'username', 'password1', 'password2'),
}),
)
list_display = ('username', 'first_name', 'last_name', 'is_staff')
search_fields = ('username', 'first_name', 'last_name')
ordering = ('username',)
</code></pre>
<p>Also, in <code>app1/models.py</code> (where <code>CustomUser</code> itself inherits from <code>AbstractBaseUser</code>):</p>
<pre><code>class User(CustomUser):
first_name = models.CharField(max_length=1024, db_index=True)
last_name = models.CharField(max_length=1024, db_index=True)
phone_number = models.CharField(max_length=1024, blank=True, null=True)
is_supervisor = models.BooleanField(default=False, db_index=True)
license = models.CharField(max_length=1024, blank=True, null=True)
type_of_user = models.IntegerField(choices=TYPE_OF_USER, default=1, db_index=True)
created_date = models.DateTimeField(default=timezone.now)
timezone = TimeZoneField(choices=[(tz, tz) for tz in pytz.all_timezones])
deleted = models.BooleanField(default=False, db_index=True)
load_id = models.CharField(max_length=1024, null=True, blank=True)
approval_status = models.IntegerField(choices=APPROVAL_STATUS, null=True, blank=True)
supervisor = models.ForeignKey('self', on_delete=models.SET_NULL, null=True, blank=True)
last_alert_sent_on = models.DateTimeField(null=True, blank=True, db_index=True)
role = models.ForeignKey("common.Role", on_delete=models.SET_NULL,
null=True, blank=True)
</code></pre>
<p>In <code>settings.py</code>:</p>
<p><code>AUTH_USER_MODEL = 'app1.User'</code></p>
<p>There's no <code>admin.py</code> in <code>main_app</code> folder.</p>
<p>The problem here is that the <code>User</code> model is not associated with this particular <code>UserAdmin</code>.</p>
<p>Even if I remove this <code>@admin.register(User) class UserAdmin(DjangoUserAdmin):</code> in <code>admin.py</code> the users are still displayed.</p>
<p>Here's the admin interface for Users.</p>
<p><a href="https://i.sstatic.net/AyA3X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AyA3X.png" alt="enter image description here" /></a></p>
<p>As you see, the fields are also different from what is defined here:</p>
<p><code>list_display = ('username', 'first_name', 'last_name', 'is_staff')</code></p>
<p>Why is the custom user admin not working in this case?</p>
|
<python><django><django-custom-user>
|
2024-04-01 09:56:25
| 1
| 4,161
|
Azima
|
78,254,529
| 17,556,733
|
How to setup datadog alerts without time intervals
|
<p>I want to create a datadog alert when a specific phrase is found in my logs, however when I create a monitor with the specific query I need, I must specify over what time period I want the query evaluated (which causes one notification when the log is spotted, and another notification when the interval elapses without the same log being spotted again).</p>
<p>Is there a way to create a monitor such that when a specific log is spotted, I instantly get the first notification and not get any follow up notification?</p>
|
<python><amazon-web-services><logging><slack><datadog>
|
2024-04-01 08:59:17
| 1
| 495
|
TheMemeMachine
|
78,254,501
| 9,112,151
|
How to disable application/json in Swagger UI autodocs of a FastAPI application?
|
<p>My API can return <strong>only</strong> a file:</p>
<pre class="lang-py prettyprint-override"><code> @router.get(
"/partners/{partner_id}/rsr-requests/{rsr_request_id}/{document_path}",
responses={200: {"content": {"application/octet-stream": {}}, "description": "Файл"}}
)
async def download_rsr_document(...):
pass
</code></pre>
<p>But in Swagger UI, I can see that <code>application/json</code> still remains. How to disable it?</p>
<p><a href="https://i.sstatic.net/3H1du.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3H1du.png" alt="enter image description here" /></a></p>
|
<python><swagger><fastapi><swagger-ui><media-type>
|
2024-04-01 08:53:06
| 2
| 1,019
|
Альберт Александров
|
78,254,479
| 589,352
|
Running 10,000 SELECT queries against SQLite takes hours
|
<p>I am using Python and the <code>sqlite3</code> lib to run around 10,000 SELECT queries against a single table. Running it locally on my not-high-spec-but-not-ancient laptop this is taking six or seven hours to complete. I created a composite index on the fields but that didn't make any difference. Here's the table and the query (the table is geolocation data for IP addresses, with the "addresses" being ints made by stripping the full stops):</p>
<pre class="lang-sql prettyprint-override"><code>CREATE TABLE ip2location (ip_from INT, ip_to INT, country_code VARCHAR(2), country_name VARCHAR(255), region_name VARCHAR(255), city_name VARCHAR(255));
CREATE INDEX `ix_ip_between` ON `ip2location`(`ip_from`, `ip_to`);
SELECT country_name, city_name FROM ip2location
WHERE ip_from <= ? and ip_to >= ?
AND country_name != '-';
</code></pre>
<pre class="lang-py prettyprint-override"><code>query = "SELECT country_name, city_name FROM ip2location \
WHERE ip_from <= ? and ip_to >= ? \
AND country_name != '-';"
results = []
for ip in ips:
cur.execute(query, [ip, ip])
result = cur.fetchone()
if result:
results.append(result)
</code></pre>
<p>I'd expect it to take a while but not this long. Does the problem lie with Python, SQLite, me? Thanks.</p>
<p>[EDIT] Looking at the query plan revealed that my index wasn't being used. I made separate indexes and that solves the problem. Thanks.</p>
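<p>For reference, a runnable sketch of the fixed setup from the edit (table contents are illustrative). The composite index <code>(ip_from, ip_to)</code> can only seek on <code>ip_from</code>, leaving a large range to scan; with single-column indexes the planner can pick a much tighter one, and <code>EXPLAIN QUERY PLAN</code> shows which it chose:</p>

```python
# Sketch: separate indexes per column, plus EXPLAIN QUERY PLAN to verify.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ip2location (ip_from INT, ip_to INT,"
            " country_name TEXT, city_name TEXT)")
con.execute("CREATE INDEX ix_from ON ip2location(ip_from)")
con.execute("CREATE INDEX ix_to ON ip2location(ip_to)")
con.execute("INSERT INTO ip2location VALUES (10, 20, 'Nowhere', 'Example City')")

query = ("SELECT country_name, city_name FROM ip2location "
         "WHERE ip_from <= ? AND ip_to >= ? AND country_name != '-'")
print(con.execute(query, (15, 15)).fetchone())  # ('Nowhere', 'Example City')

for row in con.execute("EXPLAIN QUERY PLAN " + query, (15, 15)):
    print(row)  # the chosen search strategy / index
```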
|
<python><sql><sqlite>
|
2024-04-01 08:48:23
| 2
| 1,937
|
jaybee
|
78,254,396
| 2,856,552
|
How do I choose individual colors for contours in matplotlib
|
<p>In my short Python code below I am plotting risk levels based on values in a CSV file, using a shapefile. The code works, but I would like to select specific individual colors for each level, namely green, yellow, orange and red.
Currently it works if I put "Blues" or "Reds", as in the attached png. <img src="https://i.sstatic.net/WWX2M.png" alt="image output1" />
The line in the code</p>
<pre><code>merged_df.plot(column="risk",colors = ['green','yellow','cyan','red'])
</code></pre>
<p>does not work.
How do I specify individual specific colors. help will be appreciated.
The risk data in "Tmaxrisks.csv" is as</p>
<pre><code>"District","risk"
"Berea",1
"Butha Buthe",3
"Leribe",1
"Mafeteng",2
"Maseru",1
"Mohale's Hoek",2
"Mokhotlong",4
"Qacha's Nek",3
"Quthing",3
"Thaba Tseka",4
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
map_df = gpd.read_file("Shapefiles/BNDA_LSO_1990-01-01_lastupdate/BNDA_LSO_1990-01-01_lastupdate.shx")
risks_df = pd.read_csv("Tmaxrisks.csv")
merged_df = map_df.merge(risks_df, left_on=["adm1nm"], right_on=["District"])
merged_df.plot(column="risk", cmap="Reds", legend=False)
#merged_df.plot(column="risk",colors = ['green','yellow','cyan','red'])
plt.show()
</code></pre>
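<p>The <code>colors=</code> keyword in the commented-out line isn't a plot argument; the usual route is to build a discrete colormap and pass it via <code>cmap=</code>. A sketch (pinning <code>vmin</code>/<code>vmax</code> to the risk scale so 1 maps to green and 4 to red):</p>

```python
# Sketch: a discrete four-color map for risk levels 1-4.
from matplotlib.colors import ListedColormap

risk_cmap = ListedColormap(["green", "yellow", "orange", "red"])
# merged_df.plot(column="risk", cmap=risk_cmap, vmin=1, vmax=4, legend=True)
```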
|
<python><matplotlib>
|
2024-04-01 08:30:06
| 1
| 1,594
|
Zilore Mumba
|
78,254,332
| 1,238,967
|
In python how to share a large data structure across modules (no copies)
|
<p>I am working on a project where I use a rather large Pandas DataFrame, and I need to let several functions access it.
So far I have used a single Python script, where I create the dataframe as a global, and then define the various functions, which read from and write to it.
So far, so good.</p>
<p>Now, the project is growing, and so is the number of functions, and I would like to split the project into several files (modules).</p>
<p>The first idea is to make the dataframe global across the modules, and I have found some info on how to do it (I would still appreciate a comment on which is the best/most Pythonic/most efficient way).
However, I have read that this is frowned upon.</p>
<p>So, what would be the best approach, taking into account that the dataframe can become quite large, so that making a copy local to each function, when called, would be inefficient?</p>
<p>Thanks!</p>
|
<python><pandas><module><global-variables>
|
2024-04-01 08:12:42
| 1
| 1,234
|
Fabio
|
78,254,013
| 3,118,602
|
How to use SpaCy NER?
|
<p>I am working on a mini-project to cluster similar sentences together. Before I can achieve that, I have to perform pre-processing to the extremely dirty data (these data are all user inputs, free text).</p>
<p>One of the pre-processing steps that I thought of is to identify each sentence and classify it into a category. Even though it is free text, there are some key tokens per sentence such as "LOOK", "REPLACE", "CHECK".</p>
<p>To classify the sentences, I looked into NLP and discovered spaCy. One of the components in spaCy, NER, seems perfect for this task. I believe that for my use case I do not need the full spaCy pipeline for this classification. I also understand that I need to add my custom labels into the NER.</p>
<p>My question is - can I just add my custom label into SpaCy NER using <code>add_label()</code> and run the NER onto my dataset without the having to re-train the SpaCy model? My end goal is to simply classify the sentences into a category, based on the keywords.</p>
<p>This is quite unclear despite several days of researching.</p>
<p>Greatly appreciate any clarification on this. TIA.</p>
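<p>Short answer to the core question, then a sketch: <code>add_label()</code> only registers a new label with the statistical NER component; without training examples the model has no way to predict it, so adding labels without retraining will not classify anything. Since the categories here are keyword-driven, a rule-based pass (spaCy's <code>Matcher</code>/<code>EntityRuler</code>, or even plain string matching) may be all that is needed. A minimal keyword sketch; the category map is an illustrative assumption:</p>

```python
# Sketch: rule-based sentence categorisation by key token.
CATEGORY_KEYWORDS = {
    "inspection": {"LOOK", "CHECK"},   # illustrative categories
    "maintenance": {"REPLACE"},
}

def categorize(sentence):
    tokens = {t.strip(".,!?").upper() for t in sentence.split()}
    for category, keywords in CATEGORY_KEYWORDS.items():
        if tokens & keywords:
            return category
    return "other"

print(categorize("Please CHECK the valve"))  # inspection
print(categorize("replace worn gasket"))     # maintenance
```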
|
<python><nlp><spacy><named-entity-recognition>
|
2024-04-01 06:41:32
| 1
| 593
|
user3118602
|