QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 β |
|---|---|---|---|---|---|---|---|---|
76,894,594 | 2,386,113 | How to plot a sine curve for longer time duration | <p>I am new to Python and <strong>trying to plot a sine curve for a time duration of 300 seconds</strong> on the x-axis but I am only able to plot a correct sine curve for a short duration.</p>
<p><strong>MWE:</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
in_array = np.linspace(0, 2*np.pi, 300)
out_array = np.sin(in_array)
print("in_array : ", in_array)
print("\nout_array : ", out_array)
# red for numpy.sin()
plt.plot(in_array, out_array, color = 'red')
plt.title("numpy.sin()")
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
</code></pre>
<p>The code above is able to plot the desired sine curve as shown below:</p>
<p><a href="https://i.sstatic.net/NHfEJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NHfEJ.png" alt="enter image description here" /></a></p>
<p>If I change the <code>np.linspace</code> line in the above code to</p>
<pre><code>in_array = np.linspace(0, 6*np.pi, 300)
</code></pre>
<p>...then I am able to get the 3 cycles of a sine curve, which is also okay.</p>
<p><a href="https://i.sstatic.net/vKSdC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vKSdC.png" alt="enter image description here" /></a></p>
<p><strong>Problem:</strong> If I try to plot the sine-curve for 300 seconds (approx.) on the x-axis by using <code>in_array = np.linspace(0, 90*np.pi, 300)</code> in the above code then the curve contains <strong>different amplitudes</strong> (as shown below), why and how can I correct it?</p>
<p><a href="https://i.sstatic.net/eNM46.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eNM46.png" alt="enter image description here" /></a></p>
| <python><numpy><matplotlib> | 2023-08-13 17:59:13 | 2 | 5,777 | skm |
76,894,567 | 18,346,591 | How to find subset of a set without knowing the subset or parent set values | <p>I have two sets:</p>
<pre><code>a = {x,x,x,x}
b = {x,x,x,x,x}
</code></pre>
<p>with x meaning unknown values</p>
<p>The exact arrangements or length or values of each set <code>(a or b)</code> are not fixed until they've been discovered. I am not able to know what exact values or which would be a subset of which but I need to be able to detect and perform some action once either is a subset of the other.</p>
<p>I understand I could do this using:</p>
<pre><code>if a <= b or b <= a:
</code></pre>
<p>But the issue is that below this <code>if statement</code>, I must be sure of which one is a subset of the other because I perform some functions with the subset and the set such as <strong>filtering out the subset from the set</strong>. There's no room for maybes, like an <code>if statement</code>. And I do not want to write duplicate code.</p>
<p>How do you suggest I go about this?</p>
<p>Sources I have checked:
<a href="https://stackoverflow.com/questions/728972/finding-all-the-subsets-of-a-set">Finding all the subsets of a set</a>
<a href="https://stackoverflow.com/questions/73822286/finding-subset-of-a-set">Finding subset of a set</a></p>
| <python><minesweeper> | 2023-08-13 17:50:21 | 1 | 662 | Alexander Obidiegwu |
76,894,535 | 2,391,063 | pip stops working after upgrading Python on OS X | <p>I encountered an installation issue with <code>pip</code> after upgrading Python on OS X. I upgraded to the latest Python 3.11 using homebrew. This also install pip with the latest version. I then redirected my Python path from <code>/usr/local/bin/python3.11</code> to <code>/usr/local/bin/python</code>. Everything worked great, except my site-packages were no longer found (as expected).</p>
<p><strong>The issue:</strong> when I tried to import packages using <code>pip</code> or <code>pip3</code> I got the following error:</p>
<pre class="lang-bash prettyprint-override"><code>$ pip3 install numpy
Traceback (most recent call last):
File "/usr/local/bin/pip3", line 5, in <module>
from pip._internal.cli.main import main
File "/usr/local/lib/python3.11/site-packages/pip/_internal/cli/main.py", line 10, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/usr/local/lib/python3.11/site-packages/pip/_internal/cli/autocompletion.py", line 10, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/usr/local/lib/python3.11/site-packages/pip/_internal/cli/main_parser.py", line 9, in <module>
from pip._internal.build_env import get_runnable_pip
File "/usr/local/lib/python3.11/site-packages/pip/_internal/build_env.py", line 19, in <module>
from pip._internal.cli.spinners import open_spinner
File "/usr/local/lib/python3.11/site-packages/pip/_internal/cli/spinners.py", line 9, in <module>
from pip._internal.utils.logging import get_indentation
File "/usr/local/lib/python3.11/site-packages/pip/_internal/utils/logging.py", line 13, in <module>
from pip._vendor.rich.console import (
File "/usr/local/lib/python3.11/site-packages/pip/_vendor/rich/__init__.py", line 17, in <module>
_IMPORT_CWD = os.path.abspath(os.getcwd())
^^^^^^^^^^^
PermissionError: [Errno 1] Operation not permitted
</code></pre>
<p>Other older SO solutions did not seem to help, including upgrading permissions.</p>
<p><strong>Question</strong>: How do I import packages with pip?</p>
| <python><python-3.x><macos><pip><upgrade> | 2023-08-13 17:41:33 | 1 | 10,868 | Thane Plummer |
76,894,469 | 10,367,451 | Downloading a sectIon of a video using python | <p>Why would this command run as expected in the command line, downloading only the specified part of the video, but using the python library of yt-dlp it would download the whole video instead??
<code>yt-dlp https://www.youtube.com/watch?v=MtN1YnoL46Q --download-sections "*00:02:05-00:02:10" --force-keyframes-at-cuts</code>
I'm trying to send the downloaded part of video as a response for my flask api endpoint, but the whole video gets downloaded, its like the ydl_opts get totally ignored.</p>
<pre><code>from flask import Flask, make_response, request
import yt_dlp
from flask_cors import CORS
app = Flask(__name__, static_folder='static')
CORS(app, expose_headers=['Filename'])
@app.route("/downloadpart/", methods=['POST'])
def download_cut():
if request.method == 'POST':
url = request.form.get('link')
ydl_opts = {'format': 'best',
'download_sections' : "*00:02:05-00:02:10",
'force_keyframes_at_cuts' : True,
}
try:
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
info_dict = ydl.extract_info(url, download=False)
video_url = info_dict['url']
video_data = ydl.urlopen(video_url).read()
headers = {
'Content-Type': 'video/mp4',
}
return make_response(video_data, 200, headers)
except Exception as e:
return make_response(e)
return make_response("Invalid request", 400)
if __name__ == '__main__':
#app.run(debug=True)
app.run(debug=False, host='0.0.0.0', port=5000)
</code></pre>
<p>Am I writing the ydl_opts wrong? What am I doing wrong? Why is the whole video downloaded instead of the specified part? I dont want to download it on the server an then use something as ffmpeg on it to cut it</p>
| <python><flask><youtube-dl><yt-dlp> | 2023-08-13 17:21:54 | 0 | 301 | JohnnySmith88 |
76,894,360 | 1,056,563 | How to omit "--multiprocess" option in pytest in pycharm | <p>I am seeing sporadic instances of the following code being executed in <code>python3.10/multiprocessing/spawn.py</code> happening when running <code>Pytest</code> in debug mode in <code>Pycharm</code></p>
<pre><code>def _check_not_importing_main():
if getattr(process.current_process(), '_inheriting', False):
raise RuntimeError('''
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.''')
</code></pre>
<p>I had never seen this before.</p>
<p>Is there any way to disable <code>--multiprocess</code> when running <code>pytest</code> in <code>pycharm</code> - as an intended workaround?</p>
<p>The command line being generated is</p>
<pre><code>venv/bin/python3 /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py
--multiprocess --qt-support=auto --client 127.0.0.1 --port 50192 --file ```
I am on `Pycharm 2022.3.1 Professional Edition`
</code></pre>
| <python><pycharm> | 2023-08-13 16:50:05 | 1 | 63,891 | WestCoastProjects |
76,894,344 | 6,020,504 | How do I set up a virtual env for python in Chef recipe? | <p>The goal is to clone a repo to the machine that is to run a Python script. And I need to install the requirements. However, I get this warning:</p>
<p><code>WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv</code></p>
<p>Below is my attempt to get rid of the warning, but I get the below error:</p>
<p><code>had an error: Errno::ENOENT: No such file or directory - /var/log/virtual-environment/bin/pip</code></p>
<p>I am unsure how to modify the code to solve this issue?</p>
<pre><code># Cookbook: chefcookbookname
# Attribute file: attributes/default.rb
default['chefcookbookname']['venv_path'] = '/var/log/virtual-environment'
</code></pre>
<pre><code># Cookbook:: chefcookbookname
# Recipe:: default
package 'git'
package 'python3-pip'
package 'python3.10-venv'
directory '/var/log/tmp' do
action :create
recursive true
end
directory '/var/log/myrepo' do
action :create
recursive true
end
directory '/var/log/virtual-environment' do
action :create
recursive true
end
venv_path = node['chefcookbookname']['venv_path']
directory venv_path do
action :create
end
execute 'create_virtualenv' do
command "python3 -m venv #{venv_path}"
not_if { ::File.exist?(venv_path) }
end
git 'clone_repository' do
repository 'https://repo.git'
reference 'main'
destination '/var/log/myrepo'
action :sync
end
execute 'move_requirements' do
command "mv /var/log/myrepo/requirements.txt #{venv_path}"
action :run
only_if { ::File.exist?("/var/log/myrepo/requirements.txt") }
end
execute 'install_python_dependencies' do
command "#{venv_path}/bin/pip install -r #{venv_path}/requirements.txt"
only_if { ::File.exist?("#{venv_path}/requirements.txt") }
environment(
'VIRTUAL_ENV' => venv_path,
'PATH' => "#{venv_path}/bin:#{ENV['PATH']}"
)
end
</code></pre>
| <python><chef-infra> | 2023-08-13 16:44:38 | 1 | 349 | user938e3ef455 |
76,894,176 | 20,999,380 | python key input logic using msvcrt | <p>I need to do roughly the following:</p>
<pre><code>while True:
if *a key is pressed within 5 seconds of some prior event*:
print(*the specific key that was pressed)
elif *5 seconds pass with no key press*
print("No key pressed")
</code></pre>
<p>I posted about my specific needs in another question (<a href="https://stackoverflow.com/questions/76893525/read-specific-key-with-msvcrt-getch-or-move-on-after-set-time-with-no-input">Read specific key with msvcrt.getch() OR move on after set time with no input</a>), but I figure this format is much more approachable. The critical part here is that I must know what key was pressed. I have been trying to use <code>msvcrt.getch() amd msvcrt.kbhit()</code>, but it seems I need some sort of hybrid.</p>
<p>Can anyone figure this out?</p>
<p>Windows 11, Python 3.11.0</p>
| <python><msvcrt> | 2023-08-13 16:01:11 | 1 | 345 | grace.cutler |
76,894,072 | 7,533,650 | Can't get all img tags when scrapping website with BeautifulSoup (BS4) | <p>I am trying to get a list of all the <code><img></code> image tags on a website. Unfortunately, I only get a handful and cannot get the rest despite them existing in the HTML code when I use developer tools. In my code below, I stop getting <code>img</code> tags after <code>TRU</code>, or about only 10 tags. There are many others.</p>
<p><strong>Here's my code:</strong></p>
<pre><code>import requests
from bs4 import BeautifulSoup
import time
# get list of token IMGs
r = requests.get('https://coinmarketcap.com/view/real-world-assets/')
soup = BeautifulSoup(r.text, 'lxml')
soup = soup.find('table', {'class':'sc-482c3d57-3 iTyfmj cmc-table'})
soup = soup.find_all('tr')
for s in soup:
print(s)
print('************')
</code></pre>
<p><strong>Here's what I'm referring to:</strong>*</p>
<pre><code>************
<tr><td><span class="sc-c0b4ee1-2 bwyOyZ"></span></td><td style="text-align:start"><p class="sc-4984dd93-0 iWSjWE" color="text2" data-sensors-click="true" font-size="14">412</p></td><td style="text-align:start"><div class="sc-aef7b723-0 LCOyB" display="flex"><a class="cmc-link" href="/currencies/truefi-token/"><div class="sc-aef7b723-0 sc-b585f443-0 hqAcrb" data-sensors-click="true"><img alt="TRU logo" class="coin-logo" decoding="async" fetchpriority="low" loading="lazy" src="https://s2.coinmarketcap.com/static/img/coins/64x64/7725.png"/><div class="sc-aef7b723-0 sc-b585f443-1 dUXsZC hide-ranking-number"><p class="sc-4984dd93-0 kKpPOn" color="text" data-sensors-click="true" font-size="1" font-weight="semibold">TrueFi</p><div class="sc-b585f443-2 SoolS" data-nosnippet="true"><p class="sc-4984dd93-0 iqdbQL coin-item-symbol" color="text3" data-sensors-click="true" font-size="1">TRU</p></div></div></div></a></div></td><td style="text-align:end"><div class="sc-a0353bbc-0 gDrtaY"><a class="cmc-link" href="/currencies/truefi-token/#markets"><span>$0.03894</span></a></div></td><td style="text-align:end"><span class="sc-97d6d2ca-0 bQjSqS"><span class="icon-Caret-down"></span>0.09%</span></td><td style="text-align:end"><span class="sc-97d6d2ca-0 cYiHal"><span class="icon-Caret-up"></span>2.53%</span></td><td style="text-align:end"><span class="sc-97d6d2ca-0 cYiHal"><span class="icon-Caret-up"></span>14.77%</span></td><td style="text-align:end"><p class="sc-4984dd93-0 jZrMxO" color="text" data-sensors-click="true" font-size="1" style="white-space:nowrap"><span class="sc-f8982b1f-0 jYSZLP">$41.55M</span><span class="sc-f8982b1f-1 bOsKfy" data-nosnippet="true">$41,552,664</span></p></td><td style="text-align:end"><div class="sc-aef7b723-0 sc-a0b7a456-0 iHWrYq"><a class="cmc-link" href="/currencies/truefi-token/#markets"><p class="sc-4984dd93-0 jZrMxO font_weight_500" color="text" data-sensors-click="true" font-size="1">$3,402,169</p></a><div data-nosnippet="true"><p class="sc-4984dd93-0 ihZPK" color="text2" data-sensors-click="true" font-size="0">87,376,391 TRU</p></div></div></td><td style="text-align:end"><div class="sc-aef7b723-0 sc-e8f714de-0 cELvic" data-sensors-click="true" style="cursor:pointer"><div class="sc-aef7b723-0 sc-e8f714de-1 hSniWt"><p class="sc-4984dd93-0 WfVLk" color="text" data-sensors-click="true" font-size="1" font-weight="medium">1,067,178,474 TRU</p></div><div class="sc-4ff2400b-0 hKysZR" data-sensors-click="true" width="160"><div class="sc-4ff2400b-1 ldlqLF" width="118"></div></div></div></td><td style="text-align:end"><a class="cmc-link" href="/currencies/truefi-token/?period=7d"><img alt="truefi-token-7d-price-graph" class="sc-482c3d57-0 gsWGOt isUp" loading="lazy" src="https://s3.coinmarketcap.com/generated/sparklines/web/7d/2781/7725.svg"/></a></td></tr>
************
<tr class="sc-428ddaf3-0 bKFMfg"><td><span></span></td><td><span></span></td><td><a class="cmc-link" href="/currencies/propy/"><span class="circle"></span><span>Propy</span><span class="crypto-symbol">PRO</span></a></td><td><span>$<!-- -->0.33</span></td><td><span></span></td></tr>
************
</code></pre>
| <python><html><beautifulsoup> | 2023-08-13 15:36:31 | 1 | 303 | BorangeOrange1337 |
76,894,060 | 10,967,961 | Gropuby agg says: ValueError: Must produce aggregated value | <p>I am trying to groupby a huge dataframe (3.5 Billion observations) by two columns and multiply the resulting columns 2 by 2 as follows:</p>
<pre><code>FirstNeighborVars_s2=dat2.groupby(by=['NiuCust2', 'year']).agg(
s2_nv_importing=('NVCost2_sum', lambda x: (x * dat2.loc[x.index, 'importing'])),
s2_prop_importing=('PROPCost2_sum', lambda x: (x * dat2.loc[x.index, 'importing']))
).reset_index()
</code></pre>
<p>Now, while this works with a smaller version of the database (when dat2 is defined as dat2.head(10000)), this does not work with the version using the entire database (the one in the code above) giving the following error:</p>
<pre><code>ValueError: Must produce aggregated value
</code></pre>
<p>Why does this error arise? Is there another way to perform the following series of operations (which does not work actually because in pandas we cannot operate on Groupby dataframes):</p>
<pre><code>FirstNeighborVars_s2=dat2.groupby(by=['NiuCust2', 'year'])
FirstNeighborVars_s2["s2_nv_importing"]=FirstNeighborVars_s2["NVCost2_sum"]*FirstNeighborVars_s2["importing"]
FirstNeighborVars_s2["s2_prop_importing"]=FirstNeighborVars_s2["PROPCost2_sum"]*FirstNeighborVars_s2["importing"]
</code></pre>
<p>Thanks a lot</p>
| <python><pandas><group-by> | 2023-08-13 15:33:57 | 0 | 653 | Lusian |
76,893,978 | 6,732,947 | newline is missing after the result of "\n".join(list) inserted into cells using openpyxl | <p>I am using <code>beautifulsoup</code> and <code>openpyxl</code> to get some web links;<br></p>
<p>My code is;<br></p>
<ol>
<li>Use <code>beautifulsoup</code> to get the <code>href</code> of a tag, and append them to a list</li>
<li>use <code>join</code> to concatenate the multiple web links of the list to a string joining with <code>"\n"</code></li>
<li>insert that concatenated string to a cell</li>
</ol>
<p>I am confused, don't know what happened.</p>
<p>Here is a reproducible example,</p>
<pre><code>from openpyxl import Workbook
from openpyxl import load_workbook
wb = load_workbook("c:\\python\\test.xlsx")
ws = wb.active
detail = ['pic','movie name','page link','release date','δΈdownload link','detail']
detail_str="\n".join(detail)
ws.cell(1,1,detail_str)
wb.save("c:\\python\\test.xlsx")
wb.close
</code></pre>
| <python><beautifulsoup><openpyxl> | 2023-08-13 15:16:58 | 1 | 661 | tonyibm |
76,893,623 | 5,038,503 | Passing a numpy audio array around different audio libraries | <p>I'm working on a project which involved numerous audio-processing tasks with text-to-speech, but I've hit a small snag. I'm going to be processing possibly hundreds of TTS audio segments, so I want to mitigate file IO as much as possible.
I need to synthesize speech with Coqui TTS, time-stretch it with AudioTSM, and then perform additoinal processing and splicing with PyDub.</p>
<p>I'm using <a href="https://github.com/coqui-ai/TTS" rel="nofollow noreferrer">Coqui TTS</a> to generate text like this:</p>
<pre><code>from TTS.api import TTS
tts = TTS()
audio = tts.tts("Hello StackOverflow! Please help me!")
</code></pre>
<p>This returns a List of Float32 values which needs to be converted to be used with <a href="https://audiotsm.readthedocs.io/en/latest/io.html" rel="nofollow noreferrer">AudioTSM's ArrayReader</a></p>
<p>Like this:</p>
<pre><code>audio_array = np.array(tts_audio)
# Reshape the array to (channels, samples)
samples = len(audio_array)
channels = 1
sample_rate = 22050
audio_array = audio_array.reshape(channels, samples)
from audiotsm.io.array import ArrayReader, ArrayWriter
reader = ArrayReader(audio_array)
tsm = wsola(reader.channels, speed=2) # increase the speed by 2x
rate_adjusted = ArrayWriter(channels=channels)
tsm.run(reader, rate_adjusted)
</code></pre>
<p>So far, things are hunky-dory. The problem comes in with Pydub</p>
<p>If I were to use AudioTSM's <code>WavWriter</code> instead, the audio is time-stretched correctly just as it would be if I had done <code>tts_to_file</code> and then <code>WavReader()</code></p>
<h3>The issue comes in with <a href="https://github.com/jiaaro/pydub" rel="nofollow noreferrer">PyDub</a></h3>
<p>If I were to just directly pass in the results of <code>ArrayWriter</code>'s <code>rate_adjusted.data.tobytes()</code>, like this, we get MAJOR distorted audio</p>
<pre><code>from pydub import AudioSegment
# Convert the processed audio data to a PyDub AudioSegment
processed_audio_segment = AudioSegment(
rate_adjusted.data.tobytes(),
frame_rate=samplerate,
sample_width=2,
channels=channels
)
# Perform additional audio processing
processed_audio_segment.export('tts_output.wav', format='wav')
</code></pre>
<p>I can't find documentation that supports this, but looking at the source for the <code>AudioSegment</code> <code>__init__</code> I suspected it had something to do with Coqui outputting <code>float32</code> AudioSegment wanting a scaled <code>int16</code></p>
<p>converting the array actually seems to produce a somewhat usable result</p>
<pre><code># Scale the floats to whole numbers and convert
converted_audio = (rate_adjusted.data * 2**15).astype(np.int16).tobytes()
</code></pre>
<p>This produces an audio file that is not distorted but has a noticeable decrease in quality, and when exported is actually about 25KB smaller than the one exported by AudioTSM's <code>WavWriter</code> without any processing. I would guess this is because Int16 uses less data. I tried converting to Int32 instead like this:</p>
<pre><code>converted_audio = (rate_adjusted.data * 2**31).astype(np.int32).tobytes()
</code></pre>
<p>But this actually doesn't really sound any better and takes up much more space. What am I missing here?</p>
<p>If I just export to wav with <code>WavWriter</code> and read in with <code>AudioSegment.from_wav()</code>, there is no distortion, the export is identical, and I don't have to convert, but again, File IO is expensive and a pain.</p>
<p>Is there any way to properly convert between these array formats that won't cause distortion, loss of quality, or sanity besides just turning things into wavs? I could also try other libraries, but my project has already been making heavy use of PyDub even though it's proving to be a massive thorn in the tuchus. My goal is just to perform all audio operations in memory with as much interoperability between libraries as possible.</p>
| <python><numpy><audio><audio-processing><pydub> | 2023-08-13 13:39:37 | 1 | 2,024 | Tessa Painter |
76,893,477 | 4,136,337 | pylint: unable to import 'requests' using nvim with pylsp and pylint enabled | <p>I'm having a weird problem with pylint complaining about an import problem:</p>
<pre><code>pylint: [import-error] Unable to import 'requests' [E0401]
</code></pre>
<p>This makes no sense since <code>requests</code> is a standard library. Also, I have no issues with importing other standard libraries.</p>
<p>I've tried to with pylint and pynvim installed but changes.</p>
<p>I use:</p>
<ul>
<li>neovim 0.9.1</li>
<li>pylsp with pylint enabled</li>
</ul>
<p>When I run <code>pylint</code> from the command-line, this error doesn't appear.</p>
<p>Below I attach a screenshot showing exactly what I'm referring to:</p>
<p><a href="https://i.sstatic.net/wBRjm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wBRjm.png" alt="enter image description here" /></a></p>
| <python><neovim><pylint><nvim-lspconfig> | 2023-08-13 13:04:36 | 1 | 1,356 | 0x4ndy |
76,893,273 | 536,377 | Shared Variable between Parent and Child in Python | <p>I have a global configuration <code>pp</code> that changes at <em>runtime</em> and needs it to be shared across all parent/child objects.</p>
<pre><code>class Config:
pp = 'Init'
def __init__(self):
pass
class Child(Config):
def __init__(self, name):
self.cc = name
par = Config()
print(f"Parent: {par.pp}")
par.pp = "123"
print(f"Parent: {par.pp}")
child = Child('XYZ')
print(f"Child-1: {child.pp} - {child.cc}")
</code></pre>
<p>This prints:</p>
<pre><code>Parent: Init
Parent: 123
Child-1: Init - XYZ
</code></pre>
<p>The third line is expected to be <code>Child-1: 123 - XYZ</code></p>
<p>How can I implement that in a clean way?</p>
<p>UPDATE*
Currently it works with a method like:</p>
<pre><code>class Config:
pp = 'Init'
def __init__(self):
pass
def set_pp(self, val):
type(self).pp = val
</code></pre>
| <python><inheritance><singleton> | 2023-08-13 12:07:31 | 3 | 606 | Tarek Eldeeb |
76,893,252 | 10,849,727 | how can I improve blob detection with python? | <p>So I have this flapping winged robot, where I drew some circles on the wing that I thought would be easy to track. As the wing moves in each frame, some circles may appear blurrier or elliptic compared to other frames.
My goal is to generate a dictionary with keys that represent a circle's unique ID, and a value that is a list of (x,y) coordinates for each frame. So for <code>k</code> circles and <code>n</code> frames:</p>
<pre><code>result = {
1: [(x11,y11),(x12,y12),...,(x1n,y1n)],
2: [(x21,y21),(x22,y22),...,(x2n,y2n)],
...,
k: [(xk1,yk1),(xk2,yk2),...,(xkn,ykn)]
}
</code></pre>
<p>currently, I use <code>cv2.SimpleBlobDetector_create</code>, which does find <em>some</em> of the circles in each frame, and I match between two consecutive circles by measuring the Euclidian distance w.r.t every circle in the previous frame.
Of course, as not all circles are always found, I get these holes in the data that I want to fill in.</p>
<p><strong>How can I fill in these gaps?</strong></p>
<p>While I don't think that the code adds much value, I've added it here as well:</p>
<pre><code>def blob_trajectory(camera_dirname: str,
crop_params_filename='crop_params.pkl',
first_image_name='Img000001.jpg',
photos_sub_dirname='photos',
cropped_dirname='cropped',
**kwargs):
params = cv2.SimpleBlobDetector_Params()
params.minThreshold = kwargs["min_thres"]
params.maxThreshold = kwargs["max_thres"]
params.filterByCircularity = 1
params.minCircularity = kwargs["min_circ"]
params.maxCircularity = kwargs["max_circ"]
params.filterByConvexity = 1
params.minInertiaRatio = kwargs["min_conv"]
params.maxInertiaRatio = kwargs["max_conv"]
params.filterByArea = 1
params.minArea = kwargs["min_area"] # number of pixels
params.maxArea = kwargs["max_area"]
detector = cv2.SimpleBlobDetector_create(params)
images_path = os.path.join(camera_dirname, photos_sub_dirname)
cropped_path = os.path.join(camera_dirname, cropped_dirname)
frame0 = cv2.imread(os.path.join(cropped_path, 'frame_0000.png'), cv2.COLOR_BGR2GRAY)
prev_blobs = detector.detect(frame0)
blob_trajectories = {}
for image_name in os.listdir(images_path)[1:]:
frame = cv2.imread(os.path.join(cropped_path, image_name), cv2.COLOR_BGR2GRAY)
curr_blobs = detector.detect(frame)
for i, kp in enumerate(prev_blobs):
kp.class_id = i
distances = np.zeros((len(prev_blobs), len(curr_blobs)))
for i, kp1 in enumerate(prev_blobs):
for j, kp2 in enumerate(curr_blobs):
distances[i, j] = np.linalg.norm(np.array(kp1.pt) - np.array(kp2.pt))
row_ind, col_ind = scipy.optimize.linear_sum_assignment(distances)
for i, j in zip(row_ind, col_ind):
if distances[i, j] < 10:
curr_blobs[i].class_id = prev_blobs[j].class_id
for kp in prev_blobs:
blob_id = kp.class_id
if blob_id not in blob_trajectories:
blob_trajectories[blob_id] = []
blob_trajectories[blob_id].append(kp.pt)
im_with_blobs = cv2.drawKeypoints(frame, curr_blobs, np.array([]), (255, 0, 0),
cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
plt.imshow(im_with_blobs)
plt.show()
prev_blobs = curr_blobs
return blob_trajectories
</code></pre>
| <python><opencv><computer-vision><object-detection><tracking> | 2023-08-13 12:03:06 | 0 | 688 | Hadar |
76,892,592 | 6,357,916 | Unable to import module in jupyter notebook, but able to import it in the terminal interpreter | <p>I installed <code>yfinance</code> library (from jupyter notebook) and then tried to import it in jupyter notebook. But it did not work. I also tried to uninstall it and install it from terminal. But it still did not work. However I can import other libraries like pandas in . Below is the screen shot of my jupyter notebook:</p>
<p><a href="https://i.sstatic.net/TvuJJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TvuJJ.png" alt="enter image description here" /></a></p>
<p>However, note that I am able to import the same from terminal:</p>
<p><a href="https://i.sstatic.net/9xg0q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9xg0q.png" alt="enter image description here" /></a></p>
<p>What I am missing here?</p>
| <python><windows><jupyter-notebook><cygwin> | 2023-08-13 08:42:57 | 0 | 3,029 | MsA |
76,892,576 | 11,656,213 | Assign user role on signup flask-security-too | <p>I'm new to flask so bear with me. I'm creating a sign-up flow via flask-security-too. How can I assign a user role on sign-up? I'm following the second (SQLAcademy + session) example from <a href="https://flask-security-too.readthedocs.io/en/stable/quickstart.html" rel="nofollow noreferrer">here</a>. I have tried to update</p>
<pre><code>class ExtendedRegisterForm(RegisterForm):
first_name = StringField('First Name', [DataRequired()])
last_name = StringField('Last Name', [DataRequired()])
roles = SelectField('Roles', choices=["user"]) #<- updated here
</code></pre>
<p>and also created a new "register_user.html" which adds a new field for role "user":</p>
<pre><code>{% from "security/_macros.html" import render_field_with_errors, render_field %}
{% include "security/_messages.html" %}
<h1>Registrer</h1>
<form action="{{ url_for_security('register') }}" method="POST" name="register_user_form">
{{ register_user_form.hidden_tag() }}
{{ render_field_with_errors(register_user_form.email) }}
{{ render_field_with_errors(register_user_form.password) }}
{% if register_user_form.password_confirm %}
{{ render_field_with_errors(register_user_form.password_confirm) }}
{% endif %}
{{ render_field_with_errors(register_user_form.first_name) }}
{{ render_field_with_errors(register_user_form.last_name) }}
{{ render_field_with_errors(register_user_form.roles) }} <!-- updated mainly here -->
{{ render_field(register_user_form.submit) }}
</form>
{% include "security/_menu.html" %}
</code></pre>
<p>Models and database looks like as specified in the example above. I have also set</p>
<pre><code>app.config["SECURITY_REGISTERABLE"] = True
app.config["SECURITY_RECOVERABLE"] = True
</code></pre>
<p>And set-up e-mail. However, when I run this, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniforge/base/envs/flask_security_too/lib/python3.10/site-packages/flask/app.py", line 2213, in __call__
return self.wsgi_app(environ, start_response)
File "/opt/homebrew/Caskroom/miniforge/base/envs/flask_security_too/lib/python3.10/site-packages/flask/app.py", line 2193, in wsgi_app
response = self.handle_exception(e)
File "/opt/homebrew/Caskroom/miniforge/base/envs/flask_security_too/lib/python3.10/site-packages/flask/app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "/opt/homebrew/Caskroom/miniforge/base/envs/flask_security_too/lib/python3.10/site-packages/flask/app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/homebrew/Caskroom/miniforge/base/envs/flask_security_too/lib/python3.10/site-packages/flask/app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/homebrew/Caskroom/miniforge/base/envs/flask_security_too/lib/python3.10/site-packages/flask/app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/opt/homebrew/Caskroom/miniforge/base/envs/flask_security_too/lib/python3.10/site-packages/flask_security/decorators.py", line 632, in wrapper
return f(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/flask_security_too/lib/python3.10/site-packages/flask_security/views.py", line 299, in register
user = register_user(form)
File "/opt/homebrew/Caskroom/miniforge/base/envs/flask_security_too/lib/python3.10/site-packages/flask_security/registerable.py", line 46, in register_user
user = _datastore.create_user(**user_model_kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/flask_security_too/lib/python3.10/site-packages/flask_security/datastore.py", line 470, in create_user
kwargs = self._prepare_create_user_args(**kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/flask_security_too/lib/python3.10/site-packages/flask_security/datastore.py", line 214, in _prepare_create_user_args
roles[i] = self.find_role(rn)
TypeError: 'str' object does not support item assignment
</code></pre>
<p>And the role "user" is not assigned. Why?</p>
| <python><flask><flask-security> | 2023-08-13 08:39:17 | 2 | 520 | aleksandereiken |
76,892,271 | 16,363,897 | An integer is required (got type datetime.date) error with read_excel | <p>This is a follow-up to <a href="https://stackoverflow.com/questions/74170324/pandas-not-able-to-read-excel-due-to-error">this question</a></p>
<p>I'm trying to import <a href="https://www.philadelphiafed.org/-/media/frbp/assets/surveys-and-data/survey-of-professional-forecasters/data-files/files/mean_cpi10_level.xlsx" rel="nofollow noreferrer">this excel file</a> in pandas with the following script</p>
<pre><code>url="https://www.philadelphiafed.org/-/media/frbp/assets/surveys-and-data/survey-of-professional-forecasters/data-files/files/mean_cpi10_level.xlsx"
df= pd.read_excel(url, index_col=0)
</code></pre>
<p>but I get this error:</p>
<pre><code>TypeError Traceback (most recent call last)
~\anaconda3\lib\site-packages\openpyxl\descriptors\base.py in _convert(expected_type, value)
54 try:
---> 55 value = expected_type(value)
56 except:
TypeError: an integer is required (got type datetime.date)
</code></pre>
<p>Following the answer to the previous question, I tried:</p>
<pre><code>df = pd.read_excel(url, index_col=0, dtype={'YEAR': object, 'QUARTER': object, "CPI10": object})
</code></pre>
<p>and also (using openpyxl instead of pandas):</p>
<pre><code>from openpyxl import load_workbook
workbook = load_workbook('mean_cpi10_level.xlsx')
</code></pre>
<p>but all resulted in the same error.
What can I do? Thanks?</p>
| <python><pandas><excel> | 2023-08-13 06:59:11 | 1 | 842 | younggotti |
76,892,218 | 584,212 | Running LLama2 on a GeForce 1080 8Gb machine | <p>I am trying to run LLama2 on my server which has mentioned nvidia card. It's a simple hello world case you can find <a href="https://huggingface.co/blog/llama2" rel="nofollow noreferrer">here</a>. However I am constantly running into memory issues:</p>
<pre><code>torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 250.00 MiB (GPU 0; 7.92 GiB total capacity; 7.12 GiB already allocated; 241.62 MiB free; 7.18 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
</code></pre>
<p>I tried</p>
<p>export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128</p>
<p>but same effect. Is there anything I can do?</p>
| <python><pytorch><huggingface><llama> | 2023-08-13 06:40:58 | 1 | 1,079 | wonglik |
76,892,059 | 978,288 | How to remove message about loaded modules | <p>Some months ago I've written a Flask application. I got back to work, but now after start I got a few screens of <code>loaded modules</code>:</p>
<pre><code>formnoreload$ python main.py
...
matplotlib - DEBUG - __init__.<module>() [l. 1449]:
interactive is False
matplotlib - DEBUG - __init__.<module>() [l. 1450]:
platform is linux
matplotlib - DEBUG - __init__.<module>() [l. 1451]:
loaded modules: ['sys', 'builtins', '_frozen_importlib',
[lengthy list continues...]
</code></pre>
<p>Is it possible to remove this information about <code>loaded modules</code>? How to do it?</p>
<p>Thanks in advance.</p>
<p><strong>EDIT</strong></p>
<p>Almost done:</p>
<pre><code>from matplotlib.pyplot import set_loglevel
set_loglevel("warning")
</code></pre>
<p>Most of logging information, about loaded modules, has been turned off.</p>
| <python><matplotlib><flask><debugging> | 2023-08-13 05:37:58 | 1 | 462 | khaz |
76,892,056 | 13,060,649 | Django: Serpy serializer django model JsonField | <p>I am using serpy to serialize my model, so my model has a JSONField containing list of dict. Here is my model:</p>
<pre><code>class Section(models.Model):
title = models.CharField(max_length=255, null=False)
item_count = models.IntegerField(default=0)
# The JSON field
contents = models.JSONField(blank=True, null=True)
</code></pre>
<p>I am trying to use <code>serpy.DictSerializer</code> to serialize the <code>contents</code> field but it results in empty dict <code>{}</code> .</p>
<p>This is my serializer:</p>
<pre><code>class SectionSerializer(serpy.Serializer):
title = serpy.StrField()
item_count = serpy.IntField()
data = serpy.MethodField()
def get_data(self, obj):
return serpy.DictSerializer(obj.contents, many=True).data
</code></pre>
<p>As contents is a list of dicts, I have used <code>many=True</code>, But still it is not helping.</p>
| <python><django><serialization><django-rest-framework> | 2023-08-13 05:35:49 | 1 | 928 | suvodipMondal |
76,892,013 | 9,926,472 | Numpy multiple masks with conditions | <p>Goal:
Given an <code>int</code> numpy array, <code>x</code> of shape <code>d x 1</code>, where the contents of the array are random indices. The range of <code>x</code> is from <code>(0, n)</code>, where n is the max possible index. (<code>n << len(x)</code>). My goal is to get the indices that correspond to each index.</p>
<p>For eg:</p>
<pre><code>n = 3
x = [2, 0, 3, 3, 2]
out = required_fn(x, n)
# out should be [[1], [], [0, 4], [2,3]].
# i.e. for 0, it is at index 1, for 1, there is no occurence, for 2,it occurs at indices 0, 4
</code></pre>
<p>I'm looking for the most efficient implementation that can help me with this. This would be used in a data loader, so efficiency is important.</p>
<p>I tried a stupid attempt by:</p>
<pre><code>x[x == np.array(list(range(n))]
</code></pre>
<p>But this obviously did not work.</p>
<p>Thanks</p>
| <python><arrays><numpy> | 2023-08-13 05:15:30 | 3 | 587 | OlorinIstari |
76,891,965 | 2,195,440 | How can you distinguish between a standard library call, a third-party library call, and an API call from the repository? | <p>I am working on a project where I have to determine whether a particular call or import is:</p>
<ol>
<li>From the standard library of the language (Python) I'm using. (I am already considering to use <code>sys.stdlib_module_names</code> for it)</li>
<li>From a third-party library, or</li>
<li>An API call made to some service from within the repository.</li>
</ol>
<p>Is there an efficient way or tool that could help me quickly differentiate between these types of calls or imports? I'm primarily using Python, but methods for other languages are welcome as well.</p>
<p>I am working on a project where I have to collect a dataset of library calls that are made within within that repository.</p>
<p>I am working on a project wherein I aim to compile a dataset of function calls made within a given repository from Github.</p>
<p>So at first, I download any given python repository from Github.</p>
<p>Then my main objectives are:</p>
<ul>
<li>To extract all function calls made within the target repository.</li>
<li>To gather details of these function calls, including the arguments they use.</li>
<li>For this purpose, I am employing the Python AST (Abstract Syntax Tree) parser to detect and catalogue function calls and their respective arguments.</li>
<li>My entire analysis pipeline is based within a Python script leveraging the AST module.</li>
<li><strong>Now I have to determine which of these function calls originate from within the repository itself.</strong></li>
</ul>
<p>For example, if there is a call</p>
<p>file_b.py</p>
<pre><code>def abc():
....
</code></pre>
<p>file_a.py</p>
<pre><code>import numpy as np
from file_b import abc
....
def foo():
..
x = np.linspace(-math.pi, math.pi, 2000)
y = np.sin(x)
...
..
c = abc()
</code></pre>
<p>I want to only capture <code>abc</code> (as it is defined in that repository) and not the calls from numpy module.</p>
| <python><api-design><code-analysis> | 2023-08-13 04:47:51 | 2 | 3,657 | Exploring |
76,891,904 | 7,487,335 | Why does this ThreadPoolExecutor execute futures way before they are called? | <p>Why does this <code>ThreadPoolExecutor</code> execute <code>futures</code> way before they are called?</p>
<pre class="lang-py prettyprint-override"><code>import concurrent.futures
import time
def sleep_test(order_number):
num_seconds = 0.5
print(f"Order {order_number} - Sleeping {num_seconds} seconds")
time.sleep(num_seconds)
print(f"Order {order_number} - Slept {num_seconds} seconds")
if order_number == 4:
raise Exception("Reached order #4")
def main():
order_numbers = [i for i in range(10_000)]
max_number_of_threads = 2
with concurrent.futures.ThreadPoolExecutor(max_workers=max_number_of_threads) as executor:
futures = []
for order in order_numbers:
futures.append(executor.submit(sleep_test, order_number=order))
for future in futures:
if future.cancelled():
continue
try:
_ = future.result()
except Exception:
print("Caught Exception, stopping all future orders")
executor.shutdown(wait=False, cancel_futures=True)
if __name__ == "__main__":
main()
</code></pre>
<p>Here is a sample execution:</p>
<pre><code>$ python3 thread_pool_test.py
Order 0 - Sleeping 0.5 seconds
Order 1 - Sleeping 0.5 seconds
Order 0 - Slept 0.5 seconds
Order 1 - Slept 0.5 seconds
Order 2 - Sleeping 0.5 seconds
Order 3 - Sleeping 0.5 seconds
Order 2 - Slept 0.5 seconds
Order 4 - Sleeping 0.5 seconds
Order 3 - Slept 0.5 seconds
Order 5 - Sleeping 0.5 seconds
Order 4 - Slept 0.5 seconds
Order 6 - Sleeping 0.5 seconds
Caught Exception, stopping all future orders
Order 5 - Slept 0.5 seconds
Order 4706 - Sleeping 0.5 seconds
Order 6 - Slept 0.5 seconds
Order 4706 - Slept 0.5 seconds
</code></pre>
<p>All of a sudden Order 4706 is called seemingly out of nowhere which doesn't make sense to me. I expect the threads to stop at around Order 5 or 6 which is when the <code>Exception</code> is hit. Sometimes when I run the script it works as expected but other times it calls a function that is thousands of "futures" in the future.</p>
<p>Why is this happening? Can I stop this from happening?</p>
| <python><multithreading><threadpoolexecutor> | 2023-08-13 04:15:19 | 2 | 4,400 | Josh Correia |
76,891,717 | 8,482,467 | Sparse and huge matrix multiplication in pytorch or numpy | <p>I have a scenario where I need to multiply a small size vector <code>a</code> with a huge and highly sparse matrix <code>b</code>. Here's a simplified version of the code:</p>
<pre><code>import numpy as np
B = 32
M = 10000000
a = np.random.rand(B)
b = np.random.rand(B, M)
b = b > 0.9
result = a @ b
</code></pre>
<p>In my actual use case, the <code>b</code> matrix is loaded from a <code>np.memmap</code> file due to its large size. Importantly, <code>b</code> remains unchanged throughout the process, and will be performing the inner product with different vectors <code>a</code> each time, so any pre-process on <code>b</code> to leverage its sparse nature is allowed.</p>
<p>I'm seeking suggestions on how to optimize this matrix multiplication speed. Any insights or code examples would be greatly appreciated.</p>
| <python><numpy><linear-algebra><torch> | 2023-08-13 02:23:54 | 2 | 1,319 | Garvey |
76,891,663 | 814,044 | How to run multiprocess Chroma.from_documents() in Langchain | <p>Can we somehow pass an option to run multiple threads/processes when we call Chroma.from_documents() in Langchain?</p>
<p>I am trying to embed 980 documents (embedding model is mpnet on CUDA), and it take forever.
Specs:
Software: Ubuntu 20.4 (on Win11 WSL2 host), Langchain version: 0.0.253, pyTorch version: 2.0.1+cu118, Chroma Version: 0.4.2, CUDA 11.8
Processor: Intel i9-13900k at 5.4Ghz all 8 P-cores and 4.3Ghz all remaining 16 E-cores.
GPU: RTX 4090 GPU</p>
| <python><embedding><langchain><multiprocessor><chromadb> | 2023-08-13 01:39:04 | 2 | 497 | Paris Char |
76,891,526 | 11,416,654 | Travelling Salesman Problem - Best path to go through all points | <p>I am trying to solve an exercise based on the travelling salesman problem. Basically I am provided with a list of points with their coordinates, like this:</p>
<pre><code>[(523, 832), (676, 218), (731, 739), ..] (a total of 198)
</code></pre>
<p>and a starting one (832,500).</p>
<p>The goal is to find the optimal path to go through all points (starting from the (832,500) one). I have at disposal a service which will check my sequence and at the moment I have managed to get a decent solution (by applying the nearest neighbor algorithm), but it seems not to be the optimal solution. Here is my current code:</p>
<pre><code>from math import sqrt
def calcDistance(start,finish):
dist = sqrt((start[0]-finish[0])*(start[0]-finish[0]) +
(start[1]-finish[1])*(start[1]-finish[1]))
return dist
def calcPath(path):
pathLen = 0
for i in range(1,len(path)):
pathLen+=calcDistance(path[i],path[i-1])
return pathLen
capital = (832, 500)
towns = [(523, 832), (676, 218), (731, 739), (803, 858),
(170, 542), (273, 743), (794, 345), (825, 569), (770, 306),
(168, 476), (198, 361), (798, 352), (604, 958), (700, 235),
(791, 661), (37, 424), (393, 815), (250, 719), (400, 183),
(468, 831), (604, 184), (168, 521), (691, 71), (304, 232),
(800, 642), (708, 241), (683, 223), (726, 257), (279, 252),
(559, 827), (832, 494), (584, 178), (254, 277), (309, 772),
(293, 240), (58, 658), (765, 300), (446, 828), (766, 699),
(407, 819), (818, 405), (626, 192), (828, 449), (758, 291),
(333, 788), (124, 219), (443, 172), (640, 801), (171, 452),
(242, 710), (496, 168), (217, 674), (785, 672), (369, 195),
(486, 168), (821, 416), (206, 654), (503, 832), (288, 756),
(789, 336), (170, 464), (636, 197), (168, 496), (832, 515),
(168, 509), (832, 523), (677, 781), (651, 796), (575, 176),
(478, 168), (831, 469), (391, 186), (735, 265), (529, 169),
(241, 292), (235, 700), (220, 321), (832, 481), (806, 629),
(176, 575), (751, 282), (511, 832), (581, 822), (708, 759),
(777, 317), (410, 180), (180, 411), (382, 189), (694, 230),
(327, 784), (177, 421), (797, 650), (742, 272), (719, 250),
(739, 731), (298, 764), (423, 177), (658, 792), (813, 611),
(667, 213), (257, 727), (178, 583), (616, 189), (342, 208),
(817, 600), (348, 205), (344, 793), (968, 541), (700, 766),
(181, 594), (633, 804), (656, 206), (831, 533), (722, 747),
(759, 708), (188, 615), (416, 822), (820, 590), (169, 529),
(172, 445), (424, 824), (687, 775), (229, 692), (597, 182),
(187, 388), (436, 826), (463, 170), (321, 220), (174, 434),
(567, 826), (224, 686), (210, 338), (608, 814), (190, 381),
(538, 170), (332, 938), (265, 735), (195, 367), (173, 562),
(270, 260), (462, 830), (192, 625), (824, 427), (781, 678),
(599, 817), (669, 786), (359, 199), (328, 216), (183, 401),
(815, 393), (827, 559), (830, 460), (215, 329), (311, 227),
(713, 755), (822, 581), (546, 829), (505, 168), (172, 554),
(748, 721), (421, 37), (184, 604), (317, 778), (286, 246),
(648, 202), (201, 645), (281, 750), (453, 171), (356, 800),
(827, 439), (491, 832), (375, 808), (807, 372), (521, 168),
(246, 286), (482, 832), (804, 365), (809, 622), (197, 637),
(232, 303), (227, 310), (362, 802), (592, 819), (533, 831),
(560, 173), (550, 171), (619, 810), (384, 811), (931, 313),
(811, 384), (168, 488), (773, 690), (781, 323), (204, 349),
(213, 667), (829, 547), (431, 175), (754, 714), (263, 267)]
#We start from the capital
start = capital
#path = ""
alreadyVisitedTowns = []
path = []
path2 = ""
while len(alreadyVisitedTowns) != 199:
#minDistanceForThisStep = {"town": towns[0], "end": calcDistance(start, towns[0]), "indices": 0}
bestTownIndex = 0
bestDistance = 999999
for i,town in enumerate(towns):
if (town not in alreadyVisitedTowns) and calcDistance(start, town)<bestDistance:
bestDistance = calcDistance(start, town)
bestTownIndex = i
path.append(bestTownIndex)
path2 += str(bestTownIndex) + " "
alreadyVisitedTowns.append(towns[bestTownIndex])
#path+= " " +str(minDistanceForThisStep["indices"])
start = towns[bestTownIndex]
print(path2)
print(" ".join(path2.split()[::-1]))
</code></pre>
<p>Result: 4749km</p>
<p>Test with networkx:</p>
<pre><code>import numpy as np
import networkx as nx
from math import sqrt
#from scipy.spatial.distance import euclidean
def calcDistance(start,finish):
dist = sqrt((start[0]-finish[0])*(start[0]-finish[0]) +
(start[1]-finish[1])*(start[1]-finish[1]))
return dist
def calcPath(path):
pathLen = 0
for i in range(1,len(path)):
pathLen+=calcDistance(path[i],path[i-1])
return pathLen
# List of towns as (x, y) coordinates
towns = [(523, 832), (676, 218), (731, 739), (803, 858),
(170, 542), (273, 743), (794, 345), (825, 569), (770, 306),
(168, 476), (198, 361), (798, 352), (604, 958), (700, 235),
(791, 661), (37, 424), (393, 815), (250, 719), (400, 183),
(468, 831), (604, 184), (168, 521), (691, 71), (304, 232),
(800, 642), (708, 241), (683, 223), (726, 257), (279, 252),
(559, 827), (832, 494), (584, 178), (254, 277), (309, 772),
(293, 240), (58, 658), (765, 300), (446, 828), (766, 699),
(407, 819), (818, 405), (626, 192), (828, 449), (758, 291),
(333, 788), (124, 219), (443, 172), (640, 801), (171, 452),
(242, 710), (496, 168), (217, 674), (785, 672), (369, 195),
(486, 168), (821, 416), (206, 654), (503, 832), (288, 756),
(789, 336), (170, 464), (636, 197), (168, 496), (832, 515),
(168, 509), (832, 523), (677, 781), (651, 796), (575, 176),
(478, 168), (831, 469), (391, 186), (735, 265), (529, 169),
(241, 292), (235, 700), (220, 321), (832, 481), (806, 629),
(176, 575), (751, 282), (511, 832), (581, 822), (708, 759),
(777, 317), (410, 180), (180, 411), (382, 189), (694, 230),
(327, 784), (177, 421), (797, 650), (742, 272), (719, 250),
(739, 731), (298, 764), (423, 177), (658, 792), (813, 611),
(667, 213), (257, 727), (178, 583), (616, 189), (342, 208),
(817, 600), (348, 205), (344, 793), (968, 541), (700, 766),
(181, 594), (633, 804), (656, 206), (831, 533), (722, 747),
(759, 708), (188, 615), (416, 822), (820, 590), (169, 529),
(172, 445), (424, 824), (687, 775), (229, 692), (597, 182),
(187, 388), (436, 826), (463, 170), (321, 220), (174, 434),
(567, 826), (224, 686), (210, 338), (608, 814), (190, 381),
(538, 170), (332, 938), (265, 735), (195, 367), (173, 562),
(270, 260), (462, 830), (192, 625), (824, 427), (781, 678),
(599, 817), (669, 786), (359, 199), (328, 216), (183, 401),
(815, 393), (827, 559), (830, 460), (215, 329), (311, 227),
(713, 755), (822, 581), (546, 829), (505, 168), (172, 554),
(748, 721), (421, 37), (184, 604), (317, 778), (286, 246),
(648, 202), (201, 645), (281, 750), (453, 171), (356, 800),
(827, 439), (491, 832), (375, 808), (807, 372), (521, 168),
(246, 286), (482, 832), (804, 365), (809, 622), (197, 637),
(232, 303), (227, 310), (362, 802), (592, 819), (533, 831),
(560, 173), (550, 171), (619, 810), (384, 811), (931, 313),
(811, 384), (168, 488), (773, 690), (781, 323), (204, 349),
(213, 667), (829, 547), (431, 175), (754, 714), (263, 267)]
capital = (832, 500)
towns.append(capital)
num_towns = len(towns)
G = nx.complete_graph(num_towns)
for i in range(num_towns):
for j in range(i + 1, num_towns):
distance = calcDistance(towns[i], towns[j])
G[i][j]['weight'] = distance
G[j][i]['weight'] = distance
optimal_path_indices = list(nx.approximation.traveling_salesman_problem(G, cycle=True))
capital_index = optimal_path_indices.index(num_towns - 1)
optimal_path_indices = optimal_path_indices[capital_index:] + optimal_path_indices[:capital_index + 1]
optimal_path_indices = optimal_path_indices[:-1]
print("Optimal path indices:", optimal_path_indices)
optimal_path_indices = optimal_path_indices[::-1]
finalPath = ""
for el in optimal_path_indices:
finalPath += str(el) + " "
print(finalPath)
#print(calcPath(optimal_path_indices))
</code></pre>
<p>Result: 4725km (slightly better when inverting the path for some reason)
Do you have any proposal on how I can improve my algorithm?</p>
| <python><algorithm><optimization><euclidean-distance><traveling-salesman> | 2023-08-12 23:58:22 | 2 | 823 | Shark44 |
76,891,422 | 21,575,627 | Is hashing tuples an O(n) operation? | <p>Let's say you do the following:</p>
<pre><code>for row in myListofLists:
if tuple(row) in myDictWithTupleKeys:
</code></pre>
<p>in other words, within a loop through the rows of an n x n array, you check to see if a row is contained as a key within a <code>dict</code>. Is this an O(n^2) operation? In other words, is the <code>in</code> operator an O(n) operation here?</p>
| <python><python-3.x> | 2023-08-12 23:07:52 | 0 | 1,279 | user129393192 |
76,891,374 | 2,307,570 | How to make an optional include in a bottle template? | <p>I would like to add an optional include in a bottle template.</p>
<p>Unfortunately this does not work as expected:</p>
<pre class="lang-py prettyprint-override"><code><foo>
% include(required_view)
<%
try:
include(optional_view)
except NameError:
pass
%>
</foo>
</code></pre>
<p>It does indeed make the optional import, but it stops after that.<br>(So <code></foo></code> is missing from the result.)</p>
<p>I would like to allow for the variable <code>optional_view</code> to be undefined.<br>
Always adding <code>'optional_view': None</code> to the context would be annoying.<br>
(Not to mention always adding an empty view.)</p>
| <python><bottle> | 2023-08-12 22:40:36 | 1 | 1,209 | Watchduck |
76,891,148 | 23,512,643 | How to delay an action until the webpage loads | <p>I am using Selenium within R.</p>
<p>I have the following script which searches <a href="https://en.wikipedia.org/wiki/Google_Maps" rel="nofollow noreferrer">Google Maps</a> for all pizza restaurants around a given geographical coordinate - and then keeps scrolling until all restaurants are loaded.</p>
<p>First, I navigate to the starting page:</p>
<pre class="lang-r prettyprint-override"><code>library(RSelenium)
library(wdman)
library(netstat)
selenium()
seleium_object <- selenium(retcommand = T, check = F)
remote_driver <- rsDriver(browser = "chrome", chromever = "114.0.5735.90", verbose = F, port = free_port())
remDr<- remote_driver$client
lat <- 40.7484
lon <- -73.9857
# Create the URL using the paste function
URL <- paste0("https://www.google.com/maps/search/pizza/@", lat, ",", lon, ",17z/data=!3m1!4b1!4m6!2m5!3m4!2s", lat, ",", lon, "!4m2!1d", lon, "!2d", lat, "?entry=ttu")
# Navigate to the URL
remDr$navigate(URL)
Then, I use the following code to keep scrolling until all entries have been loaded:
# Waits 10 seconds for the elements to load before scrolling
elements <- remDr$findElements(using = "css selector", "div.qjESne")
while (TRUE) {
new_elements <- remDr$findElements(using = "css selector", "div.qjESne")
# Pick the last element in the list - this is the one we want to scroll to
last_element <- elements[[length(elements)]]
# Scroll to the last element
remDr$executeScript("arguments[0].scrollIntoView(true);", list(last_element))
Sys.sleep(10)
# Update the elements list
elements <- new_elements
# Check if there are any new elements loaded - the "You've reached the end of the list." message
if (length(remDr$findElements(using = "css selector", "span.HlvSq")) > 0) {
print("No more elements")
break
}
}
</code></pre>
<p>Finally, I use this code to extract the names and addresses of all restaurants:</p>
<pre class="lang-r prettyprint-override"><code>titles <- c()
addresses <- c()
# Check if there are any new elements loaded - the "You've reached the end of the list." message
if (length(remDr$findElements(using = "css selector", "span.HlvSq")) > 0) {
# now we can parse the data since all the elements loaded
for (data in remDr$findElements(using = "css selector", "div.lI9IFe")) {
title <- data$findElement(using = "css selector", "div.qBF1Pd.fontHeadlineSmall")$getElementText()[[1]]
restaurant <- data$findElement(using = "css selector", ".W4Efsd > span:nth-of-type(2)")$getElementText()[[1]]
titles <- c(titles, title)
addresses <- c(addresses, restaurant)
}
# This converts the list of titles and addresses into a dataframe
df <- data.frame(title = titles, address = addresses)
print(df)
break
}
</code></pre>
<p>Instead of using <code>Sys.sleep()</code> in R, I am trying to change my code such that only scrolls (i.e., delays the action) once the previous action has been completed. I am noticing that my existing code often freezes half way through and I suspect that this is because I am trying to load a new page when the existing page is not fully loaded. I think it might be better to somehow delay the action and wait for the page to be fully loaded prior to proceeding.</p>
<p>How might I be able to delay my script and force it to wait for the existing page to load before loading a new page? (e.g., <a href="https://stackoverflow.com/questions/43402237/r-waiting-for-page-to-load-in-rselenium-with-phantomjs">R - Waiting for page to load in RSelenium with PhantomJS</a>)</p>
<p><strong>Note:</strong> I am also open to a Python solution.</p>
<h3>References:</h3>
<ul>
<li><em><a href="https://stackoverflow.com/questions/43402237/r-waiting-for-page-to-load-in-rselenium-with-phantomjs">R - Waiting for page to load in RSelenium with PhantomJS</a></em></li>
<li><em><a href="https://stackoverflow.com/questions/76701351/html-xml-understanding-how-scroll-bars-work">HTML/XML: Understanding How "Scroll Bars" Work</a></em></li>
<li><em><a href="https://stackoverflow.com/questions/27080920/how-to-check-if-page-finished-loading-in-rselenium">How to check if a page finished loading in RSelenium</a></em></li>
<li><em><a href="https://www.geeksforgeeks.org/waiting-for-page-to-load-in-rselenium-in-r/" rel="nofollow noreferrer">Waiting for page to load in RSelenium in R</a></em></li>
</ul>
| <python><html><r><xml><selenium-webdriver> | 2023-08-12 21:14:23 | 1 | 6,799 | stats_noob |
76,891,042 | 15,803,668 | Permission denied when writing content created with PyMuPdf to temporary PDF file | <p>I'm working on a Python script that uses the PyMuPDF library to modify a PDF document and then save the modified content to a temporary PDF file. However, I'm encountering a "Permission denied" error when trying to write the content of the pdf file the temporary file. I've looked into various solutions, but I'm still stuck. Here's the simplified example of my code:</p>
<pre><code>from tempfile import NamedTemporaryFile
import fitz
doc = fitz.open("test.pdf")
with NamedTemporaryFile(suffix=".pdf", delete=False) as temp_file:
temp_pdf_path = temp_file.name
doc.save(temp_pdf_path)
doc.close()
</code></pre>
<p>The error is:</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:\Users\user\PycharmProjects\test\test.py", line 8, in
doc.save(temp_pdf_path) File "C:\Users\user\PycharmProjects\test\venv\lib\site-packages\fitz\fitz.py",
line 4629, in save
return _fitz.Document_save(self, filename, garbage, clean, deflate, deflate_images, deflate_fonts, incremental, ascii, expand,
linear, no_new_id, appearance, pretty, encryption, permissions,
owner_pw, user_pw) RuntimeError: cannot remove file
'C:\Users\user\AppData\Local\Temp\tmp1ivvrwvj.pdf': Permission denied</p>
</blockquote>
| <python><temporary-files><pymupdf> | 2023-08-12 20:35:12 | 0 | 453 | Mazze |
76,890,970 | 1,405,689 | Commands in Python-Jupyter are not visualizing the graph | <p>I have revised the <a href="https://stackoverflow.com/questions/35916976/plot-wont-show-in-jupyter">link1</a> and <a href="https://stackoverflow.com/questions/30739146/matplotlib-does-not-display-interactive-graph-in-ipython">link2</a> and applied what in there is said. But, still those solutions does not resolve my problem.</p>
<p>My python code in jupyter is:</p>
<pre><code>#Librerias necesarias
import pandas as pd
import seaborn as sn #VisualizaciΓ³n
import numpy as np #complemento de pd
import matplotlib.pyplot as plt #graficar
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min
#graficas 3D
%matplotlib notebook
#%pylab
from mpl_toolkits.mplot3d import Axes3D
plt.rcParams['figure.figsize'] = (16, 9)
plt.style.use('ggplot')
#import warnings #advertencias
#warnings.filterwarnings("ignore")
</code></pre>
<p>But, when I run this code piece:</p>
<pre><code># Predicting the clusters
labels = kmeans.predict(X)
# Getting the cluster centers
C = kmeans.cluster_centers_
colores=['red','green','blue']
asignar=[]
for row in labels:
asignar.append(colores[row])
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(X[:, 0], X[:, 4], X[:, 4], c=asignar,s=10)
ax.scatter(C[:, 0], C[:, 4], C[:, 4], marker='*', c=colores, s=10)
</code></pre>
<p>It prompt to me an empty image:</p>
<p><a href="https://i.sstatic.net/XwRVl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XwRVl.png" alt="enter image description here" /></a></p>
<p>And this message: <strong><mpl_toolkits.mplot3d.art3d.Path3DCollection at 0x186e1f865c0></strong></p>
<p>I newbie in python and I do not see the issue. <em>What would it be?</em></p>
| <python><3d><jupyter> | 2023-08-12 20:11:35 | 1 | 2,548 | Another.Chemist |
76,890,666 | 4,958,156 | TF model is accepting any image input size after the first inference, but only 224x224 should be allowed | <p>I'm facing a very strange error which I cannot get my head around.</p>
<p>When I run this code below, it works fine as the model is expecting a 224x224 input image. However, if I change the image to say 224x225, it will give an error sayng that it was expecting a 224x224. This is as expected.</p>
<pre><code>from tensorflow.keras.preprocessing import image
import numpy as np
import tensorflow as tf
#load model from model.h5
model = tf.keras.models.load_model('model.h5')
# Load model weights from weights.h5
model.load_weights('weights.h5')
image_path = 'image1.jpg'
img = image.load_img(image_path, target_size=(224, 224))
img_array = image.img_to_array(img)
img_array = np.expand_dims(img_array, axis=0)
print(img_array.shape)
predictions = model.predict(img_array)
if predictions[0][0] > 0.5:
print("class 1")
else:
print("class 2")
</code></pre>
<p>Now, let's say I ran the code above with a 224x224 input image, so it doesn't give an error. However, if I now subsequently run inference on another image with dimensions, say 224x225, it doesn't give an error! How come that's possible? <strong>It seems as if the model only cares about the first inference being 224x224, and any following inferences can be of any dimension.</strong></p>
<pre><code>
image_path = 'image2.jpg'
img = image.load_img(image_path, target_size=(224, 225)) #THIS SHOULD GIVE ERROR BUT IT DOESNT
img_array = image.img_to_array(img)
img_array = np.expand_dims(img_array, axis=0)
print(img_array.shape)
predictions = model.predict(img_array)
if predictions[0][0] > 0.5:
print("class 1")
else:
print("class 2")
</code></pre>
| <python><tensorflow><keras><deep-learning> | 2023-08-12 18:41:49 | 0 | 1,294 | Programmer |
76,890,658 | 206,253 | How to unit test in pytest a method changing class attributes? | <p>I am using a class attribute in a way that mimics the singleton pattern - it is a pandas dataframe which is manipulated by a lot of methods and which should exist as a single instance. In a way it stands for a table in a relational database.</p>
<p>How to unit test a method which manipulates this class attribute?</p>
<p>In the code snippet below this is the <code>add_product</code> method.</p>
<pre><code>@dataclass
class Item():
id: int
@dataclass(kw_only=True)
class Appliance (Item):
model: str
class Inventory:
appliances = pd.DataFrame()
@classmethod
def add_product(cls, prod):
##add a new row
</code></pre>
| <python><pytest> | 2023-08-12 18:40:01 | 1 | 3,144 | Nick |
76,890,537 | 20,266,647 | Issue with create a user in MLRun/Iguazio via API | <p>I have a problem, how can I create user in MLRun/Iguazio system. I did not see relevant API in MLRun/Iguazio for creating the user. Unfortunately, I do not have relevant sample.</p>
<p>Did you solve the same issue?</p>
| <python><mlrun> | 2023-08-12 18:05:17 | 1 | 1,390 | JIST |
76,890,125 | 20,088,885 | What is the technical name of employer's name in Odoo? | <p>So I'm trying to create an email that will be directly send to the owner of the email, so for example here, my subject here is <code>{{object.employee_id.name}}</code> and I don't know where I get this info.</p>
<p>What I want here is to attach the name of <code>Bailey Boi</code> on the content of the email. What I tried is I activate developer tool</p>
<p><a href="https://i.sstatic.net/NNNtB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NNNtB.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/4xAdk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4xAdk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/7zfeL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7zfeL.png" alt="enter image description here" /></a></p>
<p>since I'm still noob at Oddo, I have two questions.</p>
<p>1.) Where did I get the <code>{{object.employee_id.name}}</code>?</p>
<p>2.) Is there any guide on learning on how can I manipulate the data like the image above, we're using SaaS and I'm trying to learn the <code>studio app</code></p>
<p><strong>Additional Info:</strong></p>
<p><a href="https://i.sstatic.net/Ynphd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ynphd.png" alt="enter image description here" /></a></p>
| <python><odoo><erp> | 2023-08-12 16:11:26 | 1 | 785 | Stykgwar |
76,889,918 | 1,187,621 | Minimize value by manipulating variables in GEKKO+Python | <p>This is an extension of <a href="https://stackoverflow.com/questions/76843179/gekko-python-ipopt-minimize-delta-v">this question</a>.</p>
<p>I want to minimize the delta-V (impulse) by manipulating the launch date, flyby date, and arrival date. These are defined as my manipulation variables as follows:</p>
<pre><code># Manipulating variables and initial guesses
launch = m.MV(value = 2460159.5, lb = 2460159.5, ub = 2460525.5)
launch.STATUS = 1
# flyby = m.MV(value = 2460424.5, lb = 2460342.5, ub = 2460525.5) # Venus/Mars
flyby = m.MV(value = 2460846.5, lb = 2460704.5, ub = 2460908.5) # Jupiter
flyby.STATUS = 1
# arrive = m.MV(value = 2460694.5, lb = 2460480.5, ub = 2460845.5) # Venus/Mars
arrive = m.MV(value = 2461534.5, lb = 2461250.5, ub = 2461658.5) # Jupiter
arrive. STATUS = 1
</code></pre>
<p>I then define my variables as follows:</p>
<pre><code># Variables
r1 = m.Array(m.Var, 3, value = 0, lb = -1e10, ub = 1e10)
v1 = m.Array(m.Var, 3, value = 0, lb = -1e5, ub = 1e5)
r2 = m.Array(m.Var, 3, value = 0, lb = -1e10, ub = 1e10)
v2 = m.Array(m.Var, 3, value = 0, lb = -1e5, ub = 1e5)
r3 = m.Array(m.Var, 3, value = 0, lb = -1e10, ub = 1e10)
v3 = m.Array(m.Var, 3, value = 0, lb = -1e5, ub = 1e5)
l = m.Array(m.Var, 3, value = 0, lb = -1e5, ub = 1e5)
imp = m.Array(m.Var, 3, value = 0, lb = -1e5, ub = 1e5)
</code></pre>
<p>I was following the answer to the previous question, and used the same format for all variables. I had originally defined my variables as follows (based on the previous answer):</p>
<pre><code># Variables
r1 = m.Var(value = np.array([0, 0, 0]), lb = -1e10, ub=1e10, name = "r1")
v1 = m.Var(value = np.array([0, 0, 0]), lb = -1e5, ub = 1e5, name = "v1")
r2 = m.Var(value = np.array([0, 0, 0]), lb = -1e10, ub = 1e10, name = "r2")
v2 = m.Var(value = np.array([0, 0, 0]), lb = -1e5, ub = 1e5, name = "v2")
r3 = m.Var(value = np.array([0, 0, 0]), lb = -1e10, ub = 1e10, name = "r3")
v3 = m.Var(value = np.array([0, 0, 0]), lb = -1e5, ub = 1e5, name = "v3")
l = m.Var(value = np.array([0, 0, 0]), lb = -1e5, ub = 1e5, name = "launch")
imp = m.Array(m.Var, 3, value = 0, lb = -1e5, ub = 1e5)
</code></pre>
<p>While the only variable I am trying to minimize is 'imp', I do intend to use the final values of all the other variables for further analysis, so I want to be able to get the values.</p>
<p>To determine the impulse (delta-V), I am doing a slingshot maneuver, which is defined as follows:</p>
<pre><code>def slingshot():
# Dates
date_launchE = Time(str(launch.value), format="jd", scale="utc").tdb
date_flyby = Time(str(flyby.value), format="jd", scale="utc").tdb # Mars
date_arrivalE = Time(str(arrive.value), format="jd", scale="utc").tdb
venus = Ephem.from_body(Venus, time_range(date_launchE, end=date_arrivalE, periods=500))
earth = Ephem.from_body(Earth, time_range(date_launchE, end=date_arrivalE, periods=500))
r_e0, v_e0 = earth.rv(date_launchE)
ss_e0 = Orbit.from_ephem(Sun, earth, date_launchE)
ss_fly = Orbit.from_ephem(Sun, venus, date_flyby)
# Lambert solution to get to flyby planet
man_launch = Maneuver.lambert(ss_e0, ss_fly)
# Cruise to flyby planet
cruise1 = ss_e0.apply_maneuver(man_launch)
cruise1_end = cruise1.propagate(date_flyby)
ss_earrive = Orbit.from_ephem(Sun, earth, date_arrivalE)
# Lambert solution to the flyby
man_flyby = Maneuver.lambert(cruise1_end, ss_earrive)
imp_a, imp_b = man_flyby.impulses
# Apply the maneuver
cruise2, ss_etarget = cruise1_end.apply_maneuver(man_flyby, intermediate=True)
# Propagate the transfer orbit until return to Earth
cruise2_end = cruise2.propagate(date_arrivalE)
ss_target = Orbit.from_ephem(Sun, earth, date_arrivalE)
final = Ephem.from_orbit(cruise2_end, date_arrivalE)
r_final, v_final = final.rv(date_arrivalE)
r1, v1 = Ephem.from_orbit(ss_e0, date_launchE).rv(date_launchE)
r2, v2 = Ephem.from_orbit(ss_fly, date_flyby).rv(date_flyby)
r3, v3 = Ephem.from_orbit(ss_target, date_arrivalE).rv(date_arrivalE)
return r1[-1].value, v1[-1].value, r2[-1].value, v2[-1].value, r3[-1].value, v3[-1].value, v_final[-1].value, imp_a[-1].value
</code></pre>
<p>However, when I call the function, no optimization takes place, and the return value just ends up being based on the initial guesses for my dates.</p>
<pre><code># Slingshot maneuver
r1, v1, r2, v2, r3, v3, v_final, imp = slingshot()
</code></pre>
<p>From the previous question, I see that I was informed not to change the variables within the function. I recognize that I am not passing anything into my function, but even if I do pass the manipulation variables, the result is the same.</p>
<p>So, my question is then:</p>
<p><strong>How do I use my function to minimize the impulse by manipulating the dates?</strong></p>
<p>I based my initial problem with the function on <a href="https://stackoverflow.com/questions/76469795/does-anyone-see-where-the-error-in-the-following-gekko-ipopt-nonlinear-optimizat">a different question I had posted</a>. I thought I had modified that GEKKO/IPOPT problem appropriately for this problem, but apparently I did not.</p>
| <python><gekko><ipopt> | 2023-08-12 15:14:54 | 1 | 437 | pbhuter |
76,889,837 | 19,400,931 | Asyncpraw basic reddit API example throws error right after execution | <p>I just got into trying to make a bot using the AsyncPraw Reddit API wrapper. I copy-pasted the <a href="https://asyncpraw.readthedocs.io/en/stable/getting_started/quick_start.html" rel="nofollow noreferrer">code example in the documentation</a> and i can't get it to run without getting atleast a warning.</p>
<p>by trying to execute this code:</p>
<pre><code>import asyncpraw
import asyncio
import auth
from time import sleep
async def amain():
reddit = asyncpraw.Reddit(
client_id=auth.CLIENT_ID,
client_secret=auth.CLIENT_SECRET,
user_agent="Example bot",
password=auth.PASSWORD,
username=auth.USERNAME,
ratelimit_seconds=700
)
print(reddit.read_only)
# Output: True
# continued from code above
subreddit = await reddit.subreddit("askreddit")
async for submission in subreddit.top(limit=10):
print(submission.title)
asyncio.run(amain())
</code></pre>
<p>Everything works, the submission titles get printed, but then, this happens:</p>
<pre><code>
> Unclosed client session client_session: <aiohttp.client.ClientSession
> object at 0x000001D53CA8D6C0> Unclosed connector connections:
> ['[(<aiohttp.client_proto.ResponseHandler object at
> 0x000001D53C246FE0>, 116575.906)]'] connector:
> <aiohttp.connector.TCPConnector object at 0x000001D53CA8D7E0> Fatal
> error on SSL transport protocol: <asyncio.sslproto.SSLProtocol object
> at 0x000001D53CA8EB30> transport: <_ProactorSocketTransport fd=-1
> read=<_OverlappedFuture cancelled>>
</code></pre>
<blockquote>
<p>...</p>
</blockquote>
<pre><code>
> RuntimeError: Event loop is closed
</code></pre>
<p>tried to fix by replacing the <code>asyncio.run(amain())</code> with:</p>
<pre><code>loop = asyncio.get_event_loop()
loop.run_until_complete(amain())
</code></pre>
<p>and this fixes the <code>RuntimeError: Event loop is closed</code>, but the <code>Unclosed client session client_session: <aiohttp.client.ClientSession</code>. and all the things that follow it remain.</p>
<p>and on top of that, now i get a</p>
<blockquote>
<p>DeprecationWarning: There is no current event loop
loop = asyncio.get_event_loop()</p>
</blockquote>
| <python><asynchronous><async-await><praw><asyncpraw> | 2023-08-12 14:55:40 | 1 | 313 | Patientes |
76,889,790 | 557,937 | quickly check that Python3 integer fits into C long or int | <p>In Python2's C API there was <code>long PyInt_AS_LONG()</code> function which allowed a very quick creation of a C <code>long</code> from a Python2 <code>int</code>.</p>
<p>Is there a quick way to check, using Python3 C API, that an <code>int</code> (which is actually an arbitrary precision integer) fits into C <code>long</code>?</p>
<p>One can call <code>PyLong_AsLong()</code>, but this appears to do unnecessary work. There ought to be a function to get the size of the memory block used to store the number, but I can't find anything in the docs.</p>
<p>This has to be done in C, so I can't directly use Python arithmetic comparison. Although I can, in principle, call <a href="https://docs.python.org/3.10/c-api/object.html#c.PyObject_RichCompare" rel="nofollow noreferrer">PyObject_RichCompare()</a>, it looks like an overkill.</p>
| <python><python-3.x><python-c-api> | 2023-08-12 14:43:43 | 1 | 377 | Dima Pasechnik |
76,889,779 | 125,244 | Dash hello World doesn't show anything | <p>I'm trying to get started with Dash but even the hello World example doesn't show up on two different laptops under Windows 11 started from within Visual Studio 2022</p>
<p>"my" code</p>
<pre><code># https://dash.plotly.com/tutorial
from dash import Dash, html
app = Dash(__name__)
app.layout = html.Div([
html.Div(children='Hello World')
])
if __name__ == '__main__':
app.run()
</code></pre>
<p>If I 'Start without debugging' the following appears in the command window
<a href="https://i.sstatic.net/AiAiU.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AiAiU.jpg" alt="app started without debugging" /></a></p>
<p>If I 'Start with debugging' I get <a href="https://i.sstatic.net/Pcr0h.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pcr0h.jpg" alt="enter image description here" /></a></p>
<p>and if I start with <code>app.run(debug=True)</code> I don't get any further than
<a href="https://i.sstatic.net/rIYg0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rIYg0.jpg" alt="enter image description here" /></a></p>
<p>Examples I found that should display a figure in a browser but those do not work either.</p>
<p>I also tried</p>
<pre><code>if __name__ == '__main__':
app.run(host="127.0.0.1", port=5000)
</code></pre>
<p>What am I missing that makes that I don't see any output?</p>
| <python><plotly-dash><visual-studio-2022> | 2023-08-12 14:40:29 | 2 | 1,110 | SoftwareTester |
76,889,721 | 3,809,375 | How can I get the key without modifiers on keyPressEvent in Qt? | <p>Consider this PySide 6 example:</p>
<pre><code>from PySide6.QtCore import * # noqa
from PySide6.QtGui import * # noqa
from PySide6.QtWidgets import * # noqa
class TableWidget(QTableWidget):
def __init__(self):
super().__init__()
self.setColumnCount(2)
self.setHorizontalHeaderLabels(["Key", "Value"])
self.setColumnWidth(1, 800)
self._sequence_format = QKeySequence.PortableText
self._modifier_order = [
self.modifier_to_string(modifier)
for modifier in (Qt.ControlModifier, Qt.AltModifier, Qt.ShiftModifier)
]
def modifier_to_string(self, modifier):
return QKeySequence(modifier.value).toString(self._sequence_format).lower()
def key_to_string(self, key):
if key in (
Qt.Key_Shift,
Qt.Key_Control,
Qt.Key_Alt,
Qt.Key_Meta,
Qt.Key_AltGr,
):
return None
else:
return QKeySequence(key).toString(self._sequence_format).lower()
def shortcut_from_event(self, event):
key = event.key()
modifiers = event.modifiers()
vk = event.nativeVirtualKey()
key_string = self.key_to_string(key)
vk_string = self.key_to_string(vk)
modifier_strings = tuple(
self.modifier_to_string(modifier) for modifier in modifiers
)
shortcut_tpl = (key_string, modifier_strings)
shortcut_lst = []
for modifier_string in self._modifier_order:
if modifier_string in shortcut_tpl[1]:
shortcut_lst.append(modifier_string)
shortcut_lst.append(shortcut_tpl[0])
if None in shortcut_lst:
shortcut = None # noqa
else:
shortcut = "".join(shortcut_lst) # noqa
return {
"shortcut": shortcut,
"key": key,
"modifiers": modifiers,
"vk": vk,
"key_string": key_string,
"vk_string": vk_string,
"modifier_strings": modifier_strings,
}
def keyPressEvent(self, event):
table = self
item = self.shortcut_from_event(event)
if item:
table.clearContents()
table.setRowCount(0)
row_position = 0
for k, v in sorted(item.items(), reverse=True):
table.insertRow(row_position)
table.setItem(row_position, 0, QTableWidgetItem(k))
table.setItem(row_position, 1, QTableWidgetItem(str(v)))
# table.resizeColumnsToContents()
# return super().keyPressEvent(event)
if __name__ == "__main__":
import sys
app = QApplication(sys.argv)
w = TableWidget()
w.resize(800, 600)
w.show()
sys.exit(app.exec())
</code></pre>
<p>I'd like to know: How can I create a shortcut string using the modifiers that follows the modifier_order and the unaffected key string? For instance, right now:</p>
<ul>
<li><p>If I press key <kbd>+</kbd></p>
<p><a href="https://i.sstatic.net/xSLyH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xSLyH.png" alt="Enter image description here" /></a></p>
</li>
<li><p>If I press <kbd>Shift</kbd> + <kbd>+</kbd></p>
<p><a href="https://i.sstatic.net/uccWJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uccWJ.png" alt="Enter image description here" /></a></p>
</li>
<li><p>If I press <kbd>AltGr</kbd> + <kbd>+</kbd></p>
<p><a href="https://i.sstatic.net/jD6Ua.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jD6Ua.png" alt="Enter image description here" /></a></p>
</li>
</ul>
<p>But I'd like to modify the code so the shortcut created will be <kbd>+</kbd>, <kbd>Shift</kbd> + <kbd>plus</kbd> and <kbd>Ctrl</kbd> + <kbd>Alt</kbd> + <kbd>plus</kbd>, respectively.</p>
<p>How can I achieve this behaviour? Similar to all keys, I'd like to get the string of the unaffected key without any modifiers on top.</p>
| <python><qt><key><keyboard-shortcuts><pyside6> | 2023-08-12 14:24:33 | 1 | 9,975 | BPL |
76,889,691 | 15,100,030 | Django tenants can't delete record from a shared model that has FK in tenant from public schema | <p>I Have a custom user model shared between tenants and public</p>
<pre class="lang-py prettyprint-override"><code>
HAS_MULTI_TYPE_TENANTS = True
MULTI_TYPE_DATABASE_FIELD = 'type'
TENANT_TYPES = {
"public": {
"APPS": [
'django_tenants',
'clients',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'rest_framework.authtoken',
'djoser',
"corsheaders",
'accounts',
],
"URLCONF": "server.urls_public",
},
"menu": {
"APPS": [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework.authtoken',
'accounts',
'products',
],
"URLCONF": "server.urls_menu",
},
"full-version": {
"APPS": [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework.authtoken',
'accounts',
'products',
'order',
],
"URLCONF": "server.urls_full",
}
}
INSTALLED_APPS = []
for schema in TENANT_TYPES:
INSTALLED_APPS += [app for app in TENANT_TYPES[schema]
["APPS"] if app not in INSTALLED_APPS]
</code></pre>
<p>from the full-version order app, there is an FK to the user model
that restrict me to delete any user from the public schema</p>
<pre class="lang-py prettyprint-override"><code>
class Address(models.Model):
user = models.ForeignKey(
User, on_delete=models.SET_NULL, null=True, blank=True)
</code></pre>
<p>also, the issue appears in the menu type cuz the order models do not exist yet</p>
<pre class="lang-bash prettyprint-override"><code>
django.db.utils.ProgrammingError: relation "order_address" does not exist
LINE 1: ...."user_type", "accounts_user"."phone_number" FROM "order_add...
</code></pre>
<p>Is any work around or do I have to create an empty model for each type?</p>
| <python><django><postgresql><django-tenants> | 2023-08-12 14:16:48 | 1 | 698 | Elabbasy00 |
76,889,502 | 188,159 | How to scale a path element in an SVG file with Python | <p>My Inkscape-made SVG contains a path with a unique ID that I want to scale around its center. Here's what it looks like:</p>
<p><a href="https://i.sstatic.net/yOKsA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yOKsA.png" alt="source svg" /></a></p>
<p>I know the id of the element, so I used <code>svgutils</code> to scale it:</p>
<pre><code>import svgutils.transform as sg
fig = sg.fromfile('smile.svg')
mouth = fig.find_id('path1069')
factor = 1.25
mouth.scale(factor)
fig.save('smile-fail.svg')
</code></pre>
<p>The result scales from the top-left corner of the entire SVG canvas and it's not great:</p>
<p><a href="https://i.sstatic.net/OlLZF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlLZF.png" alt="failed scaling" /></a></p>
<p>Instead it should look like this when I scale it around the center by holding SHIFT and CTRL while scaling in Inkscape:</p>
<p><a href="https://i.sstatic.net/2XC0I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2XC0I.png" alt="intended scaling Inkscape sample" /></a></p>
<p>The SVG in question:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="200"
height="200"
viewBox="0 0 200 200"
version="1.1"
id="svg5"
inkscape:version="1.2.2 (732a01da63, 2022-12-09)"
sodipodi:docname="smile.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<sodipodi:namedview
id="namedview7"
pagecolor="#ffffff"
bordercolor="#eeeeee"
borderopacity="1"
inkscape:showpageshadow="0"
inkscape:pageopacity="0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#ffffff"
inkscape:document-units="px"
showgrid="false"
inkscape:zoom="2"
inkscape:cx="125.5"
inkscape:cy="87"
inkscape:current-layer="layer1" />
<defs
id="defs2">
<pattern
inkscape:collect="always"
patternUnits="userSpaceOnUse"
width="2"
height="2"
patternTransform="translate(0,0) scale(10,10)"
id="Checkerboard"
inkscape:stockid="Checkerboard"
inkscape:isstock="true">
<rect
style="fill:black;stroke:none"
x="0"
y="0"
width="1"
height="1"
id="rect1912" />
<rect
style="fill:black;stroke:none"
x="1"
y="1"
width="1"
height="1"
id="rect1914" />
</pattern>
</defs>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1">
<rect
style="fill:#aca694;fill-opacity:1;"
id="rect403"
width="200"
height="100"
x="-6.357829e-07"
y="99.999367" />
<circle
style="fill:#d1c7b4;fill-opacity:1;"
id="path241"
cx="97.803673"
cy="94.830917"
r="65.697899" />
<circle
style="fill:url(#Checkerboard);fill-opacity:1;opacity:0.1"
id="circle2820"
cx="97.803673"
cy="94.830917"
r="65.697899" />
<circle
style="fill:#fff7e7;fill-opacity:1"
id="path1011"
cx="77.145447"
cy="70.418861"
r="14.504185" />
<circle
style="fill:#fff7e7;fill-opacity:1"
id="circle1013"
cx="119.08147"
cy="68.001495"
r="16.921549" />
<path
style="fill:#fff7e7;fill-opacity:1"
d="m 80,100 40,5 c 0,0 3.69099,14.51519 -20,15 -23.690996,0.4848 -20,-20 -20,-20 z"
id="path1069"
sodipodi:nodetypes="cczc" />
<circle
style="fill:#292827;fill-opacity:1"
id="circle1512"
cx="79.523659"
cy="70.418861"
r="7.3695707" />
<circle
style="fill:#292827;fill-opacity:1"
id="circle1514"
cx="118.02084"
cy="67.892021"
r="9.8964138" />
</g>
</svg></code></pre>
</div>
</div>
</p>
| <python><svg><scale> | 2023-08-12 13:27:01 | 1 | 9,813 | qubodup |
76,889,444 | 206,253 | Replacing global variables in python with a singleton using an abstract class | <p>I have a pandas dataframe which is defined as a variable in a module and is accessed (and modified) by various functions within the module. This is not a good design for various reasons including testability.</p>
<p>I would like to replace this module-level variable with some sort of singleton. There are a number of ways to implement them in Python. A common one is to use metaclass which limits the number of instances created to one. There also many others listed, among others, <a href="https://stackoverflow.com/questions/6760685/creating-a-singleton-in-python">here</a>.</p>
<p>I wonder why not use a class attribute of an abstract class which cannot be instantiated at all?</p>
<p>I have tested this by defining a counter within an abstract class and incrementing it outside of the base class. It works.</p>
<p>I am concerned that I haven't seen this used. So my question is - why is this not being used as a simple implementation of the singleton pattern?</p>
<p>EDIT 1: I forgot to add an abstractmethod to the class - now it is added. The question of the post still stays.</p>
<p>EDIT 2: One of the reasons why I am asking this is because one (weak) implementation of the singleton pattern is via a class attribute. The idea being that this attribute exists in the namespace of the class and thus is a "singleton". However, class attributes if accessed via an instance, essentially, mutate into instance attributes. This issue is avoided if abstract classes are used as no instances can be created thus the class attribute can exist as a true singleton.</p>
<pre><code>from abc import ABC, abstractmethod
class Test(ABC):
counter = 0
@abstractmethod
def some_method():
pass
Test.counter += 1
</code></pre>
| <python><global-variables><singleton> | 2023-08-12 13:14:03 | 0 | 3,144 | Nick |
76,888,913 | 4,208,742 | Flask SQLAlchemy "Lost connection to server during query" | <p>I already stepped through these StackOverflow posts, and my issue persists.</p>
<ul>
<li><p><a href="https://stackoverflow.com/questions/29755228/sqlalchemy-mysql-lost-connection-to-mysql-server-during-query">SQLAlchemy/MySQL Lost connection to MySQL server during query</a></p>
</li>
<li><p><a href="https://stackoverflow.com/questions/61665212/periodic-lost-connection-to-mysql-server-during-query-after-dockerizing-flask">Periodic "Lost connection to MySQL server during query" after Dockerizing Flask Web App</a></p>
</li>
<li><p><a href="https://stackoverflow.com/questions/42163359/flask-sqlalchemy-2013-lost-connection">Flask SQLAlchemy - 2013 Lost Connection</a></p>
</li>
<li><p><a href="https://stackoverflow.com/questions/10210080/how-to-disable-sqlalchemy-caching">How to disable SQLAlchemy caching?</a></p>
</li>
<li><p><a href="https://stackoverflow.com/questions/12108913/how-to-avoid-caching-in-sqlalchemy">How to avoid caching in sqlalchemy?</a></p>
</li>
<li><p><a href="https://stackoverflow.com/questions/4285474/how-to-disable-caching-correctly-in-sqlalchemy-orm-session">How to disable caching correctly in Sqlalchemy orm session?</a></p>
</li>
</ul>
<p>I am getting <strong>Lost connection to server during query</strong></p>
<p>Here is my Flask <code>__init__.py</code> file.</p>
<pre><code>from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import func, exc
from flask_login import UserMixin
db = SQLAlchemy()
class users(db.Model, UserMixin):
id = db.Column(db.Integer, nullable=False, unique=True, primary_key=True)
email = db.Column(db.String(100), nullable=False, unique=True)
def app():
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = "mariadb+mariadbconnector://serviceid:itsasecret@ec2-1-2-3-4.compute-1.amazonaws.com:3306/mytable"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config['SQLALCHEMY_POOL_SIZE'] = 100
app.config['SQLALCHEMY_POOL_RECYCLE'] = 30
app.config['SQLALCHEMY_POOL_TIMEOUT'] = 30
app.config['SQLALCHEMY_POOL_PRE_PING'] = True
app.config["SQLALCHEMY_ECHO"] = True
app.config["SQLALCHEMY_RECORD_QUERIES"] = True
db.init_app(app)
with app.app_context():
db.create_all()
data = users.query.filter(users.email.ilike('john.doe@example.com')).first()
db.session.close()
db.session.flush()
db.session.expire_all()
time.sleep(60)
data = users.query.filter(users.email.ilike('john.doe@example.com')).first()
db.session.close()
db.session.flush()
db.session.expire_all()
return app
</code></pre>
<p>If I set <code>time.sleep(59)</code> or less, the issue does not occur.</p>
<p>If I set <code>time.sleep(60)</code> or more, the issue occurs.</p>
<p>Thus it appears to be some 60 second MariaDB timeout setting. Here are my MariaDB 60 second timeouts.</p>
<pre><code>[ec2-user@ip-172-31-80-56 ~]$ sudo docker exec mariadb mysql -e "SHOW SESSION VARIABLES LIKE '%timeout%'" | grep 60
net_write_timeout 60
slave_net_timeout 60
thread_pool_idle_timeout 60
[ec2-user@ip-172-31-80-56 ~]$ sudo docker exec mariadb mysql -e "SHOW GLOBAL VARIABLES LIKE '%timeout%'" | grep 60
net_write_timeout 60
slave_net_timeout 60
thread_pool_idle_timeout 60
</code></pre>
<p>Here is my console output.</p>
<pre><code>2023-08-12 05:17:46,349 INFO sqlalchemy.engine.Engine SELECT DATABASE()
2023-08-12 05:17:46,350 INFO sqlalchemy.engine.Engine [raw sql] ()
2023-08-12 05:17:46,558 INFO sqlalchemy.engine.Engine SELECT @@sql_mode
2023-08-12 05:17:46,559 INFO sqlalchemy.engine.Engine [raw sql] ()
2023-08-12 05:17:46,660 INFO sqlalchemy.engine.Engine SELECT @@lower_case_table_names
2023-08-12 05:17:46,661 INFO sqlalchemy.engine.Engine [raw sql] ()
2023-08-12 05:17:46,867 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-08-12 05:17:46,869 INFO sqlalchemy.engine.Engine SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = ? AND table_name = ?
2023-08-12 05:17:46,870 INFO sqlalchemy.engine.Engine [generated in 0.00112s] ('mytable','users')
2023-08-12 05:17:46,968 INFO sqlalchemy.engine.Engine COMMIT
2023-08-12 05:17:47,178 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-08-12 05:17:47,186 INFO sqlalchemy.engine.Engine SELECT users.id AS users_id, users.email AS users_email FROM users WHERE lower (users.email) LIKE lower (?) LIMIT ?
2023-08-12 05:17:47,188 INFO sqlalchemy.engine.Engine [generated in 0.00224s] ('john.doe@example.com', 1)
2023-08-12 05:17:47,278 INFO sqlalchemy.engine.Engine ROLLBACK
time.sleep(60) here
2023-08-12 05:18:48,484 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-08-12 05:18:48,486 INFO sqlalchemy.engine.Engine SELECT users.id AS users_id, users.email AS users_email FROM users WHERE lower (users.email) LIKE lower (?) LIMIT ?
2023-08-12 05:18:48,491 INFO sqlalchemy.engine.Engine [cached since 61.31s ago] ('john.doe@example.com', 1)
2023-08-12 05:18:48,499 INFO sqlalchemy.pool.impl.QueuePool Invalidate connection <mariadb.connection connected to 'ec2-107-22-51-98.compute-1.amazonaws.com' at 0000023778900830> (reason: InterfaceError:Lost connection to server during query)
Traceback (most recent call last):
mariadb.InterfaceError: Lost connection to server during query
</code></pre>
<p>I am not sure if this is relevant but I do see SQLAlchemy cache, which seems unexpected to me since I called <code>db.session.close()</code> and <code>db.session.flush()</code> and <code>db.session.expire_all()</code>.</p>
<pre><code>2023-08-12 05:18:48,491 INFO sqlalchemy.engine.Engine [cached since 61.31s ago] ('john.doe@example.com', 1)
</code></pre>
| <python><flask><sqlalchemy><mariadb> | 2023-08-12 10:48:31 | 0 | 673 | JeremyCanfield |
76,888,773 | 275,002 | How do I get balance sheet data from Alpha Vantage in JSON format? | <p>I tried the below code but it keeps returning a tuple. I know I can use the <code>requests</code> library but was willing to use the library. The library I am using is given <a href="https://github.com/RomelTorres/alpha_vantage/" rel="nofollow noreferrer">here</a>.</p>
<pre><code>from config import API_KEY
from alpha_vantage.fundamentaldata import FundamentalData
from alpha_vantage.alphavantage import AlphaVantage
if __name__ == '__main__':
fd = FundamentalData(key=API_KEY, output_format='json')
bs = fd.get_balance_sheet_annual(symbol='IBM')
print(bs)
</code></pre>
<p>The output is:</p>
<pre><code>( fiscalDateEnding ... commonStockSharesOutstanding
date ...
1970-01-01 00:00:00.000000000 2022-12-31 ... 906091977
1970-01-01 00:00:00.000000001 2021-12-31 ... 898068600
1970-01-01 00:00:00.000000002 2020-12-31 ... 892653424
1970-01-01 00:00:00.000000003 2019-12-31 ... 887110455
1970-01-01 00:00:00.000000004 2018-12-31 ... 892479411
[5 rows x 38 columns], 'IBM')
</code></pre>
| <python><alpha-vantage> | 2023-08-12 10:09:04 | 2 | 15,089 | Volatil3 |
76,888,717 | 7,195,178 | Error with UMAP: "ufunc 'correct_alternative_cosine' did not contain a loop with signature matching types numpy.dtype[float32]" | <p>I'm trying to use UMAP for dimensionality reduction on some embeddings. However, I encounter the following error when my dataset has more than 5k rows:</p>
<blockquote>
<p>ufunc 'correct_alternative_cosine' did not contain a loop with
signature matching types numpy.dtype[float32]</p>
</blockquote>
<p>below is my code</p>
<pre><code>import numpy as np
import pandas as pd
import umap
import hdbscan
embeddings = my_embedder.encode(
data_df.normalization.values, show_progress_bar=False
)
umap_embeddings = umap.UMAP(
n_neighbors=np.min([5, data_df.shape[0]]),
n_components=3,
metric='cosine',
random_state=17
).fit_transform(embeddings)
</code></pre>
<p>Library versions:</p>
<pre><code>numpy: 1.24.4
umap-learn: 0.5.3
pandas: 1.5.3
hdbscan: 0.8.33
numba: 0.55.1
</code></pre>
<p>I even tried downgrading version of numpy to <em>1.20.3</em> but that too didn't work.</p>
<p>I am using poetry for dependency management.</p>
| <python><pandas><numpy><hdbscan> | 2023-08-12 09:52:30 | 0 | 464 | Harvindar Singh Garcha |
76,888,669 | 10,035,190 | 401 Unauthorized from https://test.pypi.org/legacy/ | <p>I am using this cmd on windows to upload my package in testpypi <code>twine upload -r testpypi dist/*</code> but it's showing this error. so how to upload in testpypi and pypi when 2FA is enable ?</p>
<pre><code>Uploading mypackage-0.1.0-py3-none-any.whl
100% ββββββββββββββββββββββββββββββββββββββββ 8.5/8.5 kB β’ 00:00 β’ ?
WARNING Error during upload. Retry with the --verbose option for more details.
ERROR HTTPError: 401 Unauthorized from https://test.pypi.org/legacy/
User has two factor auth enabled, an API Token or Trusted Publisher must be used to upload in place of password.
</code></pre>
| <python><pypi><twine> | 2023-08-12 09:42:37 | 1 | 930 | zircon |
76,888,663 | 10,666,991 | Get the positive score in a classification task by using a generative model | <p>I'm attempting to utilize a generative model (Llama2) for a binary classification task and aim to obtain the positive score, which represents the confidence level for the positive label.</p>
<p>I tried to use <code>compute_transition_scores</code> but not sure how can I get the confidence between 0-1 correctly.</p>
<p>Here is my current code:</p>
<pre><code>model = AutoModelForCausalLM.from_pretrained(
peft_config.base_model_name_or_path,
# quantization_config=bnb_config,
torch_dtype='auto',
device_map='auto',
offload_folder="offload", offload_state_dict = True
)
pos_scores = []
input_ids = tokenizer(test_sample, return_tensors="pt").input_ids
tokens_for_summary = 1
output_tokens = input_ids.shape[1] + tokens_for_summary
outputs = model.generate(inputs=input_ids, do_sample=False, max_length=output_tokens, pad_token_id=tokenizer.eos_token_id,
output_scores=True, return_dict_in_generate=True)
score = float(torch.exp(model.compute_transition_scores(outputs.sequences, outputs.scores)).cpu())
if pred_label == 1:
pos_scores.append(score)
elif pred_label == 0:
pos_scores.append(-1 * score) # reverse the sign of all samples for which the prediction was 0.
</code></pre>
<p>However, I'm obtaining high values. I've considered using the sigmoid function, but I'm not entirely certain if this is the correct approach.</p>
<p>How should I do that?
Thank you!</p>
| <python><nlp><huggingface-transformers><large-language-model> | 2023-08-12 09:39:02 | 0 | 781 | Ofir |
76,888,461 | 9,196,760 | Replicate Fabric js Hue Rotation of an image to Python using OpenCv and numpy | <p>I am currently working with image filtering and need to adjust the hue of an image using Python. I've been using Fabric.js to perform hue rotation, and the results are satisfactory, but the execution speed is very slow. So I want to replicate some of the image filtering functionality in Python. However, when attempting to achieve the same hue rotation effect in Python, I've encountered challenges that have led to discrepancies between the Python output and the Fabric.js output.</p>
<p>Here is a simplified version of my Python code.</p>
<pre><code>def adjust_hue(image, hue_value):
"""
changes the Hue of the image using the hue_value parameter and returns the adjusted image.
The hue_value must be in range (-1, 1).
"""
# Split the input image into the color channels (BGR) and alpha channel
bgr_channels = image[:, :, :3]
alpha_channel = image[:, :, 3]
hue_value = (hue_value * 180) / 2
# Convert BGR image to HSV color space
hsv_image = cv2.cvtColor(bgr_channels, cv2.COLOR_BGR2HSV)
# Adjust the hue of the HSV image
hsv_image[:, :, 0] = (hsv_image[:, :, 0] + hue_value) % 180
# Convert the adjusted HSV image back to BGR color space
bgr_adjusted = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR)
# Merge the adjusted BGR channels with the original alpha channel
result_image = np.dstack((bgr_adjusted, alpha_channel))
return result_image
# ......
# thats how I am calling the function
rotation = 0.33333333333333333333333333333333 # Same as 60 in range -180 to 180
filtered_image = adjust_hue(image, rotation)
cv2.imwrite('merged.png', filtered_image)
</code></pre>
<p>As Fabric.js expects the hue in the range (-1, 1), so I kept The range of the hue_value as same as fabic.js. I am converting from (-1, 1) to (-180, 180).
Of course, I tried this <code>hue_value = (hue_value * 180)</code> formula to convert from (-1, 1) to (-180, 180), but don't know why it was not showing the output as expected. For example, if I pass 0.3333...... which is 60 in range (-180, 180), it will show the output as if I passed 120. So that's why I divided it by 2.</p>
<p>Note that my input image is in png format, so it has transparent ends.</p>
<p>The input image is as follows
<a href="https://i.sstatic.net/0NBVO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0NBVO.png" alt="enter image description here" /></a></p>
<p>The output image is
<a href="https://i.sstatic.net/Zho46.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zho46.png" alt="enter image description here" /></a></p>
<p>The expected(Fabric.js) output is
<a href="https://i.sstatic.net/2DsQ2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2DsQ2.png" alt="enter image description here" /></a></p>
<p>Although there is not a big difference between Python output image and fabric js output image but still a human eye can detect the difference easily. I ran Absolute Error (AE) algorithm on various online websites and found that there is more than a 15% difference in various areas of the image. So could someone please guide me on how to achieve accurate hue rotation in Python, similar to what Fabric.js provides?</p>
<p>Packages I used</p>
<p>numpy==1.24.4
opencv-python==4.8.0.74</p>
| <python><numpy><opencv><image-processing><fabricjs> | 2023-08-12 08:49:46 | 0 | 452 | Barun Bhattacharjee |
76,888,441 | 21,055,247 | Tkinter setting image to button gets invisible and imposible to interactwith | <p>I have a tkinter program (Python) That contains a button with a PNG image on it <em>(image 1)</em>. (Just upgrading button styles) When you click the button it will run a function (with some code) and it also changes the button image <em>(image 2)</em> (to have other color and text). But when I click it, it displays the image that I wanted <em>(image 2)</em> for one millisecond and then it vanishes the button image and I'm unable to interact with the button again. (The button gets completely blank)</p>
<p>I want to it successfully change the button image <em>(from image 1 to 2)</em> and when I click it again it would change the image to the previous one <em>(from image 2 to 1).</em></p>
<pre><code>def openServer():
global state, start, process, stateD
if state == 0:
state = 1
#stateD.config(text="Opening...",fg="orange")
stateD.config(text="Online", fg="green")
#start.config(text="Stop")
btn2 = tk.PhotoImage(file='../../../icons/button_stop.png')
start.config(image=btn2)
executable = '*HIDDEN*'
if process is None:
Timer(1.0, consoleTimer).start()
process = subprocess.Popen(executable, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
print("Server started.")
elif state == 1:
stateD.config(text="Offline",fg="black")
#start.config(text="Start")
btn = tk.PhotoImage(file='../../../icons/button_start.png')
start.config(image=btn)
try:
process.stdin.write("exit".encode())
process.stdin.flush()
except:
state = 0
process = None
state = 0
btn= tk.PhotoImage(file='../../../icons/button_start.png')
start = tk.Button(server,borderwidth=0, text= "Start",image=btn,width= 1500,command=lambda:Timer(0,openServer).start())
start.pack()
</code></pre>
<p>Note: the window is 1500x800 the button width is 1500px and the button_image is 1500px</p>
| <python><tkinter><tkinter-button> | 2023-08-12 08:42:57 | 0 | 979 | nikita goncharov |
76,888,311 | 713,200 | How to search for multiple strings in a dictionary values in python? | <p>I want to match multiple strings in a python dictionary which generally has only 1 key value pair. I'm looking for most optimized code, like a 1 liner.
Currently I'm trying with following code, but even if its working, I'm looking for a 1 liner</p>
<pre><code>a = "increment"
b = "variable"
c = "found"
s = {'d':'want to increment the location variable based on which x is found'}
if any(a,b,c in value for value in s.values()):
print("YES")
else:
print("NO")
</code></pre>
| <python><string><dictionary><search> | 2023-08-12 08:04:04 | 2 | 950 | mac |
76,888,237 | 15,218,250 | Possible security risks for the following form submission in Django | <p>I am trying to create the page for a livetime chat app using Django and some Ajax. I have always used django forms, since it is more secure and clean if you are not very familiar with javascript. The below html and views.py don't use django forms, but instead go with the traditional method of using javascript. However, I am concerned that there might be security leaks with this following form, which I copied from another website. I believe I added the csrf token correctly, but is there anything else I need to consider? Thank you, and please leave any comments.</p>
<p>And this is the views.py.</p>
<pre><code>def view(request):
message = request.POST['message']
username = request.POST['username']
room_id = request.POST['room_id']
new_message = Message.objects.create(value=message, user=username, room=room_id)
new_message.save()
return HttpResponse('Message sent successfully')
</code></pre>
<p>If there is an issue, please write the fixed code along with it so I could refer to it.</p>
| <python><django><django-models><django-views> | 2023-08-12 07:38:04 | 1 | 613 | coderDcoder |
76,888,143 | 4,764,171 | Store Plot as Variable in Dataframe Cell | <p>I have been testing storing objects in pandas dataframes trying to learn the limitations, and one of the things I am currently testing involves trying to store the results of a plot in a dataframe column, but I cannot figure out how to reference it properly.</p>
<p>Object Definition</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
class point:
def __init__(self, coordinate):
self.x = coordinate[0]
self.y = coordinate[1]
class series:
def __init__(self, series):
self.series = np.asarray(series)
def plot(self):
fig, ax = plt.subplots(figsize=(5,5))
z = ax.scatter(self.series[:,0], self.series[:,1])
return z
</code></pre>
<p>Test Dataframe:</p>
<pre><code>col_index = [i for i in range(1)]
col_coordinates = [[[i, i**2] for i in range(100)]]
df = pd.DataFrame({"i": col_index, "coordinates": col_coordinates})
</code></pre>
<p><a href="https://i.sstatic.net/4O9lz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4O9lz.png" alt="Test Dataframe output" /></a></p>
<p>Application:</p>
<pre><code>df['series'] = df.apply(lambda x: series(x['coordinates']), axis=1)
df['plot'] = df['series'].apply(lambda x: x.plot())
df.head()
</code></pre>
<p><a href="https://i.sstatic.net/t7ysC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t7ysC.png" alt="Plot Result" /></a></p>
<p>Attempting to Reference the Plot Later:</p>
<pre><code>fig,ax = plt.subplots(figsize=(5,5))
ax = df['plot'].iloc[0]
</code></pre>
<p><a href="https://i.sstatic.net/5jEvT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5jEvT.png" alt="Referencing the Plot Later" /></a></p>
<p>As you can see the plot renders properly when the column in the dataframe is filled in, but it does not retain the plot itself when I try to reference it later. I'm sure I'm probably saving the wrong thing, but I'm not sure what the right thing is.</p>
<p>Secondary question is, can the plot itself be saved into the column without it first being rendered?</p>
| <python><pandas><matplotlib> | 2023-08-12 07:04:52 | 0 | 786 | hcaelxxam |
76,888,023 | 2,153,235 | Python/Spark convention to locate a class cited in help documentation? | <p>I am spinning up on Python and PySpark. While (slowly) working
through the beginning of
<a href="https://sparkbyexamples.com/pyspark/different-ways-to-create-dataframe-in-pyspark" rel="nofollow noreferrer">this</a>
tutorial, I found that <code>help(rdd.toDF)</code> was the right incantation to
to access the <code>toDF</code> documentation. Here, <code>rdd</code> is the variable name
in the tutorial code for an object whose class includes the <code>toDF</code>
method, which itself is shorthand for <code>spark.createDataFrame</code>. In
turn, <code>help(spark.createDataFrame)</code> says that the method "Creates a
:class:<code>DataFrame</code>".</p>
<p>The problem is that the documentation doesn't specify how to prefix
<code>DataFrame</code> for the <code>help()</code> function (the following do not work):</p>
<pre><code>>>> help(DataFrame)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'DataFrame' is not defined
>>> help(pyspark.sql.session.DataFrame)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'pyspark' is not defined
>>> help(spark.DataFrame)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'SparkSession' object has no attribute 'DataFrame'
</code></pre>
<p>Does Python or Spark have a convention whereby the user can
determine the prefixing needed to access the help() documentation
for a class that is cited in the help() documentation?</p>
<p>The immediate use case is for the <code>DataFrame</code> class, but I was hoping that there was a way to <em>always</em> be able to figure out how to the required prefixing.</p>
<h1>Troubleshooting by using Vim</h1>
<p>As per the comment under <em>Myron_Ben4</em>'s answer, I tried searching for the class definition of <code>DataFrame</code> in the <code>*.py</code> files of the current Conda environment. Unfortunately, it yields many ambiguous results.</p>
<pre><code># Go to the conda environment
cd /c/Users/User.Name/anaconda3/envs/py39
# Find *.py files and grep for definition of DataFrame class
find * -type f -name "*.py" \
-exec grep -E "^\s*class\s+DataFrame\b" {} +
Lib/site-packages/dask/dataframe/core.py:class DataFrame(_Frame):
Lib/site-packages/pandas/core/frame.py:class DataFrame(NDFrame, OpsMixin):
Lib/site-packages/pandas/core/interchange/dataframe_protocol.py:class DataFrame(ABC):
Lib/site-packages/panel/pane/markup.py:class DataFrame(HTML):
Lib/site-packages/panel/widgets/tables.py:class DataFrame(BaseTable):
Lib/site-packages/param/__init__.py:class DataFrame(ClassSelector):
Lib/site-packages/pyspark/pandas/frame.py:class DataFrame(Frame, Generic[T]):
Lib/site-packages/pyspark/sql/connect/dataframe.py:class DataFrame:
Lib/site-packages/pyspark/sql/dataframe.py:class DataFrame(PandasMapOpsMixin, PandasConversionMixin):
</code></pre>
<p>I tried to look at the context of each occurrence using VimScript's <code>vimgrep</code>:</p>
<pre><code>cd /c/Users/User.Name/anaconda3/envs/py39 " The Conda environment
" This command is minimal code, but there are too many files to
" search in the Conda environment and it is too slow:
"
" vimgrep '^\s*class\s\+DataFrame\>' **/*.py
" This command find the RegEx occurrences in the
" above files in easily navigable form:
vimgrep /^\s*class\s\+DataFrame\>/
\ Lib/site-packages/dask/dataframe/core.py
\ Lib/site-packages/pandas/core/frame.py
\ Lib/site-packages/pandas/core/interchange/dataframe_protocol.py
\ Lib/site-packages/panel/pane/markup.py
\ Lib/site-packages/panel/widgets/tables.py
\ Lib/site-packages/param/__init__.py
\ Lib/site-packages/pyspark/pandas/frame.py
\ Lib/site-packages/pyspark/sql/connect/dataframe.py
\ Lib/site-packages/pyspark/sql/dataframe.py
</code></pre>
<p>Since the Vim command line won't accept continued lines, I had to put the VimScript code in a file and issue <code>:source #59</code> at the Vim command line, where <code>59</code> is the Vim buffer number of the file (use whatever the buffer number is for your situation). Vim's <code>:copen</code> command then lets me surf through the various definitions of the <code>DataFrame</code> class within the context of their <code>*.py</code> files.</p>
<p>From this examination, however, I could see no obvious indication of which class definition is the one that is in effect in the context of my question. I am guessing that it is the last one and can read the documentation. In general, however, this just shows that an unambiguous way is needed to access the correct help content.</p>
<h1>Work-around</h1>
<p>Having just read up on Python's <a href="https://blog.logrocket.com/understanding-type-annotation-python" rel="nofollow noreferrer">type
hinting/annotation</a>,
I see that one work-around is to look at the return-type annotation for
<code>createDataFrame</code>:</p>
<pre><code>>>> help(spark.createDataFrame)
Help on method createDataFrame in module pyspark.sql.session:
createDataFrame(data: Union[pyspark.rdd.RDD[Any], Iterable[Any],
ForwardRef('PandasDataFrameLike'), ForwardRef('ArrayLike')],
schema: Union[pyspark.sql.types.AtomicType,
pyspark.sql.types.StructType, str, NoneType] = None,
samplingRatio: Optional[float] = None, verifySchema: bool = True)
-> pyspark.sql.dataframe.DataFrame method of
pyspark.sql.session.SparkSession instance
Creates a :class:`DataFrame` from an :class:`RDD`, a list, a
:class:`pandas.DataFrame` or a :class:`numpy.ndarray`.
</code></pre>
<p>The annotation <code>-> pyspark.sql.dataframe.DataFrame</code> indicates what to
submit to <code>help()</code>. Only useful, of course, if there is annotation.</p>
<pre><code>>>> help(pyspark.sql.dataframe.DataFrame)
Help on class DataFrame in module pyspark.sql.dataframe:
class DataFrame(pyspark.sql.pandas.map_ops.PandasMapOpsMixin,
pyspark.sql.pandas.conversion.PandasConversionMixin)
| DataFrame(jdf: py4j.java_gateway.JavaObject, sql_ctx:
| Union[ForwardRef('SQLContext'), ForwardRef('SparkSession')])
|
| A distributed collection of data grouped into named columns.
</code></pre>
<p>The only confusing thing about the above help documentation is that it
opens with what appears to be a contructor, the signature of which differs from that in
<code>/c/Users/User.Name/anaconda3/envs/py39/Lib/site-packages/pyspark/sql/dataframe.py</code>:</p>
<pre><code>class DataFrame(PandasMapOpsMixin, PandasConversionMixin):
<...snip...>
def __init__(
self,
jdf: JavaObject,
sql_ctx: Union["SQLContext", "SparkSession"],
):
</code></pre>
<p><a href="https://stackoverflow.com/questions/55320236">This Q&A</a> helps with
understanding the discrepancy due to presence of <code>ForwardRef</code> in the
<code>help()</code> output. It doesn't actually describe the keyword
<code>ForwardRef</code>, but I couldn't find a good description of the latter
online. <a href="https://docs.python.org/3/library/typing.html#functions-and-decorators" rel="nofollow noreferrer">This Python
documentation</a>
says that "class typing.ForwardRef...[is] used for internal typing
representation of string forward references". It is not clear why it
needs to be explicit in the <code>help(pyspark.sql.dataframe.DataFrame)</code>
output <code>sql_ctx: Union[ForwardRef('SQLContext'), ForwardRef('SparkSession')]</code>,
i.e., instead of just <code>sql_ctx: Union["SQLContext", "SparkSession"]</code>,
as shown in <code>dataframe.py</code>. It also isn't clear whether the <code>ForwardRef</code>
from <code>help(pyspark.sql.dataframe.DataFrame)</code> is the same as the
<code>class typing.ForwardRef</code> in the aforementioned Python documentation.</p>
<p>An alternative to relying on the above documentation from <code>help()</code> is
to find the help page online. The
<a href="https://spark.apache.org/docs/3.1.1/api/python/reference/api/pyspark.sql.SparkSession.createDataFrame.html" rel="nofollow noreferrer">createDataFrame</a>,
page links to
<a href="https://spark.apache.org/docs/3.1.1/api/python/reference/api/pyspark.sql.DataFrame.html#pyspark.sql.DataFrame" rel="nofollow noreferrer">DataFrame</a>,
which provides the full "path" <code>pyspark.sql.DataFrame</code>.</p>
| <python><apache-spark><pyspark> | 2023-08-12 06:18:25 | 1 | 1,265 | user2153235 |
76,887,911 | 534,298 | How to remove python interactions for numpy array operations in cython | <p>I have a simple numerical function for cython. Following the <a href="https://cython.readthedocs.io/en/latest/src/userguide/numpy_tutorial.htm" rel="nofollow noreferrer">numpy tutorial</a>, I got</p>
<pre><code>import numpy as np
from cython.view cimport array as cvarray
cpdef double[:, ::1] invert_xyz(double[:, ::1] xyz, double[:] center):
"""
Inversion operation on `xyz` with `center` as inversion center.
:param xyz: Nx3 coordinate array
"""
cdef Py_ssize_t i, n
n = xyz.shape[0]
# see https://stackoverflow.com/questions/18462785/what-is-the-recommended-way-of-allocating-memory-for-a-typed-memory-view
got = cvarray(shape=(n, 3), itemsize=sizeof(double), format="d")
cdef double[:, ::1] mv = got
for i in range(n):
mv[i, 0] = 2 * center[0] - xyz[i, 0]
mv[i, 1] = 2 * center[1] - xyz[i, 1]
mv[i, 2] = 2 * center[2] - xyz[i, 2]
return mv
</code></pre>
<p>But the lines inside the <code>for</code> loop still contain python interactions (they are yellow when compiled with annotation). For example,</p>
<pre><code>+032: for i in range(n):
+033: mv[i, 0] = 2 * center[0] - xyz[i, 0]
__pyx_t_8 = 0;
__pyx_t_9 = -1;
if (__pyx_t_8 < 0) {
__pyx_t_8 += __pyx_v_center.shape[0];
if (unlikely(__pyx_t_8 < 0)) __pyx_t_9 = 0;
} else if (unlikely(__pyx_t_8 >= __pyx_v_center.shape[0])) __pyx_t_9 = 0;
if (unlikely(__pyx_t_9 != -1)) {
__Pyx_RaiseBufferIndexError(__pyx_t_9);
__PYX_ERR(0, 33, __pyx_L1_error)
}
__pyx_t_10 = __pyx_v_i;
__pyx_t_11 = 0;
__pyx_t_9 = -1;
if (__pyx_t_10 < 0) {
__pyx_t_10 += __pyx_v_xyz.shape[0];
if (unlikely(__pyx_t_10 < 0)) __pyx_t_9 = 0;
} else if (unlikely(__pyx_t_10 >= __pyx_v_xyz.shape[0])) __pyx_t_9 = 0;
if (__pyx_t_11 < 0) {
__pyx_t_11 += __pyx_v_xyz.shape[1];
if (unlikely(__pyx_t_11 < 0)) __pyx_t_9 = 1;
} else if (unlikely(__pyx_t_11 >= __pyx_v_xyz.shape[1])) __pyx_t_9 = 1;
if (unlikely(__pyx_t_9 != -1)) {
__Pyx_RaiseBufferIndexError(__pyx_t_9);
__PYX_ERR(0, 33, __pyx_L1_error)
}
__pyx_t_12 = __pyx_v_i;
__pyx_t_13 = 0;
__pyx_t_9 = -1;
if (__pyx_t_12 < 0) {
__pyx_t_12 += __pyx_v_mv.shape[0];
if (unlikely(__pyx_t_12 < 0)) __pyx_t_9 = 0;
} else if (unlikely(__pyx_t_12 >= __pyx_v_mv.shape[0])) __pyx_t_9 = 0;
if (__pyx_t_13 < 0) {
__pyx_t_13 += __pyx_v_mv.shape[1];
if (unlikely(__pyx_t_13 < 0)) __pyx_t_9 = 1;
} else if (unlikely(__pyx_t_13 >= __pyx_v_mv.shape[1])) __pyx_t_9 = 1;
if (unlikely(__pyx_t_9 != -1)) {
__Pyx_RaiseBufferIndexError(__pyx_t_9);
__PYX_ERR(0, 33, __pyx_L1_error)
}
*((double *) ( /* dim=1 */ ((char *) (((double *) ( /* dim=0 */ (__pyx_v_mv.data + __pyx_t_12 * __pyx_v_mv.strides[0]) )) + __pyx_t_13)) )) = ((2.0 * (*((double *) ( /* dim=0 */ ((char *) (((double *) __pyx_v_center.data) + __pyx_t_8)) )))) - (*((double *) ( /* dim=1 */ ((char *) (((double *) ( /* dim=0 */ (__pyx_v_xyz.data + __pyx_t_10 * __pyx_v_xyz.strides[0]) )) + __pyx_t_11)) ))));
+034: mv[i, 1] = 2 * center[1] - xyz[i, 1]
</code></pre>
<p>How can I get rid of the python interactions?</p>
| <python><numpy><cython> | 2023-08-12 05:42:19 | 0 | 21,060 | nos |
76,887,758 | 20,122,390 | How should I structure modules in a large python application? | <p>I often write and participate in the development of web applications in Python together with FastAPI. I always use the same folder structure which is as follows:</p>
<p><a href="https://i.sstatic.net/KMMdM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KMMdM.png" alt="enter image description here" /></a></p>
<p>I think the names are clear to know what is in each of them. But I want to focus on two folders: api and services. Within api are all the modules with the endpoints of the application (one module for each "Entity", for example in the module api/routers/users.py will be all the endpoints for the users. In these endpoints there is no complex logic , all the logic is in the "services" folder. So, in the module services/users.py there will be a whole class called User with the logic. In many cases that class is simply a CRUD that inherits from a BASE CRUD and doesn't even extend the class Or maybe add one or two more methods.
However, in some cases, the "entity" requires much more complex logic, it is appropriate to implement some design patterns, interfaces etc. When that happens I feel overwhelmed by having to put everything in the services/users.py module. (which implies a very large file). I've even seen how other developers continue to extend the User class (it's just an example, it can be anything) with a lot of methods that have nothing to do with the class, making the code too coupled and with low cohesion.
As a solution to that, I thought of creating a folder as such for each entity and not a module. Then it would be services/user and there have all the user logic distributed in more than one module if necessary.
But I'm not sure I'm doing the right thing in terms of design. Am I making things more complicated? Or is it a correct strategy?</p>
| <python><design-patterns><backend><fastapi><api-design> | 2023-08-12 04:41:26 | 2 | 988 | Diego L |
76,887,708 | 8,144,957 | Identifying Rectangles from a Large Set of Line Segments in Python | <p>Hello StackOverflow community!</p>
<p>I am currently facing a challenge in detecting rotated rectangles that can be formed from a given set of line segments. With a total of approximately 5000 line segments, the performance becomes a significant concern.</p>
<p>To better illustrate the problem, consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>segments = [
[(0, 0), (10, 0)],
[(10, 0), (10, 10)],
[(10, 10), (0, 10)],
[(0, 10), (0, 0)],
]
</code></pre>
<p>From the above example, these line segments form a rectangle. However, this rectangle is not rotated. But in other cases, the segments might form a rotated rectangle, and I want to be able to detect that.</p>
<p>How can I efficiently determine if a given set of line segments can form a rotated rectangle? Is there any existing algorithm or Python library that can assist with this?</p>
<p>Thank you in advance for any guidance and suggestions!</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
import pandas as pd
from tqdm import tqdm
def find_rectangles(segments):
"""
Finds all rectangles in a list of segments.
Args:
segments: A list of segments.
Returns:
A list of rectangles.
"""
rectangles = []
segments = np.array(segments)
for i in tqdm(range(len(segments))):
l1 = segments[i]
l2 = None
l3 = None
l4 = None
for j in range(len(segments)):
if are_perpendicular(l1, segments[j]):
l2 = segments[j]
break
if l2 is None:
continue
for j in range(len(segments)):
# l3: Vuong goc voi l2, song song voi l1, ke voi l2
z = segments[j]
if are_parallel(l1, segments[j]) and are_perpendicular(l2, segments[j]):
l3 = segments[j]
break
if l3 is None:
continue
for j in range(len(segments)):
# l4: Vuong goc voi l1, song song voi l2, ke voi l3
if (
are_perpendicular(l1, segments[j])
and are_parallel(l2, segments[j])
and are_perpendicular(l3, segments[j])
):
l4 = segments[j]
break
if l4 is None:
continue
lines = np.array([l1, l2, l3, l4])
points = lines.reshape(-1, 2)
x = points[:, 0]
y = points[:, 1]
xmin, ymin, xmax, ymax = min(x), min(y), max(x), max(y)
rectangles.append(Rectangle(xmin, ymin, xmax, ymax))
print("Found rectangle: ", rectangles[-1])
return rectangles
def are_parallel(v1, v2):
"""
Determines if two segments are parallel.
Args:
segment1: A segment.
segment2: A segment.
Returns:
True if the segments are parallel, False otherwise.
"""
slope1 = compute_slope(v1)
slope2 = compute_slope(v2)
return slope1 == slope2
def are_adjacent(segment1, segment2):
"""
Determines if two segments are adjacent.
Args:
segment1: A segment.
segment2: A segment.
Returns:
True if the segments are adjacent, False otherwise.
"""
p1, p2 = segment1
p3, p4 = segment2
return (p1 == p3).all() or (p1 == p4).all() or (p2 == p3).all() or (p2 == p4).all()
def compute_slope(seg):
seg = seg[1] - seg[0]
angle = np.angle(complex(*(seg)), deg=True)
return angle
def are_perpendicular(v1, v2):
"""
Determines if two segments are perpendicular.
Args:
segment1: A segment.
segment2: A segment.
Returns:
True if the segments are perpendicular, False otherwise.
"""
# KhΓ΄ng chΓΉng nhau
# points = np array shape (-1, 2)
points = np.array([v1[0], v1[1], v2[0], v2[1]])
xmin, ymin, xmax, ymax = min(points[:, 0]), min(points[:, 1]), max(
points[:, 0]
), max(points[:, 1])
is_overlap = (xmin == xmax) or (ymin == ymax)
if is_overlap:
return False
# adjacent
cond2 = are_adjacent(v1, v2)
if not cond2:
return False
# VuΓ΄ng gΓ³c
s1 = compute_slope(v1) # in degree
s2 = compute_slope(v2) # in degree
cond3 = np.abs(s1 - s2) == 90
return cond3
class Rectangle:
"""
Represents a rectangle.
Attributes:
top_left: The top left corner of the rectangle.
bottom_right: The bottom right corner of the rectangle.
"""
def __init__(self, xmin, ymin, xmax, ymax):
"""
Initializes a rectangle.
Args:
top_left: The top left corner of the rectangle.
bottom_right: The bottom right corner of the rectangle.
"""
self.top_left = (xmin, ymin)
self.bottom_right = (xmax, ymax)
def __str__(self):
"""
Returns a string representation of the rectangle.
Returns:
A string representation of the rectangle.
"""
return "Rectangle(top_left=({}, {}), bottom_right=({}, {}))".format(
self.top_left[0],
self.top_left[1],
self.bottom_right[0],
self.bottom_right[1],
)
def draw_line(df):
x = df["x0"].values.tolist() + df["x1"].values.tolist()
y = df["y0"].values.tolist() + df["y1"].values.tolist()
xmin, ymin, xmax, ymax = min(x), min(y), max(x), max(y)
w, h = xmax - xmin, ymax - ymin
mat = np.zeros((ymax - ymin + 1, xmax - xmin + 1, 3), dtype=np.uint8)
df[["x0", "x1"]] = df[["x0", "x1"]] - xmin
df[["y0", "y1"]] = df[["y0", "y1"]] - ymin
for x0, y0, x1, y1 in tqdm(df[["x0", "y0", "x1", "y1"]].values.tolist()):
cv2.line(mat, (x0, y0), (x1, y1), (255, 255, 255), 1)
# cv2.imshow("mat", mat)
cv2.imwrite("mat.png", mat)
print('write mat')
if __name__ == "__main__":
segments = [
[(0, 0), (10, 0)],
[(10, 0), (10, 10)],
[(10, 10), (0, 10)],
[(0, 10), (0, 0)],
]
# Find all rectangles in the list of segments.
rectangles = find_rectangles(segments)
# Print the rectangles.
rects = []
for rectangle in rectangles:
print(rectangle)
</code></pre>
| <python><algorithm><geometry><computational-geometry><line-segment> | 2023-08-12 04:17:51 | 2 | 328 | Nguyα»
n Anh Bình |
76,887,677 | 1,228,906 | Unknown model type error when logging Keras model using MLflow in Azure ML | <p>I'm getting a βUnknown model type errorβ when training a model on Azure ML. It's a Keras TensorFlow model and here is my code that's failing:</p>
<pre><code>mlflow.keras.log_model(
model=model,
registered_model_name=registered_model_name,
artifact_path=registered_model_name,
extra_pip_requirements=["protobuf~=3.20"],
)
mlflow.keras.save_model(
keras_model=model,
path=os.path.join(registered_model_name, "trained_model"),
extra_pip_requirements=["protobuf~=3.20"],
)
</code></pre>
<p>Error:</p>
<blockquote>
<p>Epoch 20/20 - 3s - loss: 0.0047 - accuracy: 0.9988 - val_loss: 0.1677 - val_accuracy: 0.9809 2023/08/12 14:33:23 WARNING mlflow.tensorflow:
You are saving a TensorFlow Core model or Keras model without a
signature. Inference with mlflow.pyfunc.spark_udf() will not work
unless the model's pyfunc representation accepts pandas DataFrames as
inference inputs. Test loss: 0.1676583289883102 Test accuracy:
0.98089998960495 Registering the model via MLFlow -- 1 Traceback (most recent call last): File "keras_mnist.py", line 176, in
mlflow.keras.log_model( File "/azureml-envs/azureml_e6c91049350b2ff55519ca4d0d2aa0dc/lib/python3.8/site-packages/mlflow/tensorflow/<strong>init</strong>.py",
line 208, in log_model
return Model.log( File "/azureml-envs/azureml_e6c91049350b2ff55519ca4d0d2aa0dc/lib/python3.8/site-packages/mlflow/models/model.py",
line 572, in log
flavor.save_model(path=local_path, mlflow_model=mlflow_model, **kwargs) File "/azureml-envs/azureml_e6c91049350b2ff55519ca4d0d2aa0dc/lib/python3.8/site-packages/mlflow/tensorflow/<strong>init</strong>.py",
line 451, in save_model
raise MlflowException(f"Unknown model type: {type(model)}") mlflow.exceptions.MlflowException: Unknown model type: <class
'keras.engine.sequential.Sequential'></p>
</blockquote>
<p>Conda Environment:</p>
<pre><code>name: keras-env channels:
- conda-forge dependencies:
- python=3.8
- pip=21.2.4
- pip:
- protobuf~=3.20
- numpy==1.21.2
- tensorflow-gpu==2.2.0
- keras==2.3.1
- matplotlib
- mlflow==2.5.0
- azureml-mlflow==1.52.0
</code></pre>
| <python><tensorflow><keras><azure-machine-learning-service><mlflow> | 2023-08-12 04:01:28 | 1 | 1,896 | webber |
76,887,615 | 11,832,959 | pdfminer laparams not causing multiple LTChar to group into LTTextLine | <p>I'm using pdfminer.six</p>
<p>According to <a href="https://pdfminersix.readthedocs.io/_/downloads/en/latest/pdf/" rel="nofollow noreferrer">this</a> on page 8 I should be able to modify <code>char_margin</code> and <code>line_overlap</code> in a <code>LAParams</code> object in order to cause a bunch of <code>LTChar</code> objects next to each other to group into <code>LTTextLine</code> objects. Unfortunately, it doesn't matter what values I assign these, nothing changes.</p>
<p>Here is my code and some example output. What am I missing? I've tried reasonable values such as 1, 5, 10, and super high ones such as 10000000000000. But I never get any LTTextLine objects.</p>
<pre class="lang-py prettyprint-override"><code>from pdfminer.layout import *
from pdfminer.high_level import extract_pages
def rec(element, deep=0):
print(f"{deep}: {element}")
if hasattr(element, '__iter__'):
for item in element:
rec(item, deep+1)
def main():
file = 'completed-intake-form.pdf'
laparams = LAParams(char_margin=4, line_overlap=4.0, word_margin=5.0)
for page_layout in extract_pages(file, laparams=laparams):
for element in page_layout:
rec(element, 0)
</code></pre>
<pre><code>$ python3 src/main.py
0: <LTFigure(TLkNsAkXBt) 0.000,0.000,595.500,850.080 matrix=[1.00,0.00,0.00,1.00, (0.00,-7.83)]>
1: <LTRect 0.000,-7.463,596.000,842.250>
1: <LTRect 0.000,-0.707,596.000,842.250>
1: <LTRect 0.000,-0.707,596.000,842.250>
1: <LTRect 36.491,749.262,550.347,787.661>
1: <LTChar 39.671,663.263,47.715,673.269 matrix=[0.75,0.00,0.00,0.75, (39.67,666.25)] font='AAAAAA+EBGaramond-SemiBold' adv=10.717319919600001 text='N'>
1: <LTChar 47.714,663.263,51.967,673.269 matrix=[0.75,0.00,0.00,0.75, (47.71,666.25)] font='AAAAAA+EBGaramond-SemiBold' adv=5.6652499575 text='a'>
1: <LTChar 51.966,663.263,59.941,673.269 matrix=[0.75,0.00,0.00,0.75, (51.97,666.25)] font='AAAAAA+EBGaramond-SemiBold' adv=10.6240099203 text='m'>
1: <LTChar 59.940,663.263,64.022,673.269 matrix=[0.75,0.00,0.00,0.75, (59.94,666.25)] font='AAAAAA+EBGaramond-SemiBold' adv=5.4386399592000005 text='e'>
1: <LTChar 39.671,645.269,46.785,655.275 matrix=[0.75,0.00,0.00,0.75, (39.67,648.25)] font='AAAAAA+EBGaramond-SemiBold' adv=9.4776299289 text='A'>
1: <LTChar 46.784,645.269,52.157,655.275 matrix=[0.75,0.00,0.00,0.75, (46.78,648.25)] font='AAAAAA+EBGaramond-SemiBold' adv=7.1582099463 text='d'>
1: <LTChar 52.156,645.269,57.529,655.275 matrix=[0.75,0.00,0.00,0.75, (52.16,648.25)] font='AAAAAA+EBGaramond-SemiBold' adv=7.1582099463 text='d'>
1: <LTChar 57.529,645.269,61.411,655.275 matrix=[0.75,0.00,0.00,0.75, (57.53,648.25)] font='AAAAAA+EBGaramond-SemiBold' adv=5.1720399612 text='r'>
1: <LTChar 61.410,645.269,65.493,655.275 matrix=[0.75,0.00,0.00,0.75, (61.41,648.25)] font='AAAAAA+EBGaramond-SemiBold' adv=5.4386399592000005 text='e'>
1: <LTChar 65.492,645.269,68.974,655.275 matrix=[0.75,0.00,0.00,0.75, (65.49,648.25)] font='AAAAAA+EBGaramond-SemiBold' adv=4.638839965200001 text='s'>
1: <LTChar 68.974,645.269,72.456,655.275 matrix=[0.75,0.00,0.00,0.75, (68.97,648.25)] font='AAAAAA+EBGaramond-SemiBold' adv=4.638839965200001 text='s'>
1: <LTChar 39.671,627.003,45.564,637.009 matrix=[0.75,0.00,0.00,0.75, (39.67,629.99)] font='AAAAAA+EBGaramond-SemiBold' adv=7.8513699411 text='P'>
1: <LTChar 45.563,627.003,51.016,637.009 matrix=[0.75,0.00,0.00,0.75, (45.56,629.99)] font='AAAAAA+EBGaramond-SemiBold' adv=7.264849945500001 text='h'>
1: <LTChar 51.016,627.003,56.019,637.009 matrix=[0.75,0.00,0.00,0.75, (51.02,629.99)] font='AAAAAA+EBGaramond-SemiBold' adv=6.66499995 text='o'>
1: <LTChar 56.018,627.003,61.551,637.009 matrix=[0.75,0.00,0.00,0.75, (56.02,629.99)] font='AAAAAA+EBGaramond-SemiBold' adv=7.371489944700001 text='n'>
1: <LTChar 61.550,627.003,65.633,637.009 matrix=[0.75,0.00,0.00,0.75, (61.55,629.99)] font='AAAAAA+EBGaramond-SemiBold' adv=5.4386399592000005 text='e'>
1: <LTLine 66.501,665.670,230.891,665.670>
1: <LTLine 168.264,545.088,332.654,545.088>
1: <LTLine 387.881,472.888,412.650,473.221>
1: <LTLine 521.049,472.516,550.331,472.516>
1: <LTLine 365.141,648.330,445.459,648.330>
1: <LTLine 487.873,647.580,550.179,647.580>
1: <LTLine 99.174,57.068,263.564,57.068>
1: <LTLine 416.664,215.165,540.527,215.540>
1: <LTLine 75.393,647.955,239.783,647.955>
1: <LTLine 330.209,629.758,494.599,629.758>
1: <LTLine 385.636,498.845,550.026,498.845>
1: <LTLine 69.056,630.134,157.635,630.206>
1: <LTLine 305.623,665.220,394.946,666.118>
1: <LTLine 45.726,405.074,559.167,405.074>
1: <LTLine 45.726,572.622,559.167,572.622>
1: <LTLine 45.726,166.724,559.167,166.724>
1: <LTRect 50.007,374.845,62.221,387.059>
1: <LTRect 284.236,521.109,296.450,533.323>
1: <LTRect 166.196,495.736,178.410,507.950>
1: <LTRect 119.307,469.260,131.522,481.474>
1: <LTRect 362.813,521.109,375.027,533.323>
1: <LTRect 244.773,495.736,256.987,507.950>
1: <LTRect 197.884,469.260,210.098,481.474>
1: <LTRect 233.699,374.845,245.913,387.059>
1: <LTRect 416.664,372.944,428.878,385.158>
1: <LTRect 50.007,334.925,62.221,347.139>
1: <LTRect 233.699,334.925,245.913,347.139>
1: <LTRect 416.664,333.024,428.878,345.238>
1: <LTRect 50.007,295.005,62.221,307.219>
1: <LTRect 233.699,295.005,245.913,307.219>
1: <LTRect 416.664,293.104,428.878,305.318>
1: <LTRect 50.007,255.084,62.221,267.299>
1: <LTRect 233.699,255.084,245.913,267.299>
1: <LTRect 416.664,253.183,428.878,265.398>
1: <LTRect 50.007,354.885,62.221,367.099>
1: <LTRect 233.699,354.885,245.913,367.099>
1: <LTRect 416.664,352.984,428.878,365.198>
1: <LTRect 50.007,314.965,62.221,327.179>
1: <LTRect 233.699,314.965,245.913,327.179>
1: <LTRect 416.664,313.064,428.878,325.278>
1: <LTRect 50.007,275.045,62.221,287.259>
1: <LTRect 233.699,275.045,245.913,287.259>
1: <LTRect 416.664,273.144,428.878,285.358>
1: <LTRect 50.007,235.124,62.221,247.339>
1: <LTRect 233.699,235.124,245.913,247.339>
1: <LTRect 416.664,233.223,428.878,245.437>
1: <LTRect 50.007,215.164,62.221,227.378>
1: <LTRect 233.699,215.164,245.913,227.378>
1: <LTChar 134.147,719.958,183.612,762.526 matrix=[0.75,0.00,0.00,0.75, (134.15,761.84)] font='BAAAAA+EyesomeScript' adv=65.897018838 text='M'>
1: <LTChar 183.605,719.958,204.336,762.526 matrix=[0.75,0.00,0.00,0.75, (183.61,761.84)] font='BAAAAA+EyesomeScript' adv=27.617769513000002 text='a'>
1: <LTChar 204.334,719.958,217.658,762.526 matrix=[0.75,0.00,0.00,0.75, (204.33,761.84)] font='BAAAAA+EyesomeScript' adv=17.750229687 text='s'>
1: <LTChar 217.656,719.958,230.980,762.526 matrix=[0.75,0.00,0.00,0.75, (217.66,761.84)] font='BAAAAA+EyesomeScript' adv=17.750229687 text='s'>
1: <LTChar 230.978,719.958,251.709,762.526 matrix=[0.75,0.00,0.00,0.75, (230.98,761.84)] font='BAAAAA+EyesomeScript' adv=27.617769513000002 text='a'>
1: <LTChar 251.706,719.958,272.990,762.526 matrix=[0.75,0.00,0.00,0.75, (251.71,761.84)] font='BAAAAA+EyesomeScript' adv=28.3549995 text='g'>
1: <LTChar 272.988,719.958,287.504,762.526 matrix=[0.75,0.00,0.00,0.75, (272.99,761.84)] font='BAAAAA+EyesomeScript' adv=19.338109659000004 text='e'>
1: <LTChar 287.502,719.958,300.272,762.526 matrix=[0.75,0.00,0.00,0.75, (287.50,761.84)] font='BAAAAA+EyesomeScript' adv=17.0129997 text=' '>
1: <LTChar 300.271,719.958,320.959,762.526 matrix=[0.75,0.00,0.00,0.75, (300.27,761.84)] font='BAAAAA+EyesomeScript' adv=27.561059514 text='T'>
1: <LTChar 320.956,719.958,346.157,762.526 matrix=[0.75,0.00,0.00,0.75, (320.96,761.84)] font='BAAAAA+EyesomeScript' adv=33.572319408 text='h'>
1: <LTChar 346.154,719.958,360.669,762.526 matrix=[0.75,0.00,0.00,0.75, (346.15,761.84)] font='BAAAAA+EyesomeScript' adv=19.338109659000004 text='e'>
1: <LTChar 360.668,719.958,378.674,762.526 matrix=[0.75,0.00,0.00,0.75, (360.67,761.84)] font='BAAAAA+EyesomeScript' adv=23.988329577000002 text='r'>
1: <LTChar 378.672,719.958,399.403,762.526 matrix=[0.75,0.00,0.00,0.75, (378.67,761.84)] font='BAAAAA+EyesomeScript' adv=27.617769513000002 text='a'>
1: <LTChar 399.400,719.958,418.343,762.526 matrix=[0.75,0.00,0.00,0.75, (399.40,761.84)] font='BAAAAA+EyesomeScript' adv=25.235949555 text='p'>
1: <LTChar 418.341,719.958,439.625,762.526 matrix=[0.75,0.00,0.00,0.75, (418.34,761.84)] font='BAAAAA+EyesomeScript' adv=28.3549995 text='y'>
1: <LTChar 439.618,719.958,452.389,762.526 matrix=[0.75,0.00,0.00,0.75, (439.62,761.84)] font='BAAAAA+EyesomeScript' adv=17.0129997 text=' '>
1: <LTChar 150.514,707.298,167.303,727.723 matrix=[0.75,0.00,0.00,0.75, (150.51,715.06)] font='CAAAAA+Garet-Book' adv=22.366619178 text='C'>
1: <LTChar 171.038,707.298,182.537,727.723 matrix=[0.75,0.00,0.00,0.75, (171.04,715.06)] font='CAAAAA+Garet-Book' adv=15.319229437 text='L'>
1: <LTChar 186.273,707.298,192.135,727.723 matrix=[0.75,0.00,0.00,0.75, (186.27,715.06)] font='CAAAAA+Garet-Book' adv=7.809269713000001 text='I'>
1: <LTChar 195.872,707.298,208.311,727.723 matrix=[0.75,0.00,0.00,0.75, (195.87,715.06)] font='CAAAAA+Garet-Book' adv=16.570889390999998 text='E'>
1: <LTChar 212.046,707.298,227.753,727.723 matrix=[0.75,0.00,0.00,0.75, (212.05,715.06)] font='CAAAAA+Garet-Book' adv=20.924489231 text='N'>
1: <LTChar 231.488,707.298,244.131,727.723 matrix=[0.75,0.00,0.00,0.75, (231.49,715.06)] font='CAAAAA+Garet-Book' adv=16.842989381 text='T'>
1: <LTChar 247.866,707.298,253.340,727.723 matrix=[0.75,0.00,0.00,0.75, (247.87,715.06)] font='CAAAAA+Garet-Book' adv=7.292279732000001 text=' '>
1: <LTChar 257.078,707.298,262.940,727.723 matrix=[0.75,0.00,0.00,0.75, (257.08,715.06)] font='CAAAAA+Garet-Book' adv=7.809269713000001 text='I'>
1: <LTChar 266.677,707.298,282.384,727.723 matrix=[0.75,0.00,0.00,0.75, (266.68,715.06)] font='CAAAAA+Garet-Book' adv=20.924489231 text='N'>
1: <LTChar 286.118,707.298,298.761,727.723 matrix=[0.75,0.00,0.00,0.75, (286.12,715.06)] font='CAAAAA+Garet-Book' adv=16.842989381 text='T'>
1: <LTChar 302.497,707.298,316.447,727.723 matrix=[0.75,0.00,0.00,0.75, (302.50,715.06)] font='CAAAAA+Garet-Book' adv=18.584429317 text='A'>
1: <LTChar 320.182,707.298,334.112,727.723 matrix=[0.75,0.00,0.00,0.75, (320.18,715.06)] font='CAAAAA+Garet-Book' adv=18.557219318 text='K'>
1: <LTChar 337.847,707.298,350.286,727.723 matrix=[0.75,0.00,0.00,0.75, (337.85,715.06)] font='CAAAAA+Garet-Book' adv=16.570889390999998 text='E'>
1: <LTChar 354.022,707.298,359.495,727.723 matrix=[0.75,0.00,0.00,0.75, (354.02,715.06)] font='CAAAAA+Garet-Book' adv=7.292279732000001 text=' '>
1: <LTChar 363.233,707.298,374.671,727.723 matrix=[0.75,0.00,0.00,0.75, (363.23,715.06)] font='CAAAAA+Garet-Book' adv=15.237599440000002 text='F'>
1: <LTChar 378.407,707.298,395.768,727.723 matrix=[0.75,0.00,0.00,0.75, (378.41,715.06)] font='CAAAAA+Garet-Book' adv=23.12849915 text='O'>
1: <LTChar 399.502,707.298,413.473,727.723 matrix=[0.75,0.00,0.00,0.75, (399.50,715.06)] font='CAAAAA+Garet-Book' adv=18.611639316 text='R'>
1: <LTChar 417.208,707.298,436.019,727.723 matrix=[0.75,0.00,0.00,0.75, (417.21,715.06)] font='CAAAAA+Garet-Book' adv=25.060409079 text='M'>
</code></pre>
| <python><pdf><pdfminer><pdfminersix> | 2023-08-12 03:23:38 | 1 | 434 | Peyton Hanel |
76,887,599 | 21,305,238 | How to make mypy satisfied with my MutableMapping[str, int] whose __getitem__ may return None and __setitem__ must not accept None? | <p>I have the following class which maps a <code>str</code> to its corresponding <code>int</code> or <code>None</code> if there is no such key. I want it to be a subclass of <code>collections.abc.MutableMapping</code>. The real logic is a bit more complicated than just <code>self._record.get()</code>, but the whole thing boils down to just this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Iterator
from collections.abc import MutableMapping
class FooMap(MutableMapping[str, int]):
_record: dict[str, int]
def __init__(self) -> None:
self._record = {}
def __contains__(self, key: str) -> bool:
return self[key] is not None
def __getitem__(self, key: str) -> int | None:
return self._record.get(key)
def __setitem__(self, key: str, value: int) -> None:
self._record[key] = value
def __delitem__(self, key: str) -> None:
raise TypeError('Keys cannot be deleted')
def __len__(self) -> int:
return len(self._record)
def __iter__(self) -> Iterator[str]:
return iter(self._record)
</code></pre>
<p>However, <code>mypy</code> <a href="https://mypy-play.net/?mypy=1.5.0&python=3.11&flags=strict&gist=e23b8a6c51767ee1c774f4e63d16a263" rel="nofollow noreferrer">complains</a> about both the <code>__contains__()</code>:</p>
<pre class="lang-none prettyprint-override"><code>main.py:12: error: Argument 1 of "__contains__" is incompatible with supertype "Mapping"; supertype defines the argument type as "object" [override]
main.py:12: note: This violates the Liskov substitution principle
main.py:12: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides
main.py:12: error: Argument 1 of "__contains__" is incompatible with supertype "Container"; supertype defines the argument type as "object" [override]
</code></pre>
<p>...and the <code>__getitem__()</code> method:</p>
<pre class="lang-none prettyprint-override"><code>main.py:15: error: Return type "int | None" of "__getitem__" incompatible with return type "int" in supertype "Mapping" [override]
</code></pre>
<p>I get the latter: The <code>__getitem__()</code> method of a <code>Mapping[str, int]</code> is supposed to return an <code>int</code>, not <code>None</code>, but that is not my use case. Changing <code>int</code> to <code>int | None</code> doesn't help, since <code>__setitem__()</code>'s second argument will also need to be changed correspondingly.</p>
<p>The former is even more confusing: The first argument passed to <code>__contains__()</code> should be a <code>str</code>, since we are talking about a <code>Mapping[str, int]</code>. Yet, mypy expected something more generic than <code>object</code>, according to <a href="https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides" rel="nofollow noreferrer">the link it gave me</a>. I changed that to <code>key: object</code>, <a href="https://mypy-play.net/?mypy=1.5.0&python=3.11&flags=strict&gist=32ae34ca04d1de4c34849425d8a9dd0b" rel="nofollow noreferrer">but to no avail</a>, as <code>__getitem__()</code> wants a <code>str</code>.</p>
<p>I know I can just throw <code>MutableMapping</code> away or add a comment to explicitly tell mypy that it doesn't need to scrutinize a line, but I also don't want to do that.</p>
<p>How to make mypy happy while retaining my initial use case? I'm fine with using any features supported by mypy 1.4+ and Python 3.11+.</p>
| <python><mypy><python-typing> | 2023-08-12 03:16:56 | 1 | 12,143 | InSync |
76,887,572 | 7,169,895 | How do I find the XSFR token on a page? Is it possible that it could be dynamically generated such that I can not retrieve it? | <p>I am trying to retrieve data from a website into json then a DataFrame. I am trying to submit the XSFR token with my request like seen <a href="https://stackoverflow.com/questions/13567507/passing-csrftoken-with-python-requests">here</a>. My issue is that it seems there is no hidden input field containing the XSFR token. I had to hard code a XSFR token taken from following the answer <a href="https://stackoverflow.com/questions/69215705/scrapy-beautifulsoup-simulating-clicking-a-button-in-order-to-load-a-section-o">here</a>. Those tokens expire so I would like to find a way to get the XSRF token and submit it with my request. However, my request's header and cookies do not show me getting the token. Is it possible the token is being generated with JavaScript dynamically such that I can not retrieve it? Am I missing something?</p>
<p>My code:</p>
<pre><code>import requests
import json
import pandas as pd
from bs4 import BeautifulSoup
import requests
session = requests.Session()
print(session.cookies.get_dict())
response = session.get('https://www.finra.org/finra-data/fixed-income/corp-and-agency')
print("Cookies: ", session.cookies) # Nothing with the token
print("Param: ", session.params) # Nothing with the token
print("Header: ", session.params) # Nothing with the token
get_token_response = requests.get('https://www.finra.org/finra-data/fixed-income/corp-and-agency')
soup = BeautifulSoup(get_token_response.text)
print(get_token_response.headers) # Nothing with the token
print(get_token_response.cookies) # Nothing with the token
print(soup.findAll('input') # Nothing with the token
# Hard Coded
headers = {
'authority': 'services-dynarep.ddwa.finra.org',
'accept': 'application/json, text/plain, */*',
'content-type': 'application/json',
'cookie':'XSRF-TOKEN=578706e6-5dfa-4beb-b887-60b42da068be;',
'origin': 'https://www.finra.org',
'referer': 'https://www.finra.org/',
'x-xsrf-token': '578706e6-5dfa-4beb-b887-60b42da068be',
}
data = ('{"fields":["issueSymbolIdentifier","issuerName","isCallable","productSubTypeCode",'
'"couponRate","maturityDate","industryGroup","moodysRating",'
'"standardAndPoorsRating","lastSalePrice","lastSaleYield"],'
'"dateRangeFilters":[],"domainFilters":[],"compareFilters":[],'
'"multiFieldMatchFilters":[{"fuzzy":false,"searchValue":"gme","synonym":true,"fields":'
'[{"name":"issuerName","boost":1}]}],"orFilters":[],"aggregationFilter":null,'
'"sortFields":["+issuerName"],"limit":50,"offset":0,"delimiter":null,"quoteValues":false}')
response = requests.post('https://services-dynarep.ddwa.finra.org/public/reporting/v2/data/group/FixedIncomeMarket/name/CorporateAndAgencySecurities',
headers=headers, data=data)
print(response.status_code)
data = json.dumps(response.json()['returnBody']['data'], indent=4)
data = list(data.replace('\\n', '').replace('\\', ''))
data = ''.join(data[1:-1])
df = pd.read_json(data)
</code></pre>
| <python><python-requests> | 2023-08-12 02:55:39 | 0 | 786 | David Frick |
76,887,450 | 22,212,435 | Trying to create default parameters for the child class | <p>I want to create a class that will be a able to create a subclass. Here is an example of code:</p>
<pre><code>class System:
def __init__(self, a=0, b=0, c=0):
self.a, self.b, self.c = a, b, c
self.sub_systems = []
def create_sub_sys(self, a=None, b=None, c=None):
if a is None: a = self.a
if b is None: b = self.b
if c is None: c = self.c
ss = System(a, b, c)
self.sub_systems += [ss]
return ss
s = System(a=10, b=100, c=-19)
s1 = s.create_sub_sys() # want it to be exact same system, so a, b, c = 10, 100, -19
s2 = s.create_sub_sys(b=3) # create system with a, b, c = 10, 3, -19
s21 = s2.create_sub_sys(a=3, c=3) # so a, b, c all are 3. 'b' parameter inherits from a 's2' class
print(s1.a, s1.b, s1.c)
print(s2.a, s2.b, s2.c)
print(s21.a, s21.b, s21.c)
</code></pre>
<p>So child classes should take parameters, as parent have, if there are no parameters given by a user to that subclass. The code above is working, but it seems to be quite poor made. If i will have lots of parameters, there will be lots of lines of code. And also, this program that i posted is checking, if parameters are None. But I am making a program, where 'None' will actually means something (program should not assign parents parameter), so code will not working.</p>
<p>It is actually very hard to formulate a question, but I hope you can understand the code. So the system I want should check if the user have used some parameters of the class. If so, it will override parent's that are given by default. If not, it will leave parent's. Maybe you can give me advice how to improve it, make it more general (None in the code states, that I can't make some parameters of the child to be None type even if I want)</p>
| <python><python-3.x><default-value> | 2023-08-12 01:51:02 | 2 | 610 | Danya K |
76,887,395 | 7,169,895 | Parsing json data fails, not sure why | <p>Following <a href="https://stackoverflow.com/questions/69215705/scrapy-beautifulsoup-simulating-clicking-a-button-in-order-to-load-a-section-o">this question</a>, I got the code to make an ajax request here:</p>
<pre><code>import requests
import json
headers = {
'authority': 'services-dynarep.ddwa.finra.org',
'accept': 'application/json, text/plain, */*',
'accept-language': 'en-US,en;q=0.6',
'content-type': 'application/json',
}
data = '{"fields":["issueSymbolIdentifier","issuerName","isCallable","productSubTypeCode","couponRate","maturityDate","industryGroup","moodysRating","standardAndPoorsRating","lastSalePrice","lastSaleYield"],"dateRangeFilters":[],"domainFilters":[],"compareFilters":[],"multiFieldMatchFilters":[{"fuzzy":false,"searchValue":"gme","synonym":true,"fields":[{"name":"issuerName","boost":1}]}],"orFilters":[],"aggregationFilter":null,"sortFields":["+issuerName"],"limit":50,"offset":0,"delimiter":null,"quoteValues":false}'
response = requests.post('https://services-dynarep.ddwa.finra.org/public/reporting/v2/data/group/FixedIncomeMarket/name/CorporateAndAgencySecurities',
headers=headers, data=data)
print(response.status_code)
</code></pre>
<p>The cookie an CSRF code has been removed.</p>
<p>The data that I want is buried in the json response. However, any attempt to convert the json to a DataFrame or just a regular dictionary has failed.</p>
<pre><code>data = json.dumps(response.json()['returnBody']['data'], indent=4)
print(data.replace('\\n', '').replace('\\', ''))
# print(pd.read_json(data.replace('\\n', '').replace('\\', '')))
</code></pre>
<p>gives</p>
<pre><code>"[{"isCallable":"Y","couponRate":null,"issueSymbolIdentifier":"NTEB5563784","issuerName":"NTE MOBILITY PARTNERS SEGMENTS 3 LLC","maturityDate":"2028-06-30","productSubTypeCode":"CORP","moodysRating":"Baa2","standardAndPoorsRating":null,"lastSaleYield":null,"industryGroup":null,"lastSalePrice":null},{"isCallable":"Y","couponRate":null,"issueSymbolIdentifier":"NTEB5563785","issuerName":"NTE MOBILITY PARTNERS SEGMENTS 3 LLC","maturityDate":"2028-06-30","productSubTypeCode":"CORP","moodysRating":"Baa2","standardAndPoorsRating":null,"lastSaleYield":null,"industryGroup":null,"lastSalePrice":100},{"isCallable":"Y","couponRate":null,"issueSymbolIdentifier":"NTEB5563786","issuerName":"NTE MOBILITY PARTNERS SEGMENTS 3 LLC","maturityDate":"2028-06-30","productSubTypeCode":"CORP","moodysRating":"Baa2","standardAndPoorsRating":null,"lastSaleYield":null,"industryGroup":null,"lastSalePrice":100}]"
</code></pre>
<p>I replaced the /n and //s to make the data readable by pandas. However, it still fails reading the json with <code>ValueError: Trailing data</code>. What am I missing?</p>
| <python><pandas><python-requests> | 2023-08-12 01:25:47 | 0 | 786 | David Frick |
76,887,349 | 4,878,423 | Read Kafka store file location from S3 | <p>We are getting below error so we started to get kafka key and certificate from S3 location (s3://my-bucket/tmp/k2/truststore.jks) on databricks notebook</p>
<pre><code>DbxDlTransferError: Terminated with exception: Kafka store file location only supports external location or UC Volume path on Shared cluster. Use external location or UC Volume Path to provide it.: None
</code></pre>
<p>But while reading S3 location we are getting another error</p>
<pre><code>Caused by: java.nio.file.NoSuchFileException: s3:/my-bucket/tmp/k2/truststore.jks
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at kafkashaded.org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.load(DefaultSslEngineFactory.java:368)
... 86 more
</code></pre>
<blockquote>
<p>We pass S3 location with "//" but when java reads it, there is only
"/" slash.</p>
</blockquote>
<p>I tried looking online but couldn't find any solution. If someone has any idea please share here. Thanks in advance</p>
<p>Here is my code</p>
<pre><code>truststore_location = "s3://my-bucket/tmp/k2/truststore.jks"
cluster_ca_certificate_location = "s3://my-bucket/tmp/k2/cluster-ca-certificate.pem"
kafka_server = "server_ip:9093"
kafka_topic = "kafka_topic"
kafka_group_id = "group_id"
scram_login_module = 'org.apache.kafka.common.security.scram.ScramLoginModule required username="" password=""'
input_df = (
spark
.read
.format("kafka")
.option("kafka.bootstrap.servers", kafka_server)
.option("kafka.ssl.truststore.location", truststore_location)
.option("kafka.ssl.truststore.password", truststore_pass)
.option("kafka.ssl.ca.location", cluster_ca_certificate_location)
.option("kafka.security.protocol", "SASL_SSL")
.option("kafka.sasl.mechanism", "SCRAM-SHA-256")
.option("kafka.sasl.jaas.config", "kafkashaded.org.apache.kafka.common.security.scram.ScramLoginModule required username='{}' password='{}';".format(consumer_username, consumer_password))
.option("subscribe", kafka_topic)
.option("kafka.group.id", kafka_group_id)
.option("failOnDataLoss", "false")
.load()
)
</code></pre>
| <python><amazon-s3><apache-kafka><databricks><aws-databricks> | 2023-08-12 00:58:01 | 1 | 461 | Arvind Pant |
76,887,294 | 2,065,821 | OSX: Installing python cmdline tools never works | <p>I'm using OSX Monterey, and whenever I install a python package that is supposed to provide a commandline tool, say <code>pipreqs</code> for example I can never run it as suggested like this:</p>
<pre><code>$ pipreqs
</code></pre>
<p>Instead I am forced to run it as a module like this:</p>
<pre><code>$ python -m pipreqs
</code></pre>
<p>And this goes for <em>all</em> of these python modules which are supposed to provide a command I can run.</p>
<p>I can't even add them to my path cos I have no idea where these binaries are bieng installed, or even <em>if</em> they are being installed.</p>
| <python><macos><command-line><macos-monterey> | 2023-08-12 00:25:08 | 1 | 633 | pnadeau |
76,887,165 | 2,781,105 | Elementwise multiplication of dataframes in Python | <p>I have a dataframe which represents features of a linear regression model.</p>
<pre><code>df1 = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03', '2022-04','2022-05','2022-06','2022-07','2022-08','2022-09','2022-10'],
'feature1': [1000,2000,4000,3000,5000,2000,8000,2000,4000,3000],
'feature2': [9000,7000,3000,1000,2000,3000,6000,8000,1000,1000],
'feature3': [3000,1000,2000,5000,9000,7000,2000,3000,5000,9000]})
</code></pre>
<p><a href="https://i.sstatic.net/wJIdy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wJIdy.png" alt="enter image description here" /></a></p>
<p>I run the model and calculate the coefficients which produces another dataframe, below.</p>
<pre><code>df2 = pd.DataFrame({'feature': ['feature1','feature2','feature3'],
'coefficient': [-1,2,0.5]})
</code></pre>
<p><a href="https://i.sstatic.net/vZWQZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vZWQZ.png" alt="enter image description here" /></a></p>
<p>I then want to produce a third dataframe where the contents are the product of the values from df1 and the corresponding coefficients from df2.
Desired output below.</p>
<pre><code>df3 = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03', '2022-04','2022-05','2022-06','2022-07','2022-08','2022-09','2022-10'],
'feature1': [-1000,-2000,-4000,-3000,-5000,-2000,-8000,-2000,-4000,-3000],
'feature2': [18000,14000,6000,2000,4000,6000,12000,16000,2000,2000],
'feature3': [1500,500,1000,2500,4500,3500,1000,1500,2500,4500]})
</code></pre>
<p><a href="https://i.sstatic.net/dJkql.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dJkql.png" alt="enter image description here" /></a></p>
<p>I have tried to achieve this using <code>mul</code> and <code>multiply</code> in the following manner, however this does not produce the desired result.</p>
<pre><code>features = [['feature1', 'feature2', 'feature3']]
results = pd.DataFrame()
for cols in features:
results[cols] = df1[cols]
results = df1.mul(df2['coefficient'], axis =0)
results
</code></pre>
| <python><pandas><dataframe><numpy><elementwise-operations> | 2023-08-11 23:28:22 | 3 | 889 | jimiclapton |
76,887,112 | 11,058,930 | Getting 200 Instead of 201 Status Code When Posting to WordPress with Python (Rest API) | <p>I'm trying to create posts on my WordPress website via Python using the below code, but the <code>response.json()</code> just returns a <code>200</code> and a list of all the existing posts on my site in JSON format, i.e.:</p>
<blockquote>
<p>[{'id': 1,
'date': '2023-07-15T17:46:24',
'date_gmt': '2023-07-15T17:46:24',
'guid': {'rendered': 'http://1234.5.6.7/?p=1'},
'modified': '2023-07-24T03:58:06',
'modified_gmt': '2023-07-24T03:58:06',
'slug': 'slug',
'status': 'publish',
'type': 'post',
'link': 'https://www.example.com/2023/07/15/slug/',
'title': {'rendered': 'Title'},
'content': {'rendered': '\t\t\n\t\t\t\t\t\t\t\t\te',
'protected': False},
'excerpt': {'rendered': 'Welcome to WordPress. This is your first post. Edit or delete it, then start writing!\n',
'protected': False},
'author': 1,
'featured_media': 3957,
'comment_status': 'open',
'ping_status': 'open',
'sticky': False,
'template': '',
'format': 'video',
'meta': {'footnotes': '',
'csco_singular_sidebar': '',
'csco_page_header_type': '',
.........</p>
</blockquote>
<p>This my python code:</p>
<pre><code>import requests
# Set the URL of the WordPress REST API.
url = 'https://example.com/wp-json/wp/v2/posts'
# Set the username and password for your WordPress account.
username = 'UserNameHere'
password = 'MyPasswordHere'
# Create a JSON object that contains the post data.
post_data = {
'title': 'My First Post',
'content': 'This is my first python post!',
'status': 'publish'
}
# Add the authorization header to the request.
headers = {'Authorization': 'Basic {}'.format(username + ':' + password)}
# Add the Content-Type header to the request.
headers['Content-Type'] = 'application/json'
# Use the requests library to make a POST request to the WordPress REST API.
response = requests.post(url, headers=headers, json=post_data)
# If the request is successful, the response will be a JSON object that contains the post data.
if response.status_code == 201:
post = response.json()
print(post)
else:
print('Error:', response.status_code, response.json())
</code></pre>
<p>Any ideas?</p>
| <python><wordpress><wordpress-rest-api> | 2023-08-11 23:12:07 | 2 | 1,747 | mikelowry |
76,887,071 | 12,841,609 | Django "django-modeladmin-reorder" library does not work for side navigation/nav bar of admin | <p>I have a django project, which contains several models with natural grouping, as well as some off-the-shelf models from install libraries.</p>
<p>I am trying to have this natural grouping reflected in the admin hope page, as well as the side navigation (nav bar) of the admin panel. By using <a href="https://github.com/mishbahr/django-modeladmin-reorder" rel="nofollow noreferrer">django-modeladmin-reorder</a> library, the home page is properly grouped as I want. Though, the nav bar ordering is not working and the models appear in their original order (alphabetically, grouped by the library/file where they are fetched from).</p>
<p>I have tried several complex (or more complex than just plugging a library) solutions, such as overriding the nav bar/app list html of django and create other templates and creating custom middlewares.</p>
| <python><django><django-models><django-admin><django-modeladmin> | 2023-08-11 23:01:11 | 1 | 353 | Svestis |
76,886,866 | 2,302,819 | How does Copy-on-Write (CoW) work when using the new pandas pyarrow backend? | <p>I think the question says it all β the pandas docs have good information on both the new <a href="https://pandas.pydata.org/docs/user_guide/copy_on_write.html" rel="nofollow noreferrer">CoW behavior</a> and the <a href="https://pandas.pydata.org/docs/user_guide/pyarrow.html" rel="nofollow noreferrer">pyarrow backend</a>, but was wondering if CoW works the same with pyarrow as it does with a numpy backend.</p>
| <python><pandas><pyarrow> | 2023-08-11 21:57:36 | 1 | 3,733 | nick_eu |
76,886,864 | 14,222,845 | Unable to install pyinstaller in anaconda environment | <p>I have an environment in Anaconda called <code>myEnv</code>. I am trying to install pyinstaller to it. I tried all 3 of the options given here <a href="https://anaconda.org/conda-forge/pyinstaller" rel="nofollow noreferrer">https://anaconda.org/conda-forge/pyinstaller</a> for installing pyinstaller, however, none of them worked.</p>
<p>This is what the message looks like in Anaconda Prompt:</p>
<pre><code>Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
-pyinstaller
</code></pre>
<p>This is weird since I do have <code>conda-forge</code> as a channel:</p>
<pre><code>conda config --show channels
</code></pre>
<p>The response is:</p>
<pre><code>channels:
- defaults
- conda-forge
</code></pre>
<p>As a last resort, I tried <code>pip install pyinstaller</code> and that gave an error.</p>
<pre><code>WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authentication Required'))': /simple/pyinstaller/
ERROR: Could not find a version that satisfies the requirement pyinstaller (from versions: none)
ERROR: No matching distribution found for pyinstaller
WARNING: There was an error checking the latest version of pip.
</code></pre>
<p><strong>Edit:</strong> I also tried <code>conda update conda</code> but that still didn't seem to do the trick.</p>
<p>Installing it from the Anaconda channel also didn't work:</p>
<pre><code>conda install -c anaconda pyinstaller
</code></pre>
<p>It just gave me the same error when I tried to install <code>pyinstaller</code> from the <code>conda-forge</code> channel.</p>
<p>Some Extra Details:</p>
<p>My Python version: 3.10.9</p>
<p>Laptop: Windows 64</p>
| <python><pip><anaconda><conda><pyinstaller> | 2023-08-11 21:57:08 | 3 | 330 | Diamoniner12345 |
76,886,516 | 45,139 | PyCharm not working with existing virtual environment | <p>I'm able to use python3 and venv with other editors like emacs and nano, but for some reason PyCharm seems to choke on an existing environment.</p>
<p>I'm using Debian 12, and I have PyCharm Community install via Flatpack. My system python is 3.11.2, and I created the virtual environment with <code>python3 -m venv venv</code>. Then I install my packages with pip, write code in some other editor, run it in the terminal, and it all works fine.</p>
<p>When I go to open the project in PyCharm however, it flags all my imports like "no module named matplotlib". When I try to configure the interpreter in File->Settings it says it's pointing to <code>home/user/Projects/test_a/venv/bin/python</code>, which seems correct, but it also says "Python packaging tools not found", and then I get an error modal saying:</p>
<pre><code>Executed command:
/home/user/Projects/test_a/venv/bin/python
/app/pycharm/plugins/pyton-ce/helpers/packaging_tool.py list
Error: Python packaging tool 'setuptools' not found
</code></pre>
<p>When I check my pip list, both at the system level and in the venv, setuptools is listed.
This does not happen if I create a new project in PyCharm and let it create the venv.</p>
<p>Any ideas on how I can get PyCharm to work existing virtual environments?</p>
| <python><pycharm><python-venv> | 2023-08-11 20:32:34 | 2 | 3,947 | LoveMeSomeCode |
76,886,418 | 2,708,215 | Writing contents of gensim Doc2Vec to Azure blob | <p>I have a Python script which will have effectively no local file storage access. It is currently a Jupyter notebook, but needs to become an Azure Databrick later on. It will, however, have access to Azure blob storage.</p>
<p>We are currently writing the output using this type of process:</p>
<pre><code>doc2vec_results = Doc2Vec( <parameters and values> )
doc2vec_results.build_vocab(data)
<processing and stuff>
doc2vec_results.save(file_location)
</code></pre>
<p>That works, but without local storage that process no longer works. So, instead, it would be very convenient to save what would have been the contents of that file to a variable, like <code>doc2vec_result_savedata</code> and then write that to a blob:</p>
<pre><code>new_blob_name = f"doc2vec_data_{timestamp}"
blob_url = f"{account_url}/{container_name}/{new_blob_name}"
blob = BlobClient.from_blob_url(
blob_url=blob_url,
credential=credential
)
blob.upload_blob(doc2vec_result_savedata)
</code></pre>
<p>Is there a way to accomplish this result? The methods in the Doc2Vec class don't present an applicable option. I'm also considering making a file object (something from the io module, perhaps?), writing to that, reading it to a variable, etc... but that really seems excessive. What am I missing?</p>
| <python><python-3.x> | 2023-08-11 20:02:27 | 1 | 503 | Andrew |
76,886,259 | 1,471,980 | how do you add values in pandas dataframe column based on group values | <p>I have a data frame like this</p>
<pre class="lang-py prettyprint-override"><code>print(df)
#Hostname Slot Port Reserved
#Server1 1 1 0
#Server1 1 2 0
#Server1 2 3 1
#Server2 2 1 1
#Server2 2 2 0
#Server2 2 3 1
#Server3 1 1 0
#Server3 2 2 0
#Server3 3 3 1
</code></pre>
<p>I need to sum <code>Reserved</code> column by <code>Hostname</code> and <code>Slot</code> columns.</p>
<pre class="lang-none prettyprint-override"><code>Hostname Slot Total_Reserved
Server1 1 1
Server2 2 2
Server3 1 1
</code></pre>
<p>I tried this to no avail:</p>
<pre class="lang-py prettyprint-override"><code>new_df = df.groupby(['Hostname', 'Slot', 'Reserved']).sum()
</code></pre>
| <python><pandas> | 2023-08-11 19:24:18 | 1 | 10,714 | user1471980 |
76,886,238 | 10,520,077 | ValueError when using custom new in Python Enum | <p>I'm encountering an issue with the following Python code where a ValueError is raised when trying to create an instance of an Enum with a value that doesn't correspond to any defined enum member:</p>
<pre><code>from enum import Enum
class Option(Enum):
OPTION_1 = "Option 1"
OPTION_2 = "Option 2"
NONE = ""
def __new__(cls, value):
try:
obj = object.__new__(cls)
obj._value_ = value
return obj
except ValueError:
return Option.NONE
tmp = Option("Option 3")
</code></pre>
<p>The ValueError seems to be related to how the <code>__new__</code> method is handling the invalid values. I want the code to create an instance of the <code>NONE</code> member in case an invalid value is provided, but it doesn't seem to be working as expected.</p>
<p>Why is this code resulting in a ValueError and how can I achieve the desired behavior of using the <code>NONE</code> member for invalid values?</p>
| <python><enums> | 2023-08-11 19:18:59 | 1 | 911 | DomDunk |
76,886,019 | 2,501,018 | Create new named color in Matplotlib | <p>I have a few colors I use over and over again in my work. I would like to "register" these as named colors in Matplotlib (by executing code in my work, not by changing the source of Matplotlib in my install) so that they can be referred to by string in the way any "included" color can be.</p>
<p>For example, I want to be able to say <code>my_favorite_blue = '#76c4db'</code> and then run <code>plt.plot(range(5), range(5), color='my_favorite_blue')</code> and have it work.</p>
<p>I don't mind if the solution is ugly. But would like to be able to run something like:</p>
<pre><code>import matplotlib.pyplot as plt
def register_new_color(name, value):
*** your brilliance goes here***
register_new_color('my_favorite_blue', '#76c4db')
plt.plot(range(5), range(5), color='my_favorite_blue')
</code></pre>
<p>Thanks in advance!</p>
| <python><matplotlib> | 2023-08-11 18:38:22 | 2 | 13,868 | 8one6 |
76,885,984 | 4,354,477 | "Array slice indices must have static start/stop/step" when `jax.lax.scan`ning involving an Equinox model | <p>The Equinox model looks like this:</p>
<pre><code>import jax
import jax.numpy as jnp, jax.random as jrnd
import equinox as eqx
class Model(eqx.Module):
lags: list[int]
linear: eqx.Module
def __init__(self, lags: list[int]=[22, 5, 1], *, key: jrnd.PRNGKeyArray):
self.lags = lags
self.linear = eqx.nn.Linear(len(lags), 1, key=key)
def __call__(self, x, key=None):
x_new = jnp.array([
x[-lag:].mean() for lag in self.lags
])
return self.linear(x_new)
</code></pre>
<p>Notice how I have <code>x[-lag:].mean()</code> inside <code>__call__</code>, which apparently causes issues.</p>
<p>The rest of my code does something like this:</p>
<pre><code>@eqx.filter_value_and_grad
def loss(model: eqx.Module, X: jax.Array, y: jax.Array):
y_pred = eqx.filter_vmap(model)(X)
return ((y_pred - y)**2).mean()
@eqx.filter_jit
def scanner(model: eqx.Module, _):
loss_val, _ = loss(model, X, y)
return model, loss_val
key = jrnd.PRNGKey(5)
X = jrnd.normal(key, (50, 22))
y = jrnd.uniform(key, (X.shape[0], ))
model = Model(key=key)
print(model)
print("`vmap` works:")
print(eqx.filter_jit(loss)(model, X, y))
print("`scan` errors:")
jax.lax.scan(scanner, model, jnp.arange(5))
</code></pre>
<p>When I run this, I get this error:</p>
<pre><code>The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "_question_repro.py", line 40, in <module>
jax.lax.scan(scanner, model, jnp.arange(5))
File "_question_repro.py", line 26, in scanner
loss_val, _ = loss(model, X, y)
^^^^^^^^^^^^^^^^^
File "_question_repro.py", line 21, in loss
y_pred = jax.vmap(model)(X)
^^^^^^^^^^^^^^^^^^
File "_question_repro.py", line 14, in __call__
x_new = jnp.array([
^
File "_question_repro.py", line 15, in <listcomp>
x[-lag:].mean() for lag in self.lags
~^^^^^^^
File "/Users/forcebru/.pyenv/versions/3.11.4/lib/python3.11/site-packages/jax/_src/numpy/array_methods.py", line 723, in op
return getattr(self.aval, f"_{name}")(self, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/forcebru/.pyenv/versions/3.11.4/lib/python3.11/site-packages/jax/_src/numpy/lax_numpy.py", line 4153, in _rewriting_take
return _gather(arr, treedef, static_idx, dynamic_idx, indices_are_sorted,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/forcebru/.pyenv/versions/3.11.4/lib/python3.11/site-packages/jax/_src/numpy/lax_numpy.py", line 4162, in _gather
indexer = _index_to_gather(shape(arr), idx) # shared with _scatter_update
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/forcebru/.pyenv/versions/3.11.4/lib/python3.11/site-packages/jax/_src/numpy/lax_numpy.py", line 4414, in _index_to_gather
raise IndexError(msg)
IndexError: Array slice indices must have static start/stop/step to be used with NumPy indexing syntax. Found slice(Traced<ShapedArray(int32[], weak_type=True)>with<DynamicJaxprTrace(level=2/0)>, None, None). To index a statically sized array at a dynamic position, try lax.dynamic_slice/dynamic_update_slice (JAX does not support dynamically sized arrays within JIT compiled functions).
</code></pre>
<p>The error message suggests that I'm trying to "index a statically sized array at a dynamic position", but I'm not: <code>self.lags</code> is a list of integers that's supposed to be constant.</p>
<p>According to <a href="https://stackoverflow.com/a/76636892">this answer</a>, "Inside transformations like <code>jit</code> or <code>vmap</code>, JAX array shapes must be static". Sure, but then the <code>jit</code> and <code>vmap</code> calls in my code shouldn't have worked, but they did. The only thing that <em>doesn't</em> work is <code>jax.lax.scan(scanner, model, jnp.arange(5))</code>.</p>
<p>What's special about <code>jax.lax.scan</code> that makes my code produce this error? How do I fix this?</p>
| <python><jax> | 2023-08-11 18:32:16 | 1 | 45,042 | ForceBru |
76,885,963 | 9,371,999 | Azure function not deploying when importing azure.search.documents (time trigger) | <p>I am trying to deploy a simple time-triggered function in my azure subscription. The <code>__init__.py</code> code us basically the template. However, I am encountering issues when deploying the function only when importing (not even using) one module/library. The line causing the problem is:</p>
<ul>
<li><code>from azure.search.documents import SearchClient</code></li>
</ul>
<p>So, the code that is correctly deployed and triggered:</p>
<pre><code>import os
import sys
from io import StringIO
import csv
import time
import json
import openai
import logging
import requests
from dateutil import parser
from dotenv import load_dotenv
from azure.storage.blob import BlobServiceClient,ContainerClient,BlobClient
from tenacity import retry, wait_random_exponential, stop_after_attempt
from azure.core.credentials import AzureKeyCredential
from azure.keyvault.secrets import SecretClient
from azure.identity import ClientSecretCredential
from azure.identity import DefaultAzureCredential
#from azure.search.documents import SearchClient
import datetime
import azure.functions as func
from . import util_codes_custom as ucc
def main(mytimer: func.TimerRequest) -> None:
utc_timestamp = datetime.datetime.utcnow().replace(
tzinfo=datetime.timezone.utc).isoformat()
if mytimer.past_due:
logging.info('The timer is past due!')
logging.info('Python timer trigger function ran at %s', utc_timestamp)
logging.info(ucc.print_message("keyvault?"))
</code></pre>
<p>When deployed, azure monitor shows all blue lines like:</p>
<pre><code>2023-08-11T17:33:00Z [Verbose] Sending invocation id: 'fa46dc97-ab44-4732-xxxx-65d165d76a17
2023-08-11T17:33:00Z [Verbose] Posting invocation id:fa46dc97-ab44-4732-xxxx-65d165d76a17 on workerId:dc4b5bc9-d9c4-403d-a51f-5b72c86eaf2a
2023-08-11T17:33:00Z [Information] Python timer trigger function ran at 2023-08-11T17:33:00.004288+00:00
2023-08-11T17:33:00Z [Information] keyvault?
2023-08-11T17:33:00Z [Information] Executed 'Functions.azurefunction-minutas-timetrigger' (Succeeded, Id=fa46dc97-ab44-4732-b0aa-65d165d76a17, Duration=19ms)
2023-08-11T17:33:20Z [Information] Executing 'Functions.azurefunction-minutas-timetrigger' (Reason='Timer fired at 2023-08-11T17:33:20.0002929+00:00', Id=3c23fc7a-a0d3-4e42-9b0f-b4c103ba3bba)
</code></pre>
<p>The moment I uncomment the <code>azure.search.documents</code> import line the function fail and give the following log messages in the monitor (when deployed):</p>
<pre><code>2023-08-11T18:15:20Z [Verbose] Sending invocation id: '096b7468-e0ba-4641-8562-d85779b551f3
2023-08-11T18:15:20Z [Verbose] Posting invocation id:096b7468-e0ba-4641-XXXX-d85779b551f3 on workerId:b4c17b5d-53c6-4455-bfa8-d01f322a28b0
2023-08-11T18:15:20Z [Error] Executed 'Functions.azurefunction-minutas-timetrigger' (Failed, Id=096b7468-e0ba-4641-8562-d85779b551f3, Duration=20ms)
</code></pre>
<p>My pip list (what's installed on .venv) is:</p>
<pre><code>Package Version
--------------------------- ------------------
aiohttp 3.8.5
aiosignal 1.3.1
async-timeout 4.0.3
attrs 23.1.0
azure-ai-formrecognizer 3.2.1
azure-common 1.1.28
azure-core 1.27.1
azure-functions 1.15.0
azure-identity 1.13.0
azure-keyvault 4.2.0
azure-keyvault-certificates 4.7.0
azure-keyvault-keys 4.8.0
azure-keyvault-secrets 4.7.0
azure-search-documents 11.4.0a20230509004
azure-storage-blob 12.16.0
certifi 2023.7.22
cffi 1.15.1
charset-normalizer 3.2.0
colorama 0.4.6
cryptography 41.0.3
frozenlist 1.4.0
idna 3.4
isodate 0.6.1
msal 1.23.0
msal-extensions 1.0.0
msrest 0.7.1
multidict 6.0.4
numpy 1.25.1
oauthlib 3.2.2
openai 0.27.8
pip 23.2.1
portalocker 2.7.0
pycparser 2.21
PyJWT 2.8.0
python-dateutil 2.8.2
python-dotenv 1.0.0
pywin32 306
requests 2.31.0
requests-oauthlib 1.3.1
setuptools 65.5.0
six 1.16.0
tenacity 8.2.2
tqdm 4.66.1
typing_extensions 4.7.1
urllib3 2.0.4
yarl 1.9.2
</code></pre>
<p>Note that the version of <code>azure.search.documents</code> is not a standard one. It is installed in the requirement.txt as:</p>
<pre><code>-i https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple/
azure-search-documents==11.4.0a20230509004
</code></pre>
<p>What can be going wrong? I read something about "dockerizing" the function. Is this the only solution?</p>
<p>Thanks in advance!</p>
| <python><azure><azure-functions> | 2023-08-11 18:28:17 | 1 | 529 | GEBRU |
76,885,853 | 6,087,667 | Apply a function over last two dimensions | <p>How can I apply a function over last two dimensions? E.g. I generated an array below of (2,3,3) dimensions, the resulting array should have the same dimenstions where the function is apply to a[0,:,:] and a[1,:,:]. I understand I can go with a <code>for</code> loop, but might there be an in-built function specially for these type of operations?</p>
<pre><code>a = np.arange(18).reshape(2,3,3)
c = []
for i in range(2):
c.append(np.linalg.pinv(a[i,:,:]))
result = np.array(c)
</code></pre>
<p>Here I used <code>np.linalg.pinv</code> but assume a generic function <code>f: (n,k)-> (n,k)</code>, e.g <code>f = lambda x: x**3 +61</code></p>
| <python><numpy><multidimensional-array> | 2023-08-11 18:07:17 | 2 | 571 | guyguyguy12345 |
76,885,799 | 1,368,435 | Python Modbus TCP read input_registers 0x4 get error with MBAP headers | <p>Hi I have developed this code, using pymodbus 3.4.1 libs and pymodbustcp 0.2.0 libs to check who works better.</p>
<p>Python is the standard of Debian 11 and is versione 3.9.2</p>
<p>I activated the debug for both libs:</p>
<pre><code>from pyModbusTCP.client import ModbusClient
c = ModbusClient(host="192.168.1.134", port=1024, unit_id=10, debug=True)
c.open()
if c.is_open == False:
print('not connected')
else:
print('connected')
print(c.unit_id)
c.read_input_registers(1,1)
c.close()
from pymodbus.client import ModbusTcpClient
from pymodbus.transaction import ModbusSocketFramer as ModbusFramer
import logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(logging.DEBUG)
ip = "192.168.1.134"
port_num = 1024
unit_num = 10
client = ModbusTcpClient(ip, port=port_num, framer=ModbusFramer)
client.connect()
rr = client.read_input_registers(1,1, unit=unit_num)
print(rr)
</code></pre>
<p>The output of the two libs are that:</p>
<pre><code>connected
10
Tx
[DD 02 00 00 00 06 0A] 04 00 01 00 01
Rx
[57 65 6C 63 6F 6D 65]
MBAP checking error
DEBUG:pymodbus.logging:Connection to Modbus server established. Socket ('192.168.1.125', 46270)
DEBUG:pymodbus.logging:Current transaction state - IDLE
DEBUG:pymodbus.logging:Running transaction 1
DEBUG:pymodbus.logging:SEND: 0x0 0x1 0x0 0x0 0x0 0x6 0x0 0x4 0x0 0x1 0x0 0x1
DEBUG:pymodbus.logging:New Transaction state "SENDING"
DEBUG:pymodbus.logging:Changing transaction state from "SENDING" to "WAITING FOR REPLY"
DEBUG:pymodbus.logging:Incomplete message received, Expected 28531 bytes Received 19 bytes !!!!
DEBUG:pymodbus.logging:Changing transaction state from "WAITING FOR REPLY" to "PROCESSING REPLY"
DEBUG:pymodbus.logging:RECV: 0x57 0x65 0x6c 0x63 0x6f 0x6d 0x65 0x20 0x74 0x6f 0x20 0x54 0x63 0x70 0x53 0x72 0x76 0xd 0xa
DEBUG:pymodbus.logging:Processing: 0x57 0x65 0x6c 0x63 0x6f 0x6d 0x65 0x20 0x74 0x6f 0x20 0x54 0x63 0x70 0x53 0x72 0x76 0xd 0xa
DEBUG:pymodbus.logging:Frame check failed, ignoring!!
DEBUG:pymodbus.logging:Resetting frame - Current Frame in buffer - 0x57 0x65 0x6c 0x63 0x6f 0x6d 0x65 0x20 0x74 0x6f 0x20 0x54 0x63 0x70 0x53 0x72 0x76 0xd 0xa
DEBUG:pymodbus.logging:Getting transaction 1
DEBUG:pymodbus.logging:Changing transaction state from "PROCESSING REPLY" to "TRANSACTION_COMPLETE"
Modbus Error: [Input/Output] No Response received from the remote slave/Unable to decode response
</code></pre>
<p>Does someone have face off the same error with some software modbus server from some machine producers?
All suggestions are welcome :)</p>
<p>Edit:</p>
<p>Hi! I try a session with mbpoll as u suggest and this is what I got:</p>
<pre><code>mbpoll 192.168.1.134 -a 10 -p 1024 -t 3 -r 1
mbpoll 1.5-2 - ModBus(R) Master Simulator
Copyright (c) 2015-2023 Pascal JEAN, https://github.com/epsilonrt/mbpoll
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions; type 'mbpoll -w' for details.
Protocol configuration: ModBus TCP
Slave configuration...: address = [10]
start reference = 1, count = 1
Communication.........: 192.168.1.134, port 1024, t/o 1.00 s, poll rate 1000 ms
Data type.............: 16-bit register, input register table
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
-- Polling slave 10... Ctrl-C to stop)
Read input register failed: Invalid data
^C--- 192.168.1.134 poll statistics ---
12 frames transmitted, 0 received, 12 errors, 100.0% frame loss
everything was closed.
Have a nice day !
</code></pre>
<p>with modpoll (with tcp and try rtu encapsulated in tcp):</p>
<pre><code>./modpoll -m enc -a 10 -p 1024 -t 3 -r 1 192.168.1.134
modpoll 3.10 - FieldTalk(tm) Modbus(R) Master Simulator
Copyright (c) 2002-2021 proconX Pty Ltd
Visit https://www.modbusdriver.com for Modbus libraries and tools.
Protocol configuration: Encapsulated RTU over TCP, FC4
Slave configuration...: address = 10, start reference = 1, count = 1
Communication.........: 192.168.1.134, port 1024, t/o 1.00 s, poll rate 1000 ms
Data type.............: 16-bit register, input register table
-- Polling slave... (Ctrl-C to stop)
Checksum error!
-- Polling slave... (Ctrl-C to stop)
Reply time-out!
-- Polling slave... (Ctrl-C to stop)
Checksum error!
^C
./modpoll -m tcp -a 10 -p 1024 -t 3 -r 1 192.168.1.134
modpoll 3.10 - FieldTalk(tm) Modbus(R) Master Simulator
Copyright (c) 2002-2021 proconX Pty Ltd
Visit https://www.modbusdriver.com for Modbus libraries and tools.
Protocol configuration: MODBUS/TCP, FC4
Slave configuration...: address = 10, start reference = 1, count = 1
Communication.........: 192.168.1.134, port 1024, t/o 1.00 s, poll rate 1000 ms
Data type.............: 16-bit register, input register table
-- Polling slave... (Ctrl-C to stop)
Invalid MPAB identifier!
-- Polling slave... (Ctrl-C to stop)
Reply time-out!
-- Polling slave... (Ctrl-C to stop)
Invalid MPAB identifier!
-- Polling slave... (Ctrl-C to stop)
Reply time-out!
^C
</code></pre>
<p>This is the simple code of the nodes program that I try and works:</p>
<pre><code>// create an empty modbus client
const ModbusRTU = require("modbus-serial");
const client = new ModbusRTU();
// open connection to a tcp line
client.connectTCP("192.168.1.134", { port: 1024 });
client.setID(10);
// read the values of 10 registers starting at address 0
// on device number 1. and log the values to the console.
setInterval(function() {
client.readInputRegisters(0, 10, function(err, data) {
console.log(data.data);
});
}, 1000);
</code></pre>
<p>and this is the result:</p>
<pre><code>node modbus.js
[
3, 53, 2, 0, 0,
0, 0, 0, 0, 0
]
[
3, 53, 2, 0, 0,
0, 0, 0, 0, 0
]
[
3, 53, 2, 0, 0,
0, 0, 0, 0, 0
]
[
3, 53, 2, 0, 0,
0, 0, 0, 0, 0
]
[
3, 53, 2, 0, 0,
0, 0, 0, 0, 0
]
[
3, 53, 2, 0, 0,
0, 0, 0, 0, 0
]
^C
</code></pre>
<p>Probably the nodes lib don't make some checks on the server and get the data rightly but probably is out of modbus standard standard.</p>
| <python><dictionary><debugging><modbus> | 2023-08-11 17:58:42 | 1 | 742 | FrancoTampieri |
76,885,715 | 12,027,858 | Salesforce's Metadata API .describe() method missing many fields compared to .read() | <p>When I use simple-salesforce's Metadata api read method <code>_sf.mdapi.CustomObject.read(sf_object_name)</code>, I get what seems to be a full list of fields from my object. However, if I use the describe method <code>getattr(_sf, sf_object_name).describe()</code>, the metadata returned is missing dozens of fields. Why is that the case for the same object?</p>
<p>I'd prefer to use .describe(), since it returns usable <code>picklistValues</code> vs. .read(), which sometimes gives a <code>valueSetName</code> that doesn't include the actual picklist values, just the name of a valueSet which I'd have to re-query for.</p>
| <python><salesforce><metadata><simple-salesforce><sfdc-metadata-api> | 2023-08-11 17:44:36 | 1 | 600 | ezeYaniv |
76,885,582 | 8,110,650 | Setting up Ruff (Python Linter) | <p>As far as I understood <code>ruff</code> implements <code>pycodestyle</code> rules by default. However, when I run my code through pycodestyle I get:</p>
<pre><code>test.py:5:1: E302 expected 2 blank lines, found 1
test.py:8:1: E302 expected 2 blank lines, found 1
test.py:13:9: E129 visually indented line with same indent as next logical line
test.py:22:1: E305 expected 2 blank lines after class or function definition, found 1
test.py:32:5: E265 block comment should start with '# '
test.py:34:5: E265 block comment should start with '# '
test.py:43:14: W292 no newline at end of file
</code></pre>
<p>While <code>ruff</code> gives me all good.</p>
<p>Anyone can tell me why that is the case?</p>
| <python><pycodestyle><ruff> | 2023-08-11 17:20:48 | 3 | 919 | SrdjaNo1 |
76,885,569 | 3,520,791 | python map iterate over a list and function on another list | <p>How in Python is the map function used to iterate over a list (iterable) and do the logic on another list(iterable). For example, we have a list of indices as <code>indices</code> and a list of strings as <code>str_list</code>. Then, we want to make the the element at index <code>i</code> taken from <code>indices</code> of <code>str_list</code> to be empty. Yes, the indices are guaranteed to be in the range of <code>str_list</code> length. There are simple ways to do, however, I want to figure out how the map function does work on two separate list scenarios. This I came up with has error:</p>
<pre><code> str_list = map(lambda index : str_list[index] = "", indices)
SyntaxError: expression cannot contain assignment, perhaps you meant "=="?
</code></pre>
| <python><python-3.x><list><mapreduce> | 2023-08-11 17:18:16 | 1 | 469 | Alan |
76,885,568 | 3,948,766 | Custom metrics in keras to evaluate directional accuracy | <p>I am developing a metric in keras to evaluate if the direction of the model prediction is correct. My model looks at forex historical data and predicts the close price of the next candle. I have a callback class that does this calculation, but I really want to convert it into a loss function, my function is kind of a hack and does not use the keras tensors so it is not working as a loss function.</p>
<p>I have looked at <a href="https://stackoverflow.com/questions/46735166/custom-metrics-in-keras-to-evaluate-sign-prediction">This Question</a>, but it is looking at the sign of <code>y_true</code> and <code>y_pred</code>. My true and predictions are all positive so that method just goes to 1.0. I need something that will look at <code>close</code> value of <code>y_true[n]</code>, <code>y_true[n-1]</code>, <code>y_pred[n]</code>, and <code>y_pred[n-1]</code>. Then if the values at <code>n</code> is greater (or less than) than <code>n-1</code> for both <code>y_pred</code> and <code>y_true</code> then the direction is predicted correctly. Then <code>1 - sum(correct_values)/total_values</code> would be my loss.</p>
<p>I am picturing something along the lines of</p>
<pre class="lang-py prettyprint-override"><code>def directional_accuracy(y_true, y_pred):
return 1 - (K.equal(K.sign(y_true - y_true[n-1]), K.sign(y_pred - y_pred[n-1)))/K.count(y_true)
</code></pre>
<p>I know there is not a <code>K.count</code> method, but you get the idea.</p>
<p>I am still learning about tensors, so any help you can provide on how to implement this would be appreciated.</p>
| <python><tensorflow><keras> | 2023-08-11 17:18:12 | 2 | 356 | Infinity Cliff |
76,885,534 | 251,276 | Why is memory usage high after calling scipy.ndimage.center_of_mass? | <p>I'm using the scipy.ndimage module to get information about labeled objects in a 3D array. I am trying to reduce the memory used by this process because it is a bottleneck limiting the number of parallel processes we can run on our compute server. I noticed that after calling the center_of_mass function, the memory usage skyrockets and does not match the size of the data returned from the function. I am calling the function like this:</p>
<pre><code>ndimage.center_of_mass(values, labels, range(1,np.max(labels)+1))
</code></pre>
<p>The shape of <code>values</code> and <code>labels</code> is <code>(20, 2048, 2048)</code>. Before I call center_of_mass, I print out the memory usage using the resource module:</p>
<pre><code>print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
</code></pre>
<p>Then I print it again after calling <code>center_of_mass</code>. Before, this reported <code>6504168</code> and afterwards it gives <code>11255620</code>, so the memory usage increased by almost 5 GB. But the shape of what is returned from <code>center_of_mass</code> is <code>(24187884, 3)</code> which should only require about 580 MB of memory (confirmed from checking <code>nbytes</code> on the ndarray). I am also monitoring the process with <code>top</code> and the process continues to use significantly more memory, so it isn't just a temporary spike in memory usage during the function call.</p>
<p>What is causing such a huge jump in memory usage? Is there any way I can reduce it?</p>
| <python><arrays><numpy><scipy> | 2023-08-11 17:11:46 | 1 | 10,920 | Colin |
76,885,399 | 288,271 | Python, WebDriver, how to get number of elements without triggering implicit wait if there are none? | <p>What's the best way to get the number of elements on a page without triggering an implicit wait if there are no such elements on the page? At the moment, I'm using this:</p>
<pre><code> def get_number_of_elements(self, by, locator):
self.driver.implicitly_wait(0)
elements = self.driver.find_elements(by, locator)
self.driver.implicitly_wait(self.config.IMPLICIT_TIMEOUT)
return count(elements)
</code></pre>
<p>This works, but is there a better way of doing this?</p>
<p><strong>UPDATE:</strong></p>
<p>The issue was cause because of the way I was instantiating <code>webdriver.Chrome(options=options</code>. In short, do not set <code>driver.implicitly_wait()</code>:</p>
<pre><code>import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
if __name__ == '__main__':
options = Options()
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--crash-dumps-dir=tmp')
options.add_argument('--remote-allow-origins=*')
options.add_argument("--no-sandbox")
options.add_argument("--disable-gpu")
options.page_load_strategy = "eager"
options.add_experimental_option("excludeSwitches", ["enable-automation"])
driver = webdriver.Chrome(options=options)
driver.set_page_load_timeout(30)
# driver.implicitly_wait(30) # <-- This triggers an implicit wait of 30 seconds for find_elements if no elements exist.
driver.set_script_timeout(30)
driver.get("https://www.google.com")
driver.find_element(By.XPATH, "//div[text()='Accept all']").click()
driver.find_element(By.NAME, "q").send_keys("Stack Overflow")
driver.find_element(By.XPATH, "//input[@type='submit'][@value='Google Search']").submit()
elements = driver.find_elements(By.XPATH, "//h1[text()='No such thing...']")
print(len(elements))
elements = driver.find_elements(By.XPATH, "//a")
print(len(elements))
time.sleep(5)
driver.close()
</code></pre>
| <python><selenium-webdriver> | 2023-08-11 16:48:08 | 2 | 443 | Scott Deagan |
76,885,351 | 1,547,297 | Memory Exception When Merging Large Volumes of Waveform Data Files Using wrdb.wrsamp() | <p>I'm trying to merge multiple waveform data (.dat) files into a single file. I'm using the <code>wrdb.wrsamp()</code> function for this task. The total number of files is approximately 10,000 and each one has 3 channels. I've tried several times, but every attempt results in a memory exception, requiring more than 40GB of memory. I'm unsure if I am doing something incorrect.</p>
<p>I've been unable to find a method to write the files incrementally. My current approach is to read each sample, combine all signals into an array, and write them. While this works fine with a small number of files, I'm having difficulties when it comes to larger datasets. Each file contains over 6 minutes of data.</p>
| <python><numpy> | 2023-08-11 16:39:59 | 0 | 938 | Lakmal |
76,885,334 | 4,506,929 | How do you make a matplotlib plot with two panels at the top and one centered at the bottom, all with the same size? | <p>For the following MWE I'm using the functions defined in <a href="https://matplotlib.org/stable/gallery/subplots_axes_and_figures/mosaic.html" rel="nofollow noreferrer">this docs page</a>. Consider the code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
# Helper function used for visualization in the following examples
def identify_axes(ax_dict, fontsize=48):
"""
Helper to identify the Axes in the examples below.
Draws the label in a large font in the center of the Axes.
Parameters
----------
ax_dict : dict[str, Axes]
Mapping between the title / label and the Axes.
fontsize : int, optional
How big the label should be.
"""
kw = dict(ha="center", va="center", fontsize=fontsize, color="darkgrey")
for k, ax in ax_dict.items():
ax.text(0.5, 0.5, k, transform=ax.transAxes, **kw)
axd = plt.figure(layout="constrained").subplot_mosaic(
"""
AC
DD
"""
)
identify_axes(axd)
</code></pre>
<p>This produces this figure:</p>
<p><a href="https://i.sstatic.net/qG4KV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qG4KV.png" alt="enter image description here" /></a>
Basically I'm trying to reproduce something like this figure:</p>
<p>My question is: how can I produce a similar figure, but with the plot at the bottom having the same size as the other two, and still being centered? For the life of me I can't figure that out.</p>
| <python><matplotlib> | 2023-08-11 16:38:02 | 1 | 3,547 | TomCho |
76,885,306 | 9,234,092 | create_pandas_dataframe_agent() over multiple CSV dataframes | <p>Using the example from the <strong>langchain</strong> documentation(<a href="https://python.langchain.com/docs/integrations/toolkits/pandas" rel="nofollow noreferrer">https://python.langchain.com/docs/integrations/toolkits/pandas</a>) and despite having tried all kinds of things, I am not able to <strong>create the agent over 2 CSV file</strong> (I precise that the agent works fine on a single CSV).</p>
<pre><code>from langchain.llms import OpenAI
import pandas as pd
# Import input data
df = pd.read_csv("titanic.csv")
# Create a second pandas dataframe
df1 = df.copy()
df1["Age"] = df1["Age"].fillna(df1["Age"].mean())
# Run the agent over multiple dataframe
agent = create_pandas_dataframe_agent(OpenAI(temperature=0, model_name='gpt-3.5-turbo', deployment_id="chat"), [df, df1], verbose=True)
agent.run("how many rows in the age column are different?")
</code></pre>
<p><strong>This is the error I get</strong> : "ValueError: Expected pandas object, got <class 'list'>".
Does anyone know if the documentation is up to date? Any idea to overcome this ? Here is a screenshot if it can help...</p>
<p><a href="https://i.sstatic.net/FiaWW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FiaWW.png" alt="enter image description here" /></a></p>
| <python><openai-api><agent><langchain><large-language-model> | 2023-08-11 16:34:27 | 1 | 703 | JeanBertin |
76,885,261 | 5,924,007 | CSRF protection for Angular 16 webapp and Flask API | <p>We have a regular Flask REST API (not using Flask-Restful) and our frontend is Angular 15. We want to make sure our webapp is CSRF protected. After some research we figured out that <code>flask_wtf.csrf</code> provides CSRF protection, so we did this:</p>
<pre><code>from flask_wtf.csrf import CSRFProtect, generate_csrf
app.config['SECRET_KEY'] = 'APP SECRET KEY'
CSRFProtect(app)
....
@app.after_request
def inject_csrf_token(response):
xsrf_token = generate_csrf()
response.headers.set('X-XSRF-TOKEN', xsrf_token)
response.set_cookie('XSRF-TOKEN', xsrf_token)
return response
</code></pre>
<p>The above code results in the below response headers:</p>
<pre><code>Set-Cookie: XSRF-TOKEN=asdasdas....;Path=/
Set-Cookie: asdasd....; HttpOnly; Path=/
Vary: Origin, Cookie
X-Xsrf-Token: asdasdas....
</code></pre>
<p>But, no cookies are set.
<a href="https://i.sstatic.net/5loJd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5loJd.png" alt="enter image description here" /></a></p>
<p>My Angular 15 service call looks like this:</p>
<pre><code>constructor(_http: HttpClient, _token: HttpXsrfTokenExtractor) {}
get xsrfToken(): string {
return this._token.getToken();
}
const headers = {
'X-XSRF-TOKEN': this.token,
'Content-Type': 'application/json; charset=UTF-8',
};
const requestConfig = {
withCredentials: true,
headers,
};
return this._http.post<T>(url, data, requestConfig);
</code></pre>
<p>HttpXsrfTokenExtractor.getToken() always returns null. I am the extracting the <strong>X-XSRF-Token</strong> from the response header using an interceptor and then storing that in a variable and then passing it in the request header for every PUT, POST or DELETE call. I get
<code>flask_wtf.csrf.CSRFError: 400 Bad Request: The CSRF token is missing.</code> on the server. I am not sure what am I missing. Can someone please guide me? Why is it not setting up the cookie in the browser? I am also importing <code>HttpClientXsrfModule</code> in <code>app.module.ts</code></p>
<p>PS: I have only posted the relevant code to keep this question as simple as possible.</p>
| <javascript><python><angular><flask><csrf> | 2023-08-11 16:27:00 | 0 | 4,391 | Pritam Bohra |
76,885,099 | 4,076,764 | Should dataclass use fields for attributes with only defaults? | <p>When a python <code>dataclass</code> has a simple attribute that only needs a default value, it can be defined either of these ways.</p>
<pre><code>from dataclasses import dataclass, field
@dataclass
class ExampleClass:
x: int = 5
@dataclass
class AnotherClass:
x: int = field(default=5)
</code></pre>
<p>I don't see any advantage of one or the other in terms of functionality, and so would go with the less verbose version. Of course field offers other bells and whistles, but I don't need them yet and could easily refactor to use <code>field</code> later.</p>
<p>Is there any advantage to using <code>field</code> for a simple default over just a type hint?</p>
| <python><python-dataclasses> | 2023-08-11 15:58:53 | 2 | 16,527 | Adam Hughes |
76,885,016 | 3,821,009 | Take N elements of a window in polars | <p>Say I have this:</p>
<pre><code>import numpy as np
import polars as pl
pl.Config(tbl_rows=20) # to show full output
df = (pl
.DataFrame(dict(
j=np.random.randint(10, 99, 20),
))
.with_row_index()
.select(
g=pl.col('index') // 4,
j='j',
)
)
</code></pre>
<pre><code>shape: (20, 2)
βββββββ¬ββββββ
β g β j β
β --- β --- β
β u32 β i64 β
βββββββͺββββββ‘
β 0 β 95 β
β 0 β 80 β
β 0 β 51 β
β 0 β 68 β
β 1 β 71 β
β 1 β 92 β
β 1 β 44 β
β 1 β 97 β
β 2 β 36 β
β 2 β 64 β
β 2 β 70 β
β 2 β 80 β
β 3 β 75 β
β 3 β 69 β
β 3 β 54 β
β 3 β 16 β
β 4 β 88 β
β 4 β 89 β
β 4 β 97 β
β 4 β 37 β
βββββββ΄ββββββ
</code></pre>
<p>I'd like to take top two elements in each group, i.e. end up with this:</p>
<pre><code> shape: (10, 2)
βββββββ¬ββββββ
β g β j β
β --- β --- β
β u32 β i64 β
βββββββͺββββββ‘
β 0 β 95 β
β 0 β 80 β
β 1 β 71 β
β 1 β 92 β
β 2 β 36 β
β 2 β 64 β
β 3 β 75 β
β 3 β 69 β
β 4 β 88 β
β 4 β 89 β
βββββββ΄ββββββ
</code></pre>
<p>I tried this:</p>
<pre><code>dfj = (df
.select(
pl.all().head(2).over('g')
)
)
print(dfj)
</code></pre>
<p>but that results in this exception:</p>
<pre><code>ComputeError: the length of the window expression did not match that of the group
Error originated in expression: 'col("g").slice(offset=0, length=2).over([col("g")])'
</code></pre>
<p>I know I can add the row count per group again and filter by that, but I was wondering:</p>
<ul>
<li>Why <code>head</code> fails here</li>
<li>If there's a better solution (especially without <code>group_by</code>)</li>
</ul>
| <python><python-polars> | 2023-08-11 15:47:46 | 1 | 4,641 | levant pied |
76,884,972 | 1,733,746 | Let a passed function object be called as a method | <p>EDIT: I extended the example to show the more complex case that I was talking about before. Thanks for the feedback in the comments.</p>
<p>Some more context:</p>
<ul>
<li>I am mostly interested in this on a theoretical level. I would be happy with an answer like "This is not possible because ...", giving some detailed insights about python internals.</li>
<li>For my concrete problem I have a solution, as the library allows to pass a class which then can do what I want. However, if there would be a way to use the simpler interface of just passing the function which gets bound, I would save many lines of code. (For the interested: This question is derived from the sqlalchemy extension <a href="https://github.com/sqlalchemy/sqlalchemy/blob/9e89d2cbbf7ccef05579472b94c90124a1ecf9e3/lib/sqlalchemy/ext/associationproxy.py#L84" rel="nofollow noreferrer">associationproxy</a> and the creator function parameter.)</li>
</ul>
<pre class="lang-py prettyprint-override"><code># mylib.py
from typing import Callable
class C:
def __init__(self, foo: Callable):
self.foo = foo
def __get__(self, instance, owner):
# simplified
return CConcrete(self.foo)
class CConcrete:
foo: Callable
def __init__(self, foo: Callable):
self.foo = foo
def bar(self):
return self.foo()
</code></pre>
<pre class="lang-py prettyprint-override"><code># main.py
from mylib import C
def my_foo(self):
return True if self else False
class House:
window = C(my_foo)
my_house = House()
print(my_house.window.bar())
</code></pre>
<p>This code gives the error</p>
<blockquote>
<p>my_foo() missing 1 required positional argument: 'self'</p>
</blockquote>
<p>How can I get <code>my_foo</code> be called with <code>self</code> without changing the class C itself?</p>
<p>The point is that a class like this exists in a library, so I can't change it.<br />
<strike>In fact it's even more complicated, as <code>foo</code> gets passed down and the actual object where <code>bar</code> exists and calls <code>self.foo</code> is not <code>C</code> anymore. So the solution can also not include assigning something to <code>c</code> after creation, except it would also work for the more complex case described.</strike></p>
| <python><python-descriptors> | 2023-08-11 15:41:11 | 1 | 394 | Alexander |
76,884,926 | 2,386,113 | Why the Convolution results in Python are shifted to left? | <p>I am new to Python and trying to perform a convolution between two 1D arrays. One array is quite large than the other array.</p>
<p>Convolution Conditions:</p>
<ol>
<li>The convolution should take place only if the smaller array is completely over the larger array (i.e. <code>mode = 'valid'</code> in case of numpy convolution).</li>
<li>Smaller array must not be flipped for convolution</li>
</ol>
<p>I tried to write my own <code>for-loop</code> to do the convolution as well as tried to use the <code>Numpy's Convolution()</code> but both the version produce the same results.</p>
<p><strong>MWE:</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
larger_array = np.array([0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0])
smaller_array = np.array([0,1,0])
result_length = larger_array.shape[0] - smaller_array.shape[0] + 1
convolution_result = np.zeros(result_length)
for i in range(result_length):
convolution_result[i] = np.sum(larger_array[i:i + smaller_array.shape[0]] * smaller_array)
np_convolution_result = np.convolve(larger_array, smaller_array, mode='valid')
print("done!")
</code></pre>
<p><strong>Problem:</strong></p>
<p>The above code produces the results as plotted below. As seen in the plots, the convolution result is shifted to the left side. I mean the peak after convolution <strong>SHOULD NOT</strong> come before the peak of the original signal, right?</p>
<p><a href="https://i.sstatic.net/fnmHm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fnmHm.png" alt="enter image description here" /></a></p>
| <python><arrays><numpy> | 2023-08-11 15:35:27 | 0 | 5,777 | skm |
76,884,912 | 18,227,234 | Querying one table across many single-tenant databases in parallel using Spark/Pyspark? | <p>I'm trying to use Spark as part of a full-refresh/oh-shit data replication pipeline to grab data, union it, and stick it in our analytics database. Our source/raw data warehouse is set up on a single-tenant model, with a single client/customer per database, in Azure SQL. The target data warehouse is set up with all customers in a multi-tenant database.</p>
<p>I have a working example that runs in series -- have to redact parts of the code for security reasons but the basic structure is like this:</p>
<pre class="lang-py prettyprint-override"><code>dfs = []
for d in databases:
st_df = spark.read \
.option('table', [TABLENAME]) \
.load()
dfs.append(st_df)
mt_df = reduce(lambda df1, df2: df1.unionByName(df2), dfs)
mt_df.write \
.format([TARGET_DB]) \
.save()
</code></pre>
<p>How do I get this to the point where I can parallelize the <code>for d in databases</code> part and have the queries run in parallel? We need to improve run speed - the number of databases on the source side is upwards of 400.</p>
| <python><apache-spark><pyspark><azure-sql-database> | 2023-08-11 15:33:05 | 1 | 318 | walkrflocka |
76,884,896 | 3,821,009 | Add row number / index per group in polars | <p>Is there a way to rewrite this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
'g': [0, 0, 0, 1, 1, 1, 2, 2, 2, 3],
'j': [37, 87, 56, 28, 49, 65, 37, 91, 45, 95]
})
df = (
df.with_columns(rn=1)
.with_columns(
rn=pl.col('rn').shift().fill_null(0).cum_sum().over('g')
)
)
</code></pre>
<pre><code>βββββββ¬ββββββ¬ββββββ
β g β j β rn β
β --- β --- β --- β
β u32 β i64 β i32 β
βββββββͺββββββͺββββββ‘
β 0 β 37 β 0 β
β 0 β 87 β 1 β
β 0 β 56 β 2 β
β 1 β 28 β 0 β
β 1 β 49 β 1 β
β 1 β 65 β 2 β
β 2 β 37 β 0 β
β 2 β 91 β 1 β
β 2 β 45 β 2 β
β 3 β 95 β 0 β
βββββββ΄ββββββ΄ββββββ
</code></pre>
<p>so it adds <code>rn</code> column without requiring it to add a column full of <code>1</code>s first? I.e. somehow rewrite this part:</p>
<pre><code> .with_columns(rn=1)
.with_columns(
rn=pl.col('rn').shift().fill_null(0).cum_sum().over('g')
)
</code></pre>
<p>so that:</p>
<pre><code> .with_columns(rn=1)
</code></pre>
<p>is not required? Basically reduce two expressions to one.</p>
<p>Or any other / better way to add a row number <em>per group</em>?</p>
| <python><dataframe><window-functions><python-polars> | 2023-08-11 15:30:53 | 1 | 4,641 | levant pied |
76,884,839 | 12,027,869 | Python Two Custom Sort By Two String Variables | <p>I have a dataframe:</p>
<pre><code>data = {
'group': ['2', '1', '2', '2', '2', '1'],
'interval': ['20-30', '20-30', '30-40', '10-20', '10-20', '0-10'],
'count': [3, 4, 2, 7, 5, 1],
}
df = pd.DataFrame(data)
</code></pre>
<pre><code>group interval count
2 20-30 3
1 20-30 4
2 30-40 2
2 10-20 7
2 10-20 5
1 0-10 1
</code></pre>
<p>I want to sort <code>group</code> first then <code>interval</code> simultaneously in ascending order which will look like this:</p>
<pre><code>group interval count
1 00-10 1
1 20-30 4
2 10-20 5
2 10-20 7
2 20-30 3
2 30-40 2
</code></pre>
<p>I know how to do this separately but how to do it simultaneously?</p>
<pre><code>(
df
.sort_values(by = ['group'], key = lambda s: s.str[0:].astype(int))
.sort_values(by = ['interval'], key = lambda s: s.str[:2].astype(int))
.reset_index(drop=True)
)
</code></pre>
| <python><pandas><sorting> | 2023-08-11 15:21:53 | 2 | 737 | shsh |
76,884,658 | 15,678,119 | logging file rotation based on size and time in config.ini | <p>i have a logger in a webapp, I initialize the logger from config.ini, and feed the config.ini to the app on server start.
<em>runApp.bat</em></p>
<pre><code>uvicorn api.main:app --host 127.0.0.1 --port 8000 --log-config \path\logconfig.ini
</code></pre>
<p>the app is a fastAPI project.</p>
<p><em>config.ini</em></p>
<pre><code>[loggers]
keys=root
[handlers]
keys=logfile
[formatters]
keys=logfileformatter
[logger_root]
level=INFO
handlers=logfile
[formatter_logfileformatter]
format=[%(asctime)s.%(msecs)03d] %(levelname)s [%(thread)d] - %(message)s
[handler_logfile]
class=handlers.RotatingFileHandler
args=('C:\\path\\api.log', 'a', 1024, 1000)
</code></pre>
<p>right now the logs are rotated based on size, once it reaches 1024bytes it creates a new one what i want is to also rotate based on time so if the file has 500 bytes but for example 10 seconds pass by i want to create a new file either way</p>
<pre><code>[handler_logfile]
class=handlers.TimedRotatingFileHandler
level=INFO
args=('C:\\path\\api.log', 's', 10, 0)
</code></pre>
<p>this is how i call the logger in <em>main.py</em></p>
<pre><code>
logging.basicConfig(
level=logging.INFO,
)
@app.get("/", include_in_schema=False)
.
.
.
</code></pre>
<p>adding one rotator works but i dont know how to add both?</p>
| <python><logging> | 2023-08-11 14:57:12 | 0 | 958 | Hannon qaoud |
76,884,610 | 6,843,153 | send pandas dataframe as csv file to slack without saving | <p>I want to post to slack the result of a Pandas dataframe <code>to_csv()</code> method as a file, but slack SDK <code>files_upload()</code> expects a filename in parameter <code>file</code>. This is my code:</p>
<pre><code>from slack_sdk import WebClient
client = WebClient(os.environ["slack_token"])
csv_file = df.to_csv()
response = self.client.files_upload(
file=csv_file,
initial_comment=message_text,
thread_ts=thread_ts,
)
</code></pre>
<p>The problem with this code is that the method takes the file contest as the filename and raises an error because it can't find the file.</p>
<p>Is there any way I can send the text content of the variable as a file?</p>
| <python><pandas><slack> | 2023-08-11 14:50:07 | 1 | 5,505 | HuLu ViCa |
76,884,538 | 6,195,489 | How to combine a Numpy array of shape [N,4] with one of shape [M] to get an array of shape [N*M,4] | <p>I have two numpy arrays, one with shape [N,4], eg:</p>
<pre><code>x1= [[1,2,3,4],
[2,3,4,5],
[7,3,2,1]]
</code></pre>
<p>and one with shape [M], eg::</p>
<pre><code>x2=[1,2,3,4]
</code></pre>
<p>I would like to get a final array with shape [N*M,4] where, if c is a constant:</p>
<pre><code>x3= [[1+c,2+x2[0],3-c,4-x2[0]],
[2+c,3+x2[0],4-c,5-x2[0]],
[7+c,3+x2[0],2-c,1-x2[0]],
[1+c,2+x2[1],3-c,4-x2[1]],
[2+c,3+x2[1],4-c,5-x2[1]],
[7+c,3+x2[1],2-c,1-x2[1]],
[1+c,2+x2[2],3-c,4-x2[2]],
[2+c,3+x2[2],4-c,5-x2[2]],
[7+c,3+x2[2],2-c,1-x2[2]],
[1+c,2+x2[3],3-c,4-x2[3]],
[2+c,3+x2[3],4-c,5-x2[3]],
[7+c,3+x2[3],2-c,1-x2[3]]]
</code></pre>
<p>I can loop through the rows in the first, and do the operation, but is there, as I suspect, a simpler way to do it?</p>
| <python><numpy> | 2023-08-11 14:38:06 | 3 | 849 | abinitio |
76,884,438 | 3,948,658 | AWS RDS Connect Compute Resource via CDK | <p>I basically just want to add a Compute Resource (EC2 instance) to an RDS DB cluster with the AWS CDK as described here. <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/ec2-rds-connect.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/ec2-rds-connect.html</a>. But I can't find any examples online or in the CDK Docs (<a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_rds.DatabaseCluster.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_rds.DatabaseCluster.html</a>).</p>
<p>I can manually add compute resources from the AWS console no problem but I want to do it programmatically with AWS CDK. How would I do this, or what are my options?</p>
<p>Here is my current AWS CDK code for RDS. Here I am trying to add an EC2 instance resource that I created in another stack called <strong>my_ec2_stack.my_ec2</strong> as a compute resource for <strong>my_rds_cluster</strong>. But if this is a wrong approach or you know a different way to do this, let me know. Examples would be appreciated as well.</p>
<p>Additional details: The rds DB cluster below and my EC2 instance that I want to add as a Compute Resource are in the same VPC and subnet.</p>
<pre><code>my_rds_cluster = rds.DatabaseCluster(self, "MyRDSDatabaseCluster",
cluster_identifier="my-postgres",
engine=rds.DatabaseClusterEngine.aurora_postgres(
version=rds.AuroraPostgresEngineVersion.VER_11_18
),
instance_props=rds.InstanceProps(
vpc=shared_vpc_stack.vpc,
instance_type=ec2.InstanceType.of(ec2.InstanceClass.R6G, ec2.InstanceSize.XLARGE2),
vpc_subnets=ec2.SubnetSelection(
availability_zones=["us-west-2a", "us-west-2b", "us-west-2c", "us-west-2d"],
subnets=[private_subnet_a, private_subnet_b, private_subnet_c, private_subnet_d],
),
auto_minor_version_upgrade=True,
enable_performance_insights=True,
publicly_accessible=False,
security_groups=[my_rds_security_group, my_ec2_stack.searchy_security_group, my_ec2_stack.soary_security_group],
),
backup=rds.BackupProps(
retention=Duration.days(7),
),
)
# My failed attempt to add compute resources EC2. (No error it just doesn't do anything)
my_rds_connections = my_rds_cluster.connections.allow_from(
other=my_ec2_stack.my_ec2,
port_range=ec2.Port.tcp(5432),
description="My RDS connection to My EC2"
)
</code></pre>
<p>Screenshot below shows what it looks like in the AWS RDS Console when a compute resource is connected:</p>
<p><a href="https://i.sstatic.net/0VwKO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0VwKO.png" alt="enter image description here" /></a></p>
| <python><amazon-web-services><amazon-ec2><amazon-rds><aws-cdk> | 2023-08-11 14:25:31 | 1 | 1,699 | dredbound |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.