| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,593,017
| 5,029,589
|
Clean and Read csv file with pandas which has unwanted texts
|
<p>I have a large number of CSV files (more than 10K) and all of them have unwanted text at the start and end. I want to clean each CSV file first and then read it. One example is below (this is sample data; my CSV files are similar). I need to clean and ignore all the text above the header row.</p>
<p>Columns for reference are : Student,id,add,div,rank</p>
<pre><code>SAMPLE FILE LTD
STUDENT NUMBERS
INFO OF ALL STUDENTS No : from 27-Mar-2023 00:00:00 to 04-Apr-2023 00:00:00 and from 05-Oct-2023 00:00:00 to 13-Oct-2023 00:00:00
Student,id,add,div,rank
ABC,12,USA,A,1
DEF,13,IND,C,2
XYZ,14,UK,E,3
PQR,15,DE,F,4
This is System generated report, and needs no signature.
14-Oct-2023 18:14:12
</code></pre>
<p>One solution which I found is as below</p>
<pre><code>import pandas as pd

# This will read the file until the filter value is reached
def get_rows_to_skip(file_name, filter):
    rows = 0
    file = open(file_name, 'r')
    while True:
        line = file.readline()
        if filter in line:
            file.close()
            return rows
        rows = rows + 1

def read_csv():
    file_name = "/Users/test/Desktop/student/students.csv"
    rows = get_rows_to_skip(file_name, "rank")
    df = pd.read_csv(file_name, skiprows=rows)
    df = df[df['rank'].notna()]
    print("done")
</code></pre>
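<p>A self-contained sketch of the same idea, run here against an in-memory copy of the sample file rather than a real path: locate the header line, hand its index to <code>skiprows</code>, and drop footer rows by checking where <code>rank</code> fails to parse. This is an illustration, not necessarily the only approach.</p>

```python
import io
import pandas as pd

# In-memory stand-in for one of the CSV files described in the question.
raw = """SAMPLE FILE LTD
STUDENT NUMBERS
INFO OF ALL STUDENTS No : from 27-Mar-2023 00:00:00 to 04-Apr-2023 00:00:00
Student,id,add,div,rank
ABC,12,USA,A,1
DEF,13,IND,C,2
XYZ,14,UK,E,3
PQR,15,DE,F,4
This is System generated report, and needs no signature.
14-Oct-2023 18:14:12
"""

def rows_to_skip(lines, marker):
    # index of the first line containing the marker (the header row)
    for i, line in enumerate(lines):
        if marker in line:
            return i
    raise ValueError(f"marker {marker!r} not found")

skip = rows_to_skip(raw.splitlines(), "rank")
df = pd.read_csv(io.StringIO(raw), skiprows=skip)
# footer lines parse with a missing 'rank' field, so they become NaN and drop out
df = df[df["rank"].notna()]
print(df)
```

<p>For a real file, replace the <code>io.StringIO</code> object with the file path in both calls.</p>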
|
<python><pandas>
|
2023-12-03 04:18:31
| 2
| 2,174
|
arpit joshi
|
77,592,943
| 10,200,497
|
Finding the first row that meets conditions of a mask starting from nth row
|
<p>This is my dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'a': [20, 21, 100, 4, 100, 20], 'b': [20, 20, 20, 20, 20, 20]})
</code></pre>
<p>I want to create column <code>c</code> by using a mask. This is my desired output:</p>
<pre class="lang-none prettyprint-override"><code> a b c
0 20 20 NaN
1 21 20 NaN
2 100 20 NaN
3 4 20 NaN
4 100 20 x
5 20 20 NaN
</code></pre>
<p>My mask is:</p>
<pre class="lang-py prettyprint-override"><code>mask = (df.a > df.b)
</code></pre>
<p>Note that I want to start looking for this mask from the third row; that is, rows 0, 1 and 2 do not count. That is why the first row that meets the condition <code>a</code> > <code>b</code> is the 5th row, whose index is 4.</p>
<p>This is what I have tried. But I don't know how to start from the third row.</p>
<pre class="lang-py prettyprint-override"><code>df.loc[mask.cumsum().eq(1) & mask, 'c'] = 'x'
</code></pre>
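<p>One way to restrict the search to rows from position 3 onward (a sketch, not necessarily the canonical approach) is to zero out the first three positions of the mask before taking the cumulative sum:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [20, 21, 100, 4, 100, 20], 'b': [20, 20, 20, 20, 20, 20]})
mask = df.a > df.b
eligible = mask.copy()
eligible.iloc[:3] = False            # rows 0, 1 and 2 do not count
# first True among the eligible rows gets the marker
df.loc[eligible.cumsum().eq(1) & eligible, 'c'] = 'x'
print(df)
```
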
<p>These are some additional examples. First three rows do not count.
<a href="https://i.sstatic.net/hsjn7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hsjn7.png" alt="enter image description here" /></a></p>
|
<python><pandas><indexing>
|
2023-12-03 03:32:37
| 3
| 2,679
|
AmirX
|
77,592,875
| 4,398,966
|
Python turtle terminator error on alternating runs
|
<p>I have the following code running in anaconda spyder:</p>
<pre><code>from turtle import *
speed(0)
setup(800, 700)
#Blue Background
penup()
goto(0, -320)
pendown()
color("lightskyblue")
begin_fill()
circle(320)
end_fill()
done()
</code></pre>
<p>on the second run and every other (run 4, run 6 ...) I get the following message in the console:</p>
<pre><code>Traceback (most recent call last):
File E:\python\Lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
exec(code, globals, locals)
File c:\users\robert\downloads\mit python\pset4\problemset4\snowman.py:9
speed(0)
File <string>:5 in speed
Terminator
</code></pre>
<p>on runs 1, 3, 5... I get the expected output</p>
<p>If I add <code>bye()</code> as the last line, on even runs I get the following message in the console:</p>
<pre><code>runfile('C:/Users/Robert/Downloads/MIT python/pset4/ProblemSet4/snowman.py', wdir='C:/Users/Robert/Downloads/MIT python/pset4/ProblemSet4')
Tcl_AsyncDelete: async handler deleted by the wrong thread
Windows fatal exception: code 0x80000003
Main thread:
Thread 0x00001c64 (most recent call first):
File "E:\python\Lib\site-packages\spyder_kernels\comms\frontendcomm.py", line 262 in _remote_callback
File "E:\python\Lib\site-packages\spyder_kernels\comms\commbase.py", line 343 in _handle_remote_call
File "E:\python\Lib\site-packages\spyder_kernels\comms\commbase.py", line 333 in _comm_message
File "E:\python\Lib\site-packages\spyder_kernels\comms\frontendcomm.py", line 256 in handle_msg
File "E:\python\Lib\site-packages\comm\base_comm.py", line 263 in comm_msg
File "E:\python\Lib\site-packages\ipykernel\kernelbase.py", line 410 in dispatch_shell
File "E:\python\Lib\site-packages\ipykernel\kernelbase.py", line 505 in process_one
File "E:\python\Lib\site-packages\ipykernel\kernelbase.py", line 516 in dispatch_queue
File "E:\python\Lib\asyncio\events.py", line 80 in _run
File "E:\python\Lib\asyncio\base_events.py", line 1922 in _run_once
File "E:\python\Lib\asyncio\base_events.py", line 607 in run_forever
File "E:\python\Lib\site-packages\tornado\platform\asyncio.py", line 195 in start
File "E:\python\Lib\site-packages\ipykernel\kernelapp.py", line 736 in start
File "E:\python\Lib\site-packages\spyder_kernels\console\start.py", line 330 in main
File "E:\python\Lib\site-packages\spyder_kernels\console\__main__.py", line 24 in <module>
File "<frozen runpy>", line 88 in _run_code
File "<frozen runpy>", line 198 in _run_module_as_main
Restarting kernel...
%pylab is deprecated, use %matplotlib inline and import the required libraries.
Populating the interactive namespace from numpy and matplotlib
</code></pre>
<p>So, after I get the error message, the kernel restarts and repopulates the interactive namespace from numpy and matplotlib. When I run again it works, but the run after that produces the above message and restarts the kernel again.</p>
|
<python><python-turtle>
|
2023-12-03 02:28:55
| 0
| 15,782
|
DCR
|
77,592,795
| 2,205,785
|
How to deploy python extensions (shared libraries) with multiple architectures?
|
<p>Context: one can compile C code such that it can be used as a Python module. The compiled object is a shared library with a specific naming, so Python can find and load it as a module.</p>
<p>Great. I have successfully compiled and tested code such that file "foo.c" becomes a shared library "foo.so", and Python code <code>import foo</code> works.</p>
<p>The goal is to distribute a set of shared libraries for Mac, Linux, and Windows, where <code>import foo</code> loads the appropriate shared library.</p>
<p>Conceptually, I want my distribution to contain a directory with three files:</p>
<pre><code>mypkg/
┠─ __init__.py
┠─ foo.so (linux)
┠─ foo.dylib (mac)
┖─ foo.dll (windows)
</code></pre>
<p>so that <code>from mypkg import foo</code> picks the appropriate library.
I <strong>do not</strong> want to distribute the source code <code>foo.c</code>.</p>
<p>The problem is, Mac will pick the <code>.so</code> file and complain:</p>
<pre><code>ImportError: dlopen(/.../mypkg/foo.so, 0x0002): tried: '/.../mypkg/foo.so' (not a mach-o file)
</code></pre>
<p>Is there a pattern / naming scheme which would permit this (short of writing a custom module loader)?</p>
<p><strong>Edit</strong>: Explanation why PyPI / pip / wheel-type distribution is not desired... these aren't running in a standard python.exe process.</p>
<p>The <em>main executable</em> is a C program, which enables C-language plugins using an SDK. I've written a C plugin which embeds Python and exposes a Python interface to the original C API (doing <code>Py_Initialize()</code> etc.). This C extension looks for, loads, and executes Python plugins. The result is that one can now write Python plugins instead of C plugins. Users place Python plugins in a specific directory & each is read and executed. (Plugins cannot execute standalone.) That all works fine.</p>
<p>Now, I'm looking at how one of these plugins can define and use a shared library Python module.</p>
<pre><code>main.c -> 1) InitPython
2) PyImport_Import("plugins/a.py")
3) PyImport_Import("plugins/b.py")
-> import mypkg.foo
...
</code></pre>
<p>If mypkg/foo.py is pure Python, this works great. If foo is a shared library, then it must be named <code>foo.so</code> on Linux and macOS, so I cannot simply ship my plugin as <code>b.py</code> + <code>mypkg/*</code>. I might be able to use <code>pip install --target=plugins foo.whl</code>.</p>
<p>Alternatively I'm testing a different loading mechanism, similar to @shadowtalker's non-recommendation, for <code>mypkg/foo.py</code>:</p>
<pre><code>import os
import platform
_system = platform.system()
from importlib.machinery import ExtensionFileLoader
from importlib.util import spec_from_file_location
filename = f'{os.path.dirname(__file__)}/foo.{_system.lower()}.so'
_loader = ExtensionFileLoader('foo', filename)
_spec = spec_from_file_location('foo', filename)
_mod = _loader.create_module(_spec)
_loader.exec_module(_mod)
from foo import *
</code></pre>
<p>It looks like it's working, but I'll continue to test.</p>
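<p>A note on the naming question itself: macOS also uses <code>.so</code> for extension modules, so a bare <code>foo.so</code> collides across platforms, but the interpreter accepts several platform-tagged suffixes (e.g. <code>foo.cpython-311-darwin.so</code>), which is how wheels keep per-platform binaries apart under one module name. A small sketch to inspect what the running interpreter will import:</p>

```python
import importlib.machinery
import sysconfig

# Suffixes the running interpreter accepts for extension modules, e.g.
# ['.cpython-311-x86_64-linux-gnu.so', '.abi3.so', '.so'] on Linux or
# ['.cp311-win_amd64.pyd', '.pyd'] on Windows.
suffixes = importlib.machinery.EXTENSION_SUFFIXES
# The full platform-tagged suffix used when building extensions.
ext = sysconfig.get_config_var("EXT_SUFFIX")
print(suffixes)
print(ext)
```

<p>Shipping one file per platform named with each platform's full <code>EXT_SUFFIX</code> would let the normal import machinery pick the right one without a custom loader.</p>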
|
<python><python-c-api>
|
2023-12-03 01:45:51
| 4
| 4,590
|
pbuck
|
77,592,555
| 5,938,276
|
Calculating time between high state
|
<p>In Python I want calculations of high/low states of a signal in chunks of a given size (<code>samples_to_process</code>).</p>
<p>The two required calculations are the number of indices between rising edges (<code>length_between_high_states</code>) and the number of indices the signal is high for (<code>high_state_length</code>).</p>
<p>The calculations must be stateful across chunks.</p>
<p><a href="https://i.sstatic.net/ttBye.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ttBye.png" alt="example" /></a></p>
<p>Let's take a small reproducible example:</p>
<pre><code>data = np.array([0,1,1,0,1,1,1,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,1,1,1,1,0,0])
</code></pre>
<p>If this array is read 8 items at a time, the first iteration is</p>
<pre><code>high_state_length = 2
length_between_high_states = 3
</code></pre>
<p>then</p>
<pre><code>high_state_length = 3
length_between_high_states = 9
</code></pre>
<p>I believe I have the correct logic to read in the first state of the array and signal changes, but subsequent state changes in the signal and carrying the state across chunks are not yet implemented:</p>
<pre><code>import numpy as np

# total array size = 38
data = np.array([0,1,1,0,1,1,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,1,1,1,1,0,0])
# size of samples of data array to process in each read
samples_to_process = 8
threshold = 0.5

# slice(index, index + samples_to_process)
for index in range(0, data.size, samples_to_process):
    samples = data[index:index + samples_to_process]
    print(samples)
    start_index = np.argmax(samples > threshold)
    stop_index = np.argmax(samples[start_index:] < threshold) + start_index
    next_start_index = np.argmax(samples[stop_index:] > threshold) + stop_index
    length_between_high_states = next_start_index - start_index
    high_state_length = stop_index - start_index
    # how to calculate the remainder state and pass it into the next iteration?
    start_index = next_start_index
    print("next loop")
</code></pre>
<p>The question is how to pass the signal state between iterations to be included in subsequent calculations.</p>
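<p>One sketch of carrying signal state across chunks: a small state dict holds the previous level, the running high-run length, and the samples elapsed since the last rising edge. Note the edge-counting convention here (exclusive distance between rising edges) may need adjusting to match the exact counts wanted above.</p>

```python
import numpy as np

def process_chunks(data, chunk_size, threshold=0.5):
    # state carried between chunks: last level, current high-run length,
    # samples since the last rising edge, and whether a rise was seen yet
    state = {"level": 0, "high_len": 0, "since_rise": 0, "seen_rise": False}
    high_lengths, gaps = [], []
    for start in range(0, data.size, chunk_size):
        for s in data[start:start + chunk_size]:
            high = s > threshold
            if high and state["level"] == 0:          # rising edge
                if state["seen_rise"]:
                    gaps.append(state["since_rise"])  # distance between rises
                state["seen_rise"] = True
                state["since_rise"] = 0
                state["high_len"] = 0
            if state["seen_rise"]:
                state["since_rise"] += 1
            if high:
                state["high_len"] += 1
            elif state["level"] == 1:                 # falling edge
                high_lengths.append(state["high_len"])
            state["level"] = 1 if high else 0
    return high_lengths, gaps

data = np.array([0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0])
high_lengths, gaps = process_chunks(data, chunk_size=8)
print(high_lengths, gaps)
```

<p>The third high run (indices 12–17) deliberately straddles a chunk boundary, and its length still comes out whole because the counters live in the carried state.</p>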
|
<python><signal-processing>
|
2023-12-02 23:29:08
| 1
| 2,456
|
Al Grant
|
77,592,428
| 2,774,885
|
more efficient way to capture output / maybe not .communicate() from multiple python subprocesses?
|
<p>I've got the following block of code. <code>cmdTable</code> is a dict where the keys are strings that describe a subprocess to open (like "out_From_hi_mom") and the values are the executable command (like "echo hi mom")... something like:</p>
<p><code>cmdTable['himom'] : "echo hi there momma"</code></p>
<p>This ultimately builds <code>procOutput["himom"] : "hi there momma"</code></p>
<p>This all works just fine, but I'm launching about 100 subprocesses and I'm trying to figure out if it's actually running these in parallel. I'm deeply suspicious that it's not, because the log next to the .communicate() call always shows the subprocesses returning in <em>exactly the same order</em> that they were created.</p>
<p>If the debug timestamps are to be believed, the .communicate()'s also return in batches, which doesn't seem like the expected behavior to me anyway...</p>
<p>I was under the impression that I was launching a bunch of subprocesses here more or less simultaneously. The timestamps on the <code>Popen</code> calls supports this theory, all ~100 of these launch within a second or so.</p>
<p>the various <code>except</code> blocks have been removed for brevity...</p>
<pre><code>def runShowCommands(cmdTable) -> dict:
    """Return a dictionary of captured output from commands defined in cmdTable."""
    procOutput = {}   # dict to store the output text from show commands
    procHandles = {}
    for cmd in cmdTable.keys():
        try:
            log.debug(f"running subprocess {cmd} -- {cmdTable[cmd]}")
            procHandles[cmd] = subprocess.Popen(cmdTable[cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        except Exception:
            ...   # except blocks removed for brevity
    for handle, proc in procHandles.items():
        try:
            procOutput[handle] = proc.communicate(timeout=180)[0].decode("utf-8")  # turn stdout portion into text
            log.debug(f"subprocess returned {handle}")
        except Exception:
            ...   # except blocks removed for brevity
    return procOutput
</code></pre>
<p>I suppose it's worth mentioning that all of these subprocesses are thread-safe with respect to each other: I do not care in exactly what order they run, and they share no input or output state. My primary goal is to minimize the total wall-clock execution time, and I'm reasonably sure that I'm missing something and these are all running serially as opposed to in parallel.</p>
<p>Is there something here in the <code>Popen</code> and <code>.communicate()</code> usage that I've got wrong? (I inherited this code and will freely admit that it's at the edge of my abilities...)</p>
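<p>For what it's worth, the <code>Popen</code> objects do run concurrently, but calling <code>.communicate()</code> on each handle in turn blocks in creation order, which makes completion <em>appear</em> serial. One common pattern that makes the parallelism explicit is to hand each command to a thread pool and let each worker drive one process to completion. A sketch under that assumption (the two commands below are illustrative stand-ins for the real <code>cmdTable</code>):</p>

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_one(cmd):
    # run one command to completion and return its decoded stdout
    proc = subprocess.run(cmd, shell=True, capture_output=True, timeout=180)
    return proc.stdout.decode("utf-8")

def run_show_commands(cmd_table):
    with ThreadPoolExecutor(max_workers=20) as pool:
        futures = {name: pool.submit(run_one, cmd) for name, cmd in cmd_table.items()}
        # results are gathered as each finishes blocking only on its own future
        return {name: fut.result() for name, fut in futures.items()}

cmd_table = {"himom": "echo hi there momma", "date": "echo today"}
out = run_show_commands(cmd_table)
print(out["himom"])
```

<p>This also avoids the risk of a child filling its pipe buffer and stalling while earlier <code>.communicate()</code> calls are still blocking.</p>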
|
<python><subprocess>
|
2023-12-02 22:29:14
| 1
| 1,028
|
ljwobker
|
77,592,409
| 9,231,706
|
How to run Stockfish on AWS Lambda
|
<p>I am trying to run stockfish on AWS lambda.</p>
<p>Here is my lambda function</p>
<pre><code>import os
import json
from stockfish import Stockfish

# The Stockfish binary is located at the root of your deployment package
stockfish_path = "/var/task/stockfish-ubuntu-x86-64-avx2"

def lambda_handler(event, context):
    stockfish = Stockfish(path=stockfish_path)

    # Set the chess position or FEN notation as needed
    position = "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - 0 2"
    stockfish.set_fen_position(position)
    best_move = stockfish.get_best_move()

    response = {
        "best_move": best_move
    }

    return {
        "statusCode": 200,
        "body": json.dumps(response)
    }
</code></pre>
<p>In my root I also have this stockfish binary I downloaded from stockfish.com.</p>
<p>I have deployed it to AWS lambda, but when I run the lambda I get this error:</p>
<pre><code>[ERROR] ValueError: invalid literal for int() with base 10: '/lib64/libm'
Traceback (most recent call last):
File "/var/task/index.py", line 9, in lambda_handler
stockfish = Stockfish(path=stockfish_path)
File "/var/task/stockfish/models.py", line 57, in __init__
self._stockfish_major_version: int = int(END RequestId: 304e4a19-2097-4770-92fa-8dc1bf9a680d
</code></pre>
<p>This is strange since it seems like an internal error with Stockfish as opposed to something wrong with my code. I don't want to modify the internal stockfish models that the error refers to.</p>
|
<python><stockfish>
|
2023-12-02 22:23:17
| 0
| 729
|
James
|
77,592,262
| 5,801,127
|
Crontab failing shell script if statement
|
<p>I have a shell script which executes a python program by first checking if it is running</p>
<p>However, the problem is that the first branch of the if statement seems to fail. The shell script does work when run manually.</p>
<p>This is what my shell script looks like:</p>
<pre><code>PATH=/opt/conda/bin:/opt/conda/condabin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
if [ $(/bin/pgrep -f "miner_nbeats.py") ]; then
echo "script running"
else
echo "script not running"
exec tmux new-session -d \; send-keys "source activate python310 && cd /home/putsncalls23/directory/ && python miner_nbeats.py" Enter
fi
</code></pre>
<p>And this is what i have in my <code>/etc/crontab</code> file:</p>
<pre><code>SHELL=/bin/bash
PATH=/opt/conda/bin:/opt/conda/condabin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
*/5 * * * * root putsncalls23 -l /home/putsncalls23/run_script.sh
</code></pre>
<p>Does anyone know what I'm doing wrong here ?</p>
<p><strong>EDIT:</strong></p>
<p>After doing more googling, I also saw something like this</p>
<pre><code>PATH=/opt/conda/bin:/opt/conda/condabin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
if /bin/pgrep -f "miner_nbeats.py" >/dev/null; then
echo "script running"
else
echo "script not running"
tmux new-session \; send-keys "source activate python310 && cd /home/putsncalls23/directory && python miner_nbeats.py$
fi
</code></pre>
<p>I just wanted to clarify whether the new shell script works, and what <code>/bin/pgrep -f "miner_nbeats.py" >/dev/null</code> does. For example, after starting miner_nbeats.py, it seems to output:</p>
<pre><code>21080
21128
</code></pre>
<p>Thanks</p>
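<p>Two notes on the above, hedged as assumptions about the intent: <code>pgrep -f PATTERN</code> prints the matching PIDs (the <code>21080</code>/<code>21128</code> lines) and exits with status 0 if anything matched; <code>>/dev/null</code> just discards the PID list so the <code>if</code> is driven purely by the exit status, which is the correct idiom. Separately, <code>/etc/crontab</code> takes a <em>user</em> field before the command, so <code>root putsncalls23 -l ...</code> runs a command literally named <code>putsncalls23</code> as root. If the goal is to run the script as that user, a likely-correct line would be:</p>

```
SHELL=/bin/bash
PATH=/opt/conda/bin:/opt/conda/condabin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
*/5 * * * * putsncalls23 /bin/bash -l /home/putsncalls23/run_script.sh
```

<p>The <code>-l</code> flag belongs to the shell invoking the script, not to the user name.</p>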
|
<python><linux><cron>
|
2023-12-02 21:33:02
| 1
| 1,011
|
PutsandCalls
|
77,592,209
| 8,313,547
|
Visual Studio code not connecting directories/references from imported packages
|
<p>Usually with an IDE, when the Python interpreter is set to the Anaconda python.exe file and the environment is correct, imported packages can be easily referenced and the IDE will acknowledge them. But I am getting no references to installed packages. The code runs when I run the Python file, but I cannot look back at where certain classes/functions/etc. come from.</p>
<p>Does anyone know how I can fix this? Here is an image of what I mean:</p>
<p><a href="https://i.sstatic.net/tLjWA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tLjWA.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><ide>
|
2023-12-02 21:16:22
| 1
| 449
|
mikanim
|
77,592,197
| 891,203
|
IntelliJ with Python plugin and project SDK virtualenv, can't find library code for browsing
|
<p>Using IntelliJ 2023.1.2 on MacOS 13.3.</p>
<p>Created project with a Python 3.10 virtualenv SDK at project level. Then <code>pip install openai</code>. Copied simple OpenAI tutorial script into my project. It is recognized as a Python file. The script runs successfully from the command line or from an IntelliJ Run Configuration.</p>
<p>I want to learn/explore the OpenAI objects and methods by browsing the OpenAI library source code. But when I click on the script code, such as <code>client = OpenAI()</code>, IntelliJ shows me an unhelpful error message: <em>Cannot find declaration to go to</em>. What does this mean?</p>
<p><strong>How can I get IntelliJ to let me browse source code of a pip installed Python lib?</strong></p>
|
<python><intellij-idea><virtualenv>
|
2023-12-02 21:13:46
| 1
| 1,658
|
devdanke
|
77,592,144
| 10,152,435
|
I am trying to separate my Python files into different folders; when I try to import I get error No module named 'animal'
|
<p>I have this file structure</p>
<pre><code>- animal          // this folder contains my python project
  - mamal (folder)
    - dog.py      // my dog python file
  - reptile (folder)
    - snake.py    // my snake python file
  - main.py       // the python file that should access the dog class
</code></pre>
<p>Here is my dog.py code</p>
<pre><code>class Dog:
    def __init__(self):
        self.name = "Dog"

    def draw_dog():
        print("₍ᐢ•ᴥ•ᐢ₎")
</code></pre>
<p>this is my snake.py code</p>
<pre><code>class Snake:
    def __init__(self):
        self.name = "Snake"

    def draw_snake():
        print("===========*")
</code></pre>
<p>And here is my main.py code where I try to call the dog class</p>
<pre><code>from mamal.dog import Dog
Dog.draw_dog()
</code></pre>
<p>but when I run main.py I get the following error: <code>ModuleNotFoundError: No module named 'animal'</code></p>
<p>Why is that, and how can I fix this issue?</p>
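<p>As a sketch of the usual fix, assuming the intent is to make <code>mamal</code> a regular package: add an (empty) <code>__init__.py</code> inside the folder and run <code>main.py</code> from the project root. The layout is reproduced below in a temp directory purely for illustration:</p>

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()          # stands in for the 'animal' project folder
pkg = os.path.join(root, "mamal")
os.makedirs(pkg)
# an empty __init__.py makes 'mamal' a regular, importable package
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "dog.py"), "w") as f:
    f.write(
        "class Dog:\n"
        "    @staticmethod\n"
        "    def draw_dog():\n"
        "        return '(dog)'\n"
    )

# main.py at the project root sees the package because the script's own
# directory is on sys.path; we simulate that here:
sys.path.insert(0, root)
from mamal.dog import Dog
print(Dog.draw_dog())
```

<p>The error names <code>animal</code> rather than <code>mamal</code> when the script is launched from outside the project folder, since only the launch directory is on <code>sys.path</code> by default.</p>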
|
<python>
|
2023-12-02 20:56:35
| 1
| 772
|
J.C
|
77,592,100
| 1,256,347
|
How to show axis labels of all subplots when the labels are strings?
|
<h3>Problem summary</h3>
<p>Whenever I try to create a plot with <a href="https://plotly.com/python/plotly-express/" rel="nofollow noreferrer">plotly express 5.18.0</a> containing a subplot <em>and</em> axis labels that are not numbers, I only get labels for the first subplot; subsequent subplots show empty axis labels.</p>
<p>How can I ensure that all subplots show their respective axes labels, even if they contain strings?</p>
<h3>Example data</h3>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import plotly.express as px
N = 100
food = ["Dim sum", "Noodles", "Burger", "Pizza", "Pancake"]
drink = ["Beer", "Wine", "Soda", "Water", "Fruit juice", "Coffee", "Tea"]
df = pd.DataFrame(
{
"age": np.random.randint(8, 99, N),
"favourite_food": np.random.choice(food, N, replace=True),
"favourite_drink": np.random.choice(drink, N, replace=True),
"max_running_speed": np.random.random(N)*20,
"number_of_bicycles": np.random.randint(0, 5, N)
}
)
df.age.replace({range(0, 19): "Kid", range(19, 100): "Adult"}, inplace=True)
</code></pre>
<p>Random 5 rows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">age</th>
<th style="text-align: left;">favourite_food</th>
<th style="text-align: left;">favourite_drink</th>
<th style="text-align: right;">max_running_speed</th>
<th style="text-align: right;">number_of_bicycles</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">Adult</td>
<td style="text-align: left;">Dim sum</td>
<td style="text-align: left;">Wine</td>
<td style="text-align: right;">8.57536</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: right;">65</td>
<td style="text-align: left;">Kid</td>
<td style="text-align: left;">Pizza</td>
<td style="text-align: left;">Water</td>
<td style="text-align: right;">9.45698</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">57</td>
<td style="text-align: left;">Kid</td>
<td style="text-align: left;">Pancake</td>
<td style="text-align: left;">Beer</td>
<td style="text-align: right;">11.1445</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">84</td>
<td style="text-align: left;">Adult</td>
<td style="text-align: left;">Dim sum</td>
<td style="text-align: left;">Soda</td>
<td style="text-align: right;">8.80699</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">45</td>
<td style="text-align: left;">Adult</td>
<td style="text-align: left;">Pizza</td>
<td style="text-align: left;">Fruit juice</td>
<td style="text-align: right;">17.7258</td>
<td style="text-align: right;">4</td>
</tr>
</tbody>
</table>
</div><h3>Demonstration of problem</h3>
<p>If I now create a figure with two subplots:</p>
<ul>
<li>First subplot contains the distribution of the max. running speed (a number)</li>
<li>Second subplot contains the distribution of the number of bicycles (a number)</li>
</ul>
<p>For convenience I use the <code>facet_col</code> argument in combination with the <a href="https://plotly.com/python/wide-form/" rel="nofollow noreferrer">wide-form support of plotly express</a> and the formatting updates I found in this <a href="https://stackoverflow.com/q/60997189/1256347">related Q&A</a>):</p>
<pre class="lang-py prettyprint-override"><code>px.histogram(
df,
x=["max_running_speed", "number_of_bicycles"],
facet_col="variable",
color="age",
barmode="group",
histnorm="percent",
text_auto=".2r",
).update_xaxes(matches=None, showticklabels=True).update_yaxes(matches=None, showticklabels=True)
</code></pre>
<p><a href="https://i.sstatic.net/cvNkE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cvNkE.png" alt="enter image description here" /></a></p>
<p>All works as it should ✅: I get separate ranges for the x- and y-axes and separate labels on the x-axes.</p>
<p>Now I do the same, but for the columns with text data:</p>
<pre class="lang-py prettyprint-override"><code>px.histogram(
df,
x=["favourite_food", "favourite_drink"],
facet_col="variable",
color="age",
barmode="group",
histnorm="percent",
text_auto=".2r",
).update_xaxes(matches=None, showticklabels=True).update_yaxes(matches=None, showticklabels=True)
</code></pre>
<p><a href="https://i.sstatic.net/c3awX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c3awX.png" alt="enter image description here" /></a></p>
<p>Now there's a problem ❌: The x-axis of the right plot does not show the names of the favourite drinks.</p>
<h3>What I've tried</h3>
<p>I checked the underlying data JSON object, as I noticed that when I hover over the bars of the right plot, the "value" field is empty:</p>
<p><a href="https://i.sstatic.net/iY6xT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iY6xT.png" alt="enter image description here" /></a></p>
<p>But when I inspect the JSON object in the <code>.data</code> key of the figure, I see that x-values are present for both histograms:</p>
<p><a href="https://i.sstatic.net/nKGFE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nKGFE.png" alt="enter image description here" /></a></p>
|
<python><plotly><facet><plotly-express>
|
2023-12-02 20:40:53
| 1
| 2,595
|
Saaru Lindestøkke
|
77,591,842
| 162,758
|
namespace package does not get installed as part of pip install
|
<p>I am working on an open source package and have it published on PyPI. I am using Poetry to package and publish my project. My src tree structure is as below:</p>
<pre><code>- src
|- novi
|- client
| |- __init__.py
|- core
| |- __init__.py
|- web
| |- __init__.py
|- novi_activations
</code></pre>
<p>Both novi and novi_activations are python namespace packages and do not have an <code>__init__.py</code> directly underneath. My <code>pyproject.toml</code> is as below</p>
<pre><code>[tool.poetry]
name = "novi"
version = "0.4.5"
description = "A simple yet powerful feature flag and multivariate testing platform built in Python"
authors = ["vdevigere <vdevigere+git@gmail.com>"]
readme = "README.md"
packages = [
    {include = "novi", from="src"},
]
include = [
    {path="src/novi_activations"}
]
.....
</code></pre>
<p>The resulting tar file has the right structure, i.e. both novi and novi_activations get included. However, when my users do a <code>pip install novi</code>, only the novi package gets installed and not novi_activations. I have tried a few things, such as including novi_activations as a package instead:</p>
<pre><code>[tool.poetry]
name = "novi"
version = "0.4.5"
description = "A simple yet powerful feature flag and multivariate testing platform built in Python"
authors = ["vdevigere <vdevigere+git@gmail.com>"]
readme = "README.md"
packages = [
    {include = "novi", from="src"},
    {include = "novi_activations", from="src"}
]
......
</code></pre>
<p>However this fails with the error message that novi_activations is not a package. I can get past this by including some dummy packages under novi_activations, but I don't want to do that. How do I get around this issue? If you are interested, the code and PyPI project links are below:</p>
<ul>
<li><a href="https://github.com/vdevigere/Novi" rel="nofollow noreferrer">https://github.com/vdevigere/Novi</a></li>
<li><a href="https://pypi.org/project/novi/" rel="nofollow noreferrer">https://pypi.org/project/novi/</a></li>
</ul>
|
<python><pip><python-packaging><python-poetry>
|
2023-12-02 19:13:46
| 0
| 2,344
|
VDev
|
77,591,245
| 11,057,932
|
Why isn't __del__ called twice?
|
<p>I want to use an object's <code>__del__</code> method to automatically write the object data to either an external database or a local cache (a global dict), depending on certain conditions. A context manager doesn't work for me.</p>
<p>However, this means that, once <code>__del__</code> is called (because the object has gone out of scope), a new reference to the object may be created in the local cache. Because this sounded like potential trouble, I wrote a simple short test case:</p>
<pre class="lang-py prettyprint-override"><code>cache = []
class Temp:
def __init__(self) -> None:
self.cache = True
def __del__(self) -> None:
print('Running del')
if self.cache:
cache.append(self)
def main():
temp = Temp()
print(temp.cache)
main()
if cache:
print(cache[0].cache)
</code></pre>
<p>When I run this, it outputs:</p>
<pre><code>True
Running del
True
</code></pre>
<p>Whereas I expected</p>
<pre><code>True
Running del
True
Running del
</code></pre>
<p>That is, <code>__del__</code> called twice: once when the object originally went out of scope at the end of <code>main</code>, and once at the end of the program, since a new reference was stored in the cache. Why didn't <code>__del__</code> run twice?</p>
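<p>The likely short answer is PEP 442 (Python 3.4+): an object's finalizer is called at most once, even if <code>__del__</code> resurrects the object and it later becomes unreachable again. A minimal sketch demonstrating this:</p>

```python
import gc

calls = []
cache = []

class Temp:
    def __del__(self):
        calls.append("del")
        cache.append(self)   # resurrect the object

t = Temp()
del t             # __del__ runs once; the object survives in cache
cache.clear()     # object becomes unreachable again...
gc.collect()      # ...but the finalizer is NOT invoked a second time
print(calls)      # ['del']
```
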
|
<python><destructor>
|
2023-12-02 16:38:36
| 1
| 1,927
|
David
|
77,591,165
| 951,296
|
Why are there so many data loss after dask.dataframe to_csv
|
<p>I'm a newbie to Dask distributed; I am doing a simple test to learn it and hit a very strange situation. Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import dask.dataframe as dd
data_vol = 2000
index = pd.date_range("2021-09-01", periods=data_vol, freq="1h")
df = pd.DataFrame({"a": np.arange(data_vol), "b": ["abcaddbe"] * data_vol, 'time': index})
ddf = dd.from_pandas(df, npartitions=10)
df2 = pd.DataFrame({"c": np.arange(data_vol), "d": ["xyzopq"] * data_vol, 'time': reversed(index)})
ddf2 = dd.from_pandas(df2, npartitions=10)
ddf['timestamp'] = ddf.time.apply(lambda x: int(x.timestamp()), meta=('time', 'int64'))
ddf2['timestamp'] = ddf2.time.apply(lambda x: int(x.timestamp()), meta=('time', 'int64'))
def merge_onindex(ddf, ddf2):
    ret = ddf.merge(ddf2)
    ret["add"] = ret.a + ret.c + 1
    return ret
from dask.distributed import Client
import dask
dask.config.set({"dataframe.shuffle.method": "tasks"})
client = Client("tcp://172.17.0.2:8786")
ddf_st = client.scatter(ddf.set_index('timestamp'), broadcast=True)
ddf2_st = client.scatter(ddf2.set_index("timestamp"), broadcast=True)
dd_merge_res = client.submit(merge_onindex, ddf_st, ddf2_st)
## Future: merge_onindex status: finished, type: dask.dataframe.core.DataFrame, key: merge_onindex-da1eb54a93de0c19af3093b76230b9f6
dd_merge_res.result().to_csv("/jupyter/merge_single.csv", single_file=True)
</code></pre>
<p>Then I ran <code>wc -l merge_single.csv</code>: there were only a few hundred lines, and the line count varied each time I ran it.</p>
<p>Here are some head lines:</p>
<pre><code>,a,b,time,c,d,add
0,19,abcaddbe,2021-09-01 19:00:00,1980,xyzopq,2000
1,22,abcaddbe,2021-09-01 22:00:00,1977,xyzopq,2000
2,35,abcaddbe,2021-09-02 11:00:00,1964,xyzopq,2000
3,37,abcaddbe,2021-09-02 13:00:00,1962,xyzopq,2000
4,50,abcaddbe,2021-09-03 02:00:00,1949,xyzopq,2000
5,58,abcaddbe,2021-09-03 10:00:00,1941,xyzopq,2000
6,78,abcaddbe,2021-09-04 06:00:00,1921,xyzopq,2000
7,84,abcaddbe,2021-09-04 12:00:00,1915,xyzopq,2000
8,112,abcaddbe,2021-09-05 16:00:00,1887,xyzopq,2000
</code></pre>
<p>The existing lines are correct, but many other lines are missing!</p>
<p>Thanks for any help!</p>
<p>My environment:</p>
<pre><code>docker base image: python:3.8
dask: 2023.5.0
2 docker containers as worker and one as master. Each has 3 cpus.
</code></pre>
|
<python><dask-distributed><dask-dataframe>
|
2023-12-02 16:18:51
| 1
| 998
|
Flybywind
|
77,591,051
| 5,962,981
|
SqlAlchemy How to get objects from "child" class?
|
<p>This is the code:</p>
<pre><code>from sqlalchemy.orm import declarative_base, relationship
from sqlalchemy import Column, String, Integer, ForeignKey

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parents'
    id = Column(Integer, primary_key=True)
    name = Column(String(20))
    children = relationship('Child', back_populates='parents')

class Child(Base):
    __tablename__ = 'children'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parents.id'))
    name = Column(String(20))
    parents = relationship('Parent', back_populates='children')

mother = Parent(id=1, name='Sarah')
c1 = Child(id=22, parent_id=mother.id, name='Alice')
c2 = Child(id=23, parent_id=mother.id, name='Bob')
print(mother.children)
</code></pre>
<p>OUTPUT:</p>
<pre><code>[]
</code></pre>
<p>As you can see, the result is an empty list. But I want to see <code>c1</code> and <code>c2</code> objects and get their attributes.</p>
<p>I would appreciate your help.</p>
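<p>For what it's worth, a sketch of the in-memory behavior with the same models: assigning through the relationship attribute (rather than setting the raw <code>parent_id</code> column) populates both sides of the relationship before anything touches a database.</p>

```python
from sqlalchemy.orm import declarative_base, relationship
from sqlalchemy import Column, String, Integer, ForeignKey

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parents'
    id = Column(Integer, primary_key=True)
    name = Column(String(20))
    children = relationship('Child', back_populates='parents')

class Child(Base):
    __tablename__ = 'children'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parents.id'))
    name = Column(String(20))
    parents = relationship('Parent', back_populates='children')

# Linking through the relationship attribute (not the raw foreign-key
# column) keeps both sides in sync in memory, with no session needed.
mother = Parent(id=1, name='Sarah')
c1 = Child(id=22, name='Alice', parents=mother)
c2 = Child(id=23, name='Bob', parents=mother)
names = [c.name for c in mother.children]
```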
|
<python><sqlalchemy>
|
2023-12-02 15:46:07
| 1
| 923
|
filtertips
|
77,590,861
| 4,575,197
|
unexpected behavior of for after using Map and Partial method
|
<p>I'm using the <code>partial</code> method to pass 2 parameters which are not iterables, so they shouldn't go into the <code>map()</code> call directly. I'm also using <code>ThreadPoolExecutor</code> for the I/O-bound task that I have here.
The problem is that inside the <code>get_the_text_par()</code> function, I have a for loop which should go through all the rows and send a request for each row (link), but it only runs for the first row and skips the other rows. How can I fix the issue, or what am I missing here?</p>
<pre><code> get_the_text_par = partial(get_the_text,_link_column=link,_firms=firms)
with ThreadPoolExecutor() as executor:
#chunk_size = len(results) // 10
chunk_size= len(results) if len(results)<10 else len(results) // 10
chunks=[results.iloc[i:i + chunk_size] for i in range(0, len(results),chunk_size)]
result = list(executor.map(get_the_text_par,chunks))
</code></pre>
<p>Get_the_Text implementation:</p>
<pre><code>def get_the_text(_df,_firms:list,_link_column:str):
'''
sending a request to recieve the Text of the Articles
Parameters
----------
_df : DataFrame
Returns
-------
dataframe with the text of the articles
'''
_df.reset_index(inplace=True)
print(_df)
for k,link in enumerate(_df[[f'{_link_column}']]):
print(k,'\n',_df.loc[k,f'{_link_column}'])
if link:
website_text=list()
# print(link,'\n','K:',k)
try:
page_status_code,page_content,page_url = send_two_requests(_df.loc[k,f'{_link_column}'])
......
.....
...
..
.
</code></pre>
<p>to import the data :</p>
<pre><code>data = {
'index': [1366, 4767, 6140, 11898],
'DATE': ['2014-01-12', '2014-01-12', '2014-01-12', '2014-01-12'],
'SOURCES': ['go.com', 'bloomberg.com', 'latimes.com', 'usatoday.com'],
'SOURCEURLS': [
'http://abcnews.go.com/Business/wireStory/mercedes-recalls-372k-suvs-21445846',
'http://www.bloomberg.com/news/2014-01-12/vw-patent-application-shows-in-car-gas-heater.html',
'http://www.latimes.com/business/autos/la-fi-hy-autos-recall-mercedes-20140112-story.html',
'http://www.usatoday.com/story/money/cars/2014/01/12/mercedes-recall/4437279/'
],
'Tone': [-0.375235, -1.842752, 1.551724, 2.521008],
'Positive_Score': [2.626642, 1.228501, 3.275862, 3.361345],
'Negative_Score': [3.001876, 3.071253, 1.724138, 0.840336],
'Polarity': [5.628518, 4.299754, 5.0, 4.201681],
'Activity_Reference_Density': [22.326454, 18.918919, 22.931034, 19.327731],
'Self_Group_Reference_Density': [0.0, 0.0, 0.344828, 0.840336],
'Year': [2014, 2014, 2014, 2014],
'Month': [1, 1, 1, 1],
'Day': [12, 12, 12, 12],
'Hour': [0, 0, 0, 0],
'Minute': [0, 0, 0, 0],
'Second': [0, 0, 0, 0],
'Mentioned_firms': ['mercedes', 'vw', 'mercedes', 'mercedes'],
'text': ['', '', '', '']
}
# Creating a DataFrame
df = pd.DataFrame(data)
</code></pre>
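<p>The loop running only once is consistent with iterating over <code>_df[[column]]</code>, which is a one-column <em>DataFrame</em>: iterating a DataFrame yields its column names, while iterating the Series itself yields every row value. A minimal sketch of the difference (toy column, illustrative values):</p>

```python
import pandas as pd

df = pd.DataFrame({'SOURCEURLS': ['u1', 'u2', 'u3']})

# Double brackets -> a one-column DataFrame; iterating it yields the
# column name(s), so a loop body runs once per column, not per row.
over_frame = [x for x in df[['SOURCEURLS']]]

# Single brackets -> a Series; iterating it yields every row value.
over_series = [x for x in df['SOURCEURLS']]
```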
|
<python><pandas><dataframe><python-multithreading>
|
2023-12-02 14:56:35
| 1
| 10,490
|
Mostafa Bouzari
|
77,590,844
| 697,964
|
Separating empty folders from folders which only contain other folders
|
<p>I have a set of files and a set of folders.</p>
<p>This part seems to work:</p>
<pre><code>file_folders = {p.parent for p in files}
no_file_folders = folders - file_folders
</code></pre>
<p>But this part seems to not be correct:</p>
<pre><code>no_file_folders_parents = {p.parent for p in no_file_folders}
folder_folders = no_file_folders & no_file_folders_parents
empty_folders = no_file_folders - folder_folders
</code></pre>
<p>How can I correctly separate empty folders (containing no other folders) and folders which only contain folders?</p>
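<p>One thing that may go wrong with the second part: a no-file folder whose only child is a file-holding folder never shows up in <code>no_file_folders_parents</code> (its child is not in <code>no_file_folders</code>), so it is misclassified as empty. Taking parents over <em>all</em> folders avoids that. A sketch with toy sets (paths are illustrative):</p>

```python
from pathlib import PurePosixPath

# Toy stand-ins for the real `files` and `folders` sets.
files = {PurePosixPath('a/b/f.txt'), PurePosixPath('x/y/g.txt')}
folders = {PurePosixPath(p) for p in
           ['a', 'a/b', 'a/c', 'd', 'd/e', 'x', 'x/y']}

file_folders = {p.parent for p in files}       # folders holding files
no_file_folders = folders - file_folders       # no files directly inside

# Key change: take parents of *all* folders, not only of no_file_folders,
# so a folder whose only child holds files (here 'x') still counts as
# containing folders.
parents_of_folders = {p.parent for p in folders}
folder_only = no_file_folders & parents_of_folders    # contain only folders
empty_folders = no_file_folders - parents_of_folders  # contain nothing
```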
|
<python>
|
2023-12-02 14:50:02
| 1
| 569
|
jaksco
|
77,590,541
| 1,234,434
|
How to split string in pandas column
|
<p>I am new to Pandas. I have data like this:</p>
<pre><code>Category Sales Paid
Table 1 table Yes
Chair 3chairs Yes
Cushion 8 cushions Yes
Table 3Tables Yes
Chair 12 Chairs No
Mats 12Mats Yes
</code></pre>
<p>I have learnt how to apply a groupby on the <code>category</code> column
However, now I have multiple rows per group. I want to sum the number of sales per group held in the <code>Sales</code> column. But the sales number to add is written next to words, e.g. <code>3Tables</code>, and there isn't consistency in how it's written, as you can see above. How can I split out the words, capture the value, sum per group, and print the result?</p>
<p>My reading so far has signalled that I need to use an apply method, with lambdas.</p>
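<p>A minimal sketch of one way this could be approached (a toy copy of the data; pulling the leading digits with <code>str.extract</code> avoids the apply/lambda entirely):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Category': ['Table', 'Chair', 'Cushion', 'Table', 'Chair', 'Mats'],
    'Sales': ['1 table', '3chairs', '8 cushions', '3Tables', '12 Chairs', '12Mats'],
})

# Extract the digits out of the free-form Sales text, then sum per group.
df['n'] = df['Sales'].str.extract(r'(\d+)', expand=False).astype(int)
totals = df.groupby('Category')['n'].sum()
```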
|
<python><pandas>
|
2023-12-02 13:26:01
| 1
| 1,033
|
Dan
|
77,590,508
| 1,279,000
|
How do I query a Flask SQLite database with SQLAlchemy outside of the Flask app url endpoints?
|
<p>I'm creating a Flask REST API. It's working well as an API. I've had a growing need to connect to the app's sqlite database on the side for scheduled tasks (which <a href="https://stackoverflow.com/questions/11810461/how-to-perform-periodic-task-with-flask-in-python">this post</a> could address), yet also for other background IoT-triggered logs via MQTT messages.</p>
<p>So far, when I attempt to import ORM models created for use in the Flask app, I get this error:</p>
<pre><code>ImportError: attempted relative import with no known parent package
</code></pre>
<p>This makes sense as the models are extending Flask's database [session?, something?]. Here's a scheduled task example I'm starting with where I tried to recreate the Flask app environment to make use of the model and run the task (maybe I don't need the full flask app here to use the model?):</p>
<p>remove_old_tokens.py</p>
<pre class="lang-py prettyprint-override"><code># 🔥 I was going to import model here, yet that generates error.
# from ..models import TokenBlocklist
from flask import Flask
from datetime import datetime, timedelta
import os
from flask_sqlalchemy import SQLAlchemy
# TODO: Not sure I like this added globally, yet maybe useful to bring in from
# another file for these types of tasks.
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' \
+ os.path.abspath('../../instance/db.sqlite')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.app_context().push()
db = SQLAlchemy(app)
db.create_all()
# Tokens are set to expire at max 30 days. After about that long with padding,
# remove the tokens from the database to keep that table clean.
def remove_old_tokens():
# 🔥 This line generates the same error as just importing at top of the file
from ..models import TokenBlocklist
forty_days = timedelta(days=40)
forty_days_ago = datetime.now() - forty_days
query = TokenBlocklist.__table__.delete() \
.where(TokenBlocklist.created < forty_days_ago)
db.session.execute(query)
db.session.commit()
print('old tokens deleted')
remove_old_tokens()
</code></pre>
<p>models.py</p>
<pre><code>import uuid
from .app import db
def uuid_str():
return str(uuid.uuid4())
class TokenBlocklist(db.Model):
id = db.Column(
db.String(36),
primary_key=True,
nullable=False,
index=True,
default=uuid_str
)
jti = db.Column(
db.String(36),
nullable=False,
index=True
)
type = db.Column(
db.String(10),
nullable=False
)
created_at = db.Column(
db.DateTime,
nullable=False,
server_default=func.now(),
index=True
)
</code></pre>
<p>app.py</p>
<pre><code>app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///db.sqlite'
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
db = SQLAlchemy(app)
with app.app_context():
db.create_all()
</code></pre>
<p>structure</p>
<pre><code>app/
app.py
models.py
scheduled_tasks/
remove_old_tokens.py
instance/
db.sqlite
</code></pre>
<p>One alternative I could find was to write simple text queries, ignoring the model (finding good examples <a href="https://www.gormanalysis.com/blog/intro-to-sqlalchemy/" rel="nofollow noreferrer">here</a>). However, that may eventually break if I move away from SQLite when the app grows.</p>
<p>How do I resolve the error? Keep in mind that I'm doing more than scheduled tasks that could use a URL when working with the database (live IoT logs the UI will make use of for example).</p>
<p>Full code (excluding this example being worked out) is available <a href="https://github.com/PikesPeakMakerspace/TeslaCore" rel="nofollow noreferrer">here</a>.</p>
<p><strong>UPDATE:</strong></p>
<p>Thanks to @michael-butscher's comment, I was able to resolve the Import error with an altered script that makes use of absolute imports. However, now I'm running into a circular import error. I wonder if it makes sense to rework the models file, or simply query another way:</p>
<pre><code>ImportError: cannot import name 'TokenBlocklist' from partially initialized module 'app.models' (most likely due to a circular import) (/full-path-here/app/models.py)
</code></pre>
<pre><code>import sys
import os
sys.path.append(os.path.abspath('../../'))
from flask import Flask
from datetime import datetime, timedelta
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.abspath('../../instance/db.sqlite')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
# 🔥 This may be the issue, because models.py is trying to load db from un-initialized app.py...
db = SQLAlchemy(app)
app.app_context().push()
# 🔥 app.models is going to try `from .app import db` hmmm...
from app.models import TokenBlocklist
db.create_all()
def remove_old_tokens():
forty_days = timedelta(days=40)
forty_days_ago = datetime.now() - forty_days
query = TokenBlocklist.__table__.delete().where(TokenBlocklist.created < forty_days_ago)
db.session.execute(query)
db.session.commit()
print('old tokens deleted')
remove_old_tokens()
</code></pre>
|
<python><sqlite><flask><sqlalchemy>
|
2023-12-02 13:16:45
| 1
| 1,278
|
Christopher Stevens
|
77,590,201
| 14,923,149
|
Issue with triangles borders in Matplotlib
|
<p>I am facing an issue with drawing triangle borders using Matplotlib in Python. I want to create a specific pattern, but I'm encountering unexpected behavior. I need assistance in identifying and resolving the problem.</p>
<p>this is my code</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
N = 5
A = np.array([(x, y) for y in range(N, -1, -1) for x in range(N + 1)])
t = np.array([[1, 1], [-1, 1]])
A = np.dot(A, t)
# I have defined a triangle
fig = plt.figure(figsize=(10, 10))
triangle = fig.add_subplot(111)
X = A[:, 0].reshape(N + 1, N + 1)
Y = A[:, 1].reshape(N + 1, N + 1)
for i in range(1, N + 1):
for j in range(i):
line_x = np.array([X[i, j + 1], X[i, j], X[i - 1, j]])
line_y = np.array([Y[i, j + 1], Y[i, j], Y[i - 1, j]])
triangle.plot(line_y,line_x, color='black', linewidth=1)
plt.show()
</code></pre>
<p>but I am getting this image, as u can see, <a href="https://i.sstatic.net/4W3f3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4W3f3.png" alt="enter image description here" /></a></p>
<p>At the corner, extra lines appear, as I circled. I don't want this extra line. I tried to solve it using a loop, yet even then one extra line remains:</p>
<pre><code>for i in range(6):
if i == N-1 :
for j in range(i-1):
line_x = np.array([X[i , j+1], X[i, j],X[i-1, j]])
line_y = np.array([Y[i, j+1], Y[i, j], Y[i-1, j]])
triangle.plot(line_y, line_x, color='black', linewidth=1)
pass
else:
for j in range(i):
line_x = np.array([X[i , j+1], X[i, j],X[i-1, j]])
line_y = np.array([Y[i, j+1], Y[i, j], Y[i-1, j]])
triangle.plot(line_y,line_x, color='black', linewidth=1)
pass
plt.show()
</code></pre>
<p>Kindly resolve the issue.</p>
|
<python><python-3.x><matplotlib>
|
2023-12-02 11:43:11
| 2
| 504
|
Umar
|
77,590,137
| 11,748,924
|
cloud function return 400 bad requests
|
<p>I have this code, basically I want to extract the zip file in GCS that triggered by HTTP Request.</p>
<pre><code>import functions_framework
import os
import tempfile
import zipfile
from google.cloud import storage
def extract_zip(event):
"""Cloud Function to extract contents of a zip file uploaded to GCS."""
# Specify your GCS bucket name
bucket_name = "deepcare-dataset"
print(bucket_name)
# Extract information from the event
file_name = event["name"]
file_path = f"gs://{bucket_name}/{file_name}"
print(file_path)
# Set up a temporary directory to extract files
temp_dir = tempfile.mkdtemp()
# Create a storage client
storage_client = storage.Client()
print('Downloading...')
# Download the zip file from GCS
blob = storage_client.get_bucket(bucket_name).get_blob(file_name)
download_path = os.path.join(temp_dir, file_name)
blob.download_to_filename(download_path)
print('Extracting...')
# Extract the contents of the zip file
with zipfile.ZipFile(download_path, 'r') as zip_ref:
zip_ref.extractall(temp_dir)
print('Uploading...')
# Upload the extracted files back to GCS
for extracted_file in os.listdir(temp_dir):
extracted_file_path = os.path.join(temp_dir, extracted_file)
destination_blob_name = f"{file_name[:-4]}/{extracted_file}"
# Upload the extracted file to GCS
storage_client.get_bucket(bucket_name).blob(destination_blob_name).upload_from_filename(extracted_file_path)
print(f'Uploaded for:', destination_blob_name)
# Clean up the temporary directory
os.remove(download_path)
os.rmdir(temp_dir)
print(f"Extraction and upload completed for {file_name}")
@functions_framework.http
def hello_http(request):
"""HTTP Cloud Function.
Args:
request (flask.Request): The request object.
<https://flask.palletsprojects.com/en/1.1.x/api/#incoming-request-data>
Returns:
The response text, or any set of values that can be turned into a
Response object using `make_response`
<https://flask.palletsprojects.com/en/1.1.x/api/#flask.make_response>.
"""
request_json = request.get_json(silent=True)
request_args = request.args
if request_json and 'name' in request_json:
name = request_json['name']
if name == 'myzip.zip':
extract_zip(request_args)
elif request_args and 'name' in request_args:
name = request_args['name']
if name == 'myzip.zip':
extract_zip(request_args)
else:
name = 'World'
return 'Hello {}!'.format(name)
</code></pre>
<p>Here is the logging info, which is verbose but didn't tell me the reason for the 400 Bad Request error. Note that the error only happens if the POST request body is:</p>
<pre><code>{
"name":"myzip.zip"
}
</code></pre>
<p>Other than that, <code>("name" : "foo")</code> returns 200 OK. I suspect it is because my extract function errors, but it's not clear whether the 400 response comes from my function or something else, since Cloud Logging didn't give me a clear reason; it only logged a warning.</p>
<p>LOG:</p>
<pre><code>{
"insertId": "656b0fa30002f03d0c947a91",
"httpRequest": {
"requestMethod": "POST",
"requestUrl": "https://asia-southeast2-***.cloudfunctions.net/function-1",
"requestSize": "464",
"status": 400,
"responseSize": "819",
"userAgent": "PostmanRuntime/7.34.0",
"remoteIp": "104.28.254.47",
"serverIp": "216.239.36.54",
"latency": "0.005019129s",
"protocol": "HTTP/1.1"
},
"resource": {
"type": "cloud_run_revision",
"labels": {
"revision_name": "function-1-00001-qid",
"location": "asia-southeast2",
"service_name": "function-1",
"project_id": "***",
"configuration_name": "function-1"
}
},
"timestamp": "2023-12-02T11:06:11.180972Z",
"severity": "WARNING",
"labels": {
"goog-managed-by": "cloudfunctions",
"instanceId": "0087599d42c23459cbb07b0680f42627a501194e4825c912c5119b703c25894fe2d5caf7a1bbe0ce42006b2c6f646d2cd6ec68f6f5fb2ad91fa39a17e6eaa78789"
},
"logName": "projects/***/logs/run.googleapis.com%2Frequests",
"trace": "projects/***/traces/98c21d5c3145fa7af9b5bc5f2d53b523",
"receiveTimestamp": "2023-12-02T11:06:11.194091644Z",
"spanId": "14357157746819527854",
"traceSampled": true
}
</code></pre>
<p>Does anyone have an idea how I can debug this?</p>
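<p>One way to find out whether the 400 originates in <code>extract_zip</code> is to wrap the call so any exception is logged and returned explicitly instead of surfacing as an opaque status. A debugging sketch (<code>safe_call</code> and <code>boom</code> are illustrative helpers, not part of the shown code):</p>

```python
import traceback

def safe_call(fn, *args):
    # Wrap the worker so any exception becomes a readable 500 response,
    # with a full traceback in the function logs.
    try:
        fn(*args)
        return "ok", 200
    except Exception as exc:
        traceback.print_exc()
        return f"error: {exc!r}", 500

def boom(event):
    # Stand-in for extract_zip raising on a missing key.
    raise KeyError("name")

body, status = safe_call(boom, {})
```

<p>One thing that stands out in the handler (an observation, not a verified diagnosis): the JSON branch also calls <code>extract_zip(request_args)</code>, but for a JSON POST the query string is empty, so <code>event["name"]</code> would raise a <code>KeyError</code>; passing <code>request_json</code> there may be what was intended.</p>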
|
<python><google-cloud-functions><google-cloud-logging>
|
2023-12-02 11:21:34
| 0
| 1,252
|
Muhammad Ikhwan Perwira
|
77,589,937
| 10,584,570
|
Python Create a simple library wrapper (as a side package)
|
<p>I need to extend a library's functionality (let it be pandas, for example). It works if I put my code inside <code>...site-packages/pandas/__init__.py</code> 👍 (however, the code I added will be auto erased with the next pandas upgrade 👎)</p>
<p>Struggle:
So, I need a wrapper library around pandas that I can call
with <code>import my_wrapper as pd</code> - how do I do that?</p>
<p>Current solution: create a module folder <strong>my_wrapper</strong> in site-packages with <strong><code>__init__.py</code></strong>:</p>
<pre><code>import pandas
from pandas.core.base import PandasObject
def has_stackoverflow(ser: pandas.Series):
return ser[ser.str.contains('stackoverflow', case=False, regex=True)]
PandasObject.has_stackoverflow = has_stackoverflow
# now, this method is available on any pandas object:
# pd.pandas.Series(['s','as stackoverflow sa']).has_stackoverflow()
</code></pre>
<p>WORKING IMPORT:</p>
<pre><code>from my_wrapper import pandas as pd
</code></pre>
<p>DESIRED IMPORT:</p>
<pre><code>import my_wrapper as pd
</code></pre>
<p>how do I achieve it?</p>
|
<python><pandas><module><wrapper>
|
2023-12-02 10:18:12
| 0
| 303
|
Anton Frolov
|
77,589,859
| 8,647,273
|
Mocking json.dumps() method in Python unit tests
|
<p>I am working on test case writing using Pytest and I need to mock a function which uses json.dumps() method.</p>
<p>I tried so many ways but somehow this method cannot be mocked.</p>
<p>I tried using decorator -</p>
<pre><code>@mock.patch("json.dumps")
</code></pre>
<p>But this gives error.</p>
<pre><code>TypeError: Object of type MagicMock is not JSON serializable
</code></pre>
<p>Just mocking json also does not work.</p>
<p>Any idea why only json library gives such problems? All my other patches are working fine.</p>
<p>How can I work around this problem?</p>
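<p>That error usually means a <code>MagicMock</code> ends up where real JSON serialization (or a real string) is expected: either another code path still calls the genuine <code>json.dumps</code> with a mock argument, or the patched call's default <code>MagicMock</code> return value gets serialized later. A sketch of one way around it, giving the mock a concrete return value (<code>build_payload</code> is an illustrative stand-in for the unit under test):</p>

```python
import json
from unittest import mock

def build_payload(data):
    # Hypothetical unit under test that serializes its input.
    return json.dumps(data)

# Give the patched json.dumps a real string to return, so nothing
# downstream receives (or tries to serialize) a bare MagicMock.
with mock.patch("json.dumps", return_value='{"mocked": true}') as m:
    out = build_payload({"a": 1})
```

<p>Note also that patching <code>"json.dumps"</code> globally affects every caller, including test machinery and third-party libraries; narrowing the patch to the module under test is often safer.</p>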
|
<python><mocking><pytest><python-unittest>
|
2023-12-02 09:57:01
| 1
| 606
|
Pallavi
|
77,589,765
| 1,082,349
|
Prepare numpy array -- similar to broadcasting
|
<p>Let's say I have an m-dimensional matrix <code>M</code> and an array <code>N</code> of length <code>n</code>.</p>
<p>The shape of <code>M</code> is (a,b,n,e,f), meaning that on a particular axis (here, 2) the dimensionality of <code>M</code> agrees with that of <code>N</code>.</p>
<p>If I want to multiply them, in this particular case, I would do:</p>
<pre><code>M * N[None, None, :, None, None]
</code></pre>
<p>How can I generalize this concept, to cases where the shape of <code>M</code> differs, and the axis may be a different one? I'm sure there must be a convenient function to extend array <code>N</code> accordingly, but I can't find anything</p>
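<p>A small helper can generalize the indexing pattern by building the broadcast shape from the axis (a sketch; the shapes below are illustrative):</p>

```python
import numpy as np

def along_axis(vec, ndim, axis):
    """Reshape 1-D `vec` so it broadcasts against an `ndim`-array on `axis`."""
    shape = [1] * ndim
    shape[axis] = vec.shape[0]
    return vec.reshape(shape)

M = np.ones((2, 3, 4, 5, 6))
N = np.arange(4)
# Same as M * N[None, None, :, None, None]
out = M * along_axis(N, M.ndim, axis=2)
```

<p>NumPy's <code>np.expand_dims</code> inserts one axis at a time; this helper just applies the same idea for every axis at once.</p>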
|
<python><numpy>
|
2023-12-02 09:22:59
| 1
| 16,698
|
FooBar
|
77,589,744
| 11,801,298
|
In search of retrograde Mercury with pandas
|
<p>My object, the planet Mercury, is moving in a circle back and forth. Its coordinate lies within 360 degrees. There is no problem in detecting its reversal if it is far from the 360 degree point (0 degrees).</p>
<pre><code>an easy case:
13.08.2010 166.41245
14.08.2010 167.00584
15.08.2010 167.53165
16.08.2010 167.98625
17.08.2010 168.36589
18.08.2010 168.66672
19.08.2010 168.88494
20.08.2010 169.01682
21.08.2010 169.05885 This is where the backward movement begins
22.08.2010 169.00792 I detect it easily
23.08.2010 168.86147
24.08.2010 168.61771
25.08.2010 168.27591
26.08.2010 167.83665
</code></pre>
<p>I do this by searching for extremes.</p>
<pre><code>from scipy.signal import argrelextrema
</code></pre>
<p>But if it goes from 359 degrees to 1 degree, I get constant crashes.</p>
<p>Here is an example where the system fails. It considers it is a reversal of Mercury, although it just goes from 359 degrees to the 0 degree. This is not the beginning of retrogression.</p>
<pre><code>crash example
13.03.2010 350.60172
14.03.2010 352.53184
15.03.2010 354.47785
16.03.2010 356.43861
17.03.2010 358.41273 This is not the beginning of a backward movement (NOT MAXIMUM)
18.03.2010 0.39843 its just ingression from Pieces to Aries
19.03.2010 2.39354
20.03.2010 4.39545
21.03.2010 6.40106
22.03.2010 8.40673
23.03.2010 10.40828
24.03.2010 12.40098
25.03.2010 14.37956
26.03.2010 16.33824
</code></pre>
<p>So, I need a way to find the reversal points of a planet that moves on a 360-degree circle.</p>
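<p>A common fix is to unwrap the longitudes before searching for extrema, so the artificial 360 → 0 jump disappears and any extremum that remains is a genuine reversal. A sketch on the crash-example values (<code>np.unwrap</code> via a radians round-trip; newer NumPy also accepts <code>period=360</code> directly):</p>

```python
import numpy as np

# Longitudes from the "crash example": steadily increasing motion that
# happens to cross the 360 -> 0 boundary.
deg = np.array([350.60172, 352.53184, 354.47785, 356.43861,
                358.41273, 0.39843, 2.39354, 4.39545])

# Unwrapping lifts 0.39843 to 360.39843 etc., so extrema found afterwards
# (e.g. with scipy.signal.argrelextrema) are real reversals only.
unwrapped = np.degrees(np.unwrap(np.radians(deg)))
steps = np.diff(unwrapped)
```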
|
<python><pandas><scipy>
|
2023-12-02 09:13:03
| 1
| 877
|
Igor K.
|
77,589,708
| 22,371,917
|
How to use proxies with seleniumbase?
|
<p>I am trying to use a proxy with seleniumbase, but I get:
"You are using an unsupported command-line flag: --ignore-certificate-errors. Stability and security will suffer." (on the starting Chrome page), and after opening the site: "Site couldn't be reached. whatismyip.com took too long to respond."
But the code works fine when I don't have a proxy.</p>
<pre class="lang-py prettyprint-override"><code>from seleniumbase import SB
with SB(uc=True, proxy="IP:PORT") as sb:
sb.sleep(10)
sb.driver.get("https://whatismyip.com")
sb.sleep(10)
</code></pre>
<p>And its not that the ip and port are invalid because im checking and making sure they work before using them</p>
<pre class="lang-py prettyprint-override"><code>import requests
proxies = {"http": "http://IP:PORT"}
response = requests.get("http://ipinfo.io/json", proxies=proxies)
print(response.json())
</code></pre>
<p>using this code, could someone help?</p>
<p>Edit: maybe the IPs are different and some work for SB and some don't, even if they work for requests, and I just need to keep trying with SB until I find working ones? I don't know.</p>
|
<python><selenium-webdriver><python-requests><proxy><seleniumbase>
|
2023-12-02 08:59:57
| 1
| 347
|
Caiden
|
77,589,586
| 3,710,481
|
How to share websocket context in quart
|
<p>I have a Quart app with a websocket endpoint.</p>
<pre><code>charge_points = {}
@app.websocket("/ws/<charge_point_id>")
async def on_connect(charge_point_id: str):
logging.info("charge_point_id: %s", charge_point_id)
try:
await websocket.accept()
cp = SChargePoint(charge_point_id, websocket)
charge_points[charge_point_id] = cp
except asyncio.CancelledError:
print('charger WebSocketDisconnect')
logging.info("charger %s WebSocketDisconnect with code %s", charge_point_id, e.code)
</code></pre>
<p>Here SChargePoint which extended ChargePoint class.</p>
<pre><code>class SChargePoint(ChargePoint):
def __init__(self, charge_point_id, connection):
super().__init__(charge_point_id, connection)
self.order_id = None
@on(Action.BootNotification)
def on_boot_notification(
self, charge_point_vendor: str, charge_point_model: str, **kwargs
):
return call_result.BootNotificationPayload(
current_time=datetime.utcnow().isoformat(),
interval=55,
status=RegistrationStatus.accepted,
)
async def remote_start_transaction(self, user, unit_rate, remote_start_item):
#send message to connection.
}
</code></pre>
<p>The ChargePoint class handles sending and receiving web socket messages.</p>
<p>When a new connection is made, I store it in <code>charge_points</code>; it stores the <code>SChargePoint</code> instance.</p>
<p>When I need to send a message to a specific charge point id, I call:</p>
<pre><code>@app.post("/fleet/start-charging/<string:charge_point_id>")
async def remote_start(charge_point_id):
remote_start_item = await request.get_json()
schargepoint_instance = charge_points.get(charger_id)
schargepoint_instance.remote_start_transaction(user, 21, remote_start_item)
</code></pre>
<p>It calls the <code>remote_start_transaction</code> method of the <code>SChargePoint</code> class. <code>SChargePoint</code> already has the websocket connection, but when it tries to send a message using <code>send</code>, I get the error: <code>Not within a websocket context</code>.</p>
<p>I tried decorating the <code>@app.post("/fleet/start-charging/<string:charge_point_id>")</code> route with <code>@copy_current_websocket_context</code>, but no luck.</p>
|
<python><flask><websocket><quart>
|
2023-12-02 08:09:25
| 0
| 8,151
|
patelarpan
|
77,589,521
| 14,154,197
|
Simple approach for running a docker image with python on ubuntu
|
<p>The problem is that I have a python app and want to create a docker image from it. What is a simple approach to do so?</p>
<p>I am using Ubuntu Desktop 22.04, running a Python Django rest_framework app inside a (venv) virtual environment.</p>
<p>I have Docker 24.0.7 already installed and need to create a Dockerfile and know the steps to create and run a Docker image.</p>
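<p>A minimal Dockerfile sketch for a Django REST Framework app; file names, the port, and the <code>runserver</code> command are assumptions, so adjust them to the project (and prefer gunicorn or similar over <code>runserver</code> in production):</p>

```dockerfile
# Sketch: assumes requirements.txt and manage.py sit at the project root.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

<p>Then build and run with something like <code>docker build -t myapp .</code> and <code>docker run -p 8000:8000 myapp</code> (the image name is illustrative).</p>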
|
<python><bash><docker><ubuntu>
|
2023-12-02 07:39:30
| 1
| 401
|
André Luiz Myszko
|
77,589,457
| 3,416,774
|
Is there a way to set attribute to sub objects in a class?
|
<p>I would like to be able to do this:</p>
<pre class="lang-py prettyprint-override"><code>>>> obj = Example('hi', 'hello')
>>> obj.a
'hi'
>>> obj.sub_obj.b
'hello'
</code></pre>
<p>I try this but I get <code>AttributeError: 'dict' object has no attribute 'b'</code>:</p>
<pre class="lang-py prettyprint-override"><code>class Example:
def __init__(self, a, b):
self.a = a
self.sub_obj = {}
self.sub_obj.b = b
</code></pre>
<p>I see a similar question but I don't understand much: <a href="https://stackoverflow.com/q/17914737/3416774">Can python objects have nested properties?</a>. I just want the output JSON is to have nested objects. The API requires me to do so.</p>
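<p>A sketch of one way to get the attribute-style access: a plain dict has no attribute access, but <code>types.SimpleNamespace</code> (or a small inner class) does.</p>

```python
from types import SimpleNamespace

class Example:
    def __init__(self, a, b):
        self.a = a
        # SimpleNamespace supports obj.sub_obj.b directly, unlike a dict.
        self.sub_obj = SimpleNamespace(b=b)

obj = Example('hi', 'hello')
```

<p>For producing the nested JSON the API wants, <code>vars(obj.sub_obj)</code> turns the namespace back into a serializable dict (nested dataclasses with <code>dataclasses.asdict</code> are another option).</p>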
|
<python><object><attributes><nested-object>
|
2023-12-02 07:14:24
| 2
| 3,394
|
Ooker
|
77,589,368
| 3,542,535
|
Jinja YAML Templating: Get Value of Optional Nested Key or Use Default Value
|
<p>I'm trying to create a YAML file from some inputs passed to a Jinja template. I want certain input chained keys be optional and used in the templating if present, but otherwise ignored. In the example below, <code>override.source.property</code> may or may not exist in the input file, and <code>override</code> is a top level key when it is present. Is there a way to optionally retrieve <code>a.certain.key</code> by its full chained path via Jinja templating?</p>
<pre class="lang-yaml prettyprint-override"><code># without_override.yaml
name: blah
</code></pre>
<pre class="lang-yaml prettyprint-override"><code># with_override.yaml
name: blah
overrides:
source:
property: something
</code></pre>
<pre><code># template.yaml.jinja
name: {{ name }}
source.property: {{ overrides.source.property or "property of " + name }}
source.property3: {{ overrides.source.property | default("property of " + name) }}
{# is there a way to provide a full path and return a value if it exists?
# top level document reference?
source.property2: {{ self.get("overrides.source.property") or "property of " + name }}
#}
</code></pre>
<pre class="lang-py prettyprint-override"><code># renderer.py
import yaml
import sys
from jinja2 import Environment, StrictUndefined, ChainableUndefined
def render_jinja(template, context):
# jinja_env = Environment(extensions=["jinja2.ext.do"], undefined=StrictUndefined)
jinja_env = Environment(extensions=["jinja2.ext.do"], undefined=ChainableUndefined)
template_obj = jinja_env.from_string(template)
return template_obj.render(**context).strip()
if __name__ == "__main__":
with open(sys.argv[1]) as f:
config = yaml.safe_load(f.read())
with open("template.yaml.jinja") as f:
template = f.read()
print(render_jinja(template, config))
</code></pre>
<pre class="lang-bash prettyprint-override"><code># python renderer.py with_override.yaml
# using StrictUndefined or ChainedUndefined RETURNS
name: blah
source.property: something
source.property3: something
</code></pre>
<pre class="lang-bash prettyprint-override"><code># python renderer.py without_override.yaml
# with ChainedUndefined RETURNS
name: blah
source.property: property of blah
source.property3: property of blah
# with StrictUndefined ERRORS
jinja2.exceptions.UndefinedError: 'overrides' is undefined
</code></pre>
|
<python><templates><yaml><jinja2>
|
2023-12-02 06:31:09
| 1
| 413
|
alpacafondue
|
77,589,188
| 12,035,739
|
Sorting non-negative integers in linear time using Numba breaks
|
<p>I am trying to sort an array/list of non-negative integers in linear time. We also only keep the unique elements. Here is an example,</p>
<pre><code>Sort: [7, 7, 0, 3, 2, 1, 9, 1]
7: 10000000
7: 10000000
0: 10000001
3: 10001001
2: 10001101
1: 10001111
9: 1010001111
1: 1010001111
1010001111: []
101000111: [0]
10100011: [0, 1]
1010001: [0, 1, 2]
101000: [0, 1, 2, 3]
10100: [0, 1, 2, 3]
1010: [0, 1, 2, 3]
101: [0, 1, 2, 3]
10: [0, 1, 2, 3, 7]
1: [0, 1, 2, 3, 7]
: [0, 1, 2, 3, 7, 9]
</code></pre>
<p>Essentially, I am implementing <code>np.unique([7, 7, 0, 3, 2, 1, 9, 1])</code> in linear time. Here is my Python,</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from time import perf_counter
from numba import njit
# @njit
def count(ls):
ret = []
m = 0
for x in ls:
m = m | (1 << int(x))
i = 0
while m > 0:
if (m & 1):
ret.append(i)
m = m >> 1
i += 1
return ret
RNG = np.random.default_rng(0)
x = RNG.integers(2**16, size=2**17)
start = perf_counter()
y1 = np.unique(x)
print(perf_counter() - start)
start = perf_counter()
y2 = count(x)
print(perf_counter() - start)
print((y1 == y2).all())
</code></pre>
<p>My "O(n)" sort did not beat Numpy's unique function. I expected that since Python is slower than C (which is where <code>np.unique</code> is implemented I am guessing). To remedy this, I tried using Numba's JIT decorator. But--if I uncomment the decorator, somehow the function breaks and returns an empty list. It works without the decorator.</p>
<p>Can someone please point out my oversight?</p>
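<p>A sketch of where this likely goes wrong, plus a JIT-friendly alternative (a hypothesis, not verified against your exact Numba version): in plain Python <code>m</code> is an arbitrary-precision integer, but under <code>@njit</code> it is typed as a machine <code>int64</code>, so <code>1 &lt;&lt; int(x)</code> overflows once <code>x &gt;= 63</code> (your values go up to <code>2**16</code>) and the mask silently loses its bits. One flag per possible value keeps the same O(n + max_value) idea without big-integer arithmetic:</p>

```python
import numpy as np

def count_flags(ls, max_value):
    # One boolean flag per possible value replaces the giant bitmask,
    # so every operation stays within fixed-width types.
    seen = np.zeros(max_value, dtype=np.bool_)
    for x in ls:
        seen[x] = True
    out = []
    for i in range(max_value):
        if seen[i]:
            out.append(i)
    return out

result = count_flags(np.array([7, 7, 0, 3, 2, 1, 9, 1]), 16)
```

<p>This version should also accept the <code>@njit</code> decorator unchanged, though I have not benchmarked it against <code>np.unique</code>.</p>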
|
<python><sorting><numba>
|
2023-12-02 04:49:54
| 1
| 886
|
scribe
|
77,589,004
| 23,002,898
|
Problem execute calculations in a nested loop. TypeError: 'numpy.float64' object is not iterable
|
<p>I'm trying to calculate the <a href="https://en.wikipedia.org/wiki/Residual_sum_of_squares" rel="nofollow noreferrer">sum of squared errors</a> and I'm using a nested loop.</p>
<p><a href="https://i.sstatic.net/ZnPoZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZnPoZ.png" alt="enter image description here" /></a></p>
<p>I'm new to Python, and I apologize, but I encounter this error:</p>
<pre><code> File "...", line 13, in <module>
for y in values_subtraction_mean:
TypeError: 'numpy.float64' object is not iterable
</code></pre>
<p>The problem is with the second loop, when I have to calculate <code>result</code> in:
<code>for y in values_subtraction_mean: result = sum(math.sqrt(y))</code></p>
<p>In the second loop, it should go through all values of <code>values_subtraction_mean</code>, so it should show <code>2.2, -0.8, 0.2, 1.2, -2.8</code>. Next, for each value above, the square should be calculated, giving <code>4.84, 0.64, 0.04, 1.44, 7.84</code>. In the end you have to sum all these numbers and get <code>14.8</code>.</p>
<p>What am I doing wrong?</p>
<pre><code>from numpy import mean
import math
values = [5, 2, 3, 4, 0]
mean = mean(values)
for x in values:
values_subtraction_mean = x - mean
print(values_subtraction_mean)
#2.2, -0.8, 0.2, 1.2, -2.8
for y in values_subtraction_mean:
result = sum(math.sqrt(y))
print(result)
#sqrt: 4.84, 0.64, 0.04, 1.44, 7.84
#sum and result: 14.8
</code></pre>
<p>I tried using this, but it doesn't solve the problem:</p>
<pre><code>import numpy as np
values = np.array([5, 2, 3, 4, 0])
</code></pre>
<p>I tried not using numpy, calculating the mean with: <code>sum(values) / len(values)</code>, but it doesn't work either and i get error:</p>
<p><code>TypeError: 'float' object is not iterable</code></p>
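<p>For what it's worth: the error comes from <code>values_subtraction_mean</code> being a single float, because the first loop overwrites it on every pass instead of collecting the deviations, and the listed numbers <code>4.84, 0.64, ...</code> are squares rather than square roots. A sketch of the computation those numbers imply:</p>

```python
import numpy as np

values = [5, 2, 3, 4, 0]
mean = np.mean(values)

# Keep every deviation in a list (the original loop kept only the last
# one), then square each and sum.
deviations = [x - mean for x in values]   # 2.2, -0.8, 0.2, 1.2, -2.8
squares = [d ** 2 for d in deviations]    # 4.84, 0.64, 0.04, 1.44, 7.84
result = sum(squares)                     # 14.8
```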
|
<python><python-3.x><numpy><loops><math>
|
2023-12-02 02:56:01
| 3
| 307
|
Nodigap
|
77,588,920
| 5,267,751
|
Is it possible to override the pretty-printing of an existing data type in SageMath?
|
<p>In IPython, using the method in <a href="https://stackoverflow.com/questions/14977066/how-can-i-configure-ipython-to-display-integers-in-hex-format">How can I configure ipython to display integers in hex format?</a> , it's possible to customize how objects are printed:</p>
<pre><code>In [1]: import ast
...: formatter=get_ipython().display_formatter.formatters["text/plain"]
...: formatter.for_type(ast.AST, lambda o, p, cycle: p.text("??"))
Out[1]: <function __main__.<lambda>(o, p, cycle)>
In [2]: x=ast.parse('1+2')
In [3]: x
Out[3]: ??
</code></pre>
<p>In SageMath, the same method does not work:</p>
<pre><code>sage: import ast
....: formatter=get_ipython().display_formatter.formatters["text/plain"]
....: formatter.for_type(ast.AST, lambda o, p, cycle: p.text("??"))
....: x=ast.parse('1+2')
sage: x
<ast.Module object at 0x7f5750ccfaf0>
</code></pre>
<p>While for custom classes it appears to be possible to use <code>_repr_</code> or <code>__repr__</code> (the former only if the object inherits from <code>SageObject</code>):</p>
<pre><code>sage: class A(SageObject):
....: ^Idef _repr_(self)->str:
....: ^I^Ireturn "abc"
....: A()
abc
sage: class A:
....: ^Idef __repr__(self)->str:
....: ^I^Ireturn "abc"
....: A()
abc
sage: ast.AST.__repr__=lambda x: "??" # hack, patch Python method
sage: x=ast.parse('1+2')
sage: x
??
</code></pre>
<p>this does not work if I want to customize how an existing SageMath class is printed, e.g.:</p>
<pre><code>sage: from sage.rings.complex_interval import ComplexIntervalFieldElement
sage: ComplexIntervalFieldElement.__repr__=lambda x: "??"
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[30], line 1
----> 1 ComplexIntervalFieldElement.__repr__=lambda x: "??"
TypeError: cannot set '__repr__' attribute of immutable type 'sage.rings.complex_interval.ComplexIntervalFieldElement'
</code></pre>
<p>So, in summary, <strong>is it possible to customize how <code>CIF</code> or other SageMath types are printed in the output</strong>?</p>
|
<python><sage>
|
2023-12-02 02:10:06
| 1
| 4,199
|
user202729
|
77,588,878
| 6,421,708
|
Why is Visual Studio Code running Python when it should run Java on Mac
|
<p>I wrote a standard 'Hello World' Java program in a file called QuickTest.java in the latest Visual Studio Code (on Mac). I have installed the standard Extension Pack for Java, which seems to have installed several required packages.</p>
<pre><code>class Hello {
public static void main (String[] args) {
System.out.println ("Hello, World.");
}
}
</code></pre>
<p>I then got this error when running with F5:</p>
<pre><code> File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
</code></pre>
<p>This clearly shows that VS Code is running Python instead of Java.</p>
|
<python><java><macos>
|
2023-12-02 01:50:20
| 0
| 5,191
|
Keith
|
77,588,790
| 10,574,250
|
ModuleNotFoundError: No module named 'financials_api_get' when trying to run folder location
|
<p>I have a folder directory that looks as such:</p>
<pre><code>-- show_case
--airflow
--dags
fundamental_data_pipeline.py
__init__.py
financials_api_get.py
</code></pre>
<p>I am trying to run a function <code>get_fundemental_data</code> from <code>financials_api_get.py</code> within <code>fundamental_data_pipeline.py</code>, but I get an import error.</p>
<p>My file <code>fundamental_data_pipeline.py</code> looks like this:</p>
<pre><code>import sys
from pathlib import Path

sys.path.insert(1, Path(__file__).resolve().parent.parent.parent)
print(Path(__file__).resolve().parent.parent.parent)
from financials_api_get import get_fundemental_data
</code></pre>
<p>A print of the path prints <code>show_case</code> but I still get</p>
<pre><code>ModuleNotFoundError: No module named 'financials_api_get'
</code></pre>
<p>Why doesn't this work?</p>
<p>EDIT
I have an <code>__init__.py</code> file that looks as such:</p>
<pre><code>from financials_api_get import get_fundemental_data
</code></pre>
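<p>One likely culprit, offered as an assumption since the full traceback is not shown: entries on <code>sys.path</code> are expected to be strings, and inserting a <code>Path</code> object can be ignored by the import machinery on some Python versions. A self-contained sketch (using a throwaway temp directory and a dummy module, not the real project) that wraps the path in <code>str(...)</code>:</p>

```python
import sys
import tempfile
from pathlib import Path

# Create a throwaway directory containing a dummy module to import.
tmp = Path(tempfile.mkdtemp())
(tmp / "financials_api_get.py").write_text(
    "def get_fundemental_data():\n    return 'ok'\n"
)

# Insert the *string* form of the path, not the Path object itself.
sys.path.insert(0, str(tmp))

from financials_api_get import get_fundemental_data
print(get_fundemental_data())  # → ok
```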
|
<python><pathlib>
|
2023-12-02 01:09:15
| 1
| 1,555
|
geds133
|
77,588,662
| 6,162,679
|
`min()` is not working properly in Python?
|
<p>I am new to Python and notice the following error when using <code>min()</code>.</p>
<pre><code>import pandas as pd
a = [1,2,3,4,5]
print(min(20, pd.array(a)**2))
</code></pre>
<p>The code should return <code>[1, 4, 9, 16, 20]</code>; however, it returns <code>[1, 4, 9, 16, 25]</code>. It seems that <code>min()</code> is not working as expected. What is the reason and how can I fix it? Thank you.</p>
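<p>For what it's worth, a hedged sketch of the likely fix: the built-in <code>min</code> picks one of its two arguments wholesale instead of comparing element-wise, so the element-wise <code>numpy.minimum</code> is probably what is wanted here:</p>

```python
import numpy as np

a = [1, 2, 3, 4, 5]

# np.minimum compares 20 against each squared element individually,
# clipping anything above 20 down to 20.
result = np.minimum(20, np.array(a) ** 2)
print(result.tolist())  # → [1, 4, 9, 16, 20]
```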
|
<python><math><min>
|
2023-12-02 00:14:18
| 2
| 922
|
Yang Yang
|
77,588,577
| 130,948
|
ENABLE_ORYX_BUILD being removed during remote build and function app doesn't get the new version of code
|
<p>I am working on azure function with python.</p>
<p>Based on the documentation we are supposed to have below two configuration values when we do remote build on linux machine. Navigate <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-deployment-technologies?tabs=linux#remote-build" rel="nofollow noreferrer">here</a> to see more details on this</p>
<pre><code>ENABLE_ORYX_BUILD=true
SCM_DO_BUILD_DURING_DEPLOYMENT=true
</code></pre>
<p>I am using below command to push the build</p>
<pre><code>az functionapp deployment source config-zip -g <resource group name> -n <function app name> --src
<zipfile> --build-remote true --verbose
</code></pre>
<p>Below is the warning message I am seeing (there is no error). Also, when I view the function app's code, it shows the old code, not the new code I am trying to deploy with remote deploy. Am I missing something?</p>
<pre><code>Removing ENABLE_ORYX_BUILD app setting
</code></pre>
<p>My zip file has below files</p>
<ul>
<li>function_app.py</li>
</ul>
<pre><code>import azure.functions as func
import logging
app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)
@app.route(route="from_vscode_http_trigger")
def from_vscode_http_trigger(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.')
name = req.params.get('name')
if not name:
try:
req_body = req.get_json()
except ValueError:
pass
else:
name = req_body.get('name')
if name:
return func.HttpResponse(f"Hello again again, {name}. This HTTP triggered function executed successfully.")
else:
return func.HttpResponse(
"This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
status_code=200
)
</code></pre>
<ul>
<li>host.json</li>
</ul>
<pre><code>{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.*, 4.0.0)"
}
}
</code></pre>
<ul>
<li>local.settings.json</li>
</ul>
<pre><code>{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "python",
"AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
}
}
</code></pre>
|
<python><azure><azure-functions>
|
2023-12-01 23:31:23
| 1
| 743
|
Ravi Khambhati
|
77,588,539
| 15,456,681
|
Numba very slow with default arguments depending on which arguments are provided
|
<p>I'm implementing a function in Numba where some arguments are compulsory and some have defaults; however, depending on the values chosen for the default arguments, the order in which the different types appear, and which arguments are provided, I am getting very different timings:</p>
<pre class="lang-py prettyprint-override"><code>import numba as nb
@nb.njit
def function(a, b, c, d=1.49012e-8, e=1.49012000000001e-8, f=0.0, g=None):
...
@nb.njit
def function2(a, b, c, d=1.49012e-8, e=1.49012000000001e-8, f=0.0):
...
@nb.njit
def function3(a, b, c, d=1.49012e-8, e=1.49012000000001e-8, f=None, g=0.0):
...
@nb.njit
def function4(a, b, c, d=1.49012e-8, e=1.49012e-8, f=0.0, g=None):
...
</code></pre>
<p>And then timing it with different numbers of <code>args</code> and excluding different <code>kwargs</code>:</p>
<pre><code>d = 1.49012e-8
e = 1.49012000000001e-8
f = 0.0
g = 1000
args = (d, e, f, g)
kwargs = {'d': d, 'e': e, 'f': f, 'g': g}
def time_func(func, args, kwargs):
func(1, 2, 3)
print(func.__name__)
print("time *args")
for i, _ in enumerate(args):
func(1, 2, 3, *args[:i])
%timeit -n 1000 func(1, 2, 3, *args[:i])
print("time **kwargs")
for i in kwargs:
_kwargs = {k: v for k, v in kwargs.items() if k != i}
func(1, 2, 3, **_kwargs)
%timeit -n 1000 func(1, 2, 3, **_kwargs)
time_func(function, args, kwargs)
time_func(function2, args[:-1], {k: v for k, v in kwargs.items() if k != 'g'})
time_func(function3, args, kwargs)
time_func(function4, args, kwargs)
</code></pre>
<p>Output:</p>
<pre><code>function
time *args
26.3 µs ± 425 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
25.4 µs ± 266 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
24 µs ± 175 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
241 ns ± 4.94 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
time **kwargs
235 ns ± 2.03 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
23.7 µs ± 62.6 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
23.3 µs ± 203 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
241 ns ± 5.25 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
function2
time *args
24.1 µs ± 115 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
23.3 µs ± 172 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
22.1 µs ± 428 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
time **kwargs
210 ns ± 1.31 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
22.6 µs ± 97.4 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
21.9 µs ± 98.5 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
function3
time *args
26.3 µs ± 149 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
25.2 µs ± 81.4 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
24 µs ± 160 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
23.3 µs ± 416 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
time **kwargs
237 ns ± 4.64 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
25 µs ± 290 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
255 ns ± 12.5 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
24.2 µs ± 112 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
function4
time *args
26.2 µs ± 238 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
25.1 µs ± 95.6 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
24.1 µs ± 250 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
240 ns ± 5.87 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
time **kwargs
231 ns ± 11.9 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
233 ns ± 3.1 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
23.4 µs ± 132 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
230 ns ± 3.43 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>I've tested with numba 0.58.0 in python 3.11.5 and numba 0.57.1 in python 3.10.12 and get similar results in both.</p>
|
<python><numba>
|
2023-12-01 23:17:00
| 0
| 3,592
|
Nin17
|
77,588,420
| 2,006,921
|
Sphinx autodoc import problems
|
<p>I have a python project main file that resides in <code>C:\b_tool\b.py</code>. I start it from <code>C:</code> with <code>python -m b_tool.b</code>. <code>b.py</code> imports all kinds of other modules.</p>
<p>I would like to add a Sphinx folder like here <code>C:\b_tool\Docs\Sphinx</code>. How would I specify the path (or additional options) in Sphinx's <code>conf.py</code> such that when Sphinx does its thing, it does not run into import problems, because that is what is happening currently.</p>
<p>I created a reduced example:
Directory structure:</p>
<pre><code>b_tool
|
|--b.py
|
|--sub
| |
| |--sub.py
|
|--Doc
|
|--Sphinx
</code></pre>
<p>b.py:</p>
<pre><code>from b_tool.sub.sub import Subclass
a = Subclass()
print(a.a)
</code></pre>
<p>sub.py:</p>
<pre><code>class Subclass():
def __init__(self):
self.a = 3.14;
</code></pre>
<p>I can run b.py from outside of the b_tool folder with <code>python -m b_tool.b</code>. By doing it this way, I can import modules from any subdirectory by specifying the absolute path to the module. If I run b.py as a script, I lose a lot of flexibility in this regard. At least I have not found a good way of doing it in another way, and this method was recommended to me in the post that I cited above.</p>
<p>It is not possible to call b.py as a script, that will return a ModuleNotFoundError.</p>
<p>Now I would like to have the Sphinx machine in the folder as indicated (will probably move it to somewhere else altogether later). The question is now what do I tell Sphinx so that it is able to go through the files without running into import problems?</p>
<p>Here's the Sphinx files:</p>
<p>index.rst:</p>
<pre><code>Welcome to B's documentation!
=============================
.. automodule:: b
:members:
.. toctree::
:maxdepth: 2
:caption: Contents:
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
</code></pre>
<p>conf.py:</p>
<pre><code>project = 'B'
copyright = '2023, John Doe'
author = 'John Doe'
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
extensions = ['sphinx.ext.autodoc']
templates_path = ['_templates']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
</code></pre>
<p>Now when I run <code>make html</code>, I get the following warning/error:</p>
<pre><code>WARNING: autodoc: failed to import module 'b'; the following exception was raised:
No module named 'b_tool'
</code></pre>
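<p>A sketch of a possible fix, assuming the goal is to keep importing everything as <code>b_tool.*</code>: point <code>sys.path</code> at the directory that contains <code>b_tool</code>, which from <code>Doc/Sphinx</code> is three levels up rather than two, and document the module under its package-qualified name (<code>.. automodule:: b_tool.b</code> in <code>index.rst</code>):</p>

```python
# conf.py path setup (sketch; adjust if the Sphinx folder moves)
import os
import sys

# Doc/Sphinx -> Doc -> b_tool -> the directory containing b_tool
sys.path.insert(0, os.path.abspath('../../..'))
```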
|
<python><python-sphinx>
|
2023-12-01 22:35:36
| 1
| 1,105
|
zeus300
|
77,588,263
| 4,822,772
|
How to make session_state persist after button click in streamlit
|
<p>I'm encountering an issue with Streamlit where I'm trying to allow the user to modify text using <code>st.text_input</code> and then display the modified text when a button is clicked. However, the modified text is not persisting as expected in the session state.</p>
<p>Here's a simplified version of the code:</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
# Initialize session state
if 'text' not in st.session_state:
st.session_state.text = "original"
if st.button("show"):
# Allow the user to modify the text
st.session_state.text = st.text_input("Edit Text", value=st.session_state.text)
# Display the modified text
st.markdown(st.session_state.text)
if st.button("show again"):
# Display the modified text
st.markdown(st.session_state.text)
</code></pre>
<p>Despite using <code>st.text_input</code> to modify the text, the "show again" button still displays the original text, not the modified one. I've tried using <code>st.text_area</code> as well, but the issue persists.</p>
<p>Why is the modified text not persisting in the session state as expected? How to make it persist as expected?</p>
|
<python><button><streamlit>
|
2023-12-01 21:49:53
| 1
| 1,718
|
John Smith
|
77,588,222
| 2,532,408
|
Why does mypy not catch error when typealias is used?
|
<p>Consider the following code that mypy catches the fact the wrong type is used.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar, TypeAlias
V = TypeVar("V")
class MyObj(Generic[V]):
def foo(self, arg: tuple[V, V]) -> None:
...
mo = MyObj[int]()
mo.foo((3, 3.4)) # Argument 1 to "foo" of "MyObj" has incompatible type "tuple[int, float]"; expected "tuple[int, int]" [arg-type]
</code></pre>
<p>Can someone explain to me why mypy doesn't catch the error here? Is there some way to create a type alias in this fashion?</p>
<pre class="lang-py prettyprint-override"><code>T_hint: TypeAlias = tuple[V, V]
class MyObj(Generic[V]):
def foo(self, arg: T_hint) -> None:
...
mo = MyObj[int]()
mo.foo((3, 3.4))  # mypy does not catch this
</code></pre>
<p>Is this a bug of mypy or is there a subtle nuance I'm missing?</p>
|
<python><mypy><python-3.12>
|
2023-12-01 21:40:28
| 0
| 4,628
|
Marcel Wilson
|
77,588,203
| 11,391,711
|
effectively creating dictionary of dictionaries where the values are a list from an Excel file
|
<p>I am reading a big table with multiple columns using the <code>parse</code> command. Then, I would like to use the first and second columns as a nested key pair, with the rest of the columns stored as a value in a list. I have written a code snippet which does what I wish. I was wondering if this operation can be performed more efficiently.</p>
<pre><code>import pandas as pd
#The data frame comes from an Excel sheet using df.parse
df = pd.DataFrame({
"Company": ["TechCorp", "Innovate Inc", "Green Solutions", "Future Dynamics"],
"Product": ["TC100", "IN200", "GS300", "FD400"],
"Production Cost": [10000, 15000, 12000, 18000],
"Development Time": [6, 9, 8, 12],
"Launch Year": [2023, 2024, 2023, 2025]
})
nested_dict = {}
for index, row in df.iterrows():
fleet = row['Company']
engine = row['Product']
values = row[['Production Cost', 'Development Time', 'Launch Year']].tolist()
if fleet not in nested_dict:
nested_dict[fleet] = {}
nested_dict[fleet][engine] = values
print(nested_dict)
</code></pre>
<p>My goal is to get the following structure.</p>
<pre><code>{'TechCorp': {'TC100': [10000, 6, 2023]}, 'Innovate Inc': {'IN200': [15000, 9, 2024]}, 'Green Solutions': {'GS300': [12000, 8, 2023]}, 'Future Dynamics': {'FD400': [18000, 12, 2025]}}
</code></pre>
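<p>For comparison, a sketch that avoids <code>iterrows</code> (which constructs a full <code>Series</code> per row) by using <code>itertuples</code> together with <code>dict.setdefault</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Company": ["TechCorp", "Innovate Inc", "Green Solutions", "Future Dynamics"],
    "Product": ["TC100", "IN200", "GS300", "FD400"],
    "Production Cost": [10000, 15000, 12000, 18000],
    "Development Time": [6, 9, 8, 12],
    "Launch Year": [2023, 2024, 2023, 2025],
})

value_cols = ["Production Cost", "Development Time", "Launch Year"]

# itertuples yields lightweight (named) tuples, which is much cheaper
# than the per-row Series objects that iterrows builds.
nested_dict = {}
for company, product, *values in df[["Company", "Product", *value_cols]].itertuples(index=False):
    nested_dict.setdefault(company, {})[product] = values

print(nested_dict)
```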
|
<python><pandas><dictionary>
|
2023-12-01 21:36:05
| 2
| 488
|
whitepanda
|
77,588,149
| 4,541,045
|
Is there any reason to do to multiple inheritance with object?
|
<p>While reviewing some code, it has a structure like</p>
<pre class="lang-py prettyprint-override"><code>class Bar(Foo, object):
</code></pre>
<p>Which seems like it could easily be written instead as</p>
<pre class="lang-py prettyprint-override"><code>class Bar(Foo):
</code></pre>
<p>The functionality appears to be at least the same as a simple chain of inheritance for the purpose of determining method resolution, and both <code>Foo</code> and <code>Bar</code> are <code>isinstance()</code> of <code>object</code> in both cases</p>
<pre class="lang-none prettyprint-override"><code>B → F → o
B → F
↳ o
</code></pre>
<p>Is there any practical benefit to a multiple inheritance from <code>object</code>?</p>
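<p>A quick sketch confirming that both spellings produce the identical MRO, so at least for method resolution there is no difference:</p>

```python
class Foo:
    pass

# Explicitly listing object alongside Foo...
class BarExplicit(Foo, object):
    pass

# ...versus inheriting from Foo alone.
class BarImplicit(Foo):
    pass

# Both resolve methods through the same chain: Bar -> Foo -> object.
print([c.__name__ for c in BarExplicit.__mro__])  # → ['BarExplicit', 'Foo', 'object']
print([c.__name__ for c in BarImplicit.__mro__])  # → ['BarImplicit', 'Foo', 'object']
```

(In Python 2, listing <code>object</code> made a classic class new-style; in Python 3 every class already inherits from <code>object</code>, so the explicit base is redundant.)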
|
<python><python-3.x><multiple-inheritance>
|
2023-12-01 21:23:20
| 1
| 19,831
|
ti7
|
77,588,137
| 1,288,043
|
Issue in simple neural network
|
<p>I tried the code below in TensorFlow with different variations for a simple regression problem. I have synthesized data as <code>y=10.0*x</code>: one input and one outcome variable. But TensorFlow is giving me a loss of ~2000 to ~200000. I am using MSE as the loss function. I also tried relu activations, to no avail.</p>
<ol>
<li>What should be the appropriate model for such a [simple] regression problem?</li>
<li>What should the model be for problem like <code>y=x^3</code>?</li>
</ol>
<pre><code>def PolynomialModel():
inp = layers.Input((1))
l=layers.Dense(16, activation='tanh')(inp)
l=layers.Dense(8, activation='tanh')(l)
l=layers.Dropout(.5)(l)
l=layers.Dense(4, activation='tanh')(l)
l=layers.Dropout(.5)(l)
output=layers.Dense(1, activation='tanh')(l)
return models.Model(inp,output)
</code></pre>
|
<python><tensorflow><regression>
|
2023-12-01 21:21:19
| 1
| 319
|
user1288043
|
77,587,970
| 9,571,463
|
Program using Asyncio with queue ends before consumer finishes
|
<p>I have the following producer pattern where I obtain data from a source, place it in queue, then consume it and write it to a CSV file. However, the program seems to end early because my tasks are all marked as done before they are actually wrote.</p>
<p>My hunch is that this happens because I am yielding data from the queue, so the queue sees that it is finished before the yielded items get processed.</p>
<p>How can I ensure that my queue items are all consumed before ending the program? For example in the following program if ran, the final tag in my list is not finished writing before program exits.</p>
<pre><code>import asyncio
import random
import aiocsv
import aiofiles
from pathlib import Path
# puts random data in a queue with a tag
async def produce(tag: str, q: asyncio.Queue) -> None:
data: float = random.random()
await q.put({
"tag": tag,
"data": data
})
async def read_items(q: asyncio.Queue) -> dict:
# generate from queue
while True:
item = await q.get()
print(f"retrieved item: {item}")
q.task_done()
yield item
# consumes from queue and writes to CSV
async def consume(q: asyncio.Queue, base_dir: Path) -> str:
async for item in read_items(q):
file: str = await write_csv(item, base_dir)
print(f"wrote to file: {file}")
async def write_csv(item: dict, base_dir: Path) -> str:
# file path to write data to for
file_path: str = base_dir.joinpath(str(item["tag"])+".csv")
print(f"writing to {file_path}")
async with aiofiles.open(file_path, mode="a+", newline='') as f:
w: aiocsv.AsyncWriter = aiocsv.AsyncWriter(f)
# write the core data
try:
await w.writerow([item["data"]])
except Exception as err:
print(err)
return file_path
# runs our end to end pattern
async def run() -> None:
q: asyncio.Queue = asyncio.Queue()
# finite list to simulate tags for production
tags: list[str] = [
"foo",
"bar",
"baz",
"foobington",
"barrington",
"bazzington"
]
# make directory we will write to if it does not yet exist
base_path: Path = Path("../data")
try:
base_path.mkdir(parents=True, exist_ok=True)
except Exception as err:
print(err)
# make producers
producers: list[asyncio.Task] = [
asyncio.create_task(produce(t, q))
for t in tags
]
# make a consumer to read data from queue and write to CSV
consumer = asyncio.create_task(consume(q, base_path))
# start producing
await asyncio.gather(*producers)
# wait for all tasks to be consumed from q
await q.join()
# cancel our consumer
consumer.cancel()
def main() -> None:
# boilerplate
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_until_complete(run())
if __name__ == "__main__":
main()
</code></pre>
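<p>One likely culprit, stated as an assumption from reading the code rather than a verified diagnosis: <code>q.task_done()</code> is called as soon as the item is pulled, before the yielded item is actually written, so <code>q.join()</code> can return and the consumer be cancelled while the last write is still in flight. A minimal self-contained sketch of the same pattern with <code>task_done()</code> deferred until after processing:</p>

```python
import asyncio

processed = []

async def produce(tag, q):
    await q.put(tag)

async def consume(q):
    while True:
        item = await q.get()
        try:
            # Simulate the slow CSV write the original performs.
            await asyncio.sleep(0.01)
            processed.append(item)
        finally:
            # Mark the item done only after it is fully handled,
            # so q.join() cannot return mid-write.
            q.task_done()

async def run():
    q = asyncio.Queue()
    for tag in ["foo", "bar", "baz"]:
        await produce(tag, q)
    consumer = asyncio.create_task(consume(q))
    await q.join()      # now guarantees every item was written
    consumer.cancel()

asyncio.run(run())
print(processed)  # → ['foo', 'bar', 'baz']
```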
|
<python><queue><python-asyncio><producer-consumer>
|
2023-12-01 20:38:35
| 1
| 1,767
|
Coldchain9
|
77,587,931
| 1,859,242
|
Langchain - Chat History with Embedded Data not working
|
<p>I'm trying to embed some data with gpt-4-1106-preview and chat about it, similar to ChatPDF. But my problem is that ChatGPT answers the question but doesn't remember our chat history.</p>
<p>Here is the code; I'm probably making a mistake somewhere, but I couldn't find it.</p>
<p>Console command</p>
<pre><code>!pip install langchain openai cohere tiktoken kaleido python-multipart fastapi uvicorn chromadb
</code></pre>
<p>The code</p>
<pre><code>import os
import sys
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
os.environ["OPENAI_API_KEY"] = "sk-XYZ"
loader = TextLoader("./data.txt")
index = VectorstoreIndexCreator().from_loaders([loader])
docs = loader.load();
print(docs)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
query = "My name is Fobus"
print(index.query(query, llm=ChatOpenAI(model="gpt-4-1106-preview", temperature=0.7), verbose=True, memory=memory))
query = "What is my name?"
print(index.query(query, llm=ChatOpenAI(model="gpt-4-1106-preview", temperature=0.7), verbose=True, memory=memory))
print(memory)
</code></pre>
<p>and the output</p>
<pre><code>[Document(page_content='data.txt content comes here', metadata={'source': './data.txt'})]
> Entering new RetrievalQA chain...
> Finished chain.
Hello Fobus, how can I help you today? If you have any questions or need assistance with something, feel free to ask.
> Entering new RetrievalQA chain...
> Finished chain.
I don't know your name. My capabilities don't include access to personal data unless it's shared with me in the course of our conversation, and even then, I don't retain that information. If you'd like to tell me your name, I can address you by it for the duration of our interaction.
chat_memory=ChatMessageHistory(messages=[HumanMessage(content='My name is Fobus'), AIMessage(content='Hello Fobus, how can I help you today? If you have any questions or need assistance with something, feel free to ask.'), HumanMessage(content='What is my name?'), AIMessage(content="I don't know your name. My capabilities don't include access to personal data unless it's shared with me in the course of our conversation, and even then, I don't retain that information. If you'd like to tell me your name, I can address you by it for the duration of our interaction.")]) return_messages=True memory_key='chat_history'
</code></pre>
<p>As you can see in the first message he calls me with my name, but in the second message it forgot.</p>
<p>We want more honest artificial intelligence lol...</p>
|
<python><openai-api><langchain><chat-gpt-4>
|
2023-12-01 20:29:33
| 1
| 2,088
|
fobus
|
77,587,866
| 10,986,032
|
I want to get the anchor tag value and nested anchor tag value from single URL using multithreading
|
<p>I tried the code below to get the anchor tag <code>href</code> values, including nested anchor tags, for the URL <code>https://www.tradeindia.com/</code>, but it does not generate the expected output: it only collects links from the single starting page. Can anyone please suggest a fix?</p>
<pre><code>import concurrent.futures
import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor
def get_page(url):
response = requests.get(url)
return response.content
def extract_links(html_content):
soup = BeautifulSoup(html_content, 'html.parser')
links = [a['href'] for a in soup.find_all('a', href=True)]
return links
def process_page(url):
html_content = get_page(url)
links = extract_links(html_content)
return links
def main():
start_url = 'https://www.tradeindia.com/'
# Fetch the initial page
start_page_content = get_page(start_url)
# Extract links from the initial page
start_page_links = extract_links(start_page_content)
all_links = set(start_page_links)
# Use ThreadPoolExecutor to parallelize the process
with ThreadPoolExecutor(max_workers=5) as executor:
# Submit tasks for processing each link concurrently
future_to_url = {executor.submit(process_page, url): url for url in start_page_links}
# Iterate through completed tasks and update the set of all links
for future in concurrent.futures.as_completed(future_to_url):
url = future_to_url[future]
try:
links_on_page = future.result()
all_links.update(links_on_page)
except Exception as e:
print(f"Error processing {url}: {e}")
# Print all the extracted links
print("All Links:")
print(len(all_links))
for link in all_links:
print(link)
if __name__ == "__main__":
main()
</code></pre>
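<p>One thing worth checking, offered as an assumption since the actual output is not shown: many of the scraped <code>href</code> values on a page like this are relative (<code>/products</code>, <code>about.html</code>), so passing them straight back into <code>requests.get</code> fails. A small stdlib-only sketch (with made-up hrefs) normalizing them with <code>urllib.parse.urljoin</code> before submitting them to the pool:</p>

```python
from urllib.parse import urljoin

base = "https://www.tradeindia.com/"
hrefs = ["/products", "about.html", "https://example.com/x", "#top"]

# urljoin resolves relative hrefs against the page they came from
# and leaves already-absolute URLs untouched.
absolute = [urljoin(base, h) for h in hrefs]
print(absolute)
```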
|
<python><multithreading><request><concurrent.futures>
|
2023-12-01 20:16:13
| 1
| 872
|
Sam
|
77,587,845
| 13,968,392
|
Modify regex capturing group in column
|
<p>How can I modify the capturing group in pandas <code>df.replace()</code>? I am trying to add thousands separators to the numbers within the string of each cell, and this should happen inside a method chain. Here is the code I have so far:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'a_column': ['1000 text', 'text', '25000 more text', '1234567', 'more text'],
"b_column": [1, 2, 3, 4, 5]})
df = (df.reset_index()
.replace({"a_column": {"(\d+)": r"\1"}}, regex=True))
</code></pre>
<p>The problem is that I don't know how to do something with <code>r"\1"</code>, e.g., <code>str(float(r"\1"))</code> doesn't work.</p>
<p>Expected output:</p>
<pre class="lang-none prettyprint-override"><code> index a_column b_column
0 0 1,000 text 1
1 1 text 2
2 2 25,000 more text 3
3 3 1,234,567 4
4 4 more text 5
</code></pre>
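<p>A hedged sketch of one way to get there: unlike <code>df.replace</code>, <code>Series.str.replace</code> accepts a callable replacement, so each match can be formatted with Python's <code>,</code> format spec, and wrapping it in <code>assign</code> keeps the method chain intact:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "a_column": ["1000 text", "text", "25000 more text", "1234567", "more text"],
    "b_column": [1, 2, 3, 4, 5],
})

df = (df.reset_index()
        .assign(a_column=lambda d: d["a_column"].str.replace(
            r"\d+",
            lambda m: f"{int(m.group()):,}",  # e.g. 1234567 -> '1,234,567'
            regex=True)))
print(df["a_column"].tolist())
```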
|
<python><pandas><regex><replace>
|
2023-12-01 20:12:09
| 4
| 2,117
|
mouwsy
|
77,587,807
| 12,114,641
|
ModuleNotFoundError: No module named 'openai.openai_object'
|
<p>I'm getting an error while running a Python script on macOS:</p>
<blockquote>
<p>from openai.openai_object import OpenAIObject</p>
<p>ModuleNotFoundError: No module named 'openai.openai_object'</p>
</blockquote>
<p>How do I fix it?</p>
<pre><code>(llama-py3.12) bash-3.2$ python3 starter.py
Traceback (most recent call last):
File "/Users/imac/ROOT/Python/chatgpt/llama/starter.py", line 1, in <module>
from llama_index import VectorStoreIndex, SimpleDirectoryReader
File "/Users/imac/Library/Caches/pypoetry/virtualenvs/llama-HG1XU64H-py3.12/lib/python3.12/site-packages/llama_index/__init__.py", line 17, in <module>
from llama_index.embeddings.langchain import LangchainEmbedding
File "/Users/imac/Library/Caches/pypoetry/virtualenvs/llama-HG1XU64H-py3.12/lib/python3.12/site-packages/llama_index/embeddings/__init__.py", line 16, in <module>
from llama_index.embeddings.openai import OpenAIEmbedding
File "/Users/imac/Library/Caches/pypoetry/virtualenvs/llama-HG1XU64H-py3.12/lib/python3.12/site-packages/llama_index/embeddings/openai.py", line 18, in <module>
from llama_index.llms.openai_utils import (
File "/Users/imac/Library/Caches/pypoetry/virtualenvs/llama-HG1XU64H-py3.12/lib/python3.12/site-packages/llama_index/llms/__init__.py", line 23, in <module>
from llama_index.llms.litellm import LiteLLM
File "/Users/imac/Library/Caches/pypoetry/virtualenvs/llama-HG1XU64H-py3.12/lib/python3.12/site-packages/llama_index/llms/litellm.py", line 28, in <module>
from llama_index.llms.litellm_utils import (
File "/Users/imac/Library/Caches/pypoetry/virtualenvs/llama-HG1XU64H-py3.12/lib/python3.12/site-packages/llama_index/llms/litellm_utils.py", line 4, in <module>
from openai.openai_object import OpenAIObject
ModuleNotFoundError: No module named 'openai.openai_object'
(llama-py3.12) bash-3.2$ which python
/Users/imac/Library/Caches/pypoetry/virtualenvs/llama-HG1XU64H-py3.12/bin/python
(llama-py3.12) bash-3.2$ pip show openai
Name: openai
Version: 1.3.7
Summary: The official Python library for the openai API
Home-page:
Author:
Author-email: OpenAI <support@openai.com>
License:
Location: /Users/imac/Library/Caches/pypoetry/virtualenvs/llama-HG1XU64H-py3.12/lib/python3.12/site-packages
Requires: anyio, distro, httpx, pydantic, sniffio, tqdm, typing-extensions
Required-by: llama-index
(llama-py3.12) bash-3.2$
</code></pre>
|
<python><llama><llama-index>
|
2023-12-01 20:03:24
| 1
| 1,258
|
Raymond
|
77,587,572
| 7,180,705
|
Trim a List so the Sum is Close to a Target Value
|
<p>I have a list of values that add up to a number and I need to trim it so the sum is close to a target value. Is there any standard algorithm for this? I will elaborate below:</p>
<p>Assume there is a class named <code>MyClass</code> and <code>obj_list</code> is a list containing instances of <code>MyClass</code>. Each instance has a property named <code>area</code>. I can calculate the total area as:</p>
<p><code>tot_area = numpy.sum([i.area for i in obj_list])</code></p>
<p>But I need to make sure that total area is close to (a little above/below) a value <code>target_area</code>. I cannot modify how the instances of <code>MyClass</code> are generated as this includes drawing from meaningful measurements and some randomization. However, I control how many there are (<code>num_objs</code>) and I can remove instances from <code>obj_list</code>. My goal is to remove the smallest number of instances.</p>
<p>My idea is to remove the instances with the smallest area and then work my way up, but this has some drawbacks. For example, what if removing only the third biggest instance is enough to achieve the objective?</p>
<p>I wonder if there are better ways to do this. Any help, suggestion, or name of algorithms is appreciated.</p>
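<p>This is essentially a subset-sum / knapsack-style problem. Below is a hedged, stdlib-only sketch (with made-up areas) of a greedy approach: try to drop the largest items first, since each such removal reduces the total the most per instance dropped, and only drop an item when doing so moves the total strictly closer to the target. If optimality matters, an exact dynamic-programming subset-sum over the areas would be the standard tool instead.</p>

```python
def trim_to_target(areas, target):
    """Greedy sketch: drop few items so that sum(areas) lands near target."""
    keep = sorted(areas, reverse=True)
    total = sum(keep)
    removed = []
    i = 0
    while total > target and i < len(keep):
        # Drop keep[i] only if the new total is strictly closer to target.
        if abs((total - keep[i]) - target) < abs(total - target):
            total -= keep[i]
            removed.append(keep.pop(i))
        else:
            i += 1
    return keep, removed

# Hypothetical areas; total is 115 and we aim for roughly 60.
kept, dropped = trim_to_target([50, 30, 20, 10, 5], 60)
print(sum(kept), dropped)  # → 60 [50, 5]
```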
|
<python><numpy>
|
2023-12-01 19:15:49
| 1
| 329
|
Mohammadreza Khoshbin
|
77,587,508
| 1,163,094
|
Why is `to_thread` faster than `create_task` in python asyncio?
|
<p>I have a blocking function that calls a REST endpoint and returns some result. Let's simulate it as below:</p>
<pre class="lang-py prettyprint-override"><code>import random, time, asyncio
def test_func(x):
time.sleep(2*random.random())
return x
</code></pre>
<p>I tried to use python <code>asyncio</code> to turn this blocking call into async non-blocking calls. I tried two implementations:</p>
<ul>
<li>first:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>class Output:
output = None
def test_func_thread(x, out: Output):
out.output = test_func(x)
async def imp1():
coroutines = []
scores = []
for _ in range(10):
scores.append(Output())
coroutines.append(asyncio.to_thread(test_func_thread, _, scores[-1]))
for coroutine in asyncio.as_completed(coroutines):
await coroutine
return [_.output for _ in scores]
</code></pre>
<ul>
<li>second:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>async def async_score(x):
return test_func(x)
async def imp2():
tasks = []
results = []
for _ in range(10):
tasks.append(asyncio.create_task(async_score(_)))
for t in tasks:
if not t.done():
await t
results.append(t.result())
return results
</code></pre>
<p>And also the sync blocking version:</p>
<pre class="lang-py prettyprint-override"><code>def imp3():
result = []
for _ in range(10):
result.append(test_func(_))
return result
</code></pre>
<p>And the running time of each:</p>
<pre class="lang-py prettyprint-override"><code>print(asyncio.run(imp1()))
> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
%timeit asyncio.run(imp1())
> 1 loops, best of 5: 1.45 s per loop
print(asyncio.run(imp2()))
> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
%timeit asyncio.run(imp2())
> 1 loops, best of 5: 7.11 s per loop
print(imp3())
> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
%timeit imp3()
> 1 loops, best of 5: 8.66 s per loop
</code></pre>
<p>I was expecting <code>imp2()</code> and <code>imp1()</code> to have similar performance, but <code>imp2()</code> is actually closer to the sync blocking version, and using <code>asyncio</code> does not help much. I'm obviously not familiar with <code>asyncio</code>, so here are my questions:</p>
<ol>
<li>Why is <code>imp2()</code> behaving like sync blocking version?</li>
<li>What's a pythonic implementation for use case like this?</li>
<li>If <code>imp2()</code> is more pythonic, how do I make it perform like <code>imp1()</code>?</li>
<li>For <code>imp1()</code>, I had to make a <code>class Output</code> trick to keep the return results in the same order as the function call order and this does not seem pythonic to me. Are there better ways to do something like this?</li>
</ol>
<p>Thanks!</p>
<hr />
<p>Update:</p>
<p>For <code>imp2</code>, I also tried the following implementation:</p>
<pre class="lang-py prettyprint-override"><code>async def imp2():
tasks = [async_score(_) for _ in range(10)]
results = await asyncio.gather(*tasks)
return results
</code></pre>
<p>But it still behaves like a synchronous, blocking call with no performance gain.</p>
|
<python><python-3.x><asynchronous><async-await><python-asyncio>
|
2023-12-01 19:01:00
| 0
| 1,240
|
qkhhly
|
77,587,324
| 8,167,752
|
python-pptx - Adding the content box in a "Title and Content" slide
|
<p>I'm using the pptx module to create a PowerPoint "Title and Content" slide.</p>
<pre><code>prs = Presentation()
mySlide = prs.slides.add_slide(prs.slide_layouts[1])
</code></pre>
<p>I can "address" the title with either "title" attribute or the "placeholder[0]" attribute.</p>
<pre><code># These both work.
titlePlaceholder = mySlide.shapes.title
titlePlaceholder = mySlide.shapes.placeholders[0]
</code></pre>
<p>However, the only way I can figure out to "address" the "content box" on the "Title and Content" slide is with "placeholders[1]".</p>
<pre><code>contentPlaceholder = mySlide.shapes.placeholders[1]
</code></pre>
<p>Is there another way to "address" the content box? Is there something like a "content" attribute?</p>
<pre><code># I'd like to use something similar to this.
contentPlaceholder = mySlide.shapes.content
</code></pre>
<p>Incidentally, my apologies in advance for any typos in my code examples. For corporate reasons, I can't access StackOverflow on the same system I do my Python programming, so I can't cut-and-paste.</p>
|
<python><powerpoint><python-pptx>
|
2023-12-01 18:22:18
| 1
| 477
|
BobInBaltimore
|
77,587,293
| 10,994,166
|
Get max value from an array column and get value with similar index from another column pyspark
|
<p>I have dataframe like this:</p>
<pre><code>| id | label | md |
+-----------+-----------+------+
|[a, b, c] | [1, 4, 2] | 3 |
|[b, d] | [7, 2] | 1 |
|[a, c] | [1, 2] | 8 |
</code></pre>
<p>I want to get the max value from the <code>label</code> column and the value from the <code>id</code> column at the same index.</p>
<p>Expected Output:</p>
<pre><code>| id |label| md |
+----+-----+------+
| b | 4 | 3 |
| b | 7 | 1 |
| c | 2 | 8 |
</code></pre>
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2023-12-01 18:15:38
| 1
| 923
|
Chris_007
|
77,587,268
| 3,569,246
|
Python 3.9 - strongly typed callbacks with known first argument type and additional args typed in the function definition
|
<p>I have a set of a few dozen simple callbacks with various arguments, defined as such:</p>
<pre><code>def test_value(tested: Tested, value: int) -> bool:
    return tested.value >= value

def test_value_and_name(tested: Tested, value: int, name: str) -> bool:
    return tested.value >= value and tested.name == name

def test_owned_thing(tested: Tested, thing: Thing) -> bool:
    return tested.thing == thing
</code></pre>
<p>As you can see, all of them are functions supposed to be called on a <code>Tested</code> object.</p>
<hr />
<p>There is a global "tester" function that is supposed to test all <code>Tested</code> objects that exist. But I'm struggling with how to type it in a way that provides correct type hints for all of the predicates.</p>
<pre><code>def tester_all(predicate: Callable[???????, bool], *args: ???, **kwargs: ???) -> bool:
return all(predicate(tested, ????) for tested in GLOBAL_TESTED_LIST)
</code></pre>
<p>This is the kind of type hints that I expect:</p>
<pre><code>tester_all(test_value, 5) // correct
tester_all(test_value, value=5) // correct
tester_all(test_value) // type error: missing argument
tester_all(test_value, 'string value') // type error: argument should be int
tester_all(test_value, 5, 2137682) // type error: too many arguments
tester_all(test_value_and_name, 5, 'some name') // correct
tester_all(test_value_and_name, 5) // type error: not enough arguments
tester_all(test_value_and_name, 5, 5) // type error: second argument must be a string
tester_all(test_value_and_name, 'some name', 5) // type error: int should be first and string second, not other way around
tester_all(test_owned_thing, Thing()) // correct
tester_all(test_owned_thing, 5) // type error: int is not a Thing
</code></pre>
<p>I tried many various solutions sprinkled all over SO, including those using <code>Protocol</code>s, but I can't find how to make any of them work exactly as I want it to.</p>
<p>From what I gathered so far, it seems I either have to completely lose callback argument typing (and even counting!), or lose the ability to write any number of new typed predicates without having to also update some externally defined Callable/Protocol typings list for my tester.</p>
<p>Which seems weird to me. I just want to define functions and use them as callbacks with typed arguments. Is that too much to ask? ; ) Perhaps my approach is wrong here and it could be made simpler?</p>
<p>(For additional context, I'm the owner of the codebase and I'm happy to introduce additional (preferably simple) type helpers, but my Python version is hard-capped at 3.9 because of an external technical requirement that I can't affect.)</p>
|
<python><python-3.x>
|
2023-12-01 18:11:24
| 2
| 3,756
|
JoannaFalkowska
|
77,587,206
| 14,392,430
|
Is it possible to define "SET NULL" for only one column of a composite foreign key in SQLAlchemy?
|
<p>Postgres supports this feature - <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d6f96ed94e73052f99a2e545ed17a8b2fdc1fb8a" rel="nofollow noreferrer">link</a>. Not sure how to implement it in SQLAlchemy.
If I do <code>ForeignKeyConstraint(..., ondelete="set null(payment_id)")</code>, it throws an error:</p>
<pre><code>Unexpected SQL phrase: 'set null(payment_id)' (matching against '^(?:RESTRICT|CASCADE|SET NULL|NO ACTION|SET DEFAULT)$')
</code></pre>
|
<python><postgresql><sqlalchemy>
|
2023-12-01 17:58:03
| 1
| 708
|
Alexander Farkas
|
77,587,151
| 9,334,609
|
ValueError: client_id must not be blank using PDF Extract API
|
<p>I have followed the steps in Getting Started with PDF Extract API (Python) successfully. The required libraries have been installed in the virtual environment (venv_smed). The source code has no syntax errors. However, when I run the <code>python extract.py</code> command the following error is displayed:</p>
<pre><code>(venv_smed) bash-3.2$ python extract.py
Traceback (most recent call last):
File "/Users/username/Documents/allAboutPython/python3.11/myapp/pkg_app/adobe_pdfservices_sdk/extract.py", line 56, in <module>
credentials = Credentials.service_principal_credentials_builder().with_client_id(os.getenv('PDF_SERVICES_CLIENT_ID')).with_client_secret(os.getenv('PDF_SERVICES_CLIENT_SECRET')).build();
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/username/Documents/allAboutPython/python3.11/myapp/venv_smed/lib/python3.11/site-packages/adobe/pdfservices/operation/auth/service_principal_credentials.py", line 82, in build
return ServicePrincipalCredentials(self._client_id, self._client_secret)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/username/Documents/allAboutPython/python3.11/myapp/venv_smed/lib/python3.11/site-packages/adobe/pdfservices/operation/auth/service_principal_credentials.py", line 28, in __init__
self._client_id = _is_valid(client_id, 'client_id')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/username/Documents/allAboutPython/python3.11/myapp/venv_smed/lib/python3.11/site-packages/adobe/pdfservices/operation/auth/credentials.py", line 17, in _is_valid
raise ValueError(f'{name} must not be blank')
ValueError: client_id must not be blank
(venv_smed) bash-3.2$
</code></pre>
<p>The example code has been taken from the following link:
<a href="https://developer.adobe.com/document-services/docs/overview/pdf-extract-api/quickstarts/python/" rel="nofollow noreferrer">https://developer.adobe.com/document-services/docs/overview/pdf-extract-api/quickstarts/python/</a></p>
<p>I have reviewed the documentation in Adobe Developer and it seems <code>ServiceAccountCredentials.Builder</code> is deprecated.</p>
<p>The section of the code that has problems is the following:</p>
<pre><code>credentials = Credentials.service_principal_credentials_builder().with_client_id(os.getenv('PDF_SERVICES_CLIENT_ID')).with_client_secret(os.getenv('PDF_SERVICES_CLIENT_SECRET')).build();
</code></pre>
<p>I have modified the format of the line of code to obtain the credentials to access the API,</p>
<pre><code>credentials = Credentials.service_principal_credentials_builder().with_client_id(
os.getenv('PDF_SERVICES_CLIENT_ID')).with_client_secret(
os.getenv('PDF_SERVICES_CLIENT_SECRET')).build();
</code></pre>
<p>When executing the <code>python extract.py</code> command I get the same error, however apparently the problem is in obtaining the value of the environment variable <code>PDF_SERVICES_CLIENT_SECRET</code>,</p>
<pre class="lang-none prettyprint-override"><code> Traceback (most recent call last):
File "/Users/username/Documents/allAboutPython/python3.11/myapp/pkg_app/adobe_pdfservices_sdk/extract.py", line 58, in <module>
os.getenv('PDF_SERVICES_CLIENT_SECRET')).build();
^^^^^^^
File "/Users/username/Documents/allAboutPython/python3.11/myapp/venv_smed/lib/python3.11/site-packages/adobe/pdfservices/operation/auth/service_principal_credentials.py", line 82, in build
return ServicePrincipalCredentials(self._client_id, self._client_secret)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/username/Documents/allAboutPython/python3.11/myapp/venv_smed/lib/python3.11/site-packages/adobe/pdfservices/operation/auth/service_principal_credentials.py", line 28, in __init__
self._client_id = _is_valid(client_id, 'client_id')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/username/Documents/allAboutPython/python3.11/myapp/venv_smed/lib/python3.11/site-packages/adobe/pdfservices/operation/auth/credentials.py", line 17, in _is_valid
raise ValueError(f'{name} must not be blank')
ValueError: client_id must not be blank
</code></pre>
<p>The pdfservices-api-credentials.json file has the following content in JSON format,</p>
<pre><code>{
"client_credentials": {
"client_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"client_secret": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"service_principal_credentials": {
"organization_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
</code></pre>
<p>Any guide to resolve the error would be appreciated:</p>
<blockquote>
<p><strong>ValueError: client_id must not be blank</strong></p>
</blockquote>
<p>Solution:</p>
<pre><code>"""
Read JSON
"""
# Opening JSON file
f = open('pdfservices-api-credentials.json')
# returns JSON object as
# a dictionary
data = json.load(f)
# Get client ID and credentials
CLIENT_ID = data["client_credentials"]["client_id"]
CLIENT_SECRET = data["client_credentials"]["client_secret"]
print(CLIENT_ID)
print(CLIENT_SECRET)
# Closing file
f.close()
credentials = Credentials.service_principal_credentials_builder().with_client_id(
CLIENT_ID).with_client_secret(
CLIENT_SECRET).build();
</code></pre>
<p>The result of executing the script is as expected:</p>
<pre><code>(venv_smed) bash-3.2$ python extract.py
"your client_id"
"your client_secret"
Structured Information Output Format
Introduction
List of key components
(venv_smed) bash-3.2$
</code></pre>
|
<python><adobe>
|
2023-12-01 17:45:45
| 1
| 461
|
Ramiro
|
77,586,916
| 2,628,868
|
how to make PyCharm use the pdm download package
|
<p>I am using pdm (<a href="https://github.com/pdm-project/pdm" rel="nofollow noreferrer">https://github.com/pdm-project/pdm</a>) to manage my Python project dependencies. Today I found that PyCharm does not seem to use the packages pdm downloaded; it seems to use its own downloaded packages. Is it possible to make PyCharm use the pdm-downloaded packages, or am I missing something?</p>
<p>pdm has already updated the package to 0.1.34, but PyCharm still uses the legacy 0.1.29 package. Even though pdm has downloaded the package, PyCharm still needs to redownload the same package. This is the pdm info:</p>
<pre><code>> pdm info
PDM version:
2.10.4
Python Interpreter:
/Users/xiaoqiangjiang/source/dolphin/visa/.venv/bin/python (3.10)
Project Root:
/Users/xiaoqiangjiang/source/dolphin/visa
Local Packages:
</code></pre>
<p>this is the pdm environment info:</p>
<pre><code>> pdm info --env
{
"implementation_name": "cpython",
"implementation_version": "3.10.11",
"os_name": "posix",
"platform_machine": "arm64",
"platform_release": "22.4.0",
"platform_system": "Darwin",
"platform_version": "Darwin Kernel Version 22.4.0: Mon Mar 6 20:59:28 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T6000",
"python_full_version": "3.10.11",
"platform_python_implementation": "CPython",
"python_version": "3.10",
"sys_platform": "darwin"
}
</code></pre>
<p>and the PyCharm version is:</p>
<pre><code>PyCharm 2023.2.5 (Community Edition)
Build #PC-232.10227.11, built on November 14, 2023
Runtime version: 17.0.9+7-b1000.46 aarch64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o.
macOS 13.3.1
GC: G1 Young Generation, G1 Old Generation
Memory: 2048M
Cores: 10
Metal Rendering is ON
Registry:
debugger.new.tool.window.layout=true
ide.experimental.ui=true
</code></pre>
|
<python><pycharm><pdm>
|
2023-12-01 16:58:41
| 0
| 40,701
|
Dolphin
|
77,586,707
| 8,527,313
|
Numpy eig/eigh high memory usage
|
<p>Let me start with a disclaimer that I am not as experienced in Python as one might want to be, so it could be that this is being caused by something I had caused myself.</p>
<p>Disclaimer aside, I've noticed some strange behavior in terms of the memory usage in NumPy. Specifically, when performing eigenvalue decompositions (I request both eigenvalues and eigenvectors) the memory usage is significantly larger than expected. This becomes a problem when the matrix size is such that we can not afford to store several copies.</p>
<p>A typical LAPACK implementation (which NumPy calls -- at some point) will compute the eigenvectors in place, thus the memory requirement will be roughly 8N^2+O(N) -- with the 8 coming from the double precision floating point data type used in my case. Thus, with NumPy being supposedly relatively efficient one would expect something similar. Admittedly, we know to expect twice as much, as NumPy will keep the original matrix, so it must at some point make at least a single copy.</p>
<p>Interestingly, this is most definitely not what is observed. As an example, we can track the memory usage, as reported by <em>/proc/&lt;pid&gt;/status</em>, of the following code (using Python 3.9.7, NumPy 1.26.2, SciPy 1.11.3):</p>
<pre><code> import numpy
import time
import scipy
N=10000
A=numpy.random.randn(N, N)
B=2**-0.5 * (A+A.T)
del A
# This will create an easy to identify structure on the graph
time.sleep(1)
C=numpy.random.randn(N, N)
del C
time.sleep(1)
evals,evecs = numpy.linalg.eigh(B)
# This will create an easy to identify structure on the graph
time.sleep(1)
C=numpy.random.randn(N, N)
del C
time.sleep(1)
</code></pre>
<p>For reference, the following tests were performed on a node with 2x AMD EPYC 7702 with OMP_NUM_THREADS=16 running Debian bullseye (Linux 5.10.0-26-amd64), though I don't expect this to be relevant to anything other than the runtime in the plots, which is irrelevant to the problem at hand.</p>
<p>Looking at the time between the two dashed vertical lines, where the <code>numpy.linalg.eigh</code> call takes place, we observe higher memory usage than expected.
<a href="https://i.sstatic.net/H7p7K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H7p7K.png" alt="numpy.linalg.eigh memory use" /></a> The horizontal lines represent the expected size of a matrix of our chosen size. We see that, on top of the single copy we expect, we get two more copies roughly half-way through the calculation. Furthermore, at the very end there appears to be another copy being created, presumably as something is being copied as opposed to moved. In short, we've gone from 2 times the size to 5 times. So my question is, why does this happen and how, if possible, can it be resolved? Ideally I would also avoid having the original copy but as far as I understand this was considered at some point but not implemented in NumPy, so I assume that is not possible at the moment.</p>
<p>Curiously, replacing <code>numpy.linalg.eigh</code> with <code>numpy.linalg.eig</code>, we see even worse performance at the end (not only in time, which is expected, but also in the memory use). <a href="https://i.sstatic.net/xrDfD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xrDfD.png" alt="numpy.linalg.eig memory use" /></a>
At first we see the same behavior, though half-way through we only get one more copy this time around, however at the very end we appear to incur a significantly greater overhead with an additional 4 temporary copies (in terms of the memory use, whether or not that is the result of 4 copies I do not know) being created.</p>
<p>I would assume this is some sort of bug in the implementation of NumPy, at the very least that peak at the very end? Particularly in light of what happens if we try a different implementation, which also uses the LAPACK backend.</p>
<p>We can swap to SciPy (<code>scipy.linalg.eigh</code>), which offers similar functions (though slightly worse performance it seems), which do not suffer from the same peak in memory use at the very end, though we still have 3 copies, which is still not ideal.
<a href="https://i.sstatic.net/tzAjU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tzAjU.png" alt="scipy.linalg.eigh memory use" /></a></p>
<p>SciPy also allows us to use the option <code>overwrite_a=True</code>, which should, according to the documentation, overwrite the matrix. As far as I understand this should then lead to the expected memory behavior in line with what one would observe in C/C++/Fortran, but in practice it does nothing. <a href="https://i.sstatic.net/Y10y2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y10y2.png" alt="scipy.linalg.eigh with overwrite_a=True memory use" /></a>
Indeed, we see roughly identical behavior as without this setting, both in terms of time and space.</p>
<p><strong>TL;DR</strong></p>
<p>How to get the eigenvalues and eigenvectors of a real symmetric matrix in Python with minimal memory use, both NumPy and SciPy appear to have quite some overhead with NumPy being significantly worse?</p>
<p><strong>EDIT 1</strong></p>
<p>Potentially relevant links so far:</p>
<ol>
<li><a href="https://stackoverflow.com/questions/70746660/how-to-predict-memory-requirement-for-np-linalg-inv">How to predict memory requirement for np.linalg.inv?</a> - discusses a similar issue for the inverse, however only for the 3x memory usage in the majority of the algorithm. Does not discuss the peak at the very end.</li>
<li><a href="https://github.com/numpy/numpy/issues/14024" rel="nofollow noreferrer">https://github.com/numpy/numpy/issues/14024</a> - seems to only mention a 2x memory usage due to the copy, that feature request has not been implemented (so it seems), but that only explains a small part of the observed memory usage and I don't think is particularly relevant. They also mention <code>overwrite_a</code> not doing anything for the memory usage in SciPy, which seems to have remained true.</li>
</ol>
|
<python><numpy><scipy>
|
2023-12-01 16:26:13
| 0
| 1,255
|
Qubit
|
77,586,651
| 15,587,184
|
Failing to establish an SSH tunnel in Python (without Putty)
|
<p>I'm attempting to automate the process of connecting to a Redshift server using Python without relying on PuTTY. Currently, I'm on a Windows machine, and I need to extract data from PostgreSQL on a Redshift server. However, to achieve this, I have to:</p>
<ol>
<li><p>Open the PuTTY .exe</p>
</li>
<li><p>Enter this command in PuTTY: <code>"Putty -P <port_number> -noagent -N -L 5534:<redshift_host>:5534 <username>@<remote_host> -i <private_key_file> -pw <password>"</code></p>
</li>
<li><p>Wait a few seconds until PuTTY shows the tunnel is open</p>
</li>
<li><p>Open my Jupyter Python Notebook and finally execute my query:</p>
<pre><code>cxn = psycopg2.connect(user="sql_username",
                       password="sql_password",
                       host="host_ip",
                       port=5534,
                       database="database_name")
</code></pre>
</li>
</ol>
<p>Extract the data and store it as a dataframe.
Since this is quite a manual and inefficient process, I have been searching the web for a way to stop using PuTTY altogether and to create the tunnel and extract my data differently. I have even converted my .ppk key to .pem format to use with other libraries. I'm using paramiko and SSHTunnelForwarder, but I have not been successful in actually connecting through my tunnel. Here is my code:</p>
<pre><code>from sshtunnel import SSHTunnelForwarder
ssh_host = <remote_host>
ssh_port = <port_number>
ssh_user = <username>
ssh_key_path = 'ssh_key_redshift.pem'
ssh_password = <password>
redshift_host = <redshift_host>
redshift_port = 5534
redshift_user = <username>
# Create an SSH tunnel
with SSHTunnelForwarder(
(ssh_host, ssh_port),
ssh_username=ssh_user,
ssh_pkey=ssh_key_path,
ssh_password=ssh_password,
remote_bind_address=(redshift_host, redshift_port),
local_bind_address=('localhost', 5534)
) as tunnel:
print("SSH Tunnel established successfully.")
input("Press Enter to close the tunnel...")
</code></pre>
<p>But unfortunately it does not open and connect the tunnel when I use sshtunnel.</p>
<p>I have heard of the paramiko library, and I would be thrilled if anyone could assist me with this. Essentially, what I need to do is establish an SSH tunnel using <code><port_number></code>, binding the local port 5534 to a Redshift host's port 5534, using the credentials and the key file that I have converted to .pem.</p>
|
<python><ssh><paramiko><openssh><ssh-tunnel>
|
2023-12-01 16:15:49
| 2
| 809
|
R_Student
|
77,586,508
| 66,490
|
How to copy Azure blobs quickly in Python
|
<p>I need to copy about 1000 blobs at a time from one storage account to another. The size of each blob is roughly between 100 and 1000 MB. Each blob is renamed, so I cannot copy the blobs in bulk using a common prefix.</p>
<p>The approach I've taken is to use <code>BlobClient.start_copy_from_url()</code> to create an asynchronous copy operation for each blob and wait for them to complete. The problem is that it takes hours to copy the blobs this way. The operations seem to complete in batches of around 6 operations at a time, which makes me think there's something that prevents more from being processed in parallel.</p>
<p>In comparison, it takes about 5 minutes for Storage Explorer to copy the same blobs between the storage accounts.</p>
<p>How does Storage Explorer copy files so quickly and is there a way to make my Python script copy blobs faster?</p>
<p>My code is essentially similar this:</p>
<pre class="lang-py prettyprint-override"><code>active_jobs=[]
for job in pending_jobs: # 1000 pending jobs
job.target=job.target_client.get_blob_client(job.target_path)
source=job.source_client.get_blob_client(job.source_path)
job.target.start_copy_from_url(source.url)
active_jobs.append(job)
while active_jobs:
for job in active_jobs:
status=job.target.get_blob_properties().copy.status
if status=="success":
job.done=True
print("Job done")
active_jobs=[job for job in active_jobs if not job.done]
</code></pre>
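<p>As far as I know, Storage Explorer delegates transfers to AzCopy, which issues many requests concurrently. Since each <code>start_copy_from_url()</code> kick-off is itself a blocking HTTP round trip, issuing the kick-offs (and the status polls) from a thread pool may help; below is a generic sketch where <code>start_copy</code> is a placeholder for the SDK call, not the Azure API itself:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def start_copy(job):
    # placeholder for job.target.start_copy_from_url(source.url);
    # the real call is an HTTP request, so threads overlap its latency
    return job * 2

jobs = list(range(100))

# 32 concurrent kick-offs instead of one at a time
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(start_copy, jobs))

print(len(results))  # → 100
```

<p>The same pool could run the <code>get_blob_properties()</code> status polls, ideally with a short sleep between polling rounds rather than a busy loop.</p>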
|
<python><azure><azure-blob-storage>
|
2023-12-01 15:52:45
| 2
| 7,505
|
TrayMan
|
77,586,440
| 1,614,870
|
Grouping and counting time intervals by hours with a marker for overlapping days
|
<p>I need to count the number of hour intervals over a monthly period. I need to group it by only time and not by date. For example</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Start</th>
<th>End</th>
</tr>
</thead>
<tbody>
<tr>
<td>23-02-2023</td>
<td>12:10:00</td>
<td>12:34:00</td>
</tr>
<tr>
<td>24-02-2023</td>
<td>12:15:00</td>
<td>12:45:00</td>
</tr>
</tbody>
</table>
</div>
<p>would count 2 for 12:00:00 to 12:59:59 slot</p>
<p>My sample data looks like this (If needed I can change my sample data format)</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>first_appear</th>
<th>last_appear</th>
</tr>
</thead>
<tbody>
<tr>
<td>12:10:00</td>
<td>12:31:00</td>
</tr>
<tr>
<td>12:33:49</td>
<td>13:29:12</td>
</tr>
<tr>
<td>15:30:20</td>
<td>18:40:30</td>
</tr>
<tr>
<td>20:12:20</td>
<td>23:10:20</td>
</tr>
<tr>
<td>23:34:20</td>
<td>6:11:00</td>
</tr>
</tbody>
</table>
</div>
<p>Notice that the last entry denotes an interval that overlaps into the next day.
The code:</p>
<pre><code>import pandas as pd
import numpy as np
import staircase as sc
df = pd.read_csv('Overlapping Schedule - Sheet2.csv')
df["first_appear"] = pd.to_timedelta(df["first_appear"].map(str))
df["last_appear"] = pd.to_timedelta(df["last_appear"].map(str))
df["first_appear"] = df["first_appear"].dt.floor("H")
df["last_appear"] = df["last_appear"].dt.ceil("H")
sf = sc.Stairs(df, start="first_appear", end="last_appear")
sample_times = pd.timedelta_range("00:00:00", "24:00:00", freq=pd.Timedelta("1hr"))
sf(sample_times, include_index=True)
</code></pre>
<p>The output</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>0 days 00:00:00</th>
<th>0</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 days 01:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 02:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 03:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 04:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 05:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 06:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 07:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 08:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 09:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 10:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 11:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 12:00:00</td>
<td>2</td>
</tr>
<tr>
<td>0 days 13:00:00</td>
<td>1</td>
</tr>
<tr>
<td>0 days 14:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 15:00:00</td>
<td>1</td>
</tr>
<tr>
<td>0 days 16:00:00</td>
<td>1</td>
</tr>
<tr>
<td>0 days 17:00:00</td>
<td>1</td>
</tr>
<tr>
<td>0 days 18:00:00</td>
<td>1</td>
</tr>
<tr>
<td>0 days 19:00:00</td>
<td>0</td>
</tr>
<tr>
<td>0 days 20:00:00</td>
<td>1</td>
</tr>
<tr>
<td>0 days 21:00:00</td>
<td>1</td>
</tr>
<tr>
<td>0 days 22:00:00</td>
<td>1</td>
</tr>
<tr>
<td>0 days 23:00:00</td>
<td>1</td>
</tr>
<tr>
<td>1 days 00:00:00</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>Ideally I would like to see following entries as well</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>1 days 01:00:00</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td>1 days 02:00:00</td>
<td>1</td>
</tr>
<tr>
<td>1 days 03:00:00</td>
<td>1</td>
</tr>
<tr>
<td>1 days 04:00:00</td>
<td>1</td>
</tr>
<tr>
<td>1 days 05:00:00</td>
<td>1</td>
</tr>
<tr>
<td>1 days 06:00:00</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>I referred to multiple Stack Overflow answers to come up with this but now I am stuck. @riley's answer to <a href="https://stackoverflow.com/questions/73214050/group-and-count-by-time-interval-python">Group and count by time interval - Python</a> helped me to get started</p>
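<p>For the midnight wrap specifically, one fix I can think of (a sketch, assuming <code>last_appear &lt; first_appear</code> always means the interval ends the next day) is to push the end time into the next day before building the step function:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "first_appear": ["23:34:20", "12:10:00"],
    "last_appear": ["6:11:00", "12:31:00"],
})
df["first_appear"] = pd.to_timedelta(df["first_appear"])
df["last_appear"] = pd.to_timedelta(df["last_appear"])

# an end time earlier than its start means the interval crossed midnight
wrap = df["last_appear"] < df["first_appear"]
df.loc[wrap, "last_appear"] += pd.Timedelta(days=1)

print(df["last_appear"].tolist())
```

<p>After this adjustment, extending <code>sample_times</code> to, e.g., <code>pd.timedelta_range("00:00:00", "1 days 07:00:00", freq="1h")</code> should produce the <code>1 days ...</code> rows as well.</p>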
|
<python><pandas><dataframe><datetime>
|
2023-12-01 15:40:46
| 2
| 9,634
|
Abhijit Mazumder
|
77,586,425
| 967,621
|
Aggregate dataframe using less repetitive code than a bunch of select and merge operations
|
<p>I am trying to summarize/aggregate a dataframe as follows. While the code gives the correct result, it is very repetitive and I would like to avoid that. I think that I need to use something like <code>groupby</code>, <code>agg</code>, <code>apply</code>, etc. but could not find a way to do that. The goal is to compute <code>df_summ</code> at the end. I think that I am using too many intermediate dataframes with selections of rows, and too many <code>merge</code>s to put the results together. I feel that there must be a simpler way, but cannot figure it out.</p>
<p>The real <code>df_stats</code> input dataframe has millions of rows, and <code>df_summ</code> output dataframe has dozens of columns. The input shown below is just a minimal reproducible example.</p>
<pre><code>import io
import pandas as pd
TESTDATA="""
enzyme regions N length
AaaI all 10 238045
AaaI all 20 170393
AaaI all 30 131782
AaaI all 40 103790
AaaI all 50 81246
AaaI all 60 62469
AaaI all 70 46080
AaaI all 80 31340
AaaI all 90 17188
AaaI captured 10 292735
AaaI captured 20 229824
AaaI captured 30 193605
AaaI captured 40 163710
AaaI captured 50 138271
AaaI captured 60 116122
AaaI captured 70 95615
AaaI captured 80 73317
AaaI captured 90 50316
AagI all 10 88337
AagI all 20 19144
AagI all 30 11030
AagI all 40 8093
AagI all 50 6394
AagI all 60 4991
AagI all 70 3813
AagI all 80 2759
AagI all 90 1666
AagI captured 10 34463
AagI captured 20 19220
AagI captured 30 15389
AagI captured 40 12818
AagI captured 50 10923
AagI captured 60 9261
AagI captured 70 7753
AagI captured 80 6201
AagI captured 90 4495
"""
df_stats = pd.read_csv(io.StringIO(TESTDATA), sep='\s+')
df_cap_N90 = df_stats[(df_stats['N'] == 90) & (df_stats['regions'] == 'captured')].drop(columns=['regions', 'N'])
df_cap_N50 = df_stats[(df_stats['N'] == 50) & (df_stats['regions'] == 'captured')].drop(columns=['regions', 'N'])
df_all_N50 = df_stats[(df_stats['N'] == 50) & (df_stats['regions'] == 'all') ].drop(columns=['regions', 'N'])
df_summ_cap_N50_all_N50 = pd.merge(df_cap_N50, df_all_N50, on='enzyme', how='inner', suffixes=('_cap_N50', '_all_N50'))
df_summ_cap_N50_all_N50['cap_N50_all_N50'] = (df_summ_cap_N50_all_N50['length_cap_N50'] -
                                              df_summ_cap_N50_all_N50['length_all_N50'])
print(df_summ_cap_N50_all_N50)
df_summ_cap_N90_all_N50 = pd.merge(df_cap_N90, df_all_N50, on='enzyme', how='inner', suffixes=('_cap_N90', '_all_N50'))
df_summ_cap_N90_all_N50['cap_N90_all_N50'] = df_summ_cap_N90_all_N50['length_cap_N90'] - df_summ_cap_N90_all_N50['length_all_N50']
print(df_summ_cap_N90_all_N50)
df_summ = pd.merge(df_summ_cap_N50_all_N50.drop(columns=['length_cap_N50', 'length_all_N50']),
                   df_summ_cap_N90_all_N50.drop(columns=['length_cap_N90', 'length_all_N50']),
                   on='enzyme', how='inner')
print(df_summ)
</code></pre>
<p>Prints:</p>
<pre><code> enzyme length_cap_N50 length_all_N50 cap_N50_all_N50
0 AaaI 138271 81246 57025
1 AagI 10923 6394 4529
enzyme length_cap_N90 length_all_N50 cap_N90_all_N50
0 AaaI 50316 81246 -30930
1 AagI 4495 6394 -1899
enzyme cap_N50_all_N50 cap_N90_all_N50
0 AaaI 57025 -30930
1 AagI 4529 -1899
</code></pre>
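<p>For what it's worth, I suspect the merges could be replaced by reshaping, e.g. with <code>unstack</code>, roughly like the sketch below (run on a cut-down copy of the toy data above; I have not validated it on the real data, and I don't know if it stays efficient with millions of rows):</p>

```python
import io
import pandas as pd

# Cut-down copy of the toy data, keeping only the rows the summary needs
TESTDATA = """
enzyme regions N length
AaaI all 50 81246
AaaI captured 50 138271
AaaI captured 90 50316
AagI all 50 6394
AagI captured 50 10923
AagI captured 90 4495
"""
df_stats = pd.read_csv(io.StringIO(TESTDATA), sep=r'\s+')

# One row per enzyme, one column per (regions, N) pair
wide = df_stats.set_index(['enzyme', 'regions', 'N'])['length'].unstack(['regions', 'N'])

# Each summary column is then a single vectorized subtraction
df_summ = pd.DataFrame({
    'cap_N50_all_N50': wide[('captured', 50)] - wide[('all', 50)],
    'cap_N90_all_N50': wide[('captured', 90)] - wide[('all', 50)],
}).reset_index()
print(df_summ)
```

<p>On this toy subset it gives the same numbers as my merge-based code (57025/-30930 for AaaI, 4529/-1899 for AagI).</p>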
<p><strong>Notes on bioinformatics background behind this question:</strong></p>
<p><em>(Feel free to skip this, it describes the domain knowledge behind the python code)</em></p>
<p>The above code is one step in a multi-step bioinformatics project, where I try to find optimal restriction enzymes based on the way they cut DNA.</p>
<p>As input for this step, I have a table with restriction enzymes (whose name is stored in column <code>enzyme</code>). I want to rank the enzymes based on the statistical properties of the way they cut DNA. Column <code>regions</code> stores two different DNA region types, which I want to differentiate using these enzymes. Column <code>N</code> is the name of the statistic that measures how finely the DNA is cut (N10, ..., N90), and <code>length</code> is the value of this statistic. The <code>N</code> statistics summarize the DNA fragment length distribution (measured in units of nucleotides), similar in spirit to quantiles (10%, ..., 90%). When I compare the enzymes, I want to do simple operations, such as <code>cap_N90_all_N50 = { captured N90 } - { all N50 }</code>, etc. Then I rank the enzymes by the combination of <code>cap_N50_all_N50</code>, etc.</p>
|
<python><pandas><group-by><merge><aggregate>
|
2023-12-01 15:38:33
| 4
| 12,712
|
Timur Shtatland
|
77,586,339
| 10,252,177
|
Quit Tkinter application by calling class method
|
<p>I am trying to build a splash screen that I will be able to call from an external application using Tkinter. I'd like to define my GUI in a class, like this:</p>
<pre class="lang-py prettyprint-override"><code>class Splash:
    def __init__(self):
        self.root = tk.Tk()
        self.root.overrideredirect(True)
        self.root.wm_attributes("-topmost", True)

        self.label = tk.Label(self.root, text="Initializing...")
        self.label.pack(side=tk.BOTTOM)

        self.progbar = ttk.Progressbar(self.root, orient=tk.HORIZONTAL, mode='indeterminate')
        self.progbar.pack(fill=tk.BOTH, side=tk.BOTTOM, padx=10)
        self.progbar.start(40)

        self.root.update_idletasks()
        self.root.geometry(
            "+{}+{}".format(
                int((self.root.winfo_screenwidth() - self.root.winfo_reqwidth()) / 2),
                int((self.root.winfo_screenheight() - self.root.winfo_reqheight()) / 2),
            )
        )

        self.root.mainloop()

    def close(self):
        self.root.destroy()
</code></pre>
<p>The goal is that in my external app, I can create a <code>Splash</code> object, run my initialization, and then call the <code>close</code> function on the <code>Splash</code> object to close the splash screen, like this:</p>
<pre class="lang-py prettyprint-override"><code>import time
import Splash
x = Splash.Splash()
time.sleep(5) # App initialization happens here.
x.close()
</code></pre>
<p>This isn't working, as <code>root.mainloop()</code> blocks, and the initialization functions are never actually reached. This variation also does not work:</p>
<pre class="lang-py prettyprint-override"><code>class Splash:
    closeRequest = False

    def __init__(self):
        # Everything up until root.mainloop(), as shown above, goes here
        while not self.closeRequest:
            self.root.update_idletasks()
            self.root.update()

    def close(self):
        self.closeRequest = True
</code></pre>
<p><strong>How can I accomplish my goal here?</strong> I'd prefer not to use threads for this, but please feel free to share threads-based solutions as well.</p>
|
<python><tkinter><python-multithreading><python-class><tcltk>
|
2023-12-01 15:27:27
| 1
| 522
|
Ben Zelnick
|
77,586,290
| 709,439
|
How to remove a tag from an element of BeautifulSoup?
|
<p>I have a page like this:</p>
<pre><code>...
<div class="myclass">
<p>
text 1 to keep<span>text 1 to remove</span>and keep this too.
</p>
<p>
text 2 to keep<span>text 2 to remove</span>and keep this too.
</p>
<div>
</code></pre>
<p>I.e.: I want to remove all <code><span></code> tags from any <code><p></code> element from bs4 (BeautifulSoup in Python3).</p>
<p>Currently this is my code:</p>
<pre><code>from bs4 import BeautifulSoup
...
text = ""
for tag in soup.find_all(attrs={"class": "myclass"}):
    text += tag.p.text
</code></pre>
<p>And of course I get all text in spans too...</p>
<p>I read I should use <code>unwrap()</code> or <code>decompose()</code> but I really do not understand how to use them in practice in my use-case...<br />
All similar Q/A do not help...</p>
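<p>To show where I got stuck, here is my attempt with <code>decompose()</code> on a toy snippet — it seems to do something, but I'm not confident this is the correct or idiomatic usage for my real pages:</p>

```python
from bs4 import BeautifulSoup

html = '''
<div class="myclass">
<p>text 1 to keep<span>text 1 to remove</span>and keep this too.</p>
<p>text 2 to keep<span>text 2 to remove</span>and keep this too.</p>
</div>
'''
soup = BeautifulSoup(html, 'html.parser')

# Remove every <span> (the tag plus its contents) before extracting text
for span in soup.select('div.myclass p span'):
    span.decompose()

text = "".join(p.text for p in soup.select('div.myclass p'))
print(text)
```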
|
<python><beautifulsoup>
|
2023-12-01 15:17:50
| 2
| 17,761
|
MarcoS
|
77,586,285
| 1,534,243
|
How can I get a pytorch Tensor containing some other Tensor's size (or shape) without conversion to Python int?
|
<p>In the context of exporting pytorch code to ONNX, I get this warning:</p>
<p>TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.</p>
<p>Here is the offending line:</p>
<pre><code>text_lengths = torch.tensor([text_inputs.shape[1]]).long().to(text_inputs.device)
</code></pre>
<p><code>text_inputs</code> is a <code>torch.Tensor</code> of shape <code>torch.Size([1, 81])</code></p>
<p>And the warning is spot on and cannot be ignored, because the shape of <code>text_inputs</code> is supposed to be dynamic.</p>
<p>I need <code>text_lengths</code> to be a <code>torch.Tensor</code> that contains the number <code>81</code> coming from the <code>shape</code> of <code>text_inputs</code>. The "offending line" from above succeeds in doing that, but we actually make a round trip from pytorch to a Python <code>int</code> and back to pytorch, because the elements in the <code>torch.Size</code> objects are Python <code>int</code>s. This is (1) somewhat weird, (2) probably inefficient in terms of GPU -> CPU -> GPU and, as stated above, an actual problem in the ONNX exporting context.</p>
<p>Is there some other way how I can use a tensor's shape in torch computations, without "leaving" the torch world?</p>
|
<python><pytorch><onnx>
|
2023-12-01 15:16:52
| 1
| 578
|
Dietmar
|
77,586,262
| 8,483,576
|
Using custom JS event handler inside Gradio
|
<p>I have a Gradio chatbot app exposed through the iframe. The main application component is the new <a href="https://www.gradio.app/docs/chatinterface" rel="nofollow noreferrer">Gradio ChatInterface</a>. I want to pass some sensitive data from the parent app using <a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage" rel="nofollow noreferrer">postMessage</a> API (example <a href="https://javascriptbit.com/transfer-data-between-parent-window-and-iframe-postmessage-api/" rel="nofollow noreferrer">here</a>).</p>
<p>Gradio documentation specifies how to <a href="https://www.gradio.app/guides/custom-CSS-and-JS#custom-js" rel="nofollow noreferrer">add a custom js</a>, triggered by a component event, but only some helper methods, and I need to:</p>
<ol>
<li>Set up an event listener on load of the app or on a click of a button</li>
<li>Pass the data from the event listener into the python code so I can use it</li>
</ol>
|
<javascript><python><iframe><gradio><gradio-chatinterface>
|
2023-12-01 15:12:16
| 1
| 404
|
Dušan
|
77,586,212
| 2,729,831
|
Google cloud function finishes with status ok even when I return 500 error code
|
<p>I am using Python for a Google Cloud Function.
When an exception occurs I catch it and return error code 500, but the log shows that the function finishes with status "OK".
This is the code:</p>
<pre><code>import hmac
import hashlib
import base64
import datetime
import traceback
from variables import *
import json
import time
def main(event, context):
    pubsub_headers = event['attributes']
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    product_data = json.loads(pubsub_message)
    try:
        raise Exception()
    except Exception as e:
        print(e)
        traceback.print_exc()
        return "ERROR", 500
    finally:
        time.sleep(30)
</code></pre>
<p>The log message is:
"<code>Function execution took 30014 ms, finished with status: 'ok'</code> "
If I raise an exception in the main function without catching it, the finished status is "crash"</p>
|
<python><google-cloud-functions><google-cloud-pubsub>
|
2023-12-01 15:04:10
| 1
| 473
|
blob
|
77,586,069
| 1,111,088
|
adjustText incorrectly positions annotations when x and y values greater than 1
|
<p>We're using matplotlib and adjustText to plot XY coordinates and pass a list of labels to specific points in the plot and prevent the labels from overlapping one another.</p>
<p>However, I noticed that applying adjust_text to XY values that are greater than 1 causes the labels to be displayed incorrectly. Here's a sample code:</p>
<pre><code>import matplotlib.pyplot as plt
from adjustText import adjust_text
fig, ax = plt.subplots()
x = [10,20,30,40,50,60,70,80,90,100]
y = [11,12,13,14,15,16,17,18,19,20]
ax.plot(x, y)
ax.set_title('Peaks')
ax.set_xlabel('X')
ax.set_ylabel('Y')
annotations = []
for i in range(len(x)):
    annotations.append(plt.text(x[i], y[i], "Label" + str(i)))
</code></pre>
<p>Running this, I get the following:</p>
<p><a href="https://i.sstatic.net/0RKk3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0RKk3.png" alt="Plot without adjustText" /></a></p>
<p>Once I add this:</p>
<pre><code>adjust_text(annotations,arrowprops=dict(arrowstyle="-", color='k', lw=0.5))
</code></pre>
<p>I get this image:</p>
<p><a href="https://i.sstatic.net/7erRQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7erRQ.png" alt="Plot with adjustText" /></a></p>
<p>If I change my x and y values to:</p>
<pre><code>x = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1]
y = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1]
</code></pre>
<p>with the adjust_text line, I get this image:</p>
<p><a href="https://i.sstatic.net/2rmQO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2rmQO.png" alt="Plot with small values and adjustText" /></a></p>
<p>So it seems like it's working for relatively small values. Is there a parameter that I'm missing?</p>
<p>This is on Python 3.9, matplotlib 3.8.2, adjustText 0.8. This was also happening with matplotlib 3.7.1, I just checked out of curiosity.</p>
<p>Update: I tried it on Python 3.11 based on a comment and I got the same results.</p>
|
<python><matplotlib>
|
2023-12-01 14:42:58
| 0
| 2,480
|
rikitikitik
|
77,586,057
| 5,586,359
|
How do I calculate the loss when samples per batch have different shapes?
|
<p>I have a training function like so:</p>
<pre class="lang-py prettyprint-override"><code>def training():
    model.train()
    train_mae = []

    progress = tqdm(train_dataloader, desc='Training')
    for batch_index, batch in enumerate(progress):
        x = batch['x'].to(device)
        x_lengths = batch['x_lengths'].to(device)
        y = batch['y'].to(device)
        y_type = batch['y_type'].to(device)
        y_valid_indices = batch['y_valid_indices'].to(device)

        # Zero Gradients
        optimizer.zero_grad()

        # Forward pass
        y_first, y_second = model(x)

        losses = []
        for j in range(len(x_lengths)):
            x_length = x_lengths[j].item()

            if y_type[j].item() == 0:
                predicted = y_first[j]
            else:
                predicted = y_second[j]

            actual = y[j]

            valid_mask = torch.zeros_like(predicted, dtype=torch.bool)
            valid_mask[:x_length] = 1

            # Padding of -1 is removed from y
            indices_mask = y[j].ne(-1)
            valid_indices = y[j][indices_mask]

            valid_predicted = predicted[valid_mask]
            valid_actual = actual[valid_mask]

            loss = mae_fn(valid_predicted, valid_actual, valid_indices)
            losses.append(loss)

        # Backward pass and update
        loss = torch.stack(losses).mean()  # This fails due to different shapes
        loss.backward()
        optimizer.step()

        train_mae.append(loss.detach().cpu().numpy())

        progress.set_description(
            f"mae: {loss.detach().cpu().numpy():.4f}"
        )

    # Return the average MAEs for y type
    return (
        np.mean(train_mae)
    )
</code></pre>
<pre class="lang-py prettyprint-override"><code>def mae_fn(output, target, indices):
    clipped_target = torch.clip(target, min=0, max=1)
    maes = F.l1_loss(output, clipped_target, reduction='none')
    return maes[indices]
</code></pre>
<p>Obviously I can't stack these losses, since they have different shapes due to the indices. Taking the mean of <code>maes[indices]</code> would solve the issue, but it results in a very bad test loss. How should I calculate the loss here, since the indices determine the shape depending on <code>y_type</code>?</p>
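<p>To make my concern about the averaging concrete: with per-sample loss vectors of different lengths, a mean of per-sample means is not the same as one global mean over all valid elements (plain-Python illustration, numbers made up):</p>

```python
# Two samples with different numbers of valid loss elements
losses = [[1.0, 2.0], [3.0]]

# Option A: average each sample first, then average the samples
mean_of_means = sum(sum(sample) / len(sample) for sample in losses) / len(losses)

# Option B: flatten everything and take one global mean
flat = [x for sample in losses for x in sample]
global_mean = sum(flat) / len(flat)

print(mean_of_means, global_mean)  # 2.25 vs 2.0
```

<p>Longer samples get down-weighted in option A and weighted per-element in option B, which I suspect is why the test loss changes.</p>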
|
<python><deep-learning><pytorch><loss-function>
|
2023-12-01 14:40:56
| 1
| 954
|
Vivek Joshy
|
77,586,036
| 3,521,180
|
Why is the NULL value handling not working properly?
|
<p>I have written some simple code to handle null values for a string type column and a float type column, as shown below:</p>
<pre><code>def format_item(item):
    if item is None:
        if isinstance(item, (Decimal, float)):
            return 0.00
        else:
            return ""
    else:
        if isinstance(item, (Decimal, float)):
            return float(item)
        else:
            return str(item)
</code></pre>
<p>The above code is supposed to handle <code>NULL</code> values in both a <code>string type column</code> and a <code>float type column</code>, i.e. if there is a NULL in the float type column, it is supposed to be replaced with 0.00 in the output, and similarly, a NULL in the string type column should return "", i.e. an empty string.</p>
<p>Below is the output as seen in my logs. In the 3rd list, the <code>2nd last element</code> is shown as an empty string, i.e. <code>""</code>. However, going by the logic in the above code, it should have been at least 0, if not 0.00.</p>
<pre><code>['681738eb', 'Agi', '6817-abc', '5280-oou', 'xyz', 'ert', 'yuo', 'test1', 'garbage', 13456.76, 16148.12, 2691.36, '2023-11-30 16:16:38']
['681738eb', 'Agi', '6817-abc', '5280-oou', 'xyz', 'ert', 'yuo', 'test1', 'garbage', 13456.76, 16148.12, 13.92, '2023-12-01 08:48:33']
['681738eb', 'Agi', '6817-abc', '5280-oou', 'xyz', 'ert', 'yuo', 'test1', 'garbage', 13456.76, 16148.12, '', '2023-11-30 16:17:14']
</code></pre>
<p>As seen in my DB, the data type of that column is <code>"Double"</code></p>
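<p>A minimal repro of what I see, calling the same function outside the Lambda (the <code>Decimal("13.92")</code> value is just an example of what the connector returns for the Double column):</p>

```python
from decimal import Decimal

def format_item(item):
    if item is None:
        if isinstance(item, (Decimal, float)):
            return 0.00
        else:
            return ""
    else:
        if isinstance(item, (Decimal, float)):
            return float(item)
        else:
            return str(item)

# A SQL NULL arrives in Python as None, and I get "" back instead of 0.00
print(repr(format_item(None)))
print(repr(format_item(Decimal("13.92"))))
```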
<p>The above function is called from the function below.</p>
<pre><code>def stored_procedure_call(SP_name, id, entity):
    logging.info(f"Fetching DB connection details.")
    try:
        # Load env file
        load_dotenv()

        # Create the connection object
        conn = mysql.connector.connect(
            user=os.getenv('USER_NAME'),
            password=get_db_password(os.getenv('RDS_HOST')),
            host=os.getenv('RDS_HOST'),
            database=os.getenv('DB_NAME'),
            port=os.getenv('PORT'))

        # Create a cursor
        cursor = conn.cursor()
    except Exception as error:
        logging.error("An unexpected error occurred: {}".format(error))

    try:
        # Call the stored procedure with the provided ID
        cursor.callproc(SP_name, [id, entity])
        conn.commit()

        result_list = []
        for result in cursor.stored_results():
            rows = result.fetchall()
            for row in rows:
                result_list.append(list(row))
                logging.info(row)

        if not result_list:
            return {
                'statusCode': 200,
                'body': json.dumps([])
            }

        result_list_serializable = [list(format_item(item) for item in tup) for tup in result_list]

        return {
            'statusCode': 200,
            'headers': {
                'Content-Type': 'application/json'
            },
            'body': json.dumps(result_list_serializable)
        }
</code></pre>
<p>Please suggest.</p>
|
<python><python-3.x>
|
2023-12-01 14:37:55
| 2
| 1,150
|
user3521180
|
77,585,972
| 8,683,461
|
Python nested lists search optimization
|
<p>I have a search-and-test problem:
in the list of prime numbers from 2 to 100k, we're searching for the first set of 5 with the following criteria:</p>
<ul>
<li>p1 < p2 < p3 < p4 < p5</li>
<li>any combination of 2 primes from the solution (3 and 7 => 37 and 73) must also be a prime</li>
<li>sum(p1..p5) is the
smallest possible sum of primes satisfying criteria, and is above
100k</li>
</ul>
<p>I can totally code such a thing, but I have a severe optimization problem: my code is super duper slow. I have a list of primes under 100k, a list of primes over 100k, and a primality test which works well, but I do not see how to optimize this to obtain a result in a reasonable time.</p>
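<p>For reference, the primality and pair-combination tests I am using look roughly like this (simplified trial division here; my real code looks numbers up in the precomputed prime lists instead):</p>

```python
def is_prime(n):
    # Simple trial division; fine for the concatenated numbers in question
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def concat_ok(p, q):
    # Both concatenations must be prime, e.g. 3 and 7 -> 37 and 73
    return is_prime(int(f"{p}{q}")) and is_prime(int(f"{q}{p}"))

print(concat_ok(3, 7))  # True
print(concat_ok(3, 5))  # False: 35 = 5 * 7
```

<p>The bottleneck is that a naive search runs this pair test inside five nested loops over ~9592 primes.</p>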
<p>For a basic idea :</p>
<ul>
<li>the list of all primes under 100k contains 9592 items</li>
<li>the list of all primes under 1 billion contains approximately 51 million lines</li>
<li>i have the list of all primes under 1 billion, by length</li>
</ul>
<p>Thanks for the help</p>
|
<python><algorithm><optimization><mathematical-optimization><primes>
|
2023-12-01 14:30:01
| 1
| 534
|
MarvinLeRouge
|
77,585,969
| 395,255
|
pandas read_excel returns columns in wrong order if number of columns is less than total columns in the sheet
|
<p>I have an xlsx file which contains 3 columns. This is how my xlsx file looks like:</p>
<pre><code>Items Object Information
Item1 Some Object Some Information
Item2 Some Object Some Information
Item3 Some Object Some Information
Item4 Some Object Some Information
</code></pre>
<p>When reading this using pandas.read_excel, I am getting wrong column orders based on what I pass in the <code>names</code> argument.</p>
<p>When I say</p>
<pre><code>df_sheet = pd.read_excel("myfile.xlsx",
                         sheet_name="mysheet",
                         engine="openpyxl",
                         header=None,
                         names=list("ABC"))
</code></pre>
<p>I get Items column as A, Object column as B, and Information column as C.</p>
<p>When I say</p>
<pre><code>df_sheet = pd.read_excel("myfile.xlsx",
                         sheet_name="mysheet",
                         engine="openpyxl",
                         header=None,
                         names=list("AB"))
</code></pre>
<p>I get Items column as B, and Object column as A.</p>
<p>I was under the impression that pandas preserves the column order and that, by passing the <code>names</code> argument to read_excel, I am asking it to name the first column A and the second column B. But for some reason, when I read only 2 columns out of 3, the column order is not right, while it works fine if I read all 3 columns.</p>
<p>What am I missing here?</p>
|
<python><pandas>
|
2023-12-01 14:29:21
| 1
| 12,380
|
Asdfg
|
77,585,904
| 10,986,032
|
how to get urls and nested page urls using concurrent.futures?
|
<p>I am trying to collect the anchor tags' <code>href</code> values and nested URLs within a webpage, and repeat the operation for each such URL.</p>
<p>I want to reduce the time needed to fetch the URLs using concurrency.</p>
<pre><code>import concurrent.futures
from urllib.parse import urlsplit

import requests
from bs4 import BeautifulSoup


def get_href_from_url(url):
    try:
        response = requests.get(url)
        response.raise_for_status()  # Check for errors in HTTP response

        parts = urlsplit(url)
        base = "{0.netloc}".format(parts)
        strip_base = base.replace("www.", "")
        base_url = "{0.scheme}://{0.netloc}".format(parts)
        path = url[:url.rfind('/') + 1] if '/' in parts.path else url

        soup = BeautifulSoup(response.text, 'html.parser')
        href_values = set()
        regex = r'.*-c?([0-9]+).html'
        for link in soup.find_all('a'):
            anchor = link.attrs["href"] if "href" in link.attrs else ''

            if anchor.startswith('/'):
                local_link = base_url + anchor
                href_values.add(local_link)
            elif strip_base in anchor:
                href_values.add(anchor)
            elif not anchor.startswith('http'):
                local_link = path + anchor
                href_values.add(local_link)

        return href_values
    except Exception as e:
        print(f"Error while processing {url}: {e}")
        return []


def follow_nested_urls(seed_url):
    visited_urls = set()
    urls_to_visit = [seed_url]

    while len(urls_to_visit):
        current_url = urls_to_visit.pop()

        if current_url in visited_urls:
            continue

        visited_urls.add(current_url)

        with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
            href_values = get_href_from_url(current_url)

            nested_urls = [url for url in href_values if url.startswith('http')]
            urls_to_visit.extend(nested_urls)

            # Process href values or do other tasks as needed
            # print(f"URL: {current_url}, HREF values: {href_values}")

        print(f"visited_urls " + current_url)
        print(len(visited_urls))
        # print(len(urls_to_visit))


if __name__ == "__main__":
    seed_url = "https://www.tradeindia.com/"  # Replace with your desired starting URL
    follow_nested_urls(seed_url)
</code></pre>
|
<python><io><concurrent.futures>
|
2023-12-01 14:17:35
| 1
| 872
|
Sam
|
77,585,792
| 8,236,076
|
What is the difference between a SubQuestionQueryEngine and a MultiStepQueryEngine?
|
<p>The title says it: What is the difference between a SubQuestionQueryEngine and a MultiStepQueryEngine?</p>
<p>Intuitively, these things seem very similar: Breaking a question down into multiple steps is very similar to breaking a question down into multiple subquestions. Conceptually, what is the difference between both? In what situation would you use one over the other? Would/could you combine them somehow or does that not make sense?</p>
|
<python><llama-index>
|
2023-12-01 13:59:43
| 0
| 1,144
|
Willem
|
77,585,606
| 18,091,040
|
How to demote an Endorser from Hyperledger Indy?
|
<p>I am currently using the example of Hyperledger Indy of <a href="https://github.com/hyperledger/indy-sdk/blob/main/docs/how-tos/write-did-and-query-verkey/README.md" rel="nofollow noreferrer">Write a DID and Query Its Verkey</a> to dive deep into DID creation.</p>
<p>After generating a DID and a verkey for a TRUST_ANCHOR, I sign and submit it to the ledger as:</p>
<pre><code> # 7.
    print_log('\n7. Building NYM request to add Trust Anchor to the ledger\n')
    nym_transaction_request = await ledger.build_nym_request(
        submitter_did=user.steward_did,
        target_did=actor_did,
        ver_key=actor_verkey,
        alias=None,
        role='TRUST_ANCHOR')
    print_log('NYM transaction request: ')
    pprint.pprint(json.loads(nym_transaction_request))

    # 8.
    print_log('\n8. Sending NYM request to the ledger\n')
    nym_transaction_response = await ledger.sign_and_submit_request(
        pool_handle=pool_handle,
        wallet_handle=wallet_handle,
        submitter_did=user.steward_did,
        request_json=nym_transaction_request)
</code></pre>
<p>I wanted to demote this TRUST_ANCHOR from the ledger, removing its rights and making it disappear, is it possible?</p>
<p>Reading the <a href="https://github.com/hyperledger/indy-node/blob/main/docs/source/auth_rules.md" rel="nofollow noreferrer">AUTH_RULES</a> it seems possible as there are many lines in this table about demoting roles. But I don't see how to implement it on Python.</p>
|
<python><hyperledger-indy>
|
2023-12-01 13:30:32
| 1
| 640
|
brenodacosta
|
77,585,597
| 6,335,363
|
How can I have an optional TypeVar in a Generic class in Python?
|
<p>I'm trying to write a simple type wrapper to represent the interface of decorator functions:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Protocol, TypeVar, Generic
TIn = TypeVar('TIn', contravariant=True)
TOut = TypeVar('TOut', covariant=True)
class Decorator(Protocol, Generic[TIn, TOut]):
    """
    Represents a decorated value, used to simplify type definitions
    """

    def __call__(self, value: TIn) -> TOut:
        ...
</code></pre>
<p>This would be used to type a decorator function as follows:</p>
<pre><code>IntFunction = Callable[[int, int], int]
def register_operator(op: str) -> Decorator[IntFunction, IntFunction]:
    def inner(value: IntFunction) -> IntFunction:
        # register the function or whatever
        return value
    return inner


@register_operator("+")
def add(a: int, b: int) -> int:
    return a + b
</code></pre>
<p>In the above example, Mypy is able to validate the type signature of <code>add</code> to ensure it matches the specification of <code>register_operator</code>.</p>
<p>This is useful for decorators that transform the type (eg converting it from an <code>IntFunction</code> to a <code>StrFunction</code>), but in almost all cases, <code>TIn</code> is identical to <code>TOut</code>, and so I want to simplify the usage of my definition.</p>
<p>Essentially, I want to make it so that if <code>TOut</code> isn't given, it will be assumed to be the same as <code>TIn</code>, which would allow the above decorator function to be simplified to</p>
<pre class="lang-py prettyprint-override"><code>def register_operator(op: str) -> Decorator[IntFunction]:
    # Simplification here ^
    def inner(value: IntFunction) -> IntFunction:
        # register the function or whatever
        return value
    return inner
</code></pre>
<p>The ideal syntax I would use in my protocol definition would be something like this:</p>
<pre class="lang-py prettyprint-override"><code>class Decorator(Protocol, Generic[TIn, TOut = TIn]):
    """
    Represents a decorated value, used to simplify type definitions
    """

    def __call__(self, value: TIn) -> TOut:
        ...
</code></pre>
<p>Note that this does not work.</p>
<p>How can I achieve this functionality, whilst continuing to have the assurance that Mypy provides? I am happy to make the definition of <code>Decorator</code> as complex as needed, but I want to keep its simple usage.</p>
|
<python><generics><python-typing><mypy>
|
2023-12-01 13:27:59
| 1
| 2,081
|
Maddy Guthridge
|
77,585,519
| 4,451,521
|
Best way to run several heavy processes in python
|
<p>I have an indeterminate number of csv files. I want to back them up, then process them and save the result in the same file. The process is kind of heavy and it involves manipulating the values in the rows and columns, performing some search, adding values, etc.
Since it is heavy I would like to do it concurrently as much as possible.</p>
<p>I have the following code</p>
<pre><code>import concurrent.futures
import pandas as pd
import os
def process_csv(file_path):
    # Modify the file name for backup
    backup_file_path = file_path.replace('.csv', '_BACK.csv')

    # Backup the file
    os.rename(file_path, backup_file_path)

    # Process the CSV file
    df = pd.read_csv(backup_file_path)
    my_heavy_process(df)

    # Save the processed DataFrame to a new file
    result_file_path = file_path.replace('.csv', '_result.csv')
    df.to_csv(result_file_path, index=False)


# List of CSV files to process. It is supposed that I am going
# to read it dynamically from a folder
csv_files = ['file1.csv', 'file2.csv', 'file3.csv', 'file4.csv', 'file5.csv']

# Create a ThreadPoolExecutor with max_workers set to the number of files for parallel processing
with concurrent.futures.ThreadPoolExecutor(max_workers=len(csv_files)) as executor:
    # Submit each CSV file for processing
    futures = {executor.submit(process_csv, file): file for file in csv_files}

    # Wait for all tasks to complete
    concurrent.futures.wait(futures)
</code></pre>
<p>As an alternative I am thinking</p>
<pre><code>with concurrent.futures.ProcessPoolExecutor(max_workers=len(csv_files)) as executor:
    # Submit each CSV file for processing
    futures = {executor.submit(process_csv, file): file for file in csv_files}

    # Wait for all tasks to complete
    concurrent.futures.wait(futures)
</code></pre>
<p>I am unsure which would be the best option. My process reads and writes files, so I don't know whether to call the task I/O bound, but I am guessing it <em>is</em> CPU bound
(I mean <code>my_heavy_process</code> is not I/O bound, but <code>process_csv</code> involves reading/writing files).</p>
<p>Another doubt I have is how should I set the <code>max_workers</code>. What happens if I have say 10 files or if I have 40 files? should it change?</p>
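<p>For the worker count, what I am currently considering is capping the pool at the CPU count rather than using one worker per file — but I am not sure this is the right heuristic:</p>

```python
import os

csv_files = ['file1.csv', 'file2.csv']  # placeholder; the real list comes from a folder

# For a CPU-bound job, more processes than cores should not help,
# so cap the pool size at the number of cores
max_workers = min(len(csv_files), os.cpu_count() or 1)
print(max_workers)
```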
|
<python><multiprocessing><python-multiprocessing><python-multithreading>
|
2023-12-01 13:15:03
| 0
| 10,576
|
KansaiRobot
|
77,585,352
| 12,775,432
|
Change batch size using a list of Pytorch's data loader
|
<p>During the training of my neural network model, I used a Pytorch's data loader to accelerate the training of the model. But instead of using a fixed batch size before updating the model's parameter, I have a list of different batch sizes that I want the data loader to use.</p>
<p>Example</p>
<pre><code>train_dataset = TensorDataset(x_train, y_train) # x_train.shape (8400, 4)
dataloader_train = DataLoader(train_dataset, batch_size=64) # with fixed batch size of 64
</code></pre>
<p>What I want is a data loader that can use a list of batch sizes that is dynamic (not fixed):</p>
<pre><code>list_batch_size = [30, 60, 110, ..., 231] # with this list's sum being equal to x_train.shape[0] (8400)
</code></pre>
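<p>In other words, I think I need something like a custom <code>batch_sampler</code> that yields index chunks whose sizes follow my list — this plain-Python sketch shows the grouping of sample indices I mean (not actual PyTorch code):</p>

```python
list_batch_size = [2, 3, 1]                   # toy version of my list; sums to the dataset size
indices = list(range(sum(list_batch_size)))   # 6 sample indices

batches, start = [], 0
for size in list_batch_size:
    batches.append(indices[start:start + size])
    start += size

print(batches)  # [[0, 1], [2, 3, 4], [5]]
```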
|
<python><pytorch><pytorch-dataloader><batchsize>
|
2023-12-01 12:42:22
| 1
| 640
|
pyaj
|
77,585,320
| 6,113,142
|
Point `apk` to install packages to correct `site-packages`
|
<p>Certain packages (<code>py3-pandas</code>, <code>py3-scipy</code>) installed through <code>apk</code> on <code>python:3.12-alpine</code> are not accessible, since they are installed in <code>/usr/lib/python3.11/site-packages</code> in spite of specifying <code>$PYTHONPATH=/usr/local/lib/python3.12/site-packages</code>.</p>
<p>For reference, here's the relevant portion of the <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.12-alpine
ENV PYTHONUNBUFFERED=1
ENV PYTHONPATH=/usr/local/lib/python3.12/site-packages
RUN apk add --no-cache \
gcc g++ libffi-dev musl-dev \
py3-pandas py3-scipy \
&& pip3 install pip-tools
WORKDIR /app
COPY requirements.in .
RUN pip-compile requirements.in > requirements.txt \
&& pip3 install -r requirements.txt
ENTRYPOINT ["sh"]
</code></pre>
|
<python><dockerfile><alpine-linux><python-3.12>
|
2023-12-01 12:35:38
| 1
| 350
|
Pratik K.
|
77,585,278
| 4,124,887
|
PyVISA error reading from TCP/IP unless exact number of incoming bytes is specified
|
<p>I have some Python code for reading from a device, which returns a read error unless I specify exactly the number of bytes returned or fewer. The following code successfully returns <code>SA-14212</code>, which is exactly 8 characters and is the correct response. But we will not always know the correct length of the response in advance and replacing <code>data = session.read_bytes(8)</code> with <code>data = session.read_bytes(9)</code> or <code>data = session.read_bytes(1024)</code> or <code>data = session.read()</code> causes an error.</p>
<pre><code>import pyvisa as visa
HOST = '192.168.0.15'
PORT = 7088
message = '*IDN?'
visa.log_to_screen()
# Open a TCPIP connection using VISA
resourceManager = visa.ResourceManager()
dev = 'TCPIP0::' + HOST + '::' + str(PORT) + '::SOCKET'
session = resourceManager.open_resource(dev)
session.write(message)
data = session.read_bytes(8)
print(data.decode())
session.close()
</code></pre>
<p>Here are the error and proximate debugging statements for <code>data = session.read_bytes(9)</code>:</p>
<pre><code>2023-12-02 01:17:01,872 - pyvisa - DEBUG - viWrite(1, b'*IDN?\r\n', 7, 'c_ulong(7)') -> 0
2023-12-02 01:17:01,872 - pyvisa - DEBUG - TCPIP0::192.168.0.15::7088::SOCKET - reading 9 bytes (last status None)
2023-12-02 01:17:01,887 - pyvisa - DEBUG - viRead(1, <ctypes.c_char_Array_9 object at 0x0000029043084CC0>, 9, 'c_ulong(8)') -> -1073807298
2023-12-02 01:17:01,887 - pyvisa - DEBUG - TCPIP0::192.168.0.15::7088::SOCKET - exception while reading: VI_ERROR_IO (-1073807298): Could not perform operation because of I/O error.
Buffer content: bytearray(b'')
</code></pre>
<p>The device is an obscure Fibre-Bragg-Grating demodulation device with very little documentation. To my knowledge it uses no termination character for input or output
(changing it from <code>''</code> to <code>'\0'</code> or <code>'\n'</code> doesn't alter things). I have looked up the error code here:
<a href="https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P6FmSAK" rel="nofollow noreferrer">https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P6FmSAK</a>
but it sheds no light.</p>
<p>We have separate working code using the Python socket library and it ostensibly does exactly the same as here: opens a TCPIP connection, writes a byte string, reads a response (into a buffer with 1024 characters). Unfortunately the goal is a PyVisa implementation, not a raw sockets implementation.</p>
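<p>If the device really sends no termination character, one hedged workaround (a sketch, not verified against this device) is to read one byte at a time until the read times out, so the total response length never needs to be known in advance. The <code>read_until_timeout</code> helper below is hypothetical; it takes any zero-argument reader callable so the accumulation logic can be exercised without hardware — with PyVISA you would pass <code>lambda: session.read_bytes(1)</code> and catch <code>pyvisa.errors.VisaIOError</code>. Per-byte reads are slow, but they sidestep the unknown-length problem.</p>

```python
def read_until_timeout(read_one, errors=(Exception,)):
    """Accumulate bytes from read_one() until it raises one of `errors`
    (e.g. a VISA timeout / I/O error), then return what was collected."""
    buf = bytearray()
    while True:
        try:
            buf.extend(read_one())
        except errors:
            return bytes(buf)

# With PyVISA this might be used as (untested assumption):
#   session.timeout = 1000  # ms; short timeout ends the loop quickly
#   data = read_until_timeout(lambda: session.read_bytes(1),
#                             errors=(pyvisa.errors.VisaIOError,))
```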
|
<python><tcp><labview><pyvisa>
|
2023-12-01 12:29:00
| 0
| 1,675
|
Tony
|
77,584,918
| 521,347
|
How can I access the output of a previous PTransform in the next step
|
<p>I have to create an Apache Beam pipeline in Python which has to perform the following steps:</p>
<ol>
<li>Read an entry from a database table matching a particular criteria</li>
<li>Call 1st REST API for this record</li>
<li>In the response of above API call, will get an array. Call two API for each element in this array</li>
<li>Update the data fetched by all 3 API calls in DB</li>
</ol>
<p>One thing I could not find on the web was how I can pass the output of any step (PTransform) to the next one. Can someone please point me in the right direction?</p>
|
<python><apache-spark><google-cloud-dataflow><apache-beam>
|
2023-12-01 11:23:28
| 1
| 1,780
|
Sumit Desai
|
77,584,854
| 2,352,242
|
JPype and robotframework custom keywords
|
<p>I'm trying to use JPype with Robotframework to be able to use java-based SUTs.</p>
<p>I used this forum post (<a href="https://forum.robotframework.org/t/connecting-java-with-robot-framework/5253" rel="nofollow noreferrer">https://forum.robotframework.org/t/connecting-java-with-robot-framework/5253</a>) to implement it (modified a bit to work with non-static methods) and it works well. However, I'm unable to create other functions in this JavaWrapper.py to use as keywords in my .robot file. The reason I want to do this is to be able to use different arguments for the constructor of the Java object for each test case.</p>
<p>My python code looks something like this:</p>
<pre><code>def foo(self, name: str):
do_something()
</code></pre>
<p>And in the robot file I'm using:</p>
<pre><code>Foo name
</code></pre>
<p>The error I'm getting is just <code>No keyword with name 'Foo' found.</code>. At the same time the keywords from the jar file are detected normally and I can use them without errors or warnings.</p>
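<p>For reference, Robot Framework only discovers keywords from public methods of the library object it actually imported — a bare function defined alongside the library class is not picked up. A minimal sketch (names are hypothetical) of a class-based Python library whose <code>foo</code> method becomes the <code>Foo</code> keyword:</p>

```python
class JavaWrapper:
    """Robot Framework turns each public method of the imported library
    class into a keyword, so ``foo`` is callable as ``Foo``."""

    ROBOT_LIBRARY_SCOPE = "TEST"  # fresh instance per test case

    def __init__(self, ctor_arg="default"):
        # Per-test constructor arguments can be passed on the Library
        # import line in the .robot file.
        self.ctor_arg = ctor_arg

    def foo(self, name):
        # stands in for the real JPype call to the Java object
        return f"{self.ctor_arg}:{name}"
```

<p>In the .robot file this would be imported as <code>Library    JavaWrapper.py    my_arg</code> and used as <code>Foo    name</code> (hedged — the exact import path depends on the project layout).</p>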
|
<python><java><robotframework><jpype>
|
2023-12-01 11:12:59
| 0
| 449
|
szoszk
|
77,584,671
| 7,290,715
|
Inserting hyphen in between strings as per the given format
|
<p>I have a string like <code>s_t = "ABCD1234"</code>.</p>
<p>Now I have a format <code>XXX-X-00-00</code>.</p>
<p>And I want to convert the string to the above format,
i.e. <code>ABC-D-12-34</code>.
Here <code>00</code> indicates digits and <code>X</code> indicates letters.</p>
<p>I am following the below approach:</p>
<pre><code>'-'.join([s_t[0:3],s_t[3],s_t[4:6],s_t[6:8]])
</code></pre>
<p>And this is giving the correct result. However, I want this to be dynamic.</p>
<p>I am trying this:</p>
<pre><code>l=[]
for i in s.split('-'):
l_i = len(i)
l_split = l.append(l_i)
</code></pre>
<p>And then</p>
<pre><code>s_f = '-'.join([s_t[0:l[0]],s_t[l[0]],s_t[l[0]+1:l[0]+1+l[2]]])
</code></pre>
<p>But again this is not a dynamic approach. How can I use <code>enumerate(s_t)</code> effectively?</p>
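<p>One way to make this fully dynamic (a sketch) is to walk the format string and slice the input by the length of each dash-separated part, so no hard-coded indices are needed:</p>

```python
def apply_format(s, fmt):
    """Insert hyphens into s according to fmt, e.g. 'XXX-X-00-00'."""
    parts, i = [], 0
    for chunk in fmt.split('-'):
        parts.append(s[i:i + len(chunk)])  # take as many chars as the chunk is long
        i += len(chunk)
    return '-'.join(parts)

print(apply_format("ABCD1234", "XXX-X-00-00"))  # ABC-D-12-34
```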
|
<python>
|
2023-12-01 10:41:25
| 1
| 1,259
|
pythondumb
|
77,584,556
| 7,662,164
|
Selecting permutationally-unique elements of a symmetric tensor
|
<p>I have a code which creates a numpy array <code>A</code> of shape <code>(m, m, ..., m)</code> where there are <code>n</code> copies of <code>m</code>. By construction, this array is a symmetric tensor (in the mathematical sense), meaning that <code>A[i, j, ..., k] == A[i', j', ..., k']</code>, where <code>(i', j', ..., k')</code> is a permutation of <code>(i, j, ..., k)</code>.</p>
<p>We may define the permutationally-unique elements of <code>A</code> as the set of entries in <code>A</code> whose corresponding indices are not equivalent to one another by permutation. For example, in a matrix with shape <code>(3, 3)</code>, the permutationally-unique indices are <code>(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)</code>.</p>
<p>For a general symmetric tensor <code>A</code>, how can I extract all of its permutationally-unique elements, as well as their corresponding indices? (<code>np.unique</code> doesn't work, in the case where two elements <code>A[i1, j1, ..., k1]</code> and <code>A[i2, j2, ..., k2]</code> are equal by coincidence, but <code>(i1, j1, ..., k1)</code> is not a permutation of <code>(i2, j2, ..., k2)</code>.)</p>
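<p>Since every permutation of an index tuple addresses the same entry, the permutationally-unique indices are exactly the non-decreasing tuples, which <code>itertools.combinations_with_replacement</code> enumerates directly — a sketch:</p>

```python
import itertools
import numpy as np

def unique_entries(A):
    """Return {index_tuple: value} for the permutationally-unique
    entries of a symmetric tensor A of shape (m,) * n."""
    m, n = A.shape[0], A.ndim
    # non-decreasing index tuples = one representative per permutation class
    return {idx: A[idx]
            for idx in itertools.combinations_with_replacement(range(m), n)}

A = np.array([[1, 2, 3],
              [2, 4, 5],
              [3, 5, 6]])
print(unique_entries(A))
```

<p>This sidesteps the coincidental-equality problem of <code>np.unique</code> entirely, because indices are enumerated rather than values deduplicated; it yields C(m+n-1, n) entries.</p>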
|
<python><arrays><numpy><indexing><combinatorics>
|
2023-12-01 10:23:37
| 2
| 335
|
Jingyang Wang
|
77,584,384
| 6,306,190
|
Axios POST causes CORS error despite enabling CORS in Flask
|
<p>I have spent many days on this problem and still couldn't figure out how to solve this. I'm having CORS issues and I've tried enabling CORS in Flask but it didn't work.</p>
<p>Backend code (hosted on port 5000):</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, request, jsonify
from flask_cors import CORS
app = Flask(__name__)
CORS(app)
@app.route("/process", methods=['POST'])
def process():
print('process')
# process here
return jsonify({'response': 'OK'})
</code></pre>
<p>Frontend code (hosted on port 3001; I use React, and only the function that sends the request is shown here):</p>
<pre class="lang-js prettyprint-override"><code>import axios from 'axios';
import FormData from 'form-data';
const sendFile = (binary: ArrayBuffer) => {
console.log('sending file');
const formData = new FormData();
formData.append('image', new Blob([binary]), file.name);
const options = {
method: 'POST',
url: 'http://localhost:5000/process',
// url: 'https://postman-echo.com/post',
data: formData,
};
axios(options)
.then(response => {
console.log(response);
})
.catch(error => {
console.error(error);
});
};
</code></pre>
<p>Error as shown in Chrome console:</p>
<blockquote>
<p>Access to XMLHttpRequest at 'http://localhost:5000/process' from origin 'http://localhost:3001' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>POST http://localhost:5000/process net::ERR_FAILED 403 (Forbidden)</p>
</blockquote>
<hr />
<p>I also tested with VSCode REST Client using this request:</p>
<pre><code>POST http://localhost:5000/process
Content-Type: multipart/form-data; boundary=your-unique-boundary
--your-unique-boundary
Content-Disposition: form-data; name="image"; filename="IMG_6651.PNG"
Content-Type: image/png
< /Users/johan/Downloads/IMG_6651.PNG
--your-unique-boundary--
</code></pre>
<p>It worked using the REST client, so the backend is running properly. But I guess the REST client doesn't follow the same-origin policy (SOP). So I tried to see if it does return an <code>Access-Control-Allow-Origin</code> header, and it does:</p>
<pre class="lang-bash prettyprint-override"><code>% curl -I -X POST http://localhost:5000/process
HTTP/1.1 200 OK
Server: Werkzeug/2.2.2 Python/3.10.8
Date: Fri, 01 Dec 2023 09:37:22 GMT
Content-Type: application/json
Content-Length: 23
Access-Control-Allow-Origin: *
Connection: close
</code></pre>
<hr />
<p>Questions:</p>
<ol>
<li>I thought I already allow CORS using flask_cors but it didn't work. How do I solve this?</li>
<li>Am I sending the same format of requests in my frontend and in VSCode REST client? If not, what's the equivalent frontend code of the request in REST client?</li>
<li>From the curl response, it seems <code>Access-Control-Allow-Origin</code> header is indeed included, then why do I still get CORS issues?</li>
<li>Is the 403 error a result of the CORS issue? Do I have two problems or one problem here?</li>
<li>I'm sending a simple request, which means there's no preflight request, correct?</li>
</ol>
|
<python><typescript><flask><cors><vscode-restclient>
|
2023-12-01 09:57:33
| 1
| 1,830
|
thyu
|
77,584,351
| 2,301,782
|
Unable to call method from Parent Class in Google colab
|
<p>I am trying to run music recommendation code by following this <a href="https://github.com/JingPoo/WSDM-kkbox_music_recommendation-with-KGE/tree/master" rel="nofollow noreferrer">https://github.com/JingPoo/WSDM-kkbox_music_recommendation-with-KGE/tree/master</a>.</p>
<p>I have successfully run data_preprocess.py and the train_val_test_split Python file, and now I am running model_training.</p>
<p>The <a href="https://github.com/melissakou/knowledge-graph-embedding/blob/main/KGE/models/translating_based/TransE.py" rel="nofollow noreferrer">TransE</a> file has parent <a href="https://github.com/melissakou/knowledge-graph-embedding/blob/main/KGE/models/base_model/TranslatingModel.py" rel="nofollow noreferrer">TranslatingModel</a> and that class has parent <a href="https://github.com/melissakou/knowledge-graph-embedding/blob/main/KGE/models/base_model/BaseModel.py" rel="nofollow noreferrer">KGEModel</a>. For some reason, in Google Colab it is unable to reference the train_loss_history field declared in the KGEModel class.
I am getting the error below while running this Python file.</p>
<p>I am creating an object of TransE and referencing the train_loss_history attribute of the KGEModel class below:</p>
<pre><code>
# read data before model training
train = pd.read_csv('./data/KKBOX/train_index_data.csv').values
valid = pd.read_csv('./data/KKBOX/valid_index_data.csv').values
test = pd.read_csv('./data/KKBOX/test_index_data.csv').values
with open('./data/KKBOX/metadata.json') as f:
metadata = json.load(f)
# initialized TransE model object
model = TransE(
embedding_params={"embedding_size": emb_size},
negative_ratio= neg_ratio,
corrupt_side="h+t"
)
model.train(train_X=train, val_X=valid, metadata=metadata, epochs=epoch, batch_size=10000,
early_stopping_rounds=10, restore_best_weight=False,
optimizer=tf.optimizers.Adam(learning_rate=0.001),
seed=12345, log_path=log_path, log_projector=False)
</code></pre>
<p>Error</p>
<pre><code>epoch: 1, train loss: 0.764412, valid loss: 0.952546: 100%|██████████| 2/2 [04:34<00:00, 137.30s/it]
INFO:root:[2023-12-01 09:43:42.858684] Finished training!
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-c2b73ad15cdc> in <cell line: 190>()
236
237 # MLflow log artifact 1.training and validation loss curve
--> 238 train_loss = model.train_loss_history
239 val_loss = model.val_loss_history
240 epochs = range(1,len(train_loss)+1)
AttributeError: 'TransE' object has no attribute 'train_loss_history',
</code></pre>
<p>When I opened the <a href="https://github.com/melissakou/knowledge-graph-embedding/blob/main/KGE/models/translating_based/TransE.py" rel="nofollow noreferrer">TransE file</a> I see it does not define train_loss_history itself. However, I do see train_loss_history in the BaseModel.py file <a href="https://github.com/melissakou/knowledge-graph-embedding/blob/main/KGE/models/base_model/BaseModel.py#L126" rel="nofollow noreferrer">https://github.com/melissakou/knowledge-graph-embedding/blob/main/KGE/models/base_model/BaseModel.py#L126</a> at line 126. I am new to Python and machine learning and need some help debugging and unblocking myself.</p>
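<p>One hedged way to debug this (a sketch, not specific to the KGE library) is to inspect the trained object before assuming the attribute exists — if the pip-installed package version differs from the source on GitHub, the attribute may simply never be set. The helper name here is made up:</p>

```python
def get_history(model, name="train_loss_history"):
    """List loss-related attributes the object actually has, then fall
    back gracefully instead of raising AttributeError."""
    candidates = [a for a in dir(model) if "loss" in a.lower()]
    print("loss-related attributes:", candidates)
    return getattr(model, name, None)  # None if the attr was never set
```

<p>If <code>get_history(model)</code> returns <code>None</code> and the printed list is empty, the installed <code>KGE</code> package likely predates the line-126 change, and upgrading (or installing from the GitHub source) would be the next thing to try.</p>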
<p>Complete Code : below and <a href="https://github.com/JingPoo/WSDM-kkbox_music_recommendation-with-KGE/blob/master/model_training.py" rel="nofollow noreferrer">link here</a></p>
<pre><code>#!/usr/bin/env python
# coding: utf-8
import tqdm
import mlflow
import json
import statistics
import numpy as np
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
from KGE.models.translating_based.TransE import TransE
def batch(iterable, n = 1):
# generate batches of batch_size:n
current_batch = []
for item in iterable:
current_batch.append(item)
if len(current_batch) == n:
yield current_batch
current_batch = []
if current_batch:
yield current_batch
def recommend(user_list):
'''
A function to recommend 25 musics for each user in the input user list
Parameter
---------
user_list: list of user id
Return
------
dict: top 25 recommend songs for list of users
'''
# - input: list of user id
# - output: list of recommend item (25 recommend songs for each user)
# - logic:
# 1. user id → user embedding
# 2. a = user embedding + has_insterest embedding
# 3. compare distance with all item embeddings, output the nearest 25 items
test_users_rec_music = {}
for users in tqdm.tqdm(batch(user_list,100), total=len(user_list)//100+1):
# users embedding (batch_users * embedding_size)
users_index = [metadata['ent2ind'].get(user) for user in users]
users_emb = tf.nn.embedding_lookup(model.model_weights['ent_emb'], users_index)
# has_interest embedding (1 * embedding_size )
has_interest_index = metadata['rel2ind']['has_interest']
has_interest_emb = model.model_weights['rel_emb'][has_interest_index]
# compute recommend songs (batch_users * embedding_size)
compute_songs_emb = users_emb + has_interest_emb
with open('./data/KKBOX/entity_groupby_type.json') as f:
entity_groupby_type = json.load(f)
# songs embedding (total_songs * embedding_size)
song_id = [metadata['ent2ind'].get(ent) for ent in entity_groupby_type['song']]
songs_emb = tf.nn.embedding_lookup(model.model_weights['ent_emb'], song_id)
        # compute, via matrix operations, the distance between every compute_songs_emb (list) and every songs_emb (list) (batch_users * total_songs)
distances = []
# for each user
for i in range(compute_songs_emb.shape[0]):
# calculate his rec_music embedding distance to all songs embeddings
distances.append(tf.norm(tf.subtract(songs_emb, compute_songs_emb[i]), ord=2, axis=1))
        # each user's top-25 most embedding-similar song indices (batch_users * 25)
top_25_songs_index = tf.argsort(distances)[:,:25].numpy().tolist()
# song index to song id (batch_users * 25)
song_ent = tf.convert_to_tensor(np.array(entity_groupby_type['song']))
top_25_songs = tf.nn.embedding_lookup(song_ent, top_25_songs_index)
# zip users and their rec_25_songs into a dict
users_top25_songs = dict(zip(users,top_25_songs))
test_users_rec_music.update(users_top25_songs)
return test_users_rec_music
# NDCG
def DCG(rec_list, ans_list):
dcg = 0
for i in range(len(rec_list)):
r_i = 0
if rec_list[i] in ans_list:
r_i = 1
dcg += (2**r_i - 1) / np.log2((i + 1) + 1)
return dcg
def IDCG(rec_list, ans_list):
A_temp_1 = []
A_temp_0 = []
for rec_music in rec_list:
if rec_music in ans_list:
A_temp_1.append(rec_music)
else:
A_temp_0.append(rec_music)
A_temp_1.extend(A_temp_0)
idcg = DCG(A_temp_1, ans_list)
return idcg
def NDCG(rec_list, ans_list):
dcg = DCG(rec_list, ans_list)
idcg = IDCG(rec_list, ans_list)
if dcg == 0 or idcg ==0:
ndcg = 0
else:
ndcg = dcg / idcg
return ndcg
def intersection(list1, list2):
# check if two lists have intersect
return list(set(list1) & set(list2))
def evaluate(test_users_rec_music):
'''
Evaluate the recommend result
Parameters
----------
test_users_rec_music(dict): top 25 recommended songs for each user
Returns
-------
metric_result(dict): metric include hit, recall, precision and NDCG
'''
Popular_rec_list = []
TP_list = [] # each user's True Positive number
ans_lengths = [] # each user's has_interest music number
ndcg_list = []
for user in test_users_rec_music.keys():
ans_music_list = user_and_hasInterestItem[user]
ans_lengths.append(len(ans_music_list))
rec_music_list = [x.decode() for x in test_users_rec_music[user].numpy().tolist()]
TP_list.append(len(intersection(rec_music_list, ans_music_list)))
ndcg_list.append(NDCG(rec_music_list, ans_music_list))
Popular_rec_list.append(len(intersection(rec_music_list, top25songs)))
hit_list = [1 if TP >= 1 else 0 for TP in TP_list]
precision_list = [TP/25 for TP in TP_list]
recall_list = [TP_list[i]/ans_lengths[i] for i in range(len(TP_list))]
Popular_rec_list = [hit_count/25 for hit_count in Popular_rec_list]
metric_result = {
'hit': statistics.mean(hit_list),
'recall': statistics.mean(recall_list),
'precision': statistics.mean(precision_list),
'ndcg': statistics.mean(ndcg_list),
'Popular_rec_rate': statistics.mean(Popular_rec_list)
}
return metric_result
def generateTestData(df):
'''
Parameter
---------
df: dataframe
Return
------
users: test users list
user_and_hasInterestItem: dict{key=user: value=interest item list}
'''
users = df['h'].unique().tolist()
user_and_hasInterestItem = df.groupby('h')['t'].apply(list).to_dict()
return users, user_and_hasInterestItem
mlflow.set_experiment('KKBOX-MusicRecommend')
emb_size = 10
neg_ratio = 10
epoch = 2
run_name = 'TEST-emb' + str(emb_size) + 'neg' + str(neg_ratio) + 'epoch' + str(epoch)
log_path = './tensorboard_logs/TEST-emb' + str(emb_size) + 'neg' + str(neg_ratio) + 'epoch' + str(epoch)
with mlflow.start_run(run_name = run_name):
# read data before model training
train = pd.read_csv('./data/KKBOX/train_index_data.csv').values
valid = pd.read_csv('./data/KKBOX/valid_index_data.csv').values
test = pd.read_csv('./data/KKBOX/test_index_data.csv').values
with open('./data/KKBOX/metadata.json') as f:
metadata = json.load(f)
# initialized TransE model object
model = TransE(
embedding_params={"embedding_size": emb_size},
negative_ratio= neg_ratio,
corrupt_side="h+t"
)
# MLflow log parameters
mlflow.log_param('embedding_size',emb_size)
mlflow.log_param('negative_ratio',neg_ratio)
# Convert NumPy arrays to pandas DataFrames for easier manipulation
train_df = pd.DataFrame(train)
valid_df = pd.DataFrame(valid)
# Define a placeholder value for invalid data
invalid_placeholder = -2147483648
# Remove rows with NaN values or the invalid placeholder in either DataFrame
train_df = train_df.replace(invalid_placeholder, np.nan).dropna()
valid_df = valid_df.replace(invalid_placeholder, np.nan).dropna()
# Convert back to NumPy arrays
train = train_df.astype(np.int32).values
valid = valid_df.astype(np.int32).values
# train the model
model.train(train_X=train, val_X=valid, metadata=metadata, epochs=epoch, batch_size=10000,
early_stopping_rounds=10, restore_best_weight=False,
optimizer=tf.optimizers.Adam(learning_rate=0.001),
seed=12345, log_path=log_path, log_projector=False)
# MLflow log artifact 1.training and validation loss curve
train_loss = model.train_loss_history
val_loss = model.val_loss_history
epochs = range(1,len(train_loss)+1)
plt.plot(epochs, train_loss, 'g', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='validation loss')
plt.title('Training and Validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
fig_fn = 'train_val_loss_{}_{}.png'.format('emb' + str(emb_size),'neg' + str(neg_ratio))
plt.savefig(fig_fn)
mlflow.log_artifact(fig_fn) # logging to mlflow
plt.close()
# MLflow log artifact 2. model weights
model_weights = {'ent_emb': model.model_weights['ent_emb'].numpy().tolist(),
'rel_emb': model.model_weights['rel_emb'].numpy().tolist()}
with open("./data/KKBOX/model_weights.json", 'w') as f:
json.dump(model_weights, f)
mlflow.log_artifact('./data/KKBOX/model_weights.json')
# For Parameter Tuning -> use VALIDATION data
# generate test data
test_df = pd.read_csv('./data/KKBOX/test_data.csv')
test_users, user_and_hasInterestItem = generateTestData(test_df)
top25songs = test_df['t'].value_counts().head(25).index.tolist()
# recommend and evaluate on TEST data
test_users_rec_music = recommend(test_users)
test_evaluate_result = evaluate(test_users_rec_music)
# write in tensorboard log
summary_writer = tf.summary.create_file_writer(log_path)
with summary_writer.as_default():
tf.summary.scalar('test-hit', test_evaluate_result['hit'], step=0)
tf.summary.scalar('test-recall', test_evaluate_result['recall'], step=0)
tf.summary.scalar('test-precision', test_evaluate_result['precision'], step=0)
tf.summary.scalar('test-ndcg', test_evaluate_result['ndcg'], step=0)
tf.summary.scalar('test-Popular_rec_rate',test_evaluate_result['Popular_rec_rate'], step=0)
# MLflow log metrics
mlflow.log_metric('test-hit',test_evaluate_result['hit'])
mlflow.log_metric('test-recall',test_evaluate_result['recall'])
mlflow.log_metric('test-precision',test_evaluate_result['precision'])
mlflow.log_metric('test-ndcg',test_evaluate_result['ndcg'])
mlflow.log_metric('test-Popular_rec_rate',test_evaluate_result['Popular_rec_rate'])
</code></pre>
|
<python><python-3.x><tensorflow><google-colaboratory>
|
2023-12-01 09:52:20
| 0
| 1,530
|
TheGraduateGuy
|
77,584,201
| 3,521,180
|
Why can't the json module serialize a date object in Python?
|
<p>I have an error coming up as below:</p>
<pre><code>{
"statusCode": 500,
"body": "\"An error occurred: Object of type date is not JSON serializable\""
},
</code></pre>
<p>and below is the code implementation to handle the error.</p>
<pre><code>def format_item(item):
if item is None:
return 0.0 if isinstance(item, (Decimal, float)) else ""
elif isinstance(item, (Decimal, float)):
return float(item)
elif isinstance(item, (datetime, date)):
if isinstance(item, date):
return item.isoformat() # Handle yyyy-mm-dd format
elif isinstance(item, datetime):
if item.hour == 0 and item.minute == 0 and item.second == 0:
return item.strftime('%Y-%m-%d') # Handle yyyy-mm-dd format
else:
return item.strftime('%Y-%m-%d %H:%M:%S') # Handle yyyy-mm-dd hh:mm:ss format
else:
return "" if item is None else item
def stored_procedure_call(sp_name, id, entity):
logging.info(f"Fetching DB connection details.")
try:
# Load env file
load_dotenv()
# Create the connection object
conn = mysql.connector.connect(
user=os.getenv('USER_NAME'),
password=get_db_password(os.getenv('RDS_HOST')),
host=os.getenv('RDS_HOST'),
database=os.getenv('DB_NAME'),
port=os.getenv('PORT'))
# Create a cursor
cursor = conn.cursor()
except Exception as error:
logging.error("An unexpected error occurred: {}".format(error))
try:
# Call the stored procedure with the provided ID
        cursor.callproc(sp_name, [id, entity])
conn.commit()
result_list = []
for result in cursor.stored_results():
rows = result.fetchall()
for row in rows:
result_list.append(list(row))
logging.info(row)
print("[RESULT LIST] :", result_list)
if not result_list:
return {
'statusCode': 200,
'body': json.dumps([])
}
else:
result_list_serializable = [list(format_item(item) for item in tup) for tup in result_list]
print('[RESULT SERIALIZER] :', result_list_serializable)
return {
'statusCode': 200,
'headers': {
'Content-Type': 'application/json'
},
'body': json.dumps(result_list)
}
except Exception as error:
logging.error("An unexpected error occurred: {}".format(error))
return {
'statusCode': 500, # Internal Server Error
'body': json.dumps('An error occurred: {}'.format(str(error)))
}
finally:
close_connection(cursor, conn)
</code></pre>
<p>What could the possible issue be?</p>
<p>The error seems to be quite common, and I have tried to implement solutions from other blogs and articles, but they don't seem to work. Please suggest a fix.</p>
<p>sample data:</p>
<pre><code>[['4565b', 'Agi Ltd', 'Insight Dire', '2100', datetime.date(2023, 9, 21), datetime.date(2023, 10, 21), 'ABC', 'Dell E5 ', '4565b64f53e', datetime.datetime(2023, 11, 30, 16, 17, 14)]]
</code></pre>
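<p>Two things stand out in the snippet above: <code>json.dumps(result_list)</code> serializes the original list, not the <code>result_list_serializable</code> that was just built; and as an alternative to pre-converting every item, <code>json.dumps</code> accepts a <code>default=</code> hook that is called for any object it cannot handle natively. A sketch of the hook approach (sample values taken from the question's data):</p>

```python
import json
from datetime import date, datetime
from decimal import Decimal

def json_default(obj):
    """Fallback converter handed to json.dumps via default=."""
    if isinstance(obj, (datetime, date)):
        return obj.isoformat()          # e.g. '2023-09-21'
    if isinstance(obj, Decimal):
        return float(obj)
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

row = ['4565b', Decimal('21.5'), date(2023, 9, 21)]
print(json.dumps(row, default=json_default))  # ["4565b", 21.5, "2023-09-21"]
```

<p>With this hook, the rows can be dumped directly and <code>format_item</code> becomes unnecessary for serialization purposes.</p>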
|
<python><python-3.x>
|
2023-12-01 09:31:31
| 2
| 1,150
|
user3521180
|
77,584,165
| 14,243,731
|
Sphinx autosummary only adding __init__ methods for classes
|
<p>I am trying to use Sphinx autosummary to document all of the classes within the directory <code>mypackage/src/</code>. Sphinx is only generating proper documentation for the class constructors but none of the (public) methods inside classes. What am I doing wrong?</p>
<h2>Project structure:</h2>
<pre><code> - docs/
- build/
- source/
- _static/
- generated/
- mypackage.src.file_3.ClassInsideFile3.rst
- api.rst
- conf.py
- index.rst
- usage.rst
- make.bat
- Makefile
- mypackage/
- __init__.py
- file_1.py
- file_2.py
- src/
- __init__.py
- file_3.py
- file_4.py
- tests/
- .gitignore
- poetry.lock
    - pyproject.toml
- README.md
</code></pre>
<h2><code>docs/source/conf.py</code>:</h2>
<pre class="lang-py prettyprint-override"><code># Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
import sys
import os
sys.path.insert(0, os.path.abspath('../..'))
project = 'mypackage'
copyright = '2023 My Name'
author = 'My Name'
release = '0.1.0'
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.autosummary']
templates_path = ['_templates']
exclude_patterns = []
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = 'alabaster'
html_static_path = ['_static']
</code></pre>
<h2><code>docs/source/api.rst</code>:</h2>
<pre><code>API
===
src
---
.. autosummary::
:toctree: generated
:recursive:
mypackage.src.file_3.class_inside_file_3
</code></pre>
<h2><code>mypackage/src/file_3.py</code>:</h2>
<pre class="lang-py prettyprint-override"><code>class ClassInsideFile3:
"Docstring for the class"
def __init__(self, names: list):
"""
Initializes a ClassInsideFile3 object based on names argument
:param names: A list of names
:type names: list
"""
self.names = names
def add_name(self, name: str) -> None:
"""
Appends a new name to the list of names if it does not already exist
:param name: The name to be appended
:type name: str
"""
if name not in self.names:
self.names.append(name)
</code></pre>
<h2>The result I get when I run <code>sphinx-build docs/source docs/build</code>:</h2>
<p><a href="https://i.sstatic.net/PZIhf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PZIhf.png" alt="" /></a></p>
<h2>The generated file <code>docs/source/generated/mypackage.src.file_3.ClassInsideFile3.rst</code>:</h2>
<pre><code>mypackage.src.file_3.ClassInsideFile3
=====================================
.. currentmodule:: mypackage.src.file_3
.. autoclass:: ClassInsideFile3
.. automethod:: __init__
# Why isn't it creating an auto method for `add_name`?
.. rubric:: Methods
.. autosummary::
~ClassInsideFile3.__init__
~ClassInsideFile3.add_name
</code></pre>
<p>Why isn't Sphinx creating an auto method for the <code>add_name</code> method?</p>
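<p>The generated stub stops at <code>automethod:: __init__</code> because that is all autosummary's default class template emits; the <code>rubric</code> table lists the methods but nothing renders their docstrings. One hedged fix (a config sketch — option names per the Sphinx autodoc documentation) is to tell autodoc to document members by default in <code>conf.py</code>:</p>

```python
# docs/source/conf.py -- ask autodoc to include public members whenever
# a class is documented, so methods like add_name get their own entries.
autodoc_default_options = {
    "members": True,
    "undoc-members": False,
}
```

<p>Alternatively, a custom <code>_templates/autosummary/class.rst</code> with <code>:members:</code> on the <code>autoclass</code> directive achieves the same per-template. Either way, delete the stale files in <code>generated/</code> before rebuilding so the stubs are regenerated.</p>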
|
<python><python-sphinx><autodoc><autosummary>
|
2023-12-01 09:24:08
| 1
| 328
|
Adventure-Knorrig
|
77,583,899
| 9,819,585
|
XGBoost Classification, Different AUC score for XGBClassifier vs xgb.train Even After Rounding Prediction Probabilities
|
<p>it seems that XGBClassifier is a wrapper over <code>xgb.train</code>. I was trying to train a binary classifier using <code>xgb.train</code> and <code>"objective": "binary:logistic"</code>.</p>
<p>It seems that using <code>xgb.train</code> followed by <code>.predict()</code> returns the prediction probabilities, which we can round to either 0 or 1 for the classification (based on an answer I saw in another Stack Overflow question, <code>xgb.train</code> returns probabilities which we need to round off to get the actual classification prediction).</p>
<p>So if we have binary classification, we need to round off <0.5 ==> class 0 and >0.5 ==> class 1?</p>
<p>But what if it is multiclass? Do we then assume a uniform distribution and split it evenly? e.g. 3 classes ==> 0~1/3, 1/3~2/3, 2/3~1?</p>
<p>What is the right way to use <code>xgb.train</code> as a classifier?</p>
<pre><code>import numpy as np
import xgboost as xgb
data = np.random.rand(50,10) # 50 entities, each contains 10 features
label = np.random.randint(2, size=50) # binary target
dtrain = xgb.DMatrix(data, label=label)
param = {'max_depth':3, 'eta':0.1, 'silent':1, 'tree_method':'hist','objective':'binary:logistic', 'seed':42}
num_round = 100 # same as number of estimator
bst = xgb.train( param, dtrain, num_round)
trainres = bst.predict(dtrain)
model = xgb.XGBClassifier(n_estimators=100, objective='binary:logistic', tree_method='hist', eta=0.1, max_depth=3, enable_categorical=True, seed=42)
model = model.fit(data,label)
fitres = model.predict(data)
# Compare classification
print(all([round(x) for x in trainres] == fitres))
# Compare probabilities. Predict proba gives prob for class 0 and 1, so take x[1]
print(all([x[1] for x in model.predict_proba(data)] == trainres))
</code></pre>
<p>Output:</p>
<pre><code>True
True
</code></pre>
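<p>For multiclass with the low-level API, the even-split guess is not needed: with <code>objective='multi:softprob'</code>, <code>Booster.predict</code> returns one probability per class per row, and the predicted class is the argmax of each row (with <code>multi:softmax</code> it returns the class directly). A numpy-only sketch of the argmax step — the array here is made up to mirror the shape xgboost would return:</p>

```python
import numpy as np

# stand-in for bst.predict(dmat) with 3 classes, shape (n_rows, n_classes)
proba = np.array([[0.2, 0.5, 0.3],
                  [0.7, 0.1, 0.2],
                  [0.1, 0.2, 0.7]])
pred_class = proba.argmax(axis=1)  # most probable class per row
print(pred_class)  # [1 0 2]
```

<p>So for binary <code>binary:logistic</code> the 0.5 threshold is right, but for multiclass the class boundaries are never uniform slices of [0, 1] — each row already carries its own per-class probabilities.</p>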
|
<python><machine-learning><xgboost>
|
2023-12-01 08:32:40
| 0
| 703
|
Lim Kaizhuo
|
77,583,736
| 37,758
|
Ignore extras during model_dump
|
<p>I'm looking for a way to get a dictionary representation of a nested pydantic model which does not include extra elements. <code>model_dump</code> offers a number of <code>exclude</code> flags, but unfortunately no <code>exclude_extras</code>.</p>
<p>Here's an example. I want the assertion to hold. Bear in mind this is just a minimal example, in reality I'm dealing with much more complex models with multiple nested layers each of which can hold extra data.</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, ConfigDict
class Nested(BaseModel):
model_config = ConfigDict(extra="allow")
baz: str
class Root(BaseModel):
foo: int = 10
bar: int
nested: Nested
if __name__ == "__main__":
model = Root(foo=10, bar=20, nested={"baz": "boing", "extra": "so special"})
dumped_data = model.model_dump()
assert "extra" not in dumped_data["nested"]
</code></pre>
<p>The idea I'm currently exploring is to iterate the model after dumping and manually prune the result using <code>model_extra</code>. But it feels like there should be a less tedious way.</p>
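<p>Absent an <code>exclude_extras</code> flag, that pruning can at least be written once as a generic recursive walk over the model and its dump in parallel, using each submodel's <code>model_extra</code>. The helper below is a sketch that only duck-types <code>model_extra</code> (it assumes field names match attribute names, i.e. no aliases), so it is illustrated and tested with stand-in objects rather than real pydantic models:</p>

```python
def prune_extras(model, dumped):
    """Remove keys listed in model.model_extra from dumped, recursing
    into nested dicts whose matching attribute exists on the model."""
    for key in (getattr(model, "model_extra", None) or {}):
        dumped.pop(key, None)
    for name, value in dumped.items():
        sub = getattr(model, name, None)
        if isinstance(value, dict) and sub is not None:
            prune_extras(sub, value)
    return dumped
```

<p>Intended usage would be <code>prune_extras(model, model.model_dump())</code>; lists of submodels would need an extra branch.</p>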
|
<python><pydantic>
|
2023-12-01 07:54:10
| 2
| 2,982
|
NiklasMM
|
77,583,719
| 5,594,008
|
Poetry error on install - SecretStorage required
|
<p>I'm trying to execute <code>poetry install</code>, but I got errors</p>
<pre><code>Cannot install pyasn1.
RuntimeError
SecretStorage required
</code></pre>
<p>And when I try <code>poetry add SecretStorage</code>, I get the same <code>SecretStorage required</code> error.</p>
<p>What should I do to fix that?</p>
<p>(Python version: 3.11, operating system: macOS)</p>
|
<python><python-poetry>
|
2023-12-01 07:49:45
| 1
| 2,352
|
Headmaster
|
77,583,685
| 8,968,910
|
Python: regex number with different cases
|
<p>Consider:</p>
<pre><code>import re
l='the number is 35.897, please check'
print(re.sub(r"([0-9])\.([0-9])", r"\1 of", l))
i='the number is 35+897, please check'
print(re.sub(r"([0-9])\+([0-9])", r"\1 plus", i))
j='the number is 35、897, please check'
print(re.sub(r"([0-9])\、([0-9])", r"\1 and \2", j))
k='the number is 35-897, please check'
print(re.sub(r"([0-9])\-([0-9])", r"\1 of \2", k))
</code></pre>
<p>Output (annotated):</p>
<pre class="lang-none prettyprint-override"><code>the number is 35 of97, please check # I want '35 of' instead
the number is 35 plus97, please check # I want '35 plus' instead
the number is 35 and 897, please check # Correct
the number is 35 of 897, please check # Correct
</code></pre>
<p>I do not know why, when I used <code>.</code> and <code>+</code> in my regex, the result is not what I expected. I don't want the leftover digits glued on after <code>.</code> and <code>+</code>; I thought <code>\2</code> means <code>897</code>. How can I fix it?</p>
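<p>The first two substitutions behave that way because the replacement strings <code>r"\1 of"</code> and <code>r"\1 plus"</code> never mention group 2, yet the pattern still consumes the digit that group 2 matched — so the <code>8</code> disappears and <code>97</code> is left glued on. Depending on the intent, either reference <code>\2</code> in the replacement, or match the whole trailing number so it is dropped cleanly (a sketch):</p>

```python
import re

l = 'the number is 35.897, please check'

# keep the second number: reference group 2 in the replacement
print(re.sub(r"([0-9])\.([0-9])", r"\1 of \2", l))
# -> the number is 35 of 897, please check

# drop the trailing digits entirely: consume them all, no second group needed
print(re.sub(r"([0-9])\.[0-9]+", r"\1 of", l))
# -> the number is 35 of, please check
```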
|
<python><regex>
|
2023-12-01 07:42:54
| 1
| 699
|
Lara19
|
77,583,506
| 13,793,478
|
manipulating pandas dataframe to add new column
|
<p>I have this pandas dataframe</p>
<pre><code> ticker created_at entry_price exit_price direction size
0 APPL 2023-05-29 1 4 LONG 12
1 TSLA 2023-05-29 2 3 SHORT 32
</code></pre>
<p>but I want to add another column at the end which would be the sum of entry_price and exit_price times size.</p>
<p>I tried but couldn't make it work; any help would be appreciated.</p>
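<p>Column arithmetic in pandas is vectorized, so the new column can be built directly from the existing ones (a sketch; the column name <code>total</code> is made up):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ticker": ["APPL", "TSLA"],
    "entry_price": [1, 2],
    "exit_price": [4, 3],
    "size": [12, 32],
})
# (entry_price + exit_price) * size, row by row
df["total"] = (df["entry_price"] + df["exit_price"]) * df["size"]
print(df["total"].tolist())  # [60, 160]
```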
|
<python><pandas>
|
2023-12-01 07:03:27
| 0
| 514
|
Mt Khalifa
|
77,582,815
| 13,046,093
|
How to randomly map a proportion of data value to a specific category?
|
<p>I have a dataset below which shows whether a customer is a return customer. The end goal is that, for all return customers, I need to map about 25% of them to 'yes 1 purchase' and 75% of them to 'yes >1 purchase'. I also need to set a seed so the result does not change each time I re-run the process.</p>
<p>I researched NumPy's random and seed functions, but they seem to generate random numbers rather than randomly assign a proportion of values to a specific category. Can anyone advise on how to do this?</p>
<pre><code>import pandas as pd
import numpy as np
list_customer_name = ['customer1','customer2','customer3','customer4','customer5',
'customer6','customer7','customer8','customer9','customer10','customer11','customer12',
'customer13','customer14','customer15','customer16','customer17','customer18']
list_return_customer = ['yes','yes','yes','yes','yes','yes',
'yes','yes','yes','yes','yes','yes','yes','yes',
'yes','yes','no','no']
df_test = pd.DataFrame({'customer_name': list_customer_name,
'return_customer?':list_return_customer})
</code></pre>
<p>data looks like this</p>
<p><a href="https://i.sstatic.net/CL3O8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CL3O8.png" alt="enter image description here" /></a></p>
<p>desired output looks like this - 25% of customers (4 customer highlighted in yellow) flagged "yes" in the "return_customers?" column are mapped to "yes 1 purchase", the remaining 75% of customers (12 customers highlighted in green) are mapped to "yes >1 purchase".</p>
<p><a href="https://i.sstatic.net/n74KV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n74KV.png" alt="enter image description here" /></a></p>
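One possible approach, sketched with a seeded `numpy.random.default_rng` (the seed value 42 and the `rng.choice` sampling strategy are assumptions, not the only option):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)  # fixed seed -> same assignment on every run

list_customer_name = [f'customer{i}' for i in range(1, 19)]
list_return_customer = ['yes'] * 16 + ['no'] * 2
df_test = pd.DataFrame({'customer_name': list_customer_name,
                        'return_customer?': list_return_customer})

# Sample 25% of the 'yes' rows without replacement for 'yes 1 purchase'
yes_idx = df_test.index[df_test['return_customer?'] == 'yes']
n_one = int(round(0.25 * len(yes_idx)))
one_purchase = rng.choice(yes_idx, size=n_one, replace=False)

# Default everything to '>1 purchase', then overwrite the sampled 25%
df_test.loc[yes_idx, 'return_customer?'] = 'yes >1 purchase'
df_test.loc[one_purchase, 'return_customer?'] = 'yes 1 purchase'
```

With 16 'yes' customers this yields 4 mapped to 'yes 1 purchase' and 12 to 'yes >1 purchase'; the 'no' rows are untouched.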
|
<python><random-seed><numpy-random>
|
2023-12-01 03:12:09
| 1
| 460
|
user032020
|
77,582,760
| 367,878
|
How to fill missing values in dataframe sequentially?
|
<p>I have a data frame in PySpark that looks like this:</p>
<pre><code>row_id, group_id
1, 1
2, null
3, null
4, null
5, 5
6, null
7, null
8, 8
9, null
10, null
11, null
12, null
</code></pre>
<p>And so on: where row_id is sequential number (incremental and unique) and group_id is unique id of the group that starts where value show up first time and until next value.
The task is to fill in all nulls to the data frame like this:</p>
<pre><code>row_id, group_id
1, 1
2, 1
3, 1
4, 1
5, 5
6, 5
7, 5
8, 8
9, 8
10, 8
11, 8
12, 8
</code></pre>
<p>There is an unknown number of records in each group (the sample shows a small number), but it would be in the hundreds, and the length of the data frame is in the millions.</p>
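This is a forward fill; in PySpark it is typically done with `F.last('group_id', ignorenulls=True)` over a window ordered by `row_id`. As a sketch, here is the plain-Python equivalent of what that window function computes:

```python
# Plain-Python sketch of a forward fill -- the same logic that
# F.last('group_id', ignorenulls=True).over(Window.orderBy('row_id'))
# applies row by row in Spark
def forward_fill(values):
    filled, last_seen = [], None
    for v in values:
        if v is not None:
            last_seen = v  # remember the most recent non-null value
        filled.append(last_seen)
    return filled

group_ids = [1, None, None, None, 5, None, None, 8, None, None, None, None]
print(forward_fill(group_ids))
# [1, 1, 1, 1, 5, 5, 5, 8, 8, 8, 8, 8]
```

Note that in Spark an ordered window without a partition key moves all rows to a single partition, which may matter at millions of rows.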
|
<python><dataframe><apache-spark><pyspark><apache-spark-sql>
|
2023-12-01 02:54:12
| 1
| 25,654
|
bensiu
|
77,582,620
| 5,231,110
|
Tensorflow _pywrap_tf2 ImportError: DLL load failed
|
<p>I got the following error with TensorFlow:</p>
<pre><code>from tensorflow.python.platform import _pywrap_tf2
ImportError: DLL load failed while importing _pywrap_tf2: A dynamic link library (DLL) initialization routine failed.
</code></pre>
<p>There are similar questions, but not with <code>_pywrap_tf2</code> and as far as I can see not with an easy working solution.</p>
|
<python><tensorflow>
|
2023-12-01 02:01:41
| 3
| 2,936
|
root
|
77,582,477
| 22,437,734
|
What can I do about Jupyter returning error for no clear reason?
|
<p>I had been doing an ML course where we had to use conda + Anaconda.</p>
<p>In the course, they start up jupyter through the terminal, which for some reason I couldn't do. So I started using the <code>microsoft</code> extension for <code>jupyter notebooks</code>. It worked okay for some time until an error popped up under this cell:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
</code></pre>
<p>The error:</p>
<pre><code>The Kernel crashed while executing code in the the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. Click here for more info. View Jupyter log for further details.
</code></pre>
<p>I don't know what has happened. It worked before perfectly. Please provide an answer if possible. I will be grateful for any response.</p>
<p><strong>Edit</strong></p>
<p>Jupyter Log:</p>
<pre><code>0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
16:53:58.987 [info] Restarted cfc82026-4df4-42b3-a6cc-a6aa07b76e56
16:53:59.370 [info] Kernel acknowledged execution of cell 0 @ 1701392039369
16:53:59.717 [error] Disposing session as kernel process died ExitCode: undefined, Reason: 0.02s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
16:53:59.717 [info] Dispose Kernel process 13930.
16:53:59.773 [info] End cell 0 execution @ undefined, started @ 1701392039369, elapsed time = -1701392039.369s
16:54:26.617 [info] Handle Execution of Cells 0 for ~/Desktop/machine_learning/project_1/Untitled.ipynb
</code></pre>
<p><strong>EDIT <em>2.0</em></strong></p>
<p>The kernel always crashes when a cell contains <code>import pandas</code>.</p>
|
<python><conda><jupyter><importerror>
|
2023-12-01 00:53:20
| 0
| 473
|
Gleb
|
77,582,315
| 14,775,628
|
rankN type equivalent for mypy in python
|
<p>In Haskell we can use rankN types like so:</p>
<pre class="lang-hs prettyprint-override"><code>rankN :: (forall n. Num n => n -> n) -> (Int, Double)
rankN f = (f 1, f 1.0)
</code></pre>
<p>Is the same thing possible in python with mypy?</p>
<p>I tried the following code in python 3.10.2 with mypy 1.7.1:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, TypeVar

I = TypeVar("I", int, float)
def rankN(f: Callable[[I], I]) -> tuple[int, float]:
return (f(1), f(1.0))
</code></pre>
<p>This produces the following errors, implying that <code>f</code> is specializing to <code>float</code>:</p>
<pre><code>Incompatible return value type (got "tuple[float, float]", expected "tuple[int, float]") [return-value]
Argument 1 has incompatible type "float"; expected "int" [arg-type]
</code></pre>
<p>I'm not necessarily expecting this to work since the magic syntax in the Haskell case is the nested <code>forall</code>, but I don't know if there is a similar way to convey this to mypy if it is possible at all.</p>
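A common workaround — a sketch, not guaranteed to typecheck identically across all mypy versions — is a callback protocol whose `__call__` is itself generic. The TypeVar is then scoped to each call site rather than to `rankN`'s signature, roughly the effect of the nested `forall`:

```python
from typing import Protocol, TypeVar

I = TypeVar("I", int, float)

class NumFunc(Protocol):
    # The TypeVar is scoped to __call__, not to the protocol itself,
    # so each call site can pick its own specialization
    def __call__(self, x: I) -> I: ...

def rankN(f: NumFunc) -> tuple[int, float]:
    return (f(1), f(1.0))

print(rankN(lambda x: x * 2))  # (2, 2.0)
```

With `Callable[[I], I]`, mypy solves for a single `I` per call to `rankN`; the protocol defers that choice to each use of `f`.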
|
<python><haskell><types><mypy>
|
2023-11-30 23:52:28
| 2
| 518
|
petrucci4prez
|
77,582,299
| 2,718,496
|
Tensor (5-dim matrix) in python pandas?
|
<p>I have a CSV file that looks like</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>time</th>
<th>Col_A</th>
<th>Col_B</th>
<th>Col_C</th>
<th>Col_D</th>
<th>Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>123</td>
<td>A1</td>
<td>B1</td>
<td>C2</td>
<td>D6</td>
<td>23.43</td>
</tr>
<tr>
<td>124</td>
<td>A5</td>
<td>B3</td>
<td>C7</td>
<td>D1</td>
<td>14.63</td>
</tr>
<tr>
<td>125</td>
<td>A3</td>
<td>B2</td>
<td>C3</td>
<td>D2</td>
<td>343.43</td>
</tr>
<tr>
<td>126</td>
<td>A2</td>
<td>B1</td>
<td>C2</td>
<td>D6</td>
<td>43.43</td>
</tr>
<tr>
<td>127</td>
<td>A1</td>
<td>B1</td>
<td>C7</td>
<td>D2</td>
<td>6.63</td>
</tr>
</tbody>
</table>
</div>
<p>Now I want to create a 5-dimensional matrix. I think it is called a tensor?</p>
<p>Question 1) Can this tensor be done in Pandas, or is there a better Python library? Do you have a link to an API manual so I can read more?</p>
<p>The dimensions of the tensor will be "time", "Col_A", "Col_B", "Col_C", "Col_D". The cell will contain the scalar "Price".</p>
<p>So I want to be able to set the cell, maybe something like:</p>
<pre><code>my_matrix[time=123, Col_A = A1, Col_B = B1, Col_C = C2, Col_D = D6] = 23.43
</code></pre>
<p>Question 2) So what is the syntax to set a scalar? And how do I read the scalar?</p>
<p>I would also like to sum over dimensions. Say I want to sum like this below. When I write "*" I mean the star is a wildcard.</p>
<pre><code>Matrix[time = *, Col_A = A1, Col_B = B1 and B2 and B3, Col_C = *, Col_D = D6 and D2]
</code></pre>
<p>Question 3) How do I sum over different dimensions? Do I need to for loop? Can I use other operations, such as divisions?</p>
<p>Actually, I want to put several numbers in each cell. Maybe I would like to put "time" and "Price" into each cell. I was thinking about string concatenation ("time_Price"), but for a large summation there would be a lot of substring extraction, which might be time consuming. Therefore, I was thinking of creating several identical tensors. One tensor might contain the "Price" in each cell, and another might contain "time" in each cell. So if I want to check the time for a Price, I will use the same coordinates in both tensors.</p>
<p>Question 4) I guess it is faster to use several tensors where each cell contains a single value, instead of one single tensor containing many variables, i.e. a long string of concatenated variables? Or is there a better way?</p>
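For dense labeled N-d arrays, xarray (`xarray.DataArray`) is the library built for this; within pandas itself, a Series with a five-level MultiIndex behaves like a sparse 5-D tensor. A sketch of label-based access and wildcard-style sums (the variable names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    'time':  [123, 124, 125, 126, 127],
    'Col_A': ['A1', 'A5', 'A3', 'A2', 'A1'],
    'Col_B': ['B1', 'B3', 'B2', 'B1', 'B1'],
    'Col_C': ['C2', 'C7', 'C3', 'C2', 'C7'],
    'Col_D': ['D6', 'D1', 'D2', 'D6', 'D2'],
    'Price': [23.43, 14.63, 343.43, 43.43, 6.63],
})

# A MultiIndex Series acts as a sparse 5-D tensor keyed by labels;
# sorting the index enables slice-based selection
s = df.set_index(['time', 'Col_A', 'Col_B', 'Col_C', 'Col_D'])['Price'].sort_index()

# Q2: read and set a single cell via its five coordinates
print(s.loc[(123, 'A1', 'B1', 'C2', 'D6')])  # 23.43
s.loc[(123, 'A1', 'B1', 'C2', 'D6')] = 24.0

# Q3: slice(None) via pd.IndexSlice plays the role of the '*' wildcard
idx = pd.IndexSlice
subset = s.loc[idx[:, 'A1', ['B1', 'B2', 'B3'], :, ['D6', 'D2']]]
print(round(subset.sum(), 2))  # 30.63
```

For Q4, keeping one value per cell (e.g. separate `Price` and `time` Series sharing the same MultiIndex, or one DataFrame holding both as columns under that index) avoids string parsing entirely and is the idiomatic choice.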
|
<python><pandas><numpy><matrix><tensor>
|
2023-11-30 23:47:46
| 1
| 1,015
|
Orvar Korvar
|