| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,461,018 | 3,917,668 | Pysimplegui: get event source without a key | <p>I'm making a Calendar app for my workplace. I want to be able to assign shift (in manager mode) or request & block shifts (in worker mode).</p>
<p>In order for the latter part to be convenient, I want the Text element for each shift to be clickable, making it colored red for block for example.</p>
<p>My problem is that the only way I know to create so many identifiable clickable elements is by assigning a different key to each of them, and there are many shift elements each month (>120).</p>
<p>Is there any other way to know which element triggered the event? Is there anything else I'm missing? I feel like there has to be a more elegant, maintainable way than the one mentioned above.</p>
<p>I'll paste my code for generating the calendar in case it helps.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
import calendar
import json
import PySimpleGUI as sg
sg.theme('DarkGrey15') # Add a touch of color
min_weeks, max_weeks = (4, 6)
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
weekdays = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
workers = ['worker1', 'worker2', 'worker3', 'worker4', 'worker5']
shift_emoji = {'full': '🌈', 'day': '☀️', 'night': '🌙'}
selector_key = 'MonthSelectorKey'
browser_key = 'JsonBrowserKey'
worker_mode_button_title = ['create request file', 'back to shift assignment']
worker_mode_key = 'WorkerModeKey'
worker_requests = ['block', 'request', 'cancel']
worker_requests_default = worker_requests[0]
worker_requests_key = 'WorkerRequestsKey'
exoprt_key = 'ExportKey'
class StateClass:
def __init__(self, month_start = datetime.now().replace(day= 1)) -> None:
self.blocks = {}
self.month_start = month_start
if month_start.month < 12:
self.month_start = month_start.replace(month= month_start.month+1)
else:
self.month_start = month_start.replace(year= month_start.year+1, month= 1)
self.month_range = calendar.monthrange(month_start.year, month_start.month)
self.worker_mode = False
state = StateClass()
def CreateShiftUi(text: str):
return [sg.Text(text, expand_x=True, enable_events=True), sg.Combo(workers, enable_events=True)]
def CreateWorkdayUi():
return [CreateShiftUi(emo) for emo in shift_emoji.values()]
def CreateWorkdayFrame(day: int):
return sg.Frame(day, CreateWorkdayUi())
def CreateWeekRow(start_day: int):
return [CreateWorkdayFrame(start_day+i) for i in range(7)]
def WorkdaySetEnable(workday_frame: sg.Frame, f_enable: bool):
for row in workday_frame.Rows:
row[-1].update('', values= workers, disabled= not f_enable)
def WeekSetEnable(week: list[sg.Frame], f_enable: bool, until: int=6):
for workday in week[:until+1]:
WorkdaySetEnable(workday, f_enable)
for workday in week[until+1:]:
WorkdaySetEnable(workday, not f_enable)
def WeekSetHidden(week_row: list[sg.Frame], f_hide: bool):
if week_row[0].visible != f_hide:
return
for workday in week_row:
workday.update(visible= not f_hide)
def OnMonthChanged():
new_year = month_row[0].get()
new_month = month_row[1].get()
if new_month == state.month_start.month and new_year == state.month_start.year:
return
state.month_start = state.month_start.replace(year= new_year, month= new_month)
ResetCalendar()
def CalendarFrame(day_in_month):
day = day_in_month + state.month_range[0]
return calendar_layout[day//7][day%7]
def ResetCalendar():
global calendar_layout
state.month_range = calendar.monthrange(state.month_start.year, state.month_start.month)
start_weekday = (state.month_range[0]+1) % 7
weeks_in_month = (state.month_range[1]+start_weekday+6) // 7
for week in calendar_layout[min_weeks : weeks_in_month]:
WeekSetHidden(week, False)
for week in calendar_layout[weeks_in_month : max_weeks]:
WeekSetHidden(week, True)
WeekSetEnable(week, False)
# Disable out-of-month days
WeekSetEnable(calendar_layout[0], False, start_weekday-1)
for week in calendar_layout[1 : weeks_in_month-1]:
WeekSetEnable(week, True)
end_weekday = (start_weekday + state.month_range[1] - 1) % 7
WeekSetEnable(calendar_layout[weeks_in_month-1], True, end_weekday)
# Renumber days in month
for i in range(1, 1+state.month_range[1]):
CalendarFrame(i).update(i)
prev_month = (state.month_start.month-2)%12 + 1
prev_month_len = calendar.monthrange(state.month_start.year - int(prev_month == 12), prev_month)[1]
for i in range(start_weekday):
calendar_layout[0][i].update(prev_month_len - start_weekday + i + 1)
for i in range(end_weekday+1,7):
calendar_layout[weeks_in_month-1][i].update(i-end_weekday)
def ApplyBlockfile(block_file):
# Read the Json file
with open(block_file) as json_file:
block_data = json.load(json_file)
# Validate worker's name
name = block_data['worker_name'].capitalize()
if not name in workers:
print(name + " doesn't appear on you list")
return
# Remove from relevant Combos
for blocking in block_data['blockings'].items():
blocked_day = int(blocking[0])
blocked_frame = CalendarFrame(blocked_day)
for blocked_shift in blocking[1]:
shift_type = list(shift_emoji).index(blocked_shift)
shift_combo: sg.Combo = blocked_frame.Rows[shift_type][-1]
new_list = shift_combo.Values.copy()
if name in new_list:
new_list.remove(name)
shift_combo.update(values= new_list)
def OnWorkerMode():
new_mode = not state.worker_mode
state.worker_mode = new_mode
window[worker_mode_key].update(worker_mode_button_title[new_mode])
window[worker_requests_key].update(visible= new_mode)
if new_mode:
pass
else:
window[worker_requests_key].update(worker_requests_default)
calendar_layout = [CreateWeekRow(1+7*j) for j in range(max_weeks)]
month_row = [sg.Spin([i for i in range(2020,2100)], state.month_start.year),
sg.Combo([i for i in range(1,13)],state.month_start.month, key=selector_key, enable_events=True),
sg.FilesBrowse('Browse', enable_events= True, file_types= (("JSON", "*.json"),), target= browser_key),
sg.Input(key= browser_key, visible= False, enable_events= True, expand_x= True),
sg.Button(worker_mode_button_title[0], key=worker_mode_key, enable_events= True),
sg.Combo(worker_requests, worker_requests_default, key=worker_requests_key, visible=False, enable_events=True),
sg.Button('export', key= exoprt_key)]
weekday_titles = [sg.Text(weekdays[i],expand_x=True, justification='center') for i in range(len(weekdays))]
layout = [
month_row,
weekday_titles,
calendar_layout,
]
# Create the Window
window = sg.Window('Window Title', layout, finalize=True)
ResetCalendar()
# Event Loop to process "events" and get the "values" of the inputs
while True:
event, values = window.read()
if event == sg.WIN_CLOSED or event == 'Cancel': # if user closes window or clicks cancel
break
if event == selector_key:
OnMonthChanged()
continue
if event == browser_key:
for block_file in values[browser_key].split(';'):
ApplyBlockfile(block_file)
continue
if event == worker_mode_key:
OnWorkerMode()
continue
print('You entered ', values[0])
window.close()
</code></pre>
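A commonly suggested approach (not taken from the question itself) relies on the fact that PySimpleGUI keys may be any hashable value, tuples included, so keys can be generated programmatically and the firing element identified by unpacking the event. A minimal sketch of just the key/dispatch logic, with no window running (the `make_key`/`handle` names are hypothetical):

```python
# Sketch: generate tuple keys programmatically instead of hand-writing one
# key per element; the event PySimpleGUI reports *is* the element's key,
# so identifying the source element becomes a tuple unpack.
shift_names = ['full', 'day', 'night']

def make_key(day, shift):
    # any hashable value works as a PySimpleGUI key, tuples included
    return ('shift', day, shift)

def handle(event):
    # dispatch logic for events coming from window.read()
    if isinstance(event, tuple) and event[0] == 'shift':
        _, day, shift = event
        return f'day {day}, {shift} shift clicked'
    return None
```

In the layout, each element would get its key at creation time, e.g. `sg.Text(emo, key=make_key(day, name), enable_events=True)`; `window[event].update(text_color='red')` then recolours exactly the element that fired, with no hand-maintained key table.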
| <python><event-handling><pysimplegui> | 2023-11-10 15:14:07 | 1 | 449 | Oded Sayar |
77,460,982 | 2,854,673 | OpenCV puttext curved text printing | <p>I am trying to print text in a curved way instead of a straight line using the OpenCV putText function.
Can lineType support a curve option instead of cv2.LINE_AA?
Are there any other alternatives available to print curved text on an image?</p>
<pre><code>Syntax: cv2.putText(image, text, org, font, fontScale, color[, thickness[, lineType[, bottomLeftOrigin]]])
Parameters:
image: It is the image on which text is to be drawn.
text: Text string to be drawn.
org: It is the coordinates of the bottom-left corner of the text string in the image. The coordinates are represented as tuples of two values i.e. (X coordinate value, Y coordinate value).
font: It denotes the font type. Some of the font types are FONT_HERSHEY_SIMPLEX, FONT_HERSHEY_PLAIN, etc.
fontScale: Font scale factor that is multiplied by the font-specific base size.
color: It is the color of text string to be drawn. For BGR, we pass a tuple. eg: (255, 0, 0) for blue color.
thickness: It is the thickness of the line in px.
lineType: This is an optional parameter. It gives the type of the line to be used.
bottomLeftOrigin: This is an optional parameter. When it is true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner.
Return Value: It returns an image.
</code></pre>
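`cv2.putText` has no curved `lineType`; a common workaround (an assumption about the approach, not an official OpenCV feature) is to place each character separately along an arc and call `putText` once per character at its computed anchor. A sketch of the geometry only, using the standard library (the `arc_layout` name is hypothetical):

```python
import math

def arc_layout(text, center, radius, start_deg, end_deg):
    """Return one (char, (x, y), angle_deg) anchor per character along an arc."""
    n = len(text)
    step = (end_deg - start_deg) / max(n - 1, 1)
    out = []
    for i, ch in enumerate(text):
        a_deg = start_deg + i * step
        a = math.radians(a_deg)
        x = int(center[0] + radius * math.cos(a))
        y = int(center[1] + radius * math.sin(a))
        out.append((ch, (x, y), a_deg))
    return out
```

Each entry would then be drawn with `cv2.putText(img, ch, (x, y), font, scale, color)`; for true per-character rotation one would render each glyph onto a small patch, rotate it with `cv2.warpAffine`, and composite it onto the image.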
| <python><image><opencv><text><curve> | 2023-11-10 15:09:35 | 1 | 334 | UserM |
77,460,950 | 3,611,472 | Optimising the computation of a (quasi) infinite series on Pandas/Numpy | <p>I have a time series stored in a pandas dataframe. The time series is the sequence {<code>X_t</code>} where <code>t</code> is the time index, which in my case is truncated, i.e. t ∈ [0, T] with T > 0.</p>
<p>For each <code>t</code>, I would like to compute the following series</p>
<p><a href="https://i.sstatic.net/D4O14.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D4O14.jpg" alt="enter image description here" /></a></p>
<p>where the weights <code>w_k</code> are defined by a recursive relation</p>
<p><a href="https://i.sstatic.net/x9Tgd.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x9Tgd.jpg" alt="enter image description here" /></a></p>
<p>and <code>d</code> is a float parameter between 0 and 1.</p>
<p>Of course, given that my series is not infinite, this sum must be truncated. For example, for the last two terms of the time series, I will compute</p>
<p><a href="https://i.sstatic.net/bsDMi.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bsDMi.jpg" alt="enter image description here" /></a></p>
<p>I have tried to implement this computation in Python, but it is terribly slow and I am sure I am not using the full potential of pandas/numpy. Can anyone suggest a better way to do this computation?</p>
<p>First of all, I create a random dataset</p>
<pre><code>import pandas as pd
import numpy as np
from tqdm import tqdm
df = pd.DataFrame(np.random.randint(0,100,size=(100000, 1)), columns=['value'])
</code></pre>
<p>then, I create a function that computes the weights iteratively</p>
<pre><code>def get_next_weight(weight, k, d):
return - weight * (d-k+1)/k
weights = [1]
idx = 1
for idx in range(1,len(df)):
weights.append(get_next_weight(weights[-1], idx, 0.1))
</code></pre>
<p>Then, I compute the new series</p>
<pre><code>new_values = []
with tqdm(total=len(df)) as pbar:
for idx in df.index:
Xt = (df.value.loc[:idx].sort_index(ascending=False).reset_index(drop=True) * weights[:idx+1]).sum()
new_values.append(Xt)
pbar.update(1)
</code></pre>
<p>This is really slow, and I know that my solution is very bad, but I couldn't come up with a better clean solution.</p>
<p>Any help?</p>
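Since the truncated sum at each t has the form sum over k of w_k·X_{t−k}, it is a convolution of the series with the weight sequence, which NumPy can compute in a single call. A sketch under that reading of the formula (function names are illustrative):

```python
import numpy as np

def fracdiff_weights(d, n):
    # w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k  (the recursion from the question)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = -w[k - 1] * (d - k + 1) / k
    return w

def truncated_series(x, d):
    w = fracdiff_weights(d, len(x))
    # np.convolve(x, w)[t] == sum_{k=0..t} w[k] * x[t-k], exactly the truncated sum
    return np.convolve(x, w)[: len(x)]
```

With this, `df['new'] = truncated_series(df['value'].to_numpy(dtype=float), 0.1)` replaces the per-row loop; for very long series, `scipy.signal.fftconvolve` (if SciPy is available) would be faster still.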
| <python><pandas><numpy><optimization> | 2023-11-10 15:06:07 | 1 | 443 | apt45 |
77,460,905 | 2,394,163 | How can I reference multiple Google Sheets worksheets from .streamlit/secrets.toml? | <p>I want to be able to aggregate data from multiple Google sheets spreadsheets in a single <a href="https://docs.streamlit.io/" rel="nofollow noreferrer">streamlit</a> application.</p>
<p>The documentation at <a href="https://docs.streamlit.io/knowledge-base/tutorials/databases/public-gsheet" rel="nofollow noreferrer">Connect Streamlit to a public Google Sheet</a> only shows a single spreadsheet.</p>
<pre class="lang-ini prettyprint-override"><code># .streamlit/secrets.toml
[connections.gsheets]
spreadsheet = "https://docs.google.com/spreadsheets/d/xxxxxxx/edit#gid=0"
</code></pre>
<pre class="lang-py prettyprint-override"><code># streamlit_app.py
import streamlit as st
from streamlit_gsheets import GSheetsConnection
# Create a connection object.
conn = st.connection("gsheets", type=GSheetsConnection)
df = conn.read()
# Print results.
for row in df.itertuples():
st.write(f"{row.name} has a :{row.pet}:")
</code></pre>
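`st.connection(name)` looks up the matching `[connections.<name>]` section in `secrets.toml`, so one section per spreadsheet should work (the section names and URLs below are hypothetical):

```toml
# .streamlit/secrets.toml
[connections.sales_sheet]
spreadsheet = "https://docs.google.com/spreadsheets/d/aaaaaaa/edit#gid=0"

[connections.costs_sheet]
spreadsheet = "https://docs.google.com/spreadsheets/d/bbbbbbb/edit#gid=0"
```

Each sheet is then read through its own connection object, e.g. `sales = st.connection("sales_sheet", type=GSheetsConnection).read()` and likewise for `costs_sheet`, after which the DataFrames can be combined with `pd.concat`.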
| <python><google-sheets><streamlit> | 2023-11-10 14:57:50 | 1 | 2,054 | Nick |
77,460,818 | 3,066,306 | Pyinstaller ignores dll which is part of a dependency | <p>I am creating a slim Python project which wraps a native dll, so that other Python developers can use the functionality of this dll without bothering about C internals. This project can be built into a wheel, it can be pip installed, and it can be successfully used within scripts. However, when bundled with pyinstaller, the dll is ignored, so the bundled program crashes.</p>
<p>I know there is a workaround by explicitly telling pyinstaller to include the dll: <code>pyinstaller --add-binary="source/path/mylib.dll:destination/path" ./main.py</code>. However, this somewhat defeats the purpose of such a wrapper, which is to hide the dll's existence from the user. I would like to know if there is a way to improve my wrapper's project structure so that pyinstaller will find the dll automatically.</p>
<hr />
<p>This is how my wrapper project is structured:</p>
<pre><code>wrapper
|-- wrapper
| |-- __init__.py
| |-- mylib.dll
|-- pyproject.toml
</code></pre>
<p><code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "wrapper"
version = "0.1.0"
description = ""
authors = []
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p><code>__init__.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
import importlib.resources as res
lib = None
with res.path("wrapper", "mylib.dll") as dll_path:
lib = ctypes.CDLL(str(dll_path))
# now defining several python functions accessing lib
</code></pre>
<p>As mentioned above, this project can be successfully built into a wheel by running <code>poetry build</code>, and it can also be installed by <code>pip install <wheel name></code>. In both cases, the dll gets recognized and copied to the proper location (e.g. site-packages/wrapper). With it installed, the following script runs successfully:</p>
<p><code>main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import wrapper
# calling arbitrary functions from the wrapper module
</code></pre>
<p>However, when running <code>pyinstaller ./main.py</code>, the library <code>mylib.dll</code> is ignored, and the bundled executable crashes with a <code>PyInstallerImportError</code>.</p>
<hr />
<p><strong>Added later:</strong> Running <code>pyinstaller --collect-binaries wrapper .\main.py</code> actually works fine. This solution isn't too bad for my taste. As a library developer, however, I would still prefer a solution which does not require users to set nonstandard flags to be able to use my library, so I am looking forward to better ideas.</p>
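PyInstaller documents a mechanism for packages to ship their own hooks: an entry point in the <code>pyinstaller40</code> group pointing at a function that returns hook directories, which removes the need for any user-side flags. A sketch under the assumption of a <code>wrapper/_pyinstaller</code> submodule (the layout is an assumption, the entry-point group name is PyInstaller's documented one):

```python
# wrapper/_pyinstaller/__init__.py  (hypothetical hook-provider module)
import os

def get_hook_dirs():
    # PyInstaller discovers this function via the "pyinstaller40" entry point
    # and loads any hook-*.py files found in the returned directories.
    return [os.path.dirname(os.path.abspath(__file__))]
```

A `hook-wrapper.py` in that directory would contain `from PyInstaller.utils.hooks import collect_dynamic_libs` and `binaries = collect_dynamic_libs("wrapper")`, and the Poetry config would register the entry point with `[tool.poetry.plugins."pyinstaller40"]` / `hook-dirs = "wrapper._pyinstaller:get_hook_dirs"` so any user's plain `pyinstaller ./main.py` picks the dll up automatically.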
| <python><python-3.x><pyinstaller> | 2023-11-10 14:43:45 | 1 | 303 | Dune |
77,460,705 | 13,151,304 | How to detect any key pressed without blocking execution in Python | <p>I have a script that checks the position of the mouse every 60 seconds. If the mouse has not moved, it moves it, makes a right click, presses Esc, and sleeps. It is pretty handy to avoid the computer going to sleep. If the mouse has moved, it does nothing, goes to sleep and checks again in 60 seconds.</p>
<p>Now I want to extend it to detect any key pressed, and treat it the same way as the mouse: if a key has been pressed, just go to sleep.</p>
<p>I found the keyboard module, which allows detecting pressed keys. However, it blocks execution while waiting for a pressed key. Therefore, I would like to find a way to stop "listening" with keyboard after a specified time. That way, instead of sleep, I would use that: if either the mouse is moved or the keyboard is pressed, don't do anything.</p>
<p>I have not been able to find anything that works for Windows. I hope you can help.</p>
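The `keyboard` package also has a non-blocking API: `keyboard.hook(callback)` registers a listener and returns immediately. Combined with a `threading.Event`, the 60-second sleep becomes a wait with a timeout that ends early on activity. A sketch with the hook call left as a comment so it runs anywhere (the function names are illustrative):

```python
import threading

activity = threading.Event()

def on_key(event):
    # callback fired by the listener; just record that something happened
    activity.set()

def idle_for(seconds):
    """Return True if NO key was pressed within `seconds`."""
    pressed = activity.wait(timeout=seconds)
    activity.clear()
    return not pressed

# In the real script (Windows): import keyboard; keyboard.hook(on_key)
# then each cycle: if idle_for(60) and the mouse hasn't moved, jiggle the mouse.
```

`activity.wait(60)` replaces `time.sleep(60)`: it returns as soon as a key event arrives, or after the full timeout if nothing happened, so the main loop never blocks on the keyboard.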
| <python> | 2023-11-10 14:26:21 | 1 | 417 | El_que_no_duda |
77,460,692 | 1,554,386 | Can not install apache-airflow-providers-mysql or mysqlclient: pkg-config error on MacOS | <p>Using MacOS Ventura 13.6, when trying to install <code>apache-airflow-providers-mysql</code> or <code>mysqlclient</code> package I get the following error:</p>
<pre><code>$ pip install apache-airflow-providers-mysql
...
Collecting mysqlclient>=1.3.6 (from apache-airflow-providers-mysql)
Using cached mysqlclient-2.2.0.tar.gz (89 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [27 lines of output]
/bin/sh: pkg-config: command not found
/bin/sh: pkg-config: command not found
Trying pkg-config --exists mysqlclient
Command 'pkg-config --exists mysqlclient' returned non-zero exit status 127.
Trying pkg-config --exists mariadb
Command 'pkg-config --exists mariadb' returned non-zero exit status 127.
Traceback (most recent call last):
File "/Users/am/test-env-mysqlclient/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/Users/am/test-env-mysqlclient/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/am/test-env-mysqlclient/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/dl/dl02sf7d0kxd127bfd69h0d1pdpc0b/T/pip-build-env-33qoucsr/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/dl/dl02sf7d0kxd127bfd69h0d1pdpc0b/T/pip-build-env-33qoucsr/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 325, in _get_build_requires
self.run_setup()
File "/private/var/folders/dl/dl02sf7d0kxd127bfd69h0d1pdpc0b/T/pip-build-env-33qoucsr/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 154, in <module>
File "<string>", line 48, in get_config_posix
File "<string>", line 27, in find_package_name
Exception: Can not find valid pkg-config name.
Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
</code></pre>
<p>How do I install <code>pkg-config</code> and the necessary dependencies to build <code>mysqlclient</code>?</p>
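A typical Homebrew-based fix (the paths are standard Homebrew defaults and may differ per machine) is to install pkg-config plus the MySQL client library, and point pkg-config at the keg-only library before retrying pip:

```shell
brew install pkg-config mysql-client
# mysql-client is keg-only, so tell pkg-config where its .pc files live
export PKG_CONFIG_PATH="$(brew --prefix mysql-client)/lib/pkgconfig"
pip install mysqlclient apache-airflow-providers-mysql
```

After this, `pkg-config --exists mysqlclient` should succeed, which is exactly the probe the failing build was running.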
| <python><macos><airflow><libmysqlclient> | 2023-11-10 14:24:08 | 1 | 27,985 | Alastair McCormack |
77,460,652 | 7,334,572 | Parse text with newline and write it to file using argparse | <p>I want to pass some strings containing newlines from the command line into a file, but <code>argparse</code> does not interpret the <code>\n</code>, as the file <code>aaa</code> shows after running the following script with the command <code>python test.py -t "aa\nbb\ncc\n"</code>.</p>
<p>The <code>test.py</code> is here:</p>
<pre class="lang-py prettyprint-override"><code>#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import argparse
def wwrite():
# parser
parser = argparse.ArgumentParser(
description='Atest.')
parser.add_argument('-t', '--tail',
help='Add some lines to file')
args = parser.parse_args()
# check
if args.tail is not None:
zz_tail = str(args.tail)
else:
zz_tail = ''
# write
with open("aaa", 'w', encoding='utf-8') as fw:
fw.write(zz_tail)
with open("bbb", 'w', encoding='utf-8') as fw:
fw.write("aa\nbb\ncc")
# main entry point
if __name__ == "__main__":
wwrite()
</code></pre>
<p>How do I get the file same to this:</p>
<pre><code>aa
bb
cc
</code></pre>
<p>Not:</p>
<pre><code>aa\nbb\ncc\n
</code></pre>
<p>And not change the command <code>python test.py -t "aa\nbb\ncc\n"</code>.</p>
<p>There is a similar question and answer, but it will change the command to <code>python test.py -t $'aa\nbb\ncc\n'</code>. (<a href="https://stackoverflow.com/questions/50642064/parse-text-with-newline-using-argparse">Parse text with newline using argparse</a>)</p>
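If the command itself cannot change, the escape sequences can be interpreted inside the program instead: `codecs.decode(..., 'unicode_escape')` turns the literal `\n` pairs into real newlines. A sketch (note this codec round-trips through Latin-1, so it can mangle non-ASCII input; for plain ASCII strings like this one it is safe):

```python
import codecs

raw = r"aa\nbb\ncc\n"   # what argparse hands over from "aa\nbb\ncc\n"
text = codecs.decode(raw, "unicode_escape")
```

In the script, replacing `zz_tail = str(args.tail)` with `zz_tail = codecs.decode(str(args.tail), "unicode_escape")` before writing would produce the desired three-line `aaa` file.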
| <python><argparse> | 2023-11-10 14:17:25 | 1 | 442 | cwind |
77,460,540 | 610,569 | Iterables difference and common elements | <p>Given 2 iterables/lists, the goal is to extract the common and different elements from both lists.</p>
<p>For example, given:</p>
<pre><code>x, y = [1,1,5,2,2,3,4,5,5], [2,3,4,5]
</code></pre>
<p>The goal is to achieve:</p>
<pre><code>common = [2,3,4,5]
x_only = [1,1,5,2,5]
y_only = []
</code></pre>
<p>Explanation:</p>
<ul>
<li>when it comes to the element <code>2</code>, x has 2 counts and y has 1 count, so <code>[2]</code> will be in common and the other <code>[2]</code> will be in x_only.</li>
<li>Similarly for <code>5</code>, x has 3 counts and y has 1 count. so <code>[5]</code> will be common and the other <code>[5,5]</code> will be in x_only.</li>
<li>The order of the elements in <code>common</code>, <code>x_only</code> and <code>y_only</code> is not important.</li>
</ul>
<p>I've tried:</p>
<pre><code>from collections import Counter
x, y = [1,1,5,2,3,4,5], [2,3,4,5]
x_only = list((Counter(x) - Counter(y)).elements())
y_only = list((Counter(y) - Counter(x)).elements())
common = list((Counter(x) & Counter(y)).elements())
</code></pre>
<p>The above attempt achieves the objective, but it seems a little redundant in repeating the multiple counter subtractions and intersections. It will work well for small iterables but not for a large list, e.g. 1-10 billion items.</p>
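The repeated work can at least be halved by building each Counter once and reusing it; the three results then come from cheap counter arithmetic over the already-counted multisets:

```python
from collections import Counter

def split_common(x, y):
    cx, cy = Counter(x), Counter(y)          # count each iterable exactly once
    common = list((cx & cy).elements())      # multiset intersection
    x_only = list((cx - cy).elements())      # multiset difference
    y_only = list((cy - cx).elements())
    return common, x_only, y_only
```

For billions of items the bottleneck shifts to memory rather than counting; since `Counter` accepts any iterable, the inputs can be streamed in, and only one entry per distinct value is held.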
| <python><list><counter><iterable> | 2023-11-10 13:59:30 | 1 | 123,325 | alvas |
77,460,486 | 12,279,326 | How to get stacked bar chart rows share the same axis | <p>I have a stacked horizontal bar chart using <code>matplotlib.pyplot</code>.
It produces the following output:
<a href="https://i.sstatic.net/49wns.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/49wns.png" alt="enter image description here" /></a></p>
<p>I wanted each row to span 100% of the x-axis width, so the colours row would occupy the same width as the styles row, with the individual segments becoming wider to fill the vacant space.</p>
<p>I thought filling a numpy array with equal values divided by 100 would ensure the values are proportionally scaled.</p>
<p>However, as per the output colours do not expand out.</p>
<p>Could anyone kindly point me in the direction of solving this problem?</p>
<p>Here is the code for a reproducible example.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams.update(plt.rcParamsDefault)
dummy = {
'colours': ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten', 'eleven'],
'style': ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten', 'eleven', 'twelve', '13', '14']
}
df = pd.DataFrame.from_dict(dummy, orient="index")
y_labels_colors = df.loc["colours"].dropna().tolist()
y_values_colors = np.full(len(y_labels_colors), len(y_labels_colors) / 100)
y_labels_styles = df.loc["style"].dropna().tolist()
y_values_styles = np.full(len(y_labels_styles), len(y_labels_styles) / 100)
fig, ax = plt.subplots(figsize=(12, 1))
cumulative_size_colors = 0
cumulative_size_styles = 0
for label, value in zip(y_labels_colors, y_values_colors):
ax.barh("colours", value, left=cumulative_size_colors, color='lightskyblue', edgecolor='white', linewidth=1)
cumulative_size_colors += value
ax.text(cumulative_size_colors - value / 2, 0, label, ha='center', va='center')
for label, value in zip(y_labels_styles, y_values_styles):
ax.barh("styles", value, left=cumulative_size_styles, color='lightsteelblue', edgecolor='white', linewidth=1)
cumulative_size_styles += value
ax.text(cumulative_size_styles - value / 2, 1, label, ha='center', va='center')
ax.set_frame_on(False)
ax.set_xticks([])
plt.show()
</code></pre>
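A likely culprit, stated as a guess: `np.full(n, n / 100)` makes each row sum to n²/100, so the 11-segment row spans 1.21 units while the 14-segment row spans 1.96 — different totals, hence different widths. Giving each segment width 1/n makes every row sum to exactly 1.0:

```python
import numpy as np

def equal_segments(labels):
    # each segment gets 1/n of the axis, so the whole row always sums to 1.0
    n = len(labels)
    return np.full(n, 1.0 / n)
```

Replacing the two `np.full(len(...), len(...) / 100)` lines with `equal_segments(y_labels_colors)` and `equal_segments(y_labels_styles)` should make both bars span the same 0–1 range; the label x-positions already follow the cumulative sums, so the text moves with the segments.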
| <python><matplotlib><stacked-bar-chart> | 2023-11-10 13:50:05 | 1 | 948 | dimButTries |
77,460,389 | 1,819,402 | Run cron jobs inside of Docker container | <p>I'm trying to run my Python program every minute in a Docker container. To do this I want to build an image with Python and cron and then, with a cron expression, run <code>main.py</code>.</p>
<p>Can you please look at my Dockerfile and tell me why nothing is executing? Even the second cron expr (<code>echo "OK"</code>) is not writing to a file (I've added it as a debug job).</p>
<p>All log files are empty, just like the cron would not work at all...</p>
<p>BTW, if I run <code>/script.sh</code> manually (from the docker container) I receive no errors and a new line is added to <code>/app/ROOT.log</code>.</p>
<p>Dockerfile:</p>
<pre><code>FROM python:3.8-slim-buster
RUN pip install --upgrade pip
RUN apt-get update && apt-get -y install cron
RUN apt-get -y install procps
RUN touch /script.sh
RUN echo "#!/bin/bash" >> /script.sh \
&& echo "" >> /script.sh \
&& echo "cd /app" >> /script.sh \
&& echo "python main.py" >> /script.sh
RUN chmod +x /script.sh
RUN (crontab -l ; echo "*/1 * * * * /bin/bash /script.sh >/app/cron.log 2>&1") | crontab
RUN (crontab -l ; echo "*/1 * * * * echo \"OK\" >>/app/cron.log 2>&1") | crontab
RUN chown app:app /script.sh && chown -R app:app /var/spool/cron/crontabs
RUN touch /var/log/cron.log
WORKDIR /app
# copy requirements.txt file from local (source) to file structure of container (destination)
COPY requirements.txt requirements.txt
# Install the requirements specified in file using RUN
RUN pip3 install -r requirements.txt
# copy all items in current local directory (source) to current container directory (destination)
COPY main.py main.py
CMD /etc/init.d/cron start && tail -f /var/log/cron.log
</code></pre>
<p><code>main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == '__main__':
from datetime import datetime
# save current date time to file
with open('/app/ROOT.log', 'a') as f:
f.write(str(datetime.now()) + '\n')
</code></pre>
<pre class="lang-bash prettyprint-override"><code>root@86bbd19a7cb1:/app# ps aux | grep cron
root 1 0.0 0.0 2384 756 ? Ss 12:49 0:00 /bin/sh -c /etc/init.d/cron start && tail -f /var/log/cron.log
root 11 0.0 0.0 7260 2056 ? Ss 12:49 0:00 /usr/sbin/cron
root 12 0.0 0.0 4076 744 ? S 12:49 0:00 tail -f /var/log/cron.log
root 38 0.0 0.0 4832 876 pts/0 S+ 12:54 0:00 grep cron
</code></pre>
<pre class="lang-bash prettyprint-override"><code>root@86bbd19a7cb1:/app# ls /var/log/
alternatives.log apt btmp cron.log dpkg.log exim4 faillog lastlog wtmp
root@86bbd19a7cb1:/app# cat /var/log/cron.log
root@86bbd19a7cb1:/app# cat /var/log/lastlog
root@86bbd19a7cb1:/app# cat /var/log/faillog
</code></pre>
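Two things worth checking, offered as guesses rather than a confirmed diagnosis: the cron jobs redirect their output to <code>/app/cron.log</code> while the container tails <code>/var/log/cron.log</code>, so nothing would appear in the tailed file even if the jobs ran; and <code>chown -R app:app /var/spool/cron/crontabs</code> hands root's crontab files to a different user, which Debian's cron may refuse to honour. A sketch of the corresponding Dockerfile lines:

```dockerfile
# keep the crontab owned by the user it was installed for (root here),
# and log to the same file the container tails
RUN (crontab -l ; echo "* * * * * /bin/bash /script.sh >> /var/log/cron.log 2>&1") | crontab
RUN touch /var/log/cron.log
CMD cron && tail -f /var/log/cron.log
```

Dropping the `chown` lines (or creating the `app` user and installing the crontab as that user with `crontab -u app`) keeps ownership consistent with what cron expects.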
| <python><docker><cron> | 2023-11-10 13:33:10 | 1 | 3,199 | MAGx2 |
77,460,276 | 159,072 | What version of TensorFlow+Keras should I install for nVidia GeForce GTX 480? | <pre><code>Microsoft Windows [Version 10.0.19045.2965]
(c) Microsoft Corporation. All rights reserved.
C:\Users\pc>nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:32_Central_Daylight_Time_2017
Cuda compilation tools, release 9.0, V9.0.176
C:\Users\pc>nvidia-smi
Fri Nov 10 14:02:45 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 391.35 Driver Version: 391.35 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 480 WDDM | 00000000:03:00.0 N/A | N/A |
| 44% 63C P12 N/A / N/A | 360MiB / 1536MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
C:\Users\pc>
</code></pre>
<p>I am running Python 3.11. So, I tried to install TensorFlow 2.0.0 and received the following error:</p>
<pre><code>C:\Users\pc>pip install tensorflow==2.0.0
ERROR: Could not find a version that satisfies the requirement tensorflow==2.0.0
(from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2,
2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.15.0rc0, 2.15.0rc1)
ERROR: No matching distribution found for tensorflow==2.0.0
</code></pre>
<p>I need to run neural network training.</p>
<p>What version of TensorFlow+Keras should I install for this GPU configuration?</p>
| <python><tensorflow><gpu><nvidia> | 2023-11-10 13:13:48 | 1 | 17,446 | user366312 |
77,460,275 | 15,638,204 | Using pyodbc&pandas to load a Table data to df | <p>I'm connecting to a DB and can get all the tables from it with no errors.
I want to find a table named "ToolHistory" (and later some other tables), then use a pandas DataFrame to filter some rows from the table(s).
Here is what I got so far:</p>
<pre><code>conn = pyodbc.connect('DRIVER={SQL Server};SERVER=SomeServer.com,1433;DATABASE=QQS;UID=me;PWD=allgetlost')
cursor = conn.cursor()
for row in cursor.tables():
print(f" >> {row.table_name}")
sql_query = 'SELECT * FROM ToolHistory'
df = pd.read_sql(sql_query, conn)
# tb_one = row.table_name
# if tb_one == 'ToolHistory' :
# print('Found Table ->',tb_one)
#df = pd.read_sql_table('tb_one', conn)
#df = pd.read_sql_table('tb_one', cursor)
</code></pre>
<p>I cannot get past the line after</p>
<pre><code>print(f" >> {row.table_name}")
</code></pre>
<p>Any help is appreciated.</p>
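The table listing and the DataFrame load are independent steps: finish iterating the table list first, then issue the query once. Sketched here against an in-memory SQLite database, since the QQS server is not reproducible; with pyodbc the same `pd.read_sql` call takes the pyodbc connection instead:

```python
import sqlite3
import pandas as pd

# stand-in for the pyodbc connection; schema and rows are made up for the demo
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ToolHistory (tool TEXT, uses INTEGER)")
conn.executemany("INSERT INTO ToolHistory VALUES (?, ?)",
                 [("hammer", 3), ("drill", 5)])

# collect table names first (with pyodbc: [r.table_name for r in cursor.tables()])
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]

df = pd.DataFrame()
if "ToolHistory" in tables:
    df = pd.read_sql("SELECT * FROM ToolHistory", conn)
```

Note that `pd.read_sql` wants the connection object, not the cursor, and the table name goes inside the SQL string — `pd.read_sql_table('tb_one', conn)` would look for a table literally named <code>tb_one</code>.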
| <python><pandas> | 2023-11-10 13:13:43 | 1 | 956 | avocadoLambda |
77,460,117 | 2,791,346 | Find file on SFTP server that contains string content in Python | <h2>Problem</h2>
<p>In Python, my task is to find an <code>XML</code> file on an external <code>SFTP</code> server that contains</p>
<pre><code><myTag>BLA</myTag>
</code></pre>
<p>The problem is there could be more than 5000 files on the server.</p>
<p>Is there a way to do it efficiently?</p>
<h2>Right now I am:</h2>
<ul>
<li>using <code>pysftp</code> to connect to <code>SFTP</code> server</li>
<li>gather all filenames from there and filter them to include only .xml</li>
<li>go over files with <code>conn.getfo(file_name, file)</code> to get the file and search for content.</li>
</ul>
<h2>Pseudo code</h2>
<pre><code>import pysftp
cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
conn = pysftp.Connection('...', username='...', password='..', port=22, cnopts=cnopts)
all_file_names = []
for file_attr in conn.listdir_attr(root):
filepath = f'{root}/{file_attr.filename}'.replace('//', '/')
if stat.S_ISREG(file_attr.st_mode):
all_file_names.append(filepath)
elif stat.S_ISDIR(file_attr.st_mode):
if skip_path is None or skip_path != filepath:
all_file_names = all_file_names + get_all_filenames(filepath)
for file_name in [f_name for f_name in all_file_names if f_name.endswith(tuple(['.xml']))]:
with tempfile.TemporaryFile(mode='wb+') as file:
try:
conn.getfo(file_name, file)
file.seek(0)
file.read()
# find right tag...
</code></pre>
<p>This code takes around 0.15 s per file to open it. With 5000 files that is about 12.5 minutes to go over all of them.</p>
<h2>Question</h2>
<p>How to optimize this?</p>
<h2>Prerequisites</h2>
<ul>
<li>The SFTP is outside of my domain</li>
<li>I do not have SSH permissions</li>
<li>Downloading and storing all files on my server (file sync) is not a viable option</li>
</ul>
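Since there is no SSH access, the bytes must be transferred either way, so the main lever left is concurrency: several SFTP sessions fetching in parallel, with each file's text checked for the tag as it arrives. A sketch of the orchestration with the transfer abstracted behind a `fetch` callable (so it is testable without a server); in practice each worker would own its own pysftp/paramiko connection, since one SFTP session is not safely shared across threads:

```python
from concurrent.futures import ThreadPoolExecutor

def contains_tag(xml_text, tag="myTag", value="BLA"):
    # cheap substring test; an XML parser could be swapped in if needed
    return f"<{tag}>{value}</{tag}>" in xml_text

def search(file_names, fetch, workers=16):
    """Yield the names of files whose contents carry the tag."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for name, text in zip(file_names, pool.map(fetch, file_names)):
            if contains_tag(text):
                yield name
```

At ~0.15 s per file, 16 concurrent sessions would cut the 12.5-minute scan to well under a minute, network and server limits permitting; stopping at the first match (if only one file is expected) saves more still.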
| <python><optimization><sftp><pysftp> | 2023-11-10 12:47:01 | 0 | 8,760 | Marko Zadravec |
77,460,101 | 13,146,029 | werkzeug.routing.exceptions.BuildError: Could not build url for endpoint 'add_user_confirm' with values | <p>I'm trying to create an invite URL for a project I'm working on, and I have seen a few versions online of how to create a token URL for this to work.</p>
<p>I get an error when creating the URL:</p>
<pre><code> ^^^^^^^^^^^^^^^^^^^^
File "/Users/grahammorby/Documents/GitHub/tagr-api/venv/lib/python3.12/site-packages/flask/app.py", line 1071, in url_for
return self.handle_url_build_error(error, endpoint, values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/grahammorby/Documents/GitHub/tagr-api/venv/lib/python3.12/site-packages/flask/app.py", line 1060, in url_for
rv = url_adapter.build( # type: ignore[union-attr]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/grahammorby/Documents/GitHub/tagr-api/venv/lib/python3.12/site-packages/werkzeug/routing/map.py", line 919, in build
raise BuildError(endpoint, values, method, self)
werkzeug.routing.exceptions.BuildError: Could not build url for endpoint 'add_user_confirm' with values ['token']. Did you mean 'users_blueprint.add_user_confirm' instead?
</code></pre>
<p>I have two routes under a blueprint which look as follows:</p>
<pre class="lang-py prettyprint-override"><code>@users_blueprint.route("/invite_user", methods=["GET", "POST"])
def invite_user():
email = request.json["email"]
if email:
ts = URLSafeTimedSerializer("JWT_SECRET_KEY")
salt = "12345"
token = ts.dumps(email, salt)
confirm_url = url_for("add_user_confirm", token=token, _external=True)
try:
msg = Message(
"Welcome to TAGR",
sender="registration@tagr.global",
html="You have been invited to set up an account on PhotogApp. Click here: "
+ confirm_url,
recipients=[email],
)
mail.send(msg)
except Exception as e:
print(e)
@users_blueprint.route("/add_user_confirm/<token>", methods=["GET", "POST"])
def add_user_confirm(token):
print(token)
</code></pre>
<p>I'm new to this way of working and would welcome some help.</p>
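<p>For reference, the error message already names the fix: endpoints defined on a blueprint are namespaced as <code>&lt;blueprint name&gt;.&lt;function name&gt;</code>. A minimal sketch (assuming Flask is installed; names mirror the question):</p>

```python
from flask import Blueprint, Flask, url_for

users_blueprint = Blueprint("users_blueprint", __name__)

@users_blueprint.route("/add_user_confirm/<token>", methods=["GET", "POST"])
def add_user_confirm(token):
    return token

app = Flask(__name__)
app.register_blueprint(users_blueprint)

with app.test_request_context():
    # The endpoint must be qualified with the blueprint name:
    confirm_url = url_for("users_blueprint.add_user_confirm", token="abc123")
```

<p>Inside a view function of the same blueprint you can also write <code>url_for(".add_user_confirm", ...)</code>, where the leading dot means "this blueprint".</p>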
| <python><flask> | 2023-11-10 12:44:09 | 1 | 317 | Graham Morby |
77,460,094 | 726,730 | Python - PyQt5 - How to show statustip for QMenu and submenu actions | <pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'menu_example_statustip.ui'
#
# Created by: PyQt5 UI code generator 5.15.9
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(800, 600)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 22))
self.menubar.setObjectName("menubar")
self.menu1 = QtWidgets.QMenu(self.menubar)
self.menu1.setObjectName("menu1")
self.menu1_1 = QtWidgets.QMenu(self.menu1)
self.menu1_1.setObjectName("menu1_1")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.action1_1_1 = QtWidgets.QAction(MainWindow)
self.action1_1_1.setObjectName("action1_1_1")
self.action1_1_2 = QtWidgets.QAction(MainWindow)
self.action1_1_2.setObjectName("action1_1_2")
self.menu1_1.addAction(self.action1_1_1)
self.menu1_1.addAction(self.action1_1_2)
self.menu1.addAction(self.menu1_1.menuAction())
self.menubar.addAction(self.menu1.menuAction())
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.menu1.setStatusTip(_translate("MainWindow", "1"))
self.menu1.setTitle(_translate("MainWindow", "1"))
self.menu1_1.setStatusTip(_translate("MainWindow", "1.1"))
self.menu1_1.setTitle(_translate("MainWindow", "1.1"))
self.action1_1_1.setText(_translate("MainWindow", "1.1.1"))
self.action1_1_1.setStatusTip(_translate("MainWindow", "1.1.1"))
self.action1_1_2.setText(_translate("MainWindow", "1.1.2"))
self.action1_1_2.setStatusTip(_translate("MainWindow", "1.1.2"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
<p>Maybe a duplicate question.</p>
<p>How can I show the "1" and "1.1" status tip messages for the menu and submenu?
For the actions "1.1.1" and "1.1.2" there is no problem.</p>
<p>Is this possible without code in QtDesigner?</p>
<p><strong>Edit:</strong> There is a related bug <a href="https://bugreports.qt.io/browse/QTBUG-3810" rel="nofollow noreferrer">here</a>.</p>
| <python><pyqt5><qmenu><qstatusbar> | 2023-11-10 12:42:08 | 1 | 2,427 | Chris P |
77,459,882 | 13,899,026 | How to group by and filter within the groups pandas | <p>I am looking to add an aggregate column to a DataFrame that looks something like this:</p>
<p><code>pd.DataFrame({"col1" : [1, 2, 3, 4, 5], "col2" : ["b", "b", "b", "a", "b"], "col3": [6, 2, 11, 1, 3]})</code></p>
<p>When I aggregate I want to filter on a column value. So for the example DataFrame above, for each row I want to find the mean of the "col1" values of rows that have the same "col2" value and a lower "col3" value than its own.</p>
<p>So for row 0 I want the mean of (2, 5) = 3.5.</p>
<p>My intended result would look like this :</p>
<p><code>pd.DataFrame({"col1" : [1, 2, 3, 4, 5], "col2" : ["b", "b", "b", "a", "b"], "col3": [6, 2, 11, 1, 3], "mean_col" : [3.5, None, 2.666, None, 2.0]})</code></p>
<p>I could do something like this but I would rather not iterate through the df:</p>
<pre><code>mean_vals = []
for index, row in df.iterrows():
# ensure same group
grouped_df = df[df.col2 == row["col2"]]
# apply condition
grouped_and_filtered_df = grouped_df[grouped_df.col3 < row["col3"]]
# aggregation
row_mean = grouped_and_filtered_df.col1.mean()
mean_vals.append(row_mean)
</code></pre>
| <python><pandas><dataframe><group-by><aggregate-functions> | 2023-11-10 12:05:55 | 1 | 441 | Dom McEwen |
77,459,646 | 5,868,293 | How to pivot a pandas dataframe and calculate product of combinations | <p>I have a pandas dataframe that looks like this:</p>
<pre><code>import pandas as pd
pd.DataFrame({
'variable': ['gender','gender', 'age_group', 'age_group'],
'category': ['F','M', 'Young', 'Old'],
'value': [0.6, 0.4, 0.7, 0.3],
})
variable category value
0 gender F 0.6
1 gender M 0.4
2 age_group Young 0.7
3 age_group Old 0.3
</code></pre>
<p>which represents my population. So in my population I have 60% Females, 70% Young etc.</p>
<p>I want to calculate the combinations of gender-age_group and output it in a dataframe that looks like this:</p>
<pre><code>pd.DataFrame({
'gender': ['F', 'F', 'M', 'M'],
'age_group': ['Young', 'Old', 'Young', 'Old'],
'percentage': [0.42, 0.18, 0.28, 0.12]
})
gender age_group percentage
0 F Young 0.42
1 F Old 0.18
2 M Young 0.28
3 M Old 0.12
</code></pre>
<p>which will show that the <code>Young Females</code> in the population are 42% (which comes from <code>0.6*0.7</code>), the <code>Old Males</code> are 12% (which comes from <code>0.4*0.3</code>) etc</p>
<p>How could I do that?</p>
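<p>One possible approach (a sketch): build one small frame per <code>variable</code>, cross-join them, and take the product of the probability columns. <code>how="cross"</code> requires pandas ≥ 1.2.</p>

```python
from functools import reduce

import pandas as pd

df = pd.DataFrame({
    'variable': ['gender', 'gender', 'age_group', 'age_group'],
    'category': ['F', 'M', 'Young', 'Old'],
    'value': [0.6, 0.4, 0.7, 0.3],
})

# One frame per variable, with 'category' renamed to the variable name.
parts = [
    g[['category', 'value']].rename(columns={'category': name,
                                             'value': f'p_{name}'})
    for name, g in df.groupby('variable', sort=False)
]

# Cross-join all parts, then multiply the probability columns.
out = reduce(lambda a, b: a.merge(b, how='cross'), parts)
prob_cols = [c for c in out.columns if c.startswith('p_')]
out['percentage'] = out[prob_cols].prod(axis=1).round(10)
out = out.drop(columns=prob_cols)
```

<p>This generalizes to any number of variables, since <code>reduce</code> keeps cross-joining one part at a time.</p>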
| <python><pandas> | 2023-11-10 11:25:53 | 2 | 4,512 | quant |
77,459,601 | 4,810,328 | Variable context in nested python function | <p>This is probably a stupid question but I was going through the <a href="https://github.com/karpathy/micrograd/blob/master/micrograd/engine.py" rel="nofollow noreferrer">micrograd</a> repo and I came across a nested function that seems confusing to me.</p>
<pre><code>class Value:
""" stores a single scalar value and its gradient """
def __init__(self, data, _children=(), _op=''):
self.data = data
self.grad = 0
# internal variables used for autograd graph construction
self._backward = lambda: None
self._prev = set(_children)
self._op = _op # the op that produced this node, for graphviz / debugging / etc
def __add__(self, other):
other = other if isinstance(other, Value) else Value(other)
out = Value(self.data + other.data, (self, other), '+')
def _backward():
self.grad += out.grad
other.grad += out.grad
out._backward = _backward
return out
</code></pre>
<p>The <code>_backward</code> function defined inside <code>__add__</code> is attached to a newly created Value object. <code>_backward</code> uses <code>self</code> and <code>other</code>. It is clear from the definition where it gets those from, but when we assign the object's <code>_backward</code> to refer to the nested <code>_backward</code>, how is it still able to access the <code>self</code> and <code>other</code> objects?</p>
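<p>For what it's worth, the short answer is that a nested function carries its enclosing variables with it, in closure cells stored on the function object itself, so it does not matter which object the function is later attached to. A tiny illustration with hypothetical names:</p>

```python
class Box:
    pass

def attach(box, n):
    def show():
        # 'n' is read from this function's closure cell,
        # not from 'box' or from any attribute lookup.
        return n
    box.show = show
    return box

b = attach(Box(), 42)
assert b.show() == 42

# The captured variable travels with the function object:
assert b.show.__closure__ is not None
assert b.show.__closure__[0].cell_contents == 42
```

<p>In micrograd, <code>self</code> and <code>other</code> are captured the same way when <code>_backward</code> is defined, so <code>out._backward</code> keeps live references to both operands for the backward pass.</p>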
| <python><autograd> | 2023-11-10 11:20:28 | 1 | 711 | Tarique |
77,459,534 | 2,850,913 | Does Mistral 7b work with Langchain tools? | <p>I am following <a href="https://www.pinecone.io/learn/series/langchain/langchain-tools/" rel="nofollow noreferrer">this tutorial</a> which is the third search result on Google for 'langchain tools'. I am trying to get Mistral 7b Instruct to use a simple circumference calculator tool. I keep getting "Could not parse LLM output" errors. I tried setting 'handle_parsing_errors' to True, but it does not help.</p>
<p>Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="./mistral_7b_instruct/mistral-7b-instruct-v0.1.Q4_K_M.gguf", verbose=True, n_ctx=4000, temperature=0)
from langchain.tools import BaseTool
from math import pi
from typing import Union
class CircumferenceTool(BaseTool):
name = "Circumference calculator"
description = "use this tool when you need to calculate a circumference using the radius of a circle"
def _run(self, radius: Union[int, float]):
return float(radius)*2.0*pi
def _arun(self, radius: int):
raise NotImplementedError("This tool does not support async")
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
conversational_memory = ConversationBufferWindowMemory(
memory_key='chat_history',
k=5,
return_messages=True
)
from langchain.agents import initialize_agent
tools = [CircumferenceTool()]
# initialize agent with tools
agent = initialize_agent(
agent='chat-conversational-react-description',
tools=tools,
llm=llm,
verbose=True,
max_iterations=3,
early_stopping_method='generate',
memory=conversational_memory,
handle_parsing_errors=True
)
agent("can you calculate the circumference of a circle that has a radius of 7.81mm")
</code></pre>
<p>and the output is this:</p>
<pre><code>> Entering new AgentExecutor chain...
▅
ASSISTANT'S RESPONSE
--------------------
json
{
"action": "Circumference calculator",
"action_input": "7.81"
}
Observation: 49.071677249072565
Thought:
llama_print_timings: load time = 445.58 ms
llama_print_timings: sample time = 9.25 ms / 53 runs ( 0.17 ms per token, 5729.11 tokens per second)
llama_print_timings: prompt eval time = 25256.61 ms / 562 tokens ( 44.94 ms per token, 22.25 tokens per second)
llama_print_timings: eval time = 2743.39 ms / 52 runs ( 52.76 ms per token, 18.95 tokens per second)
llama_print_timings: total time = 28164.19 ms
Llama.generate: prefix-match hit
Could not parse LLM output:
Observation: Invalid or incomplete response
Thought:
llama_print_timings: load time = 445.58 ms
llama_print_timings: sample time = 0.17 ms / 1 runs ( 0.17 ms per token, 5780.35 tokens per second)
llama_print_timings: prompt eval time = 8081.89 ms / 172 tokens ( 46.99 ms per token, 21.28 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 8108.44 ms
Llama.generate: prefix-match hit
Could not parse LLM output:
Observation: Invalid or incomplete response
Thought:
llama_print_timings: load time = 445.58 ms
llama_print_timings: sample time = 0.17 ms / 1 runs ( 0.17 ms per token, 5780.35 tokens per second)
llama_print_timings: prompt eval time = 5237.91 ms / 112 tokens ( 46.77 ms per token, 21.38 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 5254.49 ms
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[35], line 43
31 # initialize agent with tools
32 agent = initialize_agent(
33 agent='chat-conversational-react-description',
34 tools=tools,
(...)
40 handle_parsing_errors=True
41 )
---> 43 agent("can you calculate the circumference of a circle that has a radius of 7.81mm")
File ~/anaconda3/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
308 except BaseException as e:
309 run_manager.on_chain_error(e)
--> 310 raise e
311 run_manager.on_chain_end(outputs)
312 final_outputs: Dict[str, Any] = self.prep_outputs(
313 inputs, outputs, return_only_outputs
314 )
File ~/anaconda3/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
297 run_manager = callback_manager.on_chain_start(
298 dumpd(self),
299 inputs,
300 name=run_name,
301 )
302 try:
303 outputs = (
--> 304 self._call(inputs, run_manager=run_manager)
305 if new_arg_supported
306 else self._call(inputs)
307 )
308 except BaseException as e:
309 run_manager.on_chain_error(e)
File ~/anaconda3/lib/python3.10/site-packages/langchain/agents/agent.py:1190, in AgentExecutor._call(self, inputs, run_manager)
1188 iterations += 1
1189 time_elapsed = time.time() - start_time
-> 1190 output = self.agent.return_stopped_response(
1191 self.early_stopping_method, intermediate_steps, **inputs
1192 )
1193 return self._return(output, intermediate_steps, run_manager=run_manager)
File ~/anaconda3/lib/python3.10/site-packages/langchain/agents/agent.py:703, in Agent.return_stopped_response(self, early_stopping_method, intermediate_steps, **kwargs)
701 new_inputs = {"agent_scratchpad": thoughts, "stop": self._stop}
702 full_inputs = {**kwargs, **new_inputs}
--> 703 full_output = self.llm_chain.predict(**full_inputs)
704 # We try to extract a final answer
705 parsed_output = self.output_parser.parse(full_output)
File ~/anaconda3/lib/python3.10/site-packages/langchain/chains/llm.py:298, in LLMChain.predict(self, callbacks, **kwargs)
283 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
284 """Format prompt with kwargs and pass to LLM.
285
286 Args:
(...)
296 completion = llm.predict(adjective="funny")
297 """
--> 298 return self(kwargs, callbacks=callbacks)[self.output_key]
File ~/anaconda3/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
308 except BaseException as e:
309 run_manager.on_chain_error(e)
--> 310 raise e
311 run_manager.on_chain_end(outputs)
312 final_outputs: Dict[str, Any] = self.prep_outputs(
313 inputs, outputs, return_only_outputs
314 )
File ~/anaconda3/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
297 run_manager = callback_manager.on_chain_start(
298 dumpd(self),
299 inputs,
300 name=run_name,
301 )
302 try:
303 outputs = (
--> 304 self._call(inputs, run_manager=run_manager)
305 if new_arg_supported
306 else self._call(inputs)
307 )
308 except BaseException as e:
309 run_manager.on_chain_error(e)
File ~/anaconda3/lib/python3.10/site-packages/langchain/chains/llm.py:108, in LLMChain._call(self, inputs, run_manager)
103 def _call(
104 self,
105 inputs: Dict[str, Any],
106 run_manager: Optional[CallbackManagerForChainRun] = None,
107 ) -> Dict[str, str]:
--> 108 response = self.generate([inputs], run_manager=run_manager)
109 return self.create_outputs(response)[0]
File ~/anaconda3/lib/python3.10/site-packages/langchain/chains/llm.py:117, in LLMChain.generate(self, input_list, run_manager)
111 def generate(
112 self,
113 input_list: List[Dict[str, Any]],
114 run_manager: Optional[CallbackManagerForChainRun] = None,
115 ) -> LLMResult:
116 """Generate LLM result from inputs."""
--> 117 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
118 callbacks = run_manager.get_child() if run_manager else None
119 if isinstance(self.llm, BaseLanguageModel):
File ~/anaconda3/lib/python3.10/site-packages/langchain/chains/llm.py:179, in LLMChain.prep_prompts(self, input_list, run_manager)
177 for inputs in input_list:
178 selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
--> 179 prompt = self.prompt.format_prompt(**selected_inputs)
180 _colored_text = get_colored_text(prompt.to_string(), "green")
181 _text = "Prompt after formatting:\n" + _colored_text
File ~/anaconda3/lib/python3.10/site-packages/langchain/prompts/chat.py:339, in BaseChatPromptTemplate.format_prompt(self, **kwargs)
330 def format_prompt(self, **kwargs: Any) -> PromptValue:
331 """
332 Format prompt. Should return a PromptValue.
333 Args:
(...)
337 PromptValue.
338 """
--> 339 messages = self.format_messages(**kwargs)
340 return ChatPromptValue(messages=messages)
File ~/anaconda3/lib/python3.10/site-packages/langchain/prompts/chat.py:588, in ChatPromptTemplate.format_messages(self, **kwargs)
580 elif isinstance(
581 message_template, (BaseMessagePromptTemplate, BaseChatPromptTemplate)
582 ):
583 rel_params = {
584 k: v
585 for k, v in kwargs.items()
586 if k in message_template.input_variables
587 }
--> 588 message = message_template.format_messages(**rel_params)
589 result.extend(message)
590 else:
File ~/anaconda3/lib/python3.10/site-packages/langchain/prompts/chat.py:99, in MessagesPlaceholder.format_messages(self, **kwargs)
97 value = kwargs[self.variable_name]
98 if not isinstance(value, list):
---> 99 raise ValueError(
100 f"variable {self.variable_name} should be a list of base messages, "
101 f"got {value}"
102 )
103 for v in value:
104 if not isinstance(v, BaseMessage):
ValueError: variable agent_scratchpad should be a list of base messages, got ▅
ASSISTANT'S RESPONSE
--------------------
json
{
"action": "Circumference calculator",
"action_input": "7.81"
}
Observation: 49.071677249072565
Thought:Could not parse LLM output:
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
Observation: Invalid or incomplete response
Thought:
I now need to return a final answer based on the previous steps:
</code></pre>
<p>EDIT:</p>
<p>I am thinking the best way might be to look at the observations and extract the answer - in my actual application the answer will have a specific format that makes it easy to detect using regex.</p>
| <python><python-3.x><langchain><large-language-model> | 2023-11-10 11:08:33 | 1 | 750 | tail_recursion |
77,459,456 | 10,950,656 | Python multiple levels Logging Configuration Issue | <p>I'm trying to set up logging in my Python script using the logging module, but I'm encountering issues with the multiple level configuration. I want to log errors, warnings, and info messages into separate files. However, the current configuration only seems to create the info log. Here's a snippet of my code:</p>
<pre><code>import logging
import logging as infoLog
import logging as errorLog
import logging as warningLog
errorLog.basicConfig(filename='error.log', level=logging.ERROR, format='%(asctime)s - ProFilCar - %(levelname)s - %(message)s')
warningLog.basicConfig(filename='warning.log', level=logging.WARNING, format='%(asctime)s - ProFilCar - %(levelname)s - %(message)s')
infoLog.basicConfig(filename='info.log', level=logging.INFO, format='%(asctime)s - ProFilCar - %(levelname)s - %(message)s')
</code></pre>
<p>I expected this setup to create three separate log files for errors, warnings, and info messages, but it only generates the info log.</p>
<p>I appreciate your help!</p>
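<p>For reference, <code>basicConfig</code> only takes effect on the first call (and the aliased imports all point at the same <code>logging</code> module), so only one file ever gets configured. A sketch of the usual fix, one <code>FileHandler</code> per file on a single logger (writing into a temporary directory here for illustration):</p>

```python
import logging
import os
import tempfile

log_dir = tempfile.mkdtemp()
fmt = logging.Formatter('%(asctime)s - ProFilCar - %(levelname)s - %(message)s')

logger = logging.getLogger('ProFilCar')
logger.setLevel(logging.INFO)

# Each handler writes to its own file and drops records below its level.
for filename, level in [('info.log', logging.INFO),
                        ('warning.log', logging.WARNING),
                        ('error.log', logging.ERROR)]:
    handler = logging.FileHandler(os.path.join(log_dir, filename))
    handler.setLevel(level)
    handler.setFormatter(fmt)
    logger.addHandler(handler)

logger.info('info message')
logger.warning('warning message')
logger.error('error message')
```

<p>Note that levels are cumulative: <code>warning.log</code> will also contain errors. If each file should hold exactly one level, attach a <code>logging.Filter</code> that checks <code>record.levelno</code> instead of relying on the handler level.</p>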
| <python><logging> | 2023-11-10 10:54:26 | 0 | 481 | Maya_Cent |
77,459,431 | 10,771,559 | How can I add a contour plot to a scatterplot? | <p>I have created a scatterplot of the first principal component versus the second principal component, coloured by a categorical variable using the code below:</p>
<pre><code>sns.scatterplot(x=dataframe['principal component 1'], y=dataframe['principal component 2'], hue=dataframe['categorical variable'])
</code></pre>
<p>I want to add a contour plot, to get a graph that looks like this:</p>
<p><a href="https://i.sstatic.net/EUNMj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EUNMj.png" alt="enter image description here" /></a></p>
<p>How can I do this?</p>
| <python><seaborn> | 2023-11-10 10:50:45 | 1 | 578 | Niam45 |
77,459,386 | 15,456,681 | How to implement nested for loops with branches efficiently in JAX | <p>I want to reimplement in JAX a function that loops over a 2D array and, based on conditions, modifies the output array at an index that is not necessarily the current iteration index. Currently I implement this via repeated use of <code>jnp.where</code> for each condition separately, but the function is ~4x slower than the numba implementation on CPU (on GPU it is ~10x faster), which I suspect is because I iterate over the whole array again for every condition.</p>
<p>The numba implementation is as follows:</p>
<pre><code>from jax.config import config
config.update("jax_enable_x64", True)
import jax
import jax.numpy as jnp
import numpy as np
import numba as nb
rng = np.random.default_rng()
@nb.njit
def raytrace_np(ir, dx, dy):
assert ir.ndim == 2
n, m = ir.shape
assert ir.shape == dx.shape == dy.shape
output = np.zeros_like(ir)
for i in range(ir.shape[0]):
for j in range(ir.shape[1]):
dx_ij = dx[i, j]
dy_ij = dy[i, j]
dxf_ij = np.floor(dx_ij)
dyf_ij = np.floor(dy_ij)
ir_ij = ir[i, j]
index0 = i + int(dyf_ij)
index1 = j + int(dxf_ij)
if 0 <= index0 <= n - 1 and 0 <= index1 <= m - 1:
output[index0, index1] += (
ir_ij * (1 - (dx_ij - dxf_ij)) * (1 - (dy_ij - dyf_ij))
)
if 0 <= index0 <= n - 1 and 0 <= index1 + 1 <= m - 1:
output[index0, index1 + 1] += (
ir_ij * (dx_ij - dxf_ij) * (1 - (dy_ij - dyf_ij))
)
if 0 <= index0 + 1 <= n - 1 and 0 <= index1 <= m - 1:
output[index0 + 1, index1] += (
ir_ij * (1 - (dx_ij - dxf_ij)) * (dy_ij - dyf_ij)
)
if 0 <= index0 + 1 <= n - 1 and 0 <= index1 + 1 <= m - 1:
output[index0 + 1, index1 + 1] += (
ir_ij * (dx_ij - dxf_ij) * (dy_ij - dyf_ij)
)
return output
</code></pre>
<p>and my current jax reimplementation is:</p>
<pre><code>@jax.jit
def raytrace_jax(ir, dx, dy):
assert ir.ndim == 2
n, m = ir.shape
assert ir.shape == dx.shape == dy.shape
output = jnp.zeros_like(ir)
dxfloor = jnp.floor(dx)
dyfloor = jnp.floor(dy)
dxfloor_int = dxfloor.astype(jnp.int64)
dyfloor_int = dyfloor.astype(jnp.int64)
meshyfloor = dyfloor_int + jnp.arange(n)[:, None]
meshxfloor = dxfloor_int + jnp.arange(m)[None]
validx = (meshxfloor >= 0) & (meshxfloor <= m - 1)
validy = (meshyfloor >= 0) & (meshyfloor <= n - 1)
validx2 = (meshxfloor + 1 >= 0) & (meshxfloor + 1 <= m - 1)
validy2 = (meshyfloor + 1 >= 0) & (meshyfloor + 1 <= n - 1)
validxy = validx & validy
validx2y = validx2 & validy
validxy2 = validx & validy2
validx2y2 = validx2 & validy2
dx_dxfloor = dx - dxfloor
dy_dyfloor = dy - dyfloor
output = output.at[
jnp.where(validxy, meshyfloor, 0), jnp.where(validxy, meshxfloor, 0)
].add(
jnp.where(validxy, ir * (1 - dx_dxfloor) * (1 - dy_dyfloor), 0)
)
output = output.at[
jnp.where(validx2y, meshyfloor, 0),
jnp.where(validx2y, meshxfloor + 1, 0),
].add(jnp.where(validx2y, ir * dx_dxfloor * (1 - dy_dyfloor), 0))
output = output.at[
jnp.where(validxy2, meshyfloor + 1, 0),
jnp.where(validxy2, meshxfloor, 0),
].add(jnp.where(validxy2, ir * (1 - dx_dxfloor) * dy_dyfloor, 0))
output = output.at[
jnp.where(validx2y2, meshyfloor + 1, 0),
jnp.where(validx2y2, meshxfloor + 1, 0),
].add(jnp.where(validx2y2, ir * dx_dxfloor * dy_dyfloor, 0))
return output
</code></pre>
<p>Test and timings:</p>
<pre><code>shape = 2000, 2000
ir = rng.random(shape)
dx = (rng.random(shape) - 0.5) * 5
dy = (rng.random(shape) - 0.5) * 5
_raytrace_np = raytrace_np(ir, dx, dy)
_raytrace_jax = raytrace_jax(ir, dx, dy).block_until_ready()
assert np.allclose(_raytrace_np, _raytrace_jax)
%timeit raytrace_np(ir, dx, dy)
%timeit raytrace_jax(ir, dx, dy).block_until_ready()
</code></pre>
<p>Output:</p>
<pre><code>14.3 ms ± 84.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
62.9 ms ± 187 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>So is there a way to implement this algorithm in jax with performance more comparable to the numba implementation?</p>
| <python><jax> | 2023-11-10 10:43:37 | 1 | 3,592 | Nin17 |
77,459,326 | 5,852,692 | Creating an autoincrement column via sqlalchemy | <p>I am using a Postgresql database via sqlalchemy orm. The table is really basic:</p>
<pre><code>import sqlalchemy as sa
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
class Network(Base):
__tablename__ = 'networks'
id: Mapped[int] = mapped_column(sa.Integer, primary_key=True)
name: Mapped[str] = mapped_column(sa.String(60))
type: Mapped[str] = mapped_column(sa.String(60))
version: Mapped[int] = mapped_column(sa.Integer)
</code></pre>
<p>I would like the <code>version</code> column to check whether a row with the same <code>name</code> and <code>type</code> already exists and, if so, to increment the previous <code>version</code> by 1.</p>
<p>So imagine my table already has following rows:</p>
<pre><code>id name type version
0 foo big 1
1 bar big 1
</code></pre>
<p>If I insert one more <code>(foo, big)</code> into the table, then it should automatically become:</p>
<pre><code>id name type version
0 foo big 1
1 bar big 1
2 foo big 2
</code></pre>
<p>Is something like this possible? BTW, it should also get the default version 1 if no row with the same <code>name</code> and <code>type</code> exists.</p>
| <python><postgresql><sqlalchemy><auto-increment> | 2023-11-10 10:35:53 | 1 | 1,588 | oakca |
77,459,235 | 11,487,973 | Guix Error when trying to make sphinx documentation | <p>Good afternoon. The OS used is Guix</p>
<pre><code>guix --version
(GNU Guix) 938a47c86d7bea785f33f42834c5c1f3dfa594b0
guix package --list-installed
make 4.3 out /gnu/store/c5kpgdinijrqwkx63q4zyqcwclkn5b2s-make-4.3
git 2.41.0 out /gnu/store/57bpibp6jay9q11xhasw75cclkyq9d6h-git-2.41.0
texinfo 7.0.3 out /gnu/store/ri8lw41rpi24av3gy2b779iw790a9nln-texinfo-7.0.3
translate-shell 0.9.7.1 out /gnu/store/g5ki8alshcbv8nxd912vch1n089hwh36-translate-shell-0.9.7.1
groff 1.22.4 out /gnu/store/4swp36jfinc60sfhm1q3mxj2h2mpnc8z-groff-1.22.4
roffit 0.12-1.b59e6c855 out /gnu/store/m5hxiswy6sl819va2axjwivgyapjfpyc-roffit-0.12-1.b59e6c855
firefox 119.0 out /gnu/store/phagnzcwb9nxfqbmm65284jrvhlf73s4-firefox-119.0
gnome-shell-extensions 42.3 out /gnu/store/4f2qcnhmb9g3l2500m28vnblnfw9mg03-gnome-shell-extensions-42.3
python 3.10.7 out /gnu/store/i2gnz4m9rpixbqrmd4x46r1xnh9adbaa-python-3.10.7
python-sphinx 5.1.1 out /gnu/store/560q0k85lg9kfvpspc2fmmf9ng7ki6dd-python-sphinx-5.1.1
gcc-toolchain 13.2.0 out /gnu/store/m9lkrb2y14jwgx56vdhy9cs6yngm8wjc-gcc-toolchain-13.2.0
python-pip 23.1 out /gnu/store/724pw1f9m1k5fsxhzfmsfjvc5ivkhm4p-python-pip-23.1
python-sphinx-alabaster-theme 0.7.12 out /gnu/store/cjd7nq31lvlpyjmz2z61l41ga2dhigg5-python-sphinx-alabaster-theme-0.7.12
python-nbsphinx 0.8.8 out /gnu/store/brgppd3ssnc45swk3p6k6f8wgll6ryd5-python-nbsphinx-0.8.8
perl 5.36.0 out /gnu/store/h0hsl8ncw6d6p6274q5xzh67g2biaia9-perl-5.36.0
</code></pre>
<p>I ran into difficulty building the Linux kernel documentation. I cloned the repositories <strong><a href="https://github.com/linux-kernel-labs/linux.git" rel="nofollow noreferrer">https://github.com/linux-kernel-labs/linux.git</a></strong> and <strong><a href="https://github.com/torvalds/linux.git" rel="nofollow noreferrer">https://github.com/torvalds/linux.git</a></strong>.</p>
<p>Building according to the manuals included in the repositories was not successful. On Ubuntu I created a Python environment and built the Python and Sphinx documentation:</p>
<pre><code>make VENVDIR=.../python/env/ venv
source .../python/env/bin/activate
(env) $ python -m pip install -r requirements.txt
# Build the documentation
(env) $ sphinx-build -bhtml .../dirRstFls .../dirOutDocHtml
</code></pre>
<p>I'm interested in the possibility of building documentation for the Apache web server and for Git on Guix; on Ubuntu I used a Java-based <code>build.sh</code> script to build the Apache documentation, and asciidoc to build the Git manual. I saw the httpd package in Guix. I have translated the Guix documentation into my language, Russian (part of the translation already existed), but without documentation for the Linux kernel my learning of programming has reached a dead end. I also translated the documentation for Python 3.12 and Sphinx. From what I learned from a quick read of the translated manuals, I was able to build the Linux kernel documentation on Ubuntu from the repository <strong><a href="https://github.com/torvalds/linux.git" rel="nofollow noreferrer">https://github.com/torvalds/linux.git</a></strong>, but not from <strong><a href="https://github.com/linux-kernel-labs/linux.git" rel="nofollow noreferrer">https://github.com/linux-kernel-labs/linux.git</a></strong>. It seems that to build the kernel documentation on Ubuntu I would need to study how Ubuntu snap packages, Python, Sphinx, Perl and so on work together. Are there perhaps GNU guides to Linux kernel development in Texinfo format?
Thank you</p>
| <python><documentation><python-sphinx><guix> | 2023-11-10 10:20:26 | 0 | 449 | uanr81 |
77,459,198 | 5,612,605 | paramiko - login to a host not requiring a password | <p>How can I log in to an SSH server if the server does not demand a password?</p>
<p>On the (here) powershell the following works fine:</p>
<pre class="lang-bash prettyprint-override"><code>ssh root@myMachine
root@myMachine$:
</code></pre>
<p>no password asked, just simple login and hello from the machine.</p>
<p>My python code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import paramiko
client = paramiko.SSHClient()
print("connecting...")
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname="xxx.xxx.xxx.xxx", port=XX)
# Error:
# > raise SSHException("No authentication methods available")
# E paramiko.ssh_exception.SSHException: No authentication methods available
</code></pre>
<p>The error arises from a case selection in <code>paramiko/client.py</code>, which simply has no branch for the case where no password is provided and the server accepts that. If I provide a password, the authentication fails.</p>
<p><strong>How can I tell paramiko to just login and don't bother for the password?</strong></p>
<p>ok... figured something out... You can go via Transport.</p>
<pre class="lang-py prettyprint-override"><code>t = paramiko.Transport(("localhost", 2022))
t.connect()
t.auth_none("root")
x = t.open_session()
noting_returned = x.exec_command("touch /tmp/huhu.txt")
</code></pre>
<p>However, the channel closes after each command and I can't see how to retrieve stdout/stderr, the return value, etc.</p>
<ul>
<li><strong>Can I use the existing transport as source for an SSHClient?</strong></li>
<li><strong>Would this be the intended way to go?</strong></li>
</ul>
| <python><ssh><paramiko> | 2023-11-10 10:13:56 | 1 | 3,651 | Cutton Eye |
77,459,166 | 1,409,644 | Multiple inserts fail with IntegrityError due to failed constraint | <p>With Python (SQLAlchemy) and MariaDB, I am trying to perform three inserts, whereby a constraint for the third insert requires the first two to have been successful. However, currently the third insert fails unless the first two inserts have been committed. I would like to perform all three inserts within one transaction and rollback if an error occurs.</p>
<p>The code looks like this:</p>
<pre><code>try:
ulcdb.add_person(uid, lang)
ulcdb.add_group(gid, gid_number)
ulcdb.add_membership(uid, gid, 'owner')
ulcdb.commit()
except Exception as e:
ulcdb.rollback()
logger.error(str(e))
raise
</code></pre>
<p>whereby <code>ulcdb</code> is an instance of the following class:</p>
<pre><code>class UserLifecycleDatabase:
def __init__(self):
engine = sqlalchemy.create_engine(url)
Session = sqlalchemy.orm.sessionmaker(bind=engine)
self.session = Session()
def commit(self):
self.session.commit()
def rollback(self):
self.session.rollback()
def add_person(self, uid, lang):
self.session.add(Person(uid=uid, lang=lang))
...
</code></pre>
<p>If I commit the first two inserts, the third is also successful. What am I doing wrong?</p>
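<p>For what it's worth, the all-or-nothing behaviour described above is exactly what a single transaction should give: uncommitted parent rows are normally visible to a dependent insert's foreign-key check within the same transaction. A minimal stand-alone sketch of that semantics, using the stdlib <code>sqlite3</code> module purely for illustration (not SQLAlchemy or MariaDB):</p>

```python
import sqlite3

# Illustration only: inside one transaction, the uncommitted parent rows
# ARE visible to the child insert's foreign-key check, so all three
# inserts can run before a single commit.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE person (uid TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE grp (gid TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE membership (
    uid TEXT REFERENCES person(uid),
    gid TEXT REFERENCES grp(gid))""")

try:
    conn.execute("INSERT INTO person VALUES ('alice')")
    conn.execute("INSERT INTO grp VALUES ('admins')")
    # Succeeds even though the two rows above are not yet committed:
    conn.execute("INSERT INTO membership VALUES ('alice', 'admins')")
    conn.commit()
except sqlite3.Error:
    conn.rollback()
    raise

print(conn.execute("SELECT uid, gid FROM membership").fetchone())
# -> ('alice', 'admins')
```

<p>With SQLAlchemy specifically, <code>session.flush()</code> emits the pending INSERTs inside the open transaction without committing, which can be worth trying between the <code>add_*</code> calls when diagnosing this.</p>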
| <python><sql><sqlalchemy><transactions><mariadb> | 2023-11-10 10:07:41 | 1 | 469 | loris |
77,459,159 | 265,521 | Remove all elements matching a predicate from a list in-place | <p>How can I remove all elements matching a predicate from a list <em><strong>IN-PLACE</strong></em>?</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T")
def remove_if(a: list[T], predicate: Callable[[T], bool]):
# TODO: Fill this in.
# Test:
a = [1, 2, 3, 4, 5]
remove_if(a, lambda x: x % 2 == 0)
assert(a == [1, 3, 5])
</code></pre>
<p><strong>The answer should be O(N); not O(N^2).</strong></p>
<p>Here is a similar question but it does not require <em><strong>IN-PLACE</strong></em> removal: <a href="https://stackoverflow.com/questions/1866343/removing-an-element-from-a-list-based-on-a-predicate">Removing an element from a list based on a predicate</a></p>
<p>In Rust this is <a href="https://doc.rust-lang.org/std/vec/struct.Vec.html#method.retain" rel="nofollow noreferrer"><code>Vec::retain()</code></a>. In C++ it is <a href="https://en.cppreference.com/w/cpp/algorithm/remove" rel="nofollow noreferrer"><code>std::remove_if()</code></a> (plus <code>erase()</code>).</p>
<p>Is there an equivalent in Python?</p>
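<p>For comparison, a couple of common in-place O(N) patterns in Python (a sketch, not claiming these are the only idioms): slice assignment keeps the same list object, and a two-pointer compaction mirrors what <code>Vec::retain</code> / <code>std::remove_if</code> do:</p>

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def remove_if(a: list[T], predicate: Callable[[T], bool]) -> None:
    """O(N) in-place removal: slice assignment keeps the same list object,
    so any aliases of `a` see the change."""
    a[:] = [x for x in a if not predicate(x)]

def remove_if_compact(a: list[T], predicate: Callable[[T], bool]) -> None:
    """Two-pointer variant with O(1) extra space, like C++ remove_if + erase."""
    write = 0
    for x in a:
        if not predicate(x):
            a[write] = x  # safe: write never overtakes the read position
            write += 1
    del a[write:]

a = [1, 2, 3, 4, 5]
remove_if(a, lambda x: x % 2 == 0)
print(a)  # -> [1, 3, 5]
```

<p>The slice-assignment form allocates a temporary list; the two-pointer form avoids that at the cost of a few more lines.</p>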
| <python> | 2023-11-10 10:06:48 | 1 | 98,971 | Timmmm |
77,459,039 | 15,456,681 | How to use *args instead of explicit function signature in a numba overload | <p>I have many (~169) functions (with ~45 unique signatures) that I am overloading with numba. I would like to add some additional logic that is the same for all the functions, so I was trying to use a decorator, but it seems that the 'inner' function in the decorator has to have the same signature as the function it is decorating.</p>
<p>This is a working example for one specific signature:</p>
<pre class="lang-py prettyprint-override"><code>from numba import njit, extending, objmode
import warnings
def func(a, b, c):
return a + b + c
def warn(func):
def inner(a, b, c):
result = func(a, b, c)
if not result:
with objmode:
warnings.warn("Result is zero")
return result
return inner
@extending.overload(func)
def func_overload(a, b, c):
@warn
@extending.register_jitable
def impl(a, b, c):
return a + b - c
return impl
@njit
def func_nb(a, b, c):
return func(a, b, c)
args = 1, 2, 3
print(func_nb(*args), func(*args))
</code></pre>
<p>Output:</p>
<pre><code>0 6
/var/folders/v7/vq2l7f812yd450mn3wwmrhtc0000gn/T/ipykernel_15095/773019809.py:36: UserWarning: Result is zero
print(func_nb(*args), func(*args))
</code></pre>
<p>but I would like a single decorator to work for all of my signatures. I've tried using <code>*args</code>:</p>
<pre><code>def warn(func):
def inner(*args):
result = func(*args)
if not result:
with objmode:
warnings.warn("Result is zero")
return result
return inner
</code></pre>
<p>but I get the following error:</p>
<pre><code>/var/folders/v7/vq2l7f812yd450mn3wwmrhtc0000gn/T/ipykernel_15488/784317548.py:38: NumbaPendingDeprecationWarning: Code using Numba extension API maybe depending on 'old_style' error-capturing, which is deprecated and will be replaced by 'new_style' in a future release. See details at https://numba.readthedocs.io/en/latest/reference/deprecation.html#deprecation-of-old-style-numba-captured-errors
Exception origin:
File "/Users/chris/anaconda3/lib/python3.11/site-packages/numba/core/typing/templates.py", line 577, in _validate_sigs
a = ty_args[:ty_args.index(b[-1]) + 1]
~^^^^
return func(a, b, c)
</code></pre>
<pre><code>---------------------------------------------------------------------------
TypingError Traceback (most recent call last)
Untitled-2.ipynb Cell 5 line 4
44 args = 1, 2, 3
45 args2 = *args, 4
---> 46 print(func_nb(*args), func(*args))
File ~/anaconda3/lib/python3.11/site-packages/numba/core/dispatcher.py:468, in _DispatcherBase._compile_for_args(self, *args, **kws)
464 msg = (f"{str(e).rstrip()} \n\nThis error may have been caused "
465 f"by the following argument(s):\n{args_str}\n")
466 e.patch_message(msg)
--> 468 error_rewrite(e, 'typing')
469 except errors.UnsupportedError as e:
470 # Something unsupported is present in the user code, add help info
471 error_rewrite(e, 'unsupported_error')
File ~/anaconda3/lib/python3.11/site-packages/numba/core/dispatcher.py:409, in _DispatcherBase._compile_for_args.<locals>.error_rewrite(e, issue_type)
407 raise e
408 else:
--> 409 raise e.with_traceback(None)
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<function func at 0x104455e40>) found for signature:
>>> func(int64, int64, int64)
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload in function 'func_overload': File: ../../../../../../var/folders/v7/vq2l7f812yd450mn3wwmrhtc0000gn/T/ipykernel_15488/784317548.py: Line 20.
With argument(s): '(int64, int64, int64)':
Rejected as the implementation raised a specific error:
IndexError: list index out of range
raised from /Users/chris/anaconda3/lib/python3.11/site-packages/numba/core/typing/templates.py:577
During: resolving callee type: Function(<function func at 0x104455e40>)
During: typing of call at /var/folders/v7/vq2l7f812yd450mn3wwmrhtc0000gn/T/ipykernel_15488/784317548.py (38)
File "../../../../../../var/folders/v7/vq2l7f812yd450mn3wwmrhtc0000gn/T/ipykernel_15488/784317548.py", line 38:
<source missing, REPL/exec in use?>
</code></pre>
<p>and with <a href="https://docs.python.org/3/library/functions.html#exec" rel="nofollow noreferrer">exec</a>:</p>
<pre><code>_tmpl = """
def inner({0}):
result = func({0})
if not result:
with objmode:
warnings.warn("Result is zero")
return result
"""
def warn(args):
def warn_args(func):
exec(_tmpl.format(args))
return locals()["inner"]
return warn_args
</code></pre>
<p>and then changing the call on line 18 from <code>@warn</code> to <code>@warn("a, b, c")</code>, but it gets stuck in a recursive loop.
So is there any way to do this without having to add this logic to the functions individually, or without making a separate decorator for each signature?</p>
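<p>As an aside on the <code>exec</code> attempt (numba left out of this sketch): the recursion most likely comes from the generated <code>inner</code> looking up <code>func</code> in the module globals rather than in the decorator's scope, since functions created by <code>exec</code> don't close over the caller's locals. Passing an explicit namespace to <code>exec</code> avoids that; a minimal pure-Python sketch:</p>

```python
import warnings

_tmpl = '''
def inner({0}):
    result = func({0})
    if not result:
        warnings.warn("Result is zero")
    return result
'''

def warn(argnames):
    def warn_args(func):
        # Give exec an explicit namespace so the name `func` inside the
        # generated `inner` binds to the function being decorated, not to a
        # same-named module-level global (binding to the global is what can
        # send the overload into a recursive loop).
        ns = {"func": func, "warnings": warnings}
        exec(_tmpl.format(argnames), ns)
        return ns["inner"]
    return warn_args

@warn("a, b, c")
def add3(a, b, c):
    return a + b - c

print(add3(1, 2, 3))  # -> 0, and emits the "Result is zero" UserWarning
```

<p>Whether this plays well with <code>register_jitable</code>/<code>objmode</code> would still need checking on the numba side; the sketch only shows the namespace fix.</p>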
| <python><numba> | 2023-11-10 09:45:13 | 1 | 3,592 | Nin17 |
77,458,854 | 4,927,847 | How to determine if a row is striked in Excel sheet using OpenPyxl | <p>I'm reading an excel file using openpyxl and I face the following issue:</p>
<p>Some cells appear to be striked in excel and their cell strike property is True, I succeed to deal with them using Openpyxl</p>
<p>But some cells appear struck through in Excel even though the cell's strike property is False.
This is because the row's strike property is set.</p>
<p>My problem is that I can't access the strike property of a row using openpyxl.</p>
<pre><code> sheet = book[sheet_name]
for row in sheet[0:sheet.max_row]:
if sheet.row_dimensions[row[0].row].font.strike:
break
</code></pre>
<p>My code never breaks, although some rows are struck through.</p>
<p>What is also strange is that when I look at the row_dimensions of my sheet, I don't see an entry per row but only for some rows, and there is no entry for the rows that are struck through.
I don't understand on what criteria some rows have a corresponding row_dimensions entry and some don't.</p>
<p>Do you know how to proceed, or have an idea about this?</p>
| <python><excel><openpyxl> | 2023-11-10 09:12:24 | 1 | 421 | Quicky |
77,458,642 | 7,119,501 | How to convert NONEs to an empty string in a pyspark dataframe when it has nested columns? | <p>I have a dataframe with nested columns like below:</p>
<pre><code>from pyspark.sql.types import (StructType, StructField, StringType,
                               IntegerType, DoubleType, ArrayType)

df_schema = StructType([
StructField("response", StringType(), True),
StructField("id", StringType(), True),
StructField("data", StructType([
StructField("type", StringType(), True),
StructField("record", StringType(), True),
StructField("enteredCycle", StringType(), True),
StructField("timestamp", StringType(), True),
StructField("modifiedById", StringType(), True),
StructField("years", IntegerType(), True),
StructField("attributes", StructType([
StructField("mass", ArrayType(DoubleType()), True),
StructField("pace", ArrayType(IntegerType()), True),
StructField("reflex", ArrayType(StringType()), True)
]))
]))
])
</code></pre>
<p>I am getting this dataframe as a resultant of an API call like below.</p>
<pre><code>def api_call(parameter: str):
response = session.get(f"https:url={parameter}", headers=header_data)
return json.dumps(json.loads(response.text))
udf_call = udf(lambda z:api_call(z),StringType())
</code></pre>
<p>I am adding this UDF call to one of my dataframe as an extra column like below:</p>
<pre><code>df = inputDf.withColumn("api_response", udf_call(col("employee_id")))
# Creating an empty df
# I have multiple api calls. So I am appending all of them into
# one single dataframe and then writing all of them at once
# rather than write one record at a time (I have 500,00 records)
empty_rdd = spark.sparkContext.emptyRDD()
empty_df = spark.createDataFrame(empty_rdd, df_schema)
</code></pre>
<p>Applying schema:</p>
<pre><code>json_df = (
    df
    .withColumn("main", from_json(col('api_response'), df_schema))
    .select('response', 'id', 'data.type', 'data.record', ...., 'data.attributes.reflex')
)
empty_df = empty_df.unionAll(json_df)
</code></pre>
<p>The problem appears when I try to ingest the dataframe as a table:</p>
<pre><code>empty_df.write.mode('overwrite').format('parquet').saveAsTable('dbname.tablename')
</code></pre>
<p>I see this error:</p>
<pre><code>Job aborted due to stage failure: Task 24 in stage 161.0 failed 4 times, most recent failure: Lost task 24.3 in stage 161.0 (TID 1119) (100.66.2.91 executor 0): org.apache.spark.api.python.PythonException: 'TypeError: can only concatenate str (not "NoneType") to str', from year_loads.py, line 21. Full traceback below:
Traceback (most recent call last):
File "<year_loads.py>", line 21, in <lambda>
File "<year_loads.py>", line 44, in api_call
TypeError: can only concatenate str (not "NoneType") to str
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:642)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$2.read(PythonUDFRunner.scala:86)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$2.read(PythonUDFRunner.scala:68)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:595)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:489)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:511)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage8.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:757)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
at org.apache.spark.ContextAwareIterator.hasNext(ContextAwareIterator.scala:39)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1209)
at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1215)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:442)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.writeIteratorToStream(PythonUDFRunner.scala:53)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:521)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2241)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:313)
</code></pre>
<p>My assumption is that there are NONEs in the output that cannot be appended to StringType(). So I implemented a logic to convert all NONEs to an empty string like below. I got this implementation from another stackoverflow post <a href="https://stackoverflow.com/questions/33287886/replace-empty-strings-with-none-null-values-in-dataframe">here</a> and changed it according to my requirement.</p>
<pre><code>from pyspark.sql.functions import col, when

# Get only String Columns
def replace_none_with_empty_str(df: DataFrame):
string_fields = []
for i, f in enumerate(df.schema.fields):
if isinstance(f.dataType, StringType):
string_fields.append(f.name)
exprs = [none_as_blank(x).alias(x) if x in string_fields else x for x in df.columns]
df.select(*exprs)
return df
# NULL/NONE to blank logic
def none_as_blank(x):
return when(col(x) != None, col(x)).otherwise('')
non_nulls_df = replace_none_with_empty_str(empty_df)
non_nulls_df.write.mode('overwrite').format('parquet').saveAsTable('dbname.tablename')
</code></pre>
<p>But I still see the same error even after applying the above <code>NULL/NONE to blank logic</code>.
Is my assumption and workaround correct? Is my logic being properly applied to all string columns, particularly the nested string columns? If not, could anyone tell me what mistake I am making here and how I can correct it?
Any help is massively appreciated.</p>
| <python><apache-spark><pyspark><apache-spark-sql> | 2023-11-10 08:37:31 | 1 | 2,153 | Metadata |
77,458,556 | 12,890,458 | Find lists of almost equal values in list | <p>I have a 1D numpy array, and I want to find the sublists/subarrays that contain almost-equal values, meaning values that don't differ from each other by more than a tolerance. More precisely, there is one central point in each solution from which all other points don't differ by more than the tolerance. For instance, if I have <code>[1.0, 2.2, 1.4, 1.8, 1.5, 2.1]</code> and tolerance <code>0.2</code>, the desired outcome is <code>[[1.4, 1.5], [2.1, 2.2]]</code>. The following function does the job, I think:</p>
<pre><code>import numpy as np
def find_almost_equal(input, tol):
sorted = np.sort(input)
result = []
for i, v1 in enumerate(sorted):
result.append([])
for j, v2 in enumerate(sorted):
if v2 - tol < v1 < v2 + tol:
result[i].append(v2)
result = [r for r in result if len(r) > 1]
for i, r1 in enumerate(result):
for j, r2 in enumerate(result):
if set(r2).issubset(set(r1)):
del result[j]
return result
test = np.array([2.6, 1.2, 1.5, 1.8, 2.0, 2.2, 2.5, 1.1, 1.4])
tolerance = 0.15
almost_equal = find_almost_equal(test, tolerance)
print(almost_equal)
</code></pre>
<p>The outcome is <code>[[1.1, 1.2], [1.4, 1.5], [2.5, 2.6]]</code>. With <code>tolerance = 0.25</code> the outcome is <code>[[1.1, 1.2, 1.4], [1.4, 1.5], [1.8, 2.0, 2.2], [2.5, 2.6]]</code>.</p>
<p>When a point belongs to several sublists my algorithm does not always give the correct result. For example with input <code>[1.0, 1.1, 1.2, 1.3, 1.4]</code> and <code>tolerance = 0.2</code> the output is <code>[[1.0, 1.1, 1.2], [1.2, 1.3, 1.4]]</code>, while the expected outcome is <code>[[1.0, 1.1, 1.2], [1.1, 1.2, 1.3], [1.2, 1.3, 1.4]]</code>.</p>
<p>The question: Is there an easier way to do this (preferably in numpy)? And how can I do this correctly?</p>
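<p>For reference, a sketch of one O(N log N) formulation using <code>bisect</code> on the sorted values: each value is treated as a candidate centre, its window of values within the (strict) tolerance is found by binary search, and windows contained in another window are dropped. Whether every overlapping window should be reported, and how floating-point boundary cases behave, depends on the exact semantics you want, so treat this as a starting point rather than a definitive answer:</p>

```python
from bisect import bisect_left, bisect_right

def find_almost_equal(values, tol):
    """For each value v, take the window of sorted values in (v-tol, v+tol),
    then keep only maximal windows (drop windows contained in another)."""
    s = sorted(values)
    windows = []
    for v in s:
        lo = bisect_right(s, v - tol)   # first index with s[i] > v - tol
        hi = bisect_left(s, v + tol)    # first index with s[i] >= v + tol
        if hi - lo > 1:
            windows.append((lo, hi))
    windows = sorted(set(windows))
    # (lo, hi) is a subset of (lo2, hi2) iff lo2 <= lo and hi <= hi2
    maximal = [w for w in windows
               if not any(w != o and o[0] <= w[0] and w[1] <= o[1]
                          for o in windows)]
    return [s[lo:hi] for lo, hi in maximal]

print(find_almost_equal([2.6, 1.2, 1.5, 1.8, 2.0, 2.2, 2.5, 1.1, 1.4], 0.15))
# -> [[1.1, 1.2], [1.4, 1.5], [2.5, 2.6]]
```

<p>Binary search replaces the quadratic double loop, and comparing index pairs replaces the fragile set-subset deletion while iterating.</p>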
| <python><arrays><list><numpy><cluster-analysis> | 2023-11-10 08:22:25 | 1 | 460 | Frank Tap |
77,458,463 | 9,404,261 | Cannot slice/index unicode strings with underscores | <p>I have this Unicode string:</p>
<pre><code>my_string = "₁ᴀa̲a̲̲"
</code></pre>
<p>How can I index and slice it to make other Unicode strings?</p>
<p>If I run</p>
<pre><code>print([x for x in my_string])
['₁', 'ᴀ', 'a', '̲', 'a', '̲', '̲']
</code></pre>
<p>when I expected</p>
<p>['₁', 'ᴀ', 'a̲', '̲a̲̲']</p>
<p>this prints</p>
<pre><code>my_string[3]
'̲'
</code></pre>
<p>when I expected</p>
<pre><code>a̲̲
</code></pre>
<p>I tried to define</p>
<pre><code>my_string = u"₁ᴀa̲a̲̲"
</code></pre>
<p>but the <code>u</code> is automatically deleted by vscode</p>
<p>I need <code>my_string</code> as a buffer to compose other strings according to its human-readable index.</p>
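<p>For reference, what is happening here is that indexing works on code points, while U+0332 COMBINING LOW LINE is a separate combining code point that only renders attached to the previous character. A simplified sketch that groups each base character with the combining marks that follow it (full UAX #29 grapheme segmentation needs a dedicated library; note this attaches every mark to the preceding base, so the clusters come out as <code>['₁', 'ᴀ', 'a̲', 'a̲̲']</code>):</p>

```python
import unicodedata

def clusters(s):
    """Group each base character with the combining marks that follow it.

    A simplification of Unicode grapheme-cluster segmentation, good enough
    for combining marks such as U+0332 COMBINING LOW LINE.
    """
    out = []
    for ch in s:
        if out and unicodedata.combining(ch):
            out[-1] += ch   # attach the mark to the previous cluster
        else:
            out.append(ch)
    return out

my_string = "\u2081\u1d00a\u0332a\u0332\u0332"   # "₁ᴀa̲a̲̲"
print(clusters(my_string))      # -> ['₁', 'ᴀ', 'a̲', 'a̲̲']
print(clusters(my_string)[2])   # -> 'a̲'  (indexing by cluster, not code point)
```

<p>Slicing the list of clusters, then joining with <code>''.join(...)</code>, gives the composed substrings.</p>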
| <python><python-3.x><unicode><slice> | 2023-11-10 08:01:11 | 2 | 609 | tutizeri |
77,458,443 | 2,604,247 | Cleanest Way to Get a Set in Redis where Individual Elements Expire in 30 Days? | <p>I need a data structure similar to a Python set in redis, with the additional capability of individual elements automatically expiring (getting popped) from the set 30 days after insertion. Basically, these are the abstract behaviours I want out of the class.</p>
<pre class="lang-py prettyprint-override"><code>from abc import abstractmethod, ABC
from typing import Iterable
class RedisSet(ABC):
"""Implement a set of strings with a remote redis host."""
@abstractmethod
def __init__(self, url:str):
"""
Initialise the class at the remote redis url.
The set of strings should be empty initially.
"""
raise NotImplementedError
@abstractmethod
def add(self, new_elements:Iterable[str])->None:
"""Insert all the elements into the set."""
raise NotImplementedError
@abstractmethod
def __contains__(self, elem:str)->bool:
"""Check membership."""
raise NotImplementedError
</code></pre>
<p>So what would be the cleanest way of achieving it? I do not need the entire class implemented, but asking what would be the correct data types and APIs to use in redis, as I am not thoroughly familiar with full capabilities of redis.</p>
<p>I noted redis has a set datatype, but it seems (happy to be corrected if I am wrong) that it does not support any time to live (TTL) on individual members. Au contraire, individual keys do support TTL, but I would have to use a placeholder <code>value</code> for each key (unnecessary overhead), and I am not sure whether the membership check would be constant time.</p>
<p>N. B.</p>
<ul>
<li>I do not foresee any need to iterate through the elements, so optimising with respect to that operation is superfluous. All I need is membership check.</li>
<li>I intend to use <code>aioredis</code> as python client. It should support the necessary redis operations.</li>
</ul>
<p>Any pointers to the correct Redis data types whose documentation I should look up would be greatly appreciated.</p>
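<p>For illustration, the usual Redis mapping for this requirement is one key per element (e.g. <code>SET prefix:elem 1 EX 2592000</code> to add, <code>EXISTS prefix:elem</code> for an O(1) membership check), since plain Redis sets have no per-member TTL. A pure-Python stand-in sketching those semantics locally (the Redis commands in the comments are real, but the class itself is only an illustration, not a Redis client):</p>

```python
import time

class ExpiringSet:
    """Local stand-in illustrating per-member-TTL set semantics.

    The intended Redis mapping is one key per element:
      add      -> SET prefix:elem 1 EX 2592000   (30 days)
      contains -> EXISTS prefix:elem             (O(1) average)
    Redis evicts expired keys itself; here we just compare timestamps.
    """
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._expires: dict[str, float] = {}

    def add(self, elements):
        deadline = time.monotonic() + self.ttl
        for e in elements:
            self._expires[e] = deadline

    def __contains__(self, elem: str) -> bool:
        deadline = self._expires.get(elem)
        return deadline is not None and deadline > time.monotonic()

s = ExpiringSet(ttl_seconds=0.05)
s.add(["alice"])
print("alice" in s)      # -> True
time.sleep(0.1)
print("alice" in s)      # -> False (expired)
```

<p>The placeholder value per key is a handful of bytes, so the overhead is usually negligible compared with the convenience of letting Redis handle expiry.</p>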
| <python><redis><set><hashset><aioredis> | 2023-11-10 07:57:52 | 1 | 1,720 | Della |
77,458,331 | 21,346,793 | How to add prefix Bearer on drf-yasg swagger | <p>I am using drf-yasg swagger, and I need to add the prefix <code>Bearer</code> before my token in the Authorization header required by django-rest-framework: <code>Bearer &lt;token&gt;</code>.
I want to enter only the token and have it automatically turned into the <code>Authorization</code> header.
My settings:</p>
<pre><code>SWAGGER_SETTINGS = {
'SECURITY_DEFINITIONS': {
'Bearer': {
'type': 'apiKey',
'name': 'Authorization',
'in': 'header'
}
}
}
</code></pre>
| <python><django-rest-framework><drf-yasg> | 2023-11-10 07:32:50 | 1 | 400 | Ubuty_programmist_7 |
77,458,259 | 1,973,798 | Python: webbrower and missing `GLIBCXX_3.4.29' | <p>I came across an import problem causing a runtime warning at debug time.
As soon as my python script invokes the open method of the webbrowser package, VSCode prints the following:
<code>/snap/core20/current/lib/x86_64-linux-gnu/libstdc++.so.6: version GLIBCXX_3.4.29' not found (required by /lib/x86_64-linux-gnu/libproxy.so.1) Failed to load module: /home/andrea/snap/code/common/.cache/gio-modules/libgiolibproxy.so</code>.</p>
<p>There is nothing special in the code as you can see from the below two lines.</p>
<pre><code> import webbrowser
webbrowser.open(url)
</code></pre>
<p>This is not a VSCode-specific problem, as the same script launched in the terminal outputs the same error.</p>
<p>Checking for the required GLIBCXX version at a system level, the required module seems to be in place (<code>strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX</code>) as the required version is listed.
I have however upgraded the library to the most recent package without success.</p>
<p>At this stage, I'm not sure whether this depends from:</p>
<ul>
<li>the <code>LD_LIBRARY_PATH</code>, currently missing in my environment shell. I was somehow expecting pyenv to do this for me, but this doesn't seem to be the case (I don't see any such variable when querying <code>os.environ</code>).</li>
<li>the fact that the upgrade I did was not included in my Python build, hence I would need to rebuild it, letting pyenv do the leg work for me</li>
</ul>
<p>What would be your take?</p>
<p>P.S. I was simply trying to open a webbrowser and click on a link to cut short an authentication process I need to do to interact with my script.</p>
| <python><linux><python-webbrowser> | 2023-11-10 07:17:59 | 0 | 1,041 | Andrea Moro |
77,458,196 | 15,222,211 | pydantic and pylint conflict E1137 | <p>I am using <strong>pydantic</strong> in my code along with <strong>pylint</strong> for code validation. The following code appears to be valid to me, but pylint is displaying an E1137 error message. Can you help me identify what might be wrong?</p>
<p>I need to update a dictionary inside the object.</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import Field, BaseModel
class Object(BaseModel):
data: dict[str, str] = Field(default={}, description="description")
def update_data(self, values: dict[str, str]) -> None:
"""Update self dictionary."""
for key, value in values.items():
self.data[key] = value
obj = Object()
obj.update_data(values={"key": "value"})
print(obj.data) # {'key': 'value'}
</code></pre>
<pre><code>pylint --disable=all --enable=E1137 my_script.py
E1137: 'self.data' does not support item assignment (unsupported-assignment-operation)
</code></pre>
<pre><code>pip list
pydantic 2.4.2
pydantic_core 2.10.1
pydantic-settings 2.0.3
pylint 3.0.2
</code></pre>
| <python><pydantic><pylint> | 2023-11-10 07:01:55 | 0 | 814 | pyjedy |
77,457,709 | 1,226,676 | gunicorn looking for deleted module; cannot run django application in GCP | <p>I have a django application that's been running fine on GCP, and I recently removed a dependency: <code>django-colorfields</code>. I didn't need this functionality, so I removed these items from my models, migrated the database and uninstalled the module.</p>
<p>Now when I deploy my project to GCP, I can't get past the admin login page. The admin page (<code>https://myproject.com/admin</code>) shows up just fine, but when I enter my username and password and hit enter, I get a HTTP 500 internal server error.</p>
<p>I checked the internal server logs, and there's a number of gunicorn errors, but this one stands out:</p>
<pre><code>2023-11-09 20:02:29.950
Traceback (most recent call last):
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/gunicorn/arbiter.py", line 609, in spawn_worker
worker.init_process()
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/gunicorn/workers/gthread.py", line 95, in init_process
super().init_process()
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/gunicorn/workers/base.py", line 134, in init_process
self.load_wsgi()
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
self.wsgi = self.app.wsgi()
2023-11-09 20:02:29.950
^^^^^^^^^^^^^^^
2023-11-09 20:02:29.950
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/gunicorn/app/base.py", line 67, in wsgi
2023-11-09 20:02:29.950
self.callable = self.load()
2023-11-09 20:02:29.950
^^^^^^^^^^^
2023-11-09 20:02:29.950
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
2023-11-09 20:02:29.950
return self.load_wsgiapp()
2023-11-09 20:02:29.950
^^^^^^^^^^^^^^^^^^^
2023-11-09 20:02:29.950
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
2023-11-09 20:02:29.950
return util.import_app(self.app_uri)
2023-11-09 20:02:29.950
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-09 20:02:29.950
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/gunicorn/util.py", line 371, in import_app
2023-11-09 20:02:29.950
mod = importlib.import_module(module)
2023-11-09 20:02:29.950
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-09 20:02:29.950
File "/layers/google.python.runtime/python/lib/python3.11/importlib/__init__.py", line 126, in import_module
2023-11-09 20:02:29.950
return _bootstrap._gcd_import(name[level:], package, level)
2023-11-09 20:02:29.950
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-09 20:02:29.950
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
2023-11-09 20:02:29.950
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
2023-11-09 20:02:29.950
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
2023-11-09 20:02:29.950
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
2023-11-09 20:02:29.950
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
2023-11-09 20:02:29.950
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
2023-11-09 20:02:29.950
File "/srv/main.py", line 1, in <module>
2023-11-09 20:02:29.950
from polb_cms.wsgi import application
2023-11-09 20:02:29.950
File "/srv/polb_cms/wsgi.py", line 16, in <module>
2023-11-09 20:02:29.950
application = get_wsgi_application()
2023-11-09 20:02:29.950
^^^^^^^^^^^^^^^^^^^^^^
2023-11-09 20:02:29.950
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application
2023-11-09 20:02:29.950
django.setup(set_prefix=False)
2023-11-09 20:02:29.950
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/django/__init__.py", line 24, in setup
2023-11-09 20:02:29.950
apps.populate(settings.INSTALLED_APPS)
2023-11-09 20:02:29.950
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/django/apps/registry.py", line 91, in populate
2023-11-09 20:02:29.950
app_config = AppConfig.create(entry)
2023-11-09 20:02:29.950
^^^^^^^^^^^^^^^^^^^^^^^
2023-11-09 20:02:29.950
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/django/apps/config.py", line 228, in create
2023-11-09 20:02:29.950
import_module(entry)
2023-11-09 20:02:29.950
File "/layers/google.python.runtime/python/lib/python3.11/importlib/__init__.py", line 126, in import_module
2023-11-09 20:02:29.950
return _bootstrap._gcd_import(name[level:], package, level)
2023-11-09 20:02:29.950
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-11-09 20:02:29.950
ModuleNotFoundError: No module named 'colorfield'
2023-11-09 20:02:29.950
[2023-11-09 22:02:29 -0600] [18] [ERROR] Exception in worker process
</code></pre>
<p>That last bit -- <code>ModuleNotFoundError: no module named 'colorfield'</code> -- stands out to me. I have removed ALL references to <code>django-colorfield</code> in my entire project (I've done a find + replace in my entire project and can't find it anywhere!), but for some reason, my GCP deployed file still thinks that it needs this module and has an error every time.</p>
<p>My app.yaml file is dead simple:</p>
<pre><code># [START django_app]
# [START gaestd_py_django_app_yaml]
runtime: python311
env_variables:
# This setting is used in settings.py to configure your ALLOWED_HOSTS
APPENGINE_URL: https://my-project.uw.r.appspot.com/
GOOGLE_CLOUD_PROJECT: my-project
GS_BUCKET_NAME: "my-bucket"
handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
- url: /static
static_dir: static/
# This handler routes all requests not caught above to your main app. It is
# required when static routes are defined, but can be omitted (along with
# the entire handlers section) when there are no static files defined.
- url: /.*
script: auto
# [END gaestd_py_django_app_yaml]
# [END django_app]
</code></pre>
<p>and I've checked other common issues -- I have DEBUG=TRUE in my <code>settings.py</code> and <code>ALLOWED_HOSTS = [*]</code> just in case. I'm still not able to log in to my admin website, and this seems like the best lead that I have at the moment.</p>
<p>I've removed it from <code>INSTALLED_APPS</code>, I've uninstalled it with <code>pip</code>, I'm removed every mention of it from my models, and I even deleted all of my migration files and reset my database because I found mentions of this module in there. What else can I do to rid myself of this error?</p>
| <python><django><google-cloud-platform><google-app-engine> | 2023-11-10 04:33:26 | 0 | 5,568 | nathan lachenmyer |
77,457,631 | 6,396,569 | VScode will not recognize my launch.json arguments, how can I fix this? | <p>Using Visual Studio Code v1.84.2 on Windows 10.
This is my <code>launch.json</code> contents:</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true,
"args": ["F:\\PRODUCTION Sounds\\"]
}
]
}
</code></pre>
<p>And my script:</p>
<pre><code>import os
import argparse
from sys import argv
print(argv)
def find_audio_files(directory):
# Define the set of file extensions to look for
audio_extensions = {'.mp3', '.wav', '.aiff', '.flac', '.aac', '.ogg', '.m4a'}
audio_files = []
# Walk through the directory
for root, dirs, files in os.walk(directory):
for file in files:
# Check the extension of each file
if os.path.splitext(file)[1].lower() in audio_extensions:
# If it's an audio file, add it to the list
audio_files.append(os.path.join(root, file))
return audio_files
def main():
parser = argparse.ArgumentParser(
prog='audiofinder',
description='Finds audio files',
epilog='Text at the bottom of help')
parser.add_argument('directory')
args = parser.parse_args()
print(f"Searching directory for audio files: {args.directory}")
print(f"{find_audio_files(args.directory)}")
if __name__ == '__main__':
main()
</code></pre>
<p>My directory structure looks like this:</p>
<pre><code>audiofinder
.vscode
launch.json
audiofinder.py
</code></pre>
<p>And when I run Python debug in VSC, my program outputs: <code>audiofinder: error: the following arguments are required: directory</code></p>
<p>However, this exact argument works fine when passed manually, and furthermore, I added a print statement at the top as you saw, and no such argument is ever passed to the program when under debug mode using this config.</p>
| <python><visual-studio-code><debugging> | 2023-11-10 03:59:18 | 0 | 2,567 | the_endian |
77,457,542 | 1,188,943 | Chromedriver ERR_SSL_PROTOCOL_ERROR | <p>I'm using Selenium 4.14.0 and Python 3.11.4 to crawl a website with the following settings:</p>
<pre><code>options = webdriver.ChromeOptions()
options.add_argument('--lang=en')
options.add_argument('--allow-insecure-localhost')
options.add_argument('--ignore-certificate-errors')
</code></pre>
<p>but get the following error:</p>
<pre><code>selenium.common.exceptions.WebDriverException: Message: unknown error: net::ERR_SSL_PROTOCOL_ERROR
(Session info: chrome=119.0.6045.123)
</code></pre>
| <python><selenium-webdriver> | 2023-11-10 03:26:06 | 0 | 1,035 | Mahdi |
77,457,504 | 2,849,157 | tqdm bar cannot update correctly when using pdsh | <p>I have code like this:</p>
<pre><code># tq_test.py
from tqdm.auto import tqdm
import time
if __name__ == "__main__":
    bar = tqdm(total=5)
    for _ in tqdm(range(3)):
        time.sleep(0.5)

    for _ in range(5):
        bar.update(1)
</code></pre>
<p>If I run it on my local machine with <code>/home/user/miniconda3/bin/python -u /home/user/python/tq_test.py</code>, both bars update correctly.</p>
<p>But when I run it with <code>pdsh</code>, the first bar does not update correctly:</p>
<pre><code>pdsh -S -f 1024 -w mymachine /home/user/miniconda3/bin/python -u /home/user/python/tq_test.py
</code></pre>
<p>How can I fix this?</p>
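<p>For context, <code>pdsh</code> collects each host's output through a pipe, so the stream is not a TTY, and the carriage-return trick progress bars use to redraw a line in place stops working. A standard-library sketch of that distinction (illustrative only, not tqdm's actual logic):</p>

```python
import sys

def progress_line(i, total, stream=sys.stderr):
    # On a terminal, "\r" rewrites the same line in place; through a pipe
    # (as under pdsh) each update just appends, so the bar looks broken.
    end = "\r" if stream.isatty() else "\n"
    stream.write(f"{i}/{total}{end}")
    return end
```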
| <python><tqdm> | 2023-11-10 03:11:15 | 1 | 10,083 | roger |
77,457,357 | 5,560,837 | VSCode Python heap[0] is no longer the min after modifying the heap items | <p>In VS Code, I have a Python heap variable, which is a list of three-item tuples, defined as</p>
<p><code>heapq.heappush(heap, (len(candidates[(i, j)]), i, j))</code></p>
<p>At the beginning of debugging, <code>heap[0]</code> is always the tuple with the shortest <code>len(candidates[i, j])</code>. However, after a few rounds of modifying items (not the top item) in the heap by doing -</p>
<pre><code>heap.remove((length, r, c))
heapq.heappush(heap, (length - 1, r, c))
</code></pre>
<p>, <code>heap[0]</code> no longer point to the minimum item. For example, in the debug console, <code>heap[0]</code> may show <code>(3, 4, 5)</code>, while there exists another heap item <code>(1, 6, 7)</code>. Because <code>1 < 3</code>, obviously <code>(3, 4, 5)</code> shouldn't be above the other item in the heap.</p>
| <python><heap><sudoku> | 2023-11-10 02:20:12 | 1 | 417 | Egret |
77,457,342 | 14,540,717 | Download a pandas df with images into word document using python | <p>Hi I am creating a dataframe with pandas in python that is going to include pictures with the following code (the image is already downloaded into my working directory)</p>
<pre><code>#Create the df using pandas
import pandas as pd
from IPython.display import Image, HTML
row_1 = ["Bob", '<img src="bob.jfif" width="60">',"Red"]
df = pd.DataFrame(row_1).transpose()
df.columns = ['Name', 'Picture','Fav.color']
HTML(df.to_html(escape=False))
</code></pre>
<p>And so it looks alright inside my jupyter notebook</p>
<p><a href="https://i.sstatic.net/WtPIx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WtPIx.png" alt="enter image description here" /></a></p>
<p>But when I try to export this pandas df into a Word document using the following code</p>
<pre><code>#Export into word document
import docx
#Create new document
doc = docx.Document()
# add a table to the end and create a reference variable
# extra row is so we can add the header row
t = doc.add_table(rows=df.shape[0]+1, cols=df.shape[1]) #+1 allegedly for the header
# add the header rows.
for j in range(df.shape[-1]):
    t.cell(0, j).text = df.columns[j]

# add the rest of the data frame
for i in range(df.shape[0]):
    for j in range(df.shape[-1]):
        t.cell(i + 1, j).text = str(df.values[i, j])
# save the doc
doc.save('test.docx')
</code></pre>
<p>When I open the Word document, the image does not display</p>
<p><a href="https://i.sstatic.net/6mD8R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6mD8R.png" alt="enter image description here" /></a></p>
<p>Is there any way to get the pictures inside the word document table either by directly downloading from source URL or importing from my downloaded pictures in working directory? Thanks a lot.</p>
| <python><pandas><image> | 2023-11-10 02:12:43 | 1 | 428 | Jeff238 |
77,457,324 | 7,391,480 | Finding the minimum value in lists inside a dataframe with TypeError: 'float' object is not iterable | <p>I've generated a Dataframe with a column of lists, but if there are no values, there's a NaN instead.</p>
<pre><code>import pandas as pd
df = pd.DataFrame(columns=['Lists', 'Min'])
df['Lists'] = [ [1,2,3], [4,5,6], [7,8,9], float('NaN') ]
print(df)
Lists Min
0 [1, 2, 3] NaN
1 [4, 5, 6] NaN
2 [7, 8, 9] NaN
3 NaN NaN
</code></pre>
<p>I would like for <code>df['Min']</code> to contain the minimum value of the corresponding list in the same row. Thus:</p>
<pre><code> Lists Min
0 [1, 2, 3] 1
1 [4, 5, 6] 4
2 [7, 8, 9] 7
3 NaN NaN
</code></pre>
<p>However, when I try list comprehension I receive an error.</p>
<pre><code>df['Min'] = [min(x) for x in df.Lists.tolist()]
</code></pre>
<p>Produces the error</p>
<pre><code>TypeError: 'float' object is not iterable
</code></pre>
<p>How can I find the minimum of each list?</p>
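<p>For illustration, the core of a NaN-tolerant version is just a type guard in the comprehension; a pure-Python sketch, independent of pandas:</p>

```python
import math

def safe_min(values):
    # Take the min of each list; pass NaN (a plain float) through unchanged.
    return [min(v) if isinstance(v, list) else v for v in values]

col = [[1, 2, 3], [4, 5, 6], [7, 8, 9], float("nan")]
result = safe_min(col)  # [1, 4, 7, nan]
```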
| <python><pandas><dataframe><list><nested-lists> | 2023-11-10 02:08:36 | 3 | 1,364 | edge-case |
77,457,291 | 627,259 | pqdm.processes 'AttributeError' object is not iterable while it works with pdqm.threads | <p>I have a newbie python issue which I do not understand.</p>
<p>I have working code for parallel execution using pqdm. It works if I use <code>from pqdm.threads import pqdm</code> and fails with <code>'AttributeError' object is not iterable</code> if I switch to <code>from pqdm.processes import pqdm</code> instead.</p>
<p>What is it that I do not understand here? What can be the problem here?</p>
<p>Thanks.</p>
| <python><parallel-processing><tqdm> | 2023-11-10 01:51:52 | 0 | 2,085 | Samo |
77,457,273 | 1,415,826 | Flask API authorization without "login" | <p>I see a lot of examples online showing how to add authentication to a flask API using JSON Web Tokens which requires a login. For example <a href="https://www.youtube.com/watch?v=J5bIPtEbS0Q" rel="nofollow noreferrer">https://www.youtube.com/watch?v=J5bIPtEbS0Q</a></p>
<p>How can I add authentication without having to actually "login"? I have a public flask API that is hosted with AWS App Runner. The API is triggered whenever a user sends a SMS message to a number, then the API is called via an internal AWS Lambda function. You can see the architecture in the image below. We are also going to expose this API on our website to allow people to ask questions there, which I believe means the API URL will be exposed with the requests.</p>
<pre><code>@app.route("/api/text_bot", methods=["GET"])
def text_bot():
    # code
    return jsonify(response)
</code></pre>
<p>How can I ensure that the request authorization headers have a valid token when the API is called?</p>
<p><a href="https://i.sstatic.net/4p4i6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4p4i6.png" alt="enter image description here" /></a></p>
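<p>For context, a common pattern for a public endpoint without user login is a shared key sent in a request header and checked in constant time. A minimal sketch of the check itself (the header name and key storage here are illustrative assumptions):</p>

```python
import hmac

EXPECTED_KEY = "s3cret-value"  # illustrative; load from config/secrets in practice

def is_authorized(headers):
    # hmac.compare_digest avoids leaking information through timing.
    supplied = headers.get("X-Api-Key", "")
    return hmac.compare_digest(supplied, EXPECTED_KEY)
```

<p>In Flask this would typically run in a <code>before_request</code> hook or a decorator around the view function.</p>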
| <python><flask> | 2023-11-10 01:43:18 | 1 | 945 | iambdot |
77,457,194 | 1,611,813 | Issue with DataFrame manipulation in Python | <p>I have the pandas DataFrame below. I am trying to create a new column labeled <em>Category</em> containing the index of each <em>Color</em> in the sorted unique colors of its <em>Fruit</em> type.</p>
<p><strong>Data:</strong></p>
<p><a href="https://i.sstatic.net/qrzNg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qrzNg.png" alt="enter image description here" /></a></p>
<pre><code>sorted(df(Fruit="Citrus").unique()) => ['Green', 'Orange', 'Yellow']
sorted(df(Fruit="Apples").unique()) => ['Dark Red', 'Green', 'Red']
</code></pre>
<p><strong>Expected Output:</strong></p>
<p><a href="https://i.sstatic.net/8vDkz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8vDkz.png" alt="enter image description here" /></a></p>
<p><strong>MWE:</strong></p>
<pre><code>import panda as pd
data = {'Fruit': ['Citrus', 'Apples', 'Apples', 'Citrus', 'Citrus', 'Apples', 'Citrus', 'Apples', 'Citrus', 'Apples'],
'Color': ['Orange', 'Dark Red', 'Red', 'Orange', 'Green', 'Red', 'Yellow', 'Dark Red', 'Yellow', 'Green']}
df = pd.DataFrame(data)
df['Category'] = df.groupby('Fruit')['Color'].transform(lambda x: pd.factorize(sorted(x.unique()))[0][x.argsort()])
print(df)
</code></pre>
<p>Python is complaining, which I believe has to do with the newly created <em>Category</em> column.</p>
<pre><code>IndexError: index 3 is out of bounds for axis 0 with size 3
</code></pre>
<p>Questions:</p>
<ol>
<li>What is the right way to address this error?</li>
<li>Is there a simpler way to implement this, as I will have 100,000 rows to process?</li>
</ol>
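<p>To pin down the intended logic before translating it to pandas, the mapping (each color's position in the sorted unique colors of its fruit group) can be sketched in plain Python:</p>

```python
fruits = ['Citrus', 'Apples', 'Apples', 'Citrus', 'Citrus',
          'Apples', 'Citrus', 'Apples', 'Citrus', 'Apples']
colors = ['Orange', 'Dark Red', 'Red', 'Orange', 'Green',
          'Red', 'Yellow', 'Dark Red', 'Yellow', 'Green']

# Per-fruit lookup: color -> index in that fruit's sorted unique colors.
lookup = {}
for fruit in set(fruits):
    uniques = sorted({c for f, c in zip(fruits, colors) if f == fruit})
    lookup[fruit] = {c: i for i, c in enumerate(uniques)}

category = [lookup[f][c] for f, c in zip(fruits, colors)]
```

<p>If this matches the expected output, the same mapping can then be expressed with a groupby or a merge rather than positional <code>transform</code> tricks.</p>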
| <python><pandas> | 2023-11-10 01:12:29 | 1 | 684 | Saravanan K |
77,457,083 | 2,414,842 | Android Frida ProcessNotFoundError error when trying to attach to a process | <p>I've installed frida according to the <a href="https://frida.re/docs/installation/" rel="nofollow noreferrer">official page</a> installation guide and downloaded <strong>frida-server-16.1.5-android-x86_64</strong> from their github release pages and placed in the <strong>/data/local/tmp/</strong> directory of my <strong>android 9.0</strong> vm (x86_64 VM running through virt-manager). Even after running the <strong>frida-server as root</strong> and running this sample script to try to attach to a process I keep having this error.</p>
<pre><code>import frida
def on_message(message, _data):
    if message['type'] == 'send':
        print(f"Syscall: {message['payload']}")

def trace_syscalls(target_process):
    session = frida.attach(target_process)
    session.on('message', on_message)
    session.detach()

if __name__ == '__main__':
    target_process = 17772
    # target_process = 'owasp.mstg.uncrackable2'
    # target_process = 'Uncrackable Level 2'
    trace_syscalls(target_process)
</code></pre>
<p>I'm sure both the process name and pid are correct. The errors are</p>
<blockquote>
<p>frida.ProcessNotFoundError: unable to find process with pid 17772</p>
</blockquote>
<p>when I try with PID or</p>
<blockquote>
<p>frida.ProcessNotFoundError: unable to find process with name 'owasp.mstg.uncrackable2'</p>
</blockquote>
<blockquote>
<p>frida.ProcessNotFoundError: unable to find process with name 'Uncrackable Level 2'</p>
</blockquote>
<p>when I try with process name (returned from <code>frida-ps -Ua</code>)</p>
<p>Output of <code>frida-trace -U -N owasp.mstg.uncrackable2</code>:</p>
<pre><code>$ frida-trace -U -N owasp.mstg.uncrackable2
Started tracing 0 functions. Press Ctrl+C to stop.
Process terminated
</code></pre>
<p>Outputs nothing, even when I use the app, it remains like that until I close the process. <strong>Also there is only this android device connected through ADB</strong>.</p>
<p>The frida-tools package from pip is version <strong>12.3.0</strong>, the most recent from pip. And the frida-server is <strong>16.1.5</strong> the most recent from their git repo. What can be causing this and how to solve it? Thanks in advance.</p>
| <python><android><frida> | 2023-11-10 00:24:18 | 1 | 2,294 | Fnr |
77,457,081 | 3,558,874 | Next environment variables in Flask API route | <p>I've set up a Python backend for my Next frontend according to this <a href="https://vercel.com/guides/how-to-use-python-and-javascript-in-the-same-application" rel="nofollow noreferrer">Vercel article</a>.</p>
<p>How do I access environment variables in these Python API routes?</p>
<p>In development, the information is saved in <code>.env.local</code>, and in deployment they would be added in the Vercel dashboard of the project.</p>
<p>If this were a standard Node API route, I would access the environment variables like this:</p>
<pre><code>const username = process.env.username
</code></pre>
<p>Near the top of this <a href="https://vercel.com/docs/projects/environment-variables" rel="nofollow noreferrer">Vercel documentation</a>, it mentions that Python environment variables can be accessed like this:</p>
<pre><code>username = os.environ.get('username')
</code></pre>
<p>But this doesn't work and the variable is undefined when it comes time to use it.</p>
<p>My guess is that this has something to do with the fact that when testing on <code>localhost</code>, the Flask server is running in a <code>venv</code> virtual environment according to this <a href="https://code.visualstudio.com/docs/python/tutorial-flask" rel="nofollow noreferrer">VS Code tutorial</a> and this <a href="https://codevoweb.com/how-to-integrate-flask-framework-with-nextjs/" rel="nofollow noreferrer">other tutorial</a>.</p>
<p>The Flask server is also running on a different port than the Next frontend, so it makes sense that it would not have access to the <code>.env.local</code> file.</p>
<p>I'm entirely new to using a non-Node backend for my Next API routes, so I'm not too sure how to set this up correctly. There aren't too many questions/articles out there about using a Python backend for a Next frontend, and the few that I've found don't mention environment variables.</p>
<p>Any ideas?</p>
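<p>For reference, the Flask process only sees variables present in its own environment; a dotenv-style file can be loaded manually with a few lines of standard-library code. A sketch assuming simple <code>KEY=value</code> lines, with no quoting or <code>export</code> handling (the python-dotenv package does the same more robustly):</p>

```python
import os

def load_env_file(path):
    # Minimal .env parser: skip blanks and comments, split on the first "=".
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```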
| <python><flask><next.js><python-venv> | 2023-11-10 00:23:28 | 1 | 4,327 | pez |
77,457,080 | 5,475,733 | prefect orion start error: No such command 'orion' | <p>I am learning <a href="https://dezoomcamp.streamlit.app/Week_2_Workflow_Orchestration" rel="nofollow noreferrer">workflow orchestration</a> from the Data Engineering Zoomcamp and have successfully ingested the data into PostgreSQL with a Prefect flow, but when I try to run the command <code>prefect orion start</code> to see the Prefect UI, I get the error <code>Error: No such command 'orion'</code>. I have tried everything from reinstalling to setting up the whole project again, but the error persists.</p>
<p>Installed Prefect version:</p>
<pre><code>Version: 2.14.4
API version: 0.8.4
Python version: 3.9.18
Git commit: d2cf30f4
Built: Thu, Nov 9, 2023 4:48 PM
OS/Arch: win32/AMD64
Profile: default
Server type: ephemeral
</code></pre>
<p>My ingest_data.py file:</p>
<pre class="lang-py prettyprint-override"><code>import os
import argparse
from time import time
import pandas as pd
from sqlalchemy import create_engine
from prefect import flow, task
from prefect.tasks import task_input_hash
from datetime import timedelta
@task(log_prints=True, retries=3, cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))
def extract_data(csv_url, user, password, host, port, db):
    if csv_url.endswith('.csv.gz'):
        csv_name = 'yellow_tripdata_2021-01.csv.gz'
    else:
        csv_name = 'output.csv'

    os.system(f"wget {csv_url} -O {csv_name}")

    df_iter = pd.read_csv(csv_name, iterator=True, chunksize=100000)
    df = next(df_iter)
    df.tpep_pickup_datetime = pd.to_datetime(df.tpep_pickup_datetime)
    df.tpep_dropoff_datetime = pd.to_datetime(df.tpep_dropoff_datetime)
    return df

@task(log_prints=True)
def transform_data(df):
    print(f"pre: missing passenger count: {df['passenger_count'].isin([0]).sum()}")
    df = df[df['passenger_count'] != 0]
    print(f"post: missing passenger count: {df['passenger_count'].isin([0]).sum()}")
    return df

@task(log_prints=True, retries=3)
def ingest_data(user, password, host, port, db, table_name, df):
    postgres_url = f'postgresql://{user}:{password}@{host}:{port}/{db}'
    engine = create_engine(postgres_url)
    df.head(n=0).to_sql(name=table_name, con=engine, if_exists='replace')
    df.to_sql(name=table_name, con=engine, if_exists='append')

@flow(name='Subflow', log_prints=True)
def log_subflow(table_name: str):
    print(f"Logging subflow for: {table_name}")

@flow(name="Ingest Flow")
def main(table_name: str):
    user = "root"
    password = "root"
    host = "localhost"
    port = "5432"
    db = "ny_taxi"
    csv_url = "https://github.com/DataTalksClub/nyc-tlc-data/releases/download/yellow/yellow_tripdata_2021-01.csv.gz"

    log_subflow(table_name)
    raw_data = extract_data(csv_url, user, password, host, port, db)
    data = transform_data(raw_data)
    ingest_data(user, password, host, port, db, table_name, data)

if __name__ == '__main__':
    main("yellow_taxi_trips")
</code></pre>
| <python><prefect> | 2023-11-10 00:23:27 | 1 | 1,271 | npkp |
77,456,942 | 953,553 | sqlalchemy new style with Mapped anotations causes sqlalchemy.exc.ArgumentError: Could not interpret annotation Mapped chicken-egg-issue | <p>I am following the example from sqlalchemy docs <a href="https://docs.sqlalchemy.org/en/20/orm/inheritance.html#relationships-with-joined-inheritance" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/orm/inheritance.html#relationships-with-joined-inheritance</a></p>
<p>and I am facing a chicken-and-egg issue: <code>Mapped[List[Manager]]</code> references <code>Manager</code> before its actual class definition, which leads to:</p>
<blockquote>
<p>sqlalchemy.exc.ArgumentError: Could not interpret annotation
Mapped[List[Manager]]. Check that it uses names that are correctly
imported at the module level. See chained stack trace for more hints.</p>
</blockquote>
<p><strong>What is the proper way to define the models, then import them and create the tables, with this new style of <code>Mapped</code> annotations?</strong></p>
<p>all the details:</p>
<p>I store them in models.py</p>
<pre class="lang-py prettyprint-override"><code>class Company(Base):
__tablename__ = "company"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str]
managers: Mapped[List[Manager]] = relationship(back_populates="company")
class Employee(Base):
__tablename__ = "employee"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str]
type: Mapped[str]
__mapper_args__ = {
"polymorphic_identity": "employee",
"polymorphic_on": "type",
}
class Manager(Employee):
__tablename__ = "manager"
id: Mapped[int] = mapped_column(ForeignKey("employee.id"), primary_key=True)
manager_name: Mapped[str]
company_id: Mapped[int] = mapped_column(ForeignKey("company.id"))
company: Mapped[Company] = relationship(back_populates="managers")
__mapper_args__ = {
"polymorphic_identity": "manager",
}
class Engineer(Employee):
...
</code></pre>
<p>inside database.py</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker, declarative_base
engine = create_engine('postgresql+psycopg2://datalake:password@localhost/flask_db')
db_session = scoped_session(sessionmaker(autocommit=False,
                                         autoflush=False,
                                         bind=engine))
Base = declarative_base()
Base.query = db_session.query_property()
Base.__table_args__ = (
    {'schema': 'metadata'}
)

def init_db():
    # import all modules here that might define models so that
    # they will be registered properly on the metadata. Otherwise
    # you will have to import them first before calling init_db()
    import models
    Base.metadata.drop_all(bind=engine)
    Base.metadata.create_all(bind=engine)
</code></pre>
<p>and inside app.py</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
def create_app():
    app = Flask(__name__)

    from database import init_db
    init_db()

    return app

app = create_app()

@app.route('/')
def hello():
    return '<h1>Hello, World!</h1>'
</code></pre>
<p>When running <code>flask run</code> I get:</p>
<p><strong>sqlalchemy.exc.ArgumentError: Could not interpret annotation Mapped[List[Manager]]. Check that it uses names that are correctly imported at the module level. See chained stack trace for more hints.</strong></p>
<p>the full stack trace</p>
<pre><code>Traceback (most recent call last):
File "/Users/andi/.pyenv/versions/spike/bin/flask", line 8, in <module>
sys.exit(main())
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/flask/cli.py", line 1064, in main
cli.main()
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/click/decorators.py", line 92, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/flask/cli.py", line 912, in run_command
raise e from None
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/flask/cli.py", line 898, in run_command
app = info.load_app()
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/flask/cli.py", line 309, in load_app
app = locate_app(import_name, name)
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/flask/cli.py", line 219, in locate_app
__import__(module_name)
File "/Users/andi/Projects/sqlalchemy_spike/app.py", line 11, in <module>
app = create_app()
File "/Users/andi/Projects/sqlalchemy_spike/app.py", line 7, in create_app
init_db()
File "/Users/andi/Projects/sqlalchemy_spike/database.py", line 22, in init_db
import models
File "/Users/andi/Projects/sqlalchemy_spike/models.py", line 14, in <module>
class Company(Base):
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/sqlalchemy/orm/decl_api.py", line 195, in __init__
_as_declarative(reg, cls, dict_)
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/sqlalchemy/orm/decl_base.py", line 247, in _as_declarative
return _MapperConfig.setup_mapping(registry, cls, dict_, None, {})
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/sqlalchemy/orm/decl_base.py", line 328, in setup_mapping
return _ClassScanMapperConfig(
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/sqlalchemy/orm/decl_base.py", line 563, in __init__
self._scan_attributes()
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/sqlalchemy/orm/decl_base.py", line 1006, in _scan_attributes
collected_annotation = self._collect_annotation(
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/sqlalchemy/orm/decl_base.py", line 1277, in _collect_annotation
extracted = _extract_mapped_subtype(
File "/Users/andi/.pyenv/versions/3.9.6/envs/spike/lib/python3.9/site-packages/sqlalchemy/orm/util.py", line 2353, in _extract_mapped_subtype
raise sa_exc.ArgumentError(
sqlalchemy.exc.ArgumentError: Could not interpret annotation Mapped[List[Manager]]. Check that it uses names that are correctly imported at the module level. See chained stack trace for more hints.
</code></pre>
<p>using:</p>
<pre><code>Flask==3.0.0
SQLAlchemy==2.0.23
</code></pre>
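<p>For background, the plain-Python mechanism SQLAlchemy relies on here is string forward references: annotating with the class name as a string (e.g. <code>Mapped[List[&quot;Manager&quot;]]</code>, per the declarative docs) defers resolution until after the module has finished loading, which sidesteps the chicken-and-egg ordering. A minimal standard-library sketch of that resolution, independent of SQLAlchemy (the namespaces are passed explicitly here for illustration; normally the module globals are used):</p>

```python
from typing import List, get_type_hints

class Company:
    # "Manager" does not exist yet, so reference it by name as a string.
    managers: List["Manager"]

class Manager:
    company: "Company"

# Resolution is lazy: it succeeds once both names are available.
hints = get_type_hints(
    Company, globalns={"Manager": Manager, "Company": Company, "List": List}
)
```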
| <python><flask><sqlalchemy> | 2023-11-09 23:36:09 | 1 | 23,529 | andilabs |
77,456,832 | 2,981,639 | How to extract a struct values to a list in polars | <p>I have a pipeline that extracts regular expression groups from a <code>polars</code> text column, and I want to display the text and matches in a <code>streamlit</code> <code>st.table</code>. Provided that the column containing the matches is a List[str] this works well with <code>streamlit</code>, but <code>polars</code> <code>extract_groups</code> returns a struct (understandable since regex groups can be named).</p>
<p>The code below works, but is there a way of doing this without using <code>map_elements</code>? In general, there could be 0, 1 or more match groups and I'd like to retain the output dtype of <code>list[str]</code> for <code>streamlit</code> compatibility.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
dataframe = pl.DataFrame([{"text":"ABC"}, {"text":"123"}])
regexp = r'^.*(\d+).*$'
dataframe = dataframe.with_columns(
    regexp_match=pl.col("text").str.extract_groups(regexp)
).filter(pl.col("regexp_match").struct["1"].is_not_null())

dataframe.with_columns(
    regexp_match=pl.struct(["regexp_match"]).map_elements(
        lambda x: list(x["regexp_match"].values()), return_dtype=pl.List(str)
    )
)
</code></pre>
<pre><code>shape: (1, 2)
┌──────┬──────────────┐
│ text ┆ regexp_match │
│ --- ┆ --- │
│ str ┆ list[str] │
╞══════╪══════════════╡
│ 123 ┆ ["3"] │
└──────┴──────────────┘
</code></pre>
| <python><dataframe><python-polars> | 2023-11-09 23:02:07 | 2 | 2,963 | David Waterworth |
77,456,821 | 2,101,025 | Flask-Login > 0.6.0 not working on Azure Web App Deployment | <p>I have an Azure webapp running a Flask Application. I have no extra workers, just running it on default settings.</p>
<p>Since moving from Flask-Login 0.6.0 to 0.6.3, the user is no longer logged out when the session expires.</p>
<p>I have my init.py file</p>
<pre><code>def create_app(config_env=os.getenv("ENV"), register_blueprints=True):
app = Flask(__name__)
app.config.from_object(config[config_env])
db.init_app(app)
lm.init_app(app)
lm.login_view = "auth.login"
lm.login_message_category = "info"
lm.refresh_view = "auth.login"
lm.needs_refresh_message = "Session timedout, please re-login"
lm.login_message_category = "info"
@app.before_request
def my_func():
session.modified = True
@app.before_request
def before_request():
session.permanent = True
current_app.permanent_session_lifetime = timedelta(minutes=1)
</code></pre>
<p>and I have my model.py file:</p>
<pre><code>@lm.user_loader
def load_user(user_id):
return PortalUsers.query.get(int(user_id))
class PortalUsers(UserMixin, db.Model):
__tablename__ = "tbl_PortalUsers"
id = db.Column("Staff_Id", db.Integer, primary_key=True)
first_name = db.Column("First_Name", db.String(100, "SQL_Latin1_General_CP1_CI_AS"))
last_name = db.Column("Last_Name", db.String(100, "SQL_Latin1_General_CP1_CI_AS"))
full_name = db.Column(
db.String(201, "SQL_Latin1_General_CP1_CI_AS"),
db.Computed("(([First_Name]+' ')+[Last_Name])", persisted=False),
)
phone = db.Column("Phone", db.String(10, "SQL_Latin1_General_CP1_CI_AS"))
email = db.Column("Email", db.String(50, "SQL_Latin1_General_CP1_CI_AS"))
role_type = db.Column("Role_Type", db.String(20, "SQL_Latin1_General_CP1_CI_AS"))
portal_active = db.Column("Portal_Active", db.Integer)
date_created = db.Column("Date_Created", db.DateTime, default=datetime.utcnow)
created_by = db.Column("Created_By", db.String(100, "SQL_Latin1_General_CP1_CI_AS"))
last_updated = db.Column("Last_Updated", db.DateTime, default=datetime.utcnow)
last_updated_by = db.Column(
"Last_Updated_By", db.String(100, "SQL_Latin1_General_CP1_CI_AS")
)
date_deleted = db.Column("Date_Deleted", db.DateTime)
time_zone_id = db.Column(
"TimeZoneId",
db.Integer,
db.ForeignKey("tbl_TimeZones.id"),
unique=False,
nullable=False,
default=1,
)
password = db.Column("Password", db.String(64), nullable=False)
</code></pre>
<p>On my localhost, everything works as expected, and after 1 minute, if I refresh my link I am redirected to the login page with a flashed message.</p>
<p>In production in my Azure Web App, I am not redirected. The session cookie in production does change; for example:</p>
<p><a href="https://i.sstatic.net/vODXw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vODXw.png" alt="enter image description here" /></a></p>
<p>and after the 1 minute:</p>
<p><a href="https://i.sstatic.net/Fs2GJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fs2GJ.png" alt="enter image description here" /></a></p>
<p>I am not sure where to troubleshoot. Is it something to do with Azure? Or is it a flag I am missing for Flask-Login? Or is it something actually to do with Flask? Any help to point me in the correct direction would be greatly appreciated.</p>
<p><strong>UPDATE</strong></p>
<p>Looking at my session Cookies on localhost, I get the expected session on login, and on expiry. As an example the expired session cookie is:</p>
<pre><code>eyJfZnJlc2giOmZhbHNlLCJfcGVybWFuZW50Ijp0cnVlLCJjc3JmX3Rva2VuIjoiYWFhNTE1ZTEwNDg5OGZlYzMwZDk1ZDFkY2IyN2ZkYWJiOTI0M2Y3MyJ9.ZU2zrQ.-GCaBYI8ksvVBRjylRgLM2Cmst0
</code></pre>
<p>which decodes to:</p>
<pre><code>{
"_fresh": false,
"_permanent": true,
"csrf_token": "aaa515e104898fec30d95d1dcb27fdabb9243f73"
}
</code></pre>
<p>But in production, on expiry or on logout, I get a session cookie like:</p>
<pre><code>eyJfcGVybWFuZW50Ijp0cnVlfQ.ZU20qQ.tyNVkqrhHbHtDwQUU2PMc7p6Pco
</code></pre>
<p>which gives me <code>[ERR: Not JSON data]</code>.</p>
<p>This leads me to believe it has something to do with Flask, since Flask-Login can't read the invalid session. However, I'm not seeing much in the community about it, so I think I must still be doing something incorrectly.</p>
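<p>For reference, the first dot-separated segment of a Flask session cookie is unpadded URL-safe base64 around JSON (a leading dot instead marks a zlib-compressed payload). A standard-library sketch for inspecting it, without verifying the signature:</p>

```python
import base64
import json

def decode_session_payload(cookie):
    payload = cookie.split(".")[0]        # drop timestamp and signature
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))
```

<p>Applied to the production cookie above, this yields <code>{'_permanent': True}</code>, i.e. the payload is valid JSON but is missing the <code>_fresh</code> and <code>csrf_token</code> keys that the localhost cookie carries.</p>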
| <python><flask><gunicorn><flask-login> | 2023-11-09 22:59:14 | 1 | 303 | ghawes |
77,456,817 | 2,383,070 | Is there a way to round a Duration dtype in Polars? | <p>In Pandas, you can round a Timedelta like this...</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
pd.Timedelta("1h20min").round("1h")
</code></pre>
<p>Can something similar be done in Polars?</p>
<p>When I try rounding a duration column, the error tells me this only works for date/datetime dtypes.</p>
<pre><code>import polars as pl
from datetime import datetime, timedelta
a = pl.DataFrame({"times": [timedelta(seconds=15), timedelta(seconds=30)]})
a.select(pl.col("times").dt.round("1m"))
</code></pre>
<blockquote>
<p>InvalidOperationError: <code>round</code> operation not supported for dtype <code>duration[μs]</code> (expected: date/datetime)</p>
</blockquote>
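<p>For comparison, the arithmetic behind this kind of rounding is easy to reproduce on plain <code>timedelta</code> objects, which is one fallback while the duration dtype lacks <code>round</code>; a sketch (note Python's <code>round</code> uses banker's rounding on exact halves):</p>

```python
from datetime import timedelta

def round_timedelta(td, unit):
    # Express td as a (float) number of units, round, and scale back up.
    return round(td / unit) * unit
```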
| <python><python-polars> | 2023-11-09 22:58:02 | 2 | 3,511 | blaylockbk |
77,456,758 | 2,945,498 | python ray out of memory (OOM) with RAY_memory_monitor_refresh_ms set to 0 | <p>I'm using <a href="https://modin.readthedocs.io/en/stable/" rel="nofollow noreferrer">Modin</a> to run pandas code on data that can't fit in memory. It works very well locally with a 30GB dataset and my 16GB of RAM.
Now I want to speed this up, so I decided to run a cluster on GCP with Modin on <a href="https://www.ray.io/" rel="nofollow noreferrer">Ray</a>.
My dataset is ~50GB, stored on GCS as multiple files of ~40MB each, and I'm setting up 7 <code>n1-standard-2</code> machines with ~7GB of RAM each.
I want to test this setup before moving on to TB-scale datasets.
But when I try to create the dataset, my workers get killed with the following error, despite setting all the environment variables:</p>
<pre><code>(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
</code></pre>
<p>I'm using the code below to initialize <code>ray</code>:</p>
<pre class="lang-py prettyprint-override"><code>import modin.pandas as pd
import ray
import os
pd.DEFAULT_NPARTITIONS=280
os.environ["MODIN_ENGINE"] = "ray"
runtime_env = {
    'env_vars': {
        "RAY_memory_monitor_refresh_ms": "0",
        "RAY_memory_usage_threshold": "3"
    }
}
ray.init(runtime_env=runtime_env, _plasma_directory="/tmp")
df = pd.read_parquet("gs://test-data-set/parquets/")
</code></pre>
<p>Any advice would be appreciated.</p>
| <python><google-cloud-platform><ray> | 2023-11-09 22:43:48 | 2 | 967 | tsh |
77,456,745 | 284,932 | Negative values in data passed to MultinomialNB when vectorizing with Word2Vec | <p>I am currently working on a project where I'm attempting to use Word2Vec embeddings in combination with Multinomial Naive Bayes (MultinomialNB), evaluating accuracy and precision.</p>
<pre><code>import pandas as pd
import numpy as np, sys
from sklearn.model_selection import train_test_split
from gensim.models import Word2Vec
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, precision_score
from datasets import load_dataset
df = load_dataset('celsowm/bbc_news_ptbr', split='train')
X = df['texto']
y = df['categoria']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
sentences = [sentence.split() for sentence in X_train]
w2v_model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)
def vectorize(sentence):
    words = sentence.split()
    words_vecs = [w2v_model.wv[word] for word in words if word in w2v_model.wv]
    if len(words_vecs) == 0:
        return np.zeros(100)
    words_vecs = np.array(words_vecs)
    return words_vecs.mean(axis=0)
X_train = np.array([vectorize(sentence) for sentence in X_train])
X_test = np.array([vectorize(sentence) for sentence in X_test])
clf = MultinomialNB()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print('Accuracy:', accuracy_score(y_test, y_pred))
print('Precision:', precision_score(y_test, y_pred, pos_label='positive'))
</code></pre>
<p>However, I've encountered an error:</p>
<pre><code>ValueError("Negative values in data passed to %s" % whom)
ValueError: Negative values in data passed to MultinomialNB (input X)
</code></pre>
<p>I would appreciate any insights into resolving this issue.</p>
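<p>The error itself points at the cause: averaged Word2Vec vectors contain negative components, and <code>MultinomialNB</code> only accepts non-negative features. A minimal sketch of one workaround (the data below is made up): min-max scale every feature into <code>[0, 1]</code> before fitting. Switching to <code>GaussianNB</code>, which accepts real-valued features, is another option.</p>

```python
import numpy as np

def to_nonnegative(X):
    """Min-max scale each feature into [0, 1] so MultinomialNB accepts it."""
    X = np.asarray(X, dtype=float)
    mins = X.min(axis=0)
    spans = X.max(axis=0) - mins
    spans[spans == 0] = 1.0          # constant columns: avoid dividing by 0
    return (X - mins) / spans

# Made-up stand-ins for the averaged Word2Vec vectors:
X_train = np.array([[-0.4, 2.0], [1.2, -1.0], [0.0, 0.5]])
X_scaled = to_nonnegative(X_train)
print(X_scaled.min(), X_scaled.max())    # 0.0 1.0
```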
| <python><scikit-learn><gensim><word2vec><naivebayes> | 2023-11-09 22:41:04 | 2 | 474 | celsowm |
77,456,727 | 2,774,885 | python VSCode: suggested way to run code differently based on launch / debug state | <p>I've got some python code which iterates over a large number of files, and I've written a simple multithread pool, which speeds it up substantially. (The app is not remotely complex, there are no concurrency or locking issues as each input file is mapped 1:1 to an output file and there are no shared data structures across these threads...)</p>
<p>However, when I'm debugging (in VSCode), it's far easier on me to run it as a single thread so I added a variable to the function call that allows me to force it to run without threads. This solves the problem but is manual and clunky. I've thought of at least a couple of ways I could semi-automate this, but knowing very little about this stuff (obviously) I figured I'd ask for suggestions.</p>
<ol>
<li>I could set an environment variable, pull that into the script, and then based on that variable choose the single vs multi-threaded code.</li>
<li>I could add an argument to the existing parser that selects single/multi threaded mode.</li>
</ol>
<p>If either of these exist, I could presumably modify the <code>launch.json</code> stanzas for the debugger and have it set those arguments or variables before it launches the code in the debugger.</p>
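<p>A sketch of option 1 (the variable name <code>APP_SINGLE_THREAD</code> is made up): the debug configuration in <code>launch.json</code> can carry an <code>"env"</code> block, and the script just reads that variable to pick its execution mode.</p>

```python
import os
from multiprocessing.pool import ThreadPool

# "APP_SINGLE_THREAD" is a hypothetical name; the launch.json debug
# configuration would set it via:  "env": {"APP_SINGLE_THREAD": "1"}
SINGLE_THREAD = os.environ.get("APP_SINGLE_THREAD", "0") == "1"

def process(path):
    return path.upper()      # stand-in for the real per-file work

def run(files, single_thread=SINGLE_THREAD):
    if single_thread:        # trivial to step through in the debugger
        return [process(f) for f in files]
    with ThreadPool() as pool:
        return pool.map(process, files)

print(run(["a.txt", "b.txt"], single_thread=True))   # ['A.TXT', 'B.TXT']
```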
<p>BUT, and this part I'm confused on... does the Ctrl-F5 "run program" shortcut always execute exactly the same as the "F5" shortcut that launches the program inside the debugger?</p>
<p>I very often use Ctrl-F5 when I know (or at least think!) that the code will run, whereas I use F5 when I think it won't. Is there some reasonable way to have Ctrl-F5 do something "very slightly different" than just F5?</p>
<p>It's also entirely possible that what I've just described is a <em>Very Bad Idea</em> for reasons that are not obvious to me, and/or that there is much more elegant and unknown-to-me wisdom in this matter... ;-)</p>
| <python><visual-studio-code><vscode-debugger> | 2023-11-09 22:36:37 | 1 | 1,028 | ljwobker |
77,456,701 | 18,139,225 | Cannot load Jupyter Lab after installing "jupyter-markdown" extension | <p>I have just installed Jupyter Lab via <code>pip install</code>. It worked well until I installed the "jupyter-markdown" extension via <code>pip install jupyter-markdown</code>.</p>
<p>Now I cannot load Jupyter Lab. I get the following error:</p>
<pre><code>[W 2023-11-09 22:59:30.662 ServerApp] 404 GET /lab/api/settings?1699567170458 (::1): Schema not found: C:/Users/ephra/miniconda3/envs/md/share/jupyter/lab/schemas\@datalayer/jupyter-admin\plugin.json
[W 2023-11-09 22:59:30.663 LabApp] wrote error: 'Schema not found: C:/Users/ephra/miniconda3/envs/md/share/jupyter/lab/schemas\\@datalayer/jupyter-admin\\plugin.json'
Traceback (most recent call last):
File "C:\Users\ephra\miniconda3\envs\md\Lib\site-packages\tornado\web.py", line 1784, in _execute
result = method(*self.path_args, **self.path_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ephra\miniconda3\envs\md\Lib\site-packages\tornado\web.py", line 3290, in wrapper
return method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ephra\miniconda3\envs\md\Lib\site-packages\jupyterlab_server\settings_handler.py", line 57, in get
result, warnings = get_settings(
^^^^^^^^^^^^^
File "C:\Users\ephra\miniconda3\envs\md\Lib\site-packages\jupyterlab_server\settings_utils.py", line 386, in get_settings
settings_list, warnings = _list_settings(
^^^^^^^^^^^^^^^
File "C:\Users\ephra\miniconda3\envs\md\Lib\site-packages\jupyterlab_server\settings_utils.py", line 211, in _list_settings
schema, version = _get_schema(
^^^^^^^^^^^^
File "C:\Users\ephra\miniconda3\envs\md\Lib\site-packages\jupyterlab_server\settings_utils.py", line 55, in _get_schema
raise web.HTTPError(404, notfound_error % path)
tornado.web.HTTPError: HTTP 404: Not Found (Schema not found: C:/Users/ephra/miniconda3/envs/md/share/jupyter/lab/schemas\@datalayer/jupyter-admin\plugin.json)
[W 2023-11-09 22:59:30.665 LabApp] 404 GET /lab/api/settings?1699567170458 (a8be64fc149d42c2b38ccd8d38f052f5@::1) 100.08ms referer=http://localhost:8888/lab
</code></pre>
<p>I can remove the virtual environment, recreate it and reinstall Jupyter Lab. My question is whether there is a way of installing the "jupyter-markdown" extension and using it in Jupyter Lab. I get the same error if I install the <code>jupyter-markdown</code> extension via the Jupyter Lab extensions tab.</p>
<p>I use Windows 11, python 3.11.0</p>
| <python><markdown><jupyter-lab> | 2023-11-09 22:30:00 | 3 | 441 | ezyman |
77,456,584 | 12,827,931 | Adding alpha area (confidence interval) to sns pointplot | <p>Assume a dataframe</p>
<pre><code>df = pd.DataFrame({"X" : [1, 1, 2, 2, 3, 3, 4, 4],
"Model" : ["A", "B", "A", "B", "A", "B", "A", "B"],
"Lower" : [0.2, 0.3, 0.2, 0.2, 0.25, 0.3, 0.3, 0.25],
"Median" : [0.5, 0.55, 0.6, 0.55, 0.5, 0.6, 0.5, 0.5],
"Upper" : [0.6, 0.7, 0.65, 0.7, 0.7, 0.65, 0.55, 0.7]})
</code></pre>
<p>and a plot:</p>
<pre><code>pl1 = sns.catplot(data = df, kind = 'point',
hue = 'Model',
x = 'X',
y = 'Median', sharey = False, height = 3, aspect = 1.5)
pl1.set(ylim = (0, 1))
</code></pre>
<p>that looks like this</p>
<p><a href="https://i.sstatic.net/Vnvj9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Vnvj9.png" alt="enter image description here" /></a></p>
<p>What I'd like to do is to add a confidence interval based on the columns <code>Lower</code> and <code>Upper</code> that, for example, looks like this (for the blue curve):</p>
<p><a href="https://i.sstatic.net/1GEEW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1GEEW.png" alt="enter image description here" /></a></p>
<p>Is it possible?</p>
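<p>It is: seaborn draws on ordinary matplotlib axes, so a band from <code>Lower</code> to <code>Upper</code> can be laid over the lines with <code>fill_between</code>. Below is a standalone matplotlib sketch of the idea; to draw onto the catplot itself you would use its axes (<code>pl1.ax</code> for a single facet), and note that on a categorical pointplot the x positions are <code>0..n-1</code>, so there you would pass <code>range(len(grp))</code> instead of the raw <code>X</code> values.</p>

```python
import matplotlib
matplotlib.use("Agg")        # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({"X": [1, 1, 2, 2, 3, 3, 4, 4],
                   "Model": ["A", "B", "A", "B", "A", "B", "A", "B"],
                   "Lower": [0.2, 0.3, 0.2, 0.2, 0.25, 0.3, 0.3, 0.25],
                   "Median": [0.5, 0.55, 0.6, 0.55, 0.5, 0.6, 0.5, 0.5],
                   "Upper": [0.6, 0.7, 0.65, 0.7, 0.7, 0.65, 0.55, 0.7]})

fig, ax = plt.subplots(figsize=(4.5, 3))
for name, grp in df.groupby("Model"):
    grp = grp.sort_values("X")
    line, = ax.plot(grp["X"], grp["Median"], marker="o", label=name)
    # shaded band from Lower to Upper in the line's own colour
    ax.fill_between(grp["X"], grp["Lower"], grp["Upper"],
                    color=line.get_color(), alpha=0.2)
ax.set_ylim(0, 1)
ax.legend(title="Model")
```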
| <python><pandas><matplotlib><seaborn><visualization> | 2023-11-09 22:01:25 | 1 | 447 | thesecond |
77,456,537 | 4,913,108 | How to create Pandas bar plot with values from one column but filled (stacked) with values from another? | <p>I have a Pandas dataframe like the one below:</p>
<pre><code>df = pd.DataFrame({'decile': [1,1,2,2,2,3,3,3],
'conversion': [.3,.3,.5,.5,.5,.55,.55,.55],
'campaign': ['tv','online','tv','online','radio','tv','online','radio'],
'campaign_pct': [.6,.4,.5,.4,.1,.45,.35,.2]})
df
decile conversion campaign campaign_pct
0 1 0.30 tv 0.60
1 1 0.30 online 0.40
2 2 0.50 tv 0.50
3 2 0.50 online 0.40
4 2 0.50 radio 0.10
5 3 0.55 tv 0.45
6 3 0.55 online 0.35
7 3 0.55 radio 0.20
</code></pre>
<p>I want to create a bar plot with <code>decile</code> on the x-axis, the bar heights to be the values for <code>conversion</code>, and fill each bar with the proportion corresponding to <code>campaign_pct</code>, with <code>campaign</code> being in the legend. It should look something like the sketch below (the fill can be colors, doesn't have to be lines). How can I do this?
<a href="https://i.sstatic.net/Vvnzz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Vvnzz.png" alt="enter image description here" /></a></p>
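<p>One way to read the spec: each campaign segment's height is <code>conversion * campaign_pct</code>, so the segments of a bar stack back up to that decile's <code>conversion</code>. A sketch with pandas' built-in stacked bar plotting:</p>

```python
import matplotlib
matplotlib.use("Agg")        # headless backend so the sketch runs anywhere
import pandas as pd

df = pd.DataFrame({'decile': [1, 1, 2, 2, 2, 3, 3, 3],
                   'conversion': [.3, .3, .5, .5, .5, .55, .55, .55],
                   'campaign': ['tv', 'online', 'tv', 'online', 'radio',
                                'tv', 'online', 'radio'],
                   'campaign_pct': [.6, .4, .5, .4, .1, .45, .35, .2]})

# Height of each colored segment = its share of the bar's total height.
df['segment'] = df['conversion'] * df['campaign_pct']

# Wide layout: one row per decile, one column per campaign; deciles that
# lack a campaign (decile 1 has no radio) become 0-height segments.
wide = df.pivot(index='decile', columns='campaign', values='segment').fillna(0)

ax = wide.plot.bar(stacked=True)   # the legend is taken from the columns
ax.set_xlabel('decile')
ax.set_ylabel('conversion')
```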
| <python><pandas> | 2023-11-09 21:46:15 | 1 | 5,750 | Gaurav Bansal |
77,456,519 | 12,297,666 | Difference between one-hot-encoded and integer output in Sklearn | <p>Consider the case of a multiclass classification problem with 12 classes (classes 0 to 11). These classes are <strong>nominal</strong> categorical variables (no ranking order).</p>
<p>I have trained two models (<code>M1</code> and <code>M2</code>), as follows:</p>
<p><code>M1</code>: The output vector, <code>y_train</code>, is provided in one-hot-encoded format:</p>
<pre><code>array([[0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
...,
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]])
</code></pre>
<p><code>M2</code>: The output vector, <code>y_train</code>, is provided as integer values.</p>
<pre><code>array([ 6, 8, 1, ..., 11, 0, 4], dtype=int64)
</code></pre>
<p>I have trained these models with <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html" rel="nofollow noreferrer">RandomForestClassifier</a> and <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html" rel="nofollow noreferrer">MLPClassifier</a>. Below you can see the code of the two models for the Random Forest (the codes for the MLP are exactly the same, just changing the Classifier type).</p>
<p>Code for <code>M1</code>:</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
df = pd.read_csv('dataset.csv')
x = df.iloc[:, :-1].values
y = df['FLAG'].values
# One-hot-encode labels
label_bin = LabelBinarizer()
unique_classes = np.unique(y)
label_bin.fit_transform(unique_classes )
# Train-test split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
del x, y
# RF
# one-hot-output
y_train_one_hot = label_bin.transform(y_train)
classifier_RF = RandomForestClassifier(n_estimators=100, criterion='entropy', min_samples_split=2,
min_samples_leaf=1,
random_state=42)
# Train
classifier_RF.fit(x_train, y_train_one_hot)
# Predict on test set
y_pred = classifier_RF.predict(x_test)
# Inverse predictions
y_pred_orig = label_bin.inverse_transform(y_pred)
# Classification report
print(classification_report(y_test, y_pred_orig))
</code></pre>
<p>and this for <code>M2</code>:</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
df = pd.read_csv('dataset.csv')
x = df.iloc[:, :-1].values
y = df['FLAG'].values
# Train-test split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
del x, y
# RF
classifier_RF = RandomForestClassifier(n_estimators=100, criterion='entropy', min_samples_split=2,
min_samples_leaf=1,
random_state=42)
# Train
classifier_RF.fit(x_train, y_train)
# Predict on test set
y_pred = classifier_RF.predict(x_test)
# Classification report
print(classification_report(y_test, y_pred))
</code></pre>
<p>Are these codes technically correct, or am I missing something? I am asking because, so far, I have found that for a multiclass problem the <code>M1</code> approach (one-hot-encoded output) is the correct/suitable one. But I am getting a higher <code>f1-score</code> on the <code>y_test</code> set with the <code>M2</code> approach: while <code>M1</code> scores <code>0.73</code>, <code>M2</code> gives me <code>0.77</code>. The same happened when I used <code>MLPClassifier</code>: <code>M1</code> scores <code>0.73</code> while <code>M2</code> scores <code>0.76</code>.</p>
<p><strong>EDIT:</strong> Here is the <code>M1</code> MLP:</p>
<pre><code>classifier_MLP = MLPClassifier(hidden_layer_sizes=(2*x_train.shape[1], 2*y_train_one_hot.shape[1]),
max_iter=250,
random_state=42)
</code></pre>
<p>and here the <code>M2</code>:</p>
<pre><code>classifier_MLP = MLPClassifier(hidden_layer_sizes=(2*x_train.shape[1], len(np.unique(y_train))),
max_iter=250,
random_state=42)
</code></pre>
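<p>The two setups are not just different encodings of the same problem. With integer labels, scikit-learn runs ordinary multiclass classification; a one-hot <code>y</code> is interpreted as a <em>multilabel</em> indicator matrix, so the model may predict zero or several ones per row, and the scores are not directly comparable. A small sketch (synthetic data) showing the shape difference:</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelBinarizer

rng = np.random.RandomState(0)
X = rng.rand(60, 4)                      # synthetic features
y = np.tile([0, 1, 2], 20)               # 3 balanced integer classes

# Integer labels -> ordinary multiclass: one predicted class per row.
clf_int = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(clf_int.predict(X[:2]).shape)      # (2,)

# One-hot labels -> scikit-learn sees a *multilabel* indicator matrix: the
# prediction is a (n_samples, n_classes) 0/1 matrix whose rows can contain
# zero or several ones, so M1 and M2 solve genuinely different problems.
Y = LabelBinarizer().fit_transform(y)
clf_hot = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, Y)
print(clf_hot.predict(X[:2]).shape)      # (2, 3)
```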
| <python><machine-learning><scikit-learn><one-hot-encoding> | 2023-11-09 21:42:23 | 1 | 679 | Murilo |
77,456,488 | 226,473 | How to prepend (and append) to string if it contains a whitespace character? | <p>Here is what I've tried:</p>
<pre><code>
df[df.path.str.contains(" ", na=False)] = df['^' + df.path.astype('str') + '$']
</code></pre>
<p>This gives me an error though.
Basically, what I tried was to set the 'path' column to the same value with one character prepended (and appended), but only for the rows that contain a single whitespace.</p>
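<p>The <code>KeyError</code> comes from the right-hand side: <code>df['^' + df.path + '$']</code> builds new strings and then uses them as <em>column labels</em>. For conditional row updates the usual tool is boolean-mask assignment with <code>.loc</code>; a sketch with made-up paths:</p>

```python
import pandas as pd

# Made-up sample paths; None mimics the NaN rows handled by na=False.
df = pd.DataFrame({"path": ["/a b/c.html", "/clean.html", None]})

mask = df["path"].str.contains(" ", na=False)
# The same mask selects the rows on both sides, so only matching rows change:
df.loc[mask, "path"] = "^" + df.loc[mask, "path"] + "$"
print(df["path"].tolist())   # ['^/a b/c.html$', '/clean.html', None]
```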
<p>Complete stack trace (I'm working in a local Jupyter notebook):</p>
<pre><code>KeyError Traceback (most recent call last)
Cell In[180], line 22
17 df['path'] = df.path.str.replace(" ", "$|^")
18 #print(df[df.path.str.contains("\$\|\^", na=False)].shape[0])
19
20 #multipath = df[df.path.str.contains(" ", na=False)]
21 #multipath['path'] = multipath['path'].str.replace(' ','$|^')
---> 22 df[df.path.str.contains(" ", na=False)] = df['^' + df.path.astype('str') + '$']
23 #multipath['path'] =
24 #print(multipath.shape[0])
27 print("original shape:", df.shape[0])
File ~/Library/Python/3.9/lib/python/site-packages/pandas/core/frame.py:3902, in DataFrame.__getitem__(self, key)
3900 if is_iterator(key):
3901 key = list(key)
-> 3902 indexer = self.columns._get_indexer_strict(key, "columns")[1]
3904 # take() does not accept boolean indexers
3905 if getattr(indexer, "dtype", None) == bool:
File ~/Library/Python/3.9/lib/python/site-packages/pandas/core/indexes/base.py:6114, in Index._get_indexer_strict(self, key, axis_name)
6111 else:
6112 keyarr, indexer, new_indexer = self._reindex_non_unique(keyarr)
-> 6114 self._raise_if_missing(keyarr, indexer, axis_name)
6116 keyarr = self.take(indexer)
6117 if isinstance(key, Index):
6118 # GH 42790 - Preserve name from an Index
File ~/Library/Python/3.9/lib/python/site-packages/pandas/core/indexes/base.py:6175, in Index._raise_if_missing(self, key, indexer, axis_name)
6173 if use_interval_msg:
6174 key = list(key)
-> 6175 raise KeyError(f"None of [{key}] are in the [{axis_name}]")
6177 not_found = list(ensure_index(key)[missing_mask.nonzero()[0]].unique())
6178 raise KeyError(f"{not_found} not in index")
KeyError: "None of [Index(['^nan$',\n '^/content/icebreakers/en_us/products/ice-cubes-wintergreen-1-5-oz-tins-8-ct-box.html$',\n '^/content/corporate_SSF/en_us/careers/retirees/contact-us.html$',\n '^/content/franchise/en_us/products.html$',\n '^/content/franchise/en_us/our-brands/good-and-plenty/good-and-plenty-1-8-oz.html$',\n '^/content/dam/chocolateworld/en_us/documents/times-square-reopening-menu.pdf$',\n '^/content/franchise/en_us/products/hersheys-milk-chocolate-snack-size-bag.html$',\n '^/content/jolly-rancher/en_us/products/orignal-hard-candy-60-oz.html$',\n '^/content/kitkat/en_us/products/white-bar.html$',\n '^/content/dam/chocolateworld/en_us/documents/birthday-party-packages.pdf$',\n ...\n '^/kitchens/en_us/recipes/grilled-peanut-butter-chocolate-and-jelly-sandwich.html$',\n '^/kitchens/en_us/recipes/apple-wheels.html$',\n '^/whoppers/en_us/products/strawberry-milkshake.html$',\n '^/kitchens/en_us/blogs/chocolate-covered-valentines-day.html$',\n '^/en_us/our-brands/bubble-yum/bubble-yum-cotton-candy-5-piece.html$|^/en_us/our-brands/bubble-yum/bubble-yum-sugarless-original-10-piece.html$',\n '^/en_us/ad-cookie-policy.html$',\n '^/hersheysolutions/en_us/employee_register.html$',\n '^/reeses/en_us/products/$|^/en_us/our-brands/milk-duds/milk-duds-snack-size-chewy-caramels-9-3-oz.html$|^/reeses/en_us/products/reeses-peanut-butter-eggs-6-pack.html$|^/reeses/en_us/products/reeses-chocolate-lovers-cups.html$',\n '^nan$', '^/content/corporate_SSF/en_us/investors.html$'],\n dtype='object', length=4586)] are in the [columns]"
</code></pre>
| <python><pandas> | 2023-11-09 21:35:24 | 1 | 21,308 | Ramy |
77,456,434 | 7,124,155 | I need to aggregate and transpose one column to rows in Pyspark (long to wide format) | <p>I need to "explode" one column to multiple columns - similar to pivot but I need new column names.</p>
<p>Given this Spark dataframe:</p>
<pre><code>df = spark.createDataFrame(
[(1, 6, 200),
(1, 6, 300),
(1, 6, 400),
(1, 7, 1000),
(1, 6, 5000)],
"id int, term int, amount int")
df.show()
+---+----+------+
| id|term|amount|
+---+----+------+
| 1| 6| 200|
| 1| 6| 300|
| 1| 6| 400|
| 1| 7| 1000|
| 1| 7| 5000|
+---+----+------+
</code></pre>
<p>How can I break <code>amount</code> into new columns? Some cells will be empty, which is OK.</p>
<p><a href="https://i.sstatic.net/npLQe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/npLQe.png" alt="enter image description here" /></a></p>
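<p>A common recipe is: number the rows within each <code>(id, term)</code> group, then pivot on that number. In PySpark that would look roughly like <code>F.row_number().over(Window.partitionBy('id', 'term').orderBy('amount'))</code> followed by <code>groupBy('id', 'term').pivot('n').agg(F.first('amount'))</code> (untested here); the same idea sketched in pandas, where <code>cumcount</code> plays the role of <code>row_number</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 1, 1, 1],
                   "term": [6, 6, 6, 7, 7],
                   "amount": [200, 300, 400, 1000, 5000]})

# 1-based position of each row inside its (id, term) group
df["n"] = df.groupby(["id", "term"]).cumcount() + 1

# Pivot the position into columns; missing slots simply stay empty (NaN).
wide = (df.pivot(index=["id", "term"], columns="n", values="amount")
          .add_prefix("amount_")
          .reset_index())
print(wide)
```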
| <python><dataframe><apache-spark><pyspark> | 2023-11-09 21:23:03 | 2 | 1,329 | Chuck |
77,456,285 | 1,432,980 | can't get attribute function when running multiple processes | <p>I wanted to compare the speed of multiprocessing against a normal linear (single-process) function.</p>
<p>My code looks like this:</p>
<pre><code>from multiprocessing import Pool, Manager
import time
from faker import Faker
num_rows = 1000000
items = [
['Column_1', Faker(), "pyint", {}],
['Column_2', Faker(), "random_element", {"elements": ["Kayden", "Franklin", "Gabriel", "Vincent"]}],
['Column_3', Faker(), "random_element", {"elements": ["Miller", "Ward", "Edwards", "Parry"]}],
['Column_4', Faker(), "pyint", {}],
['Column_5', Faker(), "pyint", {}],
['Column_6', Faker(), "pyint", {}],
['Column_7', Faker(), "pyint", {}],
['Column_8', Faker(), "pyint", {}],
['Column_9', Faker(), "pyint", {}],
['Column_10', Faker(), "pyint", {}],
['Column_11', Faker(), "pyint", {}],
['Column_12', Faker(), "pyint", {}],
['Column_13', Faker(), "pyint", {}],
['Column_14', Faker(), "pyint", {}],
['Column_15', Faker(), "pyint", {}],
['Column_16', Faker(), "pyint", {}],
['Column_17', Faker(), "pyint", {}],
['Column_18', Faker(), "pyint", {}],
['Column_19', Faker(), "pyint", {}],
['Column_20', Faker(), "pyint", {}],
['Column_21', Faker(), "pyint", {}],
['Column_22', Faker(), "pyint", {}],
['Column_23', Faker(), "pyint", {}],
]
def concurrent():
with Manager() as dict_manager:
data_frame = dict_manager.dict()
global faker_function
def faker_function(params):
items = []
for _ in range(0, num_rows - 1):
items.append(getattr(params[1], params[2])(**params[3]))
data_frame[params[0]] = items
curr_time = time.time()
with Pool(10) as p:
p.map(faker_function, items)
elapsed = time.time() - curr_time
print('Concurrent', elapsed)
print('Dict size', len(data_frame))
def linear():
data_frame = {}
def faker_function(params):
items = []
for _ in range(0, num_rows - 1):
items.append(getattr(params[1], params[2])(**params[3]))
data_frame[params[0]] = items
curr_time = time.time()
for item in items:
faker_function(item)
elapsed = time.time() - curr_time
print('Linear time', elapsed)
print('Dict size', len(data_frame))
if __name__ == "__main__":
concurrent()
linear()
</code></pre>
<p>Both functions, <code>linear</code> and <code>concurrent</code>, are supposed to generate some data and write it into a dictionary (for multiprocessing I am using a <code>Manager</code> object).</p>
<p>I have declared the inner function <code>faker_function</code> as <code>global</code>.</p>
<p>But when I am running the app I get this error from the processes</p>
<pre><code>Can't get attribute 'faker_function' on <module '__mp_main__'
</code></pre>
<p>What is the problem?</p>
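<p>The message means the worker processes cannot find <code>faker_function</code> when they start up: <code>Pool</code> pickles a function only by reference (module plus qualified name), and under the <code>spawn</code> start method (the Windows default) the child re-imports the module as <code>__mp_main__</code>, where a function defined inside <code>concurrent()</code> does not exist, <code>global</code> or not. The fix is to define the worker at module top level; a minimal sketch:</p>

```python
from multiprocessing import Pool

# Workers must live at module top level: Pool pickles a function only by
# reference (module + qualified name), and under the "spawn" start method
# the child re-imports the module and looks the name up -- a function
# defined inside another function is simply not there.
def square(n):
    return n * n

def run():
    with Pool(2) as pool:
        return pool.map(square, range(5))

if __name__ == "__main__":
    print(run())             # [0, 1, 4, 9, 16]
```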
| <python><concurrency><multiprocessing> | 2023-11-09 20:55:27 | 1 | 13,485 | lapots |
77,456,223 | 1,492,613 | why in the packaging.sys_tags() the lower cp version has higher priority than the py version which has no implementation spec? | <p>from the document:</p>
<blockquote>
<p>The iterable is ordered so that the best-matching tag is first in the sequence. The exact preferential order to tags is interpreter-specific, but in general the tag importance is in the order of:</p>
<ol>
<li>Interpreter</li>
<li>Platform</li>
<li>ABI</li>
</ol>
</blockquote>
<p>I would expect <code>py310</code> before <code>cp39</code>; however, the <code>py*</code> tags are always at the bottom:</p>
<pre><code>import packaging.tags
print(list(dict.fromkeys(i.interpreter for i in packaging.tags.sys_tags())))
</code></pre>
<pre><code>['cp310',
'cp39',
'cp38',
'cp37',
'cp36',
'cp35',
'cp34',
'cp33',
'cp32',
'py310',
'py3',
'py39',
'py38',
'py37',
'py36',
'py35',
'py34',
'py33',
'py32',
'py31',
'py30']
</code></pre>
<p>Does this mean that if I have an old package <code>cp36-abi3-manylinux_2_35_x86_64</code> and a new package <code>py310-none-any</code>, given that the current system is cp310, it actually prefers the old package over the new <code>py310-none-any</code>? I cannot get my head around this. Why should the order be like this?</p>
| <python><python-packaging> | 2023-11-09 20:45:19 | 1 | 8,402 | Wang |
77,456,185 | 21,370,869 | Passing command line arguments to a .ps1 file from Python's subprocess.Popen() method | <p>I would like to pass command line arguments to the file "my script.ps1", as well as pass <code>NoProfile</code> to the PowerShell executable.</p>
<p>The scripts (<code>c:/temp/my script.ps1</code>) content is:</p>
<pre><code>param($param1, $param2, $param3)
$PSBoundParameters.Keys.count
$param1
$param2
$param3
</code></pre>
<p>I want to call this script from Python 3. I tried a number of quoting approaches and they all fail; I can only call the script without arguments. The following is but some of the things I tried:</p>
<pre><code>import subprocess
#subprocess.Popen(['pwsh','-file "C:\Temp\my script.ps1" "cat" "dog"']) #Console Window Error --> The argument '-file "C:\Temp\my script.ps1" "cat"' is not recognized as the name of a script file
#subprocess.Popen(['pwsh','"C:\Temp\my script.ps1" "cat" "dog"']) #Console Window Error --> The argument '"C:\Temp\my script.ps1" "cat"' is not recognize
#subprocess.Popen(['pwsh -file ','"C:\Temp\my script.ps1" "cat" "dog"']) #Console window does not seem to open
subprocess.Popen(['pwsh','C:\Temp\my script.ps1']) #This is the only test that worked
input("Press enter to exit;") #Keep the console window open
</code></pre>
<p>I can run the <code>ps1</code> just fine on a PowerShell Terminal:</p>
<pre><code>pwsh 'C:\Temp\my script.ps1' 'cat' 'dog'
2
Cat
Dog
</code></pre>
<p>Or:</p>
<pre><code>pwsh -nologo -noprofile -file 'C:\Temp\my script.ps1' 'cat' 'dog'
2
Cat
Dog
</code></pre>
<p>I am working with PowerShell 7 and Python 3 on windows.</p>
<p>Any help would be greatly appreciated!</p>
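<p>The failing attempts pack several arguments into one list element, so PowerShell receives e.g. <code>-file "C:\Temp\my script.ps1" "cat"</code> as a single argument. With the list form of <code>Popen</code>/<code>run</code>, every argument goes in its own element and no manual quoting is needed, even with the space in the path. The working call would be the commented line below; the runnable part demonstrates the same argv plumbing with the Python interpreter:</p>

```python
import subprocess
import sys

# The working PowerShell call -- every argument is its own list element, so
# the space in the path needs no extra quoting (not executed here):
#   subprocess.run(["pwsh", "-NoLogo", "-NoProfile", "-File",
#                   r"C:\Temp\my script.ps1", "cat", "dog"])

# Same argv plumbing demonstrated with the Python interpreter:
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1:])",
     "my script.ps1", "cat", "dog"],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())    # ['my script.ps1', 'cat', 'dog']
```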
| <python><powershell> | 2023-11-09 20:36:58 | 1 | 1,757 | Ralf_Reddings |
77,456,088 | 1,738,895 | Difference in record counts between my SQL editor and Python psycopg2 query | <p>I am running the exact same redshift query copied from a sql editor to python. In python I have tested counts from pandas.read_sql and direct psycopg2 cursor.execute queries. On the SQL editor side, I have tested the counts in dbeaver, beekeeper and mysqlworkbench. I have stripped out all joins to simplify the query with the same variance occurring between the environment types. I have checked getdate() in all environments and see the same timestamp to rule out any difference in timezones between environments. If it were a few records, I could attribute it to timing, but it is in the 20-30k range for a 6-month window. What else should I be checking?</p>
<p>simplified query:</p>
<pre><code>select count(distinct u.customerid)
from users u
</code></pre>
<p>Python setup:</p>
<pre><code>conn = psycopg2.connect(f"dbname={DB} host={HOST} port={PORT}
user={USER} password={PWD}")
cursor = conn.cursor()
cursor.execute(''' select count(distinct u.customerid)
from users u
''')
result = cursor.fetchone()
print(result)
conn.close()
</code></pre>
| <python><sql><amazon-redshift><psycopg2> | 2023-11-09 20:13:44 | 0 | 1,444 | vizyourdata |
77,455,817 | 1,942,868 | Get the list of certain columns wihout duplicates from a model object | <p>I have a model object such as</p>
<pre><code>class MyMenu(models.Model):
key= m.CharField(max_length=255,unique=True)
name = m.CharField(max_length=255)
</code></pre>
<p>Now there is data like this:</p>
<pre><code> key,name
a hiro
b hiro
c fusa
d yoshi
e yoshio
f fusa
</code></pre>
<p>Now I would like to get the <code>name</code> column without duplication.</p>
<p>As a result I would like to get <code>["hiro","fusa","yoshi","yoshio"]</code></p>
<p>It is possible to do this with for loop, however this way is awkward and slow.</p>
<p>Is there any good method to use a model object?</p>
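<p>In Django the loop can be pushed into the database with <code>MyMenu.objects.values_list('name', flat=True).distinct()</code>, which runs <code>SELECT DISTINCT name</code>. Note that <code>distinct()</code> makes no ordering promise, whereas <code>dict.fromkeys</code> keeps first-seen order, matching the expected list; a runnable sketch of that in-memory equivalent:</p>

```python
# The in-memory equivalent of SELECT DISTINCT name, keeping first-seen order:
rows = [("a", "hiro"), ("b", "hiro"), ("c", "fusa"),
        ("d", "yoshi"), ("e", "yoshio"), ("f", "fusa")]
names = list(dict.fromkeys(name for _key, name in rows))
print(names)                 # ['hiro', 'fusa', 'yoshi', 'yoshio']
```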
| <python><django> | 2023-11-09 19:15:13 | 2 | 12,599 | whitebear |
77,455,738 | 5,277,842 | Finding the nouns in a sentence given the context in Python | <p>How can I find the nouns in a sentence, taking the context into account? I am using the <code>nltk</code> library as follows:</p>
<pre><code>text = 'I bought a vintage car.'
text = nltk.word_tokenize(text)
result = nltk.pos_tag(text)
result = [i for i in result if i[1] == 'NN']
#result = [('vintage', 'NN'), ('car', 'NN')]
</code></pre>
<p>The problem with this script is that it considers <code>vintage</code> as a noun, which can be true, but given the context, it is an adjective.</p>
<p>How can we achieve this task?</p>
<p><strong>Appendix:</strong> Using <code>textblob</code>, we get "vintage car" as the noun:</p>
<pre><code>!python -m textblob.download_corpora
from textblob import TextBlob
txt = "I bought a vintage car."
blob = TextBlob(txt)
print(blob.noun_phrases) #['vintage car']
</code></pre>
| <python><nlp><nltk><textblob> | 2023-11-09 18:59:33 | 2 | 429 | Shayan |
77,455,688 | 2,304,735 | How can I add data to the index column of a pandas dataframe? | <p>I want to output the following dataframe</p>
<pre><code>Ratio A B C
ROA
ROE
ROI
</code></pre>
<p>Below is the code that I am trying to run</p>
<pre><code>import pandas as pd
col_list = ['A', 'B', 'C']
data = pd.DataFrame([['ROA', 'ROE', 'ROI']], columns=col_list)
data.index.names = ['Ratios']
ratio_rows = ['ROA', 'ROE', 'ROI', 'PE Ratio']
data.set_axis(ratio_rows)
</code></pre>
<p>When I try to run the last line of code I get the following error</p>
<pre><code>Traceback (most recent call last):
Cell In[7], line 1
data.set_axis(ratio_rows)
File D:\Users\Mahmoud\anaconda3\Lib\site-packages\pandas\core\frame.py:5034 in set_axis
return super().set_axis(labels, axis=axis, copy=copy)
File D:\Users\Mahmoud\anaconda3\Lib\site-packages\pandas\core\generic.py:708 in set_axis
return self._set_axis_nocheck(labels, axis, inplace=False, copy=copy)
File D:\Users\Mahmoud\anaconda3\Lib\site-packages\pandas\core\generic.py:720 in _set_axis_nocheck
setattr(obj, obj._get_axis_name(axis), labels)
File D:\Users\Mahmoud\anaconda3\Lib\site-packages\pandas\core\generic.py:6002 in __setattr__
return object.__setattr__(self, name, value)
File pandas\_libs\properties.pyx:69 in pandas._libs.properties.AxisProperty.__set__
File D:\Users\Mahmoud\anaconda3\Lib\site-packages\pandas\core\generic.py:730 in _set_axis
self._mgr.set_axis(axis, labels)
File D:\Users\Mahmoud\anaconda3\Lib\site-packages\pandas\core\internals\managers.py:225 in set_axis
self._validate_set_axis(axis, new_labels)
File D:\Users\Mahmoud\anaconda3\Lib\site-packages\pandas\core\internals\base.py:70 in _validate_set_axis
raise ValueError(
ValueError: Length mismatch: Expected axis has 0 elements, new values have 4 elements
</code></pre>
<p>What can I do to put the ratio_rows in the index column?</p>
| <python><pandas><dataframe> | 2023-11-09 18:51:08 | 2 | 515 | Mahmoud Abdel-Rahman |
77,455,436 | 11,898,085 | Holoviz Panel: how to extract the button name in a callback method? | <p>I need the button name for my Holoviz Panel callback method because the name serves also as a key to a dictionary containing sql commands to be executed on click. I’ve tried to search if the event passed to the callback could have some properties including the name but I have found nothing. There must(?) be a way of identifying the button which has been clicked. Could someone help me out with this? The format below works in Jupyter Lab except for the name extraction.</p>
<pre class="lang-py prettyprint-override"><code>class Query:
def __init__(self):
self.buttons = [btn_0, btn_1, btn_2]
self.data = dict
def callback(self, event):
value = self.dict[button_name]
# Run an SQL query using the value
return (the query result)
def bind_buttons(self):
for button in self.buttons:
pn.bind(callback, self.button, watch=True)
query = Query()
query.bind_buttons()
query_buttons = pn.Column(
'# Queries',
query.buttons[0],
query.buttons[1],
query.buttons[2],
)
</code></pre>
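<p>One maintainable pattern is to stop relying on the event for identity and instead bake each button's name into its own callback with <code>functools.partial</code>; in Panel that would be roughly <code>pn.bind(partial(self.callback, button.name), button, watch=True)</code> (an untested assumption about the exact bind signature). The naming logic itself, runnable without Panel:</p>

```python
from functools import partial

# Hypothetical mapping from button name to the SQL it should run.
QUERIES = {"btn_0": "SELECT 1", "btn_1": "SELECT 2", "btn_2": "SELECT 3"}

def callback(name, event=None):
    sql = QUERIES[name]      # the button name doubles as the dict key
    return f"ran: {sql}"

# One handler per button, each with its name pre-bound:
handlers = {name: partial(callback, name) for name in QUERIES}
print(handlers["btn_1"]())   # ran: SELECT 2
```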
| <python><button><callback><holoviz-panel> | 2023-11-09 18:03:45 | 1 | 936 | jvkloc |
77,455,423 | 10,985,257 | Run command via python with sudo and capture stdout | <p>I want to run a sudo command from within Python, because my script does a lot of things that should be handled without the elevated rights necessary for commands like <code>apt install</code>.</p>
<p>Furthermore, for most of those commands I need the return value for analysis.</p>
<p>What I have so far:</p>
<pre class="lang-py prettyprint-override"><code>def _logable_sudo(self, cmds: List[str], password: str):
sudo_cmds = ["sudo"]
sudo_cmds.extend(cmds)
process = Popen(sudo_cmds, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
process.communicate(f"{password}\n".encode())
with process.stdout:
try:
for line in iter(process.stdout.readline, b""):
# TODO replace print with actuall logging
print(line.decode("utf-8").strip())
except CalledProcessError as e:
print(f"{str(e)}")
</code></pre>
<p>I tested this without the <code>stdin=PIPE</code> for normal command and without the <code>communicate</code> and it worked. But achieving similar and parsing pw to the sudo command failed.</p>
<p>I've also tried with <code>["echo", f"{password}\n", "|", "sudo", "-S"]</code>, but this always returns the command in the print line instead of executing the command.</p>
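<p>The echo-pipe attempt fails because the list form of <code>Popen</code> starts no shell, so <code>|</code> is passed to <code>echo</code> as a literal argument. <code>sudo -S</code> reads the password from stdin, which <code>subprocess</code> can feed directly; the sudo call itself is shown only as a comment (not run here), and the runnable part demonstrates the same stdin plumbing portably:</p>

```python
import subprocess
import sys

def run_with_stdin(argv, text):
    """Run argv (no shell involved), feed `text` on stdin, capture stdout."""
    r = subprocess.run(argv, input=text, capture_output=True, text=True)
    return r.returncode, r.stdout

# For sudo the call would be (not executed here; -S reads the password from
# stdin, -p "" suppresses the prompt):
#   run_with_stdin(["sudo", "-S", "-p", "", "apt", "install", "-y", "pkg"],
#                  password + "\n")

# Portable demonstration of the same stdin plumbing:
code, out = run_with_stdin(
    [sys.executable, "-c", "import sys; print(sys.stdin.read().upper())"],
    "secret")
print(code, out.strip())     # 0 SECRET
```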
| <python><subprocess><sudo> | 2023-11-09 17:59:43 | 0 | 1,066 | MaKaNu |
77,455,386 | 3,821,009 | How to format polars.Duration? | <p>This:</p>
<pre><code>df = (polars
.DataFrame(dict(
j=['2023-01-02 03:04:05.111', '2023-01-08 05:04:03.789'],
k=['2023-01-02 03:04:05.222', '2023-01-02 03:04:05.456'],
))
.select(
polars.col('j').str.to_datetime(),
polars.col('k').str.to_datetime(),
)
.with_columns(
l=polars.col('k') - polars.col('j'),
)
)
print(df)
</code></pre>
<p>produces:</p>
<pre><code> j (datetime[μs]) k (datetime[μs]) l (duration[μs])
2023-01-02 03:04:05.111 2023-01-02 03:04:05.222 111ms
2023-01-08 05:04:03.789 2023-01-02 03:04:05.456 -6d -1h -59m -58s -333ms
shape: (2, 3)
</code></pre>
<p>Doing this afterwards:</p>
<pre><code>print(df
.select(
polars.col('l').dt.to_string('%s.%f')
)
)
</code></pre>
<p>raises:</p>
<pre><code>polars.exceptions.InvalidOperationError: `to_string` operation not supported for dtype `duration[μs]`
</code></pre>
<p>How would I go about formatting this to a string with a custom format like I can do with <code>polars.Datetime</code>?</p>
| <python><python-polars> | 2023-11-09 17:53:42 | 1 | 4,641 | levant pied |
77,455,383 | 1,673,702 | Reshape JSON hierarchically | <p>I have this JSON file:</p>
<pre><code>{
"animal": {
"debug": {
"forceQuitCell": true,
"surviveWithoutFood": false
},
"name": "Tyrannosaurus_sp",
"defaultHuntingMode": "active_hunting",
"sexualType": "diploid",
"size": 1,
...
</code></pre>
<p>It is being read in Python by:</p>
<pre><code>import json

with open("myjson.file") as f:  # json.load expects a file object, not a path
    data = json.load(f)         # renamed so the built-in dict isn't shadowed
</code></pre>
<p>Then I parse it into a TreeView resulting in what you see on the image:</p>
<p><a href="https://i.sstatic.net/SFGbe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SFGbe.png" alt="enter image description here" /></a></p>
<p>The problem with indentation is obvious. For example, name -> Tyrannosaurus_sp should lie under animal and not under debug.</p>
<p>Reading docs it seems this is the natural way of loading a json and showing it if you populate the TreeView in order (you insert as you read items from dictionary).</p>
<p>The question is: is there a way to reorder the JSON so it "makes sense" and is perfectly ordered hierarchically? I have not found one. Another approach would be to create a new insertion method that stores all items with their related parent and then goes and inserts them in order by parent.</p>
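<p>The JSON is already ordered correctly after <code>json.load</code> (Python dicts keep insertion order); the mis-nesting usually comes from inserting every item under the previously created row instead of its real parent. The second approach described above, carrying the parent id through a recursive walk, can be sketched without any GUI:</p>

```python
def walk(node, parent="", rows=None):
    """Depth-first walk that records (parent_id, key, value) in insertion
    order, mirroring Treeview's insert(parent, 'end', ...) pattern."""
    rows = [] if rows is None else rows
    for key, value in node.items():
        item_id = f"{parent}/{key}"
        if isinstance(value, dict):
            rows.append((parent, key, ""))       # branch row
            walk(value, item_id, rows)           # children under *this* id
        else:
            rows.append((parent, key, repr(value)))  # leaf row
    return rows

data = {"animal": {"debug": {"forceQuitCell": True},
                   "name": "Tyrannosaurus_sp"}}
for parent, key, value in walk(data):
    print(parent or ".", key, value)
```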
<p>Any suggestion is welcome.</p>
| <python><json><treeview> | 2023-11-09 17:53:30 | 0 | 441 | kankamuso |
77,455,300 | 11,159,734 | Streamlit: How to add proper citation with source content to chat message | <p>I'm currently building a RAG (Retrieval Augmented Generation) Chatbot in Streamlit that queries my own data from a Postgres database and provides it as context for GPT 3.5 to answer questions about my own data.</p>
<p>I have got the basics working already (frontend and backend). Now I also want to display the sources used nicely. I already finished the backend part. It returns a list of dictionaries with the following format:</p>
<pre><code>{"id": 1, "file_name": "my_file.pdf", "url_name": "", "content": "my content", "origin": "pdf"}
</code></pre>
<p>Some documents are actual files and therefore the "url_name" is "" and sometimes the source is an actual website. In that case "url_name" would be: "www.my_website.com"</p>
<p>In order to display web sources nicely I found the streamlit-extras library, which has a nice feature to display mentions:</p>
<pre><code>from streamlit_extras.mention import mention
mention(label=f"{file_name}", url=url_name)
</code></pre>
<p>It displays a clickable text with an icon. The url opens in another tab:
<a href="https://i.sstatic.net/xcCPq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xcCPq.png" alt="enter image description here" /></a></p>
<p>Now this does not really work for my non url documents. In that case I need something to show the actual text of the document. But I don't want to do it inline. <strong>I want something like this:</strong></p>
<p><a href="https://i.sstatic.net/u4OkL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u4OkL.png" alt="enter image description here" /></a></p>
<p>The Microsoft Azure App chatbot example provides the user the option to click on a source which will then show the content of the document on the right side of the page.</p>
<p>I want to copy this behaviour by providing a <strong>selectable text element that will expand the right sidebar in Streamlit and display the text within this sidebar</strong>.
Unfortunately, I have no idea how to accomplish this, as there is, as far as I know, no way to make a selectable text element that can expand the sidebar and fill it with content.</p>
| <python><streamlit> | 2023-11-09 17:40:15 | 1 | 1,025 | Daniel |
77,455,287 | 1,946,418 | python - get access to decorator class's "self" inside the inner method | <p>Following the example shown here - <a href="https://www.geeksforgeeks.org/creating-decorator-inside-a-class-in-python/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/creating-decorator-inside-a-class-in-python/</a></p>
<pre class="lang-py prettyprint-override"><code># creating class A
class A:
    someVarInsideClassA = "hello some variable inside class A"  # class attribute ("self" does not exist at class body level)

    def Decorators(func):
        def inner(self):
            print("Decoration started.")
            # print(self.someVarInsideClassA) => do not have access here, because "self" is class "B" ❓❓❓❓❓❓❓❓
            func(self)
            print("Decoration of function completed.\n")
        return inner

    @Decorators
    def fun1(self):
        print("Decorating - Class A methods.")

# creating class B
class B(A):
    @A.Decorators
    def fun2(self):
        print("Decoration - Class B methods.")

obj = B()
obj.fun1()
obj.fun2()
</code></pre>
<p>Looks like the <code>self</code> inside <code>inner</code> method is referring to <code>class B</code>.</p>
<p>Can someone point me in the right direction to access/use variables defined in <code>Class A</code> please. TIA</p>
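<p>For what it's worth, if the variable is made a regular instance attribute in <code>A.__init__</code>, the <code>self</code> passed to <code>inner</code> (a <code>B</code> instance) can reach it through inheritance. This sketch reflects my current understanding and may not be the idiomatic way:</p>

```python
class A:
    def __init__(self):
        self.some_var = "hello from A"   # instance attribute, inherited by B

    def decorator(func):
        def inner(self):
            print("Decoration started.")
            print(self.some_var)         # works: self is a B instance, and B inherits A
            func(self)
        return inner

class B(A):
    @A.decorator
    def fun2(self):
        print("Class B method.")

B().fun2()
```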
| <python><python-3.x><python-decorators> | 2023-11-09 17:39:08 | 0 | 1,120 | scorpion35 |
77,455,164 | 7,475,193 | Executing .py with bash command but how to pass variables - Airflow | <p>I have a DAG definition file where I use BashOperator to call a Python script:</p>
<pre><code>with DAG(dag_id='test', default_args=default_args, schedule_interval='0 1 * * *', catchup=False) as dag:
    # Instantiate tasks
    task_1 = BashOperator(
        task_id='tsk1',
        bash_command=f'python {PATH}/update_files.py'
    )
</code></pre>
<p>update_files.py:</p>
<pre><code>from wsclient import WSClient  # import the class that is used below

def function_one():
    ...
    return df

def function_two(df):
    ...
    return df_transformed

if __name__ == "__main__":
    ws = WSClient("https://")
    if ws.do_login(usr, pws):
        raw_dataframe = function_one()
        transformed_dataframe = function_two(raw_dataframe)
        for record in transformed_dataframe.to_dict(orient='records'):
            ws.do_create('Clients', record)
</code></pre>
<p>usr and pws are the username and password used for "https://". Currently, I store pws and usr as Variables in Airflow.</p>
<p>How can I extract Airflow variables and use them in my script? I found a few answers here but all of them are about passing params to PythonOperator, not BashOperator.</p>
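<p>One idea I'm considering (I'm not sure it is the Airflow-idiomatic way) is to hand the values to the script through environment variables via the operator's <code>env</code> argument, e.g. <code>env={'WS_USER': Variable.get('usr'), 'WS_PASS': Variable.get('pws')}</code>, and read them on the script side. The variable names here are made up for illustration:</p>

```python
import os

def get_credentials(env=None):
    """Read credentials injected by the operator (names are hypothetical)."""
    env = os.environ if env is None else env
    return env.get("WS_USER"), env.get("WS_PASS")

usr, pws = get_credentials({"WS_USER": "alice", "WS_PASS": "secret"})
print(usr, pws)  # alice secret
```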
| <python><airflow> | 2023-11-09 17:16:13 | 0 | 477 | eponkratova |
77,455,158 | 8,565,438 | How do I label features in an array by their size? | <p>I have a 2D boolean numpy array, <code>mask</code>:</p>
<pre><code>array([[False, False, False,  True,  True, False, False, False],
       [ True,  True,  True, False,  True, False, False, False],
       [False, False,  True, False, False,  True, False,  True],
       [ True, False, False, False,  True,  True, False, False]])
</code></pre>
<p><code>mask</code> was generated by:</p>
<pre><code>np.random.seed(43210)
mask = (np.random.rand(4,8)>0.7)
</code></pre>
<p>I visualize <code>mask</code> via:</p>
<pre><code>plt.pcolormesh(mask)
plt.gca().invert_yaxis()
plt.gca().set_aspect('equal')
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/ozOJG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ozOJG.png" alt="enter image description here" /></a></p>
<p>I use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.label.html" rel="nofollow noreferrer"><code>scipy.ndimage.label</code></a> to <em>label</em> the <em>features</em>, ie sections of neighbouring <code>True</code> elements in the array.</p>
<pre><code>label, num_features = scipy.ndimage.label(mask)
</code></pre>
<p><code>label</code> is then:</p>
<pre><code>array([[0, 0, 0, 1, 1, 0, 0, 0],
       [2, 2, 2, 0, 1, 0, 0, 0],
       [0, 0, 2, 0, 0, 3, 0, 4],
       [5, 0, 0, 0, 3, 3, 0, 0]], dtype=int32)
</code></pre>
<p>visualization:</p>
<p><a href="https://i.sstatic.net/Q2Qyy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q2Qyy.png" alt="enter image description here" /></a></p>
<p>However, I would like to have an array where the features are marked by an number showing the size of the feature. I achieve this by:</p>
<pre><code>newlabel = np.zeros(label.shape)
for i in range(1,num_features+1): # works but very slow
    newlabel[label==i]=sum((label==i).flatten())
</code></pre>
<p><code>newlabel</code> is then:</p>
<pre><code>array([[0., 0., 0., 3., 3., 0., 0., 0.],
       [4., 4., 4., 0., 3., 0., 0., 0.],
       [0., 0., 4., 0., 0., 3., 0., 1.],
       [1., 0., 0., 0., 3., 3., 0., 0.]])
</code></pre>
<p>visualization:</p>
<p><a href="https://i.sstatic.net/7Gh7o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7Gh7o.png" alt="enter image description here" /></a></p>
<p><strong>This result above (the <code>newlabel</code> array) is correct, this is what I want.</strong> The features with only 1 pixel are marked by <code>1.</code> (blue squares in the visualization). Features with 3 pixels are marked by <code>3.</code> (green shapes on plot), while the feature with 4 pixels are marked by <code>4.</code> in <code>newlabel</code> (yellow shape on plot).</p>
<p>The problem with this approach is that the for loop takes a long time when <code>mask</code> is big. Testing with a 100 times larger mask:</p>
<pre><code>import time
np.random.seed(43210)
mask = (np.random.rand(40,80)>0.7)
t0 = time.time()
label, num_features = scipy.ndimage.label(mask)
t1 = time.time()
newlabel = np.zeros(label.shape)
for i in range(1,num_features+1):
    newlabel[label==i]=sum((label==i).flatten())
t2 = time.time()
print(f"Initial labelling takes: {t1-t0} seconds.")
print(f"Relabelling by feature size takes: {t2-t1} seconds.")
print(f"Relabelling takes {(t2-t1)/(t1-t0)} times as much time as original labelling.")
</code></pre>
<p>Output:</p>
<pre><code>Initial labelling takes: 0.00052642822265625 seconds.
Relabelling by feature size takes: 0.3239290714263916 seconds.
Relabelling takes 615.333786231884 times as much time as original labelling.
</code></pre>
<p>This makes my solution unviable on real world examples.</p>
<p><strong>How can I label the features by their size faster?</strong></p>
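<p>For reference, this is the direction I'm currently experimenting with: counting all feature sizes in one pass with <code>np.bincount</code> and indexing the counts back into the label array, which avoids the per-feature loop entirely (a sketch, using the small <code>label</code> array from above):</p>

```python
import numpy as np

label = np.array([[0, 0, 0, 1, 1, 0, 0, 0],
                  [2, 2, 2, 0, 1, 0, 0, 0],
                  [0, 0, 2, 0, 0, 3, 0, 4],
                  [5, 0, 0, 0, 3, 3, 0, 0]])

sizes = np.bincount(label.ravel())  # pixel count per label id
sizes[0] = 0                        # keep the background at 0
newlabel = sizes[label]             # broadcast each feature's size onto its pixels
print(newlabel)
```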
| <python><numpy><scipy> | 2023-11-09 17:15:37 | 1 | 8,082 | zabop |
77,455,083 | 6,458,245 | Efficiently subtract row from array n times and create a new dimension? | <p>I have a numpy array A of shape (x, y). I want to subtract each of 100 different rows (an array B of shape (100, y)) from every row of A, so the end result has shape (x, y, 100). How do I do this efficiently?</p>
<p>In pseudo code this would look something like:</p>
<pre><code>result = []
for i in range(100):
    result.append(A - B[i])
np.stack(result)
</code></pre>
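<p>I suspect broadcasting can do this in one shot; my current sketch (assuming <code>B</code> has shape <code>(100, y)</code>, i.e. one row per subtraction) is:</p>

```python
import numpy as np

x, y, n = 5, 3, 100
A = np.random.randn(x, y)
B = np.random.randn(n, y)

# result[i, j, k] = A[i, j] - B[k, j]
C = A[:, :, None] - B.T[None, :, :]
print(C.shape)  # (5, 3, 100)
```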
| <python><python-3.x><numpy> | 2023-11-09 17:03:49 | 2 | 2,356 | JobHunter69 |
77,455,052 | 6,728,433 | Spark unpersist when cache() was never called | <p>I'm reviewing someone's code and I see they're calling <code>df.unpersist()</code> on a dataframe that was never cached. This is an abbreviated version of the code:</p>
<pre><code>def get_and_prepare_data():
    # Read the data
    input_df = spark.sql('SELECT * FROM my_table')
    try:
        # Do a bunch of transformations, etc
        df = input_df.repartition(partition_ct)
        df = df.groupBy(order_id).agg(collect_list(item_id).alias("my_items"))
    except Exception as err:
        error_list.append(str(err))
        log_error(err)
    finally:
        df.unpersist()
    return df
</code></pre>
<p>Later, they pass along the df for use in other methods, like:</p>
<pre><code>def join_and_filter(df):
    filtered = df.filter(...)
    filtered.count()
</code></pre>
<p><strong>Question 1: Does the "df.unpersist()" even do anything?</strong> My understanding is that unpersist removes a dataframe from cache, and if it was never cached then there's nothing to remove. Are there any side effects of calling <code>unpersist</code> in this case?</p>
<p><strong>Question 2</strong>, currently there is no Spark action in <code>get_and_prepare_data()</code>, so I'd think the <code>join_and_filter()</code> method would execute once for all the lazy steps leading up to it:</p>
<ol>
<li>execute spark SQL query</li>
<li>repartition</li>
<li>group by / agg</li>
<li>unpersist</li>
<li>filter</li>
<li>count</li>
</ol>
<p>If there was a <code>count()</code> action after the group by, I think the code would do the following:</p>
<p>JOB A:</p>
<ol>
<li>execute spark SQL query</li>
<li>repartition</li>
<li>group by / agg</li>
<li>count</li>
</ol>
<p>JOB B:</p>
<ol>
<li>execute spark SQL query</li>
<li>repartition</li>
<li>group by / agg</li>
<li>unpersist</li>
<li>filter</li>
<li>count</li>
</ol>
<p>In which case, the <code>unpersist</code> <em>still</em> wouldn't do anything (unless we add <code>.cache()</code> or <code>.persist()</code>).
Am I understanding this correctly?</p>
| <python><apache-spark><pyspark> | 2023-11-09 16:58:54 | 0 | 798 | S.S. |
77,454,939 | 12,978,930 | Polars: Nesting `over` calls | <p><strong>Context.</strong> I have written a function that computes the mean of all elements in a column except the elements in the current group.</p>
<pre><code>df = pl.DataFrame({
    "group": ["A", "A", "B", "B", "C", "C"],
    "value": [1, 3, 2, 4, 3, 5],
})

def sum_excl_group(val_exp: pl.Expr, group_expr: pl.Expr) -> pl.Expr:
    return val_exp.sum() - val_exp.sum().over(group_expr)

def count_non_null_excl_group(val_exp: pl.Expr, group_expr: pl.Expr) -> pl.Expr:
    return val_exp.is_not_null().sum() - val_exp.is_not_null().sum().over(group_expr)

def mean_excl_group(val_exp: pl.Expr, group_expr: pl.Expr) -> pl.Expr:
    return sum_excl_group(val_exp, group_expr) / count_non_null_excl_group(val_exp, group_expr)

(
    df
    .with_columns(
        mean_excl_group(pl.col("value"), pl.col("group")).alias("mean_excl_group"),
    )
)
</code></pre>
<p>This gives the expected result.</p>
<pre><code>shape: (6, 3)
┌───────┬───────┬─────────────────┐
│ group ┆ value ┆ mean_excl_group │
│ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ f64 │
╞═══════╪═══════╪═════════════════╡
│ A ┆ 1 ┆ 3.5 │
│ A ┆ 3 ┆ 3.5 │
│ B ┆ 2 ┆ 3.0 │
│ B ┆ 4 ┆ 3.0 │
│ C ┆ 3 ┆ 2.5 │
│ C ┆ 5 ┆ 2.5 │
└───────┴───────┴─────────────────┘
</code></pre>
<hr />
<p><strong>Problem.</strong> Now, I am facing the issue that I would like to run this function <em>within</em> an <code>over</code> context to obtain the mean of all elements in a group, except the elements in the current subgroup.</p>
<p>I would've expected the following to work.</p>
<pre><code>df = pl.DataFrame({
    "group": ["A", "A", "B", "B", "C", "C"],
    "subgroup": ["a", "b", "c", "d", "e", "f"],
    "value": [1, 3, 2, 4, 3, 5],
})

(
    df
    .with_columns(
        mean_excl_group(pl.col("value"), pl.col("subgroup")).over("group").alias("mean_excl_group"),
    )
)
</code></pre>
<p>but get an <code>InvalidOperationError</code></p>
<pre><code>InvalidOperationError: window expression not allowed in aggregation
</code></pre>
<hr />
<p><strong>Attempt.</strong> For now, I have "solved" this issue by avoiding the nested <code>over</code> calls.</p>
<pre><code>def sum_excl_group(val_exp: pl.Expr, coarse_group_expr: pl.Expr, fine_group_expr: pl.Expr) -> pl.Expr:
    return val_exp.sum().over(coarse_group_expr) - val_exp.sum().over(fine_group_expr)

def count_non_null_excl_group(val_exp: pl.Expr, coarse_group_expr: pl.Expr, fine_group_expr: pl.Expr) -> pl.Expr:
    return val_exp.is_not_null().sum().over(coarse_group_expr) - val_exp.is_not_null().sum().over(fine_group_expr)

def mean_excl_group(val_exp: pl.Expr, coarse_group_expr: pl.Expr, fine_group_expr: pl.Expr) -> pl.Expr:
    return sum_excl_group(val_exp, coarse_group_expr, fine_group_expr) / count_non_null_excl_group(val_exp, coarse_group_expr, fine_group_expr)

(
    df
    .with_columns(
        mean_excl_group(pl.col("value"), pl.col("group"), pl.col("subgroup")).alias("mean_excl_group"),
    )
)
</code></pre>
<p>This gives the expected result.</p>
<pre><code>shape: (6, 4)
┌───────┬──────────┬───────┬─────────────────┐
│ group ┆ subgroup ┆ value ┆ mean_excl_group │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 ┆ f64 │
╞═══════╪══════════╪═══════╪═════════════════╡
│ A ┆ a ┆ 1 ┆ 3.0 │
│ A ┆ b ┆ 3 ┆ 1.0 │
│ B ┆ c ┆ 2 ┆ 4.0 │
│ B ┆ d ┆ 4 ┆ 2.0 │
│ C ┆ e ┆ 3 ┆ 5.0 │
│ C ┆ f ┆ 5 ┆ 3.0 │
└───────┴──────────┴───────┴─────────────────┘
</code></pre>
<p>However, this requires me to pass both granularities to all functions making the code more bloated than (I hope) it needs to be. Is there a cleaner more polaric way to solve the problem of nested <code>over</code> calls?</p>
| <python><dataframe><window-functions><python-polars> | 2023-11-09 16:41:11 | 2 | 12,603 | Hericks |
77,454,929 | 5,548,026 | Installing google-colab with anaconda and pip throws an error in pandas setup command | <p>I need to install google-colab and I keep having various issues with the task. Current version of python is 3.11.5 on Win11, pip version 23.3, conda is 23.10.0. I get the following error on my Anaconda Prompt.</p>
<pre><code>Collecting google.colab
Using cached google_colab-1.0.0-py2.py3-none-any.whl
Collecting google-auth~=1.4.0 (from google.colab)
Using cached google_auth-1.4.2-py2.py3-none-any.whl (64 kB)
Collecting ipykernel~=4.6.0 (from google.colab)
Using cached ipykernel-4.6.1-py3-none-any.whl (104 kB)
Collecting ipython~=5.5.0 (from google.colab)
Using cached ipython-5.5.0-py3-none-any.whl (758 kB)
Collecting notebook~=5.2.0 (from google.colab)
Using cached notebook-5.2.2-py2.py3-none-any.whl (8.0 MB)
Collecting six~=1.12.0 (from google.colab)
Using cached six-1.12.0-py2.py3-none-any.whl (10 kB)
Collecting pandas~=0.24.0 (from google.colab)
Using cached pandas-0.24.2.tar.gz (11.8 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [4 lines of output]
<string>:12: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
error in pandas setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers; Expected end or semicolon (after version specifier)
pytz >= 2011k
~~~~~~~^
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>I tried downgrading Python (to 3.8, 3.9 and 3.10) with no luck; it would ask me to download Visual Studio C++, and after I did I would get the error: <code>'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.36.32532\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2</code></p>
<p>I also downgraded setuptools, as some suggested online; that didn't help either.</p>
<p>I use the command <code>pip install google.colab --use-pep517</code>. Without that flag, I also get:</p>
<pre><code> Using cached pandas-0.24.2.tar.gz (11.8 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [15 lines of output]
C:\Users\theoP\AppData\Local\Temp\pip-install-g1w_6le_\pandas_ca7c993ca7d04b0eb7b5f9b402a89232\setup.py:12: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
C:\Users\theoP\anaconda3\Lib\site-packages\setuptools\__init__.py:84: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!
********************************************************************************
Requirements should be satisfied by a PEP 517 installer.
If you are using pip, you can try `pip install --use-pep517`.
********************************************************************************
!!
dist.fetch_build_eggs(dist.setup_requires)
error in pandas setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers; Expected end or semicolon (after version specifier)
pytz >= 2011k
~~~~~~~^
[end of output]
</code></pre>
<p>Has anybody seen anything like this before or have relevant resources? Personally haven't found anything exactly like this and everything I have tried from researching just crashes the install. Thanks.</p>
| <python><pandas><pip><google-colaboratory> | 2023-11-09 16:40:00 | 1 | 401 | user159941 |
77,454,895 | 7,431,005 | XGBoost warning: running on cuda while input data is on cpu | <p>I tried to train an XGBoost model using GPU acceleration.
When training my model with grid search, I get the following warning:</p>
<blockquote>
<p>UserWarning: [17:29:04] WARNING:
/workspace/src/common/error_msg.cc:58: Falling back to prediction
using DMatrix due to mismatched devices. This might lead to higher
memory usage and slower performance. XGBoost is running on: cuda:0,
while the input data is on: cpu. Potential solutions:</p>
<ul>
<li>Use a data structure that matches the device ordinal in the booster.</li>
<li>Set the device for booster before call to inplace_predict.</li>
</ul>
</blockquote>
<p>Although some potential solutions are given, I'm not sure how to interpret them and what to do with this information.
Surprisingly, it only appears when using GridSearchCV.
In my minimal example below, I do not get the warning if I use <code>reg</code> directly instead of <code>gs</code>.</p>
<p>My <code>X</code> and <code>y</code> variables come from hdf5 files that I read using pandas.
Can anybody give me a hint on how I might improve my code to not raise the warning?</p>
<pre><code>import xgboost as xgb
from sklearn.model_selection import GridSearchCV
import numpy as np

if __name__ == "__main__":
    X = np.random.randn(100,2)
    y = np.random.randn(100)

    reg = xgb.XGBRegressor(device="cuda", tree_method="hist", max_depth=10, n_estimators=100)
    param_grid = {"gamma": [0.3]}
    gs = GridSearchCV(reg, param_grid, cv=5)
    gs.fit(X,y)  # warning
    # reg.fit(X,y)  # no warning
</code></pre>
<p>packages:</p>
<pre><code>xgboost==2.0.1
scikit-learn==1.3.2
</code></pre>
| <python><scikit-learn><xgboost> | 2023-11-09 16:35:26 | 1 | 4,667 | user7431005 |
77,454,601 | 9,919,423 | How to load a DBF file within a zip file to a variable without extracting it in Python? | <p>I am trying to load a DBF file that is contained within a zip file into a variable without extracting the zip file. I have tried using the following code, but it is not working:</p>
<pre><code>import zipfile
from dbfread import DBF

def load_dbf_from_zip(zip_filename):
    with zipfile.ZipFile(zip_filename, 'r') as z:
        dbf_filename = z.namelist()[0]
        with z.open(dbf_filename) as f:
            dbf = DBF(f)
            return dbf

dbf = load_dbf_from_zip('my_zip_file.zip')
</code></pre>
<p>This code is not working because it passes the object returned by <code>ZipFile.open()</code> (a <code>ZipExtFile</code>) to the <code>DBF</code> constructor, but <code>dbfread.DBF</code> expects a filename it can open itself, not a file-like object.</p>
<p>How can I load the DBF file into a variable without extracting the zip file?</p>
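<p>One workaround I'm considering is spilling the member to a temporary file and handing that path to <code>DBF</code>; below is just the extraction half (no <code>dbfread</code> involved), and the <code>delete=False</code> temp-file handling is an assumption that may need adjusting per platform:</p>

```python
import tempfile
import zipfile

def extract_member_to_temp(zip_filename, member=None):
    """Copy one zip member into a named temp file and return its path."""
    with zipfile.ZipFile(zip_filename) as z:
        member = member or z.namelist()[0]
        with tempfile.NamedTemporaryFile(suffix=".dbf", delete=False) as tmp:
            tmp.write(z.read(member))
            return tmp.name

# dbf = DBF(extract_member_to_temp("my_zip_file.zip"))  # then load as usual
```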
| <python><zip><dbf><python-zipfile> | 2023-11-09 15:55:22 | 1 | 412 | David H. J. |
77,454,581 | 3,336,423 | Python/unittest: Is it possible to have the same function called many times and test not interrupted in case of failure? | <p>I'm testing a module with Python's <code>unittest</code> library. I'd like to call a unit test method with different arguments, but would like each call to be a separate test case:</p>
<pre><code>import unittest

class TestStringMethods(unittest.TestCase):
    def test_all(self):
        for x in range(10):
            self.do_test_one(x)

    def do_test_one(self,x):
        print( "Testing " + str(x) )
        self.assertTrue( x != 5 )

if __name__ == '__main__':
    unittest.main()
</code></pre>
<p>This reports:</p>
<pre><code>Testing 0
Testing 1
Testing 2
Testing 3
Testing 4
Testing 5
F
======================================================================
FAIL: test_all (__main__.TestStringMethods)
----------------------------------------------------------------------
Traceback (most recent call last):
File "so.py", line 8, in test_all
self.do_test_one(x)
File "so.py", line 12, in do_test_one
self.assertTrue( x != 5 )
AssertionError: False is not true
----------------------------------------------------------------------
Ran 1 test in 0.014s
FAILED (failures=1)
</code></pre>
<p>I'd like to get the following instead, with each call being a separate test:</p>
<pre><code>Testing 0
Testing 1
Testing 2
Testing 3
Testing 4
Testing 5
F
======================================================================
FAIL: test_all (__main__.TestStringMethods)
----------------------------------------------------------------------
Traceback (most recent call last):
File "so.py", line 8, in test_all
self.do_test_one(x)
File "so.py", line 12, in do_test_one
self.assertTrue( x != 5 )
AssertionError: False is not true
----------------------------------------------------------------------
Testing 6
Testing 7
Testing 8
Testing 9
Ran 10 test in 0.014s
FAILED (passed=9, failures=1)
</code></pre>
<p>Is it possible to achieve that?</p>
| <python><python-unittest> | 2023-11-09 15:52:41 | 2 | 21,904 | jpo38 |
77,454,475 | 583,187 | TypeError when using openai-api | <p>Using the code below and openai version 0.28.0 I get an error which I can't resolve:</p>
<pre><code>File "<stdin>", line 11, in <module>
TypeError: string indices must be integers, not 'str'
</code></pre>
<p>Which index is it complaining about? Seems I'm a little blind today...</p>
<pre><code>import requests
from bs4 import BeautifulSoup
from docx import Document
import openai

# Set your OpenAI API key
openai.api_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# URL of the website you want to scrape
website_url = "https://somesite.com"

# Send a GET request to the website
response = requests.get(website_url)

# Parse the HTML content of the website using BeautifulSoup
soup = BeautifulSoup(response.content, "html.parser")

# Extract text blocks larger than 100 characters
text_blocks = []
for paragraph in soup.find_all("p"):
    text = paragraph.get_text().strip()
    if len(text) >= 100:
        text_blocks.append(text)

# Translate text blocks from English to German using OpenAI's Chat API
translated_text_blocks = []
for text_block in text_blocks:
    chat_input = f"Translate the following English text to German: '{text_block}'"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # Use the language model
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": chat_input},
        ],
    )
    # Extract translated text from the API response
    translated_text = response.choices[0].message["content"]["body"]
    translated_text_blocks.append(translated_text)

# Create a new Word document
document = Document()

# Add translated text blocks to the Word document
for translated_text in translated_text_blocks:
    document.add_paragraph(translated_text)

# Save the Word document
document.save("translated_content.docx")
</code></pre>
<p>The full console output is:</p>
<pre><code>>>> # Send a GET request to the website
>>>
>>> response = requests.get(website_url)
>>> # Parse the HTML content of the website using BeautifulSoup
>>>
>>> soup = BeautifulSoup(response.content, "html.parser")
>>> # Extract text blocks larger than 100 characters
>>>
>>> text_blocks = []
>>> for paragraph in soup.find_all("p"):
... text = paragraph.get_text().strip()
... if len(text) >= 100:
... text_blocks.append(text)
... # Translate text blocks from English to German using OpenAI's Chat API
...
>>> translated_text_blocks = []
>>> for text_block in text_blocks:
... chat_input = f"Translate the following English text to German: '{text_block}'"
... response = openai.ChatCompletion.create(
... model="gpt-3.5-turbo", # Use the language model
... messages=[
... {"role": "system", "content": "You are a helpful assistant."},
... {"role": "user", "content": chat_input},
... ],
... )
... # Extract translated text from the API response
... translated_text = response.choices[0].message["content"]["body"]
... translated_text_blocks.append(translated_text)
... # Create a new Word document
...
Traceback (most recent call last):
File "<stdin>", line 11, in <module>
TypeError: string indices must be integers, not 'str'
>>> document = Document()
>>> # Add translated text blocks to the Word document
>>>
>>> for translated_text in translated_text_blocks:
... document.add_paragraph(translated_text)
... # Save the Word document
...
>>> document.save("translated_content.docx")
>>> print("Translated text blocks have been saved to 'translated_content.docx'.")
Translated text blocks have been saved to 'translated_content.docx'.
</code></pre>
| <python><openai-api> | 2023-11-09 15:38:08 | 1 | 2,841 | Lumpi |
77,454,412 | 9,686,427 | ModuleNotFoundError: No module named 'firebase_admin' after installing firebase | <p>I'm getting <code>ModuleNotFoundError: No module named 'firebase_admin'</code> after installing firebase. I've performed the following steps:</p>
<ol>
<li><p>Open the console and install firebase via <code>pip3 install firebase-admin</code>. The requirements are already satisfied.</p>
</li>
<li><p>Running <code>pip3 show firebase-admin</code> outputs</p>
<pre><code>Name: firebase-admin
Version: 6.2.0
...
</code></pre>
</li>
<li><p>Running <code>python</code> and inputting <code>import firebase_admin</code> gives <code>ModuleNotFoundError: No module named 'firebase_admin'</code></p>
</li>
</ol>
<hr />
<p>What I've tried:</p>
<ul>
<li>Using <code>pip</code> as opposed to <code>pip3</code>.</li>
<li>Reinstalling both <code>firebase</code> and <code>firebase-admin</code>.</li>
</ul>
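<p>One more diagnostic for completeness: checking whether <code>pip3</code> and the <code>python</code> I launch actually point at the same interpreter, since a mismatch would explain the module being "installed" yet not importable:</p>

```python
import sys
import sysconfig

print(sys.executable)                    # the interpreter behind `python`
print(sysconfig.get_paths()["purelib"])  # its site-packages directory
# compare this with the `Location:` line from `pip3 show firebase-admin`
```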
| <python><firebase><firebase-admin> | 2023-11-09 15:30:40 | 1 | 484 | Sam |
77,454,215 | 2,532,408 | poetry fails to install psycopg2-binary in python 3.12 (macos) | <p>I cannot get <code>psycopg2-binary</code> package to install via poetry in python3.12.</p>
<hr />
<p>pyproject.toml</p>
<pre><code>[tool.poetry]
name = "my_project"
[tool.poetry.dependencies]
psycopg2-binary = "*"
[build-system]
requires = ["poetry-core>=1.8.1"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<hr />
<p>Installing the project in python 3.11 works. <code>psycopg2-binary</code> installs.</p>
<p>When I try to install with poetry in a python 3.12 environment, the following error occurs:</p>
<pre><code>poetry install --with dev --sync
Installing dependencies from lock file
Package operations: 1 install, 0 updates, 0 removals
• Installing psycopg2-binary (2.9.9): Failed
ChefBuildError
Backend subprocess exited when trying to invoke get_requires_for_build_wheel
running egg_info
writing psycopg2_binary.egg-info/PKG-INFO
writing dependency_links to psycopg2_binary.egg-info/dependency_links.txt
writing top-level names to psycopg2_binary.egg-info/top_level.txt
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
at ~/.pyenv/versions/3.12.0/envs/poetry312/lib/python3.12/site-packages/poetry/installation/chef.py:166 in _prepare
162│
163│ error = ChefBuildError("\n\n".join(message_parts))
164│
165│ if error is not None:
→ 166│ raise error from None
167│
168│ return path
169│
170│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with psycopg2-binary (2.9.9) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "psycopg2-binary (==2.9.9)"'.
</code></pre>
<p>What confuses me is that the error looks like it's attempting to install <code>psycopg2</code>, not <code>psycopg2-binary</code>.</p>
<p>I don't fully understand the note at the bottom.</p>
<blockquote>
<p>Note: This error originates from the build backend, and is likely not a problem with poetry but with psycopg2-binary (2.9.9) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "psycopg2-binary (==2.9.9)"'.</p>
</blockquote>
<p>When I execute <code>pip wheel --no-cache-dir --use-pep517 "psycopg2-binary (==2.9.9)"</code> the following returns:</p>
<pre><code>$ pip wheel --no-cache-dir --use-pep517 "psycopg2-binary (==2.9.9)"
Collecting psycopg2-binary==2.9.9
Downloading psycopg2_binary-2.9.9-cp312-cp312-macosx_11_0_arm64.whl.metadata (4.4 kB)
Downloading psycopg2_binary-2.9.9-cp312-cp312-macosx_11_0_arm64.whl (2.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.6/2.6 MB 2.4 MB/s eta 0:00:00
Saved ./psycopg2_binary-2.9.9-cp312-cp312-macosx_11_0_arm64.whl
</code></pre>
<p>I don't understand what this verified....</p>
<p>In addition, I'm able to install <code>psycopg2-binary</code> via pip just fine.</p>
<hr />
<ul>
<li>MacOS 14 (sonoma) M2</li>
<li>Poetry 1.7.0</li>
</ul>
| <python><macos><psycopg2><python-poetry> | 2023-11-09 15:03:33 | 2 | 4,628 | Marcel Wilson |
77,454,163 | 11,039,749 | Issue Installing TensorFlow or PyTorch | <p>I am trying to install TensorFlow or PyTorch. I tried doing this in a normal project and also in a virtual environment. These are the errors I get; I tried many different versions and read the documentation.</p>
<pre class="lang-bash prettyprint-override"><code>pip install torch
pip3 install torch --no-cache-dir
pip install "tensorflow<2.11"
pip install tensorflow==2.0
pip install tensorflow --user
pip3 install tensorflow
python3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow_cpu-2.10.0-cp310-cp310-win_amd64.whl
py -m pip install torch
py -m pip install tensorflow --upgrade
</code></pre>
<p>and many other different tries</p>
<p>Python Version:</p>
<pre><code>Python 3.12.0
pip 23.3.1
</code></pre>
<p>I also tried to use Conda</p>
<blockquote>
<p>ERROR: Could not find a version that satisfies the requirement torch
(from versions: none)</p>
<p>ERROR: No matching distribution found for torch</p>
<p>ERROR: Could not find a version that satisfies the requirement
tensorflow==2.10.0 (from versions: none)</p>
<p>ERROR: No matching distribution found for tensorflow==2.10.0</p>
<p>ERROR: tensorflow_cpu-2.10.0-cp310-cp310-win_amd64.whl is not a
supported wheel on this platform.</p>
<p>ERROR: Could not find a version that satisfies the requirement
tensorflow (from versions: none)</p>
<p>ERROR: Can not perform a '--user' install. User site-packages are not
visible in this virtualenv.</p>
<p>ERROR: Could not find a version that satisfies the requirement
torch==2.0.1 (from versions: none)</p>
<p>ERROR: No matching distribution found for torch==2.0.1</p>
<p>ERROR: No matching distribution found for tensorflow</p>
<p>ERROR: Could not find a version that satisfies the requirement
tensorflow[and-cuda] (from versions: none)</p>
<p>ERROR: No matching distribution found for tensorflow[and-cuda]</p>
<p>ERROR: Could not find a version that satisfies the requirement
tensorflow (from versions: none)</p>
<p>ERROR: No matching distribution found for tensorflow</p>
</blockquote>
<p>Exception has occurred: RuntimeError
At least one of TensorFlow 2.0 or PyTorch should be installed. To install TensorFlow 2.0, read the instructions at <a href="https://www.tensorflow.org/install/" rel="nofollow noreferrer">https://www.tensorflow.org/install/</a> To install PyTorch, read the instructions at <a href="https://pytorch.org/" rel="nofollow noreferrer">https://pytorch.org/</a>.</p>
<p>and many other errors as I entered the install commands for PyTorch or TensorFlow</p>
<p>I need one of these libraries to use Transformers from Hugging Face.
I tried to install inside and outside of a virtual environment.</p>
<p>Thank you</p>
| <python><tensorflow><pytorch> | 2023-11-09 14:55:50 | 2 | 529 | Bigbear |
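For context on the errors above: pip reporting `from versions: none` usually means no wheel exists for the interpreter in use, and at the time of this question neither TensorFlow nor PyTorch had published wheels for Python 3.12. A minimal sketch of the version check involved (the supported range for TF 2.10, CPython 3.7 through 3.10, is the documented one; other releases differ):

```python
import sys

def tf_2_10_wheel_exists(version_info):
    """TensorFlow 2.10 published wheels for CPython 3.7-3.10 only,
    so pip running under 3.12 finds no matching distribution."""
    return (3, 7) <= tuple(version_info[:2]) <= (3, 10)

if not tf_2_10_wheel_exists(sys.version_info):
    # Create a virtual environment with a supported interpreter first,
    # e.g. "py -3.10 -m venv .venv" on Windows, then install inside it.
    print("No TF 2.10 wheel for this Python; use 3.10 or 3.11 instead.")
```

The same reasoning applies to the `torch` errors: installing a supported Python version (3.10/3.11 at the time) and creating the virtual environment from it makes the plain `pip install torch` / `pip install tensorflow` commands work.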
77,453,929 | 12,689,373 | How to add rows in a dataframe by a column condition python | <p>I have a dataframe like this:</p>
<pre><code>df = pd.DataFrame({'year': [2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022,2022],
'month': [1,2,3,1,2,3,4,5,6,7,8,9,10,11,1,2,3,4,5],
'client':[1,1,1,2,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3],
'total':[10,20,30,55,4,64,88,5,64,32,84,24,69,70,54,11,37,98,52]})
df
year month client total
0 2022 1 1 10
1 2022 2 1 20
2 2022 3 1 30
3 2022 1 2 55
4 2022 2 2 4
5 2022 3 2 64
6 2022 4 2 88
7 2022 5 2 5
8 2022 6 2 64
9 2022 7 2 32
10 2022 8 2 84
11 2022 9 2 24
12 2022 10 2 69
13 2022 11 2 70
14 2022 1 3 54
15 2022 2 3 11
16 2022 3 3 37
17 2022 4 3 98
18 2022 5 3 52
</code></pre>
<p>I would like every client to have all 12 months, so I need to add the missing rows and assign the value 0 to the <code>total</code> column. For these new rows, the value of the <code>year</code> column should be copied from the rows that client already has.</p>
<p>Desire output:</p>
<pre><code>df = pd.DataFrame({'year': [2022] * 36,
'month': [1,2,3,4,5,6,7,8,9,10,11,12,1,2,3,4,5,6,7,8,9,10,11,12,1,2,3,4,5,6,7,8,9,10,11,12],
'client':[1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3,3,3],
'total':
[10,20,30,0,0,0,0,0,0,0,0,0,55,4,64,88,5,64,32,84,24,69,70,0,54,11,37,98,52,0,0,0,0,0,0,0]})
df
year month client total
0 2022 1 1 10
1 2022 2 1 20
2 2022 3 1 30
3 2022 4 1 0
4 2022 5 1 0
5 2022 6 1 0
6 2022 7 1 0
7 2022 8 1 0
8 2022 9 1 0
9 2022 10 1 0
10 2022 11 1 0
11 2022 12 1 0
12 2022 1 2 55
13 2022 2 2 4
14 2022 3 2 64
15 2022 4 2 88
16 2022 5 2 5
17 2022 6 2 64
18 2022 7 2 32
19 2022 8 2 84
20 2022 9 2 24
21 2022 10 2 69
22 2022 11 2 70
23 2022 12 2 0
24 2022 1 3 54
25 2022 2 3 11
26 2022 3 3 37
27 2022 4 3 98
28 2022 5 3 52
29 2022 6 3 0
30 2022 7 3 0
31 2022 8 3 0
32 2022 9 3 0
33 2022 10 3 0
34 2022 11 3 0
35 2022 12 3 0
</code></pre>
<p>Maybe there is a solution using <code>merge</code>? I've tried, but without a good result.</p>
| <python><row><addition> | 2023-11-09 14:23:57 | 2 | 349 | Cristina Dominguez Fernandez |
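One way to do this (a sketch, not the only option): move `year`, `client` and `month` into the index and `reindex` against the full cartesian product of clients and months 1-12, filling the missing totals with 0. This assumes a single year per client, as in the example; with several years, build the grid per (year, client) pair instead. A smaller dataframe is used here for brevity:

```python
import pandas as pd

df = pd.DataFrame({'year': [2022, 2022, 2022, 2022, 2022],
                   'month': [1, 2, 3, 1, 2],
                   'client': [1, 1, 1, 2, 2],
                   'total': [10, 20, 30, 55, 4]})

# Full (year, client, month) grid: every client gets months 1..12.
full_index = pd.MultiIndex.from_product(
    [df['year'].unique(), df['client'].unique(), range(1, 13)],
    names=['year', 'client', 'month'])

out = (df.set_index(['year', 'client', 'month'])
         .reindex(full_index, fill_value=0)   # missing rows get total = 0
         .reset_index()
         [['year', 'month', 'client', 'total']])  # restore column order
```

`reindex` with `fill_value=0` does both jobs at once: it inserts the missing (client, month) combinations and fills their `total` with 0, while the `year` comes from the grid itself.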
77,453,779 | 6,068,294 | Invert the Laplacian of a 2D fixed grid data array | <p>I want to invert the Laplacian on a 2d fixed grid in python, and then take the gradient, the first step of which was possible to do in <code>ncl</code> using the <a href="https://www.ncl.ucar.edu/Document/Functions/Built-in/ilapsf.shtml" rel="nofollow noreferrer"><code>ilapsf</code></a> function. This is essentially the opposite (inverse function) of <a href="https://stackoverflow.com/questions/4692196/discrete-laplacian-del2-equivalent-in-python">this question</a>, i.e. I need the inverse function of <a href="https://docs.scipy.org/doc/scipy-1.10.1/reference/generated/scipy.ndimage.laplace.html" rel="nofollow noreferrer">scipy.ndimage.laplace</a>.</p>
<p>I don't really want to try and do this from scratch and thought this would be the bread and butter of <a href="https://www.pyngl.ucar.edu/" rel="nofollow noreferrer">pyngl</a>, <a href="https://metview.readthedocs.io/en/latest/" rel="nofollow noreferrer">metview</a> or <a href="https://unidata.github.io/MetPy/latest/index.html" rel="nofollow noreferrer">metpy</a>, but it seems <code>ilapsf</code> didn't make the transfer to python and <code>metpy</code> only does the easier Laplacian calculation, rather than solving the inverse Laplacian. I also didn't find anything in <code>scipy</code> or <code>numpy</code>.</p>
<p>The old code in <code>ncl</code> was as simple as this:</p>
<pre><code>chi = ilapsF_Wrap(nei,0) # inverse laplace
gradchi = grad_latlon_cfd(chi, chi&latitude, chi&longitude, True, False) # calculate the gradient
</code></pre>
<p>I managed to get the end result using the <code>dv2uv</code> function in cdo, but it is the fudge of the century, involving conversion to a spectral field and then tricking it by turning the flux convergence into a divergence field, setting up a dummy zero vorticity field and then turning u and v into energy fluxes... (plus the meridional flux does weird things at the poles, which the ncl solution can handle).</p>
<pre><code>from cdo import Cdo
cdo=Cdo()
# convert flux divergence into GG and the spectral G:
cdo.gp2sp(input="-remapcon,T511grid -selvar,nei nei.nc",output="neisp.nc")
# fudge CDO into thinking this is pure divergence (code=155)
cdo.chname("nei,sd",input="neisp.nc",output="tmp_vd.nc")
# make up a zero field for vorticity
cdo.chname("nei,svo",input="-gec,1e36 neisp.nc",output="tmp_vo.nc")
# the result is energy flux, not winds, so we will need to change the output names (and eventually metadata):
cdo.dv2uv(input="-merge tmp_vo.nc tmp_vd.nc",output="tmp_fluxes.nc")
cdo.chname("v,vflux",input="-chname,u,uflux tmp_fluxes.nc",output="fluxes.nc")
</code></pre>
| <python><cdo-climate><ncl> | 2023-11-09 14:00:01 | 0 | 8,176 | ClimateUnboxed |
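For illustration of the underlying operation, here is a minimal NumPy-only sketch of inverting a discrete Laplacian via FFT, assuming periodic boundaries and unit grid spacing. This is not a drop-in replacement for `ilapsf`, which works in spherical harmonics on a lat-lon grid, but it shows the spectral approach: divide by the stencil's eigenvalues in Fourier space. The constant (zero-frequency) mode is annihilated by the Laplacian and hence undetermined, so it is set to zero here:

```python
import numpy as np

def laplacian_periodic(u):
    """5-point discrete Laplacian with periodic boundaries, unit spacing."""
    return (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
            + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)

def inverse_laplacian_periodic(f):
    """Invert laplacian_periodic via FFT. Exact when f has zero mean;
    the undetermined constant mode is set to zero."""
    ny, nx = f.shape
    ky = np.fft.fftfreq(ny)
    kx = np.fft.fftfreq(nx)
    # DFT eigenvalues of the 1-D stencil u[i-1] - 2*u[i] + u[i+1]
    lam = ((2.0 * np.cos(2.0 * np.pi * ky) - 2.0)[:, None]
           + (2.0 * np.cos(2.0 * np.pi * kx) - 2.0)[None, :])
    fh = np.fft.fft2(f)
    with np.errstate(divide="ignore", invalid="ignore"):
        uh = np.where(lam != 0.0, fh / lam, 0.0)
    return np.real(np.fft.ifft2(uh))
```

On a real lat-lon sphere the analogous step divides the spherical-harmonic coefficients by `-l*(l+1)/a**2`, which is what `ilapsf` (and the cdo spectral fudge above) effectively do.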
77,453,771 | 9,620,383 | How do I remove a choco version constraint that I previously added? | <p>Previously, I needed a specific version of Python and installed it with
<code>choco install python --version=3.7.5</code></p>
<p>I now want to upgrade, but choco still thinks I have this version dependency and gives me the error:</p>
<p><code>Unable to resolve dependency 'python': Unable to resolve dependencies. 'python3 3.12.0' is not compatible with 'python 3.7.5 constraint python3 (= 3.7.5)'.</code></p>
<p>Thank you.</p>
<p>I'm using Windows 10, in case it makes a difference.</p>
<p>I already did an uninstall of the 3.7.5 version.</p>
| <python><windows><installation><chocolatey><choco> | 2023-11-09 13:58:29 | 1 | 600 | WorstCoder4Ever |
77,453,673 | 3,104,974 | pyspark.pandas: Converting float64 column to TimedeltaIndex | <p>I want to convert a numeric column which represents a timedelta in seconds to a <code>ps.TimedeltaIndex</code> (for the purpose of later resampling the dataset).</p>
<pre><code>import pyspark.pandas as ps
df = ps.DataFrame({"time": [2.0, 3.0, 4.0], "x": [4.5, 4.0, 3.5]})
df.set_index(ps.to_timedelta(df.time, "s").to_numpy())
KeyError: '2000000000 nanoseconds'
</code></pre>
<p>I don't understand why this doesn't work.</p>
| <python><apache-spark><pyspark><pyspark-pandas> | 2023-11-09 13:42:34 | 2 | 6,315 | ascripter |
77,453,659 | 457,290 | why does numpy.trapz behave differently with booleans than with ints? | <p>Booleans in Python are implemented as ints, and usually there is no difference in <code>numpy</code> and other packages when replacing <code>0</code> with <code>False</code> and <code>1</code> with <code>True</code>.
Why, then, do these two computations differ?</p>
<pre class="lang-py prettyprint-override"><code>>>> import numpy as np
>>> np.trapz([1,1])
1.0
>>> np.trapz([True, True])
0.5
</code></pre>
<p>The same thing happens with <code>scipy.integrate</code>.</p>
| <python><numpy> | 2023-11-09 13:40:38 | 1 | 2,019 | matiasg |
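The difference comes from NumPy's boolean arithmetic: `+` on boolean arrays is logical OR, not integer addition, so the trapezoid average `(y[i] + y[i+1]) / 2` evaluates `True + True` to `True` (i.e. 1) before the division, giving `1 / 2 = 0.5`. Casting to an integer or float dtype restores ordinary arithmetic. A quick demonstration (aliased because `np.trapz` was renamed `np.trapezoid` in NumPy 2.0):

```python
import numpy as np

# np.trapz was renamed np.trapezoid in NumPy 2.0; alias to run on both.
trapz = getattr(np, "trapz", getattr(np, "trapezoid", None))

a = np.array([True, True])
print(a[0] + a[1])            # True: boolean + is logical OR, not 2
print(trapz(a))               # 0.5: (True + True) / 2 == True / 2 == 0.5
print(trapz(a.astype(int)))   # 1.0: same result as for [1, 1]
```

The same applies to `scipy.integrate.trapezoid`, which performs the identical slice-and-average computation on the input array's dtype.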
77,453,594 | 315,168 | Parallelising functions using multiprocessing in Jupyter Notebook | <p>Edit: I updated the question with a trivial repeatable example for ipython, PyCharm and Visual Studio Code. They all fail in a different way.</p>
<p>I am running CPU-intensive tasks in Jupyter Notebook. The task is trivial to parallelise and I am already able to do this in a notebook via threads. However, due to Python's GIL, this is inefficient as the GIL prevents effectively utilising multiple CPU cores for parallel tasks.</p>
<p>The obvious solution would be <code>multiprocessing</code> Python module, and I have this working with Python application code (not notebooks). However, due to how Jupyter Notebook operates, <code>multiprocessing</code> fails due to lack of <code>__main__</code> entrypoint.</p>
<p>I do not want to create separate Python modules, because they defeat the purpose of using notebooks for data research in the first place.</p>
<p>Here is a minimal reproducible example.</p>
<p>I create a notebook with a single cell:</p>
<pre><code># Does not do actual multiprocessing, but demostrates it fails in a notebook
from multiprocessing import Process
def task():
return 2
p = Process(target=task)
p.start()
p.join()
</code></pre>
<p>Running this with IPython gives:</p>
<pre class="lang-bash prettyprint-override"><code>ipython notebooks/notebook-multiprocess.ipynb
</code></pre>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 125, in _main
prepare(preparation_data)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 289, in run_path
return _run_module_code(code, init_globals, run_name,
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/Users/moo/code/ts/trade-executor/notebooks/notebook-multiprocess.ipynb", line 5, in <module>
"execution_count": null,
NameError: name 'null' is not defined
</code></pre>
<p>Running this with PyCharm gives:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
AttributeError: Can't get attribute 'task' on <module '__main__' (built-in)>
</code></pre>
<p>Running this with Visual Studio Code gives:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
AttributeError: Can't get attribute 'task' on <module '__main__' (built-in)>
</code></pre>
<p>My current parallelisation using a thread pool works:</p>
<pre class="lang-py prettyprint-override"><code>results = []
def process_background_job(a, b):
# Do something for the batch of data and return results
pass
# If you switch to futureproof.executors.ProcessPoolExecutor
# here it will crash with the above error
executor = futureproof.executors.ThreadPoolExecutor(max_workers=8)
with futureproof.TaskManager(executor, error_policy="log") as task_manager:
# Send individual jobs to the multiprocess worker pool
total_tasks = 0
for look_back in look_backs:
for look_forward in look_forwards:
task_manager.submit(process_background_job, look_back, look_forward)
total_tasks += 1
print(f"Processing grid search {total_tasks} background jobs")
# Run the background jobs and read back the results from the background worker
# with a progress bar
with tqdm(total=total_tasks) as progress_bar:
for task in task_manager.as_completed():
if isinstance(task.result, Exception):
executor.join()
raise RuntimeError(f"Could not complete task for args {task.args}") from task.result
look_back, look_forward, long_regression, short_regression = task.result
results.append([
look_back,
look_forward,
long_regression.rsquared,
short_regression.rsquared
])
progress_bar.update()
</code></pre>
<p>How can I use process-based parallelization in notebooks?</p>
<p>Python 3.10, but happy to upgrade if it helps.</p>
| <python><jupyter-notebook><python-multiprocessing> | 2023-11-09 13:30:40 | 3 | 84,872 | Mikko Ohtamaa |
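One workaround worth noting for the tracebacks above (a sketch; it assumes a POSIX system, since the `fork` start method does not exist on Windows): request the `fork` start method explicitly. Forked workers inherit the parent process's memory, so functions defined interactively in a notebook cell are visible to them, whereas the default `spawn` method on Windows/macOS re-imports `__main__` and fails exactly as shown. The other common fix is to move the worker function into a real `.py` module and import it into the notebook, which works under `spawn` as well:

```python
from multiprocessing import get_context

def task(x):
    # Defined interactively (e.g. in a notebook cell); visible to forked
    # children because fork copies the parent, including this function.
    return x * x

if __name__ == "__main__":
    # Harmless in a notebook; required for scripts on spawn platforms.
    with get_context("fork").Pool(processes=4) as pool:
        results = pool.map(task, [1, 2, 3, 4])
    print(results)   # [1, 4, 9, 16]
```

Caveat: `fork` can be unreliable on macOS when libraries have already started threads in the parent, which is one reason `spawn` became the default there; on Linux it is generally the path of least resistance for notebook parallelism.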
77,453,550 | 31,667 | Numpy array of a batch of shifted ranges | <p>I'd like to create a numpy array filled with the values from another array, but each row shifted by one (or some other constant). So for example mapping an array with elements 1,2,3,4,5.... to a 5x5 array, I would expect:</p>
<pre><code>1 2 3 4 5
2 3 4 5 6
3 4 5 6 7
4 5 6 7 8
5 6 7 8 9
</code></pre>
<p>Currently I'm just running a loop over the original array like:</p>
<pre><code>for batch in range(len(foo)-batch_size):
result[batch,:] = foo[batch:batch+batch_size]
</code></pre>
<p>But I suspect there's a better and more efficient way to do it in numpy?</p>
| <python><numpy> | 2023-11-09 13:24:00 | 3 | 34,355 | viraptor |
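NumPy (1.20+) has this built in: `sliding_window_view` produces exactly this matrix as a zero-copy view over the original array, so no loop and no extra memory. Call `.copy()` on the result if you need to write to it:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

foo = np.arange(1, 10)                        # 1 .. 9
result = sliding_window_view(foo, window_shape=5)
print(result)
# [[1 2 3 4 5]
#  [2 3 4 5 6]
#  [3 4 5 6 7]
#  [4 5 6 7 8]
#  [5 6 7 8 9]]
```

Because it is a strided view, every row aliases the same underlying buffer, which is why it is both faster and far cheaper in memory than the explicit copy loop in the question.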
77,453,529 | 4,025,749 | SSL Error with Zeep Python Library when VPN is disconnected | <p>I am currently using the Zeep Python library for my project and I’ve encountered an issue that I hope someone can help me with.</p>
<p>When I try to run my code without connecting to my company’s VPN, I get the following error:</p>
<pre><code>requests.exceptions.SSLError: HTTPSConnectionPool(host='ws-gateway-cert.xxx.com', port=443): Max retries exceeded with url: /services (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1007)')))
</code></pre>
<p>However, when I connect to the VPN, the code works perfectly fine. This is puzzling because other clients like Postman and SoapUI are able to run the same requests successfully even without the VPN connection.</p>
<pre class="lang-py prettyprint-override"><code>
session = Session()
session.trust_env = False
session.mount('https://', Pkcs12Adapter(
pkcs12_filename=r'C:\PROJECT\assets\cert.p12',
pkcs12_password='****'))
session.verify = False
transport = Transport(session=session, )
settings = Settings(strict=False, xml_huge_tree=True, xsd_ignore_sequence_order=True, force_https=False)
self.vw_client = Client(
wsdl=self.wsdl,
transport=transport,
plugins=[WsAddressingPlugin(address_url='ws://address'),
# LoggingPlugin()
],
settings=settings
)
</code></pre>
| <python><soap><ssl-certificate><zeep> | 2023-11-09 13:20:09 | 1 | 616 | Omid Erfanmanesh |
77,453,416 | 7,631,505 | How to make this calculation faster? | <p>I have an issue that I will explain:
I have a set of images from an experiment, and each pixel of these images has 3 associated values which depend on its position (in case you wonder, this is a diffraction experiment). What I need is the intensity of each pixel, but as a function of these three values. The way I'm currently implementing it in Python is with a bunch of for loops that find the pixels whose values lie within a narrow range, sum them together, and save this as a point in my new "space". Here is a basic implementation with some dummy values:</p>
<pre><code>import numpy as np
def MaskApply(HKLvals, h, k, ll, dh, dk, dl):
ans = (
np.where(HKLvals[2].flatten() < ll + dl, 1, 0)
* np.where(HKLvals[2].flatten() > ll, 1, 0)
* np.where(HKLvals[1].flatten() > k, 1, 0)
* np.where(HKLvals[1].flatten() < k + dk, 1, 0)
* np.where(HKLvals[0].flatten() < h + dh, 1, 0)
* np.where(HKLvals[0].flatten() > h, 1, 0)
)
ans = np.reshape(ans, HKLvals[2].shape, order="C")
return ans
data = np.random.rand(120, 1000, 500) # this is a collection of 120 images each 1000x500
# This is read by a file in my actual code
HKLvals = np.ones((120, 3, 1000, 500)) # These are some values connected to the images each of the 1000x500 pixels has three values. for the 120 images
H, dh = np.linspace(0, 10, 1000, retstep=True)
K, dk = np.linspace(0, 10, 1000, retstep=True)
L, dl = np.linspace(0, 5, 100, retstep=True)
outData = np.zeros((len(H), len(K)))
for i in range(len(L)):
l = L[i]
for j in range(len(H)):
h = H[j]
for t in range(len(K)):
k = K[t]
maskArr = np.zeros(np.shape(data))
pixel_number = 0
for m in range(np.shape(data)[0]):
mask = MaskApply(HKLvals[m], h, k, l, dh, dk, dl)
pixel_number += np.count_nonzero(mask)
maskArr[m] = mask
maskArr = maskArr.astype(bool)
outData[j, t] = np.sum(data[maskArr]) / pixel_number
print(outData) # of course in reality this is saved in an .h5
</code></pre>
<p>This is the gist of my code at the moment. I'd love it if anybody had any input and could point out some efficiency mistakes, or perhaps a different algorithm that I'm not thinking of.</p>
<p>I suppose the easiest way to improve the efficiency would be to stop using Python and switch to something more performant like C++ or Julia. I'm also looking into that, but I'm not very well versed in those, so in the meantime any input would be great.</p>
<p>Thanks to all who will read.</p>
<p>EDIT: fixed the code which is now running properly</p>
| <python><numpy><performance><optimization> | 2023-11-09 13:02:19 | 1 | 316 | mmonti |
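One direction worth trying before switching languages (a sketch with made-up shapes and bin edges, not the experiment's real values): the triple loop over (h, k, l) boxes is a 3-D histogram in disguise, so all pixels can be binned in one call with `np.histogramdd`, using the intensities as weights, and the per-bin mean recovered by dividing by the counts:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 10_000
hkl = rng.uniform(0, 10, size=(n_pixels, 3))   # one (h, k, l) triple per pixel
intensity = rng.random(n_pixels)               # the pixel values from `data`

edges = [np.linspace(0, 10, 11)] * 3           # 10 bins per axis (illustrative)
sums, _ = np.histogramdd(hkl, bins=edges, weights=intensity)
counts, _ = np.histogramdd(hkl, bins=edges)
with np.errstate(invalid="ignore"):
    mean_intensity = sums / counts             # NaN where a bin is empty
```

In the code above, `hkl` would come from something like `np.moveaxis(HKLvals, 1, -1).reshape(-1, 3)` and `intensity` from `data.ravel()` (exact reshaping depends on the array layout): two `histogramdd` calls then replace the entire four-level loop nest and all the per-box boolean masks.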
77,453,408 | 2,725,810 | How to process fast 10,000 small objects stored in S3 | <p><strong>The problem</strong>: Given a list of 10000 paths to objects stored in S3, I need to process the corresponding objects fast (under 1 second). Each object is 40 KB.</p>
<p><strong>The background</strong>: Each object represents a document. Each document is associated with one or more users (could be a thousand users or more, hence duplication of documents is avoided). We need to process all documents associated with a given user. The processing is a search in the documents' contents based on a given query. The exact details of the search are unimportant here, except for the fact that the search results are small (~1KB). In the above problem statement, the user has 10000 documents associated with him.</p>
<p>An approach I have already tested is to use parallelization by utilizing multiple instances of a Lambda function, each downloading and processing a part of the list of objects.</p>
<p>The computation part of this approach is fast. It is the downloading of objects that is the bottleneck. I would like to find a smarter (cheaper and more performant) approach.</p>
<p>One idea is to merge all 10,000 objects into a single temporary object and then download the latter large object with a single <code>get</code> request. However, the only way of doing this that I am aware of is by multipart upload, which <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html" rel="nofollow noreferrer">requires that each part be at least 5 MB</a>. In my case, the parts are 40 KB each. I cannot merge the parts in advance, since an object can be associated with many different users and thus appear in combination with different objects.</p>
<p>Is there a workaround or a different approach I can use?</p>
<p>P.S. The question <a href="https://repost.aws/questions/QUCbTOwBxKSjmaTSY7ul52zQ/how-to-process-fast-10-000-small-objects-stored-in-s3" rel="nofollow noreferrer">at re:Post</a></p>
| <python><amazon-web-services><amazon-s3><aws-lambda><boto3> | 2023-11-09 13:01:11 | 1 | 8,211 | AlwaysLearning |
77,453,363 | 16,056,216 | Videocapture from opencv doesn't work for me. What should I do? | <p>I have dockerized my app using the <code>http://nvcr.io/nvidia/pytorch:23.10-py3</code> base image. I'll share my Dockerfile with you. I am also using Ubuntu.</p>
<p>The problem here is that the <code>VideoCapture</code> method from <code>opencv==4.8.0.74</code> doesn't work!</p>
<p>It doesn't raise an error, but it doesn't work!</p>
<p>I also couldn't use newer OpenCV versions because I faced the following error:</p>
<pre class="lang-none prettyprint-override"><code>module 'cv2.dnn' has no attribute 'DictValue'
</code></pre>
<p>My Dockerfile:</p>
<pre class="lang-bash prettyprint-override"><code>
FROM nvcr.io/nvidia/pytorch:23.10-py3
COPY requirements.txt /opt/app/requirements.txt
WORKDIR /opt/app
# RUN pip uninstall -y opencv-python
RUN pip install -r requirements.txt
RUN pip uninstall -y opencv-python opencv-python-headless
RUN pip install opencv-python==4.8.0.74
RUN mkdir -p /home/amirreza/Desktop/vi/file15
RUN mkdir -p /home/amirreza/Desktop/vi/file16
COPY vi.py /opt/app/vi.py
ENV PYTHONUNBUFFERED=1
CMD [ "python", "./vi.py" ]
</code></pre>
<p>My <code>requirements.txt</code>:</p>
<pre class="lang-none prettyprint-override"><code>transformers==4.34.0
timm==0.9.7
pytest-shutil
</code></pre>
<p>The part of my python script:</p>
<pre><code>import os
import cv2
from PIL import Image
import numpy as np
from transformers import pipeline
def video_to_frames(video_path, frames_dir):
video = cv2.VideoCapture(video_path)
count = 0
while video.isOpened():
ret, frame = video.read()
if not ret:
break
cv2.imwrite(frames_dir + "/{:d}.png".format(count), frame)
count += 1
video.release()
# cv2.destroyAllWindows()
return count
cwd = os.getcwd()
folder = 'videos'
# video2.mp4 & video1.mp4 path
video1_path = os.path.join(cwd, folder, 'video1.mp4')
video2_path = os.path.join(cwd, folder, 'video2.mp4')
aa='/opt/app/file15'
bb='/opt/app/file16'
os.makedirs('/opt/app/file15', exist_ok=True)
os.makedirs('/opt/app/file16', exist_ok=True)
frames_count_1 = video_to_frames(video1_path, aa)
frames_count_2 = video_to_frames(video2_path, bb)
</code></pre>
<p>In my script the function doesn't work because <code>VideoCapture</code> doesn't work (I think): after running the code, file15 and file16 are still empty.</p>
<p>The same code runs correctly on Google Colab.</p>
<p>Can someone please help me to fix this problem?</p>
<p>I have also tried the following installation, but unfortunately it didn't work:</p>
<p><code>pip install opencv-contrib-python</code></p>
<p>By the way, could there be a problem with the Python version? Python is installed from that base image and its version is 3.10.12.</p>
<p>Or does OpenCV need any extra libraries to run the <code>VideoCapture</code> method?</p>
<p>Also, I printed the <code>count</code> variable in the function and the output was 0.</p>
| <python><docker><opencv><dockerfile><video-capture> | 2023-11-09 12:53:39 | 0 | 362 | Amirreza Hashemi |
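For what it's worth, `cv2.VideoCapture` failing silently in a container (`isOpened()` false or zero frames read, no exception) is commonly a missing system-level video backend rather than a Python problem: the pip wheels load FFmpeg/GStreamer shared libraries at runtime, and slim base images often lack them. A hedged Dockerfile addition to try (package names are the usual suspects on Debian/Ubuntu bases; verify against your specific image):

```shell
# Install the shared libraries OpenCV's video I/O backend typically needs.
RUN apt-get update && apt-get install -y --no-install-recommends \
        ffmpeg libsm6 libxext6 libgl1 \
    && rm -rf /var/lib/apt/lists/*
```

It also helps to fail loudly in Python instead of looping on a dead capture, e.g. `if not video.isOpened(): raise RuntimeError(f"cannot open {video_path}")` right after `cv2.VideoCapture(video_path)`, and to print `cv2.getBuildInformation()` to check whether FFmpeg support was compiled into the installed wheel.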