| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, ⌀ = null) |
|---|---|---|---|---|---|---|---|---|
75,797,128
| 10,259,099
|
Python3 module 'cups' has no attribute 'Connection'
|
<p>I am converting a Python 2 script to Python 3, and the following code fails on <code>cups.Connection()</code>:</p>
<pre><code>import cups
conn = cups.Connection()
</code></pre>
<p>It produces this traceback:</p>
<pre><code>Traceback (most recent call last):
File "/Users/sagara/Downloads/Printing Script/mnd_75 (latest).py", line 177, in <module>
conn = cups.Connection()
^^^^^^^^^^^^^^^
AttributeError: module 'cups' has no attribute 'Connection'
</code></pre>
<p>Following <a href="https://stackoverflow.com/a/58724758/10259099">this answer</a>, I already uninstalled pycups and installed it again. The Python version is <code>Python 3.11.2</code> (via pyenv) and the pycups version is:</p>
<pre><code>Requirement already satisfied: pycups in /Users/sagara/.pyenv/versions/3.11.2/lib/python3.11/site-packages (2.0.1)
</code></pre>
<p>Could this be caused by the device, since I am using an M1 Mac rather than Intel? Or am I doing something wrong in the code?</p>
<p>I also tried it in the terminal and got the same error:</p>
<pre><code>Python 3.11.2 (main, Mar 16 2023, 17:42:42) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cups
>>> conn = cups.Connection()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'cups' has no attribute 'Connection'
>>>
</code></pre>
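A common cause of this error is that a different module named <code>cups</code> shadows the real one — for example a local file called <code>cups.py</code>, or the unrelated <code>cups</code> PyPI package installed instead of <code>pycups</code>. A small diagnostic sketch that reports where an import resolves from and whether it has the expected attribute (it uses <code>json</code> as a stand-in module so it runs anywhere; with pycups installed correctly, <code>diagnose("cups", "Connection")</code> should point into site-packages and report True):

```python
import importlib
import importlib.util


def diagnose(module_name, attr):
    """Report where a module resolves from and whether it has an attribute."""
    spec = importlib.util.find_spec(module_name)
    if spec is None:
        return f"{module_name}: not importable"
    module = importlib.import_module(module_name)
    # If a local file shadows the real package, origin points into the
    # project directory rather than site-packages.
    origin = spec.origin or "<namespace package>"
    return f"{module_name} -> {origin} (has {attr}: {hasattr(module, attr)})"


print(diagnose("json", "dumps"))
```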
|
<python><cups><pycups>
|
2023-03-21 03:37:46
| 0
| 341
|
Rofie Sagara
|
75,797,103
| 2,998,077
|
Pandas new column from counts of column contents
|
<p>I have a simple data frame to which I want to add a column telling how many teams each Project has, according to a name dictionary.</p>
<p><a href="https://i.sstatic.net/Ctp4u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ctp4u.png" alt="enter image description here" /></a></p>
<p>The way I came up with seems to work OK, but it doesn't look very smart.</p>
<p>What is a better way to do so?</p>
<pre><code>import pandas as pd
from io import StringIO
dict_name = {
"William": "A",
"James": "C",
"Ava": "A",
"Elijah": "A",
"Mason": "B",
"Ethan": "B",
"Noah": "B",
"Benjamin": "B",
"Lucas": "B",
"Oliver": "B",
"Olivia": "C",
"Emma": "C"}
csvfile = StringIO(
"""
Project ID Members
A58 Noah, Oliver
A34 William, Elijah, James, Benjamin
A157 Lucas, Mason, Ethan, Olivia
A49 Emma, Ava""")
df = pd.read_csv(csvfile, sep = '\t', engine='python')
final_count_list = []
final_which_list = []
for names in df.Members.to_list():
team_list = []
for each in names.split(', '):
team_list.append(dict_name[each])
final_count_list.append(len(list(set(team_list))))
final_which_list.append(list(set(team_list)))
df['How many teams?'] = final_count_list
df['Which teams?'] = final_which_list
print (df)
</code></pre>
<p><a href="https://i.sstatic.net/F2Anf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F2Anf.png" alt="enter image description here" /></a></p>
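For comparison, a more concise sketch that builds both columns without the explicit double loop (the frame is reconstructed inline here rather than read from the screenshot):

```python
import pandas as pd

dict_name = {"William": "A", "James": "C", "Ava": "A", "Elijah": "A",
             "Mason": "B", "Ethan": "B", "Noah": "B", "Benjamin": "B",
             "Lucas": "B", "Oliver": "B", "Olivia": "C", "Emma": "C"}

df = pd.DataFrame({"Project ID": ["A58", "A34", "A157", "A49"],
                   "Members": ["Noah, Oliver",
                               "William, Elijah, James, Benjamin",
                               "Lucas, Mason, Ethan, Olivia",
                               "Emma, Ava"]})

# Map each member list to its sorted set of teams in one pass.
teams = df["Members"].str.split(", ").apply(
    lambda names: sorted({dict_name[n] for n in names}))
df["How many teams?"] = teams.str.len()
df["Which teams?"] = teams
print(df)
```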
|
<python><pandas><dataframe>
|
2023-03-21 03:28:59
| 1
| 9,496
|
Mark K
|
75,797,073
| 6,248,559
|
Pandas data masking where the conditions come from other variables
|
<p>I have a dataframe and two lists as below:</p>
<pre><code>seller1 = [5, 4, 3]
seller2 = [4, 2, 1]
df = {'customer': [1, 1, 1, 2, 2, 2], 'time': [1,2,3,1,2,3], 'location': [3,4,2,4,3,3], 'demand':[10,12,15,20,8,16], 'price':[3,4,4,5,2,1]}
df = pd.DataFrame(df)
</code></pre>
<p>Which results in the following table:</p>
<pre><code> customer time location demand price
0 1 1 3 10 3
1 1 2 4 12 4
2 1 3 2 15 4
3 2 1 4 20 5
4 2 2 3 8 2
5 2 3 3 16 1
</code></pre>
<p>The <code>seller1</code> and <code>seller2</code> lists show where the sellers are at time 1,2, and 3. I want to know the demand and the price if one of the sellers is there at the exact time and mask the demand data otherwise. For example, at time 1, seller one is at location 5 and seller 2 is at location 4. Likewise, customer 1 is at location 3 and customer 2 is at location 4. So, the sellers meet the first customer but not the second at t=1.</p>
<p>The end table I want to have is</p>
<pre><code> customer time location demand price
0 1 1 3 None None
1 1 2 4 12 4
2 1 3 2 None None
3 2 1 4 20 5
4 2 2 3 None None
5 2 3 3 16 1
</code></pre>
<p>So far, I have</p>
<pre><code>for i in range(df.shape[0]):
if df["location"][i] != seller1[int(df["time"][i])-1] and df["location"][i] != seller2[int(df["time"][i])-1]:
df["demand"][i] = np.nan
df["price"][i] = np.nan
</code></pre>
<p>This is producing a <code>SettingWithCopyWarning:</code> and it doesn't look efficient with the for loop, either.</p>
<p>Is there a way to do this with df.mask()?</p>
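One possible vectorized sketch: look up each seller's location at each row's time with NumPy fancy indexing, then apply <code>Series.mask</code> per column with the resulting boolean condition (this avoids the row loop and the <code>SettingWithCopyWarning</code>):

```python
import numpy as np
import pandas as pd

seller1 = [5, 4, 3]
seller2 = [4, 2, 1]
df = pd.DataFrame({'customer': [1, 1, 1, 2, 2, 2],
                   'time': [1, 2, 3, 1, 2, 3],
                   'location': [3, 4, 2, 4, 3, 3],
                   'demand': [10, 12, 15, 20, 8, 16],
                   'price': [3, 4, 4, 5, 2, 1]})

# Seller locations at each row's time (time is 1-based, the lists 0-based).
t = df['time'].to_numpy() - 1
loc = df['location'].to_numpy()
no_meeting = pd.Series(
    (loc != np.array(seller1)[t]) & (loc != np.array(seller2)[t]),
    index=df.index)

# Mask demand and price where neither seller is at the customer's location.
df['demand'] = df['demand'].mask(no_meeting)
df['price'] = df['price'].mask(no_meeting)
print(df)
```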
|
<python><pandas><dataframe><data-masking>
|
2023-03-21 03:22:34
| 2
| 301
|
A Doe
|
75,796,987
| 6,676,101
|
How do we check whether a class has overridden a particular method inherited from `object`?
|
<p>I was thinking about writing a class decorator which would check whether a particular method inherited from <code>object</code> had been overridden or not.</p>
<pre class="lang-python prettyprint-override"><code>import io
def check_str_method(kls:type) -> type:
with io.StringIO() as strm:
if "__str__" in kls.__dict__:
print("`__str__` is in `__dict__`", file = strm)
if "__str__" in dir(kls):
print("`__str__` is in container returned by `dir(kls)`", file = strm)
str_method = getattr(kls, "__str__")
if str_method == object.__str__:
pass
if str_method == object.__str__:
pass
return kls
@check_str_method
class ChildClass1:
pass
@check_str_method
class ChildClass2:
def __str__(self):
return "I HAVE OVERRIDDEN __str__"
obj1 = ChildClass1()
obj2 = ChildClass2()
print("obj1...", str(obj1))
print("obj2...", str(obj2))
</code></pre>
<p>What is the proper pythonic way to do this? Do we check <code>mro()</code> (the method resolution order?)</p>
<p>Do we search in <code>__bases__</code>?</p>
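One common idiom (not necessarily the only pythonic one) is an identity check against <code>object.__str__</code> — normal attribute lookup on the class already walks the MRO, so there is no need to search <code>__bases__</code> by hand:

```python
def overrides_object_str(kls):
    # Attribute lookup on the class walks the MRO, so this is True both
    # when kls defines __str__ itself and when it inherits an override.
    return kls.__str__ is not object.__str__


class Plain:
    pass


class Custom:
    def __str__(self):
        return "I HAVE OVERRIDDEN __str__"


class CustomChild(Custom):
    pass


print(overrides_object_str(Plain))        # False
print(overrides_object_str(Custom))       # True
print(overrides_object_str(CustomChild))  # True (inherited override)
```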
|
<python><python-3.x><decorator><python-decorators>
|
2023-03-21 03:00:44
| 1
| 4,700
|
Toothpick Anemone
|
75,796,965
| 6,676,101
|
How do we write a decorator which forces a class to inherit from some other class?
|
<p>Normally, we have one class inherit from another as follows:</p>
<pre class="lang-python prettyprint-override"><code>class ChildClass(ParentClass):
"""
ChildClass is a subclass of ParentClass
ParentClass is a superclass of ChildClass
"""
pass
</code></pre>
<p>That is fine for most applications.</p>
<p>However, how would we write a decorator which forces one class to inherit from another?</p>
<pre class="lang-python prettyprint-override"><code>class SuperClass:
def __str__(self):
return "used SuperClass.__str__ INHERRITED FROM SUPER CLASS"
@classmethod
def force_inherit_from_super_class(cls, subclass):
# should have some more code in here somewhere
return subclass
#############################################################
class ChildClass1:
pass
@SuperClass.force_inherit_from_super_class
class ChildClass2:
pass
@SuperClass.force_inherit_from_super_class
class ChildClass3:
def __str__(self):
return "using ChildClass3.__str__. I HAVE OVERRIDDEN __str__"
obj1 = ChildClass1()
obj2 = ChildClass2()
obj3 = ChildClass3()
print("obj1...", str(obj1)) # should use object.__str__()
print("obj2...", str(obj2)) # should use SuperClass.__str__()
print("obj3...", str(obj3)) # should use ChildClass3.__str__()
</code></pre>
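One possible sketch: since the decorator receives the finished class, it can return a replacement class whose bases include <code>cls</code> (this rebuilds the class with <code>type()</code> rather than mutating <code>__bases__</code>, which is assumed acceptable here):

```python
class SuperClass:
    def __str__(self):
        return "used SuperClass.__str__"

    @classmethod
    def force_inherit_from_super_class(cls, subclass):
        if issubclass(subclass, cls):
            return subclass
        # Rebuild the class with cls appended to its bases; the MRO puts
        # subclass before cls, so subclass's own methods still win.
        return type(subclass.__name__, (subclass, cls), {})


@SuperClass.force_inherit_from_super_class
class ChildClass2:
    pass


@SuperClass.force_inherit_from_super_class
class ChildClass3:
    def __str__(self):
        return "using ChildClass3.__str__"


print(str(ChildClass2()))  # uses SuperClass.__str__
print(str(ChildClass3()))  # uses ChildClass3.__str__
```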
|
<python><python-3.x><decorator><python-decorators>
|
2023-03-21 02:56:25
| 1
| 4,700
|
Toothpick Anemone
|
75,796,905
| 15,724,084
|
python tkinter getting text value from selected radiobutton
|
<p>I have radio buttons for the user to select. Each selection puts some text into an Entry widget.
Now I want to get the text value from the radio button that is <code>selected</code>. As I remember, checkboxes have true or false values, but I used radio buttons in my project.
Screenshot: <a href="https://i.sstatic.net/vPnFW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vPnFW.png" alt="enter image description here" /></a></p>
<p>I need to know which radio button is selected, and if the user changes the text in the entry widget, I want to change the text message variables accordingly.</p>
<p>Here is my code:</p>
<pre><code>from tkinter import ttk
from tkinter import *
import yagmail
root=Tk()
root.title('send bunch of inquiry emails')
#frame entry for search and button
frameUpper=Frame(master=root,borderwidth=3,relief='flat',padx=12,pady=12)
frameUpper.grid(row=0,column=0,sticky=(E,W))
#frame for text to send via email
frameBtm=Frame(master=root,borderwidth=3,relief='groove',padx=12,pady=12,width=200,height=100)
frameBtm.grid(row=1,column=0,sticky=(N,S,E,W))
frm_emails=ttk.Frame(master=root,padding=3,borderwidth=3,width=300,relief='sunken')
var_inp=StringVar()
lbl_search=Label(master=frameUpper,text='enter search keyword:')
entry_search=Entry(master=frameUpper,width=33,textvariable=var_inp,relief='groove')
#text editor
txt_greetings='Hi,'
search_variable='keyword'
txt_body=f'I would like to have price information for {search_variable} for our company located in Azerbaijan'
txt_regards='Kind regards,'
btn_radio=StringVar()
btn_radio_greeting=ttk.Radiobutton(master=frameBtm,text='Greetings',variable=btn_radio,value=txt_greetings)
btn_radio_body=ttk.Radiobutton(master=frameBtm,text='Body',variable=btn_radio,value=txt_body)
btn_radio_regards=ttk.Radiobutton(master=frameBtm,text='Regards',variable=btn_radio,value=txt_regards)
lbl_text_message=Label(master=frameBtm,text='text message for email')
entry_msg_txt=StringVar()
entry_text=Entry(master=frameBtm,textvariable=entry_msg_txt,borderwidth=1,width=20)
def func_x():
frm_emails.grid(row=2,sticky=(S,E,W))
Label(master=frm_emails, text=no_duplicates,wraplength=200).pack()
lbl_search.grid(sticky=(E,W))
entry_search.grid(sticky=(E,W))
entry_search.bind('<Return>', start_scraping)
Button(master=frameUpper, width=13, command=start_scraping, text='run').grid(pady=5)
btn_radio_greeting.grid(row=1,column=0,sticky=(W,))
btn_radio_body.grid(row=2,column=0,sticky=(W,))
btn_radio_regards.grid(row=3,column=0,sticky=(W,))
btn_radio_greeting.bind('<Double-1>',change_value_btn_greetings)
btn_radio_body.bind('<Double-1>',change_value_btn_body)
btn_radio_regards.bind('<Double-1>',change_value_btn_regards)
lbl_text_message.grid(row=0,column=0,rowspan=1,columnspan=1,sticky=(W))
entry_text.grid(row=1,column=1,rowspan=3,columnspan=3,sticky=(N,S,E,W))
root.columnconfigure(0,weight=1)
root.rowconfigure(1,weight=1)
frameUpper.columnconfigure(0,weight=1)
frameBtm.columnconfigure(0,weight=1)
frameBtm.rowconfigure(0,weight=1)
frameBtm.columnconfigure(1,weight=5)
frameBtm.rowconfigure(1,weight=5)
lbl_text_message.rowconfigure(0,weight=1)
lbl_text_message.columnconfigure(0,weight=1)
entry_text.rowconfigure(1,weight=5)
entry_text.columnconfigure(1,weight=5)
frm_emails.rowconfigure(0,weight=1)
frm_emails.columnconfigure(0,weight=1)
##newly added lines
def get_value_from_entry_txt_put_to_variable(event):
global txt_greetings
global txt_body
global txt_regards
global txt_message_to_send
if btn_radio_regards.winfo_viewable()==1:
txt_regards = entry_msg_txt.get()
elif btn_radio_body.winfo_viewable()==1:
txt_body = entry_msg_txt.get()
elif btn_radio_greeting.winfo_viewable()==1:
txt_greetings = entry_msg_txt.get()
print(txt_message_to_send)
txt_message_to_send = txt_greetings + '\n' + txt_body + '\n' + txt_regards
test_print()
entry_text.bind('<Return>',get_value_from_entry_txt_put_to_variable)
# send emails
txt_message_to_send=txt_greetings+'\n'+txt_body+'\n'+txt_regards
def test_print():
print(txt_message_to_send)
if __name__=='__main__':
root.mainloop()
</code></pre>
<p>I am able to get the children of frameBtm as a list, but I don't know how to find out which radio button is actually selected:</p>
<pre><code>a=[x for x in frameBtm.winfo_children()]
print(a)
[<tkinter.ttk.Radiobutton object .!frame2.!radiobutton>, <tkinter.ttk.Radiobutton object .!frame2.!radiobutton2>, <tkinter.ttk.Radiobutton object .!frame2.!radiobutton3>, <tkinter.Label object .!frame2.!label>, <tkinter.Entry object .!frame2.!entry>]
</code></pre>
<p><strong>With assistance from @Brian Oakley I resolved my issue.</strong>
I bound the Enter key for entry-widget modifications, then used the <code>get()</code> method in conditionals to determine which radiobutton <code>value</code> is selected. The code is below.</p>
<pre><code>def get_value_from_entry_txt_put_to_variable(event):
global txt_greetings
global txt_body
global txt_regards
global txt_message_to_send
print(btn_radio.get())
if btn_radio.get()==txt_regards:
print('btn_radio_regards')
txt_regards = entry_msg_txt.get()
elif btn_radio.get()==txt_body:
print('btn_radio_body')
txt_body = entry_msg_txt.get()
elif btn_radio.get()==txt_greetings:
print('btn_radio_greeting')
txt_greetings = entry_msg_txt.get()
print(txt_message_to_send,'---')
txt_message_to_send = txt_greetings + '\n' + txt_body + '\n' + txt_regards
print(txt_message_to_send, '-+-')
test_print()
entry_text.bind('<Return>',get_value_from_entry_txt_put_to_variable)
</code></pre>
|
<python><tkinter><radio-button>
|
2023-03-21 02:40:55
| 1
| 741
|
xlmaster
|
75,796,866
| 850,781
|
How do I select a subset of a DataFrame based on a condition on one level of a MultiIndex
|
<p>Similar to <a href="https://stackoverflow.com/q/75796107/850781">How do I select a subset of a DataFrame based on one level of a MultiIndex</a>, let</p>
<pre><code>df = pd.DataFrame({"v":range(12)},
index=pd.MultiIndex.from_frame(
pd.DataFrame({"name":4*["a"]+4*["b"]+4*["c"],
"rank":[x*x for x in range(12)]})))
</code></pre>
<p>and suppose I want to select only rows with the <strong>second</strong> level <code>rank</code> being within 25 from its smallest value for the given <strong>first</strong> level <code>name</code>:</p>
<pre><code> v
name rank
a 0 0
1 1
4 2
9 3
b 16 4
25 5
36 6
c 64 8
81 9
</code></pre>
<p>This time I have no idea how to do that easily (other than a convoluted combination of <code>to_frame</code> and <code>groupby</code> - as explained in <a href="https://stackoverflow.com/q/75796382/850781">How do I select a subset of a DataFrame based on a condition on a column</a>).</p>
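A sketch using a group-wise transform on the level values, which avoids converting the frame with <code>to_frame</code>:

```python
import pandas as pd

df = pd.DataFrame({"v": range(12)},
                  index=pd.MultiIndex.from_frame(
                      pd.DataFrame({"name": 4*["a"] + 4*["b"] + 4*["c"],
                                    "rank": [x*x for x in range(12)]})))

# Broadcast each name's smallest rank back to its rows, then compare.
rank = pd.Series(df.index.get_level_values("rank"), index=df.index)
min_rank = rank.groupby(level="name").transform("min")
subset = df[rank <= min_rank + 25]
print(subset)
```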
|
<python><pandas><dataframe><multi-index>
|
2023-03-21 02:29:59
| 1
| 60,468
|
sds
|
75,796,835
| 10,259,099
|
Migrate python2 to 3 IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
|
<p>I am migrating a legacy Python 2 script to Python 3, but I get an error in the following code:</p>
<pre><code>print ('resizing images')
for x in range(0, len(img)):
imgResized.append(cv2.resize(img[x], (301 * len(img), 301)))
print ('resizing done')
source = interlace(imgResized, 301 * len(img), 301 * len(img))
cv2_im = cv2.cvtColor(source, cv2.COLOR_BGR2RGB)
</code></pre>
<p>The error occurs on <code>source = interlace(imgResized, 301 * len(img), 301 * len(img))</code>.
The <code>interlace</code> function is defined as follows:</p>
<pre><code>def interlace(img, h, w):
inter = np.empty((h, w, 3), img[0].dtype)
for i in range(h - 1):
val = i % NUM_FRAMES
print('index')
print(img[val])
inter[i, :, :] = img[val][i / NUM_FRAMES, :, :]
return inter
</code></pre>
<p>Inside the interlace function the error comes from <code>inter[i, :, :] = img[val][i / NUM_FRAMES, :, :]</code>.</p>
<p>This is the complete traceback:</p>
<pre><code>Traceback (most recent call last):
File "/Users/sagara/Downloads/Printing Script/mnd_75 (latest).py", line 171, in <module>
source = interlace(imgResized, 301 * len(img), 301 * len(img))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sagara/Downloads/Printing Script/mnd_75 (latest).py", line 126, in interlace
inter[i, :, :] = img[val][i / NUM_FRAMES, :, :]
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
</code></pre>
<p>The console output from <code>print('index')</code> and <code>print(img[val])</code> is:</p>
<pre><code>index
[[[239 236 238]
[240 237 240]
[247 245 246]
...
[255 255 255]
[255 255 255]
[255 255 255]]
[[211 208 210]
[215 212 214]
[233 230 232]
...
[255 255 255]
[255 255 255]
[255 255 255]]
[[229 227 228]
[232 229 231]
[242 240 242]
...
[255 255 255]
[255 255 255]
[255 255 255]]
...
[[149 151 152]
[159 161 162]
[200 202 203]
...
[255 255 255]
[255 255 255]
[255 255 255]]
[[156 158 159]
[166 168 169]
[204 206 207]
...
[255 255 255]
[255 255 255]
[255 255 255]]
[[128 129 131]
[140 142 143]
[190 192 192]
...
[255 255 255]
[255 255 255]
[255 255 255]]]
</code></pre>
<p>Is it maybe because <code>img[val]</code> is not a correct value?</p>
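For what it's worth, the direct cause is the Python 2→3 change in division: <code>i / NUM_FRAMES</code> now returns a float, which NumPy rejects as an index. A sketch of the fix using floor division (<code>NUM_FRAMES</code> and the frame shapes below are stand-ins, not the script's real values):

```python
import numpy as np

NUM_FRAMES = 2  # stand-in; the real constant is defined elsewhere in the script


def interlace(img, h, w):
    inter = np.empty((h, w, 3), img[0].dtype)
    for i in range(h - 1):
        val = i % NUM_FRAMES
        # Python 3: use // so the row index stays an integer.
        inter[i, :, :] = img[val][i // NUM_FRAMES, :, :]
    return inter


frames = [np.zeros((2, 4, 3), dtype=np.uint8),
          np.ones((2, 4, 3), dtype=np.uint8)]
out = interlace(frames, 4, 4)
```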
|
<python><numpy><opencv>
|
2023-03-21 02:23:40
| 1
| 341
|
Rofie Sagara
|
75,796,716
| 13,176,896
|
InvalidRequestError when make order in pybit
|
<p>I want to test some code for placing short-open orders using pybit.
My code is as follows.</p>
<pre><code>from pybit import *
session = usdt_perpetual.HTTP(endpoint = "https://api-testnet.bybit.com",
api_key = 'yyyyy',
api_secret = 'xxxxxx')
symbol = "HIGHUSDT"
resp = session.place_active_order(
symbol=symbol,
side="Sell",
order_type="Market",
qty= 10,
time_in_force="GoodTillCancel",
reduce_only=False,
close_on_trigger=False
)
</code></pre>
<p>It makes the following Error.</p>
<pre><code>InvalidRequestError: Position idx not match position mode (ErrCode: 130001) (ErrTime: 01:35:30).
Request → POST https://api-testnet.bybit.com/private/linear/order/create: {'api_key': 'xxxxxxxx', 'close_on_trigger': False, 'order_type': 'Market', 'qty': 10, 'recv_window': 5000, 'reduce_only': False, 'side': 'Buy', 'symbol': 'HIGHUSDT', 'time_in_force': 'GoodTillCancel', 'timestamp': 1679362530091, 'sign': '64632a4ed529118f8516eb01294deec4036289ae835326eb24abef6b1ab8b813'}.
</code></pre>
<p>What is wrong and how can I fix it?
I changed qty and order_type, but it does not help.</p>
|
<python><bybit><python-bybit><pybit>
|
2023-03-21 01:47:50
| 1
| 2,642
|
Gilseung Ahn
|
75,796,673
| 19,425,874
|
Receiving print error while using ReportLab to print PDFs in Python
|
<p>This is a difficult issue to diagnose, because I am unsure how to even get the full details. I've been testing for hours and there is no clear pattern. Every few times I run my script, I receive this printer error (image below):</p>
<p><a href="https://i.sstatic.net/9YO2a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9YO2a.png" alt="printer error" /></a></p>
<p>The PDFs go to the printer queue, but when there is an error they get stuck there. I am able to fix it by simply unplugging the printer and plugging it back in, but I'd really love to stamp this bug out.</p>
<p>Sometimes it happens after 10 successful prints, other times it has happened every other.</p>
<p>Does anyone see a bug in my code that might be causing this? Or an overall issue with my script as to why this happens occasionally?</p>
<pre><code>import os
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import win32print
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import landscape, portrait
from reportlab.lib.units import mm
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
# Set up Google Sheets API credentials
scope = ['https://spreadsheets.google.com/feeds',
'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('creds.json', scope)
client = gspread.authorize(creds)
# Set up printer name and font
printer_name = "Rollo Printer"
font_path = "arial.ttf"
# Register Arial font
pdfmetrics.registerFont(TTFont('Arial', font_path))
# Open the Google Sheet and select Label and Claims worksheets
sheet_url = "https://docs.google.com/spreadsheets/d/1eRO-30eIZamB5sjBp-Mz2QMKCfGxV037MQO8nS7G7AI/edit#gid=1988901382"
label_sheet = client.open_by_url(sheet_url).worksheet("Label")
claims_sheet = client.open_by_url(sheet_url).worksheet("Claims")
# Get the number of label copies to print
num_copies = int(label_sheet.acell("F2").value)
# Set up label filename and PDF canvas
label_file = "label.pdf"
label_canvas = canvas.Canvas(label_file, pagesize=landscape((400, 250)),bottomup=432)
# Write label text to PDF canvas
label_text = label_sheet.get_all_values()
x_pos = 10 # set initial x position
y_pos = 260 # set initial y position
line_height = 20 # set line height for text
label_canvas.setFont('Arial', 20)
for row in label_text:
for col in row:
textobject = label_canvas.beginText()
textobject.setFont('Arial', 20)
textobject.setTextOrigin(x_pos, y_pos)
lines = col.split("\n")
for line in lines:
textobject.textLine(line)
y_pos -= line_height
label_canvas.drawText(textobject)
x_pos += 145
y_pos = 260
x_pos = 10
# Save the label PDF and print to the printer
label_canvas.save()
for i in range(num_copies):
os.startfile(label_file, 'printto', printer_name, "", 1)
# Set up claims filename and PDF canvas
claims_file = "claims.pdf"
claims_canvas = canvas.Canvas(claims_file, pagesize=landscape((400, 250)),bottomup=432)
# Write claims text to PDF canvas
claims_text = claims_sheet.get_all_values()
x_pos = 10
y_pos = 260 # set initial y position
line_height = 20
claims_canvas.setFont('Arial', 20)
for row in claims_text:
for col in row:
textobject = claims_canvas.beginText()
textobject.setFont('Arial', 20)
textobject.setTextOrigin(x_pos, y_pos)
lines = col.split("\n")
for line in lines:
textobject.textLine(line)
y_pos -= line_height
claims_canvas.drawText(textobject)
x_pos += 145
y_pos = 260
x_pos = 10
# Save the claims PDF and print to the printer
claims_canvas.save()
os.startfile(claims_file, 'printto', printer_name, "", 1)
</code></pre>
|
<python><pdf-generation><pywin32><reportlab>
|
2023-03-21 01:31:31
| 0
| 393
|
Anthony Madle
|
75,796,615
| 1,857,373
|
TypeError: get_params() missing 1 required positional argument: 'self' on RandomizedSearchCV
|
<p><strong>Problem</strong></p>
<p>I am running a KNeighborsClassifier with OneVsRestClassifier. I already have several other classifiers working perfectly with OneVsRestClassifier, but when I try the pattern of code that worked for the other RandomizedSearchCV classifiers, this one hits a snag with a TypeError.</p>
<p>I have parameters for the KNeighborsClassifier: n_neighbors, weights, and algorithm, initialized with an object and not a class. Yes, get_params() is missing 1 required positional argument: 'self'.</p>
<p><strong>Code</strong></p>
<pre><code>param_grid = {
'n_neighbors': [1,2,3,4,5,6,8,10,11,12,14,20],
'weights': ('uniform', 'distance', 'callable', 'None'),
'algorithm': ('auto', 'ball_tree', 'kd_tree', 'brute', 'auto'),
'leaf_sizeint': [10,20,30,40,50,60,80,120]
}
ovr_KNN_model = OneVsRestClassifier(KNeighborsClassifier)
ovo_KNN_model = OneVsOneClassifier(KNeighborsClassifier)
ovr_KNN_grid_param = RandomizedSearchCV(ovr_KNN_model, param_grid, cv=5, n_jobs=3, error_score="raise")
ovo_KNN_grid_param = RandomizedSearchCV(ovo_KNN_model, param_grid, cv=5, n_jobs=3, error_score="raise")
ovr_KNN_model_model_fit = ovr_KNN_grid_param.fit(X_train_scaled, y_train)
</code></pre>
<p><strong>Error Location</strong></p>
<pre><code>TypeError Traceback (most recent call last)
Cell In[51], line 2
1 ## scores
----> 2 ovr_KNN_model_model_fit = ovr_KNN_grid_param.fit(X_train_scaled, y_train)
Search:\n", ovr_KNN_model_model_fit.best_estimator_)
</code></pre>
<p><strong>Error Message</strong></p>
<pre><code>TypeError: get_params() missing 1 required positional argument: 'self'
</code></pre>
<p><strong>Error Trace</strong></p>
<pre><code>TypeError Traceback (most recent call last)
Cell In[51], line 2
1 ## scores
----> 2 ovr_KNN_model_model_fit = ovr_KNN_grid_param.fit(X_train_scaled, y_train)
3 ovo_KNN_model_model_fit = ovo_KNN_grid_param.fit(X_train_scaled, y_train)
5 print("\n OneVsRest Classifier KNN best estimator across Randomized Search:\n", ovr_KNN_model_model_fit.best_estimator_)
File ~/opt/anaconda3/lib/python3.9/site-packages/sklearn/model_selection/_search.py:891, in BaseSearchCV.fit(self, X, y, groups, **fit_params)
885 results = self._format_results(
886 all_candidate_params, n_splits, all_out, all_more_results
887 )
889 return results
--> 891 self._run_search(evaluate_candidates)
893 # multimetric is determined here because in the case of a callable
894 # self.scoring the return type is only known after calling
895 first_test_score = all_out[0]["test_scores"]
File ~/opt/anaconda3/lib/python3.9/site-packages/sklearn/model_selection/_search.py:1766, in RandomizedSearchCV._run_search(self, evaluate_candidates)
1764 def _run_search(self, evaluate_candidates):
1765 """Search n_iter candidates from param_distributions"""
-> 1766 evaluate_candidates(
1767 ParameterSampler(
1768 self.param_distributions, self.n_iter, random_state=self.random_state
1769 )
1770 )
File ~/opt/anaconda3/lib/python3.9/site-packages/sklearn/model_selection/_search.py:838, in BaseSearchCV.fit.<locals>.evaluate_candidates(candidate_params, cv, more_results)
830 if self.verbose > 0:
831 print(
832 "Fitting {0} folds for each of {1} candidates,"
833 " totalling {2} fits".format(
834 n_splits, n_candidates, n_candidates * n_splits
835 )
836 )
--> 838 out = parallel(
839 delayed(_fit_and_score)(
840 clone(base_estimator),
841 X,
842 y,
843 train=train,
844 test=test,
845 parameters=parameters,
846 split_progress=(split_idx, n_splits),
847 candidate_progress=(cand_idx, n_candidates),
848 **fit_and_score_kwargs,
849 )
850 for (cand_idx, parameters), (split_idx, (train, test)) in product(
851 enumerate(candidate_params), enumerate(cv.split(X, y, groups))
852 )
853 )
855 if len(out) < 1:
856 raise ValueError(
857 "No fits were performed. "
858 "Was the CV iterator empty? "
859 "Were there no candidates?"
860 )
File ~/opt/anaconda3/lib/python3.9/site-packages/joblib/parallel.py:1054, in Parallel.__call__(self, iterable)
1051 self._iterating = False
1053 with self._backend.retrieval_context():
-> 1054 self.retrieve()
1055 # Make sure that we get a last message telling us we are done
1056 elapsed_time = time.time() - self._start_time
File ~/opt/anaconda3/lib/python3.9/site-packages/joblib/parallel.py:933, in Parallel.retrieve(self)
931 try:
932 if getattr(self._backend, 'supports_timeout', False):
--> 933 self._output.extend(job.get(timeout=self.timeout))
934 else:
935 self._output.extend(job.get())
File ~/opt/anaconda3/lib/python3.9/site-packages/joblib/_parallel_backends.py:542, in LokyBackend.wrap_future_result(future, timeout)
539 """Wrapper for Future.result to implement the same behaviour as
540 AsyncResults.get from multiprocessing."""
541 try:
--> 542 return future.result(timeout=timeout)
543 except CfTimeoutError as e:
544 raise TimeoutError from e
File ~/opt/anaconda3/lib/python3.9/concurrent/futures/_base.py:446, in Future.result(self, timeout)
444 raise CancelledError()
445 elif self._state == FINISHED:
--> 446 return self.__get_result()
447 else:
448 raise TimeoutError()
File ~/opt/anaconda3/lib/python3.9/concurrent/futures/_base.py:391, in Future.__get_result(self)
389 if self._exception:
390 try:
--> 391 raise self._exception
392 finally:
393 # Break a reference cycle with the exception in self._exception
394 self = None
TypeError: get_params() missing 1 required positional argument: 'self'
</code></pre>
<p><strong>Data y</strong></p>
<pre><code>11719 4
25835 4
13523 5
49 4
19381 4
..
27642 5
28628 5
4506 5
40663 4
38826 4
Name: label, Length: 3146, dtype: int64
</code></pre>
<p><strong>Data X</strong></p>
<pre><code>array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]])
</code></pre>
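For reference, a sketch of the usual fix: pass an instance (<code>KNeighborsClassifier()</code>) rather than the class itself, and prefix the parameter names with <code>estimator__</code> so the search routes them through the OneVsRest wrapper. Synthetic data stands in for <code>X_train_scaled</code>/<code>y_train</code>, and the parameter names differ slightly from the grid above (<code>leaf_size</code>, and <code>weights</code> without <code>'callable'</code>/<code>'None'</code>, which are not valid values):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic multiclass data in place of X_train_scaled / y_train.
X, y = make_classification(n_samples=150, n_classes=3, n_informative=4,
                           random_state=0)

param_grid = {
    'estimator__n_neighbors': [1, 3, 5, 7, 11],
    'estimator__weights': ['uniform', 'distance'],
    'estimator__algorithm': ['auto', 'ball_tree', 'kd_tree', 'brute'],
    'estimator__leaf_size': [10, 30, 50],
}

# An instance, not the class: KNeighborsClassifier() with parentheses.
ovr_model = OneVsRestClassifier(KNeighborsClassifier())
search = RandomizedSearchCV(ovr_model, param_grid, n_iter=5, cv=3,
                            random_state=0, error_score="raise")
search.fit(X, y)
print(search.best_params_)
```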
|
<python><scikit-learn>
|
2023-03-21 01:19:27
| 0
| 449
|
Data Science Analytics Manager
|
75,796,520
| 4,441,239
|
Unable to generate service tokens - Permission denied on resource
|
<p>I am trying to follow documentation mentioned here in Google documentation: <a href="https://cloud.google.com/run/docs/authenticating/service-to-service#acquire-token" rel="nofollow noreferrer">https://cloud.google.com/run/docs/authenticating/service-to-service#acquire-token</a></p>
<p>I have tried both programmatic and API call method, but I get the same response.</p>
<pre><code>$ curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=abc-def-api-4yuwqeg56fq-ew.a.run.app" -H "Metadata-Flavor: Google"
Failed to generate identity token; IAM returned 403 Forbidden: Permission 'iam.serviceAccounts.getOpenIdToken' denied on resource (or it may not exist).
</code></pre>
<p>With libraries, I am getting the same error:</p>
<pre><code>>>> id_token = google.oauth2.id_token.fetch_id_token(auth_req, audience)
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/google/auth/compute_engine/credentials.py", line 377, in _call_metadata_identity_endpoint
id_token = _metadata.get(request, path, params=params)
File "/usr/local/lib/python3.10/dist-packages/google/auth/compute_engine/_metadata.py", line 182, in get
raise exceptions.TransportError(
google.auth.exceptions.TransportError: ('Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https%3A%2F%2Fabc-def-api-4yuwqeg56fq-ew.a.run.app&format=full from the Google Compute Engine metadata service. Status: 403 Response:\nb"Failed to generate identity token; IAM returned 403 Forbidden: Permission \'iam.serviceAccounts.getOpenIdToken\' denied on resource (or it may not exist).\\n"', <google.auth.transport.requests._Response object at 0x7f4c9f9aa7d0>)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/google/oauth2/id_token.py", line 340, in fetch_id_token
id_token_credentials.refresh(request)
File "/usr/local/lib/python3.10/dist-packages/google/auth/compute_engine/credentials.py", line 398, in refresh
self.token, self.expiry = self._call_metadata_identity_endpoint(request)
File "/usr/local/lib/python3.10/dist-packages/google/auth/compute_engine/credentials.py", line 380, in _call_metadata_identity_endpoint
six.raise_from(new_exc, caught_exc)
File "<string>", line 3, in raise_from
google.auth.exceptions.RefreshError: ('Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https%3A%2F%2Fabc-def-api-4yuwqeg56fq-ew.a.run.app&format=full from the Google Compute Engine metadata service. Status: 403 Response:\nb"Failed to generate identity token; IAM returned 403 Forbidden: Permission \'iam.serviceAccounts.getOpenIdToken\' denied on resource (or it may not exist).\\n"', <google.auth.transport.requests._Response object at 0x7f4c9f9aa7d0>)
>>>
</code></pre>
<p>I have put audience as the Cloud Run URL.</p>
<p>I am not able to understand what's wrong or what I can do to get it right. Any pointers / guidance would be much appreciated.</p>
<p>Many Thanks</p>
|
<python><google-cloud-platform><google-cloud-iam>
|
2023-03-21 00:55:12
| 2
| 949
|
Moni
|
75,796,507
| 14,173,197
|
How to run multiple batches in parallel with multiprocessing in python?
|
<p>I have a large dataset (approx 38,000 records) that I need to train. So I am trying to break down the loop into batches. As each batch is independent, I want to run batches in parallel for faster execution. How can I do that using multiprocessing? Below is my code:</p>
<pre><code>if __name__ == '__main__':
    # Create a list of tuples with each key and the corresponding df for each key
    site_data = [(site, df) for site, df in df.groupby('site')]
    # set batch size
    batch_size = 1000
    # split dictionary into batches
    batches = [site_data[i:i+batch_size] for i in range(0, len(site_data), batch_size)]
    # loop through each batch
    for batch_num, batch in enumerate(batches):
        batch_results = []
        batch_logs = []
        print(f'----Batch {batch_num+1} of {len(batches)}-------')
        batch_logs.append(f'\nBatch no {batch_num+1} of {len(batches)} \n')
        # run loops and get results for the current batch
        for i, (k, v) in enumerate(batch):
            print(f'----Iteration {i+1} of {len(batch)}-------')
            result, msg = run_model(k, v, param)
            batch_results.append(result)
            batch_logs.append(msg)
        # Combine the results and save the output files
        batch_results = pd.concat(batch_results)
        batch_results = batch_results.reset_index(drop=True)
        # Save logs to the file
        log_file = f"logs/logs_{today}"
        save_logs(log_file, batch_logs)
</code></pre>
|
<python><pandas><multiprocessing>
|
2023-03-21 00:53:25
| 1
| 323
|
sherin_a27
|
75,796,504
| 11,198,671
|
Plotting an ellipse with eigenvectors using matplotlib and numpy
|
<p>I ran into a deadlock with the following code, and I hope you can help me out. It seems like a deadlock situation because I need to understand the math in order to write the code, but I also need the code to understand the math. I have spent half a day trying to figure out what's wrong, but with no success. It could be possible that everything is correct, but who knows?</p>
<p>Let me first show you when things go well. My goal is to plot an ellipsoid (an ellipse) in a 2D space.</p>
<h1>Theoretical background</h1>
<h3>Ellipsoid</h3>
<p>For this, I take the standard definition of the ellipsoid, which says that an ellipsoid is a set of the form <code>(x-z)^T * D * (x-z) <= 1</code> where <code>D</code> is a positive definite matrix, <code>z</code> is the center of the ellipsoid, and <code>x</code> is a point in 2D space. For simplicity we assume that <code>z=0</code>, hence an ellipsoid is just <code>x^T*D*x <= 1</code> for <code>x</code> in <code>R^2</code>.</p>
<p>An ellipsoid is defined for any dimension. An ellipsoid in 2D is just a (filled) ellipse.</p>
<p>On the other hand, an ellipse in 2D can be described using the formula <code>(x^2 / a^2) + (y^2 / b^2) = 1</code> where <code>a</code> and <code>b</code> are the lengths of the semi-axes of the ellipsoid. If the ellipsoid (or ellipse) is centered at the origin and oriented such that its principal axes align with the coordinate axes, then the semi-axes are the distances from the origin to the points where the ellipsoid (or ellipse) intersects the coordinate axes.</p>
<h3>Eigenvalues and eigenvectors</h3>
<p>Let the positive definite matrix defining the ellipsoid be as follows:</p>
<pre><code>D = 1 0
0 4
</code></pre>
<p>To find the eigenvalues of the matrix, we solve the equation <code>det(D - λI) = 0</code> which yields <code>λ1 = 1</code> and <code>λ2 = 4</code>. We then take the square roots of the reciprocals of these eigenvalues to obtain the lengths of the semi-axes of the ellipsoid. Specifically, <code>a = sqrt(1/1) = 1</code> is the length of the first axis, and <code>b = sqrt(1/4) = 1/2</code> is the length of the second axis.</p>
<p>To determine the eigenvectors of the matrix, which define the directions in which the space is going to be stretched or compressed, we solve the systems <code>(D - λ1*I)x = 0</code> and <code>(D - λ2*I)x = 0</code>. This gives us the eigenspace <code>[u 0]</code> for <code>λ1</code> and the eigenspace <code>[0 w]</code> for <code>λ2</code>. We set <code>u</code> and <code>w</code> to 1 and conclude that <code>[1 0]</code> is an eigenvector for the eigenvalue <code>1</code>, while <code>[0 1]</code> is an eigenvector for the eigenvalue <code>4</code>.</p>
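<p>As a sanity check of the hand calculation above, the eigendecomposition can be reproduced with numpy (this snippet is only a check, not part of the plotting code):</p>

```python
import numpy as np

# The positive definite matrix from the example above.
D = np.array([[1.0, 0.0], [0.0, 4.0]])
eigenvalues, eigenvectors = np.linalg.eig(D)

# Eigenvalues 1 and 4 (the order numpy returns them in is not guaranteed).
# Columns of `eigenvectors` are the unit eigenvectors [1, 0] and [0, 1].
# Semi-axis lengths: sqrt(1/λ) for each eigenvalue.
semi_axes = np.sqrt(1.0 / eigenvalues)
```
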
<h3>Parameter Transformation and Scaling</h3>
<p>Next, we will plot the ellipse or the border of the ellipsoid. An ellipse can be described by the following equation: <code>(x^2 / a^2) + (y^2 / b^2) = 1</code>, where <code>a</code> and <code>b</code> are the lengths of the semi-axes of the ellipsoid. To plot the ellipse using <code>matplotlib</code>, we use the parameter transformation. We set <code>x = a * cos(t)</code> and <code>y = b * sin(t)</code> for <code>0 <= t <= 2pi</code>.</p>
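<p>A quick check of the parameter transformation: points generated with <code>x = a*cos(t)</code>, <code>y = b*sin(t)</code> should all satisfy the ellipsoid boundary equation <code>x^T*D*x = 1</code> (again just a verification snippet, using the example matrix from above):</p>

```python
import numpy as np

D = np.array([[1.0, 0.0], [0.0, 4.0]])
a, b = 1.0, 0.5  # semi-axes derived from the eigenvalues above

t = np.linspace(0, 2 * np.pi, 100)
# One boundary point per row: (a*cos t, b*sin t).
pts = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)

# Evaluate x^T D x for every point; each value should be exactly 1.
vals = np.einsum("ni,ij,nj->n", pts, D, pts)
```
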
<p>Finally, we will plot the eigenvectors that define the axes of the ellipsoid. We take the desired length of the semi-axis, multiply it by the corresponding eigenvector, and scale it by the norm of this vector.</p>
<h3>Plot</h3>
<p>Plotting everything together we get</p>
<p><a href="https://i.sstatic.net/QqP7f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QqP7f.png" alt="enter image description here" /></a></p>
<p>If I'm correct, the plot shows two semi-axes with lengths <code>1</code> and <code>1/2</code> that align with the x and y-axis, respectively, in line with the eigenvalues. Additionally, we can shift or rotate the ellipsoid using the rotation matrix:
<a href="https://i.sstatic.net/hPg2M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hPg2M.png" alt="enter image description here" /></a></p>
<h1>Implementation</h1>
<p>We start by creating an array of points and get eigenvalues and eigenvectors from the built-in functions:</p>
<pre class="lang-py prettyprint-override"><code># Generate points on the ellipse.
theta = np.linspace(0, 2 * np.pi, 10000)
eigenvalues, eigenvectors = np.linalg.eig(positive_definite_matrix)
</code></pre>
<p>As far as I understand, <code>np.linalg.eig()</code> returns a 2D array as the second output. It is called "eigenvectors", but these are not eigenvectors, since the first array contains the x coordinates of the vectors and the second array contains the y coordinates of the vectors. Therefore, we need to transpose it:</p>
<pre class="lang-py prettyprint-override"><code># Transpose the result to get eigenvectors we can calculate with.
eigenvectors = eigenvectors.T
</code></pre>
<p>Now we can get the lengths of the semi-axes as described in the theory section:</p>
<pre class="lang-py prettyprint-override"><code>a = np.sqrt(1/eigenvalues[0])
b = np.sqrt(1/eigenvalues[1])
</code></pre>
<p>We get the ellipse points:</p>
<pre class="lang-py prettyprint-override"><code>ellipse_points = a * np.cos(theta)[:, np.newaxis] * eigenvectors[:, 0] + b * np.sin(theta)[:, np.newaxis] * eigenvectors[:, 1]
</code></pre>
<p>If an angle for the rotation was given, we can rotate the ellipsoid using the rotation matrix:</p>
<pre class="lang-py prettyprint-override"><code>rotation_matrix = np.array([[np.cos(rotation_angle), -np.sin(rotation_angle)], [np.sin(rotation_angle), np.cos(rotation_angle)]])
rotated_points = np.dot(rotation_matrix, ellipse_points.T).T
</code></pre>
<p>We shift all these points to a new center:</p>
<pre class="lang-py prettyprint-override"><code>rotated_points += center
</code></pre>
<p>We plot the ellipse and the center:</p>
<pre class="lang-py prettyprint-override"><code>ax.plot(rotated_points[:, 0], rotated_points[:, 1], 'b-')
ax.scatter(center[0], center[1], c='b', s=100, marker='o', linewidths=1)
</code></pre>
<p>Finally, we need to plot the eigenvectors, which are scaled by the length of the corresponding semi-axes:</p>
<pre class="lang-py prettyprint-override"><code># Rotate eigenvectors
rotated_eigenvectors = np.dot(rotation_matrix, eigenvectors).T
# Scale the eigenvectors according to the axis
rotated_eigenvectors[0] = a * rotated_eigenvectors[0] / np.linalg.norm(rotated_eigenvectors[0])
rotated_eigenvectors[1] = b * rotated_eigenvectors[1] / np.linalg.norm(rotated_eigenvectors[1])
plot_vector(ax, center, rotated_eigenvectors[0] + center[0], **BASIS_VECTOR_PROPERTIES)
plot_vector(ax, center, rotated_eigenvectors[1] + center[1], **BASIS_VECTOR_PROPERTIES)
</code></pre>
<p>The entire code along with the plotting properties is shown below:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import collections as mc
from pylab import *

# Plotting area properties. Normally are in a separate module.
PLOTTING_BOUND = 2
PLOT_BOTTOM_LEFT_CORNER = np.array([-PLOTTING_BOUND, -PLOTTING_BOUND])
PLOT_TOP_RIGHT_CORNER = np.array([PLOTTING_BOUND, PLOTTING_BOUND])
BASIS_VECTOR_PROPERTIES = {
    'scale_units': 'xy',
    'angles': 'xy',
    'scale': 1,
    'color': 'b'
}


def plot_ellipsoid_in_work(ax, center, positive_definite_matrix, rotation_angle, want_to_plot_eigenvectors):
    # Generate points on the ellipse.
    theta = np.linspace(0, 2 * np.pi, 10000)
    eigenvalues, eigenvectors = np.linalg.eig(positive_definite_matrix)
    # Transpose the result to get eigenvectors in one array.
    eigenvectors = eigenvectors.T
    print("Eigenvalues: " + str(eigenvalues))
    print("Eigenvectors: " + str(eigenvectors))
    a = np.sqrt(1/eigenvalues[0])
    b = np.sqrt(1/eigenvalues[1])
    print("Semi-axis: " + str(a) + ", " + str(b))
    # Get ellipse points.
    ellipse_points = a * np.cos(theta)[:, np.newaxis] * eigenvectors[:, 0] + b * np.sin(theta)[:, np.newaxis] * eigenvectors[:, 1]
    # Rotate the points.
    rotation_matrix = np.array([[np.cos(rotation_angle), -np.sin(rotation_angle)], [np.sin(rotation_angle), np.cos(rotation_angle)]])
    rotated_points = np.dot(rotation_matrix, ellipse_points.T).T
    # Shift by center.
    rotated_points += center
    # Plot the ellipse
    ax.plot(rotated_points[:, 0], rotated_points[:, 1], 'b-')
    ax.scatter(center[0], center[1], c='b', s=100, marker='o', linewidths=1)
    # Show eigenvectors
    if want_to_plot_eigenvectors:
        # Rotate eigenvectors
        rotated_eigenvectors = np.dot(rotation_matrix, eigenvectors).T
        # Scale the eigenvectors according to the axis
        rotated_eigenvectors[0] = a * rotated_eigenvectors[0] / np.linalg.norm(rotated_eigenvectors[0])
        rotated_eigenvectors[1] = b * rotated_eigenvectors[1] / np.linalg.norm(rotated_eigenvectors[1])
        plot_vector(ax, center, rotated_eigenvectors[0] + center[0], **BASIS_VECTOR_PROPERTIES)
        plot_vector(ax, center, rotated_eigenvectors[1] + center[1], **BASIS_VECTOR_PROPERTIES)


def plot_vector(ax: plt.Axes, start_point: np.ndarray, vector: np.ndarray, **properties) -> None:
    """
    Plot a vector given its starting point and displacement vector.

    Args:
        ax (matplotlib.axes.Axes): The matplotlib axes on which to plot the vector.
        start_point (np.ndarray): A 2D numpy array representing the starting point of the vector.
        vector (np.ndarray): A 2D numpy array representing the displacement vector.
        **properties: Additional properties to be passed to the quiver function.

    Returns:
        None
    """
    displacement = vector - start_point
    ax.quiver(start_point[0], start_point[1], displacement[0], displacement[1], **properties)


def setup_plot(ax: plt.Axes, ldown: np.ndarray, rup: np.ndarray) -> None:
    """
    Setup the given Matplotlib axis for plotting.

    Args:
        ax (matplotlib.axes.Axes): The Matplotlib axis to be setup.
        ldown (np.ndarray): A 2D numpy array representing the lower left corner of the rectangular region.
        rup (np.ndarray): A 2D numpy array representing the upper right corner of the rectangular region.

    Returns:
        None
    """
    ax.set_aspect('equal')
    ax.grid(True, which='both')
    ax.set_xlim(ldown[0], rup[0])
    ax.set_ylim(ldown[1], rup[1])
    plt.axhline(0, color='black')
    plt.axvline(0, color='black')
    ax.set_xlabel('X')
    ax.set_ylabel('Y')
    mngr = plt.get_current_fig_manager()
    mngr.resize(960, 1080)


# Create plot
fig, ax = plt.subplots()
setup_plot(ax, PLOT_BOTTOM_LEFT_CORNER, PLOT_TOP_RIGHT_CORNER)
# Plot the ellipsoid
plot_ellipsoid_in_work(ax, np.array([1, 1]), np.array([[1, 0], [0, 4]]), np.pi/4, True)
# Show the plot
plt.show()
</code></pre>
<h1>Problem</h1>
<p><strong>UPD: center problem solved. The error was in the wrong indexing.</strong>
<em>Even though I went carefully through every step, I just do not understand what is happening if I vary the inputs. Putting the center at [-1,1], we get</em></p>
<p><a href="https://i.sstatic.net/OcTxx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OcTxx.png" alt="enter image description here" /></a></p>
<p><em>I am just shifting everything by the center in the lines</em></p>
<pre class="lang-py prettyprint-override"><code>rotated_points += center
...
plot_vector(ax, center, rotated_eigenvectors[0] + center[0], **BASIS_VECTOR_PROPERTIES)
plot_vector(ax, center, rotated_eigenvectors[1] + center[1], **BASIS_VECTOR_PROPERTIES)
</code></pre>
<p><em>Why it makes the eigenvectors to point to some strange direction?</em></p>
<p><strong>UPD2: This problem is not solved. I add a plot and description at the end of the post in bold</strong>
Putting <code>D = </code></p>
<pre><code>1 3
1 4
</code></pre>
<p>and calling</p>
<pre><code>plot_ellipsoid_in_work(ax,np.array([0,0]),np.array([[1,3],[1,4]]),0,True)
</code></pre>
<p>results in</p>
<p><a href="https://i.sstatic.net/dlses.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dlses.png" alt="enter image description here" /></a></p>
<p>Here, I am wondering why the ellipsoid gets rotated, even though the angle is zero. I have picked some points and checked if they satisfy the ellipsoid equation, and they do. So, I assume that the ellipsoid is plotted correctly. However, I am curious why the eigenvectors do not align with the axis of the ellipsoid and are not scaled correctly. The left one is too short, and the center one is too long.</p>
<p>I calculate the first eigenvalue by hand and get <code>λ1 = (5 + sqrt(21)) / 2</code>, which is approximately <code>4.7912878474779195</code>. I check the output in the terminal and see that this corresponds to one eigenvalue:</p>
<pre><code>Eigenvalues: [0.20871215 4.79128785]
</code></pre>
<p>Hence, the eigenvalue is computed correctly. Then I take <code>a = sqrt(1/λ1) = sqrt(2 / (5 + sqrt(21)))</code>. This must be the length of this axis.</p>
<p>Then I compute the eigenspace by hand and get <code>a * [1,(3 + sqrt(21))/6]</code>. I set <code>a=1</code> and plot the scaled vector <code>[1,(3 + sqrt(21))/6]</code>:</p>
<pre class="lang-py prettyprint-override"><code>a = np.sqrt(2 / (5 + np.sqrt(21)))
plot_vector(ax, np.array([0,0]), a * eigenvector / np.linalg.norm(eigenvector), color='r', scale=1, scale_units='xy')
</code></pre>
<p><a href="https://i.sstatic.net/1ejIM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1ejIM.png" alt="enter image description here" /></a></p>
<p>Now this new vector points somewhere into eternity, and I am giving up.</p>
<p>I am struggling to understand why my code is not working as expected. I started by trying to understand how the matrix <code>D</code> of the ellipsoid defines the ellipsoid. I computed the eigenvalues and they seemed to be correct. However, when I computed the eigenvectors, they were different from those returned by the <code>np.linalg.eig()</code> function. I am now questioning my understanding of math.</p>
<p>I wrote the code, and it seems to work fine, with the eigenvectors always aligned with the ellipsoid axes. However, when I used the matrix <code>[1 3; 1 4]</code> as described before, the axes were not aligned anymore. I am questioning whether I have made a mistake somewhere or if there is a problem with the library functions.</p>
<p>I am frustrated and confused, and my question is essentially "What is wrong with this code?". I am not sure if my understanding of math, code, or the Python library is flawed.</p>
<p>I hope you can help me figure out what I am missing, because at this point, I am just lost.</p>
<p><strong>UPD3: here is the problem in detail. If we use the same matrix <code>D</code>, we get the following output:</strong></p>
<p><a href="https://i.sstatic.net/IpZHZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IpZHZ.png" alt="enter image description here" /></a></p>
<p><strong>You can see the output of the program in blue. In red I added the manually calculated eigenvector and the axis of the ellipsoid. I double-checked the math part, and there should be no error there. However, we can see that both the program's and my manually calculated eigenvectors are off the axes. Mathematically they must lie on the same axis, and we can assume that both the blue and red eigenvectors are calculated correctly. And, mathematically, the eigenvectors must lie on the ellipse axes. The question is: why do they get plotted with the wrong length and angle? Is there an issue with the quiver function? What is the reason?</strong></p>
<p><strong>UPD4: Example of a symmetric matrix</strong></p>
<pre><code>12 3
3 15
</code></pre>
<p><strong>for which the plot is still not aligned:</strong></p>
<p><a href="https://i.sstatic.net/vFuKp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vFuKp.png" alt="enter image description here" /></a></p>
|
<python><numpy><matplotlib><math><ellipse>
|
2023-03-21 00:52:30
| 1
| 345
|
jupiter_jazz
|
75,796,489
| 11,370,582
|
Assign url to wsgi gunicorn server
|
<p>This may seem like a stupid question but I'm very new to web development so please bear with me.</p>
<p>I have created a Dash web app that I am deploying with a WSGI server, and I can't for the life of me find any information on how to actually assign a URL to the IP address.</p>
<p>For example, if I follow this tutorial - <a href="https://kevalnagda.github.io/flask-app-with-wsgi-and-nginx" rel="nofollow noreferrer">https://kevalnagda.github.io/flask-app-with-wsgi-and-nginx</a></p>
<p>I can get the app running on <code>http://localhost:80</code> easily, but I cannot seem to find how to assign it a unique URL, something like <code>www.mywebapp.com</code>. Is there a substantial piece of this I am missing?</p>
|
<python><flask><url><web-applications><plotly-dash>
|
2023-03-21 00:48:29
| 1
| 904
|
John Conor
|
75,796,382
| 850,781
|
How do I select a subset of a DataFrame based on a condition on a column
|
<p>Similar to <a href="https://stackoverflow.com/q/75796107/850781">How do I select a subset of a DataFrame based on one level of a MultiIndex</a>, let</p>
<pre><code>df = pd.DataFrame({"v":[x*x for x in range(12)]},
index=pd.MultiIndex.from_product([["a","b","c"],[1,2,3,4]]))
</code></pre>
<p>and suppose I want to select only rows with the <code>v</code> being within 25 from its smallest value for the given <strong>first</strong> level:</p>
<pre><code> v
a 1 0
2 1
3 4
4 9
b 1 16
2 25
3 36
c 1 64
2 81
</code></pre>
<p>This time I have no idea how to do that easily....</p>
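<p>The closest I have gotten is a groupby/transform sketch, though I am not sure it is the idiomatic way:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {"v": [x * x for x in range(12)]},
    index=pd.MultiIndex.from_product([["a", "b", "c"], [1, 2, 3, 4]]),
)

# Broadcast each first-level group's minimum back to its rows,
# then keep the rows within 25 of that minimum.
group_min = df.groupby(level=0)["v"].transform("min")
subset = df[df["v"] - group_min <= 25]
```
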
|
<python><pandas><dataframe><multi-index>
|
2023-03-21 00:19:33
| 1
| 60,468
|
sds
|
75,796,355
| 6,665,586
|
Error when testing private methods in Pytest
|
<p>I have a private method in a module, for example</p>
<p>wallets.py</p>
<pre><code>def __foo():
    return 0
</code></pre>
<p>When I try to test this method with this code</p>
<p>test_wallets.py</p>
<pre><code>from unittest import TestCase
from db.services import wallets


class GetTokenQuantity(TestCase):
    def test_get_token_quantity(self):
        token_quantity = wallets.__foo()
</code></pre>
<p>I get this error <code>AttributeError: module 'db.services.wallets' has no attribute '_GetTokenQuantity__foo'</code></p>
<p>How can I test this private method?</p>
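<p>From what I can tell, the error comes from Python's name mangling: any <code>__name</code> attribute access written inside a class body is rewritten to <code>_ClassName__name</code>, so <code>wallets.__foo</code> becomes <code>wallets._GetTokenQuantity__foo</code>. A <code>getattr</code> call with a string seems to bypass the mangling; here is a self-contained sketch with a toy stand-in for the module:</p>

```python
import types

# Toy stand-in for the real `wallets` module.
wallets = types.SimpleNamespace()
setattr(wallets, "__foo", lambda: 0)

class GetTokenQuantity:
    def test_get_token_quantity(self):
        # `wallets.__foo` written here would be mangled to
        # `wallets._GetTokenQuantity__foo`; a string lookup is not mangled.
        return getattr(wallets, "__foo")()

token_quantity = GetTokenQuantity().test_get_token_quantity()
print(token_quantity)  # 0
```
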
|
<python><pytest><python-unittest>
|
2023-03-21 00:11:52
| 1
| 1,011
|
Henrique Andrade
|
75,796,307
| 17,090,926
|
How do you use conditionals with Polars LazyFrames
|
<p>I tried to use LazyFrames, but it seems there are a lot of limitations.</p>
<p>For example, I have a list comprehension which runs a function on an expression <code>x</code>:</p>
<pre><code>final = [
    df.select(
        threshold_rank(λ=lambda_df.select(series + '_lambda').item(),
                       x=pl.col(series))
        .alias(series + '_rank')
    ).collect()
    for series in fields
]
</code></pre>
<p>i'm supposed to get back a lazyframe:</p>
<pre><code>def threshold_rank(λ, x):
    e = 2.718281828459045
    t = 22.108576145  # t is the threshold value
    k = 1  # k is a scaling factor that determines how rapidly the rankings increase above the threshold
    if x <= t:
        calc = 10 - ((10 - 1) / t) * (t - x)
    else:
        calc = 10 - (9 * e ** (-λ * (x - t)))
    return calc
</code></pre>
<p>I get:</p>
<pre><code>File "C:\Users\calcs.py", line 91, in threshold_rank
if x.le(t).is_in(True):
File "C:\Users\\temp\venv\lib\site-packages\polars\expr\expr.py", line 193, in __bool__
raise ValueError(
ValueError: Since Expr are lazy, the truthiness of an Expr is ambiguous. Hint: use '&' or '|' to logically combine Expr, not 'and'/'or', and use 'x.is_in([y,z])' instead of 'x in [y,z]' to check membership.
</code></pre>
<p>I've tried to do something like:</p>
<p><code>if x.le(t):</code>, but this is still no good.</p>
<p>So I am really confused about, first: when is it a good idea to use LazyFrames vs. not?</p>
<p>And second: how do I solve this problem?</p>
<p>Update: I tried the following:</p>
<pre><code>def threshold_rank(λ, x, series):
    e = 2.718281828459045
    t = 22.108576145  # t is the threshold value
    k = 1  # k is a scaling factor that determines how rapidly the rankings increase above the threshold
    calc = pl.when(
        x.le(t).any()
        .then(10 - ((10 - 1) / t) * (t - x))
        .otherwise(10 - (9 * e ** (-λ * (x - t))))
    )
    return calc
</code></pre>
<p>but I get <code> .then(10 - ((10 - 1) / t) * (t - x)) AttributeError: 'Expr' object has no attribute 'then'</code></p>
<p>Update 2:</p>
<p>OK, this should have been solved simply by noticing the parentheses:</p>
<p><code>when().then().otherwise()</code></p>
<p>but I will leave this open to anyone who has advice on improving the use of LazyFrames/expressions.</p>
|
<python><dataframe><python-polars>
|
2023-03-20 23:58:42
| 1
| 415
|
rnd om
|
75,796,164
| 12,787,236
|
How to add main function to pylint missing-function-docstring (C0116) as an exception?
|
<p>How can I avoid pylint to mark missing docstring in the <code>main</code> function, without removing any others from the default, like the <code>__init__</code> method?</p>
<p><a href="https://i.sstatic.net/fiJ8t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fiJ8t.png" alt="enter image description here" /></a></p>
<p>When I run</p>
<pre class="lang-bash prettyprint-override"><code>poetry run pylint src/* --no-docstring-rgx "main"
</code></pre>
<p>it overwrites the default value to avoid private methods that start with <code>_</code>. See the docs <a href="https://pylint.pycqa.org/en/v2.17.0/user_guide/configuration/all-options.html#no-docstring-rgx" rel="nofollow noreferrer">here</a></p>
<p>I don't want to override the default configuration, but add more rules to it. Can't figure out a way to achieve that.</p>
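<p>For the record, this is the <code>.pylintrc</code> fragment I experimented with; as far as I can tell you have to restate the default private-name pattern and OR the extra name onto it, so this is a workaround rather than a true "append" (I am not sure there is an additive option):</p>

```ini
[BASIC]
# "^_" reproduces the default exemption for private names;
# "main$" additionally exempts a function named exactly main.
no-docstring-rgx=^(_|main$)
```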
|
<python><configuration><pylint><linter>
|
2023-03-20 23:24:09
| 1
| 1,948
|
Henrique Branco
|
75,796,107
| 850,781
|
How do I select a subset of a DataFrame based on one level of a MultiIndex
|
<p>Let</p>
<pre><code>df = pd.DataFrame({"v":range(12)}, index=pd.MultiIndex.from_product([["a","b","c"],[1,2,3,4]]))
</code></pre>
<p>and suppose I want to select only rows with the first level being <code>a</code> or <code>c</code>:</p>
<pre><code> v
a 1 0
2 1
3 2
4 3
c 1 8
2 9
3 10
4 11
</code></pre>
<p>I can do:</p>
<pre><code>df[df.index.to_frame()[0].isin(("a","c"))]
</code></pre>
<p>but this creates an intermediate frame which seems like a waste.</p>
<p>Is there a better way?</p>
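<p>For comparison, two alternatives I am aware of that avoid building the intermediate frame (I am unsure which counts as "better"):</p>

```python
import pandas as pd

df = pd.DataFrame(
    {"v": range(12)},
    index=pd.MultiIndex.from_product([["a", "b", "c"], [1, 2, 3, 4]]),
)

# Option 1: label-based selection on the first level.
subset1 = df.loc[["a", "c"]]

# Option 2: boolean mask on the level values; no intermediate DataFrame.
subset2 = df[df.index.get_level_values(0).isin(["a", "c"])]

# Both give the same rows.
```
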
|
<python><pandas><dataframe><multi-index>
|
2023-03-20 23:11:22
| 1
| 60,468
|
sds
|
75,795,933
| 4,937,644
|
How do I extract text into a CSV file from multiple text files using Python?
|
<p>I have a folder full of subfolders with text (.txt) files that look like this:</p>
<pre><code>some random information here
ignore it
author: Lisa Smith
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sit amet leo quis risus viverra varius pretium sed nunc. Nullam vitae tempor nisl.
Quisque viverra interdum nibh, id malesuada magna scelerisque sit amet. Quisque sed arcu tempus, feugiat dolor at, convallis justo. Suspendisse euismod, metus non pretium pulvinar, odio eros rhoncus eros, eu scelerisque ex risus id mauris. Praesent id vulputate augue.
Aliquam erat volutpat. Pellentesque dignissim pharetra commodo. Vivamus risus leo, posuere eu odio eget, vestibulum auctor lorem. Aenean volutpat finibus lectus sed pretium. Lorem ipsum dolor sit amet, consectetur adipiscing elit. In ullamcorper mauris nec elit tempor, vitae finibus ante aliquam.
</code></pre>
<p>I want to create a CSV file that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">filename</th>
<th style="text-align: center;">author</th>
<th style="text-align: right;">text</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">/fullfilepathhere/</td>
<td style="text-align: center;">Lisa Smith</td>
<td style="text-align: right;">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla sit amet leo quis risus viverra varius pretium sed nunc. Nullam vitae tempor nisl. Quisque viverra interdum nibh, id malesuada magna scelerisque sit amet. Quisque sed arcu tempus, feugiat dolor at, convallis justo. Suspendisse euismod, metus non pretium pulvinar, odio eros rhoncus eros, eu scelerisque ex risus id mauris. Praesent id vulputate augue. Aliquam erat volutpat. Pellentesque dignissim pharetra commodo. Vivamus risus leo, posuere eu odio eget, vestibulum auctor lorem. Aenean volutpat finibus lectus sed pretium. Lorem ipsum dolor sit amet, consectetur adipiscing elit. In ullamcorper mauris nec elit tempor, vitae finibus ante aliquam.</td>
</tr>
</tbody>
</table>
</div>
<p>This is the code I currently have, which I cobbled together from <a href="https://stackoverflow.com/questions/69645296/extracting-specific-value-from-multiple-text-files-from-folder-using-python-and">previous question</a> and several other posts:</p>
<pre><code>from glob import glob
import os
import re
import csv
import nltk

path = '**/*.txt'


def extract_fields(fname):
    with open(fname) as f:
        author, txt = "", ""
        for line in f:
            line = line.strip()
            if line.startswith("author: "):
                author = line[8:]
                break
        next(f)  # discard the following blank line
        txt = f.read()
    return author, txt


rows = []
for fname in glob(path):
    author, txt = extract_fields(fname)
    rows.append([fname, author, txt])

with open("output.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "author", "txt"])
    writer.writerows(rows)
</code></pre>
<p>I am getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "print_text.py", line 28, in <module>
author, txt = extract_fields(fname)
File "print_text.py", line 19, in extract_fields
next(f) # discard the following blank line
StopIteration
</code></pre>
<p>Any guidance would be appreciated!</p>
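<p>As far as I can tell, the <code>StopIteration</code> happens when <code>next(f)</code> is called after the file iterator is already exhausted (e.g. the author line is the last line of the file, or there is no author line at all). Passing a default to <code>next</code> sidesteps it; here is a self-contained toy version of the helper on a made-up sample file:</p>

```python
import tempfile
from pathlib import Path

def extract_fields(fname):
    author, txt = "", ""
    with open(fname) as f:
        for line in f:
            line = line.strip()
            if line.startswith("author: "):
                author = line[8:]
                break
        next(f, None)  # default value avoids StopIteration at end of file
        txt = f.read()
    return author, txt

# Toy file mimicking the described layout.
tmp = Path(tempfile.mkdtemp()) / "sample.txt"
tmp.write_text("junk\nauthor: Lisa Smith\n\nLorem ipsum dolor.")
author, txt = extract_fields(tmp)
print(author)  # Lisa Smith
print(txt)     # Lorem ipsum dolor.
```
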
|
<python><csv>
|
2023-03-20 22:38:38
| 1
| 661
|
hy9fesh
|
75,795,728
| 6,817,610
|
pySpark - join on nullable column, conditional join
|
<p>I have a pyspark code that joins two DFs on 3 columns:</p>
<pre><code>final_df = (
    spark_company_df.join(
        spark_geo_df,
        (spark_company_df.column1 == spark_geo_df.column1) &
        (spark_company_df.column2 == spark_geo_df.column2) &
        (spark_company_df.column3 == spark_geo_df.column3),
        "left_outer",
    )
    .select(
        spark_geo_df.column1,
        spark_geo_df.column2,
        spark_geo_df.column3,
    )
)
</code></pre>
<p><strong>spark_company_df.column3</strong> can be null and the other two can't, so in those cases all 3 columns are null in the resulting DF. Is there an easy way to join on 2 columns if the 3rd is null and on all 3 columns if it's not null, i.e. some kind of conditional join? I know I could do it with an additional join, but maybe there is a better way.</p>
<p>Desired final DF:</p>
<pre><code>+-----------------+--------------------+-------------+
|column1 | column2 | column3|
+-----------------+--------------------+-------------+
| LA| CA| US|
| LA| CA| US|
| SF| CA| null|
+-----------------+--------------------+-------------+
</code></pre>
<p>but getting:</p>
<pre><code>+-----------------+--------------------+-------------+
|column1 | column2 | column3|
+-----------------+--------------------+-------------+
| LA| CA| US|
| LA| CA| US|
| null| null| null|
+-----------------+--------------------+-------------+
</code></pre>
|
<python><sql><apache-spark><pyspark><apache-spark-sql>
|
2023-03-20 22:03:13
| 1
| 953
|
Anton Kim
|
75,795,588
| 4,103,997
|
How to create new column that can combine adjacent but broken up "on" values into groups using pandas
|
<p>I have a dataframe with a column of 1s (corresponding to an "ON" signal) and 0s (corresponding to an "off" signal).</p>
<p>My data has some noise in it, such that the first "ON" signal has some 0s in the middle which makes it appear broken up. The same happens with the other "ON" signals. This makes it hard to count how many "ON" signals are in my data in total. It looks like there are more than there are!</p>
<p>Is there a way for me to group these, filling in the gaps? Ideally I would like to make a new column that indicates the current number of "ON" signals up to that time.
<a href="https://i.sstatic.net/QaDHd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QaDHd.png" alt="enter image description here" /></a></p>
<p>I have tried a rolling-mean-and-threshold type approach... any help would be appreciated.</p>
<p>This gives me the locations where the signal changes from "ON" to "OFF", letting me look up the time index when it occurs:</p>
<pre><code>df.loc[:,'Change'] = np.abs(df['ONOFF_Signal'].diff())
On_off_timestamps = df.query("Change == 1")['Time'].values
##Sample_data for two "ON" groupings.
df['Change'] = [0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,1,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,1,1,1,0,1,1,0,0,0,0,0,0]
#Output wanted:
[0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0]
##So that I can generate a count column "mask" up to each row of how many "ON" values have occurred:
[0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2]
</code></pre>
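<p>One rolling-based idea I have been sketching (here <code>max_gap</code> is a guess at the largest noise gap worth bridging): a sample counts as ON if there is a 1 within <code>max_gap</code> samples both before and after it, which fills the short dropouts without merging well-separated groups.</p>

```python
import pandas as pd

sig = pd.Series([0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,1,1,0,1,1,0,0,0,0,0,
                 0,0,0,0,0,0,0,0,0,0,1,0,1,1,1,1,1,0,1,1,0,0,0,0,0,0])

max_gap = 3  # bridge gaps of up to 3 zeros between ON samples

# 1 if there is a 1 within the last max_gap samples (including current)...
fwd = sig.rolling(max_gap + 1, min_periods=1).max()
# ...and within the next max_gap samples (rolling over the reversed series).
bwd = sig[::-1].rolling(max_gap + 1, min_periods=1).max()[::-1]
filled = ((fwd > 0) & (bwd > 0)).astype(int)

# Cumulative count of ON groups: a new group starts on each 0 -> 1 edge.
starts = (filled == 1) & (filled.shift(fill_value=0) == 0)
count = starts.cumsum()
```
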
|
<python><pandas><dataframe>
|
2023-03-20 21:41:27
| 2
| 488
|
Ciaran
|
75,795,474
| 12,139,954
|
Why did the bart-large-cnn summarization model giving funny output with different length settings?
|
<p>I have a piece of text of 4226 characters (316 words plus special characters).</p>
<p>I am trying different combinations of min_length and max_length to get a summary:</p>
<pre><code>print(summarizer(INPUT, max_length = 1000, min_length=500, do_sample=False))
</code></pre>
<p>The code is:</p>
<pre><code>from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
INPUT = """We see ChatGPT as an engine that will eventually power human interactions with computer systems in a familiar, natural, and intuitive way. As ChatGPT stated, large language models can be put to work as a communication engine in a variety of applications across a number of vertical markets. Glaringly absent in its answer is the use of ChatGPT in search engines. Microsoft, which is an investor in OpenAI, is integrating ChatGPT into its Bing search engine. The use of a large language model enables more complex and more natural searches and extract deeper meaning and better context from source material. This is ultimately expected to deliver more robust and useful results. Is AI coming for your job? Every wave of new and disruptive technology has incited fears of mass job losses due to automation, and we are already seeing those fears expressed relative to AI generally and ChatGPT specifically. The year 1896, when Henry Ford rolled out his first automobile, was probably not a good year for buggy whip makers. When IBM introduced its first mainframe, the System/360, in 1964, office workers feared replacement by mechanical brains that never made mistakes, never called in sick, and never took vacations. There are certainly historical cases of job displacement due to new technology adoption, and ChatGPT may unseat some office workers or customer service reps. However, we think AI tools broadly will end up as part of the solution in an economy that has more job openings than available workers. However, economic history shows that technology of any sort (i.e., manufacturing technology, communications technology, information technology) ultimately makes productive workers more productive and is net additive to employment and economic growth. How big is the opportunity? The broad AI hardware and services market was nearly USD 36bn in 2020, based on IDC and Bloomberg Intelligence data. We expect the market to grow by 20% CAGR to reach USD 90bn by 2025.
Given the relatively early monetization stage of conversational AI, we estimate that the segment accounted for 10% of the broader AI's addressable market in 2020, predominantly from enterprise and consumer subscriptions. That said, user adoption is rapidly rising. ChatGPT reached its first 1 million user milestone in a week, surpassing Instagram to become the quickest application to do so. Similarly, we see strong interest from enterprises to integrate conversational AI into their existing ecosystem. As a result, we believe conversational AI's share in the broader AI's addressable market can climb to 20% by 2025 (USD 18–20bn). Our estimate may prove to be conservative; they could be even higher if conversational AI improvements (in terms of computing power, machine learning, and deep learning capabilities), availability of talent, enterprise adoption, spending from governments, and incentives are stronger than expected. How to invest in AI? We see artificial intelligence as a horizontal technology that will have important use cases across a number of applications and industries. From a broader perspective, AI, along with big data and cybersecurity, forms what we call the ABCs of technology. We believe these three major foundational technologies are at inflection points and should see faster adoption over the next few years as enterprises and governments increase their focus and investments in these areas. Conversational AI is currently in its early stages of monetization and costs remain high as it is expensive to run. Instead of investing directly in such platforms, interested investors in the short term can consider semiconductor companies, and cloud-service providers that provides the infrastructure needed for generative AI to take off. In the medium to long term, companies can integrate generative AI to improve margins across industries and sectors, such as within healthcare and traditional manufacturing.
Outside of public equities, investors can also consider opportunities in private equity (PE). We believe the tech sector is currently undergoing a new innovation cycle after 12–18 months of muted activity, which provides interesting and new opportunities that PE can capture through early-stage investments."""
print(summarizer(INPUT, max_length=1000, min_length=500, do_sample=False))
</code></pre>
<hr />
<p>Questions I have are:</p>
<h2>Q1: What does the following warning message mean? <code>Your max_length is set to 1000, ...</code></h2>
<p>Your max_length is set to 1000, but your input_length is only 856. You might consider decreasing max_length manually, e.g. summarizer(‘…’, max_length=428)</p>
<h2>Q2: After the above message, it publishes a summary of 2211 characters in total. How did it get that?</h2>
<h2>Q3: Of the above 2211 characters, the first 933 are valid content from the text, but then it publishes text like</h2>
<blockquote>
<p>For confidential support call the Samaritans on 08457 90 90 90 or
visit a local Samaritans branch, see <a href="http://www.samaritans.org" rel="nofollow noreferrer">www.samaritans.org</a> for details.
For support …</p>
</blockquote>
<h2>Q4: How do min_length and max_length actually work (they do not seem to follow the restrictions given to them)?</h2>
<h2>Q5: What is the maximum input that I can actually give to this summarizer?</h2>
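<p>For what it's worth on Q1/Q2: <code>min_length</code> and <code>max_length</code> are measured in tokens, not characters, which is why a 1000-token cap and a 2211-character output can coexist. A crude, model-free sketch of the distinction (whitespace splitting is only a rough stand-in for the real subword tokenizer):</p>

```python
text = "Conversational AI reached 1 million users in a week."

# Characters and tokens are different units; generation limits such as
# min_length and max_length count tokens, not characters.
n_chars = len(text)
n_tokens_approx = len(text.split())  # crude proxy for subword tokens
print(n_chars, n_tokens_approx)
```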
|
<python><nlp><huggingface-transformers><summarization><large-language-model>
|
2023-03-20 21:26:38
| 1
| 381
|
Ani
|
75,795,327
| 5,942,100
|
Tricky remove, replace and transform transformation using Pandas
|
<p>I wish to remove the first row, and then promote the value in the second row to the column header, replacing the original value.</p>
<p><strong>Data</strong></p>
<pre><code> Unnamed: 0_Date val1 val2
0
1 state
2 AA1 63 65
3 AA2 0 0
</code></pre>
<p><strong>Desired</strong></p>
<pre><code> state val1 val2
2 AA1 63 65
3 AA2 0 0
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>df.columns = (df.columns + '_' + df.iloc[0])
df.iloc[0] = ''
df = out.reset_index(drop=True)
</code></pre>
<p>I tried the script above; however, it is not removing the specified row, nor is it replacing the original column value. Any suggestion is appreciated.</p>
<p><a href="https://i.sstatic.net/GS6R9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GS6R9.png" alt="enter image description here" /></a></p>
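<p>For reference, one possible approach (a sketch built only on the small sample shown, not the full file) is to rename the first column using the value in row 1 and then slice away the first two rows:</p>

```python
import pandas as pd

# Rebuild the sample: a junk row, a row holding the real header, then data
df = pd.DataFrame({"Unnamed: 0_Date": ["", "state", "AA1", "AA2"],
                   "val1": ["", "", 63, 0],
                   "val2": ["", "", 65, 0]})

# Promote the value in the second row to the first column's header,
# then drop the first two rows
df = df.rename(columns={"Unnamed: 0_Date": df.iloc[1, 0]}).iloc[2:]
print(df)
```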
|
<python><pandas><numpy>
|
2023-03-20 21:08:55
| 1
| 4,428
|
Lynn
|
75,795,302
| 11,572,712
|
Error in the creation of a class instance
|
<p>I have created the class <code>Animal</code>:</p>
<pre class="lang-py prettyprint-override"><code>class Animal:
def __int__(self, height, weight):
self.height = height
self.weight = weight
</code></pre>
<p>Then I want to create the instance <code>animal_1</code> of class <code>Animal</code> like so:</p>
<pre><code>animal_1 = Animal(height=120, weight=80)
</code></pre>
<p>And if I try to execute the code I get this error:</p>
<pre><code>TypeError: Animal() takes no arguments
</code></pre>
<p>What did I do wrong?</p>
|
<python><class><error-handling>
|
2023-03-20 21:05:47
| 1
| 1,508
|
Tobitor
|
75,795,170
| 11,294,747
|
How to design Codeforces interactive grader?
|
<p>I came across one interactive problem in Codeforces. I want to know how the grader or interactor (as per Codeforces' terms) might be designed.</p>
<p>Let's say I want to create a grader for this problem: <a href="https://codeforces.com/gym/101021/problem/1" rel="noreferrer">1. Guess the Number</a>.</p>
<p>My solution to the above problem is stored in <code>1_Guess_the_Number.py</code> file. It is a correct solution and is accepted by the CF grader.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
l, r = 1, 1000000
while l != r:
mid = (l + r + 1) // 2
print(mid, flush=True)
response = input()
if response == "<":
r = mid - 1
else:
l = mid
print("!", l)
</code></pre>
<p>I created the following <code>grader.py</code> file:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import sys
INP = 12
def interactor(n):
if n > INP:
return "<"
return ">="
while True:
guess = input()
if guess.startswith("!"):
print(int(guess.split()[1]) == INP, flush=True)
sys.exit()
print(interactor(int(guess)), flush=True)
</code></pre>
<p>So, when I run <code>./1_Guess_the_Number.py | ./grader.py</code>, I expect it to work correctly. But in the terminal, the above command hangs indefinitely with only the following output:</p>
<pre class="lang-bash prettyprint-override"><code><
</code></pre>
<p>I don't know what is going wrong. Also, it would be very helpful if someone could suggest another way to do this.</p>
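<p>Note that a single shell pipe is one-directional: the solution's stdout reaches the grader, but the grader's replies never reach the solution's stdin, so the solution blocks forever on <code>input()</code>. A sketch of a driver that wires both directions with <code>subprocess</code> (the secret value and the inlined solver are illustrative):</p>

```python
import subprocess
import sys

SECRET = 12  # hypothetical hidden number the interactor knows

# Inline copy of the solver, run as a child process with BOTH of its
# standard streams connected to us; a shell "|" only gives one direction.
SOLVER = (
    "l, r = 1, 1000000\n"
    "while l != r:\n"
    "    mid = (l + r + 1) // 2\n"
    "    print(mid, flush=True)\n"
    "    if input() == '<':\n"
    "        r = mid - 1\n"
    "    else:\n"
    "        l = mid\n"
    "print('!', l, flush=True)\n"
)

solver = subprocess.Popen([sys.executable, "-c", SOLVER],
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                          text=True)

while True:
    line = solver.stdout.readline().strip()
    if line.startswith("!"):
        guessed = int(line.split()[1])
        break
    # Answer each query and flush so the solver's input() unblocks
    solver.stdin.write("<\n" if int(line) > SECRET else ">=\n")
    solver.stdin.flush()

solver.stdin.close()
solver.wait()
print(guessed == SECRET)
```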
|
<python>
|
2023-03-20 20:50:46
| 1
| 383
|
sgalpha01
|
75,794,919
| 3,197,684
|
How to segment and transcribe an audio from a video into timestamped segments?
|
<p>I want to segment a video transcript into chapters based on the content of each line of speech. The transcript would be used to generate a series of start and end timestamps for each chapter. This is similar to how YouTube now "auto-chapters" videos.</p>
<p>Example .srt transcript:</p>
<pre><code>...
70
00:02:53,640 --> 00:02:54,760
All right, coming in at number five,
71
00:02:54,760 --> 00:02:57,640
we have another habit that saves me around 15 minutes a day
...
</code></pre>
<p>I have had minimal luck doing this with ChatGPT as it finds it difficult to both segment by topic and recollect start and end timestamps accurately. I am now exploring whether there are other options for doing this.</p>
<p>I know topic modeling based on time series is possible with some python libraries. I have also read about text tiling as another option. <strong>What options are there for achieving an outcome like this?</strong></p>
<p>Note: The format above (.srt) is not necessary. It's just the idea that the input is a list of text-content with start and end timestamps.</p>
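<p>Whatever segmentation method ends up being used, the first step is usually reducing the transcript to a list of timestamped lines. A minimal .srt parsing sketch (assuming well-formed blocks separated by blank lines):</p>

```python
import re

def parse_srt(text):
    """Return (start, end, text) tuples from .srt content."""
    segments = []
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed blocks
        start, end = lines[1].split(" --> ")
        segments.append((start.strip(), end.strip(), " ".join(lines[2:])))
    return segments

sample = """70
00:02:53,640 --> 00:02:54,760
All right, coming in at number five,

71
00:02:54,760 --> 00:02:57,640
we have another habit that saves me around 15 minutes a day"""

segments = parse_srt(sample)
print(segments)
```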
|
<python><machine-learning><nlp><openai-api><automatic-speech-recognition>
|
2023-03-20 20:14:46
| 1
| 741
|
nonsequiter
|
75,794,830
| 5,805,610
|
I want to connect to an api and extract the data
|
<p>I'm doing my own Python analytics project on compiling and analysing data from this open-data source:</p>
<p>(<a href="https://data.gov.ie/dataset?q=homeless&api=true&sort=score+desc%2C+metadata_created+desc&theme=Housing" rel="nofollow noreferrer">https://data.gov.ie/dataset?q=homeless&api=true&sort=score+desc%2C+metadata_created+desc&theme=Housing</a>)</p>
<p>I've never worked with APIs or JSON before. All the info in Google results or YouTube videos always involves an API key, but I don't know how to get one.</p>
<p>so far I've done this:</p>
<pre><code>import requests
import pandas as pd
import time
API_KEY = requests.get('https://data.gov.ie/dataset?q=homelessness&api=true&theme=Housing&sort=metadata_modified+desc')
API_KEY.status_code
# this returns 200 which from google means connected status correct
</code></pre>
<p>Then I write this:</p>
<pre><code>#make api call
response = API_KEY.json()
</code></pre>
<p>and get back errors:</p>
<pre><code>JSONDecodeError Traceback (most recent call last)
~/opt/anaconda3/lib/python3.9/site-packages/requests/models.py in json(self, **kwargs)
970 try:
--> 971 return complexjson.loads(self.text, **kwargs)
972 except JSONDecodeError as e:
~/opt/anaconda3/lib/python3.9/json/__init__.py in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
...
...
...
</code></pre>
<p>About the project:
I want to connect to the data on homelessness in Ireland (I would like the data to be easily updatable for future automatic updates, but that's for later on; no idea how to do it yet).
Ireland is facing a serious crisis with homelessness and housing in general, and I would like to see how it has worsened over the years and visualise it, perhaps getting some insight into the scale of the issue.
After I do some work on the project I will import the data into Tableau to do some more visualisation, then publish my report to my LinkedIn.
Perhaps you can advise me on whether I should be trying to work with CSVs or JSON?</p>
<p>Any help as always would be sincerely appreciated.</p>
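<p>For reference, the <code>JSONDecodeError</code> above is what happens when the response body is an HTML page rather than JSON; the dataset search URL used here returns a web page, and many open-data portals expose separate machine-readable API endpoints (worth checking data.gov.ie's documentation). A small sketch of guarding the decode step:</p>

```python
import json

def safe_json(text):
    """Return parsed JSON, or None when the body is not JSON (e.g. HTML)."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None

print(safe_json('{"count": 3}'))
print(safe_json("<!DOCTYPE html><html>...</html>"))
```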
|
<python><pandas><data-analysis>
|
2023-03-20 20:03:59
| 1
| 334
|
Nabeel
|
75,794,492
| 21,286,804
|
mypy does not recognize a method of a class
|
<pre><code>from abc import ABC, abstractmethod
from typing import List
class AirConditioner:
"""Class that represents an air conditioner"""
def __init__(self, identify: str, state: bool, temperature: int):
self.identify = identify
self._state = state
self._temperature = temperature
def turn_on(self) -> None:
self._state = True
def turn_off(self) -> None:
self._state = False
def set_temperature(self, temperature: int) -> None:
self._temperature = temperature
def get_state(self) -> bool:
return self._state
def get_temperature(self) -> int:
return self._temperature
class ICommand(ABC):
"""Interface that represents a command"""
@abstractmethod
def execute(self) -> None:
pass
@abstractmethod
def undo(self) -> None:
pass
class TurnOnAirConditioner(ICommand):
"""Class that represents a command to turn on an air conditioner"""
def __init__(self, air_conditioner: AirConditioner):
self._air_conditioner = air_conditioner
def execute(self) -> None:
self._air_conditioner.turn_on()
def undo(self) -> None:
self._air_conditioner.turn_off()
class ChangeTemperatureAirConditioner(ICommand):
"""Class that represents a command to change the temperature of an air conditioner"""
def __init__(self, air_conditioner: AirConditioner):
self._air_conditioner = air_conditioner
self._temperature = air_conditioner.get_temperature()
self._temperature_anterior = self._temperature
def set_temperature(self, temperature: int) -> None:
self._temperature_anterior = self._temperature
self._temperature = temperature
def execute(self) -> None:
self._air_conditioner.set_temperature(self._temperature)
def undo(self) -> None:
self._air_conditioner.set_temperature(self._temperature_anterior)
class Aplicativo:
"""Class that represents an application that uses the command pattern to control an air conditioner"""
def __init__(self) -> None:
self._comandos: List[ICommand] = []
def set_comando(self, comando_app: ICommand) -> int:
self._comandos.append(comando_app)
return len(self._comandos) - 1
def get_command(self, comando_id: int) -> ICommand:
return self._comandos[comando_id]
def pressing_button(self, comando_id: int) -> None:
self._comandos[comando_id].execute()
if __name__ == "__main__":
app = Aplicativo()
my_air_conditioner = AirConditioner("Air Conditioner", False, 26)
change_temperature_air = ChangeTemperatureAirConditioner(my_air_conditioner)
turn_on_ar = TurnOnAirConditioner(my_air_conditioner)
ID_TURN_AIR_ON = app.set_comando(turn_on_ar)
ID_CHANGE_AIR_TEMPERATURE = app.set_comando(change_temperature_air)
app.pressing_button(ID_TURN_AIR_ON)
comando = app.get_command(ID_CHANGE_AIR_TEMPERATURE)
comando.set_temperature(25)
</code></pre>
<p>When I run the code above, mypy brings me the following alert:</p>
<p>error: "ICommand" has no attribute "set_temperature" [attr-defined]</p>
<p>How do I do it when I need to call a method, but not every class that implements the ICommand interface has this method?</p>
<p>I tried commenting lines with <code># type: ignore</code>, but I would like to know a better way to handle this problem.</p>
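<p>One common way to satisfy mypy here (a sketch; declaring the method on the interface, or narrowing the return type of <code>get_command</code>, are alternatives) is an <code>isinstance</code> check, which mypy uses to narrow the type:</p>

```python
from abc import ABC, abstractmethod

class ICommand(ABC):
    @abstractmethod
    def execute(self) -> None: ...

class ChangeTemperature(ICommand):
    def __init__(self) -> None:
        self.temperature = 0

    def set_temperature(self, temperature: int) -> None:
        self.temperature = temperature

    def execute(self) -> None:
        pass

def press(cmd: ICommand) -> None:
    # Inside the isinstance branch, mypy narrows cmd to ChangeTemperature,
    # which does define set_temperature, so attr-defined goes away.
    if isinstance(cmd, ChangeTemperature):
        cmd.set_temperature(25)
    cmd.execute()

c = ChangeTemperature()
press(c)
print(c.temperature)
```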
|
<python><python-3.x><mypy><command-pattern>
|
2023-03-20 19:24:33
| 2
| 427
|
Magaren
|
75,794,429
| 518,004
|
-bash: python: command not found on mac
|
<p>I've tried a number of approaches:</p>
<pre><code>brew install python
==> Downloading https://formulae.brew.sh/api/formula.jws.json
######################################################################## 100.0%
==> Downloading https://formulae.brew.sh/api/cask.jws.json
######################################################################## 100.0%
Warning: python@3.11 3.11.2_1 is already installed and up-to-date.
To reinstall 3.11.2_1, run:
brew reinstall python@3.11
williamm-5541:~ williamm$ python
-bash: python: command not found
</code></pre>
<p>and</p>
<pre><code>williamm-5541:git williamm$ pyenv version
3.11.2 (set by /Users/williamm/.pyenv/version)
williamm-5541:git williamm$ python
-bash: python: command not found
</code></pre>
<p>The command <code>python3</code> works; however, <code>python</code> does not, which is blocking me from installing:</p>
<p><a href="https://github.com/aws/aws-elastic-beanstalk-cli-setup" rel="nofollow noreferrer">https://github.com/aws/aws-elastic-beanstalk-cli-setup</a></p>
|
<python><macos>
|
2023-03-20 19:17:36
| 1
| 8,739
|
Will
|
75,794,357
| 18,086,775
|
Unable to save excel workbook on mac using xlwings
|
<p>I'm running the same xlwings code in a Jupyter notebook on both macOS and Windows to save a new Excel workbook in a folder.</p>
<pre><code>import xlwings as xw
import os
wb = xw.Book()
wb.save(os.path.join(os.getcwd(),r'pro/fi/g.xlsx'))
wb.close()
</code></pre>
<p>It runs fine on Windows but gives the following error on macOS:</p>
<pre><code>CommandError: Command failed:
OSERROR: -50
MESSAGE: Parameter error.
COMMAND: app (pid=71190). workbooks [ 'Book2'].save_workbook_as (filename='Macintosh HD: Users: mohit: Desktop: pro:fi:g.xlsx', overwrite=True, file_format=k.Excel_XML_file_format, timeout=-1, password=None)
</code></pre>
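<p>On macOS this kind of failure is often path-related: the colon-separated path in the message suggests the relative path is not resolving as intended, and saving fails if the parent folders do not exist. A sketch of preparing an absolute path before <code>wb.save()</code> (the folder names are from the question; the fix itself is an assumption to verify):</p>

```python
from pathlib import Path

# Build an absolute target and create the parent folders before saving
target = Path.cwd() / "pro" / "fi" / "g.xlsx"
target.parent.mkdir(parents=True, exist_ok=True)
print(target.is_absolute(), target.parent.exists())
```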
|
<python><xlwings>
|
2023-03-20 19:09:08
| 1
| 379
|
M J
|
75,793,858
| 2,991,243
|
Regular expression in python is not returning the desired result
|
<p>Suppose that I have a string consisting of different sentences. I want to remove the part that begins with <code>It was formerly known as </code> up to the end of that sentence. The removal should stop when it reaches <code>. Withey Limited</code>; if that is not present, it should stop at <code>. It</code>.</p>
<pre><code>import re
txt = 'It was formerly known as A. Withey & Black Limited. Withey Limited delivers many things. It has a facility in the UK, including many branches.'
out = re.sub("\s*It was formerly known as [\w\d\s@_!#$%^&*()<>?/\|}{~:\.]+" + "(?=(. Withey Limited |. It))","", txt)
</code></pre>
<p>This code returns <code>. It has a facility in the UK, including many branches.</code>, which is not my expected outcome. My expected outcome is as follows:</p>
<pre><code>Withey Limited delivers many things. It has a facility in the UK, including many branches.
</code></pre>
<p>How can I adjust my regular expression to reach this outcome? And why is it behaving like this?</p>
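<p>One observation that may help: the behaviour comes from the greedy <code>+</code> — the character class keeps consuming (it matches <code>.</code> and spaces too) and backtracks only to the <em>last</em> position where the lookahead succeeds, which here is just before the final <code>. It</code>. A sketch of a lazy rewrite that stops at the first boundary instead (one possible pattern, not the only one):</p>

```python
import re

txt = ('It was formerly known as A. Withey & Black Limited. '
       'Withey Limited delivers many things. It has a facility in the UK, '
       'including many branches.')

# .+? is lazy: it grows one character at a time and stops at the FIRST
# position where the lookahead matches, whereas a greedy + consumes as
# much as possible and backtracks only to the LAST viable position.
out = re.sub(r"\s*It was formerly known as .+?"
             r"(?=\.\s(?:Withey Limited|It))\.\s*",
             "", txt)
print(out)
```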
|
<python><regex>
|
2023-03-20 18:08:59
| 1
| 3,823
|
Eghbal
|
75,793,853
| 10,620,003
|
np.concatenate(np.stack the different arrays with a general solution
|
<p>I have 6 arrays of the same size. I have a general function and, based on a value, I should take 2, 3, 5, or 6 of these arrays and concatenate them in the following way. Could you please help me with a general solution for this? Here I only provide a simple example; in my real data I have to build different arrays, and I have more than 20 that I need to use.</p>
<pre><code>import numpy as np
a = np.random.randint(3, size = (2,4))
b = np.random.randint(3, size = (2,4))
c = np.random.randint(3, size = (2,4))
d = np.random.randint(3, size = (2,4))
e = np.random.randint(3, size = (2,4))
f = np.random.randint(3, size = (2,4))
value=6
out = np.concatenate(np.stack((a, b, c, d, e, f), axis=1))
value=3
out = np.concatenate(np.stack((a, b, c), axis=1))
value=2
out = np.concatenate(np.stack((a, b), axis=1))
</code></pre>
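<p>A general version can keep the arrays in a list and slice by the value, so the same code covers 2, 3, 6, or 20+ arrays (a sketch using the toy shapes above, seeded for reproducibility):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
arrays = [rng.integers(3, size=(2, 4)) for _ in range(6)]

def combine(arrays, value):
    """Stack the first `value` arrays along axis 1, then concatenate."""
    return np.concatenate(np.stack(arrays[:value], axis=1))

print(combine(arrays, 3).shape)
```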
|
<python><numpy>
|
2023-03-20 18:08:08
| 1
| 730
|
Sadcow
|
75,793,794
| 4,802,259
|
adding __getitem__ accessor to Python class method
|
<p>I'm attempting to add an item getter (<code>__getitem__</code>, to provide the <code>[]</code> syntax) to a class method so that I can use some unique-ish syntax to provide types to functions outside the normal parentheses, like the following. The syntax on the last line (of this first snippet) is really the goal for this whole endeavor.</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
@typedmethod
def compute_typed_value(self, value, *args, **kwargs):
print(self, args, kwargs)
result = TypedMethod.requested_type(kwargs)(value)
self.my_other_method()
return result
def my_other_method(self):
print('Doing some other things!')
return 3
a = MyClass()
a.compute_typed_value[int]('12345') # returns int value 12345
</code></pre>
<p>Additionally, I'd like to retain the intuitive behavior that a defined function can be called like a function, potentially with a default value for the type, like so:</p>
<pre class="lang-py prettyprint-override"><code>a = MyClass()
a.compute_typed_value('12345')
# should return whatever the default type is, with the value of '12345',
# or allow some other default behavior
</code></pre>
<p>In a broader context, this would be implemented as a piece of an API adapter that implements a generic request processor, and I'd like the data to come out of the API adapter in a specific format. So the way that this might look in actual use could be something like the following:</p>
<pre class="lang-py prettyprint-override"><code>
@dataclass
class MyAPIData:
property_a: int = 0
property_b: int = 0
class MyAPIAdapter:
_session
def __init__(self, token):
self._init_session(token)
@typedmethod
def request_json(self, url, **kwargs):
datatype = TypedMethod.requested_type(kwargs)
response_data = self._session.get(url).json()
if datatype:
response_data = datatype(**response_data)
return response_data
def fetch_myapidata(self, search):
return self.request_json[MyAPIData](f"/myapi?q={search}")
</code></pre>
<p>I'm attempting to achieve this kind of behavior with a decorator that I can throw onto any function that I want to enable this behavior. Here is my current full implementation:</p>
<pre class="lang-py prettyprint-override"><code>
from functools import partial
class TypedMethod:
_REQUESTED_TYPE_ATTR = '__requested_type'
def __init__(self, method):
self._method = method
print(method)
self.__call__ = method.__call__
def __getitem__(self, specified_type, *args, **kwargs):
print(f'getting typed value: {specified_type}')
if not isinstance(specified_type, type):
raise TypeError("Only Type Accessors are supported - must be an instance of `type`")
return partial(self.__call__, **{self.__class__._REQUESTED_TYPE_ATTR: specified_type})
def __call__(self, *args, **kwargs):
print(args, kwargs)
return self._method(self, *args, **kwargs)
@classmethod
def requested_type(cls, foo_kwargs):
return foo_kwargs[cls._REQUESTED_TYPE_ATTR] if cls._REQUESTED_TYPE_ATTR in foo_kwargs else None
def typedmethod(foo):
print(f'wrapping {foo.__name__} with a Typed Method: {foo}')
_typed_method = TypedMethod(foo)
def wrapper(self, *args, **kwargs):
print('WRAPPER', self, args, kwargs)
return _typed_method(self, *args, **kwargs)
_typed_method.__call__ = wrapper
return _typed_method
class MyClass:
@typedmethod
def compute_typed_value(self, value, *args, **kwargs):
print(self, args, kwargs)
result = TypedMethod.requested_type(kwargs)(value)
print(result)
self.my_other_method()
return result
def my_other_method(self):
print('Doing some other things!')
return 3
a = MyClass()
a.compute_typed_value[int]('12345')
</code></pre>
<p>If you run this code, it will fail stating that 'TypedMethod' object has no attribute 'my_other_method'. Further inspection reveals that the first line of <code>compute_typed_value</code> is not printing what one would intuitively expect from the code:</p>
<blockquote>
<p><code><__main__.TypedMethod object at 0x10754e790> () {'__requested_type': <class 'int'>}</code></p>
</blockquote>
<p>Specifically, the first item printed, which is a <code>TypedMethod</code> instead of a <code>MyClass</code> instance</p>
<p>Basically, the idea is use the <code>__getitem__</code> callout to generate a <code>functools.partial</code> so that the subsequent call to the resulting function contains the <code>__getitem__</code> key in a known "magic" <code>kwargs</code> value, which should hypothetically work, except that now the <code>self</code> reference that is available to <code>MyClass.compute_typed_value</code> is actually a reference to the <code>TypedMethod</code> instance generated by the wrapper instead of the expected <code>MyClass</code> instance. I've attempted a number of things to get the <code>MyClass</code> instance passed as <code>self</code>, but since it's implemented as a decorator, the instance isn't available at the time of decoration, meaning that somehow it needs to be a bound method at the time of function execution, I think.</p>
<hr />
<p>I know I could just pass this value in as like the first positional argument, but I <em>want</em> it to work with the square bracket annotation because I think it'd be cool and more readable. This is mostly a learning exercise to understand more of Python's inner workings, so the answer could ultimately be "no".</p>
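<p>For what it's worth, the missing piece is the descriptor protocol: plain functions become bound methods only because they define <code>__get__</code>, and the wrapper above never does, so <code>self</code> ends up being the <code>TypedMethod</code>. A simplified sketch (not the full implementation; the keyword name is illustrative) that rebinds on attribute access:</p>

```python
from functools import partial

class typedmethod:
    """Sketch: a descriptor so obj.method[SomeType](...) binds self."""

    def __init__(self, func, instance=None):
        self._func = func
        self._instance = instance

    def __get__(self, instance, owner=None):
        # Plain functions become bound methods via __get__; defining it
        # here lets us remember which instance we were accessed through.
        return typedmethod(self._func, instance)

    def __getitem__(self, requested_type):
        if not isinstance(requested_type, type):
            raise TypeError("only type accessors are supported")
        return partial(self._func, self._instance,
                       requested_type=requested_type)

    def __call__(self, *args, **kwargs):
        return self._func(self._instance, *args, **kwargs)

class MyClass:
    @typedmethod
    def compute_typed_value(self, value, requested_type=str):
        return requested_type(value)

a = MyClass()
print(a.compute_typed_value[int]("12345"))  # bracket syntax, bound self
print(a.compute_typed_value("12345"))       # plain call still works
```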
|
<python><python-decorators><python-typing>
|
2023-03-20 18:03:00
| 4
| 2,864
|
David Culbreth
|
75,793,683
| 6,039,697
|
Algorithm to render a binary tree datastructure
|
<p>I have some data organized as a binary tree where each node has exactly 0 or 2 children.</p>
<p>I'm looking for an algorithm that allows me to render this tree to an image (PNG preferred).</p>
<p>I want to render the nodes as a box which contains some multiline text representing the data represented by the node.</p>
<p>All nodes should have the same bounding box and the rendering should be aligned like <a href="https://www.researchgate.net/profile/Jose-Amaral-6/publication/221496921/figure/fig1/AS:305660496498688@1449886549536/A-binary-tree-with-15-nodes-The-node-number-indicates-the-order-in-which-the-node-was.png" rel="nofollow noreferrer">this</a>.</p>
<p>I would appreciate a Python solution but I'm not restricted to it.</p>
<p>I tried this solution with <code>matplotlib</code> (generated by ChatGPT) but could not adjust the gap between the nodes so that they don't overlap each other.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
BOX_WIDTH = 2
BOX_HEIGHT = 2
class Node:
def __init__(self, val=0, left=None, right=None):
self.val = val
self.left = left
self.right = right
def get_content(self):
box_props = dict(boxstyle='square', facecolor='white', edgecolor='black')
value = self.val
lines = ['Line 1asdasdasd', 'Line 2', 'Line 3']
text = '\n'.join(lines)
content = dict(value=value, text=text, box_props=box_props)
return content
def plot_tree(node, x, y, parent_x=None, parent_y=None, x_offset=1., y_offset=1.):
# Get node content
if node is None:
return
content = node.get_content()
# Draw box containing lines of text
r = plt.text(x, y, content['text'], bbox=content['box_props'], ha='center', va='center')
# Plot edge
if parent_x is not None and parent_y is not None:
plt.plot([parent_x, x], [parent_y, y], linewidth=1, color='black')
# Plot left and right subtree with adjusted coordinates
plot_tree(node.left, x - x_offset, y - y_offset, x, y, x_offset / 2, y_offset)
plot_tree(node.right, x + x_offset, y - y_offset, x, y, x_offset / 2, y_offset)
root = Node(1)
root.left = Node(2)
root.right = Node(3)
root.left.left = Node(4)
root.left.right = Node(5)
root.right.left = Node(6)
root.right.right = Node(7)
root.right.right.left = Node(2)
root.right.right.right = Node(3)
root.right.right.left.left = Node(4)
root.right.right.left.right = Node(5)
root.right.right.right.left = Node(6)
root.right.right.right.right = Node(7)
plt.figure()
plot_tree(root, 0, 0)
# plt.axis('off')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/d7mEV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d7mEV.png" alt="Current solution with nodes overlapping each other" /></a></p>
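<p>The overlap happens because halving <code>x_offset</code> shrinks the gap faster than the subtrees shrink. A common alternative (a sketch independent of matplotlib, using a minimal Node) assigns each node its inorder index as the x coordinate and its depth as y, which guarantees distinct x positions:</p>

```python
class Node:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def assign_positions(node, depth=0, positions=None, counter=None):
    """Inorder-index x / depth y layout: no two nodes share an x."""
    if positions is None:
        positions, counter = {}, [0]
    if node is None:
        return positions
    assign_positions(node.left, depth + 1, positions, counter)
    positions[node] = (counter[0], -depth)  # x = inorder index, y = -depth
    counter[0] += 1
    assign_positions(node.right, depth + 1, positions, counter)
    return positions

root = Node(1, Node(2, Node(4), Node(5)), Node(3, Node(6), Node(7)))
pos = assign_positions(root)
xs = [x for x, _ in pos.values()]
print(sorted(xs))
```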
|
<python><binary-tree><render>
|
2023-03-20 17:53:04
| 1
| 1,184
|
Michael Pacheco
|
75,793,632
| 20,051,041
|
How to convert a string list to (object) list in Pandas?
|
<p>I have the dictionary below, in which the values are strings:</p>
<pre><code>data = {'object_1':"['abc']",
'object_2':"['def']",
"object_3": "['xyz']",
"object_4": "['abc']"}
</code></pre>
<p>I want to convert the values of the dictionary from strings to lists. I tried to use <code>literal_eval</code> and <code>eval()</code> but without success in getting plain Python lists:</p>
<p><em>Desired output:</em></p>
<pre><code>data = {'object_1':['abc'],
'object_2':['def'],
"object_3": ['xyz'],
"object_4": ['abc']}
</code></pre>
<p>Thanks for any advice.</p>
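<p>For reference, <code>ast.literal_eval</code> applied per value does yield plain Python lists for the sample shown; if it failed before, the raw strings may have contained something other than valid literals:</p>

```python
import ast

data = {'object_1': "['abc']",
        'object_2': "['def']",
        'object_3': "['xyz']",
        'object_4': "['abc']"}

# literal_eval safely parses each string into a real list
parsed = {key: ast.literal_eval(value) for key, value in data.items()}
print(parsed)
```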
|
<python><pandas><string>
|
2023-03-20 17:47:39
| 3
| 580
|
Mr.Slow
|
75,793,592
| 13,981,285
|
Control bitrate of video generated using opencv VideoWriter
|
<p>I am generating a video from a set of images using
<code>cv2.VideoWriter(filename,fourcc,fps,size)</code></p>
<p>I want to use a particular bitrate for my output videos to reduce the file sizes. I am trying to mimic an ffmpeg command which generated videos with smaller sizes. One prominent difference I noticed is the bitrate; resolution, codec, and video length are all similar.</p>
<p>How do I change the bitrate when I generate videos using opencv in python?</p>
|
<python><linux><opencv><ffmpeg><h.264>
|
2023-03-20 17:43:53
| 0
| 402
|
darthV
|
75,793,590
| 8,968,801
|
Django Rest Framework: Remove Appended "[]" from List Parameter
|
<p>I have a django rest framework application set up with two packages: One's called djangorestframework-camel-case (to "camelizes" the parameters in the API docs) and the other is drf-spectacular (to automatically generate Swagger docs for my API).</p>
<p>The camel case package also has the effect that it causes parameters from the request data sent from the frontend (in camelCase) to be received in the Python backend in snake_case. However, I just found out that when one of the requests from the frontend contains a list of data, let's say <code>dataList = ["1", "2", "3"]</code>, the parameter is received in the Python backend as a dictionary with the key <code>data_list[]</code>. Is there a setting to remove that <code>[]</code> at the end? It's messing up the rest of my logic.</p>
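<p>For context, the <code>[]</code> suffix is how some frontend form serializers (jQuery/qs-style encoding) mark array keys, rather than something the backend packages add. One workaround is normalizing the keys before the rest of the logic runs (a sketch; where to hook it in, e.g. a middleware or custom parser, is left open):</p>

```python
def strip_list_suffix(data):
    """Rename keys like 'data_list[]' back to 'data_list'."""
    return {key[:-2] if key.endswith("[]") else key: value
            for key, value in data.items()}

print(strip_list_suffix({"data_list[]": ["1", "2", "3"], "name": "x"}))
```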
|
<python><django><django-rest-framework>
|
2023-03-20 17:43:41
| 0
| 823
|
Eddysanoli
|
75,793,546
| 1,241,786
|
Regex to include curly brackets in some strings
|
<p>I have a spreadsheet that contains a "find" column and a "replace" column. Note that some strings are a subset of other strings.</p>
<pre><code>Find Replace
{Example1} {50M00_Dewirer_South\Example1}
{Example1\Alarm} {50M00_Dewirer_South\Example1\Alarm}
{Example1\AlarmHigh} {50M00_Dewirer_South\Example1\AlarmHigh}
{Example1\AlarmLow} {50M00_Dewirer_South\Example1\AlarmLow}
Example2 50M00_Dewirer_South\Example2
Example2foo 50M00_Dewirer_South\Example2foo
Example2foobar 50M00_Dewirer_South\Example2foobar
ATag Device_Shortcut\DirectReference
Another_Tag Winder\Local:50:I.Data.0
Another\Tag Winder\Local:12:O.Data.1
</code></pre>
<p>I need to search a directory of files for each of the search terms and replace the discovered terms with their associated replacements. The files I'm searching could also contain capitalization errors: {Example1} could appear as {example1}, {ExAmPle1}, {exAMplE1}, or any other combination of upper- and lower-case characters. I'm attempting to use regular expressions, as my brute-force attempt at searching all of the files was far too slow.</p>
<p>I've managed to put together a regular expression that works with strings that do not contain curly brackets {}. However, if a string does contain a curly bracket, my search function will not find anything in the files that I'm searching.</p>
<pre><code>pattern = re.compile(
r'\b(?:%s)\b' % '|'.join([re.escape(term) for term in replace_dict]),
re.IGNORECASE
)
</code></pre>
<p>How should I form my regex to include the curly brackets as part of the search term? Also, if my search terms did not contain curly brackets, could the new regex still be used, or would I have to revert back to my current pattern?</p>
<p>Edit: I should probably broaden this question as I haven't yet discovered all of the possible special characters that I would need to search for. Can a regex be created that can potentially contain any combination of special characters?</p>
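<p>One observation that may help: <code>\b</code> asserts a word/non-word boundary, so it can never match next to <code>{</code>, which is itself a non-word character. Lookarounds that merely forbid an adjacent word character work for both bracketed and plain terms, and escaping plus longest-first ordering handles arbitrary special characters (a sketch on two hypothetical entries from the table):</p>

```python
import re

# Two hypothetical entries from the table, one with braces and one without
replace_dict = {
    "{Example1}": r"{50M00_Dewirer_South\Example1}",
    "Example2": r"50M00_Dewirer_South\Example2",
}

# \b fails next to "{" because braces are non-word characters; these
# lookarounds only forbid an adjacent word character, so they work for
# both kinds of term.  Sorting longest-first stops "Example2" from
# matching inside a longer term like "Example2foo".
terms = sorted(replace_dict, key=len, reverse=True)
pattern = re.compile(
    "(?<!\\w)(?:%s)(?!\\w)" % "|".join(map(re.escape, terms)),
    re.IGNORECASE,
)
lookup = {key.lower(): value for key, value in replace_dict.items()}

result = pattern.sub(lambda m: lookup[m.group(0).lower()],
                     "Found {ExAmPle1} and example2 here.")
print(result)
```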
|
<python><regex>
|
2023-03-20 17:39:46
| 1
| 728
|
kubiej21
|
75,793,541
| 16,853,253
|
Google Cloud Console shows Client is unauthorized to retrieve access tokens using this method in python
|
<p>I have seen many questions relating to this GCP issue; none of them helped. I have created a service account and added it under "Manage Domain-wide delegation" with scopes, but I still get this error: <code>Client is unauthorized to retrieve access tokens using this method or client not authorized for any of the scopes requested.</code></p>
<p>code is below:</p>
<pre><code>from google.oauth2 import service_account
from googleapiclient.discovery import build
SCOPES = [
"https://www.googleapis.com/auth/admin.directory.user",
"https://www.googleapis.com/auth/admin.directory.domain.readonly",
"https://www.googleapis.com/auth/gmail.readonly",
"https://www.googleapis.com/auth/gmail.send",
"https://www.googleapis.com/auth/gmail.insert",
"https://www.googleapis.com/auth/gmail.settings.sharing",
]
SERVICE_ACCOUNT_FILE = '/PATH/TO/FILE/credentials.json'
credentials = service_account.Credentials.from_service_account_file(
SERVICE_ACCOUNT_FILE, scopes=SCOPES, )
delegated_credentials = credentials.with_subject('email')
service = build('admin', 'directory_v1', credentials=delegated_credentials)
def main():
print("Getting the first 10 users in the domain")
results = (
service.users()
.list(customer="customer_id", maxResults=10, orderBy="email")
.execute()
)
users = results.get("users", [])
print(users)
</code></pre>
|
<python><google-cloud-platform><google-workspace>
|
2023-03-20 17:38:38
| 1
| 387
|
Sins97
|
75,793,446
| 14,890,683
|
Python - Plotly - make_subplots - Title Overlap / Move Subplot Titles
|
<p>How do I adjust the subplot titles so that they don't overlap with the x-axis?</p>
<p><a href="https://i.sstatic.net/n2lNf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n2lNf.png" alt="Plotly Image" /></a></p>
<p>Here is the code for reference:</p>
<pre class="lang-py prettyprint-override"><code>import math
import pandas as pd
import plotly.subplots as ps
import plotly.graph_objects as go
# create subplots
x_axis_groups = df.groupby("x_axis_group")
fig = ps.make_subplots(
rows=math.ceil(len(list(x_axis_groups)) / 3),
cols=3,
subplot_titles=[name for name, _ in x_axis_groups],
horizontal_spacing=0.025,
vertical_spacing=0.1,
)
for i, (_, group) in enumerate(x_axis_groups):
subfig = go.Heatmap(x=group["x-axis"], y=group["y-axis"], z=group['z'])
fig.append_trace(subfig, row=i//3+1, col=i%3+1)
fig.update_layout(
showlegend=False,
title_text='Total PF reads per plate',
font=dict(size=18),
xaxis=dict(range=[1, 12], side="top", dtick=1),
yaxis=dict(autorange="reversed")
)
</code></pre>
|
<python><plotly>
|
2023-03-20 17:25:21
| 1
| 345
|
Oliver
|
75,793,406
| 14,409,562
|
when concatenating two data frames an extra row is added
|
<p>I am trying to concatenate two pandas dataframes, but unfortunately it's not working. This is the code:</p>
<pre><code>
train_df =pd.concat([x_train,y_train],axis =1 )
print(train_df)
</code></pre>
<p>y_train and x_train are of the same length and have the correct size and row indexes; I just wish to concatenate both of them like concatenating two matrices together.
My current output is the following:</p>
<pre><code> Age Sex HighChol BMI ... PhysHlth DiffWalk HighBP Diabetes
0 10.0 1.0 1.0 33.0 ... 30.0 0.0 1.0 NaN
1 10.0 1.0 0.0 21.0 ... 30.0 1.0 1.0 1.0
2 4.0 0.0 0.0 32.0 ... 7.0 0.0 0.0 1.0
3 11.0 1.0 1.0 35.0 ... 10.0 1.0 1.0 0.0
4 10.0 0.0 1.0 27.0 ... 0.0 0.0 1.0 1.0
... ... ... ... ... ... ... ... ... ...
996 3.0 0.0 1.0 33.0 ... 0.0 0.0 0.0 0.0
997 9.0 0.0 1.0 41.0 ... 30.0 1.0 1.0 0.0
998 12.0 0.0 1.0 34.0 ... 0.0 0.0 1.0 1.0
999 6.0 0.0 0.0 31.0 ... 0.0 0.0 0.0 0.0
1000 NaN NaN NaN NaN ... NaN NaN NaN 1.0
[1001 rows x 15 columns]
</code></pre>
<p>which for some reason seems to add a row of nan</p>
<p>edit:
apparently y_train is a series</p>
|
<python><arrays><pandas>
|
2023-03-20 17:21:18
| 1
| 412
|
a_confused_student
|
75,793,236
| 1,422,096
|
Roll and pad in Numpy
|
<p>Is there a built-in Numpy function to shift (roll + pad) an 1D array?</p>
<p>Something like this:</p>
<pre><code>import numpy as np
def roll_pad(a, t):
b = np.roll(a, t)
if t >= 0:
b[:t] = 0
else:
b[t:] = 0
return b
z = np.array([1, 2, 3, 4, 5, 6])
print(z)
print(roll_pad(z, 2)) # [0 0 1 2 3 4]
print(roll_pad(z, -2)) # [3 4 5 6 0 0]
</code></pre>
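For reference, the same shift-with-zero-fill can be written without np.roll at all, using plain slicing into a preallocated result (<code>shift</code> is my own helper name, not a NumPy function):

```python
import numpy as np

def shift(a, t):
    # Allocate the zero-filled result, then copy the surviving slice of `a`.
    b = np.zeros_like(a)
    if t > 0:
        b[t:] = a[:-t]
    elif t < 0:
        b[:t] = a[-t:]
    else:
        b[:] = a
    return b

z = np.array([1, 2, 3, 4, 5, 6])
print(shift(z, 2))   # [0 0 1 2 3 4]
print(shift(z, -2))  # [3 4 5 6 0 0]
```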
|
<python><numpy><zero-padding>
|
2023-03-20 17:04:01
| 1
| 47,388
|
Basj
|
75,793,219
| 4,436,572
|
Polars replace_time_zone function throws error of "non-existent in time zone"
|
<p>here's our test data to work with:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import pandas as pd
from datetime import date, time, datetime
df = pl.DataFrame(
pl.datetime_range(
start=date(2022, 1, 3),
end=date(2022, 9, 30),
interval="5m",
time_unit="ns",
time_zone="UTC",
eager=True
).alias("UTC")
)
</code></pre>
<p>I specifically need <code>replace_time_zone</code> to actually change the underlying timestamp.</p>
<p>It works with <code>convert_time_zone</code>:</p>
<pre class="lang-py prettyprint-override"><code>df.select(
pl.col("UTC").dt.convert_time_zone(time_zone="America/New_York").alias("US")
)
</code></pre>
<pre><code>shape: (77_761, 1)
┌────────────────────────────────┐
│ US │
│ --- │
│ datetime[ns, America/New_York] │
╞════════════════════════════════╡
│ 2022-01-02 19:00:00 EST │
│ 2022-01-02 19:05:00 EST │
│ 2022-01-02 19:10:00 EST │
│ 2022-01-02 19:15:00 EST │
│ 2022-01-02 19:20:00 EST │
│ … │
│ 2022-09-29 19:40:00 EDT │
│ 2022-09-29 19:45:00 EDT │
│ 2022-09-29 19:50:00 EDT │
│ 2022-09-29 19:55:00 EDT │
│ 2022-09-29 20:00:00 EDT │
└────────────────────────────────┘
</code></pre>
<p>But fails with <code>replace_time_zone</code>:</p>
<pre class="lang-py prettyprint-override"><code>df.select(
pl.col("UTC").dt.replace_time_zone(time_zone="America/New_York").alias("US")
)
</code></pre>
<pre><code># ComputeError: datetime '2022-03-13 02:00:00' is non-existent in time zone 'America/New_York'.
# You may be able to use `non_existent='null'` to return `null` in this case.
</code></pre>
|
<python><datetime><timezone><python-polars>
|
2023-03-20 17:01:52
| 2
| 1,288
|
stucash
|
75,793,123
| 6,679,011
|
Twilio credential issues
|
<p>This is my case. I need to fetch SMS data through Twilio.
This is what I have:
Account SID 'ACxxxxxxxxx'
For security reasons, they could not give me the auth token for the Account SID, but instead set up an API SID and its token:
API SID 'SKxxxxxxxxxxxxx'
Token 'XXXXXXXXXXXXXXXXX'</p>
<p>I already figured out how to access the data through CURL command, like this:</p>
<pre><code>curl -X GET "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Messages.json?PageSize=20" \
-u $API_ACCOUNT_SID:$TOKEN
</code></pre>
<p>If I want to fetch data through the twilio library instead, how should I set up the connection? The (API SID, token) pair does not work, and the (Account SID, token) pair doesn't work either. Any suggestions?</p>
<p>According to the library</p>
<pre><code>import os
from twilio.rest import Client
# Find your Account SID and Auth Token at twilio.com/console
# and set the environment variables. See http://twil.io/secure
account_sid = os.environ['TWILIO_ACCOUNT_SID']
auth_token = os.environ['TWILIO_AUTH_TOKEN']
client = Client(account_sid, auth_token)
messages = client.messages.list(limit=20)
for record in messages:
print(record.sid)
</code></pre>
|
<python><twilio-api>
|
2023-03-20 16:51:21
| 1
| 469
|
Yang L
|
75,793,089
| 1,773,592
|
How do I seek using python in GCP Pubsub?
|
<p>I am trying to use the Google Python pubsub client to seek to now in a subscription:</p>
<pre><code>seek_request_dict = {
"subscription":subscription,
"time":current_time
}
request = pubsub_v1.types.SeekRequest(seek_request_dict)
</code></pre>
<p>this gives the error:</p>
<pre><code>TypeError: Message must be initialized with a dict: google.pubsub.v1.SeekRequest
</code></pre>
<p>I have tried:</p>
<pre><code>request = pubsub_v1.SeekRequest(seek_request_dict)
</code></pre>
<p>but this gives:</p>
<pre><code>AttributeError: module 'google.cloud.pubsub_v1' has no attribute 'SeekRequest'
</code></pre>
<p>So smart ones what do I need to do? TIA!</p>
|
<python><google-cloud-platform><google-cloud-pubsub>
|
2023-03-20 16:48:48
| 0
| 3,391
|
schoon
|
75,793,069
| 12,548,458
|
Can conda channels include multiple channels?
|
<p>I'm reading the <a href="https://conda.io/projects/conda/en/latest/user-guide/concepts/channels.html" rel="nofollow noreferrer">documentation on Conda channels</a>, and the documentation appears to state that:</p>
<ul>
<li>By default, Conda uses the <code>defaults</code> channel. This can be verified locally:</li>
</ul>
<pre><code>% conda config --show channels
channels:
- defaults
</code></pre>
<ul>
<li>The <code>defaults</code> channel is hosted on <a href="https://repo.anaconda.com/pkgs" rel="nofollow noreferrer">repo.anaconda.com</a></li>
</ul>
<p>However, a quick glance at the landing page shows that this channel seems to include multiple channels, such as <code>pkgs/main</code>, <code>pkgs/free</code>, etc. This is extra confusing because this reflects the output of a similar command to list default channels:</p>
<pre><code>% conda config --show default_channels
default_channels:
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
</code></pre>
<p>So my question is, what exactly is a channel? Is <code>defaults</code> just a special, hard-coded alias for a list of default channels, namely <code>pkgs/main</code> and <code>pkgs/r</code>? Or can any channel declare that it includes multiple channels within itself?</p>
|
<python><anaconda><conda>
|
2023-03-20 16:47:11
| 1
| 3,289
|
dlq
|
75,793,013
| 864,245
|
Combining Dumper class with string representer to get exact required YAML output
|
<p>I'm using PyYAML 6.0 with Python 3.9.</p>
<p>In order, I am trying to...</p>
<ol>
<li>Create a YAML list</li>
<li>Embed this list as a multi-line string in another YAML object</li>
<li>Replace this YAML object in an existing document</li>
<li>Write the document back, in a format that will pass YAML 1.2 linting</li>
</ol>
<p>I have the process working, apart from the YAML 1.2 requirement, with the following code:</p>
<pre class="lang-py prettyprint-override"><code>import yaml
def str_presenter(dumper, data):
"""configures yaml for dumping multiline strings
Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data"""
if data.count('\n') > 0: # check for multiline string
return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='|')
return dumper.represent_scalar('tag:yaml.org,2002:str', data)
yaml.add_representer(str, str_presenter)
yaml.representer.SafeRepresenter.add_representer(
str, str_presenter)
class DoYamlStuff:
def post_renderers(images):
return yaml.dump([
{
"op": "replace",
"path": "/spec/postRenderers",
"value": [
{
"kustomize": {
"images": images
}
}
]
}])
@classmethod
def images_patch(cls, chart, images, ecr_url):
return {
"target": {
"kind": "HelmRelease",
"name": chart,
"namespace": chart
},
"patch": cls.post_renderers([x.patch(ecr_url) for x in images])
</code></pre>
<p>This produces something like this:</p>
<pre class="lang-yaml prettyprint-override"><code>- patch: |
- op: replace
path: /spec/postRenderers
value:
- kustomize:
images:
- name: nginx:latest
newName: 12345678910.dkr.ecr.eu-west-1.amazonaws.com/nginx
newTag: latest
target:
kind: HelmRelease
name: nginx
namespace: nginx
</code></pre>
<p>As you can see, that's mostly working. Valid YAML, does what it needs to, etc.</p>
<p>Unfortunately... it doesn't indent the list item by 2 spaces, so the YAML linter in our repository's pre-commit then adjusts everything. Makes the repo messy, and causes PRs to regularly include changes that aren't relevant.</p>
<p>I then set out to implement <a href="https://stackoverflow.com/a/70423579/864245">this</a> PrettyDumper class from StackOverflow. This reversed the effects - my indentation is now right, but my scalars aren't working at all:</p>
<pre class="lang-yaml prettyprint-override"><code> - patch: "- op: replace\n path: /spec/postRenderers\n value:\n - kustomize:\n\
\ images:\n - name: nginx:latest\n \
\ newName: 793961818876.dkr.ecr.eu-west-1.amazonaws.com/nginx\n \
\ newTag: latest\n"
target:
kind: HelmRelease
name: nginx
namespace: nginx
</code></pre>
<p>I have tried to merge the <code>str_presenter</code> function with the <code>PrettyDumper</code> class, but the scalars still don't work:</p>
<pre class="lang-py prettyprint-override"><code>import yaml.emitter
import yaml.serializer
import yaml.representer
import yaml.resolver
class IndentingEmitter(yaml.emitter.Emitter):
def increase_indent(self, flow=False, indentless=False):
"""Ensure that lists items are always indented."""
return super().increase_indent(
flow=False,
indentless=False,
)
class PrettyDumper(
IndentingEmitter,
yaml.serializer.Serializer,
yaml.representer.Representer,
yaml.resolver.Resolver,
):
def __init__(
self,
stream,
default_style=None,
default_flow_style=False,
canonical=None,
indent=None,
width=None,
allow_unicode=None,
line_break=None,
encoding=None,
explicit_start=None,
explicit_end=None,
version=None,
tags=None,
sort_keys=True,
):
IndentingEmitter.__init__(
self,
stream,
canonical=canonical,
indent=indent,
width=width,
allow_unicode=allow_unicode,
line_break=line_break,
)
yaml.serializer.Serializer.__init__(
self,
encoding=encoding,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version,
tags=tags,
)
yaml.representer.Representer.__init__(
self,
default_style=default_style,
default_flow_style=default_flow_style,
sort_keys=sort_keys,
)
yaml.resolver.Resolver.__init__(self)
yaml.add_representer(str, self.str_presenter)
yaml.representer.SafeRepresenter.add_representer(
str, self.str_presenter)
def str_presenter(self, data):
print(data)
"""configures yaml for dumping multiline strings
Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data"""
if data.count('\n') > 0: # check for multiline string
return self.represent_scalar('tag:yaml.org,2002:str', data, style='|')
return self.represent_scalar('tag:yaml.org,2002:str', data)
</code></pre>
<p>If I could merge these two approaches into the <code>PrettyDumper</code> class, I think it would do everything I require. Can anyone point me in the right direction?</p>
|
<python><yaml><pyyaml>
|
2023-03-20 16:41:43
| 1
| 1,316
|
turbonerd
|
75,793,007
| 416,734
|
What is the benefit of using complex numbers to store graph coordinates?
|
<p>I am looking at a <a href="https://github.com/hughcoleman/advent-of-code/blob/main/2022/12.py" rel="nofollow noreferrer">solution</a> to an <a href="https://adventofcode.com/2022/day/12" rel="nofollow noreferrer">Advent of Code puzzle</a> that stores coordinates as complex numbers:</p>
<pre><code> heightmap = {
complex(x, y): c
for y, ln in enumerate(sys.stdin.read().strip().split("\n"))
for x, c in enumerate(ln)
}
</code></pre>
<p>Then accesses them later as follows:</p>
<pre><code>for xy, c in heightmap.items():
for d in (1, -1, 1j, -1j):
if ord(heightmap.get(xy + d, "{")) <= ord(c) + 1:
G.add_edge(xy, xy + d)
</code></pre>
<p>I can see that this code makes the 'get neighbors' line easy to write/think about, but I don't see that it is worth the added complexity (no pun intended).</p>
<p>Can someone explain why it's useful to store the grid coordinates as complex numbers?</p>
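To make the comparison concrete, here is a small illustration (my own sketch, not taken from the linked solution) of why the complex form is convenient — the four grid neighbours fall out of a single addition, and complex numbers are hashable so they work directly as dict keys:

```python
pos = complex(2, 3)

# Complex version: one addition per direction.
neighbours = [pos + d for d in (1, -1, 1j, -1j)]

# Tuple version of the same thing, for comparison.
x, y = 2, 3
tuple_neighbours = [(x + dx, y + dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

print(neighbours)  # [(3+3j), (1+3j), (2+4j), (2+2j)]
```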
|
<python><graph-theory><complex-numbers>
|
2023-03-20 16:41:22
| 1
| 831
|
outis
|
75,792,946
| 4,504,711
|
Fast implementations in Python to compute the mean of products
|
<p>I have a list of float elements <code>x=[0.1, 2, 0.5, ...]</code> with length <code>l=len(x)</code>. I am looking for fast/vectorized implementations to compute mean of the products between all two pairs from <code>x</code>:</p>
<pre><code>S=0.0
for x1 in x:
for x2 in x:
S+=x1*x2/(l*l)
</code></pre>
<p>This would essentially be the sample covariance of <code>x</code>. I looked into <code>numpy</code>'s <code>cov()</code> function, however, that computes a covariance matrix, or, in the case of a list, simply the sample variance. Also, the <code>correlate()</code> function uses samples from a sliding window, not the combination of all element pairs from the list.</p>
<p><strong>Edit:</strong> Although not stated in the original question, methods not involving l^2 memory would be greatly appreciated. <code>np.outer</code> and numpy broadcasting proposed below both have l^2 memory requirements which make them unfeasible for large l.</p>
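For what it's worth, since every ordered pair is included (including the i == j terms), the double sum factorises: S = (Σx)²/l² = mean(x)², which needs no l² memory at all. A quick check of that identity (my own sketch):

```python
import numpy as np

x = np.array([0.1, 2.0, 0.5])
l = len(x)

# O(l^2) double loop from the question.
S_loop = sum(x1 * x2 / (l * l) for x1 in x for x2 in x)

# O(l) closed form: sum_i sum_j x_i x_j = (sum_i x_i)^2.
S_fast = x.mean() ** 2

print(np.isclose(S_loop, S_fast))  # True
```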
|
<python><numpy><covariance>
|
2023-03-20 16:34:18
| 2
| 2,842
|
Botond
|
75,792,901
| 8,913,983
|
Calculate the carry over from rows based on criteria in pandas
|
<p>I have a <code>df</code> like this :</p>
<pre><code>date time value
2021-08 0.0 22.50
2021-08 5.0 6600.00
2021-09 0.0 1057.62
2021-09 1.0 646.35
2021-09 2.0 311.76
2021-09 3.0 3982.50
2021-09 4.0 900.00
2021-09 7.0 546.00
2021-09 9.0 1471.50
2021-09 11.0 1535.16
</code></pre>
<p>The <code>time</code> column represents for how many months the <code>value</code> is being paid from the start of <code>date</code>. So for example the first row remains the same, the second row remains the same as there is nothing to add, but the third row would be the <code>value</code> + <code>6600</code> because from the second row, the value of <code>6600</code> is being paid from <code>2021-08</code> to <code>2022-02</code></p>
<p>I am unsure how can I achieve this, my idea was to create a new data frame:</p>
<pre><code>new_df = pd.DataFrame(pd.date_range(start='2021-08', end=datetime.datetime.now(), freq='M'), columns=['value'])
new_df['commission'] = 0
</code></pre>
<p>And fill it somehow while iterating through the main <code>df</code> so that the end result should look like this:</p>
<pre><code>leased value
2021-08 22.50 + 6600
2021-09 6600 + 1057.62 + 646.35 + 311.76 + 3982.50 + 900.00 + 546.00 + 1471.50 + 1535.16
2021-10 6600 + 646.35 + 311.76 + 3982.50 + 900.00 + 546.00 + 1471.50 + 1535.16
2021-11 6600 + 3982.50 + 900.00 + 546.00 + 1471.50 + 1535.16
...
</code></pre>
|
<python><pandas><datetime>
|
2023-03-20 16:28:55
| 2
| 4,870
|
Jonas Palačionis
|
75,792,678
| 11,304,830
|
Count words in a sentence controlling for negations
|
<p>I am trying to count the number of times some words occur in a sentence while controlling for negations. In the example below, I write a very basic code where I count the number of times "w" appear in "txt". Yet, I fail to control for negations like "don't" and/or "not".</p>
<pre><code>w = ["hello", "apple"]
for word in w:
txt = "I love apples, apple are my favorite fruit. I don't really like apples if they are too mature. I do not like apples if they are immature either."
print(txt.count(word))
</code></pre>
<p>The code should say that it finds "apple" only 2 times and not 4. So, I would like to add: if there is a negation within n words before or after a word in "w", then don't count it; otherwise, do.</p>
<p>N.B. Negations here are words like "don't" and "not".</p>
<p>Can anyone help me with this?</p>
<p>Thanks a lot for your help!</p>
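To frame what I mean, here is a rough token-window sketch (<code>count_with_negation</code> and the window size <code>n</code> are made-up names, and the tokenisation is deliberately naive):

```python
def count_with_negation(text, word, negations=("not", "don't"), n=3):
    # Naive tokenisation: strip basic punctuation, lowercase, split on whitespace.
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    count = 0
    for i, tok in enumerate(tokens):
        if tok.startswith(word):
            # Look at up to n tokens immediately before the match.
            window = tokens[max(0, i - n):i]
            if not any(neg in window for neg in negations):
                count += 1
    return count

txt = ("I love apples, apple are my favorite fruit. I don't really like apples "
       "if they are too mature. I do not like apples if they are immature either.")
print(count_with_negation(txt, "apple"))  # 2
```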
|
<python><nlp>
|
2023-03-20 16:08:43
| 1
| 1,623
|
Rollo99
|
75,792,660
| 10,673,107
|
Solve TSP without crossing through the object
|
<p>I have a grid of points which are forming a cube. The cube looks like this:</p>
<p><a href="https://i.sstatic.net/ioKsT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ioKsT.png" alt="enter image description here" /></a></p>
<p>Now I want to create a path through all the points, so I had the following method which applies TSP and optimizes the path with 2-Opt:</p>
<pre><code>def tsp_2opt(points):
# Return input when only one point is given
if len(points) <= 1:
return points
# Create a distance matrix between all points
dist_matrix = np.sqrt(np.sum((points[:, np.newaxis] - points) ** 2, axis=2))
# Create an initial tour using the nearest neighbor algorithm
n = len(points)
unvisited = set(range(n))
curr_point = 0 # start with the first point
tour = [curr_point]
unvisited.remove(curr_point)
while unvisited:
next_point = min(unvisited, key=lambda x: dist_matrix[curr_point, x])
if len(unvisited) == 1:
tour.append(next_point)
break
tour.append(next_point)
unvisited.remove(next_point)
curr_point = next_point
# Use 2-Opt algorithm to improve the tour
improved = True
while improved:
improved = False
for i in range(n-2):
for j in range(i+2, n-1):
if dist_matrix[tour[i], tour[j]] + dist_matrix[tour[i+1], tour[j+1]] < dist_matrix[tour[i], tour[i+1]] + dist_matrix[tour[j], tour[j+1]]:
tour[i+1:j+1] = reversed(tour[i+1:j+1])
improved = True
return points[np.array(tour)]
</code></pre>
<p>This resulted in:</p>
<p><a href="https://i.sstatic.net/TOcPo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TOcPo.png" alt="enter image description here" /></a></p>
<p>This is almost what I want, except that it breaks one rule my model should enforce: the path is not allowed to go through the object. In the last part it crosses from middle to middle, because it has no other option to reach that last point. I tried setting the distance matrix to infinity for points on a different surface, so that it would not try to go there, but in this case there is no other option.
These are my data points:</p>
<pre><code>[[ 0. 100. 100.]
[ 0. 100. 50.]
[ 0. 100. 0.]
[ 50. 100. 100.]
[ 50. 100. 50.]
[ 50. 100. 0.]
[100. 100. 100.]
[100. 100. 50.]
[100. 100. 0.]
[100. 0. 0.]
[100. 50. 0.]
[100. 100. 0.]
[100. 0. 50.]
[100. 50. 50.]
[100. 100. 50.]
[100. 0. 100.]
[100. 50. 100.]
[100. 100. 100.]
[ 0. 0. 0.]
[ 0. 50. 0.]
[ 0. 100. 0.]
[ 0. 0. 50.]
[ 0. 50. 50.]
[ 0. 100. 50.]
[ 0. 0. 100.]
[ 0. 50. 100.]
[ 0. 100. 100.]
[ 0. 0. 100.]
[ 0. 50. 100.]
[ 0. 100. 100.]
[ 50. 0. 100.]
[ 50. 50. 100.]
[ 50. 100. 100.]
[100. 0. 100.]
[100. 50. 100.]
[100. 100. 100.]
[ 0. 0. 0.]
[ 0. 50. 0.]
[ 0. 100. 0.]
[ 50. 0. 0.]
[ 50. 50. 0.]
[ 50. 100. 0.]
[100. 0. 0.]
[100. 50. 0.]
[100. 100. 0.]
[ 0. 0. 100.]
[ 0. 0. 50.]
[ 0. 0. 0.]
[ 50. 0. 100.]
[ 50. 0. 50.]
[ 50. 0. 0.]
[100. 0. 100.]
[100. 0. 50.]
[100. 0. 0.]]
</code></pre>
<p>First I created a check that it is only allowed to move along one axis at a time, so that these crossing paths would not occur, but the problem is that that solution only works for cubes. I need to make this work for other shapes as well.</p>
|
<python>
|
2023-03-20 16:07:11
| 2
| 994
|
A. Vreeswijk
|
75,792,623
| 4,939,167
|
How to setup and teardown with multiple files and test cases in pytest
|
<p>I am new to pytest.
I have 10 test files which have multiple tests defined in each test file, like below:</p>
<pre><code>test_repo
- features
-step_defs
- stress_tests
-file1 (10 test cases)
-file2 (3 test cases)
- functional_tests
- file2 (2 test cases)
- file3 (20 test cases)
</code></pre>
<p>and so on.</p>
<p>My problem statement is: when I initiate the stress tests, I want to set up and tear down like below</p>
<pre><code> - setup testdb
- run stress tests file 1 and file2 on testdb
- tear down delete testdb after all tests are run.
</code></pre>
<p>So I want to set up the creation and deletion of the DB only once and run all 13 test cases from stress_tests.</p>
<p>I tried pytest fixtures with scope="session", but they get reused or called for each test case.</p>
<p>Is there a way I can call the creation of the db once before, and the deletion of the db once after, all tests have run?</p>
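For context, what I tried so far looks roughly like this shape in a conftest.py (a sketch with a stand-in dict instead of my real DB code; the fixture and variable names are my own):

```python
# conftest.py inside stress_tests/ — applies to every test in that directory.
import pytest

@pytest.fixture(scope="session", autouse=True)
def test_db():
    db = {"name": "testdb"}   # stand-in for "setup testdb"
    yield db                  # all stress tests run against this one instance
    db.clear()                # stand-in for "delete testdb" after the last test
```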
|
<python><pytest><pytest-bdd>
|
2023-03-20 16:03:43
| 1
| 352
|
Ashu123
|
75,792,476
| 8,713,442
|
Find mode in pandas DataFrame
|
<p>Find the mode among all values in C1_IND and C2_IND. I don't want the mode along each column.</p>
<pre><code>import pandas as pd
import numpy as np
from scipy.stats import mode
list =[{"col1":123,"C1_IND ":"Rev","C2_IND":"Hold"},
{"col1":456,"C1_IND ":"Hold","C2_IND":"Rev"},
{"col1":123,"C1_IND ":"Hold","C2_IND":"Service"},
{"col1":1236,"C1_IND ":"Man","C2_IND":"Man"}]
df = pd.DataFrame.from_dict(list)
print(df)
</code></pre>
<p>For another example, find the mode among all values in C1_IND and C3_IND:</p>
<pre><code>
import pandas as pd
import numpy as np
from scipy.stats import mode
list =[{"col1":123,"C1_IND ":"Rev","C2_IND":"Hold","C3_IND":"Hold"},
{"col1":456,"C1_IND ":"Hold","C2_IND":"Rev","C3_IND":"Rev"},
{"col1":123,"C1_IND ":"Hold","C2_IND":"Service","C3_IND":"Service"},
{"col1":1236,"C1_IND ":"Man","C2_IND":"Man","C3_IND":"Man"}]
df = pd.DataFrame.from_dict(list)
print(df)
</code></pre>
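To show the shape of result I'm after on the first example ("Hold" appears three times across both columns), here is a sketch that flattens the two columns into one Series with stack() before taking a single mode (column names cleaned of the trailing space for readability):

```python
import pandas as pd

df = pd.DataFrame([
    {"col1": 123,  "C1_IND": "Rev",  "C2_IND": "Hold"},
    {"col1": 456,  "C1_IND": "Hold", "C2_IND": "Rev"},
    {"col1": 123,  "C1_IND": "Hold", "C2_IND": "Service"},
    {"col1": 1236, "C1_IND": "Man",  "C2_IND": "Man"},
])

# Flatten the two indicator columns into one long Series, then take one mode.
overall_mode = df[["C1_IND", "C2_IND"]].stack().mode()
print(overall_mode.tolist())  # ['Hold']
```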
|
<python><pandas><dataframe><numpy>
|
2023-03-20 15:49:44
| 1
| 464
|
pbh
|
75,792,259
| 11,770,390
|
Prevent sharing of environmental variables windows <-> wsl subsystem
|
<p>I'm using an SDK on both windows and in wsl ubuntu. The SDK provide some functionality through a python script that collects environment information (env variables) and locates certain resources based on it. Now I'm getting this error:</p>
<pre><code> File "/home/myuser/dev/esp-idf/tools/idf_tools.py", line 1099, in get_user_defined_targets
if env == idf_env_json['idfSelectedId']:
KeyError: 'idfSelectedId'
</code></pre>
<p>(The SDK in question is the esp-idf development framework, link to the script in question: <a href="https://github.com/espressif/esp-idf/blob/master/tools/idf_tools.py" rel="nofollow noreferrer">click</a>)</p>
<p>When I follow this error inside <code>idf_tools.py</code> I get to this line eventually:</p>
<pre><code>idf_env_file_path = os.path.join(global_idf_tools_path, IDF_ENV_FILE) # type: ignore
with open(idf_env_file_path, 'r') as idf_env_file:
return json.load(idf_env_file)
</code></pre>
<p>So it seems that the idf_env_json is loaded from a json file found in the systems <code>global_idf_tools_path</code>. Now my suspicion is that the python loader will load the windows json file because for whatever reason this variable will be dominant, and hence the variable <code>idf_env_json</code> will have the wrong entries. Is there a way I can exclude certain or maybe all variables from the wsl so that those two systems don't intermix so badly?</p>
|
<python><terminal><environment-variables><esp-idf>
|
2023-03-20 15:32:04
| 1
| 5,344
|
glades
|
75,792,213
| 12,961,237
|
Convert Docx numbered list to Python and keep indexes
|
<p>I have a bunch of documents in <code>docx</code> format containing numbered lists like:</p>
<pre><code>1) Foo
2) Bar
</code></pre>
<p>and also nested lists like:</p>
<pre><code>I. Heading
a) Sub paragraph
II. ...
</code></pre>
<p>I'm currently working with <code>docx2python</code>, which at least gives <em>some</em> indexes to the list when converting to a python string, but not very reliably. It e.g. gives 1) to multiple paragraphs that are actually numbered with 6., 7. and 8.</p>
<p>Does anybody know a different package I could use, or have a workaround in mind?</p>
|
<python><docx>
|
2023-03-20 15:28:12
| 1
| 1,192
|
Sven
|
75,792,173
| 11,968,226
|
Python DeepL API glossary not working for translation
|
<p>I am using the <a href="https://www.deepl.com/de/docs-api/translate-text/translate-text/" rel="nofollow noreferrer">DeepL API</a> to translate text and I also want to include a <code>glossary</code> for translating.</p>
<p>Creating the glossary worked fine, I can see all the correct entries but translating with the glossary brings up some weird behaviors:</p>
<ol>
<li><p>One entry is <code>I:Ich</code> because DeepL doesn't seem to be able to translate the English "I" to the German "Ich" (always returns "I" in German), so I thought I could fix this with the glossary, but the API is still returning "I" for "I" even though I added the glossary in the <code>request</code>.</p>
</li>
<li><p>DeepL does not always seem to take the exact value from the glossary. E.g. I have an entry "holding,halten". Deepl translates "holding" without the glossary included to "Betrieb" but when adding the glossary it returns "halten<strong>d</strong>". So it is adding a "d" at the end.</p>
</li>
</ol>
<p>What am I not getting here? DeepL support is not answering. Happy for any help.</p>
<p>This is the function I use to translate text:</p>
<pre><code>def translate_text_with_deepl(text_to_translate):
URL = "https://api-free.deepl.com/v2/translate"
params = {
"auth_key": DEEPL_APY_KEY,
"text": text_to_translate,
"target_lang": TARGET_LANGUAGE_CODE,
"source_lang": SOURCE_LANGUAGE_CODE,
"glossary_id": GLOSSARY_ID,
}
re = requests.post(URL, params)
return re.json()["translations"][0]["text"]
</code></pre>
|
<python><glossary><deepl>
|
2023-03-20 15:23:58
| 1
| 2,404
|
Chris
|
75,792,127
| 19,980,284
|
Convert pandas column values based on groupings of values
|
<p>I have a pandas columns with values <code>1.0</code>, <code>2.0</code>, <code>3.0</code>, <code>4.0</code>, and <code>5.0</code> like below:</p>
<pre><code>0 5.0
1 2.0
2 3.0
3 3.0
4 5.0
...
1039 5.0
1040 1.0
1041 2.0
1042 4.0
1043 1.0
</code></pre>
<p>I want rows with values 1.0 or 2.0 to all have a value of 1.0, 3.0 and 4.0 to become 2.0, and 5.0 to become 3.0. How could I re-assign the values based on these groupings. I was thinking <code>np.where()</code> at first but now I'm not sure how to implement that with <code>np.where()</code> logic because that seems like it would be better suited for conversion to a binary variable. Maybe just masking with <code>.loc()</code>?</p>
<p>Thanks.</p>
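A sketch of the kind of remapping I mean, using Series.map with an explicit dict (values missing from the dict would become NaN):

```python
import pandas as pd

s = pd.Series([5.0, 2.0, 3.0, 3.0, 5.0, 1.0, 4.0])

# 1.0/2.0 -> 1.0, 3.0/4.0 -> 2.0, 5.0 -> 3.0
grouped = s.map({1.0: 1.0, 2.0: 1.0, 3.0: 2.0, 4.0: 2.0, 5.0: 3.0})
print(grouped.tolist())  # [3.0, 1.0, 2.0, 2.0, 3.0, 1.0, 2.0]
```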
|
<python><pandas><dataframe><numpy>
|
2023-03-20 15:19:50
| 1
| 671
|
hulio_entredas
|
75,792,022
| 412,252
|
How do I get typing working in python mixins?
|
<ul>
<li>I have a mixin that will always be used with a specific type of class i.e. a subclass of <code>widgets.Input</code></li>
<li>I want to override a few methods using the mixin, and I'm referencing attributes that exist on <code>widgets.Input</code> in my custom methods</li>
</ul>
<p>How do I tell the python type system that this mixin extends the <code>widgets.Input</code> interface without inheriting from <code>widgets.Input</code>?</p>
<p><strong>Bonus:</strong> Can I tell the type system that this mixin can only be used with subclasses of <code>widgets.Input</code>?</p>
<pre class="lang-py prettyprint-override"><code>from django.forms import widgets
class MaskedWidgetMixin:
def format_value(self, value):
return ""
def get_context(self, name, value, attrs):
context = super().get_context(name, value, attrs) # <--- super().get_context does not exist
context["widget"]["attrs"] = self.build_attrs(self.attrs, attrs, value) # <--- self.attrs does not exist
return context
def build_attrs(self, base_attrs, extra_attrs=None, value: str = ""):
attrs = super().build_attrs(base_attrs, extra_attrs) # <--- super().build_attrs does not exist
if value:
attrs.update({"placeholder": "* ENCRYPTED *", "required": False})
return attrs
class EncryptedTextInputWidget(MaskedWidgetMixin, widgets.TextInput):
pass
class EncryptedTextareaWidget(MaskedWidgetMixin, widgets.Textarea):
pass
</code></pre>
<h2>Update</h2>
<p>As suggested in the comments, I tried to define a <code>Protocol</code>.<br />
However this is not the exact purpose of a <code>Protocol</code>.<br />
Rather a <code>Protocol</code> defines an interface to be implemented by the class extending it.</p>
<p>The following code produces this type error ...</p>
<pre><code>Class derives from one or more protocol classes but does not implement all required members
Member "attrs" is declared in protocol class "InputProtocol"
</code></pre>
<pre class="lang-py prettyprint-override"><code>class InputProtocol(Protocol):
attrs: dict[str, Any]
def get_context(self, name, value, attrs) -> dict[str, dict[str, Any]]:
...
def build_attrs(self, base_attrs, extra_attrs=None) -> dict[str, Any]:
...
class MaskedWidgetMixin(InputProtocol):
...
</code></pre>
|
<python><django><mypy><python-typing>
|
2023-03-20 15:08:57
| 1
| 4,674
|
demux
|
75,792,014
| 19,980,284
|
Is it possible to combine levels of categorical variables in a statsmodels logit model?
|
<p>Let's say I'm regressing on hair color, and I have two independent variables: income and gender.</p>
<p>Let's say income is a pandas column with <code>1</code>, <code>2</code>, and <code>3</code> representing these different income levels:</p>
<pre><code>0-49,999 : 1
50,000-99,999: 2
100,000-199,999: 3
</code></pre>
<p>If I'd like in my regression results for <code>0-99,999</code> to become combined into <code>1</code> and <code>100,000-199,999</code> becomes <code>2</code>, is that possible in the statsmodels <code>smf</code> formula, or would I have to alter my dataframe and add dummy variables to represent these changes?</p>
|
<python><logistic-regression><statsmodels>
|
2023-03-20 15:08:38
| 0
| 671
|
hulio_entredas
|
75,791,918
| 112,976
|
Headers with FastAPI
|
<p>I created an endpoint which requires the User Agent as described in the documentation:
<a href="https://fastapi.tiangolo.com/tutorial/header-params/#__tabbed_2_1" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/header-params/#__tabbed_2_1</a></p>
<p>However, the Swagger documentation generated displays it as a query param.</p>
<p><a href="https://i.sstatic.net/ndRDC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ndRDC.png" alt="enter image description here" /></a></p>
<p>Any ideas of what is wrong in my setup?</p>
<pre><code>from typing import Annotated
from fastapi import FastAPI, Header
app = FastAPI()
@app.get("/items/")
async def read_items(user_agent: Annotated[str | None, Header()] = None):
return {"User-Agent": user_agent}
</code></pre>
<p>I am running it with Python 3.10.</p>
|
<python><swagger><fastapi>
|
2023-03-20 14:59:47
| 1
| 22,768
|
poiuytrez
|
75,791,765
| 18,363,860
|
how to download videos that require age verification with pytube?
|
<p>I download and clip some YouTube videos with pytube, but some videos do not download and instead ask for age verification. How can I solve this? Thanks for your advice.</p>
|
<python><dataset><pytube>
|
2023-03-20 14:47:32
| 7
| 353
|
byhite
|
75,791,727
| 2,697,895
|
How can I escape from a GPIO.wait_for_edge?
|
<p>I have this code:</p>
<pre><code>while True:
try:
Stat1 = GPIO.input(WPIN)
time.sleep(WARNST)
Stat2 = GPIO.input(WPIN)
if Stat1 == Stat2: SendWarnStat(Stat2)
Event = None
while Event != WPIN: Event = GPIO.wait_for_edge(WPIN, GPIO.BOTH)
except:
break
</code></pre>
<p>Is there a way to make my script exit when I press Ctrl+C or the system asks it to terminate? It seems that everything is blocked by the <code>wait_for_edge</code> function... I want to send some messages when the state of the pin changes, and I want to do this without bothering the CPU by constantly checking that pin, so I have to use the wait function... Can I move the wait function into a thread and wait for multiple objects, one of them being a flag that is switched from the main thread? I am very new to Python...</p>
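<p>Recent RPi.GPIO versions accept a <code>timeout</code> keyword (in milliseconds), e.g. <code>GPIO.wait_for_edge(WPIN, GPIO.BOTH, timeout=1000)</code>, which returns <code>None</code> when no edge arrived, so the loop wakes periodically and can notice Ctrl+C or an exit flag. The same wake-up pattern, sketched with only the standard library (no GPIO hardware assumed):</p>

```python
import threading

stop = threading.Event()

def wait_loop():
    # Stand-in for a blocking GPIO.wait_for_edge: wake up at most once
    # per second so the thread can notice the stop flag and exit cleanly.
    while not stop.is_set():
        if stop.wait(timeout=1.0):  # returns True as soon as stop.set() is called
            break

t = threading.Thread(target=wait_loop)
t.start()
stop.set()         # e.g. done from a KeyboardInterrupt handler in the main thread
t.join(timeout=5)
```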
|
<python>
|
2023-03-20 14:43:46
| 2
| 3,182
|
Marus Gradinaru
|
75,791,698
| 774,133
|
Cannot cast array data from dtype('O') in np.bincount
|
<p>Unfortunately I cannot share the data I am now using, so this question will not contain an MWE.</p>
<p>I have this code:</p>
<pre class="lang-py prettyprint-override"><code>def baseline(labels):
# dummy classifier returning the most common label in labels
print(labels.shape)
print(type(labels))
print(type(labels[0]))
print(type(labels[2]))
print(labels)
counts = np.bincount(labels)
value = np.argmax(counts)
</code></pre>
<p>This code runs fine with most input files containing the <code>labels</code>. However, on a subset of files, I get the error:</p>
<p><strong>Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'</strong></p>
<p>that I cannot understand. Output is:</p>
<pre class="lang-py prettyprint-override"><code>(891,)
<class 'numpy.ndarray'>
<class 'int'>
<class 'int'>
[0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 0 1 1 1 1 0 0 1 0 1 0 0 0 1 1 0 1 0 0
0 1 1 0 1 0 0 0 1 0 1 0 1 1 1 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 1 1 1 0 0 0 1
0 0 0 0 1 0 1 1 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 1 0 1 1
1 1 1 1 0 0 0 1 0 1 1 0 0 0 1 1 1 0 1 0 0 0 0 1 1 1 1 1 1 0 0 1 1 1 0 1 1
0 0 0 0 0 1 0 1 1 0 0 1 1 1 1 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 1 0 0
0 0 0 1 1 1 0 1 1 0 0 1 0 1 0 0 1 0 1 0 0 1 1 0 1 0 1 0 0 1 0 1 0 0 0 0 0
0 1 1 1 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 1 1 0 1 1
1 0 0 1 0 1 0 0 0 1 0 0 1 1 1 0 1 0 0 1 0 1 1 0 1 0 0 1 0 1 0 0 1 0 0 1 0
1 0 0 1 0 1 0 0 0 0 0 1 1 0 0 1 1 1 1 0 0 0 1 1 0 0 1 1 0 1 0 0 0 0 1 0 0
1 0 1 0 1 1 1 1 0 1 0 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 1 1 0 1 0 1 0 1 1 0 1
0 1 1 1 1 1 1 0 0 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 0 0 1 1 0 1 1 1 1 0 0 0 0
1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 0 1 0 1 1 0 1 1
0 0 1 1 0 0 0 0 1 1 1 0 1 0 0 1 0 1 1 1 0 1 0 0 1 1 1 0 1 1 0 1 0 0 0 0 1
1 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 1 1 0 1 1 0
1 0 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 0 1 0 1 0 1 0 0 0 1 1 0
1 1 1 1 1 1 0 1 0 0 0 0 0 0 0 0 1 0 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 0 1 0 0
0 0 1 1 1 0 1 1 1 1 0 0 0 1 0 0 1 0 0 1 0 1 1 0 1 0 0 1 0 0 0 0 0 1 1 1 0
1 0 0 1 0 0 1 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 1 0 0 1 1 0 1 0 0 0
1 1 0 0 0 1 0 0 0 1 0 1 0 1 1 0 1 0 1 1 1 1 0 1 1 1 0 0 1 0 1 0 1 0 1 0 0
0 0 0 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 1 0 1 1 0 1 0 0 1 1 1 1 1 1 1 1 0 0 0
0 0 1 1 0 1 0 1 0 1 0 0 0 0 1 1 0 0 1 1 1 1 0 0 1 1 0 1 1 0 1 0 0 1 0 0 1
0 0 1 0 1 1 0 1 1 1 0 1 1 1 0 1 0 0 1 1 0 1 0 1 1 0 0 0 1 1 0 1 0 1 1 1 0
0 0 0 1 1 1 0 1 0 1 0 1 1 0 0 1 1 1 0 1 1 0 1 0 1 0 0 1 1 0 0 1 1 1 1 0 1
1 0 0 1 0 1 0 1 0 1 1 1 0 0 0 1 1 1 1 1 0 0 0 0 1 0 1 1 1 1 1 0 0 0 0 1 0
0 0 1]
Traceback (most recent call last):
File "07_training_test.py", line 577, in <module>
fire.Fire(main)
File "/home/user/miniconda3/envs/proj/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/user/miniconda3/envs/proj/lib/python3.8/site-packages/fire/core.py", line 466, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/user/miniconda3/envs/proj/lib/python3.8/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "07_training_test.py", line 554, in main
res = process_file(fn, parameters, config)
File "07_training_test.py", line 434, in process_file
value_train, train_acc = utils.baseline(full_labels.loc[train_i].to_numpy())
File "/home/user/workspace/proj/src/pipeline_paper/utils.py", line 186, in baseline
counts = np.bincount(labels)
File "<__array_function__ internals>", line 5, in bincount
TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
</code></pre>
<p>There are other questions on this error, but in different contexts so I was not able to solve the issue following the answers.</p>
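<p>One common fix, assuming the labels really are plain integers stored in an object-dtype array (which the <code>dtype('O')</code> in the traceback suggests), is an explicit cast before counting; a sketch that reproduces and then avoids the error:</p>

```python
import numpy as np

# An object-dtype array of ints: np.bincount(labels) would raise
# "Cannot cast array data from dtype('O') to dtype('int64')..."
labels = np.array([0, 0, 1, 0, 1], dtype=object)

counts = np.bincount(labels.astype(np.int64))  # explicit cast satisfies the 'safe' rule
value = np.argmax(counts)
print(counts, value)
```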
|
<python><numpy>
|
2023-03-20 14:41:20
| 1
| 3,234
|
Antonio Sesto
|
75,791,594
| 764,365
|
plot failing to update for multiple Pandas hist calls on mac when run one at a time
|
<p>Here's some sample code. Surprisingly this bug seems to require running the last line separately from the rest.</p>
<pre><code>import pandas as pd
import numpy as np
n = np.nan
a = [1,2,3,4,5,1,2,3,4,5.0,n,n,n,n,n,n]
b = [1,1,1,2,2,2,3,3,3,4.0,6,7,8,n,n,n]
d = {'a':a,'b':b}
df = pd.DataFrame(d)
df.b.hist()
#Wait for the plot to come up, then run this line
df.a.hist()
</code></pre>
<p>On Windows I'm seeing this, as expected:</p>
<p><a href="https://i.sstatic.net/CeBXd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CeBXd.png" alt="Hist with correct behavior" /></a></p>
<p>On my Mac I'm seeing this:</p>
<p><a href="https://i.sstatic.net/6U2k3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6U2k3.png" alt="Hist with 2nd dataset missing" /></a></p>
<p>If I run all the code at once on my mac, I get the correct behavior.</p>
<p>I currently have this warning message on my mac when I first go to plot:
<code>qt.qpa.drawing: Layer-backing can not be explicitly controlled on 10.14 when built against the 10.14 SDK</code></p>
<p>I'm using Python 3.9.12, IPython 7.33.0, Matplotlib 3.5.2, Pandas 1.5.3, Mac OS 12.6.3, Spyder 5.3.0, Automatic graphics backend, Qt 5.12.9 | PyQt5 5.12.3 | Darwin 21.6.0</p>
<p>Changed backend to Tkinter and warning is now gone but plot error remains.</p>
|
<python><pandas>
|
2023-03-20 14:32:09
| 1
| 3,304
|
Jimbo
|
75,791,352
| 5,687,779
|
How to type annotate a parent property/parameter in python without causing a circular dependency?
|
<p>Assuming I have two <code>.py</code> files</p>
<pre><code>==== car.py ====
from .wheel import Wheel
class Car():
def __init__(self) -> None:
self._wheel = Wheel(self, "large")
==== wheel.py ====
from .car import Car
class Wheel():
def __init__(self, parent: Car, desc: str) -> None:
self._car = parent
self._desc = desc
</code></pre>
<p>wheel.py has a circular dependency problem because it imports car just for the typing, and car imports wheel for the actual instantiating. What is they way to type annotate <code>parent</code> property in Wheel class?</p>
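<p>The usual pattern is to guard the import with <code>typing.TYPE_CHECKING</code> (so it runs only for static type checkers) together with postponed annotations; a sketch of what <code>wheel.py</code> could look like (module names taken from the question):</p>

```python
from __future__ import annotations  # annotations are stored as strings, not evaluated at runtime

from typing import TYPE_CHECKING

if TYPE_CHECKING:           # True only for type checkers, always False at runtime
    from .car import Car    # so this import can no longer cause a circular dependency

class Wheel:
    def __init__(self, parent: Car, desc: str) -> None:
        self._car = parent
        self._desc = desc
```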
|
<python><python-3.x><python-typing>
|
2023-03-20 14:12:40
| 0
| 573
|
Shai Ben-Dor
|
75,791,316
| 2,955,541
|
Find Indices Where NumPy Int Array Value Matches Its Index Position
|
<p>I have a long array of integers and I'd like to find all of the array indices where the array value matches the index position. Here is the solution that I came up with:</p>
<pre><code>import numpy as np
def find_index_value_match(a):
return a == np.arange(len(a))
a = np.array([0,1,4,4,4,0,7,9,2,3])
find_index_value_match(a)
# array([ True, True, False, False, True, False, False, False, False, False])
</code></pre>
<p>For larger arrays (n > 100_000_000) where this function might be called repeatedly, I was wondering if there's a faster way that wouldn't require creating a new array via <code>np.arange</code>.</p>
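<p>In case the actual index positions (rather than a boolean mask) are wanted, <code>np.flatnonzero</code> gives them directly; a small sketch using the array from above:</p>

```python
import numpy as np

a = np.array([0, 1, 4, 4, 4, 0, 7, 9, 2, 3])
idx = np.flatnonzero(a == np.arange(len(a)))  # positions where value == index
print(idx)
```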
|
<python><numpy>
|
2023-03-20 14:09:19
| 0
| 6,989
|
slaw
|
75,791,277
| 12,871,587
|
How to deal with slightly different columns when scanning multiple csv files?
|
<p>I have a folder containing thousands of csv files which I'd like to scan with PL lazy frame.</p>
<p>Scanning in fact works just fine, but when I try to fetch or collect the df, I get "ShapeError: unable to append to a dataframe of width 63 with a dataframe of width 67".</p>
<p>This means that there are some csv files containing more columns than the others. I checked that there are 4 unique sets of column names. Some csv files contains additional information compared to others.</p>
<p>The ideal outcome would be a dataframe containing all the possible columns; for files that don't contain these specific "additional" columns, the missing values would be left blank.</p>
<p>Current code:</p>
<pre><code>df = pl.scan_csv(r"path\*.csv", sep=";", infer_schema_length=0)
df.collect(streaming=True).write_parquet("new_file.parquet")
</code></pre>
<p>What would be a convenient way to deal with this?</p>
|
<python><dataframe><csv><python-polars>
|
2023-03-20 14:06:40
| 1
| 713
|
miroslaavi
|
75,791,141
| 1,422,096
|
Numpy range as a constant
|
<p>Let's say we have</p>
<pre><code>import numpy as np
z = np.array([1, 2, 3, 4, 5, 6])
</code></pre>
<p>In some cases, I'd like to define a "numpy range" as a global constant, i.e. instead of doing</p>
<pre><code>print(z[2:4])
</code></pre>
<p>with hard-coded values 2 and 4 everywhere in my code, I'd prefer (pseudo-code):</p>
<pre><code>MY_CONSTANT_RANGE = 2:4 # defined once
print(z[MY_CONSTANT_RANGE])
</code></pre>
<p>Is there a way to do this? With a numpy range object maybe?</p>
<p>PS: of course we could do</p>
<pre><code>RANGE_MIN, RANGE_MAX = 2, 4
z[RANGE_MIN:RANGE_MAX]
</code></pre>
<p>but I am curious whether there is a way to define a single range constant.</p>
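<p>Python's built-in <code>slice</code> object does exactly this, and it works for both plain lists and NumPy arrays; a minimal sketch:</p>

```python
import numpy as np

MY_CONSTANT_RANGE = slice(2, 4)  # defined once, reusable anywhere z[2:4] would be written

z = np.array([1, 2, 3, 4, 5, 6])
print(z[MY_CONSTANT_RANGE])      # same as z[2:4]
```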
|
<python><numpy>
|
2023-03-20 13:54:54
| 2
| 47,388
|
Basj
|
75,791,034
| 21,305,238
|
Use another class' methods without decorators and inheritance
|
<p>I have a class which has several methods of its own, not shown here for simplicity:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
def __init__(self, arg: str):
self.bar = arg
</code></pre>
<p>Let's say, aside from its own methods, I want <code>Foo</code>'s instances to delegate <code>str</code>'s methods to their <code>bar</code> attribute (assured to be a string). This is possible with the <code>__getattr__</code> dunder method:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
    def __getattr__(self, item):
        return getattr(self.bar, item)
</code></pre>
<p>The call's result should be <code>.bar</code>'s new value. However, since Python strings are immutable, a string resulted from a method call (say, <code>str.strip()</code>) will need to be re-assigned, which doesn't look very nice. Also, an <code>if</code> is needed in case that call doesn't return a string:</p>
<pre class="lang-py prettyprint-override"><code>result = instance_of_Foo.strip()
if isinstance(result, str):
instance_of_Foo.bar = result
else:
...
</code></pre>
<p>I solved this problem with a decorator:</p>
<pre class="lang-py prettyprint-override"><code>def decorator(function, *, self):
def wrapper(*args, **kwargs):
result = function(*args, **kwargs)
if isinstance(result, str):
self.bar = result
else:
return result
return wrapper
class Foo:
def __init__(self, arg: str):
self.bar = arg
def __getattr__(self, item):
method = decorator(getattr(self.bar, item), self = self)
return method
foo = Foo(' foo ')
print(foo.bar) # ' foo '
foo.strip()
print(foo.bar) # 'foo'
</code></pre>
<p>...but there surely is a more "Pythonic" way, preferably using dunder methods instead of a decorator, to intercept the call, isn't there? Note that my class cannot substitute a string (Liskov principle violation), so inheritance is out of the question.</p>
|
<python><oop><magic-methods><getattr>
|
2023-03-20 13:44:20
| 2
| 12,143
|
InSync
|
75,791,023
| 1,496,362
|
How to load TimeSeriesData from non-continuous csv files
|
<p>I need to load all csv files in a folder to train a time series model (PyTorch Lightning).</p>
<p>The issue is that while the rows within a file are continuous (t, t+1, etc.), there is a break between files.</p>
<p>How do I correctly deal with this? Do I pad them with seq_len?</p>
<p>This is how I currently load one file. Since this is PyTorch Lightning, it's hard to give a standalone working example, but I believe the trick should probably be somewhere in setup or TimeseriesDataset.</p>
<p>Perhaps in setup, I should just load an X1 from csv1 and X2 from csv2, and concatenate them together with padding in between? What padding should that be? zeros?</p>
<p>Or should I do a check each time TimeseriesDataset is used to see if the timestamps in X are continuous? That seems inefficient.</p>
<pre><code>def setup(self, stage=None):
path = './file.csv'
df = pd.read_csv(
path,
sep=',',
infer_datetime_format=True,
low_memory=False,
na_values=['nan','?'],
index_col='Time'
)
X = df['cols'] # here I select the columns I want, not the issue at hand
y = df['label'] # here I select the label, not the issue at hand
X_train, X_val, y_train, y_val = train_test_split(
X, y, test_size=0.25, shuffle=False
)
def train_dataloader(self):
train_dataset = TimeseriesDataset(self.X_train,
self.y_train,
seq_len=self.seq_len)
train_loader = DataLoader(train_dataset,
batch_size = self.batch_size,
shuffle = False,
num_workers = self.num_workers)
return train_loader
class TimeseriesDataset(Dataset):
'''
Custom Dataset subclass.
Serves as input to DataLoader to transform X
into sequence data using rolling window.
DataLoader using this dataset will output batches
of `(batch_size, seq_len, n_features)` shape.
Suitable as an input to RNNs.
'''
def __init__(self, X: np.ndarray, y: np.ndarray, seq_len: int = 1):
self.X = torch.tensor(X).float()
self.y = torch.tensor(y).float()
self.seq_len = seq_len
def __len__(self):
return self.X.__len__() - (self.seq_len-1)
def __getitem__(self, index):
return (self.X[index:index+self.seq_len], self.y[index+self.seq_len-1])
</code></pre>
<p>The data consists of csv files with a timestamp and features, something like the following. Different files may have the same timestamps, but they are both used to train the same model.</p>
<pre><code>2020-01-01 11:00:00,130.85,130.89,129.94,130.2,4968.55433,0.9908333333333346,-0.004967520061138764,0,0,0,0,False,2.0,128.39,2.0,128.39
2020-01-01 12:00:00,130.21,130.74,130.15,130.2,3397.90747,0.9600000000000014,0.0,0,0,0,0,False,2.0,128.39,2.0,128.39
2020-01-01 13:00:00,130.2,130.47,130.11,130.3,4243.6064,0.9171428571428574,0.0007680491551460555,0,0,0,0,False,2.0,128.39,2.0,128.39
2020-01-01 14:00:00,130.31,130.75,130.26,130.44,3668.90166,0.8886666666666675,0.0010744435917113826,0,0,0,0,False,2.0,128.39,2.0,128.39
2020-01-01 15:00:00,130.47,130.71,130.14,130.24,4147.17413,0.8674222222222244,-0.0015332720024531232,0,0,0,0,False,2.0,128.39,2.0,128.39
</code></pre>
<p>So I might have 2 files like this, it could be stock ticker data and each file belongs to another ticker.</p>
<p>TimeSeriesDataloader will create tensors of seq_len x num_features, e.g. 10 x 9 as X, and the last value in the label column at time 10 as y. Seq_len is the lookback period used when training the RNN.</p>
<p>If I were to just concatenate the 2 csv files, there would be an issue, because the time stamps are not compatible, i.e. there should be no X sampled that contains data from both csv1 and csv2. Each sampled X should be either from csv1 or from csv2.</p>
<p>Hope that clarifies.</p>
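<p>One way to guarantee that no sampled window crosses a file boundary is to build one dataset per csv file and concatenate them (with PyTorch this would be <code>torch.utils.data.ConcatDataset</code> over one <code>TimeseriesDataset</code> per file); no padding is needed. The per-file windowing idea, sketched framework-free with hypothetical toy rows:</p>

```python
def windows_per_file(files, seq_len):
    """Rolling windows of length seq_len that never cross a file boundary."""
    out = []
    for rows in files:                            # one entry per csv file
        for i in range(len(rows) - seq_len + 1):
            out.append(rows[i:i + seq_len])
    return out

# Two "files" of 4 and 3 rows; with seq_len=2 this yields 3 + 2 = 5 windows,
# and none of them mixes rows from different files.
files = [[0, 1, 2, 3], [10, 11, 12]]
print(windows_per_file(files, seq_len=2))
```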
|
<python><pytorch><pytorch-lightning><dataloader><pytorch-dataloader>
|
2023-03-20 13:43:12
| 0
| 5,417
|
dorien
|
75,791,020
| 12,257,924
|
Why doesn't PyTorch Lightning module save logged val loss? ModelCheckpoint error
|
<p>I'm running an LSTM-based model training on Kaggle. I use Pytorch Lightning and wandb logger for that.</p>
<p>That's my model's class:</p>
<pre><code>class Model(pl.LightningModule):
def __init__(
self,
input_size: int,
hidden_size: int,
bidirectional: bool = False,
lstm_layers: int = 1,
lstm_dropout: float = 0.4,
fc_dropout: float = 0.4,
lr: float = 0.01,
lr_scheduler_patience: int = 2,
):
super().__init__()
self.lr = lr
self.save_hyperparameters()
# LSTM
self.encoder_lstm = nn.LSTM(
input_size=input_size,
hidden_size=hidden_size,
num_layers=lstm_layers,
bidirectional=bidirectional,
dropout=lstm_dropout if lstm_layers > 1 else 0,
batch_first=True,
)
# Fully-connected
num_directions = 2 if bidirectional else 1
self.fc = nn.Sequential(
nn.Linear(
hidden_size * num_directions, hidden_size * num_directions * 2
),
nn.ReLU(),
nn.Dropout(fc_dropout),
nn.Linear(hidden_size * num_directions * 2, input_size),
)
self.loss_function = nn.MSELoss()
def configure_optimizers(self):
optimizer = torch.optim.AdamW(self.parameters(), lr=self.lr)
return {
"optimizer": optimizer,
"lr_scheduler": {
"scheduler": torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, patience=self.hparams.lr_scheduler_patience
),
"monitor": "val_loss",
},
}
def forward(self, x, prev_state):
...
def training_step(self, batch, batch_idx):
loss, _ = self._step(batch)
self.log("train_loss", loss)
return loss
def validation_step(self, batch, batch_idx):
loss, embeddings = self._step(batch)
self.log("val_loss", loss)
return {
'val_loss': loss,
'preds': embeddings # this is consumed by my custom callback
}
def test_step(self, batch, batch_idx):
loss, _ = self._step(batch)
self.log("test_loss", loss)
</code></pre>
<p>And that's how I use it:</p>
<pre><code>model = Model(
bidirectional=False,
lstm_layers=1,
lstm_dropout=0.4,
fc_dropout=0.4,
lr=0.01,
lr_scheduler_patience=2
)
...
checkpoint_callback = ModelCheckpoint(
monitor="val_loss",
every_n_train_steps=100,
verbose=True
)
trainer = pl.Trainer(
accelerator='gpu',
precision=16,
max_epochs=100,
callbacks=[early_stopping, checkpoint_callback, lr_monitor, custom_callback],
log_every_n_steps=50,
logger=wandb_logger,
auto_lr_find=True,
)
trainer.tune(model, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader)
trainer.fit(model, train_dataloader, val_dataloader)
</code></pre>
<p>When I don't run <code>trainer.tune(model, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader)</code> <code>trainer.fit</code> works perfectly but when I run 'trainer.tune' I get such ModelCheckpoint error:</p>
<pre><code>MisconfigurationException: `ModelCheckpoint(monitor='val_loss')` could not find the monitored key in the returned metrics: ['train_loss', 'epoch', 'step']. HINT: Did you call `log('val_loss', value)` in the `LightningModule`?
</code></pre>
<p>So even though I log <code>val_loss</code> it doesn't get saved. On the Trainer object I set <code>log_every_n_steps=50</code> and on the ModelCheckpoint I set <code>every_n_train_steps=100</code>, so it seems that it should have <code>val_loss</code> logged by the time ModelCheckpoint runs.</p>
<p>I printed val loss in <code>validation_step</code> and it gets computed before ModelCheckpoint is run. I also defined an <code>on_train_batch_end</code> function in my custom callback to see saved trainer metrics. It turns out that val loss is in fact missing.</p>
|
<python><machine-learning><deep-learning><pytorch><pytorch-lightning>
|
2023-03-20 13:43:02
| 2
| 639
|
Karol
|
75,790,860
| 651,871
|
How to return the average value using fastapi & pydantic
|
<p>I am new to FastAPI and Python, as I am a mobile developer.</p>
<p>So far I have managed to get this response from my API (please look at <code>averageLevel</code>, which is currently an array):</p>
<pre><code>[
{
"user_id": 139,
"event_date": "2023-03-20T12:18:17",
"public": 1,
"waitlist": 1,
"maxParticipant": 9,
"event_type": "1 vs 1",
"event_field": "Acrylic",
"event_id": 173,
"created": "2023-03-20T13:18:05",
"organizer": {
"user_id": 139,
"email": "email@email.com",
"name": "Daniele",
"lastName": "Proietti",
"city": "Rome",
"state": "Lazio",
"zipCode": " 32323",
"country": "Italy",
"averageLevel": [
{
"level": 70
}
]
}
}
]
</code></pre>
<p>The table that holds the player's level values is the following:</p>
<p><a href="https://i.sstatic.net/tlgfe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tlgfe.png" alt="enter image description here" /></a></p>
<p>my models.py:</p>
<pre><code>class Event(Base):
__tablename__ = "event"
event_id = Column(_sql.Integer, primary_key=True, index=True, nullable=False)
user_id = Column(_sql.Integer, ForeignKey("users.user_id", ondelete="CASCADE"), primary_key=False, index=True, nullable=False)
sport_id = Column(_sql.Integer, ForeignKey("sportSupported.sport_id"), primary_key=False, index=True, nullable=False)
place_id = Column(_sql.Integer, ForeignKey("places.place_id"), primary_key=False, index=True, nullable=False)
created = Column(_sql.DateTime(), default=datetime.now(), nullable=True)
event_date = Column(_sql.DateTime(), nullable=False)
public = Column(_sql.Integer, default = 0)
waitlist = Column(_sql.Integer, default = 0)
event_type = Column(String(100), nullable=False)
event_field = Column(String(100), nullable=False)
description = Column(String(255), nullable=True)
maxParticipant = Column(_sql.Integer, nullable=False)
player_level = Column(_sql.Float, default= 0.0, nullable=False)
organizer = relationship("User")
participants = relationship("User", secondary=association_table)
venue = relationship("Place", back_populates="event")
class User(Base):
__tablename__ = "users"
user_id = Column(Integer, primary_key=True, index=True, nullable=False)
email = Column(String(100), nullable=False)
password = Column(String(255), nullable=False)
salt = Column(String(255),nullable=False)
name = Column(String(50),nullable=False)
lastName = Column(String(100), nullable=False)
dob = Column(DateTime(), nullable=True)
city = Column(String(50), nullable=True)
state = Column(String(40),nullable=True)
zipCode = Column(String(10), nullable=True)
country = Column(String(90), nullable=True)
telephoneNumber = Column(String(60))
joined = Column(DateTime(), default=datetime.now(), nullable=False)
facebook_id = Column(Integer)
verified = Column(Integer, default = 0, nullable=False)
testAccount = Column(Integer, default = 0, nullable=False)
averageLevel = relationship("PlayersLevel")
def __repr__(self):
return f"{self.user_id}"
class PlayersLevel(Base):
__tablename__ = "playersLevel"
prog_id = Column(Integer, primary_key=True, index=True, nullable=False)
user_id = Column(Integer,ForeignKey("users.user_id", ondelete="CASCADE"),primary_key=False, index=True, nullable=False)
event_id = Column(Integer, nullable=False)
date = Column(DateTime(), default=datetime.now(), nullable=False)
level = Column(Integer)
sport_id = Column(Integer)
def __repr__(self):
return f"{self.progr_id}"
</code></pre>
<p>and schema.py:</p>
<pre><code>class User(BaseClass):
user_id: int
email: str
name: str
lastName: str
city: str
state: str
zipCode: str
country: str
averageLevel: list[PlayersLevel] = []
class PlayersLevel(BaseClass):
proper_id: int
user_id: int
event_id: int
date: datetime
level: int
sport_id: int
class EventRead(Event):
event_id: int
created: datetime
organizer: User
participants: list[User] = []
venue: Place
</code></pre>
<p>I query the database like this (all events):</p>
<pre><code>def get_events(db: Session):
return db.query(models.Event).all()
</code></pre>
<p>However, what I would like to achieve is the average of all level rows for a specific <code>user_id</code> (not an array of levels as it is now): something like this:</p>
<pre><code>[
{
"user_id": 139,
"event_date": "2023-03-20T12:18:17",
"public": 1,
"waitlist": 1,
"maxParticipant": 9,
"event_type": "1 vs 1",
"event_field": "Acrylic",
"event_id": 173,
"created": "2023-03-20T13:18:05",
"organizer": {
"user_id": 139,
"email": "mattiamauceri@gmail.com",
"name": "Daniele",
"lastName": "Proietti",
"city": "Rome",
"state": "Lazio",
"zipCode": " 32323",
"country": "Italy",
"averageLevel": // not an array here
{
"level": 74 // if there are 3 rows for the user (70, 72, 80) I would like to get the average IE 74
}
}
}
]
</code></pre>
<p>how can I achieve this?</p>
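<p>One simple approach (a sketch; with SQLAlchemy the aggregation could instead be pushed into the query with <code>func.avg</code>) is to compute the mean over the related rows before serializing, e.g. with a small helper:</p>

```python
from statistics import mean
from types import SimpleNamespace

def average_level(levels):
    # levels: the related PlayersLevel rows (anything with a .level attribute)
    if not levels:
        return None
    return {"level": round(mean(l.level for l in levels))}

# Hypothetical rows with levels 70, 72 and 80 -> average 74, as in the question
rows = [SimpleNamespace(level=v) for v in (70, 72, 80)]
print(average_level(rows))
```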
|
<python><sqlalchemy><fastapi><pydantic>
|
2023-03-20 13:27:33
| 1
| 6,314
|
Mat
|
75,790,838
| 5,896,319
|
How to add a new field to the default user model in Django?
|
<p>I want to add a new field to the default Django user. I created a model inheriting from AbstractUser and added it to settings.py, but I am getting this error:</p>
<blockquote>
<p>ValueError: The field admin.LogEntry.user was declared with a lazy
reference to 'fdaconfig.customuser', but app 'fdaconfig' doesn't
provide model 'customuser'. The field fdaconfig.Dataset.user was
declared with a lazy reference to 'fdaconfig.customuser', but app
'fdaconfig' doesn't provide model 'customuser'.</p>
</blockquote>
<p>models.py</p>
<pre><code>class CustomUser(AbstractUser):
user_info = models.CharField(max_length=255, null=True, blank=True)
</code></pre>
<p>settings.py</p>
<pre><code>AUTH_USER_MODEL = 'fdaconfig.CustomUser'
</code></pre>
<p>Note: I can not delete database</p>
|
<python><django>
|
2023-03-20 13:25:17
| 0
| 680
|
edche
|
75,790,507
| 3,613,158
|
Correct way to use a local Django project in a new computer (VS Code)
|
<p>I created my first local Python/Django project a few months ago using Visual Studio Code. It worked perfectly.</p>
<p>Now I'm trying to use it on a new computer. I've tried just saving the VS workspace and then loading it on the new computer, but it gives me the following error:</p>
<pre><code>ModuleNotFoundError: No module named 'gad'
</code></pre>
<p>I think it's coming from the line <code>os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'gad.settings')</code> in manage.py</p>
<p>I don't know if I should create a Django project from scratch and then copy my "old" files into that project, if I just have to create the "gad.settings" file or what.</p>
<p>I don't have the old computer, but I think I have all the files from my old Django project (but there is no gad.settings file in it, so maybe I had to copy more files outside that folder).</p>
<p>Thanks!</p>
|
<python><django><visual-studio-code>
|
2023-03-20 12:53:13
| 1
| 691
|
migueltic
|
75,790,452
| 14,640,406
|
Mux KLV STANAG data into MPEG-TS stream in Python
|
<p>I have some MPEG-TS video streams and I need to mux KLV metadata into them. The KLV data may be in a .pcap format or any other binary format. Any ideas about how I can do this using some Python library?</p>
|
<python><mpeg2-ts><klvdata><stanag>
|
2023-03-20 12:47:13
| 0
| 309
|
carraro
|
75,790,418
| 9,879,869
|
Pandas: make new column and return list of values from other column
|
<p>Suppose I have two columns</p>
<pre><code>d = {'a_lower': [1, 2], 'a_upper': [3, 4]}
df = pd.DataFrame(data=d)
a_lower a_upper
0 1 3
1 2 4
</code></pre>
<p>I want to have 3rd column that returns a list from the values of the two columns</p>
<pre><code> a_lower a_upper a
0 1 3 [1, 3]
1 2 4 [2, 4]
</code></pre>
<p>I tried this</p>
<pre><code>df['a'] = [df['a_lower'], df['a_upper']]
</code></pre>
<p>I got different result</p>
<pre><code> a_lower a_upper a
0 1 3 0 1 1 2 Name: a_lower, dtype: int64
1 2 4 0 3 1 4 Name: a_upper, dtype: int64
</code></pre>
<p>How do I do it correctly? I am trying to return an array of dicts oriented as 'records':</p>
<pre><code>[{'a_lower': 1,
'a_upper': 3,
'a': [1, 3]
},
...
]
</code></pre>
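<p>A sketch of one way to build the list column and the 'records' output: <code>df['a']</code> needs to be assigned one list per row (not a list of Series, which is what the attempt above produced).</p>

```python
import pandas as pd

df = pd.DataFrame({'a_lower': [1, 2], 'a_upper': [3, 4]})
df['a'] = df[['a_lower', 'a_upper']].values.tolist()  # one [lower, upper] list per row

records = df.to_dict(orient='records')
print(records)
```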
|
<python><pandas>
|
2023-03-20 12:44:17
| 2
| 1,572
|
Nikko
|
75,790,393
| 3,232,194
|
fastapi, pymongo not querying lookup and returns empty on response
|
<p>I have a simple query: find the users in the user collection referenced from the group collection.
It is returning empty.</p>
<p>Problem 1:
"user" is empty.</p>
<p>Problem 2: Response is completely empty <code>[]</code> in postman.</p>
<pre><code>query = group.collection.aggregate([
{ '$match': {"usersId": {"$in": [ ObjectId('some id') ] }} },
{
"$lookup":
{
"from": "user",
"localField": "usersId",
"foreignField": "_id",
"as": "user"
}
},
{
'$project': {
'_id': 0,
'createdAt': 1,
'updatedAt': 1,
'user.name' :1
}
}
])
print (json.dumps(list(query), default=str)) # prints the data but user is empty
return JSONResponse(
status_code=200,
content=loads(dumps(list(query))) # returns []
)
</code></pre>
<p>This is what it prints.</p>
<pre><code>[{ "createdAt": 1679081594322, "updatedAt": 1679081594322, "user": []},
{ "createdAt": 1679083049799, "updatedAt": 1679083049799, "user": []} ]
</code></pre>
|
<python><pymongo><fastapi>
|
2023-03-20 12:41:52
| 0
| 1,475
|
hammies
|
75,790,133
| 12,297,666
|
Error in Sklearn MinMaxScaler Normalization
|
<p>Check the code:</p>
<pre><code>import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
x = np.array([[-1, 4, 2], [-0.5, 8, 9], [3, 2, 3]])
y = np.array([1, 2, 3])
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=2)
scaler = MinMaxScaler()
x_train_norm = scaler.fit_transform(x_train.T).T
x_test_norm = scaler.transform(x_test.T).T
</code></pre>
<p>It throws the following error:</p>
<pre><code>x_test_norm = scaler.transform(x_test.T).T
ValueError: X has 1 features, but MinMaxScaler is expecting 2 features as input.
</code></pre>
<p>The error is pretty clear, but I am unsure how to fix it. Any ideas?</p>
<p><strong>EDIT:</strong> I need the normalization to be performed on the transposed data (i.e., on each row independently). That is why I have transposed it.</p>
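<p><code>MinMaxScaler</code> scales per column and remembers how many columns it was fitted on, so fitting on the transpose ties it to the number of <em>training samples</em> (2 here), and <code>transform</code> then fails on a test set of a different size. If each row must be scaled independently, it can be done directly with NumPy; a sketch with the same data:</p>

```python
import numpy as np

x = np.array([[-1.0, 4.0, 2.0], [-0.5, 8.0, 9.0], [3.0, 2.0, 3.0]])

mins = x.min(axis=1, keepdims=True)
maxs = x.max(axis=1, keepdims=True)
x_norm = (x - mins) / (maxs - mins)  # each row independently scaled to [0, 1]
print(x_norm)
```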
|
<python><scikit-learn><normalization>
|
2023-03-20 12:13:36
| 1
| 679
|
Murilo
|
75,789,990
| 10,982,203
|
Packaging Python module with source C/C++ files and compile on install
|
<p>I have a C library which I have ported to Python3. The Python module loads C code from the shared library.</p>
<p>However, instead of building a shared library for each platform, I plan to package the C/C++ code so it can be compiled on the user's end when installing the module.</p>
<p>I am using setuptools and I tried using ext_modules as below. However, instead of packaging the c files, it builds them locally and adds the result to the final package (.whl) file, which defeats the purpose.</p>
<pre><code>ext_modules=[
Extension(
# the qualified name of the extension module to build
'mymodule',
# the files to compile into our module relative to ``setup.py``
['library.cpp'],
),
],
</code></pre>
<p>I also tried using <code>cmdclass</code> to run a post-installation build, but it only works for a local installation and not when installing from PyPI. A discussion is here:</p>
<p><a href="https://stackoverflow.com/questions/19569557/pip-not-picking-up-a-custom-install-cmdclass">Pip not picking up a custom install cmdclass</a></p>
<p><a href="https://chat.stackoverflow.com/rooms/150536/discussion-between-swiftsnamesake-and-collins-a">https://chat.stackoverflow.com/rooms/150536/discussion-between-swiftsnamesake-and-collins-a</a></p>
<p>Thanks</p>
|
<python><c++><python-3.x>
|
2023-03-20 11:58:17
| 0
| 471
|
Ketu
|
75,789,933
| 11,452,928
|
Jax fitting MLP gives different result than Tensorflow
|
<p>I need to build an MLP in Jax, but I get slightly different (and in my opinion inaccurate) results from Jax compared to an MLP created in Tensorflow.</p>
<p>In both cases I created a dataset where y is a linear function of X plus standard Gaussian error; the dataset is the same in both cases.</p>
<p>I initialized the MLP in tensorflow with the same initialization I did in Jax (to be sure to start with the exact same network).</p>
<p>In Tensorflow I fit the network using this:</p>
<pre><code>model.compile(loss=tf.keras.losses.mean_squared_error,optimizer=tf.keras.optimizers.SGD(learning_rate = 0.00001))
model.fit(X, y, batch_size = X.shape[0], epochs = 5000)
</code></pre>
<p>And this is what I get (it seems correct):</p>
<p><a href="https://i.sstatic.net/IyXI9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IyXI9.png" alt="Tensorflow Result" /></a></p>
<p>Now, in Jax i train the network as follows:</p>
<pre><code>loss = lambda params, x, y: jnp.mean((apply_fn(params, x) - y) ** 2)

@jit
def update(params, x, y, learning_rate):
    grad_loss = grad(loss)(params, x, y)
    # SGD update
    return jax.tree_util.tree_map(
        lambda p, g: p - learning_rate * g, params, grad_loss  # for every leaf, i.e. for every param of the MLP
    )

learning_rate = 0.00001
num_epochs = 5000
for _ in range(num_epochs):
    params = update(params, X, y, learning_rate)
</code></pre>
<p>This is what I get as result:</p>
<p><a href="https://i.sstatic.net/Yghuv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yghuv.png" alt="Jax Result" /></a></p>
<p>I notice that if I greatly increase the number of epochs in the Jax implementation it works better (the model predictions get closer and closer to the real values), but how can I get a result from Jax similar to Tensorflow's without increasing the number of epochs?</p>
|
<python><tensorflow><machine-learning><jax><mlp>
|
2023-03-20 11:52:33
| 0
| 753
|
fabianod
|
75,789,732
| 2,636,044
|
Specify keys for pydantic nested model
|
<p>I have a structure similar to this in JSON</p>
<pre><code>{
'name': 'My Company',
'region': {
'us': {
'person': {
'name': 'John'
}
},
'mx': {
'person': {
'name': 'Juan'
}
}
}
}
</code></pre>
<p>I'm trying to model it as pydantic classes</p>
<pre><code>class Person(BaseModel):
    name: str

class Region(BaseModel):
    person: Person
</code></pre>
<p>but I'm having trouble modelling the <code>Company</code> class because I want to limit the keywords that can be used, I tried something like this:</p>
<pre><code>REGIONS = Literal['us', 'mx']

class Company(BaseModel):
    name: str
    region: Dict[REGIONS, Region]
</code></pre>
<p>but that doesn't work, it also introduces issues when I have a function where I need to get those inner values, eg:</p>
<pre><code># This method is part of Company - it doesn't work
def lower_case_person(cls):
    return cls.region.person.name
</code></pre>
<p>How can I model this to constrain the keywords for <code>Region</code> while at the same time maintain it's behaviour as an object rather than a dictionary?</p>
<p>Edit:</p>
<p>In an actual scenario, an example of the JSON will come as:</p>
<pre><code>{
'name': 'My Company',
'region': {
'mx': {
'person': {
'name': 'Juan'
}
}
}
}
</code></pre>
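<p>One possible way to model this (a sketch, not from the original post; assumes pydantic is installed and uses only constructor-level APIs that work in both v1 and v2) is to replace the dict with a model whose fields are the allowed region codes, which also restores attribute access:</p>

```python
# Sketch: constrain region keys by making them optional model fields.
# The class and field names mirror the question's JSON.
from typing import Optional
from pydantic import BaseModel

class Person(BaseModel):
    name: str

class Region(BaseModel):
    person: Person

class Regions(BaseModel):
    us: Optional[Region] = None  # any key other than us/mx is rejected
    mx: Optional[Region] = None

class Company(BaseModel):
    name: str
    region: Regions

company = Company(**{'name': 'My Company',
                     'region': {'mx': {'person': {'name': 'Juan'}}}})
print(company.region.mx.person.name)  # Juan
```

With this shape, <code>company.region.mx</code> is a <code>Region</code> object (or <code>None</code>), so methods can navigate it with plain attribute access instead of dict lookups.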
|
<python><pydantic>
|
2023-03-20 11:29:11
| 1
| 1,339
|
Onilol
|
75,789,651
| 12,769,783
|
Type hint generic function with arguments of passed function
|
<p>I would like to write type hints that allow type checkers to recognize if a wrong argument is passed to a function I call "<code>meta</code>" in the example code below:</p>
<p>I have got the following functions:</p>
<ul>
<li>a function <code>meta</code> that takes a function and arguments for that function as its arguments,</li>
<li>an example function <code>example_function</code> with which I demonstrate configurations that should produce warnings / errors (PEP484):</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Callable

def example_function(f: str, g: int) -> None:
    assert isinstance(f, str)
    assert isinstance(g, int)

In = TypeVar('In')
Out = TypeVar('Out')

def meta(func: Callable[In, Out], *args, **kwargs) -> Out:
    a = func(*args, **kwargs)
    print(f'{func.__name__} executed' )
    return a
</code></pre>
<p>The following calls should be fine:</p>
<pre class="lang-py prettyprint-override"><code># ok
meta(example_function, f='', g=0)
</code></pre>
<p>The following calls should produce warnings:</p>
<pre class="lang-py prettyprint-override"><code># arg for f already provided
try:
meta(example_function, '', f='', g=0)
except:
...
# too many *args
try:
meta(example_function, '', 0, 1)
except:
...
# too many **kwargs
try:
meta(example_function, f='', g=0, k='diff')
except:
...
# invalid kwarg / arg
try:
meta(example_function, 0, g='')
except:
...
</code></pre>
<p>Is what I want to achieve possible with a mixture of arguments and keyword arguments?
If not, is it possible with keyword arguments / positional arguments only?</p>
|
<python><python-typing>
|
2023-03-20 11:21:28
| 1
| 1,596
|
mutableVoid
|
75,789,602
| 14,045,537
|
Pandas nested groupby and sum unique values based on another column
|
<p>I have a pandas dataframe</p>
<pre><code>import pandas as pd
import numpy as np
data = pd.DataFrame({"ID1": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
"ID2": ["k", "k", "k", "k","k", "k", "j", "j", "j"],
"val": [18, 19, 20, 18, 19, 20, 34, 35, 37]
})
data
</code></pre>
<p><em>Output:</em></p>
<pre><code> ID1 ID2 val
0 a k 18
1 a k 19
2 a k 20
3 b k 18
4 b k 19
5 b k 20
6 c j 34
7 c j 35
8 c j 37
</code></pre>
<p>I'm trying to get the average of <code>val</code> by grouping by <code>ID1</code> and finally need the <code>sum</code> grouping by <code>ID2</code></p>
<pre><code>(data
.assign(val_id1_avg = data.groupby("ID1")["val"].transform("mean"))
.groupby("ID2")
.agg(val_avg = ("val_id1_avg", lambda x: np.sum(x.unique())),
volume=("ID1", 'nunique'))
.reset_index())
</code></pre>
<p><em>Output:</em></p>
<pre><code> ID2 val_avg volume
0 j 35.333333 1
1 k 19.000000 2
</code></pre>
<p>How can I drop duplicates based on <code>ID1</code> and sum the <code>val_id1_avg</code>??</p>
<p><strong>Desired Output:</strong></p>
<pre><code>
ID2 val_avg Volume
0 k 38.00 1
1 j 35.33 2
</code></pre>
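<p>One possible sketch: drop duplicates on <code>ID1</code> before the second groupby, so each per-<code>ID1</code> mean is summed exactly once. (Note that with the sample data the <code>k</code> group contains two unique <code>ID1</code> values, so its volume comes out as 2.)</p>

```python
import pandas as pd

data = pd.DataFrame({"ID1": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
                     "ID2": ["k", "k", "k", "k", "k", "k", "j", "j", "j"],
                     "val": [18, 19, 20, 18, 19, 20, 34, 35, 37]})

out = (data
       .assign(val_id1_avg=data.groupby("ID1")["val"].transform("mean"))
       .drop_duplicates("ID1")              # one row per ID1 -> sum each mean once
       .groupby("ID2", as_index=False)
       .agg(val_avg=("val_id1_avg", "sum"),
            volume=("ID1", "nunique")))
print(out)
```

This avoids the <code>lambda x: np.sum(x.unique())</code> trick, which would silently merge two different <code>ID1</code> groups that happen to share the same mean.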
|
<python><pandas><dataframe><group-by>
|
2023-03-20 11:15:06
| 2
| 3,025
|
Ailurophile
|
75,789,548
| 18,013,020
|
CORS error in flask even after using flask-cors
|
<p>I have started learning some <code>back-end</code> development with <code>Flask</code>, but when I make a route that returns JSON data it throws a <code>CORS ERROR</code>. For tunneling I use <code>ngrok</code>.</p>
<p>main.py</p>
<pre><code>from flask import Flask, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

@app.route('/')
def hello():
    return 'Hello'

@app.route('/test')
def about():
    return jsonify({'key_one':'some testing value'})

if __name__ == "__main__":
    app.run()
</code></pre>
<p>and here is javascript and html</p>
<pre><code>console.log("hello");
fetch("https://link.ngrok.io/test")
.then((x) => x.json())
.then((r) => console.log(r))
.catch((e) => console.log(e));
</code></pre>
<p>The HTML is just there to hold the <code>JS</code> file:</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<title></title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link href="css/style.css" rel="stylesheet">
<script defer src='main.js' >
</script>
</head>
<body>
</body>
</html>
</code></pre>
<p>and here an image for the output</p>
<p><a href="https://i.sstatic.net/NHofN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NHofN.png" alt="enter image description here" /></a></p>
|
<javascript><python><flask><cors><flask-cors>
|
2023-03-20 11:09:49
| 1
| 346
|
Mustafa
|
75,789,537
| 4,537,160
|
Python, make attribute accessible to other attributes of a class
|
<p>I have a Python class (MainClass), having 3 attributes:</p>
<ul>
<li>a data_history dict, which is updated at runtime (the program is analyzing a video stream, so data_history gets updated basically at each frame);</li>
<li>2 other objects (data_analyzer_00, data_analyzer_01), each having a use_data_history() method that requires to use data from data_history;</li>
</ul>
<p>and I need to make data_history accessible to data_analyzer_00 and data_analyzer_01.</p>
<p>One way would be to pass MyClass as argument when initializing the data_analyzer_00 and data_analyzer_01 objects, so something like this:</p>
<pre><code>class MainClass:
    def __init__(self):
        self.data_history = {}
        self.data_analyzer_00 = DataAnalyzer(self)
        self.data_analyzer_01 = DataAnalyzer(self)

    def update_data_history(self, new_data_history):
        self.data_history = new_data_history

class DataAnalyzer:
    def __init__(self, my_class_instance):
        self.my_class_instance = my_class_instance

    def use_data_history(self):
        # uses self.my_class_instance.data_history
</code></pre>
<p>So that I can do</p>
<pre><code>main_obj = MainClass()
main_obj.data_history = {"old_key": "old_value"}
print(main_obj.data_analyzer_00.data_history)  # prints old data_history
main_obj.update_data_history({"new_key": "new_value"})
print(main_obj.data_analyzer_00.data_history)  # prints new data_history
</code></pre>
<p>Just, I was wondering if this is the correct way to do it, since this way I create some sort of recursive situation (main_obj has a data_analyzer_00 attribute, which has a my_class_instance attribute, which has a data_analyzer_00 attribute...).
Is there any other way to do this?</p>
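<p>One way to avoid holding a reference to the whole parent object (a sketch, not the only option): share the dict itself and mutate it in place, so every analyzer sees updates without any back-reference:</p>

```python
# Sketch: the analyzers hold the same dict object the parent holds.
# The key point is that update_data_history mutates the dict in place
# (clear + update) instead of rebinding self.data_history to a new dict,
# which would silently break the sharing.
class DataAnalyzer:
    def __init__(self, data_history: dict):
        self.data_history = data_history   # shared reference, not a copy

    def use_data_history(self):
        return dict(self.data_history)

class MainClass:
    def __init__(self):
        self.data_history = {}
        self.data_analyzer_00 = DataAnalyzer(self.data_history)
        self.data_analyzer_01 = DataAnalyzer(self.data_history)

    def update_data_history(self, new_data_history):
        self.data_history.clear()
        self.data_history.update(new_data_history)

main_obj = MainClass()
main_obj.update_data_history({"new_key": "new_value"})
print(main_obj.data_analyzer_00.data_history)  # {'new_key': 'new_value'}
```

This removes the back-reference cycle entirely; the trade-off is the in-place update discipline noted in the comments.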
|
<python><class><attributes>
|
2023-03-20 11:08:14
| 1
| 1,630
|
Carlo
|
75,789,398
| 361,023
|
How to use pyfakefs, pytest to test a function using `multiprocessing.Queue` via a `Manager`?
|
<p>I am trying to test a code that uses <code>multiprocessing.Queue</code>. I tried using the <code>additional_skip_names</code>, calling <code>multiprocessing.freeze_support()</code> and <code>multiprocessing.set_start_method('forkserver')</code>, but nothing seems to work.</p>
<p>Here is a minimal example:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
import multiprocessing
from pyfakefs.fake_filesystem_unittest import Patcher

def f():
    q = multiprocessing.Manager().Queue()
    q.put(1)
    return q.get()

@pytest.fixture
def fs():
    with Patcher(additional_skip_names=['multiprocessing.connection']) as patcher:
        yield patcher.fs

def test_f1(fs):
    assert f() == 1

def test_f2():
    assert f() == 1
</code></pre>
<p>resulting in</p>
<pre class="lang-bash prettyprint-override"><code>$ pytest -q test.py
FF [100%]
=================================================== FAILURES ====================================================
____________________________________________________ test_f1 ____________________________________________________
fs = <pyfakefs.fake_filesystem.FakeFilesystem object at 0x7f12acbeadd0>
def test_f1(fs):
> assert f() == 1
/tmp/test.py:18:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/tmp/test.py:6: in f
q = multiprocessing.Manager().Queue()
/usr/lib/python3.10/multiprocessing/context.py:57: in Manager
m.start()
/usr/lib/python3.10/multiprocessing/managers.py:566: in start
self._address = reader.recv()
/usr/lib/python3.10/multiprocessing/connection.py:250: in recv
buf = self._recv_bytes()
/usr/lib/python3.10/multiprocessing/connection.py:414: in _recv_bytes
buf = self._recv(4)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <multiprocessing.connection.Connection object at 0x7f12acc40670>, size = 4
read = <bound method FakeOsModule.read of <pyfakefs.fake_filesystem.FakeOsModule object at 0x7f12acbeb190>>
def _recv(self, size, read=_read):
buf = io.BytesIO()
handle = self._handle
remaining = size
while remaining > 0:
chunk = read(handle, remaining)
n = len(chunk)
if n == 0:
if remaining == size:
> raise EOFError
E EOFError
/usr/lib/python3.10/multiprocessing/connection.py:383: EOFError
--------------------------------------------- Captured stderr call ----------------------------------------------
Process SyncManager-1:
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/managers.py", line 591, in _run_server
server = cls._Server(registry, address, authkey, serializer)
File "/usr/lib/python3.10/multiprocessing/managers.py", line 156, in __init__
self.listener = Listener(address=address, backlog=16)
File "/usr/lib/python3.10/multiprocessing/connection.py", line 448, in __init__
self._listener = SocketListener(address, family, backlog)
File "/usr/lib/python3.10/multiprocessing/connection.py", line 591, in __init__
self._socket.bind(address)
FileNotFoundError: [Errno 2] No such file or directory
____________________________________________________ test_f2 ____________________________________________________
def test_f2():
> assert f() == 1
test.py:21:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test.py:6: in f
q = multiprocessing.Manager().Queue()
/usr/lib/python3.10/multiprocessing/managers.py:723: in temp
token, exp = self._create(typeid, *args, **kwds)
/usr/lib/python3.10/multiprocessing/managers.py:606: in _create
conn = self._Client(self._address, authkey=self._authkey)
/usr/lib/python3.10/multiprocessing/connection.py:508: in Client
answer_challenge(c, authkey)
/usr/lib/python3.10/multiprocessing/connection.py:752: in answer_challenge
message = connection.recv_bytes(256) # reject large message
/usr/lib/python3.10/multiprocessing/connection.py:216: in recv_bytes
buf = self._recv_bytes(maxlength)
/usr/lib/python3.10/multiprocessing/connection.py:414: in _recv_bytes
buf = self._recv(4)
/usr/lib/python3.10/multiprocessing/connection.py:379: in _recv
chunk = read(handle, remaining)
/home/xxx/.venv/lib/python3.10/site-packages/pyfakefs/fake_filesystem.py:5218: in wrapped
return f(*args, **kwargs)
/home/xxx/.venv/lib/python3.10/site-packages/pyfakefs/fake_filesystem.py:4241: in read
file_handle = self.filesystem.get_open_file(fd)
/home/xxx/.venv/lib/python3.10/site-packages/pyfakefs/fake_filesystem.py:1578: in get_open_file
self.raise_os_error(errno.EBADF, str(file_des))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pyfakefs.fake_filesystem.FakeFilesystem object at 0x7f12acbeadd0>, err_no = 9, filename = '12'
winerror = None
def raise_os_error(
self,
err_no: int,
filename: Optional[AnyString] = None,
winerror: Optional[int] = None,
) -> NoReturn:
"""Raises OSError.
The error message is constructed from the given error code and shall
start with the error string issued in the real system.
Note: this is not true under Windows if winerror is given - in this
case a localized message specific to winerror will be shown in the
real file system.
Args:
err_no: A numeric error code from the C variable errno.
filename: The name of the affected file, if any.
winerror: Windows only - the specific Windows error code.
"""
message = os.strerror(err_no) + " in the fake filesystem"
if winerror is not None and sys.platform == "win32" and self.is_windows_fs:
raise OSError(err_no, message, filename, winerror)
> raise OSError(err_no, message, filename)
E OSError: [Errno 9] Bad file descriptor in the fake filesystem: '12'
/home/xxx/.venv/lib/python3.10/site-packages/pyfakefs/fake_filesystem.py:1149: OSError
============================================ short test summary info ============================================
FAILED test.py::test_f1 - EOFError
FAILED test.py::test_f2 - OSError: [Errno 9] Bad file descriptor in the fake filesystem: '12'
2 failed in 0.34s
Exception ignored in: <function _ConnectionBase.__del__ at 0x7f12acc80ca0>
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/connection.py", line 132, in __del__
self._close()
File "/usr/lib/python3.10/multiprocessing/connection.py", line 361, in _close
_close(self._handle)
File "/home/xxx/.venv/lib/python3.10/site-packages/pyfakefs/fake_filesystem.py", line 5218, in wrapped
return f(*args, **kwargs)
File "/home/xxx/.venv/lib/python3.10/site-packages/pyfakefs/fake_filesystem.py", line 4224, in close
file_handle = self.filesystem.get_open_file(fd)
File "/home/xxx/.venv/lib/python3.10/site-packages/pyfakefs/fake_filesystem.py", line 1577, in get_open_file
return file_list[0]
IndexError: list index out of range
Exception ignored in: <function _ConnectionBase.__del__ at 0x7f12acc80ca0>
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/connection.py", line 132, in __del__
File "/usr/lib/python3.10/multiprocessing/connection.py", line 361, in _close
File "/home/xxx/.venv/lib/python3.10/site-packages/pyfakefs/fake_filesystem.py", line 5218, in wrapped
File "/home/xxx/.venv/lib/python3.10/site-packages/pyfakefs/fake_filesystem.py", line 4224, in close
File "/home/xxx/.venv/lib/python3.10/site-packages/pyfakefs/fake_filesystem.py", line 1578, in get_open_file
File "/home/xxx/.venv/lib/python3.10/site-packages/pyfakefs/fake_filesystem.py", line 1149, in raise_os_error
OSError: [Errno 9] Bad file descriptor in the fake filesystem: '12'
</code></pre>
|
<python><python-3.x><multiprocessing><pytest><pyfakefs>
|
2023-03-20 10:53:49
| 0
| 96,298
|
Jorge E. Cardona
|
75,789,222
| 12,436,050
|
Extracting multiple fields from a JSON in python
|
<p>I have the following JSON object:</p>
<pre class="lang-js prettyprint-override"><code>{
'organisations': {
'total-items': '41477',
'organisation': [
{
'mappings': None,
'active-request': 'false',
'identifiers': {
'identifier': {
'code': 'ORG-100023310',
'code-system': '100000167446',
'code-system-name': ' OMS Organization Identifier'
}
},
'name': 'advanceCOR GmbH',
'operational-attributes': {
'created-on': '2016-10-18T15:38:34.322+02:00',
'modified-on': '2022-11-02T08:23:13.989+01:00'
},
'locations': {
'location': [
{
'location-id': {
'link': {
'href': 'https://v1/locations/LOC-100052061'
},
'id': 'LOC-100052061'
}
},
{
'location-id': {
'link': {
'href': 'https://v1/locations/LOC-100032442'
},
'id ': 'LOC-100032442'
}
},
{
'location-id': {
'link': {
'href': 'https://v1/locations/LOC-100042003'
},
'id': 'LOC-100042003'
}
}
]
},
'organisation-id': {
'link': {
'rel': 'self',
'href': 'https://v1 /organisations/ORG-100023310'
},
'id': 'ORG-100023310'
},
'status': 'ACTIVE'
},
{
'mappings': None,
'active-request': 'false',
'identifiers': {
'identifier': {
'code': 'ORG-100004261',
'code-system': '100000167446',
'code-system-name': 'OMS organization Identifier'
}
},
'name': 'Beacon Pharmaceuticals Limited',
'operational-attributes': {
'created-on': '2016-10-18T14:48:16.293+02:00',
'modified-on': '2022-10-12T08:26:24.645+02:00'
},
'locations': {
'location': [
{
'location-id': {
'link': {
'href': 'https://v1/locations/LOC-100005615'
},
'id': 'LOC-100005615'
}
},
{
'location-id': {
'link': {
'href': 'https://v1/locations/LOC-100000912'
},
'id': 'LOC-100000912'
}
},
{
'location-id': {
'link': {
'href': 'https://v1/locations/LOC-100043831'
},
'id': 'LOC-100043831'
}
}
]
},
'organisation-id': {
'link': {
'rel': 'self',
'href': 'https://v1/organisations/ORG-100004261'
},
'id': 'ORG-100004261'
},
'status': 'ACTIVE'
},
</code></pre>
<p>I would like to fetch the following fields 'code, name, location, status' for all the org_ids (type of d is class 'list', type of d[0] is class 'dict'). I am using the following lines of code, however I could not fetch all the organisation ids and location_ids (I can only get one), as shown in the current output. How can I get the information for all the org_ids (shown in the expected output)?</p>
<p>Code:</p>
<pre><code>with open('organisations.json', encoding='utf-8') as f:
d = json.load(f)
print (d[0]['organisations']['organisation'][2]['identifiers']['identifier']['code']) #code
print (d[0]['organisations']['organisation'][2]['name']) #name
print (d[0]['organisations']['organisation'][2]['locations']) #location
print (d[0]['organisations']['organisation'][2]['status']) #status
</code></pre>
<p>current output:</p>
<pre><code>ORG-100023310
advanceCOR GmbH
{'location': {'location-id': {'link': {'href': 'https://v1/locations/LOC-100052061'}, 'id': ' LOC-100052061'}}}
ACTIVE
</code></pre>
<p>Expected output:</p>
<pre><code>org_id org_name location_id status
ORG-100023310 advanceCOR GmbH LOC-100052061, LOC-100032442, LOC-100042003 ACTIVE
ORG-100004261 Beacon Pharmaceuticals Limited LOC-100005615, LOC-100000912, LOC-100043831 ACTIVE
</code></pre>
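<p>A sketch that loops over every organisation instead of indexing a single one (key names follow the question's JSON; it assumes the location entries use a plain <code>'id'</code> key, and handles the case where a single location is collapsed to a dict rather than a list):</p>

```python
# Sketch: walk the organisation list and flatten each record into a row.
def summarize(doc):
    rows = []
    for org in doc["organisations"]["organisation"]:
        locs = org["locations"]["location"]
        if isinstance(locs, dict):      # a single location may not be wrapped in a list
            locs = [locs]
        loc_ids = ", ".join(loc["location-id"]["id"].strip() for loc in locs)
        rows.append({
            "org_id": org["identifiers"]["identifier"]["code"],
            "org_name": org["name"],
            "location_id": loc_ids,
            "status": org["status"],
        })
    return rows

# tiny demo document with the same shape
doc = {"organisations": {"organisation": [
    {"identifiers": {"identifier": {"code": "ORG-1"}},
     "name": "Acme",
     "locations": {"location": [{"location-id": {"id": "LOC-1"}},
                                {"location-id": {"id": "LOC-2"}}]},
     "status": "ACTIVE"}]}}
rows = summarize(doc)
```

With the real file, <code>summarize(json.load(f))</code> yields one row per organisation, which can be printed or loaded into a DataFrame.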
|
<python><json>
|
2023-03-20 10:34:27
| 4
| 1,495
|
rshar
|
75,789,043
| 7,720,535
|
Vectorize applying function on a Pandas DataFrame
|
<p>I have a Pandas DataFrame with two columns, <code>val</code> and <code>target</code>.</p>
<pre><code>import random
import numpy as np
import pandas as pd
df = pd.DataFrame({'val': np.random.uniform(-1., 1., 1000),
'target': random.choices([True, False], k=1000)})
</code></pre>
<p>Target column is boolean and I want to apply the function <code>score</code> on the dataframe for many different pair of <code>lo_lim</code> and <code>up_lim</code>.</p>
<pre><code>def score(df, lo_lim, up_lim, alpha):
    df_out = df['target'].values[np.where((df['val']>up_lim) | (df['val']<lo_lim))[0]]
    return df_out.sum()-alpha*(len(df_out)-df_out.sum())
</code></pre>
<p>This is the code using for loop over pairs of lo_lim and up_lim.</p>
<pre><code>lo_lims = np.random.uniform(-1., -0.5, 100)
up_lims = np.random.uniform(0.5, 1.0, 100)

res = []
for i in range(100):
    res.append((lo_lims[i], up_lims[i], score(df, lo_lims[i], up_lims[i], 0.5)))
</code></pre>
<p>Now, I need to truly vectorize applying the function on the dataframe and handle all pairs of lo_lim and up_lim at once and make the computation time much shorter.</p>
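<p>A sketch of one possible vectorization using NumPy broadcasting: build a <code>(pairs, rows)</code> boolean mask and reduce along the row axis. Variable names follow the question:</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"val": rng.uniform(-1.0, 1.0, 1000),
                   "target": rng.choice([True, False], 1000)})
lo_lims = rng.uniform(-1.0, -0.5, 100)
up_lims = rng.uniform(0.5, 1.0, 100)
alpha = 0.5

vals = df["val"].to_numpy()
tgt = df["target"].to_numpy()

# (100, 1000) mask: row i marks values outside [lo_lims[i], up_lims[i]]
outside = (vals[None, :] > up_lims[:, None]) | (vals[None, :] < lo_lims[:, None])
hits = (outside & tgt[None, :]).sum(axis=1)   # True targets selected per pair
total = outside.sum(axis=1)                   # all rows selected per pair
scores = hits - alpha * (total - hits)        # one score per (lo, up) pair
```

This computes all 100 scores in a handful of array operations; memory grows as <code>pairs × rows</code> booleans, so for very large inputs the pairs can be processed in chunks.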
|
<python><pandas>
|
2023-03-20 10:17:52
| 2
| 485
|
Esi
|
75,788,847
| 4,502,950
|
Stack and explode columns in pandas
|
<p>I have a dataframe to which I want to apply explode and stack at the same time. Explode the 'Attendees' column and assign the correct values to courses. For example, for Course 1 'intro to' the number of attendees was 24 but for Course 2 'advanced' the number of attendees was 46. In addition to that, I want all the course names in one column.</p>
<pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame({'Session': ['session1', 'session2', 'session3'],
                   'Course 1': ['intro to', 'advanced', 'Cv'],
                   'Course 2': ['Computer skill', np.nan, 'Write cover letter'],
                   'Attendees': ['24 & 46', '23', '30']})
</code></pre>
<p>If I apply the explode function to 'Attendees' I get the result</p>
<pre><code>Course_df = Course_df.assign(Attendees=Course_df['Attendees'].str.split(' & ')).explode('Attendees')
Session Course 1 Course 2 Attendees
0 session1 intro to Computer skill 24
0 session1 intro to Computer skill 46
1 session2 advanced. NaN 23
</code></pre>
<p>and when I apply the stack function</p>
<pre><code>Course_df = (Course_df.set_index(['Session','Attendees']).stack().reset_index().rename({0:'Courses'}, axis = 1))
</code></pre>
<p>This is the result I get</p>
<pre><code> Session level_1 Courses Attendees
0 session1 Course 1 intro to 24
1 session1 Course 2 Computer skill 46
2 session2 Course 1 advanced 23
3 session3 Course 1 Cv 30
</code></pre>
<p>Whereas the result I want is</p>
<pre><code> Session level_1 Courses Attendees
0 session1 Course 1 intro to 24
1 session1 Course 2 Computer skill 46
2 session2 Course 1 advanced 23
3 session3 Course 1 Cv 30
4 session3 Course 2 Write cover letter 30
</code></pre>
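<p>One possible sketch using <code>melt</code>, then aligning the attendee counts with each session's course order and reusing the last count when a session has fewer counts than courses (as the desired output does for session3):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Session': ['session1', 'session2', 'session3'],
                   'Course 1': ['intro to', 'advanced', 'Cv'],
                   'Course 2': ['Computer skill', np.nan, 'Write cover letter'],
                   'Attendees': ['24 & 46', '23', '30']})

# long format: one row per (session, course), dropping missing courses
long = (df.melt(['Session', 'Attendees'], var_name='level_1', value_name='Courses')
          .dropna(subset=['Courses'])
          .sort_values(['Session', 'level_1']))

parts = long['Attendees'].str.split(' & ')       # counts listed per session
pos = long.groupby('Session').cumcount()         # 0 for Course 1, 1 for Course 2, ...
# take the i-th count for the i-th course; fall back to the last count
long['Attendees'] = [p[min(i, len(p) - 1)] for p, i in zip(parts, pos)]
long = long.reset_index(drop=True)
print(long)
```

The fallback rule is an assumption inferred from the desired output (session3's single count 30 applies to both of its courses); if the real data always has one count per course, the <code>min(...)</code> guard is unnecessary.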
|
<python><pandas>
|
2023-03-20 09:57:55
| 2
| 693
|
hyeri
|
75,788,779
| 16,971,617
|
Finding the median of a segment using numpy
|
<p>I would like to find the median of each segment within an image.
The first step is segmentation. I use skimage.segmentation as <a href="https://scikit-image.org/docs/stable/api/skimage.segmentation.html#examples-using-skimage-segmentation-felzenszwalb" rel="nofollow noreferrer">follows</a>.</p>
<p>Yet the returned mask has a size of <code>100*100</code> instead of <code>100*100*3</code>.
I would like to use a numpy function to get the median of each segment, as in <a href="https://www.geeksforgeeks.org/numpy-maskedarray-median-function-python/" rel="nofollow noreferrer">this</a> example.</p>
<p>Is there an easy way to get it to work?</p>
<pre><code>from skimage.segmentation import felzenszwalb, slic, quickshift, watershed
from skimage.measure import label

np.random.seed(seed=777)
img = np.random.randint(low=0, high=255, size=(100, 100, 3))
segments_fz = felzenszwalb(img, scale=100, sigma=0.5, min_size=50)
label_image, num = label(segments_fz, return_num=True)

for i in range(num):
    mask_arr = np.ma.masked_array(img, mask = ??? )
    out_arr1 = np.ma.median(mask_arr, axis=0)
</code></pre>
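<p>A sketch that sidesteps masked arrays entirely: boolean-index the <code>(H, W, 3)</code> image with each label of the <code>(H, W)</code> label map and take the per-channel median (plain NumPy; the label map is assumed to come from a segmentation such as the one above):</p>

```python
import numpy as np

def segment_medians(img, labels):
    """Per-channel median colour of each labelled segment.

    img:    (H, W, C) array
    labels: (H, W) integer segment ids
    img[labels == seg] selects an (N, C) array of that segment's pixels.
    """
    return {int(seg): np.median(img[labels == seg], axis=0)
            for seg in np.unique(labels)}

# tiny demo: two 2-pixel segments
img = np.zeros((2, 2, 3))
img[0] = [10, 20, 30]                 # segment 0: two identical pixels
img[1] = [[1, 2, 3], [5, 6, 7]]       # segment 1: two different pixels
labels = np.array([[0, 0], [1, 1]])
meds = segment_medians(img, labels)
```

Because <code>labels == seg</code> broadcasts against the first two axes of <code>img</code>, there is no need to expand the 2-D mask to <code>100*100*3</code> by hand.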
|
<python><numpy><image-segmentation><scikit-image>
|
2023-03-20 09:51:24
| 2
| 539
|
user16971617
|
75,788,685
| 1,988,046
|
Low latency image streaming between Win11 and WSL2
|
<p>I need to stream images from a C# application running on Windows 11 to a python module running on WSL2 (Ubuntu 20.04).</p>
<p>Please note that I'm not a networking expert - I do computer vision.</p>
<p>I've tried using WebSockets and am passing a 640x480 image to python and receiving a short string in response. Unfortunately, the round trip time is about 80ms, which is at least 75ms too long. When I pass a short byte array (100 bytes) through the WebSocket, the round trip time is about 1.5ms.</p>
<p>I can reduce the size of the image, but that's only going to get me part of the way there.</p>
<p>What alternatives do I have?</p>
<ul>
<li>I've considered using raw TCP/IP, but I'm concerned about my ability to build a robust server in python and a robust client in C#.</li>
<li>I've used memory-mapped files for interprocess communication in the past, but I'm not sure how to share memory between windows and WSL (<a href="https://stackoverflow.com/questions/65312636/shared-memory-between-windows-process-and-wsl-linux-process">Shared Memory Between Windows Process and WSL Linux Process</a>)</li>
<li>I'm seriously considering setting up a RAM disk, writing to file from C# and then opening it in python, but this makes me feel itchy.</li>
</ul>
<p>Can anyone suggest anything better?</p>
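<p>For the raw TCP/IP option, a hedged sketch of a minimal framing protocol in Python: each frame is a 4-byte big-endian length prefix followed by the raw bytes, so the receiver always knows how much to read. (The C# side would write the same header via a <code>NetworkStream</code>; the demo below runs over a local <code>socketpair</code> so it is self-contained.)</p>

```python
import socket
import struct

def send_frame(sock, payload: bytes) -> None:
    # 4-byte big-endian length header, then the raw bytes
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def _recv_exact(sock, n: int) -> bytes:
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return bytes(buf)

def recv_frame(sock) -> bytes:
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

# loopback demo with a small fake "image" payload
a, b = socket.socketpair()
send_frame(a, b"\x7f" * 10_000)
frame = recv_frame(b)
a.close(); b.close()
```

The robustness worry largely reduces to the <code>_recv_exact</code> loop (TCP can split a frame across reads); with that handled, a single long-lived connection between Win11 and WSL2 avoids per-request overhead.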
|
<python><c#><windows-subsystem-for-linux><shared-memory>
|
2023-03-20 09:45:01
| 0
| 513
|
mike1952
|
75,788,659
| 12,863,331
|
Modify a pandas dataframe so that two or more cells will be merged and centered when opened in Excel
|
<p>I've seen some solutions but couldn't apply them after trying many times.<br />
The code I used is based on the <a href="https://xlsxwriter.readthedocs.io/example_merge1.html" rel="nofollow noreferrer">documentation</a>.<br />
In the following example the intention is to merge cells with the value 1:</p>
<pre><code>df = p.DataFrame({'A': [1, 1, 3], 'B': [4, 5, 6]})
writer = p.ExcelWriter('test.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name='Sheet1', index=False)
worksheet = writer.sheets['Sheet1']
worksheet.merge_range('B2:B3', '1')
worksheet.close()
</code></pre>
<p><code>worksheet.close()</code> resulted in the error:</p>
<blockquote>
<p>AttributeError: 'Worksheet' object has no attribute 'close'</p>
</blockquote>
<p>which I couldn't find a solution for.<br />
Is that the correct approach? Is something missing?<br />
Thanks.</p>
|
<python><excel><pandas><xlsxwriter>
|
2023-03-20 09:42:30
| 2
| 304
|
random
|
75,788,557
| 19,580,067
|
Extract the Group Email Address from the recipients of outlook email using win32
|
<p>I'm trying to extract all the email info from Outlook, such as the email body, sender address and recipient addresses.</p>
<p>The extraction of the sender and body works fine, but I am unable to get the recipients' email addresses: the recipients list contains group emails, for which extracting the address returns <code>None</code> even though the names are returned correctly.</p>
<p>Here is the code</p>
<pre><code>otlk = win32com.client.Dispatch("Outlook.Application")
outlook = otlk.GetNamespace("MAPI")
email_sender = outlook.Session.Accounts['pravin.subramanian@alatas.com']

for i in range(len(outlook.Folders)):
    # Condition to take emails from that particular account.
    if outlook.Folders.Item(i+1).Name == 'autoenq@xyz.com':
        root_folder = outlook.Folders.Item(i+1)
        # Looping to find the inbox folder and extracting the emails
        for folder in root_folder.Folders:
            if folder.Name == 'Inbox':
                messages = folder.Items
                messages.Sort("[ReceivedTime]", True)
                msg_no = 0
                for message in messages:
                    # Trying to get the recipients' email addresses
                    recipients = message.Recipients
                    for recipient in recipients:
                        print(recipient.AddressEntry.GetExchangeUser())
                        # recipient.AddressEntry.GetExchangeUser().PrimarySmtpAddress
</code></pre>
<p>But the code returns <code>None</code>, whereas if I try <code>recipient.AddressEntry</code> it returns the names of the recipients correctly.</p>
<p>How can I fix this?</p>
|
<python><outlook><pywin32><win32com><office-automation>
|
2023-03-20 09:31:57
| 2
| 359
|
Pravin
|
75,788,523
| 2,964,170
|
How to get last word from string in python
|
<p>I have a string and I am trying to get the last word from it, e.g. <code>AUTHORITY["EPSG","6737"]</code>:</p>
<pre><code>a = ('PROJCS["Korea",GEOGCS["K 2000",DATUM["Geocentric_datum",'
     'SPHEROID["GRS 1980",6378137,298.257222101,AUTHORITY["EPSG","7019"]],'
     'AUTHORITY["EPSG","6737"]]]')
</code></pre>
</code></pre>
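<p>A sketch using a regular expression to grab the last <code>AUTHORITY[...]</code> fragment (it assumes the fragments never contain a <code>]</code> internally, which holds for WKT strings like this one):</p>

```python
import re

a = ('PROJCS["Korea",GEOGCS["K 2000",DATUM["Geocentric_datum",'
     'SPHEROID["GRS 1980",6378137,298.257222101,AUTHORITY["EPSG","7019"]],'
     'AUTHORITY["EPSG","6737"]]]')

# all AUTHORITY[...] fragments in order of appearance; keep the last one
last = re.findall(r'AUTHORITY\[[^\]]*\]', a)[-1]
print(last)  # AUTHORITY["EPSG","6737"]
```

If only the numeric code is needed, a capturing group such as <code>r'AUTHORITY\["EPSG","(\d+)"\]'</code> would return <code>'6737'</code> directly.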
|
<python><python-3.x><string>
|
2023-03-20 09:28:22
| 3
| 425
|
Vas
|
75,788,472
| 1,887,381
|
How to detect the inner radius and outer radius of this circle in python?
|
<p>I want to detect the radii of the two circles in the <code>mask</code> image,
but <code>cv2.HoughCircles</code> cannot detect these two circles.</p>
<p>what should i do? thanks in advance.</p>
<p><a href="https://i.sstatic.net/VwO5k.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VwO5k.jpg" alt="enter image description here" /></a></p>
<p>here is the code:</p>
<pre><code>import cv2
import numpy as np
track_win = 'Tracking'
def nothing(x):
pass
# use track bar to perfectly define (1/2)
# the lower and upper values for HSV color space(2/2)
cv2.namedWindow(track_win)
cv2.resizeWindow(track_win, 1000, 500)
cv2.moveWindow(track_win, 1920, 100)
#1 Lower/Upper HSV 3 startValue 4 endValue
cv2.createTrackbar("LH",track_win,35,255,nothing)
cv2.createTrackbar("LS",track_win,43,255,nothing)
cv2.createTrackbar("LV",track_win,46,255,nothing)
cv2.createTrackbar("UH",track_win,77,255,nothing)
cv2.createTrackbar("US",track_win,255,255,nothing)
cv2.createTrackbar("UV",track_win,255,255,nothing)
cv2.createTrackbar("min r",track_win,30,1000,nothing)
cv2.createTrackbar("max r",track_win,100,2000,nothing)
cv2.setTrackbarPos("LH", track_win, 4)
cv2.setTrackbarPos("LS", track_win, 39)
cv2.setTrackbarPos("LV", track_win, 126)
cv2.setTrackbarPos("UH", track_win, 46)
cv2.setTrackbarPos("US", track_win, 121)
cv2.setTrackbarPos("UV", track_win, 171)
cv2.setTrackbarPos("min r", track_win, 30)
cv2.setTrackbarPos("max r", track_win, 1000)
kernel = np.ones((4, 4), np.uint8)
while True:
frame = cv2.imread('./IMG_0645.jpg')
hsv = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
l_h = cv2.getTrackbarPos("LH",track_win)
l_s = cv2.getTrackbarPos("LS",track_win)
l_v = cv2.getTrackbarPos("LV",track_win)
u_h = cv2.getTrackbarPos("UH",track_win)
u_s = cv2.getTrackbarPos("US",track_win)
u_v = cv2.getTrackbarPos("UV",track_win)
m_r = cv2.getTrackbarPos("min r",track_win)
x_r = cv2.getTrackbarPos("max r",track_win)
l_g = np.array([l_h, l_s, l_v]) # lower green value
u_g = np.array([u_h,u_s,u_v])
mask = cv2.inRange(hsv,l_g,u_g)
mask = cv2.dilate(mask, kernel, iterations = 1) # dilate
mask = cv2.erode(mask, kernel, iterations=1) # erode
mask = cv2.Canny(mask, 50, 150) # Canny
mask = cv2.dilate(mask, kernel, iterations = 1) # dilate
mask = cv2.erode(mask, kernel, iterations=1) # erode
res=cv2.bitwise_and(frame,frame,mask=mask) # src1,src2
#Hough Circles Detection
circles= cv2.HoughCircles(mask,cv2.HOUGH_GRADIENT,1,100,param1=100,param2=50,minRadius=m_r,maxRadius=m_r) #
if circles is not None:
print(len(circles))
for circle in circles:
# print(circle)
if len(circle) == 3:
(x, y, r) = circle
# draw the circle in the output image, then draw a rectangle
# corresponding to the center of the circle
cv2.circle(res, (x, y), r, (0, 255, 0), 4)
# cv2.rectangle(res, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
cv2.imshow("frame", frame)
cv2.moveWindow("frame",100,100)
cv2.imshow("mask", mask)
cv2.moveWindow("mask",100+540,100)
cv2.imshow("res", res)
cv2.moveWindow("res",100+540*2,100)
key = cv2.waitKey(1)
if key == 27: # Esc
break
cv2.destroyAllWindows()
</code></pre>
<p>here is the original image:</p>
<p><a href="https://i.sstatic.net/m4xna.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m4xna.jpg" alt="enter image description here" /></a></p>
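<p>If <code>HoughCircles</code> keeps failing on a thick ring, one possible fallback (a sketch in plain NumPy, applied to the binary mask rather than the edge image) estimates the inner and outer radii directly from the mask pixels and their centroid:</p>

```python
import numpy as np

def ring_radii(mask):
    """Estimate inner and outer radius of a filled ring in a binary mask.

    The ring centre is approximated by the centroid of the mask pixels;
    the inner/outer radii are the min/max distance of any mask pixel
    from that centre.  Assumes the mask contains a single, roughly
    concentric ring.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r = np.hypot(ys - cy, xs - cx)
    return r.min(), r.max()

# synthetic demo: ring with radii 20..30 centred at (50, 50)
yy, xx = np.mgrid[0:101, 0:101]
rr = np.hypot(yy - 50, xx - 50)
mask = ((rr >= 20) & (rr <= 30)).astype(np.uint8)
inner, outer = ring_radii(mask)
```

This avoids Hough's parameter tuning entirely; on noisy masks a morphological clean-up first (as the question's code already does) keeps stray pixels from inflating <code>r.max()</code>.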
|
<python><opencv><computer-vision><hough-transform>
|
2023-03-20 09:22:56
| 0
| 2,661
|
12343954
|
75,788,424
| 10,829,044
|
pandas combine specific excel sheets into one
|
<p>I have an excel sheet named <code>output.xlsx</code> with multiple sheets in it.</p>
<p>For example, the sheets within it are named <code>P1</code>, <code>P2</code>, <code>P3</code>, <code>P4</code>.</p>
<p>I would like to do the below</p>
<p>a) Combine sheet <code>P1</code> and sheet <code>P2</code> into one single sheet name it as <code>P1&P2</code></p>
<p>b) retain the <code>P3</code> and <code>P4</code> sheets in <code>output.xlsx</code> as they are, with no changes</p>
<p>c) Overwrite the same file <code>output.xlsx</code> but don't touch <code>P3</code> and <code>P4</code></p>
<p>So, I tried the below</p>
<pre><code>df = pd.concat(pd.read_excel('output.xlsx', sheet_name=['P1','P2']), ignore_index=True)
with pd.ExcelWriter('output.xlsx', engine='xlsxwriter') as writer:
df.to_excel(writer, sheet_name="P1&P2",index=False)
</code></pre>
<p>But the above code overwrites the file and deletes the other sheets.</p>
<p>How can I combine only the sheets <code>P1</code> and <code>P2</code> while keeping <code>P3</code> and <code>P4</code> as they are?</p>
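<p>One approach that might work (a sketch against a small demo workbook — the sheet names and columns below are stand-ins) is to read every sheet up front with <code>sheet_name=None</code>, merge the two of interest, and write all of them back in one pass:</p>

```python
import pandas as pd

# Demo setup (stand-in for the real file): four one-row sheets P1..P4
with pd.ExcelWriter('output.xlsx') as writer:
    for name in ['P1', 'P2', 'P3', 'P4']:
        pd.DataFrame({'col': [name]}).to_excel(writer, sheet_name=name, index=False)

# sheet_name=None reads the whole workbook into a dict of {name: DataFrame}
sheets = pd.read_excel('output.xlsx', sheet_name=None)

# Merge P1 and P2 and remove them from the dict
combined = pd.concat([sheets.pop('P1'), sheets.pop('P2')], ignore_index=True)

# Rewrite the workbook: the merged sheet plus the untouched remaining sheets
with pd.ExcelWriter('output.xlsx') as writer:
    combined.to_excel(writer, sheet_name='P1&P2', index=False)
    for name, df in sheets.items():
        df.to_excel(writer, sheet_name=name, index=False)
```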
|
<python><excel><pandas><dataframe><group-by>
|
2023-03-20 09:17:42
| 1
| 7,793
|
The Great
|
75,788,354
| 5,084,737
|
Unable to use EcsTaskRole inside task to get s3 buckets
|
<p>I have an ECS Fargate task running, having a task role with following policy:</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecs:*",
"ec2:*",
"elasticloadbalancing:*",
"ecr:*",
"cloudwatch:*",
"s3:*",
"rds:*",
"logs:*",
"elasticache:*",
"secretsmanager:*"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"ssmmessages:CreateControlChannel",
"ssmmessages:CreateDataChannel",
"ssmmessages:OpenControlChannel",
"ssmmessages:OpenDataChannel"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": "logs:DescribeLogGroups",
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"logs:CreateLogStream",
"logs:DescribeLogStreams",
"logs:PutLogEvents"
],
"Resource": "*",
"Effect": "Allow"
}
]
}
</code></pre>
<p>This is how my containerdefinition.json looks like</p>
<pre><code>{
"taskDefinitionArn": "arn:aws:ecs:eu-west-1:account-id:task-definition/application:13",
"containerDefinitions": [
{
"name": "application",
"image": "boohoo:latest",
"cpu": 256,
"memory": 512,
"memoryReservation": 256,
"links": [],
"portMappings": [
{
"containerPort": 8000,
"hostPort": 8000,
"protocol": "tcp"
}
],
"essential": true,
"entryPoint": [],
"command": [
"pipenv",
"run",
"prod"
],
"environment": [],
"environmentFiles": [],
"mountPoints": [],
"volumesFrom": [],
"secrets": [],
"user": "uwsgi",
"dnsServers": [],
"dnsSearchDomains": [],
"extraHosts": [],
"dockerSecurityOptions": [],
"dockerLabels": {},
"ulimits": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "Cluster/services/logs/",
"awslogs-region": "eu-west-1",
"awslogs-stream-prefix": "application"
},
"secretOptions": []
},
"systemControls": []
}
],
"family": "application",
"taskRoleArn": "arn:aws:iam::account-id:role/task-role",
"executionRoleArn": "arn:aws:iam::account-id:role/task-execution-role",
"networkMode": "awsvpc",
"revision": 13,
"volumes": [],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"name": "ecs.capability.execution-role-awslogs"
},
{
"name": "com.amazonaws.ecs.capability.ecr-auth"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "ecs.capability.secrets.asm.environment-variables"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.21"
},
{
"name": "com.amazonaws.ecs.capability.task-iam-role"
},
{
"name": "ecs.capability.execution-role-ecr-pull"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2",
"FARGATE"
],
"requiresCompatibilities": [
"FARGATE"
],
"cpu": "512",
"memory": "1024",
"registeredAt": "2023-03-19T10:24:24.737Z",
"registeredBy": "arn:aws:sts::account-id:assumed-role/...",
"tags": []
}
</code></pre>
<p>I can see the env variable <code>AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/something-something</code> inside the container and am trying to list the S3 buckets using the boto3 library, but I get <strong>ListBuckets operation: Access Denied</strong>:</p>
<pre><code>>>> session = boto3.session.Session()
>>> s3 = session.client("s3")
>>> s3.list_buckets()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/app/.venv/lib/python3.10/site-packages/botocore/client.py", line 530, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/app/.venv/lib/python3.10/site-packages/botocore/client.py", line 960, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied
</code></pre>
<p>I can see the read-only access key and secret access key by running the following command, but it looks like I am missing something.</p>
<pre><code>>>> print(session.get_credentials().get_frozen_credentials())
</code></pre>
<p>Just for testing, I tried to follow the steps mentioned <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html" rel="nofollow noreferrer">here in the AWS guide for task IAM roles</a> to get the credentials and use them to communicate with S3 directly. That works — I am able to get the credentials and can communicate with S3:</p>
<pre><code>>> r = requests.get("http://169.254.170.2/v2/credentials/something-something")
>>> r.json()
{'RoleArn': 'arn:aws:iam::account-id:role/task-role', 'AccessKeyId': 'access-key-id', 'SecretAccessKey': 'secret-access-key', 'Token': 'very-long-token', 'Expiration': '2023-03-20T14:52:49Z'}
>>>
>>>
>>> s3 = boto3.client("s3", aws_access_key_id="access-key-id", aws_secret_access_key="secret-access-key", aws_session_token="very-long-token")
>>> s3.list_buckets()
{...}
</code></pre>
|
<python><amazon-web-services><boto3>
|
2023-03-20 09:10:12
| 0
| 1,117
|
Sadan A.
|