Dataset columns: content (string, 85 to 101k chars) | title (string, 0 to 150 chars) | question (string, 15 to 48k chars) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 35 to 137 chars)
Q:
Python: How is the docstring code running and is this the correct way to code a menu?
Errors for this code: I don't understand how the code under the docstring is running. I am trying to make a main menu. I also keep getting errors highlighted under the comments.
def main_menu():
    # Function for the interface where the user is presented with a menu and given the options requiring the user's input
    choice = input("""
    MainName
    R - Reporting
    I - Intelligence
    M - Monitoring
    A - About
    Q - Quit
    Choose an option: """)
    if choice == "R" or choice == "r":
        reporting()
    elif choice == "I" or choice == "i":
        intelligence()
    elif choice == "M" or choice == "m":
        monitoring()
    elif choice == "A" or choice == "a":
        about()
    elif choice == "Q" or choice == "q":
        quit()
    else:
        print(" ")
        print("Please try again")
        main_menu()
Is this a correct way to make a menu? The program runs with no error messages, but my editor keeps highlighting problems.
A:
You are using triple quotes inside the input() call, so Python treats that block as an ordinary string prompt, not a docstring; that is why the "docstring" text appears when the menu runs.
As for the menu, it is correct, but also check out the match-case statement introduced in Python 3.10.
A:
You are using triple quotes inside the input() call, so Python treats that block as an ordinary string prompt, not a docstring. As for the menu, it is correct, but also check out the match-case statement introduced in Python 3.10.
Use VS Code; maybe your editor doesn't support it.
And call the main_menu() function outside the if/else block.
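A minimal sketch of the same menu using the match statement the answers mention (Python 3.10+); it assumes the reporting(), intelligence(), monitoring(), and about() functions from the original code exist:

def main_menu():
    # Show the main menu and dispatch on the user's choice, case-insensitively.
    choice = input("""
    MainName
    R - Reporting
    I - Intelligence
    M - Monitoring
    A - About
    Q - Quit
    Choose an option: """).strip().lower()  # normalize so "R" and "r" match the same case
    match choice:
        case "r":
            reporting()
        case "i":
            intelligence()
        case "m":
            monitoring()
        case "a":
            about()
        case "q":
            raise SystemExit
        case _:
            print("Please try again")
            main_menu()

main_menu()  # called once, outside the function body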
Q:
Google OR-Tools: prefer more vehicles over saving cost
I have a total of 8 vehicles, with travel time and weight as cost parameters for the route calculation. The image below shows the plans created: Google OR-Tools created plans for only five vehicles. If you look at the green plan, it sends the vehicle to the left side and then to the right side to serve the locations. Is there a way I can engage more vehicles rather than creating long travel plans? Ideally, the green vehicle should have been sent to the right side or the left side only, instead of traveling in both directions, given the weight, time, and number of locations left to visit.
These are the search parameters I have specified:
search_parameters = pywrapcp.DefaultRoutingSearchParameters()
search_parameters.first_solution_strategy = (
routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
search_parameters.local_search_metaheuristic = (
routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH)
search_parameters.time_limit.FromSeconds(3)
A:
Here are some helpful references for the global span cost, which can be useful if you want to utilize more vehicles, for example by setting it on the distance dimension.
/// Sets a cost proportional to the *global* dimension span, that is the
/// difference between the largest value of route end cumul variables and
/// the smallest value of route start cumul variables.
/// In other words:
/// global_span_cost =
/// coefficient * (Max(dimension end value) - Min(dimension start value)).
void SetGlobalSpanCostCoefficient(int64_t coefficient);
Reference
vrp_global_span.py (Sample)
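A minimal sketch of how this is set from Python, following the pattern in the vrp_global_span.py sample; the "Distance" dimension name, the capacity bound, and the coefficient are illustrative values, and routing / transit_callback_index come from the usual routing-model setup:

# Register a distance dimension, then penalize the spread between the
# longest and the shortest route so work is balanced across more vehicles.
dimension_name = "Distance"
routing.AddDimension(
    transit_callback_index,  # same callback used for the arc cost
    0,                       # no slack
    300000,                  # a generous per-route distance bound (assumption)
    True,                    # force each route's start cumul to zero
    dimension_name,
)
distance_dimension = routing.GetDimensionOrDie(dimension_name)
distance_dimension.SetGlobalSpanCostCoefficient(100)  # higher values -> more balanced routes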
Q:
How to drop duplicates columns from a pandas dataframe, based on columns' values (columns don't have the same name)?
I want to drop columns if the values inside of them are the same as in other columns. From DF, it should yield DF_new:
import numpy as np
import pandas as pd

DF = pd.DataFrame(index=[1, 2, 3, 4], columns=['col1', 'col2', 'col3', 'col4', 'col5'])
x = np.random.uniform(size=4)
DF['col1'] = x
DF['col2'] = x + 2
DF['col3'] = x
DF['col4'] = x + 2
DF['col5'] = [5, 6, 7, 8]
display(DF)

DF_new = DF[['col1', 'col2', 'col5']]
display(DF_new)
This is a simple example of what I can't manage to do. Note that the column names are not the same, so I can't use
DF_new = DF.loc[:, ~DF.columns.duplicated()].copy()
which drops columns based on their names.
A:
You can use:
df = df.T.drop_duplicates().T
Step by step:
df2 = df.T # T = transpose (convert rows to columns)
1 2 3 4
col1 0.67075 0.707864 0.206923 0.168023
col2 2.67075 2.707864 2.206923 2.168023
col3 0.67075 0.707864 0.206923 0.168023
col4 2.67075 2.707864 2.206923 2.168023
col5 5.00000 6.000000 7.000000 8.000000
#now we can use drop duplicates
df2=df2.drop_duplicates()
'''
1 2 3 4
col1 0.67075 0.707864 0.206923 0.168023
col2 2.67075 2.707864 2.206923 2.168023
col5 5.00000 6.000000 7.000000 8.000000
'''
#then use transpose again.
df2=df2.T
'''
col1 col2 col5
1 0.670750 2.670750 5.0
2 0.707864 2.707864 6.0
3 0.206923 2.206923 7.0
4 0.168023 2.168023 8.0
'''
A:
This should do what you need:
df = df.loc[:,~df.apply(lambda x: x.duplicated(),axis=1).all()].copy()
as you can see from this link
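For completeness, a runnable version of the transpose approach from the first answer, applied to the question's example frame:

import numpy as np
import pandas as pd

DF = pd.DataFrame(index=[1, 2, 3, 4], columns=['col1', 'col2', 'col3', 'col4', 'col5'])
x = np.random.uniform(size=4)
DF['col1'], DF['col2'], DF['col3'], DF['col4'] = x, x + 2, x, x + 2
DF['col5'] = [5, 6, 7, 8]

# Transpose, drop duplicate rows (formerly duplicate columns), transpose back.
DF_new = DF.T.drop_duplicates().T
print(DF_new.columns.tolist())  # ['col1', 'col2', 'col5']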
Q:
How to disable inline labels in django inline admin?
In my Django admin, I want to disable the below-shown paragraph. Please help me out. I am new to customizing the Django admin.
Link to Image
Here is my current admin.py
class ImageInlineAdmin(admin.StackedInline):
    model = Image
    max_num = 1

class BlogAdmin(admin.ModelAdmin):
    inlines = [ParagraphInlineAdmin, ImageInlineAdmin]
    list_display = ['heading', 'id']
Here are my models
class Container(models.Model):
    heading = models.CharField(max_length=100, blank=True, default="")
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    def __str__(self) -> str:
        return self.heading

class Paragraph(models.Model):
    content = RichTextUploadingField()
    container = models.ForeignKey(
        Container, on_delete=models.CASCADE, blank=True)

    def __str__(self) -> str:
        return self.content
P.S. I am using Django CKEditor.
A:
You need to remove the __str__ function from the model; that will solve the problem.
A:
I removed it via a CSS trick, as follows:
class MyClassAdmin(admin.ModelAdmin):
    inlines = [MyTabularAdmin]

    class Media:
        css = {
            'all': ('css/custom_admin.css',),
        }

The CSS is:
.tabular table tbody .original p {
    visibility: hidden;
}

.inline-group .tabular tr.has_original td {
    padding-top: 10px;
}
A:
If you don't want to override CSS, try this.
Instead of:
def __str__(self) -> str:
    return self.content

try:
def __str__(self) -> str:
    return ""
A:
Removing the __str__ function or making it return an empty string does remove the label, but it does not remove the space the label occupies. The CSS solution does that.
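Putting the pieces together, a minimal sketch of the CSS-based approach adapted to this question's models; the inline class body and the CSS path are illustrative assumptions:

from django.contrib import admin
from .models import Container, Paragraph

class ParagraphInlineAdmin(admin.TabularInline):
    model = Paragraph
    extra = 1

@admin.register(Container)
class ContainerAdmin(admin.ModelAdmin):
    inlines = [ParagraphInlineAdmin]

    class Media:
        # Loaded on the change form; the stylesheet hides the inline's __str__ label.
        css = {'all': ('css/custom_admin.css',)}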
Q:
Select rows from SQL table based on multiple LIKE
I am connecting to a SQL Server table from a Python environment. I have a table like the one below, but instead of 6 rows it has thousands, with many combinations of deal names and loan IDs. Loan IDs follow a specific pattern: four digits, followed by an underscore, then the actual loan ID.
deal_name | LOAN_ID
AAAAAAAAA | 0001_LX3333
AAAAAAAAA | 0001_LX4444
BBBBBBBBB | 0221_LX3333
BBBBBBBBB | 0001_LX4444
CCCCCCCCC | 4401_LX3333
CCCCCCCCC | 0001_LX4444
I would like to select rows from this table based on a Python list of loan IDs (~1,000 entries) without the prefix (i.e., LX3333, LX4444, etc.), which is not fixed and is updated every month. If the loan IDs were fixed I could use some LIKE statement, but that is not possible, as the loan IDs change and there are thousands of them. Is there a way to provide a list of loan IDs and then look them up in the SQL table using some kind of LIKE statement?
A:
Thanks for your responses. I managed to resolve this issue by doing the following.
I first extracted the loan ID from the given table with:
RIGHT(unique_id, 8) AS id
I then looked it up against the given list of IDs with:
WHERE id IN {tuple(loan_idlist)}
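A sketch of the same idea as a parameterized query, which avoids interpolating {tuple(loan_idlist)} directly into the SQL string; the pyodbc connection string and the loans table name are placeholders:

import pyodbc

loan_idlist = ["LX3333", "LX4444"]  # the monthly list of loan IDs (sample values)

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb")
placeholders = ", ".join("?" * len(loan_idlist))
sql = f"""
    SELECT deal_name, LOAN_ID
    FROM loans
    WHERE SUBSTRING(LOAN_ID, CHARINDEX('_', LOAN_ID) + 1, LEN(LOAN_ID)) IN ({placeholders})
"""
rows = conn.cursor().execute(sql, loan_idlist).fetchall()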
Q:
How can I load an image into a tkinter window/canvas?
I am trying to have an image show up on a tkinter window. I have managed to do so in the past, but somehow my current attempt is failing at every step. Hopefully someone can guide me to the proper way and help me fix it.
I'm currently trying with this code. The error I'm getting is:
_tkinter.TclError: image "paco_img" doesn't exist
from tkinter import *
PINK = "#e2979c"
RED = "#e7305b"
GREEN = "#9bdeac"
YELLOW = "#f7f5dd"
BLUE = "#678ac2"
FONT_NAME = "Courier"
window = Tk()
window.title("Thomas' Elevator Pitch")
window.config(padx=200, pady=100, bg=BLUE)
canvas = Canvas(width=5000, height=4000)
paco_img = PhotoImage(file="paco.png")
canvas.create_image(2500, 2000, image="paco_img")
canvas.pack()
I've also tried the following, which changes the error to:
NameError: name 'ImageTk' is not defined. Did you mean: 'Image'?
However, when I change ImageTk to Image, my editor flags PhotoImage as an unresolved attribute reference on Image.
window = Tk()
window.title("Thomas' Elevator Pitch")
window.config(padx=200, pady=100, bg=BLUE)
canvas = Canvas(width=5000, height=4000)
paco_img = ImageTk.PhotoImage(file="paco.png")
canvas.create_image(2500, 2000, image="paco_img")
canvas.pack()
I can't seem to wrap my head around it, and suggestions on similar questions asked here haven't worked for me yet.
A:
As explained very well in the comments by @JRiggles, first you need a Python library to handle the image.
You can install it with pip:
pip install Pillow
Then, using the tkinter hello-world program as a base, all you need to do is load the necessary libraries, load your image, and display it via the image param of either a button or a label widget:
from tkinter import *
from tkinter import ttk
from PIL import Image, ImageTk
root = Tk()
image = Image.open("/path/to/your/image.ext")
frm = ttk.Frame(root, padding=10)
frm.grid()
tk_image = ImageTk.PhotoImage(image)
ttk.Label(frm, image=tk_image).grid(column=0, row=0)
root.mainloop()
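For the original Canvas-based attempt, two direct fixes resolve the error: pass the PhotoImage object itself rather than the string "paco_img", and keep a reference to the image so it is not garbage-collected. A minimal sketch:

from tkinter import Tk, Canvas, PhotoImage

window = Tk()
canvas = Canvas(window, width=500, height=400)
paco_img = PhotoImage(file="paco.png")  # tkinter's own PhotoImage reads PNG on Tk 8.6+
canvas.create_image(250, 200, image=paco_img)  # pass the object, not the name string
canvas.image = paco_img  # keep a reference so the image is not garbage-collected
canvas.pack()
window.mainloop()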
Q:
remove \n from a line in python
I have a txt file that I need to convert into a table. If I have a case like this:
---------------------------------------------
|apple|very good|every day|fruit
|chocolate|not so good|just\n
some times|snack
|bread|good|every day|whole|carbs
---------------------------------------
I split the file on '|', but the newline is a problem I cannot overcome. How can I join the two lines?
with open("ridotto.txt", encoding='latin-1') as f:
new_list=[]
for line in f:
if line.startswith("-"):
line.replace("-", "")
else:
new_list.append(line.replace('\n', ' ').split('|'))
When I do this, I get this result:
[apple, very good, every day, fruit, chocolate, not so good, just
, some times, snack, bread, good, every day, whole, carbs]
while I want "just some times" to be one single element.
Note: the \n is not literal
A:
Sometimes the \n is actually \r\n under the hood.
Try this and see if it helps:
new_list.append(line.replace('\n', ' ').replace('\r\n', ' ').split('|'))
A:
If you are trying to remove a literal newline, then just use two backslashes: replace('\\n', ' '). Python uses the backslash as an escape character and so you have to escape the backslash itself if you want to use a literal backslash.
A:
For your code, I recommend doing this:
with open("ridotto.txt", encoding="latin-1") as f:
new_list = []
for line in f:
if "\\n" in line:
new_list.append(line.replace("\\n\n", "").split("|"))
elif not line.startswith("-"):
new_list.append(line.replace("\n", "").split("|"))
A:
Read the text file and replace items one by one accordingly:
with open('text.txt', 'r') as file:
    data = file.read().replace('\n', '')

data = data.replace("\\n", " ")
data = data.replace("-", " ")
data = data.strip()
my_list = data.split("|")
my_list = [i for i in my_list if i]  # Remove empty elements

print(my_list)
This gives:
['apple', 'very good', 'every day', 'fruit', 'chocolate', 'not so good', 'just some times', 'snack', 'bread', 'good', 'every day', 'whole', 'carbs']
A:
Maybe the following logic will help, if what you want is to combine two lines when the first line ends with \n in the file:
from io import StringIO

with open("ridotto.txt", encoding='latin-1') as f:
    new_list = []

    ## if there is \n at the end of a text line, then remove the newline character on that line ##
    c = f.read().replace("\\n\n", ' ')

    ## Reading the string like a file ##
    with StringIO(c) as file:
        for line in file:
            if line.startswith("-"):
                continue
            else:
                new_list.append(line.replace('\n', '').split('|'))
Q:
Python - [Errno 111] Connection refused on client side of the connection
I'm trying to create a chat between a client and a server written in Python, using SSL with mutual authentication (i.e., the server authenticates the client and the client authenticates the server using certificates). My host machine is being used as the server, and my laptop is the client.
When attempting to connect to my host ip, I keep getting this error on my laptop:
Traceback (most recent call last):
File "/home/icarus/Codes/RealtimeChat/Chat.py", line 88, in <module>
main()
File "/home/icarus/Codes/RealtimeChat/Chat.py", line 75, in main
connection(ip, port, SSLSock)
File "/home/icarus/Codes/RealtimeChat/Chat.py", line 35, in connection
sock.connect((ip, port))
File "/usr/lib/python3.10/ssl.py", line 1375, in connect
self._real_connect(addr, False)
File "/usr/lib/python3.10/ssl.py", line 1362, in _real_connect
super().connect(addr)
ConnectionRefusedError: [Errno 111] Connection refused
On the server, which was supposed to print a message saying that a connection was refused, nothing happens; it keeps listening for connections as if nothing happened.
Connection function on client side:
def connection(ip, port, sock):
    try:
        sock.connect((ip, port))
        print(f"Connected with {ip}")
    except Exception as e:
        print("Connection failed: ", e)
        sock.close()
Server side:
def acceptConnection(self):
    while True:
        con, senderIP = self.sock.accept()

        # Attempting to wrap connection with SSL socket
        try:
            SSLSock = self.getSSLSocket(con)
        # If exception occurs, close socket and continue listening
        except Exception as e:
            print("Connection refused: ", e)
            con.close()
            continue

        print(f"{senderIP} connected to the server")

        # Adding connection to clients list
        self.clients.append(SSLSock)

        # Initializing thread to receive and communicate messages
        # to all clients
        threading.Thread(target=self.clientCommunication, args=(SSLSock, ), daemon=True).start()
This is the main function on my server:
def main():
    serverIP = "127.0.0.1"
    port = int(input("Port to listen for connections: "))

    server = Server()
    server.bindSocket(serverIP, port)
    server.socketListen(2)
    server.acceptConnection()
Everything works fine when I connect from localhost (e.g., I open a server on my host machine in one terminal and use another terminal on the same machine to connect to it). Both machines have the required certificates to authenticate each other, so I don't think that's the problem. Also, without the SSL implementation, the connection between these two different computers was still refused by the server.
I've tried using sock.bind(('', port)) on the server side, disabling my firewall, and using telnet 127.0.0.1 54321 (on my host machine) to check whether the connection was working on the specified port (it is), and also on the client machine (which showed that the connection was refused). I also tried running both scripts with admin privileges (sudo), but that didn't work either. Any suggestions?
A:
I found what was wrong: I was trying to connect to my public IP address (which I found by searching "What is my IP" on Google). Instead, what should be done is to connect to the private IP address (I guess that's the correct name), which you can see using ifconfig on Linux and macOS or ipconfig on Windows in a terminal. By doing this, I could connect two computers on my network to my desktop server. I still haven't tested computers on different networks, but the problem has been solved.
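Related to that fix, the server's main() also binds to 127.0.0.1, which only accepts connections from the same machine. A minimal sketch of the two server-side adjustments this implies; the 0.0.0.0 bind and the helper function below are illustrations, not part of the original code:

import socket

# Bind to all interfaces so LAN clients can reach the server,
# instead of 127.0.0.1, which only accepts local connections.
serverIP = "0.0.0.0"

def get_lan_ip():
    """Discover this machine's private (LAN) IP address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))  # UDP connect sends no packets; it just selects a route
        return s.getsockname()[0]

print(get_lan_ip())  # e.g. 192.168.x.x -- the address the client should connect to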
Q:
Logging in Python Django: INFO not logging onto file
I have set up logging in my Django project for 4 different cases: info messages, debug messages, error messages, and gunicorn logging.
Here is the content of my settings.py:
LOGGING = {
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"main_formatter": {
"format": "{asctime}-{levelname}-{module}-{funcName}-{message}",
"style": "{",
},
},
"handlers": {
"console": {
"class": "logging.StreamHandler",
"formatter": "main_formatter",
},
"dfile": {
"class": "logging.FileHandler",
"filename": "logs/debug.log",
"formatter": "main_formatter",
},
"file": {
"class": "logging.FileHandler",
"filename": "logs/info.log",
"formatter": "main_formatter",
},
"efile": {
"class": "logging.FileHandler",
"filename": "logs/error.log",
"formatter": "main_formatter",
},
"gfile": {
"class": "logging.FileHandler",
"filename": "logs/gunicorn.log",
"formatter": "main_formatter",
},
},
"loggers": {
"main": {
"handlers": ["dfile", "console"],
"propagate": True,
"level": "DEBUG",
},
"main": {
"handlers": ["file", "console"],
"propagate": True,
"level": "INFO",
},
"main": {
"handlers": ["efile", "console"],
"propagate": True,
"level": "ERROR",
},
"gunicorn.access": {
"handlers": ["gfile", "console"],
"propagate": False,
"level": "DEBUG",
},
},
}
And here is an example:
import logging
[...]
logger = logging.getLogger(__name__)
[...]
logger.error(f"{request.user}: Erreur interne. Code [LI.001]") # THIS WORKS
logger.info(f"Password Generated and Sent to {email}") # THIS DOESNT WORK
In my logs/ folder, I have 4 files: info.log, error.log, debug.log and gunicorn.log
The only output I see is in error.log and gunicorn.log, the other 2 files are always empty.
A:
I don't know much about Django's logging specifically, but it sticks out to me that your loggers dictionary specifies the "main" key 3 times. In a Python dict literal, each duplicate key overwrites the previous one, so your DEBUG and INFO settings are being lost/replaced by your ERROR settings.
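A sketch of the fix this implies: keep a single "main" logger and let per-handler levels route records to the right files. Only the handlers and loggers sections of the existing LOGGING dict change; the "level" keys on the handlers are additions that were not in the original config:

"handlers": {
    "dfile": {
        "class": "logging.FileHandler",
        "filename": "logs/debug.log",
        "formatter": "main_formatter",
        "level": "DEBUG",   # added: accepts everything
    },
    "file": {
        "class": "logging.FileHandler",
        "filename": "logs/info.log",
        "formatter": "main_formatter",
        "level": "INFO",    # added: INFO and above only
    },
    "efile": {
        "class": "logging.FileHandler",
        "filename": "logs/error.log",
        "formatter": "main_formatter",
        "level": "ERROR",   # added: ERROR and above only
    },
},
"loggers": {
    "main": {   # one entry instead of three duplicates
        "handlers": ["dfile", "file", "efile", "console"],
        "level": "DEBUG",
        "propagate": True,
    },
},

Note also that logging.getLogger(__name__) only matches this config when the module's dotted path starts with "main"; otherwise, request the logger explicitly with logging.getLogger("main").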
Q:
Update Python dictionary with appended list value
I have a dataframe with price quotes for a variety of parts and makers: ~10k parts and 10 makers, so my dataset contains up to 100k rows. It looks roughly like this:
Part | Maker   | Price
1    | Alpha   | 1.00
2    | Alpha   | 1.30
3    | Alpha   | 1.25
1    | Bravo   | 1.10
2    | Bravo   | 1.02
3    | Bravo   | 1.15
4    | Bravo   | 1.19
1    | Charlie | 0.99
2    | Charlie | 1.10
3    | Charlie | 1.12
4    | Charlie | 1.19
I want to return two dictionaries based on the best price: Part/Price and Part/Maker. My main issue is when two makers have the same best price.
I want my result to end up like this:
1: 0.99
2: 1.1
3: 1.02
4: 1.19
and the second one to be:
1: Charlie
2: Charlie
3: Bravo
4: [Bravo, Charlie]
The first dictionary is easy. Second one is what I'm stuck on. Here's what I have so far:
winning_price_dict = {}
winning_mfg_dict = {}

for index, row in quote_df.iterrows():
    if row['Part'] not in winning_price_dict:
        winning_price_dict[row['Part']] = row['Proposed Quote']
        winning_mfg_dict[row['Part']] = list(row['Maker'])
    if winning_price_dict[row['Part']] > row['Proposed Quote']:
        winning_price_dict[row['Part']] = row['Proposed Quote']
        winning_mfg_dict[row['Part']] = row['Maker']
    if winning_price_dict[row['Part']] == row['Proposed Quote']:
        winning_price_dict[row['Part']] = row['Proposed Quote']
        winning_mfg_dict[row['Part']] = winning_mfg_dict[row['Part']].append(row['Maker'])  # this is the only line that I don't believe works
When I run it as is, it says 'str' object has no attribute 'append'. However, I thought that it should be a list because of the list(row['Maker']) command.
When I change the relevant lines to this:
for index, row in quote_df.iterrows():
    if row['Part'] not in winning_price_dict:
        winning_mfg_dict[row['Part']] = list(row['Mfg'])
    if winning_price_dict[row['Part']] > row['Proposed Quote']:
        winning_mfg_dict[row['Part']] = list(row[['Mfg']])
    if winning_price_dict[row['Part']] == row['Proposed Quote']:
        winning_mfg_dict[row['Part']] = list(winning_mfg_dict[row['Part']]).append(row['Mfg'])
With this, winning_mfg_dict contains all the part numbers but with NoneType values, not the maker names.
What do I need to change to get it to return the list of suitable makers?
Thanks!
A:
In your original code, the actual problem was on line 9 of the first fragment: you set the value to a string, not to a list. Also, calling list(some_string) does not do what you expect: it creates a list of single characters, not [some_string].
I took the liberty to improve the overall readability by extracting common keys to variables, and joined two branches with the same bodies. Something like this should work:
winning_price_dict = {}
winning_mfg_dict = {}

for index, row in quote_df.iterrows():
    # Extract variables, saving a few accesses and reducing line lengths
    part = row['Part']
    quote = row['Proposed Quote']
    maker = row['Maker']

    if part not in winning_price_dict or winning_price_dict[part] > quote:
        # First time here or higher value found - reset to initial
        winning_price_dict[part] = quote
        winning_mfg_dict[part] = [maker]
    elif winning_price_dict[part] == quote:
        # Add one more item with same value
        # Not updating winning_price_dict - we already know it's proper
        winning_mfg_dict[part].append(maker)
A:
You can use groupby to get all quotes for one part
best_quotes = quote_df.groupby("part").apply(lambda df: df[df.price == df.price.min()])
Then you get a dataframe with the part number and the previous index as a MultiIndex. The lambda function selects only the quotes with the minimum price.
You can get the first dictionary with
winning_price_dict = {part : price for (part, _), price in best_quotes.price.iteritems()}
and the second one with
winning_mfg_dict = {part: list(best_quotes.loc[part]["maker"]) for part in best_quotes.index.get_level_values("part")}
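As a sketch of a vectorized alternative to the iterrows loop, assuming columns named Part, Maker, and Price as in the sample table:

import pandas as pd

quote_df = pd.DataFrame({
    "Part":  [1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4],
    "Maker": ["Alpha"] * 3 + ["Bravo"] * 4 + ["Charlie"] * 4,
    "Price": [1.00, 1.30, 1.25, 1.10, 1.02, 1.15, 1.19, 0.99, 1.10, 1.12, 1.19],
})

best = quote_df.groupby("Part")["Price"].min()  # best price per part
winning_price_dict = best.to_dict()

# Keep only rows matching each part's best price, then collect makers per part.
winners = quote_df[quote_df["Price"] == quote_df["Part"].map(best)]
winning_mfg_dict = winners.groupby("Part")["Maker"].apply(list).to_dict()

print(winning_price_dict)  # {1: 0.99, 2: 1.02, 3: 1.12, 4: 1.19}
print(winning_mfg_dict)    # {1: ['Charlie'], 2: ['Bravo'], 3: ['Charlie'], 4: ['Bravo', 'Charlie']}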
Q:
Determine Shortest Distance between two moving points
I am trying to produce the shortest distance between the two orbiting points (Earth and Jupiter) created by this orbital model. I've been working on this for quite some time but have been struggling.
Could someone suggest a way to create this output, or possibly create a line connecting the two points and display their distance relative to one another? Ideally in km, but AU is fine too.
I'm new to programming, so any help would be great.
Thank you in advance!
This is the code I've been using on Google Colab:
# -*- coding: utf-8 -*-
"""Untitled2.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/14vpwJ9ixq6YZGSxN2eBWay-hIPe__BxF
"""
#%% plot it
import matplotlib.pyplot as plt
from matplotlib import animation
import matplotlib
matplotlib.rcParams['animation.embed_limit'] = 2**128
#matplotlib.use("TkAgg") # for mac M1
from IPython.display import HTML
import matplotlib as mpl
mpl.rcParams.update(mpl.rcParamsDefault)
from tempfile import TemporaryFile
# First to define the Constants
G = 6.67e-11
Ms = 1.988e30 # Sun
Me = 5.972e24 # Earth
Mj = 1.898e27 # Jupiter
AU = 1.5e11
daysec = 24.0*60*60
e_ap_v = 29780 # Earth's velocity
j_ap_v = 13060 # Jupiter's velocity
gravconst_e = G*Me*Ms
gravconst_j = G*Mj*Ms
# To setup the starting conditions and locations on plot of each planet;
# Earth
xe,ye,ze = 0,(1*AU),0
xve,yve,zve = -e_ap_v,0,0
# Jupiter
xj,yj,zj = 0,(5.2*AU),0
xvj,yvj,zvj = -j_ap_v,0,0
# Sun
xs,ys,zs = 0,0,0
xvs,yvs,zvs = 0,0,0
t = 0.0
dt = 1*daysec # every frame move this time
xelist,yelist,zelist = [],[],[]
xslist,yslist,zslist = [],[],[]
xjlist,yjlist,zjlist = [],[],[]
# start simulation
while t<1*365*daysec:
################ earth #############
# compute G force on earth
#rx,ry,rz = xs - xe, ys - ye, zs - ze
rx,ry,rz = xe - xs, ye - ys, ze - zs
modr3_e = (rx**2+ry**2+rz**2)**1.5
fx_e = -gravconst_e*rx/modr3_e
fy_e = -gravconst_e*ry/modr3_e
fz_e = -gravconst_e*rz/modr3_e
# update quantities how is this calculated? F = ma -> a = F/m
xve += fx_e*dt/Me
yve += fy_e*dt/Me
zve += fz_e*dt/Me
# update position
xe += xve*dt
ye += yve*dt
ze += zve*dt
# save the position in list
xelist.append(xe)
yelist.append(ye)
zelist.append(ze)
################ Jupiter ##############
# compute G force on Jupiter
rx_j,ry_j,rz_j = xj - xs, yj - ys, zj - zs
modr3_j = (rx_j**2+ry_j**2+rz_j**2)**1.5
fx_j = -gravconst_j*rx_j/modr3_j
fy_j = -gravconst_j*ry_j/modr3_j
fz_j = -gravconst_j*rz_j/modr3_j
xvj += fx_j*dt/Mj
yvj += fy_j*dt/Mj
zvj += fz_j*dt/Mj
# update position
xj += xvj*dt
yj += yvj*dt
zj += zvj*dt
# add to list
xjlist.append(xj)
yjlist.append(yj)
zjlist.append(zj)
################ the sun ###########
# update quantities how is this calculated? F = ma -> a = F/m
xvs += -(fx_e+fx_j)*dt/Ms
yvs += -(fy_e+fy_j)*dt/Ms
zvs += -(fz_e+fz_j)*dt/Ms
# update position
xs += xvs*dt
ys += yvs*dt
zs += zvs*dt
xslist.append(xs)
yslist.append(ys)
zslist.append(zs)
# update dt
t +=dt
# to plot the data
import matplotlib.pyplot as plt
from matplotlib import animation
import matplotlib
matplotlib.rcParams['animation.embed_limit'] = 2**128
from IPython.display import HTML
fig, ax = plt.subplots(figsize=(5.8,5.8))
ax.set_aspect('equal')
# Defining the planets and orbit path attributes on the graph (size & colour)
line_e, = ax.plot([],[],'b',lw=3,)
point_e, = ax.plot([AU], [0], marker="o", markersize=5, markeredgecolor="green", markerfacecolor="blue")
text_e = ax.text(AU,0,'Earth')
line_j, = ax.plot([],[],'r',lw=3)
point_j, = ax.plot([5.2*AU], [0], marker="o", markersize=7, markeredgecolor="red", markerfacecolor="brown")
text_j = ax.text(5.2*AU,0,'Jupiter')
point_s, = ax.plot([0], [0], marker="o", markersize=10, markeredgecolor="orange", markerfacecolor="yellow")
text_s = ax.text(0,0,'Sun')
exdata,eydata = [],[] # earth track
sxdata,sydata = [],[] # sun track
jxdata,jydata = [],[] # Jupiters track
print(len(xelist))
def update(i):
    exdata.append(xelist[i])
    eydata.append(yelist[i])
    jxdata.append(xjlist[i])
    jydata.append(yjlist[i])

    line_e.set_data(exdata, eydata)
    point_e.set_data(xelist[i], yelist[i])
    text_e.set_position((xelist[i], yelist[i]))

    line_j.set_data(jxdata, jydata)
    point_j.set_data(xjlist[i], yjlist[i])
    text_j.set_position((xjlist[i], yjlist[i]))

    point_s.set_data(xslist[i], yslist[i])
    text_s.set_position((xslist[i], yslist[i]))

    ax.set_xlim(-5.8*AU, 5.8*AU)
    ax.set_ylim(-5.8*AU, 5.8*AU)

    return line_e, point_s, point_e, line_j, point_j, text_e, text_j, text_s,
anim = animation.FuncAnimation(fig,func=update,frames=len(xelist),interval=1,blit=True)
# Showing animation in Jupyter Notebook
from IPython.display import HTML
HTML(anim.to_jshtml())
A:
The connecting line between Jupiter and the Earth can be drawn using matplotlib's arrow method, which also allows adding arrowheads if desired. The distance can be added as a normal text label.
annotation = ax.arrow(0, AU, 0, (5.2 - 1) * AU)
text_a = ax.text(0, (5.2 + 1) / 2 * AU, '', ha='center', backgroundcolor='white')
In the update method, the distance has to be calculated using the Pythagorean theorem, and the arrow and label updated accordingly:
dist = sqrt((exdata[-1] - jxdata[-1])**2 + (eydata[-1] - jydata[-1])**2)
annotation.set_data(x=exdata[-1], y=eydata[-1], dx=jxdata[-1] - exdata[-1], dy=jydata[-1] - eydata[-1])
text_a.set_position(((exdata[-1] + jxdata[-1]) / 2, (eydata[-1] + jydata[-1]) / 2))
text_a.set_text(f"{dist/1e9:.1f} Mio km")
Whole code:
# -*- coding: utf-8 -*-
"""Untitled2.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/14vpwJ9ixq6YZGSxN2eBWay-hIPe__BxF
"""
#%% plot it
import matplotlib.pyplot as plt
from matplotlib import animation
import matplotlib
matplotlib.rcParams['animation.embed_limit'] = 2**128
#matplotlib.use("TkAgg") # for mac M1
from IPython.display import HTML
from math import sqrt
import matplotlib as mpl
mpl.rcParams.update(mpl.rcParamsDefault)
from tempfile import TemporaryFile
# First to define the Constants
G = 6.67e-11
Ms = 1.988e30 # Sun
Me = 5.972e24 # Earth
Mj = 1.898e27 # Jupiter
AU = 1.5e11
daysec = 24.0*60*60
e_ap_v = 29780 # Earth's velocity
j_ap_v = 13060 # Jupiter's velocity
gravconst_e = G*Me*Ms
gravconst_j = G*Mj*Ms
# To setup the starting conditions and locations on plot of each planet;
# Earth
xe,ye,ze = 0,(1*AU),0
xve,yve,zve = -e_ap_v,0,0
# Jupiter
xj,yj,zj = 0,(5.2*AU),0
xvj,yvj,zvj = -j_ap_v,0,0
# Sun
xs,ys,zs = 0,0,0
xvs,yvs,zvs = 0,0,0
t = 0.0
dt = 1*daysec # every frame move this time
xelist,yelist,zelist = [],[],[]
xslist,yslist,zslist = [],[],[]
xjlist,yjlist,zjlist = [],[],[]
# start simulation
while t<1*365*daysec:
################ earth #############
# compute G force on earth
#rx,ry,rz = xs - xe, ys - ye, zs - ze
rx,ry,rz = xe - xs, ye - ys, ze - zs
modr3_e = (rx**2+ry**2+rz**2)**1.5
fx_e = -gravconst_e*rx/modr3_e
fy_e = -gravconst_e*ry/modr3_e
fz_e = -gravconst_e*rz/modr3_e
# update quantities how is this calculated? F = ma -> a = F/m
xve += fx_e*dt/Me
yve += fy_e*dt/Me
zve += fz_e*dt/Me
# update position
xe += xve*dt
ye += yve*dt
ze += zve*dt
# save the position in list
xelist.append(xe)
yelist.append(ye)
zelist.append(ze)
################ Jupiter ##############
# compute G force on Jupiter
rx_j,ry_j,rz_j = xj - xs, yj - ys, zj - zs
modr3_j = (rx_j**2+ry_j**2+rz_j**2)**1.5
fx_j = -gravconst_j*rx_j/modr3_j
fy_j = -gravconst_j*ry_j/modr3_j
fz_j = -gravconst_j*rz_j/modr3_j
xvj += fx_j*dt/Mj
yvj += fy_j*dt/Mj
zvj += fz_j*dt/Mj
# update position
xj += xvj*dt
yj += yvj*dt
zj += zvj*dt
# add to list
xjlist.append(xj)
yjlist.append(yj)
zjlist.append(zj)
################ the sun ###########
# update quantities how is this calculated? F = ma -> a = F/m
xvs += -(fx_e+fx_j)*dt/Ms
yvs += -(fy_e+fy_j)*dt/Ms
zvs += -(fz_e+fz_j)*dt/Ms
# update position
xs += xvs*dt
ys += yvs*dt
zs += zvs*dt
xslist.append(xs)
yslist.append(ys)
zslist.append(zs)
# update dt
t +=dt
# to plot the data
import matplotlib.pyplot as plt
from matplotlib import animation
import matplotlib
matplotlib.rcParams['animation.embed_limit'] = 2**128
from IPython.display import HTML
fig, ax = plt.subplots(figsize=(5.8,5.8))
ax.set_aspect('equal')
# Defining the planets and orbit path attributes on the graph (size & colour)
line_e, = ax.plot([],[],'b',lw=3,)
point_e, = ax.plot([AU], [0], marker="o", markersize=5, markeredgecolor="green", markerfacecolor="blue")
text_e = ax.text(AU,0,'Earth')
line_j, = ax.plot([],[],'r',lw=3)
point_j, = ax.plot([5.2*AU], [0], marker="o", markersize=7, markeredgecolor="red", markerfacecolor="brown")
text_j = ax.text(5.2*AU,0,'Jupiter')
point_s, = ax.plot([0], [0], marker="o", markersize=10, markeredgecolor="orange", markerfacecolor="yellow")
text_s = ax.text(0,0,'Sun')
annotation = ax.arrow(0, AU, 0, (5.2 - 1) * AU)
text_a = ax.text(0, (5.2 + 1) / 2 * AU, '', ha='center', backgroundcolor='white')
exdata,eydata = [],[] # earth track
sxdata,sydata = [],[] # sun track
jxdata,jydata = [],[] # Jupiters track
print(len(xelist))
def update(i):
    exdata.append(xelist[i])
    eydata.append(yelist[i])
    jxdata.append(xjlist[i])
    jydata.append(yjlist[i])

    line_e.set_data(exdata, eydata)
    point_e.set_data(xelist[i], yelist[i])
    text_e.set_position((xelist[i], yelist[i]))

    line_j.set_data(jxdata, jydata)
    point_j.set_data(xjlist[i], yjlist[i])
    text_j.set_position((xjlist[i], yjlist[i]))

    point_s.set_data(xslist[i], yslist[i])
    text_s.set_position((xslist[i], yslist[i]))

    ax.set_xlim(-5.8*AU, 5.8*AU)
    ax.set_ylim(-5.8*AU, 5.8*AU)

    dist = sqrt((exdata[-1] - jxdata[-1])**2 + (eydata[-1] - jydata[-1])**2)
    annotation.set_data(x=exdata[-1], y=eydata[-1], dx=jxdata[-1] - exdata[-1], dy=jydata[-1] - eydata[-1])
    text_a.set_position(((exdata[-1] + jxdata[-1]) / 2, (eydata[-1] + jydata[-1]) / 2))
    text_a.set_text(f"{dist/1e9:.1f} Mio km")

    return line_e, point_s, point_e, line_j, point_j, text_e, text_j, text_s,
anim = animation.FuncAnimation(fig,func=update,frames=len(xelist),interval=1,blit=True)
# Showing animation in Jupyter Notebook
from IPython.display import HTML
HTML(anim.to_jshtml())
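If the goal is the single shortest Earth-Jupiter distance over the simulated year, rather than the live readout, it can be taken directly from the saved position lists after the simulation loop; a minimal sketch:

from math import sqrt

dists = [sqrt((xe - xj)**2 + (ye - yj)**2 + (ze - zj)**2)
         for (xe, ye, ze), (xj, yj, zj) in zip(zip(xelist, yelist, zelist),
                                               zip(xjlist, yjlist, zjlist))]
day = min(range(len(dists)), key=dists.__getitem__)  # index of the closest approach
print(f"closest approach: {dists[day]/1e9:.1f} million km (= {dists[day]/AU:.2f} AU) on day {day}")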
F = ma -> a = F/m\n xvs += -(fx_e+fx_j)*dt/Ms\n yvs += -(fy_e+fy_j)*dt/Ms\n zvs += -(fz_e+fz_j)*dt/Ms\n \n # update position\n xs += xvs*dt\n ys += yvs*dt \n zs += zvs*dt\n xslist.append(xs)\n yslist.append(ys)\n zslist.append(zs)\n \n # update dt\n t +=dt\n\n# to plot the data \nimport matplotlib.pyplot as plt\nfrom matplotlib import animation\nimport matplotlib\nmatplotlib.rcParams['animation.embed_limit'] = 2**128\nfrom IPython.display import HTML\n\nfig, ax = plt.subplots(figsize=(5.8,5.8))\nax.set_aspect('equal')\n\n# Defining the planets and orbit path attributes on the graph (size & colour)\nline_e, = ax.plot([],[],'b',lw=3,)\npoint_e, = ax.plot([AU], [0], marker=\"o\", markersize=5, markeredgecolor=\"green\", markerfacecolor=\"blue\")\ntext_e = ax.text(AU,0,'Earth')\n\nline_j, = ax.plot([],[],'r',lw=3)\npoint_j, = ax.plot([5.2*AU], [0], marker=\"o\", markersize=7, markeredgecolor=\"red\", markerfacecolor=\"brown\")\ntext_j = ax.text(5.2*AU,0,'Jupiter')\n\npoint_s, = ax.plot([0], [0], marker=\"o\", markersize=10, markeredgecolor=\"orange\", markerfacecolor=\"yellow\")\ntext_s = ax.text(0,0,'Sun')\n\nannotation = ax.arrow(0, AU, 0, (5.2 - 1) * AU)\ntext_a = ax.text(0, (5.2 + 1) / 2 * AU, '', ha='center', backgroundcolor='white')\n\nexdata,eydata = [],[] # earth track\nsxdata,sydata = [],[] # sun track\njxdata,jydata = [],[] # Jupiters track\n\nprint(len(xelist))\n\ndef update(i):\n exdata.append(xelist[i])\n eydata.append(yelist[i])\n \n jxdata.append(xjlist[i])\n jydata.append(yjlist[i])\n \n line_e.set_data(exdata,eydata)\n point_e.set_data(xelist[i],yelist[i])\n text_e.set_position((xelist[i],yelist[i]))\n \n line_j.set_data(jxdata,jydata)\n point_j.set_data(xjlist[i],yjlist[i])\n text_j.set_position((xjlist[i],yjlist[i]))\n \n point_s.set_data(xslist[i],yslist[i])\n text_s.set_position((xslist[i],yslist[i]))\n \n ax.set_xlim(-5.8*AU,5.8*AU)\n ax.set_ylim(-5.8*AU,5.8*AU)\n \n dist = sqrt((exdata[-1] - jxdata[-1])**2 + (eydata[-1] - jydata[-1])**2)\n \n annotation.set_data(x=exdata[-1], y=eydata[-1], dx=jxdata[-1] - exdata[-1], dy=jydata[-1] - eydata[-1])\n text_a.set_position(((exdata[-1] + jxdata[-1]) / 2, (eydata[-1] + jydata[-1]) / 2))\n text_a.set_text(f\"{dist/1e9:.1f} Mio km\")\n \n return line_e,point_s,point_e,line_j,point_j,text_e,text_j,text_s,\n\nanim = animation.FuncAnimation(fig,func=update,frames=len(xelist),interval=1,blit=True)\n\n\n# Showing animation in Jupyter Notebook \nfrom IPython.display import HTML\nHTML(anim.to_jshtml())\n\n"
] |
[
0
] |
[] |
[] |
[
"animation",
"matplotlib",
"matplotlib_animation",
"physics",
"python"
] |
stackoverflow_0074535579_animation_matplotlib_matplotlib_animation_physics_python.txt
|
Q:
Python JSON assign data from API
I have a .py file that reads data from the WordPress API and passes values to the fields of another API. When the values are simple, I have no problem, but I don't know how to handle this case:
When I read the state field from the API, the value comes back as a code instead of the text value. For example, when the text value in WordPress is Barcelona, the API returns B, and I need the returned value to be Barcelona.
One example of code with simple fields values:
oClienteT["Direcciones"] = []
oClienteT["Telefono"] = oClienteW["billing"]["phone"]
oClienteT["NombreFiscal"] = oClienteW["first_name"] " " oClienteW["last_name"]
oClienteT["Direcciones"].append( {
"Codigo" : oClienteW["id"],
"Nombre" : oClienteW["billing"]["first_name"],
"Apellidos" : oClienteW["billing"]["last_name"],
"Direccion" : oClienteW["billing"]["address_1"],
"Direccion2" : oClienteW["billing"]["address_2"],
"Poblacion" : oClienteW["billing"]["state"],
"Provincia" : oClienteW["billing"]["city"]
})
When the billing city is Madrid and the billing state is Madrid, WordPress returns Madrid and M.
I need it so that when the state is Madrid, it returns Madrid, and so on.
A:
Make sure to convert to a JSON object before accessing fields (data = json.loads(json_str))
response = { "billing": { "address_1": "C/GUSTAVO ADOLFO BECQUER, 4", "city": "SEVILLA", "state": "SE"}}
print(response["billing"].get("address_1", None))
|
Python JSON assign data from API
|
I have a .py file that reads data from the WordPress API and passes values to the fields of another API. When the values are simple, I have no problem, but I don't know how to handle this case:
When I read the state field from the API, the value comes back as a code instead of the text value. For example, when the text value in WordPress is Barcelona, the API returns B, and I need the returned value to be Barcelona.
One example of code with simple fields values:
oClienteT["Direcciones"] = []
oClienteT["Telefono"] = oClienteW["billing"]["phone"]
oClienteT["NombreFiscal"] = oClienteW["first_name"] " " oClienteW["last_name"]
oClienteT["Direcciones"].append( {
"Codigo" : oClienteW["id"],
"Nombre" : oClienteW["billing"]["first_name"],
"Apellidos" : oClienteW["billing"]["last_name"],
"Direccion" : oClienteW["billing"]["address_1"],
"Direccion2" : oClienteW["billing"]["address_2"],
"Poblacion" : oClienteW["billing"]["state"],
"Provincia" : oClienteW["billing"]["city"]
})
When the billing city is Madrid and the billing state is Madrid, WordPress returns Madrid and M.
I need it so that when the state is Madrid, it returns Madrid, and so on.
|
[
"Make sure to convert to a JSON object before accessing fields (data = json.loads(json_str))\nresponse = { \"billing\": { \"address_1\": \"C/GUSTAVO ADOLFO BECQUER, 4\", \"city\": \"SEVILLA\", \"state\": \"SE\"}}\n\nprint(response[\"billing\"].get(\"address_1\", None))\n\n"
] |
[
0
] |
[] |
[] |
[
"json",
"python",
"wordpress"
] |
stackoverflow_0074535916_json_python_wordpress.txt
|
Q:
Django save form with foreign key
I am currently trying to create a form where users get to fill in their details after creating an account. The idea is that every user, after registration, gets redirected to this form page to fill it out. To achieve this, I'm using foreign keys. However, it doesn't save to the database.
models.py
class User(AbstractUser):
pass
def __str__(self):
return self.username
class Detail(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE, null=False, default="")
first_name = models.CharField(max_length=200, default="")
last_name = models.CharField(max_length=255, default="")
class Meta:
verbose_name_plural = "Detail"
def __str__(self):
return self.first_name+ " "+self.last_name
forms.py
class Details(forms.ModelForm):
class Meta:
model = Detail
fields = "__all__"
widgets={
"user": forms.TextInput()
}
views.py
def details(request):
if request.method =="POST":
form = Details(request.POST)
if form.is_valid():
detail = form.save(commit=False)
detail.user = request.user
detail.first_name = detail.first_name.lower()
detail.last_name = detail.last_name.lower()
detail.save()
return redirect("admin:index")
else:
form = Details(initial={"user":request.user.username})
return render(request, "details.html", {"form":form})
A:
You need to exclude the user field from the ModelForm, like this:
form.py
class Details(forms.ModelForm):
class Meta:
model = Detail
fields = "__all__"
exclude =["user"]
views.py
def details(request):
if request.method =="POST":
form = Details(request.POST)
if form.is_valid():
detail = form.save(commit=False)
detail.user = request.user
detail.first_name = detail.first_name.lower()
detail.last_name = detail.last_name.lower()
detail.save()
return redirect("admin:index")
else:
form = Details()
return render(request, "details.html", {"form":form})
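As a design note: if every user should have exactly one Detail row, a OneToOneField is the more idiomatic choice than ForeignKey. A sketch, not a drop-in replacement for your existing migrations:
from django.conf import settings
from django.db import models

class Detail(models.Model):
    # One Detail per user; detail.user and user.detail both work
    user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    first_name = models.CharField(max_length=200, default="")
    last_name = models.CharField(max_length=255, default="")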
|
Django save form with foreign key
|
I am currently trying to create a form where users get to fill in their details after creating an account. The idea is that every user, after registration, gets redirected to this form page to fill it out. To achieve this, I'm using foreign keys. However, it doesn't save to the database.
models.py
class User(AbstractUser):
pass
def __str__(self):
return self.username
class Detail(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE, null=False, default="")
first_name = models.CharField(max_length=200, default="")
last_name = models.CharField(max_length=255, default="")
class Meta:
verbose_name_plural = "Detail"
def __str__(self):
return self.first_name+ " "+self.last_name
forms.py
class Details(forms.ModelForm):
class Meta:
model = Detail
fields = "__all__"
widgets={
"user": forms.TextInput()
}
views.py
def details(request):
if request.method =="POST":
form = Details(request.POST)
if form.is_valid():
detail = form.save(commit=False)
detail.user = request.user
detail.first_name = detail.first_name.lower()
detail.last_name = detail.last_name.lower()
detail.save()
return redirect("admin:index")
else:
form = Details(initial={"user":request.user.username})
return render(request, "details.html", {"form":form})
|
[
"You need to exclue user field from ModelForm like this\nform.py\nclass Details(forms.ModelForm):\n class Meta:\n model = Detail\n fields = \"__all__\"\n exclude =[\"user\"]\n\nviews.py\ndef details(request):\n if request.method ==\"POST\":\n form = Details(request.POST)\n if form.is_valid():\n detail = form.save(commit=False)\n detail.user = request.user\n detail.first_name = detail.first_name.lower()\n detail.last_name = detail.last_name.lower()\n detail.save()\n return redirect(\"admin:index\")\n else:\n form = Details()\n return render(request, \"details.html\", {\"form\":form})\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_forms",
"django_models",
"python"
] |
stackoverflow_0074535952_django_django_forms_django_models_python.txt
|
Q:
How to create Great Expectations checkpoint for Pandas dataframe?
My datasource config looks like:
datasource_config = {
"name": "example_datasource",
"class_name": "Datasource",
"module_name": "great_expectations.datasource",
"execution_engine": {
"module_name": "great_expectations.execution_engine",
"class_name": "PandasExecutionEngine",
},
"data_connectors": {
"default_runtime_data_connector_name": {
"class_name": "RuntimeDataConnector",
"module_name": "great_expectations.datasource.data_connector",
"batch_identifiers": ["default_identifier_name"],
},
},
}
context.add_datasource(**datasource_config)
My Pandas dataframe and batch_requests were successfully created by the following commands:
...
df = read_csv_pandas(file_path="../done/my_file.txt",
sep="|",
header=0,
quoting=csv.QUOTE_ALL)
batch_request = RuntimeBatchRequest(
datasource_name="example_datasource",
data_connector_name="default_runtime_data_connector_name",
data_asset_name="MyDataAsset",
runtime_parameters={"batch_data": df},
batch_identifiers={"default_identifier_name": "default_identifier"}
)
My expectation suite:
expectation_suite_name = "My_validations"
suite = context.create_expectation_suite(expectation_suite_name, overwrite_existing=True)
Then I'm creating the validator.
validator = context.get_validator(
batch_request=batch_request, expectation_suite_name=expectation_suite_name
)
validator.head(2)
The last command successfully prints 2 rows of my dataframe.
Then I'm adding expectations to my suite.
validator.expect_table_columns_to_match_ordered_list(['last_name', 'first_name', 'sex'])
validator.expect_column_values_to_be_in_set("sex", ["male", "female", "other", "unknown"])
validator.save_expectation_suite(discard_failed_expectations=False)
Then I'm generating data docs:
suite_identifier = ExpectationSuiteIdentifier(expectation_suite_name=expectation_suite_name)
context.build_data_docs(resource_identifiers=[suite_identifier])
context.open_data_docs(resource_identifier=suite_identifier)
My checkpoint looks like:
name: my_checkpoint_2
config_version: 1
class_name: SimpleCheckpoint
validations:
- batch_request:
datasource_name: example_datasource
data_connector_name: default_runtime_data_connector_name
data_asset_name: MyDataAsset
runtime_parameters:
batch_data: {df}
batch_identifiers:
default_identifier_name: default_identifier
expectation_suite_name: My_validations
But this command
context.run_checkpoint(checkpoint_name="my_checkpoint_2")
produces the error:
ValueError: RuntimeDataBatchSpec must provide a Pandas DataFrame or PandasBatchData object.
A:
Great Expectations has multiple execution engines. You are specifying the PandasExecutionEngine, so the batch data must be a pandas DataFrame: either change the execution engine to SparkDFExecutionEngine or cast your DataFrame to pandas.
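Note that the YAML checkpoint above cannot embed an in-memory DataFrame: the literal {df} is never substituted, which is why the batch is not a pandas DataFrame at run time. One workaround, sketched here and not guaranteed across every Great Expectations version, is to keep the checkpoint free of runtime parameters and supply the RuntimeBatchRequest when running it:
# Sketch: pass the in-memory batch at run time instead of in the YAML.
# `context` and `batch_request` are the objects built earlier in the question.
results = context.run_checkpoint(
    checkpoint_name="my_checkpoint_2",
    validations=[
        {
            "batch_request": batch_request,
            "expectation_suite_name": "My_validations",
        }
    ],
)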
|
How to create Great Expectations checkpoint for Pandas dataframe?
|
My datasource config looks like:
datasource_config = {
"name": "example_datasource",
"class_name": "Datasource",
"module_name": "great_expectations.datasource",
"execution_engine": {
"module_name": "great_expectations.execution_engine",
"class_name": "PandasExecutionEngine",
},
"data_connectors": {
"default_runtime_data_connector_name": {
"class_name": "RuntimeDataConnector",
"module_name": "great_expectations.datasource.data_connector",
"batch_identifiers": ["default_identifier_name"],
},
},
}
context.add_datasource(**datasource_config)
My Pandas dataframe and batch_requests were successfully created by the following commands:
...
df = read_csv_pandas(file_path="../done/my_file.txt",
sep="|",
header=0,
quoting=csv.QUOTE_ALL)
batch_request = RuntimeBatchRequest(
datasource_name="example_datasource",
data_connector_name="default_runtime_data_connector_name",
data_asset_name="MyDataAsset",
runtime_parameters={"batch_data": df},
batch_identifiers={"default_identifier_name": "default_identifier"}
)
My expectation suite:
expectation_suite_name = "My_validations"
suite = context.create_expectation_suite(expectation_suite_name, overwrite_existing=True)
Then I'm creating the validator.
validator = context.get_validator(
batch_request=batch_request, expectation_suite_name=expectation_suite_name
)
validator.head(2)
The last command successfully prints 2 rows of my dataframe.
Then I'm adding expectations to my suite.
validator.expect_table_columns_to_match_ordered_list(['last_name', 'first_name', 'sex'])
validator.expect_column_values_to_be_in_set("sex", ["male", "female", "other", "unknown"])
validator.save_expectation_suite(discard_failed_expectations=False)
Then I'm generating data docs:
suite_identifier = ExpectationSuiteIdentifier(expectation_suite_name=expectation_suite_name)
context.build_data_docs(resource_identifiers=[suite_identifier])
context.open_data_docs(resource_identifier=suite_identifier)
My checkpoint looks like:
name: my_checkpoint_2
config_version: 1
class_name: SimpleCheckpoint
validations:
- batch_request:
datasource_name: example_datasource
data_connector_name: default_runtime_data_connector_name
data_asset_name: MyDataAsset
runtime_parameters:
batch_data: {df}
batch_identifiers:
default_identifier_name: default_identifier
expectation_suite_name: My_validations
But this command
context.run_checkpoint(checkpoint_name="my_checkpoint_2")
produces the error:
ValueError: RuntimeDataBatchSpec must provide a Pandas DataFrame or PandasBatchData object.
|
[
"Great expectations has multiple execution engines. You are specifying the PandasExecutionEngine. The execution engine should be changed to SparkDFExecutionEngine or you should cast your dataframe to Pandas.\n"
] |
[
0
] |
[] |
[] |
[
"great_expectations",
"pandas",
"python",
"validation"
] |
stackoverflow_0069495245_great_expectations_pandas_python_validation.txt
|
Q:
I want to run a python script via integrated terminal using Ctrl + Enter in Visual Studio Code
Suppose I have foo.py.
I have to right click foo.py -> Open as Integrated Terminal -> (in terminal, python foo.py)
Is there a way to press Ctrl + Enter and have it do the above?
And can you select a single function and run only that, a bit like a Jupyter notebook?
A:
See the docs: Keyboard Shortcuts editor
The commands you're looking for are:
Python: Run Python File in Terminal (python.execInTerminal)
Python: Run Selection/Line in Python Terminal (python.execSelectionInTerminal)
Although note that they work differently: the first runs the file like you want, and the second one launches a Python REPL then runs the selection/line there.
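If you specifically want Ctrl + Enter, you can add an entry like this to your keybindings.json (a sketch; the chord may clash with other extensions):
// In keybindings.json (the file is a JSON array of bindings)
{
    "key": "ctrl+enter",
    "command": "python.execInTerminal",
    "when": "editorTextFocus && editorLangId == 'python'"
}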
|
I want to run a python script via integrated terminal using Ctrl + Enter in Visual Studio Code
|
Suppose I have foo.py.
I have to right click foo.py -> Open as Integrated Terminal -> (in terminal, python foo.py)
Is there a way to press Ctrl + Enter and have it do the above?
And can you select a single function and run only that, a bit like a Jupyter notebook?
|
[
"See the docs: Keyboard Shortcuts editor\nThe commands you're looking for are:\n\nPython: Run Python File in Terminal (python.execInTerminal)\nPython: Run Selection/Line in Python Terminal (python.execSelectionInTerminal)\n\nAlthough note that they work differently: the first runs the file like you want, and the second one launches a Python REPL then runs the selection/line there.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"visual_studio_code"
] |
stackoverflow_0074536293_python_visual_studio_code.txt
|
Q:
Web Page Icon's Changing Problem in Multiple Page App
Version: Streamlit 1.13.0
Structure:
Main folder > 1__main.py
in pages folder > 2_❌_project.py
In 1__main.py:
st.set_page_config(
page_title="Multipage App",
page_icon=)
When I press ❌ in the sidebar, the icon disappears from the browser tab and the Streamlit icon appears instead.
How can I solve this problem? Whichever icon I press in the sidebar, the tab icon should stay the same; that is what I want to achieve.
Example scenario:
In the beginning like this:
Then, I click on Application:
Folder icon disappears, streamlit icon comes (Unwanted situation):
A:
In every page file, every piece of code, including the page_config, should be inside a function; you can call it main and put all the page's code into it.
For example:
# Homepage
import streamlit as st
...
def main():
st.set_page_config("Replace me with your page config")
# The rest of your code
...
if __name__ == "__main__":
main()
Note: Apply same rule to the rest of your page files.
|
Web Page Icon's Changing Problem in Multiple Page App
|
Version: Streamlit 1.13.0
Structure:
Main folder > 1__main.py
in pages folder > 2_❌_project.py
In 1__main.py:
st.set_page_config(
page_title="Multipage App",
page_icon=)
When I press ❌ in the sidebar, the icon disappears from the browser tab and the Streamlit icon appears instead.
How can I solve this problem? Whichever icon I press in the sidebar, the tab icon should stay the same; that is what I want to achieve.
Example scenario:
In the beginning like this:
Then, I click on Application:
Folder icon disappears, streamlit icon comes (Unwanted situation):
|
[
"In every page file, every piece of code including the page_config should be in a function, you can name that the main function and pass every code into it:\nfor e.g:\n# Homepage\n\nimport streamlit as st\n...\n\ndef main()\n st.set_page_config(\"Replace me with your page config\")\n # The rest of your code\n ...\n\nif __name__ == \"__main__\":\n main()\n\nNote: Apply same rule to the rest of your page files.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"streamlit"
] |
stackoverflow_0074521013_python_streamlit.txt
|
Q:
How to retrieve query parameter from URL after RedirectResponse in FastAPI?
I'm implementing an OAuth authorization code flow.
What I want is to retrieve the code that shows in the URL after redirection. I've done some research but haven't found anything really helpful. I think that if I can get the current URL in the browser after the RedirectResponse, I can then extract its code parameter with a Python module like urllib.parse. Or does FastAPI have a way to help me get that URL after the RedirectResponse? I saw the Background Tasks in their documentation, but I don't know if they can actually help me retrieve the URL after the redirection. I tried using the selenium library after having seen this, but it opens up a new window, and when I try to apply the driver.get('put_your_site_name') suggested in the comments, it just takes too long.
Here's the code excerpt which is redirecting me to the url in the browser with the code as a parameter :
from uuid import uuid4
from oauthlib.oauth2 import WebApplicationClient
from fastapi import APIRouter, Request, Response
from fastapi.responses import RedirectResponse
router = APIRouter()
@router.get("/install/")
async def install(request: Request) -> Response:
"""Trigger the client identification process."""
client_id = "xxx"
client = WebApplicationClient(client_id)
state = str(uuid4())
authorization_url = f"https://api-url.com/auth/authorize?client_id={client_id}"
url = client.prepare_request_uri(
authorization_url,
redirect_uri="http://127.0.0.1:8000/callback/",
scope=["read:user"],
state=state,
)
return RedirectResponse(url=url)
With the above, I'm redirected to the callback url with the authorization code as parameter : http://127.0.0.1:8000/callback/?code=random-string-xyz.
I found also this which is quite close to what I'm looking for, except I'm trying to get the current path only after the redirection.
I've also checked FastApi query parameters part and tried with the following :
import typing
from uuid import uuid4
from oauthlib.oauth2 import WebApplicationClient
from fastapi import APIRouter, Request, Response
from fastapi.responses import RedirectResponse
router = APIRouter()
@router.get("/install/")
async def install(request: Request, code : typing.Optional[str] = None) -> Response:
"""Trigger the client identification process."""
client_id = "xxx"
client = WebApplicationClient(client_id)
state = str(uuid4())
authorization_url = f"https://api-url.com/auth/authorize?client_id={client_id}"
url = client.prepare_request_uri(
authorization_url,
redirect_uri="http://127.0.0.1:8000/callback/",
scope=["read:user"],
state=state,
)
print("\n code : ", code, "\n")
return RedirectResponse(url=url)
Output: code : None, I guess because the code is only returned after the redirection?
How do I get that URL programmatically and then retrieve the code? Or is there any other way to get it?
A:
You should instead retrieve the value of the code parameter inside the /callback, not /install, endpoint, since that is the endpoint to which you are being redirected—according to the link provided in your question:
http://127.0.0.1:8000/callback/?code=random-string-xyz
^^^^^^^^^
In FastAPI, you can get query parameters by declaring the parameters in your endpoint. As per the documentation :
When you declare other function parameters that are not part of the
path parameters, they are automatically interpreted as "query"
parameters.
Example:
@router.get("/callback")
async def install(code : str = None):
# ...
Alternatively, you can use Starlette's Request object directly (see Starlette's documentation as well), as described in this answer, as well as here and here.
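For completeness, a minimal sketch of the Request-based variant (the endpoint path is taken from the question's redirect URI):
from fastapi import APIRouter, Request

router = APIRouter()

@router.get("/callback/")
async def callback(request: Request):
    # Query parameters are exposed as a dict-like mapping on the Request
    code = request.query_params.get("code")
    state = request.query_params.get("state")
    return {"code": code, "state": state}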
|
How to retrieve query parameter from URL after RedirectResponse in FastAPI?
|
I'm implementing an OAuth authorization code flow.
What I want is to retrieve the code that shows in the URL after redirection. I've done some research but haven't found anything really helpful. I think that if I can get the current URL in the browser after the RedirectResponse, I can then extract its code parameter with a Python module like urllib.parse. Or does FastAPI have a way to help me get that URL after the RedirectResponse? I saw the Background Tasks in their documentation, but I don't know if they can actually help me retrieve the URL after the redirection. I tried using the selenium library after having seen this, but it opens up a new window, and when I try to apply the driver.get('put_your_site_name') suggested in the comments, it just takes too long.
Here's the code excerpt which is redirecting me to the url in the browser with the code as a parameter :
from uuid import uuid4
from oauthlib.oauth2 import WebApplicationClient
from fastapi import APIRouter, Request, Response
from fastapi.responses import RedirectResponse
router = APIRouter()
@router.get("/install/")
async def install(request: Request) -> Response:
"""Trigger the client identification process."""
client_id = "xxx"
client = WebApplicationClient(client_id)
state = str(uuid4())
authorization_url = f"https://api-url.com/auth/authorize?client_id={client_id}"
url = client.prepare_request_uri(
authorization_url,
redirect_uri="http://127.0.0.1:8000/callback/",
scope=["read:user"],
state=state,
)
return RedirectResponse(url=url)
With the above, I'm redirected to the callback url with the authorization code as parameter : http://127.0.0.1:8000/callback/?code=random-string-xyz.
I found also this which is quite close to what I'm looking for, except I'm trying to get the current path only after the redirection.
I've also checked FastApi query parameters part and tried with the following :
import typing
from uuid import uuid4
from oauthlib.oauth2 import WebApplicationClient
from fastapi import APIRouter, Request, Response
from fastapi.responses import RedirectResponse
router = APIRouter()
@router.get("/install/")
async def install(request: Request, code : typing.Optional[str] = None) -> Response:
"""Trigger the client identification process."""
client_id = "xxx"
client = WebApplicationClient(client_id)
state = str(uuid4())
authorization_url = f"https://api-url.com/auth/authorize?client_id={client_id}"
url = client.prepare_request_uri(
authorization_url,
redirect_uri="http://127.0.0.1:8000/callback/",
scope=["read:user"],
state=state,
)
print("\n code : ", code, "\n")
return RedirectResponse(url=url)
Output: code : None, I guess because the code is only returned after the redirection?
How do I get that URL programmatically and then retrieve the code? Or is there any other way to get it?
|
[
"You should instead retrieve the value of the code parameter inside the /callback, not /install, endpoint, since that is the endpoint to which you are being redirected—according to the link provided in your question:\nhttp://127.0.0.1:8000/callback/?code=random-string-xyz\n ^^^^^^^^^\n\nIn FastAPI, you can get query parameters by declaring the parameters in your endpoint. As per the documentation :\n\nWhen you declare other function parameters that are not part of the\npath parameters, they are automatically interpreted as \"query\"\nparameters.\n\nExample:\n@router.get(\"/callback\")\nasync def install(code : str = None):\n # ...\n\nAlternatively, you can use Starlette's Request object directly (see Starlette's documentation as well), as described in this answer, as well as here and here.\n"
] |
[
1
] |
[] |
[] |
[
"fastapi",
"oauth",
"python",
"redirect"
] |
stackoverflow_0074448083_fastapi_oauth_python_redirect.txt
|
Q:
Comparing row-wise DATES and substituting the values with 1 or 0
I have one column to compare with 100 other columns. The columns I need to compare are all DATETIME.
The problem statement is as follows:
If the date in "UTIL_DATE" is greater than or equal to the date in another column, substitute that cell's value with 1.
Else, 0.
I have attached an example image below for reference.
For example:
Since UTIL_DATE "31-12-2021" is greater than "23-09-2021", we change that row's value in column "Col3" to 1.
Since there is NaT in Col1, Col2 (and so on), those specific cells cannot be compared with UTIL_DATE. Hence, 0.
And the same logic iterates over all the other rows.
CURRENTLY
EXPECTED
I have tried a try-except loop. However, it is taking more than 1 hour 30 mins. I need to improve the performance.
Attached the code snippet for your reference:
for idx, row in df.iterrows(): # row is each row in df and idx is the index for each row
for i in format_cols: # format_cols is the list of columns to be compared with the UTIL_DATE column
ifor_val = 0 # taking ifor_val as 0 by default
try:
if (pd.to_datetime(row["Util_Date"]) >= pd.to_datetime(row[i])):
ifor_val = 1 # if Util_Date >= column "i" date, then map it to 1. Else 0
except:
ifor_val = 0
df.loc[idx,i]=ifor_val
A:
can you try this:
df = df.set_index('UTIL_DATE')
df = df.le(df.index, axis=0)  # cell <= UTIL_DATE, i.e. UTIL_DATE >= cell
df = df.replace({True: 1, False: 0})
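Note this assumes UTIL_DATE and the comparison columns already hold datetime values; if they are strings, convert them before the set_index step. A conversion sketch (dayfirst=True to match dates like 31-12-2021; format_cols is the list from the question):
import pandas as pd

# Parse strings into datetimes; unparseable cells become NaT,
# which compares as False and therefore ends up as 0.
df['UTIL_DATE'] = pd.to_datetime(df['UTIL_DATE'], dayfirst=True)
df[format_cols] = df[format_cols].apply(pd.to_datetime, dayfirst=True)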
|
Comparing row-wise DATES and substituting the values with 1 or 0
|
I have one column to compare with 100 other columns. The columns I need to compare are all DATETIME.
The problem statement is as follows:
If the date in "UTIL_DATE" is greater than or equal to the date in another column, substitute that cell's value with 1.
Else, 0.
I have attached an example image below for reference.
For example:
Since UTIL_DATE "31-12-2021" is greater than "23-09-2021", we change that row's value in column "Col3" to 1.
Since there is NaT in Col1, Col2 (and so on), those specific cells cannot be compared with UTIL_DATE. Hence, 0.
And the same logic iterates over all the other rows.
CURRENTLY
EXPECTED
I have tried a try-except loop. However, it is taking more than 1 hour 30 mins. I need to improve the performance.
Attached the code snippet for your reference:
for idx, row in df.iterrows(): # row is each row in df and idx is the index for each row
for i in format_cols: # format_cols is the list of columns to be compared with the UTIL_DATE column
ifor_val = 0 # taking ifor_val as 0 by default
try:
if (pd.to_datetime(row["Util_Date"]) >= pd.to_datetime(row[i])):
ifor_val = 1 # if Util_Date >= column "i" date, then map it to 1. Else 0
except:
ifor_val = 0
df.loc[idx,i]=ifor_val
|
[
"can you try this:\ndf=df.set_index('UTIL_DATE')\ndf=df.ge(df.index, axis=0)\ndf=df.replace({True:1,False:0})\n\n"
] |
[
2
] |
[] |
[] |
[
"dataframe",
"datetime",
"numpy",
"pandas",
"python"
] |
stackoverflow_0074536364_dataframe_datetime_numpy_pandas_python.txt
|
Q:
merge rows into new column value
I am taking a df that is all duplicate value pairs; from the 2nd row of each pair I take the 2nd column value and add it to the first row in a new column called 'new_Amt', inserting NaN for the second row of the new column. Afterwards I'll drop all rows that contain NaN.
so the dataframe look like this:
       ref_num  Amt  fy  fund_type
row 1        1   10  21  IX
row 2        1   20  21  IX
row 3        2    5  22  III
row 4        2   15  22  III
row 5        3   12  20  VI
row 6        3    7  20  VI
after it should look like this:
       ref_num  Amt  new_Amt  fy  fund_type
row 1        1   10       20  21  IX
row 2        1   20      NaN  21  IX
row 3        2    5       15  22  III
row 4        2   15      NaN  22  III
row 5        3   12        7  20  VI
row 6        3    7      NaN  20  VI
I thought a lambda function could work, where I'd have the else statement return NaN for all the second duplicate rows, but I couldn't figure out the syntax.
df['new_Amt'] = df.apply(lambda x : x['Amt'] if x['ref_num'] == x['ref_num'] else x['new_Amt'] is NaN)
A:
Why not do both operations at once (resolve duplicates as you describe and drop the redundant rows)?
k = 'ref_num'
newdf = df.drop_duplicates(subset=k, keep='first').merge(
df.drop_duplicates(subset=k, keep='last'), on='ref_num', suffixes=('', '_new'))
>>> newdf
ref_num Amt Amt_new
0 1 10 20
1 2 5 15
2 3 12 7
Another possibility:
gb = df.groupby('ref_num')['Amt']
newdf = pd.concat([gb.first(), gb.last()], axis=1, keys=['Amt', 'new_Amt']).reset_index()
>>> newdf
ref_num Amt new_Amt
0 1 10 20
1 2 5 15
2 3 12 7
Note: in your question it is not clear if 'row 1', 'row 2' etc. are indices, meant to be kept or not, etc. If they are desired in the final output, please let us know if and how they should appear.
Addendum: what if df has more columns?
Here is a way to keep the whole "first" rows, and only add the column new_Amt:
gb = df.groupby('ref_num')
newdf = pd.concat([gb.first(), gb['Amt'].last().to_frame('new_Amt')], axis=1).reset_index()
Example:
df = df.rename_axis(index='foo').reset_index()
# code above
>>> newdf
ref_num foo Amt new_Amt
0 1 row 1 10 20
1 2 row 3 5 15
2 3 row 5 12 7
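If you want the intermediate new_Amt column with NaN on the second row of each pair, exactly as in the expected table, a groupby/shift sketch:
# Pull the next row's Amt within each ref_num group into new_Amt;
# the second row of each pair gets NaN, which dropna then removes.
df['new_Amt'] = df.groupby('ref_num')['Amt'].shift(-1)
result = df.dropna(subset=['new_Amt'])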
|
merge rows into new column value
|
I am taking a df that is all duplicate value pairs; from the 2nd row of each pair I take the 2nd column value and add it to the first row in a new column called 'new_Amt', inserting NaN for the second row of the new column. Afterwards I'll drop all rows that contain NaN.
so the dataframe look like this:
       ref_num  Amt  fy  fund_type
row 1        1   10  21  IX
row 2        1   20  21  IX
row 3        2    5  22  III
row 4        2   15  22  III
row 5        3   12  20  VI
row 6        3    7  20  VI
after it should look like this:
       ref_num  Amt  new_Amt  fy  fund_type
row 1        1   10       20  21  IX
row 2        1   20      NaN  21  IX
row 3        2    5       15  22  III
row 4        2   15      NaN  22  III
row 5        3   12        7  20  VI
row 6        3    7      NaN  20  VI
I thought a lambda function could work, where I'd have the else statement return NaN for all the second duplicate rows, but I couldn't figure out the syntax.
df['new_Amt'] = df.apply(lambda x : x['Amt'] if x['ref_num'] == x['ref_num'] else x['new_Amt'] is NaN)
|
[
"Why not do both operations at once (resolve duplicates as you describe and drop the redundant rows)?\nk = 'ref_num'\nnewdf = df.drop_duplicates(subset=k, keep='first').merge(\n df.drop_duplicates(subset=k, keep='last'), on='ref_num', suffixes=('', '_new'))\n>>> newdf\n ref_num Amt Amt_new\n0 1 10 20\n1 2 5 15\n2 3 12 7\n\nAnother possibility:\ngb = df.groupby('ref_num')['Amt']\nnewdf = pd.concat([gb.first(), gb.last()], axis=1, keys=['Amt', 'new_Amt']).reset_index()\n>>> newdf\n ref_num Amt new_Amt\n0 1 10 20\n1 2 5 15\n2 3 12 7\n\nNote: in your question it is not clear if 'row 1', 'row 2' etc. are indices, meant to be kept or not, etc. If they are desired in the final output, please let us know if and how they should appear.\nAddendum: what if df has more columns?\nHere is a way to keep the whole \"first\" rows, and only add the column new_Amt:\ngb = df.groupby('ref_num')\nnewdf = pd.concat([gb.first(), gb['Amt'].last().to_frame('new_Amt')], axis=1).reset_index()\n\nExample:\ndf = df.rename_axis(index='foo').reset_index()\n\n# code above\n\n>>> newdf\n ref_num foo Amt new_Amt\n0 1 row 1 10 20\n1 2 row 3 5 15\n2 3 row 5 12 7\n\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074536138_pandas_python.txt
|
Q:
Zip and merge the string
'''
return a new string containing the 2 strings interwoven or zipped together.
ex:
Input:
'hi'
'ha'
Output:
hhia
Input:
'lzr','iad'
output:
lizard
def interleave(str1, str2):
ls1 = zip(str1, str2)
print(list(ls1))
# prints [('h', 'h'), ('i', 'a')] as expected
ls2 = [''.join(x) for x in ls1]
print(ls2)
# prints below. not as expected
# output: []
print(list(ls2))
# prints empty list. not as expected.
# output: []
# Below prints hhia correctly
ms1 = ''.join(''.join(x) for x in zip(str1,str2))
print(ms1)
# Output: hhia
interleave('hi','ha')
It works when I do it in a single line (ms1).
When I split it up, it doesn't work (ls1, ls2).
Can anyone advise on the root cause?
'''
A:
Because zip() returns a Python iterator. Once you consume the values of ls1 in the third line (print(list(ls1))), ls1 becomes exhausted.
Try this case, it would be more clear:
def interleave(str1, str2):
ls1 = zip(str1, str2)
print(list(ls1)) # prints [('h', 'h'), ('i', 'a')] as expected
print(list(ls1)) # empty
ls1 = zip(str1, str2)
print(list(ls1)) # prints [('h', 'h'), ('i', 'a')] as expected
interleave('hi','ha')
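One way to sidestep the exhaustion is to materialize the zip into a list once and reuse that; a sketch:
def interleave(str1, str2):
    pairs = list(zip(str1, str2))  # materialized once, safe to reuse
    print(pairs)                   # [('h', 'h'), ('i', 'a')]
    return ''.join(''.join(p) for p in pairs)

print(interleave('hi', 'ha'))  # hhia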
|
Zip and merge the string
|
'''
return a new string containing the 2 strings interwoven or zipped together.
ex:
Input:
'hi'
'ha'
Output:
hhia
Input:
'lzr','iad'
output:
lizard
def interleave(str1, str2):
ls1 = zip(str1, str2)
print(list(ls1))
# prints [('h', 'h'), ('i', 'a')] as expected
ls2 = [''.join(x) for x in ls1]
print(ls2)
# prints below. not as expected
# output: []
print(list(ls2))
# prints empty list. not as expected.
# output: []
# Below prints hhia correctly
ms1 = ''.join(''.join(x) for x in zip(str1,str2))
print(ms1)
# Output: hhia
interleave('hi','ha')
It works when I do it in a single line (ms1).
When I split it up, it doesn't work (ls1, ls2).
Can anyone advise on the root cause?
'''
|
[
"Because zip() is a python iterator. Once you retrieve the values of ls1 in third line(print(list(ls1))), the ls1 become empty.\nTry this case, it would be more clear:\ndef interleave(str1, str2):\n ls1 = zip(str1, str2)\n print(list(ls1)) # prints [('h', 'h'), ('i', 'a')] as expected\n print(list(ls1)) # empty\n ls1 = zip(str1, str2)\n print(list(ls1)) # prints [('h', 'h'), ('i', 'a')] as expected\n\n\n\ninterleave('hi','ha')\n\n"
] |
[
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074526570_python.txt
|
Q:
How to display a HTML file and text with flask python
I am trying to make a website that displays text I can change (preferably instantly).
I was able to do this with:
@app.route('/')
def index():
global val
return str(val)
along with a function running at the same time allowing me to change the variable "val"
However, now I would like to be able to display data and receive data from the user. To do this I used an HTML template, however I am not able to instantly change the data in the HTML file, so I can not change the data easily.
@app.route('/')
def index():
return render_template('form.html')
I would like to be able to display this HTML template, and the variable "val" at the same time.
I have tried:
@app.route('/')
def index():
global val
return str(val), render_template('form.html')
But this also does not work and it gives an error.
A:
Pass the variable into the template: return render_template("form.html", val=str(val)).
And in form.html:
<body>
    <p>{{ val }}</p>
</body>
This way, using Jinja syntax, you can display the variable.
Thank you
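Putting it together, a minimal sketch of the route (keeping the val variable from the question):
from flask import Flask, render_template

app = Flask(__name__)
val = 0  # updated elsewhere by your other function

@app.route('/')
def index():
    # Pass the current value into the template on every request
    return render_template('form.html', val=val)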
|
How to display a HTML file and text with flask python
|
I am trying to make a website that displays text I can change (preferably instantly).
I was able to do this with:
@app.route('/')
def index():
global val
return str(val)
along with a function running at the same time allowing me to change the variable "val"
However, now I would like to be able to display data and receive data from the user. To do this I used an HTML template, however I am not able to instantly change the data in the HTML file, so I can not change the data easily.
@app.route('/')
def index():
return render_template('form.html')
I would like to be able to display this HTML template, and the variable "val" at the same time.
I have tried:
@app.route('/')
def index():
global val
return str(val), render_template('form.html')
But this also does not work and it gives an error.
|
[
"In render_template(\"form.html\" , str=str)\nAnd in form.html\n<body>\n <p>{{str}}</p>\n</body>\n\nThis way using jinja syntax you can display the variable\nthank you\n"
] |
[
1
] |
[] |
[] |
[
"flask",
"python"
] |
stackoverflow_0074536401_flask_python.txt
|
Q:
Regex Python pattern matching
import re
string = '39801 356, 2102 1111'
# Three digit number followed by space followed by two digit number
pattern = '(\d{3})'  # I need this to match 398, but currently it matches 801
# match variable contains a Match object.
match = re.search(pattern, string)
if match:
print(match.group())
else:
print("pattern not found")
In the above code I want to match the first 3 digits, i.e. 398, but with the pattern \d{3} it's matching 801.
I need the match to start from the 1st digit.
|
Regex Python pattern matching
|
import re
string = '39801 356, 2102 1111'
# Three digit number followed by space followed by two digit number
pattern = '(\d{3})'  # I need this to match 398, but currently it matches 801
# match variable contains a Match object.
match = re.search(pattern, string)
if match:
print(match.group())
else:
print("pattern not found")
In the above code I want to match the first 3 digits, i.e. 398, but with the pattern \d{3} it's matching 801.
I need the match to start from the 1st digit.
|
[] |
[] |
[
"You need to indicate the match starts at the beginning of the string.\nA ^ is needed at the begging of the search.\nThe pattern is ^\\d{3}\nTake a look at the example regex https://regex101.com/r/jk44z6/2\n"
] |
[
-1
] |
[
"python",
"regex"
] |
stackoverflow_0074536475_python_regex.txt
|
Q:
How to write a Python program to identify phone numbers?
Hi, I have written some code to identify phone numbers, but it is not working as expected.
Phone numbers should be in the format +44-4411109923.
The area code such as +44 is optional, and the - or space before the phone number is also optional.
I have written the code below:
import re
phoneregex = re.compile(r'[+0-9]?(\s|-)\d{10}')
text = input('Enter your text')
print(phoneregex.findall(text))
but it is identifying only the '-' symbol. Can anyone tell me where I'm making a mistake?
I'm hoping for some help to understand where I'm going wrong and to learn how to code better.
A:
I've fixed your phoneregex pattern and created a function that determines whether a string represents a phone number. Here's the code:
import re
def is_phone_number(phone_number: str) -> bool:
"""Determine if string represents a phone number.
A valid phone number is of one of the following forms:
- ``"+xx xxxxxxxxxx"``
- ``"+xx-xxxxxxxxxx"``
- ``"+xx - xxxxxxxxxx"``
- ``"+xx xxxxxx-xxxx"``
- ``"+xx-xxxxxx-xxxx"``
- ``"+xx - xxxxxx-xxxx"``
- ``"xxxxxxxxxx"``
- ``"xxxxxx-xxxx"``
Where ``"x"`` is a digit. For more details, please refer to the
examples section.
Parameters
----------
phone_number : str
The string to check.
Returns
-------
bool
``True`` if string represents a phone number, ``False`` otherwise.
Examples
--------
>>> is_phone_number("+44 - 4411109923")
True
>>> is_phone_number("+44-4411109923")
True
>>> is_phone_number("4411109923")
True
>>> is_phone_number("441110-9923")
True
>>> is_phone_number("+44-441110-9923")
True
>>> is_phone_number("+44 441110-9923")
True
>>> is_phone_number("+44 4411109923")
True
>>> is_phone_number("US 4411109923")
False
>>> is_phone_number("+44 44111099231010")
False
"""
phone_regex = re.compile(
r'((\+[0-9]{2})(\s|-|\s-\s)|)([0-9]{10}|[0-9]{6}\-[0-9]{4})'
)
match = re.match(phone_regex, phone_number)
return bool(hasattr(match, 'group'))
phone_number = input('Enter your phone number: ')
if is_phone_number(phone_number):
print(f'{phone_number} is a valid phone number.')
Note
If you wish to maintain your original implementation, you can replace only the phoneregex value for the one being used inside is_phone_number function, like so:
import re
phoneregex = re.compile(
r'((\+[0-9]{2})(\s|-|\s-\s)|)([0-9]{10}|[0-9]{6}\-[0-9]{4})'
)
text = input('Enter your text')
print(phoneregex.findall(text))
Hint
There are many websites that can help you build your regex pattern. I recommend using regex101 to help you create the correct pattern. Here's a screenshot of what building the pattern looks like when using regex101:
|
How to write a Python program to identify phone numbers?
|
Hi, I have written some code to identify phone numbers, but it is not working as expected.
Phone numbers should be in the format +44-4411109923.
The area code such as +44 is optional, and the - or space before the phone number is also optional.
I have written the code below:
import re
phoneregex = re.compile(r'[+0-9]?(\s|-)\d{10}')
text = input('Enter your text')
print(phoneregex.findall(text))
but it is identifying only the '-' symbol. Can anyone tell me where I'm making a mistake?
I'm hoping for some help to understand where I'm going wrong and to learn how to code better.
|
[
"I've fixed your phoneregex pattern and created a function that determines whether a string represents a phone number. Here's the code:\n\nimport re\n\n\ndef is_phone_number(phone_number: str) -> bool:\n \"\"\"Determine if string represents a phone number.\n\n A valid phone number is of one of the following forms:\n\n - ``\"+xx xxxxxxxxxx\"``\n - ``\"+xx-xxxxxxxxxx\"``\n - ``\"+xx - xxxxxxxxxx\"``\n - ``\"+xx xxxxxx-xxxx\"``\n - ``\"+xx-xxxxxx-xxxx\"``\n - ``\"+xx - xxxxxx-xxxx\"``\n - ``\"xxxxxxxxxx\"``\n - ``\"xxxxxx-xxxx\"``\n\n Where ``\"x\"`` is a digit. For more details, please refer to the\n examples section.\n\n Parameters\n ----------\n phone_number : str\n The string to check.\n\n Returns\n -------\n bool\n ``True`` if string represents a phone number, ``False`` otherwise.\n\n Examples\n --------\n >>> is_phone_number(\"+44 - 4411109923\")\n True\n >>> is_phone_number(\"+44-4411109923\")\n True\n >>> is_phone_number(\"4411109923\")\n True\n >>> is_phone_number(\"441110-9923\")\n True\n >>> is_phone_number(\"+44-441110-9923\")\n True\n >>> is_phone_number(\"+44 441110-9923\")\n True\n >>> is_phone_number(\"+44 4411109923\")\n True\n >>> is_phone_number(\"US 4411109923\")\n False\n >>> is_phone_number(\"+44 44111099231010\")\n False\n \"\"\"\n phone_regex = re.compile(\n r'((\\+[0-9]{2})(\\s|-|\\s-\\s)|)([0-9]{10}|[0-9]{6}\\-[0-9]{4})'\n )\n match = re.match(phone_regex, phone_number)\n return bool(hasattr(match, 'group'))\n\n\nphone_number = input('Enter your phone number: ')\nif is_phone_number(phone_number):\n print(f'{phone_number} is a valid phone number.')\n\n\n\nNote\nIf you wish to maintain your original implementation, you can replace only the phoneregex value for the one being used inside is_phone_number function, like so:\n\nimport re\n\nphoneregex = = re.compile(\n r'((\\+[0-9]{2})(\\s|-|\\s-\\s)|)([0-9]{10}|[0-9]{6}\\-[0-9]{4})'\n)\ntext = input('Enter your text')\nprint(phoneregex.findall(text))\n\n\nHint\nThe are many websites that can help you build your regex pattern. I recommend using regex101 for helping you create the correct pattern. Here's a screenshot of how building the pattern looks like, when using regex101:\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074533482_python_python_3.x.txt
|
Q:
I am not clear on how to go past this point as it returns an error of "could not convert string to float
X = Liver_data.drop('Class',axis=1)
y = Liver_data['Class'] -1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, stratify=y, random_state=99)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
ValueError Traceback (most recent call last)
<ipython-input-33-dbff2fd1f6c2> in <module>
5
6 scaler = StandardScaler()
----> 7 X_train = scaler.fit_transform(X_train)
8 X_test = scaler.transform(X_test)
5 frames
/usr/local/lib/python3.7/dist-packages/pandas/core/generic.py in __array__(self, dtype)
1991
1992 def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
-> 1993 return np.asarray(self._values, dtype=dtype)
1994
1995 def __array_wrap__(
ValueError: could not convert string to float: 'Male'
I am trying to build a neural network to detect, given values of Total Bilirubin, whether a patient has a liver problem or not. The Class column has 2 values, 1 & 2, where 1 indicates 'liver damage' and 2 indicates 'no liver damage'. I want to subtract 1 from each label in the Class column since Keras assumes class labels start at 0. Why does this try to convert a string to float when I only have integer values in the Class column?
A:
Since you only want to use the "Total Bilirubin" variable, you need to select just that column for X, something like X = Liver_data.loc[:, ["Total Bilirubin"]]. That way StandardScaler works fine, because it only works with numeric columns; as you are passing string columns, StandardScaler throws an error for the string type.
Try to use the correct column dtypes with the available transformations that sklearn offers.
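If you would rather keep the categorical column than drop it, one common approach is to one-hot encode it before scaling. A sketch; the column name 'Gender' is a guess based on the 'Male' value in the traceback:
import pandas as pd

X = Liver_data.drop('Class', axis=1)
# 'Gender' is hypothetical: use whichever column holds 'Male'/'Female'
X = pd.get_dummies(X, columns=['Gender'], drop_first=True)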
|
I am not clear on how to go past this point as it returns an error of "could not convert string to float
|
X = Liver_data.drop('Class',axis=1)
y = Liver_data['Class'] -1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, stratify=y, random_state=99)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
ValueError Traceback (most recent call last)
<ipython-input-33-dbff2fd1f6c2> in <module>
5
6 scaler = StandardScaler()
----> 7 X_train = scaler.fit_transform(X_train)
8 X_test = scaler.transform(X_test)
5 frames
/usr/local/lib/python3.7/dist-packages/pandas/core/generic.py in __array__(self, dtype)
1991
1992 def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
-> 1993 return np.asarray(self._values, dtype=dtype)
1994
1995 def __array_wrap__(
ValueError: could not convert string to float: 'Male'
I am trying to build a neural network to detect, given values of Total Bilirubin, whether a patient has a liver problem or not. The Class column has 2 values, 1 & 2, where 1 indicates 'liver damage' and 2 indicates 'no liver damage'. I want to subtract 1 from each label in the Class column since Keras assumes class labels start at 0. Why does this try to convert a string to float when I only have integer values in the Class column?
|
[
"As you only want to use \"Total Bilirubin\" variable so you need to use that column in X. Something like X = Liver_data.loc[:,[\"Total Bilirubin\"]] in that way Standard Scaler works fine because only works with numeric columns. As you are passing string columns to the Standard Scaler will throw an error for the string type.\nTry to use the correct column dtype with the available transformations that sklearn offers.\n"
] |
[
0
] |
[] |
[] |
[
"neural_network",
"python",
"train_test_split"
] |
stackoverflow_0074496541_neural_network_python_train_test_split.txt
|
Q:
i need help creating a function of add two matrices and return the sum matrix
The below code looks error-free, to me at least, but I'm not getting the output I want. If I don't use the function and instead add the two of them directly with the same syntax, I get the correct answer. Please help.
a = [[1,1],[2,2]] #first matrix
b = [[4,4],[3,3]] #second matrix
#creating a function to add two matrices and return the sum
def sum(m,n):
o = [[0,0],[0,0]]
for i in range(2):
for j in range(2):
o[i][j] = m[i][j] + n[i][j]
return o
ans = sum(a,b)
print(ans)
this is giving the following answer output:
[[5, 0], [0, 0]]
where as the output should be :
[[5, 5], [5, 5]]
A:
Can you make sure the return statement is placed outside both for loops?
It seems you have put the return statement inside the j loop, so it calculates just one sum and returns.
def sum(m, n):
    o = [[0, 0], [0, 0]]
    for i in range(2):
        for j in range(2):
            o[i][j] = m[i][j] + n[i][j]
    return o

The return should be placed like this; then it'll give:
[[5, 5], [5, 5]]
A:
You can use the module numpy to add matrices together.
First install the module using
"pip install numpy"
for windows or
"pip3 install numpy"
for linux. Then, in your code, run
import numpy
numpy.add(list1, list2)
A:
You can use a list comprehension:
def sum_matrices(a, b):
return [[a[i][j] + b[i][j] for j in range(len(a[i]))] for i in range(len(a))]
Or you can use numpy:
import numpy as np
def sum_matrices(a, b):
return np.add(a, b).tolist()
|
i need help creating a function of add two matrices and return the sum matrix
|
The below code looks error-free, to me at least, but I'm not getting the output I want. If I don't use the function and instead add the two of them directly with the same syntax, I get the correct answer. Please help.
a = [[1,1],[2,2]] #first matrix
b = [[4,4],[3,3]] #second matrix
#creating a function to add two matrices and return the sum
def sum(m,n):
o = [[0,0],[0,0]]
for i in range(2):
for j in range(2):
o[i][j] = m[i][j] + n[i][j]
return o
ans = sum(a,b)
print(ans)
this is giving the following answer output:
[[5, 0], [0, 0]]
where as the output should be :
[[5, 5], [5, 5]]
|
[
"Can you make sure the return statement is given outside both the for loops?.\nSeems like you have given the return statement inside for loop of j, So It's calculating just one sum and returning\ndef sum(m,n): \no = [[0,0],[0,0]] \nfor i in range(2): \n for j in range(2): \n o[i][j] = m[i][j] + n[i][j] \nreturn o\n\nreturn should be given like this, then it'll give\n\n[[5, 5], [5, 5]]\n\n",
"You can use the module numpy to add matrices together.\nFirst install the module using\n\"pip install numpy\"\nfor windows or\n\"pip3 install numpy\"\nfor linux. Then, in your code, run\nimport numpy\nnumpy.add(list1, list2)\n\n",
"You can use a list comprehension:\ndef sum_matrices(a, b):\n return [[a[i][j] + b[i][j] for j in range(len(a[i]))] for i in range(len(a))]\n\nOr you can use numpy:\nimport numpy as np\n\ndef sum_matrices(a, b):\n return np.add(a, b).tolist()\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"function",
"list",
"matrix",
"python"
] |
stackoverflow_0074536346_function_list_matrix_python.txt
|
Q:
Select Pandas rows based on list index
I have a dataframe df:
20060930 10.103 NaN 10.103 7.981
20061231 15.915 NaN 15.915 12.686
20070331 3.196 NaN 3.196 2.710
20070630 7.907 NaN 7.907 6.459
Then I want to select rows with certain sequence numbers which indicated in a list, suppose here is [1,3], then left:
20061231 15.915 NaN 15.915 12.686
20070630 7.907 NaN 7.907 6.459
How or what function can do that?
A:
Use .iloc for integer based indexing and .loc for label based indexing. See below example:
ind_list = [1, 3]
df.iloc[ind_list]
A:
you can also use iloc:
df.iloc[[1,3],:]
This will not work if the indexes in your dataframe do not correspond to the order of the rows due to prior computations. In that case use:
df.index.isin([1,3])
... as suggested in other responses.
A:
Another way (although the code is longer) that is faster than the options above. Check it using the %timeit function:
df[df.index.isin([1,3])]
PS: You can figure out the reason
A:
If index_list contains your desired indices, you can get the dataframe with the desired rows by doing
index_list = [1,2,3,4,5,6]
df.loc[df.index[index_list]]
This is based on the latest documentation as of March 2021.
A:
For large datasets, it is memory efficient to read only selected rows via the skiprows parameter.
Example
pred = lambda x: x not in [1, 3]
pd.read_csv("data.csv", skiprows=pred, index_col=0, names=...)
This will now return a DataFrame from a file that skips all rows except 1 and 3.
Details
From the docs:
skiprows : list-like or integer or callable, default None
...
If callable, the callable function will be evaluated against the row indices, returning True if the row should be skipped and False otherwise. An example of a valid callable argument would be lambda x: x in [0, 2]
This feature works in version pandas 0.20.0+. See also the corresponding issue and a related post.
A:
There are many ways of solving this problem, and the ones listed above are the most commonly used ways of achieving the solution. I want to add two more ways, just in case someone is looking for an alternative.
index_list = [1,3]
df.take(index_list)
#or
df.query('index in @index_list')
A:
What you are trying to do is to filter your dataframe by index. The best way to do that in pandas at the moment is the following:
Single Index
desired_index_list = [1,3]
df[df.index.isin(desired_index_list)]
Multiindex
desired_index_list = [1,3]
index_level_to_filter = 0
df[df.index.get_level_values(index_level_to_filter).isin(desired_index_list)]
A:
To get a new DataFrame from filtered indexes:
For my problem, I needed a new dataframe from the indexes. I found a straight-forward way to do this:
iloc_list=[1,2,4,8]
df_new = df.filter(items = iloc_list , axis=0)
You can also filter columns using this. Please see the documentation for details.
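For example, a minimal sketch of the column variant (the column name 'colA' is hypothetical):
df_cols = df.filter(items=['colA'], axis=1)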
|
Select Pandas rows based on list index
|
I have a dataframe df:
20060930 10.103 NaN 10.103 7.981
20061231 15.915 NaN 15.915 12.686
20070331 3.196 NaN 3.196 2.710
20070630 7.907 NaN 7.907 6.459
Then I want to select rows with certain sequence numbers which indicated in a list, suppose here is [1,3], then left:
20061231 15.915 NaN 15.915 12.686
20070630 7.907 NaN 7.907 6.459
How or what function can do that?
|
[
"Use .iloc for integer based indexing and .loc for label based indexing. See below example:\nind_list = [1, 3]\ndf.iloc[ind_list]\n\n",
"you can also use iloc:\ndf.iloc[[1,3],:]\n\nThis will not work if the indexes in your dataframe do not correspond to the order of the rows due to prior computations. In that case use: \ndf.index.isin([1,3])\n\n... as suggested in other responses.\n",
"Another way (although it is a longer code) but it is faster than the above codes. Check it using %timeit function:\ndf[df.index.isin([1,3])]\n\nPS: You figure out the reason\n\n",
"If index_list contains your desired indices, you can get the dataframe with the desired rows by doing\nindex_list = [1,2,3,4,5,6]\ndf.loc[df.index[index_list]]\n\nThis is based on the latest documentation as of March 2021.\n",
"For large datasets, it is memory efficient to read only selected rows via the skiprows parameter.\nExample\npred = lambda x: x not in [1, 3]\npd.read_csv(\"data.csv\", skiprows=pred, index_col=0, names=...)\n\nThis will now return a DataFrame from a file that skips all rows except 1 and 3.\n\nDetails\nFrom the docs:\n\nskiprows : list-like or integer or callable, default None\n...\nIf callable, the callable function will be evaluated against the row indices, returning True if the row should be skipped and False otherwise. An example of a valid callable argument would be lambda x: x in [0, 2]\n\nThis feature works in version pandas 0.20.0+. See also the corresponding issue and a related post.\n",
"There are many ways of solving this problem, and the ones listed above are the most commonly used ways of achieving the solution. I want to add two more ways, just in case someone is looking for an alternative.\nindex_list = [1,3]\n\ndf.take(pos)\n\n#or\n\ndf.query('index in @index_list')\n\n",
"What you are trying to do is to filter your dataframe by index. The best way to do that in pandas at the moment is the following:\nSingle Index\ndesired_index_list = [1,3]\ndf[df.index.isin(desired_index_list)]\n\nMultiindex\ndesired_index_list = [1,3]\nindex_level_to_filter = 0\ndf[df.index.get_level_values(index_level_to_filter).isin(desired_index_list)]\n\n",
"To get a new DataFrame from filtered indexes:\nFor my problem, I needed a new dataframe from the indexes. I found a straight-forward way to do this:\niloc_list=[1,2,4,8]\ndf_new = df.filter(items = iloc_list , axis=0)\n\nYou can also filter columns using this. Please see the documentation for details.\n"
] |
[
222,
142,
103,
19,
5,
2,
2,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0019155718_pandas_python.txt
|
Q:
PyInstaller windows binary missing third party python module
system specs:
PyInstaller: 5.6.2
Python: 3.9.2
Windows-10-10.0.15063-SP0
I set up a virtual environment and install my requirements, check.
Following along with the pyinstaller documentation section 2.6, I run the command:
pyinstaller myProgram.py
As the docs describe, this autogenerates a .spec file for my application.
pyinstaller generated spec file:
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(
['myProgram.py'],
pathex=[],
binaries=[],
datas=[],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
[],
exclude_binaries=True,
name='myProgram',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=False,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)
coll = COLLECT(
exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name='myProgram',
)
I have extra 'datas' that need to be packaged into the app, so I edit the .spec file as follows:
edited .spec file:
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
datas = [
('galil/x64/libcrypto-3.dll', 'x64'),
('galil/x64/libssl-3.dll', 'x64'),
('galil/x64/gclib.dll', 'x64'),
('galil/x64/gclibo.dll', 'x64'),
('galil/x86/libcrypto-3.dll', 'x86'),
('galil/x86/libssl-3.dll', 'x86'),
('galil/x86/gclib.dll', 'x86'),
('galil/x86/gclibo.dll', 'x86'),
('galil/x64/libgclibo.so.0.0', 'x64'),
('galil/x64/libgclib.so.0.449', 'x64'),
('msvcp140.dll','.'),
('vcruntime140.dll','.'),
('vccorlib140.dll','.'),
('concrt140.dll','.'),
('vcomp140.dll','.'),
]
lib_dir = os.path.realpath('../../../../../../outputs/myProgram/update')
datas.append((os.path.join( lib_dir, './extraDataDir' ), './update/extraDataDir'))
datas.append((os.path.join( lib_dir, './manifest.txt' ), './'))
a = Analysis(
['myProgram.py'],
pathex=[],
binaries=[],
datas=datas,
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='myProgram',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=False,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)
coll = COLLECT(
exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name='myProgram',
)
Edits made, I go back and re-run pyinstaller on the spec file:
pyinstaller myProgram.spec
This completes successfully, and if I navigate to the output directory I can verify that the datas were added successfully to the myProgram application directory. However :(, when I double click the executable, I get the following error:
Traceback (most recent call last):
File "myProgram.py", line 8, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "myProgram_Controller.py", line 41, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "galil\galil_client.py", line 8, in <module>
ModuleNotFoundError: No module named 'gclib'
Ok, I assume this is because of the
import gclib
at the top, and while I have included the libraries, I have not told pyinstaller about the actual gclib.py python module(?). Reading over section 2.12 of the docs, I create a hook file: hook-gclib.py:
from PyInstaller.hooks.hookutils import (collect_data_files, collect_submodules)
datas = [('./galil/gclib.py', 'gclib')]
hiddenimports = collect_submodules('gclib')
At this point start over; delete the pyinstaller output directory, backup myProgram.spec to .BAK, and re-run pyinstaller command like this:
pyinstaller --additional-hooks-dir=. --windowed myProgram.py
This generates a new spec file that I once again modify to add the required datas. The new spec file after adding the datas looks exactly the same as the previous with the exception that in the analysis block: hookspath=['.'],
Re-run pyinstaller on the spec file: pyinstaller myProgram.spec, double click the newly generated executable, & no dice, exact same ModuleNotFoundError :(.
I do not get any errors for the hook file, but is it not correct, or?
A:
Ah ha! I was making this more complicated than it needed to be. No hook required. I removed any mention of hookspath from the spec file and updated (see the sketch below):
pathex=['./galil'],
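For reference, a minimal sketch of how that sits in the spec's Analysis block (assuming the galil package directory is next to the spec file; the other arguments stay as before):
a = Analysis(
    ['myProgram.py'],
    pathex=['./galil'],  # lets PyInstaller locate gclib.py without a hook
    binaries=[],
    datas=datas,
    hiddenimports=[],
)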
I will leave this here. someone can mark it as a duplicate because I found my answer here
|
PyInstaller windows binary missing third party python module
|
system specs:
PyInstaller: 5.6.2
Python: 3.9.2
Windows-10-10.0.15063-SP0
I set up a virtual environment and install my requirements, check.
Following along with the pyinstaller documentation section 2.6, I run the command:
pyinstaller myProgram.py
As the docs describe, this autogenerates a .spec file for my application.
pyinstaller generated spec file:
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(
['myProgram.py'],
pathex=[],
binaries=[],
datas=[],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
[],
exclude_binaries=True,
name='myProgram',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=False,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)
coll = COLLECT(
exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name='myProgram',
)
I have extra 'datas' that need to be packaged into the app, so I edit the .spec file as follows:
edited .spec file:
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
datas = [
('galil/x64/libcrypto-3.dll', 'x64'),
('galil/x64/libssl-3.dll', 'x64'),
('galil/x64/gclib.dll', 'x64'),
('galil/x64/gclibo.dll', 'x64'),
('galil/x86/libcrypto-3.dll', 'x86'),
('galil/x86/libssl-3.dll', 'x86'),
('galil/x86/gclib.dll', 'x86'),
('galil/x86/gclibo.dll', 'x86'),
('galil/x64/libgclibo.so.0.0', 'x64'),
('galil/x64/libgclib.so.0.449', 'x64'),
('msvcp140.dll','.'),
('vcruntime140.dll','.'),
('vccorlib140.dll','.'),
('concrt140.dll','.'),
('vcomp140.dll','.'),
]
lib_dir = os.path.realpath('../../../../../../outputs/myProgram/update')
datas.append((os.path.join( lib_dir, './extraDataDir' ), './update/extraDataDir'))
datas.append((os.path.join( lib_dir, './manifest.txt' ), './'))
a = Analysis(
['myProgram.py'],
pathex=[],
binaries=[],
datas=datas,
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='myProgram',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=False,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)
coll = COLLECT(
exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name='myProgram',
)
Edits made, I go back and re-run pyinstaller on the spec file:
pyinstaller myProgram.spec
This completes successfully, and if I navigate to the output directory I can verify that the datas were added successfully to the myProgram application directory. However :(, when I double click the executable, I get the following error:
Traceback (most recent call last):
File "myProgram.py", line 8, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "myProgram_Controller.py", line 41, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "galil\galil_client.py", line 8, in <module>
ModuleNotFoundError: No module named 'gclib'
Ok, I assume this is because of the
import gclib
at the top, and while I have included the libraries, I have not told pyinstaller about the actual gclib.py python module(?). Reading over section 2.12 of the docs, I create a hook file: hook-gclib.py:
from PyInstaller.hooks.hookutils import (collect_data_files, collect_submodules)
datas = [('./galil/gclib.py', 'gclib')]
hiddenimports = collect_submodules('gclib')
At this point start over; delete the pyinstaller output directory, backup myProgram.spec to .BAK, and re-run pyinstaller command like this:
pyinstaller --additional-hooks-dir=. --windowed myProgram.py
This generates a new spec file that I once again modify to add the required datas. The new spec file after adding the datas looks exactly the same as the previous with the exception that in the analysis block: hookspath=['.'],
Re-run pyinstaller on the spec file: pyinstaller myProgram.spec, double click the newly generated executable, & no dice, exact same ModuleNotFoundError :(.
I do not get any errors for the hook file, but is it not correct, or?
|
[
"Ah ha! I was making this more complicated than it needed to be. no hook required. removed any mention of hookpath from spec file, and updated:\npathex=['./galil'],\n\nI will leave this here. someone can mark it as a duplicate because I found my answer here\n"
] |
[
0
] |
[] |
[] |
[
"pyinstaller",
"python"
] |
stackoverflow_0074535250_pyinstaller_python.txt
|
Q:
regex: negate a group with condition
Is it possible to match strings if a group is not present between a start and an end position, except if the group is followed by a certain character, e.g. '§'?
# match if '\.\s' is not present between 'start' and 'end'
re.search(r'start((?!\.\s).)*end', string)
for example those two strings should match:
string = 'start abc abc abc.end. '
string = 'start abc abc abc. §end '
but this string shouldn't match:
string = 'start abc abc abc. end. '
A solution would be to set a word boundary: start((?!\.\s\b).)*end
but I am specifically looking to set a specific character that may follow the negated group
A:
You can add another negative lookahead after \.\s
start((?!\.\s(?!§)).)*end
See this demo at regex101
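A short sketch checking the pattern against the three example strings (plain Python re; only the question's own strings are used):
import re

pattern = r'start((?!\.\s(?!§)).)*end'
for s in ['start abc abc abc.end. ',
          'start abc abc abc. §end ',
          'start abc abc abc. end. ']:
    # the first two print match objects, the last prints None
    print(re.search(pattern, s))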
|
regex: negate a group with condition
|
Is it possible to match strings if a group is not present between a start and an end position, except if the group is followed by a certain character, e.g. '§'?
# match if '\.\s' is not present between 'start' and 'end'
re.search(r'start((?!\.\s).)*end', string)
for example those two strings should match:
string = 'start abc abc abc.end. '
string = 'start abc abc abc. §end '
but this string shouldn't match:
string = 'start abc abc abc. end. '
A solution would be to set a word boundary: start((?!\.\s\b).)*end
but I am specifically looking to set a specific character that may follow the negated group
|
[
"You can add another negative lookahead after \\.\\s\nstart((?!\\.\\s(?!§)).)*end\n\nSee this demo at regex101\n"
] |
[
2
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0074536451_python_regex.txt
|
Q:
Google cloud PubSub service not working (Python)
I am trying to use the Pub/Sub service in my Python application. When I run the code, it gets stuck on the last publisher line for some reason and never ends. The subscriber seems fine. Does someone know what is wrong with my code?
Publisher:
import os
from google.cloud import pubsub_v1
credentials_path = 'PATH/TO/THE/KEY.JSON'
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credentials_path
publisher = pubsub_v1.PublisherClient()
topic_path = 'projects/PROJECT_NAME/topics/TOPIC_NAME'
# simple garbage text to check if it's working
data = 'A garden sensor is ready!'
data = data.encode('utf-8')
attributes = {
'sensorName': 'garden-001',
'temperature': '75.0',
'humidity': '60'
}
future = publisher.publish(topic_path, data, **attributes)
print(f'published message id {future.result()}') # here it is just waiting forever
Subscriber:
import os
from google.cloud import pubsub_v1
from concurrent.futures import TimeoutError
credentials_path = 'PATH/TO/THE/KEY.JSON'
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credentials_path
subscriber = pubsub_v1.SubscriberClient()
subscription_path = 'projects/PROJECT_NAME/subscriptions/SUBSCRIPTION_NAME'
def callback(message):
print(f'Received message: {message}')
print(f'data: {message.data}')
if message.attributes:
print("Attributes:")
for key in message.attributes:
value = message.attributes.get(key)
print(f"{key}: {value}")
message.ack()
streaming_pull_future = subscriber.subscribe(
subscription_path, callback=callback)
print(f'Listening for messages on {subscription_path}')
# wrap subscriber in a 'with' block to automatically call close() when done
with subscriber:
try:
streaming_pull_future.result()
except TimeoutError:
streaming_pull_future.cancel()
# block until the shutdown is complete
streaming_pull_future.result()
A:
Google provides decent documentation for using its services including Pub/Sub including a basic Python example that would have helped you avoid your problem.
Aside: your publisher and subscriber snippets set GOOGLE_APPLICATION_CREDENTIALS statically within the code. Don't do this! Set the environment variable before running the code. This way, you can revise the value without changing the code but, more importantly, the value can be set by the runtime e.g. Compute Engine.
Here's a working example based on your code using Application Default Credentials obtained from the environment:
Q="74535931"
BILLING="[YOUR-BILLING-ID]"
PROJECT="$(whoami)-$(date +%y%m%d)-${Q}"
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
gcloud services enable pubsub.googleapis.com \
--project=${PROJECT}
ACCOUNT=tester
EMAIL=${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL}
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=roles/pubsub.editor
export GOOGLE_APPLICATION_CREDENTIALS=${PWD}/${ACCOUNT}.json
export PROJECT
export PUB="pub"
export SUB="sub"
gcloud pubsub topics create ${PUB} \
--project=${PROJECT}
gcloud pubsub subscriptions create ${SUB} \
--topic=${PUB} \
--project=${PROJECT}
publish.py:
import os
from google.cloud import pubsub_v1
project = os.getenv("PROJECT")
topic = os.getenv("PUB")
topic_path = f"projects/{project}/topics/{topic}"
data = 'A garden sensor is ready!'
data = data.encode('utf-8')
attributes = {
'sensorName': 'garden-001',
'temperature': '75.0',
'humidity': '60'
}
publisher = pubsub_v1.PublisherClient()
future = publisher.publish(topic_path, data, **attributes)
print(f'published message id {future.result()}')
subscribe.py:
import os
from google.cloud import pubsub_v1
from concurrent.futures import TimeoutError
project=os.getenv("PROJECT")
subscription=os.getenv("SUB")
subscription_path = f"projects/{project}/subscriptions/{subscription}"
def callback(message):
print(f'Received message: {message}')
print(f'data: {message.data}')
if message.attributes:
print("Attributes:")
for key in message.attributes:
value = message.attributes.get(key)
print(f"{key}: {value}")
message.ack()
subscriber = pubsub_v1.SubscriberClient()
streaming_pull_future = subscriber.subscribe(
subscription_path, callback=callback)
print(f'Listening for messages on {subscription_path}')
with subscriber:
try:
streaming_pull_future.result()
except TimeoutError:
streaming_pull_future.cancel()
# block until the shutdown is complete
streaming_pull_future.result()
Run python3 subscribe.py:
python3 subscribe.py
Listening for messages on projects/{project}/subscriptions/{sub}
Received message: Message {
data: b'A garden sensor is ready!'
ordering_key: ''
attributes: {
"humidity": "60",
"sensorName": "garden-001",
"temperature": "75.0"
}
}
data: b'A garden sensor is ready!'
Attributes:
humidity: 60
temperature: 75.0
sensorName: garden-001
And in a separate window python3 publish.py:
python3 publish.py
published message id 1234567890123456
|
Google cloud PubSub service not working (Python)
|
I am trying to use the Pub/Sub service in my Python application. When I run the code, it gets stuck on the last publisher line for some reason and never ends. The subscriber seems fine. Does someone know what is wrong with my code?
Publisher:
import os
from google.cloud import pubsub_v1
credentials_path = 'PATH/TO/THE/KEY.JSON'
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credentials_path
publisher = pubsub_v1.PublisherClient()
topic_path = 'projects/PROJECT_NAME/topics/TOPIC_NAME'
# simple garbage text to check if it's working
data = 'A garden sensor is ready!'
data = data.encode('utf-8')
attributes = {
'sensorName': 'garden-001',
'temperature': '75.0',
'humidity': '60'
}
future = publisher.publish(topic_path, data, **attributes)
print(f'published message id {future.result()}') # here it is just waiting forever
Subscriber:
import os
from google.cloud import pubsub_v1
from concurrent.futures import TimeoutError
credentials_path = 'PATH/TO/THE/KEY.JSON'
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credentials_path
subscriber = pubsub_v1.SubscriberClient()
subscription_path = 'projects/PROJECT_NAME/subscriptions/SUBSCRIPTION_NAME'
def callback(message):
print(f'Received message: {message}')
print(f'data: {message.data}')
if message.attributes:
print("Attributes:")
for key in message.attributes:
value = message.attributes.get(key)
print(f"{key}: {value}")
message.ack()
streaming_pull_future = subscriber.subscribe(
subscription_path, callback=callback)
print(f'Listening for messages on {subscription_path}')
# wrap subscriber in a 'with' block to automatically call close() when done
with subscriber:
try:
streaming_pull_future.result()
except TimeoutError:
streaming_pull_future.cancel()
# block until the shutdown is complete
streaming_pull_future.result()
|
[
"Google provides decent documentation for using its services including Pub/Sub including a basic Python example that would have helped you avoid your problem.\nAside: your publisher and subscriber snippets set GOOGLE_APPLICATION_CREDENTIALS statically within the code. Don't do this! Set the environment variable before running the code. This way, you can revise the value without changing the code but, more importantly, the value can be set by the runtime e.g. Compute Engine.\nHere's a working example based on your code using Application Default Credentials obtained from the environment:\nQ=\"74535931\"\n\nBILLING=\"[YOUR-BILLING-ID]\"\nPROJECT=\"$(whoami)-$(date %y%m%d)-${Q}\"\n\ngcloud projects create ${PROJECT}\ngcloud beta billing projects link ${PROJECT} \\\n--billing-account=${BILLING}\n\ngcloud services enable pubsub.googleapis.com \\\n--project=${PROJECT}\n\nACCOUNT=tester\nEMAIL=${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com\n\ngcloud iam service-accounts create ${ACCOUNT} \\\n--project=${PROJECT}\n\ngcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \\\n--iam-account=${EMAIL}\n\ngcloud projects add-iam-policy-binding ${PROJECT} \\\n--member=serviceAccount:${EMAIL} \\\n--role=roles/pubsub.editor\n\nexport GOOGLE_APPLICATION_CREDENTIALS=${PWD}/${ACCOUNT}.json\nexport PROJECT\nexport PUB=\"pub\"\nexport SUB=\"sub\"\n\ngcloud pubsub topics create ${PUB} \\\n--project=${PROJECT}\n\ngcloud pubsub subscriptions create ${SUB} \\\n--topic=${PUB} \\\n--project=${PROJECT}\n\npublish.py:\nimport os\nfrom google.cloud import pubsub_v1\n\nproject = os.getenv(\"PROJECT\")\ntopic = os.getenv(\"PUB\")\n\ntopic_path = f\"projects/{project}/topics/{topic}\"\n\ndata = 'A garden sensor is ready!'\ndata = data.encode('utf-8')\nattributes = {\n 'sensorName': 'garden-001',\n 'temperature': '75.0',\n 'humidity': '60'\n}\n\npublisher = pubsub_v1.PublisherClient()\nfuture = publisher.publish(topic_path, data, **attributes)\nprint(f'published message id {future.result()}')\n\n\nsubscribe.py:\nimport os\nfrom google.cloud import pubsub_v1\nfrom concurrent.futures import TimeoutError\n\n\nproject=os.getenv(\"PROJECT\")\nsubscription=os.getenv(\"SUB\")\nsubscription_path = f\"projects/{project}/subscriptions/{subscription}\"\n\n\ndef callback(message):\n print(f'Received message: {message}')\n print(f'data: {message.data}')\n\n if message.attributes:\n print(\"Attributes:\")\n for key in message.attributes:\n value = message.attributes.get(key)\n print(f\"{key}: {value}\")\n\n message.ack()\n\n\nsubscriber = pubsub_v1.SubscriberClient()\n\nstreaming_pull_future = subscriber.subscribe(\n subscription_path, callback=callback)\nprint(f'Listening for messages on {subscription_path}')\n\nwith subscriber:\n try:\n streaming_pull_future.result()\n except TimeoutError:\n streaming_pull_future.cancel()\n # block until the shutdown is complete\n streaming_pull_future.result()\n\nRun python3 subscribe.py:\npython3 subscribe.py\nListening for messages on projects/{project}/subscriptions/{sub}\nReceived message: Message {\n data: b'A garden sensor is ready!'\n ordering_key: ''\n attributes: {\n \"humidity\": \"60\",\n \"sensorName\": \"garden-001\",\n \"temperature\": \"75.0\"\n }\n}\ndata: b'A garden sensor is ready!'\nAttributes:\nhumidity: 60\ntemperature: 75.0\nsensorName: garden-001\n\nAnd in a separate window python3 publish.py:\npython3 publish.py\npublished message id 1234567890123456\n\n"
] |
[
3
] |
[] |
[] |
[
"google_cloud_platform",
"google_cloud_pubsub",
"python"
] |
stackoverflow_0074535931_google_cloud_platform_google_cloud_pubsub_python.txt
|
Q:
Convert txt file to csv, separating specific lines into columns
I currently have data like this (the ... just means there are more lines; no need to post the entire file here):
376 932
noms sommets
0000 Abbesses
0001 Alexandre Dumas
0002 Alma Marceau
...
0375 Étienne Marcel
coord sommets
0000 308 536
0001 472 386
0002 193 404
...
0375 347 412
arcs values
0 238 41
0 159 46
1 12 36
1 235 44
...
367 366 120.0
When converted to CSV, the data should have three columns:
nom     sommets     coord sommets
0000    Abbesses    308 536
However, everything in the data sits on plain lines and is hard to deal with. What is the solution for this? I am trying to convert it from txt to csv.
A:
from pathlib import Path
import pandas as pd
f = Path("metro")
lines = [[], [], []]
file_num = -1
for line in f.read_text().split("\n"):
if not line:
continue
cells = line.split(maxsplit=1)
if cells[0] in ["noms", "coord", "arcs"]:
file_num += 1
if file_num >= 0:
lines[file_num].append(cells)
def get_df(data):
df1 = pd.DataFrame(data)
df1.columns = df1.iloc[0]
df1 = df1.drop(index=0)
df1.columns.name = None
return df1
df1 = get_df(lines[0])
df2 = get_df(lines[1])
df3 = get_df(lines[2])
df2.columns = [df1.columns[0], " ".join(df2.columns)]
res = pd.merge(df1, df2, how="outer", on="noms")
# noms sommets coord sommets
# 0 0000 Abbesses 308 536
# 1 0001 Alexandre Dumas 472 386
# 2 0002 NaN 193 404
res.to_csv("metro.csv")
Edit: to resolve the encoding issue pass the encoding you want to read_text().
for line in f.read_text(encoding="latin-1").split("\n"):
...
Edit: you don't say how you want to process the columns under "arcs values", so I left the df3 as is.
A:
Without imports you can do this.
There are some safety checks due to the noise in the data.
Also, I'm using a dict, as dicts are extremely fast for key/value lookups.
with open("metro", encoding="latin-1") as infile:
data = infile.read().splitlines()
nom_start = "noms sommets"
coord_start = "coord sommets"
end = "arcs values"
mode = None
# use a dict as lookups on dicts are stupidly fast.
result = {}
for line in data:
# this one is needed due to the first line
if mode == None:
if line == nom_start:
mode = nom_start
continue
line = line.strip()
# safety check
if line != "":
if line == end:
# skip the end data
break
key, value = line.split(maxsplit=1)
if mode == nom_start:
if line != coord_start:
result[key] = {"sommets": value}
else:
mode = coord_start
else:
result[key]["coord sommets"] = value
# CSV separator
SEP = ";"
with open("output.csv", "w", encoding="latin-1") as outfile:
# CSV header
outfile.write(f"noms{SEP}sommets{SEP}coord sommets\n")
for key, val in result.items():
outfile.write(f'{key}{SEP}{val["sommets"]}{SEP}{val["coord sommets"]}\n')
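To sanity-check the output file, a small sketch reading it back (dtype=str keeps the leading zeros in the key column):
import pandas as pd
check = pd.read_csv("output.csv", sep=";", dtype=str)
print(check.head())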
A:
Quite an interesting problem. I'm assuming the file contains more columns, or sets of key/variables, than just in the example. So you wouldn't want to hard-code the column names.
I would create a new empty dataframe, then read the input file line-by-line, check if it is the next new column name (not starting with digits), build a dictionary with those new values, and then keep merging that dictionary as new columns into the new dataframe.
So I would do something like this:
import pandas as pd
# create an Empty DataFrame object
df_new = pd.DataFrame({"recordkey": []})
# read all input lines
inputfilename = "inputfile.txt"
file1 = open(inputfilename, 'r')
Lines = file1.readlines()
tmpdict = {}
colname = ""
# iterate through all lines
for idx in range(len(Lines)):
line = Lines[idx]
# this is assuming all keys are exactly 4 digits
iscolname = not (line[:4].isdigit())
if not iscolname:
# split on the first space for key and value
tmp = line.split(" ", 1)
getkey = tmp[0].strip()
getvalue = tmp[1].strip()
# add to dictionary
tmpdict[getkey] = getvalue
# new column or last line
if iscolname or idx == len(Lines)-1:
# new column (except skip for first line of file)
if colname != "":
# create new column from dictionary
df_tmp = pd.DataFrame(tmpdict.items(), columns=["recordkey", colname])
df_new = df_new.merge(df_tmp, how='outer', on='recordkey')
# keep new column name
colname = line.strip()
tmpdict = {}
# display dataframe
print(df_new)
# write dataframe to csv
fileoutput = "outputfile.csv"
df_new.to_csv(fileoutput, sep=",", index=False)
|
Convert txt file to csv, separating specific lines into columns
|
I currently have data like this (the ... just means there are more lines; no need to post the entire file here):
376 932
noms sommets
0000 Abbesses
0001 Alexandre Dumas
0002 Alma Marceau
...
0375 Étienne Marcel
coord sommets
0000 308 536
0001 472 386
0002 193 404
...
0375 347 412
arcs values
0 238 41
0 159 46
1 12 36
1 235 44
...
367 366 120.0
When converted to CSV, the data should have three columns:
nom     sommets     coord sommets
0000    Abbesses    308 536
However, everything in the data sits on plain lines and is hard to deal with. What is the solution for this? I am trying to convert it from txt to csv.
|
[
"from pathlib import Path\n\nimport pandas as pd\n\nf = Path(\"metro\")\n\nlines = [[], [], []]\nfile_num = -1\n\nfor line in f.read_text().split(\"\\n\"):\n if not line:\n continue\n cells = line.split(maxsplit=1)\n if cells[0] in [\"noms\", \"coord\", \"arcs\"]:\n file_num += 1\n if file_num >= 0:\n lines[file_num].append(cells)\n\n\ndef get_df(data):\n df1 = pd.DataFrame(data)\n df1.columns = df1.iloc[0]\n df1 = df1.drop(index=0)\n df1.columns.name = None\n return df1\n\n\ndf1 = get_df(lines[0])\ndf2 = get_df(lines[1])\ndf3 = get_df(lines[2])\n\ndf2.columns = [df1.columns[0], \" \".join(df2.columns)]\n\nres = pd.merge(df1, df2, how=\"outer\", on=\"noms\")\n# noms sommets coord sommets\n# 0 0000 Abbesses 308 536\n# 1 0001 Alexandre Dumas 472 386\n# 2 0002 NaN 193 404\nres.to_csv(\"metro.csv\")\n\nEdit: to resolve the encoding issue pass the encoding you want to read_text().\nfor line in f.read_text(encoding=\"latin-1\").split(\"\\n\"):\n ...\n\nEdit: you don't say how you want to process the columns under \"arcs values\", so I left the df3 as is.\n",
"without imports you can do this.\nThere's some safety checks due to the noise in the data.\nAlso, I'm using a dict as they are extremely fast when trying to find key/value pairs.\nwith open(\"metro\", encoding=\"latin-1\") as infile:\n data = infile.read().splitlines()\n\nnom_start = \"noms sommets\"\ncoord_start = \"coord sommets\"\nend = \"arcs values\"\nmode = None\n\n# use a dict as lookups on dicts are stupidly fast.\nresult = {}\n\nfor line in data:\n # this one is needed due to the first line\n if mode == None:\n if line == nom_start:\n mode = nom_start\n continue\n line = line.strip()\n # safety check\n if line != \"\":\n if line == end:\n # skip the end data\n break\n key, value = line.split(maxsplit=1)\n if mode == nom_start:\n if line != coord_start:\n result[key] = {\"sommets\": value}\n else:\n mode = coord_start\n else:\n result[key][\"coord sommets\"] = value\n\n\n# CSV separator\nSEP = \";\"\nwith open(\"output.csv\", \"w\", encoding=\"latin-1\") as outfile:\n # CSV header\n outfile.write(f\"noms{SEP}sommets{SEP}coord sommets\\n\")\n for key, val in result.items():\n outfile.write(f'{key}{SEP}{val[\"sommets\"]}{SEP}{val[\"coord sommets\"]}\\n')\n\n",
"Quite an interesting problem. I'm assuming the file contains more columns, or sets of key/variables, than just in the example. So you wouldn't want to hard-code the column names.\nI would create an new empty dataframe, then read the input file line-by-line, check if it is the next new column name (not starting with digits), build a dictionary with those new values, and then keep merging that dictionary as a new columns into the new dataframe.\nSo I would do something like this:\nimport pandas as pd\n\n# create an Empty DataFrame object\ndf_new = pd.DataFrame({\"recordkey\": []})\n\n# read all input lines\ninputfilename = \"inputfile.txt\"\nfile1 = open(inputfilename, 'r')\nLines = file1.readlines()\n\ntmpdict = {}\ncolname = \"\"\n\n# iterate through all lines\nfor idx in range(len(Lines)):\n line = Lines[idx]\n # this is assuming all keys are exactly 4 digits\n iscolname = not (line[:4].isdigit())\n \n if not iscolname:\n # split on the first space for key and value\n tmp = line.split(\" \", 1)\n getkey = tmp[0].strip()\n getvalue = tmp[1].strip()\n\n # add to dictionary\n tmpdict[getkey] = getvalue\n\n # new column or last line\n if iscolname or idx == len(Lines)-1:\n # new column (except skip for first line of file)\n if colname != \"\":\n # create new column from dictionary\n df_tmp = pd.DataFrame(tmpdict.items(), columns=[\"recordkey\", colname])\n df_new = df_new.merge(df_tmp, how='outer', on='recordkey')\n\n # keep new column name\n colname = line.strip()\n tmpdict = {}\n\n# display dataframe\nprint(df_new)\n\n# write dataframe to csv\nfileoutput = \"outputfile.csv\"\ndf_new.to_csv(fileoutput, sep=\",\", index=False)\n\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"csv",
"python",
"txt"
] |
stackoverflow_0074534469_csv_python_txt.txt
|
Q:
Failed building wheel for python-rtmidi
I'm trying to import magenta to use wavenet, however it always fails and I cannot find any useful information online.
It keeps giving me this error:
Building wheels for collected packages: numba, python-rtmidi, llvmlite
Building wheel for numba (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [14 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\13003\AppData\Local\Temp\pip-install-417gts98\numba_d67d4f03411546d299e8418ef12a08c5\setup.py", line 358, in <module>
metadata['ext_modules'] = get_ext_modules()
File "C:\Users\13003\AppData\Local\Temp\pip-install-417gts98\numba_d67d4f03411546d299e8418ef12a08c5\setup.py", line 94, in get_ext_modules
import numpy.distutils.misc_util as np_misc
File "D:\Anaconda\envs\venv\lib\site-packages\numpy\distutils\__init__.py", line 24, in <module>
from . import ccompiler
File "D:\Anaconda\envs\venv\lib\site-packages\numpy\distutils\ccompiler.py", line 20, in <module>
from numpy.distutils import log
File "D:\Anaconda\envs\venv\lib\site-packages\numpy\distutils\log.py", line 4, in <module>
from distutils.log import Log as old_Log
ImportError: cannot import name 'Log' from 'distutils.log' (D:\Anaconda\envs\venv\lib\site-packages\setuptools\_distutils\log.py)
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numba
Running setup.py clean for numba
error: subprocess-exited-with-error
c:\users\13003\appdata\local\temp\pip-install-417gts98\python-rtmidi_e5f4214911f54de8b049d39f4499d15a\src\RtMidi.h(48): fatal error C1083: Cannot open include file: 'exception': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\bin\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for python-rtmidi
Running setup.py clean for python-rtmidi
Building wheel for llvmlite (setup.py) ... error
error: subprocess-exited-with-error
Message: '"C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\bin\\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -D__WINDOWS_MM__ -Isrc -ID:\\Anaconda\\envs\\venv\\include -ID:\\Anaconda\\envs\\venv\\Include /EHsc /Tpsrc\\RtMidi.cpp /Fobuild\\temp.win-amd64-cpython-39\\Release\\src\\RtMidi.obj /EHsc'
Arguments: ()
RtMidi.cpp
c:\users\13003\appdata\local\temp\pip-install-417gts98\python-rtmidi_e5f4214911f54de8b049d39f4499d15a\src\RtMidi.h(48): fatal error C1083: Cannot open include file: 'exception': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\bin\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
Rolling back uninstall of python-rtmidi
Moving to d:\anaconda\envs\venv\lib\site-packages\python_rtmidi-1.4.9.dist-info\
from D:\Anaconda\envs\venv\Lib\site-packages\~ython_rtmidi-1.4.9.dist-info
Moving to d:\anaconda\envs\venv\lib\site-packages\rtmidi\
from D:\Anaconda\envs\venv\Lib\site-packages\~tmidi
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> python-rtmidi
I'm sure I have installed python-rtmidi, numba, and llvmlite, but it keeps looking for them in a nonexistent path.
Can anyone help me out? I really appreciate it T_T
A:
Try setting this environment variable first:
export SETUPTOOLS_USE_DISTUTILS=stdlib
or alternatively prefix your command, e.g. change poetry install to:
SETUPTOOLS_USE_DISTUTILS=stdlib poetry install
(source)
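On Windows the equivalent would be (cmd and PowerShell respectively; set it in the same shell before running pip):
set SETUPTOOLS_USE_DISTUTILS=stdlib
$env:SETUPTOOLS_USE_DISTUTILS = "stdlib"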
|
Failed building wheel for python-rtmidi
|
I'm trying to import magenta to use wavenet, however it always fails and I cannot find any useful information online.
It keeps giving me this error:
Building wheels for collected packages: numba, python-rtmidi, llvmlite
Building wheel for numba (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [14 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\13003\AppData\Local\Temp\pip-install-417gts98\numba_d67d4f03411546d299e8418ef12a08c5\setup.py", line 358, in <module>
metadata['ext_modules'] = get_ext_modules()
File "C:\Users\13003\AppData\Local\Temp\pip-install-417gts98\numba_d67d4f03411546d299e8418ef12a08c5\setup.py", line 94, in get_ext_modules
import numpy.distutils.misc_util as np_misc
File "D:\Anaconda\envs\venv\lib\site-packages\numpy\distutils\__init__.py", line 24, in <module>
from . import ccompiler
File "D:\Anaconda\envs\venv\lib\site-packages\numpy\distutils\ccompiler.py", line 20, in <module>
from numpy.distutils import log
File "D:\Anaconda\envs\venv\lib\site-packages\numpy\distutils\log.py", line 4, in <module>
from distutils.log import Log as old_Log
ImportError: cannot import name 'Log' from 'distutils.log' (D:\Anaconda\envs\venv\lib\site-packages\setuptools\_distutils\log.py)
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numba
Running setup.py clean for numba
error: subprocess-exited-with-error
c:\users\13003\appdata\local\temp\pip-install-417gts98\python-rtmidi_e5f4214911f54de8b049d39f4499d15a\src\RtMidi.h(48): fatal error C1083: Cannot open include file: 'exception': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\bin\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for python-rtmidi
Running setup.py clean for python-rtmidi
Building wheel for llvmlite (setup.py) ... error
error: subprocess-exited-with-error
Message: '"C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\bin\\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -D__WINDOWS_MM__ -Isrc -ID:\\Anaconda\\envs\\venv\\include -ID:\\Anaconda\\envs\\venv\\Include /EHsc /Tpsrc\\RtMidi.cpp /Fobuild\\temp.win-amd64-cpython-39\\Release\\src\\RtMidi.obj /EHsc'
Arguments: ()
RtMidi.cpp
c:\users\13003\appdata\local\temp\pip-install-417gts98\python-rtmidi_e5f4214911f54de8b049d39f4499d15a\src\RtMidi.h(48): fatal error C1083: Cannot open include file: 'exception': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\bin\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
Rolling back uninstall of python-rtmidi
Moving to d:\anaconda\envs\venv\lib\site-packages\python_rtmidi-1.4.9.dist-info\
from D:\Anaconda\envs\venv\Lib\site-packages\~ython_rtmidi-1.4.9.dist-info
Moving to d:\anaconda\envs\venv\lib\site-packages\rtmidi\
from D:\Anaconda\envs\venv\Lib\site-packages\~tmidi
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> python-rtmidi
I'm sure I have installed python-rtmidi, numba, and llvmlite, but it keeps looking for them in a nonexistent path.
Can anyone help me out? I really appreciate it T_T
|
[
"Try setting this environment variable first:\nexport SETUPTOOLS_USE_DISTUTILS=stdlib\n\nor alternatively prefix your command, e.g. change poetry install to:\nSETUPTOOLS_USE_DISTUTILS=stdlib poetry install\n\n(source)\n"
] |
[
0
] |
[] |
[] |
[
"magenta",
"python"
] |
stackoverflow_0074500708_magenta_python.txt
|
Q:
Injecting function call after __init__ with decorator
I'm trying to find the best way to create a class decorator that does the following:
Injects a few functions into the decorated class
Forces a call to one of these functions AFTER the decorated class' __init__ is called
Currently, I'm just saving off a reference to the 'original' __init__ method and replacing it with my __init__ that calls the original and my additional function. It looks similar to this:
orig_init = cls.__init__
def new_init(self, *args, **kwargs):
"""
'Extend' wrapped class' __init__ so we can attach to all signals
automatically
"""
orig_init(self, *args, **kwargs)
self._debugSignals()
cls.__init__ = new_init
Is there a better way to 'augment' the original __init__ or inject my call somewhere else? All I really need is for my self._debugSignals() to be called sometime after the object is created. I also want it to happen automatically, which is why I thought after __init__ was a good place.
Extra misc. decorator notes
It might be worth mentioning some background on this decorator. You can find the full code here. The point of the decorator is to automatically attach to any PyQt signals and print when they are emitted. The decorator works fine when I decorate my own subclasses of QtCore.QObject, however I've been recently trying to automatically decorate all QObject children.
I'd like to have a 'debug' mode in the application where I can automatically print ALL signals just to make sure things are doing what I expect. I'm sure this will result in TONS of debug, but I'd still like to see what's happening.
The problem is my current version of the decorator is causing a segfault when replacing QtCore.QObject.__init__. I've tried to debug this, but the code is all SIP generated, which I don't have much experience with.
So, I was wondering if there was a safer, more pythonic way to inject a function call AFTER the __init__ and hopefully avoid the segfault.
A:
Based on this post and this answer, an alternative way to do this is through a custom metaclass. This would work as follows (tested in Python 2.7):
# define a new metaclass which overrides the "__call__" function
class NewInitCaller(type):
def __call__(cls, *args, **kwargs):
"""Called when you call MyNewClass() """
obj = type.__call__(cls, *args, **kwargs)
obj.new_init()
return obj
# then create a new class with the __metaclass__ set as our custom metaclass
class MyNewClass(object):
__metaclass__ = NewInitCaller
def __init__(self):
print "Init class"
def new_init(self):
print "New init!!"
# when you create an instance
a = MyNewClass()
>>> Init class
>>> New init!!
The basic idea is that:
when you call MyNewClass() it searches for the metaclass, finds that you have defined NewInitCaller
The metaclass __call__ function is called.
This function creates the MyNewClass instance using type,
The instance runs its own __init__ (printing "Init class").
The meta class then calls the new_init function of the instance.
A:
Here is the solution for Python 3.x, based on this post's accepted answer. Also see PEP 3115 for reference, I think the rationale is an interesting read.
Changes in the example above are shown with comments; the only real change is the way the metaclass is defined, all other are trivial 2to3 modifications.
# define a new metaclass which overrides the "__call__" function
class NewInitCaller(type):
def __call__(cls, *args, **kwargs):
"""Called when you call MyNewClass() """
obj = type.__call__(cls, *args, **kwargs)
obj.new_init()
return obj
# then create a new class with the metaclass passed as an argument
class MyNewClass(object, metaclass=NewInitCaller): # added argument
# __metaclass__ = NewInitCaller this line is removed; would not have effect
def __init__(self):
print("Init class") # function, not command
def new_init(self):
print("New init!!") # function, not command
# when you create an instance
a = MyNewClass()
>>> Init class
>>> New init!!
A:
Here's a generalized form of jake77's example which implements __post_init__ on a non-dataclass. This enables a subclass's configure() to be automatically invoked in correct sequence after the base & subclass __init__s have completed.
# define a new metaclass which overrides the "__call__" function
class PostInitCaller(type):
def __call__(cls, *args, **kwargs):
"""Called when you call BaseClass() """
print(f"{__class__.__name__}.__call__({args}, {kwargs})")
obj = type.__call__(cls, *args, **kwargs)
obj.__post_init__(*args, **kwargs)
return obj
# then create a new class with the metaclass passed as an argument
class BaseClass(object, metaclass=PostInitCaller):
def __init__(self, *args, **kwargs):
print(f"{__class__.__name__}.__init__({args}, {kwargs})")
super().__init__()
def __post_init__(self, *args, **kwargs):
print(f"{__class__.__name__}.__post_init__({args}, {kwargs})")
self.configure(*args, **kwargs)
def configure(self, *args, **kwargs):
print(f"{__class__.__name__}.configure({args}, {kwargs})")
class SubClass(BaseClass):
def __init__(self, *args, **kwargs):
print(f"{__class__.__name__}.__init__({args}, {kwargs})")
super().__init__(*args, **kwargs)
def configure(self, *args, **kwargs):
print(f"{__class__.__name__}.configure({args}, {kwargs})")
super().configure(*args, **kwargs)
# when you create an instance
a = SubClass('a', b='b')
running gives:
PostInitCaller.__call__(('a',), {'b': 'b'})
SubClass.__init__(('a',), {'b': 'b'})
BaseClass.__init__(('a',), {'b': 'b'})
BaseClass.__post_init__(('a',), {'b': 'b'})
SubClass.configure(('a',), {'b': 'b'})
BaseClass.configure(('a',), {'b': 'b'})
A:
I know that the metaclass approach is the pro way, but I have a more readable and easier proposal using @staticmethod:
class Invites(TimestampModel, db.Model):
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
invitee_email = db.Column(db.String(128), nullable=False)
def __init__(self, invitee_email):
        self.invitee_email = invitee_email
@staticmethod
def create_invitation(invitee_email):
"""
Create an invitation
saves it and fetches it because the id
is being generated in the DB
"""
invitation = Invites(invitee_email)
        db.session.add(invitation)
db.session.commit()
return Invites.query.filter(
            Invites.invitee_email == invitee_email
).one_or_none()
So I could use it this way:
invitation = Invites.create_invitation("jim@mail.com")
print(invitation.id, invitation.invitee_email)
>>>> 1 jim@mail.com
|
Injecting function call after __init__ with decorator
|
I'm trying to find the best way to create a class decorator that does the following:
Injects a few functions into the decorated class
Forces a call to one of these functions AFTER the decorated class' __init__ is called
Currently, I'm just saving off a reference to the 'original' __init__ method and replacing it with my __init__ that calls the original and my additional function. It looks similar to this:
orig_init = cls.__init__
def new_init(self, *args, **kwargs):
"""
'Extend' wrapped class' __init__ so we can attach to all signals
automatically
"""
orig_init(self, *args, **kwargs)
self._debugSignals()
cls.__init__ = new_init
Is there a better way to 'augment' the original __init__ or inject my call somewhere else? All I really need is for my self._debugSignals() to be called sometime after the object is created. I also want it to happen automatically, which is why I thought after __init__ was a good place.
Extra misc. decorator notes
It might be worth mentioning some background on this decorator. You can find the full code here. The point of the decorator is to automatically attach to any PyQt signals and print when they are emitted. The decorator works fine when I decorate my own subclasses of QtCore.QObject, however I've been recently trying to automatically decorate all QObject children.
I'd like to have a 'debug' mode in the application where I can automatically print ALL signals just to make sure things are doing what I expect. I'm sure this will result in TONS of debug, but I'd still like to see what's happening.
The problem is my current version of the decorator is causing a segfault when replacing QtCore.QObject.__init__. I've tried to debug this, but the code is all SIP generated, which I don't have much experience with.
So, I was wondering if there was a safer, more pythonic way to inject a function call AFTER the __init__ and hopefully avoid the segfault.
|
[
"Based on this post and this answer, an alternative way to do this is through a custom metaclass. This would work as follows (tested in Python 2.7):\n# define a new metaclass which overrides the \"__call__\" function\nclass NewInitCaller(type):\n def __call__(cls, *args, **kwargs):\n \"\"\"Called when you call MyNewClass() \"\"\"\n obj = type.__call__(cls, *args, **kwargs)\n obj.new_init()\n return obj\n\n\n# then create a new class with the __metaclass__ set as our custom metaclass\nclass MyNewClass(object):\n __metaclass__ = NewInitCaller\n def __init__(self):\n print \"Init class\"\n def new_init(self):\n print \"New init!!\"\n\n# when you create an instance\na = MyNewClass()\n>>> Init class\n>>> New init!!\n\nThe basic idea is that:\n\nwhen you call MyNewClass() it searches for the metaclass, finds that you have defined NewInitCaller \nThe metaclass __call__ function is called. \nThis function creates the MyNewClass instance using type, \nThe instance runs its own __init__ (printing \"Init class\"). \nThe meta class then calls the new_init function of the instance.\n\n",
"Here is the solution for Python 3.x, based on this post's accepted answer. Also see PEP 3115 for reference, I think the rationale is an interesting read. \nChanges in the example above are shown with comments; the only real change is the way the metaclass is defined, all other are trivial 2to3 modifications.\n# define a new metaclass which overrides the \"__call__\" function\nclass NewInitCaller(type):\n def __call__(cls, *args, **kwargs):\n \"\"\"Called when you call MyNewClass() \"\"\"\n obj = type.__call__(cls, *args, **kwargs)\n obj.new_init()\n return obj\n\n# then create a new class with the metaclass passed as an argument\nclass MyNewClass(object, metaclass=NewInitCaller): # added argument\n # __metaclass__ = NewInitCaller this line is removed; would not have effect\n def __init__(self):\n print(\"Init class\") # function, not command\n def new_init(self):\n print(\"New init!!\") # function, not command\n\n# when you create an instance\na = MyNewClass()\n>>> Init class\n>>> New init!!\n\n",
"Here's a generalized form of jake77's example which implements __post_init__ on a non-dataclass. This enables a subclass's configure() to be automatically invoked in correct sequence after the base & subclass __init__s have completed.\n# define a new metaclass which overrides the \"__call__\" function\nclass PostInitCaller(type):\n def __call__(cls, *args, **kwargs):\n \"\"\"Called when you call BaseClass() \"\"\"\n print(f\"{__class__.__name__}.__call__({args}, {kwargs})\")\n obj = type.__call__(cls, *args, **kwargs)\n obj.__post_init__(*args, **kwargs)\n return obj\n\n\n# then create a new class with the metaclass passed as an argument\nclass BaseClass(object, metaclass=PostInitCaller):\n def __init__(self, *args, **kwargs):\n print(f\"{__class__.__name__}.__init__({args}, {kwargs})\")\n super().__init__()\n\n def __post_init__(self, *args, **kwargs):\n print(f\"{__class__.__name__}.__post_init__({args}, {kwargs})\")\n self.configure(*args, **kwargs)\n\n def configure(self, *args, **kwargs):\n print(f\"{__class__.__name__}.configure({args}, {kwargs})\")\n\n\nclass SubClass(BaseClass):\n def __init__(self, *args, **kwargs):\n print(f\"{__class__.__name__}.__init__({args}, {kwargs})\")\n super().__init__(*args, **kwargs)\n\n def configure(self, *args, **kwargs):\n print(f\"{__class__.__name__}.configure({args}, {kwargs})\")\n super().configure(*args, **kwargs)\n\n# when you create an instance\na = SubClass('a', b='b')\n\nrunning gives:\nPostInitCaller.__call__(('a',), {'b': 'b'})\nSubClass.__init__(('a',), {'b': 'b'})\nBaseClass.__init__(('a',), {'b': 'b'})\nBaseClass.__post_init__(('a',), {'b': 'b'})\nSubClass.configure(('a',), {'b': 'b'})\nBaseClass.configure(('a',), {'b': 'b'})\n\n",
"I know that the metaclass approach is the Pro way, but I've a more readable and easy proposal using @staticmethod:\nclass Invites(TimestampModel, db.Model):\n id = db.Column(db.Integer, primary_key=True, autoincrement=True)\n invitee_email = db.Column(db.String(128), nullable=False)\n \n def __init__(self, invitee_email):\n invitee_email = invitee_email\n\n @staticmethod\n def create_invitation(invitee_email):\n \"\"\"\n Create an invitation\n saves it and fetches it because the id\n is being generated in the DB\n \"\"\"\n invitation = Invites(invitee_email)\n db.session.save(invitation)\n db.session.commit()\n\n return Invites.query.filter(\n PartnerInvites.invitee_email == invitee_email\n ).one_or_none()\n\nSo I could use it this way:\ninvitation = Invites.create_invitation(\"jim@mail.com\")\nprint(invitation.id, invitation.invitee_email)\n\n>>>> 1 jim@mail.com\n\n"
] |
[
16,
2,
1,
0
] |
[] |
[] |
[
"decorator",
"pyqt4",
"python"
] |
stackoverflow_0016017397_decorator_pyqt4_python.txt
|
Q:
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any(), or a.all()
I was trying to apply a function in python that checks for multiple conditions across different columns in a dataframe and returns a value.
df= pd.DataFrame(data)
def function(data):
if data['product']= product1:
If data['tenure']> 4:
return 19
X= df.apply(function)
What am I doing wrong?
I replaced the logical and conditions with the Boolean &.
Also tried converting each column into series inside the function as I thought apply will only take series instead of dataframe but again I got confused and this didn't work.
A:
You need to compare with == (a single = is assignment, not comparison):
product1 = "some value"

df = pd.DataFrame(data)

def function(data):
    if data['product'] == product1:
        if data['tenure'] > 4:
            return 19

X = df.apply(function, axis=1)  # axis=1 passes each row to the function
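As a side note, a vectorized alternative (a sketch assuming the same columns; the result column name is hypothetical) avoids row-wise apply entirely:
import numpy as np

df['result'] = np.where((df['product'] == product1) & (df['tenure'] > 4), 19, np.nan)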
|
Value error: The truth value of a series is ambiguous. Use a.empty, a.bool(), a.item(), a.any(), or a.all()
|
I was trying to apply a function in python that checks for multiple conditions across different columns in a dataframe and returns a value.
df= pd.DataFrame(data)
def function(data):
if data['product']= product1:
If data['tenure']> 4:
return 19
X= df.apply(function)
What am I doing wrong?
I replaced the logical and conditions with the Boolean &.
Also tried converting each column into series inside the function as I thought apply will only take series instead of dataframe but again I got confused and this didn't work.
|
[
"you need to compare with ==\nproduct1=\"some value\"\n\ndf= pd.DataFrame(data)\n\ndef function(data):\n if data['product']== product1:\n If data['tenure']> 4:\n return 19\n\nX= df.apply(function)\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074536345_pandas_python.txt
|
Q:
How to access, in a base class method, a certain class attribute across all derived classes?
I have a class hierarchy like this:
class C:
keys = {1}
def get_keys(self):
return C.keys + self... # ???
class D(C):
keys = {2,3}
class E(D):
keys = {4,5}
I'd like to access and gather the contents of keys from all classes in the hierarchy (from self.__class__ up to C) without having to add any additional code to the derived classes.
In this example, I'd like E().get_keys() to return {1,2,3,4,5}.
I suppose it should be feasible starting through self, but I'm not sure how I'm supposed to traverse the whole inheritance chain.
Could anyone help?
A:
Have each class add its keys to the set inherited from the parent. Then use self.keys in the method.
class C:
keys = {1}
def get_keys(self):
return self.keys
class D(C):
keys = C.keys | {2,3}
class E(D):
keys = D.keys | {4,5}
|
How to access, in a base class method, a certain class attribute across all derived classes?
|
I have a class hierarchy like this:
class C:
keys = {1}
def get_keys(self):
return C.keys + self... # ???
class D(C):
keys = {2,3}
class E(D):
keys = {4,5}
I'd like to access and gather the contents of keys from all classes in the hierarchy (from self.__class__ up to C) without having to add any additional code to the derived classes.
In this example, I'd like E().get_keys() to return {1,2,3,4,5}.
I suppose it should be feasible starting through self, but I'm not sure how I'm supposed to traverse the whole inheritance chain.
Could anyone help?
|
[
"Have each class add its keys to the set inherited from the parent. Then use self.keys in the method.\nclass C:\n keys = {1}\n def get_keys(self):\n return self.keys\n\nclass D(C):\n keys = C.keys | {2,3}\n\nclass E(D):\n keys = D.keys | {4,5}\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074536667_python_python_3.x.txt
|
Q:
Python Bigquery create temp table
When I create a temp table via Python, an error is thrown:
400 Use of CREATE TEMPORARY TABLE requires a script or session
How can I create a session?
from google.colab import auth
from google.cloud import bigquery
from google.colab import data_table
client = bigquery.Client(project=project, location = location)
client.query('''
create temp table t_acquisted_users as
select *
from table_a
limit 10
''').result()
A:
You can create a session using the BigQuery API using the create_session parameter in a job config, for example:
job_config=bigquery.QueryJobConfig(create_session=True)
More details on this excellent article:
https://dev.to/stack-labs/bigquery-transactions-over-multiple-queries-with-sessions-2ll5
A:
That's how I fixed it quickly; awaiting a better answer from others.
# create session
client0 = bigquery.Client(project=project, location=location)
job = client0.query(
"SELECT 1;", # a query can't fail
job_config=bigquery.QueryJobConfig(create_session=True)
)
session_id = job.session_info.session_id
job.result()
# set default session
client = bigquery.Client(project=project, location=location,
default_query_job_config=bigquery.QueryJobConfig(
connection_properties=[
bigquery.query.ConnectionProperty(
key="session_id", value=session_id
)
]
))
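With the session set as the default, the temp-table query from the question should now run inside that session (a sketch reusing the question's table names):
client.query('''
create temp table t_acquisted_users as
select *
from table_a
limit 10
''').result()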
|
Python Bigquery create temp table
|
When I create a temp table via Python, an error is thrown:
400 Use of CREATE TEMPORARY TABLE requires a script or session
How can I create a session?
from google.colab import auth
from google.cloud import bigquery
from google.colab import data_table
client = bigquery.Client(project=project, location = location)
client.query('''
create temp table t_acquisted_users as
select *
from table_a
limit 10
''').result()
|
[
"You can create a session using the BigQuery API using the create_session parameter in a job config, for example:\njob_config=bigquery.QueryJobConfig(create_session=True)\nMore details on this excellent article:\nhttps://dev.to/stack-labs/bigquery-transactions-over-multiple-queries-with-sessions-2ll5\n",
"That's how I fix it in quick. Awaiting others provide a better answer\n# create session\nclient0 = bigquery.Client(project=project, location=location)\njob = client0.query(\n \"SELECT 1;\", # a query can't fail\n job_config=bigquery.QueryJobConfig(create_session=True)\n)\nsession_id = job.session_info.session_id\njob.result()\n\n# set default session\nclient = bigquery.Client(project=project, location=location, \n default_query_job_config=bigquery.QueryJobConfig(\n connection_properties=[\n bigquery.query.ConnectionProperty(\n key=\"session_id\", value=session_id\n )\n ]\n))\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"google_bigquery",
"python",
"temp_tables"
] |
stackoverflow_0074529599_google_bigquery_python_temp_tables.txt
|
Q:
How to calibrate camera and use it in real time?
I am trying to calibrate two cameras. I want to calibrate each one individually. At this point, my script can calibrate both cameras successfully. But now I want to use those calibrated cameras in real time. The code that I am using is the one available in the OpenCV documentation.
Below is the code. I'll just share this part because it's the one that isn't working as I want.
def calibrateCamera(self, chessboardRows=9, chessboardCols=6, imshow=False):
self.chessboardRows = chessboardRows
self.chessboardCols = chessboardCols
self.imshow = imshow
chessboardSize = (self.chessboardRows, self.chessboardCols)
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
objp = np.zeros((self.chessboardCols*self.chessboardRows,3), np.float32)
objp[:,:2] = np.mgrid[0:self.chessboardRows,0:self.chessboardCols].T.reshape(-1,2)
objpoints = []
imgpoints = []
for path, index in zip(self.paths, self.indices):
images = glob.glob(path + "*.png")
for img in images:
frame = cv.imread(img)
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
ret, corners = cv.findChessboardCorners(gray, chessboardSize, None)
if ret == True:
objpoints.append(objp)
corners2 = cv.cornerSubPix(gray,corners, (11,11), (-1,-1), criteria)
imgpoints.append(corners2)
cv.drawChessboardCorners(frame, chessboardSize, corners2, ret)
if self.imshow == True:
cv.imshow(f"Calibrated images, Camera{index}", frame)
cv.waitKey(0)
if ret == False:
print("No pattern detected")
break
ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
print(f"Camera{index} matrix\n", mtx)
print(f"Camera{index} distortion coefficients\n", dist)
h, w = frame.shape[:2]
newcameramtx, roi = cv.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
mapx, mapy = cv.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w,h), 5)
dst = cv.remap(frame, mapx, mapy, cv.INTER_LINEAR)
x, y, w, h = roi
dst = dst[y:y+h, x:x+w]
cv.imshow('calibresult.png', dst)
k = cv.waitKey(0)
Can anyone help me to use this "remap" in real time?
And, lastly, is there any limitation in terms of frame rate to use this kind of method in real time?
Thanks in advance,
A:
From calibration (OpenCV's calibrateCamera(), not your own function), you gain "intrinsics", i.e. camera matrix and distortion coefficients.
Store those intrinsics.
Then call initUndistortRectifyMap() with those intrinsics. You receive lookup maps suitable for remap(). You do this once, not for every video frame.
Then you use remap() on video frames, using those maps.
remap() of an entire image is fast enough for real-time processing but it has some cost still.
If you can, do your processing on untouched camera images (those frames you have before you call remap()). Then undistort whatever point data you get from your processing. Undistorting points is also not cheap, but cheaper if done on a few points instead of an entire image.
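A minimal real-time sketch of that flow (mtx and dist stand for the stored intrinsics, w and h for the frame size; the capture index 0 is hypothetical):
import cv2 as cv

# one-time setup: build the undistortion maps from the stored intrinsics
newcameramtx, roi = cv.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
mapx, mapy = cv.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w, h), cv.CV_32FC1)

cap = cv.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    undistorted = cv.remap(frame, mapx, mapy, cv.INTER_LINEAR)  # cheap per-frame step
    cv.imshow('undistorted', undistorted)
    if cv.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv.destroyAllWindows()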
A:
As mentioned by Christoph, you should use cv.initUndistortRectifyMap only once, outside your loop, to generate the map. Then, at each frame, you can use cv.remap.
Remapping (or undistorting) the entire image comes at a cost (especially for large images). Working on distorted images and only undistorting some selected points might be a better options. The function you can use to do so is cv.undistortPoints.
More information is available in the documentation of OpenCV.
https://docs.opencv.org/4.6.0/d9/d0c/group__calib3d.html#ga55c716492470bfe86b0ee9bf3a1f0f7e
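A short sketch of undistorting only a few detected points instead of the whole frame (reusing mtx, dist and newcameramtx from the question's calibration code; the pixel coordinates here are hypothetical):
import numpy as np

pts = np.array([[[100.0, 200.0]], [[320.0, 240.0]]], dtype=np.float32)  # shape (N, 1, 2)
undistorted_pts = cv.undistortPoints(pts, mtx, dist, P=newcameramtx)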
|
How to calibrate camera and use it in real time?
|
I am trying to calibrate two cameras. I want to calibrate each one individually. At this point, my script can calibrate both cameras successfully. But now I want to use those calibrated cameras in real time. The code that I am using is the one available in the OpenCV documentation.
Below is the code. I'll just share this part because it's the one that isn't working as I want.
def calibrateCamera(self, chessboardRows=9, chessboardCols=6, imshow=False):
self.chessboardRows = chessboardRows
self.chessboardCols = chessboardCols
self.imshow = imshow
chessboardSize = (self.chessboardRows, self.chessboardCols)
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
objp = np.zeros((self.chessboardCols*self.chessboardRows,3), np.float32)
objp[:,:2] = np.mgrid[0:self.chessboardRows,0:self.chessboardCols].T.reshape(-1,2)
objpoints = []
imgpoints = []
for path, index in zip(self.paths, self.indices):
images = glob.glob(path + "*.png")
for img in images:
frame = cv.imread(img)
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
ret, corners = cv.findChessboardCorners(gray, chessboardSize, None)
if ret == True:
objpoints.append(objp)
corners2 = cv.cornerSubPix(gray,corners, (11,11), (-1,-1), criteria)
imgpoints.append(corners2)
cv.drawChessboardCorners(frame, chessboardSize, corners2, ret)
if self.imshow == True:
cv.imshow(f"Calibrated images, Camera{index}", frame)
cv.waitKey(0)
if ret == False:
print("No pattern detected")
break
ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
print(f"Camera{index} matrix\n", mtx)
print(f"Camera{index} distortion coefficients\n", dist)
h, w = frame.shape[:2]
newcameramtx, roi = cv.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
mapx, mapy = cv.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w,h), 5)
dst = cv.remap(frame, mapx, mapy, cv.INTER_LINEAR)
x, y, w, h = roi
dst = dst[y:y+h, x:x+w]
cv.imshow('calibresult.png', dst)
k = cv.waitKey(0)
Can anyone help me to use this "remap" in real time?
And, lastly, is there any limitation in terms of frame rate to use this kind of method in real time?
Thanks in advance,
|
[
"From calibration (OpenCV's calibrateCamera(), not your own function), you gain \"intrinsics\", i.e. camera matrix and distortion coefficients.\nStore those intrinsics.\nThen call initUndistortRectifyMap() with those intrinsics. You receive lookup maps suitable for remap(). You do this once, not for every video frame.\nThen you use remap() on video frames, using those maps.\nremap() of an entire image is fast enough for real-time processing but it has some cost still.\nIf you can, do your processing on untouched camera images (those frames you have before you call remap()). Then undistort whatever point data you get from your processing. Undistorting points is also not cheap, but cheaper if done on a few points instead of an entire image.\n",
"As mentioned by Christoph, you should use cv.initUndistortRectifyMap only once, outside your loop, to generate the map. Then, at each frame, you can use cv.remap.\nRemapping (or undistorting) the entire image comes at a cost (especially for large images). Working on distorted images and only undistorting some selected points might be a better options. The function you can use to do so is cv.undistortPoints.\nMore information is available in the documentation of OpenCV.\nhttps://docs.opencv.org/4.6.0/d9/d0c/group__calib3d.html#ga55c716492470bfe86b0ee9bf3a1f0f7e\n"
] |
[
1,
0
] |
[] |
[] |
[
"computer_vision",
"opencv",
"python"
] |
stackoverflow_0074521195_computer_vision_opencv_python.txt
|
Q:
Python annotation on __init__ for two ways to instantiate a class
I have a class Circle
class Circle:
def __init__(self, R: float):
self.R = R
@property
def A(self):
return 3.14*self.R**2
# Annotation: Circle(R: float) -> None
But I want two ways to create an instance of this class using explicit arguments
Given the radius R: Circle(R = 1)
Given the diameter D: Circle(D = 2)
Then I can do it
class Circle:
def __init__(self, **kwargs):
if "R" in kwargs:
self.R = kwargs["R"]
elif "D" in kwargs:
self.R = kwargs["D"]/2
# Annotation: Circle(kwargs: Any) -> None
But the annotation of this __init__ gives no information that:
The possible arguments are R and D
The types of R and D are float.
Question: How can I inform the user that this class accepts these two inputs? And how do I implement it in a clean-code way?
A:
I would write this as:
class Circle:
def __init__(self, *, radius: float | None = None, diameter: float | None = None):
if radius is diameter is None or None not in (radius, diameter):
raise ValueError('radius xor diameter is required')
elif radius is not None:
self.radius = radius
else:
self.radius = diameter / 2
Note that it's more pythonic (and a better practice in general for readability) to use more verbose attribute/parameter names (in snake_case) than single-letter ones.
In general, it's better to avoid **kwargs when you are expecting specific parameters.
In cases where None would be a valid value, you could use an alternate sentinel value.
Another alternative would be to provide an alternate constructor:
from __future__ import annotations
class Circle:
def __init__(self, radius: float):
self.radius = radius
@classmethod
def from_diameter(cls, diameter: float) -> Circle:
return cls(diameter / 2)
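A quick usage example of the alternate constructor:
c = Circle.from_diameter(2)
print(c.radius)  # 1.0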
It would also be possible to use typing.overload with the first suggestion (or with **kwargs), but that can get a bit messier.
A:
In Python, unfortunately, it is not possible to have two __init__ methods for the same class. In Swift, for example, it is possible to create several constructors.
Also, it's much more common in Python to declare all the parameters you want, with defaults, in the function signature.
For example the function to create a bar graph from seaborn:
def barplot(
x=None, y=None,
hue=None, data=None,
order=None, hue_order=None,
estimator=np.mean, ci=95, n_boot=1000, units=None, seed=None,
orient=None, color=None, palette=None, saturation=.75,
errcolor=".26", errwidth=None, capsize=None, dodge=True,
ax=None,
**kwargs
):
pass
If one of the attributes does not have a default value, put None.
In your case it would look something like this:
class Foo:
def __init__(self, d: float = None, r: float = 0) -> None:
self.R = r
if d is not None:
self.R = d/2
This way your code is much more readable. If you really need to use **kwargs, there's no way to escape the conditionals to see if each variable exists.
To inform the user that there are 2 variables using the docstring is an excellent practice:
class Foo:
r"""Some description for Class."""
def __init__(self, d: float = None, r: float = 0) -> None:
r"""Some description.
### Parameters
``d``: float -- description
``r``: float -- description
"""
        self.r = r
        if d is not None:
            self.r = d/2
When hovering the mouse over it (or even while writing), the editor shows this documentation as a tooltip.
Extra:
The answer to this question might help you too.
A:
Two ways to call
Distinct call signatures for the same function are defined using typing.overload.
Explicit parameters
To force a function parameter to be "keyword-only", you can insert a bare * in the parameter list before it.
Sensible defaults
Since you'll likely be dealing with floats only, a convenient object is the math.nan singleton. It is a float instance, but always non-equal to any other float. This allows you to keep your parameter type constrained to float as opposed to float | None, for example.
Exclusive or
The int class (of which bool is a subclass) has the bitwise exclusive or operator ^ defined for it. By combining that with math.isnan we can therefore concisely check that only exactly one of the two arguments were provided.
Suggested implementation
Full working example:
from math import isnan, nan
from typing import overload
class Circle:
@overload
def __init__(self, *, radius: float) -> None:
...
@overload
def __init__(self, *, diameter: float) -> None:
...
def __init__(self, *, radius: float = nan, diameter: float = nan) -> None:
"""Takes either a `radius` or a `diameter` but not both."""
if not isnan(radius) ^ isnan(diameter):
raise TypeError("Either radius or diameter required")
self.radius = radius if isnan(diameter) else diameter / 2
if __name__ == "__main__":
c1 = Circle(radius=1)
c2 = Circle(diameter=2)
assert c1.radius == c2.radius
# Circle(radius=3.14, diameter=42) # error
# Circle() # same error
Some things to note
If you try this with for example PyCharm, after typing Circle and an opening parenthesis you'll see in a little popover the two possible calls listed in the order they were defined to hint to you that you have these two distinct options for calling the function. It does not show you the actual implementation's signature, where you have both parameters present.
If you add reveal_type(Circle) at the bottom and run mypy over that module, you'll get the following:
note: Revealed type is "Overload(def (*, radius: builtins.float) -> Circle, def (*, diameter: builtins.float) -> Circle)"
I agree with @dskrypa regarding names. See PEP 8 for more.
Also, the reason I defined a TypeError here is that this exception class is used by Python, when a function is called with unexpected arguments or arguments missing.
Finally, the ternary x if expr else y-construct is warranted, when you are dealing with a very simple expression and have two mutually exclusive and very simple assignment options. This is the case here after our check, so we can use it and make the code much shorter, as well as (arguably) cleaner and easier to read.
PS: In case you are wondering, bitwise XOR takes precedence over not, which is why not a ^ b without parentheses is effectively a XNOR b.
|
Python annotation on __init__ for two ways to instantiate a class
|
I have a class Circle
class Circle:
def __init__(self, R: float):
self.R = R
@property
def A(self):
return 3.14*self.R**2
# Annotation: Circle(R: float) -> None
But I want two ways to create an instance of this class using explicit arguments
Given the radius R: Circle(R = 1)
Given the diameter D: Circle(D = 2)
Then I can do it
class Circle:
def __init__(self, **kwargs):
if "R" in kwargs:
self.R = kwargs["R"]
elif "D" in kwargs:
self.R = kwargs["D"]/2
# Annotation: Circle(kwargs: Any) -> None
But the annotation of this __init__ gives no information that:
The possible arguments are R and D
The types of R and D are float.
Question: How can I inform the user that this class accepts these two inputs? And how do I implement it in a clean-code way?
|
[
"I would write this as:\nclass Circle:\n def __init__(self, *, radius: float | None = None, diameter: float | None = None):\n if radius is diameter is None or None not in (radius, diameter):\n raise ValueError('radius xor diameter is required')\n elif radius is not None:\n self.radius = radius\n else:\n self.radius = diameter / 2\n\nNote that it's more pythonic (and a better practice in general for readability) to use more verbose attribute/parameter names (in snake_case) than single-letter ones.\nIn general, it's better to avoid **kwargs when you are expecting specific parameters.\nIn cases where None would be a valid value, you could use an alternate sentinel value.\nAnother alternative would be to provide an alternate constructor:\nfrom __future__ import annotations\n\nclass Circle:\n def __init__(self, radius: float):\n self.radius = radius\n\n @classmethod\n def from_diameter(cls, diameter: float) -> Circle:\n return cls(diameter / 2)\n\nIt would also be possible to use typing.overload with the first suggestion (or with **kwargs), but that can get a bit messier.\n",
"In Python unfortunately it is not possible to have two __init__ for the same class. In Swift, for example, it is possible to create several constructors.\nAlso, it's much more common in Python to leave any parameters you want in the function call.\nFor example the function to create a bar graph from seaborn:\ndef barplot(\n x=None, y=None,\n hue=None, data=None,\n order=None, hue_order=None,\n estimator=np.mean, ci=95, n_boot=1000, units=None, seed=None,\n orient=None, color=None, palette=None, saturation=.75,\n errcolor=\".26\", errwidth=None, capsize=None, dodge=True,\n ax=None,\n **kwargs\n):\n pass\n\nIf one of the attributes does not have a default value, put None.\nIn your case it would look something like this:\nclass Foo:\n def __init__(self, d: float = None, r: float = 0) -> None:\n self.R = r\n if d is not None:\n self.R = d/2\n\nThis way your code is much more readable. If you really need to use **kwargs, there's no way to escape the conditionals to see if each variable exists.\n.\nTo inform the user that there are 2 variables using the docstring is an excellent practice:\nclass Foo:\n r\"\"\"Some description for Class.\"\"\"\n\n def __init__(self, d: float = None, r: float = 0) -> None:\n r\"\"\"Some description.\n\n ### Parameters\n ``d``: float -- description\n ``r``: float -- description\n \"\"\"\n self.r = r\n if d != None:\n self.R = d/2\n\nWhen placing the mouse over it, it shows this documentation (or even when writing):\n\nExtra:\nThe answer to this question might help you too.\n",
"Two ways to call\nDistinct call signatures for the same function are defined using typing.overload.\nExplicit parameters\nTo force a function parameter to be \"keyword-only\", you can insert a bare * in the parameter list before it.\nSensible defaults\nSince you'll be likely dealing with floats only, a convenient object is the math.nan singleton. It is a float instance, but always non-equal to any other float. This allows you to keep your parameter type constrained to float as opposed to float | None for example.\nExclusive or\nThe int class (of which bool is a subclass) has the bitwise exclusive or operator ^ defined for it. By combining that with math.isnan we can therefore concisely check that only exactly one of the two arguments were provided.\nSuggested implementation\nFull working example:\nfrom math import isnan, nan\nfrom typing import overload\n\n\nclass Circle:\n @overload\n def __init__(self, *, radius: float) -> None:\n ...\n\n @overload\n def __init__(self, *, diameter: float) -> None:\n ...\n\n def __init__(self, *, radius: float = nan, diameter: float = nan) -> None:\n \"\"\"Takes either a `radius` or a `diameter` but not both.\"\"\"\n if not isnan(radius) ^ isnan(diameter):\n raise TypeError(\"Either radius or diameter required\")\n self.radius = radius if isnan(diameter) else diameter / 2\n\n\nif __name__ == \"__main__\":\n c1 = Circle(radius=1)\n c2 = Circle(diameter=2)\n assert c1.radius == c2.radius\n # Circle(radius=3.14, diameter=42) # error\n # Circle() # same error\n\nSome things to note\nIf you try this with for example PyCharm, after typing Circle and an opening parenthesis you'll see in a little popover the two possible calls listed in the order they were defined to hint to you that you have these two distinct options for calling the function. It does not show you the actual implementation's signature, where you have both parameters present.\nIf you add reveal_type(Circle) at the bottom and run mypy over that module, you'll get the following:\n\nnote: Revealed type is \"Overload(def (*, radius: builtins.float) -> Circle, def (*, diameter: builtins.float) -> Circle)\"\n\nI agree with @dskrypa regarding names. See PEP 8 for more.\nAlso, the reason I defined a TypeError here is that this exception class is used by Python, when a function is called with unexpected arguments or arguments missing.\nFinally, the ternary x if expr else y-construct is warranted, when you are dealing with a very simple expression and have two mutually exclusive and very simple assignment options. This is the case here after our check, so we can use it and make the code much shorter, as well as (arguably) cleaner and easier to read.\nPS: In case you are wondering, bitwise XOR takes precedence over not, which is why not a ^ b without parantheses is effectively a XNOR b.\n"
] |
[
3,
2,
2
] |
[] |
[] |
[
"constructor",
"parameter_passing",
"python",
"python_typing"
] |
stackoverflow_0074509629_constructor_parameter_passing_python_python_typing.txt
|
Q:
AttributeError: 'tuple' object has no attribute 'rank' when calling fit on a Keras model with custom generator
I want to build a Neural Network with two inputs: one for image data and one for numeric data. So I wrote a custom data generator for that. The train and validation dataframes contain 11 columns:
image_name — path to the image;
9 numeric features;
target — class for the item (last column).
The code for custom generator (based on this answer):
target_size = (224, 224)
batch_size = 1
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_dataframe(
train,
x_col='image_name',
y_col=train.columns[1:],
target_size=target_size,
batch_size=batch_size,
shuffle=True,
class_mode='raw')
validation_generator = val_datagen.flow_from_dataframe(
validation,
x_col='image_name',
y_col=validation.columns[1:],
target_size=target_size,
shuffle=False,
batch_size=batch_size,
class_mode='raw')
def train_generator_func():
count = 0
while True:
if count == len(train.index):
train_generator.reset()
break
count += 1
data = train_generator.next()
imgs = []
cols = []
targets = []
for k in range(batch_size):
imgs.append(data[0][k])
cols.append(data[1][k][:-1])
targets.append(data[1][k][-1])
yield [imgs, cols], targets
def validation_generator_func():
count = 0
while True:
if count == len(validation.index):
validation_generator.reset()
break
count += 1
data = validation_generator.next()
imgs = []
cols = []
targets = []
for k in range(batch_size):
imgs.append(data[0][k])
cols.append(data[1][k][:-1])
targets.append(data[1][k][-1])
yield [imgs, cols], targets
Model building:
def mlp_model(dim):
model = Sequential()
model.add(Dense(8, input_dim=dim, activation="relu"))
model.add(Dense(4, activation="relu"))
return model
def vgg16_model():
model = VGG16(weights='imagenet', include_top=False, input_shape=target_size+(3,))
x=Flatten()(model.output)
output=Dense(1,activation='sigmoid')(x) # because we have to predict the AUC
model=Model(model.input,output)
return model
def concatenated_model(cnn, mlp):
combinedInput = concatenate([cnn.output, mlp.output])
x = Dense(4, activation="relu")(combinedInput)
x = Dense(1, activation="sigmoid")(x)
model = Model(inputs=[cnn.input, mlp.input], outputs=x)
return model
def focal_loss(alpha=0.25,gamma=2.0):
def focal_crossentropy(y_true, y_pred):
bce = K.binary_crossentropy(y_true, y_pred)
y_pred = K.clip(y_pred, K.epsilon(), 1.- K.epsilon())
p_t = (y_true*y_pred) + ((1-y_true)*(1-y_pred))
alpha_factor = 1
modulating_factor = 1
alpha_factor = y_true*alpha + ((1-alpha)*(1-y_true))
modulating_factor = K.pow((1-p_t), gamma)
# compute the final loss and return
return K.mean(alpha_factor*modulating_factor*bce, axis=-1)
return focal_crossentropy
cnn = vgg16_model()
mlp = mlp_model(9)
model = concatenated_model(cnn, mlp)
opt = Adam(lr=1e-5)
model.compile(loss=focal_loss(), metrics=[tf.keras.metrics.AUC()],optimizer=opt)
nb_epochs = 2
nb_train_steps = train.shape[0]//batch_size
nb_val_steps = validation.shape[0]//batch_size
model.fit(
train_generator_func(),
steps_per_epoch=nb_train_steps,
epochs=nb_epochs,
validation_data=validation_generator_func(),
validation_steps=nb_val_steps)
And fitting doesn't work with error message:
AttributeError Traceback (most recent call last)
<ipython-input-53-253849fd34d6> in <module>
9 epochs=nb_epochs,
10 validation_data=validation_generator_func(),
---> 11 validation_steps=nb_val_steps)
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
--> 108 return method(self, *args, **kwargs)
109
110 # Running inside `run_distribute_coordinator` already.
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1061 use_multiprocessing=use_multiprocessing,
1062 model=self,
-> 1063 steps_per_execution=self._steps_per_execution)
1064
1065 # Container that configures and calls `tf.keras.Callback`s.
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model, steps_per_execution)
1108 use_multiprocessing=use_multiprocessing,
1109 distribution_strategy=ds_context.get_strategy(),
-> 1110 model=model)
1111
1112 strategy = ds_context.get_strategy()
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weights, workers, use_multiprocessing, max_queue_size, model, **kwargs)
796 return tensor_shape.TensorShape([None for _ in shape.as_list()])
797
--> 798 output_shapes = nest.map_structure(_get_dynamic_shape, peek)
799 output_types = nest.map_structure(lambda t: t.dtype, peek)
800
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\util\nest.py in map_structure(func, *structure, **kwargs)
633
634 return pack_sequence_as(
--> 635 structure[0], [func(*x) for x in entries],
636 expand_composites=expand_composites)
637
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\util\nest.py in <listcomp>(.0)
633
634 return pack_sequence_as(
--> 635 structure[0], [func(*x) for x in entries],
636 expand_composites=expand_composites)
637
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in _get_dynamic_shape(t)
792 shape = t.shape
793 # Unknown number of dimensions, `as_list` cannot be called.
--> 794 if shape.rank is None:
795 return shape
796 return tensor_shape.TensorShape([None for _ in shape.as_list()])
AttributeError: 'tuple' object has no attribute 'rank'
So I tried to look at Keras sources but without any success.
If I use modified train_generator and validation_generator (y_col='target' instead of y_col=train.columns[1:]) everything works fine.
A:
You need to convert all the individual objects returned by both the training and validation generators to Numpy arrays:
yield [np.array(imgs), np.array(cols)], np.array(targets)
Alternatively, a simpler and much more efficient solution is to not iterate over the data batch at all; instead, we can take advantage of the fact that these objects are already Numpy arrays when returned by ImageDataGenerator, so we can write:
imgs = data[0]
cols = data[1][:,:-1]
targets = data[1][:,-1:]
yield [imgs, cols], targets
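Putting the second variant together, the whole generator becomes (a sketch, reusing the question's train and train_generator objects):
def train_generator_func():
    count = 0
    while True:
        if count == len(train.index):
            train_generator.reset()
            break
        count += 1
        data = train_generator.next()
        imgs = data[0]             # already a NumPy array of images
        cols = data[1][:, :-1]     # numeric feature columns
        targets = data[1][:, -1:]  # last column holds the target
        yield [imgs, cols], targets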
A:
A different solution worked for me, just posting it here.
I ran into the problem working with two very similar dataframes in one notebook, where for one of them the error occurred.
I noticed the dtypes were slightly different int64 vs Int64, where the target column coded as Int64 gave the error.
For me the following worked:
dataframe[target_col] = dataframe[target_col].astype(int)
|
AttributeError: 'tuple' object has no attribute 'rank' when calling fit on a Keras model with custom generator
|
I want to build a Neural Network with two inputs: one for image data and one for numeric data. So I wrote a custom data generator for that. The train and validation dataframes contain 11 columns:
image_name — path to the image;
9 numeric features;
target — class for the item (last column).
The code for custom generator (based on this answer):
target_size = (224, 224)
batch_size = 1
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_dataframe(
train,
x_col='image_name',
y_col=train.columns[1:],
target_size=target_size,
batch_size=batch_size,
shuffle=True,
class_mode='raw')
validation_generator = val_datagen.flow_from_dataframe(
validation,
x_col='image_name',
y_col=validation.columns[1:],
target_size=target_size,
shuffle=False,
batch_size=batch_size,
class_mode='raw')
def train_generator_func():
count = 0
while True:
if count == len(train.index):
train_generator.reset()
break
count += 1
data = train_generator.next()
imgs = []
cols = []
targets = []
for k in range(batch_size):
imgs.append(data[0][k])
cols.append(data[1][k][:-1])
targets.append(data[1][k][-1])
yield [imgs, cols], targets
def validation_generator_func():
count = 0
while True:
if count == len(validation.index):
validation_generator.reset()
break
count += 1
data = validation_generator.next()
imgs = []
cols = []
targets = []
for k in range(batch_size):
imgs.append(data[0][k])
cols.append(data[1][k][:-1])
targets.append(data[1][k][-1])
yield [imgs, cols], targets
Model building:
def mlp_model(dim):
model = Sequential()
model.add(Dense(8, input_dim=dim, activation="relu"))
model.add(Dense(4, activation="relu"))
return model
def vgg16_model():
model = VGG16(weights='imagenet', include_top=False, input_shape=target_size+(3,))
x=Flatten()(model.output)
output=Dense(1,activation='sigmoid')(x) # because we have to predict the AUC
model=Model(model.input,output)
return model
def concatenated_model(cnn, mlp):
combinedInput = concatenate([cnn.output, mlp.output])
x = Dense(4, activation="relu")(combinedInput)
x = Dense(1, activation="sigmoid")(x)
model = Model(inputs=[cnn.input, mlp.input], outputs=x)
return model
def focal_loss(alpha=0.25,gamma=2.0):
def focal_crossentropy(y_true, y_pred):
bce = K.binary_crossentropy(y_true, y_pred)
y_pred = K.clip(y_pred, K.epsilon(), 1.- K.epsilon())
p_t = (y_true*y_pred) + ((1-y_true)*(1-y_pred))
alpha_factor = 1
modulating_factor = 1
alpha_factor = y_true*alpha + ((1-alpha)*(1-y_true))
modulating_factor = K.pow((1-p_t), gamma)
# compute the final loss and return
return K.mean(alpha_factor*modulating_factor*bce, axis=-1)
return focal_crossentropy
cnn = vgg16_model()
mlp = mlp_model(9)
model = concatenated_model(cnn, mlp)
opt = Adam(lr=1e-5)
model.compile(loss=focal_loss(), metrics=[tf.keras.metrics.AUC()],optimizer=opt)
nb_epochs = 2
nb_train_steps = train.shape[0]//batch_size
nb_val_steps = validation.shape[0]//batch_size
model.fit(
train_generator_func(),
steps_per_epoch=nb_train_steps,
epochs=nb_epochs,
validation_data=validation_generator_func(),
validation_steps=nb_val_steps)
And fitting doesn't work with error message:
AttributeError Traceback (most recent call last)
<ipython-input-53-253849fd34d6> in <module>
9 epochs=nb_epochs,
10 validation_data=validation_generator_func(),
---> 11 validation_steps=nb_val_steps)
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
--> 108 return method(self, *args, **kwargs)
109
110 # Running inside `run_distribute_coordinator` already.
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1061 use_multiprocessing=use_multiprocessing,
1062 model=self,
-> 1063 steps_per_execution=self._steps_per_execution)
1064
1065 # Container that configures and calls `tf.keras.Callback`s.
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model, steps_per_execution)
1108 use_multiprocessing=use_multiprocessing,
1109 distribution_strategy=ds_context.get_strategy(),
-> 1110 model=model)
1111
1112 strategy = ds_context.get_strategy()
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weights, workers, use_multiprocessing, max_queue_size, model, **kwargs)
796 return tensor_shape.TensorShape([None for _ in shape.as_list()])
797
--> 798 output_shapes = nest.map_structure(_get_dynamic_shape, peek)
799 output_types = nest.map_structure(lambda t: t.dtype, peek)
800
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\util\nest.py in map_structure(func, *structure, **kwargs)
633
634 return pack_sequence_as(
--> 635 structure[0], [func(*x) for x in entries],
636 expand_composites=expand_composites)
637
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\util\nest.py in <listcomp>(.0)
633
634 return pack_sequence_as(
--> 635 structure[0], [func(*x) for x in entries],
636 expand_composites=expand_composites)
637
d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in _get_dynamic_shape(t)
792 shape = t.shape
793 # Unknown number of dimensions, `as_list` cannot be called.
--> 794 if shape.rank is None:
795 return shape
796 return tensor_shape.TensorShape([None for _ in shape.as_list()])
AttributeError: 'tuple' object has no attribute 'rank'
So I tried to look at Keras sources but without any success.
If I use modified train_generator and validation_generator (y_col='target' instead of y_col=train.columns[1:]) everything works fine.
|
[
"You need to convert all the individual objects returned by both the training and validation generators to Numpy arrays:\n yield [np.array(imgs), np.array(cols)], np.array(targets)\n\nAlternatively, a simpler and much more efficient solution is to not iterate over the data batch at all; instead, we can take advantage of the fact that these objects are already Numpy arrays when returned by ImageDataGenerator, so we can write:\n imgs = data[0]\n cols = data[1][:,:-1]\n targets = data[1][:,-1:]\n yield [imgs, cols], targets\n\n",
"A different solution worked for me, just posting it here.\nI ran into the problem working with two very similar dataframes in one notebook, where for one of them the error occurred.\nI noticed the dtypes were slightly different int64 vs Int64, where the target column coded as Int64 gave the error.\nFor me the following worked:\ndataframe[target_col] = dataframe[target_col].astype(int)\n\n"
] |
[
9,
1
] |
[] |
[] |
[
"deep_learning",
"keras",
"neural_network",
"python",
"tensorflow"
] |
stackoverflow_0062744659_deep_learning_keras_neural_network_python_tensorflow.txt
|
Q:
jupyter server : not started, no kernel in vs code
I am trying to use Jupyter notebooks from VS Code and installed the Jupyter notebook extension, and I am using the (base) conda environment for execution.
While doing so, this happened:
Error: Jupyter cannot be started. Error attempting to locate jupyter:
at A.startServer (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:784356)
at async A.ensureServerAndNotebookImpl (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:783811)
at async A.ensureServerAndNotebook (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:783612)
at async A.submitCode (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:780564)
at async A.reexecuteCell (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:75:879318)
How can I resolve this issue?
A:
I had exactly the same problem when I installed Visual Studio Code and tried to run some Python code from a jupyter notebook on my fresh Ubuntu 18.04.
How I solved it:
1) Press Command+Shift+P to open a new command pallete
2) Type >Python: Select Interpreter to start Jupyter notebook server
3) Open the notebook again
And it worked fine. Hope it works for you.
A:
I have several versions of Python installed. It happened the same thing to me and I have fixed it this way.
Ctrl+Shift+p and select Python: Select Interpreter to start Jupyter server
Then, select the version in Visual Studio Code
Nothing will happen and then press again Ctrl+Shift+p and select
Python: Create new blank Jupyter Notebook. And it works
I have even set the Python version to 3.8 at the bottom and it worked too with new features like print(a := 4), despite the fact that the version I had chosen was 3.7.5. Nevertheless, I have to launch VS Code from Anaconda Navigator.
A:
Press Command+Shift+P on mac, Ctrl+Shift+p on windows
Type Jupyter: Select Interpreter to start Jupyter server
It would show you a dropdown of python versions installed.
I chose python 3.7.5 and it worked for me you can choose the python version installed on your machine.
A:
I have seen all the possible solutions but none worked; finally I just upgraded jupyter, notebook, and jupyterlab (e.g. pip3 install -U jupyterlab), and now I can choose the kernel in VS Code!
A:
I tried the following:
Press Command + SHIFT + P
Type Python: Select Interpreter to start Jupyter server
Hope this answer was helpful.
A:
Making sure that in VS Code settings.json
"python.condaPath": "C:\\Program Files\\miniconda3\\Scripts\\conda.exe"
is pointing to the correct directory. It solved it for me.
A:
I just fixed this by adding
"python.terminal.activateEnvironment": false,
to settings.json.
Hope this helps.
A:
I just had the same issue, and updating the interpreter within VS Code did not help.
What helped was: check your dependencies with pip! It seems that new dependencies came with the latest Python update and were not installed. For me this was pygments:
jupyter-console 6.2.0 requires pygments, which is not installed.
Linux solution step-by-step:
xyz@xyz-pc-ubuntu:~$ pip3 check
qtconsole 4.7.6 requires pygments, which is not installed.
nbconvert 5.6.1 requires pygments, which is not installed.
jupyter-console 6.2.0 requires pygments, which is not installed.
ipython 7.18.1 requires pygments, which is not installed.
xyz@xyz-pc-ubuntu:~$ pip3 install pygments
Successfully installed pygments-2.7.2
Afterwards, jupyter found the updated python interpreter automatically.
A:
I stumbled upon this post since I had a similar issue.
My context was a bit different, since I was working remotely on a Linux server: even when I selected the right interpreter (via Shift+Ctrl+P, "Select Interpreter to start Jupyter server"), the kernel remained inactive.
I checked the installed dependencies inside the venv and tried switching virtual environments to make it work; I kept reloading the server and reloading the window, to no avail.
Eventually, a tiny detail caught my attention: the "Jupyter server: remote" label in the bottom right.
And that was my issue. I selected "default", letting VS Code start a server on the local (remote) host, and then the interpreter/kernel was enabled.
Hope it can help anyone stuck on the same issue.
A:
I faced the same problem and this solved my problem
https://www.reddit.com/r/vscode/comments/eq2bfv/vs_code_jupyter_server_no_kernel_python_not/
hope this helps
A:
In my case, I had the server working in 3.7.6, but I wanted to use >3.8.0 versions too. After multiple attempts, which failed, I decided to:
Uninstall 3.8.5, and delete the folder in the installation directory.
Uninstall VSCode too.
Restart the PC, and re-install Python and VSCode.
As a result, the Jupyter server initiated based on the latest version of Python automatically.
I hope this helps too!
Cheers!
A:
I installed Anaconda and selected the Python kernel that came with it as my interpreter (Ctrl+Shift+P); that solved my issue.
A:
I faced a similar issue quite often in VS Code: sometimes I can't get the kernel from my virtual environment (instead, VS Code only finds other venvs that are not related to my current project).
I tried reloading the window, selecting interpreter to start jupyter, reloading the VS Code itself, but nothing worked.
In case all the above and the other answers fail, try that, it worked for me:
Ctrl+Shift+P
Jupyter: Filter kernels
Select only the kernel you want (in my case, my venv)
Go on "Select Kernel" directly on the notebook UI;
Select your right kernel.
It is weird that the venv kernel appears under "Filter kernels" but does not always appear in the kernel list. Still, doing this might solve the issue.
A:
For me uninstalling the Jupyter extension, closing VS code and then reinstalling it worked. Not a really great solution, but the only one that worked for me. Hope that may help someone.
A:
For me the problem is that VS Code can't find the kernel, even when using the select interpreter option.
The most reliable solution that I can find and currently used is:
Install without cache:
pip install jupyter notebook jupyterlab pyzmq --upgrade --no-cache-dir
Restart VSCode
Another extra safety step is to uninstall first, followed by pip cache purge.
|
jupyter server : not started, no kernel in vs code
|
I am trying to use Jupyter notebooks from VS Code and installed the Jupyter notebook extension, and I am using the (base) conda environment for execution.
While doing so, this happened:
Error: Jupyter cannot be started. Error attempting to locate jupyter:
at A.startServer (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:784356)
at async A.ensureServerAndNotebookImpl (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:783811)
at async A.ensureServerAndNotebook (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:783612)
at async A.submitCode (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:1:780564)
at async A.reexecuteCell (c:\Users\DELL\.vscode\extensions\ms-python.python-2020.2.63990\out\client\extension.js:75:879318)
How can I resolve this issue?
|
[
"I had exactly the same problem when I installed Visual Studio Code and tried to run some Python code from a jupyter notebook on my fresh Ubuntu 18.04.\nHow I solved it:\n1) Press Command+Shift+P to open a new command pallete\n2) Type >Python: Select Intepreter to start jupyter notebook server\n3) Open the notebook again\nAnd it worked fine. Hope it works for you.\n",
"I have several versions of Python installed. It happened the same thing to me and I have fixed it this way.\nCtrl+Shift+p and select Python: Select Interpreter to start Jupyter server\n\nThen, select the version under the Visual Studio Code\n\nNothing will happen and then press again Ctrl+Shift+p and select \nPython: Create new blank Jupyter Notebook. And it works\nI have even set the Python version to 3.8 at the bottom and it worked too with the new features print(a:=4) despite the fact that the version I have chosen was 3.7.5. Nevertheless, I have to lunch VS Code from Anaconda Navigator.\n\n",
"\nPress Command+Shift+P on mac, Ctrl+Shift+p on windows\n\nType Jupyter: Select Interpreter to start Jupyterserver\n\nIt would show you a dropdown of python versions installed.\n\nI chose python 3.7.5 and it worked for me you can choose the python version installed on your machine.\n\n\n",
"I have seen all possible solutions but not work, finally I just upgrade jupyter,notebook,and jupyterlab,like pip3 install -U jupyterlab, and I can choose the kernel in VScode!\n",
"I tried the following:\n\nPress Command + SHIFT + P\nType Python: Select Interpreter to start Jupyterserver\n\nHope this answer was helpful.\n",
"Making sure that in VS Code settings.json\n\"python.condaPath\": \"C:\\\\Program Files\\\\miniconda3\\\\Scripts\\\\conda.exe\"\nis pointing to the correct directory. It solved it for me.\n",
"just fix this by add\n\"python.terminal.activateEnvironment\": false,\nto settings.json\nhopes this help.\n",
"just had the same issue and it did not help to update the interpreter within vscode.\nWhat helped was: Check your dependencies within pip! It seems that new dependencies came up with the latest update of python, which are not installed. For me this was pygments:\njupyter-console 6.2.0 requires pygments, which is not installed.\nLinux solution step-by-step:\nxyz@xyz-pc-ubuntu:~$ pip3 check\nqtconsole 4.7.6 requires pygments, which is not installed.\nnbconvert 5.6.1 requires pygments, which is not installed.\njupyter-console 6.2.0 requires pygments, which is not installed.\nipython 7.18.1 requires pygments, which is not installed.\n\nxyz@xyz-pc-ubuntu:~$ pip3 install pygments\nSuccessfully installed pygments-2.7.2\n\nAfterwards, jupyter found the updated python interpreter automatically.\n",
"i've stumbled upon this post, since i had a similar issue.\nprovided that my context was different, since i was working remotely on a linux server, even if i selected the right interpreter (via shift+ctrl+P \"Select Interpreter to start Jupyter server\") the kernel remained unactive.\ni've checked the installed dependencies inside the venv and tried to switch virtual environment to make it work.. kept on reloading the server, reloading the window.. no way.\neventually, a tiny fancy detail arouse my attention: the \"Jupyter server : remote\" label in the bottom right.\nand tadaa : that was my issue. I've selected \"default\", letting VSCode starting a server on the local (remote) host, and then the interpeter / kernel was enabled.\nhope it can help anyone stuck on the same issue.\n",
"I faced the same problem and this solved my problem\nhttps://www.reddit.com/r/vscode/comments/eq2bfv/vs_code_jupyter_server_no_kernel_python_not/\nhope this helps\n",
"In my case, I had the server working in 3.7.6, but i wanted to use >3.8.0 versions too. After multiple attempts, which failed, I decided to:\n\nUninstall 3.8.5, and delete the folder in the installation directory.\nUninstall VSCode too.\nRestart the PC, and re-install Python and VSCode.\nAs a result, the Jupyter server initiated based on the latest version of Python automatically.\n\nI hope this helps too!\nCheers!\n",
"I installed anaconda and selected python kernal came with that as my interpreter (ctrl + shift + p) that solved my issue.\n",
"I faced a similar issue quite often in VS Code, sometimes I can't get the kernel from my virtual enviroment (instead, VS Code only finds other venvs that are not related to my current project).\nI tried reloading the window, selecting interpreter to start jupyter, reloading the VS Code itself, but nothing worked.\nIn case all the above and the other answers fail, try that, it worked for me:\n\nCtrl+Shift+P\nJupyter: Filter kernels\nSelect only the kernel you want (in my case, my venv)\nGo on \"Select Kernel\" directly on the notebook UI;\nSelect your right kernel.\n\nIt is weird that the venv kernel appears on \"Filter Kernels\", but not always appears on the kernel list. But doing this might solve the issue.\n",
"For me uninstalling the Jupyter extension, closing VS code and then reinstalling it worked. Not a really great solution, but the only one that worked for me. Hope that may help someone.\n",
"For me the problem is that VSCode can't find the kernel, even in using the select interpreter option.\nThe most reliable solution that I can find and currently used is:\n\nInstall without cache:\npip install jupyter notebook jupyterlab pyzmq --upgrade --no-cache-dir\n\n\nRestart VSCode\n\n\n\nAnother extra safety step is to uninstall first and followed with pip cache purge\n"
] |
[
58,
17,
11,
10,
4,
4,
2,
2,
2,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"jupyter_notebook",
"python",
"visual_studio_code"
] |
stackoverflow_0060330837_jupyter_notebook_python_visual_studio_code.txt
|
Q:
TypeError: unsupported operand type(s) for ** or pow(): 'float' and 'CubicSpline'
I am trying to solve a differential equation using scipy, but I cannot get past the TypeError that is being generated. I have looked, but I'm not sure how to solve this issue because the ** operator is what is used to denote an exponent. What can I do to solve this issue?
Here is the data frame you will need to reproduce the error:
# Opening the packages
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import odeint
from math import *
# Here is the data
data = {'day': [1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93],
'soil_temp': [18.15,
17.5,
19.1,
20.3,
19.75,
17.7,
15.2,
15.45,
14.3,
12.45,
12.75,
14.55,
16.55,
18.3,
19,
19,
18.8,
17.45,
17.15,
17.4,
19.9,
19.85,
21.4,
22.05,
21.75,
19.9,
21.9,
23.45,
24.65,
24.4,
25.1,
24.75,
25.2,
25.45,
25.75,
26.35,
26.5,
24.8,
24.55,
25.95,
26.35,
23.9,
22.2,
21.2,
21.9,
23.4,
25.45,
25.75,
25.25,
25.65,
26.4,
25.7,
25,
26.1,
27,
26.75,
26.95,
26.55,
25.9,
26.2,
27.15,
28.25,
27.95,
27.25,
26.5,
27.45,
27.55,
27.8,
28.4,
28.8,
28.05,
25.05,
25.15,
25.45,
25.3,
22.95,
22.6,
25.1,
25.95,
26.3,
26.55,
26.25,
27.15,
27.75,
28.2,
25.45,
25,
25.1,
25.15,
25.15,
26.05,
26.2,
27.45]}
# Create DataFrame
df = pd.DataFrame(data)
Here is the code to generate the error:
# Define parameters
alpha = 52.875
beta = 13.345
gamma = -1.44
delta = 2.29
constant = 60.589
g = 80.64
g2 = 1.04
# Define model
def model(a,t,om):
# Cubic spline
day = df['day'].to_numpy()
temp = df['soil_temp'].to_numpy()
cubT = CubicSpline(day, temp, bc_type='natural',extrapolate=False)
d_cubT = CubicSpline.derivative(cubT)
# Model parameters
bigfrac = (t/(((alpha-(beta*(-delta+(gamma*om))))/constant)*(g -(g2**cubT))))
smallfrac = (t*(alpha-(beta*(-delta+(gamma*om)))/constant))
# Derivative equation
k = (0.5**bigfrac) * log(0.5) * (((bigfrac - smallfrac - (g2**cubT) * log(g2) * d_cubT)) / (bigfrac**2))
dherb_dt = -k*a
return dherb_dt
# Initial condition
a0 = 4.271
# Time, in days, to interpolate over
t = np.linspace(0, 90) # 90 days
om = 0.3
y1 = odeint(model, a0, t, args=(om,))
# Plot
plt.plot(t,y1, 'r-', linewidth = 2, label = 'om = 0.3')
plt.xlabel("xlabel")
plt.ylabel("ylabel")
plt.legend()
plt.show()
The following error message is displayed:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [1], in <cell line: 233>()
230 t = np.linspace(0, 90) # 90 days
232 om = 0.3
--> 233 y1 = odeint(model, a0, t, args=(om,))
235 # Plot
236 plt.plot(t,y1, 'r-', linewidth = 2, label = 'om = 0.3')
File ~\anaconda3\envs\agron893\lib\site-packages\scipy\integrate\_odepack_py.py:241, in odeint(func, y0, t, args, Dfun, col_deriv, full_output, ml, mu, rtol, atol, tcrit, h0, hmax, hmin, ixpr, mxstep, mxhnil, mxordn, mxords, printmessg, tfirst)
239 t = copy(t)
240 y0 = copy(y0)
--> 241 output = _odepack.odeint(func, y0, t, args, Dfun, col_deriv, ml, mu,
242 full_output, rtol, atol, tcrit, h0, hmax, hmin,
243 ixpr, mxstep, mxhnil, mxordn, mxords,
244 int(bool(tfirst)))
245 if output[-1] < 0:
246 warning_msg = _msgs[output[-1]] + " Run with full_output = 1 to get quantitative information."
Input In [1], in model(a, t, om)
215 d_cubT = CubicSpline.derivative(cubT)
217 # Model parameters
--> 218 bigfrac = (t/(((alpha-(beta*(-delta+(gamma*om))))/constant)*(g -(g2**cubT))))
219 smallfrac = (t*(alpha-(beta*(-delta+(gamma*om)))/constant))
221 # Derivative equation
TypeError: unsupported operand type(s) for ** or pow(): 'float' and 'CubicSpline'
How can I fix this issue?
A:
The argument of CubicSpline.derivative(nu) is the derivative order, 1 for the first derivative, 2 for the second derivative, and so on. Both cubT and d_cubT are spline objects, not numbers: you need to call them with an argument (e.g. cubT(t)) to get a numerical value at a given input, which is why expressions like g2**cubT raise the TypeError.
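A minimal sketch of the fix, assuming the parameters and the DataFrame from the question are already defined (variable names follow the question's own code):
from math import log
from scipy.interpolate import CubicSpline

# Build the spline and its first-derivative spline once, outside the model
cubT = CubicSpline(df['day'].to_numpy(), df['soil_temp'].to_numpy(),
                   bc_type='natural', extrapolate=False)
d_cubT = cubT.derivative(1)  # nu=1 -> a new spline representing the first derivative

def model(a, t, om):
    T = cubT(t)      # temperature at time t: a number, not a spline object
    dT = d_cubT(t)   # dT/dt at time t
    bigfrac = t / (((alpha - (beta * (-delta + (gamma * om)))) / constant) * (g - g2 ** T))
    smallfrac = t * (alpha - (beta * (-delta + (gamma * om))) / constant)
    k = (0.5 ** bigfrac) * log(0.5) * ((bigfrac - smallfrac - (g2 ** T) * log(g2) * dT) / (bigfrac ** 2))
    return -k * a

Moving the spline construction out of model also avoids rebuilding it on every odeint call. Note that with extrapolate=False the spline returns nan outside the [1, 93] day range, so integrating from t = 0 needs separate attention.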
|
TypeError: unsupported operand type(s) for ** or pow(): 'float' and 'CubicSpline'
|
I am trying to solve a differential equation using scipy, but I cannot get past the TypeError that is being generated. I have searched around, but I'm not sure how to resolve it, since ** is the exponent operator and my equation needs it. What can I do to solve this issue?
Here is the data frame you will need to reproduce the error:
# Opening the packages
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import odeint
from math import *
# Here is the data
data = {'day': [1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93],
'soil_temp': [18.15,
17.5,
19.1,
20.3,
19.75,
17.7,
15.2,
15.45,
14.3,
12.45,
12.75,
14.55,
16.55,
18.3,
19,
19,
18.8,
17.45,
17.15,
17.4,
19.9,
19.85,
21.4,
22.05,
21.75,
19.9,
21.9,
23.45,
24.65,
24.4,
25.1,
24.75,
25.2,
25.45,
25.75,
26.35,
26.5,
24.8,
24.55,
25.95,
26.35,
23.9,
22.2,
21.2,
21.9,
23.4,
25.45,
25.75,
25.25,
25.65,
26.4,
25.7,
25,
26.1,
27,
26.75,
26.95,
26.55,
25.9,
26.2,
27.15,
28.25,
27.95,
27.25,
26.5,
27.45,
27.55,
27.8,
28.4,
28.8,
28.05,
25.05,
25.15,
25.45,
25.3,
22.95,
22.6,
25.1,
25.95,
26.3,
26.55,
26.25,
27.15,
27.75,
28.2,
25.45,
25,
25.1,
25.15,
25.15,
26.05,
26.2,
27.45]}
# Create DataFrame
df = pd.DataFrame(data)
Here is the code to generate the error:
# Define parameters
alpha = 52.875
beta = 13.345
gamma = -1.44
delta = 2.29
constant = 60.589
g = 80.64
g2 = 1.04
# Define model
def model(a,t,om):
# Cubic spline
day = df['day'].to_numpy()
temp = df['soil_temp'].to_numpy()
cubT = CubicSpline(day, temp, bc_type='natural',extrapolate=False)
d_cubT = CubicSpline.derivative(cubT)
# Model parameters
bigfrac = (t/(((alpha-(beta*(-delta+(gamma*om))))/constant)*(g -(g2**cubT))))
smallfrac = (t*(alpha-(beta*(-delta+(gamma*om)))/constant))
# Derivative equation
k = (0.5**bigfrac) * log(0.5) * (((bigfrac - smallfrac - (g2**cubT) * log(g2) * d_cubT)) / (bigfrac**2))
dherb_dt = -k*a
return dherb_dt
# Initial condition
a0 = 4.271
# Time, in days, to interpolate over
t = np.linspace(0, 90) # 90 days
om = 0.3
y1 = odeint(model, a0, t, args=(om,))
# Plot
plt.plot(t,y1, 'r-', linewidth = 2, label = 'om = 0.3')
plt.xlabel("xlabel")
plt.ylabel("ylabel")
plt.legend()
plt.show()
The following error message is displayed:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [1], in <cell line: 233>()
230 t = np.linspace(0, 90) # 90 days
232 om = 0.3
--> 233 y1 = odeint(model, a0, t, args=(om,))
235 # Plot
236 plt.plot(t,y1, 'r-', linewidth = 2, label = 'om = 0.3')
File ~\anaconda3\envs\agron893\lib\site-packages\scipy\integrate\_odepack_py.py:241, in odeint(func, y0, t, args, Dfun, col_deriv, full_output, ml, mu, rtol, atol, tcrit, h0, hmax, hmin, ixpr, mxstep, mxhnil, mxordn, mxords, printmessg, tfirst)
239 t = copy(t)
240 y0 = copy(y0)
--> 241 output = _odepack.odeint(func, y0, t, args, Dfun, col_deriv, ml, mu,
242 full_output, rtol, atol, tcrit, h0, hmax, hmin,
243 ixpr, mxstep, mxhnil, mxordn, mxords,
244 int(bool(tfirst)))
245 if output[-1] < 0:
246 warning_msg = _msgs[output[-1]] + " Run with full_output = 1 to get quantitative information."
Input In [1], in model(a, t, om)
215 d_cubT = CubicSpline.derivative(cubT)
217 # Model parameters
--> 218 bigfrac = (t/(((alpha-(beta*(-delta+(gamma*om))))/constant)*(g -(g2**cubT))))
219 smallfrac = (t*(alpha-(beta*(-delta+(gamma*om)))/constant))
221 # Derivative equation
TypeError: unsupported operand type(s) for ** or pow(): 'float' and 'CubicSpline'
How can I fix this issue?
|
[
"The argument of CubicSpline.derivative(nu) is the derivative order, 1 for the first derivative, 2 for the second derivative, and so on. And you need to evaluate it on an argument to get a numerical value of the derivative at a given input.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"scipy",
"typeerror"
] |
stackoverflow_0074526991_python_scipy_typeerror.txt
|
Q:
local variable 'x1' referenced before assignment python secant method
I am trying to use the secant method to find the root of the polynomial (-2*x**6)-(1.5*x**4)+(10*x)+(2) with initial values 2, 3, and I get this error
import numpy as np
def f(x):
return (-2*x**6)-(1.5*x**4)+(10*x)+(2)
def secante(f,x,fp,N=100,emax=0.0001):
for k in range(N):
fp=(f(x1)-f(x0))/(x1-x0)
x=x1-f(x1)/fp
e=abs((x-x1)/x)
if e<emax:
break
x0=x1
x1=x
print(k,x,f(x),e)
secante(f,2,3)
---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
Input In [10], in <cell line: 17>()
15 x1=x
16 print(k,x,f(x),e)
---> 17 secante(f,2,3)
Input In [10], in secante(f, x, fp, N, emax)
7 def secante(f,x,fp,N=100,emax=0.0001):
8 for k in range(N):
----> 9 fp=(f(x1)-f(x0))/(x1-x0)
10 x=x1-f(x1)/fp
11 e=abs((x-x1)/x)
UnboundLocalError: local variable 'x1' referenced before assignment
A:
You're referencing x1 before you define it. You are also referencing x0 before it is defined. I'm not familiar with the math here anymore so I'm not confident fixing the code, but ultimately you need to declare these variables before the for loop so they exist before the loop changes them, or else they won't be defined for the first iteration of the loop (your error).
Based on this mistake I'm going to assume that you're new to python and explain from that perspective.
I'm not sure what you're going for with that break statement, but it means that your x0=x1 and x1=x will never run. I suspect you need to unindent these lines so they aren't in the if statement?
As I said, I'm not confident with the math, but I'll take a crack at it anyway. I'm going to make a couple of assumptions about how you want this to work, but I think this correction is at least nearer the mark:
def f(x):
return (-2*x**6)-(1.5*x**4)+(10*x)+(2)
def secante(f,x,fp,N=100,emax=0.0001):
x0 = x
for k in range(N):
x1 = k
fp=(f(x1)-f(x0))/(x1-x0)
x=x1-f(x1)/fp
e=abs((x-x1)/x)
if e<emax:
break
x0=x1
print(k,x,f(x),e)
secante(f,2,3)
I assumed you want to loop from numbers n to N, where x0 is the number you're on and x1 is the next number. Here is my best visualization of my guess for x=2 and N = 100:
loop 1:
|x0|x1|
| 2| 3| 4, 5, 6, 7, 8, 9,10,11,...100
loop 2:
|x0|x1|
2,|3 | 4| 5, 6, 7, 8, 9,10,11,...100
loop 3:
|x0|x1|
2, 3| 4| 5| 6, 7, 8, 9,10,11,...100
etc.
I changed range(N) to range(x+1, N) because my approach will actually have x1 be the reference number.
Before initializing the for-loop I added a statement to set x0 to x. Now x0 is defined (which fixes the next error you would have encountered) but isn't very helpful if its the only change.
Next I changed range(N) to range(x+1, N). This will loop from 3 to 99 in your case (change N to N+1 if you want the final x1 to be 100)
Now you could replace x1 with k, but for readability sake I just set x1 = k. Now both variables are defined before they are used (you could also change the for loop to "for x1 in range(x+1, N):" for the same effect)
I unindented "x0=x1" so that it would run on every loop except the last one, (rather than ONLY running on the last loop)
Finally I removed "x1=x" which would just set x1 to 2 over and over, which you would have seen that with your print statement if you got past your errors.
This may not be the exact behavior you want but this will at least run, which gives you a shot at correcting it further from your print statement.
A:
Problem solved.
The error was in the assignment of the variables x0, x1 in the function definition :)
import numpy as np
def f(x):
return (-2*x**6)-(1.5*x**4)+(10*x)+(2)
def secante(f,x0,x1,n=100,emax=1e-10):
for k in range(n):
fp=(f(x1)-f(x0))/(x1-x0)
x=x1-f(x1)/fp
e=abs((x-x1)/x)
if e<emax:
break
x0=x1
x1=x
print(k,x,f(x),e)
secante(f,2,3)
|
local variable 'x1' referenced before assignment python secant method
|
I am trying to use the secant method to find the root of the polynomial (-2*x**6)-(1.5*x**4)+(10*x)+(2) with initial values 2, 3, and I get this error
import numpy as np
def f(x):
return (-2*x**6)-(1.5*x**4)+(10*x)+(2)
def secante(f,x,fp,N=100,emax=0.0001):
for k in range(N):
fp=(f(x1)-f(x0))/(x1-x0)
x=x1-f(x1)/fp
e=abs((x-x1)/x)
if e<emax:
break
x0=x1
x1=x
print(k,x,f(x),e)
secante(f,2,3)
---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
Input In [10], in <cell line: 17>()
15 x1=x
16 print(k,x,f(x),e)
---> 17 secante(f,2,3)
Input In [10], in secante(f, x, fp, N, emax)
7 def secante(f,x,fp,N=100,emax=0.0001):
8 for k in range(N):
----> 9 fp=(f(x1)-f(x0))/(x1-x0)
10 x=x1-f(x1)/fp
11 e=abs((x-x1)/x)
UnboundLocalError: local variable 'x1' referenced before assignment
|
[
"You're referenceing x1 before you define it. You are also referencing x0 before it is defined. I'm not familiar with the math here anymore so I'm not confident with fixing the code but ultimately you need to declare these variables before the for loop so they exist before they are changed by the for loop, or else they wont be defined for the first iteration of the loop (your error)\nBased on this mistake I'm going to assume that you're new to python and explain from that perspective.\nI'm not sure what you're going for with that break statement, but it means that your x0=x1 and x1=x will never run. I suspect you need to unindent these lines so they aren't in the if statement?\nI'm not familiar with the math here anymore so I'm not confident with fixing the code, but i'll take a crack at it anyway. I'm going to make a couple assumptions about how you want this to work, but I think this correction is at least nearer the mark:\ndef f(x):\n return (-2*x**6)-(1.5*x**4)+(10*x)+(2)\n\ndef secante(f,x,fp,N=100,emax=0.0001):\n x0 = x\n for k in range(N):\n x1 = k\n fp=(f(x1)-f(x0))/(x1-x0)\n x=x1-f(x1)/fp\n e=abs((x-x1)/x)\n if e<emax:\n break\n x0=x1\n print(k,x,f(x),e)\nsecante(f,2,3)\n\nI assumed you want to loop from numbers n to N, where x0 is the number you're on and x1 is the next number. Here is my best visualization of my guess for x=2 and N = 100:\nloop 1:\n\n|x0|x1|\n| 2| 3| 4, 5, 6, 7, 8, 9,10,11,...100\n\nloop 2:\n\n |x0|x1|\n 2,|3 | 4| 5, 6, 7, 8, 9,10,11,...100\n\nloop 3:\n\n |x0|x1|\n 2, 3| 4| 5| 6, 7, 8, 9,10,11,...100\n\n\netc.\n\nI changed range(N) to range(x+1, N) because my approach will actually have x1 be the reference number.\nBefore initializing the for-loop I added a statement to set x0 to x. Now x0 is defined (which fixes the next error you would have encountered) but isn't very helpful if its the only change.\nNext I changed range(N) to range(x+1, N). This will loop from 3 to 99 in your case (change N to N+1 if you want the final x1 to be 100)\nNow you could replace x1 with k, but for readability sake I just set x1 = k. Now both variables are defined before they are used (you could also change the for loop to \"for x1 in range(x+1, N):\" for the same effect)\nI unindented \"x0=x1\" so that it would run on every loop except the last one, (rather than ONLY running on the last loop)\nFinally I removed \"x1=x\" which would just set x1 to 2 over and over, which you would have seen that with your print statement if you got past your errors.\nThis may not be the exact behavior you want but this will at least run, which gives you a shot at correcting it further from your print statement.\n",
"Problem solved\nerror in def asignation of variable x0,x1 :)\nimport numpy as np\n\ndef f(x):\n return (-2*x**6)-(1.5*x**4)+(10*x)+(2)\n\ndef secante(f,x0,x1,n=100,emax=1e-10):\n for k in range(n):\n fp=(f(x1)-f(x0))/(x1-x0)\n \n x=x1-f(x1)/fp\n e=abs((x-x1)/x)\n if e<emax:\n break\n x0=x1\n x1=x\n print(k,x,f(x),e)\nsecante(f,2,3)\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074536170_python.txt
|
Q:
Incorrect path with function getcwd()
I get the wrong path back. The data file is in D:... but Python always returns the path C:\Python27\lib\site-packages\xy. I use the function
path = getcwd()
How can I fix it?
A:
You may be executing the script in a different place than your intended directory.
Solution 1: Move the .py file to the target directory, and execute it there.
Pros:
Easy
Works cross-platform (and for other users - if you do this, use getcwdu for Unicode)
No hard-coded path strings
Cons:
File must be in the same or higher directory as target folder
Solution 2: Manually write the string of the path to the folder.
Pros:
'Just Works'
Cons:
Annoying bugs w/typos
Need to re-code every time you change directories
Won't work anywhere else
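A third option, if the script's location relative to the data is known, is to build the path from the script file itself instead of the working directory. A minimal sketch (the file name is a placeholder):
import os

# Directory containing this script, regardless of where it was launched from
script_dir = os.path.dirname(os.path.abspath(__file__))
data_path = os.path.join(script_dir, "datafile.txt")  # placeholder name

This works on Python 2.7 as well and avoids both the hard-coded string and the dependency on the launch directory.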
A:
Make sure your IDE has opened your target directory as the project folder.
After all, this is a debug-time problem and won't affect your program at runtime, as long as the script that launches your program points at the right location and does its part normally!
By the way @order, I'm totally against hardcoding a path into your code; it's just very poor programming practice!
Get used to doing things the right way, even if it doesn't seem very beneficial in the short run. In the coming years of your career this will stop being a pros-and-cons comparison and become a matter of right or wrong coding practice!
Cheers.
|
Incorrect path with function getcwd()
|
I get the wrong path back. The data file is in D:... but Python always returns the path C:\Python27\lib\site-packages\xy. I use the function
path = getcwd()
How can I fix it?
|
[
"You may be executing the script in a different place than your intended directory.\nSolution 1: Move the .py file to the target directory, and execute it there.\n\nPros: \n\n\nEasy\nWorks cross-platform (and for other users - if you do this, use getcwdu for Unicode)\nNo hard-coded path strings\n\nCons:\n\n\nFile must be in the same or higher directory as target folder\n\n\nSolution 2: Manually write the string of the path to the folder.\n\nPros:\n\n\n'Just Works'\n\nCons:\n\n\nAnnoying bugs w/typos\nNeed to re-code every time you change directories\nWon't work anywhere else\n\n\n",
"In my idea, make sure your IDE has your target directory opened as your project folder.\nAfter all, it's a debug-time error, and won't affect the smooth running of your program in the runtime, if your program's launch script has the right address for your program to run, and does its part normally!\nBy the way @order, I'm totally against hardcoding a path into your code, because it's just a very lame programming exercise!\nJust get yourselves used to going through the right exercise, although it doesn't seem very beneficial in the short run. in the coming years of your career, you're going to be against using the cons and pros of such a comparison, but it will become a right or wrong coding exercise!\nCheers.\n"
] |
[
1,
0
] |
[] |
[] |
[
"getcwd",
"path",
"python"
] |
stackoverflow_0034042056_getcwd_path_python.txt
|
Q:
How to print the __str__ representation of the entire traceback
When printing an exception using, for instance, print(ex), only the last exception in the chain is printed. How can I instead print all the exceptions in the chain without crowding the output with excessive traceback information?
For example:
def test_with_context(context: str, test: int):
    try:
        assert isinstance(test, int)
        assert test > 4, "Test must be greater than 4"
        assert test < 6, "Test must be smaller than 6"
    except AssertionError as ex:
        raise ValueError(f"Invalid test for context {context}") from ex
try:
    test_with_context("ExampleContext", 8)
except ValueError as ex:
    print("Value Test Failed")
    print(ex)
Provides me with an output of
Value Test Failed
ValueError: Invalid test for context ExampleContext
Which is useful in providing me with the overall context, but doesn't tell me exactly what error caused that ValueError.
What I would like to achieve is:
Value Test Failed
ValueError: Invalid test for context ExampleContext
AssertionError: Test must be smaller than 6
I can use:
traceback.print_exc()
But that provides me with the entire formatted traceback, line numbers and all, which is too much information to provide a user with a simple input error for instance.
---
Similarly, I have tried using
except AssertionError as ex:
    ex.add_note(f"Invalid test for context {context}")
But it would appear the notes don't show up in anything but the full traceback.
Is there any way to get a nice list of the exception history to print in order?
A:
I have produced a solution I'm not super keen on, but it does the job:
from typing import List

def cause_stack(exception: BaseException) -> List[BaseException]:
if exception.__cause__ is None:
return [exception]
else:
return [exception] + cause_stack(exception.__cause__)
def format_causes(exception: BaseException) -> str:
return "\n - caused by -\n".join([str(cause) for cause in cause_stack(exception)])
Because the cause of each exception is stored under the .__cause__ dunder property, recursively searching that can get you a list of each cause in order, which I then format by joining them together with a "caused by" string.
Not super happy with it as a solution - doesn't feel very pythonic, uses recursion which may be problematic with larger stacks and it's not as elegant as one might hope, but it serves my needs for now.
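For completeness, a hypothetical usage sketch with the question's example (the output in the comments reflects that str(cause) is just the message, without the class name):
try:
    test_with_context("ExampleContext", 8)
except ValueError as ex:
    print("Value Test Failed")
    print(format_causes(ex))

# Value Test Failed
# Invalid test for context ExampleContext
#  - caused by -
# Test must be smaller than 6

If the class names are wanted, str(cause) can be swapped for f"{type(cause).__name__}: {cause}". Note also that __cause__ only follows explicit raise ... from chains; implicitly chained exceptions are stored in __context__ instead.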
|
How to print the __str__ representation of the entire traceback
|
When printing an exception using, for instance, print(ex), only the last exception in the chain is printed. How can I instead print all the exceptions in the chain without crowding the output with excessive traceback information?
For example:
def test_with_context(context: str, test: int):
    try:
        assert isinstance(test, int)
        assert test > 4, "Test must be greater than 4"
        assert test < 6, "Test must be smaller than 6"
    except AssertionError as ex:
        raise ValueError(f"Invalid test for context {context}") from ex
try:
    test_with_context("ExampleContext", 8)
except ValueError as ex:
    print("Value Test Failed")
    print(ex)
Provides me with an output of
Value Test Failed
ValueError: Invalid test for context ExampleContext
Which is useful in providing me with the overall context, but doesn't tell me exactly what error caused that ValueError.
What I would like to achieve is:
Value Test Failed
ValueError: Invalid test for context ExampleContext
AssertionError: Test must be smaller than 6
I can use:
traceback.print_exc()
But that provides me with the entire formatted traceback, line numbers and all, which is too much information to provide a user with a simple input error for instance.
---
Similarly, I have tried using
except AssertionError as ex:
    ex.add_note(f"Invalid test for context {context}")
But it would appear the notes don't show up in anything but the full traceback.
Is there any way to get a nice list of the exception history to print in order?
|
[
"I have produced a solution I'm not super keen on, but it does the job:\ndef cause_stack(exception: BaseException) -> List[BaseException]:\n if exception.__cause__ is None:\n return [exception]\n else:\n return [exception] + cause_stack(exception.__cause__)\n\n\ndef format_causes(exception: BaseException) -> str:\n return \"\\n - caused by -\\n\".join([str(cause) for cause in cause_stack(exception)])\n\nBecause the cause of each exception is stored under the .__cause__ dunder property, recursively searching that can get you a list of each cause in order, which I then format by joining them together with a \"caused by\" string.\nNot super happy with it as a solution - doesn't feel very pythonic, uses recursion which may be problematic with larger stacks and it's not as elegant as one might hope, but it serves my needs for now.\n"
] |
[
0
] |
[] |
[] |
[
"error_handling",
"exception",
"python",
"python_3.x",
"raise"
] |
stackoverflow_0074534502_error_handling_exception_python_python_3.x_raise.txt
|
Q:
cryptography.exceptions.InvalidSignature: Signature did not match digest
I wrote an example of KDC Server, using the package cryptography.fernet.
I cannot understand why, randomly, sometimes it runs correctly and sometimes it ends with an exception:
cryptography.exceptions.InvalidSignature: Signature did not match digest.
The keys are created once, at the startup of the main. So the issue seems not to be related to creation of different random keys.
Could anyone help me find what is wrong?
from cryptography.fernet import Fernet
import uuid
import pickle
def generate_challenge():
return uuid.uuid4().bytes
def serialize(o):
return pickle.dumps(o)
def deserialize(o):
return pickle.loads(o)
class InitiateRequest:
def __init__(self, initiator, responder, challenge):
self.initiator = initiator # IDa
self.responder = responder # IDb
self.challenge = challenge # N1
class InitiateResponse:
def __init__(self, session_key, initiator, responder, challenge):
self.session_key = session_key # Ks
self.initiator = initiator # IDa
self.responder = responder # IDb
self.challenge = challenge # N1
class InvitationForward:
def __init__(self, session_key, initiator):
self.session_key = session_key # Ks
self.initiator = initiator # IDa
class KDCServer:
def __init__(self, generator: Fernet):
self.generator = generator
self.map_keys = {} # user+key pairs
def subscribe(self, id, key):
self.map_keys[id] = key
print("KDCServer: I'm registering key " + str(key) + " for " + id)
def issue_session_key(self, r: InitiateRequest):
session_key = self.generator.generate_key()
response = InitiateResponse(session_key, r.initiator, r.responder, r.challenge)
invitation = InvitationForward(session_key, r.initiator)
print(self.map_keys)
print("KDCServer: I'm using " + str(self.map_keys[r.initiator]) + " as keyA_KDC")
keyA_KDC = Fernet(self.map_keys[r.initiator])
print("KDCServer: I'm using " + str(self.map_keys[r.responder]) + " as keyB_KDC")
keyB_KDC = Fernet(self.map_keys[r.responder])
print("KDCServer: I've just issued a session key for " + r.initiator + " and " + r.responder)
return {
keyA_KDC.encrypt(serialize(response)), # E(Ka,[Ks|IDa|IDb|N1])
keyB_KDC.encrypt(serialize(invitation)) # E(Kb,[Ks|IDa])
}
class User:
def __init__(self, id:str, key:bytes):
self.id = id
self.key = key
self.session_keys = {}
def initiate(self, responder):
challenge = generate_challenge()
print(self.id + ": Let's retrieve a session key to communicate with " + responder)
# store request for matching
self.request = InitiateRequest(self.id, responder, challenge)
return self.request
def match_request(self, check:InitiateResponse)->bool:
return (self.request.challenge == check.challenge) and (self.request.initiator == check.initiator) and (self.request.responder == check.responder)
def accept_response(self, response):
print(self.id + ": I'm decrypting using my key " + str(self.key))
check = deserialize(Fernet(self.key).decrypt(response))
if self.match_request(check):
self.session_keys[check.responder] = check.session_key # save session key Ks
print(self.id + ": I've got the session key to communicate with " + check.responder)
def accept_invitation(self, invitation):
check = deserialize(Fernet(self.key).decrypt(invitation))
print(self.id + ": I've accepted the invitation from " + check.initiator)
self.session_keys[check.initiator] = check.session_key # save session key Ks
def send_message(self, message, receiver):
print(self.id + ": I'm sending this message " + str(message) + " using the session key " + str(self.session_keys[receiver]))
return Fernet(self.session_keys[receiver]).encrypt(message)
def receive_message(self, cyphered, sender):
print(self.id + ": I'm decrypting a message using the session key " + str(self.session_keys[sender]))
message = Fernet(self.session_keys[sender]).decrypt(cyphered)
print(self.id + ": I've received the message " + str(message) + " from " + sender)
def main():
alice_key = Fernet.generate_key()
bob_key = Fernet.generate_key()
alice = User('Alice', alice_key)
bob = User('Bob', bob_key)
server = KDCServer(Fernet)
server.subscribe(alice.id, alice_key)
server.subscribe(bob.id, bob_key)
# Alice sends request to KDC to get a session key
request = alice.initiate(bob.id)
response, invitation = server.issue_session_key(request)
# Alice accepts response from KDC and forwards invitation to Bob
alice.accept_response(response)
# Bob accepts invitation from Alice
bob.accept_invitation(invitation)
cyphered = bob.send_message(b"My secret message", alice.id)
alice.receive_message(cyphered, bob.id)
if __name__ == "__main__":
main()
A:
The problem is caused by the Set returned in issue_session_key(). A Python Set is unordered, so response and invitation will be swapped in main() 50% of the time, causing the error. Use e.g. a Tuple instead of the Set. The Python Tuple is ordered:
...
return (
keyA_KDC.encrypt(serialize(response)), # E(Ka,[Ks|IDa|IDb|N1])
keyB_KDC.encrypt(serialize(invitation)) # E(Kb,[Ks|IDa])
)
...
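If you want the intent to be explicit at the call site as well, a dict keyed by name is another option (a sketch; the key names are my own, not from the original code):
...
return {
    "response": keyA_KDC.encrypt(serialize(response)),     # E(Ka,[Ks|IDa|IDb|N1])
    "invitation": keyB_KDC.encrypt(serialize(invitation))  # E(Kb,[Ks|IDa])
}
...

The call site in main() would then become msgs = server.issue_session_key(request), followed by alice.accept_response(msgs["response"]) and bob.accept_invitation(msgs["invitation"]).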
|
cryptography.exceptions.InvalidSignature: Signature did not match digest
|
I wrote an example of KDC Server, using the package cryptography.fernet.
I cannot understand why, randomly, sometimes it runs correctly and sometimes it ends with an exception:
cryptography.exceptions.InvalidSignature: Signature did not match digest.
The keys are created once, at the startup of the main. So the issue seems not to be related to creation of different random keys.
Could anyone help me find what is wrong?
from cryptography.fernet import Fernet
import uuid
import pickle
def generate_challenge():
return uuid.uuid4().bytes
def serialize(o):
return pickle.dumps(o)
def deserialize(o):
return pickle.loads(o)
class InitiateRequest:
def __init__(self, initiator, responder, challenge):
self.initiator = initiator # IDa
self.responder = responder # IDb
self.challenge = challenge # N1
class InitiateResponse:
def __init__(self, session_key, initiator, responder, challenge):
self.session_key = session_key # Ks
self.initiator = initiator # IDa
self.responder = responder # IDb
self.challenge = challenge # N1
class InvitationForward:
def __init__(self, session_key, initiator):
self.session_key = session_key # Ks
self.initiator = initiator # IDa
class KDCServer:
def __init__(self, generator: Fernet):
self.generator = generator
self.map_keys = {} # user+key pairs
def subscribe(self, id, key):
self.map_keys[id] = key
print("KDCServer: I'm registering key " + str(key) + " for " + id)
def issue_session_key(self, r: InitiateRequest):
session_key = self.generator.generate_key()
response = InitiateResponse(session_key, r.initiator, r.responder, r.challenge)
invitation = InvitationForward(session_key, r.initiator)
print(self.map_keys)
print("KDCServer: I'm using " + str(self.map_keys[r.initiator]) + " as keyA_KDC")
keyA_KDC = Fernet(self.map_keys[r.initiator])
print("KDCServer: I'm using " + str(self.map_keys[r.responder]) + " as keyB_KDC")
keyB_KDC = Fernet(self.map_keys[r.responder])
print("KDCServer: I've just issued a session key for " + r.initiator + " and " + r.responder)
return {
keyA_KDC.encrypt(serialize(response)), # E(Ka,[Ks|IDa|IDb|N1])
keyB_KDC.encrypt(serialize(invitation)) # E(Kb,[Ks|IDa])
}
class User:
def __init__(self, id:str, key:bytes):
self.id = id
self.key = key
self.session_keys = {}
def initiate(self, responder):
challenge = generate_challenge()
print(self.id + ": Let's retrieve a session key to communicate with " + responder)
# store request for matching
self.request = InitiateRequest(self.id, responder, challenge)
return self.request
def match_request(self, check:InitiateResponse)->bool:
return (self.request.challenge == check.challenge) and (self.request.initiator == check.initiator) and (self.request.responder == check.responder)
def accept_response(self, response):
print(self.id + ": I'm decrypting using my key " + str(self.key))
check = deserialize(Fernet(self.key).decrypt(response))
if self.match_request(check):
self.session_keys[check.responder] = check.session_key # save session key Ks
print(self.id + ": I've got the session key to communicate with " + check.responder)
def accept_invitation(self, invitation):
check = deserialize(Fernet(self.key).decrypt(invitation))
print(self.id + ": I've accepted the invitation from " + check.initiator)
self.session_keys[check.initiator] = check.session_key # save session key Ks
def send_message(self, message, receiver):
print(self.id + ": I'm sending this message " + str(message) + " using the session key " + str(self.session_keys[receiver]))
return Fernet(self.session_keys[receiver]).encrypt(message)
def receive_message(self, cyphered, sender):
print(self.id + ": I'm decrypting a message using the session key " + str(self.session_keys[sender]))
message = Fernet(self.session_keys[sender]).decrypt(cyphered)
print(self.id + ": I've received the message " + str(message) + " from " + sender)
def main():
alice_key = Fernet.generate_key()
bob_key = Fernet.generate_key()
alice = User('Alice', alice_key)
bob = User('Bob', bob_key)
server = KDCServer(Fernet)
server.subscribe(alice.id, alice_key)
server.subscribe(bob.id, bob_key)
# Alice sends request to KDC to get a session key
request = alice.initiate(bob.id)
response, invitation = server.issue_session_key(request)
# Alice accepts response from KDC and forwards invitation to Bob
alice.accept_response(response)
# Bob accepts invitation from Alice
bob.accept_invitation(invitation)
cyphered = bob.send_message(b"My secret message", alice.id)
alice.receive_message(cyphered, bob.id)
if __name__ == "__main__":
main()
|
[
"The problem is caused by the Set returned in issue_session_key(). A Python Set is unordered, so response and invitation will be swapped in main() 50% of the time, causing the error. Use e.g. a Tuple instead of the Set. The Python Tuple is ordered:\n...\nreturn (\n keyA_KDC.encrypt(serialize(response)), # E(Ka,[Ks|IDa|IDb|N1])\n keyB_KDC.encrypt(serialize(invitation)) # E(Kb,[Ks|IDa])\n)\n...\n\n"
] |
[
0
] |
[] |
[] |
[
"cryptography",
"exception",
"fernet",
"python"
] |
stackoverflow_0074535542_cryptography_exception_fernet_python.txt
|
Q:
I'm trying to make a calculator but it doesn't work
I'm new to Python and am trying to make a calculator. The actual calculator part works, but I can't figure out how to make it print a sentence and then close when the user enters something that is not +, -, *, or /.
This is my code and the output
(https://i.stack.imgur.com/EYzxg.png)
A:
if choice in ["+", "/", "*", "-"]:
    ...  # do the calculation
if choice not in ["+", "/", "*", "-"]:
    ...  # print a message and exit
Though this is not the best way to do what you want
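A minimal sketch of the more idiomatic shape, with a single membership test and an else branch (the calculation itself is left as a placeholder):
choice = input("Enter +, -, * or /: ")
if choice in ("+", "-", "*", "/"):
    ...  # run the calculation
else:
    print("That's not an option.....")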
A:
The code you are trying to achieve:
choice = input("Enter '+' for addition, '-' for subtraction, '*' for multiplication, and '/' for division: ")
if (choice == "+" or choice == "-" or choice == "*" or choice == "/"):
num1 = float(input("Enter the first number:"))
num2 = float(input("Enter the second number:"))
if choice == "+":
print(num1, "+", num2, "=", (num1 + num2))
exit()
if choice == "-":
print(num1, "-", num2, "=", (num1 - num2))
exit()
if choice == "*":
print(num1, "*", num2, "=", (num1 * num2))
exit()
if choice == "/":
print(num1, "/", num2, "=", (num1 / num2))
exit()
if (choice != "+" and choice != "-" and choice != "*" and choice != "/"):
print("That's not an option.....")
exit()
You cannot compare values in one shot like a == b, c, d; you have to compare against each value explicitly, like
a == b or a == c or a == d, where or is the operator
for logical conditions.
An else branch can also be used here instead of the second condition, for cleaner code.
|
I'm trying to make a calculator but it doesn't work
|
I'm new to Python and am trying to make a calculator. The actual calculator part works, but I can't figure out how to make it print a sentence and then close when the user enters something that is not +, -, *, or /.
This is my code and the output
(https://i.stack.imgur.com/EYzxg.png)
|
[
"if choice in [\"+\", \"/\", \"*\", \"-\"]:\n [do stuff]\nif choice not in [\"+\", \"/\", \"*\", \"-\"]:\n [do stuff]\n\nThough this is not the best way to do what you want\n",
"The code you are trying to achieve:\nchoice = input(\"Enter '+' for addition, LA for subtraction, * for multiplication, and '/' for division: \")\nif (choice == \"+\" or choice == \"-\" or choice == \"*\" or choice == \"/\"):\n num1 = float(input(\"Enter the first number:\"))\n num2 = float(input(\"Enter the second number:\"))\n if choice == \"+\":\n print(num1, \"+\", num2, \"=\", (num1 + num2))\n exit()\n if choice == \"-\":\n print(num1, \"-\", num2, \"=\", (num1 - num2))\n exit()\n if choice == \"*\":\n print(num1, \"*\", num2, \"=\", (num1 * num2))\n exit()\n if choice == \"/\":\n print(num1, \"/\", num2, \"=\", (num1 / num2))\n exit()\nif (choice != \"+\" or choice != \"-\" or choice !=\"*\" or choice !=\"/\"):\n print(\"That's not an option.....\")\n exit()\n\nYou cannot directly compare the values like a = b c d rather we should mention compare for each value like\na == b or a == c or a == d, where 'or' is an operator\nfor logical conditions.\n'else' can also be used instead of second condition here for better code optimization.\n"
] |
[
0,
0
] |
[] |
[] |
[
"calculator",
"python"
] |
stackoverflow_0074533945_calculator_python.txt
|
Q:
django-plotly-dash multi session on CPU intensive pages
Running django-plotly-dash, I have multiple Python pages. The issue is that while one of the pages is running its calculations, I cannot load the same page or other pages from a different session, and the web server does not respond for other users. If I look at the runserver output, it is busy rendering the first request only.
A:
If I look at the runserver output, it is busy rendering the first request only.
If I understand correctly, this means you use Django's development server, and thus that you are in development (if you use django-admin runserver in production, that's a serious issue).
Now’s a good time to note: don’t use this server in anything resembling a production environment. It’s intended only for use while developing. (We’re in the business of making web frameworks, not web servers.)
Django's development server is supposed to be multithreaded and support concurrent requests. However, from my experience I also noticed that it can handle only one request at a time. I didn't dig too much into it but I assume it might be caused by an app that overrides the runserver command and disables multithreading.
In development this shouldn't be too much of an issue. And in production you won't suffer this kind of blocks as real WSGI servers such as gunicorn will be able to handle several concurrent requests (provided it is configured to use the available resources correctly, and the hardware is able to handle the load).
However if your pages are actually slow to respond, this can be an issue for the user loading the page, and will also require more resources to handle more concurrent requests. It all depends if "slow" means 2 seconds, 5 seconds, 30 seconds or even more. Reducing the response time will depend a lot on the bottleneck of your code and could include:
Optimizing the algorithms
Reducing and optimizing SQL queries (See Database access optimization)
Delaying to Celery the calculations that do not affect the response
Using websockets to flow and display the data as they get calculated without blocking the client until the whole page is calculated. (See django-channels)
Using asyncio to avoid staying idle while waiting for I/O operations.
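For the production side of this, a typical gunicorn invocation that serves concurrent requests looks like the sketch below (the module path and worker/thread counts are placeholders to tune for your project and hardware):
gunicorn myproject.wsgi:application --workers 4 --threads 2 --bind 0.0.0.0:8000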
|
django-plotly-dash multi session on CPU intensive pages
|
Running django-plotly-dash, I have multiple Python pages. The issue is that while one of the pages is running its calculations, I cannot load the same page or other pages from a different session, and the web server does not respond for other users. If I look at the runserver output, it is busy rendering the first request only.
|
[
"\nIf I look at the runserver output, it is busy rendering the first request only.\n\nIf I understand correctly, this means you use Django's development server, and thus that you are in development (if you use django-admin runserver in production, that's a serious issue).\n\nNow’s a good time to note: don’t use this server in anything resembling a production environment. It’s intended only for use while developing. (We’re in the business of making web frameworks, not web servers.)\n\nDjango's development server is supposed to be multithreaded and support concurrent requests. However, from my experience I also noticed that it can handle only one request at a time. I didn't dig too much into it but I assume it might be caused by an app that overrides the runserver command and disables multithreading.\nIn development this shouldn't be too much of an issue. And in production you won't suffer this kind of blocks as real WSGI servers such as gunicorn will be able to handle several concurrent requests (provided it is configured to use the available resources correctly, and the hardware is able to handle the load).\nHowever if your pages are actually slow to respond, this can be an issue for the user loading the page, and will also require more resources to handle more concurrent requests. It all depends if \"slow\" means 2 seconds, 5 seconds, 30 seconds or even more. Reducing the response time will depend a lot on the bottleneck of your code and could include:\n\nOptimizing the algorithms\nReducing and optimizing SQL queries (See Database access optimization)\nDelaying to Celery the calculations that do not affect the response\nUsing websockets to flow and display the data as they get calculated without blocking the client until the whole page is calculated. (See django-channels)\nUsing asyncio to avoid staying idle while waiting for I/O operations.\n\n"
] |
[
1
] |
[] |
[] |
[
"django",
"load_balancing",
"plotly",
"plotly_dash",
"python"
] |
stackoverflow_0074274383_django_load_balancing_plotly_plotly_dash_python.txt
|
Q:
Uncertainty propagation and confidence intervals calculation using python scipy.curvefit
I am trying to fit a function using scipy.optimize.curve_fit.
I am implementing a fitting of data with the sum of 3 gaussians.
Here is the data that must be fit.
I need to estimate the parameters of each Gaussian and the errors of calculation of these parameters.
So I need to calculate the uncertainty of the fitting procedure and propagate this data to show a confidence interval for the modelled data.
Is it possible to do it using the error of parameters calculated using np.sqrt(np.diag(pcov_3gauss))? I saw it here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html
What approach should I use to propagate the parameters uncertainties and calc the uncertainty bands for the fitting?
Here is the code I have implemented for fitting.
# Initial data for fitting
x_array = np.array(sep_df.E)
y_array_3gauss = np.array(sep_df.exp_cs)
def _1gaussian(x, amp1,cen1,sigma1,offset):
return amp1*(1/(sigma1*(np.sqrt(2*np.pi))))*(np.exp((-1.0/2.0)*(((x-cen1)/sigma1)**2)))+offset
def _3gaussian(x, amp1,cen1,sigma1,amp2,cen2,sigma2,amp3,cen3,sigma3,offset):
return _1gaussian(x, amp1,cen1,sigma1,offset=0) + \
_1gaussian(x, amp2,cen2,sigma2,offset=0) + \
_1gaussian(x, amp3,cen3,sigma3,offset=0) + offset
#initial_guesses for Gaussians
amp1 = 100 #max value without an offset (!)
cen1 = 140 # position of a center
sigma1 = 1 # sd of a gaussian, can be calculated approx. as HWHM / 2.355
amp2 = 32
cen2 = 157
sigma2 = 1
amp3 = 17.5
cen3 = 171.5
sigma3 = 1
offset_initial_guess = y_array_3gauss.mean()
p0=[amp1, cen1, sigma1,
amp2, cen2, sigma2,
amp3, cen3, sigma3,
offset_initial_guess]
# using a scipy.optimize.curve_fit for parameters Estimation
popt_3gauss, pcov_3gauss = scipy.optimize.curve_fit(_3gaussian, x_array, y_array_3gauss, p0=p0)
perr_3gauss = np.sqrt(np.diag(pcov_3gauss)) # errors (??)
print('Popt_3 gauss')
print(popt_3gauss)
i=0
for param in popt_3gauss:
print(f'Guess: {p0[i]} -> value: {param} (+/-) {perr_3gauss[i]}')
i+=1
pars_1 = np.append(popt_3gauss[0:3], popt_3gauss[9])
pars_2 = np.append(popt_3gauss[3:6], popt_3gauss[9])
pars_3 = np.append(popt_3gauss[6:9], popt_3gauss[9])
#calculating the separate Gaussians
gauss_peak_1 = _1gaussian(x_array, *pars_1)
gauss_peak_2 = _1gaussian(x_array, *pars_2)
gauss_peak_3 = _1gaussian(x_array, *pars_3)
it fits the data with some errors.
I don't really understand why my Gaussian pulses look so weird, but it's not the question for now.
Here is the output for model parameters:
Guess: 100 -> value: 19.921886501569567 (+/-) 0.18211089486661997
Guess: 140 -> value: 140.8226385680359 (+/-) 0.0009978640529532633
Guess: 1 -> value: 0.07977753969265024 (+/-) 0.0008591843799752477
Guess: 32 -> value: 5.8061836613068865 (+/-) 0.21223980806115864
Guess: 157 -> value: 157.24985139555835 (+/-) 0.005092072398387486
Guess: 1 -> value: 0.08218041022663795 (+/-) 0.0034588647851462877
Guess: 17.5 -> value: 4.183133300983996 (+/-) 0.2522036049333162
Guess: 171.5 -> value: 171.47025791173272 (+/-) 0.008818904590601183
Guess: 1 -> value: 0.11713718144344663 (+/-) 0.008042004990244404
Guess: 4.743919339218138 -> value: 4.016878986311514 (+/-) 0.04473028381895628
And fitting results:
For the first pulse using zoom to scale the image:
And with resampled scale for E-axis:
A:
One may calculate the uncertainty bands using strong analytical approach (if the function is known, like in my case). But it's pretty hard to obtain all the derivatives and even if it's possible a lot of calculations must be done for that.
I havr tried to use simplified approach. I used Monte Carlo technique.
I have repeatedly randomly sampled the values of errors for estimated parameters and calculated overal fitting results.
So I have calculated the function results for all randomly-noised parameters and got a large dataset of results.
Then I have used only the the max and min values of calculated results I have got - to obtain numerical values of bounds.
The result is presented below.
|
Uncertainty propagation and confidence intervals calculation using python scipy.curvefit
|
I am trying to fit a function using scipy.optimize.curve_fit.
I am implementing a fitting of data with the sum of 3 gaussians.
Here is the data that must be fit.
I need to estimate the parameters of each Gaussian and the errors of calculation of these parameters.
So I need to calculate the uncertainty of the fitting procedure and propagate this data to show a confidence interval for the modelled data.
Is it possible to do it using the error of parameters calculated using np.sqrt(np.diag(pcov_3gauss))? I saw it here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html
What approach should I use to propagate the parameters uncertainties and calc the uncertainty bands for the fitting?
Here is the code I have implemented for fitting.
# Initial data for fitting
x_array = np.array(sep_df.E)
y_array_3gauss = np.array(sep_df.exp_cs)
def _1gaussian(x, amp1,cen1,sigma1,offset):
return amp1*(1/(sigma1*(np.sqrt(2*np.pi))))*(np.exp((-1.0/2.0)*(((x-cen1)/sigma1)**2)))+offset
def _3gaussian(x, amp1,cen1,sigma1,amp2,cen2,sigma2,amp3,cen3,sigma3,offset):
return _1gaussian(x, amp1,cen1,sigma1,offset=0) + \
_1gaussian(x, amp2,cen2,sigma2,offset=0) + \
_1gaussian(x, amp3,cen3,sigma3,offset=0) + offset
#initial_guesses for Gaussians
amp1 = 100 #max value without an offset (!)
cen1 = 140 # position of a center
sigma1 = 1 # sd of a gaussian, can be calculated approx. as HWHM / 2.355
amp2 = 32
cen2 = 157
sigma2 = 1
amp3 = 17.5
cen3 = 171.5
sigma3 = 1
offset_initial_guess = y_array_3gauss.mean()
p0=[amp1, cen1, sigma1,
amp2, cen2, sigma2,
amp3, cen3, sigma3,
offset_initial_guess]
# using a scipy.optimize.curve_fit for parameters Estimation
popt_3gauss, pcov_3gauss = scipy.optimize.curve_fit(_3gaussian, x_array, y_array_3gauss, p0=p0)
perr_3gauss = np.sqrt(np.diag(pcov_3gauss)) # errors (??)
print('Popt_3 gauss')
print(popt_3gauss)
i=0
for param in popt_3gauss:
print(f'Guess: {p0[i]} -> value: {param} (+/-) {perr_3gauss[i]}')
i+=1
pars_1 = np.append(popt_3gauss[0:3], popt_3gauss[9])
pars_2 = np.append(popt_3gauss[3:6], popt_3gauss[9])
pars_3 = np.append(popt_3gauss[6:9], popt_3gauss[9])
#calculating the separate Gaussians
gauss_peak_1 = _1gaussian(x_array, *pars_1)
gauss_peak_2 = _1gaussian(x_array, *pars_2)
gauss_peak_3 = _1gaussian(x_array, *pars_3)
it fits the data with some errors.
I don't really understand why my Gaussian pulses look so weird, but it's not the question for now.
Here is the output for model parameters:
Guess: 100 -> value: 19.921886501569567 (+/-) 0.18211089486661997
Guess: 140 -> value: 140.8226385680359 (+/-) 0.0009978640529532633
Guess: 1 -> value: 0.07977753969265024 (+/-) 0.0008591843799752477
Guess: 32 -> value: 5.8061836613068865 (+/-) 0.21223980806115864
Guess: 157 -> value: 157.24985139555835 (+/-) 0.005092072398387486
Guess: 1 -> value: 0.08218041022663795 (+/-) 0.0034588647851462877
Guess: 17.5 -> value: 4.183133300983996 (+/-) 0.2522036049333162
Guess: 171.5 -> value: 171.47025791173272 (+/-) 0.008818904590601183
Guess: 1 -> value: 0.11713718144344663 (+/-) 0.008042004990244404
Guess: 4.743919339218138 -> value: 4.016878986311514 (+/-) 0.04473028381895628
And fitting results:
For the first pulse using zoom to scale the image:
And with resampled scale for E-axis:
|
[
"One may calculate the uncertainty bands using strong analytical approach (if the function is known, like in my case). But it's pretty hard to obtain all the derivatives and even if it's possible a lot of calculations must be done for that.\nI havr tried to use simplified approach. I used Monte Carlo technique.\nI have repeatedly randomly sampled the values of errors for estimated parameters and calculated overal fitting results.\nSo I have calculated the function results for all randomly-noised parameters and got a large dataset of results.\nThen I have used only the the max and min values of calculated results I have got - to obtain numerical values of bounds.\nThe result is presented below.\n\n"
] |
[
0
] |
[] |
[] |
[
"curve_fitting",
"python",
"scipy"
] |
stackoverflow_0074523336_curve_fitting_python_scipy.txt
|
Q:
How to get list of users who are having owner access for a azure subscription using python
I am trying to get the list of users who have owner access for a subscription.
I tried checking the Azure Python SDK, but I am not finding any API that provides this functionality.
A subscription list API is available, but it does not provide details of the users who have access to a particular subscription.
I tried the below code
subscriptionClient = SubscriptionClient(credentials)
for subscription in subscriptionClient.subscriptions.list():
print (subscription)
Any help would be appreciated
A:
Azure Python SDK
If you're looking to use the Azure Python SDK, then you should use the AuthorizationManagementClient class.
You can try to get RoleAssignments for your subscription at the scope of the subscription itself.
I work closely with C#, so don't have Python code handy, but will try to update back with Python code a little later.
UPDATE
Here's a sample code. I hope this gives you enough to proceed.
from azure.mgmt.authorization import AuthorizationManagementClient
authorizationClient = AuthorizationManagementClient(credentials, '<your subscription guid>')
roles = authorizationClient.role_assignments.list()
for role in roles:
    print(role)
REST API
If you want to directly call the REST API from code, use the Microsoft.Authorization/roleAssignments REST API.
GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments?api-version=2018-01-01-preview
{scope} will be subscriptions/<your subscriptionId> to fetch roleAssignments at the subscription level.
Here is an example request to this API and response.
To find all the users who have been explicitly assigned the "Owner" role at the subscription level
Request:
GET https://management.azure.com/subscriptions/{my subscription GUID}/providers/Microsoft.Authorization/roleAssignments?api-version=2018-01-01-preview
Response:
Notice that the Role Definition Id in the response is "8e3af657-a8ff-443c-a75c-2fe8c4bcb635". This corresponds to the built-in Owner role.
{"value":[{"properties":{"roleDefinitionId":"/subscriptions/{my Subscription GUID}/providers/Microsoft.Authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635","principalId":"{some user GUID}","principalType":"User","scope":"/subscriptions/{my Subscription GUID}","createdOn":"2018-10-03T05:12:52.7213301Z","updatedOn":"2018-10-03T05:12:52.7213301Z","createdBy":"GUID","updatedBy":"GUID"},"id":"/subscriptions/{my Subscription GUID}/providers/Microsoft.Authorization/roleAssignments/83eee76b-4a0d-4f61-8c62-409501e95457","type":"Microsoft.Authorization/roleAssignments","name":"83eee76b-4a0d-4f61-8c62-409501e95457"}]}
Once you get the response, it will contain Role Definitions IDs instead of exact names. For all Built-in Roles, you can know which Role it is before hand by visiting this Microsoft documentation. E.g. Id for Owner role is "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"
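To go one step further and filter the SDK results down to Owner assignments in Python, a sketch (exact property paths vary between versions of azure-mgmt-authorization, so check the model objects your version returns):
OWNER_ROLE_ID = "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"

# Keep only the assignments whose role definition is the built-in Owner role
owners = [
    ra for ra in authorizationClient.role_assignments.list()
    if ra.properties.role_definition_id.endswith(OWNER_ROLE_ID)
]
for ra in owners:
    print(ra.properties.principal_id)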
A:
this PowerShell command :
(Get-AzureRmRoleAssignment -RoleDefinitionId "8e3af657-a8ff-443c-a75c-2fe8c4bcb635" -Scope "/subscriptions/<your azure sub ID>" | where {($_.ObjectType -EQ "user") -and ($_.Scope -EQ "/subscriptions/<your azure sub ID>") } ) | select DisplayName,SignInName
will return all Azure AD users with subscription owner role.
I captured the network traffic for this PS command, and it calls multiple REST APIs to finish this process.
You can host this command on Azure App Service WebJobs, Azure Functions or Azure Automation and expose a webhook to get the user list when you need it.
Hope it helps.
A:
Late, but this could be helpful to someone else. Here is Python code to find the number of owners in a subscription:
from azure.mgmt.authorization import AuthorizationManagementClient
authorizationClient = AuthorizationManagementClient(credentials, '<your subscription guid>')
def number_of_owners(client):
    owner_count = 0
    subscription_scope = '/subscriptions/<your subscription guid>'
    owner_role = '8e3af657-a8ff-443c-a75c-2fe8c4bcb635'  # this is the ID for the built-in Owner role in Azure

    roles = client.role_assignments.list_for_scope(
        scope = subscription_scope,
        filter = 'atScope()'
    )

    for role in roles:
        role_assignment_details = client.role_assignments.get(
            scope = subscription_scope,
            role_assignment_name = role.name
        )
        role_ids = role_assignment_details.properties.role_definition_id
        if owner_role in role_ids:
            owner_count += 1

    print(owner_count)
|
How to get list of users who are having owner access for a azure subscription using python
|
I am trying to get the list of users who have owner access for a subscription.
I tried checking the Azure Python SDK, but I am not finding any API that provides this functionality.
A subscription list API is available, but it does not provide details of the users who have access to a particular subscription.
I tried the below code
subscriptionClient = SubscriptionClient(credentials)
for subscription in subscriptionClient.subscriptions.list():
print (subscription)
Any help would be appreciated
|
[
"Azure Python SDK\nIf you're looking to use the Azure Python SDK then you should use AuthorizationManagementClient class \nYou can try to get RoleAssignments for your subscription at the scope of subscription itself.\nI work closely with C#, so don't have Python code handy, but will try to update back with Python code a little later.\nUPDATE\nHere's a sample code. I hope this gives you enough to proceed.\nfrom azure.mgmt.authorization import AuthorizationManagementClient\n\nauthorizationClient = AuthorizationManagementClient(credentials, '<your subscription guid>')\nroles = authorizationClient.role_assignments.list()\nfor role in roles:\nprint(role)\n\nREST API\nIf you want to directly call the REST API from code, use the Microsoft.Authorization/roleAssignments REST API.\nGET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments?api-version=2018-01-01-preview\n\n{scope} will be subscriptions/<your subscriptionId> to fetch roleAssignments at the subscription level.\nHere is an example request to this API and response.\nTo find all the users who have been explicitly assigned \"Owner\" role at the subscription level\nRequest:\nGET https://management.azure.com/subscriptions/{my subscription GUID}/providers/Microsoft.Authorization/roleAssignments?api-version=2018-01-01-preview\n\nResponse:\nNotice That Role Definition Id in response is \"8e3af657-a8ff-443c-a75c-2fe8c4bcb635\". This corresponds to built-in Owner role.\n{\"value\":[{\"properties\":{\"roleDefinitionId\":\"/subscriptions/{my Subscription GUID}/providers/Microsoft.Authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635\",\"principalId\":\"{some user GUID}\",\"principalType\":\"User\",\"scope\":\"/subscriptions/{my Subscription GUID}\",\"createdOn\":\"2018-10-03T05:12:52.7213301Z\",\"updatedOn\":\"2018-10-03T05:12:52.7213301Z\",\"createdBy\":\"GUID\",\"updatedBy\":\"GUID\"},\"id\":\"/subscriptions/{my Subscription GUID}/providers/Microsoft.Authorization/roleAssignments/83eee76b-4a0d-4f61-8c62-409501e95457\",\"type\":\"Microsoft.Authorization/roleAssignments\",\"name\":\"83eee76b-4a0d-4f61-8c62-409501e95457\"}]}\n\nOnce you get the response, it will contain Role Definitions IDs instead of exact names. For all Built-in Roles, you can know which Role it is before hand by visiting this Microsoft documentation. E.g. Id for Owner role is \"8e3af657-a8ff-443c-a75c-2fe8c4bcb635\"\n",
"this PowerShell command :\n(Get-AzureRmRoleAssignment -RoleDefinitionId \"8e3af657-a8ff-443c-a75c-2fe8c4bcb635\" -Scope \"/subscriptions/<your azure sub ID>\" | where {($_.ObjectType -EQ \"user\") -and ($_.Scope -EQ \"/subscriptions/<your azure sub ID>\") } ) | select DisplayName,SignInName\n\nwill return all Azure AD users with subscription owner role. \nI have tried to captured data packages about this ps command, and it called multiple rest APIs to finish this process.\nYou can host this command on Azure App service webjobs, Azure function or Azure automation and explore a webhook to get the user list when you need it.\nHope it helps.\n",
"Late but this could be helpful to someone else. Here is code in python to find the number of owners in subscription:\nfrom azure.mgmt.authorization import AuthorizationManagementClient\n\nauthorizationClient = AuthorizationManagementClient(credentials, '<your \nsubscription guid>')\n\ndef number_of_owners(client):\n results = []\n owners_list = []\n subscription_scope = '/subscriptions/<your subscription guid>'\n owner_role = '8e3af657-a8ff-443c-a75c-2fe8c4bcb635' #this is the ID for the owner role in Azure\n\n roles = client.role_assignments.list_for_scope(\n scope = subscription_scope,\n filter = 'atScope()'\n ) \n\n for role in roles:\n role_name_id = role.name\n role_assignment_details = client.role_assignments.get(\n scope = subscription_scope,\n role_assignment_name = role_name_id\n )\n role_ids = role_assignment_details.properties.role_definition_id\n if owner_role in role_ids:\n owner_role_list = role_ids.count(owner_role)\n print(owner_role_list)\n\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"azure",
"azure_active_directory",
"azure_automation",
"azure_sdk_python",
"python"
] |
stackoverflow_0052828180_azure_azure_active_directory_azure_automation_azure_sdk_python_python.txt
|
Q:
Plotting points from a TKinter form
I'm trying to learn GUIs in Python and am populating a plot as a test; I could get the random input to behave (see Add Red), but even though x and y are fed in correctly (verified with print statements while debugging), the blue points snap to the origin and the axes reset to those values.
What am I missing here? Thanks.
import tkinter as tk
import numpy as np
import matplotlib as plt
plt.use("TkAgg")
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure
root = tk.Tk()
root.title("Main Window")
figure = Figure()
plot = figure.add_subplot(111)
plot.set_xlim(0, 100)
plot.set_ylim(0, 100)
plot.grid(which='both', visible=True)
canvas = FigureCanvasTkAgg(figure, root)
def openBlueWindow():
newWindow = tk.Toplevel(root)
newWindow.title("New Blue Platform")
a = tk.Label(newWindow ,text = "X Coordinate (0-100)").grid(row = 0,column = 0)
b = tk.Label(newWindow ,text = "Y Coordinate (0-100)").grid(row = 1,column = 0)
a1 = tk.Entry(newWindow)
a1.grid(row = 0,column = 1)
b1 = tk.Entry(newWindow)
b1.grid(row = 1,column = 1)
btn = tk.Button(newWindow, text="Add", command = lambda: add_blue(x = a1.get(), y = b1.get())).grid(row=2, column=0)
def add_blue(x,y):
plot.scatter(x, y, marker = (3,0), color="Blue")
canvas.draw()
def add_red():
x = np.random.randint(0, 101)
y = np.random.randint(0, 101)
plot.scatter(x, y, marker=(3, 0), color="Red")
canvas.draw()
blueButton = tk.Button(root, text="Add Blue", command=openBlueWindow)
redButton = tk.Button(root, text="Add Red", command=add_red)
canvas.get_tk_widget().pack(fill=tk.BOTH, expand=True)
blueButton.pack()
redButton.pack()
root.mainloop()
A:
The issue is you are passing strings to the scatter function.
You can change the scatter plot line to ...
plot.scatter(float(x), float(y), marker = (3,0), color="Blue")
Alternatively, you can define the variable for the entries to be doubles.
The end of your openBlueWindow function would then look like this.
ad = tk.DoubleVar(value=0)
bd = tk.DoubleVar(value=0)
a1 = tk.Entry(newWindow,textvariable=ad)
a1.grid(row = 0,column = 1)
b1 = tk.Entry(newWindow,textvariable=bd)
b1.grid(row = 1,column = 1)
tk.Button(newWindow, text="Add", command = lambda: add_blue(x = ad.get(), y = bd.get())).grid(row=2, column=0)
and you would not need to specify float in the scatter function.
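As a hedged aside, here is a minimal sketch of the add_blue callback with explicit conversion and basic validation; the try/except and the 0-100 range check are assumptions, not part of the original answer (plot and canvas are the module-level objects from the question):
def add_blue(x, y):
    # Entry widgets return strings, so convert before plotting.
    try:
        xf, yf = float(x), float(y)
    except ValueError:
        return  # assumption: silently ignore non-numeric input
    if 0 <= xf <= 100 and 0 <= yf <= 100:
        plot.scatter(xf, yf, marker=(3, 0), color="Blue")
        canvas.draw()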
|
Plotting points from a TKinter form
|
I'm trying to learn GUIs in Python and am populating a plot as a test. I could get the random input to behave (see Add Red), but even though x and y are fed in correctly (verified with print statements while debugging), the blue points snap to the origin and the plot resets to those values.
What am I missing here? Thanks.
import tkinter as tk
import numpy as np
import matplotlib as plt
plt.use("TkAgg")
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure
root = tk.Tk()
root.title("Main Window")
figure = Figure()
plot = figure.add_subplot(111)
plot.set_xlim(0, 100)
plot.set_ylim(0, 100)
plot.grid(which='both', visible=True)
canvas = FigureCanvasTkAgg(figure, root)
def openBlueWindow():
newWindow = tk.Toplevel(root)
newWindow.title("New Blue Platform")
a = tk.Label(newWindow ,text = "X Coordinate (0-100)").grid(row = 0,column = 0)
b = tk.Label(newWindow ,text = "Y Coordinate (0-100)").grid(row = 1,column = 0)
a1 = tk.Entry(newWindow)
a1.grid(row = 0,column = 1)
b1 = tk.Entry(newWindow)
b1.grid(row = 1,column = 1)
btn = tk.Button(newWindow, text="Add", command = lambda: add_blue(x = a1.get(), y = b1.get())).grid(row=2, column=0)
def add_blue(x,y):
plot.scatter(x, y, marker = (3,0), color="Blue")
canvas.draw()
def add_red():
x = np.random.randint(0, 101)
y = np.random.randint(0, 101)
plot.scatter(x, y, marker=(3, 0), color="Red")
canvas.draw()
blueButton = tk.Button(root, text="Add Blue", command=openBlueWindow)
redButton = tk.Button(root, text="Add Red", command=add_red)
canvas.get_tk_widget().pack(fill=tk.BOTH, expand=True)
blueButton.pack()
redButton.pack()
root.mainloop()
|
[
"The issue is you are passing strings to the scatter function.\nYou can change the scatter plot line to ...\nplot.scatter(float(x), (y), marker = (3,0), color=\"Blue\")\n\nAlternatively, you can define the variable for the entries to be doubles.\nThe end of your openBlueWindow function would then look like this.\n ad = tk.DoubleVar(value=0)\n bd = tk.DoubleVar(value=0)\n a1 = tk.Entry(newWindow,textvariable=ad)\n a1.grid(row = 0,column = 1)\n b1 = tk.Entry(newWindow,textvariable=bd)\n b1.grid(row = 1,column = 1)\n tk.Button(newWindow, text=\"Add\", command = lambda: add_blue(x = ad.get(), y = bd.get())).grid(row=2, column=0)\n\nand you would not need to specify float in the scatter function.\n"
] |
[
1
] |
[] |
[] |
[
"matplotlib",
"python",
"tkinter"
] |
stackoverflow_0074536636_matplotlib_python_tkinter.txt
|
Q:
Applying lambda function to datetime
I am using the following code to find clusters with difference <=1 in a list
from itertools import groupby
from operator import itemgetter
data = [ 1, 4,5,6, 10, 15,16,17,18, 22, 25,26,27,28]
for k, g in groupby(enumerate(data), lambda (i, x): (i-x)):
print map(itemgetter(1), g)
If, however, I change the data to an array of datetimes to find clusters of datetimes which are only 1 hour apart, it fails.
I am trying the following:
>>> data
array([datetime.datetime(2016, 10, 1, 8, 0),
datetime.datetime(2016, 10, 1, 9, 0),
datetime.datetime(2016, 10, 1, 10, 0), ...,
datetime.datetime(2019, 1, 3, 9, 0),
datetime.datetime(2019, 1, 3, 10, 0),
datetime.datetime(2019, 1, 3, 11, 0)], dtype=object)
from itertools import groupby
from operator import itemgetter
data = [ 1, 4,5,6, 10, 15,16,17,18, 22, 25,26,27,28]
for k, g in groupby(enumerate(data), lambda (i, x): (i-x).total_seconds()/3600):
print map(itemgetter(1), g)
The error is:
for k, g in groupby(enumerate(data), lambda (i, x): int((i-x).total_seconds()/3600)):
TypeError: unsupported operand type(s) for -: 'int' and 'datetime.datetime'
There are a lot of solutions on the web, but I want to apply this particular one for learning.
A:
If you want to get all subsequences of items such that each item is an hour later than the previous one (not clusters of items that are each within an hour of each other), you need to iterate over pairs (data[i-1], data[i]). Currently, you are just iterating over (i, data[i]), which raises TypeError when you try to subtract data[i] from i. A working example could look like this:
from itertools import izip
def find_subsequences(data):
if len(data) <= 1:
return []
current_group = [data[0]]
delta = 3600
results = []
for current, next in izip(data, data[1:]):
if abs((next - current).total_seconds()) > delta:
# Here, `current` is the last item of the previous subsequence
# and `next` is the first item of the next subsequence.
if len(current_group) >= 2:
results.append(current_group)
current_group = [next]
continue
current_group.append(next)
return results
A:
Let's import datetime, take out the ellipsis from your data, and then apply a lambda function with two nested loops to flag any two dates whose elapsed time is lower than one hour; a boolean matrix will identify the desired clusters easily.
from datetime import datetime as dt
data = np.array([dt(2016, 10, 1, 8, 0),
dt(2016, 10, 1, 9, 0),
dt(2016, 10, 1, 10, 0),
dt(2019, 1, 3, 9, 0),
dt(2019, 1, 3, 10, 0),
dt(2019, 1, 3, 11, 0)], dtype=object)
mds = lambda ds: [[abs(da-db).seconds/3600 <= 1 for da in ds] for db in ds]
Applying the function to data:
md = mds(data)
md will give us:
[[True, True, False, True, False, False],
[True, True, True, True, True, False],
[False, True, True, False, True, True],
[True, True, False, True, True, False],
[False, True, True, True, True, True],
[False, False, True, False, True, True]]
Note that the main diagonal is True (the delta time is zero), and the matrix is symmetrical. True elements are those where abs(date[i] - date[j]) is lower than or equal to one hour; i and j between 0 and 5 indicate each pair of dates considered in the matrix.
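As a side note, here is a minimal Python 3 sketch of the original groupby approach adapted to datetimes; tuple-unpacking lambdas and izip no longer exist in Python 3, and the epoch-hour conversion via timestamp() is an assumption (it uses local time for naive datetimes):
from itertools import groupby

def hour_clusters(data):
    # Consecutive timestamps exactly one hour apart share the same key,
    # mirroring the i - x trick used for integers.
    def key(pair):
        i, d = pair
        return i - d.timestamp() / 3600
    return [[d for _, d in g] for _, g in groupby(enumerate(data), key=key)]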
|
Applying lambda function to datetime
|
I am using the following code to find clusters with difference <=1 in a list
from itertools import groupby
from operator import itemgetter
data = [ 1, 4,5,6, 10, 15,16,17,18, 22, 25,26,27,28]
for k, g in groupby(enumerate(data), lambda (i, x): (i-x)):
print map(itemgetter(1), g)
If, however, I change the data to an array of datetimes to find clusters of datetimes which are only 1 hour apart, it fails.
I am trying the following:
>>> data
array([datetime.datetime(2016, 10, 1, 8, 0),
datetime.datetime(2016, 10, 1, 9, 0),
datetime.datetime(2016, 10, 1, 10, 0), ...,
datetime.datetime(2019, 1, 3, 9, 0),
datetime.datetime(2019, 1, 3, 10, 0),
datetime.datetime(2019, 1, 3, 11, 0)], dtype=object)
from itertools import groupby
from operator import itemgetter
data = [ 1, 4,5,6, 10, 15,16,17,18, 22, 25,26,27,28]
for k, g in groupby(enumerate(data), lambda (i, x): (i-x).total_seconds()/3600):
print map(itemgetter(1), g)
The error is:
for k, g in groupby(enumerate(data), lambda (i, x): int((i-x).total_seconds()/3600)):
TypeError: unsupported operand type(s) for -: 'int' and 'datetime.datetime'
There are a lot of solutions on the web, but I want to apply this particular one for learning.
|
[
"If you want to get all subsequences of items such that each item is an hour later than the previous one (not clusters of items that each are within an hour from eachother), you need to iterate over pairs (data[i-1], data[i]). Currently, you are just iterating over (i, data[i]) which raises TypeError when you try to substract data[i] from i. A working example could look like this:\nfrom itertools import izip\n\ndef find_subsequences(data):\n if len(data) <= 1:\n return []\n\n current_group = [data[0]]\n delta = 3600\n results = []\n\n for current, next in izip(data, data[1:]):\n if abs((next - current).total_seconds()) > delta:\n # Here, `current` is the last item of the previous subsequence\n # and `next` is the first item of the next subsequence.\n if len(current_group) >= 2:\n results.append(current_group)\n\n current_group = [next]\n continue\n\n current_group.append(next)\n\n return results\n\n",
"Let's import datetime, and take out the elipsis from your data, and then apply a lambda function with two nested loops to calculate elapsed time between any two dates lower than one hour... a boolean matrix will identify the desired clusters easily.\nfrom datetime import datetime as dt\n \ndata = np.array([dt(2016, 10, 1, 8, 0),\n dt(2016, 10, 1, 9, 0),\n dt(2016, 10, 1, 10, 0),\n dt(2019, 1, 3, 9, 0),\n dt(2019, 1, 3, 10, 0),\n dt(2019, 1, 3, 11, 0)], dtype=object)\n \nmds = lambda ds: [[abs(da-db).seconds/3600 <= 1 for da in ds] for db in ds]\n\nAppling the function to data:\nmd = mds(data)\n\nmd will give us:\n[[True, True, False, True, False, False],\n [True, True, True, True, True, False],\n [False, True, True, False, True, True],\n [True, True, False, True, True, False],\n [False, True, True, True, True, True],\n [False, False, True, False, True, True]]\n\nNote that the main diagonal is True (Deltatime is zero), and the matrix is symmetrical. True elements are those where abs(date[i] - date[j]) is lower or equal to one hour, i and j between 0 and 5 indicates each pair of dates that are considerated at the matrix.\n"
] |
[
1,
0
] |
[] |
[] |
[
"lambda",
"python"
] |
stackoverflow_0039905432_lambda_python.txt
|
Q:
Python-Selenium no such element: Unable to locate element
I'm new to coding. I'm trying to make a Twitter bot, but when I find XPaths and paste them into my code it gives an error.
I tried to find the element with id, name, and selector and paste them into my code, but none of them worked
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
import time
class TwitterBot:
def __init__(self , username , password) :
self.username = username
self.password = password
chrome_options = Options()
self.bot = webdriver.Chrome(ChromeDriverManager().install() , options = chrome_options)
def login(self):
bot = self.bot
bot.get("https://twitter.com/login")
time.sleep(5)
email = bot.find_element(By.XPATH , '/html[1]/body[1]/div[1]/div[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/div[1]/div[1]/div[2]/div[2]/div[1]/div[1]/div[2]/div[2]/div[1]/div[1]/div[1]/div[5]/label[1]/div[1]/div[2]/div[1]/input[1]')
email.send_keys(self.username)
f = TwitterBot("blabla" ,"blabla")
f.login()
A:
You need to learn how to create correct, short and unique locators. Very long absolute XPaths and CSS Selectors are extremely breakable.
Also, you need to use WebDriverWait expected_conditions explicit waits, not hardcoded delays.
The following code works:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
options.add_argument('--disable-notifications')
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 20)
url = "https://twitter.com/login"
driver.get(url)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "[autocomplete='username']"))).send_keys("ku-ku")
The result is that the username field gets filled in successfully (screenshot omitted).
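For completeness, a minimal sketch of how the same wait could slot into the original login method (the 20-second timeout is an assumption):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

def login(self):
    bot = self.bot
    bot.get("https://twitter.com/login")
    email = WebDriverWait(bot, 20).until(
        EC.element_to_be_clickable((By.CSS_SELECTOR, "[autocomplete='username']"))
    )
    email.send_keys(self.username)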
|
Python-Selenium no such element: Unable to locate element
|
I'm new to coding. I'm trying to make a Twitter bot, but when I find XPaths and paste them into my code it gives an error.
I tried to find the element with id, name, and selector and paste them into my code, but none of them worked
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
import time
class TwitterBot:
def __init__(self , username , password) :
self.username = username
self.password = password
chrome_options = Options()
self.bot = webdriver.Chrome(ChromeDriverManager().install() , options = chrome_options)
def login(self):
bot = self.bot
bot.get("https://twitter.com/login")
time.sleep(5)
email = bot.find_element(By.XPATH , '/html[1]/body[1]/div[1]/div[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/div[1]/div[1]/div[2]/div[2]/div[1]/div[1]/div[2]/div[2]/div[1]/div[1]/div[1]/div[5]/label[1]/div[1]/div[2]/div[1]/input[1]')
email.send_keys(self.username)
f = TwitterBot("blabla" ,"blabla")
f.login()
|
[
"You need to learn how to create correct, short and unique locators. Very long absolute XPaths and CSS Selectors are extremely breakable.\nAlso you need to use WebDriverWait expected_conditions explicit waits, not a hardcoded delays.\nThe following code works:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\noptions.add_argument('--disable-notifications')\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 20)\n\nurl = \"https://twitter.com/login\"\ndriver.get(url)\n\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"[autocomplete='username']\"))).send_keys(\"ku-ku\")\n\nThe result is:\n\n"
] |
[
0
] |
[] |
[] |
[
"css_selectors",
"python",
"selenium",
"webdriverwait",
"xpath"
] |
stackoverflow_0074537005_css_selectors_python_selenium_webdriverwait_xpath.txt
|
Q:
OSError: SavedModel file does not exist
I saved a model as trained_model.h5 and loaded the model in a different file; I was easily able to run it, and it was working until today, when it started showing this error
OSError: SavedModel file does not exist at: C:\Users\harsh\AppData\Local\Temp\tfhub_modules\602d30248ff7929470db09f7385fc895e9ceb4c0\{saved_model.pbtxt|saved_model.pb}
directory
I load the model using
model = tf.keras.models.load_model(('trained_model.h5'), custom_objects={'KerasLayer':hub.KerasLayer})
Why is it showing this error today when it was working before?
A:
I just deleted all the Temp files and it started working fine again; I think it is some Windows flaw.
A:
Instead of using the .h5 extension, use .pb (the SavedModel format).
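A minimal sketch of saving and re-loading in the SavedModel format instead of HDF5, assuming model is the trained Keras model from the question (TF2 behavior; the path is illustrative):
import tensorflow as tf
import tensorflow_hub as hub

# A path without a .h5 suffix saves a SavedModel directory in TF2.
model.save("trained_model")

# Reload; custom_objects is still needed for the hub layer.
model = tf.keras.models.load_model(
    "trained_model", custom_objects={"KerasLayer": hub.KerasLayer}
)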
|
OSError: SavedModel file does not exist
|
I saved a model as trained_model.h5 and loaded the model in a different file; I was easily able to run it, and it was working until today, when it started showing this error
OSError: SavedModel file does not exist at: C:\Users\harsh\AppData\Local\Temp\tfhub_modules\602d30248ff7929470db09f7385fc895e9ceb4c0\{saved_model.pbtxt|saved_model.pb}
directory
I load the model using
model = tf.keras.models.load_model(('trained_model.h5'), custom_objects={'KerasLayer':hub.KerasLayer})
Why is it showing this error today when it was working before?
|
[
"I just deleted all the Temp files and it has started working fine again, I think it is some Windows flaw.\n",
"Instead of using .h5 extension use .pd.\n"
] |
[
0,
0
] |
[] |
[] |
[
"deep_learning",
"machine_learning",
"python",
"tensorflow"
] |
stackoverflow_0073013513_deep_learning_machine_learning_python_tensorflow.txt
|
Q:
How to annotate results of iteration?
from __future__ import annotations
from typing import TypeVar, Generic, Type
T = TypeVar('T')
class ListElem(Generic[T]):
def __init__(self, value: T, nxt: ListElem[T] = None):
self.value = value
self.nxt = nxt
class LinkedList(Generic[T]):
def __init__(self, elem_factory: Type[ListElem], head: ListElem[T] = None):
self._elem_factory = elem_factory
self._head = head
def add(self, value: T):
elem = self._elem_factory(value, self._head)
self._head = elem
def __iter__(self):
self._next = self._head
return self
def __next__(self) -> ListElem[T]:
if not self._next:
raise StopIteration
result = self._next
self._next = self._next.nxt
return result
def main():
lst: LinkedList[int] = LinkedList(ListElem)
lst.add(5)
for elem in lst:
print(elem.value) # PyCharm see elem like int, not like ListElem
if __name__ == '__main__':
main()
Annotating __next__ like this doesn't help me.
How can I annotate that LinkedList returns ListElem[T] during iteration, not just T?
If I do it like this
lst: LinkedList[ListElem[int]]
I cannot annotate this inner int for the method
def add(self, value: ?): # T = ListElem[int], but I want to annotate value as just int
pass
I don't want to annotate LinkedList like
lst: LinkedList[ListElem[int], int]
because the int is just repeated and annotates the same thing: the type of the value inside ListElem.value.
A:
I would say this is a bug in PyCharm's static type checker.
That elem type should be inferred as ListElem[int] and is correctly inferred as such by mypy for example.
Your code still had a few issues. Mostly type-safety related (since you are already dealing with annotations), but a few other optimizations. One of which incidentally also fixes that PyCharm problem.
The most important one IMO is that you did not use the type parameter of ListElem in the elem_factory annotation. If you do, you'll be able to immediately bind the type argument upon initialization by passing a specified ListElem[int] class. Then you don't need to explicitly annotate lst in your main function. This also happens to satisfy/silence PyCharm.
Here is a version with my proposed changes that passes mypy --strict and should work as intended:
from __future__ import annotations
from typing import Generic, Optional, TypeVar
T = TypeVar("T")
class ListElem(Generic[T]):
def __init__(self, value: T, nxt: Optional[ListElem[T]] = None) -> None:
self.value = value
self.nxt = nxt
class LinkedList(Generic[T]):
def __init__(self, elem_factory: type[ListElem[T]], head: Optional[ListElem[T]] = None) -> None:
self._elem_factory = elem_factory
self._head = head
def add(self, value: T) -> None:
self._head = self._elem_factory(value, self._head)
def __iter__(self) -> LinkedList[T]:
self._next = self._head
return self
def __next__(self) -> ListElem[T]:
if self._next is None:
raise StopIteration
result = self._next
self._next = self._next.nxt
return result
def main() -> None:
lst = LinkedList(ListElem[int])
lst.add(5)
for elem in lst:
print(elem.value)
if __name__ == "__main__":
main()
However it seems that PyCharm still does not identify elem as the correct type. But at least it is not complaining.
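As a quick sanity check, a sketch using reveal_type (understood by mypy only, never executed at runtime; the exact module prefixes in the output are assumptions):
lst = LinkedList(ListElem[int])
lst.add(5)
for elem in lst:
    reveal_type(elem)        # expected roughly: ListElem[builtins.int]
    reveal_type(elem.value)  # expected roughly: builtins.int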
|
How to annotate results of iteration?
|
from __future__ import annotations
from typing import TypeVar, Generic, Type
T = TypeVar('T')
class ListElem(Generic[T]):
def __init__(self, value: T, nxt: ListElem[T] = None):
self.value = value
self.nxt = nxt
class LinkedList(Generic[T]):
def __init__(self, elem_factory: Type[ListElem], head: ListElem[T] = None):
self._elem_factory = elem_factory
self._head = head
def add(self, value: T):
elem = self._elem_factory(value, self._head)
self._head = elem
def __iter__(self):
self._next = self._head
return self
def __next__(self) -> ListElem[T]:
if not self._next:
raise StopIteration
result = self._next
self._next = self._next.nxt
return result
def main():
lst: LinkedList[int] = LinkedList(ListElem)
lst.add(5)
for elem in lst:
print(elem.value) # PyCharm see elem like int, not like ListElem
if __name__ == '__main__':
main()
Annotating __next__ like this doesn't help me.
How can I annotate that LinkedList returns ListElem[T] during iteration, not just T?
If I do it like this
lst: LinkedList[ListElem[int]]
I cannot annotate this inner int for the method
def add(self, value: ?): # T = ListElem[int], but I want to annotate value as just int
pass
I don't want to annotate LinkedList like
lst: LinkedList[ListElem[int], int]
because the int is just repeated and annotates the same thing: the type of the value inside ListElem.value.
|
[
"I would say this is a bug in PyCharm's static type checker.\nThat elem type should be inferred as ListElem[int] and is correctly inferred as such by mypy for example.\nYour code still had a few issues. Mostly type-safety related (since you are already dealing with annotations), but a few other optimizations. One of which incidentally also fixes that PyCharm problem.\nThe most important one IMO is that you did not use the type parameter of ListElem in the elem_factory annotation. If you do, you'll be able to immediately bind the type argument upon initialization by passing a specified ListElem[int] class. Then you don't need to explicitly annotate lst in your main function. This also happens to satisfy/silence PyCharm.\nHere is a version with my proposed changes that passes mypy --strict and should work as intended:\nfrom __future__ import annotations\nfrom typing import Generic, Optional, TypeVar\n\n\nT = TypeVar(\"T\")\n\n\nclass ListElem(Generic[T]):\n def __init__(self, value: T, nxt: Optional[ListElem[T]] = None) -> None:\n self.value = value\n self.nxt = nxt\n\n\nclass LinkedList(Generic[T]):\n def __init__(self, elem_factory: type[ListElem[T]], head: Optional[ListElem[T]] = None) -> None:\n self._elem_factory = elem_factory\n self._head = head\n\n def add(self, value: T) -> None:\n self._head = self._elem_factory(value, self._head)\n\n def __iter__(self) -> LinkedList[T]:\n self._next = self._head\n return self\n\n def __next__(self) -> ListElem[T]:\n if self._next is None:\n raise StopIteration\n result = self._next\n self._next = self._next.nxt\n return result\n\n\ndef main() -> None:\n lst = LinkedList(ListElem[int])\n lst.add(5)\n for elem in lst:\n print(elem.value)\n\n\nif __name__ == \"__main__\":\n main()\n\nHowever it seems that PyCharm still does not identify elem as the correct type. But at least it is not complaining.\n"
] |
[
0
] |
[] |
[] |
[
"iteration",
"methods",
"oop",
"python",
"type_hinting"
] |
stackoverflow_0074510046_iteration_methods_oop_python_type_hinting.txt
|
Q:
How to download video by url using ffmpeg-python
I'm trying to write script that will be downloading part of youtube video by url. I'm using ffmpeg + ffmpeg-python library.
I have terminal command that I want put to python code.
ffmpeg -i "url_to_download" -ss 00:00:15 -t 00:00:25 -c:v copy -c:a copy "demo.mp4"
url_to_download is an youtube stream url that I get like in an answer to another question https://stackoverflow.com/a/57134397/6583203
I started writing script
import ffmpeg
FROM = "00:00:15"
TO = "00:00:25"
TARGET = "demo.mp4"
ffmpeg.input(url_to_download, ss=FROM, t=TO)
But I don't know how to pass the parameters -c:v copy -c:a copy "demo.mp4" to ffmpeg.input
Do not advise me to use subprocess; I get the same error as in the following question: Python ffmpeg won't accept path, why?
A:
This answer worked for me
ffmpeg.input(url_to_download, ss=FROM, t=TO).output("demo.mp4", vcodec="copy", acodec="copy").overwrite_output().run()
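Spelled out as a fuller sketch with the question's variable names (url_to_download is assumed to be obtained as in the question; note that t= maps to -t, which is a duration, not an end time):
import ffmpeg

FROM = "00:00:15"
DURATION = "00:00:25"  # passed to -t, so this is a duration
TARGET = "demo.mp4"

(
    ffmpeg
    .input(url_to_download, ss=FROM, t=DURATION)
    .output(TARGET, vcodec="copy", acodec="copy")
    .overwrite_output()
    .run()
)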
|
How to download video by url using ffmpeg-python
|
I'm trying to write a script that downloads part of a YouTube video by URL. I'm using ffmpeg + the ffmpeg-python library.
I have a terminal command that I want to express in Python code.
ffmpeg -i "url_to_download" -ss 00:00:15 -t 00:00:25 -c:v copy -c:a copy "demo.mp4"
url_to_download is a YouTube stream URL that I get as in an answer to another question https://stackoverflow.com/a/57134397/6583203
I started writing the script
import ffmpeg
FROM = "00:00:15"
TO = "00:00:25"
TARGET = "demo.mp4"
ffmpeg.input(url_to_download, ss=FROM, t=TO)
But I don't know how to pass the parameters -c:v copy -c:a copy "demo.mp4" to ffmpeg.input
Do not advise me to use subprocess; I get the same error as in the following question: Python ffmpeg won't accept path, why?
|
[
"This answer worked for me\nffmpeg.input(url_to_download, ss=FROM, t=TO).output(\"demo.mp4\", vcodec=\"copy\", acodec=\"copy\").overwrite_output().run()\n\n"
] |
[
0
] |
[] |
[] |
[
"ffmpeg",
"ffmpeg_python",
"python"
] |
stackoverflow_0074439126_ffmpeg_ffmpeg_python_python.txt
|
Q:
'Add' object is not callable
I want to make a graphic with matplotlib and I have a problem with the class Intern Operation. This is the function:
def makeGraphic(self, f, root):
plt.rcParams['axes.unicode_minus'] = False
fig = plt.figure(figsize=(6, 4)) # New canvas
# Use the axisartist.Subplot method to create the drawing-area object ax
ax = axisartist.Subplot(fig, 111)
fig.add_axes(ax)
X = np.linspace(-5, 10, 100)
Y = [f(x) for x in X] # here I have the problem
ax.plot(X, Y)
ax.scatter(root, 0, color='red')
#plt.legend([r'$a>1$'], loc ='lower right')
print(max(X), max(Y))
ax.axis[:].set_visible(False)
ax.axis["x"] = ax.new_floating_axis(0, 0, axis_direction="bottom")
ax.axis["y"] = ax.new_floating_axis(1, 0, axis_direction="bottom")
ax.axis["x"].set_axisline_style("-|>", size=1.0)
ax.axis["y"].set_axisline_style("-|>", size=1.0)
ax.annotate('x', xy=(5, 0), xytext=(4+1, 0.3))
ax.annotate('y', xy=(0, 1.0), xytext=(-0.2, 5))
plt.xlim(-5, 5)
plt.ylim(-5, 5)
X_lim = np.arange(-4, 4+1, 1)
ax.set_xticks(X_lim)
Y_lim = np.arange(-10,10+1, 1)
ax.set_yticks(Y_lim)
fstr = str(f)
ax.annotate(rf'$y = {fstr}$', xy=(8, -3), xytext=(8, -3))
#plt.legend()
plt.show()
The imports are:
import numpy as np
from sympy import diff
import matplotlib.pyplot as plt
import mpl_toolkits.axisartist as axisartist
#CONSTANTS
from operaciones.utils.constants import x
The traceback:
Traceback (most recent call last):
File "c:\Users\user\Desktop\MetodosNumericConsola\main.py", line 13, in <module>
hal.pointsToGraphic()
File "c:\Users\user\Desktop\MetodosNumericConsola\operaciones\First\Halley.py", line 49, in pointsToGraphic
super().makeGraphic(self.equation, self.root)
File "c:\Users\emily\Desktop\MetodosNumericConsola\operaciones\ListOp\InternOperation.py", line 49, in makeGraphic
Y = [f(x) for x in X]
^^^^^^^^^^^^^^^^^
File "c:\Users\emily\Desktop\MetodosNumericConsola\operaciones\ListOp\InternOperation.py", line 49, in <listcomp>
Y = [f(x) for x in X]
^^^^
TypeError: 'Add' object is not callable
A:
That happens because your f is a symbolic expression, not a numerical function. So, inside your Halley.py file you would have to use sympy's lambdify to convert self.equation to a numerical function:
# hopefully your expression has one symbol...
f = lambdify(list(self.equation.free_symbols), self.equation)
super().makeGraphic(f, self.root)
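A self-contained sketch of that conversion; the expression x**2 - 4 is only an illustration, not the asker's equation:
import numpy as np
from sympy import symbols, lambdify

x = symbols("x")
equation = x**2 - 4              # a symbolic Add expression
f = lambdify(list(equation.free_symbols), equation)

X = np.linspace(-5, 10, 100)
Y = [f(val) for val in X]        # f is now callable, so no TypeError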
|
'Add' object is not callable
|
I want to make a graphic with matplotlib and I have a problem with the class Intern Operation. This is the function:
def makeGraphic(self, f, root):
plt.rcParams['axes.unicode_minus'] = False
fig = plt.figure(figsize=(6, 4)) # New canvas
# Use the axisartist.Subplot method to create the drawing-area object ax
ax = axisartist.Subplot(fig, 111)
fig.add_axes(ax)
X = np.linspace(-5, 10, 100)
Y = [f(x) for x in X] # here I have the problem
ax.plot(X, Y)
ax.scatter(root, 0, color='red')
#plt.legend([r'$a>1$'], loc ='lower right')
print(max(X), max(Y))
ax.axis[:].set_visible(False)
ax.axis["x"] = ax.new_floating_axis(0, 0, axis_direction="bottom")
ax.axis["y"] = ax.new_floating_axis(1, 0, axis_direction="bottom")
ax.axis["x"].set_axisline_style("-|>", size=1.0)
ax.axis["y"].set_axisline_style("-|>", size=1.0)
ax.annotate('x', xy=(5, 0), xytext=(4+1, 0.3))
ax.annotate('y', xy=(0, 1.0), xytext=(-0.2, 5))
plt.xlim(-5, 5)
plt.ylim(-5, 5)
X_lim = np.arange(-4, 4+1, 1)
ax.set_xticks(X_lim)
Y_lim = np.arange(-10,10+1, 1)
ax.set_yticks(Y_lim)
fstr = str(f)
ax.annotate(rf'$y = {fstr}$', xy=(8, -3), xytext=(8, -3))
#plt.legend()
plt.show()
The imports are:
import numpy as np
from sympy import diff
import matplotlib.pyplot as plt
import mpl_toolkits.axisartist as axisartist
#CONSTANTS
from operaciones.utils.constants import x
The traceback:
Traceback (most recent call last):
File "c:\Users\user\Desktop\MetodosNumericConsola\main.py", line 13, in <module>
hal.pointsToGraphic()
File "c:\Users\user\Desktop\MetodosNumericConsola\operaciones\First\Halley.py", line 49, in pointsToGraphic
super().makeGraphic(self.equation, self.root)
File "c:\Users\emily\Desktop\MetodosNumericConsola\operaciones\ListOp\InternOperation.py", line 49, in makeGraphic
Y = [f(x) for x in X]
^^^^^^^^^^^^^^^^^
File "c:\Users\emily\Desktop\MetodosNumericConsola\operaciones\ListOp\InternOperation.py", line 49, in <listcomp>
Y = [f(x) for x in X]
^^^^
TypeError: 'Add' object is not callable
|
[
"That happens because your f is a symbolic expression, not a numerical function. So, inside your Halley.py file you would have to use sympy's lambdify to convert self.equation to a numerical function:\n# hopefully your expression has one symbol...\nf = lambdify(list(self.equation.free_symbols), self.equation)\nsuper().makeGraphic(f, self.root)\n\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"numpy",
"python",
"sympy"
] |
stackoverflow_0074537037_matplotlib_numpy_python_sympy.txt
|
Q:
How to get names of scheduled queries in bigquery
Using a Python client to connect with BigQuery, how can we get the names of all the scheduled queries present in that project?
I tried following up with this link - https://cloud.google.com/bigquery/docs/reference/datatransfer/libraries
But I got no information on the names of the scheduled queries.
A:
To list all the scheduled queries for a project with Python BigQuery Client :
def get_scheduled_queries_configs():
from google.cloud import bigquery_datatransfer
transfer_client = bigquery_datatransfer.DataTransferServiceClient()
project_id = "{project_id}"
parent = transfer_client.common_location_path(project=project_id, location='EU')
request = bigquery_datatransfer.ListTransferConfigsRequest(
parent=parent,
data_source_ids=['scheduled_query']
)
configs = transfer_client.list_transfer_configs(request=request)
print("Got the following configs:")
for config in configs:
print(f"\tID: {config.name}, Schedule: {config.schedule}")
print(f"\tDisplay name: {config.display_name}")
config_name = config.name
config_schedule = config.schedule
config_display_name = config.display_name
return configs
if __name__ == '__main__':
scheduled_queries_configs = get_scheduled_queries_configs()
Some explanations :
This code retrieves transfer configs only for scheduled queries via the ListTransferConfigsRequest object. The request takes the parent argument containing the project and the location, EU in this example. The request also takes the data_source_ids argument with the scheduled_query value
The location is important because if your scheduled queries are in US and you execute the request in EU, the result will be empty
The config.display_name attribute allows you to retrieve the name of a scheduled query
|
How to get names of scheduled queries in bigquery
|
Using a Python client to connect with BigQuery, how can we get the names of all the scheduled queries present in that project?
I tried following up with this link - https://cloud.google.com/bigquery/docs/reference/datatransfer/libraries
But I got no information on the names of the scheduled queries.
|
[
"To list all the scheduled queries for a project with Python BigQuery Client :\ndef get_scheduled_queries_configs():\n from google.cloud import bigquery_datatransfer\n\n transfer_client = bigquery_datatransfer.DataTransferServiceClient()\n\n project_id = \"{project_id}\"\n parent = transfer_client.common_location_path(project=project_id, location='EU')\n\n request = bigquery_datatransfer.ListTransferConfigsRequest(\n parent=parent,\n data_source_ids=['scheduled_query']\n )\n\n configs = transfer_client.list_transfer_configs(request=request)\n print(\"Got the following configs:\")\n for config in configs:\n print(f\"\\tID: {config.name}, Schedule: {config.schedule}\")\n print(f\"\\tDisplay name: {config.display_name}\")\n\n config_name = config.name\n config_schedule = config.schedule\n config_display_name = config.display_name\n\n return configs\n\n\nif __name__ == '__main__':\n scheduled_queries_configs = get_scheduled_queries_configs()\n\nSome explanations :\n\nThis code retrieves transfer configs only for scheduled queries via ListTransferConfigsRequest object. The request takes the parent argument containing the project and the location EU in this example. The request take also data_source_ids argument with scheduled_query value\nThe location is important because if your scheduled queries are in US and you execute the request in EU, the result will be empty\nThe config.display_name allows to retrieve the name of a scheduled query\n\n"
] |
[
1
] |
[] |
[] |
[
"api",
"bigdata",
"google_bigquery",
"google_cloud_functions",
"python"
] |
stackoverflow_0074536688_api_bigdata_google_bigquery_google_cloud_functions_python.txt
|
Q:
Mean value in Pandas is higher than it should be
I'm trying to compare some NFL concussion stats with some individual player stats from the combine.
dfcomb.to_excel(r'C:\Users\Documents\GWG\NFL Concussion\NFL_concussion\comb.xlsx', index=False)
# Create a merged df with players that are concussed on dfconc and players that are on dfcomb
dfcommon = dfcomb.merge(dfconc, on=['nameFull'])
dfcommon = pd.read_csv(r'C:\Users\crae1\Documents\GWG\NFL Concussion\NFL_concussion\common.csv')
# Initialise list of pos
positions = ['C', 'RB', 'CB', 'LB', 'OG', 'OT', 'QB', 'DT', 'S', 'FB', 'WR', 'TE']
# Iterate through list and compare height and weight
for pos in positions:
avg = np.mean(dfcomb['heightInches'].where(dfcomb['position'] == pos))
avgconc = np.mean(dfcommon['heightInches'].where(dfcommon['position'] == pos))
print('mean height in the NFL for {}s is {} in mean height of concussed players {} in'.format(pos + '\'', avg, avgconc))
for pos in positions:
avg = np.mean(dfcomb['weight'].where(dfcomb['position'] == pos))
avgconc = np.mean(dfcommon['weight'].where(dfcommon['position'] == pos))
print('mean weight in the NFL for {}s is {} lbs mean weight of concussed players {} lbs'.format(pos + '\'', avg, avgconc))
# Create summary df for concussion and NFL groups
heightavgNFL = dfcomb.groupby('position')['heightInches'].mean
heightavgdf = dfcommon.groupby('position')['heightInches'].mean
weightavgNFL = dfcomb.groupby('position')['weight'].mean
weightavgdf = dfcommon.groupby('position')['weight'].mean
# Plot height
bar_width = 0.10
ax = heightavgNFL().plot(kind='bar', align='edge', title='Mean NFL Height vs Mean Concussed Height', ylabel='Height (in)', xlabel='Position', width=bar_width, figsize=(16,8), color='r',label='NFL')
heightavgdf().plot(kind='bar', ax=ax, align='edge', title='Mean NFL Height vs Mean Concussed Height', ylabel='Height (in)', xlabel='Position', width=-bar_width, figsize=(16,8), color='b',label='Concussion Group')
plt.legend(loc='lower right')
# Plot weight
bar_width = 0.10
ax = weightavgNFL().plot(kind='bar', align='edge', title='Mean NFL Weight vs Mean Concussed Weight', ylabel='Weight (lbs)', xlabel='Position', width=bar_width, figsize=(16,8), color='r',label='NFL')
weightavgdf().plot(kind='bar', ax=ax, align='edge', title='Mean NFL Weight vs Mean Concussed Weight', ylabel='Weight (lbs)', xlabel='Position', width=-bar_width, figsize=(16,8), color='b',label='Concussion Group')
plt.legend(loc='lower right')
However, when looking at the QB weight from the combine csv file, the weight is a lot higher than expected and this problem only occurs with the QB position. I've had a look through the data and I don't see where it could be getting the higher values from.
(Screenshot: QB weight higher than expected.)
Here is the csv/xlsx file that dfcomb is from:
https://1drv.ms/x/s!AnmdeJC_g0dLnGzx9LplOkq8iYNJ?e=2NA3WA
Thanks
A:
The groupby method does not return sorted data, so your NFL and df data have different orders when you add them to the plots. Try this:
heightavgNFL = pd.DataFrame(dfcomb.groupby('position')['heightInches'].mean()).sort_values(by=['position'])
heightavgdf = pd.DataFrame(dfcommon.groupby('position')['heightInches'].mean()).sort_values(by=['position'])
weightavgNFL = pd.DataFrame(dfcomb.groupby('position')['weight'].mean()).sort_values(by=['position'])
weightavgdf = pd.DataFrame(dfcommon.groupby('position')['weight'].mean()).sort_values(by=['position'])
This works for me:
pd.DataFrame(weightavgNFL).join(weightavgdf, lsuffix='_NFL', rsuffix='_df').plot(kind='bar', align='edge', title='Mean NFL Weight vs Mean Concussed Weight', ylabel='Weight (lbs)', xlabel='Position', width=0.1, figsize=(16,8))
|
Mean value in Pandas is higher than it should be
|
I'm trying to compare some NFL concussion stats with some individual player stats from the combine.
dfcomb.to_excel(r'C:\Users\Documents\GWG\NFL Concussion\NFL_concussion\comb.xlsx', index=False)
# Create a merged df with players that are concussed on dfconc and players that are on dfcomb
dfcommon = dfcomb.merge(dfconc, on=['nameFull'])
dfcommon = pd.read_csv(r'C:\Users\crae1\Documents\GWG\NFL Concussion\NFL_concussion\common.csv')
# Initialise list of pos
positions = ['C', 'RB', 'CB', 'LB', 'OG', 'OT', 'QB', 'DT', 'S', 'FB', 'WR', 'TE']
# Iterate through list and compare height and weight
for pos in positions:
avg = np.mean(dfcomb['heightInches'].where(dfcomb['position'] == pos))
avgconc = np.mean(dfcommon['heightInches'].where(dfcommon['position'] == pos))
print('mean height in the NFL for {}s is {} in mean height of concussed players {} in'.format(pos + '\'', avg, avgconc))
for pos in positions:
avg = np.mean(dfcomb['weight'].where(dfcomb['position'] == pos))
avgconc = np.mean(dfcommon['weight'].where(dfcommon['position'] == pos))
print('mean weight in the NFL for {}s is {} lbs mean weight of concussed players {} lbs'.format(pos + '\'', avg, avgconc))
# Create summary df for concussion and NFL groups
heightavgNFL = dfcomb.groupby('position')['heightInches'].mean
heightavgdf = dfcommon.groupby('position')['heightInches'].mean
weightavgNFL = dfcomb.groupby('position')['weight'].mean
weightavgdf = dfcommon.groupby('position')['weight'].mean
# Plot height
bar_width = 0.10
ax = heightavgNFL().plot(kind='bar', align='edge', title='Mean NFL Height vs Mean Concussed Height', ylabel='Height (in)', xlabel='Position', width=bar_width, figsize=(16,8), color='r',label='NFL')
heightavgdf().plot(kind='bar', ax=ax, align='edge', title='Mean NFL Height vs Mean Concussed Height', ylabel='Height (in)', xlabel='Position', width=-bar_width, figsize=(16,8), color='b',label='Concussion Group')
plt.legend(loc='lower right')
# Plot weight
bar_width = 0.10
ax = weightavgNFL().plot(kind='bar', align='edge', title='Mean NFL Weight vs Mean Concussed Weight', ylabel='Weight (lbs)', xlabel='Position', width=bar_width, figsize=(16,8), color='r',label='NFL')
weightavgdf().plot(kind='bar', ax=ax, align='edge', title='Mean NFL Weight vs Mean Concussed Weight', ylabel='Weight (lbs)', xlabel='Position', width=-bar_width, figsize=(16,8), color='b',label='Concussion Group')
plt.legend(loc='lower right')
However, when looking at the QB weight from the combine csv file, the weight is a lot higher than expected and this problem only occurs with the QB position. I've had a look through the data and I don't see where it could be getting the higher values from.
(Screenshot: QB weight higher than expected.)
Here is the csv/xlsx file that dfcomb is from:
https://1drv.ms/x/s!AnmdeJC_g0dLnGzx9LplOkq8iYNJ?e=2NA3WA
Thanks
|
[
"The groupby method does not return sorted data, so your NFL and df data has different order when you add them to the plots. Try this:\nheightavgNFL = pd.DataFrame(dfcomb.groupby('position')['heightInches'].mean()).sort_values(by=['position'])\nheightavgdf = pd.DataFrame(dfcommon.groupby('position')['heightInches'].mean()).sort_values(by=['position'])\nweightavgNFL = pd.DataFrame(dfcomb.groupby('position')['weight'].mean()).sort_values(by=['position'])\nweightavgdf = pd.DataFrame(dfcommon.groupby('position')['weight'].mean()).sort_values(by=['position'])\n\nThis works for me:\npd.DataFrame(weightavgNFL).join(weightavgdf, lsuffix='_NFL', rsuffix='_df').plot(kind='bar', align='edge', title='Mean NFL Weight vs Mean Concussed Weight', ylabel='Weight (lbs)', xlabel='Position', width=0.1, figsize=(16,8))\n\n"
] |
[
0
] |
[] |
[] |
[
"csv",
"data_analysis",
"mean",
"pandas",
"python"
] |
stackoverflow_0074536482_csv_data_analysis_mean_pandas_python.txt
|
Q:
Replacing one data frame value from another based on timestamp Criterion
I am new to Python and maybe it's a very basic question for many here, so apologies in advance and please bear with me.
I have a timeseries of water level records where the timestamps are not continuous. I want to create a new timeseries which is continuous, and for the intervals where there is no data I want to assign NaN. I created a continuous time series with NaN values for level. I am trying to fill the observed values into it using the df.replace function in an iterative way, but I cannot produce what I want. Here is a sample of my code:
Input data example:
Time stamp Level
2020-06-18 18:00:00 161.287
2020-06-18 21:00:00 161.286
2020-06-19 12:00:00 161.283
2020-06-19 15:00:00 161.283
dti = pd.date_range("2020-05-01", periods=1224, freq="3H")
dti_df = pd.DataFrame(dti, columns=['Timestamp'])
dti_df["Level"] = np.nan
dti_df
df3 = pd.read_csv(r'C:\Users\krusm\Documents\water Levels Resampled.csv')
for i in dti_df.index:
for j in df3.index:
if dti_df['Timestamp'][i] == df3['Timestamp'][j]:
dti_df['Level'][i].replace(df3['Level'][j], inplace = True)
else:
pass
dti_df
This code runs without any error; however, it still produces the all-NaN timeseries.
NaN data frame created through first part of the code:
Time stamp Level
2020-06-18 18:00:00 NaN
2020-06-18 21:00:00 NaN
2020-06-19 00:00:00 NaN
2020-06-18 03:00:00 NaN
2020-06-19 06:00:00 NaN
2020-06-19 09:00:00 NaN
2020-06-19 12:00:00 NaN
2020-06-19 15:00:00 NaN
Output Expectation:
Time stamp Level
2020-06-18 18:00:00 161.287
2020-06-18 21:00:00 161.286
2020-06-19 00:00:00 NaN
2020-06-18 03:00:00 NaN
2020-06-19 06:00:00 NaN
2020-06-19 09:00:00 NaN
2020-06-19 12:00:00 161.283
2020-06-19 15:00:00 161.283
A:
This answer assumes that your input df, df3, has had the column renamed to Timestamp and the dtype converted to datetime. In this case, you can just use merge to make your life much easier
dti = pd.date_range("2020-05-01", periods=1224, freq="3H")
dti_df = pd.DataFrame(dti, columns=['Timestamp'])
new_df=pd.merge(left=dti_df,right=df3, on='Timestamp',how='left')
new_df.iloc[390:400,:]
Timestamp Level
390 2020-06-18 18:00:00 161.287
391 2020-06-18 21:00:00 161.286
392 2020-06-19 00:00:00 NaN
393 2020-06-19 03:00:00 NaN
394 2020-06-19 06:00:00 NaN
395 2020-06-19 09:00:00 NaN
396 2020-06-19 12:00:00 161.283
397 2020-06-19 15:00:00 161.283
398 2020-06-19 18:00:00 NaN
399 2020-06-19 21:00:00 NaN
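A short sketch of the preprocessing this answer assumes (the column rename and datetime conversion; the CSV path is the one from the question):
import pandas as pd

df3 = pd.read_csv(r'C:\Users\krusm\Documents\water Levels Resampled.csv')
df3 = df3.rename(columns={'Time stamp': 'Timestamp'})
df3['Timestamp'] = pd.to_datetime(df3['Timestamp'])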
|
Replacing one data frame value from another based on timestamp Criterion
|
I am new to Python and maybe it's a very basic question for many here, so apologies in advance and please bear with me.
I have a timeseries of water level records where the timestamps are not continuous. I want to create a new timeseries which is continuous, and for the intervals where there is no data I want to assign NaN. I created a continuous time series with NaN values for level. I am trying to fill the observed values into it using the df.replace function in an iterative way, but I cannot produce what I want. Here is a sample of my code:
Input data example:
Time stamp Level
2020-06-18 18:00:00 161.287
2020-06-18 21:00:00 161.286
2020-06-19 12:00:00 161.283
2020-06-19 15:00:00 161.283
dti = pd.date_range("2020-05-01", periods=1224, freq="3H")
dti_df = pd.DataFrame(dti, columns=['Timestamp'])
dti_df["Level"] = np.nan
dti_df
df3 = pd.read_csv(r'C:\Users\krusm\Documents\water Levels Resampled.csv')
for i in dti_df.index:
for j in df3.index:
if dti_df['Timestamp'][i] == df3['Timestamp'][j]:
dti_df['Level'][i].replace(df3['Level'][j], inplace = True)
else:
pass
dti_df
This code runs without any error; however, it still produces the all-NaN timeseries.
NaN data frame created through first part of the code:
Time stamp Level
2020-06-18 18:00:00 NaN
2020-06-18 21:00:00 NaN
2020-06-19 00:00:00 NaN
2020-06-18 03:00:00 NaN
2020-06-19 06:00:00 NaN
2020-06-19 09:00:00 NaN
2020-06-19 12:00:00 NaN
2020-06-19 15:00:00 NaN
Output Expectation:
Time stamp Level
2020-06-18 18:00:00 161.287
2020-06-18 21:00:00 161.286
2020-06-19 00:00:00 NaN
2020-06-18 03:00:00 NaN
2020-06-19 06:00:00 NaN
2020-06-19 09:00:00 NaN
2020-06-19 12:00:00 161.283
2020-06-19 15:00:00 161.283
|
[
"This answer assumes that your input df, df3, has had the column renamed to Timestamp and the dtype converted to datetime. In this case, you can just use merge to make your life much easier\ndti = pd.date_range(\"2020-05-01\", periods=1224, freq=\"3H\")\ndti_df = pd.DataFrame(dti, columns=['Timestamp'])\nnew_df=pd.merge(left=dti_df,right=df3, on='Timestamp',how='left')\n\nnew_df.iloc[390:400,:]\n\n Timestamp Level\n390 2020-06-18 18:00:00 161.287\n391 2020-06-18 21:00:00 161.286\n392 2020-06-19 00:00:00 NaN\n393 2020-06-19 03:00:00 NaN\n394 2020-06-19 06:00:00 NaN\n395 2020-06-19 09:00:00 NaN\n396 2020-06-19 12:00:00 161.283\n397 2020-06-19 15:00:00 161.283\n398 2020-06-19 18:00:00 NaN\n399 2020-06-19 21:00:00 NaN\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074525405_dataframe_pandas_python.txt
|
Q:
Annotate return type with psycopg2 type stub
I have a function which returns a psycopg2 connection if a connection can be established, so the return type should be Optional[psycopg2.connection] or psycopg2.connection | None. However, I am unable to import psycopg2.connection at runtime. I've tried the workaround mentioned in How can I import type-definitions from a typeshed stub-file? but that gives me this mypy error: Single overload definition, multiple required. Here's my code:
import psycopg2
from typing import Optional, TYPE_CHECKING, overload
if TYPE_CHECKING:
from psycopg2 import connection
@overload
def get_connection() -> Optional[connection]: ...
# Make DB error logging less spammy
has_logged_error = False
def get_connection():
try:
conn = psycopg2.connect(
dbname=settings.db_name,
user=settings.db_user,
password=settings.db_password,
host=settings.db_host,
port=settings.db_port,
)
return conn
except Exception as e:
global has_logged_error
if not has_logged_error:
logger.error(f"Error connecting to DB: {e}")
has_logged_error = True
return
A:
The question you linked proposes some extremely dirty hack which doesn't seem to work any more. There is absolutely no need for it under such simple circumstances. Moreover, to be honest, I cannot reproduce that solution on any mypy version starting from 0.800 (old enough, given that the linked answer is recent), so that perhaps never worked.
I reduced your code samples to contain only minimal return for the sake of readability.
Variant 1: use conditional import and string annotation
import psycopg2
from typing import Optional, TYPE_CHECKING
if TYPE_CHECKING:
from psycopg2 import connection
def get_connection() -> Optional['connection']:
return psycopg2.connect(...)
This is simple: mypy known what connection is (defined in stubs); runtime does not try to learn something about connection because it sees simply a string.
Variant 2: use conditional import and annotations future
from __future__ import annotations
import psycopg2
from typing import Optional, TYPE_CHECKING
if TYPE_CHECKING:
from psycopg2 import connection
def get_connection() -> Optional[connection]:
return psycopg2.connect(...)
Docs for future imports. This is very similar to direct string usage, but looks nicer and is more convenient, IMO.
Variant 3: use string annotation, but avoid conditional import
import psycopg2
from typing import Optional
def get_connection() -> Optional['psycopg2.connection']:
return psycopg2.connect(...)
Variant 4: use annotations future, but avoid conditional import
from __future__ import annotations
import psycopg2
from typing import Optional
def get_connection() -> Optional[psycopg2.connection]:
return psycopg2.connect(...)
Variants 3 and 4 do not expose that connection is stub-only, hiding it as implementation detail. You may prefer to state that explicitly - then use 1 or 2.
Modification to use current features
This is my favourite. Union syntax is valid in python 3.10+, so if you use an older one - stick with Optional as described above.
from __future__ import annotations
import psycopg2
def get_connection() -> psycopg2.connection | None:
return psycopg2.connect(...)
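A quick way to check any of these variants is mypy's reveal_type (interpreted by mypy only, never executed at runtime; the exact rendering depends on the psycopg2 stubs, so the comment is an assumption):
conn = get_connection()
reveal_type(conn)  # expected roughly: Union[connection, None]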
|
Annotate return type with psycopg2 type stub
|
I have a function which returns a psycopg2 connection if a connection can be established, so the return type should be Optional[psycopg2.connection] or psycopg2.connection | None. However, I am unable to import psycopg2.connection at runtime. I've tried the workaround mentioned in How can I import type-definitions from a typeshed stub-file? but that gives me this mypy error: Single overload definition, multiple required. Here's my code:
import psycopg2
from typing import Optional, TYPE_CHECKING, overload
if TYPE_CHECKING:
from psycopg2 import connection
@overload
def get_connection() -> Optional[connection]: ...
# Make DB error logging less spammy
has_logged_error = False
def get_connection():
try:
conn = psycopg2.connect(
dbname=settings.db_name,
user=settings.db_user,
password=settings.db_password,
host=settings.db_host,
port=settings.db_port,
)
return conn
except Exception as e:
global has_logged_error
if not has_logged_error:
logger.error(f"Error connecting to DB: {e}")
has_logged_error = True
return
|
[
"The question you linked proposes some extremely dirty hack which doesn't seem to work any more. There is absolutely no need for it under such simple circumstances. Moreover, to be honest, I cannot reproduce that solution on any mypy version starting from 0.800 (old enough, given that the linked answer is recent), so that perhaps never worked.\nI reduced your code samples to contain only minimal return for the sake of readability.\nVariant 1: use conditional import and string annotation\nimport psycopg2\nfrom typing import Optional, TYPE_CHECKING\n\nif TYPE_CHECKING:\n from psycopg2 import connection\n \ndef get_connection() -> Optional['connection']:\n return psycopg2.connect(...)\n\nThis is simple: mypy known what connection is (defined in stubs); runtime does not try to learn something about connection because it sees simply a string.\nVariant 2: use conditional import and annotations future\nfrom __future__ import annotations\nimport psycopg2\nfrom typing import Optional, TYPE_CHECKING\n\nif TYPE_CHECKING:\n from psycopg2 import connection\n \ndef get_connection() -> Optional[connection]:\n return psycopg2.connect(...)\n\nDocs for future imports. This is very similar to direct string usage, but looks nicer and is more convenient, IMO.\nVariant 3: use string annotation, but avoid conditional import\nimport psycopg2\nfrom typing import Optional\n \ndef get_connection() -> Optional['psycopg2.connection']:\n return psycopg2.connect(...)\n\nVariant 4: use annotations future, but avoid conditional import\nfrom __future__ import annotations\nimport psycopg2\nfrom typing import Optional\n \ndef get_connection() -> Optional[psycopg2.connection]:\n return psycopg2.connect(...)\n\nVariants 3 and 4 do not expose that connection is stub-only, hiding it as implementation detail. You may prefer to state that explicitly - then use 1 or 2.\nModification to use current features\nThis is my favourite. Union syntax is valid in python 3.10+, so if you use an older one - stick with Optional as described above.\nfrom __future__ import annotations\nimport psycopg2\n \ndef get_connection() -> psycopg2.connection | None:\n return psycopg2.connect(...)\n\n"
] |
[
1
] |
[] |
[] |
[
"mypy",
"psycopg2",
"python"
] |
stackoverflow_0074534284_mypy_psycopg2_python.txt
|
Q:
Extracting features from a pretrained model using hook function
I want to extract features from some of the layers from a pretrained model. For this aim, I am using thi pretrained model from here. I removed some of the final layers and for loading the pretrained weights, I use strict=False.
The architecture of the model is as follows:
Net(
(blocks): ModuleList(
(0): ResNetBasicStem(
(conv): Conv3d(3, 64, kernel_size=(1, 7, 7), stride=(1, 2, 2), padding=(0, 3, 3), bias=False)
(norm): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(1): ResStage(
(res_blocks): ModuleList(
(0): ResBlock(
(branch1_conv): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 2, 2), bias=False)
(branch1_norm): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(branch2): BottleneckBlock(
(conv_a): Conv3d(64, 64, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(64, 64, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
(1): ResBlock(
(branch2): BottleneckBlock(
(conv_a): Conv3d(256, 64, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(64, 64, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
(2): ResBlock(
(branch2): BottleneckBlock(
(conv_a): Conv3d(256, 64, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(64, 64, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
)
)
(2): ResStage(
(res_blocks): ModuleList(
(0): ResBlock(
(branch1_conv): Conv3d(256, 512, kernel_size=(1, 1, 1), stride=(1, 2, 2), bias=False)
(branch1_norm): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(branch2): BottleneckBlock(
(conv_a): Conv3d(256, 128, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(128, 128, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(128, 128, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(128, 512, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
(1): ResBlock(
(branch2): BottleneckBlock(
(conv_a): Conv3d(512, 128, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(128, 128, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(128, 128, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(128, 512, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
(2): ResBlock(
(branch2): BottleneckBlock(
(conv_a): Conv3d(512, 128, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(128, 128, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(128, 128, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(128, 512, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
(3): ResBlock(
(branch2): BottleneckBlock(
(conv_a): Conv3d(512, 128, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(128, 128, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(128, 128, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(128, 512, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
)
)
)
)
I use a hook function for extracting features from layers, and my method for loading the features from (1): ResStage and (2): ResStage is as follows:
class mymodel(nn.Module):
def __init__(self, pretrained=False):
super(mymodel, self).__init__()
self.activation = {}
def get_activation(name):
def hook(model, input, output):
self.activation[name] = output.detach()
return hook
self.r2plus1d = create_r2plus1d()
self.r2plus1d.Net.blocks[1].register_forward_hook(get_activation('ResBlock1'))
self.r2plus1d.Net.blocks[2].register_forward_hook(get_activation('ResBlock2'))
def forward(self, x, out_consp = False):
x = self.r2plus1d(x)
block1_output = self.activation['ResBlock1'] # channel_num:256
block2_output = self.activation['ResBlock2'] # channel_num:512
return block1_output, block2_output
Unfortunately the error says that there is no Net inside the state_dict of the model (when it comes to using the hook function). For other pretrained models I could use such scenarios for extracting features from intermediate layers, but seemingly, if I'm not mistaken, it may be tricky to extract features from Net.
A:
Looking at the link you provided, the function create_r2plus1d() returns the following
return Net(blocks=nn.ModuleList(blocks))
Your object self.r2plus1d is already a Net instance, so your line
self.r2plus1d.Net.blocks[1].register_forward_hook(get_activation('ResBlock1'))
is basically like calling Net twice.
You probably only have to call it like that and it should work.
self.r2plus1d.blocks[1].register_forward_hook(get_activation('ResBlock1'))
Let me know if this helps.
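For reference, a minimal sketch of the corrected class (the import path for create_r2plus1d is assumed from the linked pytorchvideo code):
import torch.nn as nn
from pytorchvideo.models.r2plus1d import create_r2plus1d  # assumed import path

class mymodel(nn.Module):
    def __init__(self):
        super().__init__()
        self.activation = {}

        def get_activation(name):
            def hook(model, input, output):
                self.activation[name] = output.detach()
            return hook

        self.r2plus1d = create_r2plus1d()
        # hooks attach directly to the Net's blocks -- no extra ".Net" level
        self.r2plus1d.blocks[1].register_forward_hook(get_activation('ResBlock1'))
        self.r2plus1d.blocks[2].register_forward_hook(get_activation('ResBlock2'))

    def forward(self, x):
        _ = self.r2plus1d(x)
        return self.activation['ResBlock1'], self.activation['ResBlock2']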
|
Extracting features from a pretrained model using hook function
|
I want to extract features from some of the layers from a pretrained model. For this aim, I am using this pretrained model from here. I removed some of the final layers and, for loading the pretrained weights, I use strict=False.
The architecture of the model is as follows:
Net(
(blocks): ModuleList(
(0): ResNetBasicStem(
(conv): Conv3d(3, 64, kernel_size=(1, 7, 7), stride=(1, 2, 2), padding=(0, 3, 3), bias=False)
(norm): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(1): ResStage(
(res_blocks): ModuleList(
(0): ResBlock(
(branch1_conv): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 2, 2), bias=False)
(branch1_norm): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(branch2): BottleneckBlock(
(conv_a): Conv3d(64, 64, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(64, 64, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
(1): ResBlock(
(branch2): BottleneckBlock(
(conv_a): Conv3d(256, 64, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(64, 64, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
(2): ResBlock(
(branch2): BottleneckBlock(
(conv_a): Conv3d(256, 64, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(64, 64, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(64, 64, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(64, 256, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
)
)
(2): ResStage(
(res_blocks): ModuleList(
(0): ResBlock(
(branch1_conv): Conv3d(256, 512, kernel_size=(1, 1, 1), stride=(1, 2, 2), bias=False)
(branch1_norm): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(branch2): BottleneckBlock(
(conv_a): Conv3d(256, 128, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(128, 128, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(128, 128, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(128, 512, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
(1): ResBlock(
(branch2): BottleneckBlock(
(conv_a): Conv3d(512, 128, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(128, 128, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(128, 128, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(128, 512, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
(2): ResBlock(
(branch2): BottleneckBlock(
(conv_a): Conv3d(512, 128, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(128, 128, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(128, 128, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(128, 512, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
(3): ResBlock(
(branch2): BottleneckBlock(
(conv_a): Conv3d(512, 128, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_a): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_a): ReLU()
(conv_b): Conv2plus1d(
(conv_t): Conv3d(128, 128, kernel_size=(3, 1, 1), stride=(1, 1, 1), padding=(1, 0, 0), bias=False)
(norm): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
(conv_xy): Conv3d(128, 128, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1), bias=False)
)
(norm_b): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act_b): ReLU()
(conv_c): Conv3d(128, 512, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(norm_c): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(activation): ReLU()
)
)
)
)
)
I use a hook function for extracting features from layers, and my method for loading the features from (1): ResStage and (2): ResStage is as follows:
class mymodel(nn.Module):
def __init__(self, pretrained=False):
super(mymodel, self).__init__()
self.activation = {}
def get_activation(name):
def hook(model, input, output):
self.activation[name] = output.detach()
return hook
self.r2plus1d = create_r2plus1d()
self.r2plus1d.Net.blocks[1].register_forward_hook(get_activation('ResBlock1'))
self.r2plus1d.Net.blocks[2].register_forward_hook(get_activation('ResBlock2'))
def forward(self, x, out_consp = False):
x = self.r2plus1d(x)
block1_output = self.activation['ResBlock1'] # channel_num:256
block2_output = self.activation['ResBlock2'] # channel_num:512
return block1_output, block2_output
Unfortunately the error says that there is no Net inside the state_dict of the model (when it comes to using the hook function). For other pretrained models I could use such scenarios for extracting features from intermediate layers, but seemingly, if I'm not mistaken, it may be tricky to extract features from Net.
|
[
"Looking at the link you provided, the function create_r2plus1d() returns the following\n return Net(blocks=nn.ModuleList(blocks))\n\nYour object self.r2plus1d is already a Net instance, so your line\nself.r2plus1d.Net.blocks[1].register_forward_hook(get_activation('ResBlock1'))\n\nis basically like calling Net twice.\nYou probably only have to call it like that and it should work.\nself.r2plus1d.blocks[1].register_forward_hook(get_activation('ResBlock1'))\n\nLet me know if this helps.\n"
] |
[
3
] |
[] |
[] |
[
"deep_learning",
"image_processing",
"python",
"pytorch"
] |
stackoverflow_0074535731_deep_learning_image_processing_python_pytorch.txt
|
Q:
Iterate multiple JSON Objects in POST request via Python
#! /usr/bin/env python3
from contextlib import redirect_stdout
import io
from lib2to3.pgen2.token import LESS
from locale import format_string
from turtle import back
import requests
import json
import sys
from requests.structures import CaseInsensitiveDict
### ONLY PROVEN FOR ONE OBJECT AT A TIME
#data = open('commandoutputid.json')
#data = json.JSONDecoder(data)
#data = [json.loads(line) for line in open('commandoutputid.json', 'r')]
#data2= json.dumps(data)
### WOULD NOT WORK IF I DIDNT PASS DATA AS A STRING
### EVEN THOUGH WHEN I PASSED AS A GOOD JSON OBJECT INSIDE AN ARRAY. COULDNT DESERIALIZE
with open('commandoutputid.json', 'r') as json_file:
json_dict = json.load(json_file)
# dumps the json object into an element
json_str = json.dumps(json_dict)
# load the json to a string
dict_str = json.loads(json_str)
url = 'https://...'
headers = CaseInsensitiveDict()
headers["Accept"] = "plain/text"
headers["Authorization"] = "Bearer Token"
response = requests.post(url, json=dict_str, headers=headers, verify=False)
with open("backup_output.txt", "wb") as f:
f.write(response.content)
print(response.content)
print(dict_str)
The above script works for passing a single JSON object via POST in order to return the desired output.
My goal is to iterate through hundreds if not thousands of JSON objects in "commandoutputid.json" and pass each one via a POST request. So instead of a single POST request, it would be a POST request per JSON object in the file.
A:
So you're dealing with an NDJSON (Newline Delimited JSON) instead of a JSON.
http://ndjson.org/
Of course python has a library for it → https://pypi.org/project/ndjson/
Below is a simple load/loop to show how to deal with it.
main.py
import ndjson
from pprint import pprint
with open('commandoutputid.ndjson', 'r') as infile:
all_data = ndjson.load(infile)
for dict_element in all_data:
print(type(dict_element))
pprint(dict_element, indent=4)
requirements.txt
ndjson
commandoutputid.ndjson
{"deviceId":1077,"aggregationId":"1aba3c13-5891-4c75-8648-4fe6b954e1f5","commandText":"show running-config","logicalSystemName":"<none>","seenAt":"2022-11-15T06:00:01.685243+00:00"}
{"deviceId":1082,"aggregationId":"1aba3c13-5891-4c75-8648-4fe6b954e1f5","commandText":"show running-config","logicalSystemName":"<none>","seenAt":"2022-11-15T06:00:01.705206+00:00"}
{"deviceId":763,"aggregationId":"1aba3c13-5891-4c75-8648-4fe6b954e1f5","commandText":"show running-config","logicalSystemName":"<none>","seenAt":"2022-11-15T06:00:01.698229+00:00"}
output
<class 'dict'>
{ 'aggregationId': '1aba3c13-5891-4c75-8648-4fe6b954e1f5',
'commandText': 'show running-config',
'deviceId': 1077,
'logicalSystemName': '<none>',
'seenAt': '2022-11-15T06:00:01.685243+00:00'}
<class 'dict'>
{ 'aggregationId': '1aba3c13-5891-4c75-8648-4fe6b954e1f5',
'commandText': 'show running-config',
'deviceId': 1082,
'logicalSystemName': '<none>',
'seenAt': '2022-11-15T06:00:01.705206+00:00'}
<class 'dict'>
{ 'aggregationId': '1aba3c13-5891-4c75-8648-4fe6b954e1f5',
'commandText': 'show running-config',
'deviceId': 763,
'logicalSystemName': '<none>',
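To then send one POST per object, a minimal sketch reusing the url/headers from the question (the endpoint and token are placeholders, not real values):
import ndjson
import requests
from requests.structures import CaseInsensitiveDict

url = 'https://...'  # same endpoint as in the question
headers = CaseInsensitiveDict()
headers["Accept"] = "plain/text"
headers["Authorization"] = "Bearer Token"

with open('commandoutputid.ndjson', 'r') as infile:
    all_data = ndjson.load(infile)

for dict_element in all_data:
    # one POST request per JSON object in the file
    response = requests.post(url, json=dict_element, headers=headers, verify=False)
    print(response.status_code)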
|
Iterate multiple JSON Objects in POST request via Python
|
#! /usr/bin/env python3
from contextlib import redirect_stdout
import io
from lib2to3.pgen2.token import LESS
from locale import format_string
from turtle import back
import requests
import json
import sys
from requests.structures import CaseInsensitiveDict
### ONLY PROVEN FOR ONE OBJECT AT A TIME
#data = open('commandoutputid.json')
#data = json.JSONDecoder(data)
#data = [json.loads(line) for line in open('commandoutputid.json', 'r')]
#data2= json.dumps(data)
### WOULD NOT WORK IF I DIDNT PASS DATA AS A STRING
### EVEN THOUGH WHEN I PASSED AS A GOOD JSON OBJECT INSIDE AN ARRAY. COULDNT DESERIALIZE
with open('commandoutputid.json', 'r') as json_file:
json_dict = json.load(json_file)
# dumps the json object into an element
json_str = json.dumps(json_dict)
# load the json to a string
dict_str = json.loads(json_str)
url = 'https://...'
headers = CaseInsensitiveDict()
headers["Accept"] = "plain/text"
headers["Authorization"] = "Bearer Token"
response = requests.post(url, json=dict_str, headers=headers, verify=False)
with open("backup_output.txt", "wb") as f:
f.write(response.content)
print(response.content)
print(dict_str)
The above script works for passing a single JSON object via POST in order to return the desired output.
My goal is to iterate through hundreds if not thousands of JSON objects in "commandoutputid.json" and pass each one via a POST request. So instead of a single POST request, it would be a POST request per JSON object in the file.
|
[
"So you're dealing with an NDJSON (Newline Delimited JSON) instead of a JSON.\nhttp://ndjson.org/\nOf course python has a library for it → https://pypi.org/project/ndjson/\nBelow is a simple load/loop to show how to deal with it.\nmain.py\nimport ndjson\nfrom pprint import pprint\n\n\nwith open('commandoutputid.ndjson', 'r') as infile:\n all_data = ndjson.load(infile)\n\n\nfor dict_element in all_data:\n print(type(dict_element))\n pprint(dict_element, indent=4)\n\nrequirements.txt\nndjson\n\ncommandoutputid.ndjson\n{\"deviceId\":1077,\"aggregationId\":\"1aba3c13-5891-4c75-8648-4fe6b954e1f5\",\"commandText\":\"show running-config\",\"logicalSystemName\":\"<none>\",\"seenAt\":\"2022-11-15T06:00:01.685243+00:00\"}\n{\"deviceId\":1082,\"aggregationId\":\"1aba3c13-5891-4c75-8648-4fe6b954e1f5\",\"commandText\":\"show running-config\",\"logicalSystemName\":\"<none>\",\"seenAt\":\"2022-11-15T06:00:01.705206+00:00\"}\n{\"deviceId\":763,\"aggregationId\":\"1aba3c13-5891-4c75-8648-4fe6b954e1f5\",\"commandText\":\"show running-config\",\"logicalSystemName\":\"<none>\",\"seenAt\":\"2022-11-15T06:00:01.698229+00:00\"}\n\noutput\n<class 'dict'>\n{ 'aggregationId': '1aba3c13-5891-4c75-8648-4fe6b954e1f5',\n 'commandText': 'show running-config',\n 'deviceId': 1077,\n 'logicalSystemName': '<none>',\n 'seenAt': '2022-11-15T06:00:01.685243+00:00'}\n<class 'dict'>\n{ 'aggregationId': '1aba3c13-5891-4c75-8648-4fe6b954e1f5',\n 'commandText': 'show running-config',\n 'deviceId': 1082,\n 'logicalSystemName': '<none>',\n 'seenAt': '2022-11-15T06:00:01.705206+00:00'}\n<class 'dict'>\n{ 'aggregationId': '1aba3c13-5891-4c75-8648-4fe6b954e1f5',\n 'commandText': 'show running-config',\n 'deviceId': 763,\n 'logicalSystemName': '<none>',\n\n"
] |
[
0
] |
[] |
[] |
[
"for_loop",
"json",
"post",
"python"
] |
stackoverflow_0074535924_for_loop_json_post_python.txt
|
Q:
Saving result from for loop to different columns
I am trying to run a nested loop in which I want the output to be saved in four different columns. Let C1R1 be the value I want in the first column, first row; C2R2 the one I want in the second column, second row; etc. What I have come up with thus far gives me a list where the output is saved like this:
['C1R1', 'C2R1', 'C3R1', 'C4R1']. This is the code I am using:
dfs1 = []
for i in range(24):
pd = (data_json2['data']['Rows'][i])
for j in range(4):
pd1 = pd['Columns'][j]['Value']
dfs1.append(pd1)
What could be a good way to achieve this?
EDIT: This is what I want to achieve:
Column 1 Column 2 Column 3 Column 4
0 0 24 48 72
1 1 25 49 73
2 2 26 50 74
3 3 27 51 75
4 4 28 52 76
5 5 29 53 77
6 6 30 54 78
7 7 31 55 79
8 8 32 56 80
9 9 33 57 81
10 10 34 58 82
11 11 35 59 83
12 12 36 60 84
13 13 37 61 85
14 14 38 62 86
15 15 39 63 87
16 16 40 64 88
17 17 41 65 89
18 18 42 66 90
19 19 43 67 91
20 20 44 68 92
21 21 45 69 93
22 22 46 70 94
23 23 47 71 95
While this is what I got now:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]
Thank you.
A:
Try:
import pandas as pd
def get_dataframe(num_cols=4, num_values=24):
return pd.DataFrame(
([v * 24 + c for v in range(num_cols)] for c in range(num_values)),
columns=[f"Column {c}" for c in range(1, num_cols + 1)],
)
df = get_dataframe()
print(df)
Prints:
Column 1 Column 2 Column 3 Column 4
0 0 24 48 72
1 1 25 49 73
2 2 26 50 74
3 3 27 51 75
4 4 28 52 76
5 5 29 53 77
6 6 30 54 78
7 7 31 55 79
8 8 32 56 80
9 9 33 57 81
10 10 34 58 82
11 11 35 59 83
12 12 36 60 84
13 13 37 61 85
14 14 38 62 86
15 15 39 63 87
16 16 40 64 88
17 17 41 65 89
18 18 42 66 90
19 19 43 67 91
20 20 44 68 92
21 21 45 69 93
22 22 46 70 94
23 23 47 71 95
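Applied to the nested structure from the question, a sketch along the same lines (data_json2 is assumed to have the 'Rows'/'Columns' layout shown there):
import pandas as pd

rows = []
for i in range(24):
    row = data_json2['data']['Rows'][i]
    # gather the four column values of this row into one list
    rows.append([row['Columns'][j]['Value'] for j in range(4)])

df = pd.DataFrame(rows, columns=[f"Column {j}" for j in range(1, 5)])
print(df)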
|
Saving result from for loop to different columns
|
I am trying to run a nested loop in which I want the output to be saved in four different columns. Let C1R1 be the value I want in the first column, first row; C2R2 the one I want in the second column, second row; etc. What I have come up with thus far gives me a list where the output is saved like this:
['C1R1', 'C2R1', 'C3R1', 'C4R1']. This is the code I am using:
dfs1 = []
for i in range(24):
pd = (data_json2['data']['Rows'][i])
for j in range(4):
pd1 = pd['Columns'][j]['Value']
dfs1.append(pd1)
What could be a good way to achieve this?
EDIT: This is what I want to achieve:
Column 1 Column 2 Column 3 Column 4
0 0 24 48 72
1 1 25 49 73
2 2 26 50 74
3 3 27 51 75
4 4 28 52 76
5 5 29 53 77
6 6 30 54 78
7 7 31 55 79
8 8 32 56 80
9 9 33 57 81
10 10 34 58 82
11 11 35 59 83
12 12 36 60 84
13 13 37 61 85
14 14 38 62 86
15 15 39 63 87
16 16 40 64 88
17 17 41 65 89
18 18 42 66 90
19 19 43 67 91
20 20 44 68 92
21 21 45 69 93
22 22 46 70 94
23 23 47 71 95
While this is what I got now:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]
Thank you.
|
[
"Try:\nimport pandas as pd\n\n\ndef get_dataframe(num_cols=4, num_values=24):\n return pd.DataFrame(\n ([v * 24 + c for v in range(num_cols)] for c in range(num_values)),\n columns=[f\"Column {c}\" for c in range(1, num_cols + 1)],\n )\n\n\ndf = get_dataframe()\nprint(df)\n\nPrints:\n Column 1 Column 2 Column 3 Column 4\n0 0 24 48 72\n1 1 25 49 73\n2 2 26 50 74\n3 3 27 51 75\n4 4 28 52 76\n5 5 29 53 77\n6 6 30 54 78\n7 7 31 55 79\n8 8 32 56 80\n9 9 33 57 81\n10 10 34 58 82\n11 11 35 59 83\n12 12 36 60 84\n13 13 37 61 85\n14 14 38 62 86\n15 15 39 63 87\n16 16 40 64 88\n17 17 41 65 89\n18 18 42 66 90\n19 19 43 67 91\n20 20 44 68 92\n21 21 45 69 93\n22 22 46 70 94\n23 23 47 71 95\n\n"
] |
[
0
] |
[] |
[] |
[
"for_loop",
"nested_loops",
"python"
] |
stackoverflow_0074534987_for_loop_nested_loops_python.txt
|
Q:
Python typing annotation for an object built with a class method
Here's a code snippet.
from typing import Generic, TypeVar, Union
T = TypeVar('T')
class Bar(object):
def __init__(self, blah: Union[int, str]) -> None:
self.blah = blah
class Foo(Generic[T]):
def __init__(self, bar: T) -> None:
self.bar = bar
@classmethod
def extract(cls, klass: T) -> 'Foo':
return cls(bar=klass(123))
foo = Foo.extract(Bar)
foo.bar # typing says "Any"
foo1 = Foo(Bar(123))
foo1.bar # typing says "Bar" as expected
When I try to create an object with a class method the typing annotation is not working as expected.
Screenshot of annotation not working
However if I build the object directly then it works just fine.
Screenshot of annotation working
I'm using VSCode with latest version of pylance and python extension as writing this question.
Please let me know if this is not possible or if I'm doing something wrong here.
The typing of the object created by the classmethod to have the annotation.
A:
So, you are specifying:
@classmethod
def extract(cls, klass: T) -> 'Foo':
but Foo is implicitly Foo[Any], if you don't provide a type parameter. You want:
@classmethod
def extract(cls, klass: T) -> 'Foo[T]':
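A slightly fuller sketch: since klass is a class (not an instance), annotating it as Type[T] lets T bind to the instance type. Note this Type[T] refinement goes beyond the one-character fix above:
from typing import Generic, Type, TypeVar, Union

T = TypeVar('T')

class Bar:
    def __init__(self, blah: Union[int, str]) -> None:
        self.blah = blah

class Foo(Generic[T]):
    def __init__(self, bar: T) -> None:
        self.bar = bar

    @classmethod
    def extract(cls, klass: Type[T]) -> 'Foo[T]':
        # a strict checker may still flag calling an arbitrary Type[T] with an int
        return cls(bar=klass(123))

foo = Foo.extract(Bar)
foo.bar  # now inferred as Bar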
|
Python typing annotation for an object built with a class method
|
Here's a code snippet.
from typing import Generic, TypeVar, Union
T = TypeVar('T')
class Bar(object):
def __init__(self, blah: Union[int, str]) -> None:
self.blah = blah
class Foo(Generic[T]):
def __init__(self, bar: T) -> None:
self.bar = bar
@classmethod
def extract(cls, klass: T) -> 'Foo':
return cls(bar=klass(123))
foo = Foo.extract(Bar)
foo.bar # typing says "Any"
foo1 = Foo(Bar(123))
foo1.bar # typing says "Bar" as expected
When I try to create an object with a class method the typing annotation is not working as expected.
Screenshot of annotation not working
However if I build the object directly then it works just fine.
Screenshot of annotation working
I'm using VSCode with latest version of pylance and python extension as writing this question.
Please let me know if this is not possible or if I'm doing something wrong here.
The typing of the object created by the classmethod to have the annotation.
|
[
"So, you are specifying:\n@classmethod\ndef extract(cls, klass: T) -> 'Foo':\n\nbut Foo is implicitly Foo[Any], if you don't provide a type parameter. You want:\n@classmethod\ndef extract(cls, klass: T) -> 'Foo[T]':\n\n"
] |
[
3
] |
[] |
[] |
[
"pylance",
"python",
"python_typing",
"visual_studio_code"
] |
stackoverflow_0074537134_pylance_python_python_typing_visual_studio_code.txt
|
Q:
Database in python - index issue
for page in range(1, pages + 1):
def append_organizator(organizator, organizatorzy=[]):
organizatorzy.append(organizator)
for i in organizatorzy:
try:
query = "INSERT INTO stypendia (organizator) values(%s)"
values = []
values.append(organizatorzy.pop())
cursor.execute(query, values)
conn.commit()
except:
pass
def append_type(rodzaj, rodzaje=[]):
rodzaje.append(rodzaj)
for i in rodzaje:
try:
query = "INSERT INTO stypendia (rodzaj) values(%s)"
values = []
values.append(rodzaje.pop())
cursor.execute(query, values)
conn.commit()
except:
pass
These are 2 functions that insert the data scraped from the website into the database.
The program iterates through all available pages on the site. The scraped data is inserted into the database.
As you can see on the screenshot, the title is inserted 7 times (the number of pages), then the organizator again 7 times, etc...
How can I solve this problem and have everything at the same indexes? database ss
A:
You need to combine the insert operations - each insert will create a new row. You should also just use the parameters without the array, they really aren't needed.
This example only handles two parameters (same as your code above). Add additional parameters as needed and adjust the insert statement
# The organization of this loop assumes the order of returned data is
# consistent: each "rodzaj" is at the same index as its "organizator"
# (as the original code assumes)
organizator = doc.find_all(class_='organizator-title')
rodzaj = doc.find_all('div', class_='fleft', string="Rodzaj:")
for i in range(min(len(organizator), len(rodzaj))):
o = organizator[i].text.strip().replace('\\n', '').replace('\\r', '')
        r = rodzaj[i].find_next().text.strip().replace('\\n', '').replace('\\r', '')
append(o, r)
def append(organizator: str, rodzaj: str):
try:
query = "INSERT INTO stypendia (organizator, rodzaj) values(%s, %s)"
values = (organizator, rodzaj)
cursor.execute(query, values)
conn.commit()
except:
pass
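If many (organizator, rodzaj) pairs accumulate, cursor.executemany can batch them in a single call -- a sketch assuming the same cursor/conn objects:
pairs = []  # filled in the scraping loop with (organizator, rodzaj) tuples
query = "INSERT INTO stypendia (organizator, rodzaj) values(%s, %s)"
cursor.executemany(query, pairs)
conn.commit()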
|
Database in python - index issue
|
for page in range(1, pages + 1):
def append_organizator(organizator, organizatorzy=[]):
organizatorzy.append(organizator)
for i in organizatorzy:
try:
query = "INSERT INTO stypendia (organizator) values(%s)"
values = []
values.append(organizatorzy.pop())
cursor.execute(query, values)
conn.commit()
except:
pass
def append_type(rodzaj, rodzaje=[]):
rodzaje.append(rodzaj)
for i in rodzaje:
try:
query = "INSERT INTO stypendia (rodzaj) values(%s)"
values = []
values.append(rodzaje.pop())
cursor.execute(query, values)
conn.commit()
except:
pass
These are 2 functions that insert the data scraped from the website into the database.
The program iterates through all available pages on the site. The scraped data is inserted into the database.
As you can see on the screenshot, the title is inserted 7 times (the number of pages), then the organizator again 7 times, etc...
How can I solve this problem and have everything at the same indexes? database ss
|
[
"You need to combine the insert operations - each insert will create a new row. You should also just use the parameters without the array, they really aren't needed.\nThis example only handles two parameters (same as your code above). Add additional parameters as needed and adjust the insert statement\n # The organization of this loop assumes the order of returned data is\n # consistent: each \"rodzaj\" is at the same index as its \"organizator\"\n # (as the original code assumes)\n organizator = doc.find_all(class_='organizator-title')\n rodzaj = doc.find_all('div', class_='fleft', string=\"Rodzaj:\")\n for i in range(min(len(organizator), len(rodzaj))):\n o = organizator[i].text.strip().replace('\\\\n', '').replace('\\\\r', '')\n r = rodzaji].find_next().text.strip().replace('\\\\n', '').replace('\\\\r', '')\n append(o, r)\n\n\ndef append(organizator: str, rodzaj: str):\n try:\n query = \"INSERT INTO stypendia (organizator, rodzaj) values(%s, %s)\"\n values = (organizator, rodzaj)\n cursor.execute(query, values)\n conn.commit()\n except:\n pass\n\n"
] |
[
0
] |
[] |
[] |
[
"beautifulsoup",
"database",
"mysql",
"python"
] |
stackoverflow_0074537169_beautifulsoup_database_mysql_python.txt
|
Q:
How to render math symbols as text in SVG/EPS/PDF images?
When creating graphs using, for instance, Python, it is possible to save the figures as vector graphics (SVG, EPS, PDF) with the text rendered separately. This makes it possible to select or search the text when shown in a PDF file. However, when I try to render a simple graph using math symbols in addition to text (in LaTeX), the math symbol gets encoded as part of the image rather than as text.
Here is a minimum reproducible example.
import numpy as np
import matplotlib.pyplot as plt
x_list = np.linspace(-10,10,num=128)
y = list(map(lambda x: (x**2 + x + 1), x_list))
plt.plot(y, label="$\\Psi_{example}$")
plt.legend()
plt.xticks(np.linspace(0, 128, num=8),
map(round, np.linspace(-10, 10, num=8), [0] * 8))
plt.savefig("./example.pdf")
Which produces the following image.
When saving this image as vector graphics, all the numbers as well as the 'example' word in the legend become selectable/searchable (i.e. rendered as text). However, the Ψ (Psi) character is not selectable/searchable.
Is there any way to make math symbols render as text in vector graphics?
A:
I have been able to get it to work in the way I think you want by first installing a LaTeX distribution (I used MikTex, from here) and then setting the matplotlib option to use LaTeX to render your symbols and text.
Note that after installing MikTex, I had to open a new instance of my command prompt or code editor to make sure it was aware of the change to my PATH and where LaTeX is installed.
I added the import and mpl.rcParams line to your example:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['text.usetex'] = True
x_list = np.linspace(-10, 10, num=128)
y = list(map(lambda x: (x**2 + x + 1), x_list))
plt.plot(y, label="$\\Psi_{example}$")
plt.legend()
plt.xticks(np.linspace(0, 128, num=8),
map(round, np.linspace(-10, 10, num=8), [0] * 8))
plt.savefig("./example.pdf")
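Depending on the output format, it can also help to embed the glyphs as text rather than paths -- a short sketch of the standard matplotlib rcParams for that:
import matplotlib as mpl

mpl.rcParams['pdf.fonttype'] = 42      # embed TrueType fonts so PDF text stays selectable
mpl.rcParams['svg.fonttype'] = 'none'  # keep SVG text as <text> elements instead of paths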
A:
It's two different matters. Characters are represented by codes. In this case, you are not able to select some characters because the software you are using to display the rendered results does not have that Unicode character defined in its fonts library, so it treats the character as an object or an empty box (commonly called "tofu"). But the render engine that turns your Python code (or TeX file) into a PDF/SVG does understand that character, which is why you can still see it. So much for the source of the issue.
Solution: You may use another IDE/browser if that is the platform where you view the results. Chrome usually supports most Unicode characters, except those defined very recently.
Moreover, Ψ (Psi) is a Greek letter. Check whether your operating system has Greek letters installed in its fonts library. If it doesn't, go to The Unicode Consortium website and search "Display Problems"; it will come up with a page explaining how to install a font depending on your OS or browser.
|
How to render math symbols as text in SVG/EPS/PDF images?
|
When creating graphs using, for instance, Python. It is possible to save these figures as vector graphics (SVG, EPS, PDF) and the text is rendered separately. This makes it possible to select or search the text when shown in a pdf file. However, I've tried to render a simple graph using math symbols in addition to text (in latex). The math symbol gets encoded as part of the image, rather than as text.
Here is a minimum reproducible example.
import numpy as np
import matplotlib.pyplot as plt
x_list = np.linspace(-10,10,num=128)
y = list(map(lambda x: (x**2 + x + 1), x_list))
plt.plot(y, label="$\\Psi_{example}$")
plt.legend()
plt.xticks(np.linspace(0, 128, num=8),
map(round, np.linspace(-10, 10, num=8), [0] * 8))
plt.savefig("./example.pdf")
Which produces the following image.
When saving this image as vector graphics, all the numbers as well as the 'example' word in the legend become selectable/searchable (i.e. rendered as text). However, the Ψ (Psi) character is not selectable/searchable.
Is there any way to make math symbols render as text in vector graphics?
|
[
"I have been able to get it to work in the way I think you want by first installing a LaTeX distribution (I used MikTex, from here) and then setting the matpotlib option to use LaTeX to render your symbols and text.\nNote that after installing MikTex, I had to open a new instance of my command prompt or code editor to make sure it was aware of the change to my PATH and where the LaTex is installed.\nI added the import and mpl.rcParams line to your example:\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n\nmpl.rcParams['text.usetex'] = True\nx_list = np.linspace(-10, 10, num=128)\n\ny = list(map(lambda x: (x**2 + x + 1), x_list))\n\nplt.plot(y, label=\"$\\\\Psi_{example}$\")\nplt.legend()\nplt.xticks(np.linspace(0, 128, num=8),\n map(round, np.linspace(-10, 10, num=8), [0] * 8))\n\nplt.savefig(\"./example.pdf\")\n\n\n",
"It's two different matters. Characters are represented by codes. In this case, you are not able to select some characters because the software you are using to display the rendered results does not have that Unicode defined in its fonts library. So it's treating that character as an object or an empty box(commonly called “tofu”). But the render engine that is turning your python code(or TeX file) into a PDF/SVG does understand that Unicode and that's why you can see that particular character. So much for understanding the source of the issue.\nSolution: You may use another IDE/browser if you are using that platform to see the results. Chrome usually supports most Unicodes. Except for those that are defined very recently.\nMoreover, Ψ (Psi) is a Greek letter. Check if your Operating System does have Greek letters installed in its fonts library. If it doesn't, go to The Unicode Consortium website and search \"Display Problems\" it will come up with a page explaining how to install a font depending on your OS or browser.\n"
] |
[
2,
2
] |
[] |
[] |
[
"latex",
"python",
"symbols",
"unicode",
"vector_graphics"
] |
stackoverflow_0074477713_latex_python_symbols_unicode_vector_graphics.txt
|
Q:
PyTorch RuntimeError: "nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Int'
Trying to implement a TGCN model on my local machine (entire code link) and I get the above runtime error related to PyTorch. This issue doesn't happen when I implement the same code on Google Colab. What's the issue here and how do I fix it? Thanks
Here's the traceback error:
Traceback (most recent call last):
File "C:\signlanguage\code\TGCN\train_tgcn.py", line 123, in <module>
run(split_file=split_file, configs=configs, pose_data_root=pose_data_root)
File "C:\signlanguage\code\TGCN\train_tgcn.py", line 64, in run
train_losses, train_scores, train_gts, train_preds = train(log_interval, model,
File "C:\signlanguage\code\TGCN\train_utils.py", line 27, in train
loss = compute_loss(out, y)
File "C:\signlanguage\code\TGCN\train_utils.py", line 146, in compute_loss
ce_loss = F.cross_entropy(out, gt)
File "C:\Users\user\anaconda3\lib\site-packages\torch\nn\functional.py", line 3026, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: "nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Int'
Snippet of train_utils.py file code:
def train(log_interval, model, train_loader, optimizer, epoch):
# set model as training mode
losses = []
scores = []
train_labels = []
train_preds = []
N_count = 0 # counting total trained sample in one epoch
for batch_idx, data in enumerate(train_loader):
X, y, video_ids = data
# distribute data to device
X, y = X.cuda(), y.cuda().view(-1, )
N_count += X.size(0)
optimizer.zero_grad()
out = model(X) # output has dim = (batch, number of classes)
loss = compute_loss(out, y)
# loss = F.cross_entropy(output, y)
losses.append(loss.item())
# to compute accuracy
y_pred = torch.max(out, 1)[1] # y_pred != output
step_score = accuracy_score(y.cpu().data.squeeze().numpy(), y_pred.cpu().data.squeeze().numpy())
# collect prediction labels
train_labels.extend(y.cpu().data.squeeze().tolist())
train_preds.extend(y_pred.cpu().data.squeeze().tolist())
scores.append(step_score) # computed on CPU
loss.backward()
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=6)
#
# for p in model.parameters():
# param_norm = p.grad.data.norm(2)
# total_norm += param_norm.item() ** 2
# total_norm = total_norm ** (1. / 2)
#
# print(total_norm)
optimizer.step()
# show information
if (batch_idx + 1) % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}, Accu: {:.6f}%'.format(
epoch + 1, N_count, len(train_loader.dataset), 100. * (batch_idx + 1) / len(train_loader), loss.item(),
100 * step_score))
return losses, scores, train_labels, train_preds
def compute_loss(out, gt):
ce_loss = F.cross_entropy(out, gt)
return ce_loss
A:
You have to make sure that both your objects have the same type when you call F.cross_entropy(). You probably have two different types on the line loss = compute_loss(out, y), where y is either a float or a long tensor and out is an int.
You have to cast your out to the same type as y with something like this
out.type(torch.LongTensor)
or this for float
out.type(torch.FloatTensor)
Hope this helps.
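For what it's worth, F.cross_entropy expects class-index targets as int64 (torch.long), so in the training loop above casting the labels is often the minimal fix -- a sketch:
# cast the labels before computing the loss; class-index targets must be torch.long
X, y = X.cuda(), y.cuda().view(-1).long()
loss = compute_loss(out, y)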
|
PyTorch RuntimeError: "nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Int'
|
Trying to implement a TGCN model on my local machine (entire code link) and I get the above runtime error related to PyTorch. This issue doesn't happen when I implement the same code on Google Colab. What's the issue here and how do I fix it? Thanks
Here's the traceback error:
Traceback (most recent call last):
File "C:\signlanguage\code\TGCN\train_tgcn.py", line 123, in <module>
run(split_file=split_file, configs=configs, pose_data_root=pose_data_root)
File "C:\signlanguage\code\TGCN\train_tgcn.py", line 64, in run
train_losses, train_scores, train_gts, train_preds = train(log_interval, model,
File "C:\signlanguage\code\TGCN\train_utils.py", line 27, in train
loss = compute_loss(out, y)
File "C:\signlanguage\code\TGCN\train_utils.py", line 146, in compute_loss
ce_loss = F.cross_entropy(out, gt)
File "C:\Users\user\anaconda3\lib\site-packages\torch\nn\functional.py", line 3026, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: "nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Int'
Snippet of train_utils.py file code:
def train(log_interval, model, train_loader, optimizer, epoch):
# set model as training mode
losses = []
scores = []
train_labels = []
train_preds = []
N_count = 0 # counting total trained sample in one epoch
for batch_idx, data in enumerate(train_loader):
X, y, video_ids = data
# distribute data to device
X, y = X.cuda(), y.cuda().view(-1, )
N_count += X.size(0)
optimizer.zero_grad()
out = model(X) # output has dim = (batch, number of classes)
loss = compute_loss(out, y)
# loss = F.cross_entropy(output, y)
losses.append(loss.item())
# to compute accuracy
y_pred = torch.max(out, 1)[1] # y_pred != output
step_score = accuracy_score(y.cpu().data.squeeze().numpy(), y_pred.cpu().data.squeeze().numpy())
# collect prediction labels
train_labels.extend(y.cpu().data.squeeze().tolist())
train_preds.extend(y_pred.cpu().data.squeeze().tolist())
scores.append(step_score) # computed on CPU
loss.backward()
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=6)
#
# for p in model.parameters():
# param_norm = p.grad.data.norm(2)
# total_norm += param_norm.item() ** 2
# total_norm = total_norm ** (1. / 2)
#
# print(total_norm)
optimizer.step()
# show information
if (batch_idx + 1) % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}, Accu: {:.6f}%'.format(
epoch + 1, N_count, len(train_loader.dataset), 100. * (batch_idx + 1) / len(train_loader), loss.item(),
100 * step_score))
return losses, scores, train_labels, train_preds
def compute_loss(out, gt):
ce_loss = F.cross_entropy(out, gt)
return ce_loss
|
[
"You have to make sure that both your objects have the same type when you call F.cross_entropy(). You probably have two different type on this line loss = compute_loss(out, y), where y is either float or long type tensor and out is int.\nYou have to cast your out to the same type as y with something like this\nout.type(torch.LongTensor)\n\nor this for float\nout.type(torch.FloatTensor)\n\nHope this helps.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"pytorch"
] |
stackoverflow_0074534315_python_pytorch.txt
|
Q:
Copy a specific cell for a column
I'm working with a dataframe that has duplicate values, and I want to copy a cell and put it in the correct column.
I'm a beginner with Python (and new to programming) and I don't know how to solve this.
Example
Process
type
value
PVR
Hono
Civel
123456
PVR
$5
$0
$0
$0
123456
Civel
$17
$0
$0
$0
123456
Hono
$2
$0
$0
$0
145
Civel
$457
$0
$0
$0
8547
Civel
$47
$0
$0
$0
8547
PVR
$88
$0
$0
$0
3333
PVR
$74
$0
$0
$0
Output:
Process | type  | value | PVR | Hono | Civel
123456  | PVR   | $5    | $5  | $17  | $2
123456  | Civel | $17   | $5  | $17  | $2
123456  | Hono  | $2    | $5  | $17  | $2
145     | Civel | $457  | $0  | $0   | $457
8547    | Civel | $47   | $88 | $0   | $47
8547    | PVR   | $88   | $88 | $0   | $47
3333    | PVR   | $74   | $74 | $0   | $0
A:
I can't use pivot now because column "Process" has duplicate values. -- I think you misunderstand the pivot process, because this is exactly what a pivot is set up to handle. This seems to do what you want in one line:
import pandas as pd
data = [
[123456,"PVR","$5"],
[123456,"Civel","$17"],
[123456,"Hono","$2"],
[145,"Civel","$457"],
[8547,"Civel","$47"],
[8547,"PVR","$88"],
[3333,"PVR","$74"]
]
df = pd.DataFrame(data, columns=['Process','type','value'])
print(df)
df1 = df.pivot(index='Process',columns='type',values='value').fillna(0)
print(df1)
Output:
Process type value
0 123456 PVR $5
1 123456 Civel $17
2 123456 Hono $2
3 145 Civel $457
4 8547 Civel $47
5 8547 PVR $88
6 3333 PVR $74
type Civel Hono PVR
Process
145 $457 0 0
3333 0 0 $74
8547 $47 0 $88
123456 $17 $2 $5
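And if you also want to keep the original rows alongside the pivoted columns (as in the desired output), one option is to join the pivoted frame back on Process -- a sketch:
df2 = df.join(df1, on='Process')
print(df2)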
|
Copy a specific cell for a column
|
I'm working with a dataframe that has duplicate values, and I want to copy a cell and put it in the correct column.
I'm a beginner with Python (and new to programming) and I don't know how to solve this.
Example
Process | type  | value | PVR | Hono | Civel
123456  | PVR   | $5    | $0  | $0   | $0
123456  | Civel | $17   | $0  | $0   | $0
123456  | Hono  | $2    | $0  | $0   | $0
145     | Civel | $457  | $0  | $0   | $0
8547    | Civel | $47   | $0  | $0   | $0
8547    | PVR   | $88   | $0  | $0   | $0
3333    | PVR   | $74   | $0  | $0   | $0
Output:
Process | type  | value | PVR | Hono | Civel
123456  | PVR   | $5    | $5  | $17  | $2
123456  | Civel | $17   | $5  | $17  | $2
123456  | Hono  | $2    | $5  | $17  | $2
145     | Civel | $457  | $0  | $0   | $457
8547    | Civel | $47   | $88 | $0   | $47
8547    | PVR   | $88   | $88 | $0   | $47
3333    | PVR   | $74   | $74 | $0   | $0
|
[
"I can't use pivot now because column \"Process\" has duplicate values. -- I think you misunderstand the pivot process, because this is exactly what a pivot is set up to handle. This seems to do what you want in one line:\nimport pandas as pd\n\ndata = [\n [123456,\"PVR\",\"$5\"],\n [123456,\"Civel\",\"$17\"],\n [123456,\"Hono\",\"$2\"],\n [145,\"Civel\",\"$457\"],\n [8547,\"Civel\",\"$47\"],\n [8547,\"PVR\",\"$88\"],\n [3333,\"PVR\",\"$74\"]\n]\n\ndf = pd.DataFrame(data, columns=['Process','type','value'])\nprint(df)\ndf1 = df.pivot(index='Process',columns='type',values='value').fillna(0)\nprint(df1)\n\nOutput:\n Process type value\n0 123456 PVR $5\n1 123456 Civel $17\n2 123456 Hono $2\n3 145 Civel $457\n4 8547 Civel $47\n5 8547 PVR $88\n6 3333 PVR $74\ntype Civel Hono PVR\nProcess \n145 $457 0 0\n3333 0 0 $74\n8547 $47 0 $88\n123456 $17 $2 $5\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074523581_python.txt
|
Q:
Does imaplib works to download a pdf inside a mail from AppleMail?
I am trying to download the pdf from a mail received on an Apple account xy@me.com.
I have already managed to do it from a Gmail account with no trouble, but the result from imaplib.fetch seems to be empty when using RFC822 on a "@me.com" account; I get:
b'34878 ()'
When I use BODY instead, I get info about the structure of the body, but I don't know how to access it.
I cannot check whether it is a multipart email, nor use .walk(), .get("Content-Disposition"), or .get_payload(decode=True).
import imaplib
import email
username = 'username'
password = 'password'
hostname = 'imap.mail.me.com'
port = '993'
expeditor= 'expeditor'
subj = 'subject of the mail'
since="01-Oct-2022"
# Connect to the server
connection = imaplib.IMAP4_SSL(hostname,port)
# Login to my account
connection.login(username, password)
status, messages = connection.select("INBOX")
# total number of emails
messages_number = int(messages[0])
print(f'{messages_number=}')
# Search for specific emails
typ_search, msg_ids_search = connection.search(None, 'SINCE', "{since}",'FROM', f'"{expeditor}"', 'SUBJECT', f'"{subj}"')
print(f'{msg_ids_search[0]=}')
# Select only the first message to test
msg_to_test=emails_ids=[elt.decode() for elt in msg_ids_search[0].split()][0]
print(f'{msg_to_test=}')
# Fetch the message to test
res, msg = connection.fetch(msg_to_test, ("BODY"))
print(f'{msg=}')
This is what I get:
messages_number=35780
msg_ids_search[0]=b'34878 34879'
msg_to_test='34878'
msg=[b'34878 (BODY ((("text" "html" ("CHARSET" "utf-8") NIL NIL "quoted-printable" 4128 74)("image" "jpeg" ("NAME" "img1") "<img1>" NIL "base64" 129910) "related")("application" "pdf" ("NAME" "0304217.pdf") NIL NIL "base64" 433256) "mixed"))']
How can I access the “application” part to download “0304217.pdf”?
Thank you so much for your help!
A:
As I didn't get any answer to my question, I have been searching for a solution for the past few days.
I managed to do it by using MailBox instead of imaplib.
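For anyone landing here, a minimal sketch of that approach with the imap-tools package (pip install imap-tools); the variables come from the question, and the API usage is from memory, so treat it as a starting point rather than a definitive implementation:
import datetime
from imap_tools import MailBox, AND

with MailBox('imap.mail.me.com').login(username, password, 'INBOX') as mailbox:
    for msg in mailbox.fetch(AND(from_=expeditor, subject=subj,
                                 date_gte=datetime.date(2022, 10, 1))):
        for att in msg.attachments:
            if att.filename.lower().endswith('.pdf'):
                # save each PDF attachment to the working directory
                with open(att.filename, 'wb') as f:
                    f.write(att.payload)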
|
Does imaplib works to download a pdf inside a mail from AppleMail?
|
I am trying to download the pdf from a mail received on an Apple account xy@me.com.
I have already managed to do it from a Gmail account with no trouble, but the result from imaplib.fetch seems to be empty when using RFC822 on a "@me.com" account; I get:
b'34878 ()'
When I use BODY instead, I get info about the structure of the body, but I don't know how to access it.
I cannot check whether it is a multipart email, nor use .walk(), .get("Content-Disposition"), or .get_payload(decode=True).
import imaplib
import email
username = 'username'
password = 'password'
hostname = 'imap.mail.me.com'
port = '993'
expeditor= 'expeditor'
subj = 'subject of the mail'
since="01-Oct-2022"
# Connect to the server
connection = imaplib.IMAP4_SSL(hostname,port)
# Login to my account
connection.login(username, password)
status, messages = connection.select("INBOX")
# total number of emails
messages_number = int(messages[0])
print(f'{messages_number=}')
# Search for specific emails
typ_search, msg_ids_search = connection.search(None, 'SINCE', "{since}",'FROM', f'"{expeditor}"', 'SUBJECT', f'"{subj}"')
print(f'{msg_ids_search[0]=}')
# Select only the first message to test
msg_to_test=emails_ids=[elt.decode() for elt in msg_ids_search[0].split()][0]
print(f'{msg_to_test=}')
# Fetch the message to test
res, msg = connection.fetch(msg_to_test, ("BODY"))
print(f'{msg=}')
This is what I get:
messages_number=35780
msg_ids_search[0]=b'34878 34879'
msg_to_test='34878'
msg=[b'34878 (BODY ((("text" "html" ("CHARSET" "utf-8") NIL NIL "quoted-printable" 4128 74)("image" "jpeg" ("NAME" "img1") "<img1>" NIL "base64" 129910) "related")("application" "pdf" ("NAME" "0304217.pdf") NIL NIL "base64" 433256) "mixed"))']
How can I access the “application” part to download “0304217.pdf”?
Thank you so much for your help!
|
[
"As I didn't get any answer to my question, I have been searching for a solution for the past few days.\nI managed to do it by using MailBox instead of imaplib.\n"
] |
[
1
] |
[] |
[] |
[
"apple_mail",
"fetch",
"imaplib",
"python"
] |
stackoverflow_0074421762_apple_mail_fetch_imaplib_python.txt
|
Q:
How to put the input() in the middle of the sentence (python)?
I would like to get the input() in the middle of the sentence.
I have tried:
print('I have ${} dollars'.format(float(input())))
and get this:
45 #asked before showing the print
I have $45 dollars
instead of this:
I have $| dollars #(The "|" means the place where the input() is requested)
Is that possible?
A:
This is as close as you're going to get to what you want:
prefix = "I have $"
dollars = input(prefix)
ncols = len(dollars) + len(prefix) + 1
print(f"\033[F\033[{ncols}G dollars")
\033[F go to the (start of the) previous line
\033[{ncols}G move to column ncols (just past what was typed). Note that this relies on ANSI escape sequences, so it only works in terminals that support them.
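On Windows, the colorama package can enable ANSI handling first -- a sketch:
import colorama
colorama.init()  # enables ANSI escape processing on older Windows consoles

prefix = "I have $"
dollars = input(prefix)
print(f"\033[F\033[{len(prefix) + len(dollars) + 1}G dollars")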
|
How to put the input() in the middle of the sentence (python)?
|
I would like to get the input() in the middle of the sentence.
I have tried:
print('I have ${} dollars'.format(float(input())))
and get this:
45 #asked before showing the print
I have $45 dollars
instead of this:
I have $| dollars #(The "|" means the place where the input() is requested)
Is that possible?
|
[
"This is as close as you're going to get to what you want:\nprefix = \"I have $\"\ndollars = input(prefix)\nncols = len(dollars) + len(prefix) + 1\nprint(f\"\\033[F\\033[{ncols}G dollars\")\n\n\n\\033[F go to the (start of the) previous line\n\\03[{ncols}G move along ncols:\n\n"
] |
[
1
] |
[] |
[] |
[
"input",
"python"
] |
stackoverflow_0074536699_input_python.txt
|
Q:
Which special methods bypasses __getattribute__ in Python?
In addition to bypassing any instance attributes in the interest of correctness, implicit special method lookup generally also bypasses the __getattribute__() method even of the object’s metaclass.
The docs mention special methods such as __hash__, __repr__ and __len__, and I know from experience it also includes __iter__ for Python 2.7.
To quote an answer to a related question:
"Magic __methods__() are treated specially: They are internally assigned to "slots" in the type data structure to speed up their look-up, and they are only looked up in these slots."
In a quest to improve my answer to another question, I need to know: Which methods, specifically, are we talking about?
A:
You can find an answer in the python3 documentation for object.__getattribute__, which states:
Called unconditionally to implement attribute accesses for instances of the class. If the class also defines __getattr__(), the
latter will not be called unless __getattribute__() either calls it
explicitly or raises an AttributeError. This method should return the
(computed) attribute value or raise an AttributeError exception. In
order to avoid infinite recursion in this method, its implementation
should always call the base class method with the same name to access
any attributes it needs, for example, object.__getattribute__(self,
name).
Note
This method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in
functions. See Special method lookup.
Also, this page explains exactly how this "machinery" works. Fundamentally, __getattribute__ is called only when you access an attribute with the . (dot) operator (and also by hasattr, as Zagorulkin pointed out).
Note that the page does not specify which special methods are implicitly looked up, so I deem that this holds for all of them (which you may find here).
A:
Checked in 2.7.9
Couldn't find any way to bypass the call to __getattribute__, with any of the magical methods that are found on object or type:
# Preparation step: did this from the console
# magics = set(dir(object) + dir(type))
# got 38 names, for each of the names, wrote a.<that_name> to a file
# Ended up with this:
a.__module__
a.__base__
#...
Put this at the beginning of that file, which I renamed into a proper Python module (asdf.py)
global_counter = 0
class Counter(object):
def __getattribute__(self, name):
# this will count how many times the method was called
global global_counter
global_counter += 1
return super(Counter, self).__getattribute__(name)
a = Counter()
# after this comes the list of 38 attribute accessess
a.__module__
#...
a.__repr__
#...
print global_counter # you're not gonna like it... it printed 38
Then I also tried to get each of those names with getattr and hasattr -> same result. __getattribute__ was called every time.
So if anyone has other ideas... I was too lazy to look inside the C code for this, but I'm sure the answer lies somewhere there.
So either there's something that I'm not getting right, or the docs are lying.
A:
super().method will also bypass __getattribute__. This atrocious code will run just fine (Python 3.11).
class Base:
def print(self):
print("whatever")
def __getattribute__(self, item):
raise Exception("Don't access this with a dot!")
class Sub(Base):
def __init__(self):
super().print()
a = Sub()
# prints 'whatever'
a.print()
# Exception Don't access this with a dot!
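To round this out, a minimal demo of the implicit-vs-explicit distinction the docs describe (Python 3): implicit invocation via len() skips __getattribute__, while dotted access goes through it.
class C:
    def __getattribute__(self, name):
        print('getattribute:', name)
        return super().__getattribute__(name)

    def __len__(self):
        return 3

c = C()
len(c)        # implicit special method lookup: prints nothing extra
c.__len__()   # explicit attribute access: prints "getattribute: __len__"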
|
Which special methods bypasses __getattribute__ in Python?
|
In addition to bypassing any instance attributes in the interest of correctness, implicit special method lookup generally also bypasses the __getattribute__() method even of the object’s metaclass.
The docs mention special methods such as __hash__, __repr__ and __len__, and I know from experience it also includes __iter__ for Python 2.7.
To quote an answer to a related question:
"Magic __methods__() are treated specially: They are internally assigned to "slots" in the type data structure to speed up their look-up, and they are only looked up in these slots."
In a quest to improve my answer to another question, I need to know: Which methods, specifically, are we talking about?
|
[
"You can find an answer in the python3 documentation for object.__getattribute__, which states:\n\nCalled unconditionally to implement attribute accesses for instances of the class. If the class also defines __getattr__(), the\n latter will not be called unless __getattribute__() either calls it\n explicitly or raises an AttributeError. This method should return the\n (computed) attribute value or raise an AttributeError exception. In\n order to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to access\n any attributes it needs, for example, object.__getattribute__(self,\n name).\nNote\nThis method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in\n functions. See Special method lookup.\n\nalso this page explains exactly how this \"machinery\" works. Fundamentally __getattribute__ is called only when you access an attribute with the .(dot) operator(and also by hasattr as Zagorulkin pointed out).\nNote that the page does not specify which special methods are implicitly looked up, so I deem that this hold for all of them(which you may find here.\n",
"Checked in 2.7.9\nCouldn't find any way to bypass the call to __getattribute__, with any of the magical methods that are found on object or type:\n# Preparation step: did this from the console\n# magics = set(dir(object) + dir(type))\n# got 38 names, for each of the names, wrote a.<that_name> to a file\n# Ended up with this:\n\na.__module__\na.__base__\n#...\n\nPut this at the beginning of that file, which i renamed into a proper python module (asdf.py)\nglobal_counter = 0\n\nclass Counter(object):\n def __getattribute__(self, name):\n # this will count how many times the method was called\n global global_counter\n global_counter += 1\n return super(Counter, self).__getattribute__(name)\n\na = Counter()\n# after this comes the list of 38 attribute accessess\na.__module__\n#...\na.__repr__\n#...\n\nprint global_counter # you're not gonna like it... it printer 38\n\nThen i also tried to get each of those names by getattr and hasattr -> same result. __getattribute__ was called every time.\nSo if anyone has other ideas... I was too lazy to look inside C code for this, but I'm sure the answer lies somewhere there.\nSo either there's something that i'm not getting right, or the docs are lying.\n",
"super().method will also bypass __getattribute__. This atrocious code will run just fine (Python 3.11).\nclass Base:\n def print(self):\n print(\"whatever\")\n\n def __getattribute__(self, item):\n raise Exception(\"Don't access this with a dot!\")\n\n\nclass Sub(Base):\n def __init__(self):\n super().print()\n\na = Sub()\n# prints 'whatever'\na.print()\n# Exception Don't access this with a dot!\n\n"
] |
[
4,
2,
0
] |
[] |
[] |
[
"magic_methods",
"python"
] |
stackoverflow_0012872695_magic_methods_python.txt
|
Q:
Insert or update if primary key exists into postgreSQL table with .to_sql()
I have a pandas DataFrame that consists of multiple columns that I want to store into the postgreSQL database, using .to_sql():
my_table.to_sql('table', con=engine, schema='wrhouse', if_exists='append', index=False)
I have set a primary key (date), in order to avoid duplicate entries. So above-mentioned command works when my primary key does not exist in the database.
However, if that key exists I am getting the following error:
IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "table_pkey"
DETAIL: Key (date)=(2022-07-01 00:00:00) already exists.
Now, what I would like to do is:
Update the row with the already existed Key(date)
Insert a new row in case the Key(date) does not exist
I checked the documentation on: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html but I couldn't find any option by using the DataFrame.to_sql() function.
Additionally, if I change the if_exists='append' parameter to if_exists='replace', it deletes the whole table and that is not what I want.
Is there any way to update/insert rows using the .to_sql() function?
A:
you could convert the my_table dataframe (which holds new values you'd like to send to the table in the database) to a numpy record array and add it to the query used in the execute function in your comment ^:
values = str(list(my_table.to_records(index=False)))[1:-1]
conn.execute(f"INSERT INTO wrschema.table (date, first_hour, last_hour, quantity) VALUES {values} ON CONFLICT (date) DO UPDATE SET first_hour = EXCLUDED.first_hour, last_hour = EXCLUDED.last_hour, quantity = EXCLUDED.quantity;")
(this is something that worked for me, hope it helps!)
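If you would rather stay inside .to_sql(), here is a hedged sketch using pandas' method= hook together with SQLAlchemy's PostgreSQL insert — the table, schema and column names simply mirror the question and are assumptions:
from sqlalchemy.dialects.postgresql import insert

def upsert_on_date(table, conn, keys, data_iter):
    # table.table is the underlying SQLAlchemy Table object pandas built
    stmt = insert(table.table).values([dict(zip(keys, row)) for row in data_iter])
    # update every column except the conflict key
    update_cols = {c.name: c for c in stmt.excluded if c.name != 'date'}
    conn.execute(stmt.on_conflict_do_update(index_elements=['date'],
                                            set_=update_cols))

my_table.to_sql('table', con=engine, schema='wrhouse',
                if_exists='append', index=False, method=upsert_on_date)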
|
Insert or update if primary key exists into postgreSQL table with .to_sql()
|
I have a pandas DataFrame that consists of multiple columns that I want to store into the postgreSQL database, using .to_sql():
my_table.to_sql('table', con=engine, schema='wrhouse', if_exists='append', index=False)
I have set a primary key (date), in order to avoid duplicate entries. So above-mentioned command works when my primary key does not exist in the database.
However, if that key exists I am getting the following error:
IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "table_pkey"
DETAIL: Key (date)=(2022-07-01 00:00:00) already exists.
Now, what I would like to do is:
Update the row with the already existed Key(date)
Insert a new row in case the Key(date) does not exist
I checked the documentation on: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html but I couldn't find any option by using the DataFrame.to_sql() function.
Additionally, if I change the if_exists='append' parameter to if_exists='replace', it deletes the whole table and that is not what I want.
Is there any way to update/insert rows using the .to_sql() function?
|
[
"you could convert the my_table dataframe (which holds new values you'd like to send to the table in the database) to a numpy record array and add it to the query used in the execute function in your comment ^:\nvalues = str(list(my_table.to_records(index=False)))[1:-1]\nconn.execute(f\"INSERT INTO wrschema.table (date, first_hour, last_hour, quantity) VALUES {values} ON CONFLICT (date) DO UPDATE SET first_hour = EXCLUDED.first_hour, last_hour = EXCLUDED.last_hour, quantity = EXCLUDED.quantity;\")\n(this is something that worked for me, hope it helps!)\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"pandas_to_sql",
"postgresql",
"python",
"upsert"
] |
stackoverflow_0074432724_pandas_pandas_to_sql_postgresql_python_upsert.txt
|
Q:
Python 3.10.6 - Find the element-wise maxima of overlaid xarray datasets
I'm working with gridded data, specifically netcdf data, trying to find the maximum grid point value for overlaying pixels of each netcdf file in a directory, ignoring null values. If you're familiar with ArcGIS, this is the same as running the maximum function through Cell Statistics. However, when doing this through xarray, I keep getting errors saying the function I'm calling doesn't exist or that the data doesn't have the attribute I assigned it.
I found a module in xarray found here, and used the example in the docs which calls np.fmax, but I couldn't get it to work. I also tried xr.fmax, and xr.ufuncs.fmax, but they don't seem to exist.
The goal is to iterate through 10 years of data, saving the maximum of weekly batches, then of the new weekly dataset, find the maximum, average, and count of non-nan values. So if anyone knows of any other modules that can run said statistics, it would be much appreciated
Below is an example of how I've been setting things up:
import numpy as np
import xarray as xr
import rioxarray as rio
import os
# Directory that stores all the netcdf files
filepath = 'weekBatch/'
# Putting each file into its own xarray dataset
tot_files = 0
for filename in os.listdir(filepath):
tot_files += 1
ds = xr.open_dataset(filepath + filename)
# MESH is the value I'm trying to do math on
exec('ds_ge_0_' + str(tot_files) + ' = ds.where(ds.MESH >= 0.0)')
exec('ds_ge_15_' + str(tot_files) + ' = ds.where(ds.MESH >= 15.0)')
Here's an example of what a single dataset looks like:
>>> print(ds_ge_0_1.MESH)
<xarray.DataArray 'MESH' (latitude: 3501, longitude: 7001)>
array([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]])
Coordinates:
* longitude (longitude) float64 -130.0 -130.0 -130.0 ... -60.02 -60.01 -59.99
* latitude (latitude) float64 55.01 54.99 54.98 54.97 ... 20.02 20.01 20.0
I then try to run the following block...
for i in range(tot_files - 1):
ds1 = exec('ds_ge_0_' + str(i+1))
ds2 = exec('ds_ge_0_' + str(i+2))
ds_ge_0_final = np.fmax(ds1, ds2)
and get the following error:
Traceback (most recent call last):
File "D:\xarray_stuff\DataProcess.py", line 287, in <module>
ds_ge_0_final = np.fmax(ds1, ds2)
TypeError: '>=' not supported between instances of 'NoneType' and 'NoneType'
I used np.fmax, because I found a module in xarray found here, but couldn't get it to work. I also tried xr.fmax, and xr.ufuncs.fmax, but they don't seem to exist.
A:
Here is the safe way to handle this data:
import numpy as np
import xarray as xr
import os
# Directory that stores all the netcdf files
filepath = 'weekBatch/'
# Putting each file into its own xarray dataset
ds_ge_0 = []
ds_ge_15 = []
for filename in os.listdir(filepath):
ds = xr.open_dataset(filepath + filename)
ds_ge_0.append( ds.where(ds.MESH >= 0.0) )
ds_ge_15.append( ds.where(ds.MESH >= 15.0) )
print(ds_ge_0[0].MESH)
# finds the element-wise max across all the datasets in ds_ge_0
# (accumulate the running max; a pairwise fmax of only ds[i] and ds[i+1]
# would keep just the max of the last two files)
ds_ge_0_final = ds_ge_0[0]
for ds_next in ds_ge_0[1:]:
    ds_ge_0_final = np.fmax(ds_ge_0_final, ds_next)

# finds the element-wise max across all the datasets in ds_ge_15
ds_ge_15_final = ds_ge_15[0]
for ds_next in ds_ge_15[1:]:
    ds_ge_15_final = np.fmax(ds_ge_15_final, ds_next)
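As an xarray-native alternative — a hedged sketch assuming every file shares the same latitude/longitude grid — stack the per-file datasets along a new dimension and reduce once; this also yields the mean and non-nan count the question asks for:
# Stack along a new 'file' dimension, then reduce with NaNs ignored.
stacked = xr.concat(ds_ge_0, dim='file')

max_mesh  = stacked.MESH.max(dim='file', skipna=True)
mean_mesh = stacked.MESH.mean(dim='file', skipna=True)
n_valid   = stacked.MESH.count(dim='file')  # per-pixel count of non-nan values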
|
Python 3.10.6 - Find the element-wise maxima of overlaid xarray datasets
|
I'm working with gridded data, specifically netcdf data, trying to find the maximum grid point value for overlaying pixels of each netcdf file in a directory, ignoring null values. If you're familiar with ArcGIS, this is the same as running the maximum function through Cell Statistics. However, when doing this through xarray, I keep getting errors saying the function I'm calling doesn't exist or that the data doesn't have the attribute I assigned it.
I found a module in xarray found here, and used the example in the docs which calls np.fmax, but I couldn't get it to work. I also tried xr.fmax, and xr.ufuncs.fmax, but they don't seem to exist.
The goal is to iterate through 10 years of data, saving the maximum of weekly batches, then of the new weekly dataset, find the maximum, average, and count of non-nan values. So if anyone knows of any other modules that can run said statistics, it would be much appreciated
Below is an example of how I've been setting things up:
import numpy as np
import xarray as xr
import rioxarray as rio
import os
# Directory that stores all the netcdf files
filepath = 'weekBatch/'
# Putting each file into its own xarray dataset
tot_files = 0
for filename in os.listdir(filepath):
tot_files += 1
ds = xr.open_dataset(filepath + filename)
# MESH is the value I'm trying to do math on
exec('ds_ge_0_' + str(tot_files) + ' = ds.where(ds.MESH >= 0.0)')
exec('ds_ge_15_' + str(tot_files) + ' = ds.where(ds.MESH >= 15.0)')
Here's an example of what a single dataset looks like:
>>> print(ds_ge_0_1.MESH)
<xarray.DataArray 'MESH' (latitude: 3501, longitude: 7001)>
array([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]])
Coordinates:
* longitude (longitude) float64 -130.0 -130.0 -130.0 ... -60.02 -60.01 -59.99
* latitude (latitude) float64 55.01 54.99 54.98 54.97 ... 20.02 20.01 20.0
I then try to run the following block...
for i in range(tot_files - 1):
ds1 = exec('ds_ge_0_' + str(i+1))
ds2 = exec('ds_ge_0_' + str(i+2))
ds_ge_0_final = np.fmax(ds1, ds2)
and get the following error:
Traceback (most recent call last):
File "D:\xarray_stuff\DataProcess.py", line 287, in <module>
ds_ge_0_final = np.fmax(ds1, ds2)
TypeError: '>=' not supported between instances of 'NoneType' and 'NoneType'
I used np.fmax, because I found a module in xarray found here, but couldn't get it to work. I also tried xr.fmax, and xr.ufuncs.fmax, but they don't seem to exist.
|
[
"Here is the safe way to handle this data:\nimport numpy as np\nimport os\n\n\n# Directory that stores all the netcdf files\nfilepath = 'weekBatch/'\n\n\n# Putting each file into its own xarray dataset\nds_ge_0 = []\nds_ge_15 = []\nfor filename in os.listdir(filepath):\n ds = xr.open_dataset(filepath + filename)\n\n ds_ge_0.append( ds.where(ds.MESH >= 0.0) )\n ds_ge_15.append( ds.where(ds.MESH >= 15.0) )\n\nprint(ds_ge_0[0].MESH)\n\n# finds the max of all the datasets in ds_ge_0\nfor i in range(len(ds_ge_0) - 1):\n ds1 = ds_ge_0[i]\n ds2 = ds_ge_0[i+1]\n ds_ge_0_final = np.fmax(ds1, ds2)\n\n# finds the max of all the datasets in ds_ge_15\nfor i in range(len(ds_ge_15) - 1):\n ds1 = ds_ge_15[i]\n ds2 = ds_ge_15[i+1]\n ds_ge_15_final = np.fmax(ds1, ds2)\n\n"
] |
[
1
] |
[] |
[] |
[
"netcdf",
"python",
"python_xarray"
] |
stackoverflow_0074537405_netcdf_python_python_xarray.txt
|
Q:
Conway's Game of Life Gliders break after a few generations
So I tried creating Conway's Game of Life in Python with pygame. I made this without watching any tutorials, which is probably why it is so broken. It seems to be working fine, but when I create a glider it seems to just break after a few generations. I looked at some other posts about my problem and applied their solutions, but that didn't make it work either. I know this is a lot to ask, but can someone at least identify the problem?
Here is my code. I expected the glider to function as it is supposed to, but it ended up breaking after a few generations.
Code:
main.py:
from utils import *
from grid import Grid
running = True
t = Grid(30)
while running:
pygame.display.set_caption(f'Conways Game of Life <Gen {t.generations}>')
clock.tick(200)
screen.fill(background_colour)
if not t.started:
t.EditMode()
else:
t.Update()
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
pygame.display.flip()
grid.py:
import cell
from utils import *
class Grid:
def __init__(self, size):
self.cells = []
self.cellSize = size
self.generations = 0
self.tick = 1
self.started = False
self.GenerateGrid()
def GenerateGrid(self):
x, y = 0, 0
while y < screen.get_height():
while x < screen.get_width():
c = cell.Cell(self, (x,y), self.cellSize)
self.cells.append(c)
x+=self.cellSize
x = 0
y+=self.cellSize
def EditMode(self):
self.Draw()
if self.started:
return
for cell in self.cells:
if pygame.mouse.get_pressed()[0]:
if cell.rect.collidepoint(pygame.mouse.get_pos()):
cell.state = 1
if pygame.mouse.get_pressed()[2]:
if cell.rect.collidepoint(pygame.mouse.get_pos()):
cell.state = 0
keys = pygame.key.get_pressed()
if keys[pygame.K_RETURN]:
self.started = True
def Draw(self):
for cell in self.cells:
cell.Draw()
def Update(self):
self.Draw()
self.tick -= 0.05
if self.tick < 0:
for cell in self.cells:
cell.UpdateState()
for cell in self.cells:
cell.state = cell.nextState
self.tick = 1
self.generations+=1
cell.py
from utils import *
class Cell:
def __init__(self, grid, position:tuple, size):
self.grid = grid
self.size = size
self.position = pygame.Vector2(position[0], position[1])
self.rect = pygame.Rect(self.position.x, self.position.y, self.size, self.size)
self.state = 0
self.nextState = self.state
def Draw(self):
pygame.draw.rect(screen, (0,0,0), self.rect)
if self.state == 0:
pygame.draw.rect(screen, (23,23,23), (self.position.x+4, self.position.y+4, self.size-4, self.size-4))
else:
pygame.draw.rect(screen, (255,255,255), (self.position.x+4, self.position.y+4, self.size-4, self.size-4))
def UpdateState(self):
rect = pygame.Rect(self.position.x-self.size, self.position.y-self.size, self.size*3, self.size*3)
pygame.draw.rect(screen, (0,0,0), rect)
targetCells = []
for c in self.grid.cells:
if rect.colliderect(c.rect):
targetCells.append(c)
livingAmt = 0
for c in targetCells:
if c.rect.x == self.rect.x and c.rect.y == self.rect.y:
continue
if c.state == 1:
livingAmt+=1
if self.state == 1:
if livingAmt > 3 or livingAmt <2:
self.nextState = 0
if self.state ==0:
if livingAmt == 3:
self.nextState =1
utils.py
import pygame
background_colour = (23, 23, 23)
screen = pygame.display.set_mode((900, 900))
clock = pygame.time.Clock()
running = True
A:
Your function UpdateState both counts a cell's neighbors and updates the cell's state. Since you call that function in a loop, both are done together, which does not work, as explained here. You must split the "count" phase from the "update state" phase.
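A minimal sketch of that split, with method names mirroring the question (the CountLiveNeighbours helper is an assumption); note it also re-seeds nextState from state first, so a value left over from edit mode can never leak into the new generation:
def UpdateState(self):
    # phase 1: read-only -- count neighbours against the *current* states
    self.nextState = self.state            # re-seed so stale values never leak
    living = self.CountLiveNeighbours()    # hypothetical helper: counts only, mutates nothing
    if self.state == 1 and (living < 2 or living > 3):
        self.nextState = 0
    elif self.state == 0 and living == 3:
        self.nextState = 1

# phase 2 (in Grid.Update): assign cell.state = cell.nextState for every
# cell only after *all* cells have been counted.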
|
Conway's Game of Life Gliders break after a few generations
|
So I tried creating Conway's Game of Life in Python with pygame. I made this without watching any tutorials, which is probably why it is so broken. It seems to be working fine, but when I create a glider it seems to just break after a few generations. I looked at some other posts about my problem and applied their solutions, but that didn't make it work either. I know this is a lot to ask, but can someone at least identify the problem?
Here is my code. I expected the glider to function as it is supposed to, but it ended up breaking after a few generations.
Code:
main.py:
from utils import *
from grid import Grid
running = True
t = Grid(30)
while running:
pygame.display.set_caption(f'Conways Game of Life <Gen {t.generations}>')
clock.tick(200)
screen.fill(background_colour)
if not t.started:
t.EditMode()
else:
t.Update()
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
pygame.display.flip()
grid.py:
import cell
from utils import *
class Grid:
def __init__(self, size):
self.cells = []
self.cellSize = size
self.generations = 0
self.tick = 1
self.started = False
self.GenerateGrid()
def GenerateGrid(self):
x, y = 0, 0
while y < screen.get_height():
while x < screen.get_width():
c = cell.Cell(self, (x,y), self.cellSize)
self.cells.append(c)
x+=self.cellSize
x = 0
y+=self.cellSize
def EditMode(self):
self.Draw()
if self.started:
return
for cell in self.cells:
if pygame.mouse.get_pressed()[0]:
if cell.rect.collidepoint(pygame.mouse.get_pos()):
cell.state = 1
if pygame.mouse.get_pressed()[2]:
if cell.rect.collidepoint(pygame.mouse.get_pos()):
cell.state = 0
keys = pygame.key.get_pressed()
if keys[pygame.K_RETURN]:
self.started = True
def Draw(self):
for cell in self.cells:
cell.Draw()
def Update(self):
self.Draw()
self.tick -= 0.05
if self.tick < 0:
for cell in self.cells:
cell.UpdateState()
for cell in self.cells:
cell.state = cell.nextState
self.tick = 1
self.generations+=1
cell.py
from utils import *
class Cell:
def __init__(self, grid, position:tuple, size):
self.grid = grid
self.size = size
self.position = pygame.Vector2(position[0], position[1])
self.rect = pygame.Rect(self.position.x, self.position.y, self.size, self.size)
self.state = 0
self.nextState = self.state
def Draw(self):
pygame.draw.rect(screen, (0,0,0), self.rect)
if self.state == 0:
pygame.draw.rect(screen, (23,23,23), (self.position.x+4, self.position.y+4, self.size-4, self.size-4))
else:
pygame.draw.rect(screen, (255,255,255), (self.position.x+4, self.position.y+4, self.size-4, self.size-4))
def UpdateState(self):
rect = pygame.Rect(self.position.x-self.size, self.position.y-self.size, self.size*3, self.size*3)
pygame.draw.rect(screen, (0,0,0), rect)
targetCells = []
for c in self.grid.cells:
if rect.colliderect(c.rect):
targetCells.append(c)
livingAmt = 0
for c in targetCells:
if c.rect.x == self.rect.x and c.rect.y == self.rect.y:
continue
if c.state == 1:
livingAmt+=1
if self.state == 1:
if livingAmt > 3 or livingAmt <2:
self.nextState = 0
if self.state ==0:
if livingAmt == 3:
self.nextState =1
utils.py
import pygame
background_colour = (23, 23, 23)
screen = pygame.display.set_mode((900, 900))
clock = pygame.time.Clock()
running = True
|
[
"Your function UpdateState both counts a cell's neighbors and updates the cell's state. Since you call that function in a loop, both are done together, which does not work, as explained here. You must split the \"count\" phase from the \"update state\" phase.\n"
] |
[
2
] |
[] |
[] |
[
"conways_game_of_life",
"pygame",
"python"
] |
stackoverflow_0074536773_conways_game_of_life_pygame_python.txt
|
Q:
Iterate millions of rows in pandas optimally
What I am looking for is to put for each ID the most current description (as long as it is not an empty cell; if it is empty, it should be the first non-empty description). I have sorted the DF by ID and by Date, so for each ID "group", the first description is the most current.
The problem comes when I have to take that description and replace it in the rest of the rows of the same ID. The process with a FOR loop takes me more than 30 minutes, so I need a much more efficient solution.
So far, my procedure has been:
list unique IDs
Iterate with a loop for those IDs, and with a '.loc' take out the description field:
If the most recent description is null, I put an if to catch the second description field
for id in list(df.columnaid.unique()):
    if df.loc[df.columnaid == id].description.unique()[0] != "":
        description = df.loc[df.columnaid == id].description.unique()[0]
    elif df.loc[df.columnaid == id].description.unique()[0] == "" and len(df.loc[df.columnaid == id].description.unique()) > 1:
        description = df.loc[df.columnaid == id].description.unique()[1]
Save the ID product and the description in a dictionary:
dicc[id] = dicc.get(id, description)
Then, with a.loc, an .isin and a map I replace the values obtained in the description column
This procedure works, but it's not optimal at all, and I need to know how it could be done in a better way without taking more than 30 minutes.
df.loc[df['columnaid'].isin(dicc.keys()), 'description'] = df['columnaid'].map(dicc)
An example of the DataFrame (it would be the same but with millions of rows) is:
df = pd.DataFrame({"columnaid": ["2321fdsf", "2321fdsf", "3gsdfer3", "4gdsfg44", "4gdsfg44", "4gdsfg44", "7fg45d"],
"date": ["2022-11-16","2022-10-07","2022-09-02","2021-12-04","2021-09-23","2021-03-06","2021-03-15"],
"description": ["aaa", "bbb", "abc", "eee", "", "aqwert", "yuiop"],
})
columnaid date description
0 2321fdsf 2022-11-16 aaa
1 2321fdsf 2022-10-07 bbb
2 3gsdfer3 2022-09-02 abc
3 4gdsfg44 2021-12-04 eee
4 4gdsfg44 2021-09-23
5 4gdsfg44 2021-03-06 aqwert
6 7fg45d 2021-03-15 yuiop
The outcome should be:
columnaid date description
0 2321fdsf 2022-11-16 aaa
1 2321fdsf 2022-10-07 aaa
2 3gsdfer3 2022-09-02 abc
3 4gdsfg44 2021-12-04 eee
4 4gdsfg44 2021-09-23 eee
5 4gdsfg44 2021-03-06 eee
6 7fg45d 2021-03-15 yuiop
Thank you
A:
Sure thing – use groupby:
import pandas as pd
df = pd.DataFrame(
{
"columnaid": ["2321fdsf", "2321fdsf", "3gsdfer3", "4gdsfg44", "4gdsfg44", "4gdsfg44", "7fg45d"],
"date": ["2022-11-16", "2022-10-07", "2022-09-02", "2021-12-04", "2021-09-23", "2021-03-06", "2021-03-15"],
"description": ["aaa", "bbb", "abc", "eee", "", "aqwert", "yuiop"],
}
)
# Convert date so we can `idxmax` it
df["date"] = pd.to_datetime(df["date"])
# Find newest descriptions per columnaid into an indexed series
newest_descriptions = df.groupby("columnaid").apply(lambda x: x.loc[x["date"].idxmax(), "description"])
# (Print for debugging)
print(newest_descriptions)
# Map the descriptions back into the original df
df["description"] = df["columnaid"].map(newest_descriptions)
print(df)
This prints out
columnaid
2321fdsf aaa
3gsdfer3 abc
4gdsfg44 eee
7fg45d yuiop
dtype: object
columnaid date description
0 2321fdsf 2022-11-16 aaa
1 2321fdsf 2022-10-07 aaa
2 3gsdfer3 2022-09-02 abc
3 4gdsfg44 2021-12-04 eee
4 4gdsfg44 2021-09-23 eee
5 4gdsfg44 2021-03-06 eee
6 7fg45d 2021-03-15 yuiop
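One hedged caveat: if the newest row for an ID can itself carry an empty description, drop those rows before taking idxmax, e.g.:
# Keep only rows with a non-empty description before picking the newest one.
non_empty = df[df["description"] != ""]
newest_descriptions = non_empty.groupby("columnaid").apply(
    lambda x: x.loc[x["date"].idxmax(), "description"])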
A:
Unless I misunderstand, or you are overthinking it, your question can be done in three simple steps.
# drop n/a
dfn = df.dropna(subset=['description']).copy()
# now you have a clean `dfn`: simply sort, then pick up the 1st observation in each group
uniq = dfn.groupby('columnaid')['description'].first().reset_index(name='first_obs')
# now you can merge uniq back to df
df = pd.merge(df, uniq, how='left', on='columnaid')
The column first_obs is your expected output
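Hedged note: in the sample data the missing descriptions are empty strings rather than NaN, so dropna alone won't remove them — convert them first (a one-line sketch):
import numpy as np

# Turn "" into real NaN so dropna(subset=['description']) actually drops those rows.
df['description'] = df['description'].replace('', np.nan)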
A:
In the Spanish version, @HeytalePazguato found a very fast and optimal solution, which takes less than a second to do what it needed.
I'll leave it here in case anyone ever needs it. All credit to him.
If you read this, @HeytalePazguato, thank you very much again.
import pandas as pd
import numpy as np
df = pd.DataFrame({"columnid": ["2321fdsf", "2321fdsf", "3gsdfer3", "4gdsfg44", "4gdsfg44", "4gdsfg44", "7fg45d", "7fg45d"],
"date": ["2022-11-16","2022-10-07","2022-09-02","2021-12-04","2021-09-23","2021-03-06", "2022-05-13","2021-03-15"],
"description": ["aaa", "bbb", "abc", "eee", "", "aqwert", "", "yuiop"],
})
df["description"] = df["description"].replace("", np.nan)
df["description"] = df.groupby(by = "columnid")['description'].transform('first')
df
The result:
columnid date description
0 2321fdsf 2022-11-16 aaa
1 2321fdsf 2022-10-07 aaa
2 3gsdfer3 2022-09-02 abc
3 4gdsfg44 2021-12-04 eee
4 4gdsfg44 2021-09-23 eee
5 4gdsfg44 2021-03-06 eee
6 7fg45d 2022-05-13 yuiop
7 7fg45d 2021-03-15 yuiop
|
Iterate millions of rows in pandas optimally
|
What I am looking for is to put for each ID the most current description (as long as it is not an empty cell; if it is empty, it should be the first non-empty description). I have sorted the DF by ID and by Date, so for each ID "group", the first description is the most current.
The problem comes when I have to take that description and replace it in the rest of the rows of the same ID. The process with a FOR loop takes me more than 30 minutes, so I need a much more efficient solution.
So far, my procedure has been:
list unique IDs
Iterate with a loop for those IDs, and with a '.loc' take out the description field:
If the most recent description is null, I put an if to catch the second description field
for id in list(df.columnaid.unique()):
    if df.loc[df.columnaid == id].description.unique()[0] != "":
        description = df.loc[df.columnaid == id].description.unique()[0]
    elif df.loc[df.columnaid == id].description.unique()[0] == "" and len(df.loc[df.columnaid == id].description.unique()) > 1:
        description = df.loc[df.columnaid == id].description.unique()[1]
Save the ID product and the description in a dictionary:
dicc[id] = dicc.get(id, description)
Then, with a.loc, an .isin and a map I replace the values obtained in the description column
This procedure works, but it's not optimal at all, and I need to know how it could be done in a better way without taking more than 30 minutes.
df.loc[df['columnaid'].isin(dicc.keys()), 'description'] = df['columnaid'].map(dicc)
An example of the DataFrame (it would be the same but with millions of rows) is:
df = pd.DataFrame({"columnaid": ["2321fdsf", "2321fdsf", "3gsdfer3", "4gdsfg44", "4gdsfg44", "4gdsfg44", "7fg45d"],
"date": ["2022-11-16","2022-10-07","2022-09-02","2021-12-04","2021-09-23","2021-03-06","2021-03-15"],
"description": ["aaa", "bbb", "abc", "eee", "", "aqwert", "yuiop"],
})
columnaid date description
0 2321fdsf 2022-11-16 aaa
1 2321fdsf 2022-10-07 bbb
2 3gsdfer3 2022-09-02 abc
3 4gdsfg44 2021-12-04 eee
4 4gdsfg44 2021-09-23
5 4gdsfg44 2021-03-06 aqwert
6 7fg45d 2021-03-15 yuiop
The outcome should be:
columnaid date description
0 2321fdsf 2022-11-16 aaa
1 2321fdsf 2022-10-07 aaa
2 3gsdfer3 2022-09-02 abc
3 4gdsfg44 2021-12-04 eee
4 4gdsfg44 2021-09-23 eee
5 4gdsfg44 2021-03-06 eee
6 7fg45d 2021-03-15 yuiop
Thank you
|
[
"Sure thing – use groupby:\nimport pandas as pd\n\ndf = pd.DataFrame(\n {\n \"columnaid\": [\"2321fdsf\", \"2321fdsf\", \"3gsdfer3\", \"4gdsfg44\", \"4gdsfg44\", \"4gdsfg44\", \"7fg45d\"],\n \"date\": [\"2022-11-16\", \"2022-10-07\", \"2022-09-02\", \"2021-12-04\", \"2021-09-23\", \"2021-03-06\", \"2021-03-15\"],\n \"description\": [\"aaa\", \"bbb\", \"abc\", \"eee\", \"\", \"aqwert\", \"yuiop\"],\n }\n)\n\n# Convert date so we can `idxmax` it\ndf[\"date\"] = pd.to_datetime(df[\"date\"])\n\n# Find newest descriptions per columnaid into an indexed series\nnewest_descriptions = df.groupby(\"columnaid\").apply(lambda x: x.loc[x[\"date\"].idxmax(), \"description\"])\n# (Print for debugging)\nprint(newest_descriptions)\n\n# Map the descriptions back into the original df\ndf[\"description\"] = df[\"columnaid\"].map(newest_descriptions)\n\nprint(df)\n\nThis prints out\ncolumnaid\n2321fdsf aaa\n3gsdfer3 abc\n4gdsfg44 eee\n7fg45d yuiop\ndtype: object\n\n columnaid date description\n0 2321fdsf 2022-11-16 aaa\n1 2321fdsf 2022-10-07 aaa\n2 3gsdfer3 2022-09-02 abc\n3 4gdsfg44 2021-12-04 eee\n4 4gdsfg44 2021-09-23 eee\n5 4gdsfg44 2021-03-06 eee\n6 7fg45d 2021-03-15 yuiop\n\n",
"Unless I misunderstand or you are overthinking. Your question can be done in three simple steps.\n#drop n/a\ndfn = df.dropna(subset=['description']).copy()\n\n#now you have a clean `dfn` simply sort then pickup the 1st observation in each group\nuniq = dfn.groupby('colummaid')['description'].first().reset_index('first_obs')\n\n#now you can merge uniq back to df\ndf = pd.merge(df, uniq, how='left', on='columnaid')\n\nThe column first_obs is your expected output\n",
"In the Spanish version, @HeytalePazguato found a very fast and optimal solution, which takes less than a second to do what it needed.\nI'll leave it here in case anyone ever needs it. All credit to him.\nIf you read this, @HeytalePazguato, thank you very much again.\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame({\"columnid\": [\"2321fdsf\", \"2321fdsf\", \"3gsdfer3\", \"4gdsfg44\", \"4gdsfg44\", \"4gdsfg44\", \"7fg45d\", \"7fg45d\"],\n \"date\": [\"2022-11-16\",\"2022-10-07\",\"2022-09-02\",\"2021-12-04\",\"2021-09-23\",\"2021-03-06\", \"2022-05-13\",\"2021-03-15\"],\n \"description\": [\"aaa\", \"bbb\", \"abc\", \"eee\", \"\", \"aqwert\", \"\", \"yuiop\"],\n })\n\ndf[\"description\"] = df[\"description\"].replace(\"\", np.nan)\n\ndf[\"description\"] = df.groupby(by = \"columnid\")['description'].transform('first')\ndf\n\nThe result:\n columnid date description\n0 2321fdsf 2022-11-16 aaa\n1 2321fdsf 2022-10-07 aaa\n2 3gsdfer3 2022-09-02 abc\n3 4gdsfg44 2021-12-04 eee\n4 4gdsfg44 2021-09-23 eee\n5 4gdsfg44 2021-03-06 eee\n6 7fg45d 2022-05-13 yuiop\n7 7fg45d 2021-03-15 yuiop\n\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"dataframe",
"for_loop",
"loops",
"pandas",
"python"
] |
stackoverflow_0074529223_dataframe_for_loop_loops_pandas_python.txt
|
Q:
Python NameError: name is not defined
I have a python script and I am receiving the following error:
Traceback (most recent call last):
File "C:\Users\Tim\Desktop\pop-erp\test.py", line 1, in <module>
s = Something()
NameError: name 'Something' is not defined
Here is the code that causes the problem:
s = Something()
s.out()
class Something:
def out():
print("it works")
This is being run with Python 3.3.0 under Windows 7 x86-64.
Why can't the Something class be found?
A:
Define the class before you use it:
class Something:
def out(self):
print("it works")
s = Something()
s.out()
You need to pass self as the first argument to all instance methods.
A:
Note that sometimes you will want to use the class type name inside its own definition, for example when using Python Typing module, e.g.
class Tree:
def __init__(self, left: Tree, right: Tree):
self.left = left
self.right = right
This will also result in
NameError: name 'Tree' is not defined
That's because the class has not been defined yet at this point.
The workaround is using so called Forward Reference, i.e. wrapping a class name in a string, i.e.
class Tree:
def __init__(self, left: 'Tree', right: 'Tree'):
self.left = left
self.right = right
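On Python 3.7+ there is also a hedged alternative: postponed evaluation of annotations (PEP 563), which lets the bare class name work without quoting:
from __future__ import annotations

class Tree:
    def __init__(self, left: Tree, right: Tree):
        self.left = left
        self.right = right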
A:
You must define the class before creating an instance of the class. Move the invocation of Something to the end of the script.
You can try to put the cart before the horse and invoke procedures before they are defined, but it will be an ugly hack and you will have to roll your own as defined here:
Make function definition in a python file order independent
A:
I got the same error below:
NameError: name 'name' is not defined
When I don't define the getter method with @property while the setter and deleter are defined as shown below:
class Person:
def __init__(self, name):
self._name = name
# @property
# def name(self):
# return self._name
@name.setter
def name(self, name):
self._name = name
@name.deleter # Here
def name(self):
del self._name
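A minimal fixed sketch: define the @property getter first, so that @name.setter and @name.deleter have a name object to attach to:
class Person:
    def __init__(self, name):
        self._name = name

    @property
    def name(self):          # the getter must exist before the setter/deleter
        return self._name

    @name.setter
    def name(self, name):
        self._name = name

    @name.deleter
    def name(self):
        del self._name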
|
Python NameError: name is not defined
|
I have a python script and I am receiving the following error:
Traceback (most recent call last):
File "C:\Users\Tim\Desktop\pop-erp\test.py", line 1, in <module>
s = Something()
NameError: name 'Something' is not defined
Here is the code that causes the problem:
s = Something()
s.out()
class Something:
def out():
print("it works")
This is being run with Python 3.3.0 under Windows 7 x86-64.
Why can't the Something class be found?
|
[
"Define the class before you use it:\nclass Something:\n def out(self):\n print(\"it works\")\n\ns = Something()\ns.out()\n\nYou need to pass self as the first argument to all instance methods.\n",
"Note that sometimes you will want to use the class type name inside its own definition, for example when using Python Typing module, e.g.\nclass Tree:\n def __init__(self, left: Tree, right: Tree):\n self.left = left\n self.right = right\n\nThis will also result in\nNameError: name 'Tree' is not defined\n\nThat's because the class has not been defined yet at this point.\nThe workaround is using so called Forward Reference, i.e. wrapping a class name in a string, i.e.\nclass Tree:\n def __init__(self, left: 'Tree', right: 'Tree'):\n self.left = left\n self.right = right\n\n",
"You must define the class before creating an instance of the class. Move the invocation of Something to the end of the script. \nYou can try to put the cart before the horse and invoke procedures before they are defined, but it will be an ugly hack and you will have to roll your own as defined here:\nMake function definition in a python file order independent\n",
"I got the same error below:\n\nNameError: name 'name' is not defined\n\nWhen I don't define the getter method with @property while the setter and deleter are defined as shown below:\nclass Person:\n def __init__(self, name):\n self._name = name\n\n # @property\n # def name(self):\n # return self._name\n\n @name.setter\n def name(self, name):\n self._name = name\n\n @name.deleter # Here\n def name(self):\n del self._name\n\n"
] |
[
106,
21,
5,
0
] |
[] |
[] |
[
"nameerror",
"python",
"python_3.x"
] |
stackoverflow_0014804084_nameerror_python_python_3.x.txt
|
Q:
How to Import files from another folder in python?
How do I import files from another folder? I have tried many things, but my attempts have not been fruitful. How can I resolve this?
-> d:
-> project Main
-> First Folder
-> my_main_file.py
-> class (My_Main_Class)
-> module1
-> module2
-> module3
-> Second Folder
-> my_second_file.py
How do I import module1 and module3 from my first file (my_main_file.py) into my_second_file.py?
My_main_file
This is my main file ('d:/project_main/main folder'). Now I want to import methods such as establish_connection_general and other things into my second file ('d:/project_main/second_folder').
from PyQt5.QtWidgets import QWidget,QMessageBox,QApplication
from PyQt5.QtGui import QIcon
import sqleet
import os
import sys
print("1234567890aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa")
file_path_general = r"d:\project makeeasy\assist"
db_name_general = (os.path.join(file_path_general,'makeeasy_general.me'))
passkey_general = "1234"
new_passkey_general = "1234"
show_errormsg = "show"
class Database_connection(QWidget):
def __init__(self):
super(). __init__()
self.establish_connection_general()
# self.cahnge_passkey()
def establish_connection_general(self):
try:
connection_general = sqleet.connect(db_name_general,key = passkey_general)
print("open scueffully")
except Exception as e:
if show_errormsg == "show":
self.handle_error(e)
else:
pass
def cahnge_passkey(self):
try:
general_connection = sqleet.connect(db_name_general,key = passkey_general)
general_connection.change_key(new_passkey_general)
print("new pass key sucessfully changed")
general_connection.close()
except Exception as e:
if show_errormsg == "show":
self.handle_error(e)
else:
pass
def handle_error(self,error):
exc_type, exc_value, exc_traceback = sys.exc_info()
filenamewithpath = exc_traceback.tb_frame.f_code.co_filename
head,tail = os.path.split((filenamewithpath))
lineno = exc_traceback.tb_lineno
name = exc_traceback.tb_frame.f_code.co_name
type = exc_type.__name__
message = exc_value
nl = '\n'
kk = f'File Name : {tail[:-3]}{nl}'\
f'Line No. : {lineno}{nl}'\
f'Type : {type}{nl}'\
f'Name : {name}{nl}'
self.msg = QMessageBox()
self.msg.setFixedSize(1600,400)
self.msg.setWindowTitle(" Error/Bugs Information")
self.msg.setWindowIcon(QIcon('icon\close006.png'))
fd = " "
self.msg.setText(f'{type} - {lineno}{fd}')
self.msg.setIcon(QMessageBox.Information)
self.msg.setStandardButtons(QMessageBox.Ok)
self.msg.setDefaultButton(QMessageBox.Ok)
self.msg.setInformativeText("")
self.msg.setDetailedText(kk)
self.msg.show()
def main():
app = QApplication(sys.argv)
ex = Database_connection()
app.setStyle("Fusion")
sys.exit(app.exec_())
if __name__ == '__main__':
main()
A:
In my_second_file.py:
from First_Folder.my_main_file import My_Main_Class

After your comment:
from First_Folder.my_main_file import ex  # ex: the object of your class

In your second file:
ex.module1()
ex.module3()
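If the two folders are plain sibling directories rather than packages, a hedged sys.path sketch (the paths and the first_folder name are assumptions — note that a directory name containing a space, like "First Folder", cannot be imported as a package, so rename it):
import os
import sys

# Make the sibling folder importable from my_second_file.py.
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)),
                             '..', 'first_folder'))

from my_main_file import Database_connection  # class name taken from the question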
|
How to Import files from another folder in python?
|
How do I import files from another folder? I have tried many things, but my attempts have not been fruitful. How can I resolve this?
-> d:
-> project Main
-> First Folder
-> my_main_file.py
-> class (My_Main_Class)
-> module1
-> module2
-> module3
-> Second Folder
-> my_second_file.py
How do I import module1 and module3 from my first file (my_main_file.py) into my_second_file.py?
My_main_file
This is my main file ('d:/project_main/main folder'). Now I want to import methods such as establish_connection_general and other things into my second file ('d:/project_main/second_folder').
from PyQt5.QtWidgets import QWidget,QMessageBox,QApplication
from PyQt5.QtGui import QIcon
import sqleet
import os
import sys
print("1234567890aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa")
file_path_general = r"d:\project makeeasy\assist"
db_name_general = (os.path.join(file_path_general,'makeeasy_general.me'))
passkey_general = "1234"
new_passkey_general = "1234"
show_errormsg = "show"
class Database_connection(QWidget):
def __init__(self):
super(). __init__()
self.establish_connection_general()
# self.cahnge_passkey()
def establish_connection_general(self):
try:
connection_general = sqleet.connect(db_name_general,key = passkey_general)
print("open scueffully")
except Exception as e:
if show_errormsg == "show":
self.handle_error(e)
else:
pass
def cahnge_passkey(self):
try:
general_connection = sqleet.connect(db_name_general,key = passkey_general)
general_connection.change_key(new_passkey_general)
print("new pass key sucessfully changed")
general_connection.close()
except Exception as e:
if show_errormsg == "show":
self.handle_error(e)
else:
pass
def handle_error(self,error):
exc_type, exc_value, exc_traceback = sys.exc_info()
filenamewithpath = exc_traceback.tb_frame.f_code.co_filename
head,tail = os.path.split((filenamewithpath))
lineno = exc_traceback.tb_lineno
name = exc_traceback.tb_frame.f_code.co_name
type = exc_type.__name__
message = exc_value
nl = '\n'
kk = f'File Name : {tail[:-3]}{nl}'\
f'Line No. : {lineno}{nl}'\
f'Type : {type}{nl}'\
f'Name : {name}{nl}'
self.msg = QMessageBox()
self.msg.setFixedSize(1600,400)
self.msg.setWindowTitle(" Error/Bugs Information")
self.msg.setWindowIcon(QIcon('icon\close006.png'))
fd = " "
self.msg.setText(f'{type} - {lineno}{fd}')
self.msg.setIcon(QMessageBox.Information)
self.msg.setStandardButtons(QMessageBox.Ok)
self.msg.setDefaultButton(QMessageBox.Ok)
self.msg.setInformativeText("")
self.msg.setDetailedText(kk)
self.msg.show()
def main():
app = QApplication(sys.argv)
ex = Database_connection()
app.setStyle("Fusion")
sys.exit(app.exec_())
if __name__ == '__main__':
main()
|
[
"in my_second_file.py\nfrom First_Folder.my_main_file import My_Main_Class\n\nAfter Your comment:\nfrom First_Folder.my_main_file import ex(your class object)\n\nIn your second file\nex.module1()\nex.module3()\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074537439_python_python_3.x.txt
|
Q:
Reverse for 'admin_update' with arguments '('',)' not found. 1 pattern(s) tried: ['admin_update/(?P<lesson_id>[^/]+)$']
I'm trying to pass arguments to edit table values.
I would be grateful if anyone could break down the solution.
views.py
`
#@login_required(login_url='/admin_login/')
def AdminManageRequests(request):
lessons = Lesson.objects.all()
return render(request,'admin/manage_requests.html',{'lessons':lessons})
def AdminUpdateRequests(request, lesson_id):
lesson = Lesson.objects.get(pk=lesson_id)
form = StudentRequestForm(request.POST or None, instance=lesson)
context = {
'lesson':lesson, 'form':form
}
return render(request, 'admin/update_requests.html',context)
`
urls.py
`
path('admin_update/<lesson_id>', views.AdminUpdateRequests, name='admin_update'),
`
manage_requests.html
`
{% extends 'admin/admin_home_base.html' %}
{% block admin_content %}
<div>
<h3 class="display-8" style="text-align:center">
Admin Lesson Request Management
</h3>
<hr class="my-4">
<p style="text-align:center">
You can view fulfilled and unfulfilled lesson requests.
</p>
<p class="lead" style="text-align:center">
{% include 'admin/partials/fulfilled_lessons.html' %}
<br>
{% include 'admin/partials/unfulfilled_lessons.html' %}
</p>
</div>
{% endblock %}
`
lessons_table_base.html
`
<div class="card">
<div class="card-header">
<h5 class="card-title">{% block card_title %}{% endblock %}</h5>
<div class="card-body table-responsive p-0">
<table class="table table-hover text-nowrap">
<thead>
<tr>
<th>Lesson ID</th>
<th>Lesson Name</th>
<th>Student</th>
<th>Teacher</th>
<th>Interval (Days)</th>
<th>Duration (Minutes)</th>
<th></th>
</tr>
</thead>
<tbody>
{% for lesson in lessons %}
{% block lessons_content %}
{% endblock %}
{% endfor %}
</tbody>
</table>
</div>
</div>
</div>
`
fulfilled_lessons.html
`
{% extends 'admin/partials/lessons_table_base.html' %}
{% load widget_tweaks %}
{% block card_title %}
Fulfilled Requests <i class="bi-send-check-fill"></i>
{% endblock %}
{% block lessons_content %}
{% if not lesson.is_request %}
<tr>
<td>{{ lesson.lesson_id }}</td>
<td>{{ lesson.lesson_name }}</td>
<td>{{ lesson.student }}</td>
<td>{{ lesson.teacher }}</td>
<td>{{ lesson.interval }}</td>
<td>{{ lesson.duration }}</td>
<td>
<!-- admin edit lesson here -->
<a href="{% url 'admin_update' lesson.id %}">update</a>
<!-- admin delete lesson here -->
<a href="#" class="nav-link" role="button" data-bs-toggle="tooltip" title="Remove lesson">
<span class="bi-dash-square"></span>
</a>
</td>
</tr>
{% endif %}
{% endblock %}
`
updates_requests.html (will add more code when the issue's resolved.)
`
{% extends 'admin/admin_home_base.html' %}
{% block admin_content %}
<div>
<h2 style="text-align:center">
Update Lesson Request
</h2>
<hr class="my-4">
<p style="text-align:center">
You can update lessons to the system.
</p>
<form action="" method=POST>
{% csrf_token %}
{{form.as_p}}
<input type="submit" value="Update" class="btn btn-secondary">
</form>
</div>
{% endblock %}
`
tracebacks
`
Internal Server Error: /admin_managerequests/
Traceback (most recent call last):
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/lessons/views.py", line 55, in AdminManageRequests
return render(request,'admin/manage_requests.html',{'lessons':lessons})
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/shortcuts.py", line 19, in render
content = loader.render_to_string(template_name, context, request, using=using)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader.py", line 62, in render_to_string
return template.render(context, request)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/backends/django.py", line 61, in render
return self.template.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 171, in render
return self._render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 150, in render
return compiled_parent._render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 150, in render
return compiled_parent._render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 62, in render
result = block.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 62, in render
result = block.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 188, in render
return template.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 173, in render
return self._render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 150, in render
return compiled_parent._render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/defaulttags.py", line 209, in render
nodelist.append(node.render_annotated(context))
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 62, in render
result = block.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/defaulttags.py", line 309, in render
return nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/defaulttags.py", line 443, in render
url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/urls/base.py", line 87, in reverse
return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/urls/resolvers.py", line 677, in _reverse_with_prefix
raise NoReverseMatch(msg)
django.urls.exceptions.NoReverseMatch: Reverse for 'admin_update' with arguments '('',)' not found. 1 pattern(s) tried: ['admin_update/(?P<lesson_id>[^/]+)$']
`
I have tried some solutions but nothing worked. Can someone help?
A:
It should be <int:lesson_id>, not just <lesson_id>; by default (when no converter is given) the value is treated as a string type, so:
path('admin_update/<int:lesson_id>/', views.AdminUpdateRequests, name='admin_update'),
I'd also recommend using get_object_or_404(), so:
lesson = get_object_or_404(Lesson,pk=lesson_id)
Note: Always add / at the end of every route.
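One more hedged observation: the '' in the error means the template passed an empty argument, i.e. lesson.id rendered as nothing. Since the model apparently uses lesson_id as its key field, pass the primary key explicitly in the template:
<!-- pass the primary key instead of the (possibly non-existent) id attribute -->
<a href="{% url 'admin_update' lesson.pk %}">update</a>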
|
Reverse for 'admin_update' with arguments '('',)' not found. 1 pattern(s) tried: ['admin_update/(?P<lesson_id>[^/]+)$']
|
I'm trying to pass arguments to edit table values.
I would be grateful if anyone could break down the solution.
views.py
`
#@login_required(login_url='/admin_login/')
def AdminManageRequests(request):
lessons = Lesson.objects.all()
return render(request,'admin/manage_requests.html',{'lessons':lessons})
def AdminUpdateRequests(request, lesson_id):
lesson = Lesson.objects.get(pk=lesson_id)
form = StudentRequestForm(request.POST or None, instance=lesson)
context = {
'lesson':lesson, 'form':form
}
return render(request, 'admin/update_requests.html',context)
`
urls.py
`
path('admin_update/<lesson_id>', views.AdminUpdateRequests, name='admin_update'),
`
manage_requests.html
`
{% extends 'admin/admin_home_base.html' %}
{% block admin_content %}
<div>
<h3 class="display-8" style="text-align:center">
Admin Lesson Request Management
</h3>
<hr class="my-4">
<p style="text-align:center">
You can view fulfilled and unfulfilled lesson requests.
</p>
<p class="lead" style="text-align:center">
{% include 'admin/partials/fulfilled_lessons.html' %}
<br>
{% include 'admin/partials/unfulfilled_lessons.html' %}
</p>
</div>
{% endblock %}
`
lessons_table_base.html
`
<div class="card">
<div class="card-header">
<h5 class="card-title">{% block card_title %}{% endblock %}</h5>
<div class="card-body table-responsive p-0">
<table class="table table-hover text-nowrap">
<thead>
<tr>
<th>Lesson ID</th>
<th>Lesson Name</th>
<th>Student</th>
<th>Teacher</th>
<th>Interval (Days)</th>
<th>Duration (Minutes)</th>
<th></th>
</tr>
</thead>
<tbody>
{% for lesson in lessons %}
{% block lessons_content %}
{% endblock %}
{% endfor %}
</tbody>
</table>
</div>
</div>
</div>
`
fulfilled_lessons.html
`
{% extends 'admin/partials/lessons_table_base.html' %}
{% load widget_tweaks %}
{% block card_title %}
Fulfilled Requests <i class="bi-send-check-fill"></i>
{% endblock %}
{% block lessons_content %}
{% if not lesson.is_request %}
<tr>
<td>{{ lesson.lesson_id }}</td>
<td>{{ lesson.lesson_name }}</td>
<td>{{ lesson.student }}</td>
<td>{{ lesson.teacher }}</td>
<td>{{ lesson.interval }}</td>
<td>{{ lesson.duration }}</td>
<td>
<!-- admin edit lesson here -->
<a href="{% url 'admin_update' lesson.id %}">update</a>
<!-- admin delete lesson here -->
<a href="#" class="nav-link" role="button" data-bs-toggle="tooltip" title="Remove lesson">
<span class="bi-dash-square"></span>
</a>
</td>
</tr>
{% endif %}
{% endblock %}
`
updates_requests.html (will add more code when the issue's resolved.)
`
{% extends 'admin/admin_home_base.html' %}
{% block admin_content %}
<div>
<h2 style="text-align:center">
Update Lesson Request
</h2>
<hr class="my-4">
<p style="text-align:center">
You can update lessons to the system.
</p>
<form action="" method=POST>
{% csrf_token %}
{{form.as_p}}
<input type="submit" value="Update" class="btn btn-secondary">
</form>
</div>
{% endblock %}
`
tracebacks
`
Internal Server Error: /admin_managerequests/
Traceback (most recent call last):
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/lessons/views.py", line 55, in AdminManageRequests
return render(request,'admin/manage_requests.html',{'lessons':lessons})
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/shortcuts.py", line 19, in render
content = loader.render_to_string(template_name, context, request, using=using)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader.py", line 62, in render_to_string
return template.render(context, request)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/backends/django.py", line 61, in render
return self.template.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 171, in render
return self._render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 150, in render
return compiled_parent._render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 150, in render
return compiled_parent._render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 62, in render
result = block.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 62, in render
result = block.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 188, in render
return template.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 173, in render
return self._render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 150, in render
return compiled_parent._render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/defaulttags.py", line 209, in render
nodelist.append(node.render_annotated(context))
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/loader_tags.py", line 62, in render
result = block.nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/defaulttags.py", line 309, in render
return nodelist.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 936, in render
bit = node.render_annotated(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/base.py", line 903, in render_annotated
return self.render(context)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/template/defaulttags.py", line 443, in render
url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app)
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/urls/base.py", line 87, in reverse
return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
File "/Users/jaeholee/SEG-Small-Group-Project/hyena/venv/lib/python3.10/site-packages/django/urls/resolvers.py", line 677, in _reverse_with_prefix
raise NoReverseMatch(msg)
django.urls.exceptions.NoReverseMatch: Reverse for 'admin_update' with arguments '('',)' not found. 1 pattern(s) tried: ['admin_update/(?P<lesson_id>[^/]+)$']
`
I have tried some solutions but nothing worked. Can someone help?
|
[
"It should be <int:lesson_id> not only <lesson_id> as by default (when nothing is given) it is considered as string type so:\npath('admin_update/<int:lesson_id>/', views.AdminUpdateRequests, name='admin_update'),\n\nI'd also recommend you to use get_object_or_404() so:\nlesson = get_object_or_404(Lesson,pk=lesson_id)\n\n\nNote: Always add / at the end of every route.\n\n"
] |
[
2
] |
[] |
[] |
[
"django",
"django_forms",
"django_templates",
"django_urls",
"python"
] |
stackoverflow_0074537426_django_django_forms_django_templates_django_urls_python.txt
|
Q:
discord has no attribute Intents once compiled
So I'm making a GUI using Tkinter that one of the features is it launches a discord bot.
Now when I run the code within VS Code it all works fine. However when I compile it using pyinstaller I get an error saying "Module discord has no attribute Intents".
If I put the code for the bot in a separate python file and get the tkinter file to load the bot file using:
os.popen('py botcode.py')
Then compile the main tkinter file it all works BUT I want the code for the bot to be in the same file as the tkinter code and not two separate files.
Here is some of the code:
import tkinter as tk
import os, threading, json, collections
from tkinter import *
from tkinter import ttk
from tkinter import Scrollbar, messagebox
from threading import Thread
from PIL import ImageTk,Image
import discord
from discord.ext import commands, tasks
from itertools import cycle
import re
import subprocess, sys, random, smtplib, string, ctypes
import requests, asyncio, functools
def getintents():
return discord.Intents().all()
token = "BOT TOKEN HERE"
client = commands.Bot(command_prefix=",", intents=getintents())
status = cycle(['Running Gremlins App', 'Coded by Gremlin',])
client.remove_command('help')
def RandomColor():
randcolor = discord.Color(random.randint(0x000000, 0xFFFFFF))
return randcolor
@client.event
async def on_ready():
change_status.start()
print('Online')
@tasks.loop(seconds=5)
async def change_status():
await client.change_presence(activity=discord.Game(next(status)))
@client.command()
async def ping(ctx):
embed = discord.Embed(description=f'Pong! {round(client.latency * 1000)}ms', color=RandomColor())
await ctx.send(embed=embed)
class Main_Page(Temp):
def __init__(self, parent, controller):
Temp.__init__(self, parent)
botbut = tk.Button(self, button_stylesG, text='Start Bot',command=lambda:startbot())
botbut.pack()
def startbot():
def sbot():
client.run(token)
botstart = Thread(target=sbot)
botstart.start()
Why does it work when I run it through VS Code but not when compiled?
Why does it work when the bot code is in a separate file when compiled, but not when it is in the same file?
A:
Well the problem you are facing may be due to your bot structure
So first go to https://discord.com/developers/applications
Select your bot
Click the options button
Then go to Bot, scroll down, and enable all intents.
A:
It should be
return discord.Intents.all()
and not
return discord.Intents().all()
or you could just do
client = commands.Bot(command_prefix=",", intents=discord.Intents.all())
instead of
def getintents():
return discord.Intents.all()
client = commands.Bot(command_prefix=",", intents=getintents())
|
discord has no attribute Intents once compiled
|
So I'm making a GUI using Tkinter that one of the features is it launches a discord bot.
Now when I run the code within VS Code it all works fine. However when I compile it using pyinstaller I get an error saying "Module discord has no attribute Intents".
If I put the code for the bot in a separate python file and get the tkinter file to load the bot file using:
os.popen('py botcode.py')
Then compile the main tkinter file it all works BUT I want the code for the bot to be in the same file as the tkinter code and not two separate files.
Here is some of the code:
import tkinter as tk
import os, threading, json, collections
from tkinter import *
from tkinter import ttk
from tkinter import Scrollbar, messagebox
from threading import Thread
from PIL import ImageTk,Image
import discord
from discord.ext import commands, tasks
from itertools import cycle
import re
import subprocess, sys, random, smtplib, string, ctypes
import requests, asyncio, functools
def getintents():
return discord.Intents().all()
token = "BOT TOKEN HERE"
client = commands.Bot(command_prefix=",", intents=getintents())
status = cycle(['Running Gremlins App', 'Coded by Gremlin',])
client.remove_command('help')
def RandomColor():
randcolor = discord.Color(random.randint(0x000000, 0xFFFFFF))
return randcolor
@client.event
async def on_ready():
change_status.start()
print('Online')
@tasks.loop(seconds=5)
async def change_status():
await client.change_presence(activity=discord.Game(next(status)))
@client.command()
async def ping(ctx):
embed = discord.Embed(description=f'Pong! {round(client.latency * 1000)}ms', color=RandomColor())
await ctx.send(embed=embed)
class Main_Page(Temp):
def __init__(self, parent, controller):
Temp.__init__(self, parent)
botbut = tk.Button(self, button_stylesG, text='Start Bot',command=lambda:startbot())
botbut.pack()
def startbot():
def sbot():
client.run(token)
botstart = Thread(target=sbot)
botstart.start()
Why does it work when I run it through VS Code but not when compiled?
Why does it work when the bot code is in a separate file when compiled, but not when it is in the same file?
|
[
"Well the problem you are facing may be due to your bot structure\nSo first go to https://discord.com/developers/applications\nSelect your bot\nClick the options button\n\nthen go to bot scroll down and enable all intents\n\n",
"It should be\nreturn discord.Intents.all()\n\nand not\nreturn discord.Intents().all()\n\nor you could just do\nclient = commands.Bot(command_prefix=\",\", intents=discord.Intents.all)\n\ninstead of\ndef getintents():\n return discord.Intents.all()\n\nclient = commands.Bot(command_prefix=\",\", intents=getintents())\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"discord",
"discord.py",
"python",
"tkinter"
] |
stackoverflow_0072496293_discord_discord.py_python_tkinter.txt
|
Q:
Annotate Min/Max/Median in Matplotlib Violin Plot
Given this example code:
import pandas as pd
import matplotlib.pyplot as plt
data = 'https://raw.githubusercontent.com/marsja/jupyter/master/flanks.csv'
df = pd.read_csv(data, index_col=0)
# Subsetting using Pandas query():
congruent = df.query('TrialType == "congruent"')['RT']
incongruent = df.query('TrialType == "incongruent"')['RT']
# Combine data
plot_data = list([incongruent, congruent])
fig, ax = plt.subplots()
xticklabels = ['Incongruent', 'Congruent']
ax.set_xticks([1, 2])
ax.set_xticklabels(xticklabels)
ax.violinplot(plot_data, showmedians=True)
Which results in the following plot:
How can I annotate the min, max, and median lines with their respective values?
I haven't been able to find examples online that show how to annotate violin plots in this way. If we set plot = ax.violinplot(plot_data, showmedians=True) then we can access attributes like plot['cmaxes'], but I can't quite figure out how to use that for annotations.
Here is an example of what I am trying to achieve:
A:
So this was as easy as getting the medians/mins/maxes and then enumerating, adding the annotation with plt.text, and adding some small values for positioning:
medians = results_df.groupby(['model_cat'])['test_f1'].median()
for i, v in enumerate(medians):
plt.text((i+.85), (v+.001), str(round(v, 3)), fontsize = 12)
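Note that the snippet above comes from the asker's own project (results_df, model_cat), not from the question's example. A minimal sketch of the same idea applied directly to the question's plot_data, annotating min, median, and max for each violin (the 0.06 x-offset is just an illustrative value):
import numpy as np

for i, d in enumerate(plot_data, start=1):
    for v in (np.min(d), np.median(d), np.max(d)):
        # nudge each label to the right of its horizontal line
        ax.text(i + 0.06, v, f'{v:.0f}', fontsize=9, va='center')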
|
Annotate Min/Max/Median in Matplotlib Violin Plot
|
Given this example code:
import pandas as pd
import matplotlib.pyplot as plt
data = 'https://raw.githubusercontent.com/marsja/jupyter/master/flanks.csv'
df = pd.read_csv(data, index_col=0)
# Subsetting using Pandas query():
congruent = df.query('TrialType == "congruent"')['RT']
incongruent = df.query('TrialType == "incongruent"')['RT']
# Combine data
plot_data = list([incongruent, congruent])
fig, ax = plt.subplots()
xticklabels = ['Incongruent', 'Congruent']
ax.set_xticks([1, 2])
ax.set_xticklabels(xticklabels)
ax.violinplot(plot_data, showmedians=True)
Which results in the following plot:
How can I annotate the min, max, and median lines with their respective values?
I haven't been able to find examples online that show how to annotate violin plots in this way. If we set plot = ax.violinplot(plot_data, showmedians=True) then we can access attributes like plot['cmaxes'], but I can't quite figure out how to use that for annotations.
Here is an example of what I am trying to achieve:
|
[
"So this was as easy as getting the medians/mins/maxes and then enumerating, adding the annotation with plt.text, and adding some small values for positioning:\nmedians = results_df.groupby(['model_cat'])['test_f1'].median()\n\nfor i, v in enumerate(medians):\n plt.text((i+.85), (v+.001), str(round(v, 3)), fontsize = 12)\n\n"
] |
[
0
] |
[] |
[] |
[
"data_science",
"matplotlib",
"python",
"violin_plot",
"visualization"
] |
stackoverflow_0074329144_data_science_matplotlib_python_violin_plot_visualization.txt
|
Q:
matplotlib, how to ignore key repeats
I am trying to activate a mode when a key is pressed and turn it off when the key is released. So, while holding a key, stay in this mode. The problem is, matplotlib is interpreting a held key as many key presses and releases in rapid succession.
Anyone know how to stop this?
here is some sample code:
import matplotlib.pyplot as plt
import numpy as np
def key_press(event):
# toggle mode on when key pressed
print(f'{event.key} pressed')
def key_release(event):
# toggle mode off when key released
print(f'{event.key} released')
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(111)
x=np.random.random([20])
y=np.random.random([20])
ax1.scatter(x,y)
fig.canvas.mpl_connect('key_press_event',key_press)
fig.canvas.mpl_connect('key_release_event',key_release)
plt.show()
A:
I was/am experiencing the same problem. I believe Matplotlib is working correctly (at least for me on Pop OS/Linux); there is an accessibility setting called repeat keys or auto-repeat keys. When enabled, this setting releases and re-presses the key after you hold it down for some time, and the press/release cycle then speeds up. Matplotlib receives these inputs as key_press_event and key_release_event. Indeed, when I disable this setting in the system settings, holding down a key sends key_press_event only once.
Not sure what the fix would be (without disabling the setting); perhaps some hack with the keyboard library, but I am not sure whether that library experiences the same problem as Matplotlib.
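As a workaround that stays inside Matplotlib, one possible debounce sketch is shown below. It assumes auto-repeat events arrive within roughly 50 ms (REPEAT_GAP is a guess, not a documented value) and defers the release action with a canvas timer to see whether another press follows:
import time

REPEAT_GAP = 0.05   # assumed auto-repeat spacing in seconds
held = {}           # key -> None while held, or the time of a tentative release
timers = {}         # keep timer references alive so the callbacks fire

def key_press(event):
    prev = held.get(event.key, 'absent')
    if prev is None:
        return                      # key is already held: ignore the repeat press
    if prev != 'absent' and time.time() - prev < REPEAT_GAP:
        held[event.key] = None      # auto-repeat: cancel the pending release
        return
    held[event.key] = None
    print(f'{event.key} pressed')   # genuine press: turn the mode on here

def key_release(event):
    held[event.key] = time.time()   # tentative release
    timer = fig.canvas.new_timer(interval=int(REPEAT_GAP * 1000))
    timer.single_shot = True
    def confirm():
        if held.get(event.key) is not None:  # no press followed: real release
            del held[event.key]
            print(f'{event.key} released')   # turn the mode off here
    timer.add_callback(confirm)
    timers[event.key] = timer
    timer.start()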
|
matplotlib, how to ignore key repeats
|
I am trying to activate a mode when a key is pressed and turn it off when the key is released. So, while holding a key, stay in this mode. The problem is, matplotlib is interpreting a held key as many key presses and releases in rapid succession.
Anyone know how to stop this?
here is some sample code:
import matplotlib.pyplot as plt
import numpy as np
def key_press(event):
# toggle mode on when key pressed
print(f'{event.key} pressed')
def key_release(event):
# toggle mode off when key released
print(f'{event.key} released')
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(111)
x=np.random.random([20])
y=np.random.random([20])
ax1.scatter(x,y)
fig.canvas.mpl_connect('key_press_event',key_press)
fig.canvas.mpl_connect('key_release_event',key_release)
plt.show()
|
[
"I was/am experiencing the same problem. I believe Matplotlib is working correctly (at least for me on Pop OS/Linux); there is an accessibility setting called repeat keys or auto repeat keys. When enabled, this setting will release your key if you press on it for X amount of time and repress it again and then the process of pressing and releasing speeds up. Matplotlib will get these inputs as key_press_event and key_release_event. Indeed when I disable this setting in the system settings, holding down a key will only send key_press_event once.\nNot sure what the fix would be (without disabling the setting), perhaps some hack with keyboard library but I am not sure if this library experiences the same problem as Matplotlib.\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0062271731_matplotlib_python.txt
|
Q:
Trying to group by using multiple columns
Using pandas, I am trying to group by multiple columns and then fill the dataframe where a person's name is not present.
For Example this is my Dataframe
V1 V2 V3 PN
1 10 20 A
2 10 21 A
3 10 20 C
I have a unique person name list = ['A','B','C','D','E']
Expected Outcome:-
V1 V2 V3 PN
1 10 20 A
1 10 20 B
1 10 20 C
1 10 20 D
1 10 20 E
2 10 21 A
2 10 21 B
2 10 21 C
2 10 21 D
2 10 21 E
3 10 20 A
3 10 20 B
3 10 20 C
3 10 20 D
3 10 20 E
I was thinking about trying a pandas groupby statement, but it didn't work out.
A:
Try this, using pd.MultiIndex with reindex to create additional rows:
import pandas as pd
df = pd.DataFrame({'Version 1':[1,2,3],
'Version 2':[10,10,10],
'Version 3':[20,21,20],
'Person Name':'A A C'.split(' ')})
p_list = [*'ABCDE']
df.set_index(['Version 1', 'Person Name'])\
.reindex(pd.MultiIndex.from_product([df['Version 1'].unique(), p_list],
names=['Version 1', 'Person Name']))\
.groupby(level=0, group_keys=False).apply(lambda x: x.ffill().bfill())\
.reset_index()
Output:
Version 1 Person Name Version 2 Version 3
0 1 A 10.0 20.0
1 1 B 10.0 20.0
2 1 C 10.0 20.0
3 1 D 10.0 20.0
4 1 E 10.0 20.0
5 2 A 10.0 21.0
6 2 B 10.0 21.0
7 2 C 10.0 21.0
8 2 D 10.0 21.0
9 2 E 10.0 21.0
10 3 A 10.0 20.0
11 3 B 10.0 20.0
12 3 C 10.0 20.0
13 3 D 10.0 20.0
14 3 E 10.0 20.0
|
Trying to group by using multiple columns
|
Using pandas, I am trying to group by multiple columns and then fill the dataframe where a person's name is not present.
For Example this is my Dataframe
V1 V2 V3 PN
1 10 20 A
2 10 21 A
3 10 20 C
I have a unique person name list = ['A','B','C','D','E']
Expected Outcome:-
V1 V2 V3 PN
1 10 20 A
1 10 20 B
1 10 20 C
1 10 20 D
1 10 20 E
2 10 21 A
2 10 21 B
2 10 21 C
2 10 21 D
2 10 21 E
3 10 20 A
3 10 20 B
3 10 20 C
3 10 20 D
3 10 20 E
I was thinking about trying a pandas groupby statement, but it didn't work out.
|
[
"Try this, using pd.MultiIndex with reindex to create additional rows:\nimport pandas as pd\ndf = pd.DataFrame({'Version 1':[1,2,3],\n 'Version 2':[10,10,10],\n 'Version 3':[20,21,20],\n 'Person Name':'A A C'.split(' ')})\n\np_list = [*'ABCDE']\n\ndf.set_index(['Version 1', 'Person Name'])\\\n .reindex(pd.MultiIndex.from_product([df['Version 1'].unique(), p_list],\n names=['Version 1', 'Person Name']))\\\n .groupby(level=0, group_keys=False).apply(lambda x: x.ffill().bfill())\\\n .reset_index()\n\nOutput:\n Version 1 Person Name Version 2 Version 3\n0 1 A 10.0 20.0\n1 1 B 10.0 20.0\n2 1 C 10.0 20.0\n3 1 D 10.0 20.0\n4 1 E 10.0 20.0\n5 2 A 10.0 21.0\n6 2 B 10.0 21.0\n7 2 C 10.0 21.0\n8 2 D 10.0 21.0\n9 2 E 10.0 21.0\n10 3 A 10.0 20.0\n11 3 B 10.0 20.0\n12 3 C 10.0 20.0\n13 3 D 10.0 20.0\n14 3 E 10.0 20.0\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074536632_dataframe_pandas_python.txt
|
Q:
Plotting sine wave with delayed starting time
I want to plot a sine wave with a delayed starting time.
For example,
Sine wave frequency: 1Hz
Total time: 2s
2 periods
I want the sine wave to start at t=1s so there is only one period in my plot.
My code so far is
a = 1
d = 5
phi = 0
f = 1 # frequency
dt = 0.01 # timestep
fs = 1/dt # sampling rate
T = 1/f # period
Ttot = 2 # total
N = int(Ttot/dt) # number of samples
t = np.linspace(0, Ttot, N) # time channel
signal = a*np.sin(2*np.pi*f*t + phi)+d # sine signal
plt.plot(t, signal)
plt.xlim(0, 2*T)
plt.xlabel('Time / s')
plt.ylabel('Amplitude')
plt.show()
So in the plot I want signal = d = const for t < 1 s, and one period of the sine function for t > 1 s.
Any ideas? I have to build a field function for a boundary condition in CFD simulation.
A:
First of all, for the delay of your sine, you have to modify this line
t = np.linspace(0, Ttot, N)
for this
t = np.linspace(1, Ttot, N)
This will provide your samples for the calculation of your signal, between the value of 1 and Ttot (which is 2 here).
Then you can add a horizontal line of value d between 0 and your delay like this
plt.hlines(d, 0, 1)
When you add a variable named delay, for example, the whole code looks like this:
import matplotlib.pyplot as plt
import numpy as np
a = 1
d = 5
phi = 0
f = 1 # frequency
dt = 0.01 # timestep
fs = 1/dt # sampling rate
T = 1/f # period
Ttot = 2 # total
N = int(Ttot/dt) # number of samples
delay = 1
t = np.linspace(delay, Ttot, N) # time channel
signal = a*np.sin(2*np.pi*f*t + phi)+d # sine signal
plt.plot(t, signal)
plt.hlines(d, 0, delay)
plt.xlim(0, 2*T)
plt.xlabel('Time / s')
plt.ylabel('Amplitude')
plt.show()
which gives the following plot
|
Plotting sine wave with delayed starting time
|
I want to plot a sine wave with a delayed starting time.
For example,
Sine wave frequency: 1Hz
Total time: 2s
2 periods
I want the sine wave to start at t=1s so there is only one period in my plot.
My code so far is
a = 1
d = 5
phi = 0
f = 1 # frequency
dt = 0.01 # timestep
fs = 1/dt # sampling rate
T = 1/f # period
Ttot = 2 # total
N = int(Ttot/dt) # number of samples
t = np.linspace(0, Ttot, N) # time channel
signal = a*np.sin(2*np.pi*f*t + phi)+d # sine signal
plt.plot(t, signal)
plt.xlim(0, 2*T)
plt.xlabel('Time / s')
plt.ylabel('Amplitude')
plt.show()
So in the plot I want signal = d = const for t < 1 s, and one period of the sine function for t > 1 s.
Any ideas? I have to build a field function for a boundary condition in CFD simulation.
|
[
"First of all, for the delay of your sine, you have to modify this line\nt = np.linspace(0, Ttot, N)\n\nfor this\nt = np.linspace(1, Ttot, N)\n\nThis will provide your samples for the calculation of your signal, between the value of 1 and Ttot (which is 2 here).\nThen you can add a horizontal line of value d between 0 and your delay like this\nplt.hlines(d, 0, 1)\n\nwhen you add a variable named delay for example, the whole code looks like this\nimport matplotlib.pyplot as plt\nimport numpy as np\n\na = 1\nd = 5\nphi = 0\n\nf = 1 # frequency\ndt = 0.01 # timestep\nfs = 1/dt # sampling rate\nT = 1/f # period\nTtot = 2 # total\nN = int(Ttot/dt) # number of samples\n\ndelay = 1\n\nt = np.linspace(delay, Ttot, N) # time channel\n\nsignal = a*np.sin(2*np.pi*f*t + phi)+d # sine signal\n\n\nplt.plot(t, signal)\nplt.hlines(d, 0, delay)\nplt.xlim(0, 2*T)\nplt.xlabel('Time / s')\nplt.ylabel('Amplitude')\nplt.show()\n\nwhich gives this following plot\n\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0074536008_matplotlib_python.txt
|
Q:
Separating development and production parts of django project
I'm building a site that relies on the output of a machine learning algorithm. All that is needed for the user-facing part of the site is the output of the algorithm (class labels for a set of items), which can be easily stored and retrieved from the django models. The algorithm could be run once a day, and does not rely on user input.
So this part of the site only depends on django and related packages.
But developing, tuning, and evaluating the algorithm uses many other python packages such as scikit-learn, pandas, numpy, matplotlib, etc. It also requires saving many different sets of class labels.
These dependencies cause some issues when deploying to heroku, because numpy requires LAPACK/BLAS. It also seems like it would be good practice to have as few dependencies as possible in the deployed app.
How can I separate the machine-learning part from the user-facing part, but, still have them integrated enough that the results of the algorithm are easily used?
I thought of creating two separate projects, and then writing to the user-facing database in some way, but that seems like it would lead to maintenance problems (managing the dependencies, changes in database schemas, etc.).
As far as I understand, this problem is a little bit different than using different settings or databases for production and development, because it is more about managing different sets of dependencies.
A:
Just moving what we discussed into the answer in case people have the same question; my suggestion is:
Spend some time defining what the dependencies are for your site and for the algorithm code.
Dump the dependency list into requirements.txt for each project.
Deploy them on different environments so the conflicts don't happen.
Develop some API endpoints on your site side using Django Rest Framework or Tastypie and let your algorithm code update your model using the API. Use cron to run your algorithm code regularly and push the data.
A:
Create a requirements file for each environment, and a base requirements file for those packages shared by all the environments.
$ mkdir requirements
$ pip freeze > requirements/base.txt
$ echo "-r base.txt" > requirements/development.txt
$ echo "-r base.txt" > requirements/production.txt
Then adjust your development and production dependencies and install each one in the proper environment
#change to your development virtualenv
#$source .virtualenvs/development/bin/activate
$ pip install -r requirements/development.txt
#change to your production virtualenv
#$source .virtualenvs/production/bin/activate
$ pip install -r requirements/production.txt
A:
I prefer using poetry as my dependency manager. It lets you define the dev dependencies, rather than having separate requirements.txt files which is extra work.
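For illustration, a hedged sketch of that workflow with Poetry 1.2+ (the package names are just examples, not from the original answer):
$ poetry add --group dev scikit-learn pandas numpy matplotlib   # dev-only dependencies
$ poetry install --without dev                                  # what production/Heroku would run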
|
Separating development and production parts of django project
|
I'm building a site that relies on the output of a machine learning algorithm. All that is needed for the user-facing part of the site is the output of the algorithm (class labels for a set of items), which can be easily stored and retrieved from the django models. The algorithm could be run once a day, and does not rely on user input.
So this part of the site only depends on django and related packages.
But developing, tuning, and evaluating the algorithm uses many other python packages such as scikit-learn, pandas, numpy, matplotlib, etc. It also requires saving many different sets of class labels.
These dependencies cause some issues when deploying to heroku, because numpy requires LAPACK/BLAS. It also seems like it would be good practice to have as few dependencies as possible in the deployed app.
How can I separate the machine-learning part from the user-facing part, but, still have them integrated enough that the results of the algorithm are easily used?
I thought of creating two separate projects, and then writing to the user-facing database in some way, but that seems like it would lead to maintenance problems (managing the dependencies, changes in database schemas, etc.).
As far as I understand, this problem is a little bit different than using different settings or databases for production and development, because it is more about managing different sets of dependencies.
|
[
"Just move what we discussed to the answer in case people have the same question, my suggestion is:\n\nSpend some time define what are the dependencies for your site and for the algorithm code.\nDump the dependency list into requirements.txt for each project.\nDeploy them on different environments so the conflicts don't happen.\nDevelop some API endpoints on your site side using Django Rest Framework or Tastypie and let your algorithm code update your model using the API. Use cron to run your algorithm code regularly and push the data.\n\n",
"Create a requirements file for each environment, and a base requirements file for those packages shared by all the environments.\n $ mkdir requirements\n $ pip freeze > requirements/base.txt\n $ echo \"-r base.txt\" > requirements/development.txt\n $ echo \"-r base.txt\" > requirements/production.txt\n\nThen adjust your development and production dependencies and install each one in the proper environment \n#change to your development virtualenv\n#$source .virtualenvs/development/bin/activate\n$ pip install -r requirements/development.txt\n\n#change to your production virtualenv\n#$source .virtualenvs/production/bin/activate\n$ pip install -r requirements/production.txt\n\n",
"I prefer using poetry as my dependency manager. It lets you define the dev dependencies, rather than having separate requirements.txt files which is extra work.\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"django",
"heroku",
"python"
] |
stackoverflow_0031753685_django_heroku_python.txt
|
Q:
Feature scaling for polynomial regression
Do we have to scale the polynomial features when creating a polynomial regression?
This question is already answered here and the answer is no. But when creating a model with scikit learn, I do observe a huge difference.
And I also found this article about the Importance of Feature Scaling in Data Modeling. And its example with polynomial features shows that the scaling does have an impact.
What did I miss?
A:
Maybe because of numerical issues; polynomials are prone to ill-conditioning: I have experienced such problems (outside of machine learning) with not-so-complex polynomial models, and the first solution was scaling the values of the features.
Symbolically the result is the same, but numerically it can be much different in some instances.
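A small numeric sketch of that ill-conditioning point (an illustration, not from the original answer): the condition number of a polynomial design matrix drops by many orders of magnitude once the feature is standardized.
import numpy as np

x = np.linspace(0, 1000, 50)                 # raw feature on a large scale
x_std = (x - x.mean()) / x.std()             # standardized feature

# cubic polynomial design matrices (columns x**3, x**2, x, 1)
print(np.linalg.cond(np.vander(x, 4)))       # enormous condition number
print(np.linalg.cond(np.vander(x_std, 4)))   # small and well-conditioned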
|
Feature scaling for polynomial regression
|
Do we have to scale the polynomial features when creating a polynomial regression?
This question is already answered here and the answer is no. But when creating a model with scikit learn, I do observe a huge difference.
And I also found this article about the Importance of Feature Scaling in Data Modeling. And its example with polynomial features shows that the scaling does have an impact.
What did I miss?
|
[
"Maybe because of numerical issues; polynomials are prone to ill-conditioning: I have experienced (outside of machine learning) such problems with not-so-complex polynomial models, and the first solution was scaling the values of the features.\nSymbolically the result is the same, but numerically can be much different, in some instances\n"
] |
[
0
] |
[] |
[] |
[
"linear_regression",
"python",
"regression",
"scikit_learn"
] |
stackoverflow_0062492473_linear_regression_python_regression_scikit_learn.txt
|
Q:
Why can't I import my domain models into my flask app?
I've been working through the Architecture Patterns with Python book and am reworking my app to follow a folder structure like the one on the authors git repo for chapter 4: https://github.com/cosmicpython/code/tree/master/src/allocation
My problem is that I cannot get the flask_app.py to import any of the domain modules. I keep getting module not found error.
I have a folder structure like this:
src/
peaked/
domain/
__init__.py
models.py
api/
__init__.py
flask_api.py
dataaccess/
__init__.py
orm.py
repository.py
__init__.py
setup.py
The repository and orm modules can import from the domain module just fine, using the standard from domain import models. The unit tests in a separate tests folder at the top level can also import domain and dataaccess classes and functions just fine.
However, as soon as I put the exact same import into the flask_api.py, I get a ModuleNotFound error saying 'no module named domain'.
I spent the last 6 hours going through flask docs, through the github repo and a couple of other blogs and I can't get it to work.
I'm currently launching the flask app using python src/peaked/api/flask_api.py from the command line in VSCode. This works fine (if I have just a simple flask file with no imports). But as soon as I introduce one of the imports it breaks.
I can't seem to get flask run to work. I've tried using set FLASK_APP=src/peaked/api/flask_api.py but I just get a could not locate Flask application error.
The setup.py file simply contains:
from setuptools import setup
setup(
name="peaked",
version="0.1",
packages=["peaked"],
)
Do I need to do something different to set up FLASK_APP when it is in a subdir? And why are the imports not working when they work just fine in other files?
A:
That's an awesome book, I hope you enjoy reading it as much as I did.
If I recall correctly, the package is installed before being used (check the docker files) and is therefore available globally. You could simulate this behavior by adding your src/peaked folder to the PYTHONPATH environment variable (something like set PYTHONPATH=%PYTHONPATH%;C:\path_to\src\peaked), or you could add a line at the very start of the flask_api.py file appending the src/peaked path to sys.path:
import sys
sys.path.append('C:\\path_to\\src\\peaked')
Note that the last suggestion adds infrastructure complexity to your API (not desired).
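For completeness, the install-based route the answer alludes to would be an editable install; a sketch, assuming the tree from the question with setup.py inside src/, and assuming flask_api.py defines a module-level app:
$ cd src
$ pip install -e .                               # installs the "peaked" package into the environment
$ python -c "from peaked.domain import models"   # imports now resolve from anywhere
$ set FLASK_APP=peaked.api.flask_api             # a dotted module path also works for flask run
$ flask run
With this approach, the intra-package imports would go through the package name, e.g. from peaked.domain import models instead of from domain import models.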
|
Why can't I import my domain models into my flask app?
|
I've been working through the Architecture Patterns with Python book and am reworking my app to follow a folder structure like the one on the authors git repo for chapter 4: https://github.com/cosmicpython/code/tree/master/src/allocation
My problem is that I cannot get the flask_app.py to import any of the domain modules. I keep getting module not found error.
I have a folder structure like this:
src/
peaked/
domain/
__init__.py
models.py
api/
__init__.py
flask_api.py
dataaccess/
__init__.py
orm.py
repository.py
__init__.py
setup.py
The repository and orm modules can import from the domain module just fine, using the standard from domain import models. The unit tests in a separate tests folder at the top level can also import domain and dataaccess classes and functions just fine.
However, as soon as I put the exact same import into the flask_api.py, I get a ModuleNotFound error saying 'no module named domain'.
I spent the last 6 hours going through flask docs, through the github repo and a couple of other blogs and I can't get it to work.
I'm currently launching the flask app using python src/peaked/api/flask_api.py from the command line in VSCode. This works fine (if I have just a simple flask file with no imports). But as soon as I introduce one of the imports it breaks.
I can't seem to get flask run to work. I've tried using set FLASK_APP=src/peaked/api/flask_api.py but I just get a could not locate Flask application error.
The setup.py file simply contains:
from setuptools import setup
setup(
name="peaked",
version="0.1",
packages=["peaked"],
)
Do I need to do something different to set up FLASK_APP when it is in a subdir? And why are the imports not working when they work just fine in other files?
|
[
"That's an awesome book, I hope you enjoy reading it as much as I did.\nIf I recall correctly, the package is installed before being used (check the docker files) and therefore are available globally. You could simulate this behavior adding your src/peaked folder to the PYTHONPATH environment variable (something like set PYTHONPATH=%PYTHONPATH%;C:\\path_to\\src\\peaked) or you could add a line very start of flask_api.py file adding the src/peaked path in the sys.path variable:\nimport sys\nsys.path.append('C:\\\\path_to\\\\src\\\\peaked')\n\nNote that the last suggestion adds infrastructure complexity to your API (not desired).\n"
] |
[
1
] |
[] |
[] |
[
"flask",
"python"
] |
stackoverflow_0074536122_flask_python.txt
|
Q:
open pdf link with python selenium
os.environ['PATH'] +=
r"C:\Users\dew23\OneDrive\Computer Science"
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("https://official.nba.com/nba-injury-
report-2022-23-season/")
WebDriverWait(driver,
10).until(EC.presence_of_element_located((By.XPATH,
'//*[@id="main"]/div/section[1]/div/div/p/a[12]')))
driver.find_element(By.XPATH, '//*[@id="main"]/div/section[1]/div/div/p/a[12]').send_keys(Keys.RETURN)
The link gets clicked, but it does not open the PDF file. How do I open the file in a new tab?
A:
There are several issues here:
The main issue causing your code to click the element but not open the file is that you need to wait for element clickability. Element presence is the very first state, when the element is already present but still not fully rendered. So clicking a web element at that stage will just do nothing, as you see yourself.
No need to get the element again with driver.find_element(By.XPATH, '//*[@id="main"]/div/section[1]/div/div/p/a[12]') after you already applied WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id="main"]/div/section[1]/div/div/p/a[12]'))), since the former method returns a web element object.
The long '//*[@id="main"]/div/section[1]/div/div/p/a[12]' XPath expression can be replaced by the XPath "//a[contains(@href,'2022-11-22_11AM')]", which is much more precise and reliable.
So, the final code can be like this:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
options.add_argument('--disable-notifications')
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 20)
url = "https://official.nba.com/nba-injury-report-2022-23-season/"
driver.get(url)
wait.until(EC.element_to_be_clickable((By.XPATH, "//a[contains(@href,'2022-11-22_11AM')]"))).click()
And it works; the result is:
|
open pdf link with python selenium
|
os.environ['PATH'] +=
r"C:\Users\dew23\OneDrive\Computer Science"
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("https://official.nba.com/nba-injury-
report-2022-23-season/")
WebDriverWait(driver,
10).until(EC.presence_of_element_located((By.XPATH,
'//*[@id="main"]/div/section[1]/div/div/p/a[12]')))
driver.find_element(By.XPATH, '//*[@id="main"]/div/section[1]/div/div/p/a[12]').send_keys(Keys.RETURN)
The link gets clicked, but it does not open the PDF file. How do I open the file in a new tab?
|
[
"There are several issues here:\n\nThe main issue causing your code to click the element but not to open the file is because you need to wait for element clickability. Element presence is a very first state when element is already presented but still not fully rendered. So, clicking a web element on that stage will just do nothing as you see yourself.\nNo need to get the element again with driver.find_element(By.XPATH, '//*[@id=\"main\"]/div/section[1]/div/div/p/a[12]') after you already applied WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id=\"main\"]/div/section[1]/div/div/p/a[12]'))) since the former method returns a web element object.\nLong '//*[@id=\"main\"]/div/section[1]/div/div/p/a[12]' XPath expression can be changed by this XPath \"//a[contains(@href,'2022-11-22_11AM')]\" it is much more precise and reliable.\n\nSo, the final code can be like this:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\noptions.add_argument('--disable-notifications')\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 20)\n\nurl = \"https://official.nba.com/nba-injury-report-2022-23-season/\"\ndriver.get(url)\n\nwait.until(EC.element_to_be_clickable((By.XPATH, \"//a[contains(@href,'2022-11-22_11AM')]\"))).click()\n\nAnd it woks, the result is\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"selenium",
"selenium_webdriver",
"webdriverwait",
"xpath"
] |
stackoverflow_0074536352_python_selenium_selenium_webdriver_webdriverwait_xpath.txt
|
Q:
Pandas Merge Many Rows of One Dataframe into Fewer Rows of Second Dataframe
Is there an easy way to merge two dataframes such that df2 adds all its rows with matching 'on' values as new columns in df1? Open to other methods of joining the data as well.
e.g.
Matching on Course Offering Code and User Id
df1:
Course Offering Code | User Id
-------------------- | -------
001 | 1
001 | 2
df2:
Course Offering Code | User Id | Assignment | grade%
-------------------- | ------- | ---------- | ------
001 | 1 | A01 | 65
001 | 1 | A02 | 85
001 | 1 | A03 | 95
001 | 1 | A04 | 64
001 | 2 | A01 | 87
001 | 2 | A02 | 86
001 | 2 | A03 | 82
001 | 2 | A04 | 90
I had tried pd.merge(df1, df2, on=['User Id', 'Course Offering Code']) and was hoping for the following:
desired_df
Course Offering Code | User Id | Assignment_x | grade%_x | Assignment_y | grade%_y | Assignment_z | grade%_z | Assignment_a | grade%_a
-------------------- | ------- | ------------ | -------- | ------------ | -------- | ------------ | -------- | ------------ | --------
001 | 1 | A01 | 65 | A02 | 85 | A03 | 95 | A04 | 64
001 | 2 | A01 | 87 | A02 | 86 | A03 | 82 | A04 | 90
A:
You can do this one with pandas.DataFrame.pivot :
def flatten_cols(df):
df.columns = ['_'.join(map(str, x)) for x in df.columns]
df = df[sorted(df.columns, key=lambda x: int(x.split("_")[-1]))]
return df
out = (
df1
.merge(df2, on=["Course Offering Code", "User Id"], how="left")
.assign(idx=lambda x: x.groupby(['Course Offering Code', 'User Id']).cumcount()+1)
.pivot(index= ["Course Offering Code", "User Id"],
columns= "idx", values=["Assignment", "grade%"])
.pipe(flatten_cols)
.reset_index()
)
# Output :
print(out.to_string())
Course Offering Code User Id Assignment_1 grade%_1 Assignment_2 grade%_2 Assignment_3 grade%_3 Assignment_4 grade%_4
0 001 1 A01 65 A02 85 A03 95 A04 64
1 001 2 A01 87 A02 86 A03 82 A04 90
|
Pandas Merge Many Rows of One Dataframe into Fewer Rows of Second Dataframe
|
Is there an easy way to merge two dataframes such that df2 adds all its rows with matching 'on' values as new columns in df1? Open to other methods of joining the data as well.
e.g.
Matching on Course Offering Code and User Id
df1:
Course Offering Code | User Id
-------------------- | -------
001 | 1
001 | 2
df2:
Course Offering Code | User Id | Assignment | grade%
-------------------- | ------- | ---------- | ------
001 | 1 | A01 | 65
001 | 1 | A02 | 85
001 | 1 | A03 | 95
001 | 1 | A04 | 64
001 | 2 | A01 | 87
001 | 2 | A02 | 86
001 | 2 | A03 | 82
001 | 2 | A04 | 90
I had tried pd.merge(df1, df2, on=['User Id', 'Course Offering Code']) and was hoping for the following:
desired_df
Course Offering Code | User Id | Assignment_x | grade%_x | Assignment_y | grade%_y | Assignment_z | grade%_z | Assignment_a | grade%_a
-------------------- | ------- | ------------ | -------- | ------------ | -------- | ------------ | -------- | ------------ | --------
001 | 1 | A01 | 65 | A02 | 85 | A03 | 95 | A04 | 64
001 | 2 | A01 | 87 | A02 | 86 | A03 | 82 | A04 | 90
|
[
"You can do this one with pandas.DataFrame.pivot :\ndef flatten_cols(df):\n df.columns = ['_'.join(map(str, x)) for x in df.columns]\n df = df[sorted(df.columns, key=lambda x: int(x.split(\"_\")[-1]))]\n return df\n\nout = (\n df1\n .merge(df2, on=[\"Course Offering Code\", \"User Id\"], how=\"left\")\n .assign(idx=lambda x: x.groupby(['Course Offering Code', 'User Id']).cumcount()+1)\n .pivot(index= [\"Course Offering Code\", \"User Id\"],\n columns= \"idx\", values=[\"Assignment\", \"grade%\"])\n .pipe(flatten_cols)\n .reset_index()\n )\n\n# Output :\nprint(out.to_string())\n\n Course Offering Code User Id Assignment_1 grade%_1 Assignment_2 grade%_2 Assignment_3 grade%_3 Assignment_4 grade%_4\n0 001 1 A01 65 A02 85 A03 95 A04 64\n1 001 2 A01 87 A02 86 A03 82 A04 90\n\n"
] |
[
0
] |
[] |
[] |
[
"merge",
"pandas",
"python"
] |
stackoverflow_0074537102_merge_pandas_python.txt
|
Q:
sklearn.cross_validate returns one unfitted estimator
I used sklearn.model_selection.cross_validate for cross-validation of an sklearn.pipeline.Pipeline, which works great.
Now I am interested in the coefficients of a feature selection step in the pipeline. The selector used is SelectFromModel(LinearSVC(penalty="l1", dual=False)).
By setting return_estimator=True the cross-validation method should return the estimators fitted on each split. This works well for the classifier:
>>> pipeline[-1].coef_
[ 0.20973553 0.48124347 -0.27811877 ... ]
However, when I inspect the feature selection step, an attribute error is raised, as the object is not yet fitted:
>>> output = cross_validate(pipeline, X, y, cv=skf.split(X, data.cohort_idx), return_estimator=True)
>>> output['estimator'][1][-2].estimator.coef_
AttributeError: 'LinearSVC' object has no attribute 'coef_'
Fitting this step afterwards solves the issue, but that would be cumbersome and error-prone in the cross-validation process:
>>> pipeline.fit(X, y)
>>> pipeline[3].estimator.coef_
[-0.27501591 0.14398988 0.83767175 ... ]
How do I get the cross_validate to return a fitted feature selector?
You can replicate this example using:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC
# Make dummy data
X = np.random.rand(50, 4)
y = np.random.choice([True, False], 50)
# Make pipeline
selector = SelectFromModel(LinearSVC(penalty="l1", dual=False))
classifier = LogisticRegression()
pipeline = make_pipeline(selector, classifier)
# Cross validate
output = cross_validate(pipeline, X, y, return_estimator=True)
# Print coefficients
print('Classifier coef_:', output['estimator'][0][1].coef_)
print('Selector coef_: ', output['estimator'][0][0].estimator.coef_)
A:
After finalizing this question, and just before submitting it, I realized I should access the member estimator_ and not estimator.
I hope this helps someone one day.
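Concretely, SelectFromModel exposes the fitted inner model on estimator_ (trailing underscore, per the usual scikit-learn convention), so the last line of the example becomes:
print('Selector coef_: ', output['estimator'][0][0].estimator_.coef_)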
|
sklearn.cross_validate returns one unfitted estimator
|
I used sklearn.model_selection.cross_validate for cross-validation of an sklearn.pipeline.Pipeline, which works great.
Now I am interested in the coefficients of a feature selection step in the pipeline. The selector used is SelectFromModel(LinearSVC(penalty="l1", dual=False)).
By setting return_estimator=True the cross-validation method should return the estimators fitted on each split. This works well for the classifier:
>>> pipeline[-1].coef_
[ 0.20973553 0.48124347 -0.27811877 ... ]
However, when I inspect the feature selection step, an attribute error is raised, as the object is not yet fitted:
>>> output = cross_validate(pipeline, X, y, cv=skf.split(X, data.cohort_idx), return_estimator=True)
>>> output['estimator'][1][-2].estimator.coef_
AttributeError: 'LinearSVC' object has no attribute 'coef_'
Fitting this step afterwards solves the issue, but that would be cumbersome and error-prone in the cross-validation process:
>>> pipeline.fit(X, y)
>>> pipeline[3].estimator.coef_
[-0.27501591 0.14398988 0.83767175 ... ]
How do I get the cross_validate to return a fitted feature selector?
You can replicate this example using:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC
# Make dummy data
X = np.random.rand(50, 4)
y = np.random.choice([True, False], 50)
# Make pipeline
selector = SelectFromModel(LinearSVC(penalty="l1", dual=False))
classifier = LogisticRegression()
pipeline = make_pipeline(selector, classifier)
# Cross validate
output = cross_validate(pipeline, X, y, return_estimator=True)
# Print coefficients
print('Classifier coef_:', output['estimator'][0][1].coef_)
print('Selector coef_: ', output['estimator'][0][0].estimator.coef_)
|
[
"After finalizing this question, and just before submitting it, I realized I should access the member estimator_ and not estimator.\nI hope this helps someone one day.\n"
] |
[
0
] |
[] |
[] |
[
"cross_validation",
"python",
"scikit_learn"
] |
stackoverflow_0074537722_cross_validation_python_scikit_learn.txt
|
Q:
Averaging five rows above the value in the target column
The challenge that I have, and don't know how to approach, is to average five, ten, or however many rows above the target value, plus the target row itself.
Dataset
target | A | B |
----------------------
nan | 6 | 4 |
nan | 2 | 7 |
nan | 4 | 9 |
nan | 7 | 3 |
nan | 3 | 7 |
nan | 6 | 8 |
nan | 7 | 6 |
53 | 4 | 5 |
nan | 6 | 4 |
nan | 2 | 7 |
nan | 3 | 3 |
nan | 4 | 9 |
nan | 7 | 3 |
nan | 3 | 7 |
51 | 1 | 3 |
Desired format:
target | A | B |
----------------------
53 | 5.16|6.33 |
51 |3.33 |5.33 |
A:
Try this: [::-1] reverses the elements to order the dataframe bottom to top, so we can group the values "above" valid targets:
df.groupby(df['target'].notna()[::-1].cumsum()[::-1]).apply(lambda x: x.tail(6).mean())
Output:
target A B
target
1 51.0 3.333333 5.333333
2 53.0 5.166667 6.333333
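As a small variation (not from the original answer), the window size can be parametrized: tail(n + 1) takes n rows above each target plus the target row itself.
n = 5  # rows above each target value
out = df.groupby(df['target'].notna()[::-1].cumsum()[::-1]).apply(lambda x: x.tail(n + 1).mean())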
|
Averaging five rows above the value in the target column
|
The challenge that I have, and don't know how to approach, is to average five, ten, or however many rows above the target value, plus the target row itself.
Dataset
target | A | B |
----------------------
nan | 6 | 4 |
nan | 2 | 7 |
nan | 4 | 9 |
nan | 7 | 3 |
nan | 3 | 7 |
nan | 6 | 8 |
nan | 7 | 6 |
53 | 4 | 5 |
nan | 6 | 4 |
nan | 2 | 7 |
nan | 3 | 3 |
nan | 4 | 9 |
nan | 7 | 3 |
nan | 3 | 7 |
51 | 1 | 3 |
Desired format:
target | A | B |
----------------------
53 | 5.16|6.33 |
51 |3.33 |5.33 |
|
[
"Try this, [::-1] reversing element to order the dataframe bottom to top, so we can group the values \"above\" valid targets:\ndf.groupby(df['target'].notna()[::-1].cumsum()[::-1]).apply(lambda x: x.tail(6).mean())\n\nOutput:\n target A B\ntarget \n1 51.0 3.333333 5.333333\n2 53.0 5.166667 6.333333\n\n"
] |
[
1
] |
[] |
[] |
[
"data_preprocessing",
"feature_engineering",
"pandas",
"python"
] |
stackoverflow_0074537593_data_preprocessing_feature_engineering_pandas_python.txt
|
Q:
Safen User Input Part I - htmlspecialchars
You are a(n) novice/average/experienced/professional/world-famous Web Developer (choose one) who owns a(n) simple/clean/slick/beautiful/complicated/professional/business website (choose one or more) which contains form fields so visitors can send emails or leave a comment on your website with ease. However, with ease comes danger. Every now and then, a hacker visits your website and attempts to compromise it through the use of XSS (Cross Site Scripting). This is done by injecting script tags into the website through form fields which may contain malicious code (e.g. a redirection to a malicious website that steals personal information).
Mission
Your mission is to implement a function that converts the following potentially harmful characters:
< --> &lt;
> --> &gt;
" --> &quot;
& --> &amp;
My code is below :
def html_special_chars(data):
    if "<" in data:
        return data.replace("<","&lt;")
    elif ">" in data:
        return data.replace(">","&gt;")
    elif '"' in data:
        return data.replace('"',"&quot;")
    else:
        return data.replace('&',"&amp;")
i got output for "alert('Website Hacked!' is <script>alert('Website Hacked! but it should be like this :"<script>alert('Website Hacked!');</script>"
A:
Here is the Solution:
def html_special_chars(data):
    symbols = {'<': '&lt;', '>': '&gt;', '"': '&quot;', '&': '&amp;'}
    return "".join(symbols.get(x, x) for x in data)
|
Safen User Input Part I - htmlspecialchars
|
You are a(n) novice/average/experienced/professional/world-famous Web Developer (choose one) who owns a(n) simple/clean/slick/beautiful/complicated/professional/business website (choose one or more) which contains form fields so visitors can send emails or leave a comment on your website with ease. However, with ease comes danger. Every now and then, a hacker visits your website and attempts to compromise it through the use of XSS (Cross Site Scripting). This is done by injecting script tags into the website through form fields which may contain malicious code (e.g. a redirection to a malicious website that steals personal information).
Mission
Your mission is to implement a function that converts the following potentially harmful characters:
< --> &lt;
> --> &gt;
" --> &quot;
& --> &amp;
My code is below :
def html_special_chars(data):
    if "<" in data:
        return data.replace("<","&lt;")
    elif ">" in data:
        return data.replace(">","&gt;")
    elif '"' in data:
        return data.replace('"',"&quot;")
    else:
        return data.replace('&',"&amp;")
i got output for "alert('Website Hacked!' is <script>alert('Website Hacked! but it should be like this :"<script>alert('Website Hacked!');</script>"
|
[
"Here is the Solution:\ndef html_special_chars(data): \n symbols = {'<': '<', '>': '>', '\"': '"', '&': '&'}\n return \"\".join(symbols.get(x, x) for x in data)\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0072893031_python.txt
|
Q:
I get an error with TikTokAPI when i use api.getUserByName("swisscom")
What is the problem? I use PyCharm on Ubuntu 22.04.
from TikTokAPI import TikTokAPI
api = TikTokAPI()
user_obj = api.getUserByName("tomschoenwolf")
When i start it i get error:
Traceback (most recent call last):
File "/home/user/PycharmProjects/video_download/with_api.py", line 3, in <module>
user_obj = api.getUserByName("tomschoenwolf")
File "/home/user/PycharmProjects/video_download/venv/lib/python3.10/site-packages/TikTokAPI/tiktokapi.py", line 114, in getUserByName
return self.send_get_request(url, params)
File "/home/user/PycharmProjects/video_download/venv/lib/python3.10/site-packages/TikTokAPI/tiktokapi.py", line 84, in send_get_request
data = get_req_json(url, params=None, headers=self.headers)
File "/home/user/PycharmProjects/video_download/venv/lib/python3.10/site-packages/TikTokAPI/utils.py", line 29, in get_req_json
return json.loads(r.text)
File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Process finished with exit code 1
I tried reinstalling the module, but it didn't help.
A:
Are you sure you've installed the TikTokAPI package?
Looks like it can be installed with pip: https://pypi.org/project/TikTokApi/
To verify installation type pip show TikTokApi in the terminal.
|
I get an error with TikTokAPI when i use api.getUserByName("swisscom")
|
What is the problem? I use PyCharm on Ubuntu 22.04.
from TikTokAPI import TikTokAPI
api = TikTokAPI()
user_obj = api.getUserByName("tomschoenwolf")
When i start it i get error:
Traceback (most recent call last):
File "/home/user/PycharmProjects/video_download/with_api.py", line 3, in <module>
user_obj = api.getUserByName("tomschoenwolf")
File "/home/user/PycharmProjects/video_download/venv/lib/python3.10/site-packages/TikTokAPI/tiktokapi.py", line 114, in getUserByName
return self.send_get_request(url, params)
File "/home/user/PycharmProjects/video_download/venv/lib/python3.10/site-packages/TikTokAPI/tiktokapi.py", line 84, in send_get_request
data = get_req_json(url, params=None, headers=self.headers)
File "/home/user/PycharmProjects/video_download/venv/lib/python3.10/site-packages/TikTokAPI/utils.py", line 29, in get_req_json
return json.loads(r.text)
File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Process finished with exit code 1
I tried reinstalling the module, but it didn't help.
|
[
"Are you sure you've installed the TikTokAPI package?\nLooks like it can be installed with pip: https://pypi.org/project/TikTokApi/\nTo verify installation type pip show TikTokApi in the terminal.\n"
] |
[
0
] |
[] |
[] |
[
"pycharm",
"python",
"python_3.x",
"tiktok_api",
"ubuntu_22.04"
] |
stackoverflow_0074537576_pycharm_python_python_3.x_tiktok_api_ubuntu_22.04.txt
|
Q:
How do I tell pylint about a descriptor providing access to iterables, subscriptables?
I have a decorator that I can use to mark a read-only class property:
class class_ro_property(property):
def __init__(self, getter:Callable):
self._getter = getter
def __get__(self, _, cls):
return self._getter(cls)
class MyClass:
@class_ro_property
def my_property(cls):
return [1, 2, 3]
The problem is that pylint doesn't understand this at all. If I try to write:
for num in MyClass.my_property:
print(num)
it will tell me:
E1133:0035:Non-iterable value MyClass.my_property is used in an iterating context
The same problem happens if I try to subscript it, ie MyClass.my_property[0].
Is there some way to tell pylint about the use of a descriptor here? If not, is there some way to tell pylint at the property definition (and not only where it's used) not to complain about its use in an iterating context?
A:
The straight answer is: that is a shortcoming of the linter. A feature request should be opened against "pylint" pointing out that it does not recognize descriptors.
That said, just add a comment to skip linting where you are getting this problem, as a workaround. Adding a comment like # pylint: disable=E1133 on the offending line should do the job.
When coding Python, one has to keep in mind that auxiliary tooling like linters and static type checkers has no way of knowing everything that happens at runtime. Not recognizing a descriptor, like here, is a shortcoming of pylint, and they could implement the feature (though I doubt they will) - but there are other cases where that is not feasible at all.
(Judging by the other answer, pylint opts for special-casing the property call, but ignores that property itself is just one use case of descriptors.)
By the way, I can't find this specific error in the pylint error codes at http://pylint-messages.wikidot.com/all-codes - maybe it is provided by a specific extension? You might just want to throw away such an extension if it is doing more harm than good.
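For example, a minimal sketch of that workaround applied to the loop from the question:
for num in MyClass.my_property:  # pylint: disable=E1133
    print(num)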
|
How do I tell pylint about a descriptor providing access to iterables, subscriptables?
|
I have a decorator that I can use to mark a read-only class property:
class class_ro_property(property):
def __init__(self, getter:Callable):
self._getter = getter
def __get__(self, _, cls):
return self._getter(cls)
class MyClass:
@class_ro_property
def my_property(cls):
return [1, 2, 3]
The problem is that pylint doesn't understand this at all. If I try to write:
for num in MyClass.my_property:
print(num)
it will tell me:
E1133:0035:Non-iterable value MyClass.my_property is used in an iterating context
The same problem happens if I try to subscript it, ie MyClass.my_property[0].
Is there some way to tell pylint about the use of a descriptor here? If not, is there some way to tell pylint at the property definition (and not only where it's used) not to complain about its use in an iterating context?
|
[
"The straight answer is: that is a short-comming of the linter. A feature request should be open against \"pylint\" pointing to the fact it does not recognize descriptors.\nThat said, just strap a comment to skip linting where you are getting this problem as a workaround. Adding a comment like # pylint: disable=E1133 on the offending line should do the job.\nWhen coding Python, one has to keep in mind that auxiliary tooling like linters and static type checkers do not have how to know everything that happens at runtime. Not recognizing a descriptor, like here, is a shortcoming of pylint, and they could implement the feature (though I doubt they will) - but there are other cases where that is not feasible at all.\n(Judging by the other answer, pylint opts for special casing the property call, but ignores that property itself is just one use-case of descriptors)\nBy the way, I can't find this specific error in the pylint error codes at http://pylint-messages.wikidot.com/all-codes - maybe it is provided by an specific extension? You might just want to throw away such an extension if it is doing more harm than good.\n"
] |
[
0
] |
[
"My suggestion would be to write a metaclass that defines the property:\nclass MyMetaClass(type):\n @property\n def my_property(cls):\n return [1, 2, 3]\n\nclass MyClass(metaclass=MyMetaClass):\n my_property = MyClassMeta.my_property\n\nfor num in MyClass.my_property:\n print(num)\n\nPylint is fine with that use of MyClass.my_property. Note that if you don't need MyClass().my_property to get it on instances, you can omit the line my_property = MyClassMeta.my_property.\n"
] |
[
-1
] |
[
"pylint",
"python",
"python_decorators",
"python_descriptors"
] |
stackoverflow_0074523859_pylint_python_python_decorators_python_descriptors.txt
|
Q:
Extraction of versions in paths pandas column
I have a dataframe column that looks like this:
paths
0 ['/api/v2/clouds', '/api/v2/clouds/{cloud}']
1 ['/v0.1/book-lists/{type}/{date}', '/v0.1/book-lists]
2 ['/v1/Video/Rooms', '/v1/Video/Rooms/{RoomSid}'....]
3 ['/v3/attachments/{attachmentId}', '/v3/attachments]
4 '/v0.1/patrons', '/v0.2/patrons', '/v0.3/patrons/dependents]
I want to extract the versions from the column in such a format:
My desired output is :
paths Path_Version
0 ['/api/v2/clouds', '/api/v2/clouds/{cloud}'] v2
1 ['/v0.1/book-lists/{type}/{date}', '/v0.1/book-lists] v0.1
2 ['/v1/Video/Rooms', '/v1/Video/Rooms/{RoomSid}'....] v1
3 ['/v3/attachments/{attachmentId}', '/v3/attachments] v3
4 ['/v0.1/patrons', '/v0.2/patrons', '/v0.3/patrons/dependents] v0.1/v0.2/v0.3
I have tried this:
keywords = ['v1', 'v2', 'v3', 'v4', 'v1.0', 'v1.2', 'v1.1', 'v0.1', 'v0.2','v1.3', 'v1.4', 'v3.1', 'v3.2', '0.1.0', '3.1', 'v0.0.2', 'v0.0.3', 'v0.0.4', '1.0.0']
final_api['Path_Version'] = final_api['paths'].str.findall('(' + '|'.join(keywords) + ')')
But it yields no result. I have looked at other code as well, but none of it gives me the desired output. I am struggling to figure this out; any help will be appreciated.
A:
No need for keywords, just use pandas.Series.str.findall as you started to do:
df["Path_Version"]= (
df["paths"].str.findall(r"(v\d\.?\d?)")
.apply(lambda x: "/".join(set(x)))
)
# Output :
print(df.to_string())
paths Path_Version
0 ['/api/v2/clouds', '/api/v2/clouds/{cloud}'] v2
1 ['/v0.1/book-lists/{type}/{date}', '/v0.1/book-lists] v0.1
2 ['/v1/Video/Rooms', '/v1/Video/Rooms/{RoomSid}'....] v1
3 ['/v3/attachments/{attachmentId}', '/v3/attachments] v3
4 '/v0.1/patrons', '/v0.2/patrons', '/v0.3/patrons/dependents] v0.2/v0.3/v0.1
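Note that set() has no guaranteed order, which is why the last row above reads v0.2/v0.3/v0.1 instead of the v0.1/v0.2/v0.3 from the desired output. A minimal variant of the same approach (my addition), sorting the matches for a deterministic result:
df["Path_Version"] = (
    df["paths"].str.findall(r"(v\d\.?\d?)")
    .apply(lambda x: "/".join(sorted(set(x))))
)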
A:
This seems like a good candidate for a regex:
import pandas as pd
import re
data = [
[['/api/v2/clouds', '/api/v2/clouds/{cloud}']],
[['/v0.1/book-lists/{type}/{date}', '/v0.1/book-lists']],
[['/v1/Video/Rooms', '/v1/Video/Rooms/{RoomSid}']],
[['/v3/attachments/{attachmentId}', '/v3/attachments']],
[['/v0.1/patrons', '/v0.2/patrons', '/v0.3/patrons/dependents']]
]
df = pd.DataFrame(data, columns=['paths'])
ver = re.compile(r'/(v\d(\.\d)?)/')
def getver(row):
vsets = set()
for p in row:
chk = ver.search(p)
vsets.add( chk.group(1) )
return '/'.join(vsets)
df['Version'] = df.paths.apply(getver)
print(df)
Output:
paths Version
0 [/api/v2/clouds, /api/v2/clouds/{cloud}] v2
1 [/v0.1/book-lists/{type}/{date}, /v0.1/book-li... v0.1
2 [/v1/Video/Rooms, /v1/Video/Rooms/{RoomSid}] v1
3 [/v3/attachments/{attachmentId}, /v3/attachments] v3
4 [/v0.1/patrons, /v0.2/patrons, /v0.3/patrons/d... v0.2/v0.3/v0.1
|
Extraction of versions in paths pandas column
|
I have a dataframe column that looks like this:
paths
0 ['/api/v2/clouds', '/api/v2/clouds/{cloud}']
1 ['/v0.1/book-lists/{type}/{date}', '/v0.1/book-lists]
2 ['/v1/Video/Rooms', '/v1/Video/Rooms/{RoomSid}'....]
3 ['/v3/attachments/{attachmentId}', '/v3/attachments]
4 '/v0.1/patrons', '/v0.2/patrons', '/v0.3/patrons/dependents]
I want to extract the versions from the column in such a format:
My desired output is :
paths Path_Version
0 ['/api/v2/clouds', '/api/v2/clouds/{cloud}'] v2
1 ['/v0.1/book-lists/{type}/{date}', '/v0.1/book-lists] v0.1
2 ['/v1/Video/Rooms', '/v1/Video/Rooms/{RoomSid}'....] v1
3 ['/v3/attachments/{attachmentId}', '/v3/attachments] v3
4 ['/v0.1/patrons', '/v0.2/patrons', '/v0.3/patrons/dependents] v0.1/v0.2/v0.3
I have tried this:
keywords = ['v1', 'v2', 'v3', 'v4', 'v1.0', 'v1.2', 'v1.1', 'v0.1', 'v0.2','v1.3', 'v1.4', 'v3.1', 'v3.2', '0.1.0', '3.1', 'v0.0.2', 'v0.0.3', 'v0.0.4', '1.0.0']
final_api['Path_Version'] = final_api['paths'].str.findall('(' + '|'.join(keywords) + ')')
But it yields no result. I have looked at other code as well, but none of it gives me the desired output. I am struggling to figure this out; any help will be appreciated.
|
[
"No need for keywords, just use pandas.Series.str.findall as you started to do:\ndf[\"Path_Version\"]= (\n df[\"paths\"].str.findall(r\"(v\\d\\.?\\d?)\")\n .apply(lambda x: \"/\".join(set(x)))\n )\n\n# Output :\nprint(df.to_string())\n paths Path_Version\n0 ['/api/v2/clouds', '/api/v2/clouds/{cloud}'] v2\n1 ['/v0.1/book-lists/{type}/{date}', '/v0.1/book-lists] v0.1\n2 ['/v1/Video/Rooms', '/v1/Video/Rooms/{RoomSid}'....] v1\n3 ['/v3/attachments/{attachmentId}', '/v3/attachments] v3\n4 '/v0.1/patrons', '/v0.2/patrons', '/v0.3/patrons/dependents] v0.2/v0.3/v0.1\n\n",
"This seems like a good candidate for a regex:\nimport pandas as pd\nimport re\n\ndata = [\n [['/api/v2/clouds', '/api/v2/clouds/{cloud}']],\n [['/v0.1/book-lists/{type}/{date}', '/v0.1/book-lists']],\n [['/v1/Video/Rooms', '/v1/Video/Rooms/{RoomSid}']],\n [['/v3/attachments/{attachmentId}', '/v3/attachments']],\n [['/v0.1/patrons', '/v0.2/patrons', '/v0.3/patrons/dependents']]\n]\n\ndf = pd.DataFrame(data, columns=['paths'])\n\nver = re.compile(r'/(v\\d(\\.\\d)?)/')\ndef getver(row):\n vsets = set()\n for p in row:\n chk = ver.search(p)\n vsets.add( chk.group(1) )\n return '/'.join(vsets)\n\ndf['Version'] = df.paths.apply(getver)\nprint(df)\n\nOutput:\n paths Version\n0 [/api/v2/clouds, /api/v2/clouds/{cloud}] v2\n1 [/v0.1/book-lists/{type}/{date}, /v0.1/book-li... v0.1\n2 [/v1/Video/Rooms, /v1/Video/Rooms/{RoomSid}] v1\n3 [/v3/attachments/{attachmentId}, /v3/attachments] v3\n4 [/v0.1/patrons, /v0.2/patrons, /v0.3/patrons/d... v0.2/v0.3/v0.1\n\n"
] |
[
5,
3
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074537690_pandas_python.txt
|
Q:
Handling data trees in python with changing node attributes
I have a tree where I want to dynamically add or remove nodes. For now I just want to focus on adding nodes. I want to create a Python class in such a way that adding one node will recalculate the attributes (x and y axis coordinates) of affected nodes. I have attached diagrams below to better explain the behaviour that I want to handle via a Python script.
steps for building tree
I am looking to create a Node class, but I am not sure how to create and handle the x values using methods. Can anyone shed some light? Any links to blogs?
A:
Probably there are libraries that do this. If you want to implement it yourself, the following approach may help:
Find the height of the tree. Use it to find the bottom line's y value.
Evenly distribute all leaf nodes at y. Group them by their parents. If needed, put some more gap between nodes with different parents.
Going one level up, place the parent of a group above the middle node(s). Calculating y should be trivial.
Repeat step 3 until all nodes are placed (see the sketch below).
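A minimal sketch of that approach, assuming a simple hypothetical Node class (none of these names come from an existing library); y is derived from depth here, which is equivalent to measuring from the bottom line up to a constant shift:
class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []
        self.x = self.y = 0.0

def layout(root, x_gap=1.0, y_gap=1.0):
    # Post-order walk: each leaf takes the next free x slot on its line,
    # and each parent is centred above its children (steps 2-4 above).
    next_x = [0.0]
    def place(node, depth):
        node.y = -depth * y_gap
        if not node.children:
            node.x = next_x[0]
            next_x[0] += x_gap
        else:
            for child in node.children:
                place(child, depth + 1)
            node.x = (node.children[0].x + node.children[-1].x) / 2
    place(root, 0)

Calling layout(root) again after adding or removing a node recomputes x and y for every affected node.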
|
Handling data trees in python with changing node attributes
|
I have a tree where I want to dynamically add or remove nodes. For now I just want to focus on adding nodes. I want to create python class in such a way that adding one node will recalculate attributes (x and y axis coordinates) of affected nodes. I have attached below diagrams to better explain the behaviour that I want to handle via python script.
steps for building tree
I am looking to create a Node class, but I am not sure how to create and handle the x values using methods. Can anyone shed some light? Any links to blogs?
|
[
"Probably there are libraries doing this. If you want to implement it yourself, following approach may help:\n\nFind the height of the tree. Use it to find the bottom line's y value.\nEvenly distribute all leaf nodes at y. Group them by their parents. If needed, put some more gap between nodes with different parents.\nGoing one level up, place the parent of a group above the middle node(s). Calculating y should be trivial.\nRepeat 3 until all nodes are placed.\n\n"
] |
[
0
] |
[] |
[] |
[
"nodes",
"python",
"tree"
] |
stackoverflow_0074537776_nodes_python_tree.txt
|
Q:
python multiprocessing spawn method's conflict when _fixup_main_from_path() is called
experts,
I am using python's multiprocessing's spawn method to spawn child processes.
The child process is a function call.
One issue I recently noticed is that when the child process is created,
it will try to run all the import statements from the main module via the call to _fixup_main_from_path().
I actually don't need those import statements at all.
It is causing conflicts in my child processes.
I am considering moving away from multiprocessing and using subprocess. Subprocess has its own problem though: it does not support calling a function directly, so I would have to package the executable as a binary instead of a shared library. So there is some additional work there.
My question would be,
is there any way to avoid the call to _fixup_main_from_path with the multiprocessing spawn method?
Or, with subprocess, can I make a function call instead of launching an executable?
Any other alternatives?
Thanks a lot.
Update: I drafted some sample code for my situation.
There are three files involved in this example.
I only have access to one of the files, my_lib.py.
There is something bad in file1.py but I don't have control over it.
file1.py:
import x1
# do something bad like call
folly::symbolizer::addFatalSignalCallback() # c++ in pybind
### my_lib.py:
import multiprocessing
def my_runner():
# run my awesome application
folly::symbolizer::addFatalSignalCallback() # c++ in pybind
def run_my_awesome_application():
process = multiprocessing.get_context("spawn").Process(
target= my_runner
) # multiprocessing.spawn._fixup_main_from_path is called.
process.start()
## main.py:
import file1
import my_lib
if __name__ == "__main__":
run_my_awesome_application()
A:
You said: We don't want my_lib.py to depend on file1.py or x1.py.
Your module already depends on x1.py because you need to import it so that you can call addFatalSignalCallback. So that should not be an issue. Not wanting a dependence on file1.py is understandable. If you are concerned about whether whatever script is importing my_lib.py may have caused x1.py to be imported and therefore there may have also already been a call to addFatalSignalCallback, then the following code will do a regular import of x1 if it has not already been imported or it will reload x1 on the assumption that addFatalSignalCallback() has been called by whoever imported x1:
import multiprocessing
def my_runner():
# I am assuming that this function is only called from within this
# module and as a spawned child process.
# Therefore, we know __name__ must be '__mp_main__' and so
# there is no need to check this. Otherwise, modify the following
# if statement to add `and __name__ == '__mp_main__'` to the condition:
if 'x1' in globals() and globals()['x1'].__class__.__name__ == 'module':
# module x1 has already been loaded
        import importlib
        importlib.reload(x1)
else:
# module x1 has not been loaded
import x1
# run my awesome application
addFatalSignalCallback()
def run_my_awesome_application():
process = multiprocessing.get_context("spawn").Process(
target= my_runner
)
process.start()
|
python multiprocessing spawn method's conflict when _fixup_main_from_path() is called
|
experts,
I am using python's multiprocessing's spawn method to spawn child processes.
The child process is a function call.
One issue I recently noticed is that when the child process is created,
it will try to run all the import statements from the main module via the call to _fixup_main_from_path().
I actually don't need those import statements at all.
It is causing conflicts in my child processes.
I am considering moving away from multiprocessing and using subprocess. Subprocess has its own problem though: it does not support calling a function directly, so I would have to package the executable as a binary instead of a shared library. So there is some additional work there.
My question would be,
is there any way to avoid the call to _fixup_main_from_path with the multiprocessing spawn method?
Or, with subprocess, can I make a function call instead of launching an executable?
Any other alternatives?
Thanks a lot.
Update: I drafted some sample code for my situation.
There are three files involved in this example.
I only have access to one of the files, my_lib.py.
There is something bad in file1.py but I don't have control over it.
file1.py:
import x1
# do something bad like call
folly::symbolizer::addFatalSignalCallback() # c++ in pybind
### my_lib.py:
import multiprocessing
def my_runner():
# run my awesome application
folly::symbolizer::addFatalSignalCallback() # c++ in pybind
def run_my_awesome_application():
process = multiprocessing.get_context("spawn").Process(
target= my_runner
) # multiprocessing.spawn._fixup_main_from_path is called.
process.start()
## main.py:
import file1
import my_lib
if __name__ == "__main__":
run_my_awesome_application()
|
[
"You said: We don't want my_lib.py to depend on file1.py or x1.py.\nYour module already depends on x1.py because you need to import it so that you can call addFataSignalCallback. So that should not be an issue. Not wanting a dependence on file1.py is understandable. If you are concerned about whether whatever script that is importing my_lib.py may have caused x1.py to be imported and therefore there may have also already been a call to addFatalSignalCallback, then the following code will do a regular import of x1 if it has not already been imported or it will reload x1 on the assumption that addFatalSignalCallback() has been called by whoever imported x1:\nimport multiprocessing\n\ndef my_runner():\n # I am assuming that this function is only called from within this\n # module and as a spawned child process.\n # Therefore, we know __name__ must be '__mp_main__' and so\n # there is no need to check this. Otherwise, modify the following\n # if statement to add `and __name__ == '__mp_main__'` to the condition:\n if 'x1' in globals() and globals()['x1'].__class__.__name__ == 'module':\n # module x1 has already been loaded\n import implib\n implib.reload(x1)\n else:\n # module x1 has not been loaded\n import x1\n\n # run my awesome application \n addFatalSignalCallback() \n\ndef run_my_awesome_application():\n process = multiprocessing.get_context(\"spawn\").Process(\n target= my_runner\n )\n process.start()\n\n"
] |
[
0
] |
[] |
[] |
[
"multiprocessing",
"python",
"subprocess"
] |
stackoverflow_0074502924_multiprocessing_python_subprocess.txt
|
Q:
How can I get the links of the apps from a certain developer? Till now I have scraped the web objects but am unable to get the actual links
I am trying to extract the links of all applications from a particular developer present on the Play Store.
import time
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver. common.by import By
driver = webdriver.Chrome (executable_path=ChromeDriverManager().install())
driver.get("https://play.google.com/store/apps/dev?id=5305197572942248936")
l1 = driver.find_elements(By.CLASS_NAME, 'ULeU3b')
A:
You are close to the solution.
Inside the elements you located there are a (anchor) elements containing the links.
All you need here is to wait for all those elements to become visible, get the list of those elements, iterate over the list, and extract the links.
The following code works:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
options.add_argument('--disable-notifications')
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 20)
url = "https://play.google.com/store/apps/dev?id=5305197572942248936"
driver.get(url)
links = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, ".ULeU3b a")))
for link in links:
print(link.get_attribute("href"))
The result is:
https://play.google.com/store/apps/details?id=com.tatamotors.eguruibcrm
https://play.google.com/store/apps/details?id=com.T1.Primarun
https://play.google.com/store/apps/details?id=com.tata.skoolman
https://play.google.com/store/apps/details?id=com.ttl.tatafleetman
|
How can I get the links of the apps from a certain developer? Till now I have scraped the web objects but am unable to get the actual links
|
I am trying to extract the links of all applications from a particular developer present on the Play Store.
import time
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver. common.by import By
driver = webdriver.Chrome (executable_path=ChromeDriverManager().install())
driver.get("https://play.google.com/store/apps/dev?id=5305197572942248936")
l1 = driver.find_elements(By.CLASS_NAME, 'ULeU3b')
|
[
"You are close to the solution.\nInside elements you located there are a elements containing the links.\nAll you need here is to wait for all those elements to become visible, get the list of those elements, iterate over the list and extract the links.\nThe following cod works:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\noptions.add_argument('--disable-notifications')\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 20)\n\nurl = \"https://play.google.com/store/apps/dev?id=5305197572942248936\"\ndriver.get(url)\n\nlinks = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, \".ULeU3b a\")))\nfor link in links:\n print(link.get_attribute(\"href\"))\n\nThe result is:\nhttps://play.google.com/store/apps/details?id=com.tatamotors.eguruibcrm\nhttps://play.google.com/store/apps/details?id=com.T1.Primarun\nhttps://play.google.com/store/apps/details?id=com.tata.skoolman\nhttps://play.google.com/store/apps/details?id=com.ttl.tatafleetman\n\n"
] |
[
0
] |
[] |
[] |
[
"css_selectors",
"python",
"selenium",
"selenium_webdriver",
"webdriverwait"
] |
stackoverflow_0074537588_css_selectors_python_selenium_selenium_webdriver_webdriverwait.txt
|
Q:
Using conda-index with S3
I am hosting my own private conda channel on S3, but I don't understand how to avoid mirroring all the packages on my local hard drive. The source of the problem is the repodata.json, channeldata.json, etc. files.
If I build a single package and copy just the .tar.bz2 file to S3, conda-install does not see it in the channel. In order for conda-build to see the package, I have to copy the package and all of the repodata.json, etc. metadata files created by conda-build to S3.
My conundrum is: these JSON metadata files do not appear to be updated properly unless all of the packages I've ever built appear in my local package directory. I now have more than 4GB of conda packages on my local hard drive so that I get the correct metadata in these JSON files.
Is there a way to continue building conda packages with my local machine while avoiding mirroring my private S3 conda channel on my local hard drive? For what it's worth, I realize that conda-index will build the JSON files, but it doesn't seem to work when you attempt to index a location on S3.
A:
One way to host a private conda channel in s3 and build packages there is to mount the s3 bucket as a FUSE filesystem (goofys works well for this task) and then run conda build and conda index locally.
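A minimal sketch of that workflow, assuming goofys is installed and AWS credentials are configured; the bucket name and mount point below are hypothetical:
import subprocess

# Mount the S3 bucket as a local directory, then index it in place.
# conda index reads the packages through the mount and writes the
# repodata.json/channeldata.json files straight back to S3.
subprocess.run(["goofys", "my-conda-bucket", "/mnt/conda-channel"], check=True)
subprocess.run(["conda", "index", "/mnt/conda-channel"], check=True)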
|
Using conda-index with S3
|
I am hosting my own private conda channel on S3, but I don't understand how to avoid mirroring all the packages on my local hard drive. The source of the problem is the repodata.json, channeldata.json, etc. files.
If I build a single package and copy just the .tar.bz2 file to S3, conda-install does not see it in the channel. In order for conda-build to see the package, I have to copy the package and all of the repodata.json, etc. metadata files created by conda-build to S3.
My conundrum is: these JSON metadata files do not appear to be updated properly unless all of the packages I've ever built appear in my local package directory. I now have more than 4GB of conda packages on my local hard drive so that I get the correct metadata in these JSON files.
Is there a way to continue building conda packages with my local machine while avoiding mirroring my private S3 conda channel on my local hard drive? For what it's worth, I realize that conda-index will build the JSON files, but it doesn't seem to work when you attempt to index a location on S3.
|
[
"One way to host a private conda channel in s3 and build packages there is to mount the s3 bucket as a FUSE filesystem (goofys works well for this task) and then run conda build and conda index locally.\n"
] |
[
0
] |
[] |
[] |
[
"amazon_s3",
"conda",
"python"
] |
stackoverflow_0053269465_amazon_s3_conda_python.txt
|
Q:
How can I remove the self-loop edges from a NetworkX undirected graph plot?
I thought this would be obvious but I can't seem to figure this out. I tried just making all of the self-loops a weight of 0. It looks like the order of the edges isn't preserved or something.
d = {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'weight': 100.0}, 'SRR9668968__METABAT2__P.1__bin.4': {'weight': 99.5976}}, 'SRR9668968__METABAT2__P.1__bin.4': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'weight': 99.5976}, 'SRR9668968__METABAT2__P.1__bin.4': {'weight': 100.0}}, 'SRR9668973__MAXBIN2-107__P.1__bin.001': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 99.1217}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 99.0443}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 99.1217}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 99.9955}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 100.0}}, 'SRR9668959__CONCOCT__P.1__18': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 99.0443}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 99.9955}}, 'SRR9668957__CONCOCT__P.1__5': {'SRR9668957__CONCOCT__P.1__5': {'weight': 100.0}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'weight': 100.0}, 'SRR9668968__CONCOCT__P.1__11': {'weight': 99.9584}}, 'SRR9668968__CONCOCT__P.1__11': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'weight': 99.9584}, 'SRR9668968__CONCOCT__P.1__11': {'weight': 100.0}}, 'SRR9668967__MAXBIN2-107__P.1__bin.001': {'SRR9668967__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'weight': 99.9547}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'SRR9668967__MAXBIN2-107__P.1__bin.001': {'weight': 99.9547}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'weight': 100.0}}, 'SRR9668973__CONCOCT__P.1__16_sub': {'SRR9668973__CONCOCT__P.1__16_sub': {'weight': 100.0}}, 'SRR9668960__CONCOCT__P.1__21': {'SRR9668960__CONCOCT__P.1__21': {'weight': 100.0}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 99.9627}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 98.9865}}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'SRR9668960__CONCOCT__P.1__21': {'weight': 99.9627}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 98.2802}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'SRR9668960__CONCOCT__P.1__21': {'weight': 98.9865}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 100.0}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 98.2802}}, 'SRR9668965__CONCOCT__P.1__23': {'SRR9668965__CONCOCT__P.1__23': {'weight': 100.0}, 'SRR9668961__METABAT2__P.1__bin.2': {'weight': 96.6062}}, 'SRR9668961__METABAT2__P.1__bin.2': {'SRR9668965__CONCOCT__P.1__23': {'weight': 96.6062}, 'SRR9668961__METABAT2__P.1__bin.2': {'weight': 100.0}}, 'SRR9668960__CONCOCT__P.1__3': {'SRR9668960__CONCOCT__P.1__3': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 99.7626}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 99.66}}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'SRR9668960__CONCOCT__P.1__3': {'weight': 99.7626}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 99.7424}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 100.0}}, 'SRR9668957__CONCOCT__P.1__38': {'SRR9668960__CONCOCT__P.1__3': {'weight': 99.66}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 99.7424}}, 'SRR9668959__METABAT2__P.1__bin.3': {'SRR9668959__METABAT2__P.1__bin.3': {'weight': 100.0}}, 
'PRJNA551026-COASSEMBLY__CONCOCT__P.1__49': {'PRJNA551026-COASSEMBLY__CONCOCT__P.1__49': {'weight': 100.0}}, 'SRR9668973__METABAT2__P.1__bin.6': {'SRR9668973__METABAT2__P.1__bin.6': {'weight': 100.0}}}
graph_prok = nx.from_dict_of_dicts(d)
weights = list()
for (node_a, node_b, w) in graph_prok.edges(data="weight"):
if node_a == node_b:
w = 0
weights.append(w)
weights = np.asarray(weights)*0.01
with plt.style.context("seaborn-white"):
fig, ax = plt.subplots(figsize=(8,8))
pos = nx.nx_agraph.graphviz_layout(graph_prok, prog="neato")
nx.draw_networkx_nodes(graph_prok,pos=pos, ax=ax)
nx.draw_networkx_edges(graph_prok,pos=pos, ax=ax, width=weights)#, connectionstyle="arc3,rad=0")
A:
Instead of adjusting the edge weights, you can just pass an edgelist to draw_networkx_edges() and it will plot only those edges. There is a convenience function that only returns selfloops from a list of edges, so putting that together, you get:
d = {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'weight': 100.0}, 'SRR9668968__METABAT2__P.1__bin.4': {'weight': 99.5976}}, 'SRR9668968__METABAT2__P.1__bin.4': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'weight': 99.5976}, 'SRR9668968__METABAT2__P.1__bin.4': {'weight': 100.0}}, 'SRR9668973__MAXBIN2-107__P.1__bin.001': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 99.1217}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 99.0443}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 99.1217}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 99.9955}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 100.0}}, 'SRR9668959__CONCOCT__P.1__18': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 99.0443}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 99.9955}}, 'SRR9668957__CONCOCT__P.1__5': {'SRR9668957__CONCOCT__P.1__5': {'weight': 100.0}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'weight': 100.0}, 'SRR9668968__CONCOCT__P.1__11': {'weight': 99.9584}}, 'SRR9668968__CONCOCT__P.1__11': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'weight': 99.9584}, 'SRR9668968__CONCOCT__P.1__11': {'weight': 100.0}}, 'SRR9668967__MAXBIN2-107__P.1__bin.001': {'SRR9668967__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'weight': 99.9547}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'SRR9668967__MAXBIN2-107__P.1__bin.001': {'weight': 99.9547}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'weight': 100.0}}, 'SRR9668973__CONCOCT__P.1__16_sub': {'SRR9668973__CONCOCT__P.1__16_sub': {'weight': 100.0}}, 'SRR9668960__CONCOCT__P.1__21': {'SRR9668960__CONCOCT__P.1__21': {'weight': 100.0}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 99.9627}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 98.9865}}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'SRR9668960__CONCOCT__P.1__21': {'weight': 99.9627}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 98.2802}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'SRR9668960__CONCOCT__P.1__21': {'weight': 98.9865}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 100.0}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 98.2802}}, 'SRR9668965__CONCOCT__P.1__23': {'SRR9668965__CONCOCT__P.1__23': {'weight': 100.0}, 'SRR9668961__METABAT2__P.1__bin.2': {'weight': 96.6062}}, 'SRR9668961__METABAT2__P.1__bin.2': {'SRR9668965__CONCOCT__P.1__23': {'weight': 96.6062}, 'SRR9668961__METABAT2__P.1__bin.2': {'weight': 100.0}}, 'SRR9668960__CONCOCT__P.1__3': {'SRR9668960__CONCOCT__P.1__3': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 99.7626}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 99.66}}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'SRR9668960__CONCOCT__P.1__3': {'weight': 99.7626}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 99.7424}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 100.0}}, 'SRR9668957__CONCOCT__P.1__38': {'SRR9668960__CONCOCT__P.1__3': {'weight': 99.66}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 99.7424}}, 'SRR9668959__METABAT2__P.1__bin.3': {'SRR9668959__METABAT2__P.1__bin.3': {'weight': 100.0}}, 
'PRJNA551026-COASSEMBLY__CONCOCT__P.1__49': {'PRJNA551026-COASSEMBLY__CONCOCT__P.1__49': {'weight': 100.0}}, 'SRR9668973__METABAT2__P.1__bin.6': {'SRR9668973__METABAT2__P.1__bin.6': {'weight': 100.0}}}
graph_prok = nx.from_dict_of_dicts(d)
edgelist = [e for e in graph_prok.edges if e not in nx.selfloop_edges(graph_prok)]
with plt.style.context("seaborn-white"):
fig, ax = plt.subplots(figsize=(8,8))
pos = nx.nx_agraph.graphviz_layout(graph_prok, prog="neato")
nx.draw_networkx_nodes(graph_prok,pos=pos, ax=ax)
nx.draw_networkx_edges(graph_prok,
edgelist=edgelist,
pos=pos, ax=ax)#, connectionstyle="arc3,rad=0")
which plots the graph without the self-loop edges.
|
How can I remove the self-loop edges from a NetworkX undirected graph plot?
|
I thought this would be obvious but I can't seem to figure this out. I tried just making all of the self-loops a weight of 0. It looks like the order of the edges isn't preserved or something.
d = {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'weight': 100.0}, 'SRR9668968__METABAT2__P.1__bin.4': {'weight': 99.5976}}, 'SRR9668968__METABAT2__P.1__bin.4': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'weight': 99.5976}, 'SRR9668968__METABAT2__P.1__bin.4': {'weight': 100.0}}, 'SRR9668973__MAXBIN2-107__P.1__bin.001': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 99.1217}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 99.0443}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 99.1217}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 99.9955}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 100.0}}, 'SRR9668959__CONCOCT__P.1__18': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 99.0443}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 99.9955}}, 'SRR9668957__CONCOCT__P.1__5': {'SRR9668957__CONCOCT__P.1__5': {'weight': 100.0}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'weight': 100.0}, 'SRR9668968__CONCOCT__P.1__11': {'weight': 99.9584}}, 'SRR9668968__CONCOCT__P.1__11': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'weight': 99.9584}, 'SRR9668968__CONCOCT__P.1__11': {'weight': 100.0}}, 'SRR9668967__MAXBIN2-107__P.1__bin.001': {'SRR9668967__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'weight': 99.9547}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'SRR9668967__MAXBIN2-107__P.1__bin.001': {'weight': 99.9547}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'weight': 100.0}}, 'SRR9668973__CONCOCT__P.1__16_sub': {'SRR9668973__CONCOCT__P.1__16_sub': {'weight': 100.0}}, 'SRR9668960__CONCOCT__P.1__21': {'SRR9668960__CONCOCT__P.1__21': {'weight': 100.0}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 99.9627}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 98.9865}}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'SRR9668960__CONCOCT__P.1__21': {'weight': 99.9627}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 98.2802}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'SRR9668960__CONCOCT__P.1__21': {'weight': 98.9865}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 100.0}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 98.2802}}, 'SRR9668965__CONCOCT__P.1__23': {'SRR9668965__CONCOCT__P.1__23': {'weight': 100.0}, 'SRR9668961__METABAT2__P.1__bin.2': {'weight': 96.6062}}, 'SRR9668961__METABAT2__P.1__bin.2': {'SRR9668965__CONCOCT__P.1__23': {'weight': 96.6062}, 'SRR9668961__METABAT2__P.1__bin.2': {'weight': 100.0}}, 'SRR9668960__CONCOCT__P.1__3': {'SRR9668960__CONCOCT__P.1__3': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 99.7626}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 99.66}}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'SRR9668960__CONCOCT__P.1__3': {'weight': 99.7626}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 99.7424}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 100.0}}, 'SRR9668957__CONCOCT__P.1__38': {'SRR9668960__CONCOCT__P.1__3': {'weight': 99.66}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 99.7424}}, 'SRR9668959__METABAT2__P.1__bin.3': {'SRR9668959__METABAT2__P.1__bin.3': {'weight': 100.0}}, 
'PRJNA551026-COASSEMBLY__CONCOCT__P.1__49': {'PRJNA551026-COASSEMBLY__CONCOCT__P.1__49': {'weight': 100.0}}, 'SRR9668973__METABAT2__P.1__bin.6': {'SRR9668973__METABAT2__P.1__bin.6': {'weight': 100.0}}}
graph_prok = nx.from_dict_of_dicts(d)
weights = list()
for (node_a, node_b, w) in graph_prok.edges(data="weight"):
if node_a == node_b:
w = 0
weights.append(w)
weights = np.asarray(weights)*0.01
with plt.style.context("seaborn-white"):
fig, ax = plt.subplots(figsize=(8,8))
pos = nx.nx_agraph.graphviz_layout(graph_prok, prog="neato")
nx.draw_networkx_nodes(graph_prok,pos=pos, ax=ax)
nx.draw_networkx_edges(graph_prok,pos=pos, ax=ax, width=weights)#, connectionstyle="arc3,rad=0")
|
[
"Instead of adjusting the edge weights, you can just pass an edgelist to draw_networkx_edges() and it will plot only those edges. There is a convenience function that only returns selfloops from a list of edges, so putting that together, you get:\nd = {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'weight': 100.0}, 'SRR9668968__METABAT2__P.1__bin.4': {'weight': 99.5976}}, 'SRR9668968__METABAT2__P.1__bin.4': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.24': {'weight': 99.5976}, 'SRR9668968__METABAT2__P.1__bin.4': {'weight': 100.0}}, 'SRR9668973__MAXBIN2-107__P.1__bin.001': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 99.1217}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 99.0443}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 99.1217}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 99.9955}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 100.0}}, 'SRR9668959__CONCOCT__P.1__18': {'SRR9668973__MAXBIN2-107__P.1__bin.001': {'weight': 99.0443}, 'SRR9668959__CONCOCT__P.1__18': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.27': {'weight': 99.9955}}, 'SRR9668957__CONCOCT__P.1__5': {'SRR9668957__CONCOCT__P.1__5': {'weight': 100.0}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'weight': 100.0}, 'SRR9668968__CONCOCT__P.1__11': {'weight': 99.9584}}, 'SRR9668968__CONCOCT__P.1__11': {'PRJNA551026-COASSEMBLY__METABAT2__P.1__bin.9': {'weight': 99.9584}, 'SRR9668968__CONCOCT__P.1__11': {'weight': 100.0}}, 'SRR9668967__MAXBIN2-107__P.1__bin.001': {'SRR9668967__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'weight': 99.9547}}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'SRR9668967__MAXBIN2-107__P.1__bin.001': {'weight': 99.9547}, 'PRJNA551026-COASSEMBLY__METABAT2__P.2__bin.3': {'weight': 100.0}}, 'SRR9668973__CONCOCT__P.1__16_sub': {'SRR9668973__CONCOCT__P.1__16_sub': {'weight': 100.0}}, 'SRR9668960__CONCOCT__P.1__21': {'SRR9668960__CONCOCT__P.1__21': {'weight': 100.0}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 99.9627}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 98.9865}}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'SRR9668960__CONCOCT__P.1__21': {'weight': 99.9627}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 98.2802}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 100.0}}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'SRR9668960__CONCOCT__P.1__21': {'weight': 98.9865}, 'PRJNA551026-COASSEMBLY__MAXBIN2-40__P.2__bin.002': {'weight': 100.0}, 'SRR9668957__MAXBIN2-107__P.1__bin.001': {'weight': 98.2802}}, 'SRR9668965__CONCOCT__P.1__23': {'SRR9668965__CONCOCT__P.1__23': {'weight': 100.0}, 'SRR9668961__METABAT2__P.1__bin.2': {'weight': 96.6062}}, 'SRR9668961__METABAT2__P.1__bin.2': {'SRR9668965__CONCOCT__P.1__23': {'weight': 96.6062}, 'SRR9668961__METABAT2__P.1__bin.2': {'weight': 100.0}}, 'SRR9668960__CONCOCT__P.1__3': {'SRR9668960__CONCOCT__P.1__3': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 99.7626}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 99.66}}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'SRR9668960__CONCOCT__P.1__3': {'weight': 99.7626}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 99.7424}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 100.0}}, 'SRR9668957__CONCOCT__P.1__38': 
{'SRR9668960__CONCOCT__P.1__3': {'weight': 99.66}, 'SRR9668957__CONCOCT__P.1__38': {'weight': 100.0}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__5': {'weight': 99.7424}}, 'SRR9668959__METABAT2__P.1__bin.3': {'SRR9668959__METABAT2__P.1__bin.3': {'weight': 100.0}}, 'PRJNA551026-COASSEMBLY__CONCOCT__P.1__49': {'PRJNA551026-COASSEMBLY__CONCOCT__P.1__49': {'weight': 100.0}}, 'SRR9668973__METABAT2__P.1__bin.6': {'SRR9668973__METABAT2__P.1__bin.6': {'weight': 100.0}}}\ngraph_prok = nx.from_dict_of_dicts(d)\n\nedgelist = [e for e in graph_prok.edges if e not in nx.selfloop_edges(graph_prok)]\n\nwith plt.style.context(\"seaborn-white\"):\n fig, ax = plt.subplots(figsize=(8,8))\n pos = nx.nx_agraph.graphviz_layout(graph_prok, prog=\"neato\")\n nx.draw_networkx_nodes(graph_prok,pos=pos, ax=ax)\n nx.draw_networkx_edges(graph_prok,\n edgelist=edgelist,\n pos=pos, ax=ax)#, connectionstyle=\"arc3,rad=0\")\n\nwhich plots:\n\n"
] |
[
0
] |
[] |
[] |
[
"graph",
"matplotlib",
"networkx",
"python"
] |
stackoverflow_0074537283_graph_matplotlib_networkx_python.txt
|
Q:
The output is not coming as expected and the values are getting returned as None
class Dog:
def bark (self ):
print("The dogo is barking")
return
def yolo (self):
print ("Munni badnam")
return
d= Dog()
print(d.bark())
print(d.yolo())
This is my code
and this is my output -
The dogo is barking
None
Munni badnam
None
A:
This will do what you expect.
class Dog:
def bark (self ):
return "The dogo is barking"
def yolo (self):
return "Munni badnam"
d= Dog()
print(d.bark())
print(d.yolo())
As a very general rule, a function like that should not have side effects (like printing stuff). Just let the function return its value, and allow the caller to decide what to DO with the value.
|
The output is not coming as expected and the values are getting returned as None
|
class Dog:
def bark (self ):
print("The dogo is barking")
return
def yolo (self):
print ("Munni badnam")
return
d= Dog()
print(d.bark())
print(d.yolo())
This is my code
and this is my output -
The dogo is barking
None
Munni badnam
None
|
[
"This will do what you expect.\nclass Dog:\n def bark (self ):\n return \"The dogo is barking\"\n\n def yolo (self):\n return \"Munni badnam\"\n \nd= Dog()\nprint(d.bark())\nprint(d.yolo())\n\nAs a very general rule, a function like that should not have side effects (like printing stuff). Just let the function return its value, and allow the caller to decide what to DO with the value.\n"
] |
[
0
] |
[] |
[] |
[
"object",
"python"
] |
stackoverflow_0074537991_object_python.txt
|
Q:
How to unpack only values from nested dict in for loop
I have the following code
mydict = {
"key": {
"k1": "v1",
"k2": "v2",
}
}
for k, (v1, v2) in mydict.items():
v1 and v2 actually equal k1 and k2; is there a way to extract v1 and v2 with any unpacking syntax?
I tried searching for unpacking syntax but found nothing.
A:
No need to complicate things trying to unpack the values. Here is a workaround:
for k, (v1, v2) in mydict.items():
    print("Access the values for the key:", k, "--->", mydict[k][v1], mydict[k][v2])
output:
Access the values for the key: key ---> v1 v2
A:
It is possible like this:
mydict = {
"key": {
"k1": "v1",
"k2": "v2",
}
}
v1, v2 = mydict.popitem()[1].values()
print(v1, v2)
(Iterating with for v1, v2 in ....values() would unpack the two-character strings 'v1' and 'v2' themselves, so the values view is unpacked directly instead.)
|
How to unpack only values from nested dict in for loop
|
I have the following code
mydict = {
"key": {
"k1": "v1",
"k2": "v2",
}
}
for k, (v1, v2) in mydict.items():
v1 and v2 actually equal k1 and k2; is there a way to extract v1 and v2 with any unpacking syntax?
I tried searching for unpacking syntax but found nothing.
|
[
"No need to complicate on trying to unpack values. Here is a workaround\nfor k, (v1, v2) in mydict.items():\n print(\"Access the values for the key:\", k, \"--->\", mydict[k][v1], mydict[k][v2])\n\noutput:\nAccessing the values for the key: key ---> v1 v2\n\n",
"It is possible like this:\nmydict = {\n \"key\": {\n \"k1\": \"v1\",\n \"k2\": \"v2\",\n }\n}\n\nfor v1, v2 in mydict.popitem()[1].values():\n print(v1, v2)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"argument_unpacking",
"dictionary",
"python",
"python_3.x"
] |
stackoverflow_0074533047_argument_unpacking_dictionary_python_python_3.x.txt
|