Q:
How to put/stream data into an Excel file on sftp
What works
With the following code, I can write the content of TheList into a CSV on an SFTP.
import paramiko
import csv
# code part to make and open sftp connection
TheList = [['name', 'address'], [ 'peter', 'london']]
with sftp.open(SftpPath + "anewfile.csv", mode='w', bufsize=32768) as csvfile:
    writer = csv.writer(csvfile, delimiter=',')
    writer.writerows(TheList)
What doesn't work
With the following code, the Excel file is created on the SFTP, but it is empty. What is wrong?
import paramiko
import xlsxwriter
# code part to make and open sftp connection
TheList = [['name', 'address'], [ 'peter', 'london']]
with sftp.open(SftpPath + "anewfile.xlsx", mode='wb', bufsize=32768) as f:
    workbook = xlsxwriter.Workbook(f)
    worksheet = workbook.add_worksheet()
    for row_num, data in enumerate(TheList):
        worksheet.write_row(row_num, 0, data)
A:
You need to close the Workbook. Either using the with statement:
with sftp.open(SftpPath + "anewfile.xlsx", mode='wb', bufsize=32768) as f, \
        xlsxwriter.Workbook(f) as workbook:
    worksheet = workbook.add_worksheet()
    for row_num, data in enumerate(TheList):
        worksheet.write_row(row_num, 0, data)
Or call Workbook.close explicitly:
workbook.close()
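A quick local sketch of why the close matters, using only the standard-library zipfile module as a stand-in (an .xlsx file is itself a ZIP archive, and an archive's central directory is only written out on close):

```python
import io
import zipfile

# io.BytesIO stands in for the remote SFTP file object
buf = io.BytesIO()
zf = zipfile.ZipFile(buf, mode="w")
zf.writestr("sheet.xml", "<data/>")

before = len(buf.getvalue())  # local header + data already written
zf.close()                    # close() appends the central directory
after = len(buf.getvalue())

print(before < after)  # True: the archive is only finalized on close
```

Without the close, the buffer holds an incomplete archive, which is why the uploaded .xlsx appears empty or corrupt.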
Q:
How do I add Multiple Python Interactive Windows in VS Code?
I am trying to open a second IPython interactive window in my VS Code environment.
A:
Sorry to say, but currently there can only be one Interactive Window open at a time. We do have an issue filed on allowing multiple windows here:
https://github.com/Microsoft/vscode-python/issues/3104
You can upvote or comment on it if you would like.
A:
The answer above is no longer up to date. VS Code now supports multiple interactive windows: open Settings and search for "interactive window mode".
You can select from single, multiple, or per-file.
See https://visualstudiomagazine.com/articles/2020/08/13/vs-code-python.aspx
Q:
Python: Get multiple id's from checkbox
I want to get multiple ids from a list using checkboxes. I got this error:
Field 'id' expected a number but got [].
Below is my code.
sample.html
<button href="/sample/save">Save</button>
{% for obj in queryset %}
<tr>
<td><input type="checkbox" name="sid" value="{{obj.id}}"></td>
<td>{{ obj.sample_name }}</td>
<td>{{ obj.sample_type}}</td>
<td>{{ obj.number}}</td>
</tr>
{% endfor %}
views.py
def sample(request):
    if request.method == 'GET':
        queryset = SampleList.objects.all()
        return render(request, 'lab_management/sample.html', {'queryset': queryset})

def save_doc(request):
    sid = request.POST.getlist('sid')
    sample = SampleList.objects.filter(id=sid)[0:10]
    template = DocxTemplate("doc.docx")
    context = {
        'headers': ['Name', 'Type', 'Number'],
        'doc': [],
    }
    for samp in sample:
        list = [samp.name, samp.type, samp.number]
        context['doc'].append(list)
    template.render(context)
    template.save('new_doc.docx')
A:
The field id expects a single number, but you passed a list; that's why you got the error:
Field 'id' expected a number but got [].
Here you can use the in field lookup. Try this:
sample = SampleList.objects.filter(id__in=sid)[0:10]
This will return all the SampleList items whose id is in sid.
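Outside Django, the id__in lookup is just a membership test. A plain-Python sketch (rows and sid here are made-up stand-ins for the queryset and the POSTed checkbox values, which arrive as strings from request.POST.getlist):

```python
rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}, {"id": 3, "name": "c"}]
sid = ["1", "3"]  # checkbox values come in as strings

# id__in=sid behaves like: keep the rows whose id is in the submitted list
selected = [r for r in rows if str(r["id"]) in sid]
print([r["id"] for r in selected])  # [1, 3]
```

The ORM additionally coerces the string values to the field's type, which the plain filter above does by hand with str().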
Update
Change your context to
context = {
    'headers': ['Name', 'Type', 'Number'],
    'doc': sample,
}
then remove this for loop
# for samp in sample:
# list = [samp.name, samp.type, samp.number]
# context['doc'].append(list)
and in your template
{% for obj in doc %}
<tr>
<td><input type="checkbox" name="sid" value="{{obj.id}}"></td>
<td>{{ obj.name }}</td>
<td>{{ obj.type}}</td>
<td>{{ obj.number}}</td>
</tr>
{% endfor %}
NB: this assumes name, type and number are the field names of the SampleList model.
Q:
ValueError: cannot switch from manual field specification to automatic field numbering
The class:
class Book(object):
    def __init__(self, title, author):
        self.title = title
        self.author = author

    def get_entry(self):
        return "{0} by {1} on {}".format(self.title, self.author, self.press)
Create an instance of my book from it:
In [72]: mybook = Book('HTML','Lee')
In [75]: mybook.title
Out[75]: 'HTML'
In [76]: mybook.author
Out[76]: 'Lee'
Note that I didn't initialize the attribute self.press, although it is used in the get_entry method. I go ahead and type in the data:
mybook.press = 'Murach'
mybook.price = 'download'
Up to now, I can inspect all the data I have entered with vars:
In [77]: vars(mybook)
Out[77]: {'author': 'Lee', 'title': 'HTML',...}
I hand-typed a lot of data about mybook in the console. When I try to call the get_entry method, an error is reported:
mybook.get_entry()
ValueError: cannot switch from manual field specification to automatic field numbering.
All of this happens in interactive mode on the console. I care about the data I have entered and want to go on to pickle the mybook object to a file, but the method is flawed. How can I rescue it from within the interactive session, or do I have to start all over again?
A:
return "{0} by {1} on {}".format(self.title, self.author, self.press)
That doesn't work. If you specify positions, you have to do it for every field:
return "{0} by {1} on {2}".format(self.title, self.author, self.press)
In your case, the best option is to let Python number the fields automatically:
return "{} by {} on {}".format(self.title, self.author, self.press)
A:
print ("{0:.1f} and the other no {0:.2f}".format(a,b))
python cannot do both manual and automatic precision handling (field numbering) in a single execution of code. You can either go for specifying the field numbering for each variable or let python do it automatically for all.
A:
You can also get a properly aligned table if, instead of format, you use an f-string. For example:
for name, branch, year in college:
    print(f"{name:{10}} {branch:{20}} {year:{12}}")
which prints an aligned table along the lines of:
name       branch               year
ankit      cse                  2
vijay      ece                  4
raj        IT                   1
A:
Escape the special characters, e.g. '{}' -> '{{}}'.
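For completeness, a doubled-brace sketch — this applies only when you want literal braces in the output, not as a fix for the numbering error above:

```python
# Doubled braces produce literal braces; single braces remain format fields
print("literal {{}} and value {}".format(42))  # literal {} and value 42
```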
Q:
How to conditionally declare code according to Python version in Cython?
I have the following pxd header which augments a regular Python module:
#!/usr/bin/env python
# coding: utf-8
cimport cython

@cython.locals(media_type=unicode, format=unicode, charset=unicode, render_style=unicode)
cdef class BaseRenderer(object):
    """
    All renderers should extend this class, setting the `media_type`
    and `format` attributes, and override the `.render()` method.
    """
    @cython.locals(indent=int, separators=tuple)
    cpdef object render(self, dict data, accepted_media_type=?, renderer_context=?)

@cython.locals(compact=bool, ensure_ascii=bool, charset=unicode)
cdef class JSONRenderer(BaseRenderer):
    @cython.locals(base_media_type=unicode, params=dict)
    cpdef int get_indent(self, unicode accepted_media_type, dict renderer_context)

@cython.locals(callback_parameter=unicode, default_callback=unicode)
cdef class JSONPRenderer(JSONRenderer):
    cpdef unicode get_callback(self, dict renderer_context)
In Python 2, the render() method might return str, bytes, or unicode, but in Python 3 it is guaranteed that the method will return unicode.
I am unable to import the Python version #define called PY_MAJOR_VERSION that can be found in the Python/patchlevel.h header.
I tried to include it using:
cdef extern from "patchlevel.h":
pass
But the definition is not available. The include path is correctly set to /usr/include/pythonx.x/.
How do I branch this code at compilation according to the Python major version?
A:
The Python version constants are in https://github.com/cython/cython/blob/master/Cython/Includes/cpython/version.pxd
You can include them with cimport cpython.version and use them with either compile time IF or a runtime if.
Be careful: if you want to distribute the generated C code without requiring Cython to be installed, using a compile-time IF will make your code non-portable, because the generated code will only match certain Python versions.
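A minimal sketch of the runtime variant, assuming the cpython.version cimport works as described (the function and its body are illustrative only, not part of the original pxd):

```cython
from cpython.version cimport PY_MAJOR_VERSION

cdef object normalize(object rendered):
    # Runtime branch on the interpreter's major version. PY_MAJOR_VERSION is a
    # C-level constant, so the C compiler can typically fold the dead branch away.
    if PY_MAJOR_VERSION >= 3:
        return rendered  # always unicode on Python 3
    return rendered.decode('utf-8') if isinstance(rendered, bytes) else rendered
```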
A:
Unfortunately it is not possible to use compile-time IF with PY_MAJOR_VERSION or similar; Cython only allows those compile-time constants described in the documentation.
Q:
Showing an error to indicate missing keys in a dictionary using Python
I am trying the code below to detect missing keys in a dictionary. It should work so that if the user tries to access a missing key, an error is raised indicating the missing key.
# missing value error
# initializing Dictionary
d = { 'a' : 1 , 'b' : 2 }
# trying to output value of absent key
print ("The value associated with 'c' is : ")
print (d['c'])
I am getting the error below:
Traceback (most recent call last):
File "46a9aac96614587f5b794e451a8f4f5f.py", line 9, in
print (d['c'])
KeyError: 'c'
A:
In the above example, there is no key named 'c' in the dictionary, which raised a runtime error. To avoid such conditions, and to make the user aware that a particular key is absent, or to return a default message in its place, we can use get().
The get(key, def_val) method is useful when we have to check for a key. If the key is present, the value associated with it is returned; otherwise the def_val passed as an argument is returned.
eg :
country_code = {'India' : '0091',
                'Australia' : '0025',
                'Nepal' : '00977'}
# search dictionary for country code of India
print(country_code.get('India', 'Not Found'))
# search dictionary for country code of Japan
print(country_code.get('Japan', 'Not Found'))
output:
0091
Not Found
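If the goal is the opposite — an explicit, informative error rather than a default value — a try/except sketch over the question's dictionary:

```python
d = {'a': 1, 'b': 2}

try:
    value = d['c']
except KeyError as missing:
    # str(missing) is the repr of the missing key
    print("The dictionary has no key named", missing)  # The dictionary has no key named 'c'
```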
Q:
Pandas merging rows on two unique column values
I have a problem that I have been trying to find a solution for. You would think it wouldn't be that hard to figure out.
I have a pandas DataFrame with the below format:
Id Name Now Then There Sold Needed
0 1 Caden 8.1 3.40 3.95 NaN NaN
1 7 Bankist NaN 2.45 2.20 NaN NaN
2 1 Artistes 8.1 3.40 3.95 NaN NaN
0 1 NaN NaN NaN NaN 33.75 670,904
1 7 NaN NaN NaN NaN 33.75 670,904
I would like to have the DataFrame merge its rows based on the 'Id' column so that it looks like this:
Id Name Now Then There Sold Needed
0 1 Caden 8.1 3.40 3.95 33.75 670,904
1 7 Bankist NaN 2.45 2.20 33.75 670,904
2 1 Artistes 8.1 3.40 3.95 33.75 670,904
As you can see, the 'Id' column has two rows with Id 1, each of which has a unique 'Name'. I have not been able to figure out how to phrase the question in a way that turns up sample code. So far I have tried different methods and failed, including various combinations of merge, join, and concat. The best result has led to the current DataFrame with NaN values.
I am trying to get the 'Sold' and 'Needed' columns (which have only one value each) aligned with the appropriate 'Id' row when there are repeating Ids.
A:
here is one way to do it
# using groupby on Id, backfill the Sold and Needed where values are null
df[['Sold','Needed']] = df.groupby(['Id'], as_index=False)[['Sold','Needed']].bfill()
# drop the rows that has Null in a name
out=df.dropna(subset='Name')
out
Id Name Now Then There Sold Needed
0 1 Caden 8.1 3.40 3.95 33.75 670,904
1 7 Bankist NaN 2.45 2.20 33.75 670,904
2 1 Artistes 8.1 3.40 3.95 33.75 670,904
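A reproducible sketch of this approach on the question's data (the frame below is typed in by hand from the post, with the non-essential Now/Then/There columns omitted for brevity):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Id':     [1, 7, 1, 1, 7],
    'Name':   ['Caden', 'Bankist', 'Artistes', np.nan, np.nan],
    'Sold':   [np.nan, np.nan, np.nan, 33.75, 33.75],
    'Needed': [np.nan, np.nan, np.nan, '670,904', '670,904'],
})

# Backfill Sold/Needed within each Id group, then drop the value-only rows
df[['Sold', 'Needed']] = df.groupby('Id')[['Sold', 'Needed']].bfill()
out = df.dropna(subset=['Name'])
print(out)
```

Backfilling works here because the Sold/Needed values sit on later rows than the named rows of the same Id; if they came first, a forward fill (ffill) would be needed instead.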
A:
Make a copy, separate the columns, drop rows where all values are nan, then merge:
*Assuming your dataframe is df1
df2=df1.copy()
df1.drop(['Sold', 'Needed'],axis=1,inplace=True)
df2.drop(['Name', 'Now', 'Then', 'There'],axis=1,inplace=True)
df1.dropna(subset=["Name","Now","Then","There"], inplace=True, how='all', axis='index')
df2.dropna(subset=["Sold","Needed"], inplace=True, how='all', axis='index')
newdf=df1.merge(df2,how='left',left_on='Id',right_on='Id')
A:
Another possible solution:
(df.iloc[:, :5].dropna(axis=0, subset=df.columns[1:5], how='all')
.merge(df.iloc[:, [0, -2, -1]].dropna(axis=0), on='Id'))
Output:
Id Name Now Then There Sold Needed
0 1 Caden 8.1 3.40 3.95 33.75 670,904
1 1 Artistes 8.1 3.40 3.95 33.75 670,904
2 7 Bankist NaN 2.45 2.20 33.75 670,904
Q:
How to implement a Priority Queue in numba where one element of the item is a list of tuples?
I am trying to create a PriorityQueue in numba for a very specific task. To achieve that, I need the nodes to have an element which is a list of tuples. However, when I try to do that, it raises an error that I don't understand.
(Most of the implementation of PriorityQueue taken from How can I implement a numba jitted priority queue?)
import typing
from heapq import heappush, heappop
import numba as nb
from numba.experimental import jitclass
itemType = nb.typed.List.empty_list(nb.types.Tuple((nb.types.int64, nb.types.int64)))
entry_def = (0.0, 0, nb.typed.List([(0,0)]))
entry_type = nb.typeof(entry_def)
@jitclass
class PriorityQueue:
    pq: typing.List[entry_type]
    id: int
    entry: entry_type

    def __init__(self):
        self.pq = nb.typed.List.empty_list((0.0, 0, nb.typed.List([(0,0)])))

    def put(self, priority: float, id: int, item: itemType):
        entry = (priority, id, item)
        heappush(self.pq, entry)

    def pop(self):
        if self.pq:
            priority, id, item = heappop(self.pq)
            return priority, id, item
        raise KeyError("pop from an empty priority queue")
The functionality that I need to achieve:
>>> q = PriorityQueue()
>>> q.put(5.0, 1, [(0,1)])
>>> q.put(2.0, 2, [(0,1), (1,2)])
>>> q.put(3.0, 3, [(0,1), (0,1), (1,1)])
>>> node = q.pop()
>>> node
(2.0, 2, [(0, 1), (1, 2)])
Here is the error that I am getting:
Traceback (most recent call last):
File "C:\VS Code\myproject.py", line 34, in <module>
q.put(5.0, 1, [(0,1)])
File "C:\Users\Me\AppData\Local\Programs\Python\Python310\lib\site-packages\numba\experimental\jitclass\boxing.py", line 61, in wrapper
return method(*args, **kwargs)
File "C:\Users\Me\AppData\Local\Programs\Python\Python310\lib\site-packages\numba\core\dispatcher.py", line 468, in _compile_for_args
error_rewrite(e, 'typing')
File "C:\Users\Me\AppData\Local\Programs\Python\Python310\lib\site-packages\numba\core\dispatcher.py", line 409, in error_rewrite
raise e.with_traceback(None)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
- Resolution failure for literal arguments:
Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<built-in function heappush>) found for signature:
>>> heappush(ListType[Tuple(float64, int64, ListType[UniTuple(int64 x 2)])], Tuple(float64, int64, reflected list(UniTuple(int64 x 2))<iv=None>))
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload in function 'heappush': File: numba\cpython\heapq.py: Line 150.
With argument(s): '(ListType[Tuple(float64, int64, ListType[UniTuple(int64 x 2)])], Tuple(float64, int64, reflected list(UniTuple(int64 x 2))<iv=None>))':
Rejected as the implementation raised a specific error:
TypingError: heap type must be the same as item type
raised from C:\Users\Me\AppData\Local\Programs\Python\Python310\lib\site-packages\numba\cpython\heapq.py:119
During: resolving callee type: Function(<built-in function heappush>)
During: typing of call at C:\VS Code\myproject.py (24)
File "myproject.py", line 24:
def put(self, priority: float, id: int, item: itemType):
<source elided>
entry = (priority, id, item)
heappush(self.pq, entry)
^
- Resolution failure for non-literal arguments:
None
During: resolving callee type: BoundFunction((<class 'numba.core.types.misc.ClassInstanceType'>, 'put') for instance.jitclass.PriorityQueue#228d8eebd30<pq:ListType[Tuple(float64, int64, ListType[UniTuple(int64 x 2)])],id:int64,entry:Tuple(float64, int64, ListType[UniTuple(int64 x 2)])>)
During: typing of call at <string> (3)
File "<string>", line 3:
<source missing, REPL/exec in use?>
A:
The problem is that the inferred type for a Python list is a reflected list, but your lists are Numba typed lists. If you do:
q.put(5.0, 1, nb.typed.List([(0, 1)]))
q.put(2.0, 2, nb.typed.List([(0, 1), (1, 2)]))
q.put(3.0, 3, nb.typed.List([(0, 1), (0, 1), (1, 1)]))
then your code runs to completion and produces:
(2.0, 2, ListType[UniTuple(int64 x 2)]([(0, 1), (1, 2)]))
Q:
Intel Vtune cannot find python source file
This is an old problem, as demonstrated in https://community.intel.com/t5/Analyzers/Unable-to-view-source-code-when-analyzing-results/td-p/1153210. I have tried all the listed methods, none of them works, and I cannot find any more solutions on the internet. Basically, VTune cannot find the custom Python source file no matter what is tried. I am using the most recent version as of writing. Please let me know whether there is a solution.
For example, if you run the following program.
def myfunc(*args):
# Do a lot of things.
if __name__ == '__main__':
# Do something and call myfunc
Call this script main.py. Now use the newest VTune version (I am using Ubuntu 18.04), run vtune-gui and a basic hotspots analysis. You will not find any information on this file. However, a huge pile of information on Python and its other code is found (related to your Python environment). In theory, you should be able to find the source of main.py as well as the cost of each line in that script. However, that is simply not happening.
Desired behavior: I would really like to find the source file and function in the top-down view (or any, really). Any advice is welcome.
A:
VTune offers full support for profiling Python code, and the tool should be able to display the source code in your Python file as you expect. Could you please check whether the function you are expecting to see in the VTune results ran long enough?
Just to confirm that everything is working fine, I wrote a matrix multiplication code as shown below (don't worry about the accuracy of the code itself):
def matrix_mul(X, Y):
result_matrix = [ [ 1 for i in range(len(X)) ] for j in range(len(Y[0])) ]
# iterate through rows of X
for i in range(len(X)):
# iterate through columns of Y
for j in range(len(Y[0])):
# iterate through rows of Y
for k in range(len(Y)):
result_matrix[i][j] += X[i][k] * Y[k][j]
return result_matrix
Then I called this function (matrix_mul) on my Ubuntu machine with large enough matrices so that the overall execution time was in the order of few seconds.
I used the below command to start profiling (you can also see the VTune version I used):
/opt/intel/oneapi/vtune/2021.1.1/bin64/vtune -collect hotspots -knob enable-stack-collection=true -data-limit=500 -ring-buffer=10 -app-working-dir /usr/bin -- python3 /home/johnypau/MyIntel/temp/Python_matrix_mul/mat_mul_method.py
Now open the VTune results in the GUI and under the bottom-up tab, order by "Module / Function / Call-stack" (or whatever preferred grouping is).
You should be able to see the module (mat_mul_method.py in my case) and the function "matrix_mul". If you double-click, VTune should be able to load the sources too.
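For reference, a corrected and runnable sketch of the same workload (sizing the result as rows-of-X by columns-of-Y and initializing cells with 0, so it computes the actual product) that can be used to generate a profile:

```python
# Plain-Python matrix multiplication, useful as a CPU-bound workload
# for profiling; a sketch, not the answer's exact script.
def matrix_mul(X, Y):
    rows, cols, inner = len(X), len(Y[0]), len(Y)
    result = [[0 for _ in range(cols)] for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                result[i][j] += X[i][k] * Y[k][j]
    return result

print(matrix_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Scaling the input matrices up (e.g. a few hundred rows) makes the function run long enough for VTune's sampling to attribute time to it.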
|
Intel Vtune cannot find python source file
|
This is an old problem, as demonstrated in https://community.intel.com/t5/Analyzers/Unable-to-view-source-code-when-analyzing-results/td-p/1153210. I have tried all the listed methods, none of them works, and I cannot find any more solutions on the internet. Basically, VTune cannot find the custom Python source file no matter what is tried. I am using the most recent version as of writing. Please let me know whether there is a solution.
For example, if you run the following program.
def myfunc(*args):
# Do a lot of things.
if __name__ == '__main__':
# Do something and call myfunc
Call this script main.py. Now use the newest VTune version (I am using Ubuntu 18.04), run vtune-gui and a basic hotspots analysis. You will not find any information on this file. However, a huge pile of information on Python and its other code is found (related to your Python environment). In theory, you should be able to find the source of main.py as well as the cost of each line in that script. However, that is simply not happening.
Desired behavior: I would really like to find the source file and function in the top-down view (or any, really). Any advice is welcome.
|
[
"VTune offer full support for profiling python code and the tool should be able to display the source code in your python file as you expected. Could you please check if the function you are expecting to see in the VTune results, ran long enough?\nJust to confirm that everything is working fine, I wrote a matrix multiplication code as shown below (don't worry about the accuracy of the code itself):\n\ndef matrix_mul(X, Y):\n\n result_matrix = [ [ 1 for i in range(len(X)) ] for j in range(len(Y[0])) ]\n\n # iterate through rows of X\n\n for i in range(len(X)):\n\n # iterate through columns of Y\n\n for j in range(len(Y[0])):\n\n # iterate through rows of Y\n\n for k in range(len(Y)):\n\n result_matrix[i][j] += X[i][k] * Y[k][j]\n\n\n\n return result_matrix\n\n\nThen I called this function (matrix_mul) on my Ubuntu machine with large enough matrices so that the overall execution time was in the order of few seconds.\nI used the below command to start profiling (you can also see the VTune version I used):\n/opt/intel/oneapi/vtune/2021.1.1/bin64/vtune -collect hotspots -knob enable-stack-collection=true -data-limit=500 -ring-buffer=10 -app-working-dir /usr/bin -- python3 /home/johnypau/MyIntel/temp/Python_matrix_mul/mat_mul_method.py\nNow open the VTune results in the GUI and under the bottom-up tab, order by \"Module / Function / Call-stack\" (or whatever preferred grouping is).\nYou should be able to see the the module (mat_mul_method.py in my case) and the function \"matrix_mul\". If you double click, VTune should be able to load the sources too.\n"
] |
[
0
] |
[] |
[] |
[
"intel",
"intel_vtune",
"profiling",
"python"
] |
stackoverflow_0065447496_intel_intel_vtune_profiling_python.txt
|
Q:
KeyError: "None of [Index(['...', '...'], dtype='object')] are in the [index]"
Can someone help identify the problem?
I have written this code below:
import numpy as np
import pandas as pd
retail = pd.read_csv('online_retail2.csv')
retail.groupby(['Country','Description'])['Quantity','Price'].agg([np.mean,max])
retail.loc[('Australia','DOLLY GIRL BEAKER'),('Quantity','mean')]
The groupby function has output:
Out[36]:
Quantity Price
mean max mean max
Country Description
Australia DOLLY GIRL BEAKER 200.0 200 1.08 1.08
I LOVE LONDON MINI BACKPACK 4.0 4 4.15 4.15
10 COLOUR SPACEBOY PEN 48.0 48 0.85 0.85
12 PENCIL SMALL TUBE WOODLAND 384.0 384 0.55 0.55
12 PENCILS SMALL TUBE RED SPOTTY 24.0 24 0.65 0.65
... ... ... ...
West Indies VINTAGE BEAD PINK SCARF 3.0 3 7.95 7.95
WHITE AND BLUE CERAMIC OIL BURNER 6.0 6 1.25 1.25
WOODLAND PARTY BAG + STICKER SET 1.0 1 1.65 1.65
WOVEN BERRIES CUSHION COVER 2.0 2 4.95 4.95
WOVEN FROST CUSHION COVER 2.0 2 4.95 4.95
[30696 rows x 4 columns]
while the .loc function resulted in the below error:
KeyError: "None of [Index(['Australia', 'DOLLY GIRL BEAKER'], dtype='object')] are in the [index]"
A:
I think it's because you are not saving the result of groupby+aggregation to a new variable (groupby+aggregation is not an inplace operation, i.e. it will create a new dataframe and you need to save it otherwise it will just compute and print the result). Basically with your current code you're trying to index your initial dataframe retail which causes the error.
You can modify your code as follows:
import numpy as np
import pandas as pd
retail = pd.read_csv('online_retail2.csv')
retail_aggregated = retail.groupby(['Country','Description'])[['Quantity','Price']].agg([np.mean,max])
Then you can index your aggregated dataframe as you want:
retail_aggregated.loc[('Australia','DOLLY GIRL BEAKER'),('Quantity','mean')]
Edit: add a full working example
import numpy as np
import pandas as pd
import random
random.seed(123)
np.random.seed(123)
# Here I generate a random dataframe
retail = pd.DataFrame({
"Country": [random.choice(["Australia", "West Indies"]) for _ in range(100)],
"Description": [random.choice([
"DOLLY GIRL BEAKER", "DOLLY GIRL BEAKER", "COLOUR SPACEBOY PEN", "VINTAGE BEAD PINK SCARF", "WOODLAND PARTY BAG + STICKER SET"
]) for _ in range(100)],
"Quantity": np.random.randint(1, 10, 100),
"Price": np.random.randint(1, 100, 100),
})
# Then I groupby and compute aggregate
retail_gp = retail.groupby(['Country','Description'])[['Quantity','Price']].agg([np.mean,max])
retail_gp.loc[('Australia','DOLLY GIRL BEAKER'),('Quantity','mean')]
Output:
4.894736842105263
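For a quick deterministic check of the same pattern (a tiny hypothetical frame instead of the CSV, and string aggregation names instead of np.mean):

```python
import pandas as pd

# Save the aggregated frame, then index its MultiIndex rows/columns with .loc.
retail = pd.DataFrame({
    "Country": ["Australia", "Australia", "West Indies"],
    "Description": ["DOLLY GIRL BEAKER", "DOLLY GIRL BEAKER",
                    "VINTAGE BEAD PINK SCARF"],
    "Quantity": [100, 300, 3],
    "Price": [1.08, 1.08, 7.95],
})

retail_gp = (retail.groupby(["Country", "Description"])[["Quantity", "Price"]]
                   .agg(["mean", "max"]))
print(retail_gp.loc[("Australia", "DOLLY GIRL BEAKER"), ("Quantity", "mean")])  # 200.0
```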
|
KeyError: "None of [Index(['...', '...'], dtype='object')] are in the [index]"
|
Can someone help identify the problem?
I have written this code below:
import numpy as np
import pandas as pd
retail = pd.read_csv('online_retail2.csv')
retail.groupby(['Country','Description'])['Quantity','Price'].agg([np.mean,max])
retail.loc[('Australia','DOLLY GIRL BEAKER'),('Quantity','mean')]
The groupby function has output:
Out[36]:
Quantity Price
mean max mean max
Country Description
Australia DOLLY GIRL BEAKER 200.0 200 1.08 1.08
I LOVE LONDON MINI BACKPACK 4.0 4 4.15 4.15
10 COLOUR SPACEBOY PEN 48.0 48 0.85 0.85
12 PENCIL SMALL TUBE WOODLAND 384.0 384 0.55 0.55
12 PENCILS SMALL TUBE RED SPOTTY 24.0 24 0.65 0.65
... ... ... ...
West Indies VINTAGE BEAD PINK SCARF 3.0 3 7.95 7.95
WHITE AND BLUE CERAMIC OIL BURNER 6.0 6 1.25 1.25
WOODLAND PARTY BAG + STICKER SET 1.0 1 1.65 1.65
WOVEN BERRIES CUSHION COVER 2.0 2 4.95 4.95
WOVEN FROST CUSHION COVER 2.0 2 4.95 4.95
[30696 rows x 4 columns]
while the .loc function resulted in the below error:
KeyError: "None of [Index(['Australia', 'DOLLY GIRL BEAKER'], dtype='object')] are in the [index]"
|
[
"I think it's because you are not saving the result of groupby+aggregation to a new variable (groupby+aggregation is not an inplace operation, i.e. it will create a new dataframe and you need to save it otherwise it will just compute and print the result). Basically with your current code you're trying to index your initial dataframe retail which causes the error.\nYou can modify your code as follows :\nimport numpy as np\nimport pandas as pd\n\n\nretail = pd.read_csv('online_retail2.csv')\n\nretail_aggregated = retail.groupby(['Country','Description'])[['Quantity','Price']].agg([np.mean,max])\n\nThen you can index your aggregated dataframe as you want :\nretail_aggregated.loc[('Australia','DOLLY GIRL BEAKER'),('Quantity','mean')]\n\nEdit : add a full working example\nimport numpy as np\nimport pandas as pd\nimport random\nrandom.seed(123)\nnp.random.seed(123)\n\n\n# Here I generate a random dataframe\nretail = pd.DataFrame({\n \"Country\": [random.choice([\"Australia\", \"West Indies\"]) for _ in range(100)],\n \"Description\": [random.choice([\n \"DOLLY GIRL BEAKER\", \"DOLLY GIRL BEAKER\", \"COLOUR SPACEBOY PEN\", \"VINTAGE BEAD PINK SCARF\", \"WOODLAND PARTY BAG + STICKER SET\"\n ]) for _ in range(100)],\n \"Quantity\": np.random.randint(1, 10, 100),\n \"Price\": np.random.randint(1, 100, 100),\n})\n\n# Then I groupby and compute aggregate\n\nretail_gp = retail.groupby(['Country','Description'])[['Quantity','Price']].agg([np.mean,max])\nretail_gp.loc[('Australia','DOLLY GIRL BEAKER'),('Quantity','mean')]\n\nOutput :\n4.894736842105263\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python",
"python_3.x"
] |
stackoverflow_0074472853_pandas_python_python_3.x.txt
|
Q:
AttributeError: 'numpy.float64' object has no attribute 'cpu'
I am trying to run BERT and train a model using pytorch.
I am not sure why I am getting this error after finishing the first Epoch.
I am using this code link
history = defaultdict(list)
best_accuracy = 0
for epoch in range(EPOCHS):
# Show details
print(f"Epoch {epoch + 1}/{EPOCHS}")
print("-" * 10)
train_acc, train_loss = train_epoch(
model,
train_data_loader,
loss_fn,
optimizer,
device,
scheduler,
len(df_train)
)
print(f"Train loss {train_loss} accuracy {train_acc}")
# Get model performance (accuracy and loss)
val_acc, val_loss = eval_model(
model,
val_data_loader,
loss_fn,
device,
len(df_val)
)
print(f"Val loss {val_loss} accuracy {val_acc}")
print()
history['train_acc'].append(train_acc.cpu())
history['train_loss'].append(train_loss.cpu())
history['val_acc'].append(val_acc.cpu())
history['val_loss'].append(val_loss.cpu())
# If we beat prev performance
if val_acc > best_accuracy:
torch.save(model.state_dict(), 'best_model_state.bin')
best_accuracy = val_acc
Here is the output and the error message
[screenshot of the output and error traceback omitted]
It is my first time working with PyTorch. Any ideas how to fix the error?
A:
I checked the Kaggle link and I see that there is no cpu() reference as you have posted in your code. It should simply be:
history['train_acc'].append(train_acc)
history['train_loss'].append(train_loss)
history['val_acc'].append(val_acc)
history['val_loss'].append(val_loss)
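If the same notebook must also run when the metrics come back as torch tensors, a small duck-typed helper avoids the AttributeError either way (a sketch; to_scalar is not part of the original code):

```python
def to_scalar(x):
    # Call .cpu() / .item() only when the object actually has them, so plain
    # Python floats, numpy.float64 values and torch tensors are all accepted.
    if hasattr(x, "cpu"):
        x = x.cpu()
    if hasattr(x, "item"):
        x = x.item()
    return float(x)

print(to_scalar(0.87))  # 0.87
```

Then `history['train_acc'].append(to_scalar(train_acc))` works regardless of what train_epoch returns.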
|
AttributeError: 'numpy.float64' object has no attribute 'cpu'
|
I am trying to run BERT and train a model using pytorch.
I am not sure why I am getting this error after finishing the first Epoch.
I am using this code link
history = defaultdict(list)
best_accuracy = 0
for epoch in range(EPOCHS):
# Show details
print(f"Epoch {epoch + 1}/{EPOCHS}")
print("-" * 10)
train_acc, train_loss = train_epoch(
model,
train_data_loader,
loss_fn,
optimizer,
device,
scheduler,
len(df_train)
)
print(f"Train loss {train_loss} accuracy {train_acc}")
# Get model performance (accuracy and loss)
val_acc, val_loss = eval_model(
model,
val_data_loader,
loss_fn,
device,
len(df_val)
)
print(f"Val loss {val_loss} accuracy {val_acc}")
print()
history['train_acc'].append(train_acc.cpu())
history['train_loss'].append(train_loss.cpu())
history['val_acc'].append(val_acc.cpu())
history['val_loss'].append(val_loss.cpu())
# If we beat prev performance
if val_acc > best_accuracy:
torch.save(model.state_dict(), 'best_model_state.bin')
best_accuracy = val_acc
Here is the output and the error message
[screenshot of the output and error traceback omitted]
It is my first time working with PyTorch. Any ideas how to fix the error?
|
[
"I checked kaggle link and I see that there is no cpu() reference as you have posted in your code. It should simply be:\nhistory['train_acc'].append(train_acc)\nhistory['train_loss'].append(train_loss)\nhistory['val_acc'].append(val_acc)\nhistory['val_loss'].append(val_loss)\n\n"
] |
[
1
] |
[] |
[] |
[
"bert_language_model",
"python",
"pytorch"
] |
stackoverflow_0074473271_bert_language_model_python_pytorch.txt
|
Q:
Run and follow remote Python script execution from Django website
I am running a Django website where users can perform some light calculations.
This website is hosted in a Docker container on one of our servers.
I would like now to add the ability for users to run some more complicated simulations from the same website. These simulations will have to run on a dedicated calculation machine (they will run in parallel for several hours/days) under Ubuntu Server on the same network.
What would be the best way to achieve this? Send the calculations to the calculation server and have the results sent back automatically to Django?
How can I follow the status of the calculation (waiting, calculating, finished) from the Django instance?
Should I use a job scheduler on the calculation server?
This is close to this question that was asked in 2014, so there might be more up-to-date solutions.
A:
"Best" depends on lots of local decisions that we can't help with.
Django can use Python subprocess to execute any Linux shell command. So once you decide on how to submit a job to the local machine from the command line, you can do it from your server. (Note, it may need a way to specify a linux user corresponding to the Django request.user)
You can also provide an API for updating status and returning results to the server. That'll be little different to a regular POST view, possibly with a FileField / request.FILES for sending results files.
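A minimal sketch of the subprocess side (the echo command here is a placeholder for whatever submission command the calculation machine exposes):

```python
import subprocess

def submit_job(command_args):
    # Run the submission command and capture its output; in a Django view
    # this would be called with parameters derived from the request.
    completed = subprocess.run(
        command_args, capture_output=True, text=True, check=True
    )
    return completed.stdout.strip()

print(submit_job(["echo", "job queued"]))  # job queued
```

check=True raises CalledProcessError on a non-zero exit code, which the view can translate into an error status for the user.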
|
Run and follow remote Python script execution from Django website
|
I am running a Django website where users can perform some light calculations.
This website is hosted in a Docker container on one of our servers.
I would like now to add the ability for users to run some more complicated simulations from the same website. These simulations will have to run on a dedicated calculation machine (they will run in parallel for several hours/days) under Ubuntu Server on the same network.
What would be the best way to achieve this? Send the calculations to the calculation server and have the results sent back automatically to Django?
How can I follow the status of the calculation (waiting, calculating, finished) from the Django instance?
Should I use a job scheduler on the calculation server?
This is close to this question that was asked in 2014, so there might be more up-to-date solutions.
|
[
"\"Best\" depends on lots of local decisions that we can't help with.\nDjango can use Python subprocess to execute any Linux shell command. So once you decide on how to submit a job to the local machine from the command line, you can do it from your server. (Note, it may need a way to specify a linux user corresponding to the Django request.user)\nYou can also provide an API for updating status and returning results to the server. That'll be little different to a regular POST view, possibly with a FileField / request.FILES for sending results files.\n"
] |
[
1
] |
[] |
[] |
[
"django",
"hpc",
"python"
] |
stackoverflow_0074468489_django_hpc_python.txt
|
Q:
JavaScript: Parse python class
I upload a Python file with a Python class (via a React form).
myfile.py
class MyClass():
"""
a: string - parameter a
"""
Is there any way I can get the annotations or list the params from the file with the class using JavaScript (without regex)?
A:
If you insist on not using regex, there is only one other way: parse the file using a Python parser.
You can do it in pure JavaScript. It may be possible, for example, with dt-python-parser to visit the nodes you are interested in, but I have not tested it.
Or you can use Python code: either you call your own server or an online service (if such exists). If you have access to a Python interpreter, from there you could use the ast Python standard library to parse .py files.
The overblown solution would be to embed a Python interpreter through WASM into your JavaScript app to parse the uploaded file.
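For illustration, here is what the ast route looks like on the class from the question (the uploaded file's contents are inlined as a string for this sketch):

```python
import ast

source = '''
class MyClass():
    """
    a: string - parameter a
    """
'''

# Walk the parsed tree and pull out every class with its docstring.
tree = ast.parse(source)
classes = [node for node in ast.walk(tree) if isinstance(node, ast.ClassDef)]
for cls in classes:
    print(cls.name, "->", ast.get_docstring(cls))  # MyClass -> a: string - parameter a
```

ast.get_docstring also normalizes indentation, so the annotation lines come back clean.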
|
JavaScript: Parse python class
|
I upload a Python file with a Python class (via a React form).
myfile.py
class MyClass():
"""
a: string - parameter a
"""
Is there any way I can get the annotations or list the params from the file with the class using JavaScript (without regex)?
|
[
"If you insist on not using regex, there are only one other way : parse the file using a python parser.\nYou can do it in pure Javascript. It may be possible for example with dt-python-parser to visitor the nodes you are interested in, but I have not tested it.\nOr you can use Python code. Either you call your own server or an online service (if such exists). If you have access to a Python interpreter, from there you could use the ast Python standard library to parse py files.\nThe overblown solution would be to embed a Python interpreter through WASM into your Javascript app, to parse the uploaded file.\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"parsing",
"python"
] |
stackoverflow_0074458913_javascript_parsing_python.txt
|
Q:
How to remove the all values of a specific person from dataframe which is not continuous based on date time
date consumption customer_id
2018-01-01 12 111
2018-01-02 12 111
*2018-01-03* 14 111
*2018-01-05* 12 111
2018-01-06 45 111
2018-01-07 34 111
2018-01-01 23 112
2018-01-02 23 112
2018-01-03 45 112
2018-01-04 34 112
2018-01-05 23 112
2018-01-06 34 112
2018-01-01 23 113
2018-01-02 34 113
2018-01-03 45 113
2018-01-04 34 113
The values for customer 111 are not continuous: there is a missing value on 2018-01-04,
so I want to remove all rows for customer 111 from my dataframe in pandas.
date consumption customer_id
2018-01-01 23 112
2018-01-02 23 112
2018-01-03 45 112
2018-01-04 34 112
2018-01-05 23 112
2018-01-06 34 112
2018-01-01 23 113
2018-01-02 34 113
2018-01-03 45 113
2018-01-04 34 113
I want a result like this. How can I achieve it in pandas?
A:
You can compute the successive delta and check if any is greater than 1d:
drop = (pd.to_datetime(df['date'])
.groupby(df['customer_id'])
.apply(lambda s: s.diff().gt('1d').any())
)
out = df[df['customer_id'].isin(drop[~drop].index)]
Or with groupby.filter:
df['date'] = pd.to_datetime(df['date'])
out = (df.groupby(df['customer_id'])
.filter(lambda d: ~d['date'].diff().gt('1d').any())
)
Output:
date consumption customer_id
6 2018-01-01 23 112
7 2018-01-02 23 112
8 2018-01-03 45 112
9 2018-01-04 34 112
10 2018-01-05 23 112
11 2018-01-06 34 112
12 2018-01-01 23 113
13 2018-01-02 34 113
14 2018-01-03 45 113
15 2018-01-04 34 113
If the dates are not necessarily increasing, also check that you cannot go back in time:
df['date'] = pd.to_datetime(df['date'])
out = (df.groupby(df['customer_id'])
.filter(lambda d: d['date'].diff().iloc[1:].eq('1d').all())
)
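A self-contained run of the groupby.filter approach on a trimmed copy of the sample data (hypothetical minimal frame; pd.Timedelta is used explicitly for the one-day threshold):

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["2018-01-01", "2018-01-02", "2018-01-03", "2018-01-05",
             "2018-01-01", "2018-01-02", "2018-01-03"],
    "consumption": [12, 12, 14, 12, 23, 23, 45],
    "customer_id": [111, 111, 111, 111, 112, 112, 112],
})
df["date"] = pd.to_datetime(df["date"])

# Keep only customers with no gap larger than one day between consecutive dates.
out = df.groupby("customer_id").filter(
    lambda d: not d["date"].diff().gt(pd.Timedelta("1d")).any()
)
print(sorted(out["customer_id"].unique()))  # [112]
```

Customer 111 has a two-day jump (2018-01-03 to 2018-01-05) and is dropped in full.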
|
How to remove the all values of a specific person from dataframe which is not continuous based on date time
|
date consumption customer_id
2018-01-01 12 111
2018-01-02 12 111
*2018-01-03* 14 111
*2018-01-05* 12 111
2018-01-06 45 111
2018-01-07 34 111
2018-01-01 23 112
2018-01-02 23 112
2018-01-03 45 112
2018-01-04 34 112
2018-01-05 23 112
2018-01-06 34 112
2018-01-01 23 113
2018-01-02 34 113
2018-01-03 45 113
2018-01-04 34 113
The values for customer 111 are not continuous: there is a missing value on 2018-01-04,
so I want to remove all rows for customer 111 from my dataframe in pandas.
date consumption customer_id
2018-01-01 23 112
2018-01-02 23 112
2018-01-03 45 112
2018-01-04 34 112
2018-01-05 23 112
2018-01-06 34 112
2018-01-01 23 113
2018-01-02 34 113
2018-01-03 45 113
2018-01-04 34 113
I want a result like this. How can I achieve it in pandas?
|
[
"You can compute the successive delta and check if any is greater than 1d:\ndrop = (pd.to_datetime(df['date'])\n .groupby(df['customer_id'])\n .apply(lambda s: s.diff().gt('1d').any())\n )\n\nout = df[df['customer_id'].isin(drop[~drop].index)]\n\nOr with groupby.filter:\ndf['date'] = pd.to_datetime(df['date'])\n\nout = (df.groupby(df['customer_id'])\n .filter(lambda d: ~d['date'].diff().gt('1d').any())\n )\n\nOutput:\n date consumption customer_id\n6 2018-01-01 23 112\n7 2018-01-02 23 112\n8 2018-01-03 45 112\n9 2018-01-04 34 112\n10 2018-01-05 23 112\n11 2018-01-06 34 112\n12 2018-01-01 23 113\n13 2018-01-02 34 113\n14 2018-01-03 45 113\n15 2018-01-04 34 113\n\nIf you the dates are not necessarily increasing, also check you cannot go back in time:\ndf['date'] = pd.to_datetime(df['date'])\n\nout = (df.groupby(df['customer_id'])\n .filter(lambda d: d['date'].diff().iloc[1:].eq('1d').all())\n )\n\n"
] |
[
0
] |
[] |
[] |
[
"data_preprocessing",
"dataframe",
"datetime",
"pandas",
"python"
] |
stackoverflow_0074473320_data_preprocessing_dataframe_datetime_pandas_python.txt
|
Q:
Compare tables A and B, and if there are duplicate values, insert the code defined in table B
I am trying to compare the following two tables.
After comparing the words in table B with the words in table A, I want to put the code of the overlapping value in the empty Code column of table A.
Since it is not case-sensitive, I want to change all words to lower case before proceeding with the comparison.
If they don't match, I want to skip inserting a code.
There are about 10000 pieces of data
I haven't been able to solve this for 2 days. Please help me!!
Table A

Code    Title
        Cholera
        Intestinal infection due to other Vibrio
        Typhoid fever
        Typhoid peritonitis
        Paratyphoid fever
        Infections due to other Salmonella
        Salmonella enteritis

Table B

Code    Title
1A00    Cholera
1A01    Intestinal infection due to other Vibrio
1A02    Intestinal infections due to Shigella
1A07    Typhoid fever
1A07.0  Typhoid peritonitis
1A07.Y  Other specified typhoid fever
1A07.Z  Typhoid fever, unspecified

Result table

Code    Title
1A00    Cholera
1A01    Intestinal infection due to other Vibrio
1A07    Typhoid fever
1A07.0  Typhoid peritonitis
        Paratyphoid fever
        Infections due to other Salmonella
        Salmonella enteritis
A:
First create a lowercase key column in both dataframes,
then just do a standard merge on it:
df1['key'] = df1['Title'].str.lower()
df2['key'] = df2['Title'].str.lower()

df3 = pd.merge(
     df1,
     df2,
     how = 'left',
     on = 'key',
     suffixes = ['_x', '']
 )[['Code', 'Title_x']].rename(columns = {'Title_x': 'Title'})
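A minimal end-to-end sketch using a few rows from the tables above (the casing of table B's titles is varied here on purpose to exercise the case-insensitive match; the lowercase key is added first and dropped afterwards, and table A's empty Code column is dropped before merging):

```python
import pandas as pd

table_a = pd.DataFrame({
    "Code": [None, None, None],
    "Title": ["Cholera", "Typhoid peritonitis", "Paratyphoid fever"],
})
table_b = pd.DataFrame({
    "Code": ["1A00", "1A07.0"],
    "Title": ["CHOLERA", "Typhoid Peritonitis"],  # hypothetical casing variants
})

# Case-insensitive join key.
table_a["key"] = table_a["Title"].str.lower()
table_b["key"] = table_b["Title"].str.lower()

result = (
    table_a.drop(columns="Code")              # table A's Code column is empty
    .merge(table_b[["key", "Code"]], how="left", on="key")
    .drop(columns="key")[["Code", "Title"]]
)
print(result)
```

Unmatched titles keep NaN in the Code column, i.e. no code is inserted for them.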
|
Compare tables A and B, and if there are duplicate values, insert the code defined in table B
|
I am trying to compare the following two tables.
After comparing the words in table B with the words in table A, I want to put the code of the overlapping value in the empty Code column of table A.
Since it is not case-sensitive, I want to change all words to lower case before proceeding with the comparison.
If they don't match, I want to skip inserting a code.
There are about 10000 pieces of data
I haven't been able to solve this for 2 days. Please help me!!
Table A

Code    Title
        Cholera
        Intestinal infection due to other Vibrio
        Typhoid fever
        Typhoid peritonitis
        Paratyphoid fever
        Infections due to other Salmonella
        Salmonella enteritis

Table B

Code    Title
1A00    Cholera
1A01    Intestinal infection due to other Vibrio
1A02    Intestinal infections due to Shigella
1A07    Typhoid fever
1A07.0  Typhoid peritonitis
1A07.Y  Other specified typhoid fever
1A07.Z  Typhoid fever, unspecified

Result table

Code    Title
1A00    Cholera
1A01    Intestinal infection due to other Vibrio
1A07    Typhoid fever
1A07.0  Typhoid peritonitis
        Paratyphoid fever
        Infections due to other Salmonella
        Salmonella enteritis
|
[
"First create a new column with the lower case\nthen just do a standard merge\ndf3 = pd.merge(\n df1,\n df2,\n how = 'left',\n on = 'cat',\n suffixes = ['_x', '']\n )[['Code', 'Title_x']].rename(columns = {'Title_x': 'Title'})\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074473396_dataframe_pandas_python.txt
|
Q:
How to loop forward and backward in a range in python?
[drawing of the oscillating motion omitted]
I want to program the motion described in the drawing above. The angle changes according to this equation: theta = Amp*np.sin(2*np.pi*ftheta*p). I am looping through p (time), and that is the only variable in this equation; nothing else changes. How do I make it stop once it reaches the amplitude and start going in the reverse direction until it hits -(amplitude)?
import numpy as np
import matplotlib.pyplot as plt
import math
r=20
h=1.7
num_of_steps=100
emp=3
phi = []
theta = []
time=np.arange(0,100,1)
fphi = 1
ftheta = 1
Amp = 90
for j in time:
kampas = np.degrees(2*np.pi*fphi*j)
kitaskampas = np.degrees(np.sin(2*np.pi*ftheta*j))
if kampas > 360:
temp = math.floor(kampas/360)
sukasi = round(kampas - 360*temp)
print(sukasi)
phi.append(sukasi)
if kitaskampas == Amp:
print(phi)
A:
Have you tried to write some code yourself? If so, please share it so we can help.
As far as I understood while loop can work for you:
exit_flag = 0
time = 0
theta = 0
amplitude = 1
while(exit_flag == 0):
if(amplitude > 0):
time = time + 1
else:
time = time - 1
theta = some_equasion_dependning_of_time(time)
    if(we_wanna_exit):
exit_flag = 1
A:
I couldn't understand what is really going on with the kampas variable. However, this is a minimal example of moving between two limits. You can place your calculations where we change the value variable.
direction = 1
_range = 90
unit_of_change = 7
value = 0
def looper():
global direction, value
if(value < -_range):
#if value(theta) hits -amplitude change its direction
direction = 1
if(value > _range):
#if value(theta) hits +amplitude change its direction reverse
direction = 0
#make required calculations with your theta
if direction:
value += unit_of_change
else:
value -= unit_of_change
unit_of_process = 200
#run for a while
while unit_of_process:
print(value, end=" ")
looper()
unit_of_process -= 1
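The bounce-between-limits pattern can also be checked deterministically with a tiny triangle-wave helper (a sketch with made-up amplitude and step values, not the asker's sin-based equation):

```python
def triangle(amp, step, n):
    # Sweep a value up to +amp, then back down to -amp, repeatedly.
    values, value, direction = [], 0, 1
    for _ in range(n):
        values.append(value)
        value += direction * step
        if value >= amp or value <= -amp:
            direction = -direction
    return values

print(triangle(90, 30, 8))  # [0, 30, 60, 90, 60, 30, 0, -30]
```

The direction flips exactly when a limit is reached, which is the behavior both answers above describe.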
|
How to loop forward and backward in a range in python?
|
[drawing of the oscillating motion omitted]
I want to program the motion described in the drawing above. The angle changes according to this equation: theta = Amp*np.sin(2*np.pi*ftheta*p). I am looping through p (time), and that is the only variable in this equation; nothing else changes. How do I make it stop once it reaches the amplitude and start going in the reverse direction until it hits -(amplitude)?
import numpy as np
import matplotlib.pyplot as plt
import math
r=20
h=1.7
num_of_steps=100
emp=3
phi = []
theta = []
time=np.arange(0,100,1)
fphi = 1
ftheta = 1
Amp = 90
for j in time:
kampas = np.degrees(2*np.pi*fphi*j)
kitaskampas = np.degrees(np.sin(2*np.pi*ftheta*j))
if kampas > 360:
temp = math.floor(kampas/360)
sukasi = round(kampas - 360*temp)
print(sukasi)
phi.append(sukasi)
if kitaskampas == Amp:
print(phi)
|
[
"have you tried to write some code yourself? If it is so, please share so we can help.\nAs far as I understood while loop can work for you:\nexit_flag = 0\ntime = 0\ntheta = 0\namplitude = 1\nwhile(exit_flag == 0):\n if(amplitude > 0):\n time = time + 1\n else:\n time = time - 1\n \n theta = some_equasion_dependning_of_time(time)\n \n if(we_wanna_exit)\n exit_flag = 1\n\n",
"I couldn't understand what really is going with kampas variable. However, this is a minimal example to move between two limit. You can place your calculations where we change value variable.\ndirection = 1\n_range = 90\nunit_of_change = 7\nvalue = 0\ndef looper():\n global direction, value\n \n if(value < -_range):\n #if value(theta) hits -amplitude change its direction\n direction = 1\n if(value > _range):\n #if value(theta) hits +amplitude change its direction reverse\n direction = 0\n\n #make required calculations with your theta\n if direction:\n value += unit_of_change\n else:\n value -= unit_of_change\n\n\nunit_of_process = 200\n\n#run for a while\nwhile unit_of_process:\n print(value, end=\" \")\n looper()\n unit_of_process -= 1\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"angle",
"loops",
"python"
] |
stackoverflow_0074472274_angle_loops_python.txt
|
Q:
can I 'inner-search' most similar vectors within a FAISS index?
I have a FAISS index populated with 8M embedding vectors. I don't have the embedding vectors anymore, only the index, and it is expensive to recompute the embeddings.
Can I search the index for the top-k most similar vectors to each of the index's vectors?
To be more concrete, say this is how my index was populated:
d = 1024
N = 100
embeddings = np.random.rand(N, d)
ids = range(N)
index = faiss.index_factory(
d, 'IDMap,Flat', faiss.METRIC_INNER_PRODUCT
)
index.add_with_ids(embeddings, ids)
I would like to get D, I such that:
D, I = index.search(embeddings, k)
but I don't have access to embeddings anymore, I only have the index.
I tried using index.reconstruct() to get back my (approximated?) embeddings but I run into
RuntimeError: Error in virtual void
faiss::Index::reconstruct(faiss::Index::idx_t, float*) const at /root/miniconda3/conda-bld/faiss-pkg_1613228717761/work/faiss/Index.cpp:57: reconstruct not implemented for this type of index
A:
First of all, it seems like you forgot to train() the index before add()ing your embeddings.
As for your question, you can just keep a copy of the embeddings before adding them to the index.
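If a copy of the vectors is kept, the "self-search" itself reduces to an inner-product top-k, which can be sketched without faiss at all (brute force, so only viable well below 8M vectors):

```python
import numpy as np

def self_search(embeddings, k):
    # Brute-force inner-product search of every vector against the whole set.
    scores = embeddings @ embeddings.T          # (N, N) inner products
    I = np.argsort(-scores, axis=1)[:, :k]      # top-k ids per row
    D = np.take_along_axis(scores, I, axis=1)   # matching scores
    return D, I

emb = np.eye(3, dtype="float32")  # toy orthonormal vectors
D, I = self_search(emb, k=1)
print(I.ravel())  # each vector's nearest neighbor is itself: [0 1 2]
```

This mirrors the D, I shapes that index.search would return with METRIC_INNER_PRODUCT.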
|
can I 'inner-search' most similar vectors within a FAISS index?
|
I have a FAISS index populated with 8M embedding vectors. I don't have the embedding vectors anymore, only the index, and it is expensive to recompute the embeddings.
Can I search the index for the top-k most similar vectors to each of the index's vectors?
To be more concrete, say this is how my index was populated:
d = 1024
N = 100
embeddings = np.random.rand(N, d)
ids = range(N)
index = faiss.index_factory(
d, 'IDMap,Flat', faiss.METRIC_INNER_PRODUCT
)
index.add_with_ids(embeddings, ids)
I would like to get D, I such that:
D, I = index.search(embeddings, k)
but I don't have access to embeddings anymore, I only have the index.
I tried using index.reconstruct() to get back my (approximated?) embeddings but I run into
RuntimeError: Error in virtual void
faiss::Index::reconstruct(faiss::Index::idx_t, float*) const at /root/miniconda3/conda-bld/faiss-pkg_1613228717761/work/faiss/Index.cpp:57: reconstruct not implemented for this type of index
|
[
"First of all seems like you forgot train() your embeddings before add() it.\nWhat is about your question you can just copy embeddings before adding it into the index.\n"
] |
[
0
] |
[] |
[] |
[
"faiss",
"python"
] |
stackoverflow_0074097858_faiss_python.txt
|
Q:
Css not loading in Django
My CSS file is not getting loaded in the webpage. I have the CSS and image files in the same location. The image is getting loaded but not the CSS. Also, I have included the directory in STATICFILES_DIRS.
settings.py
DEBUG = True
ALLOWED_HOSTS = []
INSTALLED_APPS = [
'technicalCourse.apps.TechnicalcourseConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'website.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
r'C:\Users\Kesavan\PycharmProjects\Django web development\website\technicalCourse\template'
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'website.wsgi.application'
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
r'website\technicalCourse\static',
]
This is the template file
<!DOCTYPE html>
{% load static %}
<head>
<link rel="stylesheet" type="text/css" href="{% static 'css/simple.css' %}">
</head>
<body>
<img src="{% static 'image/img.png' %}">
<h1>Welcome to course on programming</h1>
<ol>
{% for x in ac %}
<li><a href="{{x.id}}/">{{x.courseName}}</a></li>
{% endfor %}
</ol>
</body>
For testing, I simply change the color of the h1 tag alone. The CSS file:
h1{
color:black;
}
Structure
C:.
├───migrations
│ └───__pycache__
├───static
│ ├───css
│ └───image
├───template
│ └───technicalCourse
└───__pycache__
But there is no reflection on the webpage.
Hoping for the solution.
Thanks in advance.
A:
try this command:
python manage.py collectstatic
and check again.
A:
It's weird that images are working and CSS isn't. There could be a multitude of possibilities for your problem.
The simplest way to solve this is to set the path to the CSS files via an absolute or a relative path.
Relative path case
<link href="/static/ui/css/base.css" rel="stylesheet">
A:
I was suffering from the same problem but solved it by adding the
{% load static %}
{
<link rel="stylesheet" type="text/css" href="{% static 'thumbnail_searcher/style.css' %}">
}
in the same tag. For example, in your case add the load and link lines inside the <head> tag.
A:
clear browser cache
in settings.py change STATIC_URL = "static/" to STATIC_URL = "/static/"
run python manage.py runserver port_number(optional)
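As a sketch of what the answers point at: build the STATICFILES_DIRS entries as absolute paths from BASE_DIR instead of the raw relative string, and keep the leading slash on STATIC_URL. The paths below are illustrative, not the asker's exact layout:

```python
import os

# settings.py (sketch): build static paths from BASE_DIR so they resolve
# regardless of the directory the server is started from
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATIC_URL = "/static/"  # note the leading slash, as the last answer suggests
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static"),
    os.path.join(BASE_DIR, "technicalCourse", "static"),  # illustrative app dir
]
```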
|
Css not loading in Django
|
My css file is not getting loaded in the webpage. I have css and image file in the same location. The image is getting loaded but not the css.Also I have included the directory in staticfile_dirs.
Setting.py
DEBUG = True
ALLOWED_HOSTS = []
INSTALLED_APPS = [
'technicalCourse.apps.TechnicalcourseConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'website.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
r'C:\Users\Kesavan\PycharmProjects\Django web development\website\technicalCourse\template'
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'website.wsgi.application'
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
r'website\technicalCourse\static',
]
This the template file
<!DOCTYPE html>
{% load static %}
<head>
<link rel="stylesheet" type="text/css" href="{% static 'css/simple.css' %}">
</head>
<body>
<img src="{% static 'image/img.png' %}">
<h1>Welcome to course on programming</h1>
<ol>
{% for x in ac %}
<li><a href="{{x.id}}/">{{x.courseName}}</a></li>
{% endfor %}
</ol>
</body>
For testing I simply change the color of the h1 tag alone. The css file.
h1{
color:black;
}
Structure
C:.
├───migrations
│ └───__pycache__
├───static
│ ├───css
│ └───image
├───template
│ └───technicalCourse
└───__pycache__
But there is no reflection on the webpage.
Hoping for the solution.
Thanks in advance.
|
[
"try this command:\npython manage.py collectstatic\n\nand check again.\n",
"It's weird that images are working and CSS isn't. There could be a multitude of possibilities for your problem.\nThe simplest way to solve this is to set the path to the CSS files via an absolute or a relative path.\nRelavtive path case\n<link href=\"/static/ui/css/base.css\" rel=\"stylesheet\">\n\n",
"I was suffering with same problem but solved by adding the\n{% load static %}\n {\n <link rel=\"stylesheet\" type=\"text/css\" href=\"{% static 'thumbnail_searcher/style.css' %}\">\n }\n\nin same tag. For example in your case add load and link into <head> tag.\n",
"\nclear browser cache\n\nin settings.py change STATIC_URL = \"static/\" to STATIC_URL = \"/static/\"\n\nrun python manage.py runserver port_number(optional)\n\n\n"
] |
[
0,
0,
0,
0
] |
[] |
[] |
[
"django",
"django_staticfiles",
"python"
] |
stackoverflow_0061405964_django_django_staticfiles_python.txt
|
Q:
Why do I keep getting errors when I try to install PySide6 on windows PC?
I have been trying to install PySide6 on my PC (Windows 10 64bits) with Python 3.9.0 installed, but I keep getting errors every time.
I used the command pip install PySide6. It is not working for me.
Any help will be appreciated.
Error:
ERROR: Could not find a version that satisfies the requirement pyside2 (from versions: none) ERROR: No matching distribution found for pyside2
A:
Check whether your Python installation is 64-bit and not 32-bit. It affects which binaries are compatible and thus available.
A:
At the time of writing:
The problem is that most of the binaries are not yet compatible and are not yet compiled for Python 3.9 at the time of writing. If you want the best compatibility, use Python 3.7 or Python 3.8. Most of the packages have been compiled for Python 3.8 but not many for Python 3.9.
For the future people that come here:
I would recommend you to keep one minor version behind the current stable release to not face dependency problems as the above section explains.
For Example:
If current stable release is Python 3.12 then you should install Python 3.11 or Python 3.10. Just one or two versions behind the current stable release.
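Both answers boil down to checking your interpreter build. A quick pure-stdlib diagnostic that prints the two facts that decide whether a prebuilt wheel exists for you:

```python
import struct
import sys

# Pointer size distinguishes 32- vs 64-bit builds; the minor version
# decides which prebuilt wheels are available on PyPI.
bits = struct.calcsize("P") * 8
print(f"Python {sys.version_info.major}.{sys.version_info.minor}, {bits}-bit build")
```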
|
Why do I keep getting errors when I try to install PySide6 on windows PC?
|
I have been trying to install PySide6 on my PC (Windows 10 64bits) with Python 3.9.0 installed, but I keep getting errors every time.
I used the command pip install PySide6 It is not working for me.
Any help will be appreciated.
Error:
ERROR: Could not find a version that satisfies the requirement pyside2 (from versions: none) ERROR: No matching distribution found for pyside2
|
[
"Check if you Python installation is 64 bit and not 32 bit. It has an impact on compatible and thus available binaries.\n",
"At the time of writing:\nThe problem is that most of the binaries are not yet compatible and are not yet compiled for Python 3.9 at the time of writing. If you want the best compatibility, use Python 3.7 or Python 3.8. Most of the packages have been compiled for Python 3.8 but not many for Python 3.9.\nFor the future people that come here:\nI would recommend you to keep one minor version behind the current stable release to not face dependency problems as the above section explains.\nFor Example:\nIf current stable release is Python 3.12 then you should install Python 3.11 or Python 3.10. Just one or two versions behind the current stable release.\n"
] |
[
1,
0
] |
[] |
[] |
[
"pip",
"pyside6",
"python"
] |
stackoverflow_0067635487_pip_pyside6_python.txt
|
Q:
how to deal with variable width of Buttons in ipywidgets
I need to display a bunch of buttons.
The description of each button corresponds to one word of a text.
In order to give it a text-like appearance, I want to make the button width according to the length of the word inside.
So I create a variable that gives me the width in px according to the number of letters.
I don't know why, but it does not work well: it works for some words but not for others.
Any ideas?
(See in the screenshot how the word "the" does not have enough space and only ... is displayed.)
The ultimate goal is of course have a text looking normally as a text where I can click words.
Thanks.
mylist=['loren','ipsum','whapmmtever','loren','ipsum','otra','the','palabra','concept']
list_btns=[]
for i,word in enumerate(mylist):
# here I try to define the width of the button as depending on the length of the word:
wordwidth= str(len(word)*12) + 'px'
wd_raw_save_button = widgets.Button(description=word,
disabled=False,
button_style='success',
tooltip='check the word',
icon='',
layout = Layout(width = wordwidth, margin='0px 0px 0px 0px'))
list_btns.append(wd_raw_save_button)
showcase=HBox(list_btns)
showcase
Actually after running voila to visualise the result I get this:
this does not give an impression of real text, i.e. same spacing between words which is the ultimate goal.
I am guessing, but I am not sure, that the reason might be that the characters are of different widths, and I would have to calculate the width character by character. But this does not explain why the word "the" does not fit inside the button.
A second explanation is that the underlying CSS assumes a certain minimum border which goes "over" the word itself. In any case, I don't know how to control/influence it.
A:
There's limited control for the CSS for widgets. There seems to be a cutoff around 40px where text will get truncated. I used a simple max comparison to get hopefully close to what you are looking for:
from ipywidgets import *
mylist=['loren','ipsum','whapmmtever','loren','ipsum','otra','the','palabra','concept', 'a'] * 2
list_btns=[]
for i,word in enumerate(mylist):
# here I try to define the width of the button as depending on the length of the word:
wordwidth= max(len(word) * 12, 40)
wordwidth = str(wordwidth) + 'px'
wd_raw_save_button = widgets.Button(description=word,
disabled=False,
button_style='success',
tooltip='check the word',
icon='',
layout = Layout(width = wordwidth, margin='0px 0px 0px 0px')
)
list_btns.append(wd_raw_save_button)
showcase=HBox(list_btns, layout= Layout(width='100%'))
showcase
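The sizing rule from this answer can be isolated into a small helper. The 12 px-per-character factor and the 40 px floor are empirical guesses about the widget CSS, not documented constants:

```python
def button_width(word, px_per_char=12, min_px=40):
    # Width string for Layout(width=...): proportional to the word length,
    # but never below the observed ~40px cutoff where the label truncates to "..."
    return f"{max(len(word) * px_per_char, min_px)}px"

print(button_width("the"))          # short words hit the floor
print(button_width("whapmmtever"))  # long words scale with their length
```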
A:
First:
display(ipywidgets.HTML("<style>.narrow_button { padding: 0px 2px; border: 1px solid black }</style>"))
Pay attention that this line is inside the exact same cell where you display the ipywidgets. The second line of code that is needed, comes after you defined your ipywidget. Now you have to assign this styling to each widget. Inside the loop, use:
wd_raw_save_button.add_class('narrow_button')
Then my buttons look like this:
Produced by your modified example:
from ipywidgets import *
import ipywidgets
mylist=['loren','ipsum','whapmmtever','loren','ipsum','otra','the','palabra','concept']
display(ipywidgets.HTML("<style>.narrow_button { padding: 0px 2px; border: 1px solid black }</style>"))
list_btns=[]
for i,word in enumerate(mylist):
# here I try to define the width of the button as depending on the length of the word:
wordwidth= str(len(word)*12) + 'px'
wd_raw_save_button = widgets.Button(description=word,
disabled=False,
button_style='success',
tooltip='check the word',
icon='',
layout = Layout(width = 'auto', margin='0px 0px 0px 0px'))
wd_raw_save_button.add_class("narrow_button")
list_btns.append(wd_raw_save_button)
showcase=HBox(list_btns)
showcase
|
how to deal with variable width of Buttons in ipywidgets
|
I have the need to display a bunch of buttons.
the description of every button corresponds to every word of a text.
In order to give a text appearance I want to make button width accoding to the length of the word inside.
So I create a variable that gives me the width px according to the number of letters.
I dont know why but it does not work well. It works for some words and for other doesnt.
some ideas?
(see in the screenshot how the word "the" does not have enough space in and only ... is dísplay.
The ultimate goal is of course have a text looking normally as a text where I can click words.
Thanks.
mylist=['loren','ipsum','whapmmtever','loren','ipsum','otra','the','palabra','concept']
list_btns=[]
for i,word in enumerate(mylist):
# here I try to define the width of the button as depending on the length of the word:
wordwidth= str(len(word)*12) + 'px'
wd_raw_save_button = widgets.Button(description=word,
disabled=False,
button_style='success',
tooltip='check the word',
icon='',
layout = Layout(width = wordwidth, margin='0px 0px 0px 0px'))
list_btns.append(wd_raw_save_button)
showcase=HBox(list_btns)
showcase
Actually after running voila to visualise the result I get this:
this does not give an impression of real text, i.e. same spacing between words which is the ultimate goal.
I am guessing, but I am not sure, that the reason might be that the characters are of different width, and I will have to calculate the width character by character. But this does not explain that the word "the" does not fit inside the button.
Second explanation is that the underlying CSS assumes a certain minimum border which goes "over" the word itself. In any case I dont know how to control/influence it.
|
[
"There's limited control for the CSS for widgets. There seems to be a cutoff around 40px where text will get truncated. I used a simple max comparison to get hopefully close to what you are looking for:\nfrom ipywidgets import *\nmylist=['loren','ipsum','whapmmtever','loren','ipsum','otra','the','palabra','concept', 'a'] * 2\n\nlist_btns=[]\nfor i,word in enumerate(mylist):\n # here I try to define the width of the button as depending on the length of the word:\n wordwidth= max(len(word) * 12, 40)\n wordwidth = str(wordwidth) + 'px'\n wd_raw_save_button = widgets.Button(description=word,\n disabled=False,\n button_style='success',\n tooltip='check the word',\n icon='',\n layout = Layout(width = wordwidth, margin='0px 0px 0px 0px')\n )\n\n list_btns.append(wd_raw_save_button)\n\n\n\nshowcase=HBox(list_btns, layout= Layout(width='100%'))\nshowcase\n\n\n",
"First:\ndisplay(ipywidgets.HTML(\"<style>.narrow_button { padding: 0px 2px; border: 1px solid black }</style>\"))\n\nPay attention that this line is inside the exact same cell where you display the ipywidgets. The second line of code that is needed, comes after you defined your ipywidget. Now you have to assign this styling to each widget. Inside the loop, use:\nwd_raw_save_button.add_class('narrow_button')\n\nThen my buttons look like this: \nProduced by your modified example:\nfrom ipywidgets import *\nimport ipywidgets\nmylist=['loren','ipsum','whapmmtever','loren','ipsum','otra','the','palabra','concept']\ndisplay(ipywidgets.HTML(\"<style>.narrow_button { padding: 0px 2px; border: 1px solid black }</style>\"))\nlist_btns=[]\nfor i,word in enumerate(mylist):\n # here I try to define the width of the button as depending on the length of the word:\n wordwidth= str(len(word)*12) + 'px'\n wd_raw_save_button = widgets.Button(description=word,\n disabled=False,\n button_style='success',\n tooltip='check the word',\n icon='',\n layout = Layout(width = 'auto', margin='0px 0px 0px 0px'))\n wd_raw_save_button.add_class(\"narrow_button\")\n list_btns.append(wd_raw_save_button)\n\n\n\nshowcase=HBox(list_btns)\nshowcase\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"button",
"ipywidgets",
"jupyter_notebook",
"python",
"voila"
] |
stackoverflow_0061278187_button_ipywidgets_jupyter_notebook_python_voila.txt
|
Q:
Creating a Maya animation control with a custom shape
I have a small python script that calls a MEL command to build a nurbs curve circle. The shape of the curve is then placed with a new transform node and together they generate an animation control. But nothing is being generated when the script is run and there is no error message.
import pymel.all as pm
import maya.cmds as cmds
import maya.mel as mel
# ---------------------------------------------------------------------------------
def makeHandle(name='NEW', shape='Circle'):
handle= pm.createNode('animHandle')
shape = melcmds = 'circle -c 0 0 0 -nr 0 1 0 -sw 360 -r 1 -d 3 -ut 0 -tol 0.000328084 -s 8 -ch 1;'
mel.eval (melcmds)
for each in shape.getChildren(): pm.parent(each, handle, r=True, s=True)
newName = name + '_handle'
handle.rename(newName)
for each in handle.getChildren(): each.rename(name + '_handleShape')
pm.delete(shape)
pm.select(handle)
A:
If you receive no error message, I'd bet you forgot to call the function with makeHandle(). But this function would not work anyway. You are heavily mixing mel, cmds and pymel concepts. I'd recommend staying with one approach only, e.g. pymel. This way you do not need any mel scripts or eval calls; just create a circle with pm.circle(c=(0,0,0)...), which returns the transform and shape. By the way, a shape node usually has no children, and what type is animHandle? Is this a custom node type? It doesn't work here.
|
Creating a Maya animation control with a custom shape
|
I have a small python script that calls a MEL command to build a nurbs curve circle. The shape of the curve is then placed with a new transform node and together they generate an animation control. But nothing is being generated when the script is run and there is no error message.
import pymel.all as pm
import maya.cmds as cmds
import maya.mel as mel
# ---------------------------------------------------------------------------------
def makeHandle(name='NEW', shape='Circle'):
handle= pm.createNode('animHandle')
shape = melcmds = 'circle -c 0 0 0 -nr 0 1 0 -sw 360 -r 1 -d 3 -ut 0 -tol 0.000328084 -s 8 -ch 1;'
mel.eval (melcmds)
for each in shape.getChildren(): pm.parent(each, handle, r=True, s=True)
newName = name + '_handle'
handle.rename(newName)
for each in handle.getChildren(): each.rename(name + '_handleShape')
pm.delete(shape)
pm.select(handle)
|
[
"If you receive no error message, I'd bet you fogot to call the function with makeHandle(). But this function would not work anyway. You are heavily mixing mel, cmds and pymel concepts. I'd recommend to stay with one approach only, e.g. pymel. This way you do not need any mel scripts or eval calls, just create a circle wiht pm.circle(c=(0,0,0)...) which returns transform and shape. btw. a shape node usually has no children and what type is animHandle? Is this a custom node type? Doesn`t work here.\n"
] |
[
0
] |
[] |
[] |
[
"maya",
"mel",
"python"
] |
stackoverflow_0074463399_maya_mel_python.txt
|
Q:
How to find index of a list element
I want to create a program that gives you the position of the string in a list.
a = [1,3,4,5,6,7,8,9,2,"rick",56,"open"]
A:
You should read more on operations you can do on Lists here: https://docs.python.org/3/tutorial/datastructures.html#more-on-lists
In this case, you can use the index() function to get the index of a specific item in the list:
a=[1,3,4,5,6,7,8,9,2,"rick",56,"open"]
print(a.index(7))
print(a.index("rick"))
Output:
5
9
Remember, these indexes are 0-based, so index 5 is actually the 6th element of the list, and index 9 is the 10th element.
A:
If the goal is to get the positions of all string elements:
a=[1,3,4,5,6,7,8,9,2,"rick",56,"open"]
def find_str(arr):
res = {}
for index, value in enumerate(arr):
if isinstance(value, str):
res[value] = index
return res
or just as a shortcut:
def find_str(arr):
return {value:index for index, value in enumerate(arr) if isinstance(value, str)}
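Note that index() only returns the first match. A sketch for collecting every position of a given value with enumerate, which also covers repeated elements:

```python
a = [1, 3, 4, 5, 6, 7, 8, 9, 2, "rick", 56, "open", "rick"]

def positions(arr, target):
    # All 0-based indices where target occurs (empty list if absent)
    return [i for i, value in enumerate(arr) if value == target]

print(positions(a, "rick"))  # -> [9, 12]
print(positions(a, "bob"))   # -> []
```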
|
How to find index of a list element
|
I want to create a program that gives you the position of the string in a list.
a = [1,3,4,5,6,7,8,9,2,"rick",56,"open"]
|
[
"You should read more on operations you can do on Lists here: https://docs.python.org/3/tutorial/datastructures.html#more-on-lists\nIn this case, you can use the index() function to get the index of a specific item in the list:\na=[1,3,4,5,6,7,8,9,2,\"rick\",56,\"open\"]\nprint(a.index(7))\nprint(a.index(\"rick\"))\n\nOutput:\n5\n9\n\nRemember, these indexes are 0-based, so index 5 is actually the 6th element of the list, and index 9 is the 10th element.\n",
"If it goes to get all string elements' position:\na=[1,3,4,5,6,7,8,9,2,\"rick\",56,\"open\"]\n\ndef find_str(arr):\n res = {}\n for index, value in enumerate(arr):\n if isinstance(value, str):\n res[value] = index\n return res\n\nor just as a shortcut:\n def find_str(arr):\n return {value:index for index, value in enumerate(arr) if isinstance(value, str)}\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074473509_python.txt
|
Q:
Creating AWS SES SMTP credentials in python2
I am creating SES SMTP credentials from my IAM access key and secret key. I have referred to this document for creating the SES SMTP credentials.
But the code produces different SES SMTP credentials for Python 2 and Python 3, and the Python 3 key is the valid one. How can I get the same key while executing the script with Python 2?
Below is my script, which returns the access key and the SES SMTP credential. I am getting the IAM access key and secret key from Secrets Manager.
#!/usr/bin/env python3
import hmac
import hashlib
import base64
import argparse
import boto3
import json
from botocore.exceptions import ClientError
def get_secretmanager():
secret_name = "test"
region_name = "us-west-2"
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager',
region_name=region_name
)
try:
get_secret_value_response = client.get_secret_value(
SecretId=secret_name
)
except ClientError as e:
raise e
# Decrypts secret using the associated KMS key.
secret = get_secret_value_response['SecretString']
response = json.loads(secret)
return str(response['Access Key Id']), str(response['Secret Access Key'])
SMTP_REGIONS = [
'us-east-2', # US East (Ohio)
'us-east-1', # US East (N. Virginia)
'us-west-2', # US West (Oregon)
'ap-south-1', # Asia Pacific (Mumbai)
'ap-northeast-2', # Asia Pacific (Seoul)
'ap-southeast-1', # Asia Pacific (Singapore)
'ap-southeast-2', # Asia Pacific (Sydney)
'ap-northeast-1', # Asia Pacific (Tokyo)
'ca-central-1', # Canada (Central)
'eu-central-1', # Europe (Frankfurt)
'eu-west-1', # Europe (Ireland)
'eu-west-2', # Europe (London)
'sa-east-1', # South America (Sao Paulo)
'us-gov-west-1', # AWS GovCloud (US)
]
# These values are required to calculate the signature. Do not change them.
DATE = "11111111"
SERVICE = "ses"
MESSAGE = "SendRawEmail"
TERMINAL = "aws4_request"
VERSION = 0x04
def sign(key, msg):
return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()
def calculate_key(secret_access_key, region):
if region not in SMTP_REGIONS:
raise ValueError("The " + region+ " Region doesn't have an SMTP endpoint.")
signature = sign(("AWS4" + secret_access_key).encode('utf-8'), DATE)
signature = sign(signature, region)
signature = sign(signature, SERVICE)
signature = sign(signature, TERMINAL)
signature = sign(signature, MESSAGE)
signature_and_version = bytes([VERSION]) + signature
smtp_password = base64.b64encode(signature_and_version)
print(smtp_password)
return smtp_password.decode('utf-8')
def get_keys():
accesskey, secretkey = get_secretmanager()
mailsecret = calculate_key(secretkey, "us-west-2")
return accesskey, mailsecret
print(get_keys())
Any help is much appreciated, Thank you
A:
After a lot of debugging I found out that bytes([VERSION]) does not behave the same in Python 2 and Python 3, which is why the script was returning two different values. (On Python 2, bytes is just an alias for str, so bytes([0x04]) yields the literal string '[4]' instead of a single 0x04 byte.)
My simple fix was to hardcode the bytes value of the hex 0x04 as b'\x04':
signature_and_version = b'\x04' + signature
Make sure to return the value as a string (return accesskey, str(mailsecret)) because in Python 2 it comes back as unicode.
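A Python 3 sketch of the corrected derivation, with the version byte hardcoded as described (the b'\x04' literal behaves identically on Python 2.6+). The secret key below is a dummy; the fixed strings come from the script in the question:

```python
import base64
import hashlib
import hmac

def sign(key, msg):
    # One HMAC-SHA256 step of the SigV4-style chain
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def calculate_ses_smtp_password(secret_access_key, region="us-west-2"):
    # Fixed strings taken from the question's script
    signature = sign(("AWS4" + secret_access_key).encode("utf-8"), "11111111")
    for part in (region, "ses", "aws4_request", "SendRawEmail"):
        signature = sign(signature, part)
    # b'\x04' instead of bytes([0x04]): identical bytes on Python 2 and Python 3
    return base64.b64encode(b"\x04" + signature).decode("utf-8")

password = calculate_ses_smtp_password("dummy-secret-key")  # dummy key, not a real secret
```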
|
Creating AWS SES SMTP credentials in python2
|
I am creating an SES SMTP credentials from my iam accesskey and secretkey. i have referred to this document for creating the SES SMTP credentials
But the code produces different SES SMTP credentials for python2 and python3 but the python3 key is the valid one. how can i get the same key while executing the script with python2
Below is my script which returns accesskey and SES SMTP cred. Iam getting the IAM accesskey and secretkey from secrets manager
#!/usr/bin/env python3
import hmac
import hashlib
import base64
import argparse
import boto3
import json
from botocore.exceptions import ClientError
def get_secretmanager():
secret_name = "test"
region_name = "us-west-2"
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager',
region_name=region_name
)
try:
get_secret_value_response = client.get_secret_value(
SecretId=secret_name
)
except ClientError as e:
raise e
# Decrypts secret using the associated KMS key.
secret = get_secret_value_response['SecretString']
response = json.loads(secret)
return str(response['Access Key Id']), str(response['Secret Access Key'])
SMTP_REGIONS = [
'us-east-2', # US East (Ohio)
'us-east-1', # US East (N. Virginia)
'us-west-2', # US West (Oregon)
'ap-south-1', # Asia Pacific (Mumbai)
'ap-northeast-2', # Asia Pacific (Seoul)
'ap-southeast-1', # Asia Pacific (Singapore)
'ap-southeast-2', # Asia Pacific (Sydney)
'ap-northeast-1', # Asia Pacific (Tokyo)
'ca-central-1', # Canada (Central)
'eu-central-1', # Europe (Frankfurt)
'eu-west-1', # Europe (Ireland)
'eu-west-2', # Europe (London)
'sa-east-1', # South America (Sao Paulo)
'us-gov-west-1', # AWS GovCloud (US)
]
# These values are required to calculate the signature. Do not change them.
DATE = "11111111"
SERVICE = "ses"
MESSAGE = "SendRawEmail"
TERMINAL = "aws4_request"
VERSION = 0x04
def sign(key, msg):
return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()
def calculate_key(secret_access_key, region):
if region not in SMTP_REGIONS:
raise ValueError("The " + region+ " Region doesn't have an SMTP endpoint.")
signature = sign(("AWS4" + secret_access_key).encode('utf-8'), DATE)
signature = sign(signature, region)
signature = sign(signature, SERVICE)
signature = sign(signature, TERMINAL)
signature = sign(signature, MESSAGE)
signature_and_version = bytes([VERSION]) + signature
smtp_password = base64.b64encode(signature_and_version)
print(smtp_password)
return smtp_password.decode('utf-8')
def get_keys():
accesskey, secretkey = get_secretmanager()
mailsecret = calculate_key(secretkey, "us-west-2")
return accesskey, mailsecret
print(get_keys())
Any help is much appreciated, Thank you
|
[
"After a lot of debugging i found out bytes([VERSION]) does not work same in both python3 and python2 thats why it was returning 2 different calue for both 2 and 3\nMy simple fix was that to hardcode the bytes value of the hex 0x04 as b'\\x04'\nsignature_and_version = b'\\x04' + signature\n\nMake sure to return the value as a string return accesskey, str(mailsecret) cuz in python2 it returns as a unicode.\n"
] |
[
0
] |
[] |
[] |
[
"amazon_web_services",
"python",
"python_2.7"
] |
stackoverflow_0074471381_amazon_web_services_python_python_2.7.txt
|
Q:
Grouping pandas dataframe by column specificity to row values - python
I have a dataset of this type:
id 1 2 3 4 5
A 10 40 80 12 50
B 20 60 70 77 60
C 30 15 50 20 60
C 30 15 20 45 43
B 50 100 70 77 32
C 30 15 20 80 21
A 50 100 10 12 50
Is there a way to group it somehow to show which columns are specific for which id? For example, we can see that all of the values corresponding to the id 'C' in the first column equal to 30; similarly, column 3 is pretty 'B' id specific - all of the values are the same for 'B' in column 3 etc. Same for column 5 and id 'A'.
So, there are columns specific for each id; is there a way to group them somehow and for each id visualise/list columns specific to each of them?
A:
(df.melt('id')
.groupby(['id', 'variable'])
.agg(lambda x: x.max() if x.max() == x.min() else None)
.unstack())
result
value
variable 1 2 3 4 5
id
A NaN NaN NaN 12.0 50.0
B NaN NaN 70.0 77.0 NaN
C 30.0 15.0 NaN NaN NaN
or
(df.melt('id')
.groupby(['id', 'variable'])
.agg(lambda x: x.max() if x.max() == x.min() else None)
.dropna().astype('int'))
result:
value
id variable
A 4 12
5 50
B 3 70
4 77
C 1 30
2 15
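The same constancy test (a column is specific to an id when it takes a single value within that group) can be sketched without pandas, using plain dicts. Data copied from the question:

```python
from collections import defaultdict

rows = [
    ("A", [10, 40, 80, 12, 50]),
    ("B", [20, 60, 70, 77, 60]),
    ("C", [30, 15, 50, 20, 60]),
    ("C", [30, 15, 20, 45, 43]),
    ("B", [50, 100, 70, 77, 32]),
    ("C", [30, 15, 20, 80, 21]),
    ("A", [50, 100, 10, 12, 50]),
]

def specific_columns(rows):
    # Collect the distinct values each (id, column) pair takes
    seen = defaultdict(set)
    for rid, values in rows:
        for col, value in enumerate(values, start=1):
            seen[(rid, col)].add(value)
    # A column is specific to an id when it holds exactly one value there
    return {key: next(iter(vals)) for key, vals in seen.items() if len(vals) == 1}
```

Running it recovers the same six (id, column) pairs as the melt/groupby result above, e.g. ('C', 1) -> 30 and ('B', 3) -> 70.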
A:
You can use a double groupby to aggregate the data as sets, with a threshold on the number of distinct values a column may take and still count as id-specific:
thresh = 1
(df.melt('id', var_name='col')
.groupby(['col', 'id'], as_index=False)['value']
.agg(frozenset)
.loc[lambda d: d['value'].str.len().le(thresh)]
.groupby(['value', 'col'])['id']
.agg(set)
.loc[lambda s: s.str.len().eq(1)]
)
Output:
value col
(30) 1 {C}
(15) 2 {C}
(70) 3 {B}
(12) 4 {A}
(77) 4 {B}
(50) 5 {A}
Name: id, dtype: object
Example with a threshold of 2 values:
value col
(10, 50) 1 {A}
(50, 20) 1 {B}
3 {C}
(30) 1 {C}
(40, 100) 2 {A}
(100, 60) 2 {B}
(15) 2 {C}
(80, 10) 3 {A}
(70) 3 {B}
(12) 4 {A}
(77) 4 {B}
(50) 5 {A}
(32, 60) 5 {B}
Name: id, dtype: object
|
Grouping pandas dataframe by column specificity to row values - python
|
I have a dataset of this type:
id 1 2 3 4 5
A 10 40 80 12 50
B 20 60 70 77 60
C 30 15 50 20 60
C 30 15 20 45 43
B 50 100 70 77 32
C 30 15 20 80 21
A 50 100 10 12 50
Is there a way to group it somehow to show which columns are specific for which id? For example, we can see that all of the values corresponding to the id 'C' in the first column equal to 30; similarly, column 3 is pretty 'B' id specific - all of the values are the same for 'B' in column 3 etc. Same for column 5 and id 'A'.
So, there are columns specific for each id; is there a way to group them somehow and for each id visualise/list columns specific to each of them?
|
[
"(df.melt('id')\n .groupby(['id', 'variable'])\n .agg(lambda x: x.max() if x.max() == x.min() else None)\n .unstack())\n\nresult\n value\nvariable 1 2 3 4 5\nid \nA NaN NaN NaN 12.0 50.0\nB NaN NaN 70.0 77.0 NaN\nC 30.0 15.0 NaN NaN NaN\n\nor\n(df.melt('id')\n .groupby(['id', 'variable'])\n .agg(lambda x: x.max() if x.max() == x.min() else None)\n .dropna().astype('int'))\n\nresult:\n value\nid variable \nA 4 12\n 5 50\nB 3 70\n 4 77\nC 1 30\n 2 15\n\n",
"You can use a double groupby to aggregate the data as sets and define a threshold of a number of values to keep to define uniqueness:\nthresh = 1\n\n(df.melt('id', var_name='col')\n .groupby(['col', 'id'], as_index=False)['value']\n .agg(frozenset)\n .loc[lambda d: d['value'].str.len().le(thresh)]\n .groupby(['value', 'col'])['id']\n .agg(set)\n .loc[lambda s: s.str.len().eq(1)]\n)\n\nOutput:\nvalue col\n(30) 1 {C}\n(15) 2 {C}\n(70) 3 {B}\n(12) 4 {A}\n(77) 4 {B}\n(50) 5 {A}\nName: id, dtype: object\n\nExample with a threshold of 2 values:\nvalue col\n(10, 50) 1 {A}\n(50, 20) 1 {B}\n 3 {C}\n(30) 1 {C}\n(40, 100) 2 {A}\n(100, 60) 2 {B}\n(15) 2 {C}\n(80, 10) 3 {A}\n(70) 3 {B}\n(12) 4 {A}\n(77) 4 {B}\n(50) 5 {A}\n(32, 60) 5 {B}\nName: id, dtype: object\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074473141_dataframe_pandas_python.txt
|
Q:
Detect whether Celery is Available/Running
I'm using Celery to manage asynchronous tasks. Occasionally, however, the celery process goes down which causes none of the tasks to get executed. I would like to be able to check the status of celery and make sure everything is working fine, and if I detect any problems display an error message to the user. From the Celery Worker documentation it looks like I might be able to use ping or inspect for this, but ping feels hacky and it's not clear exactly how inspect is meant to be used (if inspect().registered() is empty?).
Any guidance on this would be appreciated. Basically what I'm looking for is a method like so:
def celery_is_alive():
    from celery.task.control import inspect
    return bool(inspect().registered())  # is this right??
EDIT: It doesn't even look like registered() is available on celery 2.3.3 (even though the 2.1 docs list it). Maybe ping is the right answer.
EDIT: Ping also doesn't appear to do what I thought it would do, so still not sure the answer here.
A:
Here's the code I've been using. celery.task.control.Inspect.stats() returns a dict containing lots of details about the currently available workers, None if there are no workers running, or raises an IOError if it can't connect to the message broker. I'm using RabbitMQ - it's possible that other messaging systems might behave slightly differently. This worked in Celery 2.3.x and 2.4.x; I'm not sure how far back it goes.
def get_celery_worker_status():
    ERROR_KEY = "ERROR"
    try:
        from celery.task.control import inspect
        insp = inspect()
        d = insp.stats()
        if not d:
            d = { ERROR_KEY: 'No running Celery workers were found.' }
    except IOError as e:
        from errno import errorcode
        msg = "Error connecting to the backend: " + str(e)
        if len(e.args) > 0 and errorcode.get(e.args[0]) == 'ECONNREFUSED':
            msg += ' Check that the RabbitMQ server is running.'
        d = { ERROR_KEY: msg }
    except ImportError as e:
        d = { ERROR_KEY: str(e)}
    return d
A:
From the documentation of celery 4.2:
from your_celery_app import app


def get_celery_worker_status():
    i = app.control.inspect()
    availability = i.ping()
    stats = i.stats()
    registered_tasks = i.registered()
    active_tasks = i.active()
    scheduled_tasks = i.scheduled()
    result = {
        'availability': availability,
        'stats': stats,
        'registered_tasks': registered_tasks,
        'active_tasks': active_tasks,
        'scheduled_tasks': scheduled_tasks
    }
    return result
of course you could/should improve the code with error handling...
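For example, a minimal sketch of such error handling (the `app` argument is a stand-in for your Celery app; here it is mocked so the snippet runs without a broker):

```python
from unittest import mock

def get_celery_worker_status_safe(app):
    """Wrap the inspect calls so a dead broker yields an error dict
    instead of an unhandled exception."""
    try:
        i = app.control.inspect()
        return {'availability': i.ping(), 'stats': i.stats()}
    except Exception as exc:  # e.g. broker unreachable, timeout
        return {'error': str(exc)}

# Stand-in for a real Celery app so the sketch runs without a broker
fake_app = mock.Mock()
fake_app.control.inspect.return_value.ping.return_value = {'worker1@host': {'ok': 'pong'}}
fake_app.control.inspect.return_value.stats.return_value = {'worker1@host': {}}

print(get_celery_worker_status_safe(fake_app))
```

With a real app, the `except` branch catches the `IOError`/connection errors that `inspect()` raises when the broker is down.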
A:
To check the same using the command line, in case celery is running as a daemon:
Activate the virtualenv and go to the directory where the 'app' is.
Now run: celery -A [app_name] status
It will show whether celery is up or not, plus the number of nodes online.
Source:
http://michal.karzynski.pl/blog/2014/05/18/setting-up-an-asynchronous-task-queue-for-django-using-celery-redis/
A:
The following worked for me:
import socket
from kombu import Connection

celery_broker_url = "amqp://localhost"

try:
    conn = Connection(celery_broker_url)
    conn.ensure_connection(max_retries=3)
except socket.error:
    raise RuntimeError("Failed to connect to RabbitMQ instance at {}".format(celery_broker_url))
A:
One method to test if any worker is responding is to send out a 'ping' broadcast and return with a successful result on the first response.
from .celery import app  # the celery 'app' created in your project

def is_celery_working():
    result = app.control.broadcast('ping', reply=True, limit=1)
    return bool(result)  # True if at least one result
This broadcasts a 'ping' and will wait up to one second for responses. As soon as the first response comes in, it will return a result. If you want a False result faster, you can add a timeout argument to reduce how long it waits before giving up.
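As a sketch of passing that timeout through (the Celery app is mocked here so the snippet runs without a worker; with a real app the broadcast actually reaches the workers):

```python
from unittest import mock

def is_celery_working(app, timeout=0.5):
    # broadcast() returns a list of per-worker replies, or an empty
    # list if nothing answered within `timeout` seconds
    result = app.control.broadcast('ping', reply=True, limit=1, timeout=timeout)
    return bool(result)

# Mocked app: pretend one worker replied to the ping
app = mock.Mock()
app.control.broadcast.return_value = [{'worker1@host': {'ok': 'pong'}}]
print(is_celery_working(app))  # True
```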
A:
I found an elegant solution:
from .celery import app

try:
    app.broker_connection().ensure_connection(max_retries=3)
except Exception as ex:
    raise RuntimeError("Failed to connect to celery broker, {}".format(str(ex)))
A:
You can use the ping method to check whether any worker (or a specific worker) is alive or not: https://docs.celeryproject.org/en/latest/_modules/celery/app/control.html#Control.ping
celery_app.control.ping()
A:
You can test on your terminal by running the following command.
celery -A proj_name worker -l INFO
Running the worker in the foreground like this lets you see its log output directly every time it runs.
A:
The script below worked for me.
import logging

# Import the celery app from the project
from application_package import app as celery_app

logger = logging.getLogger(__name__)

def get_celery_worker_status():
    insp = celery_app.control.inspect()
    nodes = insp.stats()
    if not nodes:
        raise Exception("celery is not running.")
    logger.info("celery workers are: {}".format(nodes))
    return nodes
A:
Run celery status to get the status.
When celery is running,
(venv) ubuntu@server1:~/project-dir$ celery status
-> celery@server1: OK
1 node online.
When no celery worker is running, you get the below information displayed in terminal.
(venv) ubuntu@server1:~/project-dir$ celery status
Error: No nodes replied within time constraint
|
Detect whether Celery is Available/Running
|
I'm using Celery to manage asynchronous tasks. Occasionally, however, the celery process goes down which causes none of the tasks to get executed. I would like to be able to check the status of celery and make sure everything is working fine, and if I detect any problems display an error message to the user. From the Celery Worker documentation it looks like I might be able to use ping or inspect for this, but ping feels hacky and it's not clear exactly how inspect is meant to be used (if inspect().registered() is empty?).
Any guidance on this would be appreciated. Basically what I'm looking for is a method like so:
def celery_is_alive():
from celery.task.control import inspect
return bool(inspect().registered()) # is this right??
EDIT: It doesn't even look like registered() is available on celery 2.3.3 (even though the 2.1 docs list it). Maybe ping is the right answer.
EDIT: Ping also doesn't appear to do what I thought it would do, so still not sure the answer here.
|
[
"Here's the code I've been using. celery.task.control.Inspect.stats() returns a dict containing lots of details about the currently available workers, None if there are no workers running, or raises an IOError if it can't connect to the message broker. I'm using RabbitMQ - it's possible that other messaging systems might behave slightly differently. This worked in Celery 2.3.x and 2.4.x; I'm not sure how far back it goes.\ndef get_celery_worker_status():\n ERROR_KEY = \"ERROR\"\n try:\n from celery.task.control import inspect\n insp = inspect()\n d = insp.stats()\n if not d:\n d = { ERROR_KEY: 'No running Celery workers were found.' }\n except IOError as e:\n from errno import errorcode\n msg = \"Error connecting to the backend: \" + str(e)\n if len(e.args) > 0 and errorcode.get(e.args[0]) == 'ECONNREFUSED':\n msg += ' Check that the RabbitMQ server is running.'\n d = { ERROR_KEY: msg }\n except ImportError as e:\n d = { ERROR_KEY: str(e)}\n return d\n\n",
"From the documentation of celery 4.2:\nfrom your_celery_app import app\n\n\ndef get_celery_worker_status():\n i = app.control.inspect()\n availability = i.ping()\n stats = i.stats()\n registered_tasks = i.registered()\n active_tasks = i.active()\n scheduled_tasks = i.scheduled()\n result = {\n 'availability': availability,\n 'stats': stats,\n 'registered_tasks': registered_tasks,\n 'active_tasks': active_tasks,\n 'scheduled_tasks': scheduled_tasks\n }\n return result\n\nof course you could/should improve the code with error handling...\n",
"To check the same using command line in case celery is running as daemon,\n\nActivate virtualenv and go to the dir where the 'app' is \nNow run : celery -A [app_name] status\nIt will show if celery is up or not plus no. of nodes online\n\nSource:\nhttp://michal.karzynski.pl/blog/2014/05/18/setting-up-an-asynchronous-task-queue-for-django-using-celery-redis/\n",
"The following worked for me:\nimport socket\nfrom kombu import Connection\n\ncelery_broker_url = \"amqp://localhost\"\n\ntry:\n conn = Connection(celery_broker_url)\n conn.ensure_connection(max_retries=3)\nexcept socket.error:\n raise RuntimeError(\"Failed to connect to RabbitMQ instance at {}\".format(celery_broker_url))\n\n",
"One method to test if any worker is responding is to send out a 'ping' broadcast and return with a successful result on the first response.\nfrom .celery import app # the celery 'app' created in your project\n\ndef is_celery_working():\n result = app.control.broadcast('ping', reply=True, limit=1)\n return bool(result) # True if at least one result\n\nThis broadcasts a 'ping' and will wait up to one second for responses. As soon as the first response comes in, it will return a result. If you want a False result faster, you can add a timeout argument to reduce how long it waits before giving up.\n",
"I found an elegant solution:\nfrom .celery import app\ntry:\n app.broker_connection().ensure_connection(max_retries=3)\nexcept Exception as ex:\n raise RuntimeError(\"Failed to connect to celery broker, {}\".format(str(ex)))\n\n",
"You can use ping method to check whether any worker (or specific worker) is alive or not https://docs.celeryproject.org/en/latest/_modules/celery/app/control.html#Control.ping\nceley_app.control.ping()\n",
"You can test on your terminal by running the following command.\ncelery -A proj_name worker -l INFO\nYou can review every time your celery runs.\n",
"The below script is worked for me.\n #Import the celery app from project\n from application_package import app as celery_app\n def get_celery_worker_status():\n insp = celery_app.control.inspect()\n nodes = insp.stats()\n if not nodes:\n raise Exception(\"celery is not running.\")\n logger.error(\"celery workers are: {}\".format(nodes))\n return nodes\n\n",
"Run celery status to get the status.\nWhen celery is running,\n(venv) ubuntu@server1:~/project-dir$ celery status\n-> celery@server1: OK\n\n1 node online.\n\nWhen no celery worker is running, you get the below information displayed in terminal.\n(venv) ubuntu@server1:~/project-dir$ celery status\nError: No nodes replied within time constraint\n\n"
] |
[
66,
18,
12,
7,
5,
3,
2,
1,
0,
0
] |
[] |
[] |
[
"celery",
"django",
"django_celery",
"python"
] |
stackoverflow_0008506914_celery_django_django_celery_python.txt
|
Q:
How to put dynamic json response in panda dataframe?
I have the following JSON response, which is dynamic; most of the fields (bccRecipients, replyTo, and ccRecipients) can be empty sometimes, and sometimes they contain values:
{
    "hasAttachments": False,
    "sender": {
        "emailAddress": {
            "name": "John Henry",
            "address": "john@abc.com"
        }
    },
    "from": {
        "emailAddress": {
            "name": "Mike Tyson",
            "address": "mike@xyz.com"
        }
    },
    "toRecipients": [
        {
            "emailAddress": {
                "name": "Himan",
                "address": "himan@pqrst.com"
            }
        }
    ],
    "ccRecipients": [
    ],
    "bccRecipients": [
    ],
    "replyTo": [
    ],
    "flag": {
        "flagStatus": "notFlagged"
    }
}
So far I have created an empty dataframe with column names as follows:
email_metadata = pd.DataFrame(columns=["Subject","SenderEmailAddress","SenderName","FromEmailAddress","FromName","ToRecipients","HasAttachments","ccRecipients","bccRecipients"])
Also, if the ccRecipients array is empty it should store Null/NaN, or the values if there are multiple entries.
Example for multiple values
ccRecipients.emailAddress.name
1) Mike
2) John
Example for empty data
ccRecipients.emailAddress.name
1) Null
2) Null
A:
import pandas as pd
response = {...}
email_metadata = pd.json_normalize(response)
Updated answer after question update:
import pandas as pd

def get_separated_data(metadata, column_name):
    tmp = pd.json_normalize(metadata[column_name][0]).apply(', '.join).to_frame().T
    tmp = tmp.rename(columns={c: column_name + '.' + c for c in tmp.columns})
    return tmp

response = {...}
email_metadata = pd.json_normalize(response)

list_type_columns = ['toRecipients', 'ccRecipients', 'bccRecipients', 'replyTo']

dfs_to_join = [get_separated_data(email_metadata, c) for c in list_type_columns]
for df in dfs_to_join:
    email_metadata = email_metadata.join(df)

email_metadata = email_metadata.drop(columns=list_type_columns)

for c in list_type_columns:
    for field in ['.emailAddress.name', '.emailAddress.address']:
        if c + field not in email_metadata.columns:
            email_metadata[c + field] = None
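For illustration, running the first step on the sample response from the question flattens the nested dicts into dotted column names, while the list-valued fields stay as list columns until the join step splits them out (a runnable sketch):

```python
import pandas as pd

# The sample response from the question
response = {
    "hasAttachments": False,
    "sender": {"emailAddress": {"name": "John Henry", "address": "john@abc.com"}},
    "from": {"emailAddress": {"name": "Mike Tyson", "address": "mike@xyz.com"}},
    "toRecipients": [{"emailAddress": {"name": "Himan", "address": "himan@pqrst.com"}}],
    "ccRecipients": [],
    "bccRecipients": [],
    "replyTo": [],
    "flag": {"flagStatus": "notFlagged"},
}

email_metadata = pd.json_normalize(response)
# Nested dicts become e.g. 'sender.emailAddress.name';
# 'toRecipients' etc. are still list-valued at this point.
print(sorted(email_metadata.columns))
```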
|
How to put dynamic json response in panda dataframe?
|
I have the following JSON response which is dynamic, most of the fields(bccRecipients ,replyTo, and ccRecipients) can be empty sometimes, and sometimes it contains values
{
"hasAttachments": False,
"sender": {
"emailAddress": {
"name": "John Henry",
"address": "john@abc.com"
}
},
"from": {
"emailAddress": {
"name": "Mike Tyson",
"address": "mike@xyz.com"
}
},
"toRecipients": [
{
"emailAddress": {
"name": "Himan",
"address": "himan@pqrst.com"
}
}
],
"ccRecipients": [
],
"bccRecipients": [
],
"replyTo": [
],
"flag": {
"flagStatus": "notFlagged"
}
}
Till now I have created empty dataframe with column names as follows
email_metadata = pd.DataFrame(columns=["Subject","SenderEmailAddress","SenderName","FromEmailAddress","FromName","ToRecipients","HasAttachments","ccRecipients","bccRecipients"])
Also if ccRecipients array is empty it should store Null/NaN or the values if there are multiple fields.
Example for multiple values
ccRecipients.emailAddress.name
1) Mike
2) John
Example for empty data
ccRecipients.emailAddress.name
1) Null
2) Null
|
[
"import pandas as pd\n\nresponse = {...}\nemail_metadata = pd.json_normalize(response)\n\nUpdated answer after question update:\nimport pandas as pd\n\ndef get_seperated_data(metadata, column_name):\n tmp = pd.json_normalize(metadata[column_name][0]).apply(', '.join).to_frame().T\n tmp = tmp.rename(columns={c: column_name + '.' + c for c in tmp.columns})\n return tmp\n\nresponse = {...}\nemail_metadata = pd.json_normalize(response)\n\nlist_type_columns = ['toRecipients', 'ccRecipients', 'bccRecipients', 'replyTo']\n\ndfs_to_join = [get_seperated_data(email_metadata, c) for c in list_type_columns]\nfor df in dfs_to_join:\n email_metadata = email_metadata.join(df)\n\nemail_metadata = email_metadata.drop(columns=list_type_columns)\n\nfor c in list_type_columns:\n for field in ['.emailAddress.name', '.emailAddress.address']:\n if c + field not in email_metadata.columns:\n email_metadata[c + field] = None\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"json",
"pandas",
"python"
] |
stackoverflow_0074471956_dataframe_json_pandas_python.txt
|
Q:
Buildozer fails to compile libffi on arm64/aarch64 CPU
I am trying to run Buildozer on my Android phone.
For this I am using an Arch Linux proot in the Termux app (Android 7, Redmi Note 4).
Since Google only distributes the x86_64 version of the NDK, I am using an aarch64/arm64 Android NDK (version r21d) and SDK from this GitHub repo: https://github.com/Lzhiyong/termux-ndk
I am using JDK 15 from the Arch Linux ARM repositories.
For testing purposes I am using a simple hello-world script: print("Hello")
When I run buildozer -v android debug, everything goes fine at first: it downloads the dependencies and then tries to compile them.
Everything compiles successfully except libffi.
What I have tried so far:
I tried compiling for both armeabi-v7a and arm64-v8a; both failed.
I tried installing libffi via pacman; it had no effect.
I tried creating a GitHub issue; no reply.
I asked on Stack Overflow before, but received a lot of downvotes saying all logs and code should be pasted in the question rather than on an external website, so I deleted the old question.
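To narrow it down, configure's failing "C compiler works" check can be reproduced by hand (a sketch; the NDK path is the one from my setup, adjust as needed):

```shell
# Recreate the tiny test program that configure tries to compile
cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF

# Same compiler invocation the log shows failing (path from my setup)
CC="${CC:-/root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang}"
if [ -x "$CC" ]; then
    "$CC" -target aarch64-linux-android23 conftest.c -o conftest \
        && echo "compiler works" \
        || echo "compiler cannot create executables (see the linker warning)"
else
    echo "NDK clang not found at $CC"
fi
```

Running this outside buildozer reproduces the same linker warning, which suggests the problem is the NDK clang itself rather than the libffi recipe.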
Here is my buildozer.spec file:
[app]
# (str) Title of your application
title = My Application
# (str) Package name
package.name = myapp
# (str) Package domain (needed for android/ios packaging)
package.domain = org.test
# (str) Source code where the main.py live
source.dir = .
# (list) Source files to include (let empty to include all the files)
source.include_exts = py,png,jpg,ui
# (list) List of inclusions using pattern matching
#source.include_patterns = assets/*,images/*.png
# (list) Source files to exclude (let empty to not exclude anything)
source.exclude_exts = spec
# (list) List of directory to exclude (let empty to not exclude anything)
source.exclude_dirs = tests,bin,.buildozer,__pycache__
# (list) List of exclusions using pattern matching
#source.exclude_patterns = license,images/*/*.jpg
# (str) Application versioning (method 1)
version = 0.1
# (str) Application versioning (method 2)
# version.regex = __version__ = ['"](.*)['"]
# version.filename = %(source.dir)s/main.py
# (list) Application requirements
# comma separated e.g. requirements = sqlite3,kivy
requirements = python3==3.8.7
# (str) Custom source folders for requirements
# Sets custom source for any requirements with recipes
# requirements.source.kivy = ../../kivy
# (list) Garden requirements
#garden_requirements =
# (str) Presplash of the application
#presplash.filename = %(source.dir)s/data/presplash.png
# (str) Icon of the application
#icon.filename = %(source.dir)s/data/icon.png
# (str) Supported orientation (one of landscape, sensorLandscape, portrait or all)
orientation = portrait
# (list) List of service to declare
#services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY
#
# OSX Specific
#
#
# author = © Copyright Info
# change the major version of python used by the app
osx.python_version = 3
# Kivy version to use
osx.kivy_version = 1.9.1
#
# Android specific
#
# (bool) Indicate if the application should be fullscreen or not
fullscreen = 1
# (string) Presplash background color (for new android toolchain)
# Supported formats are: #RRGGBB #AARRGGBB or one of the following names:
# red, blue, green, black, white, gray, cyan, magenta, yellow, lightgray,
# darkgray, grey, lightgrey, darkgrey, aqua, fuchsia, lime, maroon, navy,
# olive, purple, silver, teal.
android.presplash_color = cyan
# (list) Permissions
#android.permissions = INTERNET
# (int) Target Android API, should be as high as possible.
android.api = 30
# (int) Minimum API your APK will support.
android.minapi = 23
# (int) Android SDK version to use
#android.sdk = 20
# (str) Android NDK version to use
android.ndk = r21d
# (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi.
android.ndk_api = 23
# (bool) Use --private data storage (True) or --dir public storage (False)
android.private_storage = True
# (str) Android NDK directory (if empty, it will be automatically downloaded.)
android.ndk_path = ~/Android_Tools/android-ndk-r21d
# (str) Android SDK directory (if empty, it will be automatically downloaded.)
android.sdk_path = ~/Android_Tools/android-sdk
# (str) ANT directory (if empty, it will be automatically downloaded.)
#android.ant_path =
# (bool) If True, then skip trying to update the Android sdk
# This can be useful to avoid excess Internet downloads or save time
# when an update is due and you just want to test/build your package
android.skip_update = True
# (bool) If True, then automatically accept SDK license
# agreements. This is intended for automation only. If set to False,
# the default, you will be shown the license when first running
# buildozer.
android.accept_sdk_license = True
# (str) Android entry point, default is ok for Kivy-based app
#android.entrypoint = org.renpy.android.PythonActivity
# (str) Android app theme, default is ok for Kivy-based app
# android.apptheme = "@android:style/Theme.NoTitleBar"
# (list) Pattern to whitelist for the whole project
#android.whitelist =
# (str) Path to a custom whitelist file
#android.whitelist_src =
# (str) Path to a custom blacklist file
#android.blacklist_src =
# (list) List of Java .jar files to add to the libs so that pyjnius can access
# their classes. Don't add jars that you do not need, since extra jars can slow
# down the build process. Allows wildcards matching, for example:
# OUYA-ODK/libs/*.jar
#android.add_jars = foo.jar,bar.jar,path/to/more/*.jar
# (list) List of Java files to add to the android project (can be java or a
# directory containing the files)
#android.add_src =
# (list) Android AAR archives to add (currently works only with sdl2_gradle
# bootstrap)
#android.add_aars =
# (list) Gradle dependencies to add (currently works only with sdl2_gradle
# bootstrap)
#android.gradle_dependencies =
# (list) add java compile options
# this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option
# see https://developer.android.com/studio/write/java8-support for further information
# android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8"
# (list) Gradle repositories to add {can be necessary for some android.gradle_dependencies}
# please enclose in double quotes
# e.g. android.gradle_repositories = "maven { url 'https://kotlin.bintray.com/ktor' }"
#android.add_gradle_repositories =
# (list) packaging options to add
# see https://google.github.io/android-gradle-dsl/current/com.android.build.gradle.internal.dsl.PackagingOptions.html
# can be necessary to solve conflicts in gradle_dependencies
# please enclose in double quotes
# e.g. android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'"
#android.add_gradle_repositories =
# (list) Java classes to add as activities to the manifest.
#android.add_activities = com.example.ExampleActivity
# (str) OUYA Console category. Should be one of GAME or APP
# If you leave this blank, OUYA support will not be enabled
#android.ouya.category = GAME
# (str) Filename of OUYA Console icon. It must be a 732x412 png image.
#android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png
# (str) XML file to include as an intent filters in <activity> tag
#android.manifest.intent_filters =
# (str) launchMode to set for the main activity
#android.manifest.launch_mode = standard
# (list) Android additional libraries to copy into libs/armeabi
#android.add_libs_armeabi = libs/android/*.so
#android.add_libs_armeabi_v7a = libs/android-v7/*.so
#android.add_libs_arm64_v8a = libs/android-v8/*.so
#android.add_libs_x86 = libs/android-x86/*.so
#android.add_libs_mips = libs/android-mips/*.so
# (bool) Indicate whether the screen should stay on
# Don't forget to add the WAKE_LOCK permission if you set this to True
#android.wakelock = False
# (list) Android application meta-data to set (key=value format)
#android.meta_data =
# (list) Android library project to add (will be added in the
# project.properties automatically.)
#android.library_references =
# (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag
#android.uses_library =
# (str) Android logcat filters to use
#android.logcat_filters = *:S python:D
# (bool) Copy library instead of making a libpymodules.so
#android.copy_libs = 1
# (str) The Android arch to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64
android.arch = arm64-v8a,armeabi-v7a
# (int) overrides automatic versionCode computation (used in build.gradle)
# this is not the same as app version and should only be edited if you know what you're doing
# android.numeric_version = 1
[buildozer]
# (int) Log level (0 = error only, 1 = info, 2 = debug (with command output))
log_level = 2
# (int) Display warning if buildozer is run as root (0 = False, 1 = True)
warn_on_root = 0
# (str) Path to build artifact storage, absolute or relative to spec file
# build_dir = ./.buildozer
# (str) Path to build output (i.e. .apk, .ipa) storage
# bin_dir = ./bin
I am getting the following error when the libffi compilation fails:
[INFO]: Building libffi for armeabi-v7a
[INFO]: -> directory context /root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi
[INFO]: -> running autoreconf -vif
[INFO]: -> running configure --host=arm-linux-androidea...(and 163 more)
working: See `config.log' for more details
Exception in thread background thread for pid 14933:
Traceback (most recent call last):
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.9/site-packages/sh.py", line 1637, in wrap
    fn(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/sh.py", line 2561, in background_thread
    handle_exit_code(exit_code)
  File "/usr/lib/python3.9/site-packages/sh.py", line 2265, in fn
    return self.command.handle_command_exit_code(exit_code)
  File "/usr/lib/python3.9/site-packages/sh.py", line 865, in handle_command_exit_code
    raise exc
sh.ErrorReturnCode_77:
RAN: /root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi/configure --host=arm-linux-androideabi --prefix=/root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi --disable-builddir --enable-shared
STDOUT:
checking build system type... aarch64-unknown-linux-gnu
checking host system type... arm-unknown-linux-androideabi
checking target system type... arm-unknown-linux-androideabi
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for arm-linux-androideabi-strip... arm-linux-androideabi-strip --strip-unneeded
checking for a race-free mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make -j8 sets $(MAKE)... yes
checking whether make -j8 supports nested variables... yes
checking for arm-linux-androideabi-gcc... /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target armv7a-linux-androideabi21 -fomit-frame-pointer -march=armv7-a -mfloat-abi=softfp -mfpu=vfp -mthumb -fPIC
checking whether the C compiler works... no
configure: error: in `/root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi':
configure: error: C compiler cannot create executables
See `config.log' for more details
STDERR:
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 1260, in <module>
    main()
  File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/entrypoints.py", line 18, in main
    ToolchainCL()
  File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 709, in __init__
    getattr(self, command)(args)
  File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 154, in wrapper_func
    build_dist_from_args(ctx, dist, args)
  File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 213, in build_dist_from_args
    build_recipes(build_order, python_modules, ctx,
  File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/build.py", line 577, in build_recipes
    recipe.build_arch(arch)
  File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/recipes/libffi/__init__.py", line 42, in build_arch
    shprint(sh.Command('./configure'),
  File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/logger.py", line 167, in shprint
    for line in output:
  File "/usr/lib/python3.9/site-packages/sh.py", line 911, in next
    self.wait()
  File "/usr/lib/python3.9/site-packages/sh.py", line 841, in wait
    self.handle_command_exit_code(exit_code)
  File "/usr/lib/python3.9/site-packages/sh.py", line 865, in handle_command_exit_code
    raise exc
sh.ErrorReturnCode_77:
RAN: /root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi/configure --host=arm-linux-androideabi --prefix=/root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi --disable-builddir --enable-shared
STDOUT:
checking build system type... aarch64-unknown-linux-gnu
checking host system type... arm-unknown-linux-androideabi
checking target system type... arm-unknown-linux-androideabi
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for arm-linux-androideabi-strip... arm-linux-androideabi-strip --strip-unneeded
checking for a race-free mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make -j8 sets $(MAKE)... yes
checking whether make -j8 supports nested variables... yes
checking for arm-linux-androideabi-gcc... /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target armv7a-linux-androideabi21 -fomit-frame-pointer -march=armv7-a -mfloat-abi=softfp -mfpu=vfp -mthumb -fPIC
checking whether the C compiler works... no
configure: error: in `/root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi':
configure: error: C compiler cannot create executables
See `config.log' for more details
STDERR:
# Command failed: /usr/bin/python -m pythonforandroid.toolchain create --dist_name=myapp --bootstrap=sdl2 --requirements=python3 --arch armeabi-v7a --copy-libs --color=always --storage-dir="/root/test/.buildozer/android/platform/build-armeabi-v7a" --ndk-api=21
Here is the config.log:
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by libffi configure 3.3-rc0, which was
generated by GNU Autoconf 2.70. Invocation command line was
$ /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/configure --host=aarch64-linux-android --prefix=/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi --disable-builddir --enable-shared
## --------- ##
## Platform. ##
## --------- ##
hostname = localhost
uname -m = aarch64
uname -r = 3.18.31-perf-g040a88f
uname -s = Linux
uname -v = #1 SMP PREEMPT Thu Nov 7 00:28:25 WIB 2019
/usr/bin/uname -p = unknown
/bin/uname -X = unknown
/bin/arch = unknown
/usr/bin/arch -k = unknown
/usr/convex/getsysinfo = unknown
/usr/bin/hostinfo = unknown
/bin/machine = unknown
/usr/bin/oslevel = unknown
/bin/universe = unknown
PATH: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/
PATH: /root/Android_Tools/android-ndk-r21d/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86/bin/
PATH: /root/Android_Tools/android-ndk-r21d/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/bin/
PATH: /root/Android_Tools/android-ndk-r21d/
PATH: /root/Android_Tools/android-sdk/tools/
PATH: /root/.buildozer/android/platform/apache-ant-1.9.4/bin/
PATH: /root/bin/
PATH: /usr/local/sbin/
PATH: /usr/local/bin/
PATH: /usr/bin/
PATH: /usr/lib/jvm/default/bin/
PATH: /usr/bin/site_perl/
PATH: /usr/bin/vendor_perl/
PATH: /usr/bin/core_perl/
PATH: /usr/sbin/
PATH: /sbin/
PATH: /bin/
## ----------- ##
## Core tests. ##
## ----------- ##
configure:3064: looking for aux files: ltmain.sh compile missing install-sh config.guess config.sub
configure:3077: trying ./
configure:3106: ./ltmain.sh found
configure:3106: ./compile found
configure:3106: ./missing found
configure:3088: ./install-sh found
configure:3106: ./config.guess found
configure:3106: ./config.sub found
configure:3227: checking build system type
configure:3242: result: aarch64-unknown-linux-gnu
configure:3262: checking host system type
configure:3276: result: aarch64-unknown-linux-android
configure:3296: checking target system type
configure:3310: result: aarch64-unknown-linux-android
configure:3409: checking for gsed
configure:3445: result: sed
configure:3474: checking for a BSD-compatible install
configure:3547: result: /usr/bin/install -c
configure:3558: checking whether build environment is sane
configure:3613: result: yes
configure:3670: checking for aarch64-linux-android-strip
configure:3702: result: aarch64-linux-android-strip --strip-unneeded
configure:3773: checking for a race-free mkdir -p
configure:3817: result: /usr/bin/mkdir -p
configure:3824: checking for gawk
configure:3845: found /usr/bin/gawk
configure:3856: result: gawk
configure:3867: checking whether make -j8 sets $(MAKE)
configure:3890: result: yes
configure:3920: checking whether make -j8 supports nested variables
configure:3938: result: yes
configure:4088: checking for aarch64-linux-android-gcc
configure:4120: result: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a
configure:4518: checking for C compiler version
configure:4527: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a --version >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4538: $? = 1
configure:4527: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -v >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4538: $? = 1
configure:4527: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -V >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4538: $? = 1
configure:4527: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -qversion >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4538: $? = 1
configure:4527: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -version >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4538: $? = 1
configure:4558: checking whether the C compiler works
configure:4580: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -DANDROID -D__ANDROID_API__=23 -I/root/Android_Tools/android-ndk-r21d/sysroot/usr/include/aarch64-linux-android -I/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/python-installs/myapp/include/python3.8 -L/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/libs_collections/myapp/arm64-v8a conftest.c >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4584: $? = 1
configure:4624: result: no
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "libffi"
| #define PACKAGE_TARNAME "libffi"
| #define PACKAGE_VERSION "3.3-rc0"
| #define PACKAGE_STRING "libffi 3.3-rc0"
| #define PACKAGE_BUGREPORT "http://github.com/libffi/libffi/issues"
| #define PACKAGE_URL ""
| #define PACKAGE "libffi"
| #define VERSION "3.3-rc0"
| /* end confdefs.h. */
|
| int
| main (void)
| {
|
| ;
| return 0;
| }
configure:4629: error: in `/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi':
configure:4631: error: C compiler cannot create executables
See `config.log' for more details
## ---------------- ##
## Cache variables. ##
## ---------------- ##
ac_cv_build=aarch64-unknown-linux-gnu
ac_cv_env_CCASFLAGS_set=
ac_cv_env_CCASFLAGS_value=
ac_cv_env_CCAS_set=
ac_cv_env_CCAS_value=
ac_cv_env_CPPFLAGS_set=set
ac_cv_env_CPPFLAGS_value='-DANDROID -D__ANDROID_API__=23 -I/root/Android_Tools/android-ndk-r21d/sysroot/usr/include/aarch64-linux-android -I/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/python-installs/myapp/include/python3.8'
ac_cv_env_CXXCPP_set=
ac_cv_env_CXXCPP_value=
ac_cv_env_LT_SYS_LIBRARY_PATH_set=
ac_cv_env_LT_SYS_LIBRARY_PATH_value=
ac_cv_env_build_alias_set=
ac_cv_env_build_alias_value=
ac_cv_env_host_alias_set=set
ac_cv_env_host_alias_value=aarch64-linux-android
ac_cv_env_target_alias_set=
ac_cv_env_target_alias_value=
ac_cv_host=aarch64-unknown-linux-android
ac_cv_path_ax_enable_builddir_sed=sed
ac_cv_path_install='/usr/bin/install -c'
ac_cv_path_mkdir=/usr/bin/mkdir
ac_cv_prog_AWK=gawk
ac_cv_prog_CC='/root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a'
ac_cv_prog_STRIP='aarch64-linux-android-strip --strip-unneeded'
ac_cv_prog_make_make_set=yes
ac_cv_target=aarch64-unknown-linux-android
am_cv_make_support_nested_variables=yes
## ----------------- ##
## Output variables. ##
## ----------------- ##
ACLOCAL='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/missing aclocal-1.16'
ALLOCA=''
AMDEPBACKSLASH=''
AMDEP_FALSE=''
AMDEP_TRUE=''
AMTAR='$${TAR-tar}'
AM_BACKSLASH='\'
AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)'
AM_DEFAULT_VERBOSITY='1'
AM_LTLDFLAGS=''
AM_RUNTESTFLAGS=''
AM_V='$(V)'
AR='aarch64-linux-android-ar'
AUTOCONF='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/missing autoconf'
AUTOHEADER='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/missing autoheader'
AUTOMAKE='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/missing automake-1.16'
AWK='gawk'
BUILD_DOCS_FALSE=''
BUILD_DOCS_TRUE=''
CC='/root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a'
CCAS=''
CCASDEPMODE=''
CCASFLAGS=''
CCDEPMODE=''
CFLAGS='-target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a'
CPPFLAGS='-DANDROID -D__ANDROID_API__=23 -I/root/Android_Tools/android-ndk-r21d/sysroot/usr/include/aarch64-linux-android -I/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/python-installs/myapp/include/python3.8'
CXX='/root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang++ -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a'
CXXCPP=''
CXXDEPMODE=''
CXXFLAGS='-target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a'
CYGPATH_W='echo'
DEFS=''
DEPDIR=''
DLLTOOL=''
DSYMUTIL=''
DUMPBIN=''
ECHO_C=''
ECHO_N='-n'
ECHO_T=''
EGREP=''
EXEEXT=''
FFI_DEBUG_FALSE=''
FFI_DEBUG_TRUE=''
FFI_EXEC_TRAMPOLINE_TABLE=''
FFI_EXEC_TRAMPOLINE_TABLE_FALSE=''
FFI_EXEC_TRAMPOLINE_TABLE_TRUE=''
FGREP=''
GREP=''
HAVE_LONG_DOUBLE=''
HAVE_LONG_DOUBLE_VARIANT=''
INSTALL_DATA='${INSTALL} -m 644'
INSTALL_PROGRAM='${INSTALL}'
INSTALL_SCRIPT='${INSTALL}'
INSTALL_STRIP_PROGRAM='$(install_sh) -c -s'
LD='aarch64-linux-android-ld'
LDFLAGS=' -L/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/libs_collections/myapp/arm64-v8a'
LIBFFI_BUILD_VERSIONED_SHLIB_FALSE=''
LIBFFI_BUILD_VERSIONED_SHLIB_GNU_FALSE=''
LIBFFI_BUILD_VERSIONED_SHLIB_GNU_TRUE=''
LIBFFI_BUILD_VERSIONED_SHLIB_SUN_FALSE=''
LIBFFI_BUILD_VERSIONED_SHLIB_SUN_TRUE=''
LIBFFI_BUILD_VERSIONED_SHLIB_TRUE=''
LIBOBJS=''
LIBS=''
LIBTOOL=''
LIPO=''
LN_S=''
LTLIBOBJS=''
LT_SYS_LIBRARY_PATH=''
MAINT=''
MAINTAINER_MODE_FALSE=''
MAINTAINER_MODE_TRUE=''
MAKEINFO='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/missing makeinfo'
MANIFEST_TOOL=''
MKDIR_P='/usr/bin/mkdir -p'
NM='aarch64-linux-android-nm'
NMEDIT=''
OBJDUMP=''
OBJEXT=''
OPT_LDFLAGS=''
OTOOL64=''
OTOOL=''
PACKAGE='libffi'
PACKAGE_BUGREPORT='http://github.com/libffi/libffi/issues'
PACKAGE_NAME='libffi'
PACKAGE_STRING='libffi 3.3-rc0'
PACKAGE_TARNAME='libffi'
PACKAGE_URL=''
PACKAGE_VERSION='3.3-rc0'
PATH_SEPARATOR=':'
PRTDIAG=''
RANLIB='aarch64-linux-android-ranlib'
SECTION_LDFLAGS=''
SED=''
SET_MAKE=''
SHELL='/bin/sh'
STRIP='aarch64-linux-android-strip --strip-unneeded'
TARGET='aarch64-unknown-linux-android'
TARGETDIR=''
TARGET_OBJ=''
TESTSUBDIR_FALSE=''
TESTSUBDIR_TRUE=''
VERSION='3.3-rc0'
ac_ct_AR=''
ac_ct_CC=''
ac_ct_CXX=''
ac_ct_DUMPBIN=''
am__EXEEXT_FALSE=''
am__EXEEXT_TRUE=''
am__fastdepCCAS_FALSE=''
am__fastdepCCAS_TRUE=''
am__fastdepCC_FALSE=''
am__fastdepCC_TRUE=''
am__fastdepCXX_FALSE=''
am__fastdepCXX_TRUE=''
am__include=''
am__isrc=''
am__leading_dot='.'
am__nodep=''
am__quote=''
am__tar='$${TAR-tar} chof - "$$tardir"'
am__untar='$${TAR-tar} xf -'
ax_enable_builddir_sed='sed'
bindir='${exec_prefix}/bin'
build='aarch64-unknown-linux-gnu'
build_alias=''
build_cpu='aarch64'
build_os='linux-gnu'
build_vendor='unknown'
datadir='${datarootdir}'
datarootdir='${prefix}/share'
docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
dvidir='${docdir}'
exec_prefix='NONE'
host='aarch64-unknown-linux-android'
host_alias='aarch64-linux-android'
host_cpu='aarch64'
host_os='linux-android'
host_vendor='unknown'
htmldir='${docdir}'
includedir='${prefix}/include'
infodir='${datarootdir}/info'
install_sh='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/install-sh'
libdir='${exec_prefix}/lib'
libexecdir='${exec_prefix}/libexec'
localedir='${datarootdir}/locale'
localstatedir='${prefix}/var'
mandir='${datarootdir}/man'
mkdir_p='$(MKDIR_P)'
oldincludedir='/usr/include'
pdfdir='${docdir}'
prefix='/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi'
program_transform_name='s,x,x,'
psdir='${docdir}'
runstatedir='${localstatedir}/run'
sbindir='${exec_prefix}/sbin'
sharedstatedir='${prefix}/com'
sys_symbol_underscore=''
sysconfdir='${prefix}/etc'
target='aarch64-unknown-linux-android'
target_alias='aarch64-linux-android'
target_cpu='aarch64'
target_os='linux-android'
target_vendor='unknown'
toolexecdir=''
toolexeclibdir=''
## ----------- ##
## confdefs.h. ##
## ----------- ##
/* confdefs.h */
#define PACKAGE_NAME "libffi"
#define PACKAGE_TARNAME "libffi"
#define PACKAGE_VERSION "3.3-rc0"
#define PACKAGE_STRING "libffi 3.3-rc0"
#define PACKAGE_BUGREPORT "http://github.com/libffi/libffi/issues"
#define PACKAGE_URL ""
#define PACKAGE "libffi"
#define VERSION "3.3-rc0"
configure: exit 77
Any help?
A:
Did you try getting the latest Clang compiler? It seems there is no working C compiler in your Arch installation on Termux. If you do have a working C compiler, you may try installing the base-devel package and making an alias for the C compiler as gcc.
A:
I had this error too.
After many hours of looking at config files and other logs, I found that what got me past the 'unable to build libffi' errors was tweaking the Android API versions in my buildozer.spec and then re-running
buildozer -v android debug
Here's the bit of the buildozer.spec I changed:
# (int) Target Android API, should be as high as possible.
android.api = 28
# (int) Minimum API your APK / AAB will support.
android.minapi = 23
# (int) Android SDK version to use
android.sdk = 23
# (str) Android NDK version to use
android.ndk = 23
# (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi.
android.ndk_api = 23
(Having every version that isn't the max version be the same seems to help as well.)
As an educated guess as to why this works: buildozer downloads slightly different packages, build instructions and config files depending on which SDK is specified, and the versions I've specified just happen to have the correct tools/PATHs etc. for building libffi.
These versions are not what I'm trying to build for, but after 2 weeks of wrestling with this software I've had to alter my goals to:
Just building the simplest 'hello world' for any version of Android that might work. I figure if I can get something built, I'll target more later.
In summary:
libffi now compiles after adjusting the Android API versions in the buildozer spec; I have new errors, but libffi is compiled.
I'm running Linux Mate 19.1 on a 64-bit machine.
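One extra step worth noting (an assumption about the workflow, not something the answer above states): after changing API/NDK versions in buildozer.spec, stale build state from the old settings can linger, so cleaning before the rebuild helps. Guarded so the sketch is a no-op where buildozer isn't installed:

```shell
if command -v buildozer >/dev/null 2>&1; then
  buildozer android clean     # drop build artifacts made with the old settings
  buildozer -v android debug  # rebuild with the new API/NDK versions
  rebuild_status="rebuilt with new settings"
else
  rebuild_status="buildozer not installed; nothing to do"
fi
echo "$rebuild_status"
```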
A:
I had the exact same issue on Windows (Ubuntu subsystem 20.4.1), NDK r25b, Android API 31, SDK 21, p4a develop.
I could solve this error by manually updating WSL 1 to WSL 2.
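For reference, the WSL version switch is done from the Windows side (PowerShell or cmd), not inside the distro; the distro name below is an assumption, and `wsl -l -v` lists the real one. The sketch is guarded so it is a no-op on non-Windows shells:

```shell
if command -v wsl.exe >/dev/null 2>&1; then
  wsl.exe -l -v                        # show distros and their WSL versions
  wsl.exe --set-version Ubuntu-20.04 2 # convert the distro to WSL 2
  wsl.exe --set-default-version 2      # use WSL 2 for future distros
  wsl_status="converted to WSL 2"
else
  wsl_status="wsl.exe not found; not running under Windows"
fi
echo "$wsl_status"
```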
|
Buildozer fails to compile libffi on arm64/aarch64 CPU
|
I am trying to run Buildozer on my Android phone.
For this I am using an Arch Linux proot in the Termux app (Android 7, Redmi Note 4).
Since Google only distributes the x86_64 version of the NDK, I am using an aarch64/arm64 Android NDK (version r21d) and SDK from this GitHub repo: https://github.com/Lzhiyong/termux-ndk
I am using JDK 15 from the Arch Linux ARM repositories.
For testing purposes I am using a simple hello world script: print("Hello")
When I run buildozer -v android debug, everything goes fine at first: it downloads everything and then tries to compile.
Everything compiles successfully except libffi.
What I have tried so far:
I tried compiling for both armeabi-v7a and arm64-v8a; both failed.
I tried installing libffi via pacman; no effect.
I tried creating a GitHub issue; no reply.
I asked on Stack Overflow before, received a lot of downvotes saying all logs and code should be pasted in the question rather than on an external website, and deleted the old question.
Here is my buildozer.spec file:
[app]
# (str) Title of your application
title = My Application
# (str) Package name
package.name = myapp
# (str) Package domain (needed for android/ios packaging)
package.domain = org.test
# (str) Source code where the main.py live
source.dir = .
# (list) Source files to include (let empty to include all the files)
source.include_exts = py,png,jpg,ui
# (list) List of inclusions using pattern matching
#source.include_patterns = assets/*,images/*.png
# (list) Source files to exclude (let empty to not exclude anything)
source.exclude_exts = spec
# (list) List of directory to exclude (let empty to not exclude anything)
source.exclude_dirs = tests,bin,.buildozer,__pycache__
# (list) List of exclusions using pattern matching
#source.exclude_patterns = license,images/*/*.jpg
# (str) Application versioning (method 1)
version = 0.1
# (str) Application versioning (method 2)
# version.regex = __version__ = ['"](.*)['"]
# version.filename = %(source.dir)s/main.py
# (list) Application requirements
# comma separated e.g. requirements = sqlite3,kivy
requirements = python3==3.8.7
# (str) Custom source folders for requirements
# Sets custom source for any requirements with recipes
# requirements.source.kivy = ../../kivy
# (list) Garden requirements
#garden_requirements =
# (str) Presplash of the application
#presplash.filename = %(source.dir)s/data/presplash.png
# (str) Icon of the application
#icon.filename = %(source.dir)s/data/icon.png
# (str) Supported orientation (one of landscape, sensorLandscape, portrait or all)
orientation = portrait
# (list) List of service to declare
#services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY
#
# OSX Specific
#
#
# author = © Copyright Info
# change the major version of python used by the app
osx.python_version = 3
# Kivy version to use
osx.kivy_version = 1.9.1
#
# Android specific
#
# (bool) Indicate if the application should be fullscreen or not
fullscreen = 1
# (string) Presplash background color (for new android toolchain)
# Supported formats are: #RRGGBB #AARRGGBB or one of the following names:
# red, blue, green, black, white, gray, cyan, magenta, yellow, lightgray,
# darkgray, grey, lightgrey, darkgrey, aqua, fuchsia, lime, maroon, navy,
# olive, purple, silver, teal.
android.presplash_color = cyan
# (list) Permissions
#android.permissions = INTERNET
# (int) Target Android API, should be as high as possible.
android.api = 30
# (int) Minimum API your APK will support.
android.minapi = 23
# (int) Android SDK version to use
#android.sdk = 20
# (str) Android NDK version to use
android.ndk = r21d
# (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi.
android.ndk_api = 23
# (bool) Use --private data storage (True) or --dir public storage (False)
android.private_storage = True
# (str) Android NDK directory (if empty, it will be automatically downloaded.)
android.ndk_path = ~/Android_Tools/android-ndk-r21d
# (str) Android SDK directory (if empty, it will be automatically downloaded.)
android.sdk_path = ~/Android_Tools/android-sdk
# (str) ANT directory (if empty, it will be automatically downloaded.)
#android.ant_path =
# (bool) If True, then skip trying to update the Android sdk
# This can be useful to avoid excess Internet downloads or save time
# when an update is due and you just want to test/build your package
android.skip_update = True
# (bool) If True, then automatically accept SDK license
# agreements. This is intended for automation only. If set to False,
# the default, you will be shown the license when first running
# buildozer.
android.accept_sdk_license = True
# (str) Android entry point, default is ok for Kivy-based app
#android.entrypoint = org.renpy.android.PythonActivity
# (str) Android app theme, default is ok for Kivy-based app
# android.apptheme = "@android:style/Theme.NoTitleBar"
# (list) Pattern to whitelist for the whole project
#android.whitelist =
# (str) Path to a custom whitelist file
#android.whitelist_src =
# (str) Path to a custom blacklist file
#android.blacklist_src =
# (list) List of Java .jar files to add to the libs so that pyjnius can access
# their classes. Don't add jars that you do not need, since extra jars can slow
# down the build process. Allows wildcards matching, for example:
# OUYA-ODK/libs/*.jar
#android.add_jars = foo.jar,bar.jar,path/to/more/*.jar
# (list) List of Java files to add to the android project (can be java or a
# directory containing the files)
#android.add_src =
# (list) Android AAR archives to add (currently works only with sdl2_gradle
# bootstrap)
#android.add_aars =
# (list) Gradle dependencies to add (currently works only with sdl2_gradle
# bootstrap)
#android.gradle_dependencies =
# (list) add java compile options
# this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option
# see https://developer.android.com/studio/write/java8-support for further information
# android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8"
# (list) Gradle repositories to add {can be necessary for some android.gradle_dependencies}
# please enclose in double quotes
# e.g. android.gradle_repositories = "maven { url 'https://kotlin.bintray.com/ktor' }"
#android.add_gradle_repositories =
# (list) packaging options to add
# see https://google.github.io/android-gradle-dsl/current/com.android.build.gradle.internal.dsl.PackagingOptions.html
# can be necessary to solve conflicts in gradle_dependencies
# please enclose in double quotes
# e.g. android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'"
#android.add_gradle_repositories =
# (list) Java classes to add as activities to the manifest.
#android.add_activities = com.example.ExampleActivity
# (str) OUYA Console category. Should be one of GAME or APP
# If you leave this blank, OUYA support will not be enabled
#android.ouya.category = GAME
# (str) Filename of OUYA Console icon. It must be a 732x412 png image.
#android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png
# (str) XML file to include as an intent filters in <activity> tag
#android.manifest.intent_filters =
# (str) launchMode to set for the main activity
#android.manifest.launch_mode = standard
# (list) Android additional libraries to copy into libs/armeabi
#android.add_libs_armeabi = libs/android/*.so
#android.add_libs_armeabi_v7a = libs/android-v7/*.so
#android.add_libs_arm64_v8a = libs/android-v8/*.so
#android.add_libs_x86 = libs/android-x86/*.so
#android.add_libs_mips = libs/android-mips/*.so
# (bool) Indicate whether the screen should stay on
# Don't forget to add the WAKE_LOCK permission if you set this to True
#android.wakelock = False
# (list) Android application meta-data to set (key=value format)
#android.meta_data =
# (list) Android library project to add (will be added in the
# project.properties automatically.)
#android.library_references =
# (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag
#android.uses_library =
# (str) Android logcat filters to use
#android.logcat_filters = *:S python:D
# (bool) Copy library instead of making a libpymodules.so
#android.copy_libs = 1
# (str) The Android arch to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64
android.arch = arm64-v8a,armeabi-v7a
# (int) overrides automatic versionCode computation (used in build.gradle)
# this is not the same as app version and should only be edited if you know what you're doing
# android.numeric_version = 1
[buildozer]
# (int) Log level (0 = error only, 1 = info, 2 = debug (with command output))
log_level = 2
# (int) Display warning if buildozer is run as root (0 = False, 1 = True)
warn_on_root = 0
# (str) Path to build artifact storage, absolute or relative to spec file
# build_dir = ./.buildozer
# (str) Path to build output (i.e. .apk, .ipa) storage
# bin_dir = ./bin
I am getting the following error when the libffi compilation fails:
[INFO]: Building libffi for armeabi-v7a
[INFO]: -> directory context /root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi
[INFO]: -> running autoreconf -vif
[INFO]: -> running configure --host=arm-linux-androidea...(and 163 more)
working: See `config.log' for more details Exception in thread background thread for pid 14933:
Traceback (most recent call last):
File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
self.run()
File "/usr/lib/python3.9/threading.py", line 892, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.9/site-packages/sh.py", line 1637, in wrap
fn(*rgs, **kwargs)
File "/usr/lib/python3.9/site-packages/sh.py", line 2561, in background_thread
handle_exit_code(exit_code)
File "/usr/lib/python3.9/site-packages/sh.py", line 2265, in fn
return self.command.handle_command_exit_code(exit_code)
File "/usr/lib/python3.9/site-packages/sh.py", line 865, in handle_command_exit_code
raise exc
sh.ErrorReturnCode_77:
RAN: /root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi/configure --host=arm-linux-androideabi --prefix=/root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi --disable-builddir --enable-shared
STDOUT:
checking build system type... aarch64-unknown-linux-gnu
checking host system type... arm-unknown-linux-androideabi
checking target system type... arm-unknown-linux-androideabi
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for arm-linux-androideabi-strip... arm-linux-androideabi-strip --strip-unneeded
checking for a race-free mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make -j8 sets $(MAKE)... yes
checking whether make -j8 supports nested variables... yes
checking for arm-linux-androideabi-gcc... /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target armv7a-linux-androideabi21 -fomit-frame-pointer -march=armv7-a -mfloat-abi=softfp -mfpu=vfp -mthumb -fPIC
checking whether the C compiler works... no
configure: error: in `/root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi':
configure: error: C compiler cannot create executables
See `config.log' for more details
STDERR:
Traceback (most recent call last):
File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 1260, in <module>
main()
File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/entrypoints.py", line 18, in main
ToolchainCL()
File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 709, in __init__
getattr(self, command)(args)
File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 154, in wrapper_func
build_dist_from_args(ctx, dist, args)
File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 213, in build_dist_from_args
build_recipes(build_order, python_modules, ctx,
File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/build.py", line 577, in build_recipes
recipe.build_arch(arch)
File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/recipes/libffi/__init__.py", line 42, in build_arch
shprint(sh.Command('./configure'),
File "/root/test/.buildozer/android/platform/python-for-android/pythonforandroid/logger.py", line 167, in shprint
for line in output:
File "/usr/lib/python3.9/site-packages/sh.py", line 911, in next
self.wait()
File "/usr/lib/python3.9/site-packages/sh.py", line 841, in wait
self.handle_command_exit_code(exit_code)
File "/usr/lib/python3.9/site-packages/sh.py", line 865, in handle_command_exit_code
raise exc
sh.ErrorReturnCode_77:
RAN: /root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi/configure --host=arm-linux-androideabi --prefix=/root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi --disable-builddir --enable-shared
STDOUT:
checking build system type... aarch64-unknown-linux-gnu
checking host system type... arm-unknown-linux-androideabi
checking target system type... arm-unknown-linux-androideabi
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for arm-linux-androideabi-strip... arm-linux-androideabi-strip --strip-unneeded
checking for a race-free mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make -j8 sets $(MAKE)... yes
checking whether make -j8 supports nested variables... yes
checking for arm-linux-androideabi-gcc... /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target armv7a-linux-androideabi21 -fomit-frame-pointer -march=armv7-a -mfloat-abi=softfp -mfpu=vfp -mthumb -fPIC
checking whether the C compiler works... no
configure: error: in `/root/test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/libffi/armeabi-v7a__ndk_target_21/libffi':
configure: error: C compiler cannot create executables
See `config.log' for more details
STDERR:
# Command failed: /usr/bin/python -m pythonforandroid.toolchain create --dist_name=myapp --bootstrap=sdl2 --requirements=python3 --arch armeabi-v7a --copy-libs --color=always --storage-dir="/root/test/.buildozer/android/platform/build-armeabi-v7a" --ndk-api=21
Here is the config.log:
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by libffi configure 3.3-rc0, which was
generated by GNU Autoconf 2.70. Invocation command line was
$ /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/configure --host=aarch64-linux-android --prefix=/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi --disable-builddir --enable-shared
## --------- ##
## Platform. ##
## --------- ##
hostname = localhost
uname -m = aarch64
uname -r = 3.18.31-perf-g040a88f
uname -s = Linux
uname -v = #1 SMP PREEMPT Thu Nov 7 00:28:25 WIB 2019
/usr/bin/uname -p = unknown
/bin/uname -X = unknown
/bin/arch = unknown
/usr/bin/arch -k = unknown
/usr/convex/getsysinfo = unknown
/usr/bin/hostinfo = unknown
/bin/machine = unknown
/usr/bin/oslevel = unknown
/bin/universe = unknown
PATH: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/
PATH: /root/Android_Tools/android-ndk-r21d/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86/bin/
PATH: /root/Android_Tools/android-ndk-r21d/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/bin/
PATH: /root/Android_Tools/android-ndk-r21d/
PATH: /root/Android_Tools/android-sdk/tools/
PATH: /root/.buildozer/android/platform/apache-ant-1.9.4/bin/
PATH: /root/bin/
PATH: /usr/local/sbin/
PATH: /usr/local/bin/
PATH: /usr/bin/
PATH: /usr/lib/jvm/default/bin/
PATH: /usr/bin/site_perl/
PATH: /usr/bin/vendor_perl/
PATH: /usr/bin/core_perl/
PATH: /usr/sbin/
PATH: /sbin/
PATH: /bin/
## ----------- ##
## Core tests. ##
## ----------- ##
configure:3064: looking for aux files: ltmain.sh compile missing install-sh config.guess config.sub
configure:3077: trying ./
configure:3106: ./ltmain.sh found
configure:3106: ./compile found
configure:3106: ./missing found
configure:3088: ./install-sh found
configure:3106: ./config.guess found
configure:3106: ./config.sub found
configure:3227: checking build system type
configure:3242: result: aarch64-unknown-linux-gnu
configure:3262: checking host system type
configure:3276: result: aarch64-unknown-linux-android
configure:3296: checking target system type
configure:3310: result: aarch64-unknown-linux-android
configure:3409: checking for gsed
configure:3445: result: sed
configure:3474: checking for a BSD-compatible install
configure:3547: result: /usr/bin/install -c
configure:3558: checking whether build environment is sane
configure:3613: result: yes
configure:3670: checking for aarch64-linux-android-strip
configure:3702: result: aarch64-linux-android-strip --strip-unneeded
configure:3773: checking for a race-free mkdir -p
configure:3817: result: /usr/bin/mkdir -p
configure:3824: checking for gawk
configure:3845: found /usr/bin/gawk
configure:3856: result: gawk
configure:3867: checking whether make -j8 sets $(MAKE)
configure:3890: result: yes
configure:3920: checking whether make -j8 supports nested variables
configure:3938: result: yes
configure:4088: checking for aarch64-linux-android-gcc
configure:4120: result: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a
configure:4518: checking for C compiler version
configure:4527: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a --version >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4538: $? = 1
configure:4527: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -v >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4538: $? = 1
configure:4527: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -V >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4538: $? = 1
configure:4527: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -qversion >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4538: $? = 1
configure:4527: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -version >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4538: $? = 1
configure:4558: checking whether the C compiler works
configure:4580: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a -DANDROID -D__ANDROID_API__=23 -I/root/Android_Tools/android-ndk-r21d/sysroot/usr/include/aarch64-linux-android -I/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/python-installs/myapp/include/python3.8 -L/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/libs_collections/myapp/arm64-v8a conftest.c >&5
WARNING: linker: /root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang.real: unsupported flags DT_FLAGS_1=0x8000001
configure:4584: $? = 1
configure:4624: result: no
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "libffi"
| #define PACKAGE_TARNAME "libffi"
| #define PACKAGE_VERSION "3.3-rc0"
| #define PACKAGE_STRING "libffi 3.3-rc0"
| #define PACKAGE_BUGREPORT "http://github.com/libffi/libffi/issues"
| #define PACKAGE_URL ""
| #define PACKAGE "libffi"
| #define VERSION "3.3-rc0"
| /* end confdefs.h. */
|
| int
| main (void)
| {
|
| ;
| return 0;
| }
configure:4629: error: in `/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi':
configure:4631: error: C compiler cannot create executables
See `config.log' for more details
## ---------------- ##
## Cache variables. ##
## ---------------- ##
ac_cv_build=aarch64-unknown-linux-gnu
ac_cv_env_CCASFLAGS_set=
ac_cv_env_CCASFLAGS_value=
ac_cv_env_CCAS_set=
ac_cv_env_CCAS_value=
ac_cv_env_CPPFLAGS_set=set
ac_cv_env_CPPFLAGS_value='-DANDROID -D__ANDROID_API__=23 -I/root/Android_Tools/android-ndk-r21d/sysroot/usr/include/aarch64-linux-android -I/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/python-installs/myapp/include/python3.8'
ac_cv_env_CXXCPP_set=
ac_cv_env_CXXCPP_value=
ac_cv_env_LT_SYS_LIBRARY_PATH_set=
ac_cv_env_LT_SYS_LIBRARY_PATH_value=
ac_cv_env_build_alias_set=
ac_cv_env_build_alias_value=
ac_cv_env_host_alias_set=set
ac_cv_env_host_alias_value=aarch64-linux-android
ac_cv_env_target_alias_set=
ac_cv_env_target_alias_value=
ac_cv_host=aarch64-unknown-linux-android
ac_cv_path_ax_enable_builddir_sed=sed
ac_cv_path_install='/usr/bin/install -c'
ac_cv_path_mkdir=/usr/bin/mkdir
ac_cv_prog_AWK=gawk
ac_cv_prog_CC='/root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a'
ac_cv_prog_STRIP='aarch64-linux-android-strip --strip-unneeded'
ac_cv_prog_make_make_set=yes
ac_cv_target=aarch64-unknown-linux-android
am_cv_make_support_nested_variables=yes
## ----------------- ##
## Output variables. ##
## ----------------- ##
ACLOCAL='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/missing aclocal-1.16'
ALLOCA=''
AMDEPBACKSLASH=''
AMDEP_FALSE=''
AMDEP_TRUE=''
AMTAR='$${TAR-tar}'
AM_BACKSLASH='\'
AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)'
AM_DEFAULT_VERBOSITY='1'
AM_LTLDFLAGS=''
AM_RUNTESTFLAGS=''
AM_V='$(V)'
AR='aarch64-linux-android-ar'
AUTOCONF='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/missing autoconf'
AUTOHEADER='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/missing autoheader'
AUTOMAKE='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/missing automake-1.16'
AWK='gawk'
BUILD_DOCS_FALSE=''
BUILD_DOCS_TRUE=''
CC='/root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a'
CCAS=''
CCASDEPMODE=''
CCASFLAGS=''
CCDEPMODE=''
CFLAGS='-target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a'
CPPFLAGS='-DANDROID -D__ANDROID_API__=23 -I/root/Android_Tools/android-ndk-r21d/sysroot/usr/include/aarch64-linux-android -I/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/python-installs/myapp/include/python3.8'
CXX='/root/Android_Tools/android-ndk-r21d/toolchains/llvm/prebuilt/linux-aarch64/bin/clang++ -target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a'
CXXCPP=''
CXXDEPMODE=''
CXXFLAGS='-target aarch64-linux-android23 -fomit-frame-pointer -march=armv8-a'
CYGPATH_W='echo'
DEFS=''
DEPDIR=''
DLLTOOL=''
DSYMUTIL=''
DUMPBIN=''
ECHO_C=''
ECHO_N='-n'
ECHO_T=''
EGREP=''
EXEEXT=''
FFI_DEBUG_FALSE=''
FFI_DEBUG_TRUE=''
FFI_EXEC_TRAMPOLINE_TABLE=''
FFI_EXEC_TRAMPOLINE_TABLE_FALSE=''
FFI_EXEC_TRAMPOLINE_TABLE_TRUE=''
FGREP=''
GREP=''
HAVE_LONG_DOUBLE=''
HAVE_LONG_DOUBLE_VARIANT=''
INSTALL_DATA='${INSTALL} -m 644'
INSTALL_PROGRAM='${INSTALL}'
INSTALL_SCRIPT='${INSTALL}'
INSTALL_STRIP_PROGRAM='$(install_sh) -c -s'
LD='aarch64-linux-android-ld'
LDFLAGS=' -L/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/libs_collections/myapp/arm64-v8a'
LIBFFI_BUILD_VERSIONED_SHLIB_FALSE=''
LIBFFI_BUILD_VERSIONED_SHLIB_GNU_FALSE=''
LIBFFI_BUILD_VERSIONED_SHLIB_GNU_TRUE=''
LIBFFI_BUILD_VERSIONED_SHLIB_SUN_FALSE=''
LIBFFI_BUILD_VERSIONED_SHLIB_SUN_TRUE=''
LIBFFI_BUILD_VERSIONED_SHLIB_TRUE=''
LIBOBJS=''
LIBS=''
LIBTOOL=''
LIPO=''
LN_S=''
LTLIBOBJS=''
LT_SYS_LIBRARY_PATH=''
MAINT=''
MAINTAINER_MODE_FALSE=''
MAINTAINER_MODE_TRUE=''
MAKEINFO='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/missing makeinfo'
MANIFEST_TOOL=''
MKDIR_P='/usr/bin/mkdir -p'
NM='aarch64-linux-android-nm'
NMEDIT=''
OBJDUMP=''
OBJEXT=''
OPT_LDFLAGS=''
OTOOL64=''
OTOOL=''
PACKAGE='libffi'
PACKAGE_BUGREPORT='http://github.com/libffi/libffi/issues'
PACKAGE_NAME='libffi'
PACKAGE_STRING='libffi 3.3-rc0'
PACKAGE_TARNAME='libffi'
PACKAGE_URL=''
PACKAGE_VERSION='3.3-rc0'
PATH_SEPARATOR=':'
PRTDIAG=''
RANLIB='aarch64-linux-android-ranlib'
SECTION_LDFLAGS=''
SED=''
SET_MAKE=''
SHELL='/bin/sh'
STRIP='aarch64-linux-android-strip --strip-unneeded'
TARGET='aarch64-unknown-linux-android'
TARGETDIR=''
TARGET_OBJ=''
TESTSUBDIR_FALSE=''
TESTSUBDIR_TRUE=''
VERSION='3.3-rc0'
ac_ct_AR=''
ac_ct_CC=''
ac_ct_CXX=''
ac_ct_DUMPBIN=''
am__EXEEXT_FALSE=''
am__EXEEXT_TRUE=''
am__fastdepCCAS_FALSE=''
am__fastdepCCAS_TRUE=''
am__fastdepCC_FALSE=''
am__fastdepCC_TRUE=''
am__fastdepCXX_FALSE=''
am__fastdepCXX_TRUE=''
am__include=''
am__isrc=''
am__leading_dot='.'
am__nodep=''
am__quote=''
am__tar='$${TAR-tar} chof - "$$tardir"'
am__untar='$${TAR-tar} xf -'
ax_enable_builddir_sed='sed'
bindir='${exec_prefix}/bin'
build='aarch64-unknown-linux-gnu'
build_alias=''
build_cpu='aarch64'
build_os='linux-gnu'
build_vendor='unknown'
datadir='${datarootdir}'
datarootdir='${prefix}/share'
docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
dvidir='${docdir}'
exec_prefix='NONE'
host='aarch64-unknown-linux-android'
host_alias='aarch64-linux-android'
host_cpu='aarch64'
host_os='linux-android'
host_vendor='unknown'
htmldir='${docdir}'
includedir='${prefix}/include'
infodir='${datarootdir}/info'
install_sh='${SHELL} /root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi/install-sh'
libdir='${exec_prefix}/lib'
libexecdir='${exec_prefix}/libexec'
localedir='${datarootdir}/locale'
localstatedir='${prefix}/var'
mandir='${datarootdir}/man'
mkdir_p='$(MKDIR_P)'
oldincludedir='/usr/include'
pdfdir='${docdir}'
prefix='/root/test/.buildozer/android/platform/build-arm64-v8a,armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_23/libffi'
program_transform_name='s,x,x,'
psdir='${docdir}'
runstatedir='${localstatedir}/run'
sbindir='${exec_prefix}/sbin'
sharedstatedir='${prefix}/com'
sys_symbol_underscore=''
sysconfdir='${prefix}/etc'
target='aarch64-unknown-linux-android'
target_alias='aarch64-linux-android'
target_cpu='aarch64'
target_os='linux-android'
target_vendor='unknown'
toolexecdir=''
toolexeclibdir=''
## ----------- ##
## confdefs.h. ##
## ----------- ##
/* confdefs.h */
#define PACKAGE_NAME "libffi"
#define PACKAGE_TARNAME "libffi"
#define PACKAGE_VERSION "3.3-rc0"
#define PACKAGE_STRING "libffi 3.3-rc0"
#define PACKAGE_BUGREPORT "http://github.com/libffi/libffi/issues"
#define PACKAGE_URL ""
#define PACKAGE "libffi"
#define VERSION "3.3-rc0"
configure: exit 77
Any Help ?
|
[
"Did you try to get the latest clang compiler? Seems like there is no working c compiler on your installation of arch on termux. If you do have a c compiler working, you may try to get the base-devel package and make an alias for the c compiler as gcc\n",
"I too had this error.\nAfter many hours looking at config files and other logs, I found what got me past the 'unable to build libffi' errors was tweaking my buildozer.spec, the android api versions within, and then re-running\nbuildozer -v android debug\n\nHere's a the bit of the buildozer spec I'd changed:\n# (int) Target Android API, should be as high as possible.\nandroid.api = 28\n\n# (int) Minimum API your APK / AAB will support.\nandroid.minapi = 23\n\n# (int) Android SDK version to use\nandroid.sdk = 23\n\n# (str) Android NDK version to use\nandroid.ndk = 23\n\n# (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi.\nandroid.ndk_api = 23\n\n(Having every version, that isn't the max version, the same seems to help as well.)\nAs an educated guess as to why this works: buildozer is downloading slightly different packages,build instructions and config files, depending on which ADK is specified and there just happens to be the correct tools/PATHS etc for building libffi in the versions I've specified.\nThese versions are not what I'm trying to build for, but I've had to alter my goals, after 2 weeks of wrestling with this software, to:-\nJust building the simplest 'hello world' for any version of android that might work. I figure if I can get something built I'll target more later.\nIn summary:\nNow libffi does compile after adjusting android API versions in the buildozer spec; I have new errors but libffi is now compiled.\nI'm runnning Linux Mate 19.1 on a 64bit machine\n",
"I had the exact same issue with windows (ubuntu subsystem 20.4.1), NDK r25b, android api 31, sdk 21, p4a develop.\nI could solve this error by manualy updating wsl1 to wsl2.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"android",
"buildozer",
"libffi",
"python",
"termux"
] |
stackoverflow_0065916898_android_buildozer_libffi_python_termux.txt
|
Q:
How to sum and count values within a list of dicts?
I have a list of Dicts as follows
[{"Sender":"bob","Receiver":"alice","Amount":50},{"Sender":"bob","Receiver":"alice","Amount":60},{"Sender":"bob","Receiver":"alice","Amount":70},{"Sender":"joe","Receiver":"bob","Amount":50},{"Sender":"joe","Receiver":"bob","Amount":150},{"Sender":"alice","Receiver":"bob","Amount":100},{"Sender":"bob","Receiver":"kyle","Amount":260}]
What i need is to sum up the totals per each unique sender/receiver pair, as well as how many total "transactions" there were per pair, as shown below in my desired output
[{"Sender":"bob","Receiver":"alice","Total":180,"Count":3},{"Sender":"joe","Receiver":"bob","Total":"200","Count":2},{"Sender":"alice","Receiver":"bob","Total":"100","Count":1}, {"Sender":"bob","Receiver":"kyle","Total":260,"Count":1}]
What i'm currently doing to get the "total" is
total = sum(a['Amount'] for a in transactions).
But this simply sums up all of the amounts across all pairs, i need the total for each unique pair of sender/receiver i would't know where to begin getting the "count" numbers, either.
A:
Make a new dictionary where the key is the sender/receiver pair.
Iterate over the list of senders/receivers. If that sender/receiver pair does not exist in the new dict, create it. Otherwise increment the count for that pair by one.
newdict = {}
for row in transactions:
sender = row['Sender']
receiver = row['Receiver']
amount = row['Amount']
key = f'{sender},{receiver}'
if key in newdict:
newdict[key]['Total'] += amount
newdict[key]['Count'] += 1
else:
newdict[key] = {'Count': 1, 'Total': amount}
This solution produces a single dict instead of a list of dicts.
A:
lookup for existing value and keep updating it. e.g.
transactions = [
{"Sender":"bob","Receiver":"alice","Amount":50},
{"Sender":"bob","Receiver":"alice","Amount":60},
{"Sender":"bob","Receiver":"alice","Amount":70},
{"Sender":"joe","Receiver":"bob","Amount":50},
{"Sender":"joe","Receiver":"bob","Amount":150},
{"Sender":"alice","Receiver":"bob","Amount":100},
{"Sender":"bob","Receiver":"kyle","Amount":260}
]
output = []
def find_transaction(sender: str, receiver: str):
for o in output:
if o["Sender"] == sender and o["Receiver"] == receiver:
return o
return None
for tx in transactions:
existing = find_transaction(tx["Sender"], tx["Receiver"])
if existing:
existing["Total"] += tx["Amount"]
existing["Count"] += 1
else:
output.append({
"Sender": tx["Sender"],
"Receiver": tx["Receiver"],
"Total": tx["Amount"],
"Count": 1
})
print(output)
A:
Use a dict to group the amounts, using (sender, receiver) as key;
Rebuild your list of dicts from the dict of amounts.
ll = [{"Sender":"bob","Receiver":"alice","Amount":50},{"Sender":"bob","Receiver":"alice","Amount":60},{"Sender":"bob","Receiver":"alice","Amount":70},{"Sender":"joe","Receiver":"bob","Amount":50},{"Sender":"joe","Receiver":"bob","Amount":150},{"Sender":"alice","Receiver":"bob","Amount":100},{"Sender":"bob","Receiver":"kyle","Amount":260}]
# GROUP BY (SENDER, RECEIVER) AND REDUCE BY KEY
d = {}
for transaction in ll:
k = (transaction['Sender'], transaction['Receiver'])
d[k] = d.get(k, 0) + transaction['Amount']
# print(d)
# d = {('bob', 'alice'): 180, ('joe', 'bob'): 200, ('alice', 'bob'): 100, ('bob', 'kyle'): 260}
# REBUILD LIST OF DICT
new_ll = [{'Sender': s, 'Receiver': r, 'Amount': a} for (s,r),a in d.items()]
print(new_ll)
# [{'Sender': 'bob', 'Receiver': 'alice', 'Amount': 180}, {'Sender': 'joe', 'Receiver': 'bob', 'Amount': 200}, {'Sender': 'alice', 'Receiver': 'bob', 'Amount': 100}, {'Sender': 'bob', 'Receiver': 'kyle', 'Amount': 260}]
These kinds of group-by-key and reduce-by-key operations are extremely common. Using a dict to group by key is the best method. There is also a library function in module more_itertools:
from more_itertools import map_reduce
from operator import itemgetter
ll = ll = [{"Sender":"bob","Receiver":"alice","Amount":50},{"Sender":"bob","Receiver":"alice","Amount":60},{"Sender":"bob","Receiver":"alice","Amount":70},{"Sender":"joe","Receiver":"bob","Amount":50},{"Sender":"joe","Receiver":"bob","Amount":150},{"Sender":"alice","Receiver":"bob","Amount":100},{"Sender":"bob","Receiver":"kyle","Amount":260}]
d = map_reduce(ll, keyfunc=itemgetter('Sender', 'Receiver'), valuefunc=itemgetter('Amount'), reducefunc=sum)
print(d)
# defaultdict(None, {('bob', 'alice'): 180, ('joe', 'bob'): 200, ('alice', 'bob'): 100, ('bob', 'kyle'): 260})
new_ll = [{'Sender': s, 'Receiver': r, 'Amount': a} for (s,r),a in d.items()]
|
How to sum and count values within a list of dicts?
|
I have a list of Dicts as follows
[{"Sender":"bob","Receiver":"alice","Amount":50},{"Sender":"bob","Receiver":"alice","Amount":60},{"Sender":"bob","Receiver":"alice","Amount":70},{"Sender":"joe","Receiver":"bob","Amount":50},{"Sender":"joe","Receiver":"bob","Amount":150},{"Sender":"alice","Receiver":"bob","Amount":100},{"Sender":"bob","Receiver":"kyle","Amount":260}]
What i need is to sum up the totals per each unique sender/receiver pair, as well as how many total "transactions" there were per pair, as shown below in my desired output
[{"Sender":"bob","Receiver":"alice","Total":180,"Count":3},{"Sender":"joe","Receiver":"bob","Total":"200","Count":2},{"Sender":"alice","Receiver":"bob","Total":"100","Count":1}, {"Sender":"bob","Receiver":"kyle","Total":260,"Count":1}]
What i'm currently doing to get the "total" is
total = sum(a['Amount'] for a in transactions).
But this simply sums up all of the amounts across all pairs, i need the total for each unique pair of sender/receiver i would't know where to begin getting the "count" numbers, either.
|
[
"Make a new dictionary where the key is the sender/receiver pair.\nIterate over the list of senders/receivers. If that sender/receiver pair does not exist in the new dict, create it. Otherwise increment the count for that pair by one.\nnewdict = {}\nfor row in transactions:\n sender = row['Sender']\n receiver = row['Receiver']\n amount = row['Amount']\n\n key = f'{sender},{receiver}'\n if key in newdict:\n newdict[key]['Total'] += amount\n newdict[key]['Count'] += 1\n else:\n newdict[key] = {'Count': 1, 'Total': amount}\n\nThis solution produces a single dict instead of a list of dicts.\n",
"lookup for existing value and keep updating it. e.g.\ntransactions = [\n {\"Sender\":\"bob\",\"Receiver\":\"alice\",\"Amount\":50},\n {\"Sender\":\"bob\",\"Receiver\":\"alice\",\"Amount\":60},\n {\"Sender\":\"bob\",\"Receiver\":\"alice\",\"Amount\":70},\n {\"Sender\":\"joe\",\"Receiver\":\"bob\",\"Amount\":50},\n {\"Sender\":\"joe\",\"Receiver\":\"bob\",\"Amount\":150},\n {\"Sender\":\"alice\",\"Receiver\":\"bob\",\"Amount\":100},\n {\"Sender\":\"bob\",\"Receiver\":\"kyle\",\"Amount\":260}\n]\n\noutput = []\n\ndef find_transaction(sender: str, receiver: str):\n for o in output:\n if o[\"Sender\"] == sender and o[\"Receiver\"] == receiver:\n return o\n return None\n\nfor tx in transactions:\n existing = find_transaction(tx[\"Sender\"], tx[\"Receiver\"])\n \n if existing:\n existing[\"Total\"] += tx[\"Amount\"]\n existing[\"Count\"] += 1\n else:\n output.append({\n \"Sender\": tx[\"Sender\"],\n \"Receiver\": tx[\"Receiver\"],\n \"Total\": tx[\"Amount\"],\n \"Count\": 1\n })\n\nprint(output)\n\n",
"\nUse a dict to group the amounts, using (sender, receiver) as key;\nRebuild your list of dicts from the dict of amounts.\n\nll = [{\"Sender\":\"bob\",\"Receiver\":\"alice\",\"Amount\":50},{\"Sender\":\"bob\",\"Receiver\":\"alice\",\"Amount\":60},{\"Sender\":\"bob\",\"Receiver\":\"alice\",\"Amount\":70},{\"Sender\":\"joe\",\"Receiver\":\"bob\",\"Amount\":50},{\"Sender\":\"joe\",\"Receiver\":\"bob\",\"Amount\":150},{\"Sender\":\"alice\",\"Receiver\":\"bob\",\"Amount\":100},{\"Sender\":\"bob\",\"Receiver\":\"kyle\",\"Amount\":260}]\n\n# GROUP BY (SENDER, RECEIVER) AND REDUCE BY KEY\n\nd = {}\nfor transaction in ll:\n k = (transaction['Sender'], transaction['Receiver'])\n d[k] = d.get(k, 0) + transaction['Amount']\n\n# print(d)\n# d = {('bob', 'alice'): 180, ('joe', 'bob'): 200, ('alice', 'bob'): 100, ('bob', 'kyle'): 260}\n\n# REBUILD LIST OF DICT\n\nnew_ll = [{'Sender': s, 'Receiver': r, 'Amount': a} for (s,r),a in d.items()]\n\nprint(new_ll)\n# [{'Sender': 'bob', 'Receiver': 'alice', 'Amount': 180}, {'Sender': 'joe', 'Receiver': 'bob', 'Amount': 200}, {'Sender': 'alice', 'Receiver': 'bob', 'Amount': 100}, {'Sender': 'bob', 'Receiver': 'kyle', 'Amount': 260}]\n\nThese kinds of group-by-key and reduce-by-key operations are extremely common. Using a dict to group by key is the best method. 
There is also a library function in module more_itertools:\nfrom more_itertools import map_reduce\nfrom operator import itemgetter\n\nll = ll = [{\"Sender\":\"bob\",\"Receiver\":\"alice\",\"Amount\":50},{\"Sender\":\"bob\",\"Receiver\":\"alice\",\"Amount\":60},{\"Sender\":\"bob\",\"Receiver\":\"alice\",\"Amount\":70},{\"Sender\":\"joe\",\"Receiver\":\"bob\",\"Amount\":50},{\"Sender\":\"joe\",\"Receiver\":\"bob\",\"Amount\":150},{\"Sender\":\"alice\",\"Receiver\":\"bob\",\"Amount\":100},{\"Sender\":\"bob\",\"Receiver\":\"kyle\",\"Amount\":260}]\n\nd = map_reduce(ll, keyfunc=itemgetter('Sender', 'Receiver'), valuefunc=itemgetter('Amount'), reducefunc=sum)\n\nprint(d)\n# defaultdict(None, {('bob', 'alice'): 180, ('joe', 'bob'): 200, ('alice', 'bob'): 100, ('bob', 'kyle'): 260})\n\nnew_ll = [{'Sender': s, 'Receiver': r, 'Amount': a} for (s,r),a in d.items()]\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"dictionary",
"list",
"python",
"sorting"
] |
stackoverflow_0074469414_dictionary_list_python_sorting.txt
|
Q:
Connect to MySQL database using python
So I am having a super hard time connecting to a local database using the python mysql.connector module.
So I am trying to connect using the highlighted connection. I use the password abcdefghijkl to log into the SQL environment. I am trying to connect to a database named flight_school.
My python script looks like so.
import mysql.connector
mydb = mysql.connector.connect("localhost", "root", "abcdefghijkl", "flight_school")
print(mydb.is_connected())
This above code in the arguments in the following order i.e.
hostname = localhost,
user = 'root',
password = 'abcdefghijkl', and
database name = 'flight_school'.
It's just not working. I get the following error.
I would really appreciate some advice, please.
A:
Please read always the official documentation
Your cooenction stirng has to have this form(if you do it this way=
mydb = mysql.connector.connect(
host="localhost",
user="root",
passwd="testpaaword",
database="testdb"
)
A:
Check out SQL-Alchemy module, works wonders from my experience.
A:
Please read always the official documentation: https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html
import mysql.connector
from mysql.connector import errorcode, MySQLConnection
try:
db_connection = MySQLConnection(user='root', password='', port='3306', database='your_database')
print("Database connection made!")
except mysql.connector.Error as error:
if error.errno == errorcode.ER_BAD_DB_ERROR:
print("Database doesn't exist")
elif error.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print("User name or password is wrong")
else:
print(error)
else:
db_connection.close()
cursor = db_connection.cursor()
sql = ("commands")
cursor.execute(sql)
|
Connect to MySQL database using python
|
So I am having a super hard time connecting to a local database using the python mysql.connector module.
So I am trying to connect using the highlighted connection. I use the password abcdefghijkl to log into the SQL environment. I am trying to connect to a database named flight_school.
My python script looks like so.
import mysql.connector
mydb = mysql.connector.connect("localhost", "root", "abcdefghijkl", "flight_school")
print(mydb.is_connected())
This above code in the arguments in the following order i.e.
hostname = localhost,
user = 'root',
password = 'abcdefghijkl', and
database name = 'flight_school'.
It's just not working. I get the following error.
I would really appreciate some advice, please.
|
[
"Please read always the official documentation\nYour cooenction stirng has to have this form(if you do it this way=\n mydb = mysql.connector.connect(\n host=\"localhost\",\n user=\"root\",\n passwd=\"testpaaword\",\n database=\"testdb\"\n )\n\n",
"Check out SQL-Alchemy module, works wonders from my experience.\n",
"Please read always the official documentation: https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html\nimport mysql.connector\nfrom mysql.connector import errorcode, MySQLConnection\n\ntry:\n db_connection = MySQLConnection(user='root', password='', port='3306', database='your_database')\n print(\"Database connection made!\")\nexcept mysql.connector.Error as error:\n if error.errno == errorcode.ER_BAD_DB_ERROR:\n print(\"Database doesn't exist\")\n elif error.errno == errorcode.ER_ACCESS_DENIED_ERROR:\n print(\"User name or password is wrong\")\n else:\n print(error)\nelse:\n db_connection.close()\n\ncursor = db_connection.cursor()\n\nsql = (\"commands\")\ncursor.execute(sql)\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"database",
"mysql",
"python"
] |
stackoverflow_0061513711_database_mysql_python.txt
|
Q:
python concurrent.futures.ProcessPoolExecutor crashing with full RAM
Python concurrent.futures.ProcessPoolExecutor crashing with full RAM
Program description
Hi, I've got a computationally heavy function which I want to run in parallel. The function is a test that accepts as inputs:
a DataFrame to test on
parameters based on which the calculations will be ran.
The return value is a short list of calculation results.
I want to run the same function in a for loop with different parameters and the same input DataFrame, basically run a brute-force to find optimal parameters for my problem.
The code I've written
I currently am running the code concurrently with ProcessPoolExecutor from the module concurrent.futures.
import concurrent.futures
from itertools import repeat
import pandas as pd
from my_tests import func
parameters = [
(arg1, arg2, arg3),
(arg1, arg2, arg3),
...
]
large_df = pd.read_csv(csv_path)
with concurrent.futures.ProcessPoolExecutor() as executor:
for future in executor.map(func, repeat(large_df.copy()), parameters):
test_result = future.result()
...
The problem
The problem I face is that I need to run a large amount of iterations, but my program crashes almost instantly.
In on order for it not to crash, I need to limit it to max 4 workers, which is 1/4 of my CPU resources.
with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
...
I figured out my program crashes due to a full RAM (16 GB). What I found weird is that when I was running it on more workers, it was gradually eating more and more RAM, which it never released, until it crashed.
Instead of passing a copy of the DataFrame, I tried to pass the file path, but apart of slowing down my program, it didn't change anything.
Do you have any idea of why that problem occurs and how to solve it?
A:
See my comment on what map actually returns.
This answer is relevant according to how large your parameters list is, i.e. how many total tasks are being placed on the multiprocessing pool's task queue:
You are currently creating and passing a copy of your dataframe (with large_df.copy()) every time you are submitting a new task (one task for each element of parameters. One thing you can do is to initialize your pool processes once with a single copy per pool process that will be used by every task submitted and executed by the pool process. The assumption is that the dataframe itself is not modified by my_tests.func. If it is modified and you need a copy of the original large_df for each new task, the function worker (see below) can make the copy. In this case you would need 2 * N copies (instead of just N copies) to exist simultaneously where N is the number of processes in the pool. This will save you memory if the length of parameters is greater than that since in your code a copy of the dataframe will exist either on the task queue or in a pool process's address space.
If you are running under a platform such as Linux that uses the fork method to create new processes, then each child process will inherit a copy automatcally as a global variable:
import concurrent.futures
import pandas as pd
from my_tests import func
parameters = [
(arg1, arg2, arg3),
(arg1, arg2, arg3),
...
]
large_df = pd.read_csv(csv_path) # will be inherited
def worker(parameter):
return func(large_df, parameter)
"""
# or:
return func(large_df.copy(), parameter)
"""
with concurrent.futures.ProcessPoolExecutor() as executor:
for result in executor.map(worker, parameters):
...
my_tests.func is expecting as its first argument a dataframe, but with the above change the dataframe is no longer being passed; the dataframe is now accessed as a global variable. So without modifying func, we need am adapter function, worker, that will pass to func what it is expecting. Of course, if you are able to modify func, then you can do without the adapter.
If you were running on a platform such as Windows that uses the spawn method to create new processes, then:
import concurrent.futures
import pandas as pd
from my_tests import func
def init_pool_processes(df):
global large_df
large_df = df
def worker(parameter):
return func(large_df, parameter)
"""
# or:
return func(large_df.copy(), parameter)
"""
if __name__ == '__main__':
parameters = [
(arg1, arg2, arg3),
(arg1, arg2, arg3),
...
]
large_df = pd.read_csv(csv_path) # will be inherited
with concurrent.futures.ProcessPoolExecutor(initializer=init_pool_processes, initargs=(large_df,)) as executor:
for result in executor.map(worker, parameters):
...
A:
Combining the suggestions of Aaron and Booboo, the solution to my problem was indeed copying a large DataFrame every single time I called the function, which made my computer run out of memory. The quick solution I found was to delete the copy of the DataFrame at the end of func:
def func(large_df_copy, parameters):
...
del large_df_copy
I will look into modifying Booboo's code, as the DataFrame is indeed modified in the function in order to run calculations.
Thank you very much for helping me out, I appreciate it a lot!
|
python concurrent.futures.ProcessPoolExecutor crashing with full RAM
|
Python concurrent.futures.ProcessPoolExecutor crashing with full RAM
Program description
Hi, I've got a computationally heavy function which I want to run in parallel. The function is a test that accepts as inputs:
a DataFrame to test on
parameters based on which the calculations will be ran.
The return value is a short list of calculation results.
I want to run the same function in a for loop with different parameters and the same input DataFrame, basically run a brute-force to find optimal parameters for my problem.
The code I've written
I currently am running the code concurrently with ProcessPoolExecutor from the module concurrent.futures.
import concurrent.futures
from itertools import repeat
import pandas as pd
from my_tests import func
parameters = [
(arg1, arg2, arg3),
(arg1, arg2, arg3),
...
]
large_df = pd.read_csv(csv_path)
with concurrent.futures.ProcessPoolExecutor() as executor:
for future in executor.map(func, repeat(large_df.copy()), parameters):
test_result = future.result()
...
The problem
The problem I face is that I need to run a large amount of iterations, but my program crashes almost instantly.
In on order for it not to crash, I need to limit it to max 4 workers, which is 1/4 of my CPU resources.
with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
...
I figured out my program crashes due to a full RAM (16 GB). What I found weird is that when I was running it on more workers, it was gradually eating more and more RAM, which it never released, until it crashed.
Instead of passing a copy of the DataFrame, I tried to pass the file path, but apart of slowing down my program, it didn't change anything.
Do you have any idea of why that problem occurs and how to solve it?
|
[
"See my comment on what map actually returns.\nThis answer is relevant according to how large your parameters list is, i.e. how many total tasks are being placed on the multiprocessing pool's task queue:\nYou are currently creating and passing a copy of your dataframe (with large_df.copy()) every time you are submitting a new task (one task for each element of parameters. One thing you can do is to initialize your pool processes once with a single copy per pool process that will be used by every task submitted and executed by the pool process. The assumption is that the dataframe itself is not modified by my_tests.func. If it is modified and you need a copy of the original large_df for each new task, the function worker (see below) can make the copy. In this case you would need 2 * N copies (instead of just N copies) to exist simultaneously where N is the number of processes in the pool. This will save you memory if the length of parameters is greater than that since in your code a copy of the dataframe will exist either on the task queue or in a pool process's address space.\nIf you are running under a platform such as Linux that uses the fork method to create new processes, then each child process will inherit a copy automatcally as a global variable:\nimport concurrent.futures\nimport pandas as pd\n\nfrom my_tests import func\n\n\nparameters = [\n (arg1, arg2, arg3),\n (arg1, arg2, arg3),\n ...\n]\n\nlarge_df = pd.read_csv(csv_path) # will be inherited\n\ndef worker(parameter):\n return func(large_df, parameter)\n \"\"\"\n # or:\n return func(large_df.copy(), parameter)\n \"\"\"\n\nwith concurrent.futures.ProcessPoolExecutor() as executor:\n for result in executor.map(worker, parameters):\n ...\n\nmy_tests.func is expecting as its first argument a dataframe, but with the above change the dataframe is no longer being passed; the dataframe is now accessed as a global variable. 
So without modifying func, we need am adapter function, worker, that will pass to func what it is expecting. Of course, if you are able to modify func, then you can do without the adapter.\nIf you were running on a platform such as Windows that uses the spawn method to create new processes, then:\nimport concurrent.futures\nimport pandas as pd\n\nfrom my_tests import func\n\ndef init_pool_processes(df):\n global large_df\n large_df = df\n\n\ndef worker(parameter):\n return func(large_df, parameter)\n \"\"\"\n # or:\n return func(large_df.copy(), parameter)\n \"\"\"\n\nif __name__ == '__main__':\n \n parameters = [\n (arg1, arg2, arg3),\n (arg1, arg2, arg3),\n ...\n ]\n \n large_df = pd.read_csv(csv_path) # will be inherited\n \n with concurrent.futures.ProcessPoolExecutor(initializer=init_pool_processes, initargs=(large_df,)) as executor:\n for result in executor.map(worker, parameters):\n ...\n\n",
"Combining the suggestions of Aaron and Booboo, the solution to my problem was indeed copying a large DataFrame every single time I called the function, which made my computer run out of memory. The quick solution I found was to delete the copy of the DataFrame at the end of func:\ndef func(large_df_copy, parameters):\n ...\n del large_df_copy\n\nI will look into modifying Booboo's code, as the DataFrame is indeed modified in the function in order to run calculations.\nThank you very much for helping me out, I appreciate it a lot!\n"
] |
[
1,
0
] |
[] |
[] |
[
"concurrent.futures",
"multiprocessing",
"process_pool",
"python"
] |
stackoverflow_0074433987_concurrent.futures_multiprocessing_process_pool_python.txt
|
Q:
Splitting a string based on multiple delimeters using split() function in python by ignoring certain special characters present in the string
I'm not getting the desired result when splitting a string on multiple delimiters under specific conditions.
I tried executing below code:
import re
text = r'ced"|"ms|n"|4|98'
finallist = re.split('\"\|\"|\"\||\|', text)
Here I'm trying to split the string on 3 delimiters, joined with OR (|). The first delimiter is "|", another is "|, and the last is |.
finallist looks like this:
finallist=['ced', 'ms','n', '4', '98']
However, I don't want the function to split at the ms|n present in the string. Since that pipe symbol sits inside letters enclosed in double quotes (in this case "ms|n"), the function should not match it.
And I'm expecting the finallist to look like this:
finallist=['ced', 'ms|n', '4', '98']
Is there any way I can achieve this by changing the logic in the split function? Please let me know.
A:
You can use
"?\|(?!(?:(?<=[A-Za-z]\|)|(?<=[A-Za-z]\\\|))(?=[a-zA-Z]))"?
See the regex demo. Details:
"? - an optional " char
\| - a | char
(?!(?:(?<=[A-Za-z]\|)|(?<=[A-Za-z]\\\|))(?=[a-zA-Z])) - a negative lookahead that fails the match if there is an ASCII letter immediately after the | char AND either an ASCII letter before the | char or an ASCII letter + \ right before the | char
"? - an optional " char
See the Python demo:
import re
text = r'ced"|"ms|n"|4|98'
pattern = r'"?\|(?!(?:(?<=[A-Za-z]\|)|(?<=[A-Za-z]\\\|))(?=[a-zA-Z]))"?'
print( re.split(pattern, text) )
# => ['ced', 'ms|n', '4', '98']
text = r'ced"|"ms\|n"|4|98'
print( re.split(pattern, text) )
# => ['ced', 'ms\\|n', '4', '98']
|
Splitting a string based on multiple delimeters using split() function in python by ignoring certain special characters present in the string
|
I'm not getting the desired result when splitting a string on multiple delimiters under specific conditions.
I tried executing below code:
import re
text = r'ced"|"ms|n"|4|98'
finallist = re.split('\"\|\"|\"\||\|', text)
Here I'm trying to split the string on 3 delimiters, joined with OR (|). The first delimiter is "|", another is "|, and the last is |.
finallist looks like this:
finallist=['ced', 'ms','n', '4', '98']
However, I don't want the function to split at the ms|n present in the string. Since that pipe symbol sits inside letters enclosed in double quotes (in this case "ms|n"), the function should not match it.
And I'm expecting the finallist to look like this:
finallist=['ced', 'ms|n', '4', '98']
Is there any way I can achieve this by changing the logic in the split function? Please let me know.
|
[
"You can use\n\"?\\|(?!(?:(?<=[A-Za-z]\\|)|(?<=[A-Za-z]\\\\\\|))(?=[a-zA-Z]))\"?\n\nSee the regex demo. Details:\n\n\"? - an optional \" char\n\\| - a | char\n(?!(?:(?<=[A-Za-z]\\|)|(?<=[A-Za-z]\\\\\\|))(?=[a-zA-Z])) - a negative lookahead that fails the match if there is an ASCII letter immediately after the | char AND either an ASCII letter before the | char or an ASCII letter + \\ right before the | char\n\"? - an optional \" char\n\nSee the Python demo:\nimport re\ntext = r'ced\"|\"ms|n\"|4|98'\npattern = r'\"?\\|(?!(?:(?<=[A-Za-z]\\|)|(?<=[A-Za-z]\\\\\\|))(?=[a-zA-Z]))\"?'\nprint( re.split(pattern, text) )\n# => ['ced', 'ms|n', '4', '98']\ntext = r'ced\"|\"ms\\|n\"|4|98'\nprint( re.split(pattern, text) )\n# => ['ced', 'ms\\\\|n', '4', '98']\n\n"
] |
[
0
] |
[] |
[] |
[
"list",
"python",
"regex",
"split",
"string"
] |
stackoverflow_0074472442_list_python_regex_split_string.txt
|
Q:
Tensorflow loss is diverging in my RNN
I'm trying to get my hand wet with Tensorflow by solving this challenge: https://www.kaggle.com/c/integer-sequence-learning.
My work is based on these blog posts:
https://danijar.com/variable-sequence-lengths-in-tensorflow/
https://gist.github.com/evanthebouncy/8e16148687e807a46e3f
A complete working example - with my data - can be found here: https://github.com/bottiger/Integer-Sequence-Learning Running the example will print out a lot of debug information. Run execute rnn-lstm-my.py . (Requires tensorflow and pandas)
The approach is pretty straightforward. I load all of my training sequences, store their lengths in a vector, and store the length of the longest one in a variable I call "max_length".
In my training data I strip out the last element in all the sequences and store it in a vector called "train_solutions"
Then I store all the sequences, padded with zeros, in a matrix with the shape: [n_seq, max_length].
Since I want to predict the next number in a sequence my output should be a single number, and my input should be a sequence.
I use an RNN (tf.nn.rnn) with a BasicLSTMCell as the cell, with 24 hidden units. The output is fed into a basic linear model (xW+B) which should produce my prediction.
My cost function is the squared error between the target and the predicted number; I calculate the cost like this:
cost = tf.nn.l2_loss(tf_result - prediction)
The basic dimensions seem to be correct because the code actually runs. However, after only one or two iterations some NaNs start to occur, which quickly spread until everything becomes NaN.
Here is the important part of the code where I define and run the graph. However, I have omitted the loading/preparation of the data. Please look at the git repo for details about that - but I'm pretty sure that part is correct.
cell = tf.nn.rnn_cell.BasicLSTMCell(num_hidden, state_is_tuple=True)
num_inputs = tf.placeholder(tf.int32, name='NumInputs')
seq_length = tf.placeholder(tf.int32, shape=[batch_size], name='NumInputs')
# Define the input as a list (num elements = batch_size) of sequences
inputs = [tf.placeholder(tf.float32,shape=[1, max_length], name='InputData') for _ in range(batch_size)]
# Result should be a 1 x batch_size vector
result = tf.placeholder(tf.float32, shape=[batch_size, 1], name='OutputData')
tf_seq_length = tf.Print(seq_length, [seq_length, seq_length.get_shape()], 'SequenceLength: ')
outputs, states = tf.nn.rnn(cell, inputs, dtype=tf.float32)
# Print the output. The NaN first shows up here
outputs2 = tf.Print(outputs, [outputs], 'Last: ', name="Last", summarize=800)
# Define the model
tf_weight = tf.Variable(tf.truncated_normal([batch_size, num_hidden, frame_size]), name='Weight')
tf_bias = tf.Variable(tf.constant(0.1, shape=[batch_size]), name='Bias')
# Debug the model parameters
weight = tf.Print(tf_weight, [tf_weight, tf_weight.get_shape()], "Weight: ")
bias = tf.Print(tf_bias, [tf_bias, tf_bias.get_shape()], "bias: ")
# More debug info
print('bias: ', bias.get_shape())
print('weight: ', weight.get_shape())
print('targets ', result.get_shape())
print('RNN input ', type(inputs))
print('RNN input len()', len(inputs))
print('RNN input[0] ', inputs[0].get_shape())
# Calculate the prediction
tf_prediction = tf.batch_matmul(outputs2, weight) + bias
prediction = tf.Print(tf_prediction, [tf_prediction, tf_prediction.get_shape()], 'prediction: ')
tf_result = result
# Calculate the cost
cost = tf.nn.l2_loss(tf_result - prediction)
#optimizer = tf.train.AdamOptimizer()
learning_rate = 0.05
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
minimize = optimizer.minimize(cost)
mistakes = tf.not_equal(tf.argmax(result, 1), tf.argmax(prediction, 1))
error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
no_of_batches = int(len(train_input)) / batch_size
epoch = 1
val_dict = get_input_dict(val_input, val_output, train_length, inputs, batch_size)
for i in range(epoch):
ptr = 0
for j in range(no_of_batches):
print('eval w: ', weight.eval(session=sess))
# inputs batch
t_i = train_input[ptr:ptr+batch_size]
# output batch
t_o = train_output[ptr:ptr+batch_size]
# sequence lengths
t_l = train_length[ptr:ptr+batch_size]
sess.run(minimize,feed_dict=get_input_dict(t_i, t_o, t_l, inputs, batch_size))
ptr += batch_size
print("result: ", tf_result)
print("result len: ", tf_result.get_shape())
print("prediction: ", prediction)
print("prediction len: ", prediction.get_shape())
c_val = sess.run(error, feed_dict = val_dict )
print "Validation cost: {}, on Epoch {}".format(c_val,i)
print "Epoch ",str(i)
print('test input: ', type(test_input))
print('test output: ', type(test_output))
incorrect = sess.run(error,get_input_dict(test_input, test_output, test_length, inputs, batch_size))
sess.close()
And here are (the first lines of) the output it produces. You can see that everything becomes NaN: http://pastebin.com/TnFFNFrr (I could not post it all here due to the body limit)
The first time I see the NaN is here:
I tensorflow/core/kernels/logging_ops.cc:79] Last: [0 0.76159418 0 0 0
0 0 -0.76159418 0 -0.76159418 0 0 0 0.76159418 0.76159418 0
-0.76159418 0.76159418 0 0 0 0.76159418 0 0 0 nan nan nan nan 0 0 nan nan 1 0 nan 0 0.76159418 nan nan nan 1 0 nan 0 0.76159418 nan nan -nan
-nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan
-nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan nan nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan
-nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan nan nan nan nan nan -nan -nan -nan -nan -nan -nan -nan -nan -nan
-nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan]
I hope I made my problem clear. Thanks in advance
A:
RNNs suffer from an exploding gradient, so you should clip the gradients for the RNN parameters. Look at this post:
How to effectively apply gradient clipping in tensor flow?
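The idea behind clipping is easy to see in plain Python. This sketch uses scalar gradients for simplicity; TensorFlow's `tf.clip_by_global_norm` applies the same joint rescaling to whole gradient tensors.

```python
import math

def clip_by_global_norm(grads, clip_norm):
    # Rescale all gradients together so their combined L2 norm is at
    # most clip_norm; gradients already below the limit pass through.
    global_norm = math.sqrt(sum(g * g for g in grads))
    if global_norm <= clip_norm or global_norm == 0.0:
        return list(grads)
    scale = clip_norm / global_norm
    return [g * scale for g in grads]

clipped = clip_by_global_norm([3.0, 4.0], clip_norm=1.0)
# The global norm of [3, 4] is 5, so each gradient is scaled by 1/5.
```

Clipping by the joint norm (rather than per-gradient) preserves the direction of the overall update while bounding its size.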
A:
use AdamOptimizer instead
optimizer = tf.train.AdamOptimizer()
A:
Try using an LSTM, which is a more robust and better-optimized variant of the RNN, or use ReLU as the activation function. The plain RNN architecture has a known disadvantage on larger networks: it produces flat or wildly deflecting losses that prevent the model from training properly. The problem stems from saturating activation functions such as sigmoid or tanh, and it is called vanishing gradient when the losses stay constant, or exploding gradient when they deflect hugely.
Given below is the code for an LSTM.
CODE SNIPPET
|
Tensorflow loss is diverging in my RNN
|
I'm trying to get my hand wet with Tensorflow by solving this challenge: https://www.kaggle.com/c/integer-sequence-learning.
My work is based on these blog posts:
https://danijar.com/variable-sequence-lengths-in-tensorflow/
https://gist.github.com/evanthebouncy/8e16148687e807a46e3f
A complete working example - with my data - can be found here: https://github.com/bottiger/Integer-Sequence-Learning Running the example will print out a lot of debug information. Run execute rnn-lstm-my.py . (Requires tensorflow and pandas)
The approach is pretty straightforward. I load all of my training sequences, store their lengths in a vector, and store the length of the longest one in a variable I call "max_length".
In my training data I strip out the last element in all the sequences and store it in a vector called "train_solutions"
Then I store all the sequences, padded with zeros, in a matrix with the shape: [n_seq, max_length].
Since I want to predict the next number in a sequence my output should be a single number, and my input should be a sequence.
I use an RNN (tf.nn.rnn) with a BasicLSTMCell as the cell, with 24 hidden units. The output is fed into a basic linear model (xW+B) which should produce my prediction.
My cost function is the squared error between the target and the predicted number; I calculate the cost like this:
cost = tf.nn.l2_loss(tf_result - prediction)
The basic dimensions seem to be correct because the code actually runs. However, after only one or two iterations some NaNs start to occur, which quickly spread until everything becomes NaN.
Here is the important part of the code where I define and run the graph. However, I have omitted the loading/preparation of the data. Please look at the git repo for details about that - but I'm pretty sure that part is correct.
cell = tf.nn.rnn_cell.BasicLSTMCell(num_hidden, state_is_tuple=True)
num_inputs = tf.placeholder(tf.int32, name='NumInputs')
seq_length = tf.placeholder(tf.int32, shape=[batch_size], name='NumInputs')
# Define the input as a list (num elements = batch_size) of sequences
inputs = [tf.placeholder(tf.float32,shape=[1, max_length], name='InputData') for _ in range(batch_size)]
# Result should be a 1 x batch_size vector
result = tf.placeholder(tf.float32, shape=[batch_size, 1], name='OutputData')
tf_seq_length = tf.Print(seq_length, [seq_length, seq_length.get_shape()], 'SequenceLength: ')
outputs, states = tf.nn.rnn(cell, inputs, dtype=tf.float32)
# Print the output. The NaN first shows up here
outputs2 = tf.Print(outputs, [outputs], 'Last: ', name="Last", summarize=800)
# Define the model
tf_weight = tf.Variable(tf.truncated_normal([batch_size, num_hidden, frame_size]), name='Weight')
tf_bias = tf.Variable(tf.constant(0.1, shape=[batch_size]), name='Bias')
# Debug the model parameters
weight = tf.Print(tf_weight, [tf_weight, tf_weight.get_shape()], "Weight: ")
bias = tf.Print(tf_bias, [tf_bias, tf_bias.get_shape()], "bias: ")
# More debug info
print('bias: ', bias.get_shape())
print('weight: ', weight.get_shape())
print('targets ', result.get_shape())
print('RNN input ', type(inputs))
print('RNN input len()', len(inputs))
print('RNN input[0] ', inputs[0].get_shape())
# Calculate the prediction
tf_prediction = tf.batch_matmul(outputs2, weight) + bias
prediction = tf.Print(tf_prediction, [tf_prediction, tf_prediction.get_shape()], 'prediction: ')
tf_result = result
# Calculate the cost
cost = tf.nn.l2_loss(tf_result - prediction)
#optimizer = tf.train.AdamOptimizer()
learning_rate = 0.05
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
minimize = optimizer.minimize(cost)
mistakes = tf.not_equal(tf.argmax(result, 1), tf.argmax(prediction, 1))
error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
no_of_batches = int(len(train_input)) / batch_size
epoch = 1
val_dict = get_input_dict(val_input, val_output, train_length, inputs, batch_size)
for i in range(epoch):
ptr = 0
for j in range(no_of_batches):
print('eval w: ', weight.eval(session=sess))
# inputs batch
t_i = train_input[ptr:ptr+batch_size]
# output batch
t_o = train_output[ptr:ptr+batch_size]
# sequence lengths
t_l = train_length[ptr:ptr+batch_size]
sess.run(minimize,feed_dict=get_input_dict(t_i, t_o, t_l, inputs, batch_size))
ptr += batch_size
print("result: ", tf_result)
print("result len: ", tf_result.get_shape())
print("prediction: ", prediction)
print("prediction len: ", prediction.get_shape())
c_val = sess.run(error, feed_dict = val_dict )
print "Validation cost: {}, on Epoch {}".format(c_val,i)
print "Epoch ",str(i)
print('test input: ', type(test_input))
print('test output: ', type(test_output))
incorrect = sess.run(error,get_input_dict(test_input, test_output, test_length, inputs, batch_size))
sess.close()
And here are (the first lines of) the output it produces. You can see that everything becomes NaN: http://pastebin.com/TnFFNFrr (I could not post it all here due to the body limit)
The first time I see the NaN is here:
I tensorflow/core/kernels/logging_ops.cc:79] Last: [0 0.76159418 0 0 0
0 0 -0.76159418 0 -0.76159418 0 0 0 0.76159418 0.76159418 0
-0.76159418 0.76159418 0 0 0 0.76159418 0 0 0 nan nan nan nan 0 0 nan nan 1 0 nan 0 0.76159418 nan nan nan 1 0 nan 0 0.76159418 nan nan -nan
-nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan
-nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan nan nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan
-nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan nan nan nan nan nan -nan -nan -nan -nan -nan -nan -nan -nan -nan
-nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan -nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan]
I hope I made my problem clear. Thanks in advance
|
[
"RNNs suffer from an exploding gradient, so you should clip the gradients for the RNN parameters. Look at this post:\nHow to effectively apply gradient clipping in tensor flow?\n",
"use AdamOptimizer instead\noptimizer = tf.train.AdamOptimizer()\n\n",
"Try using LSTM which is more optimized and better version of RNN or use Relu as activation function. Our Normal Rnn architecture has some disadvantages when implemented on a large network it gives uneven or fixed losses which causes the model to not train properly, this problem in RNN occurs due to activation function such as sigmoid or tanh and the problem is called vanishing gradient if losses are constant or exploding gradient if they show hue deflection\nGive below is the code for LSTM.\nCODE SNIPPET\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"deep_learning",
"neural_network",
"python",
"sequence",
"tensorflow"
] |
stackoverflow_0038762104_deep_learning_neural_network_python_sequence_tensorflow.txt
|
Q:
Python makemigrations does not work right
I use Django framework
This is my models.py
from django.db import models
# Create your models here.
class Destination(models.Model):
name: models.CharField(max_length=100)
img: models.ImageField(upload_to='pics')
desc: models.TextField
price: models.IntegerField
offer: models.BooleanField(default=False)
and here is my migrations folder-0001_initial.py:
# Generated by Django 4.1.3 on 2022-11-17 10:17
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Destination',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
],
),
]
It doesn't migrate my model. I deleted 0001_initial.py and the __pycache__ folder and migrated again, but it behaved the same.
How can I make migrations that include my model fields?
A:
You have added the model fields in an incorrect way: a colon creates a type annotation rather than an assignment, so Django never sees the fields. You can't add them like this.
change this:
class Destination(models.Model):
name: models.CharField(max_length=100)
img: models.ImageField(upload_to='pics')
desc: models.TextField
price: models.IntegerField
offer: models.BooleanField(default=False)
To this:
class Destination(models.Model):
name = models.CharField(max_length=100)
img = models.ImageField(upload_to='pics')
desc = models.TextField()
price = models.IntegerField()
offer = models.BooleanField(default=False)
After changing the above code, try to migrate manually using the commands below:
python manage.py makemigrations appname
python manage.py sqlmigrate appname 0001  # this number is generated by makemigrations; it can be 0001, 0002, or higher
python manage.py migrate
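The underlying reason is plain Python, not Django: `name: models.CharField(...)` is only a type annotation, so no class attribute is created and the model metaclass never registers a field. A minimal illustration without Django:

```python
class Demo:
    name: str        # annotation only: recorded in __annotations__, no attribute
    offer = False    # assignment: a real class attribute

assert "offer" in vars(Demo)            # the assignment created an attribute
assert "name" not in vars(Demo)         # the annotation did not
assert "name" in Demo.__annotations__   # it was only recorded here
```

Django's model metaclass collects fields from class attributes, which is why only the `=` form produces migration operations.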
|
Python makemigrations does not work right
|
I use Django framework
This is my models.py
from django.db import models
# Create your models here.
class Destination(models.Model):
name: models.CharField(max_length=100)
img: models.ImageField(upload_to='pics')
desc: models.TextField
price: models.IntegerField
offer: models.BooleanField(default=False)
and here is my migrations folder-0001_initial.py:
# Generated by Django 4.1.3 on 2022-11-17 10:17
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Destination',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
],
),
]
it don't migrate my model, i deleted 0001_initial.py and pycache and migrate again but it worked the same
How can i make migrations with my models ?
|
[
"You have added model fields in incorrect way. You can't add like this.\nchange this:\nclass Destination(models.Model):\n name: models.CharField(max_length=100)\n img: models.ImageField(upload_to='pics')\n desc: models.TextField\n price: models.IntegerField\n offer: models.BooleanField(default=False)\n\nTo this:\nclass Destination(models.Model):\n name = models.CharField(max_length=100)\n img = models.ImageField(upload_to='pics')\n desc = models.TextField()\n price = models.IntegerField()\n offer = models.BooleanField(default=False)\n\nAfter changing above code. Try to migrate manually using below commands:\npython manage.py makemigrations appname\n\npython manage.py sqlmigrate appname 0001 #This value will generate after makemigrations. it can be 0001 or 0002 or more\n\npython manage.py migrate\n\n"
] |
[
1
] |
[] |
[] |
[
"django",
"makemigrations",
"migrate",
"python"
] |
stackoverflow_0074473808_django_makemigrations_migrate_python.txt
|
Q:
AttributeError: 'UnaryOp' object has no attribute 'evaluate' when using eval function in Python
for test_ind, case_data in test_df.iterrows():
case_data = case_data.to_frame().T
rule = "Ask_before>-0.4843681 & 0.5255821<=BidVol_before<=0.07581073 & Volume>0.1107559"
print(case_data, "case_data")
if case_data.eval(rule).all() == True:
print("TRUE")
Here, when the rule contains negative values, this error appears. I need to check whether this rule applies to the instances in the DataFrame. Ask_before and BidVol_before are columns of the dataframe test_df. Could you please help me fix this issue?
A:
It isn't a problem in your code, it's a bug in pandas https://github.com/pandas-dev/pandas/issues/16363
It is fixed by now.
A:
Seems like it's still open as of January 2022:
https://github.com/pandas-dev/pandas/issues/16363
In my experience, resolving the negative index values solved it.
A:
Change the "unary" to an simple expression works for me, e.g. -5 -> (0 - 5)
|
AttributeError: 'UnaryOp' object has no attribute 'evaluate' when using eval function in Python
|
for test_ind, case_data in test_df.iterrows():
case_data = case_data.to_frame().T
rule = "Ask_before>-0.4843681 & 0.5255821<=BidVol_before<=0.07581073 & Volume>0.1107559"
print(case_data, "case_data")
if case_data.eval(rule).all() == True:
print("TRUE")
Here, when the rule contains negative values, this error appears. I need to check whether this rule applies to the instances in the DataFrame. Ask_before and BidVol_before are columns of the dataframe test_df. Could you please help me fix this issue?
|
[
"It isn't a problem in your code, it's a bug in pandas https://github.com/pandas-dev/pandas/issues/16363\nIt is fixed by now.\n",
"Seems like it's still open as of 2022 January\nhttps://github.com/pandas-dev/pandas/issues/16363\nin my experience resolving negative index values solved iit\n",
"Change the \"unary\" to an simple expression works for me, e.g. -5 -> (0 - 5)\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"conditional_operator",
"dataframe",
"eval",
"python",
"python_3.x"
] |
stackoverflow_0063528707_conditional_operator_dataframe_eval_python_python_3.x.txt
|
Q:
How to convert date/time to YYYY-MM-DDTHH:mm:ss.000+0000 format?
How can I convert DD-MM-YYYY HH:mm:ss to the YYYY-MM-DDTHH:mm:ss.000+0000 format using Python?
I want to convert this
20-05-2022 14:03:02
to
2022-05-20T14:03:02.000+0000
A:
Use the datetime module
from datetime import datetime, timezone
dtt = datetime.strptime("20-05-2022 14:03:02", "%d-%m-%Y %H:%M:%S")
print(dtt.replace(tzinfo=timezone.utc).isoformat(timespec="milliseconds"))
Prints 2022-05-20T14:03:02.000+00:00
See this answer for python datetime and ISO 8601.
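Note that isoformat renders UTC as +00:00, while the question asks for +0000. If that exact suffix matters, one option is a final string replacement (strftime with %z would also produce +0000):

```python
from datetime import datetime, timezone

dtt = datetime.strptime("20-05-2022 14:03:02", "%d-%m-%Y %H:%M:%S")
stamp = dtt.replace(tzinfo=timezone.utc).isoformat(timespec="milliseconds")
# isoformat uses "+00:00"; swap it for the requested "+0000" suffix
stamp = stamp.replace("+00:00", "+0000")
print(stamp)  # 2022-05-20T14:03:02.000+0000
```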
|
How to convert date/time to YYYY-MM-DDTHH:mm:ss.000+0000 format?
|
How can I convert DD-MM-YYYY HH:mm:ss to the YYYY-MM-DDTHH:mm:ss.000+0000 format using Python?
I want to convert this
20-05-2022 14:03:02
to
2022-05-20T14:03:02.000+0000
|
[
"Use the datetime module\nfrom datetime import datetime, timezone\n\ndtt = datetime.strptime(\"20-05-2022 14:03:02\", \"%d-%m-%Y %H:%M:%S\")\nprint(dtt.replace(tzinfo=timezone.utc).isoformat(timespec=\"milliseconds\"))\n\nPrints 2022-05-20T14:03:02.000+00:00\nSee this answer for python datetime and ISO 8601.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_datetime"
] |
stackoverflow_0074473622_python_python_datetime.txt
|
Q:
Creating custom protocol with Raspberry Pi 4
Hello and thank you for reading. As a hobby project I thought it would be fun to try and create my own communication protocol. I am trying to use the GPIO-pins on my Raspberry Pi 4 to send a digital signal. The reason for using a Raspberry Pi is because I want to connect it to a webpage that I want to run on the Pi. I am using Python with the RPi.GPIO library to control the pins. I am very much at the start of this project but I already ran into a problem.
When sending pulses for my signal I get a strange offset when going for higher speeds. See the code below:
import RPi.GPIO as GPIO
import time
pin = 18 # select pin
pulse_time = 1/100 # set length of pulse
GPIO.setmode(GPIO.BOARD)
GPIO.setup(pin, GPIO.OUT)
GPIO.output(pin, GPIO.HIGH) # set pin high
time.sleep(pulse_time) # wait
GPIO.output(pin, GPIO.LOW) # set pin low
time.sleep(pulse_time)
GPIO.output(pin, GPIO.HIGH)
time.sleep(pulse_time)
GPIO.cleanup()
In the variable "pulse_time" I set the wait time for the pulses. In this case I am trying to send bits with a speed of 100 bits per second. Which would be 1 bit per 10 milliseconds. See the image below for the data signal (sorry for bad quality, my oscilloscope doesn't have a USB-port for screenshots).
In the image above you can see the 2 pulses I send with my Python code. The first pulse is exactly 10ms long, just as I wanted, but the second pulse already gets a slight offset. When changing the bps to 1000 instead of 100, the offset gets a lot worse. For my project I intend to use a bitrate of 2400 bps.
I also tried doing the same things using C++ instead of Python, since C++ is generally faster/better at controlling hardware. Sadly the GPIO library 'wiringPi' for C++ got deleted and I can't find another way to control the GPIO-pins using C++.
Now that I explained the situation I have the following questions:
Can I set a clock speed in Python for controlling the pins at a set speed? If so, what is the max bps I could reach?
Is there a new way to control the GPIO-pins using C++ instead of Python?
Am I an idiot for trying to do this on a Raspberry Pi and should I use something else?
Any advice would be appreciated. Thank you in advance for taking the time to answer any of my questions.
A:
I think this offset could come from the time it takes to run GPIO.output(pin, GPIO.HIGH).
You could improve this by measuring that execution time and accounting for it in the time.sleep(...) call (e.g. time.sleep(pulse_time - some_gpio_time)).
Have a look at timeit to measure the time experimentally or you could try to measure it on the fly and consider it in the consecutive sleep.
But keep in mind that an application like this is not intended to fulfill hard real-time requirements, and you will always get some error. To achieve better timing you would need to implement this as a Linux kernel module, but that might go a bit too far for a hobby project.
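One way to keep that per-call overhead from accumulating is to sleep toward absolute deadlines instead of for relative durations, so each bit's timing error does not carry into the next bit. A sketch of the idea; the `set_pin` callback stands in for `GPIO.output`, so this runs without a Pi:

```python
import time

def send_pulses(levels, bit_time, set_pin):
    # Compute every edge's deadline from a single start time; any delay
    # in set_pin or in waking from sleep shortens only the current sleep,
    # so the error does not accumulate across bits.
    deadline = time.monotonic()
    for level in levels:
        set_pin(level)
        deadline += bit_time
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)

edges = []  # records the pin levels instead of driving real hardware
send_pulses([1, 0, 1, 0], bit_time=0.001, set_pin=edges.append)
```

On a real Pi you would pass `lambda level: GPIO.output(pin, level)` as `set_pin`. Each bit still jitters by the scheduler's wake-up latency, but the average bit rate stays correct.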
|
Creating custom protocol with Raspberry Pi 4
|
Hello and thank you for reading. As a hobby project I thought it would be fun to try and create my own communication protocol. I am trying to use the GPIO-pins on my Raspberry Pi 4 to send a digital signal. The reason for using a Raspberry Pi is because I want to connect it to a webpage that I want to run on the Pi. I am using Python with the RPi.GPIO library to control the pins. I am very much at the start of this project but I already ran into a problem.
When sending pulses for my signal I get a strange offset when going for higher speeds. See the code below:
import RPi.GPIO as GPIO
import time
pin = 18 # select pin
pulse_time = 1/100 # set length of pulse
GPIO.setmode(GPIO.BOARD)
GPIO.setup(pin, GPIO.OUT)
GPIO.output(pin, GPIO.HIGH) # set pin high
time.sleep(pulse_time) # wait
GPIO.output(pin, GPIO.LOW) # set pin low
time.sleep(pulse_time)
GPIO.output(pin, GPIO.HIGH)
time.sleep(pulse_time)
GPIO.cleanup()
In the variable "pulse_time" I set the wait time for the pulses. In this case I am trying to send bits with a speed of 100 bits per second. Which would be 1 bit per 10 milliseconds. See the image below for the data signal (sorry for bad quality, my oscilloscope doesn't have a USB-port for screenshots).
In the image above you can see the 2 pulses I send with my Python code. The first pulse is exactly 10ms long, just as I wanted, but the second pulse already gets a slight offset. When changing the bps to 1000 instead of 100, the offset gets a lot worse. For my project I intend to use a bitrate of 2400 bps.
I also tried doing the same thing using C++ instead of Python, since C++ is generally faster/better at controlling hardware. Sadly the GPIO library 'wiringPi' for C++ got deleted and I can't find another way to control the GPIO-pins using C++.
Now that I explained the situation I have the following questions:
Can I set a clock speed in Python for controlling the pins at a set speed? If so, what is the max bps I could reach?
Is there a new way to control the GPIO-pins using C++ instead of Python?
Am I an idiot for trying to do this on a Raspberry Pi and should I use something else?
Any advice would be appreciated. Thank you in advance for taking the time to answer any of my questions.
|
[
"I think this offset could come from the time it takes to run GPIO.output(pin, GPIO.HIGH).\nYou could improve this by measuring this execution time and condier it in the time.sleep(...). (e.g. time.sleep(pulse_time - some_gpio_time)\nHave a look at timeit to measure the time experimentally or you could try to measure it on the fly and consider it in the consecutive sleep.\nBut keep in mind, that an applicaiton like that is not intented to fulfill hard realtime requirements and you will always get some error. To achieve better timing you would need to implement this as a linux kernel module, but this might go a bit too far for a hobby project.\n"
] |
[
0
] |
[] |
[] |
[
"c++",
"python",
"raspberry_pi",
"signal_processing"
] |
stackoverflow_0074473437_c++_python_raspberry_pi_signal_processing.txt
|
Q:
Finding IP Camera using OpenCV
So this is what I have currently following this tutorial.
#number 0 is front web cam, number 1 is back webcam
capture = cv2.VideoCapture(0)
capture.set(3, 640)
capture.set(4, 480)
while True:
success, img = capture.read()
cv2.imshow("video", img)
#This function loops -> Delay -> press Q it breaks loop
if cv2.waitKey(1) & 0xFF ==ord('q'):
break
This works great if I wanted to use my webcams. I do not.
I have an Ethernet camera attached to an Ethernet injector which runs to my PC using an Ethernet to USB adapter attached to a USB hub.
Hardware
1. Ethernet Camera -> 2. Ethernet Injector -> 3. Ethernet USB Adapter -> 4. USB Hub
OS: Windows 10
Mako G503C
Tp-Link TL-POE150S
Insignia USB to Ethernet
BYEASY USB Hub, 4 Port USB 3.0 Hub
Question: How would I find the Ethernet camera and implement it into my code?
Thanks,
J
A:
This issue is old, but just in case someone has the same issue. Use the VimbaPython example for asynchronous streaming with openCV from GitHub.
GitHub VimbaPython examples
You use Vimba to open the camera and access the frames and convert them to an openCV format. Then you can continue image manipulation/analysis with openCV. With the Mako specifically, some pixel formats won't be compatible with VimbaPython, but the non-packed ones should be fine.
|
Finding IP Camera using OpenCV
|
So this is what I have currently following this tutorial.
#number 0 is front web cam, number 1 is back webcam
capture = cv2.VideoCapture(0)
capture.set(3, 640)
capture.set(4, 480)
while True:
success, img = capture.read()
cv2.imshow("video", img)
#This function loops -> Delay -> press Q it breaks loop
if cv2.waitKey(1) & 0xFF ==ord('q'):
break
This works great if I wanted to use my webcams. I do not.
I have an Ethernet camera attached to an Ethernet injector which runs to my PC using an Ethernet to USB adapter attached to a USB hub.
Hardware
1. Ethernet Camera -> 2. Ethernet Injector -> 3. Ethernet USB Adapter -> 4. USB Hub
OS: Windows 10
Mako G503C
Tp-Link TL-POE150S
Insignia USB to Ethernet
BYEASY USB Hub, 4 Port USB 3.0 Hub
Question: How would I find the Ethernet camera and implement it into my code?
Thanks,
J
|
[
"This issue is old, but just in case someone has the same issue. Use the VimbaPython example for asynchronous streaming with openCV from GitHub.\nGitHub VimbaPython examples\nYou use Vimba to open the camera and access the frames and convert them to an openCV format. Then you can continue image manipulation/analysis with openCV. With the Mako specifically, some pixel formats won't be compatible with VimbaPython, but the non-packed ones should be fine.\n"
] |
[
0
] |
[] |
[] |
[
"computer_vision",
"cv2",
"opencv",
"python"
] |
stackoverflow_0063527336_computer_vision_cv2_opencv_python.txt
|
Q:
CPU throttling in chrome via python selenium
Is it possible to throttle CPU in chrome's devtools via python selenium? And if so, how?
It would appear the driver has a method execute_cdp_cmd which stands for "Execute Chrome Devtools Protocol command" but I do not know what command I would give it.
A:
It would appear to be possible in chromedriver 75.
## rate 1 is no throttle, 2 is 2x slower, etc.
driver.execute_cdp_cmd("Emulation.setCPUThrottlingRate", {'rate': 10})
NOTE:
2.38 didn't seem to support execute_cdp_cmd() while 2.48 did. Chromedriver also appears to have changed their versioning scheme to keep in sync with browser releases.
I did some quick checking and was able to push the throttle rate to upwards of 200x but it started having serious issues. My guess it going beyond 100x is ill-advised.
A:
## rate 1 is no throttle, 2 is 2x slower, etc.
driver.execute_cdp_cmd("Emulation.setCPUThrottlingRate", {'rate': 10})
This still works today!
|
CPU throttling in chrome via python selenium
|
Is it possible to throttle CPU in chrome's devtools via python selenium? And if so, how?
It would appear the driver has a method execute_cdp_cmd which stands for "Execute Chrome Devtools Protocol command" but I do not know what command I would give it.
|
[
"It would appear to be possible in chromedriver 75. \n## rate 1 is no throttle, 2 is 2x slower, etc. \ndriver.execute_cdp_cmd(\"Emulation.setCPUThrottlingRate\", {'rate': 10})\n\nNOTE:\n2.38 didn't seem to support execute_cdp_cmd() while 2.48 did. Chromedriver also appears to have changed their versioning scheme to keep in sync with browser releases. \nI did some quick checking and was able to push the throttle rate to upwards of 200x but it started having serious issues. My guess it going beyond 100x is ill-advised. \n",
"## rate 1 is no throttle, 2 is 2x slower, etc. \ndriver.execute_cdp_cmd(\"Emulation.setCPUThrottlingRate\", {'rate': 10})\n\nThis still works today!\n"
] |
[
1,
0
] |
[] |
[] |
[
"google_chrome_devtools",
"python",
"python_3.x",
"selenium",
"selenium_chromedriver"
] |
stackoverflow_0057008946_google_chrome_devtools_python_python_3.x_selenium_selenium_chromedriver.txt
|
Q:
In Regex after match use group method to return only a part of the string
I am using the regular expression below to get the names of 40 hotels from an HTML file in Python, using grouping.
[edit]- The catch is that we have to do this only using Regex and no other module like Beautiful Soup
pattern_names = re.compile(r'\t(?P<Hotel_name>[a-zA-Z0-9][a-z0-9]*.+)\n</a>\n')
name_list=pattern_names.findall(data)
print("No of hotels=",len(name_list))
name_list
I am getting the required list of 40 names, but some of these names contain the "&amp;" string due to the presence of "&" in the HTML file.
"Rocco's Cafe",
'Local Kitchen & Wine Merchant',
'Ristorante Umbria',
'flour + water',
'Firewood At Metreon',
'Palomino',
'Buono',
'Farina Focaccia & Cucina Italiana',
I want to modify my regular expression so that "&amp;" is not returned with the string name.
I tried the following regex
pattern_names = re.compile(r'\t(?P<Hotel_name>[a-zA-Z0-9][a-z0-9]*.+^[&])\n</a>\n')
but this returned an empty list. No strings matched.
A:
There
pattern_names = re.compile(r'\t(?P<Hotel_name>[a-zA-Z0-9][a-z0-9]*.+^[&])\n</a>\n')
you have ^ inside, which does not make sense since ^ denotes the beginning of a line; also observe that [&] means one of the characters listed, i.e. & or a or m or p or ;.
I suggest to properly process text from HTML rather than deleting HTML entities, html.unescape (part of standard library) allows you to do it easily
import html
names = ['Local Kitchen & Wine Merchant','Firewood At Metreon','Farina Focaccia & Cucina Italiana']
cleaned_names = [html.unescape(i) for i in names]
print(cleaned_names)
output
['Local Kitchen & Wine Merchant', 'Firewood At Metreon', 'Farina Focaccia & Cucina Italiana']
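Putting the two steps together — extract with the original pattern, then decode entities — on a made-up fragment (the `data` string below is a hypothetical snippet shaped like the HTML the pattern targets):

```python
import re
import html

# Hypothetical HTML fragment matching the structure the pattern expects.
data = "<a>\n\tLocal Kitchen &amp; Wine Merchant\n</a>\n<a>\n\tRocco's Cafe\n</a>\n"

pattern_names = re.compile(r'\t(?P<Hotel_name>[a-zA-Z0-9][a-z0-9]*.+)\n</a>\n')
# Keep the regex unchanged for extraction, then unescape the entities afterwards.
name_list = [html.unescape(name) for name in pattern_names.findall(data)]
```

Since html is part of the standard library, this stays within the "regex plus standard library" constraint.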
|
In Regex after match use group method to return only a part of the string
|
I am using the regular expression below to get the names of 40 hotels from an HTML file in Python, using grouping.
[edit]- The catch is that we have to do this only using Regex and no other module like Beautiful Soup
pattern_names = re.compile(r'\t(?P<Hotel_name>[a-zA-Z0-9][a-z0-9]*.+)\n</a>\n')
name_list=pattern_names.findall(data)
print("No of hotels=",len(name_list))
name_list
I am getting the required list of 40 names, but some of these names contain the "&amp;" string due to the presence of "&" in the HTML file.
"Rocco's Cafe",
'Local Kitchen & Wine Merchant',
'Ristorante Umbria',
'flour + water',
'Firewood At Metreon',
'Palomino',
'Buono',
'Farina Focaccia & Cucina Italiana',
I want to modify my regular expression so that "&amp;" is not returned with the string name.
I tried the following regex
pattern_names = re.compile(r'\t(?P<Hotel_name>[a-zA-Z0-9][a-z0-9]*.+^[&])\n</a>\n')
but this returned an empty list. No strings matched.
|
[
"There\npattern_names = re.compile(r'\\t(?P<Hotel_name>[a-zA-Z0-9][a-z0-9]*.+^[&])\\n</a>\\n')\n\nyou have ^ inside which does not make sense for ^ which denotes begin of line, also observe that [&] means one of characters listed, i.e. & or a or m or p or ;.\nI suggest to properly process text from HTML rather than deleting HTML entities, html.unescape (part of standard library) allows you to do it easily\nimport html\nnames = ['Local Kitchen & Wine Merchant','Firewood At Metreon','Farina Focaccia & Cucina Italiana']\ncleaned_names = [html.unescape(i) for i in names]\nprint(cleaned_names)\n\noutput\n['Local Kitchen & Wine Merchant', 'Firewood At Metreon', 'Farina Focaccia & Cucina Italiana']\n\n"
] |
[
0
] |
[] |
[] |
[
"group",
"python",
"regex"
] |
stackoverflow_0074473813_group_python_regex.txt
|
Q:
Go to definition in VS code doesn't show the body of a function
When I right click on a function and then select "Go to definition" there shows up a module with that function, but it only shows the parameters which have to be passed to it, and I can't see anything about the body of the function.
Here is what's shown when I went to the definition of itertools.dropwhile:
A:
As mentioned in the comments, VSCode can only show you source code it has access to, and many of the Python builtins and stdlib (including the itertools module) are implemented in compiled C -- there's no source code to show you.
A:
Sometimes this happens if you develop code that runs inside an environment whose libraries are not visible in your main OS.
One way to solve this is by opening the terminal in VSCode and doing a pip install <library> to install the library and make VSCode aware of it.
|
Go to definition in VS code doesn't show the body of a function
|
When I right click on a function and then select "Go to definition" there shows up a module with that function, but it only shows the parameters which have to be passed to it, and I can't see anything about the body of the function.
Here is what's shown when I went to the definition of itertools.dropwhile:
|
[
"As mentioned in the comments, VSCode can only show you source code it has access to, and many of the Python builtins and stdlib (including the itertools module) are implemented in compiled C -- there's no source code to show you.\n",
"Sometimes this happens if you develop code that runs inside an environment whose libraries are not visible in your main OS.\nOne way to solve this is by opening the terminal in VSCode and doing a pip install <library> to install the library and make VSCode aware of it.\n"
] |
[
6,
0
] |
[] |
[] |
[
"go_to_definition",
"python",
"visual_studio_code"
] |
stackoverflow_0059339718_go_to_definition_python_visual_studio_code.txt
|
Q:
Convert list into columns by matching values
I have a pandas dataframe like so:
df = pd.DataFrame({'column': [[np.nan, np.nan, np.nan], [1, np.nan, np.nan], [2, 3, np.nan], [3, 2, 1]]})
column
0 [nan, nan, nan]
1 [1, nan, nan]
2 [2, 3, nan]
3 [3, 2, 1]
Note that there is never the same value twice in a row.
I wish to transform this single column into multiple columns named with the corresponding values. So I want to order the values and put them in the right column. The ones under column_1, twos under column_2 etc.
column_1 column_2 column_3
0 NaN NaN NaN
1 1.0 NaN NaN
2 NaN 2.0 3.0
3 1.0 2.0 3.0
How to do this? I don't really know where to start to be honest.
A:
Use dict comprehension to compute pandas Series of columns:
import math
df = df.apply(lambda row: pd.Series(data={f"column_{v}": v for v in row["column"] if not math.isnan(v)}, dtype="float64"), axis=1)
[Out]:
column_1 column_2 column_3
0 NaN NaN NaN
1 1.0 NaN NaN
2 NaN 2.0 3.0
3 1.0 2.0 3.0
A:
Using a pivot_table:
(df['column'].explode().reset_index()
.dropna()
.assign(col=lambda d: 'column_'+d['column'].astype(str))
.pivot_table(index='index', columns='col', values='column',
aggfunc='first', dropna=False)
.reindex(df.index)
)
Output:
col column_1 column_2 column_3
0 NaN NaN NaN
1 1.0 NaN NaN
2 NaN 2.0 3.0
3 1.0 2.0 3.0
|
Convert list into columns by matching values
|
I have a pandas dataframe like so:
df = pd.DataFrame({'column': [[np.nan, np.nan, np.nan], [1, np.nan, np.nan], [2, 3, np.nan], [3, 2, 1]]})
column
0 [nan, nan, nan]
1 [1, nan, nan]
2 [2, 3, nan]
3 [3, 2, 1]
Note that there is never the same value twice in a row.
I wish to transform this single column into multiple columns named with the corresponding values. So I want to order the values and put them in the right column. The ones under column_1, twos under column_2 etc.
column_1 column_2 column_3
0 NaN NaN NaN
1 1.0 NaN NaN
2 NaN 2.0 3.0
3 1.0 2.0 3.0
How to do this? I don't really know where to start to be honest.
|
[
"Use dict comprehension to compute pandas Series of columns:\nimport math\ndf = df.apply(lambda row: pd.Series(data={f\"column_{v}\": v for v in row[\"column\"] if not math.isnan(v)}, dtype=\"float64\"), axis=1)\n\n[Out]:\n column_1 column_2 column_3\n0 NaN NaN NaN\n1 1.0 NaN NaN\n2 NaN 2.0 3.0\n3 1.0 2.0 3.0\n\n",
"Using a pivot_table:\n(df['column'].explode().reset_index()\n .dropna()\n .assign(col=lambda d: 'column_'+d['column'].astype(str))\n .pivot_table(index='index', columns='col', values='column',\n aggfunc='first', dropna=False)\n .reindex(df.index)\n)\n\nOutput:\ncol column_1 column_2 column_3\n0 NaN NaN NaN\n1 1.0 NaN NaN\n2 NaN 2.0 3.0\n3 1.0 2.0 3.0\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074472549_pandas_python.txt
|
Q:
Invalid object name "django_migrations" when trying to runserver
I wanted to connect my Django app to a client's MSSQL database (previously my app ran on SQLite). I made a migration on their test server and it worked successfully, then they copied this database to the destination server, and when I try to
python manage.py runserver
It shows me just
django.db.utils.ProgrammingError: ('42S02', "[42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid object name 'django_migrations'. (208) (SQLExecDirectW)")
What can be possible problem with this? This is my connection:
DATABASES = {
'default': {
'ENGINE': 'sql_server.pyodbc',
'NAME': 'ESD',
'HOST': 'IPaddress',
'USER': 'Username',
'PASSWORD': 'Password',
'OPTIONS': {
'driver': 'ODBC Driver 17 for SQL Server'
}
}
}
I tried to migrate too, it shows me the same error
There are all errors:
(norm)esd@server:~/Desktop/norm/myproject> python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
Unhandled exception in thread started by <function wrapper at 0x7fc57eb4f8c0>
Traceback (most recent call last):
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/core/management/commands/runserver.py", line 117, in inner_run
self.check_migrations()
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/core/management/commands/runserver.py", line 163, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/migrations/executor.py", line 20, in __init__
self.loader = MigrationLoader(self.connection)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/migrations/loader.py", line 49, in __init__
self.build_graph()
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/migrations/loader.py", line 176, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/migrations/recorder.py", line 66, in applied_migrations
return set(tuple(x) for x in self.migration_qs.values_list("app", "name"))
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/models/query.py", line 258, in __iter__
self._fetch_all()
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/models/query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/models/query.py", line 128, in __iter__
for row in compiler.results_iter():
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/models/sql/compiler.py", line 802, in results_iter
results = self.execute_sql(MULTI)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/sql_server/pyodbc/base.py", line 537, in execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: ('42S02', "[42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid object name 'django_migrations'. (208) (SQLExecDirectW)")
A:
So the reason this is happening is likely because the default schema you've set for django's internal tables isn't the same as the default schema of your current user.
You can test this by checking the actual schema of django_migrations table in your database, and then run a python script to verify your current default schema, like below:
import sqlalchemy as db
from sqlalchemy.sql import text
db_host = "Your host URL"
db_name = "Your DB name"
engine = db.create_engine(
f"mssql+pyodbc://@{db_host}/{db_name}?trusted_connection=yes&driver=SQL+Server+Native+Client+11.0"
)
statement = text("SELECT SCHEMA_NAME();")
with engine.connect() as connection:
with connection.begin():
results = connection.execute(statement)
for result in results:
print(result)
If they're not the same, go in to SQL Server Manager and set the default schema for your user to the correct one.
To see which user you're connected as, just replace SELECT SCHEMA_NAME() with SELECT SYSTEM_USER and run again.
A:
Maybe you need to run "py manage.py makemigrations" first and then "py manage.py migrate".
If not, maybe it's worth adding the specific host and port for the databases. What I used for my project that uses MySQL was the following.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'pkg',
        'HOST': '127.0.0.1',
        'PORT': '3306',
        'USER': 'root',
        'PASSWORD': '',
    }
}
Not sure if it will work but perhaps it's worth trying.
|
Invalid object name "django_migrations" when trying to runserver
|
I wanted to connect my Django app to a client's MSSQL database (previously my app ran on SQLite). I made a migration on their test server and it worked successfully, then they copied this database to the destination server, and when I try to
python manage.py runserver
It shows me just
django.db.utils.ProgrammingError: ('42S02', "[42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid object name 'django_migrations'. (208) (SQLExecDirectW)")
What can be possible problem with this? This is my connection:
DATABASES = {
'default': {
'ENGINE': 'sql_server.pyodbc',
'NAME': 'ESD',
'HOST': 'IPaddress',
'USER': 'Username',
'PASSWORD': 'Password',
'OPTIONS': {
'driver': 'ODBC Driver 17 for SQL Server'
}
}
}
I tried to migrate too, it shows me the same error
There are all errors:
(norm)esd@server:~/Desktop/norm/myproject> python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
Unhandled exception in thread started by <function wrapper at 0x7fc57eb4f8c0>
Traceback (most recent call last):
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/core/management/commands/runserver.py", line 117, in inner_run
self.check_migrations()
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/core/management/commands/runserver.py", line 163, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/migrations/executor.py", line 20, in __init__
self.loader = MigrationLoader(self.connection)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/migrations/loader.py", line 49, in __init__
self.build_graph()
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/migrations/loader.py", line 176, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/migrations/recorder.py", line 66, in applied_migrations
return set(tuple(x) for x in self.migration_qs.values_list("app", "name"))
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/models/query.py", line 258, in __iter__
self._fetch_all()
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/models/query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/models/query.py", line 128, in __iter__
for row in compiler.results_iter():
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/models/sql/compiler.py", line 802, in results_iter
results = self.execute_sql(MULTI)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/esd/Desktop/norm/lib64/python2.7/site-packages/sql_server/pyodbc/base.py", line 537, in execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: ('42S02', "[42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid object name 'django_migrations'. (208) (SQLExecDirectW)")
|
[
"So the reason this is happening is likely because the default schema you've set for django's internal tables isn't the same as the default schema of your current user.\nYou can test this by checking the actual schema of django_migrations table in your database, and then run a python script to verify your current default schema, like below:\nimport sqlalchemy as db\nfrom sqlalchemy.sql import text\n\ndb_host = \"Your host URL\"\ndb_name = \"Your DB name\"\n\nengine = db.create_engine(\n f\"mssql+pyodbc://@{db_host}/{db_name}?trusted_connection=yes&driver=SQL+Server+Native+Client+11.0\"\n)\n\nstatement = text(\"SELECT SCHEMA_NAME();\")\n\nwith engine.connect() as connection:\n with connection.begin():\n results = connection.execute(statement)\n\nfor result in results:\n print(result)\n\nIf they're not the same, go in to SQL Server Manager and set the default schema for your user to the correct one.\nTo see what user your connected as, just replace SELECT SCHEMA_NAME() with SELECT SYSTEM_USER and run again\n",
"Maybe you need to run \"py manage.py makemigrations\" first and then \"py manage.py migrate\".\nIf not, maybe it's worth adding the specific host and port for the databases. What I used for my project that uses mysql were the following.\n> DATABASES = {\n> 'default': {\n> 'ENGINE': 'django.db.backends.mysql',\n> 'NAME': 'pkg',\n> 'HOST': '127.0.0.1',\n> 'PORT': '3306',\n> 'USER': 'root',\n> 'PASSWD': '',\n> } }\n\nNot sure if it will work but perhaps it's worth trying.\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"odbc",
"python",
"sql_server"
] |
stackoverflow_0056630509_django_odbc_python_sql_server.txt
|
Q:
Pandas percentage of two columns
I have a data frame that looks like this:
Vendor GRDate Pass/Fail
0 204177 2022-22 1.0
1 204177 2022-22 0.0
2 204177 2022-22 0.0
3 204177 2022-22 1.0
4 204177 2022-22 1.0
5 204177 2022-22 1.0
7 201645 2022-22 0.0
8 201645 2022-22 0.0
9 201645 2022-22 1.0
10 201645 2022-22 1.0
I am trying to work out the percentage of rows where Pass/Fail equals 1 for each week for each vendor and put it in a new df (number of rows where Pass/Fail = 1 divided by the total number of rows per vendor & week)
which would look like this:
Vendor GRDate Performance
0 204177 2022-22 0.6
1 201645 2022-22 0.5
I'm trying to do this with .groupby() and .count() but I can't work out how to get this into a new df along with the Vendor and GRDate columns. The code I have here returns the pass/fail percentage but drops the other two columns.
sdp_percent = sdp.groupby(['GRDate','Vendor'])['Pass/Fail'].apply(lambda x: x[x == 1].count()) / sdp.groupby(['GRDate','Vendor'])['Pass/Fail'].count()
But then if I add .reset_index() to keep them I get this error: unsupported operand type(s) for /: 'str' and 'str'
Please can someone explain what i'm doing wrong?
A:
Try:
x = (
df.groupby(["GRDate", "Vendor"])["Pass/Fail"]
.mean()
.reset_index()
.rename(columns={"Pass/Fail": "Performance"})
)
print(x)
Prints:
GRDate Vendor Performance
0 2022-22 201645 0.500000
1 2022-22 204177 0.666667
A:
As you have 0/1, you can use a groupby.mean:
(df.groupby(['Vendor', 'GRDate'], as_index=False, sort=False)
.agg(Performance=('Pass/Fail', 'mean'))
)
If you had a specific arbitrary value X:
(df.assign(val=df['Pass/Fail'].eq(X))
.groupby(['Vendor', 'GRDate'], as_index=False, sort=False)
.agg(Performance=('val', 'mean'))
)
Output:
Vendor GRDate Performance
0 204177 2022-22 0.666667
1 201645 2022-22 0.500000
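If the underlying counts are useful alongside the rate, named aggregation can compute them together with the mean in a single groupby pass (a sketch rebuilt from the sample data in the question):

```python
import pandas as pd

sdp = pd.DataFrame({
    "Vendor": [204177] * 6 + [201645] * 4,
    "GRDate": ["2022-22"] * 10,
    "Pass/Fail": [1, 0, 0, 1, 1, 1, 0, 0, 1, 1],
})

out = (
    sdp.groupby(["Vendor", "GRDate"], as_index=False)
       .agg(Passed=("Pass/Fail", "sum"),
            Total=("Pass/Fail", "count"),
            Performance=("Pass/Fail", "mean"))
)
```

Note that, as both answers show, the rate for vendor 204177 comes out to 0.666667 (4 passes out of 6 rows), not the 0.6 given in the question's expected output.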
|
Pandas percentage of two columns
|
I have a data frame that looks like this:
Vendor GRDate Pass/Fail
0 204177 2022-22 1.0
1 204177 2022-22 0.0
2 204177 2022-22 0.0
3 204177 2022-22 1.0
4 204177 2022-22 1.0
5 204177 2022-22 1.0
7 201645 2022-22 0.0
8 201645 2022-22 0.0
9 201645 2022-22 1.0
10 201645 2022-22 1.0
I am trying to work out the percentage of rows where Pass/Fail equals 1 for each week for each vendor and put it in a new df (number of rows where Pass/Fail = 1 divided by the total number of rows per vendor & week)
which would look like this:
Vendor GRDate Performance
0 204177 2022-22 0.6
1 201645 2022-22 0.5
I'm trying to do this with .groupby() and .count() but I can't work out how to get this into a new df along with the Vendor and GRDate columns. The code I have here returns the pass/fail percentage but drops the other two columns.
sdp_percent = sdp.groupby(['GRDate','Vendor'])['Pass/Fail'].apply(lambda x: x[x == 1].count()) / sdp.groupby(['GRDate','Vendor'])['Pass/Fail'].count()
But then if I add .reset_index() to keep them I get this error: unsupported operand type(s) for /: 'str' and 'str'
Please can someone explain what i'm doing wrong?
|
[
"Try:\nx = (\n df.groupby([\"GRDate\", \"Vendor\"])[\"Pass/Fail\"]\n .mean()\n .reset_index()\n .rename(columns={\"Pass/Fail\": \"Performance\"})\n)\nprint(x)\n\nPrints:\n GRDate Vendor Performance\n0 2022-22 201645 0.500000\n1 2022-22 204177 0.666667\n\n",
"As you have 0/1, you can use a groupby.mean:\n(df.groupby(['Vendor', 'GRDate'], as_index=False, sort=False)\n .agg(Performance=('Pass/Fail', 'mean'))\n)\n\nIf you had a specific arbitrary value X:\n(df.assign(val=df['Pass/Fail'].eq(X))\n .groupby(['Vendor', 'GRDate'], as_index=False, sort=False)\n .agg(Performance=('val', 'mean'))\n)\n\nOutput:\n Vendor GRDate Performance\n0 204177 2022-22 0.666667\n1 201645 2022-22 0.500000\n\n"
] |
[
2,
2
] |
[] |
[] |
[
"group_by",
"pandas",
"python"
] |
stackoverflow_0074474022_group_by_pandas_python.txt
|
Q:
Unwanted RST TCP packet with Scapy
In order to understand how TCP works, I tried to forge my own TCP SYN/SYN-ACK/ACK (based on the tutorial: http://www.thice.nl/creating-ack-get-packets-with-scapy/ ).
The problem is that whenever my computer receives the SYN-ACK from the server, it generates a RST packet that stops the connection process.
I tried on OS X Lion and on Ubuntu 10.10 Maverick Meerkat; both reset the connection. I found this: http://lkml.indiana.edu/hypermail/linux/net/0404.2/0021.html, but I don't know if it is the reason.
Could anyone tell me what the reason could be, and how to avoid this problem?
Thank you.
A:
The article you cited makes this pretty clear...
Since you are not completing the full TCP handshake your operating system might try to take control and can start sending RST (reset) packets, to avoid this we can use iptables:
iptables -A OUTPUT -p tcp --tcp-flags RST RST -s 192.168.1.20 -j DROP
Essentially, the problem is that scapy runs in user space, and the linux kernel will receive the SYN-ACK first. The kernel will send a RST because it won't have a socket open on the port number in question, before you have a chance to do anything with scapy.
The solution (as the blog mentions) is to firewall your kernel from sending a RST packet.
A:
I don't have a non-iptables answer, but one can fix the reset issue. Instead of trying to filter the outgoing reset in the filter table, filter all of the incoming packets from the target in the raw table instead. This prevents the return packets from the target from even being processed by the kernel, though scapy still sees them. I used the following syntax:
iptables -t raw -A PREROUTING -p tcp --dport <source port I use for scapy traffic> -j DROP
This solution does force me to use the same source port for my traffic; feel free to use your own iptables-fu to identify your target's return packets.
A:
The blog article cited in other answers is not entirely correct. It's not only that you aren't completing the three way handshake, it's that the kernel's IP stack has no idea that there's a connection happening. When it receives the SYN-ACK, it sends a RST-ACK because it's unexpected. Receiving first or last really doesn't enter into it. The stack receiving the SYN-ACK is the issue.
Using IPTables to drop outbound RST packets is a common and valid approach, but sometimes you need to send a RST from Scapy. A more involved but very workable approach is to go lower, generating and responding to ARP with a MAC that is different from the host's. This allows you to have the ability to send and receive anything without any interference from the host.
Clearly this is more effort. Personally, I only take this approach (as opposed to the RST dropping approach) when I actually need to send a RST myself.
A:
I found a solution without IPTables in https://widu.tumblr.com/post/43624355124/suppressing-tcp-rst-on-raw-sockets .
To bypass this problem, simply create a standard TCP socket as a server socket and bind it to the requested port. Don't call accept().
Just socket(), bind() on the port, and listen(). This keeps the kernel quiet and lets you do the 3-way handshake yourself.
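A minimal sketch of that approach (the address and port handling here are my own placeholders, not from the linked post): open a listening socket on the source port you intend to use from Scapy, and never accept on it.

```python
import socket

def open_placeholder_socket(port=0):
    # Bind and listen without calling accept(), so the kernel
    # considers the port in use (port 0 picks any free port).
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen(1)
    return s

sock = open_placeholder_socket()
sport = sock.getsockname()[1]
print(sport)  # use this port as the TCP sport when crafting packets in Scapy
```

You would then set `sport=sport` on the TCP layer of your handcrafted SYN.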
Q:
Schedule an iterative function every x seconds without drifting
Complete newbie here, so bear with me. I've got a number of devices that report status updates to a single location, and as more sites have been added, drift with time.sleep(x) is becoming more noticeable; with as many sites as are connected now, it has effectively doubled the sleep time between iterations.
import time
...
def client_list():
sites=pandas.read_csv('sites')
return sites['Site']
def logs(site):
time.sleep(x)
if os.path.isfile(os.path.join(f'{site}/target/', 'hit')):
stamp = time.strftime('%Y-%m-%d,%H:%M:%S')
log = open(f"{site}/log", 'a')
log.write(f",{stamp},{site},hit\n")
log.close()
os.remove(f"{site}/target/hit")
else:
stamp = time.strftime('%Y-%m-%d,%H:%M:%S')
log = open(f"{site}/log", 'a')
log.write(f",{stamp},{site},miss\n")
log.close()
...
if __name__ == '__main__':
while True:
try:
client_list()
with concurrent.futures.ThreadPoolExecutor() as executor:
executor.map(logs, client_list())
...
I did try adding calculations for drift with this:
from datetime import datetime, timedelta
def logs(site):
first_called=datetime.now()
num_calls=1
drift=timedelta()
time_period=timedelta(seconds=5)
while 1:
time.sleep(n-drift.microseconds/1000000.0)
current_time = datetime.now()
num_calls += 1
difference = current_time - first_called
drift = difference - time_period* num_calls
if os.path.isfile(os.path.join(f'{site}/target/', 'hit')):
...
It ends up with duplicate entries in the log, and the process still drifts.
Is there a better way to schedule the function to run every x seconds and account for the drift in start times?
A:
Create a variable equal to the desired system time at the next interval. Increment that variable by 5 seconds each time through the loop. Calculate the sleep time so that the sleep will end at the desired time. The timings will not be perfect because sleep intervals are not super precise, but errors will not accumulate. Your logs function will look something like this:
def logs(site):
    next_time = time.time() + 5.0
    while 1:
        time.sleep(max(0.0, next_time - time.time()))
        next_time += 5.0
        if os.path.isfile(os.path.join(f'{site}/target/', 'hit')):
            ...  # do something that takes a while
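The same idea in a self-contained, runnable form (the function and interval names are illustrative, not from the original code): anchor every call to an absolute deadline so errors do not accumulate.

```python
import time

def run_every(period, func, iterations):
    # Sleep until an absolute deadline each pass instead of sleeping
    # a fixed amount after the work, so drift does not accumulate.
    next_time = time.monotonic() + period
    for _ in range(iterations):
        delay = next_time - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        next_time += period
        func()

ticks = []
run_every(0.05, lambda: ticks.append(time.monotonic()), 4)
print(len(ticks))  # 4 calls, spaced about 0.05 s apart
```

time.monotonic() is used here instead of time.time() so the schedule is immune to wall-clock adjustments.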
Q:
How to create a nested dictionary from a string list (Python)?
I'm trying to create a nested dictionary from a list of strings.
Each index of the strings corresponds to a key, and each character at that index is a value.
I have a list:
list = ['game', 'club', 'party', 'play']
I would like to create a (nested) dictionary:
dict = {0: {'g', 'c', 'p', 'p'}, 1: {'a', 'l', 'a', 'l'}, 2: {'m', 'u', 'r', 'a'}, etc.}
I was thinking something along the lines of:
res = {}
for item in range(len(list)):
for i in list[item]:
if i not in res:
# create a key (index - ex. '0') and a value (character - ex. 'g' of 'game')
else:
# put the value in the corresponding key (ex. 'c' of 'club')
print(res)
A:
Note: you cannot have sets with duplicate values. Instead, create a dictionary where the values are lists or tuples:
from itertools import zip_longest
lst = ["game", "club", "party", "play"]
out = {
i: [v for v in t if not v is None] for i, t in enumerate(zip_longest(*lst))
}
print(out)
Prints:
{
0: ["g", "c", "p", "p"],
1: ["a", "l", "a", "l"],
2: ["m", "u", "r", "a"],
3: ["e", "b", "t", "y"],
4: ["y"],
}
A:
Andrej's solution is surely the more elegant one.
But to stay closer to your proposed solution, you could do something like this:
items = ['game', 'club', 'party', 'play']
result = {}
for item in items:
    for idx, char in enumerate(item):
        if idx not in result:
            result[idx] = [char]
        else:
            result[idx].append(char)
print(result)
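For completeness, a sketch of the same loop using collections.defaultdict, which removes the membership check (the variable names are my own):

```python
from collections import defaultdict

words = ['game', 'club', 'party', 'play']
res = defaultdict(list)
for word in words:
    for idx, char in enumerate(word):
        # defaultdict creates the list the first time an index is seen
        res[idx].append(char)
print(dict(res))
# {0: ['g', 'c', 'p', 'p'], 1: ['a', 'l', 'a', 'l'], 2: ['m', 'u', 'r', 'a'],
#  3: ['e', 'b', 't', 'y'], 4: ['y']}
```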
Q:
How could I get a result for every column after comparing dataframes?
I have two CSV files with exactly the same number of rows and columns, containing only numerical values. I want to compare each column separately.
The idea would be to compare the column 1 values of file "a" to the column 1 values of file "b", check the difference row by row (there are 100 rows), and count in how many cases the values differ. E.g. if for column 1 there were 55 numbers that didn't match between file "a" and file "b", then I want to get back a value of 55 for column 1, and so on.
I would like to repeat the same for all the columns. I know it should be a double for loop, but I don't know exactly how.
Thanks in advance!
import pandas as pd
dk = pd.read_csv('C:/Users/D/1_top_a.csv', sep=',', header=None)
dk = dk.dropna(how='all')
dk = dk.dropna(how='all', axis=1)
print(dk)
dl = pd.read_csv('C:/Users/D/1_top_b.csv', sep=',', header=None)
dl = dl.dropna(how='all')
dl = dl.dropna(how='all', axis=1)
#print(dl)
rows=dk.shape[0]
print(rows)
for row in range(len(dl)):
for col in range(len(dl.columns)):
if dl.iloc[row, col] != dk.iloc[row, col]:
A:
I find the recordlinkage package very useful for comparing values from 2 datasets. You can define which columns to compare and it returns a 0 or 1 if they match. Next, you can filter for all matching values
https://recordlinkage.readthedocs.io/en/latest/about.html
Code looks like this:
import recordlinkage as rl
from recordlinkage.index import Block

# create pair of dataframes to compare
indexer = rl.Index()
indexer.add(Block('row_identifier1', 'row_identifier2'))
datasets = indexer.index(dataset1, dataset2)
# initialise class
comparer = rl.Compare()
# initialise similarity measurement algorithms
comparer.string('string_value1', 'string_value2', method='jarowinkler', threshold=0.95, label='string_matching')
comparer.exact('value3', 'value4', label='integer_matching')
# the method .compute() returns the DataFrame with the feature vectors.
results = comparer.compute(datasets, dataset1, dataset2)
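For the specific counting task in the question, a plain pandas sketch may already be enough (the tiny frames below are made up; this assumes both frames share the same shape and index):

```python
import pandas as pd

a = pd.DataFrame({0: [1, 2, 3], 1: [4, 5, 6]})
b = pd.DataFrame({0: [1, 9, 3], 1: [4, 5, 7]})

# elementwise comparison; summing the booleans counts mismatches per column
mismatches = (a != b).sum()
print(mismatches.to_dict())  # {0: 1, 1: 1}
```

This avoids the double loop entirely and returns one mismatch count per column.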
Q:
Django: How to display a pdf file in a new tab?
Most of the posts showing how to open a pdf file in a new tab are 3 years old. What is the best way in Django to open a pdf file uploaded to a model?
invoice.py
class Invoice(models.Model):
file = models.FileField(upload_to='estimates/', blank =True)
name = models.CharField(max_length=250, blank =True)
def __str__(self):
return self.name
views.py
def view_invoice(request, invoice_id):
invoice = get_object_or_404(Invoice, pk=invoice_id)
return render(request, 'index_invoice.html', {'invoice': invoice})
index_invoice.html
<a href=""><i class="fas fa-eye"></i> </a>
Many Thanks
A:
You should be able to use the FileField's url:
<a href="{{ invoice.file.url }}"><i class="fas fa-eye"></i> </a>
A:
you can use this
<a href="{{ invoice.file.url }}" target="_blank"><i class="fas fa-eye"></i> </a>
Q:
Can't click button in pop up UI Selenium
Hi, I am trying to click a button ("Klant aanpassen") within a pop-up. I have already tried a lot of options, including ActionChains, but I just can't get it to work. Right now this is my script:
driver.find_element_by_xpath('//*[@title="Acties"]').click()
time.sleep(2)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[class='_2f9OE _2nG1g yG7LA mekFH _2zshv _2enAb _2g-UE iyvDv _17-jo']"))).click()
The first line does the right thing and opens the pop-up UI.
The second line gives me the following error:
Warning (from warnings module):
File "C:/Users/Marnix Bolier/Desktop/TLinputter.py", line 52
driver.find_element_by_xpath('//*[@title="Acties"]').click()
DeprecationWarning: find_element_by_xpath is deprecated. Please use find_element(by=By.XPATH, value=xpath) instead
Traceback (most recent call last):
File "C:/Users/Marnix Bolier/Desktop/TLinputter.py", line 54, in <module>
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[class='_2f9OE _2nG1g yG7LA mekFH _2zshv _2enAb _2g-UE iyvDv _17-jo']"))).click()
File "C:\Users\Marnix Bolier\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\support\wait.py", line 89, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
I also tried the following line:
driver.find_element_by_xpath("//button[@class='_2f9OE _2nG1g yG7LA mekFH _2zshv _2enAb _2g-UE iyvDv _17-jo']").click()
That gives the following error:
line 247, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
Google Inspector (button highlighted in blue):
Can anyone tell me what I am doing wrong here? Thanks.
A:
Locators like this CSS selector button[class='_2f9OE _2nG1g yG7LA mekFH _2zshv _2enAb _2g-UE iyvDv _17-jo'] are problematic since they are based on too many class names. These class names may change dynamically per session and per page state.
This locator may also not be unique.
What we can try here is text based XPath like this:
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[contains(.,'Klant aanpassen')]"))).click()
This locator looks more stable and unique.
Q:
Get class labels from Keras functional model
I have a functional model in Keras (Resnet50 from repo examples). I trained it with ImageDataGenerator and flow_from_directory data and saved model to .h5 file. When I call model.predict I get an array of class probabilities. But I want to associate them with class labels (in my case - folder names). How can I get them? I found that I could use model.predict_classes and model.predict_proba, but I don't have these functions in Functional model, only in Sequential.
A:
y_prob = model.predict(x)
y_classes = y_prob.argmax(axis=-1)
As suggested here.
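A tiny illustration with made-up probabilities (the array below stands in for the output of model.predict):

```python
import numpy as np

# fake batch of softmax outputs standing in for model.predict(x)
y_prob = np.array([[0.1, 0.7, 0.2],
                   [0.6, 0.3, 0.1]])
y_classes = y_prob.argmax(axis=-1)
print(y_classes)  # [1 0]
```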
A:
When one uses flow_from_directory, the problem is how to interpret the probability outputs, i.e. how to map them to class labels, since the order in which flow_from_directory creates the one-hot vectors is not known in advance.
We can get a dictionary that maps the class labels to indices of the prediction vector when we use
generator= train_datagen.flow_from_directory("train", batch_size=batch_size)
label_map = (generator.class_indices)
The label_map variable is a dictionary like this
{'class_14': 5, 'class_10': 1, 'class_11': 2, 'class_12': 3, 'class_13': 4, 'class_2': 6, 'class_3': 7, 'class_1': 0, 'class_6': 10, 'class_7': 11, 'class_4': 8, 'class_5': 9, 'class_8': 12, 'class_9': 13}
Then from this the relation can be derived between the probability scores and class names.
Basically, you can create this dictionary by this code.
from glob import glob
class_names = glob("*") # Reads all the folders in which images are present
class_names = sorted(class_names) # Sorting them
name_id_map = dict(zip(class_names, range(len(class_names))))
The variable name_id_map in the above code also contains the same dictionary as the one obtained from class_indices function of flow_from_directory.
Hope this helps!
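To go from predicted indices back to names, the dictionary just needs to be inverted (the class names and predictions below are made up for illustration):

```python
# stands in for generator.class_indices / name_id_map from above
name_id_map = {'cat': 0, 'dog': 1, 'horse': 2}
id_name_map = {v: k for k, v in name_id_map.items()}

y_classes = [1, 0, 2]  # e.g. argmax over the prediction vectors
labels = [id_name_map[i] for i in y_classes]
print(labels)  # ['dog', 'cat', 'horse']
```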
A:
UPDATE: This is no longer valid for newer Keras versions. Please use argmax() as in the answer from Emilia Apostolova.
The functional API models have just the predict() function which for classification would return the class probabilities. You can then select the most probable classes using the probas_to_classes() utility function. Example:
y_proba = model.predict(x)
y_classes = keras.np_utils.probas_to_classes(y_proba)
This is equivalent to model.predict_classes(x) on the Sequential model.
The reason for this is that the functional API supports a more general class of tasks where predict_classes() would not make sense.
More info: https://github.com/fchollet/keras/issues/2524
A:
In addition to @Emilia Apostolova answer to get the ground truth labels, from
generator = train_datagen.flow_from_directory("train", batch_size=batch_size)
just call
y_true_labels = generator.classes
A:
You must use the labels index you have, here what I do for text classification:
# data labels = [1, 2, 1...]
labels_index = { "website" : 0, "money" : 1 ....}
# to feed model
label_categories = to_categorical(np.asarray(labels))
Then, for predictions:
texts = ["hello, rejoins moi sur skype", "bonjour comment ça va ?", "tu me donnes de l'argent"]
sequences = tokenizer.texts_to_sequences(texts)
data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
predictions = model.predict(data)
t = 0
for text in texts:
i = 0
print("Prediction for \"%s\": " % (text))
for label in labels_index:
print("\t%s ==> %f" % (label, predictions[t][i]))
i = i + 1
t = t + 1
This gives:
Prediction for "hello, rejoins moi sur skype":
website ==> 0.759483
money ==> 0.037091
under ==> 0.010587
camsite ==> 0.114436
email ==> 0.075975
abuse ==> 0.002428
Prediction for "bonjour comment ça va ?":
website ==> 0.433079
money ==> 0.084878
under ==> 0.048375
camsite ==> 0.036674
email ==> 0.369197
abuse ==> 0.027798
Prediction for "tu me donnes de l'argent":
website ==> 0.006223
money ==> 0.095308
under ==> 0.003586
camsite ==> 0.003115
email ==> 0.884112
abuse ==> 0.007655
A:
It is possible to save a "list" of labels in keras model directly. This way the user who uses the model for predictions and does not have any other sources of information can perform the lookup himself. Here is a dummy example of how one can perform an "injection" of labels
# assume we get labels as list
labels = ["cat","dog","horse","tomato"]
# here we start building our model with input image 299x299 and one output layer
import tensorflow as tf
from tensorflow.keras.layers import Input, Flatten, Dense, Lambda

xx = Input(shape=(299, 299, 3))
flat = Flatten()(xx)
output = Dense(4)(flat)
# here we perform injection of labels
tf_labels = tf.constant([labels], dtype="string")
tf_labels = tf.tile(tf_labels, [tf.shape(xx)[0], 1])
output_labels = Lambda(lambda x: tf_labels, name="label_injection")(xx)
# and finally creating the model
model = tf.keras.Model(xx, [output, output_labels])
When used for prediction, this model returns a tensor of scores and a tensor of string labels. A model like this can be saved to h5, in which case the file contains the labels. It can also be exported to saved_model and used for serving in the cloud.
A:
To map predicted classes and filenames using ImageDataGenerator, I use:
# Data generator and prediction
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
inputpath,
target_size=(150, 150),
batch_size=20,
class_mode='categorical',
shuffle=False)
pred = model.predict_generator(test_generator, steps=len(test_generator), verbose=0)
# Get classes by max element in np (as a list)
classes = list(np.argmax(pred, axis=1))
# Get filenames (set shuffle=false in generator is important)
filenames = test_generator.filenames
I can loop over predicted classes and the associated filename using:
for f in zip(classes, filenames):
...
Addendum:
The path in which the images are located inputpath needs to have a subdirectory in which the images are actually stored. The reason is that the generator looks for subdirectories. The generator will give a feedback during prediction:
Found 283 images belonging to 1 classes.
The 1 classes part refers to the one subdirectory (this comes from the generator and is unrelated to the actual prediction).
So when your inputpath is (for example) C:/images/, the actual images are located in C:/images/temp/.
A:
Just call this on your data generator:
label_map = train_generator.class_indices
print(label_map)
It will print (cats and dogs in my case):
{'cat': 0, 'dog': 1}
"Just do this magic on your data generator\ny_true_labels = train_generator. class_indices\ny_true_labels\n\nit will print (cat , dogs on my case)\n{'cat': 0, 'dog': 1}\n\n"
] |
[
89,
57,
17,
6,
3,
2,
1,
0
] |
[
"You can use:\nmodel.predict(x_test).argmax(axis=-1)\n\n"
] |
[
-1
] |
[
"keras",
"python"
] |
stackoverflow_0038971293_keras_python.txt
|
Q:
How to catch Error in telegram_send when there is no connection to the internet?
I'm trying to add a try-catch to my script that is supposed to notify me when my script is done executing using telegram_send(). So I ran the script with the internet connection off to see what error is raised by the function so I could catch it and add a small print() message to inform the user that the internet was out. What I got, however, is this:
Traceback (most recent call last):
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connection.py", line 140, in _new_conn
conn = connection.create_connection(
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\util\connection.py", line 60, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "C:\Python\Python38\lib\socket.py", line 914, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 614, in urlopen
httplib_response = self._make_request(conn, method, url,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 360, in _make_request
self._validate_conn(conn)
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 857, in _validate_conn
super(HTTPSConnectionPool, self)._validate_conn(conn)
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 289, in _validate_conn
conn.connect()
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connection.py", line 284, in connect
conn = self._new_conn()
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connection.py", line 149, in _new_conn
raise NewConnectionError(
telegram.vendor.ptb_urllib3.urllib3.exceptions.NewConnectionError: <telegram.vendor.ptb_urllib3.urllib3.connection.VerifiedHTTPSConnection object at 0x0000014144AA8100>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python\Python38\lib\site-packages\telegram\utils\request.py", line 259, in _request_wrapper
resp = self._con_pool.request(*args, **kwargs)
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\request.py", line 68, in request
return self.request_encode_body(method, url, fields=fields,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\request.py", line 148, in request_encode_body
return self.urlopen(method, url, **extra_kw)
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\poolmanager.py", line 244, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 691, in urlopen
return self.urlopen(method, url, body, headers, retries,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 691, in urlopen
return self.urlopen(method, url, body, headers, retries,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 691, in urlopen
return self.urlopen(method, url, body, headers, retries,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 665, in urlopen
retries = retries.increment(method, url, error=e, _pool=self,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\util\retry.py", line 376, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
telegram.vendor.ptb_urllib3.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.telegram.org', port=443): Max retries exceeded with url: /TOKEN/sendMessage (Caused by NewConnectionError('<telegram.vendor.ptb_urllib3.urllib3.connection.VerifiedHTTPSConnection object at 0x0000014144AA8100>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\chris\Documents\GitHub\QSAR-Versuch\ROC.py", line 212, in <module>
telegram_send.send(messages=['OI',time_string])
File "C:\Python\Python38\lib\site-packages\telegram_send\telegram_send.py", line 246, in send
message_ids += [send_message(m, parse_mode)["message_id"]]
File "C:\Python\Python38\lib\site-packages\telegram_send\telegram_send.py", line 228, in send_message
return bot.send_message(
File "C:\Python\Python38\lib\site-packages\telegram\bot.py", line 133, in decorator
result = func(*args, **kwargs)
File "C:\Python\Python38\lib\site-packages\telegram\bot.py", line 525, in send_message
return self._message( # type: ignore[return-value]
File "C:\Python\Python38\lib\site-packages\telegram\bot.py", line 339, in _message
result = self._post(endpoint, data, timeout=timeout, api_kwargs=api_kwargs)
File "C:\Python\Python38\lib\site-packages\telegram\bot.py", line 298, in _post
return self.request.post(
File "C:\Python\Python38\lib\site-packages\telegram\utils\request.py", line 361, in post
result = self._request_wrapper(
File "C:\Python\Python38\lib\site-packages\telegram\utils\request.py", line 265, in _request_wrapper
raise NetworkError(f'urllib3 HTTPError {error}') from error
telegram.error.NetworkError: urllib3 HTTPError HTTPSConnectionPool(host='api.telegram.org', port=443): Max retries exceeded with url: /bot5484246240:TOKEN/sendMessage (Caused by NewConnectionError('<telegram.vendor.ptb_urllib3.urllib3.connection.VerifiedHTTPSConnection object at 0x0000014144AA8100>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
Now call me crazy, but I find that traceback to be slightly confusing. I am used to having a simple builtin error raised by Python, so I'm at a loss what to do in this case, since it seems like there is a custom error type that I'd need to import from urllib or something and then make the try-catch with that. So, how do I fix this? Is there a hacky way to do it?
Not really useful, but here's a script example:
import telegram_send
try:
telegram_send.send(messages=['OI!'])
except SomeErrorIfInternetIsOut:
print('There was no internet connection.')
EDIT: I tried to import socket.gaierror and checking for that, but no change. The traceback stays exactly the same.
A:
If you try to catch the error without specifying the type, you can see the error type is telegram.error.NetworkError (which is the last one in your stack trace).
Then, if you want to write an except statement specific to this error, you can first import telegram.error as tg_error in your code and change your except statement to except tg_error.NetworkError as e:.
My answer comes a bit late, so I hope you found a solution or a workaround before; otherwise I hope this will be useful to others.
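As a runnable illustration of the pattern, here is a minimal sketch. It uses a hypothetical stand-in exception class so it works without python-telegram-bot installed; in real code you would instead do `from telegram.error import NetworkError`:

```python
# Hypothetical stand-in for telegram.error.NetworkError; real code would use
#   from telegram.error import NetworkError
class NetworkError(Exception):
    pass


def send_or_warn(send_func, messages):
    """Call send_func; report a friendly message if the network is down."""
    try:
        send_func(messages=messages)
        return True
    except NetworkError:
        print("There was no internet connection.")
        return False


def flaky_send(messages):
    # Simulates telegram_send.send failing with no connectivity
    raise NetworkError("getaddrinfo failed")


ok = send_or_warn(flaky_send, ["OI!"])  # prints the warning, returns False
```

In your script, the call site would simply be `send_or_warn(telegram_send.send, ['OI!'])` with the real `NetworkError` imported.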
|
How to catch Error in telegram_send when there is no connection to the internet?
|
I'm trying to add a try-catch to my script that is supposed to notify me when my script is done executing using telegram_send(). So I ran the script with the internet connection off to see what error is raised by the function so I could catch it and add a small print() message to inform the user that the internet was out. What I got, however, is this:
Traceback (most recent call last):
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connection.py", line 140, in _new_conn
conn = connection.create_connection(
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\util\connection.py", line 60, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "C:\Python\Python38\lib\socket.py", line 914, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 614, in urlopen
httplib_response = self._make_request(conn, method, url,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 360, in _make_request
self._validate_conn(conn)
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 857, in _validate_conn
super(HTTPSConnectionPool, self)._validate_conn(conn)
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 289, in _validate_conn
conn.connect()
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connection.py", line 284, in connect
conn = self._new_conn()
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connection.py", line 149, in _new_conn
raise NewConnectionError(
telegram.vendor.ptb_urllib3.urllib3.exceptions.NewConnectionError: <telegram.vendor.ptb_urllib3.urllib3.connection.VerifiedHTTPSConnection object at 0x0000014144AA8100>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python\Python38\lib\site-packages\telegram\utils\request.py", line 259, in _request_wrapper
resp = self._con_pool.request(*args, **kwargs)
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\request.py", line 68, in request
return self.request_encode_body(method, url, fields=fields,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\request.py", line 148, in request_encode_body
return self.urlopen(method, url, **extra_kw)
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\poolmanager.py", line 244, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 691, in urlopen
return self.urlopen(method, url, body, headers, retries,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 691, in urlopen
return self.urlopen(method, url, body, headers, retries,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 691, in urlopen
return self.urlopen(method, url, body, headers, retries,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\connectionpool.py", line 665, in urlopen
retries = retries.increment(method, url, error=e, _pool=self,
File "C:\Python\Python38\lib\site-packages\telegram\vendor\ptb_urllib3\urllib3\util\retry.py", line 376, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
telegram.vendor.ptb_urllib3.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.telegram.org', port=443): Max retries exceeded with url: /TOKEN/sendMessage (Caused by NewConnectionError('<telegram.vendor.ptb_urllib3.urllib3.connection.VerifiedHTTPSConnection object at 0x0000014144AA8100>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\chris\Documents\GitHub\QSAR-Versuch\ROC.py", line 212, in <module>
telegram_send.send(messages=['OI',time_string])
File "C:\Python\Python38\lib\site-packages\telegram_send\telegram_send.py", line 246, in send
message_ids += [send_message(m, parse_mode)["message_id"]]
File "C:\Python\Python38\lib\site-packages\telegram_send\telegram_send.py", line 228, in send_message
return bot.send_message(
File "C:\Python\Python38\lib\site-packages\telegram\bot.py", line 133, in decorator
result = func(*args, **kwargs)
File "C:\Python\Python38\lib\site-packages\telegram\bot.py", line 525, in send_message
return self._message( # type: ignore[return-value]
File "C:\Python\Python38\lib\site-packages\telegram\bot.py", line 339, in _message
result = self._post(endpoint, data, timeout=timeout, api_kwargs=api_kwargs)
File "C:\Python\Python38\lib\site-packages\telegram\bot.py", line 298, in _post
return self.request.post(
File "C:\Python\Python38\lib\site-packages\telegram\utils\request.py", line 361, in post
result = self._request_wrapper(
File "C:\Python\Python38\lib\site-packages\telegram\utils\request.py", line 265, in _request_wrapper
raise NetworkError(f'urllib3 HTTPError {error}') from error
telegram.error.NetworkError: urllib3 HTTPError HTTPSConnectionPool(host='api.telegram.org', port=443): Max retries exceeded with url: /bot5484246240:TOKEN/sendMessage (Caused by NewConnectionError('<telegram.vendor.ptb_urllib3.urllib3.connection.VerifiedHTTPSConnection object at 0x0000014144AA8100>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
Now call me crazy, but I find that traceback to be slightly confusing. I am used to having a simple builtin error raised by Python, so I'm at a loss what to do in this case, since it seems like there is a custom error type that I'd need to import from urllib or something and then make the try-catch with that. So, how do I fix this? Is there a hacky way to do it?
Not really useful, but here's a script example:
import telegram_send
try:
telegram_send.send(messages=['OI!'])
except SomeErrorIfInternetIsOut:
print('There was no internet connection.')
EDIT: I tried to import socket.gaierror and checking for that, but no change. The traceback stays exactly the same.
|
[
"If you try to catch the error without specifying the type, you can see the error type is telegram.error.NetworkError (which is the last one in your stack trace).\nThen, if you want to write an except statement specific to this error, you can first import telegram.error as tg_error in your code and change your except statement to except tg_error.NetworkError as e:).\nMy answer comes a bit late so I hope you found a solution or a workaround before, otherwise I hope this will be usefull to others.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_3.x",
"telegram"
] |
stackoverflow_0074055738_python_python_3.x_telegram.txt
|
Q:
Using Pillow and img2pdf to convert images to pdf
I have a task that requires me to get data from an image upload (jpg or png), resize it based on the requirement, and then transform it into pdf and then store in s3.
The file comes in as ByteIO
I have Pillow available so I can resize the image with it
Now the file type is class 'PIL.Image.Image' and I don't know how to proceed.
I found img2pdf library on https://gitlab.mister-muffin.de/josch/img2pdf, but I don't know how to use it when I have PIL format (use tobytes()?)
The s3 upload also looks like a file-like object, so I don't want to save it into a temp file before loading it again. Do I even need img2pdf in this case then?
How do I achieve this goal?
EDIT: I tried using tobytes() and upload to s3 directly. Upload was successful. However, when downloading to see the content, it shows an empty page. It seems like the file data is not written into the pdf file
EDIT 2: I actually went on to s3 and checked the stored file. When I download and open it, it says it cannot be opened
EDIT 3: I don't really have working code as I'm still experimenting what could work, but here's a gist
data = request.FILES['file'].file # where the data is
im = Image.open(data)
(width, height) = (im.width // 2, im.height // 2) # example action I wanna take with Pillow
data = im_resized.tobytes()
# potential step for using img2pdf here but I don't know how
# img2pdf.convert(data) # this fails because "ImageOpenError: cannot read input image (not jpeg2000). PIL: error reading image: cannot identify image file <_io.BytesIO..."
# img2pdf.convert(im_resized) # this also fails because "TypeError: Neither implements read() nor is str or bytes"
upload_to_s3(data) # some function that utilizes boto3 to upload to s3
A:
The problem is that you are passing an Image.Image object where img2pdf expects encoded image bytes (JPEG, PNG, or similar).
Try this:
import io

import img2pdf

bytes_io = io.BytesIO()
image.save(bytes_io, 'PNG')  # `image` is your resized PIL.Image.Image
with open(output_pdf, "wb") as f:
    f.write(img2pdf.convert(bytes_io.getvalue()))
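As an aside, since Pillow is already in use here, you may not need img2pdf at all: Pillow can write PDF directly to an in-memory buffer, which also avoids the temp file. A minimal sketch, using a generated image as a stand-in for the uploaded file:

```python
import io

from PIL import Image

# Small in-memory image standing in for the uploaded jpg/png
im = Image.new("RGB", (100, 80), color=(200, 30, 30))
im_resized = im.resize((im.width // 2, im.height // 2))

# Pillow writes PDF directly; RGBA images must be converted to RGB first
pdf_io = io.BytesIO()
im_resized.convert("RGB").save(pdf_io, format="PDF")

pdf_bytes = pdf_io.getvalue()   # bytes you can hand to boto3 / upload_to_s3
print(pdf_bytes[:5])            # a PDF file starts with b'%PDF-'
```

`pdf_io` (or `pdf_bytes`) can then be passed to the s3 upload as a file-like object or raw bytes.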
|
Using Pillow and img2pdf to convert images to pdf
|
I have a task that requires me to get data from an image upload (jpg or png), resize it based on the requirement, and then transform it into pdf and then store in s3.
The file comes in as ByteIO
I have Pillow available so I can resize the image with it
Now the file type is class 'PIL.Image.Image' and I don't know how to proceed.
I found img2pdf library on https://gitlab.mister-muffin.de/josch/img2pdf, but I don't know how to use it when I have PIL format (use tobytes()?)
The s3 upload also looks like a file-like object, so I don't want to save it into a temp file before loading it again. Do I even need img2pdf in this case then?
How do I achieve this goal?
EDIT: I tried using tobytes() and upload to s3 directly. Upload was successful. However, when downloading to see the content, it shows an empty page. It seems like the file data is not written into the pdf file
EDIT 2: I actually went on to s3 and checked the stored file. When I download and open it, it says it cannot be opened
EDIT 3: I don't really have working code as I'm still experimenting what could work, but here's a gist
data = request.FILES['file'].file # where the data is
im = Image.open(data)
(width, height) = (im.width // 2, im.height // 2) # example action I wanna take with Pillow
data = im_resized.tobytes()
# potential step for using img2pdf here but I don't know how
# img2pdf.convert(data) # this fails because "ImageOpenError: cannot read input image (not jpeg2000). PIL: error reading image: cannot identify image file <_io.BytesIO..."
# img2pdf.convert(im_resized) # this also fails because "TypeError: Neither implements read() nor is str or bytes"
upload_to_s3(data) # some function that utilizes boto3 to upload to s3
|
[
"The problem is that u use Image.Image object instead of JPEG or something like it\nTry this:\n\nbytes_io = io.BytesIO()\n\nimage.save(bytes_io, 'PNG')\n\nwith open(output_pdf, \"wb\") as f:\n f.write(img2pdf.convert(bytes_io.getvalue()))\n\n"
] |
[
0
] |
[] |
[] |
[
"img2pdf",
"python",
"python_imaging_library"
] |
stackoverflow_0058458485_img2pdf_python_python_imaging_library.txt
|
Q:
What is the proper way to close a connection in psycopg2 with a "with statement"?
I want to know what the proper way is to close a connection to a Postgres database using a with statement and psycopg2.
import pandas as pd
import psycopg2
def create_df_from_postgres(params: dict,
columns: str,
tablename: str,
) -> pd.DataFrame:
with psycopg2.connect(**params) as conn:
data_sql = pd.read_sql_query(
"SELECT " + columns + ", SUM(total)"
" AS total FROM " + str(tablename),
con=conn
)
# i need to close conection here:
# conn.close()
# or here:
conn.close()
return data_sql
Is this a better way to handle the connection?
def get_ci_method_and_date(params: dict,
columns: str,
tablename: str,
) -> pd.DataFrame:
try:
connection = psycopg2.connect(**params)
data_sql = pd.read_sql_query('SELECT ' + columns +
' FROM ' + str(tablename),
con=connection
)
finally:
if(connection):
connection.close()
return data_sql
From official psycopg docs
Warning Unlike file objects or other resources, exiting the connection’s with block doesn’t close the connection, but only the transaction associated to it. If you want to make sure the connection is closed after a certain point, you should still use a try-catch block:
conn = psycopg2.connect(DSN)
try:
# connection usage
finally:
conn.close()
A:
Proper way to close a connection:
From official psycopg docs:
Warning Unlike file objects or other resources, exiting the connection’s with
block doesn’t close the connection, but only the transaction associated to
it. If you want to make sure the connection is closed after a certain point, you
should still use a try-catch block:
conn = psycopg2.connect(DSN)
try:
# connection usage
finally:
conn.close()
A:
I thought the connection's context manager closes the connection, but according to the docs, it does not:
Connections can be used as context managers. Note that a context wraps a transaction: if the context exits with success the transaction is committed, if it exits with an exception the transaction is rolled back. Note that the connection is not closed by the context and it can be used for several contexts.
Proposed usage is:
conn = psycopg2.connect(DSN)
with conn:
with conn.cursor() as curs:
curs.execute(SQL1)
with conn:
with conn.cursor() as curs:
curs.execute(SQL2)
# leaving contexts doesn't close the connection
conn.close()
source: https://www.psycopg.org/docs/connection.html
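If you want with-statement ergonomics that do guarantee the connection is closed, contextlib.closing from the standard library is a common workaround: it calls .close() on exit regardless of exceptions. A minimal sketch using a stub connection class (a stand-in for psycopg2.connect(**params), so it runs without a database):

```python
from contextlib import closing


# Hypothetical stub standing in for a psycopg2 connection; real code would
# use: with closing(psycopg2.connect(**params)) as conn: ...
class FakeConnection:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


conn = FakeConnection()
with closing(conn):
    pass  # connection usage goes here

print(conn.closed)  # True: closing() called close() on exit
```

Note that closing() only guarantees close(); it does not commit or roll back the transaction the way the connection's own with block does, so you may want to combine both patterns.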
|
What is the proper way to close a connection in psycopg2 with a "with statement"?
|
I want to know what the proper way is to close a connection to a Postgres database using a with statement and psycopg2.
import pandas as pd
import psycopg2
def create_df_from_postgres(params: dict,
columns: str,
tablename: str,
) -> pd.DataFrame:
with psycopg2.connect(**params) as conn:
data_sql = pd.read_sql_query(
"SELECT " + columns + ", SUM(total)"
" AS total FROM " + str(tablename),
con=conn
)
# i need to close conection here:
# conn.close()
# or here:
conn.close()
return data_sql
Is this a better way to handle the connection?
def get_ci_method_and_date(params: dict,
columns: str,
tablename: str,
) -> pd.DataFrame:
try:
connection = psycopg2.connect(**params)
data_sql = pd.read_sql_query('SELECT ' + columns +
' FROM ' + str(tablename),
con=connection
)
finally:
if(connection):
connection.close()
return data_sql
From official psycopg docs
Warning Unlike file objects or other resources, exiting the connection’s with block doesn’t close the connection, but only the transaction associated to it. If you want to make sure the connection is closed after a certain point, you should still use a try-catch block:
conn = psycopg2.connect(DSN)
try:
# connection usage
finally:
conn.close()
|
[
"Proper way to close a connection: \nFrom official psycopg docs:\n\nWarning Unlike file objects or other resources, exiting the connection’s with\n block doesn’t close the connection, but only the transaction associated to\n it. If you want to make sure the connection is closed after a certain point, you\n should still use a try-catch block: \nconn = psycopg2.connect(DSN)\ntry:\n # connection usage\nfinally:\n conn.close()\n\n\n",
"I thought the Connection ContextManager closes the connection, but according to the docs, it does not:\n\nConnections can be used as context managers. Note that a context wraps a transaction: if the context exits with success the transaction is committed, if it exits with an exception the transaction is rolled back. Note that the connection is not closed by the context and it can be used for several contexts.\n\nProposed usage is:\nconn = psycopg2.connect(DSN)\n\nwith conn:\n with conn.cursor() as curs:\n curs.execute(SQL1)\n\nwith conn:\n with conn.cursor() as curs:\n curs.execute(SQL2)\n\n# leaving contexts doesn't close the connection\nconn.close()\n\nsource: https://www.psycopg.org/docs/connection.html\n"
] |
[
7,
0
] |
[
"The whole point of a with statement is that the resources are cleaned up automatically when it exits. So there is no need to call conn.close() explicitly at all.\n"
] |
[
-2
] |
[
"pandas",
"psycopg2",
"python"
] |
stackoverflow_0055334704_pandas_psycopg2_python.txt
|
Q:
Empty dataframe when importing CSV file
Importing data from CSV file doesn't work. Data:
df = pd.DataFrame({'Result1': [1552, 3954, 7495], 'Result2': [1552, 3950, 1559]},
index=['Customer1', 'Customer2', 'Customer3'])
I want to search for any customer who has a particular value in any column:
results_to_keep = ['155101', '1551011']
df2 = df[df.isin(results_to_keep)]
df3 = df2.dropna(axis=0, how='all')
print(df3)
It works for supplied data. But when I instead import from CSV file I get an empty dataframe. How to import a CSV file to the same format as data above?
EDIT: here's a cut & paste from the input file; I've truncated the number of results columns & rows for brevity
Customer Reference Profile Name Score Band Text Result1 Result2 Result3
038ff126-1ed5-4a96-bb34-3f4b595228d3 UK 1200 APPROVED 155261 155101 155151
87529660 Germany 1111 APPROVED 2289528 401126 401102
37a52968-8093-41e5-8a2e-6bd251d0666d UK 2200 APPROVED 155261 155101 155151
1.39E+08 Germany 1111 APPROVED 2283524 2283525 2282111
1d45f78b-01c5-4007-8f8c-a9fb845cba1f UK 1300 Fail 155261 155101 155151
a56b590b-b8bd-4e56-987e-f801a37e487d UK 1300 Fail 155261 155101 155151
1.39E+08 Germany 2221 APPROVED 2283525 2282111 2282100
A:
Failing to read a column as an index results in an empty DataFrame. Assuming data in CSV file like:
              Result 1  Result2
0  Customer1      1552     7495
1  Customer2      3954     3950
2  Customer3      7495     1559
The Customer column should be read as the index:
import pandas as pd
df=pd.read_csv("test.csv", index_col=0)
results_to_keep = ['1552', '1551']
df2 = df[df.isin(results_to_keep)]
df3=df2.dropna(axis=0, how='all')
print(df3)
Result:
           Result 1  Result2
Customer1      1552     1552
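Another frequent cause of an all-NaN (effectively empty) result with read_csv is a dtype mismatch: read_csv parses the result columns as integers, so string targets like '1552' in results_to_keep never match. A self-contained sketch (reading the CSV from an in-memory string instead of a file):

```python
import io

import pandas as pd

csv_text = (
    "Customer,Result1,Result2\n"
    "Customer1,1552,1552\n"
    "Customer2,3954,3950\n"
    "Customer3,7495,1559\n"
)

# index_col=0 makes the customer names the index, matching the original df
df = pd.read_csv(io.StringIO(csv_text), index_col=0)

# Note: ints, not strings -- read_csv parsed the columns as int64
results_to_keep = [1552, 1551]
df3 = df[df.isin(results_to_keep)].dropna(axis=0, how="all")
print(df3.index.tolist())  # ['Customer1']
```

If the file mixes numeric-looking and non-numeric values, you could instead pass `dtype=str` to read_csv and keep the string targets.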
|
Empty dataframe when importing CSV file
|
Importing data from CSV file doesn't work. Data:
df = pd.DataFrame({'Result1': [1552, 3954, 7495], 'Result2': [1552, 3950, 1559]},
index=['Customer1', 'Customer2', 'Customer3'])
I want to search for any customer who has a particular value in any column:
results_to_keep = ['155101', '1551011']
df2 = df[df.isin(results_to_keep)]
df3 = df2.dropna(axis=0, how='all')
print(df3)
It works for supplied data. But when I instead import from CSV file I get an empty dataframe. How to import a CSV file to the same format as data above?
EDIT: here's a cut & paste from the input file; I've truncated the number of results columns & rows for brevity
Customer Reference Profile Name Score Band Text Result1 Result2 Result3
038ff126-1ed5-4a96-bb34-3f4b595228d3 UK 1200 APPROVED 155261 155101 155151
87529660 Germany 1111 APPROVED 2289528 401126 401102
37a52968-8093-41e5-8a2e-6bd251d0666d UK 2200 APPROVED 155261 155101 155151
1.39E+08 Germany 1111 APPROVED 2283524 2283525 2282111
1d45f78b-01c5-4007-8f8c-a9fb845cba1f UK 1300 Fail 155261 155101 155151
a56b590b-b8bd-4e56-987e-f801a37e487d UK 1300 Fail 155261 155101 155151
1.39E+08 Germany 2221 APPROVED 2283525 2282111 2282100
|
[
"Failing to read a column as an index results in an empty DataFrame. Assuming data in CSV file like:\n\n\n\n\n\n\nResult 1\nResult2\n\n\n\n\n0\nCustomer1\n1552\n7495\n\n\n1\nCustomer2\n3954\n3950\n\n\n2\nCustomer3\n7495\n1559\n\n\n\n\nThe Customer columns should be read as an index:\nimport pandas as pd\n\ndf=pd.read_csv(\"test.csv\", index_col=0)\n\nresults_to_keep = ['1552', '1551']\ndf2 = df[df.isin(results_to_keep)]\ndf3=df2.dropna(axis=0, how='all')\n\nprint(df3)\n\nResult:\n\n\n\n\n\nResult 1\nResult2\n\n\n\n\nCustomer1\n1552\n1552\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"csv",
"dataframe",
"import",
"python"
] |
stackoverflow_0074474096_csv_dataframe_import_python.txt
|
Q:
Is it possible to change the seed of a random generator in NumPy?
Say I instantiated a random generator with
import numpy as np
rng = np.random.default_rng(seed=42)
and I want to change its seed. Is it possible to update the seed of the generator instead of instantiating a new generator with the new seed?
I managed to find that you can see the state of the generator with rng.__getstate__(), for example in this case it is
{'bit_generator': 'PCG64',
'state': {'state': 274674114334540486603088602300644985544,
'inc': 332724090758049132448979897138935081983},
'has_uint32': 0,
'uinteger': 0}
and you can change it with rng.__setstate__ with arguments as printed above, but it is not clear to me how to set those arguments so that to have the initial state of the rng given a different seed. Is it possible to do so or instantiating a new generator is the only way?
A:
A Numpy call like default_rng() gives you a Generator with an implicitly created BitGenerator. The difference between these is that a BitGenerator is the low-level method that just knows how to generate uniform uint32s, uint64s, and doubles. The Generator can then take these and turn them into other distributions.
As you noticed you can use __getstate__ but this is designed for the pickle module and not really for what you're using it for.
You're better off accessing the bit_generator directly. Which means you don't need to use any dunder methods.
The following code still uses default_rng but this means the BitGenerator could change in the future so I need a call to type to reseed. You'd probably be better off following the second example which uses an explicit BitGenerator.
import numpy as np
seed = 42
rng = np.random.default_rng()
# get the BitGenerator used by default_rng
BitGen = type(rng.bit_generator)
# use the state from a fresh bit generator
rng.bit_generator.state = BitGen(seed).state
# generate a random float
print(rng.random())
outputs 0.7739560485559633. If you're happy fixing the BitGenerator you can avoid the call to type, e.g.:
rng = np.random.Generator(np.random.PCG64())
rng.bit_generator.state = np.random.PCG64(seed).state
rng.random()
which outputs the same value.
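As a sanity check (my addition, not part of the answer above), the reseeding trick can be verified against a freshly seeded generator:

```python
import numpy as np

seed = 42
# reference stream from a freshly seeded generator
reference = np.random.default_rng(seed).random(5)

# reuse one generator object, resetting its state instead of replacing it
rng = np.random.Generator(np.random.PCG64())
rng.bit_generator.state = np.random.PCG64(seed).state
resumed = rng.random(5)

# resetting the state again replays the exact same stream
rng.bit_generator.state = np.random.PCG64(seed).state
replayed = rng.random(5)
```

Both `resumed` and `replayed` match `reference` element for element, which is exactly the "reseed without re-instantiating" behavior the question asks for.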
A:
N.B. The other answer (https://stackoverflow.com/a/74474377/2954547) is better. Use that one, not this one.
This is maybe a silly hack, but one solution is to create a new RNG instance using the desired new seed, then replace the state of the existing RNG instance with the state of the new instance:
import numpy as np
seed = 12345
rng = np.random.default_rng(seed)
x1 = rng.normal(size=10)
rng.__setstate__(np.random.default_rng(seed).__getstate__())
x2 = rng.normal(size=10)
np.testing.assert_array_equal(x1, x2)
However this isn't much different from just replacing the RNG instance.
Edit: To answer the question more directly, I don't think it's possible to replace the seed without constructing a new Generator or BitGenerator, unless you know how to correctly construct the state data for the particular BitGenerator inside your Generator. Creating a new RNG is fairly cheap, and while I understand the conceptual appeal of not instantiating a new one, the only alternative here is to post a feature request on the Numpy issue tracker or mailing list.
|
Is it possible to change the seed of a random generator in NumPy?
|
Say I instantiated a random generator with
import numpy as np
rng = np.random.default_rng(seed=42)
and I want to change its seed. Is it possible to update the seed of the generator instead of instantiating a new generator with the new seed?
I managed to find that you can see the state of the generator with rng.__getstate__(), for example in this case it is
{'bit_generator': 'PCG64',
'state': {'state': 274674114334540486603088602300644985544,
'inc': 332724090758049132448979897138935081983},
'has_uint32': 0,
'uinteger': 0}
and you can change it with rng.__setstate__ with arguments as printed above, but it is not clear to me how to set those arguments so that to have the initial state of the rng given a different seed. Is it possible to do so or instantiating a new generator is the only way?
|
[
"A Numpy call like default_rng() gives you a Generator with an implicitly created BitGenerator. The difference between these is that a BitGenerator is the low-level method that just knows how to generate uniform uint32s, uint64s, and doubles. The Generator can then take these and turn them into other distributions.\nAs you noticed you can use __getstate__ but this is designed for the pickle module and not really for what you're using it for.\nYou're better off accessing the bit_generator directly. Which means you don't need to use any dunder methods.\nThe following code still uses default_rng but this means the BitGenerator could change in the future so I need a call to type to reseed. You'd probably be better off following the second example which uses an explicit BitGenerator.\nimport numpy as np\n\nseed = 42\n\nrng = np.random.default_rng()\n\n# get the BitGenerator used by default_rng\nBitGen = type(rng.bit_generator)\n\n# use the state from a fresh bit generator\nrng.bit_generator.state = BitGen(seed).state\n\n# generate a random float\nprint(rng.random())\n\noutputs 0.7739560485559633. If you're happy fixing the BitGenerator you can avoid the call to type, e.g.:\nrng = np.random.Generator(np.random.PCG64())\n\nrng.bit_generator.state = np.random.PCG64(seed).state\n\nrng.random()\n\nwhich outputs the same value.\n",
"N.B. The other answer (https://stackoverflow.com/a/74474377/2954547) is better. Use that one, not this one.\n\nThis is maybe a silly hack, but one solution is to create a new RNG instance using the desired new seed, then replace the state of the existing RNG instance with the state of the new instance:\nimport numpy as np\n\nseed = 12345\n\nrng = np.random.default_rng(seed)\nx1 = rng.normal(size=10)\n\nrng.__setstate__(np.random.default_rng(seed).__getstate__())\nx2 = rng.normal(size=10)\n\nnp.testing.assert_array_equal(x1, x2)\n\nHowever this isn't much different from just replacing the RNG instance.\nEdit: To answer the question more directly, I don't think it's possible to replace the seed without constructing a new Generator or BitGenerator, unless you know how to correctly construct the state data for the particular BitGenerator inside your Generator. Creating a new RNG is fairly cheap, and while I understand the conceptual appeal of not instantiating a new one, the only alternative here is to post a feature request on the Numpy issue tracker or mailing list.\n"
] |
[
3,
1
] |
[] |
[] |
[
"numpy",
"python",
"python_3.x",
"random",
"random_seed"
] |
stackoverflow_0074469039_numpy_python_python_3.x_random_random_seed.txt
|
Q:
SQLAlchemy: How to order query results (order_by) on a relationship's field?
Models
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, ForeignKey
from sqlalchemy import Integer
from sqlalchemy import Unicode
from sqlalchemy import TIMESTAMP
from sqlalchemy.orm import relationship
BaseModel = declarative_base()
class Base(BaseModel):
__tablename__ = 'base'
id = Column(Integer, primary_key=True)
location = Column(Unicode(12), ForeignKey("locationterrain.location"), unique=True,)
name = Column(Unicode(45))
ownerid = Column(Integer,ForeignKey("player.id"))
occupierid = Column(Integer, ForeignKey("player.id"))
submitid = Column(Integer,ForeignKey("player.id"))
updateid = Column(Integer,ForeignKey("player.id"))
owner = relationship("Player",
primaryjoin='Base.ownerid==Player.id',
join_depth=3,
lazy='joined')
occupier= relationship("Player",
primaryjoin='Base.occupierid==Player.id',
join_depth=3,
lazy='joined')
submitter = relationship("Player",
primaryjoin='Base.submitid== Player.id',
join_depth=3,
lazy='joined')
updater= relationship("Player",
primaryjoin='Base.updateid== Player.id',
join_depth=3,
lazy='joined')
class Player(BaseModel):
__tablename__ = 'player'
id = Column(Integer, ForeignKey("guildmember.playerid"), primary_key=True)
name = Column(Unicode(45))
Searching
bases = dbsession.query(Base)
bases = bases.order_by(Base.owner.name)
This doesn't work .... I've searched everywhere and read the documentation.
But I just don't see how I can sort my (Base) query on their 'owner' relationship's name.
It always results in:
AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object has an attribute 'name'
This must be easy... but I don't see it. Also looked into Comparators, which seemed logical, but I don't see where the query part for the ORDER BY is generated or what I should be returning since everything is generated dynamically. And making a comparator for each of my 'player' relationships to do a simple thing seems over complicated.
A:
SQLAlchemy wants you to think in terms of SQL. If you do a query for "Base", that's:
SELECT * FROM base
easy. So how, in SQL, would you select the rows from "base" and order by the "name" column in a totally different table, that is, "player"? You use a join:
SELECT base.* FROM base JOIN player ON base.ownerid=player.id ORDER BY player.name
SQLAlchemy has you use the identical thought process - you join():
session.query(Base).join(Base.owner).order_by(Player.name)
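The join-then-order shape of that query can be seen in raw SQL too; a stdlib `sqlite3` sketch (with made-up minimal tables, not the asker's full schema) shows the statement SQLAlchemy is building:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE player (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE base (id INTEGER PRIMARY KEY, name TEXT,
                       ownerid INTEGER REFERENCES player(id));
    INSERT INTO player VALUES (1, 'zoe'), (2, 'adam');
    INSERT INTO base VALUES (10, 'fort', 1), (11, 'camp', 2);
""")

# rows from base, ordered by the owner's name in the joined player table
rows = conn.execute(
    "SELECT base.name FROM base "
    "JOIN player ON base.ownerid = player.id "
    "ORDER BY player.name"
).fetchall()
# adam's base ('camp') sorts before zoe's base ('fort')
```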
A:
Use order_by argument
note here order_by argumnent in relationship from sqlalchemy.orm
Please consider parent children relationship here as many-to-many.
class Parent(BaseModel):
__tablename__ = "parent"
name = Column(String(100), nullable=False)
children = relationship(
'Children',
secondary=chile_parent_mapping,
backref=backref(
'parents',
),
order_by="Parent.order_id",
)
|
SQLAlchemy: How to order query results (order_by) on a relationship's field?
|
Models
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, ForeignKey
from sqlalchemy import Integer
from sqlalchemy import Unicode
from sqlalchemy import TIMESTAMP
from sqlalchemy.orm import relationship
BaseModel = declarative_base()
class Base(BaseModel):
__tablename__ = 'base'
id = Column(Integer, primary_key=True)
location = Column(Unicode(12), ForeignKey("locationterrain.location"), unique=True,)
name = Column(Unicode(45))
ownerid = Column(Integer,ForeignKey("player.id"))
occupierid = Column(Integer, ForeignKey("player.id"))
submitid = Column(Integer,ForeignKey("player.id"))
updateid = Column(Integer,ForeignKey("player.id"))
owner = relationship("Player",
primaryjoin='Base.ownerid==Player.id',
join_depth=3,
lazy='joined')
occupier= relationship("Player",
primaryjoin='Base.occupierid==Player.id',
join_depth=3,
lazy='joined')
submitter = relationship("Player",
primaryjoin='Base.submitid== Player.id',
join_depth=3,
lazy='joined')
updater= relationship("Player",
primaryjoin='Base.updateid== Player.id',
join_depth=3,
lazy='joined')
class Player(BaseModel):
__tablename__ = 'player'
id = Column(Integer, ForeignKey("guildmember.playerid"), primary_key=True)
name = Column(Unicode(45))
Searching
bases = dbsession.query(Base)
bases = bases.order_by(Base.owner.name)
This doesn't work .... I've searched everywhere and read the documentation.
But I just don't see how I can sort my (Base) query on their 'owner' relationship's name.
It always results in:
AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object has an attribute 'name'
This must be easy... but I don't see it. Also looked into Comparators, which seemed logical, but I don't see where the query part for the ORDER BY is generated or what I should be returning since everything is generated dynamically. And making a comparator for each of my 'player' relationships to do a simple thing seems over complicated.
|
[
"SQLAlchemy wants you to think in terms of SQL. If you do a query for \"Base\", that's:\nSELECT * FROM base\n\neasy. So how, in SQL, would you select the rows from \"base\" and order by the \"name\" column in a totally different table, that is, \"player\"? You use a join:\nSELECT base.* FROM base JOIN player ON base.ownerid=player.id ORDER BY player.name\n\nSQLAlchemy has you use the identical thought process - you join():\nsession.query(Base).join(Base.owner).order_by(Player.name)\n\n",
"Use order_by argument\nnote here order_by argumnent in relationship from sqlalchemy.orm\nPlease consider parent children relationship here as many-to-many.\nclass Parent(BaseModel):\n __tablename__ = \"parent\"\n\n name = Column(String(100), nullable=False)\n \n children = relationship(\n 'Children',\n secondary=chile_parent_mapping,\n backref=backref(\n 'parents',\n ),\n order_by=\"Parent.order_id\",\n )\n\n"
] |
[
39,
0
] |
[] |
[] |
[
"field",
"python",
"relationship",
"sql_order_by",
"sqlalchemy"
] |
stackoverflow_0009861990_field_python_relationship_sql_order_by_sqlalchemy.txt
|
Q:
Differentiable round function in Tensorflow?
So the output of my network is a list of probabilities, which I then round using tf.round() to be either 0 or 1, this is crucial for this project.
I then found out that tf.round isn't differentiable so I'm kinda lost there.. :/
A:
Something along the lines of x - sin(2pi x)/(2pi)?
I'm sure there's a way to squish the slope to be a bit steeper.
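A quick NumPy check of this smooth approximation (sample values chosen arbitrarily, for illustration only):

```python
import numpy as np

x = np.array([0.1, 0.9, 1.4])
# smooth surrogate for round(): each value is pulled toward its nearest integer
approx = x - np.sin(2 * np.pi * x) / (2 * np.pi)
# roughly [0.006, 0.994, 1.306]
```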
A:
You can use the fact that tf.maximum() and tf.minimum() are differentiable, and the inputs are probabilities from 0 to 1
# round numbers less than 0.5 to zero;
# by making them negative and taking the maximum with 0
differentiable_round = tf.maximum(x-0.499,0)
# scale the remaining numbers (0 to 0.5) to greater than 1
# the other half (zeros) is not affected by multiplication
differentiable_round = differentiable_round * 10000
# take the minimum with 1
differentiable_round = tf.minimum(differentiable_round, 1)
Example:
[0.1, 0.5, 0.7]
[-0.0989, 0.001, 0.20099] # x - 0.499
[0, 0.001, 0.20099] # max(x-0.499, 0)
[0, 10, 2009.9] # max(x-0.499, 0) * 10000
[0, 1.0, 1.0] # min(max(x-0.499, 0) * 10000, 1)
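The clamp-and-scale arithmetic can be verified outside TensorFlow, since `tf.maximum`/`tf.minimum` act elementwise like their NumPy counterparts (this is a sketch of the same math, not the original TF code):

```python
import numpy as np

x = np.array([0.1, 0.5, 0.7])
# negative-shift, clamp at 0, blow up the survivors, cap at 1
rounded = np.minimum(np.maximum(x - 0.499, 0) * 10000, 1)
# matches the worked example: [0.0, 1.0, 1.0]
```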
A:
This works for me:
x_rounded_NOT_differentiable = tf.round(x)
x_rounded_differentiable = x - tf.stop_gradient(x - x_rounded_NOT_differentiable)
A:
Rounding is a fundamentally nondifferentiable function, so you're out of luck there. The normal procedure for this kind of situation is to find a way to either use the probabilities, say by using them to calculate an expected value, or by taking the maximum probability that is output and choose that one as the network's prediction. If you aren't using the output for calculating your loss function though, you can go ahead and just apply it to the result and it doesn't matter if it's differentiable. Now, if you want an informative loss function for the purpose of training the network, maybe you should consider whether keeping the output in the format of probabilities might actually be to your advantage (it will likely make your training process smoother)- that way you can just convert the probabilities to actual estimates outside of the network, after training.
A:
Building on a previous answer, a way to get an arbitrarily good approximation is to approximate round() using a finite Fourier approximation and use as many terms as you need. Fundamentally, you can think of round(x) as adding a reverse (i. e. descending) sawtooth wave to x. So, using the Fourier expansion of the sawtooth wave we get
With N = 5, we get a pretty nice approximation:
A:
Kind of an old question, but I just solved this problem for TensorFlow 2.0. I am using the following round function in my audio auto-encoder project. I basically want to create a discrete representation of sound which is compressed in time. I use the round function to clamp the output of the encoder to integer values. It has been working well for me so far.
@tf.custom_gradient
def round_with_gradients(x):
def grad(dy):
return dy
return tf.round(x), grad
A:
In range 0 1, translating and scaling a sigmoid can be a solution:
slope = 1000
center = 0.5
e = tf.exp(slope*(x-center))
round_diff = e/(e+1)
A:
In tensorflow 2.10, there is a function called soft_round which achieves exactly this.
Fortunately, for those who are using lower versions, the source code is really simple, so I just copy-pasted those lines, and it works like a charm:
def soft_round(x, alpha, eps=1e-3):
"""Differentiable approximation to `round`.
Larger alphas correspond to closer approximations of the round function.
If alpha is close to zero, this function reduces to the identity.
This is described in Sec. 4.1. in the paper
> "Universally Quantized Neural Compression"<br />
> Eirikur Agustsson & Lucas Theis<br />
> https://arxiv.org/abs/2006.09952
Args:
x: `tf.Tensor`. Inputs to the rounding function.
alpha: Float or `tf.Tensor`. Controls smoothness of the approximation.
eps: Float. Threshold below which `soft_round` will return identity.
Returns:
`tf.Tensor`
"""
# This guards the gradient of tf.where below against NaNs, while maintaining
# correctness, as for alpha < eps the result is ignored.
alpha_bounded = tf.maximum(alpha, eps)
m = tf.floor(x) + .5
r = x - m
z = tf.tanh(alpha_bounded / 2.) * 2.
y = m + tf.tanh(alpha_bounded * r) / z
# For very low alphas, soft_round behaves like identity
return tf.where(alpha < eps, x, y, name="soft_round")
alpha sets how soft the function is. Greater values lead to better approximations of the round function, but then it becomes harder to fit since gradients vanish:
x = tf.convert_to_tensor(np.arange(-2,2,.1).astype(np.float32))
for alpha in [ 3., 7., 15.]:
y = soft_round(x, alpha)
plt.plot(x.numpy(), y.numpy(), label=f'alpha={alpha}')
plt.legend()
plt.title('Soft round function for different alphas')
plt.grid()
In my case, I tried different values for alpha, and 3. looks like a good choice.
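For readers without TensorFlow 2.10 at hand, the same formula can be transcribed to NumPy (my transcription of the snippet above, not part of the TF source):

```python
import numpy as np

def soft_round_np(x, alpha, eps=1e-3):
    """NumPy transcription of the soft_round formula above."""
    x = np.asarray(x, dtype=float)
    alpha_bounded = max(alpha, eps)
    m = np.floor(x) + 0.5
    r = x - m
    z = np.tanh(alpha_bounded / 2.0) * 2.0
    y = m + np.tanh(alpha_bounded * r) / z
    # for very low alphas, behave like the identity
    return np.where(alpha < eps, x, y)

# large alpha approaches true rounding; tiny alpha is the identity
hard = soft_round_np([0.7, 1.2], alpha=50.0)
soft = soft_round_np([0.7, 1.2], alpha=1e-9)
```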
|
Differentiable round function in Tensorflow?
|
So the output of my network is a list of probabilities, which I then round using tf.round() to be either 0 or 1, this is crucial for this project.
I then found out that tf.round isn't differentiable so I'm kinda lost there.. :/
|
[
"Something along the lines of x - sin(2pi x)/(2pi)?\nI'm sure there's a way to squish the slope to be a bit steeper.\n\n",
"You can use the fact that tf.maximum() and tf.minimum() are differentiable, and the inputs are probabilities from 0 to 1\n# round numbers less than 0.5 to zero;\n# by making them negative and taking the maximum with 0\ndifferentiable_round = tf.maximum(x-0.499,0)\n# scale the remaining numbers (0 to 0.5) to greater than 1\n# the other half (zeros) is not affected by multiplication\ndifferentiable_round = differentiable_round * 10000\n# take the minimum with 1\ndifferentiable_round = tf.minimum(differentiable_round, 1)\n\nExample:\n[0.1, 0.5, 0.7]\n[-0.0989, 0.001, 0.20099] # x - 0.499\n[0, 0.001, 0.20099] # max(x-0.499, 0)\n[0, 10, 2009.9] # max(x-0.499, 0) * 10000\n[0, 1.0, 1.0] # min(max(x-0.499, 0) * 10000, 1)\n\n",
"This works for me:\nx_rounded_NOT_differentiable = tf.round(x)\nx_rounded_differentiable = x - tf.stop_gradient(x - x_rounded_NOT_differentiable)\n\n",
"Rounding is a fundamentally nondifferentiable function, so you're out of luck there. The normal procedure for this kind of situation is to find a way to either use the probabilities, say by using them to calculate an expected value, or by taking the maximum probability that is output and choose that one as the network's prediction. If you aren't using the output for calculating your loss function though, you can go ahead and just apply it to the result and it doesn't matter if it's differentiable. Now, if you want an informative loss function for the purpose of training the network, maybe you should consider whether keeping the output in the format of probabilities might actually be to your advantage (it will likely make your training process smoother)- that way you can just convert the probabilities to actual estimates outside of the network, after training.\n",
"Building on a previous answer, a way to get an arbitrarily good approximation is to approximate round() using a finite Fourier approximation and use as many terms as you need. Fundamentally, you can think of round(x) as adding a reverse (i. e. descending) sawtooth wave to x. So, using the Fourier expansion of the sawtooth wave we get\n\nWith N = 5, we get a pretty nice approximation:\n",
"Kind of an old question, but I just solved this problem for TensorFlow 2.0. I am using the following round function on in my audio auto-encoder project. I basically want to create a discrete representation of sound which is compressed in time. I use the round function to clamp the output of the encoder to integer values. It has been working well for me so far.\n@tf.custom_gradient\ndef round_with_gradients(x):\n def grad(dy):\n return dy\n return tf.round(x), grad\n\n",
"In range 0 1, translating and scaling a sigmoid can be a solution:\n slope = 1000\n center = 0.5\n e = tf.exp(slope*(x-center))\n round_diff = e/(e+1)\n\n",
"In tensorflow 2.10, there is a function called soft_round which achieves exactly this.\nFortunately, for those who are using lower versions, the source code is really simple, so I just copy-pasted those lines, and it works like a charm:\ndef soft_round(x, alpha, eps=1e-3):\n \"\"\"Differentiable approximation to `round`.\n\n Larger alphas correspond to closer approximations of the round function.\n If alpha is close to zero, this function reduces to the identity.\n\n This is described in Sec. 4.1. in the paper\n > \"Universally Quantized Neural Compression\"<br />\n > Eirikur Agustsson & Lucas Theis<br />\n > https://arxiv.org/abs/2006.09952\n\n Args:\n x: `tf.Tensor`. Inputs to the rounding function.\n alpha: Float or `tf.Tensor`. Controls smoothness of the approximation.\n eps: Float. Threshold below which `soft_round` will return identity.\n\n Returns:\n `tf.Tensor`\n \"\"\"\n # This guards the gradient of tf.where below against NaNs, while maintaining\n # correctness, as for alpha < eps the result is ignored.\n alpha_bounded = tf.maximum(alpha, eps)\n\n\n m = tf.floor(x) + .5\n r = x - m\n z = tf.tanh(alpha_bounded / 2.) * 2.\n y = m + tf.tanh(alpha_bounded * r) / z\n\n\n # For very low alphas, soft_round behaves like identity\n return tf.where(alpha < eps, x, y, name=\"soft_round\")\n\nalpha sets how soft the function is. Greater values leads to better approximations of round function, but then it becomes harder to fit since gradients vanish:\nx = tf.convert_to_tensor(np.arange(-2,2,.1).astype(np.float32))\n\nfor alpha in [ 3., 7., 15.]:\n\n y = soft_round(x, alpha)\n plt.plot(x.numpy(), y.numpy(), label=f'alpha={alpha}')\n\nplt.legend()\nplt.title('Soft round function for different alphas')\nplt.grid()\n\nIn my case, I tried different values for alpha, and 3. looks like a good choice.\n\n"
] |
[
20,
11,
7,
4,
2,
2,
0,
0
] |
[] |
[] |
[
"python",
"tensorflow"
] |
stackoverflow_0046596636_python_tensorflow.txt
|
Q:
I made a code with Biopython but it does not work every time. What is wrong with my code?
I have a FASTA file which contains sequences classified in an order from 1 (the first sequence: from > to *) to n (the last). The content is as follows:
>TRINITY_GG_10000_c0_g1_i1.p2 TRINITY_GG_10000_c0_g1~~TRINITY_GG_10000_c0_g1_i1.p2 ORF type:complete len:381 (+),score=55.64 TRINITY_GG_10000_c0_g1_i1:244-1386(+)
MNSFLSIRKRTSLATASKTRQLNWKPAKVSIRVTSNDKKLPVTQADVARKETSKHVSMLE
TTPKLKKSFIFMAGRVVRVMIGSFLVLFALLHMGILHTLSPAVKKGLGNFSSRTWQAAEQ
IFTGKWEDHEATATAFEHGF*
>TRINITY_GG_10000_c0_g1_i1.p1 TRINITY_GG_10000_c0_g1~~TRINITY_GG_10000_c0_g1_i1.p1 ORF type:5prime_partial len:1567 (-),score=319.89 TRINITY_GG_10000_c0_g1_i1:1694-6394(-)
SPNAVQQVPVQSPNAVQQVPVQSPNAVQQVPVQSARAIQQVPNQNPNAVQQWTRHPGAMQ
QPVQDSRAIQQQQQNNSSVQAQPQATGHHARQVDESTTRSGPEVPVSSQQGHTNAPSDV*
>TRINITY_GG_10000_c0_g1_i1.p........
And I have another text file containing numbers corresponding to some sequence classification in the first FASTA file, the content is like this:
10140
10178
11626
12110
12119
n
I tried to create a program that allows me to extract the sequences from the FASTA file that correspond to the number contained in the text file, my program doesn't work well. The extracted sequences do not correspond to the number of sequences desired and numbered in the text file. What is wrong with my program?
import sys
fasta_name = sys.argv[1]
nums_name = sys.argv[2]
out_name = sys.argv[3]
from Bio import SeqIO
fasta_sequences = list(SeqIO.parse(fasta_name, "fasta"))
nums_file = open(nums_name,"r")
nums=nums_file.readlines()
nums_file.close()
out_file = open(out_name,"w")
out_file.close()
out_file = open(out_name,"a+")
numsAsInt= [int(num[:-1]) for num in nums]
indexes = set(range(1,len(fasta_sequences)+1)).intersection(set(numsAsInt))
for ind in indexes:
fasta = fasta_sequences[ind-1]
name, sequence = fasta.id, str(fasta.seq)
out_file.write(">"+name+"\n")
out_file.write(sequence+"\n")
out_file.close()
I have tried to solve this problem but being a beginner with Python I can't go further. What can I try next?
A:
Hey I hope you still need an answer:
The problem was the faulty list of numbers read from the file. I provided my answer as code; I tested it and it works.
I also provided an alternative, more biopythonic way to do it:
#!/bin/python3
import sys
fasta_name = 'test.fasta'
nums_name = 'test.list'
out_name = 'out2.fasta'
from Bio import SeqIO
from Bio import Seq
fasta_sequences = list(SeqIO.parse(fasta_name, "fasta"))
#print the number of sequences in the file
"""
nums_file = open(nums_name,"r") #
nums=nums_file.readlines()
nums_file.close()
#produced: ['1 \ n', '3 \ n', '4'] these are strings not ints
['1\ n', '3\ n', '4'] needs to be [1,3,4] fix file readlines
"""
#nicer way to read in the list of numbers
nums=[]
with open(nums_name, 'r') as f:
nums_raw=f.readlines()
#strip newlines if they exist
nums=[x.strip() for x in nums_raw]
#turn nums into integers
nums=[int(x) for x in nums]
out_file = open(out_name,"w")
out_file.close()
out_file = open(out_name,"a+")
#numsAsInt= [int(num[:-1]) for num in nums]
# caused an error and is now no longer needed since we already have ints
numsAsInt=nums
indexes = set(range(1,len(fasta_sequences)+1)).intersection(set(numsAsInt))
#you can directly iterate over the SeqIO object and provide the indexes as a list
for ind in nums:
fasta = fasta_sequences[ind-1] #generally it would be advisable to start indexes from 0
name, sequence = fasta.id, str(fasta.seq)
out_file.write(">"+name+"\n")
out_file.write(sequence+"\n")
out_file.close()
# a more biopython way is this:
fasta_sequences = list(SeqIO.parse(fasta_name, "fasta"))
nums=[]
with open(nums_name, "r") as f:
nums=[int(x.strip()) for x in f.readlines()]
selected_seqs = [fasta_sequences[ind-1] for ind in nums]
SeqIO.write(selected_seqs, out_name, "fasta")
The last one is the shortest and efficient way to do it.
[tag]
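For completeness, the record-by-number selection can also be done with the standard library alone (a sketch assuming a well-formed FASTA file and 1-based record numbers, as in the question):

```python
def select_fasta_records(fasta_path, nums_path, out_path):
    # parse the FASTA into (header, sequence) records
    records, header, chunks = [], None, []
    with open(fasta_path) as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith(">"):
                if header is not None:
                    records.append((header, "".join(chunks)))
                header, chunks = line, []
            elif line:
                chunks.append(line)
    if header is not None:
        records.append((header, "".join(chunks)))
    # one integer per line, 1-based as in the question's list file
    with open(nums_path) as f:
        nums = [int(tok) for tok in f.read().split()]
    with open(out_path, "w") as out:
        for n in nums:
            h, seq = records[n - 1]
            out.write(h + "\n" + seq + "\n")
```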
|
I made a code with Biopython but it does not work every time. What is wrong with my code?
|
I have a FASTA file which contains sequences classified in an order from 1 (the first sequence: from > to *) to n (the last). The content is as follows:
>TRINITY_GG_10000_c0_g1_i1.p2 TRINITY_GG_10000_c0_g1~~TRINITY_GG_10000_c0_g1_i1.p2 ORF type:complete len:381 (+),score=55.64 TRINITY_GG_10000_c0_g1_i1:244-1386(+)
MNSFLSIRKRTSLATASKTRQLNWKPAKVSIRVTSNDKKLPVTQADVARKETSKHVSMLE
TTPKLKKSFIFMAGRVVRVMIGSFLVLFALLHMGILHTLSPAVKKGLGNFSSRTWQAAEQ
IFTGKWEDHEATATAFEHGF*
>TRINITY_GG_10000_c0_g1_i1.p1 TRINITY_GG_10000_c0_g1~~TRINITY_GG_10000_c0_g1_i1.p1 ORF type:5prime_partial len:1567 (-),score=319.89 TRINITY_GG_10000_c0_g1_i1:1694-6394(-)
SPNAVQQVPVQSPNAVQQVPVQSPNAVQQVPVQSARAIQQVPNQNPNAVQQWTRHPGAMQ
QPVQDSRAIQQQQQNNSSVQAQPQATGHHARQVDESTTRSGPEVPVSSQQGHTNAPSDV*
>TRINITY_GG_10000_c0_g1_i1.p........
And I have another text file containing numbers corresponding to some sequence classification in the first FASTA file, the content is like this:
10140
10178
11626
12110
12119
n
I tried to create a program that allows me to extract the sequences from the FASTA file that correspond to the number contained in the text file, my program doesn't work well. The extracted sequences do not correspond to the number of sequences desired and numbered in the text file. What is wrong with my program?
import sys
fasta_name = sys.argv[1]
nums_name = sys.argv[2]
out_name = sys.argv[3]
from Bio import SeqIO
fasta_sequences = list(SeqIO.parse(fasta_name, "fasta"))
nums_file = open(nums_name,"r")
nums=nums_file.readlines()
nums_file.close()
out_file = open(out_name,"w")
out_file.close()
out_file = open(out_name,"a+")
numsAsInt= [int(num[:-1]) for num in nums]
indexes = set(range(1,len(fasta_sequences)+1)).intersection(set(numsAsInt))
for ind in indexes:
fasta = fasta_sequences[ind-1]
name, sequence = fasta.id, str(fasta.seq)
out_file.write(">"+name+"\n")
out_file.write(sequence+"\n")
out_file.close()
I have tried to solve this problem but being a beginner with Python I can't go further. What can I try next?
|
[
"Hey I hope you still need an answer:\nThe problem faulty list I provided my answer as code I tested it and it works.\nI also provided a alternative more biopythonic way to do it:\n#!/bin/python3\n\nimport sys\nfasta_name = 'test.fasta'\nnums_name = 'test.list'\nout_name = 'out2.fasta'\n\nfrom Bio import SeqIO\nfrom Bio import Seq\n\nfasta_sequences = list(SeqIO.parse(fasta_name, \"fasta\"))\n#print the number of sequences in the file\n\n\"\"\"\nnums_file = open(nums_name,\"r\") # \nnums=nums_file.readlines()\nnums_file.close()\n#produced: ['1 \\ n', '3 \\ n', '4'] these are strings not ints\n ['1\\ n', '3\\ n', '4'] needs to be [1,3,4] fix file readlines\n\n\"\"\"\n\n#nicer way to read in the list of numbers\nnums=[]\nwith open(nums_name, 'r') as f:\n nums_raw=f.readlines()\n #strip newlines if they exist\n nums=[x.strip() for x in nums_raw]\n #turn nums into integers\n nums=[int(x) for x in nums]\n \n\nout_file = open(out_name,\"w\")\nout_file.close()\nout_file = open(out_name,\"a+\")\n\n#numsAsInt= [int(num[:-1]) for num in nums] \n# caused an error and is now no longer needed since we already have ints\nnumsAsInt=nums\nindexes = set(range(1,len(fasta_sequences)+1)).intersection(set(numsAsInt))\n\n#you can directly iterate over the SeqIO object and provide the indexes as a list\nfor ind in nums:\n fasta = fasta_sequences[ind-1] #generally it would be advisable to start indexes from 0\n name, sequence = fasta.id, str(fasta.seq)\n out_file.write(\">\"+name+\"\\n\")\n out_file.write(sequence+\"\\n\")\n\nout_file.close()\n\n# a more biopython way is this:\nfasta_sequences = list(SeqIO.parse(fasta_name, \"fasta\"))\nnums=[]\nwith open(nums_name, \"r\") as f:\n nums=[int(x.strip()) for x in f.readlines()]\nselected_seqs = [fasta_sequences[ind-1] for ind in nums]\nSeqIO.write(selected_seqs, out_name, \"fasta\") \n\n\n \n \n\nThe last one is the shortest and efficient way to do it.\n[tag]\n"
] |
[
0
] |
[] |
[] |
[
"biopython",
"extract",
"fasta",
"python",
"sequence"
] |
stackoverflow_0074358827_biopython_extract_fasta_python_sequence.txt
|
Q:
Closest, index and minimum distance between points
I have a code that calculates the distance between closest points in a list, by using cdist.
However, I would like an improved version that also gives me the indices of the points.
distance.cdist(Coordinates,Points).min(axis=1)
A:
Use argmin instead.
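A plain-NumPy sketch of the idea (the `cdist` matrix replaced by a broadcast distance computation, with made-up arrays for illustration):

```python
import numpy as np

coordinates = np.array([[0.0, 0.0], [5.0, 5.0]])
points = np.array([[1.0, 0.0], [4.0, 4.0], [10.0, 10.0]])

# pairwise distances, shape (len(coordinates), len(points)), like cdist
d = np.linalg.norm(coordinates[:, None, :] - points[None, :, :], axis=-1)

nearest_idx = d.argmin(axis=1)  # index of the closest point per coordinate
nearest_dist = d[np.arange(len(coordinates)), nearest_idx]  # == d.min(axis=1)
```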
|
Closest, index and minimum distance between points
|
I have a code that calculates the distance between closest points in a list, by using cdist.
However, I would like an improved version that also gives me the indices of the points.
distance.cdist(Coordinates,Points).min(axis=1)
|
[
"Use argmin instead.\n( padding)\n"
] |
[
0
] |
[] |
[] |
[
"python",
"scipy"
] |
stackoverflow_0074457742_python_scipy.txt
|
Q:
How to not show the webdriver console when running an exe file created with python?
I am writing a parser using the selenium library, which is launched via an exe file (converted from a python script with the --windowed parameter), but when parsing starts, the chromedriver.exe console window opens, how can I prevent it from opening?
I searched for information about this, but did not find anything normal
A:
After some digging, I found the answer to this question and the console no longer opened. All you need to do is add some code to the python script.
This will only work for Windows!!
Import the required libraries:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from subprocess import CREATE_NO_WINDOW
I also advise you to use the webdriver-manager library, it will automatically download driver updates.
Let's import this library:
from webdriver_manager.chrome import ChromeDriverManager
Next, create a variable chrome_service and then assign a new flag to it:
chrome_service = ChromeService(ChromeDriverManager().install())
chrome_service.creationflags = CREATE_NO_WINDOW
And already in the driver settings we specify it:
driver = webdriver.Chrome(ChromeDriverManager().install(), service = chrome_service)
Maybe I didn't explain it very well or clearly, but everything works.
It can be said that this is a better, reworked version of this answer - hide chromeDriver console in python
That thread contains essentially the same answer I am writing here, but I am answering to close the question))
|
How to not show the webdriver console when running an exe file created with python?
|
I am writing a parser using the selenium library, which is launched via an exe file (converted from a python script with the --windowed parameter), but when parsing starts, the chromedriver.exe console window opens, how can I prevent it from opening?
I searched for information about this, but did not find anything normal
|
[
"After some digging, I found the answer to this question and the console no longer opened. All you need to do is add some code to the python script.\nThis will only work for Windows!!\nImport the required libraries:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service as ChromeService\nfrom subprocess import CREATE_NO_WINDOW\n\nI also advise you to use the webdriver-manager library, it will automatically download driver updates.\nLet's import this library:\nfrom webdriver_manager.chrome import ChromeDriverManager\n\nNext, create a variable chrome_service and then assign a new flag to it:\nchrome_service = ChromeService(ChromeDriverManager().install())\nchrome_service.creationflags = CREATE_NO_WINDOW\n\nAnd already in the driver settings we specify it:\ndriver = webdriver.Chrome(ChromeDriverManager().install(), service = chrome_service)\n\nMaybe I didn’t explain very well and clearly, but everything works.\nIt can be said that this is a better reworked of this answer - hide chromeDriver console in python\nIn the same place below there is essentially the same answer as I am writing here, but to close the question I answer))\n"
] |
[
0
] |
[] |
[] |
[
"exe",
"python",
"selenium",
"selenium_webdriver"
] |
stackoverflow_0074464796_exe_python_selenium_selenium_webdriver.txt
|
Q:
How to get last date of the previous 9 months in python
Here I need to get the last date of each of the previous 9 months when we provide the year and quarter as input.
Input to my program is year and quarter
Example:
year = 2022
quarter = 'Q3'
Expected output
2022-06-30
2022-05-31
2022-04-30
2022-03-31
2022-02-28
2022-01-31
2021-12-31
2021-11-30
2021-10-31
Is there any way to achieve this in Python? It would be great.
A:
You can use date_range:
year = 2022
quarter = 'Q3'
pd.Series(pd.date_range(end=pd.Timestamp(f'{year}-{quarter}'), periods=9, freq='1M')[::-1])
Output:
0 2022-06-30
1 2022-05-31
2 2022-04-30
3 2022-03-31
4 2022-02-28
5 2022-01-31
6 2021-12-31
7 2021-11-30
8 2021-10-31
dtype: datetime64[ns]
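If you'd rather not pull in pandas for this, the same sequence can be built with only the standard library. A sketch — the quarter-to-month mapping (Q1→January, Q2→April, Q3→July, Q4→October) and the function name are my own assumptions:

```python
from datetime import date, timedelta

def last_dates_of_previous_months(year: int, quarter: str, n: int = 9) -> list:
    """Last day of each of the n months preceding the given quarter's start,
    most recent first."""
    month = (int(quarter[1]) - 1) * 3 + 1   # Q1 -> 1, Q2 -> 4, Q3 -> 7, Q4 -> 10
    current = date(year, month, 1)          # first day of the quarter
    results = []
    for _ in range(n):
        last_day = current - timedelta(days=1)  # last day of the previous month
        results.append(last_day)
        current = date(last_day.year, last_day.month, 1)
    return results

for d in last_dates_of_previous_months(2022, "Q3"):
    print(d)  # 2022-06-30, 2022-05-31, ... down to 2021-10-31
```

Stepping back a month at a time via "day before the first of the current month" sidesteps all month-length and leap-year arithmetic.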
|
How to get last date of the previous 9 months in python
|
Here I need to get the last date of each of the previous 9 months when we provide the year and quarter as input.
Input to my program is year and quarter
Example:
year = 2022
quarter = 'Q3'
Expected output
2022-06-30
2022-05-31
2022-04-30
2022-03-31
2022-02-28
2022-01-31
2021-12-31
2021-11-30
2021-10-31
Is there any way to achieve this in Python? It would be great.
|
[
"You can use date_range:\nyear = 2022\nquarter = 'Q3'\n\npd.Series(pd.date_range(end=pd.Timestamp(f'{year}-{quarter}'), periods=9, freq='1M')[::-1])\n\nOutput:\n0 2022-06-30\n1 2022-05-31\n2 2022-04-30\n3 2022-03-31\n4 2022-02-28\n5 2022-01-31\n6 2021-12-31\n7 2021-11-30\n8 2021-10-31\ndtype: datetime64[ns]\n\n"
] |
[
1
] |
[] |
[] |
[
"datetime",
"python"
] |
stackoverflow_0074474242_datetime_python.txt
|
Q:
Using Python 3.3 in C++ 'python33_d.lib' not found
I am trying to #include <Python.h> in my C++ code and when I go to compile my code I get this error:
fatal error LNK1104: cannot open file 'python33_d.lib'
I have tried to find python33_d.lib on my computer to include in my linker dependencies, but I cannot find it. I have been able to find python33.lib.
Where can I find the python33_d.lib, or how can I resolve this issue?
A:
Simple solution from the python bug tracker:
#ifdef _DEBUG
#undef _DEBUG
#include <python.h>
#define _DEBUG
#else
#include <python.h>
#endif
A:
In the event that you need a debug version (as I do for work), it is possible to build the library yourself:
Download the source tarball from http://www.python.org/download
Extract the tarball (7zip will do the trick) and go into the resulting directory (should be something like Python-3.3.2).
From the Python directory, go to the PCBuild folder. There are two important files here: readme.txt, which contains the instructions for building Python in Windows (even if it uses the UNIX line feed style...), and pcbuild.sln, which is the Visual Studio solution that builds Python.
Open pcbuild.sln in Visual Studio. (I am assuming you are using Visual Studio 10; readme.txt contains specific instructions for older versions of Visual Studio.)
Make sure Visual Studio is set to the "debug" configuration, and then build the solution for your appropriate architecture (x64 or Win32). You may get a few failed subprojects, but not all of them are necessary to build python33_d; by my count, 8 builds failed and I got a working .lib file anyway.
You will find python33_d.lib and python33_d.dll in either the PCBuild folder (if building Win32) or the amd64 subfolder (if building x64).
A:
*_d.lib is used for debug builds. Switch to a release build instead.
A:
If you install python via the installers on python.org, you can tell the installer to include the debugging symbols and binaries such as the pythonXX_d.dll file by selecting "Customize Installation" while installing (I think it's on the second customization page). This may be the easiest solution if you're not very savvy at building the project yourself (like me). Too bad I don't see any way to do this with the anaconda distribution.
A:
Open Python installer(.exe) -- Modify -- Next -- Enable checkbox Debug Symbols and Libs
A:
If you are using Swig to generate python wrappers then you can define the macro SWIG_PYTHON_INTERPRETER_NO_DEBUG. In which case it will not look for python**_d.lib
A:
As an addition to @liorda's answer:
It might happen that conflicts appear between other libraries and Python.
The error C1017: invalid integer constant expression may come up.
For this use @liorda's code and replace
#define _DEBUG
with
#define _DEBUG 1
|
Using Python 3.3 in C++ 'python33_d.lib' not found
|
I am trying to #include <Python.h> in my C++ code and when I go to compile my code I get this error:
fatal error LNK1104: cannot open file 'python33_d.lib'
I have tried to find python33_d.lib on my computer to include in my linker dependencies, but I cannot find it. I have been able to find python33.lib.
Where can I find the python33_d.lib, or how can I resolve this issue?
|
[
"Simple solution from the python bug tracker:\n#ifdef _DEBUG\n #undef _DEBUG\n #include <python.h>\n #define _DEBUG\n#else\n #include <python.h>\n#endif\n\n",
"In the event that you need a debug version (as I do for work), it is possible to build the library yourself:\n\nDownload the source tarball from http://www.python.org/download\nExtract the tarball (7zip will do the trick) and go into the resulting directory (should be something like Python-3.3.2).\nFrom the Python directory, go to the PCBuild folder. There are two important files here: readme.txt, which contains the instructions for building Python in Windows (even if it uses the UNIX line feed style...), and pcbuild.sln, which is the Visual Studio solution that builds Python.\nOpen pcbuild.sln in Visual Studio. (I am assuming you are using Visual Studio 10; readme.txt contains specific instructions for older versions of Visual Studio.)\nMake sure Visual Studio is set to the \"debug\" configuration, and then build the solution for your appropriate architecture (x64 or Win32). You may get a few failed subprojects, but not all of them are necessary to build python33_d; by my count, 8 builds failed and I got a working .lib file anyway.\nYou will find python33_d.lib and python33_d.dll in either the PCBuild folder (if building Win32) or the amd64 subfolder (if building x64).\n\n",
"*_d.lib is used for debug builds. Switch to a release build instead.\n",
"If you install python via the installers on python.org, you can tell the installer to include the debugging symbols and binaries such as the pythonXX_d.dll file by selecting \"Customize Installation\" while installing (I think it's on the second customization page). This may be the easiest solution if you're not very savvy at building the project yourself (like me). Too bad I don't see any way to do this with the anaconda distribution.\n",
"Open Python installer(.exe) -- Modify -- Next -- Enable checkbox Debug Symbols and Libs\n",
"If you are using Swig to generate python wrappers then you can define the macro SWIG_PYTHON_INTERPRETER_NO_DEBUG. In which case it will not look for python**_d.lib\n",
"As addition to the answer of @liorda:\nIt might happen that there appears conflicts with other libraries and python.\nThe error C1017: invalid integer constant expression may come up.\nFor this use @liorda's code and replace\n#define _DEBUG\n\nwith\n#define _DEBUG 1\n\n"
] |
[
35,
25,
18,
12,
9,
0,
0
] |
[] |
[] |
[
"c++",
"python",
"visual_c++"
] |
stackoverflow_0017028576_c++_python_visual_c++.txt
|
Q:
Expand folium map size on mobile devices
I make a web app using Django and Folium. I have a navbar and a Folium map on the web page. It works fine on computers and landscape screen devices, but on portrait screen devices the map leaves free space.
My code for map:
current_map = folium.Map(location=start_location, zoom_start=6)
fig = branca.element.Figure(height="100%")
fig.add_child(current_map)
context = {"current_map": current_map._repr_html_()}
return render(request, template_name="index.html", context=context)
index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title></title>
</head>
<body>
{{ current_map | safe }}
</body>
</html>
How do I fill it?
A:
Well, I had to use Figure and modify the folium package.
current_map = folium.Map(location=(48.51, 32.25), zoom_start=6)
map_container = branca.element.Figure(height="100%")
map_container.add_child(current_map)
...
context = {"current_map": map_container.render(), "form": form}
return render(request, template_name="hub/index.html", context=context)
I downloaded folium-0.12.1.post1.tar.gz, and in folium/folium.py removed the mentions of old Bootstrap versions from the lists _default_js and _default_css. But I still use Bootstrap in base.html.
And in requirements.txt I use this local modified distr:
./distr/folium-0.12.1.post1.tar.gz
|
Expand folium map size on mobile devices
|
I make a web app using Django and Folium. I have a navbar and a Folium map on the web page. It works fine on computers and landscape screen devices, but on portrait screen devices the map leaves free space.
My code for map:
current_map = folium.Map(location=start_location, zoom_start=6)
fig = branca.element.Figure(height="100%")
fig.add_child(current_map)
context = {"current_map": current_map._repr_html_()}
return render(request, template_name="index.html", context=context)
index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title></title>
</head>
<body>
{{ current_map | safe }}
</body>
</html>
How do I fill it?
|
[
"Well, I might to use Figure and modify folium package.\ncurrent_map = folium.Map(location=(48.51, 32.25), zoom_start=6)\nmap_container = branca.element.Figure(height=\"100%\")\nmap_container.add_child(current_map)\n...\ncontext = {\"current_map\": map_container.render(), \"form\": form}\nreturn render(request, template_name=\"hub/index.html\", context=context)\n\nI downloaded folium-0.12.1.post1.tar.gz, and from folium/folium.py removed mentions about old versions of Bootstrap from lists _default_js and _default_css. But I use Bootstrap in base.html.\nAnd in requirements.txt I use this local modified distr:\n./distr/folium-0.12.1.post1.tar.gz\n\n"
] |
[
1
] |
[
"Try this:\ncontext = {'map': map.get_root().render()}\nreturn render(request, template_name=\"index.html\", context=context)\n\nindex.html:\n<html> \n {{map|safe}}\n</html>\n \n\n"
] |
[
-1
] |
[
"django",
"folium",
"python"
] |
stackoverflow_0073791474_django_folium_python.txt
|
Q:
How to call device function inside an object from CUDA kernel in python
I am writing a very specific neural network and I have many classes of different activation functions; each has a function for normal Python and one jitted as a device function. The problem is calling that method from inside a CUDA kernel.
@cuda.jit(device=True)
def activation_fn(z):
return max(0, z)
@cuda.jit
def backprop_kernel(arr):
arr[cuda.threadIdx.x] = activation_fn(arr[cuda.threadIdx.x])
def backprop_GPU(x, y):
arr = np.array([-3, -2, -1, 0, 1, 2, 3])
print(arr)
backprop_kernel[1, 7](arr)
print(arr)
backprop_GPU(None, None)
This works perfectly fine, but I want to make the code below work.
class Activation:
@cuda.jit(device=True)
def fn(z):
return max(0, z)
class Network:
def __init__(self):
self.activation_fn = Activation()
@cuda.jit
def kernel(arr):
arr[cuda.threadIdx.x] = activation_fn(arr[cuda.threadIdx.x])
def backprop(self, x, y):
arr = np.array([-3, -2, -1, 0, 1, 2, 3])
self.kernel[1, 7](arr)
net = Network()
net.backprop(None, None)
How do I make the "activation_fn" accessible from the kernel?
A:
@cuda.jit has to be used with functions, not members, so you need to define the decorated functions inside methods, and capture the activation function when you define the kernel:
from numba import cuda
import numpy as np
class Activation:
def __init__(self):
@cuda.jit(device=True)
def fn(z):
return max(0, z)
self.fn = fn
class Network:
def __init__(self):
self.activation = Activation()
activation_fn = self.activation.fn
@cuda.jit
def kernel(arr):
arr[cuda.threadIdx.x] = activation_fn(arr[cuda.threadIdx.x])
self.kernel = kernel
def backprop(self, x, y):
arr = np.array([-3, -2, -1, 0, 1, 2, 3])
self.kernel[1, 7](arr)
print(arr)
net = Network()
net.backprop(None, None)
prints:
$ python repro.py
[0 0 0 0 1 2 3]
(Note, I omitted the performance warnings that come out here, as they're orthogonal to the issue at hand)
|
How to call device function inside an object from CUDA kernel in python
|
I am writing a very specific neural network and I have many classes of different activation functions; each has a function for normal Python and one jitted as a device function. The problem is calling that method from inside a CUDA kernel.
@cuda.jit(device=True)
def activation_fn(z):
return max(0, z)
@cuda.jit
def backprop_kernel(arr):
arr[cuda.threadIdx.x] = activation_fn(arr[cuda.threadIdx.x])
def backprop_GPU(x, y):
arr = np.array([-3, -2, -1, 0, 1, 2, 3])
print(arr)
backprop_kernel[1, 7](arr)
print(arr)
backprop_GPU(None, None)
This works perfectly fine, but I want to make the code below work.
class Activation:
@cuda.jit(device=True)
def fn(z):
return max(0, z)
class Network:
def __init__(self):
self.activation_fn = Activation()
@cuda.jit
def kernel(arr):
arr[cuda.threadIdx.x] = activation_fn(arr[cuda.threadIdx.x])
def backprop(self, x, y):
arr = np.array([-3, -2, -1, 0, 1, 2, 3])
self.kernel[1, 7](arr)
net = Network()
net.backprop(None, None)
How do I make the "activation_fn" accessible from the kernel?
|
[
"@cuda.jit has to be used with functions, not members, so you need to define the decorated functions inside methods, and capture the activation function when you define the kernel:\nfrom numba import cuda\nimport numpy as np\n\n\nclass Activation:\n def __init__(self):\n @cuda.jit(device=True)\n def fn(z):\n return max(0, z)\n\n self.fn = fn\n\n\nclass Network:\n def __init__(self):\n self.activation = Activation()\n\n activation_fn = self.activation.fn\n\n @cuda.jit\n def kernel(arr):\n arr[cuda.threadIdx.x] = activation_fn(arr[cuda.threadIdx.x])\n\n self.kernel = kernel\n\n def backprop(self, x, y):\n arr = np.array([-3, -2, -1, 0, 1, 2, 3])\n self.kernel[1, 7](arr)\n print(arr)\n\n\nnet = Network()\nnet.backprop(None, None)\n\nprints:\n$ python repro.py \n[0 0 0 0 1 2 3]\n\n(Note, I omitted the performance warnings that come out here, as they're orthogonal to the issue at hand)\n"
] |
[
0
] |
[] |
[] |
[
"cuda",
"machine_learning",
"numba",
"python"
] |
stackoverflow_0074343708_cuda_machine_learning_numba_python.txt
|
Q:
Create a new column in multiple dataframes using for loop
I have multiple dataframes with the same structure but different values
for instance,
df0, df1, df2...., df9
To each dataframe I want to add a column named eventdate that consists of one date, for instance 2019-09-15, using a for loop
for i in range(0, 9);
df+str(i)['eventdate'] = "2021-09-15"
but I get an error message
SyntaxError: cannot assign to operator
I think it's because df isn't defined. This should be very simple... Any idea how to do this? Thanks.
A:
dfs = [df0, df1, df2...., df9]
dfs_new = []
for i, df in enumerate(dfs):
df['eventdate'] = "2021-09-15"
dfs_new.append(df)
if you can't generate a list then you could use eval(f"df{str(num)}") but this method isn't recommended from what I've seen
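Stepping back, the cleanest way around the "build a variable name from a string" problem is usually not to have ten separate variables at all, but to keep the frames in a dict keyed by name — a sketch (the frames here are hypothetical stand-ins for df0..df9):

```python
import pandas as pd

# Hypothetical frames standing in for df0..df9; a dict avoids ever needing
# to assemble variable names like "df" + str(i) as strings.
dfs = {f"df{i}": pd.DataFrame({"value": [i]}) for i in range(10)}

# Add the constant-date column to every frame in place
for name, df in dfs.items():
    df["eventdate"] = "2021-09-15"
```

Individual frames are then reachable as `dfs["df3"]` instead of `df3`.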
|
Create a new column in multiple dataframes using for loop
|
I have multiple dataframes with the same structure but different values
for instance,
df0, df1, df2...., df9
To each dataframe I want to add a column named eventdate that consists of one date, for instance, 2019-09-15 using for loop
for i in range(0, 9);
df+str(i)['eventdate'] = "2021-09-15"
but I get an error message
SyntaxError: cannot assign to operator
I think it's because df isn't defined. This should be very simple.. Any idea how to do this? thanks.
|
[
"dfs = [df0, df1, df2...., df9]\ndfs_new = []\n\nfor i, df in enumerate(dfs):\n df['eventdate'] = \"2021-09-15\"\n dfs_new.append(df)\n\nif you can't generate a list then you could use eval(f\"df{str(num)}\") but this method isn't recommended from what I've seen\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074474322_dataframe_pandas_python.txt
|
Q:
Is there a way to diff two files, and move common lines to a third file?
My case is fairly simple in theory. As a refactoring task, I have two python files consisting purely of variables, each referring to a testing environment's own version. These files are thousands of lines long, but there is only a handful (<100) of variables in each file that are version specific, the rest could be moved to a common variables file.
So I am hoping, in my case, to diff the two files (version_1_vars.py and version_2_vars.py), extract any lines that are an exact match regardless of order, move them to common_vars.py, and remove them from the originals.
Here is my snippet:
version_1_vars.py
var_1 = "Stackoverflow is cool"
var_2 = "something_else"
var_A = "I am here"
var_version1_unique = "this is unique to version 1"
version_2_vars.py
var_1 = "Stackoverflow is cool"
var_A = "I am here"
var_2 = "something_else"
var_version2_unique = "this is unique to version two"
Desired result:
version_1_vars.py
var_version1_unique = "this is unique to version 1"
version_2_vars.py
var_version2_unique = "this is unique to version two"
common_vars.py
var_1 = "Stackoverflow is cool"
var_2 = "something_else"
var_A = "I am here"
I have tried something with the grep and comm commands, but while grep did move lines to a new file, the results contained both unique and non-unique lines, and it did not delete any of the original lines. I also have some placeholders {} in the files, so using plain grep -vf returns a regex error (invalid content of {}).
grep -F -vf file_1 file_2 >> file_3 did not work as expected.
A:
You can try the difflib library: its unified_diff() function compares the two files and reports the differences.
|
Is there a way to diff two files, and move common lines to a third file?
|
My case is fairly simple in theory. As a refactoring task, I have two python files consisting purely of variables, each referring to a testing environment's own version. These files are thousands of lines long, but there is only a handful (<100) of variables in each file that are version specific, the rest could be moved to a common variables file.
So I am hoping, in my case, to diff the two files (version_1_vars.py and version_2_vars.py), extract any lines that are an exact match regardless of order, move them to common_vars.py, and remove them from the originals.
Here is my snippet:
version_1_vars.py
var_1 = "Stackoverflow is cool"
var_2 = "something_else"
var_A = "I am here"
var_version1_unique = "this is unique to version 1"
version_2_vars.py
var_1 = "Stackoverflow is cool"
var_A = "I am here"
var_2 = "something_else"
var_version2_unique = "this is unique to version two"
Desired result:
version_1_vars.py
var_version1_unique = "this is unique to version 1"
version_2_vars.py
var_version2_unique = "this is unique to version two"
common_vars.py
var_1 = "Stackoverflow is cool"
var_2 = "something_else"
var_A = "I am here"
I have tried something with the grep and comm commands, but while grep did move lines to a new file, the results contained both unique and non-unique lines, and it did not delete any of the original lines. I also have some placeholders {} in the files, so using plain grep -vf returns a regex error (invalid content of {}).
grep -F -vf file_1 file_2 >> file_3 did not work as expected.
|
[
"You can try \"difflib\" library and then use unified_diff() to compare and find out difference.\n"
] |
[
0
] |
[] |
[] |
[
"diff",
"grep",
"python"
] |
stackoverflow_0074474444_diff_grep_python.txt
|
Q:
Poetry installed but `poetry: command not found`
I've had a million and one issues with Poetry recently.
I got it fully installed and working yesterday, but after a restart of my machine I'm back to having issues with it ;(
Is there any way to have Poetry consistently recognised in my Terminal, even after reboot?
System Specs:
Windows 10,
Visual Studio Code,
Bash - WSL Ubuntu CLI,
Python 3.8.
Terminal:
me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ poetry run python3 cli.py
poetry: command not found
me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3
Retrieving Poetry metadata
This installer is deprecated. Poetry versions installed using this script will not be able to use 'self update' command to upgrade to 1.2.0a1 or later.
Latest version already installed.
me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ poetry run python3 cli.py
poetry: command not found
me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$
Please let me know if there is anything else I can add to post to help further clarify.
A:
When I run this after a shutdown of the bash terminal:
export PATH="$HOME/.poetry/bin:$PATH"
the poetry command is then recognised.
However, this isn't enough on its own; every time I shut down the terminal I need to run the export again.
It probably needs to be saved in a startup file such as .bashrc.
A:
Since this is the top StackOverflow result for "poetry command not found"...
For Mac users, create a .zshrc file in your home folder with the following:
export PATH="$HOME/.local/bin:$PATH"
A:
To follow-up on @StressedBoi69420's answer, could you add the line he suggested, i.e.
export PATH="$HOME/.poetry/bin:$PATH"
to your .bashrc?
As per this other Stack Overflow post What is the default install path for poetry and @arshbot's answer, I have added the line
export PATH=$PATH:$HOME/.poetry/bin to my .zshrc and it seems to be working.
A:
Just to add some beginner-level context around Julien's excellent answer: to find and edit your .zshrc file, you need to use your default editor (in this case I am using VSCode) and run:
>> code ~/.zshrc
...and, crucially, restart the terminal.
More details and the source of this info here: https://superuser.com/questions/886132/where-is-the-zshrc-file-on-mac
A:
I updated poetry to v1.2.2 (Nov 2022), but had issues with the path being set properly. This was the path definition I found:
Windows 10:
C:\User\<myUserName>\AppData\Roaming\pypoetry\venv\Scripts
Add it temporarily to your path with:
set PATH=%PATH%;%USERPROFILE%\AppData\Roaming\pypoetry\venv\Scripts
Or set this in your usual Windows Environment setup
A:
Generally the path of Poetry should be added to the ENV variable PATH.
1. Use which to find where Poetry is installed, right after you installed it.
which poetry
# $HOME/.local/bin/poetry # if installed with Brew
# maybe elsewhere: "$HOME/.poetry/bin:$PATH"
In fact, it reminds users to add the path to the shell configuration file in the output of the Poetry installation.
Installing Poetry (1.2.2): Done
Poetry (1.2.2) is installed now. Great!
To get started you need Poetry's bin directory
(/home/shell/.local/bin) in your PATH environment variable.
Add export PATH="/home/shell/.local/bin:$PATH" to your shell
configuration file.
Alternatively, you can call Poetry explicitly with
/home/shell/.local/bin/poetry.
2. Find out which shell you are using:
echo $SHELL
# /usr/bin/zsh # many people use zsh or oh-my-zsh
# For zsh, put stuff in ~/.zshrc, which is always executed.
# For bash, put stuff in ~/.bashrc, and make ~/.bash_profile source it.
3. Add the Poetry executable path to $PATH and append the change to a shell startup configuration file so it is applied every time that shell starts:
export SHELL_RCFILE="$HOME/.zshrc"
echo 'export PATH="$HOME/.local/bin:$PATH"' >> "$SHELL_RCFILE"
Reference: zsh, bash startup files loading
|
Poetry installed but `poetry: command not found`
|
I've had a million and one issues with Poetry recently.
I got it fully installed and working yesterday, but after a restart of my machine I'm back to having issues with it ;(
Is there any way to have Poetry consistently recognised in my Terminal, even after reboot?
System Specs:
Windows 10,
Visual Studio Code,
Bash - WSL Ubuntu CLI,
Python 3.8.
Terminal:
me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ poetry run python3 cli.py
poetry: command not found
me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3
Retrieving Poetry metadata
This installer is deprecated. Poetry versions installed using this script will not be able to use 'self update' command to upgrade to 1.2.0a1 or later.
Latest version already installed.
me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ poetry run python3 cli.py
poetry: command not found
me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$
Please let me know if there is anything else I can add to post to help further clarify.
|
[
"When I run this, after shutdown of bash Terminal:\nexport PATH=\"$HOME/.poetry/bin:$PATH\"\n\npoetry command is then recognised.\nHowever, this isn't enough alone; as every time I shutdown the terminal I need to run the export.\nPossibly needs to be saved in a file.\n",
"Since this is the top StackOverflow result for \"poetry command not found\"...\nFor Mac users, create a .zshrc file in your home folder with the following:\nexport PATH=\"$HOME/.local/bin:$PATH\"\n\n",
"To follow-up on @StressedBoi69420's answer, could you add the line he suggested, i.e.\nexport PATH=\"$HOME/.poetry/bin:$PATH\"\nto your .bashrc?\nAs per this other Stack Overflow post What is the default install path for poetry and @arshbot's answer, I have added the line\nexport PATH=$PATH:$HOME/.poetry/bin to my .zshrc and it seems to be working.\n",
"Just to add some beginner level context around Julien's excellent answer, to find and edit your .zshrc file, you need to use your default editor (in this case i am using VSCode) and run:\n>> code ~/.zshrc\n\n...and, crucially, restart the terminal.\nMore details and the source of this info here: https://superuser.com/questions/886132/where-is-the-zshrc-file-on-mac\n",
"I updated poetry to v1.2.2 (Nov 2022), but had issues with the path being set properly. This was the path definition I found:\nWindows 10:\nC:\\User\\<myUserName>\\AppData\\Roaming\\pypoetry\\venv\\Scripts\nAdd it temporarily to your path with:\nset PATH=%PATH%;%USERPROFILE%\\AppData\\Roaming\\pypoetry\\venv\\Scripts\nOr set this in your usual Windows Environment setup\n",
"Generally the path of Poetry should be added to the ENV variable PATH.\n1. use which find where is poetry installed just after you installed poetry.\nwhich poetry\n\n# $HOME/.local/bin/poetry # if installed with Brew\n# maybe elsewhere: \"$HOME/.poetry/bin:$PATH\"\n\nIn fact, it reminds users to add the path to the shell configuration file in the output of the Poetry installation.\n\nInstalling Poetry (1.2.2): Done\nPoetry (1.2.2) is installed now. Great!\nTo get started you need Poetry's bin directory\n(/home/shell/.local/bin) in your PATH environment variable.\nAdd export PATH=\"/home/shell/.local/bin:$PATH\" to your shell\nconfiguration file.\nAlternatively, you can call Poetry explicitly with\n/home/shell/.local/bin/poetry.\n\n2. find what kind of shell you are using:\necho $SHELL\n\n# /usr/bin/zsh # many people use zsh or oh-my-zsh\n # For zsh, put stuff in ~/.zshrc, which is always executed.\n # For bash, put stuff in ~/.bashrc, and make ~/.bash_profile source it.\n\n3. add the Poetry executable path to $PATH and add it to a shell startup configuration file to init it every time you start with that shell:\nexport SHELL_RCFILE=\"~/.zshrc\"\necho \"export POETRY_PATH=$HOME/.local/bin/ && export PATH=\"$POETRY_PATH:$PATH\" >> $SHELL_RCFILE\n\nReference: zsh, bash startup files loading\n"
] |
[
18,
11,
8,
2,
1,
0
] |
[] |
[] |
[
"python",
"python_poetry"
] |
stackoverflow_0070003829_python_python_poetry.txt
|
Q:
ERROR: Could not build wheels for backports.zoneinfo, which is required to install pyproject.toml-based projects
The Heroku Build is returning this error when I'm trying to deploy a Django application for the past few days. The Django Code and File Structure are the same as Django's Official Documentation and Procfile is added in the root folder.
Log -
-----> Building on the Heroku-20 stack
-----> Determining which buildpack to use for this app
-----> Python app detected
-----> No Python version was specified. Using the buildpack default: python-3.10.4
To use a different version, see: https://devcenter.heroku.com/articles/python-runtimes
Building wheels for collected packages: backports.zoneinfo
Building wheel for backports.zoneinfo (pyproject.toml): started
Building wheel for backports.zoneinfo (pyproject.toml): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /app/.heroku/python/bin/python /app/.heroku/python/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmpqqu_1qow
cwd: /tmp/pip-install-txfn1ua9/backports-zoneinfo_a462ef61051d49e7bf54e715f78a34f1
Complete output (41 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.10
creating build/lib.linux-x86_64-3.10/backports
copying src/backports/__init__.py -> build/lib.linux-x86_64-3.10/backports
creating build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/_zoneinfo.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/_tzpath.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/_common.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/_version.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/__init__.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo
running egg_info
writing src/backports.zoneinfo.egg-info/PKG-INFO
writing dependency_links to src/backports.zoneinfo.egg-info/dependency_links.txt
writing requirements to src/backports.zoneinfo.egg-info/requires.txt
writing top-level names to src/backports.zoneinfo.egg-info/top_level.txt
reading manifest file 'src/backports.zoneinfo.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.png' under directory 'docs'
warning: no files found matching '*.svg' under directory 'docs'
no previously-included directories found matching 'docs/_build'
no previously-included directories found matching 'docs/_output'
adding license file 'LICENSE'
adding license file 'licenses/LICENSE_APACHE'
writing manifest file 'src/backports.zoneinfo.egg-info/SOURCES.txt'
copying src/backports/zoneinfo/__init__.pyi -> build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/py.typed -> build/lib.linux-x86_64-3.10/backports/zoneinfo
running build_ext
building 'backports.zoneinfo._czoneinfo' extension
creating build/temp.linux-x86_64-3.10
creating build/temp.linux-x86_64-3.10/lib
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/app/.heroku/python/include/python3.10 -c lib/zoneinfo_module.c -o build/temp.linux-x86_64-3.10/lib/zoneinfo_module.o -std=c99
lib/zoneinfo_module.c: In function ‘zoneinfo_fromutc’:
lib/zoneinfo_module.c:600:19: error: ‘_PyLong_One’ undeclared (first use in this function); did you mean ‘_PyLong_New’?
600 | one = _PyLong_One;
| ^~~~~~~~~~~
| _PyLong_New
lib/zoneinfo_module.c:600:19: note: each undeclared identifier is reported only once for each function it appears in
error: command '/usr/bin/gcc' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for backports.zoneinfo
Failed to build backports.zoneinfo
ERROR: Could not build wheels for backports.zoneinfo, which is required to install pyproject.toml-based projects
! Push rejected, failed to compile Python app.
! Push failed
Thanks.
A:
I was having the same error while deploying my application on Heroku. The problem is that Heroku uses Python 3.10.x by default, and backports.zoneinfo does not build properly with this version, so I suggest switching to version 3.8.x (stable).
In order to do that, you need to tell Heroku to use that version, which can be done as follows:
Create runtime.txt in the root directory.
Write python-3.8.10 in runtime.txt to specify the version.
Heroku will then install this version and you will not get the error anymore.
PS: this worked for me; later, when Heroku resolves this, you can switch back to the latest Python version.
A:
Avoid installing backports.zoneinfo when using python >= 3.9
Edit your requirements.txt file
FROM:
backports.zoneinfo==0.2.1
TO:
backports.zoneinfo;python_version<"3.9"
OR:
backports.zoneinfo==0.2.1;python_version<"3.9"
You can read more about this here and here
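In application code, the matching pattern is to import the stdlib module on 3.9+ and fall back to the backport otherwise. A minimal sketch of that common pattern (not from the original answer):

```python
import sys

# On Python 3.9+ zoneinfo ships in the standard library; older
# interpreters fall back to the backports.zoneinfo package.
if sys.version_info >= (3, 9):
    from zoneinfo import ZoneInfo
else:
    from backports.zoneinfo import ZoneInfo

print(ZoneInfo("UTC"))
```

Combined with the environment marker in requirements.txt, the same code then runs unchanged on both old and new interpreters.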
A:
I was facing the same error while creating my container. I solved the error by using the exact version of my Python venv i.e. 3.8.9
Earlier for the image, I was using 3.8-alpine for a lighter version of the image. But, it wasn't working out for me and got the same error as yours.
A:
This type of problem occurs when you forget to modify your requirements.txt file, so the Heroku server uses its default settings, i.e. the latest Python version, which is not stable for this package.
Use the following commands and you will get rid of this type of problem.
$ git status

You need to modify requirements.txt, then:
$ git add -A
$ git commit -m "Python VERSION-3.8.10"
Then push to your server and I am sure you will get rid of this type of problem.
In order to push to your server:
$ git push heroku master
A:
Downgrading Python from 3.10.5 to 3.9.0 worked for me. I hope this helps.
A:
Tried & tested on Mac pro:
Check your python version on your terminal
python3 --version
OR
python --version
If the Python version is 3.9 or above, then update the backports.zoneinfo line in your "requirements.txt" file to:
backports.zoneinfo;python_version<"3.9"
Run:
pip3 install -r requirements.txt
Test running your app again; it should work at this stage.
A:
Creating the venv with Python 3.9 helped for me.
python3.9 is the default version on my system:
python3.9 -m venv venv
A:
I was facing the same error while deploying my Scrapy spider onto Heroku,
but using python-3.9.15 in runtime.txt resolved the issue,
even though the Python installed in my venv was 3.8.13.
You can try one of the following; these are the runtimes recommended by Heroku, and you can read the full documentation here:
Supported runtimes
python-3.10.8 on all supported stacks (recommended)
python-3.9.15 on all supported stacks
python-3.8.15 on Heroku-18 and Heroku-20 only
python-3.7.15 on Heroku-18 and Heroku-20 only
|
ERROR: Could not build wheels for backports.zoneinfo, which is required to install pyproject.toml-based projects
|
For the past few days, the Heroku build has been returning this error when I try to deploy a Django application. The Django code and file structure follow Django's official documentation, and a Procfile is added in the root folder.
Log -
-----> Building on the Heroku-20 stack
-----> Determining which buildpack to use for this app
-----> Python app detected
-----> No Python version was specified. Using the buildpack default: python-3.10.4
To use a different version, see: https://devcenter.heroku.com/articles/python-runtimes
Building wheels for collected packages: backports.zoneinfo
Building wheel for backports.zoneinfo (pyproject.toml): started
Building wheel for backports.zoneinfo (pyproject.toml): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /app/.heroku/python/bin/python /app/.heroku/python/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmpqqu_1qow
cwd: /tmp/pip-install-txfn1ua9/backports-zoneinfo_a462ef61051d49e7bf54e715f78a34f1
Complete output (41 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.10
creating build/lib.linux-x86_64-3.10/backports
copying src/backports/__init__.py -> build/lib.linux-x86_64-3.10/backports
creating build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/_zoneinfo.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/_tzpath.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/_common.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/_version.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/__init__.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo
running egg_info
writing src/backports.zoneinfo.egg-info/PKG-INFO
writing dependency_links to src/backports.zoneinfo.egg-info/dependency_links.txt
writing requirements to src/backports.zoneinfo.egg-info/requires.txt
writing top-level names to src/backports.zoneinfo.egg-info/top_level.txt
reading manifest file 'src/backports.zoneinfo.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.png' under directory 'docs'
warning: no files found matching '*.svg' under directory 'docs'
no previously-included directories found matching 'docs/_build'
no previously-included directories found matching 'docs/_output'
adding license file 'LICENSE'
adding license file 'licenses/LICENSE_APACHE'
writing manifest file 'src/backports.zoneinfo.egg-info/SOURCES.txt'
copying src/backports/zoneinfo/__init__.pyi -> build/lib.linux-x86_64-3.10/backports/zoneinfo
copying src/backports/zoneinfo/py.typed -> build/lib.linux-x86_64-3.10/backports/zoneinfo
running build_ext
building 'backports.zoneinfo._czoneinfo' extension
creating build/temp.linux-x86_64-3.10
creating build/temp.linux-x86_64-3.10/lib
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/app/.heroku/python/include/python3.10 -c lib/zoneinfo_module.c -o build/temp.linux-x86_64-3.10/lib/zoneinfo_module.o -std=c99
lib/zoneinfo_module.c: In function ‘zoneinfo_fromutc’:
lib/zoneinfo_module.c:600:19: error: ‘_PyLong_One’ undeclared (first use in this function); did you mean ‘_PyLong_New’?
600 | one = _PyLong_One;
| ^~~~~~~~~~~
| _PyLong_New
lib/zoneinfo_module.c:600:19: note: each undeclared identifier is reported only once for each function it appears in
error: command '/usr/bin/gcc' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for backports.zoneinfo
Failed to build backports.zoneinfo
ERROR: Could not build wheels for backports.zoneinfo, which is required to install pyproject.toml-based projects
! Push rejected, failed to compile Python app.
! Push failed
Thanks.
|
[
"I was having the same error while deploying my application on heroku and well the problem is actually that when you are deploying it on heroku so heroku by default uses python version 3.10.x and backports.zoneinfo is not working properly with this version so I suggest you to switch to version 3.8.x(stable).\nIn order to do that you need to tell heroku to switch that version and it can be done as follows :\n\nCreate runtime.txt in root directory.\npython-3.8.10 <- write this in 'runtime.txt' there as to specify the version.\nheroku will then install this version and you will be not getting anymore error.\n\nPS : worked it for me and later when heroku removes this bug you can switch to python latest version.\n",
"Avoid installing backports.zoneinfo when using python >= 3.9\nEdit your requirements.txt file\nFROM:\nbackports.zoneinfo==0.2.1\n\nTO:\nbackports.zoneinfo;python_version<\"3.9\"\n\nOR:\nbackports.zoneinfo==0.2.1;python_version<\"3.9\"\n\nYou can read more about this here and here\n",
"I was facing the same error while creating my container. I solved the error by using the exact version of my Python venv i.e. 3.8.9\nEarlier for the image, I was using 3.8-alpine for a lighter version of the image. But, it wasn't working out for me and got the same error as yours.\n",
"this type of problems occur when you forget to modify your requirements.txt file and heroku server uses default settings like it uses python updated version which is not stable.\nuse the following commands and you will be get rid of this type of problem.\n$ git status\n\nyou need to modify requirements.txt\n$ git add-A\n$ git commit -m \"Python VERSION-3.8.10\"\nthen push your server and i am sure you will be get rid of this type of problem.\nIn order to push your server...\n$ git push heroku master\n\n",
"Downgrading Python from 3.10.5 to 3.9.0 worked for me. I hope this helps.\n",
"Tried & tested on Mac pro:\nCheck your python version on your terminal\npython3 --version\n\nOR\npython --version\n\nIf the python version is 3.9 & above , then update the following (backports.zoneinfo) line in your \"requirements.txt\" file to :\n\nbackports.zoneinfo;python_version<\"3.9\"\n\nRun -\npip3 install -r requirements.txt\n\ntest running your app again , should work at this stage.\n",
"Install venv with python3.9 version helped for me.\npython3.9 default version in my system\npython3.9 -m venv venv\n\n",
"I was facing the same error while deploying my Scrapy spider onto Heroku\nbut using python-3.9.15 in runtime.txt resolved the issue.\nhowever, the python installed in my venv was 3.8.13\nyou can try one of these I don't know their actual meaning but these are recommended by Heroku you can read the full documentation here\nSupported runtimes\npython-3.10.8 on all supported stacks (recommended)\npython-3.9.15 on all supported stacks\npython-3.8.15 on Heroku-18 and Heroku-20 only\npython-3.7.15 on Heroku-18 and Heroku-20 only\n\n"
] |
[
39,
35,
7,
3,
1,
1,
0,
0
] |
[] |
[] |
[
"django",
"heroku",
"python"
] |
stackoverflow_0071712258_django_heroku_python.txt
|
Q:
Totally stuck on putting code into functions
I've written code which loops 5 times for product and 5 times for product price. This saves the product and the price into separate arrays which then goes through a bubble sorting algorithm, sorting the price and corresponding products from high to low.
It then calculates the total discounting the fifth (cheapest) product as free.
We're required to put everything into functions, but I don't know where to start. We also have to use an if-statement, which I think is best placed inside the for-loop, to handle an empty ("") entry by the user and return to the product/price input until 5 items have been entered.
Then we're to display the product and price side by side, but I don't know how to do this, as the arrays are separate.
This is what I have so far:
array = []
array2 = []

for index in range(5):
    User_Product = input("What is the name of the product you wish to buy?")
    array.append(User_Product)
    Product_Price = int(input("What is the price of the product?"))
    array2.append(Product_Price)

totalPrice = array2

swapped = True
while swapped == True:
    swapped = False
    for x in range(1, len(array2)):
        if array2[x] > array2[x - 1]:
            array2[x - 1], array2[x] = array2[x], array2[x - 1]
            array[x - 1], array[x] = array[x], array[x - 1]
            swapped = True

print(array2)
print(array)

totalPrice[4] = 0
print("The total for your order is £",sum(totalPrice), "the fifth product is free!")
A:
It would be simpler to have only one container, with dictionaries or tuples inside, but having two arrays can do.
Based on my advanced knowledge of commerce, I think you should have a condition that "if the price sum is more than XXX money, then the fifth item is free".
Which literally translates into:
if sum(totalPrice) > XXX:
    totalPrice[4] = 0
    print("The total for your order is £",sum(totalPrice), "the fifth product is free!")
else:
    # keep the price for the fifth
    print("The total for your order is £",sum(totalPrice), " if you had purchased for more, the fifth would have been free!")
As for your function problem, I think it is part of your (I assume) assignment. Do you understand how functions work?
I would expect something like read_products_and_prices, sort_products_on_their_price, apply_reduction_if_applicable, tell_the_final_price, ... each taking parameters and returning results.
But this website is not for doing all of your homework. Clarify what is the exact problem and we would gladly help :)
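To give a concrete starting point without doing the whole assignment, here is a minimal sketch of that decomposition (the function names and the free-item rule are illustrative; only the input-reading function touches input(), so the rest stays testable):

```python
def read_products_and_prices(count=5):
    """Ask the user for `count` product/price pairs, re-prompting on an empty name."""
    names, prices = [], []
    while len(names) < count:
        name = input("What is the name of the product you wish to buy?")
        if name == "":
            continue  # return to the product prompt until 5 products are entered
        names.append(name)
        prices.append(int(input("What is the price of the product?")))
    return names, prices

def sort_products_on_their_price(names, prices):
    """Bubble-sort both lists in parallel, highest price first."""
    swapped = True
    while swapped:
        swapped = False
        for x in range(1, len(prices)):
            if prices[x] > prices[x - 1]:
                prices[x - 1], prices[x] = prices[x], prices[x - 1]
                names[x - 1], names[x] = names[x], names[x - 1]
                swapped = True
    return names, prices

def total_with_cheapest_free(prices):
    """Sum the prices; the last (cheapest after sorting) item is free."""
    return sum(prices[:-1])

def show_order(names, prices):
    """Print each product and its price side by side."""
    for name, price in zip(names, prices):
        print(f"{name:<15} £{price}")

if __name__ == "__main__":
    # Demo with hardcoded data instead of read_products_and_prices()
    names, prices = sort_products_on_their_price(["tea", "mug", "pot"], [2, 9, 5])
    show_order(names, prices)
    print("The total for your order is £", total_with_cheapest_free(prices))
```

With this split, the main program becomes a short sequence of calls, and each piece can be tested on its own.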
|
Totally stuck on putting code into functions
|
I've written code which loops 5 times for product and 5 times for product price. This saves the product and the price into separate arrays which then goes through a bubble sorting algorithm, sorting the price and corresponding products from high to low.
It then calculates the total discounting the fifth (cheapest) product as free.
We're required to put everything into functions, but I don't know where to start. We also have to use an if-statement, which I think is best placed inside the for-loop, to handle an empty ("") entry by the user and return to the product/price input until 5 items have been entered.
Then we're to display the product and price side by side, but I don't know how to do this, as the arrays are separate.
This is what I have so far:
array = []
array2 = []

for index in range(5):
    User_Product = input("What is the name of the product you wish to buy?")
    array.append(User_Product)
    Product_Price = int(input("What is the price of the product?"))
    array2.append(Product_Price)

totalPrice = array2

swapped = True
while swapped == True:
    swapped = False
    for x in range(1, len(array2)):
        if array2[x] > array2[x - 1]:
            array2[x - 1], array2[x] = array2[x], array2[x - 1]
            array[x - 1], array[x] = array[x], array[x - 1]
            swapped = True

print(array2)
print(array)

totalPrice[4] = 0
print("The total for your order is £",sum(totalPrice), "the fifth product is free!")
|
[
"It would be simpler to have only one container, with dictionaries or tuples inside, but having two arrays can do.\nBased on my advanced knowledge of commerce, I think you should have a condition that \"if the price sum is more than XXX money, then the fifth item is free\".\nWhich literally translates into :\nif sum(totalPrice) > XXX:\n totalPrice[4] = 0\n print(\"The total for your order is £\",sum(totalPrice), \"the fifth product is free!\")\nelse:\n # keep the price for the fifth\n print(\"The total for your order is £\",sum(totalPrice), \" if you had purchased for more, the fifth would have been free!\")\n\nAs for your function problem, I think it is part of your (I assume) assignment. Did you understand how functions work ?\nI would expect something like read_products_and_prices, sort_products_on_their_price, apply_reduction_if_applicable, tell_the_final_price, ... each taking parameters and returning results.\nBut this website is not for doing all of your homework. Clarify what is the exact problem and we would gladly help :)\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074462579_python_python_3.x.txt
|
Q:
Csv file with multiple lines in the same cell
I am trying to write down a CSV using python with multiple lines inside the same cell. For example i want the next result:
But I am getting this result:
I have tried several ways to insert a "\n" between both elements of the cell but not working. My last attempt was the next piece of code:
import csv

f=open("prueba.csv","a",newline="")
header=["Prueba","Prueba2"]
writer=csv.writer(f)
writer.writerow(header)
prueba11="123"
prueba21="123"
prueba22="124"
prueba1=prueba11
prueba2=prueba21+"\n"+(prueba22)
prueba_write2=str(prueba2)
prueba_write1=str(prueba1)
row=[prueba_write1,prueba_write2]
writer.writerow(row)
f.close()
Does somebody know if it is a easy way to get the desired result?
Thanks!
A:
please try:
prueba2=str(f'{prueba21}\n{prueba22}')
should result:
If you are concatenating variables with strings, try to use f-strings like f"Hello {variable}" (recommended) or "Hello, %s." % variable.
If you search for Python f-strings or string formatting on Google, you'll get a better idea than I can give here.
Also, if you have time, check this article/chapter from the book "Automate the Boring Stuff with Python" by Al Sweigart -> MANIPULATING STRINGS; it helped me at the beginning of my Python journey.
I had to drag the row wider in excel to see the "124" part.
Hope it helps.
A:
To be honest, I can't think of a use case for this, but since you asked, here is one way to do it:
I changed a few things in the code to make it cleaner:

Use a context manager instead of opening/closing the file by hand
(for me) I needed to set the delimiter to ; so I could open the resulting CSV in Excel and get the desired result
Define the constant strings outside the context manager
Build your cell values with f-strings
header=["Prueba","Prueba2"]
p1 = "123"
p2 = "124"
with open("prueba.csv", "a", newline="") as f:
    writer = csv.writer(f, delimiter=';')
    writer.writerow(header)

    col1 = f"\n\n{p1}"
    col2 = f"{p1}\n{p2}\n"

    row = [col1, col2]
    print(row)
    writer.writerow(row)
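One detail worth knowing: the csv module handles embedded newlines itself by quoting the cell, so the file round-trips cleanly. A small self-contained check (using an in-memory buffer instead of a real file):

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Prueba", "Prueba2"])
writer.writerow(["123", "123\n124"])  # second cell spans two lines

# Read it back: the writer quoted the multi-line cell, so the
# reader reconstructs it as a single field containing a newline.
buf.seek(0)
rows = list(csv.reader(buf))
print(rows[1][1])
```

This is also why the file must be opened with newline="" as in the answer above: it lets the csv module control line endings itself.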
|
Csv file with multiple lines in the same cell
|
I am trying to write down a CSV using python with multiple lines inside the same cell. For example i want the next result:
But I am getting this result:
I have tried several ways to insert a "\n" between both elements of the cell but not working. My last attempt was the next piece of code:
import csv

f=open("prueba.csv","a",newline="")
header=["Prueba","Prueba2"]
writer=csv.writer(f)
writer.writerow(header)
prueba11="123"
prueba21="123"
prueba22="124"
prueba1=prueba11
prueba2=prueba21+"\n"+(prueba22)
prueba_write2=str(prueba2)
prueba_write1=str(prueba1)
row=[prueba_write1,prueba_write2]
writer.writerow(row)
f.close()
Does somebody know if it is a easy way to get the desired result?
Thanks!
|
[
"please try:\nprueba2=str(f'{prueba21}\\n{prueba22}')\n\nshould result:\n\nIf you are concatenating variables with strings, try to use f\"Hello {variable}\" (fstrings) - recommended or \"Hello, %s.\" % variable.\nIf you search python fstrings or string formating for python on google, you`ll have a better idea than i wrote here..\nAlso, if you have time, check this article/chapter from the book \"Automate the Boring Stuff with Python\" by Al Sweigart -> MANIPULATING STRINGS it helped me at the beginning of my python journey.\nI had to drag the row wider in excel to see the \"124\" part.\nHope it helps.\n",
"To be honest, I can't imagine a usecase why you want to achieve that, but since you asked for it, here is one way to do it:\nI changed a few things in the code to make it look more clean:\n\nUse context manager insted of open/close the file be hand\n(for me) I needed to add delimiter to ; so I could open the result csv in Excel and got the desired result\ndefine the constant strings outside the context manager\nbuild your cell values with f-strings\n\nheader=[\"Prueba\",\"Prueba2\"]\np1 = \"123\"\np2 = \"124\"\n\nwith open(\"prueba.csv\", \"a\", newline=\"\") as f:\n writer = csv.writer(f, delimiter=';')\n writer.writerow(header)\n\n col1 = f\"\\n\\n{p1}\"\n col2 = f\"{p1}\\n{p2}\\n\"\n\n row = [col1, col2]\n print(row)\n writer.writerow(row)\n\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"csv",
"python"
] |
stackoverflow_0074461758_csv_python.txt
|
Q:
Python: Convert PDF to DOC
How can I convert a PDF file to docx? Is there a way of doing this using Python?
I've seen some pages that allow the user to upload a PDF and return a DOC file, like PdfToWord.
Thanks in advance
A:
If you have LibreOffice installed
lowriter --invisible --convert-to doc '/your/file.pdf'
If you want to use Python for this:
import os
import subprocess
for top, dirs, files in os.walk('/my/pdf/folder'):
    for filename in files:
        if filename.endswith('.pdf'):
            abspath = os.path.join(top, filename)
            subprocess.call('lowriter --invisible --convert-to doc "{}"'
                            .format(abspath), shell=True)
A:
This is difficult because PDFs are presentation-oriented while Word documents are content-oriented. I have tested both of the following projects and can recommend them.
PyPDF2
PDFMiner
However, you are most definitely going to lose presentational aspects in the conversion.
A:
If you want to convert PDF -> MS Word type file like docx, I came across this.
Ahsin Shabbir wrote:
import glob
import win32com.client
import os
word = win32com.client.Dispatch("Word.Application")
word.visible = 0
pdfs_path = "" # folder where the .pdf files are stored
for i, doc in enumerate(glob.iglob(pdfs_path+"*.pdf")):
    print(doc)
    filename = doc.split('\\')[-1]
    in_file = os.path.abspath(doc)
    print(in_file)
    wb = word.Documents.Open(in_file)
    out_file = os.path.abspath(reqs_path +filename[0:-4]+ ".docx".format(i))
    print("outfile\n",out_file)
    wb.SaveAs2(out_file, FileFormat=16) # file format for docx
    print("success...")
    wb.Close()

word.Quit()
This worked like a charm for me, converting a 500-page PDF with formatting and images.
A:
You can use GroupDocs.Conversion Cloud SDK for python without installing any third-party tool or software.
Sample Python code:
# Import module
import groupdocs_conversion_cloud
# Get your app_sid and app_key at https://dashboard.groupdocs.cloud (free registration is required).
app_sid = "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
app_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Create instance of the API
convert_api = groupdocs_conversion_cloud.ConvertApi.from_keys(app_sid, app_key)
file_api = groupdocs_conversion_cloud.FileApi.from_keys(app_sid, app_key)
try:
    # upload source file to storage
    filename = 'Sample.pdf'
    remote_name = 'Sample.pdf'
    output_name = 'sample.docx'
    strformat = 'docx'

    request_upload = groupdocs_conversion_cloud.UploadFileRequest(remote_name, filename)
    response_upload = file_api.upload_file(request_upload)

    # Convert PDF to Word document
    settings = groupdocs_conversion_cloud.ConvertSettings()
    settings.file_path = remote_name
    settings.format = strformat
    settings.output_path = output_name

    loadOptions = groupdocs_conversion_cloud.PdfLoadOptions()
    loadOptions.hide_pdf_annotations = True
    loadOptions.remove_embedded_files = False
    loadOptions.flatten_all_fields = True

    settings.load_options = loadOptions

    convertOptions = groupdocs_conversion_cloud.DocxConvertOptions()
    convertOptions.from_page = 1
    convertOptions.pages_count = 1

    settings.convert_options = convertOptions

    request = groupdocs_conversion_cloud.ConvertDocumentRequest(settings)
    response = convert_api.convert_document(request)

    print("Document converted successfully: " + str(response))
except groupdocs_conversion_cloud.ApiException as e:
    print("Exception when calling get_supported_conversion_types: {0}".format(e.message))
I'm developer evangelist at aspose.
A:
Based on previous answers, this was the solution that worked best for me using Python 3.7.1:
import win32com.client
import os
# INPUT/OUTPUT PATH
pdf_path = r"""C:\path2pdf.pdf"""
output_path = r"""C:\output_folder"""
word = win32com.client.Dispatch("Word.Application")
word.visible = 0 # CHANGE TO 1 IF YOU WANT TO SEE WORD APPLICATION RUNNING AND ALL MESSAGES OR WARNINGS SHOWN BY WORD
# GET FILE NAME AND NORMALIZED PATH
filename = pdf_path.split('\\')[-1]
in_file = os.path.abspath(pdf_path)
# CONVERT PDF TO DOCX AND SAVE IT ON THE OUTPUT PATH WITH THE SAME INPUT FILE NAME
wb = word.Documents.Open(in_file)
out_file = os.path.abspath(output_path + '\\' + filename[0:-4] + ".docx")
wb.SaveAs2(out_file, FileFormat=16)
wb.Close()
word.Quit()
A:
With Adobe on your machine
If you have Adobe Acrobat on your machine, you can use the following function to save the PDF file as a docx file:
# Open PDF file, use Acrobat Exchange to save file as .docx file.
import win32com.client, win32com.client.makepy, os, winerror, errno, re
from win32com.client.dynamic import ERRORS_BAD_CONTEXT
def PDF_to_Word(input_file, output_file):
    ERRORS_BAD_CONTEXT.append(winerror.E_NOTIMPL)
    src = os.path.abspath(input_file)

    # Launch Adobe
    win32com.client.makepy.GenerateFromTypeLibSpec('Acrobat')
    adobe = win32com.client.DispatchEx('AcroExch.App')
    avDoc = win32com.client.DispatchEx('AcroExch.AVDoc')
    # Open file
    avDoc.Open(src, src)
    pdDoc = avDoc.GetPDDoc()
    jObject = pdDoc.GetJSObject()
    # Save as word document
    jObject.SaveAs(output_file, "com.adobe.acrobat.docx")
    avDoc.Close(-1)
Be mindful that the input_file and the output_file need to be full paths, as follows:
D:\OneDrive...\file.pdf
D:\OneDrive...\dafad.docx
A:
For Linux users with LibreOffice installed try
soffice --invisible --convert-to doc file_name.pdf
If you get an error like Error: no export filter found, aborting, try this:
soffice --infilter="writer_pdf_import" --convert-to doc file_name.pdf
|
Python: Convert PDF to DOC
|
How can I convert a PDF file to docx? Is there a way of doing this using Python?
I've seen some pages that allow the user to upload a PDF and return a DOC file, like PdfToWord.
Thanks in advance
|
[
"If you have LibreOffice installed\nlowriter --invisible --convert-to doc '/your/file.pdf'\n\nIf you want to use Python for this:\nimport os\nimport subprocess\n\nfor top, dirs, files in os.walk('/my/pdf/folder'):\n for filename in files:\n if filename.endswith('.pdf'):\n abspath = os.path.join(top, filename)\n subprocess.call('lowriter --invisible --convert-to doc \"{}\"'\n .format(abspath), shell=True)\n\n",
"This is difficult because PDFs are presentation oriented and word documents are content oriented. I have tested both and can recommend the following projects.\n\nPyPDF2\nPDFMiner\n\nHowever, you are most definitely going to lose presentational aspects in the conversion.\n",
"If you want to convert PDF -> MS Word type file like docx, I came across this.\n\nAhsin Shabbir wrote:\n\nimport glob\nimport win32com.client\nimport os\n\nword = win32com.client.Dispatch(\"Word.Application\")\nword.visible = 0\n\npdfs_path = \"\" # folder where the .pdf files are stored\nfor i, doc in enumerate(glob.iglob(pdfs_path+\"*.pdf\")):\n print(doc)\n filename = doc.split('\\\\')[-1]\n in_file = os.path.abspath(doc)\n print(in_file)\n wb = word.Documents.Open(in_file)\n out_file = os.path.abspath(reqs_path +filename[0:-4]+ \".docx\".format(i))\n print(\"outfile\\n\",out_file)\n wb.SaveAs2(out_file, FileFormat=16) # file format for docx\n print(\"success...\")\n wb.Close()\n\nword.Quit()\n\nThis worked like a charm for me, converted 500 pages PDF with formatting and images. \n",
"You can use GroupDocs.Conversion Cloud SDK for python without installing any third-party tool or software.\nSample Python code:\n# Import module\nimport groupdocs_conversion_cloud\n\n# Get your app_sid and app_key at https://dashboard.groupdocs.cloud (free registration is required).\napp_sid = \"xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx\"\napp_key = \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n\n# Create instance of the API\nconvert_api = groupdocs_conversion_cloud.ConvertApi.from_keys(app_sid, app_key)\nfile_api = groupdocs_conversion_cloud.FileApi.from_keys(app_sid, app_key)\n\ntry:\n\n #upload soruce file to storage\n filename = 'Sample.pdf'\n remote_name = 'Sample.pdf'\n output_name= 'sample.docx'\n strformat='docx'\n\n request_upload = groupdocs_conversion_cloud.UploadFileRequest(remote_name,filename)\n response_upload = file_api.upload_file(request_upload)\n #Convert PDF to Word document\n settings = groupdocs_conversion_cloud.ConvertSettings()\n settings.file_path =remote_name\n settings.format = strformat\n settings.output_path = output_name\n\n loadOptions = groupdocs_conversion_cloud.PdfLoadOptions()\n loadOptions.hide_pdf_annotations = True\n loadOptions.remove_embedded_files = False\n loadOptions.flatten_all_fields = True\n\n settings.load_options = loadOptions\n\n convertOptions = groupdocs_conversion_cloud.DocxConvertOptions()\n convertOptions.from_page = 1\n convertOptions.pages_count = 1\n\n settings.convert_options = convertOptions\n . \n request = groupdocs_conversion_cloud.ConvertDocumentRequest(settings)\n response = convert_api.convert_document(request)\n\n print(\"Document converted successfully: \" + str(response))\nexcept groupdocs_conversion_cloud.ApiException as e:\n print(\"Exception when calling get_supported_conversion_types: {0}\".format(e.message))\n\nI'm developer evangelist at aspose.\n",
"Based on previews answers this was the solution that worked best for me using Python 3.7.1\nimport win32com.client\nimport os\n\n# INPUT/OUTPUT PATH\npdf_path = r\"\"\"C:\\path2pdf.pdf\"\"\"\noutput_path = r\"\"\"C:\\output_folder\"\"\"\n\nword = win32com.client.Dispatch(\"Word.Application\")\nword.visible = 0 # CHANGE TO 1 IF YOU WANT TO SEE WORD APPLICATION RUNNING AND ALL MESSAGES OR WARNINGS SHOWN BY WORD\n\n# GET FILE NAME AND NORMALIZED PATH\nfilename = pdf_path.split('\\\\')[-1]\nin_file = os.path.abspath(pdf_path)\n\n# CONVERT PDF TO DOCX AND SAVE IT ON THE OUTPUT PATH WITH THE SAME INPUT FILE NAME\nwb = word.Documents.Open(in_file)\nout_file = os.path.abspath(output_path + '\\\\' + filename[0:-4] + \".docx\")\nwb.SaveAs2(out_file, FileFormat=16)\nwb.Close()\nword.Quit()\n\n",
"With Adobe on your machine\nIf you have adobe acrobate on your machine you can use the following function that enables you to save the PDF file as docx file\n# Open PDF file, use Acrobat Exchange to save file as .docx file.\n\nimport win32com.client, win32com.client.makepy, os, winerror, errno, re\nfrom win32com.client.dynamic import ERRORS_BAD_CONTEXT\n\ndef PDF_to_Word(input_file, output_file):\n \n ERRORS_BAD_CONTEXT.append(winerror.E_NOTIMPL)\n src = os.path.abspath(input_file)\n \n # Lunch adobe\n win32com.client.makepy.GenerateFromTypeLibSpec('Acrobat')\n adobe = win32com.client.DispatchEx('AcroExch.App')\n avDoc = win32com.client.DispatchEx('AcroExch.AVDoc')\n # Open file\n avDoc.Open(src, src)\n pdDoc = avDoc.GetPDDoc()\n jObject = pdDoc.GetJSObject()\n # Save as word document\n jObject.SaveAs(output_file, \"com.adobe.acrobat.docx\")\n avDoc.Close(-1)\n\nBe mindful that the input_file and the output_file need to be as follow:\n\nD:\\OneDrive...\\file.pdf\nD:\\OneDrive...\\dafad.docx\n\n",
"For Linux users with LibreOffice installed try\nsoffice --invisible --convert-to doc file_name.pdf\n\nIf you get an error like Error: no export filter found, abording try this\nsoffice --infilter=\"writer_pdf_import\" --convert-to doc file_name.pdf\n\n"
] |
[
20,
9,
7,
2,
1,
0,
0
] |
[] |
[] |
[
"bash",
"doc",
"docx",
"pdf",
"python"
] |
stackoverflow_0026358281_bash_doc_docx_pdf_python.txt
|
Q:
Pytorch RuntimeError: CUDA out of memory with a huge amount of free memory
While training the model, I encountered the following problem:
RuntimeError: CUDA out of memory. Tried to allocate 304.00 MiB (GPU 0; 8.00 GiB total capacity; 142.76 MiB already allocated; 6.32 GiB free; 158.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
As we can see, the error occurs when trying to allocate 304 MiB of memory, while 6.32 GiB is free! What is the problem? As I can see, the suggested option is to set max_split_size_mb to avoid fragmentation. Will it help and how to do it correctly?
This is my version of PyTorch:
torch==1.10.2+cu113
torchvision==0.11.3+cu113
torchaudio===0.10.2+cu113
A:
I tried for hours until I found it out:
reduce the batch size
and resize the input images
A:
Your problem may be due to fragmentation of your GPU memory. You may want to empty the cached memory held by the caching allocator:
import torch
torch.cuda.empty_cache()
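As for the max_split_size_mb option that the error message itself suggests: it is passed through the PYTORCH_CUDA_ALLOC_CONF environment variable, which must be set before the first CUDA allocation. A minimal sketch (the value 128 here is only an example, not a recommendation):

```python
import os

# Must be set before the first CUDA allocation -- easiest is to set it
# before importing torch, or in the shell that launches the script:
#   PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python train.py
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import afterwards so the allocator picks the setting up
```

Lower values make the allocator split large cached blocks more aggressively, which can help with fragmentation at some cost in allocation speed.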
A:
I was trying this command:
python3 val.py --weights ./weights/yolov5l-xs-1.pt --img 1996 --data ./data/VisDrone.yaml
and I have a 24G Titan video card.
Then I reduced the image size and it worked for me:
python3 val.py --weights ./weights/yolov5l-xs-1.pt --img 1280 --data ./data/VisDrone.yaml
Results:
Class Images Labels P R mAP@.5 mAP@.5:.95: 100%|████████████████████████████████| 18/18 [00:50<00:00, 2.79s/it]
all 548 38759 0.653 0.537 0.584 0.375
pedestrian 548 8844 0.74 0.631 0.708 0.375
people 548 5125 0.677 0.506 0.574 0.258
bicycle 548 1287 0.541 0.377 0.41 0.213
car 548 14064 0.828 0.868 0.904 0.681
van 548 1975 0.636 0.566 0.601 0.453
truck 548 750 0.595 0.516 0.538 0.388
tricycle 548 1045 0.601 0.416 0.457 0.288
awning-tricycle 548 532 0.387 0.242 0.245 0.173
bus 548 251 0.782 0.653 0.725 0.565
motor 548 4886 0.744 0.598 0.674 0.355
|
Pytorch RuntimeError: CUDA out of memory with a huge amount of free memory
|
While training the model, I encountered the following problem:
RuntimeError: CUDA out of memory. Tried to allocate 304.00 MiB (GPU 0; 8.00 GiB total capacity; 142.76 MiB already allocated; 6.32 GiB free; 158.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
As we can see, the error occurs when trying to allocate 304 MiB of memory, while 6.32 GiB is free! What is the problem? As I can see, the suggested option is to set max_split_size_mb to avoid fragmentation. Will it help and how to do it correctly?
This is my version of PyTorch:
torch==1.10.2+cu113
torchvision==0.11.3+cu113
torchaudio===0.10.2+cu113
|
[
"I tried hours til i found out:\nto reduce the batch size\nand the resize my input image image size\n",
"Your problem may be due to fragmentation of your GPU memory.You may want to empty your cached memory used by caching allocator.\nimport torch\ntorch.cuda.empty_cache()\n\n",
"I was trying this command:\npython3 val.py --weights ./weights/yolov5l-xs-1.pt --img 1996 --data ./data/VisDrone.yaml\n\nand I have a 24G Titan video Card.\nThen I reduced the image size and worked for me. to:\npython3 val.py --weights ./weights/yolov5l-xs-1.pt --img 1280 --data ./data/VisDrone.yaml\n\nResults:\nClass Images Labels P R mAP@.5 mAP@.5:.95: 100%|████████████████████████████████| 18/18 [00:50<00:00, 2.79s/it]\n all 548 38759 0.653 0.537 0.584 0.375\n pedestrian 548 8844 0.74 0.631 0.708 0.375\n people 548 5125 0.677 0.506 0.574 0.258\n bicycle 548 1287 0.541 0.377 0.41 0.213\n car 548 14064 0.828 0.868 0.904 0.681\n van 548 1975 0.636 0.566 0.601 0.453\n truck 548 750 0.595 0.516 0.538 0.388\n tricycle 548 1045 0.601 0.416 0.457 0.288\n awning-tricycle 548 532 0.387 0.242 0.245 0.173\n bus 548 251 0.782 0.653 0.725 0.565\n motor 548 4886 0.744 0.598 0.674 0.355\n\n"
] |
[
1,
0,
0
] |
[
"Exit the docker image and stop the docker and start it again.\n"
] |
[
-3
] |
[
"computer_vision",
"machine_learning",
"python",
"pytorch"
] |
stackoverflow_0071498324_computer_vision_machine_learning_python_pytorch.txt
|
Q:
How to fix this TypeError?
I get this error: TypeError: clear() missing 1 required positional argument: 'self'
from this bit of code as far as I know:
def drawHUD(self,score):
self.hud.clear()
self.hud.color("white")
self.hud.penup()
self.hud.hideturtle()
self.hud.goto(0, -400)
I'm not really sure what to do. I'm expecting it to collide with the fruit and then grow longer and the fruit appears elsewhere
A:
This means that the call to clear expects to be given an argument. In your call to clear you don't pass any.
You should take a look at its documentation and/or its declaration.
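For illustration, this error typically appears when a method is called on the class itself rather than on an instance, so Python has nothing to pass as self (a toy class below, not the asker's turtle HUD):

```python
class Hud:
    def clear(self):
        return "cleared"

h = Hud()
print(h.clear())   # works: Python passes h as self automatically

try:
    Hud.clear()    # no instance -> 'self' is the missing positional argument
except TypeError as e:
    print(e)
```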
|
How to fix this TypeError?
|
I get this error: TypeError: clear() missing 1 required positional argument: 'self'
from this bit of code as far as I know:
def drawHUD(self,score):
self.hud.clear()
self.hud.color("white")
self.hud.penup()
self.hud.hideturtle()
self.hud.goto(0, -400)
I'm not really sure what to do. I'm expecting it to collide with the fruit and then grow longer and the fruit appears elsewhere
|
[
"This means that the call to clear expects to be given an argument. In your call to clear you dont pass any.\nYou should take a look at its documentation and/or its declaration.\n"
] |
[
0
] |
[] |
[] |
[
"project",
"python",
"typeerror"
] |
stackoverflow_0074468243_project_python_typeerror.txt
|
Q:
Week number of the month?
Does python offer a way to easily get the current week of the month (1:4) ?
A:
In order to use straight division, the day of month for the date you're looking at needs to be adjusted according to the position (within the week) of the first day of the month. So, if your month happens to start on a Monday (the first day of the week), you can just do division as suggested above. However, if the month starts on a Wednesday, you'll want to add 2 and then do the division. This is all encapsulated in the function below.
from math import ceil
def week_of_month(dt):
""" Returns the week of the month for the specified date.
"""
first_day = dt.replace(day=1)
dom = dt.day
adjusted_dom = dom + first_day.weekday()
return int(ceil(adjusted_dom/7.0))
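A quick sanity check of the function above (September 2015 starts on a Tuesday, so the 14th lands in week 3):

```python
import datetime
from math import ceil

def week_of_month(dt):
    # same logic as above: shift the day by the weekday of the 1st
    first_day = dt.replace(day=1)
    return int(ceil((dt.day + first_day.weekday()) / 7.0))

print(week_of_month(datetime.date(2015, 9, 1)))   # 1
print(week_of_month(datetime.date(2015, 9, 14)))  # 3
```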
A:
I know this is years old, but I spent a lot of time trying to find this answer. I made my own method and thought I should share.
The calendar module has a monthcalendar method that returns a 2D array where each row represents a week. For example:
import calendar
calendar.monthcalendar(2015,9)
result:
[[0,0,1,2,3,4,5],
[6,7,8,9,10,11,12],
[13,14,15,16,17,18,19],
[20,21,22,23,24,25,26],
[27,28,29,30,0,0,0]]
So numpy's where is your friend here. And I'm in USA so I want the week to start on Sunday and the first week to be labelled 1:
import calendar
import numpy as np
calendar.setfirstweekday(6)
def get_week_of_month(year, month, day):
x = np.array(calendar.monthcalendar(year, month))
week_of_month = np.where(x==day)[0][0] + 1
return(week_of_month)
get_week_of_month(2015,9,14)
returns
3
A:
If your first week starts on the first day of the month you can use integer division:
import datetime
day_of_month = datetime.datetime.now().day
week_number = (day_of_month - 1) // 7 + 1
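A quick check of the integer-division formula on a few day numbers (weeks counted from the 1st of the month, regardless of weekday):

```python
# day 1-7 -> week 1, day 8-14 -> week 2, ...
for day in (1, 7, 8, 14, 15):
    print(day, (day - 1) // 7 + 1)
```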
A:
Check out the package Pendulum
>>> dt = pendulum.parse('2018-09-30')
>>> dt.week_of_month
5
A:
This version could be improved, but as a first look at the Python modules (datetime and calendar) I made this solution; I hope it could be useful:
from datetime import datetime
n = datetime.now()
#from django.utils.timezone import now
#n = now() #if you use django with timezone
from calendar import Calendar
cal = Calendar() # week starts Monday
#cal = Calendar(6) # week stars Sunday
weeks = cal.monthdayscalendar(n.year, n.month)
for x in range(len(weeks)):
if n.day in weeks[x]:
print x+1
A:
Josh's answer has to be tweaked slightly to accommodate the first day falling on a Sunday.
def get_week_of_month(date):
first_day = date.replace(day=1)
day_of_month = date.day
    if first_day.weekday() == 6:  # month starts on a Sunday: no offset needed
        adjusted_dom = day_of_month
    else:
        adjusted_dom = day_of_month + first_day.weekday()
return int(ceil(adjusted_dom/7.0))
A:
Check out the python calendar module
A:
def week_of_month(date_value):
week = date_value.isocalendar()[1] - date_value.replace(day=1).isocalendar()[1] + 1
return date_value.isocalendar()[1] if week < 0 else week
date_value should be in timestamp format
This will give the perfect answer in all the cases. It is purely based on ISO calendar
A:
I found a quite simple way:
import datetime
def week(year, month, day):
first_week_month = datetime.datetime(year, month, 1).isocalendar()[1]
if month == 1 and first_week_month > 10:
first_week_month = 0
user_date = datetime.datetime(year, month, day).isocalendar()[1]
if month == 1 and user_date > 10:
user_date = 0
return user_date - first_week_month
returns 0 if first week
A:
Josh's answer seems the best, but I think that we should take into account the fact that a week belongs to a month only if its Thursday falls into that month. At least that's what the ISO standard says.
According to that standard, a month can have up to 5 weeks. A day could belong to a month, but the week it belongs to may not.
I have taken into account that just by adding a simple
if (first_day.weekday()>3) :
return ret_val-1
else:
return ret_val
where ret_val is exactly Josh's calculated value. Tested on June 2017 (has 5 weeks) and on September 2017. Passing '2017-09-01' returns 0 because that day belongs to a week that does not belong to September.
The most correct way would be to have the method return both the week number and the month name the input day belongs to.
A:
A variation on @Manuel Solorzano's answer:
from calendar import monthcalendar
def get_week_of_month(year, month, day):
return next(
(
week_number
for week_number, days_of_week in enumerate(monthcalendar(year, month), start=1)
if day in days_of_week
),
None,
)
E.g.:
>>> get_week_of_month(2020, 9, 1)
1
>>> get_week_of_month(2020, 9, 30)
5
>>> get_week_of_month(2020, 5, 35)
None
A:
Say we have some month's calender as follows:
Mon Tue Wed Thur Fri Sat Sun
1 2 3
4 5 6 7 8 9 10
We say day 1 ~ 3 belongs to week 1 and day 4 ~ 10 belongs to week 2 etc.
In this case, I believe the week_of_month for a specific day should be calculated as follows:
import datetime
def week_of_month(year, month, day):
weekday_of_day_one = datetime.date(year, month, 1).weekday()
weekday_of_day = datetime.date(year, month, day)
return (day - 1)//7 + 1 + (weekday_of_day < weekday_of_day_one)
However, if instead we want to get the nth of the weekday that date is, such as day 1 is the 1st Friday, day 8 is the 2nd Friday, and day 6 is the 1st Wednesday, then we can simply return (day - 1)//7 + 1
A:
The answer you are looking for is (dm-dw+(dw-dm)%7)/7+1 where dm is the day of the month, dw is the day of the week, and % is the positive remainder.
This comes from relating the month offset (mo) and the week of the month (wm), where the month offset is how far into the week the first day starts. If we consider all of these variables to start at 0 we have
wm*7+dw = dm+mo
You can solve this modulo 7 for mo as that causes the wm variable drops out as it only appears as a multiple of 7
dw = dm+mo (%7)
mo = dw-dm (%7)
mo = (dw-dm)%7 (since the month offset is 0-6)
Then you just substitute the month offset into the original equation and solve for wm
wm*7+dw = dm+mo
wm*7 = dm-dw + mo
wm*7 = dm-dw + (dw-dm)%7
wm = (dm-dw + (dw-dm)%7) / 7
As dm and dw are always paired, these can be offset by any amount, so, switching everything to start a 1 only changes the the equation to (dm-dw + (dw-dm)%7)/7 + 1.
Of course the Python datetime library starts dm at 1 and dw at 0 (via date.weekday(), Monday first). So, assuming date is a datetime.date object, you can go with
(date.day-1-date.weekday() + (date.weekday()-date.day+1)%7) / 7 + 1
As the inner bit is always a multiple of 7 (it is literally dw*7), you can see that the first -date.weekday() simply adjusts the value backwards to the closest multiple of 7. Integer division does this too, so it can be further simplified to
(date.day-1 + (date.weekday()-date.day+1)%7) // 7 + 1
Be aware that weekday() puts Sunday at the end of the week.
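The closed form can be sanity-checked with the standard library, whose Monday-first day numbering matches the caveat above (Sunday ends the week); a sketch:

```python
import datetime

def week_of_month(d):
    dm = d.day        # day of month, starting at 1
    dw = d.weekday()  # day of week, Monday == 0
    return (dm - 1 + (dw - dm + 1) % 7) // 7 + 1

print(week_of_month(datetime.date(2015, 9, 1)))   # 1 (a Tuesday)
print(week_of_month(datetime.date(2015, 9, 14)))  # 3 (a Monday)
```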
A:
This should do it.
#! /usr/bin/env python2
import calendar, datetime
#FUNCTIONS
def week_of_month(date):
"""Determines the week (number) of the month"""
#Calendar object. 6 = Start on Sunday, 0 = Start on Monday
cal_object = calendar.Calendar(6)
month_calendar_dates = cal_object.itermonthdates(date.year,date.month)
day_of_week = 1
week_number = 1
for day in month_calendar_dates:
#add a week and reset day of week
if day_of_week > 7:
week_number += 1
day_of_week = 1
if date == day:
break
else:
day_of_week += 1
return week_number
#MAIN
example_date = datetime.date(2015,9,21)
print "Week",str(week_of_month(example_date))
#Returns 'Week 4'
A:
Move to the last day of the week within the month and divide by 7.
from math import ceil
def week_of_month(dt):
""" Returns the week of the month for the specified date.
"""
# weekday from monday == 0 ---> sunday == 6
last_day_of_week_of_month = dt.day + (7 - (1 + dt.weekday()))
return int(ceil(last_day_of_week_of_month/7.0))
A:
You can simply do as follow:
First extract the month and the week of year number
df['month'] = df['Date'].dt.month
df['week'] = df['Date'].dt.week
Then group by month and rank the week numbers
df['weekOfMonth'] = df.groupby('month')["week"].rank("dense", ascending=False)
A:
One more solution, where Sunday is first day of week, base Python only.
def week_of_month(dt):
    """ Returns the week of the month for the specified date.
    TREATS SUNDAY AS FIRST DAY OF WEEK!
    """
    first_day = dt.replace(day=1)
    dom = dt.day
    adjusted_dom = dom + (first_day.weekday() + 1) % 7
    return (adjusted_dom - 1) // 7 + 1
A:
def week_number(time_ctime = None):
import time
import calendar
if time_ctime == None:
time_ctime = str(time.ctime())
date = time_ctime.replace(' ',' ').split(' ')
months = {'Jan':1,'Feb':2,'Mar':3,'Apr':4,'May':5,'Jun':6,'Jul':7,'Aug':8,'Sep':9,'Oct':10,'Nov':11,'Dec':12}
week, day, month, year = (-1, str(date[2]), months[date[1]], int(date[-1]))
cal = calendar.monthcalendar(year,month)
for wk in range(len(cal)):
wstr = [str(x) for x in cal[wk]]
if day in wstr:
week = wk
break
return week
import time
print(week_number())
print(week_number(time.ctime()))
A:
An Easy way to get a week number of month;
if the datatype is datetime64 then
week_number_of_month = date_value.dayofweek
|
Week number of the month?
|
Does python offer a way to easily get the current week of the month (1:4) ?
|
[
"In order to use straight division, the day of month for the date you're looking at needs to be adjusted according to the position (within the week) of the first day of the month. So, if your month happens to start on a Monday (the first day of the week), you can just do division as suggested above. However, if the month starts on a Wednesday, you'll want to add 2 and then do the division. This is all encapsulated in the function below.\nfrom math import ceil\n\ndef week_of_month(dt):\n \"\"\" Returns the week of the month for the specified date.\n \"\"\"\n\n first_day = dt.replace(day=1)\n\n dom = dt.day\n adjusted_dom = dom + first_day.weekday()\n\n return int(ceil(adjusted_dom/7.0))\n\n",
"I know this is years old, but I spent a lot of time trying to find this answer. I made my own method and thought I should share.\nThe calendar module has a monthcalendar method that returns a 2D array where each row represents a week. For example:\nimport calendar\ncalendar.monthcalendar(2015,9)\n\nresult:\n[[0,0,1,2,3,4,5],\n [6,7,8,9,10,11,12],\n [13,14,15,16,17,18,19],\n [20,21,22,23,24,25,26],\n [27,28,29,30,0,0,0]]\n\nSo numpy's where is your friend here. And I'm in USA so I want the week to start on Sunday and the first week to be labelled 1:\nimport calendar\nimport numpy as np\ncalendar.setfirstweekday(6)\n\ndef get_week_of_month(year, month, day):\n x = np.array(calendar.monthcalendar(year, month))\n week_of_month = np.where(x==day)[0][0] + 1\n return(week_of_month)\n\nget_week_of_month(2015,9,14)\n\nreturns\n3\n\n",
"If your first week starts on the first day of the month you can use integer division:\n\nimport datetime\nday_of_month = datetime.datetime.now().day\nweek_number = (day_of_month - 1) // 7 + 1\n\n",
"Check out the package Pendulum\n>>> dt = pendulum.parse('2018-09-30')\n>>> dt.week_of_month\n5\n\n",
"This version could be improved but as a first look in python modules (datetime and calendar), I make this solution, I hope could be useful:\nfrom datetime import datetime\nn = datetime.now()\n#from django.utils.timezone import now\n#n = now() #if you use django with timezone\n\nfrom calendar import Calendar\ncal = Calendar() # week starts Monday\n#cal = Calendar(6) # week stars Sunday\n\nweeks = cal.monthdayscalendar(n.year, n.month)\nfor x in range(len(weeks)):\n if n.day in weeks[x]:\n print x+1\n\n",
"Josh's answer has to be tweaked slightly to accomodate the first day falling on a Sunday.\ndef get_week_of_month(date):\n first_day = date.replace(day=1)\n\n day_of_month = date.day\n\n if(first_day.weekday() == 6):\n adjusted_dom = (1 + first_day.weekday()) / 7\n else:\n adjusted_dom = day_of_month + first_day.weekday()\n\n return int(ceil(adjusted_dom/7.0))\n\n",
"Check out the python calendar module\n",
"def week_of_month(date_value):\n week = date_value.isocalendar()[1] - date_value.replace(day=1).isocalendar()[1] + 1\n return date_value.isocalendar()[1] if week < 0 else week\n\ndate_value should in timestamp format\nThis will give the perfect answer in all the cases. It is purely based on ISO calendar\n",
"I found a quite simple way:\nimport datetime\ndef week(year, month, day):\n first_week_month = datetime.datetime(year, month, 1).isocalendar()[1]\n if month == 1 and first_week_month > 10:\n first_week_month = 0\n user_date = datetime.datetime(year, month, day).isocalendar()[1]\n if month == 1 and user_date > 10:\n user_date = 0\n return user_date - first_week_month\n\nreturns 0 if first week\n",
"Josh' answer seems the best but I think that we should take into account the fact that a week belongs to a month only if its Thursday falls into that month. At least that's what the iso says.\nAccording to that standard, a month can have up to 5 weeks. A day could belong to a month, but the week it belongs to may not.\nI have taken into account that just by adding a simple \nif (first_day.weekday()>3) :\n return ret_val-1\n else:\n return ret_val\n\nwhere ret_val is exactly Josh's calculated value. Tested on June 2017 (has 5 weeks) and on September 2017. Passing '2017-09-01' returns 0 because that day belongs to a week that does not belong to September.\nThe most correct way would be to have the method return both the week number and the month name the input day belongs to.\n",
"A variation on @Manuel Solorzano's answer:\nfrom calendar import monthcalendar\ndef get_week_of_month(year, month, day):\n return next(\n (\n week_number\n for week_number, days_of_week in enumerate(monthcalendar(year, month), start=1)\n if day in days_of_week\n ),\n None,\n )\n\nE.g.:\n>>> get_week_of_month(2020, 9, 1)\n1\n>>> get_week_of_month(2020, 9, 30)\n5\n>>> get_week_of_month(2020, 5, 35)\nNone\n\n",
"Say we have some month's calender as follows:\nMon Tue Wed Thur Fri Sat Sun\n 1 2 3 \n4 5 6 7 8 9 10\n\nWe say day 1 ~ 3 belongs to week 1 and day 4 ~ 10 belongs to week 2 etc.\nIn this case, I believe the week_of_month for a specific day should be calculated as follows:\nimport datetime\ndef week_of_month(year, month, day):\n weekday_of_day_one = datetime.date(year, month, 1).weekday()\n weekday_of_day = datetime.date(year, month, day)\n return (day - 1)//7 + 1 + (weekday_of_day < weekday_of_day_one)\n\nHowever, if instead we want to get the nth of the weekday that date is, such as day 1 is the 1st Friday, day 8 is the 2nd Friday, and day 6 is the 1st Wednesday, then we can simply return (day - 1)//7 + 1\n",
"The answer you are looking for is (dm-dw+(dw-dm)%7)/7+1 where dm is the day of the month, dw is the day of the week, and % is the positive remainder.\nThis comes from relating the month offset (mo) and the week of the month (wm), where the month offset is how far into the week the first day starts. If we consider all of these variables to start at 0 we have\nwm*7+dw = dm+mo\n\nYou can solve this modulo 7 for mo as that causes the wm variable drops out as it only appears as a multiple of 7\ndw = dm+mo (%7)\nmo = dw-dm (%7)\nmo = (dw-dm)%7 (since the month offset is 0-6)\n\nThen you just substitute the month offset into the original equation and solve for wm\nwm*7+dw = dm+mo\nwm*7 = dm-dw + mo\nwm*7 = dm-dw + (dw-dm)%7\nwm = (dm-dw + (dw-dm)%7) / 7\n\nAs dm and dw are always paired, these can be offset by any amount, so, switching everything to start a 1 only changes the the equation to (dm-dw + (dw-dm)%7)/7 + 1.\nOf course the python datetime library starts dm at 1 and dw at 0. So, assuming date is a datatime.date object, you can go with\n(date.day-1-date.dayofweek() + (date.dayofweek()-date.day+1)%7) / 7 + 1\n\nAs the inner bit is always a multiple of 7 (it is literally dw*7), you can see that the first -date.dayofweek() simply adjusts the value backwards to closest multiple of 7. Integer division does this too, so it can be further simplified to\n(date.day-1 + (date.dayofweek()-date.day+1)%7) // 7 + 1\n\nBe aware that dayofweek() puts Sunday at the end of the week.\n",
"This should do it.\n#! /usr/bin/env python2\n\nimport calendar, datetime\n\n#FUNCTIONS\ndef week_of_month(date):\n \"\"\"Determines the week (number) of the month\"\"\"\n\n #Calendar object. 6 = Start on Sunday, 0 = Start on Monday\n cal_object = calendar.Calendar(6)\n month_calendar_dates = cal_object.itermonthdates(date.year,date.month)\n\n day_of_week = 1\n week_number = 1\n\n for day in month_calendar_dates:\n #add a week and reset day of week\n if day_of_week > 7:\n week_number += 1\n day_of_week = 1\n\n if date == day:\n break\n else:\n day_of_week += 1\n\n return week_number\n\n\n#MAIN\nexample_date = datetime.date(2015,9,21)\n\nprint \"Week\",str(week_of_month(example_date))\n#Returns 'Week 4'\n\n",
"Move to last day of week in month and divide to 7\nfrom math import ceil\n\ndef week_of_month(dt):\n \"\"\" Returns the week of the month for the specified date.\n \"\"\"\n # weekday from monday == 0 ---> sunday == 6\n last_day_of_week_of_month = dt.day + (7 - (1 + dt.weekday()))\n return int(ceil(last_day_of_week_of_month/7.0))\n\n",
"You can simply do as follow:\n\nFirst extract the month and the week of year number\n\ndf['month'] = df['Date'].dt.month\ndf['week'] = df['Date'].dt.week \n\n\nThen group by month and rank the week numbers\n\ndf['weekOfMonth'] = df.groupby('month')[\"week\"].rank(\"dense\", ascending=False)\n\n",
"One more solution, where Sunday is first day of week, base Python only.\ndef week_of_month(dt):\n\"\"\" Returns the week of the month for the specified date.\nTREATS SUNDAY AS FIRST DAY OF WEEK!\n\"\"\"\n first_day = dt.replace(day=1)\n dom = dt.day\n adjusted_dom = dom + (first_day.weekday() + 1) % 7\n return (adjusted_dom - 1) // 7 + 1\n\n",
"def week_number(time_ctime = None):\n import time\n import calendar\n if time_ctime == None:\n time_ctime = str(time.ctime())\n date = time_ctime.replace(' ',' ').split(' ')\n months = {'Jan':1,'Feb':2,'Mar':3,'Apr':4,'May':5,'Jun':6,'Jul':7,'Aug':8,'Sep':9,'Oct':10,'Nov':11,'Dec':12}\n week, day, month, year = (-1, str(date[2]), months[date[1]], int(date[-1]))\n cal = calendar.monthcalendar(year,month)\n for wk in range(len(cal)):\n wstr = [str(x) for x in cal[wk]]\n if day in wstr:\n week = wk\n break\n return week\n\nimport time\nprint(week_number())\nprint(week_number(time.ctime()))\n\n",
"An Easy way to get a week number of month;\nif the datatype is datetime64 then\nweek_number_of_month = date_value.dayofweek \n\n"
] |
[
53,
25,
16,
14,
5,
4,
3,
2,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0
] |
[
" import datetime\n \n def week_number_of_month(date_value):\n week_number = (date_value.isocalendar()[1] - date_value.replace(day=1).isocalendar()[1] + 1)\n if week_number == -46:\n week_number = 6\n return week_number\n \n date_given = datetime.datetime(year=2018, month=12, day=31).date()\n \n week_number_of_month(date_given)\n\n"
] |
[
-1
] |
[
"python",
"time",
"week_number"
] |
stackoverflow_0003806473_python_time_week_number.txt
|
Q:
python usgsm2m module how to specify bounding box in cli
I've been trying to use the usgsm2m module via the CLI but get "is not a valid floating point value".
if I try the --location option for a single point it works
usgsm2m search --username ####### --password ####### --dataset landsat_tm_c2_l1 --bbox 30.32,78.03,31.5,79.0 --clouds 5 --start 2005-01-01 --end 2005-12-31 --output display_id
also tried, but no luck
usgsm2m search --username ####### --password ####### --dataset landsat_tm_c2_l1 --bbox (30.32,78.03,31.5,79.0) --clouds 5 --start 2005-01-01 --end 2005-12-31 --output display_id
A:
You should have used spaces instead of commas
usgsm2m search --username ####### --password ####### --dataset landsat_tm_c2_l1 --bbox 30.32 78.03 31.5 79.0 --clouds 5 --start 2005-01-01 --end 2005-12-31 --output display_id
|
python usgsm2m module how to specify bounding box in cli
|
been trying to use usgsm2m module using CLI but get "is not a valid floating point value"
if I try the --location option for a single point it works
usgsm2m search --username ####### --password ####### --dataset landsat_tm_c2_l1 --bbox 30.32,78.03,31.5,79.0 --clouds 5 --start 2005-01-01 --end 2005-12-31 --output display_id
also tried, but no luck
usgsm2m search --username ####### --password ####### --dataset landsat_tm_c2_l1 --bbox (30.32,78.03,31.5,79.0) --clouds 5 --start 2005-01-01 --end 2005-12-31 --output display_id
|
[
"You should have used spaces instead of commas\nusgsm2m search --username ####### --password ####### --dataset landsat_tm_c2_l1 --bbox 30.32 78.03 31.5 79.0 --clouds 5 --start 2005-01-01 --end 2005-12-31 --output display_id\n\n"
] |
[
0
] |
[] |
[] |
[
"command_line_interface",
"python"
] |
stackoverflow_0074473118_command_line_interface_python.txt
|
Q:
Need to extract data from a column, if a particular character exists, extracting the substring before the character
I've got a column which I am trying to clean, the data is like this:
Wherever the pattern is of x-y year, I want to extract only the 'x' value and leave it in the string.
For any other value, I want to keep it as is.
Using str.extract('(.{,2}(-))') is returning a NaN value for all the other rows.
A:
The solution first compiles the regex then the compiled regex will be used on each row.
The lambda also relies on the walrus operator :=.
Assumes that your 2nd column is named col2.
import re
pattern = re.compile("([\d]+)-[\d]+ year")
df["col2"] = df["col2"].map(lambda x: m[1] if (m:=pattern.match(x)) else x)
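The same compiled-regex + walrus idea can be seen without pandas; a minimal sketch on made-up strings (the := operator needs Python 3.8+):

```python
import re

pattern = re.compile(r"(\d+)-\d+ year")
values = ["3-5 year", "h 3+ year", "10-12 year"]  # sample data, not the asker's
# keep the leading number when the "x-y year" pattern matches, else pass through
cleaned = [m[1] if (m := pattern.match(v)) else v for v in values]
print(cleaned)  # ['3', 'h 3+ year', '10']
```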
A:
You want series.str.replace(), I believe.
Does this give you the desired output?
df = pd.DataFrame.from_records([[1778, '3-5 year'], [961, np.nan], [2141, 'h 3+ year']], columns=['a','b'])
repl = lambda m: m.group(1)
df.b = df.b.str.replace(r'(\d+)-\d+\syear', repl, regex=True)
df
which takes the original df:
a b
0 1778 3-5 year
1 961 NaN
2 2141 h 3+ year
and gives the output:
a b
0 1778 3
1 961 NaN
2 2141 h 3+ year
|
Need to extract data from a column, if a particular character exists, extracting the substring before the character
|
I've got a column which I am trying to clean, the data is like this:
Wherever the pattern is of x-y year, I want to extract only the 'x' value and leave it in the string.
For any other value, I want to keep it as is.
Using str.extract('(.{,2}(-))') is returning a NaN value for all the other rows.
|
[
"The solution first compiles the regex then the compiled regex will be used on each row.\nThe lambda also relies on the walrus operator :=.\nAssumes that your 2nd column is named col2.\nimport re\n\npattern = re.compile(\"([\\d]+)-[\\d]+ year\")\ndf[\"col2\"] = df[\"col2\"].map(lambda x: m[1] if (m:=pattern.match(x)) else x)\n\n",
"You want series.str.replace(), I believe.\nDoes this give you the desired output?\ndf = pd.DataFrame.from_records([[1778, '3-5 year'], [961, np.nan], [2141, 'h 3+ year']], columns=['a','b'])\n\nrepl = lambda m: m.group(1)\ndf.b = df.b.str.replace(r'(\\d+)-\\d+\\syear', repl, regex=True)\ndf\n\nwhich takes the original df:\n a b\n0 1778 3-5 year\n1 961 NaN\n2 2141 h 3+ year\n\nand gives the output:\n a b\n0 1778 3\n1 961 NaN\n2 2141 h 3+ year\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"pandas",
"python",
"regex"
] |
stackoverflow_0074474329_pandas_python_regex.txt
|
Q:
Python multiprocessing: exit on error in any process
import time
from multiprocessing import Process
def possible_error_causer(a, b):
time.sleep(5)
c = a / b
print(c)
time.sleep(100)
for i in range(3):
p = Process(target=possible_error_causer, args=(i, i))
p.start()
The code above will execute after an exception occured in process that received 0, 0 as arguments (will run 100 seconds after that). But I want script to stop when there is error in any process. Try except is not an option (sys.exit() in except), because it doesn't catch all external errors (eg it doesn't catch some OpenCV errors)
A:
I didn't succeed with that exactly, as I would probably need to collect the process objects in the main thread, pass them to child processes via a Queue, and call terminate on them in the except clause (yes, I gave up on anything other than try/except; sys.excepthook didn't work for me).
However, as I didn't need the async part to be exactly multiprocessing, I just replaced it with threading.Thread and called os._exit(1) in the except clause.
import sys
import time
import os
from threading import Thread
def myexcepthook(*args):
# Meant for sys.excepthook in multiprocessing script, but didn't work
print("My except worked")
def possible_error_causer(a, b):
try:
time.sleep(5)
c = a / b
print(c)
time.sleep(100)
except:
os._exit(1)
for i in range(3):
p = Thread(target=possible_error_causer, args=(i, i))
p.start()
A:
You are performing a division by zero. Indeed, in the first call to the subprocess you pass (i, i) with i = 0, which translates to a = 0 and b = 0.
To fix this you can either:
change the range to range(1, 4) so that you iterate from 1 up to (but excluding) 4.
add one to b with args=(i, i+1) to ensure the divisor is not 0.
|
Python multiprocessing: exit on error in any process
|
import time
from multiprocessing import Process
def possible_error_causer(a, b):
time.sleep(5)
c = a / b
print(c)
time.sleep(100)
for i in range(3):
p = Process(target=possible_error_causer, args=(i, i))
p.start()
The code above will execute after an exception occured in process that received 0, 0 as arguments (will run 100 seconds after that). But I want script to stop when there is error in any process. Try except is not an option (sys.exit() in except), because it doesn't catch all external errors (eg it doesn't catch some OpenCV errors)
|
[
"I didn't succeed with that exactly, as I would probably need to collect processes objects in main thread and then pass them to child processes via Queue and calling terminate on them after except (yes, I gave up other than try except, sys.excepthook didn't work for me).\nHowever, as I didn't need async to be exactly multiprocessing, I just replaced it with threading.Thread and called os._exit(1) in except.\nimport sys\nimport time\nimport os\nfrom threading import Thread\n\n\ndef myexcepthook(*args):\n # Meant for sys.excepthook in multiprocessing script, but didn't work\n print(\"My except worked\")\n\n\ndef possible_error_causer(a, b):\n try:\n time.sleep(5)\n c = a / b\n print(c)\n time.sleep(100)\n except:\n os._exit(1)\n\n\n\nfor i in range(3):\n p = Thread(target=possible_error_causer, args=(i, i))\n p.start()\n\n",
"You are performing a division by zero. Indeed, in the first call to subprocess you pass (, i, ) which translates to a = i and b = i with i = 0.\nTo fix this you can either:\n\nchange the range to range(1, 4) so that you iterate from 1 to 4 excluded.\nadd one to b args=(i, i+1) to ensure the divisor if not 0.\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"multiprocessing",
"python",
"python_multiprocessing"
] |
stackoverflow_0074463220_multiprocessing_python_python_multiprocessing.txt
|
Q:
Wildcard assertions with python unittest
Checking if there is any way to assertEqual an object with some of the key/value pairs being wildcarded.
I have a function that returns an object with one of the keys being the current timestamp in nanoseconds. Because the nanoseconds change every time I run the test, I cannot predict that value from the inputs. What I want is to be able to call something like below:
self.assertEqual(returnedObject, {'key1' : 'val1', 'timestampkey' : '*'}) # where * means the value is wildcarded, i.e. don't care what it is
Is there any provision like this in unittest?
What's the alternative to assert something like this?
I could perhaps assertEqual individual key/value pairs, but wanted to avoid the extra effort.
A:
Not sure if there is a solution in the unittests. But you can check value using regexp or convert value to datetime and check types. Here is an example:
import time
import unittest
from datetime import datetime
import re
class Something:
def __init__(self) -> None:
self.key1 = 'val1'
self.timestampkey = time.time()
class TextExample(unittest.TestCase):
def test_something(self):
expected = dict(key1='val1')
something = Something()
# convert timestamp to datetime and check type
dt = datetime.utcfromtimestamp(something.timestampkey)
self.assertTrue(isinstance(dt, datetime))
# or using regexp
self.assertTrue(re.match(r'^\d{10}\.\d{7}$', str(something.timestampkey)))
expected['timestampkey'] = something.timestampkey
self.assertEqual(expected, something.__dict__)
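One stdlib alternative not mentioned above: unittest.mock.ANY compares equal to any value, which gives exactly the wildcard behaviour the question asks for (the timestamp below is made up):

```python
from unittest.mock import ANY

returned = {'key1': 'val1', 'timestampkey': 1669900000123456789}  # made-up value
# ANY == anything, so only key1 is really compared
assert returned == {'key1': 'val1', 'timestampkey': ANY}
print("ok")
```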
|
Wildcard assertions with python unittest
|
Checking if there is anyway to assertEqual an object with some of the key/value being wildcarded.
I have a function, that returns an object, with one of the key being current timestamp in nanoseconds. Because this nanoseconds will change everytime I run the test, I can not expect that based on any inputs. What I want to do is to be able to call something like below
self.assertEqual(returnedObject, {'key1' : 'val1', 'timestampkey' : '*'} #where * being the value is wildcarded, hence dont care what is.
IS there any provision like this in the unittests?
What's the alternative to assert something like this.
I could perhaps assertEqual individual key/value, but wanted to prevent extra effort.
|
[
"Not sure if there is a solution in the unittests. But you can check value using regexp or convert value to datetime and check types. Here is an example:\nimport time\nimport unittest\nfrom datetime import datetime\nimport re\n\n\nclass Something:\n def __init__(self) -> None:\n self.key1 = 'val1'\n self.timestampkey = time.time()\n\n\nclass TextExample(unittest.TestCase):\n def test_something(self):\n expected = dict(key1='val1')\n something = Something()\n # convert timestamp to datetime and check type\n dt = datetime.utcfromtimestamp(something.timestampkey)\n self.assertTrue(isinstance(dt, datetime))\n # or using regexp\n self.assertTrue(re.match(r'^\\d{10}\\.\\d{7}$', str(something.timestampkey)))\n expected['timestampkey'] = something.timestampkey\n self.assertEqual(expected, something.__dict__)\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_unittest",
"wildcard"
] |
stackoverflow_0074450904_python_python_unittest_wildcard.txt
|
Q:
Call one serializer's update() method from another serilaizer's create() method
I have two serializers, serializer_1 and serializer_2, which are both model serializers. I want to execute the update method of serializer_1 from the create method of serializer_2. How can I achieve that?
class serializer_1(serializers.ModelSerializer):
date = serializers.DateTimeField(required=False, allow_null=True)
ispublic = serializers.BooleanField(allow_null=False)
details_api_url = serializers.SerializerMethodField()
dispute_types = OtherSerializer(many=True, required=False, write_only=True)
nature_of_dispute_list = serializers.SerializerMethodField()
plaintiff = OtherSerializer(many=True, required=False, write_only=True)
defendant = OtherSerializer(many=True, required=False, write_only=True)
claims_rep = OtherSerializer(many=True, required=False, write_only=True)
class Meta:
model = Media
fields = "__all__"
def update(self, instance, validated_data):
date = validated_data.pop('close_out_date', None)
plaintiff_data = validated_data.pop('plaintiff', [])
defendant_data = validated_data.pop('defendant', [])
claims_rep_data = validated_data.pop('claims', [])
is_summary_public_previous = instance.is_summary_public
obj = super().update(instance, validated_data)
return obj
class serializer_2(serializers.ModelsSerializer):
class Meta:
model = Fedia
fields = "__all__"
def create(self, validated_data):
    request = self.context['request']
    serializer_1_data = validated_data.pop('serializer_1_data', None)
    is_final = validated_data.get('is_final')
    serializer_1_object = Media.objects.create(**serializer_1_data)
    if is_final:
        # Call serializer_1's update method here
I have access to date, plaintiff, etc. (mentioned under serializer_1) in the create method of serializer_2 through serializer_1_data.
A:
I think you can achieve it this way:
class serializer_2(serializers.ModelSerializer):
    class Meta:
        model = Fedia
        fields = "__all__"

    def create(self, validated_data):
        request = self.context['request']
        serializer_1_data = validated_data.pop('serializer_1_data', None)
        is_final = validated_data.get('is_final')
        serializer_1_object = Media.objects.create(**serializer_1_data)
        if is_final:
            serializer_1().update(serializer_1_object, serializer_1_data)
|
Call one serializer's update() method from another serilaizer's create() method
|
I have 2 serializers serializer_1 and serializer_2 which are both model serilizer i want to execute update method of serializer_1 from create method of serializer_2 how can i achieve that?
class serializer_1(serializers.ModelSerializer):
date = serializers.DateTimeField(required=False, allow_null=True)
ispublic = serializers.BooleanField(allow_null=False)
details_api_url = serializers.SerializerMethodField()
dispute_types = OtherSerializer(many=True, required=False, write_only=True)
nature_of_dispute_list = serializers.SerializerMethodField()
plaintiff = OtherSerializer(many=True, required=False, write_only=True)
defendant = OtherSerializer(many=True, required=False, write_only=True)
claims_rep = OtherSerializer(many=True, required=False, write_only=True)
class Meta:
model = Media
fields = "__all_"
def update(self, instance, validated_data):
date = validated_data.pop('close_out_date', None)
plaintiff_data = validated_data.pop('plaintiff', [])
defendant_data = validated_data.pop('defendant', [])
claims_rep_data = validated_data.pop('claims', [])
is_summary_public_previous = instance.is_summary_public
obj = super().update(instance, validated_data)
return obj
class serializer_2(serializers.ModelsSerializer):
class Meta:
model = Fedia
fields = "__all__"
def create(self, validated_data):
request = self.context['request']
**serilizer_1_data** = validated_data.pop('serialzer_1_data', None)
is_final = validated_data.get('is_final')
serializer_1_object = Media.objects.create(**serializer_1_data)
if is_final:
**Call Serializer_1 Update method**
I have access to date,plaintiff etc mentioned under serializer_1 in create method of serilizer_2 through serilizer_1_data
|
[
"I think you can achieve this way\nclass serializer_2(serializers.ModelsSerializer):\n class Meta:\n model = Fedia\n fields = \"__all__\"\n\n def create(self, validated_data):\n request = self.context['request']\n **serilizer_1_data** = validated_data.pop('serialzer_1_data', None)\n is_final = validated_data.get('is_final')\n\n \n serializer_1_object = Media.objects.create(**serializer_1_data)\n\n if is_final:\n Serializer_1.update(serializer_1_object,validated_date)\n\n \n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_rest_framework",
"django_serializer",
"django_views",
"python"
] |
stackoverflow_0074474226_django_django_rest_framework_django_serializer_django_views_python.txt
|
Q:
Can't access input field in POP UP UI selenium. StaleElementReferenceException after element found clickable
I am trying to access an input field in a pop up UI(Aantal KvK uittreksels). Right now I am trying this code:
element = wait.until(EC.element_to_be_clickable((By.XPATH, "//input[contains(.,'custom_field_387439')]")))
element.send_keys("testing")
That results in this error:
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
This is the webview + Google Elements:
Please let me know.
A:
A StaleElementReferenceException appearing after a WebDriverWait with the element_to_be_clickable expected condition means the page you are working on is built with a dynamic DOM technique that is not Selenium-friendly. At some point during rendering the desired physical element already exists and even appears clickable, i.e. it is detected as fully rendered, but the page keeps rendering afterwards: the web element collected by Selenium (actually a reference to a physical element in the DOM) no longer points to that physical element, because the previously created node no longer exists and a new physical element has been created in its place.
To overcome this, we can create a special method that attempts to click the element in a loop. Something like this:
def click_element(locator):
for i in range(5):
try:
wait.until(EC.element_to_be_clickable(locator)).click()
break
except:
pass
Send keys can be done in exactly the same way:
def send_keys_in_loop(locator, value):
for i in range(5):
try:
wait.until(EC.element_to_be_clickable(locator)).send_keys(value)
break
except:
pass
You can use these methods in the following way:
click_element((By.XPATH, "//input[contains(.,'custom_field_387439')]"))
send_keys_in_loop((By.XPATH, "//input[contains(.,'custom_field_387439')]"),"testing")
Please pay attention to the fact that the locator parameter here is a tuple, which is why I use double parentheses when calling the click_element and send_keys_in_loop methods, while inside those methods' implementations you see only single parentheses in the element_to_be_clickable(locator) expression.
|
Can't access input field in POP UP UI selenium. StaleElementReferenceException after element found clickable
|
I am trying to access an input field in a pop up UI(Aantal KvK uittreksels). Right now I am trying this code:
element = wait.until(EC.element_to_be_clickable((By.XPATH, "//input[contains(.,'custom_field_387439')]")))
element.send_keys("testing")
That results in this error:
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
This is the webview + Google Elements:
Please let me know.
|
[
"StaleElementReferenceException appearing after applying WebDriverWait element_to_be_clickable expected_conditions means that the page you are working on is built with not Selenium friendly dynamic DOM technique. The page rendering is performed so that on some step the desired physical element is already exists and even appears clickable i.e. defined as fully rendered, but after that the page rendering is still continues so the previously collected by Selenium web element (actually a pointer to the physical element on the DOM) reference no more pointing to that physical element, since previously created physical element no more exists, new physical element is now created.\nTo overcome this we can create special method attempting to click element in a loop. Something like this:\ndef click_element(locator):\n for i in range(5):\n try:\n wait.until(EC.element_to_be_clickable(locator)).click()\n break\n except:\n pass\n\nSend keys can be done in exactly the same way:\ndef send_keys_in_loop(locator, value):\n for i in range(5):\n try:\n wait.until(EC.element_to_be_clickable(locator)).send_keys(value)\n break\n except:\n pass\n\nYou can use these methods in the following way:\nclick_element((By.XPATH, \"//input[contains(.,'custom_field_387439')]\"))\nsend_keys_in_loop((By.XPATH, \"//input[contains(.,'custom_field_387439')]\"),\"testing\")\n\nPlease pay attention on the fact that the locator parameter here is a tuple so that I use double parenthesis when calling click_element and click_element methods while inside those methods implementation you can see only single parenthesis inside the element_to_be_clickable(locator) expression.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"selenium",
"selenium_webdriver",
"staleelementreferenceexception"
] |
stackoverflow_0074474645_python_selenium_selenium_webdriver_staleelementreferenceexception.txt
|
Q:
How do I generate a small image randomly in different parts of the big image?
Let's assume there are two images: one is called the small image and the other the big image. I want to randomly place the small image in different parts of the big image, one placement at a time, every time I run.
So, currently I have this image. Let's call it big image
I also have smaller image:
def mask_generation(blob_index,image_index):
experimental_image = markup_images[image_index]
h, w = cropped_images[blob_index].shape[:2]
x = np.random.randint(experimental_image.shape[1] - w)
y = np.random.randint(experimental_image.shape[0] - h)
experimental_image[y:y+h, x:x+w] = cropped_images[blob_index]
return experimental_image
I have created the above function to place the small image in the big image every time I call it. Note: blob_index is the index I use to pick a specific small image (since I have a collection of them), and image_index is the index to pick a specific big image. The big images are stored in the list markup_images and the small images in the list cropped_images.
However, when I run this, the small image is placed randomly as intended, but the previously placed image never gets removed, and I am not sure how to proceed.
Example: When I run it once
When I run it twice
How do I fix this? Any help will be appreciated. Thank you
I tried the above code but didn't work as I wanted it to work
A:
I suppose you only want the small random image generated in the current iteration of your code.
The problem you have is due to the modification of your input arguments.
When you call your function multiple times with the same big image
markup_image = ...
result_1 = mask_generation(blob_index=0, image_index=0)
result_2 = mask_generation(blob_index=1, image_index=0)
You get in result_2 both small images.
This is caused by writing to the original image in
experimental_image[y:y+h, x:x+w] = cropped_images[blob_index]
This pastes the small image into the original image stored in your list of images.
When you fetch that image the next time, the previous small image is already there.
To fix:
Do not alter your stored images, e.g. by first copying the image and only then pasting the small image in your function
Probably even better: only give your function a big and a small image, and make sure they always receive copies
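Following the second suggestion above, here is a minimal sketch of a fixed function (the names big_images and small_images are placeholders for the question's markup_images and cropped_images; the key change is the .copy(), so the stored big image is never modified):
```python
import numpy as np

def mask_generation(big_images, small_images, blob_index, image_index):
    # work on a copy so the original big image stays untouched
    experimental_image = big_images[image_index].copy()
    h, w = small_images[blob_index].shape[:2]
    x = np.random.randint(experimental_image.shape[1] - w)
    y = np.random.randint(experimental_image.shape[0] - h)
    experimental_image[y:y+h, x:x+w] = small_images[blob_index]
    return experimental_image

# toy demo with arrays standing in for images
big = [np.zeros((100, 120, 3), dtype=np.uint8)]
small = [np.full((10, 10, 3), 255, dtype=np.uint8)]
out1 = mask_generation(big, small, 0, 0)
# the stored big image is unchanged, so a second call starts from a clean image
assert big[0].sum() == 0
```
Because each call starts from a fresh copy, only one small image ever appears in the returned result.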
|
How do I generate a small image randomly in different parts of the big image?
|
Let's assume there are two images. One is called small image and another one is called big image. I want to randomly generate the small image inside the different parts of the big image one at a time everytime I run.
So, currently I have this image. Let's call it big image
I also have smaller image:
def mask_generation(blob_index,image_index):
experimental_image = markup_images[image_index]
h, w = cropped_images[blob_index].shape[:2]
x = np.random.randint(experimental_image.shape[0] - w)
y = np.random.randint(experimental_image.shape[1] - h)
experimental_image[y:y+h, x:x+w] = cropped_images[blob_index]
return experimental_image
I have created above function to generate the small image in big image everytime I call this function. Note: blob index is the index that I use to call specific 'small image' since I have a collection of those small images and image_index is the index to call specific 'big images'. Big images are stored in the list called experimental_image and small images are stored in list called markup images
However, when I run this, I do get the small image randomly generated but the previously randomly generated image never gets deleted and I am not too sure how to proceed with it,
Example: When I run it once
When I run it twice
How do I fix this? Any help will be appreciated. Thank you
I tried the above code but didn't work as I wanted it to work
|
[
"I suppose you only want the small random image you generated in this iteration in your code.\nThe problem you have, is due to the modification of your calling args.\nWhen you call your function multiple times with the same big image\nmarkup_image = ...\nresult_1 = mask_generation(blob_index=0, image_index=0)\nresult_2 = mask_generation(blob_index=1, image_index=0)\n\nYou get in result_2 both small images.\nThis is due to the writing to the original image in\nexperimental_image[y:y+h, x:x+w] = cropped_images[blob_index]\n\nThis adds the small image to your original image in your list of images.\nWhen getting this image the next time, the small image is already there.\nTo fix:\n\nDo not alter your images, e.g. by first copying the image and then adding the small image in your function\nProbably even better: Only give your function a big and small image, and make sure that they always receive a copy\n\n"
] |
[
0
] |
[] |
[] |
[
"image_processing",
"numpy",
"python"
] |
stackoverflow_0074474704_image_processing_numpy_python.txt
|
Q:
Tkinter button executes when the window opens, and not when I click it
I have the following Python code using the tkinter module. I would like buttons to be created inside an inner function; below is an example where I face the same issue of the tkinter button executing itself before I click it.
from tkinter import *
from tkinter.filedialog import asksaveasfile
def main():
root = Tk()
root.geometry("777x575")
root.columnconfigure(0, weight=1)
root.columnconfigure(1, weight=2)
root.rowconfigure(0, weight=1)
text = Text(root, width = 405 , height = 205)
text.place(x=500, y=10, anchor=S)
def save():
Files = [('All Files', '*.*'),
('Python Files', '*.py'),
('Text Document', '*.txt')]
file = asksaveasfile(filetypes = Files, defaultextension = Files)
button = Button(root, text = 'Save', command = lambda : save('Save'))
button.place(anchor = CENTER, x = 400, y = 500)
save()
root.mainloop()
if __name__ == '__main__':
main()
I have looked at existing solutions but nothing has worked for this case. Any suggestions to solve this issue are highly appreciated.
A:
You called the save() function directly before entering mainloop; that is why the dialog shows immediately.
Also, in the button command you called the function with an argument, although the function doesn't accept any arguments.
.
.
.
text = Text(root, width = 405 , height = 205)
text.place(x=500, y=10, anchor=S)
def save():
Files = [('All Files', '*.*'),
('Python Files', '*.py'),
('Text Document', '*.txt')]
file = asksaveasfile(filetypes = Files, defaultextension = Files)
button = Button(root, text = 'Save', command = lambda : save())
button.place(anchor = CENTER, x = 400, y = 500)
root.mainloop()
This should work.
|
Tkinter button executes when the window opens, and not when I click it
|
I have the following python code with tkinter module. I would like buttons to be an inner function in my code, below is an example where I face the same issue of tkinter button executing itself before I click it
from tkinter import *
from tkinter.filedialog import asksaveasfile
def main():
root = Tk()
root.geometry("777x575")
root.columnconfigure(0, weight=1)
root.columnconfigure(1, weight=2)
root.rowconfigure(0, weight=1)
text = Text(root, width = 405 , height = 205)
text.place(x=500, y=10, anchor=S)
def save():
Files = [('All Files', '*.*'),
('Python Files', '*.py'),
('Text Document', '*.txt')]
file = asksaveasfile(filetypes = Files, defaultextension = Files)
button = Button(root, text = 'Save', command = lambda : save('Save'))
button.place(anchor = CENTER, x = 400, y = 500)
save()
root.mainloop()
if __name__ == '__main__':
main()
I have looked at solutions but nothing really has worked for this case. Any suggestions to solve this issue is highly appreciated.
|
[
"You called save() function before enters mainloop thats why it shows.\nAlso in button command you called function with a parameter where function doesn't accept any arguments.\n.\n.\n.\n text = Text(root, width = 405 , height = 205)\n text.place(x=500, y=10, anchor=S)\n\n def save():\n Files = [('All Files', '*.*'),\n ('Python Files', '*.py'),\n ('Text Document', '*.txt')]\n file = asksaveasfile(filetypes = Files, defaultextension = Files)\n\n button = Button(root, text = 'Save', command = lambda : save())\n button.place(anchor = CENTER, x = 400, y = 500)\n\n root.mainloop()\n \n\nthis should work\n"
] |
[
3
] |
[] |
[] |
[
"python",
"tkinter"
] |
stackoverflow_0074473726_python_tkinter.txt
|
Q:
Filtering pandas dataframe based on repeated column values - Python
So, I have a data frame of this type:
Name 1 2 3 4 5
Alex 10 40 20 11 50
Alex 10 60 20 11 60
Sam 30 15 50 15 60
Sam 30 12 50 15 43
John 50 18 100 8 32
John 50 15 100 8 21
I am trying to keep only the columns whose values repeat across all duplicate rows. For example, in this case I want to keep columns 1, 3 and 4 because they have repeated values for each 'duplicate' row. But I want to keep a column only if the values are repeated for EACH pair of names, so the whole column should consist of pairs of equal values. Any ideas on how to do that?
A:
Using a simple list inside agg:
cond = df.groupby('Name').agg(list).applymap(lambda x: len(x) != len(set(x)))
dupe_cols = cond.columns[cond.all()]
A:
this is the easiest way I can think of
from collections import Counter
import pandas as pd
data = [[ 'Name', 1, 2, 3, 4, 5],
[ 'Alex', 10, 40, 20, 11, 50],
[ 'Alex', 10, 60, 20, 11, 60],
[ 'Sam', 30, 15, 50, 15, 60],
[ 'Sam', 30, 12, 50, 15, 43],
[ 'John', 50, 18, 100, 8, 32],
[ 'John', 50, 15, 100, 8, 21]]
df = pd.DataFrame(data)
vals = []
for row in range(0,len(df)):
tmp = Counter(df.iloc[row])
if 2 not in tmp.values():
vals.append(row)
ndf = df.iloc[vals]
ndf.drop_duplicates(subset='Name',keep='first')
returns
Name 1 2 3 4 5
1 Alex 10 40 20 11 50
4 Sam 30 12 50 15 43
5 John 50 18 100 8 32
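As a vectorised alternative (a sketch, with the question's table typed in by hand): a column qualifies when every name group contains exactly one distinct value, which groupby(...).nunique() expresses directly:
```python
import pandas as pd

df = pd.DataFrame({
    'Name': ['Alex', 'Alex', 'Sam', 'Sam', 'John', 'John'],
    1: [10, 10, 30, 30, 50, 50],
    2: [40, 60, 15, 12, 18, 15],
    3: [20, 20, 50, 50, 100, 100],
    4: [11, 11, 15, 15, 8, 8],
    5: [50, 60, 60, 43, 32, 21],
})

# one distinct value per name group means the column repeats within each pair
nunique = df.groupby('Name').nunique()
keep = nunique.columns[(nunique == 1).all()]
result = df[['Name', *keep]]
print(keep.tolist())   # columns 1, 3 and 4
```
This also generalises beyond pairs: a name appearing three times must carry the same value three times for the column to survive.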
|
Filtering pandas dataframe based on repeated column values - Python
|
So, I have a data frame of this type:
Name 1 2 3 4 5
Alex 10 40 20 11 50
Alex 10 60 20 11 60
Sam 30 15 50 15 60
Sam 30 12 50 15 43
John 50 18 100 8 32
John 50 15 100 8 21
I am trying to keep only the columns that have repeated values for all unique row values. For example, in this case, I want to keep columns 1,3,4 because they have repeated values for each 'duplicate' row. But I want to keep the column only if the values are repeated for EACH pair of names - so, the whole column should consist of pairs of same values. Any ideas of how to do that?
|
[
"Using a simple list inside agg:\ncond = df.groupby('Name').agg(list).applymap(lambda x: len(x) != len(set(x)))\n\ndupe_cols = cond.columns[cond.all()]\n\n",
"this is the easiest way I can think of\nfrom collections import Counter\n\nimport pandas as pd\n\ndata = [[ 'Name', 1, 2, 3, 4, 5],\n[ 'Alex', 10, 40, 20, 11, 50],\n[ 'Alex', 10, 60, 20, 11, 60],\n[ 'Sam', 30, 15, 50, 15, 60],\n[ 'Sam', 30, 12, 50, 15, 43],\n[ 'John', 50, 18, 100, 8, 32],\n[ 'John', 50, 15, 100, 8, 21]]\n\ndf = pd.DataFrame(data)\n\nvals = []\nfor row in range(0,len(df)):\n tmp = Counter(df.iloc[row])\n if 2 not in tmp.values():\n vals.append(row)\n \nndf = df.iloc[vals]\nndf.drop_duplicates(subset='Name',keep='first')\n\nreturns\n\nName 1 2 3 4 5\n1 Alex 10 40 20 11 50\n4 Sam 30 12 50 15 43\n5 John 50 18 100 8 32\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074474504_dataframe_pandas_python.txt
|
Q:
Python OpenCV: mouse callback for drawing rectangle
I want to save an image from the video stream and then draw a rectangle onto the shown image to define a region of interest, then save that ROI to a file. I used the OpenCV Python grabcut example to learn the setMouseCallback function, but I don't know what I'm doing incorrectly, as it is not giving the result I expect. I would like to see the green rectangle drawn on the static image shown in the mouse input window and the ROI being saved to file. Please help debug this code or show a better approach:
import cv2
rect = (0,0,1,1)
rectangle = False
rect_over = False
def onmouse(event,x,y,flags,params):
global sceneImg,rectangle,rect,ix,iy,rect_over
# Draw Rectangle
if event == cv2.EVENT_LBUTTONDOWN:
rectangle = True
ix,iy = x,y
elif event == cv2.EVENT_MOUSEMOVE:
if rectangle == True:
cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)
rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))
elif event == cv2.EVENT_LBUTTONUP:
rectangle = False
rect_over = True
cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)
rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))
x1,y1,w,h = rect
roi = sceneImg[y1:y1+h, x1:x1+w]
cv2.imwrite('roi.jpg', roi)
# Named window and mouse callback
cv2.namedWindow('video')
cv2.namedWindow('mouse input')
cv2.setMouseCallback('mouse input',onmouse)
camObj = cv2.VideoCapture(-1)
keyPressed = None
running = True
scene = False
# Start video stream
while running:
readOK, frame = camObj.read()
keyPressed = cv2.waitKey(5)
if keyPressed == ord('s'):
scene = True
cv2.imwrite('sceneImg.jpg',frame)
sceneImg = cv2.imread('sceneImg.jpg')
cv2.destroyWindow('video')
cv2.imshow('mouse input', sceneImg)
elif keyPressed == ord('r'):
scene = False
cv2.destroyWindow('mouse input')
elif keyPressed == ord('q'):
running = False
if not scene:
cv2.imshow('video', frame)
cv2.destroyAllWindows()
camObj.release()
A:
You need to reset the image every time the cv2.EVENT_MOUSEMOVE branch is entered.
Your code should look something like this:
if event == cv2.EVENT_LBUTTONDOWN:
rectangle = True
ix,iy = x,y
elif event == cv2.EVENT_MOUSEMOVE:
if rectangle == True:
sceneImg = sceneImg2.copy()
cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)
rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))
elif event == cv2.EVENT_LBUTTONUP:
rectangle = False
rect_over = True
cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)
rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))
A:
This is my current workaround, where I render the mouse input window again upon EVENT_LBUTTONUP. To avoid the bounding box showing up in the ROI saved to file, I use a copy of the captured scene:
import cv2
rect = (0,0,1,1)
rectangle = False
rect_over = False
def onmouse(event,x,y,flags,params):
global sceneImg,rectangle,rect,ix,iy,rect_over, roi
# Draw Rectangle
if event == cv2.EVENT_LBUTTONDOWN:
rectangle = True
ix,iy = x,y
elif event == cv2.EVENT_MOUSEMOVE:
if rectangle == True:
# cv2.rectangle(sceneCopy,(ix,iy),(x,y),(0,255,0),1)
rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))
elif event == cv2.EVENT_LBUTTONUP:
rectangle = False
rect_over = True
sceneCopy = sceneImg.copy()
cv2.rectangle(sceneCopy,(ix,iy),(x,y),(0,255,0),1)
rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))
roi = sceneImg[rect[1]:rect[1]+rect[3], rect[0]:rect[0]+rect[2]]
cv2.imshow('mouse input', sceneCopy)
cv2.imwrite('roi.jpg', roi)
# Named window and mouse callback
cv2.namedWindow('mouse input')
cv2.setMouseCallback('mouse input',onmouse)
cv2.namedWindow('video')
camObj = cv2.VideoCapture(-1)
keyPressed = None
running = True
scene = False
# Start video stream
while running:
readOK, frame = camObj.read()
keyPressed = cv2.waitKey(5)
if keyPressed == ord('s'):
scene = True
cv2.destroyWindow('video')
cv2.imwrite('sceneImg.jpg',frame)
sceneImg = cv2.imread('sceneImg.jpg')
cv2.imshow('mouse input', sceneImg)
elif keyPressed == ord('r'):
scene = False
cv2.destroyWindow('mouse input')
elif keyPressed == ord('q'):
running = False
if not scene:
cv2.imshow('video', frame)
cv2.destroyAllWindows()
camObj.release()
Thus, I can visualize the rectangle which is supposed to bound the ROI, but I still don't know how to visualize the bounding box while the mouse left button is down and the cursor is moving. That visualization works in the grabcut example, but I couldn't figure it out in my case. Upon uncommenting the line that draws the rectangle during EVENT_MOUSEMOVE, I get multiple rectangles drawn onto the image. If someone answers with a way to visualize a single rectangle as it is being created, I can accept it.
A:
Building on top of the answer provided by @Marco167, I will just change one line, as otherwise there's an object reference problem.
So, instead of sceneImg = sceneImg2.copy() I'd suggest sceneImg[:] = sceneImg2[:], where sceneImg2 should be the same image loaded separately, like:
sceneImg = cv2.imread('sceneImg.jpg')
sceneImg2 = cv2.imread('sceneImg.jpg')
Also, I've moved the rectangle check into the condition.
This way, on mouse move you first redraw the original image (thus removing the existing rectangle) and then draw the new rectangle. Moving the mouse by even a pixel again redraws the original picture, removing any rectangle, and draws a single new one, so at any given point there is just one rectangle.
Yup, replying after over 7 years, just in case anyone would ever find it useful :)
Putting it all together:
if event == cv2.EVENT_LBUTTONDOWN:
rectangle = True
ix,iy = x,y
elif event == cv2.EVENT_MOUSEMOVE and rectangle :
sceneImg[:] = sceneImg2[:]
cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)
rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))
elif event == cv2.EVENT_LBUTTONUP:
rectangle = False
rect_over = True
cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)
rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))
|
Python OpenCV: mouse callback for drawing rectangle
|
I want to save an image from the video stream and then draw a rectangle onto the shown image to produce a region of interest. Later, save that ROI in a file. I used opencv python grabcut example to use the setMouseCallback function. But I don't know what I'm doing incorrect as it is not giving the result I expect. I would like to see the green rectangle drawn on the static image shown in mouse input window and the roi being saved to file. Please help debug this code or show a better approach:
import cv2
rect = (0,0,1,1)
rectangle = False
rect_over = False
def onmouse(event,x,y,flags,params):
global sceneImg,rectangle,rect,ix,iy,rect_over
# Draw Rectangle
if event == cv2.EVENT_LBUTTONDOWN:
rectangle = True
ix,iy = x,y
elif event == cv2.EVENT_MOUSEMOVE:
if rectangle == True:
cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)
rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))
elif event == cv2.EVENT_LBUTTONUP:
rectangle = False
rect_over = True
cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)
rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))
x1,y1,w,h = rect
roi = sceneImg[y1:y1+h, x1:x1+w]
cv2.imwrite('roi.jpg', roi)
# Named window and mouse callback
cv2.namedWindow('video')
cv2.namedWindow('mouse input')
cv2.setMouseCallback('mouse input',onmouse)
camObj = cv2.VideoCapture(-1)
keyPressed = None
running = True
scene = False
# Start video stream
while running:
readOK, frame = camObj.read()
keyPressed = cv2.waitKey(5)
if keyPressed == ord('s'):
scene = True
cv2.imwrite('sceneImg.jpg',frame)
sceneImg = cv2.imread('sceneImg.jpg')
cv2.destroyWindow('video')
cv2.imshow('mouse input', sceneImg)
elif keyPressed == ord('r'):
scene = False
cv2.destroyWindow('mouse input')
elif keyPressed == ord('q'):
running = False
if not scene:
cv2.imshow('video', frame)
cv2.destroyAllWindows()
camObj.release()
|
[
"You need to reset the image everytime when the {event == cv2.EVENT_MOUSEMOVE:} called.\nYour code should look something like this:\nif event == cv2.EVENT_LBUTTONDOWN:\n rectangle = True\n ix,iy = x,y\n\nelif event == cv2.EVENT_MOUSEMOVE:\n if rectangle == True:\n sceneImg = sceneImg2.copy()\n cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)\n rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))\n\n\nelif event == cv2.EVENT_LBUTTONUP:\n rectangle = False\n rect_over = True\n\n cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)\n rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))\n\n",
"This is my current work around where I again render the mouse input window upon EVENT_LBUTTONUP. To avoid the bounding box showing up in the ROI saved to file I use a copy of the inputed scene:\nimport cv2\n\nrect = (0,0,1,1)\nrectangle = False\nrect_over = False \ndef onmouse(event,x,y,flags,params):\n global sceneImg,rectangle,rect,ix,iy,rect_over, roi\n\n # Draw Rectangle\n if event == cv2.EVENT_LBUTTONDOWN:\n rectangle = True\n ix,iy = x,y\n\n elif event == cv2.EVENT_MOUSEMOVE:\n if rectangle == True:\n# cv2.rectangle(sceneCopy,(ix,iy),(x,y),(0,255,0),1)\n rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))\n\n elif event == cv2.EVENT_LBUTTONUP:\n rectangle = False\n rect_over = True\n\n sceneCopy = sceneImg.copy()\n cv2.rectangle(sceneCopy,(ix,iy),(x,y),(0,255,0),1)\n\n rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y)) \n roi = sceneImg[rect[1]:rect[1]+rect[3], rect[0]:rect[0]+rect[2]]\n\n cv2.imshow('mouse input', sceneCopy)\n cv2.imwrite('roi.jpg', roi)\n\n# Named window and mouse callback\ncv2.namedWindow('mouse input')\ncv2.setMouseCallback('mouse input',onmouse)\ncv2.namedWindow('video')\n\ncamObj = cv2.VideoCapture(-1)\nkeyPressed = None\nrunning = True\nscene = False\n# Start video stream\nwhile running:\n readOK, frame = camObj.read()\n\n keyPressed = cv2.waitKey(5)\n if keyPressed == ord('s'):\n scene = True\n cv2.destroyWindow('video')\n\n cv2.imwrite('sceneImg.jpg',frame)\n sceneImg = cv2.imread('sceneImg.jpg')\n\n cv2.imshow('mouse input', sceneImg)\n\n elif keyPressed == ord('r'):\n scene = False\n cv2.destroyWindow('mouse input')\n\n elif keyPressed == ord('q'):\n running = False\n\n if not scene:\n cv2.imshow('video', frame)\n\ncv2.destroyAllWindows()\ncamObj.release()\n\nThus, I can visualize the rectangle which is supposed to bound the ROI but I still don't know how to visualize the bounding box while the mouse left button is down and the mouse cursor is moving. 
That visualization works in the grabcut example but I couldn't figure it out in my case. Upon uncommenting the line for drawing rectangle during EVENT_MOUSEMOVE I get multiple rectangles drawn onto the image. If someone answers with a way to visualize a single rectangle as it is being created I can accept it. \n",
"Building on top of answer provided by @Marco167, I will just change one line as otherwise there's object reference problem.\nSo, instead of sceneImg = sceneImg2.copy() I'd suggest sceneImg[:] = sceneImg2[:], where sceneImg2 should be same, separately loaded image, like:\nsceneImg = cv2.imread('sceneImg.jpg')\nsceneImg2 = cv2.imread('sceneImg.jpg')\n\nAlso, I've moved rectangle check to the condition.\nThis way on mouse move, you first redraw the original image (thus removing the existing rectangle) and draw rectangle. Moving the mouse by even a pixel you again redraw the original picture removing any rectangle and draw again one rectangle. At any given point there will be just one rectangle.\nYup, replying after over 7 years, just in case anyone would ever find it useful :)\nPutting it all together:\nif event == cv2.EVENT_LBUTTONDOWN:\n rectangle = True\n ix,iy = x,y\n\nelif event == cv2.EVENT_MOUSEMOVE and rectangle :\n sceneImg[:] = sceneImg2[:]\n cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)\n rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))\n\n\nelif event == cv2.EVENT_LBUTTONUP:\n rectangle = False\n rect_over = True\n\n cv2.rectangle(sceneImg,(ix,iy),(x,y),(0,255,0),2)\n rect = (min(ix,x),min(iy,y),abs(ix-x),abs(iy-y))\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"draw",
"mouseevent",
"opencv",
"python"
] |
stackoverflow_0028823243_draw_mouseevent_opencv_python.txt
|
Q:
Subclassing Pandas and Openpyxl to read excel and skip cells with "strikethrough"
The Problem at hand:
I want to parse and concatenate hundreds of excel tables. However, many of these have entries that are formatted with strikethrough. I need to skip these entries.
Per request, this is a minimal example file, and a picture of the example table (values are randomized and may differ in the file):
The solution:
As @Alka has pointed out below, the code works with pandas==1.4.1; my current solution is a separate conda environment with a frozen pandas version.
Of course, I'd still be happy about any suggestions on how to make this run with up-to-date libraries.
The code
I have used the solution provided by @Henry Yik for longer than a year.
Taken from the original stackoverflow-thread linked above:
import pandas as pd
from pandas.io.excel._openpyxl import _OpenpyxlReader
from pandas._typing import Scalar
from typing import List
from pandas.io.excel._odfreader import _ODFReader
from pandas.io.excel._xlrd import _XlrdReader
class CustomReader(_OpenpyxlReader):
def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:
data = []
for row in sheet.rows:
first = row[1] # I need the strikethrough check on this cell only
if first.value is not None and first.font.strike: continue
else:
data.append([self._convert_cell(cell, convert_float) for cell in row])
return data
class CustomExcelFile(pd.ExcelFile):
_engines = {"xlrd": _XlrdReader, "openpyxl": CustomReader, "odf": _ODFReader}
Unfortunately, it broke with some library updates in early October. I was not able to get it to run again.
I would be really happy to solve this. I use this solution in several workflows.
I couldn't figure out how to solve the issue. I tried fiddling around with the CustomReader class, to no avail. Nor was I successful in reinstating an older environments.yml.
excel = CustomExcelFile(r"excel_file_name.xlsx", engine="openpyxl")
df = excel.parse()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [12], line 17
15 file.decrypt(decrypted)
16 excel = CustomExcelFile(decrypted, engine = "openpyxl")
---> 17 data = excel.parse(usecols="A:M", index = 0, header = 0)
File /home/dev/anaconda3/envs/lwo_datasci/lib/python3.10/site-packages/pandas/io/excel/_base.py:1734, in ExcelFile.parse(self, sheet_name, header, names, index_col, usecols, squeeze, converters, true_values, false_values, skiprows, nrows, na_values, parse_dates, date_parser, thousands, comment, skipfooter, convert_float, mangle_dupe_cols, **kwds)
1700 def parse(
1701 self,
1702 sheet_name: str | int | list[int] | list[str] | None = 0,
(...)
1721 **kwds,
1722 ) -> DataFrame | dict[str, DataFrame] | dict[int, DataFrame]:
1723 """
1724 Parse specified sheet(s) into a DataFrame.
1725
(...)
1732 DataFrame from the passed in Excel file.
1733 """
-> 1734 return self._reader.parse(
1735 sheet_name=sheet_name,
1736 header=header,
1737 names=names,
1738 index_col=index_col,
1739 usecols=usecols,
1740 squeeze=squeeze,
1741 converters=converters,
1742 true_values=true_values,
1743 false_values=false_values,
1744 skiprows=skiprows,
1745 nrows=nrows,
1746 na_values=na_values,
1747 parse_dates=parse_dates,
1748 date_parser=date_parser,
1749 thousands=thousands,
1750 comment=comment,
1751 skipfooter=skipfooter,
1752 convert_float=convert_float,
1753 mangle_dupe_cols=mangle_dupe_cols,
1754 **kwds,
1755 )
File /home/dev/anaconda3/envs/lwo_datasci/lib/python3.10/site-packages/pandas/io/excel/_base.py:765, in BaseExcelReader.parse(self, sheet_name, header, names, index_col, usecols, squeeze, dtype, true_values, false_values, skiprows, nrows, na_values, verbose, parse_dates, date_parser, thousands, decimal, comment, skipfooter, convert_float, mangle_dupe_cols, **kwds)
762 sheet = self.get_sheet_by_index(asheetname)
764 file_rows_needed = self._calc_rows(header, index_col, skiprows, nrows)
--> 765 data = self.get_sheet_data(sheet, convert_float, file_rows_needed)
766 if hasattr(sheet, "close"):
767 # pyxlsb opens two TemporaryFiles
768 sheet.close()
TypeError: CustomReader.get_sheet_data() takes 3 positional arguments but 4 were given
> TypeError: CustomReader.get_sheet_data() takes 3 positional arguments
> but 4 were given
A:
The problem might be related to versions of libraries you are using.
I've tried your code with the following versions and it worked (but I had to change the way the engines are imported from pandas):
pandas==1.4.1
openpyxl==3.0.10
import pandas as pd
from pandas.io.excel._openpyxl import OpenpyxlReader
from pandas._typing import Scalar
from typing import List
from pandas.io.excel._odfreader import ODFReader
from pandas.io.excel._xlrd import XlrdReader
class CustomReader(OpenpyxlReader):
def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:
data = []
for row in sheet.rows:
first = row[1] # I need the strikethrough check on this cell only
if first.value is not None and first.font.strike: continue
else:
data.append([self._convert_cell(cell, convert_float) for cell in row])
return data
class CustomExcelFile(pd.ExcelFile):
_engines = {"xlrd": XlrdReader, "openpyxl": CustomReader, "odf": ODFReader}
And then :
excel = CustomExcelFile("example_file.xlsx", engine="openpyxl")
df = excel.parse()
df
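For current pandas (2.x) the private reader API changed again: parse now calls get_sheet_data(sheet, file_rows_needed) and the convert_float argument was removed. A sketch along these lines keeps the strikethrough filter working there (this relies on pandas internals, which are private and may change again; treat it as an assumption against pandas >= 2.0 with openpyxl installed):

```python
import pandas as pd
from pandas.io.excel._openpyxl import OpenpyxlReader


class CustomReader(OpenpyxlReader):
    # pandas >= 2.0 passes file_rows_needed and no longer passes convert_float
    def get_sheet_data(self, sheet, file_rows_needed=None):
        data = []
        for row in sheet.rows:
            first = row[1]  # strikethrough check on this cell only
            if first.value is not None and first.font.strike:
                continue
            data.append([self._convert_cell(cell) for cell in row])
            if file_rows_needed is not None and len(data) >= file_rows_needed:
                break
        return data


class CustomExcelFile(pd.ExcelFile):
    # reuse pandas' engine table, swapping in the custom openpyxl reader
    _engines = {**pd.ExcelFile._engines, "openpyxl": CustomReader}
```

Usage is unchanged: CustomExcelFile("example_file.xlsx", engine="openpyxl").parse().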
|
Subclassing Pandas and Openpyxl to read excel and skip cells with "strikethrough"
|
The Problem at hand:
I want to parse and concatenate hundreds of excel tables. However, many of these have entries that are formatted with strikethrough. I need to skip these entries.
Per request, this is a minimal example file, and a picture of the example table (values are randomized and may differ in the file):
The solution:
As @Alka has pointed out below, the code works with pandas==1.4.1; my current solution is a separate conda environment with a frozen pandas version.
Of course, I'd still be happy about any suggestions on how to make this run with up-to-date libraries.
The code
I have used the solution provided by @Henry Yik for longer than a year.
Taken from the original stackoverflow-thread linked above:
import pandas as pd
from pandas.io.excel._openpyxl import _OpenpyxlReader
from pandas._typing import Scalar
from typing import List
from pandas.io.excel._odfreader import _ODFReader
from pandas.io.excel._xlrd import _XlrdReader
class CustomReader(_OpenpyxlReader):
def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:
data = []
for row in sheet.rows:
first = row[1] # I need the strikethrough check on this cell only
if first.value is not None and first.font.strike: continue
else:
data.append([self._convert_cell(cell, convert_float) for cell in row])
return data
class CustomExcelFile(pd.ExcelFile):
_engines = {"xlrd": _XlrdReader, "openpyxl": CustomReader, "odf": _ODFReader}
Unfortunately, it broke with some library updates in early October. I was not able to get it to run again.
I would be really happy to solve this. I use this solution in several workflows.
I couldn't figure out how to solve the issue. I tried fiddling around with the CustomReader class, to no avail. Nor was I successful in reinstating an older environments.yml.
excel = CustomExcelFile(r"excel_file_name.xlsx", engine="openpyxl")
df = excel.parse()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [12], line 17
15 file.decrypt(decrypted)
16 excel = CustomExcelFile(decrypted, engine = "openpyxl")
---> 17 data = excel.parse(usecols="A:M", index = 0, header = 0)
File /home/dev/anaconda3/envs/lwo_datasci/lib/python3.10/site-packages/pandas/io/excel/_base.py:1734, in ExcelFile.parse(self, sheet_name, header, names, index_col, usecols, squeeze, converters, true_values, false_values, skiprows, nrows, na_values, parse_dates, date_parser, thousands, comment, skipfooter, convert_float, mangle_dupe_cols, **kwds)
1700 def parse(
1701 self,
1702 sheet_name: str | int | list[int] | list[str] | None = 0,
(...)
1721 **kwds,
1722 ) -> DataFrame | dict[str, DataFrame] | dict[int, DataFrame]:
1723 """
1724 Parse specified sheet(s) into a DataFrame.
1725
(...)
1732 DataFrame from the passed in Excel file.
1733 """
-> 1734 return self._reader.parse(
1735 sheet_name=sheet_name,
1736 header=header,
1737 names=names,
1738 index_col=index_col,
1739 usecols=usecols,
1740 squeeze=squeeze,
1741 converters=converters,
1742 true_values=true_values,
1743 false_values=false_values,
1744 skiprows=skiprows,
1745 nrows=nrows,
1746 na_values=na_values,
1747 parse_dates=parse_dates,
1748 date_parser=date_parser,
1749 thousands=thousands,
1750 comment=comment,
1751 skipfooter=skipfooter,
1752 convert_float=convert_float,
1753 mangle_dupe_cols=mangle_dupe_cols,
1754 **kwds,
1755 )
File /home/dev/anaconda3/envs/lwo_datasci/lib/python3.10/site-packages/pandas/io/excel/_base.py:765, in BaseExcelReader.parse(self, sheet_name, header, names, index_col, usecols, squeeze, dtype, true_values, false_values, skiprows, nrows, na_values, verbose, parse_dates, date_parser, thousands, decimal, comment, skipfooter, convert_float, mangle_dupe_cols, **kwds)
762 sheet = self.get_sheet_by_index(asheetname)
764 file_rows_needed = self._calc_rows(header, index_col, skiprows, nrows)
--> 765 data = self.get_sheet_data(sheet, convert_float, file_rows_needed)
766 if hasattr(sheet, "close"):
767 # pyxlsb opens two TemporaryFiles
768 sheet.close()
TypeError: CustomReader.get_sheet_data() takes 3 positional arguments but 4 were given
> TypeError: CustomReader.get_sheet_data() takes 3 positional arguments
> but 4 were given
|
[
"The problem might be related to versions of libraries you are using.\nI've tried your code with the following versions and it worked (but I had to change the way the engines are imported from pandas):\n\npandas==1.4.1\nopenpyxl==3.0.10\n\nimport pandas as pd\nfrom pandas.io.excel._openpyxl import OpenpyxlReader\nfrom pandas._typing import Scalar\nfrom typing import List\nfrom pandas.io.excel._odfreader import ODFReader\nfrom pandas.io.excel._xlrd import XlrdReader\n\nclass CustomReader(OpenpyxlReader):\n def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:\n data = []\n for row in sheet.rows:\n first = row[1] # I need the strikethrough check on this cell only\n if first.value is not None and first.font.strike: continue\n else:\n data.append([self._convert_cell(cell, convert_float) for cell in row])\n return data\n\n\nclass CustomExcelFile(pd.ExcelFile):\n\n _engines = {\"xlrd\": XlrdReader, \"openpyxl\": CustomReader, \"odf\": ODFReader}\n\n\nAnd then :\nexcel = CustomExcelFile(\"example_file.xlsx\", engine=\"openpyxl\")\ndf = excel.parse()\ndf\n\n\n"
] |
[
2
] |
[] |
[] |
[
"openpyxl",
"pandas",
"python"
] |
stackoverflow_0074237797_openpyxl_pandas_python.txt
|
Q:
Python, concurrency, critical sections
Here I have some questions about possible critical sections.
In my code I have a function dealing with a queue. This function is the one and only producer that puts elements into the queue, but a number of threads operating concurrently get elements from this queue. Since there is a chance (I am not sure if such a chance exists, to be honest) that multiple threads will attempt to get one element each from the queue at the same time, is it possible that they will get exactly the same element from the queue?
One of the things my workers do is open a file (different workers open different files in exclusive dirs). I am using the context manager "with open(>some file<, 'w') as file...". So is it possible that multiple threads opening different files at the same time, but using exactly the same variable 'file', will mess things up? It looks like I have a critical section here, doesn't it?
A:
Your first question is easy to answer with the documentation of the queue class. If you implemented a custom queue, the locking is on you but the python queue module states:
Internally, those three types of queues use locks to temporarily block competing threads; however, they are not designed to handle reentrancy within a thread.
I am uncertain if your second question follows from the first question.
It would be helpful to clear up your question with an example.
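To make the first point concrete, here is a small self-contained sketch (plain stdlib, not your code): one producer-filled queue.Queue consumed by several threads. Because get() is internally locked, every element is delivered to exactly one consumer:

```python
import queue
import threading

q = queue.Queue()
for i in range(10_000):  # single producer fills the queue
    q.put(i)

received = []            # shared list, guarded by its own lock
lock = threading.Lock()

def worker():
    while True:
        try:
            item = q.get_nowait()  # locked internally: no two workers get the same item
        except queue.Empty:
            return
        with lock:
            received.append(item)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# every element is consumed exactly once, none duplicated or lost
print(len(received), len(set(received)))  # 10000 10000
```

So the answer to the first question is no: with queue.Queue, two threads cannot receive the same element. As for the second question, as long as file in "with open(...) as file" is a local variable inside the function each thread runs, each thread has its own independent binding of that name, so different workers writing to different files through it do not interfere.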
|
Python, concurrency, critical sections
|
Here I have some questions about possible critical sections.
In my code I have a function dealing with a queue. This function is the one and only producer that puts elements into the queue, but a number of threads operating concurrently get elements from this queue. Since there is a chance (I am not sure if such a chance exists, to be honest) that multiple threads will attempt to get one element each from the queue at the same time, is it possible that they will get exactly the same element from the queue?
One of the things my workers do is open a file (different workers open different files in exclusive dirs). I am using the context manager "with open(>some file<, 'w') as file...". So is it possible that multiple threads opening different files at the same time, but using exactly the same variable 'file', will mess things up? It looks like I have a critical section here, doesn't it?
|
[
"Your first question is easy to answer with the documentation of the queue class. If you implemented a custom queue, the locking is on you but the python queue module states:\n\nInternally, those three types of queues use locks to temporarily block competing threads; however, they are not designed to handle reentrancy within a thread.\n\nI am uncertain if your second question follows from the first question.\nIt would be helpful to clear up your question with an example.\n"
] |
[
1
] |
[] |
[] |
[
"concurrency",
"critical_section",
"python",
"queue"
] |
stackoverflow_0074474863_concurrency_critical_section_python_queue.txt
|
Q:
Update Django model field when actions taking place in another model
I want to make changes in a model instance A, when a second model instance B
is saved,updated or deleted.
All models are in the same Django app.
What would be the optimal way to do it?
Should I use signals?
Override default methods[save, update,delete]?
Something else?
Django documentation warns:
Where possible you should opt for directly calling the handling code, rather than dispatching via a signal.
Can somebody elaborate on that statement?
A:
The performance impact of your signal handlers depends of course on their functionality. But you should be aware of the fact that they are executed synchronously when the signal is fired, so if there's a lot of action going on in the handlers (for example a lot of database calls) the execution will be delayed.
|
Update Django model field when actions taking place in another model
|
I want to make changes in a model instance A, when a second model instance B
is saved,updated or deleted.
All models are in the same Django app.
What would be the optimal way to do it?
Should I use signals?
Override default methods[save, update,delete]?
Something else?
Django documentation warns:
Where possible you should opt for directly calling the handling code, rather than dispatching via a signal.
Can somebody elaborate on that statement?
|
[
"The performance impact of your signal handlers depends of course on their functionality. But you should be aware of the fact that they are executed synchronously when the signal is fired, so if there's a lot of action going on in the handlers (for example a lot of database calls) the execution will be delayed.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"model",
"python",
"signals"
] |
stackoverflow_0074474918_django_model_python_signals.txt
|
Q:
Login Automation Using Selenium Not Working Properly
I have built a login Automator using Selenium, and the code executes without errors but the script doesn't login. The page is stuck at login page, email and password are entered, but login is not completed.
I have tried 2 ways to log in:
By clicking on Login through click()
e = self.driver.find_element(By.XPATH, "//div[text()='Login']")
e.click()
Using Enter in password area
password_element.send_keys(Keys.ENTER)
But neither of them logs me in, even though I can see the button being clicked, and name and password being entered.
I also tried adding a wait time, but the problem is not solved. What am I doing wrong?
Here is the code:
import pandas as pd
from selenium import webdriver
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.service import Service
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
class QuoraScraper:
def __init__(self):
self.driver = ''
self.dataframe = ''
self.credentials = {
'email': 'email',
'password': 'password'
}
self.questions = []
self.answers = []
def start_driver(self):
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
self.driver = webdriver.Firefox(options=options)
def close_driver(self):
self.driver.close()
def open_url(self, url):
self.driver.get(url)
def initialize_columns(self, columns):
self.dataframe = pd.DataFrame(columns=columns)
def set_credentials(self, email, password):
self.credentials['email'] = email
self.credentials['password'] = password
def login(self):
self.open_url('https://www.quora.com/')
if (self.credentials['email'] and self.credentials['password']):
email_element = self.driver.find_element(By.ID, 'email')
password_element = self.driver.find_element(By.ID, 'password')
email_element.send_keys(self.credentials['email'])
password_element.send_keys(self.credentials['password'])
# I tried adding a wait time but the script is not successful either way
#self.driver.maximize_window() # For maximizing window
#self.driver.implicitly_wait(20) # gives an implicit wait for 20 seconds
# I tried clicking on Login through both Click and Enter but neither of them logs me in, even though I can see the button clicking
password_element.send_keys(Keys.ENTER)
e = self.driver.find_element(By.XPATH, "//div[text()='Login']")
e.click()
else:
print('Credentials not set. Error')
if __name__ == "__main__":
scraper = QuoraScraper()
scraper.start_driver()
email = "email"
password = "password"
scraper.set_credentials(email, password)
scraper.login()
UPDATE 2: I get a login popup window after the email and password have been correctly entered, and I try to close it by finding the XPath of the X button like this:
cross = self.driver.find_element(By.XPATH, '//*[@id="close"]')
cross.click()
But the element cannot be located:
selenium.common.exceptions.NoSuchElementException: Message: Unable to
locate element: //*[@id="close"]
A:
Before you start scripting, please understand how exactly the AUT (application under test) works.
On the Quora login page, as you enter the email address, backend validation happens with the server to check whether the email is valid.
Until the email address is validated and a correct password is entered, the Login button is disabled.
Add an intermediate step that checks the attribute, or wait for the attribute disabled=false, then proceed with the click. This should solve the issue.
A:
I typed this //div[text()='Login'] into the source code of the Quora site to check whether it is the correct XPath for the login button, but instead it highlighted the Login text on top of the email field. This is the correct XPath for the login button:
login_btn = driver.find_element_by_xpath('//*[@id="root"]/div/div[2]/div/div/div/div/div/div[2]/div[2]/div[4]/button')
|
Login Automation Using Selenium Not Working Properly
|
I have built a login Automator using Selenium, and the code executes without errors but the script doesn't login. The page is stuck at login page, email and password are entered, but login is not completed.
I have tried 2 ways to log in:
By clicking on Login through click()
e = self.driver.find_element(By.XPATH, "//div[text()='Login']")
e.click()
Using Enter in password area
password_element.send_keys(Keys.ENTER)
But neither of them logs me in, even though I can see the button being clicked, and name and password being entered.
I also tried adding a wait time, but the problem is not solved. What am I doing wrong?
Here is the code:
import pandas as pd
from selenium import webdriver
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.service import Service
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
class QuoraScraper:
def __init__(self):
self.driver = ''
self.dataframe = ''
self.credentials = {
'email': 'email',
'password': 'password'
}
self.questions = []
self.answers = []
def start_driver(self):
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
self.driver = webdriver.Firefox(options=options)
def close_driver(self):
self.driver.close()
def open_url(self, url):
self.driver.get(url)
def initialize_columns(self, columns):
self.dataframe = pd.DataFrame(columns=columns)
def set_credentials(self, email, password):
self.credentials['email'] = email
self.credentials['password'] = password
def login(self):
self.open_url('https://www.quora.com/')
if (self.credentials['email'] and self.credentials['password']):
email_element = self.driver.find_element(By.ID, 'email')
password_element = self.driver.find_element(By.ID, 'password')
email_element.send_keys(self.credentials['email'])
password_element.send_keys(self.credentials['password'])
# I tried adding a wait time but the script is not successful either way
#self.driver.maximize_window() # For maximizing window
#self.driver.implicitly_wait(20) # gives an implicit wait for 20 seconds
# I tried clicking on Login through both Click and Enter but neither of them logs me in, even though I can see the button clicking
password_element.send_keys(Keys.ENTER)
e = self.driver.find_element(By.XPATH, "//div[text()='Login']")
e.click()
else:
print('Credentials not set. Error')
if __name__ == "__main__":
scraper = QuoraScraper()
scraper.start_driver()
email = "email"
password = "password"
scraper.set_credentials(email, password)
scraper.login()
UPDATE 2: I get a login popup window after the email and password have been correctly entered, and I try to close it by finding the XPath of the X button like this:
cross = self.driver.find_element(By.XPATH, '//*[@id="close"]')
cross.click()
But the element cannot be located:
selenium.common.exceptions.NoSuchElementException: Message: Unable to
locate element: //*[@id="close"]
|
[
"Before you start scripting, please understand how exactly the AUT (application under test) works.\nOn the Quora login page, as you enter the email address, backend validation happens with the server to check whether the email is valid.\nUntil the email address is validated and a correct password is entered, the Login button is disabled.\nAdd an intermediate step that checks the attribute, or wait for the attribute disabled=false, then proceed with the click. This should solve the issue.\n",
"I typed this //div[text()='Login'] into the source code of the Quora site to check whether it is the correct XPath for the login button, but instead it highlighted the Login text on top of the email field. This is the correct XPath for the login button:\nlogin_btn = driver.find_element_by_xpath('//*[@id=\"root\"]/div/div[2]/div/div/div/div/div/div[2]/div[2]/div[4]/button')\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"automation",
"python",
"selenium",
"selenium_webdriver"
] |
stackoverflow_0074447578_automation_python_selenium_selenium_webdriver.txt
|
Q:
Python code to draw a path on a map with arrows using lat/long data
Along with time and heading, I also have the latitude & longitude data. From the data, I want to draw a path with arrows that indicate the path a vehicle took.
I can create a path, but I am not able to put arrows on the path to indicate the direction the vehicle took.
I want to create a plot that looks like the image below.
A:
If you are using matplotlib, you could use quiver
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
# Arrow locations
lon = [1,1,2,2]
lat = [1,2,2,1]
# Arrow directions
x_dir = [1,0,-1,0]
y_dir = [0,-1,0,1]
# add to plot using quiver
ax.quiver(lon,lat,x_dir,y_dir,angles='xy', scale_units='xy')
plt.show()
I cannot be more precise with the info you provided, but the above logic could be used.
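Applied to the question's data, one sketch (with made-up coordinates, since none were posted) is to use the difference between consecutive lat/long points as the arrow directions, so each arrow points along the segment the vehicle travelled:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt
import numpy as np

# hypothetical recorded track (replace with your lat/long columns)
lon = np.array([-0.12, -0.11, -0.11, -0.10, -0.09])
lat = np.array([51.50, 51.51, 51.52, 51.52, 51.53])

# arrow direction at each point = vector to the next point
u = np.diff(lon)
v = np.diff(lat)

fig, ax = plt.subplots()
ax.plot(lon, lat, color="lightgray", zorder=1)   # the path itself
ax.quiver(lon[:-1], lat[:-1], u, v,              # one arrow per segment
          angles="xy", scale_units="xy", scale=1, zorder=2)
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
fig.savefig("path_with_arrows.png")
```

With angles="xy" and scale_units="xy", the arrows are drawn in data coordinates, so they lie exactly on top of the plotted track.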
|
Python code to draw a path on a map with arrows using lat/long data
|
Along with time and heading, I also have the latitude & longitude data. From the data, I want to draw a path with arrows that indicate the path a vehicle took.
I can create a path, but I am not able to put arrows on the path to indicate the direction the vehicle took.
I want to create a plot that looks like the image below.
|
[
"If you are using matplotlib, you could use quiver\nimport matplotlib.pyplot as plt\nfig, ax = plt.subplots()\n# Arrow locations\nlon = [1,1,2,2]\nlat = [1,2,2,1]\n# Arrow directions\nx_dir = [1,0,-1,0]\ny_dir = [0,-1,0,1]\n# add to plot using quiver\nax.quiver(lon,lat,x_dir,y_dir,angles='xy', scale_units='xy')\nplt.show()\n\nI cannot be more precise with info you provided, but above logic could be used.\n"
] |
[
0
] |
[] |
[] |
[
"maps",
"python",
"python_3.x"
] |
stackoverflow_0074473185_maps_python_python_3.x.txt
|
Q:
create a new column for each strategy and add or subtract an amount
I want to extract from a dataset the amount accumulated by strategy according to the transactions between the strategies (from or to):
import pandas as pd
df = pd.DataFrame({"value": [1000, 4000, 2000, 3000],
"out": ["cash", "cash", "lending", "DCA"],
"in": ["DCA", "lending", "cash", "lending"]})
value out in
0 1000 cash DCA
1 4000 cash lending
2 2000 lending cash
3 3000 DCA lending
What I expect to do:
value out in cash lending DCA
0 1000 cash DCA -1000 0 1000
1 4000 cash lending -5000 4000 1000
2 2000 lending cash -3000 2000 1000
3 3000 DCA lending -3000 5000 -2000
I don't know how to approach the problem.
Any help would be appreciated.
A:
You can try like this:
import pandas as pd
df = pd.DataFrame({"value": [1000, 4000, 2000, 3000],
"out": ["cash", "cash", "lending", "DCA"],
"in": ["DCA", "lending", "cash", "lending"]})
# get strategies from data source and create an account for each
accounts = {strat: 0 for strat in list(df["out"]) + list(df["in"])}
# add new columns for each strategy to dataframe
for strat in accounts.keys():
df[strat] = 0
# loop through transactions and enter values to accounts
for i, t in df.iterrows():
accounts[t["out"]] -= t["value"]
accounts[t["in"]] += t["value"]
for strat, v in accounts.items():
df.loc[i, strat] = v
print(df)
Output:
value out in cash lending DCA
0 1000 cash DCA -1000 0 1000
1 4000 cash lending -5000 4000 1000
2 2000 lending cash -3000 2000 1000
3 3000 DCA lending -3000 5000 -2000
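If the loop becomes slow on large frames, the same running balances can be computed without iterrows by one-hot encoding the in/out columns and taking a cumulative sum (a vectorized alternative I am sketching here, not part of the original answer):

```python
import pandas as pd

df = pd.DataFrame({"value": [1000, 4000, 2000, 3000],
                   "out": ["cash", "cash", "lending", "DCA"],
                   "in": ["DCA", "lending", "cash", "lending"]})

# every strategy seen in either column
strategies = sorted(set(df["out"]) | set(df["in"]))

# per row: +1 for the receiving strategy, -1 for the sending one
flows = (pd.get_dummies(df["in"], dtype=int).reindex(columns=strategies, fill_value=0)
         - pd.get_dummies(df["out"], dtype=int).reindex(columns=strategies, fill_value=0))

# scale by the transaction value, then accumulate down the rows
result = pd.concat([df, flows.mul(df["value"], axis=0).cumsum()], axis=1)
print(result)
```

The reindex with fill_value=0 guards against a strategy that appears only on the "in" side or only on the "out" side, which would otherwise produce NaN columns after the subtraction.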
|
create a new column for each strategy and add or subtract an amount
|
I want to extract from a dataset the amount accumulated by strategy according to the transactions between the strategies (from or to):
import pandas as pd
df = pd.DataFrame({"value": [1000, 4000, 2000, 3000],
"out": ["cash", "cash", "lending", "DCA"],
"in": ["DCA", "lending", "cash", "lending"]})
value out in
0 1000 cash DCA
1 4000 cash lending
2 2000 lending cash
3 3000 DCA lending
What I expect to do:
value out in cash lending DCA
0 1000 cash DCA -1000 0 1000
1 4000 cash lending -5000 4000 1000
2 2000 lending cash -3000 2000 1000
3 3000 DCA lending -3000 5000 -2000
I don't know how to approach the problem.
Any help would be appreciated.
|
[
"You can try like this:\nimport pandas as pd\n\ndf = pd.DataFrame({\"value\": [1000, 4000, 2000, 3000],\n \"out\": [\"cash\", \"cash\", \"lending\", \"DCA\"],\n \"in\": [\"DCA\", \"lending\", \"cash\", \"lending\"]})\n\n# get strategies from data source and create an account for each\naccounts = {strat: 0 for strat in list(df[\"out\"]) + list(df[\"in\"])}\n\n# add new columns for each strategy to dataframe\nfor strat in accounts.keys():\n df[strat] = 0\n\n# loop through transactions and enter values to accounts\nfor i, t in df.iterrows():\n accounts[t[\"out\"]] -= t[\"value\"]\n accounts[t[\"in\"]] += t[\"value\"]\n for strat, v in accounts.items():\n df.loc[i, strat] = v\n\nprint(df)\n\nOutput:\n value out in cash lending DCA\n0 1000 cash DCA -1000 0 1000\n1 4000 cash lending -5000 4000 1000\n2 2000 lending cash -3000 2000 1000\n3 3000 DCA lending -3000 5000 -2000\n\n"
] |
[
1
] |
[] |
[] |
[
"finance",
"pandas",
"python"
] |
stackoverflow_0074467844_finance_pandas_python.txt
|
Q:
Record not getting edited in django form using instance
The model is not getting updated in the database while using the below methods.
This is the upload form in views:
def upload(request):
if request.method == 'POST':
form = UploadForm(request.POST, request.FILES)
if form.is_valid():
upload = form.save(commit= False)
upload.user = request.user
upload.save()
messages.info(request,"Added Successfully..!")
return redirect("home")
return render(request, "upload.html", {'form': UploadForm})
This is my edit function in views
def editbug(request, pk):
edit = bug.objects.get(id=pk)
if request.method == 'POST':
form = UploadForm(request.POST, instance= edit)
if form.is_valid():
form.save()
print("uploaded")
messages.info('Record updated successfully..!')
return redirect("home")
else:
form = UploadForm(instance= edit)
return render(request, "upload.html", {'form': form})
urls.py
urlpatterns = [
path('home',views.home, name='home'),
path('index',views.index, name='index'),
path("records/<int:pk>/", views.records, name="records"),
path("", views.login_request, name="login"),
path("logout", views.logout_request, name="logout"),
path("upload", views.upload, name='upload'),
path("edit/<int:pk>/", views.editbug, name="edit")
]
Relevant Template:
{% for bug in b %}
<tr>
<td>{{ forloop.counter }}</td>
<td><a href="{% url 'records' pk=bug.pk %}">{{bug.name}}</a>
</td>
<td>{{bug.created_at}}</td>
<td>{{bug.user}}</td>
<td>{{bug.status}}</td>
<td><a class="btn btn-sm btn-info" href="{% url 'edit' bug.id
%}">Edit</a></td>
</tr>
{% endfor %}
This is the template used for editing the form
<!DOCTYPE html>
<html>
<head>
<style>
table {
font-family: arial, sans-serif;
border-collapse: collapse;
width: 60%;
margin-left: auto;
margin-right: auto;
}
td, th {
border: 1px solid #dddddd;
text-align: left;
padding: 8px;
}
tr:nth-child(even) {
background-color: #dddddd;
}
</style>
</head>
<body>
<h2 style="text-align: center;">Bug List</h2>
<table>
<tr>
<th style="width: 5%;">Sl no.</th>
<th>Bug</th>
<th style="width: 20%;">Created at</th>
<th>Created by</th>
<th>Status</th>
</tr>
{% block content %}
{% for bug in b %}
<tr>
<td>{{ forloop.counter }}</td>
<td><a href="{% url 'records' pk=bug.pk %}">{{bug.name}}</a>
</td>
<td>{{bug.created_at}}</td>
<td>{{bug.user}}</td>
<td>{{bug.status}}</td>
<td><a class="btn btn-sm btn-info" href="{% url 'edit' bug.id
%}">Edit</a></td>
</tr>
{% endfor %}
{% endblock %}
</table>
</body>
</html>
This is the upload form for the form creation and editing.
class UploadForm(ModelForm):
name = forms.CharField(max_length=200)
info = forms.TextInput()
status = forms.ChoiceField(choices = status_choice, widget=
forms.RadioSelect())
fixed_by = forms.CharField(max_length=30)
phn_number = PhoneNumberField()
#created_by = forms.CharField(max_length=30)
#created_at = forms.DateTimeField()
#updated_at = forms.DateTimeField()
screeenshot = forms.ImageField()
class Meta:
model = bug
fields = ['name', 'info', 'status', 'fixed_by',
'phn_number', 'screeenshot']
Tried editing the record but it is not getting updated. please check the views, templates and urls.
A:
I have the following suggestions:
Only redirect if the form is valid.
Also use request.FILES while editing.
Use get_object_or_404() instead of get().
So, upload view should be:
def upload(request):
if request.method == 'POST':
form = UploadForm(request.POST, request.FILES)
if form.is_valid():
upload = form.save(commit= False)
upload.user = request.user
upload.save()
messages.info(request,"Added Successfully..!")
return redirect("home") #Only redirect if the form is valid.
else:
form=UploadForm()
return render(request, "upload.html", {'form': form})
Edit view should be:
def editbug(request, pk):
edit = get_object_or_404(bug, id=pk)
if request.method == 'POST':
form = UploadForm(request.POST, request.FILES, instance= edit)
if form.is_valid():
form.save()
print("uploaded")
            messages.info(request, 'Record updated successfully..!')
return redirect("home")
else:
form = UploadForm(instance=edit)
return render(request, "edit.html", {'form': form})
Note: Models are written in PascalCase so it is better to name it as Bug instead of bug.
Make a separate template for editing, say edit.html. Since UploadForm contains an ImageField, the form tag also needs enctype='multipart/form-data':
<body>
    <form method='POST' enctype='multipart/form-data'>
        {% csrf_token %}
        {{form.as_p}}
        <input type='submit'>
    </form>
</body>
A:
You can also try it this way:
views.py:
def update(request, id):
edit = bug.objects.get(id=id)
form = Formname(request.POST,instance=edit)
if form.is_valid():
form.save()
return HttpResponseRedirect('/')
return render(request,'edit.html',{'edit': edit})
And in templates:
edit.html:
<form method='post' action='/update/{{edit.id}}'>
    {% csrf_token %}
    <input name='fieldname' value='{{edit.fieldname}}'/>
    <input type="submit"/>
</form>
urls.py:
path('update/<int:id>/',views.update)
|
Record not getting edited in django form using instance
|
The model is not getting updated in the database while using the below methods.
This is upload form in views
def upload(request):
if request.method == 'POST':
form = UploadForm(request.POST, request.FILES)
if form.is_valid():
upload = form.save(commit= False)
upload.user = request.user
upload.save()
messages.info(request,"Added Successfully..!")
return redirect("home")
return render(request, "upload.html", {'form': UploadForm})
This is my edit function in views
def editbug(request, pk):
edit = bug.objects.get(id=pk)
if request.method == 'POST':
form = UploadForm(request.POST, instance= edit)
if form.is_valid():
form.save()
print("uploaded")
messages.info('Record updated successfully..!')
return redirect("home")
else:
form = UploadForm(instance= edit)
return render(request, "upload.html", {'form': form})
urls.py
urlpatterns = [
path('home',views.home, name='home'),
path('index',views.index, name='index'),
path("records/<int:pk>/", views.records, name="records"),
path("", views.login_request, name="login"),
path("logout", views.logout_request, name="logout"),
path("upload", views.upload, name='upload'),
path("edit/<int:pk>/", views.editbug, name="edit")
]
Relevant Template:
{% for bug in b %}
<tr>
<td>{{ forloop.counter }}</td>
<td><a href="{% url 'records' pk=bug.pk %}">{{bug.name}}</a>
</td>
<td>{{bug.created_at}}</td>
<td>{{bug.user}}</td>
<td>{{bug.status}}</td>
<td><a class="btn btn-sm btn-info" href="{% url 'edit' bug.id
%}">Edit</a></td>
</tr>
{% endfor %}
This is the template used for editing the form
<!DOCTYPE html>
<html>
<head>
<style>
table {
font-family: arial, sans-serif;
border-collapse: collapse;
width: 60%;
margin-left: auto;
margin-right: auto;
}
td, th {
border: 1px solid #dddddd;
text-align: left;
padding: 8px;
}
tr:nth-child(even) {
background-color: #dddddd;
}
</style>
</head>
<body>
<h2 style="text-align: center;">Bug List</h2>
<table>
<tr>
<th style="width: 5%;">Sl no.</th>
<th>Bug</th>
<th style="width: 20%;">Created at</th>
<th>Created by</th>
<th>Status</th>
</tr>
{% block content %}
{% for bug in b %}
<tr>
<td>{{ forloop.counter }}</td>
<td><a href="{% url 'records' pk=bug.pk %}">{{bug.name}}</a>
</td>
<td>{{bug.created_at}}</td>
<td>{{bug.user}}</td>
<td>{{bug.status}}</td>
<td><a class="btn btn-sm btn-info" href="{% url 'edit' bug.id
%}">Edit</a></td>
</tr>
{% endfor %}
{% endblock %}
</table>
</body>
</html>
This is the upload form for the form creation and editing.
class UploadForm(ModelForm):
name = forms.CharField(max_length=200)
info = forms.TextInput()
status = forms.ChoiceField(choices = status_choice, widget=
forms.RadioSelect())
fixed_by = forms.CharField(max_length=30)
phn_number = PhoneNumberField()
#created_by = forms.CharField(max_length=30)
#created_at = forms.DateTimeField()
#updated_at = forms.DateTimeField()
screeenshot = forms.ImageField()
class Meta:
model = bug
fields = ['name', 'info', 'status', 'fixed_by',
'phn_number', 'screeenshot']
Tried editing the record but it is not getting updated. please check the views, templates and urls.
|
[
"I have following suggestions:\n\nOnly redirect if the form is valid.\n\nAlso use request.FILES while editing.\n\nUse get_object_or_404() instead of get().\n\n\nSo, upload view should be:\ndef upload(request):\n if request.method == 'POST':\n form = UploadForm(request.POST, request.FILES)\n if form.is_valid():\n upload = form.save(commit= False)\n upload.user = request.user\n upload.save()\n messages.info(request,\"Added Successfully..!\")\n return redirect(\"home\") #Only redirect if the form is valid.\n else:\n form=UploadForm()\n return render(request, \"upload.html\", {'form': form})\n\nEdit view should be:\ndef editbug(request, pk):\n edit = get_object_or_404(bug, id=pk)\n\n if request.method == 'POST':\n form = UploadForm(request.POST, request.FILES, instance= edit)\n if form.is_valid():\n form.save()\n print(\"uploaded\")\n messages.info('Record updated successfully..!')\n return redirect(\"home\")\n else:\n form = UploadForm(instance=edit)\n\n return render(request, \"edit.html\", {'form': form})\n\n\nNote: Models are written in PascalCase so it is better to name it as Bug instead of bug.\n\nMake separate template for editing, say edit.html so:\n<body>\n <form method='POST'>\n {% crsf_token %}\n {{form.as_p}}\n <input type='submit'>\n </form>\n</body>\n\n",
"Also you can try this way:\nviews.py:\ndef update(request, id): \n edit = bug.objects.get(id=id) \n form = Formname(request.POST,instance=edit) \n if form.is_valid(): \n form.save() \n return HttpResponseRedirect('/') \n return render(request,'edit.html',{'edit': edit})\n\nAnd in templates:\nedit.html:\n<form method='post' action=/update/{{edit.id}}>\n {% csrf_token %}\n <input value={{edit.fieldname}}/>\n <input type=\"submit\"/>\n</form>\n\nurls.py:\npath('update/<int:id>/',views.update)\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"django",
"django_forms",
"django_models",
"django_urls",
"python"
] |
stackoverflow_0074474785_django_django_forms_django_models_django_urls_python.txt
|
Q:
how to write a function that calculates the age category
I need to write a function that calculates the age category, so this is the function:
def age_category(dob_years):
if dob_years < 0 or pd.isna(dob_years):
return 'NA'
elif dob_years < 20:
return '10-19'
elif dob_years < 30:
return '20-29'
elif dob_years < 40:
return '30-39'
elif dob_years < 50:
return '40-49'
elif dob_years < 60:
return '50-59'
elif dob_years < 70:
return '60-69'
else:
return '70+'
I checked the function and it works,
but when I try to create a new column:
credit_scoring['age_group']= credit_scoring.apply(age_category, axis=1)
I get this error:
TypeError: '<' not supported between instances of 'str' and 'int'
Actually, I am new to Python and I don't know what to do.
Please help: what is wrong with the code?
Thanks for your time :)
A:
def age_category(dob_years):
if not isinstance(dob_years, (float, int)):
try:
dob_years = int(dob_years)
except ValueError:
return 'NA'
if dob_years < 0:
return 'NA'
return {
0: '0-9',
10: '10-19',
20: '20-29',
30: '30-39',
40: '40-49',
50: '50-59',
60: '60-69',
70: '70+',
    }[min(70, 10 * int(dob_years // 10))]  # clamp ages 80+ into the 70+ bucket
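A quick standalone sanity check of the bucketing logic. The function is repeated here so the snippet runs on its own, with ages 80 and over clamped into the '70+' bucket so the dict lookup cannot raise a KeyError.

```python
# Standalone check of the dict-based bucketing (no pandas needed).
def age_category(dob_years):
    if not isinstance(dob_years, (float, int)):
        try:
            dob_years = int(dob_years)
        except ValueError:
            return 'NA'

    if dob_years < 0:
        return 'NA'

    return {
        0: '0-9', 10: '10-19', 20: '20-29', 30: '30-39',
        40: '40-49', 50: '50-59', 60: '60-69', 70: '70+',
    }[min(70, 10 * int(dob_years // 10))]

for value, expected in [(0, '0-9'), (19, '10-19'), ('25', '20-29'),
                        (73, '70+'), (85, '70+'), (-3, 'NA'), ('abc', 'NA')]:
    assert age_category(value) == expected, (value, expected)
print('all buckets OK')
```

Applied to the question's DataFrame this would be `credit_scoring['dob_years'].apply(age_category)` (the column name `dob_years` is an assumption taken from the function's argument name): `Series.apply` passes each value, whereas `DataFrame.apply(..., axis=1)` passes whole rows and triggers the original TypeError.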
A:
You can achieve your goal more easily using pd.cut.
First of all, the sample data:
>>> df = pd.DataFrame([0, 18, -3, 73, 17, 88, 60, 1, 20, 14], columns=["age"])
>>> df
age
0 0
1 18
2 -3
3 73
4 17
5 88
6 60
7 1
8 20
9 14
Then you need to prepare the bins and their labels:
>>> from math import inf
>>> bins = list(range(0, 80, 10))
>>> bins.append(inf)
>>> bins
[0, 10, 20, 30, 40, 50, 60, 70, inf]
>>> labels = [f"{i}-{i + 9}" for i in bins[:-2]]
>>> labels.append(f"{bins[-2]}+")
>>> labels
['0-9', '10-19', '20-29', '30-39', '40-49', '50-59', '60-69', '70+']
Once you have them, use pd.cut with right=False so the bins are left-closed ([0, 10), [10, 20), ...) and the labels match your example.
>>> df["age group"] = pd.cut(df["age"], bins=bins, labels=labels, right=False)
>>> df
age age group
0 0 0-9
1 18 10-19
2 -3 NaN
3 73 70+
4 17 10-19
5 88 70+
6 60 60-69
7 1 0-9
8 20 20-29
9 14 10-19
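To wire this back to the original question, the same bins and labels can be applied directly to the age column. The DataFrame below is a toy stand-in, and the column name `dob_years` is assumed from the question's function argument:

```python
import pandas as pd
from math import inf

# Toy stand-in for the asker's DataFrame; the column name is assumed.
credit_scoring = pd.DataFrame({'dob_years': [25, 0, 67, -1, 90]})

bins = list(range(0, 80, 10)) + [inf]
labels = [f'{i}-{i + 9}' for i in bins[:-2]] + ['70+']

# right=False gives left-closed bins [0, 10), [10, 20), ...; negative
# ages fall outside every bin and come back as NaN.
credit_scoring['age_group'] = pd.cut(
    credit_scoring['dob_years'], bins=bins, labels=labels, right=False
)
print(credit_scoring)
```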
|
how to write a function that calculates the age category
|
I need to write a function that calculates the age category, so this is the function:
def age_category(dob_years):
if dob_years < 0 or pd.isna(dob_years):
return 'NA'
elif dob_years < 20:
return '10-19'
elif dob_years < 30:
return '20-29'
elif dob_years < 40:
return '30-39'
elif dob_years < 50:
return '40-49'
elif dob_years < 60:
return '50-59'
elif dob_years < 70:
return '60-69'
else:
return '70+'
I checked the function and it works,
but when I try to create a new column:
credit_scoring['age_group']= credit_scoring.apply(age_category, axis=1)
I get this error:
TypeError: '<' not supported between instances of 'str' and 'int'
Actually, I am new to Python and I don't know what to do.
Please help: what is wrong with the code?
Thanks for your time :)
|
[
"def age_category(dob_years):\n if not isinstance(dob_years, (float, int)):\n try:\n dob_years = int(dob_years)\n except ValueError:\n return 'NA'\n\n if dob_years < 0:\n return 'NA'\n\n return {\n 0: '0-9',\n 10: '10-19',\n 20: '20-29',\n 30: '30-39',\n 40: '40-49',\n 50: '50-59',\n 60: '60-69',\n 70: '70+',\n }[10 * int(dob_years // 10)]\n\n",
"You can achieve your goal more easily using pd.cut.\nFirst of all, the sample data:\n>>> df = pd.DataFrame([0, 18, -3, 73, 17, 88, 60, 1, 20, 14], columns=[\"age\"])\n>>> df\n age\n0 0\n1 18\n2 -3\n3 73\n4 17\n5 88\n6 60\n7 1\n8 20\n9 14\n\nThen you need to prepare the bins and their labels:\n>>> from math import inf\n>>> bins = list(range(0, 80, 10))\n>>> bins.append(inf)\n>>> bins\n[0, 10, 20, 30, 40, 50, 60, 70, inf]\n>>> labels = [f\"{i}-{i + 9}\" for i in bins[:-2]]\n>>> labels.append(f\"{bins[-2]}+\")\n>>> labels\n['0-9', '10-19', '20-29', '30-39', '40-49', '50-59', '60-69', '70+']\n\nOnce you have them, use pd.cut with right=True so it will assign labels according to your example.\n>>> df[\"age group\"] = pd.cut(df[\"age\"], bins=bins, labels=labels, right=False)\n>>> df\n age age group\n0 0 0-9\n1 18 10-19\n2 -3 NaN\n3 73 70+\n4 17 10-19\n5 88 70+\n6 60 60-69\n7 1 0-9\n8 20 20-29\n9 14 10-19\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074467111_pandas_python.txt
|
Q:
Is it possible to export a screenshot of a GRC flowgraph in commandline?
What is my problem?
Is it possible to export a screenshot of a gnuradio-companion flowgraph in commandline?
The command grcc provides the ability to compile .grc flowgraph files into Python files, but it doesn't provide the functionality to export the graph as a screenshot, which is possible when you use the gnuradio-companion GUI (using Ctrl-P).
What do I need this for?
I'm currently working with a bunch of .grc flowgraphs and I'm storing them in GitLab. I'd like to set up a pipeline of some sort that generates a screenshot of those .grc files so the people I am collaborating with can view the gnuradio flowgraph without needing to install gnuradio. So for this, I need some way to generate screenshots from the command line with a script.
Looking into the gnuradio source code, this is the only reference to screen capture I find, and even if I found code that would generate the screen capture, I wouldn't know how to use it to create an independent script that just generates a screenshot.
A:
Ok, so a solution that kinda works for me now is using a virtual X server (with Xvfb) and manually executing the export functionality. The script looks like this:
#!/bin/bash
if [ -z "$1" ];
then
echo "Usage: $1 [flowgraph .grc file]"
echo "This script outputs an output.png file"
exit 1
fi
export DISPLAY=:2  # export, so child processes (GRC, xdotool) use the virtual display
echo "Starting Xvfb instance on display $DISPLAY"
Xvfb $DISPLAY &
echo "Starting GRC..."
gnuradio-companion $1 >/dev/null 2>&1 &
sleep 2
xdotool key ctrl+p
xdotool key ctrl+a
xdotool key BackSpace
xdotool type output.png
xdotool key enter
echo "Saving Screenshot..."
sleep 2
echo "Killing GRC..."
killall gnuradio-companion
echo "Killing Xvfb..."
killall Xvfb
The output of the script is called output.png and will be placed in the current directory. This is quite a hacky solution but nothing better currently comes to mind.
|
Is it possible to export a screenshot of a GRC flowgraph in commandline?
|
What is my problem?
Is it possible to export a screenshot of a gnuradio-companion flowgraph in commandline?
The command grcc provides the ability to compile .grc flowgraph files into Python files, but it doesn't provide the functionality to export the graph as a screenshot, which is possible when you use the gnuradio-companion GUI (using Ctrl-P).
What do I need this for?
I'm currently working with a bunch of .grc flowgraphs and I'm storing them in GitLab. I'd like to set up a pipeline of some sort that generates a screenshot of those .grc files so the people I am collaborating with can view the gnuradio flowgraph without needing to install gnuradio. So for this, I need some way to generate screenshots from the command line with a script.
Looking into the gnuradio source code, this is the only reference to screen capture I find, and even if I found code that would generate the screen capture, I wouldn't know how to use it to create an independent script that just generates a screenshot.
|
[
"Ok, so a solution that kinda works for me now is using a virtual X server (with Xvfb) and manually executing the export functionality. The script looks like this:\n#!/bin/bash\n\nif [ -z \"$1\" ];\nthen\n echo \"Usage: $1 [flowgraph .grc file]\"\n echo \"This script outputs an output.png file\"\n exit 1\nfi\n\nDISPLAY=:2\n\necho \"Starting Xvfb instance on display $DISPLAY\"\nXvfb $DISPLAY &\necho \"Starting GRC...\"\ngnuradio-companion $1 2>&1 1>/dev/null &\nsleep 2\nxdotool key ctrl+p\nxdotool key ctrl+a\nxdotool key BackSpace\nxdotool type output.png\nxdotool key enter\necho \"Saving Screenshot...\"\nsleep 2\necho \"Killing GRC...\"\nkillall gnuradio-companion\necho \"Killing Xvfb...\"\nkillall Xvfb\n\nThe output of the script is called output.png and will be placed in the current directory. This is quite a hacky solution but nothing better currently comes to mind.\n"
] |
[
2
] |
[] |
[] |
[
"gnuradio",
"gnuradio_companion",
"python",
"qt"
] |
stackoverflow_0074473535_gnuradio_gnuradio_companion_python_qt.txt
|
Q:
Dash data table add a column on user input with predefined values
I have a simple Dash app containing a data table. Two user inputs make it possible to add a row or a column. Just like I get default values (here 0 hours) for every column when I add a row, I would also like to have default values for all rows when adding a new column. Here is the code:
import pathlib as pl
import dash
from dash import dash_table
from dash.dash_table.Format import Format, Scheme, Sign, Symbol
from dash import dcc
from dash import html
import plotly.graph_objs as go
import pandas as pd
from dash.dependencies import Input, Output, State
table_header_style = {
"backgroundColor": "rgb(2,21,70)",
"color": "white",
"textAlign": "center",
}
app = dash.Dash(__name__)
app.title = "Trial"
server = app.server
APP_PATH = str(pl.Path(__file__).parent.resolve())
list_rows = ['a', 'b', 'c', 'd']
tasks = ['task' + str(i) for i in range(5)]
data = {task:[0 for i in range(len(list_rows))] for task in tasks}
app.layout = html.Div(
className="",
children=[
html.Div(
# className="container",
children=[
html.Div(
# className="row",
style={},
children=[
html.Div(
# className="four columns pkcalc-settings",
children=[
html.P(["Study Design"]),
html.Div(
[
html.Label(
[
dcc.Input(
id="new-row",
placeholder="Row to be added...",
type="text",
debounce=True,
maxLength=20,
style={
'width':'66%',
'margin-left': '5px'
}
),
html.Button(
'Add',
id='add-row-button',
n_clicks=0,
style={
'font-size': '10px',
'width': '140px',
'display': 'inline-block',
'margin-bottom': '5px',
'margin-right': '5px',
'margin-left': '5px',
'height':'38px',
'verticalAlign': 'top'
}
),
]
),
html.Label(
[
dcc.Input(
id="new-task",
placeholder="Task to be added...",
type="text",
debounce=True,
maxLength=50,
style={'width':'66%'}
),
html.Button(
'Add',
id='add-task-button',
n_clicks=0,
style={
'font-size': '10px',
'width': '140px',
'display': 'inline-block',
'margin-bottom': '5px',
'margin-right': '5px',
'margin-left': '5px',
'height':'38px',
'verticalAlign': 'top'
}
),
]
),
]
),
],
),
html.Div(
# className="eight columns pkcalc-data-table",
children=[
dash_table.DataTable(
id='table',
columns=(
[{
'id': 'name',
'name': 'Name',
'type': 'text',
'deletable': True,
'renamable': True,
}] +
[{
'id': task,
'name': task,
'type': 'numeric',
'deletable': True,
'renamable': True,
'format': Format(
precision=0,
scheme=Scheme.fixed,
symbol=Symbol.yes,
symbol_suffix='h'
),
} for task in tasks]
),
data=[dict(name=i, **{task: 0 for task in tasks}) for i in list_rows],
editable=True,
style_header=table_header_style,
active_cell={"row": 0, "column": 0},
selected_cells=[{"row": 0, "column": 0}],
),
],
),
],
),
],
),
],
)
# Callback to add column
@app.callback(
Output(component_id='table', component_property='columns'),
Input(component_id='add-task-button', component_property='n_clicks'),
State(component_id='new-task', component_property='value'),
State(component_id='table', component_property='columns'),)
def update_columns(n_clicks, new_task, existing_tasks):
if n_clicks > 0:
existing_tasks.append({
'id': new_task, 'name': new_task,
'renamable': True, 'deletable': True
})
return existing_tasks
# Callback to add row
@app.callback(
Output(component_id='table', component_property='data'),
Input(component_id='add-row-button', component_property='n_clicks'),
State(component_id='new-row', component_property='value'),
State(component_id='table', component_property='columns'),
State(component_id='table', component_property='data'))
def update_rows(n_clicks, new_row, columns, rows):
if n_clicks > 0:
rows.append(
{
'name': new_row,
**{column['id']: 0 for column in columns[1:]}
}
)
return rows
if __name__ == "__main__":
app.run_server(debug=True)
Any help would be greatly appreciated!
A:
I figured out a way using only one callback to add a column or a row. It's not the prettiest but it works. If anyone has a better way that keeps the two callbacks separate, I would appreciate it.
Here is the code:
import pathlib as pl
import dash
from dash import dash_table
from dash.dash_table.Format import Format, Scheme, Sign, Symbol
from dash import dcc
from dash import html
import plotly.graph_objs as go
import pandas as pd
from dash.dependencies import Input, Output, State
table_header_style = {
"backgroundColor": "rgb(2,21,70)",
"color": "white",
"textAlign": "center",
}
app = dash.Dash(__name__)
app.title = "Trial"
server = app.server
APP_PATH = str(pl.Path(__file__).parent.resolve())
list_rows = ['a', 'b', 'c', 'd']
tasks = ['task' + str(i) for i in range(5)]
data = {task:[0 for i in range(len(list_rows))] for task in tasks}
app.layout = html.Div(
className="",
children=[
html.Div(
# className="container",
children=[
dcc.Store(id='n_clicks-column', data=0),
dcc.Store(id='n_clicks-row', data=0),
html.Div(
# className="row",
style={},
children=[
html.Div(
# className="four columns pkcalc-settings",
children=[
html.P(["Study Design"]),
html.Div(
[
html.Label(
[
dcc.Input(
id="new-row",
placeholder="Row to be added...",
type="text",
debounce=True,
maxLength=20,
style={
'width':'66%',
}
),
html.Button(
'Add',
id='add-row-button',
n_clicks=0,
style={
'font-size': '10px',
'width': '140px',
'display': 'inline-block',
'margin-bottom': '5px',
'margin-right': '5px',
'margin-left': '5px',
'height':'38px',
'verticalAlign': 'top'
}
),
]
),
html.Label(
[
dcc.Input(
id="new-task",
placeholder="Task to be added...",
type="text",
debounce=True,
maxLength=50,
style={'width':'66%'}
),
html.Button(
'Add',
id='add-task-button',
n_clicks=0,
style={
'font-size': '10px',
'width': '140px',
'display': 'inline-block',
'margin-bottom': '5px',
'margin-right': '5px',
'margin-left': '5px',
'height':'38px',
'verticalAlign': 'top'
}
),
]
),
]
),
],
),
html.Div(
# className="eight columns pkcalc-data-table",
children=[
dash_table.DataTable(
id='table',
columns=(
[{
'id': 'name',
'name': 'Name',
'type': 'text',
'deletable': True,
'renamable': True,
}] +
[{
'id': task,
'name': task,
'type': 'numeric',
'deletable': True,
'renamable': True,
'format': Format(
precision=0,
scheme=Scheme.fixed,
symbol=Symbol.yes,
symbol_suffix='h'
),
} for task in tasks]
),
data=[dict(name=i, **{task: 0 for task in tasks}) for i in list_rows],
editable=True,
style_header=table_header_style,
active_cell={"row": 0, "column": 0},
selected_cells=[{"row": 0, "column": 0}],
),
],
),
],
),
],
),
],
)
@app.callback(
Output(component_id='table', component_property='columns'),
Output(component_id='table', component_property='data'),
Output(component_id='n_clicks-column', component_property='data'),
Output(component_id='n_clicks-row', component_property='data'),
Input(component_id='add-task-button', component_property='n_clicks'),
Input(component_id='add-row-button', component_property='n_clicks'),
State(component_id='new-task', component_property='value'),
State(component_id='new-row', component_property='value'),
State(component_id='table', component_property='columns'),
State(component_id='table', component_property='data'),
State(component_id='n_clicks-column', component_property='data'),
State(component_id='n_clicks-row', component_property='data')
)
def update_table(n_clicks_column, n_clicks_row, new_task, new_row, columns, table_data, stored_n_clicks_column, stored_n_clicks_row):
if n_clicks_column > stored_n_clicks_column:
columns.append({
'id': new_task,
'name': new_task,
'type': 'numeric',
'renamable': True,
'deletable': True,
'format': Format(
precision=0,
scheme=Scheme.fixed,
symbol=Symbol.yes,
symbol_suffix='h'
),
})
for row_dict in table_data:
row_dict[new_task] = 0
stored_n_clicks_column += 1
if n_clicks_row > stored_n_clicks_row:
table_data.append(
{
'name': new_row,
**{column['id']: 0 for column in columns[1:]}
}
)
stored_n_clicks_row += 1
return columns, table_data, stored_n_clicks_column, stored_n_clicks_row
if __name__ == "__main__":
app.run_server(debug=True)
I had to add 2 dcc.Store components to keep in memory the number of times each button has been clicked. Then I had to merge the 2 callbacks into one in order to modify the 'data' and 'columns' properties of the data table at once.
I tried keeping the two callbacks separate, but in order to add default values to the new column, the only way I found was to modify both the 'data' and 'columns' properties in the add-column callback. And Dash doesn't allow 2 callbacks to modify the same element, here the 'data' property, which was also being updated in the callback that adds a row.
A:
In your situation, the duplicate callback outputs are an issue you cannot work around, since both callbacks need to update table.data, so combining these two into one callback function is the right solution.
However, you could refactor a bit to use specific functions for each action in order to reduce the size of the callback:
def add_column(new_task, columns, data):
columns.append({
'id': new_task,
'name': new_task,
'type': 'numeric',
'deletable': True,
'renamable': True,
'format': Format(
precision=0,
scheme=Scheme.fixed,
symbol=Symbol.yes,
symbol_suffix='h'
)
})
for row in data:
row[new_task] = 0
return columns, data
def add_row(new_row, columns, data):
data.append(
{
'name': new_row,
**{column['id']: 0 for column in columns[1:]}
}
)
return data
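The helper logic itself is plain list-of-dicts bookkeeping and can be exercised without Dash. This is a minimal standalone sketch of the two helpers above (the Format details are dropped since they only affect display):

```python
# Minimal mirror of the helpers above: 'columns' holds column specs,
# 'data' holds one dict per table row.
columns = [{'id': 'name'}, {'id': 'task0'}]
data = [{'name': 'a', 'task0': 0}, {'name': 'b', 'task0': 0}]

def add_column(new_task, columns, data):
    columns.append({'id': new_task})
    for row in data:              # default the new column to 0 in every row
        row[new_task] = 0
    return columns, data

def add_row(new_row, columns, data):
    data.append({'name': new_row,
                 **{col['id']: 0 for col in columns[1:]}})
    return data

columns, data = add_column('task1', columns, data)
data = add_row('c', columns, data)
print(data)
# every existing row got task1=0, and the new row got 0 for every task
```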
Also note that you can distinguish which component/property pair triggered the callback by using dash.callback_context, which simplifies things:
@app.callback(
Output(component_id='table', component_property='columns'),
Output(component_id='table', component_property='data'),
Input(component_id='add-row-button', component_property='n_clicks'),
Input(component_id='add-task-button', component_property='n_clicks'),
State(component_id='new-row', component_property='value'),
State(component_id='new-task', component_property='value'),
State(component_id='table', component_property='columns'),
State(component_id='table', component_property='data'),
prevent_initial_call=True)
def update_table(nc_row, nc_task, new_row, new_task, columns, data):
ctx = dash.callback_context
id, prop = ctx.triggered[0]['prop_id'].split('.')
    if id == 'add-row-button':
        data = add_row(new_row, columns, data)
    elif id == 'add-task-button':
        columns, data = add_column(new_task, columns, data)
return columns, data
[EDIT]: Actually there is a workaround, but it requires installing dash-extensions:
The package provides a proxy component and something called a MultiplexerTransform, which can be used to make it possible for multiple callbacks to target the same output (note that it will chain/merge the callbacks under the hood).
|
Dash data table add a column on user input with predefined values
|
I have a simple Dash app containing a data table. Two user inputs make it possible to add a row or a column. Just like I get default values (here 0 hours) for every column when I add a row, I would also like to have default values for all rows when adding a new column. Here is the code:
import pathlib as pl
import dash
from dash import dash_table
from dash.dash_table.Format import Format, Scheme, Sign, Symbol
from dash import dcc
from dash import html
import plotly.graph_objs as go
import pandas as pd
from dash.dependencies import Input, Output, State
table_header_style = {
"backgroundColor": "rgb(2,21,70)",
"color": "white",
"textAlign": "center",
}
app = dash.Dash(__name__)
app.title = "Trial"
server = app.server
APP_PATH = str(pl.Path(__file__).parent.resolve())
list_rows = ['a', 'b', 'c', 'd']
tasks = ['task' + str(i) for i in range(5)]
data = {task:[0 for i in range(len(list_rows))] for task in tasks}
app.layout = html.Div(
className="",
children=[
html.Div(
# className="container",
children=[
html.Div(
# className="row",
style={},
children=[
html.Div(
# className="four columns pkcalc-settings",
children=[
html.P(["Study Design"]),
html.Div(
[
html.Label(
[
dcc.Input(
id="new-row",
placeholder="Row to be added...",
type="text",
debounce=True,
maxLength=20,
style={
'width':'66%',
'margin-left': '5px'
}
),
html.Button(
'Add',
id='add-row-button',
n_clicks=0,
style={
'font-size': '10px',
'width': '140px',
'display': 'inline-block',
'margin-bottom': '5px',
'margin-right': '5px',
'margin-left': '5px',
'height':'38px',
'verticalAlign': 'top'
}
),
]
),
html.Label(
[
dcc.Input(
id="new-task",
placeholder="Task to be added...",
type="text",
debounce=True,
maxLength=50,
style={'width':'66%'}
),
html.Button(
'Add',
id='add-task-button',
n_clicks=0,
style={
'font-size': '10px',
'width': '140px',
'display': 'inline-block',
'margin-bottom': '5px',
'margin-right': '5px',
'margin-left': '5px',
'height':'38px',
'verticalAlign': 'top'
}
),
]
),
]
),
],
),
html.Div(
# className="eight columns pkcalc-data-table",
children=[
dash_table.DataTable(
id='table',
columns=(
[{
'id': 'name',
'name': 'Name',
'type': 'text',
'deletable': True,
'renamable': True,
}] +
[{
'id': task,
'name': task,
'type': 'numeric',
'deletable': True,
'renamable': True,
'format': Format(
precision=0,
scheme=Scheme.fixed,
symbol=Symbol.yes,
symbol_suffix='h'
),
} for task in tasks]
),
data=[dict(name=i, **{task: 0 for task in tasks}) for i in list_rows],
editable=True,
style_header=table_header_style,
active_cell={"row": 0, "column": 0},
selected_cells=[{"row": 0, "column": 0}],
),
],
),
],
),
],
),
],
)
# Callback to add column
@app.callback(
Output(component_id='table', component_property='columns'),
Input(component_id='add-task-button', component_property='n_clicks'),
State(component_id='new-task', component_property='value'),
State(component_id='table', component_property='columns'),)
def update_columns(n_clicks, new_task, existing_tasks):
if n_clicks > 0:
existing_tasks.append({
'id': new_task, 'name': new_task,
'renamable': True, 'deletable': True
})
return existing_tasks
# Callback to add row
@app.callback(
Output(component_id='table', component_property='data'),
Input(component_id='add-row-button', component_property='n_clicks'),
State(component_id='new-row', component_property='value'),
State(component_id='table', component_property='columns'),
State(component_id='table', component_property='data'))
def update_rows(n_clicks, new_row, columns, rows):
if n_clicks > 0:
rows.append(
{
'name': new_row,
**{column['id']: 0 for column in columns[1:]}
}
)
return rows
if __name__ == "__main__":
app.run_server(debug=True)
Any help would be greatly appreciated!
|
[
"I figured out a way using one callback only to add a column or a row. It's not the prettiest but it works. If anyone has a better way, allowing to keep the two callbacks separated I would appreciate it.\nHere is the code:\nimport pathlib as pl\nimport dash\nfrom dash import dash_table\nfrom dash.dash_table.Format import Format, Scheme, Sign, Symbol\nfrom dash import dcc\nfrom dash import html\nimport plotly.graph_objs as go\nimport pandas as pd\nfrom dash.dependencies import Input, Output, State\n\ntable_header_style = {\n \"backgroundColor\": \"rgb(2,21,70)\",\n \"color\": \"white\",\n \"textAlign\": \"center\",\n}\n\n\napp = dash.Dash(__name__)\napp.title = \"Trial\"\nserver = app.server\n\nAPP_PATH = str(pl.Path(__file__).parent.resolve())\n\nlist_rows = ['a', 'b', 'c', 'd']\ntasks = ['task' + str(i) for i in range(5)]\ndata = {task:[0 for i in range(len(list_rows))] for task in tasks}\n\napp.layout = html.Div(\n className=\"\",\n children=[\n html.Div(\n # className=\"container\",\n children=[\n dcc.Store(id='n_clicks-column', data=0),\n dcc.Store(id='n_clicks-row', data=0),\n html.Div(\n # className=\"row\",\n style={},\n children=[\n html.Div(\n # className=\"four columns pkcalc-settings\",\n children=[\n html.P([\"Study Design\"]),\n html.Div(\n [\n html.Label(\n [\n dcc.Input(\n id=\"new-row\",\n placeholder=\"Row to be added...\",\n type=\"text\",\n debounce=True,\n maxLength=20,\n style={\n 'width':'66%',\n }\n ),\n html.Button(\n 'Add', \n id='add-row-button', \n n_clicks=0,\n style={\n 'font-size': '10px', \n 'width': '140px', \n 'display': 'inline-block', \n 'margin-bottom': '5px', \n 'margin-right': '5px',\n 'margin-left': '5px', \n 'height':'38px', \n 'verticalAlign': 'top'\n }\n ),\n ]\n ),\n html.Label(\n [\n dcc.Input(\n id=\"new-task\",\n placeholder=\"Task to be added...\",\n type=\"text\",\n debounce=True,\n maxLength=50,\n style={'width':'66%'}\n ),\n html.Button(\n 'Add', \n id='add-task-button', \n n_clicks=0,\n style={\n 'font-size': 
'10px', \n 'width': '140px', \n 'display': 'inline-block', \n 'margin-bottom': '5px', \n 'margin-right': '5px',\n 'margin-left': '5px', \n 'height':'38px', \n 'verticalAlign': 'top'\n }\n ),\n ]\n ),\n ]\n ),\n ],\n ),\n html.Div(\n # className=\"eight columns pkcalc-data-table\",\n children=[\n dash_table.DataTable(\n id='table',\n columns=(\n [{\n 'id': 'name', \n 'name': 'Name',\n 'type': 'text',\n 'deletable': True,\n 'renamable': True,\n }] +\n [{\n 'id': task, \n 'name': task,\n 'type': 'numeric',\n 'deletable': True,\n 'renamable': True,\n 'format': Format(\n precision=0,\n scheme=Scheme.fixed,\n symbol=Symbol.yes,\n symbol_suffix='h'\n ),\n } for task in tasks]\n ),\n data=[dict(name=i, **{task: 0 for task in tasks}) for i in list_rows],\n editable=True,\n style_header=table_header_style,\n active_cell={\"row\": 0, \"column\": 0},\n selected_cells=[{\"row\": 0, \"column\": 0}],\n ),\n ],\n ),\n ],\n ),\n ],\n ),\n ],\n)\n\n\n@app.callback(\n Output(component_id='table', component_property='columns'),\n Output(component_id='table', component_property='data'),\n Output(component_id='n_clicks-column', component_property='data'),\n Output(component_id='n_clicks-row', component_property='data'),\n Input(component_id='add-task-button', component_property='n_clicks'),\n Input(component_id='add-row-button', component_property='n_clicks'),\n State(component_id='new-task', component_property='value'),\n State(component_id='new-row', component_property='value'),\n State(component_id='table', component_property='columns'),\n State(component_id='table', component_property='data'),\n State(component_id='n_clicks-column', component_property='data'),\n State(component_id='n_clicks-row', component_property='data')\n )\ndef update_table(n_clicks_column, n_clicks_row, new_task, new_row, columns, table_data, stored_n_clicks_column, stored_n_clicks_row):\n \n if n_clicks_column > stored_n_clicks_column:\n columns.append({\n 'id': new_task, \n 'name': new_task,\n 'type': 
'numeric',\n 'renamable': True, \n 'deletable': True,\n 'format': Format(\n precision=0,\n scheme=Scheme.fixed,\n symbol=Symbol.yes,\n symbol_suffix='h'\n ),\n })\n\n for row_dict in table_data:\n row_dict[new_task] = 0\n\n stored_n_clicks_column += 1\n\n if n_clicks_row > stored_n_clicks_row:\n table_data.append(\n {\n 'name': new_row,\n **{column['id']: 0 for column in columns[1:]}\n }\n )\n\n stored_n_clicks_row += 1\n \n\n return columns, table_data, stored_n_clicks_column, stored_n_clicks_row\n\nif __name__ == \"__main__\":\n app.run_server(debug=True)\n\nI had to add 2 dcc.Store to keep in memory the number of time each button has been clicked. Then I had to gather the 2 callbacks into one in order to modify the 'data' and 'columns' properties of the data table at once.\nI tried keeping the two callbacks separated but in order to add default values to the new column, the only way I found was to modify both the 'data' and 'columns' properties in the add column callback. And dash doesn't allow to have 2 callbacks modifying the same element, here the 'data' property that was also being updated in the callback to add a row.\n",
"In your situation, the duplicate callback outputs is an issue you cannot work around since both callbacks need to update table.data, so combining these two into one callback function is the right solution.\nHowever, you could refactor a bit to use specific functions for each action in order to reduce the size of the callback :\ndef add_column(new_task, columns, data):\n columns.append({\n 'id': new_task,\n 'name': new_task,\n 'type': 'numeric',\n 'deletable': True,\n 'renamable': True,\n 'format': Format(\n precision=0,\n scheme=Scheme.fixed,\n symbol=Symbol.yes,\n symbol_suffix='h'\n )\n })\n\n for row in data:\n row[new_task] = 0\n\n return columns, data\n\n\ndef add_row(new_row, columns, data):\n data.append(\n {\n 'name': new_row,\n **{column['id']: 0 for column in columns[1:]}\n }\n )\n return data\n\nAlso note that you can distinguish which component/property pair has triggered the callback by using dash.callback_context, which simplify things :\n@app.callback(\n Output(component_id='table', component_property='columns'),\n Output(component_id='table', component_property='data'),\n Input(component_id='add-row-button', component_property='n_clicks'),\n Input(component_id='add-task-button', component_property='n_clicks'),\n State(component_id='new-row', component_property='value'),\n State(component_id='new-task', component_property='value'),\n State(component_id='table', component_property='columns'),\n State(component_id='table', component_property='data'),\n prevent_initial_call=True)\ndef update_table(nc_row, nc_task, new_row, new_task, columns, data):\n ctx = dash.callback_context\n id, prop = ctx.triggered[0]['prop_id'].split('.')\n\n if id == 'add-row-button':\n columns, data = add_row(new_row, columns, data)\n elif id == 'add-task-button':\n data = add_column(new_task, columns, data)\n\n return columns, data\n\n\n[EDIT] : Actually there exists a workaround, but it requires to install dash-extensions :\nThe package provides a proxy component and 
something called a MultiplexerTransform, which can be used to make possible for multiple callbacks to target the same output (note that it will chain/merge the callbacks under the hood).\n"
] |
[
1,
1
] |
[] |
[] |
[
"datatable",
"plotly_dash",
"python"
] |
stackoverflow_0074460862_datatable_plotly_dash_python.txt
|
Q:
configure: error: C compiler cannot create executables in buildozer kivy
I'm trying to compile an APK using buildozer and kivy.
I get a configure: error: C compiler cannot create executables error when I try to convert my kivy file to an Android APK with buildozer android debug deploy.
Here is the complete error:
STDERR:
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 1297, in <module>
main()
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/entrypoints.py", line 18, in main
ToolchainCL()
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 730, in __init__
getattr(self, command)(args)
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 153, in wrapper_func
build_dist_from_args(ctx, dist, args)
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 215, in build_dist_from_args
args, "ignore_setup_py", False
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/build.py", line 505, in build_recipes
recipe.build_arch(arch)
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/recipes/libffi/__init__.py", line 34, in build_arch
'--enable-shared', _env=env)
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/logger.py", line 167, in shprint
for line in output:
File "/home/elmantr/.local/lib/python3.6/site-packages/sh.py", line 915, in next
self.wait()
File "/home/elmantr/.local/lib/python3.6/site-packages/sh.py", line 845, in wait
self.handle_command_exit_code(exit_code)
File "/home/elmantr/.local/lib/python3.6/site-packages/sh.py", line 869, in handle_command_exit_code
raise exc
sh.ErrorReturnCode_77:
RAN: /media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi/configure --host=aarch64-linux-android --prefix=/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi --disable-builddir --enable-shared
STDOUT:
checking build system type... x86_64-pc-linux-gnu
checking host system type... aarch64-unknown-linux-android
checking target system type... aarch64-unknown-linux-android
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for aarch64-linux-android-strip... /home/elmantr/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip --strip-unneeded
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make -j4 sets $(MAKE)... yes
checking whether make -j4 supports nested variables... yes
checking for aarch64-linux-android-gcc... /home/elmantr/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/bin/clang -target aarch64-linux-android21 -fomit-frame-pointer -march=armv8-a -fPIC
checking whether the C compiler works... no
configure: error: in `/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi':
configure: error: C compiler cannot create executables
See `config.log' for more details
STDERR:
# Command failed: ['/usr/bin/python3', '-m', 'pythonforandroid.toolchain', 'create', '--dist_name=elmanapp', '--bootstrap=sdl2', '--requirements=python3,kivy', '--arch=arm64-v8a', '--arch=armeabi-v7a', '--copy-libs', '--color=always', '--storage-dir=/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a', '--ndk-api=21', '--ignore-setup-py', '--debug']
# ENVIRONMENT:
# CLUTTER_IM_MODULE = 'xim'
# LS_COLORS = 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:'
# LC_MEASUREMENT = 'az_IR'
# LESSCLOSE = '/usr/bin/lesspipe %s %s'
# LC_PAPER = 'az_IR'
# LC_MONETARY = 'az_IR'
# XDG_MENU_PREFIX = 'gnome-'
# LANG = 'en_US.UTF-8'
# MANAGERPID = '2713'
# DISPLAY = ':0'
# INVOCATION_ID = '730eb555b32c458694066550940adc12'
# GNOME_SHELL_SESSION_MODE = 'ubuntu'
# COLORTERM = 'truecolor'
# USERNAME = 'elmantr'
# XDG_VTNR = '2'
# SSH_AUTH_SOCK = '/run/user/1000/keyring/ssh'
# LC_NAME = 'az_IR'
# XDG_SESSION_ID = '2'
# USER = 'elmantr'
# DESKTOP_SESSION = 'ubuntu'
# QT4_IM_MODULE = 'xim'
# TEXTDOMAINDIR = '/usr/share/locale/'
# GNOME_TERMINAL_SCREEN = '/org/gnome/Terminal/screen/e5623443_face_4b77_b607_cf5202805f3b'
# PWD = '/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy'
# HOME = '/home/elmantr'
# JOURNAL_STREAM = '9:38945'
# TEXTDOMAIN = 'im-config'
# SSH_AGENT_PID = '2853'
# QT_ACCESSIBILITY = '1'
# XDG_SESSION_TYPE = 'x11'
# XDG_DATA_DIRS = '/usr/share/ubuntu:/home/elmantr/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop'
# XDG_SESSION_DESKTOP = 'ubuntu'
# LC_ADDRESS = 'az_IR'
# DBUS_STARTER_ADDRESS = 'unix:path=/run/user/1000/bus,guid=eefa6fc1f5cb65c532b63f7e634033a7'
# LC_NUMERIC = 'az_IR'
# GTK_MODULES = 'gail:atk-bridge'
# WINDOWPATH = '2'
# TERM = 'xterm-256color'
# VTE_VERSION = '5202'
# SHELL = '/bin/bash'
# QT_IM_MODULE = 'ibus'
# XMODIFIERS = '@im=ibus'
# IM_CONFIG_PHASE = '2'
# DBUS_STARTER_BUS_TYPE = 'session'
# XDG_CURRENT_DESKTOP = 'ubuntu:GNOME'
# GPG_AGENT_INFO = '/run/user/1000/gnupg/S.gpg-agent:0:1'
# GNOME_TERMINAL_SERVICE = ':1.124'
# SHLVL = '1'
# XDG_SEAT = 'seat0'
# LC_TELEPHONE = 'az_IR'
# GDMSESSION = 'ubuntu'
# GNOME_DESKTOP_SESSION_ID = 'this-is-deprecated'
# LOGNAME = 'elmantr'
# DBUS_SESSION_BUS_ADDRESS = 'unix:path=/run/user/1000/bus,guid=eefa6fc1f5cb65c532b63f7e634033a7'
# XDG_RUNTIME_DIR = '/run/user/1000'
# XAUTHORITY = '/run/user/1000/gdm/Xauthority'
# XDG_CONFIG_DIRS = '/etc/xdg/xdg-ubuntu:/etc/xdg'
# PATH = '/home/elmantr/.buildozer/android/platform/apache-ant-1.9.4/bin:/home/elmantr/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'
# LC_IDENTIFICATION = 'az_IR'
# SESSION_MANAGER = 'local/elmantr:@/tmp/.ICE-unix/2749,unix/elmantr:/tmp/.ICE-unix/2749'
# LESSOPEN = '| /usr/bin/lesspipe %s'
# GTK_IM_MODULE = 'ibus'
# LC_TIME = 'az_IR'
# _ = '/usr/local/bin/buildozer'
# PACKAGES_PATH = '/home/elmantr/.buildozer/android/packages'
# ANDROIDSDK = '/home/elmantr/.buildozer/android/platform/android-sdk'
# ANDROIDNDK = '/home/elmantr/.buildozer/android/platform/android-ndk-r25b'
# ANDROIDAPI = '27'
# ANDROIDMINAPI = '21'
#
# Buildozer failed to execute the last command
# The error might be hidden in the log above this error
# Please read the full log, and search for it before
# raising an issue with buildozer itself.
# In case of a bug report, please add a full log with log_level = 2
How can I solve the problem?
I have read these similar questions and they did not help:
buildozer - C compiler cannot create executables
configure: error: C compiler cannot create executables while compiling python for android ON Linux Ubuntu
buildozer android debug: c compiler error - cannot create executables - configure: exit77
configure: error: C compiler cannot create executables - Buildozer kivy to android debuging
Note: My operating system is 64-bit!
Please help me. thanks!
A:
I had the exact same issue for three weeks on Windows (Ubuntu subsystem 20.04.1), NDK r25b, Android API 31, SDK 21, p4a develop.
If I run the command that failed in the terminal, everything works fine, because my system uses a different compiler outside of buildozer. If I am correct, you are using Ubuntu-Gnome, so this fix may not be interesting or relevant to you:
I could solve this error by manually updating WSL 1 to WSL 2.
Maybe you have enough knowledge of Linux, or someone else can explain what happened after updating WSL 1 to WSL 2, so you can transfer the fix to your current system.
|
configure: error: C compiler cannot create executables in buildozer kivy
|
I'm trying to compile an APK using buildozer and kivy.
I get a configure: error: C compiler cannot create executables error when I try to convert my kivy file to an Android APK with buildozer android debug deploy.
Here is the complete error:
STDERR:
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 1297, in <module>
main()
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/entrypoints.py", line 18, in main
ToolchainCL()
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 730, in __init__
getattr(self, command)(args)
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 153, in wrapper_func
build_dist_from_args(ctx, dist, args)
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 215, in build_dist_from_args
args, "ignore_setup_py", False
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/build.py", line 505, in build_recipes
recipe.build_arch(arch)
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/recipes/libffi/__init__.py", line 34, in build_arch
'--enable-shared', _env=env)
File "/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/python-for-android/pythonforandroid/logger.py", line 167, in shprint
for line in output:
File "/home/elmantr/.local/lib/python3.6/site-packages/sh.py", line 915, in next
self.wait()
File "/home/elmantr/.local/lib/python3.6/site-packages/sh.py", line 845, in wait
self.handle_command_exit_code(exit_code)
File "/home/elmantr/.local/lib/python3.6/site-packages/sh.py", line 869, in handle_command_exit_code
raise exc
sh.ErrorReturnCode_77:
RAN: /media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi/configure --host=aarch64-linux-android --prefix=/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi --disable-builddir --enable-shared
STDOUT:
checking build system type... x86_64-pc-linux-gnu
checking host system type... aarch64-unknown-linux-android
checking target system type... aarch64-unknown-linux-android
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for aarch64-linux-android-strip... /home/elmantr/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip --strip-unneeded
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make -j4 sets $(MAKE)... yes
checking whether make -j4 supports nested variables... yes
checking for aarch64-linux-android-gcc... /home/elmantr/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/bin/clang -target aarch64-linux-android21 -fomit-frame-pointer -march=armv8-a -fPIC
checking whether the C compiler works... no
configure: error: in `/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi':
configure: error: C compiler cannot create executables
See `config.log' for more details
STDERR:
# Command failed: ['/usr/bin/python3', '-m', 'pythonforandroid.toolchain', 'create', '--dist_name=elmanapp', '--bootstrap=sdl2', '--requirements=python3,kivy', '--arch=arm64-v8a', '--arch=armeabi-v7a', '--copy-libs', '--color=always', '--storage-dir=/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a', '--ndk-api=21', '--ignore-setup-py', '--debug']
# ENVIRONMENT:
# CLUTTER_IM_MODULE = 'xim'
# LS_COLORS = 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:'
# LC_MEASUREMENT = 'az_IR'
# LESSCLOSE = '/usr/bin/lesspipe %s %s'
# LC_PAPER = 'az_IR'
# LC_MONETARY = 'az_IR'
# XDG_MENU_PREFIX = 'gnome-'
# LANG = 'en_US.UTF-8'
# MANAGERPID = '2713'
# DISPLAY = ':0'
# INVOCATION_ID = '730eb555b32c458694066550940adc12'
# GNOME_SHELL_SESSION_MODE = 'ubuntu'
# COLORTERM = 'truecolor'
# USERNAME = 'elmantr'
# XDG_VTNR = '2'
# SSH_AUTH_SOCK = '/run/user/1000/keyring/ssh'
# LC_NAME = 'az_IR'
# XDG_SESSION_ID = '2'
# USER = 'elmantr'
# DESKTOP_SESSION = 'ubuntu'
# QT4_IM_MODULE = 'xim'
# TEXTDOMAINDIR = '/usr/share/locale/'
# GNOME_TERMINAL_SCREEN = '/org/gnome/Terminal/screen/e5623443_face_4b77_b607_cf5202805f3b'
# PWD = '/media/elmantr/8C7CA79C7CA77F96/programming/python/kivy'
# HOME = '/home/elmantr'
# JOURNAL_STREAM = '9:38945'
# TEXTDOMAIN = 'im-config'
# SSH_AGENT_PID = '2853'
# QT_ACCESSIBILITY = '1'
# XDG_SESSION_TYPE = 'x11'
# XDG_DATA_DIRS = '/usr/share/ubuntu:/home/elmantr/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop'
# XDG_SESSION_DESKTOP = 'ubuntu'
# LC_ADDRESS = 'az_IR'
# DBUS_STARTER_ADDRESS = 'unix:path=/run/user/1000/bus,guid=eefa6fc1f5cb65c532b63f7e634033a7'
# LC_NUMERIC = 'az_IR'
# GTK_MODULES = 'gail:atk-bridge'
# WINDOWPATH = '2'
# TERM = 'xterm-256color'
# VTE_VERSION = '5202'
# SHELL = '/bin/bash'
# QT_IM_MODULE = 'ibus'
# XMODIFIERS = '@im=ibus'
# IM_CONFIG_PHASE = '2'
# DBUS_STARTER_BUS_TYPE = 'session'
# XDG_CURRENT_DESKTOP = 'ubuntu:GNOME'
# GPG_AGENT_INFO = '/run/user/1000/gnupg/S.gpg-agent:0:1'
# GNOME_TERMINAL_SERVICE = ':1.124'
# SHLVL = '1'
# XDG_SEAT = 'seat0'
# LC_TELEPHONE = 'az_IR'
# GDMSESSION = 'ubuntu'
# GNOME_DESKTOP_SESSION_ID = 'this-is-deprecated'
# LOGNAME = 'elmantr'
# DBUS_SESSION_BUS_ADDRESS = 'unix:path=/run/user/1000/bus,guid=eefa6fc1f5cb65c532b63f7e634033a7'
# XDG_RUNTIME_DIR = '/run/user/1000'
# XAUTHORITY = '/run/user/1000/gdm/Xauthority'
# XDG_CONFIG_DIRS = '/etc/xdg/xdg-ubuntu:/etc/xdg'
# PATH = '/home/elmantr/.buildozer/android/platform/apache-ant-1.9.4/bin:/home/elmantr/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'
# LC_IDENTIFICATION = 'az_IR'
# SESSION_MANAGER = 'local/elmantr:@/tmp/.ICE-unix/2749,unix/elmantr:/tmp/.ICE-unix/2749'
# LESSOPEN = '| /usr/bin/lesspipe %s'
# GTK_IM_MODULE = 'ibus'
# LC_TIME = 'az_IR'
# _ = '/usr/local/bin/buildozer'
# PACKAGES_PATH = '/home/elmantr/.buildozer/android/packages'
# ANDROIDSDK = '/home/elmantr/.buildozer/android/platform/android-sdk'
# ANDROIDNDK = '/home/elmantr/.buildozer/android/platform/android-ndk-r25b'
# ANDROIDAPI = '27'
# ANDROIDMINAPI = '21'
#
# Buildozer failed to execute the last command
# The error might be hidden in the log above this error
# Please read the full log, and search for it before
# raising an issue with buildozer itself.
# In case of a bug report, please add a full log with log_level = 2
How can I solve the problem?
I have read these similar questions and they did not help:
buildozer - C compiler cannot create executables
configure: error: C compiler cannot create executables while compiling python for android ON Linux Ubuntu
buildozer android debug: c compiler error - cannot create executables - configure: exit77
configure: error: C compiler cannot create executables - Buildozer kivy to android debuging
Note: My operating system is 64-bit!
Please help me. thanks!
|
[
"I had the exact same issue for three weeks with windows (ubuntu subsystem 20.4.1), NDK r25b, android api 31, sdk 21, p4a develop.\nIf I run the command that failed in the terminal everything works fine, because my system use a different compiler outside of buildozer. If I am correct, you are using Ubuntu-Gnome, so this fix may not be interesting or relevant to you:\nI could solve this error by manualy updating wsl1 to wsl2.\nMaybe you have enough knowledge in linux or someone else can explain what happend after updating wsl1 to wsl2 so you can transfer the fix for your current system.\n"
] |
[
1
] |
[] |
[] |
[
"android",
"buildozer",
"c",
"kivy",
"python"
] |
stackoverflow_0073986991_android_buildozer_c_kivy_python.txt
|
Q:
How to ignore a specific range of rows in a dataframe
I have a dataframe with 1000000 rows and I want to ignore 8000 rows in the first 40000 rows, then 8000 rows in the next 40000 rows, and so on. How can I achieve this?
As an example:
Drop rows 1 to 8000, 40001 to 48000, 80001 to 88000, and so on.
A:
Approach
Adapted from "Numpy slicing function: Dynamically create slice indices np.r_" — this answer uses a mask rather than np.r_, so it can be built dynamically.
Two solutions:
For-loop solution (to illustrate the method)
Vectorized solution (for performance), using
numpy.ma.masked_where (https://numpy.org/doc/stable/reference/generated/numpy.ma.masked_where.html) to generate the mask array and
numpy.ma.getmask to get the mask
Note: DataFrames are 0-indexed, so the first 40,000 rows have indexes 0 to 39,999 rather than 1 to 40,000
Looping Solution
def drop_rows(df, chunksize, separation):
'''
Drops rows in dataframe
rows 0 to chunksize
rows 1*separation to 1*separation + chunksize
rows 2*separation to 2*separation + chunksize
...
'''
# Create mask which is True for rows we want to drop
n = len(df.index) # number of rows in dataframe
mask = np.zeros(n, dtype=bool)
for start in np.arange(0, n, separation):
stop = start + chunksize
mask[start:stop] = True
return df.drop(df[mask].index) # drop rows based upon indexes where mask is true
Vectorized Solution
def drop_rows_vect(df, chunksize, separation):
'''
Drops rows in dataframe
rows 0 to chunksize
rows 1*separation to 1*separation + chunksize
rows 2*separation to 2*separation + chunksize
...
'''
# Create mask which is True for rows we want to drop
mask = np.ma.getmaskarray(np.ma.masked_where(df.index % separation < chunksize, df.index))
return df.drop(df[mask].index) # drop rows based upon indexes where mask is true
Test
Create random dataframe with two columns
import numpy as np
import pandas as pd

data = np.random.randint(100, size=(40, 2))
df = pd.DataFrame(data, columns = ['A', 'B'])
Drop rows using the two methods
# Drop first 8 rows in chunks with a separation of 10
df_looping = drop_rows(df, chunksize = 8, separation = 10)
df_vect = drop_rows_vect(df, chunksize = 8, separation = 10)
The two methods produce the same result
print(df_looping.equals(df_vect))
# Output: True
Show result
print(df_vect)
# Output
A B
2 69 48
3 61 45
4 15 29
7 30 42
8 54 46
9 22 78
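As a footnote to the vectorized solution above, the same mask can be built without numpy.ma at all. This is my own sketch (the function name drop_rows_modulo is mine, not from the answer), using plain boolean indexing on a positional modulo mask:

```python
import numpy as np
import pandas as pd

def drop_rows_modulo(df, chunksize, separation):
    # Positional index so the mask works for any kind of df.index
    pos = np.arange(len(df))
    # Keep rows whose position within each `separation`-sized block
    # is past the first `chunksize` rows
    return df[pos % separation >= chunksize]

df = pd.DataFrame({'A': range(40)})
kept = drop_rows_modulo(df, chunksize=8, separation=10)
print(kept.index.tolist())  # [8, 9, 18, 19, 28, 29, 38, 39]
```

For the question's numbers this would be drop_rows_modulo(df, chunksize=8000, separation=40000) on the 1,000,000-row frame.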
|
How to ignore a specific range of rows in a dataframe
|
I have a dataframe with 1000000 rows and I want to ignore 8000 rows in the first 40000 rows, then 8000 rows in the next 40000 rows, and so on. How can I achieve this?
As an example:
Drop rows 1 to 8000, 40001 to 48000, 80001 to 88000, and so on.
|
[
"Approach\n\nAdapted Numpy slicing function: Dynamically create slice indices np.r an answer that uses a mask rather than np.r_ so can be done dynamically\nTwo solutions\n\nFor loop solution (to illustrate method)\nVectorized solution (for performance) using\n\n[numpy.ma.masked_where(https://numpy.org/doc/stable/reference/generated/numpy.ma.masked_where.html) to generate mask array and\nnumpy.ma.getmask to get mask\n\n\n\n\nNote: Dataframes are 0-index, so first 40,000 rows has indexing 0 to 39,999 rather than 1 to 40,000\n\nLooping Solution\ndef drop_rows(df, chunksize, separation):\n '''\n Drops rows in dataframe\n \n rows 0 to chunksizse\n rows 1*separation to 1*separation + chunksize\n rows 2*separation to 2*separation + chunksizse\n ...\n '''\n # Create mask which is True for rows we want to drop\n n = len(df.index) # number of rows in dataframe\n mask = np.zeros(n, dtype=bool)\n for start in np.arange(0, n, separation):\n stop = start + chunksize\n mask[start:stop] = True\n \n return df.drop(df[mask].index) # drop rows based upon indexes where mask is true\n\nVectorized Solution\ndef drop_rows_vect(df, chunksize, separation):\n '''\n Drops rows in dataframe\n \n rows 0 to chunksizse\n rows 1*separation to 1*separation + chunksize\n rows 2*separation to 2*separation + chunksizse\n ...\n '''\n # Create mask which is True for rows we want to drop\n mask = np.ma.getmaskarray(np.ma.masked_where(df.index % separation < chunksize, df.index))\n \n return df.drop(df[mask].index) # drop rows based upon indexes where mask is true\n\nTest\nCreate random dataframe with two columns\ndata = np.random.randint(100, size=(40, 2))\ndf = pd.DataFrame(data, columns = ['A', 'B'])\n\nDrop rows using the two methods\n# Drop first 8 rows in chunks with a separation of 10\ndf_looping = drop_rows(df, chunksize = 8, separation = 10)\ndf_vect = drop_rows_vect(df, chunksize = 8, separation = 10)\n\nTwo methods produces the same result\nprint(df_looping.equals(df_vect )\n# Output: 
True\n\nShow result\nprint(df_vect)\n# Output\n\n A B\n2 69 48\n3 61 45\n4 15 29\n7 30 42\n8 54 46\n9 22 78\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"python",
"statistics"
] |
stackoverflow_0074473469_dataframe_python_statistics.txt
|
Q:
Merge columns with more than one value in pandas dataframe
I've got this DataFrame in Python using pandas:
Column 1    Column 2    Column 3
hello       a,b,c       1,2,3
hi          b,c,a       4,5,6
The values in column 3 belong to the categories in column 2.
Is there a way to combine columns 2 and 3 that I get this output?
Column 1    a    b    c
hello       1    2    3
hi          6    4    5
Any advice will be very helpful! Thank you!
A:
You can use pd.crosstab after exploding the commas:
new_df = ( df.assign(t=df['Column 2'].str.split(','), a=df['Column 3'].str.split(',')).
explode(['t', 'a']) )
output = ( pd.crosstab(index=new_df['Column 1'], columns=new_df['t'],
values=new_df['a'], aggfunc='sum').reset_index() )
Output:
t Column 1 a b c
0 hello 1 2 3
1 hi 6 4 5
A:
df.apply(lambda x: pd.Series(x['Column 3'].split(','), index=x['Column 2'].split(',')), axis=1)
output:
a b c
0 1 2 3
1 6 4 5
make the result df1 and concat
df1 = df.apply(lambda x: pd.Series(x['Column 3'].split(','), index=x['Column 2'].split(',')), axis=1)
pd.concat([df['Column 1'], df1], axis=1)
output:
Column 1 a b c
0 hello 1 2 3
1 hi 6 4 5
A:
Efficiency-wise, I'd say do all the wrangling in vanilla Python and create a new dataframe:
from collections import defaultdict
outcome = defaultdict(list)
for column, row in zip(df['Column 2'], df['Column 3']):
column = column.split(',')
row = row.split(',')
for first, last in zip(column, row):
outcome[first].append(last)
pd.DataFrame(outcome).assign(Column = df['Column 1'])
a b c Column
0 1 2 3 hello
1 6 4 5 hi
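For completeness, here is a pivot-based variant of the same reshaping — my own sketch, not from the answers above; the helper column names `key` and `val` are mine, and `DataFrame.explode` on multiple columns assumes pandas >= 1.3:

```python
import pandas as pd

df = pd.DataFrame({
    'Column 1': ['hello', 'hi'],
    'Column 2': ['a,b,c', 'b,c,a'],
    'Column 3': ['1,2,3', '4,5,6'],
})

# One (key, val) pair per row after splitting and exploding both columns
long = (df.assign(key=df['Column 2'].str.split(','),
                  val=df['Column 3'].str.split(','))
          .explode(['key', 'val']))

# Pivot the long frame back to one column per key
wide = long.pivot(index='Column 1', columns='key', values='val').reset_index()
print(wide)
```

Because row 2 lists b,c,a against 4,5,6, the 'hi' row comes out as a=6, b=4, c=5.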
|
Merge columns with more than one value in pandas dataframe
|
I've got this DataFrame in Python using pandas:
Column 1
Column 2
Column 3
hello
a,b,c
1,2,3
hi
b,c,a
4,5,6
The values in column 3 belong to the categories in column 2.
Is there a way to combine columns 2 and 3 that I get this output?
Column 1
a
b
c
hello
1
2
3
hi
6
4
5
Any advice will be very helpful! Thank you!
|
[
"You can use pd.crosstab after exploding the commas:\nnew_df = ( df.assign(t=df['Column 2'].str.split(','), a=df['Column 3'].str.split(',')).\n explode(['t', 'a']) )\n\noutput = ( pd.crosstab(index=new_df['Column 1'], columns=new_df['t'], \n values=new_df['a'], aggfunc='sum').reset_index() ) \n\nOutput:\nt Column 1 a b c\n0 hello 1 2 3\n1 hi 4 5 6\n\n",
"df.apply(lambda x: pd.Series(x['Column 3'].split(','), index=x['Column2'].split(',')), axis=1) \n\noutput:\n a b c\n0 1 2 3\n1 4 5 6\n\nresult make to df1 and concat\ndf1 = df.apply(lambda x: pd.Series(x['Column 3'].split(','), index=x['Column2'].split(',')), axis=1)\n\npd.concat([df['Column 1'], df1], axis=1)\n\noutput:\n col1 a b c\n0 hello 1 2 3\n1 hi 4 5 6\n\n",
"Efficiency wise, I'd say do all the wrangling in vanilla python and create a new dataframe:\nfrom collections import defaultdict\noutcome = defaultdict(list)\nfor column, row in zip(df['Column 2'], df['Column 3']):\n column = column.split(',')\n row = row.split(',')\n for first, last in zip(column, row):\n outcome[first].append(last)\npd.DataFrame(outcome).assign(Column = df['Column 1'])\n a b c Column\n0 1 2 3 hello\n1 6 4 5 hi\n\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074474220_dataframe_pandas_python.txt
|
Q:
Extracting the letter of specific index from each word in the list
I have a list of words, and I need to extract the letter at a specific index from each word in the list into a dictionary, counting their amounts. For example, my list consists of the words "carrot", "sky", "house", "picture". Then the dictionary of first indexes would be: {"c":1, "s":1, "h":1, "p":1}, the second indexes: {"a":1, "k":1, "o":1, "i":1}, and so on. It should check each index.
I tried using this code, but it executes the dictionary of all letters that are in the list of words.
for el in list:
    for x in el:
        dictionary[x] = dictionary.get(x, 0) + 1
return dictionary
A:
There is the standard collections.Counter that can help you. Given an iterable (in your case a list of letters), it gives you a counter object.
So you have to pass it list of letters per index.
I added a word so that the counts are not all 1.
from collections import Counter
words = [
    "carrot",
    "sky",
    "house",
    "picture",
    "cinder",  # <-- this word I added !
]

longest_word_length = max(len(word) for word in words)

for index in range(longest_word_length):
    letters = [word[index] for word in words if index < len(word)]
    letters_count = Counter(letters)
    print(f"{index=} {letters_count=}")
index=0 letters_count=Counter({'c': 2, 's': 1, 'h': 1, 'p': 1})
index=1 letters_count=Counter({'i': 2, 'a': 1, 'k': 1, 'o': 1})
index=2 letters_count=Counter({'r': 1, 'y': 1, 'u': 1, 'c': 1, 'n': 1})
index=3 letters_count=Counter({'r': 1, 's': 1, 't': 1, 'd': 1})
index=4 letters_count=Counter({'e': 2, 'o': 1, 'u': 1})
index=5 letters_count=Counter({'r': 2, 't': 1})
index=6 letters_count=Counter({'e': 1})
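For a hedged alternative using only the stdlib, itertools.zip_longest can transpose the words into per-index columns directly (shown here with the question's original four words, not the extended list above):

```python
from collections import Counter
from itertools import zip_longest

words = ["carrot", "sky", "house", "picture"]

# zip_longest(*words) yields one tuple of letters per index position,
# padding short words with None
per_index = [
    Counter(letter for letter in column if letter is not None)
    for column in zip_longest(*words)
]

print(per_index[0])  # Counter({'c': 1, 's': 1, 'h': 1, 'p': 1})
print(per_index[6])  # Counter({'e': 1})
```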
|
Extracting the letter of specific index from each word in the list
|
I have a list of words, and I need to extract the letter at a specific index from each word in the list into a dictionary, counting their amounts. For example, my list consists of the words "carrot", "sky", "house", "picture". Then the dictionary of first indexes would be: {"c":1, "s":1, "h":1, "p":1}, the second indexes: {"a":1, "k":1, "o":1, "i":1}, and so on. It should check each index.
I tried using this code, but it executes the dictionary of all letters that are in the list of words.
for el in list:
    for x in el:
        dictionary[x] = dictionary.get(x, 0) + 1
return dictionary
|
[
"There is the standard collections.Counter that can help you. Given an iterable (in your case a list of letters), it gives you a counter object.\nSo you have to pass it list of letters per index.\nI added a word so that counts are not always 0.\nfrom collections import Counter\n\nwords = [\n \"carrot\",\n \"sky\",\n \"house\",\n \"picture\",\n \"cinder\", # <-- this word I added !\n]\n\nlongest_word_length = max(len(word) for word in words)\n\nfor index in range(longest_word_length):\n letters = [word[index] for word in words if index < len(word)]\n letters_count = Counter(letters)\n print(f\"{index=} {letters_count=}\")\n\nindex=0 letters_count=Counter({'c': 2, 's': 1, 'h': 1, 'p': 1})\nindex=1 letters_count=Counter({'i': 2, 'a': 1, 'k': 1, 'o': 1})\nindex=2 letters_count=Counter({'r': 1, 'y': 1, 'u': 1, 'c': 1, 'n': 1})\nindex=3 letters_count=Counter({'r': 1, 's': 1, 't': 1, 'd': 1})\nindex=4 letters_count=Counter({'e': 2, 'o': 1, 'u': 1})\nindex=5 letters_count=Counter({'r': 2, 't': 1})\nindex=6 letters_count=Counter({'e': 1})\n\n"
] |
[
0
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0074462378_dictionary_python.txt
|
Q:
AttributeError: module 'collections' has no attribute 'MutableMapping'
I recently installed python3.10 on my ubuntu system and I believe I made a link from /usr/bin/python3 to /usr/bin/python3.10
If I run python --version I get Python 2.7.17 and if I run python3 --version I get Python 3.10.2
I believe something I did broke something in my global python / pip.
Whenever I try to use pip globally I get this error:
Traceback (most recent call last):
File "/usr/local/bin/pip", line 7, in <module>
from pip._internal.cli.main import main
File "/usr/lib/python3/dist-packages/pip/__init__.py", line 22, in <module>
from pip._vendor.requests.packages.urllib3.exceptions import DependencyWarning
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 73, in <module>
vendored("pkg_resources")
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 33, in vendored
__import__(modulename, globals(), locals(), level=0)
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 77, in <module>
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/_vendor/packaging/requirements.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 672, in _load_unlocked
File "<frozen importlib._bootstrap>", line 632, in _load_backward_compatible
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/extern/__init__.py", line 43, in load_module
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/_vendor/pyparsing.py", line 943, in <module>
AttributeError: module 'collections' has no attribute 'MutableMapping'
After googling I thought the issue is that my pip was made using an older version of python I had so I tried to run:
sudo apt remove python-pip python3-pip
sudo apt install python-pip python3-pip
but even after this I still get the same error with pip.
I do have a virtualenv that I was using with a django project (that uses python 3.10) and if I source into that I am able to use the pip there, but I think this just shows the pip in that venv is properly configured
Result of running ls -la /usr/bin/ | grep -i "pip\|python":
brick@nextgearserver:/etc/apache2$ ls -la /usr/bin/ | grep -i "pip\|python"
lrwxrwxrwx 1 root root 26 Mar 26 2018 dh_pypy -> ../share/dh-python/dh_pypy
-rwxr-xr-x 1 root root 1056 Apr 16 2018 dh_python2
lrwxrwxrwx 1 root root 29 Mar 26 2018 dh_python3 -> ../share/dh-python/dh_python3
lrwxrwxrwx 1 root root 13 Dec 7 2018 lesspipe -> /bin/lesspipe
lrwxrwxrwx 1 root root 23 Feb 27 2021 pdb2.7 -> ../lib/python2.7/pdb.py
lrwxrwxrwx 1 root root 24 Jan 15 13:03 pdb3.10 -> ../lib/python3.10/pdb.py
lrwxrwxrwx 1 root root 23 Dec 8 16:08 pdb3.6 -> ../lib/python3.6/pdb.py
-rwxr-xr-x 1 root root 292 Apr 30 2021 pip
-rwxr-xr-x 1 root root 292 Apr 30 2021 pip2
-rwxr-xr-x 1 root root 293 Apr 30 2021 pip3
lrwxrwxrwx 1 root root 31 Oct 25 2018 py3versions -> ../share/python3/py3versions.py
lrwxrwxrwx 1 root root 26 Mar 26 2018 pybuild -> ../share/dh-python/pybuild
lrwxrwxrwx 1 root root 9 Apr 16 2018 python -> python2.7
lrwxrwxrwx 1 root root 9 Apr 16 2018 python2 -> python2.7
-rwxr-xr-x 1 root root 3633000 Feb 27 2021 python2.7
lrwxrwxrwx 1 root root 33 Feb 27 2021 python2.7-config -> x86_64-linux-gnu-python2.7-config
lrwxrwxrwx 1 root root 16 Apr 16 2018 python2-config -> python2.7-config
lrwxrwxrwx 1 root root 19 Jan 30 15:07 python3 -> /usr/bin/python3.10
-rwxr-xr-x 1 root root 5515256 Jan 15 13:03 python3.10
-rwxr-xr-x 2 root root 4526456 Dec 8 16:08 python3.6
lrwxrwxrwx 1 root root 33 Dec 8 16:08 python3.6-config -> x86_64-linux-gnu-python3.6-config
-rwxr-xr-x 2 root root 4526456 Dec 8 16:08 python3.6m
lrwxrwxrwx 1 root root 34 Dec 8 16:08 python3.6m-config -> x86_64-linux-gnu-python3.6m-config
lrwxrwxrwx 1 root root 16 Oct 25 2018 python3-config -> python3.6-config
-rwxr-xr-x 1 root root 384 Feb 5 2018 python3-futurize
lrwxrwxrwx 1 root root 10 Oct 25 2018 python3m -> python3.6m
lrwxrwxrwx 1 root root 17 Oct 25 2018 python3m-config -> python3.6m-config
-rwxr-xr-x 1 root root 388 Feb 5 2018 python3-pasteurize
-rwxr-xr-x 1 root root 152 Nov 11 2017 python3-pbr
lrwxrwxrwx 1 root root 16 Apr 16 2018 python-config -> python2.7-config
lrwxrwxrwx 1 root root 29 Apr 16 2018 pyversions -> ../share/python/pyversions.py
-rwxr-xr-x 1 root root 2971 Feb 27 2021 x86_64-linux-gnu-python2.7-config
-rwxr-xr-x 1 root root 3246 Jan 15 13:03 x86_64-linux-gnu-python3.10-config
lrwxrwxrwx 1 root root 34 Dec 8 16:08 x86_64-linux-gnu-python3.6-config -> x86_64-linux-gnu-python3.6m-config
-rwxr-xr-x 1 root root 3283 Dec 8 16:08 x86_64-linux-gnu-python3.6m-config
lrwxrwxrwx 1 root root 33 Oct 25 2018 x86_64-linux-gnu-python3-config -> x86_64-linux-gnu-python3.6-config
lrwxrwxrwx 1 root root 34 Oct 25 2018 x86_64-linux-gnu-python3m-config -> x86_64-linux-gnu-python3.6m-config
lrwxrwxrwx 1 root root 33 Apr 16 2018 x86_64-linux-gnu-python-config -> x86_64-linux-gnu
python2.7-config
A:
The question already seems to have a solution, but for a better understanding of the problem: in Python 3.10, the attribute MutableMapping has been removed from the collections module.
In your case, /usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/_vendor/pyparsing.py uses the MutableMapping attribute of collections.
The attribute has been available in collections.abc instead.
So a dirty hack (if you don't want to upgrade) would be to replace all occurrences of collections.MutableMapping with collections.abc.MutableMapping.
An example :
import sys

if sys.version_info.major == 3 and sys.version_info.minor >= 10:
    from collections.abc import MutableMapping
else:
    from collections import MutableMapping
A:
In my case, upgrading the following packages worked on Windows 11:
pip install --upgrade pip
pip install --upgrade wheel
pip install --upgrade setuptools
pip install --upgrade requests
I hope this helps
A:
Some libraries aren't fully compatible with 3.10 at the time of writing this answer.
You can downgrade to 3.8 or 3.9 for now and it will work seamlessly.
You can find all versions of Python here.
Choose the version that suits you best.
A:
I'm not sure this qualifies as an "answer", but to offer an additional work-around for the case of a library that relies on the existence of collections.MutableMapping and hasn't been updated for Python 3.10+, you can place the following code directly before the import of the affected library:
import sys

if sys.version_info.major == 3 and sys.version_info.minor >= 10:
    import collections
    import collections.abc
    setattr(collections, "MutableMapping", collections.abc.MutableMapping)
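To see this work-around in action with the stdlib only, here is a hedged, self-contained sketch. The version check is swapped for a hasattr check (an assumption, not the answer's original form), which makes the shim a no-op on 3.9 and earlier where the alias still exists:

```python
import collections
import collections.abc

# Restore the pre-3.10 alias if it is missing (assumption: this shim runs
# before the legacy library is imported)
if not hasattr(collections, "MutableMapping"):
    collections.MutableMapping = collections.abc.MutableMapping

# Legacy-style code that subclasses collections.MutableMapping now works
class LowerDict(collections.MutableMapping):
    """Toy mapping that lower-cases its keys."""
    def __init__(self):
        self._data = {}
    def __getitem__(self, key):
        return self._data[key.lower()]
    def __setitem__(self, key, value):
        self._data[key.lower()] = value
    def __delitem__(self, key):
        del self._data[key.lower()]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

d = LowerDict()
d["Key"] = 1
print(d["key"])  # 1
```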
A:
just update requests library version to 2.27.1
use :
sudo apt-get install python-requests==2.27.1
A:
For version 3.10 or above –
from collections.abc import MutableMapping
For version 3.9 or lower –
from collections import MutableMapping
If you want this to work dynamically across versions, use the code below.
import sys

if sys.version_info.major == 3 and sys.version_info.minor >= 10:
    from collections.abc import MutableMapping
else:
    from collections import MutableMapping
A:
I was getting the same error on Ubuntu 22.04. This is how I solved it.
Remove pipenv if you have installed it using apt:
sudo apt remove pipenv
Install pipenv using pip:
pip3 install pipenv
activate virtual environment
python3 -m pipenv shell
install from pipfile
pipenv install
A:
In my case pip was trying to install too old pyparsing version from the requirements.txt file. When I changed from 2.0.1 to 2.4.7 everything went fine, so:
pip install pyparsing==2.4.7
A:
We've bumped into this issue (also disguised as ModuleNotFoundError: No module named 'urllib3') with this exemplary stacktrace:
File "<..>/python3/dist-packages/pipenv/core.py", line 21, in <module>
import requests
File "<..>/python3/dist-packages/pipenv/vendor/requests/__init__.py", line 62, in <module>
from .packages.urllib3.exceptions import DependencyWarning
File "<..>/python3/dist-packages/pipenv/vendor/requests/packages/__init__.py", line 29, in <module>
import urllib3
ModuleNotFoundError: No module named 'urllib3'
The solutions posted in a dedicated blog post didn't help. When one actually installs requests or even urllib3 via pip/requirements.txt, the issue mentioned here pops up with this exemplary stacktrace:
File "<..>/python3/dist-packages/pipenv/core.py", line 21, in <module>
import requests
File "<..>/python3/dist-packages/pipenv/vendor/requests/__init__.py", line 65, in <module>
from . import utils
File "<..>/python3/dist-packages/pipenv/vendor/requests/utils.py", line 27, in <module>
from .cookies import RequestsCookieJar, cookiejar_from_dict
File "/usr/lib/python3/dist-packages/pipenv/vendor/requests/cookies.py", line 172, in <module>
class RequestsCookieJar(cookielib.CookieJar, collections.MutableMapping):
AttributeError: module 'collections' has no attribute 'MutableMapping'
What helped in our case was to pin the docker base image we were using to ensure a python 3.8 install/environment (via an ubuntu package, in this case python3-pip). Having left the base image to latest we got a 3.10 python environment, which, as others have mentioned, are not compatible with dependencies that are too old and require 3.8/3.9.
A:
An alternative that makes Python 3 more compatible with itself is dynamic loading; for instance, the code below fails on some versions of Python 3.
import tornado.httpclient
However, the following import works (see the code below). It relies on the fact that Python doesn't normally reload modules and that modules can be altered at runtime. It is an advanced load-patching method to backport elements of the standard library that, in the author's view, should not have been changed in Python 3.
import collections.abc
import collections
collections.MutableMapping = collections.abc.MutableMapping
import tornado.httpclient
A:
I also had the same problem for no good reason and realized I was using Python3.10. After downgrading to Python3.9 I had no issue and never reencountered this. I only downgraded because the rest of my team was using version 3.9 and I was the only one using 3.10. And that solved the problem. I hope it also helps with your case.
|
AttributeError: module 'collections' has no attribute 'MutableMapping'
|
I recently installed python3.10 on my ubuntu system and I believe I made a link from /usr/bin/python3 to /usr/bin/python3.10
If I run python --version I get Python 2.7.17 and if I run python3 --version I get Python 3.10.2
I believe something I did broke something in my global python / pip.
Whenever I try to use pip globally I get this error:
Traceback (most recent call last):
File "/usr/local/bin/pip", line 7, in <module>
from pip._internal.cli.main import main
File "/usr/lib/python3/dist-packages/pip/__init__.py", line 22, in <module>
from pip._vendor.requests.packages.urllib3.exceptions import DependencyWarning
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 73, in <module>
vendored("pkg_resources")
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 33, in vendored
__import__(modulename, globals(), locals(), level=0)
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 77, in <module>
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/_vendor/packaging/requirements.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 672, in _load_unlocked
File "<frozen importlib._bootstrap>", line 632, in _load_backward_compatible
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/extern/__init__.py", line 43, in load_module
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/_vendor/pyparsing.py", line 943, in <module>
AttributeError: module 'collections' has no attribute 'MutableMapping'
After googling I thought the issue is that my pip was made using an older version of python I had so I tried to run:
sudo apt remove python-pip python3-pip
sudo apt install python-pip python3-pip
but even after this I still get the same error with pip.
I do have a virtualenv that I was using with a django project (that uses python 3.10) and if I source into that I am able to use the pip there, but I think this just shows the pip in that venv is properly configured
Result of running ls -la /usr/bin/ | grep -i "pip\|python":
brick@nextgearserver:/etc/apache2$ ls -la /usr/bin/ | grep -i "pip\|python"
lrwxrwxrwx 1 root root 26 Mar 26 2018 dh_pypy -> ../share/dh-python/dh_pypy
-rwxr-xr-x 1 root root 1056 Apr 16 2018 dh_python2
lrwxrwxrwx 1 root root 29 Mar 26 2018 dh_python3 -> ../share/dh-python/dh_python3
lrwxrwxrwx 1 root root 13 Dec 7 2018 lesspipe -> /bin/lesspipe
lrwxrwxrwx 1 root root 23 Feb 27 2021 pdb2.7 -> ../lib/python2.7/pdb.py
lrwxrwxrwx 1 root root 24 Jan 15 13:03 pdb3.10 -> ../lib/python3.10/pdb.py
lrwxrwxrwx 1 root root 23 Dec 8 16:08 pdb3.6 -> ../lib/python3.6/pdb.py
-rwxr-xr-x 1 root root 292 Apr 30 2021 pip
-rwxr-xr-x 1 root root 292 Apr 30 2021 pip2
-rwxr-xr-x 1 root root 293 Apr 30 2021 pip3
lrwxrwxrwx 1 root root 31 Oct 25 2018 py3versions -> ../share/python3/py3versions.py
lrwxrwxrwx 1 root root 26 Mar 26 2018 pybuild -> ../share/dh-python/pybuild
lrwxrwxrwx 1 root root 9 Apr 16 2018 python -> python2.7
lrwxrwxrwx 1 root root 9 Apr 16 2018 python2 -> python2.7
-rwxr-xr-x 1 root root 3633000 Feb 27 2021 python2.7
lrwxrwxrwx 1 root root 33 Feb 27 2021 python2.7-config -> x86_64-linux-gnu-python2.7-config
lrwxrwxrwx 1 root root 16 Apr 16 2018 python2-config -> python2.7-config
lrwxrwxrwx 1 root root 19 Jan 30 15:07 python3 -> /usr/bin/python3.10
-rwxr-xr-x 1 root root 5515256 Jan 15 13:03 python3.10
-rwxr-xr-x 2 root root 4526456 Dec 8 16:08 python3.6
lrwxrwxrwx 1 root root 33 Dec 8 16:08 python3.6-config -> x86_64-linux-gnu-python3.6-config
-rwxr-xr-x 2 root root 4526456 Dec 8 16:08 python3.6m
lrwxrwxrwx 1 root root 34 Dec 8 16:08 python3.6m-config -> x86_64-linux-gnu-python3.6m-config
lrwxrwxrwx 1 root root 16 Oct 25 2018 python3-config -> python3.6-config
-rwxr-xr-x 1 root root 384 Feb 5 2018 python3-futurize
lrwxrwxrwx 1 root root 10 Oct 25 2018 python3m -> python3.6m
lrwxrwxrwx 1 root root 17 Oct 25 2018 python3m-config -> python3.6m-config
-rwxr-xr-x 1 root root 388 Feb 5 2018 python3-pasteurize
-rwxr-xr-x 1 root root 152 Nov 11 2017 python3-pbr
lrwxrwxrwx 1 root root 16 Apr 16 2018 python-config -> python2.7-config
lrwxrwxrwx 1 root root 29 Apr 16 2018 pyversions -> ../share/python/pyversions.py
-rwxr-xr-x 1 root root 2971 Feb 27 2021 x86_64-linux-gnu-python2.7-config
-rwxr-xr-x 1 root root 3246 Jan 15 13:03 x86_64-linux-gnu-python3.10-config
lrwxrwxrwx 1 root root 34 Dec 8 16:08 x86_64-linux-gnu-python3.6-config -> x86_64-linux-gnu-python3.6m-config
-rwxr-xr-x 1 root root 3283 Dec 8 16:08 x86_64-linux-gnu-python3.6m-config
lrwxrwxrwx 1 root root 33 Oct 25 2018 x86_64-linux-gnu-python3-config -> x86_64-linux-gnu-python3.6-config
lrwxrwxrwx 1 root root 34 Oct 25 2018 x86_64-linux-gnu-python3m-config -> x86_64-linux-gnu-python3.6m-config
lrwxrwxrwx 1 root root 33 Apr 16 2018 x86_64-linux-gnu-python-config -> x86_64-linux-gnu
python2.7-config
|
[
"The question already seems to have a solution but for better understanding of the problem, in python 3.10, the attribute MutableMapping from the module collections have been removed.\nIn your case, /usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/_vendor/pyparsing.py uses the MutableMapping attribute of collections.\nAs a backward compatibility, the attribute has been moved to collections.abc .\nSo a dirty hack would be (if you don't want to upgrade) to replace all collections.MutableMapping to collections.abc.MutableMapping\nAn example :\nimport collections \nif sys.version_info.major == 3 and sys.version_info.minor >= 10:\n\n from collections.abc import MutableMapping\nelse:\n from collections import MutableMapping\n\n",
"In my case, upgrading the following packages worked on Windows 11:\npip install --upgrade pip\npip install --upgrade wheel\npip install --upgrade setuptools\npip install --upgrade requests\n\nI hope this helps\n",
"There are some Libraries aren't fully compatible with 3.10 to the time of writing this answer\nYou can downgrade to 3.8 or 3.9 for now and it will work seamlessly\nyou can find all version for python here\nchoose your most suitable version\n",
"I'm not sure this qualifies as an \"answer\", but to offer an additional work-around for the case of a library that relies on the existence of collections.MutableMapping and hasn't been updated for Python 3.10+, you can place the following code directly before the import of the affected library:\nimport sys\n\nif sys.version_info.major == 3 and sys.version_info.minor >= 10:\n import collections\n setattr(collections, \"MutableMapping\", collections.abc.MutableMapping)\n\n",
"just update requests library version to 2.27.1\nuse :\nsudo apt-get install python-requests==2.27.1\n\n",
"For version 3.10 or above –\nfrom collections.abc import MutableMapping\n\nFor version 3.9 or lower –\nfrom collections import MutableMapping\n\nIf you want this environment completely dynamic then call the below code.\nimport collections \nif sys.version_info.major == 3 and sys.version_info.minor >= 10:\n from collections.abc import MutableMapping\nelse:\n from collections import MutableMapping\n\n",
"I was getting the same error on ubuntu 22.04, This is how I solved it.\nremove pipenv if you have installed it using apt\nsudo apt remove pipenv\n\ninstall pipenv unsing pip\n pip3 install pipenv\n\nactivate virtual environment\npython3 -m pipenv shell\n\ninstall from pipfile\npipenv install\n\n",
"In my case pip was trying to install too old pyparsing version from the requirements.txt file. When I changed from 2.0.1 to 2.4.7 everything went fine, so:\npip install pyparsing==2.4.7\n",
"We've bumped into this issue (also disguised as ModuleNotFoundError: No module named 'urllib3') with this exemplary stacktrace:\n File \"<..>/python3/dist-packages/pipenv/core.py\", line 21, in <module>\n import requests\n File \"<..>/python3/dist-packages/pipenv/vendor/requests/__init__.py\", line 62, in <module>\n from .packages.urllib3.exceptions import DependencyWarning\n File \"<..>/python3/dist-packages/pipenv/vendor/requests/packages/__init__.py\", line 29, in <module>\n import urllib3\n\nModuleNotFoundError: No module named 'urllib3'\n\nThe solutions posted in a dedicated blog post didn't help. When one actually installs requests or even urllib3 via pip/requirements.txt, the issue mentioned here pops up with this exemplary stacktrace:\n File \"<..>/python3/dist-packages/pipenv/core.py\", line 21, in <module>\n import requests\n File \"<..>/python3/dist-packages/pipenv/vendor/requests/__init__.py\", line 65, in <module>\n from . import utils\n File \"<..>/python3/dist-packages/pipenv/vendor/requests/utils.py\", line 27, in <module>\n from .cookies import RequestsCookieJar, cookiejar_from_dict\n File \"/usr/lib/python3/dist-packages/pipenv/vendor/requests/cookies.py\", line 172, in <module>\n class RequestsCookieJar(cookielib.CookieJar, collections.MutableMapping):\n\nAttributeError: module 'collections' has no attribute 'MutableMapping'\n\nWhat helped in our case was to pin the docker base image we were using to ensure a python 3.8 install/environment (via an ubuntu package, in this case python3-pip). Having left the base image to latest we got a 3.10 python environment, which, as others have mentioned, are not compatible with dependencies that are too old and require 3.8/3.9.\n",
"An alternative to make python 3 better and more comatible with itself is to use dynamic loading, for instance the code below fails for some versions of python 3.\nimport tornado.httpclient \n\nHowever the following import works (see code below), it uses the fact that python doesn't normally reload modules and that modules can be altered during runtime. It is an advanced type of load patching method to backport elements that should not have been changed in Python 3 in the official repositories but were for political reasons.\nimport collections.abc\nimport collections\ncollections.MutableMapping = collections.abc.MutablesMapping\nimport tornado.httpclient\n\n",
"I also had the same problem for no good reason and realized I was using Python3.10. After downgrading to Python3.9 I had no issue and never reencountered this. I only downgraded because the rest of my team was using version 3.9 and I was the only one using 3.10. And that solved the problem. I hope it also helps with your case.\n"
] |
[
18,
17,
8,
3,
2,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"pip",
"python",
"python_3.x"
] |
stackoverflow_0070943244_pip_python_python_3.x.txt
|
Q:
Objects from query results are working with dot notation but throwing not callable with .get
sample_object = db.fetch_one(sample_query) # Object from db query result
print(sample_object.key)  # working when called
#does not work when
print(sample_object.get("key"))
It works in Python 3.9.6 but not in 3.10.4.
A:
Based on fetchone() [sqlalchemy-docs], it returns:
Fetch one row.
When all rows are exhausted, returns None.
And the fetchone() method is a method of Row object in sqlalchemy ORM which:
Represent a single result row.
The Row object represents a row of a database result. It is typically associated in the 1.x series of SQLAlchemy with the CursorResult object, however is also used by the ORM for tuple-like results as of SQLAlchemy 1.4.
Looking more closely at the Row object in SQLAlchemy, you can see that it is not a dict-like object, so you can't call the .get() method on it.
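Since the behaviour is easy to misread, here is a stand-in built from collections.namedtuple — a hypothetical substitute for the real Row, used only to illustrate the tuple-like access pattern described above:

```python
from collections import namedtuple

# Hypothetical stand-in for a SQLAlchemy Row: tuple-like, attribute access only
Row = namedtuple("Row", ["key", "value"])
sample_object = Row(key="k1", value=42)

print(sample_object.key)              # k1    -- dot notation works
print(hasattr(sample_object, "get"))  # False -- no dict-style .get()

# If you need mapping-style access, convert explicitly:
as_dict = sample_object._asdict()
print(as_dict.get("key"))             # k1
```

With a real SQLAlchemy 1.4+ Row, the analogous escape hatch is row._mapping, which exposes a read-only dict-like view of the row.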
|
Objects from query results are working with dot notation but throwing not callable with .get
|
sample_object = db.fetch_one(sample_query) # Object from db query result
print(sample_object.key)  # working when called
#does not work when
print(sample_object.get("key"))
It works in Python 3.9.6 but not in 3.10.4.
|
[
"Based on fetchone() [sqlalchemy-docs], it returns:\n\nFetch one row.\nWhen all rows are exhausted, returns None.\n\nAnd the fetchone() method is a method of Row object in sqlalchemy ORM which:\n\nRepresent a single result row.\nThe Row object represents a row of a database result. It is typically associated in the 1.x series of SQLAlchemy with the CursorResult object, however is also used by the ORM for tuple-like results as of SQLAlchemy 1.4.\n\nSo with more focus on Row object in sqlalchemy you can find out it's not an object like dict python built-in type and you can't use .get() method.\n"
] |
[
0
] |
[] |
[] |
[
"fastapi",
"pydantic",
"python",
"python_3.x"
] |
stackoverflow_0074473427_fastapi_pydantic_python_python_3.x.txt
|