| content | title | question | answers | answers_scores | non_answers | non_answers_scores | tags | name |
|---|---|---|---|---|---|---|---|---|
| stringlengths 85–101k | stringlengths 0–150 | stringlengths 15–48k | list | list | list | list | list | stringlengths 35–137 |
Q:
How to split a Multiindex row into two Multiindex rows?
I have a dataframe with multiple levels of Multiindex. One of the levels is latlon, a string of two numbers separated by a ';'.
However, for further processing it makes much more sense to have a lat level and a lon level, with floats for the numbers instead of the combined string.
How do I best partition this level into two levels?
I have a solution, but it doesn't seem very pythonic and requires building a new dataframe, so I'm looking for a better way.
MWE:
Set up a simple test df:
number = [1, 2, 3]
name = ['foo', 'bar', 'baz']
latlon = ['10.1;50.1', '12.2;52.2', '13.3;53.3']
idx = pd.MultiIndex.from_arrays([number, name, latlon],
                                names=('number', 'name', 'latlon'))
data = np.random.rand(4,3)
df = pd.DataFrame(data=data, columns=idx)
(Original data has 10 levels in the Multiindex and is of size 25000, 750)
As you can see, latlon is easily human-readable, but not particularly useful. I want a lat and a lon level, with floats.
What I've come up with:
# get a list of them, to iterate through
latlons = df.columns.get_level_values('latlon').to_list()
# set up empty lists and start iterating
lats = []
lons = []
for i in latlons:
    # do some string searches and split by positions
    start_str = i.find(';')+1
    end_str = i.find('\n')
    lon_str = i[0:start_str-1]
    lon = float(lon_str)
    lons.append(lon)
    lat_str = i[start_str:end_str]
    lat = float(lat_str)
    lats.append(lat)
Now there's two lists, one with lats and one with lons, which can be used to build a new index and thus a new df:
number = df.columns.get_level_values('number').to_list()
# I can't reuse 'number' from the initial setup, since the original
# comes from an excel import, so I must extract it here.
name = df.columns.get_level_values('name').to_list()
idx = pd.MultiIndex.from_arrays([number, name, lats, lons],
                                names=('number', 'name', 'lat', 'lon'))
data = df.values
df2 = pd.DataFrame(data=data, columns=idx)
This works and is very easy to understand, but it all feels very hacky and one hiccup away from mixing up data.
Is there a simpler/better way?
A:
I would temporarily convert the MultiIndex to DataFrame to benefit from DataFrame's methods:
new_idx = pd.MultiIndex.from_frame(
    df.columns.to_frame()
      .pipe(lambda d: d.join(d.pop('latlon')
                              .str.split(';', expand=True)
                              .set_axis(['lat', 'lon'], axis=1)
                             ))
      .astype({'lat': float, 'lon': float})
)
df.columns = new_idx
Output:
number 1 2 3
name foo bar baz
lat 10.1 12.2 13.3
lon 50.1 52.2 53.3
0 0.796467 0.769194 0.733470
1 0.272247 0.558985 0.345007
2 0.209480 0.669443 0.648002
3 0.466146 0.262006 0.236987
A:
extract the index, split, and rebuild the index:
arr = df.columns
arrays = [arr.get_level_values(num) for num in range(arr.nlevels)]
*arrays, latlon = arrays
latlon = latlon.str.split(';')
lon = latlon.str[-1].astype(float).rename('lon')
lat = latlon.str[0].astype(float).rename('lat')
arrays.extend([lat,lon])
df.columns = pd.MultiIndex.from_arrays(arrays)
df
number 1 2 3
name foo bar baz
lat 10.1 12.2 13.3
lon 50.1 52.2 53.3
0 0.469529 0.356716 0.287799
1 0.786352 0.557752 0.318536
2 0.877670 0.503199 0.225858
3 0.324959 0.253091 0.967328
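A compact variant (a sketch, not from either answer above; assumes the original df from the question). Index.str.split with expand=True returns a MultiIndex, whose two levels can be pulled out and cast directly:
import pandas as pd

parts = df.columns.get_level_values('latlon').str.split(';', expand=True)
df.columns = pd.MultiIndex.from_arrays(
    [df.columns.get_level_values('number'),
     df.columns.get_level_values('name'),
     parts.get_level_values(0).astype(float),   # lat is the first part
     parts.get_level_values(1).astype(float)],  # lon is the second part
    names=('number', 'name', 'lat', 'lon'),
)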
|
How to split a Multiindex row into two Multiindex rows?
|
I have a dataframe with multiple levels of Multiindex. One of the levels is latlon, a string of two numbers separated by a ';'.
However, for further processing it makes much more sense to have a lat level and a lon level, with floats for the numbers instead of the combined string.
How do I best partition this level into two levels?
I have a solution, but it doesn't seem very pythonic and requires building a new dataframe, so I'm looking for a better way.
MWE:
Set up a simple test df:
number = [1, 2, 3]
name = ['foo', 'bar', 'baz']
latlon = ['10.1;50.1', '12.2;52.2', '13.3;53.3']
idx = pd.MultiIndex.from_arrays([number, name, latlon],
                                names=('number', 'name', 'latlon'))
data = np.random.rand(4,3)
df = pd.DataFrame(data=data, columns=idx)
(Original data has 10 levels in the Multiindex and is of size 25000, 750)
As you can see, latlon is easily human-readable, but not particularly useful. I want a lat and a lon level, with floats.
What I've come up with:
# get a list of them, to iterate through
latlons = df.columns.get_level_values('latlon').to_list()
# set up empty lists and start iterating
lats = []
lons = []
for i in latlons:
    # do some string searches and split by positions
    start_str = i.find(';')+1
    end_str = i.find('\n')
    lon_str = i[0:start_str-1]
    lon = float(lon_str)
    lons.append(lon)
    lat_str = i[start_str:end_str]
    lat = float(lat_str)
    lats.append(lat)
Now there's two lists, one with lats and one with lons, which can be used to build a new index and thus a new df:
number = df.columns.get_level_values('number').to_list()
# I can't reuse 'number' from the initial setup, since the original
# comes from an excel import, so I must extract it here.
name = df.columns.get_level_values('name').to_list()
idx = pd.MultiIndex.from_arrays([number, name, lats, lons],
                                names=('number', 'name', 'lat', 'lon'))
data = df.values
df2 = pd.DataFrame(data=data, columns=idx)
This works and is very easy to understand, but it all feels very hacky and one hiccup away from mixing up data.
Is there a simpler/better way?
|
[
"I would temporarily convert the MultiIndex to DataFrame to benefit from DataFrame's methods:\nnew_idx = pd.MultiIndex.from_frame(\n df.columns.to_frame()\n .pipe(lambda d: d.join(d.pop('latlon')\n .str.split(';', expand=True)\n .set_axis(['lat', 'lon'], axis=1)\n ))\n .astype({'lat': float, 'lon': float})\n)\n\ndf.columns = new_idx\n\nOutput:\nnumber 1 2 3\nname foo bar baz\nlat 10.1 12.2 13.3\nlon 50.1 52.2 53.3\n0 0.796467 0.769194 0.733470\n1 0.272247 0.558985 0.345007\n2 0.209480 0.669443 0.648002\n3 0.466146 0.262006 0.236987\n\n",
"extract the index, split, and rebuild the index:\narr = df.columns\narrays = [arr.get_level_values(num) for num in range(arr.nlevels)]\n*arrays, latlon = arrays\nlatlon = latlon.str.split(';')\nlon = latlon.str[-1].astype(float).rename('lon')\nlat = latlon.str[0].astype(float).rename('lat')\narrays.extend([lat,lon])\ndf.columns = pd.MultiIndex.from_arrays(arrays)\ndf\nnumber 1 2 3\nname foo bar baz\nlat 10.1 12.2 13.3\nlon 50.1 52.2 53.3\n0 0.469529 0.356716 0.287799\n1 0.786352 0.557752 0.318536\n2 0.877670 0.503199 0.225858\n3 0.324959 0.253091 0.967328\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"multi_index",
"pandas",
"python"
] |
stackoverflow_0074545649_multi_index_pandas_python.txt
|
Q:
Using OracleDB OS.Environment Password
I am trying to connect to an Oracle database with Python code. I am using the oracledb package, but I want the user to be able to connect to the DB with their own username and password rather than hard-coding them into the code itself.
So far I have this,
import oracledb
import os
username=os.environ.get("Username")
pw=os.environ.get("pasword")
conn = oracledb.connect(user=username, password=pw, host="url", port=0000, service_name="service")
A:
Source the environment variables (Make them available to the python process)
$cat env.sh
export USERNAME=app_schema
export PASSWORD=secret
$cat connect.py
import oracledb
import os
username=os.environ.get("USERNAME")
pw=os.environ.get("PASSWORD")
conn = oracledb.connect(user=username, password=pw, host="localhost", port=1521, service_name="XEPDB1")
c = conn.cursor()
c.execute('select dummy from dual')
for row in c:
    print(row[0])
conn.close()
$ # source the variables (note the dot)
$. env.sh
$python connect.py
X
Best of luck!
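If the credentials should come from the user interactively instead of the shell environment, a small variation is possible (a sketch; getpass is in the standard library, and the connection parameters below reuse the answer's example values):
import getpass
import oracledb

username = input("DB user: ")
pw = getpass.getpass("DB password: ")  # prompt without echoing the password
conn = oracledb.connect(user=username, password=pw,
                        host="localhost", port=1521, service_name="XEPDB1")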
|
Using OracleDB OS.Environment Password
|
I am trying to connect to an Oracle database with Python code. I am using the oracledb package, but I want the user to be able to connect to the DB with their own username and password rather than hard-coding them into the code itself.
So far I have this,
import oracledb
import os
username=os.environ.get("Username")
pw=os.environ.get("pasword")
conn = oracledb.connect(user=username, password=pw, host="url", port=0000, service_name="service")
|
[
"Source the environment variables (Make them available to the python process)\n$cat env.sh\nexport USERNAME=app_schema\nexport PASSWORD=secret\n\n$cat connect.py\nimport oracledb\nimport os\n\nusername=os.environ.get(\"USERNAME\")\npw=os.environ.get(\"PASSWORD\")\nconn = oracledb.connect(user=username, password=pw, host=\"localhost\", port=1521, service_name=\"XEPDB1\")\nc = conn.cursor()\nc.execute('select dummy from dual')\nfor row in c:\n print (row[0])\nconn.close()\n\n$ # source the variables (note the dot)\n$. env.sh\n$python connect.py\nX\n\nBest of luck!\n"
] |
[
0
] |
[] |
[] |
[
"oracle",
"python"
] |
stackoverflow_0074539834_oracle_python.txt
|
Q:
Failing to install lxml using pip
I am attempting to use pip to install lxml. I have Windows 11 and Python version python-3.10.2-amd64. I am using Visual Studio Code (VSC) as well. I realized I needed lxml from this error message in my VSC terminal:
Traceback (most recent call last):
  File "Vegas.py", line 13, in <module>
    soup = BeautifulSoup(html_text, 'lxml')
  File "/usr/lib/python3.6/site-packages/bs4/__init__.py", line 248, in __init__
    % ",".join(features))
bs4.FeatureNotFound: Couldn't find a tree builder with the features you
requested: lxml. Do you need to install a parser library?
From there, I tried to install lxml by using the command in the VSC terminal:
pip install lxml
And I got this error message:
Collecting lxml
Using cached lxml-4.7.1.tar.gz (3.2 MB)
Preparing metadata (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3.6 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-xgwntbxb/lxml_73c33ff5c1614a6da59bbd9f3017fa5c/setup.py'"'"'; __file__='"'"'/tmp/pip-install-xgwntbxb/lxml_73c33ff5c1614a6da59bbd9f3017fa5c/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-ezhmqybu
cwd: /tmp/pip-install-xgwntbxb/lxml_73c33ff5c1614a6da59bbd9f3017fa5c/
Complete output (3 lines):
Building lxml version 4.7.1.
Building without Cython.
Error: Please make sure the libxml2 and libxslt development packages are installed.
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/84/74/4a97db45381316cd6e7d4b1eb707d7f60d38cb2985b5dfd7251a340404da/lxml-4.7.1.tar.gz#sha256=a1613838aa6b89af4ba10a0f3a972836128801ed008078f8c1244e65958f1b24 (from https://pypi.org/simple/lxml/) (requires-python:>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, != 3.4.*). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
Using cached lxml-4.6.5.tar.gz (3.2 MB)
So I went to this website to download libxml2 and libxslt: https://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml. After downloading the lxml-4.7.1-cp310-cp310-win_amd64.whl version (since it matched my Python version) I tried using the following command in the Windows command prompt:
pip install lxml-4.7.1-cp310-cp310-win_amd64.whl
And I got this result:
lxml is already installed with the same version as the provided wheel.
Use --force-reinstall to force an installation of the wheel.
So then I did the same command but added the --force-reinstall and it said it successfully installed lxml-4.7.1. Then I went back to the VSC terminal, ran "pip install lxml" and got the same error message as I did before. So I tried the "pip install lxml-4.7.1-cp310-cp310-win_amd64.whl" command in the VSC terminal and got this error:
ERROR: lxml-4.7.1-cp310-cp310-win_amd64.whl is not a supported wheel on this platform.
Then I thought that I should try the win32 version, since I have an Intel processor. So I ran this command in the command prompt:
pip install lxml-4.7.1-cp310-cp310-win32.whl
And I get this error message:
ERROR: lxml-4.7.1-cp310-cp310-win32.whl is not a supported wheel on this platform.
So I'm at a loss. Any help is greatly appreciated!
A:
I am using Windows 11 and Python 3.11, so for the easy solution I first downloaded the latest 'lxml-4.9.0-cp311-cp311-win_amd64.whl' from https://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml.
Then I copied it to my user folder, which is "C:\Users\memon",
and ran this command from the terminal:
pip install lxml-4.9.0-cp311-cp311-win_amd64.whl
And it successfully installed 'lxml' on my Windows 11 without any errors.
Previously I was facing a 'C++ 14.0 or greater is required' error for the same 'lxml' install, which I resolved with this thread: https://stackoverflow.com/a/64262038/14957324
A:
Try with the last wheels on https://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml corresponding to your 3.10 version. It worked for me
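As a side note on the "not a supported wheel on this platform" errors: you can check which wheel tags your interpreter actually accepts (a sketch; pip debug is marked experimental but prints the list of compatible tags):
python -c "import platform; print(platform.python_version(), platform.architecture()[0])"
pip debug --verbose
A wheel installs only if its tag (e.g. cp310-cp310-win_amd64) appears in that list, which is why a wheel built for a different interpreter or bitness is rejected.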
|
Failing to install lxml using pip
|
I am attempting to use pip to install lxml. I have Windows 11 and Python version python-3.10.2-amd64. I am using Visual Studio Code (VSC) as well. I realized I needed lxml from this error message in my VSC terminal:
Traceback (most recent call last):
  File "Vegas.py", line 13, in <module>
    soup = BeautifulSoup(html_text, 'lxml')
  File "/usr/lib/python3.6/site-packages/bs4/__init__.py", line 248, in __init__
    % ",".join(features))
bs4.FeatureNotFound: Couldn't find a tree builder with the features you
requested: lxml. Do you need to install a parser library?
From there, I tried to install lxml by using the command in the VSC terminal:
pip install lxml
And I got this error message:
Collecting lxml
Using cached lxml-4.7.1.tar.gz (3.2 MB)
Preparing metadata (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3.6 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-xgwntbxb/lxml_73c33ff5c1614a6da59bbd9f3017fa5c/setup.py'"'"'; __file__='"'"'/tmp/pip-install-xgwntbxb/lxml_73c33ff5c1614a6da59bbd9f3017fa5c/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-ezhmqybu
cwd: /tmp/pip-install-xgwntbxb/lxml_73c33ff5c1614a6da59bbd9f3017fa5c/
Complete output (3 lines):
Building lxml version 4.7.1.
Building without Cython.
Error: Please make sure the libxml2 and libxslt development packages are installed.
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/84/74/4a97db45381316cd6e7d4b1eb707d7f60d38cb2985b5dfd7251a340404da/lxml-4.7.1.tar.gz#sha256=a1613838aa6b89af4ba10a0f3a972836128801ed008078f8c1244e65958f1b24 (from https://pypi.org/simple/lxml/) (requires-python:>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, != 3.4.*). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
Using cached lxml-4.6.5.tar.gz (3.2 MB)
So I went to this website to download libxml2 and libxslt: https://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml. After downloading the lxml-4.7.1-cp310-cp310-win_amd64.whl version (since it matched my Python version) I tried using the following command in the Windows command prompt:
pip install lxml-4.7.1-cp310-cp310-win_amd64.whl
And I got this result:
lxml is already installed with the same version as the provided wheel.
Use --force-reinstall to force an installation of the wheel.
So then I did the same command but added the --force-reinstall and it said it successfully installed lxml-4.7.1. Then I went back to the VSC terminal, ran "pip install lxml" and got the same error message as I did before. So I tried the "pip install lxml-4.7.1-cp310-cp310-win_amd64.whl" command in the VSC terminal and got this error:
ERROR: lxml-4.7.1-cp310-cp310-win_amd64.whl is not a supported wheel on this platform.
Then I thought that I should try the win32 version, since I have an Intel processor. So I ran this command in the command prompt:
pip install lxml-4.7.1-cp310-cp310-win32.whl
And I get this error message:
ERROR: lxml-4.7.1-cp310-cp310-win32.whl is not a supported wheel on this platform.
So I'm at a loss. Any help is greatly appreciated!
|
[
"I am using windows 11 and python 3.11 so for the easy solution first I downloaded the latest 'lxml-4.9.0-cp311-cp311-win_amd64.whl' from the https://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml\nThen copied it to my user folder which is \"C:\\Users\\memon\"\nAnd then run this command from the terminal:\npip install lxml-4.9.0-cp311-cp311-win_amd64.whl\n\nAnd it successfully installed 'lxml' on my windows 11 without any errors.\nPreviously I was facing 'C++ 14.0 or greater is required' error for the same 'lxml' install which I resolved with this thread: https://stackoverflow.com/a/64262038/14957324\n",
"Try with the last wheels on https://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml corresponding to your 3.10 version. It worked for me\n"
] |
[
6,
0
] |
[] |
[] |
[
"lxml",
"python",
"python_wheel"
] |
stackoverflow_0071152710_lxml_python_python_wheel.txt
|
Q:
How can you group a data frame and reshape from long to wide?
I am fairly new to Python, so excuse me if this question has been answered before or can be easily solved.
I have a long data frame with numerical variables and categorical variables. It looks something like this:
Category Detail Gender Weight
Food Apple Female 30
Food Apple Male 40
Beverage Milk Female 10
Beverage Milk Male 5
Beverage Milk Male 20
Food Banana Female 50
What I want to do is this: group by Category and Detail and then count all instances of 'Female' and 'Male'. I then want to weight these instances (see column 'Weight'). This should be done by taking the value from column 'Weight' and dividing it by the summed weight (so here, for the group Beverage, Milk, Male, it would be 25 divided by 35). It would also be nice to have the share of each gender.
At the end of the day I want my data frame to look something like this:
Category Detail Female Male
Beverage Milk 29% 71%
Food Apple 43% 57%
Food Banana 100% 0%
So in addition to the grouping, I want to kind of 'unmelt' the data frame by taking Female and Male and adding them as new columns.
I could just sum the weights with groupby on different levels, but how can I reshape the data frame to add these new columns?
Is there any way to do that? Thanks for any help in advance!
A:
Like so
df2 = df.pivot_table(
    index=['Category', 'Detail'],
    columns='Gender',
    values='Weight',
    aggfunc='sum'
).fillna(0)
final = df2[['Female', 'Male']].div(df2.sum(axis=1), axis=0)
Gender Female Male
Category Detail
Beverage Milk 0.285714 0.714286
Food Apple 0.428571 0.571429
Banana 1.000000 0.000000
A:
Use DataFrame.pivot_table with divide summed values, last multiple by 100 and round:
df = df.pivot_table(index=['Category', 'Detail'],
                    columns='Gender', values='Weight', aggfunc='sum', fill_value=0)
df = df.div(df.sum(axis=1), axis=0).mul(100).round().reset_index()
print (df)
Gender Category Detail Female Male
0 Beverage Milk 29.0 71.0
1 Food Apple 43.0 57.0
2 Food Banana 100.0 0.0
For percentages use:
df = df.pivot_table(index=['Category', 'Detail'],
                    columns='Gender', values='Weight', aggfunc='sum', fill_value=0)
df = df.div(df.sum(axis=1), axis=0).applymap("{:.2%}".format).reset_index()
print (df)
Gender Category Detail Female Male
0 Beverage Milk 28.57% 71.43%
1 Food Apple 42.86% 57.14%
2 Food Banana 100.00% 0.00%
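For completeness, the sample frame both answers assume can be rebuilt from the question's table like this (a sketch):
import pandas as pd

df = pd.DataFrame({
    'Category': ['Food', 'Food', 'Beverage', 'Beverage', 'Beverage', 'Food'],
    'Detail':   ['Apple', 'Apple', 'Milk', 'Milk', 'Milk', 'Banana'],
    'Gender':   ['Female', 'Male', 'Female', 'Male', 'Male', 'Female'],
    'Weight':   [30, 40, 10, 5, 20, 50],
})
Note that newer pandas releases deprecate DataFrame.applymap in favour of DataFrame.map, so the percentage-formatting snippet may emit a warning there.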
|
How can you group a data frame and reshape from long to wide?
|
I am fairly new to Python, so excuse me if this question has been answered before or can be easily solved.
I have a long data frame with numerical variables and categorical variables. It looks something like this:
Category Detail Gender Weight
Food Apple Female 30
Food Apple Male 40
Beverage Milk Female 10
Beverage Milk Male 5
Beverage Milk Male 20
Food Banana Female 50
What I want to do is this: group by Category and Detail and then count all instances of 'Female' and 'Male'. I then want to weight these instances (see column 'Weight'). This should be done by taking the value from column 'Weight' and dividing it by the summed weight (so here, for the group Beverage, Milk, Male, it would be 25 divided by 35). It would also be nice to have the share of each gender.
At the end of the day I want my data frame to look something like this:
Category Detail Female Male
Beverage Milk 29% 71%
Food Apple 43% 57%
Food Banana 100% 0%
So in addition to the grouping, I want to kind of 'unmelt' the data frame by taking Female and Male and adding them as new columns.
I could just sum the weights with groupby on different levels, but how can I reshape the data frame to add these new columns?
Is there any way to do that? Thanks for any help in advance!
|
[
"Like so\ndf2 = df.pivot_table(\n index=['Category', 'Detail'], \n columns='Gender', \n values='Weight', \n aggfunc='sum'\n).fillna(0)\nfinal = df2[['Female', 'Male']].div(df2.sum(axis=1), axis=0)\n\nGender Female Male\nCategory Detail \nBeverage Milk 0.285714 0.714286\nFood Apple 0.428571 0.571429\n Banana 1.000000 0.000000\n\n",
"Use DataFrame.pivot_table with divide summed values, last multiple by 100 and round:\ndf = df.pivot_table(index=['Category','Detail'],\n columns='Gender', values='Weight', aggfunc='sum', fill_value=0)\ndf = df.div(df.sum(axis=1), axis=0).mul(100).round().reset_index()\nprint (df)\nGender Category Detail Female Male\n0 Beverage Milk 29.0 71.0\n1 Food Apple 43.0 57.0\n2 Food Banana 100.0 0.0\n\nFor percentages use:\ndf = df.pivot_table(index=['Category','Detail'],\n columns='Gender', values='Weight', aggfunc='sum', fill_value=0)\ndf = df.div(df.sum(axis=1), axis=0).applymap(\"{:.2%}\".format).reset_index()\nprint (df)\nGender Category Detail Female Male\n0 Beverage Milk 28.57% 71.43%\n1 Food Apple 42.86% 57.14%\n2 Food Banana 100.00% 0.00%\n\n"
] |
[
3,
3
] |
[] |
[] |
[
"group_by",
"melt",
"pandas",
"pivot",
"python"
] |
stackoverflow_0074545972_group_by_melt_pandas_pivot_python.txt
|
Q:
Python : How to properly implement a raise an exception in a function
I am trying to make a function that receives a list/array and returns the index of the maximum
value in that sequence. The function should raise an exception if non-numerical values are
present in the list.
def maxvalue(values):
    """
    Function that receives a list/array and returns the index of the maximum
    value in that sequence
    """
    indices = []
    max_value = max(values)
    for i in range(len(values)):
        if type(i) not in (float, int):  # raise an exception if the value is not float or integer
            raise TypeError("Only numberical values are allowed")
        if values[i] == max_value:
            indices.append(i)
    return indices
maxvalue([1, 1, 1.5, "e", 235.8, 9, 220, 220])
The function works when it receives a list containing floats and integers and doesn't work if there is a string in it.
How do I get the function to produce "TypeError("Only numberical values are allowed")" error quote when there is a str present in the list?
Currently, it produces "TypeError: '>' not supported between instances of 'str' and 'float'"
A:
The comparison happens in the max function, which raises the exception.
You should do all checks before your logic.
def maxvalue(values):
    """
    Function that receives a list/array and returns the index of the maximum
    value in that sequence
    """
    try:
        max_value = max(values)
    except TypeError:
        raise TypeError("Only numberical values are allowed")

    indices = []
    for idx, val in enumerate(values):
        if val == max_value:
            indices.append(idx)

    return indices
As you can see, I am catching the TypeError and re-raising it with a different message. Also, use enumerate in for loops.
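A compact variant of the same idea (a sketch): validate every element up front with isinstance, then collect the indices in one pass.
def maxvalue(values):
    # reject the whole list before computing anything
    if not all(isinstance(v, (int, float)) for v in values):
        raise TypeError("Only numerical values are allowed")
    max_value = max(values)
    return [i for i, v in enumerate(values) if v == max_value]

print(maxvalue([1, 1, 1.5, 235.8, 9, 235.8]))  # [3, 5]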
A:
def maxvalue(values):
    indices = []
    print(values)
    int_values = []
    """the max function cannot fetch a max value with a string as part of the list, so you can filter the list to get only integer values before you get the max value"""
    for x in values:
        if type(x) == int:
            int_values.append(x)

    max_value = max(int_values)

    for i in range(len(int_values)):
        if int_values[i] == max_value:
            indices.append(i)
    return indices

print(maxvalue([1, 1, 1.5, "e", 235.8, 9, 220, 220]))
|
Python : How to properly implement a raise an exception in a function
|
I am trying to make a function that receives a list/array and returns the index of the maximum
value in that sequence. The function should raise an exception if non-numerical values are
present in the list.
def maxvalue(values):
    """
    Function that receives a list/array and returns the index of the maximum
    value in that sequence
    """
    indices = []
    max_value = max(values)
    for i in range(len(values)):
        if type(i) not in (float, int):  # raise an exception if the value is not float or integer
            raise TypeError("Only numberical values are allowed")
        if values[i] == max_value:
            indices.append(i)
    return indices
maxvalue([1, 1, 1.5, "e", 235.8, 9, 220, 220])
The function works when it receives a list containing floats and integers and doesn't work if there is a string in it.
How do I get the function to produce "TypeError("Only numberical values are allowed")" error quote when there is a str present in the list?
Currently, it produces "TypeError: '>' not supported between instances of 'str' and 'float'"
|
[
"The 'Comparison' happens in max function which raises an exception.\nYou should do all checks, before your logic.\ndef maxvalue(values):\n \"\"\"\n Function that receives a list/array and returns the index of the maximum\n value in that sequence\n\n \"\"\"\n\n try:\n max_value = max(values)\n except TypeError:\n raise TypeError(\"Only numberical values are allowed\")\n\n indices = []\n for idx, val in enumerate(values):\n if val == max_value:\n indices.append(idx)\n\n return indices\n\nAs you can see i am catching TypeError and re-raise it with different message. Also use enumerate in for loops.\n",
"def maxvalue(values):\n indices = []\n print(values)\n int_values = []\n \"\"\"the max function cannot fetch a max value with a string as part of the list so you can filter the list to get only integer values before you get max value\"\"\"\n for x in values: \n if type(x)==int:\n int_values.append(x)\n\n max_value = max(int_values)\n\n for i in range(len(int_values)):\n if int_values[i] == max_value:\n indices.append(i)\n return indices\n\nprint(maxvalue([1, 1, 1.5, \"e\", 235.8, 9, 220, 220]))\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"function",
"indexing",
"max",
"python",
"typeerror"
] |
stackoverflow_0074545604_function_indexing_max_python_typeerror.txt
|
Q:
Column created with label() returns no values
this is the query
query = (
select(
User.id,
(func.sqrt(func.pow(User.dist[0] - (-4.23), 2))).label("dist"),
)
.order_by("dist")
.limit(1)
)
when I execute it I get the id but in place of dist I get None
A:
This query is correct and will work; I just did not realize that indexing an array in Postgres starts at 1, not zero. It kept returning None with no errors, so I thought my query had something to do with it.
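For reference, the corrected query would therefore index the first array element with 1 rather than 0 (a sketch based on the answer above, using the same select/func constructs as the question):
query = (
    select(
        User.id,
        func.sqrt(func.pow(User.dist[1] - (-4.23), 2)).label("dist"),  # 1-based in Postgres
    )
    .order_by("dist")
    .limit(1)
)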
|
Column created with label() returns no values
|
This is the query:
query = (
    select(
        User.id,
        (func.sqrt(func.pow(User.dist[0] - (-4.23), 2))).label("dist"),
    )
    .order_by("dist")
    .limit(1)
)
When I execute it, I get the id, but in place of dist I get None.
|
[
"This query is correct and will work, I just did not realize indexing an Array in Postgres starts at 1 and not zero. It kept returning None with no errors so I thought my query had something to do with it.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0074536053_python_sqlalchemy.txt
|
Q:
How to get particular index number of list items
my_list = ['A', 'B', 'C', 'D', 'E', 'B', 'F', 'D', 'C', 'B']
idx = my_list.index('B')
print("index :", idx)
Here I used the '.index()' function.
for i in my_list:
    print(f"index no. {my_list.index(i)}")
I tried to find the index number of each item in the my_list list.
But it gave the same result for equal values, even though they are located in different places in the list.
if 'B' == my_list[(len(my_list) - 1)]:
    print("True")
if 'B' == my_list[(len(my_list) - 4)]:
    print("True")
I need to refer to particular values by their index number (to do something with them).
Imagine I need to nest values with other values of the list.
i.e :
my_list_2 = ['A', 'B', '2', 'C', '3', 'D', '4', 'E', 'B', '2', 'F', '6', 'D', 'C', '3', 'B']
I want to nest each value with its consecutive (numeric) item; the other values should be nested with a '*' mark (as default), because they have no consecutive (numeric) value.
So how do I refer to each (string) value and each (numeric) value in the code in order to nest them?
For this example, the expected result is:
--> my_list_2 = [['A', ''], ['B', '2'], ['C', '3'], ['D', '4'], ['E', ''], ['B', '2'], ['F', '6'], ['D', ''], ['C', '3'], ['B', '']]
This is the coding part which I tried to do this :
def_setter = [
    [my_list_2[i], '*'] if my_list_2[i].isalpha() and my_list_2[i + 1].isalpha()
    else [my_list_2[i], my_list_2[i + 1]]
    for i in range(0, len(my_list_2) - 1)]
print("Result : ", def_setter)
But it did not give me the expected result.
Could you please help me with this!
A:
There might be a more pythonic way to reorganize this array; however, with the following loop you can walk through the list and append [letter, value] if the next item is a number, or [letter, ''] if it is a letter.
def_setter = []
i = 0
while i < len(my_list_2):
    if i + 1 == len(my_list_2):
        if my_list_2[i].isalpha():
            def_setter.append([my_list_2[i], ''])
        break
    prev, cur = my_list_2[i], my_list_2[i + 1]
    if cur.isalpha():
        def_setter.append([prev, ''])
        i += 1
    else:
        def_setter.append([prev, cur])
        i += 2
print(def_setter)
>>> [['A', ''],
     ['B', '2'],
     ['C', '3'],
     ['D', '4'],
     ['E', ''],
     ['B', '2'],
     ['F', '6'],
     ['D', ''],
     ['C', '3'],
     ['B', '']]
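An equivalent lookahead formulation (a sketch, not from the answer) that uses str.isdigit() for the check and produces the same nested output:
def pair_with_following_number(items):
    out, i = [], 0
    while i < len(items):
        nxt = items[i + 1] if i + 1 < len(items) else ''
        if nxt.isdigit():
            out.append([items[i], nxt])  # letter followed by its number
            i += 2
        else:
            out.append([items[i], ''])   # no consecutive number
            i += 1
    return out

print(pair_with_following_number(my_list_2))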
|
How to get particular index number of list items
|
my_list = ['A', 'B', 'C', 'D', 'E', 'B', 'F', 'D', 'C', 'B']
idx = my_list.index('B')
print("index :", idx)
Here I used the '.index()' function.
for i in my_list:
    print(f"index no. {my_list.index(i)}")
I tried to find the index number of each item in the my_list list.
But it gave the same result for equal values, even though they are located in different places in the list.
if 'B' == my_list[(len(my_list) - 1)]:
    print("True")
if 'B' == my_list[(len(my_list) - 4)]:
    print("True")
I need to refer to particular values by their index number (to do something with them).
Imagine I need to nest values with other values of the list.
i.e :
my_list_2 = ['A', 'B', '2', 'C', '3', 'D', '4', 'E', 'B', '2', 'F', '6', 'D', 'C', '3', 'B']
I want to nest each value with its consecutive (numeric) item; the other values should be nested with a '*' mark (as default), because they have no consecutive (numeric) value.
So how do I refer to each (string) value and each (numeric) value in the code in order to nest them?
For this example, the expected result is:
--> my_list_2 = [['A', ''], ['B', '2'], ['C', '3'], ['D', '4'], ['E', ''], ['B', '2'], ['F', '6'], ['D', ''], ['C', '3'], ['B', '']]
This is the coding part which I tried to do this :
def_setter = [
    [my_list_2[i], '*'] if my_list_2[i].isalpha() and my_list_2[i + 1].isalpha()
    else [my_list_2[i], my_list_2[i + 1]]
    for i in range(0, len(my_list_2) - 1)]
print("Result : ", def_setter)
But it did not give me the expected result.
Could you please help me with this!
|
[
"There might be a more pythonic way to reorganize this array, however, with the following function you can loop through the list and append [letter, value] if value is a number, append [letter, ''] if value is a letter.\ndef_setter = []\ni = 0\nwhile i < len(my_list_2):\n if i + 1 == len(my_list_2):\n if my_list_2[i].isalpha():\n def_setter.append([my_list_2[i], ''])\n break\n prev, cur = my_list_2[i], my_list_2[i + 1]\n if cur.isalpha():\n def_setter.append([prev, ''])\n i += 1\n else:\n def_setter.append([prev, cur])\n i += 2\nprint(def_setter)\n\n>>> [['A', ''],\n ['B', '2'],\n ['C', '3'],\n ['D', '4'],\n ['E', ''],\n ['B', '2'],\n ['F', '6'],\n ['D', ''],\n ['C', '3'],\n ['B', '']]\n\n"
] |
[
2
] |
[] |
[] |
[
"arraylist",
"data_science",
"python"
] |
stackoverflow_0074545933_arraylist_data_science_python.txt
|
Q:
Python: How to check what types are in defined types.UnionType?
I am using Python 3.11 and I need to detect whether an optional class attribute is an Enum (i.e. its type is a subclass of Enum).
With typing.get_type_hints() I can get the type hints as a dict, but how do I check whether a field's type is an optional Enum (subclass)? Even better if I could get the type of any optional field, regardless of whether it is Optional[str], Optional[int], Optional[Class_X], etc.
Example code
from typing import Optional, get_type_hints
from enum import IntEnum, Enum

class TestEnum(IntEnum):
    foo = 1
    bar = 2

class Foo():
    opt_enum : TestEnum | None = None

types = get_type_hints(Foo)['opt_enum']
This works
(ipython)
In [4]: Optional[TestEnum] == types
Out[4]: True
These ones fail
(yes, these are desperate attempts)
In [6]: Optional[IntEnum] == types
Out[6]: False
and
In [11]: issubclass(Enum, types)
Out[11]: False
and
In [12]: issubclass(types, Enum)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [12], line 1
----> 1 issubclass(types, Enum)
TypeError: issubclass() arg 1 must be a class
and
In [13]: issubclass(types, Optional[Enum])
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [13], line 1
----> 1 issubclass(types, Optional[Enum])
File /usr/lib/python3.10/typing.py:1264, in _UnionGenericAlias.__subclasscheck__(self, cls)
1262 def __subclasscheck__(self, cls):
1263 for arg in self.__args__:
-> 1264 if issubclass(cls, arg):
1265 return True
TypeError: issubclass() arg 1 must be a class
and
In [7]: IntEnum in types
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [7], line 1
----> 1 IntEnum in types
TypeError: argument of type 'types.UnionType' is not iterable
Why I needed this
I have several cases where I am importing data from csv files and creating objects of a class from each row. csv.DictReader() returns a dict[str, str], and I need to fix the types of the fields before attempting to create the object. However, some of the object fields are Optional[int], Optional[bool], Optional[EnumX] or Optional[ClassX]. I have several of those classes multi-inheriting my CSVImportable() class/interface, and I want to implement the logic once in the CSVImportable() class instead of writing roughly the same field-aware code in every subclass. This CSVImportable._field_type_updater() should:
correctly change the types at least for basic types and enums
gracefully skip Optional[ClassX] fields
Naturally I am thankful for better designs too :-)
A:
When you are dealing with a parameterized type (generic or special like typing.Optional), you can inspect it via get_args/get_origin.
Doing that you'll see that T | S is implemented slightly differently than typing.Union[T, S]. The origin of the former is types.UnionType, while that of the latter is typing.Union. Unfortunately this means that to cover both variants, we need two distinct checks.
from types import UnionType
from typing import Union, get_origin

def is_union(t: object) -> bool:
    origin = get_origin(t)
    return origin is Union or origin is UnionType
Using typing.Optional just uses typing.Union under the hood, so the origin is the same. Here is a working demo:
from enum import IntEnum
from types import UnionType
from typing import Optional, get_type_hints, get_args, get_origin, Union


class TestEnum(IntEnum):
    foo = 1
    bar = 2


class Foo:
    opt_enum1: TestEnum | None = None
    opt_enum2: Optional[TestEnum] = None
    opt_enum3: TestEnum
    opt4: str


def is_union(t: object) -> bool:
    origin = get_origin(t)
    return origin is Union or origin is UnionType


if __name__ == "__main__":
    for name, type_ in get_type_hints(Foo).items():
        if type_ is TestEnum or is_union(type_) and TestEnum in get_args(type_):
            print(name, "accepts TestEnum")
Output:
opt_enum1 accepts TestEnum
opt_enum2 accepts TestEnum
opt_enum3 accepts TestEnum
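Building on this, the non-None member of an optional annotation can be pulled out with get_args, which may help with the CSV use case described in the question (a sketch, reusing the union check above):
from types import UnionType
from typing import Union, get_args, get_origin

def unwrap_optional(t: object) -> object:
    # Return X for X | None / Optional[X]; otherwise return t unchanged.
    if get_origin(t) in (Union, UnionType):
        args = [a for a in get_args(t) if a is not type(None)]
        if len(args) == 1:
            return args[0]
    return t

print(unwrap_optional(TestEnum | None))  # <enum 'TestEnum'>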
|
Python: How to check what types are in defined types.UnionType?
|
I am using Python 3.11 and I need to detect whether an optional class attribute is an Enum (i.e. its type is a subclass of Enum).
With typing.get_type_hints() I can get the type hints as a dict, but how do I check whether a field's type is an optional Enum (subclass)? Even better if I could get the type of any optional field, regardless of whether it is Optional[str], Optional[int], Optional[Class_X], etc.
Example code
from typing import Optional, get_type_hints
from enum import IntEnum, Enum

class TestEnum(IntEnum):
    foo = 1
    bar = 2

class Foo():
    opt_enum : TestEnum | None = None

types = get_type_hints(Foo)['opt_enum']
This works
(ipython)
In [4]: Optional[TestEnum] == types
Out[4]: True
These ones fail
(yes, these are desperate attempts)
In [6]: Optional[IntEnum] == types
Out[6]: False
and
In [11]: issubclass(Enum, types)
Out[11]: False
and
In [12]: issubclass(types, Enum)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [12], line 1
----> 1 issubclass(types, Enum)
TypeError: issubclass() arg 1 must be a class
and
In [13]: issubclass(types, Optional[Enum])
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [13], line 1
----> 1 issubclass(types, Optional[Enum])
File /usr/lib/python3.10/typing.py:1264, in _UnionGenericAlias.__subclasscheck__(self, cls)
1262 def __subclasscheck__(self, cls):
1263 for arg in self.__args__:
-> 1264 if issubclass(cls, arg):
1265 return True
TypeError: issubclass() arg 1 must be a class
and
In [7]: IntEnum in types
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [7], line 1
----> 1 IntEnum in types
TypeError: argument of type 'types.UnionType' is not iterable
Why I needed this
I have several cases where I am importing data from csv files and creating objects of a class from each row. csv.DictReader() returns a dict[str, str], and I need to fix the types of the fields before attempting to create the object. However, some of the object fields are Optional[int], Optional[bool], Optional[EnumX] or Optional[ClassX]. I have several of those classes multi-inheriting my CSVImportable() class/interface, and I want to implement the logic once in the CSVImportable() class instead of writing roughly the same field-aware code in every subclass. This CSVImportable._field_type_updater() should:
correctly change the types at least for basic types and enums
gracefully skip Optional[ClassX] fields
Naturally I am thankful for better designs too :-)
|
[
"When you are dealing with a parameterized type (generic or special like typing.Optional), you can inspect it via get_args/get_origin.\nDoing that you'll see that T | S is implemented slightly differently than typing.Union[T, S]. The origin of the former is types.UnionType, while that of the latter is typing.Union. Unfortunately this means that to cover both variants, we need two distinct checks.\nfrom types import UnionType\nfrom typing import Union, get_origin\n\ndef is_union(t: object) -> bool:\n origin = get_origin(t)\n return origin is Union or origin is UnionType\n\nUsing typing.Optional just uses typing.Union under the hood, so the origin is the same. Here is a working demo:\nfrom enum import IntEnum\nfrom types import UnionType\nfrom typing import Optional, get_type_hints, get_args, get_origin, Union\n\n\nclass TestEnum(IntEnum):\n foo = 1\n bar = 2\n\n\nclass Foo:\n opt_enum1: TestEnum | None = None\n opt_enum2: Optional[TestEnum] = None\n opt_enum3: TestEnum\n opt4: str\n\n\ndef is_union(t: object) -> bool:\n origin = get_origin(t)\n return origin is Union or origin is UnionType\n\n\nif __name__ == \"__main__\":\n for name, type_ in get_type_hints(Foo).items():\n if type_ is TestEnum or is_union(type_) and TestEnum in get_args(type_):\n print(name, \"accepts TestEnum\")\n\nOutput:\n\nopt_enum1 accepts TestEnum\nopt_enum2 accepts TestEnum\nopt_enum3 accepts TestEnum\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.11",
"type_hinting",
"union_types"
] |
stackoverflow_0074544539_python_python_3.11_type_hinting_union_types.txt
|
Q:
Numpy Array Iteration, starting with third value
I need to iterate through a numpy array, but I need to start with the third value.
My exact problem is the following.
I get an array like this:
data([0.0000, 1], [0.0011, 2], [0.0036, 3], ....)
I need to subtract the 0.0011 from 0.0036 and all the following values, from the first column.
I wanted to do something like this:
data[:, 0] = data[:, 0] - data[1, 0]
But it needs to start with 0.0036 and not 0.0000. Could somebody maybe help me with this problem?
A:
You could enumerate from index 2 and beyond, something like this:
for i, n in enumerate(data):
    if i < 2:
        continue
    data[:, 0] = data[i, 0] - data[i+1, 0]
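Since numpy supports slice assignment, the subtraction described in the question can also be done without an explicit loop (a sketch, using only the three rows shown there; more rows would follow the same pattern):
import numpy as np

data = np.array([[0.0000, 1], [0.0011, 2], [0.0036, 3]])
data[2:, 0] -= data[1, 0]  # subtract the second value from the third value onward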
|
Numpy Array Iteration, starting with third value
|
I need to iterate through a numpy array, but I need to start with the third value.
My exact problem is the following.
I get an array like this:
data([0.0000, 1], [0.0011, 2], [0.0036, 3], ....)
I need to subtract the 0.0011 from 0.0036 and all the following values, from the first column.
I wanted to do something like this:
data[:, 0] = data[:, 0] - data[1, 0]
But it needs to start with 0.0036 and not 0.0000. Could somebody maybe help me with this problem?
|
[
"You could enumerate from index 2 and beyond, something like this:\nfor i, n in enumerate(data):\n if i < 2:\n continue\n data[:, 0] = data[i, 0] - data[i+1, 0]\n\n"
] |
[
0
] |
[] |
[] |
[
"iteration",
"loops",
"numpy",
"python"
] |
stackoverflow_0074545981_iteration_loops_numpy_python.txt
|
Q:
Similarity between each users always 0 while using the KNNBasic of the python Surprise package based on user
The actual situation is that I need to find users with similar interests based on the URL favorites of a large number of users. My data only has "like", without "dislike" or "ignore", and since the number of URLs is practically unlimited, it is also impossible to assume that all URLs without a "like" are "dislike" or "ignore". So, in this case, how should I convert the raw data to a Surprise Dataset? Or is this data impossible to use with algorithms such as KNN for collaborative-filtering recommendations?
source data of favorite items per User:
s_data = [
[
"user1",
[
"item1",
"item2",
"item3",
"item4",
"item5",
"item6"
]
],
[
"user2",
[
"item3",
"item4",
"item5",
"item6"
]
],
[
"user3",
[
"item1",
"item2",
"item3",
"item6"
]
],
[
"user4",
[
"item4",
"item5",
"item6",
"item7",
"item8",
"item9"
]
]
]
Because there is only one case in the original data that the user "likes" the item, I will assume that the user scored '1' for the item they liked.
Python Code:
import pandas as pd
from surprise import Dataset, KNNBasic, Reader
# prepare for data
df_pre = [[z[0], zz, 1] for z in s_data if z[1] is not None for zz in z[1]]
df = pd.DataFrame(df_pre)
reader = Reader(rating_scale=(0, 1))
data = Dataset.load_from_df(df, reader)
trainset = data.build_full_trainset()
# training
sim_options = {'name': 'pearson', 'user_based': True}
algo = KNNBasic(sim_options=sim_options)
algo.fit(trainset)
# calc similarity
inner_id = algo.trainset.to_inner_uid(ruid='user1')
all_instances = algo.trainset.all_users
rs = [(x, algo.sim[inner_id][x]) for x in all_instances() if x != inner_id]
sorted_rs = sorted(rs, key=lambda x: x[1], reverse=True)
print(sorted_rs)
result:
[(1, 0.0), (2, 0.0), (3, 0.0)]
the similarity between each pair of users:
raw data in tabular form:
As shown above, the result obtained by the program is that the correlation between all users is 0. If I change to cosine or msd, the result is the same. If it is replaced by pearson_baseline, it raises "ZeroDivisionError: float division".
I want to know how to use KNN to find users whose behavior is similar to a given user's, with data as shown above. Thanks a lot.
A:
You need to include information about items that users do not like so that you have both 0s and 1s in your dataset. The data should look like this (just screenshotting the top part here):
I got this dataframe with this code:
users_and_items = {e[0]:e[1] for e in s_data}
users = sorted(list(users_and_items.keys()))
items = sorted(list(set([item for item_list in users_and_items.values() for item in item_list])))
df_pre = [(user, item, 1 if item in users_and_items[user] else 0) for user in users for item in items]
df = pd.DataFrame(df_pre)
Now running your code with the new df:
import pandas as pd
from surprise import Dataset, KNNBasic, Reader
# prepare for data
reader = Reader(rating_scale=(0, 1))
data = Dataset.load_from_df(df, reader)
trainset = data.build_full_trainset()
# trainning
sim_options = {'name': 'pearson', 'user_based': True}
algo = KNNBasic(sim_options=sim_options)
algo.fit(trainset)
# calc similarity
inner_id = algo.trainset.to_inner_uid(ruid='user1')
all_instances = algo.trainset.all_users
rs = [(x, algo.sim[inner_id][x]) for x in all_instances() if x != inner_id]
print(rs)
Gives:
Computing the pearson similarity matrix...
Done computing similarity matrix.
[(1, 0.6324555320336759), (2, 0.6324555320336759), (3, -0.5)]
Which I believe is more like what you expected to see.
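To report raw user ids instead of Surprise's inner ids, the Trainset can translate back (a short sketch; to_raw_uid is part of the Trainset API, and the exact values depend on your run):
readable = [(algo.trainset.to_raw_uid(iid), sim) for iid, sim in rs]
print(readable)  # e.g. [('user2', 0.63...), ('user3', 0.63...), ('user4', -0.5)]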
|
Similarity between each users always 0 while using the KNNBasic of the python Surprise package based on user
|
The actual situation is that I need to find users with similar interests based on the URL favorites of a large number of users. My data only has "like", without "dislike" or "ignore", and since the number of URLs is practically unlimited, it is also impossible to assume that all URLs without a "like" are "dislike" or "ignore". So, in this case, how should I convert the raw data to a Surprise Dataset? Or is this data impossible to use with algorithms such as KNN for collaborative-filtering recommendations?
source data of favorite items per User:
s_data = [
[
"user1",
[
"item1",
"item2",
"item3",
"item4",
"item5",
"item6"
]
],
[
"user2",
[
"item3",
"item4",
"item5",
"item6"
]
],
[
"user3",
[
"item1",
"item2",
"item3",
"item6"
]
],
[
"user4",
[
"item4",
"item5",
"item6",
"item7",
"item8",
"item9"
]
]
]
Because there is only one case in the original data that the user "likes" the item, I will assume that the user scored '1' for the item they liked.
Python Code:
import pandas as pd
from surprise import Dataset, KNNBasic, Reader
# prepare for data
df_pre = [[z[0], zz, 1] for z in s_data if z[1] is not None for zz in z[1]]
df = pd.DataFrame(df_pre)
reader = Reader(rating_scale=(0, 1))
data = Dataset.load_from_df(df, reader)
trainset = data.build_full_trainset()
# training
sim_options = {'name': 'pearson', 'user_based': True}
algo = KNNBasic(sim_options=sim_options)
algo.fit(trainset)
# calc similarity
inner_id = algo.trainset.to_inner_uid(ruid='user1')
all_instances = algo.trainset.all_users
rs = [(x, algo.sim[inner_id][x]) for x in all_instances() if x != inner_id]
sorted_rs = sorted(rs, key=lambda x: x[1], reverse=True)
print(sorted_rs)
result:
[(1, 0.0), (2, 0.0), (3, 0.0)]
the similarity between each users:
raw data in tabular form:
As shown above, the result obtained by the program is that the correlation between all people is 0. If I change to cosine, msd, the result is the same. If it is replaced by pearson_baseline, it will prompt "ZeroDivisionError: float division".
I want to know how to use KNN to find similar behavior users of a certain user with data as shown above. Thanks a lot.
|
[
"You need to include information about items that users do not like so that you have both 0s and 1s in your dataset. The data should look like this (just screenshotting the top part here):\n\nI got this dataframe with this code:\nusers_and_items = {e[0]:e[1] for e in s_data}\nusers = sorted(list(users_and_items.keys()))\nitems = sorted(list(set([item for item_list in users_and_items.values() for item in item_list])))\ndf_pre = [(user, item, 1 if item in users_and_items[user] else 0) for user in users for item in items]\ndf = pd.DataFrame(df_pre)\n\nNow running your code with the new df:\nimport pandas as pd\nfrom surprise import Dataset, KNNBasic, Reader\n\n# prepare for data\nreader = Reader(rating_scale=(0, 1))\ndata = Dataset.load_from_df(df, reader)\ntrainset = data.build_full_trainset()\n\n\n# trainning\nsim_options = {'name': 'pearson', 'user_based': True}\nalgo = KNNBasic(sim_options=sim_options)\nalgo.fit(trainset)\n\n\n# calc similarity\ninner_id = algo.trainset.to_inner_uid(ruid='user1') \nall_instances = algo.trainset.all_users\nrs = [(x, algo.sim[inner_id][x]) for x in all_instances() if x != inner_id]\nprint(rs)\n\nGives:\nComputing the pearson similarity matrix...\nDone computing similarity matrix.\n[(1, 0.6324555320336759), (2, 0.6324555320336759), (3, -0.5)]\n\nWhich I believe is more like what you expected to see.\n"
] |
[
1
] |
[] |
[] |
[
"collaborative_filtering",
"cosine_similarity",
"knn",
"machine_learning",
"python"
] |
stackoverflow_0074545697_collaborative_filtering_cosine_similarity_knn_machine_learning_python.txt
|
Q:
How to override model field in Django for library model?
I need to override a library model field in Django. The model is part of that library and used there. The change I need is to add a unique constraint to one of the model fields. But this is not an abstract model, so as I understand it I can't inherit from it.
The question: is there a way to override a regular model field in Django without inheritance?
A:
Not that I'm aware of. I'd be looking at modifying (forking) the "library" model, although there might be issues if it's proprietary 3rd party code for which you do not have source.
The usual thing against concrete inheritance is that it causes a new DB table to be created and thereafter, every query involves a join on a OneToOne field between the two tables. However, I'm wondering whether this happens if you merely add a constraint to an existing field. Might be worth investigating in detail.
Likewise, can you inherit merely to subclass the .save() method, and apply the equivalent to a constraint therein? This isn't bullet-proof, because it's enforced only by the saving of Django objects and can be overridden both by django bulk_update and by anybody accessing the DB table without Django. And again, does it end up creating and joining a second DB table even if it contains no fields at all (except a primary key).
Sorry, more questions than a proper answer. Maybe somebody knows these answers?
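One concrete way to get the .save()-level check mentioned above without a second table is a proxy model, since proxies reuse the parent's table and add no OneToOne join (a sketch; LibraryModel and field are hypothetical stand-ins for the third-party model and the column to constrain):
from django.core.exceptions import ValidationError
from library.models import LibraryModel  # hypothetical third-party model

class ConstrainedModel(LibraryModel):
    class Meta:
        proxy = True  # no new table, no join

    def save(self, *args, **kwargs):
        # application-level uniqueness check on the hypothetical `field`
        clash = type(self).objects.filter(field=self.field)
        if self.pk:
            clash = clash.exclude(pk=self.pk)
        if clash.exists():
            raise ValidationError("field must be unique")
        super().save(*args, **kwargs)
As noted above, this is enforced only on Django-side saves, not at the database level.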
|
How to override model field in Django for library model?
|
I need to override a library model field in Django. The model is part of that library and used there. The change I need is to add a unique constraint to one of the model fields. But this is not an abstract model, so as I understand it I can't inherit from it.
The question: is there a way to override a regular model field in Django without inheritance?
|
[
"Not that I'm aware of. I'd be looking at modifying (forking) the \"library\" model, although there might be issues if it's proprietary 3rd party code for which you do not have source.\nThe usual thing against concrete inheritance is that it causes a new DB table to be created and thereafter, every query involves a join on a OneToOne field between the two tables. However, I'm wondering whether this happens if you merely add a constraint to an existing field. Might be worth investigating in detail.\nLikewise, can you inherit merely to subclass the .save() method, and apply the equivalent to a constraint therein? This isn't bullet-proof, because it's enforced only by the saving of Django objects and can be overridden both by django bulk_update and by anybody accessing the DB table without Django. And again, does it end up creating and joining a second DB table even if it contains no fields at all (except a primary key).\nSorry, more questions than a proper answer. Maybe somebody knows these answers?\n"
] |
[
0
] |
[] |
[] |
[
"django",
"inheritance",
"integration",
"overriding",
"python"
] |
stackoverflow_0074546024_django_inheritance_integration_overriding_python.txt
|
Q:
How to plot a colored gantt chart with plotly keeping the correct bar height
I have the following code to plot a gantt chart in plotly:
import datetime
import pandas
import plotly.express as px
task_list = [{
'Task': 'T-3', 'y': 0, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {
'Task': 'SNP-350', 'y': 1, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 2, 25), 'Status': 'Backlog'}, {
'Task': 'RD-6687', 'y': 2, 'Start': datetime.date(2022, 3, 18),
'Finish': datetime.date(2022, 4, 8), 'Status': 'Selected'}, {
'Task': 'RD-6643', 'y': 3, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {
'Task': 'SNP-337', 'y': 4, 'Start': datetime.date(2022, 5, 21),
'Finish': datetime.date(2022, 5, 23), 'Status': 'Backlog'}, {
'Task': 'SNP-352', 'y': 5, 'Start': datetime.date(2022, 2, 26),
'Finish': datetime.date(2022, 2, 28), 'Status': 'Clarification'}, {
'Task': 'SNP-239', 'y': 6, 'Start': datetime.date(2022, 5, 24),
'Finish': datetime.date(2022, 5, 25), 'Status': 'Selected'}]
df = pandas.DataFrame(task_list)
fig = px.timeline(df, x_start="Start", x_end="Finish", y="y",
                  # color="Status",
                  )
fig.show()
This gives me a gantt chart as expected:
However, if I now include the line that is commented out in the code above, i.e. color the bars in the gantt chart according to their status, it messes up the height of the different bars:
So the colors are shown as expected but it seems the height of the different bars is now not limited by the neighboring bar but only by the neighboring bar with the same color. How can I add the colors to the gantt chart but keep the height of the bars as it is without colors?
A:
- using color creates a trace for each Status, hence the change in heights
- put Status into hover_data instead
- built a colormap, then updated the trace to look up each bar's color from its Status value
import datetime
import pandas
import plotly.express as px
import numpy as np
# fmt: off
task_list = [{
'Task': 'T-3', 'y': 0, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {
'Task': 'SNP-350', 'y': 1, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 2, 25), 'Status': 'Backlog'}, {
'Task': 'RD-6687', 'y': 2, 'Start': datetime.date(2022, 3, 18),
'Finish': datetime.date(2022, 4, 8), 'Status': 'Selected'}, {
'Task': 'RD-6643', 'y': 3, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {
'Task': 'SNP-337', 'y': 4, 'Start': datetime.date(2022, 5, 21),
'Finish': datetime.date(2022, 5, 23), 'Status': 'Backlog'}, {
'Task': 'SNP-352', 'y': 5, 'Start': datetime.date(2022, 2, 26),
'Finish': datetime.date(2022, 2, 28), 'Status': 'Clarification'}, {
'Task': 'SNP-239', 'y': 6, 'Start': datetime.date(2022, 5, 24),
'Finish': datetime.date(2022, 5, 25), 'Status': 'Selected'}]
# fmt: on
df = pandas.DataFrame(task_list)
# put status into figure as well as customdata
fig = px.timeline(
    df,
    x_start="Start",
    x_end="Finish",
    y="y",
    # color="Status",
    hover_data=["Status"],
)
# build a colormap for status
colormap = {s:c for s,c in zip(df["Status"].unique(), px.colors.qualitative.Plotly)}
# use status in customdata to map color
fig.update_traces(marker_color=[colormap[s[0]] for s in fig.data[0].customdata])
A:
I've found the answer, but I spent more than 2 hours on it and found it accidentally.
# Format for your call/bar/trace
# Set a uniform bar height; prevents some bars from rendering taller or shorter than others.
fig.update_traces(width=0.6)
Absolutely unpredictable place for these settings. I'm shocked.
Before
After
Full code:
import datetime
import pandas
import plotly.express as px
task_list = [{
'Task': 'T-3', 'y': 0, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {
'Task': 'SNP-350', 'y': 1, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 2, 25), 'Status': 'Backlog'}, {
'Task': 'RD-6687', 'y': 2, 'Start': datetime.date(2022, 3, 18),
'Finish': datetime.date(2022, 4, 8), 'Status': 'Selected'}, {
'Task': 'RD-6643', 'y': 3, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {
'Task': 'SNP-337', 'y': 4, 'Start': datetime.date(2022, 5, 21),
'Finish': datetime.date(2022, 5, 23), 'Status': 'Backlog'}, {
'Task': 'SNP-352', 'y': 5, 'Start': datetime.date(2022, 2, 26),
'Finish': datetime.date(2022, 2, 28), 'Status': 'Clarification'}, {
'Task': 'SNP-239', 'y': 6, 'Start': datetime.date(2022, 5, 24),
'Finish': datetime.date(2022, 5, 25), 'Status': 'Selected'}]
df = pandas.DataFrame(task_list)
fig = px.timeline(df, x_start="Start", x_end="Finish", y="y",
color="Status",
)
# Set a uniform bar HEIGHT, preventing some bars from being drawn taller or shorter than others.
fig.update_traces(width=0.6)
fig.show()
|
How to plot a colored gantt chart with plotly keeping the correct bar height
|
I have the following code to plot a gantt chart in plotly:
import datetime
import pandas
import plotly.express as px
task_list = [{
'Task': 'T-3', 'y': 0, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {
'Task': 'SNP-350', 'y': 1, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 2, 25), 'Status': 'Backlog'}, {
'Task': 'RD-6687', 'y': 2, 'Start': datetime.date(2022, 3, 18),
'Finish': datetime.date(2022, 4, 8), 'Status': 'Selected'}, {
'Task': 'RD-6643', 'y': 3, 'Start': datetime.date(2022, 2, 24),
'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {
'Task': 'SNP-337', 'y': 4, 'Start': datetime.date(2022, 5, 21),
'Finish': datetime.date(2022, 5, 23), 'Status': 'Backlog'}, {
'Task': 'SNP-352', 'y': 5, 'Start': datetime.date(2022, 2, 26),
'Finish': datetime.date(2022, 2, 28), 'Status': 'Clarification'}, {
'Task': 'SNP-239', 'y': 6, 'Start': datetime.date(2022, 5, 24),
'Finish': datetime.date(2022, 5, 25), 'Status': 'Selected'}]
df = pandas.DataFrame(task_list)
fig = px.timeline(df, x_start="Start", x_end="Finish", y="y",
# color="Status",
)
fig.show()
This gives me a gantt chart as expected:
However, if I now include the line that is commented out in the code above, i.e. color the bars in the gantt chart according to their status, it messes up the height of the different bars:
So the colors are shown as expected, but it seems the height of each bar is now limited not by its neighboring bar, but only by the nearest bar of the same color. How can I add the colors to the gantt chart but keep the bar heights as they are without colors?
|
[
"\nusing color creates a trace for each Status, hence change in heights\nhave put Status into hover_data\nbuilt colormap and then updated trace to use value of Status to lookup color\n\nimport datetime\nimport pandas\nimport plotly.express as px\nimport numpy as np\n\n# fmt: off\ntask_list = [{\n 'Task': 'T-3', 'y': 0, 'Start': datetime.date(2022, 2, 24),\n 'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {\n 'Task': 'SNP-350', 'y': 1, 'Start': datetime.date(2022, 2, 24),\n 'Finish': datetime.date(2022, 2, 25), 'Status': 'Backlog'}, {\n 'Task': 'RD-6687', 'y': 2, 'Start': datetime.date(2022, 3, 18),\n 'Finish': datetime.date(2022, 4, 8), 'Status': 'Selected'}, {\n 'Task': 'RD-6643', 'y': 3, 'Start': datetime.date(2022, 2, 24),\n 'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {\n 'Task': 'SNP-337', 'y': 4, 'Start': datetime.date(2022, 5, 21),\n 'Finish': datetime.date(2022, 5, 23), 'Status': 'Backlog'}, {\n 'Task': 'SNP-352', 'y': 5, 'Start': datetime.date(2022, 2, 26),\n 'Finish': datetime.date(2022, 2, 28), 'Status': 'Clarification'}, {\n 'Task': 'SNP-239', 'y': 6, 'Start': datetime.date(2022, 5, 24),\n 'Finish': datetime.date(2022, 5, 25), 'Status': 'Selected'}]\n# fmt: on\ndf = pandas.DataFrame(task_list)\n\n# put status into figure as well as customdata\nfig = px.timeline(\n df,\n x_start=\"Start\",\n x_end=\"Finish\",\n y=\"y\",\n # color=\"Status\",\n hover_data=[\"Status\"],\n)\n\n# build a colormap for status \ncolormap = {s:c for s,c in zip(df[\"Status\"].unique(), px.colors.qualitative.Plotly)}\n# use status in customdata to map color\nfig.update_traces(marker_color=[colormap[s[0]] for s in fig.data[0].customdata])\n\n\n",
"I've found the answer, but spent more when 2 hours and I found this accidentally.\n# Format for your call/bar/trace\n# Set/align bar HEIGHT to one. Prevent to make some of bar bigger or lower. \nfig.update_traces( width=0.6 )\n\nAbsolutely unpredictable place for these settings. I'm shocked.\nBefore\nAfter\nFull code:\nimport datetime\nimport pandas\nimport plotly.express as px\n\ntask_list = [{\n 'Task': 'T-3', 'y': 0, 'Start': datetime.date(2022, 2, 24),\n 'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {\n 'Task': 'SNP-350', 'y': 1, 'Start': datetime.date(2022, 2, 24),\n 'Finish': datetime.date(2022, 2, 25), 'Status': 'Backlog'}, {\n 'Task': 'RD-6687', 'y': 2, 'Start': datetime.date(2022, 3, 18),\n 'Finish': datetime.date(2022, 4, 8), 'Status': 'Selected'}, {\n 'Task': 'RD-6643', 'y': 3, 'Start': datetime.date(2022, 2, 24),\n 'Finish': datetime.date(2022, 3, 17), 'Status': 'Scheduled'}, {\n 'Task': 'SNP-337', 'y': 4, 'Start': datetime.date(2022, 5, 21),\n 'Finish': datetime.date(2022, 5, 23), 'Status': 'Backlog'}, {\n 'Task': 'SNP-352', 'y': 5, 'Start': datetime.date(2022, 2, 26),\n 'Finish': datetime.date(2022, 2, 28), 'Status': 'Clarification'}, {\n 'Task': 'SNP-239', 'y': 6, 'Start': datetime.date(2022, 5, 24),\n 'Finish': datetime.date(2022, 5, 25), 'Status': 'Selected'}]\n\ndf = pandas.DataFrame(task_list)\n\nfig = px.timeline(df, x_start=\"Start\", x_end=\"Finish\", y=\"y\",\n color=\"Status\",\n )\n# Set/align bar HEIGHT to one. Prevent to make some of bar bigger or lower. \nfig.update_traces( width=0.6 )\n\nfig.show()\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"gantt_chart",
"plotly",
"python"
] |
stackoverflow_0071254131_gantt_chart_plotly_python.txt
|
Q:
Python: Why is 0x01 an integer?
The following:
print(type(0x01))
Returns:
<class 'int'>
Whereas, the following:
print(0x01)
Returns
1
Now let's say we have:
x = "0x01"
How do I convert x such that it returns 1 when printed?
Thank you!
A:
You would need to convert using base 16 as suggested by @Joran Beasley
x = "0x01"
print(x)
x = int(x, 16)
print(x)
Returns:
0x01
1
A:
Actually, 0x01 is in hexadecimal format, which is base 16, so to convert the string to an integer you need to use the int() method with base 16, as shown below
x ="0x01"
x = int(x, 16)
print(x)
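For the reverse direction, hex() or an f-string format spec turns the integer back into a hex string:
x = int("0x01", 16)
print(x)             # 1
print(hex(x))        # 0x1
print(f"0x{x:02x}")  # 0x01  (zero-padded back to two digits)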
|
Python: Why is 0x01 an integer?
|
The following:
print(type(0x01))
Returns:
<class 'int'>
Whereas, the following:
print(0x01)
Returns
1
Now let's say we have:
x = "0x01"
How do I convert x such that it returns 1 when printed?
Thank you!
|
[
"You would need to convert using base 16 as suggested by @Joran Beasley\nx = \"0x01\"\nprint(x)\nx = int(x, 16)\nprint(x)\n\nReturns:\n0x01\n1\n\n",
"Actually 0x01 is an hexadecimal format which is base16 so to convert it to decimal format you need to use int() method as shown below\nx =\"0x01\"\nx = int(x, 16)\nprint(x)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"integer",
"python",
"windows"
] |
stackoverflow_0074524902_integer_python_windows.txt
|
Q:
Python List count() using a class and text file
I am a beginner at python and have a question.
I have 2 text files:
The first one contains a schedule for programs:
------------------------------------------------------------------
Channel 1
16.00-17.45 Matinee: The kiss on the cross
17.45-17.50 The stock market today
------------------------------------------------------------------
Channel 2
8.30-9.00 The mosquito
9.00-10.00 Tip
17.50-18.20 In the company of dead masters
------------------------------------------------------------------
Channel 3
15.35-17.05 The weaker sex
17.05-18.00 The Onedine line
18.00-18.30 Children's trio: Dastardly & Mutley
I have created a class and managed on my own to create instances that look like this:
print(program_instances)
returns for example:
Channel 1, start:16.00 end:17.45 name:Matinee: The kiss on the cross
Channel 1, start:17.45 end:17.50 name:The stock market today
...
Then I have another text file with "viewer data":
File which contains data for one day collected from the TV sets included in the survey.
When the TV is switched on, it registers the time and tuned channel at fixed times every
day. Data from different devices is in the same file, but separated by dashed lines.
Format: time/channel
=================
19.37/2
19.52/2
21.07/1
21.22/1
21.37/1
-------
16.22/4
16.37/4
16.52/4
17.07/4
17.22/4
17.37/4
17.52/4
19.37/2
19.52/2
...
from this file, I want to calculate:
How many times the TVs were switched on to that specific channel.
The percentage of viewers from the total amount of data.
The total amount of TV’s separated by dashed lines
ex. I want to be able to print a top 10 list:
--------------------- top 10 -------------------
1. program 1: 57 times (60%)
2. program 2: 47 times(49%)
3. program 3: 34 times (36%)
...
Data was collected from #number of TV’s
I'm having trouble deciding what to do when reading the second file and doing the calculations.
Could I use a list in my class to add/ append a '*' symbol each time a channel has a viewer? Then use the count() function to count the number of occurrences of the symbol for each list?
ex:
viewers = ['*','*','*','*','*','*']
My class looks like this:
def __init__(self, channel, start, end, name, viewers, percentage):
    self.channel = channel
    self.start = start
    self.end = end
    self.name = name
    self.viewers = viewers  # maybe use a list here?
    self.percentage = percentage
All help is much appreciated!
A:
You could create a list that records the channel number each time the channel is switched on (the list would look like [1, 2, 1, 1, 4, 4, ...]), then use the list.count(channel_number) function to return the number of times for each channel. So you would create a function that builds that list from the file and does the other calculations, as in the sketch below.
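A minimal sketch of that idea, assuming the exact sample format shown in the question (the filename viewers.txt is a placeholder):
# one channel number per registration, e.g. [2, 2, 1, 1, 4, ...]
channel_hits = []
n_tvs = 1          # devices are separated by dashed lines

with open("viewers.txt") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("="):
            continue            # skip blanks and the header rule
        if line.startswith("-"):
            n_tvs += 1          # dashed line: next TV set
            continue
        time_str, channel = line.split("/")
        channel_hits.append(int(channel))

total = len(channel_hits)
for ch in sorted(set(channel_hits), key=channel_hits.count, reverse=True)[:10]:
    times = channel_hits.count(ch)
    print(f"channel {ch}: {times} times ({100 * times / total:.0f}%)")
print(f"Data was collected from {n_tvs} TVs")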
|
Python List count() using a class and text file
|
I am a beginner at python and have a question.
I have 2 text files:
The first one contains a schedule for programs:
------------------------------------------------------------------
Channel 1
16.00-17.45 Matinee: The kiss on the cross
17.45-17.50 The stock market today
------------------------------------------------------------------
Channel 2
8.30-9.00 The mosquito
9.00-10.00 Tip
17.50-18.20 In the company of dead masters
------------------------------------------------------------------
Channel 3
15.35-17.05 The weaker sex
17.05-18.00 The Onedine line
18.00-18.30 Children's trio: Dastardly & Mutley
I have created a class and managed on my own to create instances that look like this:
print(program_instances)
returns for example:
Channel 1, start:16.00 end:17.45 name:Matinee: The kiss on the cross
Channel 1, start:17.45 end:17.50 name:The stock market today
...
Then I have another text file with "viewer data":
File which contains data for one day collected from the TV sets included in the survey.
When the TV is switched on, it registers the time and tuned channel at fixed times every
day. Data from different devices is in the same file, but separated by dashed lines.
Format: time/channel
=================
19.37/2
19.52/2
21.07/1
21.22/1
21.37/1
-------
16.22/4
16.37/4
16.52/4
17.07/4
17.22/4
17.37/4
17.52/4
19.37/2
19.52/2
...
from this file, I want to calculate:
How many times the TVs were switched on to that specific channel.
The percentage of viewers from the total amount of data.
The total amount of TV’s separated by dashed lines
ex. I want to be able to print a top 10 list:
--------------------- top 10 -------------------
1. program 1: 57 times (60%)
2. program 2: 47 times(49%)
3. program 3: 34 times (36%)
...
Data was collected from #number of TV’s
I'm having trouble deciding what to do when reading the second file and doing the calculations.
Could I use a list in my class to add/ append a '*' symbol each time a channel has a viewer? Then use the count() function to count the number of occurrences of the symbol for each list?
ex:
viewers = ['*','*','*','*','*','*']
My class looks like this:
def __init__(self, channel, start, end, name, viewers, percentage):
    self.channel = channel
    self.start = start
    self.end = end
    self.name = name
    self.viewers = viewers  # maybe use a list here?
    self.percentage = percentage
All help is much appreciated!
|
[
"You could create a list that accepts that channel number for each time the channel is switched on(the list looks like:[1,2,1,1,4,4....] then use the list.count(the channel num) function to return the number of times for each channel. so you will create a function that returns the value from the list and does other calculations\n"
] |
[
1
] |
[] |
[] |
[
"class",
"count",
"list",
"python",
"ranking"
] |
stackoverflow_0074546163_class_count_list_python_ranking.txt
|
Q:
Is there a faster way of reading two files line by line, then adding one line at the end of the other?
so here's my problem:
I have two CSV files with each files having around 500 000 lines.
File 1 looks like this:
ID|NAME|OTHER INFO
353253453|LAURENT|STUFF 1
563636345|MARK|OTHERS
786970908|GEORGES|THINGS
File 2 looks like this:
LOCATION;ID_PERSON;PHONE
CA;786970908;555555
NY;353253453;555666
So what I have to do is look for the lines where there are the same IDs, and add the line from file 2 to the end of the corresponding line from file 1 in a new file, and if there's no corresponding ID, add empty columns, like this:
ID;NAME;OTHER INFO;LOCATION;ID_PERSON;PHONE
353253453;LAURENT;STUFF 1;NY;353253453;555666
563636345;MARK;OTHERS;;;
786970908;GEORGES;THINGS;CA;786970908;555555
File 1 is the primary one if that makes sense.
The thing is I have found a solution but it takes way too long, since for each line of file 1 I loop through file 2.
Here's my code:
input1 = open(filename1, 'r', errors='ignore')
input2 = open(filename2, 'r', errors='ignore')
output = open('result.csv', 'w', newline='')
file2 = input2.readlines()   # lines of file 2, in a list so matched lines can be removed
nbr_col_2 = 3                # number of columns in file 2
for line1 in input1:
    line_splitted = line1.split("|")
    id_1 = line_splitted[0]
    index = 0
    find = False
    for line2 in file2:
        file2_splitted = line2.split(";")
        if id_1 in file2_splitted[1]:
            output.write((";").join(line1.split("|")) + line2)
            find = True
            file2.remove(line2)
            break
        index += 1
    if index == len(file2) and not find:
        output.write((";").join(line1.split("|")))
        for j in range(nbr_col_2):
            output.write(";")
        output.write("\n")
A:
As Alex pointed out in his comment, you can merge both files using pandas.
import pandas as pd
# Load files
file_1 = pd.read_csv("file_1.csv", index_col=0, delimiter="|")
file_2 = pd.read_csv("file_2.csv", index_col=1, delimiter=";")
# Rename PERSON_ID as ID
file_2.index.name = "ID"
# Merge files
file_3 = file_1.merge(file_2, how="left", on="ID")
file_3.to_csv("file_3.csv")
Using your examples file_3.csv looks like this:
ID,NAME,OTHER INFO,LOCATION,PHONE
353253453,LAURENT,STUFF 1,NY,555666.0
563636345,MARK,OTHERS,,
786970908,GEORGES,THINGS,CA,555555.0
Extra
By the way, if you are not familiar with pandas, this is a great introductory course: Learn Pandas Tutorials
A:
You can create indexing to prevent iterating over file2 each time.
Do this by creating a dictionary from file2 and retrieving each related item of it by calling its index.
file1 = open(filename1, 'r', errors='ignore')
file2 = open(filename2, 'r', errors='ignore')
output = open('result.csv', 'w', newline='')
indexed_data = {}
for line2 in file2.readlines()[1:]:
    data2 = line2.rstrip('\n').split(";")
    indexed_data[data2[1]] = {
        'Location': data2[0],
        'Phone': data2[2],
    }

output.write('ID;NAME;OTHER INFO;LOCATION;ID_PERSON;PHONE\n')

for line1 in file1.readlines()[1:]:
    data1 = line1.rstrip('\n').split("|")
    if data1[0] in indexed_data:
        output.write(f'{data1[0]};{data1[1]};{data1[2]};{indexed_data[data1[0]]["Location"]};{data1[0]};{indexed_data[data1[0]]["Phone"]}\n')
    else:
        output.write(f'{data1[0]};{data1[1]};{data1[2]};;;\n')
A:
First read file2 line by line in order to build the lookup dict.
Then read file1 line by line, lookup in the dict for the ID key, build the output line and write to the output file.
Should be quite efficient with complexity = O(n).
Only the dict consumes a little bit of memory.
All the file processing is "stream-based".
with open("file2.txt") as f:
lookup = {}
f.readline() # skip header
while True:
line = f.readline().rstrip()
if not line:
break
fields = line.split(";")
lookup[fields[1]] = line
with open("file1.txt") as f, open("output.txt", "w") as out:
f.readline() # skip header
out.write("ID;NAME;OTHER INFO;LOCATION;ID_PERSON;PHONE\n")
while True:
line_in = f.readline().rstrip()
if not line_in:
break
fields = line_in.split("|")
line_out = ";".join(fields)
if found := lookup.get(fields[0]):
line_out += ";" + found
else:
line_out += ";;;"
out.write(line_out + "\n")
|
Is there a faster way of reading two files line by line, then adding one line at the end of the other?
|
so here's my problem:
I have two CSV files with each files having around 500 000 lines.
File 1 looks like this:
ID|NAME|OTHER INFO
353253453|LAURENT|STUFF 1
563636345|MARK|OTHERS
786970908|GEORGES|THINGS
File 2 looks like this:
LOCATION;ID_PERSON;PHONE
CA;786970908;555555
NY;353253453;555666
So what I have to do is look for the lines where there are the same IDs, and add the line from file 2 to the end of the corresponding line from file 1 in a new file, and if there's no corresponding ID, add empty columns, like this:
ID;NAME;OTHER INFO;LOCATION;ID_PERSON;PHONE
353253453;LAURENT;STUFF 1;NY;353253453;555666
563636345;MARK;OTHERS;;;
786970908;GEORGES;THINGS;CA;786970908;555555
File 1 is the primary one if that makes sense.
The thing is I have found a solution but it takes way too long, since for each line of file 1 I loop through file 2.
Here's my code:
input1 = open(filename1, 'r', errors='ignore')
input2 = open(filename2, 'r', errors='ignore')
output = open('result.csv', 'w', newline='')
file2 = input2.readlines()   # lines of file 2, in a list so matched lines can be removed
nbr_col_2 = 3                # number of columns in file 2
for line1 in input1:
    line_splitted = line1.split("|")
    id_1 = line_splitted[0]
    index = 0
    find = False
    for line2 in file2:
        file2_splitted = line2.split(";")
        if id_1 in file2_splitted[1]:
            output.write((";").join(line1.split("|")) + line2)
            find = True
            file2.remove(line2)
            break
        index += 1
    if index == len(file2) and not find:
        output.write((";").join(line1.split("|")))
        for j in range(nbr_col_2):
            output.write(";")
        output.write("\n")
|
[
"As Alex pointed out in his comment, you can merge both files using pandas.\nimport pandas as pd\n\n# Load files\nfile_1 = pd.read_csv(\"file_1.csv\", index_col=0, delimiter=\"|\")\nfile_2 = pd.read_csv(\"file_2.csv\", index_col=1, delimiter=\";\")\n\n# Rename PERSON_ID as ID\nfile_2.index.name = \"ID\"\n\n# Merge files\nfile_3 = file_1.merge(file_2, how=\"left\", on=\"ID\")\nfile_3.to_csv(\"file_3.csv\")\n\nUsing your examples file_3.csv looks like this:\nID,NAME,OTHER INFO,LOCATION,PHONE\n353253453,LAURENT,STUFF 1,NY,555666.0\n563636345,MARK,OTHERS,,\n786970908,GEORGES,THINGS,CA,555555.0\n\nExtra\nBy the way, if you are not familiar with pandas, this is a great introductory course: Learn Pandas Tutorials\n",
"You can create indexing to prevent iterating over file2 each time.\nDo this by creating a dictionary from file2 and retrieving each related item of it by calling its index.\nfile1 = open(filename1, 'r', errors='ignore')\nfile2 = open(filename2, 'r', errors='ignore')\noutput = open('result.csv', 'w', newline='')\n\nindexed_data = {}\nfor line2 in file2.readlines()[1:]:\n data2 = line2.rstrip('\\n').split(\";\")\n indexed_data[data2[1]] = {\n 'Location': data2[0],\n 'Phone': data2[2],\n }\n\noutput.write('ID;NAME;OTHER INFO;LOCATION;ID_PERSON;PHONE\\n')\n\nfor line1 in file1.readlines()[1:]:\n data1 = line1.rstrip('\\n').split(\"|\")\n if data1[0] in indexed_data:\n output.write(f'{data1[0]};{data1[1]};{data1[2]};{indexed_data[data1[0]][\"Location\"]};{data1[0]};{indexed_data[data1[0]][\"Phone\"]}\\n')\n else:\n output.write(f'{data1[0]};{data1[1]};{data1[2]};;;\\n')\n\n",
"First read file2 line by line in order to build the lookup dict.\nThen read file1 line by line, lookup in the dict for the ID key, build the output line and write to the output file.\nShould be quite efficient with complexity = O(n).\nOnly the dict consumes a little bit of memory.\nAll the file processing is \"stream-based\".\nwith open(\"file2.txt\") as f:\n lookup = {}\n f.readline() # skip header\n while True:\n line = f.readline().rstrip()\n if not line:\n break\n fields = line.split(\";\")\n lookup[fields[1]] = line\n\nwith open(\"file1.txt\") as f, open(\"output.txt\", \"w\") as out:\n f.readline() # skip header\n out.write(\"ID;NAME;OTHER INFO;LOCATION;ID_PERSON;PHONE\\n\")\n while True:\n line_in = f.readline().rstrip()\n if not line_in:\n break\n fields = line_in.split(\"|\")\n line_out = \";\".join(fields)\n if found := lookup.get(fields[0]):\n line_out += \";\" + found\n else:\n line_out += \";;;\"\n out.write(line_out + \"\\n\")\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"csv",
"python"
] |
stackoverflow_0074545816_csv_python.txt
|
Q:
Run python script when a file has been added to a folder?
I have a folder where files are continuously added, approximately every 3 seconds. I want to create a while loop that keeps running, checking, analysing, moving and deleting files in said folder.
So far my to do list is:
I need to make the while loop that keeps checking the folder for new files.
It then has to run an analysis (a python script in a separate directory) on the new files that are added.
Finally it needs to move the output from the analysis, a .txt file, into a new directory, whilst the original file the analysis was run on, is deleted.
This just needs to keep running, maybe having a keybreak to stop it if needed.
I'm a total beginner at python, and this is pretty beyond my skills with while loops. I can only do very simple ones.
I've tried various things, but I'm way too inexperienced to make it work properly. Still stuck at my 1st step, just making it check the folder. Steps 2 and 3 feel impossible, though I imagine it's not that difficult for people who know Python.
I apologize if this is the wrong place to post this question and if I've left out important information.
If anyone has any good resources/guides/simple step-by-step I'd appreciate it!
I've tried using various suggestions I've found, they're similar questions to mine, but struggling to apply them to my problem. This is the closest I've gotten.
# Folder where files are added
path_to_watch = "C:/Users/Projects/main/example"
before = dict([(f, None) for f in os.listdir(path_to_watch)])
while 1:
    after = dict([(f, None) for f in os.listdir(path_to_watch)])
    added = [f for f in after if not f in before]
    if added:
        print("Added: ", ", ".join(added))
        # I think the analysis.py script has to happen somewhere here, but I don't know how to use it.
        # Then it needs to take the analysis output .txt file and move it to a new folder and delete the original file
        # Break currently stops the loop after the first added file is detected. Otherwise, it keeps detecting the same
        # file indefinitely
        break
    else:
        # Return to checking folder, break stops this currently
        before = after
A:
Hope this will resolve your problem
import os
import shutil

# Folder where files are added
path_to_watch = "."

before = dict([(f, None) for f in os.listdir(path_to_watch)])
while 1:
    after = dict([(f, None) for f in os.listdir(path_to_watch)])

    added = [f for f in after if f not in before]
    if added:
        print("Added: ", ", ".join(added))
        mydir = os.getcwd()
        os.chdir(mydir + "/test_folder")        # switch to the analysis script's folder
        os.system('python test.py > test.log')  # run the analysis, redirecting its output
        shutil.move('test.log', mydir)          # move the output back to the original folder
        os.chdir(mydir)
        break  # remove this break to keep watching for further files
    else:
        # Return to checking folder; the break above stops the loop
        before = after
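For the continuous behaviour the question asks for (keep watching, run the analysis, move the output, delete the original), a polling sketch along the same lines could look like this; the paths, the location of analysis.py, the assumption that it writes <name>.txt next to its input, and the 3-second interval are all placeholders to adapt:
import os
import shutil
import subprocess
import time

path_to_watch = "C:/Users/Projects/main/example"
output_dir = "C:/Users/Projects/main/results"            # placeholder
analysis_script = "C:/Users/Projects/main/analysis.py"   # placeholder

before = set(os.listdir(path_to_watch))
try:
    while True:
        after = set(os.listdir(path_to_watch))
        for name in sorted(after - before):
            src = os.path.join(path_to_watch, name)
            # run the analysis script on the new file
            subprocess.run(["python", analysis_script, src], check=True)
            txt = os.path.splitext(src)[0] + ".txt"      # assumed output name
            if os.path.exists(txt):
                shutil.move(txt, output_dir)             # keep the analysis output
            os.remove(src)                               # delete the original file
        before = set(os.listdir(path_to_watch))          # re-read: files were just removed
        time.sleep(3)
except KeyboardInterrupt:
    print("Stopped watching.")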
|
Run python script when a file has been added to a folder?
|
I have a folder where files are continuously added, approximately every 3 seconds. I want to create a while loop that keeps running, checking, analysing, moving and deleting files in said folder.
So far my to do list is:
I need to make the while loop that keeps checking the folder for new files.
It then has to run an analysis (a python script in a separate directory) on the new files that are added.
Finally it needs to move the output from the analysis, a .txt file, into a new directory, whilst the original file the analysis was run on, is deleted.
This just needs to keep running, maybe having a keybreak to stop it if needed.
I'm a total beginner at python, and this is pretty beyond my skills with while loops. I can only do very simple ones.
I've tried various things, but I'm way too inexperienced to make it work properly. Still stuck at my 1st step, just making it check the folder. Steps 2 and 3 feel impossible, though I imagine it's not that difficult for people who know Python.
I apologize if this is the wrong place to post this question and if I've left out important information.
If anyone has any good resources/guides/simple step-by-step I'd appreciate it!
I've tried using various suggestions I've found, they're similar questions to mine, but struggling to apply them to my problem. This is the closest I've gotten.
# Folder where files are added
path_to_watch = "C:/Users/Projects/main/example"
before = dict([(f, None) for f in os.listdir(path_to_watch)])
while 1:
    after = dict([(f, None) for f in os.listdir(path_to_watch)])
    added = [f for f in after if not f in before]
    if added:
        print("Added: ", ", ".join(added))
        # I think the analysis.py script has to happen somewhere here, but I don't know how to use it.
        # Then it needs to take the analysis output .txt file and move it to a new folder and delete the original file
        # Break currently stops the loop after the first added file is detected. Otherwise, it keeps detecting the same
        # file indefinitely
        break
    else:
        # Return to checking folder, break stops this currently
        before = after
|
[
"Hope this will resolve your problem\nimport csv\nimport os\nimport shutil\nimport test_folder.test\n\n# Folder where files are added\npath_to_watch = \".\"\n\nbefore = dict([(f, None) for f in os.listdir(path_to_watch)])\nwhile 1:\n after = dict([(f, None) for f in os.listdir(path_to_watch)])\n\n added = [f for f in after if not f in before]\n if added:\n print(\"Added: \", \", \".join(added))\n mydir = os.getcwd() \n mydir_new = os.chdir(mydir+\"/test_folder\")\n os.system('python test.py > test.log')\n shutil.move('test.log',mydir)\n break\n else:\n # Return to checking folder, break stops this currently\n before = after\n\n"
] |
[
1
] |
[] |
[] |
[
"directory",
"python",
"while_loop",
"windows"
] |
stackoverflow_0074544783_directory_python_while_loop_windows.txt
|
Q:
Check if a python list contains numeric data
I am checking whether a list in python contains only numeric data. For simple ints and floats I can use the following code:
if all(isinstance(x, (int, float)) for x in lstA):
Is there any easy way to check, when another list is embedded in the first list, whether it also contains only numeric data?
A:
You can do a recursive check for all lists within the list, like so
def is_all_numeric(lst):
    for elem in lst:
        if isinstance(elem, list):
            if not is_all_numeric(elem):
                return False
        elif not isinstance(elem, (int, float)):
            return False
    return True
print(is_all_numeric([1,2,3]))
>>> True
print(is_all_numeric([1,2,'a']))
>>> False
print(is_all_numeric([1,2,[1,2,3]]))
>>> True
print(is_all_numeric([1,2,[1,2,'a']]))
>>> False
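One edge case worth knowing if it matters for your data: bool is a subclass of int, so isinstance(True, int) is True. A variant sketch using numbers.Number (which also accepts complex, Decimal, etc.) and rejecting booleans:
from numbers import Number

def is_all_numeric(lst):
    return all(
        is_all_numeric(x) if isinstance(x, list)
        else isinstance(x, Number) and not isinstance(x, bool)
        for x in lst
    )

print(is_all_numeric([1, 2.5, [3, 4]]))   # True
print(is_all_numeric([1, [True, 2]]))     # False -- booleans rejected here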
|
Check if a python list contains numeric data
|
I am checking whether a list in python contains only numeric data. For simple ints and floats I can use the following code:
if all(isinstance(x, (int, float)) for x in lstA):
Is there any easy way to check, when another list is embedded in the first list, whether it also contains only numeric data?
|
[
"You can do a recursive check for all lists within the list, like so\ndef is_all_numeric(lst):\n for elem in lst:\n if isinstance(elem, list):\n if not is_all_numeric(elem):\n return False\n elif not isinstance(elem, (int, float)):\n return False\n return True\n\nprint(is_all_numeric([1,2,3]))\n>>> True\n\nprint(is_all_numeric([1,2,'a']))\n>>> False\n\nprint(is_all_numeric([1,2,[1,2,3]]))\n>>> True\n\nprint(is_all_numeric([1,2,[1,2,'a']]))\n>>> False\n\n"
] |
[
3
] |
[
"I don't know if there is another way to do this, but you could just do a for loop for each item, and if any of those items is not a number, just set on bool to false:\nnumbers = [1,2,3,4,5,6,7,8]\n\nallListIsNumber = True\n\nfor i in numbers:\n if i.isnumeric() == False:\n allListIsNumber = False\n\nYou can either use isnumeric() or isinstance()\n"
] |
[
-1
] |
[
"iteration",
"list",
"python"
] |
stackoverflow_0074546288_iteration_list_python.txt
|
Q:
Efficiently mask an image with a label mask
I have an image that I read in with tifffile.imread and it is turned into a 3D matrix, with the first dimension representing the Y coordinate, the second the X and the third the channel of the image (these images are not RGB and so there can be an arbitrary number of channels).
Each of these images has a label mask which is a 2D array that indicates the position of objects in the image. In the label mask, pixels that have a value of 0 do not belong to any object, pixels that have a value of 1 belong to the first object, pixels that have a value of 2 belong to the second object and so on.
What I would like to calculate is, for each object and for each channel of the image, the mean, median, std, min and max of the channel. So, for example, I would like to know the mean, median, std, min and max values of the first channel for pixels in object 10.
I have written code to do this but it is very slow (shown below) and I wondered if people had a better way or knew a package(s) that might be helpful in making this faster/doing this more efficiently. (Here the word 'stain' means the same as channel)
sample = imread(input_img)
label_mask = np.load(input_mask)
n_stains = sample.shape[2]
n_labels = np.max(label_mask)
#Create empty dataframe to store intensity measurements
intensity_measurements = pd.DataFrame(columns = ['sample', 'label', 'stain', 'mean', 'median', 'std', 'min', 'max'])
for label in range(1, n_labels+1):
    for stain in range(n_stains):
        #Extract stain and label
        stain_label = sample[:,:,stain][label_mask == label]
        #Calculate intensity measurements
        mean = np.mean(stain_label)
        median = np.median(stain_label)
        std = np.std(stain_label)
        min = np.min(stain_label)
        max = np.max(stain_label)
        #Add intensity measurements to dataframe
        intensity_measurements = intensity_measurements.append({'sample' : args.input_img, 'label': label, 'stain': stain, 'mean': mean, 'median': median, 'std': std, 'min': min, 'max': max}, ignore_index=True)
A:
Your code is slow because you iterate over the whole image for each of the labels. This is an operation of O(n k), for n pixels and k labels. You could instead iterate over the image, and for each pixel examine the label, then update the measurements for that label with the pixel values. This is an operation of O(n). You'd keep an accumulator for each label and each measurement (standard deviation requires accumulating the square sum as well as the sum, but the sum you're already accumulating for the mean). The only measure that you cannot compute this way is the median, as it requires a partial sort of the full list of values.
This would obviously be a much cheaper operation, except for the fact that Python is a slow, interpreted language, and looping over each pixel in Python leads to a very slow program. In a compiled language you would implement it this way though.
See this answer for a way to implement this efficiently using NumPy functionality.
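A hedged sketch of that single-pass idea with plain NumPy (not necessarily what the linked answer does): np.bincount accumulates per-label counts, sums and squared sums, and np.minimum.at / np.maximum.at accumulate min and max:
import numpy as np

def per_label_stats(channel, label_mask, n_labels):
    # channel: 2D array of one stain; label_mask: 2D int mask (0 = background)
    lab = label_mask.ravel()
    val = channel.ravel().astype(np.float64)
    counts = np.bincount(lab, minlength=n_labels + 1)
    sums = np.bincount(lab, weights=val, minlength=n_labels + 1)
    sqsums = np.bincount(lab, weights=val * val, minlength=n_labels + 1)
    mins = np.full(n_labels + 1, np.inf)
    np.minimum.at(mins, lab, val)
    maxs = np.full(n_labels + 1, -np.inf)
    np.maximum.at(maxs, lab, val)
    mean = sums[1:] / counts[1:]                      # drop label 0 (background)
    std = np.sqrt(sqsums[1:] / counts[1:] - mean**2)  # labels with no pixels give NaN
    return mean, std, mins[1:], maxs[1:]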
Using the DIPlib library (disclosure: I'm an author) you can apply the operation as follows (the median is not implemented). Other image processing libraries have similar functionality, though might not be as flexible with the number of channels.
import diplib as dip
import imageio
# sample = imread(input_img)
# label_mask = np.load(input_mask)
# Alternative random data so that I can run the code for testing:
sample = imageio.imread("../images/trui_c.tif")
label_mask = np.random.randint(0, 20, sample.shape[:2], dtype=np.uint32)
sample = dip.Image(sample, tensor_axis=2)
msr = dip.MeasurementTool.Measure(label_mask, sample, features=["Mean", "StandardDeviation", "MinVal", "MaxVal"])
print(msr)
This prints out:
| Mean | StandardDeviation | MinVal | MaxVal |
-- | ------------------------------------ | ------------------------------------ | ------------------------------------ | ------------------------------------ |
| chan0 | chan1 | chan2 | chan0 | chan1 | chan2 | chan0 | chan1 | chan2 | chan0 | chan1 | chan2 |
| | | | | | | | | | | | |
-- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- |
1 | 82.26 | 41.30 | 24.77 | 57.77 | 52.16 | 48.22 | 5.000 | 3.000 | 1.000 | 255.0 | 255.0 | 255.0 |
2 | 82.02 | 41.18 | 24.85 | 52.16 | 48.22 | 48.33 | 3.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |
3 | 82.39 | 41.17 | 24.93 | 48.22 | 48.33 | 48.48 | 1.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |
4 | 82.14 | 41.62 | 25.03 | 48.33 | 48.48 | 48.47 | 1.000 | 1.000 | 0.000 | 255.0 | 255.0 | 255.0 |
5 | 82.89 | 41.45 | 24.94 | 48.48 | 48.47 | 48.54 | 1.000 | 0.000 | 1.000 | 255.0 | 255.0 | 255.0 |
6 | 82.83 | 41.60 | 25.26 | 48.47 | 48.54 | 48.65 | 0.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |
7 | 81.95 | 41.77 | 25.51 | 48.54 | 48.65 | 48.22 | 1.000 | 1.000 | 2.000 | 255.0 | 255.0 | 255.0 |
8 | 82.93 | 41.36 | 25.19 | 48.65 | 48.22 | 48.11 | 1.000 | 2.000 | 1.000 | 255.0 | 255.0 | 255.0 |
9 | 81.88 | 41.70 | 25.07 | 48.22 | 48.11 | 47.69 | 2.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |
10 | 81.46 | 41.40 | 24.82 | 48.11 | 47.69 | 48.32 | 1.000 | 1.000 | 2.000 | 255.0 | 255.0 | 255.0 |
11 | 81.33 | 40.98 | 24.76 | 47.69 | 48.32 | 48.85 | 1.000 | 2.000 | 1.000 | 255.0 | 255.0 | 255.0 |
12 | 82.30 | 41.55 | 25.12 | 48.32 | 48.85 | 48.75 | 2.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |
13 | 82.43 | 41.50 | 25.15 | 48.85 | 48.75 | 48.89 | 1.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |
14 | 83.29 | 42.11 | 25.65 | 48.75 | 48.89 | 48.32 | 1.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |
15 | 83.20 | 41.64 | 25.28 | 48.89 | 48.32 | 48.13 | 1.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |
16 | 81.51 | 40.92 | 24.76 | 48.32 | 48.13 | 48.73 | 1.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |
17 | 81.81 | 41.31 | 24.71 | 48.13 | 48.73 | 48.49 | 1.000 | 1.000 | 0.000 | 255.0 | 255.0 | 255.0 |
18 | 83.58 | 41.85 | 25.25 | 48.73 | 48.49 | 32.20 | 1.000 | 0.000 | 1.000 | 255.0 | 255.0 | 212.0 |
19 | 82.12 | 41.24 | 25.06 | 48.49 | 32.20 | 24.44 | 0.000 | 1.000 | 1.000 | 255.0 | 212.0 | 145.0 |
I don't have an efficient solution for the median. You'd have to split the image into a separate array for each label, then run the median over that. This would be equally efficient as the above, but would use up much more memory.
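If you do need the median, one NumPy-only sketch of that split: sort the pixel values by label once, locate each label's run with searchsorted, and take the median of each slice (labels with no pixels would produce NaN warnings here):
import numpy as np

def per_label_median(channel, label_mask, n_labels):
    lab = label_mask.ravel()
    order = np.argsort(lab, kind="stable")   # sort pixel values by label, once
    sorted_val = channel.ravel()[order]
    # start of each label's run, for labels 1..n_labels (+1 acts as end sentinel)
    bounds = np.searchsorted(lab[order], np.arange(1, n_labels + 2))
    return [np.median(sorted_val[bounds[i]:bounds[i + 1]])
            for i in range(n_labels)]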
A:
The proposed method below utilizes matrix multiplications in order to speed up the calculations. It is built on two crucial Numpy tools:
https://numpy.org/doc/stable/reference/generated/numpy.einsum.html?highlight=einsum#numpy.einsum
Evaluates the Einstein summation convention on the operands.
https://numpy.org/doc/stable/reference/maskedarray.html
Masked arrays are arrays that may have missing or invalid entries. The numpy.ma module provides a nearly work-alike replacement for numpy that supports data arrays with masks.
masked array update:
The initial code was updated to use masked arrays after https://stackoverflow.com/users/7328782/cris-luengo spotted a mistake in my initial code: it replaced all the non-selected pixels for a given label with a 0 value and included all those zeros in the measurements.
Now we mask the non-selected pixels before the measurement calculations.
import numpy as np
import numpy.ma as ma
import pandas as pd
from tifffile import imread
sample = imread(input_img)
label_mask = np.load(input_mask)
n_labels = np.max(label_mask)
# let's create boolean label masks for each label
# producing 3D matrix where 1st axis is label
label_mask_unraveled = np.equal.outer(label_mask, np.arange(1, n_labels +1))
# now we can apply these boolean label masks simultaneously
# to all the sample channels with help of 'einsum' producing 4D matrix,
# where the 1st axis is channel/stain and the 2nd axis is label
sample_label_masks_applied = np.einsum("ijk,ijl->klij", sample, label_mask_unraveled)
# in order to exclude the non-selected pixels
# from measurement calculations, we mask the pixels first
non_selected_pixels_mask = np.moveaxis(~label_mask_unraveled, -1, 0)[np.newaxis, :, :, :]
non_selected_pixels_mask = np.repeat(non_selected_pixels_mask, sample.shape[2], axis=0)
sample_label_masks_applied = ma.masked_array(sample_label_masks_applied, non_selected_pixels_mask)
# intensity measurement calculations
# embedded into pd.DataFrame initialization
intensity_measurements = pd.DataFrame(
    {
        "sample": args.input_img,
        "label": sample.shape[2] * list(range(1, n_labels + 1)),
        # stain varies slowest after flatten(), so repeat rather than tile
        "stain": list(np.repeat(np.arange(sample.shape[2]), n_labels)),
        "mean": ma.mean(sample_label_masks_applied, axis=(2, 3)).flatten(),
        "median": ma.median(sample_label_masks_applied, axis=(2, 3)).flatten(),
        "std": ma.std(sample_label_masks_applied, axis=(2, 3)).flatten(),
        "min": ma.min(sample_label_masks_applied, axis=(2, 3)).flatten(),
        "max": ma.max(sample_label_masks_applied, axis=(2, 3)).flatten()
    }
)
A:
I've found a good solution that works for me using scikit image, specifically the regionprops functions.
import numpy as np
import pandas as pd
from skimage.measure import regionprops, regionprops_table
np.random.seed(42)
Here is a random "image" and label mask of that image
img = np.random.randint(0, 255, size=(100, 100, 3))
mask = np.zeros((100, 100)).astype(np.uint8)
mask[20:50, 20:50] = 1
mask[65:70, 65:70] = 2
There is already an inbuilt function for measuring the mean intensity for each channel that is very fast
pd.DataFrame(regionprops_table(mask, img, properties=['label', 'mean_intensity']))
You can also pass custom functions that take a binary mask and one channel of an intensity image to regionprops_table
def my_mean_func(mask, img):
    return np.mean(img[mask])
pd.DataFrame(regionprops_table(mask, img, properties=['label'], extra_properties=[my_mean_func]))
This is fast because the binary mask and intensity image passed to the custom function are cropped to the minimum bounding box of the mask. Therefore, the computations are much faster as they operate over a much smaller area.
This only allows the user to calculate values per channel, but there is a generalisation that returns a 3D matrix of the selected region so that between channel measurements (or any measurements you like can be made).
props = regionprops(mask, img)
for prop in props:
    print("Region ", prop['label'], ":")
    print("Mean intensity: ", prop['mean_intensity'])
    print()
This is only an example of the very basic functionality.
I haven't had time to benchmark any of the above algorithms, but the ones used in this answer are very very fast indeed and I use them to operate over very large images quite quickly. However, it is important to note here that one of the reasons why this is so much faster for me is because I expect each object (each entry of the label mask that has the same value) to be only situated in a very small part of the image. Therefore, the minimum bounding box representation returned by regionprops is much much smaller than the original image and drastically speeds up computation.
Thank you very much to everyone for their help.
|
Efficiently mask an image with a label mask
|
I have an image that I read in with tifffile.imread and it is turned into a 3D matrix, with the first dimension representing the Y coordinate, the second the X and the third the channel of the image (these images are not RGB and so there can be an arbitrary number of channels).
Each of these images has a label mask which is a 2D array that indicates the position of objects in the image. In the label mask, pixels that have a value of 0 do not belong to any object, pixels that have a value of 1 belong to the first object, pixels that have a value of 2 belong to the second object and so on.
What I would like to calculate is, for each object and for each channel of the image, the mean, median, std, min and max of the channel. So, for example, I would like to know the mean, median, std, min and max values of the first channel for pixels in object 10.
I have written code to do this but it is very slow (shown below) and I wondered if people had a better way or knew a package(s) that might be helpful in making this faster/doing this more efficiently. (Here the word 'stain' means the same as channel)
sample = imread(input_img)
label_mask = np.load(input_mask)
n_stains = sample.shape[2]
n_labels = np.max(label_mask)
#Create empty dataframe to store intensity measurements
intensity_measurements = pd.DataFrame(columns = ['sample', 'label', 'stain', 'mean', 'median', 'std', 'min', 'max'])
for label in range(1, n_labels+1):
    for stain in range(n_stains):
        #Extract stain and label
        stain_label = sample[:,:,stain][label_mask == label]
        #Calculate intensity measurements
        mean = np.mean(stain_label)
        median = np.median(stain_label)
        std = np.std(stain_label)
        min = np.min(stain_label)
        max = np.max(stain_label)
        #Add intensity measurements to dataframe
        intensity_measurements = intensity_measurements.append({'sample' : args.input_img, 'label': label, 'stain': stain, 'mean': mean, 'median': median, 'std': std, 'min': min, 'max': max}, ignore_index=True)
|
[
"Your code is slow because you iterate over the whole image for each of the labels. This is an operation of O(n k), for n pixels and k labels. You could instead iterate over the image, and for each pixel examine the label, then update the measurements for that label with the pixel values. This is an operation of O(n). You'd keep an accumulator for each label and each measurement (standard deviation requires accumulating the square sum as well as the sum, but the sum you're already accumulating for the mean). The only measure that you cannot compute this way is the median, as it requires a partial sort of the full list of values.\nThis would obviously be a much cheaper operation, except for the fact that Python is a slow, interpreted language, and looping over each pixel in Python leads to a very slow program. In a compiled language you would implement it this way though.\nSee this answer for a way to implement this efficiently using NumPy functionality.\n\nUsing the DIPlib library (disclosure: I'm an author) you can apply the operation as follows (the median is not implemented). Other image processing libraries have similar functionality, though might not be as flexible with the number of channels.\nimport diplib as dip\n\n# sample = imread(input_img)\n# label_mask = np.load(input_mask)\n# Alternative random data so that I can run the code for testing:\nsample = imageio.imread(\"../images/trui_c.tif\")\nlabel_mask = np.random.randint(0, 20, sample.shape[:2], dtype=np.uint32)\n\nsample = dip.Image(sample, tensor_axis=2)\nmsr = dip.MeasurementTool.Measure(label_mask, sample, features=[\"Mean\", \"StandardDeviation\", \"MinVal\", \"MaxVal\"])\nprint(msr)\n\nThis prints out:\n | Mean | StandardDeviation | MinVal | MaxVal |\n-- | ------------------------------------ | ------------------------------------ | ------------------------------------ | ------------------------------------ |\n | chan0 | chan1 | chan2 | chan0 | chan1 | chan2 | chan0 | chan1 | chan2 | chan0 | chan1 | chan2 |\n | | | | | | | | | | | | |\n-- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- |\n 1 | 82.26 | 41.30 | 24.77 | 57.77 | 52.16 | 48.22 | 5.000 | 3.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n 2 | 82.02 | 41.18 | 24.85 | 52.16 | 48.22 | 48.33 | 3.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n 3 | 82.39 | 41.17 | 24.93 | 48.22 | 48.33 | 48.48 | 1.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n 4 | 82.14 | 41.62 | 25.03 | 48.33 | 48.48 | 48.47 | 1.000 | 1.000 | 0.000 | 255.0 | 255.0 | 255.0 |\n 5 | 82.89 | 41.45 | 24.94 | 48.48 | 48.47 | 48.54 | 1.000 | 0.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n 6 | 82.83 | 41.60 | 25.26 | 48.47 | 48.54 | 48.65 | 0.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n 7 | 81.95 | 41.77 | 25.51 | 48.54 | 48.65 | 48.22 | 1.000 | 1.000 | 2.000 | 255.0 | 255.0 | 255.0 |\n 8 | 82.93 | 41.36 | 25.19 | 48.65 | 48.22 | 48.11 | 1.000 | 2.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n 9 | 81.88 | 41.70 | 25.07 | 48.22 | 48.11 | 47.69 | 2.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n10 | 81.46 | 41.40 | 24.82 | 48.11 | 47.69 | 48.32 | 1.000 | 1.000 | 2.000 | 255.0 | 255.0 | 255.0 |\n11 | 81.33 | 40.98 | 24.76 | 47.69 | 48.32 | 48.85 | 1.000 | 2.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n12 | 82.30 | 41.55 | 25.12 | 48.32 | 48.85 | 48.75 | 2.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n13 | 82.43 | 41.50 | 25.15 | 48.85 | 48.75 | 48.89 | 1.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n14 | 83.29 | 42.11 | 25.65 
| 48.75 | 48.89 | 48.32 | 1.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n15 | 83.20 | 41.64 | 25.28 | 48.89 | 48.32 | 48.13 | 1.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n16 | 81.51 | 40.92 | 24.76 | 48.32 | 48.13 | 48.73 | 1.000 | 1.000 | 1.000 | 255.0 | 255.0 | 255.0 |\n17 | 81.81 | 41.31 | 24.71 | 48.13 | 48.73 | 48.49 | 1.000 | 1.000 | 0.000 | 255.0 | 255.0 | 255.0 |\n18 | 83.58 | 41.85 | 25.25 | 48.73 | 48.49 | 32.20 | 1.000 | 0.000 | 1.000 | 255.0 | 255.0 | 212.0 |\n19 | 82.12 | 41.24 | 25.06 | 48.49 | 32.20 | 24.44 | 0.000 | 1.000 | 1.000 | 255.0 | 212.0 | 145.0 |\n\n\nI don't have an efficient solution for the median. You'd have to split the image into a separate array for each label, then run the median over that. This would be equally efficient as the above, but use up much more memory.\n",
"The proposed method below utilizes matrix multiplications in order to speed up the calculations. It is built on two crucial Numpy tools:\n\nhttps://numpy.org/doc/stable/reference/generated/numpy.einsum.html?highlight=einsum#numpy.einsum\n\n\nEvaluates the Einstein summation convention on the operands.\n\n\nhttps://numpy.org/doc/stable/reference/maskedarray.html\n\n\nMasked arrays are arrays that may have missing or invalid entries. The numpy.ma module provides a nearly work-alike replacement for numpy that supports data arrays with masks.\n\nmasked array update:\nThe initial code was updated with the masked array use after https://stackoverflow.com/users/7328782/cris-luengo spotted a mistake in my intial code.\n\nThis replaces all the non-selected pixels for a given label with a 0 value, and includes all those zeros into the measurements.\n\nNow we mask the non-selected pixels before measurement calculations.\nimport numpy as np\nimport numpy.ma as ma\nimport pandas as pd\n\nsample = imread(input_img)\nlabel_mask = np.load(input_mask)\n\nn_labels = np.max(label_mask)\n\n# let's create boolean label masks for each label \n# producing 3D matrix where 1st axis is label\nlabel_mask_unraveled = np.equal.outer(label_mask, np.arange(1, n_labels +1))\n\n# now we can apply these boolean label masks simultaniously\n# to all the sample channels with help of 'einsum' producing 4D matrix, \n# where the 1st axis is channel/stain and the 2nd axis is label\nsample_label_masks_applied = np.einsum(\"ijk,ijl->klij\", sample, label_mask_unraveled)\n\n# in order to exclude the non-selected pixels \n# from meausurement calculations, we mask the pixels first\nnon_selected_pixels_mask = np.moveaxis(~label_mask_unraveled, -1, 0)[np.newaxis, :, :, :]\nnon_selected_pixels_mask = np.repeat(non_selected_pixels_mask, sample.shape[2], axis=0)\n\nsample_label_masks_applied = ma.masked_array(sample_label_masks_applied, non_selected_pixels_mask) \n\n# intensity measurement calculations\n# embedded into pd.DataFrame initialization\nintensity_measurements = pd.DataFrame(\n {\n \"sample\": args.input_img,\n \"label\": sample.shape[2] * list(range(1, n_labels+1)),\n \"stain\": n_labels * list(range(sample.shape[2])),\n \"mean\": ma.mean(sample_label_masks_applied, axis=(2, 3)).flatten(),\n \"median\": ma.median(sample_label_masks_applied, axis=(2, 3)).flatten(),\n \"std\": ma.std(sample_label_masks_applied, axis=(2, 3)).flatten(),\n \"min\": ma.min(sample_label_masks_applied, axis=(2, 3)).flatten(),\n \"max\": ma.max(sample_label_masks_applied, axis=(2, 3)).flatten() \n }\n)\n\n",
"I've found a good solution that works for me using scikit image, specifically the regionprops functions.\nimport numpy as np\nimport pandas as pd\nfrom skimage.measure import regionprops, regionprops_table\nnp.random.seed(42)\n\nHere is a random \"image\" and label mask of that image\nimg = np.random.randint(0, 255, size=(100, 100, 3))\nmask = np.zeros((100, 100)).astype(np.uint8)\nmask[20:50, 20:50] = 1\nmask[65:70, 65:70] = 2\n\nThere is already an inbuilt function for measuring the mean intensity for each channel that is very fast\npd.DataFrame(regionprops_table(mask, img, properties=['label', 'mean_intensity']))\n\nYou can also pass custom functions that take a binary mask and one channel of an intensity image to regionprops_table\ndef my_mean_func(mask, img):\n return np.mean(img[mask])\n\npd.DataFrame(regionprops_table(mask, img, properties=['label'], extra_properties=[my_mean_func]))\n\nThis is fast because the binary mask and intensity image passed to the custom function is the minimum bounding box of the mask. Therefore, the computations are much faster as they are operating over a much smaller area.\nThis only allows the user to calculate values per channel, but there is a generalisation that returns a 3D matrix of the selected region so that between channel measurements (or any measurements you like can be made).\nprops = regionprops(mask, img)\n\nfor prop in props:\n print(\"Region \", prop['label'], \":\")\n print(\"Mean intensity: \", prop['mean_intensity'])\n print()\n\nThis is only an example of the very basic functionality.\nI haven't had time to benchmark any of the above algorithms, but the ones used in this answer are very very fast indeed and I use them to operate over very large images quite quickly. However, it is important to note here that one of the reasons why this is so much faster for me is because I expect each object (each entry of the label mask that has the same value) to be only situated in a very small part of the image. Therefore, the minimum bounding box representation returned by regionprops is much much smaller than the original image and drastically speeds up computation.\nThank you very much to everyone for their help.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"image_processing",
"python"
] |
stackoverflow_0074341139_image_processing_python.txt
|
Q:
How to convert JSON to CSV file from s3 and save it in same s3 bucket using Glue job
Please help me with the coding part
I googled for the code, but it only shows examples using a Lambda handler. My project requires using a Glue job.
A:
Here is a way of converting JSON to CSV in a Glue (Scala) job:
val glueContext = new GlueContext(SparkContext.getOrCreate())
val jsonDf = glueContext.getSource(
connectionType = "s3",
connectionOptions = JsonOptions(Map("paths" -> "s3://sourcePath/data.json")),
format = "json",
transformationContext = "jsonDf"
)
val dataDf = jsonDf.toDF()
val csvRDD = dataDf.repartition(1).rdd.map(_.mkString(","))
csvRDD.saveAsTextFile("s3://sourcePath/data.csv")
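Since the question is tagged python, here is the equivalent idea as a PySpark Glue job sketch; the bucket and paths are placeholders, and note that Spark writes a folder of part files rather than a single data.csv:
from pyspark.context import SparkContext
from awsglue.context import GlueContext

sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)
spark = glueContext.spark_session

# read the JSON from S3 and write it back to the same bucket as CSV
df = spark.read.json("s3://source-bucket/input/data.json")
(df.repartition(1)                    # single part file, if the data fits in one partition
   .write.mode("overwrite")
   .option("header", "true")
   .csv("s3://source-bucket/output/"))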
|
How to convert JSON to CSV file from s3 and save it in same s3 bucket using Glue job
|
Please help me with the coding part
I googled for the code, but it only shows examples using a Lambda handler. My project requires using a Glue job.
|
[
"Here you can find the answer for converting json to csv.\nGlueContext glueContext = new GlueContext(Spark.getActiveSession())\n\nval jsonDf = glueContext.getSource(\n connectionType = \"s3\",\n connectionOptions = JsonOptions(Map(\"paths\" -> \"s3://:sourcePath/data.json\")),\n format = \"json\",\n transformationContext = \"jsonDf\"\n )\n\nval dataDf = jsonDf.toDF()\nval csvRDD = dataDf.repartition(1).rdd.map(_.mkString(\",\"))\ncsvRDD.saveAsTextFile(\"s3://sourcePath/data.csv\")\n\n"
] |
[
0
] |
[] |
[] |
[
"amazon_glue",
"amazon_s3",
"amazon_web_services",
"aws_lambda",
"python"
] |
stackoverflow_0074546278_amazon_glue_amazon_s3_amazon_web_services_aws_lambda_python.txt
|
Q:
Is there a way to run my code for different values of probability?
I am working on a probability question. In my code when I enter the red and blue archers' probability, the code runs fine.
import numpy as np
from numpy.random import default_rng

rng = default_rng(42)

def trial(red, blue, red_accurcy=1, blue_accurcy=1, history=False, debug=False):
    if history:
        red_history = [red]
        blue_history = [blue]
    if debug:
        print(f"Start:\t red = {red:3d} blue = {blue:3d}")
    while red and blue:
        p = np.array([red, blue], dtype=float)
        p /= p.sum()
        arrow = rng.choice(['red', 'blue'], p=p)
        if arrow == 'red':
            blue -= 1 if rng.uniform() < red_accurcy else 0
        else:
            red -= 1 if rng.uniform() < blue_accurcy else 0
        if debug:
            print(f"\t red = {red:3d} blue = {blue:3d} arrow = {arrow}")
        if history:
            red_history.append(red)
            blue_history.append(blue)
    if debug:
        print(f"End:\t red = {red:3d} blue = {blue:3d}")
    if history:
        return red, blue, red_history, blue_history
    else:
        return red, blue, red_accurcy, blue_accurcy
trial(90,45,0.1,0.2)
Output:
(60, 0, 0.1, 0.2)
Now, I am trying to run for different probabilities for red and see if blue can win:
red_ac = np.linspace(0,1,10)
battles = [trial(90,45,red_accurcy=red_ac,blue_accurcy= 1) for red_ac in range(10)]
battles
Output:
[(0, 45, 0, 1),
(75, 0, 1, 1),
(82, 0, 2, 1),
(80, 0, 3, 1),
(78, 0, 4, 1),
(74, 0, 5, 1),
(69, 0, 6, 1),
(74, 0, 7, 1),
(81, 0, 8, 1),
(81, 0, 9, 1)]
This does not look correct to me. Any suggestion where I went wrong?
A:
The problem is in your list comprehension: for red_ac in range(10) iterates over the integers 0-9, so those integers (not the linspace values) were passed as red_accurcy. Iterate over the linspace array itself:
red_accurcy = np.linspace(0, 1, 10)
results = [trial(90, 45, red_accurcy=r, blue_accurcy=1) for r in red_accurcy]

Then check if blue wins:
blue_win = [(red, blue, r_acc, b_acc) for red, blue, r_acc, b_acc in results if blue > 0]
blue_win
and the output:
[(0, 45, 0.0, 1),
(0, 24, 0.1111111111111111, 1),
(0, 11, 0.2222222222222222, 1)]
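If you want an estimate of blue's win probability rather than a single outcome per accuracy (the intent of the repetition loop sketched above), you can repeat the trial N times per accuracy value; N = 100 here is an arbitrary choice:
N = 100  # repetitions per accuracy value
for r in np.linspace(0, 1, 10):
    wins = sum(trial(90, 45, red_accurcy=r, blue_accurcy=1)[1] > 0 for _ in range(N))
    print(f"red accuracy {r:.2f}: blue wins {wins / N:.0%} of the time")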
|
Is there a way to run my code for different values of probability?
|
I am working on a probability question. In my code when I enter the red and blue archers' probability, the code runs fine.
import numpy as np
from numpy.random import default_rng

rng = default_rng(42)

def trial(red, blue, red_accurcy=1, blue_accurcy=1, history=False, debug=False):
    if history:
        red_history = [red]
        blue_history = [blue]
    if debug:
        print(f"Start:\t red = {red:3d} blue = {blue:3d}")
    while red and blue:
        p = np.array([red, blue], dtype=float)
        p /= p.sum()
        arrow = rng.choice(['red', 'blue'], p=p)
        if arrow == 'red':
            blue -= 1 if rng.uniform() < red_accurcy else 0
        else:
            red -= 1 if rng.uniform() < blue_accurcy else 0
        if debug:
            print(f"\t red = {red:3d} blue = {blue:3d} arrow = {arrow}")
        if history:
            red_history.append(red)
            blue_history.append(blue)
    if debug:
        print(f"End:\t red = {red:3d} blue = {blue:3d}")
    if history:
        return red, blue, red_history, blue_history
    else:
        return red, blue, red_accurcy, blue_accurcy
trial(90,45,0.1,0.2)
Output:
(60, 0, 0.1, 0.2)
Now, I am trying to run for different probabilities for red and see if blue can win:
red_ac = np.linspace(0,1,10)
battles = [trial(90,45,red_accurcy=red_ac,blue_accurcy= 1) for red_ac in range(10)]
battles
Output:
[(0, 45, 0, 1),
(75, 0, 1, 1),
(82, 0, 2, 1),
(80, 0, 3, 1),
(78, 0, 4, 1),
(74, 0, 5, 1),
(69, 0, 6, 1),
(74, 0, 7, 1),
(81, 0, 8, 1),
(81, 0, 9, 1)]
This does not look correct to me. Any suggestion where I went wrong?
|
[
"This answer will work for the above code.\n\nred_accurcy = np.linspace(0,1,10)\nN = 10\nn_max = 100\nn_values = range(2,n_max)\n\nsimulate_results = []\nfor n in n_values:\n results = [trial(90,45,red_accurcy= r ,blue_accurcy= 1) for r in red_accurcy]\n\nCheck if blue wins\nblue_win = [(red,blue,red_accurcy,blue_accurcy) for red,blue,red_accurcy,blue_accurcy in results if blue>0] \nblue_win\n\nand the output:\n[(0, 45, 0.0, 1),\n (0, 24, 0.1111111111111111, 1),\n (0, 11, 0.2222222222222222, 1)]\n\n"
] |
[
0
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0074467049_numpy_python.txt
|
Q:
Run mod_wsgi with virtualenv or Python with version different that system default
I am trying to make my Flask application work on CentOS server. Basically the issue is that I have Python 2.6 installed in /usr/bin which is system default and Python 3.4 installed in /usr/local/bin. I would like to use Python 3.4 virtualenv or at least Python 3.4 interpreter for mod_wsgi to run my application.
I have created virtualenv in ~/virtualenvs/flask.
I have this WSGI script:
import os
import sys
from logging import Formatter, FileHandler
APP_HOME = r"/home/fenikso/Album"
activate_this = os.path.join("/home/fenikso/virtualenvs/flask/bin/activate_this.py")
execfile(activate_this, dict(__file__=activate_this))
sys.path.insert(0, APP_HOME)
os.chdir(APP_HOME)
from app import app
handler = FileHandler("app.log")
handler.setFormatter(Formatter("[%(asctime)s | %(levelname)s] %(message)s"))
app.logger.addHandler(handler)
application = app
And following config in Apache:
<VirtualHost *:80>
ServerName album2.site.cz
Alias /static "/home/fenikso/Album/static"
Alias /photos "/home/fenikso/Album/photos"
Alias /thumbs "/home/fenikso/Album/thumbs"
WSGIScriptAlias / "/home/fenikso/Album/wsgi.py"
<Directory "/home/fenikso/Album">
AllowOverride None
Allow from all
</Directory>
<Directory "/home/fenikso/Album/static">
AllowOverride None
Allow from all
</Directory>
<Directory "/home/fenikso/Album/photos">
AllowOverride None
Allow from all
</Directory>
<Directory "/home/fenikso/Album/thumbs">
AllowOverride None
Allow from all
</Directory>
</VirtualHost>
However, when trying to run the application, I get an error:
Apache/2.2.15 (Unix) DAV/2 mod_wsgi/3.2 Python/2.6.6 mod_fcgid/2.3.7 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.1e-fips SVN/1.6.11 mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operations
mod_wsgi (pid=14627): Target WSGI script '/home/fenikso/Album/wsgi.py' cannot be loaded as Python module.
mod_wsgi (pid=14627): Exception occurred processing WSGI script '/home/fenikso/Album/wsgi.py'.
Traceback (most recent call last):
File "/home/fenikso/Album/wsgi.py", line 15, in <module>
from app import app
File "/home/fenikso/Album/app.py", line 1, in <module>
from flask import Flask
ImportError: No module named flask
It appears that Python 2.6 is being run and my virtualenv is not activated. What would be the proper way to get this working while keeping Python 2.6 as the system default?
A:
You have to add the following lines to your apache.conf in order to point to the right executable and the virtualenv's path.
WSGIPythonHome /usr/local/bin
WSGIPythonPath /home/fenikso/virtualenv/lib/python3.4/site-packages
You will find all the options for these two directives in the mod_wsgi documentation.
Be aware that your mod_wsgi build must be compatible with the Python executable. In your case, you probably have to install a mod_wsgi compiled for Python 3.4 and configure Apache to use it instead of the standard mod_wsgi module.
The whole configuration file should be :
WSGIPythonHome "/usr/local/bin"
WSGIPythonPath "/home/fenikso/virtualenv/lib/python3.4/site-packages"
<VirtualHost *:80>
ServerName album2.site.cz
Alias /static "/home/fenikso/Album/static"
Alias /photos "/home/fenikso/Album/photos"
Alias /thumbs "/home/fenikso/Album/thumbs"
WSGIScriptAlias / "/home/fenikso/Album/wsgi.py"
<Directory "/home/fenikso/Album">
AllowOverride None
Allow from all
</Directory>
<Directory "/home/fenikso/Album/static">
AllowOverride None
Allow from all
</Directory>
<Directory "/home/fenikso/Album/photos">
AllowOverride None
Allow from all
</Directory>
<Directory "/home/fenikso/Album/thumbs">
AllowOverride None
Allow from all
</Directory>
</VirtualHost>
A:
Look into the WSGIPythonHome and WSGIPythonPath directives. It's also possible that you have a Python 2.6 mod_wsgi installed; mod_wsgi must be compiled for the intended Python version and does not support multiple Python versions. So check that your mod_wsgi is py3.4-compatible and set the directives above.
Alternatively, you could run the flask app with a python server like gunicorn and proxypass from apache to gunicorn.
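A minimal sketch of that alternative; the port, module path, and app name are illustrative assumptions, not taken from the question. Start the app under gunicorn:
gunicorn --bind 127.0.0.1:8000 app:app

then proxy to it from the VirtualHost (mod_proxy and mod_proxy_http must be enabled):
ProxyPass / http://127.0.0.1:8000/
ProxyPassReverse / http://127.0.0.1:8000/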
A:
Another option, which I believe is much cleaner, logical, and flexible, is to simply reference the python interpreter from your venv at the beginning of your wsgi file. This way, it is easy to change (no fiddling with system config files) and opens the possibility for multiple apps running with different python environments, like so:
#!/path/to/your/venv/bin/python
A:
If the python version installed on the system is different from the python version used in the virtual environment, then mod_wsgi will not work because mod_wsgi is always compiled for a specific python version.
In this situation, you need to install mod_wsgi inside the virtual environment:
pip install mod_wsgi-standalone
Then such a module should be loaded instead of the default one installed in the system.
For Ubuntu for example, modify the path to the module in /etc/apache2/mods-available/wsgi.load
LoadModule wsgi_module /home/user/etc/.venv/lib/python3.9/site-packages/mod_wsgi/server/mod_wsgi-py39.cpython-39-x86_64-linux-gnu.so
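If you are unsure of the compiled module's exact path, the pip-installed mod_wsgi package ships a helper that prints a suitable LoadModule line (a sketch, assuming the mod_wsgi-express script is on your PATH):
mod_wsgi-express module-config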
Then in order to avoid the error "no such file or directory: mod_wsgi (pid=XXXX): Couldn't bind unix domain socket '/usr/local/opt/httpd/logs/wsgi.xxxxx.11.1.sock" should be added to the httpd.conf file:
WSGISocketPrefix /var/run/wsgi
After restarting Apache, everything should work.
|
Run mod_wsgi with virtualenv or Python with version different that system default
|
I am trying to make my Flask application work on a CentOS server. The issue is that Python 2.6 is installed in /usr/bin as the system default, and Python 3.4 is installed in /usr/local/bin. I would like mod_wsgi to run my application with a Python 3.4 virtualenv, or at least with the Python 3.4 interpreter.
I have created virtualenv in ~/virtualenvs/flask.
I have this WSGI script:
import os
import sys
from logging import Formatter, FileHandler
APP_HOME = r"/home/fenikso/Album"
activate_this = os.path.join("/home/fenikso/virtualenvs/flask/bin/activate_this.py")
execfile(activate_this, dict(__file__=activate_this))
sys.path.insert(0, APP_HOME)
os.chdir(APP_HOME)
from app import app
handler = FileHandler("app.log")
handler.setFormatter(Formatter("[%(asctime)s | %(levelname)s] %(message)s"))
app.logger.addHandler(handler)
application = app
And following config in Apache:
<VirtualHost *:80>
ServerName album2.site.cz
Alias /static "/home/fenikso/Album/static"
Alias /photos "/home/fenikso/Album/photos"
Alias /thumbs "/home/fenikso/Album/thumbs"
WSGIScriptAlias / "/home/fenikso/Album/wsgi.py"
<Directory "/home/fenikso/Album">
AllowOverride None
Allow from all
</Directory>
<Directory "/home/fenikso/Album/static">
AllowOverride None
Allow from all
</Directory>
<Directory "/home/fenikso/Album/photos">
AllowOverride None
Allow from all
</Directory>
<Directory "/home/fenikso/Album/thumbs">
AllowOverride None
Allow from all
</Directory>
</VirtualHost>
However, when trying to run the application, I get an error:
Apache/2.2.15 (Unix) DAV/2 mod_wsgi/3.2 Python/2.6.6 mod_fcgid/2.3.7 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.1e-fips SVN/1.6.11 mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operations
mod_wsgi (pid=14627): Target WSGI script '/home/fenikso/Album/wsgi.py' cannot be loaded as Python module.
mod_wsgi (pid=14627): Exception occurred processing WSGI script '/home/fenikso/Album/wsgi.py'.
Traceback (most recent call last):
File "/home/fenikso/Album/wsgi.py", line 15, in <module>
from app import app
File "/home/fenikso/Album/app.py", line 1, in <module>
from flask import Flask
ImportError: No module named flask
It appears that Python 2.6 is being run and my virtualenv is not activated. What would be the proper way to get this working while keeping Python 2.6 as the system default?
|
[
"You have to add the following line in your apache.conf in order to give the right executable and the path to the virtualenv.\nWSGIPythonHome /usr/local/bin\nWSGIPythonPath /home/fenikso/virtualenv/lib/python3.4/site-packages\n\nYou will find all the options of these two command in the mod_wsgi documentation\nBe aware that you must have the version of mod_wsgi compatible with the python executable. In your case, you probably have to install mod_wsgi3.4 and configure apache to use it instead of the standart mod_wsgi module. \nThe whole configuration file should be :\nWSGIPythonHome \"/usr/local/bin\"\nWSGIPythonPath \"/home/fenikso/virtualenv/lib/python3.4/site-packages\"\n\n<VirtualHost *:80>\n ServerName album2.site.cz\n Alias /static \"/home/fenikso/Album/static\"\n Alias /photos \"/home/fenikso/Album/photos\"\n Alias /thumbs \"/home/fenikso/Album/thumbs\"\n WSGIScriptAlias / \"/home/fenikso/Album/wsgi.py\"\n <Directory \"/home/fenikso/Album\">\n AllowOverride None\n Allow from all\n </Directory>\n <Directory \"/home/fenikso/Album/static\">\n AllowOverride None\n Allow from all\n </Directory>\n <Directory \"/home/fenikso/Album/photos\">\n AllowOverride None\n Allow from all\n </Directory>\n <Directory \"/home/fenikso/Album/thumbs\">\n AllowOverride None\n Allow from all\n </Directory>\n</VirtualHost>\n\n",
"Look into the WSGIPythonHome and WSGIPythonPath directives. It's also possible that you have a python2.6 mod_wsgi installed, mod_wsgi must be compiled for the intended python version and does not support multiple python versions. So check that your mod_wsgi is py3.4 compatible and set the directives above.\nAlternatively, you could run the flask app with a python server like gunicorn and proxypass from apache to gunicorn.\n",
"Another option, which I believe is much cleaner, logical, and flexible, is to simply reference the python interpreter from your venv at the beginning of your wsgi file. This way, it is easy to change (no fiddling with system config files) and opens the possibility for multiple apps running with different python environments, like so:\n#!/path/to/your/venv/bin/python\n\n",
"If the python version installed on the system is different from the python version used in the virtual environment, then mod_wsgi will not work because mod_wsgi is always compiled for a specific python version.\nIn this situation, you need to install mod_wsgi in a virtual environment\npip install mod_wsgi-standalone\n\nThen such a module should be loaded instead of the default one installed in the system.\nFor Ubuntu for example, modify the path to the module in /etc/apache2/mods-available/wsgi.load\nLoadModule wsgi_module /home/user/etc/.venv/lib/python3.9/site-packages/mod_wsgi/server/mod_wsgi-py39.cpython-39-x86_64-linux-gnu.so\n\nThen in order to avoid the error \"no such file or directory: mod_wsgi (pid=XXXX): Couldn't bind unix domain socket '/usr/local/opt/httpd/logs/wsgi.xxxxx.11.1.sock\" should be added to the httpd.conf file:\nWSGISocketPrefix /var/run/wsgi\n\nAfter restarting apache everything should work\n"
] |
[
10,
2,
0,
0
] |
[] |
[] |
[
"apache",
"flask",
"mod_wsgi",
"python"
] |
stackoverflow_0027450998_apache_flask_mod_wsgi_python.txt
|
Q:
how to get proxy in python on scraping with request?
How do I get a proxy randomly, and get only one?
I've written the code below, but I don't know how to pick a proxy at random, and I also want to pick the proxy based on the page. Does anyone know how?
import requests
from bs4 import BeautifulSoup
url = 'https://hidemy.name/en/proxy-list/?anon=34#list'
r = requests.get(url,headers={"User-Agent":"Mozilla/5.0"})
soup = BeautifulSoup(r.text,"html.parser")
print(soup)
A:
This link should help you get going. Check out the table section, will be most salient to what you're trying to do.
https://www.pluralsight.com/guides/extracting-data-html-beautifulsoup
You need to get further into the soup and put your desired data into some kind of container to then be able to extract an item randomly from it.
getting further into the soup:
# this will find the table block on the proxy url site
table = soup.find("div", {"class": "table_block"})
# this will find all the 'tr' elements in the table variable and put them in a list
data = table.tbody.findAll("tr")
# separate the elements you care about from the data list
# build a proxy list, or write the data to a file
# use a module like random to pick a random proxy from your list or file
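To make this concrete, a hedged sketch of those remaining steps; the cell positions are assumptions about the site's table layout, not verified:
import random

proxies = []
for row in data:
    cells = row.findAll("td")
    if len(cells) >= 2:
        # assumed layout: IP address in the first cell, port in the second
        ip = cells[0].get_text(strip=True)
        port = cells[1].get_text(strip=True)
        proxies.append(ip + ":" + port)

proxy = random.choice(proxies)  # one proxy, picked at random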
"Getting the proxy based on the page" is really non-specific - based on what page? What condition on the page? Add a little more detail (and make sure to check other answers on stack overflow) to help people better answer your questions and to help improve question quality on SO.
A:
I would suggest you try out this library, proxy-randomizer
You will be able to get a pool of free proxies to use in any of your requests. Then you can use a hash map to choose among them based on the page you want to scrape.
from proxy_randomizer import RegisteredProviders
import requests
rp = RegisteredProviders()
rp.parse_providers()
for proxy in rp.proxies:
proxies = {"https": proxy.get_proxy()}
response = requests.get("http://google.com", proxies=proxies)
Disclaimer: I'm the author.
|
how to get proxy in python on scraping with request?
|
How do I get a proxy randomly, and get only one?
I've written the code below, but I don't know how to pick a proxy at random, and I also want to pick the proxy based on the page. Does anyone know how?
import requests
from bs4 import BeautifulSoup
url = 'https://hidemy.name/en/proxy-list/?anon=34#list'
r = requests.get(url,headers={"User-Agent":"Mozilla/5.0"})
soup = BeautifulSoup(r.text,"html.parser")
print(soup)
|
[
"This link should help you get going. Check out the table section, will be most salient to what you're trying to do.\nhttps://www.pluralsight.com/guides/extracting-data-html-beautifulsoup\nYou need to get further into the soup and put your desired data into some kind of container to then be able to extract an item randomly from it.\ngetting further into the soup:\n# this will find the table block on the proxy url site \ntable = soup.find(\"div\", {\"class\": \"table_block\"})\n\n# this will find all the 'tr' elements in the table variable and put them in a list\ndata = table.tbody.findAll(\"tr\")\n\n# separate the elements you care about from the data list \n# pick proxy list or from write your data to file \n# use a module like random to aid in picking a random proxy from your list or file \n\n\"Getting the proxy based on the page\" is really non-specific - based on what page? What condition on the page? Add a little more detail (and make sure to check other answers on stack overflow) to help people better answer your questions and to help improve question quality on SO.\n",
"I would suggest you try out this library, proxy-randomizer\nYou will be able to get a pool of free proxies to use in any of your requests. Then you can use a hash map to use them base on the page you want to scrape\nfrom proxy_randomizer import RegisteredProviders\nimport requests\n\nrp = RegisteredProviders()\nrp.parse_providers()\n\nfor proxy in rp.proxies:\n\n proxies = {\"https\": proxy.get_proxy()}\n response = requests.get(\"http://google.com\", proxies=proxies)\n\nDisclaimer: I'm the author.\n"
] |
[
0,
0
] |
[] |
[] |
[
"beautifulsoup",
"python",
"web_scraping"
] |
stackoverflow_0067802543_beautifulsoup_python_web_scraping.txt
|
Q:
Is there a way to map english letter(s) (or graphemes) in word from correspondent phoneme(s) in Python?
e.g. let's assume we have something like:
WOULD | YOU | LIKE | A | CUP | OF | TEA
w ʊ d | j uː | l a ɪ k | ə | k ʌ p | ʊ v | t iː
W UH D | Y UW | L AY K | AH | K AH P | AH V | T IY
Besides needing to solve the P2G problem, I also want a mapping between each phoneme and its corresponding grapheme (letter or group of letters).
Could you please help me to understand whether I can get this P2G correspondance in English using some python tools?
Thanks a bunch in advance!
A:
You can use the CMU pronouncing dictionary together with an aspell or enchant spell checker.
CMU pronouncing dictionary is a list of English words and their pronunciations, where each pronunciation is a list of phonemes.
The pronunciation dictionary can be downloaded in text format here:
http://www.speech.cs.cmu.edu/cgi-bin/cmudict
The raw text is not in a very useful format, so it is more convenient to download it already parsed.
I used the cmudict.dict file from the CMU pronouncing dictionary on nltk.
You can also use enchant spell checker to check if a string of letters is a word.
This is useful because the CMU pronouncing dictionary does not contain all possible words and has some errors.
If you have enchant installed, you can use the following code to test it:
import nltk
import enchant

# Download the CMU pronouncing dictionary via nltk
nltk.download('cmudict')

# Get the list of English phonemes across all dictionary entries
phonemes = [p for w, ps in nltk.corpus.cmudict.entries() for p in ps]

# Get the letters appearing in the phoneme symbols
graphemes = [c for p in phonemes for c in p if c.isalpha()]

# Look up 'cup' and 'tea' in the CMU pronouncing dictionary
# (entries carry stress digits, e.g. 'AH1')
pronunciations = nltk.corpus.cmudict.dict()
print(pronunciations['cup'])  # [['K', 'AH1', 'P']]
print(pronunciations['tea'])  # [['T', 'IY1']]

# Check that 'cup' and 'tea' are real words with an enchant dictionary
d = enchant.Dict('en_US')
assert d.check('cup')
assert d.check('tea')
|
Is there a way to map english letter(s) (or graphemes) in word from correspondent phoneme(s) in Python?
|
e.g. let's assume we have something like:
WOULD | YOU | LIKE | A | CUP | OF | TEA
w ʊ d | j uː | l a ɪ k | ə | k ʌ p | ʊ v | t iː
W UH D | Y UW | L AY K | AH | K AH P | AH V | T IY
Besides needing to solve the P2G problem, I also want a mapping between each phoneme and its corresponding grapheme (letter or group of letters).
Could you please help me to understand whether I can get this P2G correspondance in English using some python tools?
Thanks a bunch in advance!
|
[
"You can use CMU pronouncing dictionary and aspell or enchant spell checker.\nCMU pronouncing dictionary is a list of English words and their pronunciations, where each pronunciation is a list of phonemes.\nThe pronunciation dictionary can be downloaded in text format here:\nhttp://www.speech.cs.cmu.edu/cgi-bin/cmudict\nThe raw text is not in a very useful format, so it is more convenient to download it already parsed.\nI used the cmudict.dict file from the CMU pronouncing dictionary on nltk.\nYou can also use enchant spell checker to check if a string of letters is a word.\nThis is useful because the CMU pronouncing dictionary does not contain all possible words and has some errors.\nIf you have enchant installed, you can use the following code to test it:\nimport nltk\nfrom enchant.checker import SpellChecker\n\n# Download CMU pronouncing dictionary using nltk\nnltk.download()\n\n# Get list of English phonemes\nphonemes = [p for w, ps in nltk.corpus.cmudict.entries() for p in ps]\n\n# Get list of possible English graphemes\ngraphemes = [c for p in phonemes for c in p if c.isalpha()]\n\n# Check the words 'cup' and 'tea' with the CMU pronouncing dictionary\nassert nltk.corpus.cmudict.entries()[('cup',)] == [('cup', ['K', 'AH', 'P'])]\nassert nltk.corpus.cmudict.entries()[('tea',)] == [('tea', ['T', 'IY'])]\n\n# Check the words 'cup' and 'tea' with an enchant spell checker\nc = SpellChecker('en_US')\nc.set_text('cup')\nassert c.check()\nc.set_text('cup ')\nassert not c.check()\nc.set_text('tea')\nassert c.check()\nc.set_text('tea ')\nassert not c.check()\n\n"
] |
[
0
] |
[] |
[] |
[
"grapheme",
"phoneme",
"python"
] |
stackoverflow_0074546260_grapheme_phoneme_python.txt
|
Q:
Office365 smtp server does not respond to ehlo() in python
I am trying to use the Office365 SMTP server to send out emails automatically. My code previously worked with the Gmail server, but not with the Office365 server, using smtplib in Python.
My code:
import smtplib
server_365 = smtplib.SMTP('smtp.office365.com', '587')
server_365.ehlo()
server_365.starttls()
The response for the ehlo() is: (501, '5.5.4 Invalid domain name [DM5PR13CA0034.namprd13.prod.outlook.com]')
In addition, .starttls() raises a SMTPException: STARTTLS extension not supported by server
Any idea why this happens?
A:
The smtplib ehlo function automatically adds the sender's host name to the EHLO command, but Office365 requires that the domain be all lowercase, so when your default host name contains uppercase letters it errors.
You can fix by explicitly setting sender host name in the ehlo command to anything lowercase.
import smtplib
server_365 = smtplib.SMTP('smtp.office365.com', '587')
server_365.ehlo('mylowercasehost')
server_365.starttls()
A:
I had to put SMTP EHLO before and after starttls().
Thanks!
import smtplib
server_365 = smtplib.SMTP('smtp.office365.com', '587')
server_365.ehlo('mylowercasehost')
server_365.starttls()
server_365.ehlo('mylowercasehost')
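Putting it together, a minimal end-to-end sketch; the addresses and credentials are placeholders, not from the question:
import smtplib
from email.message import EmailMessage

server = smtplib.SMTP('smtp.office365.com', 587)
server.ehlo('mylowercasehost')
server.starttls()
server.ehlo('mylowercasehost')  # re-identify over the encrypted channel
server.login('user@example.com', 'password')  # placeholder credentials

msg = EmailMessage()
msg['From'] = 'user@example.com'
msg['To'] = 'recipient@example.com'
msg['Subject'] = 'Test'
msg.set_content('Hello from smtplib')

server.send_message(msg)
server.quit()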
|
Office365 smtp server does not respond to ehlo() in python
|
I am trying to use the Office365 SMTP server to send out emails automatically. My code previously worked with the Gmail server, but not with the Office365 server, using smtplib in Python.
My code:
import smtplib
server_365 = smtplib.SMTP('smtp.office365.com', '587')
server_365.ehlo()
server_365.starttls()
The response for the ehlo() is: (501, '5.5.4 Invalid domain name [DM5PR13CA0034.namprd13.prod.outlook.com]')
In addition, .starttls() raises a SMTPException: STARTTLS extension not supported by server
Any idea why this happens?
|
[
"The smtplib ehlo function automatically adds the senders host name to the EHLO command, but Office365 requires that the domain be all lowercase, so when youe default host name is uppercase it errors.\nYou can fix by explicitly setting sender host name in the ehlo command to anything lowercase.\nimport smtplib\n\nserver_365 = smtplib.SMTP('smtp.office365.com', '587')\n\nserver_365.ehlo('mylowercasehost')\n\nserver_365.starttls()\n\n",
"I had to put SMTP EHLO before and after starttls().\nThanks!\nimport smtplib\nserver_365 = smtplib.SMTP('smtp.office365.com', '587')\nserver_365.ehlo('mylowercasehost')\nserver_365.starttls()\nserver_365.ehlo('mylowercasehost')\n"
] |
[
0,
0
] |
[] |
[] |
[
"office365",
"python",
"smtp"
] |
stackoverflow_0044763856_office365_python_smtp.txt
|
Q:
TypeError: Workbook.__init__() got an unexpected keyword argument 'options'
I am getting this error
Traceback (most recent call last): File
"C:\Python310\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args) File "C:\Users\achille.gouttard\Documents\synergie\11_16_2021\app.py", line
861, in selectItem
self.find_match(1) File "C:\Users\achille.gouttard\Documents\synergie\11_16_2021\app.py", line
673, in find_match
with ExcelWriter(path, engine="openpyxl", File "C:\Users\achille.gouttard\AppData\Roaming\Python\Python310\site-packages\pandas\io\excel\_openpyxl.py",
line 72, in __init__
self.book = Workbook(**engine_kwargs) TypeError: Workbook.__init__() got an unexpected keyword argument 'options'
from this line of code:
with ExcelWriter(path, engine="openpyxl",
engine_kwargs={'options': {'strings_to_formulas': False, "strings_to_urls": False}}) as writer:
It used to work just fine, but someone messed up my Python installation, and when I tried to reinstall everything I came across this error.
Note that when I installed the packages I had this warning:
WARNING: The script styleframe.exe is installed in
'C:\Users\achille.gouttard\AppData\Roaming\Python\Python310\Scripts'
which is not on PATH. Consider adding this directory to PATH or, if
you prefer to suppress this warning, use --no-warn-script-location.
I can't find anything related to this issue, so any ideas are welcome.
A:
This happens because the openpyxl engine passes engine_kwargs straight to openpyxl's Workbook(), which has no options argument.
You need to change the engine parameter (for example, 'xlsxwriter' works perfectly).
with pd.ExcelWriter(name, engine='xlsxwriter', engine_kwargs={'options': {'strings_to_numbers': True}}) as writer:
stuff()
Also note that if you do not have the xlsxwriter library, you should install it manually.
If no engine argument is passed, pandas chooses an engine automatically.
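For reference, a sketch that keeps the original options from the question with the xlsxwriter engine; df and the output path are assumed, while strings_to_formulas and strings_to_urls are documented xlsxwriter Workbook options:
import pandas as pd

with pd.ExcelWriter('out.xlsx', engine='xlsxwriter',
                    engine_kwargs={'options': {'strings_to_formulas': False,
                                               'strings_to_urls': False}}) as writer:
    df.to_excel(writer, sheet_name='Sheet1')  # df: an existing DataFrame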
|
TypeError: Workbook.__init__() got an unexpected keyword argument 'options'
|
I am getting this error
Traceback (most recent call last): File
"C:\Python310\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args) File "C:\Users\achille.gouttard\Documents\synergie\11_16_2021\app.py", line
861, in selectItem
self.find_match(1) File "C:\Users\achille.gouttard\Documents\synergie\11_16_2021\app.py", line
673, in find_match
with ExcelWriter(path, engine="openpyxl", File "C:\Users\achille.gouttard\AppData\Roaming\Python\Python310\site-packages\pandas\io\excel\_openpyxl.py",
line 72, in __init__
self.book = Workbook(**engine_kwargs) TypeError: Workbook.__init__() got an unexpected keyword argument 'options'
from this line of code:
with ExcelWriter(path, engine="openpyxl",
engine_kwargs={'options': {'strings_to_formulas': False, "strings_to_urls": False}}) as writer:
It used to work just fine, but someone messed up my Python installation, and when I tried to reinstall everything I came across this error.
Note that when I installed the packages I had this warning:
WARNING: The script styleframe.exe is installed in
'C:\Users\achille.gouttard\AppData\Roaming\Python\Python310\Scripts'
which is not on PATH. Consider adding this directory to PATH or, if
you prefer to suppress this warning, use --no-warn-script-location.
I can't find anything related to this issue, so any ideas are welcome.
|
[
"It is because engine openpyxl does not support engine kwargs.\nYou need to change the engine parametr (for example 'xlsxwriter' works perfect).\nwith pd.ExcelWriter(name, engine='xlsxwriter', engine_kwargs={'options': {'strings_to_numbers': True}}) as writer:\n stuff()\n\nAlso be note that if you do not have xlsxwriter library you should install it manually.\nIf there are no engine arg passed, it chooses engine automaticly.\n"
] |
[
1
] |
[] |
[] |
[
"openpyxl",
"python",
"sysadmin"
] |
stackoverflow_0071479516_openpyxl_python_sysadmin.txt
|
Q:
Python list comprehension filtering out files in directory
#To pull content from dirNames directory
dirNames = str(glob.glob('.\\Output'+ globalData.__TWO_BACK_SLASH_SEPERATORS__ + globalData.__MANUAL_STRING_OUTPUT__ + globalData.__TWO_BACK_SLASH_SEPERATORS__ +'*'))
print ("Dir Names:: "+dirNames)
#Seperate the contents of dirNames into list dirNames[0], dirNames[1]....etc
dirNames = [dir for dir in dirNames if os.path.isdir( dir)]
print("\nPost Filetring Dir names are ::{}".format(dirNames))
Output: Dir Names:: ['.\Output\ManualOutput\152_156_92_230_NLA_6_0_0_08112022',
'.\Output\ManualOutput\152_156_92_230_NLA_6_0_0_112022_1_REV01',
'.\Output\ManualOutput\152_156_92_230_NLA_6_0_0_21112022_1_REV01']
Post Filetring Dir names are ::['.', '\', '\', '\', '\', '\',
'\', '.', '\', '\', '\', '\', '\', '\', '.', '\', '\', '\',
'\', '\', '\']
The output after filtering is wrong: only single characters such as dots and backslashes survive. This code works fine in another module where the directory is only .\Output\152_156_92_230_NLA_6_0_0_112022_1_REV01, but with the directory format .\Output\ManualOutput\152_156_92_230_NLA_6_0_0_112022_1_REV01 I am facing this issue.
A:
Your problem is that dirNames is a single string: str(glob.glob(...)) converts glob's list of paths into its printed representation, so iterating over dirNames yields one character at a time, and os.path.isdir receives characters instead of paths. Drop the str() wrapper so glob's result stays a list, then filter it:
dirNames = glob.glob('.\\Output' + globalData.__TWO_BACK_SLASH_SEPERATORS__ + globalData.__MANUAL_STRING_OUTPUT__ + globalData.__TWO_BACK_SLASH_SEPERATORS__ + '*')
dirNames = [d for d in dirNames if os.path.isdir(d)]
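An equivalent sketch with pathlib, which avoids the separator constants entirely; the literal path mirrors the directory layout from the question:
from pathlib import Path

dir_names = [p for p in Path('Output/ManualOutput').glob('*') if p.is_dir()]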
|
Python list comprehension filtering out files in directory
|
#To pull content from dirNames directory
dirNames = str(glob.glob('.\\Output'+ globalData.__TWO_BACK_SLASH_SEPERATORS__ + globalData.__MANUAL_STRING_OUTPUT__ + globalData.__TWO_BACK_SLASH_SEPERATORS__ +'*'))
print ("Dir Names:: "+dirNames)
#Seperate the contents of dirNames into list dirNames[0], dirNames[1]....etc
dirNames = [dir for dir in dirNames if os.path.isdir( dir)]
print("\nPost Filetring Dir names are ::{}".format(dirNames))
Output: Dir Names:: ['.\Output\ManualOutput\152_156_92_230_NLA_6_0_0_08112022',
'.\Output\ManualOutput\152_156_92_230_NLA_6_0_0_112022_1_REV01',
'.\Output\ManualOutput\152_156_92_230_NLA_6_0_0_21112022_1_REV01']
Post Filetring Dir names are ::['.', '\', '\', '\', '\', '\',
'\', '.', '\', '\', '\', '\', '\', '\', '.', '\', '\', '\',
'\', '\', '\']
The output after filtering is wrong: only single characters such as dots and backslashes survive. This code works fine in another module where the directory is only .\Output\152_156_92_230_NLA_6_0_0_112022_1_REV01, but with the directory format .\Output\ManualOutput\152_156_92_230_NLA_6_0_0_112022_1_REV01 I am facing this issue.
|
[
"Your problem is that you're passing a list of strings to os.path.isdir, which isn't what it expects. Either pass it each directory name individually, or split on the newlines.\ndirNames = [dir for dir in dirNames.split('\\n') if os.path.isdir( dir)]\n\n"
] |
[
0
] |
[] |
[] |
[
"automation",
"database",
"directory",
"file",
"python"
] |
stackoverflow_0074546238_automation_database_directory_file_python.txt
|
Q:
to extract the data in a csv file which is given in the config.json file
I have a config.json file and the data inside it is:
""
{
"mortalityfile":"C:/Users/DELL/mortality.csv"
}
and the mortality file is a CSV file with some data. I want to extract the CSV file's data using the path stored in config.json. The code I wrote is:
js = open('config.json').read()
results = []
for line in js:
words = line.split(',')
results.append((words[0:]))
print(results)
and I am getting the characters of the config file itself as output:
[['{'], ['\n'], [' '], [' '], [' '], [' '], ['"'], ['m'], ['o'], ['r'], ['t'], ['a'], ['l'], ['i'], ['t'], ['y'], ['f'], ['i'], ['l'], ['e'], ['"'], [':'], ['"'], ['C'], [':'], ['/'], ['U'], ['s'], ['e'], ['r'], ['s'], ['/'], ['D'], ['E'], ['L'], ['L'], ['/'], ['m'], ['o'], ['r'], ['t'], ['a'], ['l'], ['i'], ['t'], ['y'], ['.'], ['c'], ['s'], ['v'], ['"'], ['\n'], [' '], [' '], [' '], [' '], ['\n'], ['}']]
I want to extract, in Python, the data stored in the CSV file referenced by config.json.
A:
I think you are confusing reading your .csv and reading your .json files.
import json
# open the json
config_file = open('config.json')
# convert it to a dict
data = json.load(config_file)
# open your csv
with open(data['mortalityfile'], 'r') as f:
# do stuff with you csv data
csv_data = f.readlines()
result = []
for line in csv_data:
split_line = line.rstrip().split(',')
result.append(split_line)
print(result)
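As a variant, a sketch that delegates the parsing to the standard csv module, which also handles quoted fields containing commas:
import csv
import json

with open('config.json') as config_file:
    data = json.load(config_file)

with open(data['mortalityfile'], newline='') as f:
    result = list(csv.reader(f))

print(result)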
|
to extract the data in a csv file which is given in the config.json file
|
I have a config.json file and the data inside it is:
""
{
"mortalityfile":"C:/Users/DELL/mortality.csv"
}
and the mortality file is a CSV file with some data. I want to extract the CSV file's data using the path stored in config.json. The code I wrote is:
js = open('config.json').read()
results = []
for line in js:
words = line.split(',')
results.append((words[0:]))
print(results)
and I am getting the characters of the config file itself as output:
[['{'], ['\n'], [' '], [' '], [' '], [' '], ['"'], ['m'], ['o'], ['r'], ['t'], ['a'], ['l'], ['i'], ['t'], ['y'], ['f'], ['i'], ['l'], ['e'], ['"'], [':'], ['"'], ['C'], [':'], ['/'], ['U'], ['s'], ['e'], ['r'], ['s'], ['/'], ['D'], ['E'], ['L'], ['L'], ['/'], ['m'], ['o'], ['r'], ['t'], ['a'], ['l'], ['i'], ['t'], ['y'], ['.'], ['c'], ['s'], ['v'], ['"'], ['\n'], [' '], [' '], [' '], [' '], ['\n'], ['}']]
I want to extract, in Python, the data stored in the CSV file referenced by config.json.
|
[
"I think you are confusing reading your .csv and reading your .json files.\nimport json\n\n# open the json\nconfig_file = open('config.json')\n\n# convert it to a dict\ndata = json.load(config_file)\n\n# open your csv\nwith open(data['mortalityfile'], 'r') as f:\n # do stuff with you csv data\n csv_data = f.readlines()\n result = []\n for line in csv_data:\n split_line = line.rstrip().split(',')\n result.append(split_line)\n\nprint(result)\n\n"
] |
[
0
] |
[] |
[] |
[
"config.json",
"csv",
"json",
"python"
] |
stackoverflow_0074546380_config.json_csv_json_python.txt
|
Q:
Whenever I run '!pip list' or some command inside jupyter notebook it returns '& was unexpected at this time.'
I'm using a conda (miniconda3) env, and inside it I have Jupyter Notebook installed.
If I run a command from the notebook, it returns & was unexpected at this time..
%pip list also shows the same result
Some Info:
OS: Windows 10;
Python Version: 3.9.15 ( miniconda3 )
I didn't find any good solution.
I'm expecting to have that command executed.
A:
Please see here for a detailed answer:
What is the meaning of exclamation and question marks in Jupyter notebook?
What !pip does in your case is run the command in the system shell:
cmd> pip
and in your environment pip is not resolving to a valid Windows cmd command.
What you're looking for might be:
%pip list
Edit: this is my guess; I don't have a Windows computer here to test it.
|
Whenever I run '!pip list' or some command inside jupyter notebook it returns '& was unexpected at this time.'
|
I'm using a conda (miniconda3) env, and inside it I have Jupyter Notebook installed.
If I run a command from the notebook, it returns & was unexpected at this time..
%pip list also shows the same result
Some Info:
OS: Windows 10;
Python Version: 3.9.15 ( miniconda3 )
I didn't find any good solution.
I'm expecting to have that command executed.
|
[
"Please see here for detailed anwser:\nWhat is the meaning of exclamation and question marks in Jupyter notebook?\nWhat !pip is doing in your case is:\ncmd> pip\nand pip is not a valid windows cmd command.\nWhat you're looking for might be:\n%pip list\nedit: this is my guess, I don't have a windows computer here to test this\n"
] |
[
0
] |
[] |
[] |
[
"conda",
"jupyter_notebook",
"powershell",
"python",
"windows"
] |
stackoverflow_0074546483_conda_jupyter_notebook_powershell_python_windows.txt
|
Q:
How to consolidate different methods in Python
I have this operation of filling missing values.
mean_impute = df['column'].fillna(value=df['column'].mean())
median_impute = df['column'].fillna(value=df['column'].median())
mode_impute = df['column'].fillna(value=df['column'].mode())
Is there any way on how to replicate this line of code in a much cleaner way, is there a way to loop on this or to create a function?
A:
This might not be best practice (because of eval), but you can avoid repeating yourself by storing your results in a dictionary:
impute = dict()
for fun in ["mean", "median", "mode"]:
impute[fun] = eval(f"df['column'].fillna(value=df['column'].{fun}())")
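A sketch that avoids eval by looking the method up by name with getattr; note that mode() returns a Series, so a scalar such as df['column'].mode()[0] may be what you actually want:
impute = {
    fun: df['column'].fillna(value=getattr(df['column'], fun)())
    for fun in ('mean', 'median', 'mode')
}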
|
How to consolidate different methods in Python
|
I have this operation of filling missing values.
mean_impute = df['column'].fillna(value=df['column'].mean())
median_impute = df['column'].fillna(value=df['column'].median())
mode_impute = df['column'].fillna(value=df['column'].mode())
Is there any way on how to replicate this line of code in a much cleaner way, is there a way to loop on this or to create a function?
|
[
"This might not be best practice (because of eval) but you could avoid to repeat yourself by storing your results in a dictionary:\nimpute = dict()\n\nfor fun in [\"mean\", \"median\", \"mode\"]:\n impute[fun] = eval(f\"df['column'].fillna(value=df['column'].{fun}())\")\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python",
"statistics"
] |
stackoverflow_0074543298_dataframe_pandas_python_statistics.txt
|
Q:
Subsequent Django Unittests raise "MySQLdb.OperationalError: (2006, '')"
I am unit-testing a method that creates an HttpResponse and delivers a CSV file.
def _generate_csv(self):
filename = self._get_filename('csv')
# Create the HttpResponse object with the appropriate CSV header.
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename=%s' % filename
writer = csv.DictWriter(response, csv_header_list, delimiter=";")
writer.writeheader()
for obj in object_list:
writer.writerow(csv_dict)
response.close()
return response
When running my unit tests, all tests before this one pass, but all subsequent tests fail with this obscure error:
MySQLdb.OperationalError: (2006, '')
Any idea why testing this method breaks the other tests?
A:
OK, found it! Closing the response is killing the database connection (error 2006 means "MySQL server has gone away").
response.close()
Just mock the close away in your tests:
@mock.patch.object(HttpResponse, 'close')
def test_generate_csv_case_x(self, *args):
...
|
Subsequent Django Unittests raise "MySQLdb.OperationalError: (2006, '')"
|
I am unit-testing a method that creates an HttpResponse and delivers a CSV file.
def _generate_csv(self):
filename = self._get_filename('csv')
# Create the HttpResponse object with the appropriate CSV header.
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename=%s' % filename
writer = csv.DictWriter(response, csv_header_list, delimiter=";")
writer.writeheader()
for obj in object_list:
writer.writerow(csv_dict)
response.close()
return response
When running my unit tests, all tests before this one pass, but all subsequent tests fail with this obscure error:
MySQLdb.OperationalError: (2006, '')
Any idea why testing this method breaks the other tests?
|
[
"Ok, found it! Closing the response is killing the database connection (2206 means \"Database gone away\")\nresponse.close()\n\nJust mock the close away in your tests:\n@mock.patch.object(HttpResponse, 'close')\ndef test_generate_csv_case_x(self, *args):\n ...\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"mysql_error_1064",
"python",
"unit_testing"
] |
stackoverflow_0074546616_django_mysql_error_1064_python_unit_testing.txt
|
Q:
how do i remove characters between 2 different characters inside a string
So I have this inside a text file:
"00:00:25,58 --> 00:00:27,91 (DRAMATIC MUSIC PLAYING)"
I want to remove the characters inside the parentheses, including the parentheses themselves, so:
"00:00:25,58 --> 00:00:27,91 "
eng_sub = open(text).read()
eng_sub2 = re.sub("\(", "", eng_sub)
new_eng_sub = re.sub("\)", "", eng_sub2)
open(text, "w").write(new_eng_sub)
I've tried using sub(), and it removes a character, but what I really want is to manipulate the characters between those two characters (i.e. "(" and ")").
I don't know how to do it. Thank you for your help.
A:
You may try matching on the pattern \(.*?\):
eng_sub = open(text).read()
eng_sub2 = re.sub(r'\(.*?\)', '', eng_sub)
open(text, "w").write(eng_sub2)
A:
Indeed, "sub" on its own simply deletes the pattern.
What you can also do (and it is not too complex) is use "findall" (also in the re library), which lets you extract the pattern from the string first.
Here's a simple example :
import re
text = "00:00:25,58 --> 00:00:27,91 (DRAMATIC MUSIC PLAYING)"
print(re.findall(r"\(.*\)", text)[0])
Output: (DRAMATIC MUSIC PLAYING)
Once you have extracted what you want to manipulate, you can delete this pattern via sub
print(re.sub(r"\(.*\)", '', text))
|
how do i remove characters between 2 different characters inside a string
|
So I have this inside a text file:
"00:00:25,58 --> 00:00:27,91 (DRAMATIC MUSIC PLAYING)"
I want to remove the characters inside the parentheses, including the parentheses themselves, so:
"00:00:25,58 --> 00:00:27,91 "
eng_sub = open(text).read()
eng_sub2 = re.sub("\(", "", eng_sub)
new_eng_sub = re.sub("\)", "", eng_sub2)
open(text, "w").write(new_eng_sub)
I've tried using sub(), and it removes a character, but what I really want is to manipulate the characters between those two characters (i.e. "(" and ")").
I don't know how to do it. Thank you for your help.
|
[
"You may try matching on the pattern \\(.*?\\):\neng_sub = open(text).read()\neng_sub2 = re.sub(r'\\(.*?\\)', '', eng_sub)\n\nopen(text, \"w\").write(eng_sub2)\n\n",
"Indeed, you can't use the \"sub\" method which will simply delete the pattern.\nBut what you can do (and which is not too complex) is to use \"findall\" (also present in the re library) which allows you to extract a pattern in the STR\nHere's a simple example :\nimport re\ntext = \"00:00:25,58 --> 00:00:27,91 (DRAMATIC MUSIC PLAYING)\"\nprint(re.findall(r\"\\(.*\\)\", text)[0])\n\nOutput: (DRAMATIC MUSIC PLAYING)\nOnce you have extracted what you want to manipulate, you can delete this pattern via sub\nprint(re.sub(r\"\\(.*\\)\", '', text))\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"python_3.x",
"string"
] |
stackoverflow_0074546461_python_python_3.x_string.txt
|
Q:
Generating embedding for long documents using pre-trained word vectors
I have a set of pre-trained word embeddings from the Wikipedia corpus. I also have 300 dimension embeddings of Wikipedia article pages. I am looking to build a similarity engine by running a simple cosine similarity algorithm for any new query (long documents) against these pre-trained embeddings. To do this, I want to represent any new input document as a 300d vector using the pre-trained word embeddings and then run cosine similarity against the corpus. How can this be achieved?
A:
You can use a doc2vec model to represent documents as vectors. It is a generalization of the word2vec method.
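Since the question starts from pre-trained word vectors, a hedged sketch of the simpler mean-pooling baseline; embeddings is assumed to be a dict mapping each word to a 300-d numpy array:
import numpy as np

def doc_vector(tokens, embeddings, dim=300):
    # average the vectors of in-vocabulary tokens; zero vector if none match
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))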
|
Generating embedding for long documents using pre-trained word vectors
|
I have a set of pre-trained word embeddings from the Wikipedia corpus. I also have 300 dimension embeddings of Wikipedia article pages. I am looking to build a similarity engine by running a simple cosine similarity algorithm for any new query (long documents) against these pre-trained embeddings. To do this, I want to represent any new input document as a 300d vector using the pre-trained word embeddings and then run cosine similarity against the corpus. How can this be achieved?
|
[
"You can use doc2vec model for representing documents as a vector. It is a generalizing of the word2vec method.\n"
] |
[
0
] |
[] |
[] |
[
"huggingface_transformers",
"nlp",
"python",
"sentence_similarity",
"word_embedding"
] |
stackoverflow_0074450390_huggingface_transformers_nlp_python_sentence_similarity_word_embedding.txt
|
Q:
cross check if two df have different values and print any if there
I have two DataFrames, and I want to check, per id, whether the values differ between them; if so, I need to print those rows.
example:
df1 = |id |check_column1|
|1|abc|
|1|bcd|
|2|xyz|
|2|mno|
|2|mmm|
df2 =
|id |check_column2|
|1|bcd|
|1|abc|
|2|xyz|
|2|mno|
|2|kkk|
Here the output should be just |2|mmm|kkk|, but I am getting the whole table as output since the indexes are different.
This is what I did:
output = pd.merge(df1,df2, on= ['id'], how='inner')
event4 = output[output.apply(lambda x: x['check_column1'] != x['check_column2'], axis=1)]
A:
Idea is sorting values per id in both columns and join with helper counter by GroupBy.cumcount, then is possible filtering not matched rows:
df1 = df1.sort_values(['id','check_column1'])
df2 = df2.sort_values(['id','check_column2'])
df = pd.merge(df1,df2, left_on= ['id',df1.groupby('id').cumcount()],
right_on= ['id',df2.groupby('id').cumcount()])
output = df[df['check_column1'] != df['check_column2']]
print (output)
id key_1 check_column1 check_column2
2 2 0 mmm kkk
A:
mask = np.where((df1['id'] != df2['id']) | (df1['check_column1'] != df2['check_column2']), True, False)
output = df2[mask]
A:
You can use np.where to achieve this.
df1 = pd.DataFrame({'id':[1,1,2,2,2],'check_column1':['abc','bcd','xyz','mno','mmm']})
df2 = pd.DataFrame({'id':[1,1,2,2,2],'check_column2':['bcd','abc','xyz','mno','kkk']})
output = pd.merge(df1,df2, on= ['id'], how='inner')
event4 = np.where(output['check_column1']!=output['check_column2'],output[['id','check_column1']],output[['id','check_column2']])
Output:
array([[2, 'mmm'],
[2, 'kkk']], dtype=object)
|
cross check if two df have different values and print any if there
|
I have two DataFrames, and I want to check, per id, whether the values differ between them; if so, I need to print those rows.
example:
df1 = |id |check_column1|
|1|abc|
|1|bcd|
|2|xyz|
|2|mno|
|2|mmm|
df2 =
|id |check_column2|
|1|bcd|
|1|abc|
|2|xyz|
|2|mno|
|2|kkk|
Here the output should be just |2|mmm|kkk|, but I am getting the whole table as output since the indexes are different.
This is what I did:
output = pd.merge(df1,df2, on= ['id'], how='inner')
event4 = output[output.apply(lambda x: x['check_column1'] != x['check_column2'], axis=1)]
|
[
"Idea is sorting values per id in both columns and join with helper counter by GroupBy.cumcount, then is possible filtering not matched rows:\ndf1 = df1.sort_values(['id','check_column1'])\ndf2 = df2.sort_values(['id','check_column2'])\n \ndf = pd.merge(df1,df2, left_on= ['id',df1.groupby('id').cumcount()], \n right_on= ['id',df2.groupby('id').cumcount()])\n\noutput = df[df['check_column1'] != df['check_column2']]\nprint (output)\n id key_1 check_column1 check_column2\n2 2 0 mmm kkk\n\n",
"mask = np.where((df1['id'] != df2['id']) | (df1['check_column1'] != df2['check_column2']), True, False)\n\noutput = df2[mask]\n\n",
"You can use np.where to achieve this.\ndf1 = pd.DataFrame({'id':[1,1,2,2,2],'check_column1':['abc','bcd','xyz','mno','mmm']})\ndf2 = pd.DataFrame({'id':[1,1,2,2,2],'check_column2':['bcd','abc','xyz','mno','kkk']})\n\noutput = pd.merge(df1,df2, on= ['id'], how='inner')\nevent4 = np.where(output['check_column1']!=output['check_column2'],output[['id','check_column1']],output[['id','check_column2']])\n\nOutput:\narray([[2, 'mmm'],\n [2, 'kkk']], dtype=object)\n\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"dataframe",
"lambda",
"merge",
"pandas",
"python"
] |
stackoverflow_0074546214_dataframe_lambda_merge_pandas_python.txt
|
Q:
SQLAlchemy select : How to assign a column name to an ORM entity?
Given the SQLAlchemy 1.4 query below, I am selecting an ORM entity, models.Person with two additional calculated columns, prev and next.
@pytest.mark.wip
@pytest.mark.asyncio
async def test_sqlalchemy(people: AsyncSession) -> None:
session = people
query = (
select(
models.Person,
Bundle(
"navigation",
sa.func.lag(models.Person.cursor)
.over(order_by=models.Person.cursor)
.label("prev"),
sa.func.lead(models.Person.cursor)
.over(order_by=models.Person.cursor)
.label("next"),
),
)
.where(models.Person.id > "26744d86-1918-4f67-92a0-e7ca12043721")
.order_by(models.Person.id.asc())
)
iterator = await session.execute(query)
rows = iterator.all()
print(f"person = {rows[0][0]}")
print(f"next={rows[0].navigation.next} prev={rows[0].navigation.prev}")
When I execute the query and access a resulting row, I can access the next and prev columns via a named attribute, e.g. rows[0].navigation.next. This was possible using the Bundle feature in SQLAlchemy.
However, I can only access the Person ORM entity attributes via column index, e.g rows[0][0].surname. Is it possible in SQLAlchemy to assign an ORM entity a column name so that I can do something like rows[0].person.surname?
I have tried assigning a Bundle to the ORM entity, e.g. Bundle("person", models.Person). However I get an error: AttributeError: 'AnnotatedTable' object has no attribute '_label'.
A:
Solved based on information given here. The method is to unpack each row into one variable per column, e.g.
person, navigation = rows[0]
print(f"person = {person.surname}")
print(f"next={navigation.next} prev={navigation.prev}")
Full listing solution
@pytest.mark.wip
@pytest.mark.asyncio
async def test_sqlalchemy(people: AsyncSession) -> None:
session = people
query = (
select(
models.Person,
Bundle(
"navigation",
sa.func.lag(models.Person.cursor)
.over(order_by=models.Person.cursor)
.label("prev"),
sa.func.lead(models.Person.cursor)
.over(order_by=models.Person.cursor)
.label("next"),
),
)
.where(models.Person.id > "26744d86-1918-4f67-92a0-e7ca12043721")
.order_by(models.Person.id.asc())
)
iterator = await session.execute(query)
rows = iterator.all()
person, navigation = rows[0]
print(f"person = {person.surname}")
print(f"next={navigation.next} prev={navigation.prev}")
|
SQLAlchemy select : How to assign a column name to an ORM entity?
|
Given the SQLAlchemy 1.4 query below, I am selecting an ORM entity, models.Person with two additional calculated columns, prev and next.
@pytest.mark.wip
@pytest.mark.asyncio
async def test_sqlalchemy(people: AsyncSession) -> None:
session = people
query = (
select(
models.Person,
Bundle(
"navigation",
sa.func.lag(models.Person.cursor)
.over(order_by=models.Person.cursor)
.label("prev"),
sa.func.lead(models.Person.cursor)
.over(order_by=models.Person.cursor)
.label("next"),
),
)
.where(models.Person.id > "26744d86-1918-4f67-92a0-e7ca12043721")
.order_by(models.Person.id.asc())
)
iterator = await session.execute(query)
rows = iterator.all()
print(f"person = {rows[0][0]}")
print(f"next={rows[0].navigation.next} prev={rows[0].navigation.prev}")
When I execute the query and access a resulting row, I can access the next and prev columns via a named attribute, e.g. rows[0].navigation.next. This was possible using the Bundle feature in SQLAlchemy.
However, I can only access the Person ORM entity attributes via column index, e.g rows[0][0].surname. Is it possible in SQLAlchemy to assign an ORM entity a column name so that I can do something like rows[0].person.surname?
I have tried assigning a Bundle to the ORM entity, e.g. Bundle("person", models.Person). However I get an error: AttributeError: 'AnnotatedTable' object has no attribute '_label'.
|
[
"Solved based on information given here. The method was to retrieve each column into a variable, e.g\nperson, navigation = rows[0]\n print(f\"person = {person.surname}\")\n print(f\"next={navigation.next} prev={navigation.prev}\")\n\nFull listing solution\n @pytest.mark.wip\n @pytest.mark.asyncio\n async def test_sqlalchemy(people: AsyncSession) -> None:\n session = people\n\n query = (\n select(\n models.Person,\n Bundle(\n \"navigation\",\n sa.func.lag(models.Person.cursor)\n .over(order_by=models.Person.cursor)\n .label(\"prev\"),\n sa.func.lead(models.Person.cursor)\n .over(order_by=models.Person.cursor)\n .label(\"next\"),\n ),\n )\n .where(models.Person.id > \"26744d86-1918-4f67-92a0-e7ca12043721\")\n .order_by(models.Person.id.asc())\n )\n\n iterator = await session.execute(query) File not indexed\n rows = iterator.all()\n\n person, navigation = rows[0]\n print(f\"person = {person.surname}\")\n print(f\"next={navigation.next} prev={navigation.prev}\")\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0074546525_python_sqlalchemy.txt
|
Q:
Is it possible to check if a phone number is real or not in Python?
I have thousands of random phone numbers and I want to check which of them are real and currently working. The is_valid_number and is_possible_number methods in the phonenumbers library won't solve my issue.
Is there any way to identify this in Python? Perhaps a way to send a ping as an SMS to the list of numbers and get some kind of response back to validate them?
I have seen few posts on stackoverflow like this International phone number validation
but they are simply validating either the format of the phone numbers or the possibility if the numbers can be right or not. But my requirement is that I want to know if the phone numbers are working in real life or not.
I have tried using phonenumbers library but there is no such method. I couldn't find anything relevant on the web too.
A:
There are a few possible solutions that come to mind:
using a regex plus a dataset of all supported country codes to check the format, but this will not be 100% bulletproof
as @DamDam suggested, you can use Twilio's Confirm Delivery, but it probably won't work on landlines, for example
from a quick internet search I found AbstractAPI, which is meant to validate phone numbers; it has a free version as well
using the phonenumbers package
From what it seems, option 4, using the phonenumbers package, is the best solution (I never tried option 3, so you can try both and choose); this package is a Python port of Google's libphonenumber, which includes a data set of mobile carriers.
Example Usage (there are more usage options, you should visit the docs of this library):
import phonenumbers
from phonenumbers import carrier
from phonenumbers.phonenumberutil import number_type
number = "+972 5012345678"
carrier._is_mobile(number_type(phonenumbers.parse(number)))
|
Is it possible to check if a phone number is real or not in Python?
|
I have thousands of random phone numbers and I want to check which of them are real and currently working. The is_valid_number and is_possible_number methods in the phonenumbers library won't solve my issue.
Is there any way to identify this in Python? Perhaps a way to send a ping as an SMS to the list of numbers and get some kind of response back to validate them?
I have seen few posts on stackoverflow like this International phone number validation
but they are simply validating either the format of the phone numbers or the possibility if the numbers can be right or not. But my requirement is that I want to know if the phone numbers are working in real life or not.
I have tried using phonenumbers library but there is no such method. I couldn't find anything relevant on the web too.
|
[
"There are a few possible solutions that come to mind:\n\nusing regex and dataset of all the supported countries code, to check but it will not be 100% bulletproof\nlike @DamDam Suggested you can use Twillo Confirm Delivery but it will probably wont work on landlines for example\nfrom a quick internet search I have found this AbstractAPI which meant to validate phone numbers, it has a free version as well\nusing the phonenumbers package\n\nFrom what it seems, option number 4, using the phonenumbers package seems like the best solution (I never tried option number 3 so you can try and choose), this package is a python port of Google's libphonenumber which includes a data set of mobile carriers.\nExample Usage (there are more usage options, you should visit the docs of this library):\nimport phonenumbers\nfrom phonenumbers import carrier\nfrom phonenumbers.phonenumberutil import number_type\n\nnumber = \"+972 5012345678\"\ncarrier._is_mobile(number_type(phonenumbers.parse(number)))\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074545014_python_python_3.x.txt
|
Q:
How to split the prefix of currency symbol in separate column in Pandas Data Frame
Amount
0 250000
1 ₹40,000,000
2 ₹65,000,000
3 2000000
4 —
... ...
521 225000000
522 —
523 7500
524 ₹35,000,000
525 35000000
526 rows × 1 columns
How can we split the Amount column into separate currency-symbol and amount columns?
A:
You can use str.extract:
df[['currency', 'Amount']] = df['Amount'].str.extract(r'(\D*)(\d.*)')
Output:
Amount currency
0 250000
1 40,000,000 ₹
2 65,000,000 ₹
3 2000000
4 NaN NaN
521 225000000
522 NaN NaN
523 7500
524 35,000,000 ₹
525 35000000
If you further need to convert to number:
df['Amount'] = pd.to_numeric(df['Amount'].str.replace(',', ''))
Output:
Amount currency
0 250000.0
1 40000000.0 ₹
2 65000000.0 ₹
3 2000000.0
4 NaN NaN
521 225000000.0
522 NaN NaN
523 7500.0
524 35000000.0 ₹
525 35000000.0
|
How to split the prefix of currency symbol in separate column in Pandas Data Frame
|
Amount
0 250000
1 ₹40,000,000
2 ₹65,000,000
3 2000000
4 —
... ...
521 225000000
522 —
523 7500
524 ₹35,000,000
525 35000000
526 rows × 1 columns
How can we split the Amount column into separate currency-symbol and amount columns?
|
[
"You can use str.extract:\ndf[['currency', 'Amount']] = df['Amount'].str.extract(r'(\\D*)(\\d.*)')\n\nOutput:\n Amount currency\n0 250000 \n1 40,000,000 ₹\n2 65,000,000 ₹\n3 2000000 \n4 NaN NaN\n521 225000000 \n522 NaN NaN\n523 7500 \n524 35,000,000 ₹\n525 35000000 \n\nIf you further need to convert to number:\ndf['Amount'] = pd.to_numeric(df['Amount'].str.replace(',', ''))\n\nOutput:\n Amount currency\n0 250000.0 \n1 40000000.0 ₹\n2 65000000.0 ₹\n3 2000000.0 \n4 NaN NaN\n521 225000000.0 \n522 NaN NaN\n523 7500.0 \n524 35000000.0 ₹\n525 35000000.0 \n\n"
] |
[
0
] |
[] |
[] |
[
"data_science",
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074546695_data_science_dataframe_pandas_python.txt
|
Q:
How can i make selenium to parse every network request?
I am trying to capture all requests with their responses using this code
capabilities = DesiredCapabilities.CHROME
capabilities["goog:loggingPrefs"] = {"performance": "ALL"}
driver.get("<URL>")
def log_filter(log_):
return (
# is an actual response
log_["method"] == "Network.responseReceived"
# and json
and "json" in log_["params"]["response"]["mimeType"]
)
#sys.stdout = open('C:\\Users\\Zile\\Videos\\mhm\\output.txt', 'wt')
sys.stdout = open('output.txt', 'wt')
for log in filter(log_filter, logs):
request_id = log["params"]["requestId"]
resp_url = log["params"]["response"]["url"]
print(f"Caught {resp_url}")
print(driver.execute_cdp_cmd(
"Network.getResponseBody", {"requestId": request_id}))
It works well and saves everything into the txt file, but the problem is that it only saves responses whose domain is similar to the driver.get URL; if the URL is google.com it captures everything Google sent, but nothing from other origins.
How can I make it capture everything?
A:
You can use the browsermob proxy for this.
from selenium import webdriver
from browsermobproxy import Server
server = Server("/path/to/browsermob-proxy")
server.start()
proxy = server.create_proxy()
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--proxy-server={0}".format(proxy.proxy))
chrome = webdriver.Chrome(chrome_options=chrome_options)
proxy.new_har("request")
With the proxy attached to your driver this way, all the request details are recorded and can be saved in a .har file.
To filter the records from the .har file you can use the haralyzer package.
This will filter the records for you according to the URL.
from haralyzer import HarPage
har_page = HarPage(har_data=har_data)
print(har_page.entries[0]) # HAR Entry dict of first entry
print(har_page.entries[:2]) # List of HAR Entry dicts of first two entries
print(har_page.media_urls) # List of all media URLS across all entries
print(har_page.html_urls) # List of all HTML document URLS across all entries
print(har_page.other_urls) # List of all non-HTML, non-media URLS across all entries
print(har_page.all_urls) # Combined list of all URLs across all entries
print(har_page.all_js_urls) # List of all JS URLs loaded across all entries
print(har_page.all_css_urls) # List of all CSS URLs loaded across all entries
print(har_page.all_image_urls) # List of all image URLs loaded across all entries
print(har_page.all_font_urls) # List of all font URLs loaded across all entries
print(har_page.all_other_urls) # List of all other URLs loaded across all entries
print(har_page.size) # Total number of bytes comprising all entries
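To complete the flow, a hedged sketch (reusing the server, proxy and chrome objects from above; the URL is just a placeholder) that navigates, grabs the recorded HAR and writes it to disk:
import json

chrome.get("https://example.com")   # navigate while the proxy records traffic
har_data = proxy.har                # HAR dict of everything captured so far
with open("requests.har", "w") as f:
    json.dump(har_data, f)
server.stop()
chrome.quit()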
|
How can I make Selenium parse every network request?
|
I am trying to capture all requests with their responses using this code
capabilities = DesiredCapabilities.CHROME
capabilities["goog:loggingPrefs"] = {"performance": "ALL"}
driver.get("<URL>")
def log_filter(log_):
return (
# is an actual response
log_["method"] == "Network.responseReceived"
# and json
and "json" in log_["params"]["response"]["mimeType"]
)
#sys.stdout = open('C:\\Users\\Zile\\Videos\\mhm\\output.txt', 'wt')
sys.stdout = open('output.txt', 'wt')
for log in filter(log_filter, logs):
request_id = log["params"]["requestId"]
resp_url = log["params"]["response"]["url"]
print(f"Caught {resp_url}")
print(driver.execute_cdp_cmd(
"Network.getResponseBody", {"requestId": request_id}))
It works great and saves everything into the txt file, but the problem is that it only saves responses whose domain matches the driver.get URL: if the URL is google.com it will parse everything that google sent, but nothing from other origins.
How can I make it parse everything?
|
[
"You can use the browsermob proxy for this.\nfrom selenium import webdriver\nfrom browsermobproxy import Server\n\nserver = Server(\"/path/to/browsermob-proxy\")\nserver.start()\nproxy = server.create_proxy()\n\nchrome_options = webdriver.ChromeOptions()\nchrome_options.add_argument(\"--proxy-server={0}\".format(proxy.proxy))\nchrome = webdriver.Chrome(chrome_options=chrome_options)\nproxy.new_har(\"request\")\n\nNow to your driver you can add the proxy and then save all the details in a .har file.\nTo filter the records from .har file you can use the python-har.\nThis will filter the records for you according to the url.\nfrom haralyzer import HarPage\nhar_page = HarPage(har_data=har_data)\nprint(har_page.entries[0]) # HAR Entry dict of first entry\nprint(har_page.entries[:2]) # List of HAR Entry dicts of first two entries\nprint(har_page.media_urls) # List of all media URLS across all entries\nprint(har_page.html_urls) # List of all HTML document URLS across all entries\nprint(har_page.other_urls) # List of all non-HTML, non-media URLS across all entries\nprint(har_page.all_urls) # Combined list of all URLs across all entries\nprint(har_page.all_js_urls) # List of all JS URLs loaded across all entries\nprint(har_page.all_css_urls) # List of all CSS URLs loaded across all entries\nprint(har_page.all_image_urls) # List of all image URLs loaded across all entries\nprint(har_page.all_font_urls) # List of all font URLs loaded across all entries\nprint(har_page.all_other_urls) # List of all other URLs loaded across all entries\nprint(har_page.size) # Total number of bytes comprising all entries\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"selenium",
"selenium_webdriver"
] |
stackoverflow_0074546174_python_selenium_selenium_webdriver.txt
|
Q:
Explaining the memory usage pattern of a multiprocessing pool used for ETL
I am using a multiprocessing.Pool for ETL processing of several thousand parquet files. Each worker applies a processing function to a parquet file and returns the result to the main process, which aggregates data from all workers.
The pool is configured with 16 workers and maxtasksperchild=1
I measured the RAM usage and observed a rather unusual pattern:
It's not clear to me what causes those momentary spikes, which tend to grow (enveloped by the blue line). I would've expected a small growth due to the memory used by the main process to aggregate worker data (green envelope).
Since the bottleneck of the entire processing is loading the parquet data (using pandas.read_parquet), the spikes may be due to all workers loading the data simultaneously. However, this does not explain the increasing envelope of the spikes. All the parquet files are close to identical in size, so the spikes' heights shouldn't grow.
Further debugging showed that this pattern is visible for other values of maxtasksperchild as well; only with maxtasksperchild=None (so worker processes live as long as the pool) does the memory pattern no longer show spikes:
My question is what's causing the spikes and their growth, when maxtasksperchild is not None?
A:
Following @juanpa-arrivillaga comments, I understand the potential caveats of naively using free to track memory usage. To this extent, I switched to using mprof, configured to include memory usage of the child processes, as well as track each one separately:
mprof run --include-children --multiprocess <cmd>
The output looks like this (a per-process memory usage plot, not reproduced here):
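For context, a minimal sketch of the kind of pool setup being profiled (file names are illustrative, not taken from the question; the worker count and maxtasksperchild echo the question's configuration):
from multiprocessing import Pool
import pandas as pd

def process(path):
    df = pd.read_parquet(path)   # the memory-heavy step
    return df.shape              # return a small summary, not the frame itself

if __name__ == "__main__":
    paths = ["part-0.parquet", "part-1.parquet"]  # hypothetical files
    with Pool(processes=16, maxtasksperchild=1) as pool:
        results = pool.map(process, paths)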
|
Explaining the memory usage pattern of a multiprocessing pool used for ETL
|
I am using a multiprocessing.Pool for ETL processing of several thousand parquet files. Each worker applies a processing function to a parquet file and returns the result to the main process, which aggregates data from all workers.
The pool is configured with 16 workers and maxtasksperchild=1
I measured the RAM usage and observed a rather unusual pattern:
It's not clear to me what causes those momentary spikes, which tend to grow (enveloped by the blue line). I would've expected a small growth due to the memory used by the main process to aggregate worker data (green envelope).
Since the bottleneck of the entire processing is loading the parquet data (using pandas.read_parquet), the spikes may be due to all workers loading the data simultaneously. However, this does not explain the increasing envelope of the spikes. All the parquet files are close to identical in size, so the spikes' heights shouldn't grow.
Further debugging showed that this pattern is visible for other values of maxtasksperchild as well; only with maxtasksperchild=None (so worker processes live as long as the pool) does the memory pattern no longer show spikes:
My question is what's causing the spikes and their growth, when maxtasksperchild is not None?
|
[
"Following @juanpa-arrivillaga comments, I understand the potential caveats of naively using free to track memory usage. To this extent, I switched to using mprof, configured to include memory usage of the child processes, as well as track each one separately:\nmprof run --include-children --multiprocess <cmd>\n\nThe output looks like this:\n\n"
] |
[
1
] |
[] |
[] |
[
"memory",
"multiprocessing",
"python"
] |
stackoverflow_0074543932_memory_multiprocessing_python.txt
|
Q:
Fresh install of conda not working due to "KeyError('pkgs_dirs')" and missing DLLs
I installed a fresh new version of Miniconda, but no matter what I try to do (Install new module in Anaconda: Keyerror('pkgs_dirs',), Download error for us package (KeyError: 'pkgs_dirs'), Change conda default pkgs_dirs and envs dirs), installing packages or reading conda info doesn't seem to work.
An exception is thrown when trying to run, for example, conda install -c anaconda cudatoolkit=10.1. Reinstalling (as this is already a fresh installation) does not help either. All commands are being run from the Miniconda terminal on Windows 10, and MS Redistributables have been installed with default settings (and the system has been rebooted). I was not able to see which DLLs are missing, as the only one I'm able to see with my single brain cell is "shell". I have followed the installation instructions from https://www.tensorflow.org/install/pip#windows.
.condarc file contents are as follow:
envs_dirs:
- 'C:\Users\matti\.conda\envs'
pkgs_dirs:
- 'C:\Users\matti\.conda\pkgs'
The envs directory does not exist, but the pkgs directory does. As such, I've also tried creating the "envs" directory, with no success.
Error thrown when trying to install package:
Collecting package metadata (current_repodata.json): failed
WARNING conda.exceptions:print_unexpected_error_report(1216): KeyError('pkgs_dirs')
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\exceptions.py", line 1082, in __call__
return func(*args, **kwargs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main.py", line 87, in _main
exit_code = do_call(args, p)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\conda_argparse.py", line 84, in do_call
return getattr(module, func_name)(args, parser)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main_install.py", line 20, in execute
install(args, parser, 'install')
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\install.py", line 260, in install
unlink_link_transaction = solver.solve_for_transaction(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 152, in solve_for_transaction
unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 195, in solve_for_diff
final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 300, in solve_final_state
ssc = self._collect_all_metadata(ssc)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\common\io.py", line 88, in decorated
return f(*args, **kwds)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 463, in _collect_all_metadata
index, r = self._prepare(prepared_specs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 1058, in _prepare
reduced_index = get_reduced_index(self.prefix, self.channels,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\index.py", line 288, in get_reduced_index
new_records = SubdirData.query_all(spec, channels=channels, subdirs=subdirs,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 140, in query_all
result = tuple(concat(executor.map(subdir_query, channel_urls)))
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 609, in result_iterator
yield fs.pop().result()
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 446, in result
return self.__get_result()
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 391, in __get_result
raise self._exception
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 132, in <lambda>
subdir_query = lambda url: tuple(SubdirData(Channel(url), repodata_fn=repodata_fn).query(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 145, in query
self.load()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 210, in load
_internal_state = self._load()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 375, in _load
raw_repodata_str = fetch_repodata_remote_request(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 701, in fetch_repodata_remote_request
resp = session.get(join_url(url, filename), headers=headers, proxies=session.proxies,
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 528, in request
prep = self.prepare_request(req)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 456, in prepare_request
p.prepare(
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\models.py", line 320, in prepare
self.prepare_auth(auth, url)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\models.py", line 551, in prepare_auth
r = auth(self)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\connection\session.py", line 110, in __call__
request.url = CondaHttpAuth.add_binstar_token(request.url)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\connection\session.py", line 134, in add_binstar_token
for binstar_url, token in iteritems(read_binstar_tokens()):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\anaconda_client.py", line 35, in read_binstar_tokens
token_dir = _get_binstar_token_directory()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\anaconda_client.py", line 30, in _get_binstar_token_directory
return AppDirs('binstar', 'ContinuumIO').user_data_dir
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 245, in user_data_dir
return user_data_dir(self.appname, self.appauthor,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 67, in user_data_dir
path = os.path.join(_get_win_folder(const), appauthor, appname)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 284, in _get_win_folder_with_pywin32
from win32com.shell import shellcon, shell
ImportError: DLL load failed while importing shell: The specified procedure could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\exceptions.py", line 1214, in print_unexpected_error_report
message_builder.append(get_main_info_str(error_report['conda_info']))
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main_info.py", line 237, in get_main_info_str
info_dict['_' + key] = ('\n' + 26 * ' ').join(info_dict[key])
KeyError: 'pkgs_dirs'
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\exceptions.py", line 1082, in __call__
return func(*args, **kwargs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main.py", line 87, in _main
exit_code = do_call(args, p)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\conda_argparse.py", line 84, in do_call
return getattr(module, func_name)(args, parser)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main_install.py", line 20, in execute
install(args, parser, 'install')
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\install.py", line 260, in install
unlink_link_transaction = solver.solve_for_transaction(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 152, in solve_for_transaction
unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 195, in solve_for_diff
final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 300, in solve_final_state
ssc = self._collect_all_metadata(ssc)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\common\io.py", line 88, in decorated
return f(*args, **kwds)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 463, in _collect_all_metadata
index, r = self._prepare(prepared_specs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 1058, in _prepare
reduced_index = get_reduced_index(self.prefix, self.channels,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\index.py", line 288, in get_reduced_index
new_records = SubdirData.query_all(spec, channels=channels, subdirs=subdirs,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 140, in query_all
result = tuple(concat(executor.map(subdir_query, channel_urls)))
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 609, in result_iterator
yield fs.pop().result()
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 446, in result
return self.__get_result()
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 391, in __get_result
raise self._exception
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 132, in <lambda>
subdir_query = lambda url: tuple(SubdirData(Channel(url), repodata_fn=repodata_fn).query(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 145, in query
self.load()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 210, in load
_internal_state = self._load()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 375, in _load
raw_repodata_str = fetch_repodata_remote_request(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 701, in fetch_repodata_remote_request
resp = session.get(join_url(url, filename), headers=headers, proxies=session.proxies,
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 528, in request
prep = self.prepare_request(req)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 456, in prepare_request
p.prepare(
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\models.py", line 320, in prepare
self.prepare_auth(auth, url)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\models.py", line 551, in prepare_auth
r = auth(self)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\connection\session.py", line 110, in __call__
request.url = CondaHttpAuth.add_binstar_token(request.url)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\connection\session.py", line 134, in add_binstar_token
for binstar_url, token in iteritems(read_binstar_tokens()):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\anaconda_client.py", line 35, in read_binstar_tokens
token_dir = _get_binstar_token_directory()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\anaconda_client.py", line 30, in _get_binstar_token_directory
return AppDirs('binstar', 'ContinuumIO').user_data_dir
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 245, in user_data_dir
return user_data_dir(self.appname, self.appauthor,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 67, in user_data_dir
path = os.path.join(_get_win_folder(const), appauthor, appname)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 284, in _get_win_folder_with_pywin32
from win32com.shell import shellcon, shell
ImportError: DLL load failed while importing shell: The specified procedure could not be found.
`$ C:\ProgramData\Miniconda3\Scripts\conda-script.py install -c anaconda cudatoolkit=10.1`
environment variables:
conda info could not be constructed.
KeyError('pkgs_dirs')
Many places seem to mention that running conda info would somehow help, but even that throws an error:
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\exceptions.py", line 1082, in __call__
return func(*args, **kwargs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main.py", line 87, in _main
exit_code = do_call(args, p)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\conda_argparse.py", line 84, in do_call
return getattr(module, func_name)(args, parser)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main_info.py", line 317, in execute
info_dict = get_info_dict(args.system)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main_info.py", line 164, in get_info_dict
envs_dirs=context.envs_dirs,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\base\context.py", line 517, in envs_dirs
return mockable_context_envs_dirs(self.root_writable, self.root_prefix, self._envs_dirs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\base\context.py", line 91, in mockable_context_envs_dirs
fixed_dirs += join(user_data_dir(APP_NAME, APP_NAME), 'envs'),
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 67, in user_data_dir
path = os.path.join(_get_win_folder(const), appauthor, appname)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 284, in _get_win_folder_with_pywin32
from win32com.shell import shellcon, shell
ImportError: DLL load failed while importing shell: The specified procedure could not be found.
`$ C:\ProgramData\Miniconda3\Scripts\conda-script.py info`
A:
Maybe it is a bit late, but this error on Windows is usually related to the pywin32 package. A solution is to reinstall that package, which you can do by running this command from your base environment:
conda install -n name-your-environment -c anaconda pywin32
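If conda itself is too broken to run any install, a possible fallback (untested here; the path assumes the Miniconda location from the traceback) is to reinstall pywin32 with pip from the base interpreter and re-run its post-install script:
python -m pip install --upgrade --force-reinstall pywin32
python C:\ProgramData\Miniconda3\Scripts\pywin32_postinstall.py -install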
|
Fresh install of conda not working due to "KeyError('pkgs_dirs')" and missing DLLs
|
I installed a fresh new version of Miniconda, but no matter what I try to do (Install new module in Anaconda: Keyerror('pkgs_dirs',), Download error for us package (KeyError: 'pkgs_dirs'), Change conda default pkgs_dirs and envs dirs), installing packages or reading conda info doesn't seem to work.
An exception is thrown when trying to run, for example, conda install -c anaconda cudatoolkit=10.1. Reinstalling (as this is already a fresh installation) does not help either. All commands are being run from the Miniconda terminal on Windows 10, and MS Redistributables have been installed with default settings (and the system has been rebooted). I was not able to see which DLLs are missing, as the only one I'm able to see with my single brain cell is "shell". I have followed the installation instructions from https://www.tensorflow.org/install/pip#windows.
.condarc file contents are as follow:
envs_dirs:
- 'C:\Users\matti\.conda\envs'
pkgs_dirs:
- 'C:\Users\matti\.conda\pkgs'
The envs directory does not exist, but the pkgs directory does. As such, I've also tried creating the "envs" directory, with no success.
Error thrown when trying to install package:
Collecting package metadata (current_repodata.json): failed
WARNING conda.exceptions:print_unexpected_error_report(1216): KeyError('pkgs_dirs')
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\exceptions.py", line 1082, in __call__
return func(*args, **kwargs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main.py", line 87, in _main
exit_code = do_call(args, p)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\conda_argparse.py", line 84, in do_call
return getattr(module, func_name)(args, parser)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main_install.py", line 20, in execute
install(args, parser, 'install')
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\install.py", line 260, in install
unlink_link_transaction = solver.solve_for_transaction(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 152, in solve_for_transaction
unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 195, in solve_for_diff
final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 300, in solve_final_state
ssc = self._collect_all_metadata(ssc)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\common\io.py", line 88, in decorated
return f(*args, **kwds)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 463, in _collect_all_metadata
index, r = self._prepare(prepared_specs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 1058, in _prepare
reduced_index = get_reduced_index(self.prefix, self.channels,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\index.py", line 288, in get_reduced_index
new_records = SubdirData.query_all(spec, channels=channels, subdirs=subdirs,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 140, in query_all
result = tuple(concat(executor.map(subdir_query, channel_urls)))
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 609, in result_iterator
yield fs.pop().result()
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 446, in result
return self.__get_result()
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 391, in __get_result
raise self._exception
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 132, in <lambda>
subdir_query = lambda url: tuple(SubdirData(Channel(url), repodata_fn=repodata_fn).query(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 145, in query
self.load()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 210, in load
_internal_state = self._load()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 375, in _load
raw_repodata_str = fetch_repodata_remote_request(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 701, in fetch_repodata_remote_request
resp = session.get(join_url(url, filename), headers=headers, proxies=session.proxies,
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 528, in request
prep = self.prepare_request(req)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 456, in prepare_request
p.prepare(
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\models.py", line 320, in prepare
self.prepare_auth(auth, url)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\models.py", line 551, in prepare_auth
r = auth(self)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\connection\session.py", line 110, in __call__
request.url = CondaHttpAuth.add_binstar_token(request.url)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\connection\session.py", line 134, in add_binstar_token
for binstar_url, token in iteritems(read_binstar_tokens()):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\anaconda_client.py", line 35, in read_binstar_tokens
token_dir = _get_binstar_token_directory()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\anaconda_client.py", line 30, in _get_binstar_token_directory
return AppDirs('binstar', 'ContinuumIO').user_data_dir
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 245, in user_data_dir
return user_data_dir(self.appname, self.appauthor,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 67, in user_data_dir
path = os.path.join(_get_win_folder(const), appauthor, appname)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 284, in _get_win_folder_with_pywin32
from win32com.shell import shellcon, shell
ImportError: DLL load failed while importing shell: The specified procedure could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\exceptions.py", line 1214, in print_unexpected_error_report
message_builder.append(get_main_info_str(error_report['conda_info']))
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main_info.py", line 237, in get_main_info_str
info_dict['_' + key] = ('\n' + 26 * ' ').join(info_dict[key])
KeyError: 'pkgs_dirs'
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\exceptions.py", line 1082, in __call__
return func(*args, **kwargs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main.py", line 87, in _main
exit_code = do_call(args, p)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\conda_argparse.py", line 84, in do_call
return getattr(module, func_name)(args, parser)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main_install.py", line 20, in execute
install(args, parser, 'install')
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\install.py", line 260, in install
unlink_link_transaction = solver.solve_for_transaction(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 152, in solve_for_transaction
unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 195, in solve_for_diff
final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 300, in solve_final_state
ssc = self._collect_all_metadata(ssc)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\common\io.py", line 88, in decorated
return f(*args, **kwds)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 463, in _collect_all_metadata
index, r = self._prepare(prepared_specs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\solve.py", line 1058, in _prepare
reduced_index = get_reduced_index(self.prefix, self.channels,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\index.py", line 288, in get_reduced_index
new_records = SubdirData.query_all(spec, channels=channels, subdirs=subdirs,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 140, in query_all
result = tuple(concat(executor.map(subdir_query, channel_urls)))
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 609, in result_iterator
yield fs.pop().result()
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 446, in result
return self.__get_result()
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\_base.py", line 391, in __get_result
raise self._exception
File "C:\ProgramData\Miniconda3\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 132, in <lambda>
subdir_query = lambda url: tuple(SubdirData(Channel(url), repodata_fn=repodata_fn).query(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 145, in query
self.load()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 210, in load
_internal_state = self._load()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 375, in _load
raw_repodata_str = fetch_repodata_remote_request(
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\core\subdir_data.py", line 701, in fetch_repodata_remote_request
resp = session.get(join_url(url, filename), headers=headers, proxies=session.proxies,
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 528, in request
prep = self.prepare_request(req)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\sessions.py", line 456, in prepare_request
p.prepare(
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\models.py", line 320, in prepare
self.prepare_auth(auth, url)
File "C:\Users\matti\AppData\Roaming\Python\Python39\site-packages\requests\models.py", line 551, in prepare_auth
r = auth(self)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\connection\session.py", line 110, in __call__
request.url = CondaHttpAuth.add_binstar_token(request.url)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\connection\session.py", line 134, in add_binstar_token
for binstar_url, token in iteritems(read_binstar_tokens()):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\anaconda_client.py", line 35, in read_binstar_tokens
token_dir = _get_binstar_token_directory()
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\gateways\anaconda_client.py", line 30, in _get_binstar_token_directory
return AppDirs('binstar', 'ContinuumIO').user_data_dir
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 245, in user_data_dir
return user_data_dir(self.appname, self.appauthor,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 67, in user_data_dir
path = os.path.join(_get_win_folder(const), appauthor, appname)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 284, in _get_win_folder_with_pywin32
from win32com.shell import shellcon, shell
ImportError: DLL load failed while importing shell: The specified procedure could not be found.
`$ C:\ProgramData\Miniconda3\Scripts\conda-script.py install -c anaconda cudatoolkit=10.1`
environment variables:
conda info could not be constructed.
KeyError('pkgs_dirs')
Many places seem to mention that running conda info would somehow help, but even that throws an error:
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\exceptions.py", line 1082, in __call__
return func(*args, **kwargs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main.py", line 87, in _main
exit_code = do_call(args, p)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\conda_argparse.py", line 84, in do_call
return getattr(module, func_name)(args, parser)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main_info.py", line 317, in execute
info_dict = get_info_dict(args.system)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\cli\main_info.py", line 164, in get_info_dict
envs_dirs=context.envs_dirs,
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\base\context.py", line 517, in envs_dirs
return mockable_context_envs_dirs(self.root_writable, self.root_prefix, self._envs_dirs)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\base\context.py", line 91, in mockable_context_envs_dirs
fixed_dirs += join(user_data_dir(APP_NAME, APP_NAME), 'envs'),
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 67, in user_data_dir
path = os.path.join(_get_win_folder(const), appauthor, appname)
File "C:\ProgramData\Miniconda3\lib\site-packages\conda\_vendor\appdirs.py", line 284, in _get_win_folder_with_pywin32
from win32com.shell import shellcon, shell
ImportError: DLL load failed while importing shell: The specified procedure could not be found.
`$ C:\ProgramData\Miniconda3\Scripts\conda-script.py info`
|
[
"Maybe it is a bit late but this error on windows should be related to the pywin32 package. A solution is to reinstall this package. You could do this running this command from your base environment:\nconda install -n name-your-environment -c anaconda pywin32\n\n"
] |
[
0
] |
[] |
[] |
[
"anaconda",
"conda",
"python"
] |
stackoverflow_0073168204_anaconda_conda_python.txt
|
Q:
Add rank to dataframe of IDs
I have a dataframe of just IDs e.g.
data=pd.DataFrame({'ID':['D29305C3-6652-E911-B81F-005056962850','570AE90B-CB53-EA11-B836-005056962850','5F21D4D2-E156-EA11-B836-005056962850','73579A31-1252-E911-B81F-005056962850']})
I want to add rows numbered 1-30 for each ID. I tried making a separate list and joining it (the range is manually worked out as 30 x the number of IDs):
numbers=pd.DataFrame({'Integers':[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]})
numbers2=pd.DataFrame()
for x in range (1,120):
numbers2=numbers2.append(numbers)
numbers2=numbers2.reset_index()
df=pd.DataFrame()
from collections import Counter
id_count = Counter(data['ID'])
# Create lists of each id repeated the number of times each is needed:
n = 30
id_values = [[i] * (n - id_count[i]) for i in id_count.keys()]
# Flatten to a single list:
id_values = [i for s in id_values for i in s]
# Create as new DataFrame and append to existing data:
new_data = pd.DataFrame({"ID": id_values})
df = df.append(new_data).sort_values(by="ID")
df=df.reset_index()
template=pd.merge(df, numbers2, left_index=True, right_index=True)
(This is where I worked out the range manually.) It works sometimes, but e.g. for this ID it has behaviour I don't understand:
template[template.ID=='D29305C3-6652-E911-B81F-005056962850']
And it's a clunky way to attempt it in any case. Thanks for any suggestions! :)
A:
Let us do cross merge
data.merge(pd.Series(range(1, 31), name='rank'), how='cross')
ID rank
0 D29305C3-6652-E911-B81F-005056962850 1
1 D29305C3-6652-E911-B81F-005056962850 2
2 D29305C3-6652-E911-B81F-005056962850 3
3 D29305C3-6652-E911-B81F-005056962850 4
4 D29305C3-6652-E911-B81F-005056962850 5
5 D29305C3-6652-E911-B81F-005056962850 6
6 D29305C3-6652-E911-B81F-005056962850 7
...
118 73579A31-1252-E911-B81F-005056962850 29
119 73579A31-1252-E911-B81F-005056962850 30
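Note that how='cross' needs pandas 1.2+; on older versions the same result can be had with a temporary join key (a small sketch):
numbers = pd.DataFrame({'rank': range(1, 31)})
out = (data.assign(_key=1)
           .merge(numbers.assign(_key=1), on='_key')
           .drop(columns='_key'))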
|
Add rank to dataframe of IDs
|
I have a dataframe of just IDs e.g.
data=pd.DataFrame({'ID':['D29305C3-6652-E911-B81F-005056962850','570AE90B-CB53-EA11-B836-005056962850','5F21D4D2-E156-EA11-B836-005056962850','73579A31-1252-E911-B81F-005056962850']})
I want to add rows numbered 1-30 for each ID. I tried making a separate list and joining it (the range is manually worked out as 30 x the number of IDs):
numbers=pd.DataFrame({'Integers':[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]})
numbers2=pd.DataFrame()
for x in range (1,120):
numbers2=numbers2.append(numbers)
numbers2=numbers2.reset_index()
df=pd.DataFrame()
from collections import Counter
id_count = Counter(data['ID'])
# Create lists of each id repeated the number of times each is needed:
n = 30
id_values = [[i] * (n - id_count[i]) for i in id_count.keys()]
# Flatten to a single list:
id_values = [i for s in id_values for i in s]
# Create as new DataFrame and append to existing data:
new_data = pd.DataFrame({"ID": id_values})
df = df.append(new_data).sort_values(by="ID")
df=df.reset_index()
template=pd.merge(df, numbers2, left_index=True, right_index=True)
(This is where I worked out the range manually.) It works sometimes, but e.g. for this ID it has behaviour I don't understand:
template[template.ID=='D29305C3-6652-E911-B81F-005056962850']
And it's a clunky way to attempt it in any case. Thanks for any suggestions! :)
|
[
"Let us do cross merge\ndata.merge(pd.Series(range(1, 31), name='rank'), how='cross')\n\n\n ID rank\n0 D29305C3-6652-E911-B81F-005056962850 1\n1 D29305C3-6652-E911-B81F-005056962850 2\n2 D29305C3-6652-E911-B81F-005056962850 3\n3 D29305C3-6652-E911-B81F-005056962850 4\n4 D29305C3-6652-E911-B81F-005056962850 5\n5 D29305C3-6652-E911-B81F-005056962850 6\n6 D29305C3-6652-E911-B81F-005056962850 7\n...\n118 73579A31-1252-E911-B81F-005056962850 29\n119 73579A31-1252-E911-B81F-005056962850 30\n\n"
] |
[
2
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074546708_dataframe_pandas_python.txt
|
Q:
weights & biases : ERROR Failed to sample metric: Not Supported
I am training a YOLOX model and using wandb (the Weights & Biases library) to follow the training evolution. My problem is that when I am loading the wandb library (version 0.13.5) I get an error message, which is:
wandb: ERROR Failed to sample metric: Not Supported
The surprising thing is that when I run the exact same code on Google Colab (which has the same library version), it works perfectly (problem: I can't have unlimited GPU access on Colab). So I have to find out how to avoid this error.
A:
Engineer from W&B here! Would it be possible if you could share the console log so that we can find the line where the error originates.
|
weights & biases : ERROR Failed to sample metric: Not Supported
|
I am training a YOLOX model and using wandb (the Weights & Biases library) to follow the training evolution. My problem is that when I am loading the wandb library (version 0.13.5) I get an error message, which is:
wandb: ERROR Failed to sample metric: Not Supported
The surprising thing is that when I run the exact same code on Google Colab (which has the same library version), it works perfectly (problem: I can't have unlimited GPU access on Colab). So I have to find out how to avoid this error.
|
[
"Engineer from W&B here! Would it be possible if you could share the console log so that we can find the line where the error originates.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"tensorboard",
"wandb",
"yolo"
] |
stackoverflow_0074520555_python_tensorboard_wandb_yolo.txt
|
Q:
Select the features with positive contribution to each class using SHAP values
I am trying to get the features which are important for a class and have a positive contribution (having red points on the positive side of the SHAP plot).
I can get the shap_values and plot the shap summary for each class (e.g. class 2 here) using the following code:
import shap
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values[2], X)
From the plot I can understand which features are important to that class. In the below plot, I can say alcohol and sulphates are the main features (that I am more interested in).
However, I want to automate this process, so the code can rank the features (which are important on the positive side) and return the top N. Any idea on how to automate this interpretation?
I need to automatically identify those important features for each class. Any method other than shap that can handle this process would also be welcome.
A:
You can do the following steps - basically we keep only the values that affect the classification positively (shap_values>0); when shap_values<0 it pushes against that class.
Then you take the mean and sort the results.
If you prefer global importance values, use .abs() instead of [shap_df>0],
and for the whole model use shap_values directly instead of shap_values['your_class_number']
import shap
import pandas as pd
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)
shap_df = pd.DataFrame(shap_values['your_class_number'],columns=X.columns)
feature_importance = (shap_df
[shap_df>0]
.mean()
.sort_values(ascending=False)
.reset_index()
.rename(columns={'index':'feature',0:'weight'})
.head(n)
)
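To automate this across all classes, one possible sketch (assuming shap_values is the per-class list that TreeExplainer returns for a multi-class model, and n is the number of features wanted) collects the top positive features per class:
top_features = {}
for class_idx, class_shap in enumerate(shap_values):
    shap_df = pd.DataFrame(class_shap, columns=X.columns)
    top_features[class_idx] = (shap_df[shap_df > 0]
                               .mean()
                               .sort_values(ascending=False)
                               .head(n)
                               .index.tolist())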
|
Select the features with positive contribution to each class using SHAP values
|
I am trying to get the features which are important for a class and have a positive contribution (having red points on the positive side of the SHAP plot).
I can get the shap_values and plot the shap summary for each class (e.g. class 2 here) using the following code:
import shap
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values[2], X)
From the plot I can understand which features are important to that class. In the below plot, I can say alcohol and sulphates are the main features (that I am more interested in).
However, I want to automate this process, so the code can rank the features (which are important on the positive side) and return the top N. Any idea on how to automate this interpretation?
I need to automatically identify those important features for each class. Any method other than shap that can handle this process would also be welcome.
|
[
"You can do the following steps - where basically we are trying to get only the values that effect the classification positively (shap_values>0) when shap_values<0 it means don't classify\nLater you take mean and sort the results.\nIf you prefers the global values then use .abs() instead of [shap_df>0]\nand for the hole model use only shap_values instead of shap_values['your_class_number']\nimport shap \nimport pandas as pd \n\nexplainer = shap.TreeExplainer(clf) \nshap_values = explainer.shap_values(X) \nshap_df = pd.DataFrame(shap_values['your_class_number'],columns=X.columns) \n \nfeature_importance = (shap_df\n [shap_df>0]\n .mean()\n .sort_values(ascending=False)\n .reset_index()\n .rename(columns={'index':'feature',0:'weight'})\n .head(n)\n )\n\n"
] |
[
0
] |
[] |
[] |
[
"classification",
"feature_engineering",
"python",
"shap"
] |
stackoverflow_0072661604_classification_feature_engineering_python_shap.txt
|
Q:
Python RegEx , how to find words that start with uppercase followed by lower case?
I have the following string
Date: 20/8/2020 Duration: 0.33 IP: 110.1.x.x Server:01
I'm applying findall as a way to split my string, but it splits I & P apart. How can I change the expression to get this output?
['Date: 20/8/2020 ', 'Duration: 0.33 ', 'IP: 110.1.x.x ', 'Server:01']
text = "Date: 20/8/2020 Duration: 0.33 IP: 110.1.x.x Server:01"
my_list = re.findall('[a-zA-Z][^A-Z]*', text)
my_list
['Date: 20/8/2020 ', 'Duration: 0.33 ', 'I', 'P: 110.1.x.x ', 'Server:01']
A:
Look for any string that begins with either two uppercase letters, or an uppercase followed by a lowercase, and then match until you find either the same pattern or end of line.
>>> re.findall(r'([A-Z][a-zA-Z].*?)\s*(?=[A-Z][a-zA-Z]|$)', text)
['Date: 20/8/2020', 'Duration: 0.33', 'IP: 110.1.x.x', 'Server:01']
You may also wish to use this to create a dictionary.
>>> dict(re.split(r'\s*:\s*', m, 1) for m in re.findall(r'([A-Z][a-zA-Z].*?)\s*(?=[A-Z][a-zA-Z]|$)', text))
{'Date': '20/8/2020', 'Duration': '0.33', 'IP': '110.1.x.x', 'Server': '01'}
A:
With Regex you should always be as precise as possible.
So if you know that your input data always looks like that, I would suggest writing the full words in Regex.
If that's not what you want, you have to sacrifice some certainty:
Change Regex to accept any word containing letters of any size at any position
Add capital P as following letter
Add IP as special case
A:
You can use:
(?<!\S)[A-Z][a-zA-Z]*:\s*\S+
Explanation
(?<!\S) Assert the position is not directly preceded by a non-whitespace char
[A-Z][a-zA-Z]*: Match an uppercase char A-Z, optional chars a-zA-Z followed by :
\s*\S Match optional whitespace chars and 1+ non whitespace chars
Regex demo
import re
pattern = r"(?<!\S)[A-Z][a-zA-Z]*:\s*\S+"
s = "Date: 20/8/2020 Duration: 0.33 IP: 110.1.x.x Server:01"
print(re.findall(pattern, s))
Output
['Date: 20/8/2020', 'Duration: 0.33', 'IP: 110.1.x.x', 'Server:01']
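A variant of the same pattern with named groups, in case a dict per key/value pair is wanted (a sketch, reusing s from above):
pattern = r"(?<!\S)(?P<key>[A-Z][a-zA-Z]*):\s*(?P<value>\S+)"
print({m.group('key'): m.group('value') for m in re.finditer(pattern, s)})
# {'Date': '20/8/2020', 'Duration': '0.33', 'IP': '110.1.x.x', 'Server': '01'}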
|
Python RegEx, how to find words that start with uppercase followed by lowercase?
|
I have the following string
Date: 20/8/2020 Duration: 0.33 IP: 110.1.x.x Server:01
I'm applying findall as a way to split my string, but it splits I & P apart. How can I change the expression to get this output?
['Date: 20/8/2020 ', 'Duration: 0.33 ', 'IP: 110.1.x.x ', 'Server:01']
text = "Date: 20/8/2020 Duration: 0.33 IP: 110.1.x.x Server:01"
my_list = re.findall('[a-zA-Z][^A-Z]*', text)
my_list
['Date: 20/8/2020 ', 'Duration: 0.33 ', 'I', 'P: 110.1.x.x ', 'Server:01']
|
[
"Look for any string that begins with either two uppercase letters, or an uppercase followed by a lowercase, and then match until you find either the same pattern or end of line.\n>>> re.findall(r'([A-Z][a-zA-Z].*?)\\s*(?=[A-Z][a-zA-Z]|$)', text)\n['Date: 20/8/2020', 'Duration: 0.33', 'IP: 110.1.x.x', 'Server:01']\n\nYou may also wish to use this to create a dictionary.\n>>> dict(re.split(r'\\s*:\\s*', m, 1) for m in re.findall(r'([A-Z][a-zA\n-Z].*?)\\s*(?=[A-Z][a-zA-Z]|$)', text))\n{'Date': '20/8/2020', 'Duration': '0.33', 'IP': '110.1.x.x', 'Server': '01'}\n\n",
"With Regex you should always be as precise as possible.\nSo if you know that your input data always looks like that, I would suggest writing the full words in Regex.\nIf that's not what you want you have to make a sacrifice of certainty:\n\nChange Regex to accept any word containing letters of any size at any position\nAdd capital P as following letter\nAdd IP as special case\n\n",
"You can use:\n(?<!\\S)[A-Z][a-zA-Z]*:\\s*\\S+\n\nExplanation\n\n(?<!\\S)\n[A-Z][a-zA-Z]*: Match an uppercase char A-Z, optional chars a-zA-Z followed by :\n\\s*\\S Match optional whitespace chars and 1+ non whitespace chars\n\nRegex demo\nimport re\n\npattern = r\"(?<!\\S)[A-Z][a-zA-Z]*:\\s*\\S+\"\ns = \"Date: 20/8/2020 Duration: 0.33 IP: 110.1.x.x Server:01\"\nprint(re.findall(pattern, s))\n\nOutput\n['Date: 20/8/2020', 'Duration: 0.33', 'IP: 110.1.x.x', 'Server:01']\n\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0074543261_python_regex.txt
|
Q:
What Happens when Combining the 'in' Operator with the 'for in' Operator in Python?
When looking for a secure random password generator in Python I came across this script:
# necessary imports
import secrets
import string
# define the alphabet
letters = string.ascii_letters
digits = string.digits
special_chars = string.punctuation
alphabet = letters + digits + special_chars
# fix password length
pwd_length = 12
# generate a password string
pwd = ''
for i in range(pwd_length):
pwd += ''.join(secrets.choice(alphabet))
print(pwd)
# generate password meeting constraints
while True:
pwd = ''
for i in range(pwd_length):
pwd += ''.join(secrets.choice(alphabet))
if (any(char in special_chars for char in pwd) and
sum(char in digits for char in pwd)>=2):
break
print(pwd)
Source: How to Create a Random Password Generator in Python - Geekflare
There is one thing that is unclear to me in the final "if" statement, which checks if the generated password meets certain constraints.
The expression is:
char in special_chars for char in pwd
I understand, that "in" can either check if something is part of an iterable or be part of the "for in" statement that generates a loop from an iterable.
But what I do not understand is how these both are interacting here. To me it looks as if "char in special_chars" checks if the second "char", defined in "for char in pwd", is part of special_chars.
But how does the first "char" get defined before the "char" in "for in" is defined? I always thought that a variable could not be accessed before it is defined. This example looks to me as if Python behaved differently. Could anybody explain this to me?
A:
This is actually a generator expression, which works much like a list comprehension.
A very simple example of a list comprehension is:
[i for i in range(5)] => [0, 1, 2, 3, 4]
Now, breaking down your example.
The second half of the expression, for char in pwd, is looping through every character in the password.
Now the first part, char in special_chars, gives a True or False value depending on whether the current character from the for char in pwd loop is a special character or not.
I've tried to show a basic example below
pwd = 'qw@rt&'
[char in special_chars for char in pwd] => [False, False, True, False, False, True]
The any() statement then checks these per-character values to see if there is at least one True value
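A tiny runnable check of the same idea (using string.punctuation for special_chars, as in the question's script):
import string

special_chars = string.punctuation
pwd = 'qw@rt&'
print([char in special_chars for char in pwd])     # [False, False, True, False, False, True]
print(any(char in special_chars for char in pwd))  # True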
A:
Thanks to @matszwecja I can explain it myself:
The whole construct is a so-called generator expression:
6. Expressions -- Python 3.11.0 documentation
"Variables used in the generator expression are evaluated lazily when the next() method is called for the generator object (in the same fashion as normal generators)."
This explains to me how the first 'char' knows about the second 'char' defined in the for-loop.
|
What Happens when Combining the 'in' Operator with the 'for in' Operator in Python?
|
When looking for a secure random password generator in Python I came across this script:
# necessary imports
import secrets
import string
# define the alphabet
letters = string.ascii_letters
digits = string.digits
special_chars = string.punctuation
alphabet = letters + digits + special_chars
# fix password length
pwd_length = 12
# generate a password string
pwd = ''
for i in range(pwd_length):
pwd += ''.join(secrets.choice(alphabet))
print(pwd)
# generate password meeting constraints
while True:
pwd = ''
for i in range(pwd_length):
pwd += ''.join(secrets.choice(alphabet))
if (any(char in special_chars for char in pwd) and
sum(char in digits for char in pwd)>=2):
break
print(pwd)
Source: How to Create a Random Password Generator in Python - Geekflare
There is one thing that is unclear to me in the final "if" statement, which checks if the generated password meets certain constraints.
The expression is:
char in special_chars for char in pwd
I understand, that "in" can either check if something is part of an iterable or be part of the "for in" statement that generates a loop from an iterable.
But what I do not understand is how these both are interacting here. To me it looks as if "char in special_chars" checks if the second "char", defined in "for char in pwd", is part of special_chars.
But how does the first "char" get defined before the "char" in "for in" is defined? I always thought that a variable could not be accessed before it is defined. This example looks to me as if Python behaved differently. Could anybody explain this to me?
|
[
"This is known as a list comprehension.\nA very simple example of this is:\n[i for i in range(5)] => [0, 1, 2, 3, 4]\n\nNow, breaking down your example.\nThe second half of the list comprehension for char in pwd is looping through every character in the password.\nNow the first part char in special_chars is giving a True or False value depending on whether the current charcter in the for char in pwd loop is a special character or not.\nI've tried to show a basic example below\npwd = 'qw@rt&'\nchar in special_chars for char in pwd => [False, False, True, False, False, True]\n\nThe any() statement is then checking this created list to see if there is at least one True value\n",
"Thank's to @matszwecja I can explain it myself:\nThe whole construct is a so called generator expression:\n6. Expressions -- Python 3.11.0 documentation\n\"Variables used in the generator expression are evaluated lazily when the next() method is called for the generator object (in the same fashion as normal generators).\"\nThis explains to me how the first 'char' knows about the second 'char' defined in the for-loop.\n"
] |
[
0,
0
] |
[] |
[] |
[
"for_loop",
"in_operator",
"python"
] |
stackoverflow_0074546455_for_loop_in_operator_python.txt
|
Q:
How do I add numbers to variables?
So, I have been trying to build a Python number guessing game. I am new, and I can't figure out how to add +1 to my chance variable. I have tried +=1 like here, but it always shows 1 as the output no matter what. And I know that there is a lot wrong with this code, but keep in mind that I am new to coding.
import random
numbers = 1,2,3,4,5,6,7,8,9,10
user = None
hidden = random.choice(numbers)
print("Welcome to volty's's number guessing game!")
def game():
chance = 0
user = int(input("choose a number from 1 to 10: "))
if user > hidden:
print ("ur number is more than the hidden number")
game()
chance += 1
elif user < hidden:
print ("ur number is less than the hidden number")
game()
chance = +1
elif user == hidden:
print (" u guessed the hidden number!")
print ("the hidden number was:",hidden)
print (f"u guessed it in {chance +1} step {'s' if chance > 1 else ' '}")
game()
So this is the code.
A:
You have to pass the variable chance to the function when you call it. You can then increment chance directly when you recursively call upon the function:
import random
numbers = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
user = None
hidden = random.choice(numbers)
print("Welcome to volty's's number guessing game!")
chance = 0
def game(chance):
user = int(input("choose a number from 1 to 10: "))
if user > hidden:
print("ur number is more than the hidden number")
game(chance + 1)
# chance += 1
elif user < hidden:
        print("ur number is less than the hidden number")
game(chance + 1)
# chance += 1
elif user == hidden:
print(" u guessed the hidden number!")
print("the hidden number was:", hidden)
print(f"u guessed it in {chance + 1} step{'s' if chance > 1 else ' '}")
game(chance)
A:
Your big issue here is the way you are using recursion. The answer is simple:
change your variable before recursing
import random
numbers = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
user = None
hidden = random.choice(numbers)
print("Welcome to volty's's number guessing game!")
def game(chance):
user = int(input("choose a number from 1 to 10: "))
if user > hidden:
print("ur number is more than the hidden number")
chance += 1
game(chance)
elif user < hidden:
print("ur number is less than the hudden number")
chance += 1
game(chance)
elif user == hidden:
print(" u guessed the hidden number!")
print("the hidden number was:", hidden)
print(f"u guessed it in {chance + 1} step {'s' if chance > 1 else ' '}")
game(0)
Check out a tutorial about recursion.
A:
This happens because you re-call the function "game()" every time a guess fails, which starts the function over; you don't need to do that.
Ideally you should create the condition "is_hidden_found" which is true ONLY if the player succeeds in finding the number.
Thus, the same variables are used until the win and "chance" is incremented.
import random
print("Welcome to volty's's number guessing game!")
def game():
numbers = 1,2,3,4,5,6,7,8,9,10
user = None
    hidden = random.choice(numbers) # maybe use random.randint(1, 10)
is_hidden_found = False
chance = 0
while is_hidden_found == False:
user = int(input("choose a number from 1 to 10: "))
if user > hidden:
print ("ur number is more than the hidden number")
chance += 1
elif user < hidden:
print ("ur number is less than the hudden number")
chance += 1
elif user == hidden:
is_hidden_found = True
print (" u guessed the hidden number!")
print ("the hidden number was:",hidden)
print (f"u guessed it in {chance +1} step {'s' if chance > 1 else ' '}")
game()
Have fun while you learn ;)
A:
Solution: move your "chance = 0" outside of the function, to module scope.
Reason: every time game() is called recursively, the line at the top of the function re-initializes chance to 0. (Note that with chance at module scope you would also need a global chance statement inside game() before incrementing it.)
|
How do I add numbers to variables?
|
So, I have been trying to build a Python number guessing game. I am new, and I can't figure out how to add +1 to my chance variable. I have tried +=1 like here, but it always shows 1 as the output no matter what. And I know that there are a lot of things wrong with this code, but keep in mind that I am new to coding.
import random
numbers = 1,2,3,4,5,6,7,8,9,10
user = None
hidden = random.choice(numbers)
print("Welcome to volty's's number guessing game!")
def game():
chance = 0
user = int(input("choose a number from 1 to 10: "))
if user > hidden:
print ("ur number is more than the hidden number")
game()
chance += 1
elif user < hidden:
print ("ur number is less than the hidden number")
game()
chance = +1
elif user == hidden:
print (" u guessed the hidden number!")
print ("the hidden number was:",hidden)
print (f"u guessed it in {chance +1} step {'s' if chance > 1 else ' '}")
game()
So this is the code.
|
[
"You have to pass the variable chance to the function when you call it. You can then increment chance directly when you recursively call upon the function:\nimport random\n\nnumbers = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10\nuser = None\nhidden = random.choice(numbers)\n\nprint(\"Welcome to volty's's number guessing game!\")\n\nchance = 0\n\ndef game(chance):\n user = int(input(\"choose a number from 1 to 10: \"))\n if user > hidden:\n print(\"ur number is more than the hidden number\")\n game(chance + 1)\n # chance += 1\n elif user < hidden:\n print(\"ur number is less than the hudden number\")\n game(chance + 1)\n # chance += 1\n elif user == hidden:\n print(\" u guessed the hidden number!\")\n print(\"the hidden number was:\", hidden)\n print(f\"u guessed it in {chance + 1} step{'s' if chance > 1 else ' '}\")\n\n\ngame(chance)\n\n",
"Your big issue here is the way you are using recursion. The awnser is simple:\nchange your variable before recursing\nimport random\n\nnumbers = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10\nuser = None\nhidden = random.choice(numbers)\n\nprint(\"Welcome to volty's's number guessing game!\")\n\n\ndef game(chance):\n user = int(input(\"choose a number from 1 to 10: \"))\n if user > hidden:\n print(\"ur number is more than the hidden number\")\n chance += 1\n game(chance)\n elif user < hidden:\n print(\"ur number is less than the hudden number\")\n chance += 1\n game(chance)\n elif user == hidden:\n print(\" u guessed the hidden number!\")\n print(\"the hidden number was:\", hidden)\n print(f\"u guessed it in {chance + 1} step {'s' if chance > 1 else ' '}\")\n\n\ngame(0)\n\nCheckout a tutorial about recursion\n",
"This is because you don't have to re-call the function \"game()\" every time you fail.\nIdeally you should create the condition \"is_hidden_found\" which is true ONLY if the player succeeds in finding the number.\nThus, the same variables are used until the win and \"chance\" is incremented.\nimport random\n\nprint(\"Welcome to volty's's number guessing game!\")\ndef game():\n numbers = 1,2,3,4,5,6,7,8,9,10 \n user = None\n hidden = random.choice(numbers) # maybe use random.randint(0, 10)\n\n is_hidden_found = False\n chance = 0\n while is_hidden_found == False:\n\n user = int(input(\"choose a number from 1 to 10: \"))\n if user > hidden:\n print (\"ur number is more than the hidden number\")\n chance += 1\n elif user < hidden: \n print (\"ur number is less than the hudden number\")\n chance += 1\n elif user == hidden:\n is_hidden_found = True\n print (\" u guessed the hidden number!\")\n print (\"the hidden number was:\",hidden)\n print (f\"u guessed it in {chance +1} step {'s' if chance > 1 else ' '}\")\n\ngame()\n\nHave fun while you learn ;)\n",
"Solution : shift you \"chance = 0\" outside the function scope start.\nReason: every time your code gets inside the if statement it re-initialized the chance value to 0.\n"
] |
[
1,
0,
0,
-1
] |
[] |
[] |
[
"android",
"python",
"python_3.x"
] |
stackoverflow_0074546696_android_python_python_3.x.txt
|
Q:
Hi, while I was trying to run this code this message came up: ModuleNotFoundError: No module named 'spacy.lemmatizer'
import spacy
nlp = spacy.load('en_core_web_sm')
from spacy.lemmatizer import Lemmatizer
from spacy.lang.en import LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES
lemmatizer = Lemmatizer(LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES)
lemmattizer('chunkles', 'NOUN')
Can anyone help me? I'm using version 3 of Python.
A:
The official documentation shows that since spaCy 3.0 the lemmatizer has become a standalone pipeline component, so the spacy.lemmatizer module no longer exists. If you need the old import, install a spaCy version below 3.0. The link is as follows: https://spacy.io/api/lemmatizer
A:
try:
doc = nlp('chuckles')
doc[0].lemma_
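For context, a minimal sketch of how lemmatization is usually done in spaCy 3.x, assuming en_core_web_sm is installed:
import spacy

# in spaCy 3.x the lemmatizer runs inside the pipeline, so lemmas
# are available directly on the tokens of a processed Doc
nlp = spacy.load('en_core_web_sm')
doc = nlp('chuckles')
print(doc[0].lemma_)  # expected: 'chuckle'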
|
Hi, while I was trying to run this code this message came up: ModuleNotFoundError: No module named 'spacy.lemmatizer'
|
import spacy
nlp = spacy.load('en_core_web_sm')
from spacy.lemmatizer import Lemmatizer
from spacy.lang.en import LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES
lemmatizer = Lemmatizer(LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES)
lemmattizer('chunkles', 'NOUN')
Can anyone help me? I'm using version 3 of Python.
|
[
"The official document shows that after spacy 3.0, the lemmatizer has become a standalone pipeline component. Therefore, you should install the spacy whose version is smaller than 3.0. The link is as follow: https://spacy.io/api/lemmatizer\n",
"try:\ndoc = nlp('chuckles')\ndoc[0].lemma_\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0070021042_python.txt
|
Q:
How to modify this multiple argument method to kwargs only?
class SalesforceConnectionLiaison:
def __init__(self, *, organisation_id, mongo_client):
self.organisation_id = organisation_id
self.mongo_client = mongo_client
self.salesforce_manager = SalesforceConnectionManager(
organisation_id=organisation_id, mongo_client=mongo_client)
self.mapping_manager = UnifiedMappingManager(
mongo_client=mongo_client, organisation_id=organisation_id)
def connection_enable(self, automap, start_sync, connection_id, updated_by):
How to modify this multiple argument method to kwargs only?
Can someone tell me how to do this?
A:
Assuming that your question is about connection_enable().
Just use **kwargs, which lets the method receive its arguments as a dictionary.
Then, inside the function, you can pull individual values out with get().
The advantage of get() is that it returns None if the key is not present (i.e. that argument hasn't been provided).
class SalesforceConnectionLiaison:
def connection_enable(self, **kwargs):
automap = kwargs.get("automap")
start_sync = kwargs.get("start_sync")
connection_id = kwargs.get("connection_id")
updated_by = kwargs.get("updated_by")
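Alternatively - and closer to the keyword-only style the class's __init__ already uses - a bare * in the signature forces callers to pass every argument by name while keeping an explicit parameter list. A sketch:
class SalesforceConnectionLiaison:
    # the bare * makes all following parameters keyword-only,
    # mirroring the style already used in __init__
    def connection_enable(self, *, automap, start_sync, connection_id, updated_by):
        ...

With this, a positional call like connection_enable(True, False, 1, 'me') raises a TypeError; only keyword calls such as connection_enable(automap=True, ...) are accepted.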
|
How to modify this multiple argument method to kwargs only?
|
class SalesforceConnectionLiaison:
def __init__(self, *, organisation_id, mongo_client):
self.organisation_id = organisation_id
self.mongo_client = mongo_client
self.salesforce_manager = SalesforceConnectionManager(
organisation_id=organisation_id, mongo_client=mongo_client)
self.mapping_manager = UnifiedMappingManager(
mongo_client=mongo_client, organisation_id=organisation_id)
def connection_enable(self, automap, start_sync, connection_id, updated_by):
How to modify this multiple argument method to kwargs only?
Can someone tell me how to do this?
|
[
"Assuming that your question is about connection_enable().\nJust use **kwargs that allows to receive your args as a dictionary.\nThen inside the function if you want to use variables, you can use get().\nThe advantage of get() is that it returns None if the key is not present (this arg hasn't been provided as an argument).\nclass SalesforceConnectionLiaison:\n\n def connection_enable(self, **kwargs):\n automap = kwargs.get(\"automap\")\n start_sync = kwargs.get(\"start_sync\")\n connection_id = kwargs.get(\"connection_id\")\n updated_by = kwargs.get(\"updated_by\")\n\n"
] |
[
0
] |
[] |
[] |
[
"arguments",
"keyword_argument",
"methods",
"parameters",
"python"
] |
stackoverflow_0074546713_arguments_keyword_argument_methods_parameters_python.txt
|
Q:
Opencv save last N seconds of a camera stream
Is there a way to save last N seconds of a video stream to a file with openCV? E.g. The camera recording starts at 0s and ends at 20s leading to a recorded file which contains the video from 10s -> 20s.
One way I can think of is to save the last N seconds in a memory buffer and write them to file once the process finishes, but this is not a desirable solution because of the latency involved at the end, as well as memory limitations when multiple HD streams are involved.
A:
The best solution is to use a FIFO buffer for the last 10 seconds of the stream and write it to a file when the process stops (as you've explained).
Why would that imply latency, though? You just need to use 2 buffers:
a short buffer for display and a long FIFO buffer for recording the last 10 seconds.
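To make the FIFO idea concrete, here is a minimal sketch using collections.deque, assuming a 30 fps stream and a hypothetical camera index 0:
import collections
import cv2

FPS = 30       # assumed frame rate of the stream
SECONDS = 10   # how much history to keep

# a deque with maxlen is the FIFO: once full, appending a new frame
# silently drops the oldest one, so memory use stays bounded
buffer = collections.deque(maxlen=FPS * SECONDS)

cap = cv2.VideoCapture(0)  # hypothetical camera index
for _ in range(FPS * 60):  # stand-in for the real stop condition
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame)
cap.release()

# flush the last N seconds to disk once the process stops
if buffer:
    h, w = buffer[0].shape[:2]
    out = cv2.VideoWriter('last_10s.avi',
                          cv2.VideoWriter_fourcc(*'XVID'), FPS, (w, h))
    for frame in buffer:
        out.write(frame)
    out.release()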
A:
Common "dashcam"/CCTV solutions write short video segments continuously, then remove the oldest ones as needed.
You can't have a single file in which data "disappears" again.
Use streamable video container formats like MPEG (1/2) Transport Streams (.ts or .mts or various other suffixes). These types can be decoded even if they're incomplete. Do not use containers that must be "finalized" (metadata written) to be decodable, e.g. .mp4. In any exceptional situation, incomplete files of that type would be unrecoverable.
|
Opencv save last N seconds of a camera stream
|
Is there a way to save last N seconds of a video stream to a file with openCV? E.g. The camera recording starts at 0s and ends at 20s leading to a recorded file which contains the video from 10s -> 20s.
One way I can think of is to save the last N seconds in a memory buffer and write them to file once the process finishes, but this is not a desirable solution because of the latency involved at the end, as well as memory limitations when multiple HD streams are involved.
|
[
"The best solution is to use a fifo buffer for the last 10 seconds or stream and past it into a file when the process stop (as you've explained).\nwhy would it imply a latency though ? just need to use 2 buffer.\na short buffer for display and long fifo buffer for recording last 10 sec\n",
"Common \"dashcam\"/CCTV solutions write short video segments continuously, then remove the oldest ones as needed.\nYou can't have a single file in which data \"disappears\" again.\nUse streamable video container formats like MPEG (1/2) Transport Streams (.ts or .mts or various other suffixes). These types can be decoded even if they're incomplete. Do not use containers that must be \"finalized\" (metadata written) to be decodable, e.g. .mp4. In any exceptional situation, incomplete files of that type would be unrecoverable.\n"
] |
[
0,
0
] |
[] |
[] |
[
"opencv",
"python",
"video",
"video_encoding"
] |
stackoverflow_0074545126_opencv_python_video_video_encoding.txt
|
Q:
For a large array (16,000+ rows): How to find index value of a 2D array that satisfies a certain condition, as quickly as possible?
I have a dataset having x coordinates, y coordinates, and a function value. I have a function that checks for input coordinates, and does something depending on whether or not the value is found. But for the large numpy array, it takes too long: about a second to check through two such arrays (x and y).
The reason it is a 'long time' is because this same check itself happens a few thousand times. Basically, I am trying to match the function value from one large grid with its own spacing in x and y to a new large grid with different spacing, using interpolation when the x, y values do not match.
Is there a faster way to find the proper index? The problem is, the new grid values cannot be assigned for all grid points at the same time like using a text file; its coordinates get passed one at a time using a function (rules of an open source library). Moreover, read_excel() needs to be called several times too, since this function can only handle the new grid coordinates as its arguments; otherwise there are constructor errors in the code that calls this function.
I tried this common way:
def assign_to_new_grid(p):
old_grid_values_pd = pd.read_excel('data.xlsx')
old_grid_values = np.array(old_grid_values_pd)
if p.x in old_grid_values[:,0] and p.y in old_grid_values[:,1]:
#Assign simply, 0th column has x values, 1st column has y values
else:
#Interpolate
This gets called for all thousands of points in the new grid. There's no way to use a list of the new coordinates productively, since that will only eliminate the need for the else statement, not the function as a whole.
A:
If one of your performance problems is reading the file countless times, like this:
read_excel_count = 0
def pd_read_excel(filename): # fake for demo
global read_excel_count
read_excel_count += 1
print(f"call pd_read_excel, {filename=!r}, {read_excel_count=}")
def assign_to_new_grid(p):
old_grid_values_pd = pd_read_excel('data.xlsx')
# ...
for i in range(100):
assign_to_new_grid(...)
call pd_read_excel, filename='data.xlsx', read_excel_count=1
call pd_read_excel, filename='data.xlsx', read_excel_count=2
call pd_read_excel, filename='data.xlsx', read_excel_count=3
...
call pd_read_excel, filename='data.xlsx', read_excel_count=99
call pd_read_excel, filename='data.xlsx', read_excel_count=100
Then you can read it once and reuse it afterwards, by creating a class to store the data, and giving a method instead of a function to your framework, like so:
read_excel_count = 0 # reset
class Something:
def __init__(self, filename):
self.data = pd_read_excel(filename)
def assign_to_new_grid(self, p):
old_grid_values_pd = self.data
# ...
filename = 'data.xlsx'
assign_to_new_grid = Something(filename).assign_to_new_grid # taking a reference on the instance's method
for i in range(100):
assign_to_new_grid(...)
Which will result in only 1 call :
call pd_read_excel, filename='data.xlsx', read_excel_count=1
And there is a stdlib tool just for that: functools.lru_cache!
from functools import lru_cache

@lru_cache(maxsize=10) # maxsize is optional, but better be careful if you have large data files
def pd_read_excel(filename): # fake for demo
global read_excel_count
read_excel_count += 1
print(f"call pd_read_excel, {filename=!r}, {read_excel_count=}")
def assign_to_new_grid(p):
old_grid_values_pd = pd_read_excel('data.xlsx')
# ...
for i in range(100):
assign_to_new_grid(...)
Which also gets the pd_read_excel function called only once, the first time, when the cache is not yet filled.
And it will work if you have several files too.
This is a simple "trick" that may get you a big speed boost; I know it sometimes did for me. But without a proper way to understand/reproduce your problem, I fear it will not be practical for us to search for a more specific solution.
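On the indexing side of the question (separate from the caching point above), a vectorized lookup is typically much faster than Python-level membership tests. A sketch, assuming old_grid_values is the question's array with x, y in the first two columns and the function value in the third, and p the query point:
import numpy as np

# boolean mask of rows whose x AND y both match the query point;
# this also avoids a subtle issue in the original check, where p.x and
# p.y could each match in *different* rows (exact float comparison,
# the same semantics as the original 'in' tests)
mask = (old_grid_values[:, 0] == p.x) & (old_grid_values[:, 1] == p.y)
idx = np.flatnonzero(mask)  # indices of matching rows, empty if none
if idx.size:
    value = old_grid_values[idx[0], 2]  # assign simply
else:
    pass  # interpolate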
|
For a large array (16,000+ rows): How to find index value of a 2D array that satisfies a certain condition, as quickly as possible?
|
I have a dataset having x coordinates, y coordinates, and a function value. I have a function that checks for input coordinates, and does something depending on whether or not the value is found. But for the large numpy array, it takes too long: about a second to check through two such arrays (x and y).
The reason it is a 'long time' is because this same check itself happens a few thousand times. Basically, I am trying to match the function value from one large grid with its own spacing in x and y to a new large grid with different spacing, using interpolation when the x, y values do not match.
Is there a faster way to find the proper index? The problem is, the new grid values cannot be assigned for all grid points at the same time like using a text file; its coordinates get passed one at a time using a function (rules of an open source library). Moreover, read_excel() needs to be called several times too, since this function can only handle the new grid coordinates as its arguments; otherwise there are constructor errors in the code that calls this function.
I tried this common way:
def assign_to_new_grid(p):
old_grid_values_pd = pd.read_excel('data.xlsx')
old_grid_values = np.array(old_grid_values_pd)
if p.x in old_grid_values[:,0] and p.y in old_grid_values[:,1]:
#Assign simply, 0th column has x values, 1st column has y values
else:
#Interpolate
This gets called for all thousands of points in the new grid. There's no way to use a list of the new coordinates productively, since that will only eliminate the need for the else statement, not the function as a whole.
|
[
"If one of your performance problems is reading the file countless time, like that :\nread_excel_count = 0\ndef pd_read_excel(filename): # fake for demo\n global read_excel_count\n read_excel_count += 1\n print(f\"call pd_read_excel, {filename=!r}, {read_excel_count=}\")\n\n\ndef assign_to_new_grid(p):\n old_grid_values_pd = pd_read_excel('data.xlsx')\n # ...\n\n\nfor i in range(100):\n assign_to_new_grid(...)\n\ncall pd_read_excel, filename='data.xlsx', read_excel_count=1\ncall pd_read_excel, filename='data.xlsx', read_excel_count=2\ncall pd_read_excel, filename='data.xlsx', read_excel_count=3\n...\ncall pd_read_excel, filename='data.xlsx', read_excel_count=99\ncall pd_read_excel, filename='data.xlsx', read_excel_count=100\n\nThen you can read it once and reuse it afterwards, by creating a class to store the data, and giving a method instead of a function to your framework, like so :\nread_excel_count = 0 # reset\n\nclass Something:\n def __init__(self, filename):\n self.data = pd_read_excel(filename)\n\n def assign_to_new_grid(self, p):\n old_grid_values_pd = self.data\n # ...\n\nfilename = 'data.xlsx'\nassign_to_new_grid = Something(filename).assign_to_new_grid # taking a reference on the instance's method\n\nfor i in range(100):\n assign_to_new_grid(...)\n\nWhich will result in only 1 call :\ncall pd_read_excel, filename='data.xlsx', read_excel_count=1\n\nAnd there is a stdlib tool just for that : functools.lru_cache !\n@lru_cache(maxsize=10) # maxsize is optional, but better be careful if you have large data files\ndef pd_read_excel(filename): # fake for demo\n global read_excel_count\n read_excel_count += 1\n print(f\"call pd_read_excel, {filename=!r}, {read_excel_count=}\")\n\ndef assign_to_new_grid(p):\n old_grid_values_pd = pd_read_excel('data.xlsx')\n # ...\n\nfor i in range(100):\n assign_to_new_grid(...)\n\nWHich also get the pd_read_excel function to only be called once, the first time when the cache is not yet filled.\nAnd it will work if you have several files too.\nThis is a simple \"trick\" that may get you a big speed boost, I know it sometimes did for me. But without a proper way to understand/reproduce your problem, I fear it will not be practical for us to search a solution to your problem.\n"
] |
[
0
] |
[] |
[] |
[
"arrays",
"large_files",
"python"
] |
stackoverflow_0074537402_arrays_large_files_python.txt
|
Q:
Add Array to pandas column
I need to iterate over a dataframe. In each iteration row.Text is converted into a vector representation and stored as a numpy.ndarray (newData). Now I want to add a column (Vektoren) to the original dataframe and assign the newData array to each row.
for idx,row in data.iterrows():
doc = nlp(row.Text)
newData =doc.vector
data.loc[idx,'Vektoren'] = newData
Unfortunately I can't get it to work. What would be a better way than using iterrows?
I got it to work with a list:
vectorList = []
for idx,row in data.iterrows():
doc = nlp(row.Text)
newData =doc.vector
vectorList.append(newData)
data['Vektoren'] = pd.Series(vectorList)
I am still wondering if there is a more elegant solution.
A:
Make your solution concise with map:
data['Vektoren'] = data['Text'].map(lambda s: nlp(s).vector)
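One thing worth knowing about the result (a sketch, reusing data and nlp from the question): since each nlp(s).vector is a numpy array, the new column ends up with object dtype, one whole array per cell.
print(data['Vektoren'].dtype)          # object: each cell holds an ndarray
print(data['Vektoren'].iloc[0].shape)  # e.g. (96,), depending on the model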
|
Add Array to pandas column
|
I need to iterate over a dataframe. In each iteration row.Text is converted into a vector representation and stored as a numpy.ndarray (newData). Now I want to add a column (Vektoren) to the original dataframe and assign the newData array to each row.
for idx,row in data.iterrows():
doc = nlp(row.Text)
newData =doc.vector
data.loc[idx,'Vektoren'] = newData
Unfortunately I can't get it to work. What would be a better way than using iterrows?
I got it to work with a list:
vectorList = []
for idx,row in data.iterrows():
doc = nlp(row.Text)
newData =doc.vector
vectorList.append(newData)
data['Vektoren'] = pd.Series(vectorList)
I am still wondering if there is a more elegant solution.
|
[
"Make your solution concise with map\ndata['Vektoren'] = data['Text'].map(lambda s: nlp(s).vector)\n\n"
] |
[
1
] |
[] |
[] |
[
"iteration",
"pandas",
"python"
] |
stackoverflow_0074546445_iteration_pandas_python.txt
|
Q:
image is too big for OpenCV imshow window, how do I make it smaller?
I'm comparing two images - a complete image & a small part of the same image. If a match is found, then a rectangular box is drawn around that part of the image which contains the smaller image.
To implement this, I have used the matchTemplate method.
The code works as expected, but if the original image's dimensions are 1000 PPI or above, then the image gets cut off when displayed; hence, the sub-image cannot be highlighted.
Is there a way to fix this?
My code -->
import cv2
import numpy as np
img = cv2.imread("C:\Images\big_image.png")
grey_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
template = cv2.imread("C:\Images\sub_image.png", 0)
w, h = template.shape[::-1]
res = cv2.matchTemplate(grey_img, template, cv2.TM_CCOEFF_NORMED)
print(res)
threshold = 0.9;
loc = np.where(res >= threshold)
print(loc)
for pt in zip(*loc[::-1]):
cv2.rectangle(img, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 2)
cv2.imshow("img", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
A:
Before imshow, call namedWindow() with the WINDOW_NORMAL flag. That makes the window resizable and scales the image to the size of the window.
cv2.namedWindow("img", cv2.WINDOW_NORMAL)
# then imshow()...
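Slotted into the posted code, and with a hypothetical initial window size set via the optional resizeWindow call, the display section becomes something like:
cv2.namedWindow("img", cv2.WINDOW_NORMAL)
cv2.resizeWindow("img", 1280, 720)  # optional: hypothetical initial size
cv2.imshow("img", img)
cv2.waitKey(0)
cv2.destroyAllWindows()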
|
image is too big for OpenCV imshow window, how do I make it smaller?
|
I'm comparing two images - a complete image & a small part of the same image. If a match is found, then a rectangular box is drawn around that part of the image which contains the smaller image.
To implement this, I have used the matchTemplate method.
The code works as expected, but if the original image's dimensions are 1000 PPI or above, then the image gets cut off when displayed; hence, the sub-image cannot be highlighted.
Is there a way to fix this?
My code -->
import cv2
import numpy as np
img = cv2.imread("C:\Images\big_image.png")
grey_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
template = cv2.imread("C:\Images\sub_image.png", 0)
w, h = template.shape[::-1]
res = cv2.matchTemplate(grey_img, template, cv2.TM_CCOEFF_NORMED)
print(res)
threshold = 0.9;
loc = np.where(res >= threshold)
print(loc)
for pt in zip(*loc[::-1]):
cv2.rectangle(img, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 2)
cv2.imshow("img", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
|
[
"Before imshow, call namedWindow() with the WINDOW_NORMAL flag. That makes it resizable and scales the image to the size of the window.\ncv.namedWindow(\"img\", cv.WINDOW_NORMAL)\n# then imshow()...\n\n"
] |
[
1
] |
[] |
[] |
[
"opencv",
"python",
"user_interface"
] |
stackoverflow_0074546171_opencv_python_user_interface.txt
|
Q:
How can I handle 400 bad request error using DRF in Django
I am trying to perform a POST request using DRF in Django. The server logs a 400 error (Bad Request: /api/menu_items/) and the frontend shows the error "This field is required", but I cannot see exactly which field is missing. How can I locate the missing field? The error occurs when I try to post a new Menu item.
This is the place model
# Place models
class Place(models.Model):
# When User is deleted the Place gets deleted too
owner = models.ForeignKey(User, on_delete=models.CASCADE)
name = models.CharField(max_length=255)
image = models.CharField(max_length=255)
number_of_tables = models.IntegerField(default=1)
def __str__(self):
return "{}/{}".format(self.owner.username, self.name)
This is the Menu Item model
class MenuItem(models.Model):
place = models.ForeignKey(Place, on_delete=models.CASCADE)
category = models.ForeignKey(Category, on_delete=models.CASCADE, related_name="menu_items")
name = models.CharField(max_length=255)
description = models.TextField(blank=True)
price = models.IntegerField(default=0,)
image = models.CharField(max_length=255)
is_available = models.BooleanField(default=True)
def __str__(self):
return "{}/{}".format(self.category, self.name)
Below are the serialisers.
The error is occurring in the MenuItemSerializer.
from rest_framework import serializers
from . import models
class MenuItemSerializer(serializers.ModelSerializer):
class Meta:
model = models.MenuItem
fields = ('id', 'name', 'description', 'price', 'image', 'is_available', 'place', 'category')
class CategorySerializer(serializers.ModelSerializer):
menu_items = MenuItemSerializer(many=True, read_only=True)
class Meta:
model = models.Category
fields = ('id', 'name', 'menu_items', 'place')
class PlaceDetailSerializer(serializers.ModelSerializer):
categories = CategorySerializer(many=True, read_only=True)
class Meta:
model = models.Place
fields = ('id','name','image','number_of_tables','categories',)
class PlaceSerializer(serializers.ModelSerializer):
class Meta:
model = models.Place
fields = ('id', 'name', 'image')
Following are the views
from rest_framework import generics
from . import models, serializers, permissions
from django.core.exceptions import BadRequest
#Place Views
class PlaceList(generics.ListCreateAPIView):
serializer_class = serializers.PlaceSerializer
# Filtering content
def get_queryset(self):
return models.Place.objects.filter(owner_id=self.request.user.id)
# Only the user of a place can make changes
def perform_create(self, serializer):
serializer.save(owner=self.request.user)
class PlaceDetail(generics.RetrieveUpdateDestroyAPIView):
permission_classes = [permissions.IsOwnerOrReadOnly] #passing permissions
serializer_class = serializers.PlaceDetailSerializer
queryset = models.Place.objects.all()
# Category List
class CategoryList(generics.CreateAPIView):
permission_classes = [permissions.PlaceOwnerOrReadOnly]
serializer_class = serializers.CategorySerializer
# Category Details
#No direct relation between Place and Category
class CategoryDetail(generics.UpdateAPIView, generics.DestroyAPIView):
serializer_class = serializers.CategorySerializer
queryset = models.Place.objects.all()
# Menu Items
class MenuItemList(generics.CreateAPIView):
serializer_class = serializers.MenuItemSerializer
permission_classes = [permissions.PlaceOwnerOrReadOnly]
# Menu Item Details
class MenuItemDetail(generics.UpdateAPIView, generics.DestroyAPIView):
permission_classes = [permissions.PlaceOwnerOrReadOnly]
serializer_class = serializers.MenuItemSerializer
queryset = models.MenuItem.objects.all()
This is the UI code for the Menu Form
import { Button, Form, Overlay } from 'react-bootstrap';
import Popover from 'react-bootstrap/Popover';
import { RiPlayListAddFill } from 'react-icons/ri';
import { toast } from 'react-toastify';
import { addCategory, addMenuItems } from '../apis';
import ImageDropzone from './ImageDropZone';
import AuthContext from '../contexts/AuthContext';
import { useState, useRef,useContext } from 'react';
function MenuItemForms({ place, onDone }) {
const [categoryName, setCategoryName] = useState("");
const [categoryFormShow, setCategoryFormShow] = useState(false);
const [category, setCategory] = useState("");
const [itemName, setItemName] = useState("");
const [price, setPrice] = useState(itemName.price || 0);
const [description, setDescription] = useState(itemName.description);
const [image, setImage] = useState("");
const [isAvailable, setIsAvailable] = useState(true);
const target = useRef(null);
const auth = useContext(AuthContext);
//Adding category event
const onAddCategory = async () => {
const json = await addCategory({ name: categoryName, place: place.id }, auth.token)
console.log(json)
if (json) {
toast(`Category ${json.name} was created.`, { type: "success" });
setCategory(json.id);
setCategoryName("");
setCategoryFormShow(false);
onDone();
}
}
const onAddMenuItems = async () => {
const json = await addMenuItems({
place: place.id,
category,
itemName,
price,
description,
image,
is_Available: isAvailable,
}, auth.token);
if (json) {
toast(`Menu Item ${json.name} was created`);
setCategory("");
setItemName("");
setPrice(0);
setDescription("");
setImage("");
setIsAvailable(true);
onDone()
}
}
return (
<div>
{/* Category Form */}
<Form.Group>
<Form.Label>Category</Form.Label>
<Form.Control as="select" value={ category } onChange={ (e) => setCategory(e.target.value) }>
<option />
{ place?.categories?.map((c) => (
<option key={ c.id } value={ c.id }>
{ c.name }
</option>
)) }
</Form.Control>
{/* Here */ }
<Button variant="link" ref={ target } onClick={ () => setCategoryFormShow(true) }>
<RiPlayListAddFill />
</Button>
<Overlay
target={ target.current }
show={ categoryFormShow }
placement="right"
rootClose
onHide={() => setCategoryFormShow(false)}
>
<Popover id="popover-contained">
<Popover.Header as="h3">Category</Popover.Header>
<Popover.Body>
<Form.Group>
<Form.Control
type="text"
placeHolder="Category Name"
value={ categoryName }
onChange={(e) => setCategoryName(e.target.value)}
/>
</Form.Group>
<Button className="mt-2" variant="standard" block onClick={onAddCategory}>
Add Category
</Button>
</Popover.Body>
</Popover>
</Overlay>
</Form.Group>
<Form.Group>
<Form.Label>Name</Form.Label>
<Form.Control
type="text"
placeholder= "Enter item name"
value={ itemName }
onChange={(e) => setItemName(e.target.value)}
/>
</Form.Group>
{/* Price input */}
<Form.Group>
<Form.Label>Price</Form.Label>
<Form.Control
type="number"
placeholder="Enter the price.."
value={ price }
onChange={ (e) => setPrice(e.target.value) }
/>
</Form.Group>
{/* Description */}
<Form.Group>
<Form.Label>Description</Form.Label>
<Form.Control
type="text"
placeholder="Enter Description.."
value={ description }
onChange={ (e) => setDescription(e.target.value) }
/>
</Form.Group>
{/* Image */ }
<Form.Group>
<Form.Label>Image</Form.Label>
<ImageDropzone value={image} onChange={setImage} />
</Form.Group>
<Form.Group>
<Form.Check className='m-1'
type="checkbox"
label="Is available"
checked={ isAvailable }
onChange={(e) => setIsAvailable(e.target.checked)}
/>
</Form.Group>
<Button variant="standard" block onClick={onAddMenuItems}>
+ Add Menu Iten
</Button>
</div>
)
}
export default MenuItemForms
A:
I solved the error: I wasn't passing the right values in the useState() function.
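For anyone hitting the same problem: a 400 response from DRF normally carries a per-field error dict in its body, so the missing field can be read straight off it. A minimal sketch, reusing the MenuItemSerializer from the question:
# e.g. in a Django shell or a debugging view: validate manually and
# inspect the per-field errors instead of just the 400 status
serializer = serializers.MenuItemSerializer(data=request.data)
if not serializer.is_valid():
    print(serializer.errors)
    # e.g. {'name': ['This field is required.']}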
|
How can I handle 400 bad request error using DRF in Django
|
I am trying to perform a POST request using DRF in Django. The server logs a 400 error (Bad Request: /api/menu_items/) and the frontend shows the error "This field is required", but I cannot see exactly which field is missing. How can I locate the missing field? The error occurs when I try to post a new Menu item.
This is the place model
# Place models
class Place(models.Model):
# When User is deleted the Place gets deleted too
owner = models.ForeignKey(User, on_delete=models.CASCADE)
name = models.CharField(max_length=255)
image = models.CharField(max_length=255)
number_of_tables = models.IntegerField(default=1)
def __str__(self):
return "{}/{}".format(self.owner.username, self.name)
This is the Menu Item model
class MenuItem(models.Model):
place = models.ForeignKey(Place, on_delete=models.CASCADE)
category = models.ForeignKey(Category, on_delete=models.CASCADE, related_name="menu_items")
name = models.CharField(max_length=255)
description = models.TextField(blank=True)
price = models.IntegerField(default=0,)
image = models.CharField(max_length=255)
is_available = models.BooleanField(default=True)
def __str__(self):
return "{}/{}".format(self.category, self.name)
Below are the serialisers.
The error is occurring in the MenuItemSerializer.
from rest_framework import serializers
from . import models
class MenuItemSerializer(serializers.ModelSerializer):
class Meta:
model = models.MenuItem
fields = ('id', 'name', 'description', 'price', 'image', 'is_available', 'place', 'category')
class CategorySerializer(serializers.ModelSerializer):
menu_items = MenuItemSerializer(many=True, read_only=True)
class Meta:
model = models.Category
fields = ('id', 'name', 'menu_items', 'place')
class PlaceDetailSerializer(serializers.ModelSerializer):
categories = CategorySerializer(many=True, read_only=True)
class Meta:
model = models.Place
fields = ('id','name','image','number_of_tables','categories',)
class PlaceSerializer(serializers.ModelSerializer):
class Meta:
model = models.Place
fields = ('id', 'name', 'image')
Following are the views
from rest_framework import generics
from . import models, serializers, permissions
from django.core.exceptions import BadRequest
#Place Views
class PlaceList(generics.ListCreateAPIView):
serializer_class = serializers.PlaceSerializer
# Filtering content
def get_queryset(self):
return models.Place.objects.filter(owner_id=self.request.user.id)
# Only the user of a place can make changes
def perform_create(self, serializer):
serializer.save(owner=self.request.user)
class PlaceDetail(generics.RetrieveUpdateDestroyAPIView):
permission_classes = [permissions.IsOwnerOrReadOnly] #passing permissions
serializer_class = serializers.PlaceDetailSerializer
queryset = models.Place.objects.all()
# Category List
class CategoryList(generics.CreateAPIView):
permission_classes = [permissions.PlaceOwnerOrReadOnly]
serializer_class = serializers.CategorySerializer
# Category Details
#No direct relation between Place and Category
class CategoryDetail(generics.UpdateAPIView, generics.DestroyAPIView):
serializer_class = serializers.CategorySerializer
queryset = models.Place.objects.all()
# Menu Items
class MenuItemList(generics.CreateAPIView):
serializer_class = serializers.MenuItemSerializer
permission_classes = [permissions.PlaceOwnerOrReadOnly]
# Menu Item Details
class MenuItemDetail(generics.UpdateAPIView, generics.DestroyAPIView):
permission_classes = [permissions.PlaceOwnerOrReadOnly]
serializer_class = serializers.MenuItemSerializer
queryset = models.MenuItem.objects.all()
This is the UI code for the Menu Form
import { Button, Form, Overlay } from 'react-bootstrap';
import Popover from 'react-bootstrap/Popover';
import { RiPlayListAddFill } from 'react-icons/ri';
import { toast } from 'react-toastify';
import { addCategory, addMenuItems } from '../apis';
import ImageDropzone from './ImageDropZone';
import AuthContext from '../contexts/AuthContext';
import { useState, useRef,useContext } from 'react';
function MenuItemForms({ place, onDone }) {
const [categoryName, setCategoryName] = useState("");
const [categoryFormShow, setCategoryFormShow] = useState(false);
const [category, setCategory] = useState("");
const [itemName, setItemName] = useState("");
const [price, setPrice] = useState(itemName.price || 0);
const [description, setDescription] = useState(itemName.description);
const [image, setImage] = useState("");
const [isAvailable, setIsAvailable] = useState(true);
const target = useRef(null);
const auth = useContext(AuthContext);
//Adding category event
const onAddCategory = async () => {
const json = await addCategory({ name: categoryName, place: place.id }, auth.token)
console.log(json)
if (json) {
toast(`Category ${json.name} was created.`, { type: "success" });
setCategory(json.id);
setCategoryName("");
setCategoryFormShow(false);
onDone();
}
}
const onAddMenuItems = async () => {
const json = await addMenuItems({
place: place.id,
category,
itemName,
price,
description,
image,
is_Available: isAvailable,
}, auth.token);
if (json) {
toast(`Menu Item ${json.name} was created`);
setCategory("");
setItemName("");
setPrice(0);
setDescription("");
setImage("");
setIsAvailable(true);
onDone()
}
}
return (
<div>
{/* Category Form */}
<Form.Group>
<Form.Label>Category</Form.Label>
<Form.Control as="select" value={ category } onChange={ (e) => setCategory(e.target.value) }>
<option />
{ place?.categories?.map((c) => (
<option key={ c.id } value={ c.id }>
{ c.name }
</option>
)) }
</Form.Control>
{/* Here */ }
<Button variant="link" ref={ target } onClick={ () => setCategoryFormShow(true) }>
<RiPlayListAddFill />
</Button>
<Overlay
target={ target.current }
show={ categoryFormShow }
placement="right"
rootClose
onHide={() => setCategoryFormShow(false)}
>
<Popover id="popover-contained">
<Popover.Header as="h3">Category</Popover.Header>
<Popover.Body>
<Form.Group>
<Form.Control
type="text"
placeHolder="Category Name"
value={ categoryName }
onChange={(e) => setCategoryName(e.target.value)}
/>
</Form.Group>
<Button className="mt-2" variant="standard" block onClick={onAddCategory}>
Add Category
</Button>
</Popover.Body>
</Popover>
</Overlay>
</Form.Group>
<Form.Group>
<Form.Label>Name</Form.Label>
<Form.Control
type="text"
placeholder= "Enter item name"
value={ itemName }
onChange={(e) => setItemName(e.target.value)}
/>
</Form.Group>
{/* Price input */}
<Form.Group>
<Form.Label>Price</Form.Label>
<Form.Control
type="number"
placeholder="Enter the price.."
value={ price }
onChange={ (e) => setPrice(e.target.value) }
/>
</Form.Group>
{/* Description */}
<Form.Group>
<Form.Label>Description</Form.Label>
<Form.Control
type="text"
placeholder="Enter Description.."
value={ description }
onChange={ (e) => setDescription(e.target.value) }
/>
</Form.Group>
{/* Image */ }
<Form.Group>
<Form.Label>Image</Form.Label>
<ImageDropzone value={image} onChange={setImage} />
</Form.Group>
<Form.Group>
<Form.Check className='m-1'
type="checkbox"
label="Is available"
checked={ isAvailable }
onChange={(e) => setIsAvailable(e.target.checked)}
/>
</Form.Group>
<Button variant="standard" block onClick={onAddMenuItems}>
+ Add Menu Iten
</Button>
</div>
)
}
export default MenuItemForms
|
[
"I got solved the error, I wasn't passing the right values is the useState() function.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_rest_framework",
"generics",
"python",
"rest"
] |
stackoverflow_0074546188_django_django_rest_framework_generics_python_rest.txt
|
Q:
merging dataframes by country and year while the countries are not named the same (for example US, United States)
Hello, I am trying to drop rows whose value in a specific column is a string that is not a year.
For example, the last rows have year formats containing decimal points or '-'.
I have tried converting the year column to string and then dropping the rows using the code below, but it only removes the row with 2011-21; the ones with decimal points stay.
df.level_1=df.level_1.astype(str)
df.loc[
(~df.level_1.str.contains("."))
|~(df.level_1.str.contains("-")),
:]
Is there a way to fix this issue?
A:
You can keep only the rows where level_1 contains no non-digit characters:
df[~df.level_1.str.contains(r'\D')]
A:
you can use regex:
df['level_1'] = df['level_1'].astype(str)
df = df[df['level_1'].str.contains(r'^\d{4}$', regex=True)]
(the anchors ^ and $ make sure only plain four-digit years are kept)
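As an aside on why the original attempt misbehaved: str.contains treats its pattern as a regex by default, and '.' is the regex wildcard matching any character, so contains(".") is True for every non-empty string. A sketch of testing for a literal dot instead:
# escape the dot or disable regex to test for a literal '.'
df = df[~df['level_1'].str.contains('.', regex=False)]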
|
merging dataframes by country and year while the countries are not named the same (for example US, United States)
|
Hello, I am trying to drop rows whose value in a specific column is a string that is not a year.
For example, the last rows have year formats containing decimal points or '-'.
I have tried converting the year column to string and then dropping the rows using the code below, but it only removes the row with 2011-21; the ones with decimal points stay.
df.level_1=df.level_1.astype(str)
df.loc[
(~df.level_1.str.contains("."))
|~(df.level_1.str.contains("-")),
:]
Is there a way to fix this issue?
|
[
"You can filter all rows where level_1 contains non digit characters:\ndf[~df.level_1.str.contains('\\D')]\n\n",
"you can use regex:\ndf['level_1']=df['level_1'].astype(str)\ndf = df[df['level_1'].str.contains('\\d\\d\\d\\d-\\d\\d',regex=True)]\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"data_science",
"dataframe",
"fuzzy_logic",
"pandas",
"python"
] |
stackoverflow_0074545960_data_science_dataframe_fuzzy_logic_pandas_python.txt
|
Q:
Why do I not get the expected result when concatenating a square wave and f(x)=x^2?
When I plot each t and f variable individually, I get the 4 parts of the signal I am looking for, but when I concatenate them the result is not the signal I am expecting.
Here is the wave I am trying to replicate:
Here is the current output I am getting:
Here is the code:
T = 0.5
dutycycle = 0.5
samples = 10000
t2 = np.linspace(-0.5, 0, samples)
f2 = square(2*np.pi/T*t2, duty=dutycycle)
t3 = np.linspace(0.5, 1, samples)
f3 = square(2*np.pi/T*t3, duty=dutycycle)
t4 = np.linspace(-1, -0.5, samples)
f4 = (t4)**2
t5 = np.linspace(0, 0.5, samples)
f5 = (t5)**2
t = np.concatenate((t2, t3, t4, t5))
f = np.concatenate((f2, f3, f4, f5))
plt.plot(t, f, label="$x(t)$")
A:
The order of points matters in a line plot: plt.plot connects the points in the order they are given, and you did not put the time intervals in ascending order. In your example, you need to change the lines:
t = np.concatenate((t2, t3, t4, t5))
f = np.concatenate((f2, f3, f4, f5))
to
t = np.concatenate((t4, t2, t5, t3))
f = np.concatenate((f4, f2, f5, f3))
because the t4 vector you created starts at -1, and so on.
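A more general fix, if you would rather not track segment order by hand, is to sort the concatenated arrays by time before plotting. A sketch, reusing t and f from the question:
import numpy as np

order = np.argsort(t)  # indices that put the time axis in ascending order
plt.plot(t[order], f[order], label="$x(t)$")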
|
Why do I not get the expected result when concatenating a square wave and f(x)=x^2?
|
When I plot each t and f variable individually, I get the 4 parts of the signal I am looking for, but when I concatenate them the result is not the signal I am expecting.
Here is the wave I am trying to replicate:
Here is the current output I am getting:
Here is the code:
T = 0.5
dutycycle = 0.5
samples = 10000
t2 = np.linspace(-0.5, 0, samples)
f2 = square(2*np.pi/T*t2, duty=dutycycle)
t3 = np.linspace(0.5, 1, samples)
f3 = square(2*np.pi/T*t3, duty=dutycycle)
t4 = np.linspace(-1, -0.5, samples)
f4 = (t4)**2
t5 = np.linspace(0, 0.5, samples)
f5 = (t5)**2
t = np.concatenate((t2, t3, t4, t5))
f = np.concatenate((f2, f3, f4, f5))
plt.plot(t, f, label="$x(t)$")
|
[
"The order of points in plt.plot matters in a line-plot. You put the time-intervals not in ascending order. In your example, you need to change the lines:\nt = np.concatenate((t2, t3, t4, t5))\nf = np.concatenate((f2, f3, f4, f5))\n\nto\nt = np.concatenate((t4, t2, t5, t3))\nf = np.concatenate((f4, f2, f5, f3))\n\nbecause, the t4 vector you created starts at -1, etc.\n"
] |
[
0
] |
[] |
[] |
[
"concatenation",
"numpy",
"python"
] |
stackoverflow_0074540078_concatenation_numpy_python.txt
|
Q:
drop_duplicates not working in pandas?
The purpose of my code is to import 2 Excel files, compare them, and print out the differences to a new Excel file.
However, after concatenating all the data and using the drop_duplicates function, the code runs without complaint in the console. But when the result is printed to the new Excel file, duplicates still remain within the day.
Am I missing something? Is something nullifying the drop_duplicates function?
My code is as follows:
import datetime
import xlrd
import pandas as pd
#identify excel file paths
filepath = r"excel filepath"
filepath2 = r"excel filepath2"
#read relevant columns from the excel files
df1 = pd.read_excel(filepath, sheetname="Sheet1", parse_cols= "B, D, G, O")
df2 = pd.read_excel(filepath2, sheetname="Sheet1", parse_cols= "B, D, F, J")
#merge the columns from both excel files into one column each respectively
df4 = df1["Exchange Code"] + df1["Product Type"] + df1["Product Description"] + df1["Quantity"].apply(str)
df5 = df2["Exchange"] + df2["Product Type"] + df2["Product Description"] + df2["Quantity"].apply(str)
#concatenate both columns from each excel file, to make one big column containing all the data
df = pd.concat([df4, df5])
#remove all whitespace from each row of the column of data
df=df.str.strip()
df=["".join(x.split()) for x in df]
#convert the data to a dataframe from a series
df = pd.DataFrame({'Value': df})
#remove any duplicates
df.drop_duplicates(subset=None, keep="first", inplace=False)
#print to the console just as a visual aid
print(df)
#print the erroneous entries to an excel file
df.to_excel("Comparison19.xls")
A:
You've got inplace=False so you're not modifying df. You want either
df.drop_duplicates(subset=None, keep="first", inplace=True)
or
df = df.drop_duplicates(subset=None, keep="first", inplace=False)
A:
I have just had this issue, and this was not the solution.
It may be in the docs - I admittedly haven't looked - but crucially this applies when de-duplicating on date-based rows: the 'date' column must actually be a datetime type.
If the date data is a pandas object dtype, drop_duplicates will not work as expected - do a pd.to_datetime first.
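A one-line sketch of that conversion, assuming the column is called 'date':
# convert the object-dtype column to real datetimes first,
# then drop_duplicates behaves as expected
df['date'] = pd.to_datetime(df['date'])
df = df.drop_duplicates(subset=['date'])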
A:
If you are using a DatetimeIndex in your DataFrame this will not work
df.drop_duplicates(subset=None, keep="first", inplace=True)
Instead one can use:
df = df[~df.index.duplicated()]
A:
Might help anyone in the future.
I had a column with dates, where I tried to remove duplicates without success.
If it's not important to keep the column as a date for further operations, you can convert the column from type object to string:
df = df.astype('str')
Then I applied @Keith's answer:
df = df.drop_duplicates(subset=None, keep="first", inplace=False)
A:
The use of inplace=False tells pandas to return a new dataframe with duplicates dropped, so you need to assign that back to df:
df = df.drop_duplicates(subset=None, keep="first", inplace=False)
or inplace=True to tell pandas to drop duplicates in the current dataframe
df.drop_duplicates(subset=None, keep="first", inplace=True)
A:
Not sure if this is a good place to put it, but I recently learned that .drop_duplicates() only drops a row when ALL of the subset columns match.
So for deleting duplicates based on one value at a time I used this code:
no_duplicates_df = df.drop_duplicates(subset=['email'], keep="first", inplace=False) # Delete duplicates in email
no_duplicates_df = no_duplicates_df.drop_duplicates(subset=['phonenumber'], keep="first", inplace=False) # Delete duplicates in phonenumber
A:
I had the same problem, but a different reason.
After appending one dataframe to another I wanted to de-duplicate based on an id (integer). However, appending changed the type of that column to float and it did not work (see https://github.com/pydata/pandas/issues/6485). I fixed it by running the following before running drop_duplicates:
df = df.astype({'id': 'int64'})
|
drop_duplicates not working in pandas?
|
The purpose of my code is to import 2 Excel files, compare them, and print out the differences to a new Excel file.
However, after concatenating all the data and using the drop_duplicates function, the code runs without complaint in the console. But when the result is printed to the new Excel file, duplicates still remain within the day.
Am I missing something? Is something nullifying the drop_duplicates function?
My code is as follows:
import datetime
import xlrd
import pandas as pd
#identify excel file paths
filepath = r"excel filepath"
filepath2 = r"excel filepath2"
#read relevant columns from the excel files
df1 = pd.read_excel(filepath, sheetname="Sheet1", parse_cols= "B, D, G, O")
df2 = pd.read_excel(filepath2, sheetname="Sheet1", parse_cols= "B, D, F, J")
#merge the columns from both excel files into one column each respectively
df4 = df1["Exchange Code"] + df1["Product Type"] + df1["Product Description"] + df1["Quantity"].apply(str)
df5 = df2["Exchange"] + df2["Product Type"] + df2["Product Description"] + df2["Quantity"].apply(str)
#concatenate both columns from each excel file, to make one big column containing all the data
df = pd.concat([df4, df5])
#remove all whitespace from each row of the column of data
df=df.str.strip()
df=["".join(x.split()) for x in df]
#convert the data to a dataframe from a series
df = pd.DataFrame({'Value': df})
#remove any duplicates
df.drop_duplicates(subset=None, keep="first", inplace=False)
#print to the console just as a visual aid
print(df)
#print the erroneous entries to an excel file
df.to_excel("Comparison19.xls")
|
[
"You've got inplace=False so you're not modifying df. You want either\n df.drop_duplicates(subset=None, keep=\"first\", inplace=True)\n\nor\n df = df.drop_duplicates(subset=None, keep=\"first\", inplace=False)\n\n",
"I have just had this issue, and this was not the solution. \nIt may be in the docs - I admittedly havent looked - and crucially this is only when dealing with date-based unique rows: the 'date' column must be formatted as such. \nIf the date data is a pandas object dtype, the drop_duplicates will not work - do a pd.to_datetime first.\n",
"If you are using a DatetimeIndex in your DataFrame this will not work\ndf.drop_duplicates(subset=None, keep=\"first\", inplace=True)\n\nInstead one can use:\ndf = df[~df.index.duplicated()]\n\n",
"Might help anyone in the future.\nI had a column with dates, where I tried to remove duplicates without success.\nIf it's not important to keep the column as a date for further operations, I converted the column from type object to string.\ndf = df.astype('str')\n\nThen I performed @Keith answers\ndf = df.drop_duplicates(subset=None, keep=\"first\", inplace=False)\n\n",
"The use of inplace=False tells pandas to return a new dataframe with duplicates dropped, so you need to assign that back to df:\ndf = df.drop_duplicates(subset=None, keep=\"first\", inplace=False)\n\nor inplace=True to tell pandas to drop duplicates in the current dataframe \ndf.drop_duplicates(subset=None, keep=\"first\", inplace=True)\n\n",
"Not sure if this is a good place to put it. But I recently learned that .drop_duplicates() has to have a match in ALL subsets for dropping a row.\nSo for deleting multiple based on only the one value i used this code:\nno_duplicates_df = df.drop_duplicates(subset=['email'], keep=\"first\", inplace=False) # Delete duplicates in email\nno_duplicates_df = no_duplicates_df.drop_duplicates(subset=['phonenumber'], keep=\"first\", inplace=False) # Delete duplicates in phonenumber\n\n",
"I had the same problem, but a different reason.\nAfter appending one dataframe to another I wanted to de-duplicate based on an id (integer). However, appending changed the type of that column to float and it did not work (see https://github.com/pydata/pandas/issues/6485). I fixed it by running the following before running drop_duplicates:\ndf = df.astype({'id': 'int64'})\n"
] |
[
28,
12,
10,
8,
4,
0,
0
] |
[] |
[] |
[
"duplicates",
"excel",
"pandas",
"python"
] |
stackoverflow_0046489695_duplicates_excel_pandas_python.txt
|
Q:
Pandas Dataframe - Dropping Certain Hours of the Day from 20 Years of Historical Data
I have stock market data for a single security going back 20 years. The data is currently in a Pandas DataFrame, in the following format:
The problem is, I do not want any "after hours" trading data in my DataFrame. The market in question is open from 9:30AM to 4PM (09:30 to 16:00 on each trading day). I would like to drop all rows of data that are not within this time frame.
My instinct is to use a Pandas mask, which I know how to do if I wanted certain hours in a single day:
mask = (df['date'] > '2015-07-06 09:30:0') & (df['date'] <= '2015-07-06 16:00:0')
sub = df.loc[mask]
However, I have no idea how to use one on a revolving basis to remove the data for certain times of day over a 20 year period.
A:
The problem here is how you are importing the data: there is no indicator of whether 04:00 is am or pm. Based on your comments we need to assume it is pm, even though the input shows it as am.
To solve this we need to include two conditions with an OR clause.
9:30-11:59
0:00-4:00
Input:
df = pd.DataFrame({'date': {880551: '2015-07-06 04:00:00', 880552: '2015-07-06 04:02:00',880553: '2015-07-06 04:03:00', 880554: '2015-07-06 04:04:00', 880555: '2015-07-06 04:05:00'},
'open': {880551: 125.00, 880552: 125.36,880553: 125.34, 880554: 125.08, 880555: 125.12},
'high': {880551: 125.00, 880552: 125.36,880553: 125.34, 880554: 125.11, 880555: 125.12},
'low': {880551: 125.00, 880552: 125.32,880553: 125.21, 880554: 125.05, 880555: 125.12},
'close': {880551: 125.00, 880552: 125.32,880553: 125.21, 880554: 125.05, 880555: 125.12},
'volume': {880551: 141, 880552: 200,880553: 750, 880554: 17451, 880555: 1000},
},
)
df.head()
date open high low close volume
880551 2015-07-06 04:00:00 125.00 125.00 125.00 125.00 141
880552 2015-07-06 04:02:00 125.36 125.36 125.32 125.32 200
880553 2015-07-06 04:03:00 125.34 125.34 125.21 125.21 750
880554 2015-07-06 04:04:00 125.08 125.11 125.05 125.05 17451
880555 2015-07-06 04:05:00 125.12 125.12 125.12 125.12 1000
from datetime import time
start_first = time(9, 30)
end_first = time(11, 59)
start_second = time(0, 00)
end_second = time(4,00)
df['date'] = pd.to_datetime(df['date'])
df= df[(df['date'].dt.time.between(start_first, end_first)) | (df['date'].dt.time.between(start_second, end_second))]
df
date open high low close volume
880551 2015-07-06 04:00:00 125.0 125.0 125.0 125.0 141
The above is not good practice, and I strongly discourage using this kind of ambiguous data; the long-term solution is to correctly populate the data with am/pm.
With correctly formatted data we can achieve it in two ways:
1) using datetime
from datetime import time
start = time(9, 30)
end = time(16)
df['date'] = pd.to_datetime(df['date'])
df= df[df['date'].dt.time.between(start, end)]
2) using between_time, which only works with a datetime index
df['date'] = pd.to_datetime(df['date'])
df = (df.set_index('date')
.between_time('09:30', '16:00')
.reset_index())
If you still face error, edit your question with line by line approach and exact error.
A:
I think the answer is already in the comments (@Parfait's .between_time) but it got lost in the debugging issues. It appears your df['date'] column is not of datetime type yet.
This should be enough to fix that and get the required result:
df['date'] = pd.to_datetime(df['date'])
df = df.set_index('date')
df = df.between_time('9:30', '16:00')
A:
This example code consolidates the answers provided by Bhavesh Ghodasara, Parfait and jorijnsmit into one complete, commented example:
import pandas as pd
# example dataframe containing 6 records: 2 days of 3 records each in which all cases are covered:
# each day has one record before trading hours, one record during trading hours and one record after trading hours
df = pd.DataFrame({'date': {0: '2015-07-06 08:00:00', 1: '2015-07-06 13:00:00', 2: '2015-07-06 18:00:00',
3: '2015-07-07 08:00:00', 4: '2015-07-07 13:00:00', 5: '2015-07-07 18:00:00'},
'open': {0: 125.00, 1: 125.36, 2: 125.34, 3: 125.08, 4: 125.12, 5: 125.37},
'high': {0: 125.00, 1: 125.36, 2: 125.34, 3: 125.08, 4: 125.12, 5: 125.37},
'low': {0: 125.00, 1: 125.36, 2: 125.34, 3: 125.08, 4: 125.12, 5: 125.37},
'close': {0: 125.00, 1: 125.36, 2: 125.34, 3: 125.08, 4: 125.12, 5: 125.37},
'volume': {0: 141, 1: 200, 2: 750, 3: 17451, 4: 1000, 5: 38234},
},
)
# inspect the example data set
df.head(6)
# first, ensure that the 'date' column is of the correct data type: MAKE IT SO!
df['date'] = pd.to_datetime(df['date'])
# inspect the data types: date column should be of type 'datetime64[ns]'
print(df.dtypes)
# set the index of the dataframe to the datetime-type column 'date'
df = df.set_index('date')
# inspect the index: it should be a DatetimeIndex of dtype 'datetime64[ns]'
print(df.index)
# filter the data set
df_filtered = df.between_time('9:30', '16:00')
# inspect the filtered data set: Voilà! No more outside trading hours records.
df_filtered.head()
A:
All of the previous answers ignore one important fact: daylight saving time.
Assuming your data is in the UTC time zone, the opening and closing hours of the NYSE differ depending on DST.
Just filtering your data with df.between_time("09:30","16:30") is wrong. You should be aware of the NYSE's schedule on any given day.
Fortunately, the pip package pandas_market_calendars makes this much easier to handle.
import pandas_market_calendars as mcal
nyse = mcal.get_calendar('NYSE')
nyse_schedule = nyse.schedule(start_date='2022-03-10', end_date='2022-03-20')
nyse_schedule
This will result in
2022-03-10 2022-03-10 14:30:00+00:00 2022-03-10 21:00:00+00:00
2022-03-11 2022-03-11 14:30:00+00:00 2022-03-11 21:00:00+00:00
2022-03-14 2022-03-14 13:30:00+00:00 2022-03-14 20:00:00+00:00
2022-03-15 2022-03-15 13:30:00+00:00 2022-03-15 20:00:00+00:00
2022-03-16 2022-03-16 13:30:00+00:00 2022-03-16 20:00:00+00:00
2022-03-17 2022-03-17 13:30:00+00:00 2022-03-17 20:00:00+00:00
2022-03-18 2022-03-18 13:30:00+00:00 2022-03-18 20:00:00+00:00
You can use this output to create one index that contains all minutes between market_open and market_close of each day.
Note: this piece of code can surely be written more compactly, but it still runs pretty fast.
hours = []
for i, row in nyse_schedule.iterrows():
    hours.append(pd.date_range(start=row['market_open'], end=row['market_close'], tz="UTC", freq="1min").to_series())
hours_index = pd.concat(hours).index
Now you can just reindex your original dataframe by this new index:
data.reindex(hours_index)
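As a side note, the loop can be replaced by a single expression. A minimal sketch, assuming the same nyse_schedule frame and the pandas import from above:
# build one tz-aware minute index covering every trading session
hours_index = pd.DatetimeIndex(pd.concat(
    pd.Series(pd.date_range(start=o, end=c, freq="1min"))
    for o, c in zip(nyse_schedule["market_open"], nyse_schedule["market_close"])
))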
Hope this helps.
|
Pandas Dataframe - Droping Certain Hours of the Day from 20 Years of Historical Data
|
I have stock market data for a single security going back 20 years. The data is currently in an Pandas DataFrame, in the following format:
The problem is, I do not want any "after hours" trading data in my DataFrame. The market in question is open from 9:30AM to 4PM (09:30 to 16:00 on each trading day). I would like to drop all rows of data that are not within this time frame.
My instinct is to use a Pandas mask, which I know how to do if I wanted certain hours in a single day:
mask = (df['date'] > '2015-07-06 09:30:0') & (df['date'] <= '2015-07-06 16:00:0')
sub = df.loc[mask]
However, I have no idea how to use one on a revolving basis to remove the data for certain times of day over a 20 year period.
|
[
"Problem here is how you are importing data. There is no indicator whether 04:00 is am or pm? but based on your comments we need to assume it is PM. However input is showing it as AM.\nTo solve this we need to include two conditions with OR clause. \n\n9:30-11:59\n0:00-4:00\n\nInput:\ndf = pd.DataFrame({'date': {880551: '2015-07-06 04:00:00', 880552: '2015-07-06 04:02:00',880553: '2015-07-06 04:03:00', 880554: '2015-07-06 04:04:00', 880555: '2015-07-06 04:05:00'},\n 'open': {880551: 125.00, 880552: 125.36,880553: 125.34, 880554: 125.08, 880555: 125.12},\n 'high': {880551: 125.00, 880552: 125.36,880553: 125.34, 880554: 125.11, 880555: 125.12},\n 'low': {880551: 125.00, 880552: 125.32,880553: 125.21, 880554: 125.05, 880555: 125.12},\n 'close': {880551: 125.00, 880552: 125.32,880553: 125.21, 880554: 125.05, 880555: 125.12},\n 'volume': {880551: 141, 880552: 200,880553: 750, 880554: 17451, 880555: 1000},\n },\n )\n\n\ndf.head()\n\n date open high low close volume\n880551 2015-07-06 04:00:00 125.00 125.00 125.00 125.00 141\n880552 2015-07-06 04:02:00 125.36 125.36 125.32 125.32 200\n880553 2015-07-06 04:03:00 125.34 125.34 125.21 125.21 750\n880554 2015-07-06 04:04:00 125.08 125.11 125.05 125.05 17451\n880555 2015-07-06 04:05:00 125.12 125.12 125.12 125.12 1000\n\nfrom datetime import time\n\nstart_first = time(9, 30)\nend_first = time(11, 59)\nstart_second = time(0, 00)\nend_second = time(4,00)\ndf['date'] = pd.to_datetime(df['date'])\ndf= df[(df['date'].dt.time.between(start_first, end_first)) | (df['date'].dt.time.between(start_second, end_second))]\ndf\ndate open high low close volume\n880551 2015-07-06 04:00:00 125.0 125.0 125.0 125.0 141\n\nAbove is not good practice, and I strongly discourage to use this kind of ambiguous data. long time solution is to correctly populate data with am/pm.\nWe can achieve it in two way in case of correct data format:\n1) using datetime\nfrom datetime import time\n\nstart = time(9, 30)\nend = time(16)\ndf['date'] = pd.to_datetime(df['date'])\ndf= df[df['date'].dt.time.between(start, end)]\n\n2) using between time, which only works with datetime index\ndf['date'] = pd.to_datetime(df['date'])\n\ndf = (df.set_index('date')\n .between_time('09:30', '16:00')\n .reset_index())\n\nIf you still face error, edit your question with line by line approach and exact error.\n",
"I think the answer is already in the comments (@Parfait's .between_time) but that it got lost in debugging issues. It appears your df['date'] column is not of type Datetime yet.\nThis should be enough to fix that and get the required result:\ndf['date'] = pd.to_datetime(df['date'])\ndf = df.set_index('date')\ndf = df.between_time('9:30', '16:00')\n\n",
"This example code consolidates the answers provided by Bhavesh Ghodasara, Parfait and jorijnsmit into one complete, commented example:\nimport pandas as pd\n\n# example dataframe containing 6 records: 2 days of 3 records each in which all cases are covered:\n# each day has one record before trading hours, one record during trading hours and one recrod after trading hours\ndf = pd.DataFrame({'date': {0: '2015-07-06 08:00:00', 1: '2015-07-06 13:00:00', 2: '2015-07-06 18:00:00', \n 3: '2015-07-07 08:00:00', 4: '2015-07-07 13:00:00', 5: '2015-07-07 18:00:00'},\n 'open': {0: 125.00, 1: 125.36, 2: 125.34, 3: 125.08, 4: 125.12, 5: 125.37},\n 'high': {0: 125.00, 1: 125.36, 2: 125.34, 3: 125.08, 4: 125.12, 5: 125.37},\n 'low': {0: 125.00, 1: 125.36, 2: 125.34, 3: 125.08, 4: 125.12, 5: 125.37},\n 'close': {0: 125.00, 1: 125.36, 2: 125.34, 3: 125.08, 4: 125.12, 5: 125.37},\n 'volume': {0: 141, 1: 200, 2: 750, 3: 17451, 4: 1000, 5: 38234},\n },\n )\n\n# inspect the example data set\ndf.head(6)\n\n# first, ensure that the 'date' column is of the correct data type: MAKE IT SO!\ndf['date'] = pd.to_datetime(df['date'])\n\n# inspect the data types: date column should be of type 'datetime64[ns]'\nprint(df.dtypes)\n\n# set the index of the dataframe to the datetime-type column 'data'\ndf = df.set_index('date')\n\n# inspect the index: it should be a DatetimeIndex of dtype 'datetime64[ns]'\nprint(df.index)\n\n# filter the data set\ndf_filtered = df.between_time('9:30', '16:00')\n\n# inspect the filtered data set: Voilà! No more outside trading hours records.\ndf_filtered.head()\n\n",
"All of the previous answers are ignoring one important fact - Daylight saving.\nAssuming your data is in UTC time zone, the opening and closing hours NYSE are different depending on DST.\nJust filtering your data with df.between_time(\"09:30\",\"16:30\") is wrong. You should be aware of the NYSE's schedule on any given day.\nFortunately, The pip package pandas_market_calendars is making this much easier to handle.\nimport pandas_market_calendars as mcal\n\nnyse = mcal.get_calendar('NYSE')\nnyse.schedule(start_date='2022-03-10', end_date='2022-03-20')\n\nThis will result in\n2022-03-10 2022-03-10 14:30:00+00:00 2022-03-10 21:00:00+00:00\n2022-03-11 2022-03-11 14:30:00+00:00 2022-03-11 21:00:00+00:00\n2022-03-14 2022-03-14 13:30:00+00:00 2022-03-14 20:00:00+00:00\n2022-03-15 2022-03-15 13:30:00+00:00 2022-03-15 20:00:00+00:00\n2022-03-16 2022-03-16 13:30:00+00:00 2022-03-16 20:00:00+00:00\n2022-03-17 2022-03-17 13:30:00+00:00 2022-03-17 20:00:00+00:00\n2022-03-18 2022-03-18 13:30:00+00:00 2022-03-18 20:00:00+00:00\n\nYou can use this output to create one index that contains all minutes between market_open and market_close of each day.\nNote: This piece of code for sure can be done better, but it still runs pretty fast.\nhours = []\nfor i, row in nyse_scehdule.iterrows():\n hours.append(pd.date_range(start=row['market_open'], end=row['market_close'], tz=\"UTC\", freq=\"1min\").to_series())\nhours_index = pd.concat(hours).index\n\nNow you can just reindex your original dataframe by this new index:\ndata.reindex(hours_index)\n\nHope this helps.\n"
] |
[
9,
5,
1,
0
] |
[] |
[] |
[
"dataframe",
"numpy",
"pandas",
"python"
] |
stackoverflow_0060895196_dataframe_numpy_pandas_python.txt
|
Q:
pyinstaller Unable to access file
I wrote a simple code in Python/Pygame, and the game was running fine in both Sublime Text and cmd, but when I tried to make it an exe file with pyinstaller --onefile version2.py, I got an error and I don't know what the problem is.
A:
Well, I don't know if I will be helpful to you now after 6 days, but I have 2 solutions which worked for me today.
Disable the antivirus OR add the folder in which you are building app.exe to its exceptions, like I have done below [mandatory step].
Don't run the pyinstaller command from the directory where the app.py application is present.
a) Get out from that directory/folder
b) Create new directory/folder
c) Enter that directory and start cmd in that directory/folder by pressing (shift + MouseRightClick )
d) And then run the command
pyinstaller --name "MyApp" "python script file path of which you want to create .exe file"
Where "MyApp" is the name which you want to give to the file after it is build successfully
For Example
pyinstaller --name "FileRenamer" "G:\data\python\Project\Bunch File Renamer\rename.py"
|
pyinstaller Unable to access file
|
I wrote a simple code in python/pygame, and the game was running fine in both sublime text and cmd, but when I tried to make it a exe file pyinstaller --onefile version2.py, I got an error and I don't know what is the problem ?
|
[
"Well bro, I don't know if I will be helpfull to you know after 6 days, I am having 2 solutions for you which worked for me today.\n\nDisable Antivirus OR put that folder in which you are building app.exe to Exception, like I have done below [Mandatory Process].\n\n\n\nDont' run pyinstaller command from the directory where app.py application is present.\na) Get out from that directory/folder\nb) Create new directory/folder\nc) Enter that directory and start cmd in that directory/folder by pressing (shift + MouseRightClick )\nd) And then run the command\npyinstaller --name \"MyApp\" \"python script file path of which you want to create .exe file\"\n\n\n\nWhere \"MyApp\" is the name which you want to give to the file after it is build successfully\nFor Example\npyinstaller --name \"FileRenamer\" \"G:\\data\\python\\Project\\Bunch File Renamer\\rename.py\"\n\n"
] |
[
1
] |
[
"Try to disable the anti-virus, that worked for me\n"
] |
[
-1
] |
[
"pygame",
"pyinstaller",
"python"
] |
stackoverflow_0069989638_pygame_pyinstaller_python.txt
|
Q:
list out of range when using embeddings
I have the following list:
list1=[['brute-force',
'password-guessing',
'password-guessing',
'default-credentials',
'shell'],
['malware',
'ddos',
'phishing',
'spam',
'botnet',
'cryptojacking',
'xss',
'sqli',
'vulnerability'],
['sensitive-information']]
I am trying the example from here enter link description here
However, when I fit my list to get the embeddings:
embeddings1 = sbert_model.encode(list1, convert_to_tensor=True)
I get the following error:
IndexError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_16484/3954167634.py in <module>
----> 1 embeddings2 = sbert_model.encode(list3, convert_to_tensor=True)
~\anaconda3\envs\tensorflow_env\lib\site-packages\sentence_transformers\SentenceTransformer.py in encode(self, sentences, batch_size, show_progress_bar, output_value, convert_to_numpy, convert_to_tensor, device, normalize_embeddings)
159 for start_index in trange(0, len(sentences), batch_size, desc="Batches", disable=not show_progress_bar):
160 sentences_batch = sentences_sorted[start_index:start_index+batch_size]
--> 161 features = self.tokenize(sentences_batch)
162 features = batch_to_device(features, device)
163
~\anaconda3\envs\tensorflow_env\lib\site-packages\sentence_transformers\SentenceTransformer.py in tokenize(self, texts)
317 Tokenizes the texts
318 """
--> 319 return self._first_module().tokenize(texts)
320
321 def get_sentence_features(self, *features):
~\anaconda3\envs\tensorflow_env\lib\site-packages\sentence_transformers\models\Transformer.py in tokenize(self, texts)
101 for text_tuple in texts:
102 batch1.append(text_tuple[0])
--> 103 batch2.append(text_tuple[1])
104 to_tokenize = [batch1, batch2]
105
IndexError: list index out of range
I understand how lists work and I have read many answers to the same problem here, but I cannot figure out why it is going out of range.
Any ideas?
A:
You need to flatten your input nested list first.
from nltk import flatten
flattened_list1 = flatten(list1)
embeddings1 = sbert_model.encode(flattened_list1, convert_to_tensor=True)
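The traceback also shows why it fails: the tokenizer treats each inner list as a (text, text_pair) pair and indexes text_tuple[1], which raises IndexError for the one-element inner list ['sensitive-information']. If you would rather not pull in nltk just for this, a plain comprehension flattens one level; a sketch using the same names:
# flatten one level of nesting without extra dependencies
flattened_list1 = [item for sublist in list1 for item in sublist]
embeddings1 = sbert_model.encode(flattened_list1, convert_to_tensor=True)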
|
list out of range when using embeddings
|
I have the following list:
list1=[['brute-force',
'password-guessing',
'password-guessing',
'default-credentials',
'shell'],
['malware',
'ddos',
'phishing',
'spam',
'botnet',
'cryptojacking',
'xss',
'sqli',
'vulnerability'],
['sensitive-information']]
I am trying the example from here enter link description here
However when I am fitting my list to get the embeddings :
embeddings1 =sbert_model.encode(list1, convert_to_tensor=True)
I get the embeding i get the following error:
IndexError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_16484/3954167634.py in <module>
----> 1 embeddings2 = sbert_model.encode(list3, convert_to_tensor=True)
~\anaconda3\envs\tensorflow_env\lib\site-packages\sentence_transformers\SentenceTransformer.py in encode(self, sentences, batch_size, show_progress_bar, output_value, convert_to_numpy, convert_to_tensor, device, normalize_embeddings)
159 for start_index in trange(0, len(sentences), batch_size, desc="Batches", disable=not show_progress_bar):
160 sentences_batch = sentences_sorted[start_index:start_index+batch_size]
--> 161 features = self.tokenize(sentences_batch)
162 features = batch_to_device(features, device)
163
~\anaconda3\envs\tensorflow_env\lib\site-packages\sentence_transformers\SentenceTransformer.py in tokenize(self, texts)
317 Tokenizes the texts
318 """
--> 319 return self._first_module().tokenize(texts)
320
321 def get_sentence_features(self, *features):
~\anaconda3\envs\tensorflow_env\lib\site-packages\sentence_transformers\models\Transformer.py in tokenize(self, texts)
101 for text_tuple in texts:
102 batch1.append(text_tuple[0])
--> 103 batch2.append(text_tuple[1])
104 to_tokenize = [batch1, batch2]
105
IndexError: list index out of range
I am understanding how lists work and I have read many asnwers to same problem in here but i cannot fiqure out why is going out of range.
Any ideas?
|
[
"You need to flatten your input nested list first.\nfrom nltk import flatten\nflattened_list1 = flatten(list1)\nembeddings1 = sbert_model.encode(flattened_list1, convert_to_tensor=True)\n\n"
] |
[
0
] |
[] |
[] |
[
"list",
"nlp",
"python",
"python_3.x",
"word_embedding"
] |
stackoverflow_0073681331_list_nlp_python_python_3.x_word_embedding.txt
|
Q:
Implode rows and create new col
How could I create a unique detail column, which is conditional on fruit being followed by fruit -2 in the type column? detail1 or detail2 could be NaN.
df type detail1 detail2 name
0 fruit apple
1 fruit -2 best best apple
2 yellow yellowish apple
3 green apple
4 fruit banana
5 sub
6 fruit -2 best best banana
7 yellow orange banana
8 green brown banana
Expected Output
df type detail1 detail2 name unique_detail
0 fruit apple [best, yellow, yellowish, green ]
1 fruit -2 best best apple [best, yellow, yellowish, green ]
2 yellow yellowish apple [best, yellow, yellowish, green ]
3 green apple [best, yellow, yellowish, green brown]
4 fruit banana sub: [yellow, orange, green, brown]
5 sub
6 fruit -2 banana sub:[yellow, orange, green, brown]
7 yellow orange banana sub:[yellow, orange, green, brown]
8 green brown banana sub:[yellow, orange, green, brown]
I tried
m = df.type.eq("fruit") & df.type.shift(-1).ne("fruit -2")
df["detail"] = df.detail1 + df.detail2
df["detail"] = df.groupby("type").transform("unique")
df["detail"] = df["detail"].mask(m, "sub:"+df.detail)
A:
The exact logic is not fully clear, but you should use a custom function for groupby.apply:
def process(df):
    m1 = df['type'].shift().eq('fruit')
    m2 = df['type'].ne('fruit -2')
    m3 = df['type'].isnull()

    prefix = next(iter(df.loc[m1&m2, 'type']), '')
    if prefix:
        prefix += ': '

    return prefix + str(df[m3].filter(regex='^detail').stack().unique())

group = df['name'].ffill()

s = df.groupby(group).apply(process)

df['unique_detail'] = group.map(s)
You can also use as grouper:
group = (df['type'].eq('fruit')
         & df['type'].shift(-1).ne('fruit -2')
        ).cumsum()
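For completeness, a sketch of plugging this alternative grouper into the same pipeline (same df and process function as above):
s = df.groupby(group).apply(process)
df['unique_detail'] = group.map(s)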
Output:
type detail1 detail2 name unique_detail
0 fruit NaN NaN apple ['yellow' 'yellowish' 'green']
1 fruit -2 best best apple ['yellow' 'yellowish' 'green']
2 NaN yellow yellowish apple ['yellow' 'yellowish' 'green']
3 NaN green NaN apple ['yellow' 'yellowish' 'green']
4 fruit NaN NaN banana sub: ['yellow' 'orange' 'green' 'brown']
5 sub NaN NaN None sub: ['yellow' 'orange' 'green' 'brown']
6 fruit -2 best best banana sub: ['yellow' 'orange' 'green' 'brown']
7 NaN yellow orange banana sub: ['yellow' 'orange' 'green' 'brown']
8 NaN green brown banana sub: ['yellow' 'orange' 'green' 'brown']
|
Implode rows and create new col
|
How could I create a unique detail colum, which is conditoinal on fruit being followed by fruit -2 in the type column. detail1 or detail2 could be NaN
df type detail1 detail2 name
0 fruit apple
1 fruit -2 best best apple
2 yellow yellowish apple
3 green apple
4 fruit banana
5 sub
6 fruit -2 best best banana
7 yellow orange banana
8 green brown banana
Expected Output
df type detail1 detail2 name unique_detail
0 fruit apple [best, yellow, yellowish, green ]
1 fruit -2 best best apple [best, yellow, yellowish, green ]
2 yellow yellowish apple [best, yellow, yellowish, green ]
3 green apple [best, yellow, yellowish, green brown]
4 fruit banana sub: [yellow, orange, green, brown]
5 sub
6 fruit -2 banana sub:[yellow, orange, green, brown]
7 yellow orange banana sub:[yellow, orange, green, brown]
8 green brown banana sub:[yellow, orange, green, brown]
I tried
m = df.type.eq("fruit") & df.type.shift(-1).ne("fruit -2")
df["detail"] = df.detail1 + df.detail2
df["detail"] = df.groupby("type").transform("unique")
df["detail"] = df["detail"].mask(m, "sub:"+df.detail)
|
[
"The exact logic is not fully clear, but you should use a custom function for groupby.apply:\ndef process(df):\n m1 = df['type'].shift().eq('fruit')\n m2 = df['type'].ne('fruit -2')\n m3 = df['type'].isnull()\n \n prefix = next(iter(df.loc[m1&m2, 'type']), '')\n if prefix:\n prefix += ': '\n \n return prefix + str(df[m3].filter(regex='^detail').stack().unique())\n\ngroup = df['name'].ffill()\n\ns = df.groupby(group).apply(process)\n\ndf['unique_detail'] = group.map(s)\n\n\nYou can also use as grouper:\ngroup = (df['type'].eq('fruit')\n &df['type'].shift(-1).ne('fruit -2')\n ).cumsum()\n\nOutput:\n type detail1 detail2 name unique_detail\n0 fruit NaN NaN apple ['yellow' 'yellowish' 'green']\n1 fruit -2 best best apple ['yellow' 'yellowish' 'green']\n2 NaN yellow yellowish apple ['yellow' 'yellowish' 'green']\n3 NaN green NaN apple ['yellow' 'yellowish' 'green']\n4 fruit NaN NaN banana sub: ['yellow' 'orange' 'green' 'brown']\n5 sub NaN NaN None sub: ['yellow' 'orange' 'green' 'brown']\n6 fruit -2 best best banana sub: ['yellow' 'orange' 'green' 'brown']\n7 NaN yellow orange banana sub: ['yellow' 'orange' 'green' 'brown']\n8 NaN green brown banana sub: ['yellow' 'orange' 'green' 'brown']\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python",
"shift"
] |
stackoverflow_0074546777_pandas_python_shift.txt
|
Q:
How to convert PNG to JPG in Python?
I'm trying to compare two images, one a .png and the other a .jpg. So I need to convert the .png file to a .jpg to get closer values for SSIM. Below is the code that I've tried, but I'm getting this error:
AttributeError: 'tuple' object has no attribute 'dtype'
image2 = imread(thisPath + caption)
image2 = io.imsave("jpgtest.jpg", (76, 59))
image2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
image2 = resize(image2, (76, 59))
imshow("is it a jpg", image2)
cv2.waitKey()
A:
Before demonstrating how to convert an image from .png to .jpg format, I want to point out that you should be consistent on the library that you use. Currently, you're mixing scikit-image with opencv. It's best to choose one library and stick with it instead of reading in an image with scikit and then converting to grayscale with opencv.
To convert a .png to .jpg image using OpenCV, you can use cv2.imwrite. Note with .jpg or .jpeg format, to maintain the highest quality, you must specify the quality value from [0..100] (default value is 95). Simply do this:
import cv2
# Load .png image
image = cv2.imread('image.png')
# Save .jpg image
cv2.imwrite('image.jpg', image, [int(cv2.IMWRITE_JPEG_QUALITY), 100])
A:
The function skimage.io.imsave expects you to give it a filename and an array that you want to save under that filename. For example:
skimage.io.imsave("image.jpg", image)
where image is a numpy array.
You are using it incorrectly here:
image2 = io.imsave("jpgtest.jpg", (76, 59))
you are assigning the output of the imsave function to image2 and I don't think that is what you want to do.
You probably don't need to convert the image to JPG because the skimage library already handles all of this conversion by itself. You usually only load the images with imread (does not matter whether they are PNG or JPG, because they are represented in a numpy array) and then perform all the necessary computations.
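To illustrate, a minimal sketch of the comparison the question is after, staying entirely within scikit-image (structural_similarity lives in skimage.metrics in recent versions; the file names and the assumption that a.png is RGB(A) are illustrative only):
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.transform import resize
from skimage.metrics import structural_similarity

img1 = rgb2gray(imread("a.png")[..., :3])  # drop the alpha channel if present
img2 = rgb2gray(imread("b.jpg"))
img2 = resize(img2, img1.shape)            # SSIM needs equal shapes
print(structural_similarity(img1, img2, data_range=1.0))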
A:
Python script to convert all .png in the folder into .jpg
import cv2 as cv
import glob
import os
import re
png_file_paths = glob.glob(r"*.png")
for i, png_file_path in enumerate(png_file_paths):
    jpg_file_path = png_file_path[:-3] + "jpg"

    # Load .png image
    image = cv.imread(png_file_path)

    # Save .jpg image
    cv.imwrite(jpg_file_path, image, [int(cv.IMWRITE_JPEG_QUALITY), 100])
A:
Simply use OpenCV's cvtColor. Assuming the PNG is read with cv2.imread(path, cv2.IMREAD_UNCHANGED), its channels are arranged as BGRA (alpha last), while a normally read JPG is BGR.
To convert from PNG (BGRA) to JPG (BGR)
jpg_img = cv2.cvtColor(png_img, cv2.COLOR_BGRA2BGR)
To convert from JPG to PNG
png_img = cv2.cvtColor(jpg_img, cv2.COLOR_BGR2BGRA)
|
How to convert PNG to JPG in Python?
|
I'm trying to compare two images, one a .png and the other a .jpg. So I need to convert the .png file to a .jpg to get closer values for SSIM. Below is the code that I've tried, but I'm getting this error:
AttributeError: 'tuple' object has no attribute 'dtype'
image2 = imread(thisPath + caption)
image2 = io.imsave("jpgtest.jpg", (76, 59))
image2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
image2 = resize(image2, (76, 59))
imshow("is it a jpg", image2)
cv2.waitKey()
|
[
"Before demonstrating how to convert an image from .png to .jpg format, I want to point out that you should be consistent on the library that you use. Currently, you're mixing scikit-image with opencv. It's best to choose one library and stick with it instead of reading in an image with scikit and then converting to grayscale with opencv. \nTo convert a .png to .jpg image using OpenCV, you can use cv2.imwrite. Note with .jpg or .jpeg format, to maintain the highest quality, you must specify the quality value from [0..100] (default value is 95). Simply do this:\nimport cv2\n\n# Load .png image\nimage = cv2.imread('image.png')\n\n# Save .jpg image\ncv2.imwrite('image.jpg', image, [int(cv2.IMWRITE_JPEG_QUALITY), 100])\n\n",
"The function skimage.io.imsave expects you to give it a filename and an array that you want to save under that filename. For example:\nskimage.io.imsave(\"image.jpg\", image)\n\nwhere image is a numpy array.\nYou are using it incorrectly here:\nimage2 = io.imsave(\"jpgtest.jpg\", (76, 59))\n\nyou are assigning the output of the imsave function to image2 and I don't think that is what you want to do.\n\nYou probably don't need to convert the image to JPG because the skimage library already handles all of this conversion by itself. You usually only load the images with imread (does not matter whether they are PNG or JPG, because they are represented in a numpy array) and then perform all the necessary computations.\n",
"Python script to convert all .png in the folder into .jpg\nimport cv2 as cv\nimport glob\nimport os\nimport re\n\npng_file_paths = glob.glob(r\"*.png\")\nfor i, png_file_path in enumerate(png_file_paths):\n jpg_file_path = png_file_path[:-3] + \"jpg\";\n \n # Load .png image\n image = cv.imread(png_file_path)\n\n # Save .jpg image\n cv.imwrite(jpg_file_path, image, [int(cv.IMWRITE_JPEG_QUALITY), 100])\n\n pass\n \n\n",
"Simply use opencv's cvtColor. Assuming image read using cv2.imread(); image color channels arranged as BGR.\nTo convert from PNG to JPG\njpg_img = cv2.cvtColor(png_img, cv2.COLOR_RGBA2BGR)\n\nTo convert from JPG to PNG\npng_img = cv2.cvtColor(jpg_img, cv2.COLOR_BGR2BGRA)\n\n"
] |
[
17,
3,
1,
0
] |
[] |
[] |
[
"image",
"image_processing",
"opencv",
"python",
"scikit_image"
] |
stackoverflow_0060048149_image_image_processing_opencv_python_scikit_image.txt
|
Q:
Pyrogram forward + edit message
How can I make it so that the text can be changed in the forwarded message in PYROGRAM?
@app.on_message(filters.chat(publics))
def new_channel_post(client, message):
    message.forward(private_public)
The message is forwarded to a private channel, but I haven't figured out how to change its text.
@app.on_message(filters.chat(publics))
def new_channel_post(client, message):
    client.send_message(private_public, message.text)
This code captures only text, without hyperlinks.
A:
Forwarded messages cannot be edited, neither by the original author nor by the user forwarding. If you need to, you can copy a message (app.copy_message()), in which case the copied message will look as though you sent it yourself and will not link back to the original user/channel.
An example flow could look like this:
@app.on_message(filters.chat(public))
def new_channel_post(_, message):
    copy = message.copy(private)
    copy.edit(new_text)
Message.copy() is a bound-method shortcut for app.copy_message(). It returns the copied message, which can be bound to a variable and edited.
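As for the hyperlinks lost by the second snippet in the question: message.text is plain text only. If you do want send_message instead of a copy, passing the entities along should preserve the links; a sketch, assuming your Pyrogram version accepts the entities parameter on send_message:
@app.on_message(filters.chat(publics))
def new_channel_post(client, message):
    client.send_message(
        private_public,
        message.text,
        entities=message.entities,  # keep the original formatting and links
    )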
|
Pyrogram forward + edit message
|
How can I make it so that the text can be changed in the forwarded message in PYROGRAM?
@app.on_message(filters.chat(publics))
def new_channel_post(client, message):
message.forward(private_public)
A message is sent to a private public and I haven't figured out how to change it.
@app.on_message(filters.chat(publics))
def new_channel_post(client, message):
client.send_message(private_public, message.text)
This code captures only text, without hyperlinks.
|
[
"Forwarded messages cannot be edited. Neither by the original author, nor by the user forwarding. If you need, you can copy a message (app.copy_message()), in which case the forwarded message will look as though you sent the message yourself and will not link back the the original user/channel.\nAn example flow could look like this:\n@app.on_message(filters.chat(public))\ndef new_channel_post(_, message):\n copy = message.copy(private)\n copy.edit(new_text)\n\nMessage.copy() is the a method making app.copy_message() easier to use. It returns the copied message, which can be bound to a variable and edited.\n"
] |
[
0
] |
[] |
[] |
[
"pyrogram",
"python",
"telegram"
] |
stackoverflow_0074546893_pyrogram_python_telegram.txt
|
Q:
How to use python dataframe styling in streamlit
I have styled my dataframe using the below code:
th_props = [
    ('font-size', '14px'),
    ('text-align', 'center'),
    ('font-weight', 'bold'),
    ('color', '#6d6d6d'),
    ('background-color', '#f7ffff')
]

td_props = [
    ('font-size', '12px')
]

styles = [
    dict(selector="th", props=th_props),
    dict(selector="td", props=td_props)
]
df2=outputdframe.style.set_properties(**{'text-align': 'left'}).set_table_styles(styles)
But it doesn't work on streamlit.
So, any idea how to style the dataframe on streamlit?
Can anyone help me?
A:
You should use st.table instead of st.dataframe. Here is some reproducible code:
# import packages
import streamlit as st
import pandas as pd
import numpy as np
# Example dataframe
outputdframe = pd.DataFrame(np.array([["CS", "University", "KR", 7032], ["IE", "Bangalore", "Bengaluru", 7861], ["CS", "Bangalore", "Bengaluru", 11036]]), columns=['Branch', 'College', 'Location', 'Cutoff'])
# style
th_props = [
    ('font-size', '14px'),
    ('text-align', 'center'),
    ('font-weight', 'bold'),
    ('color', '#6d6d6d'),
    ('background-color', '#f7ffff')
]

td_props = [
    ('font-size', '12px')
]

styles = [
    dict(selector="th", props=th_props),
    dict(selector="td", props=td_props)
]
# table
df2=outputdframe.style.set_properties(**{'text-align': 'left'}).set_table_styles(styles)
st.table(df2)
Output: a table rendered with the header and cell styles defined above.
To change the font size of the table values, set the 'font-size' in td_props to, say, '20px'.
To change the header color to red, set 'color' to '#FF0000' in th_props, as sketched below.
As you can see, there are a lot of options for modifying the style of your dataframe using st.table.
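The header-color variant mentioned above, reconstructed from the description (only th_props changes):
th_props = [
    ('font-size', '14px'),
    ('text-align', 'center'),
    ('font-weight', 'bold'),
    ('color', '#FF0000'),            # red headers instead of grey
    ('background-color', '#f7ffff')
]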
A:
You need to pass the styled dataframe to Streamlit:
st.dataframe(df2)
Note that st.dataframe generally honors a Styler's per-cell styles, while selector-based set_table_styles rules (th/td) typically only take effect with st.table, as in the answer above.
A:
Pass the style (which you called "df2") as HTML:
st.write(df2.to_html(), unsafe_allow_html=True)
|
How to use python dataframe styling in streamlit
|
I have styled my dataframe using the below code:
th_props = [
('font-size', '14px'),
('text-align', 'center'),
('font-weight', 'bold'),
('color', '#6d6d6d'),
('background-color', '#f7ffff')
]
td_props = [
('font-size', '12px')
]
styles = [
dict(selector="th", props=th_props),
dict(selector="td", props=td_props)
]
df2=outputdframe.style.set_properties(**{'text-align': 'left'}).set_table_styles(styles)
But it doesn't work on streamlit.
So, any idea how to style the dataframe on streamlit?
Can anyone help me?
|
[
"You should use st.table instead of st.dataframe. Here is some reproducible code:\n# import packages\nimport streamlit as st\nimport pandas as pd\nimport numpy as np\n\n# Example dataframe\noutputdframe = pd.DataFrame(np.array([[\"CS\", \"University\", \"KR\", 7032], [\"IE\", \"Bangalore\", \"Bengaluru\", 7861], [\"CS\", \"Bangalore\", \"Bengaluru\", 11036]]), columns=['Branch', 'College', 'Location', 'Cutoff'])\n\n# style\nth_props = [\n ('font-size', '14px'),\n ('text-align', 'center'),\n ('font-weight', 'bold'),\n ('color', '#6d6d6d'),\n ('background-color', '#f7ffff')\n ]\n \ntd_props = [\n ('font-size', '12px')\n ]\n \nstyles = [\n dict(selector=\"th\", props=th_props),\n dict(selector=\"td\", props=td_props)\n ]\n\n# table\ndf2=outputdframe.style.set_properties(**{'text-align': 'left'}).set_table_styles(styles)\nst.table(df2)\n\nOutput:\n\nLet's change the font size of the values in table by the value of \"td_props\" to \"20\" in the code:\n\nOr change the colors of the header to red #FF0000 in \"th_props\" of the code like this:\n\nSo as you can see it changes the color of the headers. There are a lot of options to modify the style of your dataframe using st.table.\n",
"You need to pass the styled dataframe to Streamlit.\nst.dataframe(df2)\n\n",
"Pass the style (that you called it \"df2\") with html:\nst.write(df2.to_html(), unsafe_allow_html=True))\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"dataframe",
"python",
"streamlit",
"styling"
] |
stackoverflow_0068379442_dataframe_python_streamlit_styling.txt
|
Q:
Selenium: element is not attached to the page document restaurant get free food script
Trying to make a free food finder but get unknown error
I'm trying to make this code to get every free product in this food delivery restaurant
I expect it to iterate through this 'hbaEIe.sc-5674cfe4-2' elements, that look like this:
Restaurant div
url = 'https://www.rappi.com.ar/restaurantes'

for restaurant in all_restaurant:
    link = restaurant.get_attribute("href")
    full_link = base_url + link
    name = restaurant.get_attribute("aria-label")
    # open tab
    driver.find_element(By.TAG_NAME, 'body').send_keys(Keys.CONTROL + 't')
    # Load a page
    driver.get(full_link)
    getFreeStuff(name, full_link)
    # close the tab
    driver.find_element(By.TAG_NAME, 'body').send_keys(Keys.CONTROL + 'w')
    print(name)
Then, for each restaurant i want to iterate through all the product list and get the price, comparing it to 0.
def getFreeStuff(restaurant, link):
    time.sleep(1)
    prices = driver.find_elements(By.CLASS_NAME, "css-kowr8")
    names = driver.find_elements(By.CLASS_NAME, "css-puxjan")
    for i in range(0, len(prices)):
        price = prices[i]
        if price == "$ 0,00":
            restaurants.append(restaurant)
            links.append(link)
            products.append(names[i])
    return 0
But when I run it, it gives me the following error:
BOULEVARD HONORIO
Traceback (most recent call last):
File "C:\Users\Usuario\Desktop\Web Scraping\practica2\main.py", line 39, in <module>
link = restaurant.get_attribute("href")
File "C:\Users\Usuario\Desktop\Web Scraping\practica1\venv\lib\site-packages\selenium\webdriver\remote\webelement.py", line 155, in get_attribute
attribute_value = self.parent.execute_script(...
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
(Session info: chrome=107.0.5304.107)
Stacktrace:
Backtrace:
Ordinal0 [0x00A3ACD3+2075859]
Ordinal0 [0x009CEE61+1633889]
...
RtlGetAppContainerNamedObjectPath [0x771A7B8E+238]
Process finished with exit code 1
I've tried many things, but I don't know how to proceed.
A:
This happens when the page is refreshed but you keep the old reference in your variable. Try rediscovering the element in the DOM after the refresh.
A:
I suspect that your attempt to open the individual restaurant link in a new tab is not successful. As a result you are no longer on the main page when iterating to the second item in the list. As you are only taking the name and link for each restaurant from the main page I would suggest it is best to store this data and then simply step through all of the restaurant pages adding free items (sadly there don't seem to be any!) to the restaurant objects:
restaurants = []
all_restaurant = driver.find_elements(By.XPATH, "//a[@class='sc-5674cfe4-2 hbaEIe']")
for restaurant in all_restaurant:
    link = restaurant.get_attribute("href")
    name = restaurant.get_attribute("aria-label")
    restaurants.append({"name": name, "link": link})

for restaurant in restaurants:
    # Load a page
    driver.get(restaurant["link"])
    products = getFreeStuff(driver)
    if len(products) > 0:
        restaurant["products"] = products
def getFreeStuff(driver):
    time.sleep(1)
    products = []
    prices = driver.find_elements(By.CLASS_NAME, "css-kowr8")
    names = driver.find_elements(By.CLASS_NAME, "css-puxjan")
    for i in range(0, len(prices)):
        # find_elements returns WebElements, so compare their .text, not the element itself
        if prices[i].text == "$ 0,00":
            products.append(names[i].text)
    return products
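As a side note, the fixed time.sleep(1) is fragile on a slow connection. A hedged sketch of waiting explicitly instead (standard Selenium 4 imports):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_prices(driver, timeout=10):
    # block until at least one price element is present, then return them all
    return WebDriverWait(driver, timeout).until(
        EC.presence_of_all_elements_located((By.CLASS_NAME, "css-kowr8"))
    )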
|
Selenium: element is not attached to the page document restaurant get free food script
|
Trying to make a free food finder but get unknown error
I'm trying to make this code to get every free product in this food delivery restaurant
I expect it to iterate through this 'hbaEIe.sc-5674cfe4-2' elements, that look like this:
Restaurant div
url = 'https://www.rappi.com.ar/restaurantes'
for restaurant in all_restaurant:
link = restaurant.get_attribute("href")
full_link = base_url + link
name = restaurant.get_attribute("aria-label")
# open tab
driver.find_element(By.TAG_NAME, 'body').send_keys(Keys.CONTROL + 't')
# Load a page
driver.get(full_link)
getFreeStuff(name, full_link)
# close the tab
driver.find_element(By.TAG_NAME, 'body').send_keys(Keys.CONTROL + 'w')
print(name)
Then, for each restaurant i want to iterate through all the product list and get the price, comparing it to 0.
def getFreeStuff(restaurant, link):
time.sleep(1)
prices = driver.find_elements(By.CLASS_NAME, "css-kowr8")
names = driver.find_elements(By.CLASS_NAME, "css-puxjan")
for i in range(0, len(prices)):
price = prices[i]
if price == "$ 0,00":
restaurants.append(restaurant)
links.append(link)
products.append(names[i])
return 0
But when i run it it gives me the following error:
BOULEVARD HONORIO
Traceback (most recent call last):
File "C:\Users\Usuario\Desktop\Web Scraping\practica2\main.py", line 39, in <module>
link = restaurant.get_attribute("href")
File "C:\Users\Usuario\Desktop\Web Scraping\practica1\venv\lib\site-packages\selenium\webdriver\remote\webelement.py", line 155, in get_attribute
attribute_value = self.parent.execute_script(...
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
(Session info: chrome=107.0.5304.107)
Stacktrace:
Backtrace:
Ordinal0 [0x00A3ACD3+2075859]
Ordinal0 [0x009CEE61+1633889]
...
RtlGetAppContainerNamedObjectPath [0x771A7B8E+238]
Process finished with exit code 1
I've tried many things, but i don't know how to proceed
|
[
"This happens when the page is refreshed but you keep the old reference in your varibale. Try rediscovering element in the dom after refresh.\n",
"I suspect that your attempt to open the individual restaurant link in a new tab is not successful. As a result you are no longer on the main page when iterating to the second item in the list. As you are only taking the name and link for each restaurant from the main page I would suggest it is best to store this data and then simply step through all of the restaurant pages adding free items (sadly there don't seem to be any!) to the restaurant objects:\nall_restaurant = driver.find_elements(By.XPATH, \"//a[@class='sc-5674cfe4-2 hbaEIe']\")\nfor restaurant in all_restaurant:\n link = restaurant.get_attribute(\"href\")\n name = restaurant.get_attribute(\"aria-label\")\n restaurants.append({\"name\":name, \"link\":link})\nfor restaurant in restaurants:\n # Load a page\n driver.get(restaurant[\"link\"])\n products = getFreeStuff(driver, name, link)\n if len(products) > 0:\n restaurant[\"products\"] = products \n\ndef getFreeStuff(driver):\n time.sleep(1)\n products = []\n prices = driver.find_elements(By.CLASS_NAME, \"css-kowr8\")\n names = driver.find_elements(By.CLASS_NAME, \"css-puxjan\")\n for i in range(0, len(prices)):\n price = prices[i]\n if price == \"$ 0,00\":\n products.append(names[i])\n return products\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"selenium"
] |
stackoverflow_0074546372_python_selenium.txt
|
Q:
How to make the Open3D read the pandas DataFrame and generate points clouds in Python
I extracted certain data from the original CSV file (which contains the XYZ coordinates) by using the following code
.
data=pd.read_csv("./assets/landmarks_frame0.csv",header=None,usecols=range(1,4))
print(data)
The printing output looks fine as below. Recall that the first (started with 0.524606), second and third columns correspond to the x,y and z coordinates.
the snipped image of the pandas DataFrame extracted from the CSV file
Meanwhile, my goal is to import the Open3D library and generate the points cloud based on the data extracted from the pandas. I read the Open3D documents (http://www.open3d.org/docs/release/tutorial/geometry/pointcloud.html) and wrote the script as follows
print("Load a ply point cloud, print it, and render it")
pcd = o3d.io.read_point_cloud(data,format="xyz")
print(pcd)
print(np.asarray(pcd.points))
o3d.visualization.draw_geometries([pcd],
                                  zoom=0.3412,
                                  front=[0.4257, -0.2125, -0.8795],
                                  lookat=[2.6172, 2.0475, 1.532],
                                  up=[-0.0694, -0.9768, 0.2024])
As shown in the second line
pcd = o3d.io.read_point_cloud(data,format="xyz")
I learned from the File IO document (http://www.open3d.org/docs/release/tutorial/geometry/file_io.html) and passed the first argument as the data to be processed into the points cloud. Besides, I set the second argument format to be 'xyz', which means each line contains [x, y, z], where x, y, and z are the 3D coordinates.
However, the error message indicates as follow.
TypeError Traceback (most recent call last)
Input In [3], in <cell line: 4>()
1 print("Load a ply point cloud, print it, and render it")
2 # ply_point_cloud = o3d.data.PLYPointCloud()
3 # pcd = o3d.io.read_point_cloud(data,format="xyz")
----> 4 pcd = o3d.io.read_point_cloud(data,format="xyz")
6 print(pcd)
7 print(np.asarray(pcd.points))
TypeError: read_point_cloud(): incompatible function arguments. The following argument types are supported:
1. (filename: str, format: str = 'auto', remove_nan_points: bool = False, remove_infinite_points: bool = False, print_progress: bool = False) -> open3d.cpu.pybind.geometry.PointCloud
Invoked with: 1 2 3
0 0.524606 0.675098 -0.021419
1 0.524134 0.628257 -0.034960
2 0.524757 0.641571 -0.019187
3 0.518863 0.589718 -0.024071
4 0.523975 0.615806 -0.036730
.. ... ... ...
473 0.557430 0.553579 0.006053
474 0.563593 0.553342 0.006053
475 0.557327 0.544035 0.006053
476 0.551414 0.553678 0.006053
477 0.557613 0.563182 0.006053
[478 rows x 3 columns]; kwargs: format='xyz'
I would like to know how I should correctly import the data into the Open3D and generate the point cloud. I appreciate your help.
A:
Open3D supports NumPy arrays. So, firstly you have to convert your dataframe with XYZ coordinates to a NumPy array. This will allow you to convert the NumPy array to the Open3D point cloud. You can check the documentation (here) of Open3D for further details.
The important lines (of documentation) for the conversion of a NumPy array to an Open3D point cloud are given below:
# Pass xyz to Open3D.o3d.geometry.PointCloud and visualize
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)
Here, 'xyz' is the NumPy array.
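Putting the pieces together, a minimal end-to-end sketch using the same CSV path and column selection as the question (the float cast is an assumption about the CSV contents):
import numpy as np
import open3d as o3d
import pandas as pd

data = pd.read_csv("./assets/landmarks_frame0.csv", header=None, usecols=range(1, 4))

# shape (N, 3) float array of x, y, z coordinates
xyz = data.to_numpy(dtype=np.float64)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)
o3d.visualization.draw_geometries([pcd])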
|
How to make the Open3D read the pandas DataFrame and generate points clouds in Python
|
I extracted certain data from the original CSV file (which contains the XYZ coordinates) by using the following code
.
data=pd.read_csv("./assets/landmarks_frame0.csv",header=None,usecols=range(1,4))
print(data)
The printing output looks fine as below. Recall that the first (started with 0.524606), second and third columns correspond to the x,y and z coordinates.
the snipped image of the pandas DataFrame extracted from the CSV file
Meanwhile, my goal is to import the Open3D library and generate the points cloud based on the data extracted from the pandas. I read the Open3D documents (http://www.open3d.org/docs/release/tutorial/geometry/pointcloud.html) and wrote the script as follows
print("Load a ply point cloud, print it, and render it")
pcd = o3d.io.read_point_cloud(data,format="xyz")
print(pcd)
print(np.asarray(pcd.points))
o3d.visualization.draw_geometries([pcd],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
As shown in the second line
pcd = o3d.io.read_point_cloud(data,format="xyz")
I learned from the File IO document (http://www.open3d.org/docs/release/tutorial/geometry/file_io.html) and passed the first argument as the data to be processed into the points cloud. Besides, I set the second argument format to be 'xyz', which means each line contains [x, y, z], where x, y, and z are the 3D coordinates.
However, the error message indicates as follow.
TypeError Traceback (most recent call last)
Input In [3], in <cell line: 4>()
1 print("Load a ply point cloud, print it, and render it")
2 # ply_point_cloud = o3d.data.PLYPointCloud()
3 # pcd = o3d.io.read_point_cloud(data,format="xyz")
----> 4 pcd = o3d.io.read_point_cloud(data,format="xyz")
6 print(pcd)
7 print(np.asarray(pcd.points))
TypeError: read_point_cloud(): incompatible function arguments. The following argument types are supported:
1. (filename: str, format: str = 'auto', remove_nan_points: bool = False, remove_infinite_points: bool = False, print_progress: bool = False) -> open3d.cpu.pybind.geometry.PointCloud
Invoked with: 1 2 3
0 0.524606 0.675098 -0.021419
1 0.524134 0.628257 -0.034960
2 0.524757 0.641571 -0.019187
3 0.518863 0.589718 -0.024071
4 0.523975 0.615806 -0.036730
.. ... ... ...
473 0.557430 0.553579 0.006053
474 0.563593 0.553342 0.006053
475 0.557327 0.544035 0.006053
476 0.551414 0.553678 0.006053
477 0.557613 0.563182 0.006053
[478 rows x 3 columns]; kwargs: format='xyz'
I would like to know how I should correctly import the data into the Open3D and generate the point cloud. I appreciate your help.
|
[
"Open3D supports NumPy arrays. So, firstly you have to convert your dataframe with XYZ coordinates to a NumPy array. This will allow you to convert the NumPy array to the Open3D point cloud. You can check the documentation (here) of Open3D for further details.\nThe important lines (of documentation) for the conversion of a NumPy array to an Open3D point cloud are given below:\n# Pass xyz to Open3D.o3d.geometry.PointCloud and visualize\npcd = o3d.geometry.PointCloud()\npcd.points = o3d.utility.Vector3dVector(xyz)\n\nHere, 'xyz' is the NumPy array.\n"
] |
[
0
] |
[] |
[] |
[
"open3d",
"pandas",
"python",
"python_3.x"
] |
stackoverflow_0073464667_open3d_pandas_python_python_3.x.txt
|
Q:
Python if statement executes but condition false
As the subject says, I have some live trading code in Python. I had been using it successfully for well over a week before my issue began. Suddenly, yesterday morning, when my code began running, it would open trades and close them almost as soon as they had been opened [as opposed to waiting until a certain amount of time (ExitThreshold) had been reached]. Here is the errant excerpt of code:
p = trades.OpenTrades(accountID= accountID); pv = client.request(p)
for i in range(0, len(pv["trades"])):
openTime = datetime.datetime.strptime(pv["trades"][i]["openTime"][:19], '%Y-%m-%dT%H:%M:%S')
if (datetime.datetime.utcnow() - openTime).seconds >= ExitThreshold:
closeID = pv["trades"][i]["id"]
r = trades.TradeClose(accountID, tradeID= closeID, data= TradeCloseRequest(units= "ALL").data); rv = client.request(r)
I have done some debugging, stepped through the execution and repeatedly confirmed that the if condition is False. Nonetheless, the trade is closed regardless. As I said, this code previously performed as expected for well over a week and there have been no changes to it. So why this sudden change in behaviour?
A:
I managed to solve the issue quite some time ago, but I seem to have neglected to post and update. Oh well, better late than never.
The issue was remedied simply by replacing the if statement with the one below:
datetime.datetime.utcnow() >= (openTime + datetime.timedelta(seconds= ExitThreshold))
The two conditions mean different things: timedelta.seconds is only the normalized seconds component of the difference (always in [0, 86399]), so it ignores the days part and wraps around for negative differences, whereas comparing full datetimes is unambiguous. The statement above is the correct way to formulate my intention.
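A short demonstration of the wrap-around, which also explains the sudden trade closures (e.g. when the broker's openTime is a few seconds ahead of the local clock):
import datetime

delta = datetime.timedelta(seconds=-30)  # openTime 30s "in the future"
print(delta)                  # -1 day, 23:59:30
print(delta.seconds)          # 86370 -> ">= ExitThreshold" is immediately True
print(delta.total_seconds())  # -30.0 -> the signed value one actually wants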
|
Python if statement executes but condition false
|
As the subject says, I have some live trading code, in Python. I had been using it successfully for well over a week before my issue began. Suddenly, yesterday morning, when my code began running, it would open trades and close them almost as soon as they had been opened [as opposed to waiting until a certain amount of time (ExitThreshold)] had been reached. Here is the errant excerpt of code:
p = trades.OpenTrades(accountID= accountID); pv = client.request(p)
for i in range(0, len(pv["trades"])):
openTime = datetime.datetime.strptime(pv["trades"][i]["openTime"][:19], '%Y-%m-%dT%H:%M:%S')
if (datetime.datetime.utcnow() - openTime).seconds >= ExitThreshold:
closeID = pv["trades"][i]["id"]
r = trades.TradeClose(accountID, tradeID= closeID, data= TradeCloseRequest(units= "ALL").data); rv = client.request(r)
I have done some dubugging, stepped into the execution and repeatedly confirmed that the if condition is False. Nonetheless, the trade is closed regardless. As I had said, this code was previously performing as per expectations for well over a week and there have been no changes to it. So why this sudden change in behaviour???
|
[
"I managed to solve the issue quite some time ago, but I seem to have neglected to post and update. Oh well, better late than never.\nThe issue was remedied simply by replacing the if statement with the one below:\ndatetime.datetime.utcnow() >= (openTime + datetime.timedelta(seconds= ExitThreshold))\n\nThe two statements/conditions mean different things and the above is the correct way to formulate my intention.\n"
] |
[
1
] |
[] |
[] |
[
"if_statement",
"oanda",
"python",
"spyder"
] |
stackoverflow_0067586652_if_statement_oanda_python_spyder.txt
|
Q:
How to lookup a specific value in a range of one DataFrame an put in in another
df1:
**Tarif von bis GK**
FedEx 0.0 1.0 G001
FedEx 1.0 2.0 G002
...
DHL. 0.0 0.5 G001
DHL. 0.5 1.0 G002
...
DPD 0.0 5.0 G001
DPD 5.0 10.0 G002
df2:
**Tarif Weight GK**
FedEx 0.6
DHL 0.6
FedEx 0.5
DPD 7.5
My attempt:
for i in range(len(df2)):
    df2.loc[[i]['GK'] = df1['GK'].loc[(df1['Tarif'] == df2.loc[[i]]['Tarif'])
                                      & (df1['von'] < df2[[i]]['Weight'])
                                      & (df1['bis'] >= df2[[i]]['Weight'])]
ValueError: Can only compare identically-labeled Series objects*
Result should be
df2:
**Tarif Weight GK****
FedEx 0.6. G001
DHL 0.6. G002
FedEx 0.5. G001
DPD 3.5. G002
A:
Another possible solution, which is based on the following ideas:
Merge the two dataframes as usual with pandas.DataFrame.merge.
Filter out the cases that do not satisfy the conditions.
out = df2.iloc[:,:2].merge(df1, on='Tarif')
out = out.loc[out['von'].lt(out['Weight']) & out['bis'].ge(out['Weight'])]
out = out.reset_index(drop=True)
Output:
Tarif Weight von bis GK
0 FedEx 0.6 0.0 1.0 G001
1 FedEx 0.5 0.0 1.0 G001
2 DHL 0.6 0.5 1.0 G002
3 DPD 7.5 5.0 10.0 G002
A:
Use a merge_asof:
(pd.merge_asof(df2.reset_index().drop(columns='GK', errors='ignore')
                  .sort_values(by='Weight'),
               df1.sort_values(by='von'),
               left_on='Weight', right_on='von', by='Tarif'
               )
   .set_index('index')
   # the line below is only necessary if the bins are disjoint
   # or if there is a risk that the Weight is greater than the max "bis"
   .assign(GK=lambda d: d['GK'].mask(d['Weight'].gt(d['bis'])))
   .sort_index()
   #.drop(columns=['von', 'bis'])  # uncomment to remove von/bis
)
Output:
Tarif Weight von bis GK
index
0 FedEx 0.6 0.0 1.0 G001
1 DHL 0.6 0.5 1.0 G002
2 FedEx 0.5 0.0 1.0 G001
3 DPD 7.5 5.0 10.0 G002
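Since the merge_asof result restores df2's original index (via set_index('index') and sort_index()), the looked-up group can be written straight back; a sketch, assuming the chained expression above is bound to a name res:
df2['GK'] = res['GK']  # index alignment puts each GK on its original row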
|
How to lookup a specific value in a range of one DataFrame an put in in another
|
df1:
**Tarif von bis GK**
FedEx 0.0 1.0 G001
FedEx 1.0 2.0 G002
...
DHL. 0.0 0.5 G001
DHL. 0.5 1.0 G002
...
DPD 0.0 5.0 G001
DPD 5.0 10.0 G002
df2:
**Tarif Weight GK**
FedEx 0.6
DHL 0.6
FedEx 0.5
DPD 7.5
My attempt:
for i in range(len(df2)):
df2.loc[[i]['GK'] = df1['GK'].loc[(df1['Tarif'] == df2.loc[[i]]['Tarif'])
& (df1['von'] < df2[[i]]['Weight'])
& (df1['bis'] >= df2[[i]]['Weight'])]
ValueError: Can only compare identically-labeled Series objects*
Result should be
df2:
**Tarif Weight GK****
FedEx 0.6. G001
DHL 0.6. G002
FedEx 0.5. G001
DPD 3.5. G002
|
[
"Another possible solution, which is based on the following ideas:\n\nMerge the two dataframes as usual with pandas.DataFrame.merge.\n\nFilter out the cases that do not satisfy the conditions.\n\n\nout = df2.iloc[:,:2].merge(df1, on='Tarif')\nout = out.loc[out['von'].lt(out['Weight']) & out['bis'].ge(out['Weight'])]\nout = out.reset_index(drop=True)\n\nOutput:\n Tarif Weight von bis GK\n0 FedEx 0.6 0.0 1.0 G001\n1 FedEx 0.5 0.0 1.0 G001\n2 DHL 0.6 0.5 1.0 G002\n3 DPD 7.5 5.0 10.0 G002\n\n",
"Use a merge_asof:\n(pd.merge_asof(df2.reset_index().drop(columns='GK', errors='ignore')\n .sort_values(by='Weight'),\n df1.sort_values(by='von'),\n left_on='Weight', right_on='von', by='Tarif'\n )\n .set_index('index')\n # the line below is only necessary if the bins are disjoint\n # or if there is a risk that the Weight is greater than the max \"bis\"\n .assign(GK=lambda d: d['GK'].mask(d['Weight'].gt(d['bis'])))\n .sort_index()\n #.drop(columns=['von', 'bis']) # uncomment to remove von/bis\n)\n\nOutput:\n Tarif Weight von bis GK\nindex \n0 FedEx 0.6 0.0 1.0 G001\n1 DHL 0.6 0.5 1.0 G002\n2 FedEx 0.5 0.0 1.0 G001\n3 DPD 7.5 5.0 10.0 G002\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074546896_pandas_python.txt
|
Q:
A little game using Matrix, code doesn't work
We have a n to n matrix (made of nested lists) whose elements are either "+" or "-". What I need to do is to write something that changes "+" to "-" or "-" to "+" with a X-shaped pattern (i.e itself and diagonal upperleft, upperright, lowerleft, lowerright, example below).
So for example if user input of coordinates (i.e index of list1 + 1 and index of list2 +2) is 2,2 (row,column) the result will be:
+ - + - - - + - - - - +
- + - to - - - or - - - to - + -
+ - + - - - + - - - - +
Or if given coordinate is 1,3 the result will be:
+ - + + - - + - - - - +
- + - to - - - or - - - to - + -
+ - + + - + + - - - - -
My code works mostly fine. But when the coordinates are 1,n (n is the matrix width, i.e. the upper-rightmost point), it just doesn't change 2,n-1 (one to the left and one down). At other points there are no issues at all.
That is if given coordinate is 1,3 the result should be(as above):
+ - + + - - + - - - - +
- + - to - - - or - - - to - + -
+ - + + - + + - - - - -
But my code gives:
+ - + + - - + - - - - +
- + - to - + - or - - - to - - -
+ - + + - + + - - - - -
Here is my code. Language is Python 3.8 and a and b are indices (minus 1, as list indices start from 0) of the nested lists I wrote the matrix with.
try:
    if matrix[a][b] == "+":
        matrix[a][b] = "-"
    elif matrix[a][b] == "-":
        matrix[a][b] = "+"
    else:
        pass
    if matrix[a-1][b-1] == "+" and (a!=0 and b!=0):
        matrix[a-1][b-1] = "-"
    elif matrix[a-1][b-1] == "-" and (a!=0 and b!=0):
        matrix[a-1][b-1] = "+"
    else:
        pass
    if matrix[a-1][b+1] == "+" and a!=0:
        matrix[a-1][b+1] = "-"
    elif matrix[a-1][b+1] == "-" and a!=0:
        matrix[a-1][b+1] = "+"
    else:
        pass
    if matrix[a+1][b-1] == "+" and b!=0:
        matrix[a+1][b-1] = "-"
    elif matrix[a+1][b-1] == "-" and b!=0:
        matrix[a+1][b-1] = "+"
    else:
        pass
    if matrix[a+1][b+1] == "+":
        matrix[a+1][b+1] = "-"
    elif matrix[a+1][b+1] == "-":
        matrix[a+1][b+1] = "+"
    else:
        pass
except:
    pass
A:
Your all-enveloping try/catch is the issue. Never do that if you don't know what exactly you are catching.
In this case, your code proceeds in the order:
center
top left
top right
bottom left
bottom right
But negative indices like matrix[-1][...] wrap around silently in Python, so the top-left check passes; it is the top-right access matrix[a-1][b+1] that is out of bounds (b+1 == n), so it throws an IndexError and exits. Only whatever got checked before the faulty access has been processed, which is why the bottom-left neighbour never gets toggled.
Rather than wrapping everything in one try/except, make sure that a and b (and their ±1 neighbours) stay within [0, len(matrix) - 1] before indexing; a sketch follows.
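A minimal sketch of the same X-pattern toggle with explicit bounds checks, so no try/except is needed (matrix, a, b as in the question):
def toggle_x(matrix, a, b):
    n = len(matrix)
    flip = {"+": "-", "-": "+"}
    for da, db in ((0, 0), (-1, -1), (-1, 1), (1, -1), (1, 1)):
        r, c = a + da, b + db
        if 0 <= r < n and 0 <= c < len(matrix[r]):
            matrix[r][c] = flip[matrix[r][c]]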
|
A little game using Matrix, code doesn't work
|
We have a n to n matrix (made of nested lists) whose elements are either "+" or "-". What I need to do is to write something that changes "+" to "-" or "-" to "+" with a X-shaped pattern (i.e itself and diagonal upperleft, upperright, lowerleft, lowerright, example below).
So for example if user input of coordinates (i.e index of list1 + 1 and index of list2 +2) is 2,2 (row,column) the result will be:
+ - + - - - + - - - - +
- + - to - - - or - - - to - + -
+ - + - - - + - - - - +
Or if given coordinate is 1,3 the result will be:
+ - + + - - + - - - - +
- + - to - - - or - - - to - + -
+ - + + - + + - - - - -
My code works mostly fine. But when the coordinates are 1,n(n is matrix width)(the upperrightmost point), it just doesn't change 2,n-1(one to the left and one to the down). In other points there are no issues at all.
That is if given coordinate is 1,3 the result should be(as above):
+ - + + - - + - - - - +
- + - to - - - or - - - to - + -
+ - + + - + + - - - - -
But my code gives:
+ - + + - - + - - - - +
- + - to - + - or - - - to - - -
+ - + + - + + - - - - -
Here is my code. Language is Python 3.8 and a and b are indices (minus 1, as list indices start from 0) of the nested lists I wrote the matrix with.
try:
if matrix[a][b] == "+":
matrix[a][b] = "-"
elif matrix[a][b] == "-":
matrix[a][b] = "+"
else:
pass
if matrix[a-1][b-1] == "+" and (a!=0 and b!=0):
matrix[a-1][b-1] = "-"
elif matrix[a-1][b-1] == "-" and (a!=0 and b!=0):
matrix[a-1][b-1] = "+"
else:
pass
if matrix[a-1][b+1] == "+" and a!=0:
matrix[a-1][b+1] = "-"
elif matrix[a-1][b+1] == "-" and a!=0:
matrix[a-1][b+1] = "+"
else:
pass
if matrix[a+1][b-1] == "+" and b!=0:
matrix[a+1][b-1] = "-"
elif matrix[a+1][b-1] == "-" and b!=0:
matrix[a+1][b-1] = "+"
else:
pass
if matrix[a+1][b+1] == "+":
matrix[a+1][b+1] = "-"
elif matrix[a+1][b+1] == "-":
matrix[a+1][b+1] = "+"
else:
pass
except:
pass
|
[
"Your all-enveloping try/catch is the issue. Never do that if you don't know what exactly you are catching.\nIn this case, your code proceeds in the order:\n\ncenter\ntop left\ntop right\nbottom left\nbottom right\n\nBut the top left corner is out of bounds, so it throws an IndexError and exits. It will only have processed whatever got checked before the faulty one.\nBefore you try/catch every separate case, try to make sure that a and b in [0, len(matrix) - 1]\n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074546839_python_python_3.x.txt
|
Q:
Change dictionary values with for-loop
I have a dictionary that I want to modify. Not the strings, but only the numbers. The function below should convert all the numbers in the dictionary to two decimal floats:
def roundup(resultdict):
    print("resultdict is: " + str(resultdict))
    for item in resultdict.items():
        print("item is: " + str(item))
        if isinstance(item[1], str):
            print("The item {} is a string {}".format(item[0], item[1]))
        else:
            item = list(item) # else Python gives an error it cannot modify a tuple
            item[1] = float("{:.2f}".format(item[1]))
            print("The item {} is now a float {}".format(item[0], item[1]))
    return resultdict

newdict = roundup(resultdict)
print("newdict is: " + str(newdict))
This seems to work (although 4 is changed into 4.0, not into 4.00), but it returns the original input? See newdict in the output below.
resultdict is: {'standplaats': 'Zagreb', 'totaal': 4215.04, 'totaal_v': 2087.43, 'totaaleenmalig': 16834.78, 'zone': 4, 'categorie': 'A', 'KKC': 0.961, 'ns': 2127.61, 'bs': 2677.58, 'KKC_s': -40.66, 'spvm': 751.82, 'kkc_spvm': -14.66, 'spvp': 526.28, 'kkc_spvp': -13.34, 'spvk': 0.0, 'kkc_spvk': -0.0, 'vh': 502, 'prm': 195, 'kkc_prm': -1.9, 'prp': 68.25, 'kkc_prp': -0.67, 'ovp': 463, 'tv': 314, 'ih': -497.86, 'ibkh': -163.83, 'hk': 3470.14, 'av': 13364.64}
item is: ('standplaats', 'Zagreb')
The item standplaats is a string Zagreb
item is: ('totaal', 4215.04)
The item totaal is now a float 4215.04
item is: ('totaal_v', 2087.43)
The item totaal_v is now a float 2087.43
item is: ('totaaleenmalig', 16834.78)
The item totaaleenmalig is now a float 16834.78
item is: ('zone', 4)
The item zone is now a float 4.0
item is: ('categorie', 'A')
The item categorie is a string A
item is: ('KKC', 0.961)
The item KKC is now a float 0.96
item is: ('ns', 2127.61)
The item ns is now a float 2127.61
item is: ('bs', 2677.58)
The item bs is now a float 2677.58
item is: ('KKC_s', -40.66)
The item KKC_s is now a float -40.66
item is: ('spvm', 751.82)
The item spvm is now a float 751.82
item is: ('kkc_spvm', -14.66)
The item kkc_spvm is now a float -14.66
item is: ('spvp', 526.28)
The item spvp is now a float 526.28
item is: ('kkc_spvp', -13.34)
The item kkc_spvp is now a float -13.34
item is: ('spvk', 0.0)
The item spvk is now a float 0.0
item is: ('kkc_spvk', -0.0)
The item kkc_spvk is now a float -0.0
item is: ('vh', 502)
The item vh is now a float 502.0
item is: ('prm', 195)
The item prm is now a float 195.0
item is: ('kkc_prm', -1.9)
The item kkc_prm is now a float -1.9
item is: ('prp', 68.25)
The item prp is now a float 68.25
item is: ('kkc_prp', -0.67)
The item kkc_prp is now a float -0.67
item is: ('ovp', 463)
The item ovp is now a float 463.0
item is: ('tv', 314)
The item tv is now a float 314.0
item is: ('ih', -497.86)
The item ih is now a float -497.86
item is: ('ibkh', -163.83)
The item ibkh is now a float -163.83
item is: ('hk', 3470.14)
The item hk is now a float 3470.14
item is: ('av', 13364.64)
The item av is now a float 13364.64
newdict is: {'standplaats': 'Zagreb', 'totaal': 4215.04, 'totaal_v': 2087.43, 'totaaleenmalig': 16834.78, 'zone': 4, 'categorie': 'A', 'KKC': 0.961, 'ns': 2127.61, 'bs': 2677.58, 'KKC_s': -40.66, 'spvm': 751.82, 'kkc_spvm': -14.66, 'spvp': 526.28, 'kkc_spvp': -13.34, 'spvk': 0.0, 'kkc_spvk': -0.0, 'vh': 502, 'prm': 195, 'kkc_prm': -1.9, 'prp': 68.25, 'kkc_prp': -0.67, 'ovp': 463, 'tv': 314, 'ih': -497.86, 'ibkh': -163.83, 'hk': 3470.14, 'av': 13364.64}
What am I doing wrong? Any help is much appreciated!
A:
Thanks for your suggestions. The solution was to iterate over key, value pairs and assign back into the dictionary through its key, instead of mutating the loop variable. The rounding is now fixed as well.
def roundup(resultdict):
    print("resultdict is: " + str(resultdict))
    for key, value in resultdict.items():
        print("item is: " + str(key) + str(value))
        if isinstance(value, str):
            print("The item {} is a string {}".format(key, value))
        else:
            resultdict[key] = "{:.2f}".format(round(value, 2))
            print("The item {} is now a float {}".format(key, value))
    return resultdict

newdict = roundup(resultdict)
print("newdict is: " + str(newdict))
|
Change dictionary values with for-loop
|
I have a dictionary that I want to modify. Not the strings, but only the numbers. The function below should convert all the numbers in the dictionary to two decimal floats:
def roundup(resultdict):
    print("resultdict is: " + str(resultdict))
    for item in resultdict.items():
        print("item is: " + str(item))
        if isinstance(item[1], str):
            print("The item {} is a string {}".format(item[0], item[1]))
        else:
            item = list(item) # else Python gives an error it cannot modify a tuple
            item[1] = float("{:.2f}".format(item[1]))
            print("The item {} is now a float {}".format(item[0], item[1]))
    return resultdict

newdict = roundup(resultdict)
print("newdict is: " + str(newdict))
This seems to work (although 4 is changed into 4.0, not into 4.00), but it returns the original input? See newdict in the output below.
resultdict is: {'standplaats': 'Zagreb', 'totaal': 4215.04, 'totaal_v': 2087.43, 'totaaleenmalig': 16834.78, 'zone': 4, 'categorie': 'A', 'KKC': 0.961, 'ns': 2127.61, 'bs': 2677.58, 'KKC_s': -40.66, 'spvm': 751.82, 'kkc_spvm': -14.66, 'spvp': 526.28, 'kkc_spvp': -13.34, 'spvk': 0.0, 'kkc_spvk': -0.0, 'vh': 502, 'prm': 195, 'kkc_prm': -1.9, 'prp': 68.25, 'kkc_prp': -0.67, 'ovp': 463, 'tv': 314, 'ih': -497.86, 'ibkh': -163.83, 'hk': 3470.14, 'av': 13364.64}
item is: ('standplaats', 'Zagreb')
The item standplaats is a string Zagreb
item is: ('totaal', 4215.04)
The item totaal is now a float 4215.04
item is: ('totaal_v', 2087.43)
The item totaal_v is now a float 2087.43
item is: ('totaaleenmalig', 16834.78)
The item totaaleenmalig is now a float 16834.78
item is: ('zone', 4)
The item zone is now a float 4.0
item is: ('categorie', 'A')
The item categorie is a string A
item is: ('KKC', 0.961)
The item KKC is now a float 0.96
item is: ('ns', 2127.61)
The item ns is now a float 2127.61
item is: ('bs', 2677.58)
The item bs is now a float 2677.58
item is: ('KKC_s', -40.66)
The item KKC_s is now a float -40.66
item is: ('spvm', 751.82)
The item spvm is now a float 751.82
item is: ('kkc_spvm', -14.66)
The item kkc_spvm is now a float -14.66
item is: ('spvp', 526.28)
The item spvp is now a float 526.28
item is: ('kkc_spvp', -13.34)
The item kkc_spvp is now a float -13.34
item is: ('spvk', 0.0)
The item spvk is now a float 0.0
item is: ('kkc_spvk', -0.0)
The item kkc_spvk is now a float -0.0
item is: ('vh', 502)
The item vh is now a float 502.0
item is: ('prm', 195)
The item prm is now a float 195.0
item is: ('kkc_prm', -1.9)
The item kkc_prm is now a float -1.9
item is: ('prp', 68.25)
The item prp is now a float 68.25
item is: ('kkc_prp', -0.67)
The item kkc_prp is now a float -0.67
item is: ('ovp', 463)
The item ovp is now a float 463.0
item is: ('tv', 314)
The item tv is now a float 314.0
item is: ('ih', -497.86)
The item ih is now a float -497.86
item is: ('ibkh', -163.83)
The item ibkh is now a float -163.83
item is: ('hk', 3470.14)
The item hk is now a float 3470.14
item is: ('av', 13364.64)
The item av is now a float 13364.64
newdict is: {'standplaats': 'Zagreb', 'totaal': 4215.04, 'totaal_v': 2087.43, 'totaaleenmalig': 16834.78, 'zone': 4, 'categorie': 'A', 'KKC': 0.961, 'ns': 2127.61, 'bs': 2677.58, 'KKC_s': -40.66, 'spvm': 751.82, 'kkc_spvm': -14.66, 'spvp': 526.28, 'kkc_spvp': -13.34, 'spvk': 0.0, 'kkc_spvk': -0.0, 'vh': 502, 'prm': 195, 'kkc_prm': -1.9, 'prp': 68.25, 'kkc_prp': -0.67, 'ovp': 463, 'tv': 314, 'ih': -497.86, 'ibkh': -163.83, 'hk': 3470.14, 'av': 13364.64}
What am I doing wrong? Any help is much appreciated!
|
[
"Thanks for your suggestions. The solution was to iterate over key, value pairs and modify the dictionary key with a value, not the iterator. The rounding is now fixed as well.\ndef roundup(resultdict):\n print(\"resultdict is: \" +str(resultdict))\n for key, value in resultdict.items():\n print(\"item is: \" + str(key) + str(value))\n if isinstance(value, str):\n print(\"The item {} is a string {}\".format(key, value))\n else:\n resultdict[key] = \"{:.2f}\".format(round(value, 2))\n print(\"The item {} is now a float {}\".format(key, value))\n return resultdict\n\nnewdict = roundup(resultdict)\nprint(\"newdict is: \" + str(newdict))\n\n"
] |
[
0
] |
[] |
[] |
[
"dictionary",
"for_loop",
"python"
] |
stackoverflow_0074539045_dictionary_for_loop_python.txt
|
Q:
Cannot assign a value to a string in a loop
import re
import os
path = "/data/notebook_files/"
Filelist = []
txt = ''
for home, dirs, files in os.walk(path):
    for filename in files:
        Filelist.append(os.path.join(home, filename))
for file in Filelist:
    with open(file, "r+", encoding='utf-8', errors='ignore') as f:
        print(file)
        for line in f:
            old_str = re.findall(r'\d+', line)
            for i in range(0, len(old_str)):
                line = line.replace(old_str[i], "\n"+"###### v"+old_str[i], 1)
                txt = line
print(txt)
import re
import os
path = "/data/notebook_files/"
Filelist = []
txt = ''
for home, dirs, files in os.walk(path):
    for filename in files:
        Filelist.append(os.path.join(home, filename))
for file in Filelist:
    with open(file, "r+", encoding='utf-8', errors='ignore') as f:
        print(file)
        for line in f:
            old_str = re.findall(r'\d+', line)
            for i in range(0, len(old_str)):
                line = line.replace(old_str[i], "\n"+"###### v"+old_str[i], 1)
                txt = line
                print(txt)
I'm going to add ###### before all the numbers in the text, so I only need the line after the last replacement, i.e. the result once all the numbers have been replaced, and to assign it to txt.
But txt doesn't get printed out or written to the file.
In the first code (print outside the loop), txt does not print, but in the second code (print inside the loop) it prints what I want. My question is how I can keep using the contents of txt outside of this loop.
I would like to know what causes this? It really bothers me.
A:
import re
import os
import codecs
path = "/data/notebook_files/bible"
Filelist = []
for home, dirs, files in os.walk(path):
    for filename in files:
        if str(filename)[len(str(filename))-3:] != '.py':
            Filelist.append(os.path.join(home, filename))
for file in Filelist:
    with codecs.open(file, "r+", encoding='utf-8') as f:
        txt = ''
        for line in f:
            #print (file)
            old_str = re.findall(r'\d+', line)
            for i in range(0, len(old_str)):
                line = line.replace(old_str[i], "\n"+"###### v"+old_str[i], 1).strip()
            txt += line
txt is initialized right after opening the file, inside the outer loop, so that it starts as an empty string for each file. Then, inside the line loop,
txt += line
appends each processed line, so all of the file's data ends up in txt and remains available after the loop.
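A slightly more idiomatic sketch of the same idea (re.sub and the join pattern are my substitutions, not part of the original answer):
import os
import re

def mark_numbers(text):
    # prefix every run of digits with "###### v" on a new line
    return re.sub(r'(\d+)', r'\n###### v\1', text)

for root, dirs, files in os.walk("/data/notebook_files/"):
    for filename in files:
        path = os.path.join(root, filename)
        with open(path, encoding='utf-8', errors='ignore') as f:
            lines = [mark_numbers(line) for line in f]
        txt = ''.join(lines)  # txt survives past the with-block
        print(txt)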
|
Cannot assign a value to a string in a loop
|
import re
import os
path = "/data/notebook_files/"
Filelist = []
txt = ''
for home, dirs, files in os.walk(path):
    for filename in files:
        Filelist.append(os.path.join(home, filename))
for file in Filelist:
    with open(file, "r+", encoding='utf-8', errors='ignore') as f:
        print(file)
        for line in f:
            old_str = re.findall(r'\d+', line)
            for i in range(0, len(old_str)):
                line = line.replace(old_str[i], "\n"+"###### v"+old_str[i], 1)
                txt = line
print(txt)
import re
import os
path = "/data/notebook_files/"
Filelist = []
txt = ''
for home, dirs, files in os.walk(path):
    for filename in files:
        Filelist.append(os.path.join(home, filename))
for file in Filelist:
    with open(file, "r+", encoding='utf-8', errors='ignore') as f:
        print(file)
        for line in f:
            old_str = re.findall(r'\d+', line)
            for i in range(0, len(old_str)):
                line = line.replace(old_str[i], "\n"+"###### v"+old_str[i], 1)
                txt = line
                print(txt)
I'm going to add ###### before all the numbers in the text, so I only need the line after the last replacement, i.e. the result once all the numbers have been replaced, and to assign it to txt.
But txt doesn't get printed out or written to the file.
In the first code (print outside the loop), txt does not print, but in the second code (print inside the loop) it prints what I want. My question is how I can keep using the contents of txt outside of this loop.
I would like to know what causes this? It really bothers me.
|
[
"import re\nimport os\nimport codecs\npath =\"/data/notebook_files/bible\"\nFilelist = []\nfor home, dirs, files in os.walk(path):\n for filename in files:\n if str(filename)[len(str(filename))-3:] != '.py':\n Filelist.append(os.path.join(home, filename))\nfor file in Filelist:\n with codecs.open(file,\"r+\",encoding='utf-8') as f:\n txt = ''\n for line in f: \n #print (file)\n old_str = re.findall(r'\\d+', line)\n for i in range(0,len(old_str)):\n line = line.replace(old_str[i],\"\\n\"+\"###### v\"+old_str[i],1).strip()\n txt += line\n\ntxt before opening the file inside the loop so that each time txt is assigned the empty string. Then outside the loop within the open file code, so that all the data is inside txt.\ntxt += line\nThis allows the contents of txt to be appended.\n"
] |
[
0
] |
[] |
[] |
[
"for_loop",
"python",
"string"
] |
stackoverflow_0074538487_for_loop_python_string.txt
|
Q:
pipenv requires python 3.7 but installed version is 3.8 and won't install
I know a little of Python and more than a year ago I wrote a small script, using pipenv to manage the dependencies.
The old platform was Windows 7, the current platform is Windows 10.
At that time I probably had Python 3.7 installed, now I have 3.8.3 but running:
pipenv install
Complained that:
Warning: Python 3.7 was not found on your system…
Neither 'pyenv' nor 'asdf' could be found to install Python.
You can specify specific versions of Python with:
$ pipenv --python path\to\python
This is the Pipfile
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
python-ldap = {path = "./dependencies/python_ldap-3.1.0-cp37-cp37m-win_amd64.whl"}
requests = "~=2.0"
mysqlclient = "~=1.0"
[dev-packages]
[requires]
python_version = "3.7"
I manually edited that last line to allow 3.8, but how do I properly fix that?
I think 3.7 should be a minimum requirement — well, the script is so simple that I think even 3.0 should work.
A:
[requires]
python_version = "3.7"
and the error:
Warning: Python 3.7 was not found on your system…
Sort of hints that pipenv is installed, but when it reads your config file it sees that it should create the environment with Python 3.7. So, logically, you should either install 3.7 or update the Pipfile to use the Python you have installed.
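For example, on Windows you can point pipenv straight at the interpreter you do have (the path below is illustrative; use wherever your python.exe actually lives):
$ pipenv --python C:\Users\you\AppData\Local\Programs\Python\Python38\python.exe install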
A:
If your python command is Python 2.x, but you have a python3 command which runs Python 3.x, then you might just need to use the --python option to tell pipenv to use it:
$ pipenv install
Warning: Python >= 3.5 was not found on your system...
Neither 'pyenv' nor 'asdf' could be found to install Python.
You can specify specific versions of Python with:
$ pipenv --python path/to/python
$ pipenv --python `which python3` install
Creating a virtualenv for this project...
A:
I would recommend you to install pyenv (so it can manage your Python versions).
The repository of pyenv is this one:
https://github.com/pyenv/pyenv
With pyenv installed, when a Pipfile requires a version of Python that you do not have on your machine, it will ask if you want to install it with pyenv.
In this way you'll be able to work on projects with different Python versions without having to worry about which version each project is using.
With pyenv you can easily switch between Python versions in your global shell as well.
Another suggestion, not really related to your question: you should take a look at poetry to replace pipenv (it is faster because it installs the dependencies in an async way); the website for poetry is this one: https://python-poetry.org/
But still, you will need something like pyenv or install another python version manually to solve your problem.
If you just wanna make the Pipfile allow you to use newer versions of python I guess you can change it like this:
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
python-ldap = {path = "./dependencies/python_ldap-3.1.0-cp37-cp37m-win_amd64.whl"}
requests = "~=2.0"
mysqlclient = "~=1.0"
[dev-packages]
[requires]
python_version = "^3.7"
I'm not really sure if the syntax is ^3.7 or >=3.7, but it should be one of these two.
A:
You can download Python 3.7 from the official site - https://www.python.org/downloads/
A:
I have Python 3.9 installed,
and pipenv requires Python 3.6 as stated in pipfile.
So, I change the value in pipfile
[requires]
python_version = "3.6"
to
[requires]
python_version = "*"
so it can be work on any version.
A:
The simple solution is, you can avoid such a problem and make it a fully system dependable python version. To make this happen, just remove python_version = "3.7" line from requires.
[[source]]
url = "https://pypi.org/simple"
verify_ssl = false
name = "pypi"
[packages]
[dev-packages]
[requires]
python_version = "3.7"
So after removing the python version, your pipfile would be like,
[[source]]
url = "https://pypi.org/simple"
verify_ssl = false
name = "pypi"
[packages]
[dev-packages]
[requires]
python_version = "*"
A:
Install the relevant version.
To check your Python version, open cmd in any directory and run python -V.
In my case Python 3.8.2 was required, but I had Python 3.8.0 installed, which is why I could not get past this error.
Once I installed the exact version, I was able to run the command.
A:
I had a similar problem and it was solved by deleting the Pipfile.
The Pipfile lives in the directory where you run the pipenv install command in the terminal.
A:
I met the same problem when I moved my old project to a new laptop.
Warning: Python 3.7 was not found on your system...
Neither 'pyenv' nor 'asdf' could be found to install Python.
You can specify specific versions of Python with:
$ pipenv --python path/to/python
My way to solve the problem is to simply delete the old Pipfile and run
pipenv shell in the terminal. It will then build a new Pipfile that matches your Python as it should.
|
pipenv requires python 3.7 but installed version is 3.8 and won't install
|
I know a little of Python and more than a year ago I wrote a small script, using pipenv to manage the dependencies.
The old platform was Windows 7, the current platform is Windows 10.
At that time I probably had Python 3.7 installed, now I have 3.8.3 but running:
pipenv install
Complained that:
Warning: Python 3.7 was not found on your system…
Neither 'pyenv' nor 'asdf' could be found to install Python.
You can specify specific versions of Python with:
$ pipenv --python path\to\python
This is the Pipfile
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
python-ldap = {path = "./dependencies/python_ldap-3.1.0-cp37-cp37m-win_amd64.whl"}
requests = "~=2.0"
mysqlclient = "~=1.0"
[dev-packages]
[requires]
python_version = "3.7"
I manually edited that last line to allow 3.8, but how do I properly fix that?
I think 3.7 should be a minimum requirement — well, the script is so simple that I think even 3.0 should work.
|
[
"[requires]\npython_version = \"3.7\"\n\nand the error:\nWarning: Python 3.7 was not found on your system…\nSort of hints that pipenv is installed but when it reads your config file, it sees that it should create environment with python 3.7, So, logically, you should install 3.7 or update the pipfile to use the python you have installed ?\n",
"If your python command is Python 2.x, but you have a python3 command which runs Python 3.x, then you might just need to use the --python option to tell pipenv to use it:\n$ pipenv install\nWarning: Python >= 3.5 was not found on your system...\nNeither 'pyenv' nor 'asdf' could be found to install Python.\nYou can specify specific versions of Python with:\n$ pipenv --python path/to/python\n\n$ pipenv --python `which python3` install\nCreating a virtualenv for this project...\n\n",
"I would recommend you to install pyenv (so it can manage your Python versions).\nThe repository of pyenv is this one:\nhttps://github.com/pyenv/pyenv\nWith pyenv installed when a Pipfile requires a version of Python that you do not have on your machine it will ask if you want to install it with pyenv.\nIn this way you'll be able to work in projects with different python versions without having to worry about which version the project is using.\nWith pyenv you can easily change between python version in your global shell as well.\nAnother suggestion not really realated to your question is, you should take a look on poetry to replace pipenv (it is faster because it install the dependencies in a async way), the website for poetry is this one: https://python-poetry.org/\nBut still, you will need something like pyenv or install another python version manually to solve your problem.\nIf you just wanna make the Pipfile allow you to use newer versions of python I guess you can change it like this:\n[[source]]\nurl = \"https://pypi.org/simple\"\nverify_ssl = true\nname = \"pypi\"\n\n[packages]\npython-ldap = {path = \"./dependencies/python_ldap-3.1.0-cp37-cp37m-win_amd64.whl\"}\nrequests = \"~=2.0\"\nmysqlclient = \"~=1.0\"\n\n[dev-packages]\n\n[requires]\npython_version = \"^3.7\"\n\nI'm not really sure if the syntax is ^3.7 or >=3.7 but should be one of this two.\n",
"You can download Python 3.7 from the official site - https://www.python.org/downloads/\n",
"I have Python 3.9 installed,\nand pipenv requires Python 3.6 as stated in pipfile.\nSo, I change the value in pipfile\n[requires]\npython_version = \"3.6\"\n\nto\n[requires]\npython_version = \"*\"\n\nso it can be work on any version.\n",
"The simple solution is, you can avoid such a problem and make it a fully system dependable python version. To make this happen, just remove python_version = \"3.7\" line from requires.\n[[source]]\nurl = \"https://pypi.org/simple\"\nverify_ssl = false\nname = \"pypi\"\n\n[packages]\n\n[dev-packages]\n\n[requires]\npython_version = \"3.7\"\n\nSo after removing the python version, your pipfile would be like,\n[[source]]\nurl = \"https://pypi.org/simple\"\nverify_ssl = false\nname = \"pypi\"\n\n[packages]\n\n[dev-packages]\n\n[requires]\npython_version = \"*\"\n\n",
"Install Relevant version\nHow to check Python Version\ncmd any directory>Python -v\nIn My case they need Python 3.8.2\nBut i have Python 3.8.0 version installed\nDue this i could not able over come this error\nOnce i install exact version i can able to run command\n",
"I had a similar problem and it was solved by deleting the pipfile.\nThe pipfile path is the path where you enter the pipenv install command in the terminal.\n",
"I meet the same problem when I move my old project to a new Laptop.\nWarning: Python 3.7 was not found on your system...\nNeither 'pyenv' nor 'asdf' could be found to install Python.\nYou can specify specific versions of Python with:\n$ pipenv --python path/to/python\n\nMy way to solve the problem is simply delete the old Pipfile and input\npipenv shell in the terminal. Then it will build a new Pipfile and match your python as it should be.\n"
] |
[
12,
9,
8,
3,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"pipenv",
"python",
"python_3.x"
] |
stackoverflow_0063247803_pipenv_python_python_3.x.txt
|
Q:
How to plot multiple lines from a loop on one 3d plot in Python?
Basically, I am looping generation of rays in Python and I'm trying to plot them all on the same graph. They should all be on a circle of radius 0.1. Each ray should be at a position on the circle that is varied by the arg which is in this case the theta. Also, just to mention (although I don't think it's that relevant) I am doing OOP here.
I get correct rays but I can't get them on the same 3d graph and I'm not sure how I'm supposed to do it. I thought using plt.show() would give me a graph with all 24 rays but it just plots 24 graphs.
Here is the relevant bit of code for reference:
r = 0.1
arg = 0
for i in range(0, 24):
    arg += np.pi/12
    x = r*np.sin(arg)
    y = r*np.cos(arg)
    l = ray.Ray(r=np.array([x,y,0]), v=np.array([0.5,0,5]))
    c = ray.SphericalRefraction(z0 = 100, curv = 0.0009, n1 = 1.0, n2 = 1.5, ar = 5)
    c.propagate_ray(l)
    o = ray.OutputPlane(250)
    o.outputintercept(l)
    points = np.array(l.vertices())
    fig = plt.figure()
    ax = plt.axes(projection='3d')
    #ax = fig.add_subplot(1,2,1,projection='3d')
    #plt.plot(points[:,2],points[:,0])
    ax.plot3D(points[:,0], points[:,1], points[:,2])
    plt.show()
A:
Expanding on the comment by Mercury, the figure and also axes object must be created outside the loop.
import matplotlib.pyplot as plt
import numpy as np

r = 0.1
arg = 0

fig = plt.figure()
ax = plt.axes(projection='3d')
for i in range(0, 24):
    arg += np.pi/12 * i
    v1 = r*np.sin(arg)
    v2 = r*np.cos(arg)
    # ...
    # using sample data
    x = []
    y = []
    z = []
    for j in range(2):
        x.append(j*v1)
        y.append(j*v2)
        z.append(j)
    # add vertex to the axes object
    ax.plot3D(x, y, z)
plt.show()
|
How to plot multiple lines from a loop on one 3d plot in Python?
|
Basically, I am looping generation of rays in Python and I'm trying to plot them all on the same graph. They should all be on a circle of radius 0.1. Each ray should be at a position on the circle that is varied by the arg which is in this case the theta. Also, just to mention (although I don't think it's that relevant) I am doing OOP here.
I get correct rays but I can't get them on the same 3d graph and I'm not sure how I'm supposed to do it. I thought using plt.show() would give me a graph with all 24 rays but it just plots 24 graphs.
Here is the relevant bit of code for reference:
r = 0.1
arg = 0
for i in range(0, 24):
    arg += np.pi/12
    x = r*np.sin(arg)
    y = r*np.cos(arg)
    l = ray.Ray(r=np.array([x,y,0]), v=np.array([0.5,0,5]))
    c = ray.SphericalRefraction(z0 = 100, curv = 0.0009, n1 = 1.0, n2 = 1.5, ar = 5)
    c.propagate_ray(l)
    o = ray.OutputPlane(250)
    o.outputintercept(l)
    points = np.array(l.vertices())
    fig = plt.figure()
    ax = plt.axes(projection='3d')
    #ax = fig.add_subplot(1,2,1,projection='3d')
    #plt.plot(points[:,2],points[:,0])
    ax.plot3D(points[:,0], points[:,1], points[:,2])
    plt.show()
|
[
"Expanding on the comment by Mercury, the figure and also axes object must be created outside the loop.\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nr = 0.1\narg = 0\n\nfig = plt.figure()\nax = plt.axes(projection='3d')\nfor i in range(0,24):\n arg += np.pi/12 * i\n v1 = r*np.sin(arg)\n v2 = r*np.cos(arg)\n # ...\n # using sample data\n x = []\n y = []\n z = []\n for j in range(2):\n x.append(j*v1)\n y.append(j*v2)\n z.append(j)\n # add vertex to the axes object\n ax.plot3D(x, y, z)\nplt.show()\n\n\n"
] |
[
0
] |
[] |
[] |
[
"3d",
"matplotlib",
"python",
"raytracing"
] |
stackoverflow_0074539989_3d_matplotlib_python_raytracing.txt
|
Q:
How to get html form, process it with py script and display it on screen?
I have a question about how html+py script works.
....
I get 3 values of trapezoidal data from html: height, length of side 1 and length of side 2, then press submit.
and send the value that the user has entered let's calculate with py script
Then show the result on the screen.
But when I run it, no result appears on the screen at all.
How should I fix it?
thank you
this is my code
<!DOCTYPE>
<html lang="th">
<head>
    <meta charset="UTF-8">
    <title>
        trapezoidal
    </title>
    <link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" />
    <script defer src="https://pyscript.net/alpha/pyscript.js"></script>
</head>
<body>
    <form onsubmit="return false">
        <label for="w1">1st side length:</label><br>
        <input type="number" step="any" id="w1" name="w1" ><br>
        <label for="w2">2st side length:</label><br>
        <input type="number" step="any" id="w2" name="w2" ><br>
        <label for="h">height:</label><br>
        <input type="number" step="any" id="h" name="h" ><br>
        <input pys-onClick="whcalc" type="submit" id="btn-form" value="submit">
    </form>
    <p id = 'output'></p>
    <py-script>
def whcalc(*args, **kwargs):
    w1 = float(Element('w1').value)
    w2 = float(Element('w2').value)
    h = float(Element('h').value)
    result_place = Element('output')
    #except:
    w1 = 0.0
    w2 = 0.0
    h = 0.0
    if w1 > 0 and w2 > 0 and h > 0:
        s = 1/2*h*w1+w2
        result_place.write('this is trapezoidal ', s)
    </py-script>
</body>
</html>
A:
You set all your variables to 0.0 before checking their values, which means your condition (if w1 > 0 and w2 > 0 and h > 0) will always evaluate to False, thus never printing anything in your output element.
Remove the following lines and your script will work:
w1 = 0.0
w2 = 0.0
h = 0.0
Also if you meant to display the value, you will need to change the following line:
result_place.write('this is trapezoidal ', s)
to this:
result_place.write('this is trapezoidal ' + str(s))
Codepen: https://codepen.io/roboto/pen/mdKxmxx
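Putting the answer's fixes together, a corrected whcalc could look like this (a sketch; it also parenthesizes the area formula, since 1/2*h*w1+w2 computes h*w1/2 + w2 rather than the trapezoid area h*(w1+w2)/2):
def whcalc(*args, **kwargs):
    w1 = float(Element('w1').value)
    w2 = float(Element('w2').value)
    h = float(Element('h').value)
    result_place = Element('output')
    if w1 > 0 and w2 > 0 and h > 0:
        s = 1/2 * h * (w1 + w2)  # area of a trapezoid
        result_place.write('this is trapezoidal ' + str(s))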
|
How to get html form, process it with py script and display it on screen?
|
I have a question about how html+py script works.
....
I get 3 values of trapezoidal data from html: height, length of side 1 and length of side 2, then press submit.
and send the value that the user has entered let's calculate with py script
Then show the result on the screen.
But when I run it, no result appears on the screen at all.
How should I fix it?
thank you
this is my code
<!DOCTYPE>
<html lang="th">
<head>
    <meta charset="UTF-8">
    <title>
        trapezoidal
    </title>
    <link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" />
    <script defer src="https://pyscript.net/alpha/pyscript.js"></script>
</head>
<body>
    <form onsubmit="return false">
        <label for="w1">1st side length:</label><br>
        <input type="number" step="any" id="w1" name="w1" ><br>
        <label for="w2">2st side length:</label><br>
        <input type="number" step="any" id="w2" name="w2" ><br>
        <label for="h">height:</label><br>
        <input type="number" step="any" id="h" name="h" ><br>
        <input pys-onClick="whcalc" type="submit" id="btn-form" value="submit">
    </form>
    <p id = 'output'></p>
    <py-script>
def whcalc(*args, **kwargs):
    w1 = float(Element('w1').value)
    w2 = float(Element('w2').value)
    h = float(Element('h').value)
    result_place = Element('output')
    #except:
    w1 = 0.0
    w2 = 0.0
    h = 0.0
    if w1 > 0 and w2 > 0 and h > 0:
        s = 1/2*h*w1+w2
        result_place.write('this is trapezoidal ', s)
    </py-script>
</body>
</html>
|
[
"You set all your variables to 0.0 before checking their values, which means your condition (if w1 > 0 and w2 > 0 and h > 0) will always evaluate to False, thus never printing anything in your output element.\nRemove the following lines and your script will work:\nw1 = 0.0\nw2 = 0.0\nh = 0.0\n\nAlso if you meant to display the value, you will need to change the following line:\nresult_place.write('this is trapezoidal ', s)\n\nto this:\nresult_place.write('this is trapezoidal ' + str(s))\n\nCodepen: https://codepen.io/roboto/pen/mdKxmxx\n"
] |
[
1
] |
[] |
[] |
[
"html",
"python",
"python_3.x",
"typescript"
] |
stackoverflow_0074543367_html_python_python_3.x_typescript.txt
|
Q:
Way to use twitter hashtags in python sentiment analysis?
Is there any way to extract tweets by twitter hashtag in this code, instead of tweets from a single user? I am working on sentiment analysis in Python.
# Extract 100 tweets from the twitter user
posts = api.user_timeline(screen_name="OlectraEbus", count = 100, lang ="en", tweet_mode="extended")
# Print the last 5 tweets
print("Show the 5 recent tweets:\n")
i=1
for tweet in posts[:5]:
print(str(i) +') '+ tweet.full_text + '\n')
i= i+1
A:
I recommend two links to read:
Link 1
Link 2
A:
What worked for my team was making a data frame. Each row represents a tweet and the columns include mentions, hashtags, time, date, etc. Since one of the columns is the "hashtag" column, containing the hashtags of each tweet, you can easily select tweets with specific hashtags using that column.
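A minimal sketch of that approach with the api object from the question (entities["hashtags"] is part of the standard Twitter v1.1 tweet payload exposed by tweepy; the hashtag used in the filter below is just an illustration):
import pandas as pd

posts = api.user_timeline(screen_name="OlectraEbus", count=100, lang="en", tweet_mode="extended")

df = pd.DataFrame({
    "text": [t.full_text for t in posts],
    "date": [t.created_at for t in posts],
    # each tweet's hashtags as a list of lowercase strings
    "hashtags": [[h["text"].lower() for h in t.entities["hashtags"]] for t in posts],
})

# select tweets carrying a specific hashtag
mask = df["hashtags"].apply(lambda tags: "electricbus" in tags)
print(df[mask]["text"])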
|
Way to use twitter hashtags in python sentiment analysis?
 |
Is there any way to extract tweets by twitter hashtag in this code, instead of tweets from a single user? I am working on sentiment analysis in Python.
# Extract 100 tweets from the twitter user
posts = api.user_timeline(screen_name="OlectraEbus", count = 100, lang ="en", tweet_mode="extended")
# Print the last 5 tweets
print("Show the 5 recent tweets:\n")
i=1
for tweet in posts[:5]:
print(str(i) +') '+ tweet.full_text + '\n')
i= i+1
|
[
"I recommend two link to read :\n\nLink 1\n\nLink 2\n\n\n",
"What worked for my team was making a data frame. Each row represents a tweet and the columns include mentions, hashtags, time, date, etc. Since one of the columns is the \"hashtag\" column, containing the hashtags of each tweet, you can easily select tweets with specific hashtags using that column.\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"twitter"
] |
stackoverflow_0066254893_python_twitter.txt
|
Q:
error OSError: [Errno 30] Read-only file system: '/ '
I was trying to install new packages in an application with pip, but I got this error
from pip._internal import main as pipmain
pipmain(['install','mechanize','-t .'])
A:
If you want to add packages to your app, don't run pip at runtime, use Chaquopy's built-in pip support as documented here.
|
error OSError: [Errno 30] Read-only file system: '/ '
|
I was trying to install new packages in an application with pip, but I got this error
from pip._internal import main as pipmain
pipmain(['install','mechanize','-t .'])
|
[
"If you want to add packages to your app, don't run pip at runtime, use Chaquopy's built-in pip support as documented here.\n"
] |
[
0
] |
[] |
[] |
[
"chaquopy",
"pip",
"python"
] |
stackoverflow_0074534787_chaquopy_pip_python.txt
|
Q:
How to use variables inside query in Pandas?
I have problem quering the data frame in panda when I use variable instead of value.
df2 = pd.read_csv('my.csv')
query=df2.query('cc_vehicle_line==7')
works fine but
df2 = pd.read_csv('my.csv')
query=df2.query('cc_vehicle_line==variable_name')
It throws a message that variable_name is undefined. But it is defined. I cannot use a hardcoded value, as I need to automate this and, depending on the value of variable_name, select the relevant rows.
Am I missing something?
Thanks
A:
You should use @variable_name with @
query=df2.query('cc_vehicle_line==@variable_name')
A:
You can also use ->
query=df2.query(f'cc_vehicle_line=="{variable_name}"')
query=df2.query(f"cc_vehicle_line=='{variable_name}'")
query=df2.query('cc_vehicle_line==@variable_name')
query=df2.query("cc_vehicle_line== {0}".format(variable_name))
|
How to use variables inside query in Pandas?
|
I have problem quering the data frame in panda when I use variable instead of value.
df2 = pd.read_csv('my.csv')
query=df2.query('cc_vehicle_line==7')
works fine but
df2 = pd.read_csv('my.csv')
query=df2.query('cc_vehicle_line==variable_name')
It throws a message that variable_name is undefined. But it is defined. I cannot use a hardcoded value, as I need to automate this and, depending on the value of variable_name, select the relevant rows.
Am I missing something?
Thanks
|
[
"You should use @variable_name with @\nquery=df2.query('cc_vehicle_line==@variable_name')\n\n",
"You can also use ->\n\nquery=df2.query(f'cc_vehicle_line==\"{variable_name}\"')\nquery=df2.query(f\"cc_vehicle_line=='{variable_name}'\")\nquery=df2.query('cc_vehicle_line==@variable_name')\nquery=df2.query(\"cc_vehicle_line== {0}\".format(variable_name))\n\n"
] |
[
23,
0
] |
[] |
[] |
[
"indexing",
"pandas",
"python",
"variables"
] |
stackoverflow_0030340277_indexing_pandas_python_variables.txt
|
Q:
Menu option in python
I have the problem of implementing these programs to find the root of a polynomial (bisection, regular falsi, raphson, secant), I want to make a menu to select the program that I want to execute but when I make the menu I do not get the menu as such only programs are executed.
# Defining Function
def f(x):
    return x**3-5*x-9

# Implementing Bisection Method
def bisection(x0,x1,e):
    step = 1
    print('\n\n*** BISECTION METHOD IMPLEMENTATION ***')
    condition = True
    while condition:
        x2 = (x0 + x1)/2
        print('Iteration-%d, x2 = %0.6f and f(x2) = %0.6f' % (step, x2, f(x2)))
        if f(x0) * f(x2) < 0:
            x1 = x2
        else:
            x0 = x2
        step = step + 1
        condition = abs(f(x2)) > e
    print('\nRequired Root is : %0.8f' % x2)

# Input Section
x0 = input('First Guess: ')
x1 = input('Second Guess: ')
e = input('Tolerable Error: ')

# Converting input to float
x0 = float(x0)
x1 = float(x1)
e = float(e)

#Note: You can combine above two section like this
# x0 = float(input('First Guess: '))
# x1 = float(input('Second Guess: '))
# e = float(input('Tolerable Error: '))

# Checking Correctness of initial guess values and bisecting
if f(x0) * f(x1) > 0.0:
    print('Given guess values do not bracket the root.')
    print('Try Again with different guess values.')
else:
    bisection(x0,x1,e)
#-------------------------
# Defining Function
def g(x):
    return x**3-5*x-9

# Implementing False Position Method
def falsePosition(x0,x1,e):
    step = 1
    print('\n\n*** FALSE POSITION METHOD IMPLEMENTATION ***')
    condition = True
    while condition:
        x2 = x0 - (x1-x0) * g(x0)/( g(x1) - g(x0) )
        print('Iteration-%d, x2 = %0.6f and f(x2) = %0.6f' % (step, x2, g(x2)))
        if g(x0) * g(x2) < 0:
            x1 = x2
        else:
            x0 = x2
        step = step + 1
        condition = abs(g(x2)) > e
    print('\nRequired root is: %0.8f' % x2)

# Input Section
x0 = input('First Guess: ')
x1 = input('Second Guess: ')
e = input('Tolerable Error: ')

# Converting input to float
x0 = float(x0)
x1 = float(x1)
e = float(e)

#Note: You can combine above two section like this
# x0 = float(input('First Guess: '))
# x1 = float(input('Second Guess: '))
# e = float(input('Tolerable Error: '))

# Checking Correctness of initial guess values and false positioning
if f(x0) * f(x1) > 0.0:
    print('Given guess values do not bracket the root.')
    print('Try Again with different guess values.')
else:
    falsePosition(x0,x1,e)
#---------------------------------------------
# Defining Function
def h(x):
    return x**3 - 5*x - 9

# Defining derivative of function
def hp(x):
    return 3*x**2 - 5

# Implementing Newton Raphson Method
def newtonRaphson(x0,e,N):
    print('\n\n*** NEWTON RAPHSON METHOD IMPLEMENTATION ***')
    step = 1
    flag = 1
    condition = True
    while condition:
        if g(x0) == 0.0:
            print('Divide by zero error!')
            break
        x1 = x0 - h(x0)/hp(x0)
        print('Iteration-%d, x1 = %0.6f and f(x1) = %0.6f' % (step, x1, h(x1)))
        x0 = x1
        step = step + 1
        if step > N:
            flag = 0
            break
        condition = abs(h(x1)) > e
    if flag==1:
        print('\nRequired root is: %0.8f' % x1)
    else:
        print('\nNot Convergent.')

# Input Section
x0 = input('Enter Guess: ')
e = input('Tolerable Error: ')
N = input('Maximum Step: ')

# Converting x0 and e to float
x0 = float(x0)
e = float(e)

# Converting N to integer
N = int(N)

#Note: You can combine above three section like this
# x0 = float(input('Enter Guess: '))
# e = float(input('Tolerable Error: '))
# N = int(input('Maximum Step: '))

# Starting Newton Raphson Method
newtonRaphson(x0,e,N)
#---------------------------------------
def i(x):
    return x**3 - 5*x - 9

# Implementing Secant Method
def secant(x0,i1,e,N):
    print('\n\n*** SECANT METHOD IMPLEMENTATION ***')
    step = 1
    condition = True
    while condition:
        if f(x0) == i(x1):
            print('Divide by zero error!')
            break
        x2 = x0 - (x1-x0)*i(x0)/( i(x1) - i(x0) )
        print('Iteration-%d, x2 = %0.6f and f(x2) = %0.6f' % (step, x2, i(x2)))
        x0 = x1
        x1 = x2
        step = step + 1
        if step > N:
            print('Not Convergent!')
            break
        condition = abs(i(x2)) > e
    print('\n Required root is: %0.8f' % x2)

# Input Section
x0 = input('Enter First Guess: ')
x1 = input('Enter Second Guess: ')
e = input('Tolerable Error: ')
N = input('Maximum Step: ')

# Converting x0 and e to float
x0 = float(x0)
x1 = float(x1)
e = float(e)

# Converting N to integer
N = int(N)

#Note: You can combine above three section like this
# x0 = float(input('Enter First Guess: '))
# x1 = float(input('Enter Second Guess: '))
# e = float(input('Tolerable Error: '))
# N = int(input('Maximum Step: '))

# Starting Secant Method
secant(x0,x1,e,N)
opcion = input(" Bienvenido a la calculadora de raices\n Seleccione el metodo a usar:\n 1-Biseccion\n 2-Regla Falsa\n 3-Newton Rapson\n 4-Secante\n")
print("El metodo a usar es: " + str(opcion)) #I use spanish :)
if opcion == 1:
    f(x)
elif opcion == 2:
    g(x)
elif opcion == 3:
    h(x)
elif opcion == 3:
    i(x)
I'm just starting in Python, sorry if I don't know the basics.
A:
Here is a (high level) solution to your problem:
Create main function for your program. Check https://realpython.com/python-main-function/ for more info.
Move input() functions and data conversions to the main function. This saves you a lot of copy/paste code. You then need to have the input functions and conversions only once in your code.
Create a list (or dictionary) of the choices inside the main function, e.g. choices = ['newtonRaphson', 'secant', ...]. A comprehensive description of data structures here: https://docs.python.org/3/tutorial/datastructures.html and if you are looking for a quick guide on lists and dictionaries, check https://www.w3schools.com/python/python_lists.asp and https://www.w3schools.com/python/python_dictionaries.asp.
Print the list of choices to the user. You can loop through the list and call the print function for each item. See example here: https://www.w3schools.com/python/python_lists_loop.asp
Ask for user input to choose the correct function, e.g. by list index or dictionary key.
Evaluate the input and call the appropriate function. This requires conditions, in other words if ... else statements. See more here: https://www.w3schools.com/python/python_conditions.asp
I had difficulties adding the example code, but I can edit this later. Feel free to comment and ask questions!
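Following those steps, a minimal sketch of such a main function (the menu dictionary and prompt strings are my own; the functions called are the ones defined in the question). Note one more bug the sketch avoids: input() returns a string, so the original comparison if opcion == 1 against an integer is never true; compare against "1" instead:
def main():
    menu = {"1": "Biseccion", "2": "Regla Falsa", "3": "Newton Raphson", "4": "Secante"}
    for key, label in menu.items():
        print(key + " - " + label)
    opcion = input("Seleccione el metodo a usar: ")
    if opcion in ("1", "2"):
        x0 = float(input('First Guess: '))
        x1 = float(input('Second Guess: '))
        e = float(input('Tolerable Error: '))
        # (the bracketing check f(x0)*f(x1) > 0 from the question could be added here)
        if opcion == "1":
            bisection(x0, x1, e)
        else:
            falsePosition(x0, x1, e)
    elif opcion == "3":
        newtonRaphson(float(input('Enter Guess: ')),
                      float(input('Tolerable Error: ')),
                      int(input('Maximum Step: ')))
    elif opcion == "4":
        secant(float(input('Enter First Guess: ')),
               float(input('Enter Second Guess: ')),
               float(input('Tolerable Error: ')),
               int(input('Maximum Step: ')))
    else:
        print("Opcion invalida")

if __name__ == "__main__":
    main()

For this to work, the module-level input sections in the question should be removed (the answer's step 2), otherwise they run before the menu ever appears.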
|
Menu option in python
|
I have the problem of implementing these programs to find the root of a polynomial (bisection, regular falsi, raphson, secant), I want to make a menu to select the program that I want to execute but when I make the menu I do not get the menu as such only programs are executed.
# Defining Function
def f(x):
    return x**3-5*x-9

# Implementing Bisection Method
def bisection(x0,x1,e):
    step = 1
    print('\n\n*** BISECTION METHOD IMPLEMENTATION ***')
    condition = True
    while condition:
        x2 = (x0 + x1)/2
        print('Iteration-%d, x2 = %0.6f and f(x2) = %0.6f' % (step, x2, f(x2)))
        if f(x0) * f(x2) < 0:
            x1 = x2
        else:
            x0 = x2
        step = step + 1
        condition = abs(f(x2)) > e
    print('\nRequired Root is : %0.8f' % x2)

# Input Section
x0 = input('First Guess: ')
x1 = input('Second Guess: ')
e = input('Tolerable Error: ')

# Converting input to float
x0 = float(x0)
x1 = float(x1)
e = float(e)

#Note: You can combine above two section like this
# x0 = float(input('First Guess: '))
# x1 = float(input('Second Guess: '))
# e = float(input('Tolerable Error: '))

# Checking Correctness of initial guess values and bisecting
if f(x0) * f(x1) > 0.0:
    print('Given guess values do not bracket the root.')
    print('Try Again with different guess values.')
else:
    bisection(x0,x1,e)
#-------------------------
# Defining Function
def g(x):
    return x**3-5*x-9

# Implementing False Position Method
def falsePosition(x0,x1,e):
    step = 1
    print('\n\n*** FALSE POSITION METHOD IMPLEMENTATION ***')
    condition = True
    while condition:
        x2 = x0 - (x1-x0) * g(x0)/( g(x1) - g(x0) )
        print('Iteration-%d, x2 = %0.6f and f(x2) = %0.6f' % (step, x2, g(x2)))
        if g(x0) * g(x2) < 0:
            x1 = x2
        else:
            x0 = x2
        step = step + 1
        condition = abs(g(x2)) > e
    print('\nRequired root is: %0.8f' % x2)

# Input Section
x0 = input('First Guess: ')
x1 = input('Second Guess: ')
e = input('Tolerable Error: ')

# Converting input to float
x0 = float(x0)
x1 = float(x1)
e = float(e)

#Note: You can combine above two section like this
# x0 = float(input('First Guess: '))
# x1 = float(input('Second Guess: '))
# e = float(input('Tolerable Error: '))

# Checking Correctness of initial guess values and false positioning
if f(x0) * f(x1) > 0.0:
    print('Given guess values do not bracket the root.')
    print('Try Again with different guess values.')
else:
    falsePosition(x0,x1,e)
#---------------------------------------------
# Defining Function
def h(x):
    return x**3 - 5*x - 9

# Defining derivative of function
def hp(x):
    return 3*x**2 - 5

# Implementing Newton Raphson Method
def newtonRaphson(x0,e,N):
    print('\n\n*** NEWTON RAPHSON METHOD IMPLEMENTATION ***')
    step = 1
    flag = 1
    condition = True
    while condition:
        if g(x0) == 0.0:
            print('Divide by zero error!')
            break
        x1 = x0 - h(x0)/hp(x0)
        print('Iteration-%d, x1 = %0.6f and f(x1) = %0.6f' % (step, x1, h(x1)))
        x0 = x1
        step = step + 1
        if step > N:
            flag = 0
            break
        condition = abs(h(x1)) > e
    if flag==1:
        print('\nRequired root is: %0.8f' % x1)
    else:
        print('\nNot Convergent.')

# Input Section
x0 = input('Enter Guess: ')
e = input('Tolerable Error: ')
N = input('Maximum Step: ')

# Converting x0 and e to float
x0 = float(x0)
e = float(e)

# Converting N to integer
N = int(N)

#Note: You can combine above three section like this
# x0 = float(input('Enter Guess: '))
# e = float(input('Tolerable Error: '))
# N = int(input('Maximum Step: '))

# Starting Newton Raphson Method
newtonRaphson(x0,e,N)
#---------------------------------------
def i(x):
    return x**3 - 5*x - 9

# Implementing Secant Method
def secant(x0,i1,e,N):
    print('\n\n*** SECANT METHOD IMPLEMENTATION ***')
    step = 1
    condition = True
    while condition:
        if f(x0) == i(x1):
            print('Divide by zero error!')
            break
        x2 = x0 - (x1-x0)*i(x0)/( i(x1) - i(x0) )
        print('Iteration-%d, x2 = %0.6f and f(x2) = %0.6f' % (step, x2, i(x2)))
        x0 = x1
        x1 = x2
        step = step + 1
        if step > N:
            print('Not Convergent!')
            break
        condition = abs(i(x2)) > e
    print('\n Required root is: %0.8f' % x2)

# Input Section
x0 = input('Enter First Guess: ')
x1 = input('Enter Second Guess: ')
e = input('Tolerable Error: ')
N = input('Maximum Step: ')

# Converting x0 and e to float
x0 = float(x0)
x1 = float(x1)
e = float(e)

# Converting N to integer
N = int(N)

#Note: You can combine above three section like this
# x0 = float(input('Enter First Guess: '))
# x1 = float(input('Enter Second Guess: '))
# e = float(input('Tolerable Error: '))
# N = int(input('Maximum Step: '))

# Starting Secant Method
secant(x0,x1,e,N)
opcion = input(" Bienvenido a la calculadora de raices\n Seleccione el metodo a usar:\n 1-Biseccion\n 2-Regla Falsa\n 3-Newton Rapson\n 4-Secante\n")
print("El metodo a usar es: " + str(opcion)) #I use spanish :)
if opcion == 1:
    f(x)
elif opcion == 2:
    g(x)
elif opcion == 3:
    h(x)
elif opcion == 3:
    i(x)
I'm just starting in Python, sorry if I don't know the basics.
|
[
"Here is a (high level) solution to your problem:\n\nCreate main function for your program. Check https://realpython.com/python-main-function/ for more info.\nMove input() functions and data conversions to the main function. This saves you a lot of copy/paste code. You then need to have the input functions and conversions only once in your code.\nCreate a list (or dictionary) of the choices inside the main function e.g. choices = ['newtonRaphson', 'secant, ...]. A comprehensive description of data structures here: https://docs.python.org/3/tutorial/datastructures.html and if you are looking for a quick guide on list and dictionaries, check https://www.w3schools.com/python/python_lists.asp and https://www.w3schools.com/python/python_dictionaries.asp.\nPrint the list of choices to the user. You can loop through the list and call print function for each list item. See example here: https://www.w3schools.com/python/python_lists_loop.asp\nAsk user input to choose the correct function e.g. the list index or dictionary key.\nEvaluate the input and call the appropriate function. This requires conditions, in other words if ... else statements. See more in here:https://www.w3schools.com/python/python_conditions.asp\n\nI had difficulties adding the example code, but I can edit this later. Feel free to comment and ask questions!\n"
] |
[
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074546872_python.txt
|
Q:
QStandardItemModel delete multiple rows without problem - python
I'm coding a reddit bot and created a UI like this:
What I want to do: the user checks accounts in the list, clicks "remove selected account", and all checked accounts are deleted from the list. So here is my code:
def delete_selected_accounts(self):
    print(len(self.account_list))
    for i in range(self.model.rowCount()):
        if self.model.item(i).checkState() == Qt.Checked:
            self.model.removeRow(i)
            self.account_list.pop(i)
However, this code does not work as expected. When I removeRow from the model or pop from the account list, the number of rows changes and I get a list-index-out-of-range error. What can I do to delete the selected items without this problem?
A:
Hehe, I answered a similar question yesterday.
You're iterating over N rows (N being the original number of rows), but along the way, you remove some of the rows. The result? Eventually you'll try to access self.model.item(N - 1), which would be out of range.
One way to solve this is to iterate from the back (from N - 1 to 0), so that deleted rows don't affect future rows.
for i in range(self.model.rowCount())[::-1]:
...
Here, [::-1] tells range to generate a reverse range. You could write it out explicitly like so: range(self.model.rowCount() - 1, -1, -1). But it looks a bit ugly.
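Applied to the method from the question, a sketch of the fix looks like this:
def delete_selected_accounts(self):
    # iterate from the last row down to 0 so that removals
    # don't shift the indices of rows still to be checked
    for i in range(self.model.rowCount() - 1, -1, -1):
        if self.model.item(i).checkState() == Qt.Checked:
            self.model.removeRow(i)
            self.account_list.pop(i)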
|
QStandardItemModel delete multiple rows without problem - python
|
I'm coding a reddit bot and created a UI like this:
What I want to do: the user checks accounts in the list, clicks "remove selected account", and all checked accounts are deleted from the list. So here is my code:
def delete_selected_accounts(self):
    print(len(self.account_list))
    for i in range(self.model.rowCount()):
        if self.model.item(i).checkState() == Qt.Checked:
            self.model.removeRow(i)
            self.account_list.pop(i)

However, this code does not work as expected. When I removeRow from the model or pop from the account list, the number of rows changes and I get a list-index-out-of-range error. What can I do to delete the selected items without this problem?
|
[
"Hehe, I answered a similar question yesterday.\nYou're iterating over N rows (N being the original number of rows), but along the way, you remove some of the rows. The result? Eventually you'll try to access self.model.item(N - 1), which would be out of range.\nOne way to solve this is to iterate from the back (from N - 1 to 0), so that deleted rows don't affect future rows.\nfor i in range(self.model.rowCount())[::-1]:\n ...\n\nHere, [::-1] tells range to generate a reverse range. You could write it out explicitly like so: range(self.model.rowCount() - 1, -1, -1). But it looks a bit ugly.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"qt"
] |
stackoverflow_0074547384_python_qt.txt
|
Q:
Active Directory Authentication of Logged-in User using ADODB/ADSDSOObject
My system is connected to Active Directory and I can query it by binding using a username and password.
I noticed that I am also able to query it without explicitly providing a username and password, when using ADO or ADSDSOObject Provider (tried in Java/Python/VBA).
I would like to understand how the authentication is done in this case.
Example of first case where username and password is explicitly needed:
from ldap3 import Server, Connection
from ldap3.extend.microsoft.addMembersToGroups import ad_add_members_to_groups as addUsersInGroups

server = Server('172.16.10.50', port=636, use_ssl=True)
conn = Connection(server, 'CN=ldap_bind_account,OU=1_Service_Accounts,OU=0_Users,DC=TG,DC=LOCAL', 'Passw0rds123!', auto_bind=True)
print(conn)
Example of second case where no username and password is explicitly needed:
Set objConnection = CreateObject("ADODB.Connection")
Set objCommand = CreateObject("ADODB.Command")
objConnection.Provider = "ADsDSOObject"
objConnection.Open "Active Directory Provider"
Set objCOmmand.ActiveConnection = objConnection
objCommand.CommandText = "SELECT Name FROM 'LDAP://DC=mydomain,DC=com' WHERE objectClass = 'Computer'"
objCommand.Properties("Page Size") = 1000
objCommand.Properties("Searchscope") = ADS_SCOPE_SUBTREE
Set objRecordSet = objCommand.Execute
I tried to look at the source code of the libraries but was not able to understand what is being done.
A:
In the second case, it's using the credentials of the account running the program, or it could even be using the computer account (every computer joined to the domain has an account on the domain, with a password that no person ever sees).
Python's ldap3 package doesn't automatically do that, however, it appears there may be way to make it work without specifying credentials, using Kerberos authentication. For example, from this issue:
I know that, for GSSAPI and GSS-SPNEGO, if you specify "authentication=SASL, sasl_mechanism=GSSAPI" (or spnego as needed) in your connection, then you don't need to specify user/password at all.
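A sketch of what that looks like with ldap3 (untested here; it assumes a working Kerberos setup and the gssapi or winkerberos package installed):
from ldap3 import Server, Connection, SASL, GSSAPI

server = Server('172.16.10.50', port=636, use_ssl=True)
# no username/password: the Kerberos ticket of the logged-in user is used
conn = Connection(server, authentication=SASL, sasl_mechanism=GSSAPI, auto_bind=True)
print(conn)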
And there's also this StackOverflow question on the same topic: Passwordless Python LDAP3 authentication from Windows client
|
Active Directory Authentication of Logged-in User using ADODB/ADSDSOObject
|
My system is connected to Active Directory and I can query it by binding using a username and password.
I noticed that I am also able to query it without explicitly providing a username and password, when using ADO or ADSDSOObject Provider (tried in Java/Python/VBA).
I would like to understand how the authentication is done in this case.
Example of first case where username and password is explicitly needed:
from ldap3 import Server, Connection
from ldap3.extend.microsoft.addMembersToGroups import ad_add_members_to_groups as addUsersInGroups

server = Server('172.16.10.50', port=636, use_ssl=True)
conn = Connection(server, 'CN=ldap_bind_account,OU=1_Service_Accounts,OU=0_Users,DC=TG,DC=LOCAL', 'Passw0rds123!', auto_bind=True)
print(conn)
Example of second case where no username and password is explicitly needed:
Set objConnection = CreateObject("ADODB.Connection")
Set objCommand = CreateObject("ADODB.Command")
objConnection.Provider = "ADsDSOObject"
objConnection.Open "Active Directory Provider"
Set objCOmmand.ActiveConnection = objConnection
objCommand.CommandText = "SELECT Name FROM 'LDAP://DC=mydomain,DC=com' WHERE objectClass = 'Computer'"
objCommand.Properties("Page Size") = 1000
objCommand.Properties("Searchscope") = ADS_SCOPE_SUBTREE
Set objRecordSet = objCommand.Execute
I tried to look at the source code of the libraries but was not able to understand what is being done.
|
[
"In the second case, it's using the credentials of the account running the program, or it could even be using the computer account (every computer joined to the domain has an account on the domain, with a password that no person ever sees).\nPython's ldap3 package doesn't automatically do that, however, it appears there may be way to make it work without specifying credentials, using Kerberos authentication. For example, from this issue:\n\nI know that, for GSSAPI and GSS-SPNEGO, if you specify \"authentication=SASL, sasl_mechanism=GSSAPI\" (or spnego as needed) in your connection, then you don't need to specify user/password at all.\n\nAnd there's also this StackOverflow question on the same topic: Passwordless Python LDAP3 authentication from Windows client\n"
] |
[
0
] |
[] |
[] |
[
"active_directory",
"ado",
"python",
"vba"
] |
stackoverflow_0074543730_active_directory_ado_python_vba.txt
|
Q:
Bar Chart showing wrong dates on xaxis and how can I show correct dates?
When I plot monthly data in plotly, the xaxis shows me the wrong dates. For data in June it shows July on the xaxis, and all following months are also shifted. I found a similar question on community.plotly, but it didn't work for me. How can I show the correct dates on the xaxis?
fig = go.Figure(data=[go.Bar(
        x=df.index,
        y=df['val_1'],
    ),
    go.Bar(
        x=df.index,
        y=df['val_2'],
    ),
    go.Bar(
        x=df.index,
        y=df['val_3'],
    ),
    go.Bar(
        x=df.index,
        y=df['val_4'],
    ),
    go.Bar(
        x=df.index,
        y=df['val_5'],
    )])

fig.layout.xaxis.tick0 = '2022-06-31' #adapted from similar question from community.plotly
fig.layout.xaxis.dtick = 'M1'

fig.show()
dataframe:
date,val_1,val_2,val_3,val_4,val_5
2022-06-30,24.87,27.5,32.76,22.39,24.08
2022-07-31,25.25,23.06,42.59,24.79,27.09
2022-08-31,23.72,32.7,41.33,27.85,31.2
2022-09-30,22.5,21.16,43.12,25.84,25.58
2022-10-31,21.76,24.79,33.95,22.34,21.07
2022-11-30,27.92,26.15,29.85,21.83,20.44
df.info()
DatetimeIndex: 6 entries, 2022-06-30 to 2022-11-30
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 val_1 6 non-null float64
1 val_2 6 non-null float64
2 val_3 6 non-null float64
3 val_4 6 non-null float64
4 val_5 6 non-null float64
A:
Plotly detects that the data is on a monthly cadence and assumes the starting month is July instead of June; if you zoom into the plot, you can see the first tick move to June 30.
The month labels can be set explicitly using ticktext, which supplies a label for each data point.
df = pd.DataFrame.from_dict(
    {
        "date": ["2022-06-30", "2022-07-31", "2022-08-31", "2022-09-30", "2022-10-31", "2022-11-30"],
        "val_1": [24.87, 25.25, 23.72, 22.5, 21.76, 27.92],
        "val_2": [27.5, 23.06, 32.7, 21.16, 24.79, 26.15],
        "val_3": [32.76, 42.59, 41.33, 43.12, 33.95, 29.85],
        "val_4": [22.39, 24.79, 27.85, 25.84, 22.34, 21.83],
        "val_5": [24.08, 27.09, 31.2, 25.58, 21.07, 20.44],
    }
)

df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d')
df = df.set_index("date")

fig = go.Figure(
    data=[
        go.Bar(x=df.index, y=df["val_1"]),
        go.Bar(x=df.index, y=df["val_2"]),
        go.Bar(x=df.index, y=df["val_3"]),
        go.Bar(x=df.index, y=df["val_4"]),
        go.Bar(x=df.index, y=df["val_5"]),
    ]
)
tickvals = df.index.tolist()

ticktexts = [val.strftime("%b %d") for val in tickvals]
fig.update_xaxes(tickvals=tickvals, ticktext=ticktexts)
fig.show()
Result:
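As an alternative sketch (same df as above): normalising the month-end index to month starts also lets Plotly's default monthly ticks line up, without listing tick labels by hand:
# shift each month-end timestamp to the first day of its month
df.index = df.index.to_period("M").to_timestamp()
fig = go.Figure(data=[go.Bar(x=df.index, y=df[c]) for c in df.columns])
fig.update_xaxes(dtick="M1", tickformat="%b %Y")
fig.show()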
|
Bar chart showing wrong dates on xaxis - how can I show correct dates?
|
When I plot monthly data in Plotly, the xaxis shows me the wrong dates. For data in June it shows July on the xaxis, and all following months are also wrong. I found a similar question on the Plotly community forum, but it didn't work for me. How can I show the correct dates on the xaxis?
fig = go.Figure(data=[go.Bar(
x=df.index,
y=df['val_1'],
),
go.Bar(
x=df.index,
y=df['val_2'],
),
go.Bar(
x=df.index,
y=df['val_3'],
),
go.Bar(
x=df.index,
y=df['val_4'],
),
go.Bar(
x=df.index,
y=df['val_5'],
)])
fig.layout.xaxis.tick0 = '2022-06-31' #adapted from similar question from community.plotly
fig.layout.xaxis.dtick = 'M1'
fig.show()
dataframe:
date,val_1,val_2,val_3,val_4,val_5
2022-06-30,24.87,27.5,32.76,22.39,24.08
2022-07-31,25.25,23.06,42.59,24.79,27.09
2022-08-31,23.72,32.7,41.33,27.85,31.2
2022-09-30,22.5,21.16,43.12,25.84,25.58
2022-10-31,21.76,24.79,33.95,22.34,21.07
2022-11-30,27.92,26.15,29.85,21.83,20.44
df.info()
DatetimeIndex: 6 entries, 2022-06-30 to 2022-11-30
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 val_1 6 non-null float64
1 val_2 6 non-null float64
2 val_3 6 non-null float64
3 val_4 6 non-null float64
4 val_5 6 non-null float64
|
[
"The plotly identifies the data to be on a monthly update and assumes the starting month is July instead of June. If you zoom into the plot, you could observe the month changes to June 30.\nThe month can also be updated using the ticktext where the labels are mentioned for the concerned data point.\ndf = pd.DataFrame.from_dict(\n {\n \"date\": [\"2022-06-30\", \"2022-07-31\", \"2022-08-31\", \"2022-09-30\", \"2022-10-31\", \"2022-11-30\"],\n \"val_1\": [24.87, 25.25, 23.72, 22.5, 21.76, 27.92],\n \"val_2\": [27.5, 23.06, 32.7, 21.16, 24.79, 26.15],\n \"val_3\": [32.76, 42.59, 41.33, 43.12, 33.95, 29.85],\n \"val_4\": [22.39, 24.79, 27.85, 25.84, 22.34, 21.83],\n \"val_5\": [24.08, 27.09, 31.2, 25.58, 21.07, 20.44],\n }\n)\n\ndf['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d')\ndf=df.set_index(\"date\")\n\nfig = go.Figure(\n data=[\n go.Bar(x=df.index, y=df[\"val_1\"]),\n go.Bar(x=df.index, y=df[\"val_2\"]),\n go.Bar(x=df.index, y=df[\"val_3\"]),\n go.Bar(x=df.index, y=df[\"val_4\"],),\n go.Bar(x=df.index, y=df[\"val_5\"],),\n ]\n)\ntickvals = df.index.tolist()\n\nticktexts = [val.strftime(\"%b %d\") for val in tickvals]\nfig.update_xaxes(tickvals=tickvals, ticktext=ticktexts)\nfig.show()\n\nResult:\n\n"
] |
[
2
] |
[] |
[] |
[
"plotly",
"python"
] |
stackoverflow_0074546714_plotly_python.txt
|
Q:
TypeError: classification_report() takes 2 positional arguments but 3 were given
return metrics.classification_report(y_true, y_pred, labels, **kwargs)
TypeError: classification_report() takes 2 positional arguments but 3 were given
We are currently training a crf model and we wanted to get the classification report of the metrics but we got this error.
we tried to do this instead:
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred, labels='labels'))
and got this error:
ValueError: You appear to be using a legacy multi-label data representation. Sequence of sequences are no longer supported; use a binary array or sparse matrix instead - the MultiLabelBinarizer transformer can convert to this format.
So then we tried to convert it to a sparse matrix and also to use the MultiLabelBinarizer, and nothing worked. We can't seem to figure it out. Does anyone know how this works?
# metrics on test dataset
print("For Testing Set: ")
print("F1 score: {}".format(metrics.flat_f1_score(y_test, y_pred, average='weighted')))
print("Precision score: {}".format(metrics.flat_precision_score(y_test, y_pred, average='weighted')))
print("Recall score: {}".format(metrics.flat_recall_score(y_test, y_pred, average='weighted', labels='labels')))
print(metrics.flat_classification_report(y_test, y_pred, labels='labels', digits=3))
above is our sample code
A:
The issue is raised on GitHub: https://github.com/TeamHG-Memex/sklearn-crfsuite/issues/66
but it has not been resolved yet.
I was able to use an alternative fork:
pip install git+https://github.com/MeMartijn/updated-sklearn-crfsuite.git#egg=sklearn_crfsuite
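As a side note, labels='labels' in the question passes a literal string, while the parameter expects a list of actual label names. A sketch of the intended call, assuming crf is your fitted sklearn-crfsuite model (classes_ holds the label set seen during training):
labels = list(crf.classes_)  # the real label names, not the string 'labels'
print(metrics.flat_classification_report(y_test, y_pred, labels=labels, digits=3))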
|
TypeError: classification_report() takes 2 positional arguments but 3 were given
|
return metrics.classification_report(y_true, y_pred, labels, **kwargs)
TypeError: classification_report() takes 2 positional arguments but 3 were given
We are currently training a crf model and we wanted to get the classification report of the metrics but we got this error.
we tried to do this instead:
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred, labels='labels'))
and got this error:
ValueError: You appear to be using a legacy multi-label data representation. Sequence of sequences are no longer supported; use a binary array or sparse matrix instead - the MultiLabelBinarizer transformer can convert to this format.
So then we tried to convert it to a sparse matrix and also to use the MultiLabelBinarizer, and nothing worked. We can't seem to figure it out. Does anyone know how this works?
# metrics on test dataset
print("For Testing Set: ")
print("F1 score: {}".format(metrics.flat_f1_score(y_test, y_pred, average='weighted')))
print("Precision score: {}".format(metrics.flat_precision_score(y_test, y_pred, average='weighted')))
print("Recall score: {}".format(metrics.flat_recall_score(y_test, y_pred, average='weighted', labels='labels')))
print(metrics.flat_classification_report(y_test, y_pred, labels='labels', digits=3))
above is our sample code
|
[
"Issue is raised in git https://github.com/TeamHG-Memex/sklearn-crfsuite/issues/66\nBut its not resolved till now.\nI was able to alternative :\npip install git+https://github.com/MeMartijn/updated-sklearn-crfsuite.git#egg=sklearn_crfsuite.\n"
] |
[
0
] |
[] |
[] |
[
"crf",
"metrics",
"multilabel_classification",
"python"
] |
stackoverflow_0071351771_crf_metrics_multilabel_classification_python.txt
|
Q:
ValueError: string size must be a multiple of element size while implementing Word2Vec
I am trying to implement Word2Vec but I'm getting this error:
ValueError: string size must be a multiple of element size
This is the code:
from gensim.models.keyedvectors import KeyedVectors
model_path = './data/GoogleNews-vectors-negative300.bin'
w2v_model = KeyedVectors.load_word2vec_format(model_path, binary=True)
The last line throws the error. Can somebody please help me find the cause of this issue?
A:
You should set unicode_errors='replace' in the last line:
w2v_model = KeyedVectors.load_word2vec_format(model_path, binary=True, unicode_errors='replace')
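unicode_errors='replace' substitutes malformed bytes instead of raising. If memory is also a concern, load_word2vec_format accepts an optional limit on the number of vectors to read; a sketch combining both (the limit value is illustrative):
w2v_model = KeyedVectors.load_word2vec_format(
    model_path, binary=True, unicode_errors='replace', limit=500000)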
|
ValueError: string size must be a multiple of element size while implementing Word2Vec
|
I am trying to implement Word2Vec but I'm getting this error:
ValueError: string size must be a multiple of element size
This is the code:
from gensim.models.keyedvectors import KeyedVectors
model_path = './data/GoogleNews-vectors-negative300.bin'
w2v_model = KeyedVectors.load_word2vec_format(model_path, binary=True)
The last line throws the error. Can somebody please help me find the cause of this issue?
|
[
"You should set unicode_errors='replace' in the last line:\nw2v_model = KeyedVectors.load_word2vec_format(model_path, binary=True, unicode_errors='replace')\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"word2vec",
"word_embedding"
] |
stackoverflow_0073149660_python_word2vec_word_embedding.txt
|
Q:
Delete row if next row has the same first value, python
I have an array that looks like this:
data([0.000, 1], [0.0025, 2], [0.0025, 3], [0.005, 5])
I need to delete [0.0025, 3], because it has the same first value as the one before.
I have tried:
for i in data:
if data[i, 0] == data[i+1,0]:
np.delete(data, (i+1), axis = 0)
But then I get the following Error:
IndexError: arrays used as indices must be of integer (or boolean) type,
Can somebody help me with that?
A:
input:
data = np.array([[0.000, 1], [0.0025, 2], [0.0025, 3], [0.005, 5]])
solution:
data = data[np.unique(data[:,0], return_index=True)[1]]
output:
array([[0.0e+00, 1.0e+00],
[2.5e-03, 2.0e+00],
[5.0e-03, 5.0e+00]])
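Note that np.unique sorts and keeps one row per distinct first value overall. If you only want to drop a row when its first value repeats the immediately preceding row's (as the title says), a boolean-mask sketch:
# keep a row when its first value differs from the previous row's first value
mask = np.concatenate(([True], data[1:, 0] != data[:-1, 0]))
data = data[mask]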
|
Delete row if next row has the same first value, python
|
I have an array that looks like this:
data([0.000, 1], [0.0025, 2], [0.0025, 3], [0.005, 5])
I need to delete [0.0025, 3], because it has the same first value as the one before.
I have tried:
for i in data:
if data[i, 0] == data[i+1,0]:
np.delete(data, (i+1), axis = 0)
But then I get the following Error:
IndexError: arrays used as indices must be of integer (or boolean) type,
Can somebody help me with that?
|
[
"input:\ndata = np.array([[0.000, 1], [0.0025, 2], [0.0025, 3], [0.005, 5]])\n\nsolution:\ndata = data[np.unique(data[:,0], return_index=True)[1]]\n\noutput:\narray([[0.0e+00, 1.0e+00],\n [2.5e-03, 2.0e+00],\n [5.0e-03, 5.0e+00]])\n\n"
] |
[
3
] |
[] |
[] |
[
"iteration",
"numpy",
"python"
] |
stackoverflow_0074547401_iteration_numpy_python.txt
|
Q:
Modify only a few bytes from a npz numpy file without rewriting the whole file
This works to write and load a numpy array + metadata in a .npz compressed file (here the compression is useless because it's random, but anyway):
import numpy as np
# save
D = {"x": np.random.random((10000, 1000)), "metadata": {"date": "20221123", "user": "bob", "name": "abc"}}
with open("test.npz", "wb") as f:
np.savez_compressed(f, **D)
# load
D2 = np.load("test.npz", allow_pickle=True)
print(D2["x"])
print(D2["metadata"].item()["date"])
Let's say we want to change only the metadata:
D["metadata"]["name"] = "xyz"
Is there a way to re-write to disk in test.npz only D["metadata"] and not the whole file because D["x"] has not changed?
In my case, the .npz file can be 100 MB to 4 GB large, that's why it would be interesting to rewrite only the metadata.
A:
Ultimately the solution that I could get to work (thus far) is the one I originally thought of with zipfile.
import zipfile
import os
from contextlib import contextmanager
@contextmanager
def archive_manager(archive_name: str, key: str):
f, s = zipfile.ZipFile(archive_name, "a"), f"{key}.npy"
yield s
f.write(s)
f.close()
os.remove(s)
Let's say we want to change metadata:
new_metadata = {"date": "20221123", "user": "bob", "name": "xyz"}
with archive_manager("test.npz", "metadata") as archive:
np.save(archive, new_metadata)
np.load returns an NpzFile, which is a lazy loader. However, NpzFile objects aren't directly writeable. We also cannot do something like D["metadata"] = new_metadata until D has been converted to a dict, and that loses the lazy functionality.
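A quick sanity check that the replacement took effect; note that appending leaves the old metadata.npy entry in the zip (so the archive grows slightly), and np.load should read the most recently appended entry with a given name:
D2 = np.load("test.npz", allow_pickle=True)
print(D2["metadata"].item()["name"])  # expected: 'xyz'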
|
Modify only a few bytes from a npz numpy file without rewriting the whole file
|
This works to write and load a numpy array + metadata in a .npz compressed file (here the compression is useless because it's random, but anyway):
import numpy as np
# save
D = {"x": np.random.random((10000, 1000)), "metadata": {"date": "20221123", "user": "bob", "name": "abc"}}
with open("test.npz", "wb") as f:
np.savez_compressed(f, **D)
# load
D2 = np.load("test.npz", allow_pickle=True)
print(D2["x"])
print(D2["metadata"].item()["date"])
Let's say we want to change only the metadata:
D["metadata"]["name"] = "xyz"
Is there a way to re-write to disk in test.npz only D["metadata"] and not the whole file because D["x"] has not changed?
In my case, the .npz file can be 100 MB to 4 GB large, that's why it would be interesting to rewrite only the metadata.
|
[
"Ultimately the solution that I could get to work (thus far) is the one I originally thought of with zipfile.\nimport zipfile\nimport os\nfrom contextlib import contextmanager\n\n@contextmanager\ndef archive_manager(archive_name: str, key: str):\n f, s = zipfile.ZipFile(archive_name, \"a\"), f\"{key}.npy\"\n\n yield s\n\n f.write(s)\n f.close()\n os.remove(s)\n\nLet's say we want to change metadata:\nnew_metadata = {\"date\": \"20221123\", \"user\": \"bob\", \"name\": \"xyz\"}\n\nwith archive_manager(\"test.npz\", \"metadata\") as archive:\n np.save(archive, new_metadata)\n\n\nnp.load returns an NpzFile, which is a lazy loader. However, NpzFile objects aren't directly writeable. We cannot also do something like D[\"metadata\"] = new_metadata until D has been converted to a dict, and that loses the lazy functionality.\n"
] |
[
1
] |
[] |
[] |
[
"npz_file",
"numpy",
"python",
"serialization"
] |
stackoverflow_0074544551_npz_file_numpy_python_serialization.txt
|
Q:
SparkContext Error - File not found /tmp/spark-events does not exist
Running a Python Spark Application via API call -
On submitting the Application - response - Failed
SSH into the Worker
My python application exists in
/root/spark/work/driver-id/wordcount.py
Error can be found in
/root/spark/work/driver-id/stderr
It shows the following error -
Traceback (most recent call last):
File "/root/wordcount.py", line 34, in <module>
main()
File "/root/wordcount.py", line 18, in main
sc = SparkContext(conf=conf)
File "/root/spark/python/lib/pyspark.zip/pyspark/context.py", line 115, in __init__
File "/root/spark/python/lib/pyspark.zip/pyspark/context.py", line 172, in _do_init
File "/root/spark/python/lib/pyspark.zip/pyspark/context.py", line 235, in _initialize_context
File "/root/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 1064, in __call__
File "/root/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.io.FileNotFoundException: File file:/tmp/spark-events does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:402)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:255)
at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:549)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:214)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
It indicates that /tmp/spark-events does not exist - which is true
However, in wordcount.py
from pyspark import SparkContext, SparkConf
... few more lines ...
def main():
conf = SparkConf().setAppName("MyApp").setMaster("spark://ec2-54-209-108-127.compute-1.amazonaws.com:7077")
sc = SparkContext(conf=conf)
sc.stop()
if __name__ == "__main__":
main()
A:
/tmp/spark-events is the location where Spark stores the event logs. Just create this directory on the master machine and you're set.
$mkdir /tmp/spark-events
$ sudo /root/spark-ec2/copy-dir /tmp/spark-events/
RSYNC'ing /tmp/spark-events to slaves...
ec2-54-175-163-32.compute-1.amazonaws.com
A:
While trying to set up my Spark history server on my local machine, I had the same 'File file:/tmp/spark-events does not exist.' error. I had customized my log directory to a non-default path. To resolve this, I needed to do 2 things.
edit $SPARK_HOME/conf/spark-defaults.conf
-- add these 2 lines
spark.history.fs.logDirectory /mycustomdir
spark.eventLog.enabled true
create a link from /tmp/spark-events to /mycustomdir.
ln -fs /tmp/spark-events /mycustomdir
Ideally, step 1 would have solved my issue entirely, but I still needed to create the link, so I suspect there might have been one other setting I missed. Anyhow, once I did this, I was able to run my history server and see new jobs logged in my web UI.
A:
Use spark.eventLog.dir for client/driver program
spark.eventLog.dir=/usr/local/spark/history
and use spark.history.fs.logDirectory for history server
spark.history.fs.logDirectory=/usr/local/spark/history
as mentioned in: How to enable spark-history server for standalone cluster non hdfs mode
At least as per Spark version 2.2.1
A:
I just created /tmp/spark-events on the {master} node and then distributed it to the other nodes in the cluster to make it work.
mkdir /tmp/spark-events
rsync -a /tmp/spark-events {slaves}:/tmp/spark-events
my spark-defaults.conf:
spark.history.ui.port=18080
spark.eventLog.enabled=true
spark.history.fs.logDirectory=hdfs:///home/elon/spark/events
A:
When I edited the two files spark-defaults.conf and spark-env.sh, the history server started.
spark-defaults.conf:
spark.eventLog.enabled true
spark.history.ui.port=18080
spark.history.fs.logDirectory={host}:{port}/directory
spark-env.sh
export SPARK_HISTORY_OPTS="
-Dspark.history.ui.port=18080
-Dspark.history.fs.logDirectory={host}:{port}/directory
-Dspark.history.retainedApplications=30"
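For the original Python driver, the same settings can also be supplied programmatically; a sketch, assuming /tmp/spark-events (or your custom path) already exists on the relevant machines:
conf = (SparkConf()
        .setAppName("MyApp")
        .set("spark.eventLog.enabled", "true")
        .set("spark.eventLog.dir", "file:///tmp/spark-events"))
sc = SparkContext(conf=conf)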
|
SparkContext Error - File not found /tmp/spark-events does not exist
|
Running a Python Spark Application via API call -
On submitting the Application - response - Failed
SSH into the Worker
My python application exists in
/root/spark/work/driver-id/wordcount.py
Error can be found in
/root/spark/work/driver-id/stderr
It shows the following error -
Traceback (most recent call last):
File "/root/wordcount.py", line 34, in <module>
main()
File "/root/wordcount.py", line 18, in main
sc = SparkContext(conf=conf)
File "/root/spark/python/lib/pyspark.zip/pyspark/context.py", line 115, in __init__
File "/root/spark/python/lib/pyspark.zip/pyspark/context.py", line 172, in _do_init
File "/root/spark/python/lib/pyspark.zip/pyspark/context.py", line 235, in _initialize_context
File "/root/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 1064, in __call__
File "/root/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.io.FileNotFoundException: File file:/tmp/spark-events does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:402)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:255)
at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:549)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:214)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
It indicates that /tmp/spark-events does not exist - which is true
However, in wordcount.py
from pyspark import SparkContext, SparkConf
... few more lines ...
def main():
conf = SparkConf().setAppName("MyApp").setMaster("spark://ec2-54-209-108-127.compute-1.amazonaws.com:7077")
sc = SparkContext(conf=conf)
sc.stop()
if __name__ == "__main__":
main()
|
[
"/tmp/spark-events is the location that Spark store the events logs. Just create this directory in the master machine and you're set.\n$mkdir /tmp/spark-events\n$ sudo /root/spark-ec2/copy-dir /tmp/spark-events/\nRSYNC'ing /tmp/spark-events to slaves...\nec2-54-175-163-32.compute-1.amazonaws.com\n\n",
"While trying to setup my spark history server on my local machine, I had the same 'File file:/tmp/spark-events does not exist.' error. I had customized my log directory to a non-default path. To resolve this, I needed to do 2 things. \n\nedit $SPARK_HOME/conf/spark-defaults.conf \n-- add these 2 lines\n\nspark.history.fs.logDirectory /mycustomdir\nspark.eventLog.enabled true\n\ncreate a link from /tmp/spark-events to /mycustomdir.\nln -fs /tmp/spark-events /mycustomdir\n\nIdeally, step 1 would have solved my issue entirely, but i still needed to create the link so I suspect there might have been one other setting i missed. Anyhow, once I did this, i was able to run my historyserver and see new jobs logged in my webui.\n\n",
"Use spark.eventLog.dir for client/driver program\nspark.eventLog.dir=/usr/local/spark/history\n\nand use spark.history.fs.logDirectory for history server\nspark.history.fs.logDirectory=/usr/local/spark/history\n\nas mentioned in: How to enable spark-history server for standalone cluster non hdfs mode\nAt least as per Spark version 2.2.1\n",
"I just created /tmp/spark-events on the {master} node and then distributed it to other nodes on the cluster to work.\nmkdir /tmp/spark-events\nrsync -a /tmp/spark-events {slaves}:/tmp/spark-events\n\nmy spark-default.conf:\nspark.history.ui.port=18080\nspark.eventLog.enabled=true\nspark.history.fs.logDirectory=hdfs:///home/elon/spark/events\n\n",
"when I try edit two files spark-default.conf spark_env.sh, and histroy-server starting.\nspark-default.conf:\nspark.eventLog.enabled true\nspark.history.ui.port=18080\nspark.history.fs.logDirectory={host}:{port}/directory\n\nspark_env.sh\nexport SPARK_HISTORY_OPTS=\"\n-Dspark.history.ui.port=18080\n-Dspark.history.fs.logDirectory={host}:{port}/directory\n-Dspark.history.retainedApplications=30\"\n\n"
] |
[
37,
9,
4,
1,
0
] |
[] |
[] |
[
"amazon_ec2",
"amazon_web_services",
"apache_spark",
"pyspark",
"python"
] |
stackoverflow_0038350249_amazon_ec2_amazon_web_services_apache_spark_pyspark_python.txt
|
Q:
Gigantic memory use in example pytorch program. Why?
I have been trying to debug a program using vast amounts of memory and have distilled it into the following example:
# Caution, use carefully, this can utilise all available memory on your computer
# and render it effectively unresponsive, to the point where you cannot access
# the shell to kill the process; thus requiring reboot.
import numpy as np
import collections
import torch
# q = collections.deque(maxlen=1500) # Uses around 6.4GB
# q = collections.deque(maxlen=3000) # Uses around 12GB
q = collections.deque(maxlen=5000) # Uses around 18GB
def f():
nparray = np.zeros([4,84,84], dtype=np.uint8)
q.append(nparray)
nparray1 = np.zeros([32,4,84,84], dtype=np.float32)
tens = torch.tensor(nparray1, dtype=torch.float32)
while True:
f()
Please note the cautionary message in the 1st line of this program. If you set maxlen to a level where it uses too much of your available RAM, it can crash your computer.
I measured the memory using top (VIRT column), and its memory use seems wildly excessive (details on the commented lines above). From previous experience in my original program if maxlen is high enough it will crash my computer.
Why is it using so much memory?
I calculate the increase in expected memory from maxlen=1500 to maxlen=3000 to be:
4 * 84 * 84 * 15000 / (1024**2) == 403MB.
But we see an increase of 6GB.
There seems to be some sort of interaction between using collections and the tensor allocation as commenting either out causes memory use to be expected; eg commenting out the tensor line leads to total memory use of 2GB which seems much more reasonable.
Thanks for any help or insight,
Julian.
A:
I think PyTorch stores and updates the computational graph each time you call f(), and thus the graph size just keeps getting bigger and bigger.
Can you try to free the memory by using del(tens) (deleting the reference to the variable after usage), and let me know how it works? (Found in the PyTorch documentation here: https://pytorch.org/docs/stable/notes/faq.html)
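A minimal sketch of that suggestion applied to the example; it only drops the Python reference, so whether the numbers in top change will tell you if the tensor's lifetime was the culprit:
def f():
    nparray = np.zeros([4, 84, 84], dtype=np.uint8)
    q.append(nparray)
    nparray1 = np.zeros([32, 4, 84, 84], dtype=np.float32)
    tens = torch.tensor(nparray1, dtype=torch.float32)
    del tens  # release the reference so the tensor's storage can be freed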
|
Gigantic memory use in example pytorch program. Why?
|
I have been trying to debug a program using vast amounts of memory and have distilled it into the following example:
# Caution, use carefully, this can utilise all available memory on your computer
# and render it effectively unresponsive, to the point where you cannot access
# the shell to kill the process; thus requiring reboot.
import numpy as np
import collections
import torch
# q = collections.deque(maxlen=1500) # Uses around 6.4GB
# q = collections.deque(maxlen=3000) # Uses around 12GB
q = collections.deque(maxlen=5000) # Uses around 18GB
def f():
nparray = np.zeros([4,84,84], dtype=np.uint8)
q.append(nparray)
nparray1 = np.zeros([32,4,84,84], dtype=np.float32)
tens = torch.tensor(nparray1, dtype=torch.float32)
while True:
f()
Please note the cautionary message in the 1st line of this program. If you set maxlen to a level where it uses too much of your available RAM, it can crash your computer.
I measured the memory using top (VIRT column), and its memory use seems wildly excessive (details on the commented lines above). From previous experience in my original program if maxlen is high enough it will crash my computer.
Why is it using so much memory?
I calculate the increase in expected memory from maxlen=1500 to maxlen=3000 to be:
4 * 84 * 84 * 15000 / (1024**2) == 403MB.
But we see an increase of 6GB.
There seems to be some sort of interaction between using collections and the tensor allocation as commenting either out causes memory use to be expected; eg commenting out the tensor line leads to total memory use of 2GB which seems much more reasonable.
Thanks for any help or insight,
Julian.
|
[
"I think PyTorch store and update the computational graph each time you call f(), and thus the graph-size just keeps getting bigger and bigger.\nCan you try to free the memory usage by using del(tens) (deleting the reference for the variable after usage), and let me know how it works? (found in PyTorch-documents here: https://pytorch.org/docs/stable/notes/faq.html)\n"
] |
[
0
] |
[] |
[] |
[
"python",
"pytorch"
] |
stackoverflow_0074547107_python_pytorch.txt
|
Q:
Split the single column to 4 different columns in Dataframe
I just need to split a single column of a dataframe into 4 different columns. I tried a few steps but they didn't work.
DATA1:
Dump
12525 2 153898 Winch
24798 1 147654 Gear
65116 4 Screw
46456 1 Rowing
46563 5 Nut
Expected1:
Item Qty Part_no Description
12525 2 153898 Winch
24798 1 147654 Gear
65116 4 Screw
46456 1 Rowing
46563 5 Nut
DATA2:
Dump
12525 2 153898 Winch Gear
24798 1 147654 Gear nuts
65116 X Screw bolts
46456 1 Rowing rings
46563 X Nut
Expected2:
Item Qty Part_no Description
12525 2 153898 Winch Gear
24798 1 147654 Gear nuts
65116 X Screw bolts
46456 1 Rowing rings
46563 X Nut
I tried the below code
data_df[['Item','Qty','Part_no','Description']] = data_df["Dump"].str.split(" ", 3, expand=True)
and got the output like
Item Qty Part_no Description
12525 2 153898 Winch
24798 1 147654 Gear
65116 4 Screw
46456 1 Rowing
46563 5 Nut
Any suggestions on how I can fix this?
A:
Use str.extract:
data_df[['Item','Qty','Part_no','Description']] = \
data_df['Dump'].str.extract(r'(\d+)\s+(\d+)\s+(\d*)\s*(\w+)')
Output:
Dump Item Qty Part_no Description
0 12525 2 153898 Winch 12525 2 153898 Winch
1 24798 1 147654 Gear 24798 1 147654 Gear
2 65116 4 Screw 65116 4 Screw
3 46456 1 Rowing 46456 1 Rowing
4 46563 5 Nut 46563 5 Nut
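Note that (\d+) for the quantity will not match the X rows in DATA2; a slightly more permissive pattern, sketched under the assumption that the quantity is always a single non-space token:
data_df[['Item','Qty','Part_no','Description']] = \
    data_df['Dump'].str.extract(r'(\d+)\s+(\S+)\s+(\d*)\s*(.+)')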
|
Split the single column to 4 different columns in Dataframe
|
I just need to split a single column of a dataframe into 4 different columns. I tried a few steps but they didn't work.
DATA1:
Dump
12525 2 153898 Winch
24798 1 147654 Gear
65116 4 Screw
46456 1 Rowing
46563 5 Nut
Expected1:
Item Qty Part_no Description
12525 2 153898 Winch
24798 1 147654 Gear
65116 4 Screw
46456 1 Rowing
46563 5 Nut
DATA2:
Dump
12525 2 153898 Winch Gear
24798 1 147654 Gear nuts
65116 X Screw bolts
46456 1 Rowing rings
46563 X Nut
Expected2:
Item Qty Part_no Description
12525 2 153898 Winch Gear
24798 1 147654 Gear nuts
65116 X Screw bolts
46456 1 Rowing rings
46563 X Nut
I tried the below code
data_df[['Item','Qty','Part_no','Description']] = data_df["Dump"].str.split(" ", 3, expand=True)
and got the output like
Item Qty Part_no Description
12525 2 153898 Winch
24798 1 147654 Gear
65116 4 Screw
46456 1 Rowing
46563 5 Nut
Any suggestions on how I can fix this?
|
[
"Use str.extract:\ndata_df[['Item','Qty','Part_no','Description']] = \\\ndata_df['Dump'].str.extract(r'(\\d+)\\s+(\\d+)\\s+(\\d*)\\s*(\\w+)')\n\nOutput:\n Dump Item Qty Part_no Description\n0 12525 2 153898 Winch 12525 2 153898 Winch\n1 24798 1 147654 Gear 24798 1 147654 Gear\n2 65116 4 Screw 65116 4 Screw\n3 46456 1 Rowing 46456 1 Rowing\n4 46563 5 Nut 46563 5 Nut\n\n"
] |
[
2
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074547463_dataframe_pandas_python.txt
|
Q:
How does the logic behind this piece of code work?
I have typed out the following code.
la = [1,[2,[3,[4]]]]
lb = [la[1], la[1][1]]
print(la)
lb[0][1]=9
print(la)
I was expecting la to remain as in the original first line, but it changed as shown below.
[1, [2, [3, [4]]]]
[1, [2, 9]]
Does this have to do with shallow and deep copy? I can't seem to wrap my head around what's going on. Apologies for the formatting, I'm trying to fix it.
A:
So the answer gives you the solution for how to properly copy data in your case, but it does not explain what happens. I commented your script to give an insight into what you are doing:
A comment on notation:
ptr(X) means a reference to somewhere in memory which I call X.
X is an un-named address, if you will.
Don't look too much into the words "pointer" and "reference", as I have only used them to explain a concept, not as strict keywords like one might think if coming from C++ or similar.
la = [1,[2,[3,[4]]]]
# la = [ 1 , ptr(X) ]
# X = [ 2 , ptr(Y) ]
# Y = [ 3 , ptr(Z) ]
# Z = [ 4 ]
print(la)
lb = [la[1], la[1][1]]
# lb = [ ptr(X) , ptr(Y) ]
lb[0][1]=9
# lb[0] is ptr(X)
# lb[0][1] is X[1] is ptr(Y)
# lb[0][1] = 9 --> X = [2,9]
# now if we look at how la is defined
# la = [ 1 , ptr(X) ]
# X = [ 2 , ptr(Y) ]
# meaning that now X is [2,9] and ptr(X) points to [2,9],
# so la is
# la = [ 1 , ptr(X) ]
# X = [ 2 , 9 ]
# so if we print la we get
# [1,[2,9]]
# et voila
print(la)
A:
You should use copy() in your case:
lb = [la[1].copy(), la[1][1].copy()]
Assignment statements in Python do not copy objects, they create bindings between a target and an object.
See doc.
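If the nested lists contain further lists that you also want to be independent, copy.deepcopy is the safer sketch, since .copy() only copies one level deep:
import copy

la = [1, [2, [3, [4]]]]
lb = copy.deepcopy([la[1], la[1][1]])
lb[0][1] = 9
print(la)  # [1, [2, [3, [4]]]] -- unchanged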
|
How does the logic behind this piece of code work?
|
I have typed out the following code.
la = [1,[2,[3,[4]]]]
lb = [la[1], la[1][1]]
print(la)
lb[0][1]=9
print(la)
I was expecting la to remain as in the original first line, but it changed as shown below.
[1, [2, [3, [4]]]]
[1, [2, 9]]
Does this have to do with shallow and deep copy? I can't seem to wrap my head around what's going on. Apologies for the formatting, I'm trying to fix it.
|
[
"So the answer gives you the solution to how to properly copy data in your case, but it does not explains what happens. I commented your script to get an insight into what you are doing:\nA comment on notation:\nptr(X) means a reference to somewhere in memory which I call X.\nX is a un-named address, if you will.\nDon't look to much into the words \"pointer\" and \"reference\" as I have only used them to explain a concept, not as strict keywords like one might think if coming from C++ or similar.\nla = [1,[2,[3,[4]]]]\n\n# la = [ 1 , ptr(X) ]\n# X = [ 2 , ptr(Y) ]\n# Y = [ 3 , ptr(Z) ]\n# Z = [ 4 ]\nprint(la)\n\n\nlb = [la[1], la[1][1]]\n\n# lb = [ ptr(X) , ptr(Y) ]\n\nlb[0][1]=9\n\n# lb[0] is ptr(X)\n# lb[0][1] is X[1] is ptr(Y)\n# lb[0][1] = 9 --> X = [2,9]\n\n# now if we look at how la is defined\n# la = [ 1 , ptr(X) ]\n# X = [ 2 , ptr(Y) ]\n\n# meaning that now X is [2,9] and ptr(X) points to [2,9], \n# so la is\n\n# la = [ 1 , ptr(X) ]\n# X = [ 2 , 9 ]\n\n# so if we print la we get\n# [1,[2,9]]\n\n# et voila\nprint(la)\n\n",
"You should use copy() in your case:\nlb = [la[1].copy(), la[1][1].copy()]\n\n\nAssignment statements in Python do not copy objects, they create bindings between a target and an object.\n\nSee doc.\n"
] |
[
2,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074547471_python.txt
|
Q:
Write a program to read through the mbox-short.txt and figure out the distribution by hour of the day for each of the messages
10.2 Write a program to read through the mbox-short.txt and figure out the distribution by hour of the day for each of the messages. You can pull the hour out from the 'From ' line by finding the time and then splitting the string a second time using a colon.
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
Once you have accumulated the counts for each hour, print out the counts, sorted by hour as shown below.
My Code:
fname = input("Enter file:")
fhandle = open(fname)
dic={}
for line in fhandle:
if not line.startswith("From "):
continue
else:
line=line.split()
line=line[5] # accessing the list using the index and splitting it
line=line[0:2]
for bline in line:
dic[bline]=dic.get(bline,0)+1 # Using this line we created a dictionary having keys and values
#Now it's time to access the dictionary and sort in some way.
lst=[]
for k1,v1 in dic.items(): # the items method is used to access the dictionary's key-value pairs
lst.append((k1,v1)) # append the dictionary's keys and corresponding values to lst
lst.sort() # sort lst; the sorting is done by key
#print(lst)
for k1,v1 in lst: # we can unpack key-value pairs from this list, since it was built by appending the dictionary's items
print(k1,v1)
#print(dic)
#print(dic)
Desired Output:
04 3
06 1
07 1
09 2
10 3
11 6
14 1
15 2
16 4
17 2
18 1
19 1
My Output:
(screenshot of the program's output, omitted)
I don't understand what's going wrong.
A:
Working code. I broke the code down into as simple a form as I could, so it will be easy for you to understand.
d = dict()
lst = list()
fname = input('enter the file name : ')
try:
fopen = open(fname,'r')
except:
print('wrong file name !!!')
for line in fopen:
stline = line.strip()
if stline.startswith('From:'):
continue
elif stline.startswith('From'):
spline = stline.split()
time = spline[5]
tsplit = time.split(':')
t1 = tsplit[0].split()
for t in t1:
if t not in d:
d[t] = 1
else:
d[t] = d[t] + 1
for k,v in d.items():
lst.append((k,v))
lst = sorted(lst)
for k,v in lst:
print(k,v)
A:
name = input("Enter file:")
if len(name) < 1 : name = "mbox-short.txt"
handle = open(name)
emailcount = dict()
for line in handle:
if not line.startswith("From "): continue
line = line.split()
line = line[1]
emailcount[line] = emailcount.get(line, 0) +1
bigcount = None
bigword = None
for word,count in emailcount.items():
if bigcount == None or count > bigcount:
bigcount = count
bigword = word
print(bigword, bigcount)
A:
name = input("Enter file:")
if len(name) < 1 : name = "mbox-short.txt"
handle = open(name)
counts = {}
for line in handle:
word = line.split()
if len(word) < 3 or word[0] != "From" : continue
full_hour = word[5]
hour = full_hour.split(":")
hour = str(hour[:1])
hour = hour[2:4]
if hour in counts :
counts[hour] = 1 + counts[hour]
else :
counts.update({hour:1})
lst = list()
for k, v in counts.items():
new_tup = (k, v)
lst.append(new_tup)
lst = sorted(lst)
for k, v in lst:
print(k,v)
A:
counts=dict()
fill=open("mbox-short.txt")
for line in fill :
if line.startswith("From "):
x=line.split()
b=x[5]
y=b.split(":")
f=y[0]
counts[f]=counts.get(f,0)+1
l=list()
for k,v in counts.items():
l.append((k,v))
l.sort()
for k,v in l:
print(k,v)
A:
I listened carefully in the online lessons; this is my code based on what I learned in class. I think it will be easy for you to understand.
fn = input('Please enter file: ')
if len(fn) < 1: fn = 'mbox-short.txt'
hand = open(fn)
di = dict()
for line in hand:
ls = line.strip()
wds = line.split()
if 'From' in wds and len(wds) > 2:
hours = ls.split()
hour = hours[-2].split(':')
ch = hour[0]
di[ch] = di.get(ch, 0) + 1
tmp = list()
for h,t in di.items():
newt = (h,t)
tmp.append(newt)
tmp = sorted(tmp)
for h,t in tmp:
print(h,t)
A:
name = input("Enter file:")
if len(name) < 1:
name = "mbox-short.txt"
handle = open(name)
counts = dict()
for line in handle:
line = line.strip()
if not line.startswith("From ") : continue
line = line.split()
hr = line[5].split(":")
hr = hr[0:1]
for piece in hr:
counts[piece] = counts.get(piece,0) + 1
lst = list()
for k,v in counts.items():
lst.append((k,v))
lst = sorted(lst)
for k,v in lst:
print(k,v)
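A more compact variant of the same idea, sketched with collections.Counter (assuming mbox-short.txt is in the working directory):
from collections import Counter

counts = Counter()
with open("mbox-short.txt") as handle:
    for line in handle:
        if line.startswith("From "):
            hour = line.split()[5].split(":")[0]  # e.g. '09' from '09:14:16'
            counts[hour] += 1

for hour, count in sorted(counts.items()):
    print(hour, count)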
|
Write a program to read through the mbox-short.txt and figure out the distribution by hour of the day for each of the messages
|
10.2 Write a program to read through the mbox-short.txt and figure out the distribution by hour of the day for each of the messages. You can pull the hour out from the 'From ' line by finding the time and then splitting the string a second time using a colon.
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
Once you have accumulated the counts for each hour, print out the counts, sorted by hour as shown below.
My Code:
fname = input("Enter file:")
fhandle = open(fname)
dic={}
for line in fhandle:
if not line.startswith("From "):
continue
else:
line=line.split()
line=line[5] # accessing the list using the index and splitting it
line=line[0:2]
for bline in line:
dic[bline]=dic.get(bline,0)+1 # Using this line we created a dictionary having keys and values
#Now it's time to access the dictionary and sort in some way.
lst=[]
for k1,v1 in dic.items(): # the items method is used to access the dictionary's key-value pairs
lst.append((k1,v1)) # append the dictionary's keys and corresponding values to lst
lst.sort() # sort lst; the sorting is done by key
#print(lst)
for k1,v1 in lst: # we can unpack key-value pairs from this list, since it was built by appending the dictionary's items
print(k1,v1)
#print(dic)
#print(dic)
Desired Output:
04 3
06 1
07 1
09 2
10 3
11 6
14 1
15 2
16 4
17 2
18 1
19 1
My Output:
(screenshot of the program's output, omitted)
I don't understand what's going wrong.
|
[
"Working Code. Break down the code in to simple form as much i can. So it will be easy to understand for you.\nd = dict()\nlst = list()\n\nfname = input('enter the file name : ')\ntry:\n fopen = open(fname,'r')\nexcept:\n print('wrong file name !!!')\n\nfor line in fopen:\n \n stline = line.strip()\n \n if stline.startswith('From:'):\n continue\n elif stline.startswith('From'):\n spline = stline.split()\n \n time = spline[5]\n tsplit = time.split(':')\n \n t1 = tsplit[0].split()\n \n for t in t1:\n if t not in d:\n d[t] = 1\n else:\n d[t] = d[t] + 1\n\nfor k,v in d.items():\n lst.append((k,v))\nlst = sorted(lst)\n\nfor k,v in lst:\n print(k,v)\n\n\n\n\n\n\n \n\n\n\n\n \n\n",
" name = input(\"Enter file:\")\nif len(name) < 1 : name = \"mbox-short.txt\"\nhandle = open(name)\nemailcount = dict()\nfor line in handle:\n if not line.startswith(\"From \"): continue\n line = line.split()\n line = line[1]\n emailcount[line] = emailcount.get(line, 0) +1\nbigcount = None\nbigword = None\nfor word,count in emailcount.items():\n if bigcount == None or count > bigcount:\n bigcount = count\n bigword = word\nprint(bigword, bigcount)\n\n",
"name = input(\"Enter file:\")\nif len(name) < 1 : name = \"mbox-short.txt\"\nhandle = open(name)\n\ncounts = {}\nfor line in handle:\n word = line.split()\n if len(word) < 3 or word[0] != \"From\" : continue\n full_hour = word[5]\n hour = full_hour.split(\":\")\n hour = str(hour[:1])\n hour = hour[2:4]\n if hour in counts :\n counts[hour] = 1 + counts[hour]\n else :\n counts.update({hour:1})\nlst = list()\nfor k, v in counts.items():\n new_tup = (k, v)\n lst.append(new_tup)\n \nlst = sorted(lst) \nfor k, v in lst:\n print(k,v)\n\n",
"counts=dict()\nfill=open(\"mbox-short.txt\")\nfor line in fill :\n if line.startswith(\"From \"):\n x=line.split()\n b=x[5]\n y=b.split(\":\")\n f=y[0]\n counts[f]=counts.get(f,0)+1\nl=list()\nfor k,v in counts.items():\n l.append((k,v))\nl.sort()\nfor k,v in l:\n print(k,v)\n\n",
"I listen carefully in the online lessons, This is my code by what i learned in class. I think it will be easy for you to understand.\nfn = input('Please enter file: ')\nif len(fn) < 1: fn = 'mbox-short.txt'\nhand = open(fn)\n\ndi = dict()\n\nfor line in hand:\n ls = line.strip()\n wds = line.split()\n if 'From' in wds and len(wds) > 2:\n\n hours = ls.split()\n hour = hours[-2].split(':')\n ch = hour[0]\n\n di[ch] = di.get(ch, 0) + 1\n\ntmp = list()\nfor h,t in di.items():\n newt = (h,t)\n tmp.append(newt)\n\ntmp = sorted(tmp)\nfor h,t in tmp:\n print(h,t)\n\n",
"name = input(\"Enter file:\")\nif len(name) < 1:\n name = \"mbox-short.txt\"\nhandle = open(name)\n\ncounts = dict()\n\nfor line in handle:\n line = line.strip()\n if not line.startswith(\"From \") : continue\n line = line.split()\n hr = line[5].split(\":\")\n hr = hr[0:1]\n for piece in hr:\n counts[piece] = counts.get(piece,0) + 1\n\n\nlst = list()\n\nfor k,v in counts.items():\n lst.append((k,v))\nlst = sorted(lst)\n\nfor k,v in lst:\n print(k,v)\n\n"
] |
[
0,
0,
0,
0,
0,
0
] |
[
"fname = input(\"Enter file:\")\nfhandle = open(fname)\ndic={}\nfor line in fhandle:\n if not line.startswith('From '):\n continue\n else:\n line=line.split()\n line=line[5] # accesing the list using index and splitting it\n line=line.split(':')\n bline=line[0]\n #for bline in line:\n #print(bline)\n dic[bline]=dic.get(bline,0)+1 # Using this line we created a \ndictionary having keys and values\n#Now it's time to access the dictionary and sort in some way.\nlst=[]\nfor k1,v1 in dic.items(): # dictionary er key value pair access korar jonno items method use kora hoyechhe\n lst.append((k1,v1)) # dictionary er keys and corresponding values ke lst te append korlam\nlst.sort() #lst take sort korlam. sorting is done through key\n#print(lst)\nfor k1,v1 in lst: # we are able to access this list using key value pair as it was basically a dictionary before, It is just appended\n print(k1,v1)\n\n#print(dic)\n #print(dic)\n\n"
] |
[
-1
] |
[
"arrays",
"dictionary",
"list",
"python",
"tuples"
] |
stackoverflow_0062247502_arrays_dictionary_list_python_tuples.txt
|
Q:
Data where cells have factors separated by comma
There is a data frame with cells that have factors ('At a pub', 'At home' ...) that are separated by a comma and are not the same for each cell. See the picture below (how Excel sees the CSV file):
How can I separate each factor into a column so that the same factors would be in the same column and blanks for others - create dummy columns? I have many of these columns for different types of drinks (vodka, gin and others)
There are possible tools such as R, Python and Power BI at my disposal.
Tried simple MS Excel and some of its commands and capabilities.
A:
This can also be done in base R (plus reshape2) or in data.table, but here's a working dplyr/tidyr flow to get what you think you need.
DF <- data.frame(id=1:2, text=c("Pubs in the old town,At a club", "House party,Pubs in the old town"))
library(dplyr)
library(tidyr) # unnest, pivot_wider
DF %>%
mutate(text = strsplit(text, ","), fake = 1) %>%
unnest(text) %>%
pivot_wider(id, names_from = "text", values_from = "fake") %>%
mutate(across(-id, Negate(is.na)))
# # A tibble: 2 x 4
# id `Pubs in the old town` `At a club` `House party`
# <int> <lgl> <lgl> <lgl>
# 1 1 TRUE TRUE FALSE
# 2 2 TRUE FALSE TRUE
This won't deal correctly with similar phrases that are off by spelling or spacing; for that, you may need to go into NLP or similar to reduce a phrase to comparable states. There is room for more conditioning of the data here (e.g., trimws, reducing repeated inner-spaces, case differences) that might mitigate some of those concerns; for this, you may want to do the conditioning in the long form before pivot_wider(.), since you'll have one column's values to fix, not column names.
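Since Python was also listed as an available tool, a rough pandas equivalent of the same idea, using sample data mirroring the R example above (column names assumed):
import pandas as pd

df = pd.DataFrame({"id": [1, 2],
                   "text": ["Pubs in the old town,At a club",
                            "House party,Pubs in the old town"]})
# one boolean dummy column per comma-separated factor
dummies = df["text"].str.get_dummies(sep=",").astype(bool)
print(pd.concat([df[["id"]], dummies], axis=1))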
|
Data where cells have factors separated by comma
|
There is a data frame with cells that have factors ('At a pub', 'At home' ...) that are separated by a comma and are not the same for each cell. See the picture below (how Excel sees the CSV file):
How can I separate each factor into a column so that the same factors would be in the same column and blanks for others - create dummy columns? I have many of these columns for different types of drinks (vodka, gin and others)
There are possible tools such as R, Python and Power BI at my disposal.
Tried simple MS Excel and some of its commands and capabilities.
|
[
"This can also be done in base R (plus reshape2) or in data.table as well, but here's a working premise for flow to get what you think you need.\nDF <- data.frame(id=1:2, text=c(\"Pubs in the old town,At a club\", \"House party,Pubs in the old town\"))\nlibrary(dplyr)\nlibrary(tidyr) # unnest, pivot_wider\nDF %>%\n mutate(text = strsplit(text, \",\"), fake = 1) %>%\n unnest(text) %>%\n pivot_wider(id, names_from = \"text\", values_from = \"fake\") %>%\n mutate(across(-id, Negate(is.na)))\n# # A tibble: 2 x 4\n# id `Pubs in the old town` `At a club` `House party`\n# <int> <lgl> <lgl> <lgl> \n# 1 1 TRUE TRUE FALSE \n# 2 2 TRUE FALSE TRUE \n\nThis won't deal correctly with similar phrases that are off by spelling or spacing; for that, you may need to go into NLP or similar to reduce a phrase to comparable states. There is room for more conditioning of the data here (e.g., trimws, reducing repeated inner-spaces, case differences) that might mitigate some of those concerns; for this, you may want to do the conditioning in the long form before pivot_wider(.), since you'll have one column's values to fix, not column names.\n"
] |
[
1
] |
[] |
[] |
[
"data_manipulation",
"dataframe",
"powerbi",
"python",
"r"
] |
stackoverflow_0074547494_data_manipulation_dataframe_powerbi_python_r.txt
|
Q:
Fade widgets out using animation to transition to a screen
I want to implement a button (already has a custom class) that when clicked, fades out all the widgets on the existing screen before switching to another layout (implemented using QStackedLayout)
I've looked at various PySide6 documentation and guides on how to animate fading in/out, but nothing seems to be working. I'm not sure what is wrong with the code per se, but I've done some debugging and the animation class is acting on the widget.
I assume that to make the widget fade, I had to create a QGraphicsOpacityEffect with the top-level widget as the parent, then add the QPropertyAnimation for it to work.
main.py
# Required Libraries for PySide6/Qt
from PySide6.QtWidgets import QWidget, QApplication, QPushButton, QVBoxLayout, QHBoxLayout, QLabel, QSizePolicy, QMainWindow, QSystemTrayIcon, QStackedLayout, QGraphicsOpacityEffect
from PySide6.QtGui import QIcon, QPixmap, QFont, QLinearGradient, QPainter, QColor
from PySide6.QtCore import Qt, QPointF, QSize, QVariantAnimation, QAbstractAnimation, QEasingCurve, QPropertyAnimation, QTimer
# For changing the taskbar icon
import ctypes
import platform
# For relative imports
import sys
sys.path.append('../classes')
# Classes and Different Windows
from classes.getStartedButton import getStartedButton
class window(QMainWindow):
# Set up core components of window
def __init__(self,h,w):
# Gets primary parameters of the screen that the window will display in
self.height = h
self.width = w
super().__init__()
# Required to change taskbar icon
if platform.system() == "Windows":
ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID(
"com.random.appID")
# Multiple screens will overlay each other
self.layout = QStackedLayout()
# Init splash screen - this is first
self.splashScreen()
# Function to init other screens
self.initScreens()
# Must create main widget to hold the stacked layout, or another window will appear
main_widget_holder = QWidget()
main_widget_holder.setLayout(self.layout)
self.setCentralWidget(main_widget_holder)
def initScreens(self):
apiScreen = QWidget()
another_button = QPushButton("Test")
another_layout = QVBoxLayout()
another_layout.addWidget(another_button)
apiScreen.setLayout(another_layout)
self.layout.addWidget(apiScreen)
# Window definition for splash screen
def splashScreen(self):
"""Window that displays the splash screen
"""
# Widget that holds all the widgets in the splash screen
self.placeholder = QWidget()
# Logo & Title Component
logo = QLabel("")
logo.setPixmap(QPixmap("image.png"))
# Align logo on right side of the split
logo.setAlignment(Qt.AlignRight | Qt.AlignVCenter)
logo.setStyleSheet(
"border-right:3px solid white;background-color: rgb(22,22,22);")
title = QLabel("Another\nApp")
title.setStyleSheet("padding-left:2px; font-size:36px")
# Header to hold title and logo
header_layout = QHBoxLayout()
header_layout.addWidget(logo)
header_layout.addWidget(title)
header = QWidget()
header.setStyleSheet("margin-bottom:20px")
# Assign header_layout to the header widget
header.setLayout(header_layout)
# Button Component
button = getStartedButton("Get Started ")
# Set max width of the button to cover both text and logo
button.setMaximumWidth(self.placeholder.width())
button.clicked.connect(self.transition_splash)
# Vertical Layout from child widget components
title_scrn = QVBoxLayout()
title_scrn.addWidget(header)
title_scrn.addWidget(button)
# Define alignment to be vertically and horizontal aligned (Prevents button from appearing on the bottom of the app as well)
title_scrn.setAlignment(Qt.AlignCenter)
# Enlarge the default window size of 640*480 to something bigger (add 50px on all sides)
self.placeholder.setLayout(title_scrn)
self.placeholder.setObjectName("self.placeholder")
self.placeholder.setMinimumSize(
self.placeholder.width()+100, self.placeholder.height()+100)
self.setCentralWidget(self.placeholder)
# Grey/Black Background
self.setStyleSheet(
"QMainWindow{background-color: rgb(22,22,22);}QLabel{color:white} #button{padding:25px; border-radius:15px;background: qlineargradient(x1:0, y1:0,x2:1,y2:1,stop: 0 #00dbde, stop:1 #D600ff); border:1px solid white}")
self.setMinimumSize(self.width/3*2,self.height/3*2)
self.layout.addWidget(self.placeholder)
def transition_splash(self):
opacityEffect = QGraphicsOpacityEffect(self.placeholder)
self.placeholder.setGraphicsEffect(opacityEffect)
animationEffect = QPropertyAnimation(opacityEffect, b"opacity")
animationEffect.setStartValue(1)
animationEffect.setEndValue(0)
animationEffect.setDuration(2500)
animationEffect.start()
timer = QTimer()
timer.singleShot(2500,self.change_layout)
def change_layout(self):
self.layout.setCurrentIndex(1)
# Initialise program
if __name__ == "__main__":
app = QApplication([])
page = window(app.primaryScreen().size().height(),app.primaryScreen().size().width())
page.show()
sys.exit(app.exec())
button.py
# Required Libraries for PySide6/Qt
from PySide6.QtWidgets import QWidget, QApplication, QPushButton, QVBoxLayout, QHBoxLayout, QLabel, QSizePolicy, QMainWindow, QSystemTrayIcon, QStackedLayout, QGraphicsOpacityEffect
from PySide6.QtGui import QIcon, QPixmap, QFont, QLinearGradient, QPainter, QColor
from PySide6.QtCore import Qt, QPointF, QSize, QVariantAnimation, QAbstractAnimation, QEasingCurve
# Custom PushButton for splash screen
class getStartedButton(QPushButton):
getStartedButtonColorStart = QColor(0, 219, 222)
getStartedButtonColorInt = QColor(101, 118, 255)
getStartedButtonColorEnd = QColor(214, 0, 255)
def __init__(self, text):
super().__init__()
self.setText(text)
# Setting ID so that it can be used in CSS
self.setObjectName("button")
self.setStyleSheet("font-size:24px")
# Button Animation
self.getStartedButtonAnimation = QVariantAnimation(
self, startValue=0.42, endValue=0.98, duration=300)
self.getStartedButtonAnimation.valueChanged.connect(
self.animate_button)
self.getStartedButtonAnimation.setEasingCurve(QEasingCurve.InOutCubic)
def enterEvent(self, event):
self.getStartedButtonAnimation.setDirection(QAbstractAnimation.Forward)
self.getStartedButtonAnimation.start()
# Suppression of event type error
try:
super().enterEvent(event)
except TypeError:
pass
def leaveEvent(self, event):
self.getStartedButtonAnimation.setDirection(
QAbstractAnimation.Backward)
self.getStartedButtonAnimation.start()
# Suppression of event type error
try:
super().enterEvent(event)
except TypeError:
pass
def animate_button(self, value):
grad = "background-color: qlineargradient(x1:0, y1:0, x2:1, y2:1, stop:0 {startColor}, stop:{value} {intermediateColor}, stop: 1.0 {stopColor});font-size:24px".format(
startColor=self.getStartedButtonColorStart.name(), intermediateColor=self.getStartedButtonColorInt.name(), stopColor=self.getStartedButtonColorEnd.name(), value=value
)
self.setStyleSheet(grad)
For context, I've looked at other questions already on SO and other sites such as
How to change the opacity of a PyQt5 window
https://www.pythonguis.com/tutorials/pyside6-animated-widgets/
A:
Not sure what is wrong with the code
The QPropertyAnimation object is destroyed before it gets a chance to start your animation. Your question has already been solved here.
To make it work, you must persist the object:
def transition_splash(self):
opacityEffect = QGraphicsOpacityEffect(self.placeholder)
self.placeholder.setGraphicsEffect(opacityEffect)
self.animationEffect = QPropertyAnimation(opacityEffect, b"opacity")
self.animationEffect.setStartValue(1)
self.animationEffect.setEndValue(0)
self.animationEffect.setDuration(2500)
self.animationEffect.start()
# Use the finished signal instead of the QTimer
self.animationEffect.finished.connect(self.change_layout)
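As a design note, connecting to the animation's finished signal (rather than a parallel QTimer) keeps the layout switch in sync with the fade even if the duration is later changed, and avoids maintaining two matching 2500 ms values by hand.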
|
Fade widgets out using animation to transition to a screen
|
I want to implement a button (already has a custom class) that when clicked, fades out all the widgets on the existing screen before switching to another layout (implemented using QStackedLayout)
I've looked at various PySide6 documentation and guides on how to animate fading in/out, but nothing seems to be working. I'm not sure what is wrong with the code per se, but I've done some debugging and the animation class is acting on the widget.
I assume that to make the widget fade, I had to create a QGraphicsOpacityEffect with the top-level widget as the parent, then add the QPropertyAnimation for it to work.
main.py
# Required Libraries for PySide6/Qt
from PySide6.QtWidgets import QWidget, QApplication, QPushButton, QVBoxLayout, QHBoxLayout, QLabel, QSizePolicy, QMainWindow, QSystemTrayIcon, QStackedLayout, QGraphicsOpacityEffect
from PySide6.QtGui import QIcon, QPixmap, QFont, QLinearGradient, QPainter, QColor
from PySide6.QtCore import Qt, QPointF, QSize, QVariantAnimation, QAbstractAnimation, QEasingCurve, QPropertyAnimation, QTimer
# For changing the taskbar icon
import ctypes
import platform
# For relative imports
import sys
sys.path.append('../classes')
# Classes and Different Windows
from classes.getStartedButton import getStartedButton
class window(QMainWindow):
# Set up core components of window
def __init__(self,h,w):
# Gets primary parameters of the screen that the window will display in
self.height = h
self.width = w
super().__init__()
# Required to change taskbar icon
if platform.system() == "Windows":
ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID(
"com.random.appID")
# Multiple screens will overlay each other
self.layout = QStackedLayout()
# Init splash screen - this is first
self.splashScreen()
# Function to init other screens
self.initScreens()
# Must create main widget to hold the stacked layout, or another window will appear
main_widget_holder = QWidget()
main_widget_holder.setLayout(self.layout)
self.setCentralWidget(main_widget_holder)
def initScreens(self):
apiScreen = QWidget()
another_button = QPushButton("Test")
another_layout = QVBoxLayout()
another_layout.addWidget(another_button)
apiScreen.setLayout(another_layout)
self.layout.addWidget(apiScreen)
# Window definition for splash screen
def splashScreen(self):
"""Window that displays the splash screen
"""
# Widget that holds all the widgets in the splash screen
self.placeholder = QWidget()
# Logo & Title Component
logo = QLabel("")
logo.setPixmap(QPixmap("image.png"))
# Align logo on right side of the split
logo.setAlignment(Qt.AlignRight | Qt.AlignVCenter)
logo.setStyleSheet(
"border-right:3px solid white;background-color: rgb(22,22,22);")
title = QLabel("Another\nApp")
title.setStyleSheet("padding-left:2px; font-size:36px")
# Header to hold title and logo
header_layout = QHBoxLayout()
header_layout.addWidget(logo)
header_layout.addWidget(title)
header = QWidget()
header.setStyleSheet("margin-bottom:20px")
# Assign header_layout to the header widget
header.setLayout(header_layout)
# Button Component
button = getStartedButton("Get Started ")
# Set max width of the button to cover both text and logo
button.setMaximumWidth(self.placeholder.width())
button.clicked.connect(self.transition_splash)
# Vertical Layout from child widget components
title_scrn = QVBoxLayout()
title_scrn.addWidget(header)
title_scrn.addWidget(button)
# Define alignment to be vertically and horizontal aligned (Prevents button from appearing on the bottom of the app as well)
title_scrn.setAlignment(Qt.AlignCenter)
# Enlarge the default window size of 640*480 to something bigger (add 50px on all sides)
self.placeholder.setLayout(title_scrn)
self.placeholder.setObjectName("self.placeholder")
self.placeholder.setMinimumSize(
self.placeholder.width()+100, self.placeholder.height()+100)
self.setCentralWidget(self.placeholder)
# Grey/Black Background
self.setStyleSheet(
"QMainWindow{background-color: rgb(22,22,22);}QLabel{color:white} #button{padding:25px; border-radius:15px;background: qlineargradient(x1:0, y1:0,x2:1,y2:1,stop: 0 #00dbde, stop:1 #D600ff); border:1px solid white}")
self.setMinimumSize(self.width/3*2,self.height/3*2)
self.layout.addWidget(self.placeholder)
def transition_splash(self):
opacityEffect = QGraphicsOpacityEffect(self.placeholder)
self.placeholder.setGraphicsEffect(opacityEffect)
animationEffect = QPropertyAnimation(opacityEffect, b"opacity")
animationEffect.setStartValue(1)
animationEffect.setEndValue(0)
animationEffect.setDuration(2500)
animationEffect.start()
timer = QTimer()
timer.singleShot(2500,self.change_layout)
def change_layout(self):
self.layout.setCurrentIndex(1)
# Initialise program
if __name__ == "__main__":
app = QApplication([])
page = window(app.primaryScreen().size().height(),app.primaryScreen().size().width())
page.show()
sys.exit(app.exec())
button.py
# Required Libraries for PySide6/Qt
from PySide6.QtWidgets import QWidget, QApplication, QPushButton, QVBoxLayout, QHBoxLayout, QLabel, QSizePolicy, QMainWindow, QSystemTrayIcon, QStackedLayout, QGraphicsOpacityEffect
from PySide6.QtGui import QIcon, QPixmap, QFont, QLinearGradient, QPainter, QColor
from PySide6.QtCore import Qt, QPointF, QSize, QVariantAnimation, QAbstractAnimation, QEasingCurve
# Custom PushButton for splash screen
class getStartedButton(QPushButton):
getStartedButtonColorStart = QColor(0, 219, 222)
getStartedButtonColorInt = QColor(101, 118, 255)
getStartedButtonColorEnd = QColor(214, 0, 255)
def __init__(self, text):
super().__init__()
self.setText(text)
# Setting ID so that it can be used in CSS
self.setObjectName("button")
self.setStyleSheet("font-size:24px")
# Button Animation
self.getStartedButtonAnimation = QVariantAnimation(
self, startValue=0.42, endValue=0.98, duration=300)
self.getStartedButtonAnimation.valueChanged.connect(
self.animate_button)
self.getStartedButtonAnimation.setEasingCurve(QEasingCurve.InOutCubic)
def enterEvent(self, event):
self.getStartedButtonAnimation.setDirection(QAbstractAnimation.Forward)
self.getStartedButtonAnimation.start()
# Suppression of event type error
try:
super().enterEvent(event)
except TypeError:
pass
def leaveEvent(self, event):
self.getStartedButtonAnimation.setDirection(
QAbstractAnimation.Backward)
self.getStartedButtonAnimation.start()
# Suppression of event type error
try:
super().enterEvent(event)
except TypeError:
pass
def animate_button(self, value):
grad = "background-color: qlineargradient(x1:0, y1:0, x2:1, y2:1, stop:0 {startColor}, stop:{value} {intermediateColor}, stop: 1.0 {stopColor});font-size:24px".format(
startColor=self.getStartedButtonColorStart.name(), intermediateColor=self.getStartedButtonColorInt.name(), stopColor=self.getStartedButtonColorEnd.name(), value=value
)
self.setStyleSheet(grad)
For context, I've looked at other questions already on SO and other sites such as
How to change the opacity of a PyQt5 window
https://www.pythonguis.com/tutorials/pyside6-animated-widgets/
|
[
"\nNot sure what is wrong with the code\n\nThe QPropertyAnimation object is destroyed before it gets a chance to start your animation. Your question has already been solved here.\nTo make it work, you must persist the object:\ndef transition_splash(self):\n opacityEffect = QGraphicsOpacityEffect(self.placeholder)\n self.placeholder.setGraphicsEffect(opacityEffect)\n self.animationEffect = QPropertyAnimation(opacityEffect, b\"opacity\")\n self.animationEffect.setStartValue(1)\n self.animationEffect.setEndValue(0)\n self.animationEffect.setDuration(2500)\n self.animationEffect.start()\n\n # Use the finished signal instead of the QTimer\n self.animationEffect.finished.connect(self.change_layout)\n\n"
] |
[
1
] |
[] |
[] |
[
"opacity",
"pyside6",
"python",
"qt",
"qwidget"
] |
stackoverflow_0074532883_opacity_pyside6_python_qt_qwidget.txt
|
Q:
Converting dot to png in python
I have a dot file generated from my code and want to render it in my output. For this I have seen on the net that the command on cmd is something like this for Graphviz:
dot -Tpng InputFile.dot -o OutputFile.png
But my problem is that I want to do this from within my Python program.
How can I do so?
I looked at pydot but can't seem to find an answer in there.
A:
Load the file with pydot.graph_from_dot_file to get a pydot.Dot class instance. Then write it to a PNG file with the write_png method.
import pydot
(graph,) = pydot.graph_from_dot_file('somefile.dot')
graph.write_png('somefile.png')
A:
pydot needs the GraphViz binaries to be installed anyway, so if you've already generated your dot file you might as well just invoke dot directly yourself. For example:
from subprocess import check_call
check_call(['dot','-Tpng','InputFile.dot','-o','OutputFile.png'])
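If you want a clearer failure when Graphviz isn't on the PATH, a small sketch using only the standard library (the file names are placeholders carried over from the answer above):
import shutil
from subprocess import check_call

if shutil.which('dot') is None:
    raise RuntimeError('Graphviz "dot" executable not found on PATH')
check_call(['dot', '-Tpng', 'InputFile.dot', '-o', 'OutputFile.png'])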
A:
You can use pygraphviz. Once you have a graph loaded, you can do
graph.draw('file.png', prog='dot')
(passing prog tells pygraphviz which layout engine to run).
A:
You can try:
import os
os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/'
os.system('dot -Tpng random.dot -o random.png')
A:
You can use graphviz:
# Convert a .dot file to .png
from graphviz import render
render('dot', 'png', 'fname.dot')
# To render an existing file in a notebook
from graphviz import Source
Source.from_file("fname.dot")
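For what it's worth, render returns the path of the file it wrote, so a quick sketch to confirm the output location (for the call above the result is typically 'fname.dot.png'):
from graphviz import render

output_path = render('dot', 'png', 'fname.dot')
print(output_path)  # typically 'fname.dot.png'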
A:
1st Solution)
Going further with the approach of @Mauricio Carrero by setting the PATH inside the script (the same PATH set in the environment variables does not have this effect!):
import os
import pydotplus
from sklearn.tree import export_graphviz
os.environ['PATH'] = os.environ['PATH']+';' + r'C:\Users\Admin\Anaconda3\Library\bin\graphviz'
# first export the dot file only if needed
export_graphviz(clf, out_file=filename + ".dot", feature_names = feature_names)
# now generate the dot_data again
dot_data = export_graphviz(clf, out_file=None, feature_names = feature_names)
graph = pydotplus.graphviz.graph_from_dot_data(dot_data)
graph.write_png(filename + "_gv.png")
This made it possible to save the dot_data to png. Choose your own local paths; you might also have installed Graphviz in C:/Program Files (x86)/Graphviz2.38/bin/.
This solution also came from Sarunas answer here:
https://datascience.stackexchange.com/questions/37428/graphviz-not-working-when-imported-inside-pydotplus-graphvizs-executables-not
2nd Solution)
You can also avoid the error
Exception has occurred: InvocationException
GraphViz's executables not found
by simply giving it what it wants, as it asks for the executables of the graphviz object:
graph = pydotplus.graphviz.graph_from_dot_data(dot_data)
# graph is now a new Dot object!
# That is why we need to set_graphviz_executables for every new instance
# This cannot be set globally but must be set again and again
# that is why the PATH solution (1st Solution) above seems much better to me
# see the docs in https://pydotplus.readthedocs.io/reference.html
pathCur = 'C:\\Program Files (x86)\\Graphviz2.38\\bin\\'
graph.set_graphviz_executables({'dot': pathCur+'dot.exe', 'twopi': pathCur +'twopi.exe', 'neato': pathCur+'neato.exe', 'circo': pathCur+'circo.exe', 'fdp': pathCur+'fdp.exe'})
graph.write_png(filename + "_gv.png")
p.s:
These 2 approaches were the only solutions working for me after 2 hours of calibrating erroneous installations and full uninstall/reinstall cycles, all varieties of PATH variables, external and internal Graphviz installation, python-graphviz, pygraphviz, and all of the solutions I could find here, or in
Convert decision tree directly to png
or in
https://datascience.stackexchange.com/questions/37428/graphviz-not-working-when-imported-inside-pydotplus-graphvizs-executables-not?newreg=a789aadc5d4b4975949afadd3919fe55
For conda python-graphviz, I got constant installation errors like
InvalidArchiveError('Error with archive C:\\Users\\Admin\\Anaconda3\\pkgs\\openssl-1.1.1d-he774522_20ffr2kor\\pkg-openssl-1.1.1d-he774522_2.tar.zst. You probably need to delete and re-download or re-create this file. Message from libarchive was:\n\nCould not unlink')
For conda install graphviz, I got
InvalidArchiveError('Error with archive C:\\Users\\Admin\\Anaconda3\\pkgs\\openssl-1.1.1d-he774522_21ww0bpcs\\pkg-openssl-1.1.1d-he774522_2.tar.zst. You probably need to delete and re-download or re-create this file. Message from libarchive was:\n\nCould not unlink')
pygraphviz needs MS Visual C++ which I did not want to install:
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/
In the end, no guide was working to really set the PATH variables correctly except for the 1st Solution approach.
A:
Here is the code that worked for me on a Windows 11 system, using a PyCharm terminal with a venv Python 3.8 interpreter.
from graphviz import render
import os
os.environ['PATH'] = os.environ['PATH']+';' + r"C:\Program Files\Graphviz\bin" #find binaries of graphviz and add to path
render('dot','png','classes.dot')
this will create a classes.dot.png file (I don't know how to fix the name, but it is a png file you can open)
The classes dot was generated on terminal using
pyreverse package_path
pyreverse comes with pylint.
Installations:
pip install pylint
(install pylint only if you are creating classes.dot file)
pip install graphviz
A:
This would create a graph 'a' to 'b' and save it as a png file.
code:
from graphviz import Digraph

dot = Digraph(format='png')
dot.node('a')
dot.node('b')
dot.edge('a', 'b')
dot.render('sample')  # writes the DOT source 'sample' plus 'sample.png'
Q:
How to avoid "RuntimeError: dictionary changed size during iteration" error?
I have a dictionary of lists in which some of the values are empty:
d = {'a': [1], 'b': [1, 2], 'c': [], 'd':[]}
At the end of creating these lists, I want to remove these empty lists before returning my dictionary. I tried doing it like this:
for i in d:
if not d[i]:
d.pop(i)
but I got a RuntimeError. I am aware that you cannot add/remove elements in a dictionary while iterating through it...what would be a way around this then?
See Modifying a Python dict while iterating over it for citations that this can cause problems, and why.
A:
In Python 3.x and 2.x you can use list to force a copy of the keys to be made:
for i in list(d):
In Python 2.x calling keys made a copy of the keys that you could iterate over while modifying the dict:
for i in d.keys():
But note that in Python 3.x this second method doesn't help with your error, because keys returns a view object instead of copying the keys into a list.
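Applied to the dictionary from the question, the list approach is a one-line change:
d = {'a': [1], 'b': [1, 2], 'c': [], 'd': []}
for i in list(d):  # iterate over a snapshot of the keys
    if not d[i]:
        d.pop(i)
print(d)  # {'a': [1], 'b': [1, 2]}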
A:
You only need to use copy:
This way you iterate over a copy of the keys and can change the original dict d on the fly.
It works on every Python version, so it's clearer.
In [1]: d = {'a': [1], 'b': [1, 2], 'c': [], 'd':[]}
In [2]: for i in d.copy():
...: if not d[i]:
...: d.pop(i)
...:
In [3]: d
Out[3]: {'a': [1], 'b': [1, 2]}
(BTW - Generally to iterate over copy of your data structure, instead of using .copy for dictionaries or slicing [:] for lists, you can use import copy -> copy.copy (for shallow copy which is equivalent to copy that is supported by dictionaries or slicing [:] that is supported by lists) or copy.deepcopy on your data structure.)
A:
Just use dictionary comprehension to copy the relevant items into a new dict:
>>> d
{'a': [1], 'c': [], 'b': [1, 2], 'd': []}
>>> d = {k: v for k, v in d.items() if v}
>>> d
{'a': [1], 'b': [1, 2]}
For this in Python 2:
>>> d
{'a': [1], 'c': [], 'b': [1, 2], 'd': []}
>>> d = {k: v for k, v in d.iteritems() if v}
>>> d
{'a': [1], 'b': [1, 2]}
A:
This worked for me:
d = {1: 'a', 2: '', 3: 'b', 4: '', 5: '', 6: 'c'}
for key, value in list(d.items()):
if value == '':
del d[key]
print(d)
# {1: 'a', 3: 'b', 6: 'c'}
Casting the dictionary items to list creates a list of its items, so you can iterate over it and avoid the RuntimeError.
A:
I would try to avoid inserting empty lists in the first place, but, would generally use:
d = {k: v for k,v in d.iteritems() if v} # re-bind to non-empty
If prior to 2.7:
d = dict( (k, v) for k,v in d.iteritems() if v )
or just:
empty_keys = [k for k, v in d.iteritems() if not v]
for k in empty_keys:
    del d[k]
A:
For Python 3:
{k:v for k,v in d.items() if v}
A:
to avoid "dictionary changed size during iteration error".
for example : "when you try to delete some key" ,
just use 'list' with '.items()' , and here is a simple example :
my_dict = {
'k1':1,
'k2':2,
'k3':3,
'k4':4
}
print(my_dict)
for key, val in list(my_dict.items()):
if val == 2 or val == 4:
my_dict.pop(key)
print(my_dict)
+++
output :
{'k1': 1, 'k2': 2, 'k3': 3, 'k4': 4}
{'k1': 1, 'k3': 3}
+++
This is just an example; change it based on your case/requirements. I hope it is helpful.
A:
You cannot iterate through a dictionary while it is changing during a for loop. Cast the keys to a list and iterate over that list; it works for me.
for key in list(d):
if not d[key]:
d.pop(key)
A:
Python 3 does not allow deletion while iterating (with a for loop as above) over a dictionary. There are various alternatives; one simple way is to change the following line
for i in x.keys():
with
for i in list(x):
A:
The reason for the runtime error is that you cannot iterate through a data structure while its structure is changing during iteration.
One way to achieve what you are looking for is to use a list to collect the keys you want to remove, and then use the pop function on the dictionary to remove each identified key while iterating through that list.
d = {'a': [1], 'b': [1, 2], 'c': [], 'd':[]}
pop_list = []
for i in d:
if not d[i]:
pop_list.append(i)
for x in pop_list:
d.pop(x)
print (d)
A:
For situations like this, I like to make a deep copy and loop through that copy while modifying the original dict.
If the lookup field is within a list, you can enumerate in the for loop over the list and then use the position as an index to access the field in the original dict.
A:
dictc={"stName":"asas"}
keys=dictc.keys()
for key in list(keys):
dictc[key.upper()] ='New value'
print(str(dictc))
A:
If the values in the dictionary are unique too, then I used this solution:
keyToBeDeleted = None
for k, v in mydict.items():
    if v == match:
        keyToBeDeleted = k
        break
mydict.pop(keyToBeDeleted, None)
Q:
flask_mysqldb - MYSQL cursor is throwing an error - cursor is not a known member of "None"
I am writing a simple auth service in python using flask and flask_mysqldb. There is an error with the cursor.
import os
import jwt
from flask import Flask, request
from flask_mysqldb import MySQL
server = Flask(__name__)
mysql = MySQL(server)
# server configuration
server.config["MYSQL_HOST"] = os.environ.get("MYSQL_HOST")
server.config["MYSQL_USER"] = os.environ.get("MYSQL_USER")
server.config["MYSQL_PASSWORD"] = os.environ.get("MYSQL_PASSWORD")
server.config["MYSQL_DB"] = os.environ.get("MYSQL_DB")
server.config["MYSQL_PORT"] = os.environ.get("MYSQL_PORT")
# print(server.config["MYSQL_HOST"])
# print(server.config["MYSQL_PORT"])
@server.route("/login", methods=["POST"])
def login():
auth = request.authorization
if not auth:
return "missing credentials",401
#check db for username and password
cur = mysql.connection.cursor()
res = cur.execute(
    "SELECT email, password FROM user WHERE email=%s",
    (auth.username,),
)
This works on a virtual environment. All the specified packages are correctly installed.
A:
Please try
cur = mysql.connect.cursor()
If you use connection, it will not suggest cursor(); once you use connect.cursor(), it will not show the error. Thanks.
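For what it's worth, the "cursor is not a known member of 'None'" message usually comes from a static type checker (e.g. Pylance) rather than from Python itself, because flask_mysqldb types mysql.connection as optional. A minimal sketch of a runtime guard that also narrows the type for the checker (the 500 response is an assumption, not from the original code):
@server.route("/login", methods=["POST"])
def login():
    auth = request.authorization
    if not auth:
        return "missing credentials", 401
    conn = mysql.connection
    if conn is None:  # narrows the Optional type for the checker
        return "database unavailable", 500
    cur = conn.cursor()
    cur.execute(
        "SELECT email, password FROM user WHERE email=%s",
        (auth.username,),
    )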
Q:
Retrieve first names from a list of names in pandas
I have a dataset with pubmed articles, and a column with the authors of each article, like this:
DOI Fullnames
0 10.1016/0022-1759(96)00092-0 B I Korelitz, S C Sommers
1 10.1038/jhg.2017.16 Avi Saskin 1 , Vanessa Fulginiti 1 , Ashley ...
2 10.1007/s00415-005-0964-z M Tiberio 1 , D T Chard, D R Altmann, G Davie...
3 10.1111/ene.13789 L Bonzano 1 , M Bove 2 , M P Sormani 3 , M ...
4 10.1038/s41598-018-19303-3 Dilek Yonar 1 , Levent Ocek 2 , Bedile Irem ...
5 10.1016/j.yebeh.2016.06.023 Klajdi Puka 1 , Luc Rubinger 2 , Carol Chan ...
6 10.3389/fnins.2019.00618 Paola Valsasina 1 , Milagros Hidalgo de la Cr...
7 10.5152/iao.2018.4467 Teruo Toi 1 , Yasuyuki Nomura 1 , Akihiro Ki...
8 10.1038/cdd.2016.71 Q Yang 1 , C Zheng 1 , J Cao 1 , G Cao 1 ,...
9 10.1002/j.2048-7940.2003 Alexa K Stuifbergen 1 , Tracie Culp Harrison
My aim is to obtain a column with a list of the authors' first names, to then apply a gender guesser and obtain a list of the authors' genders. In order to do so I did the following:
Obtain a column with a clean list of authors for each article
# First create a column with clean list of authors
df['Authorlist']=''
for index, row in df.iterrows():
try:
Fullnames=df.loc[index, 'Fullnames']
authors=list(Fullnames.split(","))
Allauthors = []
for author in authors:
author=''.join([i for i in author if not i.isdigit()])
author=author.rstrip()
author=author.lstrip()
Allauthors.append(author)
df.loc[index, 'Authorlist']=Allauthors
except:
pass
To try to impute names for authors listed with just the first letter of their first name, I create a list of authors for which the first name is available, so that later, when I encounter an incomplete name, I can look in the list and see if the author appears elsewhere with a complete first name.
# Create a list of authors with available first names
Authorswithfullnameslist=[]
for index, row in df.iterrows():
try:
#Fullnames=df.loc[index, 'Authorlist']
for author in df.loc[index, 'Authorlist']:
firstname=author.split()[0]
if len(firstname) > 1:
Authorswithfullnameslist.append(author)
except:
pass
Up to this point everything is working fine. Then, when I try to create a column with a list of first names, things don't go well; as you can see, the final result is not correct and I don't understand where I am wrong.
# Create a column with clean list of names
df['Nameslist']=''
for index, row in df.iterrows():
try:
Names=[]
for author in df.loc[index, 'Authorlist']:
firstname=author.split()[0]
surname=author.split()[-1]
if len(firstname) > 1:
Names.append(firstname)
elif len(firstname) < 2:
for completeauthor in Authorswithfullnameslist:
if surname == completeauthor.split()[-1]:
firstnamecorrect = completeauthor.split()[0]
if firstnamecorrect[0] == firstname[0]:
Names.append(firstnamecorrect)
df.loc[index, 'Nameslist']=Names
except:
pass
Fullnames Authorlist Nameslist
0 B I Korelitz, S C Sommers [B I Korelitz, S C Sommers] []
1 Avi Saskin 1 , Vanessa Fulginiti 1 , Ashley ... [Avi Saskin, Vanessa Fulginiti, Ashley H Birch... [Avi, Vanessa, Ashley, Yannis]
2 M Tiberio 1 , D T Chard, D R Altmann, G Davie...[M Tiberio, D T Chard, D R Altmann, G Davies, ... [Daniel, Mary, Alan, Alan, Aiko, Alan, David, ...
3 L Bonzano 1 , M Bove 2 , M P Sormani 3 , M ... [L Bonzano, M Bove, M P Sormani, M L Stromillo... [Maria, Maria, Maria, Maria, Antonio, Antonio,...
4 Dilek Yonar 1 , Levent Ocek 2 , Bedile Irem ... [Dilek Yonar, Levent Ocek, Bedile Irem Tiftikc... [Dilek, Levent, Bedile, Yasar, Feride]
5 Klajdi Puka 1 , Luc Rubinger 2 , Carol Chan ... [Klajdi Puka, Luc Rubinger, Carol Chan, Mary L... [Klajdi, Luc, Carol, Mary, Elysa]
6 Paola Valsasina 1 , Milagros Hidalgo de la Cr...[Paola Valsasina, Milagros Hidalgo de la Cruz,... [Paola, Milagros, Massimo, Maria]
7 Teruo Toi 1 , Yasuyuki Nomura 1 , Akihiro Ki... [Teruo Toi, Yasuyuki Nomura, Akihiro Kishino, ... [Teruo, Yasuyuki, Akihiro, Shuntaro, Takeshi, ...
8 Q Yang 1 , C Zheng 1 , J Cao 1 , G Cao 1 ,... [Q Yang, C Zheng, J Cao, G Cao, P Shou, L Lin,... [Ying, Yi, Ying, Yu]
9 Alexa K Stuifbergen 1 , Tracie Culp Harrison [Alexa K Stuifbergen, Tracie Culp Harrison] [Alexa, Tracie]
Could you help me understand where I am wrong, or suggest another way of achieving the result?
A:
I suppose you say the results are incorrect because of the 8th row of your dataframe. You can see there that all the first names start with Y, so this shows you the potential failure mode of your method.
In the 8th row most of the authors are Chinese (looking at the surnames) and Chinese surnames are not very varied. So you matched the surname several times, and because the author's real first name started with Y, you saved all the completeauthor first names that start with Y.
Potential remedy
Once you add a name from Authorswithfullnameslist, exit the matching loop to prevent more matches. Clearly, this will only add the first surname match found, which may not be correct. However, your method is not guaranteed to be correct from the start, so hopefully this at least prevents you from adding more than one first name per author.
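A minimal sketch of that remedy, reusing the variable names from the question (only the inner matching loop is shown):
for completeauthor in Authorswithfullnameslist:
    if surname == completeauthor.split()[-1]:
        firstnamecorrect = completeauthor.split()[0]
        if firstnamecorrect[0] == firstname[0]:
            Names.append(firstnamecorrect)
            break  # stop after the first plausible match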
Q:
List Directories and get the name of the Directory
I am trying to get the code to list all the directories in a folder, change directory into that folder and get the name of the current folder. The code I have so far is below and isn't working at the minute. I seem to be getting the parent folder name.
import os
for directories in os.listdir(os.getcwd()):
dir = os.path.join('/home/user/workspace', directories)
os.chdir(dir)
current = os.path.dirname(dir)
new = str(current).split("-")[0]
print new
I also have other files in the folder but I do not want to list them. I have tried the below code but I haven't got it working yet either.
for directories in os.path.isdir(os.listdir(os.getcwd())):
Can anyone see where I am going wrong?
Thanks
Got it working, but it seems a bit roundabout.
import os
os.chdir('/home/user/workspace')
all_subdirs = [d for d in os.listdir('.') if os.path.isdir(d)]
for dirs in all_subdirs:
dir = os.path.join('/home/user/workspace', dirs)
os.chdir(dir)
current = os.getcwd()
new = str(current).split("/")[4]
print new
A:
This will print all the subdirectories of the current directory:
print [name for name in os.listdir(".") if os.path.isdir(name)]
I'm not sure what you're doing with split("-"), but perhaps this code will help you find a solution?
If you want the full pathnames of the directories, use abspath:
print [os.path.abspath(name) for name in os.listdir(".") if os.path.isdir(name)]
Note that these pieces of code will only get the immediate subdirectories. If you want sub-sub-directories and so on, you should use walk as others have suggested.
A:
import os
# 'top' is the path of the directory tree you want to walk
for root, dirs, files in os.walk(top, topdown=False):
    for name in dirs:
        print os.path.join(root, name)
Walk is a good built-in for what you are doing.
A:
I liked the answer of @RichieHindle, but I'll add a small fix to it:
import os
folder = './my_folder'
sub_folders = [name for name in os.listdir(folder) if os.path.isdir(os.path.join(folder, name))]
print(sub_folders)
Otherwise it doesn't really work for me.
A:
You seem to be using Python as if it were the shell. Whenever I've needed to do something like what you're doing, I've used os.walk()
For example, as explained here: [x[0] for x in os.walk(directory)] should give you all of the subdirectories, recursively.
A:
Listing the entries in the current directory (for directories in os.listdir(os.getcwd()):) and then interpreting those entries as subdirectories of an entirely different directory (dir = os.path.join('/home/user/workspace', directories)) is one thing that looks fishy.
A:
Slight correction for python3 (same answer as @RichieHindle)
This will print all the subdirectories of the current directory in an array:
print( [name for name in os.listdir(".") if os.path.isdir(name)] )
To make the above simpler to read
for name in os.listdir("."):
if os.path.isdir(name):
print(name)
If you want the full pathnames of the directories, use abspath:
print( [os.path.abspath(name) for name in os.listdir(".") if os.path.isdir(name)])
Note that these pieces of code will only get the immediate subdirectories.
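On Python 3.4+, a pathlib-based sketch does the same thing and reads well (the workspace path is taken from the question):
from pathlib import Path

workspace = Path('/home/user/workspace')
subdirs = [p.name for p in workspace.iterdir() if p.is_dir()]
print(subdirs)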
Q:
What is an Object in Python?
I am surprised that my question was not asked (worded like the above) before. I am hoping that someone could break down this basic term "object" in the context of an OOP language like Python, explained in a way that a beginner like myself would be able to grasp.
When I typed my question on Google, the first post that appears is found here.
This is the definition:
An object is created using the constructor of the class. This object will then be called the instance of the class.
Wikipedia defines it like this:
An object is an instance of a Class. Objects are an abstraction. They hold both data, and ways to manipulate the data. The data is usually not visible outside the object.
I am hoping someone could help to break down this vital concept for me, or kindly point me to more resources. Thanks!
A:
Everything is an object
An object is a fundamental building block of an object-oriented language. Integers, strings, floating point numbers, even arrays and dictionaries, are all objects. More specifically, any single integer or any single string is an object. The number 12 is an object, the string "hello, world" is an object, a list is an object that can hold other objects, and so on. You've been using objects all along and may not even realize it.
Objects have types
Every object has a type, and that type defines what you can do with the object. For example, the int type defines what happens when you add something to an int, what happens when you try to convert it to a string, and so on.
Conceptually, if not literally, another word for type is class. When you define a class, you are in essence defining your own type. Just like 12 is an instance of an integer, and "hello world" is an instance of a string, you can create your own custom type and then create instances of that type. Each instance is an object.
Classes are just custom types
Most programs that go beyond just printing a string on the display need to manage something more than just numbers and strings. For example, you might be writing a program that manipulates pictures, like photoshop. Or, perhaps you're creating a competitor to iTunes and need to manipulate songs and collections of songs. Or maybe you are writing a program to manage recipes.
A single picture, a single song, or a single recipe are each an object of a particular type. The only difference is, instead of your object being a type provided by the language (eg: integers, strings, etc), it is something you define yourself.
A:
To go deep, you need to understand the Python data model.
But if you want a glossy stackoverflow cheat sheet, let's start with a dictionary. (In order to avoid circular definitions, let's just agree that at a minimum, a dictionary is a mapping of keys to values. In this case, we can even say the keys are definitely strings.)
def some_method():
return 'hello world'
some_dictionary = {
"a_data_key": "a value",
"a_method_key": some_method,
}
An object, then, is such a mapping, with some additional syntactic sugar that allows you to access the "keys" using dot notation.
Now, there's a lot more to it than that. (In fact, if you want to understand this beyond python, I recommend The Art of the Metaobject Protocol.) You have to follow up with "but what is an instance?" and "how can you do things like iterate on entries in a dictionary like that?" and "what's a type system"? Some of this is addressed in Skam's fine answer.
We can talk about the Python dunder methods, and how they are basically a protocol for implementing native behaviors like sized objects (things with length), comparable types (x < y), iterable types, etc.
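As a tiny illustration of that protocol, a sketch of a class that implements __len__ and therefore works with the built-in len (the Playlist name is made up for the example):
class Playlist:
    def __init__(self, songs):
        self._songs = list(songs)

    def __len__(self):  # makes the object "sized"
        return len(self._songs)

mix = Playlist(['song a', 'song b'])
print(len(mix))  # 2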
But since the question is basically PhD-level broad, I think I'll leave my answer terribly reductive and see if you want to constrain the question.
A:
I think that the O.P. has a very good question. Object is a word that should come with a definition. I found this one that made things simpler for me. Maybe it will help you have a general concept of what an object is.
Objects: Objects are an encapsulation of variables and functions into a single entity. Objects get their variables and functions from classes. Classes are essentially a template to create your objects.
See: https://www.learnpython.org/en/Classes_and_Objects for further information.
This might not be a complete definition but it is a concept I can understand and work with.
A:
In the context of Python and all other Object Oriented Programming (OOP) languages, objects have two main characteristics: state and behavior.
You can think of a constructor as a factory that creates an instance of an object with state and behavior.
State - Any instance or class variables associated to that object.
Behavior - Any instance or class methods
The following is an example of a class, in Python, to illustrate some of these concepts.
class Dog:
SOUND = 'woof'
def __init__(self, name):
"""Creates a new instance of the Dog class.
This is the constructor in Python.
The underscores are pronounced dunder so this function is called
dunder init.
"""
# this is an instance variable.
# every time you instantiate an object (call the constructor)
# you must provide a name for the dog
self._name = name
def name(self):
"""Gets the name of the dog."""
return self._name
@classmethod
def bork(cls):
"""Makes the noise Dogs do.
Look past the @classmethod as this is a more advanced feature of Python.
Just know that this is how you would create a class method in Python.
This is a little hairy.
"""
print(cls.SOUND)
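For example, a short sketch of instantiating the class above and exercising its state and behavior:
rex = Dog('Rex')   # the constructor builds a new instance (an object)
print(rex.name())  # state: prints 'Rex'
Dog.bork()         # behavior: prints 'woof'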
Though I agree with the comments that this question is a little vague (please be a little more specific), the answer I've provided is a quick overview of classes and objects in Python.
A:
Your question was perfectly fine, nothing to apologize for. Some people are just arrogant.
Your question being broad is the only way to interpret that you're actually asking what the basic definition of "object" is; and in this case, for Python, the answer is "everything" < and that's not confusing at all.
Every building block is an object (any variable, function, tuple, string, etc). It's literally like having a Lego Set to build a Star Wars big ship and asking "hey, the manual says to be careful not break any piece; so, what is a piece?" < the answer would be; everything, every single individual element of that Lego Set is a piece.
Legit question.
Best,
A:
In OOP, an object is a very simple yet complicated concept that people don't understand at first. I also had trouble understanding it at first, but once you clearly understand what a class is, it's easy to understand what an object is. If I am an object, then my class is "Homo sapiens". Let us learn more about classes.
Class and Object:
A class is a blueprint that we create to get objects with some specific characteristics. Let us take the example of a fruit, an apple. The apple falls into the class named Fruits, and it will have some attributes associated with it: taste, smell, shape, etc. So we can say that the apple is an object of the class Fruits, carrying the attributes that class describes.
Now, let us take a technical example. If a programmer defines a class and creates an object of that class, the object will have all the attributes and methods that the particular class defines.
For further knowledge of classes and objects, you can use the official Python documentation, which I am linking here:
https://docs.python.org/3/tutorial/classes.html
Hope I was a little helpful to you.
Have a great day.
A:
The documentation says the following:
Objects are Python’s abstraction for data. All data in a Python
program is represented by objects or by relations between objects. (In
a sense, and in conformance to Von Neumann’s model of a “stored
program computer”, code is also represented by objects.)
Every object has an identity, a type and a value. An object’s identity
never changes once it has been created; you may think of it as the
object’s address in memory. The ‘is’ operator compares the identity of
two objects; the id() function returns an integer representing its
identity.
|
[
"Everything is an object\nAn object is a fundamental building block of an object-oriented language. Integers, strings, floating point numbers, even arrays and dictionaries, are all objects. More specifically, any single integer or any single string is an object. The number 12 is an object, the string \"hello, world\" is an object, a list is an object that can hold other objects, and so on. You've been using objects all along and may not even realize it. \nObjects have types\nEvery object has a type, and that type defines what you can do with the object. For example, the int type defines what happens when you add something to an int, what happens when you try to convert it to a string, and so on. \nConceptually, if not literally, another word for type is class. When you define a class, you are in essence defining your own type. Just like 12 is an instance of an integer, and \"hello world\" is an instance of a string, you can create your own custom type and then create instances of that type. Each instance is an object.\nClasses are just custom types\nMost programs that go beyond just printing a string on the display need to manage something more than just numbers and strings. For example, you might be writing a program that manipulates pictures, like photoshop. Or, perhaps you're creating a competitor to iTunes and need to manipulate songs and collections of songs. Or maybe you are writing a program to manage recipes. \nA single picture, a single song, or a single recipe are each an object of a particular type. The only difference is, instead of your object being a type provided by the language (eg: integers, strings, etc), it is something you define yourself. \n",
"To go deep, you need to understand the Python data model.\nBut if you want a glossy stackoverflow cheat sheet, let's start with a dictionary. (In order to avoid circular definitions, let's just agree that at a minimum, a dictionary is a mapping of keys to values. In this case, we can even say the keys are definitely strings.)\ndef some_method():\n return 'hello world'\n\nsome_dictionary = {\n \"a_data_key\": \"a value\",\n \"a_method_key\": some_method,\n}\n\nAn object, then, is such a mapping, with some additional syntactic sugar that allows you to access the \"keys\" using dot notation.\nNow, there's a lot more to it than that. (In fact, if you want to understand this beyond python, I recommend The Art of the Metaobject Protocol.) You have to follow up with \"but what is an instance?\" and \"how can you do things like iterate on entries in a dictionary like that?\" and \"what's a type system\"? Some of this is addressed in Skam's fine answer.\nWe can talk about the python dunder methods, and how they are basically a protocol to implementing native behaviors like sized (things with length), comparable types (x < y), iterable types, etc.\nBut since the question is basically PhD-level broad, I think I'll leave my answer terribly reductive and see if you want to constrain the question.\n",
"I think that the O.P. has a very good question. Object is a word that should come with a definition. I found this one that made things simpler for me. Maybe it will help you have a general concept of what an object is.\nObjects: Objects are an encapsulation of variables and functions into a single entity. Objects get their variables and functions from classes. Classes are essentially a template to create your objects.\nSee: https://www.learnpython.org/en/Classes_and_Objects for further information.\nThis might not be a complete definition but it is a concept I can understand and work with.\n",
"In the context of Python and all other Object Oriented Programming (OOP) languages, objects have two main characteristics: state and behavior.\nYou can think of a constructor as a factory that creates an instance of an object with state and behavior.\nState - Any instance or class variables associated to that object.\nBehavior - Any instance or class methods\nThe following is an example of a class, in Python, to illustrate some of these concepts.\nclass Dog:\n SOUND = 'woof'\n def __init__(self, name):\n \"\"\"Creates a new instance of the Dog class.\n\n This is the constructor in Python.\n The underscores are pronounced dunder so this function is called\n dunder init.\n \"\"\" \n # this is an instance variable.\n # every time you instantiate an object (call the constructor)\n # you must provide a name for the dog\n self._name = name\n\n def name(self):\n \"\"\"Gets the name of the dog.\"\"\"\n return self._name\n\n @classmethod\n def bork(cls):\n \"\"\"Makes the noise Dogs do.\n\n Look past the @classmethod as this is a more advanced feature of Python.\n Just know that this is how you would create a class method in Python.\n This is a little hairy.\n \"\"\"\n print(cls.SOUND)\n\nThough I agree with the comments that this question is a little vague. Please be a little more specific but the answer I've provided is a quick overview of classes and objects in Python.\n",
"Your question was perfectly fine, nothing to apologize for. Some people are just arrogant.\nYour question being broad is the only way to interpret that you're actually asking what the basic definition of \"object\" is; and in this case, for Python, the answer is \"everything\" < and that's not confusing at all.\nEvery building block is an object (any variable, function, tuple, string, etc). It's literally like having a Lego Set to build a Star Wars big ship and asking \"hey, the manual says to be careful not break any piece; so, what is a piece?\" < the answer would be; everything, every single individual element of that Lego Set is a piece.\nLegit question.\nBest,\n",
"In OOP, object is a very simple yet complicated concept that people don't understands at first. I also had issue understanding it first but once you understands clearly what class is then it's so easy to understands what object is. If i am an object then my class is \"Homo Sapiens\". Let us know more about classes.\nClass and Object:\nClass is a blueprint that we create to have any object with some specific characteristics. Let us take an example of any fruit, Apple. Apple will fall in to the class named Fruits and it will have some attributes which would be the taste, smell, shape, etc. So we can say that Apple is an object of the class Fruits which has different attributes associated with it.\nNow, let us take a technical example. If there is a class defined by any programmer and if he assigns any object to that class then it means that the object will have all the attributes and methods which that particular class consists.\nFor further knowledge of class and object, you can use the official python link which i am providing here.\nhttps://docs.python.org/3/tutorial/classes.html\nHope i am little helpful to you.\nHave a great day.\n",
"The documentation says below:\n\nObjects are Python’s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann’s model of a “stored\nprogram computer”, code is also represented by objects.)\nEvery object has an identity, a type and a value. An object’s identity\nnever changes once it has been created; you may think of it as the\nobject’s address in memory. The ‘is’ operator compares the identity of\ntwo objects; the id() function returns an integer representing its\nidentity.\n\n"
] |
[
18,
8,
4,
3,
1,
0,
0
] |
[
"\nAn object is simply a collection of data (variables) and methods (functions) that act on data.\n\nA class is a blueprint for that object.\n\n\n"
] |
[
-2
] |
[
"object",
"oop",
"python"
] |
stackoverflow_0056310092_object_oop_python.txt
|
Q:
Suggestions to replace nans with a mixture of previous and subsequent values
Assume I have a 1D array and want to replace / interpolate NaN blocks of length n with copies of the n/2 non-NaN previous values and the n/2 non-NaN subsequent values.
Example 1:
input = [1, 2, NaN, NaN, NaN, NaN, 3, 2]
output= [1, 2, 1, 2, 3, 2, 3, 2]
Example 2: if n is odd, fill with n//2 + 1 previous values and n//2 subsequent values
input = [1, 2, 3, NaN, NaN, NaN, 4, 2]
output= [1, 2, 3, 2, 3, 4, 4, 2]
Example 3: if not enough non-nan neighbours are available, replicate the available one (in this example value = 3)
input = [3, NaN, NaN, NaN, NaN, 4, 2]
output = [3, 3, 3, 4, 2, 4, 2]
My preliminary solution looks like this:
import numpy as np

def fillna_with_neighbours(data):
# get start / stop of nan blocks
nan_blocks = np.where(np.isnan(data),1,0)
nan_blocks = np.concatenate([[0],nan_blocks,[0]])
nan_blocks = np.where(np.diff(nan_blocks)!=0)[0]
nan_blocks = nan_blocks.reshape(-1, 2)
for block in nan_blocks:
nan_start, nan_end = block
n = nan_end - nan_start
n_pre = n//2 if n%2 == 0 else n//2 + 1
nan_pre = nan_start - n_pre if nan_start>=n_pre else 0
n_post = n//2
nan_post = nan_end + n_post
pre = data[nan_pre:nan_start]
post = data[nan_end:nan_post]
if pre.size < n_pre:
pre = np.resize(pre, n_pre)
if post.size < n_post:
post = np.resize(post, n_post)
data[nan_start:nan_end] = np.hstack([pre, post])
return data
ex1 = np.asarray([1, 2, np.nan, np.nan, np.nan, np.nan, 3, 2])
ex2 = np.asarray([1, 2, 3, np.nan, np.nan, np.nan, 4, 2])
ex3 = np.asarray([3, np.nan, np.nan, np.nan, np.nan, 4, 2])
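For reference, a quick check (my own addition) that the function reproduces the expected outputs above; note that the function modifies its input in place, hence the .copy() calls:
for ex in (ex1, ex2, ex3):
    print(fillna_with_neighbours(ex.copy()))
# [1. 2. 1. 2. 3. 2. 3. 2.]
# [1. 2. 3. 2. 3. 4. 4. 2.]
# [3. 3. 3. 4. 2. 4. 2.]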
Is there a ready-made (SciPy?) function for this problem? I am sure there are much better ways to do this.
A:
Considering this comment from OP:
it is no assignment. just thought that one of you guys will come up with a much nicer way to solve this.
The question is too broad in this form, but here are some leads.
If you are using pandas, you can have a look at the following functions, which are designed to fill missing data using various methods (quotes follow):
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html
Fill NA/NaN values using the specified method.
method{‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None
Method to use for filling holes in reindexed Series pad / ffill: propagate last valid observation forward to next valid backfill / bfill: use next valid observation to fill gap.
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.interpolate.html
Fill NaN values using an interpolation method.
If you are not using pandas, then you will most likely have to implement an equivalent logic using NumPy or plain Python.
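For instance, a minimal sketch of the pandas approach (my own illustration; these built-ins approximate rather than exactly reproduce the half-before/half-after copying described in the question):
import numpy as np
import pandas as pd

s = pd.Series([1, 2, np.nan, np.nan, np.nan, np.nan, 3, 2])
print(s.ffill().tolist())         # repeat the last valid observation forward
print(s.bfill().tolist())         # repeat the next valid observation backward
print(s.interpolate().tolist())   # linear interpolation between the neighbours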
|
Suggestions to replace nans with a mixture of previous and subsequent values
|
Assume I have a 1D array and want to replace / interpolate NaN blocks of length n with copies of the n/2 non-NaN previous values and the n/2 non-NaN subsequent values.
Example 1:
input = [1, 2, NaN, NaN, NaN, NaN, 3, 2]
output= [1, 2, 1, 2, 3, 2, 3, 2]
Example 2: if n is odd, fill with n//2 + 1 previous values and n//2 subsequent values
input = [1, 2, 3, NaN, NaN, NaN, 4, 2]
output= [1, 2, 3, 2, 3, 4, 4, 2]
Example 3: if not enough non-nan neighbours are available, replicate the available one (in this example value = 3)
input = [3, NaN, NaN, NaN, NaN, 4, 2]
output = [3, 3, 3, 4, 2, 4, 2]
My preliminary solution looks like this:
import numpy as np

def fillna_with_neighbours(data):
# get start / stop of nan blocks
nan_blocks = np.where(np.isnan(data),1,0)
nan_blocks = np.concatenate([[0],nan_blocks,[0]])
nan_blocks = np.where(np.diff(nan_blocks)!=0)[0]
nan_blocks = nan_blocks.reshape(-1, 2)
for block in nan_blocks:
nan_start, nan_end = block
n = nan_end - nan_start
n_pre = n//2 if n%2 == 0 else n//2 + 1
nan_pre = nan_start - n_pre if nan_start>=n_pre else 0
n_post = n//2
nan_post = nan_end + n_post
pre = data[nan_pre:nan_start]
post = data[nan_end:nan_post]
if pre.size < n_pre:
pre = np.resize(pre, n_pre)
if post.size < n_post:
post = np.resize(post, n_post)
data[nan_start:nan_end] = np.hstack([pre, post])
return data
ex1 = np.asarray([1, 2, np.nan, np.nan, np.nan, np.nan, 3, 2])
ex2 = np.asarray([1, 2, 3, np.nan, np.nan, np.nan, 4, 2])
ex3 = np.asarray([3, np.nan, np.nan, np.nan, np.nan, 4, 2])
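For reference, a quick check (my own addition) that the function reproduces the expected outputs above; note that the function modifies its input in place, hence the .copy() calls:
for ex in (ex1, ex2, ex3):
    print(fillna_with_neighbours(ex.copy()))
# [1. 2. 1. 2. 3. 2. 3. 2.]
# [1. 2. 3. 2. 3. 4. 4. 2.]
# [3. 3. 3. 4. 2. 4. 2.]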
Is there a ready-made (SciPy?) function for this problem? I am sure there are much better ways to do this.
|
[
"Considering this comment from OP:\n\nit is no assignment. just thought that one of you guys will come up with a much nicer way to solve this.\n\nThe question is too broad in this form, but here are some leads.\nIf you are using pandas, you can have a look to the following functions which are designed to fill missing data using various methods (quotes follow) :\nhttps://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html\n\nFill NA/NaN values using the specified method.\nmethod{‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None\nMethod to use for filling holes in reindexed Series pad / ffill: propagate last valid observation forward to next valid backfill / bfill: use next valid observation to fill gap.\n\nhttps://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.interpolate.html\n\nFill NaN values using an interpolation method.\n\nIf you are not using pandas, then it is most probable that you have to implement an equivalent logic using numpy, or basic python.\n"
] |
[
0
] |
[] |
[] |
[
"fillna",
"interpolation",
"python"
] |
stackoverflow_0074547741_fillna_interpolation_python.txt
|
Q:
BFS - TypeError: 'ellipsis' object is not subscriptable - implementing algorithm
I am trying to implement the BFS algorithm, but Python is giving me an error that the ellipsis object is not subscriptable.
I am unsure what this means because, as far as I am aware, this object should not be an Ellipsis.
TypeError: 'ellipsis' object is not subscriptable
The line causing the error:
visited[starting_row][starting_col] = True
Function:
def findRoute(self, x1, y1, x2, y2):
grid = self.grid
print(grid)
starting_row, starting_col = x1, y1
# Creating 2 separate queues for X and Y.
x_queue, y_queue = deque(), deque()
number_of_moves = 0
number_of_nodes_in_current_layer = 1
number_of_nodes_in_next_layer = 0
end_reached = False
# Up/Down/Right/Left directions
direction_row = [-1, 1, 0, 0]
direction_col = [0, 0, 1, -1]
visited = ...
x_queue.append(starting_row)
y_queue.append(starting_col)
visited[starting_row][starting_col] = True
while len(x_queue) > 0:
x = x_queue.dequeue()
y = y_queue.dequeue()
if x == x2 & y == y2:
end_reached = True
break
# for(i = 0; i < 4; i++):
# Loop through direction.
for i in range(0, 4):
new_row = x + direction_row[i]
new_col = x + direction_col[i]
#Validate position
# Skip locations not in grid.
if new_row < 0 or new_col < 0 or new_row >= self.height or new_col >= self.width:
continue
# Skip locations already visited / cells blocked by walls.
if visited[new_row][new_col] or grid[new_row][new_col]:
continue
x_queue.enqueue(new_row)
y_queue.enqueue(new_col)
visited[new_row][new_col] = True
number_of_nodes_in_next_layer += 1
if number_of_nodes_in_current_layer == 0:
number_of_nodes_in_current_layer = number_of_nodes_in_next_layer
number_of_nodes_in_next_layer = 0
number_of_moves += 1
if end_reached:
return number_of_moves
return -1
return grid[1][2]
Any help would be appreciated, thanks.
A:
Your code has this line:
visited = ...
This ... is not commonly used, but it is a native object. The documentation on Ellipsis says:
The same as the ellipsis literal “...”. Special value used mostly in conjunction with extended slicing syntax for user-defined container data types. Ellipsis is the sole instance of the types.EllipsisType type.
As the error message states, this object is not subscriptable, yet that is exactly what you tried to do with:
visited[starting_row][starting_col] = True
I suppose you didn't really intend to use visited = ..., and that you were planning to complete this statement later and then forgot about it. It should be:
visited = [[False] * len(row) for row in grid]
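A quick interactive check (my own illustration) shows both the failure and the fix:
>>> visited = ...
>>> visited
Ellipsis
>>> visited[0][0] = True
TypeError: 'ellipsis' object is not subscriptable
>>> grid = [[0, 0], [0, 0]]
>>> visited = [[False] * len(row) for row in grid]
>>> visited[0][0] = True
>>> visited
[[True, False], [False, False]]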
|
BFS - TypeError: 'ellipsis' object is not subscriptable - implementing algorithm
|
I am trying to implement the BFS algorithm, but Python is giving me an error that the ellipsis object is not subscriptable.
I am unsure what this means because, as far as I am aware, this object should not be an Ellipsis.
TypeError: 'ellipsis' object is not subscriptable
The line causing the error:
visited[starting_row][starting_col] = True
Function:
def findRoute(self, x1, y1, x2, y2):
grid = self.grid
print(grid)
starting_row, starting_col = x1, y1
# Creating 2 separate queues for X and Y.
x_queue, y_queue = deque(), deque()
number_of_moves = 0
number_of_nodes_in_current_layer = 1
number_of_nodes_in_next_layer = 0
end_reached = False
# Up/Down/Right/Left directions
direction_row = [-1, 1, 0, 0]
direction_col = [0, 0, 1, -1]
visited = ...
x_queue.append(starting_row)
y_queue.append(starting_col)
visited[starting_row][starting_col] = True
while len(x_queue) > 0:
x = x_queue.dequeue()
y = y_queue.dequeue()
if x == x2 & y == y2:
end_reached = True
break
# for(i = 0; i < 4; i++):
# Loop through direction.
for i in range(0, 4):
new_row = x + direction_row[i]
new_col = x + direction_col[i]
#Validate position
# Skip locations not in grid.
if new_row < 0 or new_col < 0 or new_row >= self.height or new_col >= self.width:
continue
# Skip locations already visited / cells blocked by walls.
if visited[new_row][new_col] or grid[new_row][new_col]:
continue
x_queue.enqueue(new_row)
y_queue.enqueue(new_col)
visited[new_row][new_col] = True
number_of_nodes_in_next_layer += 1
if number_of_nodes_in_current_layer == 0:
number_of_nodes_in_current_layer = number_of_nodes_in_next_layer
number_of_nodes_in_next_layer = 0
number_of_moves += 1
if end_reached:
return number_of_moves
return -1
return grid[1][2]
Any help would be appreciated, thanks.
|
[
"Your code has this line:\nvisited = ...\n\nThis ... is not commonly used, but it is a native object. The documentation on Ellipsis has:\n\nThe same as the ellipsis literal “...”. Special value used mostly in conjunction with extended slicing syntax for user-defined container data types. Ellipsis is the sole instance of the types.EllipsisType type.\n\nAs the error message states, this object is not subscriptable, yet that is exactly what you tried to do with:\nvisited[starting_row][starting_col] = True\n\nI suppose you didn't really intend to use visited = ..., and that you were planning to complete this statement later and then forgot about it. It should be:\nvisited = [[False] * len(row) for row in grid] \n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074547163_python.txt
|
Q:
Convert an Excel file to a nested JSON file using Python
My data with multiple pages looks like this
I want to convert it to a JSON file like below.
{
"Name" : "A",
{
"Project" : "P1",
{
[
"T1" : "P1.com",
]
},
"Project" : "P2",
{
[
"T2" : "P2.com",
"T3" : "P2.com",
]
}
},
"Name" : "B",
{
"Project" : "Q1",
{
[
"T1" : "Q1.com",
]
},
"Project" : "Q2",
{
[
"T2" : "Q2.com"
]
}
},
"Name" : "C",
{
"Project" : "R1",
{
[
"T1" : "R1.com",
]
},
"Project" : "R2",
{
[
"T2" : "R2.com"
]
}
}
}
This is the first time I'm working with JSON files, and I'm a bit confused about how to make the Excel headings the keys and the remaining data the values. I am able to read the Excel file using pandas.
Can anyone help me with the idea to do this?
import json
import pandas as pd
df = pd.read_excel(r'D:\example.xlsx', sheet_name='Sheet1')
Names = df["Name"]
d = {}
for index, name in enumerate(Names):
I have tried reading the file. Since I'm working with JSON files for the first time, I don't have much of an idea about how to convert this.
A:
You have many options; I'll provide them using the pandas library. Choose whichever one is most suitable for you.
Pandas Library
Pandas to_json() method
Example Code:
import pandas as pd
data = [
["A", "P1", "T1", "P1.com"],
["A", "P2", "T2", "P2.com"],
["A", "P2", "T3", "P2.com"],
["B", "Q1", "T1", "Q1.com"],
["B", "Q2", "T2", "Q2.com"],
["C", "R1", "T1", "R1.com"],
["C", "R2", "T2", "R2.com"],
]
cols = ['name', 'project', 'type', 'link']
# Create the pandas DataFrame
df = pd.DataFrame(data, columns=cols)
In order to convert it to JSON apply one of the following:
import json

parsed_split = json.loads(df.to_json(orient="split"))
parsed_records = json.loads(df.to_json(orient="records"))
parsed_index = json.loads(df.to_json(orient="index"))
parsed_columns = json.loads(df.to_json(orient="columns"))
parsed_values = json.loads(df.to_json(orient="values"))
parsed_table = json.loads(df.to_json(orient="table"))
If none of the above meets your requirements, you will surely be able to loop over one of them in order to achieve your solution; if you're struggling with that, comment and I'll update it for you.
Keep us updated, @Anoosha
Update 1: Multi Sheet Excel
import pandas as pd
sheets = ["A", "B"]
xls = pd.ExcelFile('path_to_file.xls')
for sheet in sheets:
df = pd.read_excel(xls, sheet) # Or use without loop if you want
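If the goal is the nested shape sketched in the question (roughly name -> project -> {task: link}), a groupby-based sketch could look like this (my own illustration, reusing the df built above):
import json

nested = {}
for (name, project), grp in df.groupby(['name', 'project']):
    # map each task to its link within this name/project group
    nested.setdefault(name, {})[project] = dict(zip(grp['type'], grp['link']))

print(json.dumps(nested, indent=2))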
|
Convert an Excel file to a nested JSON file using Python
|
My data with multiple pages looks like this
I want to convert it to a JSON file like below.
{
"Name" : "A",
{
"Project" : "P1",
{
[
"T1" : "P1.com",
]
},
"Project" : "P2",
{
[
"T2" : "P2.com",
"T3" : "P2.com",
]
}
},
"Name" : "B",
{
"Project" : "Q1",
{
[
"T1" : "Q1.com",
]
},
"Project" : "Q2",
{
[
"T2" : "Q2.com"
]
}
},
"Name" : "C",
{
"Project" : "R1",
{
[
"T1" : "R1.com",
]
},
"Project" : "R2",
{
[
"T2" : "R2.com"
]
}
}
}
This is the first time I'm working with JSON files, and I'm a bit confused about how to make the Excel headings the keys and the remaining data the values. I am able to read the Excel file using pandas.
Can anyone help me with the idea to do this?
import json
import pandas as pd
df = pd.read_excel(r'D:\example.xlsx', sheet_name='Sheet1')
Names = df["Name"]
d = {}
for index, name in enumerate(Names):
I have tried reading the file. Since I'm working with JSON files for the first time, I don't have much of an idea about how to convert this.
|
[
"You have many options, I'll provide you with them using the pandas library, choose which one is more suitable to you\n\nPandas Library\nPandas to_json() method\n\nExample Code:\nimport pandas as pd\n\ndata = [\n [\"A\", \"P1\", \"T1\", \"P1.com\"],\n [\"A\", \"P2\", \"T2\", \"P2.com\"],\n [\"A\", \"P2\", \"T3\", \"P2.com\"],\n [\"B\", \"Q1\", \"T1\", \"Q1.com\"],\n [\"B\", \"Q2\", \"T2\", \"Q2.com\"],\n [\"C\", \"R1\", \"T1\", \"R1.com\"],\n [\"C\", \"R2\", \"T2\", \"R2.com\"],\n]\ncols = ['name', 'project', 'type', 'link']\n# Create the pandas DataFrame\ndf = pd.DataFrame(data, columns=cols)\n\nIn order to convert it to JSON apply one of the following:\nparsed_split = json.loads(df.to_json(orient=\"split\"))\nparsed_records = json.loads(df.to_json(orient=\"records\"))\nparsed_index = json.loads(df.to_json(orient=\"index\"))\nparsed_columns = json.loads(df.to_json(orient=\"columns\"))\nparsed_values = json.loads(df.to_json(orient=\"values\"))\nparsed_table = json.loads(df.to_json(orient=\"table\"))\n\nIf none of the following meets your requirement, for sure you'll be able to loop over one of them in order to achieve your solution, if you're struggling with that, comment and I'll update it for you.\nKeep us updated, @Anoosha\n\nUpdate 1: Multi Sheet Excel\nimport pandas as pd\n\nsheets = [\"A\", \"B\"]\nxls = pd.ExcelFile('path_to_file.xls')\nfor sheet in sheets:\n df = pd.read_excel(xls, sheet) # Or use without loop if you want\n\n"
] |
[
2
] |
[] |
[] |
[
"excel",
"json",
"key_value",
"python",
"xls"
] |
stackoverflow_0074472792_excel_json_key_value_python_xls.txt
|
Q:
Column transformers using NumPy indexing
I am studying this snippet and I don't understand how the column addition was constructed.
def column_addition(X):
return X[:, [0]] + X[:, [1]]
def addition_pipeline():
return make_pipeline(
SimpleImputer(strategy="median"),
FunctionTransformer(column_addition))
preprocessing = ColumnTransformer(
transformers=[("accompany", addition_pipeline, ["SibSp", "Parch"])], remainder='passthrough')
preprocess = preprocessing.fit_transform(df)
How are the df and ["SibSp", "Parch"] used behind the scenes to create the addition in the code below?
# How are the df and ["SibSp", "Parch"] used here?
# How can I replicate this as a non-function?
X[:, [0]] + X[:, [1]]
When I try to replace X with the DataFrame itself, it throws an error.
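For instance (my own minimal illustration), plain df[...] indexing is label-based and rejects the positional syntax, whereas converting to a NumPy array first makes it work:
import pandas as pd

df = pd.DataFrame({"SibSp": [1, 0], "Parch": [0, 2]})
# df[:, [0]] raises an error: DataFrame __getitem__ is label-based, not positional
X = df.to_numpy()              # roughly what the pipeline hands to the transformer
print(X[:, [0]] + X[:, [1]])   # a (2, 1) column of sums: [[1], [2]]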
A:
A couple of premises to understand how this example works:
the Pipeline is meant to apply transformations serially. Therefore, your pipeline will first impute some columns (we'll see later which ones; anticipation: they'll be ['sibsp', 'parch']) with the median value and then it will apply the column addition on these same columns
the ColumnTransformer is meant to apply transformations in parallel on the columns you pass to it. In your case you're applying a single transformation on columns ['sibsp', 'parch'] and leaving all other columns untouched (by specifying remainder='passthrough'). Moreover, the transformation you're performing is the one defined by the pipeline itself (imputation + column addition).
This said, the reason why the column_addition function references columns by index (columns 0 and 1 will be respectively 'sibsp' and 'parch', because you're applying transformations to them only) is that - historically - calling .fit_transform() on a Pipeline or a ColumnTransformer instance (and thus applying multiple transformations at once on a DataFrame) made you lose the DataFrame structure and turned it into a NumPy array (whose columns can only be referenced positionally). I'm emphasizing historically because the upcoming sklearn version will bring big news on this front (see the link attached at the end of the answer for further details; anticipation: there'll be the possibility of maintaining the DataFrame structure while applying serial or parallel transformations).
All in all, the way to reproduce the example might be the following:
Your version:
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
df = fetch_openml('titanic', version=1, as_frame=True)['data']
def column_addition(X):
return X[:, [0]] + X[:, [1]]
addition_pipeline = make_pipeline(
SimpleImputer(strategy="median"),
FunctionTransformer(column_addition)
)
preprocessing = ColumnTransformer(
transformers=[("accompany", addition_pipeline, ["sibsp", "parch"])], remainder='passthrough')
df_updated = pd.DataFrame(preprocessing.fit_transform(df))
Replica:
si = SimpleImputer(strategy="median")
df_new = df.copy()
df_new['sibsp_new'] = si.fit_transform(df[['sibsp']])
df_new['parch_new'] = si.fit_transform(df[['parch']])
df_new['addition_col'] = df_new['sibsp_new'] + df_new['parch_new']
Proof that all's working:
(df_new['addition_col'] == df_updated.iloc[:, 0]).all() # gives True
Also, observe that the Titanic dataset from openml does not require imputation on columns 'sibsp' and 'parch'; however, in principle this might not be the case for you depending on the dataset you're starting from.
df['sibsp'].isna().sum(), df['parch'].isna().sum() # gives (0, 0)
I'd suggest "how to use ColumnTransformer() to return a dataframe?" for a couple of further details on the use of ColumnTransformer instances.
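As a quick check of the "you lose the DataFrame structure" point above (a sketch of my own, reusing the objects defined in the snippets), note both the raw output type and the shape semantics that make X[:, [0]] + X[:, [1]] a column-wise sum:
import numpy as np

out = preprocessing.fit_transform(df)
print(type(out))          # <class 'numpy.ndarray'>: no column labels left

X = np.array([[1., 0.], [0., 2.]])
print(X[:, 0].shape)      # (2,)   -> 1-D vector
print(X[:, [0]].shape)    # (2, 1) -> 2-D column, so the sum keeps its column shape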
|
Column transformers using NumPy indexing
|
I am studying this snippet and I don't understand how the column addition was constructed.
def column_addition(X):
return X[:, [0]] + X[:, [1]]
def addition_pipeline():
return make_pipeline(
SimpleImputer(strategy="median"),
FunctionTransformer(column_addition))
preprocessing = ColumnTransformer(
transformers=[("accompany", addition_pipeline, ["SibSp", "Parch"])], remainder='passthrough')
preprocess = preprocessing.fit_transform(df)
How are the df and ["SibSp", "Parch"] used behind the scenes to create the addition in the code below?
# How are the df and ["SibSp", "Parch"] used here?
# How can I replicate this as a non-function?
X[:, [0]] + X[:, [1]]
When I try to replace X with the DataFrame itself, it throws an error.
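For instance (my own minimal illustration), plain df[...] indexing is label-based and rejects the positional syntax, whereas converting to a NumPy array first makes it work:
import pandas as pd

df = pd.DataFrame({"SibSp": [1, 0], "Parch": [0, 2]})
# df[:, [0]] raises an error: DataFrame __getitem__ is label-based, not positional
X = df.to_numpy()              # roughly what the pipeline hands to the transformer
print(X[:, [0]] + X[:, [1]])   # a (2, 1) column of sums: [[1], [2]]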
|
[
"A couple of premises to understand how this example works:\n\nthe Pipeline is meant to apply transformations serially. Therefore, your pipeline will first impute some columns (we'll see later which ones; anticipation: they'll be ['sibsp', 'parch']) with the median value and then it will apply the column addition on these same columns\nthe ColumnTransformer is meant to apply transformations in parallel on the columns that you're passing it to. In your case you're applying a single transformation on columns ['sibsp', 'parch'] and you're leaving all other columns untouched (by specifying remainder='passthrough'). Moreover, the transformation you're performing is the one defined by the pipeline itself (imputation + column addition).\n\nThis said, the reason why the column_addition function is referencing columns by index (columns 0 and 1 will be respectively 'sibsp' and 'parch' because you're applying transformations to them only) is that - historically - calling .fit_transform() on a Pipeline or a ColumnTransformer instance (and thus applying multiple transformations at once on a DataFrame) made you lose the DataFrame structure and transform it into a Numpy array (whose columns can be only referenced positionally). I'm emphasizing historically because the upcoming sklearn version will bring a big news towards this (see the link attached at the end of the answer for further details; anticipation: there'll be the possibility of maintaining the DataFrame structure while applying serial or parallel transformations).\nAll in all, the way to reproduce the example might be the following:\n\nYour version:\n import pandas as pd\n from sklearn.datasets import fetch_openml\n from sklearn.compose import ColumnTransformer\n from sklearn.impute import SimpleImputer\n from sklearn.pipeline import make_pipeline\n from sklearn.preprocessing import FunctionTransformer\n\n df = fetch_openml('titanic', version=1, as_frame=True)['data']\n\n def column_addition(X):\n return X[:, [0]] + X[:, [1]]\n\n addition_pipeline = make_pipeline(\n SimpleImputer(strategy=\"median\"),\n FunctionTransformer(column_addition)\n )\n\n preprocessing = ColumnTransformer(\n transformers=[(\"accompany\", addition_pipeline, [\"sibsp\", \"parch\"])], remainder='passthrough')\n\n df_updated = pd.DataFrame(preprocessing.fit_transform(df))\n\n\nReplica:\n si = SimpleImputer(strategy=\"median\")\n df_new = df.copy()\n df_new['sibsp_new'] = si.fit_transform(df[['sibsp']])\n df_new['parch_new'] = si.fit_transform(df[['parch']])\n df_new['addition_col'] = df_new['sibsp_new'] + df_new['parch_new']\n\n\nProof that all's working:\n (df_new['addition_col'] == df_updated.iloc[:, 0]).all() # gives True\n\n\n\nAlso, observe that the Titanic dataset from openml does not require imputation on columns 'sibsp' and 'parch'; however, in principle this might not be the case for you depending on the dataset you're starting from.\ndf['sibsp'].isna().sum(), df['parch'].isna().sum() # gives (0, 0)\n\nI'd suggest how to use ColumnTransformer() to return a dataframe? for a couple of further details on the use of ColumnTransformer instances.\n"
] |
[
0
] |
[] |
[] |
[
"numpy",
"python",
"scikit_learn"
] |
stackoverflow_0074510935_numpy_python_scikit_learn.txt
|