Dataset columns:
content: string (lengths 85 to 101k)
title: string (lengths 0 to 150)
question: string (lengths 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (lengths 35 to 137)
Q: How can I change the result of torch.cuda.is_available() to True GTX 1050ti Hello, I have recently started using PyTorch, but now I need to use my GPU (an Nvidia GTX 1050 Ti) to process some data. Unfortunately, torch.cuda.is_available() is returning False. I have tried uninstalling cudatoolkit 11.3 and downgrading it to 11.1, and also deleting and reinstalling PyTorch using conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia -c conda-forge. Here is some information about my local configuration: OS: Windows 10 graphics card: Nvidia 1050 Ti current Nvidia driver: 465.89 >>> torch.version.cuda '11.1' >>> torch.cuda.is_available() False >>> torch.backends.cudnn.enabled True >>> torch.__version__ '1.8.1' -> nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2020 NVIDIA Corporation Built on Tue_Sep_15_19:12:04_Pacific_Daylight_Time_2020 Cuda compilation tools, release 11.1, V11.1.74 Build cuda_11.1.relgpu_drvr455TC455_06.29069683_0 A: You have probably downloaded the CPU-only version. Have you tried to install it using pip and the stable PyTorch wheels? pip install torch torchvision -f https://download.pytorch.org/whl/torch_stable.html Make sure to purge your pip cache first so you won't install the same cached wheels: pip uninstall torch pip cache purge Then run the above install instruction.
How can I change the result of torch.cuda.is_available() to True GTX 1050ti
Hello, I have recently started using PyTorch, but now I need to use my GPU (an Nvidia GTX 1050 Ti) to process some data. Unfortunately, torch.cuda.is_available() is returning False. I have tried uninstalling cudatoolkit 11.3 and downgrading it to 11.1, and also deleting and reinstalling PyTorch using conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia -c conda-forge. Here is some information about my local configuration: OS: Windows 10 graphics card: Nvidia 1050 Ti current Nvidia driver: 465.89 >>> torch.version.cuda '11.1' >>> torch.cuda.is_available() False >>> torch.backends.cudnn.enabled True >>> torch.__version__ '1.8.1' -> nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2020 NVIDIA Corporation Built on Tue_Sep_15_19:12:04_Pacific_Daylight_Time_2020 Cuda compilation tools, release 11.1, V11.1.74 Build cuda_11.1.relgpu_drvr455TC455_06.29069683_0
[ "You have probably downloaded the only-cpu version.\nHave you tried to install it using pip and the stable pytorch version?\npip install torch torchvision -f https://download.pytorch.org/whl/torch_stable.html\n\nMake sure to purge your pip cache before so you won't install the same chached wheels:\npip uninstall torch\npip chache purge\n\nThen run the above mentioned install instruction.\n" ]
[ 0 ]
[]
[]
[ "gpu", "python", "pytorch" ]
stackoverflow_0067527807_gpu_python_pytorch.txt
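A sketch that often resolves this on Windows: install a CUDA 11.1 build of PyTorch explicitly from the wheel index, then re-check. The version pins below are assumptions taken from the versions quoted in the question (torch 1.8.1 pairs with torchvision 0.9.1); adjust them to whatever pairing the PyTorch site documents for your setup.

pip uninstall -y torch torchvision torchaudio
pip cache purge
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"

If the last line still prints False, driver 465.89 should be recent enough for CUDA 11.1, so a stray CPU-only conda package shadowing the pip install is a more common culprit than the driver.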
Q: Pandas - How to count multiple columns and generate a percentage I have a dataframe that has 100+ columns. The columns contain either a 1 (for yes) or a 0 (for no), as seen below. I'm trying to find out the percentage of each - such as "what percentage of the episodes have a barn?" or "what percentage of the episodes have a beach?" I think I'll need to iterate through the rows and put these in buckets, but I'm wondering how. The total number of episodes is 381, which I know I'll need to divide by to find the percentage of each. The end product should look like this: A: Is this what you mean? df = pd.DataFrame([[1,1,0],[0,0,1],[1,0,1],[0,0,1]], columns=['barn','beach','boat']) >>> df ''' barn beach boat 0 1 1 0 1 0 0 1 2 1 0 1 3 0 0 1 ''' >>> df.mean().reset_index(name='value') ''' index value 0 barn 0.50 1 beach 0.25 2 boat 0.75 '''
Pandas - How to count multiple columns and generate a percentage
I have a dataframe that has 100+ columns. The columns contain either a 1 (for yes) or a 0 (for no), as seen below. I'm trying to find out the percentage of each - such as "what percentage of the episodes have a barn?" or "what percentage of the episodes have a beach?" I think I'll need to iterate through the rows and put these in buckets, but I'm wondering how. The total number of episodes is 381, which I know I'll need to divide by to find the percentage of each. The end product should look like this:
[ "is this what you mean?\ndf = pd.DataFrame([[1,1,0],[0,0,1],[1,0,1],[0,0,1]],\n columns=['barn','beach','boat'])\n\n>>> df\n'''\n barn beach boat\n0 1 1 0\n1 0 0 1\n2 1 0 1\n3 0 0 1\n'''\n\n>>> df.mean().reset_index(name='value')\n'''\n index value\n0 barn 0.50\n1 beach 0.25\n2 boat 0.75\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074563730_dataframe_pandas_python_python_3.x.txt
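A minimal sketch of the fraction-to-percentage step the answer stops short of, reusing the answer's toy dataframe (the mean of a 0/1 column is the fraction of 1s, so multiplying by 100 gives the percentage):

import pandas as pd

df = pd.DataFrame([[1, 1, 0], [0, 0, 1], [1, 0, 1], [0, 0, 1]],
                  columns=['barn', 'beach', 'boat'])
# mean of each 0/1 column = share of episodes with that feature
pct = df.mean().mul(100).rename_axis('feature').reset_index(name='percent')
print(pct)
#   feature  percent
# 0    barn     50.0
# 1   beach     25.0
# 2    boat     75.0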
Q: ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' Traceback (most recent call last): File "g:\mydrive\ \pdftotext_pdfminer.py", line 3, in <module> from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter File "C:\Users\ \anaconda3\envs\ \lib\site-packages\pdfminer\pdfinterp.py", line 7, in <module> from .cmapdb import CMap File "C:\Users\ \anaconda3\envs\ \lib\site-packages\pdfminer\encodingdb.py", line 7, in <module> from .psparser import PSLiteral File "C:\Users\ \anaconda3\envs\ \lib\site-packages\pdfminer\psparser.py", line 22, in <module> from .utils import choplist File "C:\Users\ \anaconda3\envs\ \lib\site-packages\pdfminer\utils.py", line 31, in <module> import charset_normalizer # For str encoding detection File "C:\Users\ \anaconda3\envs\ \lib\site-packages\charset_normalizer\__init__.py", line 23, in <module> from charset_normalizer.api import from_fp, from_path, from_bytes, normalize File "C:\Users\ \anaconda3\envs\ \lib\site-packages\charset_normalizer\api.py", line 10, in <module> from charset_normalizer.md import mess_ratio File "charset_normalizer\md.py", line 5, in <module> ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (C:\Users\ \anaconda3\envs\ \lib\site-packages\charset_normalizer\constant.py) This error happens whenever I'm using pdfminer. I also installed pdfminer-six. My code worked just fine until two days ago; it started to happen today when I tried to run it again without any adjustment to the file. I'm assuming it may be pdfminer's problem, but there's been no update to the module... (I'm running this in my conda env.) Does anyone know what this error means and how to fix it? A: Hi there. I faced the same problem when trying to use the pdfplumber package today (2022-11-24) from a script I have long used with no problem. I don't know why this error is happening, but I found one of the solutions in this link helpful: How to fix AttributeError: partially initialized module? Briefly, I removed my entire virtual environment using the command conda env remove --name ds (ds being the name of my environment). Then I created a new one and installed every package I needed again through conda or pip. It is working perfectly now. Hope it works for you as well. Out of curiosity, I installed Tensorflow last week. Maybe it interfered with pdfplumber somehow (not sure). Have you installed any new packages since the last time you used pdfminer? Best of luck!
ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant'
Traceback (most recent call last): File "g:\mydrive\ \pdftotext_pdfminer.py", line 3, in <module> from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter File "C:\Users\ \anaconda3\envs\ \lib\site-packages\pdfminer\pdfinterp.py", line 7, in <module> from .cmapdb import CMap File "C:\Users\ \anaconda3\envs\ \lib\site-packages\pdfminer\encodingdb.py", line 7, in <module> from .psparser import PSLiteral File "C:\Users\ \anaconda3\envs\ \lib\site-packages\pdfminer\psparser.py", line 22, in <module> from .utils import choplist File "C:\Users\ \anaconda3\envs\ \lib\site-packages\pdfminer\utils.py", line 31, in <module> import charset_normalizer # For str encoding detection File "C:\Users\ \anaconda3\envs\ \lib\site-packages\charset_normalizer\__init__.py", line 23, in <module> from charset_normalizer.api import from_fp, from_path, from_bytes, normalize File "C:\Users\ \anaconda3\envs\ \lib\site-packages\charset_normalizer\api.py", line 10, in <module> from charset_normalizer.md import mess_ratio File "charset_normalizer\md.py", line 5, in <module> ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (C:\Users\ \anaconda3\envs\ \lib\site-packages\charset_normalizer\constant.py) This error happens whenever I'm using pdfminer. I also installed pdfminer-six. My code worked just fine until two days ago; it started to happen today when I tried to run it again without any adjustment to the file. I'm assuming it may be pdfminer's problem, but there's been no update to the module... (I'm running this in my conda env.) Does anyone know what this error means and how to fix it?
[ "there. I faced the same problem when trying to use the pdfplumber package today (2022-11-24) from a script I have long used with no problem. I don't know why this error is happening but found one of the solutions in this link helpful:\nHow to fix AttributeError: partially initialized module?\nBriefly, I removed my entire virtual environment using the command conda env remove --name ds (being ds the name of my environment). Then, I created a new one and installed every package I needed again through conda or pip. It is working perfectly now. Hope it works for you as well.\nOut of curiosity, I have installed Tensorflow last week. Maybe it interfered with pdfplumber somehow (not sure). Have you installed any new package since the last time you used pdfminer? Best of luck!\n" ]
[ 0 ]
[]
[]
[ "importerror", "pdfminer", "python" ]
stackoverflow_0074535380_importerror_pdfminer_python.txt
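A lighter-weight fix than rebuilding the whole environment is usually to refresh the broken package itself; this particular ImportError is typically a half-upgraded charset-normalizer install. A sketch, run inside the affected conda env:

pip install --upgrade --force-reinstall charset-normalizer

If pdfminer still fails afterwards, reinstalling pdfminer.six the same way is a reasonable next step before resorting to conda env remove.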
Q: How to get all keys and values from aioredis redis = aioredis.from_url(url='redis://some_url', decode_responses=True) redis.set('key', 'value') redis.set('key1', 'value1') redis.get('key') I want to get all keys and values with a loop, like: for key, value in redis.scan_iter(): print(key, value) For example. I have looked in the docs but cannot find it. Anybody know? A: I found the answer. keys = await redis.keys() for key in keys: value = await redis.get(key) This helped me!
How to get all keys and values from aioredis
redis = aioredis.from_url(url='redis://some_url', decode_responses=True) redis.set('key', 'value') redis.set('key1', 'value1') redis.get('key') I want to get all keys and values with a loop, like: for key, value in redis.scan_iter(): print(key, value) For example. I have looked in the docs but cannot find it. Anybody know?
[ "I find the answer.\nkeys = await redis.keys()\nfor key in keys:\n value = await redis.get(key)\n\nThis help to me!\n" ]
[ 0 ]
[]
[]
[ "aioredis", "python", "redis" ]
stackoverflow_0074559311_aioredis_python_redis.txt
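One caveat worth adding: KEYS fetches every key in a single blocking call, which can stall a large Redis instance. A non-blocking sketch using the cursor-based iterator, assuming aioredis 2.x (or redis-py's asyncio API, which aioredis was merged into), where scan_iter is an async generator:

import asyncio
import aioredis

async def dump_all():
    redis = aioredis.from_url('redis://some_url', decode_responses=True)
    async for key in redis.scan_iter():  # SCAN under the hood, cursor-based
        print(key, await redis.get(key))

asyncio.run(dump_all())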
Q: Pyspark 'from_json', dataframe returns null for all json columns Utilizing python (version 3.7.12) and pyspark (version 2.4.0). I am trying to use a from_json statement using the columns and identified schema. However, the df returns as null. I am assuming I am incorrectly identifying the schema and type for the columns. The following code shows the json string from a table I pulled using get_json_object: df = df.select(col('id'), get_json_object(col("pulled_col"), "$.data")) df.head() #Row(id = '0123456', data = '[ #{"time" : [], "history" : [], "zip" : "78910", "phnumber" : #"5678910123", "name" : "-"}, #{"time" : [], "history" : [], "zip" : "78920", "phnumber" : #"5678910123", "name" : "-"}, #{"time" : [], "history" : [], "zip" : "78930", "phnumber" : #"5678910123", "name" : "-"}, #{"time" : [], "history" : [], "zip" : "78910", "phnumber" : #"5678910123", "name" : "-"} #]') df.printSchema() #root # |-- id: string (nullable = true) # |-- data: string (nullable = true) df.show() #+-------+----------------------------+ #| id| data| #+-------+----------------------------+ #|0123456|[{"time" : [], "history"....| #|0123456|[{"time" : [], "history"....| #+-------+----------------------------+ test = df.select(col("id"), get_json_object(col("data"),"$.zip")\ .alias("zip"))\ .show(truncate=False) # The output shouldn't be null? #+-------+----+ #| id| zip| #+-------+----+ #|0123456|null| #|0123456|null| #+-------+----+ schema = StructType( [ StructField('zip', StringType(), True), StructField('phnumber', StringType(), True), StructField('name', StringType(), True) ] ) data_json = df.withColumn("data", from_json("data", schema))\ .select(col('id'), col('data.*')) # The df output shouldn't be null for the new json schema? data_json.show() #+-------+----+---------+-----+ #| id| zip| phnumber| name| #+-------+----+---------+-----+ #|0123456|null| null| null| #|0123456|null| null| null| #+-------+----+---------+-----+ A: The data column actually contains a json array so the schema must be an ArrayType: schema = ArrayType( elementType = StructType( [ StructField('zip', StringType(), True), StructField('phnumber', StringType(), True), StructField('name', StringType(), True) ] ) ) data_json = df.withColumn("data", F.from_json("data", schema)) which results in the following schema: root |-- id: long (nullable = true) |-- data: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- zip: string (nullable = true) | | |-- phnumber: string (nullable = true) | | |-- name: string (nullable = true) Now if you want each element of the array in a separate row, you can explode it and extract the fields you need: data_json = df.withColumn("data", F.from_json("data", schema)) \ .withColumn("data", F.explode("data")) \ .select(F.col('id'), F.col('data.*')) Result: +---+-----+----------+----+ | id| zip| phnumber|name| +---+-----+----------+----+ | 1|78910|5678910123| -| | 1|78920|5678910123| -| +---+-----+----------+----+
Pyspark 'from_json', dataframe returns null for all json columns
Utilizing python (version 3.7.12) and pyspark (version 2.4.0). I am trying to use a from_json statement using the columns and identified schema. However, the df returns as null. I am assuming I am incorrectly identifying the schema and type for the columns. The following code shows the json string from a table I pulled using get_json_object: df = df.select(col('id'), get_json_object(col("pulled_col"), "$.data")) df.head() #Row(id = '0123456', data = '[ #{"time" : [], "history" : [], "zip" : "78910", "phnumber" : #"5678910123", "name" : "-"}, #{"time" : [], "history" : [], "zip" : "78920", "phnumber" : #"5678910123", "name" : "-"}, #{"time" : [], "history" : [], "zip" : "78930", "phnumber" : #"5678910123", "name" : "-"}, #{"time" : [], "history" : [], "zip" : "78910", "phnumber" : #"5678910123", "name" : "-"} #]') df.printSchema() #root # |-- id: string (nullable = true) # |-- data: string (nullable = true) df.show() #+-------+----------------------------+ #| id| data| #+-------+----------------------------+ #|0123456|[{"time" : [], "history"....| #|0123456|[{"time" : [], "history"....| #+-------+----------------------------+ test = df.select(col("id"), get_json_object(col("data"),"$.zip")\ .alias("zip"))\ .show(truncate=False) # The output shouldn't be null? #+-------+----+ #| id| zip| #+-------+----+ #|0123456|null| #|0123456|null| #+-------+----+ schema = StructType( [ StructField('zip', StringType(), True), StructField('phnumber', StringType(), True), StructField('name', StringType(), True) ] ) data_json = df.withColumn("data", from_json("data", schema))\ .select(col('id'), col('data.*')) # The df output shouldn't be null for the new json schema? data_json.show() #+-------+----+---------+-----+ #| id| zip| phnumber| name| #+-------+----+---------+-----+ #|0123456|null| null| null| #|0123456|null| null| null| #+-------+----+---------+-----+
[ "The data column actually contains a json array so the schema must be an ArrayType:\nschema = ArrayType(\n elementType = StructType(\n [\n StructField('zip', StringType(), True),\n StructField('phnumber', StringType(), True),\n StructField('name', StringType(), True)\n ]\n )\n)\ndata_json = df.withColumn(\"data\", F.from_json(\"data\", schema))\n\nwhich results in the following schema:\nroot\n |-- id: long (nullable = true)\n |-- data: array (nullable = true)\n | |-- element: struct (containsNull = true)\n | | |-- zip: string (nullable = true)\n | | |-- phnumber: string (nullable = true)\n | | |-- name: string (nullable = true)\n\nNow if you want each element of the array in a separate row, you can explode it and extract the fields you need:\ndata_json = df.withColumn(\"data\", F.from_json(\"data\", schema)) \\\n .withColumn(\"data\", F.explode(\"data\")) \\\n .select(F.col('id'), F.col('data.*'))\n\nResult:\n+---+-----+----------+----+\n| id| zip| phnumber|name|\n+---+-----+----------+----+\n| 1|78910|5678910123| -|\n| 1|78920|5678910123| -|\n+---+-----+----------+----+\n\n" ]
[ 0 ]
[]
[]
[ "apache_spark_sql", "pyspark", "python" ]
stackoverflow_0074513136_apache_spark_sql_pyspark_python.txt
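For completeness, a self-contained sketch of the accepted approach with the imports it silently assumes (F is the conventional alias for pyspark.sql.functions):

from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StructType, StructField, StringType

schema = ArrayType(StructType([
    StructField('zip', StringType(), True),
    StructField('phnumber', StringType(), True),
    StructField('name', StringType(), True),
]))

# parse the JSON array, then emit one row per array element
data_json = (df.withColumn('data', F.from_json('data', schema))
               .withColumn('data', F.explode('data'))
               .select(F.col('id'), F.col('data.*')))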
Q: Merge dataframes from two dictionaries through a loop Tried to keep this relatively simple but let me know if you need more information. I have 2 dictionaries made up of three dataframes each; these have been produced through loops then added into a dictionary. They have the keys ['XAUUSD', 'EURUSD', 'GBPUSD'] in common: trades_dict {'XAUUSD': df_trades_1 'EURUSD': df_trades_2 'GBPUSD': df_trades_3} prices_dict {'XAUUSD': df_prices_1 'EURUSD': df_prices_2 'GBPUSD': df_prices_3} I would like to merge the tables on the closest timestamps to produce 3 new dataframes, such that the XAUUSD trades dataframe is merged with the corresponding XAUUSD prices dataframe, and so on. I have been able to join the dataframes in a loop using: df_merge_list = [] for trades in trades_dict.values(): for prices in prices_dict.values(): df_merge = pd.merge_asof(trades, prices, left_on='transact_time', right_on='time', direction='backward') df_merge_list.append(df_merge) However, this produces a list of 9 dataframes: XAUUSD trades + XAUUSD price, XAUUSD trades + EURUSD price and XAUUSD trades + GBPUSD price etc. Is there a way for me to join only the dataframes where the keys are identical? I'm assuming it will need to be something like this: if trades_dict.keys() == prices_dict.keys(): df_merge_list = [] for trades in trades_dict.values(): for prices in prices_dict.values(): if trades_dict.keys() == prices_dict.keys(): df_merge = pd.merge_asof(trades, prices, left_on='transact_time', right_on='time', direction='backward') df_merge_list.append(df_merge) but I'm getting the same result as above. Am I close? How can I do this for all instruments and only produce the 3 outputs I need? Any help is appreciated. Thanks in advance A: """ Pseudocode : For each key in the list of keys in trades_dict : Pick that key's value (trades df) from trades_dict Using the same key, pick corresponding value (prices df) from prices_dict Merge both values (trades & prices dataframes) """ df_merge_list = [] for key in trades_dict.keys(): trades = trades_dict[key] prices = prices_dict[key] # using the same key to get corresponding prices df_merge = pd.merge_asof(trades, prices, left_on='transact_time', right_on='time', direction='backward') df_merge_list.append(df_merge) What went wrong in the code posted in the question? The nested for loop creates a cartesian product: 3 iterations in the outer loop multiplied by 3 iterations in the inner loop = 9 iterations. The result of trades_dict.keys() == prices_dict.keys() is True in all 9 iterations; dict_a_all_keys == dict_b_all_keys is not the same as dict_a_key_1 == dict_b_key_1.
So, you could iterate through the keys of each dictionary and check whether they match in the nested loop, like this: df_merge_list = [] for trades_key in trades_dict.keys(): for prices_key in prices_dict.keys(): if trades_key == prices_key: trades = trades_dict[trades_key] prices = prices_dict[trades_key] # since trades_key is same as prices_key, they are interchangeable df_merge = pd.merge_asof(trades, prices, left_on='transact_time', right_on='time', direction='backward') df_merge_list.append(df_merge) A: You need to provide the exact dataframes with the correct column names in a reproducible form, but you can use a dictionary like this: import numpy as np import pandas as pd np.random.seed(42) df_trades_1 = df_trades_2 = df_trades_3 = pd.DataFrame(np.random.rand(10, 2), columns = ['ID1', 'Val1']) df_prices_1 = df_prices_2 = df_prices_3 = pd.DataFrame(np.random.rand(10, 2), columns = ['ID2', 'Val2']) trades_dict = {'XAUUSD':df_trades_1, 'EURUSD':df_trades_2, 'GBPUSD':df_trades_3} prices_dict = {'XAUUSD':df_prices_1, 'EURUSD':df_prices_2, 'GBPUSD':df_prices_3} frames = {} for t in trades_dict.keys(): frames[t] = (pd.concat([trades_dict[t], prices_dict[t]], axis = 1)) frames['XAUUSD'] This would concatenate the two dataframes, making them both available under the same key: ID1 Val1 ID2 Val2 0 0.374540 0.950714 0.611853 0.139494 1 0.731994 0.598658 0.292145 0.366362 2 0.156019 0.155995 0.456070 0.785176 3 0.058084 0.866176 0.199674 0.514234 4 0.601115 0.708073 0.592415 0.046450 5 0.020584 0.969910 0.607545 0.170524 6 0.832443 0.212339 0.065052 0.948886 7 0.181825 0.183405 0.965632 0.808397 8 0.304242 0.524756 0.304614 0.097672 9 0.431945 0.291229 0.684233 0.440152 You may need some error checking in case your keys don't match, or a different kind of join (left, right, inner etc.) depending upon your columns, but that's the gist of it.
Merge dataframes from two dictionaries through a loop
Tried to keep this relatively simple but let me know if you need more information. I have 2 dictionaries made up of three dataframes each; these have been produced through loops then added into a dictionary. They have the keys ['XAUUSD', 'EURUSD', 'GBPUSD'] in common: trades_dict {'XAUUSD': df_trades_1 'EURUSD': df_trades_2 'GBPUSD': df_trades_3} prices_dict {'XAUUSD': df_prices_1 'EURUSD': df_prices_2 'GBPUSD': df_prices_3} I would like to merge the tables on the closest timestamps to produce 3 new dataframes, such that the XAUUSD trades dataframe is merged with the corresponding XAUUSD prices dataframe, and so on. I have been able to join the dataframes in a loop using: df_merge_list = [] for trades in trades_dict.values(): for prices in prices_dict.values(): df_merge = pd.merge_asof(trades, prices, left_on='transact_time', right_on='time', direction='backward') df_merge_list.append(df_merge) However, this produces a list of 9 dataframes: XAUUSD trades + XAUUSD price, XAUUSD trades + EURUSD price and XAUUSD trades + GBPUSD price etc. Is there a way for me to join only the dataframes where the keys are identical? I'm assuming it will need to be something like this: if trades_dict.keys() == prices_dict.keys(): df_merge_list = [] for trades in trades_dict.values(): for prices in prices_dict.values(): if trades_dict.keys() == prices_dict.keys(): df_merge = pd.merge_asof(trades, prices, left_on='transact_time', right_on='time', direction='backward') df_merge_list.append(df_merge) but I'm getting the same result as above. Am I close? How can I do this for all instruments and only produce the 3 outputs I need? Any help is appreciated. Thanks in advance
[ "\"\"\"\nPseudocode :\nFor each key in the list of keys in trades_dict :\n Pick that key's value (trades df) from trades_dict\n Using the same key, pick corresponding value (prices df) from prices_dict\n Merge both values (trades & prices dataframes)\n\"\"\"\n\ndf_merge_list = []\n\nfor key in trades_dict.keys():\n trades = trades_dict[key]\n prices = prices_dict[key] # using the same key to get corresponding prices\n\n df_merge = pd.merge_asof(trades, prices, left_on='transact_time', right_on='time', direction='backward')\n df_merge_list.append(df_merge)\n\nWhat went wrong in code posted in question?\n\nNested for loop creates cartesian product\n3 iterations in outer loop multiplied by 3 iterations in inner loop = 9 iterations\n\nResult of trades_dict.keys() == prices_dict.keys() is True in all 9 iterations\ndict_a_all_keys == dict_b_all_keys is not same as dict_a_key_1 == dict_b_key_1. So, you could iterate through keys of dictionary and check if they are matching in nested loop, like this :\n\n\ndf_merge_list = []\n\nfor trades_key in trades_dict.keys():\n for prices_key in prices_dict.keys():\n if trades_key == prices_key:\n trades = trades_dict[trades_key]\n prices = prices_dict[trades_key] # since trades_key is same as prices_key, they are interchangeable\n df_merge = pd.merge_asof(trades, prices, left_on='transact_time', right_on='time', direction='backward')\n df_merge_list.append(df_merge)\n\n", "You need to provide the exact dataframes with the correct column names in a reproducible form but you can use a dictionary like this:\nimport numpy as np\nimport pandas as pd\n\nnp.random.seed(42)\ndf_trades_1 = df_trades_2 = df_trades_3 = pd.DataFrame(np.random.rand(10, 2), columns = ['ID1', 'Val1'])\ndf_prices_1 = df_prices_2 = df_prices_3 = pd.DataFrame(np.random.rand(10, 2), columns = ['ID2', 'Val2'])\ntrades_dict = {'XAUUSD':df_trades_1, 'EURUSD':df_trades_2, 'GBPUSD':df_trades_3}\nprices_dict = {'XAUUSD':df_prices_1, 'EURUSD':df_prices_2, 'GBPUSD':df_prices_3}\n\nframes ={}\nfor t in trades_dict.keys():\n frames[t] = (pd.concat([trades_dict[t], prices_dict[t]], axis = 1))\nframes['XAUUSD']\n\nThis would concatenate the two dataframes, making them both available under the same key:\n ID1 Val1 ID2 Val2\n0 0.374540 0.950714 0.611853 0.139494\n1 0.731994 0.598658 0.292145 0.366362\n2 0.156019 0.155995 0.456070 0.785176\n3 0.058084 0.866176 0.199674 0.514234\n4 0.601115 0.708073 0.592415 0.046450\n5 0.020584 0.969910 0.607545 0.170524\n6 0.832443 0.212339 0.065052 0.948886\n7 0.181825 0.183405 0.965632 0.808397\n8 0.304242 0.524756 0.304614 0.097672\n9 0.431945 0.291229 0.684233 0.440152\n\nYou may need some error checking in case your keys don't match or the kind of join (left, right, inner etc.) depending upon your columns but that's the gist of it.\n" ]
[ 1, 0 ]
[]
[]
[ "dictionary", "loops", "merge", "pandas", "python" ]
stackoverflow_0074564510_dictionary_loops_merge_pandas_python.txt
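An equivalent dictionary-comprehension sketch; intersecting the key views guards against a key that exists in only one of the dictionaries:

df_merge_dict = {
    key: pd.merge_asof(trades_dict[key], prices_dict[key],
                       left_on='transact_time', right_on='time',
                       direction='backward')
    for key in trades_dict.keys() & prices_dict.keys()
}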
Q: Is there a cleaner way to replace characters in a text file? I am trying to replace characters in a text file; the code works, but it just seems too long. I was wondering if there is a different way to do this? (It is a good way for me to learn a better approach than just a long repetitive one.) Thanks with open('documento.txt', 'r') as file: filedata = file.read() filedata = filedata.replace('+', 'e') filedata = filedata.replace('P', 'a') filedata = filedata.replace('B', 'o') filedata = filedata.replace('N', 's') filedata = filedata.replace('K', 'n') filedata = filedata.replace('X', 'r') filedata = filedata.replace('Q', 'i') filedata = filedata.replace('T', 'l') filedata = filedata.replace('*', 'd') filedata = filedata.replace('Y', 'u') filedata = filedata.replace('_', 'c') filedata = filedata.replace('V', 't') filedata = filedata.replace('H', 'm') filedata = filedata.replace('D', 'q') filedata = filedata.replace('M', 'h') filedata = filedata.replace('R', 'j') with open('documento.txt', 'w') as file: file.write(filedata) A: In order to make this process more efficient, you may want to consider using a for loop with parallel lists containing what you want to replace and what you want to replace with. In your case, the code would look something like this: beforeList = ['+', 'P', 'B', 'N', 'K', 'X', 'Q', 'T', '*', 'Y', '_', 'V', 'H', 'D', 'M', 'R'] afterList = ['e', 'a', 'o', 's', 'n', 'r', 'i', 'l', 'd', 'u', 'c', 't', 'm', 'q', 'h', 'j'] with open('documento.txt', 'r') as file: filedata = file.read() for i in range(len(beforeList)): filedata = filedata.replace(beforeList[i], afterList[i]) with open('documento.txt', 'w') as file: file.write(filedata) Because the characters you want to replace are 'parallel' to the characters you want to replace with, their indexes will match, allowing the for loop to go through each original character and replace them accordingly.
Is there a cleaner way to replace characters in a text file?
I am trying to replace characters in a text file; the code works, but it just seems too long. I was wondering if there is a different way to do this? (It is a good way for me to learn a better approach than just a long repetitive one.) Thanks with open('documento.txt', 'r') as file: filedata = file.read() filedata = filedata.replace('+', 'e') filedata = filedata.replace('P', 'a') filedata = filedata.replace('B', 'o') filedata = filedata.replace('N', 's') filedata = filedata.replace('K', 'n') filedata = filedata.replace('X', 'r') filedata = filedata.replace('Q', 'i') filedata = filedata.replace('T', 'l') filedata = filedata.replace('*', 'd') filedata = filedata.replace('Y', 'u') filedata = filedata.replace('_', 'c') filedata = filedata.replace('V', 't') filedata = filedata.replace('H', 'm') filedata = filedata.replace('D', 'q') filedata = filedata.replace('M', 'h') filedata = filedata.replace('R', 'j') with open('documento.txt', 'w') as file: file.write(filedata)
[ "In order to make this process more efficient, you may want to consider using a for loop with parallel lists containing what you want to replace and what you want to replace with. In your case, the code would look something like this:\nbeforeList = ['+', 'P', 'B', 'N', 'K', 'X', 'Q', 'T', '*', 'Y', '_', 'V', 'H', 'D', 'M', 'R']\nafterList = ['e', 'a', 'o', 's', 'n', 'r', 'i', 'l', 'd', 'u', 'c', 't', 'm', 'q', 'h', 'j']\n\nwith open('documento.txt', 'r') as file:\n filedata = file.read()\n\nfor i in range(len(beforeList)):\n filedata = filedata.replace(beforeList[i], afterList[i])\n\nwith open('documento.txt', 'w') as file:\n file.write(filedata)\n\nBecause the characters you want to replace are 'parallel' to the characters you want to replace with, their indexes will match, allowing the for loop to go through each original character and replace them accordingly.\n" ]
[ 0 ]
[]
[]
[ "python", "replace" ]
stackoverflow_0074564829_python_replace.txt
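An even shorter idiom is str.translate, which applies all substitutions in a single pass over the text; a sketch using the same mapping as the question. Unlike a chain of replace calls, translate can never re-replace a character produced by an earlier substitution:

# one translation table: each character on the left maps to the
# character at the same position on the right
table = str.maketrans('+PBNKXQT*Y_VHDMR', 'eaosnrilductmqhj')

with open('documento.txt', 'r') as file:
    filedata = file.read()

with open('documento.txt', 'w') as file:
    file.write(filedata.translate(table))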
Q: Error: Could not locate a Flask application in VSCode I am trying to learn Flask using VScode. The tutorial that I am following is: Python Flask Tutorial: Full-Featured Web App Part 1 - Getting Started. I did the following things: Created a new virtualenv in a folder using: virtualenv venv activated it as: venv\Scripts\activate (I am on Windows 10) After that, I created a new directory named Flask_Blog using mkdir Flask_Blog and in it, I created a new flaskblog.py file containing the following code: from flask import Flask app = Flask(__name__) @app.route('/') def hello(): return 'Hello' Then, in the terminal of VScode, I changed my working directory in order to be in the Flask_Blog directory using cd Flask_Blog. Now, when I am doing set FLASK_APP=flaskblog.py followed by flask run, I am getting the following error: (venv) PS C:\Users\kashy\OneDrive\Desktop\Flask\Flask_Blog> flask run * Environment: production WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Debug mode: off Usage: flask run [OPTIONS] Error: Could not locate a Flask application. You did not provide the "FLASK_APP" environment variable, and a "wsgi.py" or "app.py" module was not found in the current directory. But when I do the same in the cmd prompt, the code runs and I get to see the output. I am completely new to this. Can anyone please tell me what mistake I am making in VSCode and why it works in cmd? A: Issue raised in VsCode Under Powershell, you have to set the FLASK_APP environment variable as follows: $env:FLASK_APP = "webapp" Then you should be able to run "python -m flask run" inside the hello_app folder. In other words, PowerShell manages environment variables differently, so the standard command-line "set FLASK_APP=webapp" won't work. A: Try set FLASK_APP=full-path-to-folder/filename.py (no spaces around the equals sign). This worked for me A: This worked for me in VSCode: $env:FLASK_APP= 'C:\Python\ex003\main:app'
Error: Could not locate a Flask application in VSCode
I am trying to learn Flask using VScode. The tutorial that I am following is: Python Flask Tutorial: Full-Featured Web App Part 1 - Getting Started. I did the following things: Created a new virtualenv in a folder using: virtualenv venv activated it as: venv\Scripts\activate (I am on Windows 10) After that, I created a new directory named Flask_Blog using mkdir Flask_Blog and in it, I created a new flaskblog.py file containing the following code: from flask import Flask app = Flask(__name__) @app.route('/') def hello(): return 'Hello' Then, in the terminal of VScode, I changed my working directory in order to be in the Flask_Blog directory using cd Flask_Blog. Now, when I am doing set FLASK_APP=flaskblog.py followed by flask run, I am getting the following error: (venv) PS C:\Users\kashy\OneDrive\Desktop\Flask\Flask_Blog> flask run * Environment: production WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Debug mode: off Usage: flask run [OPTIONS] Error: Could not locate a Flask application. You did not provide the "FLASK_APP" environment variable, and a "wsgi.py" or "app.py" module was not found in the current directory. But when I do the same in the cmd prompt, the code runs and I get to see the output. I am completely new to this. Can anyone please tell me what mistake I am making in VSCode and why it works in cmd?
[ "Issue raised in VsCode\nUnder Powershell, you have to set the FLASK_APP environment variable as follows:\n$env:FLASK_APP = \"webapp\"\nThen you should be able to run \"python -m flask run\" inside the hello_app folder. In other words, PowerShell manages environment variables differently, so the standard command-line \"set FLASK_APP=webapp\" won't work.\n", "Try Set FLASK_APP = Full path of the folder/filename.py.\nThis worked for me\n", "This worked for me on the VSCode:\n$env:FLASK_APP= 'C:\\Python\\ex003\\main:app'\n" ]
[ 11, 0, 0 ]
[]
[]
[ "flask", "python", "visual_studio_code" ]
stackoverflow_0058320164_flask_python_visual_studio_code.txt
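Putting the accepted answer together with the question's file name, the PowerShell sequence would look like this (flaskblog.py is taken from the question; adjust to your own module):

$env:FLASK_APP = "flaskblog.py"
python -m flask run

In cmd.exe the equivalent is set FLASK_APP=flaskblog.py, which is why the same commands behaved differently in the two shells.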
Q: python web scraping none value issue I am trying to get the salary from this web page, but each time I get the same value, "None", no matter which tags I try! link_content = requests.get("https://wuzzuf.net/jobs/p/KxrcG1SmaBZB-Facility-Administrator-Majorel-Egypt-Alexandria-Egypt?o=1&l=sp&t=sj&a=search-v3") soup = BeautifulSoup(link_content.text, 'html.parser') salary = soup.find("span", {"class":"css-47jx3m"}) print(salary) output: None A: The page is being generated dynamically with Javascript, so Requests cannot see it as you see it. Try disabling Javascript in your browser and hard-reloading the page, and you will see a lot of information missing. However, the data exists in the page in a script tag. One way of getting that information is by slicing that script tag, to get to the information you need [EDITED to account for different encoded keys - now it should work for any job]: import requests from bs4 import BeautifulSoup as bs import json import pandas as pd pd.set_option('display.max_columns', None) pd.set_option('display.max_colwidth', None) headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' } url = 'https://wuzzuf.net/jobs/p/KxrcG1SmaBZB-Facility-Administrator-Majorel-Egypt-Alexandria-Egypt?o=1&l=sp&t=sj&a=search-v3' soup = bs(requests.get(url, headers=headers).text, 'html.parser') salary = soup.select_one('script').text.split('Wuzzuf.initialStoreState = ')[1].split('Wuzzuf.serverRenderedURL = ')[0].rsplit(';', 1)[0] data = json.loads(salary)['entities']['job']['collection'] enc_key = [x for x in data.keys()][0] df = pd.json_normalize(data[enc_key]['attributes']['salary']) print(df) Result in terminal: min max currency period additionalDetails isPaid 0 None None None None None True
python web scraping none value issue
I am trying to get the salary from this web page, but each time I get the same value, "None", no matter which tags I try! link_content = requests.get("https://wuzzuf.net/jobs/p/KxrcG1SmaBZB-Facility-Administrator-Majorel-Egypt-Alexandria-Egypt?o=1&l=sp&t=sj&a=search-v3") soup = BeautifulSoup(link_content.text, 'html.parser') salary = soup.find("span", {"class":"css-47jx3m"}) print(salary) output: None
[ "Page is being generated dynamically with Javascript, so Requests cannot see it as you see it. Try disabling Javascript in your browser and hard reload the page, and you will see a lot of information missing. However, data exists in page in a script tag.\nOne way of getting that information is by slicing that script tag, to get to the information you need [EDITED to account for different encoded keys - now it should work for any job]:\nimport requests\nfrom bs4 import BeautifulSoup as bs\nimport json\nimport pandas as pd\n\n\npd.set_option('display.max_columns', None)\npd.set_option('display.max_colwidth', None)\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'\n}\n\nurl = 'https://wuzzuf.net/jobs/p/KxrcG1SmaBZB-Facility-Administrator-Majorel-Egypt-Alexandria-Egypt?o=1&l=sp&t=sj&a=search-v3'\n\nsoup = bs(requests.get(url, headers=headers).text, 'html.parser')\nsalary = soup.select_one('script').text.split('Wuzzuf.initialStoreState = ')[1].split('Wuzzuf.serverRenderedURL = ')[0].rsplit(';', 1)[0]\ndata = json.loads(salary)['entities']['job']['collection']\nenc_key = [x for x in data.keys()][0]\ndf = pd.json_normalize(data[enc_key]['attributes']['salary'])\nprint(df)\n\nResult in terminal:\n min max currency period additionalDetails isPaid\n0 None None None None None True\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python", "web_scraping" ]
stackoverflow_0074564707_beautifulsoup_python_web_scraping.txt
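A small simplification of the key lookup in that answer: the collection dict holds a single encoded key, so building a throwaway list is unnecessary.

enc_key = next(iter(data))  # first key of the dict, same as [x for x in data.keys()][0]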
Q: Convert NETCDF file to TIFF when coordinates are variables (not coordinates) How to convert the NetCDF to TIFF, when the coordinates are stored in another NetCDF file (and are an irregular grid, since this covers the Arctic region)? An example of the NetCDF file can be downloaded here: https://drive.google.com/uc?export=download&id=1i4OGCQhKlZ056H1YHq4hTb0EbEkl-pYd The NetCDF file with the coordinates can be downloaded here: https://drive.google.com/uc?export=download&id=1WVzZ--NnHSPkJmBqlGwXAN7abXM5_uNh (Just additional information: the files only provide the following as regards coordinates): NC_GLOBAL#geospatial_bounds_crs=EPSG:4326 NC_GLOBAL#geospatial_lat_max=90 NC_GLOBAL#geospatial_lat_min=57.8 NC_GLOBAL#geospatial_lon_max=180 NC_GLOBAL#geospatial_lon_min=-180 Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0, 512.0) Upper Right ( 512.0, 0.0) Lower Right ( 512.0, 512.0) Center ( 256.0, 256.0) I know how to do a conversion with gdal_translate, but the problem is that if I apply it, my generated file will not be georeferenced, as lat/lon are not stored as coordinates but as variables in another NetCDF file. So below is my progress so far, trying to do this with GDAL-Python. It results in a rotated image that still seems not to be georeferenced. Also: it seems I managed to insert the coordinates, but their names do not change to y and x and stay as c and r, despite my having changed them (see pics below). EDIT - This is what I tried so far, and the output is a tiff (wrongly rotated) with no coordinates on the axes: import xarray as xr import numpy as np import matplotlib.pyplot as plt import rioxarray as rio xds = xr.open_dataset(r'path_to_netdfc') xdc = xr.open_dataset(r"path_to_netcdf_with_coordinates") # Adds coordinates to x and y xds.coords["c"] = xdc.mp_lon[1,:] xds.coords["r"] = xdc.mp_lat[:,1] xds # Reorganize the netCDF file into standard names/locations xds = xds.squeeze().rename_dims({"c": "x", "r": "y"}).transpose('y', 'x') xds.rio.write_crs('epsg:4326', inplace=True) # Take the variable I'm interested in df = xds['daily_fraction'] # It was giving me an error later on, so I needed to set_spatial_dims df = df.rio.set_spatial_dims(x_dim='x', y_dim='y') # Save the GeoTIFF file: df.rio.to_raster(r"C:\PHD\name_of_output.tiff") A: Your files do not follow any standard that I know of. Each dimension is in a separate dataset. If you are sure that the longitude/latitude is linear - which it might not be, given that your dataset covers the polar regions - you can simply use gdal_translate to convert to TIFF and then gdal_edit.py -a_ulurll ulx uly urx ury llx lly to set a geotransform with your coordinates. But this will work only if the longitude/latitude are linear relative to your pixels.
Convert NETCDF file to TIFF when coordinates are variables (not coordinates)
How to convert the NetCDF to TIFF, when the coordinates are stored in another NetCDF file (and are an irregular grid, since this covers the Arctic region)? An example of the NetCDF file can be downloaded here: https://drive.google.com/uc?export=download&id=1i4OGCQhKlZ056H1YHq4hTb0EbEkl-pYd The NetCDF file with the coordinates can be downloaded here: https://drive.google.com/uc?export=download&id=1WVzZ--NnHSPkJmBqlGwXAN7abXM5_uNh (Just additional information: the files only provide the following as regards coordinates): NC_GLOBAL#geospatial_bounds_crs=EPSG:4326 NC_GLOBAL#geospatial_lat_max=90 NC_GLOBAL#geospatial_lat_min=57.8 NC_GLOBAL#geospatial_lon_max=180 NC_GLOBAL#geospatial_lon_min=-180 Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0, 512.0) Upper Right ( 512.0, 0.0) Lower Right ( 512.0, 512.0) Center ( 256.0, 256.0) I know how to do a conversion with gdal_translate, but the problem is that if I apply it, my generated file will not be georeferenced, as lat/lon are not stored as coordinates but as variables in another NetCDF file. So below is my progress so far, trying to do this with GDAL-Python. It results in a rotated image that still seems not to be georeferenced. Also: it seems I managed to insert the coordinates, but their names do not change to y and x and stay as c and r, despite my having changed them (see pics below). EDIT - This is what I tried so far, and the output is a tiff (wrongly rotated) with no coordinates on the axes: import xarray as xr import numpy as np import matplotlib.pyplot as plt import rioxarray as rio xds = xr.open_dataset(r'path_to_netdfc') xdc = xr.open_dataset(r"path_to_netcdf_with_coordinates") # Adds coordinates to x and y xds.coords["c"] = xdc.mp_lon[1,:] xds.coords["r"] = xdc.mp_lat[:,1] xds # Reorganize the netCDF file into standard names/locations xds = xds.squeeze().rename_dims({"c": "x", "r": "y"}).transpose('y', 'x') xds.rio.write_crs('epsg:4326', inplace=True) # Take the variable I'm interested in df = xds['daily_fraction'] # It was giving me an error later on, so I needed to set_spatial_dims df = df.rio.set_spatial_dims(x_dim='x', y_dim='y') # Save the GeoTIFF file: df.rio.to_raster(r"C:\PHD\name_of_output.tiff")
[ "Your files do not follow any standard that I know of. Each dimension is in its separate dataset.\nIf you are sure that the longitude/latitude is linear - which it might not be given that your dataset covers the polar regions - you can simply use gdal_translate to convert to TIFF and then gdal_edit.py -a_ulurll ulx uly urx ury llx lly to set a geotransform with your coordinates. But this will work only if the longitude/latitude are linear relative to your pixels.\n" ]
[ 0 ]
[]
[]
[ "gdal", "netcat", "netcdf", "python", "tiff" ]
stackoverflow_0074545153_gdal_netcat_netcdf_python_tiff.txt
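If the grid really were linear in lon/lat (the answer warns it often is not near the pole), the same idea sketched in Python GDAL would look roughly like this; the subdataset name and corner values are assumptions taken from the question's metadata:

from osgeo import gdal, osr

# hypothetical subdataset; list the real ones with gdal.Open('input.nc').GetSubDatasets()
gdal.Translate('out.tif', 'NETCDF:"input.nc":daily_fraction')

ds = gdal.Open('out.tif', gdal.GA_Update)
ulx, uly, lrx, lry = -180.0, 90.0, 180.0, 57.8  # assumed corners from the global attributes
ds.SetGeoTransform([ulx, (lrx - ulx) / ds.RasterXSize, 0.0,
                    uly, 0.0, (lry - uly) / ds.RasterYSize])
srs = osr.SpatialReference()
srs.ImportFromEPSG(4326)
ds.SetProjection(srs.ExportToWkt())
ds = None  # close and flush to disk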
Q: How to verify RSA signature generated by Python-RSA using Crypto++ I have a server written in Python, and a C++ client. The Python server has a private RSA key, and the redistributable C++ client has the paired public key. The C++ client sends a string to the Python server, the server generates a signature by encoding this string with its private key, and sends it to the client in ASCII format. Finally, the C++ client verifies this signature to ensure a) the signature comes from the paired key and no other, and b) the signature was made based on this specific string, and no other. The Python side looks like this: import rsa from base64 import b64encode str = "message" pub, priv = rsa.newkeys(2048) keyB64 = rsa.sign(str.encode('utf-8'), priv, 'SHA-1') signature = b64encode(keyB64).decode('ascii') with open("public_key.txt", "w") as file: file.write(pub.save_pkcs1().decode('utf8')) file.close() With the generated public key file looking like this (just an example): -----BEGIN RSA PUBLIC KEY----- MIGJAoGBALqrXqb17/TiXmGGbvbFwRMV+mbCqPtvnD0zlvIKxpJ4NSBVZ2Lz87SU Ww69uFILy19G6prThJAzHha9pa3fWRKRv5epMXcP6TFZ3er0h0uaxOKxle+OtpnC xyW+QMzkhuDL1gR1OrgVW6jCV6lmVdca63+m2PfTjQj1Vc64OyWBAgMBAAE= -----END RSA PUBLIC KEY----- On the client side, I read this file and store the characters between the two tags in a string. Then it looks like this: #include <./Cryptopp/rsa.h> #include <./Cryptopp/hex.h> #include <./Cryptopp/pssr.h> inline bool RsaVerifyString(const std::string &aPublicKeyStrASCII, const std::string &str, const std::string &aSignatureStrASCII) { // decode and load public key (using pipeline) CryptoPP::RSA::PublicKey publicKey; publicKey.Load(CryptoPP::StringSource(aPublicKeyStrASCII, true).Ref()); // decode signature std::string decodedSignature; CryptoPP::StringSource ss(aSignatureStrASCII, true); // verify message bool result = false; CryptoPP::RSASS<CryptoPP::PSSR, CryptoPP::SHA1>::Verifier verifier(publicKey); CryptoPP::StringSource ss2(decodedSignature + str, true, new CryptoPP::SignatureVerificationFilter(verifier, new CryptoPP::ArraySink((unsigned char*)&result, sizeof(result)))); return result; } //... std::string message("message"); if(RsaVerifyString(publicKeyASCII, message, signatureASCII)) { std::cout << "OK" << std::endl; } But it doesn't work: it always returns false, and CryptoPP's architecture is too complicated for me to debug - whereas I'm sure it's actually very simple and just a matter of adapting parameters. Could anyone with experience in these tell me what I'm doing wrong? 
Update Trying to port it to PyCryptoDome to increase compatibility upon recommendation of a comment: from Crypto import Random from Crypto.Hash import SHA from Crypto.PublicKey import RSA from Crypto.Signature import PKCS1_v1_5 PRIV_PATH = '../priv.pem' PUB_PATH= '../pub.pem' def gen_key_pair(): random_generator = Random.new().read key = RSA.generate(2048, random_generator) print(key.exportKey(), key.publickey().exportKey()) with open(PRIV_PATH, 'wb') as file: file.write(key.exportKey()) with open(PUB_PATH, 'wb') as file: file.write(key.publickey().exportKey()) return key.exportKey(), key.publickey().exportKey() def sign_message(message): key = RSA.importKey(open(PRIV_PATH, 'rb').read()) h = SHA.new(message) signer = PKCS1_v1_5.new(key) signature = signer.sign(h) return signature def verify_sign(message, signature): key = RSA.importKey(open(PUB_PATH, 'rb').read()) h = SHA.new(message) verifier = PKCS1_v1_5.new(key) if verifier.verify(h, signature): print("The signature is authentic.") else: print("The signature is not authentic.") # TEST CRYPTO gen_key_pair() message = 'Hello pycrypto!'.encode('utf-8') signature = sign_message(message).hex() print('signature='+signature) verify_sign(message, bytes.fromhex(signature)) On the client side I expect to have the following changes to make (before cleaning the code): inline bool RsaVerifyString(const std::string &aPublicKeyStrASCII, const std::string &str, const std::string &aSignatureStrASCII) { // decode and load public key (using pipeline) CryptoPP::RSA::PublicKey publicKey; publicKey.Load(CryptoPP::StringSource(aPublicKeyStrASCII, true).Ref()); // decode signature std::string decodedSignature; CryptoPP::StringSource ss(aSignatureStrASCII, true); // verify message bool result = false; CryptoPP::RSASS<CryptoPP::PKCS1v15, CryptoPP::SHA256>::Verifier verifier(publicKey); CryptoPP::StringSource ss2(decodedSignature + str, true, new CryptoPP::SignatureVerificationFilter(verifier, new CryptoPP::ArraySink((unsigned char*)&result, sizeof(result)))); return result; } I can't yet compile Crypto++ (it's throwing failures to make functions inline) but I doubt it would work as-is. A: I abandoned Crypto++ because I couldn't get it to work on QtCreator + Windows, and used OpenSSL instead. It's horribly counterintuitive to code, but there is a lot of support and I got it to work with a member's help in this thread: Verify in OpenSSL C++ a signature generated in PyCryptoDome Use this if you are flexible on the C++ implementation of the verifying process and you can't get Crypto++ to work, all the code is there.
How to verify RSA signature generated by Python-RSA using Crypto++
I have a server written in Python, and a C++ client. The Python server has a private RSA key, and the redistributable C++ client has the paired public key. The C++ client sends a string to the Python server, the server generates a signature by encoding this string with its private key, and sends it to the client in ASCII format. Finally, the C++ client verifies this signature to ensure a) the signature comes from the paired key and no other, and b) the signature was made based on this specific string, and no other. The Python side looks like this: import rsa from base64 import b64encode str = "message" pub, priv = rsa.newkeys(2048) keyB64 = rsa.sign(str.encode('utf-8'), priv, 'SHA-1') signature = b64encode(keyB64).decode('ascii') with open("public_key.txt", "w") as file: file.write(pub.save_pkcs1().decode('utf8')) file.close() With the generated public key file looking like this (just an example): -----BEGIN RSA PUBLIC KEY----- MIGJAoGBALqrXqb17/TiXmGGbvbFwRMV+mbCqPtvnD0zlvIKxpJ4NSBVZ2Lz87SU Ww69uFILy19G6prThJAzHha9pa3fWRKRv5epMXcP6TFZ3er0h0uaxOKxle+OtpnC xyW+QMzkhuDL1gR1OrgVW6jCV6lmVdca63+m2PfTjQj1Vc64OyWBAgMBAAE= -----END RSA PUBLIC KEY----- On the client side, I read this file and store the characters between the two tags in a string. Then it looks like this: #include <./Cryptopp/rsa.h> #include <./Cryptopp/hex.h> #include <./Cryptopp/pssr.h> inline bool RsaVerifyString(const std::string &aPublicKeyStrASCII, const std::string &str, const std::string &aSignatureStrASCII) { // decode and load public key (using pipeline) CryptoPP::RSA::PublicKey publicKey; publicKey.Load(CryptoPP::StringSource(aPublicKeyStrASCII, true).Ref()); // decode signature std::string decodedSignature; CryptoPP::StringSource ss(aSignatureStrASCII, true); // verify message bool result = false; CryptoPP::RSASS<CryptoPP::PSSR, CryptoPP::SHA1>::Verifier verifier(publicKey); CryptoPP::StringSource ss2(decodedSignature + str, true, new CryptoPP::SignatureVerificationFilter(verifier, new CryptoPP::ArraySink((unsigned char*)&result, sizeof(result)))); return result; } //... std::string message("message"); if(RsaVerifyString(publicKeyASCII, message, signatureASCII)) { std::cout << "OK" << std::endl; } But it doesn't work: it always returns false, and CryptoPP's architecture is too complicated for me to debug - whereas I'm sure it's actually very simple and just a matter of adapting parameters. Could anyone with experience in these tell me what I'm doing wrong? 
Update Trying to port it to PyCryptoDome to increase compatibility upon recommendation of a comment: from Crypto import Random from Crypto.Hash import SHA from Crypto.PublicKey import RSA from Crypto.Signature import PKCS1_v1_5 PRIV_PATH = '../priv.pem' PUB_PATH= '../pub.pem' def gen_key_pair(): random_generator = Random.new().read key = RSA.generate(2048, random_generator) print(key.exportKey(), key.publickey().exportKey()) with open(PRIV_PATH, 'wb') as file: file.write(key.exportKey()) with open(PUB_PATH, 'wb') as file: file.write(key.publickey().exportKey()) return key.exportKey(), key.publickey().exportKey() def sign_message(message): key = RSA.importKey(open(PRIV_PATH, 'rb').read()) h = SHA.new(message) signer = PKCS1_v1_5.new(key) signature = signer.sign(h) return signature def verify_sign(message, signature): key = RSA.importKey(open(PUB_PATH, 'rb').read()) h = SHA.new(message) verifier = PKCS1_v1_5.new(key) if verifier.verify(h, signature): print("The signature is authentic.") else: print("The signature is not authentic.") # TEST CRYPTO gen_key_pair() message = 'Hello pycrypto!'.encode('utf-8') signature = sign_message(message).hex() print('signature='+signature) verify_sign(message, bytes.fromhex(signature)) On the client side I expect to have the following changes to make (before cleaning the code): inline bool RsaVerifyString(const std::string &aPublicKeyStrASCII, const std::string &str, const std::string &aSignatureStrASCII) { // decode and load public key (using pipeline) CryptoPP::RSA::PublicKey publicKey; publicKey.Load(CryptoPP::StringSource(aPublicKeyStrASCII, true).Ref()); // decode signature std::string decodedSignature; CryptoPP::StringSource ss(aSignatureStrASCII, true); // verify message bool result = false; CryptoPP::RSASS<CryptoPP::PKCS1v15, CryptoPP::SHA256>::Verifier verifier(publicKey); CryptoPP::StringSource ss2(decodedSignature + str, true, new CryptoPP::SignatureVerificationFilter(verifier, new CryptoPP::ArraySink((unsigned char*)&result, sizeof(result)))); return result; } I can't yet compile Crypto++ (it's throwing failures to make functions inline) but I doubt it would work as-is.
[ "I abandoned Crypto++ because I couldn't get it to work on QtCreator + Windows, and used OpenSSL instead. It's horribly counterintuitive to code, but there is a lot of support and I got it to work with a member's help in this thread: Verify in OpenSSL C++ a signature generated in PyCryptoDome\nUse this if you are flexible on the C++ implementation of the verifying process and you can't get Crypto++ to work, all the code is there.\n" ]
[ 0 ]
[]
[]
[ "c++", "crypto++", "cryptography", "python", "rsa" ]
stackoverflow_0074554044_c++_crypto++_cryptography_python_rsa.txt
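For anyone landing here, two concrete mismatches stand out in the original attempt: Python-RSA signs with PKCS#1 v1.5 (not PSS), so a Crypto++ verifier built on PSSR cannot succeed, and the C++ snippet never actually base64-decodes the signature (decodedSignature stays empty). A Python-side sanity check before debugging the C++, reusing the question's own variables:

from base64 import b64decode
import rsa

# raises rsa.VerificationError on mismatch; returns the hash name ('SHA-1') on success
print(rsa.verify('message'.encode('utf-8'), b64decode(signature), pub))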
Q: Find the maximum frequency of an element in a given Array This is the solution I have come up with, but I'm unsure whether this is the best possible solution as far as Big-O notation is concerned... def solution(A): B = [0, 0, 0, 0, 0] for i in range (len(A)): if A[i] == "Cardiology": B[0] += 1 elif A[i] == "Neurology": B[1] += 1 elif A[i] == "Orthopaedics": B[2] += 1 elif A[i] == "Gynaecology": B[3] += 1 elif A[i] == "Oncology": B[4] += 1 max_patients = max(B) return max_patients A: Because you know all of the possible values, you could use a dict with department names as keys and counts as values. You could initialize it as: departments = {"Cardiology": 0, "Neurology": 0, "Orthopaedics": 0, "Gynaecology": 0, "Oncology": 0} As a style suggestion, since you're iterating over the elements of a list, you don't need to access them by index; instead, you can loop over the list directly. Combining that with the dictionary, you can do: for dept in A: departments[dept] += 1 max_patients = max(departments.values()) return max_patients Of course, if you're willing to explore the documentation a little, the collections.Counter object does the same thing (but probably a little faster) A: You can solve this very easily with collections.Counter. You don't want or need dictionaries, side lists or anything else extra. Everything you add is another thing to break. Keep it as simple as possible. from collections import Counter def solution(A): return max(Counter(A).values()) I only stuck this in a function to give you context. There's no reason for this to be a function. Wrapping this in a function is basically just giving the operation an alias. Unfortunately, your alias doesn't give any indication of what you aliased. It's better to just put the one line in place. A: Based on assumptions and the return value: def solution(A): A = [A.count(A[i]) for i in range(len(A))] return max(A) A = ["Cardiology", "Neurology", "Oncology", "Orthopaedics", "Gynaecology", "Oncology", "Oncology"] print(solution(A)) # 3
Find the maximum frequency of an element in a given Array
This is the solution I have come up with, but I'm unsure whether this is the best possible solution as far as Big-O notation is concerned... def solution(A): B = [0, 0, 0, 0, 0] for i in range (len(A)): if A[i] == "Cardiology": B[0] += 1 elif A[i] == "Neurology": B[1] += 1 elif A[i] == "Orthopaedics": B[2] += 1 elif A[i] == "Gynaecology": B[3] += 1 elif A[i] == "Oncology": B[4] += 1 max_patients = max(B) return max_patients
[ "Because you know all of the possible values, you could use a dict with department names as keys and counts as values.\nYou could initialize it as:\ndepartments = {\"Cardiology\": 0, \"Neurology\": 0, \"Orthopaedics\": 0, \"Gynaecology\": 0, \"Oncology\": 0}\n\nAs a style suggestion, since you're iterating over the elements of a list, you don't need to access them by index, instead you can loop over the list directly. Combining that with the dictionary, you can do:\nfor dept in A:\n departments[dept] += 1\n\nmax_patients = max(departments)\nreturn max_patients\n\nOf course, if you're willing to explore the documentation a little, the collections.Counter object does the same thing (but probably a little faster)\n", "You can solve this very easily with collections.Counter. You don't want or need dictionaries, side lists or anything else extra. Everything you add is another thing to break. Keep it as simple as possible.\nfrom collections import Counter\n\ndef solution(A):\n return max(Counter(A).values())\n\nI only stuck this in a function to give you context. There's no reason for this to be a function. Wrapping this in a function is basically just giving the operation an alias. Unfortunately, your alias doesn't give any indication of what you aliased. It's better to just put the one line in place.\n", "Based on assumptions and the return value:\ndef solution(A):\n A = [A.count(A[i]) for i in set(range(len(A)))]\n return max(A)\n\nA = [\"Cardiology\", \"Neurology\", \"Oncology\", \"Orthopaedics\", \"Gynaecology\", \"Oncology\", \"Oncology\"]\nprint(solution(A))\n\n# 3\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "performance", "python" ]
stackoverflow_0074564755_performance_python.txt
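A note on the Counter answer above: if you also need which element is most frequent, not just the count, Counter.most_common does both in one O(n) pass. A minimal sketch (the function name max_frequency is my own, not from the answers):

from collections import Counter

def max_frequency(items):
    # most_common(1) returns [(element, count)] for the highest count,
    # computed in a single O(n) pass over the input.
    element, count = Counter(items).most_common(1)[0]
    return element, count

A = ["Cardiology", "Neurology", "Oncology", "Orthopaedics", "Gynaecology", "Oncology", "Oncology"]
print(max_frequency(A))  # ('Oncology', 3)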
Q: How to fill missing text values with NA while scraping? I am using beautifulsoup to create two dataframes of unique classes with text. The first dataframe has a few missing values that is messing up the alignment in rows when I join them. I tried to use an if not statement but I still get error: get_text() is empty. soup = bs(response.text, 'html5lib') for x in soup.find_all("div", {"class": "details_table"}): S = x.find("span", {"class": "search_line_s details_table_data"}).get_text(strip=True) if not x.find("span", {"class": "search_line_a__ details_table_data"}).get_text(strip=True): A = "N/A" else: A = x.find("span", {"class": "search_line_a__ details_table_data"}).get_text(strip=True) App = x.find("span", {"class": "search_line_app details_table_data"}).get_text(strip=True) df3.loc[len(df3.index)] = [S, A, App] for items in soup.find_all("a", {"class": "e_link"}): item_at = items.attrs list_of_dict_values = item_at.values() good_objects = [True, False, True, False, True, False, True, True, False, True, False, False, False] property_asel = [val for is_good, val in zip(good_objects, list_of_dict_values) if is_good] link = property_asel[0] type = property_asel[1] name = property_asel[2] category = property_asel[3] sub_category = property_asel[4] price = property_asel[5] df.loc[len(df.index)] = [name, category, sub_category, type, price, link] fac.append(items.get_text(strip=True)) result = pd.concat([df, df3], axis=1) A: Avoid calling .get_text(strip=True) in your condition, cause you have to check if the element itself is available: if not x.find("span", {"class": "search_line_a__ details_table_data"}): ... or A = x.find("span", {"class": "search_line_a__ details_table_data"}).get_text(strip=True) if x.find("span", {"class": "search_line_a__ details_table_data"}) else "N/A" or with walrus operator (needs python 3.8 and higher): A = e.get_text(strip=True) if (e:=x.find("span", {"class": "search_line_a__ details_table_data"})) else "N/A"
How to fill missing text values with NA while scraping?
I am using beautifulsoup to create two dataframes of unique classes with text. The first dataframe has a few missing values that is messing up the alignment in rows when I join them. I tried to use an if not statement but I still get error: get_text() is empty. soup = bs(response.text, 'html5lib') for x in soup.find_all("div", {"class": "details_table"}): S = x.find("span", {"class": "search_line_s details_table_data"}).get_text(strip=True) if not x.find("span", {"class": "search_line_a__ details_table_data"}).get_text(strip=True): A = "N/A" else: A = x.find("span", {"class": "search_line_a__ details_table_data"}).get_text(strip=True) App = x.find("span", {"class": "search_line_app details_table_data"}).get_text(strip=True) df3.loc[len(df3.index)] = [S, A, App] for items in soup.find_all("a", {"class": "e_link"}): item_at = items.attrs list_of_dict_values = item_at.values() good_objects = [True, False, True, False, True, False, True, True, False, True, False, False, False] property_asel = [val for is_good, val in zip(good_objects, list_of_dict_values) if is_good] link = property_asel[0] type = property_asel[1] name = property_asel[2] category = property_asel[3] sub_category = property_asel[4] price = property_asel[5] df.loc[len(df.index)] = [name, category, sub_category, type, price, link] fac.append(items.get_text(strip=True)) result = pd.concat([df, df3], axis=1)
[ "Avoid calling .get_text(strip=True) in your condition, cause you have to check if the element itself is available:\nif not x.find(\"span\", {\"class\": \"search_line_a__ details_table_data\"}):\n ...\n\nor\n A = x.find(\"span\", {\"class\": \"search_line_a__ details_table_data\"}).get_text(strip=True) if x.find(\"span\", {\"class\": \"search_line_a__ details_table_data\"}) else \"N/A\"\n\nor with walrus operator (needs python 3.8 and higher):\n A = e.get_text(strip=True) if (e:=x.find(\"span\", {\"class\": \"search_line_a__ details_table_data\"})) else \"N/A\"\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "python", "web_scraping" ]
stackoverflow_0074564440_beautifulsoup_python_web_scraping.txt
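The guard-before-get_text pattern from the answer above generalises into a small helper. This is a sketch under the assumption that the class names match the scraped page; safe_text is my own name, not part of BeautifulSoup:

from bs4 import BeautifulSoup

def safe_text(parent, tag, cls, default="N/A"):
    # find() returns None when nothing matches, so check the element
    # exists before calling get_text() to avoid an AttributeError.
    el = parent.find(tag, {"class": cls})
    return el.get_text(strip=True) if el else default

html = '<div class="details_table"><span class="search_line_s">S1</span></div>'
row = BeautifulSoup(html, "html.parser").find("div", {"class": "details_table"})
print(safe_text(row, "span", "search_line_s"))    # S1
print(safe_text(row, "span", "search_line_a__"))  # N/A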
Q: How to use link to JPG as media content in message.edit_media method in aiogram, if it requests JSON object I'm making a test bot using aiogram and I faced a problem. I want to edit message media, but I only have a link to an image, and when I try to use the 'edit_media' method aiogram tells me that it can't parse the JSON object. (error) The documentation says that the 'media' parameter must be 'A JSON-serialized object for a new media content of the message'. Here is my code: code @dp.callback_query_handler(kb.item_nav_cb.filter(action='next')) async def item_next_cb_handler(query: types.CallbackQuery, callback_data: dict): await bot.answer_callback_query(query.id) logging.info(callback_data) current_index = int(callback_data['index']) current_index += 1 link = df_new['link'].iloc[current_index] item_info = get_item_info.get_item_info(domen+link) if link in favourites[f'user_{query.from_user.id}']: item_kb = kb.get_item_fav_kb(index=current_index, link=link) else: item_kb = kb.get_item_notfav_kb(index=current_index, link=link) await query.message.edit_caption(caption=f'<b>{item_info[0]}</b>\n{item_info[1]}\n\nЦена: {item_info[2]}', reply_markup=item_kb) await query.message.edit_media(media=item_info[3]) # Here is the problem A: I found a solution: I needed to use the InputMediaPhoto type: photo = types.input_media.InputMediaPhoto(item_info[3]) await query.message.edit_media(media=photo) And all together the code looks like this: @dp.callback_query_handler(kb.item_nav_cb.filter(action='next')) async def item_next_cb_handler(query: types.CallbackQuery, callback_data: dict): await bot.answer_callback_query(query.id) logging.info(callback_data) current_index = int(callback_data['index']) current_index += 1 link = df_new['link'].iloc[current_index] item_info = get_item_info.get_item_info(domen+link) if link in favourites[f'user_{query.from_user.id}']: item_kb = kb.get_item_fav_kb(index=current_index, link=link) else: item_kb = kb.get_item_notfav_kb(index=current_index, link=link) photo = types.input_media.InputMediaPhoto(item_info[3]) # Makes a difference await query.message.edit_media(media=photo) await query.message.edit_caption(caption=f'<b>{item_info[0]}</b>\n{item_info[1]}\n\nЦена: {item_info[2]}', reply_markup=item_kb)
How to use link to JPG as media content in message.edit_media method in aiogram, if it requests JSON object
I'm making a test bot using aiogram and I faced a problem. I want to edit message media, but I only have a link to an image, and when I try to use the 'edit_media' method aiogram tells me that it can't parse the JSON object. (error) The documentation says that the 'media' parameter must be 'A JSON-serialized object for a new media content of the message'. Here is my code: code @dp.callback_query_handler(kb.item_nav_cb.filter(action='next')) async def item_next_cb_handler(query: types.CallbackQuery, callback_data: dict): await bot.answer_callback_query(query.id) logging.info(callback_data) current_index = int(callback_data['index']) current_index += 1 link = df_new['link'].iloc[current_index] item_info = get_item_info.get_item_info(domen+link) if link in favourites[f'user_{query.from_user.id}']: item_kb = kb.get_item_fav_kb(index=current_index, link=link) else: item_kb = kb.get_item_notfav_kb(index=current_index, link=link) await query.message.edit_caption(caption=f'<b>{item_info[0]}</b>\n{item_info[1]}\n\nЦена: {item_info[2]}', reply_markup=item_kb) await query.message.edit_media(media=item_info[3]) # Here is the problem
[ "Found a solution. Needed to use InputMediaPhoto type:\nphoto = types.input_media.InputMediaPhoto(item_info[3])\nawait query.message.edit_media(media=photo)\n\nAnd all together code looks like this:\n@dp.callback_query_handler(kb.item_nav_cb.filter(action='next'))\nasync def item_next_cb_handler(query: types.CallbackQuery, callback_data: dict):\nawait bot.answer_callback_query(query.id)\nlogging.info(callback_data)\ncurrent_index = int(callback_data['index'])\ncurrent_index += 1\n\nlink = df_new['link'].iloc[current_index]\nitem_info = get_item_info.get_item_info(domen+link)\n\nif link in favourites[f'user_{query.from_user.id}']:\n item_kb = kb.get_item_fav_kb(index=current_index,\n link=link)\nelse:\n item_kb = kb.get_item_notfav_kb(index=current_index,\n link=link)\n \nphoto = types.input_media.InputMediaPhoto(item_info[3]) # Makes a difference\n\nawait query.message.edit_media(media=photo)\nawait query.message.edit_caption(caption=f'<b>{item_info[0]}</b>\\n{item_info[1]}\\n\\nЦена: {item_info[2]}',\n reply_markup=item_kb)\n\n" ]
[ 0 ]
[]
[]
[ "aiogram", "python", "telegram", "telegram_bot" ]
stackoverflow_0074564885_aiogram_python_telegram_telegram_bot.txt
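A minimal sketch of the same idea, assuming aiogram 2.x: InputMediaPhoto accepts a plain URL string and can carry the new caption, so the media and the caption can be swapped in a single edit_media call (photo_url, caption and item_kb here are placeholder names, not from the original post):

from aiogram import types

async def swap_photo(message: types.Message, photo_url: str, caption: str,
                     item_kb: types.InlineKeyboardMarkup):
    # Telegram fetches the image from the URL itself; wrapping it in
    # InputMediaPhoto produces the JSON-serialized object the API expects.
    media = types.InputMediaPhoto(media=photo_url, caption=caption)
    await message.edit_media(media=media, reply_markup=item_kb)

If the caption contains HTML like the original <b> tags, parse_mode='HTML' would also need to be passed to InputMediaPhoto.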
Q: Runtime error: asyncio.run cannot be called from running event loop import discord import os import schedule import time import requests from bs4 import BeautifulSoup from discord.ext import commands intents = discord.Intents.default() intents.members = True intents.message_content = True client = commands.Bot(intents=intents, command_prefix="!") @client.event async def on_ready(): print(f'{client.user} is now online!') print('eho') run = False async def task(): while not client.is_closed(): embeds = [] headers = {'User-Agent':'Mozilla/5.0 (Linux; Android 10; SM-G980F Build/QP1A.190711.020; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/78.0.3904.96 Mobile Safari/537.36'} response = requests.get('https://www.bestbuy.com/site/misc/deal-of-the-day/pcmcat248000050016.c?id=pcmcat248000050016', headers=headers) content = response.content soup = BeautifulSoup(content, 'html.parser') wf = soup.find('div', class_='wf-wrapper') ofs = wf.findAll('div', class_='wf-offer') if ofs == None: return for of in ofs: title = of.find('a', class_="wf-offer-link v-line-clamp ").text l = of.find('a', class_='wf-offer-link') link = "https://bestbuy.com"+l.get('href') image = of.find('img', class_="wf-image img-responsive").get('src') p = of.find('div', class_="priceView-hero-price priceView-customer-price") price = p.find('span').text was = of.find('div', class_="pricing-price__regular-price") if was == None: was = 'NAN' else: was = was.text discount = of.find('div', class_="pricing-price__savings") if discount == None: discount = 'NAN' embed=discord.Embed(title=f'{title}', url=link, color=0xff9a03) embed.set_thumbnail(url=image) embed.add_field(name="Name", value=f'{title}', inline=False) embed.add_field(name="Price", value=f'{price}', inline=True) embed.add_field(name="Was", value=f'{was}', inline=True) embed.add_field(name="Discounted", value=f'{discount}', inline=True) embeds.append(embed) for embed in embeds: channel_id = '1041263119814631436'; channel = discord.utils.get(client.get_all_channels(), id=channel_id) await channel.send(embed=embed) time.sleep(20) async def main(): async with client: client.loop.create_task(task()) await client.run('***') asyncio.run(main()) #client.run(os.environ['token']) ` replaced token with *** error: title it was meant to scrape deals of the day from bestbuy and paste it to discord every 30mins/secs A: You can't call asyncio.run() inside of itself. Client.run() already calls this, so you can't use Client.run() in an async main. If you only want to log something to Discord, you don't need a Client/Bot at all. This can just be done using a simple Webhook. Also, Client.run() is not async, so you can't await it... If you really want to use an async main and start your bot in in it, use Client.start() instead, which is async and doesn't call asyncio.run() internally. Note that this does not configure logging, so you'll have to do that yourself.
Runtime error: asyncio.run cannot be called from running event loop
import discord import os import schedule import time import requests from bs4 import BeautifulSoup from discord.ext import commands intents = discord.Intents.default() intents.members = True intents.message_content = True client = commands.Bot(intents=intents, command_prefix="!") @client.event async def on_ready(): print(f'{client.user} is now online!') print('eho') run = False async def task(): while not client.is_closed(): embeds = [] headers = {'User-Agent':'Mozilla/5.0 (Linux; Android 10; SM-G980F Build/QP1A.190711.020; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/78.0.3904.96 Mobile Safari/537.36'} response = requests.get('https://www.bestbuy.com/site/misc/deal-of-the-day/pcmcat248000050016.c?id=pcmcat248000050016', headers=headers) content = response.content soup = BeautifulSoup(content, 'html.parser') wf = soup.find('div', class_='wf-wrapper') ofs = wf.findAll('div', class_='wf-offer') if ofs == None: return for of in ofs: title = of.find('a', class_="wf-offer-link v-line-clamp ").text l = of.find('a', class_='wf-offer-link') link = "https://bestbuy.com"+l.get('href') image = of.find('img', class_="wf-image img-responsive").get('src') p = of.find('div', class_="priceView-hero-price priceView-customer-price") price = p.find('span').text was = of.find('div', class_="pricing-price__regular-price") if was == None: was = 'NAN' else: was = was.text discount = of.find('div', class_="pricing-price__savings") if discount == None: discount = 'NAN' embed=discord.Embed(title=f'{title}', url=link, color=0xff9a03) embed.set_thumbnail(url=image) embed.add_field(name="Name", value=f'{title}', inline=False) embed.add_field(name="Price", value=f'{price}', inline=True) embed.add_field(name="Was", value=f'{was}', inline=True) embed.add_field(name="Discounted", value=f'{discount}', inline=True) embeds.append(embed) for embed in embeds: channel_id = '1041263119814631436'; channel = discord.utils.get(client.get_all_channels(), id=channel_id) await channel.send(embed=embed) time.sleep(20) async def main(): async with client: client.loop.create_task(task()) await client.run('***') asyncio.run(main()) #client.run(os.environ['token']) ` replaced token with *** error: title it was meant to scrape deals of the day from bestbuy and paste it to discord every 30mins/secs
[ "You can't call asyncio.run() inside of itself. Client.run() already calls this, so you can't use Client.run() in an async main.\nIf you only want to log something to Discord, you don't need a Client/Bot at all. This can just be done using a simple Webhook.\nAlso, Client.run() is not async, so you can't await it...\nIf you really want to use an async main and start your bot in in it, use Client.start() instead, which is async and doesn't call asyncio.run() internally. Note that this does not configure logging, so you'll have to do that yourself.\n" ]
[ 1 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074563899_discord_discord.py_python.txt
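A minimal sketch of the pattern the answer describes, assuming discord.py 2.x (the token string and the 1800-second interval are placeholders). Two other latent issues in the original snippet are worth noting: time.sleep() blocks the event loop (use await asyncio.sleep()), and channel ids are ints, so comparing against the string '1041263119814631436' will never match.

import asyncio
import discord

client = discord.Client(intents=discord.Intents.default())

async def background_task():
    await client.wait_until_ready()
    while not client.is_closed():
        # ... scrape and send embeds here ...
        await asyncio.sleep(1800)  # yields to the event loop, unlike time.sleep()

async def main():
    async with client:
        asyncio.create_task(background_task())
        await client.start("TOKEN")  # start() is awaitable; run() is not

asyncio.run(main())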
Q: Sorting students and what exams they are doing I have a list of tuples and the tuples look like this (2, 11) which means exam 2 must be taken by student 11. The exams are numbered from 0 to however many exams there are and the same with students. I need to produce a 2D list where the first list is the exams the 0th student is taking and the second list is the exams student number 1 is taking etc. I have this code: examsEachStudentsIsDoing = [] exams = [] number_of_students = 14 exams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)] for i in range(0,number_of_students): exams.clear() for j in range(0,len(exams_to_students)): if (exams_to_students[j][1]==i): exams.append(exams_to_students[j][0]) examsEachStudentsIsDoing.append(exams) print(examsEachStudentsIsDoing) if i add a print line just before examsEachStudentsIsDoing.append(exams) then i get the result: [2] [0] [0] [0] [0, 3] [0, 2] [0, 4] [0, 1, 2] [4] [0] [0] [0, 2] [0] [0, 2] [[0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2]] why is it repeatedly appending on the last students exams and not each one individually A: The exams is a list. In python, lists are passed by reference, so when you are appending exams to examsEachStudentsIsDoing you are just appending a reference to the exams in the examsEachStudentsIsDoing. At the end of the loop, for last student, the exams is set to [0,2], hence for all the entries in examsEachStudentsIsDoing, you see that value. So instead of appending the exams, you can append a copy of the current student's exams to the examsEachStudentsIsDoing. To get a copy of the list you have different options - list.copy() , copy.copy() method, or just slicing list[:]. Try the following code - examsEachStudentsIsDoing = [] exams = [] number_of_students = 14 exams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)] for i in range(0,number_of_students): exams.clear() for j in range(0,len(exams_to_students)): if (exams_to_students[j][1]==i): exams.append(exams_to_students[j][0]) examsEachStudentsIsDoing.append(exams.copy()) #updated print(examsEachStudentsIsDoing) Output: [[2], [0], [0], [0], [0, 3], [0, 2], [0, 4], [0, 1, 2], [4], [0], [0], [0, 2], [0], [0, 2]] To avoid such issues, you can create new exams list for each student, so a better way to rewrite your code might be - examsEachStudentsIsDoing = [] number_of_students = 14 exams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)] for i in range(0,number_of_students): exams = [] for j in range(0,len(exams_to_students)): if (exams_to_students[j][1]==i): exams.append(exams_to_students[j][0]) examsEachStudentsIsDoing.append(exams) print(examsEachStudentsIsDoing) A: Optionally, you can use a dictionary , I used json just to easy print. Also, there is no treatment on the data, meaning that one student can take the same exam twice. 
import json number_of_students = 14 exams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)] """ (2, 11) which means exam 2 must be taken by student 11 """ disposal = {} # Create a key for every student for i in range(0, number_of_students): disposal[i] = { 'exams': []} # loop through tuples # add exam to the designated student for value in exams_to_students: disposal[value[1]]['exams'].append(value[0]) json_object = json.dumps(disposal, indent=4) print(json_object) #Output { "0": { "exams": [ 2 ] }, "1": { "exams": [ 0 ] }, "2": { "exams": [ 0 ] }, "3": { "exams": [ 0 ] }, "4": { "exams": [ 0, 3 ] }, "5": { "exams": [ 0, 2 ] }, "6": { "exams": [ 0, 4 ] }, "7": { "exams": [ 0, 1, 2 ] }, ... } To avoid duplicated values: # loop through your tuples for value in exams_to_students: if value[0] not in disposal[value[1]]['exams']: disposal[value[1]]['exams'].append(value[0]) A: I agree with @Daniel Hao, a defaultdict offers the simplest solution for this. Try to avoid tricky solutions if you can. from collections import defaultdict exams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)] tracker = defaultdict(list) for (exam, student) in exams_to_students: tracker[student].append(exam) print("Exams by student using a defaultdict") for student in sorted(tracker.keys()): print(student, tracker[student]) print("Exams by student using a list") exams_by_student_as_list = [tracker[student] for student in sorted(tracker.keys())] for exams in exams_by_student_as_list: print(exams) A: Description of what is happening here Lists in python (also other objects) are mutable. It is a detailed article, but briefly, when you create a list and saves it in a variable like l then the l will point to a location of memory that List had created there. When you assign another variable for this list with a=l (or in your case you append it to another list) it will use the same pointer so then a and l will point to same location in your memory. when you commit the list (append, remove, clear, ...) this functions will change the list in your memory, can say they change the reference and when you need the data, all variables that point to that location will return just the same value. 
Solutions There are many solutions to this problem; one of them is to replace exams.clear() with exams = []: examsEachStudentsIsDoing = [] # no need to write exams = [] here number_of_students = 14 exams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)] for i in range(0,number_of_students): print(f'{i=}') exams = [] # replaced exams.clear() for j in range(0,len(exams_to_students)): if (exams_to_students[j][1]==i): print(f'index ({j})={exams_to_students[j]}') exams.append(exams_to_students[j][0]) print(f'{exams=}') examsEachStudentsIsDoing.append(exams) print(examsEachStudentsIsDoing) But I can also improve your code and re-write it like this: number_of_students = 14 exams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)] examsEachStudentsIsDoing = [list() for _ in range(number_of_students)] for exam, student in exams_to_students: examsEachStudentsIsDoing[student].append(exam) This code is much smaller and more reliable. For more clarity, I create a new and separate list for each student so that the said problem does not occur. And at the end I recommend you to use shorter variable names!
Sorting students and what exams they are doing
I have a list of tuples and the tuples look like this (2, 11) which means exam 2 must be taken by student 11. The exams are numbered from 0 to however many exams there are and the same with students. I need to produce a 2D list where the first list is the exams the 0th student is taking and the second list is the exams student number 1 is taking etc. I have this code: examsEachStudentsIsDoing = [] exams = [] number_of_students = 14 exams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)] for i in range(0,number_of_students): exams.clear() for j in range(0,len(exams_to_students)): if (exams_to_students[j][1]==i): exams.append(exams_to_students[j][0]) examsEachStudentsIsDoing.append(exams) print(examsEachStudentsIsDoing) if i add a print line just before examsEachStudentsIsDoing.append(exams) then i get the result: [2] [0] [0] [0] [0, 3] [0, 2] [0, 4] [0, 1, 2] [4] [0] [0] [0, 2] [0] [0, 2] [[0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2]] why is it repeatedly appending on the last students exams and not each one individually
[ "The exams is a list. In python, lists are passed by reference, so when you are appending exams to examsEachStudentsIsDoing you are just appending a reference to the exams in the examsEachStudentsIsDoing.\nAt the end of the loop, for last student, the exams is set to [0,2], hence for all the entries in examsEachStudentsIsDoing, you see that value.\nSo instead of appending the exams, you can append a copy of the current student's exams to the examsEachStudentsIsDoing. To get a copy of the list you have different options - list.copy() , copy.copy() method, or just slicing list[:].\nTry the following code -\nexamsEachStudentsIsDoing = []\nexams = []\nnumber_of_students = 14\nexams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)]\n \nfor i in range(0,number_of_students):\n exams.clear()\n for j in range(0,len(exams_to_students)):\n if (exams_to_students[j][1]==i):\n exams.append(exams_to_students[j][0])\n examsEachStudentsIsDoing.append(exams.copy()) #updated\n\nprint(examsEachStudentsIsDoing)\n\nOutput:\n[[2], [0], [0], [0], [0, 3], [0, 2], [0, 4], [0, 1, 2], [4], [0], [0], [0, 2], [0], [0, 2]]\n\n\nTo avoid such issues, you can create new exams list for each student, so a better way to rewrite your code might be -\nexamsEachStudentsIsDoing = []\nnumber_of_students = 14\nexams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)]\n \nfor i in range(0,number_of_students):\n exams = []\n for j in range(0,len(exams_to_students)):\n if (exams_to_students[j][1]==i):\n exams.append(exams_to_students[j][0])\n examsEachStudentsIsDoing.append(exams)\n\nprint(examsEachStudentsIsDoing)\n\n", "Optionally, you can use a dictionary , I used json just to easy print. Also, there is no treatment on the data, meaning that one student can take the same exam twice.\nimport json\n\nnumber_of_students = 14\nexams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)]\n\"\"\"\n(2, 11) which means exam 2 must be taken by student 11\n\"\"\"\ndisposal = {}\n# Create a key for every student\nfor i in range(0, number_of_students):\n disposal[i] = { 'exams': []}\n\n\n# loop through tuples\n# add exam to the designated student\nfor value in exams_to_students:\n disposal[value[1]]['exams'].append(value[0])\n\njson_object = json.dumps(disposal, indent=4)\nprint(json_object)\n\n#Output\n{\n \"0\": { \n \"exams\": [\n 2 \n ]\n },\n \"1\": { \n \"exams\": [\n 0 \n ]\n },\n \"2\": { \n \"exams\": [\n 0 \n ]\n },\n \"3\": { \n \"exams\": [\n 0\n ]\n },\n \"4\": {\n \"exams\": [\n 0,\n 3\n ]\n },\n \"5\": {\n \"exams\": [\n 0,\n 2\n ]\n },\n \"6\": {\n \"exams\": [\n 0,\n 4\n ]\n },\n \"7\": {\n \"exams\": [\n 0,\n 1,\n 2\n ]\n },\n ...\n}\n\nTo avoid duplicated values:\n# loop through your tuples\nfor value in exams_to_students:\n if value[0] not in disposal[value[1]]['exams']:\n disposal[value[1]]['exams'].append(value[0])\n\n", "I agree with @Daniel Hao, a defaultdict offers the simplest solution for this. 
Try to avoid tricky solutions if you can.\nfrom collections import defaultdict\n\nexams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3),\n (0, 10), (0, 13), (0, 9), (0, 11),\n (0, 12), (0, 2), (0, 7), (0, 6),\n (1, 7), (2, 7), (2, 5), (2, 0),\n (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)]\n\ntracker = defaultdict(list)\nfor (exam, student) in exams_to_students:\n tracker[student].append(exam)\n\nprint(\"Exams by student using a defaultdict\")\nfor student in sorted(tracker.keys()):\n print(student, tracker[student])\n\nprint(\"Exams by student using a list\")\nexams_by_student_as_list = [tracker[student] for student in sorted(tracker.keys())]\nfor exams in exams_by_student_as_list:\n print(exams)\n\n", "Description of what is happening here\nLists in python (also other objects) are mutable. It is a detailed article, but briefly, when you create a list and saves it in a variable like l then the l will point to a location of memory that List had created there. When you assign another variable for this list with a=l (or in your case you append it to another list) it will use the same pointer so then a and l will point to same location in your memory.\nwhen you commit the list (append, remove, clear, ...) this functions will change the list in your memory, can say they change the reference and when you need the data, all variables that point to that location will return just the same value.\nSolutions\nThere are many solutions to this problem; one of them is to replace exams.clear() with exams = []:\nexamsEachStudentsIsDoing = []\n# no need to write exams = [] here\nnumber_of_students = 14\nexams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)]\n \nfor i in range(0,number_of_students):\n print(f'{i=}')\n exams = [] # replaced exams.clear()\n for j in range(0,len(exams_to_students)):\n if (exams_to_students[j][1]==i):\n print(f'index ({j})={exams_to_students[j]}')\n exams.append(exams_to_students[j][0])\n print(f'{exams=}')\n examsEachStudentsIsDoing.append(exams)\n\nprint(examsEachStudentsIsDoing)\n\nBut I can also improve your code and re-write it like this:\nnumber_of_students = 14\nexams_to_students = [(0, 1), (0, 4), (0, 5), (0, 3), (0, 10), (0, 13), (0, 9), (0, 11), (0, 12), (0, 2), (0, 7), (0, 6), (1, 7), (2, 7), (2, 5), (2, 0), (2, 11), (2, 13), (3, 4), (4, 6), (4, 8)]\nexamsEachStudentsIsDoing = [list() for _ in range(number_of_students)]\nfor exam, student in exams_to_students:\n examsEachStudentsIsDoing[student].append(exam)\n\nThis code is much smaller and more reliable. For more clarity, I create a new and separate list for each student so that the said problem does not occur.\nAnd at the end I recommend you to use shorter variable names!\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "list", "list_comprehension", "python" ]
stackoverflow_0074564509_list_list_comprehension_python.txt
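The aliasing behaviour the answers above describe can be seen in isolation with a few lines (variable names are illustrative):

exams = []
alias = exams            # second name for the SAME list object
snapshot = exams.copy()  # independent copy of the current contents

exams.append("Cardiology")
print(alias)     # ['Cardiology'] - mutated through the other name
print(snapshot)  # [] - unaffected
print(alias is exams, snapshot is exams)  # True False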
Q: Translate curl command to python requests.get I have the following curl command which I can use to retrieve a list of users from a specific group in PagerDuty: curl -H "Accept: application/vnd.pagerduty+json;version=2" -H "Authorization: Token token=xxx" -X GET --data-urlencode "team_ids[]=abc" 'https://api.pagerduty.com/users' How can I translate this exact command to run in a python requests.get() command? I can't seem to get it to work. This is what I'm currently trying but it doesn't filter on the group: response = requests.get( 'https://api.pagerduty.com/users', params = {"data-urlencode": "team_ids[]=abc"}, headers={'Accept': 'application/vnd.pagerduty+json;version=2','Authorization': 'Token token=xxx'} ) ) Thanks! A: Simply omit --data-urlencode from your params: import requests params = { 'team_ids[]':"abc", "offset": '0', } headers = { 'Accept': 'application/vnd.pagerduty+json;version=2', 'Authorization': 'Token token=xxx', } response = requests.get('https://api.pagerduty.com/users', params=params, headers=headers) A: If the first solution is hard for you, you can use one of the websites that convert a curl command to Python/PHP etc. code: curlconverter.com https://sqqihao.github.io/trillworks.html https://www.scrapingbee.com/curl-converter/python/
Translate curl command to python requests.get
I have the following curl command which I can use to retrieve a list of users from a specific group in PagerDuty: curl -H "Accept: application/vnd.pagerduty+json;version=2" -H "Authorization: Token token=xxx" -X GET --data-urlencode "team_ids[]=abc" 'https://api.pagerduty.com/users' How can I translate this exact command to run in a python requests.get() command? I can't seem to get it to work. This is what I'm currently trying but it doesn't filter on the group: response = requests.get( 'https://api.pagerduty.com/users', params = {"data-urlencode": "team_ids[]=abc"}, headers={'Accept': 'application/vnd.pagerduty+json;version=2','Authorization': 'Token token=xxx'} ) ) Thanks!
[ "Simply omit --data-urlencode from your params:\nimport requests\n\nparams = {\n 'team_ids[]':\"abc\",\n \"offset\": '0',\n}\nheaders = {\n 'Accept': 'application/vnd.pagerduty+json;version=2',\n 'Authorization': 'Token token=xxx',\n}\n\nresponse = requests.get('https://api.pagerduty.com/users', params=params, headers=headers)\n\n", "if first solution is hard for you u can use one of websites that convert curl command to pytho/php etc ... code\ncurlconverter.com\nhttps://sqqihao.github.io/trillworks.html\nhttps://www.scrapingbee.com/curl-converter/python/\n" ]
[ 1, 0 ]
[]
[]
[ "pagerduty", "python", "python_requests" ]
stackoverflow_0074564792_pagerduty_python_python_requests.txt
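One way to convince yourself the two are equivalent is to inspect the URL requests actually builds; it percent-encodes the params just as --data-urlencode did on the curl side (the token and team id below are placeholders):

import requests

prepared = requests.Request(
    "GET",
    "https://api.pagerduty.com/users",
    params={"team_ids[]": "abc"},
    headers={
        "Accept": "application/vnd.pagerduty+json;version=2",
        "Authorization": "Token token=xxx",
    },
).prepare()

# requests URL-encodes the query string for you:
print(prepared.url)  # https://api.pagerduty.com/users?team_ids%5B%5D=abc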
Q: How to select only specific defines in a SWIG interface? I have a C header file with many defines for registers, and in my Python SWIG interface I only want to expose a few. My C header looks like this (just with many more defines): #define REG_1 0x0001 #define REG_2 0x0002 #define REG_3 0x0003 Let's say I want to have all defines accessible in my generated Python module; I can just do: %module myheader %{ #include "myheader.h" %} %include "myheader.h" But what if I only want to have REG_1 accessible/wrapped? I tried something like: %module myheader %{ #include "myheader.h" %} %constant REG_1; But it didn't work. I checked the documentation, but I'm still clueless unfortunately. Any ideas? A: The only way I know of is to be repetitive and declare exactly what you want to expose: %module myheader %{ #include "myheader.h" %} #define REG_1 0x0001 A: You can achieve this using the advanced renaming support of SWIG. Given the demo header file (defs.h): #define WRAP_ME_1 1 #define WRAP_ME_2 2 #define IGNORE_ME_1 1 #define IGNORE_ME_2 2 An interface written like: %module test %ignore ""; %rename("%s", regexmatch$name="^WRAP_ME") ""; %include "defs.h" Will wrap only things which start with 'WRAP_ME' (by regex). Likewise: %module test %rename("$ignore", regexmatch$name="^IGNORE_ME") ""; %include "defs.h" Will wrap all except those beginning with 'IGNORE_ME'. There are flavours of this you can employ, outlined in the linked docs, but that's the general gist of pattern matching to selectively wrap things.
How to select only specific defines in a SWIG interface?
I have a C header file with many defines for registers, and in my Python SWIG interface I only want to expose a few. My C header looks like this (just with many more defines): #define REG_1 0x0001 #define REG_2 0x0002 #define REG_3 0x0003 Let's say I want to have all defines accessible in my generated Python module; I can just do: %module myheader %{ #include "myheader.h" %} %include "myheader.h" But what if I only want to have REG_1 accessible/wrapped? I tried something like: %module myheader %{ #include "myheader.h" %} %constant REG_1; But it didn't work. I checked the documentation, but I'm still clueless unfortunately. Any ideas?
[ "The only way I know of is to be repetitive and declare exactly what you want to expose:\n%module myheader\n%{\n#include myheader.h\n%}\n\n#define REG_1 0x0001\n\n", "You can achieve this using the advanced renaming support of SWIG.\nGiven the demo header file (defs.h):\n#define WRAP_ME_1 1\n#define WRAP_ME_2 2\n\n#define IGNORE_ME_1 1\n#define IGNORE_ME_2 2\n\nAn interface written like:\n%module test\n\n%ignore \"\";\n%rename(\"%s\", regexmatch$name=\"^WRAP_ME\") \"\";\n%include \"defs.h\"\n\nWill wrap only things which start with 'WRAP_ME' (by regex).\nLikewise:\n%module test\n\n%rename(\"$ignore\", regexmatch$name=\"^IGNORE_ME\") \"\"; \n%include \"defs.h\"\n\nWill wrap all except those beginning with 'IGNORE_ME'.\nThere are flavours of this you can employ, outlined in the linked docs, but that's the general gist of pattern matching to selectively wrap things.\n" ]
[ 0, 0 ]
[]
[]
[ "c", "header_files", "python", "swig" ]
stackoverflow_0074385357_c_header_files_python_swig.txt
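From the Python side, the effect of the %rename/%ignore filtering in the answer can be checked directly; this sketch assumes the module name test from the %module line above and the first (WRAP_ME) interface variant:

import test  # the SWIG-generated module from the interface above

print(test.WRAP_ME_1)                # 1 - exposed as a module-level constant
print(hasattr(test, "IGNORE_ME_1"))  # False - filtered out before wrapping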
Q: Seg Fault from dictionary initialization Python So I am working on a project that deals with a large number of vehicles and transmissions between those vehicles. I have a working code that works well for small numbers of vehicles, but when I start using large numbers ~500 vehicles then the program will seg fault about half the time. I have backtraced the seg fault using faulthandler() and the dictionary initialization is where the program currently faults. Here is the code for the dictionary initialization: for j in range(self.num_vehicles): # for each vehicle if self.timeIndex == 0: self.vehicles[j].pseudo_random_number(0) self.seconds = [] for j in range(500): mso = self.vehicles[j].full_mso_range() vehicle_transmissions = [] for k in range(500): vehicle_transmissions.append(0) # Iterate through all of the other vehicles besides vehicle j for k in range(j+1, j+500): distance = float(np.ceil(np.sqrt( (self.vehicles[j].position[0] - self.vehicles[k % self.num_vehicles].position[0]) ** 2.0 + (self.vehicles[j].position[1] - self.vehicles[k % self.num_vehicles].position[1]) ** 2.0 + (self.vehicles[j].position[2] - self.vehicles[k % self.num_vehicles].position[2]) ** 2.0))) transmission = {"receiverID": int(k%500), "distance": 0, "receivedpower": 0, "successfuldecode": 0} vehicle_transmissions[transmission["receiverID"]] = transmission # store transmission self.vehicles[j].transmissions = vehicle_transmissions.copy() # save transmission to vehicle self.vehicles[j].frame += 1 # increase frame self.seconds.append(mso) # record MSO in a list that helps with collisions The Seg Fault occurs where the transmission is allocated. I have tried to simplify the allocation so try and isolate the fault but I have had no luck so far. Any advice would be appreciated. Also this code is called through a c++ code and gdb only tells me that the seg fault is happening in this function. A: The problem ended up being a error in the python to C++ interface. It only happens when there is a large number of calls, it seems to lose its place in memory and when it accesses that point again hits an error. I ended up just writing the complete code in C++ which solves the problem. If anyone else reads this its easier to go from C++ which calls Python rather than Python calling C++.
Seg Fault from dictionary initialization Python
So I am working on a project that deals with a large number of vehicles and transmissions between those vehicles. I have a working code that works well for small numbers of vehicles, but when I start using large numbers ~500 vehicles then the program will seg fault about half the time. I have backtraced the seg fault using faulthandler() and the dictionary initialization is where the program currently faults. Here is the code for the dictionary initialization: for j in range(self.num_vehicles): # for each vehicle if self.timeIndex == 0: self.vehicles[j].pseudo_random_number(0) self.seconds = [] for j in range(500): mso = self.vehicles[j].full_mso_range() vehicle_transmissions = [] for k in range(500): vehicle_transmissions.append(0) # Iterate through all of the other vehicles besides vehicle j for k in range(j+1, j+500): distance = float(np.ceil(np.sqrt( (self.vehicles[j].position[0] - self.vehicles[k % self.num_vehicles].position[0]) ** 2.0 + (self.vehicles[j].position[1] - self.vehicles[k % self.num_vehicles].position[1]) ** 2.0 + (self.vehicles[j].position[2] - self.vehicles[k % self.num_vehicles].position[2]) ** 2.0))) transmission = {"receiverID": int(k%500), "distance": 0, "receivedpower": 0, "successfuldecode": 0} vehicle_transmissions[transmission["receiverID"]] = transmission # store transmission self.vehicles[j].transmissions = vehicle_transmissions.copy() # save transmission to vehicle self.vehicles[j].frame += 1 # increase frame self.seconds.append(mso) # record MSO in a list that helps with collisions The Seg Fault occurs where the transmission is allocated. I have tried to simplify the allocation so try and isolate the fault but I have had no luck so far. Any advice would be appreciated. Also this code is called through a c++ code and gdb only tells me that the seg fault is happening in this function.
[ "The problem ended up being a error in the python to C++ interface. It only happens when there is a large number of calls, it seems to lose its place in memory and when it accesses that point again hits an error. I ended up just writing the complete code in C++ which solves the problem. If anyone else reads this its easier to go from C++ which calls Python rather than Python calling C++.\n" ]
[ 0 ]
[]
[]
[ "dictionary", "list_initialization", "python", "segmentation_fault" ]
stackoverflow_0072112587_dictionary_list_initialization_python_segmentation_fault.txt
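For reference, the faulthandler setup the asker alludes to is only a couple of lines; enabling it early makes native crashes dump the Python stack of every thread before the process dies (a minimal sketch):

import faulthandler

faulthandler.enable()  # on SIGSEGV/SIGFPE/SIGABRT, print each thread's Python
                       # traceback to stderr before the crash kills the process

# While debugging you can also dump the current state on demand:
# faulthandler.dump_traceback()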
Q: Pip: connection broken by 'ProtocolError' I am trying to install a package with pip in a fresh virtual environment on Ubuntu 20.04.5, but I keep getting the following warning, when I run pip a second time. The installation of the package fails after the first attempt. WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: There was an error checking the latest version of pip. The first attempt in a fresh environment works ./venv/bin/pip install --upgrade pip Collecting pip Using cached pip-22.2.2-py3-none-any.whl (2.0 MB) Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 20.0.2 Uninstalling pip-20.0.2: Successfully uninstalled pip-20.0.2 Successfully installed pip-22.2.2 but the same command fails afterwards and I see the warning messages. ./venv/bin/pip install --upgrade pip Requirement already satisfied: pip in ./venv/lib/python3.8/site-packages (22.2.2) WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: There was an error checking the latest version of pip. 
I cannot install other packages either ./venv/bin/pip install --upgrade numpy WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/numpy/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/numpy/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/numpy/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/numpy/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/numpy/ ERROR: Could not find a version that satisfies the requirement numpy (from versions: none) ERROR: No matching distribution found for numpy WARNING: There was an error checking the latest version of pip. Steps to reproduce Create a new environment: python3 -m venv venv source ./venv/bin/activate Check the versions I use for Python ./venv/bin/python --version Python 3.8.10 and pip ❯ ./venv/bin/pip --version pip 20.0.2 from /home/$USER/projects/venv/lib/python3.8/site-packages/pip (python 3.8) I am not using a proxy and my firewall is disabled. ❯ echo "$http_proxy" ❯ echo "$https_proxy" ❯ sudo ufw status Status: inactive I can run the same steps without problems within a Docker container on the same machine. My openssl.conf is not changed. It seems to be related to my local Python setup. My pip config list is empty. There are no config files pip config list -v For variant 'global', will try loading '/etc/xdg/pip/pip.conf' For variant 'global', will try loading '/etc/pip.conf' For variant 'user', will try loading '/home/$USER/.pip/pip.conf' For variant 'user', will try loading '/home/$USER/.config/pip/pip.conf' For variant 'site', will try loading '/home/$USER/git/infrastructure-manual-tasks/cropster-csar/resize-login-images/venv/pip.conf' I use Google DNS Link 12 (ens4) Current Scopes: DNS DefaultRoute setting: yes LLMNR setting: yes MulticastDNS setting: no DNSOverTLS setting: no DNSSEC setting: no DNSSEC supported: no Current DNS Server: 8.8.8.8 DNS Servers: 8.8.8.8 8.8.4.4 1.1.1.1 DNS Domain: ~. I noticed that I can install packages when I force pip to use a different mirror which does not enforce HTTPS. So the problem seems to be related to SSL, but I cannot find the source of it. This works ./venv/bin/pip install --upgrade -i http://pypi.douban.com/simple --trusted-host pypi.douban.com numpy Looking in indexes: http://pypi.douban.com/simple Collecting numpy Downloading http://pypi.doubanio.com/packages/d6/e2/bed33bdbf513cd6d3fcb4377792ef1b8aad941da542a191e1e2a98c6621f/numpy-1.23.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.1/17.1 MB 6.4 MB/s eta 0:00:00 Installing collected packages: numpy Successfully installed numpy-1.23.3 but this does not when using HTTPS. 
./venv/bin/pip install --upgrade -i https://pypi.douban.com/simple --trusted-host pypi.douban.com pandas Looking in indexes: https://pypi.douban.com/simple WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pandas/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pandas/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pandas/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pandas/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pandas/ ERROR: Could not find a version that satisfies the requirement pandas (from versions: none) ERROR: No matching distribution found for pandas I have no clue where else to look. Updates Testing with curl Accessing the repository with curl seems fine. ❯ curl -v -I https://pypi.douban.com/simple * Trying 140.143.177.206:443... * TCP_NODELAY set * Connected to pypi.douban.com (140.143.177.206) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/ssl/certs/ca-certificates.crt CApath: /etc/ssl/certs * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (IN), TLS handshake, Server key exchange (12): * TLSv1.2 (IN), TLS handshake, Server finished (14): * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.2 (OUT), TLS handshake, Finished (20): * TLSv1.2 (IN), TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 * ALPN, server accepted to use http/1.1 * Server certificate: * subject: C=CN; ST=Beijing; O=Beijing Douwang Technology Co. Ltd.; CN=*.douban.com * start date: Jun 22 00:00:00 2022 GMT * expire date: Jul 23 23:59:59 2023 GMT * subjectAltName: host "pypi.douban.com" matched cert's "*.douban.com" * issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=GeoTrust RSA CA 2018 * SSL certificate verify ok. 
> HEAD /simple HTTP/1.1 > Host: pypi.douban.com > User-Agent: curl/7.68.0 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 301 Moved Permanently HTTP/1.1 301 Moved Permanently < Date: Thu, 15 Sep 2022 06:38:26 GMT Date: Thu, 15 Sep 2022 06:38:26 GMT < Content-Type: text/html Content-Type: text/html < Content-Length: 162 Content-Length: 162 < Connection: keep-alive Connection: keep-alive < Keep-Alive: timeout=30 Keep-Alive: timeout=30 < Location: https://pypi.doubanio.com/simple Location: https://pypi.doubanio.com/simple < Server: dae Server: dae < * Connection #0 to host pypi.douban.com left intact OpenSSL version ❯ openssl version -a OpenSSL 1.1.1f 31 Mar 2020 built on: Mon Jul 4 11:24:28 2022 UTC platform: debian-amd64 options: bn(64,64) rc4(16x,int) des(int) blowfish(ptr) compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g -O2 -fdebug-prefix-map=/build/openssl-51ig8V/openssl-1.1.1f=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_TLS_SECURITY_LEVEL=2 -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2 OPENSSLDIR: "/usr/lib/ssl" ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-1.1" Seeding source: os-specific Not updating pip It works with the old version of pip 20.0.2 ❯ ./venv/bin/pip install Pillow Collecting Pillow Using cached Pillow-9.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB) Installing collected packages: Pillow Successfully installed Pillow-9.2.0 ❯ ./venv/bin/pip install numpy Collecting numpy Using cached numpy-1.23.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.1 MB) Installing collected packages: numpy Successfully installed numpy-1.23.3 ❯ ./venv/bin/pip --version pip 20.0.2 Breaking change comes with pip 21.0 The release notes ❯ ./venv/bin/pip install --upgrade pip==20.3.4 Collecting pip==20.3.4 Using cached pip-20.3.4-py2.py3-none-any.whl (1.5 MB) Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 20.0.2 Uninstalling pip-20.0.2: Successfully uninstalled pip-20.0.2 Successfully installed pip-20.3.4 ❯ ./venv/bin/pip install --upgrade pip==21.0 WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ ERROR: Could not 
find a version that satisfies the requirement pip==21.0 ERROR: No matching distribution found for pip==21.0 WARNING: You are using pip version 20.3.4; however, version 22.2.2 is available. You should consider upgrading via the '/home/$USER/git/infrastructure-manual-tasks/cropster-csar/resize-login-images/venv/bin/python3 -m pip install --upgrade pip' command. A: I have encountered the same issue. It turns out I did some debugging by setting the environment variable SSLKEYLOGFILE to a file that I deleted afterwards, and pip won't work if it's unable to access or create this file. Removing the environment variable will fix it.
Pip: connection broken by 'ProtocolError'
I am trying to install a package with pip in a fresh virtual environment on Ubuntu 20.04.5, but I keep getting the following warning, when I run pip a second time. The installation of the package fails after the first attempt. WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: There was an error checking the latest version of pip. The first attempt in a fresh environment works ./venv/bin/pip install --upgrade pip Collecting pip Using cached pip-22.2.2-py3-none-any.whl (2.0 MB) Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 20.0.2 Uninstalling pip-20.0.2: Successfully uninstalled pip-20.0.2 Successfully installed pip-22.2.2 but the same command fails afterwards and I see the warning messages. ./venv/bin/pip install --upgrade pip Requirement already satisfied: pip in ./venv/lib/python3.8/site-packages (22.2.2) WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: There was an error checking the latest version of pip. 
I cannot install other packages either ./venv/bin/pip install --upgrade numpy WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/numpy/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/numpy/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/numpy/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/numpy/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/numpy/ ERROR: Could not find a version that satisfies the requirement numpy (from versions: none) ERROR: No matching distribution found for numpy WARNING: There was an error checking the latest version of pip. Steps to reproduce Create a new environment: python3 -m venv venv source ./venv/bin/activate Check the versions I use for Python ./venv/bin/python --version Python 3.8.10 and pip ❯ ./venv/bin/pip --version pip 20.0.2 from /home/$USER/projects/venv/lib/python3.8/site-packages/pip (python 3.8) I am not using a proxy and my firewall is disabled. ❯ echo "$http_proxy" ❯ echo "$https_proxy" ❯ sudo ufw status Status: inactive I can run the same steps without problems within a Docker container on the same machine. My openssl.conf is not changed. It seems to be related to my local Python setup. My pip config list is empty. There are no config files pip config list -v For variant 'global', will try loading '/etc/xdg/pip/pip.conf' For variant 'global', will try loading '/etc/pip.conf' For variant 'user', will try loading '/home/$USER/.pip/pip.conf' For variant 'user', will try loading '/home/$USER/.config/pip/pip.conf' For variant 'site', will try loading '/home/$USER/git/infrastructure-manual-tasks/cropster-csar/resize-login-images/venv/pip.conf' I use Google DNS Link 12 (ens4) Current Scopes: DNS DefaultRoute setting: yes LLMNR setting: yes MulticastDNS setting: no DNSOverTLS setting: no DNSSEC setting: no DNSSEC supported: no Current DNS Server: 8.8.8.8 DNS Servers: 8.8.8.8 8.8.4.4 1.1.1.1 DNS Domain: ~. I noticed that I can install packages when I force pip to use a different mirror which does not enforce HTTPS. So the problem seems to be related to SSL, but I cannot find the source of it. This works ./venv/bin/pip install --upgrade -i http://pypi.douban.com/simple --trusted-host pypi.douban.com numpy Looking in indexes: http://pypi.douban.com/simple Collecting numpy Downloading http://pypi.doubanio.com/packages/d6/e2/bed33bdbf513cd6d3fcb4377792ef1b8aad941da542a191e1e2a98c6621f/numpy-1.23.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.1/17.1 MB 6.4 MB/s eta 0:00:00 Installing collected packages: numpy Successfully installed numpy-1.23.3 but this does not when using HTTPS. 
./venv/bin/pip install --upgrade -i https://pypi.douban.com/simple --trusted-host pypi.douban.com pandas Looking in indexes: https://pypi.douban.com/simple WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pandas/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pandas/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pandas/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pandas/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pandas/ ERROR: Could not find a version that satisfies the requirement pandas (from versions: none) ERROR: No matching distribution found for pandas I have no clue where else to look. Updates Testing with curl Accessing the repository with curl seems fine. ❯ curl -v -I https://pypi.douban.com/simple * Trying 140.143.177.206:443... * TCP_NODELAY set * Connected to pypi.douban.com (140.143.177.206) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/ssl/certs/ca-certificates.crt CApath: /etc/ssl/certs * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (IN), TLS handshake, Server key exchange (12): * TLSv1.2 (IN), TLS handshake, Server finished (14): * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.2 (OUT), TLS handshake, Finished (20): * TLSv1.2 (IN), TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 * ALPN, server accepted to use http/1.1 * Server certificate: * subject: C=CN; ST=Beijing; O=Beijing Douwang Technology Co. Ltd.; CN=*.douban.com * start date: Jun 22 00:00:00 2022 GMT * expire date: Jul 23 23:59:59 2023 GMT * subjectAltName: host "pypi.douban.com" matched cert's "*.douban.com" * issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=GeoTrust RSA CA 2018 * SSL certificate verify ok. 
> HEAD /simple HTTP/1.1 > Host: pypi.douban.com > User-Agent: curl/7.68.0 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 301 Moved Permanently HTTP/1.1 301 Moved Permanently < Date: Thu, 15 Sep 2022 06:38:26 GMT Date: Thu, 15 Sep 2022 06:38:26 GMT < Content-Type: text/html Content-Type: text/html < Content-Length: 162 Content-Length: 162 < Connection: keep-alive Connection: keep-alive < Keep-Alive: timeout=30 Keep-Alive: timeout=30 < Location: https://pypi.doubanio.com/simple Location: https://pypi.doubanio.com/simple < Server: dae Server: dae < * Connection #0 to host pypi.douban.com left intact OpenSSL version ❯ openssl version -a OpenSSL 1.1.1f 31 Mar 2020 built on: Mon Jul 4 11:24:28 2022 UTC platform: debian-amd64 options: bn(64,64) rc4(16x,int) des(int) blowfish(ptr) compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g -O2 -fdebug-prefix-map=/build/openssl-51ig8V/openssl-1.1.1f=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_TLS_SECURITY_LEVEL=2 -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2 OPENSSLDIR: "/usr/lib/ssl" ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-1.1" Seeding source: os-specific Not updating pip It works with the old version of pip 20.0.2 ❯ ./venv/bin/pip install Pillow Collecting Pillow Using cached Pillow-9.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB) Installing collected packages: Pillow Successfully installed Pillow-9.2.0 ❯ ./venv/bin/pip install numpy Collecting numpy Using cached numpy-1.23.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.1 MB) Installing collected packages: numpy Successfully installed numpy-1.23.3 ❯ ./venv/bin/pip --version pip 20.0.2 Breaking change comes with pip 21.0 The release notes ❯ ./venv/bin/pip install --upgrade pip==20.3.4 Collecting pip==20.3.4 Using cached pip-20.3.4-py2.py3-none-any.whl (1.5 MB) Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 20.0.2 Uninstalling pip-20.0.2: Successfully uninstalled pip-20.0.2 Successfully installed pip-20.3.4 ❯ ./venv/bin/pip install --upgrade pip==21.0 WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': /simple/pip/ ERROR: Could not 
find a version that satisfies the requirement pip==21.0 ERROR: No matching distribution found for pip==21.0 WARNING: You are using pip version 20.3.4; however, version 22.2.2 is available. You should consider upgrading via the '/home/$USER/git/infrastructure-manual-tasks/cropster-csar/resize-login-images/venv/bin/python3 -m pip install --upgrade pip' command.
[ "I have encountered the same issue. Turns out I did some debugging by setting the environment variable SSLKEYLOGFILE to a file that I deleted afterwards, and pip won't work if it's unable to access or create this file.\nRemoving the environment variable will fix it\n" ]
[ 0 ]
[]
[]
[ "pip", "python", "python_3.x" ]
stackoverflow_0073726324_pip_python_python_3.x.txt
Q: Draw outside a window's bounds with pygame I'm looking to create something that interacts with your desktop in some way with pygame. What I want to do is draw something outside of the pygame window, as in anywhere on the screen. Is this possible at all? What would be even more helpful, if it is possible at all, is drawing on the screen without any window. A: One idea is to create a transparent fullscreen window that covers the desktop:
import pygame
from win32api import GetSystemMetrics
import win32api
import win32con
import win32gui

pygame.init()
screen = pygame.display.set_mode((GetSystemMetrics(0), GetSystemMetrics(1)), pygame.FULLSCREEN)
done = False

fuchsia = (255, 0, 128)  # transparency color key

hwnd = pygame.display.get_wm_info()["window"]

# Mark the window as layered, then make every fuchsia pixel fully transparent
win32gui.SetWindowLong(hwnd, win32con.GWL_EXSTYLE, win32gui.GetWindowLong(hwnd, win32con.GWL_EXSTYLE) | win32con.WS_EX_LAYERED)

win32gui.SetLayeredWindowAttributes(hwnd, win32api.RGB(*fuchsia), 0, win32con.LWA_COLORKEY)

while not done:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            done = True

    screen.fill(fuchsia)
    pygame.draw.rect(screen, (200, 200, 0), pygame.Rect(30, 30, 100, 100))
    pygame.display.update()
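If the overlay should also ignore the mouse, one possible extension (a sketch assuming the same pywin32 setup as above) is to add the WS_EX_TRANSPARENT style so clicks fall through to the desktop:

# Let mouse clicks pass through the transparent overlay (pywin32 assumed)
ex_style = win32gui.GetWindowLong(hwnd, win32con.GWL_EXSTYLE)
win32gui.SetWindowLong(hwnd, win32con.GWL_EXSTYLE, ex_style | win32con.WS_EX_TRANSPARENT)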
Draw outside a window's bounds with pygame
I'm looking to create something that interacts with your desktop in some way with pygame. What I want to do is draw something outside of the pygame window, as in anywhere on the screen. Is this possible at all? What would be even more helpful, if it is possible at all, is drawing on the screen without any window.
[ "I had an idea that to create a transparent fullscreen window That Can Be Displayed on The Desktop. \nimport pygame\nfrom win32api import GetSystemMetrics\nimport win32api\nimport win32con\nimport win32gui\n\npygame.init()\nscreen = pygame.display.set_mode((GetSystemMetrics(0),GetSystemMetrics(1)),pygame.FULLSCREEN)\ndone = False\n\nfuchsia = (255, 0, 128)\n\nhwnd = pygame.display.get_wm_info()[\"window\"]\n\nwin32gui.SetWindowLong(hwnd, win32con.GWL_EXSTYLE,win32gui.GetWindowLong(hwnd, win32con.GWL_EXSTYLE) | win32con.WS_EX_LAYERED)\n\nwin32gui.SetLayeredWindowAttributes(hwnd, win32api.RGB(*fuchsia), 0, win32con.LWA_COLORKEY)\n\nwhile not done:\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n done = True\n \n screen.fill(fuchsia)\n pygame.draw.rect(screen, (200,200,0), pygame.Rect(30, 30, 100, 100))\n pygame.display.update()\n\n" ]
[ 1 ]
[]
[]
[ "desktop", "pygame", "python", "screen" ]
stackoverflow_0074564181_desktop_pygame_python_screen.txt
Q: How can I prevent OpenCV from using a wrong path after installation / ImportError There are two other Python versions on the system: 2.7 and - in a different environment - 3.7 including OpenCV installed. For some reason I need another Python version (3.8). Therefore I installed Python 3.8 in a separate environment, and after activating this environment I installed OpenCV in this environment: I open a miniforge3 prompt (which is NOT installed on partition c:\ ), change to the miniforge3 path on partition d:\ and enter:
conda create -n Python38 python=3.8 NumPy xarray netCDF4 holoviews hvplot bokeh pandas matplotlib IPython ipywidgets datashader
After that I installed OpenCV among some other packages:
pip install opencv-python
I check the versions with:
(Python38) D:\mypath\miniforge3\envs\Python38>pip list |findstr opencv
opencv-contrib-python 4.6.0.66
opencv-python 4.6.0.66
opencv-python-headless 4.6.0.66
So it should not be due to incompatibility between versions, as is often found on the net. However, I still get an import error:
ImportError: cannot import the name '_registerMatType' from 'cv2.cv2' (c:\python38\lib\site-packages\cv2\cv2.cp38-win_amd64.pyd)
This points to the partition c:\. I think this is strange and is certainly indicative of the error. I don't understand this yet. Can anyone help me solve this problem? A: Do not install multiple package variants of OpenCV. Install exactly one variant. Remove them all, then install one of them. All of them contain the base modules. Use only the packages on PyPI (installable with pip). Those are official packages.
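A possible cleanup sequence implied by the answer, run inside the activated environment (the three names are the actual PyPI variants shown by pip list above; keeping opencv-contrib-python is an assumption, since it is a superset that includes the base modules):
pip uninstall -y opencv-python opencv-contrib-python opencv-python-headless
pip install opencv-contrib-python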
How can I prevent OpenCV from using a wrong path after installation / ImportError
There are two other Python versions on the system: 2.7 and - in a different environment - 3.7 including OpenCV installed. For some reason I need another Python version (3.8). Therefore I installed Python 3.8 in a separate environment, and after activating this environment I installed OpenCV in this environment: I open a miniforge3 prompt (which is NOT installed on partition c:\ ), change to the miniforge3 path on partition d:\ and enter:
conda create -n Python38 python=3.8 NumPy xarray netCDF4 holoviews hvplot bokeh pandas matplotlib IPython ipywidgets datashader
After that I installed OpenCV among some other packages:
pip install opencv-python
I check the versions with:
(Python38) D:\mypath\miniforge3\envs\Python38>pip list |findstr opencv
opencv-contrib-python 4.6.0.66
opencv-python 4.6.0.66
opencv-python-headless 4.6.0.66
So it should not be due to incompatibility between versions, as is often found on the net. However, I still get an import error:
ImportError: cannot import the name '_registerMatType' from 'cv2.cv2' (c:\python38\lib\site-packages\cv2\cv2.cp38-win_amd64.pyd)
This points to the partition c:\. I think this is strange and is certainly indicative of the error. I don't understand this yet. Can anyone help me solve this problem?
[ "Do not install multiple package variants of OpenCV.\nInstall exactly one variant.\nRemove them all, then install one of them.\nAll of them contain the base modules.\nUse only the packages on PyPI (installable with pip). Those are official packages.\n" ]
[ 0 ]
[]
[]
[ "conda", "installation", "opencv", "pip", "python" ]
stackoverflow_0074564371_conda_installation_opencv_pip_python.txt
Q: How to use a column in dataframe as dictionary key value? Suppose there is a dataset. Let's say that the dataset has a column A which contains city names like "New York", "California", or "Florida", and we have a dictionary like my_dict = {"New York":1, "California":2, "Florida":3}. I need to generate a column B such that if column A has the row value "New York", then column B has the value 1, as in the dictionary. I used the lambda function and it worked, but is it possible without the use of the lambda function? A: This seems rather contrived but works:
df = pd.DataFrame([my_dict]).stack().reset_index()
df.drop(df.columns[[0]], axis=1, inplace=True)
df.columns = ['A', 'B']
and gives
            A  B
0    New York  1
1  California  2
2     Florida  3
A: Method 1
import pandas as pd

my_dict = {"New York":1, "California":2, "Florida":3}

# creating dataframe from dictionary itself, for reproducing the scenario
existing_df = pd.DataFrame({"reference_column" : my_dict.keys()})

# duplicate the reference column (city column)
existing_df["value_column"] = existing_df["reference_column"]

# replace the values in duplicate column with corresponding values from dictionary
existing_df.replace({"value_column" : my_dict}, inplace = True)
Explanation: df.replace({'column' : replacement_dictionary}) is a find-and-replace technique. find compares the values of column with the keys of replacement_dictionary. If a key matches, its value is used to replace the existing value of column.
Method 2
import pandas as pd

my_dict = {"New York":1, "California":2, "Florida":3}

# reproducing original dataframe with reference city column
existing_df = pd.DataFrame({"reference_column" : my_dict.keys()})

# dictionary converted into dataframe
replacement_df = pd.DataFrame({"reference_column" : my_dict.keys(), "value_column" : my_dict.values()})

# left join both on city column with original df as left table
merge_df = existing_df.merge(replacement_df, on = ["reference_column"], how = "left")
Explanation: The dictionary can be converted into a dataframe (replacement_df) with keys as one column and values as another column. This converted dataframe can be merged with the existing dataframe on the condition that city names in the existing dataframe match city names in replacement_df.
Output of both methods:
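For completeness (neither answer mentions it), pandas has a one-line primitive for exactly this lookup; a minimal sketch using the question's names:

import pandas as pd

my_dict = {"New York": 1, "California": 2, "Florida": 3}
df = pd.DataFrame({"A": ["New York", "Florida", "California"]})

# Series.map looks every value of column A up in the dictionary;
# cities missing from my_dict become NaN instead of raising an error.
df["B"] = df["A"].map(my_dict)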
How to use a column in dataframe as dictionary key value?
Suppose there is a dataset. Let's say that the dataset has a column A which contains city names like "New York", "California", or "Florida" now we have a dictionary like my_dict = {"New York":1, "California":2, "Florida":3} So I need to generate a column B such that if column A has a row value "New York", then column B has the value 1 as in the dictionary. I used the lambda function and it worked but is it possible without the use of the lambda function?
[ "This seems rather contrived but works:\ndf = pd.DataFrame([my_dict]).stack().reset_index()\ndf.drop(df.columns[[0]], axis=1, inplace=True)\ndf.columns = ['A', 'B']\n\nand gives\n A B\n0 New York 1\n1 California 2\n2 Florida 3\n\n", "Method 1\nimport pandas as pd\n\nmy_dict = {\"New York\":1, \"California\":2, \"Florida\":3}\n\n# creating dataframe from dictionary itself, for reproducing the scenario\nexisting_df = pd.DataFrame({\"reference_column\" : my_dict.keys()})\n\n# duplicate the reference column (city column)\nexisting_df[\"value_column\"] = existing_df[\"reference_column\"]\n\n# replace the values in duplicate column with corresponding values from dictionary\nexisting_df.replace({\"value_column\" : my_dict}, inplace = True)\n\nExplanation :\ndf.replace({'column' : replacement_dictionary}) is a find and replace technique. find compares the values of column with keys of replacement_dictionary. If the key matches, its value is used to replace existing value of column\nMethod 2\nimport pandas as pd\n\nmy_dict = {\"New York\":1, \"California\":2, \"Florida\":3}\n\n# reproducing original dataframe with reference city column\nexisting_df = pd.DataFrame({\"reference_column\" : my_dict.keys()})\n\n# dictionary coverted into dataframe\nreplacement_df = pd.DataFrame({\"reference_column\" : my_dict.keys(), \"value_column\" : my_dict.values()})\n\n# left join both on city column with original df as left table\nmerge_df = existing_df.merge(replacement_df, on = [\"reference_column\"], how = \"left\")\n\nExplanation :\nDictionary can be converted into dataframe (replacement_df) with keys as one column and values as another column. This converted dataframe can be merged with existing dataframe on the condition that city names in existing dataframe should match with city names in replacement_df\nOutput of both methods :\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "dictionary", "jupyter_notebook", "python" ]
stackoverflow_0074564846_dataframe_dictionary_jupyter_notebook_python.txt
Q: Reduce steps in this simple maze puzzle? Video Puzzle Current code:
for i in range(6,0,-2):
    Spaceship.step(2)
    Dev.step(i)
    for idk in range(3):
        Dev.turnRight()
        Dev.step(i*2)
    Dev.turnRight()
    Dev.step(i)
In this puzzle the objective is to get all the items (blue things) within 6 lines of code, and I'm currently at 8 lines of code. I don't know how to minimize the number of lines. Note: Dev.step() is the robot; it can go backward by setting the value to a negative number. Spaceship.step() is the spaceship; it cannot go backward. A: This is a possible solution in only 6 lines:
for i in range(6, 0, -2):
    Spaceship.step(2)
    for k, j in enumerate([1, 2, 2, 2, 1]):
        Dev.step(i * j)
        if k != 4:
            Dev.turnRight()
The idea is to group all steps of the robot in a list in order to do a nested loop and turn only if it is not the last element of the list. A: You can avoid pythonesque code like so:
for i in range(6,0,-2):
    Spaceship.step(2)
    for idk in range(4):
        Dev.step(i)
        Dev.turnRight()
        Dev.step(i)
Reduce steps in this simple maze puzzle?
Video Puzzle Current code:
for i in range(6,0,-2):
    Spaceship.step(2)
    Dev.step(i)
    for idk in range(3):
        Dev.turnRight()
        Dev.step(i*2)
    Dev.turnRight()
    Dev.step(i)
In this puzzle the objective is to get all the items (blue things) within 6 lines of code, and I'm currently at 8 lines of code. I don't know how to minimize the number of lines. Note: Dev.step() is the robot; it can go backward by setting the value to a negative number. Spaceship.step() is the spaceship; it cannot go backward.
[ "This is a possible solution in only 6 lines:\nfor i in range(6, 0, -2):\n Spaceship.step(2)\n for k, j in enumerate([1, 2, 2, 2, 1]):\n Dev.step(i * j)\n if k != 4:\n Dev.turnRight()\n\nThe idea is to group all steps of the robot in a list in order to do a nested loop and turn only if it is not the last element of the list.\n", "You can avoid pythonesque code like so:\nfor i in range(6,0,-2):\n Spaceship.step(2)\n for idk in range(4):\n Dev.step(i)\n Dev.turnRight()\n Dev.step(i)\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074562403_python.txt
Q: How to write a text file from a dictionary in Python with values going from a list to string? I am trying to write a text file from a dictionary in which each dictionary value is a list that I want to convert to a string with "," separators, for example:
["string1","string2","string3"] --> "string1,string2,string3"
When the list becomes one string I want to write its key (also a string, with : as separator) with its corresponding string:
"key1:string1,string2,string3"
"key2:string4,string5,string6"
"key3:string7,string8,string9"
is what should be written within the text file. The dictionary would look like this, for example:
{'key1': ['string1', 'string2', 'string3'], 'key2': ['string4', 'string5', 'string6'], 'key3': ['string7', 'string8', 'string9']}
Here is the function I have so far, but as shown not much is here:
def quit(dictionary):
    with open("file.txt", 'w') as f:
        for key, value in dictionary.items():
            f.write('%s:%s\n' % (key, value))
However it writes every value into the text file as a list, so I am not sure how to fix it. I do not want to use modules such as json, just simple for loops and basic I/O. A: You can use str.join to join the dictionary values with ,:
dct = {
    "key1": ["string1", "string2", "string3"],
    "key2": ["string4", "string5", "string6"],
    "key3": ["string7", "string8", "string9"],
}

with open("output.txt", "w") as f_out:
    for k, v in dct.items():
        print("{}:{}".format(k, ",".join(v)), file=f_out)
This creates file output.txt:
key1:string1,string2,string3
key2:string4,string5,string6
key3:string7,string8,string9
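Applied to the question's own quit function, the answer's str.join idea is a one-line change; a minimal sketch:

def quit(dictionary):
    with open("file.txt", 'w') as f:
        for key, value in dictionary.items():
            # join the list into "string1,string2,string3" before writing
            f.write('%s:%s\n' % (key, ",".join(value)))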
How to write a text file from a dictionary in Python with values going from a list to string?
I am trying to write a text file from a dictionary in which each dictionary value is a list that I want to convert to a string with "," separators, for example:
["string1","string2","string3"] --> "string1,string2,string3"
When the list becomes one string I want to write its key (also a string, with : as separator) with its corresponding string:
"key1:string1,string2,string3"
"key2:string4,string5,string6"
"key3:string7,string8,string9"
is what should be written within the text file. The dictionary would look like this, for example:
{'key1': ['string1', 'string2', 'string3'], 'key2': ['string4', 'string5', 'string6'], 'key3': ['string7', 'string8', 'string9']}
Here is the function I have so far, but as shown not much is here:
def quit(dictionary):
    with open("file.txt", 'w') as f:
        for key, value in dictionary.items():
            f.write('%s:%s\n' % (key, value))
However it writes every value into the text file as a list, so I am not sure how to fix it. I do not want to use modules such as json, just simple for loops and basic I/O.
[ "You can use str.join to join the dictionary values with ,:\ndct = {\n \"key1\": [\"string1\", \"string2\", \"string3\"],\n \"key2\": [\"string4\", \"string5\", \"string6\"],\n \"key3\": [\"string7\", \"string8\", \"string9\"],\n}\n\nwith open(\"output.txt\", \"w\") as f_out:\n for k, v in dct.items():\n print(\"{}:{}\".format(k, \",\".join(v)), file=f_out)\n\nThis creates file output.txt:\nkey1:string1,string2,string3\nkey2:string4,string5,string6\nkey3:string7,string8,string9\n\n" ]
[ 3 ]
[]
[]
[ "dictionary", "io", "python", "text" ]
stackoverflow_0074565326_dictionary_io_python_text.txt
Q: Append element to 2D array How can I append an element to the first or second element of a numpy array in Python? My code does not work. I did it like this:
my_num = np.array([[],[]])
my_num[0] = np.append(my_num[0],6)
print(my_num[0])
But my_num is empty. A: Maybe try this:
my_num[0] = np.append(my_num[0],np.array([6]))
The second argument in append must be the same "shape" as the first (i.e., an array). See Numpy docs here.
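A caveat worth flagging: with my_num = np.array([[],[]]) the array has fixed shape (2, 0), so assigning a longer result back into my_num[0] cannot grow that row; NumPy arrays are fixed-size, and np.append always returns a new array. A sketch of a structure that does grow, keeping a plain list of 1-D arrays:

import numpy as np

# a list of independent 1-D arrays can grow row by row
my_num = [np.array([]), np.array([])]
my_num[0] = np.append(my_num[0], 6)
print(my_num[0])  # [6.]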
Append element to 2D array
How can I append an element to the first or second element of a numpy array in Python? My code does not work. I did it like this:
my_num = np.array([[],[]])
my_num[0] = np.append(my_num[0],6)
print(my_num[0])
But my_num is empty.
[ "Maybe try this:\nmy_num[0] = np.append(my_num[0],np.array([6]))\n\nThe second argument in append must be the same \"shape\" as the first (i.e., an array). See Numpy docs here.\n" ]
[ 1 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074565257_arrays_numpy_python.txt
Q: My simple_interest function is returning 0 and I'm not sure why even though I think my math is correct I'm trying to make a program that calculates simple interest, and the function I made to calculate the interest returns 0 when I run it, and I don't know what the problem is.
`year = int(input("Enter years: "))
month = int(input("Enter months: "))
days = int(input("Enter days: "))

totalYears = float()
interest = float()

def get_time():
    totalYears = round(year + (month * 31 + days)/365,1)
    print("total time in years is",totalYears,"years")

principal = float(input("Enter principal: "))
rate = float(input("Enter rate (in %): "))

def simple_interest():
    interest = round(principal * rate/100 * totalYears,2)
    print("Total interest earned is $",interest)

get_time()
simple_interest()`
I tried initializing the principal and rate at the start even though I know it's unnecessary, since they're initialized when I ask for the input. I assume the problem is with the variables, but I can't find it if it is. A: The variable totalYears in the get_time function is not the same as the variable totalYears referenced outside the function. When you want to assign a value to totalYears in get_time, you need to declare it global to let Python know that you are referencing the variable outside the scope of the get_time function.
year = int(input("Enter years: "))
month = int(input("Enter months: "))
days = int(input("Enter days: "))

totalYears = float()
interest = float()

principal = float(input("Enter principal: "))
rate = float(input("Enter rate (in %): "))

def get_time():
    global totalYears
    totalYears = round(year + (month * 31 + days)/365,1)
    print("total time in years is",totalYears,"years")

def simple_interest():
    interest = round(principal * (rate/100) * totalYears,2)
    print("Total interest earned is $",interest)

get_time()
simple_interest()
A: When you initialise totalYears as float(), it gets defined in the global variable scope. When you then define totalYears by calling the function get_time(), that version of totalYears gets defined inside the scope of the get_time() function call. When that function call ends, the value of totalYears is extinguished - and never gets returned for use by any other function. So the version of totalYears that's available to your next function call - simple_interest - is 0.0: the original version you initialised in the global scope, not the version you defined in the scope of get_time().
If you want totalYears to be defined inside get_time(), and then be available for use by simple_interest(), I would suggest something like this: year = int(input("Enter years: ")) month = int(input("Enter months: ")) days = int(input("Enter days: ")) totalYears = float() interest = float() def get_time(): totalYears = round(year + (month * 31 + days)/365,1) print("total time in years is",totalYears,"years") return totalYears principal = float(input("Enter principal: ")) rate = float(input("Enter rate (in %): ")) def simple_interest(totalYears): interest = principal * rate/100 * totalYears print(f'Total interest earned is ${round(interest, 2)}') totalYears = get_time() simple_interest(totalYears) If I call run your script with 5 years, 3 months, 2 days, $1000 in principal and a rate of 2%, I get the following output: Enter years: 5 Enter months: 3 Enter days: 2 Enter principal: 1000 Enter rate (in %): 2 total time in years is 5.3 years Total interest earned is $106.0 That addresses your immediate question - but a couple of points to note: First: You don't have to define totalYears and interest as empty floats. You could delete totalYears = float() and interest = float() from your script and make no difference to how it runs. Second: It's a lot easier to manage/analyse/debug your code if you try to keep your functions pure - with inputs and outputs passed into and returned from those functions. An example of how your code might be refactored to reflect this approach below: def get_inputs(): year = int(input("Enter years: ")) month = int(input("Enter months: ")) days = int(input("Enter days: ")) principal = float(input("Enter principal: ")) rate = float(input("Enter rate (in %): ")) return year, month, days, principal, rate get_inputs gets inputs from the user and returns these inputs as five clearly named variables. This function is not "pure" - i.e. it interacts with other things (i.e. the user). def get_time(year, month, days): totalYears = round(year + (month * 31 + days)/365,1) return totalYears def simple_interest(principal, rate, totalYears): interest = principal * rate/100 * totalYears return interest get_time and simple_interest are pure - in that their behaviour depends ONLY on the inputs provided to them - they have no "side effects", they don't print anything to the screen, they don't modify anything outside the scope of the function call. Their behaviour is entirely predictable and stable. def print_outputs(totalYears, interest): print(f'total time in years is {totalYears} years') print(f'Total interest earned is ${round(interest, 2)}') print_outputs like get_inputs is not pure - but it IS predictable - it takes in two inputs, and the resulting behaviour is an always predictable, repeatable output to the screen. It doesn't mutate any variables not explicitly passed to it. We then call these functions: year, month, days, principal, rate = get_inputs() totalYears = get_time(year, month, days) interest = simple_interest(principal, rate, totalYears) print_outputs(totalYears, interest) This produces exactly the same behaviour as before, but this approach makes very clear what variables are being passed into and out of various function calls, without creating any "globally" accessible variables whose names/meanings could become ambiguous. There are of course ways to define global variables in Python, but personally, I tend to run scared of doing that if at all avoidable. A: The problem is that get_time() is not updating the value of your global totalYears since you're not returning it. 
And simple_interest() won't update interest for the same reason. You can check this by printing totalYears inside simple_interest(): def simple_interest(): interest = round(principal * rate/100 * totalYears,2) #This should return 0: print(totalYears) print("Total interest earned is $",interest) One easy way is just returning totalYears and interest: def get_time(): totalYears = round(year + (month * 31 + days)/365,1) print("total time in years is",totalYears,"years") return totalYears def simple_interest(): interest = round(principal * rate/100 * totalYears,2) print("Total interest earned is $",interest) return interest year = int(input("Enter years: ")) month = int(input("Enter months: ")) days = int(input("Enter days: ")) principal = float(input("Enter principal: ")) rate = float(input("Enter rate (in %): ")) totalYears = get_time() interest = simple_interest() I hope this helps :)
My simple_interest function is returning 0 and I'm not sure why even though I think my math is correct
I'm trying to make a program that calculates simple interest, and the function I made to calculate the interest returns 0 when I run it, and I don't know what the problem is.
`year = int(input("Enter years: "))
month = int(input("Enter months: "))
days = int(input("Enter days: "))

totalYears = float()
interest = float()

def get_time():
    totalYears = round(year + (month * 31 + days)/365,1)
    print("total time in years is",totalYears,"years")

principal = float(input("Enter principal: "))
rate = float(input("Enter rate (in %): "))

def simple_interest():
    interest = round(principal * rate/100 * totalYears,2)
    print("Total interest earned is $",interest)

get_time()
simple_interest()`
I tried initializing the principal and rate at the start even though I know it's unnecessary, since they're initialized when I ask for the input. I assume the problem is with the variables, but I can't find it if it is.
[ "The variable TotalYears in the get_time function is not the same as the variable TotalYears referenced outside the function. When you want to assign a value to TotalYears in get_time, you need to declare \"Global\" to let python know that you are referencing the variable outside the scope of the get_time function.\nyear = int(input(\"Enter years: \"))\nmonth = int(input(\"Enter months: \"))\ndays = int(input(\"Enter days: \"))\n\ntotalYears = float()\ninterest = float()\n\nprincipal = float(input(\"Enter principal: \"))\nrate = float(input(\"Enter rate (in %): \"))\n\ndef get_time():\n global totalYears\n totalYears = round(year + (month * 31 + days)/365,1)\n print(\"total time in years is\",totalYears,\"years\")\n\ndef simple_interest():\n interest = round(principal * (rate/100) * totalYears,2)\n print(\"Total interest earned is $\",interest)\n\nget_time()\nsimple_interest()\n\n", "When you initialise totalYears as float(), it gets defined in the global variable scope. When you then define totalYears by calling the function get_time(), that version of totalYears gets defined inside the scope of the get_time() function call. When that function call ends, the value of totalYears is extinguished - and never gets returned for use by any other function.\nSo the version of totalYears that's available to your next function call - simple_interest is 0.0 - the original version you initialised in the global scope, not the version you defined in the scope of get_time().\nIf you want totalYears to be defined inside get_time(), and then be available for use by simple_interest(), I would suggest something like this:\nyear = int(input(\"Enter years: \"))\nmonth = int(input(\"Enter months: \"))\ndays = int(input(\"Enter days: \"))\n\ntotalYears = float()\ninterest = float()\n\ndef get_time():\n totalYears = round(year + (month * 31 + days)/365,1)\n print(\"total time in years is\",totalYears,\"years\")\n return totalYears\n\nprincipal = float(input(\"Enter principal: \"))\nrate = float(input(\"Enter rate (in %): \"))\n\ndef simple_interest(totalYears):\n interest = principal * rate/100 * totalYears\n\n print(f'Total interest earned is ${round(interest, 2)}')\n\ntotalYears = get_time()\nsimple_interest(totalYears)\n\nIf I call run your script with 5 years, 3 months, 2 days, $1000 in principal and a rate of 2%, I get the following output:\nEnter years: 5\n\nEnter months: 3\n\nEnter days: 2\n\nEnter principal: 1000\n\nEnter rate (in %): 2\ntotal time in years is 5.3 years\nTotal interest earned is $106.0\n\nThat addresses your immediate question - but a couple of points to note:\nFirst: You don't have to define totalYears and interest as empty floats. You could delete totalYears = float() and interest = float() from your script and make no difference to how it runs.\nSecond: It's a lot easier to manage/analyse/debug your code if you try to keep your functions pure - with inputs and outputs passed into and returned from those functions. An example of how your code might be refactored to reflect this approach below:\ndef get_inputs():\n\n year = int(input(\"Enter years: \"))\n month = int(input(\"Enter months: \"))\n days = int(input(\"Enter days: \"))\n principal = float(input(\"Enter principal: \"))\n rate = float(input(\"Enter rate (in %): \"))\n\n return year, month, days, principal, rate\n\nget_inputs gets inputs from the user and returns these inputs as five clearly named variables. This function is not \"pure\" - i.e. it interacts with other things (i.e. 
the user).\ndef get_time(year, month, days):\n\n totalYears = round(year + (month * 31 + days)/365,1)\n\n return totalYears\n\ndef simple_interest(principal, rate, totalYears):\n\n interest = principal * rate/100 * totalYears\n\n return interest\n\nget_time and simple_interest are pure - in that their behaviour depends ONLY on the inputs provided to them - they have no \"side effects\", they don't print anything to the screen, they don't modify anything outside the scope of the function call. Their behaviour is entirely predictable and stable.\ndef print_outputs(totalYears, interest):\n\n print(f'total time in years is {totalYears} years')\n print(f'Total interest earned is ${round(interest, 2)}')\n \n\nprint_outputs like get_inputs is not pure - but it IS predictable - it takes in two inputs, and the resulting behaviour is an always predictable, repeatable output to the screen. It doesn't mutate any variables not explicitly passed to it.\nWe then call these functions:\nyear, month, days, principal, rate = get_inputs()\ntotalYears = get_time(year, month, days)\ninterest = simple_interest(principal, rate, totalYears)\nprint_outputs(totalYears, interest)\n\nThis produces exactly the same behaviour as before, but this approach makes very clear what variables are being passed into and out of various function calls, without creating any \"globally\" accessible variables whose names/meanings could become ambiguous. There are of course ways to define global variables in Python, but personally, I tend to run scared of doing that if at all avoidable.\n", "The problem is that get_time() is not updating the value of your global totalYears since you're not returning it. And simple_interest() won't update interest for the same reason.\nYou can check this by printing totalYears inside simple_interest():\ndef simple_interest():\n interest = round(principal * rate/100 * totalYears,2)\n \n #This should return 0:\n print(totalYears)\n print(\"Total interest earned is $\",interest)\n\nOne easy way is just returning totalYears and interest:\ndef get_time():\n totalYears = round(year + (month * 31 + days)/365,1)\n print(\"total time in years is\",totalYears,\"years\")\n return totalYears\n\ndef simple_interest():\n interest = round(principal * rate/100 * totalYears,2)\n print(\"Total interest earned is $\",interest)\n return interest\n\nyear = int(input(\"Enter years: \"))\nmonth = int(input(\"Enter months: \"))\ndays = int(input(\"Enter days: \"))\nprincipal = float(input(\"Enter principal: \"))\nrate = float(input(\"Enter rate (in %): \"))\n\n\ntotalYears = get_time()\ninterest = simple_interest()\n\nI hope this helps :)\n" ]
[ 1, 1, 0 ]
[]
[]
[ "math", "python" ]
stackoverflow_0074565063_math_python.txt
Q: Cannot locate pygubu-designer.exe after installation I'm trying to install pygubu-designer, but the 'pygubu-designer.exe' never shows up in any of the folders after the installation process.
https://github.com/alejandroautalan/pygubu
https://github.com/alejandroautalan/pygubu-designer
I did:
pip install pygubu
cmd
pip install pygubu-designer
cmd
I have a couple of folders with pygubu, but pygubu-designer.exe is missing
explorer
I have Python 3.10.4 and pip 20.0.4, and Python is added as a system variable:
cmd
I tried to upgrade the pip version but it did not work A: I could not locate pygubudesigner within my python310\Scripts folder or any other, but the cmd command mentioned in this issue worked for me: https://github.com/alejandroautalan/pygubu/issues/222
python -m pygubudesigner
Cannot locate pygubu-designer.exe after installation
I'm trying to install pygubu-designer, but the 'pygubu-designer.exe' never shows up in any of the folders after the installation process.
https://github.com/alejandroautalan/pygubu
https://github.com/alejandroautalan/pygubu-designer
I did:
pip install pygubu
cmd
pip install pygubu-designer
cmd
I have a couple of folders with pygubu, but pygubu-designer.exe is missing
explorer
I have Python 3.10.4 and pip 20.0.4, and Python is added as a system variable:
cmd
I tried to upgrade the pip version but it did not work
[ "Could not locate pygubudesigner within my python310\\Scripts folder and any other but the cmd command mentioned in this issue: https://github.com/alejandroautalan/pygubu/issues/222 worked for me\npython -m pygubudesigner\n\n" ]
[ 0 ]
[]
[]
[ "pygubu", "python", "tkinter" ]
stackoverflow_0074533716_pygubu_python_tkinter.txt
Q: How to measure a text element in matplotlib I need to lay out a table full of text boxes using matplotlib. It should be obvious how to do this: create a gridspec for the table members, fill in each element of the grid, take the maximum heights and widths of the elements in the grid, change the appropriate height and widths of the grid columns and rows. Easy peasy, right? Wrong. Everything works except the measurements of the items themselves. Matplotlib consistently returns the wrong size for each item. I believe that I have been able to track this down to not even being able to measure the size of a text path correctly: import numpy as np import matplotlib.pyplot as plt import matplotlib.patches as mpatch import matplotlib.text as mtext import matplotlib.path as mpath import matplotlib.patches as mpatches fig, ax = plt.subplots(1, 1) ax.set_axis_off() text = '!?' * 16 size=36 ## Buildand measure hidden text path text_path=mtext.TextPath( (0.0, 0.0), text, prop={'size' : size} ) vertices = text_path.vertices code = text_path.codes min_x, min_y = np.min( text_path.vertices[text_path.codes != mpath.Path.CLOSEPOLY], axis=0) max_x, max_y = np.max( text_path.vertices[text_path.codes != mpath.Path.CLOSEPOLY], axis=0) ## Transform measurement to graph units transData = ax.transData.inverted() ((local_min_x, local_min_y), (local_max_x, local_max_y)) = transData.transform( ((min_x, min_y), (max_x, max_y))) ## Draw a box which should enclose the path x_offset = (local_max_x - local_max_y) / 2 y_offset = (local_max_y - local_min_y) / 2 local_min_x = 0.5 - x_offset local_min_y = 0.5 - y_offset local_max_x = 0.5 + x_offset local_max_y = 0.5 + y_offset path_data = [ (mpath.Path.MOVETO, (local_min_x, local_min_y)), (mpath.Path.LINETO, (local_max_x, local_min_y)), (mpath.Path.LINETO, (local_max_x, local_max_y)), (mpath.Path.LINETO, (local_min_x, local_max_y)), (mpath.Path.LINETO, (local_min_x, local_min_y)), (mpath.Path.CLOSEPOLY, (local_min_x, local_min_y)), ] codes, verts = zip(*path_data) path = mpath.Path(verts, codes) patch = mpatches.PathPatch( path, facecolor='white', edgecolor='red', linewidth=3) ax.add_patch(patch) ## Draw the text itself item_textbox = ax.text( 0.5, 0.5, text, bbox=dict(boxstyle='square', fc='white', ec='white', alpha=0.0), transform=ax.transAxes, size=size, horizontalalignment="center", verticalalignment="center", alpha=1.0) plt.show() Run this under Python 3.8 Expect: the red box to be the exact height and width of the text Observe: the red box is the right height, but is most definitely not the right width. A: There doesn't seem to be any way to do this directly, but there's a way to do it indirectly: instead of using a text box, use TextPath, transform it to Axis coordinates, and then use the differences between min and max on each coordinate. (See https://matplotlib.org/stable/gallery/text_labels_and_annotations/demo_text_path.html#sphx-glr-gallery-text-labels-and-annotations-demo-text-path-py for a sample implementation. This implementation has a significant bug -- it uses vertices and codes directly, which break in the case of a clipped text path.)
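One detail worth checking in the question's snippet, independent of the measurement issue the answer addresses: x_offset mixes an x with a y coordinate, which by itself would make the red box the wrong width. The intended half-extents were presumably:

## presumed fix: subtract min x from max x, not max y
x_offset = (local_max_x - local_min_x) / 2
y_offset = (local_max_y - local_min_y) / 2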
How to measure a text element in matplotlib
I need to lay out a table full of text boxes using matplotlib. It should be obvious how to do this: create a gridspec for the table members, fill in each element of the grid, take the maximum heights and widths of the elements in the grid, change the appropriate height and widths of the grid columns and rows. Easy peasy, right? Wrong. Everything works except the measurements of the items themselves. Matplotlib consistently returns the wrong size for each item. I believe that I have been able to track this down to not even being able to measure the size of a text path correctly: import numpy as np import matplotlib.pyplot as plt import matplotlib.patches as mpatch import matplotlib.text as mtext import matplotlib.path as mpath import matplotlib.patches as mpatches fig, ax = plt.subplots(1, 1) ax.set_axis_off() text = '!?' * 16 size=36 ## Buildand measure hidden text path text_path=mtext.TextPath( (0.0, 0.0), text, prop={'size' : size} ) vertices = text_path.vertices code = text_path.codes min_x, min_y = np.min( text_path.vertices[text_path.codes != mpath.Path.CLOSEPOLY], axis=0) max_x, max_y = np.max( text_path.vertices[text_path.codes != mpath.Path.CLOSEPOLY], axis=0) ## Transform measurement to graph units transData = ax.transData.inverted() ((local_min_x, local_min_y), (local_max_x, local_max_y)) = transData.transform( ((min_x, min_y), (max_x, max_y))) ## Draw a box which should enclose the path x_offset = (local_max_x - local_max_y) / 2 y_offset = (local_max_y - local_min_y) / 2 local_min_x = 0.5 - x_offset local_min_y = 0.5 - y_offset local_max_x = 0.5 + x_offset local_max_y = 0.5 + y_offset path_data = [ (mpath.Path.MOVETO, (local_min_x, local_min_y)), (mpath.Path.LINETO, (local_max_x, local_min_y)), (mpath.Path.LINETO, (local_max_x, local_max_y)), (mpath.Path.LINETO, (local_min_x, local_max_y)), (mpath.Path.LINETO, (local_min_x, local_min_y)), (mpath.Path.CLOSEPOLY, (local_min_x, local_min_y)), ] codes, verts = zip(*path_data) path = mpath.Path(verts, codes) patch = mpatches.PathPatch( path, facecolor='white', edgecolor='red', linewidth=3) ax.add_patch(patch) ## Draw the text itself item_textbox = ax.text( 0.5, 0.5, text, bbox=dict(boxstyle='square', fc='white', ec='white', alpha=0.0), transform=ax.transAxes, size=size, horizontalalignment="center", verticalalignment="center", alpha=1.0) plt.show() Run this under Python 3.8 Expect: the red box to be the exact height and width of the text Observe: the red box is the right height, but is most definitely not the right width.
[ "There doesn't seem to be any way to do this directly, but there's a way to do it indirectly: instead of using a text box, use TextPath, transform it to Axis coordinates, and then use the differences between min and max on each coordinate. (See https://matplotlib.org/stable/gallery/text_labels_and_annotations/demo_text_path.html#sphx-glr-gallery-text-labels-and-annotations-demo-text-path-py for a sample implementation. This implementation has a significant bug -- it uses vertices and codes directly, which break in the case of a clipped text path.)\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python", "text_rendering" ]
stackoverflow_0074493627_matplotlib_python_text_rendering.txt
Q: OpenCV Contrib Python missing functions in ximgproc I can't find certain functions when I list everything inside the opencv-contrib ximgproc module. What am I missing? Here is a "pip freeze" output: Here is a list from dir(cv2.ximgproc): Now, when I look at the source code OpenCV.sln, I can see that some of the functions of "ximgproc" are not listed here and some are, for example "createQuaternionImage", which I can easily see inside the source code: Where can I look for other functions that are not listed when running dir(cv2.ximgproc)? Thanks! I have tried a couple of "modules" (in the list output of dir(cv2.ximgproc)); I thought that some functions might be at a deeper level, but no luck with that. A: Not every (C++) API of OpenCV is exposed to Python. The main modules are mostly covered. Stuff in contrib is more likely to lack annotations for Python bindings. If any bindings are missing, you can DIY and try slapping a CV_EXPORTS_W to the declaration. There's a description of this somewhere... just copy what you see in other header files and see what the bindings generation script thinks about it. Or open an issue and wait until someone gets around to it. To my knowledge, there isn't a list of APIs that haven't been given the required annotation. It makes no sense to ask for that.
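A quick way to confirm from Python whether a particular binding made it into the installed build; a minimal sketch using the function named in the question:

import cv2

# True only if the Python binding for this contrib function was generated
print(hasattr(cv2.ximgproc, "createQuaternionImage"))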
OpenCV Contrib Python missing functions in ximgproc
I can't find certain functions when I list everything inside the opencv-contrib ximgproc module. What am I missing? Here is a "pip freeze" output: Here is a list from dir(cv2.ximgproc): Now, when I look at the source code OpenCV.sln, I can see that some of the functions of "ximgproc" are not listed here and some are, for example "createQuaternionImage", which I can easily see inside the source code: Where can I look for other functions that are not listed when running dir(cv2.ximgproc)? Thanks! I have tried a couple of "modules" (in the list output of dir(cv2.ximgproc)); I thought that some functions might be at a deeper level, but no luck with that.
[ "Not every (C++) API of OpenCV is exposed to Python.\nThe main modules are mostly covered. Stuff in contrib is more likely to lack annotations for Python bindings.\nIf any bindings are missing, you can DIY and try slapping a CV_EXPORTS_W to the declaration. There's a description of this somewhere... just copy what you see in other header files and see what the bindings generation script thinks about it.\nOr open an issue and wait until someone gets around to it.\nTo my knowledge, there isn't a list of APIs that haven't been given the required annotation. It makes no sense to ask for that.\n" ]
[ 1 ]
[]
[]
[ "opencv", "python" ]
stackoverflow_0074563602_opencv_python.txt
Q: Creating a list for column names of dataframe while changing multiple values into one value I have a dataframe named df which is the combination of multiple .csv files, so for a certain index in each file there are several column names. Let's say the column names are A, B, C, D, E for different .csv files. I want to change all of the A, B, C, D, E column names into F. I tried this:
df = pd.read_csv(path + config['file'] + '.csv')
list = [c.replace('A', 'F') for c in df.columns]
But I could not figure out an easy one-line way to change the B, C, D, E values into F. Help is appreciated. A: Define the column names to replace first:
to_replace_cols=['A','B','C','D','E']

df.columns = ['F' if i in to_replace_cols else i for i in df.columns]

#one line
df.columns = ['F' if i in ['A','B','C','D','E'] else i for i in df.columns]
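pandas also ships a dedicated method for this; a sketch using DataFrame.rename, which silently skips mapping keys a given file's frame doesn't have, so the same mapping works for every .csv:

# rename ignores mapping keys that are absent from df.columns;
# note: if several of A..E coexist in one frame, this yields duplicate 'F' columns
df = df.rename(columns={c: 'F' for c in ['A', 'B', 'C', 'D', 'E']})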
Creating a list for column names of dataframe while changing multiple values into one value
I have a dataframe named df which is the combination of multiple .csv files, so for a certain index in each file there are several column names. Let's say the column names are A, B, C, D, E for different .csv files. I want to change all of the A, B, C, D, E column names into F. I tried this:
df = pd.read_csv(path + config['file'] + '.csv')
list = [c.replace('A', 'F') for c in df.columns]
But I could not figure out an easy one-line way to change the B, C, D, E values into F. Help is appreciated.
[ "define col names first:\nto_replace_cols=['A','B','C','D','E']\n\ndf.columns = ['F' if i in to_replace_cols else i for i in df.columns]\n\n#one line\ndf.columns = ['F' if i in ['A','B','C','D','E'] else i for i in df.columns]\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "multiple_columns", "pandas", "python", "replace" ]
stackoverflow_0074564796_dataframe_multiple_columns_pandas_python_replace.txt
Q: Create New Dict from Existing Dict Using List of Values I have a list of values and want to create a new dictionary from an existing dictionary using the key/value pairs that correspond to the values in the list. I can't find a Stack Overflow answer that covers this.
example_list = [1, 2, 3, 4, 5]
original_dict = {"a": 1, "b": 2, "c": 9, "d": 2, "e": 6, "f": 1}
desired_dict = {"a": 1, "b": 2, "d": 2, "f": 1}
Note that there are some values that are assigned to multiple keys in original_dict (as in the example). Any help would be appreciated. Thanks. A: You can use a dict comprehension or you can use filter.
example_list = [1, 2, 3, 4, 5]

original_dict = {"a": 1, "b": 2, "c": 9, "d": 2, "e": 6, "f": 1}

desired_dict = {key: value for key, value in original_dict.items() if value in example_list}

# Option_2
desired_dict = dict(filter(lambda x: x[1] in example_list, original_dict.items()))
# -------------------------------^^^ x[0] is key, x[1] is value of 'dict'

print(desired_dict)
Output:
{'a': 1, 'b': 2, 'd': 2, 'f': 1}
A: You could pass original_dict.items() to filter and then the results of filter can be recast to dict.
el = [1, 2, 3, 4, 5]

od = {"a": 1, "b": 2, "c": 9, "d": 2, "e": 6, "f": 1}

# `i` will be (key,value)
dd = dict(filter(lambda i: i[1] in el, od.items()))

print(dd) #{"a": 1, "b": 2, "d": 2, "f": 1}
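One small refinement to either answer: if example_list is long, converting it to a set once keeps each membership test O(1) instead of scanning the list; a sketch:

allowed = set(example_list)  # one-time conversion for O(1) lookups
desired_dict = {k: v for k, v in original_dict.items() if v in allowed}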
Create New Dict from Existing Dict Using List of Values
I have a list of values and want to create a new dictionary from an existing dictionary using the key/value pairs that correspond to the values in the list. I can't find a Stack Overflow answer that covers this.
example_list = [1, 2, 3, 4, 5]
original_dict = {"a": 1, "b": 2, "c": 9, "d": 2, "e": 6, "f": 1}
desired_dict = {"a": 1, "b": 2, "d": 2, "f": 1}
Note that there are some values that are assigned to multiple keys in original_dict (as in the example). Any help would be appreciated. Thanks.
[ "You can use dict comprehension or you can use filter.\nexample_list = [1, 2, 3, 4, 5]\n\noriginal_dict = {\"a\": 1, \"b\": 2, \"c\": 9, \"d\": 2, \"e\": 6, \"f\": 1}\n\ndesired_dict = {key: value for key, value in original_dict.items() if value in example_list}\n\n# Option_2\ndesired_dict = dict(filter(lambda x: x[1] in example_list, original_dict.items()))\n# -------------------------------^^^ x[0] is key, x[1] is value of 'dict'\n\nprint(desired_dict)\n\nOutput:\n{'a': 1, 'b': 2, 'd': 2, 'f': 1}\n\n", "You could pass original_dict.items() to filter and then the results of filter can be recast to dict.\nel = [1, 2, 3, 4, 5]\n\nod = {\"a\": 1, \"b\": 2, \"c\": 9, \"d\": 2, \"e\": 6, \"f\": 1}\n\n# `i` will be (key,value)\ndd = dict(filter(lambda i: i[1] in el, od.items()))\n\nprint(dd) #{\"a\": 1, \"b\": 2, \"d\": 2, \"f\": 1}\n\n" ]
[ 2, 1 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074565298_dictionary_list_python.txt
Q: Find the area between two curves plotted in matplotlib (fill_between area) I have a list of x and y values for two curves, both having weird shapes, and I don't have a function for any of them. I need to do two things: Plot it and shade the area between the curves like the image below. Find the total area of this shaded region between the curves. I'm able to plot and shade the area between those curves with fill_between and fill_betweenx in matplotlib, but I have no idea on how to calculate the exact area between them, specially because I don't have a function for any of those curves. Any ideas? I looked everywhere and can't find a simple solution for this. I'm quite desperate, so any help is much appreciated. Thank you very much! EDIT: For future reference (in case anyone runs into the same problem), here is how I've solved this: connected the first and last node/point of each curve together, resulting in a big weird-shaped polygon, then used shapely to calculate the polygon's area automatically, which is the exact area between the curves, no matter which way they go or how nonlinear they are. Works like a charm! :) Here is my code: from shapely.geometry import Polygon x_y_curve1 = [(0.121,0.232),(2.898,4.554),(7.865,9.987)] #these are your points for curve 1 (I just put some random numbers) x_y_curve2 = [(1.221,1.232),(3.898,5.554),(8.865,7.987)] #these are your points for curve 2 (I just put some random numbers) polygon_points = [] #creates a empty list where we will append the points to create the polygon for xyvalue in x_y_curve1: polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 1 for xyvalue in x_y_curve2[::-1]: polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 2 in the reverse order (from last point to first point) for xyvalue in x_y_curve1[0:1]: polygon_points.append([xyvalue[0],xyvalue[1]]) #append the first point in curve 1 again, to it "closes" the polygon polygon = Polygon(polygon_points) area = polygon.area print(area) EDIT 2: Thank you for the answers. Like Kyle explained, this only works for positive values. If your curves go below 0 (which is not my case, as showed in the example chart), then you would have to work with absolute numbers. A: The area calculation is straightforward in blocks where the two curves don't intersect: thats the trapezium as has been pointed out above. If they intersect, then you create two triangles between x[i] and x[i+1], and you should add the area of the two. If you want to do it directly, you should handle the two cases separately. Here's a basic working example to solve your problem. First, I will start with some fake data: #!/usr/bin/python import numpy as np # let us generate fake test data x = np.arange(10) y1 = np.random.rand(10) * 20 y2 = np.random.rand(10) * 20 Now, the main code. Based on your plot, looks like you have y1 and y2 defined at the same X points. Then we define, z = y1-y2 dx = x[1:] - x[:-1] cross_test = np.sign(z[:-1] * z[1:]) cross_test will be negative whenever the two graphs cross. At these points, we want to calculate the x coordinate of the crossover. For simplicity, I will calculate x coordinates of the intersection of all segments of y. For places where the two curves don't intersect, they will be useless values, and we won't use them anywhere. This just keeps the code easier to understand. 
Suppose you have z1 and z2 at x1 and x2, then we are solving for x0 such that z = 0: # (z2 - z1)/(x2 - x1) = (z0 - z1) / (x0 - x1) = -z1/(x0 - x1) # x0 = x1 - (x2 - x1) / (z2 - z1) * z1 x_intersect = x[:-1] - dx / (z[1:] - z[:-1]) * z[:-1] dx_intersect = - dx / (z[1:] - z[:-1]) * z[:-1] Where the curves don't intersect, area is simply given by: areas_pos = abs(z[:-1] + z[1:]) * 0.5 * dx # signs of both z are same Where they intersect, we add areas of both triangles: areas_neg = 0.5 * dx_intersect * abs(z[:-1]) + 0.5 * (dx - dx_intersect) * abs(z[1:]) Now, the area in each block x[i] to x[i+1] is to be selected, for which I use np.where: areas = np.where(cross_test < 0, areas_neg, areas_pos) total_area = np.sum(areas) That is your desired answer. As has been pointed out above, this will get more complicated if the both the y graphs were defined at different x points. If you want to test this, you can simply plot it (in my test case, y range will be -20 to 20) negatives = np.where(cross_test < 0) positives = np.where(cross_test >= 0) plot(x, y1) plot(x, y2) plot(x, z) plt.vlines(x_intersect[negatives], -20, 20) A: Define your two curves as functions f and g that are linear by segment, e.g. between x1 and x2, f(x) = f(x1) + ((x-x1)/(x2-x1))*(f(x2)-f(x1)). Define h(x)=abs(g(x)-f(x)). Then use scipy.integrate.quad to integrate h. That way you don't need to bother about the intersections. It will do the "trapeze summing" suggested by ch41rmn automatically. A: Your set of data is quite "nice" in the sense that the two sets of data share the same set of x-coordinates. You can therefore calculate the area using a series of trapezoids. e.g. define the two functions as f(x) and g(x), then, between any two consecutive points in x, you have four points of data: (x1, f(x1))-->(x2, f(x2)) (x1, g(x1))-->(x2, g(x2)) Then, the area of the trapezoid is A(x1-->x2) = ( f(x1)-g(x1) + f(x2)-g(x2) ) * (x2-x1)/2 (1) A complication arises that equation (1) only works for simply-connected regions, i.e. there must not be a cross-over within this region: |\ |\/| |_| vs |/\| The area of the two sides of the intersection must be evaluated separately. You will need to go through your data to find all points of intersections, then insert their coordinates into your list of coordinates. The correct order of x must be maintained. Then, you can loop through your list of simply connected regions and obtain a sum of the area of trapezoids. EDIT: For curiosity's sake, if the x-coordinates for the two lists are different, you can instead construct triangles. e.g. .____. | / \ | / \ | / \ |/ \ ._________. Overlap between triangles must be avoided, so you will again need to find points of intersections and insert them into your ordered list. The lengths of each side of the triangle can be calculated using Pythagoras' formula, and the area of the triangles can be calculated using Heron's formula. A: The area_between_two_curves function in pypi library similaritymeasures (released in 2018) might give you what you need. I tried a trivial example on my side, comparing the area between a function and a constant value and got pretty close tie-back to Excel (within 2%). Not sure why it doesn't give me 100% tie-back, maybe I am doing something wrong. Worth considering though. A: I had the same problem.The answer below is based on an attempt by the question author. However, shapely will not directly give the area of the polygon in purple. You need to edit the code to break it up into its component polygons and then get the area of each. 
After-which you simply add them up. Area Between two lines Consider the lines below: Sample Two lines If you run the code below you will get zero for area because it takes the clockwise and subtracts the anti clockwise area: from shapely.geometry import Polygon x_y_curve1 = [(1,1),(2,1),(3,3),(4,3)] #these are your points for curve 1 x_y_curve2 = [(1,3),(2,3),(3,1),(4,1)] #these are your points for curve 2 polygon_points = [] #creates a empty list where we will append the points to create the polygon for xyvalue in x_y_curve1: polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 1 for xyvalue in x_y_curve2[::-1]: polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 2 in the reverse order (from last point to first point) for xyvalue in x_y_curve1[0:1]: polygon_points.append([xyvalue[0],xyvalue[1]]) #append the first point in curve 1 again, to it "closes" the polygon polygon = Polygon(polygon_points) area = polygon.area print(area) The solution is therefore to split the polygon into smaller pieces based on where the lines intersect. Then use a for loop to add these up: from shapely.geometry import Polygon x_y_curve1 = [(1,1),(2,1),(3,3),(4,3)] #these are your points for curve 1 x_y_curve2 = [(1,3),(2,3),(3,1),(4,1)] #these are your points for curve 2 polygon_points = [] #creates a empty list where we will append the points to create the polygon for xyvalue in x_y_curve1: polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 1 for xyvalue in x_y_curve2[::-1]: polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 2 in the reverse order (from last point to first point) for xyvalue in x_y_curve1[0:1]: polygon_points.append([xyvalue[0],xyvalue[1]]) #append the first point in curve 1 again, to it "closes" the polygon polygon = Polygon(polygon_points) area = polygon.area x,y = polygon.exterior.xy # original data ls = LineString(np.c_[x, y]) # closed, non-simple lr = LineString(ls.coords[:] + ls.coords[0:1]) lr.is_simple # False mls = unary_union(lr) mls.geom_type # MultiLineString' Area_cal =[] for polygon in polygonize(mls): Area_cal.append(polygon.area) Area_poly = (np.asarray(Area_cal).sum()) print(Area_poly) A: A straightforward application of the area of a general polygon (see Shoelace formula) makes for a super-simple and fast, vectorized calculation: def area(p): # for p: 2D vertices of a polygon: # area = 1/2 abs(sum(p0 ^ p1 + p1 ^ p2 + ... + pn-1 ^ p0)) # where ^ is the cross product return np.abs(np.cross(p, np.roll(p, 1, axis=0)).sum()) / 2 Application to area between two curves. In this example, we don't even have matching x coordinates! np.random.seed(0) n0 = 10 n1 = 15 xy0 = np.c_[np.linspace(0, 10, n0), np.random.uniform(0, 10, n0)] xy1 = np.c_[np.linspace(0, 10, n1), np.random.uniform(0, 10, n1)] p = np.r_[xy0, xy1[::-1]] >>> area(p) 4.9786... Plot: plt.plot(*xy0.T, 'b-') plt.plot(*xy1.T, 'r-') p = np.r_[xy0, xy1[::-1]] plt.fill(*p.T, alpha=.2) Speed For both curves having 1 million points: n = 1_000_000 xy0 = np.c_[np.linspace(0, 10, n), np.random.uniform(0, 10, n)] xy1 = np.c_[np.linspace(0, 10, n), np.random.uniform(0, 10, n)] %timeit area(np.r_[xy0, xy1[::-1]]) # 42.9 ms ± 140 µs per loop (mean ± std. dev. 
of 7 runs, 10 loops each) Simple viz of polygon area calculation # say: p = np.array([[0, 3], [1, 0], [3, 3], [1, 3], [1, 2]]) p_closed = np.r_[p, p[:1]] fig, axes = plt.subplots(ncols=2, figsize=(10, 5), subplot_kw=dict(box_aspect=1), sharex=True) ax = axes[0] ax.set_aspect('equal') ax.plot(*p_closed.T, '.-') ax.fill(*p_closed.T, alpha=0.6) center = p.mean(0) txtkwargs = dict(ha='center', va='center') ax.text(*center, f'{area(p):.2f}', **txtkwargs) ax = axes[1] ax.set_aspect('equal') for a, b in zip(p_closed, p_closed[1:]): ar = 1/2 * np.cross(a, b) pos = ar >= 0 tri = np.c_[(0,0), a, b, (0,0)].T # shrink a bit to make individual triangles easier to visually identify center = tri.mean(0) tri = (tri - center)*0.95 + center c = 'b' if pos else 'r' ax.plot(*tri.T, 'k') ax.fill(*tri.T, c, alpha=0.2, zorder=2 - pos) t = ax.text(*center, f'{ar:.1f}', color=c, fontsize=8, **txtkwargs) t.set_bbox(dict(facecolor='white', alpha=0.8, edgecolor='none')) plt.tight_layout()
Find the area between two curves plotted in matplotlib (fill_between area)
I have a list of x and y values for two curves, both having weird shapes, and I don't have a function for any of them. I need to do two things: Plot it and shade the area between the curves like the image below. Find the total area of this shaded region between the curves. I'm able to plot and shade the area between those curves with fill_between and fill_betweenx in matplotlib, but I have no idea on how to calculate the exact area between them, specially because I don't have a function for any of those curves. Any ideas? I looked everywhere and can't find a simple solution for this. I'm quite desperate, so any help is much appreciated. Thank you very much! EDIT: For future reference (in case anyone runs into the same problem), here is how I've solved this: connected the first and last node/point of each curve together, resulting in a big weird-shaped polygon, then used shapely to calculate the polygon's area automatically, which is the exact area between the curves, no matter which way they go or how nonlinear they are. Works like a charm! :) Here is my code: from shapely.geometry import Polygon x_y_curve1 = [(0.121,0.232),(2.898,4.554),(7.865,9.987)] #these are your points for curve 1 (I just put some random numbers) x_y_curve2 = [(1.221,1.232),(3.898,5.554),(8.865,7.987)] #these are your points for curve 2 (I just put some random numbers) polygon_points = [] #creates a empty list where we will append the points to create the polygon for xyvalue in x_y_curve1: polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 1 for xyvalue in x_y_curve2[::-1]: polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 2 in the reverse order (from last point to first point) for xyvalue in x_y_curve1[0:1]: polygon_points.append([xyvalue[0],xyvalue[1]]) #append the first point in curve 1 again, to it "closes" the polygon polygon = Polygon(polygon_points) area = polygon.area print(area) EDIT 2: Thank you for the answers. Like Kyle explained, this only works for positive values. If your curves go below 0 (which is not my case, as showed in the example chart), then you would have to work with absolute numbers.
[ "The area calculation is straightforward in blocks where the two curves don't intersect: thats the trapezium as has been pointed out above. If they intersect, then you create two triangles between x[i] and x[i+1], and you should add the area of the two. If you want to do it directly, you should handle the two cases separately. Here's a basic working example to solve your problem. First, I will start with some fake data:\n#!/usr/bin/python\nimport numpy as np\n\n# let us generate fake test data\nx = np.arange(10)\ny1 = np.random.rand(10) * 20\ny2 = np.random.rand(10) * 20\n\nNow, the main code. Based on your plot, looks like you have y1 and y2 defined at the same X points. Then we define,\nz = y1-y2\ndx = x[1:] - x[:-1]\ncross_test = np.sign(z[:-1] * z[1:])\n\ncross_test will be negative whenever the two graphs cross. At these points, we want to calculate the x coordinate of the crossover. For simplicity, I will calculate x coordinates of the intersection of all segments of y. For places where the two curves don't intersect, they will be useless values, and we won't use them anywhere. This just keeps the code easier to understand.\nSuppose you have z1 and z2 at x1 and x2, then we are solving for x0 such that z = 0:\n# (z2 - z1)/(x2 - x1) = (z0 - z1) / (x0 - x1) = -z1/(x0 - x1)\n# x0 = x1 - (x2 - x1) / (z2 - z1) * z1\nx_intersect = x[:-1] - dx / (z[1:] - z[:-1]) * z[:-1]\ndx_intersect = - dx / (z[1:] - z[:-1]) * z[:-1]\n\nWhere the curves don't intersect, area is simply given by:\nareas_pos = abs(z[:-1] + z[1:]) * 0.5 * dx # signs of both z are same\n\nWhere they intersect, we add areas of both triangles:\nareas_neg = 0.5 * dx_intersect * abs(z[:-1]) + 0.5 * (dx - dx_intersect) * abs(z[1:])\n\nNow, the area in each block x[i] to x[i+1] is to be selected, for which I use np.where:\nareas = np.where(cross_test < 0, areas_neg, areas_pos)\ntotal_area = np.sum(areas)\n\nThat is your desired answer. As has been pointed out above, this will get more complicated if the both the y graphs were defined at different x points. If you want to test this, you can simply plot it (in my test case, y range will be -20 to 20)\nnegatives = np.where(cross_test < 0)\npositives = np.where(cross_test >= 0)\nplot(x, y1)\nplot(x, y2)\nplot(x, z)\nplt.vlines(x_intersect[negatives], -20, 20)\n\n", "Define your two curves as functions f and g that are linear by segment, e.g. between x1 and x2, f(x) = f(x1) + ((x-x1)/(x2-x1))*(f(x2)-f(x1)). \nDefine h(x)=abs(g(x)-f(x)). Then use scipy.integrate.quad to integrate h. \nThat way you don't need to bother about the intersections. It will do the \"trapeze summing\" suggested by ch41rmn automatically.\n", "Your set of data is quite \"nice\" in the sense that the two sets of data share the same set of x-coordinates. You can therefore calculate the area using a series of trapezoids.\ne.g. define the two functions as f(x) and g(x), then, between any two consecutive points in x, you have four points of data:\n(x1, f(x1))-->(x2, f(x2))\n(x1, g(x1))-->(x2, g(x2))\n\nThen, the area of the trapezoid is\nA(x1-->x2) = ( f(x1)-g(x1) + f(x2)-g(x2) ) * (x2-x1)/2 (1)\n\nA complication arises that equation (1) only works for simply-connected regions, i.e. there must not be a cross-over within this region:\n|\\ |\\/|\n|_| vs |/\\|\n\nThe area of the two sides of the intersection must be evaluated separately. You will need to go through your data to find all points of intersections, then insert their coordinates into your list of coordinates. The correct order of x must be maintained. 
Then, you can loop through your list of simply connected regions and obtain a sum of the area of trapezoids.\nEDIT:\nFor curiosity's sake, if the x-coordinates for the two lists are different, you can instead construct triangles. e.g.\n.____.\n| / \\\n| / \\\n| / \\\n|/ \\\n._________.\n\nOverlap between triangles must be avoided, so you will again need to find points of intersections and insert them into your ordered list. The lengths of each side of the triangle can be calculated using Pythagoras' formula, and the area of the triangles can be calculated using Heron's formula.\n", "The area_between_two_curves function in pypi library similaritymeasures (released in 2018) might give you what you need. I tried a trivial example on my side, comparing the area between a function and a constant value and got pretty close tie-back to Excel (within 2%). Not sure why it doesn't give me 100% tie-back, maybe I am doing something wrong. Worth considering though.\n", "I had the same problem.The answer below is based on an attempt by the question author. However, shapely will not directly give the area of the polygon in purple. You need to edit the code to break it up into its component polygons and then get the area of each. After-which you simply add them up. \nArea Between two lines \nConsider the lines below:\nSample Two lines\nIf you run the code below you will get zero for area because it takes the clockwise and subtracts the anti clockwise area:\nfrom shapely.geometry import Polygon\n\nx_y_curve1 = [(1,1),(2,1),(3,3),(4,3)] #these are your points for curve 1 \nx_y_curve2 = [(1,3),(2,3),(3,1),(4,1)] #these are your points for curve 2 \n\npolygon_points = [] #creates a empty list where we will append the points to create the polygon\n\nfor xyvalue in x_y_curve1:\n polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 1\n\nfor xyvalue in x_y_curve2[::-1]:\n polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 2 in the reverse order (from last point to first point)\n\nfor xyvalue in x_y_curve1[0:1]:\n polygon_points.append([xyvalue[0],xyvalue[1]]) #append the first point in curve 1 again, to it \"closes\" the polygon\n\npolygon = Polygon(polygon_points)\narea = polygon.area\nprint(area)\n\nThe solution is therefore to split the polygon into smaller pieces based on where the lines intersect. 
Then use a for loop to add these up:\nfrom shapely.geometry import Polygon\n\nx_y_curve1 = [(1,1),(2,1),(3,3),(4,3)] #these are your points for curve 1 \nx_y_curve2 = [(1,3),(2,3),(3,1),(4,1)] #these are your points for curve 2 \n\npolygon_points = [] #creates a empty list where we will append the points to create the polygon\n\nfor xyvalue in x_y_curve1:\n polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 1\n\nfor xyvalue in x_y_curve2[::-1]:\n polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 2 in the reverse order (from last point to first point)\n\nfor xyvalue in x_y_curve1[0:1]:\n polygon_points.append([xyvalue[0],xyvalue[1]]) #append the first point in curve 1 again, to it \"closes\" the polygon\n\npolygon = Polygon(polygon_points)\narea = polygon.area\n\nx,y = polygon.exterior.xy\n # original data\nls = LineString(np.c_[x, y])\n # closed, non-simple\nlr = LineString(ls.coords[:] + ls.coords[0:1])\nlr.is_simple # False\nmls = unary_union(lr)\nmls.geom_type # MultiLineString'\n\nArea_cal =[]\n\nfor polygon in polygonize(mls):\n Area_cal.append(polygon.area)\n Area_poly = (np.asarray(Area_cal).sum())\nprint(Area_poly)\n\n", "A straightforward application of the area of a general polygon (see Shoelace formula) makes for a super-simple and fast, vectorized calculation:\ndef area(p):\n # for p: 2D vertices of a polygon:\n # area = 1/2 abs(sum(p0 ^ p1 + p1 ^ p2 + ... + pn-1 ^ p0))\n # where ^ is the cross product\n return np.abs(np.cross(p, np.roll(p, 1, axis=0)).sum()) / 2\n\nApplication to area between two curves. In this example, we don't even have matching x coordinates!\nnp.random.seed(0)\nn0 = 10\nn1 = 15\nxy0 = np.c_[np.linspace(0, 10, n0), np.random.uniform(0, 10, n0)]\nxy1 = np.c_[np.linspace(0, 10, n1), np.random.uniform(0, 10, n1)]\n\np = np.r_[xy0, xy1[::-1]]\n>>> area(p)\n4.9786...\n\nPlot:\nplt.plot(*xy0.T, 'b-')\nplt.plot(*xy1.T, 'r-')\np = np.r_[xy0, xy1[::-1]]\nplt.fill(*p.T, alpha=.2)\n\n\nSpeed\nFor both curves having 1 million points:\nn = 1_000_000\nxy0 = np.c_[np.linspace(0, 10, n), np.random.uniform(0, 10, n)]\nxy1 = np.c_[np.linspace(0, 10, n), np.random.uniform(0, 10, n)]\n\n%timeit area(np.r_[xy0, xy1[::-1]])\n# 42.9 ms ± 140 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\nSimple viz of polygon area calculation\n# say:\np = np.array([[0, 3], [1, 0], [3, 3], [1, 3], [1, 2]])\n\np_closed = np.r_[p, p[:1]]\nfig, axes = plt.subplots(ncols=2, figsize=(10, 5), subplot_kw=dict(box_aspect=1), sharex=True)\nax = axes[0]\nax.set_aspect('equal')\nax.plot(*p_closed.T, '.-')\nax.fill(*p_closed.T, alpha=0.6)\ncenter = p.mean(0)\ntxtkwargs = dict(ha='center', va='center')\nax.text(*center, f'{area(p):.2f}', **txtkwargs)\nax = axes[1]\nax.set_aspect('equal')\nfor a, b in zip(p_closed, p_closed[1:]):\n ar = 1/2 * np.cross(a, b)\n pos = ar >= 0\n tri = np.c_[(0,0), a, b, (0,0)].T\n # shrink a bit to make individual triangles easier to visually identify\n center = tri.mean(0)\n tri = (tri - center)*0.95 + center\n c = 'b' if pos else 'r'\n ax.plot(*tri.T, 'k')\n ax.fill(*tri.T, c, alpha=0.2, zorder=2 - pos)\n t = ax.text(*center, f'{ar:.1f}', color=c, fontsize=8, **txtkwargs)\n t.set_bbox(dict(facecolor='white', alpha=0.8, edgecolor='none'))\n\nplt.tight_layout()\n\n\n" ]
[ 6, 5, 4, 3, 2, 0 ]
[]
[]
[ "area", "matplotlib", "python", "scipy" ]
stackoverflow_0025439243_area_matplotlib_python_scipy.txt
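One more minimal sketch, assuming both curves are sampled on the same x grid: integrating |y1 - y2| with NumPy's trapezoidal rule gives the enclosed area directly. It is exact for piecewise-linear curves except on segments where they cross between samples (there it overestimates slightly); the arrays below are hypothetical stand-ins for real data.

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])   # hypothetical shared x coordinates
y1 = np.array([1.0, 2.0, 0.5, 3.0])  # hypothetical curve 1
y2 = np.array([0.0, 1.0, 2.0, 1.0])  # hypothetical curve 2

# trapezoidal rule on the absolute gap between the curves
# (on NumPy 2.0+ the same function is available as np.trapezoid)
area = np.trapz(np.abs(y1 - y2), x)
print(area)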
Q: What is the best way to scrape multiple urls and tackle pagination problem (load more button)? The main link is (https://www.europarl.europa.eu/meps/en/197818/BILLY_KELLEHER/meetings/past#detailedcardmep) My code shows me only fist pages but I need to browse all of them for all the links (I have more than 100 links) from bs4 import BeautifulSoup import requests page=0 list=[] isHaveNextPage=True links = [(f"https://www.europarl.europa.eu/meps/en/loadmore-meetings?meetingType=PAST&memberId=197506&termId=9&page={page}&pageSize=10"), (f"https://www.europarl.europa.eu/meps/en/loadmore-meetings?meetingType=PAST&memberId=124861&termId=9&page={page}&pageSize=10"), (f"https://www.europarl.europa.eu/meps/en/loadmore-meetings?meetingType=PAST&memberId=229519&termId=9&page={page}&pageSize=10" .....), while(isHaveNextPage): for url in links: r= requests.get(url).text soup =BeautifulSoup(r,"lxml") product = soup.find_all("div",class_="europarl-expandable-item") for data in product: title = data.find(class_="t-item").get_text() date = data.find(class_="erpl_document-subtitle-date").get_text() address = data.find(class_="erpl_document-subtitle-location").get_text() reporter = data.find(class_="erpl_document-subtitle-reporter").get_text() author = data.find(class_="erpl_document-subtitle-author").get_text() list.append([author.strip(), date.strip(), address.strip(), reporter.strip(), title.strip()]) print("page---",page) if soup.find("button",class_='btn btn-default europarl-expandable-async-loadmore') is None: isHaveNextPage=False page+=1 A: The problem is: you may be incrementing the page number, but the format string has already been made. Updating page doesn't update the string, at all. You have to keep remaking the string with the new data. Instead of this: f"https://...&page={page}..." do this: "https://...&page=%i..." Then do this: for url in links: r= requests.get(url % page).text Alternately, you can do this: "https://...&page={}..." and this: r= requests.get(url.format(page)).text Both versions are just different ways to format a string after the string has already been created. The version of formatting you used only allows you to format the string during creation. 
A: Here is one way of getting that data, handling pagination, and generally solving this issue in a decent manner: import requests from bs4 import BeautifulSoup as bs import pandas as pd from tqdm import tqdm ## if using Jupyter: from tqdm.notebook import tqdm pd.set_option('display.max_columns', None) pd.set_option('display.max_colwidth', None) headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' } s = requests.Session() s.headers.update(headers) big_list = [] slightly_incompetent_people_ids = ['197818', '96829', '197530', '97968', '197691', '189065', '197636', '33997'] for p in tqdm(slightly_incompetent_people_ids): counter = 0 while True: soup = bs(s.get(f'https://www.europarl.europa.eu/meps/en/loadmore-meetings?meetingType=PAST&memberId={p}&termId=9&page={counter}&pageSize=20').text, 'html.parser') has_more = soup.select_one('button[class="btn btn-default europarl-expandable-async-loadmore"]') if soup.select_one('button[class="btn btn-default europarl-expandable-async-loadmore"]') else None meetings = soup.select('div[class="europarl-expandable-item"]') for m in meetings: title = m.select_one('h3').text.strip() date = m.select_one('span[class="erpl_document-subtitle-date"]').text.strip() place = m.select_one('span[class="erpl_document-subtitle-location"]').text.strip() big_list.append((p, title, date, place)) if has_more == None: counter = 0 break counter += 1 df = pd.DataFrame(big_list, columns = ['MEP', 'Title', 'Date', 'Place']) print(df) Result in terminal: 100% 8/8 [00:01<00:00, 5.61it/s] MEP Title Date Place 0 197818 AIFMD 25-05-2022 Virtual meeting 1 197818 DORA 25-05-2022 Virtual meeting 2 197818 AIFMD 25-05-2022 Virtual meeting 3 197818 AIFMD 18-05-2022 Brussels 4 197818 AIFMD 17-05-2022 Virtual meeting ... ... ... ... ... 77 33997 Meeting with H.E. Aigul Kuspan, the Ambassador of the Republic of Kazakhstan to the Kingdom of Belgium and Head of Mission of the Republic of Kazakhstan to the European Union 08-01-2020 European Parliament 78 33997 Meeting with H.E. Daniel Ioniță, Ambassador Extraordinary and Plenipotentiary of Romania to the Republic of Moldova 09-12-2019 Embassy of Romania to the Republic of Moldova 79 33997 Meeting with Mihai Chirica, Mayor of Iași 07-12-2019 Iași, Romania 80 33997 Meeting with Laura Codruța Kövesi, the European Public Prosecut 06-11-2019 European Parliament 81 33997 Meeting with Tony Murphy, Member of the European Court of Auditors 24-09-2019 European Parliament 82 rows × 4 columns You can get more details from the meetings, and you can add more MEP id's to that list. Relevant documentation for packages used: tqdm pandas BeautifulSoup Requests
What is the best way to scrape multiple urls and tackle pagination problem (load more button)?
The main link is (https://www.europarl.europa.eu/meps/en/197818/BILLY_KELLEHER/meetings/past#detailedcardmep) My code shows me only fist pages but I need to browse all of them for all the links (I have more than 100 links) from bs4 import BeautifulSoup import requests page=0 list=[] isHaveNextPage=True links = [(f"https://www.europarl.europa.eu/meps/en/loadmore-meetings?meetingType=PAST&memberId=197506&termId=9&page={page}&pageSize=10"), (f"https://www.europarl.europa.eu/meps/en/loadmore-meetings?meetingType=PAST&memberId=124861&termId=9&page={page}&pageSize=10"), (f"https://www.europarl.europa.eu/meps/en/loadmore-meetings?meetingType=PAST&memberId=229519&termId=9&page={page}&pageSize=10" .....), while(isHaveNextPage): for url in links: r= requests.get(url).text soup =BeautifulSoup(r,"lxml") product = soup.find_all("div",class_="europarl-expandable-item") for data in product: title = data.find(class_="t-item").get_text() date = data.find(class_="erpl_document-subtitle-date").get_text() address = data.find(class_="erpl_document-subtitle-location").get_text() reporter = data.find(class_="erpl_document-subtitle-reporter").get_text() author = data.find(class_="erpl_document-subtitle-author").get_text() list.append([author.strip(), date.strip(), address.strip(), reporter.strip(), title.strip()]) print("page---",page) if soup.find("button",class_='btn btn-default europarl-expandable-async-loadmore') is None: isHaveNextPage=False page+=1
[ "The problem is: you may be incrementing the page number, but the format string has already been made. Updating page doesn't update the string, at all. You have to keep remaking the string with the new data.\nInstead of this: f\"https://...&page={page}...\" \ndo this: \"https://...&page=%i...\"\nThen do this:\nfor url in links:\n r= requests.get(url % page).text\n\nAlternately, you can do this: \"https://...&page={}...\"\nand this: r= requests.get(url.format(page)).text\nBoth versions are just different ways to format a string after the string has already been created. The version of formatting you used only allows you to format the string during creation.\n", "Here is one way of getting that data, handling pagination, and generally solving this issue in a decent manner:\nimport requests\nfrom bs4 import BeautifulSoup as bs\nimport pandas as pd\nfrom tqdm import tqdm ## if using Jupyter: from tqdm.notebook import tqdm \n\npd.set_option('display.max_columns', None)\npd.set_option('display.max_colwidth', None)\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'\n}\n\ns = requests.Session()\ns.headers.update(headers)\nbig_list = []\nslightly_incompetent_people_ids = ['197818', '96829', '197530', '97968', '197691', '189065', '197636', '33997']\nfor p in tqdm(slightly_incompetent_people_ids):\n counter = 0\n while True:\n soup = bs(s.get(f'https://www.europarl.europa.eu/meps/en/loadmore-meetings?meetingType=PAST&memberId={p}&termId=9&page={counter}&pageSize=20').text, 'html.parser')\n has_more = soup.select_one('button[class=\"btn btn-default europarl-expandable-async-loadmore\"]') if soup.select_one('button[class=\"btn btn-default europarl-expandable-async-loadmore\"]') else None\n \n meetings = soup.select('div[class=\"europarl-expandable-item\"]')\n for m in meetings:\n title = m.select_one('h3').text.strip()\n date = m.select_one('span[class=\"erpl_document-subtitle-date\"]').text.strip()\n place = m.select_one('span[class=\"erpl_document-subtitle-location\"]').text.strip()\n big_list.append((p, title, date, place))\n if has_more == None:\n counter = 0\n break\n counter += 1\ndf = pd.DataFrame(big_list, columns = ['MEP', 'Title', 'Date', 'Place'])\nprint(df)\n\nResult in terminal:\n100%\n8/8 [00:01<00:00, 5.61it/s]\nMEP Title Date Place\n0 197818 AIFMD 25-05-2022 Virtual meeting\n1 197818 DORA 25-05-2022 Virtual meeting\n2 197818 AIFMD 25-05-2022 Virtual meeting\n3 197818 AIFMD 18-05-2022 Brussels\n4 197818 AIFMD 17-05-2022 Virtual meeting\n... ... ... ... ...\n77 33997 Meeting with H.E. Aigul Kuspan, the Ambassador of the Republic of Kazakhstan to the Kingdom of Belgium and Head of Mission of the Republic of Kazakhstan to the European Union 08-01-2020 European Parliament\n78 33997 Meeting with H.E. Daniel Ioniță, Ambassador Extraordinary and Plenipotentiary of Romania to the Republic of Moldova 09-12-2019 Embassy of Romania to the Republic of Moldova\n79 33997 Meeting with Mihai Chirica, Mayor of Iași 07-12-2019 Iași, Romania\n80 33997 Meeting with Laura Codruța Kövesi, the European Public Prosecut 06-11-2019 European Parliament\n81 33997 Meeting with Tony Murphy, Member of the European Court of Auditors 24-09-2019 European Parliament\n82 rows × 4 columns\n\nYou can get more details from the meetings, and you can add more MEP id's to that list.\nRelevant documentation for packages used:\n\ntqdm\npandas\nBeautifulSoup\nRequests\n\n" ]
[ 1, 0 ]
[]
[]
[ "html", "javascript", "python", "web_scraping" ]
stackoverflow_0074563973_html_javascript_python_web_scraping.txt
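A compact restatement of the load-more pattern, as a hedged sketch: request page 0, 1, 2, ... and stop once a response contains no meeting items. Treating an empty item list as the end of the data is an assumption about this endpoint; the member id in the usage comment is one taken from the question.

import requests
from bs4 import BeautifulSoup

BASE = ("https://www.europarl.europa.eu/meps/en/loadmore-meetings"
        "?meetingType=PAST&memberId={member_id}&termId=9&page={page}&pageSize=10")

def fetch_all_meetings(member_id):
    meetings, page = [], 0
    while True:
        url = BASE.format(member_id=member_id, page=page)
        soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
        items = soup.select("div.europarl-expandable-item")
        if not items:  # assumed: an exhausted page returns no items
            break
        meetings.extend(item.get_text(strip=True) for item in items)
        page += 1
    return meetings

# example usage: fetch_all_meetings("197818")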
Q: How to get pygame bar width with method? How can I get the width of the pygame bar? I mean the gray title bar. I've tried the following but it does not work properly. import pygame pygame.init() disp = pygame.display.set_mode((640, 480)) disp.fill((0, 0, 0)) pygame.display.flip() title = 'text' pygame.display.set_caption(title) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: exit() But the text is positioned to the left side. If I use pygame.display.set_caption(f"{title: > 320}") it goes out of the window. I'm trying to figure out how to get the width of the bar and then position the text in the center. A: You can get the bar width because Bar Width = Screen width. And this code is how I put the text in the center: import pygame pygame.init() ScreenWidth, ScreenHeight = 640, 480 disp = pygame.display.set_mode((ScreenWidth, ScreenHeight)) disp.fill((0, 0, 0)) pygame.display.flip() title = 'textdd' spaces = " " spacecount = (round(640/7)-(len(title))) print(spacecount) for c in range(spacecount): spaces = ' '+spaces pygame.display.set_caption(spaces+title) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: exit()
How to get pygame bar width with method?
How can I get the width of the pygame bar? I mean the gray title bar. I've tried the following but it does not work properly. import pygame pygame.init() disp = pygame.display.set_mode((640, 480)) disp.fill((0, 0, 0)) pygame.display.flip() title = 'text' pygame.display.set_caption(title) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: exit() But the text is positioned to the left side. If I use pygame.display.set_caption(f"{title: > 320}") it goes out of the window. I'm trying to figure out how to get the width of the bar and then position the text in the center.
[ "You Can get the the bar width Beacause\nBar Width = Screen width.\nand this code is how i put the text in the center\nimport pygame\n\npygame.init()\nScreenWidth,ScreenHight = 640, 480\ndisp = pygame.display.set_mode((ScreenWidth, ScreenHight))\ndisp.fill((0, 0, 0))\npygame.display.flip()\ntitle = 'textdd'\nspaces = \" \"\nspacecount = (round(640/7)-(len(title)))\n\nprint(spacecount)\n\nfor c in range(spacecount):\n spaces = ' '+spaces\n\npygame.display.set_caption(spaces+title)\n\nwhile True:\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n exit()\n\n" ]
[ 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074563349_pygame_python.txt
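Note that the title-bar width matches the window width, which can be read back from the display surface. A minimal sketch (640 is just the width passed to set_mode):

import pygame

pygame.init()
disp = pygame.display.set_mode((640, 480))

# the display surface width equals the window width, and hence the bar width
bar_width = disp.get_width()   # 640 here
# on pygame 2 this is equivalent: pygame.display.get_window_size()[0]
print(bar_width)
pygame.quit()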
Q: Tesseract OCR extraction I am building an OCR model where I have performed object detection on the images. I am calling the detection function to detect bounding boxes. I am cropping the images based on the bounding boxes. The challenge I am facing is that the cropped images are too small for Tesseract to extract data from, and this is hurting accuracy. # Crop Image cropped_image = tf.image.crop_to_bounding_box(image, y_min, x_min, y_max - y_min, x_max - x_min) # write jpg with pillow img_pil = Image.fromarray(cropped_image.numpy()) score = bscores[idx] * 100 file_name = OUTPUT_PATH + "somefilename" img_pil = ImageOps.grayscale(img_pil) img_pil.save(file_name, quality=95, subsampling=0) I am running a super-resolution algorithm over the cropped images to improve the image quality before passing them to Tesseract, but I am still not able to achieve good accuracy. # Create an SR object sr = dnn_superres.DnnSuperResImpl_create() # Define model path model_path = os.path.join(base_path, model + ".pb") # Extract model name, get the text between '/' and '_' model_name = model_path.split('\\')[-1].split('_')[0].lower() # Extract model scale model_scale = int(model_path.split('\\')[-1].split('_')[1].split('.')[0][1]) # Read the desired model sr.readModel(model_path) sr.setModel(model_name, model_scale) How can I fix this cropped-image issue so that data extraction is more accurate? A: Have you tried OCRing and then cropping, rather than the reverse? It may take longer but it is likely going to be more accurate. I have a lot of experience using ocrmypdf with PDFPlumber and Regex to parse PDF documents into spreadsheets and this is the process I generally follow: import pandas as pd import os import pdfplumber import re #OCR PDF os.system('ocrmypdf --force-ocr --deskew path/to/file.pdf path/to/file.pdf') pdf_text = '' with pdfplumber.open('path/to/file.pdf') as pdf: for i in range(0, len(pdf.pages)): page = pdf.pages[i] text = page.extract_text() pdf_text = pdf_text + '\n' + text ids = re.findall('id: (.*)', pdf_text) y = pdf_text.split('\n') ds = [] for i,j in enumerate(ids): d = {} try: id1 = ids[i] idx1 = [idx for idx, s in enumerate(y) if id1 in s][0] try: id2 = ids[i+1] idx2 = [idx for idx, s in enumerate(y) if id2 in s][0] z = y[idx1:idx2] except: z = y[idx1:] except: pass chunk = '' #may need to add if/else or try/except d['value'] = re.findall('Model name: (.*)', chunk)[0] #rinse and repeat ds.append(d) df = pd.DataFrame(ds) Not sure how helpful that will be, but it may give you some inspiration.
Tesseract OCR extraction
I am building an OCR model where I have performed object detection on the images. I am calling the detection function to detect bounding boxes. I am cropping the images based on the bounding boxes. The challenge I am facing is that the cropped images are too small for Tesseract to extract data from, and this is hurting accuracy. # Crop Image cropped_image = tf.image.crop_to_bounding_box(image, y_min, x_min, y_max - y_min, x_max - x_min) # write jpg with pillow img_pil = Image.fromarray(cropped_image.numpy()) score = bscores[idx] * 100 file_name = OUTPUT_PATH + "somefilename" img_pil = ImageOps.grayscale(img_pil) img_pil.save(file_name, quality=95, subsampling=0) I am running a super-resolution algorithm over the cropped images to improve the image quality before passing them to Tesseract, but I am still not able to achieve good accuracy. # Create an SR object sr = dnn_superres.DnnSuperResImpl_create() # Define model path model_path = os.path.join(base_path, model + ".pb") # Extract model name, get the text between '/' and '_' model_name = model_path.split('\\')[-1].split('_')[0].lower() # Extract model scale model_scale = int(model_path.split('\\')[-1].split('_')[1].split('.')[0][1]) # Read the desired model sr.readModel(model_path) sr.setModel(model_name, model_scale) How can I fix this cropped-image issue so that data extraction is more accurate?
[ "Have you tried OCRing and then cropping, rather than the reverse? It may take longer but it is likely going to be more accurate.\nI have a lot of experience using ocrmypdf with PDFPlumber and Regex to parse PDF documents into spreadsheets and this is the process I generally follow:\nimport pandas as pd\nimport os\nimport pdfplumber\nimport re\n\n#OCR PDF\nos.system('ocrmypdf --force-ocr --deskew path/to/file.pdf path/to/file.pdf')\n\ntext = ''\n\nwith pdfplumber.open('path/to/file.pdf'):\n for i in range(0, len(pages)):\n page = pdf.pages[i]\n text = page.extract_text()\n pdf_text = pdf_text + '\\n' + text\n\nids = re.findall('id: (.*)', text)\n\ny = pdf_text.split('\\n')\nds = []\nfor i,j in enumerate(ids):\n d = {}\n try:\n id1 = ids[i]\n idx1 = [idx for idx, s in enumerate(y) if id1 in s][0]\n try:\n id2 = ids[i+1]\n idx2 = [idx for idx, s in enumerate(y) if id2 in s][0]\n z = y[idx1:idx2]\n except:\n z = y[idx1:]\n except:\n pass\n chunk = ''\n #may need to add if/else or try/except\n d['value'] = re.findall('Model name: (.*)', chunk)[0]\n #rinse and repeat\n ds.append(d)\ndf = pd.DataFrame(ds)\n\n\nNot sure how helpful that will be, but it may give you some inspiration.\n" ]
[ 0 ]
[]
[]
[ "ocr", "python", "python_imaging_library", "tensorflow", "tesseract" ]
stackoverflow_0074564535_ocr_python_python_imaging_library_tensorflow_tesseract.txt
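Since the root problem is that the crops are too small for Tesseract, a common preprocessing step is to upscale each crop and pad it with a white margin before OCR. This is a hedged sketch, not a tuned pipeline: the scale factor, border size, and page-segmentation mode are heuristics, and ocr_small_crop with its path argument is a placeholder name.

import cv2
import pytesseract

def ocr_small_crop(path, scale=4, border=10):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # bicubic upscaling gives Tesseract more pixels per glyph
    img = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    # a white margin keeps characters away from the image edge
    img = cv2.copyMakeBorder(img, border, border, border, border,
                             cv2.BORDER_CONSTANT, value=255)
    # --psm 7 treats the image as a single text line, which suits small crops
    return pytesseract.image_to_string(img, config="--psm 7")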
Q: Is there a way I can modify some lines in site html code so it marks the checkbox? So, there's a site I'm trying to parse so it can automatically raising my offers every two hours. The site designed in that way that you have to mark with checkboxes the lots you want to raise. Somehow in html code the checkbox doesn't have value, instead it looks like this: I have to click it manually via using wait.until(EC.element_to_be_clickable((By.CLASS_NAME, "idk what to write so it checks it"))).click() But I really don't know how do I find it so it can be clicked. <label> <input type="checkbox" value="613" checked=""> # value - lot id, checked - means the checkbox is marked <label> # and non-checked checkbox code looks like this: <label> <input type="checkbox" value="613"> <label> A: You can't use By.CLASS_NAME here since it has no class. You can use: By.CSS_SELECTOR to find by CSS selectors chbVal = '613' # in case you need be able to change this (By.CSS_SELECTOR, f'label > input[type="checkbox"][value="{chbVal}"][checked=""]') # for checked (By.CSS_SELECTOR, f'label > input[type="checkbox"][value="{chbVal}"]:not([checked])') # for unchecked or By.XPATH to find by Xpath chbVal = '613' # in case you need be able to change this (By.XPATH, f'//label/input[@type="checkbox"][@value="{chbVal}"][@checked=""]') # for checked (By.XPATH, f'//label/input[@type="checkbox"][@value="{chbVal}"][not(@checked="")]') # for unchecked Note: These are just based on the html snippet you've included - there might by parent elements with better identifiers that you need to include in your path/selector. Also, Somehow in html code the checkbox doesn't have value but in your snippet it does have value...? Anyway, the examples above include value, but you don't have to include them; you can even exclude them with not(...) as shown for checked. (Btw, not(checked)/not(@checked) should exclude elements that have a checked attribute at all, no matter what the value is.)
Is there a way I can modify some lines in site html code so it marks the checkbox?
So, there's a site I'm trying to parse so it can automatically raising my offers every two hours. The site designed in that way that you have to mark with checkboxes the lots you want to raise. Somehow in html code the checkbox doesn't have value, instead it looks like this: I have to click it manually via using wait.until(EC.element_to_be_clickable((By.CLASS_NAME, "idk what to write so it checks it"))).click() But I really don't know how do I find it so it can be clicked. <label> <input type="checkbox" value="613" checked=""> # value - lot id, checked - means the checkbox is marked <label> # and non-checked checkbox code looks like this: <label> <input type="checkbox" value="613"> <label>
[ "You can't use By.CLASS_NAME here since it has no class.\nYou can use:\n\n\nBy.CSS_SELECTOR to find by CSS selectors\n\nchbVal = '613' # in case you need be able to change this\n\n(By.CSS_SELECTOR, f'label > input[type=\"checkbox\"][value=\"{chbVal}\"][checked=\"\"]') # for checked\n\n(By.CSS_SELECTOR, f'label > input[type=\"checkbox\"][value=\"{chbVal}\"]:not([checked])') # for unchecked\n\n\n\nor By.XPATH to find by Xpath\n\nchbVal = '613' # in case you need be able to change this\n\n(By.XPATH, f'//label/input[@type=\"checkbox\"][@value=\"{chbVal}\"][@checked=\"\"]') # for checked\n\n(By.XPATH, f'//label/input[@type=\"checkbox\"][@value=\"{chbVal}\"][not(@checked=\"\")]') # for unchecked\n\n\nNote: These are just based on the html snippet you've included - there might by parent elements with better identifiers that you need to include in your path/selector.\n\nAlso,\n\nSomehow in html code the checkbox doesn't have value\n\nbut in your snippet it does have value...? Anyway, the examples above include value, but you don't have to include them; you can even exclude them with not(...) as shown for checked. (Btw, not(checked)/not(@checked) should exclude elements that have a checked attribute at all, no matter what the value is.)\n" ]
[ 2 ]
[]
[]
[ "beautifulsoup", "html", "parsing", "python", "selenium" ]
stackoverflow_0074565163_beautifulsoup_html_parsing_python_selenium.txt
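Tying the selector from the answer back into the wait call from the question, a minimal sketch could look like this; the value 613 is the lot id from the question's HTML snippet, and the Chrome driver setup is assumed.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)

# click the still-unchecked checkbox for lot 613
checkbox = wait.until(EC.element_to_be_clickable(
    (By.CSS_SELECTOR, 'label > input[type="checkbox"][value="613"]:not([checked])')))
checkbox.click()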
Q: How to properly check if a number is a prime number Hey so i have this function to check if a number is a prime number def is_prime(n): flag = True for i in range(2, n ): if (n % i) == 0: flag = False return flag print(is_prime(1)) However when i test the number 1, it skips the for loop and returns True which isn't correct because 1 is not a prime number. How could i fix this? A: You can first start by checking if n is greater than 1 the code should proceed, else it should return False. If n passes the first condition, only then the code can proceed to verify if n is indeed prime or not. def is_prime(n): flag = True if n > 1: for i in range(2, n ): if (n % i) == 0: flag = False return flag # Returns this flag after check whether n is prime or not # Returns False if n <= 1 return False print(is_prime(1)) output: False A: 2 is also skipped by the loop, but the function returns true thanks to the flag, and we know that's right. Also you can exit early the loop if the condition is met: def is_prime(n: int) -> bool: if n > 1: for i in range(2, n): # if n == 2, there is no loop, is never checked if (n % i) == 0: return False # can return early once we meet the condition, don't need to finish the loop return True print(is_prime(7534322224)) print(is_prime(5)) An alternative approach (though much slower on bigger numbers): def is_prime(n: int) -> bool: if n < 2: return False return n == 2 or True not in [True for i in range(2, n) if (n % i) == 0] print(is_prime(75343224)) print(is_prime(5))
How to properly check if a number is a prime number
Hey so i have this function to check if a number is a prime number def is_prime(n): flag = True for i in range(2, n ): if (n % i) == 0: flag = False return flag print(is_prime(1)) However when i test the number 1, it skips the for loop and returns True which isn't correct because 1 is not a prime number. How could i fix this?
[ "You can first start by checking if n is greater than 1 the code should proceed, else it should return False. If n passes the first condition, only then the code can proceed to verify if n is indeed prime or not.\ndef is_prime(n):\n flag = True\n if n > 1:\n for i in range(2, n ):\n if (n % i) == 0:\n flag = False\n\n return flag # Returns this flag after check whether n is prime or not\n \n # Returns False if n <= 1\n return False\n\n\nprint(is_prime(1))\n\noutput:\nFalse\n\n", "2 is also skipped by the loop, but the function returns true thanks to the flag, and we know that's right.\nAlso you can exit early the loop if the condition is met:\ndef is_prime(n: int) -> bool:\n if n > 1:\n for i in range(2, n): # if n == 2, there is no loop, is never checked\n if (n % i) == 0:\n return False # can return early once we meet the condition, don't need to finish the loop\n\n return True\n\nprint(is_prime(7534322224))\nprint(is_prime(5))\n\nAn alternative approach (though much slower on bigger numbers):\ndef is_prime(n: int) -> bool:\n if n < 2: return False \n return n == 2 or True not in [True for i in range(2, n) if (n % i) == 0]\n\nprint(is_prime(75343224))\nprint(is_prime(5))\n\n" ]
[ 1, 0 ]
[]
[]
[ "function", "python" ]
stackoverflow_0074564053_function_python.txt
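Beyond the flag fix, trial division only needs to test divisors up to the square root of n, since any factor above sqrt(n) pairs with one below it. A short sketch:

import math

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for i in range(2, math.isqrt(n) + 1):  # isqrt requires Python 3.8+
        if n % i == 0:
            return False
    return True

print(is_prime(1))   # False
print(is_prime(2))   # True
print(is_prime(97))  # True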
Q: Python Getting Last evalues of a loop I have a definition and it repeats with a while loop. I want to pull the penultimate value of m and ar from the operations that occur in the definition in each loop. To do this, I opened an out file and tried to print the second-to-last element in each while loop. But when I look at this file as output, there is only 1 line. How can I fix this? import numpy as np import matplotlib.pyplot as plt def main(rho_c): f = open("radial.out","w") ### Constants pi = 3.1415926535897 gamma =5./3. m_sun = 2.998e33 K = 1e10 P_c = K*rho_c**gamma km = 1e5 # cm dr = 1e4 # cm r = 0; Rho = np.zeros(2) Rho[0] = rho_c Rho[1] = rho_c ar = np.zeros(2) # Tum r degerleri array olarak buna atilacak ar[0] = r ar[1] = dr m = np.zeros(2) m_k1 = 0 m_k2 = 0 m_k3 = 0 m_k4 = 0 P = np.zeros(2) P_k1 = 0 P_k2 = 0 P_k3 = 0 P_k4 = 0 m[0] = 0 P[0] = P_c P[1] = P[0] m[1] = 4*pi*dr**3*rho_c/3 def der_m(r,m,ro): return 4.*np.pi*(r**2)*ro def ro(P): if P==P_c: return rho_c return (P/K)**(3./5.) def der_P(r,P,ro,m): G = 6.67430e-8 # Gravitational constant cm^3/(g*s)^ c = 2.998e+10 # cm/s return - (G*m*ro/(r**2))*(1.+(4.*np.pi*(r**3)*P/(m*c**2) ) )* \ (1. + P/(ro*c**2) )/(1-(2.*G*m/(r*c**2))) i = 1 r = r+i*dr while P[-1] > 0: # For m """ r , m , ro """ m_k1 = der_m( r , m[i] , ro(P[i]) ) m_k2 = der_m( r+0.5*dr , m[i]+dr*m_k1*0.5 , ro(P[i]) ) m_k3 = der_m( r+0.5*dr , m[i]+dr*m_k2*0.5 , ro(P[i]) ) m_k4 = der_m( r+1.*dr , m[i]+dr*m_k3 , ro(P[i]) ) m = np.append(m, m[i] + (dr/6.)*( m_k1+2.*m_k2+2.*m_k3+m_k4 ) ) # For P P_k1 = der_P(r , P[i] ,ro(P[i]) , m[i]) P_k2 = der_P(r+0.5*dr , P[i]+dr*P_k1*0.5 ,ro(P[i]+dr*P_k1*0.5) , m[i]+dr*m_k1*0.5) P_k3 = der_P(r+0.5*dr , P[i]+dr*P_k2*0.5 ,ro(P[i]+dr*P_k2*0.5) , m[i]+dr*m_k1*0.5) P_k4 = der_P(r+1.*dr , P[i]+dr*P_k3 ,ro(P[i]+dr*P_k3) , m[i]) P = np.append(P, P[i] + (dr/6.)*( P_k1+2.*P_k2+2.*P_k3+P_k4 ) ) r = r+dr ar = np.append(ar,r) Rho = np.append(Rho, ro(P[i]) ) i = i+1 Hm = m[i]/np.abs(der_m(r,m[i],ro(P[i]))) Hp = P[i]/np.abs(der_P(r,P[i],ro(P[i]),m[i])) H = (Hm*Hp)/(Hm+Hp) dr = H data = ar[-2]/km, m[-2]/m_sun f.write(str(data)+'\n') np.savetxt("radial.out", data, delimiter=" ") plt.plot(ar/km,m/m_sun) plt.xlim([0, 13]) plt.ylim([0,1.035]) plt.xlabel(r'$R$ (km)') plt.ylabel(r'$M/M_\odot$') #plt.savefig('plot_mass.pdf') f.close() rho_c = 2e15 # Density centeral kg/me3 while rho_c < 2e16: #logspace main(rho_c) rho_c = rho_c*1.2 A: You open a new out file every time main() is called. import numpy as np import matplotlib.pyplot as plt f = open("radial.out","w") # outside main() def main(rho_c): ### Constants pi = 3.1415926535897 gamma =5./3. m_sun = 2.998e33 K = 1e10 P_c = K*rho_c**gamma km = 1e5 # cm dr = 1e4 # cm r = 0; Rho = np.zeros(2) Rho[0] = rho_c Rho[1] = rho_c ar = np.zeros(2) # Tum r degerleri array olarak buna atilacak ar[0] = r ar[1] = dr m = np.zeros(2) m_k1 = 0 m_k2 = 0 m_k3 = 0 m_k4 = 0 P = np.zeros(2) P_k1 = 0 P_k2 = 0 P_k3 = 0 P_k4 = 0 m[0] = 0 P[0] = P_c P[1] = P[0] m[1] = 4*pi*dr**3*rho_c/3 def der_m(r,m,ro): return 4.*np.pi*(r**2)*ro def ro(P): if P==P_c: return rho_c return (P/K)**(3./5.) def der_P(r,P,ro,m): G = 6.67430e-8 # Gravitational constant cm^3/(g*s)^ c = 2.998e+10 # cm/s return - (G*m*ro/(r**2))*(1.+(4.*np.pi*(r**3)*P/(m*c**2) ) )* \ (1. 
+ P/(ro*c**2) )/(1-(2.*G*m/(r*c**2))) i = 1 r = r+i*dr while P[-1] > 0: # For m """ r , m , ro """ m_k1 = der_m( r , m[i] , ro(P[i]) ) m_k2 = der_m( r+0.5*dr , m[i]+dr*m_k1*0.5 , ro(P[i]) ) m_k3 = der_m( r+0.5*dr , m[i]+dr*m_k2*0.5 , ro(P[i]) ) m_k4 = der_m( r+1.*dr , m[i]+dr*m_k3 , ro(P[i]) ) m = np.append(m, m[i] + (dr/6.)*( m_k1+2.*m_k2+2.*m_k3+m_k4 ) ) # For P P_k1 = der_P(r , P[i] ,ro(P[i]) , m[i]) P_k2 = der_P(r+0.5*dr , P[i]+dr*P_k1*0.5 ,ro(P[i]+dr*P_k1*0.5) , m[i]+dr*m_k1*0.5) P_k3 = der_P(r+0.5*dr , P[i]+dr*P_k2*0.5 ,ro(P[i]+dr*P_k2*0.5) , m[i]+dr*m_k1*0.5) P_k4 = der_P(r+1.*dr , P[i]+dr*P_k3 ,ro(P[i]+dr*P_k3) , m[i]) P = np.append(P, P[i] + (dr/6.)*( P_k1+2.*P_k2+2.*P_k3+P_k4 ) ) r = r+dr ar = np.append(ar,r) Rho = np.append(Rho, ro(P[i]) ) i = i+1 Hm = m[i]/np.abs(der_m(r,m[i],ro(P[i]))) Hp = P[i]/np.abs(der_P(r,P[i],ro(P[i]),m[i])) H = (Hm*Hp)/(Hm+Hp) dr = H data = ar[-2]/km, m[-2]/m_sun f.write(str(data)+'\n') np.savetxt("radial.out", data, delimiter=" ") plt.plot(ar/km,m/m_sun) plt.xlim([0, 13]) plt.ylim([0,1.035]) plt.xlabel(r'$R$ (km)') plt.ylabel(r'$M/M_\odot$') #plt.savefig('plot_mass.pdf') rho_c = 2e15 # Density centeral kg/me3 while rho_c < 2e16: #logspace main(rho_c) rho_c = rho_c*1.2 f.close() # outside main and after while
Python Getting Last values of a loop
I have a definition and it repeats with a while loop. I want to pull the penultimate value of m and ar from the operations that occur in the definition in each loop. To do this, I opened an out file and tried to print the second-to-last element in each while loop. But when I look at this file as output, there is only 1 line. How can I fix this? import numpy as np import matplotlib.pyplot as plt def main(rho_c): f = open("radial.out","w") ### Constants pi = 3.1415926535897 gamma =5./3. m_sun = 2.998e33 K = 1e10 P_c = K*rho_c**gamma km = 1e5 # cm dr = 1e4 # cm r = 0; Rho = np.zeros(2) Rho[0] = rho_c Rho[1] = rho_c ar = np.zeros(2) # Tum r degerleri array olarak buna atilacak ar[0] = r ar[1] = dr m = np.zeros(2) m_k1 = 0 m_k2 = 0 m_k3 = 0 m_k4 = 0 P = np.zeros(2) P_k1 = 0 P_k2 = 0 P_k3 = 0 P_k4 = 0 m[0] = 0 P[0] = P_c P[1] = P[0] m[1] = 4*pi*dr**3*rho_c/3 def der_m(r,m,ro): return 4.*np.pi*(r**2)*ro def ro(P): if P==P_c: return rho_c return (P/K)**(3./5.) def der_P(r,P,ro,m): G = 6.67430e-8 # Gravitational constant cm^3/(g*s)^ c = 2.998e+10 # cm/s return - (G*m*ro/(r**2))*(1.+(4.*np.pi*(r**3)*P/(m*c**2) ) )* \ (1. + P/(ro*c**2) )/(1-(2.*G*m/(r*c**2))) i = 1 r = r+i*dr while P[-1] > 0: # For m """ r , m , ro """ m_k1 = der_m( r , m[i] , ro(P[i]) ) m_k2 = der_m( r+0.5*dr , m[i]+dr*m_k1*0.5 , ro(P[i]) ) m_k3 = der_m( r+0.5*dr , m[i]+dr*m_k2*0.5 , ro(P[i]) ) m_k4 = der_m( r+1.*dr , m[i]+dr*m_k3 , ro(P[i]) ) m = np.append(m, m[i] + (dr/6.)*( m_k1+2.*m_k2+2.*m_k3+m_k4 ) ) # For P P_k1 = der_P(r , P[i] ,ro(P[i]) , m[i]) P_k2 = der_P(r+0.5*dr , P[i]+dr*P_k1*0.5 ,ro(P[i]+dr*P_k1*0.5) , m[i]+dr*m_k1*0.5) P_k3 = der_P(r+0.5*dr , P[i]+dr*P_k2*0.5 ,ro(P[i]+dr*P_k2*0.5) , m[i]+dr*m_k1*0.5) P_k4 = der_P(r+1.*dr , P[i]+dr*P_k3 ,ro(P[i]+dr*P_k3) , m[i]) P = np.append(P, P[i] + (dr/6.)*( P_k1+2.*P_k2+2.*P_k3+P_k4 ) ) r = r+dr ar = np.append(ar,r) Rho = np.append(Rho, ro(P[i]) ) i = i+1 Hm = m[i]/np.abs(der_m(r,m[i],ro(P[i]))) Hp = P[i]/np.abs(der_P(r,P[i],ro(P[i]),m[i])) H = (Hm*Hp)/(Hm+Hp) dr = H data = ar[-2]/km, m[-2]/m_sun f.write(str(data)+'\n') np.savetxt("radial.out", data, delimiter=" ") plt.plot(ar/km,m/m_sun) plt.xlim([0, 13]) plt.ylim([0,1.035]) plt.xlabel(r'$R$ (km)') plt.ylabel(r'$M/M_\odot$') #plt.savefig('plot_mass.pdf') f.close() rho_c = 2e15 # Density centeral kg/me3 while rho_c < 2e16: #logspace main(rho_c) rho_c = rho_c*1.2
[ "You open a new out file every time main() is called.\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nf = open(\"radial.out\",\"w\") # outside main()\ndef main(rho_c):\n \n ### Constants\n \n pi = 3.1415926535897\n gamma =5./3.\n m_sun = 2.998e33\n K = 1e10\n P_c = K*rho_c**gamma\n km = 1e5 # cm\n\n dr = 1e4 # cm\n r = 0;\n\n Rho = np.zeros(2)\n Rho[0] = rho_c\n Rho[1] = rho_c\n\n ar = np.zeros(2) # Tum r degerleri array olarak buna atilacak\n ar[0] = r\n ar[1] = dr\n\n m = np.zeros(2)\n m_k1 = 0\n m_k2 = 0\n m_k3 = 0\n m_k4 = 0\n\n P = np.zeros(2)\n P_k1 = 0\n P_k2 = 0\n P_k3 = 0\n P_k4 = 0\n\n m[0] = 0\n P[0] = P_c\n\n\n P[1] = P[0]\n m[1] = 4*pi*dr**3*rho_c/3\n\n def der_m(r,m,ro): \n return 4.*np.pi*(r**2)*ro\n \n def ro(P):\n if P==P_c:\n return rho_c \n return (P/K)**(3./5.)\n \n def der_P(r,P,ro,m):\n G = 6.67430e-8 # Gravitational constant cm^3/(g*s)^\n c = 2.998e+10 # cm/s\n return - (G*m*ro/(r**2))*(1.+(4.*np.pi*(r**3)*P/(m*c**2) ) )* \\\n (1. + P/(ro*c**2) )/(1-(2.*G*m/(r*c**2)))\n\n i = 1\n r = r+i*dr\n while P[-1] > 0:\n \n # For m\n \"\"\" r , m , ro \"\"\" \n m_k1 = der_m( r , m[i] , ro(P[i]) )\n m_k2 = der_m( r+0.5*dr , m[i]+dr*m_k1*0.5 , ro(P[i]) )\n m_k3 = der_m( r+0.5*dr , m[i]+dr*m_k2*0.5 , ro(P[i]) )\n m_k4 = der_m( r+1.*dr , m[i]+dr*m_k3 , ro(P[i]) )\n \n m = np.append(m, m[i] + (dr/6.)*( m_k1+2.*m_k2+2.*m_k3+m_k4 ) )\n \n # For P\n P_k1 = der_P(r , P[i] ,ro(P[i]) , m[i])\n P_k2 = der_P(r+0.5*dr , P[i]+dr*P_k1*0.5 ,ro(P[i]+dr*P_k1*0.5) , m[i]+dr*m_k1*0.5)\n P_k3 = der_P(r+0.5*dr , P[i]+dr*P_k2*0.5 ,ro(P[i]+dr*P_k2*0.5) , m[i]+dr*m_k1*0.5)\n P_k4 = der_P(r+1.*dr , P[i]+dr*P_k3 ,ro(P[i]+dr*P_k3) , m[i])\n \n P = np.append(P, P[i] + (dr/6.)*( P_k1+2.*P_k2+2.*P_k3+P_k4 ) )\n \n r = r+dr\n ar = np.append(ar,r)\n Rho = np.append(Rho, ro(P[i]) )\n i = i+1\n \n Hm = m[i]/np.abs(der_m(r,m[i],ro(P[i])))\n Hp = P[i]/np.abs(der_P(r,P[i],ro(P[i]),m[i]))\n H = (Hm*Hp)/(Hm+Hp)\n dr = H\n\n data = ar[-2]/km, m[-2]/m_sun\n f.write(str(data)+'\\n')\n np.savetxt(\"radial.out\", data, delimiter=\" \")\n\n plt.plot(ar/km,m/m_sun)\n plt.xlim([0, 13]) \n plt.ylim([0,1.035])\n plt.xlabel(r'$R$ (km)')\n plt.ylabel(r'$M/M_\\odot$')\n #plt.savefig('plot_mass.pdf') \n \n\nrho_c = 2e15 # Density centeral kg/me3\n\n\n\nwhile rho_c < 2e16:\n#logspace \n main(rho_c)\n rho_c = rho_c*1.2\n\nf.close() # outside main and after while\n\n" ]
[ 1 ]
[]
[]
[ "append", "function", "loops", "python" ]
stackoverflow_0074565373_append_function_loops_python.txt
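An alternative to hoisting the file handle out of main() is to open the output in append mode inside the function, so each call adds a line instead of truncating the file; note that the np.savetxt call in the original rewrites radial.out on every call and would need the same treatment or removal. A minimal sketch, with placeholder values standing in for ar[-2]/km and m[-2]/m_sun:

def main(rho_c):
    # placeholder results standing in for the integration outputs
    radius_km, mass_solar = rho_c * 1e-15, rho_c * 2e-16
    with open("radial.out", "a") as f:   # "a" appends; "w" would truncate
        f.write(f"{radius_km} {mass_solar}\n")

rho_c = 2e15
while rho_c < 2e16:
    main(rho_c)
    rho_c *= 1.2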
Q: How do I get my code to continue looping? I'm trying to make the code repeat the line "player name invalid" and ask for the input repeatedly until the input is "player 1". How do I do that? correct_n="player 1" while True: Name1 = input ("Enter Your Name: ") if Name1 == correct_n: cp = 'password' while True: password= input("enter the password ") if password == cp: print ("yes you are in") break print("please try again") else: print("Player name not valid") break print("player name invalid") The code just prints "player name invalid" and goes on to do the rest of the code. I don't want the rest of the code to be outputted until the user inputs the correct name and password.
How do I get my code to continue looping?
I'm trying to make the code repeat the line "player name invalid" and ask for the input repeatedly until the input is "player 1". How do I do that? correct_n="player 1" while True: Name1 = input ("Enter Your Name: ") if Name1 == correct_n: cp = 'password' while True: password= input("enter the password ") if password == cp: print ("yes you are in") break print("please try again") else: print("Player name not valid") break print("player name invalid") The code just prints "player name invalid" and goes on to do the rest of the code. I don't want the rest of the code to be outputted until the user inputs the correct name and password.
[ "Because you have two while loops, it is not possible to use break to exit both of them. Instead, you should separate the loops so that the name loop runs until the name is correct, and then the password loop runs until the password matches.\ncorrect_n=\"player 1\"\nwhile True:\n Name1 = input(\"Enter Your Name: \")\n if Name1 == correct_n:\n break\n else:\n print(\"Player name not valid\")\n\ncp = 'password'\nwhile True:\n password= input(\"enter the password \")\n if password == cp:\n print (\"yes you are in\")\n break\n else:\n print(\"please try again\")\n\n" ]
[ 0 ]
[ "Solution:\nwhile input('Enter your name: ') != 'player 1': print('Player name invalid')\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074565551_python.txt
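Each validation loop can also be collapsed into a single loop condition. A compact sketch using assignment expressions (Python 3.8+):

correct_n = "player 1"
cp = "password"

# re-prompt until the name matches, then fall through
while (name := input("Enter Your Name: ")) != correct_n:
    print("Player name not valid")

# re-prompt until the password matches
while input("enter the password ") != cp:
    print("please try again")

print("yes you are in")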
Q: How to make this dictionary/key python code work I am trying to make a function that takes a threshold and determines which songs, from a CSV file of song names and their lyrics, contain human names. The function should create a CSV file named outputFile that contains the number of distinct names, the name of the song, and the artist. import csv def findName(thresh, outputFile): dictNames={} with open('allNames.csv') as csvfile: reader = csv.DictReader(csvfile, delimiter="\t") for row in reader: if row["name"] in dictNames: dictNames[row["name"]] +=1 else: dictNames[row["name"]]=1 with open(outputFile, "w", newline='') as outfile: headers= ["song", "artist", "year"] writer=csv.DictWriter(outfile, fieldnames=headers) writer.writeheader() for key, val in dictNames.items(): if val>= thresh: writer.writerow({key: val}) csvfile.close() outfile.close() A: What's the rationale for not using Pandas here? Not sure I fully understand your question, but I'm thinking something like: df = pd.read_csv('allNames.csv') #partition df after threshold df['index'] = df.index def partition_return(threshold, df): df = df.loc[df['index'] >= threshold].reset_index(drop=True) df = df[['song', 'artist', 'year']] df['count_names_dist'] = len(df['artist'].unique()) df.to_csv('outfile.csv', index=False)
How to make this dictionary/key python code work
I am trying to make a function that takes a threshold and determines which songs, from a CSV file of song names and their lyrics, contain human names. The function should create a CSV file named outputFile that contains the number of distinct names, the name of the song, and the artist. import csv def findName(thresh, outputFile): dictNames={} with open('allNames.csv') as csvfile: reader = csv.DictReader(csvfile, delimiter="\t") for row in reader: if row["name"] in dictNames: dictNames[row["name"]] +=1 else: dictNames[row["name"]]=1 with open(outputFile, "w", newline='') as outfile: headers= ["song", "artist", "year"] writer=csv.DictWriter(outfile, fieldnames=headers) writer.writeheader() for key, val in dictNames.items(): if val>= thresh: writer.writerow({key: val}) csvfile.close() outfile.close()
[ "What's the rationale for not using Pandas here?\nNot sure I fully understand your question, but I'm thinking something like:\ndf = pd.read_csv('allNames.csv')\n\n#partition df after threshold\ndf['index'] = df.index\n\ndef partition_return(threshold, df):\n df = df.loc[df['index'] >= threshold].reset_index(drop=true)\n df = df[['song', 'artist', 'year]]\n df['count_names_dist'] = len(df['artist'].unique())\n df.to_csv('outfile.csv', index=False)\n\n" ]
[ 0 ]
[]
[]
[ "csv", "dictionary", "function", "key", "python" ]
stackoverflow_0074565501_csv_dictionary_function_key_python.txt
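Staying with the standard library instead of pandas, the key fix is that rows passed to DictWriter.writerow must be keyed by the declared fieldnames. A hedged sketch of the count-then-filter flow, assuming allNames.csv is tab-separated and has "name", "song", "artist", and "year" columns:

import csv
from collections import Counter

def find_names(thresh, output_file):
    with open("allNames.csv", newline="") as csvfile:
        rows = list(csv.DictReader(csvfile, delimiter="\t"))
    counts = Counter(row["name"] for row in rows)  # occurrences per name
    with open(output_file, "w", newline="") as outfile:
        writer = csv.DictWriter(outfile, fieldnames=["song", "artist", "year"])
        writer.writeheader()
        for row in rows:
            if counts[row["name"]] >= thresh:
                writer.writerow({"song": row["song"],
                                 "artist": row["artist"],
                                 "year": row["year"]})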
Q: reshape pandas data frame: duplicated rows to columns, with textual data I have a dataframe like this: INDEX_COL col1 A Random Text B Some more random text C more stuff A Blah B Blah, Blah C Yet more stuff A erm B yup C whatever What I need is it reformed into new columns and stacked/grouped by values in col_1. So something like this: A B C Random Text Some more random text more stuff Blah Blah, Blah Yet more stuff erm yup whatever I've reviewed How can I pivot a dataframe? but all of the examples work with numerical data and this is a use case that involves textual data, so aggregation appears to be not an option (but it was - see accepted answer below) I've tried the following: Pivot - but all the examples I've seen involve numerical values with aggregate functions. This is reshaping non-numerical data I get that index=INDEX COL, and columns= 'col1', but values? add a numerical column, pivot and then drop the numberical columns created? Feels like trying for forced pivot to do something it was never meant to do. Unstack - but this seems to convert the df into a new df with a single value index of 'b' unstack(level=0) I've even considered slicing the data frame by index into separate dataframes and the concatinating them, but the mismatched indexes result in NaN appearing like a checkerboard. Also this feels like an fugly solution. I've tried dropping the index_col, with Col1=['A,B,C'] and col2= the random text, but the new integer index comes along and spoils the fun. Any suggestions or thoughts in which direction I should go with this? A: You can use agg(list) and then explode the whole dataframe: output = df.groupby('INDEX_COL').agg(list).T.apply(pd.Series.explode) output: INDEX_COL A B C col1 Random Text Some more random text more stuff col1 Blah Blah, Blah Yet more stuff col1 erm yup whatever A: Another possible solution, using pandas.pivot_table: (df.pivot_table(columns='INDEX_COL', values='col1', aggfunc=list) .pipe(lambda d: d.explode(d.columns.tolist())) .reset_index(drop=True)) Output: INDEX_COL A B C 0 Random Text Some more random text more stuff 1 Blah Blah, Blah Yet more stuff 2 erm yup whatever
reshape pandas data frame: duplicated rows to columns, with textual data
I have a dataframe like this: INDEX_COL col1 A Random Text B Some more random text C more stuff A Blah B Blah, Blah C Yet more stuff A erm B yup C whatever What I need is for it to be reformed into new columns and stacked/grouped by values in col_1. So something like this: A B C Random Text Some more random text more stuff Blah Blah, Blah Yet more stuff erm yup whatever I've reviewed How can I pivot a dataframe? but all of the examples work with numerical data and this is a use case that involves textual data, so aggregation appears not to be an option (but it was - see accepted answer below) I've tried the following: Pivot - but all the examples I've seen involve numerical values with aggregate functions. This is reshaping non-numerical data. I get that index=INDEX_COL, and columns= 'col1', but values? add a numerical column, pivot and then drop the numerical columns created? Feels like trying to force pivot to do something it was never meant to do. Unstack - but this seems to convert the df into a new df with a single value index of 'b' unstack(level=0) I've even considered slicing the data frame by index into separate dataframes and then concatenating them, but the mismatched indexes result in NaN appearing like a checkerboard. Also this feels like a fugly solution. I've tried dropping the index_col, with Col1=['A,B,C'] and col2= the random text, but the new integer index comes along and spoils the fun. Any suggestions or thoughts in which direction I should go with this?
[ "You can use agg(list) and then explode the whole dataframe:\noutput = df.groupby('INDEX_COL').agg(list).T.apply(pd.Series.explode)\n\noutput:\nINDEX_COL A B C\ncol1 Random Text Some more random text more stuff\ncol1 Blah Blah, Blah Yet more stuff\ncol1 erm yup whatever\n\n", "Another possible solution, using pandas.pivot_table:\n(df.pivot_table(columns='INDEX_COL', values='col1', aggfunc=list)\n .pipe(lambda d: d.explode(d.columns.tolist()))\n .reset_index(drop=True))\n\nOutput:\nINDEX_COL A B C\n0 Random Text Some more random text more stuff\n1 Blah Blah, Blah Yet more stuff\n2 erm yup whatever\n\n" ]
[ 2, 0 ]
[]
[]
[ "data_wrangling", "dataframe", "pandas", "python" ]
stackoverflow_0074565364_data_wrangling_dataframe_pandas_python.txt
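One more short route, for reference: number each repeat per key with cumcount, then pivot on that counter (this assumes INDEX_COL is a regular column, as the answers above do; call df.reset_index() first if it is actually the index):

out = (df.assign(seq=df.groupby('INDEX_COL').cumcount())
         .pivot(index='seq', columns='INDEX_COL', values='col1'))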
Q: Trying to create a class that creates a turtle I'm trying to create a class where I can build a turtle so I can call that multiple times and get a bunch of turtles. I'm not sure exactly how to create a turtle with the name = turtle.Turtle(). It gives me an error but doesn't say why. import turtle class CreateTurtle: # initialize constructor def __init__(self, name, color, pensize, shape): self.name = name self.color = color self.pensize = pensize self.shape = shape def make_turtle(self): name = turtle.Turtle() name.color(self.color) name.pensize(self.pensize) name.shape(self.shape) A: If you want to make many turtles from a Turtle class, you should make the class and call it with input and assign it to your variable. This is a minimal working example: class Turtle: # initialize constructor def __init__(self, name, color, pensize, shape): self.name = name self.color = color self.pensize = pensize self.shape = shape turtles = [] turtles.append(Turtle('a','b','c','d')) print(turtles[0].name) You can keep appending to store your turtles in a list or any format you want. Note: There are many Q&As about Python classes on SO. You just need to search for what you actually need. This question should be marked as duplicate, but I can't find an appropriate link. Thanks. A: There is nothing wrong with using a function to create a class instance, and return it. Now, you probably want to have many Turtle subclasses, each with different "settings" I suppose, so your best bet is to subclass Turtle, and overwrite the class methods, or add new ones. You also need to indent your functions into the class. You can do this as follows: class BlueTurtle(Turtle): #if you want to inherit directly all the methods of Turtle def __init__(self, param1, param2): super().__init__(param1, param2) #this will inherit all of Turtle's methods, but you have to configure it # to be able to pass in all the params that you'll be using # and if there are any obligatory params. A: You can create a Class by a separate method of creating the turtles. For example: from turtle import Turtle class Turtles: def __init__(self): self.new_turtles = [] self.create_turtle() def create_turtle(self): for i in range(3): mini = Turtle() mini.shape("square") mini.color("white") self.new_turtles.append(mini) Main File: turtles = Turtles() So now, in the variable turtles you have 3 turtles.
Trying to create a class that creates a turtle
I'm trying to create a class where I can build a turtle so I can call that multiple times and get a bunch of turtles. I'm not sure exactly how to create a turtle with the name = turtle.Turtle(). It gives me an error but doesn't say why. import turtle class CreateTurtle: # initialize constructor def __init__(self, name, color, pensize, shape): self.name = name self.color = color self.pensize = pensize self.shape = shape def make_turtle(self): name = turtle.Turtle() name.color(self.color) name.pensize(self.pensize) name.shape(self.shape)
[ "If you want to make many turtle from a class of Turtle, you should make class and call it with input and assign it to your variable. This is the minimal working example:\nclass Turtle:\n # initialize constructor\n def __init__(self, name, color, pensize, shape):\n self.name = name\n self.color = color\n self.pensize = pensize\n self.shape = shape\n\nturtles = []\nturtles.append(Turtle('a','b','c','d'))\nprint(turtles[0].name)\n\nYou can keep appending to store your turtles in list or any format you want.\nNote:\nThere is many Q&A about Python Class in SO. You just need to search what you actually needs. This question should be marked as duplicate, but I can't find an appropriate link. Thanks.\n", "There is nothing wrong with using a function to create a class instance, and return it.\nNow, you probably want to have many Turtle subclasses, each with different \"settings\" I suppose, so your best bet is to subclass Turtle, and overwrite the class methods, or add new ones. You also need to indent your functions into the class.\nYou can do this as follows:\nclass BlueTurtle(Turtle):\n #if you want to inherit directly all the methods of Turtle\n def __init__(param1, param2):\n super().__init__(param1, param2)\n #this will inherit all of Turtle's methods, but you have to configure it\n # to be able to pass in all the params that you'll be using\n # and if there are any obligaroty params.\n \n\n", "You can create a Class by a separate method of creating the turtles.\nFor example:\n\n\nclass Turtles: \n def __init__(self): \n self.create_turtle()\n self.new_turtles = []\n\n def create_turtle():\n for i in range(3):\n mini = Turtle()\n mini.shape(\"square\")\n mini.color(\"white\")\n self.new_turtles.append(mini)\n\n\n\nMain File:\nturtles = Turtles()\nSo now, in the variable turtles you have 3 turtles.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python", "turtle_graphics" ]
stackoverflow_0070072276_python_turtle_graphics.txt
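For completeness, a minimal runnable rewrite of the class from the question itself — the only real changes are that make_turtle uses its own local variable and returns the configured turtle, so the method can be called repeatedly (the unused name field is dropped here):

import turtle

class CreateTurtle:
    def __init__(self, color, pensize, shape):
        self.color = color
        self.pensize = pensize
        self.shape = shape

    def make_turtle(self):
        t = turtle.Turtle()      # create a fresh turtle on every call
        t.color(self.color)
        t.pensize(self.pensize)
        t.shape(self.shape)
        return t

maker = CreateTurtle("blue", 2, "turtle")
swarm = [maker.make_turtle() for _ in range(3)]   # three independent turtles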
Q: How to get a directory path in pyqt6 via QFileDialog? Name: PyQt6 Version: 6.1.0 OS: Ubuntu 20.04.5 LTS import os from PyQt6.QtWidgets import QFileDialog HOME_PATH = os.getenv("HOME") ... dir_path = QFileDialog.getExistingDirectory( parent=self, caption="Select directory", directory=HOME_PATH, options=QFileDialog.Option.ShowDirsOnly, ) directory and options do not work. Init path is wrong. Files get displayed, I cannot select a directory. A: The original question was mainly related to the new way PyQt6 uses Enums, which now always require the full namespace: until PyQt5, the syntax Class.FlagName was sufficient, but PyQt6 now requires Class.EnumName.FlagName. The other issue is probably related to QTBUG-88709 and is part of a long series of issues that deal with native dialogs provided by the OS, which always depend on the system. There is no direct solution for this, and the only safe way is to enforce the DontUseNativeDialog flag whenever in doubt: dir_path = QFileDialog.getExistingDirectory( parent=self, caption="Select directory", directory=HOME_PATH, options=QFileDialog.Option.DontUseNativeDialog, ) Note that: the ShowDirsOnly is useless as it's always set whenever the file dialog is in Directory mode (which is automatically done for the getExistingDirectory() static function); using the non-native dialog will obviously be inconsistent with the normal dialog shown by the OS; it is unclear if the bug was automatically resolved in Ubuntu/Gnome versions following the environment normally used in Ubuntu 20.04; there is no direct way to enforce the workaround only when required: either you find out the specific cause of the issue, or you just assume it doesn't work and always use the non-native dialog;
How to get a directory path in pyqt6 via QFileDialog?
Name: PyQt6 Version: 6.1.0 OS: Ubuntu 20.04.5 LTS import os from PyQt6.QtWidgets import QFileDialog HOME_PATH = os.getenv("HOME") ... dir_path = QFileDialog.getExistingDirectory( parent=self, caption="Select directory", directory=HOME_PATH, options=QFileDialog.Option.ShowDirsOnly, ) directory and options do not work. Init path is wrong. Files get displayed, I cannot select a directory.
[ "The original question was mainly related to the new way PyQt6 uses Enums, which now always require the full namespace: until PyQt5, the syntax Class.FlagName was sufficient, but PyQt6 now requires Class.EnumName.FlagName.\nThe other issue is probably related to QTBUG-88709 and is part of a long series of issues that deal with native dialogs provided by the OS, which always depend on the system.\nThere is no direct solution for this, and the only safe way is to enforce the DontUseNativeDialog flag whenever in doubt:\ndir_path = QFileDialog.getExistingDirectory(\n parent=self,\n caption=\"Select directory\",\n directory=HOME_PATH,\n options=QFileDialog.Option.DontUseNativeDialog,\n)\n\nNote that:\n\nthe ShowDirsOnly is useless as it's always set whenever the file dialog is in Directory mode (which is automatically done for the getExistingDirectory() static function);\nusing the non-native dialog will obviously be inconsistent with the normal dialog shown by the OS;\nit is unclear if the bug was automatically resolved in Ubuntu/Gnome versions following the environment normally used in Ubuntu 20.04;\nthere is no direct way to enforce the workaround only when required: either you find out the specific cause of the issue, or you just assume it doesn't work and always use the non-native dialog;\n\n" ]
[ 1 ]
[]
[]
[ "pyqt", "python" ]
stackoverflow_0074557955_pyqt_python.txt
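A self-contained sketch of the workaround above, runnable outside any widget (parent is None here), assuming PyQt6 is installed:

import os
import sys
from PyQt6.QtWidgets import QApplication, QFileDialog

app = QApplication(sys.argv)
dir_path = QFileDialog.getExistingDirectory(
    None,                                    # no parent widget in this standalone example
    "Select directory",
    os.getenv("HOME", ""),
    QFileDialog.Option.DontUseNativeDialog,  # force the Qt dialog, per the answer above
)
print(dir_path)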
Q: Python Function to add column items to list based around criteria from a different column [Code Sample] (https://i.stack.imgur.com/kx1UH.png) I have created this code to show the percentage of missing values for each of these columns. How can I now create a new variable that contains only the column names for the columns with over X% missing values? Assumed it would be an if statement but not too sure what it should do. List based on DF.loc Progress based on below comment Edit 2: # make a list of the categorical variables that contain missing values cat_vars2 = X_train.select_dtypes(include=['object']) len(cat_vars2) print(cat_vars2.isnull().sum()) # print percentage of missing values per variable percent_missing = cat_vars2.isnull().sum() * 100 / len(cat_vars2) missing_value_cat_vars = pd.DataFrame({'column_name': cat_vars2.columns, 'percent_missing': percent_missing}) missing_value_cat_vars.sort_values('percent_missing', inplace=True) print(missing_value_cat_vars) Output1 variables to impute with the string missing with_string_missing=list(missing_value_cat_vars.loc[missing_value_cat_vars['percent_missing']>=10,'column_name']) print(with_string_missing) # variables to impute with the most frequent category with_frequent_category=list(missing_value_cat_vars.loc[missing_value_cat_vars['percent_missing']<90,'column_name']) print(with_frequent_category) Output2 A: Something like this should work: threshold = 1 #can be whatever you want df.loc[df['percent_missing'] >= threshold, 'column_name'] If you want it as a list just do: list(df.loc[df['percent_missing'] >= threshold, 'column_name'])
Python Function to add column items to list based around criteria from a different column
[Code Sample] (https://i.stack.imgur.com/kx1UH.png) I have created this code to show the percentage of missing values for each of these columns, how can I now create a new variable that contains only the column names for the columns with over X% missing values? Assumed it would be an if statement but not too sure what it should do. List based on DF.loc Progress based on below comment Edit 2: # make a list of the categorical variables that contain missing values cat_vars2 = X_train.select_dtypes(include=['object']) len(cat_vars2) print(cat_vars2.isnull().sum()) # print percentage of missing values per variable percent_missing = cat_vars2.isnull().sum() * 100 / len(cat_vars2) missing_value_cat_vars = pd.DataFrame({'column_name': cat_vars2.columns, 'percent_missing': percent_missing}) missing_value_cat_vars.sort_values('percent_missing', inplace=True) print(missing_value_cat_vars) Output1 variables to impute with the string missing with_string_missing=list(missing_value_cat_vars.loc[missing_value_cat_vars['percent_missing']>=10,'column_name']) print(with_string_missing) # variables to impute with the most frequent category with_frequent_category=list(missing_value_cat_vars.loc[missing_value_cat_vars['percent_missing']<90,'column_name']) print(with_frequent_category) Output2
[ "Something like this should work:\nthreshold = 1 #can be whatever you want\ndf.loc[df['percent_missing'] >= threshold, column_name]\n\nIf you want it as a list just do:\nlist(df.loc[df['percent_missing'] >= threshold, column_name])\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "jupyter_notebook", "pandas", "python" ]
stackoverflow_0074564235_dataframe_jupyter_notebook_pandas_python.txt
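Since percent_missing in the question is already a Series indexed by column name, the same list can be produced without the intermediate DataFrame at all — a short sketch, using the 10% threshold from the question:

threshold = 10  # percent
cols_over = percent_missing[percent_missing >= threshold].index.tolist()
print(cols_over)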
Q: Is there any way to make dictionary key,value pairs to tuple? I need to convert this dictionary: {'A': 0, 'B': 1290, 'C': 515, 'D': 600} Into this test case: (A : 0) - (B : 1290) - (C : 515) - (D : 600) This is how I derive my dictionary def stock_list(list_of_art, list_of_cat): new_dictionary = {} numbers = [] for character in list_of_cat: if character not in new_dictionary: new_dictionary[character] = 0 for word in list_of_art: first_element = word[0][0] for number in word: if number.isdigit(): numbers.append(number) make_str = "".join(numbers) numbers = [] if first_element in new_dictionary: new_dictionary[first_element] += int(make_str) return new_dictionary This is a sample call stock_list(["BBAR 150", "CDXE 515", "BKWR 250", "BTSQ 890", "DRTY 600"],["A","B","C","D"]) A: Just iterate over the keys and values of your dictionary, and format them into a string. d = {'A': 0, 'B': 1290, 'C': 515, 'D': 600} print(" - ".join(f'({k} : {v})' for k,v in d.items())) #(A : 0) - (B : 1290) - (C : 515) - (D : 600)
Is there any way to make dictionary key,value pairs to tuple?
I need to convert this dictionary: {'A': 0, 'B': 1290, 'C': 515, 'D': 600} Into this test case: (A : 0) - (B : 1290) - (C : 515) - (D : 600) This is how I derive my dictionary def stock_list(list_of_art, list_of_cat): new_dictionary = {} numbers = [] for character in list_of_cat: if character not in new_dictionary: new_dictionary[character] = 0 for word in list_of_art: first_element = word[0][0] for number in word: if number.isdigit(): numbers.append(number) make_str = "".join(numbers) numbers = [] if first_element in new_dictionary: new_dictionary[first_element] += int(make_str) return new_dictionary This is a sample call stock_list(["BBAR 150", "CDXE 515", "BKWR 250", "BTSQ 890", "DRTY 600"],["A","B","C","D"])
[ "Just iterate over the keys and values of your dictionary, and format them into a string.\nd = {'A': 0, 'B': 1290, 'C': 515, 'D': 600}\n\nprint(\" - \".join(f'({k} : {v})' for k,v in d.items()))\n\n#(A : 0) - (B : 1290) - (C : 515) - (D : 600)\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074565285_python.txt
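If actual tuples are wanted rather than the formatted string, note that dict.items() already yields key/value pairs directly:

d = {'A': 0, 'B': 1290, 'C': 515, 'D': 600}
pairs = list(d.items())  # [('A', 0), ('B', 1290), ('C', 515), ('D', 600)]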
Q: Error when calculating singular values of a matrix I'm trying to calculate the singular values of a matrix using 2 methods. The matrix I'm using is the red channel of a sunflower image. Here's the image if you need it. The first method is using SVD: import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np A = mpimg.imread('sunflower.jpeg') R = A[:,:,0] U, S, V = np.linalg.svd(R) print(S) The second is using an alternate approach to calculating singular values, where you take the square root of the eigenvalues of R.T*R. import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np A = mpimg.imread('sunflower.jpeg') R = A[:,:,0] rW = np.linalg.eigvals(np.dot(R.T, R)) singvals = np.sqrt(rW) print(singvals) Hypothetically they should yield the same result, but that's not what I'm getting. Any help would be appreciated! A: When I run your code after casting R to be .astype(np.int64), and round the values to 6 decimal places, and compare the two return values as set, I get that they return the same values. I suspect that one or more of Unexpected int overflow Floating point rounding errors Order of the singular values is the source of the difference between the two... import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np A = mpimg.imread('sunflower.jpeg') R = A[:,:,0].astype(np.int64) U, S, V = np.linalg.svd(R) rW = np.linalg.eigvals(np.dot(R.T, R)) singvals = np.sqrt(rW) set(S.round(6)) == set(singvals.round(6)) # True
Error when calculating singular values of a matrix
I'm trying to calculate the singular values of a matrix using 2 methods. The matrix I'm using is the red channel of a sunflower image. Here's the image if you need it. The first method is using SVD: import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np A = mpimg.imread('sunflower.jpeg') R = A[:,:,0] U, S, V = np.linalg.svd(R) print(S) The second is using an alternate approach to calculating singular values, where you take the square root of the eigenvalues of R.T*R. import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np A = mpimg.imread('sunflower.jpeg') R = A[:,:,0] rW = np.linalg.eigvals(np.dot(R.T, R)) singvals = np.sqrt(rW) print(singvals) Hypothetically they should yield the same result, but that's not what I'm getting. Any help would be appreciated!
[ "When I run your code after casting R to be .astype(np.int64), and round the values to 6 decimal places, and compare the two return values as set, I get that they return the same values. I suspect that one or more of\n\nUnexpected int overflow\nFloating point rounding errors\nOrder of the singular values\n\nis the source of the difference between the two...\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\n\nA = mpimg.imread('sunflower.jpeg')\nR = A[:,:,0].astype(np.int64)\n\nU, S, V = np.linalg.svd(R)\n\nrW = np.linalg.eigvals(np.dot(R.T, R))\nsingvals = np.sqrt(rW)\n\nset(S.round(6)) == set(singvals.round(6))\n# True\n \n\n" ]
[ 0 ]
[]
[]
[ "image", "numpy", "python", "svd" ]
stackoverflow_0074565226_image_numpy_python_svd.txt
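A sketch of an order-aware comparison, reusing S and rW from above; unlike the set check, it will not silently collapse repeated singular values, np.abs guards against the tiny negative or complex eigenvalues floating-point error can produce, and the trim handles the case where R has more columns than rows (the extra eigenvalues are near zero):

import numpy as np

sv_svd = np.sort(S)[::-1]  # SVD returns singular values in descending order
sv_eig = np.sort(np.sqrt(np.abs(rW)))[::-1][:len(sv_svd)]  # same ordering, trimmed
print(np.allclose(sv_svd, sv_eig, rtol=1e-4))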
Q: ValueError: `decode_predictions` expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 26) I am using a model trained by myself to translate braille digits into plain text. As you can see this is a classification problem with 26 classes, one for each letter in the alphabet. This is the dataset that I used to train my model: https://www.kaggle.com/datasets/shanks0465/braille-character-dataset This is how I am generating my training and validation set: os.mkdir('./images/') alpha = 'a' for i in range(0, 26): os.mkdir('./images/' + alpha) alpha = chr(ord(alpha) + 1) rootdir = "C:\\Users\\ffernandez\\Downloads\\capstoneProject\\Braille Dataset\\Braille Dataset\\" for file in os.listdir(rootdir): letter = file[0] copyfile(rootdir+file, './images/' + letter + '/' + file) The resulting folder looks like this: folder structure And this is how I create the train and validation split: datagen = ImageDataGenerator(rotation_range=20, shear_range=10, validation_split=0.2) train_generator = datagen.flow_from_directory('./images/', target_size=(28,28), subset='training') val_generator = datagen.flow_from_directory('./images/', target_size=(28,28), subset='validation') Finally this is the code corresponding to the design, compilation and training of the model: K.clear_session() model_ckpt = ModelCheckpoint('BrailleNet.h5',save_best_only=True) reduce_lr = ReduceLROnPlateau(patience=8,verbose=0) early_stop = EarlyStopping(patience=15,verbose=1) entry = L.Input(shape=(28,28,3)) x = L.SeparableConv2D(64,(3,3),activation='relu')(entry) x = L.MaxPooling2D((2,2))(x) x = L.SeparableConv2D(128,(3,3),activation='relu')(x) x = L.MaxPooling2D((2,2))(x) x = L.SeparableConv2D(256,(2,2),activation='relu')(x) x = L.GlobalMaxPooling2D()(x) x = L.Dense(256)(x) x = L.LeakyReLU()(x) x = L.Dense(64,kernel_regularizer=l2(2e-4))(x) x = L.LeakyReLU()(x) x = L.Dense(26,activation='softmax')(x) model = Model(entry,x) model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy']) history = model.fit_generator(train_generator,validation_data=val_generator,epochs=666, callbacks=[model_ckpt,reduce_lr,early_stop],verbose=0) Then this is the code for testing an image of the letter 'a' in braille has the same size as the training and validation set (28x28): img_path = "./test/a1.JPG10whs.jpg" img = plt.imread(img_path) img_array = tf.keras.utils.img_to_array(img) img_batch = np.expand_dims(img_array, axis=0) img_preprocessed = tf.keras.applications.resnet50.preprocess_input(img_batch) prediction = model.predict(img_preprocessed) print(tf.keras.applications.imagenet_utils.decode_predictions(prediction, top=3)[0]) Just when I execute that last line of code this error appears: ValueError: decode_predictions expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 26) A similar question I found here on stackoverflow (ValueError: `decode_predictions` expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 7)). I've seen that using "decode_predictions" only makes sense if your model outputs the ImageNet classes (1000-dimensional) but if I can't use "decode_predictions" I don't know how to get my predictions. My desired output would be like: prediction = model.predict(img_preprocessed) print(prediction) output: 'a' Any hint or suggestion on how to solve this issue is highly appreciated. A: If we take a look at what the prediction object acually is we can see that it has 26 values. 
These values are the probability for each letter that the model predicts. So we need a way to map the prediction value to the respective letter. A simple way to do this could be to create a list of all the 26 possible letters and search the max value in the prediction array. Example: #Create prediction labels from a-z alpha="a" labels=["a"] for i in range(0, 25): alpha = chr(ord(alpha) + 1) labels.append(alpha) #Search the max value in prediction labels[np.argmax(prediction)] The output should be the character with the highest probability:
ValueError: `decode_predictions` expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 26)
I am using a model trained by myself to translate braille digits into plain text. As you can see this is a classification problem with 26 classes, one for each letter in the alphabet. This is the dataset that I used to train my model: https://www.kaggle.com/datasets/shanks0465/braille-character-dataset This is how I am generating my training and validation set: os.mkdir('./images/') alpha = 'a' for i in range(0, 26): os.mkdir('./images/' + alpha) alpha = chr(ord(alpha) + 1) rootdir = "C:\\Users\\ffernandez\\Downloads\\capstoneProject\\Braille Dataset\\Braille Dataset\\" for file in os.listdir(rootdir): letter = file[0] copyfile(rootdir+file, './images/' + letter + '/' + file) The resulting folder looks like this: folder structure And this is how I create the train and validation split: datagen = ImageDataGenerator(rotation_range=20, shear_range=10, validation_split=0.2) train_generator = datagen.flow_from_directory('./images/', target_size=(28,28), subset='training') val_generator = datagen.flow_from_directory('./images/', target_size=(28,28), subset='validation') Finally this is the code corresponding to the design, compilation and training of the model: K.clear_session() model_ckpt = ModelCheckpoint('BrailleNet.h5',save_best_only=True) reduce_lr = ReduceLROnPlateau(patience=8,verbose=0) early_stop = EarlyStopping(patience=15,verbose=1) entry = L.Input(shape=(28,28,3)) x = L.SeparableConv2D(64,(3,3),activation='relu')(entry) x = L.MaxPooling2D((2,2))(x) x = L.SeparableConv2D(128,(3,3),activation='relu')(x) x = L.MaxPooling2D((2,2))(x) x = L.SeparableConv2D(256,(2,2),activation='relu')(x) x = L.GlobalMaxPooling2D()(x) x = L.Dense(256)(x) x = L.LeakyReLU()(x) x = L.Dense(64,kernel_regularizer=l2(2e-4))(x) x = L.LeakyReLU()(x) x = L.Dense(26,activation='softmax')(x) model = Model(entry,x) model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy']) history = model.fit_generator(train_generator,validation_data=val_generator,epochs=666, callbacks=[model_ckpt,reduce_lr,early_stop],verbose=0) Then this is the code for testing an image of the letter 'a' in braille has the same size as the training and validation set (28x28): img_path = "./test/a1.JPG10whs.jpg" img = plt.imread(img_path) img_array = tf.keras.utils.img_to_array(img) img_batch = np.expand_dims(img_array, axis=0) img_preprocessed = tf.keras.applications.resnet50.preprocess_input(img_batch) prediction = model.predict(img_preprocessed) print(tf.keras.applications.imagenet_utils.decode_predictions(prediction, top=3)[0]) Just when I execute that last line of code this error appears: ValueError: decode_predictions expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 26) A similar question I found here on stackoverflow (ValueError: `decode_predictions` expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 7)). I've seen that using "decode_predictions" only makes sense if your model outputs the ImageNet classes (1000-dimensional) but if I can't use "decode_predictions" I don't know how to get my predictions. My desired output would be like: prediction = model.predict(img_preprocessed) print(prediction) output: 'a' Any hint or suggestion on how to solve this issue is highly appreciated.
[ "If we take a look at what the prediction object acually is we can see that it has 26 values. These values are the propabiity for each letter that the model predicts:\n\nSo we need a way to map the prediction value to the respective letter.\nA simple way to do this could to create a list of all the 26 possible letters and search the max value in the prediction array. Example:\n#Create prediction labels from a-z\nalpha=\"a\"\nlabels=[\"a\"]\nfor i in range(0, 25): \n alpha = chr(ord(alpha) + 1)\n labels.append(alpha)\n#Search the max value in prediction\nlabels[np.argmax(prediction)]\n\nThe output should be the character with the highest probability:\n\n" ]
[ 0 ]
[]
[]
[ "conv_neural_network", "keras", "machine_learning", "pre_trained_model", "python" ]
stackoverflow_0074561274_conv_neural_network_keras_machine_learning_pre_trained_model_python.txt
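One refinement worth noting: flow_from_directory records the actual class order in class_indices, so the mapping does not have to assume alphabetical a-z labels. A sketch reusing train_generator and prediction from the question:

import numpy as np

idx_to_class = {v: k for k, v in train_generator.class_indices.items()}
predicted_letter = idx_to_class[int(np.argmax(prediction))]
print(predicted_letter)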
Q: How to add numbers from 1 to 8 on the left and right side of this function Hi, I have a problem: I don't know how to add the numbers 1 to 8, in order, on the left side and right side of this function. Another problem is that it always shows None when I print it; I don't know why. I thought it was because my function was empty, but that didn't help. So what can I do with this? Thank you very much. A: Ok, I shortened your code a bit. Mainly, I replaced the multiple if conditions with a replacement map that you can pass to your function. sachy = [[0, 1, 0, 1, 0, 1, 0, 1],[1, 0, 1, 0, 1, 0, 1, 0],[0, 1, 0, 1, 0, 1, 0, 1],[0, 0, 0, 0, 0, 0, 0, 0],[0, 0, 0, 0, 0, 0, 0, 0],[0, 2, 0, 2, 0, 2, 0, 2],[2, 0, 2, 0, 2, 0, 2, 0],[0, 2, 0, 2, 0, 2, 0,2]] poradi =["_", "a", "b", "c", "d", "e", "f", "g", "h", "_"] repl_map = { 0: ".", 1: "o", 2: "*" } def sachovnice(repl_map): print(" ".join(poradi)) for i, radek in enumerate(sachy): print(f"{i+1} " + " ".join([repl_map.get(cell) for cell in radek]) + f" {i+1}") print( " ".join(poradi)) sachovnice(repl_map) A: Without changing your own code too much, please see below a fix to it. Igor's approach is a better way to do it, if you can understand what it does. sachy = [[0, 1, 0, 1, 0, 1, 0, 1],[1, 0, 1, 0, 1, 0, 1, 0],[0, 1, 0, 1, 0, 1, 0, 1],[0, 0, 0, 0, 0, 0, 0, 0],[0, 0, 0, 0, 0, 0, 0, 0],[0, 2, 0, 2, 0, 2, 0, 2],[2, 0, 2, 0, 2, 0, 2, 0],[0, 2, 0, 2, 0, 2, 0,2]] poradi = [" ", "a", "b", "c", "d", "e", "f", "g", "h", " "] def sachovnice(nula, jedna, dva): x = 1 result = " ".join(poradi) + '\n' for radek in sachy: result += f"{x} " for i in radek: if i == nula: result += '. ' elif i == jedna: result += 'o ' elif i == dva: result += '* ' result += f"{x}\n" x += 1 result += " ".join(poradi) return result print(sachovnice(0,1,2))
How to add numbers from 1 to 8 on the left and right side of this function
Hi, I have a problem: I don't know how to add the numbers 1 to 8, in order, on the left side and right side of this function. Another problem is that it always shows None when I print it; I don't know why. I thought it was because my function was empty, but that didn't help. So what can I do with this? Thank you very much.
[ "Ok, I shortened a bit your code. Mainly replaced multiple if conditions with a replacement map, that you can pass to your function.\nsachy = [[0, 1, 0, 1, 0, 1, 0, 1],[1, 0, 1, 0, 1, 0, 1, 0],[0, 1, 0, 1, 0, 1, 0, 1],[0, 0, 0, 0, 0, 0, 0, 0],[0, 0, 0, 0, 0, 0, 0, 0],[0, 2, 0, 2, 0, 2, 0, 2],[2, 0, 2, 0, 2, 0, 2, 0],[0, 2, 0, 2, 0, 2, 0,2]]\nporadi =[\"_\", \"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"_\"]\n\nrepl_map = {\n 0: \".\",\n 1: \"o\",\n 2: \"*\"\n}\n\ndef sachovnice(repl_map):\n print(\" \".join(poradi)) \n \n for i, radek in enumerate(sachy): \n print(f\"{i+1} \" + \" \".join([repl_map.get(cell) for cell in radek]) + f\" {i+1}\")\n \n print( \" \".join(poradi)) \n \nsachovnice(repl_map)\n\n", "Without changing your own code too much, please see below a fix to it. Igor's approach is a better way to do it, if you can understand what it does.\nsachy = [[0, 1, 0, 1, 0, 1, 0, 1],[1, 0, 1, 0, 1, 0, 1, 0],[0, 1, 0, 1, 0, 1, 0, 1],[0, 0, 0, 0, 0, 0, 0, 0],[0, 0, 0, 0, 0, 0, 0, 0],[0, 2, 0, 2, 0, 2, 0, 2],[2, 0, 2, 0, 2, 0, 2, 0],[0, 2, 0, 2, 0, 2, 0,2]]\nporadi = [\" \", \"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \" \"]\n\ndef sachovnice(nula, jedna, dva):\n x = 1\n result = \" \".join(poradi) + '\\n'\n\n for radek in sachy:\n result += f\"{x} \"\n for i in radek:\n if i == nula:\n result += '. '\n elif i == jedna:\n result += 'o '\n elif i == dva:\n result += '* '\n result += f\"{x}\\n\"\n x += 1\n \n result += \" \".join(poradi)\n\n return result\n\n\nprint(sachovnice(0,1,2))\n\n" ]
[ 0, 0 ]
[]
[]
[ "function", "nonetype", "numbers", "python" ]
stackoverflow_0074565450_function_nonetype_numbers_python.txt
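The None in the original output, by the way, is the classic symptom of printing the result of a function that only prints and never returns — print(f()) first runs f (printing its output) and then prints f's implicit return value, None. A minimal illustration:

def board():
    print("a b c")  # prints, but returns nothing

print(board())      # prints "a b c", then "None"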
Q: How to find the position/index of a particular file in a directory? I am new to Python. I have a list of file names contained in a folder and I want to build a function which can search and return the position of a particular file in the list of files. A: Suppose you have a list of strings list_of_names=["Abc","Def","Ghi","Jkl"]. You can use the list.index() method to find the index of a particular string as given below: >> list_of_names.index("Abc") >> 0 >> list_of_names.index("Jkl") >> 3 A: please do this names = [filename1,filename2,.............] index = names.index(filename_to_search) # filename_to_search is the filename you want to find print(index) A: Something like this would work, assuming you want the files in alphabetical order. >>> from os import listdir >>> my_files = listdir('./') >>> my_files.sort() >>> my_files.index('myfile.txt') 9 A: do like this os.listdir(path).index('filename')
How to find the position/index of a particular file in a directory?
I am new to Python. I have a list of file names contained in a folder and I want to build a function which can search and return the position of a particular file in the list of files.
[ "Suppose, you have a list of string list_of_names=[\"Abc\",\"Def\",\"Ghi\",\"Jkl\"].\nYou can use list.index() method to find the index of a particular string as given below:\n>> list_of_names.index(\"Abc\")\n>> 0\n>> list_of_names.index(\"Jkl\")\n>> 3\n\n", "please do this\nnames = [filename1,filename2,.............]\nindex = names.index(filename you want to search) \nprint index\n\n", "Something like this would work. Assuming you wanted the files alphabetical.\n>>> from os import listdir\n>>> my_files = listdir('./')\n>>> my_files.sort()\n>>> my_files.index('myfile.txt')\n9\n\n", "do like this\n os.listdir(path).index('filename')\n\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0040675412_python.txt
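The answers above combine into one small helper — sorted for a stable order, returning -1 instead of raising ValueError when the file is absent (the folder and filename below are placeholders):

import os

def file_position(folder, filename):
    files = sorted(os.listdir(folder))
    try:
        return files.index(filename)
    except ValueError:
        return -1

print(file_position('.', 'myfile.txt'))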
Q: Find difference between two data frames I have two data frames df1 and df2, where df2 is a subset of df1. How do I get a new data frame (df3) which is the difference between the two data frames? In other word, a data frame that has all the rows/columns in df1 that are not in df2? A: By using drop_duplicates pd.concat([df1,df2]).drop_duplicates(keep=False) Update : The above method only works for those data frames that don't already have duplicates themselves. For example: df1=pd.DataFrame({'A':[1,2,3,3],'B':[2,3,4,4]}) df2=pd.DataFrame({'A':[1],'B':[2]}) It will output like below , which is wrong Wrong Output : pd.concat([df1, df2]).drop_duplicates(keep=False) Out[655]: A B 1 2 3 Correct Output Out[656]: A B 1 2 3 2 3 4 3 3 4 How to achieve that? Method 1: Using isin with tuple df1[~df1.apply(tuple,1).isin(df2.apply(tuple,1))] Out[657]: A B 1 2 3 2 3 4 3 3 4 Method 2: merge with indicator df1.merge(df2,indicator = True, how='left').loc[lambda x : x['_merge']!='both'] Out[421]: A B _merge 1 2 3 left_only 2 3 4 left_only 3 3 4 left_only A: For rows, try this, where Name is the joint index column (can be a list for multiple common columns, or specify left_on and right_on): m = df1.merge(df2, on='Name', how='outer', suffixes=['', '_'], indicator=True) The indicator=True setting is useful as it adds a column called _merge, with all changes between df1 and df2, categorized into 3 possible kinds: "left_only", "right_only" or "both". For columns, try this: set(df1.columns).symmetric_difference(df2.columns) A: Accepted answer Method 1 will not work for data frames with NaNs inside, as pd.np.nan != pd.np.nan. I am not sure if this is the best way, but it can be avoided by df1[~df1.astype(str).apply(tuple, 1).isin(df2.astype(str).apply(tuple, 1))] It's slower, because it needs to cast data to string, but thanks to this casting pd.np.nan == pd.np.nan. Let's go trough the code. First we cast values to string, and apply tuple function to each row. df1.astype(str).apply(tuple, 1) df2.astype(str).apply(tuple, 1) Thanks to that, we get pd.Series object with list of tuples. Each tuple contains whole row from df1/df2. Then we apply isin method on df1 to check if each tuple "is in" df2. The result is pd.Series with bool values. True if tuple from df1 is in df2. In the end, we negate results with ~ sign, and applying filter on df1. Long story short, we get only those rows from df1 that are not in df2. 
To make it more readable, we may write it as: df1_str_tuples = df1.astype(str).apply(tuple, 1) df2_str_tuples = df2.astype(str).apply(tuple, 1) df1_values_in_df2_filter = df1_str_tuples.isin(df2_str_tuples) df1_values_not_in_df2 = df1[~df1_values_in_df2_filter] A: import pandas as pd # given df1 = pd.DataFrame({'Name':['John','Mike','Smith','Wale','Marry','Tom','Menda','Bolt','Yuswa',], 'Age':[23,45,12,34,27,44,28,39,40]}) df2 = pd.DataFrame({'Name':['John','Smith','Wale','Tom','Menda','Yuswa',], 'Age':[23,12,34,44,28,40]}) # find elements in df1 that are not in df2 df_1notin2 = df1[~(df1['Name'].isin(df2['Name']) & df1['Age'].isin(df2['Age']))].reset_index(drop=True) # output: print('df1\n', df1) print('df2\n', df2) print('df_1notin2\n', df_1notin2) # df1 # Age Name # 0 23 John # 1 45 Mike # 2 12 Smith # 3 34 Wale # 4 27 Marry # 5 44 Tom # 6 28 Menda # 7 39 Bolt # 8 40 Yuswa # df2 # Age Name # 0 23 John # 1 12 Smith # 2 34 Wale # 3 44 Tom # 4 28 Menda # 5 40 Yuswa # df_1notin2 # Age Name # 0 45 Mike # 1 27 Marry # 2 39 Bolt A: edit2, I figured out a new solution without the need of setting index newdf=pd.concat([df1,df2]).drop_duplicates(keep=False) Okay i found the answer of highest vote already contain what I have figured out. Yes, we can only use this code on condition that there are no duplicates in each two dfs. I have a tricky method. First we set ’Name’ as the index of two dataframe given by the question. Since we have same ’Name’ in two dfs, we can just drop the ’smaller’ df’s index from the ‘bigger’ df. Here is the code. df1.set_index('Name',inplace=True) df2.set_index('Name',inplace=True) newdf=df1.drop(df2.index) A: Perhaps a simpler one-liner, with identical or different column names. Worked even when df2['Name2'] contained duplicate values. newDf = df1.set_index('Name1') .drop(df2['Name2'], errors='ignore') .reset_index(drop=False) A: Pandas now offers a new API to do data frame diff: pandas.DataFrame.compare df.compare(df2) col1 col3 self other self other 0 a c NaN NaN 2 NaN NaN 3.0 4.0 A: In addition to accepted answer, I would like to propose one more wider solution that can find a 2D set difference of two dataframes with any index/columns (they might not coincide for both datarames). 
Also method allows to setup tolerance for float elements for dataframe comparison (it uses np.isclose) import numpy as np import pandas as pd def get_dataframe_setdiff2d(df_new: pd.DataFrame, df_old: pd.DataFrame, rtol=1e-03, atol=1e-05) -> pd.DataFrame: """Returns set difference of two pandas DataFrames""" union_index = np.union1d(df_new.index, df_old.index) union_columns = np.union1d(df_new.columns, df_old.columns) new = df_new.reindex(index=union_index, columns=union_columns) old = df_old.reindex(index=union_index, columns=union_columns) mask_diff = ~np.isclose(new, old, rtol, atol) df_bool = pd.DataFrame(mask_diff, union_index, union_columns) df_diff = pd.concat([new[df_bool].stack(), old[df_bool].stack()], axis=1) df_diff.columns = ["New", "Old"] return df_diff Example: In [1] df1 = pd.DataFrame({'A':[2,1,2],'C':[2,1,2]}) df2 = pd.DataFrame({'A':[1,1],'B':[1,1]}) print("df1:\n", df1, "\n") print("df2:\n", df2, "\n") diff = get_dataframe_setdiff2d(df1, df2) print("diff:\n", diff, "\n") Out [1] df1: A C 0 2 2 1 1 1 2 2 2 df2: A B 0 1 1 1 1 1 diff: New Old 0 A 2.0 1.0 B NaN 1.0 C 2.0 NaN 1 B NaN 1.0 C 1.0 NaN 2 A 2.0 NaN C 2.0 NaN A: As mentioned here that df1[~df1.apply(tuple,1).isin(df2.apply(tuple,1))] is correct solution but it will produce wrong output if df1=pd.DataFrame({'A':[1],'B':[2]}) df2=pd.DataFrame({'A':[1,2,3,3],'B':[2,3,4,4]}) In that case above solution will give Empty DataFrame, instead you should use concat method after removing duplicates from each datframe. Use concate with drop_duplicates df1=df1.drop_duplicates(keep="first") df2=df2.drop_duplicates(keep="first") pd.concat([df1,df2]).drop_duplicates(keep=False) A: I had issues with handling duplicates when there were duplicates on one side and at least one on the other side, so I used Counter.collections to do a better diff, ensuring both sides have the same count. This doesn't return duplicates, but it won't return any if both sides have the same count. from collections import Counter def diff(df1, df2, on=None): """ :param on: same as pandas.df.merge(on) (a list of columns) """ on = on if on else df1.columns df1on = df1[on] df2on = df2[on] c1 = Counter(df1on.apply(tuple, 'columns')) c2 = Counter(df2on.apply(tuple, 'columns')) c1c2 = c1-c2 c2c1 = c2-c1 df1ondf2on = pd.DataFrame(list(c1c2.elements()), columns=on) df2ondf1on = pd.DataFrame(list(c2c1.elements()), columns=on) df1df2 = df1.merge(df1ondf2on).drop_duplicates(subset=on) df2df1 = df2.merge(df2ondf1on).drop_duplicates(subset=on) return pd.concat([df1df2, df2df1]) > df1 = pd.DataFrame({'a': [1, 1, 3, 4, 4]}) > df2 = pd.DataFrame({'a': [1, 2, 3, 4, 4]}) > diff(df1, df2) a 0 1 0 2 A: A slight variation of the nice @liangli's solution that does not require to change the index of existing dataframes: newdf = df1.drop(df1.join(df2.set_index('Name').index)) A: Finding difference by index. 
Assuming df1 is a subset of df2 and the indexes are carried forward when subsetting df1.loc[set(df1.index).symmetric_difference(set(df2.index))].dropna() # Example df1 = pd.DataFrame({"gender":np.random.choice(['m','f'],size=5), "subject":np.random.choice(["bio","phy","chem"],size=5)}, index = [1,2,3,4,5]) df2 = df1.loc[[1,3,5]] df1 gender subject 1 f bio 2 m chem 3 f phy 4 m bio 5 f bio df2 gender subject 1 f bio 3 f phy 5 f bio df3 = df1.loc[set(df1.index).symmetric_difference(set(df2.index))].dropna() df3 gender subject 2 m chem 4 m bio A: Defining our dataframes: df1 = pd.DataFrame({ 'Name': ['John','Mike','Smith','Wale','Marry','Tom','Menda','Bolt','Yuswa'], 'Age': [23,45,12,34,27,44,28,39,40] }) df2 = df1[df1.Name.isin(['John','Smith','Wale','Tom','Menda','Yuswa']) df1 Name Age 0 John 23 1 Mike 45 2 Smith 12 3 Wale 34 4 Marry 27 5 Tom 44 6 Menda 28 7 Bolt 39 8 Yuswa 40 df2 Name Age 0 John 23 2 Smith 12 3 Wale 34 5 Tom 44 6 Menda 28 8 Yuswa 40 The difference between the two would be: df1[~df1.isin(df2)].dropna() Name Age 1 Mike 45.0 4 Marry 27.0 7 Bolt 39.0 Where: df1.isin(df2) returns the rows in df1 that are also in df2. ~ (Element-wise logical NOT) in front of the expression negates the results, so we get the elements in df1 that are NOT in df2–the difference between the two. .dropna() drops the rows with NaN presenting the desired output Note This only works if len(df1) >= len(df2). If df2 is longer than df1 you can reverse the expression: df2[~df2.isin(df1)].dropna() A: I found the deepdiff library is a wonderful tool that also extends well to dataframes if different detail is required or ordering matters. You can experiment with diffing to_dict('records'), to_numpy(), and other exports: import pandas as pd from deepdiff import DeepDiff df1 = pd.DataFrame({ 'Name': ['John','Mike','Smith','Wale','Marry','Tom','Menda','Bolt','Yuswa'], 'Age': [23,45,12,34,27,44,28,39,40] }) df2 = df1[df1.Name.isin(['John','Smith','Wale','Tom','Menda','Yuswa'])] DeepDiff(df1.to_dict(), df2.to_dict()) # {'dictionary_item_removed': [root['Name'][1], root['Name'][4], root['Name'][7], root['Age'][1], root['Age'][4], root['Age'][7]]} A: Using the lambda function you can filter the rows with _merge value “left_only” to get all the rows in df1 which are missing from df2 df3 = df1.merge(df2, how = 'outer' ,indicator=True).loc[lambda x :x['_merge']=='left_only'] df A: There is a new method in pandas DataFrame.compare that compare 2 different dataframes and return which values changed in each column for the data records. 
Example First Dataframe Id Customer Status Date 1 ABC Good Mar 2023 2 BAC Good Feb 2024 3 CBA Bad Apr 2022 Second Dataframe Id Customer Status Date 1 ABC Bad Mar 2023 2 BAC Good Feb 2024 5 CBA Good Apr 2024 Comparing Dataframes print("Dataframe difference -- \n") print(df1.compare(df2)) print("Dataframe difference keeping equal values -- \n") print(df1.compare(df2, keep_equal=True)) print("Dataframe difference keeping same shape -- \n") print(df1.compare(df2, keep_shape=True)) print("Dataframe difference keeping same shape and equal values -- \n") print(df1.compare(df2, keep_shape=True, keep_equal=True)) Result Dataframe difference -- Id Status Date self other self other self other 0 NaN NaN Good Bad NaN NaN 2 3.0 5.0 Bad Good Apr 2022 Apr 2024 Dataframe difference keeping equal values -- Id Status Date self other self other self other 0 1 1 Good Bad Mar 2023 Mar 2023 2 3 5 Bad Good Apr 2022 Apr 2024 Dataframe difference keeping same shape -- Id Customer Status Date self other self other self other self other 0 NaN NaN NaN NaN Good Bad NaN NaN 1 NaN NaN NaN NaN NaN NaN NaN NaN 2 3.0 5.0 NaN NaN Bad Good Apr 2022 Apr 2024 Dataframe difference keeping same shape and equal values -- Id Customer Status Date self other self other self other self other 0 1 1 ABC ABC Good Bad Mar 2023 Mar 2023 1 2 2 BAC BAC Good Good Feb 2024 Feb 2024 2 3 5 CBA CBA Bad Good Apr 2022 Apr 2024 A: Symmetric Difference If you are interested in the rows that are only in one of the dataframes but not both, you are looking for the set difference: pd.concat([df1,df2]).drop_duplicates(keep=False) ⚠️ Only works, if both dataframes do not contain any duplicates. Set Difference / Relational Algebra Difference If you are interested in the relational algebra difference / set difference, i.e. df1-df2 or df1\df2: pd.concat([df1,df2,df2]).drop_duplicates(keep=False) ⚠️ Only works, if both dataframes do not contain any duplicates.
Find difference between two data frames
I have two data frames df1 and df2, where df2 is a subset of df1. How do I get a new data frame (df3) which is the difference between the two data frames? In other words, a data frame that has all the rows/columns in df1 that are not in df2?
[ "By using drop_duplicates\npd.concat([df1,df2]).drop_duplicates(keep=False)\n\n\nUpdate :\n\nThe above method only works for those data frames that don't already have duplicates themselves. For example:\n\ndf1=pd.DataFrame({'A':[1,2,3,3],'B':[2,3,4,4]})\ndf2=pd.DataFrame({'A':[1],'B':[2]})\n\nIt will output like below , which is wrong\n\nWrong Output :\n\npd.concat([df1, df2]).drop_duplicates(keep=False)\nOut[655]: \n A B\n1 2 3\n\n\nCorrect Output\n\nOut[656]: \n A B\n1 2 3\n2 3 4\n3 3 4\n\n\n\nHow to achieve that?\n\nMethod 1: Using isin with tuple\ndf1[~df1.apply(tuple,1).isin(df2.apply(tuple,1))]\nOut[657]: \n A B\n1 2 3\n2 3 4\n3 3 4\n\nMethod 2: merge with indicator\ndf1.merge(df2,indicator = True, how='left').loc[lambda x : x['_merge']!='both']\nOut[421]: \n A B _merge\n1 2 3 left_only\n2 3 4 left_only\n3 3 4 left_only\n\n", "For rows, try this, where Name is the joint index column (can be a list for multiple common columns, or specify left_on and right_on):\nm = df1.merge(df2, on='Name', how='outer', suffixes=['', '_'], indicator=True)\n\nThe indicator=True setting is useful as it adds a column called _merge, with all changes between df1 and df2, categorized into 3 possible kinds: \"left_only\", \"right_only\" or \"both\".\nFor columns, try this:\nset(df1.columns).symmetric_difference(df2.columns)\n\n", "Accepted answer Method 1 will not work for data frames with NaNs inside, as pd.np.nan != pd.np.nan. I am not sure if this is the best way, but it can be avoided by\ndf1[~df1.astype(str).apply(tuple, 1).isin(df2.astype(str).apply(tuple, 1))]\n\nIt's slower, because it needs to cast data to string, but thanks to this casting pd.np.nan == pd.np.nan.\nLet's go trough the code. First we cast values to string, and apply tuple function to each row.\ndf1.astype(str).apply(tuple, 1)\ndf2.astype(str).apply(tuple, 1)\n\nThanks to that, we get pd.Series object with list of tuples. Each tuple contains whole row from df1/df2.\nThen we apply isin method on df1 to check if each tuple \"is in\" df2.\nThe result is pd.Series with bool values. True if tuple from df1 is in df2. In the end, we negate results with ~ sign, and applying filter on df1. 
Long story short, we get only those rows from df1 that are not in df2.\nTo make it more readable, we may write it as:\ndf1_str_tuples = df1.astype(str).apply(tuple, 1)\ndf2_str_tuples = df2.astype(str).apply(tuple, 1)\ndf1_values_in_df2_filter = df1_str_tuples.isin(df2_str_tuples)\ndf1_values_not_in_df2 = df1[~df1_values_in_df2_filter]\n\n", "import pandas as pd\n# given\ndf1 = pd.DataFrame({'Name':['John','Mike','Smith','Wale','Marry','Tom','Menda','Bolt','Yuswa',],\n 'Age':[23,45,12,34,27,44,28,39,40]})\ndf2 = pd.DataFrame({'Name':['John','Smith','Wale','Tom','Menda','Yuswa',],\n 'Age':[23,12,34,44,28,40]})\n\n# find elements in df1 that are not in df2\ndf_1notin2 = df1[~(df1['Name'].isin(df2['Name']) & df1['Age'].isin(df2['Age']))].reset_index(drop=True)\n\n# output:\nprint('df1\\n', df1)\nprint('df2\\n', df2)\nprint('df_1notin2\\n', df_1notin2)\n\n# df1\n# Age Name\n# 0 23 John\n# 1 45 Mike\n# 2 12 Smith\n# 3 34 Wale\n# 4 27 Marry\n# 5 44 Tom\n# 6 28 Menda\n# 7 39 Bolt\n# 8 40 Yuswa\n# df2\n# Age Name\n# 0 23 John\n# 1 12 Smith\n# 2 34 Wale\n# 3 44 Tom\n# 4 28 Menda\n# 5 40 Yuswa\n# df_1notin2\n# Age Name\n# 0 45 Mike\n# 1 27 Marry\n# 2 39 Bolt\n\n", "edit2, I figured out a new solution without the need of setting index\nnewdf=pd.concat([df1,df2]).drop_duplicates(keep=False)\n\nOkay i found the answer of highest vote already contain what I have figured out. Yes, we can only use this code on condition that there are no duplicates in each two dfs.\n\nI have a tricky method. First we set ’Name’ as the index of two dataframe given by the question. Since we have same ’Name’ in two dfs, we can just drop the ’smaller’ df’s index from the ‘bigger’ df.\nHere is the code.\ndf1.set_index('Name',inplace=True)\ndf2.set_index('Name',inplace=True)\nnewdf=df1.drop(df2.index)\n\n", "Perhaps a simpler one-liner, with identical or different column names. Worked even when df2['Name2'] contained duplicate values.\nnewDf = df1.set_index('Name1')\n .drop(df2['Name2'], errors='ignore')\n .reset_index(drop=False)\n\n", "Pandas now offers a new API to do data frame diff: pandas.DataFrame.compare\ndf.compare(df2)\n col1 col3\n self other self other\n0 a c NaN NaN\n2 NaN NaN 3.0 4.0\n\n", "In addition to accepted answer, I would like to propose one more wider solution that can find a 2D set difference of two dataframes with any index/columns (they might not coincide for both datarames). 
Also method allows to setup tolerance for float elements for dataframe comparison (it uses np.isclose)\n\nimport numpy as np\nimport pandas as pd\n\ndef get_dataframe_setdiff2d(df_new: pd.DataFrame, \n df_old: pd.DataFrame, \n rtol=1e-03, atol=1e-05) -> pd.DataFrame:\n \"\"\"Returns set difference of two pandas DataFrames\"\"\"\n\n union_index = np.union1d(df_new.index, df_old.index)\n union_columns = np.union1d(df_new.columns, df_old.columns)\n\n new = df_new.reindex(index=union_index, columns=union_columns)\n old = df_old.reindex(index=union_index, columns=union_columns)\n\n mask_diff = ~np.isclose(new, old, rtol, atol)\n\n df_bool = pd.DataFrame(mask_diff, union_index, union_columns)\n\n df_diff = pd.concat([new[df_bool].stack(),\n old[df_bool].stack()], axis=1)\n\n df_diff.columns = [\"New\", \"Old\"]\n\n return df_diff\n\nExample:\nIn [1]\n\ndf1 = pd.DataFrame({'A':[2,1,2],'C':[2,1,2]})\ndf2 = pd.DataFrame({'A':[1,1],'B':[1,1]})\n\nprint(\"df1:\\n\", df1, \"\\n\")\n\nprint(\"df2:\\n\", df2, \"\\n\")\n\ndiff = get_dataframe_setdiff2d(df1, df2)\n\nprint(\"diff:\\n\", diff, \"\\n\")\n\nOut [1]\n\ndf1:\n A C\n0 2 2\n1 1 1\n2 2 2 \n\ndf2:\n A B\n0 1 1\n1 1 1 \n\ndiff:\n New Old\n0 A 2.0 1.0\n B NaN 1.0\n C 2.0 NaN\n1 B NaN 1.0\n C 1.0 NaN\n2 A 2.0 NaN\n C 2.0 NaN \n\n", "As mentioned here\nthat \ndf1[~df1.apply(tuple,1).isin(df2.apply(tuple,1))]\n\nis correct solution but it will produce wrong output if\ndf1=pd.DataFrame({'A':[1],'B':[2]})\ndf2=pd.DataFrame({'A':[1,2,3,3],'B':[2,3,4,4]})\n\nIn that case above solution will give\nEmpty DataFrame, instead you should use concat method after removing duplicates from each datframe.\nUse concate with drop_duplicates\ndf1=df1.drop_duplicates(keep=\"first\") \ndf2=df2.drop_duplicates(keep=\"first\") \npd.concat([df1,df2]).drop_duplicates(keep=False)\n\n", "I had issues with handling duplicates when there were duplicates on one side and at least one on the other side, so I used Counter.collections to do a better diff, ensuring both sides have the same count. This doesn't return duplicates, but it won't return any if both sides have the same count.\nfrom collections import Counter\n\ndef diff(df1, df2, on=None):\n \"\"\"\n :param on: same as pandas.df.merge(on) (a list of columns)\n \"\"\"\n on = on if on else df1.columns\n df1on = df1[on]\n df2on = df2[on]\n c1 = Counter(df1on.apply(tuple, 'columns'))\n c2 = Counter(df2on.apply(tuple, 'columns'))\n c1c2 = c1-c2\n c2c1 = c2-c1\n df1ondf2on = pd.DataFrame(list(c1c2.elements()), columns=on)\n df2ondf1on = pd.DataFrame(list(c2c1.elements()), columns=on)\n df1df2 = df1.merge(df1ondf2on).drop_duplicates(subset=on)\n df2df1 = df2.merge(df2ondf1on).drop_duplicates(subset=on)\n return pd.concat([df1df2, df2df1])\n\n> df1 = pd.DataFrame({'a': [1, 1, 3, 4, 4]})\n> df2 = pd.DataFrame({'a': [1, 2, 3, 4, 4]})\n> diff(df1, df2)\n a\n0 1\n0 2\n\n", "A slight variation of the nice @liangli's solution that does not require to change the index of existing dataframes:\nnewdf = df1.drop(df1.join(df2.set_index('Name').index))\n\n", "Finding difference by index. 
Assuming df1 is a subset of df2 and the indexes are carried forward when subsetting\ndf1.loc[set(df1.index).symmetric_difference(set(df2.index))].dropna()\n\n# Example\n\ndf1 = pd.DataFrame({\"gender\":np.random.choice(['m','f'],size=5), \"subject\":np.random.choice([\"bio\",\"phy\",\"chem\"],size=5)}, index = [1,2,3,4,5])\n\ndf2 = df1.loc[[1,3,5]]\n\ndf1\n\n gender subject\n1 f bio\n2 m chem\n3 f phy\n4 m bio\n5 f bio\n\ndf2\n\n gender subject\n1 f bio\n3 f phy\n5 f bio\n\ndf3 = df1.loc[set(df1.index).symmetric_difference(set(df2.index))].dropna()\n\ndf3\n\n gender subject\n2 m chem\n4 m bio\n\n\n", "Defining our dataframes:\ndf1 = pd.DataFrame({\n 'Name':\n ['John','Mike','Smith','Wale','Marry','Tom','Menda','Bolt','Yuswa'],\n 'Age':\n [23,45,12,34,27,44,28,39,40]\n})\n\ndf2 = df1[df1.Name.isin(['John','Smith','Wale','Tom','Menda','Yuswa'])\n\ndf1\n\n Name Age\n0 John 23\n1 Mike 45\n2 Smith 12\n3 Wale 34\n4 Marry 27\n5 Tom 44\n6 Menda 28\n7 Bolt 39\n8 Yuswa 40\n\ndf2\n\n Name Age\n0 John 23\n2 Smith 12\n3 Wale 34\n5 Tom 44\n6 Menda 28\n8 Yuswa 40\n\nThe difference between the two would be:\ndf1[~df1.isin(df2)].dropna()\n\n Name Age\n1 Mike 45.0\n4 Marry 27.0\n7 Bolt 39.0\n\nWhere:\n\ndf1.isin(df2) returns the rows in df1 that are also in df2.\n~ (Element-wise logical NOT) in front of the expression negates the results, so we get the elements in df1 that are NOT in df2–the difference between the two.\n.dropna() drops the rows with NaN presenting the desired output\n\n\nNote This only works if len(df1) >= len(df2). If df2 is longer than df1 you can reverse the expression: df2[~df2.isin(df1)].dropna()\n\n", "I found the deepdiff library is a wonderful tool that also extends well to dataframes if different detail is required or ordering matters. You can experiment with diffing to_dict('records'), to_numpy(), and other exports:\nimport pandas as pd\nfrom deepdiff import DeepDiff\n\ndf1 = pd.DataFrame({\n 'Name':\n ['John','Mike','Smith','Wale','Marry','Tom','Menda','Bolt','Yuswa'],\n 'Age':\n [23,45,12,34,27,44,28,39,40]\n})\n\ndf2 = df1[df1.Name.isin(['John','Smith','Wale','Tom','Menda','Yuswa'])]\n\nDeepDiff(df1.to_dict(), df2.to_dict())\n# {'dictionary_item_removed': [root['Name'][1], root['Name'][4], root['Name'][7], root['Age'][1], root['Age'][4], root['Age'][7]]}\n\n", "Using the lambda function you can filter the rows with _merge value “left_only” to get all the rows in df1 which are missing from df2\ndf3 = df1.merge(df2, how = 'outer' ,indicator=True).loc[lambda x :x['_merge']=='left_only']\ndf\n\n", "There is a new method in pandas DataFrame.compare that compare 2 different dataframes and return which values changed in each column for the data records.\nExample\nFirst Dataframe\nId Customer Status Date\n1 ABC Good Mar 2023\n2 BAC Good Feb 2024\n3 CBA Bad Apr 2022\n\nSecond Dataframe\nId Customer Status Date\n1 ABC Bad Mar 2023\n2 BAC Good Feb 2024\n5 CBA Good Apr 2024\n\nComparing Dataframes\nprint(\"Dataframe difference -- \\n\")\nprint(df1.compare(df2))\n\nprint(\"Dataframe difference keeping equal values -- \\n\")\nprint(df1.compare(df2, keep_equal=True))\n\nprint(\"Dataframe difference keeping same shape -- \\n\")\nprint(df1.compare(df2, keep_shape=True))\n\nprint(\"Dataframe difference keeping same shape and equal values -- \\n\")\nprint(df1.compare(df2, keep_shape=True, keep_equal=True))\n\nResult\nDataframe difference -- \n\n Id Status Date \n self other self other self other\n0 NaN NaN Good Bad NaN NaN\n2 3.0 5.0 Bad Good Apr 2022 Apr 2024\n\nDataframe difference keeping 
equal values -- \n\n Id Status Date \n self other self other self other\n0 1 1 Good Bad Mar 2023 Mar 2023\n2 3 5 Bad Good Apr 2022 Apr 2024\n\nDataframe difference keeping same shape -- \n\n Id Customer Status Date \n self other self other self other self other\n0 NaN NaN NaN NaN Good Bad NaN NaN\n1 NaN NaN NaN NaN NaN NaN NaN NaN\n2 3.0 5.0 NaN NaN Bad Good Apr 2022 Apr 2024\n\nDataframe difference keeping same shape and equal values -- \n\n Id Customer Status Date \n self other self other self other self other\n0 1 1 ABC ABC Good Bad Mar 2023 Mar 2023\n1 2 2 BAC BAC Good Good Feb 2024 Feb 2024\n2 3 5 CBA CBA Bad Good Apr 2022 Apr 2024\n\n", "Symmetric Difference\nIf you are interested in the rows that are only in one of the dataframes but not both, you are looking for the set difference:\npd.concat([df1,df2]).drop_duplicates(keep=False)\n\n\n⚠️ Only works, if both dataframes do not contain any duplicates.\n\nSet Difference / Relational Algebra Difference\nIf you are interested in the relational algebra difference / set difference, i.e. df1-df2 or df1\\df2:\npd.concat([df1,df2,df2]).drop_duplicates(keep=False) \n\n\n⚠️ Only works, if both dataframes do not contain any duplicates.\n\n" ]
[ 291, 70, 22, 12, 7, 7, 5, 3, 3, 2, 1, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0048647534_dataframe_pandas_python.txt
Q: Pandas: How to Squash Multiple Rows into One Row with More Columns I'm looking for a way to convert 5 rows in a pandas dataframe into one row with 5 times the amount of columns (so I have the same information, just squashed into one row). Let me explain: I'm working with hockey game statistics. Currently, there are 5 rows representing the same game in different situations, each with 111 columns. I want to convert these 5 rows into one row (so that one game is represented by one row) but keep the information contained in the different situations. In other words, I want to convert 5 rows, each with 111 columns into one row with 554 columns (554=111*5 minus one since we're joining on gameId). Here is my DF head: So, as an example, we can see the first 5 rows have gameId = 2008020001, but each have a different situation (i.e. other, all, 5on5, 4on5, and 5on4). I'd like these 5 rows to be converted into one row with gameId = 2008020001, and with columns labelled according to their situation. For example, I want columns for all unblockedShotAttemptsAgainst, 5on5 unblockedShotAttemptsAgainst, 5on4 unblockedShotAttemptsAgainst, 4on5 unblockedShotAttemptsAgainst, and other unblockedShotAttemptsAgainst (and the same for every other stat). Any info would be greatly appreciated. It's also worth mentioning that my dataset is fairly large (177990 rows), so an efficient solution is desired. The resulting dataframe should have one-fifth the rows and 5 times the columns. Thanks in advance! ---- What I've Tried Already ---- I tried to do this using df.apply() and some nested for loops, but it got very ugly very quickly and was incredibly slow. I think pandas has a better way of doing this, but I'm not sure how. Looking at other SO answers, I initially thought it might have something to do with df.pivot() or df.groupby(), but I couldn't figure it out. Thanks again! A: It sounds like what you are looking for is pd.get_dummies() cols = df.columns #get dummies df1 = pd.get_dummies(df, columns = ['situation']) #drop all columns from existing df, including original col passed into get dummies df1.drop(cols, axis=1 , inplace=True) #add dummy cols to original df df = pd.concat([df, df1], axis=1) #drop duplicate rows df.groupby(cols).first() For the last line you can also use df.drop_duplicates() : https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.drop_duplicates.html
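If the goal is literally one row per gameId with situation-labelled stat columns, a pivot may be more direct than get_dummies. A minimal sketch, not from the answer above; the miniature frame below is hypothetical and stands in for the real 111-column data:

import pandas as pd

# Hypothetical miniature: one game, three situations, two stat columns.
df = pd.DataFrame({
    "gameId": [2008020001] * 3,
    "situation": ["all", "5on5", "5on4"],
    "unblockedShotAttemptsAgainst": [55, 41, 9],
    "shotsOnGoalFor": [30, 22, 5],
})

# One row per game; every remaining column is spread out per situation.
wide = df.pivot(index="gameId", columns="situation")
wide.columns = [f"{situation} {stat}" for stat, situation in wide.columns]
print(wide.reset_index())

Assuming each (gameId, situation) pair occurs exactly once, this collapses the five rows per game into a single wide row with no Python-level loops.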
Pandas: How to Squash Multiple Rows into One Row with More Columns
I'm looking for a way to convert 5 rows in a pandas dataframe into one row with 5 times the amount of columns (so I have the same information, just squashed into one row). Let me explain: I'm working with hockey game statistics. Currently, there are 5 rows representing the same game in different situations, each with 111 columns. I want to convert these 5 rows into one row (so that one game is represented by one row) but keep the information contained in the different situations. In other words, I want to convert 5 rows, each with 111 columns into one row with 554 columns (554=111*5 minus one since we're joining on gameId). Here is my DF head: So, as an example, we can see the first 5 rows have gameId = 2008020001, but each have a different situation (i.e. other, all, 5on5, 4on5, and 5on4). I'd like these 5 rows to be converted into one row with gameId = 2008020001, and with columns labelled according to their situation. For example, I want columns for all unblockedShotAttemptsAgainst, 5on5 unblockedShotAttemptsAgainst, 5on4 unblockedShotAttemptsAgainst, 4on5 unblockedShotAttemptsAgainst, and other unblockedShotAttemptsAgainst (and the same for every other stat). Any info would be greatly appreciated. It's also worth mentioning that my dataset is fairly large (177990 rows), so an efficient solution is desired. The resulting dataframe should have one-fifth the rows and 5 times the columns. Thanks in advance! ---- What I've Tried Already ---- I tried to do this using df.apply() and some nested for loops, but it got very ugly very quickly and was incredibly slow. I think pandas has a better way of doing this, but I'm not sure how. Looking at other SO answers, I initially thought it might have something to do with df.pivot() or df.groupby(), but I couldn't figure it out. Thanks again!
[ "It sounds like what you are looking for is pd.get_dummies()\ncols = df.columns\n\n#get dummies\ndf1 = pd.get_dummies(df, columns = ['situation'])\n\n#drop all columns from existing df, including original col passed into get dummies\ndf1.drop(cols, axis=1 , inplace=True)\n\n#add dummy cols to original df\ndf = pd.concat([df, df1], axis=1)\n\n#drop duplicate rows\ndf.groupby(cols).first() \n\n\nFor the last line you can also use df.drop_duplicates() : https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.drop_duplicates.html\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074565718_dataframe_pandas_python.txt
Q: How to resolve this "inf" problem with Python code I have a problem with this Python code for inverting a number, like Nb = 358 ---> inv = 853, but in the end I got an 'inf' message from the program, and it runs normally in C. def envers(Nb): inv = 0 cond = True while cond: s = Nb % 10 inv = (inv*10)+ s Nb = Nb/10 if Nb == 0: cond = False return inv data = int(input("give num")) res = envers(data) print(res) A: This is likely much easier to do via string manipulation, which has a friendly and simple syntax (which are a major reason to choose to use Python) >>> int(input("enter a number to reverse: ")[::-1]) enter a number to reverse: 1234 4321 How this works input() returns a string strings are iterable and [::-1] is used to reverse it finally convert to an int Add error checking to taste (for example to to ensure you really received a number) A: When you set Nb = Nb / 10 Nb becomes a float (say 1 -> 0.1) and will be able to keep being divided until it reaches a certain limit. By that point, your inv value will reach python's limits and become 'inf'. Replacing this line with Nb = Nb // 10 using Python's builtin integer division will fix the issue. A: Here is a simple implementation for a numerical approach: def envers(Nb): out = 0 while Nb>0: Nb, r = divmod(Nb, 10) out = 10*out + r return out envers(1234) # 4321 envers(358) # 853 envers(1020) # 201 Without divmod: def envers(Nb): out = 0 while Nb>0: r = Nb % 10 Nb //= 10 out = 10*out + r return out
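To see concretely why the true-division version blows up, here is a small illustrative check (not part of the answers above):

Nb = 358.0
steps = 0
while Nb != 0:      # with /, Nb only becomes 0.0 after the float underflows
    Nb = Nb / 10
    steps += 1
print(steps)        # several hundred iterations on IEEE-754 doubles

By the time Nb finally reaches 0.0, inv has been multiplied by 10 a few hundred times, passing the float maximum (about 1.8e308) and turning into inf. With Nb //= 10 (or divmod), 358 reaches 0 after three iterations and inv stays an exact integer.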
How to resolve this "inf" problem with Python code
I have a problem with this Python code for inverting a number, like Nb = 358 ---> inv = 853, but in the end I got an 'inf' message from the program, and it runs normally in C. def envers(Nb): inv = 0 cond = True while cond: s = Nb % 10 inv = (inv*10)+ s Nb = Nb/10 if Nb == 0: cond = False return inv data = int(input("give num")) res = envers(data) print(res)
[ "This is likely much easier to do via string manipulation, which has a friendly and simple syntax (which are a major reason to choose to use Python)\n>>> int(input(\"enter a number to reverse: \")[::-1])\nenter a number to reverse: 1234\n4321\n\nHow this works\n\ninput() returns a string\nstrings are iterable and [::-1] is used to reverse it\nfinally convert to an int\n\nAdd error checking to taste (for example to to ensure you really received a number)\n", "When you set\nNb = Nb / 10\nNb becomes a float (say 1 -> 0.1) and will be able to keep being divided until it reaches a certain limit.\nBy that point, your inv value will reach python's limits and become 'inf'.\nReplacing this line with\nNb = Nb // 10\nusing Python's builtin integer division will fix the issue.\n", "Here is a simple implementation for a numerical approach:\ndef envers(Nb):\n out = 0\n while Nb>0:\n Nb, r = divmod(Nb, 10)\n out = 10*out + r\n return out\n\n\nenvers(1234)\n# 4321\n\nenvers(358)\n# 853\n\nenvers(1020)\n# 201\n\nWithout divmod:\ndef envers(Nb):\n out = 0\n while Nb>0:\n r = Nb % 10\n Nb //= 10\n out = 10*out + r\n return out\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074565689_python.txt
Q: JSON parsing with Python from a RethinkDB database [Python] I'm trying to retrieve data from a RethinkDB database; it outputs JSON when called with r.db("Databasename").table("tablename").insert([{ "id or primary key": line}]).run(), and when doing so it outputs [{'id': 'ValueInRowOfid\n'}], which I want to parse down to just the value, e.g. "ValueInRowOfid". I've tried with JSON in Python, but I always end up with the TypeError: list indices must be integers or slices, not str, and I've been told that it is because the database outputs an invalid JSON format. My question is how a JSON format can be invalid (I can't see what is invalid in the output), and also what would be the best way to parse it so that the value "ValueInRowOfid" is left in a variable, e.g. Value = ("ValueInRowOfid"). This part imports the modules used and connects to RethinkDB: import json from rethinkdb import RethinkDB r = RethinkDB() r.connect( "localhost", 28015).repl() This part is getting the output/value and my attempt at parsing it: getvalue = r.db("Databasename").table("tablename").sample(1).run() # gets a single row/value from the table print(getvalue) # If I print that, it will show as [{'id': 'ValueInRowOfid\n'}] dumper = json.dumps(getvalue) # I can't use `json.loads(dumper)` as the JSON object must be str, which the output of the database isn't (the output is a list) parsevalue = json.loads(dumper) # After `json.dumps(getvalue)` I can now load it, but I can't use the loaded JSON. print(parsevalue["id"]) # When doing this it now says that list indices need to be integers or slices, not str. Quite frustrating, as it seems contradictory: it first wants str and now it can't use str print(parsevalue{'id'}) # I also tried to shuffle it around as seen here, but still the same result I know this is janky and hard to follow, and I don't know if it is the most simple problem or something that just isn't possible (which it should be, or else I can't use my data in the database). Thank you for reading this through and not jumping straight into the comments to say that I have to read the JSON documentation, because I have, and I haven't found a single piece that could help me. I tried reading the documentation and watching tutorials about JSON and JSON parsing. I also looked for others who have had the same problem as me and couldn't find any. A: It looks like it's returning a dictionary ({}) inside a list ([]) of one element. Try: getvalue = r.db("Databasename").table("tablename").sample(1).run() print(getvalue[0]['id'])
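A slightly fuller sketch of the accepted idea, assuming the [{'id': ...}] shape shown above; .strip() removes the trailing newline stored in the row value:

result = list(r.db("Databasename").table("tablename").sample(1).run())
if result:                            # sample(1) yields a list of documents
    Value = result[0]['id'].strip()   # 'ValueInRowOfid\n' -> 'ValueInRowOfid'
    print(Value)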
JSON parsing with Python from a RethinkDB database [Python]
I'm trying to retrieve data from a RethinkDB database; it outputs JSON when called with r.db("Databasename").table("tablename").insert([{ "id or primary key": line}]).run(), and when doing so it outputs [{'id': 'ValueInRowOfid\n'}], which I want to parse down to just the value, e.g. "ValueInRowOfid". I've tried with JSON in Python, but I always end up with the TypeError: list indices must be integers or slices, not str, and I've been told that it is because the database outputs an invalid JSON format. My question is how a JSON format can be invalid (I can't see what is invalid in the output), and also what would be the best way to parse it so that the value "ValueInRowOfid" is left in a variable, e.g. Value = ("ValueInRowOfid"). This part imports the modules used and connects to RethinkDB: import json from rethinkdb import RethinkDB r = RethinkDB() r.connect( "localhost", 28015).repl() This part is getting the output/value and my attempt at parsing it: getvalue = r.db("Databasename").table("tablename").sample(1).run() # gets a single row/value from the table print(getvalue) # If I print that, it will show as [{'id': 'ValueInRowOfid\n'}] dumper = json.dumps(getvalue) # I can't use `json.loads(dumper)` as the JSON object must be str, which the output of the database isn't (the output is a list) parsevalue = json.loads(dumper) # After `json.dumps(getvalue)` I can now load it, but I can't use the loaded JSON. print(parsevalue["id"]) # When doing this it now says that list indices need to be integers or slices, not str. Quite frustrating, as it seems contradictory: it first wants str and now it can't use str print(parsevalue{'id'}) # I also tried to shuffle it around as seen here, but still the same result I know this is janky and hard to follow, and I don't know if it is the most simple problem or something that just isn't possible (which it should be, or else I can't use my data in the database). Thank you for reading this through and not jumping straight into the comments to say that I have to read the JSON documentation, because I have, and I haven't found a single piece that could help me. I tried reading the documentation and watching tutorials about JSON and JSON parsing. I also looked for others who have had the same problem as me and couldn't find any.
[ "It looks like it's returning a dictionary ({}) inside a list ([]) of one element.\nTry:\ngetvalue = r.db(\"Databasename\").table(\"tablename\").sample(1).run()\n\nprint(getvalue[0]['id'])\n\n" ]
[ 0 ]
[]
[]
[ "database", "json", "parsing", "python", "rethinkdb_python" ]
stackoverflow_0074565449_database_json_parsing_python_rethinkdb_python.txt
Q: Download file with dcc.send_bytes I am trying to create and download a pptx presentation with pptx and python dash. Although the file is created without an error, there are no slides created in the presentation. Thanks in advance. import dash import dash_core_components as dcc import dash_html_components as html import dash_bootstrap_components as dbc from dash.exceptions import PreventUpdate from dash.dependencies import Input, Output from pptx import Presentation import io app = dash.Dash(__name__,external_stylesheets=[dbc.themes.BOOTSTRAP]) app.layout = html.Div([html.Button('Download Slides', id='download_button', n_clicks=0), dcc.Download(id='download')]) @app.callback(Output('download', 'data'), [Input('download_button', 'n_clicks')]) def download_file(n_clicks): if n_clicks == 0: raise PreventUpdate def to_pptx(bytes_io): prs = Presentation() title_slide_layout = prs.slide_layouts[0] slide = prs.slides.add_slide(title_slide_layout) title = slide.shapes.title title.text = 'Hello, World!' filename = io.BytesIO() #io.StringIO() prs.save(filename) filename.seek(0) return dcc.send_bytes(to_pptx, 'Slides.pptx') if __name__ == '__main__': app.run_server(debug=True, use_reloader=False) A: You can use this code instead: def to_pptx(bytes_io): prs = Presentation() title_slide_layout = prs.slide_layouts[0] slide = prs.slides.add_slide(title_slide_layout) title = slide.shapes.title subtitle = slide.placeholders[1] title.text = "Hello, World!" subtitle.text = "python-pptx was here!" prs.save(bytes_io) #<------ save the file this way return dcc.send_bytes(to_pptx, 'Slides.pptx')
Download file with dcc.send_bytes
I am trying to create and download a pptx presentation with pptx and python dash. Although the file is created without an error, there are no slides created in the presentation. Thanks in advance. import dash import dash_core_components as dcc import dash_html_components as html import dash_bootstrap_components as dbc from dash.exceptions import PreventUpdate from dash.dependencies import Input, Output from pptx import Presentation import io app = dash.Dash(__name__,external_stylesheets=[dbc.themes.BOOTSTRAP]) app.layout = html.Div([html.Button('Download Slides', id='download_button', n_clicks=0), dcc.Download(id='download')]) @app.callback(Output('download', 'data'), [Input('download_button', 'n_clicks')]) def download_file(n_clicks): if n_clicks == 0: raise PreventUpdate def to_pptx(bytes_io): prs = Presentation() title_slide_layout = prs.slide_layouts[0] slide = prs.slides.add_slide(title_slide_layout) title = slide.shapes.title title.text = 'Hello, World!' filename = io.BytesIO() #io.StringIO() prs.save(filename) filename.seek(0) return dcc.send_bytes(to_pptx, 'Slides.pptx') if __name__ == '__main__': app.run_server(debug=True, use_reloader=False)
[ "You can use this code instead:\ndef to_pptx(bytes_io):\n prs = Presentation()\n title_slide_layout = prs.slide_layouts[0]\n slide = prs.slides.add_slide(title_slide_layout)\n title = slide.shapes.title\n subtitle = slide.placeholders[1] \n title.text = \"Hello, World!\"\n subtitle.text = \"python-pptx was here!\" \n prs.save(bytes_io) #<------ save the file this way\n\nreturn dcc.send_bytes(to_pptx, 'Slides.pptx')\n\n" ]
[ 1 ]
[]
[]
[ "download", "plotly_dash", "python" ]
stackoverflow_0074565356_download_plotly_dash_python.txt
Q: Preventing "Warning Potential Security Risk Ahead" in selenium python Firefox So when using selenium python with firefox I need to prevent this: This is what I have already tried profile = webdriver.FirefoxOptions() profile.accept_insecure_certs = True profile.accept_untrusted_certs = True firefox = webdriver.Firefox(executable_path=utils.str_master_dir('geckodriver.exe'), options=profile) ... Any help would be appreciated thanks. A: This should work: from selenium import webdriver capabilities = webdriver.DesiredCapabilities().FIREFOX capabilities['acceptInsecureCerts'] = True capabilities['marionette'] = True driver = webdriver.Firefox(desired_capabilities=capabilities) You can also create a custom Firefox profile as described here
Preventing "Warning Potential Security Risk Ahead" in selenium python Firefox
So when using selenium python with firefox I need to prevent this: This is what I have already tried profile = webdriver.FirefoxOptions() profile.accept_insecure_certs = True profile.accept_untrusted_certs = True firefox = webdriver.Firefox(executable_path=utils.str_master_dir('geckodriver.exe'), options=profile) ... Any help would be appreciated thanks.
[ "This should work:\nfrom selenium import webdriver\ncapabilities = webdriver.DesiredCapabilities().FIREFOX\ncapabilities['acceptInsecureCerts'] = True\ncapabilities['marionette'] = True\ndriver = webdriver.Firefox(desired_capabilities=capabilities)\n\nYou can also create a custom Firefox profile as described here\n" ]
[ 1 ]
[]
[]
[ "firefox_marionette", "python", "selenium", "selenium_firefoxdriver" ]
stackoverflow_0074565289_firefox_marionette_python_selenium_selenium_firefoxdriver.txt
Q: How to get maximum values in a row and call the proper name of the appropriate column with pandas I want to get the maximum values in a row and print the value and the name of the appropriate column. s1 = pd.Series([5, 6, 7, 10, 12, 6, 8, 55, 9]) s2 = pd.Series([7, 8, 9, 16, 13, 8, 2, 11, 7]) df = pd.DataFrame([list(s1), list(s2)], columns = ["A", "B", "C", "D", "E", "F", "G", "H", "I"]) A B C D E F G H I 0 5 6 7 10 12 6 8 55 9 1 7 8 9 16 13 8 2 11 7 I want to choose for example "index 0" and get something like this: 55 H 12 E 10 D 9 I A: Sorting is relatively expensive (O(n*log(n)) complexity). Use nlargest: out = df.loc[0].nlargest(4) Output: H 55 E 12 D 10 I 9 Name: 0, dtype: int64 A: You can sort and then take the top N values: >>> df.loc[0].sort_values(ascending=False).iloc[:4] H 55 E 12 D 10 I 9 Name: 0, dtype: int64 As a function: def top_n(idx, n): return df.loc[idx].sort_values(ascending=False).iloc[:n]
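To print the result in the exact "value column" layout asked for, the Series returned by nlargest can be iterated; a small sketch building on the answers above:

top = df.loc[0].nlargest(4)
for col, val in top.items():   # Series.items() yields (label, value) pairs
    print(val, col)
# 55 H
# 12 E
# 10 D
# 9 I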
How to get maximum values in a row and call the proper name of the appropriate column with pandas
I want to get the maximum values in a row and print the value and the name of the appropriate column. s1 = pd.Series([5, 6, 7, 10, 12, 6, 8, 55, 9]) s2 = pd.Series([7, 8, 9, 16, 13, 8, 2, 11, 7]) df = pd.DataFrame([list(s1), list(s2)], columns = ["A", "B", "C", "D", "E", "F", "G", "H", "I"]) A B C D E F G H I 0 5 6 7 10 12 6 8 55 9 1 7 8 9 16 13 8 2 11 7 I want to choose for example "index 0" and get something like this: 55 H 12 E 10 D 9 I
[ "Sorting is relatively expensive (O(n*log(n)) complexity).\nUse nlargest:\nout = df.loc[0].nlargest(4)\n\nOutput:\nH 55\nE 12\nD 10\nI 9\nName: 0, dtype: int64\n\n", "You can sort and then take the top N values:\n>>> df.loc[0].sort_values(ascending=False).iloc[:4]\nH 55\nE 12\nD 10\nI 9\nName: 0, dtype: int64\n\nAs a function:\ndef top_n(idx, n):\n return df.loc[idx].sort_values(ascending=False).iloc[:n]\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074564823_pandas_python.txt
Q: Average on overlapping windows in Python I'm trying to compute a moving average but with a set step size between each average. For example, if I was computing the average of a 4 element window every 2 elements: data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] This should produce the average of [1, 2, 3, 4], [3, 4, 5, 6], [5, 6, 7, 8], [7, 8, 9, 10]. window_avg = [2.5, 4.5, 6.5, 8.5] My data is such that the ending will be truncated before processing so there is no problem with the length with respect to window size. I've read a bit about how to do moving averages in Python and there seems to be a lot of usage of itertools; however, the iterators go one element at a time and I can't figure out how to have a step size between each calculation of the average. (How to calculate moving average in Python 3?) I have also been able to do this before in MATLAB by creating a matrix of indices which are overlapping and then indexing the data vector and performing a column wise mean (Create matrix by repeatedly overlapping a vector). However, since this vector is rather large (~70 000 elements, window of 450 samples, average every 30 samples), the computation would probably require too much memory. Any help would be greatly appreciated. I am using Python 2.7. A: One way to compute the average of a sliding window across a list in Python is to use a list comprehension. You can use >>> range(0, len(data), 2) [0, 2, 4, 6, 8] to get the starting indices of each window, and then numpy's mean function to take the average of each window. See the demo below: >>> import numpy as np >>> window_size = 4 >>> stride = 2 >>> window_avg = [ np.mean(data[i:i+window_size]) for i in range(0, len(data), stride) if i+window_size <= len(data) ] >>> window_avg [2.5, 4.5, 6.5, 8.5] Note that the list comprehension does have a condition to ensure that it only computes the average of "full windows", or sublists with exactly window_size elements. When run on a dataset of the size discussed in the OP, this method computes on my MBA in a little over 200 ms: In [5]: window_size = 450 In [6]: data = range(70000) In [7]: stride = 30 In [8]: timeit [ np.mean(data[i:i+window_size]) for i in range(0, len(data), stride) if i+window_size <= len(data) ] 1 loops, best of 3: 220 ms per loop It is about twice as fast on my machine to the itertools approach presented by @Abhijit: In [9]: timeit map(np.mean, izip(*(islice(it, i, None, stride) for i, it in enumerate(tee(data, window_size))))) 1 loops, best of 3: 436 ms per loop A: The following approach uses itertools at its fullest to create moving average window of size 4. As then entire expression is a generator which is evaluated when calculating the average, it has a complexity of O(n). >>> import numpy as np >>> from itertools import count, tee, izip, islice >>> map(np.mean, izip(*(islice(it,i,None,2) for i, it in enumerate(tee(data, 4))))) [2.5, 4.5, 6.5, 8.5] Its interesting to note, how individual itertools function works in accord. itertools.tee n-plicates an iterator, in this case 4 times enumerate creates an enumerator object which yield a tuple of index and element (which is the iterator) slice the iterator with stride 2, starting from the index position. A: You can use rolling function of Pandas DataFrame, data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] df = pd.DataFrame(data) >>> 0 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 Using Pandas DataFrame's rolling function, df.rolling(4).mean().dropna()[::2] >>> 0 3 2.5 5 4.5 7 6.5 9 8.5 4 is the window size and 2 in [::2] can be assumed to be step size. 
Actually, df.rolling(4).mean().dropna() shift the window 1-by-1 and by applying index [::2], we pick one after taking two steps. Alternatively, If you have Pandas version > 1.5, you can give step size. Note that, center argument must be 'True'. The solution: df.rolling(4, step=2, center=True).mean().dropna() >>> df 0 2 2.5 4 4.5 6 6.5 8 8.5
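For the sizes in the question (~70,000 samples, window 450, average every 30), NumPy's sliding_window_view gives a zero-copy strided view, so the memory worry does not apply; a sketch, assuming NumPy >= 1.20 (and therefore Python 3 rather than the 2.7 noted in the question):

import numpy as np

data = np.arange(1, 11)   # the toy data from the question
windows = np.lib.stride_tricks.sliding_window_view(data, 4)   # a view, no copy
print(windows[::2].mean(axis=1))   # step of 2 between windows -> [2.5 4.5 6.5 8.5]

# Full-size case: sliding_window_view(signal, 450)[::30].mean(axis=1)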
Average on overlapping windows in Python
I'm trying to compute a moving average but with a set step size between each average. For example, if I was computing the average of a 4 element window every 2 elements: data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] This should produce the average of [1, 2, 3, 4], [3, 4, 5, 6], [5, 6, 7, 8], [7, 8, 9, 10]. window_avg = [2.5, 4.5, 6.5, 8.5] My data is such that the ending will be truncated before processing so there is no problem with the length with respect to window size. I've read a bit about how to do moving averages in Python and there seems to be a lot of usage of itertools; however, the iterators go one element at a time and I can't figure out how to have a step size between each calculation of the average. (How to calculate moving average in Python 3?) I have also been able to do this before in MATLAB by creating a matrix of indices which are overlapping and then indexing the data vector and performing a column wise mean (Create matrix by repeatedly overlapping a vector). However, since this vector is rather large (~70 000 elements, window of 450 samples, average every 30 samples), the computation would probably require too much memory. Any help would be greatly appreciated. I am using Python 2.7.
[ "One way to compute the average of a sliding window across a list in Python is to use a list comprehension. You can use\n>>> range(0, len(data), 2)\n[0, 2, 4, 6, 8]\n\nto get the starting indices of each window, and then numpy's mean function to take the average of each window. See the demo below:\n>>> import numpy as np\n>>> window_size = 4\n>>> stride = 2\n>>> window_avg = [ np.mean(data[i:i+window_size]) for i in range(0, len(data), stride)\n if i+window_size <= len(data) ]\n>>> window_avg\n[2.5, 4.5, 6.5, 8.5]\n\nNote that the list comprehension does have a condition to ensure that it only computes the average of \"full windows\", or sublists with exactly window_size elements.\nWhen run on a dataset of the size discussed in the OP, this method computes on my MBA in a little over 200 ms:\nIn [5]: window_size = 450\nIn [6]: data = range(70000)\nIn [7]: stride = 30\nIn [8]: timeit [ np.mean(data[i:i+window_size]) for i in range(0, len(data), stride)\n if i+window_size <= len(data) ]\n1 loops, best of 3: 220 ms per loop\n\nIt is about twice as fast on my machine to the itertools approach presented by @Abhijit:\nIn [9]: timeit map(np.mean, izip(*(islice(it, i, None, stride) for i, it in enumerate(tee(data, window_size)))))\n1 loops, best of 3: 436 ms per loop\n\n", "The following approach uses itertools at its fullest to create moving average window of size 4. As then entire expression is a generator which is evaluated when calculating the average, it has a complexity of O(n). \n>>> import numpy as np\n>>> from itertools import count, tee, izip, islice\n>>> map(np.mean, izip(*(islice(it,i,None,2)\n for i, it in enumerate(tee(data, 4)))))\n[2.5, 4.5, 6.5, 8.5]\n\nIts interesting to note, how individual itertools function works in accord.\n\nitertools.tee n-plicates an iterator, in this case 4 times \nenumerate creates an enumerator object which yield a tuple of index and element (which is the iterator)\nslice the iterator with stride 2, starting from the index position.\n\n", "You can use rolling function of Pandas DataFrame,\ndata = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\ndf = pd.DataFrame(data)\n>>> \n 0\n0 1\n1 2\n2 3\n3 4\n4 5\n5 6\n6 7\n7 8\n8 9\n9 10\n\nUsing Pandas DataFrame's rolling function,\ndf.rolling(4).mean().dropna()[::2]\n>>> \n 0\n3 2.5\n5 4.5\n7 6.5\n9 8.5\n\n4 is the window size and 2 in [::2] can be assumed to be step size.\nActually, df.rolling(4).mean().dropna() shift the window 1-by-1 and by applying index [::2], we pick one after taking two steps.\nAlternatively,\nIf you have Pandas version > 1.5, you can give step size. Note that, center argument must be 'True'. The solution:\ndf.rolling(4, step=2, center=True).mean().dropna()\n\n>>> df\n 0\n2 2.5\n4 4.5\n6 6.5\n8 8.5\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "moving_average", "python", "python_itertools" ]
stackoverflow_0021097039_moving_average_python_python_itertools.txt
Q: How to find type hints for psycopg2 functions How can one find what type hints to use when annotating my Python code for package functions, e.g. what will psycopg2.connect return so that I can put it in place of ???, e.g.: def sql_connect(sql_config: dict = None) -> ???: db = psycopg2.connect( host=sql_config["host"], port=sql_config["port"], dbname=sql_config["database"] ) return db A: Just use the psycopg2.connection as the type hint. Also check with type(sql_connect())
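For reference, the class returned by psycopg2.connect() lives in psycopg2.extensions, so the annotation can be written explicitly; a sketch, with sql_config being the caller's dict as in the question:

import psycopg2
from psycopg2.extensions import connection

def sql_connect(sql_config: dict) -> connection:
    # psycopg2.connect() returns a psycopg2.extensions.connection instance
    return psycopg2.connect(
        host=sql_config["host"],
        port=sql_config["port"],
        dbname=sql_config["database"],
    )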
How to find type hints for psycopg2 functions
How can one find what type hints to use when annotating my Python code for package functions, e.g. what will psycopg2.connect return so that I can put it in place of ???, e.g.: def sql_connect(sql_config: dict = None) -> ???: db = psycopg2.connect( host=sql_config["host"], port=sql_config["port"], dbname=sql_config["database"] ) return db
[ "Just use the psycopg2.connection as the type hint.\nAlso check with type(sql_connect())\n" ]
[ 0 ]
[]
[]
[ "python", "python_typing", "type_hinting" ]
stackoverflow_0074564998_python_python_typing_type_hinting.txt
Q: " no error message for this problem how could i make the could run correctly"? the problem is that After choosing the name of the city, the code is freezing , he code is: import time import pandas as pd import numpy as np CITY_DATA = { 'chicago': 'chicago.csv', 'new york city': 'new_york_city.csv', 'washington': 'washington.csv' } def get_filters(): """ Asks user to specify a city, month, and day to analyze. Returns: (str) city - name of the city to analyze (str) month - name of the month to filter by, or "all" to apply no month filter (str) day - name of the day of week to filter by, or "all" to apply no day filter """ print('Hello! Let\'s explore some US bikeshare data!') # TO DO: get user input for city (chicago, new york city, washington). HINT: Use a while loop to handle invalid inputs city = input( "please choose a city from (chicago , new york city , washington): ").lower() while True: if city not in CITY_DATA.keys(): print("invaild city name please try again/n: ") city = input( "please choose a city from (chicago , new york city , washington): ").lower() break # TO DO: get user input for month (all, january, february, ... , june) month = input(" please choose and type a full month name or type all: ").lower() months = ['january' , 'faburay' , 'march' , 'april' , 'may' , 'june' , 'all' ] while True: if month not in months: print("invaild month name please try again") month = input(" please choose and type a full month name or type all: ").lower() break # TO DO: get user input for day of week (all, monday, tuesday, ... sunday) day = input("please add a week day name or type all: ").lower() days = ['saturday', ' sunday', 'monday' , 'tusday', 'wedensday','thrusday','friday','all'] while True: if day not in days: prtint('invaild week day name please try again') day = input("please add a week day name or type all: ").lower() break print('-'*40) return city, month, day it was WORKING AT FIRST BUT IT SUDDINLY BROKE and i cant make sure that the rest of the code is working since its not working from the beginning, the project is all about bikeshare data that should return specific statistics when choosing specific city , month , and day A: One easy fix would be to add the breakin an else block (Though not the best solution this will get you going): while True: if city not in CITY_DATA.keys(): print("invaild city name please try again/n: ") city = input( "please choose a city from (chicago , new york city , washington): ").lower() else: break This is because you ask for the input then enter the while, if the city is not correct then the user gets a new input change, after that there is a break even if that answer is incorrect. Adding the else means only breaking if if city not in CITY_DATA.keys(): is false (The city is in the keys) Like i said this is maybe not the optimal solution and you should read more into the control flow of the code Here is a working example of making the code a little easier to work with and not repeat the while several times: import time from enum import Enum CITY_DATA = { 'chicago': 'chicago.csv', 'new york city': 'new_york_city.csv', 'washington': 'washington.csv' } MONTHS = ['january' , 'faburay' , 'march' , 'april' , 'may' , 'june' , 'all' ] DAYS = ['saturday', ' sunday', 'monday' , 'tusday', 'wedensday','thrusday','friday','all'] def get_filters(): """ Asks user to specify a city, month, and day to analyze. 
Returns: (str) city - name of the city to analyze (str) month - name of the month to filter by, or "all" to apply no month filter (str) day - name of the day of week to filter by, or "all" to apply no day filter """ global CITY_DATA, MONTHS, DAYS print('Hello! Let\'s explore some US bikeshare data!') # TO DO: get user input for city (chicago, new york city, washington). HINT: Use a while loop to handle invalid inputs city = get_input(CITY_DATA, "please choose a city from (chicago , new york city , washington): ", "invaild city name please try again/n: ") # TO DO: get user input for month (all, january, february, ... , june) month = get_input(MONTHS, " please choose and type a full month name or type all: ", "invaild month name please try again/n:") # TO DO: get user input for day of week (all, monday, tuesday, ... sunday) day = get_input(DAYS, "please add a week day name or type all: ", 'invaild week day name please try again/n:') print('-'*40) return city, month, day def get_input(correct_list, input_text: str, error_text: str) -> str: """Get the input and return the input if it's in the correct_list Args: correct_list (list): The list of correct inputs input_text (str): The text to show the user before listening for an input error_text (str): The error text to show if the user enters something not in the correct_list Returns: str: The string entered if in correct_list """ output = "" while True: output = input(input_text).lower() if output not in correct_list: print(error_text) else: return output if __name__ == "__main__": get_filters()
" no error message for this problem how could i make the could run correctly"?
The problem is that after choosing the name of the city, the code freezes. The code is: import time import pandas as pd import numpy as np CITY_DATA = { 'chicago': 'chicago.csv', 'new york city': 'new_york_city.csv', 'washington': 'washington.csv' } def get_filters(): """ Asks user to specify a city, month, and day to analyze. Returns: (str) city - name of the city to analyze (str) month - name of the month to filter by, or "all" to apply no month filter (str) day - name of the day of week to filter by, or "all" to apply no day filter """ print('Hello! Let\'s explore some US bikeshare data!') # TO DO: get user input for city (chicago, new york city, washington). HINT: Use a while loop to handle invalid inputs city = input( "please choose a city from (chicago , new york city , washington): ").lower() while True: if city not in CITY_DATA.keys(): print("invaild city name please try again/n: ") city = input( "please choose a city from (chicago , new york city , washington): ").lower() break # TO DO: get user input for month (all, january, february, ... , june) month = input(" please choose and type a full month name or type all: ").lower() months = ['january' , 'faburay' , 'march' , 'april' , 'may' , 'june' , 'all' ] while True: if month not in months: print("invaild month name please try again") month = input(" please choose and type a full month name or type all: ").lower() break # TO DO: get user input for day of week (all, monday, tuesday, ... sunday) day = input("please add a week day name or type all: ").lower() days = ['saturday', ' sunday', 'monday' , 'tusday', 'wedensday','thrusday','friday','all'] while True: if day not in days: print('invaild week day name please try again') day = input("please add a week day name or type all: ").lower() break print('-'*40) return city, month, day It was working at first but it suddenly broke, and I can't make sure that the rest of the code is working since it isn't working from the beginning. The project is all about bikeshare data that should return specific statistics when choosing a specific city, month, and day.
[ "One easy fix would be to add the breakin an else block (Though not the best solution this will get you going):\nwhile True:\n if city not in CITY_DATA.keys():\n print(\"invaild city name please try again/n: \")\n city = input( \"please choose a city from (chicago , new york city , washington): \").lower()\n else:\n break\n\nThis is because you ask for the input then enter the while, if the city is not correct then the user gets a new input change, after that there is a break even if that answer is incorrect.\nAdding the else means only breaking if if city not in CITY_DATA.keys(): is false (The city is in the keys)\nLike i said this is maybe not the optimal solution and you should read more into the control flow of the code\nHere is a working example of making the code a little easier to work with and not repeat the while several times:\nimport time\nfrom enum import Enum\n\nCITY_DATA = { 'chicago': 'chicago.csv',\n 'new york city': 'new_york_city.csv',\n 'washington': 'washington.csv' }\n\nMONTHS = ['january' , 'faburay' , 'march' , 'april' , 'may' , 'june' , 'all' ]\n\nDAYS = ['saturday', ' sunday', 'monday' , 'tusday', 'wedensday','thrusday','friday','all']\n\ndef get_filters():\n \"\"\"\n Asks user to specify a city, month, and day to analyze.\n\n Returns:\n (str) city - name of the city to analyze\n (str) month - name of the month to filter by, or \"all\" to apply no month filter\n (str) day - name of the day of week to filter by, or \"all\" to apply no day filter\n \"\"\"\n global CITY_DATA, MONTHS, DAYS\n print('Hello! Let\\'s explore some US bikeshare data!')\n # TO DO: get user input for city (chicago, new york city, washington). HINT: Use a while loop to handle invalid inputs\n city = get_input(CITY_DATA, \"please choose a city from (chicago , new york city , washington): \", \"invaild city name please try again/n: \")\n # TO DO: get user input for month (all, january, february, ... , june)\n month = get_input(MONTHS, \" please choose and type a full month name or type all: \", \"invaild month name please try again/n:\")\n # TO DO: get user input for day of week (all, monday, tuesday, ... sunday)\n day = get_input(DAYS, \"please add a week day name or type all: \", 'invaild week day name please try again/n:')\n\n print('-'*40)\n return city, month, day\n\ndef get_input(correct_list, input_text: str, error_text: str) -> str:\n \"\"\"Get the input and return the input if it's in the correct_list\n\n Args:\n correct_list (list): The list of correct inputs\n input_text (str): The text to show the user before listening for an input\n error_text (str): The error text to show if the user enters something not in the correct_list\n\n Returns:\n str: The string entered if in correct_list\n \"\"\"\n output = \"\"\n while True:\n output = input(input_text).lower()\n if output not in DAYS:\n print(error_text)\n else:\n return output\n\n\nif __name__ == \"__main__\":\n get_filters()\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074565676_dataframe_pandas_python.txt
Q: Why is Anaconda Navigator not opening? I have trouble with my Anaconda3 Navigator. I am using it with Python 3 and jupyter notebook. Today, because I had trouble with installing some packages, I updated everything and it worked fine. A few hours later, my Anaconda Navigator is not opening and when I open the PowerShell prompt I get the following error: and here: failed to create process. Invoke-Expression : The argument cannot be bound to the parameter "Command" because it is an empty string. In C:\Users\elihe\Anaconda3\shell\condabin\Conda.psm1:101 char:36 + Invoke-Expression -Command $activateCommand; + ~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidData: (:) [Invoke-Expression], ParameterBindingValidationException + FullyQualifiedErrorId : ParameterArgumentValidationErrorEmptyStringNotAllowed,Microsoft.PowerShell.Commands.InvokeExpressionCommand PS C:\Users\elihe> Can someone tell me what to do? I really need help :(( A: I got the solution; okay, it is not really a solution, but better than nothing :) You just have to uninstall Anaconda completely from your computer, including every folder with Anaconda things. Then you have to install it completely fresh, as if you never had Anaconda on your computer. For me, that helped - everything works like it did before this problem! A: If you have just installed Anaconda and when you try to launch the Anaconda Navigator it gives you the message There is an instance of Anaconda Navigator already running, this means a previous version was not fully uninstalled or removed. So you have to find the location of the file which still exists from the previous version. Search for the previous version's file location, then open that file location. Then open the Run dialog (Win+R) and enter the command [resmon.exe]; the Resource Monitor window will appear. In the [Associated Handles] search field, type the folder name, and when it appears, choose the folder, then right click and [End Process] to be able to delete it, then reinstall Anaconda again.
Why is Anaconda Navigator not opening?
I have trouble with my Anaconda3 Navigator. I am using it with Python 3 and jupyter notebook. Today, because I had trouble with installing some packages, I updated everything and it worked fine. A few hours later, my Anaconda Navigator is not opening and when I open the PowerShell prompt I get the following error: and here: failed to create process. Invoke-Expression : The argument cannot be bound to the parameter "Command" because it is an empty string. In C:\Users\elihe\Anaconda3\shell\condabin\Conda.psm1:101 char:36 + Invoke-Expression -Command $activateCommand; + ~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidData: (:) [Invoke-Expression], ParameterBindingValidationException + FullyQualifiedErrorId : ParameterArgumentValidationErrorEmptyStringNotAllowed,Microsoft.PowerShell.Commands.InvokeExpressionCommand PS C:\Users\elihe> Can someone tell me what to do? I really need help :((
[ "I got the solution, okay it is not really a solution,but better than nothing :)\nYou just have to uninstall Anaconda completely from your computer, also every folder with Anaconda things. Then you have to install it completely new, as if you never had Anaconda on your computer. For me, that helped - everything goes like before this problem!\n", "If you just install Anaconda and when you try to lunch the Anaconda navigator and it gives you this message There is instance of Anaconda Navigator already running. This means you have a previous version was\nnot fully uninstalled or removed. So you have to find the location of the file which still excites from the previous version Search For the previous's version file location, then after that open the the file location.\nthen open the command prompt Win+R and write the command [resmon.exe] this window will appear resmon.exe, in the [Associated Handles] search field write the folder name and when it appears choose the folder and then right click and [end process] to be able to delete it, then reinstall Anaconda Again.\n" ]
[ 0, 0 ]
[]
[]
[ "anaconda", "conda", "jupyter_notebook", "powershell", "python" ]
stackoverflow_0061688021_anaconda_conda_jupyter_notebook_powershell_python.txt
Q: return items from dictionary not as tuple I have an excel file and I accumulate the values for each fruit sort. So I do it like this: def calulate_total_fruit_NorthMidSouth(): import openpyxl import tabula excelWorkbook = openpyxl.load_workbook(path, data_only=True) sheet_factuur = excelWorkbook['Facturen '] new_list =[] fruit_sums = { 'ananas': 0, 'apple': 0, 'waspeen': 0, } fruit_name_rows = { 'ananas': [6, 7, 8], 'apple': [9, 10, 11], 'waspeen': [12, 13, 14], } array = [row for row in sheet_factuur.values] # type: ignore # excel does not have a row 0 for row_num, row_values in enumerate(array, 1): for fruit in ['ananas', 'apple', 'waspeen']: # loop through specific fruits if row_num in fruit_name_rows[fruit]: # index 4 is column 5 in excel fruit_sums[fruit] += row_values[4] # type: ignore return list(fruit_sums.items()) But the output is this: [('ananas', 3962), ('apple', 3304.08), ('waspeen', 3767.3999999999996)] But the output has to look like this: ananas 3962 apple 3304.08 waspeen 3767.39 How do I achieve this with a return statement? A: Try something like this: def f(): x = {'a': 1, 'b': 2, 'c': 3} return '\n'.join(f'{a} {b}' for a, b in x.items()) print(f()) # a 1 # b 2 # c 3 A: Not sure if this is the best way to solve this problem but you could add this to your code before printing: mylist = list(fruit_sums.items()) for i in mylist: newlist = list(i) for x in range(len(newlist)): newlist[x] = str(newlist[x]) print(" ".join(newlist))
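The shortest route to the wanted layout is to keep the dict and format it at print time instead of returning tuples; a sketch using the totals shown above (whether 3767.3999... should round to 3767.4 or truncate to 3767.39, as in the sample output, is a formatting choice the question leaves open):

fruit_sums = {'ananas': 3962, 'apple': 3304.08, 'waspeen': 3767.3999999999996}

for name, total in fruit_sums.items():
    print(name, round(total, 2))
# ananas 3962
# apple 3304.08
# waspeen 3767.4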
return items from dictionary not as tuple
I have an excel file and I accumulate the values for each fruit sort. So I do it like this: def calulate_total_fruit_NorthMidSouth(): import openpyxl import tabula excelWorkbook = openpyxl.load_workbook(path, data_only=True) sheet_factuur = excelWorkbook['Facturen '] new_list =[] fruit_sums = { 'ananas': 0, 'apple': 0, 'waspeen': 0, } fruit_name_rows = { 'ananas': [6, 7, 8], 'apple': [9, 10, 11], 'waspeen': [12, 13, 14], } array = [row for row in sheet_factuur.values] # type: ignore # excel does not have a row 0 for row_num, row_values in enumerate(array, 1): for fruit in ['ananas', 'apple', 'waspeen']: # loop through specific fruits if row_num in fruit_name_rows[fruit]: # index 4 is column 5 in excel fruit_sums[fruit] += row_values[4] # type: ignore return list(fruit_sums.items()) But the output is this: [('ananas', 3962), ('apple', 3304.08), ('waspeen', 3767.3999999999996)] But the output has to look like this: ananas 3962 apple 3304.08 waspeen 3767.39 How do I achieve this with a return statement?
[ "Try something like this:\ndef f():\n x = {'a': 1, 'b': 2, 'c': 3}\n return '\\n'.join(f'{a} {b}' for a, b in x.items())\n\nprint(f())\n# a 1\n# b 2\n# c 3\n\n", "Not sure if this is the best way to solve this problem but you could add this to your code before printing:\nmylist = list(fruit_sums.items())\nfor i in mylist:\n newlist = list(i)\n for x in range(len(newlist)):\n newlist[x] = str(newlist[x])\nprint(\" \".join(newlist))\n\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074565707_python.txt
Q: Checking if a file exists in an S3 bucket or not? I am trying to check whether a file exists or not in an S3 bucket. I am currently using the boto3 library in Python. I am using the below code to check whether it exists - file_name = 'random_name' s3_client = boto3.client('s3') result = s3_client.list_objects_v2(Bucket=bucket_name, Prefix=file_name) if 'Contents' in result: print('Exist') else: print("Doesn't exist") But the issue with this code is that it checks by prefix - and it misses edge cases where two files can have the same prefix. Example - I want to check if the file 'hello' exists or not in S3, but there is a file 'helloworld', so this program would fail. Looking for a better solution. A: Compare the keys directly using standard Python string comparisons. The answers to this question go further into detail on this: boto3 file_upload does it check if file exists
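One exact-key approach along the lines of the linked answers is head_object, which raises a ClientError with a 404 code when the key is absent; a sketch (bucket and key names are placeholders):

import boto3
from botocore.exceptions import ClientError

s3_client = boto3.client('s3')

def key_exists(bucket: str, key: str) -> bool:
    """Return True only if an object with exactly this key exists."""
    try:
        s3_client.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError as e:
        if e.response['Error']['Code'] == '404':
            return False
        raise   # surface other failures (403, throttling, ...)

print(key_exists('my-bucket', 'hello'))   # False even if 'helloworld' exists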
Checking if a file exists in an S3 bucket or not?
I am trying to check whether a file exists or not in an S3 bucket. I am currently using the boto3 library in Python. I am using the below code to check whether it exists - file_name = 'random_name' s3_client = boto3.client('s3') result = s3_client.list_objects_v2(Bucket=bucket_name, Prefix=file_name) if 'Contents' in result: print('Exist') else: print("Doesn't exist") But the issue with this code is that it checks by prefix - and it misses edge cases where two files can have the same prefix. Example - I want to check if the file 'hello' exists or not in S3, but there is a file 'helloworld', so this program would fail. Looking for a better solution.
[ "Compare the keys directly using standard Python string comprehensions.\nThe answers to this question go further into detail on this boto3 file_upload does it check if file exists\n" ]
[ 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "boto3", "python", "python_3.x" ]
stackoverflow_0074565946_amazon_s3_amazon_web_services_boto3_python_python_3.x.txt
Q: How to perform split/merge/melt with Python and polars? I have a data transformation problem where the original data consists of "blocks" of three rows of data, where the first row denotes a 'parent' and the two others are related children. A minimum working example looks like this: import polars as pl df_original = pl.DataFrame( { 'Order ID': ['A', 'foo', 'bar'], 'Parent Order ID': [None, 'A', 'A'], 'Direction': ["Buy", "Buy", "Sell"], 'Price': [1.21003, None, 1.21003], 'Some Value': [4, 4, 4], 'Name Provider 1': ['P8', 'P8', 'P8'], 'Quote Provider 1': [None, 1.1, 1.3], 'Name Provider 2': ['P2', 'P2', 'P2'], 'Quote Provider 2': [None, 1.15, 1.25], 'Name Provider 3': ['P1', 'P1', 'P1'], 'Quote Provider 3': [None, 1.0, 1.4], 'Name Provider 4': ['P5', 'P5', 'P5'], 'Quote Provider 4': [None, 1.0, 1.4] } ) In reality, there are up to 15 Providers (so up to 30 columns), but they are not necessary for the example. We would like to transform this into a format where each row represents both the Buy and Sell quote of a single provider for that parent. The desired result is as follows: df_desired = pl.DataFrame( { 'Order ID': ['A', 'A', 'A', 'A'], 'Parent Direction': ['Buy', 'Buy', 'Buy', 'Buy'], 'Price': [1.21003, 1.21003, 1.21003, 1.21003], 'Some Value': [4, 4, 4, 4], 'Name Provider': ['P8', 'P2', 'P1', 'P5'], 'Quote Buy': [1.1, 1.15, 1.0, 1.0], 'Quote Sell': [1.3, 1.25, 1.4, 1.4], } ) df_desired However, I'm having a hard time doing this in polars. My first approach was splitting the data into parents and children, then joining them together on the respective ids: df_parents = ( df_original .filter(pl.col("Parent Order ID").is_null()) .drop(columns=['Parent Order ID']) ) df_ch = ( df_original .filter(pl.col("Parent Order ID").is_not_null()) .drop(columns=['Price', 'Some Value']) ) ch_buy = df_ch.filter(pl.col("Direction") == 'Buy').drop(columns=['Direction']) ch_sell = df_ch.filter(pl.col("Direction") == 'Sell').drop(columns=['Direction']) df_joined = ( df_parents .join(ch_buy, left_on='Order ID', right_on='Parent Order ID', suffix="_Buy") .join(ch_sell, left_on='Order ID', right_on='Parent Order ID', suffix="_Sell") # The Name and Quote columns in the parent are all empty, so they can go, buy they had to be there for the suffix to work for the first join .drop(columns=[f'Name Provider {i}' for i in range(1, 5)]) .drop(columns=[f'Quote Provider {i}' for i in range(1, 5)]) ) But this still leaves you with a mess where you somehow have to split this into four rows - not eight, as you could easily do with .melt(). Any tips on how to best approach this? Am I missing some obivous method here? 
EDIT: Added a slightly larger example dataframe with two parent orders and their children (the real-world dataset has ~50k+ of those) : df_original_two_orders = pl.DataFrame( { 'Order ID': ['A', 'foo', 'bar', 'B', 'baz', 'rar'], # Two parent orders 'Parent Order ID': [None, 'A', 'A', None, 'B', 'B'], 'Direction': ["Buy", "Buy", "Sell", "Sell", "Sell", "Buy"], # Second parent has different direction 'Price': [1.21003, None, 1.21003, 1.1384, None, 1.1384], 'Some Value': [4, 4, 4, 42, 42, 42], 'Name Provider 1': ['P8', 'P8', 'P8', 'P2', 'P2', 'P2'], 'Quote Provider 1': [None, 1.1, 1.3, None, 1.10, 1.40], # Above, 1.10 corresponds to Buy for order A for to Sell for order B - depends on Direction 'Name Provider 2': ['P2', 'P2', 'P2', 'P1', 'P1', 'P1'], 'Quote Provider 2': [None, 1.15, 1.25, None, 1.11, 1.39], 'Name Provider 3': ['P1', 'P1', 'P1', 'P3', 'P3', 'P3'], 'Quote Provider 3': [None, 1.0, 1.4, None, 1.05, 1.55], 'Name Provider 4': ['P5', 'P5', 'P5', None, None, None], 'Quote Provider 4': [None, 1.0, 1.4, None, None, None] } ) I think this is slightly more representative of the real world in that it has multiple parent orders and not all provider columns are filled for all orders, while still keeping the annoying business logic far away. The correct output for this example is the following: df_desired_two_parents = pl.DataFrame( { 'Order ID': ['A']*4 + ['B'] * 3, 'Parent Direction': ['Buy']*4 + ['Sell'] * 3, 'Price': [1.21003] * 4 + [1.1384] * 3, 'Some Value': [4] * 4 + [42] * 3, 'Name Provider': ['P8', 'P2', 'P1', 'P5', 'P2', 'P1', 'P3'], 'Quote Buy': [1.1, 1.15, 1.0, 1.0, 1.40, 1.39, 1.55], # Note the last three values are the "second" values in the original column now because the parent order was 'Sell' 'Quote Sell': [1.3, 1.25, 1.4, 1.4, 1.10, 1.11, 1.05], } ) A: Here's how I've attempted it: fill the nulls in the Parent Order ID column and use that to .groupby() >>> columns = ["Order ID", "Direction", "Price", "Some Value"] ... names = pl.col("^Name .*$") # All name columns ... quotes = pl.col("^Quote .*$") # All quote columns ... ( ... df_original_two_orders ... .with_column(pl.col("Parent Order ID").backward_fill()) ... .groupby("Parent Order ID") ... .agg([ ... pl.col(columns).first(), ... pl.concat_list(names.first()).alias("Name"), # Put all names into single column: ["Name1", "Name2", ...] ... pl.col("^Quote .*$").slice(1), # Create list for each quote column (skip first row): [1.1, 1.3], [1.15, 1.25], ... ... ]) ... .with_columns([ ... pl.concat_list( # Create list of Buy values ... pl.when(pl.col("Direction") == "Buy") ... .then(quotes.arr.first()) ... .otherwise(quotes.arr.last()) ... .alias("Buy")), ... pl.concat_list( # Create list of Sell values ... pl.when(pl.col("Direction") == "Sell") ... .then(quotes.arr.first()) ... .otherwise(quotes.arr.last()) ... .alias("Sell") ... ) ... ]) ... .select(columns + ["Name", "Buy", "Sell"]) # Remove Name/Quote [1234..] columns ... .explode(["Name", "Buy", "Sell"]) # Turn into rows ... 
) shape: (8, 7) ┌──────────┬───────────┬─────────┬────────────┬──────┬──────┬──────┐ │ Order ID | Direction | Price | Some Value | Name | Buy | Sell │ │ --- | --- | --- | --- | --- | --- | --- │ │ str | str | f64 | i64 | str | f64 | f64 │ ╞══════════╪═══════════╪═════════╪════════════╪══════╪══════╪══════╡ │ B | Sell | 1.1384 | 42 | P2 | 1.4 | 1.1 │ ├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤ │ B | Sell | 1.1384 | 42 | P1 | 1.39 | 1.11 │ ├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤ │ B | Sell | 1.1384 | 42 | P3 | 1.55 | 1.05 │ ├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤ │ B | Sell | 1.1384 | 42 | null | null | null │ ├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤ │ A | Buy | 1.21003 | 4 | P8 | 1.1 | 1.3 │ ├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤ │ A | Buy | 1.21003 | 4 | P2 | 1.15 | 1.25 │ ├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤ │ A | Buy | 1.21003 | 4 | P1 | 1.0 | 1.4 │ ├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤ │ A | Buy | 1.21003 | 4 | P5 | 1.0 | 1.4 │ └─//───────┴─//────────┴─//──────┴─//─────────┴─//───┴─//───┴─//───┘ Explanation: Step 1 creates a list of names and puts each quote into a list: >>> columns = ["Order ID", "Direction", "Price", "Some Value"] ... names = pl.col("^Name .*$") # All name columns ... quotes = pl.col("^Quote .*$") # All quote columns ... agg = ( ... df_original_two_orders ... .with_column(pl.col("Parent Order ID").backward_fill()) ... .groupby("Parent Order ID") ... .agg([ ... pl.col(columns).first(), ... pl.concat_list(names.first()).alias("Name"), # Put all names into single column: ["Name1", "Name2", ...] ... pl.col("^Quote .*$").slice(1), # Create list for each quote column (skip first row): [1.1, 1.3], [1.15, 1.25], ... ... ]) ... ) >>> agg shape: (2, 10) ┌─────────────────┬──────────┬───────────┬─────────┬────────────┬────────────────────────┬──────────────────┬──────────────────┬──────────────────┬──────────────────┐ │ Parent Order ID | Order ID | Direction | Price | Some Value | Name | Quote Provider 1 | Quote Provider 2 | Quote Provider 3 | Quote Provider 4 │ │ --- | --- | --- | --- | --- | --- | --- | --- | --- | --- │ │ str | str | str | f64 | i64 | list[str] | list[f64] | list[f64] | list[f64] | list[f64] │ ╞═════════════════╪══════════╪═══════════╪═════════╪════════════╪════════════════════════╪══════════════════╪══════════════════╪══════════════════╪══════════════════╡ │ A | A | Buy | 1.21003 | 4 | ["P8", "P2", ... "P5"] | [1.1, 1.3] | [1.15, 1.25] | [1.0, 1.4] | [1.0, 1.4] │ ├─────────────────┼──────────┼───────────┼─────────┼────────────┼────────────────────────┼──────────────────┼──────────────────┼──────────────────┼──────────────────┤ │ B | B | Sell | 1.1384 | 42 | ["P2", "P1", ... null] | [1.1, 1.4] | [1.11, 1.39] | [1.05, 1.55] | [null, null] │ └─//──────────────┴─//───────┴─//────────┴─//──────┴─//─────────┴─//─────────────────────┴─//───────────────┴─//───────────────┴─//───────────────┴─//───────────────┘ Step 2 creates separate Buy/Sell lists from the Quote columns. We can use pl.when().then().otherwise() to test if we should take the first/last value in each Quote list depending if the Direction is Buy/Sell. >>> ( ... agg ... .with_columns([ ... pl.concat_list( # Create list of Buy values ... pl.when(pl.col("Direction") == "Buy") ... .then(quotes.arr.first()) ... .otherwise(quotes.arr.last()) ... .alias("Buy")), ... 
pl.concat_list( # Create list of Sell values ... pl.when(pl.col("Direction") == "Sell") ... .then(quotes.arr.first()) ... .otherwise(quotes.arr.last()) ... .alias("Sell") ... ) ... ]) ... .select(columns + ["Name", "Buy", "Sell"]) ... ) shape: (2, 7) ┌──────────┬───────────┬─────────┬────────────┬────────────────────────┬───────────────────────┬───────────────────────┐ │ Order ID | Direction | Price | Some Value | Name | Buy | Sell │ │ --- | --- | --- | --- | --- | --- | --- │ │ str | str | f64 | i64 list[str] | list[f64] | list[f64] │ ╞══════════╪═══════════╪═════════╪════════════╪════════════════════════╪═══════════════════════╪═══════════════════════╡ │ A | Buy | 1.21003 | 4 | ["P8", "P2", ... "P5"] | [1.1, 1.15, ... 1.0] | [1.3, 1.25, ... 1.4] │ ├──────────┼───────────┼─────────┼────────────┼────────────────────────┼───────────────────────┼───────────────────────┤ │ B | Sell | 1.1384 | 42 | ["P2", "P1", ... null] | [1.4, 1.39, ... null] | [1.1, 1.11, ... null] │ └─//───────┴─//────────┴─//──────┴─//─────────┴─//─────────────────────┴─//────────────────────┴─//────────────────────┘- Finally we .explode() to turn the lists into rows. You can add a .drop_nulls() afterwards to remove the null rows if desired.
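For that last optional step, a minimal polars sketch — assuming the chained expression above is bound to a variable, here called result (a hypothetical name), and using the target column names from df_desired_two_parents:

result = (
    result
    .drop_nulls(subset=["Name"])   # drops the all-null padding row that order B produced
    .rename({
        "Direction": "Parent Direction",
        "Name": "Name Provider",
        "Buy": "Quote Buy",
        "Sell": "Quote Sell",
    })
)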
How to perform split/merge/melt with Python and polars?
I have a data transformation problem where the original data consists of "blocks" of three rows of data, where the first row denotes a 'parent' and the two others are related children. A minimum working example looks like this: import polars as pl df_original = pl.DataFrame( { 'Order ID': ['A', 'foo', 'bar'], 'Parent Order ID': [None, 'A', 'A'], 'Direction': ["Buy", "Buy", "Sell"], 'Price': [1.21003, None, 1.21003], 'Some Value': [4, 4, 4], 'Name Provider 1': ['P8', 'P8', 'P8'], 'Quote Provider 1': [None, 1.1, 1.3], 'Name Provider 2': ['P2', 'P2', 'P2'], 'Quote Provider 2': [None, 1.15, 1.25], 'Name Provider 3': ['P1', 'P1', 'P1'], 'Quote Provider 3': [None, 1.0, 1.4], 'Name Provider 4': ['P5', 'P5', 'P5'], 'Quote Provider 4': [None, 1.0, 1.4] } ) In reality, there are up to 15 Providers (so up to 30 columns), but they are not necessary for the example. We would like to transform this into a format where each row represents both the Buy and Sell quote of a single provider for that parent. The desired result is as follows: df_desired = pl.DataFrame( { 'Order ID': ['A', 'A', 'A', 'A'], 'Parent Direction': ['Buy', 'Buy', 'Buy', 'Buy'], 'Price': [1.21003, 1.21003, 1.21003, 1.21003], 'Some Value': [4, 4, 4, 4], 'Name Provider': ['P8', 'P2', 'P1', 'P5'], 'Quote Buy': [1.1, 1.15, 1.0, 1.0], 'Quote Sell': [1.3, 1.25, 1.4, 1.4], } ) df_desired However, I'm having a hard time doing this in polars. My first approach was splitting the data into parents and children, then joining them together on the respective ids: df_parents = ( df_original .filter(pl.col("Parent Order ID").is_null()) .drop(columns=['Parent Order ID']) ) df_ch = ( df_original .filter(pl.col("Parent Order ID").is_not_null()) .drop(columns=['Price', 'Some Value']) ) ch_buy = df_ch.filter(pl.col("Direction") == 'Buy').drop(columns=['Direction']) ch_sell = df_ch.filter(pl.col("Direction") == 'Sell').drop(columns=['Direction']) df_joined = ( df_parents .join(ch_buy, left_on='Order ID', right_on='Parent Order ID', suffix="_Buy") .join(ch_sell, left_on='Order ID', right_on='Parent Order ID', suffix="_Sell") # The Name and Quote columns in the parent are all empty, so they can go, but they had to be there for the suffix to work for the first join .drop(columns=[f'Name Provider {i}' for i in range(1, 5)]) .drop(columns=[f'Quote Provider {i}' for i in range(1, 5)]) ) But this still leaves you with a mess where you somehow have to split this into four rows - not eight, as you could easily do with .melt(). Any tips on how to best approach this? Am I missing some obvious method here?
EDIT: Added a slightly larger example dataframe with two parent orders and their children (the real-world dataset has ~50k+ of those) : df_original_two_orders = pl.DataFrame( { 'Order ID': ['A', 'foo', 'bar', 'B', 'baz', 'rar'], # Two parent orders 'Parent Order ID': [None, 'A', 'A', None, 'B', 'B'], 'Direction': ["Buy", "Buy", "Sell", "Sell", "Sell", "Buy"], # Second parent has different direction 'Price': [1.21003, None, 1.21003, 1.1384, None, 1.1384], 'Some Value': [4, 4, 4, 42, 42, 42], 'Name Provider 1': ['P8', 'P8', 'P8', 'P2', 'P2', 'P2'], 'Quote Provider 1': [None, 1.1, 1.3, None, 1.10, 1.40], # Above, 1.10 corresponds to Buy for order A and to Sell for order B - depends on Direction 'Name Provider 2': ['P2', 'P2', 'P2', 'P1', 'P1', 'P1'], 'Quote Provider 2': [None, 1.15, 1.25, None, 1.11, 1.39], 'Name Provider 3': ['P1', 'P1', 'P1', 'P3', 'P3', 'P3'], 'Quote Provider 3': [None, 1.0, 1.4, None, 1.05, 1.55], 'Name Provider 4': ['P5', 'P5', 'P5', None, None, None], 'Quote Provider 4': [None, 1.0, 1.4, None, None, None] } ) I think this is slightly more representative of the real world in that it has multiple parent orders and not all provider columns are filled for all orders, while still keeping the annoying business logic far away. The correct output for this example is the following: df_desired_two_parents = pl.DataFrame( { 'Order ID': ['A']*4 + ['B'] * 3, 'Parent Direction': ['Buy']*4 + ['Sell'] * 3, 'Price': [1.21003] * 4 + [1.1384] * 3, 'Some Value': [4] * 4 + [42] * 3, 'Name Provider': ['P8', 'P2', 'P1', 'P5', 'P2', 'P1', 'P3'], 'Quote Buy': [1.1, 1.15, 1.0, 1.0, 1.40, 1.39, 1.55], # Note the last three values are the "second" values in the original column now because the parent order was 'Sell' 'Quote Sell': [1.3, 1.25, 1.4, 1.4, 1.10, 1.11, 1.05], } )
[ "Here's how I've attempted it:\nfill the nulls in the Parent Order ID column and use that to .groupby()\n>>> columns = [\"Order ID\", \"Direction\", \"Price\", \"Some Value\"]\n... names = pl.col(\"^Name .*$\") # All name columns\n... quotes = pl.col(\"^Quote .*$\") # All quote columns\n... (\n... df_original_two_orders\n... .with_column(pl.col(\"Parent Order ID\").backward_fill())\n... .groupby(\"Parent Order ID\")\n... .agg([\n... pl.col(columns).first(),\n... pl.concat_list(names.first()).alias(\"Name\"), # Put all names into single column: [\"Name1\", \"Name2\", ...]\n... pl.col(\"^Quote .*$\").slice(1), # Create list for each quote column (skip first row): [1.1, 1.3], [1.15, 1.25], ...\n... ])\n... .with_columns([\n... pl.concat_list( # Create list of Buy values\n... pl.when(pl.col(\"Direction\") == \"Buy\")\n... .then(quotes.arr.first())\n... .otherwise(quotes.arr.last())\n... .alias(\"Buy\")),\n... pl.concat_list( # Create list of Sell values\n... pl.when(pl.col(\"Direction\") == \"Sell\")\n... .then(quotes.arr.first())\n... .otherwise(quotes.arr.last())\n... .alias(\"Sell\")\n... )\n... ])\n... .select(columns + [\"Name\", \"Buy\", \"Sell\"]) # Remove Name/Quote [1234..] columns\n... .explode([\"Name\", \"Buy\", \"Sell\"]) # Turn into rows\n... )\nshape: (8, 7)\n┌──────────┬───────────┬─────────┬────────────┬──────┬──────┬──────┐\n│ Order ID | Direction | Price | Some Value | Name | Buy | Sell │\n│ --- | --- | --- | --- | --- | --- | --- │\n│ str | str | f64 | i64 | str | f64 | f64 │\n╞══════════╪═══════════╪═════════╪════════════╪══════╪══════╪══════╡\n│ B | Sell | 1.1384 | 42 | P2 | 1.4 | 1.1 │\n├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤\n│ B | Sell | 1.1384 | 42 | P1 | 1.39 | 1.11 │\n├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤\n│ B | Sell | 1.1384 | 42 | P3 | 1.55 | 1.05 │\n├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤\n│ B | Sell | 1.1384 | 42 | null | null | null │\n├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤\n│ A | Buy | 1.21003 | 4 | P8 | 1.1 | 1.3 │\n├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤\n│ A | Buy | 1.21003 | 4 | P2 | 1.15 | 1.25 │\n├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤\n│ A | Buy | 1.21003 | 4 | P1 | 1.0 | 1.4 │\n├──────────┼───────────┼─────────┼────────────┼──────┼──────┼──────┤\n│ A | Buy | 1.21003 | 4 | P5 | 1.0 | 1.4 │\n└─//───────┴─//────────┴─//──────┴─//─────────┴─//───┴─//───┴─//───┘\n\n\nExplanation:\nStep 1 creates a list of names and puts each quote into a list:\n>>> columns = [\"Order ID\", \"Direction\", \"Price\", \"Some Value\"]\n... names = pl.col(\"^Name .*$\") # All name columns\n... quotes = pl.col(\"^Quote .*$\") # All quote columns\n... agg = (\n... df_original_two_orders\n... .with_column(pl.col(\"Parent Order ID\").backward_fill())\n... .groupby(\"Parent Order ID\")\n... .agg([\n... pl.col(columns).first(),\n... pl.concat_list(names.first()).alias(\"Name\"), # Put all names into single column: [\"Name1\", \"Name2\", ...]\n... pl.col(\"^Quote .*$\").slice(1), # Create list for each quote column (skip first row): [1.1, 1.3], [1.15, 1.25], ...\n... ])\n... 
)\n>>> agg\nshape: (2, 10)\n┌─────────────────┬──────────┬───────────┬─────────┬────────────┬────────────────────────┬──────────────────┬──────────────────┬──────────────────┬──────────────────┐\n│ Parent Order ID | Order ID | Direction | Price | Some Value | Name | Quote Provider 1 | Quote Provider 2 | Quote Provider 3 | Quote Provider 4 │\n│ --- | --- | --- | --- | --- | --- | --- | --- | --- | --- │\n│ str | str | str | f64 | i64 | list[str] | list[f64] | list[f64] | list[f64] | list[f64] │\n╞═════════════════╪══════════╪═══════════╪═════════╪════════════╪════════════════════════╪══════════════════╪══════════════════╪══════════════════╪══════════════════╡\n│ A | A | Buy | 1.21003 | 4 | [\"P8\", \"P2\", ... \"P5\"] | [1.1, 1.3] | [1.15, 1.25] | [1.0, 1.4] | [1.0, 1.4] │\n├─────────────────┼──────────┼───────────┼─────────┼────────────┼────────────────────────┼──────────────────┼──────────────────┼──────────────────┼──────────────────┤\n│ B | B | Sell | 1.1384 | 42 | [\"P2\", \"P1\", ... null] | [1.1, 1.4] | [1.11, 1.39] | [1.05, 1.55] | [null, null] │\n└─//──────────────┴─//───────┴─//────────┴─//──────┴─//─────────┴─//─────────────────────┴─//───────────────┴─//───────────────┴─//───────────────┴─//───────────────┘\n\nStep 2 creates separate Buy/Sell lists from the Quote columns.\nWe can use pl.when().then().otherwise() to test if we should take the first/last value in each Quote list depending if the Direction is Buy/Sell.\n>>> (\n... agg\n... .with_columns([\n... pl.concat_list( # Create list of Buy values\n... pl.when(pl.col(\"Direction\") == \"Buy\")\n... .then(quotes.arr.first())\n... .otherwise(quotes.arr.last())\n... .alias(\"Buy\")),\n... pl.concat_list( # Create list of Sell values\n... pl.when(pl.col(\"Direction\") == \"Sell\")\n... .then(quotes.arr.first())\n... .otherwise(quotes.arr.last())\n... .alias(\"Sell\")\n... )\n... ])\n... .select(columns + [\"Name\", \"Buy\", \"Sell\"])\n... )\nshape: (2, 7)\n┌──────────┬───────────┬─────────┬────────────┬────────────────────────┬───────────────────────┬───────────────────────┐\n│ Order ID | Direction | Price | Some Value | Name | Buy | Sell │\n│ --- | --- | --- | --- | --- | --- | --- │\n│ str | str | f64 | i64 list[str] | list[f64] | list[f64] │\n╞══════════╪═══════════╪═════════╪════════════╪════════════════════════╪═══════════════════════╪═══════════════════════╡\n│ A | Buy | 1.21003 | 4 | [\"P8\", \"P2\", ... \"P5\"] | [1.1, 1.15, ... 1.0] | [1.3, 1.25, ... 1.4] │\n├──────────┼───────────┼─────────┼────────────┼────────────────────────┼───────────────────────┼───────────────────────┤\n│ B | Sell | 1.1384 | 42 | [\"P2\", \"P1\", ... null] | [1.4, 1.39, ... null] | [1.1, 1.11, ... null] │\n└─//───────┴─//────────┴─//──────┴─//─────────┴─//─────────────────────┴─//────────────────────┴─//────────────────────┘-\n\nFinally we .explode() to turn the lists into rows.\nYou can add a .drop_nulls() afterwards to remove the null rows if desired.\n" ]
[ 1 ]
[]
[]
[ "dataframe", "join", "melt", "python", "python_polars" ]
stackoverflow_0074562243_dataframe_join_melt_python_python_polars.txt
Q: Modifying pandas row value based on its length I have a column in my pandas dataframe with the following values that represent hours worked in a week. 0 40 1 40h / week 2 46.25h/week on average 3 11 I would like to check every row, and if the length of the value is larger than 2 digits - extract the number of hours only from it. I have tried the following: df['Hours_per_week'].apply(lambda x: (x.extract('(\d+)') if(len(str(x)) > 2) else x)) However I am getting the AttributeError: 'str' object has no attribute 'extract' error. A: It looks like you could ensure having h after the number: df['Hours_per_week'].str.extract(r'(\d{2}\.?\d*)h', expand=False) Output: 0 NaN 1 40 2 46.25 3 NaN Name: Hours_per_week, dtype: object A: Assuming the series data are strings, try this: df['Hours_per_week'].str.extract('(\d+)') A: Why not immediately extract float pattern i.e. \d+\.?\d+ ? >>> s = pd.Series(['40', '40h / week', '46.25h/week on average', '11']) >>> s.str.extract("(\d+\.?\d+)") 0 0 40 1 40 2 46.25 3 11 2 digits will still match either way.
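A minimal end-to-end sketch tying the answers together — assuming the goal is a numeric hours column; the sample values are the ones from the question:

import pandas as pd

s = pd.Series(['40', '40h / week', '46.25h/week on average', '11'])
# Extract the leading integer or decimal number, then convert to a numeric dtype
hours = pd.to_numeric(s.str.extract(r'(\d+(?:\.\d+)?)', expand=False))
print(hours)  # 40.0, 40.0, 46.25, 11.0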
Modifying pandas row value based on its length
I have a column in my pandas dataframe with the following values that represent hours worked in a week. 0 40 1 40h / week 2 46.25h/week on average 3 11 I would like to check every row, and if the length of the value is larger than 2 digits - extract the number of hours only from it. I have tried the following: df['Hours_per_week'].apply(lambda x: (x.extract('(\d+)') if(len(str(x)) > 2) else x)) However I am getting the AttributeError: 'str' object has no attribute 'extract' error.
[ "It looks like you could ensure having h after the number:\ndf['Hours_per_week'].str.extract(r'(\\d{2}\\.?\\d*)h', expand=False)\n\nOutput:\n0 NaN\n1 40\n2 46.25\n3 NaN\nName: Hours_per_week, dtype: object\n\n", "Assuming the series data are strings, try this:\ndf['Hours_per_week'].str.extract('(\\d+)')\n\n", "Why not immediately extract float pattern i.e. \\d+\\.?\\d+ ?\n>>> s = pd.Series(['40', '40h / week', '46.25h/week on average', '11'])\n>>> s.str.extract(\"(\\d+\\.?\\d+)\")\n 0\n0 40\n1 40\n2 46.25\n3 11\n\n2 digits will still match either way.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074565953_dataframe_pandas_python.txt
Q: Python Pandas read_html multi_index table? I am not sure if it should be called multi index. Here is the page I am trying to get data from: Azure product availability by region. There is hierarchy level: class "category-row" --> "service-row" --> "capability-row" . pandas.read_html give me a flat table, with all values from three classes. Is there a way to get the hierarchy data? Here is the code from selenium import webdriver from selenium.webdriver.firefox.options import Options from bs4 import BeautifulSoup import pandas as pd options = Options() options.add_argument('--headless') driver = webdriver.Firefox(options=options) driver.implicitly_wait(30) url = url = 'https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/?regions=us-east-2,canada-central,canada-east&products=all' driver.get(url) tree = BeautifulSoup(driver.find_element_by_id("primary-table").get_attribute('outerHTML'), "html5lib") table = tree.find('table', class_='primary-table') header_list = table.find('tr', {'class': 'region-headers-row'}).find_all('th') df = pd.read_html(driver.find_element_by_id("primary-table").get_attribute('outerHTML'), header=0)[0].iloc[:, :len(header_list)]`` A: Not sure, if it fit your needs, but it is also take the table contents - May provide an expected result. Example ... data=[] soup = BeautifulSoup(driver.page_source) for r in soup.select('table tr.service-row:has([data-region-slug])'): row = [ r.find_previous('tr', attrs={'class':'category-row'}).th.get_text(strip=True), r.th.get_text(strip=True) ] for c in r.select('td'): if c.img: row.append(c.img.get('src')) else: row.append(c.span.text) data.append(row) df = pd.DataFrame(data, columns=['Category']+list(soup.table.stripped_strings)) df.columns = pd.MultiIndex.from_tuples( list( zip( ['','']+[c.get('data-colgroup') for c in soup.table.select('th[data-colgroup]')], df.columns) ) ) df mapper = {'//azurecomcdn.azureedge.net/cvt-5983f2707de6e50e5020c6059b619845bc5be5434c362ed8e18652d58e15571e/images/page/explore/global-infrastructure/products-by-region/ga.svg':'hook', '//azurecomcdn.azureedge.net/cvt-5983f2707de6e50e5020c6059b619845bc5be5434c362ed8e18652d58e15571e/images/page/explore/global-infrastructure/products-by-region/planned-active.svg':'planned-active', '//azurecomcdn.azureedge.net/cvt-5983f2707de6e50e5020c6059b619845bc5be5434c362ed8e18652d58e15571e/images/page/explore/global-infrastructure/products-by-region/preview-active.svg':'preview-active', '//azurecomcdn.azureedge.net/cvt-5983f2707de6e50e5020c6059b619845bc5be5434c362ed8e18652d58e15571e/images/page/explore/global-infrastructure/products-by-region/preview.svg':'preview' } df.replace(mapper) Output Canada United States Category Products Canada Central Canada East East US 2 0 AI + machine learning Azure Databricks hook hook hook 1 AI + machine learning Azure Bot Services Not available Not available Not available 2 AI + machine learning Azure Cognitive Search hook hook hook 3 AI + machine learning Microsoft Genomics Not available Not available hook 4 AI + machine learning Azure Machine Learning hook hook hook 9613 Web Azure Web PubSub hook hook hook 9614 Web Azure Fluid Relay planned-active Not available hook 9615 Virtual desktop infrastructure Azure Virtual Desktop Not available Not available Not available 9616 Virtual desktop infrastructure Azure Lab Services hook hook hook 9617 Virtual desktop infrastructure Microsoft Dev Box preview Not available preview
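A possible variant of the mapper step — a sketch, not part of the answer above: the status label can be derived from the image filename itself, which avoids hard-coding the CDN URLs (note this yields 'ga' rather than the answer's 'hook' label):

import re

def status_from_src(src):
    # '.../products-by-region/planned-active.svg' -> 'planned-active'
    return re.sub(r'\.svg$', '', src.rsplit('/', 1)[-1])

# inside the scraping loop, instead of appending the raw src:
# row.append(status_from_src(c.img.get('src')))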
Python Pandas read_html multi_index table?
I am not sure if it should be called multi index. Here is the page I am trying to get data from: Azure product availability by region. There is a hierarchy of levels: class "category-row" --> "service-row" --> "capability-row". pandas.read_html gives me a flat table, with all values from three classes. Is there a way to get the hierarchy data? Here is the code: from selenium import webdriver from selenium.webdriver.firefox.options import Options from bs4 import BeautifulSoup import pandas as pd options = Options() options.add_argument('--headless') driver = webdriver.Firefox(options=options) driver.implicitly_wait(30) url = 'https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/?regions=us-east-2,canada-central,canada-east&products=all' driver.get(url) tree = BeautifulSoup(driver.find_element_by_id("primary-table").get_attribute('outerHTML'), "html5lib") table = tree.find('table', class_='primary-table') header_list = table.find('tr', {'class': 'region-headers-row'}).find_all('th') df = pd.read_html(driver.find_element_by_id("primary-table").get_attribute('outerHTML'), header=0)[0].iloc[:, :len(header_list)]
[ "Not sure, if it fit your needs, but it is also take the table contents - May provide an expected result.\nExample\n...\ndata=[]\nsoup = BeautifulSoup(driver.page_source)\n\nfor r in soup.select('table tr.service-row:has([data-region-slug])'):\n row = [\n r.find_previous('tr', attrs={'class':'category-row'}).th.get_text(strip=True),\n r.th.get_text(strip=True)\n ]\n for c in r.select('td'):\n if c.img:\n row.append(c.img.get('src'))\n else:\n row.append(c.span.text)\n data.append(row)\n\ndf = pd.DataFrame(data, columns=['Category']+list(soup.table.stripped_strings))\n\ndf.columns = pd.MultiIndex.from_tuples(\n list(\n zip(\n ['','']+[c.get('data-colgroup') for c in soup.table.select('th[data-colgroup]')], \n df.columns)\n )\n )\ndf\n\nmapper = {'//azurecomcdn.azureedge.net/cvt-5983f2707de6e50e5020c6059b619845bc5be5434c362ed8e18652d58e15571e/images/page/explore/global-infrastructure/products-by-region/ga.svg':'hook',\n '//azurecomcdn.azureedge.net/cvt-5983f2707de6e50e5020c6059b619845bc5be5434c362ed8e18652d58e15571e/images/page/explore/global-infrastructure/products-by-region/planned-active.svg':'planned-active',\n '//azurecomcdn.azureedge.net/cvt-5983f2707de6e50e5020c6059b619845bc5be5434c362ed8e18652d58e15571e/images/page/explore/global-infrastructure/products-by-region/preview-active.svg':'preview-active',\n '//azurecomcdn.azureedge.net/cvt-5983f2707de6e50e5020c6059b619845bc5be5434c362ed8e18652d58e15571e/images/page/explore/global-infrastructure/products-by-region/preview.svg':'preview'\n }\n\ndf.replace(mapper)\n\nOutput\n\n\n\n\n\n\n\n\nCanada\nUnited States\n\n\n\n\n\nCategory\nProducts\nCanada Central\nCanada East\nEast US 2\n\n\n0\nAI + machine learning\nAzure Databricks\nhook\nhook\nhook\n\n\n1\nAI + machine learning\nAzure Bot Services\nNot available\nNot available\nNot available\n\n\n2\nAI + machine learning\nAzure Cognitive Search\nhook\nhook\nhook\n\n\n3\nAI + machine learning\nMicrosoft Genomics\nNot available\nNot available\nhook\n\n\n4\nAI + machine learning\nAzure Machine Learning\nhook\nhook\nhook\n\n\n9613\nWeb\nAzure Web PubSub\nhook\nhook\nhook\n\n\n9614\nWeb\nAzure Fluid Relay\nplanned-active\nNot available\nhook\n\n\n9615\nVirtual desktop infrastructure\nAzure Virtual Desktop\nNot available\nNot available\nNot available\n\n\n9616\nVirtual desktop infrastructure\nAzure Lab Services\nhook\nhook\nhook\n\n\n9617\nVirtual desktop infrastructure\nMicrosoft Dev Box\npreview\nNot available\npreview\n\n\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "pandas", "python", "web_scraping" ]
stackoverflow_0074563937_beautifulsoup_pandas_python_web_scraping.txt
Q: Make Source and Target column based on consecutive rows I have the following problem: Person 1001 accomplishes activity A and then activity C (which follows activity A). I need to move consecutive rows to target columns: df = pd.DataFrame([[1001, 'A'], [1001,'C'], [1004, 'D'],[1005, 'C'], [1005,'D'], [1010, 'A'],[1010,'D'],[1010,'F']], columns=['CustomerNr','Activity']) df = pd.DataFrame([[1001, 'A','C'], [1004, 'D',np.nan],[1005, 'C','D'], [1010, 'A','D'],[1010,'D' ,'F']], columns=['CustomerNr','Source','Target']) CustomerNr Source Target 1001 A C 1004 D NaN 1005 C D 1010 A D 1010 D F A: you can use: df['Target']=df['Activity'].shift(-1) df['prev_CustomerNr']=df['CustomerNr'].shift(-1) print(df) ''' CustomerNr Activity Target prev_CustomerNr 0 1001 A C 1001.0 1 1001 C D 1004.0 2 1004 D C 1005.0 3 1005 C D 1005.0 4 1005 D A 1010.0 5 1010 A D 1010.0 6 1010 D F 1010.0 7 1010 F None NaN ''' #we can't find the target information of the most recent activity. So we drop the last row for each CustomerNr. m1 = df.duplicated(['CustomerNr'], keep="last") #https://stackoverflow.com/a/70216388/15415267 m2 = ~df.duplicated(['CustomerNr'], keep=False) df = df[m1|m2] #If CustomerNr and prev_CustomerNr are not the same, I replace with nan. df['Target']=np.where(df['CustomerNr']==df['prev_CustomerNr'],df['Target'],np.nan) df=df.drop(['prev_CustomerNr'],axis=1) print(df) ''' CustomerNr Activity Target 0 1001 A C 2 1004 D NaN 3 1005 C D 5 1010 A D 6 1010 D F '''
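An alternative sketch, not from the answer above: pandas can do the shift per customer directly with groupby, which avoids the prev_CustomerNr comparison entirely:

import pandas as pd

df = pd.DataFrame([[1001, 'A'], [1001, 'C'], [1004, 'D'], [1005, 'C'],
                   [1005, 'D'], [1010, 'A'], [1010, 'D'], [1010, 'F']],
                  columns=['CustomerNr', 'Activity'])

# Shift within each customer so pairs never cross customer boundaries
df['Target'] = df.groupby('CustomerNr')['Activity'].shift(-1)

# Keep rows with a follow-up activity, plus customers with a single activity
sizes = df.groupby('CustomerNr')['Activity'].transform('size')
out = df[df['Target'].notna() | sizes.eq(1)].rename(columns={'Activity': 'Source'})
print(out)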
Make Source and Target column based on consecutive rows
I have the following problem: Person 1001 accomplishes activity A and then activity C (which follows activity A). I need to move consecutive rows to target columns: df = pd.DataFrame([[1001, 'A'], [1001,'C'], [1004, 'D'],[1005, 'C'], [1005,'D'], [1010, 'A'],[1010,'D'],[1010,'F']], columns=['CustomerNr','Activity']) df = pd.DataFrame([[1001, 'A','C'], [1004, 'D',np.nan],[1005, 'C','D'], [1010, 'A','D'],[1010,'D' ,'F']], columns=['CustomerNr','Source','Target']) CustomerNr Source Target 1001 A C 1004 D NaN 1005 C D 1010 A D 1010 D F
[ "you can use:\ndf['Target']=df['Activity'].shift(-1)\ndf['prev_CustomerNr']=df['CustomerNr'].shift(-1)\nprint(df)\n'''\n CustomerNr Activity Target prev_CustomerNr\n0 1001 A C 1001.0\n1 1001 C D 1004.0\n2 1004 D C 1005.0\n3 1005 C D 1005.0\n4 1005 D A 1010.0\n5 1010 A D 1010.0\n6 1010 D F 1010.0\n7 1010 F None NaN\n'''\n#we can't find the target information of the most recent activity. So we drop the last row for each CustomerNr.\n\nm1 = df.duplicated(['CustomerNr'], keep=\"last\") #https://stackoverflow.com/a/70216388/15415267\nm2 = ~df.duplicated(['CustomerNr'], keep=False)\ndf = df[m1|m2]\n\n#If CustomerNr and prev_CustomerNr are not the same, I replace with nan.\ndf['Target']=np.where(df['CustomerNr']==df['prev_CustomerNr'],df['Target'],np.nan)\ndf=df.drop(['prev_CustomerNr'],axis=1)\n\nprint(df)\n'''\n CustomerNr Activity Target\n0 1001 A C\n2 1004 D NaN\n3 1005 C D\n5 1010 A D\n6 1010 D F\n'''\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "linked_list", "python" ]
stackoverflow_0074565119_dataframe_linked_list_python.txt
Q: All items overwritten by the last item when using pipeline to save picture in scrapy I am new to scrapy and not a native English speaker, so sorry in advance if I make some silly mistakes or cannot make my point clear. I want to scrape the information and covers of rock albums from a Chinese website (music.douban.com/tag/%E6%91%87%E6%BB%9A?start=0&type=T). When I am just using xpath to get non-picture information (artists, the detail page's URL and the cover's URL), nothing went wrong: import scrapy from myscrapy.items import musicItem class doubanAlbumSpider(scrapy.Spider): name = "albumspider" start_urls = ['https://music.douban.com/tag/%E6%91%87%E6%BB%9A?start=0&type=T'] headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36', } def start_requests(self): url = 'https://music.douban.com/tag/%E6%91%87%E6%BB%9A?start=0&type=T' yield scrapy.Request(url, headers=self.headers) def parse(self,response): item = musicItem() albums = response.xpath(r"//tr[@class='item']") for album in albums: item['alname'] = " ".join(album.xpath(r"./td/div/a/text()")[0].extract().split()) item['detailUrl'] = album.xpath(r"./td/a/@href")[0].extract() item['imageUrl'] = (r"/m/").join(album.xpath(r"./td/a/img/@src")[0].extract().split(r"/s/")) yield(item) class musicItem(scrapy.Item): alname = scrapy.Field() imageUrl = scrapy.Field() detailUrl = scrapy.Field() image = scrapy.Field() image_paths = scrapy.Field() But when I added a pipeline to download the pictures, the pictures are downloaded successfully, while the non-picture info went wrong. They are all overwritten by the last item, which is In the Court of the Crimson King. Has anyone else had similar problems? class DoubanImagePipeline(ImagesPipeline): default_headers = { 'accept': 'image/webp,image/*,*/*;q=0.8', 'accept-encoding': 'gzip, deflate, sdch, br', 'accept-language': 'zh-CN,zh;q=0.8,en;q=0.6', 'cookie': 'bid=yQdC/AzTaCw', 'referer': 'https://www.douban.com/', 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36', } def get_media_requests(self, item, info): yield scrapy.Request(url=item['imageUrl']) A: Item fields are mutable, and right now in your parse method you create 1 item at the beginning of your method body and use that same item when you yield each of the results. What you need to do is create a unique item on each iteration of your for loop. For example: def parse(self,response): albums = response.xpath(r"//tr[@class='item']") for album in albums: item = musicItem() item['alname'] = " ".join(album.xpath(r"./td/div/a/text()")[0].extract().split()) item['detailUrl'] = album.xpath(r"./td/a/@href")[0].extract() item['imageUrl'] = (r"/m/").join(album.xpath(r"./td/a/img/@src")[0].extract().split(r"/s/")) yield(item)
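For illustration, a plain-Python sketch of the underlying pitfall (my own addition): appending the same mutable object inside a loop stores references to it, so every stored entry reflects the last mutation — exactly what happens with the shared scrapy item above:

item = {}
rows = []
for i in range(3):
    item['n'] = i       # mutates the one shared dict
    rows.append(item)   # appends a reference, not a copy
print(rows)             # [{'n': 2}, {'n': 2}, {'n': 2}]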
All items overwritten by the last item when using pipeline to save picture in scrapy
I am new to scrapy and not a native English speaker, so sorry in advance if I make some silly mistakes or cannot make my point clear. I want to scrape the information and covers of rock albums from a Chinese website (music.douban.com/tag/%E6%91%87%E6%BB%9A?start=0&type=T). When I am just using xpath to get non-picture information (artists, the detail page's URL and the cover's URL), nothing went wrong: import scrapy from myscrapy.items import musicItem class doubanAlbumSpider(scrapy.Spider): name = "albumspider" start_urls = ['https://music.douban.com/tag/%E6%91%87%E6%BB%9A?start=0&type=T'] headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36', } def start_requests(self): url = 'https://music.douban.com/tag/%E6%91%87%E6%BB%9A?start=0&type=T' yield scrapy.Request(url, headers=self.headers) def parse(self,response): item = musicItem() albums = response.xpath(r"//tr[@class='item']") for album in albums: item['alname'] = " ".join(album.xpath(r"./td/div/a/text()")[0].extract().split()) item['detailUrl'] = album.xpath(r"./td/a/@href")[0].extract() item['imageUrl'] = (r"/m/").join(album.xpath(r"./td/a/img/@src")[0].extract().split(r"/s/")) yield(item) class musicItem(scrapy.Item): alname = scrapy.Field() imageUrl = scrapy.Field() detailUrl = scrapy.Field() image = scrapy.Field() image_paths = scrapy.Field() But when I added a pipeline to download the pictures, the pictures are downloaded successfully, while the non-picture info went wrong. They are all overwritten by the last item, which is In the Court of the Crimson King. Has anyone else had similar problems? class DoubanImagePipeline(ImagesPipeline): default_headers = { 'accept': 'image/webp,image/*,*/*;q=0.8', 'accept-encoding': 'gzip, deflate, sdch, br', 'accept-language': 'zh-CN,zh;q=0.8,en;q=0.6', 'cookie': 'bid=yQdC/AzTaCw', 'referer': 'https://www.douban.com/', 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36', } def get_media_requests(self, item, info): yield scrapy.Request(url=item['imageUrl'])
[ "Item fields are mutable, and right now in your parse method you create 1 item at the beginning of your method body and use that same item when you yield each of the results. What you need to do is create a unique item on each iteration of your for loop.\nFor example:\n def parse(self,response):\n albums = response.xpath(r\"//tr[@class='item']\")\n for album in albums:\n item = musicItem()\n item['alname'] = \" \".join(album.xpath(r\"./td/div/a/text()\")[0].extract().split())\n item['detailUrl'] = album.xpath(r\"./td/a/@href\")[0].extract()\n item['imageUrl'] = (r\"/m/\").join(album.xpath(r\"./td/a/img/@src\")[0].extract().split(r\"/s/\"))\n yield(item)\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "scrapy", "web_crawler" ]
stackoverflow_0074565693_python_python_3.x_scrapy_web_crawler.txt
Q: The Labels on my window don't appear when the correct button is pressed I have a quiz that I made in python (Tkinter). For some reason, when I press a button, it doesn't show the label. I have no more info about this, because it does not even give me an error message. Here is the (bad) code: from random import * def submit(): ca = 0 ca = randint(1, 3) if ca == 1: if val1 == 1: score = Label(winroot, text="1 is Correct") score.pack() if ca == 2: if val2 == 1: score = Label(winroot, text="2 is Correct") score.pack() if ca == 3: if val3 == 1: score = Label(winroot, text="3 is Correct") score.pack() win = Tk() win.title("ziqp Quiz") winroot = Frame(win) winroot.pack() question = Label(winroot, width=60, font=(10), text="Q") question.pack() val1 = IntVar() val2 = IntVar() val3 = IntVar() option1 = Checkbutton(winroot, variable=val1, text="1", command=submit) option1.pack() option2 = Checkbutton(winroot, variable=val2, text="2", command=submit) option2.pack() option3 = Checkbutton(winroot, variable=val3, text="3", command=submit) option3.pack() nextb = Button(winroot, text="Submit", command=submit) nextb.pack() win.mainloop() A: Your code is incomplete. But the first thing that jumps out at me is that you have: val1 = IntVar() and in def submit(): you're checking if: if val1 == 1: But val1 is an IntVar Tkinter variable, which you would check with: val1.get() In your code, your conditionals always fail, because val1 will never be equal to 1, because it's an IntVar So, remember, with an IntVar (and all the other Tkinter variables), you assign with .set() and check with .get(), so... val2.set(1) if val2.get() == 1: print("Nice! They're equal!") Addendum: in your case, since you're using the IntVars as variables for the Checkbuttons, the system is handling the "setting" of the values for the IntVars. But you still need to read their values using the .get() method.
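Putting that together, a corrected submit() could look like the sketch below — a rewrite for illustration, where the dict mapping option numbers to IntVars is an assumption about how to structure it:

def submit():
    ca = randint(1, 3)
    values = {1: val1, 2: val2, 3: val3}  # hypothetical mapping: option number -> IntVar
    if values[ca].get() == 1:             # read the IntVar with .get()
        score = Label(winroot, text=f"{ca} is Correct")
        score.pack()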
The Labels on my window don't appear when the correct button is pressed
I have a quiz that I made in python (Tkinter). For some reason, when I press a button, it doesn't show the label. I have no more info about this, because it does not even give me an error message. Here is the (bad) code: from random import * def submit(): ca = 0 ca = randint(1, 3) if ca == 1: if val1 == 1: score = Label(winroot, text="1 is Correct") score.pack() if ca == 2: if val2 == 1: score = Label(winroot, text="2 is Correct") score.pack() if ca == 3: if val3 == 1: score = Label(winroot, text="3 is Correct") score.pack() win = Tk() win.title("ziqp Quiz") winroot = Frame(win) winroot.pack() question = Label(winroot, width=60, font=(10), text="Q") question.pack() val1 = IntVar() val2 = IntVar() val3 = IntVar() option1 = Checkbutton(winroot, variable=val1, text="1", command=submit) option1.pack() option2 = Checkbutton(winroot, variable=val2, text="2", command=submit) option2.pack() option3 = Checkbutton(winroot, variable=val3, text="3", command=submit) option3.pack() nextb = Button(winroot, text="Submit", command=submit) nextb.pack() win.mainloop()
[ "Your code is incomplete.\nBut the first thing that jumps out at me is that you have:\nval1 = IntVar()\n\nand in\ndef submit():\n\nyou're checking if:\nif val1 == 1:\n\nBut val1 is an IntVar Tkinter variable, which you would check with:\nval1.get()\n\nIn your code, your conditionals always fail, because val1 will never be equal to 1, because it's an IntVar\nSo, remember, with an IntVar (and all the other Tkinter variables), you assign with .set() and check with .get(), so...\nval2.set(1)\nif val2.get() == 1:\n print(\"Nice! They're equal!\")\n\nAddendum: in your case, since you're using the IntVars as variables for the Checkbuttons, the system is handling the \"setting\" of the values for the IntVars. But you still need to read their values using the .get() method.\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074566050_python_tkinter.txt
Q: How to solve inverse transform using MinMaxScaler on a single value I'm trying to perform the inverse of MinMaxScaler from a single value. However, I get this error: ValueError: Expected 2D array, got scalar array instead: array=0.16019679677629. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. The code is this one: from sklearn.preprocessing import StandardScaler, MinMaxScaler minmaxscaler_targets = MinMaxScaler() minmaxscaler_targets.fit(pred) print(minmaxscaler_targets.inverse_transform(np.array([[pred]]))) The value to invert is pred = 0.16019679677629 Note that the original values were already scaled (using the same function). Updated: I tried to reshape as mentioned with: print(minmaxscaler_targets.inverse_transform(np.array([pred]).reshape(1, -1))) But I got the same error. A: You need to reshape it correctly: pred = np.array([0.16]) minmaxscaler_targets = MinMaxScaler() minmaxscaler_targets.fit(pred.reshape(-1,1)) minmaxscaler_targets.inverse_transform(pred.reshape(-1,1)) # array([[0.32]])
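As a further illustration with made-up numbers: in a typical workflow the scaler is fitted on the original target values, and the already-scaled prediction is then passed to inverse_transform as a 2D array:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

y_train = np.array([10.0, 20.0, 50.0]).reshape(-1, 1)  # hypothetical original targets
scaler = MinMaxScaler().fit(y_train)                   # fit on the data that was scaled

pred = np.array([[0.16019679677629]])                  # scalar wrapped as a 2D array
print(scaler.inverse_transform(pred))                  # value back on the original scale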
How to solve inverse transform using MinMaxScaler on a single value
I'm trying to perform the inverse of MinMaxScaler from a single value. However, I get this error: ValueError: Expected 2D array, got scalar array instead: array=0.16019679677629. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. The code is this one: from sklearn.preprocessing import StandardScaler, MinMaxScaler minmaxscaler_targets = MinMaxScaler() minmaxscaler_targets.fit(pred) print(minmaxscaler_targets.inverse_transform(np.array([[pred]]))) The value to invert is pred = 0.16019679677629 Note that the original values were already scaled (using the same function). Updated: I tried to reshape as mentioned with: print(minmaxscaler_targets.inverse_transform(np.array([pred]).reshape(1, -1))) But I got the same error.
[ "You need to reshape it correctly:\npred = np.array([0.16])\n\nminmaxscaler_targets = MinMaxScaler()\n\nminmaxscaler_targets.fit(pred.reshape(-1,1))\nminmaxscaler_targets.inverse_transform(pred.reshape(-1,1))\n# array([[0.32]])\n\n" ]
[ 0 ]
[]
[]
[ "inverse", "minmax", "python", "scikit_learn" ]
stackoverflow_0074563197_inverse_minmax_python_scikit_learn.txt
Q: How to create nested type of data in Python? I want to make sure, that one of the arguments, passed when class creation is of certain type. Here is an example: from __future__ import annotations from dataclasses import dataclass @dataclass(frozen=True, order=True) class ListItems: items: list | str | int | ListItems class PList: def __init__(self, name: str, items: ListItems): self.type = "list" self.name = name self.items = items a = PList('asd', ['asd']) The idea was next: items can only be list of string, int data type or other list of string and int, and it's nested. For example: [] OK [1,2,'asd'] OK [[1,2,3],'asd',[]] OK [{}] NOT OK ['test', [{}]] NOT OK Is it possible to implement something like this in Python? I am not really familiar with Python OOP, but from what I have found, there is no native implementation of interfaces and/or abstract class like in other programming languages. PS: The code you see, was just my attempt of implementation, it did not work. A: def __init__(self, name: str, items: ListItems): the items: ListItems bit is saying that items should be a ListItems object, it's not passing through the logic of what ListItems is doing, it's literally just comparing what type it is. i don't have much experience with typing, but i think you're looking for items: list[str|int] note that for lists, there is the normal list type hint, and then there's also one in the typing library. not sure if there's a difference, i just know that the normal list type hint is lowercased (list and not List like in the typing library), and that it is relatively new (3.11 i think) A: Short answer to your question Python is a dynamically typed language. It doesn’t know about the type of the variable until the code is run. So declaration is of no use. What it does is, It stores that value at some memory location and then binds that variable name to that memory container. And makes the contents of the container accessible through that variable name. So the data type does not matter. As it will get to know the type of the value at run-time. Names are bound to objects at execution time by means of assignment statements, and it is possible to bind a name to objects of different types during the execution of the program. Functions and objects can be altered at runtime. In a dynamically typed language, a variable is simply a value bound to a name; the value has a type -- like "integer" or "string" or "list" -- but the variable itself doesn't. You could have a variable which, right now, holds a number, and later assign a string to it if you need it to change. In a statically typed language, the variable itself has a type; if you have a variable that's an integer, you won't be able to assign any other type of value to it later. From the following point you will find that predefining the datatype explicitly in python code won't enforce it to only accept this type: (since type errors are a small fraction of all the things that might go wrong in a program); as a result, programmers in dynamic languages rely on their test suites to catch these and all other errors, rather than using a dedicated type-checking compiler. Check this reference for more info.
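If a runtime check is all that is needed, one possible sketch is a small recursive validator matching the OK/NOT OK examples in the question (my own addition); PList.__init__ could then raise a TypeError when it returns False:

def validate_items(items):
    # A valid value is a list whose elements are str, int, or nested valid lists
    if not isinstance(items, list):
        return False
    return all(isinstance(x, (str, int)) or validate_items(x) for x in items)

print(validate_items([[1, 2, 3], 'asd', []]))  # True
print(validate_items(['test', [{}]]))          # False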
How to create nested type of data in Python?
I want to make sure that one of the arguments passed at class creation is of a certain type. Here is an example: from __future__ import annotations from dataclasses import dataclass @dataclass(frozen=True, order=True) class ListItems: items: list | str | int | ListItems class PList: def __init__(self, name: str, items: ListItems): self.type = "list" self.name = name self.items = items a = PList('asd', ['asd']) The idea is this: items can only be a list of strings, ints, or other nested lists of strings and ints. For example: [] OK [1,2,'asd'] OK [[1,2,3],'asd',[]] OK [{}] NOT OK ['test', [{}]] NOT OK Is it possible to implement something like this in Python? I am not really familiar with Python OOP, but from what I have found, there is no native implementation of interfaces and/or abstract classes like in other programming languages. PS: The code you see was just my attempt at an implementation; it did not work.
[ " def __init__(self, name: str, items: ListItems):\nthe items: ListItems bit is saying that items should be a ListItems object, it's not passing through the logic of what ListItems is doing, it's literally just comparing what type it is.\ni don't have much experience with typing, but i think you're looking for items: list[str|int] note that for lists, there is the normal list type hint, and then there's also one in the typing library. not sure if there's a difference, i just know that the normal list type hint is lowercased (list and not List like in the typing library), and that it is relatively new (3.11 i think)\n", "Short answer to your question Python is a dynamically typed language. It doesn’t know about the type of the variable until the code is run. So declaration is of no use. What it does is, It stores that value at some memory location and then binds that variable name to that memory container. And makes the contents of the container accessible through that variable name. So the data type does not matter. As it will get to know the type of the value at run-time.\nNames are bound to objects at execution time by means of assignment statements, and it is possible to bind a name to objects of different types during the execution of the program. Functions and objects can be altered at runtime.\n\nIn a dynamically typed language, a variable is simply a value bound to\na name; the value has a type -- like \"integer\" or \"string\" or \"list\"\n-- but the variable itself doesn't. You could have a variable which, right now, holds a number, and later assign a string to it if you need\nit to change. In a statically typed language, the variable itself has\na type; if you have a variable that's an integer, you won't be able to\nassign any other type of value to it later.\n\nFrom the following point you will find that predefining the datatype explicitly in python code won't enforce it to only accept this type:\n\n(since type errors are a small fraction of all the things that might go wrong in a program); as a result, programmers in dynamic languages rely on their test\nsuites to catch these and all other errors, rather than using a\ndedicated type-checking compiler.\n\nCheck this reference for more info.\n" ]
[ 0, 0 ]
[ "Specifying input type isn't a thing in Python the way it is in TypeScript. I'm not sure you even need the class listItems. Just use a simple if statement in your init method.\nclass PList:\n def __init__(self, name, items):\n self.type = 'list'\n self.name = name\n if type(items) is list or type(items) is str or type(items) is int:\n self.items = items\n\n" ]
[ -1 ]
[ "oop", "python" ]
stackoverflow_0074565861_oop_python.txt
Q: How to go back to a point of code within a While True Loop in python? I need my input 3 to be validated, so it needs to return to input 3 if the "else block" is activated. Also, the "if block" must go back to input 1; that's why I've put continue. while True: input 1 input 2 process input 3 if result == "c": continue elif result == "e": break else: # the code needs to return to input 3 until the user enters "c" or "e" How can I implement this using Python? A: You can use flag variable to cope with nested loops. Such as: flag = True while flag == True: input('1') input('2') while True: result = input('3') if result == 'c': break elif result == 'e': flag= False break else: continue
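Another common sketch (added for illustration): wrapping the loops in a function lets return replace the flag entirely:

def run():
    while True:
        input('1')
        input('2')
        while True:
            result = input('3')
            if result == 'c':
                break    # back to input 1
            if result == 'e':
                return   # leave both loops at once

run()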
How to go back to a point of code within a While True Loop in python?
I need my input 3 to be validated, so it needs to return to input 3 if the "else block" is activated. Also, the "if block" must go back to input 1; that's why I've put continue. while True: input 1 input 2 process input 3 if result == "c": continue elif result == "e": break else: # the code needs to return to input 3 until the user enters "c" or "e" How can I implement this using Python?
[ "You can use flag variable to cope with nested loops. Such as:\nflag = True\nwhile flag == True:\n input('1')\n input('2')\n while True:\n result = input('3')\n if result == 'c':\n break\n elif result == 'e':\n flag= False\n break\n else:\n continue\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074565269_python.txt
Q: discord.py - edit the interaction message after a timeout in discord.ui.Select How can I access the interaction message and edit it? discord.ui.Select class SearchMenu(discord.ui.Select): def __init__(self, ctx, bot, data): self.ctx = ctx self.bot = bot self.data = data self.player = Player values = [] for index, track in enumerate(self.data[:9]): values.append( discord.SelectOption( label=track.title, value=index + 1, description=track.author, emoji=f"{index + 1}\U0000fe0f\U000020e3" ) ) values.append(discord.SelectOption(label='Cancel', description='Exit the search menu.', emoji="")) super().__init__(placeholder='Click on the Dropdown.', min_values=1, max_values=1, options=values) async def callback(self, interaction: discord.Interaction): if self.values[0] == "Cancel": embed = Embed(emoji=self.ctx.emoji.whitecheck, description="This interaction has been deleted.") return await interaction.message.edit(embed=embed, view=None) discord.ui.View class SearchMenuView(discord.ui.View): def __init__(self, options, ctx, bot): super().__init__(timeout=60.0) self.ctx = ctx self.add_item(SearchMenu(ctx, bot, options)) async def interaction_check(self, interaction: discord.Interaction): if interaction.user != self.ctx.author: embed = Embed(description=f"Sorry, but this interaction can only be used by {self.ctx.author.name}.") await interaction.response.send_message(embed=embed, ephemeral=True) return False else: return True async def on_timeout(self): embed = Embed(emoji=self.ctx.emoji.whitecross, description="Interaction has timed out. Please try again.") await self.message.edit(embed=embed, view=None) If I try to edit the interaction like this I am getting -> AttributeError: 'SearchMenuView' object has no attribute 'message' After 60 seconds the original message should be replaced with the embed in the timeout. A: You're trying to ask the View to send a message, which is not a method in discord.ui.View. You could defer the response and don't let it timeout and allow the user to try again? async def interaction_check(self, interaction: discord.Interaction): if interaction.user != self.ctx.author: embed = Embed(description=f"Sorry, but this interaction can only be used by {self.ctx.author.name}.") await interaction.channel.send(embed=embed, delete_after=60) await interaction.response.defer() return True A: view = MyView() view.message = await channel.send('...', view=view) After that you can use self.message in on_timeout (or somewhere else you don't have access to interaction.message) to edit it. Source: https://discord.com/channels/336642139381301249/669155775700271126/860883838657495040
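Concretely, for the second answer the missing piece is storing the sent message on the view when the command runs. A hedged sketch — the variable names come from the question, while the ctx.send(...) call and the embed variable are assumptions about how the view is dispatched:

view = SearchMenuView(options, ctx, bot)
# Store the message on the view so on_timeout can edit it via self.message
view.message = await ctx.send(embed=embed, view=view)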
discord.py - edit the interaction message after a timeout in discord.ui.Select
How can I access the interaction message and edit it? discord.ui.Select class SearchMenu(discord.ui.Select): def __init__(self, ctx, bot, data): self.ctx = ctx self.bot = bot self.data = data self.player = Player values = [] for index, track in enumerate(self.data[:9]): values.append( discord.SelectOption( label=track.title, value=index + 1, description=track.author, emoji=f"{index + 1}\U0000fe0f\U000020e3" ) ) values.append(discord.SelectOption(label='Cancel', description='Exit the search menu.', emoji="")) super().__init__(placeholder='Click on the Dropdown.', min_values=1, max_values=1, options=values) async def callback(self, interaction: discord.Interaction): if self.values[0] == "Cancel": embed = Embed(emoji=self.ctx.emoji.whitecheck, description="This interaction has been deleted.") return await interaction.message.edit(embed=embed, view=None) discord.ui.View class SearchMenuView(discord.ui.View): def __init__(self, options, ctx, bot): super().__init__(timeout=60.0) self.ctx = ctx self.add_item(SearchMenu(ctx, bot, options)) async def interaction_check(self, interaction: discord.Interaction): if interaction.user != self.ctx.author: embed = Embed(description=f"Sorry, but this interaction can only be used by {self.ctx.author.name}.") await interaction.response.send_message(embed=embed, ephemeral=True) return False else: return True async def on_timeout(self): embed = Embed(emoji=self.ctx.emoji.whitecross, description="Interaction has timed out. Please try again.") await self.message.edit(embed=embed, view=None) If I try to edit the interaction like this I am getting -> AttributeError: 'SearchMenuView' object has no attribute 'message' After 60 seconds the original message should be replaced with the embed in the timeout.
[ "You're trying to ask the View to send a message, which is not a method in discord.ui.View.\nYou could defer the response and don't let it timeout and allow the user to try again?\nasync def interaction_check(self, interaction: discord.Interaction):\n if interaction.user != self.ctx.author:\n embed = Embed(description=f\"Sorry, but this interaction can only be used by {self.ctx.author.name}.\")\n await interaction.channel.send(embed=embed, delete_after=60)\n await interaction.response.defer()\n return True\n\n", "view = MyView()\nview.message = await channel.send('...', view=view)\n\nAfter that you can use self.message in on_timeout (or somewhere else you don't have access to interaction.message) to edit it.\nSource: https://discord.com/channels/336642139381301249/669155775700271126/860883838657495040\n" ]
[ 0, -1 ]
[]
[]
[ "discord", "discord.py", "python", "python_3.x" ]
stackoverflow_0069265909_discord_discord.py_python_python_3.x.txt
Q: What is Pytorch equivalent of Pandas groupby.apply(list)? I have the following pytorch tensor long_format: tensor([[ 1., 1.], [ 1., 2.], [ 1., 3.], [ 1., 4.], [ 0., 5.], [ 0., 6.], [ 0., 7.], [ 1., 8.], [ 0., 9.], [ 0., 10.]]) I would like to group by the first column and store the 2nd column as a tensor. The result is NOT guaranteed to be the same size for each grouping. See example below. [tensor([ 1., 2., 3., 4., 8.]), tensor([ 5., 6., 7., 9., 10.])] Is there any nice way to do this using purely PyTorch operators? I would like to avoid using for loops for traceability purposes. I have tried using a for loop and a list of empty tensors, but this results in an incorrect trace (different input values gave the same results) n_groups = 2 inverted = [torch.empty([0]) for _ in range(n_groups)] for index, value in long_format: value = value.unsqueeze(dim=0) index = index.int() if type(inverted[index]) != torch.Tensor: inverted[index] = value else: inverted[index] = torch.cat((inverted[index], value)) A: You can use this code: import torch x = torch.tensor([[ 1., 1.], [ 1., 2.], [ 1., 3.], [ 1., 4.], [ 0., 5.], [ 0., 6.], [ 0., 7.], [ 1., 8.], [ 0., 9.], [ 0., 10.]]) result = [x[x[:,0]==i][:,1] for i in x[:,0].unique()] output [tensor([ 5., 6., 7., 9., 10.]), tensor([1., 2., 3., 4., 8.])]
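A loop-free variant sketch (an addition, not from the answer): sort by key, then split by group sizes — here x is the tensor from the answer's snippet. Note that .tolist() still moves the split sizes to the Python side, which may matter for tracing:

import torch

keys = x[:, 0].long()
sorted_keys, order = torch.sort(keys, stable=True)  # group rows together by key
vals = x[order, 1]
counts = torch.bincount(sorted_keys)                # size of each group
groups = torch.split(vals, counts.tolist())         # tuple of per-group tensors
print(groups)  # (tensor([ 5., 6., 7., 9., 10.]), tensor([1., 2., 3., 4., 8.]))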
What is Pytorch equivalent of Pandas groupby.apply(list)?
I have the following pytorch tensor long_format: tensor([[ 1., 1.], [ 1., 2.], [ 1., 3.], [ 1., 4.], [ 0., 5.], [ 0., 6.], [ 0., 7.], [ 1., 8.], [ 0., 9.], [ 0., 10.]]) I would like to group by the first column and store the 2nd column as a tensor. The result is NOT guaranteed to be the same size for each grouping. See example below. [tensor([ 1., 2., 3., 4., 8.]), tensor([ 5., 6., 7., 9., 10.])] Is there any nice way to do this using purely PyTorch operators? I would like to avoid using for loops for traceability purposes. I have tried using a for loop and a list of empty tensors, but this results in an incorrect trace (different input values gave the same results) n_groups = 2 inverted = [torch.empty([0]) for _ in range(n_groups)] for index, value in long_format: value = value.unsqueeze(dim=0) index = index.int() if type(inverted[index]) != torch.Tensor: inverted[index] = value else: inverted[index] = torch.cat((inverted[index], value))
[ "You can use this code:\nimport torch\nx = torch.tensor([[ 1., 1.],\n [ 1., 2.],\n [ 1., 3.],\n [ 1., 4.],\n [ 0., 5.],\n [ 0., 6.],\n [ 0., 7.],\n [ 1., 8.],\n [ 0., 9.],\n [ 0., 10.]])\n\nresult = [x[x[:,0]==i][:,1] for i in x[:,0].unique()]\n\noutput\n[tensor([ 5., 6., 7., 9., 10.]), tensor([1., 2., 3., 4., 8.])]\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python", "pytorch" ]
stackoverflow_0074564843_pandas_python_pytorch.txt
Q: Pulp Python problem setting constraints when summing values in a column Hi this is my first question here so go easy on me if I format things incorrectly. I'm trying to model a table where each value is either 1 or 0. I'd like to determine whether the sum of a column is 0 or not 0, then check how many columns are > 0. The underlying problem I'm trying to solve is appointment scheduling, where each column represents one appointment. I've simplified it here as in the original I'm using a dataframe to match clinician competencies to patient needs (each row is a patient need). My problem started when I tried to ensure all variables could only be equal to 1 if they were in one of 2 columns, hence my simplified code here to try to work out where I am going wrong. I've set up a pulp variable dictionary with ROWS and COLS as the keys, and value == 0 or 1. In the problem definition I'm trying to assign a value of 1 to the column sum if the sum of the row values in the column is >= 1 and 0 otherwise, then summing the total. This should allow me to set the total number of columns that sum to >= 1, for example only 2 columns are represented by non-zero variables. In the code below my aim is for the total sum of all variables to be minimised BUT there should be 2 columns that contain a variable 1 i.e. 2 columns sum to >=1. Thanks in advance. import pulp as Pulp ROWS = range(1, 6) COLS = range(1,5) prob = Pulp.LpProblem("Fewestcolumns", Pulp.LpMinimize) choices = Pulp.LpVariable.dicts("Choice", (ROWS, COLS), cat="Integer", lowBound=0, upBound=1) prob += Pulp.lpSum([choices[row][col] for row in ROWS for col in COLS]) prob += Pulp.lpSum([1 if Pulp.lpSum([choices[row][col] for row in ROWS]) >= 1 else 0 for col in COLS]) == 2 prob.solve() print("Status:", Pulp.LpStatus[prob.status]) for v in prob.variables(): print(v.name, "=", v.varValue) My results: C:\Users\xxxComputing\LinearProgramming\Scripts\python.exe C:/Users/xxx/Computing/LinearProgramming/LinearProgTest.py Welcome to the CBC MILP Solver Version: 2.10.3 Build Date: Dec 15 2019 command line - C:\Users\xxxx\Computing\LinearProgramming\lib\site-packages\pulp\solverdir\cbc\win\64\cbc.exe C:\Users\simon\AppData\Local\Temp\4f8ff67726844bde8abe98316b6338c4-pulp.mps timeMode elapsed branch printingOptions all solution C:\Users\simon\AppData\Local\Temp\4f8ff67726844bde8abe98316b6338c4-pulp.sol (default strategy 1) At line 2 NAME MODEL At line 3 ROWS At line 6 COLUMNS At line 67 RHS At line 69 BOUNDS At line 90 ENDATA Problem MODEL has 1 rows, 20 columns and 0 elements Coin0008I MODEL read with 0 errors Option for timeMode changed from cpu to elapsed Problem is infeasible - 0.00 seconds Option for printingOptions changed from normal to all Total time (CPU seconds): 0.01 (Wallclock seconds): 0.01 Status: Infeasible Choice_1_1 = 0.0 Choice_1_2 = 0.0 Choice_1_3 = 0.0 Choice_1_4 = 0.0 Choice_2_1 = 0.0 Choice_2_2 = 0.0 Choice_2_3 = 0.0 Choice_2_4 = 0.0 Choice_3_1 = 0.0 Choice_3_2 = 0.0 Choice_3_3 = 0.0 Choice_3_4 = 0.0 Choice_4_1 = 0.0 Choice_4_2 = 0.0 Choice_4_3 = 0.0 Choice_4_4 = 0.0 Choice_5_1 = 0.0 Choice_5_2 = 0.0 Choice_5_3 = 0.0 Choice_5_4 = 0.0 Process finished with exit code 0 I was expecting a list of variables a bit like this, with a possible solution: Status: Optimal Choice_1_1 = 1.0 Choice_1_2 = 1.0 Choice_1_3 = 0.0 Choice_1_4 = 0.0 Choice_2_1 = 0.0 Choice_2_2 = 0.0 Choice_2_3 = 0.0 Choice_2_4 = 0.0 Choice_3_1 = 0.0 Choice_3_2 = 0.0 Choice_3_3 = 0.0 Choice_3_4 = 0.0 Choice_4_1 = 0.0 Choice_4_2 = 0.0 Choice_4_3 = 0.0 Choice_4_4 = 0.0
Choice_5_1 = 0.0 Choice_5_2 = 0.0 Choice_5_3 = 0.0 Choice_5_4 = 0.0 Edits: Many thanks AirSquid for pointing me in the right direction. I'm still struggling with big M constraints. I tried this: import pulp as Pulp ROWS = range(1, 6) COLS = range(1,5) prob = Pulp.LpProblem("Fewestcolumns", Pulp.LpMaximize) choices = Pulp.LpVariable.dicts("Choice", (ROWS, COLS), cat="Integer", lowBound=0, upBound=1) used = Pulp.LpVariable.dicts("used", COLS, cat="Binary") b = Pulp.LpVariable.dicts("b", COLS, cat="Binary") prob += Pulp.lpSum([choices[row][col] for row in ROWS for col in COLS]) for rows, items in choices.items(): prob += Pulp.lpSum(cols for cols in items.values()) == 1 M = 20 for col in COLS: prob += b[col] >= (Pulp.lpSum([choices[row][col] for row in ROWS]) - 1) / M prob += used[col] >= M * (b[col] - 1) prob += Pulp.lpSum([used[col] for col in COLS]) == 2 prob.solve() print("Status:", Pulp.LpStatus[prob.status]) for v in prob.variables(): print(v.name, "=", v.varValue) I got the following results: Result - Optimal solution found Objective value: 5.00000000 Enumerated nodes: 0 Total iterations: 0 Time (CPU seconds): 0.00 Time (Wallclock seconds): 0.00 Option for printingOptions changed from normal to all Total time (CPU seconds): 0.01 (Wallclock seconds): 0.02 Status: Optimal Choice_1_1 = 0.0 Choice_1_2 = 0.0 Choice_1_3 = 0.0 Choice_1_4 = 1.0 Choice_2_1 = 0.0 Choice_2_2 = 0.0 Choice_2_3 = 0.0 Choice_2_4 = 1.0 Choice_3_1 = 0.0 Choice_3_2 = 0.0 Choice_3_3 = 0.0 Choice_3_4 = 1.0 Choice_4_1 = 0.0 Choice_4_2 = 0.0 Choice_4_3 = 0.0 Choice_4_4 = 1.0 Choice_5_1 = 0.0 Choice_5_2 = 0.0 Choice_5_3 = 0.0 Choice_5_4 = 1.0 b_1 = 1.0 b_2 = 1.0 b_3 = 1.0 b_4 = 1.0 used_1 = 1.0 used_2 = 1.0 used_3 = 0.0 used_4 = 0.0 Process finished with exit code 0 Not sure what I did wrong - I was hoping for some 1.0s in columns that aren't column 4. Any more hints please? A: Your question is clear, but the setup on your LP isn’t real clear. We can come back to that. You are getting the error because you used an if statement in your summation. That isn’t legal. When pulp makes the math model to solve, the value of the variables are not known, so we cannot use if statements in the formulation. It sounds like you want to use a “big M” constraint here to see if anything was selected within the column. (Google it or look on this site, it is a fundamental LP concept and I have posted several answers with it). You will need to introduce another binary variable indexed by column and then minimize that… In pseudocode: used[col] a binary variable, indexed by Col M = some suitably large variable (a max). In your case the number of rows would be appropriate. Then: sum(choices[row, col] for row in rows) <= used[col] * M If desired, you could then minimize the variable used to minimize columns used. A: I had trouble with > causing errors; but >= didn't force the used_ variable to be either 1 or 0. 
I ended up adding a very small number to formula to ensure if my decision variable b_ == 1 then my used_variable would definitely be 1: for col in COLS: prob += b[col] >= 0.001 + ((Pulp.lpSum([choices[row][col] for row in ROWS]) - 1) / M) prob += used[col] >= (M * (b[col] - 1)) + 0.001 prob += Pulp.lpSum([used[col] for col in COLS]) == 2 This gave the following result: Status: Optimal Choice_1_1 = 0.0 Choice_1_2 = 1.0 Choice_1_3 = 0.0 Choice_1_4 = 0.0 Choice_2_1 = 1.0 Choice_2_2 = 0.0 Choice_2_3 = 0.0 Choice_2_4 = 0.0 Choice_3_1 = 0.0 Choice_3_2 = 1.0 Choice_3_3 = 0.0 Choice_3_4 = 0.0 Choice_4_1 = 0.0 Choice_4_2 = 1.0 Choice_4_3 = 0.0 Choice_4_4 = 0.0 Choice_5_1 = 0.0 Choice_5_2 = 1.0 Choice_5_3 = 0.0 Choice_5_4 = 0.0 b_1 = 1.0 b_2 = 1.0 b_3 = 0.0 b_4 = 0.0 used_1 = 1.0 used_2 = 1.0 used_3 = 0.0 used_4 = 0.0 This seems to work with different numbers of columns to give acceptable answers. I'm not sure how hacky this solution is so if there is a more elegant way please feel free to answer!
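To tie the two answers together, here is one hedged sketch of the standard two-sided linearisation in PuLP - a variation on the model from the edit, not the exact original code. Adding used[col] <= col_sum makes the epsilon workaround from the second answer unnecessary, because an empty column then forces its indicator to 0 directly:

import pulp

ROWS = range(1, 6)
COLS = range(1, 5)
M = len(ROWS)  # largest possible column sum, so a valid big-M here

prob = pulp.LpProblem("Fewestcolumns", pulp.LpMinimize)
choices = pulp.LpVariable.dicts("Choice", (ROWS, COLS), cat="Binary")
used = pulp.LpVariable.dicts("used", COLS, cat="Binary")

# objective: minimise the total number of selections
prob += pulp.lpSum(choices[row][col] for row in ROWS for col in COLS)

# each row must pick exactly one column (the constraint from the edit)
for row in ROWS:
    prob += pulp.lpSum(choices[row][col] for col in COLS) == 1

for col in COLS:
    col_sum = pulp.lpSum(choices[row][col] for row in ROWS)
    prob += col_sum <= M * used[col]   # anything in the column forces used = 1
    prob += used[col] <= col_sum       # an empty column forces used = 0

# exactly two columns may carry non-zero variables
prob += pulp.lpSum(used[col] for col in COLS) == 2

prob.solve()
print("Status:", pulp.LpStatus[prob.status])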
Pulp Python problem setting constraints when summing values in a column
Hi this is my first question here so go easy on me if I format things incorrectly. I'm trying to model a table where each value is either 1 or 0. I'd like to determine whether the sum of a column is 0 or not 0, then check how many columns are > 0. The underlying problem I'm trying to solve is appointment scheduling, where each column represent one appointment. I've simplified it here as in the original I'm using a dataframe to match clinician competencies to patient needs (each row is a patient need). My problem started when I tried to ensure all variables could only be equal to 1 if in one if they were in one of 2 columns, hence my simplified code here to try to work out where I am going wrong. I've set up a pulp variable dictionary with ROWS and COLS as the keys, and value == 0 or 1. In the problem definition I'm trying to assign a value of 1 to the column sum if sum of the row values in the column is >= 1 and 0 otherwise, then summing the total. This should allow me to set the total number of columns that sum to >= 1, for example only 2 columns are represented by non zero variables. In the code below my aim is for the total sum of all variables to be minimised BUT there should be 2 columns that contain a variable 1 i.e. 2 columns sum to >=1. Thanks in advance. import pulp as Pulp ROWS = range(1, 6) COLS = range(1,5) prob = Pulp.LpProblem("Fewestcolumns", Pulp.LpMinimize) choices = Pulp.LpVariable.dicts("Choice", (ROWS, COLS), cat="Integer", lowBound=0, upBound=1) prob += Pulp.lpSum([choices[row][col] for row in ROWS for col in COLS]) prob += Pulp.lpSum([1 if Pulp.lpSum([choices[row][col] for row in ROWS]) >= 1 else 0 for col in COLS]) == 2 prob.solve() print("Status:", Pulp.LpStatus[prob.status]) for v in prob.variables(): print(v.name, "=", v.varValue)` My results: C:\Users\xxxComputing\LinearProgramming\Scripts\python.exe C:/Users/xxx/Computing/LinearProgramming/LinearProgTest.py Welcome to the CBC MILP Solver Version: 2.10.3 Build Date: Dec 15 2019 command line - C:\Users\xxxx\Computing\LinearProgramming\lib\site-packages\pulp\solverdir\cbc\win\64\cbc.exe C:\Users\simon\AppData\Local\Temp\4f8ff67726844bde8abe98316b6338c4-pulp.mps timeMode elapsed branch printingOptions all solution C:\Users\simon\AppData\Local\Temp\4f8ff67726844bde8abe98316b6338c4-pulp.sol (default strategy 1) At line 2 NAME MODEL At line 3 ROWS At line 6 COLUMNS At line 67 RHS At line 69 BOUNDS At line 90 ENDATA Problem MODEL has 1 rows, 20 columns and 0 elements Coin0008I MODEL read with 0 errors Option for timeMode changed from cpu to elapsed Problem is infeasible - 0.00 seconds Option for printingOptions changed from normal to all Total time (CPU seconds): 0.01 (Wallclock seconds): 0.01 Status: Infeasible Choice_1_1 = 0.0 Choice_1_2 = 0.0 Choice_1_3 = 0.0 Choice_1_4 = 0.0 Choice_2_1 = 0.0 Choice_2_2 = 0.0 Choice_2_3 = 0.0 Choice_2_4 = 0.0 Choice_3_1 = 0.0 Choice_3_2 = 0.0 Choice_3_3 = 0.0 Choice_3_4 = 0.0 Choice_4_1 = 0.0 Choice_4_2 = 0.0 Choice_4_3 = 0.0 Choice_4_4 = 0.0 Choice_5_1 = 0.0 Choice_5_2 = 0.0 Choice_5_3 = 0.0 Choice_5_4 = 0.0 Process finished with exit code 0 I was expecting a list of variables a bit like this, with a possible solution: Status: Optimal Choice_1_1 = 1.0 Choice_1_2 = 1.0 Choice_1_3 = 0.0 Choice_1_4 = 0.0 Choice_2_1 = 0.0 Choice_2_2 = 0.0 Choice_2_3 = 0.0 Choice_2_4 = 0.0 Choice_3_1 = 0.0 Choice_3_2 = 0.0 Choice_3_3 = 0.0 Choice_3_4 = 0.0 Choice_4_1 = 0.0 Choice_4_2 = 0.0 Choice_4_3 = 0.0 Choice_4_4 = 0.0 Choice_5_1 = 0.0 Choice_5_2 = 0.0 Choice_5_3 = 0.0 Choice_5_4 = 0.0 Edits: Many 
thanks AirSquid for pointing me in the right direction. I'm still struggling with big M constraints. I tried this: import pulp as Pulp ROWS = range(1, 6) COLS = range(1,5) prob = Pulp.LpProblem("Fewestcolumns", Pulp.LpMaximize) choices = Pulp.LpVariable.dicts("Choice", (ROWS, COLS), cat="Integer", lowBound=0, upBound=1) used = Pulp.LpVariable.dicts("used", COLS, cat="Binary") b = Pulp.LpVariable.dicts("b", COLS, cat="Binary") prob += Pulp.lpSum([choices[row][col] for row in ROWS for col in COLS]) for rows, items in choices.items(): prob += Pulp.lpSum(cols for cols in items.values()) == 1 M = 20 for col in COLS: prob += b[col] >= (Pulp.lpSum([choices[row][col] for row in ROWS]) - 1) / M prob += used[col] >= M * (b[col] - 1) prob += Pulp.lpSum([used[col] for col in COLS]) == 2 prob.solve() print("Status:", Pulp.LpStatus[prob.status]) for v in prob.variables(): print(v.name, "=", v.varValue) I got the following results: Result - Optimal solution found Objective value: 5.00000000 Enumerated nodes: 0 Total iterations: 0 Time (CPU seconds): 0.00 Time (Wallclock seconds): 0.00 Option for printingOptions changed from normal to all Total time (CPU seconds): 0.01 (Wallclock seconds): 0.02 Status: Optimal Choice_1_1 = 0.0 Choice_1_2 = 0.0 Choice_1_3 = 0.0 Choice_1_4 = 1.0 Choice_2_1 = 0.0 Choice_2_2 = 0.0 Choice_2_3 = 0.0 Choice_2_4 = 1.0 Choice_3_1 = 0.0 Choice_3_2 = 0.0 Choice_3_3 = 0.0 Choice_3_4 = 1.0 Choice_4_1 = 0.0 Choice_4_2 = 0.0 Choice_4_3 = 0.0 Choice_4_4 = 1.0 Choice_5_1 = 0.0 Choice_5_2 = 0.0 Choice_5_3 = 0.0 Choice_5_4 = 1.0 b_1 = 1.0 b_2 = 1.0 b_3 = 1.0 b_4 = 1.0 used_1 = 1.0 used_2 = 1.0 used_3 = 0.0 used_4 = 0.0 Process finished with exit code 0 Not sure what I did wrong - I was hoping for some 1.0s in columns that aren't column 4. Any more hints please?
[ "Your question is clear, but the setup on your LP isn’t real clear. We can come back to that.\nYou are getting the error because you used an if statement in your summation. That isn’t legal. When pulp makes the math model to solve, the value of the variables are not known, so we cannot use if statements in the formulation. It sounds like you want to use a “big M” constraint here to see if anything was selected within the column. (Google it or look on this site, it is a fundamental LP concept and I have posted several answers with it). You will need to introduce another binary variable indexed by column and then minimize that… In pseudocode:\nused[col] a binary variable, indexed by Col\nM = some suitably large variable (a max). In your case the number of rows would be appropriate.\n\nThen:\nsum(choices[row, col] for row in rows) <= used[col] * M \n\nIf desired, you could then minimize the variable used to minimize columns used.\n", "I had trouble with > causing errors; but >= didn't force the used_ variable to be either 1 or 0.\nI ended up adding a very small number to formula to ensure if my decision variable b_ == 1 then my used_variable would definitely be 1:\nfor col in COLS:\n prob += b[col] >= 0.001 + ((Pulp.lpSum([choices[row][col] for row in ROWS]) - 1) / M)\n prob += used[col] >= (M * (b[col] - 1)) + 0.001\n prob += Pulp.lpSum([used[col] for col in COLS]) == 2\n\nThis gave the following result:\nStatus: Optimal\nChoice_1_1 = 0.0\nChoice_1_2 = 1.0\nChoice_1_3 = 0.0\nChoice_1_4 = 0.0\nChoice_2_1 = 1.0\nChoice_2_2 = 0.0\nChoice_2_3 = 0.0\nChoice_2_4 = 0.0\nChoice_3_1 = 0.0\nChoice_3_2 = 1.0\nChoice_3_3 = 0.0\nChoice_3_4 = 0.0\nChoice_4_1 = 0.0\nChoice_4_2 = 1.0\nChoice_4_3 = 0.0\nChoice_4_4 = 0.0\nChoice_5_1 = 0.0\nChoice_5_2 = 1.0\nChoice_5_3 = 0.0\nChoice_5_4 = 0.0\nb_1 = 1.0\nb_2 = 1.0\nb_3 = 0.0\nb_4 = 0.0\nused_1 = 1.0\nused_2 = 1.0\nused_3 = 0.0\nused_4 = 0.0\n\nThis seems to work with different numbers of columns to give acceptable answers.\nI'm not sure how hacky this solution is so if there is a more elegant way please feel free to answer!\n" ]
[ 0, 0 ]
[]
[]
[ "constraints", "pulp", "python" ]
stackoverflow_0074559219_constraints_pulp_python.txt
Q: Is there a way to send a message to Telegram with Python without using a bot? Sorry if it's a dumb question. I just want to know for sure: yes or no. A: It is possible to create a self-bot for Telegram. For inspiration on how to do it, I would suggest digging into this GitHub repository.
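For what it's worth, sending a message from a regular user account (no bot) is usually done through an MTProto client library such as Telethon. A minimal hedged sketch, assuming you have created your own api_id/api_hash at https://my.telegram.org (the values below are placeholders, not real credentials):

from telethon.sync import TelegramClient

api_id = 123456                                 # placeholder - use your own
api_hash = "0123456789abcdef0123456789abcdef"   # placeholder - use your own

# the first run asks for your phone number and a login code interactively
with TelegramClient("my_session", api_id, api_hash) as client:
    client.send_message("me", "Hello from my own account - no bot involved")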
Is there a way to send a message to Telegram with Python without using a bot?
Sorry if it's a dumb question. I just want to know for sure: yes or no.
[ "It is possible to create a self-bot for telegram.\nTo get an inspiration on how to do it, I would suggest you dig into this GitHub repository.\n" ]
[ 0 ]
[]
[]
[ "python", "telegram" ]
stackoverflow_0074564528_python_telegram.txt
Q: HTML Calendar Appearing Under Footer I've created an HTML Calendar for my django app. However when I add it to one of my templates it adds it underneath my footer. I'm not understanding why this would happen. {% extends "bf_app/app_bases/app_base.html" %} {% block main %} {% include "bf_app/overviews/overview_nav.html" %} <div class="flex justify-between mx-10"> <a href="{% url 'calendar_overview_context' previous_year previous_month %}">< {{ previous_month_name }}</a> <a href="{% url 'calendar_overview_context' next_year next_month %}">{{ next_month_name }} ></a> </div> <div class="grid grid-cols-1 md:grid-cols-3 px-4"> <table> <thead> <tr> <th class="text-left">Transaction</th> <th class="text-left">Amount</th> <th class="text-left">Date</th> </tr> </thead> {% for transaction, tally in monthly_budget.items %} <tr> <td>{{ transaction }}</td> <td class="{% if tally|last == "IN" %}text-green-700{% else %}text-red-700{% endif %}"> {{ tally|first|floatformat:2 }} </td> <td>{{ transaction.next_date|date:"D, d M, Y" }}</td> </tr> {% endfor %} </table> </div> <div> {{ calendar }} </div> {% endblock %} I pretty much followed this tutorial: https://www.huiwenteo.com/normal/2018/07/24/django-calendar.html Is there something I'm missing? This to my understanding should be above the footer like everything else I've created. EDIT: It seems to be the "mark_safe" module causing the issue. I've tried using {{ calendar|safe }} and this also creates the same issue. with safe without safe A: I had the same problem, using huiwenteo calendar tutorial. This weird behavior is making Calendar class, more specific the formatmonth method. Because its returning calendar table, without closing html tag. So in your cal/utils.py file you should add cal += f'</table>\n' to formatmonth method, before returning calendar. Here is example, which worked for me. def formatmonth(self, ....): events = Event.objects.filter(....) cal = f'<table border="0" cellpadding="0" cellspacing="0" class="calendar">\n' cal += f'{self.formatmonthname(self.year, self.month, withyear=withyear)}\n' cal += f'{self.formatweekheader()}\n' for week in self.monthdays2calendar(self.year, self.month): cal += f'{self.formatweek(week, events)}\n' cal += f'</table>\n' return cal
HTML Calendar Appearing Under Footer
I've created an HTML Calendar for my django app. However when I add it to one of my templates it adds it underneath my footer. I'm not understanding why this would happen. {% extends "bf_app/app_bases/app_base.html" %} {% block main %} {% include "bf_app/overviews/overview_nav.html" %} <div class="flex justify-between mx-10"> <a href="{% url 'calendar_overview_context' previous_year previous_month %}">< {{ previous_month_name }}</a> <a href="{% url 'calendar_overview_context' next_year next_month %}">{{ next_month_name }} ></a> </div> <div class="grid grid-cols-1 md:grid-cols-3 px-4"> <table> <thead> <tr> <th class="text-left">Transaction</th> <th class="text-left">Amount</th> <th class="text-left">Date</th> </tr> </thead> {% for transaction, tally in monthly_budget.items %} <tr> <td>{{ transaction }}</td> <td class="{% if tally|last == "IN" %}text-green-700{% else %}text-red-700{% endif %}"> {{ tally|first|floatformat:2 }} </td> <td>{{ transaction.next_date|date:"D, d M, Y" }}</td> </tr> {% endfor %} </table> </div> <div> {{ calendar }} </div> {% endblock %} I pretty much followed this tutorial: https://www.huiwenteo.com/normal/2018/07/24/django-calendar.html Is there something I'm missing? This to my understanding should be above the footer like everything else I've created. EDIT: It seems to be the "mark_safe" module causing the issue. I've tried using {{ calendar|safe }} and this also creates the same issue. with safe without safe
[ "I had the same problem, using huiwenteo calendar tutorial.\nThis weird behavior is making Calendar class, more specific the formatmonth method. Because its returning calendar table, without closing html tag. So in your cal/utils.py file you should add cal += f'</table>\\n' to formatmonth method, before returning calendar.\nHere is example, which worked for me.\ndef formatmonth(self, ....):\n events = Event.objects.filter(....)\n\n cal = f'<table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" class=\"calendar\">\\n'\n cal += f'{self.formatmonthname(self.year, self.month, withyear=withyear)}\\n'\n cal += f'{self.formatweekheader()}\\n'\n for week in self.monthdays2calendar(self.year, self.month):\n cal += f'{self.formatweek(week, events)}\\n'\n cal += f'</table>\\n'\n return cal\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_templates", "html", "python" ]
stackoverflow_0073341019_django_django_templates_html_python.txt
Q: Unable to get the expected answer for string palindrome Every time I am getting the else condition as true. If I pass the input string "ama" then the code should report that the input string is a palindrome, but I am getting "string is not palindrom". Input: ami output: ami Expected: string is palindrom Input: amit output: tima Expected: string is not palindrom def str_rev (input_str): print("input_str:", input_str) rev_str = " " for i in (input_str): rev_str = i + rev_str print("inp_str:", input_str) print("rev_str:", rev_str) if (input_str == rev_str): print("string is palindrom") else: print("string is not palindrom") return rev_str str = input ("Enter the string:") print("org string:", str) final_str= str_rev (str) print("reverse string:", final_str) A: A palindrome is a word that is the same backwards and forwards. Therefore ami is not a palindrome. A: At a quick glance, your formatting is off, but I think your problem is with white space. Change: rev_str = " " to rev_str = "" to get rid of that extra white space. In fact, you can trim your strings before comparing, with the .strip() method to remove any leading or trailing white space. ' hi '.strip() --> 'hi' A: You've got a bug at line 3: rev_str = " " should be rev_str = "" # empty string. Otherwise you create a new string with an empty space at the start. Jarda
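Pulling the three answers together, a minimal corrected sketch of the check (the only real bug is the single space in rev_str = " "; slicing avoids building the reversed string by hand):

def is_palindrome(text: str) -> bool:
    # text[::-1] is the string reversed; no manual loop or seed string needed
    return text == text[::-1]

print(is_palindrome("ama"))   # True
print(is_palindrome("ami"))   # False - 'ami' reversed is 'ima'
print(is_palindrome("amit"))  # False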
Unable to get the expected answer for string palindrome
Every time I am getting the else condition as true. If I pass the input string "ama" then the code should report that the input string is a palindrome, but I am getting "string is not palindrom". Input: ami output: ami Expected: string is palindrom Input: amit output: tima Expected: string is not palindrom def str_rev (input_str): print("input_str:", input_str) rev_str = " " for i in (input_str): rev_str = i + rev_str print("inp_str:", input_str) print("rev_str:", rev_str) if (input_str == rev_str): print("string is palindrom") else: print("string is not palindrom") return rev_str str = input ("Enter the string:") print("org string:", str) final_str= str_rev (str) print("reverse string:", final_str)
[ "A palindrome is a word that is the same backwards and forwards. Therefore ami is not a palindrome.\n", "At a quick glance, your formatting is off, but I think your problem is with white space. Change:\nrev_str = \" \"\n\nto\nrev_str = \"\"\n\nto get rid of that extra white space.\nIn fact, you can trim your strings before comparing, with the .strip() command to remove any leading or trailing white space.\n' hi '.strip() --> 'hi'\n\n", "you got bug at line 3\nrev_str = \" \"\nshould be\nrev_str = \"\" #empty string\notherwise, you create a new string with empty space at the start\nJarda\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074566125_python.txt
Q: How to get the growth between two rows I'm trying to get the growth (in %) between two values at different period. Here is how my DataFrame looks like: sessionSource dateRange activeUsers 0 instagram.com current 5 1 instagram.com previous 0 2 l.instagram.com current 83 3 l.instagram.com previous 11 4 snapchat.com current 2 5 snapchat.com previous 1 What I'm trying to get is: sessionSource dateRange activeUsers Growth 0 instagram.com current 5 xx% 2 l.instagram.com current 83 xx% 4 snapchat.com current 2 xx% I'm not a Pandas expert, I tried few things but nothing came close to what I need. Thanks a lot for any help. A: Assuming you literally just need the percent change between current and previous and current/previous are in the correct order, you can just group the data based on the source and get the percent change of the group .Use the pandas.Series.pct_change() method on the grouped object and you should be good. # sort values before to make sure the order is maintained df = df.sort_values(by=["sessionSource", "dateRange"], ascending=False) df['Growth']= (df.groupby('sessionSource')['activeUsers'].apply(pd.Series.pct_change)) #drop na from the unavailable results and convert to % df["growth"] = (df["growth"].dropna()*100).round(2) For ex.(taken from the official documentation and applied on a series): s = pd.Series([90, 91, 85]) s 0 90 1 91 2 85 dtype: int64 s.pct_change() 0 NaN 1 0.011111 2 -0.065934 dtype: float64 EDIT As @Omar suggested, I posted a small edit to the code that fully solved his problem(just added manual reordering + converting percentage points into percentages). The main gist is still group_by + pct_change A: You can use: (df.sort_values(by=['sessionSource', 'dateRange'], ascending=[True, False]) .groupby('sessionSource', as_index=False) .agg({'dateRange': 'first', 'activeUsers': lambda s: s.pct_change().dropna().mul(100)}) ) Output: sessionSource dateRange activeUsers 0 instagram.com previous inf 1 l.instagram.com previous 654.545455 2 snapchat.com previous 100.000000
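As an alternative sketch to the answers above: pivot the table so current and previous become columns, then compute the growth directly. The inf for instagram.com comes from dividing by a previous value of 0:

import pandas as pd

df = pd.DataFrame({
    "sessionSource": ["instagram.com", "instagram.com", "l.instagram.com",
                      "l.instagram.com", "snapchat.com", "snapchat.com"],
    "dateRange": ["current", "previous"] * 3,
    "activeUsers": [5, 0, 83, 11, 2, 1],
})

# one row per source, with 'current' and 'previous' side by side
wide = df.pivot(index="sessionSource", columns="dateRange", values="activeUsers")
wide["Growth"] = (wide["current"] - wide["previous"]) / wide["previous"] * 100
print(wide.reset_index())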
How to get the growth between two rows
I'm trying to get the growth (in %) between two values at different periods. Here is what my DataFrame looks like: sessionSource dateRange activeUsers 0 instagram.com current 5 1 instagram.com previous 0 2 l.instagram.com current 83 3 l.instagram.com previous 11 4 snapchat.com current 2 5 snapchat.com previous 1 What I'm trying to get is: sessionSource dateRange activeUsers Growth 0 instagram.com current 5 xx% 2 l.instagram.com current 83 xx% 4 snapchat.com current 2 xx% I'm not a Pandas expert; I tried a few things but nothing came close to what I need. Thanks a lot for any help.
[ "Assuming you literally just need the percent change between current and previous and current/previous are in the correct order, you can just group the data based on the source and get the percent change of the group\n.Use the pandas.Series.pct_change() method on the grouped object and you should be good.\n# sort values before to make sure the order is maintained\ndf = df.sort_values(by=[\"sessionSource\", \"dateRange\"], ascending=False)\ndf['Growth']= (df.groupby('sessionSource')['activeUsers'].apply(pd.Series.pct_change))\n#drop na from the unavailable results and convert to %\ndf[\"growth\"] = (df[\"growth\"].dropna()*100).round(2)\n\n\nFor ex.(taken from the official documentation and applied on a series):\ns = pd.Series([90, 91, 85])\ns\n0 90\n1 91\n2 85\ndtype: int64\n\ns.pct_change()\n0 NaN\n1 0.011111\n2 -0.065934\ndtype: float64\n\nEDIT\nAs @Omar suggested, I posted a small edit to the code that fully solved his problem(just added manual reordering + converting percentage points into percentages). The main gist is still group_by + pct_change\n", "You can use:\n(df.sort_values(by=['sessionSource', 'dateRange'],\n ascending=[True, False])\n .groupby('sessionSource', as_index=False)\n .agg({'dateRange': 'first', 'activeUsers': lambda s: s.pct_change().dropna().mul(100)})\n )\n\nOutput:\n sessionSource dateRange activeUsers\n0 instagram.com previous inf\n1 l.instagram.com previous 654.545455\n2 snapchat.com previous 100.000000\n\n" ]
[ 1, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074566108_dataframe_pandas_python.txt
Q: Time elapsed since first log for each user I'm trying to calculate the time difference between all the logs of a user and the first log of that same user. There are users with several logs. The dataframe looks like this: 16 00000021601 2022-08-23 17:12:04 20 00000021601 2022-08-23 17:12:04 21 00000031313 2022-10-22 11:16:57 22 00000031313 2022-10-22 12:16:44 23 00000031313 2022-10-22 14:39:07 24 00000065137 2022-05-06 11:51:33 25 00000065137 2022-05-06 11:51:33 I know that I could do df['DELTA'] = df.groupby('ID')['DATE'].shift(-1) - df['DATE'] to get the difference between consecutive dates for each user, but since something like iat[0] doesn't work in this case I don't know how to get the difference in relation to the first date. A: You can try this code import pandas as pd dates = ['2022-08-23 17:12:04', '2022-08-23 17:12:04', '2022-10-22 11:16:57', '2022-10-22 12:16:44', '2022-10-22 14:39:07', '2022-05-06 11:51:33', '2022-05-06 11:51:33',] ids = [1,1,1,2,2,2,2] df = pd.DataFrame({'id':ids, 'dates':dates}) df['dates'] = pd.to_datetime(df['dates']) df.groupby('id').apply(lambda x: x['dates'] - x.iloc[0, 0]) Out: id 1 0 0 days 00:00:00 1 0 days 00:00:00 2 59 days 18:04:53 2 3 0 days 00:00:00 4 0 days 02:22:23 5 -170 days +23:34:49 6 -170 days +23:34:49 Name: dates, dtype: timedelta64[ns] If you dataframe is large and apply took a long time you can try use parallel-pandas. It's very simple import pandas as pd from parallel_pandas import ParallelPandas ParallelPandas.initialize(n_cpu=8) dates = ['2022-08-23 17:12:04', '2022-08-23 17:12:04', '2022-10-22 11:16:57', '2022-10-22 12:16:44', '2022-10-22 14:39:07', '2022-05-06 11:51:33', '2022-05-06 11:51:33',] ids = [1,1,1,2,2,2,2] df = pd.DataFrame({'id':ids, 'dates':dates}) df['dates'] = pd.to_datetime(df['dates']) #p_apply is parallel analogue of apply method df.groupby('id').p_apply(lambda x: x['dates'] - x.iloc[0, 0]) It will be 5-10 time faster
Time elapsed since first log for each user
I'm trying to calculate the time difference between all the logs of a user and the first log of that same user. There are users with several logs. The dataframe looks like this: 16 00000021601 2022-08-23 17:12:04 20 00000021601 2022-08-23 17:12:04 21 00000031313 2022-10-22 11:16:57 22 00000031313 2022-10-22 12:16:44 23 00000031313 2022-10-22 14:39:07 24 00000065137 2022-05-06 11:51:33 25 00000065137 2022-05-06 11:51:33 I know that I could do df['DELTA'] = df.groupby('ID')['DATE'].shift(-1) - df['DATE'] to get the difference between consecutive dates for each user, but since something like iat[0] doesn't work in this case I don't know how to get the difference in relation to the first date.
[ "You can try this code\nimport pandas as pd\n\ndates = ['2022-08-23 17:12:04',\n '2022-08-23 17:12:04',\n '2022-10-22 11:16:57',\n '2022-10-22 12:16:44',\n '2022-10-22 14:39:07',\n '2022-05-06 11:51:33',\n '2022-05-06 11:51:33',]\nids = [1,1,1,2,2,2,2]\ndf = pd.DataFrame({'id':ids, 'dates':dates})\ndf['dates'] = pd.to_datetime(df['dates'])\ndf.groupby('id').apply(lambda x: x['dates'] - x.iloc[0, 0])\n\n\nOut:\nid \n1 0 0 days 00:00:00\n 1 0 days 00:00:00\n 2 59 days 18:04:53\n2 3 0 days 00:00:00\n 4 0 days 02:22:23\n 5 -170 days +23:34:49\n 6 -170 days +23:34:49\nName: dates, dtype: timedelta64[ns]\n\n\nIf you dataframe is large and apply took a long time you can try use parallel-pandas. It's very simple\nimport pandas as pd\nfrom parallel_pandas import ParallelPandas\n\nParallelPandas.initialize(n_cpu=8)\n\ndates = ['2022-08-23 17:12:04',\n '2022-08-23 17:12:04',\n '2022-10-22 11:16:57',\n '2022-10-22 12:16:44',\n '2022-10-22 14:39:07',\n '2022-05-06 11:51:33',\n '2022-05-06 11:51:33',]\nids = [1,1,1,2,2,2,2]\ndf = pd.DataFrame({'id':ids, 'dates':dates})\ndf['dates'] = pd.to_datetime(df['dates'])\n#p_apply is parallel analogue of apply method\ndf.groupby('id').p_apply(lambda x: x['dates'] - x.iloc[0, 0])\n\n\nIt will be 5-10 time faster\n" ]
[ 0 ]
[]
[]
[ "dataframe", "datetime", "pandas", "python" ]
stackoverflow_0074566047_dataframe_datetime_pandas_python.txt
Q: selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: invalid locator so I'm trying to make this bot with selenium but when I'm trying to use the send keys func it doesn't work I'm stuck on it for hours and I cant seem to find to solve the problem please if anyone has any idea I beg you to help me thanks. print(driver.title) tos = driver.find_element("xpath", '//*[@id="pop"]/button') tos.click() time.sleep(5) name = driver.find_element("ID", "inpNick") time.sleep(5) name.send_keys('baby') time.sleep(50) driver.quit() Traceback (most recent call last): File "c:\Users\SexiKiller41\Downloads\a\catchno1se.py", line 17, in <module> name = driver.find_element("ID", "inpNick") File "C:\Users\SexiKiller41\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\selenium\webdriver\remote\webdriver.py", line 861, in find_element return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"] File "C:\Users\SexiKiller41\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute self.error_handler.check_response(response) File "C:\Users\SexiKiller41\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: invalid locator (Session info: chrome=107.0.5304.122) Stacktrace: Backtrace: Ordinal0 [0x0039ACD3+2075859] Ordinal0 [0x0032EE61+1633889] Ordinal0 [0x0022B7BD+571325] Ordinal0 [0x0025A745+763717] Ordinal0 [0x0025AE1B+765467] Ordinal0 [0x0028D0F2+970994] Ordinal0 [0x00277364+881508] Ordinal0 [0x0028B56A+963946] Ordinal0 [0x00277136+880950] Ordinal0 [0x0024FEFD+720637] Ordinal0 [0x00250F3F+724799] GetHandleVerifier [0x0064EED2+2769538] GetHandleVerifier [0x00640D95+2711877] GetHandleVerifier [0x0042A03A+521194] GetHandleVerifier [0x00428DA0+516432] Ordinal0 [0x0033682C+1665068] Ordinal0 [0x0033B128+1683752] Ordinal0 [0x0033B215+1683989] Ordinal0 [0x00346484+1729668] BaseThreadInitThunk [0x7753FEF9+25] RtlGetAppContainerNamedObjectPath [0x77D37BBE+286] RtlGetAppContainerNamedObjectPath [0x77D37B8E+238] [Done] exited with code=1 in 9.895 seconds I was trying to enter text to an input on a website A: Try: # id instead of ID name = driver.find_element("id", "inpNick") # or from selenium.webdriver.common.by import By name = driver.find_element(By.ID, "inpNick") A: Instead driver.find_element("ID", "inpNick") try driver.find_element(By.ID, "inpNick") Also, no need to add delays between locating element and clicking it or sending keys to it. Delays are meaningful before locating the element to make the element rendered on the page. It is much better to use WebDriverWait expected_conditions explicit waits than hardcoded sleeps. Improved, your code can be as following: from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC print(driver.title) wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="pop"]/button'))).click() wait.until(EC.visibility_of_element_located((By.ID, "inpNick"))).send_keys('baby')
selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: invalid locator
so I'm trying to make this bot with selenium but when I'm trying to use the send keys func it doesn't work I'm stuck on it for hours and I cant seem to find to solve the problem please if anyone has any idea I beg you to help me thanks. print(driver.title) tos = driver.find_element("xpath", '//*[@id="pop"]/button') tos.click() time.sleep(5) name = driver.find_element("ID", "inpNick") time.sleep(5) name.send_keys('baby') time.sleep(50) driver.quit() Traceback (most recent call last): File "c:\Users\SexiKiller41\Downloads\a\catchno1se.py", line 17, in <module> name = driver.find_element("ID", "inpNick") File "C:\Users\SexiKiller41\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\selenium\webdriver\remote\webdriver.py", line 861, in find_element return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"] File "C:\Users\SexiKiller41\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute self.error_handler.check_response(response) File "C:\Users\SexiKiller41\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: invalid locator (Session info: chrome=107.0.5304.122) Stacktrace: Backtrace: Ordinal0 [0x0039ACD3+2075859] Ordinal0 [0x0032EE61+1633889] Ordinal0 [0x0022B7BD+571325] Ordinal0 [0x0025A745+763717] Ordinal0 [0x0025AE1B+765467] Ordinal0 [0x0028D0F2+970994] Ordinal0 [0x00277364+881508] Ordinal0 [0x0028B56A+963946] Ordinal0 [0x00277136+880950] Ordinal0 [0x0024FEFD+720637] Ordinal0 [0x00250F3F+724799] GetHandleVerifier [0x0064EED2+2769538] GetHandleVerifier [0x00640D95+2711877] GetHandleVerifier [0x0042A03A+521194] GetHandleVerifier [0x00428DA0+516432] Ordinal0 [0x0033682C+1665068] Ordinal0 [0x0033B128+1683752] Ordinal0 [0x0033B215+1683989] Ordinal0 [0x00346484+1729668] BaseThreadInitThunk [0x7753FEF9+25] RtlGetAppContainerNamedObjectPath [0x77D37BBE+286] RtlGetAppContainerNamedObjectPath [0x77D37B8E+238] [Done] exited with code=1 in 9.895 seconds I was trying to enter text to an input on a website
[ "Try:\n# id instead of ID\nname = driver.find_element(\"id\", \"inpNick\") \n\n# or\nfrom selenium.webdriver.common.by import By\nname = driver.find_element(By.ID, \"inpNick\")\n\n", "Instead driver.find_element(\"ID\", \"inpNick\") try\ndriver.find_element(By.ID, \"inpNick\")\n\nAlso, no need to add delays between locating element and clicking it or sending keys to it.\nDelays are meaningful before locating the element to make the element rendered on the page.\nIt is much better to use WebDriverWait expected_conditions explicit waits than hardcoded sleeps.\nImproved, your code can be as following:\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\nprint(driver.title)\nwait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id=\"pop\"]/button'))).click()\nwait.until(EC.visibility_of_element_located((By.ID, \"inpNick\"))).send_keys('baby')\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "selenium", "selenium_webdriver", "webdriverwait", "xpath" ]
stackoverflow_0074566182_python_selenium_selenium_webdriver_webdriverwait_xpath.txt
Q: How to parse and flatten nested JSON API response into tabular format JSON structure: { "help": "https://data.boston.gov/api/3/action/help_show?name=datastore_search_sql", "success": true, "result": { "records": [ { "latitude": "42.38331999978103", "property_type": "Residential 1-family", "neighborhood": "Charlestown", "description": "Improper storage trash: res", "year built": "1885", "_full_text": "'-11':2 '-23':3 '-71.06920000136572':29 '00':4,5,6 '02129':16 '1':26 '107':13 '1885':23 '201340000':19 '2017':24 '2022':1 '2129':18 '42.38331999978103':28 'baldwin':14 'charlestown':17 'enforcement':7 'family':27 'improper':9 'lia':20 'res':12 'residential':25 'ryan':21 'st':15 'storage':10 'trash':11 'v':22 'violations':8", "longitude": "-71.06920000136572", "owner": "LIA RYAN V", "address": "107 Baldwin St, 02129", "date": "2022-11-23T00:00:00", "violation_type": "Enforcement Violations", "_id": 1, "year remodeled": "2017", "parcel": "201340000", "zip_code": "2129" }, { "latitude": "42.32762329872878", "property_type": ...} ], "fields": [ { "type": "int4", "id": "_id" }, { "type": "tsvector"... } ], "sql": "SELECT * from \"dc615ff7-2ff3-416a-922b-f0f334f085d0\" where date >= '2022-11-23'" } } Received as API response from Boston.gov website: response = requests.request('GET', 'https://data.boston.gov/api/3/action/datastore_search_sql?sql=SELECT%20*%20from%20%22dc615ff7-2ff3-416a-922b-f0f334f085d0%22%20where%20date%20%3E=%20%272022-11-23%27') So 5 top-level keys, but I only care about getting the result.records into a tabular format Keys from relevant dict (result.records): >>> json_data['result']['records'][0].keys() dict_keys(['latitude', 'property_type', 'neighborhood', 'description', 'year built', '_full_text', 'longitude', 'owner', 'address', 'date', 'violation_type', '_id', 'year remodeled', 'parcel', 'zip_code']) The closest I have gotten is 1x52 dataframe using the flatten_json module's flatten(), however that just has each results.records dict in a separate column. 0 ... 51 0 {'latitude': '42.38331999978103', 'property_ty... ... {'latitude': '42.38306999993893', 'property_ty... Previous attempt using json_normalize (twice) with open(extracted_data_fn) as json_file: # store file data in object json_data = json.load(json_file) print (json_data) # using flatten_json module flat_json = flatten_json.flatten(json_data) df_flat = pd.DataFrame(flat_json, index = range(len(flat_json))) df = pd.json_normalize(json_data) df_result_records = pd.json_normalize(df['result.records']) df_result_records My preferred output would be the keys as columns and each value as a cell in the row. Any thoughts on how to achieve this? Thank you! 
A: just use: json_data= response.json() df=pd.json_normalize(json_data['result']['records']) df | | latitude | property_type | neighborhood | description | year built | _full_text | longitude | owner | address | date | violation_type | _id | year remodeled | parcel | zip_code | |---:|-----------:|:---------------------|:---------------|:----------------------------|-------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------:|:-----------|:----------------------|:--------------------|:-----------------------|------:|-----------------:|-------------:|-----------:| | 0 | 42.3833 | Residential 1-family | Charlestown | Improper storage trash: res | 1885 | '-11':2 '-23':3 '-71.06920000136572':29 '00':4,5,6 '02129':16 '1':26 '107':13 '1885':23 '201340000':19 '2017':24 '2022':1 '2129':18 '42.38331999978103':28 'baldwin':14 'charlestown':17 'enforcement':7 'family':27 'improper':9 'lia':20 'res':12 'residential':25 'ryan':21 'st':15 'storage':10 'trash':11 'v':22 'violations':8 | -71.0692 | LIA RYAN V | 107 Baldwin St, 02129 | 2022-11-23T00:00:00 | Enforcement Violations | 1 | 2017 | 2.0134e+08 | 2129 | | 1 | 42.3276 | Ellipsis | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan |
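As a hedged side note to the answer above: pd.json_normalize can also walk into the nested response itself via its record_path argument, which avoids indexing into the dict by hand:

import pandas as pd

# json_data is the parsed API response from above
df = pd.json_normalize(json_data, record_path=["result", "records"])
print(df.head())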
How to parse and flatten nested JSON API response into tabular format
JSON structure: { "help": "https://data.boston.gov/api/3/action/help_show?name=datastore_search_sql", "success": true, "result": { "records": [ { "latitude": "42.38331999978103", "property_type": "Residential 1-family", "neighborhood": "Charlestown", "description": "Improper storage trash: res", "year built": "1885", "_full_text": "'-11':2 '-23':3 '-71.06920000136572':29 '00':4,5,6 '02129':16 '1':26 '107':13 '1885':23 '201340000':19 '2017':24 '2022':1 '2129':18 '42.38331999978103':28 'baldwin':14 'charlestown':17 'enforcement':7 'family':27 'improper':9 'lia':20 'res':12 'residential':25 'ryan':21 'st':15 'storage':10 'trash':11 'v':22 'violations':8", "longitude": "-71.06920000136572", "owner": "LIA RYAN V", "address": "107 Baldwin St, 02129", "date": "2022-11-23T00:00:00", "violation_type": "Enforcement Violations", "_id": 1, "year remodeled": "2017", "parcel": "201340000", "zip_code": "2129" }, { "latitude": "42.32762329872878", "property_type": ...} ], "fields": [ { "type": "int4", "id": "_id" }, { "type": "tsvector"... } ], "sql": "SELECT * from \"dc615ff7-2ff3-416a-922b-f0f334f085d0\" where date >= '2022-11-23'" } } Received as API response from Boston.gov website: response = requests.request('GET', 'https://data.boston.gov/api/3/action/datastore_search_sql?sql=SELECT%20*%20from%20%22dc615ff7-2ff3-416a-922b-f0f334f085d0%22%20where%20date%20%3E=%20%272022-11-23%27') So 5 top-level keys, but I only care about getting the result.records into a tabular format Keys from relevant dict (result.records): >>> json_data['result']['records'][0].keys() dict_keys(['latitude', 'property_type', 'neighborhood', 'description', 'year built', '_full_text', 'longitude', 'owner', 'address', 'date', 'violation_type', '_id', 'year remodeled', 'parcel', 'zip_code']) The closest I have gotten is 1x52 dataframe using the flatten_json module's flatten(), however that just has each results.records dict in a separate column. 0 ... 51 0 {'latitude': '42.38331999978103', 'property_ty... ... {'latitude': '42.38306999993893', 'property_ty... Previous attempt using json_normalize (twice) with open(extracted_data_fn) as json_file: # store file data in object json_data = json.load(json_file) print (json_data) # using flatten_json module flat_json = flatten_json.flatten(json_data) df_flat = pd.DataFrame(flat_json, index = range(len(flat_json))) df = pd.json_normalize(json_data) df_result_records = pd.json_normalize(df['result.records']) df_result_records My preferred output would be the keys as columns and each value as a cell in the row. Any thoughts on how to achieve this? Thank you!
[ "just use:\njson_data= response.json()\ndf=pd.json_normalize(json_data['result']['records'])\n\ndf\n\n| | latitude | property_type | neighborhood | description | year built | _full_text | longitude | owner | address | date | violation_type | _id | year remodeled | parcel | zip_code |\n|---:|-----------:|:---------------------|:---------------|:----------------------------|-------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------:|:-----------|:----------------------|:--------------------|:-----------------------|------:|-----------------:|-------------:|-----------:|\n| 0 | 42.3833 | Residential 1-family | Charlestown | Improper storage trash: res | 1885 | '-11':2 '-23':3 '-71.06920000136572':29 '00':4,5,6 '02129':16 '1':26 '107':13 '1885':23 '201340000':19 '2017':24 '2022':1 '2129':18 '42.38331999978103':28 'baldwin':14 'charlestown':17 'enforcement':7 'family':27 'improper':9 'lia':20 'res':12 'residential':25 'ryan':21 'st':15 'storage':10 'trash':11 'v':22 'violations':8 | -71.0692 | LIA RYAN V | 107 Baldwin St, 02129 | 2022-11-23T00:00:00 | Enforcement Violations | 1 | 2017 | 2.0134e+08 | 2129 |\n| 1 | 42.3276 | Ellipsis | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan |\n\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "flatten", "json", "json_normalize", "python" ]
stackoverflow_0074564373_dictionary_flatten_json_json_normalize_python.txt
Q: pytest coverage 'if 0:' statement body not listed as a miss Test marks the code as covered if the condition is 0 but as uncovered if the condition is a variable with value of zero. I was trying a simple thing in pytest with coverage and I found this bug (?). I am not sure if I am missing something in how pytest or python works. Here bellow is my function def dummy_func(a=0): if a: print('this part is not tested !!') else: print('this part is tested !!') if 0: # tried None as well print('this part is not tested, but appears like it is !') else: print('this part is tested !!') return 1 and this is the report I got ---------- coverage: platform linux, python 3.10.6-final-0 ----------- Name Stmts Miss Cover Missing ------------------------------------------------------ myproject/flask_api.py 7 1 86% 4 tests/test_hello.py 4 0 100% ------------------------------------------------------ TOTAL 11 1 91% Should not the line under the if 0 be marked as Miss. Is that a bug or I am missing something ? I got that in both version pytest-cov = "4.0.0" and "3.0.0" also with coverage my test code is that from myproject import flask_api def test_dummy(): result = flask_api.dummy_func() assert result == 1 A: So the answer to your question is that it's not a bug, it's expected behavior. From the coverage.py docs: After your program has been executed and the line numbers recorded, coverage.py needs to determine what lines could have been executed. Luckily, compiled Python files (.pyc files) have a table of line numbers in them. Coverage.py reads this table to get the set of executable lines, with a little more source analysis to leave out things like docstrings. and The data file is read to get the set of lines that were executed. The difference between the executable lines and the executed lines are the lines that were not executed. and The same principle applies for branch measurement, though the process for determining possible branches is more involved. Coverage.py uses the abstract syntax tree of the Python source file to determine the set of possible branches. That is just how coverage.py works. It does not consider unreachable code in its report. It doesn't mark it as missed but it doesn't mark it as tested either. For instance, here is how the report looks in PyCharm (note the unmarked line 7): Some more examples: Interestingly enough, PyCharm can't evaluate 'a' * 0 as "always falsy" but can evaluate 'a'.replace('a', '') as such, while coverage.py does the opposite.
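A hedged way to see the compiler behaviour behind the answer above (assuming CPython): constant-false branches like if 0: are folded away at compile time, so their lines never enter the bytecode line table that coverage.py reads:

import dis

def dummy():
    if 0:  # constant-false: CPython drops this whole branch at compile time
        print("this line has no bytecode, so coverage cannot see it")
    return 1

dis.dis(dummy)  # the disassembly shows only the `return 1` part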
pytest coverage 'if 0:' statement body not listed as a miss
Test marks the code as covered if the condition is 0, but as uncovered if the condition is a variable with a value of zero. I was trying a simple thing in pytest with coverage and I found this bug (?). I am not sure if I am missing something in how pytest or Python works. Here below is my function def dummy_func(a=0): if a: print('this part is not tested !!') else: print('this part is tested !!') if 0: # tried None as well print('this part is not tested, but appears like it is !') else: print('this part is tested !!') return 1 and this is the report I got ---------- coverage: platform linux, python 3.10.6-final-0 ----------- Name Stmts Miss Cover Missing ------------------------------------------------------ myproject/flask_api.py 7 1 86% 4 tests/test_hello.py 4 0 100% ------------------------------------------------------ TOTAL 11 1 91% Shouldn't the line under the if 0 be marked as a Miss? Is this a bug, or am I missing something? I got this with both pytest-cov versions "4.0.0" and "3.0.0", and also with coverage. My test code is: from myproject import flask_api def test_dummy(): result = flask_api.dummy_func() assert result == 1
[ "So the answer to your question is that it's not a bug, it's expected behavior.\nFrom the coverage.py docs:\n\nAfter your program has been executed and the line numbers recorded,\ncoverage.py needs to determine what lines could have been executed.\nLuckily, compiled Python files (.pyc files) have a table of line\nnumbers in them. Coverage.py reads this table to get the set of\nexecutable lines, with a little more source analysis to leave out\nthings like docstrings.\n\nand\n\nThe data file is read to get the set of lines that were executed. The\ndifference between the executable lines and the executed lines are the lines that were not executed.\n\nand\n\nThe same principle applies for branch measurement, though the process\nfor determining possible branches is more involved. Coverage.py uses\nthe abstract syntax tree of the Python source file to determine the\nset of possible branches.\n\nThat is just how coverage.py works. It does not consider unreachable code in its report. It doesn't mark it as missed but it doesn't mark it as tested either.\nFor instance, here is how the report looks in PyCharm (note the unmarked line 7):\n\nSome more examples:\n\nInterestingly enough, PyCharm can't evaluate 'a' * 0 as \"always falsy\" but can evaluate 'a'.replace('a', '') as such, while coverage.py does the opposite.\n" ]
[ 2 ]
[]
[]
[ "code_coverage", "pytest", "python" ]
stackoverflow_0074564731_code_coverage_pytest_python.txt
Q: How to check the user has put in the correct number of arguments on the command line in Python I'm trying to check that the user has entered two arguments on the command line - the iface name and passive for a type of scan. I thought the script would just exit if the wrong arguments are entered, but it still prints out the error message no matter how many arguments are entered - what am I missing? import sys import os def main(): if len(sys.argv) != 2: print("not enough arguments") sys.exit(1) else: args = sys.argv if("-i" in args): i = args.index("-i")+1 iface = args[i] print(iface) if("-p" in args): passive = args.index("-p")+1 passive = args[passive] print(passive) main() A: sys.argv also includes the name of the Python file itself. Try running: print(sys.argv)
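A hedged alternative to hand-rolling the length check: argparse validates the arguments for you and exits with a usage message when something is missing. Flag names below are taken from the question; the rest is a sketch:

import argparse

parser = argparse.ArgumentParser(description="simple scan helper")
parser.add_argument("-i", dest="iface", required=True, help="interface name")
parser.add_argument("-p", dest="passive", required=True, help="scan type, e.g. passive")
args = parser.parse_args()  # exits with an error message if -i or -p is missing

print(args.iface)
print(args.passive)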
How to check the user has put in the correct number of arguments on the command line in Python
I'm trying to check that the user has entered two arguments on the command line - the iface name and passive for a type of scan. I thought the script would just exit if the wrong arguments are entered, but it still prints out the error message no matter how many arguments are entered - what am I missing? import sys import os def main(): if len(sys.argv) != 2: print("not enough arguments") sys.exit(1) else: args = sys.argv if("-i" in args): i = args.index("-i")+1 iface = args[i] print(iface) if("-p" in args): passive = args.index("-p")+1 passive = args[passive] print(passive) main()
[ "sys.argv also returns the name of the python file. Try running:\nprint(sys.argv)\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074566228_python_python_3.x.txt
Q: Python nested Dictionaries to csv I have a data set in this format: data = { 'sensor1': {'units': 'x', 'values': [{'time': 17:00, 'value': 10}, {'time': 17:10, 'value': 12}, {'time': 17:20, 'value' :7}, ...]} 'sensor2': {'units': 'x', 'values': [{'time': 17:00, 'value': 9}, {'time': 17:20, 'value': 11}, ...]} } And I want to collect the data to put into a csv like: time, sensor1, sensor2 17:00, 10, 9, 17:10, 12, , 17:20, 7, 11, ... I need to use the csv module so I require a list of dictionaries like so: [{'time': 17:00, 'sensor1': 10, 'sensor2': 9}, ... ] I know that fields = list(data.keys()) Will go into csv write as the header. It's just the rows I can't format properly. Especially since the times don't always exist in both sensors. e.g. 17:10 has a value in sensor 1 but does not exist in sensor 2. A: You can use pandas to create a dataframe from your data and save it as CSV: import pandas as pd data = { "sensor1": { "units": "x", "values": [ {"time": "17:00", "value": "10"}, {"time": "17:10", "value": "12"}, {"time": "17:20", "value": "7"}, ], }, "sensor2": { "units": "x", "values": [ {"time": "17:00", "value": "9"}, {"time": "17:20", "value": "11"}, ], }, } df = pd.DataFrame( [ {"time": vv["time"], "column": k, "value": vv["value"]} for k, v in data.items() for vv in v["values"] ], ) df = df.pivot(index="time", columns="column", values="value").reset_index() df.to_csv("data.csv", index=False) saves data.csv: time,sensor1,sensor2 17:00,10,9 17:10,12, 17:20,7,11 A: You can do it in 2 steps. data = { 'sensor1': {'units': 'x', 'values': [{'time': '17:00', 'value': 10}, {'time': '17:10', 'value': 12}, {'time': '17:20', 'value' :7}, ]}, 'sensor2': {'units': 'x', 'values': [{'time': '17:00', 'value': 9}, {'time': '17:20', 'value': 11}, ]}, } # first step: dictionary keyed by time {time:t,{'sensorx':v,}} dic = {} for sensor in data: for sample in data[sensor]['values']: if not dic.get( sample['time']): dic[ sample['time']] = {} # create the record dic[ sample[ 'time']][sensor] = sample['value'] # second step: generate CSV columns=['time', 'sensor1', 'sensor2'] import csv f = open( 'zz.csv', 'w', newline='') writer = csv.DictWriter( f, columns) writer.writeheader() for key in dic: writer.writerow( dict( [('time',key)] + list( dic[key].items()))) The result will be: time,sensor1,sensor2 17:00,10,9 17:10,12, 17:20,7,11
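Sticking with the csv module as the question asks, one more hedged variant of the answers above: csv.DictWriter's restval argument fills in the missing sensor readings (like sensor2 at 17:10) automatically:

import csv

data = {
    "sensor1": {"units": "x", "values": [{"time": "17:00", "value": 10},
                                         {"time": "17:10", "value": 12},
                                         {"time": "17:20", "value": 7}]},
    "sensor2": {"units": "x", "values": [{"time": "17:00", "value": 9},
                                         {"time": "17:20", "value": 11}]},
}

rows = {}  # one dict per timestamp, keyed by time
for sensor, info in data.items():
    for sample in info["values"]:
        rows.setdefault(sample["time"], {"time": sample["time"]})[sensor] = sample["value"]

with open("sensors.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["time", *data.keys()], restval="")
    writer.writeheader()
    writer.writerows(sorted(rows.values(), key=lambda r: r["time"]))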
Python nested Dictionaries to csv
I have a data set in this format: data = { 'sensor1': {'units': 'x', 'values': [{'time': 17:00, 'value': 10}, {'time': 17:10, 'value': 12}, {'time': 17:20, 'value' :7}, ...]} 'sensor2': {'units': 'x', 'values': [{'time': 17:00, 'value': 9}, {'time': 17:20, 'value': 11}, ...]} } And I want to collect the data to put into a csv like: time, sensor1, sensor2 17:00, 10, 9, 17:10, 12, , 17:20, 7, 11, ... I need to use the csv module so I require a list of dictionaries like so: [{'time': 17:00, 'sensor1': 10, 'sensor2': 9}, ... ] I know that fields = list(data.keys()) Will go into csv write as the header. It's just the rows I can't format properly. Especially since the times don't always exist in both sensors. e.g. 17:10 has a value in sensor 1 but does not exist in sensor 2.
[ "You can use pandas to create a dataframe from your data and save it as CSV:\nimport pandas as pd\n\ndata = {\n \"sensor1\": {\n \"units\": \"x\",\n \"values\": [\n {\"time\": \"17:00\", \"value\": \"10\"},\n {\"time\": \"17:10\", \"value\": \"12\"},\n {\"time\": \"17:20\", \"value\": \"7\"},\n ],\n },\n \"sensor2\": {\n \"units\": \"x\",\n \"values\": [\n {\"time\": \"17:00\", \"value\": \"9\"},\n {\"time\": \"17:20\", \"value\": \"11\"},\n ],\n },\n}\n\ndf = pd.DataFrame(\n [\n {\"time\": vv[\"time\"], \"column\": k, \"value\": vv[\"value\"]}\n for k, v in data.items()\n for vv in v[\"values\"]\n ],\n)\ndf = df.pivot(index=\"time\", columns=\"column\", values=\"value\").reset_index()\n\ndf.to_csv(\"data.csv\", index=False)\n\nsaves data.csv:\ntime,sensor1,sensor2\n17:00,10,9\n17:10,12,\n17:20,7,11\n\n", "You can do it in 2 steps.\ndata = { 'sensor1': {'units': 'x', 'values': [{'time': '17:00', 'value': 10},\n {'time': '17:10', 'value': 12}, \n {'time': '17:20', 'value' :7},\n ]},\n 'sensor2': {'units': 'x', 'values': [{'time': '17:00', 'value': 9},\n {'time': '17:20', 'value': 11},\n ]},\n }\n\n# first step: dictionary keyed by time {time:t,{'sensorx':v,}}\ndic = {}\nfor sensor in data:\n for sample in data[sensor]['values']:\n if not dic.get( sample['time']):\n dic[ sample['time']] = {} # create the record\n dic[ sample[ 'time']][sensor] = sample['value']\n\n# second step: generate CSV\ncolumns=['time', 'sensor1', 'sensor2']\nimport csv\nf = open( 'zz.csv', 'w', newline='')\nwriter = csv.DictWriter( f, columns)\nwriter.writeheader()\nfor key in dic:\n writer.writerow( dict( [('time',key)] + list( dic[key].items())))\n\nThe result will be:\n time,sensor1,sensor2\n 17:00,10,9\n 17:10,12,\n 17:20,7,11\n\n" ]
[ 0, 0 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0074564045_dictionary_python.txt
Q: Python map() function with an existing list This might be a dumb question, but why, when I try to use the map() function on an already existing list: nums = [1,2,3,4,5] result = map(lambda num: num+num , nums) print(result) does it return <map object at 0x7f41cef17130> instead of my result? On the contrary, when I do this: nums = 1,2,3,4,5 result = list(map(lambda num: num+num , nums)) print(result) it does print my desired result: [2, 4, 6, 8, 10]
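A small sketch of the lazy-iterator behaviour the answer above describes - a map object produces values on demand and is used up after one full pass:

nums = [1, 2, 3, 4, 5]
result = map(lambda num: num + num, nums)

print(next(result))  # 2  - a map object is a lazy iterator, evaluated on demand
print(list(result))  # [4, 6, 8, 10] - list() consumes whatever is left
print(list(result))  # []  - the iterator is exhausted after one full pass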
Python map() function with an existing list
This might be a dumb question, but why, when I try to use the map() function on an already existing list: nums = [1,2,3,4,5] result = map(lambda num: num+num , nums) print(result) does it return <map object at 0x7f41cef17130> instead of my result? On the contrary, when I do this: nums = 1,2,3,4,5 result = list(map(lambda num: num+num , nums)) print(result) it does print my desired result: [2, 4, 6, 8, 10]
[ "Others have already said this. In python you have a few datatypes that don't show the values directly and require another function. This is one of them. Others are a generator:\n(x for x in range(5))\n\n\n<generator object at ....>\n\nAnd zip:\nzip([1,2], [1,2])\n\n\n<zip at ....>\n\nWhat that means is basically where in the memory it is saved\nIn general just know that whenever you see python return something like that, but you wanted a datatype (like a list or a string), that means you still need to do something to it. So:\nlist(map(...))\nlist(zip(...))\netc.\n" ]
[ 0 ]
[]
[]
[ "list", "map_function", "python" ]
stackoverflow_0074566167_list_map_function_python.txt
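A short runnable sketch restating the answer above: map (like zip and generators) is a lazy iterator, so printing it shows only the object, and it can be consumed just once:

nums = [1, 2, 3, 4, 5]
doubled = map(lambda n: n + n, nums)

print(doubled)        # <map object at 0x...> -- the iterator itself
print(list(doubled))  # [2, 4, 6, 8, 10] -- materialized into a list
print(list(doubled))  # [] -- the iterator is already exhausted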
Q: how to parse strings and apply them to dataframe I have an excel table that I use as a reference for logical operators so I can join them later to apply a logical string to a pandas dataframe. dataframe GOOD BAD UGLY 0 101 60 0 1 22 61 0 2 103 62 NaN 3 104 63 0 I can get values from the excel sheet and append them into a list. But how can I parse these logical formulas to the df? import pandas as pd import openpyxl def create_dataframe(): df = pd.DataFrame({'GOOD': [101,22,103,104], 'BAD': [60,61,62,63], 'UGLY': [0,0,'NaN',0], }) print(df) read_filter = pd.read_excel('test.xlsx') print(read_filter) formulas = [] logicals = ['>','<'] for i, filter_col in enumerate(read_filter['col1']): if read_filter['Logic'][i] in logicals: formula = f"df['{filter_col}'][{i}]" + read_filter['Logic'][i] + str(read_filter['value'][i]) formulas.append(formula) else: formula = f"{read_filter['Logic'][i]}(df['{filter_col}'])" formulas.append(formula) # print(formulas) #df['Result'] = df.apply(lambda x: eval(formulas) , axis=1) return df formulas ---- ["df['GOOD'][0]>100", "df['BAD'][1]>50", "pd.isna(df['UGLY'])"] The expected result : GOOD BAD UGLY Result 0 101 60 0 False 1 22 61 0 False 2 103 62 True 3 104 63 0 False A: You can create the full condition like this: >>> ' & '.join(f"({f})" for f in formulas) "(df['GOOD'][0]>100) & (df['BAD'][1]>50) & (pd.isna(df['UGLY']))" Each expression should be put in parentheses. Otherwise a > b & c > d will be parsed as a > (b & c) > d, not (a > b) & (c > d). Then eval it: >>> import pandas as pd >>> df = pd.DataFrame({'GOOD': [101,22,103,104], 'BAD': [60,61,62,63], 'UGLY': [0,0,float('nan'),0]}) >>> formulas = ["df['GOOD'][0]>100", "df['BAD'][1]>50", "pd.isna(df['UGLY'])"] >>> eval(' & '.join(f"({f})" for f in formulas), {'df': df, 'pd': pd}) 0 False 1 False 2 True 3 False Name: UGLY, dtype: bool Then you can create a column with this result: >>> df.assign(Result=eval(' & '.join(f"({f})" for f in formulas), {'df': df, 'pd': pd})) GOOD BAD UGLY Result 0 101 60 0.0 False 1 22 61 0.0 False 2 103 62 NaN True 3 104 63 0.0 False
how to parse strings and apply them to dataframe
I have an excel table that I use as a reference for logical operators so I can join them later to apply a logical string to a pandas dataframe. dataframe GOOD BAD UGLY 0 101 60 0 1 22 61 0 2 103 62 NaN 3 104 63 0 I can get values from the excel sheet and append them into a list. But how can I parse these logical formulas to the df? import pandas as pd import openpyxl def create_dataframe(): df = pd.DataFrame({'GOOD': [101,22,103,104], 'BAD': [60,61,62,63], 'UGLY': [0,0,'NaN',0], }) print(df) read_filter = pd.read_excel('test.xlsx') print(read_filter) formulas = [] logicals = ['>','<'] for i, filter_col in enumerate(read_filter['col1']): if read_filter['Logic'][i] in logicals: formula = f"df['{filter_col}'][{i}]" + read_filter['Logic'][i] + str(read_filter['value'][i]) formulas.append(formula) else: formula = f"{read_filter['Logic'][i]}(df['{filter_col}'])" formulas.append(formula) # print(formulas) #df['Result'] = df.apply(lambda x: eval(formulas) , axis=1) return df formulas ---- ["df['GOOD'][0]>100", "df['BAD'][1]>50", "pd.isna(df['UGLY'])"] The expected result : GOOD BAD UGLY Result 0 101 60 0 False 1 22 61 0 False 2 103 62 True 3 104 63 0 False
[ "You can create the full condition like this:\n>>> ' & '.join(f\"({f})\" for f in formulas)\n\"(df['GOOD'][0]>100) & (df['BAD'][1]>50) & (pd.isna(df['UGLY']))\"\n\nEach expression should be put in parentheses. Otherwise a > b & c > d will be parsed as a > (b & c) > d, not (a > b) & (c > d).\nThen eval it:\n>>> import pandas as pd\n>>> df = pd.DataFrame({'GOOD': [101,22,103,104], 'BAD': [60,61,62,63], 'UGLY': [0,0,float('nan'),0]})\n>>> formulas = [\"df['GOOD'][0]>100\", \"df['BAD'][1]<50\", \"pd.isna(df['UGLY'])\"]\n>>> eval(' & '.join(f\"({f})\" for f in formulas), {'df': df, 'pd': pd})\n0 False\n1 False\n2 True\n3 False\nName: UGLY, dtype: bool\n\nThen you can create a column with this result:\n>>> df.assign(Result=eval(' & '.join(f\"({f})\" for f in formulas), {'df': df, 'pd': pd}))\n GOOD BAD UGLY Result\n0 101 60 0.0 False\n1 22 61 0.0 False\n2 103 62 NaN True\n3 104 63 0.0 False\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074566170_dataframe_pandas_python.txt
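If eval on spreadsheet-supplied strings feels risky, here is a hedged, eval-free sketch of the same idea: map the Logic column onto functions from the operator module and combine the results with &. The (column, row, op, value) rule tuples are an assumed encoding of the question's filter sheet, not its actual layout:

import operator
import numpy as np
import pandas as pd

df = pd.DataFrame({'GOOD': [101, 22, 103, 104],
                   'BAD': [60, 61, 62, 63],
                   'UGLY': [0, 0, np.nan, 0]})

ops = {'>': operator.gt, '<': operator.lt}
# row=None would mean "apply to the whole column"
rules = [('GOOD', 0, '>', 100), ('BAD', 1, '>', 50), ('UGLY', None, 'isna', None)]

mask = True
for col, row, op, val in rules:
    if op == 'isna':
        mask = mask & df[col].isna()
    else:
        target = df[col] if row is None else df[col][row]
        mask = mask & ops[op](target, val)

df['Result'] = mask
print(df)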
Q: How to explain the counter/map in this oneline code to count the frequency of a 2d array count = defaultdict(int, sum(map(Counter, board), Counter())) board is a 2d array: List[List[str]] I can understand that this one-line code is to count the frequency of the board, and we can write this way: count = defaultdict(int) for i in range(len(board)): for j in range(len(board[0]): count[board[i][j]] += 1 Could you help explain the one-line logic? Thank you! A: Let's say board is defined as such: board = [["hello", "hello"], ["world", "hello"]] The call to map gives us: >>> list(map(Counter, board)) [Counter({'hello': 2}), Counter({'world': 1, 'hello': 1})] We can try to sum these counters, but we will get an error: >>> sum(map(Counter, board)) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unsupported operand type(s) for +: 'int' and 'Counter' So we provide an initial value other than the default 0. >>> sum(map(Counter, board), Counter()) Counter({'hello': 3, 'world': 1}) And then convert that to a defaultdict presumably for the sake of code that follows this line. A: So, the explanation of the code is given in Chris's answer, but I think it should be noted that sum(map(Counter, board), Counter()) is kinda inefficient, since it needs to re-create a new Counter object for each item in board, and then a new one for each internal iteration of the sum. In the worst case, where every item is unique (everything has a count of 1) this degenerates into quadratic time behavior. Observe: In [5]: rs = [range(i, i+1000) for i in range(1, 10_000, 1000)] In [6]: rs Out[6]: [range(1, 1001), range(1001, 2001), range(2001, 3001), range(3001, 4001), range(4001, 5001), range(5001, 6001), range(6001, 7001), range(7001, 8001), range(8001, 9001), range(9001, 10001)] In [7]: %timeit sum(map(Counter, rs), Counter()) 15.2 ms ± 155 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [8]: %%timeit ...: counts = Counter() ...: for r in rs: ...: counts.update(r) ...: 500 µs ± 5.51 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) So, right off the bat, we are orders of magnitude slower using the map vs the naive loop. 30 times slower. But look how it scales when we double the size: In [15]: %timeit sum(map(Counter, rs), Counter()) 60.9 ms ± 1.37 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) In [16]: %%timeit ...: counts = Counter() ...: for r in rs: ...: counts.update(r) ...: 1.01 ms ± 8.37 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) Here we see the quadratic behavior - the map version quadrupled in time, whereas the naive for-loop only doubled (linear scaling). Let's double it yet again: In [19]: rs = [range(i, i+1000) for i in range(1, 40_000, 1000)] In [20]: %timeit sum(map(Counter, rs), Counter()) 244 ms ± 8.96 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [21]: %%timeit ...: counts = Counter() ...: for r in rs: ...: counts.update(r) ...: 2.13 ms ± 34.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) The quadratic vs linear behavior still holds, and now the difference in time is 2 orders of magnitude, 100 times slower for the map version.
How to explain the counter/map in this oneline code to count the frequency of a 2d array
count = defaultdict(int, sum(map(Counter, board), Counter())) board is a 2d array: List[List[str]] I can understand that this one-line code is to count the frequency of the board, and we can write it this way: count = defaultdict(int) for i in range(len(board)): for j in range(len(board[0])): count[board[i][j]] += 1 Could you help explain the one-line logic? Thank you!
[ "Let's say board is defined as such:\nboard = [[\"hello\", \"hello\"], [\"world\", \"hello\"]]\n\nThe call to map gives us:\n>>> list(map(Counter, board))\n[Counter({'hello': 2}), Counter({'world': 1, 'hello': 1})]\n\nWe can try to sum these counters, but we will get an error:\n>>> sum(map(Counter, board))\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: unsupported operand type(s) for +: 'int' and 'Counter'\n\nSo we provide an initial value other than the default 0.\n>>> sum(map(Counter, board), Counter())\nCounter({'hello': 3, 'world': 1})\n\nAnd then convert that to a defaultdict presumably for the sake of code that follows this line.\n", "So, the explanation of the code is given in Chris's answer, but I think it should be noted that sum(map(Counter, board), Counter()) is kinda inefficient, since it needs to re-create a new Counter object for each item in board, and then a new one for each internal iteration of the sum. In the worst case, where every item is unique (everything has a count of 1) this degenerates into quadratic time behavior. Observe:\nIn [5]: rs = [range(i, i+1000) for i in range(1, 10_000, 1000)]\n\nIn [6]: rs\nOut[6]:\n[range(1, 1001),\n range(1001, 2001),\n range(2001, 3001),\n range(3001, 4001),\n range(4001, 5001),\n range(5001, 6001),\n range(6001, 7001),\n range(7001, 8001),\n range(8001, 9001),\n range(9001, 10001)]\n\nIn [7]: %timeit sum(map(Counter, rs), Counter())\n15.2 ms ± 155 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\nIn [8]: %%timeit\n ...: counts = Counter()\n ...: for r in rs:\n ...: counts.update(r)\n ...:\n500 µs ± 5.51 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)\n\nSo, right off the bat, we are orders of magnitude slower using the map vs the naive loop. 30 times slower. But look how it scales when we double the size:\nIn [15]: %timeit sum(map(Counter, rs), Counter())\n60.9 ms ± 1.37 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\nIn [16]: %%timeit\n ...: counts = Counter()\n ...: for r in rs:\n ...: counts.update(r)\n ...:\n1.01 ms ± 8.37 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)\n\nHere we see the quadratic behavior - the map version quadrupled in time, whereas the naive for-loop only doubled (linear scaling).\nLet's double it yet again:\nIn [19]: rs = [range(i, i+1000) for i in range(1, 40_000, 1000)]\n\nIn [20]: %timeit sum(map(Counter, rs), Counter())\n244 ms ± 8.96 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nIn [21]: %%timeit\n ...: counts = Counter()\n ...: for r in rs:\n ...: counts.update(r)\n ...:\n2.13 ms ± 34.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\nThe quadratic vs linear behavior still holds, and now the difference in time is 2 orders of magnitude, 100 times slower for the map version.\n" ]
[ 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074566143_python.txt
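For reference, the linear-time behaviour the benchmarks above argue for can also be had as a one-liner: build a single Counter over the flattened board with itertools.chain instead of summing many intermediate Counters:

from collections import Counter
from itertools import chain

board = [["hello", "hello"], ["world", "hello"]]

count = Counter(chain.from_iterable(board))  # one pass, no intermediate Counters
print(count)  # Counter({'hello': 3, 'world': 1})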
Q: Merge datasets using pandas Below I have code which was provided to me in order to join 2 datasets. import pandas as pd from sklearn.datasets import load_iris import matplotlib.pyplot as plt df= pd.read_csv("student/student-por.csv") ds= pd.read_csv("student/student-mat.csv") print("before merge") print(df) print(ds) print("After merging:") dq = pd.merge(df,ds,by=c("school","sex","age","address","famsize","Pstatus","Medu","Fedu","Mjob","Fjob","reason","nursery","internet")) print(dq) I get this error: Traceback (most recent call last): File "/Users/PycharmProjects/datamining/main.py", line 15, in <module> dq = pd.merge(df, ds,by=c ("school","sex","age","address","famsize","Pstatus","Medu","Fedu","Mjob","Fjob","reason","nursery","internet")) NameError: name 'c' is not defined Any help would be great, I've tried messing about with it for a while. I believe the 'by=c' is the issue. Thanks A: Hi Hope you are doing well! The error is happening because of the c symbol in the arguments of the merge function. Also merge function has a different signature and it doesn't have the argument by but instead it should be on, which accepts only the list of columns So in summary it should something similar to this: import pandas as pd df = pd.read_csv("student/student-por.csv") ds = pd.read_csv("student/student-mat.csv") print("Before merge.") print(df) print(ds) print("After merge.") dq = pd.merge( left=df, right=ds, on=[ "school", "sex", "age", "address", "famsize", "Pstatus", "Medu", "Fedu", "Mjob", "Fjob", "reason", "nursery", "internet", ], ) print(dq) Docs: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html
Merge datasets using pandas
Below I have code which was provided to me in order to join 2 datasets. import pandas as pd from sklearn.datasets import load_iris import matplotlib.pyplot as plt df= pd.read_csv("student/student-por.csv") ds= pd.read_csv("student/student-mat.csv") print("before merge") print(df) print(ds) print("After merging:") dq = pd.merge(df,ds,by=c("school","sex","age","address","famsize","Pstatus","Medu","Fedu","Mjob","Fjob","reason","nursery","internet")) print(dq) I get this error: Traceback (most recent call last): File "/Users/PycharmProjects/datamining/main.py", line 15, in <module> dq = pd.merge(df, ds,by=c ("school","sex","age","address","famsize","Pstatus","Medu","Fedu","Mjob","Fjob","reason","nursery","internet")) NameError: name 'c' is not defined Any help would be great, I've tried messing about with it for a while. I believe the 'by=c' is the issue. Thanks
[ "Hi Hope you are doing well!\nThe error is happening because of the c symbol in the arguments of the merge function. Also merge function has a different signature and it doesn't have the argument by but instead it should be on, which accepts only the list of columns So in summary it should something similar to this:\nimport pandas as pd\n\ndf = pd.read_csv(\"student/student-por.csv\")\nds = pd.read_csv(\"student/student-mat.csv\")\n\nprint(\"Before merge.\")\nprint(df)\nprint(ds)\n\nprint(\"After merge.\")\ndq = pd.merge(\n left=df,\n right=ds,\n on=[\n \"school\",\n \"sex\",\n \"age\",\n \"address\",\n \"famsize\",\n \"Pstatus\",\n \"Medu\",\n \"Fedu\",\n \"Mjob\",\n \"Fjob\",\n \"reason\",\n \"nursery\",\n \"internet\",\n ],\n)\nprint(dq)\n\nDocs: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html\n" ]
[ 0 ]
[]
[]
[ "dataframe", "dataset", "pandas", "python" ]
stackoverflow_0074562882_dataframe_dataset_pandas_python.txt
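One detail the answer above leaves implicit: pd.merge defaults to how="inner", so only students present in both files survive, and overlapping non-key columns (the grades, for instance) get _x/_y suffixes unless you name them. A hedged sketch with toy frames standing in for the two csv files:

import pandas as pd

por = pd.DataFrame({"school": ["GP", "GP"], "sex": ["F", "M"], "age": [17, 16], "G3": [12, 10]})
mat = pd.DataFrame({"school": ["GP"], "sex": ["F"], "age": [17], "G3": [14]})

dq = pd.merge(por, mat, on=["school", "sex", "age"], how="inner", suffixes=("_por", "_mat"))
print(dq)  # one row: the student present in both frames, with G3_por and G3_mat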
Q: How do you create cogs and what is the error here? my main.py import discord from discord.ext import commands import os from secret import TOKEN intents = discord.Intents.default() intents.message_content = True intents.members = True client = commands.Bot(command_prefix='$', intents=intents, case_insensitive=True, owner_id=262215934322671616) @client.event async def on_ready(): print(f'Logged in as {client.user}') print("At Your Service, Sir") print("------------------") await client.change_presence(activity=discord.Game(name=f'Hi, my name is {client.user.name}. Currently in development, but come back later!')) for filename in os.listdir('./cogs'): if filename.endswith('.py'): client.load_extension(f'cogs. {filename[:-3]}') print(f'Loaded {filename[:-3]}') client.run(TOKEN) my corg (greetings.py) import discord from discord.ext import commands class Greetings(commands.Cog): def __init__(self, client): self.client = client @commands.command() async def hi(self, ctx): await ctx.send(f'seems that i can @ people now, so hey {ctx.author.mention}!') def setup(client): client.add_cog(Greetings(client)) i've tried a different approach in main.py: for filename in os.listdir('./cogs'): if filename.endswith('.py'): initial_extensions.append("cogs." + filename[:-3]) if __name__ == '__main__': for extension in initial_extensions: client.load_extension(extension) and ended up with error RuntimeWarning: coroutine 'BotBase.load_extension' was never awaited client.load_extension(extension) RuntimeWarning: Enable tracemalloc to get the object allocation traceback and still nothing. it doesn't show any error and loads the corg which can be seen through print(f'Loaded {filename[:-3]}') i have no experience with cogs and if someone can point me in the right direction i'd really appreciate this! A: coroutine 'BotBase.load_extension' was never awaited The error is telling you what the problem is (as errors usually do). load_extension is a coroutine and you're not awaiting it. Similarly, add_cog is also a coroutine that you're not awaiting, and your setup is not a coroutine while it should be. The migration guide explains how to adapt to this: https://discordpy.readthedocs.io/en/stable/migrating.html#extension-and-cog-loading-unloading-is-now-asynchronous Your code looks like you got it from an outdated tutorial somewhere. This is one of the main reasons (along with many others) why you shouldn't use tutorials and instead just read the docs. Also, don't change presence in on_ready. If you make API calls in on_ready Discord has a high chance to disconnect you. There's literally 0 reason to change presence here, you can just pass the status and activity when you initialize your Client instance...
How do you create cogs and what is the error here?
my main.py import discord from discord.ext import commands import os from secret import TOKEN intents = discord.Intents.default() intents.message_content = True intents.members = True client = commands.Bot(command_prefix='$', intents=intents, case_insensitive=True, owner_id=262215934322671616) @client.event async def on_ready(): print(f'Logged in as {client.user}') print("At Your Service, Sir") print("------------------") await client.change_presence(activity=discord.Game(name=f'Hi, my name is {client.user.name}. Currently in development, but come back later!')) for filename in os.listdir('./cogs'): if filename.endswith('.py'): client.load_extension(f'cogs. {filename[:-3]}') print(f'Loaded {filename[:-3]}') client.run(TOKEN) my cog (greetings.py) import discord from discord.ext import commands class Greetings(commands.Cog): def __init__(self, client): self.client = client @commands.command() async def hi(self, ctx): await ctx.send(f'seems that i can @ people now, so hey {ctx.author.mention}!') def setup(client): client.add_cog(Greetings(client)) I've tried a different approach in main.py: for filename in os.listdir('./cogs'): if filename.endswith('.py'): initial_extensions.append("cogs." + filename[:-3]) if __name__ == '__main__': for extension in initial_extensions: client.load_extension(extension) and ended up with error RuntimeWarning: coroutine 'BotBase.load_extension' was never awaited client.load_extension(extension) RuntimeWarning: Enable tracemalloc to get the object allocation traceback and still nothing. It doesn't show any error and loads the cog, which can be seen through print(f'Loaded {filename[:-3]}'). I have no experience with cogs, and if someone can point me in the right direction I'd really appreciate it!
[ "\ncoroutine 'BotBase.load_extension' was never awaited\n\nThe error is telling you what the problem is (as errors usually do). load_extension is a coroutine and you're not awaiting it. Similarly, add_cog is also a coroutine that you're not awaiting, and your setup is not a coroutine while it should be.\nThe migration guide explains how to adapt to this: https://discordpy.readthedocs.io/en/stable/migrating.html#extension-and-cog-loading-unloading-is-now-asynchronous\nYour code looks like you got it from an outdated tutorial somewhere. This is one of the main reasons (along with many others) why you shouldn't use tutorials and instead just read the docs.\nAlso, don't change presence in on_ready. If you make API calls in on_ready Discord has a high chance to disconnect you. There's literally 0 reason to change presence here, you can just pass the status and activity when you initialize your Client instance...\n" ]
[ 1 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074566202_discord_discord.py_python.txt
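Spelling out the migration the answer points to, a hedged sketch of the discord.py 2.x pattern: extensions are awaited from setup_hook, and the cog file's setup becomes a coroutine (the token is a placeholder, and Greetings is the class from the question):

# main.py
import os
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True

class MyBot(commands.Bot):
    async def setup_hook(self):
        for filename in os.listdir('./cogs'):
            if filename.endswith('.py'):
                # note: no stray space after "cogs."
                await self.load_extension(f'cogs.{filename[:-3]}')

client = MyBot(command_prefix='$', intents=intents)
client.run('TOKEN')  # placeholder

# cogs/greetings.py -- only the loader changes
async def setup(client):
    await client.add_cog(Greetings(client))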
Q: Python3: Decoding an 'application/x-gzip' response with Requests I'm using requests to download data from a website. The content is an EPG XML file packed in a compressed gz file. I've been googling and trying all night with no success. This is the relevant snip of my current stage. I'v tried to change the encoding to UTF-8 and ISO-8859-1, but it just gives me a different kind of nonsense. import xml.etree.ElementTree as ET import requests import gzip url = 'http://example.com' headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) ' 'AppleWebKit/537.36 (KHTML, like Gecko) ' 'Chrome/102.0.0.0 Safari/537.36' } def import_xml() -> list: try: response = requests.get(url, headers=headers, stream=True) print(response) print(response.headers['Content-Type']) print(response.encoding) print(response.text[:100]) print(response.content[:100]) data = import_xml This outputs the following: <Response [200]> application/x-gzip None ��v�J�-�~�8��^��%�u�vuI�/�����z*��HX ���o�?��t����;s��I� ,[�K{��e�@�̌��1#�����4 b'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x0b\xec\xbd\xdbv\xdbJ\x92-\xfa~\xc68\xff\x80^\x0f\xed\xee\xb1%\x97u\xf1\xbavu\x0fI\xbe/\xcb\xf6\xb6\xb4\xec\xaaz*\x90\x84HX \xc0\x02\x08\xc9\xf4o\xec?\xd8\xfdt\x1e\xce\xf9\x88\xae\x1f;s\xce\xc8\x04I\x98\x00\x05,[\x1e\xbbK{\x8f\xaee\x9b@\x02\x88\xcc\x8c\x8c\x981#\xe2\xdf' A: Generally gzipped content is served as application/gzip. It seems requests doesn't know what to do with application/x-gzip, so you will have to decode it manually. import gzip result = gzip.decompress(response.content)
Python3: Decoding an 'application/x-gzip' response with Requests
I'm using requests to download data from a website. The content is an EPG XML file packed in a compressed gz file. I've been googling and trying all night with no success. This is the relevant snip of my current stage. I'v tried to change the encoding to UTF-8 and ISO-8859-1, but it just gives me a different kind of nonsense. import xml.etree.ElementTree as ET import requests import gzip url = 'http://example.com' headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) ' 'AppleWebKit/537.36 (KHTML, like Gecko) ' 'Chrome/102.0.0.0 Safari/537.36' } def import_xml() -> list: try: response = requests.get(url, headers=headers, stream=True) print(response) print(response.headers['Content-Type']) print(response.encoding) print(response.text[:100]) print(response.content[:100]) data = import_xml This outputs the following: <Response [200]> application/x-gzip None ��v�J�-�~�8��^��%�u�vuI�/�����z*��HX ���o�?��t����;s��I� ,[�K{��e�@�̌��1#�����4 b'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x0b\xec\xbd\xdbv\xdbJ\x92-\xfa~\xc68\xff\x80^\x0f\xed\xee\xb1%\x97u\xf1\xbavu\x0fI\xbe/\xcb\xf6\xb6\xb4\xec\xaaz*\x90\x84HX \xc0\x02\x08\xc9\xf4o\xec?\xd8\xfdt\x1e\xce\xf9\x88\xae\x1f;s\xce\xc8\x04I\x98\x00\x05,[\x1e\xbbK{\x8f\xaee\x9b@\x02\x88\xcc\x8c\x8c\x981#\xe2\xdf'
[ "Generally gzipped content is served as application/gzip. It seems requests doesn't know what to do with application/x-gzip, so you will have to decode it manually.\nimport gzip\n\nresult = gzip.decompress(response.content)\n\n" ]
[ 0 ]
[]
[]
[ "gzip", "python", "python_requests", "xml" ]
stackoverflow_0074566359_gzip_python_python_requests_xml.txt
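Continuing the answer above, a minimal hedged sketch of the whole path from response to parsed EPG XML (the URL is a placeholder):

import gzip
import xml.etree.ElementTree as ET
import requests

response = requests.get('http://example.com/epg.xml.gz', timeout=30)
xml_bytes = gzip.decompress(response.content)  # undo the gzip layer by hand
root = ET.fromstring(xml_bytes)  # ElementTree honours the XML encoding declaration
print(root.tag)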
Q: Python with TKinter - how do you access data from an entry multiple times? In this program, I have the user input their username into a login entry. Once they click the login button, the login_user() function is called to verify that their username exists in a .txt file. I am trying to access the user's username in a later part of the program, more specifically in this line: welcomeLabel = ttk.Label(F3, text="Welcome, {}".format(usernameEntry), background='#EAE2E2', font=('Inter', '20')).grid() More specifically, I am trying to display a string "Welcome, USERNAME" on the home page, but when I reference the usernameEntry it gives the value None. I have also tried creating a global variable in the login_user() function and referencing that along with reading the loginString String variable, but those have not worked. I am new to TKinter, so any help would be appreciated - thanks! I also attached my code below. from tkinter import * from tkinter import ttk def raise_frame(frame): frame.tkraise() def login_user(): #Without this if-statement, the blank input is captured and goes to the second frame uInput = loginString.get() if uInput == '': return loginString.get() with open('username.txt', 'r') as searchUsername: textFileSearch = searchUsername.readlines() for row in textFileSearch: findUserName = row.find(uInput) if findUserName == 0: raise_frame(F3) break else: print('Your username does not exist') root = Tk() root.title("FLASHCARDS PROGRAM") root.resizable(width=False,height=False) style = ttk.Style() style.theme_use('default') #Login Screen Frame F1 = Frame(root, background = '#BACDAF', highlightbackground="black", highlightthickness=1) F1.grid(padx=410,pady=210) #Top-Label Frame login_top_frame = Frame(F1, background='#468D70') login_top_frame.grid(row=0, column=0, padx=10, pady=5) #Light-Gray Background Frame F2 = Frame(root, width=1280, height=720, background='#939393') F2.grid(row=0, column=0, sticky='news') #Home Screen Frame F3 = Frame(root, width=1280, height=720, background='#EAE2E2') F3.grid(row=0, column=0, sticky='news') header_frame = Frame(F3, width = 1000, height = 100, background='#E4DDF4', highlightbackground="black", highlightthickness=1) header_frame.grid(row=0, column=0, padx=10, pady=5, ipadx=475) #Login Screen Widgets loginHeader = ttk.Label(login_top_frame, text = "Flashcards For Free", background = '#468D70', font=('Inter','28','bold')).grid(column = 0, row = 0, padx = 90, pady = 20) loginLabel = ttk.Label(F1, text = "Username:", font=('Inter', '15'), background='#BACDAF').grid(column=0,row=1, padx=20, sticky=W) loginString = StringVar() usernameEntry = ttk.Entry(F1, width=10, textvariable=loginString, background='green').grid(column=0, row=2, padx=20, pady=5, ipady = 5, sticky=W) usernameInput = ttk.Button(F1, text="LOGIN", width = 8, command=login_user).grid(column=0, row=5, padx=21, sticky=W) createAccount = ttk.Button(F1, text="No Account? Click Here").grid(column=0, row=8, padx=125,pady=25) #Home Screen Widgets menuLabel = ttk.Label(header_frame, text="Flashcards for Free", background='#E4DDF4', font=('Inter', '15')).grid(row=0, column=1, padx=10) homeButton = ttk.Button(header_frame, text='Home').grid(row=0, column=2, padx=5) createButton = ttk.Button(header_frame, text='Create').grid(row=0, column=3, padx=5) welcomeLabel = ttk.Label(F3, text="Welcome, {}".format(usernameEntry), background='#EAE2E2', font=('Inter', '20')).grid() raise_frame(F2) raise_frame(F1) root.mainloop() A: You have to call the get method of the entry after the user has had a chance to enter some data. Your code is trying to use the value about a millisecond after you create the entry widget. You can do something like the following in the function that logs the user in: welcomeLabel.configure(text=f"Welcome {usernameEntry.get()}") You also have to make sure welcomeLabel is not None. See Tkinter: AttributeError: NoneType object has no attribute <attribute name> A: You have to make a .txt file and add your entries in there and can use the entries where ever you want to use them. You can use the following links: File Handling in Python Extract the data from line of text file A: Use loginString.get() to get the value of a tkinter variable.
Python with TKinter - how do you access data from an entry multiple times?
In this program, I have the user input their username into a login entry. Once they click the login button, the login_user() function is called to verify that their username exists in a .txt file. I am trying to access the user's username in a later part of the program, more specifically in this line: welcomeLabel = ttk.Label(F3, text="Welcome, {}".format(usernameEntry), background='#EAE2E2', font=('Inter', '20')).grid() More specifically, I am trying to display a string "Welcome, USERNAME" on the home page, but when I reference the usernameEntry it gives the value None. I have also tried creating a global variable in the login_user() function and referencing that along with reading the loginString String variable, but those have not worked. I am new to TKinter, so any help would be appreciated - thanks! I also attached my code below. from tkinter import * from tkinter import ttk def raise_frame(frame): frame.tkraise() def login_user(): #Without this if-statement, the blank input is captured and goes to the second frame uInput = loginString.get() if uInput == '': return loginString.get() with open('username.txt', 'r') as searchUsername: textFileSearch = searchUsername.readlines() for row in textFileSearch: findUserName = row.find(uInput) if findUserName == 0: raise_frame(F3) break else: print('Your username does not exist') root = Tk() root.title("FLASHCARDS PROGRAM") root.resizable(width=False,height=False) style = ttk.Style() style.theme_use('default') #Login Screen Frame F1 = Frame(root, background = '#BACDAF', highlightbackground="black", highlightthickness=1) F1.grid(padx=410,pady=210) #Top-Label Frame login_top_frame = Frame(F1, background='#468D70') login_top_frame.grid(row=0, column=0, padx=10, pady=5) #Light-Gray Background Frame F2 = Frame(root, width=1280, height=720, background='#939393') F2.grid(row=0, column=0, sticky='news') #Home Screen Frame F3 = Frame(root, width=1280, height=720, background='#EAE2E2') F3.grid(row=0, column=0, sticky='news') header_frame = Frame(F3, width = 1000, height = 100, background='#E4DDF4', highlightbackground="black", highlightthickness=1) header_frame.grid(row=0, column=0, padx=10, pady=5, ipadx=475) #Login Screen Widgets loginHeader = ttk.Label(login_top_frame, text = "Flashcards For Free", background = '#468D70', font=('Inter','28','bold')).grid(column = 0, row = 0, padx = 90, pady = 20) loginLabel = ttk.Label(F1, text = "Username:", font=('Inter', '15'), background='#BACDAF').grid(column=0,row=1, padx=20, sticky=W) loginString = StringVar() usernameEntry = ttk.Entry(F1, width=10, textvariable=loginString, background='green').grid(column=0, row=2, padx=20, pady=5, ipady = 5, sticky=W) usernameInput = ttk.Button(F1, text="LOGIN", width = 8, command=login_user).grid(column=0, row=5, padx=21, sticky=W) createAccount = ttk.Button(F1, text="No Account? Click Here").grid(column=0, row=8, padx=125,pady=25) #Home Screen Widgets menuLabel = ttk.Label(header_frame, text="Flashcards for Free", background='#E4DDF4', font=('Inter', '15')).grid(row=0, column=1, padx=10) homeButton = ttk.Button(header_frame, text='Home').grid(row=0, column=2, padx=5) createButton = ttk.Button(header_frame, text='Create').grid(row=0, column=3, padx=5) welcomeLabel = ttk.Label(F3, text="Welcome, {}".format(usernameEntry), background='#EAE2E2', font=('Inter', '20')).grid() raise_frame(F2) raise_frame(F1) root.mainloop()
[ "You have to call the get method of the entry after the user has had a chance to enter some data. Your code is trying to use the value about a millisecond after you create the entry widget.\nYou can do something like the following in the function that logs the user in:\nwelcomeLabel.configure(text=f\"Welcome {usernameEntry.get()}\")\n\nYou also have to make sure welcomeLabel is not None. See Tkinter: AttributeError: NoneType object has no attribute <attribute name>\n", "You have to make a .txt file and add your entries in there and can use the entries where ever you want to use them. You can use the following links:\nFile Handling in Python\nExtract the data from line of text file\n", "Use loginString.get() to get the value of a tkinter variable.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "tkinter", "tkinter_entry" ]
stackoverflow_0074566194_python_tkinter_tkinter_entry.txt
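A stripped-down hedged sketch of the fix in the first answer: call grid() on its own line so the variable holds the widget rather than None, and read the StringVar only inside the login callback:

import tkinter as tk
from tkinter import ttk

root = tk.Tk()
loginString = tk.StringVar()

entry = ttk.Entry(root, textvariable=loginString)
entry.grid()  # grid() returns None, so keep it off the assignment line

welcomeLabel = ttk.Label(root, text="")
welcomeLabel.grid()

def login_user():
    # runs when the button is clicked, after the user has typed
    welcomeLabel.configure(text=f"Welcome, {loginString.get()}")

ttk.Button(root, text="LOGIN", command=login_user).grid()
root.mainloop()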
Q: How to retrieve Stripe Subscription using customer email Given a customer's email address how do I access their subscriptions, specifically their subscription status. There is only 1 subscription service I provide, so any queries should only bring up one result. e.g. import stripe def functionA(customer_email): ... return customer_sub sub = stripe.Subscription.retrieve(functionA("xyz@gmail.com")) status = sub.status A: First you need to get the customer ID, either using list (case-sensitive) or search (case insensitive): stripe.Customer.list(email="Test@example.com") stripe.Customer.search(query="email:'test@example.com'") then you can list Subscriptions for that customer id: stripe.Subscription.list(customer="cus_123")
How to retrieve Stripe Subscription using customer email
Given a customer's email address, how do I access their subscriptions, specifically their subscription status? There is only 1 subscription service I provide, so any queries should only bring up one result. e.g. import stripe def functionA(customer_email): ... return customer_sub sub = stripe.Subscription.retrieve(functionA("xyz@gmail.com")) status = sub.status
[ "First you need to get the customer ID, either using list (case-sensitive) or search (case insensitive):\nstripe.Customer.list(email=\"Test@example.com\")\nstripe.Customer.search(query=\"email:'test@example.com'\")\n\nthen you can list Subscriptions for that customer id:\nstripe.Subscription.list(customer=\"cus_123\")\n\n" ]
[ 1 ]
[]
[]
[ "python", "stripe_payments" ]
stackoverflow_0074565120_python_stripe_payments.txt
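Putting the two calls from the answer together, a hedged sketch of the lookup function the question asks for (assumes stripe.api_key is configured and that an email matches at most one customer):

import stripe

def get_subscription_status(customer_email):
    customers = stripe.Customer.list(email=customer_email, limit=1)
    if not customers.data:
        return None  # no such customer
    subs = stripe.Subscription.list(customer=customers.data[0].id, limit=1)
    return subs.data[0].status if subs.data else None

status = get_subscription_status("xyz@gmail.com")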
Q: AWS Wrangler S3 reading parquet, writing to DynamoDB - Unsupported type numpy.ndarray I am trying to read parquet into a dataframe with AWS Wrangler; while writing this data to DynamoDB it's erroring out with an unsupported type error - Unsupported type numpy.ndarray for value....... wr.s3.read_parquet(path=s3_path, dataset=dataset, chunked=True) and writing like wr.dynamodb.put_df(df=df, table_name=table_name) Is there a way I can convert the ndarray type to a DynamoDB list? I don't want to lose the readability of the array if I use np.tobytes() to write and np.frombuffer() to read again. Also, users will not have access to NumPy to read the data from DynamoDB. A: Finally able to get the answer, so it worked like this for me using tolist()- df["key"] = df["key"].apply(lambda x: x.tolist()) wr.dynamodb.put_df(df, "test_table")
AWS Wrangler S3 reading parquet, writing to DynamoDB - Unsupported type numpy.ndarray
I am trying to read parquet into a dataframe with AWS Wrangler; while writing this data to DynamoDB it's erroring out with an unsupported type error - Unsupported type numpy.ndarray for value....... wr.s3.read_parquet(path=s3_path, dataset=dataset, chunked=True) and writing like wr.dynamodb.put_df(df=df, table_name=table_name) Is there a way I can convert the ndarray type to a DynamoDB list? I don't want to lose the readability of the array if I use np.tobytes() to write and np.frombuffer() to read again. Also, users will not have access to NumPy to read the data from DynamoDB.
[ "Finally able to get the answer, so it worked like this for me using tolist()-\ndf[\"key\"] = df[\"key\"].apply(lambda x: x.tolist())\n\nwr.dynamodb.put_df(df, \"test_table\") \n\n" ]
[ 0 ]
[]
[]
[ "amazon_dynamodb", "amazon_web_services", "aws_data_wrangler", "numpy", "python" ]
stackoverflow_0074549552_amazon_dynamodb_amazon_web_services_aws_data_wrangler_numpy_python.txt
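Generalizing the accepted fix, a hedged sketch that converts every ndarray cell to a plain Python list before the write, so no column has to be named by hand (a tiny frame stands in for the parquet data, and the awswrangler call is left commented since it needs live AWS credentials):

import numpy as np
import pandas as pd

df = pd.DataFrame({"key": [np.array([1, 2]), np.array([3])]})  # stand-in frame

df = df.applymap(lambda v: v.tolist() if isinstance(v, np.ndarray) else v)
print(df)
# import awswrangler as wr
# wr.dynamodb.put_df(df=df, table_name="test_table")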
Q: Extract most central area in a Binary Image I am processing binary images, and was previously using this code to find the largest area in the binary image: # Use the hue value to convert to binary thresh = 20 thresh, thresh_img = cv2.threshold(h, thresh, 255, cv2.THRESH_BINARY) cv2.imshow('thresh', thresh_img) cv2.waitKey(0) cv2.destroyAllWindows() # Finding Contours # Use a copy of the image since findContours alters the image contours, _ = cv2.findContours(thresh_img.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) #Extract the largest area c = max(contours, key=cv2.contourArea) This code isn't really doing what I need it to do; now I think it would be better to extract the most central area in the binary image. Binary Image Largest Image This is currently what the code is extracting, but I am hoping to get the central circle in the first binary image extracted. A: OpenCV comes with a point-polygon test function (for contours). It even gives a signed distance, if you ask for that. I'll find the contour that is closest to the center of the picture. That may be a contour actually overlapping the center of the picture. Timings, on my quadcore from 2012, give or take a millisecond: findContours: ~1 millisecond all pointPolygonTests and argmax: ~1 millisecond mask = cv.imread("fkljm.png", cv.IMREAD_GRAYSCALE) (height, width) = mask.shape ret, mask = cv.threshold(mask, 128, 255, cv.THRESH_BINARY) # required because the sample picture isn't exactly clean # get contours contours, hierarchy = cv.findContours(mask, cv.RETR_LIST | cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE) center = (np.array([width, height]) - 1) / 2 # find contour closest to center of picture distances = [ cv.pointPolygonTest(contour, center, True) # looking for most positive (inside); negative is outside for contour in contours ] iclosest = np.argmax(distances) print("closest contour is", iclosest, "with distance", distances[iclosest]) # draw closest contour canvas = cv.cvtColor(mask, cv.COLOR_GRAY2BGR) cv.drawContours(image=canvas, contours=[contours[iclosest]], contourIdx=-1, color=(0, 255, 0), thickness=5) closest contour is 45 with distance 65.19202405202648 a cv.floodFill() on the center point can also quickly yield a labeling on that blob... assuming the mask is positive there. Otherwise, there needs to be search. (cx, cy) = center.astype(int) assert mask[cy,cx], "floodFill not applicable" # trying cv.floodFill on the image center mask2 = mask >> 1 # turns everything else gray cv.floodFill(image=mask2, mask=None, seedPoint=center.astype(int), newVal=255) # use (mask2 == 255) to identify that blob This also takes less than a millisecond. Some practically faster approaches might involve a pyramid scheme (low-res versions of the mask) to quickly identify areas of the picture that are candidates for an exact test (distance/intersection). Test target pixel. Hit (positive)? Done. Calculate low-res mask. Per block, if any pixel is positive, block is positive. Find positive blocks, sort by distance, examine closer all those that are within sqrt(2) * blocksize of the best distance. There are several ways you define "most central." I chose to define it as the region with the closest distance to the point you're searching for. If the point is inside the region, then that distance will be zero. I also chose to do this with a pixel-based approach rather than a polygon-based approach, like you're doing with findContours(). Here's a step-by-step breakdown of what this code is doing. Load the image, put it into grayscale, and threshold it. You're already doing these things. Identify connected components of the image. Connected components are places where there are white pixels which are directly connected to other white pixels. This breaks up the image into regions. Using np.argwhere(), convert a true/false mask into an array of coordinates. For each coordinate, compute the Euclidean distance between that point and search_point. Find the minimum within each region. Across all regions, find the smallest distance. import cv2 import numpy as np img = cv2.imread('test197_img.png') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) _, thresh_img = cv2.threshold(gray,127,255,cv2.THRESH_BINARY) n_groups, comp_grouped = cv2.connectedComponents(thresh_img) components = [] search_point = [600, 150] for i in range(1, n_groups): mask = (comp_grouped == i) component_coords = np.argwhere(mask)[:, ::-1] min_distance = np.sqrt(((component_coords - search_point) ** 2).sum(axis=1)).min() components.append({ 'mask': mask, 'min_distance': min_distance, }) closest = min(components, key=lambda x: x['min_distance'])['mask'] Output:
Extract most central area in a Binary Image
I am processing binary images, and was previously using this code to find the largest area in the binary image: # Use the hue value to convert to binary thresh = 20 thresh, thresh_img = cv2.threshold(h, thresh, 255, cv2.THRESH_BINARY) cv2.imshow('thresh', thresh_img) cv2.waitKey(0) cv2.destroyAllWindows() # Finding Contours # Use a copy of the image since findContours alters the image contours, _ = cv2.findContours(thresh_img.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) #Extract the largest area c = max(contours, key=cv2.contourArea) This code isn't really doing what I need it to do, now I think it would better to extract the most central area in the binary image. Binary Image Largest Image This is currently what the code is extracting, but I am hoping to get the central circle in the first binary image extracted.
[ "OpenCV comes with a point-polygon test function (for contours). It even gives a signed distance, if you ask for that.\nI'll find the contour that is closest to the center of the picture. That may be a contour actually overlapping the center of the picture.\nTimings, on my quadcore from 2012, give or take a millisecond:\n\nfindContours: ~1 millisecond\nall pointPolygonTests and argmax: ~1 millisecond\n\nmask = cv.imread(\"fkljm.png\", cv.IMREAD_GRAYSCALE)\n(height, width) = mask.shape\nret, mask = cv.threshold(mask, 128, 255, cv.THRESH_BINARY) # required because the sample picture isn't exactly clean\n\n# get contours\ncontours, hierarchy = cv.findContours(mask, cv.RETR_LIST | cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)\n\ncenter = (np.array([width, height]) - 1) / 2\n\n# find contour closest to center of picture\ndistances = [\n cv.pointPolygonTest(contour, center, True) # looking for most positive (inside); negative is outside\n for contour in contours\n]\niclosest = np.argmax(distances)\nprint(\"closest contour is\", iclosest, \"with distance\", distances[iclosest])\n\n# draw closest contour\ncanvas = cv.cvtColor(mask, cv.COLOR_GRAY2BGR)\ncv.drawContours(image=canvas, contours=[contours[iclosest]], contourIdx=-1, color=(0, 255, 0), thickness=5)\n\nclosest contour is 45 with distance 65.19202405202648\n\n\n\na cv.floodFill() on the center point can also quickly yield a labeling on that blob... assuming the mask is positive there. Otherwise, there needs to be search.\n(cx, cy) = center.astype(int)\nassert mask[cy,cx], \"floodFill not applicable\"\n\n# trying cv.floodFill on the image center\nmask2 = mask >> 1 # turns everything else gray\ncv.floodFill(image=mask2, mask=None, seedPoint=center.astype(int), newVal=255)\n\n# use (mask2 == 255) to identify that blob\n\nThis also takes less than a millisecond.\n\n\nSome practically faster approaches might involve a pyramid scheme (low-res versions of the mask) to quickly identify areas of the picture that are candidates for an exact test (distance/intersection).\n\nTest target pixel. Hit (positive)? Done.\nCalculate low-res mask. Per block, if any pixel is positive, block is positive.\nFind positive blocks, sort by distance, examine closer all those that are within sqrt(2) * blocksize of the best distance.\n\n", "There are several ways you define \"most central.\" I chose to define it as the region with the closest distance to the point you're searching for. If the point is inside the region, then that distance will be zero.\nI also chose to do this with a pixel-based approach rather than a polygon-based approach, like you're doing with findContours().\nHere's a step-by-step breakdown of what this code is doing.\n\nLoad the image, put it into grayscale, and threshold it. You're already doing these things.\nIdentify connected components of the image. Connected components are places where there are white pixels which are directly connected to other white pixels. 
This breaks up the image into regions.\nUsing np.argwhere(), convert a true/false mask into an array of coordinates.\nFor each coordinate, compute the Euclidean distance between that point and search_point.\nFind the minimum within each region.\nAcross all regions, find the smallest distance.\n\nimport cv2\nimport numpy as np\n\nimg = cv2.imread('test197_img.png')\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n_, thresh_img = cv2.threshold(gray,127,255,cv2.THRESH_BINARY)\nn_groups, comp_grouped = cv2.connectedComponents(thresh_img)\ncomponents = []\nsearch_point = [600, 150]\nfor i in range(1, n_groups):\n mask = (comp_grouped == i)\n component_coords = np.argwhere(mask)[:, ::-1]\n min_distance = np.sqrt(((component_coords - search_point) ** 2).sum(axis=1)).min()\n components.append({\n 'mask': mask,\n 'min_distance': min_distance,\n })\nclosest = min(components, key=lambda x: x['min_distance'])['mask']\n\nOutput:\n\n" ]
[ 4, 1 ]
[]
[]
[ "binary_image", "image_processing", "opencv", "python" ]
stackoverflow_0074564868_binary_image_image_processing_opencv_python.txt
Q: Selecting row from a Pandas DataFrame based on constraints I have several datasets that I import as csv files and display them in a DataFrame in Pandas. The csv files are info about Covid updates. The datasets have several columns relating to this, for example "country_region", "last_update" & "confirmed". Let's say I wanted to look up the confirmed cases of Covid for Germany. I'm trying to write a function that will return a slice of the DataFrame that corresponds to those constraints to be able to display the match I'm looking for. I need to do this in some generic way so I can provide any value from any column. I wish I had some code to include but I'm stuck on how to even proceed. Everything I find online only covers looking up values relating to a pre-defined value. A: Something like this? def filter(country_region_val, last_update_val, confirmed_val, df): df = df.loc[(df['country_region'] == country_region_val) & (df['last_update'] == last_update_val) & (df['confirmed'] == confirmed_val)].reset_index(drop=True) return df
Selecting row from a Pandas DataFrame based on constraints
I have several datasets that I import as csv files and display them in a DataFrame in Pandas. The csv files are info about Covid updates. The datasets have several columns relating to this, for example "country_region", "last_update" & "confirmed". Let's say I wanted to look up the confirmed cases of Covid for Germany. I'm trying to write a function that will return a slice of the DataFrame that corresponds to those constraints to be able to display the match I'm looking for. I need to do this in some generic way so I can provide any value from any column. I wish I had some code to include but I'm stuck on how to even proceed. Everything I find online only covers looking up values relating to a pre-defined value.
[ "Something like this?\ndef filter(county_region_val, last_update_val, confirmed_val, df):\n df = df.loc[((df['county_region'] == county_region_val) & (df['last_update'] == last_update_val) & (df[''confirmed'] == confirmed_val)).reset_index(drop=True)\n return df\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074551104_dataframe_pandas_python.txt
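Because the question asks for something generic over any column, a hedged sketch using keyword arguments so callers name the constraints at the call site (toy data stands in for the Covid csv):

import pandas as pd

def filter_rows(df, **constraints):
    mask = pd.Series(True, index=df.index)
    for column, value in constraints.items():
        mask &= df[column] == value
    return df[mask].reset_index(drop=True)

df = pd.DataFrame({"country_region": ["Germany", "France"], "confirmed": [100, 80]})
print(filter_rows(df, country_region="Germany"))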
Q: Binary String To Plaintext I have a string which is like that "00100101 10010011 01010100". Every 8 bits there is a space and the string might be bigger. I want to convert it to plaintext with python3. I am new in python and I tried some solutions I found here but without success. A: string = "00100101 10010011 01010100" string_list = string.split() def bin2str(s): return ''.join([chr(int(s[i:i+8], 2)) for i in range(0, len(s), 8)]) for s in string_list: print(bin2str(s)) Would solve your problem and print '%', '\x93' and 'T' A: I think this should work bits = "00100101 10010011 01010100" #split the string into a list of strings of 8 bits bits_list = bits.split(" ") #convert it with chr(int(x,2)) and print to screen "".join([chr(int(x,2)) for x in bits_list]) or to print bits = "00100101 10010011 01010100" #split the string into a list of strings of 8 bits bits_list = bits.split(" ") #convert it with chr(int(x,2)) and join into a new string _ = [print(chr(int(x,2))) for x in bits_list]
Binary String To Plaintext
I have a string which looks like this: "00100101 10010011 01010100". Every 8 bits there is a space and the string might be bigger. I want to convert it to plaintext with python3. I am new to Python and I tried some solutions I found here but without success.
[ "string = \"00100101 10010011 01010100\"\nstring_list = string.split()\n\ndef bin2str(s):\n return ''.join([chr(int(s[i:i+8], 2)) for i in range(0, len(s), 8)])\n\nfor s in string_list:\n print(bin2str(s))\n\nWould solve your problem and print '%', '\\x93' and 'T'\n", "I think this should work\nbits = \"00100101 10010011 01010100\"\n#split the string into a list of strings of 8 bits \nbits_list = bits.split(\" \")\n#convert it with chr(int(x,2)) and print to screen\n\"\".join([chr(int(x,2)) for x in bits_list])\n\nor to print\nbits = \"00100101 10010011 01010100\"\n#split the string into a list of strings of 8 bits \nbits_list = bits.split(\" \")\n#convert it with chr(int(x,2)) and join into a new string\n_ = [print(chr(int(x,2))) for x in bits_list]\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074566388_python.txt
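An equivalent hedged one-liner built on bytes: latin-1 maps every byte value to a character, so the decode never raises (swap in 'utf-8' if the bits are known to encode UTF-8 text):

bits = "00100101 10010011 01010100"
text = bytes(int(chunk, 2) for chunk in bits.split()).decode("latin-1")
print(text)  # the three bytes decode to '%', '\x93', 'T'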
Q: Getting "path" is not defined Pylance Error in my Flask app This is my code in init.py from flask import Flask from flask_sqlalchemy import SQLAlchemy db = SQLAlchemy() DB_NAME = 'database.db' def create_app(): app = Flask(__name__) app.config['SECRET_KEY'] = 'secret' app.config['SQLALCHEMY_DATABASE_URI'] = f'sqlite:///{DB_NAME}' db.init_app(app) from .views import views from .auth import auth app.register_blueprint(views, url_prefix='/') app.register_blueprint(auth, url_prefix='/') from .models import User, Note create_database(app) return app def create_database(app): if not path.exists('website/' + DB_NAME): db.create_all(app=app) print('Created Database!') i tried changing my python interpreter but still getting same error When I run main.py it throws this error NameError: name 'path' is not defined also path.exists shows "path" is not defined Pylance Thanks for help A: The problem was i didnt import path from os Error Fix
Getting "path" is not defined Pylance Error in my Flask app
This is my code in init.py from flask import Flask from flask_sqlalchemy import SQLAlchemy db = SQLAlchemy() DB_NAME = 'database.db' def create_app(): app = Flask(__name__) app.config['SECRET_KEY'] = 'secret' app.config['SQLALCHEMY_DATABASE_URI'] = f'sqlite:///{DB_NAME}' db.init_app(app) from .views import views from .auth import auth app.register_blueprint(views, url_prefix='/') app.register_blueprint(auth, url_prefix='/') from .models import User, Note create_database(app) return app def create_database(app): if not path.exists('website/' + DB_NAME): db.create_all(app=app) print('Created Database!') i tried changing my python interpreter but still getting same error When I run main.py it throws this error NameError: name 'path' is not defined also path.exists shows "path" is not defined Pylance Thanks for help
[ "The problem was i didnt import path from os\nError Fix\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074566473_python.txt
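Spelled out, the one-line fix the answer refers to (path lives in the standard library's os module):

from os import path  # the import the NameError was complaining about

print(path.exists('website/database.db'))  # resolves normally once imported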
Q: Django: html input type=input and checkbox I have a problem with send data from input type='number' to django view. I have a page with products, each of them has a checkbox and a quantity selection (input type='number') <form action="{% url 'create-order' %}" method="POST"> {% csrf_token %} <table class="table table-responsive table-borderless"> <thead> <th>&nbsp;</th> <th>Quantity</th> </thead> <tbody> {% for item in items %} <tr class="align-middle alert border-bottom"> <td> <input type="checkbox" id="check" name="item" value="{{ item.id }}"> </td> <td> <input class="input" min="1" value=1 type="number" name="quantity"> </td> </tr> {% endfor %} </tbody> </table> <div class="submitButton"> <button type="submit" class="lgbtn green">Go to order</button> </div> </form> Submit button go to view: def create_order(request): quantities = request.POST.getlist('quantity') items = request.POST.getlist('item') return JsonResponse({ 'quantities': quantities, 'items': items }) For example, I have 6 products with id = 1, 2, 3, 4, 5, 6. And if I choose 1, 2, 3 and set quantities: 3, 4, 5, then I get: items = [1, 2, 3] # it's OK quantities = [3, 4, 5, 1, 1, 1] # but I need [3, 4, 5] Ideally, I want items and quantities to be in the same object (For example [(1, 3), (2, 4), (3, 5)] or dict {1: 3, 2: 4, 3: 5}), but not necessarily, but in any case, I need to select the quantity only for those items that have been checked A: <input class="input" min="1" value=1 type="number" name="quantity"> This tag basically ensures the value will be at least 1. Try <input class="input" min="0" value=0 type="number" name="quantity"> For the next part you need to match the correct item with its quantity and skip if the quantity is 0 - but we also need some error checking, and to do that which have to know which quanity goes with which ID. You can do that with something like <input class="input" min="0" value=0 type="number" name="quantity-{{item.id}}"> def create_order(request): #get fields from the form items = request.POST.getlist('item') #create a result array to hold pairs result = [] for item in items: quantity_name = "quantity-" + str(item) item_quantity = request.POST.get(quantity_name) if item_quantity==0: #handle error submitted item with 0 quantity else: result.append( (item, item_quantity) ) return JsonResponse({ 'result':result, }) While this won't check for unsubmitted items with positive quantities (you may need JS for that) it will behave as you'd expect a form to.
Django: html input type=input and checkbox
I have a problem with send data from input type='number' to django view. I have a page with products, each of them has a checkbox and a quantity selection (input type='number') <form action="{% url 'create-order' %}" method="POST"> {% csrf_token %} <table class="table table-responsive table-borderless"> <thead> <th>&nbsp;</th> <th>Quantity</th> </thead> <tbody> {% for item in items %} <tr class="align-middle alert border-bottom"> <td> <input type="checkbox" id="check" name="item" value="{{ item.id }}"> </td> <td> <input class="input" min="1" value=1 type="number" name="quantity"> </td> </tr> {% endfor %} </tbody> </table> <div class="submitButton"> <button type="submit" class="lgbtn green">Go to order</button> </div> </form> Submit button go to view: def create_order(request): quantities = request.POST.getlist('quantity') items = request.POST.getlist('item') return JsonResponse({ 'quantities': quantities, 'items': items }) For example, I have 6 products with id = 1, 2, 3, 4, 5, 6. And if I choose 1, 2, 3 and set quantities: 3, 4, 5, then I get: items = [1, 2, 3] # it's OK quantities = [3, 4, 5, 1, 1, 1] # but I need [3, 4, 5] Ideally, I want items and quantities to be in the same object (For example [(1, 3), (2, 4), (3, 5)] or dict {1: 3, 2: 4, 3: 5}), but not necessarily, but in any case, I need to select the quantity only for those items that have been checked
[ "<input class=\"input\" min=\"1\" value=1 type=\"number\" name=\"quantity\">\n\nThis tag basically ensures the value will be at least 1. Try\n <input class=\"input\" min=\"0\" value=0 type=\"number\" name=\"quantity\">\n\nFor the next part you need to match the correct item with its quantity and skip if the quantity is 0 - but we also need some error checking, and to do that which have to know which quanity goes with which ID. You can do that with something like\n<input class=\"input\" min=\"0\" value=0 type=\"number\" name=\"quantity-{{item.id}}\">\n\n\ndef create_order(request):\n #get fields from the form\n items = request.POST.getlist('item')\n #create a result array to hold pairs\n result = []\n for item in items:\n quantity_name = \"quantity-\" + str(item)\n item_quantity = request.POST.get(quantity_name)\n if item_quantity==0:\n #handle error submitted item with 0 quantity\n else:\n result.append( (item, item_quantity) ) \n\n return JsonResponse({\n 'result':result,\n })\n\nWhile this won't check for unsubmitted items with positive quantities (you may need JS for that) it will behave as you'd expect a form to.\n" ]
[ 1 ]
[]
[]
[ "checkbox", "django", "html", "python" ]
stackoverflow_0074565401_checkbox_django_html_python.txt
Q: Manipulating column names in a multiindex dataframe I converted the following dictionary to a dataframe: dic = {'US':{'Traffic':{'new':1415, 'repeat':670}, 'Sales':{'new':67068, 'repeat':105677}}, 'UK': {'Traffic':{'new':230, 'repeat':156}, 'Sales':{'new':4568, 'repeat':10738}}} d1 = defaultdict(dict) for k, v in dic.items(): for k1, v1 in v.items(): for k2, v2 in v1.items(): d1[(k, k2)].update({k1: v2}) df.insert(loc=2, column=' ', value=None) df.insert(loc=0, column='Mode', value='Website') df.columns = df.columns.rename("Metric", level=1) The dataframe currently looks like: How do I move the column header - Mode to the following row? To get an output of this sort: A: Change this: df.insert(loc=0, column='Mode', value='Website') to this: df.insert(loc=0, column=('', 'Mode'), value='Website') then your full code looks like this: import pandas as pd from collections import defaultdict dic = {'US':{'Traffic':{'new':1415, 'repeat':670}, 'Sales':{'new':67068, 'repeat':105677}}, 'UK': {'Traffic':{'new':230, 'repeat':156}, 'Sales':{'new':4568, 'repeat':10738}}} d1 = defaultdict(dict) for k, v in dic.items(): for k1, v1 in v.items(): for k2, v2 in v1.items(): d1[(k, k2)].update({k1: v2}) df = pd.DataFrame.from_dict(d1) df.insert(loc=0, column=('', 'Mode'), value='Website') and this is your df Rinse and repeat with your empty column between US and UK. (though, admittedly, this looks like a strange way of handling stuff)
Manipulating column names in a multiindex dataframe
I converted the following dictionary to a dataframe: dic = {'US':{'Traffic':{'new':1415, 'repeat':670}, 'Sales':{'new':67068, 'repeat':105677}}, 'UK': {'Traffic':{'new':230, 'repeat':156}, 'Sales':{'new':4568, 'repeat':10738}}} d1 = defaultdict(dict) for k, v in dic.items(): for k1, v1 in v.items(): for k2, v2 in v1.items(): d1[(k, k2)].update({k1: v2}) df.insert(loc=2, column=' ', value=None) df.insert(loc=0, column='Mode', value='Website') df.columns = df.columns.rename("Metric", level=1) The dataframe currently looks like: How do I move the column header - Mode to the following row? To get an output of this sort:
[ "Change this:\ndf.insert(loc=0, column='Mode', value='Website')\n\nto this:\ndf.insert(loc=0, column=('', 'Mode'), value='Website')\n\nthen your full code looks like this:\nimport pandas as pd\nfrom collections import defaultdict\n\ndic = {'US':{'Traffic':{'new':1415, 'repeat':670}, 'Sales':{'new':67068, 'repeat':105677}},\n 'UK': {'Traffic':{'new':230, 'repeat':156}, 'Sales':{'new':4568, 'repeat':10738}}}\nd1 = defaultdict(dict)\nfor k, v in dic.items():\n for k1, v1 in v.items():\n for k2, v2 in v1.items():\n d1[(k, k2)].update({k1: v2})\n\ndf = pd.DataFrame.from_dict(d1)\ndf.insert(loc=0, column=('', 'Mode'), value='Website')\n\nand this is your df\n\nRinse and repeat with your empty column between US and UK.\n(though, admittedly, this looks like a strange way of handling stuff)\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074564504_pandas_python.txt