content: string (lengths 85 to 101k)
title: string (lengths 0 to 150)
question: string (lengths 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (lengths 35 to 137)
Q: Support for Enum arguments in argparse Is there a better way of supporting Enums as types of argparse arguments than this pattern? class SomeEnum(Enum): ONE = 1 TWO = 2 parser.add_argument('some_val', type=str, default='one', choices=[i.name.lower() for i in SomeEnum]) ... args.some_val = SomeEnum[args.some_val.upper()] A: I see this is an old question, but I just came across the same problem (Python 2.7) and here's how I solved it: from argparse import ArgumentParser from enum import Enum class Color(Enum): red = 'red' blue = 'blue' green = 'green' def __str__(self): return self.value parser = ArgumentParser() parser.add_argument('color', type=Color, choices=list(Color)) opts = parser.parse_args() print 'your color was:', opts.color Note that defining __str__ is required to get ArgumentParser's help output to include the human readable (values) of Color. Some sample invocations: => python enumtest.py blue your color was: blue => python enumtest.py not-a-color usage: enumtest.py [-h] {blue,green,red} enumtest.py: error: argument color: invalid Color value: 'not-a-color' => python enumtest.py -h usage: enumtest.py [-h] {blue,green,red} positional arguments: {blue,green,red} Since the OP's question specified integers as values, here is a slightly modified version that works in that case (using the enum names, rather than the values, as the command line args): class Color(Enum): red = 1 blue = 2 green = 3 def __str__(self): return self.name parser = ArgumentParser() parser.add_argument('color', type=lambda color: Color[color], choices=list(Color)) The only drawback there is that a bad parameter causes an ugly KeyError. That's easily solved by adding just a bit more code, converting the lambda into a proper function. class Color(Enum): red = 1 blue = 2 green = 3 def __str__(self): return self.name @staticmethod def from_string(s): try: return Color[s] except KeyError: raise ValueError() parser = ArgumentParser() parser.add_argument('color', type=Color.from_string, choices=list(Color)) A: Just came across this issue also; however, all of the proposed solutions require adding new methods to the Enum definition. argparse includes a way of supporting an enum cleanly using actions. The solution using a custom Action: import argparse import enum class EnumAction(argparse.Action): """ Argparse action for handling Enums """ def __init__(self, **kwargs): # Pop off the type value enum_type = kwargs.pop("type", None) # Ensure an Enum subclass is provided if enum_type is None: raise ValueError("type must be assigned an Enum when using EnumAction") if not issubclass(enum_type, enum.Enum): raise TypeError("type must be an Enum when using EnumAction") # Generate choices from the Enum kwargs.setdefault("choices", tuple(e.value for e in enum_type)) super(EnumAction, self).__init__(**kwargs) self._enum = enum_type def __call__(self, parser, namespace, values, option_string=None): # Convert value back into an Enum value = self._enum(values) setattr(namespace, self.dest, value) Usage class Do(enum.Enum): Foo = "foo" Bar = "bar" parser = argparse.ArgumentParser() parser.add_argument('do', type=Do, action=EnumAction) The advantages of this solution are that it will work with any Enum without requiring additional boilerplate code while remaining simple to use. If you prefer to specify the enum by name change: tuple(e.value for e in enum_type) to tuple(e.name for e in enum_type) value = self._enum(values) to value = self._enum[values] A: This in an improvement on ron rothman's answer. 
By also overriding __repr__ and changing to_string a bit, we can get a better error message from argparse when the user enters a bad value. import argparse import enum class SomeEnum(enum.IntEnum): ONE = 1 TWO = 2 # magic methods for argparse compatibility def __str__(self): return self.name.lower() def __repr__(self): return str(self) @staticmethod def argparse(s): try: return SomeEnum[s.upper()] except KeyError: return s parser = argparse.ArgumentParser() parser.add_argument('some_val', type=SomeEnum.argparse, choices=list(SomeEnum)) args = parser.parse_args() print('success:', type(args.some_val), args.some_val) In ron rothman's example, if we pass the color yellow as a command line argument, we get the following error: demo.py: error: argument color: invalid from_string value: 'yellow' With the improved code above, if we pass three as a command line argument, we get: demo.py: error: argument some_val: invalid choice: 'three' (choose from one, two) IMHO, in the simple case of just converting the name of the enum members to lower case, the OP's method seems simpler. However, for more complex conversion cases, this could be useful. A: Here's the relevant bug/issue: http://bugs.python.org/issue25061 Add native enum support for argparse I already wrote too much there. :) A: Building on the answer by @Tim here is an extension to use enumeration names instead of values and print pretty error messages: class EnumAction(argparse.Action): """ Argparse action for handling Enums """ def __init__(self, **kwargs): # Pop off the type value enum_type = kwargs.pop("type", None) # Ensure an Enum subclass is provided if enum_type is None: raise ValueError( "type must be assigned an Enum when using EnumAction") if not issubclass(enum_type, enum.Enum): raise TypeError("type must be an Enum when using EnumAction") # Generate choices from the Enum kwargs.setdefault("choices", tuple(e.name for e in enum_type)) super(EnumAction, self).__init__(**kwargs) self._enum = enum_type def __call__(self, parser: argparse.ArgumentParser, namespace: argparse.Namespace, value: Any, option_string: str = None): # Convert value back into an Enum if isinstance(value, str): value = self._enum[value] setattr(namespace, self.dest, value) elif value is None: raise argparse.ArgumentTypeError( f"You need to pass a value after {option_string}!") else: # A pretty invalid choice message will be generated by argparse raise argparse.ArgumentTypeError() A: Here's a simple way: class Color(str, Enum): red = 'red' blue = 'blue' parser = ArgumentParser() parser.add_argument('color', type=Color) args = parser.parse_args() print('Your color was:', args.color)
Support for Enum arguments in argparse
Is there a better way of supporting Enums as types of argparse arguments than this pattern? class SomeEnum(Enum): ONE = 1 TWO = 2 parser.add_argument('some_val', type=str, default='one', choices=[i.name.lower() for i in SomeEnum]) ... args.some_val = SomeEnum[args.some_val.upper()]
[ "I see this is an old question, but I just came across the same problem (Python 2.7) and here's how I solved it:\nfrom argparse import ArgumentParser\nfrom enum import Enum\n\nclass Color(Enum):\n red = 'red'\n blue = 'blue'\n green = 'green'\n\n def __str__(self):\n return self.value\n\nparser = ArgumentParser()\nparser.add_argument('color', type=Color, choices=list(Color))\n\nopts = parser.parse_args()\nprint 'your color was:', opts.color\n\nNote that defining __str__ is required to get ArgumentParser's help output to include the human readable (values) of Color.\nSome sample invocations:\n=> python enumtest.py blue\nyour color was: blue\n\n=> python enumtest.py not-a-color\nusage: enumtest.py [-h] {blue,green,red}\nenumtest.py: error: argument color: invalid Color value: 'not-a-color'\n\n=> python enumtest.py -h\nusage: enumtest.py [-h] {blue,green,red}\n\npositional arguments:\n {blue,green,red}\n\n\nSince the OP's question specified integers as values, here is a slightly modified version that works in that case (using the enum names, rather than the values, as the command line args):\nclass Color(Enum):\n red = 1\n blue = 2\n green = 3\n\n def __str__(self):\n return self.name\n\nparser = ArgumentParser()\nparser.add_argument('color', type=lambda color: Color[color], choices=list(Color))\n\nThe only drawback there is that a bad parameter causes an ugly KeyError. That's easily solved by adding just a bit more code, converting the lambda into a proper function.\nclass Color(Enum):\n red = 1\n blue = 2\n green = 3\n\n def __str__(self):\n return self.name\n\n @staticmethod\n def from_string(s):\n try:\n return Color[s]\n except KeyError:\n raise ValueError()\n\nparser = ArgumentParser()\nparser.add_argument('color', type=Color.from_string, choices=list(Color))\n\n", "Just came across this issue also; however, all of the proposed solutions require adding new methods to the Enum definition.\nargparse includes a way of supporting an enum cleanly using actions.\nThe solution using a custom Action:\nimport argparse\nimport enum\n\n\nclass EnumAction(argparse.Action):\n \"\"\"\n Argparse action for handling Enums\n \"\"\"\n def __init__(self, **kwargs):\n # Pop off the type value\n enum_type = kwargs.pop(\"type\", None)\n\n # Ensure an Enum subclass is provided\n if enum_type is None:\n raise ValueError(\"type must be assigned an Enum when using EnumAction\")\n if not issubclass(enum_type, enum.Enum):\n raise TypeError(\"type must be an Enum when using EnumAction\")\n\n # Generate choices from the Enum\n kwargs.setdefault(\"choices\", tuple(e.value for e in enum_type))\n\n super(EnumAction, self).__init__(**kwargs)\n\n self._enum = enum_type\n\n def __call__(self, parser, namespace, values, option_string=None):\n # Convert value back into an Enum\n value = self._enum(values)\n setattr(namespace, self.dest, value)\n\nUsage\nclass Do(enum.Enum):\n Foo = \"foo\"\n Bar = \"bar\"\n\n\nparser = argparse.ArgumentParser()\nparser.add_argument('do', type=Do, action=EnumAction)\n\nThe advantages of this solution are that it will work with any Enum without requiring additional boilerplate code while remaining simple to use.\nIf you prefer to specify the enum by name change:\n\ntuple(e.value for e in enum_type) to tuple(e.name for e in enum_type)\nvalue = self._enum(values) to value = self._enum[values]\n\n", "This in an improvement on ron rothman's answer. 
By also overriding __repr__ and changing to_string a bit, we can get a better error message from argparse when the user enters a bad value.\nimport argparse\nimport enum\n\n\nclass SomeEnum(enum.IntEnum):\n ONE = 1\n TWO = 2\n\n # magic methods for argparse compatibility\n\n def __str__(self):\n return self.name.lower()\n\n def __repr__(self):\n return str(self)\n\n @staticmethod\n def argparse(s):\n try:\n return SomeEnum[s.upper()]\n except KeyError:\n return s\n\n\nparser = argparse.ArgumentParser()\nparser.add_argument('some_val', type=SomeEnum.argparse, choices=list(SomeEnum))\nargs = parser.parse_args()\nprint('success:', type(args.some_val), args.some_val)\n\nIn ron rothman's example, if we pass the color yellow as a command line argument, we get the following error:\ndemo.py: error: argument color: invalid from_string value: 'yellow'\n\nWith the improved code above, if we pass three as a command line argument, we get:\ndemo.py: error: argument some_val: invalid choice: 'three' (choose from one, two)\n\n\nIMHO, in the simple case of just converting the name of the enum members to lower case, the OP's method seems simpler. However, for more complex conversion cases, this could be useful.\n", "Here's the relevant bug/issue: http://bugs.python.org/issue25061\nAdd native enum support for argparse\nI already wrote too much there. :) \n", "Building on the answer by @Tim here is an extension to use enumeration names instead of values and print pretty error messages:\n\nclass EnumAction(argparse.Action):\n \"\"\"\n Argparse action for handling Enums\n \"\"\"\n\n def __init__(self, **kwargs):\n # Pop off the type value\n enum_type = kwargs.pop(\"type\", None)\n\n # Ensure an Enum subclass is provided\n if enum_type is None:\n raise ValueError(\n \"type must be assigned an Enum when using EnumAction\")\n if not issubclass(enum_type, enum.Enum):\n raise TypeError(\"type must be an Enum when using EnumAction\")\n\n # Generate choices from the Enum\n kwargs.setdefault(\"choices\", tuple(e.name for e in enum_type))\n\n super(EnumAction, self).__init__(**kwargs)\n\n self._enum = enum_type\n\n def __call__(self,\n parser: argparse.ArgumentParser,\n namespace: argparse.Namespace,\n value: Any,\n option_string: str = None):\n # Convert value back into an Enum\n if isinstance(value, str):\n value = self._enum[value]\n setattr(namespace, self.dest, value)\n elif value is None:\n raise argparse.ArgumentTypeError(\n f\"You need to pass a value after {option_string}!\")\n else:\n # A pretty invalid choice message will be generated by argparse\n raise argparse.ArgumentTypeError()\n\n", "Here's a simple way:\nclass Color(str, Enum):\n red = 'red'\n blue = 'blue'\n\nparser = ArgumentParser()\nparser.add_argument('color', type=Color)\nargs = parser.parse_args()\nprint('Your color was:', args.color)\n\n" ]
[ 156, 29, 17, 10, 0, 0 ]
[]
[]
[ "argparse", "python" ]
stackoverflow_0043968006_argparse_python.txt
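To round off the argparse/Enum thread above, here is a minimal self-contained sketch of the str-mixin approach from the last answer, with explicit choices so both the usage line and error messages stay readable (it assumes only the stdlib enum and argparse modules on Python 3):

from argparse import ArgumentParser
from enum import Enum

class Color(str, Enum):
    red = 'red'
    blue = 'blue'

    # keeps help/usage output as {red,blue} instead of Color.red
    def __str__(self):
        return self.value

parser = ArgumentParser()
parser.add_argument('color', type=Color, choices=list(Color))
args = parser.parse_args(['red'])        # e.g. python script.py red
print('Your color was:', args.color)     # -> Your color was: red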
Q: Str split and explode How could I string split and explode whilst retaining information? df 0 Apple_a red, green; banana_b yellow 1 peach_p orange; pear_p green Expected output 0 Apple_a red 1 Apple_a green 2 banana_b yellow 3 peach_p orange 4 pear_p green I tried: df1 =df.str.split("; ").str.split(" ", n=1) df2=df1.str[0] +x for x in df1.str[1:] df2.explode() A: Example data = ['Apple_a red, green; banana_b yellow', 'peach_p orange; pear_p green'] s1 = pd.Series(data) output(s1): 0 Apple_a red, green; banana_b yellow 1 peach_p orange; pear_p green dtype: object My idea s1.str.split('; ').explode().str.split(r',* ', expand=True) output: 0 1 2 0 Apple_a red green 0 banana_b yellow None 1 peach_p orange None 1 pear_p green None On my idea using set_index, stack, reset_index and so on, get your desired output. (s1.str.split('; ').explode().str.split(r',* ', expand=True) .set_index(0).stack().to_frame(2).reset_index(0) .apply(' '.join, axis=1) .reset_index(drop=True)) result: 0 Apple_a red 1 Apple_a green 2 banana_b yellow 3 peach_p orange 4 pear_p green dtype: object code is longer using stack instead of melt, because of sort order. If you don't care about the sort order, you can use melt instead stack.
Str split and explode
How could I string split and explode whilst retaining information? df 0 Apple_a red, green; banana_b yellow 1 peach_p orange; pear_p green Expected output 0 Apple_a red 1 Apple_a green 2 banana_b yellow 3 peach_p orange 4 pear_p green I tried: df1 =df.str.split("; ").str.split(" ", n=1) df2=df1.str[0] +x for x in df1.str[1:] df2.explode()
[ "Example\ndata = ['Apple_a red, green; banana_b yellow', 'peach_p orange; pear_p green']\ns1 = pd.Series(data)\n\noutput(s1):\n0 Apple_a red, green; banana_b yellow\n1 peach_p orange; pear_p green\ndtype: object\n\n\nMy idea\ns1.str.split('; ').explode().str.split(r',* ', expand=True)\n\noutput:\n 0 1 2\n0 Apple_a red green\n0 banana_b yellow None\n1 peach_p orange None\n1 pear_p green None\n\n\nOn my idea using set_index, stack, reset_index and so on, get your desired output.\n(s1.str.split('; ').explode().str.split(r',* ', expand=True)\n .set_index(0).stack().to_frame(2).reset_index(0)\n .apply(' '.join, axis=1)\n .reset_index(drop=True))\n\nresult:\n0 Apple_a red\n1 Apple_a green\n2 banana_b yellow\n3 peach_p orange\n4 pear_p green\ndtype: object\n\n\ncode is longer using stack instead of melt, because of sort order. If you don't care about the sort order, you can use melt instead stack.\n" ]
[ 2 ]
[]
[]
[ "pandas", "python", "string" ]
stackoverflow_0074474900_pandas_python_string.txt
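A hedged alternative for the split-and-explode question above, avoiding the stack step by splitting the key off first and exploding only the value list; it assumes pandas 0.25+ (DataFrame.explode) and the same toy series s1 from the answer, and column label 1 is simply what expand=True produces:

s2 = s1.str.split('; ').explode()                # one 'key values' chunk per row
parts = s2.str.split(' ', n=1, expand=True)      # column 0 = key, column 1 = comma-separated values
parts[1] = parts[1].str.split(', ')              # 'red, green' -> ['red', 'green']
out = parts.explode(1).agg(' '.join, axis=1).reset_index(drop=True)
# out -> Apple_a red, Apple_a green, banana_b yellow, peach_p orange, pear_p green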
Q: Pandas COUNTIF equivalent (preserve duplicate values, see description) I have the following pandas column1 and I want to create a column2 displaying the count of the value in each row in the column1. I do not want to use pandas value_counts as I do not want to group by the values of the column. Column1 : COL 1 VALUE1 VALUE2 VALUE1 VALUE1 VALUE1 VALUE3 VALUE2 VALUE1 VALLUE3 VALUE2 Desired result : COL 1 Desired Result VALUE1 5 VALUE2 3 VALUE1 5 VALUE1 5 VALUE1 5 VALUE3 1 VALUE2 3 VALUE1 5 VALLUE3 1 VALUE2 3 A: value_counts does not require you to group and it creates a series which you can map back to your df: df['Resired Result'] = df['COL 1'].map(df['COL 1'].value_counts()) prints COL 1 Resired Result 0 VALUE1 5 1 VALUE2 3 2 VALUE1 5 3 VALUE1 5 4 VALUE1 5 5 VALUE3 1 6 VALUE2 3 7 VALUE1 5 8 VALLUE3 1 9 VALUE2 3 A: value_counts might be more efficient, but you can also achieve it with groupby.transform('count'): df['Resired Result'] = df.groupby('COL 1')['COL 1'].transform('size') Output: COL 1 Resired Result 0 VALUE1 5 1 VALUE2 3 2 VALUE1 5 3 VALUE1 5 4 VALUE1 5 5 VALUE3 1 6 VALUE2 3 7 VALUE1 5 8 VALLUE3 1 9 VALUE2 3
Pandas COUNTIF equivalent (preserve duplicate values, see description)
I have the following pandas column1 and I want to create a column2 displaying the count of the value in each row in the column1. I do not want to use pandas value_counts as I do not want to group by the values of the column. Column1 : COL 1 VALUE1 VALUE2 VALUE1 VALUE1 VALUE1 VALUE3 VALUE2 VALUE1 VALLUE3 VALUE2 Desired result : COL 1 Desired Result VALUE1 5 VALUE2 3 VALUE1 5 VALUE1 5 VALUE1 5 VALUE3 1 VALUE2 3 VALUE1 5 VALLUE3 1 VALUE2 3
[ "value_counts does not require you to group and it creates a series\nwhich you can map back to your df:\ndf['Resired Result'] = df['COL 1'].map(df['COL 1'].value_counts())\n\nprints\n COL 1 Resired Result\n0 VALUE1 5\n1 VALUE2 3\n2 VALUE1 5\n3 VALUE1 5\n4 VALUE1 5\n5 VALUE3 1\n6 VALUE2 3\n7 VALUE1 5\n8 VALLUE3 1\n9 VALUE2 3\n\n", "value_counts might be more efficient, but you can also achieve it with groupby.transform('count'):\ndf['Resired Result'] = df.groupby('COL 1')['COL 1'].transform('size')\n\nOutput:\n COL 1 Resired Result\n0 VALUE1 5\n1 VALUE2 3\n2 VALUE1 5\n3 VALUE1 5\n4 VALUE1 5\n5 VALUE3 1\n6 VALUE2 3\n7 VALUE1 5\n8 VALLUE3 1\n9 VALUE2 3\n\n" ]
[ 2, 2 ]
[]
[]
[ "count", "function", "numpy", "pandas", "python" ]
stackoverflow_0074475090_count_function_numpy_pandas_python.txt
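As a small optional extension of the COUNTIF answers above: the same map/value_counts pattern yields relative frequencies instead of raw counts when normalize=True is passed (a sketch; the column name Share is made up here):

df['Share'] = df['COL 1'].map(df['COL 1'].value_counts(normalize=True))
# e.g. every VALUE1 row gets 0.5 (5 of 10 rows), each VALUE2 row gets 0.3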
Q: Find the closest date with conditions There are two pandas tables, each containing two columns. In the first time, there is also a heart rhythm. Second time is the systolic pressure. Write the code that creates a third table, in which for each blood pressure measurement, the same line contains the time and value of the nearest heart rate measurement, if it was done necessarily before the blood pressure measurement and not earlier than 15 minutes ago I tried to solve it with truncate and iloc but I didn't succeed. import pandas as pd df_hr = pd.DataFrame({'time': [datetime.datetime(2022,1,1,7,40), datetime.datetime(2022,1,1,9,50), datetime.datetime(2022,1,1,10,1)], 'hr': [60, 90, 100]}).set_index('time') df_bp = pd.DataFrame({'time': [datetime.datetime(2022,1,1,10), datetime.datetime(2022,1,1,8)], 'bp': [140, 120]}).set_index('time') A: Lets do merge_asof with direction='backward' and tolerance of 15min: pd.merge_asof( df_bp.sort_index(), df_hr.sort_index(), on='time', direction='backward', tolerance=pd.Timedelta('15min'), ) Note: The keyword argument direction=backward selects the last row in the right DataFrame whose 'on' key is less than or equal to the left's key Result time bp hr 0 2022-01-01 08:00:00 120 NaN 1 2022-01-01 10:00:00 140 90.0
Find the closest date with conditions
There are two pandas tables, each containing two columns. In the first time, there is also a heart rhythm. Second time is the systolic pressure. Write the code that creates a third table, in which for each blood pressure measurement, the same line contains the time and value of the nearest heart rate measurement, if it was done necessarily before the blood pressure measurement and not earlier than 15 minutes ago I tried to solve it with truncate and iloc but I didn't succeed. import pandas as pd df_hr = pd.DataFrame({'time': [datetime.datetime(2022,1,1,7,40), datetime.datetime(2022,1,1,9,50), datetime.datetime(2022,1,1,10,1)], 'hr': [60, 90, 100]}).set_index('time') df_bp = pd.DataFrame({'time': [datetime.datetime(2022,1,1,10), datetime.datetime(2022,1,1,8)], 'bp': [140, 120]}).set_index('time')
[ "Lets do merge_asof with direction='backward' and tolerance of 15min:\npd.merge_asof(\n df_bp.sort_index(), \n df_hr.sort_index(), \n on='time', \n direction='backward',\n tolerance=pd.Timedelta('15min'), \n)\n\nNote:\nThe keyword argument direction=backward selects the last row in the right DataFrame whose 'on' key is less than or equal to the left's key\nResult\n time bp hr\n0 2022-01-01 08:00:00 120 NaN\n1 2022-01-01 10:00:00 140 90.0\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "datetime", "pandas", "python" ]
stackoverflow_0074474940_dataframe_datetime_pandas_python.txt
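One caveat worth flagging on the merge_asof answer above: with on='time', pandas expects time to be an ordinary column in both frames, while the question builds df_hr and df_bp with set_index('time'). A sketch of the same call with the index reset first, otherwise identical parameters:

result = pd.merge_asof(
    df_bp.reset_index().sort_values('time'),
    df_hr.reset_index().sort_values('time'),
    on='time',
    direction='backward',
    tolerance=pd.Timedelta('15min'),
)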
Q: Pandas read_csv: low_memory and dtype options df = pd.read_csv('somefile.csv') ...gives an error: .../site-packages/pandas/io/parsers.py:1130: DtypeWarning: Columns (4,5,7,16) have mixed types. Specify dtype option on import or set low_memory=False. Why is the dtype option related to low_memory, and why might low_memory=False help? A: The deprecated low_memory option The low_memory option is not properly deprecated, but it should be, since it does not actually do anything differently[source] The reason you get this low_memory warning is because guessing dtypes for each column is very memory demanding. Pandas tries to determine what dtype to set by analyzing the data in each column. Dtype Guessing (very bad) Pandas can only determine what dtype a column should have once the whole file is read. This means nothing can really be parsed before the whole file is read unless you risk having to change the dtype of that column when you read the last value. Consider the example of one file which has a column called user_id. It contains 10 million rows where the user_id is always numbers. Since pandas cannot know it is only numbers, it will probably keep it as the original strings until it has read the whole file. Specifying dtypes (should always be done) adding dtype={'user_id': int} to the pd.read_csv() call will make pandas know when it starts reading the file, that this is only integers. Also worth noting is that if the last line in the file would have "foobar" written in the user_id column, the loading would crash if the above dtype was specified. Example of broken data that breaks when dtypes are defined import pandas as pd try: from StringIO import StringIO except ImportError: from io import StringIO csvdata = """user_id,username 1,Alice 3,Bob foobar,Caesar""" sio = StringIO(csvdata) pd.read_csv(sio, dtype={"user_id": int, "username": "string"}) ValueError: invalid literal for long() with base 10: 'foobar' dtypes are typically a numpy thing, read more about them here: http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html What dtypes exists? We have access to numpy dtypes: float, int, bool, timedelta64[ns] and datetime64[ns]. Note that the numpy date/time dtypes are not time zone aware. Pandas extends this set of dtypes with its own: 'datetime64[ns, <tz>]' Which is a time zone aware timestamp. 'category' which is essentially an enum (strings represented by integer keys to save 'period[]' Not to be confused with a timedelta, these objects are actually anchored to specific time periods 'Sparse', 'Sparse[int]', 'Sparse[float]' is for sparse data or 'Data that has a lot of holes in it' Instead of saving the NaN or None in the dataframe it omits the objects, saving space. 'Interval' is a topic of its own but its main use is for indexing. See more here 'Int8', 'Int16', 'Int32', 'Int64', 'UInt8', 'UInt16', 'UInt32', 'UInt64' are all pandas specific integers that are nullable, unlike the numpy variant. 'string' is a specific dtype for working with string data and gives access to the .str attribute on the series. 'boolean' is like the numpy 'bool' but it also supports missing data. Read the complete reference here: Pandas dtype reference Gotchas, caveats, notes Setting dtype=object will silence the above warning, but will not make it more memory efficient, only process efficient if anything. Setting dtype=unicode will not do anything, since to numpy, a unicode is represented as object. 
Usage of converters @sparrow correctly points out the usage of converters to avoid pandas blowing up when encountering 'foobar' in a column specified as int. I would like to add that converters are really heavy and inefficient to use in pandas and should be used as a last resort. This is because the read_csv process is a single process. CSV files can be processed line by line and thus can be processed by multiple converters in parallel more efficiently by simply cutting the file into segments and running multiple processes, something that pandas does not support. But this is a different story. A: Try: dashboard_df = pd.read_csv(p_file, sep=',', error_bad_lines=False, index_col=False, dtype='unicode') According to the pandas documentation: dtype : Type name or dict of column -> type As for low_memory, it's True by default and isn't yet documented. I don't think its relevant though. The error message is generic, so you shouldn't need to mess with low_memory anyway. Hope this helps and let me know if you have further problems A: df = pd.read_csv('somefile.csv', low_memory=False) This should solve the issue. I got exactly the same error, when reading 1.8M rows from a CSV. A: As mentioned earlier by firelynx if dtype is explicitly specified and there is mixed data that is not compatible with that dtype then loading will crash. I used a converter like this as a workaround to change the values with incompatible data type so that the data could still be loaded. def conv(val): if not val: return 0 try: return np.float64(val) except: return np.float64(0) df = pd.read_csv(csv_file,converters={'COL_A':conv,'COL_B':conv}) A: This worked for me! file = pd.read_csv('example.csv', engine='python') A: I was facing a similar issue when processing a huge csv file (6 million rows). I had three issues: the file contained strange characters (fixed using encoding) the datatype was not specified (fixed using dtype property) Using the above I still faced an issue which was related with the file_format that could not be defined based on the filename (fixed using try .. except..) df = pd.read_csv(csv_file,sep=';', encoding = 'ISO-8859-1', names=['permission','owner_name','group_name','size','ctime','mtime','atime','filename','full_filename'], dtype={'permission':str,'owner_name':str,'group_name':str,'size':str,'ctime':object,'mtime':object,'atime':object,'filename':str,'full_filename':str,'first_date':object,'last_date':object}) try: df['file_format'] = [Path(f).suffix[1:] for f in df.filename.tolist()] except: df['file_format'] = '' A: It worked for me with low_memory = False while importing a DataFrame. That is all the change that worked for me: df = pd.read_csv('export4_16.csv',low_memory=False) A: According to the pandas documentation, specifying low_memory=False as long as the engine='c' (which is the default) is a reasonable solution to this problem. If low_memory=False, then whole columns will be read in first, and then the proper types determined. For example, the column will be kept as objects (strings) as needed to preserve information. If low_memory=True (the default), then pandas reads in the data in chunks of rows, then appends them together. Then some of the columns might look like chunks of integers and strings mixed up, depending on whether during the chunk pandas encountered anything that couldn't be cast to integer (say). This could cause problems later. The warning is telling you that this happened at least once in the read in, so you should be careful. 
Setting low_memory=False will use more memory but will avoid the problem. Personally, I think low_memory=True is a bad default, but I work in an area that uses many more small datasets than large ones and so convenience is more important than efficiency. The following code illustrates an example where low_memory=True is set and a column comes in with mixed types. It builds off the answer by @firelynx import pandas as pd try: from StringIO import StringIO except ImportError: from io import StringIO # make a big csv data file, following earlier approach by @firelynx csvdata = """1,Alice 2,Bob 3,Caesar """ # we have to replicate the "integer column" user_id many many times to get # pd.read_csv to actually chunk read. otherwise it just reads # the whole thing in one chunk, because it's faster, and we don't get any # "mixed dtype" issue. the 100000 below was chosen by experimentation. csvdatafull = "" for i in range(100000): csvdatafull = csvdatafull + csvdata csvdatafull = csvdatafull + "foobar,Cthlulu\n" csvdatafull = "user_id,username\n" + csvdatafull sio = StringIO(csvdatafull) # the following line gives me the warning: # C:\Users\rdisa\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3072: DtypeWarning: Columns (0) have mixed types.Specify dtype option on import or set low_memory=False. # interactivity=interactivity, compiler=compiler, result=result) # but it does not always give me the warning, so i guess the internal workings of read_csv depend on background factors x = pd.read_csv(sio, low_memory=True) #, dtype={"user_id": int, "username": "string"}) x.dtypes # this gives: # Out[69]: # user_id object # username object # dtype: object type(x['user_id'].iloc[0]) # int type(x['user_id'].iloc[1]) # int type(x['user_id'].iloc[2]) # int type(x['user_id'].iloc[10000]) # int type(x['user_id'].iloc[299999]) # str !!!! (even though it's a number! so this chunk must have been read in as strings) type(x['user_id'].iloc[300000]) # str !!!!! Aside: To give an example where this is a problem (and where I first encountered this as a serious issue), imagine you ran pd.read_csv() on a file then wanted to drop duplicates based on an identifier. Say the identifier is sometimes numeric, sometimes string. One row might be "81287", another might be "97324-32". Still, they are unique identifiers. With low_memory=True, pandas might read in the identifier column like this: 81287 81287 81287 81287 81287 "81287" "81287" "81287" "81287" "97324-32" "97324-32" "97324-32" "97324-32" "97324-32" Just because it chunks things and so, sometimes the identifier 81287 is a number, sometimes a string. When I try to drop duplicates based on this, well, 81287 == "81287" Out[98]: False A: As the error says, you should specify the datatypes when using the read_csv() method. So, you should write file = pd.read_csv('example.csv', dtype='unicode') A: Sometimes, when all else fails, you just want to tell pandas to shut up about it: # Ignore DtypeWarnings from pandas' read_csv warnings.filterwarnings('ignore', message="^Columns.*") A: I had a similar issue with a ~400MB file. Setting low_memory=False did the trick for me. Do the simple things first,I would check that your dataframe isn't bigger than your system memory, reboot, clear the RAM before proceeding. If you're still running into errors, its worth making sure your .csv file is ok, take a quick look in Excel and make sure there's no obvious corruption. Broken original data can wreak havoc... 
A: Building on the answer given by Jerald Achaibar we can detect the mixed Dytpes warning and only use the slower python engine when the warning occurs: import warnings # Force mixed datatype warning to be a python error so we can catch it and reattempt the # load using the slower python engine warnings.simplefilter('error', pandas.errors.DtypeWarning) try: df = pandas.read_csv(path, sep=sep, encoding=encoding) except pandas.errors.DtypeWarning: df = pandas.read_csv(path, sep=sep, encoding=encoding, engine="python") A: This worked for me! dashboard_df = pd.read_csv(p_file, sep=';', error_bad_lines=False, index_col=False, dtype='unicode')
Pandas read_csv: low_memory and dtype options
df = pd.read_csv('somefile.csv') ...gives an error: .../site-packages/pandas/io/parsers.py:1130: DtypeWarning: Columns (4,5,7,16) have mixed types. Specify dtype option on import or set low_memory=False. Why is the dtype option related to low_memory, and why might low_memory=False help?
[ "The deprecated low_memory option\nThe low_memory option is not properly deprecated, but it should be, since it does not actually do anything differently[source]\nThe reason you get this low_memory warning is because guessing dtypes for each column is very memory demanding. Pandas tries to determine what dtype to set by analyzing the data in each column.\nDtype Guessing (very bad)\nPandas can only determine what dtype a column should have once the whole file is read. This means nothing can really be parsed before the whole file is read unless you risk having to change the dtype of that column when you read the last value.\nConsider the example of one file which has a column called user_id.\nIt contains 10 million rows where the user_id is always numbers.\nSince pandas cannot know it is only numbers, it will probably keep it as the original strings until it has read the whole file.\nSpecifying dtypes (should always be done)\nadding\ndtype={'user_id': int}\n\nto the pd.read_csv() call will make pandas know when it starts reading the file, that this is only integers.\nAlso worth noting is that if the last line in the file would have \"foobar\" written in the user_id column, the loading would crash if the above dtype was specified.\nExample of broken data that breaks when dtypes are defined\nimport pandas as pd\ntry:\n from StringIO import StringIO\nexcept ImportError:\n from io import StringIO\n\n\ncsvdata = \"\"\"user_id,username\n1,Alice\n3,Bob\nfoobar,Caesar\"\"\"\nsio = StringIO(csvdata)\npd.read_csv(sio, dtype={\"user_id\": int, \"username\": \"string\"})\n\nValueError: invalid literal for long() with base 10: 'foobar'\n\ndtypes are typically a numpy thing, read more about them here:\nhttp://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html\nWhat dtypes exists?\nWe have access to numpy dtypes: float, int, bool, timedelta64[ns] and datetime64[ns]. Note that the numpy date/time dtypes are not time zone aware.\nPandas extends this set of dtypes with its own:\n'datetime64[ns, <tz>]' Which is a time zone aware timestamp.\n'category' which is essentially an enum (strings represented by integer keys to save\n'period[]' Not to be confused with a timedelta, these objects are actually anchored to specific time periods\n'Sparse', 'Sparse[int]', 'Sparse[float]' is for sparse data or 'Data that has a lot of holes in it' Instead of saving the NaN or None in the dataframe it omits the objects, saving space.\n'Interval' is a topic of its own but its main use is for indexing. See more here\n'Int8', 'Int16', 'Int32', 'Int64', 'UInt8', 'UInt16', 'UInt32', 'UInt64' are all pandas specific integers that are nullable, unlike the numpy variant.\n'string' is a specific dtype for working with string data and gives access to the .str attribute on the series.\n'boolean' is like the numpy 'bool' but it also supports missing data.\nRead the complete reference here:\nPandas dtype reference\nGotchas, caveats, notes\nSetting dtype=object will silence the above warning, but will not make it more memory efficient, only process efficient if anything.\nSetting dtype=unicode will not do anything, since to numpy, a unicode is represented as object.\nUsage of converters\n@sparrow correctly points out the usage of converters to avoid pandas blowing up when encountering 'foobar' in a column specified as int. I would like to add that converters are really heavy and inefficient to use in pandas and should be used as a last resort. 
This is because the read_csv process is a single process.\nCSV files can be processed line by line and thus can be processed by multiple converters in parallel more efficiently by simply cutting the file into segments and running multiple processes, something that pandas does not support. But this is a different story.\n", "Try:\ndashboard_df = pd.read_csv(p_file, sep=',', error_bad_lines=False, index_col=False, dtype='unicode')\n\nAccording to the pandas documentation:\n\ndtype : Type name or dict of column -> type\n\nAs for low_memory, it's True by default and isn't yet documented. I don't think its relevant though. The error message is generic, so you shouldn't need to mess with low_memory anyway. Hope this helps and let me know if you have further problems\n", "df = pd.read_csv('somefile.csv', low_memory=False)\n\nThis should solve the issue. I got exactly the same error, when reading 1.8M rows from a CSV.\n", "As mentioned earlier by firelynx if dtype is explicitly specified and there is mixed data that is not compatible with that dtype then loading will crash. I used a converter like this as a workaround to change the values with incompatible data type so that the data could still be loaded.\ndef conv(val):\n if not val:\n return 0 \n try:\n return np.float64(val)\n except: \n return np.float64(0)\n\ndf = pd.read_csv(csv_file,converters={'COL_A':conv,'COL_B':conv})\n\n", "This worked for me!\nfile = pd.read_csv('example.csv', engine='python')\n\n", "I was facing a similar issue when processing a huge csv file (6 million rows). I had three issues:\n\nthe file contained strange characters (fixed using encoding)\nthe datatype was not specified (fixed using dtype property)\nUsing the above I still faced an issue which was related with the file_format that could not be defined based on the filename (fixed using try .. except..)\n\n df = pd.read_csv(csv_file,sep=';', encoding = 'ISO-8859-1',\n names=['permission','owner_name','group_name','size','ctime','mtime','atime','filename','full_filename'],\n dtype={'permission':str,'owner_name':str,'group_name':str,'size':str,'ctime':object,'mtime':object,'atime':object,'filename':str,'full_filename':str,'first_date':object,'last_date':object})\n \n try:\n df['file_format'] = [Path(f).suffix[1:] for f in df.filename.tolist()]\n except:\n df['file_format'] = ''\n\n", "It worked for me with low_memory = False while importing a DataFrame. That is all the change that worked for me:\ndf = pd.read_csv('export4_16.csv',low_memory=False)\n\n", "According to the pandas documentation, specifying low_memory=False as long as the engine='c' (which is the default) is a reasonable solution to this problem.\nIf low_memory=False, then whole columns will be read in first, and then the proper types determined. For example, the column will be kept as objects (strings) as needed to preserve information.\nIf low_memory=True (the default), then pandas reads in the data in chunks of rows, then appends them together. Then some of the columns might look like chunks of integers and strings mixed up, depending on whether during the chunk pandas encountered anything that couldn't be cast to integer (say). This could cause problems later. The warning is telling you that this happened at least once in the read in, so you should be careful. 
Setting low_memory=False will use more memory but will avoid the problem.\nPersonally, I think low_memory=True is a bad default, but I work in an area that uses many more small datasets than large ones and so convenience is more important than efficiency.\nThe following code illustrates an example where low_memory=True is set and a column comes in with mixed types. It builds off the answer by @firelynx\nimport pandas as pd\ntry:\n from StringIO import StringIO\nexcept ImportError:\n from io import StringIO\n\n# make a big csv data file, following earlier approach by @firelynx\ncsvdata = \"\"\"1,Alice\n2,Bob\n3,Caesar\n\"\"\"\n\n# we have to replicate the \"integer column\" user_id many many times to get\n# pd.read_csv to actually chunk read. otherwise it just reads \n# the whole thing in one chunk, because it's faster, and we don't get any \n# \"mixed dtype\" issue. the 100000 below was chosen by experimentation.\ncsvdatafull = \"\"\nfor i in range(100000):\n csvdatafull = csvdatafull + csvdata\ncsvdatafull = csvdatafull + \"foobar,Cthlulu\\n\"\ncsvdatafull = \"user_id,username\\n\" + csvdatafull\n\nsio = StringIO(csvdatafull)\n# the following line gives me the warning:\n # C:\\Users\\rdisa\\anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3072: DtypeWarning: Columns (0) have mixed types.Specify dtype option on import or set low_memory=False.\n # interactivity=interactivity, compiler=compiler, result=result)\n# but it does not always give me the warning, so i guess the internal workings of read_csv depend on background factors\nx = pd.read_csv(sio, low_memory=True) #, dtype={\"user_id\": int, \"username\": \"string\"})\n\nx.dtypes\n# this gives:\n# Out[69]: \n# user_id object\n# username object\n# dtype: object\n\ntype(x['user_id'].iloc[0]) # int\ntype(x['user_id'].iloc[1]) # int\ntype(x['user_id'].iloc[2]) # int\ntype(x['user_id'].iloc[10000]) # int\ntype(x['user_id'].iloc[299999]) # str !!!! (even though it's a number! so this chunk must have been read in as strings)\ntype(x['user_id'].iloc[300000]) # str !!!!!\n\n\nAside: To give an example where this is a problem (and where I first encountered this as a serious issue), imagine you ran pd.read_csv() on a file then wanted to drop duplicates based on an identifier. Say the identifier is sometimes numeric, sometimes string. One row might be \"81287\", another might be \"97324-32\". Still, they are unique identifiers.\nWith low_memory=True, pandas might read in the identifier column like this:\n81287\n81287\n81287\n81287\n81287\n\"81287\"\n\"81287\"\n\"81287\"\n\"81287\"\n\"97324-32\"\n\"97324-32\"\n\"97324-32\"\n\"97324-32\"\n\"97324-32\"\n\nJust because it chunks things and so, sometimes the identifier 81287 is a number, sometimes a string. When I try to drop duplicates based on this, well,\n81287 == \"81287\"\nOut[98]: False\n\n", "As the error says, you should specify the datatypes when using the read_csv() method.\nSo, you should write\nfile = pd.read_csv('example.csv', dtype='unicode')\n\n", "Sometimes, when all else fails, you just want to tell pandas to shut up about it:\n# Ignore DtypeWarnings from pandas' read_csv \nwarnings.filterwarnings('ignore', message=\"^Columns.*\")\n\n", "I had a similar issue with a ~400MB file. Setting low_memory=False did the trick for me. Do the simple things first,I would check that your dataframe isn't bigger than your system memory, reboot, clear the RAM before proceeding. 
If you're still running into errors, its worth making sure your .csv file is ok, take a quick look in Excel and make sure there's no obvious corruption. Broken original data can wreak havoc...\n", "Building on the answer given by Jerald Achaibar we can detect the mixed Dytpes warning and only use the slower python engine when the warning occurs:\nimport warnings\n\n# Force mixed datatype warning to be a python error so we can catch it and reattempt the \n# load using the slower python engine\nwarnings.simplefilter('error', pandas.errors.DtypeWarning)\ntry:\n df = pandas.read_csv(path, sep=sep, encoding=encoding)\nexcept pandas.errors.DtypeWarning:\n df = pandas.read_csv(path, sep=sep, encoding=encoding, engine=\"python\")\n\n", "This worked for me!\ndashboard_df = pd.read_csv(p_file, sep=';', error_bad_lines=False, index_col=False, dtype='unicode')\n\n" ]
[ 663, 75, 59, 22, 17, 7, 6, 5, 5, 3, 2, 0, 0 ]
[]
[]
[ "dataframe", "numpy", "pandas", "parsing", "python" ]
stackoverflow_0024251219_dataframe_numpy_pandas_parsing_python.txt
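Following up on the dtype discussion above, one more hedged option for a column like user_id that is mostly numeric but occasionally holds strings such as 'foobar': read it with the pandas string dtype and coerce afterwards, so bad values become <NA> instead of aborting the load (assumes pandas 1.0+ for the nullable Int64 dtype):

import pandas as pd

df = pd.read_csv('somefile.csv', dtype={'user_id': 'string'})
# non-numeric entries such as 'foobar' become <NA> rather than raising
df['user_id'] = pd.to_numeric(df['user_id'], errors='coerce').astype('Int64')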
Q: PyTest exits with TypeError: 'NoneType' object is not callable when collecting test cases When running pytest --collect-only, PyTest collects the correct tests, but terminates with this: Traceback (most recent call last): File "/data/anaconda/envs/env/lib/python3.9/logging/__init__.py", line 831, in _removeHandlerRef File "/data/anaconda/envs/env/lib/python3.9/logging/__init__.py", line 225, in _acquireLock File "/data/anaconda/envs/env/lib/python3.9/threading.py", line 156, in acquire File "/data/anaconda/envs/env/lib/python3.9/site-packages/gevent/thread.py", line 74, in get_ident TypeError: 'NoneType' object is not callable From the traceback, there's no indication what went wrong in my code. I tried upgrading related dependencies, removing some test cases, but the error persists. A: Code source for gevent/thread.py around line 74 : def get_ident(gr=None): # 72 if gr is None: # 73 gr = getcurrent() # 74 return id(gr) # 75 So it looks like getcurrent is None, but strangely : from gevent.hub import getcurrent # 56 So it should not be those from the hub, which imports it from greenlet, which gets it from its C API. I can't tell what is the problem there. And I did not find similar occurrences. Would you consider updating/reconstructing your environment ? It may solve the problem, but that's only a guess.
PyTest exits with TypeError: 'NoneType' object is not callable when collecting test cases
When running pytest --collect-only, PyTest collects the correct tests, but terminates with this: Traceback (most recent call last): File "/data/anaconda/envs/env/lib/python3.9/logging/__init__.py", line 831, in _removeHandlerRef File "/data/anaconda/envs/env/lib/python3.9/logging/__init__.py", line 225, in _acquireLock File "/data/anaconda/envs/env/lib/python3.9/threading.py", line 156, in acquire File "/data/anaconda/envs/env/lib/python3.9/site-packages/gevent/thread.py", line 74, in get_ident TypeError: 'NoneType' object is not callable From the traceback, there's no indication what went wrong in my code. I tried upgrading related dependencies, removing some test cases, but the error persists.
[ "Code source for gevent/thread.py around line 74 :\ndef get_ident(gr=None): # 72\n if gr is None: # 73\n gr = getcurrent() # 74\n return id(gr) # 75\n\nSo it looks like getcurrent is None, but strangely :\nfrom gevent.hub import getcurrent # 56\n\nSo it should not be those from the hub, which imports it from greenlet, which gets it from its C API. I can't tell what is the problem there. And I did not find similar occurrences.\nWould you consider updating/reconstructing your environment ? It may solve the problem, but that's only a guess.\n" ]
[ 0 ]
[]
[]
[ "gevent", "pytest", "python", "typeerror" ]
stackoverflow_0074467629_gevent_pytest_python_typeerror.txt
Q: Read csv in python pandas with different number of quotation marks and commas my csv-data-file is looking like this: "Date,""Time"",""Tags"",""Measurement"",""Info"",""GMT+01:00"""; "13.11.2022,""21:47:56"","""",""156"","""",""GMT+01:00"""; "29.05.2022,""09:00:00"","""",""Comment1,Comment2"","""",""GMT+01:00"""; The line begins with double quotation marks and ends with double quotation marks and a semicolon. The first column has no quotation marks and all the other entries have two of them. The separator is a comma, but there can also be comments instead of values in a row which are also separated by a comma. Some columns have no data (""""). How can I read this file in python pandas? I tried different codes, i. e.: df = pd.read_csv('test.csv', sep=',', lineterminator=';', quotechar='"') I get errors like (test.csv): ParserError: Error tokenizing data. C error: Expected 6 fields in line 3, saw 7 or (real.csv) ParserError: Error tokenizing data. C error: Expected 26 fields in line 147, saw 27 It seems as the comma in between the two quotation marks is also recognized as a separator. Thanks, regards sts85 A: import pandas as pd with open('test.csv', 'r') as f: data = [line[1:-3].replace('""', '"') + '\n' for line in f] with open('test.csv', 'w') as f: f.writelines(data) df = pd.read_csv('test.csv')
Read csv in python pandas with different number of quotation marks and commas
my csv-data-file is looking like this: "Date,""Time"",""Tags"",""Measurement"",""Info"",""GMT+01:00"""; "13.11.2022,""21:47:56"","""",""156"","""",""GMT+01:00"""; "29.05.2022,""09:00:00"","""",""Comment1,Comment2"","""",""GMT+01:00"""; The line begins with double quotation marks and ends with double quotation marks and a semicolon. The first column has no quotation marks and all the other entries have two of them. The separator is a comma, but there can also be comments instead of values in a row which are also separated by a comma. Some columns have no data (""""). How can I read this file in python pandas? I tried different codes, i. e.: df = pd.read_csv('test.csv', sep=',', lineterminator=';', quotechar='"') I get errors like (test.csv): ParserError: Error tokenizing data. C error: Expected 6 fields in line 3, saw 7 or (real.csv) ParserError: Error tokenizing data. C error: Expected 26 fields in line 147, saw 27 It seems as the comma in between the two quotation marks is also recognized as a separator. Thanks, regards sts85
[ "import pandas as pd\n\nwith open('test.csv', 'r') as f:\n data = [line[1:-3].replace('\"\"', '\"') + '\\n' for line in f]\nwith open('test.csv', 'w') as f:\n f.writelines(data)\n\ndf = pd.read_csv('test.csv')\n\n\n" ]
[ 0 ]
[]
[]
[ "csv", "pandas", "python" ]
stackoverflow_0074471473_csv_pandas_python.txt
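A variant of the cleanup above that leaves test.csv untouched and feeds the cleaned text to read_csv through an in-memory buffer; it keeps the answer's assumption that every line ends with a closing quote, a semicolon and a newline (hence the [1:-3] slice):

import io
import pandas as pd

with open('test.csv', 'r') as f:
    # strip the outer quote/semicolon wrapper and collapse doubled quotes, as in the answer
    cleaned = ''.join(line[1:-3].replace('""', '"') + '\n' for line in f)

df = pd.read_csv(io.StringIO(cleaned))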
Q: Pandas - Cumulative Count with Labeling I have a pandas df that looks like the following: +---------+---------+------------+--------+ | Cluster | Country | Publishers | Assets | +---------+---------+------------+--------+ | South | IT | SS | Asset1 | | South | IT | SS | Asset2 | | South | IT | SS | Asset3 | | South | IT | ML | Asset1 | | South | IT | ML | Asset2 | | South | IT | ML | Asset3 | | South | IT | TT | Asset1 | | South | IT | TT | Asset2 | | South | IT | TT | Asset3 | | South | ES | SS | Asset1 | | South | ES | SS | Asset2 | +---------+---------+------------+--------+ I would like to create a new column "Package" that uses a cumulative count based on the following columns: Publishers Assets The result would be this: +---------+---------+------------+--------+---------+ | Cluster | Country | Publishers | Assets | Package | +---------+---------+------------+--------+---------+ | South | IT | SS | Asset1 | 1 | | South | IT | SS | Asset2 | 1a | | South | IT | SS | Asset3 | 1b | | South | IT | ML | Asset1 | 2 | | South | IT | ML | Asset2 | 2a | | South | IT | ML | Asset3 | 2b | | South | IT | TT | Asset1 | 3 | | South | IT | TT | Asset2 | 3a | | South | IT | TT | Asset3 | 3b | | South | ES | SS | Asset1 | 4 | | South | ES | SS | Asset2 | 4a | +---------+---------+------------+--------+---------+ So far I have tried df['Package'] = df.groupby(['Cluster','Publishers']).cumcount() but it seems not to work as the value resets to 0 after every publisher instance is gone through. A: You can use groupby.cumcount, but with a different grouper. You will also need the related groupby.ngroup: from string import ascii_lowercase # group by consecutive identical values group = df['Publishers'].ne(df['Publishers'].shift()).cumsum() # alternatively, you can also group by Cluster/Country/Publishers # group = ['Cluster', 'Country', 'Publisher'] df['Package'] =( df.groupby(group).ngroup().add(1).astype(str) +df.groupby(group).cumcount().map(dict(enumerate(['']+list(ascii_lowercase)))) ) output: Cluster Country Publishers Assets Package 0 South IT SS Asset1 1 1 South IT SS Asset2 1a 2 South IT SS Asset3 1b 3 South IT ML Asset1 2 4 South IT ML Asset2 2a 5 South IT ML Asset3 2b 6 South IT TT Asset1 3 7 South IT TT Asset2 3a 8 South IT TT Asset3 3b 9 South ES SS Asset1 4 10 South ES SS Asset2 4a
Pandas - Cumulative Count with Labeling
I have a pandas df that looks like the following: +---------+---------+------------+--------+ | Cluster | Country | Publishers | Assets | +---------+---------+------------+--------+ | South | IT | SS | Asset1 | | South | IT | SS | Asset2 | | South | IT | SS | Asset3 | | South | IT | ML | Asset1 | | South | IT | ML | Asset2 | | South | IT | ML | Asset3 | | South | IT | TT | Asset1 | | South | IT | TT | Asset2 | | South | IT | TT | Asset3 | | South | ES | SS | Asset1 | | South | ES | SS | Asset2 | +---------+---------+------------+--------+ I would like to create a new column "Package" that uses a cumulative count based on the following columns: Publishers Assets The result would be this: +---------+---------+------------+--------+---------+ | Cluster | Country | Publishers | Assets | Package | +---------+---------+------------+--------+---------+ | South | IT | SS | Asset1 | 1 | | South | IT | SS | Asset2 | 1a | | South | IT | SS | Asset3 | 1b | | South | IT | ML | Asset1 | 2 | | South | IT | ML | Asset2 | 2a | | South | IT | ML | Asset3 | 2b | | South | IT | TT | Asset1 | 3 | | South | IT | TT | Asset2 | 3a | | South | IT | TT | Asset3 | 3b | | South | ES | SS | Asset1 | 4 | | South | ES | SS | Asset2 | 4a | +---------+---------+------------+--------+---------+ So far I have tried df['Package'] = df.groupby(['Cluster','Publishers']).cumcount() but it seems not to work as the value resets to 0 after every publisher instance is gone through.
[ "You can use groupby.cumcount, but with a different grouper. You will also need the related groupby.ngroup:\nfrom string import ascii_lowercase\n\n# group by consecutive identical values\ngroup = df['Publishers'].ne(df['Publishers'].shift()).cumsum()\n# alternatively, you can also group by Cluster/Country/Publishers\n# group = ['Cluster', 'Country', 'Publisher']\n\n\ndf['Package'] =(\n df.groupby(group).ngroup().add(1).astype(str)\n +df.groupby(group).cumcount().map(dict(enumerate(['']+list(ascii_lowercase))))\n)\n\noutput:\n Cluster Country Publishers Assets Package\n0 South IT SS Asset1 1\n1 South IT SS Asset2 1a\n2 South IT SS Asset3 1b\n3 South IT ML Asset1 2\n4 South IT ML Asset2 2a\n5 South IT ML Asset3 2b\n6 South IT TT Asset1 3\n7 South IT TT Asset2 3a\n8 South IT TT Asset3 3b\n9 South ES SS Asset1 4\n10 South ES SS Asset2 4a\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074475243_dataframe_pandas_python.txt
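For the ngroup/cumcount answer above, the consecutive-run grouping is the least obvious step; a tiny sketch of what the shift/cumsum trick produces on a toy series:

import pandas as pd

s = pd.Series(['SS', 'SS', 'ML', 'ML', 'SS'])
grp = s.ne(s.shift()).cumsum()
# grp -> 1, 1, 2, 2, 3   (the trailing 'SS' starts a new run, so it gets a fresh group id)

Note also that the ['']+list(ascii_lowercase) mapping covers at most 27 rows per run (blank plus a to z); a run longer than that would map the extra cumcount values to NaN.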
Q: Selecting the index column of a pandas dataframe import pandas as pd df = pd.DataFrame({'customer' : ['customer2', 'customer1'], 'item1': [12, 13], 'item2' : [3, 28],'item3': [2, 1]}) df2 = pd.DataFrame({'customer' : ['customer1', 'customer2'], 'item?': ['item1', 'item1'], 'quantity' : [2, 5]}) df = df.set_index('customer').add(pd.pivot_table(df2,index='customer',columns='item?',values='quantity')).fillna(df.set_index('customer')).astype(int) print(df['customer']) It gives me the following error: Traceback (most recent call last): File "C:\Users\Inzamelhelden\twilio.py", line 8, in <module> print(df['customer']) File "C:\Users\Inzamelhelden\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\frame.py", line 3805, in __getitem__ indexer = self.columns.get_loc(key) File "C:\Users\Inzamelhelden\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\indexes\base.py", line 3802, in get_loc raise KeyError(key) from err KeyError: 'customer' The expected result would be the 'name' column of dataframe one. A: The customer column in the index so you need to use the below code for getting the value of df.reset_index()['customer']
Selecting the index column of a pandas dataframe
import pandas as pd df = pd.DataFrame({'customer' : ['customer2', 'customer1'], 'item1': [12, 13], 'item2' : [3, 28],'item3': [2, 1]}) df2 = pd.DataFrame({'customer' : ['customer1', 'customer2'], 'item?': ['item1', 'item1'], 'quantity' : [2, 5]}) df = df.set_index('customer').add(pd.pivot_table(df2,index='customer',columns='item?',values='quantity')).fillna(df.set_index('customer')).astype(int) print(df['customer']) It gives me the following error: Traceback (most recent call last): File "C:\Users\Inzamelhelden\twilio.py", line 8, in <module> print(df['customer']) File "C:\Users\Inzamelhelden\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\frame.py", line 3805, in __getitem__ indexer = self.columns.get_loc(key) File "C:\Users\Inzamelhelden\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\indexes\base.py", line 3802, in get_loc raise KeyError(key) from err KeyError: 'customer' The expected result would be the 'name' column of dataframe one.
[ "The customer column in the index so you need to use the below code for getting the value of\ndf.reset_index()['customer']\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074475249_pandas_python.txt
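A short sketch to round off the set_index question above: after set_index('customer') the labels live on the index, so they can be read back either directly or by resetting the index, as the answer does:

print(df.index.tolist())       # customer labels straight from the index
df = df.reset_index()          # or turn the index back into a 'customer' column
print(df['customer'])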
Q: Error while installing PyCaret: No module named 'numpy.distutils._msvccompiler' in numpy.distutils in windows I am getting a huge error while installing pycaret module in my system. Can anyone help me with this please. I am using python 3.10.8 Also, please suggest the best way to keep things clean version-wise. PyCaret always troubles me whenever I work on a new system. Thanks Adding more blocks of description in order to get approved to post this question. python setup.py bdist_wheel did not run successfully. exit code: 1 [268 lines of output] Running from numpy source directory. blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE blis_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blis not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE openblas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_3_10_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program 
Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE accelerate_info: NOT AVAILABLE No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE blis_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blis not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE openblas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program 
Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_3_10_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE accelerate_info: NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Temp\pip-install-7l7kgjyr\numpy_4169e359f16040cf9da6b18e8f4a3b31\numpy\distutils\system_info.py:690: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. self.calc_info() blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Temp\pip-install-7l7kgjyr\numpy_4169e359f16040cf9da6b18e8f4a3b31\numpy\distutils\system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. self.calc_info() blas_src_info: NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Temp\pip-install-7l7kgjyr\numpy_4169e359f16040cf9da6b18e8f4a3b31\numpy\distutils\system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. self.calc_info() NOT AVAILABLE 'svnversion' is not recognized as an internal or external command, operable program or batch file. 
non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE openblas_lapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE openblas_clapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas,lapack not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE flame_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries flame not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE atlas_3_10_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program 
Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE atlas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' 
in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE lapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Temp\pip-install-7l7kgjyr\numpy_4169e359f16040cf9da6b18e8f4a3b31\numpy\distutils\system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): lapack_src_info: NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Temp\pip-install-7l7kgjyr\numpy_4169e359f16040cf9da6b18e8f4a3b31\numpy\distutils\system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\setuptools\_distutils\dist.py:262: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running install C:\Users\Subrat Mahim Saxena\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src ''' A: Try installing/updating C++ build tools https://visualstudio.microsoft.com/visual-cpp-build-tools/ it worked for me
Error while installing PyCaret: No module named 'numpy.distutils._msvccompiler' in numpy.distutils in windows
I am getting a huge error while installing pycaret module in my system. Can anyone help me with this please. I am using python 3.10.8 Also, please suggest the best way to keep things clean version-wise. PyCaret always troubles me whenever I work on a new system. Thanks Adding more blocks of description in order to get approved to post this question. python setup.py bdist_wheel did not run successfully. exit code: 1 [268 lines of output] Running from numpy source directory. blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE blis_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blis not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE openblas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_3_10_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE 
atlas_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE accelerate_info: NOT AVAILABLE No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE blis_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blis not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE openblas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_3_10_blas_info: No 
module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE accelerate_info: NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Temp\pip-install-7l7kgjyr\numpy_4169e359f16040cf9da6b18e8f4a3b31\numpy\distutils\system_info.py:690: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. self.calc_info() blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Temp\pip-install-7l7kgjyr\numpy_4169e359f16040cf9da6b18e8f4a3b31\numpy\distutils\system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. self.calc_info() blas_src_info: NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Temp\pip-install-7l7kgjyr\numpy_4169e359f16040cf9da6b18e8f4a3b31\numpy\distutils\system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. self.calc_info() NOT AVAILABLE 'svnversion' is not recognized as an internal or external command, operable program or batch file. 
non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE openblas_lapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE openblas_clapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas,lapack not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE flame_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries flame not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE atlas_3_10_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program 
Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE atlas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' 
in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\libs <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE lapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\\libs'] NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Temp\pip-install-7l7kgjyr\numpy_4169e359f16040cf9da6b18e8f4a3b31\numpy\distutils\system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): lapack_src_info: NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Temp\pip-install-7l7kgjyr\numpy_4169e359f16040cf9da6b18e8f4a3b31\numpy\distutils\system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): NOT AVAILABLE C:\Users\Subrat Mahim Saxena\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\setuptools\_distutils\dist.py:262: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running install C:\Users\Subrat Mahim Saxena\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src '''
[ "Try installing/updating C++ build tools\nhttps://visualstudio.microsoft.com/visual-cpp-build-tools/\nit worked for me\n" ]
[ 0 ]
[]
[]
[ "pycaret", "python" ]
stackoverflow_0074359405_pycaret_python.txt
Q: Ceil any number in Python this way - 2.3, 2.1, 1.9, 2.6 become 2.5, 2.5, 2, 3, i.e. round up to the nearest multiple of 0.5 Default ceiling doesn't work this way. Ceil should work in this way - Example 2 - 3.1, 4.5, 5.9 after ceiling - 3.5, 4.5, 6 A:
import math

def roundOffnumber(number):
    return (math.ceil(number * 2)) / 2
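For reference, the sample values from the question map to the expected outputs with this helper (a quick check of the formula above; nothing is assumed beyond it):

import math

def roundOffnumber(number):
    # Round up to the nearest multiple of 0.5.
    return math.ceil(number * 2) / 2

for x in [2.3, 2.1, 1.9, 2.6, 3.1, 4.5, 5.9]:
    print(x, '->', roundOffnumber(x))
# 2.3 -> 2.5, 2.1 -> 2.5, 1.9 -> 2.0, 2.6 -> 3.0, 3.1 -> 3.5, 4.5 -> 4.5, 5.9 -> 6.0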
Ceil any number in Python this way - 2.3, 2.1, 1.9, 2.6 become 2.5, 2.5, 2, 3, i.e. round up to the nearest multiple of 0.5
Default ceiling doesn't work this way. Ceil should work in this way - Example 2 - 3.1, 4.5, 5.9 after ceiling - 3.5, 4.5, 6
[ "import math\n\ndef roundOffnumber(number):\n    return (math.ceil(number * 2)) / 2\n" ]
[ 1 ]
[]
[]
[ "ceil", "python" ]
stackoverflow_0074475229_ceil_python.txt
Q: creating an admin user using django I was creating an admin user account, and when it got to creating the password my keys stopped working!! I even rebooted my system and started from the top, but it happened again when I tried to create the password for the Django admin user account. A: The prompt for the password when you use the createsuperuser command [Django-doc] does not show the entered characters, for privacy and security concerns. Just like a password box in a browser does not show the password. You thus enter the password and hit Enter to enter the password.
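If typing into the invisible prompt is still confusing, one workaround (a sketch of my own, not part of the answer above, assuming the default user model; the credentials are placeholders) is to create the superuser programmatically from the Django shell:

# Run inside "python manage.py shell" for your project.
from django.contrib.auth import get_user_model

User = get_user_model()
# Placeholder credentials - replace them with your own values.
User.objects.create_superuser('admin', 'admin@example.com', 'choose-a-strong-password')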
creating an admin user using django
I was creating an admin user account, and when it got to creating the password my keys stopped working!! I even rebooted my system and started from the top, but it happened again when I tried to create the password for the Django admin user account.
[ "The prompt for the password when you use the createsuperuser command [Django-doc] does not show the entered characters, for privacy and security concerns. Just like a password box in a browser does not show the password.\nYou thus enter the password and hit Enter to enter the password.\n" ]
[ 0 ]
[]
[]
[ "django", "manage.py", "python" ]
stackoverflow_0074475258_django_manage.py_python.txt
Q: How can I avoid setting the pk in my URL when deleting certain data? My API was set to api/barrel/details/<int:pk> originally, but I want to move the delete function to api/barrel (which only has get and post functions) without passing the pk.

class BarrelAPIView(APIView):
    def get(self, request):
        barrel = Barrel.objects.all()  # queryset
        serializer = BarrelSerializer(barrel, many=True)
        return Response(serializer.data)

    def post(self, request):
        serializer = BarrelSerializer(data=request.data)
        if serializer.is_valid():
            serializer.save()
            return Response(serializer.data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)

    def delete(self, request):
        try:
            data = request.data
            Barrel.objects.filter(code=data['code']).delete()
            return Response(status=status.HTTP_204_NO_CONTENT)
        except Exception as error:
            return Response(status=status.HTTP_400_BAD_REQUEST)

It can be done in Postman by passing the 'code', but when I try it in the REST framework's default browsable API, the delete button shows up but nothing happens after that. A: You should check out: https://www.django-rest-framework.org/api-guide/generic-views/#mixins - in the long run it will make your life easier. The URL could have any structure you like. I would also suggest referring to Two Scoops of Django, which introduces you to various tips, tricks, patterns, code snippets, and techniques.
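For completeness, a minimal sketch of how a view like the one above could be mounted at api/barrel without a pk in the URL (the import path barrels.views is an assumption about the project layout):

# urls.py - hypothetical wiring; adjust the import to wherever BarrelAPIView actually lives.
from django.urls import path
from barrels.views import BarrelAPIView

urlpatterns = [
    # GET, POST and DELETE all hit the same view; DELETE identifies the barrel
    # by the 'code' field in the request body instead of a pk in the URL.
    path('api/barrel', BarrelAPIView.as_view()),
]

Note that request bodies on DELETE are not reliably supported by every client or proxy, which may be why passing 'code' works from Postman but not elsewhere; accepting the code as a query parameter (request.query_params['code']) is a possible alternative.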
How can I avoid setting the pk in my URL when deleting certain data?
My api was set to api/barrel/details/<int:pk> originally but i want to make the delete function into api/barrel (which only have get and post function) without parsing the pk class BarrelAPIView(APIView): def get(self,request): barrel = Barrel.objects.all() #queryset serializer = BarrelSerializer(barrel, many=True) return Response(serializer.data) def post(self,request): serializer = BarrelSerializer(data=request.data) if serializer.is_valid(): serializer.save() return Response(serializer.data, status=status.HTTP_201_CREATED) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) def delete(self,request): try: data = request.data Barrel.objects.filter(code=data['code']).delete() return Response(status=status.HTTP_204_NO_CONTENT) except Exception as error: return Response( status=status.HTTP_400_BAD_REQUEST) It can be done on postman by parsing the 'code'. but when i try on restframework default api browser the delete button showed up but nothing happens after that
[ "You should check out:\nhttps://www.django-rest-framework.org/api-guide/generic-views/#mixins\nin the long run it will make your life easier..\nThe url could have any structure you like.\nI would also suggest to refer to:\nTwo scoops of Django that introduce you to various tips, tricks, patterns, code snippets, and techniques\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "python" ]
stackoverflow_0074472180_django_django_rest_framework_python.txt
Q: os.path.join('BASE_DIR','template') problem When I run following code In settings.py TEMPLATES = [{ 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join('BASE_DIR','template')], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] it shows error template not found but when I execute following code in settings.py TEMPLATES = [{ 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': ['templates'], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] this works fine! What does os.path.join('BASE_DIR','template') actually mean? A: You can try the following steps:. Declare 'import os' command in the top header section of settings.py file. While defining DIRS': [os.path.join(BASE_DIR,'template')] , take the os from auto suggestions instead of typing.
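To answer the closing question directly: os.path.join() joins path components with the platform's separator, so os.path.join(BASE_DIR, 'templates') expands to an absolute path such as /path/to/project/templates. In the failing snippet the first argument is the quoted string 'BASE_DIR', so Django is told to look in a relative folder literally named BASE_DIR, which does not exist - hence the "template not found" error. The bare 'templates' entry presumably works because a relative path is resolved from the directory the development server is started in (typically the project root). A sketch of the intended setting, with BASE_DIR as a variable rather than a string (make sure the folder name, template or templates, matches the directory that actually exists):

import os

# Already present in a generated settings.py; newer projects define it with pathlib.Path instead.
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        # os.path.join(BASE_DIR, 'templates')   -> '/path/to/project/templates'
        # os.path.join('BASE_DIR', 'templates') -> 'BASE_DIR/templates' (wrong)
        'DIRS': [os.path.join(BASE_DIR, 'templates')],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]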
os.path.join('BASE_DIR','template') problem
When I run following code In settings.py TEMPLATES = [{ 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join('BASE_DIR','template')], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] it shows error template not found but when I execute following code in settings.py TEMPLATES = [{ 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': ['templates'], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] this works fine! What does os.path.join('BASE_DIR','template') actually mean?
[ "You can try the following steps:.\n\nDeclare 'import os' command in the top header section of settings.py file.\n\nWhile defining DIRS': [os.path.join(BASE_DIR,'template')] , take the os from auto suggestions instead of typing.\n\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_settings", "python" ]
stackoverflow_0064261167_django_django_settings_python.txt
Q: How to replace a number in a text file using regular expressions I have a text document with lots of lines that look something like this: some_string_of_changing_length 1234.56000000 99997.65723122992939764 4.63700 text -d NAME -r I want to go line by line and change only the 4th entry (the number 4.63700 in this example) and replace it with another number. I think I have to do it using regular expressions, but I'm not sure how to ask to replace the 4th entry only. I would be happy to do this in either Python or bash - whichever is easiest. A: If you know the offset, the below will work for you (so there is no need for regex):

line = 'some_string_of_changing_length 1234.56000000 99997.65723122992939764 4.63700 text -d NAME -r'
new_val = 'I am the new val'
parts = line.split(' ')
parts[3] = new_val
line = ' '.join(parts)
print(line)
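Building on that answer, here is a short sketch of rewriting an entire file line by line (the file names and replacement value are placeholders, and it assumes fields are separated by single spaces, as in the sample line):

new_val = '9.99999'  # placeholder replacement value

with open('input.txt') as src, open('output.txt', 'w') as dst:
    for line in src:
        parts = line.rstrip('\n').split(' ')
        if len(parts) > 3:  # only touch lines that actually have a 4th entry
            parts[3] = new_val
        dst.write(' '.join(parts) + '\n')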
How to replace a number in a text file using regular expressions
I have a text document with lots of lines that look something like this: some_string_of_changing_length 1234.56000000 99997.65723122992939764 4.63700 text -d NAME -r I want to go line by line and change only the 4th entry (the number 4.63700 in this example) and replace it with another number. I think I have to do it using regular expressions, but I'm not sure how to ask to replace the 4th entry only. I would be happy to do this in either Python or bash - whichever is easiest.
[ "If you know the offset - the below will work for you (so there is no need for regex)\nline = 'some_string_of_changing_length 1234.56000000 99997.65723122992939764 4.63700 text -d NAME -r'\nnew_val = 'I am the new val'\nparts = line.split(' ')\nparts[3] = new_val\nline = ' '.join(parts)\nprint(line)\n\n" ]
[ 1 ]
[]
[]
[ "bash", "python" ]
stackoverflow_0074475334_bash_python.txt
Q: How to find the cause of CancelledError in asyncio? I have a big project which depends some third-party libraries, and sometimes its execution gets interrupted by a CancelledError. To demonstrate the issue, let's look at a small example: import asyncio async def main(): task = asyncio.create_task(foo()) # Cancel the task in 1 second. loop = asyncio.get_event_loop() loop.call_later(1.0, lambda: task.cancel()) await task async def foo(): await asyncio.sleep(999) if __name__ == '__main__': asyncio.run(main()) Traceback: Traceback (most recent call last): File "/Users/ss/Library/Application Support/JetBrains/PyCharm2021.2/scratches/async.py", line 19, in <module> asyncio.run(main()) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/runners.py", line 43, in run return loop.run_until_complete(main) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete return future.result() concurrent.futures._base.CancelledError As you can see, there's no information about the place the CancelledError originates from. How do I find out the exact cause of it? One approach that I came up with is to place a lot of try/except blocks which would catch the CancelledError and narrow down the place where it comes from. But that's quite tedious. A: I've solved it by applyting a decorator to every async function in the project. The decorator's job is simple - log a message when a CancelledError is raised from the function. This way we will see which functions (and more importantly, in which order) get cancelled. Here's the decorator code: def log_cancellation(f): async def wrapper(*args, **kwargs): try: return await f(*args, **kwargs) except asyncio.CancelledError: print(f"Cancelled {f}") raise return wrapper In order to add this decorator everywhere I used regex. Find: (.*)(async def). Replace with: $1@log_cancellation\n$1$2. Also to avoid importing log_cancellation in every file I modified the builtins: builtins.log_cancellation = log_cancellation
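Applied to the short reproducer at the top of this question, the decorator from this answer would be used like the sketch below (nothing new is assumed beyond placing @log_cancellation on the coroutine; the printed message identifies which coroutine was cancelled):

import asyncio

def log_cancellation(f):
    async def wrapper(*args, **kwargs):
        try:
            return await f(*args, **kwargs)
        except asyncio.CancelledError:
            print(f"Cancelled {f}")
            raise
    return wrapper

@log_cancellation
async def foo():
    await asyncio.sleep(999)

async def main():
    task = asyncio.create_task(foo())
    loop = asyncio.get_event_loop()
    loop.call_later(1.0, lambda: task.cancel())
    await task

if __name__ == '__main__':
    asyncio.run(main())

# Prints "Cancelled <function foo at 0x...>" just before the CancelledError propagates.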
How to find the cause of CancelledError in asyncio?
I have a big project which depends some third-party libraries, and sometimes its execution gets interrupted by a CancelledError. To demonstrate the issue, let's look at a small example: import asyncio async def main(): task = asyncio.create_task(foo()) # Cancel the task in 1 second. loop = asyncio.get_event_loop() loop.call_later(1.0, lambda: task.cancel()) await task async def foo(): await asyncio.sleep(999) if __name__ == '__main__': asyncio.run(main()) Traceback: Traceback (most recent call last): File "/Users/ss/Library/Application Support/JetBrains/PyCharm2021.2/scratches/async.py", line 19, in <module> asyncio.run(main()) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/runners.py", line 43, in run return loop.run_until_complete(main) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete return future.result() concurrent.futures._base.CancelledError As you can see, there's no information about the place the CancelledError originates from. How do I find out the exact cause of it? One approach that I came up with is to place a lot of try/except blocks which would catch the CancelledError and narrow down the place where it comes from. But that's quite tedious.
[ "I've solved it by applyting a decorator to every async function in the project. The decorator's job is simple - log a message when a CancelledError is raised from the function. This way we will see which functions (and more importantly, in which order) get cancelled.\nHere's the decorator code:\ndef log_cancellation(f):\n async def wrapper(*args, **kwargs):\n try:\n return await f(*args, **kwargs)\n except asyncio.CancelledError:\n print(f\"Cancelled {f}\")\n raise\n return wrapper\n\nIn order to add this decorator everywhere I used regex. Find: (.*)(async def). Replace with: $1@log_cancellation\\n$1$2.\nAlso to avoid importing log_cancellation in every file I modified the builtins:\nbuiltins.log_cancellation = log_cancellation\n" ]
[ 0 ]
[ "The rich package has helped us to identify the cause of CancelledError, without much code change required.\nfrom rich.console import Console\n\nconsole = Console()\n\nif __name__ == \"__main__\":\n try:\n asyncio.run(main()) # replace main() with your entrypoint\n except BaseException as e:\n console.print_exception(show_locals=True)\n\n" ]
[ -1 ]
[ "python", "python_asyncio" ]
stackoverflow_0071324885_python_python_asyncio.txt
Q: Trying to get PyCharm to work, keep getting "No Python interpreter selected" I'm trying to learn Python and decided to use PyCharm. When I try to start a new project I get a dialog that says "No Python interpreter selected". It has a drop down to select a interpreter, but the drop down is empty. A: Your problem probably is that you haven't installed python. Meaning that, if you are using Windows, you have not downloaded the installer for Windows, that you can find on the official Python website. In case you have, chances are that PyCharm cannot find your Python installation because its not in the default location, which is usually C:\Python27 or C:\Python33 (for me at least). So, if you have installed Python and it still gives this error, then there can be two things that have happened: You use a virtualenv and that virtualenv has been deleted or the filepath changed. In this case, you will have to find proceed to the next part of this answer. Your python installation is not in its default place, in which case you will need to find its location, and locate the python.exe file. Once you have located the necessary binaries, you will need to tell PyCharm were to look: Open your settings dialogue CTRL + ALT + S Then you will need to type in interpreter in the search box: As you can see above, you will need to go to Project Interpreter and then go to Python Interpreter. The location has been selected for you in the above image. To the side you will see a couple of options as icons, click the big + icon, then click on local, because your interpreter is on this computer. This will open up a dialogue box. Make sure to select the python.exe file of that directory, do not give pycharm the whole directory. It just wants the interpreter. A: Go to File->Settings->Project Settings->Project Interpreter->Python Interpreters There will be a "+" sign on the right side. Navigate to your python binary, PyCharm will figure out the rest. A: This situation occurred to me when I uninstalled a method and tried to reinstall it. My very same interpreter, which worked before, suddenly stopped working. And this error occurred. I tried restarting my PC, reinstalling Pycharm, invalidating caches, nothing worked. Then I went here to reinstall the interpreter: https://www.python.org/downloads/ When you install it, there's an option to fix the python.exe interpreter. Click that. My IDE went back to normal working conditions. A: During the install of python make sure you have "Install for all users" selected. Uninstall python and do a custom install and check "Install for all users". A: Even I got the same issue and my mistake was that I didn't download python MSI file. You will get it here: https://www.python.org/downloads/ Once you download the msi, run the setup and that will solve the problem. After that you can go to File->Settings->Project Settings->Project Interpreter->Python Interpreters and select the python.exe file. (This file will be available at c:\Python34) Select the python.exe file. That's it. A: for mac I can tell you that first you have to check your path by executing this command which python or which python3 then you have to configure it in your pycharm. pycharm-->preferences-->gear button-->add.. click on system interpreter--> then on ... then you search where your python version is installed once it is done then you have to configure for your project click on edit configuration then choose the python interpreter A: If you are using Ubuntu, Python has already been downloaded on your PC. 
so, go to -> ctrl + alt + s -> search interpreter -> go to project interpreter than select Python 3.6 in the dropdown menu. Edit: If there is no Python interpreter in drop-down menu, you should click the gear icon that on the right of the drop-down menu --> add --> select an interpreter. (on PyCharm 2018.2.4 Community Edition) A: I got the same issue when i newly installed pycharm in my windows 10 machine. download python setup install this solved my problem. for more help visit goodluck During the install of python make sure you have "Install for all users" selected. Uninstall python and do a custom install and check "Install for all users" A: In my case, there are several interpreters, but I have to manually add them. To the right of where you see "No Interpreters", there is a gear icon. Click the gear icon -> Click "Add...", then you can add the ones you need. A: In Linux, it was solved by opening PyCharm from the terminal and leaving it open. After that, I was able to choose the correct interpreter in preferences. In my case, linked to a virtual environment (venv). A: You don't have Python Interpreter installed on your machine whereas Pycharm is looking for a Python interpreter, just go to https://www.python.org/downloads/ and download python and then create a new project, you'll be all set! A: I had the same problem and stumbled onto this solution. I ran PyCharm (as administrator, though not sure if necessary). After PyCharm has completely loaded (green tick mark top right), see bottom right. Click on it. An interface will open. In my case the path was already there. I just clicked OK and all was fine. closed PyCharm and ran it again normally. Still all fine. A: I has to close PyCharm, delete the .idea folder then open PyCharm again.
Trying to get PyCharm to work, keep getting "No Python interpreter selected"
I'm trying to learn Python and decided to use PyCharm. When I try to start a new project I get a dialog that says "No Python interpreter selected". It has a drop-down to select an interpreter, but the drop-down is empty.
[ "Your problem probably is that you haven't installed python. Meaning that, if you are using Windows, you have not downloaded the installer for Windows, that you can find on the official Python website.\nIn case you have, chances are that PyCharm cannot find your Python installation because its not in the default location, which is usually C:\\Python27 or C:\\Python33 (for me at least).\nSo, if you have installed Python and it still gives this error, then there can be two things that have happened:\n\nYou use a virtualenv and that virtualenv has been deleted or the filepath changed. In this case, you will have to find proceed to the next part of this answer.\nYour python installation is not in its default place, in which case you will need to find its location, and locate the python.exe file.\n\nOnce you have located the necessary binaries, you will need to tell PyCharm were to look:\n\nOpen your settings dialogue CTRL + ALT + S\nThen you will need to type in interpreter in the search box:\n\nAs you can see above, you will need to go to Project Interpreter and then go to Python Interpreter. The location has been selected for you in the above image.\nTo the side you will see a couple of options as icons, click the big + icon, then click on local, because your interpreter is on this computer.\nThis will open up a dialogue box. Make sure to select the python.exe file of that directory, do not give pycharm the whole directory. It just wants the interpreter.\n\n", "Go to File->Settings->Project Settings->Project Interpreter->Python Interpreters \nThere will be a \"+\" sign on the right side. Navigate to your python binary, PyCharm will figure out the rest.\n", "This situation occurred to me when I uninstalled a method and tried to reinstall it. My very same interpreter, which worked before, suddenly stopped working. And this error occurred.\nI tried restarting my PC, reinstalling Pycharm, invalidating caches, nothing worked.\nThen I went here to reinstall the interpreter:\nhttps://www.python.org/downloads/ \nWhen you install it, there's an option to fix the python.exe interpreter. Click that. My IDE went back to normal working conditions.\n", "During the install of python make sure you have \"Install for all users\" selected.\nUninstall python and do a custom install and check \"Install for all users\".\n", "Even I got the same issue and my mistake was that I didn't download python MSI file. You will get it here: https://www.python.org/downloads/\nOnce you download the msi, run the setup and that will solve the problem. After that you can go to File->Settings->Project Settings->Project Interpreter->Python Interpreters \nand select the python.exe file. (This file will be available at c:\\Python34)\nSelect the python.exe file. 
That's it.\n", "for mac I can tell you that first you have to check your path\nby executing this command\nwhich python or which python3\n\nthen you have to configure it in your pycharm.\npycharm-->preferences-->gear button-->add..\n\nclick on system interpreter--> then on ...\nthen you search where your python version is installed\n\nonce it is done then you have to configure for your project\nclick on edit configuration\n\nthen choose the python interpreter\n\n", "If you are using Ubuntu, Python has already been downloaded on your PC.\nso, go to -> ctrl + alt + s -> search interpreter -> go to project interpreter than select Python 3.6 in the dropdown menu.\nEdit: If there is no Python interpreter in drop-down menu, you should click the gear icon that on the right of the drop-down menu --> add --> select an interpreter.\n(on PyCharm 2018.2.4 Community Edition)\n", "I got the same issue when i newly installed pycharm in my windows 10 machine.\n\ndownload python setup \ninstall this solved my problem.\n\nfor more help visit\ngoodluck \n\nDuring the install of python make sure you have \"Install for all users\" selected. Uninstall python and do a custom install and check \"Install for all users\"\n\n", "In my case, there are several interpreters, but I have to manually add them.\nTo the right of where you see \"No Interpreters\", there is a gear icon. Click the gear icon -> Click \"Add...\", then you can add the ones you need.\n", "In Linux, it was solved by opening PyCharm from the terminal and leaving it open. After that, I was able to choose the correct interpreter in preferences. In my case, linked to a virtual environment (venv).\n", "You don't have Python Interpreter installed on your machine whereas Pycharm is looking for a Python interpreter, just go to https://www.python.org/downloads/\nand download python and then create a new project, you'll be all set!\n", "I had the same problem and stumbled onto this solution.\nI ran PyCharm (as administrator, though not sure if necessary).\nAfter PyCharm has completely loaded (green tick mark top right), see bottom right. Click on it.\nAn interface will open. In my case the path was already there. I just clicked OK and all was fine.\nclosed PyCharm and ran it again normally. Still all fine.\n", "I has to close PyCharm, delete the .idea folder then open PyCharm again.\n" ]
[ 67, 22, 4, 3, 2, 2, 1, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "pycharm", "python" ]
stackoverflow_0019645527_pycharm_python.txt
Q: Assign consequential values to a DataFrame from a numpy array based on a condition The task seems easy but I've been googling and experimenting for hours without any result. I can easily assign a 'static' value in such case or assign a value if I have two columns in the same DataFrame (of the same length, ofc) but I'm stuck with this situation. I need to assign a consequential value to a pandas DataFrame column from a numpy array based on a condition when the sizes of the DataFrame and the numpy.array are different. Here is the example: import pandas as pd import numpy as np if __name__ == "__main__": df = pd.DataFrame([np.nan, 1, np.nan, 1, np.nan, 1, np.nan]) arr = np.array([4, 5, 6]) i = iter(arr) df[0] = np.where(df[0] == 1, next(i), np.nan) print(df) The result is: 0 0 NaN 1 4.0 2 NaN 3 4.0 4 NaN 5 4.0 6 NaN But I need the result where consequential numbers from the numpy array are put in the DataFrame like: 0 0 NaN 1 4.0 2 NaN 3 5.0 4 NaN 5 6.0 6 NaN I appreciate any help. A: it's not the very efficient way but it will do the job. import pandas as pd import numpy as np def util(it, row): ele = next(it, None) return ele if ele is not None else row df = pd.DataFrame([np.nan, 1, np.nan, 1, np.nan, 1, np.nan]) arr = np.array([4, 5, 6]) it = iter(arr) df[0] = np.array(list(map(lambda r : util(it, r) if r == 1.0 else np.nan, df[0])))
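A shorter possibility — not from the original thread, and sketched here under the assumption that arr holds exactly one value for each row where the condition is true — is plain boolean-mask assignment, which places the values consecutively without an explicit iterator:

import pandas as pd
import numpy as np

df = pd.DataFrame([np.nan, 1, np.nan, 1, np.nan, 1, np.nan])
arr = np.array([4, 5, 6])

# Boolean indexing assigns arr element by element to the matching rows,
# so 4, 5 and 6 land on the three rows where df[0] == 1.
df.loc[df[0] == 1, 0] = arr
print(df)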
Assign consequential values to a DataFrame from a numpy array based on a condition
The task seems easy but I've been googling and experimenting for hours without any result. I can easily assign a 'static' value in such case or assign a value if I have two columns in the same DataFrame (of the same length, ofc) but I'm stuck with this situation. I need to assign a consequential value to a pandas DataFrame column from a numpy array based on a condition when the sizes of the DataFrame and the numpy.array are different. Here is the example: import pandas as pd import numpy as np if __name__ == "__main__": df = pd.DataFrame([np.nan, 1, np.nan, 1, np.nan, 1, np.nan]) arr = np.array([4, 5, 6]) i = iter(arr) df[0] = np.where(df[0] == 1, next(i), np.nan) print(df) The result is: 0 0 NaN 1 4.0 2 NaN 3 4.0 4 NaN 5 4.0 6 NaN But I need the result where consequential numbers from the numpy array are put in the DataFrame like: 0 0 NaN 1 4.0 2 NaN 3 5.0 4 NaN 5 6.0 6 NaN I appreciate any help.
[ "it's not the very efficient way but it will do the job.\nimport pandas as pd\nimport numpy as np\n\ndef util(it, row):\n ele = next(it, None)\n return ele if ele is not None else row\n\ndf = pd.DataFrame([np.nan, 1, np.nan, 1, np.nan, 1, np.nan])\narr = np.array([4, 5, 6])\nit = iter(arr)\n\ndf[0] = np.array(list(map(lambda r : util(it, r) if r == 1.0 else np.nan, df[0])))\n\n" ]
[ 2 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074474157_numpy_pandas_python.txt
Q: Hot to make pandas cut have first range equal to minimum value I have this dataframe: lst = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,3,3,3,3,3,3,3,3,3,3,3,3,3] ser = pd.Series(lst) df1 = pd.DataFrame(ser, columns=['Quantity']) When i check unique values from variable quantity i have the following distribution: df1.groupby(['Quantity'])['Quantity'].count() / sum ( df1['Quantity']) Quantity 0 0.741935 1 0.338710 2 0.016129 3 0.209677 Name: Quantity, dtype: float64 Because value 2 represents only 0.016 i want to create a new categorical variable that creates "bins" like: Quantity 0 1-2 3+ How the bins are created is not relevant, the rule of thumb is : If a number has low representation, it should be aggregated with the other values in a class (bin) . Other example: Quantity 0 2662035 1 1200 2 2 Could be converted in : Quantity 0 1+ A: You can define the bins the way you want in pandas.cut, by default the right part of the bins is uncluded: import numpy as np (pd.cut(df['Quantity'], bins=[-1, 0, 2, np.inf], labels=['0', '1-2', '3+']) .value_counts() ) Output: 0 57 1-2 29 3+ 5 Name: Quantity, dtype: int64 combining counts based on a threshold threshold = 0.05 c = df1['Quantity'].value_counts(sort=False).sort_index() group = c.div(c.sum()).gt(threshold).cumsum() (c.reset_index() .groupby(group) .agg({'index': lambda x: f'{x.iloc[0]}-{x.iloc[-1]}' if len(x)>1 else str(x.iloc[0]), 'Quantity': 'sum', }) .set_index('index') ) Output: Quantity index 0 46 1-2 22 3 13
How to make pandas cut have first range equal to minimum value
I have this dataframe: lst = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,3,3,3,3,3,3,3,3,3,3,3,3,3] ser = pd.Series(lst) df1 = pd.DataFrame(ser, columns=['Quantity']) When i check unique values from variable quantity i have the following distribution: df1.groupby(['Quantity'])['Quantity'].count() / sum ( df1['Quantity']) Quantity 0 0.741935 1 0.338710 2 0.016129 3 0.209677 Name: Quantity, dtype: float64 Because value 2 represents only 0.016 i want to create a new categorical variable that creates "bins" like: Quantity 0 1-2 3+ How the bins are created is not relevant, the rule of thumb is : If a number has low representation, it should be aggregated with the other values in a class (bin) . Other example: Quantity 0 2662035 1 1200 2 2 Could be converted in : Quantity 0 1+
[ "You can define the bins the way you want in pandas.cut, by default the right part of the bins is uncluded:\nimport numpy as np\n\n(pd.cut(df['Quantity'], bins=[-1, 0, 2, np.inf], labels=['0', '1-2', '3+'])\n .value_counts()\n)\n\nOutput:\n0 57\n1-2 29\n3+ 5\nName: Quantity, dtype: int64\n\ncombining counts based on a threshold\nthreshold = 0.05\nc = df1['Quantity'].value_counts(sort=False).sort_index()\n\ngroup = c.div(c.sum()).gt(threshold).cumsum()\n\n(c.reset_index()\n .groupby(group)\n .agg({'index': lambda x: f'{x.iloc[0]}-{x.iloc[-1]}' if len(x)>1 else str(x.iloc[0]),\n 'Quantity': 'sum',\n })\n .set_index('index')\n )\n\nOutput:\n Quantity\nindex \n0 46\n1-2 22\n3 13\n\n" ]
[ 2 ]
[]
[]
[ "cut", "pandas", "python" ]
stackoverflow_0074475324_cut_pandas_python.txt
Q: Extra feature values in dataset fragments After reading dataset with filters in dataset.fragments other values of filtered column is presented. Is this the expected behavior? import pyarrow.parquet as pq from pyarrow import csv path_ds = 'path/to/ds/' path_csv = 'path/to/csv/' read_options = csv.ReadOptions(autogenerate_column_names=True) parse_options = csv.ParseOptions(delimiter='|') with csv.open_csv(path_csv, parse_options=parse_options, read_options=read_options) as reader: for chunk in reader: tbl = pa.Table.from_batches([chunk]) pq.write_to_dataset( tbl, root_path=path_ds, partition_cols=['f0', 'f2'], use_legacy_dataset=False ) temp_dataset = pq.ParquetDataset( path_ds, use_legacy_dataset=False, filters=[('f0', '=', '01.09.2022'), ('f2', '=', 'code1')] ) print(temp_dataset.fragments) >>> [<pyarrow.dataset.ParquetFileFragment path=path/to/ds/f0=01.09.2022/f2=code1/008f64795a3640f3a5cab0273fc287b1-0.parquet partition=[f0=01.09.2022, f2='code1']>, >>> ... >>> <pyarrow.dataset.ParquetFileFragment path=path/to/ds/f0=01.09.2022/f2=code2/5c1225fae02a4226b62f3959f6a57cf0-0.parquet partition=[f0=01.09.2022, f2='code2']>, >>> ... A: According to the doc Predicates are expressed in disjunctive normal form (DNF), like [[('x', '=', 0), ...], ...]. DNF allows arbitrary boolean logical combinations of single column predicates. The innermost tuples each describe a single column predicate. The list of inner predicates is interpreted as a conjunction (AND), forming a more selective and multiple column predicate. Finally, the most outer list combines these filters as a disjunction (OR). It means if you want to filter the data based on f0 and f2, you need to do: filters=[[('f0', '=', '01.09.2022'), ('f2', '=', 'code1')]] (note the extra [])
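As a sketch of how the suggested fix plugs into the original call (keeping the placeholder path from the question), the filter becomes a list of lists so the two conditions are ANDed:

import pyarrow.parquet as pq

path_ds = 'path/to/ds/'

temp_dataset = pq.ParquetDataset(
    path_ds,
    use_legacy_dataset=False,
    # note the extra brackets: the inner list is an AND conjunction,
    # the outer list is an OR over such conjunctions
    filters=[[('f0', '=', '01.09.2022'), ('f2', '=', 'code1')]]
)
print(temp_dataset.fragments)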
Extra feature values in dataset fragments
After reading dataset with filters in dataset.fragments other values of filtered column is presented. Is this the expected behavior? import pyarrow.parquet as pq from pyarrow import csv path_ds = 'path/to/ds/' path_csv = 'path/to/csv/' read_options = csv.ReadOptions(autogenerate_column_names=True) parse_options = csv.ParseOptions(delimiter='|') with csv.open_csv(path_csv, parse_options=parse_options, read_options=read_options) as reader: for chunk in reader: tbl = pa.Table.from_batches([chunk]) pq.write_to_dataset( tbl, root_path=path_ds, partition_cols=['f0', 'f2'], use_legacy_dataset=False ) temp_dataset = pq.ParquetDataset( path_ds, use_legacy_dataset=False, filters=[('f0', '=', '01.09.2022'), ('f2', '=', 'code1')] ) print(temp_dataset.fragments) >>> [<pyarrow.dataset.ParquetFileFragment path=path/to/ds/f0=01.09.2022/f2=code1/008f64795a3640f3a5cab0273fc287b1-0.parquet partition=[f0=01.09.2022, f2='code1']>, >>> ... >>> <pyarrow.dataset.ParquetFileFragment path=path/to/ds/f0=01.09.2022/f2=code2/5c1225fae02a4226b62f3959f6a57cf0-0.parquet partition=[f0=01.09.2022, f2='code2']>, >>> ...
[ "According to the doc\n\nPredicates are expressed in disjunctive normal form (DNF), like [[('x', '=', 0), ...], ...]. DNF allows arbitrary boolean logical combinations of single column predicates. The innermost tuples each describe a single column predicate. The list of inner predicates is interpreted as a conjunction (AND), forming a more selective and multiple column predicate. Finally, the most outer list combines these filters as a disjunction (OR).\n\nIt means if you want to filter the data based on f0 and f2, you need to do:\nfilters=[[('f0', '=', '01.09.2022'), ('f2', '=', 'code1')]] (note the extra [])\n" ]
[ 1 ]
[]
[]
[ "pyarrow", "python" ]
stackoverflow_0074474939_pyarrow_python.txt
Q: How to convert nested dictionary to levelled Pandas Dataframe How to convert more than 3 level N nested dictionary to levelled dataframe? input_dict = { '.Stock': { '.No[0]': '3241512)', '.No[1]': '1111111111', '.No[2]': '444444444444', '.Version': '46', '.Revision': '78' }, '.Time': '12.11.2022' } what I expect: import pandas as pd expected_df = pd.DataFrame([{'level_0': '.Stock', 'level_1': '.No_0', "value": '3241512'}, {'level_0': '.Stock', 'level_1': '.No_1', "value": '1111111111',}, {'level_0': '.Stock', 'level_1': '.No_2', "value": '444444444444'}, {'level_0': '.Stock', 'level_1': '.Version', "value": '46'}, {'level_0': '.Stock', 'level_1': '.Revision', "value": '78'}, {'level_0': '.Time', "value": '12.11.2022'}]) index level_0 level_1 value 0 .Stock .No_0 3241512 1 .Stock .No_1 1111111111 2 .Stock .No_2 444444444444 3 .Stock .Version 46 4 .Stock .Revision 78 5 .Time NaN 12.11.2022 Firsly I need to convert nested dictionary to list of levelled dictionaries, than lastly convert list of dictionaries to dataframe. How can I convert, pls help me! I've already tried the code below but it doesn't show exactly the right result. pd.DataFrame(input_dict).unstack().to_frame().reset_index() A: You can first flatten your nested dictionary with a recursive function (see "Best way to get nested dictionary items"). def flatten(ndict): def key_value_pairs(d, key=[]): if not isinstance(d, dict): yield tuple(key), d else: for level, d_sub in d.items(): key.append(level) yield from key_value_pairs(d_sub, key) key.pop() return dict(key_value_pairs(ndict)) >>> input_dict = { '.Stock': { '.No[0]': '3241512)', '.No[1]': '1111111111', '.No[2]': '444444444444', '.Version': '46', '.Revision': '78' }, '.Time': '12.11.2022' } >>> d = flatten(input_dict) >>> d {('.Stock', '.No[0]'): '3241512)', ('.Stock', '.No[1]'): '1111111111', ('.Stock', '.No[2]'): '444444444444', ('.Stock', '.Version'): '46', ('.Stock', '.Revision'): '78', ('.Time',): '12.11.2022'} You then need to fill missing levels, as for the last row in your example. You can use zip_longest for the purpose and also stick the values to the last position. >>> from itertools import zip_longest >>> d = list(zip(*zip_longest(*d.keys()), d.values())) >>> d [('.Stock', '.No[0]', '3241512)'), ('.Stock', '.No[1]', '1111111111'), ('.Stock', '.No[2]', '444444444444'), ('.Stock', '.Version', '46'), ('.Stock', '.Revision', '78'), ('.Time', None, '12.11.2022')] Now you can create your dataframe: >>> pd.DataFrame(d) 0 1 2 0 .Stock .No[0] 3241512) 1 .Stock .No[1] 1111111111 2 .Stock .No[2] 444444444444 3 .Stock .Version 46 4 .Stock .Revision 78 5 .Time None 12.11.2022 A: I found solution, thanks for your comments:( def nesting_list_convert(in_dict,level=0): out_list = [] for k1, v1 in in_dict.items(): if isinstance(v1, dict): temp_list = nesting_list_convert(v1,level+1) for element in temp_list: temp_dict = {("level_"+str(level)) : k1} temp_dict.update(element) out_list.append(temp_dict) else: out_list.append({("level_"+str(level)) : k1,"value":v1}) return out_list out_df = pd.DataFrame(nesting_list_convert(input_dict)) out_df = out_df.reindex(sorted(out_df.columns), axis=1) index level_0 level_1 value 0 .Stock .No_0 3241512 1 .Stock .No_1 1111111111 2 .Stock .No_2 444444444444 3 .Stock .Version 46 4 .Stock .Revision 78 5 .Time NaN 12.11.2022 This solves 6' nested level of dictionary.
How to convert nested dictionary to levelled Pandas Dataframe
How to convert more than 3 level N nested dictionary to levelled dataframe? input_dict = { '.Stock': { '.No[0]': '3241512)', '.No[1]': '1111111111', '.No[2]': '444444444444', '.Version': '46', '.Revision': '78' }, '.Time': '12.11.2022' } what I expect: import pandas as pd expected_df = pd.DataFrame([{'level_0': '.Stock', 'level_1': '.No_0', "value": '3241512'}, {'level_0': '.Stock', 'level_1': '.No_1', "value": '1111111111',}, {'level_0': '.Stock', 'level_1': '.No_2', "value": '444444444444'}, {'level_0': '.Stock', 'level_1': '.Version', "value": '46'}, {'level_0': '.Stock', 'level_1': '.Revision', "value": '78'}, {'level_0': '.Time', "value": '12.11.2022'}]) index level_0 level_1 value 0 .Stock .No_0 3241512 1 .Stock .No_1 1111111111 2 .Stock .No_2 444444444444 3 .Stock .Version 46 4 .Stock .Revision 78 5 .Time NaN 12.11.2022 Firsly I need to convert nested dictionary to list of levelled dictionaries, than lastly convert list of dictionaries to dataframe. How can I convert, pls help me! I've already tried the code below but it doesn't show exactly the right result. pd.DataFrame(input_dict).unstack().to_frame().reset_index()
[ "You can first flatten your nested dictionary with a recursive function (see \"Best way to get nested dictionary items\").\ndef flatten(ndict):\n def key_value_pairs(d, key=[]):\n if not isinstance(d, dict):\n yield tuple(key), d\n else:\n for level, d_sub in d.items():\n key.append(level)\n yield from key_value_pairs(d_sub, key)\n key.pop()\n return dict(key_value_pairs(ndict))\n\n>>> input_dict = {\n '.Stock': {\n '.No[0]': '3241512)',\n '.No[1]': '1111111111',\n '.No[2]': '444444444444',\n '.Version': '46',\n '.Revision': '78'\n },\n '.Time': '12.11.2022'\n }\n>>> d = flatten(input_dict)\n>>> d\n{('.Stock', '.No[0]'): '3241512)',\n ('.Stock', '.No[1]'): '1111111111',\n ('.Stock', '.No[2]'): '444444444444',\n ('.Stock', '.Version'): '46',\n ('.Stock', '.Revision'): '78',\n ('.Time',): '12.11.2022'}\n\nYou then need to fill missing levels, as for the last row in your example. You can use zip_longest for the purpose and also stick the values to the last position.\n>>> from itertools import zip_longest\n>>> d = list(zip(*zip_longest(*d.keys()), d.values()))\n>>> d\n[('.Stock', '.No[0]', '3241512)'),\n ('.Stock', '.No[1]', '1111111111'),\n ('.Stock', '.No[2]', '444444444444'),\n ('.Stock', '.Version', '46'),\n ('.Stock', '.Revision', '78'),\n ('.Time', None, '12.11.2022')]\n\nNow you can create your dataframe:\n>>> pd.DataFrame(d)\n 0 1 2\n0 .Stock .No[0] 3241512)\n1 .Stock .No[1] 1111111111\n2 .Stock .No[2] 444444444444\n3 .Stock .Version 46\n4 .Stock .Revision 78\n5 .Time None 12.11.2022\n\n", "I found solution, thanks for your comments:(\ndef nesting_list_convert(in_dict,level=0):\n out_list = []\n for k1, v1 in in_dict.items():\n if isinstance(v1, dict):\n temp_list = nesting_list_convert(v1,level+1)\n for element in temp_list:\n temp_dict = {(\"level_\"+str(level)) : k1}\n temp_dict.update(element)\n out_list.append(temp_dict)\n else:\n out_list.append({(\"level_\"+str(level)) : k1,\"value\":v1})\nreturn out_list\n\nout_df = pd.DataFrame(nesting_list_convert(input_dict))\nout_df = out_df.reindex(sorted(out_df.columns), axis=1)\n\n\n\n\n\nindex\nlevel_0\nlevel_1\nvalue\n\n\n\n\n0\n.Stock\n.No_0\n3241512\n\n\n1\n.Stock\n.No_1\n1111111111\n\n\n2\n.Stock\n.No_2\n444444444444\n\n\n3\n.Stock\n.Version\n46\n\n\n4\n.Stock\n.Revision\n78\n\n\n5\n.Time\nNaN\n12.11.2022\n\n\n\n\nThis solves 6' nested level of dictionary.\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "dictionary", "pandas", "python" ]
stackoverflow_0074471768_dataframe_dictionary_pandas_python.txt
Q: Strange exec scoping rule for list comprehension with a filter condition It seems like when you execute a block of text using exec, the variable you define along the way isn't available in all contexts. I've detected this when using list comprehension with a filter condition. There seems to be a bug with the scope of the filter condition. Tested on Python 3.8, 3.9, and 3.10. Example of text that seems always to work: a = [1, 2] b = [i for i in a] Example of text that often fails: a = [1, 2] b = [i for i in a if i in a] The extra if i in a often results in NameError: name 'a' is not defined. Examples of exec successes and failures In [25]: from pathlib import Path In [26]: Path("execwrap.py").write_text(""" ...: def execwrap(*args, **kwargs): exec(*args, **kwargs) ...: """); In [27]: import execwrap In [28]: exec("a=[1,2];b=[i for i in a if i in a]") In [29]: execwrap.execwrap("a=[1,2];b=[i for i in a if i in a]") --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-29-fe8166128fb2> in <module> ----> 1 execwrap.execwrap("a=[1,2];b=[i for i in a if i in a]") ~\execwrap.py in execwrap(*args, **kwargs) 1 ----> 2 def execwrap(*args, **kwargs): exec(*args, **kwargs) ~\execwrap.py in <module> ~\execwrap.py in <listcomp>(.0) NameError: name 'a' is not defined In [30]: execwrap.execwrap("a=[1,2];b=[i for i in a if i in a]", {}, {}) --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-30-06a3e90e79c1> in <module> ----> 1 execwrap.execwrap("a=[1,2];b=[i for i in a if i in a]", {}, {}) ~\execwrap.py in execwrap(*args, **kwargs) 1 ----> 2 def execwrap(*args, **kwargs): exec(*args, **kwargs) <string> in <module> <string> in <listcomp>(.0) NameError: name 'a' is not defined In [31]: execwrap.execwrap("a=[1,2];b=[i for i in a if i in a]", globals(), {}) In [32]: execwrap.execwrap("a=[1,2];b=[i for i in a]") A: Workaround: I have since developed neval as a workaround for class-definition scoping. Neval is an alternative scoping evaluator with some additional features not available through exec and eval, such as: explicit separation of staging and readonly namespaces returning the value of the last statement of your code (no more distinction between evaluation and execution) allowing the stacktrace to access the code text (for better error reporting) Answer: It seems like testing the scoping rules in an interactive session added to the confusion. The variables a got pushed to globals() after the first successful run of exec("a=[1,2];b=[i for i in a if i in a]"). From there on, "a=[1,2];b=[i for i in a if i in a]" can successfully be execed when providing both globals and locals, in the case of globals=globals(). This is contrary to what is expected from the docs, as @user2357112 commented, the other relevant weird limitation is "If exec gets two separate objects as globals and locals, the code will be executed as if it were embedded in a class definition." The class-definition-scoping should produce a failure, but since globals() sneakily adds an a to the scope, the if i in a section successfully resolves to that a. 
This answer perfectly explains class-definition-scope: http://stackoverflow.com/a/39647647/1490584 Here is an example illustrating the scope failures using a class definition: In [1]: class _: ...: a = [1] ...: b = [i for i in a] ...: In [2]: class _: ...: a = [1] ...: b = [i for i in a if a] ...: --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-2-430a1ef068c1> in <module> ----> 1 class _: 2 a = [1] 3 b = [i for i in a if a] 4 <ipython-input-2-430a1ef068c1> in _() 1 class _: 2 a = [1] ----> 3 b = [i for i in a if a] 4 <ipython-input-2-430a1ef068c1> in <listcomp>(.0) 1 class _: 2 a = [1] ----> 3 b = [i for i in a if a] 4 NameError: name 'a' is not defined
Strange exec scoping rule for list comprehension with a filter condition
It seems like when you execute a block of text using exec, the variable you define along the way isn't available in all contexts. I've detected this when using list comprehension with a filter condition. There seems to be a bug with the scope of the filter condition. Tested on Python 3.8, 3.9, and 3.10. Example of text that seems always to work: a = [1, 2] b = [i for i in a] Example of text that often fails: a = [1, 2] b = [i for i in a if i in a] The extra if i in a often results in NameError: name 'a' is not defined. Examples of exec successes and failures In [25]: from pathlib import Path In [26]: Path("execwrap.py").write_text(""" ...: def execwrap(*args, **kwargs): exec(*args, **kwargs) ...: """); In [27]: import execwrap In [28]: exec("a=[1,2];b=[i for i in a if i in a]") In [29]: execwrap.execwrap("a=[1,2];b=[i for i in a if i in a]") --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-29-fe8166128fb2> in <module> ----> 1 execwrap.execwrap("a=[1,2];b=[i for i in a if i in a]") ~\execwrap.py in execwrap(*args, **kwargs) 1 ----> 2 def execwrap(*args, **kwargs): exec(*args, **kwargs) ~\execwrap.py in <module> ~\execwrap.py in <listcomp>(.0) NameError: name 'a' is not defined In [30]: execwrap.execwrap("a=[1,2];b=[i for i in a if i in a]", {}, {}) --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-30-06a3e90e79c1> in <module> ----> 1 execwrap.execwrap("a=[1,2];b=[i for i in a if i in a]", {}, {}) ~\execwrap.py in execwrap(*args, **kwargs) 1 ----> 2 def execwrap(*args, **kwargs): exec(*args, **kwargs) <string> in <module> <string> in <listcomp>(.0) NameError: name 'a' is not defined In [31]: execwrap.execwrap("a=[1,2];b=[i for i in a if i in a]", globals(), {}) In [32]: execwrap.execwrap("a=[1,2];b=[i for i in a]")
[ "Workaround:\nI have since developed neval as a workaround for class-definition scoping. Neval is an alternative scoping evaluator with some additional features not available through exec and eval, such as:\n\nexplicit separation of staging and readonly namespaces\nreturning the value of the last statement of your code (no more distinction between evaluation and execution)\nallowing the stacktrace to access the code text (for better error reporting)\n\nAnswer:\nIt seems like testing the scoping rules in an interactive session added to the confusion. The variables a got pushed to globals() after the first successful run of exec(\"a=[1,2];b=[i for i in a if i in a]\"). From there on, \"a=[1,2];b=[i for i in a if i in a]\" can successfully be execed when providing both globals and locals, in the case of globals=globals(). This is contrary to what is expected from the docs, as @user2357112 commented,\n\nthe other relevant weird limitation is \"If exec gets two separate objects as globals and locals, the code will be executed as if it were embedded in a class definition.\"\n\nThe class-definition-scoping should produce a failure, but since globals() sneakily adds an a to the scope, the if i in a section successfully resolves to that a.\nThis answer perfectly explains class-definition-scope: http://stackoverflow.com/a/39647647/1490584\nHere is an example illustrating the scope failures using a class definition:\nIn [1]: class _:\n ...: a = [1]\n ...: b = [i for i in a]\n ...:\n\nIn [2]: class _:\n ...: a = [1]\n ...: b = [i for i in a if a]\n ...:\n---------------------------------------------------------------------------\nNameError Traceback (most recent call last)\n<ipython-input-2-430a1ef068c1> in <module>\n----> 1 class _:\n 2 a = [1]\n 3 b = [i for i in a if a]\n 4\n\n<ipython-input-2-430a1ef068c1> in _()\n 1 class _:\n 2 a = [1]\n----> 3 b = [i for i in a if a]\n 4\n\n<ipython-input-2-430a1ef068c1> in <listcomp>(.0)\n 1 class _:\n 2 a = [1]\n----> 3 b = [i for i in a if a]\n 4\n\nNameError: name 'a' is not defined\n\n" ]
[ 1 ]
[]
[]
[ "eval", "python" ]
stackoverflow_0074457440_eval_python.txt
Q: How to import functions from a different folder in python? Q1: So let's say I have 2 folders and some files in them like this: root ├── Folder │   └── file.py └── Folder1 └── file2.py Let's say that I have a function in file.py named function() and I want to use it in file2.py. How can I make this happen? Q2: If file.py contains 5 functions, and I want to use them at any time in file2.py. How do I do that? Is it any different to the answer in the previous question? function() function1() function2() function3() function4() I've tried something with init.py and PYTHONPATH and it didn't work so I've decided to start from the begining. A: Found the answer: #You write this code in file2.py #This imports the whole file2 import numpy as np import sys sys.path.insert(0, "../Folder") import file.py as U def main(): s = U.log_sig(0.5) if __name__ == "__main__": main Or if you like to import only function() from file.py then: from file import function s = function()
How to import functions from a different folder in python?
Q1: So let's say I have 2 folders and some files in them like this: root ├── Folder │   └── file.py └── Folder1 └── file2.py Let's say that I have a function in file.py named function() and I want to use it in file2.py. How can I make this happen? Q2: If file.py contains 5 functions, and I want to use them at any time in file2.py. How do I do that? Is it any different to the answer in the previous question? function() function1() function2() function3() function4() I've tried something with init.py and PYTHONPATH and it didn't work so I've decided to start from the begining.
[ "Found the answer:\n#You write this code in file2.py\n#This imports the whole file2\n\nimport numpy as np\nimport sys\n\nsys.path.insert(0, \"../Folder\")\nimport file.py as U\n\ndef main():\n s = U.log_sig(0.5)\n\nif __name__ == \"__main__\":\n main\n\nOr if you like to import only function() from file.py then:\nfrom file import function\n\ns = function()\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074474997_python.txt
Q: Alembic: alter column to JSON type I have initial migration setup, but I want to change a column from sqlalchemy.Text to sqlalchemy.JSON I followed this article https://amercader.net/blog/beware-of-json-fields-in-sqlalchemy/ column_foo = Column(mutable_json_type(dbtype=JSONB, nested=True), nullable=True) When I run alembic autogenerate it does not recognise any change, so I wrote it manually: import sqlalchemy as sa from sqlalchemy.dialects.postgresql import JSONB ... def upgrade() -> None: op.alter_column("some_table", "column_foo", existing_type=JSONB(), nullable=True) def downgrade() -> None: op.alter_column("some_table", "column_foo", existing_type=sa.TEXT(), nullable=True) NOTE: I also tried regular sqlalchemy.JSON in my ORM module, and in ALembic migriaon script. #orm script column_foo = Column(JSON, nullable=True) #migration script op.alter_column("some_table", "column_foo", existing_type=sa.JSON(), nullable=True) In both scenarios when I check metadata in my postgres database I still see TEXT type: CREATE TABLE public.some_table ( ... column_foo text NULL, Why does it not say JSON as a column type? A: Found a solution (postgresql_using): op.alter_column("some_table", "column_foo", type_=sa.JSON(), nullable=True,postgresql_using='column_foo::json')
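A sketch of how the fix from the answer might look inside a full migration, keeping the question's table and column names (the downgrade cast back to text is an assumption, not part of the original answer):

import sqlalchemy as sa
from alembic import op

def upgrade() -> None:
    op.alter_column(
        "some_table", "column_foo",
        existing_type=sa.TEXT(),
        type_=sa.JSON(),
        nullable=True,
        postgresql_using="column_foo::json",  # tells PostgreSQL how to cast the existing data
    )

def downgrade() -> None:
    op.alter_column(
        "some_table", "column_foo",
        existing_type=sa.JSON(),
        type_=sa.TEXT(),
        nullable=True,
        postgresql_using="column_foo::text",
    )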
Alembic: alter column to JSON type
I have initial migration setup, but I want to change a column from sqlalchemy.Text to sqlalchemy.JSON I followed this article https://amercader.net/blog/beware-of-json-fields-in-sqlalchemy/ column_foo = Column(mutable_json_type(dbtype=JSONB, nested=True), nullable=True) When I run alembic autogenerate it does not recognise any change, so I wrote it manually: import sqlalchemy as sa from sqlalchemy.dialects.postgresql import JSONB ... def upgrade() -> None: op.alter_column("some_table", "column_foo", existing_type=JSONB(), nullable=True) def downgrade() -> None: op.alter_column("some_table", "column_foo", existing_type=sa.TEXT(), nullable=True) NOTE: I also tried regular sqlalchemy.JSON in my ORM module, and in ALembic migriaon script. #orm script column_foo = Column(JSON, nullable=True) #migration script op.alter_column("some_table", "column_foo", existing_type=sa.JSON(), nullable=True) In both scenarios when I check metadata in my postgres database I still see TEXT type: CREATE TABLE public.some_table ( ... column_foo text NULL, Why does it not say JSON as a column type?
[ "Found a solution (postgresql_using):\nop.alter_column(\"some_table\", \"column_foo\", type_=sa.JSON(), nullable=True,postgresql_using='column_foo::json')\n\n" ]
[ 0 ]
[]
[]
[ "alembic", "postgresql", "python", "sqlalchemy" ]
stackoverflow_0074475213_alembic_postgresql_python_sqlalchemy.txt
Q: how we can order browsing in 4 list of words? i have 4 group of word i intend write a program in python to input a name and brows in 4 group and if find in one's say the group name group1=["anbar","tamirgah kochak","ordogah jahangardi","zamin varzeshi","mohavate sazi","kargah kochak"] group2=["maskoni","small store","khabgah","mehmansara","asayeshgah","parking tabaghati","kodakestan","dabestan","rahnamaii","dabirstan","honarestan","salon varzesh"] group3=["daneshsara","fani herfei","daneshgah","namayeshgah","teather","cinema"] group4=["hospital","mokhaberat","metro","mosque","museum","bank","stadium","airport"] all_group=[group1, group2, group3,group4] project_type = input("chose project type: ") for x in all_group: if x==project_type : project_area = int(input("inter the area m2 :")) price_per_m2 = int(input("inter price per m2 :")) print(project_type ,":",x,"=",project_area * price_per_m2,"$ per m2") else: print("enter another word") break what i wrote A: Probably the easiest way is to use a dictionary, and iterate over items: all_group = {"group1":group1, "group2":group2, "group3":group3, "group4":group4} project_type = input("chose project type: ") for group_name, group_values in all_group.items(): if project_type in group_values: print(group_name) ...
How can I search for a word across 4 lists of words?
I have 4 groups of words. I intend to write a program in Python that takes a name as input, searches the 4 groups, and if the name is found in one of them, prints that group's name. group1=["anbar","tamirgah kochak","ordogah jahangardi","zamin varzeshi","mohavate sazi","kargah kochak"] group2=["maskoni","small store","khabgah","mehmansara","asayeshgah","parking tabaghati","kodakestan","dabestan","rahnamaii","dabirstan","honarestan","salon varzesh"] group3=["daneshsara","fani herfei","daneshgah","namayeshgah","teather","cinema"] group4=["hospital","mokhaberat","metro","mosque","museum","bank","stadium","airport"] all_group=[group1, group2, group3,group4] project_type = input("chose project type: ") for x in all_group: if x==project_type : project_area = int(input("inter the area m2 :")) price_per_m2 = int(input("inter price per m2 :")) print(project_type ,":",x,"=",project_area * price_per_m2,"$ per m2") else: print("enter another word") break This is what I wrote so far.
[ "Probably the easiest way is to use a dictionary, and iterate over items:\nall_group = {\"group1\":group1, \"group2\":group2, \"group3\":group3, \"group4\":group4}\n\nproject_type = input(\"chose project type: \")\nfor group_name, group_values in all_group.items(): \n if project_type in group_values:\n print(group_name)\n ...\n\n" ]
[ 0 ]
[]
[]
[ "list", "loops", "python", "validation" ]
stackoverflow_0074475387_list_loops_python_validation.txt
Q: Matrix Calculation Numpy I am trying to calculate Rij = Aij x Bji/Cij with numPy broadcasting. Also raise an exception if matrices are not the same size (n × n). I am not so sure if this is correct or if I should be doing element wise or matrix wise. could anyone tell me how to do it A = [[(i+j)/2000 for i in range(500)] for j in range(500)] B = [[(i-j)/2000 for i in range(500)] for j in range(500)] C = [[((i+1)/(j+1))/2000 for i in range(500)] for j in range(500)] def matrix_R(A,B,C): A1 = np.array(A) B1 = np.array(B) C1 = np.array(C) eq = (A1 @ np.transpose(B1)) Rij = np.divide(eq, C1) if len(A1) != len(B1) or len(A1) != len(C1): raise ArithmeticError('Matrices are NOT the same size.') return Rij matrix_R(A, B, C) A: The @ is the matrix product operator for numpy arrays. np.array([[1, 2], [3, 4]]) @ np.array([[5, 6], [7, 8]]) is np.array([[1*5+2*7, 1*6+2*8], [3*5+4*7, 3*6+4*8]]) For element multiplication you may use * which does element-wise product for numpy arrays. np.array([[1, 2], [3, 4]]) * np.array([[5, 6], [7, 8]]) is np.array([[1*5, 2*6], [3*7, 4*8]) To answer your question, you can compute R the matrix of Rij = Aij x Bji/Cij with: R = np.divide(np.multiply(A, np.transpose(B)), C) or equivalently and shorter: R = A * B.T / C
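Pulling the answer's element-wise formula back into the question's function, with the size check moved before the computation, gives a sketch like this (the check now also verifies that the matrices are square):

import numpy as np

def matrix_R(A, B, C):
    A1, B1, C1 = np.asarray(A), np.asarray(B), np.asarray(C)
    # validate before doing any work: all three must share the same square (n x n) shape
    if not (A1.shape == B1.shape == C1.shape) or A1.shape[0] != A1.shape[1]:
        raise ArithmeticError('Matrices are NOT the same size.')
    # element-wise: R[i, j] = A[i, j] * B[j, i] / C[i, j]
    return A1 * B1.T / C1

A = [[(i + j) / 2000 for i in range(500)] for j in range(500)]
B = [[(i - j) / 2000 for i in range(500)] for j in range(500)]
C = [[((i + 1) / (j + 1)) / 2000 for i in range(500)] for j in range(500)]

R = matrix_R(A, B, C)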
Matrix Calculation Numpy
I am trying to calculate Rij = Aij x Bji/Cij with numPy broadcasting. Also raise an exception if matrices are not the same size (n × n). I am not so sure if this is correct or if I should be doing element wise or matrix wise. could anyone tell me how to do it A = [[(i+j)/2000 for i in range(500)] for j in range(500)] B = [[(i-j)/2000 for i in range(500)] for j in range(500)] C = [[((i+1)/(j+1))/2000 for i in range(500)] for j in range(500)] def matrix_R(A,B,C): A1 = np.array(A) B1 = np.array(B) C1 = np.array(C) eq = (A1 @ np.transpose(B1)) Rij = np.divide(eq, C1) if len(A1) != len(B1) or len(A1) != len(C1): raise ArithmeticError('Matrices are NOT the same size.') return Rij matrix_R(A, B, C)
[ "The @ is the matrix product operator for numpy arrays.\nnp.array([[1, 2], [3, 4]]) @ np.array([[5, 6], [7, 8]])\n\nis\nnp.array([[1*5+2*7, 1*6+2*8], [3*5+4*7, 3*6+4*8]])\n\nFor element multiplication you may use * which does element-wise product for numpy arrays.\nnp.array([[1, 2], [3, 4]]) * np.array([[5, 6], [7, 8]])\n\nis\nnp.array([[1*5, 2*6], [3*7, 4*8])\n\nTo answer your question, you can compute R the matrix of Rij = Aij x Bji/Cij with:\nR = np.divide(np.multiply(A, np.transpose(B)), C)\n\nor equivalently and shorter:\nR = A * B.T / C\n\n" ]
[ 1 ]
[]
[]
[ "broadcasting", "matrix", "numpy", "python", "transpose" ]
stackoverflow_0074475244_broadcasting_matrix_numpy_python_transpose.txt
Q: Python OpenCV video.get(cv2.CAP_PROP_FPS) returns 0.0 FPS This is my video This is the script to find fps: import cv2 if __name__ == '__main__' : video = cv2.VideoCapture("test.mp4"); # Find OpenCV version (major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.') if int(major_ver) < 3 : fps = video.get(cv2.cv.CV_CAP_PROP_FPS) print "Frames per second using video.get(cv2.cv.CV_CAP_PROP_FPS): {0}".format(fps) else : fps = video.get(cv2.CAP_PROP_FPS) print "Frames per second using video.get(cv2.CAP_PROP_FPS) : {0}".format(fps) video.release(); This is the output of the script for this video: Frames per second using video.get(cv2.CAP_PROP_FPS) : 0.0 Why is it returning 0.0? The FPS is 14.0 A: Performing pip install python-opencv fixed the problem and the FPS is correctly detected. EDIT: tested with python 3.8 and indeed it is pip install opencv-python. Cannot remember two years ago what python I was using. EDIT November 2022: please also check Perry's answer below, if you are using a newer opencv-python version A: The recent versions of opencv-python will give an error called AttributeError because cv2 doesn't have any attribute named cv. Instead use the following import cv2 vidcap = cv2.VideoCapture('some_video.avi') fps = vidcap.get(cv2.CAP_PROP_FPS) print(f"{fps} frames per second") This will give the frames per second value
Python OpenCV video.get(cv2.CAP_PROP_FPS) returns 0.0 FPS
This is my video This is the script to find fps: import cv2 if __name__ == '__main__' : video = cv2.VideoCapture("test.mp4"); # Find OpenCV version (major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.') if int(major_ver) < 3 : fps = video.get(cv2.cv.CV_CAP_PROP_FPS) print "Frames per second using video.get(cv2.cv.CV_CAP_PROP_FPS): {0}".format(fps) else : fps = video.get(cv2.CAP_PROP_FPS) print "Frames per second using video.get(cv2.CAP_PROP_FPS) : {0}".format(fps) video.release(); This is the output of the script for this video: Frames per second using video.get(cv2.CAP_PROP_FPS) : 0.0 Why is it returning 0.0? The FPS is 14.0
[ "Performing pip install python-opencv fixed the problem and the FPS is correctly detected.\nEDIT: tested with python 3.8 and indeed it is pip install opencv-python. Cannot remember two years ago what python I was using.\nEDIT November 2022: please also check Perry's answer below, if you are using a newer opencv-python version\n", "The recent versions of opencv-python will give an error called AttributeError because cv2 doesn't have any attribute named cv.\nInstead use the following\nimport cv2\n\nvidcap = cv2.VideoCapture('some_video.avi')\nfps = vidcap.get(cv2.CAP_PROP_FPS)\n\nprint(f\"{fps} frames per second\")\n\nThis will give the frames per second value\n" ]
[ 10, 2 ]
[]
[]
[ "frame_rate", "mp4", "opencv", "python" ]
stackoverflow_0049025795_frame_rate_mp4_opencv_python.txt
Q: Loading np.array from csv dataframe I have a dataframe with columns values that are np.arrays. For example df = pd.DataFrame([{"id":1, "sample": np.array([1,2,3])}, {"id":2, "sample": np.array([2,3,4])}]) df.to_csv("./tmp.csv", index=False) if I save df to csv and load it again I get "sample" column as strings. df_from_csv = pd.read_csv("./tmp.csv") df_from_csv == pd.DataFrame([{"id":1, "sample": '[1 2 3]')}, {"id":2, "sample": '[2 3 4]')}]) True Is there a better way to save/load my data that does no requiere manually passing '[1 2 3]' to ist corresponding array? A: You can use a converter in read_csv: import numpy as np from ast import literal_eval import re def to_array(x): return np.array(literal_eval(re.sub('\s+', ',', x))) df_from_csv = pd.read_csv("./tmp.csv", converters={'sample': to_array}) # id sample # 0 1 [1, 2, 3] # 1 2 [2, 3, 4] df_from_csv.loc[0, 'sample'] # array([1, 2, 3])
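The converter in the answer solves the CSV round trip; if a binary file is acceptable instead of CSV (an assumption, since the question only mentions CSV), pickling avoids the string parsing altogether:

import pandas as pd
import numpy as np

df = pd.DataFrame([{"id": 1, "sample": np.array([1, 2, 3])},
                   {"id": 2, "sample": np.array([2, 3, 4])}])

# to_pickle/read_pickle keep the numpy arrays intact, so nothing has to be re-parsed
df.to_pickle("./tmp.pkl")
df_from_pickle = pd.read_pickle("./tmp.pkl")
print(type(df_from_pickle.loc[0, "sample"]))  # <class 'numpy.ndarray'>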
Loading np.array from csv dataframe
I have a dataframe with column values that are np.arrays. For example df = pd.DataFrame([{"id":1, "sample": np.array([1,2,3])}, {"id":2, "sample": np.array([2,3,4])}]) df.to_csv("./tmp.csv", index=False) If I save df to csv and load it again, I get the "sample" column back as strings. df_from_csv = pd.read_csv("./tmp.csv") df_from_csv == pd.DataFrame([{"id":1, "sample": '[1 2 3]'}, {"id":2, "sample": '[2 3 4]'}]) True Is there a better way to save/load my data that does not require manually converting '[1 2 3]' back to its corresponding array?
[ "You can use a converter in read_csv:\nimport numpy as np\nfrom ast import literal_eval\nimport re\n\ndef to_array(x):\n return np.array(literal_eval(re.sub('\\s+', ',', x)))\n\ndf_from_csv = pd.read_csv(\"./tmp.csv\", converters={'sample': to_array}) \n\n# id sample\n# 0 1 [1, 2, 3]\n# 1 2 [2, 3, 4]\n\ndf_from_csv.loc[0, 'sample']\n\n# array([1, 2, 3])\n\n" ]
[ 1 ]
[]
[]
[ "csv", "numpy", "pandas", "python" ]
stackoverflow_0074475413_csv_numpy_pandas_python.txt
Q: How to run a single line or selected code in a Jupyter Notebook or JupyterLab cell? In both JupyterLab and Jupyter Notebook you can execute a cell using ctrl + Enter: Code: print('line 1') print('line 2') print('line 3') Cell and output: But how can you run only line 2? Or even a selection of lines within a cell without running the entire cell? Sure you could just insert a cell with that single line or selection of lines, but that gets really cumbersome and messy really quick. So are there better ways of doing this? A: Updated answer As there have been a few updates of JupyterLab since my first answer (I'm now on 1.1.4), and it has been stated that JupyterLab 1.0 will eventually replace the classic Jupyter Notebook, here's what I think is the best approach right now and even more so in the time to come: In JupyterLab use Run > Run selected line or highlighted text with an assigned keyboard shortcut to run code in the console. Here's how it will look like when you run the three print statements line by line using a keyboard shortcut: Here's how you set up a shortcut in Settings > Advanced Settings > Keyboard shortcuts: And here's what you need to add under Settings > Keyboard Shortcuts > User preferences > : { // List of Keyboard Shortcuts "shortcuts": [ { "command": "notebook:run-in-console", "keys": [ "F9" ], "selector": ".jp-Notebook.jp-mod-editMode" }, ] } Note that ability to edit these shortcuts directly, for newer versions of JupyterLab, now hides under a specific option in the menu, namely the JSON Settings Editor: If you're able to find that, the rest of the descriptions above should work like a breeze. The shortcut will even show in the menu. I've chosen to use F9 Original answer for older versions: Short answer: Jupyter notebook: qtconsole scratchpad JupyterLab: qtconsole Run > Run Selected Text or Current Line in Console, optionally with a keyboard shortcut Have a look at the details below, as well as some special cases in an edit at the very end of the answer. The details: Jupyter Notebook option 1: qtconsole The arguably most flexible alternative to inserting new cell is to open an IPython console using the magic function %qtconsole For a bit more fancy console you can use %qtconsole --style vim The results of the lines executed in this console will also be available to the Jupyter Notebook since it's still the same kernel that's running. One drawback is that you'll have to copy&paste or type the desired lines into the console. [ Jupyter Notebook option 2: Scratchpad Notebook Extension With a successful installation you can launch a Scratchpad with ctrl + B: JupyterLab option 1: %qtconsole Works the same way as for a Notebook JupyterLab option 2: Run > Run Selected Text or Current Line in Console A similar option to a qtconsole, but arguably more elegant, has been built in for newer versions of JupyterLab. 
Now you canput your marker on a single line, or highlight a selection, and use the menu option Run > Run Selected Text or Current Line in Console: You're still going to get your results in an IPython console, but you don't have to add an extra line with %qtconsole and it's much easier to run a selection of lines within a cell: You can make things even easier by assigning a keyboard shortcut to the menu option Run > Run Selected Text or Current Line in Console like this: 1 - Go to Settings and select Advanced Settings editor: 2 - Under the Keyboard shortcuts tab, do a ctrl+F search for run-in-console to locate the following section: // [missing schema title] // [missing schema description] "notebook:run-in-console": { "command": "notebook:run-in-console", "keys": [ "" ], "selector": ".jp-Notebook.jp-mod-editMode", "title": "Run In Console", "category": "Notebook Cell Operations" } 3 - Copy that part and paste it under User Overrides and type in your desired shortcut below keys like so: [...] "keys": [ "F9" ], [...] 4 - Click Save All under File. 5 - If the process went smoothly, you'll see that your menu option has changed: 6 - You may have to restart JupyterLab, but now you can easily run a single line or selection of lines with your desired shortcut. ##EDIT: Special cases Your preferred approach will depend on the nature of the output of the lines in question. Below is an example with plotly. More examples will possibly be added with time. 1. - plotly plotly figures will not be displayed directly in a Jupyter QtConsole (possibly related to this), but both the Scratchpad in a Jupyter Notebook and the integrated console in Jupyterlab using Run > Run Selected Text or Current Line in Console will handle plotly figures just fine. Snippet: from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot import plotly.graph_objs as go init_notebook_mode(connected=True) trace0 = go.Scatter( x=[1, 2, 3, 4], y=[10, 15, 13, 17] ) fig = go.Figure([trace0]) iplot(fig) 1.1 - plotly with scratchpad 1.2 - plotly with JupyterLab console using highlighted line and keyboard shortcut: A: On Jupyterlab as of 2022-11 Go to Settings > Advanced Settings Editor Find the "Keyboard Shortcuts" and click on it. Click on "JSON Settings Editor" Introduce the code listed below. Type Ctrl+S to save. Now, if on the menu you go to "Run" you should see the option there (on the video I show this step at the beginning) No need to restart the kernel on my case. {"shortcuts": [ { "args": {}, "command": "notebook:run-in-console", "keys": [ "Ctrl Shift Enter" ], "selector": ".jp-Notebook.jp-mod-editMode" }, ] } Note: I was trying to emulate what Colab does, displaying the result below the cell, but I couldn't find the way. Ctrl+Shift+Enter is the shortcut in Colab for "run selected text". Here is a similar explanation. In Jupyter Lab, execute editor code in Python console
How to run a single line or selected code in a Jupyter Notebook or JupyterLab cell?
In both JupyterLab and Jupyter Notebook you can execute a cell using ctrl + Enter: Code: print('line 1') print('line 2') print('line 3') Cell and output: But how can you run only line 2? Or even a selection of lines within a cell without running the entire cell? Sure you could just insert a cell with that single line or selection of lines, but that gets really cumbersome and messy really quick. So are there better ways of doing this?
[ "Updated answer\nAs there have been a few updates of JupyterLab since my first answer (I'm now on 1.1.4), and it has been stated that JupyterLab 1.0 will eventually replace the classic Jupyter Notebook, here's what I think is the best approach right now and even more so in the time to come:\nIn JupyterLab use Run > Run selected line or highlighted text with an assigned keyboard shortcut to run code in the console.\nHere's how it will look like when you run the three print statements line by line using a keyboard shortcut:\n\nHere's how you set up a shortcut in Settings > Advanced Settings > Keyboard shortcuts:\n\nAnd here's what you need to add under Settings > Keyboard Shortcuts > User preferences > :\n{\n // List of Keyboard Shortcuts\n \"shortcuts\": [\n {\n \"command\": \"notebook:run-in-console\",\n \"keys\": [\n \"F9\"\n ],\n \"selector\": \".jp-Notebook.jp-mod-editMode\"\n },\n ]\n}\n\nNote that ability to edit these shortcuts directly, for newer versions of JupyterLab, now hides under a specific option in the menu, namely the JSON Settings Editor:\n\nIf you're able to find that, the rest of the descriptions above should work like a breeze.\nThe shortcut will even show in the menu. I've chosen to use F9\n\n\nOriginal answer for older versions:\n\nShort answer:\nJupyter notebook:\n\nqtconsole\nscratchpad\n\nJupyterLab:\n\nqtconsole\nRun > Run Selected Text or Current Line in Console, optionally with a keyboard shortcut\n\nHave a look at the details below, as well as some special cases in an edit at the very end of the answer.\n\nThe details:\nJupyter Notebook option 1: qtconsole\nThe arguably most flexible alternative to inserting new cell is to open an IPython console using the magic function\n%qtconsole\n\nFor a bit more fancy console you can use\n%qtconsole --style vim\n\nThe results of the lines executed in this console will also be available to the Jupyter Notebook since it's still the same kernel that's running. One drawback is that you'll have to copy&paste or type the desired lines into the console.\n[\nJupyter Notebook option 2: Scratchpad Notebook Extension\nWith a successful installation you can launch a Scratchpad with ctrl + B:\n\nJupyterLab option 1: %qtconsole\nWorks the same way as for a Notebook\nJupyterLab option 2: Run > Run Selected Text or Current Line in Console\nA similar option to a qtconsole, but arguably more elegant, has been built in for newer versions of JupyterLab. 
Now you canput your marker on a single line, or highlight a selection, and use the menu option Run > Run Selected Text or Current Line in Console:\n\nYou're still going to get your results in an IPython console, but you don't have to add an extra line with %qtconsole and it's much easier to run a selection of lines within a cell:\n\nYou can make things even easier by assigning a keyboard shortcut\nto the menu option Run > Run Selected Text or Current Line in Console like this:\n1 - Go to Settings and select Advanced Settings editor:\n2 - Under the Keyboard shortcuts tab, do a ctrl+F search for run-in-console to locate the following section:\n// [missing schema title]\n // [missing schema description]\n \"notebook:run-in-console\": {\n \"command\": \"notebook:run-in-console\",\n \"keys\": [\n \"\"\n ],\n \"selector\": \".jp-Notebook.jp-mod-editMode\",\n \"title\": \"Run In Console\",\n \"category\": \"Notebook Cell Operations\"\n }\n\n3 - Copy that part and paste it under User Overrides and type in your desired shortcut below keys like so:\n[...]\n\"keys\": [\n \"F9\"\n],\n[...]\n\n4 - Click Save All under File.\n5 - If the process went smoothly, you'll see that your menu option has changed:\n\n6 - You may have to restart JupyterLab, but now you can easily run a single line or selection of lines with your desired shortcut.\n##EDIT: Special cases\nYour preferred approach will depend on the nature of the output of the lines in question. Below is an example with plotly. More examples will possibly be added with time.\n1. - plotly\nplotly figures will not be displayed directly in a Jupyter QtConsole (possibly related to this), but both the Scratchpad in a Jupyter Notebook and the integrated console in Jupyterlab using Run > Run Selected Text or Current Line in Console will handle plotly figures just fine.\nSnippet:\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\nimport plotly.graph_objs as go\ninit_notebook_mode(connected=True)\n\ntrace0 = go.Scatter(\n x=[1, 2, 3, 4],\n y=[10, 15, 13, 17]\n)\n\nfig = go.Figure([trace0])\niplot(fig)\n\n1.1 - plotly with scratchpad\n\n1.2 - plotly with JupyterLab console using highlighted line and keyboard shortcut:\n\n", "\nOn Jupyterlab as of 2022-11\n\nGo to Settings > Advanced Settings Editor\nFind the \"Keyboard Shortcuts\" and click on it.\nClick on \"JSON Settings Editor\"\nIntroduce the code listed below.\nType Ctrl+S to save.\n\nNow, if on the menu you go to \"Run\" you should see the option there (on the video I show this step at the beginning)\nNo need to restart the kernel on my case.\n{\"shortcuts\": [\n {\n \"args\": {},\n \"command\": \"notebook:run-in-console\",\n \"keys\": [ \"Ctrl Shift Enter\" ],\n \"selector\": \".jp-Notebook.jp-mod-editMode\"\n },\n ]\n}\n\nNote: I was trying to emulate what Colab does, displaying the result below the cell, but I couldn't find the way. Ctrl+Shift+Enter is the shortcut in Colab for \"run selected text\".\nHere is a similar explanation.\nIn Jupyter Lab, execute editor code in Python console\n" ]
[ 45, 0 ]
[]
[]
[ "jupyter", "jupyter_lab", "jupyter_notebook", "python" ]
stackoverflow_0056460834_jupyter_jupyter_lab_jupyter_notebook_python.txt
Q: How do I check if a value already appeared in pandas df column? I have a Dataframe of stock prices... I wish to have a boolean column that indicates if the price had reached a certain threshold in the previous rows or not. My output should be something like this (let's say my threshold is 100): index price bool 0 98 False 1 99 False 2 100.5 True 3 101 True 4 99 True 5 98 True I've managed to do this with the following code but it's not efficient and takes a lot of time: (df.loc[:, 'price'] > threshold).cumsum().fillna(0).gt(0) Please, any suggestions? A: Use a comparison and cummax: threshold = 100 df['bool'] = df['price'].ge(threshold).cummax() Note that it would work the other way around (although maybe less efficiently*): threshold = 100 df['bool'] = df['price'].cummax().ge(threshold) Output: index price bool 0 0 98.0 False 1 1 99.0 False 2 2 100.5 True 3 3 101.0 True 4 4 99.0 True 5 5 98.0 True * indeed on a large array: %%timeit df['price'].ge(threshold).cummax() # 193 µs ± 4.96 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) %%timeit df['price'].cummax().ge(threshold) # 309 µs ± 4.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) timing # setting up a dummy example with 10M rows np.random.seed(0) df = pd.DataFrame({'price': np.random.choice([0,1], p=[0.999,0.001], size=10_000_000)}) threshold = 0.5 ## comparison %%timeit df['bool'] = (df.loc[:, 'price'] > threshold).cumsum().fillna(0).gt(0) # 271 ms ± 28.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %%timeit df['bool'] = df['price'].ge(threshold).cummax() # 109 ms ± 5.74 ms per loop (mean ± std. dev. of 7 runs, 10 loops each %%timeit df['bool'] = np.maximum.accumulate(df['price'].to_numpy()>threshold) # 75.8 ms ± 2.86 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
How do I check if a value already appeared in pandas df column?
I have a Dataframe of stock prices... I wish to have a boolean column that indicates if the price had reached a certain threshold in the previous rows or not. My output should be something like this (let's say my threshold is 100): index price bool 0 98 False 1 99 False 2 100.5 True 3 101 True 4 99 True 5 98 True I've managed to do this with the following code but it's not efficient and takes a lot of time: (df.loc[:, 'price'] > threshold).cumsum().fillna(0).gt(0) Please, any suggestions?
[ "Use a comparison and cummax:\nthreshold = 100\ndf['bool'] = df['price'].ge(threshold).cummax()\n\nNote that it would work the other way around (although maybe less efficiently*):\nthreshold = 100\ndf['bool'] = df['price'].cummax().ge(threshold)\n\nOutput:\n index price bool\n0 0 98.0 False\n1 1 99.0 False\n2 2 100.5 True\n3 3 101.0 True\n4 4 99.0 True\n5 5 98.0 True\n\n* indeed on a large array:\n%%timeit\ndf['price'].ge(threshold).cummax()\n# 193 µs ± 4.96 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n\n%%timeit\ndf['price'].cummax().ge(threshold)\n# 309 µs ± 4.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n\ntiming\n# setting up a dummy example with 10M rows\nnp.random.seed(0)\ndf = pd.DataFrame({'price': np.random.choice([0,1], p=[0.999,0.001], size=10_000_000)})\nthreshold = 0.5\n\n## comparison\n\n%%timeit\ndf['bool'] = (df.loc[:, 'price'] > threshold).cumsum().fillna(0).gt(0)\n# 271 ms ± 28.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n%%timeit\ndf['bool'] = df['price'].ge(threshold).cummax()\n# 109 ms ± 5.74 ms per loop (mean ± std. dev. of 7 runs, 10 loops each\n\n%%timeit\ndf['bool'] = np.maximum.accumulate(df['price'].to_numpy()>threshold)\n# 75.8 ms ± 2.86 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074475685_pandas_python.txt
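For reference, a self-contained version of the accepted approach above, reproducing the example data from the question (a sketch; it only assumes pandas is installed):
import pandas as pd
df = pd.DataFrame({'price': [98, 99, 100.5, 101, 99, 98]})
threshold = 100
df['bool'] = df['price'].ge(threshold).cummax()  # True from the first row at/above the threshold onward
print(df)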
Q: Reading a pajek file with partitions I need to read data from a Pajek file that includes partitions (.clu files). Looking for more information on how to read the Pajek format, I found the following question: Reading a Pajek Dataset into Networkx The answer refers to partitions of the vertex set. I've tried to open a file as follows example = nx.read_pajek('path/file.paj') or, alternatively, with open('path/file.paj') as txtfile: comments = [] data = [] part = [] for line in txtfile: if line.startswith('*'): comment = line comments.append(comment) if part: data.append(part) part = [] else: if comment.startswith('*Vertices') and len(line.split()) > 1: sublist = line.split('"') sublist = sublist[:2] + sublist[-1].split() part.append(sublist) elif not line.isspace(): part.append(line.split()) data.append(part) but the file cannot be read correctly as it returns [[]]. I guess that the above method cannot be applied in the case of partitions. Can I ask how to access a .paj file that has partitions in it? Happy to provide an example dataset (you can find some examples at the link provided above). Examples of files with partitions include the one mentioned in the question at the above link (e.g. http://vlado.fmf.uni-lj.si/pub/networks/data/esna/SanJuanSur.htm) or other files in the repository http://vlado.fmf.uni-lj.si/pub/networks/data (e.g. http://vlado.fmf.uni-lj.si/pub/networks/data/2mode/Sandi/Sandi.htm or http://vlado.fmf.uni-lj.si/pub/networks/data/2mode/DutchElite.htm) A: Executing your code, I could read the downloaded SanJuanSur2.paj very well. What makes you think that you have a partition problem?
Reading a pajek file with partitions
I would need to read data from a pajek file consisting of partitions (files .clu). Looking for more information on how reading a pajek format, I've found the following question: Reading a Pajek Dataset into Networkx The answer refers to partitions of the vertex set. I've tried to open a file as follows example = nx.read_pajek('path/file.paj') or, alternatively, with open('path/file.paj') as txtfile: comments = [] data = [] part = [] for line in txtfile: if line.startswith('*'): comment = line comments.append(comment) if part: data.append(part) part = [] else: if comment.startswith('*Vertices') and len(line.split()) > 1: sublist = line.split('"') sublist = sublist[:2] + sublist[-1].split() part.append(sublist) elif not line.isspace(): part.append(line.split()) data.append(part) but the file cannot be read correctly as it returns [[]]. I guess that the above method cannot be applied in case of partitions. Can I ask you how to access a .paj having partitions in it? Happy to provide an example of dataset (you might found some example on the link provided above). Examples of files with partitions might be that one mentioned on the question at the above link (e.g. http://vlado.fmf.uni-lj.si/pub/networks/data/esna/SanJuanSur.htm) or other files in the repository http://vlado.fmf.uni-lj.si/pub/networks/data (e.g. http://vlado.fmf.uni-lj.si/pub/networks/data/2mode/Sandi/Sandi.htm or http://vlado.fmf.uni-lj.si/pub/networks/data/2mode/DutchElite.htm)
[ "Executing your code, I could read the downloaded SanJuanSur2.paj very well.\nWhat make you think that you have a partition problem?\n" ]
[ 0 ]
[]
[]
[ "networkx", "pajek", "python" ]
stackoverflow_0074368436_networkx_pajek_python.txt
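As a rough sketch of how a partition could be combined with the network (an assumption-laden illustration, not from the thread above: the paths are placeholders and it presumes the usual .clu layout of a *Vertices header followed by one cluster id per vertex, in the same order as the .net file):
import networkx as nx
G = nx.read_pajek('path/SanJuanSur.net')  # hypothetical path to the network file
def read_clu(path):
    # assumed .clu convention: "*Vertices n" then one integer per vertex
    with open(path) as fh:
        lines = [ln.strip() for ln in fh if ln.strip()]
    return [int(x) for x in lines[1:]]
clusters = read_clu('path/SanJuanSur.clu')  # hypothetical path to the partition file
for node, cluster in zip(G.nodes(), clusters):  # relies on vertex order matching the file
    G.nodes[node]['cluster'] = cluster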
Q: How do I run 3 sequential methods in an independent fashion in Python I have a scenario where I have an input X that is given to function A, then to function B and then to function C, and finally gives an output Y. This process happens in sequence, hence it is slow. I am trying to build this in Python. Can you guide me on what I should use so that methods A, B and C can run independently, such that as soon as method A finishes processing one item it dispatches it to B and starts processing the next item, without waiting for methods B and C to finish their work. I am new to programming and have limited knowledge of multithreading & concurrency, so any help would be awesome. A: I've tried to rebuild your function sequence in combination with multithreading. Here is what I've come up with: import threading import time # Your functions def functionA(num): res = num + 1 time.sleep(2) functionB(res) def functionB(num): res = num * 2 time.sleep(2) functionC(res) def functionC(num): res = num - 5 time.sleep(2) print(res) thread_counter = 0 while True: inp = int(input("Enter Number: ")) if inp == 0: # When input is 0 it stops input break # Thread creation and starting globals()[f"Thread{thread_counter}"] = threading.Thread(target=functionA, args=(inp,)) globals()[f"Thread{thread_counter}"].start() thread_counter += 1 So we are creating a new thread for every input; each thread runs functionA, which passes its result to functionB, and so on. Theoretically you can have unlimited inputs. Note: globals() might be a bad way to create variables. The output for the inputs 5 and 7 would look like this: Enter Number: 5 Enter Number: 7 Enter Number: 0 7 11 I've put the 0 as an exit value so you can break the while loop.
How do I run 3 sequential methods in an independent fashion in Python
I have a scenario where I have an Input X that is given to function A, then to function B and then to function C and finally gives an output Y. This process happens in sequence hence is slow. I am trying to build this in Python. Can you guide me on what I should use so that method A, B and C can run independently such that as soon as method A finishes processing 1 item it dispatches it to B and starts processing the next item without waiting for method B and C to finish it's work. I am new to programming and have limited knowledge in multithreading & concurrency so any help would be awesome.
[ "I've tried to rebuild your function sequence in combination with multithreading.\nHere is what ive come up with:\nimport threading\nimport time\n\n# Your functions\ndef functionA(num):\n res = num + 1\n time.sleep(2)\n functionB(res)\n\ndef functionB(num):\n res = num * 2\n time.sleep(2)\n functionC(res)\n\ndef functionC(num):\n res = num - 5\n time.sleep(2)\n print(res)\n\nthread_counter = 0\nwhile True:\n inp = int(input(\"Enter Number: \"))\n\n if inp == 0: # When input is 0 it stops input\n break\n\n # Thread creation and starting\n globals()[f\"Thread{thread_counter}\"] = threading.Thread(target=functionA, args=(inp,))\n globals()[f\"Thread{thread_counter}\"].start()\n\n thread_counter += 1\n\nSo we are creating a new Thread for every input that runs the functionA and this function gives the result to functionB and so on. Theoretically you can have unlimited inputs. Note: globals() might be a bad way for creating variables.\nThe Output for the inputs 5 and 7 would look like this:\nEnter Number: 5\nEnter Number: 7\nEnter Number: 0\n7\n11\n\nI've put the 0 as an exit value so you can break the while loop. A visualization would be look like this:\n\n" ]
[ 0 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0074475006_multithreading_python.txt
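A hedged alternative sketch for the question above (not part of the original answer): a producer/consumer pipeline built on queue.Queue, so that A, B and C each run in their own worker thread and hand items downstream as soon as they finish, which is closer to the requested behaviour:
import queue
import threading
def stage(func, inbox, outbox=None):
    while True:
        item = inbox.get()
        if item is None:  # sentinel value shuts the stage down
            if outbox is not None:
                outbox.put(None)
            break
        result = func(item)
        if outbox is not None:
            outbox.put(result)
def func_a(x): return x + 1
def func_b(x): return x * 2
def func_c(x): print(x - 5)
q_in, q_ab, q_bc = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(func_a, q_in, q_ab)),
    threading.Thread(target=stage, args=(func_b, q_ab, q_bc)),
    threading.Thread(target=stage, args=(func_c, q_bc)),
]
for t in threads:
    t.start()
for value in (5, 7):
    q_in.put(value)  # A starts on 7 while B and C are still working on 5
q_in.put(None)  # stop the pipeline
for t in threads:
    t.join()  # prints 7 and 11, as in the answer above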
Q: Find which column has the minimum value of a sum of all rows, and get the name of that column as output I have a sum of all rows of columns y1 to y7 of a data frame y1 4.475017e+02 y2 4.825798e+02 y3 4.077346e+04 y4 1.083712e+04 y5 4.005989e+04 y6 4.223634e+02 y7 3.385693e+01 I need to find which column has the minimum value, in this case it is y7, so I want the output to be just: y7 What I did: minimum = sum.min() output: 33.85692709115603 ideal = sum.loc[sum == minimum] output: y7 33.856927 dtype: float64 I had to print and manually insert df["y7"] later. I want to be able to do this without printing, that is, inserting df["y7"] but without having to actually write y7, since this will be the output of the prior input. Edit: Now that I have the output I wanted: min = y7, how do I reference a column with the same name as the output? Example: I need the output of the input df["y7"], but instead of writing y7, I want to write the output of min. A: You can use: df.sum().idxmin() Example: print(df) A B C D 0 0 1 2 3 1 4 5 6 7 2 8 9 10 11 3 12 13 14 15 df.sum().idxmin() 'A' referencing the column: col = df.sum().idxmin() df[col] # or without variable df[df.sum().idxmin()] 0 0 1 4 2 8 3 12 Name: A, dtype: int64
Find which column has the minimum value of a sum of all rows, and get the name of that column as output
I have a sum of all rows of columns y1 to y7 of a data frame y1 4.475017e+02 y2 4.825798e+02 y3 4.077346e+04 y4 1.083712e+04 y5 4.005989e+04 y6 4.223634e+02 y7 3.385693e+01 I need to find which column has min value, in this case it is y7, so I want the output to be just: y7 What I did: minimum = sum.min() output: 33.85692709115603 ideal = sum.loc[sum == minimum] output: y7 33.856927 dtype: float64 I had to print and manually insert df["y7"] later. I want to be able to do this without printing, that is, inserting df["y7"] but without having to actually write y7 since this will be the output of the prior input. Edit: Now that I have the output I wanted: min = y7, how do I mention a column with same name as the output? Example: I need the output of the input df["y7"], but instead of writing y7, I want to write the output of min.
[ "You can use:\ndf.sum().idxmin()\n\nExample:\nprint(df)\n\n A B C D\n0 0 1 2 3\n1 4 5 6 7\n2 8 9 10 11\n3 12 13 14 15\n\ndf.sum().idxmin()\n\n'A'\n\nreferencing the column:\ncol = df.sum().idxmin()\n\ndf[col] # or without variable df[df.sum().idxmin()]\n\n0 0\n1 4\n2 8\n3 12\nName: A, dtype: int64\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074475760_dataframe_pandas_python.txt
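For reference, the same idea applied directly to the already-computed Series of column sums from the question (a small sketch; the Series is renamed so it no longer shadows the built-in sum):
col_sums = df.sum()  # the per-column sums shown in the question
best_col = col_sums.idxmin()  # 'y7'
series = df[best_col]  # use the name programmatically, no printing or retyping needed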
Q: How to fix "RuntimeWarning: Running interpreter doesn't sufficiently support code object introspection." warning when using pipenv? Every time I run any pipenv command I'm getting this: C:\Users\user_name\AppData\Local\Programs\Python\Python311\Lib\site-packages\pipenv\vendor\attr_make.py:876: RuntimeWarning: Running interpreter doesn't sufficiently support code object introspection. Some features like bare super() or accessing class will not work with slotted classes. set_closure_cell(cell, cls) The command runs after it, but I would like to disable this message. I'm using Windows 10 19044.2194 and pipenv 2022.10.25. A: I was fighting the same issue on MacOS. The problem seems to be when pipenv is installed with brew. I fixed it by uninstalling the brew version of pipenv, then installing pipenv using pip. Here are the commands: brew uninstall pipenv pip install pipenv Worked like a charm for me. Hope it helps you.
How to fix "RuntimeWarning: Running interpreter doesn't sufficiently support code object introspection." warning when using pipenv?
Every time I run any pipenv command I'm getting this: C:\Users\user_name\AppData\Local\Programs\Python\Python311\Lib\site-packages\pipenv\vendor\attr_make.py:876: RuntimeWarning: Running interpreter doesn't sufficiently support code object introspection. Some features like bare super() or accessing class will not work with slotted classes. set_closure_cell(cell, cls) The command runs after it, but I would like to disable this message. I'm using Windows 10 19044.2194 and pipenv 2022.10.25.
[ "I was fighting the same issue on MacOS. The problem seems to be when pipenv is installed with brew. I fixed it by uninstalling the brew version of pipenv, then installing pipenv using pip. Here are the commands:\nbrew uninstall pipenv\npip install pipenv\n\nWorked like a charm for me. Hope it helps you.\n" ]
[ 3 ]
[]
[]
[ "pipenv", "python" ]
stackoverflow_0074468285_pipenv_python.txt
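If reinstalling pipenv is not an option, one possible workaround (an assumption on my part, not from the answer above) is to silence RuntimeWarnings for the invocation through Python's standard warning filters, e.g. on Windows:
set PYTHONWARNINGS=ignore::RuntimeWarning
pipenv install
Note this relies only on the standard PYTHONWARNINGS mechanism and hides all RuntimeWarnings for that shell session, not just this one message.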
Q: Keeping the same legend while changing the palette in go.Pie subplot I am trying to make a subplot with two pies, and I have trouble keeping the same legend for both of them while changing the palette (although it works fine with plotly default palette) There are the two dataframes I am working with. They are made with a value_counts and therefore are sorted. yearly = pd.DataFrame(data={'index': ['A', 'B','C','D'], 'count': [3000, 2000,1000,50]}) monthly = pd.DataFrame(data={'index': ['B', 'A','C','D'], 'count': [250, 200,80,10]}) index A and B are reversed in the two dfs. Then if I do : fig = make_subplots(rows=1, cols=2, specs=[[{"type": "pie"}, {"type": "pie"}]]) fig.add_trace(go.Pie( values=monthly['count'], labels=monthly['index'].astype(str), title='Monthly'), row=1, col=1) fig.add_trace(go.Pie( values=yearly['count'], labels=yearly['index'].astype(str), title="yearly",), row=1, col=2) fig.update_layout(legend=dict(x=0.4)) fig.update_traces(textposition='inside', textinfo='percent+label') fig.show() it works fine, I have the same legend for both charts (ie every index has the same color on both charts) Result with defaut palette = what I want But, if I try to change the palette and change the fig.update_trace to this fig.update_traces(textposition='inside', textinfo='percent+label' ,marker=dict(colors=["#93C572","#CD5C5C","#F49D37","#3C6C82"])) It does not work anymore. A and B do not have the same color on both charts and the legend accounts only for the first chart. result with custom palette = legend not shared And I don't understand why. Otherwise, I will end up sorting the df so that the index is always in the same order (A, B, C, D) but I am sure there is a more elegant way. I have tried the solutions of this thread Map colors to labels in plotly go.Pie charts but it did not work. A: I'm not sure what you intend to do since there is no image that includes a specific legend, but I think the issue would be solved if each subplot unit had its own legend. Your intended subplot is one row and two columns, but the legend appears above and below. This seems to be the default behavior for pie chart subplots. To improve this, I have made it 2 rows and 1 column. Also, I have opened up the space between the legends to align them with the subplots. fig = make_subplots(rows=2, cols=1, specs=[[{"type": "pie"}], [{"type": "pie"}]]) fig.add_trace(go.Pie( values=monthly['count'], labels=monthly['index'].astype(str), title='Monthly', legendgroup='gr1'), row=1, col=1) fig.add_trace(go.Pie( values=yearly['count'], labels=yearly['index'].astype(str), title="yearly", legendgroup='gr2'), row=2, col=1) # fig.update_traces(textposition='inside', textinfo='percent+label' # ,marker=dict(colors=["#93C572","#CD5C5C","#F49D37","#3C6C82"])) fig.update_traces(textposition='inside', textinfo='percent+label') fig.update_layout(height=600, width=500, legend_tracegroupgap=180) fig.show() In my response, I have aligned the pie charts vertically, as the following image will appear if the pie charts are aligned horizontally. fig.update_layout(autosize=False, width=450, legend=dict(yanchor='top', y=0.85, tracegroupgap=35))
Keeping the same legend while changing the palette in go.Pie subplot
I am trying to make a subplot with two pies, and I have trouble keeping the same legend for both of them while changing the palette (although it works fine with plotly default palette) There are the two dataframes I am working with. They are made with a value_counts and therefore are sorted. yearly = pd.DataFrame(data={'index': ['A', 'B','C','D'], 'count': [3000, 2000,1000,50]}) monthly = pd.DataFrame(data={'index': ['B', 'A','C','D'], 'count': [250, 200,80,10]}) index A and B are reversed in the two dfs. Then if I do : fig = make_subplots(rows=1, cols=2, specs=[[{"type": "pie"}, {"type": "pie"}]]) fig.add_trace(go.Pie( values=monthly['count'], labels=monthly['index'].astype(str), title='Monthly'), row=1, col=1) fig.add_trace(go.Pie( values=yearly['count'], labels=yearly['index'].astype(str), title="yearly",), row=1, col=2) fig.update_layout(legend=dict(x=0.4)) fig.update_traces(textposition='inside', textinfo='percent+label') fig.show() it works fine, I have the same legend for both charts (ie every index has the same color on both charts) Result with defaut palette = what I want But, if I try to change the palette and change the fig.update_trace to this fig.update_traces(textposition='inside', textinfo='percent+label' ,marker=dict(colors=["#93C572","#CD5C5C","#F49D37","#3C6C82"])) It does not work anymore. A and B do not have the same color on both charts and the legend accounts only for the first chart. result with custom palette = legend not shared And I don't understand why. Otherwise, I will end up sorting the df so that the index is always in the same order (A, B, C, D) but I am sure there is a more elegant way. I have tried the solutions of this thread Map colors to labels in plotly go.Pie charts but it did not work.
[ "I'm not sure what you intend to do since there is no image that includes a specific legend, but I think the issue would be solved if each subplot unit had its own legend. Your intended subplot is one row and two columns, but the legend appears above and below. This seems to be the default behavior for pie chart subplots. To improve this, I have made it 2 rows and 1 column. Also, I have opened up the space between the legends to align them with the subplots.\nfig = make_subplots(rows=2, cols=1, specs=[[{\"type\": \"pie\"}], [{\"type\": \"pie\"}]])\n\nfig.add_trace(go.Pie(\n values=monthly['count'],\n labels=monthly['index'].astype(str),\n title='Monthly', legendgroup='gr1'),\n row=1, col=1)\n\nfig.add_trace(go.Pie(\n values=yearly['count'],\n labels=yearly['index'].astype(str),\n title=\"yearly\", legendgroup='gr2'),\n row=2, col=1)\n\n# fig.update_traces(textposition='inside', textinfo='percent+label'\n# ,marker=dict(colors=[\"#93C572\",\"#CD5C5C\",\"#F49D37\",\"#3C6C82\"]))\n\nfig.update_traces(textposition='inside', textinfo='percent+label')\nfig.update_layout(height=600, width=500, legend_tracegroupgap=180)\nfig.show()\n\n\nIn my response, I have aligned the pie charts vertically, as the following image will appear if the pie charts are aligned horizontally.\nfig.update_layout(autosize=False, width=450, legend=dict(yanchor='top', y=0.85, tracegroupgap=35))\n\n\n" ]
[ 0 ]
[]
[]
[ "color_palette", "pie_chart", "plotly_python", "python", "subplot" ]
stackoverflow_0074473895_color_palette_pie_chart_plotly_python_python_subplot.txt
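A sketch of another way to tackle the original problem (keeping one colour per label across both pies while using a custom palette); the assumption here is that marker colours are applied per slice in the same order as the labels passed to each trace, so building the list from each DataFrame's own label order keeps A and B consistent even though the rows are sorted differently:
color_map = {'A': '#93C572', 'B': '#CD5C5C', 'C': '#F49D37', 'D': '#3C6C82'}
fig = make_subplots(rows=1, cols=2, specs=[[{"type": "pie"}, {"type": "pie"}]])
fig.add_trace(go.Pie(values=monthly['count'], labels=monthly['index'],
                     marker=dict(colors=[color_map[l] for l in monthly['index']]),
                     title='Monthly'), row=1, col=1)
fig.add_trace(go.Pie(values=yearly['count'], labels=yearly['index'],
                     marker=dict(colors=[color_map[l] for l in yearly['index']]),
                     title='Yearly'), row=1, col=2)
fig.show()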
Q: Is it possible to pass a Literal type to a function and have that function output a value of that Literal type? I'm trying to write a function for asserting that user input matches a defined Literal type. Basically, given: MyLiteral = Literal["foo", "bar"] I want to write a function that lets me do this: some_user_provided_value = input() # For example good_value = assert_literal(MyLiteral, some_user_provided_value) The type of good_value should now be inferred to be MyLiteral. If the user value didn't match any of the defined literal strings, an assertion error would be raised. This should have the same effect as: some_user_provided_value = input() # For example good_value: MyLiteral if some_user_provided_value == "foo": good_value = "foo" elif some_user_provided_value == "bar": good_value = "bar" else: raise AssertionError(f"Value {some_user_provided_value!r} is not a MyLiteral") This is a pattern that is often repeated in my project, so I would like to wrap it up in a function. The function would look something like this: from typing import Any, TypeVar T = TypeVar("T") def assert_literal(literal_type: T, value: Any) -> T: if value not in typing.get_args(literal_type): raise AssertionError(f"Value {value!r} is not a {literal_type!r}") return typing.cast(T, value) This doesn't work, of course, because this function currently wants an instance of type T and it'll output an instance of type T. If I was dealing with regular classes, I could make the parameter type Type[T], but this explodes in my face when I use Literal types, presumably because "foo" is not an instance of MyLiteral, that doesn't even make any sense. Is there currently any way to achieve what I'm trying to do? Can Literals even be mixed with TypeVars? A: The wonderful library called pydantic offers exactly that. From their homepage description: pydantic enforces type hints at runtime, and provides user friendly errors when data is invalid. And here's an example for Literal types, found here: from typing import Literal from pydantic import BaseModel, ValidationError class Pie(BaseModel): flavor: Literal['apple', 'pumpkin'] Pie(flavor='apple') Pie(flavor='pumpkin') try: Pie(flavor='cherry') except ValidationError as e: print(str(e)) """ 1 validation error for Pie flavor unexpected value; permitted: 'apple', 'pumpkin' (type=value_error.const; given=cherry; permitted=('apple', 'pumpkin')) """ I would safely rely on this library to do the heavy lifting for you (it is used in the popular FastAPI framework, for example) - but if you want to do it yourself then I'd go to their source code and see how they manage to do that. Good luck!
Is it possible to pass a Literal type to a function and have that function output a value of that Literal type?
I'm trying to write a function for asserting that user input matches a defined Literal type. Basically, given: MyLiteral = Literal["foo", "bar"] I want to write a function that lets me do this: some_user_provided_value = input() # For example good_value = assert_literal(MyLiteral, some_user_provided_value) The type of good_value should now be inferred to be MyLiteral. If the user value didn't match any of the defined literal strings, an assertion error would be raised. This should have the same effect as: some_user_provided_value = input() # For example good_value: MyLiteral if some_user_provided_value == "foo": good_value = "foo" elif some_user_provided_value == "bar": good_value = "bar" else: raise AssertionError(f"Value {some_user_provided_value!r} is not a MyLiteral") This is a pattern that is often repeated in my project, so I would like to wrap it up in a function. The function would look something like this: from typing import Any, TypeVar T = TypeVar("T") def assert_literal(literal_type: T, value: Any) -> T: if value not in typing.get_args(literal_type): raise AssertionError(f"Value {value!r} is not a {literal_type!r}") return typing.cast(T, value) This doesn't work, of course, because this function currently wants an instance of type T and it'll output an instance of type T. If I was dealing with regular classes, I could make the parameter type Type[T], but this explodes in my face when I use Literal types, presumably because "foo" is not an instance of MyLiteral, that doesn't even make any sense. Is there currently any way to achieve what I'm trying to do? Can Literals even be mixed with TypeVars?
[ "The wonderful library called pydantic offers exactly that.\nFrom their homepage description:\n\npydantic enforces type hints at runtime, and provides user friendly errors when data is invalid.\n\nAnd here's an example for Literal types, found here:\nfrom typing import Literal\n\nfrom pydantic import BaseModel, ValidationError\n\n\nclass Pie(BaseModel):\n flavor: Literal['apple', 'pumpkin']\n\n\nPie(flavor='apple')\nPie(flavor='pumpkin')\ntry:\n Pie(flavor='cherry')\nexcept ValidationError as e:\n print(str(e))\n \"\"\"\n 1 validation error for Pie\n flavor\n unexpected value; permitted: 'apple', 'pumpkin'\n (type=value_error.const; given=cherry; permitted=('apple', 'pumpkin'))\n \"\"\"\n\n\nI would safely rely on this library to do the heavy lifting for you (it is used in the popular FastAPI framework, for example) - but if you want to do it yourself then I'd go to their source code and see how they manage to do that.\nGood luck!\n" ]
[ 0 ]
[]
[]
[ "mypy", "python", "python_typing" ]
stackoverflow_0073631990_mypy_python_python_typing.txt
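For what it's worth, a shorter sketch along the same lines, assuming a recent pydantic (v2), where TypeAdapter can validate a bare Literal without defining a model:
from typing import Literal
from pydantic import TypeAdapter
MyLiteral = Literal["foo", "bar"]
adapter = TypeAdapter(MyLiteral)
good_value = adapter.validate_python(input())  # raises ValidationError for anything else
Depending on your type checker, good_value may then be inferred as MyLiteral, which is close to the behaviour the question asks for.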
Q: Overwrite existing column and extract values to new columns based on different conditions I have this series which contains country, state and city values, and I would like to extract them accordingly - refer to the output table Region US* Arizona** Phoenix Mesa California** Los Angeles San Diego Sacramento Florida** Tampa Miami Canada* Central Canada** Montreal London my desired output Region State City US* Arizona** Phoenix US* Arizona** Mesa US* California** Los Angeles US* California** San Diego US* California** Sacramento US* Florida** Tampa US* Florida** Miami Canada* Central Canada** Montreal Canada* Central Canada** London is this even possible? I tried some pandas operations with isin() but failed miserably A: Of course it's possible: def split_by_country(region_list: pd.Series): result = [] start_idx = None for i, region in enumerate(region_list): if region.endswith("*") and not region.endswith("**"): if start_idx is None: start_idx = i elif isinstance(start_idx, int): result.append(region_list[start_idx: i]) start_idx = i result.append(region_list[start_idx:]) return result countries = split_by_country(regions_s) countries The above code splits the series/list of regions into a list of lists. Every sublist starts (index 0) with the country name. Then you can do something like this: country_dict = {country[0]: split_by_region(country[1:]) for country in countries} split_by_region is the same as split_by_country but with a different condition (region.endswith("*") and not region.endswith("**") becomes region.endswith("**")), and at the end (I wrote the code below without testing it, so it may contain syntax errors): result_df = pd.DataFrame(columns=["country","subregion","city"]) for i, (country, subregions) in enumerate(country_dict.items()): for subregion, city in subregions.items(): result_df.loc[i] = [country, subregion, city]
Overwrite existing column and extract values to new columns based on different conditions
i have this series which contains country,state,city and i would like to extract them accordingly- refer to the output table Region US* Arizona** Phoenix Mesa California** Los Angeles San Diego Sacramento Florida** Tampa Miami Canada* Central Canada** Montreal London my desired output Region State City US* Arizona** Phoenix US* Arizona** Mesa US* California** Los Angeles US* California** San Diego US* California** Sacramento US* Florida** Tampa US* Florida** Miami Canada* Central Canada** Montreal Canada* Central Canada** London is this even possible? I tried some panda operations with isin() but failed miserably
[ "of course it's possible:\ndef split_by_country(region_list: pd.Series):\n result = []\n start_idx = None\n for i, region in enumerate(region_list):\n if region.endswith(\"*\") and not region.endswith(\"**\"):\n if start_idx is None:\n start_idx = i\n elif isinstance(start_idx, int):\n result.append(region_list[start_idx: i])\n start_idx = i\n result.append(region_list[start_idx:])\n return result\n \ncountries = split_by_country(regions_s) \ncountries\n\nAbove code will splits the series/list of regions to list of lists.\nEvery sublist starts (index 0) with country name.\nThen u can do something like that:\ncountry_dict = {country[0]: split_by_region(country[1:])\n for country in countries}\n\nsplit_by_region is the same as split_by_country by with different condition (region.endswith(\"*\") and not region.endswith(\"**\") > region.endswith(\"**\"))\nand at the end to (belowe code i write without checking, so it may contains some syntax error) :\nresult_df = pd.DataFrame(columns=[\"country\",\"subregion\",\"city\"])\nfor i, (country, subregions) in enumerate(country_dict.iteritems()):\n for subregion, city in subregions.iteritems():\n result_df.loc[i] = [country, subregion, city]\n\n" ]
[ 0 ]
[]
[]
[ "extract", "numpy", "pandas", "python" ]
stackoverflow_0074475368_extract_numpy_pandas_python.txt
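A more pandas-native sketch (not from the answer above) that produces the requested three-column layout directly; it assumes the data lives in a column named 'Region' and that the markers are exactly one trailing * for countries and ** for states:
s = df['Region'].astype(str)
is_country = s.str.endswith('*') & ~s.str.endswith('**')
is_state = s.str.endswith('**')
out = pd.DataFrame({
    'Region': s.where(is_country).ffill(),  # forward-fill the country down
    'State': s.where(is_state).ffill(),     # forward-fill the state down
    'City': s,
})
out = out[~(is_country | is_state)].reset_index(drop=True)  # keep only the city rows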
Q: AWS Python Lambda Unable to Import Module In zip File constructed on CodeBuild I am trying to deploy a python lambda function with a zip file archive. Using AWS CodeBuild I followed the instructions to setup a zip file with my source code and dependencies at the top level. However, when I invoke my Lambda function an import error is reported: { "errorMessage": "Unable to import module 'my_module': No module named 'cytoolz.itertoolz'", "errorType": "Runtime.ImportModuleError", "requestId": "<random uuid>", "stackTrace": [] } Why is Lambda unable to find a dependency that exists within a zip file that is built using CodeBuild? A: The problem is with python versions. The Standard 6.0 CodeBuild environment has python 3.10 installed but the python runtime environment in Lambda only goes up to 3.9 (at the time of this answer). Therefore, when installing the dependencies into the zip file pip install --target ./package requests cd package zip -r ../my-deployment-package.zip . The version of pip is 3.10 and all of the dependencies that are installed and packaged in the zip file are the wrong python version. In CodeBuild install a python & pip 3.9 to create the zip file.
AWS Python Lambda Unable to Import Module In zip File constructed on CodeBuild
I am trying to deploy a python lambda function with a zip file archive. Using AWS CodeBuild I followed the instructions to setup a zip file with my source code and dependencies at the top level. However, when I invoke my Lambda function an import error is reported: { "errorMessage": "Unable to import module 'my_module': No module named 'cytoolz.itertoolz'", "errorType": "Runtime.ImportModuleError", "requestId": "<random uuid>", "stackTrace": [] } Why is Lambda unable to find a dependency that exists within a zip file that is built using CodeBuild?
[ "The problem is with python versions. The Standard 6.0 CodeBuild environment has python 3.10 installed but the python runtime environment in Lambda only goes up to 3.9 (at the time of this answer).\nTherefore, when installing the dependencies into the zip file\npip install --target ./package requests\n\ncd package \nzip -r ../my-deployment-package.zip . \n\nThe version of pip is 3.10 and all of the dependencies that are installed and packaged in the zip file are the wrong python version.\nIn CodeBuild install a python & pip 3.9 to create the zip file.\n" ]
[ 0 ]
[]
[]
[ "aws_codebuild", "aws_lambda", "python", "zip" ]
stackoverflow_0074475868_aws_codebuild_aws_lambda_python_zip.txt
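One hedged way to line the versions up without changing the build image (an assumption on my part, not stated in the answer above) is to ask pip for wheels that target the Lambda runtime explicitly; note that --only-binary=:all: means every dependency must be available as a wheel:
pip install --target ./package \
    --platform manylinux2014_x86_64 \
    --python-version 3.9 \
    --only-binary=:all: \
    requests
cd package
zip -r ../my-deployment-package.zip .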
Q: AWS Lambda : OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k - While using Google Custom Search API I deployed the Google Custom Search API as AWS lambda function for my project. It uses the 3GB (full memory provided by lambda) and task got terminated. It throws a warning : "OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k" I don't know why its consuming more memory? A: This warning is just a warning, and has nothing to do with your problems. BLAS is a highly optimised library, aiming to get near-perfect performance on all hardware. AWS Lambdas are supposed to run in a more abstract environment than most, and the low-level details of what CPU it's running on are not available to your code. Therefore OpenBLAS just guesses. The only impact it would have is slightly reduced performance of certain mathematical operations, if the guess were incorrect. A: It's about the configurations of your function. Solution is as below: Go to configuration tab in your function. In the General configuration tab, increase Timeout and Memory size. I hope this will help to you.
AWS Lambda : OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k - While using Google Custom Search API
I deployed the Google Custom Search API as AWS lambda function for my project. It uses the 3GB (full memory provided by lambda) and task got terminated. It throws a warning : "OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k" I don't know why its consuming more memory?
[ "This warning is just a warning, and has nothing to do with your problems.\nBLAS is a highly optimised library, aiming to get near-perfect performance on all hardware. AWS Lambdas are supposed to run in a more abstract environment than most, and the low-level details of what CPU it's running on are not available to your code. Therefore OpenBLAS just guesses.\nThe only impact it would have is slightly reduced performance of certain mathematical operations, if the guess were incorrect.\n", "It's about the configurations of your function. Solution is as below:\n\nGo to configuration tab in your function.\nIn the General configuration tab, increase Timeout and Memory size.\n\nI hope this will help to you.\n" ]
[ 27, 0 ]
[]
[]
[ "amazon_web_services", "aws_lambda", "google_api", "python", "python_3.x" ]
stackoverflow_0057087498_amazon_web_services_aws_lambda_google_api_python_python_3.x.txt
Q: Using Astropy to make an array dimensionless I have an array, and I want to make it dimensionless so I can use np.log(). However, following the guide in the Units and Quantities documentation for astropy does not seem to work. This is the code I've written so far: #Calculating luminosity of a source (units Jy/Mpc^2) luminosity = 4*pi*Total_flux*dl*dl*((1.+z)**(alpha-1.)) #using astropy to convert the units from JyMpc2 to W/Hz lum_in_W = luminosity.to(u.Watt/u.Hertz) #making the quantity dimensionless lum_dimensionless = lum_in_W*u.dimensionless_unscaled #checking to see the dimension test = lum_dimensionless.unit print(test) The output of this is W / Hz and when I try to take the log log_lum = np.log(lum_dimensionless) I still get this error UnitTypeError: Can only apply 'log' function to dimensionless quantities Any help or hints appreciated, thanks for your time A: Answering my own question here: lum_dimensionless = lum_in_W.to_value() returns the dimensionless value. Should have found this before posting, but I suppose you can get tunnel vision sometimes into trying to get a specific line to work, especially when you are convinced it must be the correct solution--my apologies for that! Here is the link to the answer: How to take log of a unit with dimensions?
Using Astropy to make an array dimensionless
I have an array, and I want to make it dimensionless so I can use np.log() However, following the guide in the Units and Quantities documentation for astropy does not seem to work. This is the code I written so far: #Calculating luminosity of a source (units Jy/Mpc^2) luminosity = 4*pi*Total_flux*dl*dl*((1.+z)**(alpha-1.)) #using astropy to convert the units from JyMpc2 to W/Hz lum_in_W = luminosity.to(u.Watt/u.Hertz) #making the quantity dimensionless lum_dimensionless = lum_in_W*u.dimensionless_unscaled #checking to see the dimension test = lum_dimensionless.unit print(test) The output of this is W / Hz and when I try to take the log log_lum = np.log(lum_dimensionless) I still get this error UnitTypeError: Can only apply 'log' function to dimensionless quantities Any help or hints appreciated, thanks for your time
[ "Answering my own question here:\n lum_dimensionless = lum_in_W.to_value()\n\nreturns the dimensionless value.\nShould have found this before posting, but I suppose you can get tunnel vision sometimes into trying to get a specific line to work, especially when you are convinced it must be the correct solution--my apologies for that!\nHere is the link to the answer:\nHow to take log of a unit with dimensions?\n" ]
[ 0 ]
[]
[]
[ "astropy", "python" ]
stackoverflow_0074462659_astropy_python.txt
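A related sketch that keeps the units instead of stripping them by hand: dividing the quantity by an explicit reference value makes the ratio dimensionless, so np.log accepts it (the 1 W/Hz reference here is an assumption about what the luminosity should be normalised by):
import numpy as np
from astropy import units as u
reference = 1 * u.W / u.Hz
log_lum = np.log(lum_in_W / reference)  # the ratio is dimensionless, so log is allowed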
Q: pandas - how can I remove some characters after finding a specific character I have a data frame like this. document_group A12J3/381 A02J3/40 B12P4/2536 C10P234/3569 and I would like to get this document_group A12J3/38 A02J3/40 B12P4/25 C10P234/35 I have tried to adapt a function for a single string like this def remove_str_start(s, start): return s[:start] + s[start] and work with this sample s='H02J3/381' s.find('/') remove_str_start(s,s.find('/')+2) it returns 'H02J3/38', which is what I want to do when s is the input data frame and start is the position of the character to cut from. but when I tried it with the data frame remove_str_start(df['document_group'],df['document_group'].str.find('/')+2) it returns an error. Could anyone help me with this kind of situation? A: We can use str.replace here: df["document_group"] = df["document_group"].str.replace(r'/(\d{2})\d+$', r'/\1', regex=True) Here is a Python regex demo showing that the replacement logic is working. A: You can also str.split, remove the unwanted parts and put them together: s = df.document_group.str.split('/') df['document_group'] = s.str[0] + "/" + s.str[1].str[:2] prints: document_group 0 A12J3/38 1 A02J3/40 2 B12P4/25 3 C10P234/35 A: You are trying too hard, just create the column you want: for each value, take the same value up to the character where you find "/" plus 3 (because you want the / and the next 2) df['new_column'] = [e[:e.find('/') + 3] for e in df['your_initial_column']] Regards,
pandas - how can I remove some characters after finding a specific character
I have a data frame like this. document_group A12J3/381 A02J3/40 B12P4/2536 C10P234/3569 and I would like to get like this document_group A12J3/38 A02J3/40 B12P4/25 C10P234/35 I have tried to adapt a function for single string like this def remove_str_start(s, start): return s[:start] + s[start] and work with this sample s='H02J3/381' s.find('/') remove_str_start(s,s.find('/')+2) it returns 'H02J3/38', what I want to do while s is the input data frame and start is cutting the char start from the position char. but when I tried with data frame remove_str_start(df['document_group'],df['document_group'].str.find('/')+2) the result returns an error could everyone help me with this kind of situation?
[ "We can use str.replace here:\ndf[\"document_group\"] = df[\"document_group\"].str.replace(r'/(\\d{2})\\d+$', r'\\1', regex=True)\n\nHere is a Python regex demo showing that the replacement logic is working.\n", "You can also str.split remove the unwanted parts and put together:\ns = df.document_group.str.split('/')\ndf['document_group'] = s.str[0] + \"/\" + s.str[1].str[:2]\n\nprints:\n document_group\n0 A12J3/38\n1 A02J3/40\n2 B12P4/25\n3 C10P234/35\n\n", "You are trying too hard, just:\nCreate the column you want: for each value, the same value till the character where you find \"/\" plus 3 (because you want the / and the next 2)\ndf['new_column'] = [e[:e.find('/') + 3] for e in filt['your_initial_column']]\n\nRegards,\n" ]
[ 2, 2, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074475850_pandas_python.txt
Q: Find common Keys from two dict and return new dict from common keys So I have two dicts called results and results_2 with different lengths with the following setup: Result is also significantly shorter than the Result_2: result {('CMS', 'LNT'): 0.8500276624334894, ('LNT', 'CMS'): 0.8500276624334894, ('LOW', 'HD'): 0.8502400376842035, ('HD', 'LOW'): 0.8502400376842036, ('SWKS', 'QRVO'): 0.8507993847326996, ... result_2 {('CMS', 'CMS'): 1.0, ('CMS', 'LNT'): 0.7761431649456381, ('CMS', 'LOW'): 0.4476903306386938, ('CMS', 'HD'): 0.35617507290738476, ('CMS', 'SWKS'): 0.04797167063700858, ('CMS', 'QRVO'): -0.08844725271734241, .... Now I want to find all keys which are in both dict and create a new dict with only the duplicate combinations and their values The Output should look like this: result_combined {('CMS', 'LNT'): 0.8500276624334894, ('CMS', 'LNT'): 0.7761431649456381, .... I have tried using the following code however I am only getting an empty dict: dict_sorted_2 = {} for key_2, result_2 in results_2.items(): for key, result in results.items(): if key_2 == key: dict_sorted[key_2] = result_2 abcd = dict(sorted(dict_sorted_2.items(), key=lambda item: item[1])) abcd Out[24]: {} EDIT: This is the code I am using to get the result dicts results_2 = {} for ticker1 in data_5d: for ticker2 in data_5d: if data_5d[ticker1].size == data_5d[ticker2].size: corr = np.corrcoef(data_5d[ticker1]['%Percentage'], data_5d[ticker2]['%Percentage'])[0,1] print(f"Correlation between {ticker1} and {ticker2}: {corr}") results_2[(ticker1, ticker2)] = corr A: Coincidentally, the dict.keys method returns a set-like object that you can do set-like operations on: >>> a = {(1, 2): 'a', (3, 4): 'b'} >>> b = {(3, 4): 'c', (5, 6): 'd'} >>> a.keys() & b.keys() {(3, 4)} From there you can pick the values from whichever dict you like: >>> {k: a[k] for k in a.keys() & b.keys()} {(3, 4): 'b'} {('CMS', 'LNT'): 0.8500276624334894, ('CMS', 'LNT'): 0.7761431649456381, This makes no sense as expected result, as you can't have the same key twice in a dict. Maybe you want this? >>> {k: (a[k], b[k]) for k in a.keys() & b.keys()} {(3, 4): ('b', 'c')}
Find common Keys from two dict and return new dict from common keys
So I have two dicts called results and results_2 with different lengths with the following setup: Result is also significantly shorter than the Result_2: result {('CMS', 'LNT'): 0.8500276624334894, ('LNT', 'CMS'): 0.8500276624334894, ('LOW', 'HD'): 0.8502400376842035, ('HD', 'LOW'): 0.8502400376842036, ('SWKS', 'QRVO'): 0.8507993847326996, ... result_2 {('CMS', 'CMS'): 1.0, ('CMS', 'LNT'): 0.7761431649456381, ('CMS', 'LOW'): 0.4476903306386938, ('CMS', 'HD'): 0.35617507290738476, ('CMS', 'SWKS'): 0.04797167063700858, ('CMS', 'QRVO'): -0.08844725271734241, .... Now I want to find all keys which are in both dict and create a new dict with only the duplicate combinations and their values The Output should look like this: result_combined {('CMS', 'LNT'): 0.8500276624334894, ('CMS', 'LNT'): 0.7761431649456381, .... I have tried using the following code however I am only getting an empty dict: dict_sorted_2 = {} for key_2, result_2 in results_2.items(): for key, result in results.items(): if key_2 == key: dict_sorted[key_2] = result_2 abcd = dict(sorted(dict_sorted_2.items(), key=lambda item: item[1])) abcd Out[24]: {} EDIT: This is the code I am using to get the result dicts results_2 = {} for ticker1 in data_5d: for ticker2 in data_5d: if data_5d[ticker1].size == data_5d[ticker2].size: corr = np.corrcoef(data_5d[ticker1]['%Percentage'], data_5d[ticker2]['%Percentage'])[0,1] print(f"Correlation between {ticker1} and {ticker2}: {corr}") results_2[(ticker1, ticker2)] = corr
[ "Coincidentally, the dict.keys method returns a set-like object that you can do set-like operations on:\n>>> a = {(1, 2): 'a', (3, 4): 'b'}\n>>> b = {(3, 4): 'c', (5, 6): 'd'}\n>>> a.keys() & b.keys()\n{(3, 4)}\n\nFrom there you can pick the values from whichever dict you like:\n>>> {k: a[k] for k in a.keys() & b.keys()}\n{(3, 4): 'b'}\n\n\n\n{('CMS', 'LNT'): 0.8500276624334894,\n ('CMS', 'LNT'): 0.7761431649456381,\n\n\nThis makes no sense as expected result, as you can't have the same key twice in a dict. Maybe you want this?\n>>> {k: (a[k], b[k]) for k in a.keys() & b.keys()}\n{(3, 4): ('b', 'c')}\n\n" ]
[ 2 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0074475876_dictionary_python.txt
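For completeness, a sketch applying the accepted approach to the variable names from the question (note the original loop filled dict_sorted but then sorted the still-empty dict_sorted_2, which is why it printed {}):
common_keys = results.keys() & results_2.keys()
result_combined = {k: (results[k], results_2[k]) for k in common_keys}
# sort by the correlation from the first dict, mirroring the original intent
result_combined = dict(sorted(result_combined.items(), key=lambda kv: kv[1][0]))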
Q: How to merge 2 columns in pandas dataframe by taking either value or mean and create a third column? I have a dataframe with 2 columns. How can I create a third column which: Takes either col1 or col2 value if either exists Takes mean if both exists Keeps NaN if neither exists And finally I want to store it in df['col3']. I tried this, but the values are wrong. df['col3']=pd.concat([df['col2'], df['col1']]).groupby(level=0).mean() How can I do this? time col1 col2 2000-01-31 389.5400 NaN 2000-02-29 387.7700 NaN 2000-03-31 386.6600 250.2 2000-04-30 384.1850 NaN 2000-05-31 383.3600 267.2 ... ... ... 2020-03-31 396.3755 NaN 2020-04-30 NaN 350.12 2020-05-31 395.0485 NaN 2020-06-30 394.9400 396.321 2020-07-31 395.3070 NaN A: The answer is surprisingly simple: df['col3'] = df[['col1', 'col2']].mean(axis=1) This is due to the fact that mean ignores the NaN by default (skipna=True), so if you have only one value, the mean is the value itself, if only NaNs, the output is a NaN Output: time col1 col2 col3 0 2000-01-31 389.5400 NaN 389.5400 1 2000-02-29 387.7700 NaN 387.7700 2 2000-03-31 386.6600 250.200 318.4300 3 2000-04-30 384.1850 NaN 384.1850 4 2000-05-31 383.3600 267.200 325.2800 5 2020-03-31 396.3755 NaN 396.3755 6 2020-04-30 NaN 350.120 350.1200 7 2020-05-31 395.0485 NaN 395.0485 8 2020-06-30 394.9400 396.321 395.6305 9 2020-07-31 395.3070 NaN 395.3070 A: You can use this: df['col3'] = df.loc[:, ["col1","col2"]].mean(axis = 1)
How to merge 2 columns in pandas dataframe by taking either value or mean and create a third column?
I have a dataframe with 2 columns. How can I create a third column which: Takes either col1 or col2 value if either exists Takes mean if both exists Keeps NaN if neither exists And finally I want to store it in df['col3']. I tried this, but the values are wrong. df['col3']=pd.concat([df['col2'], df['col1']]).groupby(level=0).mean() How can I do this? time col1 col2 2000-01-31 389.5400 NaN 2000-02-29 387.7700 NaN 2000-03-31 386.6600 250.2 2000-04-30 384.1850 NaN 2000-05-31 383.3600 267.2 ... ... ... 2020-03-31 396.3755 NaN 2020-04-30 NaN 350.12 2020-05-31 395.0485 NaN 2020-06-30 394.9400 396.321 2020-07-31 395.3070 NaN
[ "The answer is surprisingly simple:\ndf['col3'] = df[['col1', 'col2']].mean(axis=1)\n\nThis is due to the fact that mean ignores the NaN by default (skipna=True), so if you have only one value, the mean is the value itself, if only NaNs, the output is a NaN\nOutput:\n time col1 col2 col3\n0 2000-01-31 389.5400 NaN 389.5400\n1 2000-02-29 387.7700 NaN 387.7700\n2 2000-03-31 386.6600 250.200 318.4300\n3 2000-04-30 384.1850 NaN 384.1850\n4 2000-05-31 383.3600 267.200 325.2800\n5 2020-03-31 396.3755 NaN 396.3755\n6 2020-04-30 NaN 350.120 350.1200\n7 2020-05-31 395.0485 NaN 395.0485\n8 2020-06-30 394.9400 396.321 395.6305\n9 2020-07-31 395.3070 NaN 395.3070\n\n", "You can use this:\ndf['col3'] = df.loc[:, [\"col1\",\"col2\"]].mean(axis = 1)\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074475833_pandas_python.txt
Q: How to upload document on an Article using article id when it does not even exist Currently I am in a very weird situation. I am trying to upload a document to an article using the article id and user id. But the issue is that when I try to select the article id from the document model, it gives an error that the article doesn't exist. And to be honest that is true, because how can I upload a document to an article when it doesn't even exist? So how can I use the article id in such a situation? Below is my document model, to which I am sending the user id and article id when uploading a document. documentmodels.py class DocumentModel(models.Model): id=models.AutoField(primary_key=True, auto_created=True, verbose_name="DOCUMENT_ID") user_fk_doc=models.ForeignKey(User, on_delete=models.CASCADE, related_name="users_fk_doc") article_fk_doc=models.ForeignKey(Article, on_delete=models.CASCADE, related_name="articles_fk_doc") document=models.FileField(max_length=350, validators=[FileExtensionValidator(extensions)], upload_to=uploaded_files) filename=models.CharField(max_length=100, blank=True) filesize=models.IntegerField(default=0, blank=True) mimetype=models.CharField(max_length=100, blank=True) created_at=models.DateField(auto_now_add=True) and below is the article model, class Article(models.Model): id=models.AutoField(primary_key=True, auto_created=True, verbose_name="ARTICLE_ID") headline=models.CharField(max_length=250) abstract=models.TextField(max_length=1500, blank=True) content=models.TextField(max_length=10000, blank=True) published=models.DateTimeField(auto_now_add=True) tags=models.ManyToManyField('Tags', related_name='tags', blank=True) isDraft=models.BooleanField(blank=True, default=False) isFavourite=models.ManyToManyField(User, related_name="favourite", blank=True) created_by=models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True, related_name="articles") EDITED: Actually I am trying to implement a file upload feature for articles, which are written in an editor.
I can upload the document for the article according to user but the issue is in the start: 1: There is no article in the beginning so I have no article id 2: So if I have no article id, I cannot upload the document without article id else it throws error that field cannot be null, or field s required 3: I want to know how can I solve the issue where in the beginning I dont have any article id and upload the document document views.py class DocumentViewSet(viewsets.ModelViewSet): serializer_class=DocumentSerializer permission_classes=[permissions.IsAuthenticated] authentication_classes= [authentication.TokenAuthentication] parser_classes=[FormParser, MultiPartParser] def get_queryset(self): return DocumentModel.objects.select_related('user_fk_doc').all() #Cache @action(detail=True, methods =['get'], url_path='download') def download(self, request, pk): try: document_file=DocumentModel.objects.get(id=pk, user_fk_doc=self.request.user) file_path=document_file.document.path print(file_path) if os.path.exists(file_path): with open(file_path, 'rb') as fh: response=HttpResponse(fh.read(), content_type=mimetypes.guess_type(file_path)[0]) response['Content-Disposition'] = "Inline; filename={}".format(os.path.basename(file_path)) response['Content-Length'] = os.path.getsize(file_path) return response return Response({'error' : 'There is no document file of the user'}, status=status.HTTP_403_FORBIDDEN) except DocumentModel.DoesNotExist as e: return Response({'error': 'Document for this user does not exists'}, status=status.HTTP_404_NOT_FOUND) Article views.py class ArticleViewSet(viewsets.ModelViewSet): queryset=Article.objects.all() serializer_class=ArticleSerializer permission_classes=[permissions.IsAuthenticated] authentication_classes = [authentication.TokenAuthentication] A: So, I think the way I was trying to post document for article was complicated. I was adding the article id into document when article id was not even created till then. So the solution that I came upon is instead of using these two foreign keys in document model below: user_fk_doc=models.ForeignKey(User, on_delete=models.CASCADE, related_name="users_fk_doc") article_fk_doc=models.ForeignKey(Article, on_delete=models.CASCADE, related_name="articles_fk_doc") I simply added a many to many field in article model. so my model became like this below: class Article(models.Model): id=models.AutoField(primary_key=True, auto_created=True, verbose_name="ARTICLE_ID") headline=models.CharField(max_length=250) abstract=models.TextField(max_length=1500, blank=True) content=models.TextField(max_length=10000, blank=True) published=models.DateTimeField(auto_now_add=True) tags=models.ManyToManyField('Tags', related_name='tags', blank=True) **files=models.ManyToManyField('DocumentModel', related_name='uploads', blank=True)** isDraft=models.BooleanField(blank=True, default=False) isFavourite=models.ManyToManyField(User, related_name="favourite", blank=True) created_by=models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True, related_name="articles") and then simply upload document through document model and then use many to many field to filter all the documents according to user and article. In the end simply execute such query depending on your situation to filter documents DocumentModel.objects.get(id=pk,uploads__created_by_id=self.request.user.id)
How to upload document on an Article using article id when it does not even exist
currently i am in a very weird situation. I am trying to upload document using article id and user id to an article. But the issue is when I try to select article id from the document model, it gives error that article doesnt exists. And tbh that is true, because how can i upload document to an article when it doesnt even exists. So how can i use article id in such situation? Below is my document model in which i am sending user id and article id for uploading document. documentmodels.py class DocumentModel(models.Model): id=models.AutoField(primary_key=True, auto_created=True, verbose_name="DOCUMENT_ID") user_fk_doc=models.ForeignKey(User, on_delete=models.CASCADE, related_name="users_fk_doc") article_fk_doc=models.ForeignKey(Article, on_delete=models.CASCADE, related_name="articles_fk_doc") document=models.FileField(max_length=350, validators=[FileExtensionValidator(extensions)], upload_to=uploaded_files) filename=models.CharField(max_length=100, blank=True) filesize=models.IntegerField(default=0, blank=True) mimetype=models.CharField(max_length=100, blank=True) created_at=models.DateField(auto_now_add=True) and below is the articles models, class Article(models.Model): id=models.AutoField(primary_key=True, auto_created=True, verbose_name="ARTICLE_ID") headline=models.CharField(max_length=250) abstract=models.TextField(max_length=1500, blank=True) content=models.TextField(max_length=10000, blank=True) published=models.DateTimeField(auto_now_add=True) tags=models.ManyToManyField('Tags', related_name='tags', blank=True) isDraft=models.BooleanField(blank=True, default=False) isFavourite=models.ManyToManyField(User, related_name="favourite", blank=True) created_by=models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True, related_name="articles") EDITED Actually i am trying to implement upload file feature in article which is in editor. 
I can upload the document for the article according to user but the issue is in the start: 1: There is no article in the beginning so I have no article id 2: So if I have no article id, I cannot upload the document without article id else it throws error that field cannot be null, or field s required 3: I want to know how can I solve the issue where in the beginning I dont have any article id and upload the document document views.py class DocumentViewSet(viewsets.ModelViewSet): serializer_class=DocumentSerializer permission_classes=[permissions.IsAuthenticated] authentication_classes= [authentication.TokenAuthentication] parser_classes=[FormParser, MultiPartParser] def get_queryset(self): return DocumentModel.objects.select_related('user_fk_doc').all() #Cache @action(detail=True, methods =['get'], url_path='download') def download(self, request, pk): try: document_file=DocumentModel.objects.get(id=pk, user_fk_doc=self.request.user) file_path=document_file.document.path print(file_path) if os.path.exists(file_path): with open(file_path, 'rb') as fh: response=HttpResponse(fh.read(), content_type=mimetypes.guess_type(file_path)[0]) response['Content-Disposition'] = "Inline; filename={}".format(os.path.basename(file_path)) response['Content-Length'] = os.path.getsize(file_path) return response return Response({'error' : 'There is no document file of the user'}, status=status.HTTP_403_FORBIDDEN) except DocumentModel.DoesNotExist as e: return Response({'error': 'Document for this user does not exists'}, status=status.HTTP_404_NOT_FOUND) Article views.py class ArticleViewSet(viewsets.ModelViewSet): queryset=Article.objects.all() serializer_class=ArticleSerializer permission_classes=[permissions.IsAuthenticated] authentication_classes = [authentication.TokenAuthentication]
[ "So, I think the way I was trying to post document for article was complicated.\nI was adding the article id into document when article id was not even created till then.\nSo the solution that I came upon is instead of using these two foreign keys in document model below:\n user_fk_doc=models.ForeignKey(User, on_delete=models.CASCADE, related_name=\"users_fk_doc\")\n article_fk_doc=models.ForeignKey(Article, on_delete=models.CASCADE, related_name=\"articles_fk_doc\")\n\nI simply added a many to many field in article model. so my model became like this below:\nclass Article(models.Model):\n id=models.AutoField(primary_key=True, auto_created=True, verbose_name=\"ARTICLE_ID\")\n headline=models.CharField(max_length=250)\n abstract=models.TextField(max_length=1500, blank=True)\n content=models.TextField(max_length=10000, blank=True)\n published=models.DateTimeField(auto_now_add=True)\n tags=models.ManyToManyField('Tags', related_name='tags', blank=True)\n **files=models.ManyToManyField('DocumentModel', related_name='uploads', blank=True)**\n isDraft=models.BooleanField(blank=True, default=False)\n isFavourite=models.ManyToManyField(User, related_name=\"favourite\", blank=True)\n created_by=models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True, related_name=\"articles\")\n\nand then simply upload document through document model and then use many to many field to filter all the documents according to user and article.\nIn the end simply execute such query depending on your situation to filter documents\nDocumentModel.objects.get(id=pk,uploads__created_by_id=self.request.user.id)\n\n" ]
[ 0 ]
[]
[]
[ "blogs", "django", "django_models", "django_rest_framework", "python" ]
stackoverflow_0074459771_blogs_django_django_models_django_rest_framework_python.txt
Q: What is the difference between Django timezone now and the built-in one? I've just noticed this: >>> import datetime >>> from django.utils import timezone >>> (datetime.datetime.now(tz=datetime.timezone.utc) - timezone.now()).microseconds 999989 >>> (datetime.datetime.now(tz=datetime.timezone.utc) - timezone.now()).seconds 86399 >>> 24*60*60 86400 >>> (datetime.datetime.now(tz=datetime.timezone.utc) - timezone.now()).days -1 >>> timezone.now() datetime.datetime(2022, 11, 17, 13, 1, 36, 913132, tzinfo=<UTC>) >>> datetime.datetime.now(tz=datetime.timezone.utc) datetime.datetime(2022, 11, 17, 13, 1, 41, 913958, tzinfo=datetime.timezone.utc) How do both options to get the current time with the UTC "timezone" differ? Why is the difference a positive number of seconds, but exactly negative one day? Can I replace timezone.now() by datetime.datetime.now(tz=datetime.timezone.utc)? A: The second value in your subtraction is getting created a microsecond or so after the first value. So it's a later point in time. You're subtracting the later point in time from the earlier point in time. Yielding a negative delta: >>> datetime.datetime.now(tz=datetime.timezone.utc) - timezone.now() datetime.timedelta(days=-1, seconds=86399, microseconds=999981) If you're only looking at the day or microsecond part of that, it looks like a huge difference. But this is simply the way a timedelta represents a fraction of a second in the past. It's minus one day plus 86399 seconds and 999981 ms. See Python timedelta object with negative values.
What is the difference between Django timezone now and the built-in one?
I've just noticed this: >>> import datetime >>> from django.utils import timezone >>> (datetime.datetime.now(tz=datetime.timezone.utc) - timezone.now()).microseconds 999989 >>> (datetime.datetime.now(tz=datetime.timezone.utc) - timezone.now()).seconds 86399 >>> 24*60*60 86400 >>> (datetime.datetime.now(tz=datetime.timezone.utc) - timezone.now()).days -1 >>> timezone.now() datetime.datetime(2022, 11, 17, 13, 1, 36, 913132, tzinfo=<UTC>) >>> datetime.datetime.now(tz=datetime.timezone.utc) datetime.datetime(2022, 11, 17, 13, 1, 41, 913958, tzinfo=datetime.timezone.utc) How do both options to get the current time with the UTC "timezone" differ? Why is the difference a positive number of seconds, but exactly negative one day? Can I replace timezone.now() by datetime.datetime.now(tz=datetime.timezone.utc)?
[ "The second value in your subtraction is getting created a microsecond or so after the first value. So it's a later point in time. You're subtracting the later point in time from the earlier point in time. Yielding a negative delta:\n>>> datetime.datetime.now(tz=datetime.timezone.utc) - timezone.now()\ndatetime.timedelta(days=-1, seconds=86399, microseconds=999981)\n\nIf you're only looking at the day or microsecond part of that, it looks like a huge difference. But this is simply the way a timedelta represents a fraction of a second in the past. It's minus one day plus 86399 seconds and 999981 ms. See Python timedelta object with negative values.\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074475995_django_python.txt
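A minimal, standard-library-only sketch of the timedelta normalization the answer above describes (no Django needed): a simulated 5 ms negative difference is stored as -1 day plus almost a full day of seconds, and total_seconds() recovers the signed value as one number.
import datetime

earlier = datetime.datetime.now(tz=datetime.timezone.utc)
later = earlier + datetime.timedelta(microseconds=5000)

delta = earlier - later        # negative delta, same shape as in the question
print(delta)                   # -1 day, 23:59:59.995000
print(delta.total_seconds())   # -0.005, the signed difference as one number
(When USE_TZ is enabled, Django's timezone.now() is documented to return the current aware datetime in UTC, so the remaining difference between the two calls in the question is just the few microseconds of evaluation delay.)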
Q: Split variable in Pyspark I try to split the utc value found in timestamp_value in a new column called utc. I tried to use the Python RegEx but I was not able to do it. Thank you for your answer! This is how my dataframe looks like +--------+----------------------------+ |machine |timestamp_value | +--------+----------------------------+ |1 |2022-01-06T07:47:37.319+0000| |2 |2022-01-06T07:47:37.319+0000| |3 |2022-01-06T07:47:37.319+0000| +--------+----------------------------+ This is how It should look like +--------+----------------------------+-----+ |machine |timestamp_value |utc | +--------+----------------------------------+ |1 |2022-01-06T07:47:37.319 |+0000| |2 |2022-01-06T07:47:37.319 |+0000| |3 |2022-01-06T07:47:37.319 |+0000| +--------+----------------------------------+ A: You can do this with with a regexp_extract and regexp_replace respectively import pyspark.sql.functions as F (df .withColumn('utc', F.regexp_extract('timestamp_value', '.*(\+.*)', 1)) .withColumn('timestamp_value', F.regexp_replace('timestamp_value', '\+(.*)', '')) ).show(truncate=False) +-------+-----------------------+-----+ |machine|timestamp_value |utc | +-------+-----------------------+-----+ |1 |2022-01-06T07:47:37.319|+0000| |2 |2022-01-06T07:47:37.319|+0000| |3 |2022-01-06T07:47:37.319|+0000| +-------+-----------------------+-----+ To better understand what that regular expression means, have a look at this tool.
Split variable in Pyspark
I am trying to split the UTC offset value found in timestamp_value into a new column called utc. I tried to use a Python regex but I was not able to do it. Thank you for your answer! This is what my dataframe looks like: +--------+----------------------------+ |machine |timestamp_value | +--------+----------------------------+ |1 |2022-01-06T07:47:37.319+0000| |2 |2022-01-06T07:47:37.319+0000| |3 |2022-01-06T07:47:37.319+0000| +--------+----------------------------+ This is what it should look like: +--------+----------------------------+-----+ |machine |timestamp_value |utc | +--------+----------------------------------+ |1 |2022-01-06T07:47:37.319 |+0000| |2 |2022-01-06T07:47:37.319 |+0000| |3 |2022-01-06T07:47:37.319 |+0000| +--------+----------------------------------+
[ "You can do this with with a regexp_extract and regexp_replace respectively\nimport pyspark.sql.functions as F\n\n(df\n .withColumn('utc', F.regexp_extract('timestamp_value', '.*(\\+.*)', 1))\n .withColumn('timestamp_value', F.regexp_replace('timestamp_value', '\\+(.*)', ''))\n).show(truncate=False)\n\n+-------+-----------------------+-----+\n|machine|timestamp_value |utc |\n+-------+-----------------------+-----+\n|1 |2022-01-06T07:47:37.319|+0000|\n|2 |2022-01-06T07:47:37.319|+0000|\n|3 |2022-01-06T07:47:37.319|+0000|\n+-------+-----------------------+-----+\n\nTo better understand what that regular expression means, have a look at this tool.\n" ]
[ 2 ]
[]
[]
[ "apache_spark", "data_wrangling", "pyspark", "python", "regex" ]
stackoverflow_0074476037_apache_spark_data_wrangling_pyspark_python_regex.txt
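A regex-free alternative to the answer above, as an illustrative sketch; it assumes the offset is always exactly the last five characters (e.g. +0000), which holds for the sample data but not necessarily for every timestamp format:
import pyspark.sql.functions as F

df2 = (df
    .withColumn('utc', F.substring('timestamp_value', -5, 5))
    .withColumn('timestamp_value',
                F.expr('substring(timestamp_value, 1, length(timestamp_value) - 5)'))
)
df2.show(truncate=False)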
Q: How do I pass application context to a child function in flask? Here is the project Structure. |-- a_api/ | |- a1.py | |-- b_api/ | |-b1.py | |-- c_api/ | |-c1.py | |-c2.py | |-- utils/ | |-db.py | |-- main.py db.py connects to mongo and stores connection in g from flask. from flask import g from pymongo import MongoClient mongo_db = 'mongo_db' def get_mongo_db(): """Function will create a connection to mongo db for the current Request Returns: mongo_db: THe connection to Mongo Db """ if mongo_db not in g: print('New Connection Created for mongo db') mongo_client = MongoClient('the_url') # Store the Client g.mongo_db = mongo_client else: print('Old Connection reused for mongo db') # Return The db return g.mongo_db['db_name'] main.py calls two functions a1.py and b1.py for a1: it interacts directly with the db.py file and updates data, this happens without any error and task is completed successfully. for b1: it first calls c1 in a separate process, which used db.py and updates data - but in this case an error is thrown set up an application context with app.app_context() How do I pass the application context to db.py when it is called from c1, which is called from b1? How Do I create a single connection point to mongodb and use it across all requests or process in flask? Traceback (most recent call last): File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 315, in _bootstrap self.run() File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 108, in run self._target(*self._args, **self._kwargs) File "W:\xx\offers_api\offer_ops.py", line 60, in update_offer response = aser(id=id, d=d) File "W:\xx\offers_api\offer_ops.py", line 86, in aser x = get_mongo_db() File "W:\xx\utils\db.py", line 13, in get_mongo_db if mongo_db not in g: File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\site-packages\werkzeug\local.py", line 278, in __get__ obj = instance._get_current_object() File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\site-packages\werkzeug\local.py", line 407, in _get_current_object return self.__local() # type: ignore File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\site-packages\flask\globals.py", line 40, in _lookup_app_object raise RuntimeError(_app_ctx_err_msg) RuntimeError: Working outside of application context. This typically means that you attempted to use functionality that needed to interface with the current application object in some way. To solve this, set up an application context with app.app_context(). See the documentation for more information. A: Try something like this: from flask import g from pymongo import MongoClient # import main flask app from X import app mongo_db = 'mongo_db' def get_mongo_db(): """Function will create a connection to mongo db for the current Request Returns: mongo_db: THe connection to Mongo Db """ # if circular dependency error try importing app here from X import app with app.app_context(): if mongo_db not in g: print('New Connection Created for mongo db') mongo_client = MongoClient('the_url') # Store the Client g.mongo_db = mongo_client else: print('Old Connection reused for mongo db') # Return The db return g.mongo_db['db_name']
How do I pass application context to a child function in flask?
Here is the project Structure. |-- a_api/ | |- a1.py | |-- b_api/ | |-b1.py | |-- c_api/ | |-c1.py | |-c2.py | |-- utils/ | |-db.py | |-- main.py db.py connects to mongo and stores connection in g from flask. from flask import g from pymongo import MongoClient mongo_db = 'mongo_db' def get_mongo_db(): """Function will create a connection to mongo db for the current Request Returns: mongo_db: THe connection to Mongo Db """ if mongo_db not in g: print('New Connection Created for mongo db') mongo_client = MongoClient('the_url') # Store the Client g.mongo_db = mongo_client else: print('Old Connection reused for mongo db') # Return The db return g.mongo_db['db_name'] main.py calls two functions a1.py and b1.py for a1: it interacts directly with the db.py file and updates data, this happens without any error and task is completed successfully. for b1: it first calls c1 in a separate process, which used db.py and updates data - but in this case an error is thrown set up an application context with app.app_context() How do I pass the application context to db.py when it is called from c1, which is called from b1? How Do I create a single connection point to mongodb and use it across all requests or process in flask? Traceback (most recent call last): File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 315, in _bootstrap self.run() File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 108, in run self._target(*self._args, **self._kwargs) File "W:\xx\offers_api\offer_ops.py", line 60, in update_offer response = aser(id=id, d=d) File "W:\xx\offers_api\offer_ops.py", line 86, in aser x = get_mongo_db() File "W:\xx\utils\db.py", line 13, in get_mongo_db if mongo_db not in g: File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\site-packages\werkzeug\local.py", line 278, in __get__ obj = instance._get_current_object() File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\site-packages\werkzeug\local.py", line 407, in _get_current_object return self.__local() # type: ignore File "C:\Users\kunda\AppData\Local\Programs\Python\Python310\lib\site-packages\flask\globals.py", line 40, in _lookup_app_object raise RuntimeError(_app_ctx_err_msg) RuntimeError: Working outside of application context. This typically means that you attempted to use functionality that needed to interface with the current application object in some way. To solve this, set up an application context with app.app_context(). See the documentation for more information.
[ "Try something like this:\nfrom flask import g\nfrom pymongo import MongoClient\n\n# import main flask app\nfrom X import app\n\nmongo_db = 'mongo_db'\n\ndef get_mongo_db():\n \"\"\"Function will create a connection to mongo db for the current Request\n\n Returns:\n mongo_db: THe connection to Mongo Db\n \"\"\"\n # if circular dependency error try importing app here\n from X import app\n with app.app_context():\n if mongo_db not in g:\n print('New Connection Created for mongo db')\n mongo_client = MongoClient('the_url')\n\n # Store the Client\n g.mongo_db = mongo_client\n else:\n print('Old Connection reused for mongo db')\n # Return The db\n return g.mongo_db['db_name']\n\n" ]
[ 2 ]
[]
[]
[ "flask", "python" ]
stackoverflow_0074386757_flask_python.txt
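A hedged sketch for the multiprocessing part of the question above: push an application context inside the child process before touching g. The import path (from main import app), the utils.db helper location, and the collection/update used here are assumptions based on the project layout shown in the question, not a verified fix:
from multiprocessing import Process

def c1_worker(doc_id):
    # import inside the child to avoid circular imports at module load time
    from main import app
    from utils.db import get_mongo_db

    with app.app_context():   # makes flask.g usable inside this process
        db = get_mongo_db()
        db['documents'].update_one({'_id': doc_id}, {'$set': {'processed': True}})

# called from b1:
p = Process(target=c1_worker, args=('some-id',))
p.start()
p.join()
An alternative design is to skip flask.g entirely for background work and create the MongoClient inside the child process, since PyMongo clients should not be shared across forked processes anyway.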
Q: Switch to editor pane shortcut not working in Spyder 5.X I'm on Windows 10 and recently updated to Spyder 5.3.3 standalone version and the keyboard shortcut to switch to the editor pane (default Ctrl+E) will not work no matter what I try, it simply has no effect. I've tried reinstalling Spyder, resetting everything back to defaults multiple times, changing to different keyboard shortcuts than Ctrl+E, trying to switch to the editor pane while having various other panes selected (different contexts), but nothing helps. I can switch to all other panes (like the console with Ctrl+I, etc) just fine and so far all of the other key shortcuts I'm used to work but this one is the most impactful and I can't get it to work. I opened my previous version of Spyder 4.X and the Ctrl+E works fine from any context as expected. Any ideas on what the issue could be? A: Same issue here, and I think the issue started with the upgrade from 5.3.2 to 5.3.3 only. Before that it still worked. A: The issue appears to be version-specific. Have upgraded from 5.3.3 to 5.4.0, and the shortcut is working again.
Switch to editor pane shortcut not working in Spyder 5.X
I'm on Windows 10 and recently updated to Spyder 5.3.3 standalone version and the keyboard shortcut to switch to the editor pane (default Ctrl+E) will not work no matter what I try, it simply has no effect. I've tried reinstalling Spyder, resetting everything back to defaults multiple times, changing to different keyboard shortcuts than Ctrl+E, trying to switch to the editor pane while having various other panes selected (different contexts), but nothing helps. I can switch to all other panes (like the console with Ctrl+I, etc) just fine and so far all of the other key shortcuts I'm used to work but this one is the most impactful and I can't get it to work. I opened my previous version of Spyder 4.X and the Ctrl+E works fine from any context as expected. Any ideas on what the issue could be?
[ "Same issue here, and I think the issue started with the upgrade from 5.3.2 to 5.3.3 only. Before that it still worked.\n", "The issue appears to be version-specific.\nHave upgraded from 5.3.3 to 5.4.0, and the shortcut is working again.\n" ]
[ 2, 1 ]
[]
[]
[ "ide", "keyboard_shortcuts", "python", "spyder", "windows" ]
stackoverflow_0073818754_ide_keyboard_shortcuts_python_spyder_windows.txt
Q: How can I use Stockfish in Python so that the evaluation is continuously updated like in chess.com, instead of computed for a given amount of time? I am using the stockfish 3.23 package in python. To get the evaluation of the chess position, I use the following code: self.stockfish = Stockfish(path="stockfish\\stockfish", depth=18, parameters={"Threads": 2, "Minimum Thinking Time": 1000}) self.stockfish.set_fen_position(fen) evaluationValue = self.stockfish.get_evaluation()['value'] This works fine. However, I would like stockfish to constantly evaluate the position, and give me the current evaluation when I want, instead of waiting a predetermined amount of time for the result of the evaluation. Is this possible? Thank you very much, Joost A: I assume one way to solve it would be to make the call in a loop from 1 to maxDepth and then print the results for each depth in the loop. I am not sure how the Stockfish package works but Stockfish uses some sort of iterative deepening which means that if it searches for depth 18 it will do the loop mentioned above. I just don't know how to print the results from that built in loop with that library, maybe there is some better way of doing it than I proposed. A: In the stockfish package, The get_evaluation function works by evaluating the top moves in the current position, the score is either the centipawn or mate. While evaluating, stockfish will output top moves at each depth, but the package will wait until the evaluation is done. I have created a pull request that adds generate_top_moves method which returns a generator that yields top moves in the position at each depth. Here's the idea, you can read more about this in the PR: class TopMove: def __init__(self, line: str) -> None: splits = line.split(" ") pv_index = splits.index("pv") self.move = splits[pv_index + 1] self.line = splits[pv_index + 1 :] self.depth = int(splits[splits.index("depth") + 1]) self.seldepth = int(splits[splits.index("seldepth") + 1]) self.cp = None self.mate = None try: self.cp = int(splits[splits.index("cp") + 1]) except ValueError: self.mate = int(splits[splits.index("mate") + 1]) def dict(self) -> dict: return { "move": self.move, "depth": self.depth, "seldepth": self.seldepth, "line": self.line, "cp": self.cp, "mate": self.mate, } # compare if this move is better than the other move def __gt__(self, other: Stockfish.TopMove) -> bool: if other.mate is None: # this move is mate and the other is not if self.mate is not None: # a negative mate value is a losing move return self.mate < 0 # both moves has no mate, compare the depth first than centipawn if self.depth == other.depth: if self.cp == other.cp: return self.seldepth > other.seldepth else: return self.cp > other.cp else: return self.depth > other.depth else: # both this move and other move is mate if self.mate is not None: # both losing move, which takes more moves is better # both winning move, which takes less move is better if ( self.mate < 0 and other.mate < 0 or self.mate > 0 and other.mate > 0 ): return self.mate < other.mate else: # comparing a losing move with a winning move, positive mate score is winning return self.mate > other.mate else: return other.mate < 0 # the oposite of __gt__ def __lt__(self, other: Stockfish.TopMove) -> bool: return not self.__gt__(other) # equal move, by "move", not by score/evaluation def __eq__(self, other: Stockfish.TopMove) -> bool: return self.move == other.move def generate_top_moves( self, num_top_moves: int = 5 ) -> Generator[List[TopMove], None, None]: """Returns 
a generator that yields top moves in the position at each depth Args: num_top_moves: The number of moves to return info on, assuming there are at least those many legal moves. Returns: A generator that yields top moves in the position at each depth. The evaluation could be stopped early by calling Generator.close(); this however will take some time for stockfish to stop. Unlike `get_top_moves` - which returns a list of dict, this will yield a list of `Stockfish.TopMove` instead, and the score (cp/mate) is relative to which side is playing instead of absolute like `get_top_moves`. The score is either `cp` or `mate`; a higher `cp` is better, positive `mate` is winning and vice versa. If there are no moves in the position, an empty list is returned. """ if num_top_moves <= 0: raise ValueError("num_top_moves is not a positive number.") old_MultiPV_value = self._parameters["MultiPV"] if num_top_moves != self._parameters["MultiPV"]: self._set_option("MultiPV", num_top_moves) self._parameters.update({"MultiPV": num_top_moves}) foundBestMove = False try: self._go() top_moves: List[Stockfish.TopMove] = [] current_depth = 1 while True: line = self._read_line() if "multipv" in line and "depth" in line: move = Stockfish.TopMove(line) # try to find the move in the list, if it exists then update it, else append to the list try: idx = top_moves.index(move) # don't update if the new move has a smaller depth than the one in the list if move.depth >= top_moves[idx].depth: top_moves[idx] = move except ValueError: top_moves.append(move) # yield the top moves once the current depth changed, the current depth might be smaller than the old depth if move.depth != current_depth: current_depth = move.depth top_moves.sort(reverse=True) yield top_moves[:num_top_moves] elif line.startswith("bestmove"): foundBestMove = True best_move = line.split(" ")[1] # no more moves, the game is ended if best_move == "(none)": yield [] else: # sort the list once again top_moves.sort(reverse=True) # if the move at index 0 is not the best move returned by stockfish if best_move != top_moves[0].move: for move in top_moves: if best_move == move.move: top_moves.remove(move) top_moves.insert(0, move) break else: raise ValueError(f"Stockfish returned the best move: {best_move}, but it's not in the list") yield top_moves[:num_top_moves] break except BaseException as e: raise e from e finally: # stockfish has not returned the best move, but the generator was signaled to close if not foundBestMove: self._put("stop") while not self._read_line().startswith("bestmove"): pass if old_MultiPV_value != self._parameters["MultiPV"]: self._set_option("MultiPV", old_MultiPV_value) self._parameters.update({"MultiPV": old_MultiPV_value}) To evaluate the position, you can get the top moves, then the score will be either mate or cp (centipawn) of the best move: for top_moves in stockfish.generate_top_moves(): best_move = top_moves[0] print(f"Evaluation at depth {best_move.depth}: {best_move.cp}") The output for the starting position: Evaluation at depth 2: 141 Evaluation at depth 3: 127 Evaluation at depth 4: 77 Evaluation at depth 5: 70 Evaluation at depth 6: 69 Evaluation at depth 7: 77 Evaluation at depth 8: 77 Evaluation at depth 9: 83 Evaluation at depth 10: 83 Evaluation at depth 11: 63 Evaluation at depth 12: 63 Evaluation at depth 13: 70 Evaluation at depth 14: 56 Evaluation at depth 15: 56 Evaluation at depth 16: 56 Evaluation at depth 17: 56 Evaluation at depth 18: 49 Evaluation at depth 18: 49 With this simple method added, you can do some 
amazing stuff like this, the evaluation bar on the left is calculated in python:
How can I use Stockfish in Python so that the evaluation is continuously updated like in chess.com, instead of computed for a given amount of time?
I am using the stockfish 3.23 package in python. To get the evaluation of the chess position, I use the following code: self.stockfish = Stockfish(path="stockfish\\stockfish", depth=18, parameters={"Threads": 2, "Minimum Thinking Time": 1000}) self.stockfish.set_fen_position(fen) evaluationValue = self.stockfish.get_evaluation()['value'] This works fine. However, I would like stockfish to constantly evaluate the position, and give me the current evaluation when I want, instead of waiting a predetermined amount of time for the result of the evaluation. Is this possible? Thank you very much, Joost
[ "I assume one way to solve it would be to make the call in a loop from 1 to maxDepth and then print the results for each depth in the loop.\nI am not sure how the Stockfish package works but Stockfish uses some sort of iterative deepening which means that if it searches for depth 18 it will do the loop mentioned above. I just don't know how to print the results from that built in loop with that library, maybe there is some better way of doing it than I proposed.\n", "In the stockfish package, The get_evaluation function works by evaluating the top moves in the current position, the score is either the centipawn or mate. While evaluating, stockfish will output top moves at each depth, but the package will wait until the evaluation is done.\nI have created a pull request that adds generate_top_moves method which returns a generator that yields top moves in the position at each depth. Here's the idea, you can read more about this in the PR:\n\nclass TopMove:\n def __init__(self, line: str) -> None:\n splits = line.split(\" \")\n pv_index = splits.index(\"pv\")\n self.move = splits[pv_index + 1]\n self.line = splits[pv_index + 1 :]\n self.depth = int(splits[splits.index(\"depth\") + 1])\n self.seldepth = int(splits[splits.index(\"seldepth\") + 1])\n\n self.cp = None\n self.mate = None\n\n try:\n self.cp = int(splits[splits.index(\"cp\") + 1])\n except ValueError:\n self.mate = int(splits[splits.index(\"mate\") + 1])\n\n def dict(self) -> dict:\n return {\n \"move\": self.move,\n \"depth\": self.depth,\n \"seldepth\": self.seldepth,\n \"line\": self.line,\n \"cp\": self.cp,\n \"mate\": self.mate,\n }\n\n # compare if this move is better than the other move\n def __gt__(self, other: Stockfish.TopMove) -> bool:\n\n if other.mate is None:\n # this move is mate and the other is not\n if self.mate is not None:\n # a negative mate value is a losing move\n return self.mate < 0\n\n # both moves has no mate, compare the depth first than centipawn\n if self.depth == other.depth:\n if self.cp == other.cp:\n return self.seldepth > other.seldepth\n else:\n return self.cp > other.cp\n else:\n return self.depth > other.depth\n\n else:\n # both this move and other move is mate\n if self.mate is not None:\n # both losing move, which takes more moves is better\n # both winning move, which takes less move is better\n if (\n self.mate < 0\n and other.mate < 0\n or self.mate > 0\n and other.mate > 0\n ):\n return self.mate < other.mate\n else:\n # comparing a losing move with a winning move, positive mate score is winning\n return self.mate > other.mate\n else:\n return other.mate < 0\n\n # the oposite of __gt__\n def __lt__(self, other: Stockfish.TopMove) -> bool:\n return not self.__gt__(other)\n\n # equal move, by \"move\", not by score/evaluation\n def __eq__(self, other: Stockfish.TopMove) -> bool:\n return self.move == other.move\n\ndef generate_top_moves(\n self, num_top_moves: int = 5\n) -> Generator[List[TopMove], None, None]:\n \"\"\"Returns a generator that yields top moves in the position at each depth\n\n Args:\n num_top_moves:\n The number of moves to return info on, assuming there are at least\n those many legal moves.\n\n Returns:\n A generator that yields top moves in the position at each depth.\n\n The evaluation could be stopped early by calling Generator.close();\n this however will take some time for stockfish to stop.\n\n Unlike `get_top_moves` - which returns a list of dict, this will yield\n a list of `Stockfish.TopMove` instead, and the score (cp/mate) is relative\n to which side is 
playing instead of absolute like `get_top_moves`.\n\n The score is either `cp` or `mate`; a higher `cp` is better, positive `mate`\n is winning and vice versa.\n\n If there are no moves in the position, an empty list is returned.\n \"\"\"\n\n if num_top_moves <= 0:\n raise ValueError(\"num_top_moves is not a positive number.\")\n\n old_MultiPV_value = self._parameters[\"MultiPV\"]\n if num_top_moves != self._parameters[\"MultiPV\"]:\n self._set_option(\"MultiPV\", num_top_moves)\n self._parameters.update({\"MultiPV\": num_top_moves})\n\n foundBestMove = False\n\n try:\n self._go()\n\n top_moves: List[Stockfish.TopMove] = []\n current_depth = 1\n\n while True:\n line = self._read_line()\n\n if \"multipv\" in line and \"depth\" in line:\n move = Stockfish.TopMove(line)\n\n # try to find the move in the list, if it exists then update it, else append to the list\n try:\n idx = top_moves.index(move)\n\n # don't update if the new move has a smaller depth than the one in the list\n if move.depth >= top_moves[idx].depth:\n top_moves[idx] = move\n\n except ValueError:\n top_moves.append(move)\n\n # yield the top moves once the current depth changed, the current depth might be smaller than the old depth\n if move.depth != current_depth:\n current_depth = move.depth\n top_moves.sort(reverse=True)\n yield top_moves[:num_top_moves]\n\n elif line.startswith(\"bestmove\"):\n foundBestMove = True\n best_move = line.split(\" \")[1]\n\n \n # no more moves, the game is ended\n if best_move == \"(none)\":\n yield []\n else:\n # sort the list once again\n top_moves.sort(reverse=True)\n\n # if the move at index 0 is not the best move returned by stockfish\n if best_move != top_moves[0].move:\n for move in top_moves:\n if best_move == move.move:\n top_moves.remove(move)\n top_moves.insert(0, move)\n break\n else:\n raise ValueError(f\"Stockfish returned the best move: {best_move}, but it's not in the list\")\n \n\n yield top_moves[:num_top_moves]\n\n break\n\n except BaseException as e:\n raise e from e\n\n finally:\n # stockfish has not returned the best move, but the generator was signaled to close\n if not foundBestMove:\n self._put(\"stop\")\n while not self._read_line().startswith(\"bestmove\"):\n pass\n\n if old_MultiPV_value != self._parameters[\"MultiPV\"]:\n self._set_option(\"MultiPV\", old_MultiPV_value)\n self._parameters.update({\"MultiPV\": old_MultiPV_value})\n\nTo evaluate the position, you can get the top moves, then the score will be either mate or cp (centipawn) of the best move:\nfor top_moves in stockfish.generate_top_moves():\n best_move = top_moves[0]\n print(f\"Evaluation at depth {best_move.depth}: {best_move.cp}\")\n\nThe output for the starting position:\nEvaluation at depth 2: 141\nEvaluation at depth 3: 127\nEvaluation at depth 4: 77\nEvaluation at depth 5: 70\nEvaluation at depth 6: 69\nEvaluation at depth 7: 77\nEvaluation at depth 8: 77\nEvaluation at depth 9: 83\nEvaluation at depth 10: 83\nEvaluation at depth 11: 63\nEvaluation at depth 12: 63\nEvaluation at depth 13: 70\nEvaluation at depth 14: 56\nEvaluation at depth 15: 56\nEvaluation at depth 16: 56\nEvaluation at depth 17: 56\nEvaluation at depth 18: 49\nEvaluation at depth 18: 49\n\nWith this simple method added, you can do some amazing stuff like this, the evaluation bar on the left is calculated in python:\n\n" ]
[ 0, 0 ]
[]
[]
[ "chess", "evaluation", "python", "stockfish" ]
stackoverflow_0071945463_chess_evaluation_python_stockfish.txt
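The first answer's loop idea, sketched against the stockfish package; this assumes the package exposes set_depth (recent versions do) and simply re-runs get_evaluation with a growing depth cap, so it re-searches from scratch at each depth and is slower than reading Stockfish's own iterative-deepening output as the second answer does:
from stockfish import Stockfish

stockfish = Stockfish(path="stockfish/stockfish")
fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"  # starting position
stockfish.set_fen_position(fen)

for depth in range(1, 19):
    stockfish.set_depth(depth)              # cap the search depth for this pass
    print(depth, stockfish.get_evaluation())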
Q: Gekko and Intermediate variables I am using GEKKO to fit a function. The form of a function is known. It looks like a sum of similar subfunctions with various parameters, each subfunction has its own set of parameters to find (optimize)... I don't think I understand the use of intermediate variables fully and would love some help with my code. I am using intermediate variables to minimize the code and make it more readable. I got an error: apm 212.160.70.179_gk_model1 <br><pre> ---------------------------------------------------------------- APMonitor, Version 1.0.1 APMonitor Optimization Suite ---------------------------------------------------------------- @error: Intermediate Definition Error: Intermediate variable with no equality (=) expression 30.124819430.1605366830.1963387730.2322259530.2681985430.30425683 STOPPING... As I understood I am getting some array as output? How can I understand which Intermediate variable causes a problem? Or what am I doing wrong? Here is the code: # using GEKKO for preliminary fitting xData = np.array(df['E']) yData = np.array(df['exp_trans']) m = GEKKO() # parameters x = m.Param(value = xData) # x-coordinates for fitting z = m.Param(value = yData) # experimental results to fit to # constants A1 = m.Const(A1_c) gj = m.Const(gj_c) # variables El_0 = m.FV(lb = borders_left[0], ub=borders_right[0]) El_1 = m.FV(lb = borders_left[1], ub=borders_right[1]) El_2 = m.FV(lb = borders_left[2], ub=borders_right[2]) G1_0 = m.FV(lb = 0.000001, ub=1) G1_1 = m.FV(lb = 0.000001, ub=1) G1_2 = m.FV(lb = 0.000001, ub=1) # G2 = 0 G2_0 = m.FV(lb = 0.000000, ub=0) G2_1 = m.FV(lb = 0.000000, ub=0) G2_2 = m.FV(lb = 0.000000, ub=0) Gg_0 = m.FV(lb = 0.000001, ub=1) Gg_1 = m.FV(lb = 0.000001, ub=1) Gg_2 = m.FV(lb = 0.000001, ub=1) #Intermediates k_alfa = m.Intermediate(A1*np.sqrt(x)) ro = m.Intermediate(k_alfa*ac) G_0 = m.Intermediate(G1_0+G2_0+Gg_0) G_1 = m.Intermediate(G1_1+G2_1+Gg_1) G_2 = m.Intermediate(G1_2+G2_2+Gg_2) d0 = m.Intermediate((El_0-x)**2+(G_0/2)**2) d1 = m.Intermediate((El_1-x)**2+(G_1/2)**2) d2 = m.Intermediate((El_2-x)**2+(G_2/2)**2) phi = m.Intermediate(ro) f0=m.Intermediate((1-(1-(G_0*G1_0/(2*d0)))*m.cos(2*phi)-((El_0-x)*G1_0/d0)*m.sin(2*phi))) f1=m.Intermediate((1-(1-(G_1*G1_1/(2*d1)))*m.cos(2*phi)-((El_1-x)*G1_1/d1)*m.sin(2*phi))) f2=m.Intermediate((1-(1-(G_2*G1_2/(2*d2)))*m.cos(2*phi)-((El_2-x)*G1_2/d2)*m.sin(2*phi))) sigma_sum = m.Intermediate(2*math.pi*gj/k_alfa * (f0+f1+f2)) # designing an equation for the model y = m.Var() m.Equation(y == m.exp(-n*sigma_sum)) m.Minimize(((y-z))**2) # Options El_0.STATUS = 1 El_1.STATUS = 1 El_2.STATUS = 1 G1_0.STATUS = 1 G1_1.STATUS = 1 G1_2.STATUS = 1 G2_0.STATUS = 1 G2_1.STATUS = 1 G2_2.STATUS = 1 Gg_0.STATUS = 1 Gg_1.STATUS = 1 Gg_2.STATUS = 1 m.options.IMODE = 2 m.options.SOLVER = 3 m.options.MAX_ITER = 1000 m.solve(disp=1) A: The problem with the code was connected with the use of various data types representation e.g. k_alfa = m.Intermediate(A1*np.sqrt(x)) need to be changed on: k_alfa = m.Intermediate(A1*m.sqrt(x)) and so on.. Because functions vary in output datatypes. Always check and be aware of using various data types & structures in your models and variables definitions.
Gekko and Intermediate variables
I am using GEKKO to fit a function. The form of a function is known. It looks like a sum of similar subfunctions with various parameters, each subfunction has its own set of parameters to find (optimize)... I don't think I understand the use of intermediate variables fully and would love some help with my code. I am using intermediate variables to minimize the code and make it more readable. I got an error: apm 212.160.70.179_gk_model1 <br><pre> ---------------------------------------------------------------- APMonitor, Version 1.0.1 APMonitor Optimization Suite ---------------------------------------------------------------- @error: Intermediate Definition Error: Intermediate variable with no equality (=) expression 30.124819430.1605366830.1963387730.2322259530.2681985430.30425683 STOPPING... As I understood I am getting some array as output? How can I understand which Intermediate variable causes a problem? Or what am I doing wrong? Here is the code: # using GEKKO for preliminary fitting xData = np.array(df['E']) yData = np.array(df['exp_trans']) m = GEKKO() # parameters x = m.Param(value = xData) # x-coordinates for fitting z = m.Param(value = yData) # experimental results to fit to # constants A1 = m.Const(A1_c) gj = m.Const(gj_c) # variables El_0 = m.FV(lb = borders_left[0], ub=borders_right[0]) El_1 = m.FV(lb = borders_left[1], ub=borders_right[1]) El_2 = m.FV(lb = borders_left[2], ub=borders_right[2]) G1_0 = m.FV(lb = 0.000001, ub=1) G1_1 = m.FV(lb = 0.000001, ub=1) G1_2 = m.FV(lb = 0.000001, ub=1) # G2 = 0 G2_0 = m.FV(lb = 0.000000, ub=0) G2_1 = m.FV(lb = 0.000000, ub=0) G2_2 = m.FV(lb = 0.000000, ub=0) Gg_0 = m.FV(lb = 0.000001, ub=1) Gg_1 = m.FV(lb = 0.000001, ub=1) Gg_2 = m.FV(lb = 0.000001, ub=1) #Intermediates k_alfa = m.Intermediate(A1*np.sqrt(x)) ro = m.Intermediate(k_alfa*ac) G_0 = m.Intermediate(G1_0+G2_0+Gg_0) G_1 = m.Intermediate(G1_1+G2_1+Gg_1) G_2 = m.Intermediate(G1_2+G2_2+Gg_2) d0 = m.Intermediate((El_0-x)**2+(G_0/2)**2) d1 = m.Intermediate((El_1-x)**2+(G_1/2)**2) d2 = m.Intermediate((El_2-x)**2+(G_2/2)**2) phi = m.Intermediate(ro) f0=m.Intermediate((1-(1-(G_0*G1_0/(2*d0)))*m.cos(2*phi)-((El_0-x)*G1_0/d0)*m.sin(2*phi))) f1=m.Intermediate((1-(1-(G_1*G1_1/(2*d1)))*m.cos(2*phi)-((El_1-x)*G1_1/d1)*m.sin(2*phi))) f2=m.Intermediate((1-(1-(G_2*G1_2/(2*d2)))*m.cos(2*phi)-((El_2-x)*G1_2/d2)*m.sin(2*phi))) sigma_sum = m.Intermediate(2*math.pi*gj/k_alfa * (f0+f1+f2)) # designing an equation for the model y = m.Var() m.Equation(y == m.exp(-n*sigma_sum)) m.Minimize(((y-z))**2) # Options El_0.STATUS = 1 El_1.STATUS = 1 El_2.STATUS = 1 G1_0.STATUS = 1 G1_1.STATUS = 1 G1_2.STATUS = 1 G2_0.STATUS = 1 G2_1.STATUS = 1 G2_2.STATUS = 1 Gg_0.STATUS = 1 Gg_1.STATUS = 1 Gg_2.STATUS = 1 m.options.IMODE = 2 m.options.SOLVER = 3 m.options.MAX_ITER = 1000 m.solve(disp=1)
[ "The problem with the code was connected with the use of various data types representation\ne.g.\nk_alfa = m.Intermediate(A1*np.sqrt(x))\n\nneed to be changed on:\nk_alfa = m.Intermediate(A1*m.sqrt(x))\n\nand so on..\nBecause functions vary in output datatypes.\nAlways check and be aware of using various data types & structures in your models and variables definitions.\n" ]
[ 2 ]
[]
[]
[ "gekko", "optimization", "python" ]
stackoverflow_0074416814_gekko_optimization_python.txt
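For concreteness, a sketch of that fix applied to the first intermediates from the question (continuing with the variables already defined there): keep every expression on Gekko variables built from Gekko operators such as m.sqrt, m.cos and m.sin so it stays symbolic, and reserve numpy/math for plain numeric constants like math.pi:
k_alfa = m.Intermediate(A1 * m.sqrt(x))   # was A1 * np.sqrt(x), which evaluates numerically
ro = m.Intermediate(k_alfa * ac)
G_0 = m.Intermediate(G1_0 + G2_0 + Gg_0)
d0 = m.Intermediate((El_0 - x)**2 + (G_0 / 2)**2)
sigma_sum = m.Intermediate(2 * math.pi * gj / k_alfa * (f0 + f1 + f2))   # math.pi is a plain float constant, so it is fine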
Q: I want create comment section that can only logged in users can use but I have this problem I get an Error: cannot unpack non-iterable bool object profile = Profile.objects.get(Profile.user == request.user) This is my models.py in account app and blog app: class Profile(models.Model): STATUS_CHOICES = ( ('manager', 'مدیر'), ('developer', 'توسعه‌دهنده'), ('designer', 'طراح پروژه'), ) user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) bio = models.CharField(max_length=50, blank=True) task = models.CharField(choices=STATUS_CHOICES, max_length=20, blank=True, null=True, default=None) date_of_birth = models.DateField(blank=True, null=True) photo = models.ImageField(upload_to='users/photos/%Y/%m/%d/', blank=True) def __str__(self): return f'{self.user.get_full_name()}' class Comment(models.Model): post = models.ForeignKey(Post, on_delete=models.CASCADE, related_name='comments') profile = models.ForeignKey(Profile, on_delete=models.CASCADE, related_name='user_comments') body = models.TextField() created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) active = models.BooleanField(default=False) and this is my views.py for comments: def post_detail(request, year, month, day, slug): post = get_object_or_404(Post, slug=slug, status='published', publish__year=year, publish__month=month, publish__day=day) tags = Tag.objects.all() tagsList = [] for tag in post.tags.get_queryset(): tagsList.append(tag.name) profile = Profile.objects.get(Profile.user == request.user) comments = post.comments.filter(active=True) new_comment = None if request.method == 'POST': comment_form = CommentForm(data=request.POST) if comment_form.is_valid(): new_comment = comment_form.save(commit=False) new_comment.profile = profile new_comment.post = post new_comment.save() return redirect('post_detail', slug=post.slug) else: comment_form = CommentForm() post_tags_ids = post.tags.values_list('id', flat=True) similar_posts = Post.published.filter(tags__in=post_tags_ids).exclude(id=post.id) similar_posts = similar_posts.annotate(same_tags=Count('tags')).order_by('-same_tags', '-publish')[:3] return render(request, 'blog/post/detail.html', {'post': post, 'comments': comments, 'new_comment': new_comment, 'comment_form': comment_form, 'similar_posts': similar_posts, 'tagsList': tagsList, 'tags': tags}) Is there any solution for this problem? A: Assuming you need to get only single profile instance i.e. current logged in user's profile so you can either use: profile = Profile.objects.get(user=request.user) or: get_object_or_404(Profile, user=request.user) To limit the view to be accessed by only logged in users, use @login_required decorator so: @login_required(login_url='/accounts/login/') # you can give any login_url you want. def post_detail(request, year, month, day, slug): #...
I want to create a comment section that only logged-in users can use, but I have this problem
I get an Error: cannot unpack non-iterable bool object profile = Profile.objects.get(Profile.user == request.user) This is my models.py in account app and blog app: class Profile(models.Model): STATUS_CHOICES = ( ('manager', 'مدیر'), ('developer', 'توسعه‌دهنده'), ('designer', 'طراح پروژه'), ) user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) bio = models.CharField(max_length=50, blank=True) task = models.CharField(choices=STATUS_CHOICES, max_length=20, blank=True, null=True, default=None) date_of_birth = models.DateField(blank=True, null=True) photo = models.ImageField(upload_to='users/photos/%Y/%m/%d/', blank=True) def __str__(self): return f'{self.user.get_full_name()}' class Comment(models.Model): post = models.ForeignKey(Post, on_delete=models.CASCADE, related_name='comments') profile = models.ForeignKey(Profile, on_delete=models.CASCADE, related_name='user_comments') body = models.TextField() created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) active = models.BooleanField(default=False) and this is my views.py for comments: def post_detail(request, year, month, day, slug): post = get_object_or_404(Post, slug=slug, status='published', publish__year=year, publish__month=month, publish__day=day) tags = Tag.objects.all() tagsList = [] for tag in post.tags.get_queryset(): tagsList.append(tag.name) profile = Profile.objects.get(Profile.user == request.user) comments = post.comments.filter(active=True) new_comment = None if request.method == 'POST': comment_form = CommentForm(data=request.POST) if comment_form.is_valid(): new_comment = comment_form.save(commit=False) new_comment.profile = profile new_comment.post = post new_comment.save() return redirect('post_detail', slug=post.slug) else: comment_form = CommentForm() post_tags_ids = post.tags.values_list('id', flat=True) similar_posts = Post.published.filter(tags__in=post_tags_ids).exclude(id=post.id) similar_posts = similar_posts.annotate(same_tags=Count('tags')).order_by('-same_tags', '-publish')[:3] return render(request, 'blog/post/detail.html', {'post': post, 'comments': comments, 'new_comment': new_comment, 'comment_form': comment_form, 'similar_posts': similar_posts, 'tagsList': tagsList, 'tags': tags}) Is there any solution for this problem?
[ "Assuming you need to get only single profile instance i.e. current logged in user's profile so you can either use:\n profile = Profile.objects.get(user=request.user)\n\nor:\nget_object_or_404(Profile, user=request.user)\n\nTo limit the view to be accessed by only logged in users, use @login_required decorator so:\n@login_required(login_url='/accounts/login/') # you can give any login_url you want.\ndef post_detail(request, year, month, day, slug):\n #...\n\n" ]
[ 3 ]
[]
[]
[ "django", "django_forms", "django_queryset", "django_views", "python" ]
stackoverflow_0074475920_django_django_forms_django_queryset_django_views_python.txt
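One small design note on the lookup suggested above, as a sketch: if some authenticated users might not have a Profile row yet, get_or_create avoids a DoesNotExist crash (whether silently creating empty profiles is desirable depends on the app):
from django.contrib.auth.decorators import login_required

@login_required
def post_detail(request, year, month, day, slug):
    # falls back to creating an empty profile instead of raising DoesNotExist
    profile, created = Profile.objects.get_or_create(user=request.user)
    # ... rest of the view unchanged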
Q: sendkeys() to bloomberg panel python I try to do a basic sendkeys() to and open an logged into bloomberg panel. I am able to verify that sendkeys() works with this: import time import win32com.client as comclt wsh= comclt.Dispatch("WScript.Shell") wsh.AppActivate("Notepad") # select another application time.sleep(0.5) # wait for half a second wsh.SendKeys("a") # send the keys you want print('key is sent') What i have tried: With the above i try to change Notepad for Bloomberg or Bloomberg App Host as is seen in the task manager, but i am unable to sendkeys... How can one get this to work or is there an alternative method that does work ? A: assuming you are trying to log into bloomberg automatically using some script. i use vbscript to achieve this at a scheduled time of the day. below is my vb script saved as a .vbs file and executed using windows task manager you will need to change loginname and password to match yours the commented part of the loop waits for the Bloomberg chat window to appear. this was then commented as i made some settings on bloomberg to not to open chat window upon logon. - i do not remember what was exactly done then This will work only if you have a bloomberg open terminal which does not ask for an otp after login as it generally asks on a Bloomberg Anywhere terminal also before running this script ensure that bloomberg application is closed / not open you may use taskkill command to close all the instances of wintrv.exe taskkill /IM wintrv.exe /F Below is the vbs script. set WshShell = WScript.CreateObject("WScript.Shell") dim ret ret = False do while ret=False ret = WshShell.AppActivate("BLOOMBERG: Login") WScript.Sleep 10000 If ret = True Then 'CreateObject("WScript.Shell").PopUp "here", 5 'WshShell.AppActivate("BLOOMBERG: Login") WScript.Sleep 3000 WshShell.SendKeys "{esc}" WScript.Sleep 1000 WshShell.SendKeys "login~" WScript.Sleep 10000 WshShell.SendKeys "loginname{tab}password~" wScript.Sleep 20000 else Call WshShell.Run("C:\blp\Wintrv\wintrv.exe") WScript.Sleep 15000 End If loop 'WScript.Sleep 5000 'ret=False 'do while ret=False 'ret = WshShell.AppActivate("IB - IB Manager") 'if ret=False Then ' WScript.Sleep 3000 'End If 'loop WScript.Sleep 5000 WshShell.AppActivate("New Tab")
sendkeys() to bloomberg panel python
I am trying to do a basic sendkeys() to an open, logged-in Bloomberg panel. I am able to verify that sendkeys() works with this: import time import win32com.client as comclt wsh= comclt.Dispatch("WScript.Shell") wsh.AppActivate("Notepad") # select another application time.sleep(0.5) # wait for half a second wsh.SendKeys("a") # send the keys you want print('key is sent') What I have tried: with the above I swap "Notepad" for "Bloomberg" or "Bloomberg App Host" as seen in the Task Manager, but I am unable to send keys... How can one get this to work, or is there an alternative method that does work?
[ "assuming you are trying to log into bloomberg automatically using some script. i use vbscript to achieve this at a scheduled time of the day.\nbelow is my vb script saved as a .vbs file and executed using windows task manager\nyou will need to change loginname and password to match yours\nthe commented part of the loop waits for the Bloomberg chat window to appear. this was then commented as i made some settings on bloomberg to not to open chat window upon logon. - i do not remember what was exactly done then\nThis will work only if you have a bloomberg open terminal which does not ask for an otp after login as it generally asks on a Bloomberg Anywhere terminal\nalso before running this script ensure that bloomberg application is closed / not open\nyou may use taskkill command to close all the instances of wintrv.exe\ntaskkill /IM wintrv.exe /F\n\nBelow is the vbs script.\nset WshShell = WScript.CreateObject(\"WScript.Shell\") \ndim ret\nret = False\ndo while ret=False\nret = WshShell.AppActivate(\"BLOOMBERG: Login\")\nWScript.Sleep 10000\nIf ret = True Then\n 'CreateObject(\"WScript.Shell\").PopUp \"here\", 5\n 'WshShell.AppActivate(\"BLOOMBERG: Login\")\n WScript.Sleep 3000 \n WshShell.SendKeys \"{esc}\" \n WScript.Sleep 1000 \n WshShell.SendKeys \"login~\" \n WScript.Sleep 10000 \n WshShell.SendKeys \"loginname{tab}password~\"\n wScript.Sleep 20000\nelse\n Call WshShell.Run(\"C:\\blp\\Wintrv\\wintrv.exe\")\n WScript.Sleep 15000\nEnd If \nloop\n'WScript.Sleep 5000\n'ret=False\n'do while ret=False\n'ret = WshShell.AppActivate(\"IB - IB Manager\")\n'if ret=False Then\n' WScript.Sleep 3000\n'End If\n'loop\nWScript.Sleep 5000\nWshShell.AppActivate(\"New Tab\")\n\n" ]
[ 0 ]
[]
[]
[ "bloomberg", "python", "sendkeys" ]
stackoverflow_0072642190_bloomberg_python_sendkeys.txt
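A Python-side sketch, hedged: WScript.Shell's AppActivate matches a window title (or a process id), not the process names shown in Task Manager, which may be why swapping in "Bloomberg" or "Bloomberg App Host" had no effect; reusing the window title from the VBScript above ("BLOOMBERG: Login") is one thing to try:
import time
import win32com.client as comclt

wsh = comclt.Dispatch("WScript.Shell")

# AppActivate returns True only if a matching window title was found and focused
if wsh.AppActivate("BLOOMBERG: Login"):
    time.sleep(1)
    wsh.SendKeys("{ESC}")
    time.sleep(0.5)
    wsh.SendKeys("login~")   # "~" sends Enter in SendKeys syntax
else:
    print("Bloomberg login window not found - is wintrv.exe running?")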
Q: Add values to new column from a dict with keys matching the index of a dataframe I have a dictionary that for examples sake, looks like {'a': 1, 'b': 4, 'c': 7} I have a dataframe that has the same index values as the keys in this dict. I want to add each value from the dict to the dataframe. I feel like doing a check for every row of the DF, checking the index value, matching it to the one in the dict, then trying to add it is going to be a very slow way right? A: You can use map and assign back to a new column: d = {'a': 1, 'b': 4, 'c': 7} df = pd.DataFrame({'c':[1,2,3]},index=['a','b','c']) df['new_col'] = df.index.map(d) prints: c new_col a 1 1 b 2 4 c 3 7
Add values to new column from a dict with keys matching the index of a dataframe
I have a dictionary that, for example's sake, looks like {'a': 1, 'b': 4, 'c': 7} I have a dataframe that has the same index values as the keys in this dict. I want to add each value from the dict to the dataframe as a new column. I feel like checking every row of the dataframe (matching its index value to the one in the dict, then adding the value) is going to be a very slow approach, right?
[ "You can use map and assign back to a new column:\nd = {'a': 1, 'b': 4, 'c': 7}\ndf = pd.DataFrame({'c':[1,2,3]},index=['a','b','c'])\n\ndf['new_col'] = df.index.map(d)\n\nprints:\n c new_col\na 1 1\nb 2 4\nc 3 7\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074476226_pandas_python.txt
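An equivalent one-liner to the map approach, for reference: assigning a Series built from the dict aligns on the DataFrame index automatically:
import pandas as pd

d = {'a': 1, 'b': 4, 'c': 7}
df = pd.DataFrame({'c': [1, 2, 3]}, index=['a', 'b', 'c'])

df['new_col'] = pd.Series(d)   # index-aligned, same result as df.index.map(d)
print(df)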
Q: how to add dictionary object name to json object I have 3 python dictionaries as below: gender = {'Female': 241, 'Male': 240} marital_status = {'Divorced': 245, 'Engaged': 243, 'Married': 244, 'Partnered': 246, 'Single': 242} family_type = {'Extended': 234, 'Joint': 235, 'Nuclear': 233, 'Single Parent': 236} I add them to a list: lst = [gender, marital_status, family_type] And create a JSON object which I need to save as a JSON file using pd.to_json using: jf = json.dumps(lst, indent = 4) When we look at jf object: print(jf) [ { "Female": 241, "Male": 240 }, { "Divorced": 245, "Engaged": 243, "Married": 244, "Partnered": 246, "Single": 242 }, { "Extended": 234, "Joint": 235, "Nuclear": 233, "Single Parent": 236 } ] Is there a way to make the dictionary name as key and get output as below: { "gender": { "Female": 241, "Male": 240 }, "marital_status": { "Divorced": 245, "Engaged": 243, "Married": 244, "Partnered": 246, "Single": 242 }, "family_type": { "Extended": 234, "Joint": 235, "Nuclear": 233, "Single Parent": 236 } } A: You'll have to do this manually by creating a dictionary and mapping the name to the sub_dictionary yourself. my_data = {'gender': gender, 'marital_status':marital_status, 'family_type': family_type} Edit: example of adding to an outfile using json.dump with open('myfile.json','w') as wrtier: json.dump(my_data, writer) A: As per your requirement you can done it like this by replacing line lst dict_req = {"gender":gender, "marital_status":marital_status, "family_type":family_type}
how to add dictionary object name to json object
I have 3 python dictionaries as below: gender = {'Female': 241, 'Male': 240} marital_status = {'Divorced': 245, 'Engaged': 243, 'Married': 244, 'Partnered': 246, 'Single': 242} family_type = {'Extended': 234, 'Joint': 235, 'Nuclear': 233, 'Single Parent': 236} I add them to a list: lst = [gender, marital_status, family_type] And create a JSON object which I need to save as a JSON file using pd.to_json using: jf = json.dumps(lst, indent = 4) When we look at jf object: print(jf) [ { "Female": 241, "Male": 240 }, { "Divorced": 245, "Engaged": 243, "Married": 244, "Partnered": 246, "Single": 242 }, { "Extended": 234, "Joint": 235, "Nuclear": 233, "Single Parent": 236 } ] Is there a way to make the dictionary name as key and get output as below: { "gender": { "Female": 241, "Male": 240 }, "marital_status": { "Divorced": 245, "Engaged": 243, "Married": 244, "Partnered": 246, "Single": 242 }, "family_type": { "Extended": 234, "Joint": 235, "Nuclear": 233, "Single Parent": 236 } }
[ "You'll have to do this manually by creating a dictionary and mapping the name to the sub_dictionary yourself.\nmy_data = {'gender': gender, 'marital_status':marital_status, 'family_type': family_type}\n\nEdit: example of adding to an outfile using json.dump\nwith open('myfile.json','w') as wrtier:\n json.dump(my_data, writer)\n\n", "As per your requirement you can done it like this by replacing line lst\ndict_req = {\"gender\":gender, \"marital_status\":marital_status, \"family_type\":family_type}\n\n" ]
[ 2, 0 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074476259_json_python.txt
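Putting the two answers together, a minimal end-to-end sketch (the file name is illustrative); note that the handle name in with open(...) as ... must match the variable passed to json.dump:
import json

gender = {'Female': 241, 'Male': 240}
marital_status = {'Divorced': 245, 'Engaged': 243, 'Married': 244, 'Partnered': 246, 'Single': 242}
family_type = {'Extended': 234, 'Joint': 235, 'Nuclear': 233, 'Single Parent': 236}

data = {'gender': gender, 'marital_status': marital_status, 'family_type': family_type}

print(json.dumps(data, indent=4))           # matches the desired output
with open('categories.json', 'w') as fh:
    json.dump(data, fh, indent=4)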
Q: Python import error on MacOS: `import scipy.integrate` raises `Library not loaded: ibgfortran.5.dylib` echo $PATH gives /usr/local/texlive/2021/bin/universal-darwin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/opt/X11/bin:/Library/Apple/usr/bin After updating to MacOS Monterey import scipy.integrate in Python raises --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-3-f7ec28d1adc8> in <module> ----> 1 import scipy.integrate /usr/local/lib/python3.9/site-packages/scipy/integrate/__init__.py in <module> 88 solve_bvp -- Solve a boundary value problem for a system of ODEs. 89 """ ---> 90 from ._quadrature import * 91 from .odepack import * 92 from .quadpack import * /usr/local/lib/python3.9/site-packages/scipy/integrate/_quadrature.py in <module> 8 # even though it's actually a NumPy function. 9 from numpy import trapz ---> 10 from scipy.special import roots_legendre 11 from scipy.special import gammaln 12 /usr/local/lib/python3.9/site-packages/scipy/special/__init__.py in <module> 631 from .sf_error import SpecialFunctionWarning, SpecialFunctionError 632 --> 633 from . import _ufuncs 634 from ._ufuncs import * 635 ImportError: dlopen(/usr/local/lib/python3.9/site-packages/scipy/special/_ufuncs.cpython-39-darwin.so, 0x0002): Library not loaded: /usr/local/opt/gcc/lib/gcc/10/libgfortran.5.dylib Referenced from: /usr/local/lib/python3.9/site-packages/scipy/special/_ufuncs.cpython-39-darwin.so Reason: tried: '/usr/local/opt/gcc/lib/gcc/10/libgfortran.5.dylib' (no such file), '/usr/local/lib/libgfortran.5.dylib' (no such file), '/usr/lib/libgfortran.5.dylib' (no such file) Any idea? A: According to the error message, it can't find libgfortran.5.dylib inside /usr/local/opt/gcc/lib/gcc/10. Since you are on gcc version 11, you can try to copy it from there via mkdir -p /usr/local/opt/gcc/lib/gcc/10 cp /usr/local/opt/gcc/lib/gcc/11/libgfortran.5.dylib /usr/local/opt/gcc/lib/gcc/10/ inside a terminal. A: The same thing happened to me when I upgraded my MacBook pro to MacOs ventura 13.0.1, scipy.stat didn't load. I fixed it by updating conda conda update conda
Python import error on MacOS: `import scipy.integrate` raises `Library not loaded: libgfortran.5.dylib`
echo $PATH gives /usr/local/texlive/2021/bin/universal-darwin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/opt/X11/bin:/Library/Apple/usr/bin After updating to MacOS Monterey import scipy.integrate in Python raises --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-3-f7ec28d1adc8> in <module> ----> 1 import scipy.integrate /usr/local/lib/python3.9/site-packages/scipy/integrate/__init__.py in <module> 88 solve_bvp -- Solve a boundary value problem for a system of ODEs. 89 """ ---> 90 from ._quadrature import * 91 from .odepack import * 92 from .quadpack import * /usr/local/lib/python3.9/site-packages/scipy/integrate/_quadrature.py in <module> 8 # even though it's actually a NumPy function. 9 from numpy import trapz ---> 10 from scipy.special import roots_legendre 11 from scipy.special import gammaln 12 /usr/local/lib/python3.9/site-packages/scipy/special/__init__.py in <module> 631 from .sf_error import SpecialFunctionWarning, SpecialFunctionError 632 --> 633 from . import _ufuncs 634 from ._ufuncs import * 635 ImportError: dlopen(/usr/local/lib/python3.9/site-packages/scipy/special/_ufuncs.cpython-39-darwin.so, 0x0002): Library not loaded: /usr/local/opt/gcc/lib/gcc/10/libgfortran.5.dylib Referenced from: /usr/local/lib/python3.9/site-packages/scipy/special/_ufuncs.cpython-39-darwin.so Reason: tried: '/usr/local/opt/gcc/lib/gcc/10/libgfortran.5.dylib' (no such file), '/usr/local/lib/libgfortran.5.dylib' (no such file), '/usr/lib/libgfortran.5.dylib' (no such file) Any idea?
[ "According to the error message, it can't find libgfortran.5.dylib inside /usr/local/opt/gcc/lib/gcc/10. Since you are on gcc version 11, you can try to copy it from there via\nmkdir -p /usr/local/opt/gcc/lib/gcc/10\ncp /usr/local/opt/gcc/lib/gcc/11/libgfortran.5.dylib /usr/local/opt/gcc/lib/gcc/10/\n\ninside a terminal.\n", "The same thing happened to me when I upgraded my MacBook pro to MacOs ventura 13.0.1, scipy.stat didn't load. I fixed it by updating conda\nconda update conda\n\n" ]
[ 0, 0 ]
[]
[]
[ "dylib", "macos", "python", "scipy" ]
stackoverflow_0069809226_dylib_macos_python_scipy.txt
Q: Python adding the values # Initialising list of dictionary ini_dict = [{'a':5, 'b':10, 'c':90}, {'a':45, 'b':78}, {'a':90, 'c':10}] # printing initial dictionary print ("initial dictionary", (ini_dict)) # sum the values with same keys result = {} for d in ini_dict: for k in d.keys(): result[k] = result.get(k,0) + d[k] print("resultant dictionary : ", (result)) Can someone explain the program line by line A: Creating a list of dictionary's ini_dict = [{'a':5, 'b':10, 'c':90}, {'a':45, 'b':78}, {'a':90, 'c':10}] Prints out the list with dictionary's print ("initial dictionary", (ini_dict)) Creates a new dictionary result = {} Loop's through the List of dictionarys for d in ini_dict: so the first d would be {'a':5, 'b':10, 'c':90} Loop's through the keys of that dict for k in d.keys(): -> a, b and c Creates or gets the same key in the result dict and adds the value from the current key. Default value for a new created key is 0. result[k] = result.get(k,0) + d[k] Prints out the result dict print("resultant dictionary : ", (result)) A: the first line initialises a list of three dictionaries. ini_dict = [{'a':5, 'b':10, 'c':90}, {'a':45, 'b':78}, {'a':90, 'c':10}] next up, the dictionary is printed print ("initial dictionary", (ini_dict)) finally, a weighted histogram is made of the dictionaries based on the keys of the elements within said dictionaries. This is done in three steps: iterating over the list of dictionaries to get at each different dictionary. for d in ini_dict: remember: ini_dict is a list of dictionaries. when you for-loop over a list, the symbol (here d) becomes each of the dictionaries. iterating over the keys in the dictionary. The method dict.keys() returns a list of keys, over which can be iterated. for k in d.keys(): finally, for each key in the dictionary the corresponding key in the result dictionary is modified to add the new value. with result.get(k,0) the value for the key k in the result dictionary is fetched, but 0 is the default value if the key is not present. result[k] = result.get(k,0) + d[k] This just replaces the result with the previous result + the value in d. At the end of this bit of code, the result dictionary has the added value of each of the keys.
Python adding the values
# Initialising list of dictionary ini_dict = [{'a':5, 'b':10, 'c':90}, {'a':45, 'b':78}, {'a':90, 'c':10}] # printing initial dictionary print ("initial dictionary", (ini_dict)) # sum the values with same keys result = {} for d in ini_dict: for k in d.keys(): result[k] = result.get(k,0) + d[k] print("resultant dictionary : ", (result)) Can someone explain the program line by line
[ "Creating a list of dictionary's\nini_dict = [{'a':5, 'b':10, 'c':90},\n {'a':45, 'b':78},\n {'a':90, 'c':10}]\n\nPrints out the list with dictionary's\nprint (\"initial dictionary\", (ini_dict))\n\nCreates a new dictionary\nresult = {}\n\nLoop's through the List of dictionarys\n for d in ini_dict:\n\nso the first d would be {'a':5, 'b':10, 'c':90}\nLoop's through the keys of that dict\nfor k in d.keys():\n\n-> a, b and c\nCreates or gets the same key in the result dict and adds the value from the current key. Default value for a new created key is 0.\nresult[k] = result.get(k,0) + d[k]\n\nPrints out the result dict\nprint(\"resultant dictionary : \", (result))\n\n", "the first line initialises a list of three dictionaries.\nini_dict = [{'a':5, 'b':10, 'c':90},\n {'a':45, 'b':78},\n {'a':90, 'c':10}]\n\nnext up, the dictionary is printed\nprint (\"initial dictionary\", (ini_dict))\n\nfinally, a weighted histogram is made of the dictionaries based on the keys of the elements within said dictionaries. This is done in three steps:\n\niterating over the list of dictionaries to get at each different dictionary.\n\nfor d in ini_dict:\n\nremember: ini_dict is a list of dictionaries. when you for-loop over a list, the symbol (here d) becomes each of the dictionaries.\n\niterating over the keys in the dictionary. The method dict.keys() returns a list of keys, over which can be iterated.\n\nfor k in d.keys():\n\n\nfinally, for each key in the dictionary the corresponding key in the result dictionary is modified to add the new value. with result.get(k,0) the value for the key k in the result dictionary is fetched, but 0 is the default value if the key is not present.\n\nresult[k] = result.get(k,0) + d[k]\n\nThis just replaces the result with the previous result + the value in d.\nAt the end of this bit of code, the result dictionary has the added value of each of the keys.\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074476206_python.txt
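A shorter way to get the same result as the loop explained above is collections.Counter, which supports adding values for matching keys directly. This is a minimal sketch assuming the same ini_dict list from the question:

from collections import Counter

ini_dict = [{'a': 5, 'b': 10, 'c': 90},
            {'a': 45, 'b': 78},
            {'a': 90, 'c': 10}]

result = Counter()
for d in ini_dict:
    result.update(d)          # adds values for matching keys; missing keys start at 0
print(dict(result))           # {'a': 140, 'b': 88, 'c': 100}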
Q: double integer checker function As the title says, this is a double integer checker, meaning it has two functions + the main. Please correct me if I do not paraphrase it correctly. Anyways, here is the model: def is_integer(kraai): kraai.replace(" ", "") if len(kraai) == 1: if kraai.isdigit(): print(valid) else: print(invalid) exit() elif len(kraai) > 1: if roek == "-" or roek == "+" or roek.isdigit(): print(valid) else: print(invalid) exit() elif len(kraai) == 0: print(invalid) exit() def remove_non_integer(kauw): if len(kauw) >= 1: for z in kauw: if not z.isdigit(): ekster = kauw.replace(z, "") print(invalid) print(f'''\nNot all characters after the first are integers... \nnogsteeds, vet raaf!:, {ekster}''') if __name__ == '__main__': valid = "valid" invalid = "invalid" kraai = input("Welcome to the integer tester. Please give an input: ") if len(kraai) > 1: roek = kraai[0] kauw = kraai[1:] y = "".join([roek, kauw]) corvidae = is_integer(kraai), remove_non_integer(kauw) elif len(kraai) < 1: corvidae = is_integer(kraai) As you can see, one functions to check the integer while the other functions to remove every non-integer. However, three problems: It will remove only one unique character It will print the same message every time a non-integer is in the integer It will print both 'valid' and 'invalid' for some reason when the remove_integer(x) function filters a non-integer. Any help? A: So yeah there were multiple errors at the time I posted this question. def is_integer(kraai): valid = "valid" invalid = "invalid" if len(kraai) == 1: if kraai.isdigit(): print(valid) elif not kraai.isdigit(): print(invalid) elif len(kraai) > 1: if kraai[0] == "-" or kraai[0] == "+" or kraai[0].isdigit(): if kraai[1:].isdigit(): print(True) print(valid) else: print(False) else: print(invalid) elif len(kraai) == 0: print(invalid) exit() def remove_non_integer(bonte_kraai): invalid = "invalid" roek = bonte_kraai[0] kauw = bonte_kraai[1:] y = "".join([roek, kauw]) for x in kauw: if x.isalpha(): ekster = ''.join([i for i in y if i.isdigit() or i == "-"]) print(False) print(invalid, f'''\nNot all characters after the first are integers... \nnogsteeds, vet raaf!: {ekster}''') break if __name__ == '__main__': kraai = input("Welcome to the integer tester. Please give an input: ") kraai.replace(" ", "") if len(kraai) == 1: corvidae = is_integer(kraai) elif len(kraai) > 1: for x in kraai[1:]: if kraai[1:].isdigit(): corvidae = is_integer(kraai) break elif x.isalpha(): bonte_kraai = kraai corvidae = remove_non_integer(bonte_kraai) break The difference between both codes, now, is first of all that they work individually as observable in the if name == 'main' block. Secondly, I used the 'break' statement to break the loop after it has fulfilled its task. Otherwise it will repeat a number of times and that is unwanted (it should only run once). Thirdly, as you can see I moved some variables to functions so that pytest doesn't return an 'x is not defined' error. I substituted the variables with indexes []. Thanks.
double integer checker function
As the title says, this is a double integer checker, meaning it has two functions + the main. Please correct me if I do not paraphrase it correctly. Anyways, here is the model: def is_integer(kraai): kraai.replace(" ", "") if len(kraai) == 1: if kraai.isdigit(): print(valid) else: print(invalid) exit() elif len(kraai) > 1: if roek == "-" or roek == "+" or roek.isdigit(): print(valid) else: print(invalid) exit() elif len(kraai) == 0: print(invalid) exit() def remove_non_integer(kauw): if len(kauw) >= 1: for z in kauw: if not z.isdigit(): ekster = kauw.replace(z, "") print(invalid) print(f'''\nNot all characters after the first are integers... \nnogsteeds, vet raaf!:, {ekster}''') if __name__ == '__main__': valid = "valid" invalid = "invalid" kraai = input("Welcome to the integer tester. Please give an input: ") if len(kraai) > 1: roek = kraai[0] kauw = kraai[1:] y = "".join([roek, kauw]) corvidae = is_integer(kraai), remove_non_integer(kauw) elif len(kraai) < 1: corvidae = is_integer(kraai) As you can see, one functions to check the integer while the other functions to remove every non-integer. However, three problems: It will remove only one unique character It will print the same message every time a non-integer is in the integer It will print both 'valid' and 'invalid' for some reason when the remove_integer(x) function filters a non-integer. Any help?
[ "So yeah there were multiple errors at the time I posted this question.\ndef is_integer(kraai):\n\n valid = \"valid\"\n invalid = \"invalid\"\n\n if len(kraai) == 1:\n if kraai.isdigit():\n print(valid)\n elif not kraai.isdigit():\n print(invalid)\n\n elif len(kraai) > 1:\n if kraai[0] == \"-\" or kraai[0] == \"+\" or kraai[0].isdigit():\n if kraai[1:].isdigit():\n print(True)\n print(valid)\n else:\n print(False)\n else:\n print(invalid)\n\n elif len(kraai) == 0:\n print(invalid)\n exit()\n\n\ndef remove_non_integer(bonte_kraai):\n\n invalid = \"invalid\"\n roek = bonte_kraai[0]\n kauw = bonte_kraai[1:]\n y = \"\".join([roek, kauw])\n for x in kauw:\n if x.isalpha():\n ekster = ''.join([i for i in y if i.isdigit() or i == \"-\"])\n print(False)\n print(invalid, f'''\\nNot all characters after the first are integers...\n \\nnogsteeds, vet raaf!: {ekster}''')\n break\n\n\nif __name__ == '__main__':\n\n kraai = input(\"Welcome to the integer tester. Please give an input: \")\n kraai.replace(\" \", \"\")\n\n if len(kraai) == 1:\n corvidae = is_integer(kraai)\n elif len(kraai) > 1:\n for x in kraai[1:]:\n if kraai[1:].isdigit():\n corvidae = is_integer(kraai)\n break\n elif x.isalpha():\n bonte_kraai = kraai\n corvidae = remove_non_integer(bonte_kraai)\n break\n\n\n\nThe difference between both codes, now, is first of all that they work individually as observable in the if name == 'main' block.\n\n\nSecondly, I used the 'break' statement to break the loop after it has fulfilled its task. Otherwise it will repeat a number of times and that is unwanted (it should only run once).\n\n\nThirdly, as you can see I moved some variables to functions so that pytest doesn't return an 'x is not defined' error. I substituted the variables with indexes [].\n\nThanks.\n" ]
[ 0 ]
[]
[]
[ "arguments", "filter", "for_loop", "function", "python" ]
stackoverflow_0074423121_arguments_filter_for_loop_function_python.txt
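For the underlying task in the question above — deciding whether a whole string is a valid signed integer — a much shorter check is usually enough. This is a rough sketch, not a drop-in replacement for the two functions in the post:

def is_integer(s):
    s = s.strip()
    if s and s[0] in '+-':      # allow a single leading sign
        s = s[1:]
    return s.isdigit()

print(is_integer("123"))    # True
print(is_integer("-45"))    # True
print(is_integer("12a3"))   # False
print(is_integer(""))       # False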
Q: changing opacity based on different column using plotly I would like to change the opacity of the bar based on a value in a different column. here is a simple example. if the gdpPercap <20000 I want to change the opacity to 0.5 for instance. I also have a discrete color map that assigns colors based on the decade, for instance 1980-1990 is green , 1990-2000 is red. Within this color map I am looking to change the opacity of the bars. import plotly.express as px data_canada = px.data.gapminder().query("country == 'Canada'") fig = px.bar(data_canada, x='year', y='pop') fig.show() A: Add a column with the newly standardized Gross Domestic Product values. (from 0 to 1) Specify a continuous colormap with that column as the color target. Specify the transparency of that color specification in RGBA. The threshold value of 0.4 is set appropriately, so change it to your threshold value. import plotly.express as px data_canada = px.data.gapminder().query("country == 'Canada'") data_canada_ = data_canada.copy() s = data_canada_['gdpPercap'] data_canada_['gdpPercap_min_max'] = (s - s.min()) / (s.max() - s.min()) fig = px.bar(data_canada_, x='year', y='pop', color='gdpPercap_min_max', color_continuous_scale=[(0, "rgba(65,105,225,1.0)"), (0.4, "rgba(65,105,225,0.5)"), (1.0, "rgba(65,105,225,0.5)")]) fig.show()
changing opacity based on different column using plotly
I would like to change the opacity of the bar based on a value in a different column. here is a simple example. if the gdpPercap <20000 I want to change the opacity to 0.5 for instance. I also have a discrete color map that assigns colors based on the decade, for instance 1980-1990 is green , 1990-2000 is red. Within this color map I am looking to change the opacity of the bars. import plotly.express as px data_canada = px.data.gapminder().query("country == 'Canada'") fig = px.bar(data_canada, x='year', y='pop') fig.show()
[ "Add a column with the newly standardized Gross Domestic Product values. (from 0 to 1) Specify a continuous colormap with that column as the color target. Specify the transparency of that color specification in RGBA. The threshold value of 0.4 is set appropriately, so change it to your threshold value.\nimport plotly.express as px\n\ndata_canada = px.data.gapminder().query(\"country == 'Canada'\")\n\ndata_canada_ = data_canada.copy()\ns = data_canada_['gdpPercap']\ndata_canada_['gdpPercap_min_max'] = (s - s.min()) / (s.max() - s.min())\n\nfig = px.bar(data_canada_,\n x='year', y='pop',\n color='gdpPercap_min_max',\n color_continuous_scale=[(0, \"rgba(65,105,225,1.0)\"), (0.4, \"rgba(65,105,225,0.5)\"), (1.0, \"rgba(65,105,225,0.5)\")])\n\nfig.show()\n\n\n" ]
[ 0 ]
[]
[]
[ "plotly", "python" ]
stackoverflow_0074475536_plotly_python.txt
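Plotly also accepts a per-point array for a bar trace's marker opacity, so the threshold from the question (gdpPercap < 20000 gets opacity 0.5) can be expressed directly without a colorscale. This sketch assumes the same gapminder data; the opacity array is my own addition, not part of the original answer:

import plotly.express as px

data_canada = px.data.gapminder().query("country == 'Canada'")

fig = px.bar(data_canada, x='year', y='pop')
opacities = [0.5 if gdp < 20000 else 1.0 for gdp in data_canada['gdpPercap']]
fig.update_traces(marker_opacity=opacities)   # marker.opacity can take a list, one value per bar
fig.show()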
Q: Workaround for TypeVar bound on a TypeVar? Is there some way of expressing this Scala code with Python's type hints? trait List[A] { def ::[B >: A](x: B): List[B] } I'm trying to achieve this sort of thing class X: pass class Y(X): pass class Z(X): pass xs = MyList(X(), X()) # inferred as MyList[X] ys = MyList(Y(), Y()) # inferred as MyList[Y] _ = xs.extended_by(X()) # inferred as MyList[X] _ = xs.extended_by(Y()) # inferred as MyList[X] _ = ys.extended_by(X()) # inferred as MyList[X] _ = ys.extended_by(Y()) # inferred as MyList[Y] _ = ys.extended_by(Z()) # inferred as MyList[X] Note that the type MyList is initialised with, and the type it's extended_by, can be anything. MyList is immutable. See the comments for more detail. What I tried from __future__ import annotations from typing import TypeVar, Generic B = TypeVar('B') A = TypeVar('A', bound=B) class MyList(Generic[A]): def __init__(*o: A): ... def extended_by(self, x: B) -> MyList[B]: ... but I get (where the above is in main.py) main.py:5: error: Type variable "main.B" is unbound main.py:5: note: (Hint: Use "Generic[B]" or "Protocol[B]" base class to bind "B" inside a class) main.py:5: note: (Hint: Use "B" in function signature to bind "B" inside a function) Afaict, it's not allowed to bound on a TypeVar. Is there a workaround in this scenario? A: You're trying to specify that B is a supertype of A. But instead of specifying that B should be a supertype of A, it is much easier to state that B is any type, and then the Union A|B is the supertype of A you need. from typing import TypeVar, Generic A = TypeVar("A", covariant=True) B = TypeVar("B") class MyList(Generic[A]): def __init__(*objects: A) -> None: ... def extended_by(self, other: B) -> MyList[A | B]: ... class X: pass class Y(X): pass class Z(X): pass xs = MyList(X(), X()) # inferred as MyList[X] ys = MyList(Y(), Y()) # inferred as MyList[Y] reveal_type(xs.extended_by(X())) # inferred as MyList[X] reveal_type(xs.extended_by(Y())) # inferred as MyList[X] reveal_type(ys.extended_by(X())) # inferred as MyList[X] reveal_type(ys.extended_by(Y())) # inferred as MyList[Y] reveal_type(ys.extended_by(Z())) # inferred as MyList[Y|Z] # MyList[Y|Z] is a subtype of MyList[X] If MyList is immutable, its type variable is most likely covariant. That means that MyList[X] is a subclass of MyList[Y]. I took the liberty to make your generic parameter covariant, which will also make implementing the extended_by method easier. Note: If you have to use Python<3.10, make sure to replace | with Union. Note 2: This is the same approach as, for example list.__add__, see the code in typeshed.
Workaround for TypeVar bound on a TypeVar?
Is there some way of expressing this Scala code with Python's type hints? trait List[A] { def ::[B >: A](x: B): List[B] } I'm trying to achieve this sort of thing class X: pass class Y(X): pass class Z(X): pass xs = MyList(X(), X()) # inferred as MyList[X] ys = MyList(Y(), Y()) # inferred as MyList[Y] _ = xs.extended_by(X()) # inferred as MyList[X] _ = xs.extended_by(Y()) # inferred as MyList[X] _ = ys.extended_by(X()) # inferred as MyList[X] _ = ys.extended_by(Y()) # inferred as MyList[Y] _ = ys.extended_by(Z()) # inferred as MyList[X] Note that the type MyList is initialised with, and the type it's extended_by, can be anything. MyList is immutable. See the comments for more detail. What I tried from __future__ import annotations from typing import TypeVar, Generic B = TypeVar('B') A = TypeVar('A', bound=B) class MyList(Generic[A]): def __init__(*o: A): ... def extended_by(self, x: B) -> MyList[B]: ... but I get (where the above is in main.py) main.py:5: error: Type variable "main.B" is unbound main.py:5: note: (Hint: Use "Generic[B]" or "Protocol[B]" base class to bind "B" inside a class) main.py:5: note: (Hint: Use "B" in function signature to bind "B" inside a function) Afaict, it's not allowed to bound on a TypeVar. Is there a workaround in this scenario?
[ "You're trying to specify that B is a supertype of A. But instead of specifying that B should be a supertype of A, it is much easier to state that B is any type, and then the Union A|B is the supertype of A you need.\nfrom typing import TypeVar, Generic\nA = TypeVar(\"A\", covariant=True)\nB = TypeVar(\"B\")\n\nclass MyList(Generic[A]):\n def __init__(*objects: A) -> None: ...\n def extended_by(self, other: B) -> MyList[A | B]: ...\n\nclass X: pass\nclass Y(X): pass\nclass Z(X): pass\n\n\nxs = MyList(X(), X()) # inferred as MyList[X]\nys = MyList(Y(), Y()) # inferred as MyList[Y]\n\nreveal_type(xs.extended_by(X())) # inferred as MyList[X]\nreveal_type(xs.extended_by(Y())) # inferred as MyList[X]\n\nreveal_type(ys.extended_by(X())) # inferred as MyList[X]\nreveal_type(ys.extended_by(Y())) # inferred as MyList[Y]\nreveal_type(ys.extended_by(Z())) # inferred as MyList[Y|Z]\n# MyList[Y|Z] is a subtype of MyList[X]\n\nIf MyList is immutable, its type variable is most likely covariant. That means that\nMyList[X] is a subclass of MyList[Y]. I took the liberty to make your generic parameter covariant, which will also make implementing the extended_by method easier.\nNote: If you have to use Python<3.10, make sure to replace | with Union.\nNote 2: This is the same approach as, for example list.__add__, see\nthe code in typeshed.\n" ]
[ 0 ]
[ "from __future__ import annotations\n\nfrom typing import (\n TYPE_CHECKING,\n Generic,\n TypeVar,\n)\n\nB = TypeVar('B')\nA = TypeVar('A')\n\n\nclass MyList(Generic[A]):\n def __init__(*o: A):\n ...\n\n def extended_by(self, x: B) -> MyList[B]:\n ...\n\n\nclass Y:\n ...\n\n\nclass X:\n ...\n\n\nys = MyList(Y(), Y())\nxs = ys.extended_by(X())\nif TYPE_CHECKING:\n reveal_locals()\n\nThis produces :\ntest.py:32: note: Revealed local types are:\ntest.py:32: note: xs: test.MyList[test.X*]\ntest.py:32: note: ys: test.MyList[test.Y*]\n\nI didn’t understand the link between A and B. Could you give an example with for instance class Y and X so I can update my answer?\n" ]
[ -2 ]
[ "python", "type_bounds", "type_hinting", "types" ]
stackoverflow_0057590086_python_type_bounds_type_hinting_types.txt
Q: I can't use the command ' python manage.py makemigrations' in django VSC I already did 'python manage.py migrations'. Now i want to create '0001_inital.py' file in migrations with the code 'python manage.py makemigrations'. Firstly this is my models.py; from django.db import models class Room(models.Model): #host = #topic = name = models.CharField(max_Length=200) description = models.Textfield(null=True, blank = True) #participants = updated = models.DateTimeField(auto_now = True) created = models.DateTimeField(auto_now_add = True) def __str__(self): return str(self.name) And here is the some of the errors that when i write 'python manage.py makemigrations'. File "", line 1006, in _find_and_load_unlocked File "", line 688, in _load_unlocked File "", line 883, in exec_module File "", line 241, in _call_with_frames_removed File "C:\Users\c.aktel\OneDrive\Masaüstü\laan\base\models.py", line 5, in class Room(models.Model): File "C:\Users\c.aktel\OneDrive\Masaüstü\laan\base\models.py", line 8, in Room name = models.CharField(max_Length=200) File "C:\Users\c.aktel\OneDrive\Masaüstü\laan\new_env\lib\site-packages\django\db\models\fields_init_.py", line 1121, in init super().init(*args, **kwargs) TypeError: Field.init() got an unexpected keyword argument 'max_Length' A: It should be max_length not max_Length and TextField not Textfield so the correct is: class Room(models.Model): #host = #topic = name = models.CharField(max_length=200) description = models.TextField(null=True, blank = True) #participants = updated = models.DateTimeField(auto_now = True) created = models.DateTimeField(auto_now_add = True) def __str__(self): return f"{self.name}" Also I'd recommend you to use f-strings in __str__() method of model. Then run both the migration commands.
I can't use the command ' python manage.py makemigrations' in django VSC
I already did 'python manage.py migrations'. Now i want to create '0001_inital.py' file in migrations with the code 'python manage.py makemigrations'. Firstly this is my models.py; from django.db import models class Room(models.Model): #host = #topic = name = models.CharField(max_Length=200) description = models.Textfield(null=True, blank = True) #participants = updated = models.DateTimeField(auto_now = True) created = models.DateTimeField(auto_now_add = True) def __str__(self): return str(self.name) And here is the some of the errors that when i write 'python manage.py makemigrations'. File "", line 1006, in _find_and_load_unlocked File "", line 688, in _load_unlocked File "", line 883, in exec_module File "", line 241, in _call_with_frames_removed File "C:\Users\c.aktel\OneDrive\Masaüstü\laan\base\models.py", line 5, in class Room(models.Model): File "C:\Users\c.aktel\OneDrive\Masaüstü\laan\base\models.py", line 8, in Room name = models.CharField(max_Length=200) File "C:\Users\c.aktel\OneDrive\Masaüstü\laan\new_env\lib\site-packages\django\db\models\fields_init_.py", line 1121, in init super().init(*args, **kwargs) TypeError: Field.init() got an unexpected keyword argument 'max_Length'
[ "It should be max_length not max_Length and TextField not Textfield so the correct is:\n\nclass Room(models.Model):\n #host =\n #topic =\n name = models.CharField(max_length=200)\n description = models.TextField(null=True, blank = True)\n #participants = \n updated = models.DateTimeField(auto_now = True)\n created = models.DateTimeField(auto_now_add = True)\n\n\n def __str__(self):\n return f\"{self.name}\"\n\nAlso I'd recommend you to use f-strings in __str__() method of model.\nThen run both the migration commands.\n" ]
[ 5 ]
[]
[]
[ "django", "django_migrations", "django_model_field", "django_models", "python" ]
stackoverflow_0074476353_django_django_migrations_django_model_field_django_models_python.txt
Q: XML and Excel Structures, debugging and etc I'm currently working on this project: https://github.com/lucasmolinari/unlocker-EX. It's an Excel unlocker; it works by editing the XML files inside the workbooks (more information on the GitHub page). The script works fine in workbooks with almost no content inside, but recently I'm testing some bigger workbooks, and when I open the unlocked file, Excel says it's corrupted and I can't find any difference between the original and the unlocked workbook. I'm 100% sure the problem is when the script changes the content in the file; I watched every step of the script and it just stops working when the files are edited. Does someone have more knowledge of how XML files work or of the structure of Excel workbooks? Or some way to verify the differences between the original file and the edited one, to see if it is some formatting problem? I'm really sorry about this question, but I have no idea where to start now; I tried everything I could. I changed to opening the files in UTF-8 format and tried to find any corrupted character in the edited file, but manually it is too hard to find any. A: Using ElementTree library solves the problem
XML and Excel Structures, debugging and etc
I'm currently working on this project: https://github.com/lucasmolinari/unlocker-EX. It's an Excel unlocker; it works by editing the XML files inside the workbooks (more information on the GitHub page). The script works fine in workbooks with almost no content inside, but recently I'm testing some bigger workbooks, and when I open the unlocked file, Excel says it's corrupted and I can't find any difference between the original and the unlocked workbook. I'm 100% sure the problem is when the script changes the content in the file; I watched every step of the script and it just stops working when the files are edited. Does someone have more knowledge of how XML files work or of the structure of Excel workbooks? Or some way to verify the differences between the original file and the edited one, to see if it is some formatting problem? I'm really sorry about this question, but I have no idea where to start now; I tried everything I could. I changed to opening the files in UTF-8 format and tried to find any corrupted character in the edited file, but manually it is too hard to find any.
[ "Using ElementTree library solves the problem\n" ]
[ 0 ]
[]
[]
[ "debugging", "excel", "python", "xml" ]
stackoverflow_0074465153_debugging_excel_python_xml.txt
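Since the answer above only names the library, here is a rough sketch of what "use ElementTree" can look like for this kind of unlocker: parse each worksheet XML inside the .xlsx zip, drop the sheetProtection element, and re-serialize with ElementTree instead of editing the raw text. The file names, the namespace URI and the paths are assumptions based on the usual OOXML layout, not taken from the linked project:

import zipfile
import xml.etree.ElementTree as ET

NS = 'http://schemas.openxmlformats.org/spreadsheetml/2006/main'
ET.register_namespace('', NS)   # keep the default namespace prefix when writing back

def remove_protection(src, dst):
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, 'w', zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename.startswith('xl/worksheets/') and item.filename.endswith('.xml'):
                root = ET.fromstring(data)
                for prot in root.findall(f'{{{NS}}}sheetProtection'):
                    root.remove(prot)      # drop the protection element entirely
                data = ET.tostring(root, encoding='UTF-8', xml_declaration=True)
            zout.writestr(item, data)

remove_protection('locked.xlsx', 'unlocked.xlsx')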
Q: Python plot_data can anyone teach me how to plot a csv A: You can try this also to display plot import matplotlib.pyplot as plt import csv x = [] y = [] with open('data_file.csv','r') as csvfile: plots = csv.reader(csvfile, delimiter = ',') for row in plots: x.append(row[0]) y.append(row[1]) plt.bar(x, y, color = 'g', width = 0.72, label = "recall") plt.xlabel('precision') plt.ylabel('recall') plt.title('Title') plt.legend() plt.show() or import matplotlib.pyplot as plt import csv x = [] y = [] with open('data_file.csv','r') as csvfile: lines = csv.reader(csvfile, delimiter=',') for row in lines: x.append(row[0]) y.append(row[1]) plt.plot(x, y, color = 'g', linestyle = 'dashed', marker = 'o',label = "precision") plt.xticks(rotation = 25) plt.xlabel('precision') plt.ylabel('recall') plt.title('Title', fontsize = 20) plt.grid() plt.legend() plt.show() A: So the thing is you have to call plt.plot() before with the data, and then call plt.show() You could do something like: import csv import matplotlib.pyplot as plt f = open("data_file.csv", "w") w = csv.writer(f) _ = w.writerow(["precision", "recall"]) rows = [[0.013,0.951],[0.376,0.851],[0.441,0.839],[0.570,0.758],[0.635,0.674],[0.721,0.604],[0.837,0.531],[0.860,0.453],[0.962,0.348],[0.982,0.273],[1.0,0.0]] precision = [row[0] for row in rows] recall = [row[1] for row in rows] w.writerows(rows) f.close() plt.plot(precision, recall) plt.show() Above you have your data in rows, and your two variables divided in precision and recall. So now you can call plt.plot(precision, recall) and then plt.show A: Ideally, your code should have thrown a NameError at the last line, saying 'matplotlib' is not defined. Since you have already imported matplotlib.pyplot as plt, use plt as a reference later in the code, instead of calling matplotlib.pyplot again and again. You cannot pass a CSV file directly into the plt.show() function as an argument. I suggest you use Pandas package to create a dataframe out of your CSV file and plot the values like in the below code: import csv import matplotlib.pyplot as plt import pandas as pd f = open("data_file.csv", "w") w = csv.writer(f) _ = w.writerow(["precision", "recall"]) w.writerows([[0.013,0.951], [0.376,0.851], [0.441,0.839], [0.570,0.758], [0.635,0.674], [0.721,0.604], [0.837,0.531], [0.860,0.453], [0.962,0.348], [0.982,0.273], [1.0,0.0]]) f.close() Dataframe = pd.read_csv('data_file.csv') plt.plot(Dataframe.precision, Dataframe.recall) plt.show() You can check more about Pandas documentation here and matplot documentation here
Python plot_data
can anyone teach me how to plot a csv
[ "You can try this also to display plot\nimport matplotlib.pyplot as plt\nimport csv\n\nx = []\ny = []\n\nwith open('data_file.csv','r') as csvfile:\n plots = csv.reader(csvfile, delimiter = ',')\n \n for row in plots:\n x.append(row[0])\n y.append(row[1])\n\nplt.bar(x, y, color = 'g', width = 0.72, label = \"recall\")\nplt.xlabel('precision')\nplt.ylabel('recall')\nplt.title('Title')\nplt.legend()\nplt.show()\n\nor\nimport matplotlib.pyplot as plt\nimport csv\n\nx = []\ny = []\n\nwith open('data_file.csv','r') as csvfile:\n lines = csv.reader(csvfile, delimiter=',')\n for row in lines:\n x.append(row[0])\n y.append(row[1])\n\nplt.plot(x, y, color = 'g', linestyle = 'dashed',\n marker = 'o',label = \"precision\")\n\nplt.xticks(rotation = 25)\nplt.xlabel('precision')\nplt.ylabel('recall')\nplt.title('Title', fontsize = 20)\nplt.grid()\nplt.legend()\nplt.show()\n\n\n", "So the thing is you have to call plt.plot() before with the data, and then call plt.show()\nYou could do something like:\nimport csv\nimport matplotlib.pyplot as plt\n\nf = open(\"data_file.csv\", \"w\")\nw = csv.writer(f)\n_ = w.writerow([\"precision\", \"recall\"])\nrows = [[0.013,0.951],[0.376,0.851],[0.441,0.839],[0.570,0.758],[0.635,0.674],[0.721,0.604],[0.837,0.531],[0.860,0.453],[0.962,0.348],[0.982,0.273],[1.0,0.0]]\nprecision = [row[0] for row in rows]\nrecall = [row[1] for row in rows]\nw.writerows(rows)\nf.close()\nplt.plot(precision, recall)\nplt.show()\n\nAbove you have your data in rows, and your two variables divided in precision and recall. So now you can call plt.plot(precision, recall) and then plt.show\n", "Ideally, your code should have thrown a NameError at the last line, saying 'matplotlib' is not defined. Since you have already imported matplotlib.pyplot as plt, use plt as a reference later in the code, instead of calling matplotlib.pyplot again and again.\nYou cannot pass a CSV file directly into the plt.show() function as an argument.\nI suggest you use Pandas package to create a dataframe out of your CSV file and plot the values like in the below code:\nimport csv\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nf = open(\"data_file.csv\", \"w\")\nw = csv.writer(f)\n_ = w.writerow([\"precision\", \"recall\"])\nw.writerows([[0.013,0.951],\n [0.376,0.851],\n [0.441,0.839],\n [0.570,0.758],\n [0.635,0.674],\n [0.721,0.604],\n [0.837,0.531],\n [0.860,0.453],\n [0.962,0.348],\n [0.982,0.273],\n [1.0,0.0]])\nf.close()\n\nDataframe = pd.read_csv('data_file.csv')\n\nplt.plot(Dataframe.precision, Dataframe.recall)\nplt.show()\n\n\nYou can check more about Pandas documentation here and matplot documentation here\n" ]
[ 1, 1, 0 ]
[]
[]
[ "plot", "python" ]
stackoverflow_0074476005_plot_python.txt
Q: Stockfish for python not working correctly, how to fix this? I'm writing a chess puzzle solver using Stockfish, and I'm using the Python interface to Stockfish described here: https://pypi.org/project/stockfish/ As the author instructs, I installed the Stockfish engine from the terminal on my Mac and ran the code below. It throws the error "AttributeError: 'Stockfish' object has no attribute '_stockfish'": from stockfish import Stockfish stockfish = Stockfish() stockfish.set_position(['e2e4', 'e7e6']) How do I fix the issue? The code the author wrote is this: from stockfish import Stockfish stockfish = Stockfish(path="/Users/zhelyabuzhsky/Work/stockfish/stockfish-9-64") but how do I find the path to a program installed on a Mac? A: The stockfish package is only a python interface for stockfish, you need to either compile it from the source, or download an executable. Once you have the executable, simply provide a path to the Stockfish constructor as in the example. from stockfish import Stockfish stockfish = Stockfish(path="/Users/zhelyabuzhsky/Work/stockfish/stockfish-15-64") stockfish.set_position(['e2e4', 'e7e6'])
Stockfish for python not working correctly, how to fix this?
I'm writing a chess puzzle solver using Stockfish, and I'm using the Python interface to Stockfish described here: https://pypi.org/project/stockfish/ As the author instructs, I installed the Stockfish engine from the terminal on my Mac and ran the code below. It throws the error "AttributeError: 'Stockfish' object has no attribute '_stockfish'": from stockfish import Stockfish stockfish = Stockfish() stockfish.set_position(['e2e4', 'e7e6']) How do I fix the issue? The code the author wrote is this: from stockfish import Stockfish stockfish = Stockfish(path="/Users/zhelyabuzhsky/Work/stockfish/stockfish-9-64") but how do I find the path to a program installed on a Mac?
[ "The stockfish package is only a python interface for stockfish, you need to either compile it from the source, or download an executable.\nOnce you have the executable, simply provide a path to the Stockfish constructor as in the example.\nfrom stockfish import Stockfish\nstockfish = Stockfish(path=\"/Users/zhelyabuzhsky/Work/stockfish/stockfish-15-64\")\n\nstockfish.set_position(['e2e4', 'e7e6'])\n\n" ]
[ 1 ]
[]
[]
[ "chess", "python", "stockfish" ]
stackoverflow_0073559878_chess_python_stockfish.txt
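To make the accepted approach concrete: on macOS the engine binary can come from Homebrew (brew install stockfish), and running `which stockfish` prints the path to pass to the constructor. The paths below are assumptions about a typical install, not part of the original answer:

from stockfish import Stockfish

# path printed by `which stockfish` after installing the engine binary itself
stockfish = Stockfish(path="/usr/local/bin/stockfish")

stockfish.set_position(['e2e4', 'e7e6'])
print(stockfish.get_best_move())       # e.g. 'd2d4'
print(stockfish.get_fen_position())    # FEN string for the current position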
Q: How to calculate tax in python? I need to write a function compute_tax(money_list) that calculates the total tax for a given list of financial amounts. The rich (200 money and more) pay a tax of 20. Those who are not rich, but have at least 100 money, pay a tax of 10. The others do not pay the tax. I have prepared the basis of the function, which needs to be fixed and finished. def compute_tax(money_list): tax = 0 for money in money_list: if money >= 200: tax += 20 elif money >= 100: tax += 10 else: tax += 0 money += tax return tax print(compute_tax([50, 20, 80])) print(compute_tax([50, 120, 80, 480])) print(compute_tax([250, 120, 170, 480, 30, 1000])) print(compute_tax([250, 120, 70, 4080, 30, 120, 600, 78])) Needed output have to be: 0 30 80 80 A: You have two issues in your code. Firstly you just check for money == 100 in your first if Statement and secondly you assign tax = 0 in your else statement. To correct: def compute_tax(money_list): tax = 0 for money in money_list: if money >= 100 and money < 200: tax += 10 elif money >= 200: tax += 20 else: tax += 0 money -= tax return tax print(compute_tax([50, 120, 80, 480])) print(compute_tax([250, 120, 170, 480, 30, 1000])) print(compute_tax([50, 20, 80])) Simplier u can just check for money <100, money >= 200 and else as matszwecja pointed out.
How to calculate tax in python?
I need to write a function compute_tax(money_list) that calculates the total tax for a given list of financial amounts. The rich (200 money and more) pay a tax of 20. Those who are not rich, but have at least 100 money, pay a tax of 10. The others do not pay the tax. I have prepared the basis of the function, which needs to be fixed and finished. def compute_tax(money_list): tax = 0 for money in money_list: if money >= 200: tax += 20 elif money >= 100: tax += 10 else: tax += 0 money += tax return tax print(compute_tax([50, 20, 80])) print(compute_tax([50, 120, 80, 480])) print(compute_tax([250, 120, 170, 480, 30, 1000])) print(compute_tax([250, 120, 70, 4080, 30, 120, 600, 78])) Needed output have to be: 0 30 80 80
[ "You have two issues in your code. Firstly you just check for money == 100 in your first if Statement and secondly you assign tax = 0 in your else statement. To correct:\ndef compute_tax(money_list):\n tax = 0\n for money in money_list:\n if money >= 100 and money < 200:\n tax += 10\n elif money >= 200:\n tax += 20\n else:\n tax += 0\n money -= tax\n return tax\n\n\nprint(compute_tax([50, 120, 80, 480]))\nprint(compute_tax([250, 120, 170, 480, 30, 1000]))\nprint(compute_tax([50, 20, 80]))\n\nSimplier u can just check for money <100, money >= 200 and else as matszwecja\npointed out.\n" ]
[ 2 ]
[]
[]
[ "function", "python", "python_3.x", "tax" ]
stackoverflow_0074476237_function_python_python_3.x_tax.txt
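The same banding logic can also be written as a single generator expression, which drops the stray money -= tax line entirely; a small sketch with the same expected outputs as the question:

def compute_tax(money_list):
    return sum(20 if m >= 200 else 10 if m >= 100 else 0 for m in money_list)

print(compute_tax([50, 20, 80]))                            # 0
print(compute_tax([50, 120, 80, 480]))                      # 30
print(compute_tax([250, 120, 170, 480, 30, 1000]))          # 80
print(compute_tax([250, 120, 70, 4080, 30, 120, 600, 78]))  # 80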
Q: Scraping data from CME I am trying to webscrape data from CME exchange: https://www.cmegroup.com/CmeWS/mvc/Settlements/Futures/Settlements/425/FUT?tradeDate=11/05/2021 I have the following code snippet: import requests as r user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36" header = {'User-Agent': user_agent} link = 'https://www.cmegroup.com/CmeWS/mvc/Settlements/Futures/Settlements/425/FUT?tradeDate=11/05/2021' page = r.get(link,headers=header) raw_json = json.loads(page.text) While it works perfectly well on a local computer, it totally hangs on remote hosting servers (Digital Ocean, Hetzner). I have also tried to curl url but it gives a timeout error without additional details. Do I need to use selenium for this? I wonder what can be different between scraping data from a local machine and the hosting server. I don't know how to resolve this. Hope you can give me some clues. A: Apparently, some hosting providers are blocked by CME. You should look for one which is not blocked and you can use it as a proxy server. That's the solution that worked for me. However, now I am thinking that this could be related to IPv6 settings on the server. Try to disable IPv6 connection and it will automatically fall back into IPv4. on Ubuntu sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1 sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1 sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1 A: Just found the solution for this problem. Reason for this behaviour its due to the protocol HTTP/2. A way to test this its upgrading curl, since 7.47.0, the curl tool enables HTTP/2 by default for HTTPS connections. Hope it helps! A: You can get json response from URL itself not requried page.text to transform in to json Just use this directly may be it could work data=page.json()
Scraping data from CME
I am trying to webscrape data from CME exchange: https://www.cmegroup.com/CmeWS/mvc/Settlements/Futures/Settlements/425/FUT?tradeDate=11/05/2021 I have the following code snippet: import requests as r user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36" header = {'User-Agent': user_agent} link = 'https://www.cmegroup.com/CmeWS/mvc/Settlements/Futures/Settlements/425/FUT?tradeDate=11/05/2021' page = r.get(link,headers=header) raw_json = json.loads(page.text) While it works perfectly well on a local computer, it totally hangs on remote hosting servers (Digital Ocean, Hetzner). I have also tried to curl url but it gives a timeout error without additional details. Do I need to use selenium for this? I wonder what can be different between scraping data from a local machine and the hosting server. I don't know how to resolve this. Hope you can give me some clues.
[ "Apparently, some hosting providers are blocked by CME. You should look for one which is not blocked and you can use it as a proxy server. That's the solution that worked for me. However, now I am thinking that this could be related to IPv6 settings on the server. Try to disable IPv6 connection and it will automatically fall back into IPv4.\non Ubuntu\nsudo sysctl -w net.ipv6.conf.all.disable_ipv6=1\nsudo sysctl -w net.ipv6.conf.default.disable_ipv6=1\nsudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1\n\n", "Just found the solution for this problem.\nReason for this behaviour its due to the protocol HTTP/2.\nA way to test this its upgrading curl, since 7.47.0, the curl tool enables HTTP/2 by default for HTTPS connections.\nHope it helps!\n", "You can get json response from URL itself not requried page.text to transform in to json\nJust use this directly may be it could work\ndata=page.json()\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python", "python_requests", "web_scraping" ]
stackoverflow_0069870683_python_python_requests_web_scraping.txt
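One way to act on the HTTP/2 observation from Python (requests itself only speaks HTTP/1.1) is the httpx client with its optional HTTP/2 support. This is a sketch assuming `pip install httpx[http2]` and that the endpoint is reachable from the machine at all — it will not help if the hosting provider's IP range is blocked:

import httpx

url = 'https://www.cmegroup.com/CmeWS/mvc/Settlements/Futures/Settlements/425/FUT?tradeDate=11/05/2021'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36'}

with httpx.Client(http2=True, headers=headers, timeout=30) as client:
    resp = client.get(url)
    print(resp.http_version)   # 'HTTP/2' when the negotiation succeeds
    data = resp.json()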
Q: Tkinter scrollbar removes textbox and doesnt sticky to the right My textbox spans over 5 rows and 4 columns and I want it to have a scrollbar, so I added one but it removes the textbox and doesn't stick. My textbox looks like this: and its code like this # Textbox self.textbox = Text(self) self.textbox.grid(row=10, column=1, rowspan=5, columnspan=4, padx=10, pady=10) self.vsb = Scrollbar(self.textbox, orient='vertical', command=self.textbox.yview) self.vsb.grid(row=10, column=4, rowspan=5, sticky='ns') self.textbox.configure(yscrollcommand=self.vsb.set) after running it my textbox just vanishes I have no clue what caused this, I usually have no problems with scrollbars. A: Maybe you'll have better luck with the ScrolledText widget. See here for docs from tkinter.scrolledtext import ScrolledText self.textbox = ScrolledText(self)
Tkinter scrollbar removes textbox and doesnt sticky to the right
My textbox spans over 5 rows and 4 columns and I want it to have a scrollbar, so I added one but it removes the textbox and doesn't stick. My textbox looks like this: and its code like this # Textbox self.textbox = Text(self) self.textbox.grid(row=10, column=1, rowspan=5, columnspan=4, padx=10, pady=10) self.vsb = Scrollbar(self.textbox, orient='vertical', command=self.textbox.yview) self.vsb.grid(row=10, column=4, rowspan=5, sticky='ns') self.textbox.configure(yscrollcommand=self.vsb.set) after running it my textbox just vanishes I have no clue what caused this, I usually have no problems with scrollbars.
[ "Maybe you'll have better luck with the ScrolledText widget. See here for docs\nfrom tkinter.scrolledtext import ScrolledText\n\nself.textbox = ScrolledText(self)\n\n" ]
[ 1 ]
[]
[]
[ "python", "scrollbar", "tkinter" ]
stackoverflow_0074476452_python_scrollbar_tkinter.txt
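The disappearing widget in the question above likely comes from making the scrollbar a child of the Text widget and gridding it inside it. The usual pattern keeps both widgets as siblings of the same parent and grids the scrollbar in the next column; a minimal sketch of that layout (row and column numbers are illustrative):

import tkinter as tk

root = tk.Tk()

textbox = tk.Text(root)
textbox.grid(row=10, column=1, rowspan=5, columnspan=4, padx=10, pady=10, sticky='nsew')

vsb = tk.Scrollbar(root, orient='vertical', command=textbox.yview)   # sibling, same parent
vsb.grid(row=10, column=5, rowspan=5, sticky='ns')

textbox.configure(yscrollcommand=vsb.set)

root.mainloop()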
Q: flake8 not picking up config file I have my flake8 config file in ~/.config/flake8 [flake8] max-line-length = 100 However when I run flake8 the config file is not picked up. I know that because i still get warnings over lines longer than 79 char. I'm on redhat, but the same happens on mac. I use pyenv. Global is 2.7.6 (not even sure this is relevant) A: For anyone running into this more recently: I turns out flake8 4.x no longer supports loading .config/flake8, and seems to have no alternative. From https://flake8.pycqa.org/en/latest/internal/option_handling.html#configuration-file-management : In 4.0.0 we have once again changed how this works and we removed support for user-level config files. As a workaround, you could try passing --append-config ~/.config/flake8 (possibly in a bash alias). Alternatively, for code that lives in your homedir, you could create a ~/.flake8 config file, that will be picked up for any project inside your homedir that does not have its own flake8 config. This works because flake8 looks in the current directory (or maybe the directory with the source file) and then looks upwards through the filesystem until it finds a config file (setup.cfg, tox.ini, or .flake8). Note that documentation is a bit vague about this (suggesting it would not stop at the first config file it finds, but at least flake8 4.0.1 behaves like that). A: I had a silly mistake, leaving out the [flake8] tag at the beginning of my configuration file I just spent 2 hours debugging this problem. Here was my original .flake8 file: ignore= # line too long E501, #line break after binary operator W504 This was the fix: [flake8] ignore= # line too long E501, #line break after binary operator W504 Obviously this wasn't OP's problem: they have the tag in there. But if I can save one person from my stupidity, I will be happy. Frankly I was almost too embarrassed to post this because it is an "Is your computer plugged in?" level error, but oh well. A: Sharing my mistake in case this can help someone: I had a similar issue which was simply due to a bad file name: .flake8.txt instead of .flake8. Correcting resolves the issue. A: This was caused by a regression in pep8 1.6.1 and is resolved in the just released 1.6.2 version. A: If you want to use Flake8 with VS Code, then do the following: Install the VS Code extension called flake8 Read the documentation of the extension! It tells you to use the setting flake8.args Add your settings to settings.json. Example: "flake8.args": [ "--max-line-length=100", "--ignore=E501,W503,W504,E203", "--max-complexity=10", ],
flake8 not picking up config file
I have my flake8 config file in ~/.config/flake8 [flake8] max-line-length = 100 However when I run flake8 the config file is not picked up. I know that because i still get warnings over lines longer than 79 char. I'm on redhat, but the same happens on mac. I use pyenv. Global is 2.7.6 (not even sure this is relevant)
[ "For anyone running into this more recently: I turns out flake8 4.x no longer supports loading .config/flake8, and seems to have no alternative.\nFrom https://flake8.pycqa.org/en/latest/internal/option_handling.html#configuration-file-management :\n\nIn 4.0.0 we have once again changed how this works and we removed support for user-level config files.\n\nAs a workaround, you could try passing --append-config ~/.config/flake8 (possibly in a bash alias).\nAlternatively, for code that lives in your homedir, you could create a ~/.flake8 config file, that will be picked up for any project inside your homedir that does not have its own flake8 config. This works because flake8 looks in the current directory (or maybe the directory with the source file) and then looks upwards through the filesystem until it finds a config file (setup.cfg, tox.ini, or .flake8). Note that documentation is a bit vague about this (suggesting it would not stop at the first config file it finds, but at least flake8 4.0.1 behaves like that).\n", "I had a silly mistake, leaving out the [flake8] tag at the beginning of my configuration file I just spent 2 hours debugging this problem.\nHere was my original .flake8 file:\nignore=\n # line too long\n E501,\n #line break after binary operator\n W504\n\nThis was the fix:\n[flake8]\nignore=\n # line too long\n E501,\n #line break after binary operator\n W504\n\nObviously this wasn't OP's problem: they have the tag in there. But if I can save one person from my stupidity, I will be happy. Frankly I was almost too embarrassed to post this because it is an \"Is your computer plugged in?\" level error, but oh well.\n", "Sharing my mistake in case this can help someone:\nI had a similar issue which was simply due to a bad file name: .flake8.txt instead of .flake8.\nCorrecting resolves the issue.\n", "This was caused by a regression in pep8 1.6.1 and is resolved in the just released 1.6.2 version.\n", "If you want to use Flake8 with VS Code, then do the following:\n\nInstall the VS Code extension called flake8\nRead the documentation of the extension! It tells you to use the setting flake8.args\nAdd your settings to settings.json. Example:\n\n\"flake8.args\": [\n \"--max-line-length=100\",\n \"--ignore=E501,W503,W504,E203\",\n \"--max-complexity=10\",\n ],\n\n" ]
[ 7, 5, 2, 1, 0 ]
[]
[]
[ "flake8", "python", "python_2.7" ]
stackoverflow_0028436382_flake8_python_python_2.7.txt
Q: Merge lists in columns in pandas dataframe I've got a dataframe with lists in two columns. It looks like this: column1 column2 column3 0 text [cat1,cat2,cat3] [1,2,3] 1 text2 [cat2,cat3,cat1] [4,5,6] The values in column3 belong to the categories in column2. How can I get a dataframe that looks like this? column1 cat1 cat2 cat3 0 text 1 2 3 1 text2 6 4 5 Thank you for your help! A: You could use explode to break the values in your lists into separate rows and use pivot_table: df.explode( ['column2','column3'] ).pivot_table(index='column1',columns='column2',values='column3',aggfunc='first').reset_index() prints: index column1 cat1 cat2 cat3 0 text 1 2 3 1 text2 6 4 5
Merge lists in columns in pandas dataframe
I've got a dataframe with lists in two columns. It looks like this: column1 column2 column3 0 text [cat1,cat2,cat3] [1,2,3] 1 text2 [cat2,cat3,cat1] [4,5,6] The values in column3 belong to the categories in column2. How can I get a dataframe that looks like this? column1 cat1 cat2 cat3 0 text 1 2 3 1 text2 6 4 5 Thank you for your help!
[ "You could use explode to break the values in your lists into separate rows and use pivot_table:\ndf.explode(\n ['column2','column3']\n ).pivot_table(index='column1',columns='column2',values='column3',aggfunc='first').reset_index()\n\nprints:\nindex column1 cat1 cat2 cat3\n0 text 1 2 3\n1 text2 6 4 5\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074476147_dataframe_python.txt
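Note that exploding several columns at once needs pandas 1.3 or newer; on older versions the same reshaping can be done row by row with dict(zip(...)), roughly like this (column names as in the question):

import pandas as pd

df = pd.DataFrame({'column1': ['text', 'text2'],
                   'column2': [['cat1', 'cat2', 'cat3'], ['cat2', 'cat3', 'cat1']],
                   'column3': [[1, 2, 3], [4, 5, 6]]})

wide = pd.DataFrame([dict(zip(c2, c3)) for c2, c3 in zip(df['column2'], df['column3'])])
out = pd.concat([df[['column1']], wide], axis=1)
print(out)
#  column1  cat1  cat2  cat3
# 0    text     1     2     3
# 1   text2     6     4     5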
Q: [Python]: Check that no *args. is passed Say that I have a function with this signature foo(*args,a:int=0, b:int=1). How to check if no *args is passed? I am trying def foo(*args,a:int=0, b:int=1): if args is None: print("No args passed") If I call it with foo(), but I don't get anything printed on screen. A: In conclusion: Use not args or args == () def foo(*args, a:int=0, b:int=1): if not args: print("No args passed") foo() def foo(*args, a:int=0, b:int=1): if args == (): print("No args passed") foo()
[Python]: Check that no *args. is passed
Say that I have a function with this signature foo(*args,a:int=0, b:int=1). How to check if no *args is passed? I am trying def foo(*args,a:int=0, b:int=1): if args is None: print("No args passed") If I call it with foo(), but I don't get anything printed on screen.
[ "In conclusion:\nUse not args or args == ()\ndef foo(*args, a:int=0, b:int=1):\n if not args:\n print(\"No args passed\")\nfoo()\n\ndef foo(*args, a:int=0, b:int=1):\n if args == ():\n print(\"No args passed\")\nfoo()\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074476441_python.txt
Q: Python Open and Save Most Recent Files in Different Subfolders with Win32com I have a main folder that has 37subfolders, each subfolder contains multiple files, I want to open the most recent file in each subfolder, then save and close using win32. My code works just fine but its opening and saving only one file in one subfolder, I need the code to open, save and close the most recent file in each of 37subfolders rather than just one. See my code below: import os from pathlib import Path import glob from win32com.client import Dispatch main = r'C:\Users\me\Documents\themainfolder' for path, subdirs, files in os.walk(main): for folders in subdirs: latest_file = max(glob.glob(f"{os.path.join(path, folders)}/*.xlsm"),key=os.path.getmtime)) xlApp = Dispatch("Excel.Application") xlApp.Visible = False xlBook = xlApp.Workbooks.Open(latest_file) xlBook.Save() xlBook.Close() PS: The reason I am doing this is because files were saved with openpyxl but R studio is not able to read them properly(it reads them but reads them wrong) I don't know why but I found that just manually opening the files and saving them again fixes the problem however, I cannot manually reopen and save each of the files, so I did a test and found that using win32 to reopen and save the files also fixes the problem. A: In case it helps anybody out there here is the correct code that works to opeb abd close most recent files in each subfolders of a directory from pathlib import Path import glob from win32com.client import Dispatch xlApp = Dispatch("Excel.Application") #call dispatch just once, dispatching multiple times causes issues(I think) xlApp.Visible = False #this will prevent the file from visibly opening main = r'C:\Users\me\Documents\themainfolder' for path, subdirs, files in os.walk(main): for folders in subdirs: latest_file = max(glob.glob(f"{os.path.join(path, folders)}/*.xlsm"),key=os.path.getmtime) print(latest_file) #just to double check that it got the latest files xlBook = xlApp.Workbooks.Open(latest_file) xlBook.Save() xlBook.Close()
Python Open and Save Most Recent Files in Different Subfolders with Win32com
I have a main folder that has 37subfolders, each subfolder contains multiple files, I want to open the most recent file in each subfolder, then save and close using win32. My code works just fine but its opening and saving only one file in one subfolder, I need the code to open, save and close the most recent file in each of 37subfolders rather than just one. See my code below: import os from pathlib import Path import glob from win32com.client import Dispatch main = r'C:\Users\me\Documents\themainfolder' for path, subdirs, files in os.walk(main): for folders in subdirs: latest_file = max(glob.glob(f"{os.path.join(path, folders)}/*.xlsm"),key=os.path.getmtime)) xlApp = Dispatch("Excel.Application") xlApp.Visible = False xlBook = xlApp.Workbooks.Open(latest_file) xlBook.Save() xlBook.Close() PS: The reason I am doing this is because files were saved with openpyxl but R studio is not able to read them properly(it reads them but reads them wrong) I don't know why but I found that just manually opening the files and saving them again fixes the problem however, I cannot manually reopen and save each of the files, so I did a test and found that using win32 to reopen and save the files also fixes the problem.
[ "In case it helps anybody out there here is the correct code that works to opeb abd close most recent files in each subfolders of a directory\nfrom pathlib import Path\nimport glob\nfrom win32com.client import Dispatch\n\n\nxlApp = Dispatch(\"Excel.Application\") #call dispatch just once, dispatching multiple times causes issues(I think)\nxlApp.Visible = False #this will prevent the file from visibly opening\n\nmain = r'C:\\Users\\me\\Documents\\themainfolder'\n\nfor path, subdirs, files in os.walk(main):\n for folders in subdirs:\n latest_file = max(glob.glob(f\"{os.path.join(path, folders)}/*.xlsm\"),key=os.path.getmtime)\n print(latest_file) #just to double check that it got the latest files\n xlBook = xlApp.Workbooks.Open(latest_file)\n xlBook.Save()\n xlBook.Close()\n\n" ]
[ 0 ]
[]
[]
[ "python", "pywin32", "win32com" ]
stackoverflow_0074467093_python_pywin32_win32com.txt
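One thing the script above never does is close the Excel process it started; without a Quit() call an EXCEL.EXE instance stays behind after the loop. A hedged addition — same Dispatch object, wrapped so the application is closed even if a workbook fails to open, and with a guard for subfolders that contain no .xlsm files:

import os
import glob
from win32com.client import Dispatch

main = r'C:\Users\me\Documents\themainfolder'

xlApp = Dispatch("Excel.Application")
xlApp.Visible = False
xlApp.DisplayAlerts = False          # avoid blocking dialog boxes while saving
try:
    for path, subdirs, files in os.walk(main):
        for folder in subdirs:
            candidates = glob.glob(os.path.join(path, folder, "*.xlsm"))
            if not candidates:
                continue             # skip subfolders without .xlsm files
            latest_file = max(candidates, key=os.path.getmtime)
            xlBook = xlApp.Workbooks.Open(latest_file)
            xlBook.Save()
            xlBook.Close()
finally:
    xlApp.Quit()                     # release the background EXCEL.EXE process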
Q: Index to closest coordinate I have this function A=[(1,2,3),(2,3,4)] B=[(2,4,3),(1,8,1),(2,3,5),(1,5,3)] def closestNew(A,B): C = {} for bp in B: closestDist = -1 for ap in A: dist = sum(((bp[0]-ap[0])**2, (bp[1]-ap[1])**2, (bp[2]-ap[2])**2)) if(closestDist > dist or closestDist == -1): C[bp] = ap closestDist = dist return C That will return the closest coordinate between the two lists. Output: {(1, 2, 3): (2, 4, 3), (2, 3, 4): (2, 3, 5)} However, I want the index of array B (the points that matched with array A (check output)) as well in a seperate list, any ideas? Return idx=[0,2] A: A=[(1,2,3),(2,3,4)] B=[(2,4,3),(1,8,1),(2,3,5),(1,5,3)] C={(1, 2, 3): (2, 4, 3), (2, 3, 4): (2, 3, 5)} C is a dictionary where it values correspond to points on B. idx=[] # an empty list for x in C.values(): idx.append(B.index(x)) # index function to find the index of values in B print(idx) #[0, 2] A: If you want to calcule the closest point to A, is better to have A as a outer loop and B as inside loop, in that way you can iterate for every A through all B's. Also you can use enumerate to know what index you are in the loop. a = [(1,2,3),(2,3,4)] b =[(2,4,3),(1,8,1),(2,3,5),(1,5,3)] # store reference for the min index-point index = [] C = {} for indexA, ap in enumerate(a): # Assume the max distance closestDist = 1e9 for indexB,bp in enumerate(b): dist = sum(((bp[0]-ap[0])**2, (bp[1]-ap[1])**2, (bp[2]-ap[2])**2)) if(dist < closestDist): C[ap] = bp closestDist = dist # Initialize the list if not have value for the i-th of A if indexA + 1 > len(index): index.append(indexB) else: index[indexA] = indexB print(index) return C
Index to closest coordinate
I have this function A=[(1,2,3),(2,3,4)] B=[(2,4,3),(1,8,1),(2,3,5),(1,5,3)] def closestNew(A,B): C = {} for bp in B: closestDist = -1 for ap in A: dist = sum(((bp[0]-ap[0])**2, (bp[1]-ap[1])**2, (bp[2]-ap[2])**2)) if(closestDist > dist or closestDist == -1): C[bp] = ap closestDist = dist return C That will return the closest coordinate between the two lists. Output: {(1, 2, 3): (2, 4, 3), (2, 3, 4): (2, 3, 5)} However, I want the index of array B (the points that matched with array A (check output)) as well in a seperate list, any ideas? Return idx=[0,2]
[ "A=[(1,2,3),(2,3,4)]\nB=[(2,4,3),(1,8,1),(2,3,5),(1,5,3)]\nC={(1, 2, 3): (2, 4, 3), (2, 3, 4): (2, 3, 5)}\n\nC is a dictionary where it values correspond to points on B.\nidx=[] # an empty list\nfor x in C.values():\n idx.append(B.index(x)) # index function to find the index of values in B\n\nprint(idx)\n#[0, 2]\n\n", "If you want to calcule the closest point to A, is better to have A as a outer loop and B as inside loop, in that way you can iterate for every A through all B's. Also you can use enumerate to know what index you are in the loop.\n\n a = [(1,2,3),(2,3,4)]\n b =[(2,4,3),(1,8,1),(2,3,5),(1,5,3)]\n # store reference for the min index-point\n index = []\n C = {}\n for indexA, ap in enumerate(a):\n # Assume the max distance\n closestDist = 1e9\n for indexB,bp in enumerate(b):\n dist = sum(((bp[0]-ap[0])**2, (bp[1]-ap[1])**2, (bp[2]-ap[2])**2))\n if(dist < closestDist):\n C[ap] = bp\n closestDist = dist\n # Initialize the list if not have value for the i-th of A\n if indexA + 1 > len(index):\n index.append(indexB)\n else:\n index[indexA] = indexB\n print(index)\n return C\n\n" ]
[ 0, 0 ]
[]
[]
[ "distance", "indexing", "list", "python", "tuples" ]
stackoverflow_0074475912_distance_indexing_list_python_tuples.txt
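The pairing plus the index list asked for above can also be produced in one pass with min() over enumerate(B), using the squared distance as the key; a compact sketch for the same A and B:

A = [(1, 2, 3), (2, 3, 4)]
B = [(2, 4, 3), (1, 8, 1), (2, 3, 5), (1, 5, 3)]

def sq_dist(p, q):
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

C = {}
idx = []
for ap in A:
    j, bp = min(enumerate(B), key=lambda ib: sq_dist(ap, ib[1]))
    C[ap] = bp
    idx.append(j)

print(C)    # {(1, 2, 3): (2, 4, 3), (2, 3, 4): (2, 3, 5)}
print(idx)  # [0, 2]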
Q: highlight.js not working on django+dash website I have a django website where I'd like to display blocks of code w/ syntax highlighting. I've installed highlight.js and per their instructions am injecting style and js into html, in this case in base.html: ... <link rel="stylesheet" href="{% static 'highlight/styles/default.min.css' %}"> <script src="{% static 'highlight/highlight.min.js' %}"></script> <script>hljs.highlightAll();</script> I then add code to some view using dash html components: ... html.Div([html.H3(title), html.Pre(html.Code(code, className=f'language-{lang}'))]) The code isnt't syntax highlighted. Not sure how to troubleshoot this. Edit: If hardcode a <pre><code>...</pre><code> element into an html template that will get highlight.js applied to it, where inspecting the element shows the various transformations to each word in the code block. However if the HTML is generated by dash, such as in the above, it is just a plain <pre><code>...</pre><code> block. How do I allow highlight.js to apply to dash generated HTML? A: I have encountered this problem myself trying to implement highlight.js for my dash app. I have found a nice alternative, built directly for Dash: The DMC Code and Prism components Prism component for Syntax highlighting https://www.dash-mantine-components.com/components/prism Code component that for inline code: https://www.dash-mantine-components.com/components/code
highlight.js not working on django+dash website
I have a django website where I'd like to display blocks of code w/ syntax highlighting. I've installed highlight.js and per their instructions am injecting style and js into html, in this case in base.html: ... <link rel="stylesheet" href="{% static 'highlight/styles/default.min.css' %}"> <script src="{% static 'highlight/highlight.min.js' %}"></script> <script>hljs.highlightAll();</script> I then add code to some view using dash html components: ... html.Div([html.H3(title), html.Pre(html.Code(code, className=f'language-{lang}'))]) The code isnt't syntax highlighted. Not sure how to troubleshoot this. Edit: If hardcode a <pre><code>...</pre><code> element into an html template that will get highlight.js applied to it, where inspecting the element shows the various transformations to each word in the code block. However if the HTML is generated by dash, such as in the above, it is just a plain <pre><code>...</pre><code> block. How do I allow highlight.js to apply to dash generated HTML?
[ "I have encountered this problem myself trying to implement highlight.js for my dash app. I have found a nice alternative, built directly for Dash:\nThe DMC Code and Prism components\n\nPrism component for Syntax highlighting\nhttps://www.dash-mantine-components.com/components/prism\n\nCode\ncomponent that for inline code:\nhttps://www.dash-mantine-components.com/components/code\n\n\n" ]
[ 1 ]
[]
[]
[ "css", "django", "html", "javascript", "python" ]
stackoverflow_0072297872_css_django_html_javascript_python.txt
Q: Auto mount usb drive to raspberry pi without boot I have a Raspberry Pi 3B. It's running Raspbian GNU/Linux 9 (stretch). I saw some tutorials about mounting a usb drive on it, but mostly there are 2 ways: -mount the drive manually, -mount the drive automatically at boot and I'm looking to mount the usb drive automatically at runtime (without a reboot) on a specific path. I assume there is no option to do that in the Linux configuration, but maybe there is an option to do it with a python script? My drive has an exfat file system (there is no other drive with that file system), so it should be easy to manage the uuid and mount it. I think it should look like this: A background process that checks once in a while whether an 'exfat' drive is connected If yes, get its UUID Mount that drive with that UUID on a specific path* *that path should also exist when the usb drive isn't connected Can I do it like that? Or maybe there is already a solution for this? A: usbmount is a nifty package that adds udev hooks for auto mounting/unmounting. Simply install it with: sudo apt install usbmount There appears to be an issue that stops it working properly, and in a nutshell the solution is as follows: Edit the following file in an editor: sudo nano /lib/systemd/system/systemd-udevd.service Look for the line with the contents: PrivateMounts=yes Change the yes in the line to no, like so: PrivateMounts=no Save the file and reload the udev daemon: sudo systemctl daemon-reload sudo systemctl restart systemd-udevd (or just reboot) Your USB devices should now auto mount at /media/usb0, /media/usb1 and so on.
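If you still prefer the python-script approach described in the question, a rough polling sketch could look like the one below. It assumes exfat support is installed, that the script runs as root, and that the mount point already exists; the mount point path is a placeholder.
import subprocess
import time

MOUNT_POINT = "/mnt/usbdrive"   # must already exist

def find_exfat_uuid():
    """Return the UUID of the first exfat partition found, or None."""
    out = subprocess.check_output(["lsblk", "-rno", "FSTYPE,UUID"],
                                  universal_newlines=True)
    for line in out.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "exfat":
            return parts[1]
    return None

while True:
    uuid = find_exfat_uuid()
    # mount by UUID only if nothing is mounted at the target path yet
    if uuid and subprocess.call(["mountpoint", "-q", MOUNT_POINT]) != 0:
        subprocess.call(["mount", "-U", uuid, MOUNT_POINT])
    time.sleep(10)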
Auto mount usb drive to raspberry pi without boot
I have raspberry pi 3B. It's running on Raspbian GNU/Linux 9 (stretch). I saw some tutorials about mounting usb drive to it but mostly there are 2 ways: -mount that drive manually, -mount that drive automatically at boot and I'm looking for mounting that usb drive automatically at lifetime (without boot) on specific path. I assume that there is no option to do that on linux configuration but maybe there is an option to do that by python script? My drive has exfat file system (there is no other drive with that file system), so it should be east to manage uuid and mounting that. I think that it should looks like: Background process that looks once for a while that 'exfat' drive is connected If yes get his UUID Mount that drive with that UUID on specific path* *that path should also exist when that usb drive isn't connected Can I do that like that? Or maybe there is already a solution for this?
[ "usbmount is a nifty package that adds udev hooks for auto mounting/unmounting.\nSimply install it with:\nsudo apt install usbmount\n\nThere appears to be an issue that stops it working properly, and in a nutshell the solution is as follows:\n\nEdit the following file in an editor:\nsudo nano /lib/systemd/system/systemd-udevd.service\n\n\nLook for the line with the contents: PrivateMounts=yes\nChange the yes in the line to no, like so: PrivateMounts=no\nSave the file and reload the udev daemon:\nsudo systemctl daemon-reload\nsudo systemctl restart systemd-udevd\n\n(or just reboot)\n\nYour USB devices should now auto mount at /media/usb0, /media/usb1 and so on.\n" ]
[ 0 ]
[]
[]
[ "linux", "mount", "python", "raspberry_pi", "usb" ]
stackoverflow_0074474113_linux_mount_python_raspberry_pi_usb.txt
Q: How do I get the negative of this answer? Why can I not just put a negative sign when returning the function? The problem The problem is basically using if and else loops to get the outputs as shown above. So based on the formula for harmonic series, I returned the following results should n be above 1 My code was basically this and seems to have gotten the right answers but I always end up with a negative value. Is there something wrong with the logic or is there a way to get the reverse of the results because I have tried doing min() and subtracting from 0. def alternating(n): if n == 1: return 1 else: return 1/n + (-1**(n % 2)) * alternating(n-1) A: Your function is not correct. This one is: """ Harmonic series using recursion See https://stackoverflow.com/questions/74476333/how-do-i-get-the-negative-of-this-answer-why-can-i-not-just-put-a-negative-sign See https://i.stack.imgur.com/ShNUi.png """ def alternating(k): if k != 1: return (-1) ** (k + 1) / k + alternating(k - 1) else: return 1 Please learn to stop saying and writing "basically". It's a high-tech filler word, akin to "um". Don't use it.
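As a quick sanity check of the recursive formula given in the answer, it can be compared against a straightforward loop over the same alternating series (my own verification sketch):
def alternating(k):
    if k != 1:
        return (-1) ** (k + 1) / k + alternating(k - 1)
    return 1

def alternating_iterative(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

for n in (1, 2, 3, 4):
    print(n, alternating(n), alternating_iterative(n))
# n=1 -> 1, n=2 -> 0.5, n=3 -> 0.8333..., n=4 -> 0.5833... for both versions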
How do I get the negative of this answer? Why can I not just put a negative sign when returning the function?
The problem The problem is basically using if and else loops to get the outputs as shown above. So based on the formula for harmonic series, I returned the following results should n be above 1 My code was basically this and seems to have gotten the right answers but I always end up with a negative value. Is there something wrong with the logic or is there a way to get the reverse of the results because I have tried doing min() and subtracting from 0. def alternating(n): if n == 1: return 1 else: return 1/n + (-1**(n % 2)) * alternating(n-1)
[ "Your function is not correct.\nThis one is:\n\"\"\"\nHarmonic series using recursion\n\nSee https://stackoverflow.com/questions/74476333/how-do-i-get-the-negative-of-this-answer-why-can-i-not-just-put-a-negative-sign\nSee https://i.stack.imgur.com/ShNUi.png\n\"\"\"\n\n\ndef alternating(k):\n if k != 1:\n return (-1) ** (k + 1) / k + alternating(k - 1)\n else:\n return 1\n\nPlease learn to stop saying and writing \"basically\". It's a high-tech filler word, akin to \"um\". Don't use it.\n" ]
[ 1 ]
[]
[]
[ "function", "if_statement", "math", "python" ]
stackoverflow_0074476333_function_if_statement_math_python.txt
Q: I am trying to write an algorithm that uses a stack to check if an expression has balanced parentheses but I keep encountering this error def is_matched(expression): left_bracket = "[({" right_bracket = "])}" my_stack = Stack(len(expression)) # our solution methodology is to go through the expression and push all of the the open brackets onto the stack and then # with the closing brackets - each time we encounter a closing bracket we will pop the stack and compare for character in expression: if character in left_bracket: my_stack.push(character) elif character in right_bracket: # first check to see that the stack is not empty i.e we actually have some opneing brackets in the expression if my_stack.is_empty(): return False # now we need to check that the type of braket we pop is the equivalent of it's closing bracket in the expression if right_bracket.index(character) != left_bracket.index(my_stack.pop): return False return my_stack.is_empty() print(is_matched("()")) if right_bracket.index(character) != left_bracket.index(my_stack.pop): TypeError: expected a string or other character buffer object python-BaseException here is my stack implementation: class Stack: def __init__(self, capacity): """Builds a stack with given capacity > 0.""" if capacity <= 0: raise Exception("The capacity must be positive") self.the_array = [None] * capacity self.top = -1 # the index of the top element def size(self): """Returns the size, i.e. the number of elements in the container.""" return self.top + 1 def is_empty(self): """Returns True if and only if the container is empty.""" return self.size() == 0 def is_full(self): """Returns True if and only if the container is full.""" return self.size() >= len(self.the_array) def push(self, item): """Places the given item at the top of the stack if there is capacity, or raises an Exception.""" if self.is_full(): raise Exception("The stack is full") self.top += 1 self.the_array[self.top] = item def pop(self): """Removes and returns the top element of the stack, or raises an Exception if there is none.""" if self.is_empty(): raise Exception("The stack is empty") item = self.the_array[self.top] # removes a reference to this item, # helps with memory management and debugging self.the_array[self.top] = None self.top -= 1 return item def reset(self): """Removes all elements from the container.""" while not self.is_empty(): self.pop() assert (self.is_empty) It should upon the second iteration pop the stack and notice that the indexes of the right and left bracket are the same and move into the final iteration where it realises the stack is empty and returns True but it is not doing so but instead throwing a typeError. Any help is appreciated. Thank you A: at this line: if right_bracket.index(character) != left_bracket.index(my_stack.pop): you actually need to call pop method, since pop is a method, not a property. therefore it should look like this: if right_bracket.index(character) != left_bracket.index(my_stack.pop()):
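For completeness, here is the question's function with that one-character fix applied; it still relies on the asker's Stack class shown above:
def is_matched(expression):
    left_bracket = "[({"
    right_bracket = "])}"
    my_stack = Stack(len(expression))
    for character in expression:
        if character in left_bracket:
            my_stack.push(character)
        elif character in right_bracket:
            if my_stack.is_empty():
                return False
            # pop() must be called, not just referenced
            if right_bracket.index(character) != left_bracket.index(my_stack.pop()):
                return False
    return my_stack.is_empty()

print(is_matched("()"))      # True
print(is_matched("([)]"))    # False
print(is_matched("({[]})"))  # True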
I am trying to write an algorithm that uses a stack to check if an expression has balanced parentheses but I keep encountering this error
def is_matched(expression): left_bracket = "[({" right_bracket = "])}" my_stack = Stack(len(expression)) # our solution methodology is to go through the expression and push all of the the open brackets onto the stack and then # with the closing brackets - each time we encounter a closing bracket we will pop the stack and compare for character in expression: if character in left_bracket: my_stack.push(character) elif character in right_bracket: # first check to see that the stack is not empty i.e we actually have some opneing brackets in the expression if my_stack.is_empty(): return False # now we need to check that the type of braket we pop is the equivalent of it's closing bracket in the expression if right_bracket.index(character) != left_bracket.index(my_stack.pop): return False return my_stack.is_empty() print(is_matched("()")) if right_bracket.index(character) != left_bracket.index(my_stack.pop): TypeError: expected a string or other character buffer object python-BaseException here is my stack implementation: class Stack: def __init__(self, capacity): """Builds a stack with given capacity > 0.""" if capacity <= 0: raise Exception("The capacity must be positive") self.the_array = [None] * capacity self.top = -1 # the index of the top element def size(self): """Returns the size, i.e. the number of elements in the container.""" return self.top + 1 def is_empty(self): """Returns True if and only if the container is empty.""" return self.size() == 0 def is_full(self): """Returns True if and only if the container is full.""" return self.size() >= len(self.the_array) def push(self, item): """Places the given item at the top of the stack if there is capacity, or raises an Exception.""" if self.is_full(): raise Exception("The stack is full") self.top += 1 self.the_array[self.top] = item def pop(self): """Removes and returns the top element of the stack, or raises an Exception if there is none.""" if self.is_empty(): raise Exception("The stack is empty") item = self.the_array[self.top] # removes a reference to this item, # helps with memory management and debugging self.the_array[self.top] = None self.top -= 1 return item def reset(self): """Removes all elements from the container.""" while not self.is_empty(): self.pop() assert (self.is_empty) It should upon the second iteration pop the stack and notice that the indexes of the right and left bracket are the same and move into the final iteration where it realises the stack is empty and returns True but it is not doing so but instead throwing a typeError. Any help is appreciated. Thank you
[ "at this line:\nif right_bracket.index(character) != left_bracket.index(my_stack.pop):\nyou actually need to call pop method, since pop is a method, not a property.\ntherefore it should look like this:\nif right_bracket.index(character) != left_bracket.index(my_stack.pop()):\n" ]
[ 0 ]
[]
[]
[ "parentheses", "python", "stack" ]
stackoverflow_0074476000_parentheses_python_stack.txt
Q: count the numbers of Objects of fruits apple, guava, orange, gooseberry, lemon, tomato I am encountering error in P1 = 10: like SyntaxError: invalid syntax Statements must be separated by newlines or semicolons Expected expression and error in cv2.imwrite(‘RGB_image.jpg’,rgb_image)like Expected expression. I have my own dataset like apple 1sample 2samples till 6samples. import numpy as np import imutils import cv2 Importing the necessary libraries image = cv2.imread('49.jpg') #reads the image Reading the input image of apple object dst = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 15) #the meaning of parameters given p1 = 10: size of pixels to compute weights of the image p2 = 10: to compute the weighted average p3 = 7: filter strength for luminescence p4 = 15: filter strength for color component noising and Blur filters to get a more clear image here rgb_image = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB) cv2.imwrite(‘RGB_image.jpg’,rgb_image) new_image = (cv2.medianBlur(rgb_image,5) cv2.imwrite('median_blur.jpg',new_image) hsv_image = cv2.cvtColor(new_image, cv2.COLOR_RGB2HSV) h, s, v = cv2.split(hsv_image) cv2.imwrite(‘H.jpg’,h) ret,th1=cv2.threshold(h,180,255, cv2.THRESH_BINARY+cv2.THRESH_OTSU) cv2.imwrite('Binary_image.jpg',th1) kernel = np.ones((5,5), dtype = "uint8")/9 bilateral = cv2.bilateralFilter(th1, 9 , 75, 75) erosion = cv2.erode(bilateral, kernel, iterations = 6) cv2.imwrite('mask_erosion.jpg', erosion) find contours in the thresholded image cnts = cv2.findContours(th1.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = imutils.grab_contours(cnts) print("[INFO] {} unique contours found".format(len(cnts))) for (i, c) in enumerate(cnts): ((x, y), _) = cv2.minEnclosingCircle(c) cv2.putText(image, "#{}".format(i + 1), (int(x) - 10, int(y)), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2) cv2.drawContours(image, [c], -1, (0, 255, 0), 2) cv2.imshow("Image", image) cv2.waitKey(0) cv2.imwrite('Contour_Image.jpg',image) I need proper code for counting these above objects for all above mentioned fruits. A: you can't use a semicolon after the assignment p1 = 10, if you'd like to write a comment about the assignment, use the # sign as you did in other lines. p1 = 10 # size of pixels to compute weights of the image p2 = 10 # to compute the weighted average p3 = 7 # filter strength for luminescence p4 = 15 # filter strength for color component EDIT: python string quotes after editing the question and adding another issue, I've noticed that your line: cv2.imwrite(‘RGB_image.jpg’,rgb_image) have invalid quotes. try to replace this line to cv2.imwrite('RGB_image.jpg',rgb_image) or cv2.imwrite("RGB_image.jpg",rgb_image) or even cv2.imwrite(`RGB_image.jpg`,rgb_image) note the difference between ‘’ which is not valid to define a string and '' (single quotes), "" (double quotes) or `` (back ticks) which are fine.
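Applying that advice to the lines the errors point at, the corrected fragment would look roughly like this: the parameter notes become comments, the curly quotes become plain ASCII quotes, and the stray parenthesis before cv2.medianBlur is removed.
dst = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 15)
# p1 = 10: size of pixels to compute weights of the image
# p2 = 10: to compute the weighted average
# p3 = 7: filter strength for luminescence
# p4 = 15: filter strength for color component

rgb_image = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)
cv2.imwrite('RGB_image.jpg', rgb_image)      # plain ASCII quotes

new_image = cv2.medianBlur(rgb_image, 5)     # no unmatched '('
cv2.imwrite('median_blur.jpg', new_image)

hsv_image = cv2.cvtColor(new_image, cv2.COLOR_RGB2HSV)
h, s, v = cv2.split(hsv_image)
cv2.imwrite('H.jpg', h)                      # plain ASCII quotes here too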
count the numbers of Objects of fruits apple, guava, orange, gooseberry, lemon, tomato
I am encountering error in P1 = 10: like SyntaxError: invalid syntax Statements must be separated by newlines or semicolons Expected expression and error in cv2.imwrite(‘RGB_image.jpg’,rgb_image)like Expected expression. I have my own dataset like apple 1sample 2samples till 6samples. import numpy as np import imutils import cv2 Importing the necessary libraries image = cv2.imread('49.jpg') #reads the image Reading the input image of apple object dst = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 15) #the meaning of parameters given p1 = 10: size of pixels to compute weights of the image p2 = 10: to compute the weighted average p3 = 7: filter strength for luminescence p4 = 15: filter strength for color component noising and Blur filters to get a more clear image here rgb_image = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB) cv2.imwrite(‘RGB_image.jpg’,rgb_image) new_image = (cv2.medianBlur(rgb_image,5) cv2.imwrite('median_blur.jpg',new_image) hsv_image = cv2.cvtColor(new_image, cv2.COLOR_RGB2HSV) h, s, v = cv2.split(hsv_image) cv2.imwrite(‘H.jpg’,h) ret,th1=cv2.threshold(h,180,255, cv2.THRESH_BINARY+cv2.THRESH_OTSU) cv2.imwrite('Binary_image.jpg',th1) kernel = np.ones((5,5), dtype = "uint8")/9 bilateral = cv2.bilateralFilter(th1, 9 , 75, 75) erosion = cv2.erode(bilateral, kernel, iterations = 6) cv2.imwrite('mask_erosion.jpg', erosion) find contours in the thresholded image cnts = cv2.findContours(th1.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = imutils.grab_contours(cnts) print("[INFO] {} unique contours found".format(len(cnts))) for (i, c) in enumerate(cnts): ((x, y), _) = cv2.minEnclosingCircle(c) cv2.putText(image, "#{}".format(i + 1), (int(x) - 10, int(y)), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2) cv2.drawContours(image, [c], -1, (0, 255, 0), 2) cv2.imshow("Image", image) cv2.waitKey(0) cv2.imwrite('Contour_Image.jpg',image) I need proper code for counting these above objects for all above mentioned fruits.
[ "you can't use a semicolon after the assignment p1 = 10, if you'd like to write a comment about the assignment, use the # sign as you did in other lines.\np1 = 10 # size of pixels to compute weights of the image\np2 = 10 # to compute the weighted average\np3 = 7 # filter strength for luminescence\np4 = 15 # filter strength for color component\n\nEDIT: python string quotes\nafter editing the question and adding another issue, I've noticed that your line: cv2.imwrite(‘RGB_image.jpg’,rgb_image) have invalid quotes.\ntry to replace this line to cv2.imwrite('RGB_image.jpg',rgb_image)\nor cv2.imwrite(\"RGB_image.jpg\",rgb_image) or even cv2.imwrite(`RGB_image.jpg`,rgb_image)\nnote the difference between ‘’ which is not valid to define a string and '' (single quotes), \"\" (double quotes) or `` (back ticks) which are fine.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074476520_python.txt
Q: Xpath - get only the values I am trying to extract some data with Selenium, and I have doubts about how to extract the text to transform it into a DF. I show an example: texto_columnas = driver.find_element(By.XPATH,'/html/body/div[5]/div[1]/div[4]/div/section[4]/section/div[1]/ul') texto_columnas = texto_columnas.text print(texto_columnas) If I run it I get this result: Today 15 Temperature 22° Wind 16 km/h Tomorrow 16 Temperature 20° Wind 13 km/h I want to remove all the text except the Today and Tomorrow fields. How could I do it? Thanks.
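A less brittle alternative to fixed slice positions (my own sketch, assuming the element text keeps the same word order as in the example output) is to split the text into tokens and keep the value that follows each day label:
tokens = texto_columnas.split()   # texto_columnas already holds the .text string

days = {}
for i, tok in enumerate(tokens):
    if tok in ("Today", "Tomorrow"):
        days[tok] = tokens[i + 1]   # the number right after the label

print(days)   # {'Today': '15', 'Tomorrow': '16'}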
Xpath - get only the values
I am trying to extract some data with Selenium, and I have doubts about how to extract the text to transform it into a DF. I show an example: texto_columnas = driver.find_element(By.XPATH,'/html/body/div[5]/div[1]/div[4]/div/section[4]/section/div[1]/ul') texto_columnas = texto_columnas.text print(texto_columnas) If I run it I get this result: Today 15 Temperature 22° Wind 16 km/h Tomorrow 16 Temperature 20° Wind 13 km/h I want to remove all the text except the Today and Tomorrow fields. How could I do it? Thanks.
[ "Now you can slice and get the values or remove the itens what you want.\ntoday = texto_columnas[0:7] # or texto_columnas[6:7] if you want only the field\ntomorrow= texto_columnas[36:48] # or texto_columnas[47:48] if you want only the field\n\nIf you want all in the same text, you can concatenate:\ntoday_and_tomorrrow = today+tomorrow\n\nYou can read all this informations in Python documentation too.\n" ]
[ 0 ]
[]
[]
[ "python", "web_scraping" ]
stackoverflow_0074476388_python_web_scraping.txt
Q: Replacing and inserting characters in a dataset python I have this data set that contains asymmetry between the left and right leg. I would like to display the data as a graph, and my thought process is to convert the left leg data ('L') to negative values, e.g. -3.1, and the right leg data to positive values. I can't quite figure it out; so far I got: df_selection['Asymmetry)'] = df_selection['Assymetry'].str.replace('R','+').replace('L','-') This, however, places the + & - after the number, and only the R changes while the L is left as it is. So the output that I want is something like this: | Asymmetry | | -------- | | 1.3 | | 2.5 | | -3.1 | Here's the dataset A: I would split your single array into two arrays of left and right. If you want to make the left ones negative you can still do that using this method and just negate the numbers. fd = ['1.3 R', '3.4l','2.5 R', ' 3.1L'] right = [float(each.upper().replace('R','').strip()) for each in fd if 'R' in each.upper()] left = [float(each.upper().replace('L','').strip()) for each in fd if 'L' in each.upper()] ...now plot right and left in different colors on the same plot... To do it your way: [float(each.upper().replace('R','').strip()) if 'R' in each.upper() else -float(each.upper().replace('L','').strip()) for each in fd]
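If you want to keep a single pandas column rather than two lists, a vectorised variant along the same lines could look like this; the column name and sample values are assumptions of mine, not taken from the linked dataset:
import numpy as np
import pandas as pd

df_selection = pd.DataFrame({'Asymmetry': ['1.3 R', '2.5 R', '3.1 L']})

values = df_selection['Asymmetry'].str.extract(r'([\d.]+)')[0].astype(float)
is_left = df_selection['Asymmetry'].str.upper().str.contains('L')
df_selection['Asymmetry_signed'] = np.where(is_left, -values, values)
print(df_selection['Asymmetry_signed'].tolist())   # [1.3, 2.5, -3.1]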
Replacing and inserting characters in a dataset python
I have this data set that contains asymmetry between the left and right leg. I would like to display the data as a graph and my thought process is that convert the left leg data('L') to negative values, e.g. -3.1, and the right leg data to positive values. I can't quite figure it out, so far I got: df_selection['Asymmetry)'] = df_selection['Assymetry'].str.replace('R','+').replace('L','-') This however places the + & - after the number and the R is the only thing that changes and leaves # L as it is. So the output that I want is something like this: | Asymmetry | | -------- | | 1.3 | | 2.5 | | -3.1 | Here's the dataset
[ "I would split your single array into two arrays of left and right. If you want to make the left's negative you can still do that using this methods and just negate the numbers.\nfd = ['1.3 R', '3.4l','2.5 R', ' 3.1L']\nright = [float(each.upper().replace('R','').strip()) for each in fd if 'R' in each.upper()]\nleft = [float(each.upper().replace('L','').strip()) for each in fd if 'L' in each.upper()]\n\n...now plot right and left in different colors on the same plot...\n\nTo do it your way:\n[float(each.upper().replace('R','').strip()) if 'R' in each.upper() else -float(each.upper().replace('L','').strip()) for each in fd]\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074476270_python.txt
Q: Python Class in function I have a really annoying issue with python. I would like to run some instructions inside a function; the code works outside a function but does nothing inside one. As below: Could anyone help? It doesn't work: def total(): obiekt = Preference2('202211', 'DAYS') obiekt.dates() obiekt.pipeline() print(vars(obiekt)) obiekt_n = Natural('202211', 'DAYS') obiekt_n.dates() obiekt_n.natural() df_natural = obiekt_n.Natural df_natural.set_index(['CUSTOMERID','MONTHID','YEARID'], inplace = True) obiekt_gen = GeneralKPI('202211', 'DAYS') obiekt_gen.dates() obiekt_gen.general() df_general = obiekt_gen.General df_general.set_index(['CUSTOMERID','MONTHID','YEARID'], inplace = True) return (df_natural, df_general, obiekt.df) #return reduce(lambda left,right: pd.merge(left,right,on= ['CUSTOMERID','MONTHID','YEARID'], how='outer'), data_frames).fillna(np.nan) It works: obiekt = Preference2('202211', 'DAYS') obiekt.dates() obiekt.pipeline() print(vars(obiekt)) A: #Just call the function after it is created def fun(): obiekt = Preference2("202211","DAYS") obiekt.dates() obiekt.pipeline() print(vars(obiekt)) fun()
Python Class in function
I have really annoying issue with python. I would like to do some instruction in function, code work without function but it does nothing in function. As below: Could anyone help? It doesnt work: def total(): obiekt = Preference2('202211', 'DAYS') obiekt.dates() obiekt.pipeline() print(vars(obiekt)) obiekt_n = Natural('202211', 'DAYS') obiekt_n.dates() obiekt_n.natural() df_natural = obiekt_n.Natural df_natural.set_index(['CUSTOMERID','MONTHID','YEARID'], inplace = True) obiekt_gen = GeneralKPI('202211', 'DAYS') obiekt_gen.dates() obiekt_gen.general() df_general = obiekt_gen.General df_general.set_index(['CUSTOMERID','MONTHID','YEARID'], inplace = True) return (df_natural, df_general, obiekt.df) #return reduce(lambda left,right: pd.merge(left,right,on= ['CUSTOMERID','MONTHID','YEARID'], how='outer'), data_frames).fillna(np.nan) It work: obiekt = Preference2('202211', 'DAYS') obiekt.dates() obiekt.pipeline() print(vars(obiekt))
[ "#Just call the function after it is created\ndef fun():\n obiekt = Preference2(\"202211\",\"DAYS\")\n obiekt.dates()\n obiekt.pipeline()\n print(vars(obiekt))\n\nfun()\n\n" ]
[ 0 ]
[]
[]
[ "class", "function", "python" ]
stackoverflow_0074476529_class_function_python.txt
Q: Why does my python function remember a "set" variable? I am trying to run a recursive program that takes an element and iterates over similar elements contained in it but never repeating. I want to keep track of the checked elements with a set type object and I want to repeat the process as many times as I want. This is my code def assaignPuntuation(song, assigned={"0"}): if( song in assigned ): return assigned assigned.add(song) def runthrough(songlist, song, assigned): for element in songlist: assigned = assaignPuntuation (song,assigned=assigned) return assigned ... assigned = runthrough (song, song[4], assigned) ... return assigned assaignPuntuation(A) assaignPuntuation(B) B is contained in the songlist of A, but when it is not indicated it should not start with all the songs checked in A, but it does. I expected the set to start with {"0"} every time the function was called with only the song, but it saves the value the first time so I can't repeat it a second time. I tried changing the name of the variables to be different, but it keeps happening and I don't know why. A: When you create a function, the function header is executed once at the start of your program. So in your case def assaignPuntuation(song, assigned={"0"}): creates a function object with an initialised set for your default argument assigned. That is why every subsequent call of assaignPuntuation gets the initally initalized assigned set which you mutate inside your function. To avoid this unexpected mutation follow this approach when dealing with mutable data types: def assaignPuntuation(song, assigned=None): if assigned is None: assigned = {"0"} # Rest of your function
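A stripped-down demonstration of the same pitfall, separate from the song-scoring code, makes the behaviour easy to see:
def remember(item, seen={"0"}):
    seen.add(item)
    return seen

print(remember("a"))   # the default set now holds '0' and 'a'
print(remember("b"))   # same set object again: '0', 'a' and 'b'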
Why does my python function remember a "set" variable?
I am trying to run a recursive program that takes an element and iterates over similar elements contained in it but never repeating. I want to keep track of the checked elements with a set type object and I want to repeat the process as many times as I want. This is my code def assaignPuntuation(song, assigned={"0"}): if( song in assigned ): return assigned assigned.add(song) def runthrough(songlist, song, assigned): for element in songlist: assigned = assaignPuntuation (song,assigned=assigned) return assigned ... assigned = runthrough (song, song[4], assigned) ... return assigned assaignPuntuation(A) assaignPuntuation(B) B is contained in the songlist of A, but when it is not indicated it should not start with all the songs checked in A, but it does. I expected the set to start with {"0"} every time the function was called with only the song, but it saves the value the first time so I can't repeat it a second time. I tried changing the name of the variables to be different, but it keeps happening and I don't know why.
[ "When you create a function, the function header is executed once at the start of your program.\nSo in your case\ndef assaignPuntuation(song, assigned={\"0\"}):\n\ncreates a function object with an initialised set for your default argument assigned.\nThat is why every subsequent call of assaignPuntuation gets the initally initalized assigned set which you mutate inside your function.\nTo avoid this unexpected mutation follow this approach when dealing with mutable data types:\ndef assaignPuntuation(song, assigned=None):\n if assigned is None:\n assigned = {\"0\"}\n # Rest of your function\n\n" ]
[ 0 ]
[]
[]
[ "function", "python" ]
stackoverflow_0074476583_function_python.txt
Q: Python No module name 'PhotoScan' I'm trying to run the code from this webpage. It says that No module name 'PhotoScan'. I try to pip install PhotoScan but couldn't find it. How can I install it? A: The PhotoScan module is available to Python code running in PhotoScan Pro, not to other Python installations. The module interfaces with the PhotoScan Pro internals Also see the PhotoScan Pro Python reference documentation (PDF). As such, it is not something you can install outside of PhotoScan Pro. Note that the standard edition of PhotoScan does not support scripting. A: Since some versions it is possible to use Metashape (PhotoScan) Python module without the original applications: https://agisoft.freshdesk.com/support/solutions/articles/31000148930-how-to-install-metashape-stand-alone-python-module However, the valid license is still required.
Python No module name 'PhotoScan'
I'm trying to run the code from this webpage. It says that No module name 'PhotoScan'. I try to pip install PhotoScan but couldn't find it. How can I install it?
[ "The PhotoScan module is available to Python code running in PhotoScan Pro, not to other Python installations. The module interfaces with the PhotoScan Pro internals\nAlso see the PhotoScan Pro Python reference documentation (PDF).\nAs such, it is not something you can install outside of PhotoScan Pro. Note that the standard edition of PhotoScan does not support scripting.\n", "Since some versions it is possible to use Metashape (PhotoScan) Python module without the original applications: https://agisoft.freshdesk.com/support/solutions/articles/31000148930-how-to-install-metashape-stand-alone-python-module\nHowever, the valid license is still required.\n" ]
[ 4, 1 ]
[]
[]
[ "python" ]
stackoverflow_0025106483_python.txt
Q: how to generate a rolling mean grouped by columns in pandas I'm trying to generate a rolling 2 average of col3 grouped by col2. What I'm struggling with is populating the NaN values to take the previously calculated rolling mean. DataFrame: df = pd.read_csv(StringIO("""col1,col2,col3 0,A,1 0,A,2 0,B,3 0,B,4 1,A,5 1,A,6 1,B,7 1,B,8 2,A,9 2,A,10 2,B,11 2,B,12 3,A 3,A 3,B 3,B 4,A 4,A 4,B 4,B """)) Tried: df.groupby(["col2"])["col3"].rolling(2).mean() col2 A 0 NaN 1 1.5 4 3.5 5 5.5 8 7.5 9 9.5 12 NaN 13 NaN 16 NaN 17 NaN B 2 NaN 3 3.5 6 5.5 7 7.5 10 9.5 11 11.5 14 NaN 15 NaN 18 NaN 19 NaN What I want (looking at A as an example): col1 col2 col3 0 A 1.0 0 A 2.0 0 B 3.0 0 B 4.0 1 A 5.0 1 A 6.0 1 B 7.0 1 B 8.0 2 A 9.0 2 A 10.0 2 B 11.0 2 B 12.0 3 A NaN # (10 + 9) / 2 = 9.5 3 A NaN # (9.5 + 10) / 2 = 9.75 3 B NaN # ... 3 B NaN 4 A NaN # (9.75 + 9.5) / 2 = 9.625 4 A NaN # (9.625 + ...) 4 B NaN 4 B NaN If we can offset the rolling mean to start at the first NaN that would be great. If this can't be done using rolling then happy to go for a for loop solution? A: You can try this: import pandas as pd from functools import reduce def my_fun(d): return reduce(lambda x, _: x.fillna(x.rolling(2, min_periods=2).mean().shift()), range(d['col3'].isna().sum()), d) df = df.groupby('col2').apply(my_fun) df col1 col2 col3 0 0 A 1.0000 1 0 A 2.0000 2 0 B 3.0000 3 0 B 4.0000 4 1 A 5.0000 5 1 A 6.0000 6 1 B 7.0000 7 1 B 8.0000 8 2 A 9.0000 9 2 A 10.0000 10 2 B 11.0000 11 2 B 12.0000 12 3 A 9.5000 13 3 A 9.7500 14 3 B 11.5000 15 3 B 11.7500 16 4 A 9.6250 17 4 A 9.6875 18 4 B 11.6250 19 4 B 11.6875
how to generate a rolling mean grouped by columns in pandas
I'm trying to generate a rolling 2 average of col3 grouped by col2. What I'm struggling with is populating the NaN values to take the previously calculated rolling mean. DataFrame: df = pd.read_csv(StringIO("""col1,col2,col3 0,A,1 0,A,2 0,B,3 0,B,4 1,A,5 1,A,6 1,B,7 1,B,8 2,A,9 2,A,10 2,B,11 2,B,12 3,A 3,A 3,B 3,B 4,A 4,A 4,B 4,B """)) Tried: df.groupby(["col2"])["col3"].rolling(2).mean() col2 A 0 NaN 1 1.5 4 3.5 5 5.5 8 7.5 9 9.5 12 NaN 13 NaN 16 NaN 17 NaN B 2 NaN 3 3.5 6 5.5 7 7.5 10 9.5 11 11.5 14 NaN 15 NaN 18 NaN 19 NaN What I want (looking at A as an example): col1 col2 col3 0 A 1.0 0 A 2.0 0 B 3.0 0 B 4.0 1 A 5.0 1 A 6.0 1 B 7.0 1 B 8.0 2 A 9.0 2 A 10.0 2 B 11.0 2 B 12.0 3 A NaN # (10 + 9) / 2 = 9.5 3 A NaN # (9.5 + 10) / 2 = 9.75 3 B NaN # ... 3 B NaN 4 A NaN # (9.75 + 9.5) / 2 = 9.625 4 A NaN # (9.625 + ...) 4 B NaN 4 B NaN If we can offset the rolling mean to start at the first NaN that would be great. If this can't be done using rolling then happy to go for a for loop solution?
[ "You can try this:\nimport pandas as pd\nfrom functools import reduce\n\ndef my_fun(d):\n return reduce(lambda x, _: x.fillna(x.rolling(2, min_periods=2).mean().shift()), range(d['col3'].isna().sum()), d)\n\ndf = df.groupby('col2').apply(my_fun)\ndf\n\n col1 col2 col3\n0 0 A 1.0000\n1 0 A 2.0000\n2 0 B 3.0000\n3 0 B 4.0000\n4 1 A 5.0000\n5 1 A 6.0000\n6 1 B 7.0000\n7 1 B 8.0000\n8 2 A 9.0000\n9 2 A 10.0000\n10 2 B 11.0000\n11 2 B 12.0000\n12 3 A 9.5000\n13 3 A 9.7500\n14 3 B 11.5000\n15 3 B 11.7500\n16 4 A 9.6250\n17 4 A 9.6875\n18 4 B 11.6250\n19 4 B 11.6875\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074475836_pandas_python.txt
Q: Shapes3d from Tensorflow not allowing test in split I copied this part of the code straight from tensorflow's example, but it's not allowing the split. Does anyone know why? I've tried many different split options, but I just keep getting this error every time I put test in. A: The shapes3d dataset only contains one split, which is train. You are passing "test" as a split element hence raising the error. It also does not support supervised structures. Please try as follows train_data,test_data,valid_data = tfds.load("shapes3d",split=["train[20%:80%]","train[:20%]","train[80%:]"]) If with_info = True, tfds.load will return the tuple (tf.data.Dataset, tfds.core.DatasetInfo), the latter containing the info associated with the builder. Thank you! (train_data,test_data,valid_data),data_info = tfds.load("shapes3d",split=["train[20%:80%]","train[:20%]","train[80%:]"],with_info=True)
Shapes3d from Tensorflow not allowing test in split
I copied this part of the code straight from tensorflow's example, but it's not allowing the split. Does anyone know why? I've tried many different split options, but I just keep getting this error every time I put test in.
[ "The shapes3d dataset only contains one split, which is train. You are passing \"test\" as a split element hence raising the error. It also does not support supervised structures. Please try as follows\ntrain_data,test_data,valid_data = tfds.load(\"shapes3d\",split=[\"train[20%:80%]\",\"train[:20%]\",\"train[80%:]\"])\n\nIf with_info = True, tfds.load will return the tuple (tf.data.Dataset, tfds.core.DatasetInfo), the latter containing the info associated with the builder. Thank you!\n(train_data,test_data,valid_data),data_info = tfds.load(\"shapes3d\",split=[\"train[20%:80%]\",\"train[:20%]\",\"train[80%:]\"],with_info=True)\n\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook", "python", "tensorflow", "tensorflow_datasets" ]
stackoverflow_0074332319_jupyter_notebook_python_tensorflow_tensorflow_datasets.txt
Q: Why is the get_attribute() function in selenium returning an empty string when inspecting the webpage shows the attribute? I am trying to grab the src attribute from the video tag from this webpage. This shows where I see the video tag when I am inspecting the image. The XPath for the tag in safari is "//*[@id="player"]/div[2]/div[4]/video" This is my code: from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By import os os.environ["SELENIUM_SERVER_JAR"] = "selenium-server-standalone-2.41.0.jar" browser = webdriver.Safari() browser.get("https://mplayer.me/default.php?id=MTc3ODc3") print(WebDriverWait(browser, 20).until(EC.visibility_of_element_located((By.TAG_NAME,"video"))).get_attribute("src")) browser.quit() Using .text instead og .get_Attribute also returns an empty string. I have to use safari and not chrome to get the src link because chrome uses a blob storage design due to which scraping via chrome shows "blob:https://mplayer.me/d420cb30-ed6e-4772-b169-ed33a5d3ee9f" instead of "https://wwwx18.gogocdn.stream/videos/hls/6CjH7KUeu18L4Y7ls0ohCw/1668685924/177877/81aa0af3891f4ef11da3f67f0d43ade6/ep.1.1657688313.m3u8" which is the link I want to get. A: You can get a link to m3u8 file in Chrome from logs using Desired Capabilities Here is one of the possible solutions to do this: import json from selenium import webdriver from selenium.webdriver import DesiredCapabilities from selenium.webdriver.chrome.service import Service options = webdriver.ChromeOptions() options.add_argument('--headless') capabilities = DesiredCapabilities.CHROME capabilities["goog:loggingPrefs"] = {"performance": "ALL"} options.add_experimental_option("excludeSwitches", ["enable-automation", "enable-logging"]) service = Service(executable_path="path/to/your/chromedriver.exe") driver = webdriver.Chrome(service=service, options=options, desired_capabilities=capabilities) driver.get('https://mplayer.me/default.php?id=MTc3ODc3') logs = driver.get_log('performance') for log in logs: data = json.loads(log['message'])['message']['params'].get('request') if data and data['url'].endswith('.m3u8'): print(data['url']) driver.quit() Output: https://wwwx18.gogocdn.stream/videos/hls/myv1spZ0483oSfvbo4bcbQ/1668706324/177877/81aa0af3891f4ef11da3f67f0d43ade6/ep.1.1657688313.m3u8 Tested on Win 10, Python 3.9.10, Selenium 4.5.0
Why is the get_attribute() function in selenium returning an empty string when inspecting the webpage shows the attribute?
I am trying to grab the src attribute from the video tag from this webpage. This shows where I see the video tag when I am inspecting the image. The XPath for the tag in safari is "//*[@id="player"]/div[2]/div[4]/video" This is my code: from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By import os os.environ["SELENIUM_SERVER_JAR"] = "selenium-server-standalone-2.41.0.jar" browser = webdriver.Safari() browser.get("https://mplayer.me/default.php?id=MTc3ODc3") print(WebDriverWait(browser, 20).until(EC.visibility_of_element_located((By.TAG_NAME,"video"))).get_attribute("src")) browser.quit() Using .text instead og .get_Attribute also returns an empty string. I have to use safari and not chrome to get the src link because chrome uses a blob storage design due to which scraping via chrome shows "blob:https://mplayer.me/d420cb30-ed6e-4772-b169-ed33a5d3ee9f" instead of "https://wwwx18.gogocdn.stream/videos/hls/6CjH7KUeu18L4Y7ls0ohCw/1668685924/177877/81aa0af3891f4ef11da3f67f0d43ade6/ep.1.1657688313.m3u8" which is the link I want to get.
[ "You can get a link to m3u8 file in Chrome from logs using Desired Capabilities\nHere is one of the possible solutions to do this:\nimport json\nfrom selenium import webdriver\nfrom selenium.webdriver import DesiredCapabilities\nfrom selenium.webdriver.chrome.service import Service\n\n\noptions = webdriver.ChromeOptions()\noptions.add_argument('--headless')\ncapabilities = DesiredCapabilities.CHROME\ncapabilities[\"goog:loggingPrefs\"] = {\"performance\": \"ALL\"}\noptions.add_experimental_option(\"excludeSwitches\", [\"enable-automation\", \"enable-logging\"])\nservice = Service(executable_path=\"path/to/your/chromedriver.exe\")\ndriver = webdriver.Chrome(service=service, options=options, desired_capabilities=capabilities)\n\ndriver.get('https://mplayer.me/default.php?id=MTc3ODc3')\nlogs = driver.get_log('performance')\n\nfor log in logs:\n data = json.loads(log['message'])['message']['params'].get('request')\n if data and data['url'].endswith('.m3u8'):\n print(data['url'])\n\ndriver.quit()\n\nOutput:\nhttps://wwwx18.gogocdn.stream/videos/hls/myv1spZ0483oSfvbo4bcbQ/1668706324/177877/81aa0af3891f4ef11da3f67f0d43ade6/ep.1.1657688313.m3u8\n\nTested on Win 10, Python 3.9.10, Selenium 4.5.0\n" ]
[ 0 ]
[]
[]
[ "python", "safari", "selenium", "web_scraping" ]
stackoverflow_0074472216_python_safari_selenium_web_scraping.txt
Q: How to filter dataframe based on values in pyspark/python? I have a dataframe like below. I want to read the dataframe and filter the records based on start time and store them in different dataframes. INPUT DF name start_time AA 2022-11-16 AAA 2022-11-15 BBB 2022-11-14 For example: I need to store each record based on start time, which means all records whose start time is the 16th should go to one dataframe, and so on. OUTPUT DF df1 = ["Store 2022-11-16 record"] df2 = ["Store 2022-11-15 record"] df3 = ["Store 2022-11-14 record"] A: Well, technically a duplicate, but I don't know how to report that; I think this works: df = pd.DataFrame({"name" : ["AA", "AAA", "BBB"], "start_time" : ["2022-11-16"," 2022-11-15", "2022-11-14"]}) dfs = dict(tuple(df.groupby('start_time'))) dfs you can select each DataFrame by the start time: print (dfs['2022-11-14']) name start_time 2 BBB 2022-11-14
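Since the question mentions pyspark, roughly the same grouping can be done on a Spark DataFrame by filtering once per distinct start_time value; a minimal sketch:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("AA", "2022-11-16"), ("AAA", "2022-11-15"), ("BBB", "2022-11-14")],
    ["name", "start_time"],
)

# one filtered DataFrame per distinct start_time value
dfs = {
    row["start_time"]: df.filter(df.start_time == row["start_time"])
    for row in df.select("start_time").distinct().collect()
}

dfs["2022-11-16"].show()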
How to filter dataframe based on values in pyspark/python?
I have a dataframe like below. I want to read the dataframe and filter the records based on start time and store in different dataframes. INPUT DF name start_time AA 2022-11-16 AAA 2022-11-15 BBB 2022-11-14 For eg: I need to store each record based on start time, which means all, 16 th date start time records should go to one dataframe and so on. OUTPUT DF df1 = ["Store 2022-11-16 record"] df2 = ["Store 2022-11-15 record"] df3 = ["Store 2022-11-14 record"]
[ "Well, technially a duplicate but idk how to report that but I think this works :\ndf = pd.DataFrame({\"name\" : [\"AA\", \"AAA\", \"BBB\"], \n\"start_time\" : [\"2022-11-16\",\" 2022-11-15\", \"2022-11-14\"]})\n\ndfs = dict(tuple(df.groupby('start_time')))\n\ndfs\n\nyou can select each DataFrame by the start time :\nprint (dfs['2022-11-14''])\n\n name start_time\n2 BBB 2022-11-14\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "pyspark", "python", "python_3.x" ]
stackoverflow_0074475890_dataframe_pandas_pyspark_python_python_3.x.txt
Q: Convert extremely nested JSON to CSV using python I'm having trouble converting the JSON below to csv, especially the details.kpis results section as it's quite nested. I'm trying to use pandas and the json_normalize function, but even if I give the correct record path and meta it's not helping. Below is the JSON, and I suggest pasting it into http://jsonviewer.stack.hu/ to better understand it, as it neatly visualizes it. My end goal would be to get it to look similar to the output of this site that converts json to csv (https://data.page/json/csv). Any help would be much appreciated, thank you in advance. Please find the JSON here https://docs.google.com/document/d/1Swam8LnKRA17Yo_Um0OdFXpN9nHbOqNMFIEHC2Gcgr8/edit?usp=sharing A: You need to replace the following in your json: false with False/"false" null with None After this just run the json_normalize() function; it should work. I am able to make it run.
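Since the JSON itself is only linked, here is a minimal sketch of the record_path/meta pattern on a made-up structure with the same kind of nesting; note that if the document is loaded with json.loads (rather than pasted as a Python literal), false and null are converted to False and None automatically:
import json
import pandas as pd

# illustrative structure only; the real field names live in the linked document
raw = '{"id": 1, "details": {"kpis": [{"name": "a", "value": null, "ok": false}]}}'
data = json.loads(raw)

flat = pd.json_normalize(data, record_path=["details", "kpis"], meta=["id"])
flat.to_csv("out.csv", index=False)
print(flat)   # one row per kpi entry, with the parent id carried along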
Convert extremely nested JSON to CSV using python
I'm having trouble converting the JSON below to csv, especially the details.kpis results section as it's quite nested. I'm trying to use pandas and the json_normalize function, but even if I give the correct record path and meta it's not helping. Below is the JSON, and I suggest pasting it into http://jsonviewer.stack.hu/ to better understand it, as it neatly visualizes it. My end goal would be to get it to look similar to the output of this site that converts json to csv (https://data.page/json/csv). Any help would be much appreciated, thank you in advance. Please find the JSON here https://docs.google.com/document/d/1Swam8LnKRA17Yo_Um0OdFXpN9nHbOqNMFIEHC2Gcgr8/edit?usp=sharing
[ "You need to replace following in your json :\n\nfalse with False/\"false\"\nnull with None\n\nAfter this just run the json_normalize() function , it should work.\nI am able to make it run.\n" ]
[ 0 ]
[]
[]
[ "csv", "json", "pandas", "python" ]
stackoverflow_0074476534_csv_json_pandas_python.txt
Q: Is there a way to change True to False in python? I would like to change True to False, or more exactly map True to 0 and False to 1; is there a way to do this? I have a dataframe and would like to df.groupby('country',as_index=False).sum() and see how many False values there are in each country. I have tried df['allowed'] = --df['allowed'] (allowed is the column with True and False values) to swap them, but it didn't work.
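Since the actual goal is counting False values per country, inverting the boolean column and summing it inside the groupby gets there directly; a small sketch with made-up data:
import pandas as pd

df = pd.DataFrame({
    "country": ["SE", "SE", "NO", "NO"],
    "allowed": [True, False, False, False],
})

false_counts = (~df["allowed"]).groupby(df["country"]).sum().reset_index(name="not_allowed")
print(false_counts)   # NO -> 2, SE -> 1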
Is there a way to change True to False in python?
I would like to change True to False, or more exactly map True to 0 and False to 1; is there a way to do this? I have a dataframe and would like to df.groupby('country',as_index=False).sum() and see how many False values there are in each country. I have tried df['allowed'] = --df['allowed'] (allowed is the column with True and False values) to swap them, but it didn't work.
[ "Swapping booleans is easy with df[\"neg_allowed\"] = ~df['allowed']\n", "# we can use map method to change values directly \n\ndf['allowed'] = df['allowed'].map({True: 0, False: 1})\n\n#Before: allowed ---> #After: allowed\n True 0\n False 1\n\n", "Example\ns1 = pd.Series([True, True, False, False])\n\noutput(s1):\n0 True\n1 True\n2 False\n3 False\ndtype: bool\n\nCode\nTrue -> 0 & False -> 1\n1 - s1\n\nresult:\n0 0\n1 0\n2 1\n3 1\ndtype: int32\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "boolean", "pandas", "python" ]
stackoverflow_0074476574_boolean_pandas_python.txt
Q: shape '[58, 2048, -1]' is invalid for input of size 534528 I'm new to PyTorch. I found a sample code of the capsule network on mnist, I changed it to use my own dataset, but it gives me a runtime error Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_3248\67117472.py in <module> 176 train(capsule_net, optimizer,mnist.train_loader, e) 177 print('start test') --> 178 test(capsule_net, mnist.test_loader, e) ~\AppData\Local\Temp\ipykernel_3248\67117472.py in test(capsule_net, test_loader, epoch) 142 data, target = data.cuda(), target.cuda() 143 --> 144 output, reconstructions, masked = capsule_net(data) 145 loss = capsule_net.loss(data, output, target, reconstructions) 146 ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] ~\AppData\Local\Temp\ipykernel_3248\1108288962.py in forward(self, data) 142 def forward(self, data): 143 #2 --> 144 output = self.digit_capsules(self.primary_capsules(self.conv_layer(data))) 145 reconstructions, masked = self.decoder(output, data) 146 return output, reconstructions, masked ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] ~\AppData\Local\Temp\ipykernel_3248\1108288962.py in forward(self, x) 34 u = [capsule(x) for capsule in self.capsules] 35 u = torch.stack(u, dim=1) ---> 36 u = u.view(x.size(0), self.num_routes, -1) 37 return self.squash(u) 38 RuntimeError: shape '[58, 2048, -1]' is invalid for input of size 534528 The image size is 32*32. Could anyone tell me how to fix this error? There are 3 layers, Cov layer, primary caps and digit caps. The train dataset contains 100 images and the test dataset includes 20 images. class ConvLayer(nn.Module): def __init__(self, in_channels=3, out_channels=256, kernel_size=9): super(ConvLayer, self).__init__() self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=1 ) def forward(self, x): return F.relu(self.conv(x)) class PrimaryCaps(nn.Module): def __init__(self, num_capsules=8, in_channels=256, out_channels=32, kernel_size=9, num_routes=32 * 6 * 6): super(PrimaryCaps, self).__init__() self.num_routes = num_routes self.capsules = nn.ModuleList([ nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=2, padding=0) for _ in range(num_capsules)]) def forward(self, x): print(x) u = [capsule(x) for capsule in self.capsules] u = torch.stack(u, dim=1) u = u.view(x.size(0), self.num_routes, -1) return self.squash(u) def squash(self, input_tensor): # take norm of input vectors squared_norm = (input_tensor ** 2).sum(-1, keepdim=True) output_tensor = squared_norm * input_tensor / ((1. 
+ squared_norm) * torch.sqrt(squared_norm)) return output_tensor class DigitCaps(nn.Module): def __init__(self, num_capsules=10, num_routes=32 * 6 * 6, in_channels=8, out_channels=16): super(DigitCaps, self).__init__() self.in_channels = in_channels self.num_routes = num_routes self.num_capsules = num_capsules self.W = nn.Parameter(torch.randn(1, num_routes, num_capsules, out_channels, in_channels)) def forward(self, x): batch_size = x.size(0) x = torch.stack([x] * self.num_capsules, dim=2).unsqueeze(4) W = torch.cat([self.W] * batch_size, dim=0) u_hat = torch.matmul(W, x) b_ij = Variable(torch.zeros(1, self.num_routes, self.num_capsules, 1)) if USE_CUDA: b_ij = b_ij.cuda() num_iterations = 3 for iteration in range(num_iterations): c_ij = F.softmax(b_ij, dim=1) c_ij = torch.cat([c_ij] * batch_size, dim=0).unsqueeze(4) s_j = (c_ij * u_hat).sum(dim=1, keepdim=True) v_j = self.squash(s_j) if iteration < num_iterations - 1: a_ij = torch.matmul(u_hat.transpose(3, 4), torch.cat([v_j] * self.num_routes, dim=1)) b_ij = b_ij + a_ij.squeeze(4).mean(dim=0, keepdim=True) return v_j.squeeze(1) def squash(self, input_tensor): squared_norm = (input_tensor ** 2).sum(-1, keepdim=True) output_tensor = squared_norm * input_tensor / ((1. + squared_norm) * torch.sqrt(squared_norm)) return output_tensor class Decoder(nn.Module): def __init__(self, input_width=28, input_height=28, input_channel=1): super(Decoder, self).__init__() self.input_width = input_width self.input_height = input_height self.input_channel = input_channel self.reconstraction_layers = nn.Sequential( nn.Linear(16 * 10, 512), nn.ReLU(inplace=True), nn.Linear(512, 1024), nn.ReLU(inplace=True), nn.Linear(1024, self.input_height * self.input_width * self.input_channel), nn.Sigmoid() ) def forward(self, x, data): classes = torch.sqrt((x ** 2).sum(2)) classes = F.softmax(classes, dim=0) _, max_length_indices = classes.max(dim=1) masked = Variable(torch.sparse.torch.eye(10)) if USE_CUDA: masked = masked.cuda() masked = masked.index_select(dim=0, index=Variable(max_length_indices.squeeze(1).data)) t = (x * masked[:, :, None, None]).view(x.size(0), -1) reconstructions = self.reconstraction_layers(t) reconstructions = reconstructions.view(-1, self.input_channel, self.input_width, self.input_height) return reconstructions, masked class CapsNet(nn.Module): def __init__(self, config=None): super(CapsNet, self).__init__() if config: self.conv_layer = ConvLayer(config.cnn_in_channels, config.cnn_out_channels, config.cnn_kernel_size) print(self.conv_layer) self.primary_capsules = PrimaryCaps(config.pc_num_capsules, config.pc_in_channels, config.pc_out_channels, config.pc_kernel_size, config.pc_num_routes) print(self.primary_capsules) self.digit_capsules = DigitCaps(config.dc_num_capsules, config.dc_num_routes, config.dc_in_channels, config.dc_out_channels) print(self.digit_capsules) self.decoder = Decoder(config.input_width, config.input_height, config.cnn_in_channels) print(self.decoder) else: self.conv_layer = ConvLayer() self.primary_capsules = PrimaryCaps() self.digit_capsules = DigitCaps() self.decoder = Decoder() self.mse_loss = nn.MSELoss() def forward(self, data): output = self.digit_capsules(self.primary_capsules(self.conv_layer(data))) reconstructions, masked = self.decoder(output, data) return output, reconstructions, masked following is the related part of main function for e in range(1, N_EPOCHS + 1): transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) train(capsule_net, 
optimizer,mnist.train_loader, e) print('start test') test(capsule_net, mnist.test_loader, e) A: You are trying to reshape your tensor in your forward method of your PrimaryCaps class. However, you are trying to reshape it as [58, 2048, -1] but you have a size of 534528. 534528 is not a multiple of 58*2048. My guess is that the value of your self.num_routes is supposed to be of 32 * 6 * 6, but somewhere in your code you are defining it as 2048. class PrimaryCaps(nn.Module): def __init__(self, num_capsules=8, in_channels=256, out_channels=32, kernel_size=9, num_routes=32 * 6 * 6): ... def forward(self, x): print(x) u = [capsule(x) for capsule in self.capsules] u = torch.stack(u, dim=1) u = u.view(x.size(0), self.num_routes, -1) #HERE return self.squash(u) Hope this helps. EDIT : You're probably setting the wrong value on this line self.primary_capsules = PrimaryCaps(config.pc_num_capsules, config.pc_in_channels, config.pc_out_channels, config.pc_kernel_size, config.pc_num_routes) where config.pc_num_routes is probably set to 2048, which overrides your value of 32 * 6 * 6.
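As a quick sanity check on the numbers in the traceback (my own arithmetic, using only the sizes reported in the error message): the flattened tensor size must equal batch * num_capsules * out_channels * height * width, so
total = 534528            # size reported in the RuntimeError
batch = 58                # x.size(0) in the same error
num_capsules = 8
out_channels = 32

per_sample = total // batch                            # 9216
spatial = per_sample // (num_capsules * out_channels)  # 36, i.e. a 6 x 6 feature map
num_routes = out_channels * spatial                    # 32 * 6 * 6 = 1152
print(num_routes)                                      # 1152, not 2048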
shape '[58, 2048, -1]' is invalid for input of size 534528
I'm new to PyTorch. I found a sample code of the capsule network on mnist, I changed it to use my own dataset, but it gives me a runtime error Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_3248\67117472.py in <module> 176 train(capsule_net, optimizer,mnist.train_loader, e) 177 print('start test') --> 178 test(capsule_net, mnist.test_loader, e) ~\AppData\Local\Temp\ipykernel_3248\67117472.py in test(capsule_net, test_loader, epoch) 142 data, target = data.cuda(), target.cuda() 143 --> 144 output, reconstructions, masked = capsule_net(data) 145 loss = capsule_net.loss(data, output, target, reconstructions) 146 ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] ~\AppData\Local\Temp\ipykernel_3248\1108288962.py in forward(self, data) 142 def forward(self, data): 143 #2 --> 144 output = self.digit_capsules(self.primary_capsules(self.conv_layer(data))) 145 reconstructions, masked = self.decoder(output, data) 146 return output, reconstructions, masked ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] ~\AppData\Local\Temp\ipykernel_3248\1108288962.py in forward(self, x) 34 u = [capsule(x) for capsule in self.capsules] 35 u = torch.stack(u, dim=1) ---> 36 u = u.view(x.size(0), self.num_routes, -1) 37 return self.squash(u) 38 RuntimeError: shape '[58, 2048, -1]' is invalid for input of size 534528 The image size is 32*32. Could anyone tell me how to fix this error? There are 3 layers, Cov layer, primary caps and digit caps. The train dataset contains 100 images and the test dataset includes 20 images. class ConvLayer(nn.Module): def __init__(self, in_channels=3, out_channels=256, kernel_size=9): super(ConvLayer, self).__init__() self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=1 ) def forward(self, x): return F.relu(self.conv(x)) class PrimaryCaps(nn.Module): def __init__(self, num_capsules=8, in_channels=256, out_channels=32, kernel_size=9, num_routes=32 * 6 * 6): super(PrimaryCaps, self).__init__() self.num_routes = num_routes self.capsules = nn.ModuleList([ nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=2, padding=0) for _ in range(num_capsules)]) def forward(self, x): print(x) u = [capsule(x) for capsule in self.capsules] u = torch.stack(u, dim=1) u = u.view(x.size(0), self.num_routes, -1) return self.squash(u) def squash(self, input_tensor): # take norm of input vectors squared_norm = (input_tensor ** 2).sum(-1, keepdim=True) output_tensor = squared_norm * input_tensor / ((1. 
+ squared_norm) * torch.sqrt(squared_norm)) return output_tensor class DigitCaps(nn.Module): def __init__(self, num_capsules=10, num_routes=32 * 6 * 6, in_channels=8, out_channels=16): super(DigitCaps, self).__init__() self.in_channels = in_channels self.num_routes = num_routes self.num_capsules = num_capsules self.W = nn.Parameter(torch.randn(1, num_routes, num_capsules, out_channels, in_channels)) def forward(self, x): batch_size = x.size(0) x = torch.stack([x] * self.num_capsules, dim=2).unsqueeze(4) W = torch.cat([self.W] * batch_size, dim=0) u_hat = torch.matmul(W, x) b_ij = Variable(torch.zeros(1, self.num_routes, self.num_capsules, 1)) if USE_CUDA: b_ij = b_ij.cuda() num_iterations = 3 for iteration in range(num_iterations): c_ij = F.softmax(b_ij, dim=1) c_ij = torch.cat([c_ij] * batch_size, dim=0).unsqueeze(4) s_j = (c_ij * u_hat).sum(dim=1, keepdim=True) v_j = self.squash(s_j) if iteration < num_iterations - 1: a_ij = torch.matmul(u_hat.transpose(3, 4), torch.cat([v_j] * self.num_routes, dim=1)) b_ij = b_ij + a_ij.squeeze(4).mean(dim=0, keepdim=True) return v_j.squeeze(1) def squash(self, input_tensor): squared_norm = (input_tensor ** 2).sum(-1, keepdim=True) output_tensor = squared_norm * input_tensor / ((1. + squared_norm) * torch.sqrt(squared_norm)) return output_tensor class Decoder(nn.Module): def __init__(self, input_width=28, input_height=28, input_channel=1): super(Decoder, self).__init__() self.input_width = input_width self.input_height = input_height self.input_channel = input_channel self.reconstraction_layers = nn.Sequential( nn.Linear(16 * 10, 512), nn.ReLU(inplace=True), nn.Linear(512, 1024), nn.ReLU(inplace=True), nn.Linear(1024, self.input_height * self.input_width * self.input_channel), nn.Sigmoid() ) def forward(self, x, data): classes = torch.sqrt((x ** 2).sum(2)) classes = F.softmax(classes, dim=0) _, max_length_indices = classes.max(dim=1) masked = Variable(torch.sparse.torch.eye(10)) if USE_CUDA: masked = masked.cuda() masked = masked.index_select(dim=0, index=Variable(max_length_indices.squeeze(1).data)) t = (x * masked[:, :, None, None]).view(x.size(0), -1) reconstructions = self.reconstraction_layers(t) reconstructions = reconstructions.view(-1, self.input_channel, self.input_width, self.input_height) return reconstructions, masked class CapsNet(nn.Module): def __init__(self, config=None): super(CapsNet, self).__init__() if config: self.conv_layer = ConvLayer(config.cnn_in_channels, config.cnn_out_channels, config.cnn_kernel_size) print(self.conv_layer) self.primary_capsules = PrimaryCaps(config.pc_num_capsules, config.pc_in_channels, config.pc_out_channels, config.pc_kernel_size, config.pc_num_routes) print(self.primary_capsules) self.digit_capsules = DigitCaps(config.dc_num_capsules, config.dc_num_routes, config.dc_in_channels, config.dc_out_channels) print(self.digit_capsules) self.decoder = Decoder(config.input_width, config.input_height, config.cnn_in_channels) print(self.decoder) else: self.conv_layer = ConvLayer() self.primary_capsules = PrimaryCaps() self.digit_capsules = DigitCaps() self.decoder = Decoder() self.mse_loss = nn.MSELoss() def forward(self, data): output = self.digit_capsules(self.primary_capsules(self.conv_layer(data))) reconstructions, masked = self.decoder(output, data) return output, reconstructions, masked following is the related part of main function for e in range(1, N_EPOCHS + 1): transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) train(capsule_net, 
optimizer,mnist.train_loader, e) print('start test') test(capsule_net, mnist.test_loader, e)
[ "You are trying to reshape your tensor in your forward method of your PrimaryCaps class. However, you are trying to reshape it as [58, 2048, -1] but you have a size of 534528. 534528 is not a multiple of 58*2048. My guess is that the value of your self.num_routes is supposed to be of 32 * 6 * 6, but somewhere in your code you are defining it as 2048.\nclass PrimaryCaps(nn.Module):\n def __init__(self, num_capsules=8, in_channels=256, out_channels=32, kernel_size=9, num_routes=32 * 6 * 6):\n ...\n\n def forward(self, x):\n print(x)\n u = [capsule(x) for capsule in self.capsules]\n u = torch.stack(u, dim=1)\n u = u.view(x.size(0), self.num_routes, -1) #HERE\n return self.squash(u)\n\nHope this helps.\nEDIT : You're probably setting the wrong value on this line\nself.primary_capsules = PrimaryCaps(config.pc_num_capsules, config.pc_in_channels, config.pc_out_channels,\n config.pc_kernel_size, config.pc_num_routes)\n\nwhere config.pc_num_routes is probably set to 2048, which overrides your value of 32 * 6 * 6.\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "machine_learning", "python", "pytorch" ]
stackoverflow_0074472573_deep_learning_machine_learning_python_pytorch.txt
Q: how to check if button is clicked on tkinter I am trying to create a car configurator using tkinter as a gui in my free time. I have managed to open a tkinter box with images that act as buttons. What I want to do is for the user to click on a button. I want to check which button has been clicked (i.e if the family car button is clicked, how can I check that it has been clicked). I have done my research on this website, and all of the solutions I have found have been in javascript or other languages. Once the button has been clicked, I want a new window to be opened ONLY containing attributes for a family car i.e a family car can have a red exterior colour, but a sports car cannot have a red exterior colour at all. Here is my code below: from tkinter import * import tkinter as tk def create_window(): window = tk.Toplevel(root) root = tk.Tk() familycar = PhotoImage(file = "VW family car.png") familylabel = Button(root, image=familycar) familybutton = Button(root, image=familycar, command=create_window) familybutton.pack() So how can I check that the family car button has been clicked? Thanks A: Use a Boolean flag. Define isClicked as False near the beginning of your code, and then set isClicked as True in your create_window() function. This way, other functions and variables in your code can see whether the button's been clicked (if isClicked). A: Not sure what you asked, do you want to disable it or check its status in another routine ? Or just to count the times it has been clicked, In order to do that Simple solution would be to add a general variable that will be updated inside the create_window method (general because you want to allow access from other places). A: First, you would need to initialize a function in which you want to execute on the button click. example: def button_clicked(): print('I got clicked') Then, when you define the target button, you'll have to set the 'command' argument to the required function(the button_clicked function in this context).
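A minimal sketch pulling those suggestions together, with the button identity passed straight into the callback so the new window only offers the matching options (plain text buttons stand in for the PNG images, and the colour lists are invented for illustration):

import tkinter as tk

root = tk.Tk()

def create_window(car_type):
    # car_type tells us which button was clicked
    options = {"family": ["red", "blue", "silver"], "sports": ["yellow", "black"]}
    window = tk.Toplevel(root)
    tk.Label(window, text=f"{car_type} car colours: {', '.join(options[car_type])}").pack()

tk.Button(root, text="Family car", command=lambda: create_window("family")).pack()
tk.Button(root, text="Sports car", command=lambda: create_window("sports")).pack()

root.mainloop()

If a standalone flag is still wanted, setting a module-level variable such as clicked_car = car_type inside create_window records the same information for later code.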
how to check if button is clicked on tkinter
I am trying to create a car configurator using tkinter as a gui in my free time. I have managed to open a tkinter box with images that act as buttons. What I want to do is for the user to click on a button. I want to check which button has been clicked (i.e if the family car button is clicked, how can I check that it has been clicked). I have done my research on this website, and all of the solutions I have found have been in javascript or other languages. Once the button has been clicked, I want a new window to be opened ONLY containing attributes for a family car i.e a family car can have a red exterior colour, but a sports car cannot have a red exterior colour at all. Here is my code below: from tkinter import * import tkinter as tk def create_window(): window = tk.Toplevel(root) root = tk.Tk() familycar = PhotoImage(file = "VW family car.png") familylabel = Button(root, image=familycar) familybutton = Button(root, image=familycar, command=create_window) familybutton.pack() So how can I check that the family car button has been clicked? Thanks
[ "Use a Boolean flag.\nDefine isClicked as False near the beginning of your code, and then set isClicked as True in your create_window() function.\nThis way, other functions and variables in your code can see whether the button's been clicked (if isClicked).\n", "Not sure what you asked, do you want to disable it or check its status in another routine ?\nOr just to count the times it has been clicked, \nIn order to do that Simple solution would be to add a general variable that will be updated inside the create_window method (general because you want to allow access from other places).\n", "First, you would need to initialize a function in which you want to execute on the button click.\nexample:\ndef button_clicked():\n print('I got clicked')\n\nThen, when you define the target button, you'll have to set the 'command' argument to the required function(the button_clicked function in this context).\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0051766129_python_tkinter.txt
Q: Remove number of lines from string in python for example below is the string news="Waukesha trial: US man sentenced to life for car-ramming attack - BBC NewsBBC HomepageSkip to contentAccessibility HelpYour accountHomeNewsSportReelWorklifeTravelFutureMore menuMore menuSearch BBCHomeNewsSportReelWorklifeTravelFutureCultureMusicTVWeatherSoundsClose menuBBC NewsMenuHomeWar in UkraineCoronavirusClimateVideoWorldAsiaUKBusinessTechScienceMoreStoriesEntertainment ArtsHealthWorld News TVIn PicturesReality CheckNewsbeatLong ReadsUS CanadaUS Elections 2022ResultsWaukesha trial: US man sentenced to life for car-ramming attackPublished11 hours agoSharecloseShare pageCopy linkAbout sharingThis video can not be playedTo play this video you need to enable JavaScript in your browser Media caption, Watch: Emotional testimony from victims familiesA Wisconsin judge has sentenced a man who killed six people and injured 62 by driving through a Christmas parade last year to life in prison A jury convicted Darrell Brooks last month after prosecutors argued he had shown "utter disregard for human life" The court also heard emotional testimony from dozens of survivors and families of victims during sentencing Brooks represented himself in the four-week trial, often interrupting court proceedings In her sentencing on Wednesday, Judge Jennifer Dorow said Brooks had chosen "a path of evil" I want to remove 1st 3 sentences from above string. If the words are ending with . or : or - It means one sentence. How to do that? A: First, you need to escape double quotes, your string is not valid like this. Second, are you sure the words ending with "-" mean the end of a sentence? In your example you would split "car-ramming" and "four-week". Anyway, you can split the string into sentences like this: sentences = news.replace('-','.').split('.') You would get a list of your sentences like [sentence1, sentence2...], which you could slice to remove the first 3: new_sentences = sentences[3:] You can then get a string from the new list like this: ". ".join(new_sentences) The only problem is that you would replace all the "-" for ".", but as I said, I think the "-" does not actually indicate the end of a sentence in your example. Another way would be to find your separators 3 times, and remove all the characters to that position: for i in range(3): point = news.find(".") minus = news.find("-") index = min(point,minus) if index == -1: index = max(point,minus) news = news[index+1:]
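A variant of the same idea using re.split, so '.', ':' and '-' are all treated as boundaries in one pass (the caveat above still applies: '-' will also split hyphenated words such as car-ramming):

import re

news = "Waukesha trial: US man sentenced to life for car-ramming attack - BBC News ..."  # shortened sample

parts = [p.strip() for p in re.split(r"[.:\-]", news) if p.strip()]  # split on . : and -
remaining = ". ".join(parts[3:])                                     # drop the first three "sentences"
print(remaining)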
Remove number of lines from string in python
for example below is the string news="Waukesha trial: US man sentenced to life for car-ramming attack - BBC NewsBBC HomepageSkip to contentAccessibility HelpYour accountHomeNewsSportReelWorklifeTravelFutureMore menuMore menuSearch BBCHomeNewsSportReelWorklifeTravelFutureCultureMusicTVWeatherSoundsClose menuBBC NewsMenuHomeWar in UkraineCoronavirusClimateVideoWorldAsiaUKBusinessTechScienceMoreStoriesEntertainment ArtsHealthWorld News TVIn PicturesReality CheckNewsbeatLong ReadsUS CanadaUS Elections 2022ResultsWaukesha trial: US man sentenced to life for car-ramming attackPublished11 hours agoSharecloseShare pageCopy linkAbout sharingThis video can not be playedTo play this video you need to enable JavaScript in your browser Media caption, Watch: Emotional testimony from victims familiesA Wisconsin judge has sentenced a man who killed six people and injured 62 by driving through a Christmas parade last year to life in prison A jury convicted Darrell Brooks last month after prosecutors argued he had shown "utter disregard for human life" The court also heard emotional testimony from dozens of survivors and families of victims during sentencing Brooks represented himself in the four-week trial, often interrupting court proceedings In her sentencing on Wednesday, Judge Jennifer Dorow said Brooks had chosen "a path of evil" I want to remove 1st 3 sentences from above string. If the words are ending with . or : or - It means one sentence. How to do that?
[ "First, you need to escape double quotes, your string is not valid like this.\nSecond, are you sure the words ending with \"-\" mean the end of a sentence? In your example you would split \"car-ramming\" and \"four-week\". Anyway, you can split the string into sentences like this:\nsentences = news.replace('-','.').split('.')\n\nYou would get a list of your sentences like [sentence1, sentence2...], which you could slice to remove the first 3:\nnew_sentences = sentences[3:]\n\nYou can then get a string from the new list like this:\n\". \".join(new_sentences)\n\nThe only problem is that you would replace all the \"-\" for \".\", but as I said, I think the \"-\" does not actually indicate the end of a sentence in your example.\nAnother way would be to find your separators 3 times, and remove all the characters to that position:\nfor i in range(3):\n point = news.find(\".\")\n minus = news.find(\"-\")\n index = min(point,minus)\n if index == -1:\n index = max(point,minus)\n news = news[index+1:]\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074476357_python_python_3.x.txt
Q: Python k8s client: is there a way to use wildcards on job-name query, when calling list_namespaced_pod? Getting all pods in a given namespace takes too long, so I'm trying somehow to reduce it. I don't know whether using such filtration may be faster or not, but I at least must try - if it's at all possible... Tried stuff like: label_selector='job-name=my-agent-*' or label_selector='job-name=my-agent-%' and many other variations with no success. Full code: from kubernetes import config, client from kubernetes.client import CoreV1Api, V1PodList config.load_kube_config() v1: CoreV1Api = client.CoreV1Api() pods_list: V1PodList = v1.list_namespaced_pod( 'dev-pool', label_selector='job-name=my-agent-*' ) Is it even possible? A: The use of wildcards is not documented. But, since you can pass a series of label_selectors, does the following approach work out for you? # Example. Acquire job and agent names per your project requirements selectors = [("job-name-1","my-agent-a"),("job-name-2","my-agent-b")] # Job and agent names as string literal # 'job_name_1=my_agent_a,job_name_2=my_agent_b' label_selectors = ','.join('='.join(map(str, x)) for x in selectors) Then: ... config.load_kube_config() v1: CoreV1Api = client.CoreV1Api() pods_list: V1PodList = v1.list_namespaced_pod( 'dev-pool', label_selector=label_selectors )
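If relabelling the jobs is not an option, a fallback consistent with the limitation noted above is to list once and filter the prefix client-side (a sketch; it still transfers every pod in the namespace, so it will not make the API call itself any faster):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("dev-pool")
matching = [
    pod for pod in pods.items
    if (pod.metadata.labels or {}).get("job-name", "").startswith("my-agent-")
]
print([pod.metadata.name for pod in matching])

To actually cut the transfer cost you need something the server can evaluate, which in practice means exact label values (as in the selector string above) or an extra shared label, for example an agent-group label that all of the my-agent-* jobs carry.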
Python k8s client: is there a way to use wildcards on job-name query, when calling list_namespaced_pod?
Getting all pods in a given namespace takes too long, so I'm trying somehow to reduce it. I don't know whether using such filtration may be faster or not, but I at least must try - if it's at all possible... Tried stuff like: label_selector='job-name=my-agent-*' or label_selector='job-name=my-agent-%' and many other variations with no success. Full code: from kubernetes import config, client from kubernetes.client import CoreV1Api, V1PodList config.load_kube_config() v1: CoreV1Api = client.CoreV1Api() pods_list: V1PodList = v1.list_namespaced_pod( 'dev-pool', label_selector='job-name=my-agent-*' ) Is it even possible?
[ "The use of wildcards is not documented. But, since you can pass a series of label_selectors, does the following approach work out for you?\n# Example. Acquire job and agent names per your project requirements\nselectors = [(\"job-name-1\",\"my-agent-a\"),(\"job-name-2\",\"my-agent-b\")]\n\n# Job and agent names as string literal\n# 'job_name_1=my_agent_a,job_name_2=my_agent_b'\nlabel_selectors = ','.join('='.join(map(str, x)) for x in selectors)\n\nThen:\n...\nconfig.load_kube_config()\nv1: CoreV1Api = client.CoreV1Api()\npods_list: V1PodList = v1.list_namespaced_pod(\n 'dev-pool',\n label_selector=label_selectors\n)\n\n" ]
[ 0 ]
[]
[]
[ "client", "kubectl", "kubernetes", "python" ]
stackoverflow_0074395159_client_kubectl_kubernetes_python.txt
Q: Dividing one dataframe by another in python using pandas with float values I have two separate data frames named df1 and df2 as shown below: Scaffold Position Ref_Allele_Count Alt_Allele_Count Coverage_Depth Alt_Allele_Frequency 0 1 11 7 51 58 0.879310 1 1 16 20 95 115 0.826087 2 2 9 9 33 42 0.785714 3 2 12 86 51 137 0.372263 4 2 67 41 98 139 0.705036 5 3 8 0 0 0 0.000000 6 4 99 32 26 58 0.448276 7 4 101 100 24 124 0.193548 8 4 115 69 26 95 0.273684 9 5 6 40 57 97 0.587629 10 5 19 53 87 140 0.621429 Scaffold Position Ref_Allele_Count Alt_Allele_Count Coverage_Depth Alt_Allele_Frequency 0 1 11 7 64 71 0.901408 1 1 16 10 90 100 0.900000 2 2 9 79 86 165 0.521212 3 2 12 12 73 85 0.858824 4 2 67 54 96 150 0.640000 5 3 8 0 0 0 0.000000 6 4 99 86 28 114 0.245614 7 4 101 32 25 57 0.438596 8 4 115 97 16 113 0.141593 9 5 6 86 43 129 0.333333 10 5 19 59 27 86 0.313953 I have already found the sum values for df1 and df2 in Allele_Count and Coverage Depth but I need to divide the resulting Alt_Allele_Count and Coverage_Depth of both df's with one another to fine the total allele frequency(AF). I have tried dividing the two variable and got the error message : TypeError: float() argument must be a string or a number, not 'DataFrame' when I tried to convert them to floats and this table when I laft it as a df: Alt_Allele_Count Coverage_Depth 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN 4 NaN NaN 5 NaN NaN 6 NaN NaN 7 NaN NaN 8 NaN NaN 9 NaN NaN 10 NaN NaN My code so far: import csv import pandas as pd import numpy as np df1 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_1.csv') df2 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_2.csv') print(df1) print(df2) Ref_Allele_Count = (df1[['Ref_Allele_Count']] + df2[['Ref_Allele_Count']]) print(Ref_Allele_Count) Alt_Allele_Count = (df1[['Alt_Allele_Count']] + df2[['Alt_Allele_Count']]) print(Alt_Allele_Count) Coverage_Depth = (df1[['Coverage_Depth']] + df2[['Coverage_Depth']]).astype(float) print(Coverage_Depth) AF = Alt_Allele_Count / Coverage_Depth print(AF) A: This can be fixed by only using once set of brackets '[]' while referring to a column in a pandas df, rather than 2. import csv import pandas as pd import numpy as np df1 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_1.csv') df2 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_2.csv') print(df1) print(df2) # note that I changed your double brackets ([["col_name"]]) to single (["col_name"]) # this results in pd.Series objects instead of pd.DataFrame objects Ref_Allele_Count = (df1['Ref_Allele_Count'] + df2['Ref_Allele_Count']) print(Ref_Allele_Count) Alt_Allele_Count = (df1['Alt_Allele_Count'] + df2['Alt_Allele_Count']) print(Alt_Allele_Count) Coverage_Depth = (df1['Coverage_Depth'] + df2['Coverage_Depth']).astype(float) print(Coverage_Depth) AF = Alt_Allele_Count / Coverage_Depth print(AF) A: The error stems from the difference between a pandas series and a dataframe. Series are 1 dimensional structures like a singular column, while dataframes are 2d objects like tables. Series added together make a new series of values while dataframes added together make something a lot less usable. Taking slices of a dataframe can either result in a series or dataframe object depending on how you do it: df['column_name'] -> Series df[['column_name', 'column_2']] -> Dataframe So in the line: Ref_Allele_Count = (df1[['Ref_Allele_Count']] + df2[['Ref_Allele_Count']]) df1[['Ref_Allele_Count']] becomes a singular column dataframe rather than a series. 
Ref_Allele_Count = (df1['Ref_Allele_Count'] + df2['Ref_Allele_Count']) should return the correct result here. The same goes for the rest of the columns you're adding together.
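A tiny self-contained illustration of the single- versus double-bracket difference (the numbers are invented, not the original CSV data):

import pandas as pd

df1 = pd.DataFrame({"Alt_Allele_Count": [51, 95], "Coverage_Depth": [58, 115]})
df2 = pd.DataFrame({"Alt_Allele_Count": [64, 90], "Coverage_Depth": [71, 100]})

print(type(df1["Alt_Allele_Count"]))    # Series
print(type(df1[["Alt_Allele_Count"]]))  # DataFrame

alt = df1["Alt_Allele_Count"] + df2["Alt_Allele_Count"]                # Series: 115, 185
depth = (df1["Coverage_Depth"] + df2["Coverage_Depth"]).astype(float)  # Series: 129.0, 215.0
print(alt / depth)                                                     # element-wise frequencies, no NaN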
Dividing one dataframe by another in python using pandas with float values
I have two separate data frames named df1 and df2 as shown below: Scaffold Position Ref_Allele_Count Alt_Allele_Count Coverage_Depth Alt_Allele_Frequency 0 1 11 7 51 58 0.879310 1 1 16 20 95 115 0.826087 2 2 9 9 33 42 0.785714 3 2 12 86 51 137 0.372263 4 2 67 41 98 139 0.705036 5 3 8 0 0 0 0.000000 6 4 99 32 26 58 0.448276 7 4 101 100 24 124 0.193548 8 4 115 69 26 95 0.273684 9 5 6 40 57 97 0.587629 10 5 19 53 87 140 0.621429 Scaffold Position Ref_Allele_Count Alt_Allele_Count Coverage_Depth Alt_Allele_Frequency 0 1 11 7 64 71 0.901408 1 1 16 10 90 100 0.900000 2 2 9 79 86 165 0.521212 3 2 12 12 73 85 0.858824 4 2 67 54 96 150 0.640000 5 3 8 0 0 0 0.000000 6 4 99 86 28 114 0.245614 7 4 101 32 25 57 0.438596 8 4 115 97 16 113 0.141593 9 5 6 86 43 129 0.333333 10 5 19 59 27 86 0.313953 I have already found the sum values for df1 and df2 in Allele_Count and Coverage Depth but I need to divide the resulting Alt_Allele_Count and Coverage_Depth of both df's with one another to fine the total allele frequency(AF). I have tried dividing the two variable and got the error message : TypeError: float() argument must be a string or a number, not 'DataFrame' when I tried to convert them to floats and this table when I laft it as a df: Alt_Allele_Count Coverage_Depth 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN 4 NaN NaN 5 NaN NaN 6 NaN NaN 7 NaN NaN 8 NaN NaN 9 NaN NaN 10 NaN NaN My code so far: import csv import pandas as pd import numpy as np df1 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_1.csv') df2 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_2.csv') print(df1) print(df2) Ref_Allele_Count = (df1[['Ref_Allele_Count']] + df2[['Ref_Allele_Count']]) print(Ref_Allele_Count) Alt_Allele_Count = (df1[['Alt_Allele_Count']] + df2[['Alt_Allele_Count']]) print(Alt_Allele_Count) Coverage_Depth = (df1[['Coverage_Depth']] + df2[['Coverage_Depth']]).astype(float) print(Coverage_Depth) AF = Alt_Allele_Count / Coverage_Depth print(AF)
[ "This can be fixed by only using once set of brackets '[]' while referring to a column in a pandas df, rather than 2.\nimport csv\nimport pandas as pd\nimport numpy as np\n\ndf1 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_1.csv')\ndf2 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_2.csv')\nprint(df1)\nprint(df2)\n\n\n# note that I changed your double brackets ([[\"col_name\"]]) to single ([\"col_name\"])\n# this results in pd.Series objects instead of pd.DataFrame objects\nRef_Allele_Count = (df1['Ref_Allele_Count'] + df2['Ref_Allele_Count'])\nprint(Ref_Allele_Count)\n\nAlt_Allele_Count = (df1['Alt_Allele_Count'] + df2['Alt_Allele_Count'])\nprint(Alt_Allele_Count)\n\nCoverage_Depth = (df1['Coverage_Depth'] + df2['Coverage_Depth']).astype(float)\nprint(Coverage_Depth)\n\nAF = Alt_Allele_Count / Coverage_Depth\n\nprint(AF)\n\n", "The error stems from the difference between a pandas series and a dataframe. Series are 1 dimensional structures like a singular column, while dataframes are 2d objects like tables. Series added together make a new series of values while dataframes added together make something a lot less usable.\nTaking slices of a dataframe can either result in a series or dataframe object depending on how you do it:\ndf['column_name'] -> Series\ndf[['column_name', 'column_2']] -> Dataframe\n\nSo in the line:\nRef_Allele_Count = (df1[['Ref_Allele_Count']] + df2[['Ref_Allele_Count']])\n\ndf1[['Ref_Allele_Count']] becomes a singular column dataframe rather than a series.\nRef_Allele_Count = (df1['Ref_Allele_Count'] + df2['Ref_Allele_Count'])\n\nShould return the correct result here. Same goes for the rest of the columns you're adding together.\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074476652_dataframe_pandas_python_python_3.x.txt
Q: TypeError: 'type' object is not subscriptable when indexing in to a dictionary I have multiple files that I need to load so I'm using a dict to shorten things. When I run I get a TypeError: 'type' object is not subscriptable Error. How can I get this to work? m1 = pygame.image.load(dict[1]) m2 = pygame.image.load(dict[2]) m3 = pygame.image.load(dict[3]) dict = {1: "walk1.png", 2: "walk2.png", 3: "walk3.png"} playerxy = (375,130) window.blit(m1, (playerxy)) A: Normally Python throws NameError if the variable is not defined: >>> d[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'd' is not defined However, you've managed to stumble upon a name that already exists in Python. Because dict is the name of a built-in type in Python you are seeing what appears to be a strange error message, but in reality it is not. The type of dict is a type. All types are objects in Python. Thus you are actually trying to index into the type object. This is why the error message says that the "'type' object is not subscriptable." >>> type(dict) <type 'type'> >>> dict[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'type' object is not subscriptable Note that you can blindly assign to the dict name, but you really don't want to do that. It's just going to cause you problems later. >>> dict = {1:'a'} >>> type(dict) <class 'dict'> >>> dict[1] 'a' The true source of the problem is that you must assign variables prior to trying to use them. If you simply reorder the statements of your question, it will almost certainly work: d = {1: "walk1.png", 2: "walk2.png", 3: "walk3.png"} m1 = pygame.image.load(d[1]) m2 = pygame.image.load(d[2]) m3 = pygame.image.load(d[3]) playerxy = (375,130) window.blit(m1, (playerxy)) A: you should update to python >= 3.9 and everything will work well A: When I stumbled across this error, I had this function: def trainByDistribution(xs: pd.DataFrame, ys: pd.DataFrame, step) -> tuple[float]: My idea was to create a function that takes two pandas dataframes and an integer and would return a tuple of floating-pointing numbers. Like other answers have stated, in Python everything is objects, even classes themselves. Classes are in turn blueprint objects that can be used to generate new objects, and consequently classes can be assigned to variables, passed as arguments and programmatically constructed with type() function. class Person(object): def __init__(self, name, age): self.name = name self.age = age def __str__(self): return str(name) + str(age) This class is equivalent to this: def __init__(self, name, age): self.age = age self.name = name def __str__(self): return str(name) + str(age) Person = type("Person, ("object",), {"__init__": __init__, "__str__": __str__}) This error "object is not subscriptable" appears when you pass to a function an object that doesn't support accessing values by indexing (doesn't overload the [] operator). Since the type of all classes is <class "type">: >>> type(Person) <class "type"> then type object is not subscriptable means you pass class instead of an actual object. This could be more tricky though, in my function above I passed valid dataframes, and my error appeared because I attempted to annotate the return type as tuple[float], and while this syntax should be legal to my knowledge, the interpreter understood this expression as if I wanted to index the tuple class itself. 
Conclusions This error appears when: you pass a class instead of an actual object to a function argument; you use the class name to index anything, including cases where a variable has been given (and therefore shadows) a class name. You can take a look at this tutorial by mCoding to find out more about classes as objects.
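For the annotation example that triggered the error, an equivalent signature that also runs on Python 3.7/3.8 could look like this (the body is a placeholder, since the original training logic is not shown):

from typing import Tuple

import pandas as pd

# typing.Tuple works on any Python 3 version. Alternatively, "from __future__ import annotations"
# (3.7+) defers annotation evaluation, so the built-in tuple[...] syntax stops raising at import time.
def train_by_distribution(xs: pd.DataFrame, ys: pd.DataFrame, step: int) -> Tuple[float, float]:
    return 0.0, float(step)  # placeholder return values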
TypeError: 'type' object is not subscriptable when indexing in to a dictionary
I have multiple files that I need to load so I'm using a dict to shorten things. When I run I get a TypeError: 'type' object is not subscriptable Error. How can I get this to work? m1 = pygame.image.load(dict[1]) m2 = pygame.image.load(dict[2]) m3 = pygame.image.load(dict[3]) dict = {1: "walk1.png", 2: "walk2.png", 3: "walk3.png"} playerxy = (375,130) window.blit(m1, (playerxy))
[ "Normally Python throws NameError if the variable is not defined:\n>>> d[0]\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nNameError: name 'd' is not defined\n\nHowever, you've managed to stumble upon a name that already exists in Python.\nBecause dict is the name of a built-in type in Python you are seeing what appears to be a strange error message, but in reality it is not.\nThe type of dict is a type. All types are objects in Python. Thus you are actually trying to index into the type object. This is why the error message says that the \"'type' object is not subscriptable.\"\n>>> type(dict)\n<type 'type'>\n>>> dict[0]\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: 'type' object is not subscriptable\n\nNote that you can blindly assign to the dict name, but you really don't want to do that. It's just going to cause you problems later.\n>>> dict = {1:'a'}\n>>> type(dict)\n<class 'dict'>\n>>> dict[1]\n'a'\n\nThe true source of the problem is that you must assign variables prior to trying to use them. If you simply reorder the statements of your question, it will almost certainly work:\nd = {1: \"walk1.png\", 2: \"walk2.png\", 3: \"walk3.png\"}\nm1 = pygame.image.load(d[1])\nm2 = pygame.image.load(d[2])\nm3 = pygame.image.load(d[3])\nplayerxy = (375,130)\nwindow.blit(m1, (playerxy))\n\n", "you should update to python >= 3.9\nand everything will work well\n", "When I stumbled across this error, I had this function:\ndef trainByDistribution(xs: pd.DataFrame, ys: pd.DataFrame, step) -> tuple[float]:\n\nMy idea was to create a function that takes two pandas dataframes and an integer and would return a tuple of floating-pointing numbers.\nLike other answers have stated, in Python everything is objects, even classes themselves. Classes are in turn blueprint objects that can be used to generate new objects, and consequently classes can be assigned to variables, passed as arguments and programmatically constructed with type() function.\nclass Person(object):\n def __init__(self, name, age):\n self.name = name\n self.age = age\n def __str__(self):\n return str(name) + str(age)\n\nThis class is equivalent to this:\ndef __init__(self, name, age):\n self.age = age\n self.name = name\ndef __str__(self):\n return str(name) + str(age)\nPerson = type(\"Person, (\"object\",), {\"__init__\": __init__,\n \"__str__\": __str__})\n\nThis error \"object is not subscriptable\" appears when you pass to a function an object that doesn't support accessing values by indexing (doesn't overload the [] operator). Since the type of all classes is <class \"type\">:\n>>> type(Person)\n<class \"type\">\n\nthen type object is not subscriptable means you pass class instead of an actual object. This could be more tricky though, in my function above I passed valid dataframes, and my error appeared because I attempted to annotate the return type as tuple[float], and while this syntax should be legal to my knowledge, the interpreter understood this expression as if I wanted to index the tuple class itself.\nConclusions\nThis error appears when:\n\nyou pass class instead of an actual object to a function argument;\nyou use the class name to index anything, including naming variables with class names.\n\nYou can take a look at this tutorial by mCoding to find out more about classes as objects.\n" ]
[ 72, 37, 0 ]
[]
[]
[ "dictionary", "python", "python_3.x" ]
stackoverflow_0026920955_dictionary_python_python_3.x.txt
Q: Visual Studio Code syntax highlighting not working I am using Visual Studio Code (VSC) as my IDE. My computer just updated to Catalina 10.15.2 (19C57) and since the update, now VSC is not highlighting syntax errors. The extensions I have seem to be working and it recognizes my miniconda python environment. Is there a solution for this yet? I was avoiding Catalina as I know it has caused lots of errors, but now that I was forced to install it I need a solution as I love VSC. A: I also had the same problem for typescript react files. Tried many things and nothing worked. Finally I checked the extensions I've installed for typescript react. Disabling JavaScript and TypeScript Nightly extension worked for me A: In my case, the Catalina installation didn't remove my Python installation. After checking as suggested by @Brett Cannon in his comment, the update to Catalina uninstalled some extensions from VS Code. These are not available in the VS Code extension Marketplace anymore, so there must be an issue regarding compatibility. I fixed it after I opened my command palette (Command + Shift + p) and typed python: select linter. Then selected pylint, selected the install with conda option, Close/Open VS Code and now it's working(though it's still not shown in my extensions section in VS Code). It's necessary to point out that you will have to install pylint in every Python environment you are using. In my case I have multiple Conda environments. A: It's very specific but for me it was a missing semicolon in my css (styled-component). I use styled-components in react and it didn't throw an error for missing semicolon but highlighting was suddenly gone. I had given up and left it that way until I came up with the solution quite by accident. A: If you were using the global install of Python then that was removed in Catalina which would break your virtual environment. A new install of Python and recreating the virtual environment should fix things. A: Had similar issue on new vscode setup - my problem was rather that eslint warnings are not being highlighted, only errors. After opening my eslint setup for the project - .eslintrc.js file, saw message saying that eslint needed permission accessing some files, which I did by clicking the lightbulb next to module.exports and hitting accept button.
Visual Studio Code syntax highlighting not working
I am using Visual Studio Code (VSC) as my IDE. My computer just updated to Catalina 10.15.2 (19C57) and since the update, now VSC is not highlighting syntax errors. The extensions I have seem to be working and it recognizes my miniconda python environment. Is there a solution for this yet? I was avoiding Catalina as I know it has caused lots of errors, but now that I was forced to install it I need a solution as I love VSC.
[ "I also had the same problem for typescript react files. Tried many things and nothing worked. Finally I checked the extensions I've installed for typescript react. Disabling JavaScript and TypeScript Nightly extension worked for me\n", "In my case, the Catalina installation didn't remove my Python installation.\nAfter checking as suggested by @Brett Cannon in his comment, the update to Catalina uninstalled some extensions from VS Code. These are not available in the VS Code extension Marketplace anymore, so there must be an issue regarding compatibility. I fixed it after I opened my command palette (Command + Shift + p) and typed python: select linter. Then selected pylint, selected the install with conda option, Close/Open VS Code and now it's working(though it's still not shown in my extensions section in VS Code). It's necessary to point out that you will have to install pylint in every Python environment you are using. In my case I have multiple Conda environments.\n", "It's very specific but for me it was a missing semicolon in my css (styled-component). I use styled-components in react and it didn't throw an error for missing semicolon but highlighting was suddenly gone.\nI had given up and left it that way until I came up with the solution quite by accident.\n", "If you were using the global install of Python then that was removed in Catalina which would break your virtual environment. A new install of Python and recreating the virtual environment should fix things.\n", "Had similar issue on new vscode setup - my problem was rather that eslint warnings are not being highlighted, only errors.\nAfter opening my eslint setup for the project - .eslintrc.js file, saw message saying that eslint needed permission accessing some files, which I did by clicking the lightbulb next to module.exports and hitting accept button.\n" ]
[ 9, 2, 2, 0, 0 ]
[]
[]
[ "macos_catalina", "python", "syntax_highlighting", "visual_studio_code" ]
stackoverflow_0059775038_macos_catalina_python_syntax_highlighting_visual_studio_code.txt
Q: How to transform payload data after it comes in using Pydantic I have a payload that comes in which has two parameters. One of the parameters is a long string which contains more parameters. Something like this param1%param2%param3. I am using FastAPI and Pydantic BaseModel to get that data and validate it, however since I am using it in other places I also want to transform it and store it in an object so I can access it later without having to transform it when I need to. Something like PayloadObject.param1. from fastapi import FastAPI from pydantic import BaseModel class Payload(BaseModel): string_params: str #param1%param2%param3 second_param: dict @validator(string_params) def string_params_validator(cls, strings_params): #validation stuff @validator(second_param) def second_param(cls, second_param): #validation stuff app = FastAPI() @app.post("/my_route") async def post_my_route(payload: Payload): # want to have transformed payload around here func(payload) What would be the best way to go about that using pydantic? I am just thinking of making a class that transforms this information on __init__ without using BaseModel. So after I get that data from the request and validate it I run it through this class and get a format that I am happy with. class NewPayload: def __init__(self, payload: Payload): # do transformations so i end up with self.param1 = param1 self.param2 = param2 self.param3 = param3 self.second_param = second_param A: If this payload structure is specific to this route it's a good idea to transform it directly in your route def. The structure you gave for NewPayload will not work if the number of param isn't always the same. example 1: from typing import List from fastapi import FastAPI from pydantic import BaseModel class Payload(BaseModel): string_params: str #param1%param2%param3 second_param: dict @validator(string_params) def string_params_validator(cls, strings_params): #validation stuff @validator(second_param) def second_param(cls, second_param): #validation stuff app = FastAPI() @app.post("/my_route") async def post_my_route(payload: Payload): params: List[str] = payload.string_params.split("%") # params = ["param1", "param2", "param3"] # Do something with params func(payload) Another idea: not the best since you accept the data in list format also, you can add a validator to stop this behavior but it will modify the doc from typing import List from fastapi import FastAPI from pydantic import BaseModel class Payload(BaseModel): string_params: Union[str, List[str]] #param1%param2%param3 second_param: dict @validator(string_params) def string_params_validator(cls, strings_params): string_params = strings_params.split("%") return string_params @validator(string_params) def params_to_list(cls, strings_params): #validation stuff @validator(second_param) def second_param(cls, second_param): #validation stuff app = FastAPI() @app.post("/my_route") async def post_my_route(payload: Payload): func(payload) You can use a second pydantic model with the second solution to only accept str in input and cast your first model into the other. 
from typing import List from fastapi import FastAPI from pydantic import BaseModel class Payload(BaseModel): string_params: str #param1%param2%param3 second_param: dict @validator(string_params) def params_to_list(cls, strings_params): #validation stuff @validator(second_param) def second_param(cls, second_param): #validation stuff class Payload1(BaseModel): string_params: Union[str, List[str]] second_param: dict @validator(string_params) def string_params_validator(cls, strings_params): string_params = strings_params.split("%") return string_params app = FastAPI() @app.post("/my_route") async def post_my_route(payload: Payload): params: Payload1 = Payload1(**Payload.dict()) func(payload) In the end the cleaner solution would be to make string_params a list of str and not a simple str since you will always need to convert it to list
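A compact, runnable distillation of that suggestion (pydantic v1-style validator with the field name passed as a string; the model and field names are illustrative):

from typing import List, Union

from pydantic import BaseModel, validator

class PayloadIn(BaseModel):
    string_params: Union[str, List[str]]  # accepts "param1%param2%param3" or an already-split list
    second_param: dict

    @validator("string_params")
    def split_params(cls, v):
        return v.split("%") if isinstance(v, str) else v

payload = PayloadIn(string_params="param1%param2%param3", second_param={"x": 1})
print(payload.string_params)  # ['param1', 'param2', 'param3']

Used as a FastAPI request body, the same validation runs automatically, so the route function receives the already-split list.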
How to transform payload data after it comes in using Pydantic
I have a payload that comes in which has two parameters. One of the parameters is a long string which contains more parameters. Something like this param1%param2%param3. I am using FastAPI and Pydantic BaseModel to get that data and validate it, however since I am using it in other places I also want to transform it and store it in an object so I can access it later without having to transform it when I need to. Something like PayloadObject.param1. from fastapi import FastAPI from pydantic import BaseModel class Payload(BaseModel): string_params: str #param1%param2%param3 second_param: dict @validator(string_params) def string_params_validator(cls, strings_params): #validation stuff @validator(second_param) def second_param(cls, second_param): #validation stuff app = FastAPI() @app.post("/my_route") async def post_my_route(payload: Payload): # want to have transformed payload around here func(payload) What would be the best way to go about that using pydantic? I am just thinking of making a class that transforms this information on __init__ without using BaseModel. So after I get that data from the request and validate it I run it through this class and get a format that I am happy with. class NewPayload: def __init__(self, payload: Payload): # do transformations so i end up with self.param1 = param1 self.param2 = param2 self.param3 = param3 self.second_param = second_param
[ "If this payload structure is specific to this route it's a good idea to transform it directly in your route def.\nThe structure you gave for NewPayload will not work if the number of param isn't always the same.\nexample 1:\nfrom typing import List\n\nfrom fastapi import FastAPI\nfrom pydantic import BaseModel\n\nclass Payload(BaseModel):\n string_params: str #param1%param2%param3\n second_param: dict\n\n @validator(string_params)\n def string_params_validator(cls, strings_params):\n #validation stuff\n\n @validator(second_param)\n def second_param(cls, second_param):\n #validation stuff\n\napp = FastAPI()\n\n@app.post(\"/my_route\")\nasync def post_my_route(payload: Payload):\n params: List[str] = payload.string_params.split(\"%\")\n # params = [\"param1\", \"param2\", \"param3\"]\n # Do something with params\n func(payload)\n\nAnother idea:\nnot the best since you accept the data in list format also, you can add a validator to stop this behavior but it will modify the doc\nfrom typing import List\n\nfrom fastapi import FastAPI\nfrom pydantic import BaseModel\n\nclass Payload(BaseModel):\n string_params: Union[str, List[str]] #param1%param2%param3\n second_param: dict\n\n @validator(string_params)\n def string_params_validator(cls, strings_params):\n string_params = strings_params.split(\"%\")\n return string_params\n \n @validator(string_params)\n def params_to_list(cls, strings_params):\n #validation stuff\n\n @validator(second_param)\n def second_param(cls, second_param):\n #validation stuff\n\napp = FastAPI()\n\n@app.post(\"/my_route\")\nasync def post_my_route(payload: Payload):\n func(payload)\n\nYou can use a second pydantic model with the second solution to only accept str in input and cast your first model into the other.\nfrom typing import List\n\nfrom fastapi import FastAPI\nfrom pydantic import BaseModel\n\nclass Payload(BaseModel):\n string_params: str #param1%param2%param3\n second_param: dict\n\n @validator(string_params)\n def params_to_list(cls, strings_params):\n #validation stuff\n\n @validator(second_param)\n def second_param(cls, second_param):\n #validation stuff\n\nclass Payload1(BaseModel):\n string_params: Union[str, List[str]]\n second_param: dict\n\n @validator(string_params)\n def string_params_validator(cls, strings_params):\n string_params = strings_params.split(\"%\")\n return string_params\n\napp = FastAPI()\n\n@app.post(\"/my_route\")\nasync def post_my_route(payload: Payload):\n params: Payload1 = Payload1(**Payload.dict())\n func(payload)\n\nIn the end the cleaner solution would be to make string_params a list of str and not a simple str since you will always need to convert it to list\n" ]
[ 0 ]
[]
[]
[ "fastapi", "pydantic", "python" ]
stackoverflow_0074452086_fastapi_pydantic_python.txt
Q: Any depth nested dict to pandas dataframe I've been fighting to go from a nested dictionary of depth D to a pandas DataFrame. I've tried with recursive function, like the following one, but my problem is that when I'm iterating over a KEY, I don't know what was the pervious key. I've also tried with json.normalize, pandas from dict but I always end up with dots in the columns... Example code: def iterate_dict(d, i = 2, cols = []): for k, v in d.items(): # missing here how to check for the previous key # so that I can create an structure to create the dataframe. if type(v) is dict: print('this is k: ', k) if i % 2 == 0: cols.append(k) i+=1 iterate_dict(v, i, cols) else: print('this is k2: ' , k, ': ', v) iterate_dict(test2) This is an example of how my dictionary looks like: # example 2 test = { 'column-gender': { 'male': { 'column-country' : { 'FRENCH': { 'column-class': [0,1] }, ('SPAIN','ITALY') : { 'column-married' : { 'YES': { 'column-class' : [0,1] }, 'NO' : { 'column-class' : 2 } } } } }, 'female': { 'column-country' : { ('FRENCH', 'SPAIN') : { 'column-class' : [[1,2],'#'] }, 'REST-OF-VALUES': { 'column-married' : '*' } } } } } And this is how I want the dataframe to look like: Any suggestion is welcome :) A: I'm not sure how that data going to be consistent but for just understanding we can do something like the below, remember this is just a little demo on the approach of how we can handle it, you can spend more time to polish it up accordingly: I added comments on each step for better understanding. import pandas as pd def nested_dict_to_df(data, columns=None): if columns are None: columns = [] # if the data is a dictionary, then we need to iterate over the keys if isinstance(data, dict): for key, value in data.items(): columns.append(key) yield from nested_dict_to_df(value, columns) # recursive call columns.pop() # remove the last element else: yield columns + [data] df = pd.DataFrame(nested_dict_to_df(data)) # Drop column [0, 2, 4, 6] from the dataframe that are not needed for the final output df = df.drop(df.columns[[0, 2, 4, 6]], axis=1) header = ["GENDER", "COUNTRY", "CLASS", "MARRIED"] # Desired header df.columns = header print(df) Output: GENDER COUNTRY CLASS MARRIED 0 male FRENCH [0, 1] None 1 male (SPAIN, ITALY) YES [0, 1] 2 male (SPAIN, ITALY) NO 2 3 female (FRENCH, SPAIN) [[1, 2], #] None 4 female REST-OF-VALUES * None A: If the column-keys are consistently prefixed with column-, you can create a recursive function: def data_to_df(data): rec_out = [] def dict_to_rec(d, curr_row={}): for k, v in d.items(): if 'column-' in k: # definition of a column if isinstance(v, dict): for val, nested_dict in v.items(): dict_to_rec(nested_dict, dict(curr_row, **{k[7:]: val})) else: rec_out.append(dict(curr_row, **{k[7:]: v})) dict_to_rec(data) return pd.DataFrame(rec_out) print(data_to_df(test)) Edit: removing unnecessary variable and argument Output: gender country class married 0 male FRENCH [0, 1] NaN 1 male (SPAIN, ITALY) YES [0, 1] 2 male (SPAIN, ITALY) NO 2 3 female (FRENCH, SPAIN) [[1, 2], #] NaN 4 female REST-OF-VALUES * NaN
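A compact generator version of the same depth-first idea, shown mainly to make the None check and the per-branch path copy explicit (the final column names are illustrative, and the even/odd column dropping mirrors the trick used in the first answer):

import pandas as pd

def walk(node, path=None):
    path = [] if path is None else path           # fresh list per call, checked with "is None"
    if isinstance(node, dict):
        for key, value in node.items():
            yield from walk(value, path + [key])  # path + [key] copies, so branches stay independent
    else:
        yield path + [node]

rows = list(walk(test))                 # 'test' is the nested dict defined in the question
df = pd.DataFrame(rows).iloc[:, 1::2]   # keep the value columns, drop the 'column-*' label columns
df.columns = ["GENDER", "COUNTRY", "CLASS", "MARRIED"]
print(df)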
Any depth nested dict to pandas dataframe
I've been fighting to go from a nested dictionary of depth D to a pandas DataFrame. I've tried with recursive function, like the following one, but my problem is that when I'm iterating over a KEY, I don't know what was the pervious key. I've also tried with json.normalize, pandas from dict but I always end up with dots in the columns... Example code: def iterate_dict(d, i = 2, cols = []): for k, v in d.items(): # missing here how to check for the previous key # so that I can create an structure to create the dataframe. if type(v) is dict: print('this is k: ', k) if i % 2 == 0: cols.append(k) i+=1 iterate_dict(v, i, cols) else: print('this is k2: ' , k, ': ', v) iterate_dict(test2) This is an example of how my dictionary looks like: # example 2 test = { 'column-gender': { 'male': { 'column-country' : { 'FRENCH': { 'column-class': [0,1] }, ('SPAIN','ITALY') : { 'column-married' : { 'YES': { 'column-class' : [0,1] }, 'NO' : { 'column-class' : 2 } } } } }, 'female': { 'column-country' : { ('FRENCH', 'SPAIN') : { 'column-class' : [[1,2],'#'] }, 'REST-OF-VALUES': { 'column-married' : '*' } } } } } And this is how I want the dataframe to look like: Any suggestion is welcome :)
[ "I'm not sure how that data going to be consistent but for just understanding we can do something like the below, remember this is just a little demo on the approach of how we can handle it, you can spend more time to polish it up accordingly:\nI added comments on each step for better understanding.\nimport pandas as pd\n\n\ndef nested_dict_to_df(data, columns=None):\n\n if columns are None:\n columns = []\n\n # if the data is a dictionary, then we need to iterate over the keys\n if isinstance(data, dict):\n\n for key, value in data.items():\n columns.append(key)\n yield from nested_dict_to_df(value, columns) # recursive call\n columns.pop() # remove the last element\n else:\n yield columns + [data]\n\n\ndf = pd.DataFrame(nested_dict_to_df(data))\n\n# Drop column [0, 2, 4, 6] from the dataframe that are not needed for the final output\ndf = df.drop(df.columns[[0, 2, 4, 6]], axis=1)\n\nheader = [\"GENDER\", \"COUNTRY\", \"CLASS\", \"MARRIED\"] # Desired header\ndf.columns = header\n\nprint(df)\n\nOutput:\n GENDER COUNTRY CLASS MARRIED\n0 male FRENCH [0, 1] None\n1 male (SPAIN, ITALY) YES [0, 1]\n2 male (SPAIN, ITALY) NO 2\n3 female (FRENCH, SPAIN) [[1, 2], #] None\n4 female REST-OF-VALUES * None\n\n", "If the column-keys are consistently prefixed with column-, you can create a recursive function:\ndef data_to_df(data):\n rec_out = []\n def dict_to_rec(d, curr_row={}):\n for k, v in d.items():\n if 'column-' in k: # definition of a column\n if isinstance(v, dict):\n for val, nested_dict in v.items():\n dict_to_rec(nested_dict, dict(curr_row, **{k[7:]: val}))\n else:\n rec_out.append(dict(curr_row, **{k[7:]: v}))\n dict_to_rec(data)\n return pd.DataFrame(rec_out)\n\nprint(data_to_df(test))\n\nEdit: removing unnecessary variable and argument\nOutput:\n gender country class married\n0 male FRENCH [0, 1] NaN\n1 male (SPAIN, ITALY) YES [0, 1]\n2 male (SPAIN, ITALY) NO 2\n3 female (FRENCH, SPAIN) [[1, 2], #] NaN\n4 female REST-OF-VALUES * NaN\n\n" ]
[ 1, 1 ]
[]
[]
[ "dictionary", "json", "nested", "pandas", "python" ]
stackoverflow_0074475332_dictionary_json_nested_pandas_python.txt
Q: Python PIL/Image generate grid of images of different width/height I came across the following example: from PIL import Image def image_grid(imgs, rows, cols): assert len(imgs) == rows*cols w, h = imgs[0].size grid = Image.new('RGB', size=(cols*w, rows*h)) grid_w, grid_h = grid.size for i, img in enumerate(imgs): grid.paste(img, box=(i%cols*w, i//cols*h)) return grid grid = image_grid(imgs, rows=3, cols=3) Works great, but what I need is a way to generate a grid of images of different width/height. I haven't been able to find any such example while searching. What I've been able to do is to iterate all images, get the max image dimensions encountered and change: grid.paste(img, box=(i%cols*w, i//cols*h)) to: grid.paste(img, box=(i%cols*maxWidth, i//cols*maxHeight)) But that ends up wasting a lot of space. Perhaps to avoid this the max width/height of each column/row would have to be calculated instead, but all the ways I've tried so far don't get the job done. Your help is much appreciated. A: I was able to solve it in the following way: First, we iterate all images and gather the max dimensions for each column and row. size = 3 maxWidth = {} maxHeight = {} for i, img in enumerate(imgs): col = i%size row = i//size if col not in maxWidth: maxWidth[col] = 0 if row not in maxHeight: maxHeight[row] = 0 if img.size[0] > maxWidth[col]: maxWidth[col] = img.size[0] if img.size[1] > maxHeight[row]: maxHeight[row] = img.size[1] We then calculate their sums in order to determine the image size. x = sum(maxWidth for maxWidth in maxWidth.values()) y = sum(maxHeight for maxHeight in maxHeight.values()) grid = Image.new('RGB', size=(x, y)) Then we iterate all images again and position them based on the max dimensions that we gathered. We reset and increment x1 and y1 accordingly depending on where we are. for i, img in enumerate(imgs): col = i%size row = i//size if col == 0: x1 = 0 else: x1 += maxWidth[col-1] if row == 0: y1 = 0 elif col == 0: y1 += maxHeight[row-1] grid.paste(img, box=(x1, y1)) This way, there is no need to allocate more space to an image than its column/row requires.
Python PIL/Image generate grid of images of different width/height
I came across the following example: from PIL import Image def image_grid(imgs, rows, cols): assert len(imgs) == rows*cols w, h = imgs[0].size grid = Image.new('RGB', size=(cols*w, rows*h)) grid_w, grid_h = grid.size for i, img in enumerate(imgs): grid.paste(img, box=(i%cols*w, i//cols*h)) return grid grid = image_grid(imgs, rows=3, cols=3) Works great, but what I need is a way to generate a grid of images of different width/height. I haven't been able to find any such example while searching. What I've been able to do is to iterate all images, get the max image dimensions encountered and change: grid.paste(img, box=(i%cols*w, i//cols*h)) to: grid.paste(img, box=(i%cols*maxWidth, i//cols*maxHeight)) But that ends up wasting a lot of space. Perhaps to avoid this the max width/height of each column/row would have to be calculated instead, but all the ways I've tried so far don't get the job done. Your help is much appreciated.
[ "I was able to solve it in the following way:\nFirst, we iterate all images and gather the max dimensions for each column and row.\nsize = 3\nmaxWidth = {}\nmaxHeight = {}\n\nfor i, img in enumerate(imgs):\n col = i%size\n row = i//size\n\n if col not in maxWidth:\n maxWidth[col] = 0\n\n if row not in maxHeight:\n maxHeight[row] = 0\n\n if img.size[0] > maxWidth[col]:\n maxWidth[col] = img.size[0]\n\n if img.size[1] > maxHeight[row]:\n maxHeight[row] = img.size[1]\n\nWe then calculate their sums in order to determine the image size.\nx = sum(maxWidth for maxWidth in maxWidth.values())\ny = sum(maxHeight for maxHeight in maxHeight.values())\n\ngrid = Image.new('RGB', size=(x, y))\n\nThen we iterate all images again and position them based on the max dimensions that we gathered. We reset and increment x1 and y1 accordingly depending on where we are.\nfor i, img in enumerate(imgs):\n col = i%size\n row = i//size\n\n if col == 0:\n x1 = 0\n else:\n x1 += maxWidth[col-1]\n\n if row == 0:\n y1 = 0\n elif col == 0:\n y1 += maxHeight[row-1]\n \n grid.paste(img, box=(x1, y1))\n\nThis way, there is no need to allocate more space to an image than its column/row requires.\n" ]
[ 0 ]
[]
[]
[ "image", "python", "python_imaging_library" ]
stackoverflow_0074454460_image_python_python_imaging_library.txt
Q: I need help making calculations from entry I am very new to python and quite interested in learning it. Tried googling an answer for this but couldn't find one. I'm doing a project for myself to get the price of the fuel costs (daily, monthly and yearly costs). Fuel consumption (liter/100km) / 100 * kilometers driven (per day) * fuel cost (per liter) I am trying to get the data from entry, then calculating it and then displaying the results in labels. It seemed like an easy beginner practice but ended up being a bit too difficult because I couldn't find on google something that I would understand. Here's what I got to: from tkinter import * root = Tk() root.title("Consumption calculator") root.geometry("300x300") root.minsize(300, 500) root.maxsize(300, 500) #Label label1 = Label(root, text = "Fuel consumption", pady=20,padx=60) label1.grid(row=0) label2 = Label(root, text = "Current fuel cost", pady=20,padx=60) label2.grid(row=2) label3 = Label(root, text = "Kilometers per day", pady=20,padx=60) label3.grid(row=4) akkuna4 = Label(root, text = " ", pady=10,padx=60) akkuna4.grid(row=5) #Entry txt1 = Entry(root, width=5, state=NORMAL) txt1.grid(row=1) txt2 = Entry(root, width=5, state=NORMAL) txt2.grid(row=3) txt3 = Entry(root, width=5, state=NORMAL) txt3.grid(row=5) #Button btn = Button(text="Calculate", font=("Arial",15,"bold")) btn.grid(row=7) # root.mainloop() I wanted to get the results to update in real-time whenever you typed in entry but I don't know if it's possible so I just resulted to using the button. Please let me know if it's possible to get them to calculate in real time without pressing a button! (It probably is) I was trying out commands and "var-stuff" (I don't even know what that really does yet). I couldn't figure out how it would be possible. A: you can use a variable that will be connected to your entries: In the example, the code prints to the screen the values taken from the entry. import tkinter as tk def func(*args): # *args allows passing a variable number of non-keyword arguments to the # function label1.configure(text=var.get()) root = tk.Tk() var = tk.StringVar() entry1 = tk.Entry(root, textvariable=var) # var points to the entry1 input. label1 = tk.Label(root) entry1.pack() var.trace('w', func) # w for write, and trace for following var label1.pack() root.mainloop() Now your job is to implement a similar algorithm in your code! Good luck :)
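Applied to the fuel-cost case, a sketch along the same lines: three StringVars are traced, and a result label is recomputed on every keystroke (the 30-day month and 365-day year are simplifying assumptions):

import tkinter as tk

root = tk.Tk()

consumption = tk.StringVar()  # liters per 100 km
price = tk.StringVar()        # fuel cost per liter
km_per_day = tk.StringVar()   # kilometers driven per day

result = tk.Label(root, text="")

def recalc(*args):
    try:
        daily = float(consumption.get()) / 100 * float(km_per_day.get()) * float(price.get())
        result.config(text=f"Daily: {daily:.2f}  Monthly: {daily * 30:.2f}  Yearly: {daily * 365:.2f}")
    except ValueError:
        result.config(text="Enter numbers in all three fields")

for var, caption in [(consumption, "Fuel consumption (l/100km)"),
                     (price, "Current fuel cost"),
                     (km_per_day, "Kilometers per day")]:
    tk.Label(root, text=caption).pack()
    tk.Entry(root, textvariable=var).pack()
    var.trace_add("write", recalc)  # same mechanism as trace('w', ...), using the newer API

result.pack()
root.mainloop()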
I need help making calculations from entry
I am very new to python and quite interested in learning it. Tried googling an answer for this but couldn't find one. I'm doing a project for myself to get the price of the fuel costs (daily, monthly and yearly costs). Fuel consumption (liter/100km) / 100 * kilometers driven (per day) * fuel cost (per liter) I am trying to get the data from entry, then calculating it and then displaying the results in labels. It seemed like an easy beginner practice but ended up being a bit too difficult because I couldn't find on google something that I would understand. Here's what I got to: from tkinter import * root = Tk() root.title("Consumption calculator") root.geometry("300x300") root.minsize(300, 500) root.maxsize(300, 500) #Label label1 = Label(root, text = "Fuel consumption", pady=20,padx=60) label1.grid(row=0) label2 = Label(root, text = "Current fuel cost", pady=20,padx=60) label2.grid(row=2) label3 = Label(root, text = "Kilometers per day", pady=20,padx=60) label3.grid(row=4) akkuna4 = Label(root, text = " ", pady=10,padx=60) akkuna4.grid(row=5) #Entry txt1 = Entry(root, width=5, state=NORMAL) txt1.grid(row=1) txt2 = Entry(root, width=5, state=NORMAL) txt2.grid(row=3) txt3 = Entry(root, width=5, state=NORMAL) txt3.grid(row=5) #Button btn = Button(text="Calculate", font=("Arial",15,"bold")) btn.grid(row=7) # root.mainloop() I wanted to get the results to update in real-time whenever you typed in entry but I don't know if it's possible so I just resulted to using the button. Please let me know if it's possible to get them to calculate in real time without pressing a button! (It probably is) I was trying out commands and "var-stuff" (I don't even know what that really does yet). I couldn't figure out how it would be possible.
[ "you can use a variable that will be connected to your entries:\nIn the example, the code prints to the screen the values taken from the entry.\nimport tkinter as tk\n\n\ndef func(*args):\n # *args allows passing a variable number of non-keyword arguments to the \n # function \n label1.configure(text=var.get())\n\n\nroot = tk.Tk()\nvar = tk.StringVar()\nentry1 = tk.Entry(root, textvariable=var) # var points to the entry1 input.\nlabel1 = tk.Label(root)\n\nentry1.pack()\nvar.trace('w', func) # w for write, and trace for following var\nlabel1.pack()\nroot.mainloop()\n\nNow your job is to implement a similar algorithm in your code!\nGood luck :)\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074467285_python_tkinter.txt
Q: Find keyword from a list in a page using BeautifulSoup Using Beautiful Soup, I'd like to detect porn keywords (that I get by concatenating two lists of porn keywords, one in French, the other in English) in a web page. Here's my code (from BeautifulSoup find two different strings):

proxy_support = urllib.request.ProxyHandler(my_proxies)
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)

lst_porn_keyword_eng = str(urllib.request.urlopen("http://www.cs.cmu.edu/~biglou/resources/bad-words.txt").read()).split('\\n')
# the textfile starts with a LF, deleting it.
if lst_porn_keyword_eng[0] == "b\"":
    del lst_porn_keyword_eng[0]
lst_porn_keyword_fr = str(urllib.request.urlopen("https://raw.githubusercontent.com/darwiin/french-badwords-list/master/list.txt").read()).split('\\n')
lst_porn_keyword = lst_porn_keyword_eng + lst_porn_keyword_fr

lst_porn_keyword_found = []
with urllib.request.urlopen("http://www.example.com") as page_to_check:
    soup = BeautifulSoup(page_to_check, "html5lib")
    for node in soup.find_all(text=lambda text: any(x in text for x in lst_porn_keyword)):
        lst_porn_keyword_found.append(str(node.text))
return lst_porn_keyword_found

This code runs correctly, but porn keywords are found even when they shouldn't be. For instance, the text of the second node found on "http://www.example.com" is

This domain is for use in illustrative examples in documents. You may use this domain in literature without prior coordination or asking for permission.

and none of these words are in lst_porn_keyword.

A: I replaced your lambda function with

def testfn(text):
    elms = list([x for x in lst_porn_keyword if x in text])
    if len(elms) > 0:
        print(f"found words {elms} in {text}")
    return len(elms)>0

Calling soup.find_all(text=testfn) will result in the following output:

found words ['color', 'gin', '"'] in `
    body {
        background-color: #f0f0f2;
        margin: 0;
        padding: 0;
        font-family: -apple-system, system-ui, BlinkMacSystemFont, "Segoe UI", "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;

    }
    div {
        width: 600px;
        margin: 5em auto;
        padding: 2em;
        background-color: #fdfdff;
        border-radius: 0.5em;
        box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);
    }
    a:link, a:visited {
        color: #38488f;
        text-decoration: none;
    }
    @media (max-width: 700px) {
        div {
            margin: 0 auto;
            width: auto;
        }
    }
    `
found words ['cum', 'ho'] in `This domain is for use in illustrative examples in documents. You may use this
    domain in literature without prior coordination or asking for permission.`

I think your problem is that the in keyword also works for partial words. E.g.:

"cum" in "document"
> True

A: Your soup.find_all() isn't matching only the page's visible text; one of the nodes it matches is the page's CSS:

    body {
        background-color: #f0f0f2;
        margin: 0;
        padding: 0;
        font-family: -apple-system, system-ui, BlinkMacSystemFont, "Segoe UI", "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;

    }
    div {
        width: 600px;
        margin: 5em auto;
        padding: 2em;
        background-color: #fdfdff;
        border-radius: 0.5em;
        box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);
    }
    a:link, a:visited {
        color: #38488f;
        text-decoration: none;
    }
    @media (max-width: 700px) {
        div {
            margin: 0 auto;
            width: auto;
        }
    }

The words "color" and "gin" and the character " appear in lst_porn_keyword and in that CSS, which triggered your detection.
Partial words like "gin" in "margin" are also problematic with soup.find_all(), so consider using regular expressions with word delimiters, like the example below:

import regex as re

for word in lst_porn_keyword:
    result = re.findall(fr"\W{word}\W", node)
    if len(result) > 0:
        print(f"detected in text: {word}")
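As a concrete illustration of the word-boundary idea, here is a hedged sketch of how it could be folded back into the original scan. It uses the standard-library re module instead of the third-party regex package, a placeholder keyword list, \b boundaries rather than \W, and it strips <style>/<script> nodes first; all of these are illustrative choices rather than part of the answers above:

import re
import urllib.request

from bs4 import BeautifulSoup

# Placeholder list; in the real script it is built from the two downloaded word lists.
lst_porn_keyword = ["cum", "gin", "color"]

# One compiled pattern with \b word boundaries, so "document" no longer matches "cum".
pattern = re.compile(
    r"\b(" + "|".join(re.escape(w) for w in lst_porn_keyword if w) + r")\b",
    re.IGNORECASE,
)

lst_porn_keyword_found = []
with urllib.request.urlopen("http://www.example.com") as page_to_check:
    soup = BeautifulSoup(page_to_check, "html5lib")
    # Drop <style> and <script> nodes so CSS/JS text cannot trigger false matches.
    for tag in soup(["style", "script"]):
        tag.decompose()
    # A compiled regex can be passed directly to find_all as the string filter.
    for node in soup.find_all(string=pattern):
        lst_porn_keyword_found.append(str(node))

print(lst_porn_keyword_found)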
Find keyword from a list in a page using BeautifulSoup
Using Beautiful Soup, I'd like to detect porn keywords (that i get by concatening two lists of porn-keywords (one in french, the other in english) in a web page. Here's my code (from BeautifulSoup find two different strings): proxy_support = urllib.request.ProxyHandler(my_proxies) opener = urllib.request.build_opener(proxy_support) urllib.request.install_opener(opener) lst_porn_keyword_eng = str(urllib.request.urlopen("http://www.cs.cmu.edu/~biglou/resources/bad-words.txt").read()).split('\\n') # the textfile starts with a LF, deleting it. if lst_porn_keyword_eng[0] == "b\"": del lst_porn_keyword_eng[0] lst_porn_keyword_fr = str(urllib.request.urlopen("https://raw.githubusercontent.com/darwiin/french-badwords-list/master/list.txt").read()).split('\\n') lst_porn_keyword = lst_porn_keyword_eng + lst_porn_keyword_fr lst_porn_keyword_found = [] with urllib.request.urlopen("http://www.example.com") as page_to_check: soup = BeautifulSoup(page_to_check, "html5lib") for node in soup.find_all(text=lambda text: any(x in text for x in lst_porn_keyword)): lst_porn_keyword_found.append(str(node.text)) return lst_porn_keyword_found This code runs correctly but porn keyword are found even if they shouldn't be. For instance, the text of the second node found in "http://www.example.com" is This domain is for use in illustrative examples in documents. You may use this domain in literature without prior coordination or asking for permission. And none of these words are in lst_porn_keyword
[ "I replaced your lambda function with\ndef testfn(text):\n elms = list([x for x in lst_porn_keyword if x in text])\n if len(elms) > 0:\n print(f\"found words {elms} in {text}\")\n return len(elms)>0\n\ncalling soup.find_all(text=testfn) will result in the following output:\nfound words ['color', 'gin', '\"'] in `\n body {\n background-color: #f0f0f2;\n margin: 0;\n padding: 0;\n font-family: -apple-system, system-ui, BlinkMacSystemFont, \"Segoe UI\", \"Open Sans\", \"Helvetica Neue\", Helvetica, Arial, sans-serif;\n \n }\n div {\n width: 600px;\n margin: 5em auto;\n padding: 2em;\n background-color: #fdfdff;\n border-radius: 0.5em;\n box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);\n }\n a:link, a:visited {\n color: #38488f;\n text-decoration: none;\n }\n @media (max-width: 700px) {\n div {\n margin: 0 auto;\n width: auto;\n }\n }\n `\nfound words ['cum', 'ho'] in `This domain is for use in illustrative examples in documents. You may use this\n domain in literature without prior coordination or asking for permission.`\n\nI think your problem is that the in keyword also works for partial words. E.g.:\n\"cum\" in \"document\"\n> True\n\n", "Your soup.find_all() doesn't return the html but the css instead:\n body {\n background-color: #f0f0f2;\n margin: 0;\n padding: 0;\n font-family: -apple-system, system-ui, BlinkMacSystemFont, \"Segoe UI\", \"Open Sans\", \"Helvetica Neue\", Helvetica, Arial, sans-serif;\n \n }\n div {\n width: 600px;\n margin: 5em auto;\n padding: 2em;\n background-color: #fdfdff;\n border-radius: 0.5em;\n box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);\n }\n a:link, a:visited {\n color: #38488f;\n text-decoration: none;\n }\n @media (max-width: 700px) {\n div {\n margin: 0 auto;\n width: auto;\n }\n }\n \n\nThe words \"color\", \"gin\", and the character \" appear in lst_porn_keyword and on the css, which triggered your detection.\nPartial words like \"gin\" in \"margin\" are also problematic using soup.findall(), consider using regular expressions with word delimiters like the example below:\nimport regex as re\n\nfor word in lst_porn_keyword:\n result = re.findall(fr\"\\W{word}\\W\", node)\n if len(result) > 0:\n print(f\"detected in text: {word}\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "beautifulsoup", "python", "web_scraping" ]
stackoverflow_0074476605_beautifulsoup_python_web_scraping.txt
Q: Try-except with NameError and TypeError Can you please help me with the following? I am trying to catch two exceptions: 1) TypeError and 2) NameError. I use the following code, which estimates the average:

def calculate_average(number_list):
    try:
        if type(number_list) is not list:
            raise ValueError("You should pass list to this function")
    except ValueError as err:
        print(err)
        return

    try:
        average = sum(number_list)/len(number_list)
    except TypeError:
        print('List should contain numbers')
        return
    except NameError:
        print('List should contain numbers')
        return

    return average

The code works fine for:

print(calculate_average([1, 2, 3]))
print(calculate_average([1, 2, 'a']))

But when I use:

print(calculate_average([1, 2, a]))

I get the following error, which was supposed to be captured by except:

NameError: name 'a' is not defined

Can you please help me understand the issue? (I use Spyder.)

A: In Spyder, if you look in the trail of recent tracebacks, the error is raised by the site-package ...lib\site-packages\spyder_kernels\py3compat.py", line 356, in compat_exec(code, globals, locals), as NameError: name 'a' is not defined, because it looks for a declaration of the variable a in the script (which is absent) before the function is ever executed.

A: The NameError on a is raised in the calling scope, not when you attempt to use number_list. You would need to catch it there:

try:
    print(calculate_average([1, 2, a]))
except NameError:
    print("Variable not defined")

However, you shouldn't be catching NameErrors at all. When they arise in testing, you should figure out what undefined name you are trying to use, and make sure it is defined. Like most exceptions, this isn't intended for flow control or dealing with easily fixed problems at runtime.

Rather than littering your code with run-time type checks, consider using type hints and static typecheckers like mypy to catch code that would produce a TypeError at runtime.

# list[int] might be too restrictive, but this is a simplified
# example
def calculate_average(number_list: list[int]):

    average = sum(number_list)/len(number_list)
    return average

The only error left here that mypy wouldn't catch is the attempt to divide by zero if you pass an empty list. That you can check for and handle. You can raise a ValueError, or just decide that the average of an empty list is 0 by definition.

def calculate_average(number_list: list[int]):
    if not number_list:
        # raise ValueError("Cannot average an empty list")
        return 0

    return sum(number_list)/len(number_list)

This is preferable to

try:
    return sum(number_list)/len(number_list)
except ZeroDivisionError:
    ...

because it anticipates the problem before you go to the trouble of calling sum and len.
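As a complementary sketch, the original function could also validate its input explicitly instead of waiting for sum() to raise. The numbers.Number check and the exact error messages below are illustrative choices, not taken from the answers above:

from numbers import Number


def calculate_average(number_list):
    # Validate up front rather than catching TypeError after the fact.
    if not isinstance(number_list, list):
        raise TypeError("You should pass a list to this function")
    if not number_list:
        raise ValueError("Cannot average an empty list")
    if not all(isinstance(x, Number) for x in number_list):
        raise TypeError("List should contain numbers only")
    return sum(number_list) / len(number_list)


try:
    print(calculate_average([1, 2, 3]))    # 2.0
    print(calculate_average([1, 2, 'a']))  # rejected before sum() is called
except (TypeError, ValueError) as err:
    print(err)

Note that print(calculate_average([1, 2, a])) would still raise NameError at the call site, exactly as described above, because a is evaluated before the function is ever entered.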
Try-except with NameError and TypeError
Can you please help me with the following. I am trying to catch two exceptions: 1) TypeError and 2)NameError. I use the following code below that estimates the average: def calculate_average(number_list): try: if type(number_list) is not list: raise ValueError("You should pass list to this function") except ValueError as err: print(err) return try: average = sum(number_list)/len(number_list) except TypeError: print('List should contain numbers') return except NameError: print('List should contain numbers') return return average The code works fine for: print(calculate_average([1, 2, 3])) print(calculate_average([1, 2, 'a'])) But when I use: print(calculate_average([1, 2, a])) I have the following error that was supposed to be captured by except: NameError: name 'a' is not defined Can you please help me with understanding the issue? (I use Spyder)
[ "In spyder, if you look in the trail of recent tracebacks, the error is raised by site-package ...lib\\site-packages\\spyder_kernels\\py3compat.py\", line 356, in compat_exec(code, globals, locals) , as NameError: name 'a' is not defined since it searches for a variable declaration of a in the script (which is absent), before executing the function.\n", "The NameError on a is raised in the calling scope, not when you attempt to use number_list. You would need to catch it there:\ntry:\n print(calculate_average([1, 2, a]))\nexcept NameError:\n print(\"Variable not defined\")\n\nHowever, you shouldn't be catching NameErrors at all. When they arise in testing, you should figure out what undefined name you are trying to use, and make sure it is defined. Like most exceptions, this isn't intended for flow control or dealing with easily fixed problems at runtime.\n\nRather than littering your code with run-time type check, consider using type hints and static typecheckers like mypy to catch code that would produce a TypeError at runtime.\n# list[int] might be too restrictive, but this is a simplified\n# example\ndef calculate_average(number_list: list[int]):\n \n average = sum(number_list)/len(number_list)\n return average\n\nThey only error left here that mypy wouldn't catch is the attempt to divide by zero if you pass an empty list. That you can check for and handle. You can raise a ValueError, or just decide that the average of an empty list is 0 by definition.\ndef calculate_average(number_list: list[int]):\n if not number_list:\n # raise ValueError(\"Cannot average an empty list\")\n return 0\n\n return sum(number_list)/len(number_list)\n\nThis is preferable to\ntry:\n return sum(number_list)/len(number_list)\nexcept ZeroDivisionError:\n ...\n\nbecause it anticipates the problem before you go to the trouble of calling sum and len.\n" ]
[ 0, 0 ]
[]
[]
[ "error_handling", "python", "try_except" ]
stackoverflow_0074476772_error_handling_python_try_except.txt